BRN Discussion Ongoing

Originally posted in a NASA thread by @uiux but now has greater significance:

Proposal Summary​

Proposal Information


Proposal Number:
21-2- H6.22-1743

Phase 1 Contract #:
80NSSC21C0233

Subtopic Title:
Deep Neural Net and Neuromorphic Processors for In-Space Autonomy and Cognition

Proposal Title:
Neuromorphic Enhanced Cognitive Radio

Small Business Concern


Firm: Intellisense Systems, Inc.

Address:

21041 South Western Avenue, Torrance, CA 90501

Phone:
(310) 320-1827

Principal Investigator:

Name:
Mr. Wenjian Wang Ph.D.

E-mail:
wwang@intellisenseinc.com


Address:

21041 South Western Avenue, Torrance, CA 90501-1727

Phone: (310) 320-1827

Business Official:


Name: Selvy Utama

E-mail:
notify@intellisenseinc.com

Address:
21041 South Western Avenue, Torrance, CA 90501-1727

Phone: (310) 320-1827

Summary Details:

Estimated Technology Readiness Level (TRL):
Begin: 3
End: 4


Technical Abstract (Limit 2000 characters, approximately 200 words):
Intellisense Systems, Inc. proposes in Phase II to advance development of a Neuromorphic Enhanced Cognitive Radio (NECR) device to enable autonomous space operations on platforms constrained by size, weight, and power (SWaP). NECR is a low-size, -weight, and -power (-SWaP) cognitive radio built on the open-source framework, i.e., GNU Radio and RFNoC™, with new enhancements in environment learning and improvements in transmission quality and data processing. Due to the high efficiency of spiking neural networks and their low-latency, energy-efficient implementation on neuromorphic computing hardware, NECR can be integrated into SWaP-constrained platforms in spacecraft and robotics, to provide reliable communication in unknown and uncharacterized space environments such as the Moon and Mars. In Phase II, Intellisense will improve the NECR system for cognitive communication capabilities accelerated by neuromorphic hardware. We will refine the overall NECR system architecture to achieve cognitive communication capabilities accelerated by neuromorphic hardware, on which a special focus will be the mapping, optimization, and implementation of smart sensing algorithms on the neuromorphic hardware. The Phase II smart sensing algorithm library will include Kalman filter, Carrier Frequency Offset estimation, symbol rate estimation, energy detection- and matched filter-based spectrum sensing, signal-to-noise ratio estimation, and automatic modulation identification.

These algorithms will be implemented on COTS neuromorphic computing hardware such as Akida processor from BrainChip, and then integrated with radio frequency modules and radiation-hardened packaging into a Phase II prototype.

At the end of Phase II, the prototype will be delivered to NASA for testing and evaluation, along with a plan describing a path to meeting fault tolerance requirements for mission deployment and API documents for integration with CubeSat, SmallSat, and 'ROVER' for flight demonstration.

Potential NASA Applications (Limit 1500 characters, approximately 150 words):
NECR technology will have many NASA applications due to its low-SWaP and low-cost cognitive sensing capability. It can be used to enhance the robustness and reliability of space communication and networking, especially cognitive radio devices. NECR can be directly transitioned to the Human Exploration and Operations Mission Directorate (HEOMD) Space Communications and Navigation (SCaN) Program, CubeSat, SmallSat, and 'ROVER' to address the needs of the Cognitive Communications project.

Potential Non-NASA Applications (Limit 1500 characters, approximately 150 words):
NECR technology’s low-SWaP and low-cost cognitive sensing capability will have many non-NASA applications. The NECR technology can be integrated into commercial communication systems to enhance cognitive sensing and communication capability. Automakers can integrate the NECR technology into automobiles for cognitive sensing and communication.

Duration: 24 months

FF

A bonus is that NECR is now in Phase II, as previously posted.
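For anyone curious about what one of the sensing algorithms listed in that abstract actually does: energy-detection spectrum sensing is conceptually just comparing band energy against the noise floor. A toy Python sketch follows (my own illustration, not Intellisense's implementation; the threshold factor and signal shapes are arbitrary choices):

```python
import math
import random

def energy_detect(samples, noise_power, threshold_factor=2.0):
    """Declare the band occupied if the average sample energy
    exceeds a multiple of the known noise floor."""
    energy = sum(s * s for s in samples) / len(samples)
    return energy > threshold_factor * noise_power

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(4096)]                  # empty band
signal = [n + 3 * math.sin(0.1 * i) for i, n in enumerate(noise)]  # band with a carrier

print(energy_detect(noise, noise_power=1.0))   # False: energy sits near the noise floor
print(energy_detect(signal, noise_power=1.0))  # True: the sinusoid adds ~4.5x noise power
```

The real system would of course do this (and the fancier estimators) in spiking form on the neuromorphic hardware; this only shows the underlying decision rule.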


Post in thread 'BRN Discussion 2022' https://thestockexchange.com.au/threads/brn-discussion-2022.1/post-81384
 
  • Like
  • Love
Reactions: 16 users

Boab

I wish I could paint like Vincent

(Quote of @uiux's NECR proposal summary, reproduced in full above.)
Looks like Intellisense do a lot of work with all branches of the US defence forces.
Apologies if already discussed.

 
  • Like
  • Love
Reactions: 19 users

Xray1

Regular
The AGM was held basically two-thirds of the way through the 2nd quarter.

I believe that he was referring to future quarters, meaning, being disclosed in 4C's in late October 2022 and late January 2023 for example, for the next 2 quarters.

If any material contract had taken place in April, May or June 2022, we would have been informed. Maybe I'm wrong, maybe there will be an explosion in revenue, which would be fantastic, but in my opinion it clearly isn't coming in the reported 4C in late July 2022.

I respect your view, I'm wrong about plenty of things, and always happy to admit it.

Regards....Tech :cool:
Thanks for your input and your usual well formulated responses ............. I for one am expecting a "teaser" increase in Co revenue commencing from this upcoming 4C in the region of ~$500k to ~$1 million .......... I hope the funds will start to come in from a current existing client, the likes of "Socionext", to kick things off.
 
  • Like
  • Fire
Reactions: 20 users

TasTroy77

Founding Member
Morning Chippers,

Having a read of a weekend financial newspaper...

FULL PAGE advert from Mercedes-Benz.

* with a picture of the single panel glass dashboard floating above a Mercedes and a young lass looking up at the panel in wonderment.

* the caption at bottom of picture reads....

As innovative as it is intuitive: the Mercedes-Benz MBUX Hyperscreen with artificial intelligence.
INNOVATIONS BY
( Mercedes-Benz logo ).

* At the top of the advert , in very small print....
Overseas model shown. Vehicle shown not currently available in Australia.

Apart from what we all know here, there is no mention of Brainchip in the advert.

Getting closer by the day.

Regards,
Esq.
I think we are all pretty certain that AKIDA IP isn't in the current sales line of Mercedes-Benz.
If that were the case, there would have to be an announcement of a material contract between MB and Brainchip.
As previously discussed, we are speculating that the artificial intelligence MBUX system, as demonstrated by the EQXX model, MAY start to be commercially available in 2024, although nothing is officially confirmed.

This company's technology is revolutionary and integration takes time. A degree of patience is required before we see the tech commercialisation and subsequent revenue.
 
  • Like
  • Love
Reactions: 26 users
Looks like Intellisense do a lot of work with all branches of the US defence forces.
Apologies if already discussed.

B

Go here and start looking at diff contracts and diff keyword searches.



[two screenshot attachments]
 
  • Like
  • Love
  • Fire
Reactions: 17 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
VictorG said:
Es gibt ein Gerücht, dass die Nasa die Akida-Flagge auf dem Mars Rover hissen wird

Translated means - there is a rumour that NASA will hoist the Akida flag on the Mars rover
(close enough to it)

And just like that, a rumour was born!

First we'll have to work out what an Akida flag looks like and whether it would be more prudent to opt for a BrainChip flag instead. And then, of course, all necessary arrangements will have to be made for its ironing to take place in outer space prior to its hoisting. One small step for man, one giant crease-less BrainChip flag on Mars for mankind. ⛳🥳
 
  • Haha
  • Like
  • Love
Reactions: 31 users

Boab

I wish I could paint like Vincent
  • Like
  • Fire
Reactions: 15 users

equanimous

Norse clairvoyant shapeshifter goddess
(Quoting Bravo's flag post above.)

[image attachment]
 
  • Haha
  • Like
  • Love
Reactions: 41 users
The following covers the AIoT market and does not mention Brainchip by name, but there are two very interesting paragraphs which I have emboldened and partitioned to make them easy to locate:

What’s a Neural microcontroller?​

MAY 30, 2022 BY JEFF SHEPARD

The ability to run neural networks (NNs) on MCUs is growing in importance to support artificial intelligence (AI) and machine learning (ML) in the Internet of Things (IoT) nodes and other embedded edge applications. Unfortunately, running NNs on MCUs is challenging due to the relatively small memory capacities of most MCUs. This FAQ details the memory challenges of running NNs on MCUs and looks at possible system-level solutions. It then presents recently announced MCUs with embedded NN accelerators. It closes by looking at how the Glow machine learning compiler for NNs can help reduce memory requirements.
Running NNs on MCUs (sometimes called tinyML) offers advantages over sending raw data to the cloud for analysis and action. Those advantages include the ability to tolerate poor or even no network connectivity and safeguard data privacy and security. MCU memory capacities are often limited to the main memory of hundreds of KB of SRAM, often less, and byte-addressable Flash of no more than a few MBs for read-only data.
To achieve high accuracy, most NNs require larger memory capacities. The memory needed by a NN includes read-only parameters and so-called feature maps that contain intermediate and final results. It can be tempting to process an NN layer on an MCU in the embedded memory before loading the next layer, but it’s often impractical. A single NN layer’s parameters and feature maps can require up to 100 MB of storage, exceeding the MCU memory size by as much as two orders of magnitude. Recently developed NNs with higher accuracies require even more memory, resulting in a widening gap between the available memory on most MCUs and the memory requirements of NNs (Figure 1).
Figure 1: The available memory on most MCUs is much too small to support the needs of the majority of NNs. (Image: Arxiv)
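That gap can be made concrete with a back-of-envelope estimate for a small, made-up CNN (the layer shapes below are hypothetical, chosen only for illustration):

```python
# (out_channels, in_channels, kernel, out_h, out_w) per conv layer - invented shapes
layers = [
    (32, 3, 3, 112, 112),
    (64, 32, 3, 56, 56),
    (128, 64, 3, 28, 28),
]
BYTES = 4  # 32-bit float weights and activations

# read-only parameters vs. the largest intermediate feature map
params = sum(oc * ic * k * k * BYTES for oc, ic, k, _, _ in layers)
peak_feature_map = max(oc * h * w * BYTES for oc, _, _, h, w in layers)

print(f"parameters: {params / 1024:.0f} KB")                     # ~363 KB
print(f"largest feature map: {peak_feature_map / 1024:.0f} KB")  # ~1568 KB
```

Even this toy network's single largest feature map (~1.5 MB) dwarfs the few hundred KB of SRAM on a typical MCU, which is exactly the mismatch the figure illustrates.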
One solution to address MCU memory limitations is to dynamically swap NN data blocks between the MCU SRAM and a larger external (out-of-core) cache memory. Out-of-core NN implementations can suffer from several limitations, including execution slowdown, storage wear-out, higher energy consumption, and data security concerns. If these concerns can be adequately addressed in a specific application, an MCU can be used to run large NNs with full accuracy and generality.
One approach to out-of-core NN implementation is to split one NN layer into a series of tiles small enough to fit into the MCU memory. This approach has been successfully applied to NN systems on servers where the NN tiles are swapped between the CPU/GPU memory and the server’s memory. Most embedded systems don’t have access to the large memory spaces available on servers. Using memory swapping approaches with MCUs can run into problems using a relatively small external SRAM or an SD card, such as lower SD card durability and reliability, slower execution due to I/O operations, higher energy consumption, and safety and security of out-of-core NN data storage.
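The tiling idea can be sketched in a few lines: split a layer's weight matrix into row tiles small enough for the SRAM budget and swap each tile in before using it. In this sketch `load_tile` is a stand-in for the out-of-core fetch (from external flash, SRAM, or an SD card):

```python
def tiled_matvec(weights, x, tile_rows, load_tile):
    """Compute one fully-connected layer (matrix-vector product)
    tile by tile, so only tile_rows rows are resident at once."""
    y = []
    for start in range(0, len(weights), tile_rows):
        tile = load_tile(weights, start, start + tile_rows)  # swap tile into SRAM
        for row in tile:
            y.append(sum(w * xi for w, xi in zip(row, x)))
    return y

# Toy 4x3 layer processed in tiles of 2 rows; the lambda plays the
# role of reading a tile from external storage.
W = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]
x = [2, 3, 4]
print(tiled_matvec(W, x, 2, lambda m, a, b: m[a:b]))  # [2, 3, 4, 9]
```

The I/O cost of each `load_tile` call is precisely where the slowdown, energy, and wear-out penalties described above come from.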
Another approach to overcoming MCU memory limitations is optimizing the NN itself using techniques such as model compression, parameter quantization, and designing tiny NNs from scratch. These approaches involve tradeoffs in model accuracy, generality, or both. In most cases, the techniques used to fit an NN into the memory space of an MCU result in the NN becoming too inaccurate (< 60% accuracy) or too specialized and not generalized enough (the NN can only detect a few object classes). These challenges can disqualify the use of MCUs where NNs with high accuracy and generality are needed, even if inference delays can be tolerated, such as:
  • NN inference on slowly changing signals such as monitoring crop health by analyzing hourly photos or traffic patterns by analyzing video frames taken every 20-30 minutes
  • Profiling NNs on the device by occasionally running a full-blown NN to estimate the accuracy of long-running smaller NNs
  • Transfer learning includes retraining NNs on MCUs with data collected from deployment every hour or day
NN accelerators embedded in MCUs
Many of the challenges of implementing NNs on MCU are being addressed by MCUs with embedded NN accelerators. These advanced MCUs are an emerging device category that promises to provide designers with new opportunities to develop IoT node and edge ML solutions. For example, an MCU with a hardware-based embedded convolutional neural network (CNN) accelerator enables battery-powered applications to execute AI inferences while spending only microjoules of energy (Figure 2).
Figure 2: Neural MCU block diagram showing the basic MCU blocks (upper left) and the CNN accelerator section (right). (Image: Maxim)
*******************************************************************************************************************************************************
The MCU with an embedded CNN accelerator is a system on chip combining an Arm Cortex-M4 with a RISC-V core that can execute application and control codes as well as drive the CNN accelerator. The CNN engine has a weight storage memory of 442KB and can support 1-, 2-, 4-, and 8-bit weights (supporting networks of up to 3.5 million weights). On the fly, AI network updates are supported by the SRAM-based CNN weight memory structure. The architecture is flexible and allows CNNs to be trained using conventional toolsets such as PyTorch and TensorFlow.
*********************************************************************************************************************************************************
Another MCU supplier has pre-announced a neural processing unit integrated with an Arm Cortex core. The new neural MCU is scheduled to ship later this year and will provide the same level of AI performance as a quad-core processor with an AI accelerator, but at one-tenth the cost and one-twelfth the power consumption.
*********************************************************************************************************************************************************
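The weight-memory figures quoted in the first partitioned paragraph are easy to sanity-check: 442 KB of weight storage at 1-bit precision is roughly the stated 3.5-million-weight ceiling.

```python
weight_memory_bits = 442 * 1024 * 8  # 442 KB of weight SRAM, in bits

# Capacity at each supported weight precision
for bits in (1, 2, 4, 8):
    capacity = weight_memory_bits // bits
    print(f"{bits}-bit weights: up to {capacity / 1e6:.2f} M weights")
# 1-bit gives ~3.62 M, consistent with the quoted "up to 3.5 million"
```

This is also why the quantization tradeoffs discussed earlier matter so much here: dropping from 8-bit to 1-bit weights buys an 8x larger network in the same on-chip memory.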

Additional neural MCUs are expected to emerge in the near future.

Glow for smaller NN memories
Glow (graph lowering) is a machine learning compiler for neural network graphs. It’s available on Github and is designed to optimize the neural network graphs and generate code for various hardware devices. Two versions of Glow are available, one for Ahead of Time (AOT) and one for Just in Time (JIT) compilations. As the names suggest, AOT compilation is performed offline (ahead of time) and generates an object file (bundle) which is later linked with the application code, while JIT compilation is performed at runtime just before the model is executed.
MCUs are available that support AOT compilation using Glow. The compiler converts the neural networks into object files, which the user converts into a binary image for increased performance and a smaller memory footprint than a JIT (runtime) inference engine. In this case, Glow is used as a software back-end for the PyTorch machine learning framework and the ONNX model format (Figure 3).
Figure 3: Example of an AOT compilation flow diagram using Glow. (Image: NXP)
The Glow NN compiler lowers a NN into a two-phase, strongly-typed intermediate representation. Domain-specific optimizations are performed in the first phase, while the second phase performs optimizations focused on specialized back-end hardware features. MCUs are available that combine Arm Cortex-M cores with Cadence Tensilica HiFi 4 DSPs, accelerating NN performance by utilizing the Arm CMSIS-NN and HiFi NN libraries, respectively. Its features include:
  • Lower latency and smaller solution size for edge inference NNs.
  • Accelerate NN applications with CMSIS-NN and Cadence HiFi NN Library
  • Speed time to market using the available software development kit
  • Flexible implementation since Glow is open source with Apache License 2.0
Summary
Running NNs on MCUs is important for IoT nodes and other embedded edge applications, but it can be challenging due to MCU memory limitations. Several approaches have been developed to address memory limitations, including out-of-core designs that swap blocks of NN data between the MCU memory and an external memory and various NN software ‘optimization’ techniques. Unfortunately, these approaches involve tradeoffs between model accuracy and generality, which result in the NN becoming too inaccurate and/or too specialized to be of use in practical applications. The emergence of MCUs with integrated NN accelerators is beginning to address those concerns and enables the development of practical NN implementations for IoT and edge applications. Finally, the availability of the Glow NN compiler gives designers an additional tool for optimizing NN for smaller applications.
 
  • Like
  • Fire
  • Love
Reactions: 54 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Haha
  • Like
  • Fire
Reactions: 37 users
This may help? Looks like we are at least recouping our costs.

BrainChip and VORAGO Technologies Agree to Collaborate through the Akida™ Early Access Program
Agreement to Support Phase I of NASA Program for Radiation-Hardened Neuromorphic Processor

Aliso Viejo, California – September 2, 2020 – BrainChip Holdings Ltd (ASX: BRN), a leading provider of ultra-low power high performance AI technology, today announced that VORAGO Technologies has signed the Akida™ Early Access Program Agreement. The collaboration is intended to support a Phase I NASA program for a neuromorphic processor that meets spaceflight requirements. The BrainChip Early Access Program is available to a select group of customers that require early access to the Akida device, evaluation boards and dedicated support. The EAP agreement includes payments that are intended to offset the Company’s expenses to support partner needs.

The Akida neuromorphic processor is uniquely suited for spaceflight and aerospace applications. The device is a complete neural processor and does not require an external CPU, memory or Deep Learning Accelerator (DLA). Reducing component count, size and power consumption are paramount concerns in spaceflight and aerospace applications. The level of integration and ultra-low power performance of Akida supports these critical criteria. Additionally, Akida provides incremental learning. With incremental learning, new classifiers can be added to the network without retraining the entire network. The benefit in spaceflight and aerospace applications is significant as real-time local incremental learning allows continuous operation when new discoveries or circumstances occur.
As others have mentioned, BRN probably won’t make a lot of money from NASA in the grand scheme of things (short term). It’s not like NASA is going to produce 1 million rovers. 😁

But....

I see the interaction as proving out more use cases and showing the world what is possible at the extreme edge.

While BRN are also showing that we have partnered with one of the most cutting-edge, technically advanced organisations in the world, which provides BRN with a massive tick on its resume to show and prove out this type of partnership. Another reason I think Mercedes were so happy to specifically name that the EQXX car had Akida inside.

Who doesn’t want to be partnered up with the latest edge tech being proven out and implemented by NASA?

While NASA may not end up being huge dollars for BRN in the short term, the exposure (marketing) to the world from the NASA partnership to generate other global deals is priceless.
 
  • Like
  • Fire
  • Love
Reactions: 41 users
I was just reviewing the NVISO slides and trying to work out which company has the most potential upside and growth.

The prices next to the companies are listed in US dollars.

Any suggestions on which company out of that group has the only commercial neuromorphic AI chip heading into this revolutionary and game changing era?

[image attachment]


Exciting times to be a Brainchip shareholder!
 
  • Like
  • Love
  • Fire
Reactions: 37 users
D

Deleted member 118

Guest
(Quoting the post above on the value of the NASA partnership.)
Well said, but I truly think NASA will develop something completely new out of the original using Akida
 
  • Like
  • Fire
Reactions: 11 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Oh dear, please don't shoot the messenger!

PS: Do you think we should email this ding-bat and let him know how much faster the Rover is going to go because of BrainChip?





Why did the BrainChip share price crash 30% in June?​

BrainChip’s shares were sold off in June. Here’s why…


James Mickleboro
Published July 3, 9:45 am AEST

BRN




The BrainChip Holdings Ltd (ASX: BRN) share price had a disappointing month in June.

The semiconductor company’s shares ended the month 30% lower than where they started it.

This was despite BrainChip’s shares being added to the illustrious ASX 200 index during the month.

What happened to the BrainChip share price?​


Investors were selling down the BrainChip share price in June amid broad market weakness. With interest rates increasing to combat rising inflation, this put pressure on equities.

This was particularly the case at the higher risk side of the market, where BrainChip certainly sits.

For example, even after June’s decline, the company has a market capitalisation of over $1.4 billion despite its revenue year to date being just $205,000.

When annualised to $820,000, this means its shares are changing hands for a ridiculous 1700 times revenue. And this is before the company has even proven that it has a market for its Akida neuromorphic processor.
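(For what it's worth, the article's multiple checks out arithmetically, taking its own figures at face value; the ×4 below is my inference that the $205,000 covers one quarter, since that is the only way $205,000 annualises to $820,000.)

```python
ytd_revenue = 205_000                  # revenue year to date, as quoted
annualised = ytd_revenue * 4           # treating the YTD figure as one quarter
market_cap = 1_400_000_000             # "over $1.4 billion", as quoted

print(annualised)                      # 820000, matching the article
print(round(market_cap / annualised))  # ~1707, i.e. the "1700 times revenue"
```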

In light of this, it is no surprise that when the market wobbles, the BrainChip share price tumbles.

What’s next?​


The next 12 months will be very interesting for the BrainChip share price. With the company now commercialising its technology, it will have to let its sales do the talking rather than its press releases or podcasts.

Which may not be as easy as many first thought. Especially given that some of the hyped-up partnerships from the last 2-3 years appear to have amounted to nothing.

For example, its partnership with NASA was big news back in 2020 and is still talked about today as a reason to invest in BrainChip. But this seems to have ended after just three weeks on 18 January 2021 based on NASA data. It’s also worth noting that there was no mention of NASA in its most recent annual report.

So, should sales fail to materialise in a market dominated by some huge tech behemoths such as AMD, Intel, and Nvidia, then there’s a distinct danger that its days as a billion dollar plus company could be numbered.




PS: This was my reaction!


[GIF attachment]
 
  • Haha
  • Like
  • Sad
Reactions: 22 users

Lex555

Regular
The following covers the Aiot market and does not mention Brainchip by name but there are two very interesting paragraphs which I have emboldened and partitioned to make easy to locate:

What’s a Neural microcontroller?​

MAY 30, 2022 BY JEFF SHEPARD

FacebookTwitterLinkedInEmail
The ability to run neural networks (NNs) on MCUs is growing in importance to support artificial intelligence (AI) and machine learning (ML) in the Internet of Things (IoT) nodes and other embedded edge applications. Unfortunately, running NNs on MCUs is challenging due to the relatively small memory capacities of most MCUs. This FAQ details the memory challenges of running NNs on MCUs and looks at possible system-level solutions. It then presents recently announced MCUs with embedded NN accelerators. It closes by looking at how the Glow machine learning compiler for NNs can help reduce memory requirements.
Running NNs on MCUs (sometimes called tinyML) offers advantages over sending raw data to the cloud for analysis and action. Those advantages include the ability to tolerate poor or even no network connectivity and safeguard data privacy and security. MCU memory capacities are often limited to the main memory of hundreds of KB of SRAM, often less, and byte-addressable Flash of no more than a few MBs for read-only data.
To achieve high accuracy, most NNs require larger memory capacities. The memory needed by a NN includes read-only parameters and so-called feature maps that contain intermediate and final results. It can be tempting to process an NN layer on an MCU in the embedded memory before loading the next layer, but it’s often impractical. A single NN layer’s parameters and feature maps can require up to 100 MB of storage, exceeding the MCU memory size by as much as two orders of magnitude. Recently developed NNs with higher accuracies require even more memory, resulting in a widening gap between the available memory on most MCUs and the memory requirements of NNs (Figure 1).
Figure 1: The available memory on most MCUs is much too small to support the needs of the majority of NNs. (Image: Arxiv)
One solution to address MCU memory limitations is to dynamically swap NN data blocks between the MCU SRAM and a larger external (out-of-core) cash memory. Out-of-core NN implementations can suffer from several limitations, including: execution slowdown, storage wear out, higher energy consumption, and data security. If these concerns can be adequately addressed in a specific application, an MCU can be used to run large NNs with full accuracy and generality.
One approach to out-of-core NN implementation is to split one NN layer into a series of tiles small enough to fit into the MCU memory. This approach has been successfully applied to NN systems on servers where the NN tiles are swapped between the CPU/GPU memory and the server’s memory. Most embedded systems don’t have access to the large memory spaces available on servers. Using memory swapping approaches with MCUs can run into problems using a relatively small external SRAM or an SD card, such as lower SD card durability and reliability, slower execution due to I/O operations, higher energy consumption, and safety and security of out-of-core NN data storage.
Another approach to overcoming MCU memory limitations is to optimize the NN more aggressively, using techniques such as model compression, parameter quantization, and designing tiny NNs from scratch. These approaches involve tradeoffs in model accuracy, generality, or both. In most cases, the techniques used to fit an NN into the memory space of an MCU leave the NN either too inaccurate (< 60% accuracy) or too specialized (able to detect only a few object classes). These challenges can disqualify MCUs where NNs with high accuracy and generality are needed, even if inference delays can be tolerated, such as:
  • NN inference on slowly changing signals such as monitoring crop health by analyzing hourly photos or traffic patterns by analyzing video frames taken every 20-30 minutes
  • Profiling NNs on the device by occasionally running a full-blown NN to estimate the accuracy of long-running smaller NNs
  • Transfer learning, i.e., retraining NNs on the MCU with data collected from the deployment every hour or day
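Of the optimization techniques above, parameter quantization is the easiest to illustrate. The sketch below is a toy symmetric linear quantizer, not a real library API: float32 weights are mapped to int8, shrinking storage 4x at the cost of small rounding errors, which is exactly the accuracy-for-memory tradeoff described:

```python
# Sketch of post-training parameter quantization (illustrative only).
# Float32 weights are mapped to int8, cutting storage 4x while
# introducing small rounding errors.

def quantize_int8(weights):
    """Symmetric linear quantization of a list of floats to int8."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.9, -1.27, 0.05, 0.64]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, f"max rounding error {max_err:.4f}")
```

Real toolchains (e.g., the TensorFlow Lite converter) add per-channel scales, zero points, and calibration, but the underlying memory-versus-precision tradeoff is the same.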
NN accelerators embedded in MCUs
Many of the challenges of implementing NNs on MCUs are being addressed by MCUs with embedded NN accelerators. These advanced MCUs are an emerging device category that promises to provide designers with new opportunities to develop IoT node and edge ML solutions. For example, an MCU with a hardware-based embedded convolutional neural network (CNN) accelerator enables battery-powered applications to execute AI inferences while spending only microjoules of energy (Figure 2).
Figure 2: Neural MCU block diagram showing the basic MCU blocks (upper left) and the CNN accelerator section (right). (Image: Maxim)
The MCU with an embedded CNN accelerator is a system on chip combining an Arm Cortex-M4 core with a RISC-V core; either can execute application and control code as well as drive the CNN accelerator. The CNN engine has 442 KB of weight storage memory and supports 1-, 2-, 4-, and 8-bit weights (networks of up to 3.5 million weights). The SRAM-based CNN weight memory structure supports on-the-fly AI network updates. The architecture is flexible and allows CNNs to be trained using conventional toolsets such as PyTorch and TensorFlow.
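As a quick sanity check on those numbers (assuming 1 KB = 1,024 bytes), the 442 KB weight memory can be converted into a weight capacity at each supported bit width:

```python
# Sanity check: how many weights fit in the accelerator's 442 KB weight
# memory at each supported bit width (assumes 1 KB = 1,024 bytes).

WEIGHT_MEM_BITS = 442 * 1024 * 8

for bits in (1, 2, 4, 8):
    capacity = WEIGHT_MEM_BITS // bits
    print(f"{bits}-bit weights: up to {capacity:,}")
```

At 1-bit weights the capacity works out to roughly 3.6 million, which is consistent with the quoted figure of "up to 3.5 million weights"; at 8 bits it drops to about 450,000.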
Another MCU supplier has pre-announced a neural processing unit integrated with an Arm Cortex core. The new neural MCU is scheduled to ship later this year and is claimed to provide the same level of AI performance as a quad-core processor with an AI accelerator, but at one-tenth the cost and one-twelfth the power consumption.

Additional neural MCUs are expected to emerge in the near future.

Glow for smaller NN memories
Glow (graph lowering) is a machine learning compiler for neural network graphs. It's available on GitHub and is designed to optimize neural network graphs and generate code for various hardware targets. Two versions of Glow are available, one for ahead-of-time (AOT) and one for just-in-time (JIT) compilation. As the names suggest, AOT compilation is performed offline (ahead of time) and generates an object file (a bundle) that is later linked with the application code, while JIT compilation is performed at runtime, just before the model is executed.
MCUs are available that support AOT compilation using Glow. The compiler converts the neural networks into object files, which the user converts into a binary image for increased performance and a smaller memory footprint than a JIT (runtime) inference engine. In this case, Glow is used as a software back-end for the PyTorch machine learning framework and the ONNX model format (Figure 3).
Figure 3: Example of an AOT compilation flow diagram using Glow. (Image: NXP)
The Glow NN compiler lowers an NN into a two-phase, strongly typed intermediate representation. Domain-specific optimizations are performed in the first phase, while the second phase performs optimizations targeting specialized back-end hardware features. MCUs are available that combine Arm Cortex-M cores with a Cadence Tensilica HiFi 4 DSP, accelerating performance by utilizing the Arm CMSIS-NN and HiFi NN libraries, respectively. Their features include:
  • Lower latency and smaller solution size for edge-inference NNs
  • Accelerated NN applications via the CMSIS-NN and Cadence HiFi NN libraries
  • Faster time to market using the available software development kit
  • Flexible implementation, since Glow is open source under the Apache License 2.0
Summary
Running NNs on MCUs is important for IoT nodes and other embedded edge applications, but it can be challenging due to MCU memory limitations. Several approaches have been developed to address these limitations, including out-of-core designs that swap blocks of NN data between the MCU memory and an external memory, and various NN software optimization techniques. Unfortunately, these approaches involve tradeoffs between model accuracy and generality that can leave the NN too inaccurate and/or too specialized for practical applications. The emergence of MCUs with integrated NN accelerators is beginning to address those concerns and enables practical NN implementations for IoT and edge applications. Finally, the availability of the Glow NN compiler gives designers an additional tool for optimizing NNs for smaller applications.
You're on fire this weekend FF, I appreciate your input and especially the easy to read summaries.

This paragraph lit up my neurons: an industry insider stating something that smells and sounds like AKIDA is shipping later this year:
Another MCU supplier has pre-announced a neural processing unit integrated with an Arm Cortex core. The new neural MCU is scheduled to ship later this year and is claimed to provide the same level of AI performance as a quad-core processor with an AI accelerator, but at one-tenth the cost and one-twelfth the power consumption.
 

Lex555

Regular
Oh dear, please don't shoot the messenger!

PS: Do you think we should email this ding-bat and let him know how much faster the Rover is going to go because of BrainChip?





Why did the BrainChip share price crash 30% in June?​

BrainChip’s shares were sold off in June. Here’s why…


James Mickleboro
Published July 3, 9:45 am AEST

BRN
[Image: A man in a suit facepalms at the downturn happening with shares today. Source: Getty Images]


You’re reading a free article with opinions that may differ from The Motley Fool’s Premium Investing Services. Become a Motley Fool member today to get instant access to our top analyst recommendations, in-depth research, investing resources, and more. Learn More


The BrainChip Holdings Ltd (ASX: BRN) share price had a disappointing month in June.

The semiconductor company’s shares ended the month 30% lower than where they started it.

This was despite BrainChip’s shares being added to the illustrious ASX 200 index during the month.

What happened to the BrainChip share price?​


Investors were selling down the BrainChip share price in June amid broad market weakness. With interest rates increasing to combat rising inflation, this put pressure on equities.

This was particularly the case at the higher risk side of the market, where BrainChip certainly sits.

For example, even after June’s decline, the company has a market capitalisation of over $1.4 billion despite its revenue year to date being just $205,000.

When annualised to $820,000, this means its shares are changing hands for a ridiculous 1700 times revenue. And this is before the company has even proven that it has a market for its Akida neuromorphic processor.

In light of this, it is no surprise that when the market wobbles, the BrainChip share price tumbles.

What’s next?​


The next 12 months will be very interesting for the BrainChip share price. With the company now commercialising its technology, it will have to let its sales do the talking rather than its press releases or podcasts.

Which may not be as easy as many first thought. Especially given that some of the hyped-up partnerships from the last 2-3 years appear to have amounted to nothing.

For example, its partnership with NASA was big news back in 2020 and is still talked about today as a reason to invest in BrainChip. But this seems to have ended after just three weeks on 18 January 2021 based on NASA data. It’s also worth noting that there was no mention of NASA in its most recent annual report.

So, should sales fail to materialise in a market dominated by some huge tech behemoths such as AMD, Intel, and Nvidia, then there’s a distinct danger that its days as a billion dollar plus company could be numbered.

Wondering where you should invest $1,000 right now?


When investing expert Scott Phillips has a stock tip, it can pay to listen. After all, the flagship Motley Fool Share Advisor newsletter he has run for over ten years has provided thousands of paying members with stock picks that have doubled, tripled or even more.* Scott just revealed what he believes could be the "five best ASX stocks" for investors to buy right now. These stocks are trading at near dirt-cheap prices and Scott thinks they could be great buys right now
Bravo! I have $1,000 burning a hole in my pocket and I urgently need to know if BRN is one of Scott's top 5 picks???
 
D

Deleted member 118

Guest
Oh dear, please don't shoot the messenger!

PS: Do you think we should email this ding-bat and let him know how much faster the Rover is going to go because of BrainChip?







 

Rodney

Regular
As others have mentioned, BRN probably won't make a lot of money from NASA in the grand scheme of things (short term). It's not like NASA is going to produce 1 million rovers. 😁

But....

I see the interaction as proving out more use cases and showing the world what is possible at the extreme edge.

BRN is also showing that it has partnered with one of the most cutting-edge, technically advanced organisations in the world, which gives BRN a massive tick on its resume and helps prove out this type of partnership. That's another reason I think Mercedes was so happy to specifically name that the EQXX concept car had Akida inside.

Who doesn't want to be partnered up with the latest edge tech being proven out and implemented by NASA?

While NASA may not end up being huge dollars for BRN in the short term, the exposure (marketing) to the world from the NASA partnership, in terms of generating other global deals, is priceless.
Not sure I agree. It depends on what Akida can do and enables. If a rover can travel 200x faster, cover more distance, and collect more data, it would be like having 200 rovers. A business would expect to get a good slice of those savings, as it would be beneficial to both parties. I think there is a lot of scope for BrainChip to make huge money from NASA, depending on what it enables.
 

Rskiff

Regular
Oh dear, please don't shoot the messenger!

PS: Do you think we should email this ding-bat and let him know how much faster the Rover is going to go because of BrainChip?





MF should just admit that they've put a buy on BRN, because everything they say ends up doing the opposite. It has been a hex on the share price. I wish they'd revert to the negative outlook they've had in the past; then we'd see big upside.
 