BRN Discussion Ongoing


Taproot

Regular
Think Akida can run up to 80 nodes?

Akida is a fully customizable event-based AI neural processor. Akida’s scalable architecture and small footprint boost efficiency by orders of magnitude, supporting up to 256 nodes that connect over a mesh network. Every node consists of four Neural Processing Units (NPUs), each with scalable and configurable SRAM. Within each node, the NPUs can be configured as either convolutional or fully connected.

Akida is flexible and scalable for multiple edge AI use cases. Achieve the most cost-effective solution by optimizing the node configuration to the desired level of performance and efficiency. Scale down to 2 nodes (@ 1 GHz = 1 TOPS) for ultra-low power, or scale up to 256 nodes (@ 1 GHz = 131 TOPS) for complex use cases.
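For a rough sense of how that scaling translates, here is a minimal back-of-envelope calculator based only on the two data points quoted above (2 nodes ≈ 1 TOPS, 256 nodes ≈ 131 TOPS at 1 GHz). The assumption of linear scaling is mine for illustration; real throughput will depend on workload and configuration.

```python
# Back-of-envelope throughput estimate from the figures quoted above
# (2 nodes ~ 1 TOPS, 256 nodes ~ 131 TOPS at 1 GHz). Linear scaling is an
# illustrative assumption, not a vendor specification.

def estimated_tops(nodes: int, tops_per_node: float = 131 / 256) -> float:
    """Rough TOPS estimate for a given node count at 1 GHz."""
    return nodes * tops_per_node

for n in (2, 8, 80, 256):
    print(f"{n:3d} nodes ≈ {estimated_tops(n):6.1f} TOPS")
```

On those assumptions, the 80-node configuration asked about above would land around 40 TOPS.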
 
  • Like
  • Love
  • Fire
Reactions: 26 users

Esq.111

Fascinatingly Intuitive.
Morning Chippers,

Having a read of a weekend financial newspaper...

FULL PAGE advert from Mercedes-Benz.

* with a picture of the single panel glass dashboard floating above a Mercedes and a young lass looking up at the panel in wonderment.

* the caption at bottom of picture reads....

As innovative as it is intuitive: the Mercedes-Benz MBUX Hyperscreen with artificial intelligence.
INNOVATIONS BY
( Mercedes-Benz logo ).

* At the top of the advert, in very small print....
Overseas model shown. Vehicle shown not currently available in Australia.

Apart from what we all know here, there is no mention of Brainchip in the advert.

Getting closer by the day.

Regards,
Esq.
 
  • Like
  • Fire
  • Love
Reactions: 38 users

cosors

👀
(quoting Esq.111's Mercedes-Benz advert post above)
I have often thought about this and wanted to write about it in detail. It's perfectly normal for companies to want to sell their success and ingenious new technology as their own. I'm getting used to that, or I already am.
Typing this, maybe, so that the 999 disappears quickly.
 
  • Like
  • Fire
Reactions: 5 users

VictorG

Member
Under five months since TSE began and we are on the cusp of 1,000 pages in the main BRN thread, with hundreds more in the other threads. Well done all; the information generated in this forum could be licensed for huge profits.
 
  • Like
  • Love
  • Fire
Reactions: 28 users

Boab

I wish I could paint like Vincent
(quoting Esq.111's Mercedes-Benz advert post above)
Pretty standard advertising, where no one wants to reveal the ingredients of the recipe.
Luckily we know what's inside. 😉😉
 
  • Like
  • Fire
  • Love
Reactions: 22 users

cosors

👀
Happy 1000th 🚀🕺
May the force of 1000 eyes be with us!
🥳
____
Crap, still not there 😂
 
  • Like
  • Love
  • Fire
Reactions: 14 users
We have long been looking for official confirmation of a successful NASA Vorago BrainChip Phase 1 project.
Well, I have found it, and it is copied with a link below.

Note:

1. For newer shareholders: Vorago is Silicon Space Technology Corporation.
2. Vorago used the term CNN RNN, not SCNN, to describe AKIDA.

As you read through the extracts you will note the following:

A. Vorago met all of the Phase 1 objectives.
B. Vorago has five letters in support of continuing to the next Phase 2; importantly/interestingly, two of these letters offer funding for Phase 2 independent of NASA (I personally am thinking of large aerospace companies jumping on board).
C. Vorago has modelling which shows AKIDA will allow NASA to have autonomous rovers that achieve speeds of up to 20 km/h, compared with a present speed of 4 centimetres a second.
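To put point C in perspective, here is a quick unit conversion of the two figures quoted (illustrative arithmetic only):

```python
# Convert the rover-speed figures above to a common unit:
# 4 cm/s today versus a 20 km/h goal.

current_cm_per_s = 4.0
goal_kmh = 20.0
goal_cm_per_s = goal_kmh * 100_000 / 3600   # km/h -> cm/s

print(f"goal ≈ {goal_cm_per_s:.0f} cm/s, "
      f"≈ {goal_cm_per_s / current_cm_per_s:.0f}x the current traverse speed")
```

That works out to roughly 556 cm/s, or about a 139-fold increase over the current 4 cm/s.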

There is in my opinion no other company on this planet with technology that can compete with the creation of Peter van der Made and Anil Mankar.


The Original Phase 1:
"The ultimate goal of this project is to create a radiation-hardened Neural Network suitable for Edge use. Neural Networks operating at the Edge will need to perform Continuous Learning and Few-shot/One-shot Learning with very low energy requirements, as will NN operation. Spiking Neural Networks (SNNs) provide the architectural framework to enable Edge operation and Continuous Learning. SNNs are event-driven and represent events as a spike or a train of spikes. Because of the sparsity of their data representation, the amount of processing Neural Networks need to do for the same stimulus can be significantly less than conventional Convolutional Neural Networks (CNNs), much like a human brain. To function in Space and in other extreme Edge environments, Neural Networks, including SNNs, must be made rad-hard.

BrainChip's Akida Event Domain Neural Processor (www.brainchipinc.com) offers native support for SNNs. BrainChip has been able to drive power consumption down to about 3 pJ per synaptic operation in their 28nm Si implementation. The Akida Development Environment (ADE) uses industry-standard development tools TensorFlow and Keras to allow easy simulation of its IP.

Phase I is the first step towards creating radiation-hardened Edge AI capability. We plan to use the Akida Neural Processor architecture and, in Phase I, will:
• Understand the operation of BrainChip's IP
• Understand the 28nm instantiation of that IP (Akida)
• Evaluate radiation vulnerability of different parts of the IP through the Akida Development Environment
• Define the architecture of the target IC
• Define how HARDSIL® will be used to harden each chosen IP block
• Choose a target CMOS node (likely 28nm) and create a plan to design and fabricate the IC in that node, including defining the HARDSIL® process modules for this baseline process
• Define the radiation testing plan to establish the radiation robustness of the IC

Successfully accomplishing these objectives:
• Establishes the feasibility of creating a useful, radiation-hardened product IC with an embedded NPU and already-existing supporting software ecosystem to allow rapid adoption and productive use within NASA and the Space community.
• Creates the basis for an executable Phase II proposal and a path towards fabrication of the processor."

CNN RNN Processor
FIRM: SILICON SPACE TECHNOLOGY CORPORATION
PI: Jim Carlquist
Proposal #: H6.22-4509
NON-PROPRIETARY DATA
Objectives:

The goal of this project is the creation of a radiation-hardened Spiking Neural Network (SNN) SoC based on the BrainChip Akida Neuron Fabric IP. Akida is a member of a small set of existing SNN architectures structured to more closely emulate computation in a human brain. The rationale for using a Spiking Neural Network (SNN) for Edge AI computing is its efficiency. The neuromorphic approach used in the Akida architecture takes fewer MACs per operation, since it creates and uses sparsity of both weights and activations through its event-based model. In addition, Akida reduces memory consumption by quantizing and compressing network parameters. This also helps to reduce power consumption and die size while maintaining performance.
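The sparsity argument in the paragraph above can be made concrete with a small, generic sketch (illustrative NumPy only, not BrainChip code): when most activations and weights are zero, an event-driven scheme performs multiply-accumulates only for the non-zero pairs.

```python
import numpy as np

# Illustrative comparison of dense MAC count versus an event-driven count
# that skips zero activations and zero weights. Generic sketch, not Akida.

rng = np.random.default_rng(0)
acts = rng.random(1024) * (rng.random(1024) > 0.8)                    # ~80% of activations are zero
weights = rng.random((1024, 256)) * (rng.random((1024, 256)) > 0.5)   # ~50% of weights are zero

dense_macs = acts.size * weights.shape[1]                             # every input x every output
event_macs = sum(int(np.count_nonzero(weights[i])) for i in np.flatnonzero(acts))

print(f"dense MACs: {dense_macs}")
print(f"event MACs: {event_macs}  (~{dense_macs / max(event_macs, 1):.1f}x fewer)")
```

With roughly 80% sparse activations and 50% sparse weights, about nine out of ten multiply-accumulates disappear in this toy example.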
Spiking Neural Network Block Diagram


ACCOMPLISHMENTS

Notable Deliverables Provided:
• Design and Manufacturing Plans
• Radiation Testing Plan (included in Final report)
• Technical final report


Key Milestones Met
• Understand Akida Architecture
• Understand 28nm Implementation
• Evaluate Radiation Vulnerability of the IP Through the Akida Development Environment
• Define Architecture of Target IC
• Define how HARDSIL® will be used in Target IC
• Create Design and Manufacturing Plans
• Define the Radiation Testing Plan to Establish the Radiation Robustness of the IC


FUTURE PLANNED DEVELOPMENTS

Planned Post-Phase II Partners


We received five Letters of Support for this project.

Two of which will provide capital infusion to keep the project going, one for aid in radiation testing, and the final two for use in future space flights.

Planned/Possible Mission Infusion

NASA is keen to increase the performance of its autonomous rovers to allow for greater speeds.

Current routing methodologies limit speeds to 4cm/sec while NASA has a goal to be able to have autonomous rovers traverse at speeds up to 20km/hr.


Early calculations show the potential for this device to process several of the required neural network algorithms fast enough to meet this goal.

Planned/Possible Mission Commercialization

A detailed plan is included in the Phase I final submittal to commercialize a RADHARD, flight-ready QML SNN SoC to be available for NASA and commercial use.

This plan will include a Phase II plus extensions to reach the commercialization goals we are seeking.

CONTRACT (CENTER): 80NSSC20C0365 (ARC)
SUBTOPIC: H6.22 Deep Neural Net and Neuromorphic Processors for In-Space Autonomy and Cognition
SOLICITATION-PHASE: SBIR 2020-I
TA: 4.5.0 Autonomy


My opinion only DYOR
FF

AKIDA BALLISTA

Originally posted in a NASA thread by @uiux but now has greater significance:

Proposal Summary

Proposal Information

Proposal Number: 21-2-H6.22-1743
Phase 1 Contract #: 80NSSC21C0233
Subtopic Title: Deep Neural Net and Neuromorphic Processors for In-Space Autonomy and Cognition
Proposal Title: Neuromorphic Enhanced Cognitive Radio

Small Business Concern

Firm: Intellisense Systems, Inc.
Address: 21041 South Western Avenue, Torrance, CA 90501
Phone: (310) 320-1827

Principal Investigator

Name: Mr. Wenjian Wang, Ph.D.
E-mail: wwang@intellisenseinc.com
Address: 21041 South Western Avenue, CA 90501-1727
Phone: (310) 320-1827

Business Official

Name: Selvy Utama
E-mail: notify@intellisenseinc.com
Address: 21041 South Western Avenue, CA 90501-1727
Phone: (310) 320-1827

Summary Details

Estimated Technology Readiness Level (TRL): Begin: 3, End: 4


Technical Abstract (Limit 2000 characters, approximately 200 words):
Intellisense Systems, Inc. proposes in Phase II to advance development of a Neuromorphic Enhanced Cognitive Radio (NECR) device to enable autonomous space operations on platforms constrained by size, weight, and power (SWaP). NECR is a low-size, -weight, and -power (-SWaP) cognitive radio built on the open-source framework, i.e., GNU Radio and RFNoC™, with new enhancements in environment learning and improvements in transmission quality and data processing. Due to the high efficiency of spiking neural networks and their low-latency, energy-efficient implementation on neuromorphic computing hardware, NECR can be integrated into SWaP-constrained platforms in spacecraft and robotics, to provide reliable communication in unknown and uncharacterized space environments such as the Moon and Mars. In Phase II, Intellisense will improve the NECR system for cognitive communication capabilities accelerated by neuromorphic hardware. We will refine the overall NECR system architecture to achieve cognitive communication capabilities accelerated by neuromorphic hardware, on which a special focus will be the mapping, optimization, and implementation of smart sensing algorithms on the neuromorphic hardware. The Phase II smart sensing algorithm library will include Kalman filter, Carrier Frequency Offset estimation, symbol rate estimation, energy detection- and matched filter-based spectrum sensing, signal-to-noise ratio estimation, and automatic modulation identification.

These algorithms will be implemented on COTS neuromorphic computing hardware such as Akida processor from BrainChip, and then integrated with radio frequency modules and radiation-hardened packaging into a Phase II prototype.

At the end of Phase II, the prototype will be delivered to NASA for testing and evaluation, along with a plan describing a path to meeting fault and tolerance requirements for mission deployment and API documents for integration with CubeSat, SmallSat, and 'ROVER' for flight demonstration.
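Several of the algorithms named in the abstract are classical signal-processing building blocks. As one example, a minimal energy-detection spectrum-sensing routine can be sketched as below; this is a generic textbook formulation, not the NECR or Akida implementation.

```python
import numpy as np

# Minimal energy-detection spectrum sensing: declare the band occupied when
# the measured average energy exceeds the noise power by a margin.
# Generic illustration only, not the NECR/Akida implementation.

def energy_detect(samples: np.ndarray, noise_power: float, margin_db: float = 3.0) -> bool:
    energy = np.mean(np.abs(samples) ** 2)
    threshold = noise_power * 10 ** (margin_db / 10)
    return bool(energy > threshold)

rng = np.random.default_rng(1)
n = 4096
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)   # unit-power noise
carrier = 1.5 * np.exp(2j * np.pi * 0.1 * np.arange(n))               # a signal above the noise floor

print("noise only:     occupied =", energy_detect(noise, noise_power=1.0))
print("noise + signal: occupied =", energy_detect(noise + carrier, noise_power=1.0))
```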

Potential NASA Applications (Limit 1500 characters, approximately 150 words):
NECR technology will have many NASA applications due to its low-SWaP and low-cost cognitive sensing capability. It can be used to enhance the robustness and reliability of space communication and networking, especially cognitive radio devices. NECR can be directly transitioned to the Human Exploration and Operations Mission Directorate (HEOMD) Space Communications and Navigation (SCaN) Program, CubeSat, SmallSat, and 'ROVER' to address the needs of the Cognitive Communications project.

Potential Non-NASA Applications (Limit 1500 characters, approximately 150 words):
NECR technology’s low-SWaP and low-cost cognitive sensing capability will have many non-NASA applications. The NECR technology can be integrated into commercial communication systems to enhance cognitive sensing and communication capability. Automakers can integrate the NECR technology into automobiles for cognitive sensing and communication.

Duration: 24
 
  • Like
  • Love
  • Fire
Reactions: 48 users

(quoting the Intellisense NECR proposal summary posted above)

FF

Bonus: NECR is now in Phase II, as previously posted.


Post in thread 'BRN Discussion 2022' https://thestockexchange.com.au/threads/brn-discussion-2022.1/post-81384
 
  • Like
  • Love
Reactions: 16 users

Boab

I wish I could paint like Vincent

(quoting the Intellisense NECR proposal summary posted above)
Looks like Intellisense does a lot of work with all branches of the US defence forces.
Apologies if already discussed.

 
  • Like
  • Love
Reactions: 19 users

Xray1

Regular
The AGM was held basically two-thirds of the way through the 2nd quarter.

I believe that he was referring to future quarters, meaning disclosure in the 4Cs released in late October 2022 and late January 2023, for example, for the next two quarters.

If any material contract had taken place in April, May or June 2022, we would have been informed. Maybe I'm wrong, maybe there will be an explosion in revenue, which would be fantastic, but in my opinion it clearly isn't coming in the 4C reported in late July 2022.

I respect your view, I'm wrong about plenty of things, and always happy to admit it.

Regards....Tech :cool:
Thanks for your input and your usual well-formulated responses... I for one am expecting a "teaser" increase in company revenue commencing from this upcoming 4C, in the region of say ~$500k to ~$1 million... I hope the funds will start to come in from a current existing client, the likes of "Socionext", to kick things off.
 
  • Like
  • Fire
Reactions: 20 users

TasTroy77

Founding Member
(quoting Esq.111's Mercedes-Benz advert post above)
I think we are all pretty certain that AKIDA IP isn't in the current Mercedes-Benz sales line.
If that were the case, there would have to have been an announcement of a material contract between MB and BrainChip.
As previously discussed, we are speculating that the artificial intelligence MBUX system as demonstrated by the EQXX model MAY start to be commercially available in 2024, although nothing has been officially confirmed.

This company's technology is revolutionary and integration takes time. A degree of patience is required before we see the tech commercialised and the subsequent revenue.
 
  • Like
  • Love
Reactions: 26 users
(quoting Boab's Intellisense comment above)

Go here and start looking at diff contracts and diff keyword searches.



(screenshots of the contract search attached)
 
  • Like
  • Love
  • Fire
Reactions: 17 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
VictorG said:
Es gibt ein Gerücht, dass die Nasa die Akida-Flagge auf dem Mars Rover hissen wird

Translated means - there is a rumour that NASA will hoist the Akida flag on the Mars rover
(close enough to it)

And just like that, a rumour was born!

First we'll have to work out what an Akida flag looks like and whether it would be more prudent to opt for a BrainChip flag instead. And then, of course, all necessary arrangements will have to be made for its ironing to take place in outer space prior to its hoisting. One small step for man, one giant crease-less BrainChip flag on Mars for mankind. ⛳🥳
 
  • Haha
  • Like
  • Love
Reactions: 31 users

Boab

I wish I could paint like Vincent
(quoting the contract search post above)
Thanks FMF. It's obvious that when the US military wants the best products, the money becomes available. Perhaps they'll all need an upgrade once the gang has Akida II up and running. 😉 And there's so much going on that we may never hear about. Roll on the revenue stream.
 
  • Like
  • Fire
Reactions: 15 users

equanimous

Norse clairvoyant shapeshifter goddess
(quoting Bravo's flag post above)
(image attached)
 
  • Haha
  • Like
  • Love
Reactions: 41 users
The following covers the AIoT market and does not mention BrainChip by name, but there are two very interesting paragraphs, which I have emboldened and partitioned to make them easy to locate:

What’s a Neural microcontroller?​

MAY 30, 2022 BY JEFF SHEPARD

The ability to run neural networks (NNs) on MCUs is growing in importance to support artificial intelligence (AI) and machine learning (ML) in the Internet of Things (IoT) nodes and other embedded edge applications. Unfortunately, running NNs on MCUs is challenging due to the relatively small memory capacities of most MCUs. This FAQ details the memory challenges of running NNs on MCUs and looks at possible system-level solutions. It then presents recently announced MCUs with embedded NN accelerators. It closes by looking at how the Glow machine learning compiler for NNs can help reduce memory requirements.
Running NNs on MCUs (sometimes called tinyML) offers advantages over sending raw data to the cloud for analysis and action. Those advantages include the ability to tolerate poor or even no network connectivity and safeguard data privacy and security. MCU memory capacities are often limited to the main memory of hundreds of KB of SRAM, often less, and byte-addressable Flash of no more than a few MBs for read-only data.
To achieve high accuracy, most NNs require larger memory capacities. The memory needed by a NN includes read-only parameters and so-called feature maps that contain intermediate and final results. It can be tempting to process an NN layer on an MCU in the embedded memory before loading the next layer, but it’s often impractical. A single NN layer’s parameters and feature maps can require up to 100 MB of storage, exceeding the MCU memory size by as much as two orders of magnitude. Recently developed NNs with higher accuracies require even more memory, resulting in a widening gap between the available memory on most MCUs and the memory requirements of NNs (Figure 1).
Figure 1: The available memory on most MCUs is much too small to support the needs of the majority of NNs. (Image: Arxiv)
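To make that gap concrete, a rough calculator for one example layer (the layer dimensions and SRAM size below are arbitrary assumptions chosen only for illustration) shows parameters plus feature maps overwhelming a generous MCU SRAM budget.

```python
# Rough memory footprint of a single convolutional layer versus MCU SRAM.
# The layer dimensions and SRAM size are illustrative assumptions.

def conv_layer_bytes(in_ch, out_ch, k, h, w, dtype_bytes=4):
    params = in_ch * out_ch * k * k * dtype_bytes       # float32 weights
    fmaps = (in_ch + out_ch) * h * w * dtype_bytes      # input + output feature maps
    return params, fmaps

params, fmaps = conv_layer_bytes(in_ch=128, out_ch=256, k=3, h=56, w=56)
mcu_sram = 512 * 1024                                   # 512 KB, generous for an MCU

total = params + fmaps
print(f"params: {params / 1e6:.1f} MB, feature maps: {fmaps / 1e6:.1f} MB")
print(f"total:  {total / 1e6:.1f} MB vs {mcu_sram / 1e6:.2f} MB SRAM "
      f"(~{total / mcu_sram:.0f}x over budget)")
```

Even this mid-sized layer needs roughly 6 MB, an order of magnitude more than the SRAM assumed here.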
One solution to address MCU memory limitations is to dynamically swap NN data blocks between the MCU SRAM and a larger external (out-of-core) cache memory. Out-of-core NN implementations can suffer from several limitations, including execution slowdown, storage wear-out, higher energy consumption, and data security concerns. If these concerns can be adequately addressed in a specific application, an MCU can be used to run large NNs with full accuracy and generality.
One approach to out-of-core NN implementation is to split one NN layer into a series of tiles small enough to fit into the MCU memory. This approach has been successfully applied to NN systems on servers, where the NN tiles are swapped between the CPU/GPU memory and the server's memory. Most embedded systems don't have access to the large memory spaces available on servers. Memory-swapping approaches on MCUs, which rely on a relatively small external SRAM or an SD card, can run into problems such as lower SD card durability and reliability, slower execution due to I/O operations, higher energy consumption, and the safety and security of out-of-core NN data storage.
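The tiling idea can be sketched in a few lines: stream tiles of an oversized weight matrix from external storage (a memory-mapped file stands in for Flash or an SD card here), process one tile at a time in the small "SRAM" working set, and accumulate the result. This is a conceptual illustration, not an actual tinyML swapping framework.

```python
import numpy as np

# Conceptual out-of-core sketch: y = W @ x where W does not fit in SRAM.
# A memory-mapped .npy file stands in for external Flash/SD storage.

IN_DIM, OUT_DIM, TILE_ROWS = 4096, 1024, 64      # 64 output rows per tile fit "in SRAM"

rng = np.random.default_rng(0)
W = rng.standard_normal((OUT_DIM, IN_DIM)).astype(np.float32)
np.save("weights.npy", W)                         # pretend the weights live off-chip
x = rng.standard_normal(IN_DIM).astype(np.float32)

W_ext = np.load("weights.npy", mmap_mode="r")    # tiles are read on demand
y = np.empty(OUT_DIM, dtype=np.float32)
for start in range(0, OUT_DIM, TILE_ROWS):
    tile = np.array(W_ext[start:start + TILE_ROWS])   # copy one tile into the working set
    y[start:start + TILE_ROWS] = tile @ x

assert np.allclose(y, W @ x, atol=1e-3)
print("tiled result matches the in-memory result")
```

The slowdown, wear-out and energy costs mentioned above come precisely from those repeated external reads.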
Another approach to overcoming MCU memory limitations is optimizing the NN more completely using techniques such as model compression, parameter quantization, and designing tiny NNs from scratch (a minimal quantization sketch follows the list below). These approaches involve tradeoffs in model accuracy, generality, or both. In most cases, the techniques used to fit an NN into the memory space of an MCU result in the NN becoming too inaccurate (< 60% accuracy) or too specialized and not generalized enough (the NN can only detect a few object classes). These challenges can disqualify the use of MCUs where NNs with high accuracy and generality are needed, even if inference delays can be tolerated, such as:
  • NN inference on slowly changing signals such as monitoring crop health by analyzing hourly photos or traffic patterns by analyzing video frames taken every 20-30 minutes
  • Profiling NNs on the device by occasionally running a full-blown NN to estimate the accuracy of long-running smaller NNs
  • Transfer learning includes retraining NNs on MCUs with data collected from deployment every hour or day
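As a minimal illustration of the parameter-quantization option mentioned before the list, here is a generic affine 8-bit post-training quantization of a single weight tensor, showing the 4x memory reduction relative to float32. It is not tied to any particular MCU toolchain.

```python
import numpy as np

# Generic affine (asymmetric) 8-bit post-training quantization of one
# weight tensor. Illustrative only; not a specific vendor toolchain.

def quantize_uint8(w: np.ndarray):
    scale = (w.max() - w.min()) / 255.0
    zero_point = int(np.round(-w.min() / scale))
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.default_rng(0).standard_normal((256, 128)).astype(np.float32)
q, scale, zp = quantize_uint8(w)

print(f"float32: {w.nbytes // 1024} KB -> uint8: {q.nbytes // 1024} KB")
print(f"max round-trip error: {np.abs(dequantize(q, scale, zp) - w).max():.4f}")
```

The accuracy cost of such schemes is exactly the tradeoff the article warns about.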
NN accelerators embedded in MCUs
Many of the challenges of implementing NNs on MCU are being addressed by MCUs with embedded NN accelerators. These advanced MCUs are an emerging device category that promises to provide designers with new opportunities to develop IoT node and edge ML solutions. For example, an MCU with a hardware-based embedded convolutional neural network (CNN) accelerator enables battery-powered applications to execute AI inferences while spending only microjoules of energy (Figure 2).
Figure 2: Neural MCU block diagram showing the basic MCU blocks (upper left) and the CNN accelerator section (right). (Image: Maxim)
*******************************************************************************************************************************************************
The MCU with an embedded CNN accelerator is a system on chip combining an Arm Cortex-M4 with a RISC-V core that can execute application and control code as well as drive the CNN accelerator. The CNN engine has a weight storage memory of 442KB and can support 1-, 2-, 4-, and 8-bit weights (supporting networks of up to 3.5 million weights). On-the-fly AI network updates are supported by the SRAM-based CNN weight memory structure. The architecture is flexible and allows CNNs to be trained using conventional toolsets such as PyTorch and TensorFlow.
*********************************************************************************************************************************************************
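The 442 KB figure in the partitioned paragraph above lines up with the "3.5 million weights" claim only at 1-bit precision; a quick check of that arithmetic, using just the numbers quoted, is below.

```python
# Sanity-check the quoted figures: 442 KB of weight memory and 1/2/4/8-bit
# weights, "up to 3.5 million weights". Arithmetic on the quoted numbers only.

WEIGHT_MEM_BITS = 442 * 1024 * 8

for bits in (1, 2, 4, 8):
    max_weights = WEIGHT_MEM_BITS // bits
    print(f"{bits}-bit weights: up to {max_weights / 1e6:.2f} M weights "
          f"(3.5 M fits: {3_500_000 * bits <= WEIGHT_MEM_BITS})")
```

At 8-bit precision the same memory holds only about 450 k weights, which is why the low-bit-width support matters.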
Another MCU supplier has pre-announced a neural MCU it is developing, which integrates a neural processing unit with an Arm Cortex core. The new neural MCU is scheduled to ship later this year and will provide the same level of AI performance as a quad-core processor with an AI accelerator, but at one-tenth the cost and one-twelfth the power consumption.
*********************************************************************************************************************************************************

Additional neural MCUs are expected to emerge in the near future.

Glow for smaller NN memories
Glow (graph lowering) is a machine learning compiler for neural network graphs. It’s available on Github and is designed to optimize the neural network graphs and generate code for various hardware devices. Two versions of Glow are available, one for Ahead of Time (AOT) and one for Just in Time (JIT) compilations. As the names suggest, AOT compilation is performed offline (ahead of time) and generates an object file (bundle) which is later linked with the application code, while JIT compilation is performed at runtime just before the model is executed.
MCUs are available that support AOT compilation using Glow. The compiler converts the neural networks into object files, which the user converts into a binary image for increased performance and a smaller memory footprint than a JIT (runtime) inference engine. In this case, Glow is used as a software back-end for the PyTorch machine learning framework and the ONNX model format (Figure 3).
Figure 3: Example of an AOT compilation flow diagram using Glow. (Image: NXP)
The Glow NN compiler lowers a NN into a two-phase, strongly-typed intermediate representation. Domain-specific optimizations are performed in the first phase, while the second phase performs optimizations focused on specialized back-end hardware features. MCUs are available that combine Arm Cortex-M cores with Cadence Tensilica HiFi 4 DSP support, accelerating NN performance by utilizing the Arm CMSIS-NN and HiFi NN libraries, respectively. Its features include:
  • Lower latency and smaller solution size for edge inference NNs.
  • Accelerate NN applications with CMSIS-NN and Cadence HiFi NN Library
  • Speed time to market using the available software development kit
  • Flexible implementation since Glow is open source with Apache License 2.0
Summary
Running NNs on MCUs is important for IoT nodes and other embedded edge applications, but it can be challenging due to MCU memory limitations. Several approaches have been developed to address memory limitations, including out-of-core designs that swap blocks of NN data between the MCU memory and an external memory, and various NN software optimization techniques. Unfortunately, these approaches involve tradeoffs between model accuracy and generality, which can result in the NN becoming too inaccurate and/or too specialized to be of use in practical applications. The emergence of MCUs with integrated NN accelerators is beginning to address those concerns and enables the development of practical NN implementations for IoT and edge applications. Finally, the availability of the Glow NN compiler gives designers an additional tool for optimizing NNs for smaller applications.
 
  • Like
  • Fire
  • Love
Reactions: 54 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Haha
  • Like
  • Fire
Reactions: 37 users
This may help? Looks like we are at least recouping our costs.

BrainChip and VORAGO Technologies Agree to Collaborate through the Akida™ Early Access Program
Agreement to Support Phase I of NASA Program for Radiation-Hardened Neuromorphic Processor

Aliso Viejo, California – September 2, 2020 – BrainChip Holdings Ltd (ASX: BRN), a leading provider of ultra-low power high performance AI technology, today announced that VORAGO Technologies has signed the Akida™ Early Access Program Agreement. The collaboration is intended to support a Phase I NASA program for a neuromorphic processor that meets spaceflight requirements. The BrainChip Early Access Program is available to a select group of customers that require early access to the Akida device, evaluation boards and dedicated support. The EAP agreement includes payments that are intended to offset the Company’s expenses to support partner needs.

The Akida neuromorphic processor is uniquely suited for spaceflight and aerospace applications. The device is a complete neural processor and does not require an external CPU, memory or Deep Learning Accelerator (DLA). Reducing component count, size and power consumption are paramount concerns in spaceflight and aerospace applications. The level of integration and ultra-low power performance of Akida supports these critical criteria. Additionally, Akida provides incremental learning. With incremental learning, new classifiers can be added to the network without retraining the entire network. The benefit in spaceflight and aerospace applications is significant as real-time local incremental learning allows continuous operation when new discoveries or circumstances occur.
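The incremental-learning point in the last paragraph can be illustrated with a deliberately generic sketch: keep one prototype vector per class and classify by nearest prototype, so adding a class only adds a prototype and never retrains the rest of the network. This is a conceptual toy, not the Akida/MetaTF API.

```python
import numpy as np

# Conceptual illustration of incremental / one-shot class addition:
# one prototype (mean feature vector) per class, nearest-prototype classification.
# Adding a class touches only the prototype table, never the feature extractor.
# NOT the Akida/MetaTF API - just the general idea described above.

class PrototypeClassifier:
    def __init__(self):
        self.prototypes = {}

    def add_class(self, label, examples):
        """One-/few-shot: store the mean feature vector as the class prototype."""
        self.prototypes[label] = np.asarray(examples).mean(axis=0)

    def predict(self, feature):
        return min(self.prototypes,
                   key=lambda k: np.linalg.norm(feature - self.prototypes[k]))

rng = np.random.default_rng(0)
clf = PrototypeClassifier()
clf.add_class("rock", rng.normal(0.0, 0.1, size=(5, 16)))
clf.add_class("crater", rng.normal(1.0, 0.1, size=(5, 16)))
clf.add_class("dust_devil", rng.normal(-1.0, 0.1, size=(3, 16)))  # added later, no retraining

print(clf.predict(rng.normal(1.0, 0.1, size=16)))   # -> "crater"
```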
As others have mentioned, BRN probably won't make a lot of money from NASA in the grand scheme of things (short term). It's not like NASA is going to produce a million rovers. 😁

But....

I see the interaction as proving out more use cases and showing the world what is possible at the extreme edge.

While BRN are also showing that we have partnered with one of the most cutting-edge, technically advanced organisations in the world, which provides BRN with a massive tick on its resume to show and prove out this type of partnership. Another reason I think Mercedes were so happy to specifically name that the EQXX car had Akida inside.

Who doesn't want to be partnered up with the latest edge tech being proven out and implemented by NASA?

While NASA may not end up being huge dollars for BRN in the short term, the exposure (marketing) to the world from the NASA partnership, which helps generate other global deals, is priceless.
 
  • Like
  • Fire
  • Love
Reactions: 41 users