BRN Discussion Ongoing

Pepsin

Regular
I was just reviewing the NVISO slides and trying to work out which company has the most potential upside and growth.

The prices next to the companies are listed in US dollars.

Any suggestions on which company out of that group has the only commercial neuromorphic AI chip heading into this revolutionary and game-changing era?


Exciting times to be a Brainchip shareholder!
Please remember that these numbers just represent the price per share and have nothing to do with market cap or potential growth. The number of shares on issue differs widely between these companies!
It would have been better to show each company's slice of the overall AI-hardware market, or something like that.

MegaChips is often seen as a big player here in the forum, but from a market-cap perspective it is only about USD 400-500 million and smaller than BRN. Despite that, it is a great customer, of course!
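To make the share-price vs. market-cap point concrete, here is a two-line illustration; the prices and share counts are made-up placeholders, not real figures for any of these companies:

```python
# Market cap = price per share x shares outstanding.
# Made-up placeholder numbers, not real figures for any company mentioned.
def market_cap(price_usd: float, shares_outstanding: int) -> float:
    return price_usd * shares_outstanding

print(market_cap(2500.00, 180_000))        # "expensive" share, ~450M USD company
print(market_cap(0.75, 1_800_000_000))     # "cheap" share, ~1.35B USD company
```

So a high share price can belong to the smaller company; only price times share count is comparable.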
 
  • Like
Reactions: 8 users
Well said, but I truly think NASA will develop something completely new out of the original using Akida
I agree they will, but bulk IP purchases will probably not be required... it will primarily be professional services (labour) where funds come in, and we won't knock that back at all. Government contracts for professional services can be very lucrative. Over time I am sure more products will be developed; I still see professional services as the main income from them.

But... the flow-on effect is more deals, and then income from other companies out there in the wild, seeing the standout visibility when Akida is used as the smarts of the next NASA Rover! It does not get much better global exposure for edge AI tech!
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 15 users

Diogenese

Top 20
The following covers the AIoT market and does not mention Brainchip by name, but there are two very interesting paragraphs which I have emboldened and partitioned to make them easy to locate:

What’s a Neural microcontroller?

MAY 30, 2022 BY JEFF SHEPARD

The ability to run neural networks (NNs) on MCUs is growing in importance to support artificial intelligence (AI) and machine learning (ML) in the Internet of Things (IoT) nodes and other embedded edge applications. Unfortunately, running NNs on MCUs is challenging due to the relatively small memory capacities of most MCUs. This FAQ details the memory challenges of running NNs on MCUs and looks at possible system-level solutions. It then presents recently announced MCUs with embedded NN accelerators. It closes by looking at how the Glow machine learning compiler for NNs can help reduce memory requirements.
Running NNs on MCUs (sometimes called tinyML) offers advantages over sending raw data to the cloud for analysis and action. Those advantages include the ability to tolerate poor or even no network connectivity and to safeguard data privacy and security. MCU memory capacities are often limited to hundreds of KB of SRAM for main memory, often less, and byte-addressable Flash of no more than a few MB for read-only data.
To achieve high accuracy, most NNs require larger memory capacities. The memory needed by a NN includes read-only parameters and so-called feature maps that contain intermediate and final results. It can be tempting to process an NN layer on an MCU in the embedded memory before loading the next layer, but it’s often impractical. A single NN layer’s parameters and feature maps can require up to 100 MB of storage, exceeding the MCU memory size by as much as two orders of magnitude. Recently developed NNs with higher accuracies require even more memory, resulting in a widening gap between the available memory on most MCUs and the memory requirements of NNs (Figure 1).
Figure 1: The available memory on most MCUs is much too small to support the needs of the majority of NNs. (Image: Arxiv)
One solution to address MCU memory limitations is to dynamically swap NN data blocks between the MCU SRAM and a larger external (out-of-core) cache memory. Out-of-core NN implementations can suffer from several limitations, including execution slowdown, storage wear-out, higher energy consumption, and data security concerns. If these concerns can be adequately addressed in a specific application, an MCU can be used to run large NNs with full accuracy and generality.
One approach to out-of-core NN implementation is to split one NN layer into a series of tiles small enough to fit into the MCU memory. This approach has been successfully applied to NN systems on servers where the NN tiles are swapped between the CPU/GPU memory and the server’s memory. Most embedded systems don’t have access to the large memory spaces available on servers. Using memory swapping approaches with MCUs can run into problems using a relatively small external SRAM or an SD card, such as lower SD card durability and reliability, slower execution due to I/O operations, higher energy consumption, and safety and security of out-of-core NN data storage.
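As a rough illustration of the tiling idea just described, here is a minimal sketch that runs one oversized layer through a small working buffer, one weight tile at a time; the function and buffer names are mine, purely illustrative:

```python
import numpy as np

# Minimal sketch of layer tiling: run one oversized fully-connected layer
# through a small working buffer by swapping in one weight tile at a time.
# Names and sizes are illustrative, not any vendor's API.

def tiled_dense(x, weights_external, tile_cols=64):
    in_dim, out_dim = weights_external.shape
    y = np.zeros(out_dim, dtype=x.dtype)
    for start in range(0, out_dim, tile_cols):
        end = min(start + tile_cols, out_dim)
        tile = weights_external[:, start:end]   # "swap in" one tile (the I/O cost)
        y[start:end] = x @ tile                 # compute on the small tile only
    return y

x = np.random.rand(512).astype(np.float32)
W = np.random.rand(512, 1024).astype(np.float32)  # too big for "on-chip" SRAM
assert np.allclose(tiled_dense(x, W), x @ W, atol=1e-3)
```

Each tile swap is where the I/O, wear, and energy penalties mentioned above come from.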
Another approach to overcoming MCU memory limitations is optimizing the NN more completely using techniques such as model compression, parameter quantization, and designing tiny NNs from scratch (a rough quantization sketch follows the list below). These approaches involve varying tradeoffs among model accuracy, generality, or both. In most cases, the techniques used to fit an NN into the memory space of an MCU result in the NN becoming too inaccurate (< 60% accuracy) or too specialized and not generalized enough (the NN can only detect a few object classes). These challenges can disqualify the use of MCUs where NNs with high accuracy and generality are needed, even if inference delays can be tolerated, such as:
  • NN inference on slowly changing signals such as monitoring crop health by analyzing hourly photos or traffic patterns by analyzing video frames taken every 20-30 minutes
  • Profiling NNs on the device by occasionally running a full-blown NN to estimate the accuracy of long-running smaller NNs
  • Transfer learning, i.e., retraining NNs on MCUs with data collected from deployment every hour or day
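For a feel of the parameter-quantization tradeoff mentioned above, here is a minimal sketch of symmetric post-training weight quantization; it is illustrative only, not any particular toolchain's method:

```python
import numpy as np

# Rough sketch of symmetric post-training weight quantization to n bits.
# Illustrative only; not any particular toolchain's method.

def quantize(weights, n_bits):
    qmax = 2 ** (n_bits - 1) - 1            # 127 for 8-bit, 7 for 4-bit, 1 for 2-bit
    scale = np.abs(weights).max() / qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q * scale                        # dequantized copy, for error measurement

w = np.random.randn(10_000).astype(np.float32)
for bits in (8, 4, 2):
    err = np.abs(w - quantize(w, bits)).mean()
    print(f"{bits}-bit: mean abs error {err:.4f}, weight memory {bits / 32:.1%} of FP32")
```

Lower bit widths shrink memory linearly while reconstruction error grows, which is exactly the accuracy/memory tradeoff the article describes.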
NN accelerators embedded in MCUs
Many of the challenges of implementing NNs on MCU are being addressed by MCUs with embedded NN accelerators. These advanced MCUs are an emerging device category that promises to provide designers with new opportunities to develop IoT node and edge ML solutions. For example, an MCU with a hardware-based embedded convolutional neural network (CNN) accelerator enables battery-powered applications to execute AI inferences while spending only microjoules of energy (Figure 2).
Figure 2: Neural MCU block diagram showing the basic MCU blocks (upper left) and the CNN accelerator section (right). (Image: Maxim)
*******************************************************************************************************************************************************
The MCU with an embedded CNN accelerator is a system on chip combining an Arm Cortex-M4 with a RISC-V core that can execute application and control code as well as drive the CNN accelerator. The CNN engine has a weight storage memory of 442KB and can support 1-, 2-, 4-, and 8-bit weights (supporting networks of up to 3.5 million weights). On-the-fly AI network updates are supported by the SRAM-based CNN weight memory structure. The architecture is flexible and allows CNNs to be trained using conventional toolsets such as PyTorch and TensorFlow.
*********************************************************************************************************************************************************
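A quick sanity check on those numbers (my arithmetic, not the vendor's): 442KB of weight memory lines up with the "3.5 million weights" figure only at 1-bit precision:

```python
# My arithmetic, not the vendor's: how many weights fit in 442KB at each width?
WEIGHT_MEMORY_BITS = 442 * 1024 * 8        # ~3.62 million bits

for bits in (1, 2, 4, 8):
    print(f"{bits}-bit weights: up to ~{WEIGHT_MEMORY_BITS / bits / 1e6:.2f}M weights")
# 1-bit gives ~3.62M, matching the "up to 3.5 million weights" claim;
# 8-bit gives ~0.45M, so higher precision means smaller networks.
```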
Another MCU supplier has pre-announced a neural processing unit integrated with an Arm Cortex core. The new neural MCU is scheduled to ship later this year and will provide the same level of AI performance as a quad-core processor with an AI accelerator, but at one-tenth the cost and one-twelfth the power consumption.
*********************************************************************************************************************************************************

Additional neural MCUs are expected to emerge in the near future.

Glow for smaller NN memories
Glow (graph lowering) is a machine learning compiler for neural network graphs. It's available on GitHub and is designed to optimize neural network graphs and generate code for various hardware devices. Two versions of Glow are available, one for Ahead of Time (AOT) and one for Just in Time (JIT) compilation. As the names suggest, AOT compilation is performed offline (ahead of time) and generates an object file (bundle) which is later linked with the application code, while JIT compilation is performed at runtime just before the model is executed.
MCUs are available that support AOT compilation using Glow. The compiler converts the neural networks into object files, which the user converts into a binary image for increased performance and a smaller memory footprint than a JIT (runtime) inference engine. In this case, Glow is used as a software back-end for the PyTorch machine learning framework and the ONNX model format (Figure 3).
Figure 3: Example of an AOT compilation flow diagram using Glow. (Image: NXP)
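The first step of that Figure 3 flow, exporting a trained PyTorch model to ONNX, looks roughly like this; the toy model and filename are mine, and the Glow bundle-compilation step itself is toolchain-specific, so it is not shown:

```python
import torch
import torch.nn as nn

# First step of the Figure 3 flow: export a (toy) PyTorch model to ONNX.
# The ONNX file is what a Glow-based AOT flow would compile into a bundle;
# that Glow step is toolchain-specific and omitted here.

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3),          # 28x28 input -> 26x26 feature maps
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 26 * 26, 10),
)
model.eval()
dummy_input = torch.randn(1, 1, 28, 28)      # example input fixes tensor shapes
torch.onnx.export(model, dummy_input, "toy_model.onnx", opset_version=11)
```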
The Glow NN compiler lowers an NN into a two-phase, strongly-typed intermediate representation. Domain-specific optimizations are performed in the first phase, while the second phase performs optimizations focused on specialized back-end hardware features. MCUs are available that combine support for Arm Cortex-M cores and the Cadence Tensilica HiFi 4 DSP, accelerating performance by utilizing the Arm CMSIS-NN and HiFi NN libraries, respectively. Its features include:
  • Lower latency and smaller solution size for edge inference NNs.
  • Accelerate NN applications with CMSIS-NN and Cadence HiFi NN Library
  • Speed time to market using the available software development kit
  • Flexible implementation since Glow is open source with Apache License 2.0
Summary
Running NNs on MCUs is important for IoT nodes and other embedded edge applications, but it can be challenging due to MCU memory limitations. Several approaches have been developed to address memory limitations, including out-of-core designs that swap blocks of NN data between the MCU memory and an external memory, and various NN software 'optimization' techniques. Unfortunately, these approaches involve tradeoffs between model accuracy and generality, which can result in the NN becoming too inaccurate and/or too specialized to be of use in practical applications. The emergence of MCUs with integrated NN accelerators is beginning to address those concerns and enables the development of practical NN implementations for IoT and edge applications. Finally, the availability of the Glow NN compiler gives designers an additional tool for optimizing NNs for smaller applications.
Last Tuesday 12:52, @Fullmoonfever brought up something by Maxim
Hi D,
Appreciate your thoughts on the following recent release. A couple of things caught my eye, particularly the weights, but I'm not tech enough to be comfortable that this is similar to Akida?

Edit: Not saying Akida is involved, but curious about any similarities.

https://au.mouser.com/ProductDetail/Maxim-Integrated/MAX78000EXG+?qs=yqaQSyyJnNigS5t/Kz0nhQ==

MAX78000 Artificial Intelligence Microcontroller with Ultra-Low-Power Convolutional Neural Network Accelerator

Artificial intelligence (AI) requires extreme computational horsepower, but Maxim is cutting the power cord from AI insights. The MAX78000 is a new breed of AI microcontroller built to enable neural networks to execute at ultra-low power and live at the edge.

www.maximintegrated.com

In my reply, I noted that Maxim referred to 1-, 2-, 4-, and 8-bit weights (they don't mention how many bits in the activations).

#19,425
While Maxim don't refer to spikes, they do refer to 1-bit weights. They also refer to 8-bit weights in the same breath, so they have gone for additional accuracy as an option, cf. Akida's optional 4-bit weights/activations.

Maxim have several NN patents, mainly directed to CNNs using the now-fashionable in-memory compute, e.g.:

US2020110604A1 ENERGY-EFFICIENT MEMORY SYSTEMS AND METHODS
Priority: 20181003.
...

Now I'm not saying it's impossible for the Akida IP to stretch to 8-bits, but we have not been told that it does. Similar to the 4-bit Akida, an 8-bit Akida would have even greater accuracy than the initial 1-bit Akida at the expense of speed and power.

Maxim also dabbled in analog and Frankenstein (analog/digital) NNs.

An analog NN on its own would struggle with accuracy to compete with a multibit digital NN.

Interestingly, Maxim is now part of Analog Devices:

https://www.maximintegrated.com/en.html

Maxim have had an AI chip since 2020. It uses an Arm Cortex-M4.

https://www.maximintegrated.com/en/products/microcontrollers/artificial-intelligence.html

Artificial intelligence (AI) is opening up a whole new universe of possibilities, from virtual assistants to self-driving cars, automated factory equipment, and voice recognition in consumer devices. But the computational horsepower to enable these possibilities is extreme, and requires expensive, power-hungry, and big processors. In embedded devices, this functionality isn't really available–embedded microcontrollers are too slow to effectively process images and make decisions in real time.

Enter Maxim's new line of Artificial Intelligence microcontrollers. They run AI inferences hundreds of times faster, and at lower energy, than other embedded solutions. Our built-in neural network hardware accelerator practically eliminates the energy spent on audio and image AI inferences. Now small machines like thermostats, smart watches, and cameras can deliver the promise of AI—embedded devices can see and hear like never before.

Get started today with the MAX78000FTHR for only $25.

https://datasheets.maximintegrated.com/en/ds/MAX78000.pdf

Artificial intelligence (AI) requires extreme computational horsepower, but Maxim is cutting the power cord from AI insights. The MAX78000 is a new breed of AI microcontroller built to enable neural networks to execute at ultra-low power and live at the edge of the IoT. This product combines the most energy-efficient AI processing with Maxim's proven ultra-low power microcontrollers. Our hardware-based convolutional neural network (CNN) accelerator enables battery-powered applications to execute AI inferences while spending only microjoules of energy. The MAX78000 is an advanced system-on-chip featuring an Arm® Cortex®-M4 with FPU CPU for efficient system control with an ultra-low-power deep neural network accelerator. The CNN engine has a weight storage memory of 442KB, and can support 1-, 2-, 4-, and 8-bit weights (supporting networks of up to 3.5 million weights). The CNN weight memory is SRAM-based, so AI network updates can be made on the fly. The CNN engine also has 512KB of data memory. The CNN architecture is highly flexible, allowing networks to be trained in conventional toolsets like PyTorch® and TensorFlow®, then converted for execution on the MAX78000 using tools provided by Maxim. In addition to the memory in the CNN engine, the MAX78000 has large on-chip system memory for the microcontroller core, with 512KB flash and up to 128KB SRAM. Multiple high-speed and low-power communications interfaces are supported, including I2S and a parallel camera interface (PCIF).

Neural Network Accelerator
• Highly Optimized for Deep Convolutional Neural Networks
• 442k 8-Bit Weight Capacity with 1-, 2-, 4-, 8-Bit Weights
• Programmable Input Image Size up to 1024 x 1024 pixels
• Programmable Network Depth up to 64 Layers
• Programmable per Layer Network Channel Widths up to 1024 Channels
• 1 and 2 Dimensional Convolution Processing
• Streaming Mode
• Flexibility to Support Other Network Types, Including MLP and Recurrent Neural Networks
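A back-of-envelope note on why the "Streaming Mode" bullet above matters (my arithmetic, not the datasheet's): the CNN engine's 512KB data memory cannot hold even a single-channel frame at the maximum 1024 x 1024 input size:

```python
# My back-of-envelope, not from the datasheet: the CNN engine has 512KB of
# data memory, but a single-channel 8-bit image at the maximum supported
# input size of 1024 x 1024 already exceeds it.
DATA_MEMORY_BYTES = 512 * 1024

image_bytes = 1024 * 1024 * 1                # 1024x1024 pixels, 1 channel, 8-bit
print(image_bytes > DATA_MEMORY_BYTES)       # True: full frame can't be resident
# Hence a streaming mode: large inputs are fed through the accelerator
# in pieces rather than held in data memory all at once.
```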






This is a one-to-one fit for the first highlighted paragraph.
The MCU with an embedded CNN accelerator is a system on chip combining an Arm Cortex-M4 with a RISC-V core that can execute application and control code as well as drive the CNN accelerator. The CNN engine has a weight storage memory of 442KB and can support 1-, 2-, 4-, and 8-bit weights (supporting networks of up to 3.5 million weights). On-the-fly AI network updates are supported by the SRAM-based CNN weight memory structure. The architecture is flexible and allows CNNs to be trained using conventional toolsets such as PyTorch and TensorFlow.


 
  • Like
  • Love
  • Fire
Reactions: 20 users

jtardif999

Regular
Well said, but I truly think NASA will develop something completely new out of the original using Akida
And then... eventually whatever it is they develop will be sold en masse, e.g. Polaroid tech used in lunar helmet sun visors wound up in sunglasses.
 
  • Like
  • Love
Reactions: 10 users

Sirod69

bavarian girl ;-)

CDAO, Air Force Test AI-Enabled ‘Smart Sensor’ Autonomous Tech on UAS; Lt. Gen. Michael Groen Quoted


Perhaps the next article is something for our specialists?

 
  • Like
  • Fire
Reactions: 13 users

Krustor

Regular
Can someone help me please with the following topic from our German "Börsennews" community:

"Die einzigsten Unternehmen, die eine PM über den Gebrauch von Brainchip-IP rausgeben können, sind zurzeit Renesas oder MegaChips. Ein IP-Deal ist ein zwingendes ASX-Announcement. Es besteht die Möglichkeit, wegen einer Geheimhaltungsvereinbarung das Announcement zurückzuhalten. Das bedeutet aber auch, daß der IP-Nehmer wegen der Vereinbarung nicht veröffentlichen darf. Sollte ein IP-Gebrauch ohne vorheriges ASX-Announcement veröffentlicht werden, würde das große Probleme für Brainchip bedeuten. Die Maßnahmen reichen von Handelsaussetzung über mehrere Wochen bis zum Delisting. Entsprechend gibt es nur entweder ASX-Announcement oder keine News, mehr Spielraum gibt es einfach nicht. Partnerschaften für gemeinsame Entwicklungen wie Edge Impulse oder Nviso fallen nicht unter diese Regeln, jeder kommerzielle Gebrauch der IP aber schon."

I will do the Google translation for you:

"The only companies that can PM about the use of Brainchip IP are currently Renesas or MegaChips. An IP deal is a mandatory ASX announcement. It is possible to withhold the announcement due to a non-disclosure agreement. But that also means that the IP recipient is not allowed to publish because of the agreement. Should an IP usage be published without prior ASX announcement, it would mean big problems for Brainchip. The measures range from suspension of trading over several weeks to delisting. Accordingly, there is only either ASX announcement or no news, there is simply no more leeway. Partnerships for joint developments such as Edge Impulse or Nviso are not covered by these rules, but any commercial use of the IP is."

I tried to understand the ASX regulations about this; unfortunately my business English doesn't seem to be enough...

Can someone help and confirm this? The German community seems to take it as a fact...
 
  • Like
Reactions: 13 users

VictorG

Member
[Quoted in full: Krustor's question above about ASX announcement rules.]
I doubt the ASX understand their own rules. I can only guess that any announcement must have a material impact on BRN's revenue and/or obligations. Case in point: when BRN announced the MegaChips licence, they were made to amend the announcement to include the revenue from the licence.
 
  • Like
  • Fire
  • Haha
Reactions: 13 users

stuart888

Regular
Today's lack of volume in Brainchip shares traded has me wondering about something.

I am just blown away by the effectiveness of the Integrous Communications investor relations advisory campaign to, ah... well... let's just quote the press release from a year ago (July 14):

"...to lead Brainchip's financial communications and investor relations initiatives". And further, "....we have seen a tremendous rise in opportunities on the financial side of our business that requires further attention. By retaining an investor relations firm like Integrous we are able to concentrate on our core competencies....." And further, ".....We anticipate our work with Integrous will help attract additional institutional investment while maximizing the returns for our current shareholders." The above is quoting or paraphrasing Ken Scarince, Brainchip, CFO.

When this was released I was pleased and hoped like everyone that it would result in more tangible interest in owning the equity. I think it is quite reasonable to give a relationship such as this a year to develop and bear fruit, or produce tangible results. So I bided my time, and waited. And waited.

Unless I have missed announcements, I can't remember what, if anything, has come from this relationship. I thought Integrous may have been involved in the move to get ADR shares (BCHPY) for institutional trading, but that action is non-existent. Clearly, institutional interest in the U.S. has been insignificant; maybe non-existent is more accurate, since virtually no ADR shares trade. And the move to eliminate a foreign trading fee of $50 per trade for BRCHF shares by gaining DTC eligibility went away for a couple of months but is now back, at least on my trading platform (Fidelity). So, if Integrous was involved with the DTC thing, it has since fizzled out. That fee suppresses trading. And even though there does not appear to be any fee associated with the ADR shares, it doesn't seem to matter, as the market yawns and says, "meh".

Is Brainchip still using Integrous? If so, why? My guess is they have gone their separate ways, and announcing that breakup doesn't get the same rosy press release with all the optimism as when they started working with Brainchip.

To make my point, today BRCHF traded 529 shares, and BCHPY traded 12 ADRs. These volumes are sad. Some may say, well, it is July 4th weekend, summer, low interest. Sorry, that doesn't explain this pathetic U.S. interest in a tech company that should be doing better, revenues notwithstanding. And it misses the point to say, yeah, but BRCHF went up 3.5 cents.

Does anyone know anything about Integrous? I really hope Brainchip and Integrous have gone their separate ways. Regards, dippY
So glad, DippY22, that you again highlighted the problematic extra $50 fee to buy BRCHF shares here in the United States via Fidelity. This keeps me from adding/buying more shares. Since USA investors can buy such a vast range of stocks with no fee at all, it really makes it a barrier to all modest-sized investments.

I was adding in $500 increments each week, but cannot do that with the $50 buying fee (foreign trading fee). Luckily, I bought a lot during the no-fee times, but I have not added any shares due to the extra $50 buy fee. Anyone that has connections and can make this fee disappear like last year, please help!

Best Regards, Stuart
(Side note: I always bump the text size up one notch, as reading this forum is tiny on my Amazon Fire 10" tablet. The default is 15; I pick the 18 size on messages I post.)
 
  • Like
  • Fire
  • Sad
Reactions: 19 users

Labsy

Regular
[Quoted in full: Diogenese's "What's a Neural microcontroller?" post above.]
"One-thenth the cost and one-twelfth the power consumption"
Hows that for dangling a carrot in front of the tech world....
 
  • Like
  • Love
  • Fire
Reactions: 23 users

stuart888

Regular
Sure seems like Park Assist functionality in an EV is a great use case for the spiking Akida low-power solution. A person driving the EV does not park often, so Akida is not wasting energy while the car is driving down the road. The TAM (Total Addressable Market) for ADAS (Advanced Driver Assistance Systems) is crazy stuff.

Certainly, Brainchip will get a nice foothold for long-term sustained growth in EV ADAS. The use cases are massive, including making braking decisions without the cloud (tunnels, no signal, etc.). I like this ADAS space for Brainchip; the Mercedes blessing just makes me more curious about all the stuff going on that we don't know about. Park Assist by Akida seems like a marriage to me.


 
  • Like
  • Love
  • Fire
Reactions: 32 users
[Quoted in full: Krustor's question above about ASX announcement rules.]
Hi Krustor

As @VictorG posted, a company must publish on the ASX full details of any event that a reasonable person would expect to have a material effect on the share price.

The event can be positive or negative. For example, if the company is making a product to sell, the successful or unsuccessful production of the product could be a material event and require an announcement on the ASX.

The successful production of AKD1000 is a good example.

When a company signs a contract or enters a partnership, it has to decide if this contract or partnership is material, and the guidance from the ASX is that if it does not have a dollar value that can be calculated with certainty, it does not meet the definition of being material and cannot be announced on the ASX.

So MegaChips was able to be announced because they could calculate the value of the licence fee. If there was no upfront licence fee and just the hope of future royalties Brainchip would not have been allowed to announce it on the ASX.

The partnership with ARM could not be announced on the ASX because neither Brainchip nor ARM could put a dollar amount on what sales this partnership was likely to generate in the future.

The real problem for companies is that they have to decide if the event is price sensitive and whether to publish or not publish on the ASX.

The ASX only decides if the company did the right thing afterwards. If the ASX decides the company made the wrong choice they can penalise and suspend the company and Directors.

It is like having a highway where there are no speed signs to tell you what the speed limit is and it is up to you to guess what speed you are allowed to travel.

You are not allowed to ask the police on duty what speed is legal but if you make the wrong choice the police jump out and take away your car and fine you for travelling too fast or too slow.

This is how the ASX works.

Brainchip has the intention of listing on the Nasdaq one day, and as their good character as a company is a consideration, they are taking a very cautious approach to the ASX Rules so that they do not have adverse notations from the ASX to explain to the Nasdaq.

The Non Disclosure Agreements are a separate issue as these are between Brainchip and third parties that have nothing to do with the ASX.

If Brainchip enters a material contract with a company, even if they have a non-disclosure agreement, this material agreement has to be announced in accordance with the ASX Rules.

The non-disclosure agreement does not override the ASX Rules.

I hope this helps your understanding.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 57 users

mrgds

Regular
[Quoted in full: FF's explanation of the ASX rules above.]
Good morning FF,
I would like to ask you, with regard to the "NASA Phase 2" post you shared yesterday: you did say you had sent the document to BRN; I'm wondering if you have heard anything back from BRN... with thanks if you are able to share anything.

Akida Ballista
 
  • Like
  • Love
Reactions: 13 users

alwaysgreen

Top 20
It is like having a highway where there are no speed signs to tell you what the speed limit is and it is up to you to guess what speed you are allowed to travel.

You are not allowed to ask the police on duty what speed is legal but if you make the wrong choice the police jump out and take away your car and fine you for travelling too fast or too slow.

This is how the ASX works.
What a spot-on explanation of the ridiculous workings of the ASX! :ROFLMAO:

It's why companies need to waste money and resources on compliance officers just to make sure they are toeing the ASX line. What a stressful occupation that would be!
 
  • Like
  • Love
  • Haha
Reactions: 12 users
[Quoted: mrgds's question above about the "NASA Phase 2" document.]
Just checked my emails and nothing at this point in time. I sent it to Perth and to the US. The time difference can come into play but I suspect it has taken them by surprise. 😂

Will report when and if I hear anything. They may choose to ignore me for a little while. 😎

Regards
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 41 users
We have long been looking for official confirmation of a successful NASA Vorago Brainchip Phase 1 project.
Well I have found it and it is copied with a link below.

Note:

1. Vorago is Silicon Space Technology Corporation for newer shareholders.
2. Vorago used the term CNN RNN to describe AKIDA not SCNN.

As you read through the extracts you will note the following:

A. Vorago met all of the Phase 1 objectives
B. Vorago has five letters in support of continuing to the next Phase 2; importantly/interestingly, two of these letters offer funding for Phase 2 independent of NASA (I am personally thinking large aerospace companies jumping on board)
C. Vorago has modelling which shows AKIDA will allow NASA to have autonomous Rovers that achieve speeds of up to 20 km/h, compared with a present speed of 4 centimetres per second.

There is in my opinion no other company on this planet with technology that can compete with the creation of Peter van der Made and Anil Mankar.


The Original Phase 1:
"The ultimate goal of this project is to create a radiation-hardened Neural Network suitable for Ede use. Neural Networks operating at the Edge will need to perform Continuous Learning and Few-shot/One-shot Learning with very low energy requirements, as will NN operation. Spiking Neural Networks (SNNs) provide the architectural framework to enable Edge operation and Continuous Learning. SNNs are event-driven and represent events as a spike or a train of spikes. Because of the sparsity of their data representation, the amount of processing Neural Networks need to do for the same stimulus can be significantly less than conventional Convolutional Neural Networks (CNNs), much like a human brain. To function in Space and in other extreme Edge environments, Neural Networks, including SNNs, must be made rad-hard.Brainchip’s Akida Event Domain Neural Processor (www.brainchipinc.com) offers native support for SNNs. Brainchip has been able to drive power consumption down to about 3 pJ per synaptic operation in their 28nm Si implementation. The Akida Development Environment (ADE) uses industry-standard development tools Tensorflow and Keras to allow easy simulation of its IP.Phase I is the first step towards creating radiation-hardened Edge AI capability. We plan to use the Akida Neural Processor architecture and, in Phase I, will: Understand the operation of Brainchip’s IP Understand 28nm instantiation of that IP (Akida) Evaluate radiation vulnerability of different parts of the IP through the Akida Development Environment Define architecture of target IC Define how HARDSIL® will be used to harden each chosen IP block Choose a target CMOS node (likely 28nm) and create a plan to design and fabricate the IC in that node, including defining the HARDSIL® process modules for this baseline process Define the radiation testing plan to establish the radiation robustness of the ICSuccessfully accomplishing these objectives:Establishes the feasibility of creating a useful, radiation-hardened product IC with embedded NPU and already-existing supporting software ecosystem to allow rapid adoption and productive use within NASA and the Space community.\n\n\n\n\t Creates the basis for an executable Phase II proposal and path towards fabrication of the processor."

CNN RNN Processor
FIRM: SILICON SPACE TECHNOLOGY CORPORATION
PI: Jim Carlquist
Proposal #: H6.22-4509
NON-PROPRIETARY DATA
Objectives:

The goal of this project is the creation of a radiation-hardened Spiking Neural Network (SNN) SoC based on the BrainChip Akida Neuron Fabric IP. Akida is a member of a small set of existing SNN architectures structured to more closely emulate computation in a human brain. The rationale for using a Spiking Neural Network (SNN) for edge AI computing is its efficiency. The neuromorphic approach used in the Akida architecture takes fewer MACs per operation since it creates and exploits sparsity of both weights and activations through its event-based model. In addition, Akida reduces memory consumption by quantizing and compressing network parameters. This also helps to reduce power consumption and die size while maintaining performance.
Spiking Neural Network Block Diagram


ACCOMPLISHMENTS

Notable Deliverables Provided:
• Design and Manufacturing Plans
• Radiation Testing Plan (included in Final report)
• Technical final report


Key Milestones Met
• Understand Akida Architecture
• Understand 28nm Implementation
• Evaluate Radiation Vulnerability of the IP Through the Akida Development Environment
• Define Architecture of Target IC
• Define how HARDSIL® will be used in Target IC
• Create Design and Manufacturing Plans
• Define the Radiation Testing Plan to Establish the Radiation Robustness of the IC


FUTURE PLANNED DEVELOPMENTS

Planned Post-Phase II Partners


We received five Letters of Support for this project.

Two of these will provide capital infusion to keep the project going, one offers aid in radiation testing, and the final two are for use in future space flights.

Planned/Possible Mission Infusion

NASA is keen to increase the performance of its autonomous rovers to allow for greater speeds.

Current routing methodologies limit speeds to 4 cm/sec, while NASA has a goal of having autonomous rovers traverse at speeds up to 20 km/hr.


Early calculations show the potential for this device to process several of the required neural network algorithms fast enough to meet this goal.
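Worked out, that speed goal is a jump of roughly two orders of magnitude (my arithmetic):

```python
# The quoted speed goal, worked out (my arithmetic):
current_cm_per_s = 4.0                       # 4 cm/sec today
goal_cm_per_s = 20 * 100_000 / 3600          # 20 km/hr in cm/sec ~= 555.6

print(f"goal ~= {goal_cm_per_s:.0f} cm/s, about {goal_cm_per_s / current_cm_per_s:.0f}x faster")
# -> roughly 139x the current autonomous traverse speed.
```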

Planned/Possible Mission Commercialization

A detailed plan is included in the Phase I final submittal to commercialize a RADHARD, flight-ready QML SNN SoC to be available for NASA and commercial use.

This plan will include a Phase II plus extensions to reach the commercialization goals we are seeking.

CONTRACT (CENTER): 80NSSC20C0365 (ARC)
SUBTOPIC: H6.22 Deep Neural Net and Neuromorphic Processors for In-Space Autonomy and Cognition
SOLICITATION-PHASE: SBIR 2020-I
TA: 4.5.0 Autonomy


My opinion only DYOR
FF

AKIDA BALLISTA
For those who might not read here over the weekend and missed it :

https://techport.nasa.gov/file/141775

We live in “exciting times” where Ai at the Edge will become ubiquitous.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 53 users

TECH

Regular
I'm not aware if this has been posted already, but there are certainly a rather large number of semiconductor/tech companies partnering up as we move into the next phase of the AI space.

We know of both these companies, and are linked with them... see the attached press release out of India overnight... Tech

 
  • Like
  • Fire
  • Love
Reactions: 57 users

stuart888

Regular
from @Bravo 's post:

That data all comes from a variety of sensors around the vehicle, a few of which will be new to future S-Class vehicles that have been ordered with the new Drive Pilot system. While the company wouldn’t disclose specific costs of the system, representatives did say that it will cost as much as their top-of-the-line Burmester audio system. That audio system on the S-Class is a $6,700 option alone, yet requires the addition of a separate $3,800 package, bringing the rough total to around $10,500. That’s getting close to the cost of Tesla’s “Full-Self Driving” system, which currently is a $12,000 option.

The conditional Level 3 Drive Pilot system builds on the hardware and software used by Mercedes’ Level 2 ADAS system known as Distronic. It adds a handful of additional advanced sensors as well as software to support the features. Key hardware systems that will be added to future S-Class vehicles configured with the Drive Pilot upgrade include an advanced lidar system developed by Valeo SA, a wetness sensor in the wheel well to determine moisture on the road, rear-facing cameras and microphones to detect emergency vehicles, and a special antenna array located at the rear of the sunroof to help with precise GPS location.

The Valeo lidar system is more advanced than what is on the current generation of S-Class, in that it scans at a rate of 25 times per second at a range of 200 meters (approximately 650 feet). This is the second generation of the system, according to the Valeo spokesperson at the event. The system sends out lasers, which then create points in space to help the AI classify the type of object in and around the path of the vehicle, whether it's human, animal, vehicle, tree, or building. From there, the AI uses data from the other sensors around the car to determine more than 400 different projected paths for itself and the potential paths for the vehicles, pedestrians, and motorcyclists around it, and chooses the safest route through.

We previously discussed the "advanced lidar system developed by Valeo" and found a patent for a LiDAR zoom feature using an NN, but there was no indication that Valeo developed the NN. We are friends with Valeo, and we have a sweet spot for LiDAR. So we've been drawing a reasonable inference in the absence of express evidence.

US2021166090A1 DRIVING ASSISTANCE FOR THE LONGITUDINAL AND/OR LATERAL CONTROL OF A MOTOR VEHICLE






The invention relates to a driving assistance system (3) for the longitudinal and/or lateral control of a motor vehicle, comprising an image processing device (31a) trained beforehand using a learning algorithm and configured so as to generate, at output, a control instruction (Scom1) for the motor vehicle from an image (Im1) provided at input and captured by an on-board digital camera (2); a digital image processing module (32) configured so as to provide at least one additional image (Im2) at input of an additional device (31b), identical to the device (31a), for parallel processing of the image (Im1) captured by the camera (2) and said at least one additional image (Im2), such that said additional device (31b) generates at least one additional control instruction (Scom2) for the motor vehicle, said additional image (Im2) resulting from at least one geometric and/or radiometric transformation performed on said captured image (Im1), and a digital fusion module (33) configured so as to generate a resultant control instruction (Scom) on the basis of said control instruction (Scom1) and of said a
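A toy sketch of the abstract's idea as I read it: run the same trained network on the captured image and on a transformed copy, then fuse the two control instructions. Everything here is illustrative; the abstract does not disclose the network, the transform, or the fusion rule.

```python
import numpy as np

# Toy sketch of the patent abstract's idea: one network evaluates both the
# captured image and a transformed copy; the two commands are then fused.
# Purely illustrative; not the patent's actual implementation.

def control_net(image):                       # stand-in for the trained model
    return float(image.mean())                # pretend scalar control command

def transform(image):                         # a geometric/radiometric variant
    return np.fliplr(image) * 0.9             # e.g. mirror plus gain change

def fused_command(captured):
    scom1 = control_net(captured)             # Scom1 from the captured image
    scom2 = control_net(transform(captured))  # Scom2 from the additional image
    return 0.5 * (scom1 + scom2)              # simple average as the fusion step

frame = np.random.rand(64, 64)                # stand-in camera frame
print(fused_command(frame))
```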
Sensor fusion seems like another way of saying our ecosystem of partners. I started thinking about the Park Assist functionality. It is going to use the same camera, lidar, and sensors, but might not be active until the car is in reverse or going less than, say, 3 mph. Park Assist by Akida would be basically dormant energy-wise in an EV, until the sensor fusion logic shifts priority to Park Assist by Akida.

The point is the sensor fusion: that data only needs to get sent to whatever functions do the work given the driving situation. The rain-alert auto windshield wiper stuff is the same, perfect for the spiking neural network, low-power EV brain tasks.
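A toy sketch of that gating idea, purely as illustrative speculation (not any vendor's architecture): route sensor frames only to the function the current driving state needs, so the rest stays idle.

```python
# Toy sketch of state-based gating: only the function the current driving
# state needs receives the sensor frame. Illustrative speculation only.

def park_assist(frame):
    return "park-assist inference"

def highway_pilot(frame):
    return "highway ADAS inference"

ROUTES = {"reverse": park_assist, "slow": park_assist, "driving": highway_pilot}

def dispatch(state, frame):
    return ROUTES[state](frame)               # unused functions do no work

print(dispatch("reverse", frame=None))
```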

Use cases are so extensive for the low-battery-usage EV segment. Same for drones, toys, and smart-fabric medical devices. Go Brainchip team, employees, you can win this industry!
 
  • Like
  • Fire
  • Love
Reactions: 32 users
Was gonna say I'm a bit surprised there was no non-sensitive Ann this morn, but then again I'm not, haha.

Would've thought, given the original Vorago Ann outlined the use case (e.g. NASA Phase I etc.), that the outcome of said use would at least be advised via a non-sensitive Ann, especially as FF already found it in the public domain.

Imo it's kinda like a biotech saying, oh... we're partnering in a clinical trial, no commercial outcome yet, but if you want results, go find them on the clinical trials database.
 
  • Like
  • Love
Reactions: 15 users

Boab

I wish I could paint like Vincent
Big discrepancy between buyers and sellers today.

 
  • Like
  • Love
Reactions: 23 users