BRN Discussion Ongoing

Andy38

The hope of potential generational wealth is real
He is a troll, don't feed him!
He’s apparently a founding member of this forum? Can’t ignore this?
 
  • Like
  • Love
Reactions: 3 users

JB49

Regular
Hi MrRomper,

The US specification is not available on Espacenet just yet, but the EU version is.

The specification draws a distinction between "brain inspired hardware" and "SNNs which run on the neuromorphic hardware".

EP4243011A1 EFFICIENT SPEECH TO SPIKES CONVERSION PIPELINE FOR A SPIKING NEURAL NETWORK 20220307

View attachment 44763

[0005] Brain-inspired neuromorphic hardware and Spiking Neural Network models (SNNs) which run on the neuromorphic hardware can provide desirable benefits that make them well-suited for speech recognition for an edge device.

[0026] The audio to spikes conversion pipeline 122 is configured to convert audio data into spikes for processing by the neuromorphic processor 124. A series of spikes output by the audio to spikes conversion pipeline 122 for audio data can be referred to as speech input that is provided as input to a SNN 126 executed by the neuromorphic processor 124. The audio to spikes conversion pipeline 122 can include multiple processing stages to convert the audio data into the spikes.
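The multi-stage pipeline described in [0026] can be illustrated in a few lines. The sketch below is my own toy version, not BrainChip's actual pipeline: it frames the audio, pools the spectrum into a handful of bands, and emits a spike whenever a band's log energy rises by more than a threshold (delta modulation). All names, band counts and thresholds are illustrative assumptions.

```python
import numpy as np

def audio_to_spikes(audio, frame_len=256, hop=128, n_bands=8, threshold=0.5):
    """Toy audio-to-spikes conversion: framing -> band energies -> delta-threshold spikes."""
    # Split the waveform into overlapping frames.
    n_frames = 1 + (len(audio) - frame_len) // hop
    frames = np.stack([audio[i * hop : i * hop + frame_len] for i in range(n_frames)])

    # Crude filterbank: magnitude spectrum pooled into n_bands equal bands.
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    bands = spectra[:, : (spectra.shape[1] // n_bands) * n_bands]
    energies = np.log1p(bands.reshape(n_frames, n_bands, -1).sum(axis=2))

    # Delta modulation: a band spikes when its energy has risen by more than
    # `threshold` since that band's last spike.
    spikes = np.zeros_like(energies, dtype=np.uint8)
    last = energies[0].copy()
    for t in range(1, n_frames):
        fired = energies[t] - last > threshold
        spikes[t, fired] = 1
        last[fired] = energies[t, fired]
    return spikes  # shape (n_frames, n_bands): 0/1 events for the SNN

# A 1 kHz tone switching on half-way through produces a burst of spikes at onset.
sr = 8000
t = np.arange(sr) / sr
audio = np.where(t < 0.5, 0.0, np.sin(2 * np.pi * 1000 * t))
spikes = audio_to_spikes(audio)
print(spikes.sum())
```

The silent half of the signal generates no events at all, which is the whole point of event-based encoding: downstream compute is only spent when something changes.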

[0027] The neuromorphic processor 124 is configured to perform speech recognition tasks using a SNN 126. For example, the neuromorphic processor 124 can execute the SNN 126 to predict speech sounds represented by the spikes of the spike input output by the audio to spikes conversion pipeline 122. For example, the SNN 126 can process the spikes to generate speech recognition results that include predicted speech sounds, e.g., in the form of textual phonemes, triphones, full words, or other units of sound, corresponding to speech of the audio data.


However, it looks like learning is done off-chip.


[0030] One or both servers 110, 112 can include a hyperparameter adjuster 128, which can be implemented in hardware and/or software. In some implementations, the hyperparameter adjuster 128 can be part of the computing device 102. The hyperparameter adjuster 128 can evaluate SNNs and adjust hyperparameters of the audio to spikes conversion pipeline 122 based on the evaluation. Example hyperparameters of the audio to spikes conversion pipeline 122 are described with reference to FIG. 2 . For example, the hyperparameter adjuster 128 can perform hyperparameter sweeps using multiple sets of hyperparameter configurations and select a hyperparameter configuration based on the results of the sweeps. Each sweep can include applying a particular hyperparameter configuration to the audio to spikes conversion pipeline 122, training multiple SNNs using training data and spikes generated by the audio to spikes conversion pipeline 122 based on the training data, testing each of the multiple SNNs using testing data, measuring the accuracy of each SNN based on the test. The training data and/or the testing data can include labeled audio data with labels indicating predicted speech sounds, e.g., words or phonemes, corresponding to the audio data.

[0031] The hyperparameter adjuster 128 can adjust the hyperparameters of the audio to spikes conversion pipeline 122 based on the evaluation. For example, the hyperparameter adjuster 128 can identify the hyperparameter configuration for which the most accurate SNN was trained during the sweeps and apply that hyperparameter configuration to the audio to spikes conversion pipeline 122.


As an aside, remember Sean's podcast with Jean-Luc Chatelain from Accenture back in January:


Episode 23 – BrainChip Talks AI Innovation with Accenture’s Managing Director and Global Chief Technology Officer of Accenture Applied Intelligence Jean-Luc Chatelain - BrainChip


At the 12-minute mark, Sean asks an innocent question about transformers. If only he could have seen what was coming!
Accenture has a page on its website all about neuromorphic computing, but it only refers to Intel Loihi throughout the document.
 
  • Like
Reactions: 4 users

IloveLamp

Top 20
Screenshot_20230916_224436_LinkedIn.jpg
Screenshot_20230916_224458_LinkedIn.jpg
 
  • Like
  • Fire
Reactions: 14 users

Glen

Regular
  • Like
  • Fire
  • Love
Reactions: 17 users

MDhere

Regular
Thanks Sladius Maximus! I might ask Sean for a raise at the next AGM. 😝
isn't my cocktail enough?? 🤣
 
  • Haha
  • Like
Reactions: 6 users

IloveLamp

Top 20
I find this a little interesting........

 
  • Like
  • Love
  • Wow
Reactions: 17 users

MDhere

Regular
I find this a little interesting........

Yes, very interesting, great find ❤
On 16 May 2022 (post #12184), 1 June 2022 (post #15184) and most recently 16 August 2023 (post #61744), I speculated, as we all do, about an offer by SoftBank. But it makes sense for SoftBank to go for BRN, and now they are in a position to after the IPO of Arm. Again, just speculation, but I remember that at the AGM this time the suggestion of an offer wasn't brushed over, and to my recollection it was being given thought. BrainChip management will make the right choice, I believe, and it will in my opinion be favourable to all current shareholders. Favourable to me means not less than $6; I will be happy anywhere between $6 and $100. Time will tell if I'm right. We would be in good hands with SoftBank, I believe, for the future. IMO.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 11 users

IloveLamp

Top 20
Screenshot_20230917_075211_LinkedIn.jpg
Screenshot_20230917_075151_LinkedIn.jpg
 
  • Like
  • Fire
  • Wow
Reactions: 32 users
Can Nvidia Survive the 4th Industrial Revolution?
by Fact Finder


Though Nvidia is riding high at the moment, all indicators are that it has positioned itself on the wrong side of technology history.

While Nvidia has been compressing models to stave off the end of Moore’s Law, its continued preoccupation with its von Neumann market dominance has seen it embrace the false dawn offered to it by Large Language Models and the cult of ChatGPT.

The fragile nature of Nvidia’s technology future has been exposed in the last week by a small Australian technology company that has been stealthily developing an entirely new, some have said science-fiction, solution to the energy resource issue exposed by the power and cost involved in training and running the Large Language Models represented by ChatGPT.

The World has been fantasising about what is called Edge Computing for over a decade. The principle underpinning Edge Computing is actually very simple and can best be understood by what might be considered a strange example.

I am sure you have heard of terms like Food Miles, Buy Local, Eat Local and Grow Your Own as ways to decarbonise and save the planet. The simple, indisputable proposition is that if you reduce the distance between you and your food source's point of production, the reduction in transport will reduce the energy consumed in putting the food on your plate.

Putting a bunch of flowers cut from your own garden on the sideboard is infinitely more fuel efficient than trucking, flying and driving fresh-picked flowers from Europe around the globe to you in Australia.

Cutting asparagus in your kitchen garden is infinitely more efficient than buying asparagus from your local supermarket that has been cut in Peru and transported to you in Australia.

Now, in the above examples I have chosen two products that require refrigeration to keep them fresh after picking, to ensure they arrive at your home still usable, and which as a result require transport by jet airliner.

Suffice to say, it is immediately obvious that if you are trying to reduce carbon and cost, processing your flowers and asparagus at home wins hands down: zero carbon versus tonnes is a no-brainer.

Now I know there are practical limitations making this solution difficult for those of us with black thumbs or living in home units to embrace. But that is an argument for another day.

The point is that this is what Edge Computing is all about. It is about reducing compute miles and in so doing cutting dramatically the cost of doing compute and carbon emissions.

For example, take a Smart Doorbell. Currently a Smart Doorbell needs to be constantly connected via your home's wireless network to carry out its function of identifying and alerting you to the presence of someone at the door.

In lay terms, 24/7 it sits there constantly processing camera frames showing the brick wall next to your front door, sending image after image of the brick wall to the cloud and receiving back message after message that no one is at the front door. You can reduce the power by slowing down the number of photos/frames it takes every second, but if the gap between frames becomes too great, someone can come and go in that gap and avoid detection. So this method has built-in limitations even when working to design, and performs worse still when bandwidth is congested or the connection breaks down.

Enter the neuromorphic Edge Computing revolution.

Edge Computing is, as I said, about reducing the compute miles. By placing the compute as close as possible to the Smart Doorbell, if not right up against it, you immediately reduce the distance between the camera/sensor and the compute/intelligence. This has the advantage of reducing power consumption, reducing latency (the time it takes to send a message back and forth between the camera/sensor and the data centre) and preventing photos of a blank brick wall congesting the bandwidth and affecting your ability to stream Netflix or Sky Sport.
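The doorbell example can be made concrete: an edge device only needs to transmit when something actually changes in front of the camera. A toy sketch of such an edge-side gate follows; the frame size, thresholds and the "visitor" scenario are entirely my own illustration.

```python
import numpy as np

def should_transmit(prev_frame, frame, pixel_thresh=25, changed_frac=0.01):
    """Edge-side gate: send a frame to the cloud only if enough pixels changed.
    The 'blank brick wall' case changes almost nothing, so nothing is sent."""
    changed = np.abs(frame.astype(int) - prev_frame.astype(int)) > pixel_thresh
    return changed.mean() > changed_frac

rng = np.random.default_rng(0)
wall = rng.integers(100, 110, size=(64, 64), dtype=np.uint8)  # static brick wall

sent = 0
prev = wall
for t in range(100):
    frame = wall.copy()
    if t == 50:                      # a visitor appears in frame 50
        frame[16:48, 16:48] = 200
    if should_transmit(prev, frame):
        sent += 1                    # only this frame goes over the network
    prev = frame
# prints 2: of 100 frames, only the visitor's arrival and departure are transmitted
print(sent)
```

Ninety-eight of the hundred frames never leave the device, which is the bandwidth and power saving the article describes; a neuromorphic implementation pushes the same idea down to the sensor and pixel level.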

Everywhere Edge Computing is being spoken about, and Nvidia, as the dominant player in the computing space, is calling out about its own Edge Computing solutions.

There is probably not one person on the planet with any interest in computing who has not heard of Nvidia or its Nvidia Jetson range of edge computing solutions.

Indeed, the Nvidia Jetson range is a leader in this space across the globe. Its Jetson solutions are to be found everywhere, but for how much longer can Jetson dominate for Nvidia when it is hamstrung by old thinking in a World that is transitioning towards the Fourth Industrial Revolution?

So let's take a quick look at what Nvidia publishes about the Nvidia Jetson AGX Orin series, the Jetson Orin NX series and the Jetson Orin Nano series by reference to the advertised performance figures.

Power ranges from 7 watts to 75 watts:

“Power 15W – 60W | 15W – 75W | 15W – 40W | 10W – 25W | 10W – 20W | 7W – 15W | 7W – 10W”

TOPS ranges from 20 TOPS to 275 TOPS:

“AI Performance 275 TOPS | 248 TOPS | 200 TOPS | 100 TOPS | 70 TOPS | 40 TOPS | 20 TOPS”
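Put the advertised figures side by side and the point becomes clearer: even at its most efficient, the Orin lineup sits at a handful of TOPS per watt, with absolute budgets of tens of watts. A quick back-of-envelope from the numbers quoted above; note that pairing each module name with a power/TOPS column is my own reading of Nvidia's published table, so treat it as illustrative.

```python
# Advertised peaks per Jetson Orin module, as quoted above: (TOPS, max watts).
# The name-to-column pairing is an assumption for illustration.
modules = {
    "AGX Orin 64GB": (275, 60), "AGX Orin Industrial": (248, 75),
    "AGX Orin 32GB": (200, 40), "Orin NX 16GB": (100, 25),
    "Orin NX 8GB": (70, 20), "Orin Nano 8GB": (40, 15), "Orin Nano 4GB": (20, 10),
}
for name, (tops, watts) in modules.items():
    print(f"{name}: {tops / watts:.1f} TOPS/W at up to {watts} W")
```

Whatever the exact pairing, every row lands between roughly 2 and 5 TOPS/W, and none gets below a 10 W ceiling; that absolute floor is the problem for always-on edge devices.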


These numbers would certainly seem impressive to those who were feeding punch cards into the first IBM mainframe computer, even before you add on the power required to externally cool Jetson with fans or some other form of external cooling.

As impressive as these numbers are, they clearly do not offer a power budget that can reasonably be embraced by those looking for an Edge Computing solution. Nvidia has, to its credit, recognised this and in consequence introduced the Jetson TX2 series, boasting that the Jetson TX2i, Jetson TX2, Jetson TX2 4GB and Jetson TX2 NX come with AI performance of 1.26 TFLOPS to 1.33 TFLOPS. However, the power budget ranges from 7.5 watts to 20 watts, and on top of these numbers you need to allow for external cooling. You might have noticed that while Nvidia has reduced the form factor, the power required remains in the multi-watt region. (1)

To address these failings, a new form of computing referred to as spiking neural network compute is being championed by Intel and IBM, and over the last decade they have reached the point of proving out in research chips the huge benefits to be had from embracing this new style of compute, not the least of which is a massively reduced power budget. The research at Intel and IBM goes on apace.

Enter stage left this little-known Australian company, listed on the Australian Stock Exchange since 2015. This tiny company with fewer than 100 employees has quietly, some would say stealthily, gone about its business, beating Intel and IBM to the punch by launching its first commercial spiking neural network engineering chip in 2020. It is shortly to release its second-generation technology, which it reports on its website as being capable of the following performance figures across three models or iterations of this technology advancement:

ITERATION ONE:

Max
Efficiency


Ideal for always-on, energy-sipping Sensor Applications:

  • Vibration Detection
  • Anomaly Detection
  • Keyword Spotting
  • Sensor Fusion
  • Low-Res Presence Detection
  • Gesture Detection
Extremely Efficient
@Sensor Inference


Either Standalone or with Min-spec MCU.

Configurable to ideal fit:

  • 1 – 2 nodes (4 NPE/node)
  • Anomaly Detection
  • Keyword Spotting
Expected implementations:

  • 50 MHz – 200 MHz
  • Up to 100 GOPS
Additional Benefits

Eliminates need for CPU intervention

Fully accelerates most feed-forward networks

  • Optional skip connection and TENNs support for more complex networks
  • Completely customizable to fit very constrained power, thermal, and silicon area budgets
  • Enables energy-harvesting and multi-year battery life applications, sub milli-watt sensors

ITERATION 2:

Sensor
Balanced


Accelerates in hardware most Neural Network Functions:

  • Advanced Keyword Spotting
  • Sensor Fusion
  • Low-Res Presence Detection
  • Gesture Detection & Recognition
  • Object Classification
  • Biometric Recognition
  • Advanced Speech Recognition
  • Object Detection & Semantic Segmentation
Optimal for Sensor Fusion
and Application SoCs


With Min-Spec or Mid-Spec MCU.

Configurable to ideal fit:

  • 3 – 8 nodes (4 NPE/node)
  • 25 KB – 100 KB per NPE
  • Process, physical IP and other optimizations
Expected implementations:

  • 100 – 500 MHz
  • Up to 1 TOP
Additional Benefits

  • CPU is free for most non-NN compute
  • CPU runs application with minimal NN-management
  • Completely customizable to fit very constrained power, thermal and silicon area budgets
  • Enables intelligent, learning-enabled MCUs and SoCs consuming tens to hundreds of milliwatts or less

ITERATION 3:

Max
Performance


Detection, Classification, Segmentation, Tracking, and ViT:

  • Gesture Detection
  • Object Classification
  • Advanced Speech Recognition
  • Object Detection & Semantic Segmentation
  • Advanced Sequence Prediction
  • Video Object Detection & Tracking
  • Vision Transformer Networks
Advanced Network-Edge Performance
in a Sensor-Edge Power Envelope


With Mid-Spec MCU or Mid-Spec MPU.

Configurable to ideal fit:

  • 8 – 256 nodes (4 NPE/node) + optional Vision Transformer
  • 100 KB per NPE
  • Process, physical IP and other optimizations
Expected implementations:
  • 800 MHz – 2 GHz
  • Up to 131 TOPs
Additional Benefits
  • CPU is free for most non-NN compute
At this stage I cannot comment on the power budget of these iterations. However, we do know that the first released chip, the AKD1000, which was able to retail for about US$25.00, had a power budget that ran in the micro- to milliwatts and was claimed by Edge Impulse (5), Quantum Ventura (3 & 4) and Tata Consulting Services (1) to outperform an Nvidia GPU by some considerable margin across all performance measurements, and this new version is an advancement grown out of the underlying neural fabric supporting the AKD1000.
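The gap can be expressed as energy per inference, which is just power divided by throughput. Using the Edge Impulse figures quoted in reference (5) below (roughly 100 FPS at 9 mW on the Akida development kit) against a hypothetical Jetson-class device doing the same job at, say, 10 W and the same frame rate; the 10 W figure is purely my assumption for scale, not a measured comparison.

```python
def energy_per_inference_uj(power_w, fps):
    """Energy per inference in microjoules: power divided by throughput."""
    return power_w / fps * 1e6

akida = energy_per_inference_uj(0.009, 100)   # 9 mW at 100 FPS -> ~90 uJ
jetson = energy_per_inference_uj(10.0, 100)   # assumed 10 W at 100 FPS -> 100,000 uJ
print(f"Akida: {akida:.0f} uJ/inference, assumed Jetson-class: {jetson:.0f} uJ/inference")
```

On those assumptions the difference is three orders of magnitude per inference, which is why milliwatt budgets matter so much for battery-powered, always-on devices.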

Perhaps the most worrying recent doomsday prediction for Nvidia at the Edge came from Tata Elxsi’s Mr. Sunil Nair, Vice President EMEA and Design Digital, who posted the following on his LinkedIn page, right beside a post about Tata Elxsi partnering with Nvidia in the Cloud:

“Cloud computing is commodity. Edge is where the action is.

Thrilled to see Tata Elxsi and Brainchip partner to enable and integrate ultra-low power neuromorphic processors for use cases that would bring huge savings and transform citizen experience. (especially the ones over spending on Nvidia.)”


Mr. Nair has been with Tata Elxsi since 1997. While the partnership with Tata Elxsi has only recently been announced, Brainchip has been working with Tata Consulting Services (1), the TATA Group's research arm, since at least 2019, when they jointly presented the AKD1000 performing a live gesture recognition demonstration. Since that time Tata Consulting Services has released a number of peer-reviewed papers covering the use of the AKD1000, and it can be said that Mr. Nair would be very well informed when it comes to the benefits that Brainchip's AKIDA technology solutions can bring to the Edge.

The full release of the next generation, referred to as AKIDA 2.0, has until today been restricted to a number of select customers; however, the company recently advised in a CEO investor presentation that the full public release is imminent. This prediction seems to be holding true, as in the past week Brainchip's website has been updated with substantial information regarding AKIDA 2.0, signalling that it is getting close to the launch date.

The interesting aspect of Brainchip Inc is that, while it has remained largely unknown to the general public and the investment world, in its quiet way it has been accumulating a very long and impressive list of corporate and academic engagements, including the following publicly acknowledged group. According to Mr. Rob Telson, Vice President of Ecosystems & Partnerships, they have hundreds of companies testing AKIDA technology boards:

1. FORD, 2. VALEO, 3. RENESAS, 4. NASA, 5. TATA Consulting Services, 6. MEGACHIPS, 7. MOSCHIP, 8. SOCIONEXT, 9. PROPHESEE, 10. VVDN, 11. TEKSUN, 12. Ai LABS, 13. NVISO, 14. EMOTION 3D, 15. ARM, 16. EDGE IMPULSE, 17. INTEL Foundries, 18. GLOBAL FOUNDRIES, 19. BLUE RIDGE ENVISIONEERING, 20. MERCEDES BENZ, 21. ANT 61, 22. QUANTUM VENTURA, 23. INFORMATION SYSTEM LABORATORIES, 24. INTELLISENSE SYSTEMS, 25. CVEDIA, 26. LORSER INDUSTRIES, 27. SiFIVE, 28. IPRO Silicon IP, 29. SALESLINK, 30. NUMEN, 31. VORAGO, 32. NANOSE, 33. BIOTOME, 34. OCULI, 35. Magik Eye, 36. GMAC, 37. TATA Elxsi, 38. University of Oklahoma, 39. Arizona State University, 40. Carnegie Mellon University, 41. Rochester Institute of Technology, 42. Drexel University, 43. University of Virginia.

It should be noted that Brainchip has been at pains in its literature and presentations to explain that the AKIDA technology is processor and sensor agnostic and, being fully digital, is scalable and portable across all foundries. The AKD1000 was produced successfully, first time and every time, in 28 nm at TSMC, and only recently the AKD1500 was received back from Global Foundries, successfully fabricated first time and every time in 22 nm FD-SOI.

The individual who gave the underlying AKIDA technology life is Peter van der Made, one of the founders of Brainchip, whose full vision is to create a beneficial form of Artificial General Intelligence by about 2030. This vision plays out in a series of steps; AKIDA 3.0 is presently in development, and according to the company's CEO Sean Hehir in a very recent investor presentation, each step is targeted to take 12 to 18 months. Historically, Peter van der Made and his co-founder Anil Mankar have impressed with their capacity to deliver on their technology development timelines, and by always having a little extra of what they like to call secret sauce with each new technology reveal.

This little extra secret sauce with AKIDA 2.0 was the release of TENNs (6 & 9) and ViT (7), which provide an unprecedented leap beyond what even the most optimistic expected to be possible at the far Edge, including devices powered by energy harvesting. It is impossible to do justice to what they bring to the Edge Compute revolution in this article, but fortunately, even though patents are pending, Brainchip has published a white paper (6) and videos providing easy-to-follow, plain-English explanations. (7)

By the way, anyone up for a bit of regression analysis (8) using AKIDA technology? Brainchip also has that covered. When others were opining that spiking neural networks could not do regression analysis, Brainchip Inc was demonstrating it running on the AKD1000, monitoring vibration in rail infrastructure.

There is so much to delight those who love to read about and explore science fiction becoming reality when peeling back the petals of the rose that is the Brainchip AKIDA technology revolution.

In conclusion: in Australia, the ignorant and poor of intellect have at times treated the vision of Peter van der Made with a savagery of doubt usually reserved for those who claim to have been abducted by aliens. As is usually the case, these critics have been members of the so-called sophisticated investor class, and even though they have little credibility in their areas of claimed expertise, they drown in their own ignorance when it comes to the science of neuromorphic computing. If tempted to listen to such individuals about the science of neuromorphic computing, one is well served to remember the life of Robert Goddard - https://www.msn.com/en-au/news/aust...1&cvid=297d1358fb8c45ea9b1ae445a4985d75&ei=51

REFERENCES:

1. Low Power & Low Latency Cloud Cover Detection in Small Satellites Using On-board Neuromorphic Processors

Chetan Kadway, Sounak Dey, Arijit Mukherjee, Arpan Pal, Gilles Bézard

2023 International Joint Conference on Neural Networks (IJCNN), 1-8, 2023

Emergence of small satellites for earth observation missions has opened up new horizons for space research but at the same time posed newer challenges of limited power and compute resource arising out of the size & weight constraints imposed by these satellites. The currently evolving neuromorphic computing paradigm shows promise in terms of energy efficiency and may possibly be exploited here. In this paper, we try to prove the applicability of neuromorphic computing for on-board data processing in satellites by creating a 2-stage hierarchical cloud cover detection application for multi-spectral earth observation images. We design and train a CNN and convert it into SNN using the CNN2SNN conversion toolkit of Brainchip Akida neuromorphic platform. We achieve 95.46% accuracy while power consumption and latency are at least 35x and 3.4x more efficient respectively in stage-1 (and 230x & 7x in stage-2) compared to the equivalent CNN running on Jetson TX2.

https://ieeexplore.ieee.org/abstract/document/10191569/

2. An energy-efficient AkidaNet for morphologically similar weeds and crops recognition at the Edge

Vi Nguyen Thanh Le, Kevin Tsiknos, Kristofor D Carlson, Selam Ahderom

2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 1-8, 2022

Wild radish weeds have always been a persistent problem in agriculture due to their quick and undesirable spread. Therefore, the accurate identification and effective control of wild radish in canola crops at early growth stage play an indispensable role in reducing herbicide rates and enhancing agricultural productivity. In this paper, an energy efficient and lightweight AkidaNet model is developed to accurately identify broad-leaf weeds and crops with similar morphology at four different growth stages. Experiments performed on a published bccr-segset dataset show that our proposed method achieves competitive performance, a classification accuracy of 99.73%, compared to several well-known CNNs architectures. Next, we quantized and converted the model into a Spiking Neural Network for implementation on a spike-based neuromorphic hardware device. The converted model is not only superior in low-latency and low-power consumption but also retains a similar accuracy to the original model. We also employ Grad-CAM to validate whether our model focuses on important features in plant images to identify wild radish weeds in crops.

https://www.researchgate.net/profil...r-weeds-and-crops-recognition-at-the-Edge.pdf


3. Table 6.

1694918163371.png


https://www.sciencedirect.com/scien...c65de04&pid=1-s2.0-S1877050922017860-main.pdf (Page 494)

4. “In this federally funded phase 2 program, Quantum Ventura is creating state-of-the-art cybersecurity applications for the U.S. Department of Energy under the Small Business Innovation Research (SBIR) Program. The program is focused on “Cyber threat-detection using neuromorphic computing,” which aims to develop an advanced approach to detect and prevent cyberattacks on computer networks and critical infrastructure using brain-inspired artificial intelligence.

“Neuromorphic computing is an ideal technology for threat detection because of its small size and power, accuracy, and in particular, its ability to learn and adapt, since attackers are constantly changing their tactics,” said Srini Vasan, President and CEO of Quantum Ventura Inc. “We believe that our solution incorporating Brainchip’s Akida will be a breakthrough for defending against cyber threats and address additional applications as well.””

https://brainchip.com/brainchip-and-quantum-ventura-partner-to-develop-cyber-threat-detection/

5. Running the out-of-the-box demos on the Akida Raspberry Pi development kit I have was very impressive achieving, according to the statistics, approximately 100 FPS for a 9-mW power dissipation.

All told I am very impressed with the BrainChip Akida neuromorphic processor. The performance of the networks implemented is very good, while the power used is also exceptionally low. These two parameters are critical parameters for embedded solutions deployed at the edge.

Project links

  1. Visual wake word: https://studio.edgeimpulse.com/studio/224143
  2. Anomaly detection: https://studio.edgeimpulse.com/studio/261242
  3. CIFAR10: https://studio.edgeimpulse.com/studio/257103
  4. Keyword spotting: https://studio.edgeimpulse.com/studio/257193
Adiuvo is a consultancy that provides embedded systems design, training, and marketing services. Taylor is Founder and Principal Consultant of the company, teaches about embedded systems at the University of Lincoln, and is host of the podcast “The Embedded Hour.”

https://www.edgeimpulse.com/blog/brainchip-akida-and-edge-impulse

6. https://brainchip.com/temporal-event-based-neural-networks-a-new-approach-to-temporal-processing/

7.

8. https://brainchip.com/brainchip-demonstrates-regression-analysis-with-vibration-sensors/

9. https://brainchip.com/wp-content/uploads/2023/06/TENNs_Whitepaper_Final.pdf

Pg 8:
Unlike standard CNN networks that only operate on the spatial dimensions, TENNs contain both temporal and spatial convolution layers. They may combine spatial and temporal features of the data at all levels from shallow to deep layers. In addition, TENNs efficiently learn both spatial and temporal correlations from data in contrast with state-space models that mainly treat time series data with no spatial components. Given the hierarchical and causal nature of TENNs, relationships between elements that are both distant in space and time may be constructed for efficient continuous data processing (such as video, raw speech, and medical data).
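The quoted passage describes networks that convolve over time as well as space, and that are causal: the output at time t sees only inputs up to t, which is what makes streaming operation possible. The whitepaper's actual layers are more involved, but the core idea of a causal temporal convolution can be sketched as follows; this is my own toy version, not BrainChip's implementation.

```python
import numpy as np

def causal_temporal_conv(x, kernel):
    """Causal 1-D convolution over the time axis of x (time, channels):
    y[t] depends only on x[t-k+1 .. t], so the layer can run on streaming data."""
    k = len(kernel)
    # Left-pad with zeros so the output at time t never sees the future.
    xp = np.pad(x, ((k - 1, 0), (0, 0)))
    return np.stack([(kernel[:, None] * xp[t : t + k]).sum(axis=0)
                     for t in range(x.shape[0])])

# A step change in the input: the causal layer reacts at the step, never before it.
x = np.zeros((10, 2))
x[5:] = 1.0
y = causal_temporal_conv(x, kernel=np.array([0.5, 0.3, 0.2]))  # smoothing kernel
print(y[:, 0])
```

In a TENN-style stack, layers like this would alternate or combine with spatial convolutions, so features distant in both space and time can be related, as the passage describes; the causal padding is what keeps the whole network usable on continuous data such as video or raw speech.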


My opinion only so DYOR
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 237 users

Rskiff

Regular
Can Nvidia Survive the 4th Industrial Revolution?
by Fact Finder


Though Nvidia is riding high at the moment all indicators are that it has positioned itself on the wrong side of technology history.

While Nvidia has been compressing models to stave off the end of Moore’s Law it’s continued preoccupation with its Von Neumann market dominance has seen it embrace the false dawn offered to it by Large Language Model’s and the cult of CHATGpt.

The fragile nature of Nvidia’s technology future has been exposed in the last week by a small Australian technology company that has been stealthily developing an entirely new, some have said science fiction solution, to the energy resource issue exposed by the power and cost involved in training and running Large Language Models represented by CHATGpt.

The World has been fantasising about what is called Edge Computing for over a decade. The principle underpinning Edge Computing is actually very simple and can best be understood by what might be considered a strange example.

I am sure you have heard of terms like Food Miles, Buy Local, Eat Local, Grow Your Own as ways to decarbonise and save the planet. The simple indisputable proposition being if you reduce the distance between you and your food sources point of production the reduction in transport will reduce the energy consumed in putting the food on your plate.

Putting a bunch of flowers cut from your own garden on the sideboard is infinitely more fuel efficient than trucking, flying, trucking, driving fresh picked flowers from Europe around the globe to you in Australia.

Cutting asparagus in your kitchen garden is infinitely more efficient than buying asparagus from your Local Supermarket that has been cut in Peru and transported to you in Australia.

Now in the above examples I have chosen two products that require refrigeration to keep them fresh after picking to ensure they arrive at your home still useable and which as a result requires transport by jet airliners.

Suffice to say it is immediately obvious that if you are trying to reduce carbon and cost processing your flowers and asparagus at home wins hands down as zero carbon versus tonnes is a no brainer.

Now I know there are practical limitations making this solution difficult for those of us with black thumbs or living in home units to embrace. But that is an argument for another day.

The point is that this is what Edge Computing is all about. It is about reducing compute miles and in so doing cutting dramatically the cost of doing compute and carbon emissions.

For example take a Smart Doorbell. Currently a Smart Doorbell needs to be constantly connected via your homes wireless network to carry out its function of identifying and alerting you to the presence of someone at the door.

In lay terms 24/7 it sits there constantly processing camera frames showing the brick wall next to your front door and sending to the cloud over and over and over an image of the brick wall and receiving back message after message that no one is at the front door. Now you can reduce the power by slowing down the number of photos/frames it takes every second to monitor for someone coming to your front door but if the gap becomes too great between frames someone can come and go in that gap and avoid detection. So this method has built in limitations even when working to design but not so well when bandwidth is congested or connection breaks down.

Enter the neuromorphic Edge Computing revolution.

Edge Computing is about as I said reducing the compute miles. By placing the compute close as possible to the Smart Doorbell if not right up against it you immediately reduce the distance between the camera/sensor and the compute/intelligence. This has the advantage of reducing power consumption, reducing latency (time it takes to send the message back and forth from the camera/sensor to the data centre) and preventing congesting the bandwidth with photos of a blank brick wall affecting your ability to stream Netflix or Sky Sport.

Everywhere Edge Computing is being spoken about and Nvidia as the dominant player in the computing space is calling out about its Edge Computing solutions.

There is probably not one person on the planet with any interest in computing who has not heard of Nvidia or its Nvidia Jetson range of edge computing solutions.

Indeed, the Nvidia Jeson range is a leader in this space across the globe. Its Jetson solutions are to be found everywhere but for how much longer can Jetson dominate for Nvidia when it is hamstrung by old thinking in a World that is transitioning towards the Fourth Industrial Revolution.

So let's take a quick look at what Nvidia publishes about the Nvidia Jetson AGX Orin series, the Jetson Orin NX series and the Jetson Orin Nano series by reference to the advertised performance figures.

Power ranges from 7 watts to 75 watts, and advertised AI performance from 20 TOPS to 275 TOPS. Pairing each module's performance figure with its published power range, in the order Nvidia lists them:

  • 275 TOPS @ 15W – 60W
  • 248 TOPS @ 15W – 75W
  • 200 TOPS @ 15W – 40W
  • 100 TOPS @ 10W – 25W
  • 70 TOPS @ 10W – 20W
  • 40 TOPS @ 7W – 15W
  • 20 TOPS @ 7W – 10W
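One way to read the advertised figures is efficiency, TOPS per watt at the top of each module's power range. The pairing of each performance figure with a power range follows the published order and is an assumption on my part:

```python
# TOPS per watt at the advertised maximum power for each Jetson Orin module,
# pairing performance and power figures in the order listed (assumed pairing).
modules = [  # (advertised TOPS, max watts)
    (275, 60), (248, 75), (200, 40), (100, 25), (70, 20), (40, 15), (20, 10),
]
for tops, watts in modules:
    print(f"{tops:>3} TOPS @ {watts:>2} W -> {tops / watts:.1f} TOPS/W")
```

On these numbers, no Orin module exceeds about 5 TOPS per watt at full advertised power.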


These numbers would certainly seem impressive to those who fed punch cards into the first IBM mainframe computers, even after you add the power required for the external cooling, such as fans, that Jetson needs.

As impressive as these numbers are, they clearly do not offer a power budget that those looking for an Edge Computing solution can reasonably embrace. Nvidia, to its credit, has recognised this and introduced the Jetson TX2 series, boasting that the Jetson TX2i, Jetson TX2, Jetson TX2 4GB and Jetson TX2 NX deliver AI performance of 1.26 TFLOPS to 1.33 TFLOPS. However, the power budget ranges from 7.5 watts to 20 watts, and on top of these numbers you need to allow for external cooling. You might have noticed that while Nvidia has reduced the form factor, the power required remains in the multi-watt region. (1)

To address these failings, a new form of computing built on spiking neural networks is being championed by Intel and IBM, who over the last decade have proven out in research chips the huge benefits of this new style of compute, not least a massively reduced power budget. The research at Intel and IBM goes on apace.

Enter, stage left, a little-known Australian company that has been listed on the Australian Stock Exchange since 2015. This tiny company of fewer than 100 employees has quietly, some would say stealthily, gone about its business: it beat Intel and IBM to the punch by launching its first commercial spiking neural network engineering chip in 2020, and it is shortly to release its second-generation technology, which its website reports as capable of the following performance figures across three models or iterations:

ITERATION ONE:

Max
Efficiency


Ideal for always-on, energy-sipping Sensor Applications:

  • Vibration Detection
  • Anomaly Detection
  • Keyword Spotting
  • Sensor Fusion
  • Low-Res Presence Detection
  • Gesture Detection
Extremely Efficient
@Sensor Inference


Either Standalone or with Min-spec MCU.

Configurable to ideal fit:

  • 1 – 2 nodes (4 NPE/node)
  • Anomaly Detection
  • Keyword Spotting
Expected implementations:

  • 50 MHz – 200 MHz
  • Up to 100 GOPs
Additional Benefits

  • Eliminates need for CPU intervention
  • Fully accelerates most feed-forward networks
  • Optional skip connection and TENNs support for more complex networks
  • Completely customizable to fit very constrained power, thermal, and silicon area budgets
  • Enables energy-harvesting and multi-year battery life applications, sub-milliwatt sensors

ITERATION TWO:

Sensor
Balanced


Accelerates in hardware most Neural Network Functions:

  • Advanced Keyword Spotting
  • Sensor Fusion
  • Low-Res Presence Detection
  • Gesture Detection & Recognition
  • Object Classification
  • Biometric Recognition
  • Advanced Speech Recognition
  • Object Detection & Semantic Segmentation
Optimal for Sensor Fusion
and Application SoCs


With Min-Spec or Mid-Spec MCU.

Configurable to ideal fit:

  • 3 – 8 nodes (4 NPE/node)
  • 25 KB – 100 KB per NPE
  • Process, physical IP and other optimizations
Expected implementations:

  • 100 – 500 MHz
  • Up to 1 TOP
Additional Benefits

  • CPU is free for most non-NN compute
  • CPU runs application with minimal NN-management
  • Completely customizable to fit very constrained power, thermal and silicon area budgets
  • Enables intelligent, learning-enabled MCUs and SoCs consuming tens to hundreds of milliwatts or less

ITERATION THREE:

Max
Performance


Detection, Classification, Segmentation, Tracking, and ViT:

  • Gesture Detection
  • Object Classification
  • Advanced Speech Recognition
  • Object Detection & Semantic Segmentation
  • Advanced Sequence Prediction
  • Video Object Detection & Tracking
  • Vision Transformer Networks
Advanced Network-Edge Performance
in a Sensor-Edge Power Envelope


With Mid-Spec MCU or Mid-Spec MPU.

Configurable to ideal fit:

  • 8 – 256 nodes (4 NPE/node) + optional Vision Transformer
  • 100 KB per NPE
  • Process, physical IP and other optimizations
Expected implementations:
  • 800 MHz – 2 GHz
  • Up to 131 TOPs
Additional Benefits
  • CPU is free for most non-NN compute
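The scaling story across the three configurations can be summarised from the lists above. The NPE counts derive from the stated 4 NPE per node; the throughput figures are the company's "up to" claims as quoted, not independent benchmarks:

```python
# Node, NPE and peak-throughput scaling across the three advertised
# Akida configurations, using only the figures quoted above.
NPE_PER_NODE = 4
configs = [  # (name, min nodes, max nodes, claimed peak ops/sec)
    ("Max Efficiency",  1,   2,   100e9),    # up to 100 GOPs
    ("Sensor Balanced", 3,   8,   1e12),     # up to 1 TOP
    ("Max Performance", 8,   256, 131e12),   # up to 131 TOPs
]
for name, lo, hi, peak in configs:
    print(f"{name}: {lo * NPE_PER_NODE}-{hi * NPE_PER_NODE} NPEs, "
          f"up to {peak / 1e12:g} TOPS")
```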
At this stage I cannot comment on the power budget of these iterations. We do know, however, that the first released chip, the AKD1000, retailed for about US$25 and had a power budget in the micro- to milliwatt range, and that Edge Impulse (5), Quantum Ventura (3 & 4) and Tata Consulting Services (1) all reported it outperforming an Nvidia GPU by a considerable margin across their performance measurements. This new version is an advancement grown out of the underlying neural fabric supporting the AKD1000.
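Taking the SWaP-C figures quoted in reference (3) at face value (both are the paper's estimates, not my measurements), the claimed gap is easy to quantify:

```python
# Ratios implied by reference (3)'s SWaP-C comparison: an NVIDIA A100
# versus a USB-key neuromorphic device, using the figures as quoted there.
a100_watts, a100_cost = 250, 30_000   # est. figures from the paper
akida_watts, akida_cost = 1, 50       # est. figures from the paper
print(f"power: {a100_watts / akida_watts:.0f}x")   # 250x
print(f"cost:  {a100_cost / akida_cost:.0f}x")     # 600x
```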

Perhaps the most worrying recent doomsday prediction for Nvidia at the Edge came from Tata Elxsi’s Mr. Sunil Nair, Vice President EMEA and Design Digital, who posted the following on his LinkedIn page, right beside a post about Tata Elxsi partnering with Nvidia in the Cloud:

“Cloud computing is commodity. Edge is where the action is.

Thrilled to see Tata Elxsi and Brainchip partner to enable and integrate ultra-low power neuromorphic processors for use cases that would bring huge savings and transform citizen experience. (especially the ones over spending on Nvidia.)”


Mr. Nair has been with Tata Elxsi since 1997. While the partnership with Tata Elxsi was only recently announced, Brainchip has been working with Tata Consulting Services (1), the Tata Group’s research arm, since at least 2019, when they jointly presented the AKD1000 performing a live gesture-recognition demonstration. Since that time Tata Consulting Services has released a number of peer-reviewed papers covering the use of the AKD1000, so it can be said that Mr. Nair would be very well informed about the benefits Brainchip’s AKIDA technology solutions can bring to the Edge.

Until now the full release of the next generation, referred to as AKIDA 2.0, has been restricted to a number of select customers; however, the company recently advised in a CEO investor presentation that the full public release is imminent. That prediction seems to be holding true: in the past week Brainchip’s website has been updated with substantial information about AKIDA 2.0, signalling it is getting close to the launch date.

The interesting aspect of Brainchip Inc is that, while it has remained largely unknown to the general public and the investment world, it has quietly accumulated a very long and impressive list of corporate and academic engagements, including the following publicly acknowledged group; according to Mr. Rob Telson, Vice President of Ecosystems & Partnerships, hundreds of companies are testing AKIDA technology boards:

1. FORD, 2. VALEO, 3. RENESAS, 4. NASA, 5. TATA Consulting Services, 6. MEGACHIPS, 7. MOSCHIP, 8. SOCIONEXT, 9. PROPHESEE, 10. VVDN, 11. TEKSUN, 12. Ai LABS, 13. NVISO, 14. EMOTION 3D, 15. ARM, 16. EDGE IMPULSE, 17. INTEL Foundries, 18. GLOBAL FOUNDRIES, 19. BLUE RIDGE ENVISIONEERING, 20. MERCEDES BENZ, 21. ANT 61, 22. QUANTUM VENTURA, 23. INFORMATION SYSTEM LABORATORIES, 24. INTELLISENSE SYSTEMS, 25. CVEDIA, 26. LORSER INDUSTRIES, 27. SiFIVE, 28. IPRO Silicon IP, 29. SALESLINK, 30. NUMEN, 31. VORAGO, 32. NANOSE, 33. BIOTOME, 34. OCULI, 35. Magik Eye, 36. GMAC, 37. TATA Elxsi, 38. University of Oklahoma, 39. Arizona State University, 40. Carnegie Mellon University, 41. Rochester Institute of Technology, 42. Drexel University, 43. University of Virginia.

It should be noted that Brainchip has been at pains in its literature and presentations to explain that the AKIDA technology is processor- and sensor-agnostic and, being fully digital, is scalable and portable across all foundries. The AKD1000 was produced successfully, first time and every time, in 28nm at TSMC, and only recently the AKD1500 came back from Global Foundries successfully fabricated, first time and every time, in 22nm FDSOI.

The individual who gave the underlying AKIDA technology life is Peter van der Made, one of the founders of Brainchip, whose full vision is to create a beneficial form of Artificial General Intelligence by about 2030. This vision plays out in a series of steps: AKIDA 3.0 is presently in development, and according to the company’s CEO Sean Hehir, in a very recent investor presentation, each step is targeted to take 12 to 18 months. Historically, Peter van der Made and his co-founder Anil Mankar have impressed with their capacity to deliver on their technology development timelines, and by always having a little extra, what they like to call secret sauce, with each new technology reveal.

This little extra secret sauce with AKIDA 2.0 was the release of TENNs (6) and ViT (7), which provide an unprecedented leap beyond what even the most optimistic expected to be possible at the far Edge, with energy harvesting powering these devices. It is impossible to do justice here to what they bring to the Edge Compute revolution, but fortunately, even though patents are pending, Brainchip has published a white paper (6) and videos providing easy-to-follow, plain-English explanations. (7)

By the way, anyone up for a bit of regression analysis (8) using AKIDA technology? Brainchip has that covered too. When others were opining that spiking neural networks could not do regression analysis, Brainchip Inc was demonstrating it running on the AKD1000, monitoring vibration in rail infrastructure.

There is so much to delight those who love to read about and explore science fiction becoming reality when peeling back the petals of the rose that is the Brainchip AKIDA technology revolution.

In concluding: in Australia, the ignorant and poor of intellect have at times treated the vision of Peter van der Made with a savagery of doubt usually reserved for those who claim to have been abducted by aliens. As is usually the case, these critics have been members of the so-called sophisticated investor class; they have little credibility even in their areas of claimed expertise, and they drown in their own ignorance when it comes to the science of neuromorphic computing. If tempted to listen to such individuals about the science of neuromorphic computing, one is well served to remember the life of Robert Goddard - https://www.msn.com/en-au/news/aust...1&cvid=297d1358fb8c45ea9b1ae445a4985d75&ei=51

REFERENCES:

1. Low Power & Low Latency Cloud Cover Detection in Small Satellites Using On-board Neuromorphic Processors

Chetan Kadway, Sounak Dey, Arijit Mukherjee, Arpan Pal, Gilles Bézard

2023 International Joint Conference on Neural Networks (IJCNN), 1-8, 2023

Emergence of small satellites for earth observation missions has opened up new horizons for space research but at the same time posed newer challenges of limited power and compute resource arising out of the size & weight constraints imposed by these satellites. The currently evolving neuromorphic computing paradigm shows promise in terms of energy efficiency and may possibly be exploited here. In this paper, we try to prove the applicability of neuromorphic computing for on-board data processing in satellites by creating a 2-stage hierarchical cloud cover detection application for multi-spectral earth observation images. We design and train a CNN and convert it into SNN using the CNN2SNN conversion toolkit of Brainchip Akida neuromorphic platform. We achieve 95.46% accuracy while power consumption and latency are at least 35x and 3.4x more efficient respectively in stage-1 (and 230x & 7x in stage-2) compared to the equivalent CNN running on Jetson TX2.

https://ieeexplore.ieee.org/abstract/document/10191569/

2. An energy-efficient AkidaNet for morphologically similar weeds and crops recognition at the Edge

Vi Nguyen Thanh Le, Kevin Tsiknos, Kristofor D Carlson, Selam Ahderom

2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 1-8, 2022

Wild radish weeds have always been a persistent problem in agriculture due to their quick and undesirable spread. Therefore, the accurate identification and effective control of wild radish in canola crops at early growth stage play an indispensable role in reducing herbicide rates and enhancing agricultural productivity. In this paper, an energy efficient and lightweight AkidaNet model is developed to accurately identify broad-leaf weeds and crops with similar morphology at four different growth stages. Experiments performed on a published bccr-segset dataset show that our proposed method achieves competitive performance, a classification accuracy of 99.73%, compared to several well-known CNNs architectures. Next, we quantized and converted the model into a Spiking Neural Network for implementation on a spike-based neuromorphic hardware device. The converted model is not only superior in low-latency and low-power consumption but also retains a similar accuracy to the original model. We also employ Grad-CAM to validate whether our model focuses on important features in plant images to identify wild radish weeds in crops.

https://www.researchgate.net/profil...r-weeds-and-crops-recognition-at-the-Edge.pdf


3. Table 6. Comparison of SWaP-C profiles for CPU/GPU platforms and neuromorphic versions:

NVIDIA A100 (CPU/GPU platform): 26.7 long x 11.2 tall x 3.5 wide cm (10.5 x 4.4 x 1.4 in), 250 W, cost $30,000 USD (est.)

Neuromorphic version: USB key form factor, 5.1 long x 1.3 tall x 0.6 wide cm (2 x 0.5 x 0.3 in), 1 W, $50 USD (est.)

https://www.sciencedirect.com/scien...c65de04&pid=1-s2.0-S1877050922017860-main.pdf (Page 494)

4. “In this federally funded phase 2 program, Quantum Ventura is creating state-of-the-art cybersecurity applications for the U.S. Department of Energy under the Small Business Innovation Research (SBIR) Program. The program is focused on “Cyber threat-detection using neuromorphic computing,” which aims to develop an advanced approach to detect and prevent cyberattacks on computer networks and critical infrastructure using brain-inspired artificial intelligence.

“Neuromorphic computing is an ideal technology for threat detection because of its small size and power, accuracy, and in particular, its ability to learn and adapt, since attackers are constantly changing their tactics,” said Srini Vasan, President and CEO of Quantum Ventura Inc. “We believe that our solution incorporating Brainchip’s Akida will be a breakthrough for defending against cyber threats and address additional applications as well.””

https://brainchip.com/brainchip-and-quantum-ventura-partner-to-develop-cyber-threat-detection/

5. Running the out-of-the-box demos on the Akida Raspberry Pi development kit I have was very impressive achieving, according to the statistics, approximately 100 FPS for a 9-mW power dissipation.

All told I am very impressed with the BrainChip Akida neuromorphic processor. The performance of the networks implemented is very good, while the power used is also exceptionally low. These two parameters are critical parameters for embedded solutions deployed at the edge.

Project links

  1. Visual wake word: https://studio.edgeimpulse.com/studio/224143
  2. Anomaly detection: https://studio.edgeimpulse.com/studio/261242
  3. CIFAR10: https://studio.edgeimpulse.com/studio/257103
  4. Keyword spotting: https://studio.edgeimpulse.com/studio/257193
Adiuvo is a consultancy that provides embedded systems design, training, and marketing services. Taylor is Founder and Principal Consultant of the company, teaches about embedded systems at the University of Lincoln, and is host of the podcast “The Embedded Hour.”

https://www.edgeimpulse.com/blog/brainchip-akida-and-edge-impulse

6. https://brainchip.com/temporal-event-based-neural-networks-a-new-approach-to-temporal-processing/

7.

8. https://brainchip.com/brainchip-demonstrates-regression-analysis-with-vibration-sensors/

My opinion only so DYOR

Wow @Fact Finder. You should submit this for publication to a worthy outlet so more people get to see this fantastic analysis of Brainchip.
 

Gemmax

Regular
Can Nvidia Survive the 4th Industrial Revolution?
by Fact Finder


Though Nvidia is riding high at the moment, all indicators are that it has positioned itself on the wrong side of technology history.

While Nvidia has been compressing models to stave off the end of Moore’s Law, its continued preoccupation with its Von Neumann market dominance has seen it embrace the false dawn offered by Large Language Models and the cult of ChatGPT.

The fragile nature of Nvidia’s technology future has been exposed in the last week by a small Australian technology company that has been stealthily developing an entirely new, some have said science-fiction, solution to the energy problem exposed by the power and cost of training and running Large Language Models such as ChatGPT.

The World has been fantasising about what is called Edge Computing for over a decade. The principle underpinning Edge Computing is actually very simple and can best be understood by what might be considered a strange example.

I am sure you have heard of terms like Food Miles, Buy Local, Eat Local and Grow Your Own as ways to decarbonise and save the planet. The simple, indisputable proposition is that if you reduce the distance between you and your food source’s point of production, the reduction in transport reduces the energy consumed in putting the food on your plate.

Putting a bunch of flowers cut from your own garden on the sideboard is infinitely more fuel efficient than trucking, flying, trucking, driving fresh picked flowers from Europe around the globe to you in Australia.

Cutting asparagus in your kitchen garden is infinitely more efficient than buying asparagus from your Local Supermarket that has been cut in Peru and transported to you in Australia.

Now, in the above examples I have chosen two products that require refrigeration after picking to keep them fresh and usable on arrival at your home, and which as a result require transport by jet airliner.

Suffice it to say, if you are trying to reduce carbon and cost, processing your flowers and asparagus at home wins hands down: zero carbon versus tonnes is a no-brainer.

Now I know there are practical limitations making this solution difficult for those of us with black thumbs or living in home units to embrace. But that is an argument for another day.

The point is that this is what Edge Computing is all about. It is about reducing compute miles and in so doing cutting dramatically the cost of doing compute and carbon emissions.

For example, take a Smart Doorbell. Currently a Smart Doorbell needs to be constantly connected via your home’s wireless network to carry out its function of identifying and alerting you to the presence of someone at the door.

In lay terms 24/7 it sits there constantly processing camera frames showing the brick wall next to your front door and sending to the cloud over and over and over an image of the brick wall and receiving back message after message that no one is at the front door. Now you can reduce the power by slowing down the number of photos/frames it takes every second to monitor for someone coming to your front door but if the gap becomes too great between frames someone can come and go in that gap and avoid detection. So this method has built in limitations even when working to design but not so well when bandwidth is congested or connection breaks down.

Enter the neuromorphic Edge Computing revolution.

Edge Computing is about as I said reducing the compute miles. By placing the compute close as possible to the Smart Doorbell if not right up against it you immediately reduce the distance between the camera/sensor and the compute/intelligence. This has the advantage of reducing power consumption, reducing latency (time it takes to send the message back and forth from the camera/sensor to the data centre) and preventing congesting the bandwidth with photos of a blank brick wall affecting your ability to stream Netflix or Sky Sport.

Everywhere Edge Computing is being spoken about and Nvidia as the dominant player in the computing space is calling out about its Edge Computing solutions.

There is probably not one person on the planet with any interest in computing who has not heard of Nvidia or its Nvidia Jetson range of edge computing solutions.

Indeed, the Nvidia Jeson range is a leader in this space across the globe. Its Jetson solutions are to be found everywhere but for how much longer can Jetson dominate for Nvidia when it is hamstrung by old thinking in a World that is transitioning towards the Fourth Industrial Revolution.

So let's take a quick look at what Nvidia publishes about the Nvidia Jetson AGX Orin series, the Jetson Orin NX series and the Jetson Orin Nano series by reference to the advertised performance figures.

Power ranges from 5 watts to 60 Watts

“Power 15W - 60W 15W - 75W 15W - 40W 10W - 25W 10W - 20W 7W - 15W 7W - 10W”

TOPs ranges from 20 TOPs to 275 TOPs

AI Performance 275 TOPS 248 TOPS 200 TOPS 100 TOPS 70 TOPS 40 TOPS 20 TOPS


These numbers would certainly seem impressive to those who were feeding punch cards into the first IBM main frame computer even if you added on the power required to externally cool Jetson running some form of external cooling such as fans.

As impressive as these numbers are they clearly do not offer a power budget that can be reasonably embraced by those looking for an Edge Computing solution. Nvidia have to their credit recognised this and in consequence introduced the Jetson Nano TX2 Series and boasts that the Jetson TX2i, the Jetson TX2, the Jetson TX2 4GB and the Jetson TX2 NX come with AI Performance or 1.26 TFLOPS to 1.33 TFLOPS however the power budget ranges from 7.5 Watts to 20 Watts and on top of these numbers you need to allow for external cooling. You might have noticed that while Nvidia has reduced the form factor the power required remains in the multi watt region. (1)

To address these failings a new form of computing referred to as spiking neural network compute is being championed by Intel and IBM and they have over the last decade reached the point of proving out in research chips the huge benefits to be had by embracing this new style of compute not the least of which is a massively reduced power budget. The research at Intel and IBM goes on apace.

Enter stage left this little-known Australian company that has been listed on the Australian Stock Exchange since 2015. This tiny company with less than 100 employees has quietly some would say stealthily gone about its business and has beaten Intel and IBM to the punch launching its first commercial spiking neural network engineering chip in 2020 and is shortly to release its second generation technology, which it reports on its website as being capable of the following performance figures across three models or iterations of this technology advancement:

ITERATION ONE:

Max
Efficiency


Ideal for always-on, energy-sipping Sensor Applications:

  • Vibration Detection
  • Anomaly Detection
  • Keyword Spotting
  • Sensor Fusion
  • Low-Res Presence Detection
  • Gesture Detection
Extremely Efficient
@Sensor Inference


Either Standalone or with Min-spec MCU.

Configurable to ideal fit:

  • 1 – 2 nodes (4 NPE/node)
  • Anomaly Detection
  • Keyword Spotting
Expected implementations:

  • 50 MHz – 200
  • MHz Up to 100 GOPs
Additional Benefits

Eliminates need for CPU intervention

Fully accelerates most feed-forward networks

  • Optional skip connection and TENNs support for more complex networks
  • Completely customizable to fit very constrained power, thermal, and silicon area budgets
  • Enables energy-harvesting and multi-year battery life applications, sub milli-watt sensors

INTERATION 2:

Sensor
Balanced


Accelerates in hardware most Neural Network Functions:

  • Advanced Keyword Spotting
  • Sensor Fusion
  • Low-Res Presence Detection
  • Gesture Detection & Recognition
  • Object Classification
  • Biometric Recognition
  • Advanced Speech Recognition
  • Object Detection & Semantic Segmentation
Optimal for Sensor Fusion
and Application SoCs


With Min-Spec or Mid-Spec MCU.

Configurable to ideal fit:

  • 3 – 8 nodes (4 NPE/node) 25 KB
  • 100 KB per NPE
  • Process, physical IP and other optimizations
Expected implementations:

  • 100 – 500 MHz
  • Up to 1 TOP
Additional Benefits

  • CPU is free for most non-NN compute
  • CPU runs application with minimal NN-management
  • Completely customizable to fit very constrained power, thermal and silicon area budgets
  • Enables intelligent, learning-enabled MCUs and SoCs consuming tens to hundreds of milliwatts or less

ITERATION 3:

Max
Performance


Detection, Classification, Segmentation, Tracking, and ViT:

  • Gesture Detection
  • Object Classification
  • Advanced Speech Recognition
  • Object Detection & Semantic Segmentation
  • Advanced Sequence Prediction
  • Video Object Detection & Tracking
  • Vision Transformer Networks
Advanced Network-Edge Performance
in a Sensor-Edge Power Envelope


With Mid-Spec MCU or Mid-Spec MPU.

Configurable to ideal fit:

  • 8 – 256 nodes (4 NPE/node) + optional Vision Transformer
  • 100 KB per NPE
  • Process, physical IP and other optimizations
Expected implementations:
  • 800 MHz – 2 GHz
  • Up to 131 TOPs
Additional Benefits
  • CPU is free for most non-NN compute
At this stage I cannot comment on the power budget of these iterations however we do know that the first released chip the AKD1000 which was able to retail for about $US25.00 had a power budget that ran in the micro to milliwatts and was claimed by Edge Impulse (5), Quantum Ventura (3 & 4) and Tata Consulting Services (1) to outperform a GPU from Nvidia by some considerable margin across all performance measurements and this new version is an advancement grown out of the underlying neural fabric supporting AKD1000.

Perhaps the most worrying recent doomsday prediction for Nvidia at the Edge came from Tata Elxsi’s Mr. Sunil Nair Vice President EMEA and Design Digital who posted on his LinkedIn page right beside a post about Tata Elxsi partnering with Nvidia in the Cloud the following:

“Cloud computing is commodity. Edge is where the action is.

Thrilled to see Tata Elxsi and Brainchip partner to enable and integrate ultra-low power neuromorphic processors for use cases that would bring huge savings and transform citizen experience. (especially the ones over spending on Nvidia.)”


Mr. Nair has been with Tata Elxsi since 1997. While the partnership with Tata Elxsi has only recently been announced Brainchip has been working with Tata Consulting Services (1) TATA Groups research arm since at least 2019 when they jointly presented AKD1000 performing a live gesture recognition demonstration. Since that time Tata Consulting Services has released a number of peer reviewed papers covering the use of AKD1000 and it can be said that Mr. Nair would be very well informed when it comes to the benefits that Brainchip’s AKIDA technology solutions can bring to the Edge.

The full release of the next generation referred to as AKIDA 2.0 up till today has been restricted to a number of select customers however the company has recently advised in a CEO investor presentation that the full public release is imminent. This prediction seems to be holding true as in the past week Brainchip’s website has been updated with substantial information regarding AKIDA 2.0 signalling it is getting close to the launch date.

The interesting aspect of Brainchip Inc is that while it has remained largely unknown to the general public and the investment world in its quiet way it has been accumulating a very long and impressive list of corporate and academic engagements including the following publicly acknowledged group and according to Mr. Rob Telson Vice President of Ecosystems & Partnerships, they have hundreds of companies testing AKIDA technology boards:

1. FORD, 2. VALEO, 3. RENESAS, 4. NASA, 5. TATA Consulting Services, 6. MEGACHIPS, 7. MOSCHIP, 8. SOCIONEXT, 9. PROPHESEE, 10. VVDN, 11. TEKSUN, 12. Ai LABS, 13. NVISO, 14. EMOTION 3D, 15. ARM, 16. EDGE IMPULSE, 17. INTEL Foundries, 18. GLOBAL FOUNDRIES, 19. BLUE RIDGE ENVISIONEERING, 20. MERCEDES BENZ, 21. ANT 61, 22. QUANTUM VENTURA, 23. INFORMATION SYSTEM LABORATORIES, 24. INTELLISENSE SYSTEMS, 25. CVEDIA, 26. LORSER INDUSTRIES, 27. SiFIVE, 28. IPRO Silicon IP, 29. SALESLINK, 30. NUMEN, 31. VORAGO, 32. NANOSE, 33. BIOTOME, 34. OCULI, 35. Magik Eye, 36. GMAC, 37. TATA Elxsi, 38. University of Oklahoma, 41. Arizona State University, 42. Carnegie Mellon University, 43. Rochester Institute of Technology, 44. Drexel University, 45. University of Virginia.

It should be noted that Brainchip has been at pains in its literature and presentations to explain that the AKIDA technology is processor and sensor agnostic and being fully digital, scalable, and portable across all foundries. The AKD1000 was produced successfully first time and every time in 28nm at TSMC and only recently the AKD1500 was received back from Global Foundries successfully fabricated in 22nm FDSOI first time and every time.

The individual who gave the underlying AKIDA technology life is Peter van der Made who is also one of the founders of Brainchip and his full vision is to create a beneficial form of Artificial General Intelligence by about 2030. This vision plays out in a series of steps and AKIDA 3.0 is presently in development and according to the company’s CEO Sean Hehir in a very recent investor presentation each step is targeted to take 12 to 18 months. Historically Peter van der Made and his cofounder Anil Mankar have impressed with their capacity to deliver on their technology development time lines and by always having a little extra what they like to call secret sauce with each new technology reveal.

This little extra secret sauce with AKIDA 2.0 was the release of the TeNNs (6) and ViT (7) which provide an unprecedented leap into the future from what even the most optimistic expected to be possible at the far Edge using energy harvesting to power these devices. It is impossible to do justice to what they bring to the Edge Compute revolution in this article but fortunately even though patents are pending Brainchip has published a White Paper (6) and videos providing easy to follow plain English explanations. (7)

By the way anyone up for a bit of regression analysis (8) using AKIDA technology Brainchip also has that covered. When others were opining that spiking neural networks could not do regression analysis Brainchip Inc was demonstrating it running on AKD1000 for monitoring vibration in rail infrastructure.

There is so much to delight those who love to read about and explore science fiction becoming reality when peeling back the petals of the rose that is the Brainchip AKIDA technology revolution.

In concluding in Australia the ignorant and poor of intellect have treated at times the vision of Peter van der Made with a savagery of doubt usually reserved for those who claim to have been abducted by aliens and as is usually the case these critics have been members of the so called sophisticated investor class and even though they have little credibility in their areas of claimed expertise they drown in their own ignorance when it comes to the science of neuromorphic computing. If tempted to listen to such individuals about the science of neuromorphic computing one is well served to remember the life of Robert Goddard - https://www.msn.com/en-au/news/aust...1&cvid=297d1358fb8c45ea9b1ae445a4985d75&ei=51

REFERENCES:

1. Low Power & Low Latency Cloud Cover Detection in Small Satellites Using On-board Neuromorphic Processors

Chetan Kadway, Sounak Dey, Arijit Mukherjee, Arpan Pal, Gilles Bézard

2023 International Joint Conference on Neural Networks (IJCNN), 1-8, 2023

Emergence of small satellites for earth observation missions has opened up new horizons for space research but at the same time posed newer challenges of limited power and compute resource arising out of the size & weight constraints imposed by these satellites. The currently evolving neuromorphic computing paradigm shows promise in terms of energy efficiency and may possibly be exploited here. In this paper, we try to prove the applicability of neuromorphic computing for on-board data processing in satellites by creating a 2-stage hierarchical cloud cover detection application for multi-spectral earth observation images. We design and train a CNN and convert it into SNN using the CNN2SNN conversion toolkit of Brainchip Akida neuromorphic platform. We achieve 95.46% accuracy while power consumption and latency are at least 35x and 3.4x more efficient respectively in stage-1 (and 230x & 7x in stage-2) compared to the equivalent CNN running on Jetson TX2.

https://ieeexplore.ieee.org/abstract/document/10191569/

2. An energy-efficient AkidaNet for morphologically similar weeds and crops recognition at the Edge

Vi Nguyen Thanh Le, Kevin Tsiknos, Kristofor D Carlson, Selam Ahderom

2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 1-8, 2022

Wild radish weeds have always been a persistent problem in agriculture due to their quick and undesirable spread. Therefore, the accurate identification and effective control of wild radish in canola crops at early growth stage play an indispensable role in reducing herbicide rates and enhancing agricultural productivity. In this paper, an energy efficient and lightweight AkidaNet model is developed to accurately identify broad-leaf weeds and crops with similar morphology at four different growth stages. Experiments performed on a published bccr-segset dataset show that our proposed method achieves competitive performance, a classification accuracy of 99.73%, compared to several well-known CNNs architectures. Next, we quantized and converted the model into a Spiking Neural Network for implementation on a spike-based neuromorphic hardware device. The converted model is not only superior in low-latency and low-power consumption but also retains a similar accuracy to the original model. We also employ Grad-CAM to validate whether our model focuses on important features in plant images to identify wild radish weeds in crops.

https://www.researchgate.net/profil...r-weeds-and-crops-recognition-at-the-Edge.pdf


3. Table 6. Comparison of SWaP-C profiles for CPU/GPU platforms and neuromorphic versions

CPU/GPU platform (NVIDIA A100): 26.7 long x 11.2 tall x 3.5 wide cm (10.5 x 4.4 x 1.4 in); 250 W; cost $30,000 USD (est.)

Neuromorphic version: USB key form factor; 5.1 long x 1.3 tall x 0.6 wide cm (2 x 0.5 x 0.3 in); 1 W; $50 USD (est.)

https://www.sciencedirect.com/scien...c65de04&pid=1-s2.0-S1877050922017860-main.pdf (Page 494)

4. “In this federally funded phase 2 program, Quantum Ventura is creating state-of-the-art cybersecurity applications for the U.S. Department of Energy under the Small Business Innovation Research (SBIR) Program. The program is focused on “Cyber threat-detection using neuromorphic computing,” which aims to develop an advanced approach to detect and prevent cyberattacks on computer networks and critical infrastructure using brain-inspired artificial intelligence.

“Neuromorphic computing is an ideal technology for threat detection because of its small size and power, accuracy, and in particular, its ability to learn and adapt, since attackers are constantly changing their tactics,” said Srini Vasan, President and CEO of Quantum Ventura Inc. “We believe that our solution incorporating Brainchip’s Akida will be a breakthrough for defending against cyber threats and address additional applications as well.””

https://brainchip.com/brainchip-and-quantum-ventura-partner-to-develop-cyber-threat-detection/

5. Running the out-of-the-box demos on the Akida Raspberry Pi development kit I have was very impressive, achieving, according to the statistics, approximately 100 FPS at a 9 mW power dissipation.

All told I am very impressed with the BrainChip Akida neuromorphic processor. The performance of the networks implemented is very good, while the power used is also exceptionally low. These two parameters are critical parameters for embedded solutions deployed at the edge.

Project links

  1. Visual wake word: https://studio.edgeimpulse.com/studio/224143
  2. Anomaly detection: https://studio.edgeimpulse.com/studio/261242
  3. CIFAR10: https://studio.edgeimpulse.com/studio/257103
  4. Keyword spotting: https://studio.edgeimpulse.com/studio/257193
Adiuvo is a consultancy that provides embedded systems design, training, and marketing services. Taylor is Founder and Principal Consultant of the company, teaches about embedded systems at the University of Lincoln, and is host of the podcast “The Embedded Hour.”

https://www.edgeimpulse.com/blog/brainchip-akida-and-edge-impulse

6. https://brainchip.com/temporal-event-based-neural-networks-a-new-approach-to-temporal-processing/

7.

8. https://brainchip.com/brainchip-demonstrates-regression-analysis-with-vibration-sensors/

My opinion only so DYOR

Mic drop right there!!
 

Tuliptrader

Regular
Can Nvidia Survive the 4th Industrial Revolution?
by Fact Finder


Though Nvidia is riding high at the moment, all indicators are that it has positioned itself on the wrong side of technology history.

While Nvidia has been compressing models to stave off the end of Moore’s Law, its continued preoccupation with its von Neumann market dominance has seen it embrace the false dawn offered to it by Large Language Models and the cult of ChatGPT.

The fragile nature of Nvidia’s technology future has been exposed in the last week by a small Australian technology company that has been stealthily developing an entirely new, some have said science-fiction, solution to the energy problem exposed by the power and cost involved in training and running Large Language Models such as ChatGPT.

The World has been fantasising about what is called Edge Computing for over a decade. The principle underpinning Edge Computing is actually very simple and can best be understood by what might be considered a strange example.

I am sure you have heard of terms like Food Miles, Buy Local, Eat Local, and Grow Your Own as ways to decarbonise and save the planet. The simple, indisputable proposition is that if you reduce the distance between you and your food source’s point of production, the reduction in transport will reduce the energy consumed in putting the food on your plate.

Putting a bunch of flowers cut from your own garden on the sideboard is infinitely more fuel efficient than trucking, flying, and driving freshly picked flowers from Europe around the globe to you in Australia.

Cutting asparagus in your kitchen garden is infinitely more efficient than buying asparagus from your Local Supermarket that has been cut in Peru and transported to you in Australia.

Now, in the above examples I have chosen two products that require refrigeration to keep them fresh after picking, to ensure they arrive at your home still usable, and which as a result require transport by jet airliner.

Suffice to say, it is immediately obvious that if you are trying to reduce carbon and cost, processing your flowers and asparagus at home wins hands down, as zero carbon versus tonnes is a no-brainer.

Now I know there are practical limitations making this solution difficult for those of us with black thumbs or living in home units to embrace. But that is an argument for another day.

The point is that this is what Edge Computing is all about. It is about reducing compute miles and in so doing cutting dramatically the cost of doing compute and carbon emissions.

For example, take a Smart Doorbell. Currently, a Smart Doorbell needs to be constantly connected via your home’s wireless network to carry out its function of identifying and alerting you to the presence of someone at the door.

In lay terms, 24/7 it sits there constantly processing camera frames showing the brick wall next to your front door, sending an image of that brick wall to the cloud over and over, and receiving back message after message that no one is at the front door. You can reduce the power by slowing down the number of photos/frames it takes every second, but if the gap between frames becomes too great, someone can come and go within that gap and avoid detection. So this method has built-in limitations even when working to design, and it works even less well when bandwidth is congested or the connection breaks down.

Enter the neuromorphic Edge Computing revolution.

Edge Computing is, as I said, about reducing the compute miles. By placing the compute as close as possible to the Smart Doorbell, if not right up against it, you immediately reduce the distance between the camera/sensor and the compute/intelligence. This has the advantage of reducing power consumption, reducing latency (the time it takes to send a message back and forth between the camera/sensor and the data centre), and preventing photos of a blank brick wall from congesting the bandwidth and affecting your ability to stream Netflix or Sky Sport.
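The doorbell argument can be made concrete with some back-of-envelope arithmetic. The numbers below (streaming power, on-device inference power, minutes of radio use per day) are purely illustrative assumptions chosen for the sketch, not measured figures for any real product:

```python
# Illustrative comparison: cloud-polling doorbell vs. edge-inference doorbell.
# All power figures and duty cycles are assumptions for illustration only.

def daily_energy_wh(active_power_w, active_hours, idle_power_w, idle_hours):
    """Energy in watt-hours for a simple two-state (active/idle) duty cycle."""
    return active_power_w * active_hours + idle_power_w * idle_hours

# Cloud-polling design: camera + radio stream frames to the cloud 24/7
# (assume 2 W continuous draw).
cloud_wh = daily_energy_wh(active_power_w=2.0, active_hours=24,
                           idle_power_w=0.0, idle_hours=0)

# Edge design: a low-power on-device network watches continuously (assume
# 10 mW) and wakes the 2 W radio only ~5 minutes a day to send real alerts.
edge_wh = daily_energy_wh(active_power_w=2.0, active_hours=5 / 60,
                          idle_power_w=0.010, idle_hours=24 - 5 / 60)

print(f"cloud-polling:  {cloud_wh:.2f} Wh/day")
print(f"edge-inference: {edge_wh:.2f} Wh/day")
print(f"reduction:      ~{cloud_wh / edge_wh:.0f}x")
```

Under these assumed figures the always-streaming design uses roughly two orders of magnitude more energy per day, which is the whole "compute miles" point.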

Edge Computing is being spoken about everywhere, and Nvidia, as the dominant player in the computing space, is calling out about its Edge Computing solutions.

There is probably not one person on the planet with any interest in computing who has not heard of Nvidia or its Nvidia Jetson range of edge computing solutions.

Indeed, the Nvidia Jetson range is a leader in this space across the globe. Its Jetson solutions are to be found everywhere, but for how much longer can Jetson dominate for Nvidia when it is hamstrung by old thinking in a world that is transitioning towards the Fourth Industrial Revolution?

So let's take a quick look at what Nvidia publishes about the Jetson AGX Orin series, the Jetson Orin NX series and the Jetson Orin Nano series, by reference to the advertised performance figures.

Power ranges from 7 Watts to 75 Watts across the line-up:

“Power 15W - 60W 15W - 75W 15W - 40W 10W - 25W 10W - 20W 7W - 15W 7W - 10W”

AI performance ranges from 20 TOPS to 275 TOPS:

“AI Performance 275 TOPS 248 TOPS 200 TOPS 100 TOPS 70 TOPS 40 TOPS 20 TOPS”

These numbers would certainly seem impressive to those who were feeding punch cards into the first IBM mainframe computer, even before you add on the power required to cool a running Jetson with some form of external cooling, such as fans.

As impressive as these numbers are, they clearly do not offer a power budget that can reasonably be embraced by those looking for an Edge Computing solution. Nvidia have, to their credit, recognised this and in consequence introduced the Jetson TX2 series, boasting that the Jetson TX2i, Jetson TX2, Jetson TX2 4GB and Jetson TX2 NX come with AI performance of 1.26 TFLOPS to 1.33 TFLOPS; however, the power budget ranges from 7.5 Watts to 20 Watts, and on top of these numbers you need to allow for external cooling. You might have noticed that while Nvidia has reduced the form factor, the power required remains in the multi-watt region. (1)
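One way to read the advertised figures above is as efficiency, TOPS per watt. The pairings below of each quoted peak-TOPS figure with the top of what appears to be its matching power range are my assumption for illustration; sustained real-world efficiency will differ:

```python
# Rough efficiency reading of the quoted Jetson figures: peak TOPS divided
# by the top of the (assumed) matching advertised power range.

quoted = [(275, 60), (200, 40), (100, 25), (40, 15), (20, 10)]  # (TOPS, max W)

for tops, watts in quoted:
    print(f"{tops} TOPS at {watts} W -> ~{tops / watts:.1f} TOPS/W")
```

Even at the generous peak-over-peak reading, the quoted parts land in the low single digits of TOPS per watt, which is the gap the sub-watt neuromorphic devices discussed below are aimed at.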

To address these failings, a new form of computing referred to as spiking neural network compute is being championed by Intel and IBM, who have over the last decade reached the point of proving out, in research chips, the huge benefits to be had from embracing this new style of compute, not the least of which is a massively reduced power budget. The research at Intel and IBM goes on apace.

Enter, stage left, this little-known Australian company, listed on the Australian Securities Exchange since 2015. This tiny company with fewer than 100 employees has quietly, some would say stealthily, gone about its business, beating Intel and IBM to the punch by launching its first commercial spiking neural network engineering chip in 2020, and it is shortly to release its second-generation technology, which it reports on its website as being capable of the following performance figures across three models or iterations of this technology advancement:

ITERATION ONE:

Max
Efficiency


Ideal for always-on, energy-sipping Sensor Applications:

  • Vibration Detection
  • Anomaly Detection
  • Keyword Spotting
  • Sensor Fusion
  • Low-Res Presence Detection
  • Gesture Detection
Extremely Efficient
@Sensor Inference


Either Standalone or with Min-spec MCU.

Configurable to ideal fit:

  • 1 – 2 nodes (4 NPE/node)
  • Anomaly Detection
  • Keyword Spotting
Expected implementations:

  • 50 MHz – 200 MHz
  • Up to 100 GOPs
Additional Benefits

Eliminates need for CPU intervention

Fully accelerates most feed-forward networks

  • Optional skip connection and TENNs support for more complex networks
  • Completely customizable to fit very constrained power, thermal, and silicon area budgets
  • Enables energy-harvesting and multi-year battery life applications, sub milli-watt sensors

ITERATION TWO:

Sensor
Balanced


Accelerates in hardware most Neural Network Functions:

  • Advanced Keyword Spotting
  • Sensor Fusion
  • Low-Res Presence Detection
  • Gesture Detection & Recognition
  • Object Classification
  • Biometric Recognition
  • Advanced Speech Recognition
  • Object Detection & Semantic Segmentation
Optimal for Sensor Fusion
and Application SoCs


With Min-Spec or Mid-Spec MCU.

Configurable to ideal fit:

  • 3 – 8 nodes (4 NPE/node)
  • 25 – 100 KB per NPE
  • Process, physical IP and other optimizations
Expected implementations:

  • 100 – 500 MHz
  • Up to 1 TOP
Additional Benefits

  • CPU is free for most non-NN compute
  • CPU runs application with minimal NN-management
  • Completely customizable to fit very constrained power, thermal and silicon area budgets
  • Enables intelligent, learning-enabled MCUs and SoCs consuming tens to hundreds of milliwatts or less

ITERATION THREE:

Max
Performance


Detection, Classification, Segmentation, Tracking, and ViT:

  • Gesture Detection
  • Object Classification
  • Advanced Speech Recognition
  • Object Detection & Semantic Segmentation
  • Advanced Sequence Prediction
  • Video Object Detection & Tracking
  • Vision Transformer Networks
Advanced Network-Edge Performance
in a Sensor-Edge Power Envelope


With Mid-Spec MCU or Mid-Spec MPU.

Configurable to ideal fit:

  • 8 – 256 nodes (4 NPE/node) + optional Vision Transformer
  • 100 KB per NPE
  • Process, physical IP and other optimizations
Expected implementations:
  • 800 MHz – 2 GHz
  • Up to 131 TOPs
Additional Benefits
  • CPU is free for most non-NN compute
At this stage I cannot comment on the power budget of these iterations. However, we do know that the first released chip, the AKD1000, which was able to retail for about US$25.00, had a power budget that ran in the micro- to milliwatt range and was claimed by Edge Impulse (5), Quantum Ventura (3 & 4) and Tata Consulting Services (1) to outperform a GPU from Nvidia by a considerable margin across all performance measurements; this new version is an advancement grown out of the underlying neural fabric supporting the AKD1000.

Perhaps the most worrying recent doomsday prediction for Nvidia at the Edge came from Tata Elxsi’s Mr. Sunil Nair, Vice President EMEA and Design Digital, who posted the following on his LinkedIn page, right beside a post about Tata Elxsi partnering with Nvidia in the cloud:

“Cloud computing is commodity. Edge is where the action is.

Thrilled to see Tata Elxsi and Brainchip partner to enable and integrate ultra-low power neuromorphic processors for use cases that would bring huge savings and transform citizen experience. (especially the ones over spending on Nvidia.)”


Mr. Nair has been with Tata Elxsi since 1997. While the partnership with Tata Elxsi has only recently been announced, Brainchip has been working with Tata Consulting Services (1), the Tata Group's research arm, since at least 2019, when they jointly presented the AKD1000 performing a live gesture recognition demonstration. Since that time Tata Consulting Services has released a number of peer-reviewed papers covering the use of the AKD1000, and it can be said that Mr. Nair would be very well informed when it comes to the benefits that Brainchip’s AKIDA technology solutions can bring to the Edge.

Until today, the full release of the next generation, referred to as AKIDA 2.0, has been restricted to a number of select customers; however, the company recently advised in a CEO investor presentation that the full public release is imminent. This prediction seems to be holding true, as in the past week Brainchip’s website has been updated with substantial information regarding AKIDA 2.0, signalling that it is getting close to the launch date.

The interesting aspect of Brainchip is that, while it has remained largely unknown to the general public and the investment world, in its quiet way it has been accumulating a very long and impressive list of corporate and academic engagements, including the following publicly acknowledged group; according to Mr. Rob Telson, Vice President of Ecosystems & Partnerships, hundreds of companies are testing AKIDA technology boards:

1. FORD, 2. VALEO, 3. RENESAS, 4. NASA, 5. TATA Consulting Services, 6. MEGACHIPS, 7. MOSCHIP, 8. SOCIONEXT, 9. PROPHESEE, 10. VVDN, 11. TEKSUN, 12. Ai LABS, 13. NVISO, 14. EMOTION 3D, 15. ARM, 16. EDGE IMPULSE, 17. INTEL Foundries, 18. GLOBAL FOUNDRIES, 19. BLUE RIDGE ENVISIONEERING, 20. MERCEDES BENZ, 21. ANT 61, 22. QUANTUM VENTURA, 23. INFORMATION SYSTEM LABORATORIES, 24. INTELLISENSE SYSTEMS, 25. CVEDIA, 26. LORSER INDUSTRIES, 27. SiFIVE, 28. IPRO Silicon IP, 29. SALESLINK, 30. NUMEN, 31. VORAGO, 32. NANOSE, 33. BIOTOME, 34. OCULI, 35. Magik Eye, 36. GMAC, 37. TATA Elxsi, 38. University of Oklahoma, 41. Arizona State University, 42. Carnegie Mellon University, 43. Rochester Institute of Technology, 44. Drexel University, 45. University of Virginia.

It should be noted that Brainchip has been at pains in its literature and presentations to explain that the AKIDA technology is processor and sensor agnostic and, being fully digital, is scalable and portable across all foundries. The AKD1000 was produced successfully first time in 28 nm at TSMC, and only recently the AKD1500 was received back from GlobalFoundries, successfully fabricated first time in 22 nm FD-SOI.


Now that is one epic mic drop.


TT
 

Tuliptrader

Regular

Terroni2105

Founding Member

Worker122

Regular
Can Nvidia Survive the 4th Industrial Revolution?
by Fact Finder


Though Nvidia is riding high at the moment all indicators are that it has positioned itself on the wrong side of technology history.

While Nvidia has been compressing models to stave off the end of Moore’s Law it’s continued preoccupation with its Von Neumann market dominance has seen it embrace the false dawn offered to it by Large Language Model’s and the cult of CHATGpt.

The fragile nature of Nvidia’s technology future has been exposed in the last week by a small Australian technology company that has been stealthily developing an entirely new, some have said science fiction solution, to the energy resource issue exposed by the power and cost involved in training and running Large Language Models represented by CHATGpt.

The World has been fantasising about what is called Edge Computing for over a decade. The principle underpinning Edge Computing is actually very simple and can best be understood by what might be considered a strange example.

I am sure you have heard of terms like Food Miles, Buy Local, Eat Local, Grow Your Own as ways to decarbonise and save the planet. The simple indisputable proposition being if you reduce the distance between you and your food sources point of production the reduction in transport will reduce the energy consumed in putting the food on your plate.

Putting a bunch of flowers cut from your own garden on the sideboard is infinitely more fuel efficient than trucking, flying, trucking, driving fresh picked flowers from Europe around the globe to you in Australia.

Cutting asparagus in your kitchen garden is infinitely more efficient than buying asparagus from your Local Supermarket that has been cut in Peru and transported to you in Australia.

Now in the above examples I have chosen two products that require refrigeration to keep them fresh after picking to ensure they arrive at your home still useable and which as a result requires transport by jet airliners.

Suffice to say it is immediately obvious that if you are trying to reduce carbon and cost processing your flowers and asparagus at home wins hands down as zero carbon versus tonnes is a no brainer.

Now I know there are practical limitations making this solution difficult for those of us with black thumbs or living in home units to embrace. But that is an argument for another day.

The point is that this is what Edge Computing is all about. It is about reducing compute miles and in so doing cutting dramatically the cost of doing compute and carbon emissions.

For example take a Smart Doorbell. Currently a Smart Doorbell needs to be constantly connected via your homes wireless network to carry out its function of identifying and alerting you to the presence of someone at the door.

In lay terms 24/7 it sits there constantly processing camera frames showing the brick wall next to your front door and sending to the cloud over and over and over an image of the brick wall and receiving back message after message that no one is at the front door. Now you can reduce the power by slowing down the number of photos/frames it takes every second to monitor for someone coming to your front door but if the gap becomes too great between frames someone can come and go in that gap and avoid detection. So this method has built in limitations even when working to design but not so well when bandwidth is congested or connection breaks down.

Enter the neuromorphic Edge Computing revolution.

Edge Computing is about as I said reducing the compute miles. By placing the compute close as possible to the Smart Doorbell if not right up against it you immediately reduce the distance between the camera/sensor and the compute/intelligence. This has the advantage of reducing power consumption, reducing latency (time it takes to send the message back and forth from the camera/sensor to the data centre) and preventing congesting the bandwidth with photos of a blank brick wall affecting your ability to stream Netflix or Sky Sport.

Everywhere Edge Computing is being spoken about and Nvidia as the dominant player in the computing space is calling out about its Edge Computing solutions.

There is probably not one person on the planet with any interest in computing who has not heard of Nvidia or its Nvidia Jetson range of edge computing solutions.

Indeed, the Nvidia Jeson range is a leader in this space across the globe. Its Jetson solutions are to be found everywhere but for how much longer can Jetson dominate for Nvidia when it is hamstrung by old thinking in a World that is transitioning towards the Fourth Industrial Revolution.

So let's take a quick look at what Nvidia publishes about the Nvidia Jetson AGX Orin series, the Jetson Orin NX series and the Jetson Orin Nano series by reference to the advertised performance figures.

Power ranges from 7 watts to 75 watts, and AI performance from 20 TOPS to 275 TOPS, across the seven advertised configurations. Pairing each advertised AI Performance figure with its Power range in the order Nvidia lists them:

  • 275 TOPS, 15W - 60W
  • 248 TOPS, 15W - 75W
  • 200 TOPS, 15W - 40W
  • 100 TOPS, 10W - 25W
  • 70 TOPS, 10W - 20W
  • 40 TOPS, 7W - 15W
  • 20 TOPS, 7W - 10W


These numbers would certainly seem impressive to those who fed punch cards into the first IBM mainframe computers, even after adding the power required to cool a Jetson with some form of external cooling such as fans.

As impressive as these numbers are, they clearly do not offer a power budget that can reasonably be embraced by those looking for an Edge Computing solution. Nvidia has, to its credit, recognised this and in consequence introduced the Jetson TX2 series, boasting that the Jetson TX2i, Jetson TX2, Jetson TX2 4GB and Jetson TX2 NX deliver AI performance of 1.26 TFLOPS to 1.33 TFLOPS. However, the power budget still ranges from 7.5 watts to 20 watts, and on top of these numbers you need to allow for external cooling. You might have noticed that while Nvidia has reduced the form factor, the power required remains in the multi-watt region. (1)
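Dividing each advertised peak TOPS figure by the top of its advertised power range gives a crude efficiency ceiling for these modules. The pairing below follows the order the figures are quoted above and should be treated as an assumption; real efficiency depends on workload and the power mode actually used:

```python
# Advertised peak AI performance (TOPS) paired with the advertised
# power ranges (W) quoted above, in listing order (assumed pairing).
jetson = [
    (275, (15, 60)), (248, (15, 75)), (200, (15, 40)),
    (100, (10, 25)), (70, (10, 20)), (40, (7, 15)), (20, (7, 10)),
]
for tops, (lo, hi) in jetson:
    # Peak TOPS divided by maximum power: an optimistic upper bound.
    print(f"{tops:3d} TOPS at {lo}-{hi} W -> {tops/hi:4.1f} TOPS/W at max power")
```

Even by this generous measure, no module exceeds a handful of TOPS per watt, which is the baseline the neuromorphic approach discussed below is competing against.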

To address these failings, a new form of computing, spiking neural network compute, is being championed by Intel and IBM, which over the last decade have proven out in research chips the huge benefits of this new style of compute, not least a massively reduced power budget. The research at Intel and IBM goes on apace.

Enter, stage left, a little-known Australian company that has been listed on the Australian Stock Exchange since 2015. This tiny company of fewer than 100 employees has quietly, some would say stealthily, gone about its business. It beat Intel and IBM to the punch by launching its first commercial spiking neural network engineering chip in 2020, and it is shortly to release its second-generation technology, which its website reports as capable of the following performance figures across three models, or iterations, of the technology:

ITERATION ONE:

Max
Efficiency


Ideal for always-on, energy-sipping Sensor Applications:

  • Vibration Detection
  • Anomaly Detection
  • Keyword Spotting
  • Sensor Fusion
  • Low-Res Presence Detection
  • Gesture Detection
Extremely Efficient
@Sensor Inference


Either Standalone or with Min-spec MCU.

Configurable to ideal fit:

  • 1 – 2 nodes (4 NPE/node)
  • Anomaly Detection
  • Keyword Spotting
Expected implementations:

  • 50 MHz – 200 MHz
  • Up to 100 GOPs
Additional Benefits

Eliminates need for CPU intervention

Fully accelerates most feed-forward networks

  • Optional skip connection and TENNs support for more complex networks
  • Completely customizable to fit very constrained power, thermal, and silicon area budgets
  • Enables energy-harvesting and multi-year battery life applications, sub milli-watt sensors

ITERATION 2:

Sensor
Balanced


Accelerates in hardware most Neural Network Functions:

  • Advanced Keyword Spotting
  • Sensor Fusion
  • Low-Res Presence Detection
  • Gesture Detection & Recognition
  • Object Classification
  • Biometric Recognition
  • Advanced Speech Recognition
  • Object Detection & Semantic Segmentation
Optimal for Sensor Fusion
and Application SoCs


With Min-Spec or Mid-Spec MCU.

Configurable to ideal fit:

  • 3 – 8 nodes (4 NPE/node)
  • 25 KB – 100 KB per NPE
  • Process, physical IP and other optimizations
Expected implementations:

  • 100 – 500 MHz
  • Up to 1 TOP
Additional Benefits

  • CPU is free for most non-NN compute
  • CPU runs application with minimal NN-management
  • Completely customizable to fit very constrained power, thermal and silicon area budgets
  • Enables intelligent, learning-enabled MCUs and SoCs consuming tens to hundreds of milliwatts or less

ITERATION 3:

Max
Performance


Detection, Classification, Segmentation, Tracking, and ViT:

  • Gesture Detection
  • Object Classification
  • Advanced Speech Recognition
  • Object Detection & Semantic Segmentation
  • Advanced Sequence Prediction
  • Video Object Detection & Tracking
  • Vision Transformer Networks
Advanced Network-Edge Performance
in a Sensor-Edge Power Envelope


With Mid-Spec MCU or Mid-Spec MPU.

Configurable to ideal fit:

  • 8 – 256 nodes (4 NPE/node) + optional Vision Transformer
  • 100 KB per NPE
  • Process, physical IP and other optimizations
Expected implementations:
  • 800 MHz – 2 GHz
  • Up to 131 TOPs
Additional Benefits
  • CPU is free for most non-NN compute
At this stage I cannot comment on the power budget of these iterations. However, we do know that the first released chip, the AKD1000, retailed for about US$25.00 and had a power budget in the micro- to milliwatt range, and it was claimed by Edge Impulse (5), Quantum Ventura (3 & 4) and Tata Consulting Services (1) to outperform an Nvidia GPU by a considerable margin across all performance measurements. This new version is an advancement grown out of the underlying neural fabric supporting the AKD1000.
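The Edge Impulse numbers cited in reference (5), roughly 100 FPS at about 9 mW, can be turned into an energy-per-inference figure. The 10 W comparison device below is an assumed round number for a small edge GPU, not a measured one:

```python
# Energy per inference = power / throughput.
power_w = 9e-3                      # ~9 mW reported for the Akida demo (ref 5)
fps = 100                           # ~100 FPS reported for the same demo
energy_per_frame_j = power_w / fps
print(f"Akida demo: {energy_per_frame_j * 1e6:.0f} microjoules per frame")

# For contrast, an assumed 10 W small edge GPU at the same 100 FPS:
gpu_power_w = 10
gpu_energy_j = gpu_power_w / fps
print(f"Assumed 10 W GPU: {gpu_energy_j * 1e6:.0f} microjoules per frame")
```

On those figures the demo comes out around 90 microjoules per frame, roughly three orders of magnitude below the assumed multi-watt comparison point, which is why battery and energy-harvesting applications become plausible.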

Perhaps the most worrying recent doomsday prediction for Nvidia at the Edge came from Tata Elxsi’s Mr. Sunil Nair Vice President EMEA and Design Digital who posted on his LinkedIn page right beside a post about Tata Elxsi partnering with Nvidia in the Cloud the following:

“Cloud computing is commodity. Edge is where the action is.

Thrilled to see Tata Elxsi and Brainchip partner to enable and integrate ultra-low power neuromorphic processors for use cases that would bring huge savings and transform citizen experience. (especially the ones over spending on Nvidia.)”


Mr. Nair has been with Tata Elxsi since 1997. While the partnership with Tata Elxsi has only recently been announced, Brainchip has been working with Tata Consulting Services (1), the Tata Group's research arm, since at least 2019, when they jointly presented the AKD1000 performing a live gesture recognition demonstration. Since that time Tata Consulting Services has released a number of peer-reviewed papers covering the use of the AKD1000, so it can be said that Mr. Nair is very well informed about the benefits that Brainchip's AKIDA technology solutions can bring to the Edge.

Until today, the full release of the next generation, referred to as AKIDA 2.0, has been restricted to a number of select customers; however, the company recently advised in a CEO investor presentation that the full public release is imminent. This prediction seems to be holding true: in the past week Brainchip's website has been updated with substantial information regarding AKIDA 2.0, signalling that it is close to launch.

The interesting aspect of Brainchip Inc is that, while it has remained largely unknown to the general public and the investment world, it has quietly accumulated a very long and impressive list of corporate and academic engagements, including the following publicly acknowledged group. According to Mr. Rob Telson, Vice President of Ecosystems & Partnerships, hundreds of companies are testing AKIDA technology boards:

1. FORD, 2. VALEO, 3. RENESAS, 4. NASA, 5. TATA Consulting Services, 6. MEGACHIPS, 7. MOSCHIP, 8. SOCIONEXT, 9. PROPHESEE, 10. VVDN, 11. TEKSUN, 12. Ai LABS, 13. NVISO, 14. EMOTION 3D, 15. ARM, 16. EDGE IMPULSE, 17. INTEL Foundries, 18. GLOBAL FOUNDRIES, 19. BLUE RIDGE ENVISIONEERING, 20. MERCEDES BENZ, 21. ANT 61, 22. QUANTUM VENTURA, 23. INFORMATION SYSTEM LABORATORIES, 24. INTELLISENSE SYSTEMS, 25. CVEDIA, 26. LORSER INDUSTRIES, 27. SiFIVE, 28. IPRO Silicon IP, 29. SALESLINK, 30. NUMEN, 31. VORAGO, 32. NANOSE, 33. BIOTOME, 34. OCULI, 35. Magik Eye, 36. GMAC, 37. TATA Elxsi, 38. University of Oklahoma, 39. Arizona State University, 40. Carnegie Mellon University, 41. Rochester Institute of Technology, 42. Drexel University, 43. University of Virginia.

It should be noted that Brainchip has been at pains in its literature and presentations to explain that the AKIDA technology is processor and sensor agnostic and, being fully digital, is scalable and portable across all foundries. The AKD1000 was produced successfully, first time and every time, in 28nm at TSMC, and only recently the AKD1500 was received back from GlobalFoundries, successfully fabricated in 22nm FDSOI, again first time and every time.

The individual who gave the underlying AKIDA technology life is Peter van der Made, one of the founders of Brainchip, whose full vision is to create a beneficial form of Artificial General Intelligence by about 2030. This vision plays out in a series of steps; AKIDA 3.0 is presently in development, and according to the company's CEO, Sean Hehir, in a very recent investor presentation, each step is targeted to take 12 to 18 months. Historically, Peter van der Made and his co-founder Anil Mankar have impressed with their capacity to deliver on their technology development timelines, and by always having a little extra, what they like to call secret sauce, with each new technology reveal.

That little extra secret sauce with AKIDA 2.0 was the release of TENNs (6) and ViT (7), which provide an unprecedented leap beyond what even the most optimistic expected to be possible at the far Edge, including devices powered by energy harvesting. It is impossible to do justice here to what they bring to the Edge Compute revolution, but fortunately, even though patents are pending, Brainchip has published a White Paper (6) and videos providing easy-to-follow, plain-English explanations. (7)

By the way, if anyone is up for a bit of regression analysis (8) using AKIDA technology, Brainchip has that covered too. While others were opining that spiking neural networks could not do regression analysis, Brainchip was demonstrating it running on the AKD1000, monitoring vibration in rail infrastructure.

There is so much to delight those who love to read about and explore science fiction becoming reality when peeling back the petals of the rose that is the Brainchip AKIDA technology revolution.

In concluding: in Australia, the ignorant and poor of intellect have at times treated the vision of Peter van der Made with a savagery of doubt usually reserved for those who claim to have been abducted by aliens. As is usually the case, these critics have been members of the so-called sophisticated investor class, and though they have little credibility in their areas of claimed expertise, they drown in their own ignorance when it comes to the science of neuromorphic computing. If tempted to listen to such individuals about neuromorphic computing, one is well served to remember the life of Robert Goddard - https://www.msn.com/en-au/news/aust...1&cvid=297d1358fb8c45ea9b1ae445a4985d75&ei=51

REFERENCES:

1. Low Power & Low Latency Cloud Cover Detection in Small Satellites Using On-board Neuromorphic Processors

Chetan Kadway, Sounak Dey, Arijit Mukherjee, Arpan Pal, Gilles Bézard

2023 International Joint Conference on Neural Networks (IJCNN), 1-8, 2023

Emergence of small satellites for earth observation missions has opened up new horizons for space research but at the same time posed newer challenges of limited power and compute resource arising out of the size & weight constraints imposed by these satellites. The currently evolving neuromorphic computing paradigm shows promise in terms of energy efficiency and may possibly be exploited here. In this paper, we try to prove the applicability of neuromorphic computing for on-board data processing in satellites by creating a 2-stage hierarchical cloud cover detection application for multi-spectral earth observation images. We design and train a CNN and convert it into SNN using the CNN2SNN conversion toolkit of Brainchip Akida neuromorphic platform. We achieve 95.46% accuracy while power consumption and latency are at least 35x and 3.4x more efficient respectively in stage-1 (and 230x & 7x in stage-2) compared to the equivalent CNN running on Jetson TX2.

https://ieeexplore.ieee.org/abstract/document/10191569/
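The CNN2SNN conversion mentioned in this abstract rests on representing trained CNN activations as spike rates. BrainChip's toolkit is proprietary to the Akida platform, but the basic rate-coding idea behind such conversions can be sketched in plain Python (a toy illustration, not the Akida implementation):

```python
import random

def rate_code(activation, timesteps=100):
    """Emit a binary spike train whose firing probability per timestep
    equals a normalised, ReLU-style activation clamped to [0, 1]."""
    p = min(max(activation, 0.0), 1.0)
    return [1 if random.random() < p else 0 for _ in range(timesteps)]

def decode(spikes):
    """Recover the activation estimate as the observed firing rate."""
    return sum(spikes) / len(spikes)

random.seed(0)
for a in (0.1, 0.5, 0.9):
    est = decode(rate_code(a, timesteps=2000))
    print(f"activation {a:.1f} -> decoded firing rate {est:.2f}")
```

The decoded rate converges on the original activation as the spike window grows, which is why a converted network can retain accuracy close to the source CNN while computing only with sparse binary events.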

2. An energy-efficient AkidaNet for morphologically similar weeds and crops recognition at the Edge

Vi Nguyen Thanh Le, Kevin Tsiknos, Kristofor D Carlson, Selam Ahderom

2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 1-8, 2022

Wild radish weeds have always been a persistent problem in agriculture due to their quick and undesirable spread. Therefore, the accurate identification and effective control of wild radish in canola crops at early growth stage play an indispensable role in reducing herbicide rates and enhancing agricultural productivity. In this paper, an energy efficient and lightweight AkidaNet model is developed to accurately identify broad-leaf weeds and crops with similar morphology at four different growth stages. Experiments performed on a published bccr-segset dataset show that our proposed method achieves competitive performance, a classification accuracy of 99.73%, compared to several well-known CNNs architectures. Next, we quantized and converted the model into a Spiking Neural Network for implementation on a spike-based neuromorphic hardware device. The converted model is not only superior in low-latency and low-power consumption but also retains a similar accuracy to the original model. We also employ Grad-CAM to validate whether our model focuses on important features in plant images to identify wild radish weeds in crops

https://www.researchgate.net/profil...r-weeds-and-crops-recognition-at-the-Edge.pdf


3. Table 6. Comparison of SWaP-C profiles for CPU/GPU platforms and neuromorphic versions:

  • CPU/GPU platform, NVIDIA A100: 26.7 cm long x 11.2 cm tall x 3.5 cm wide (10.5 x 4.4 x 1.4 in); 250 W; cost $30,000 USD (est.)
  • Neuromorphic version, USB key form factor: 5.1 cm long x 1.3 cm tall x 0.6 cm wide (2 x 0.5 x 0.3 in); 1 W; $50 USD (est.)

https://www.sciencedirect.com/scien...c65de04&pid=1-s2.0-S1877050922017860-main.pdf (Page 494)

4. “In this federally funded phase 2 program, Quantum Ventura is creating state-of-the-art cybersecurity applications for the U.S. Department of Energy under the Small Business Innovation Research (SBIR) Program. The program is focused on “Cyber threat-detection using neuromorphic computing,” which aims to develop an advanced approach to detect and prevent cyberattacks on computer networks and critical infrastructure using brain-inspired artificial intelligence.

“Neuromorphic computing is an ideal technology for threat detection because of its small size and power, accuracy, and in particular, its ability to learn and adapt, since attackers are constantly changing their tactics,” said Srini Vasan, President and CEO of Quantum Ventura Inc. “We believe that our solution incorporating Brainchip’s Akida will be a breakthrough for defending against cyber threats and address additional applications as well.””

https://brainchip.com/brainchip-and-quantum-ventura-partner-to-develop-cyber-threat-detection/

5. Running the out-of-the-box demos on the Akida Raspberry Pi development kit I have was very impressive achieving, according to the statistics, approximately 100 FPS for a 9-mW power dissipation.

All told I am very impressed with the BrainChip Akida neuromorphic processor. The performance of the networks implemented is very good, while the power used is also exceptionally low. These two parameters are critical parameters for embedded solutions deployed at the edge.

Project links

  1. Visual wake word: https://studio.edgeimpulse.com/studio/224143
  2. Anomaly detection: https://studio.edgeimpulse.com/studio/261242
  3. CIFAR10: https://studio.edgeimpulse.com/studio/257103
  4. Keyword spotting: https://studio.edgeimpulse.com/studio/257193
Adiuvo is a consultancy that provides embedded systems design, training, and marketing services. Taylor is Founder and Principal Consultant of the company, teaches about embedded systems at the University of Lincoln, and is host of the podcast “The Embedded Hour.”

https://www.edgeimpulse.com/blog/brainchip-akida-and-edge-impulse

6. https://brainchip.com/temporal-event-based-neural-networks-a-new-approach-to-temporal-processing/

7.

8. https://brainchip.com/brainchip-demonstrates-regression-analysis-with-vibration-sensors/

My opinion only so DYOR

Hi ya Fact Finder,
Incisive, easy read for a layman like me.
This should be pinned next to your “ The Brainchip Story” post.
 

Slade

Top 20
Can Nvidia Survive the 4th Industrial Revolution?
by Fact Finder


Though Nvidia is riding high at the moment, all indicators are that it has positioned itself on the wrong side of technology history.

While Nvidia has been compressing models to stave off the end of Moore's Law, its continued preoccupation with its Von Neumann market dominance has seen it embrace the false dawn offered by Large Language Models and the cult of ChatGPT.

The fragile nature of Nvidia's technology future has been exposed in the last week by a small Australian technology company that has been stealthily developing an entirely new, some have said science-fiction, solution to the energy resource issue exposed by the power and cost involved in training and running Large Language Models such as ChatGPT.

The World has been fantasising about what is called Edge Computing for over a decade. The principle underpinning Edge Computing is actually very simple and can best be understood by what might be considered a strange example.

I am sure you have heard of terms like Food Miles, Buy Local, Eat Local and Grow Your Own as ways to decarbonise and save the planet. The simple, indisputable proposition is that if you reduce the distance between you and your food source's point of production, the reduction in transport will reduce the energy consumed in putting the food on your plate.

Putting a bunch of flowers cut from your own garden on the sideboard is infinitely more fuel efficient than trucking, flying, trucking and driving fresh-picked flowers from Europe around the globe to you in Australia.

Cutting asparagus in your kitchen garden is infinitely more efficient than buying asparagus from your Local Supermarket that has been cut in Peru and transported to you in Australia.

Now, in the above examples I have chosen two products that require refrigeration after picking to arrive at your home still usable, and which as a result require transport by jet airliner.

Suffice to say, it is immediately obvious that if you are trying to reduce carbon and cost, processing your flowers and asparagus at home wins hands down: zero carbon versus tonnes is a no-brainer.

Now I know there are practical limitations making this solution difficult for those of us with black thumbs or living in home units to embrace. But that is an argument for another day.

The point is that this is what Edge Computing is all about. It is about reducing compute miles and, in so doing, dramatically cutting both the cost of compute and carbon emissions.

For example, take a Smart Doorbell. Currently a Smart Doorbell needs to be constantly connected via your home's wireless network to carry out its function of identifying and alerting you to the presence of someone at the door.


https://www.researchgate.net/profil...r-weeds-and-crops-recognition-at-the-Edge.pdf


3.Table 6.

Comparison of SWaP-C profiles for CPU/GPU platforms and neuromorphic versions

CPU/GPU platforms SWaP

NVIDIA A100

26.7 long x 11.2 tall x 3.5 wide cm (10.5 x 4.4 x 1.4 in)

250 W

Cost $30,000 USD (est.)

Neuromorphic versions

USB key form factor

5.1 long 1.3tall x 0.6 wide (2x0.5x0.3 in)

1W

$50 USD (est.)

https://www.sciencedirect.com/scien...c65de04&pid=1-s2.0-S1877050922017860-main.pdf (Page 494)

4. “In this federally funded phase 2 program, Quantum Ventura is creating state-of-the-art cybersecurity applications for the U.S. Department of Energy under the Small Business Innovation Research (SBIR) Program. The program is focused on “Cyber threat-detection using neuromorphic computing,” which aims to develop an advanced approach to detect and prevent cyberattacks on computer networks and critical infrastructure using brain-inspired artificial intelligence.

“Neuromorphic computing is an ideal technology for threat detection because of its small size and power, accuracy, and in particular, its ability to learn and adapt, since attackers are constantly changing their tactics,” said Srini Vasan, President and CEO of Quantum Ventura Inc. “We believe that our solution incorporating Brainchip’s Akida will be a breakthrough for defending against cyber threats and address additional applications as well.””

https://brainchip.com/brainchip-and-quantum-ventura-partner-to-develop-cyber-threat-detection/

5. Running the out-of-the-box demos on the Akida Raspberry Pi development kit I have was very impressive achieving, according to the statistics, approximately 100 FPS for a 9-mW power dissipation.

All told I am very impressed with the BrainChip Akida neuromorphic processor. The performance of the networks implemented is very good, while the power used is also exceptionally low. These two parameters are critical parameters for embedded solutions deployed at the edge.

Project links

  1. Visal wake word: https://studio.edgeimpulse.com/studio/224143
  2. Anomaly detection: https://studio.edgeimpulse.com/studio/261242
  3. CIFAR10: https://studio.edgeimpulse.com/studio/257103
  4. Keyword spotting: https://studio.edgeimpulse.com/studio/257193
Adiuvo is a consultancy that provides embedded systems design, training, and marketing services. Taylor is Founder and Principal Consultant of the company, teaches about embedded systems at the University of Lincoln, and is host of the podcast “The Embedded Hour.”

https://www.edgeimpulse.com/blog/brainchip-akida-and-edge-impulse

6.https://brainchip.com/temporal-event-based-neural-networks-a-new-approach-to-temporal-processing/

7.

8.https://brainchip.com/brainchip-demonstrates-regression-analysis-with-vibration-sensors/

My opinion only so DYOR

Epic. Thank you FF
 
  • Like
  • Love
  • Fire
Reactions: 23 users

Sosimple

Regular
Can Nvidia Survive the 4th Industrial Revolution?
by Fact Finder


Though Nvidia is riding high at the moment all indicators are that it has positioned itself on the wrong side of technology history.

While Nvidia has been compressing models to stave off the end of Moore’s Law, its continued preoccupation with its von Neumann market dominance has seen it embrace the false dawn offered by large language models and the cult of ChatGPT.

The fragile nature of Nvidia’s technology future has been exposed in the last week by a small Australian technology company that has been stealthily developing an entirely new, some have said science-fiction, solution to the energy problem laid bare by the power and cost involved in training and running large language models such as ChatGPT.

The World has been fantasising about what is called Edge Computing for over a decade. The principle underpinning Edge Computing is actually very simple and can best be understood by what might be considered a strange example.

I am sure you have heard of terms like Food Miles, Buy Local, Eat Local and Grow Your Own as ways to decarbonise and save the planet. The simple, indisputable proposition is that if you reduce the distance between you and your food's point of production, the reduction in transport will reduce the energy consumed in putting the food on your plate.

Putting a bunch of flowers cut from your own garden on the sideboard is infinitely more fuel-efficient than trucking, flying, trucking and driving fresh-picked flowers from Europe around the globe to you in Australia.

Cutting asparagus in your kitchen garden is infinitely more efficient than buying asparagus from your local supermarket that was cut in Peru and transported to you in Australia.

Now, in the above examples I have chosen two products that require refrigeration to keep them fresh after picking, to ensure they arrive at your home still usable, and which as a result require transport by jet airliner.

Suffice to say it is immediately obvious that if you are trying to reduce carbon and cost, processing your flowers and asparagus at home wins hands down; zero carbon versus tonnes is a no-brainer.

Now I know there are practical limitations making this solution difficult for those of us with black thumbs or living in home units to embrace. But that is an argument for another day.

The point is that this is what Edge Computing is all about. It is about reducing compute miles and, in so doing, dramatically cutting both the cost of doing compute and the carbon emissions.

For example, take a Smart Doorbell. Currently a Smart Doorbell needs to be constantly connected via your home's wireless network to carry out its function of identifying and alerting you to the presence of someone at the door.

In lay terms, it sits there 24/7 processing camera frames showing the brick wall next to your front door, sending image after image of that brick wall to the cloud and receiving back message after message that no one is there. You can reduce the power by lowering the number of photos/frames it captures each second, but if the gap between frames grows too large, someone can come and go in that gap and avoid detection. So this method has built-in limitations even when working as designed, and it fares worse still when bandwidth is congested or the connection drops out.

Enter the neuromorphic Edge Computing revolution.

Edge Computing is, as I said, about reducing the compute miles. By placing the compute as close as possible to the Smart Doorbell, if not right up against it, you immediately reduce the distance between the camera/sensor and the compute/intelligence. This has the advantages of reducing power consumption, reducing latency (the time it takes to send a message back and forth between the camera/sensor and the data centre) and keeping the bandwidth free of photos of a blank brick wall that would otherwise affect your ability to stream Netflix or Sky Sport.
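The doorbell argument above can be put into rough numbers. The sketch below is a back-of-envelope energy comparison using entirely hypothetical figures of my own choosing (the `cloud_radio_mw`, `edge_infer_mw` and duty-cycle values are illustrative assumptions, not measurements from any product): a doorbell streaming frames to the cloud all day versus one running inference locally and only waking the radio for rare detection events.

```python
# Back-of-envelope sketch, hypothetical numbers only: a doorbell that
# streams every frame to the cloud vs. one that runs inference locally
# and transmits only when something is actually detected.

def daily_energy_j(power_mw: float, duty_cycle: float = 1.0) -> float:
    """Energy in joules consumed over 24 hours at the given average power."""
    seconds_per_day = 24 * 3600
    return (power_mw / 1000.0) * duty_cycle * seconds_per_day

# Assumed figures (illustrative, not from any datasheet):
cloud_radio_mw = 800.0   # Wi-Fi radio streaming frames continuously
edge_infer_mw = 1.0      # always-on milliwatt-class local inference
edge_event_duty = 0.001  # radio awake ~0.1% of the day for rare events

cloud_j = daily_energy_j(cloud_radio_mw)
edge_j = daily_energy_j(edge_infer_mw) + daily_energy_j(cloud_radio_mw, edge_event_duty)

print(f"cloud-streaming: {cloud_j / 1000:.1f} kJ/day")
print(f"edge-first:      {edge_j / 1000:.2f} kJ/day")
print(f"ratio:           {cloud_j / edge_j:.0f}x")
```

Whatever the exact figures, the structure of the result is the point: the always-on radio dominates, so moving the decision next to the sensor turns a continuous cost into a rare one.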

Everywhere Edge Computing is being spoken about and Nvidia as the dominant player in the computing space is calling out about its Edge Computing solutions.

There is probably not one person on the planet with any interest in computing who has not heard of Nvidia or its Nvidia Jetson range of edge computing solutions.

Indeed, the Nvidia Jetson range is a leader in this space across the globe. Its Jetson solutions are to be found everywhere, but for how much longer can Jetson dominate for Nvidia when it is hamstrung by old thinking in a world transitioning towards the Fourth Industrial Revolution?

So let's take a quick look at what Nvidia publishes about the Jetson AGX Orin, Jetson Orin NX and Jetson Orin Nano series, by reference to the advertised performance figures.

Power ranges from 7 watts to 75 watts:

“Power 15W - 60W | 15W - 75W | 15W - 40W | 10W - 25W | 10W - 20W | 7W - 15W | 7W - 10W”

AI performance ranges from 20 TOPS to 275 TOPS:

“AI Performance 275 TOPS | 248 TOPS | 200 TOPS | 100 TOPS | 70 TOPS | 40 TOPS | 20 TOPS”


These numbers would certainly seem impressive to those who fed punch cards into the first IBM mainframe computers, even before you add on the power required to externally cool Jetson with fans or some other form of cooling.

As impressive as these numbers are, they clearly do not offer a power budget that can reasonably be embraced by those looking for an Edge Computing solution. Nvidia has, to its credit, recognised this and in consequence introduced the Jetson TX2 series, boasting that the Jetson TX2i, Jetson TX2, Jetson TX2 4GB and Jetson TX2 NX deliver AI performance of 1.26 to 1.33 TFLOPS. However, the power budget ranges from 7.5 watts to 20 watts, and on top of these numbers you need to allow for external cooling. You might have noticed that while Nvidia has reduced the form factor, the power required remains in the multi-watt region. (1)

To address these failings, a new form of computing referred to as spiking neural network compute is being championed by Intel and IBM, and over the last decade they have reached the point of proving out, in research chips, the huge benefits to be had by embracing this new style of compute, not least a massively reduced power budget. The research at Intel and IBM goes on apace.

Enter stage left this little-known Australian company, listed on the Australian Securities Exchange since 2015. This tiny company with fewer than 100 employees has quietly, some would say stealthily, gone about its business and beaten Intel and IBM to the punch, launching its first commercial spiking neural network engineering chip in 2020. It is shortly to release its second-generation technology, which it reports on its website as being capable of the following performance figures across three models or iterations of this technology advancement:

ITERATION ONE:

Max
Efficiency


Ideal for always-on, energy-sipping Sensor Applications:

  • Vibration Detection
  • Anomaly Detection
  • Keyword Spotting
  • Sensor Fusion
  • Low-Res Presence Detection
  • Gesture Detection
Extremely Efficient
@Sensor Inference


Either Standalone or with Min-spec MCU.

Configurable to ideal fit:

  • 1 – 2 nodes (4 NPE/node)
  • Anomaly Detection
  • Keyword Spotting
Expected implementations:

  • 50 – 200 MHz
  • Up to 100 GOPs
Additional Benefits

Eliminates need for CPU intervention

Fully accelerates most feed-forward networks

  • Optional skip connection and TENNs support for more complex networks
  • Completely customizable to fit very constrained power, thermal, and silicon area budgets
  • Enables energy-harvesting and multi-year battery life applications, sub milli-watt sensors

ITERATION 2:

Sensor
Balanced


Accelerates in hardware most Neural Network Functions:

  • Advanced Keyword Spotting
  • Sensor Fusion
  • Low-Res Presence Detection
  • Gesture Detection & Recognition
  • Object Classification
  • Biometric Recognition
  • Advanced Speech Recognition
  • Object Detection & Semantic Segmentation
Optimal for Sensor Fusion
and Application SoCs


With Min-Spec or Mid-Spec MCU.

Configurable to ideal fit:

  • 3 – 8 nodes (4 NPE/node)
  • 25 – 100 KB per NPE
  • Process, physical IP and other optimizations
Expected implementations:

  • 100 – 500 MHz
  • Up to 1 TOP
Additional Benefits

  • CPU is free for most non-NN compute
  • CPU runs application with minimal NN-management
  • Completely customizable to fit very constrained power, thermal and silicon area budgets
  • Enables intelligent, learning-enabled MCUs and SoCs consuming tens to hundreds of milliwatts or less

ITERATION 3:

Max
Performance


Detection, Classification, Segmentation, Tracking, and ViT:

  • Gesture Detection
  • Object Classification
  • Advanced Speech Recognition
  • Object Detection & Semantic Segmentation
  • Advanced Sequence Prediction
  • Video Object Detection & Tracking
  • Vision Transformer Networks
Advanced Network-Edge Performance
in a Sensor-Edge Power Envelope


With Mid-Spec MCU or Mid-Spec MPU.

Configurable to ideal fit:

  • 8 – 256 nodes (4 NPE/node) + optional Vision Transformer
  • 100 KB per NPE
  • Process, physical IP and other optimizations
Expected implementations:
  • 800 MHz – 2 GHz
  • Up to 131 TOPs
Additional Benefits
  • CPU is free for most non-NN compute
At this stage I cannot comment on the power budget of these iterations. However, we do know that the first released chip, the AKD1000, which was able to retail for about US$25.00, had a power budget in the micro- to milliwatt range and was claimed by Edge Impulse (5), Quantum Ventura (3 & 4) and Tata Consulting Services (1) to outperform an Nvidia GPU by a considerable margin across all performance measurements. This new version is an advancement grown out of the same underlying neural fabric supporting the AKD1000.

Perhaps the most worrying recent doomsday prediction for Nvidia at the Edge came from Tata Elxsi's Mr. Sunil Nair, Vice President EMEA and Design Digital, who posted the following on his LinkedIn page right beside a post about Tata Elxsi partnering with Nvidia in the cloud:

“Cloud computing is commodity. Edge is where the action is.

Thrilled to see Tata Elxsi and Brainchip partner to enable and integrate ultra-low power neuromorphic processors for use cases that would bring huge savings and transform citizen experience. (especially the ones over spending on Nvidia.)”


Mr. Nair has been with Tata Elxsi since 1997. While the partnership with Tata Elxsi has only recently been announced, Brainchip has been working with Tata Consulting Services (1), the TATA Group's research arm, since at least 2019, when they jointly presented the AKD1000 performing a live gesture-recognition demonstration. Since that time Tata Consulting Services has released a number of peer-reviewed papers covering the use of the AKD1000, and it can be said that Mr. Nair would be very well informed about the benefits that Brainchip's AKIDA technology solutions can bring to the Edge.

The full release of the next generation, referred to as AKIDA 2.0, has until now been restricted to a number of select customers; however, the company recently advised in a CEO investor presentation that the full public release is imminent. This prediction seems to be holding true, as in the past week Brainchip's website has been updated with substantial information regarding AKIDA 2.0, signalling it is getting close to the launch date.

The interesting aspect of Brainchip Inc is that while it has remained largely unknown to the general public and the investment world, in its quiet way it has been accumulating a very long and impressive list of corporate and academic engagements, including the following publicly acknowledged group. According to Mr. Rob Telson, Vice President of Ecosystems & Partnerships, they have hundreds of companies testing AKIDA technology boards:

1. FORD, 2. VALEO, 3. RENESAS, 4. NASA, 5. TATA Consulting Services, 6. MEGACHIPS, 7. MOSCHIP, 8. SOCIONEXT, 9. PROPHESEE, 10. VVDN, 11. TEKSUN, 12. Ai LABS, 13. NVISO, 14. EMOTION 3D, 15. ARM, 16. EDGE IMPULSE, 17. INTEL Foundries, 18. GLOBAL FOUNDRIES, 19. BLUE RIDGE ENVISIONEERING, 20. MERCEDES BENZ, 21. ANT 61, 22. QUANTUM VENTURA, 23. INFORMATION SYSTEM LABORATORIES, 24. INTELLISENSE SYSTEMS, 25. CVEDIA, 26. LORSER INDUSTRIES, 27. SiFIVE, 28. IPRO Silicon IP, 29. SALESLINK, 30. NUMEN, 31. VORAGO, 32. NANOSE, 33. BIOTOME, 34. OCULI, 35. Magik Eye, 36. GMAC, 37. TATA Elxsi, 38. University of Oklahoma, 39. Arizona State University, 40. Carnegie Mellon University, 41. Rochester Institute of Technology, 42. Drexel University, 43. University of Virginia.

It should be noted that Brainchip has been at pains in its literature and presentations to explain that the AKIDA technology is processor and sensor agnostic and, being fully digital, is scalable and portable across all foundries. The AKD1000 was successfully produced first time at TSMC in 28 nm, and only recently the AKD1500 was received back from GlobalFoundries, successfully fabricated first time in 22 nm FD-SOI.

The individual who gave the underlying AKIDA technology life is Peter van der Made, one of the founders of Brainchip, whose full vision is to create a beneficial form of Artificial General Intelligence by about 2030. This vision plays out in a series of steps, and AKIDA 3.0 is presently in development; according to the company's CEO, Sean Hehir, in a very recent investor presentation, each step is targeted to take 12 to 18 months. Historically, Peter van der Made and his co-founder Anil Mankar have impressed with their capacity to deliver on their technology development timelines, always having a little extra, what they like to call secret sauce, with each new technology reveal.

That little extra secret sauce with AKIDA 2.0 was the release of TENNs (6) and ViT (7), which provide an unprecedented leap beyond what even the most optimistic expected to be possible at the far Edge using energy harvesting to power these devices. It is impossible to do justice here to what they bring to the Edge Compute revolution, but fortunately, even though patents are pending, Brainchip has published a white paper (6) and videos providing easy-to-follow, plain-English explanations. (7)

By the way, anyone up for a bit of regression analysis (8) using AKIDA technology? Brainchip has that covered too. While others were opining that spiking neural networks could not do regression analysis, Brainchip was demonstrating it running on the AKD1000, monitoring vibration in rail infrastructure.

There is so much to delight those who love to read about and explore science fiction becoming reality when peeling back the petals of the rose that is the Brainchip AKIDA technology revolution.

In conclusion: in Australia, the ignorant and poor of intellect have at times treated the vision of Peter van der Made with a savagery of doubt usually reserved for those who claim to have been abducted by aliens. As is usually the case, these critics have been members of the so-called sophisticated investor class, and even though they have little credibility in their areas of claimed expertise, they drown in their own ignorance when it comes to the science of neuromorphic computing. If tempted to listen to such individuals about the science of neuromorphic computing, one is well served to remember the life of Robert Goddard - https://www.msn.com/en-au/news/aust...1&cvid=297d1358fb8c45ea9b1ae445a4985d75&ei=51

REFERENCES:

1. Low Power & Low Latency Cloud Cover Detection in Small Satellites Using On-board Neuromorphic Processors

Chetan Kadway, Sounak Dey, Arijit Mukherjee, Arpan Pal, Gilles Bézard

2023 International Joint Conference on Neural Networks (IJCNN), 1-8, 2023

Emergence of small satellites for earth observation missions has opened up new horizons for space research but at the same time posed newer challenges of limited power and compute resource arising out of the size & weight constraints imposed by these satellites. The currently evolving neuromorphic computing paradigm shows promise in terms of energy efficiency and may possibly be exploited here. In this paper, we try to prove the applicability of neuromorphic computing for on-board data processing in satellites by creating a 2-stage hierarchical cloud cover detection application for multi-spectral earth observation images. We design and train a CNN and convert it into SNN using the CNN2SNN conversion toolkit of Brainchip Akida neuromorphic platform. We achieve 95.46% accuracy while power consumption and latency are at least 35x and 3.4x more efficient respectively in stage-1 (and 230x & 7x in stage-2) compared to the equivalent CNN running on Jetson TX2.

https://ieeexplore.ieee.org/abstract/document/10191569/

2. An energy-efficient AkidaNet for morphologically similar weeds and crops recognition at the Edge

Vi Nguyen Thanh Le, Kevin Tsiknos, Kristofor D Carlson, Selam Ahderom

2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 1-8, 2022

Wild radish weeds have always been a persistent problem in agriculture due to their quick and undesirable spread. Therefore, the accurate identification and effective control of wild radish in canola crops at early growth stage play an indispensable role in reducing herbicide rates and enhancing agricultural productivity. In this paper, an energy efficient and lightweight AkidaNet model is developed to accurately identify broad-leaf weeds and crops with similar morphology at four different growth stages. Experiments performed on a published bccr-segset dataset show that our proposed method achieves competitive performance, a classification accuracy of 99.73%, compared to several well-known CNNs architectures. Next, we quantized and converted the model into a Spiking Neural Network for implementation on a spike-based neuromorphic hardware device. The converted model is not only superior in low-latency and low-power consumption but also retains a similar accuracy to the original model. We also employ Grad-CAM to validate whether our model focuses on important features in plant images to identify wild radish weeds in crops

https://www.researchgate.net/profil...r-weeds-and-crops-recognition-at-the-Edge.pdf


3. Table 6. Comparison of SWaP-C profiles for CPU/GPU platforms and neuromorphic versions

CPU/GPU platform: NVIDIA A100
  • Size: 26.7 long x 11.2 tall x 3.5 wide cm (10.5 x 4.4 x 1.4 in)
  • Power: 250 W
  • Cost: $30,000 USD (est.)

Neuromorphic version: USB key form factor
  • Size: 5.1 long x 1.3 tall x 0.6 wide cm (2 x 0.5 x 0.3 in)
  • Power: 1 W
  • Cost: $50 USD (est.)

https://www.sciencedirect.com/scien...c65de04&pid=1-s2.0-S1877050922017860-main.pdf (Page 494)

4. “In this federally funded phase 2 program, Quantum Ventura is creating state-of-the-art cybersecurity applications for the U.S. Department of Energy under the Small Business Innovation Research (SBIR) Program. The program is focused on “Cyber threat-detection using neuromorphic computing,” which aims to develop an advanced approach to detect and prevent cyberattacks on computer networks and critical infrastructure using brain-inspired artificial intelligence.

“Neuromorphic computing is an ideal technology for threat detection because of its small size and power, accuracy, and in particular, its ability to learn and adapt, since attackers are constantly changing their tactics,” said Srini Vasan, President and CEO of Quantum Ventura Inc. “We believe that our solution incorporating Brainchip’s Akida will be a breakthrough for defending against cyber threats and address additional applications as well.””

https://brainchip.com/brainchip-and-quantum-ventura-partner-to-develop-cyber-threat-detection/

5. Running the out-of-the-box demos on the Akida Raspberry Pi development kit I have was very impressive, achieving, according to the statistics, approximately 100 FPS at 9 mW power dissipation.

All told I am very impressed with the BrainChip Akida neuromorphic processor. The performance of the networks implemented is very good, while the power used is also exceptionally low. These two parameters are critical parameters for embedded solutions deployed at the edge.

Project links

  1. Visual wake word: https://studio.edgeimpulse.com/studio/224143
  2. Anomaly detection: https://studio.edgeimpulse.com/studio/261242
  3. CIFAR10: https://studio.edgeimpulse.com/studio/257103
  4. Keyword spotting: https://studio.edgeimpulse.com/studio/257193
Adiuvo is a consultancy that provides embedded systems design, training, and marketing services. Taylor is Founder and Principal Consultant of the company, teaches about embedded systems at the University of Lincoln, and is host of the podcast “The Embedded Hour.”

https://www.edgeimpulse.com/blog/brainchip-akida-and-edge-impulse

6. https://brainchip.com/temporal-event-based-neural-networks-a-new-approach-to-temporal-processing/

7.

8. https://brainchip.com/brainchip-demonstrates-regression-analysis-with-vibration-sensors/

My opinion only so DYOR

Post of the year
 
  • Like
  • Fire
  • Love
Reactions: 27 users

Draed

Regular
Exciting work, Fact Finder. The pay-off will be huge. And when it happens, I will be sitting back on my yacht, remembering the times when it all seemed so obvious that it was going to happen.
 
  • Like
  • Fire
  • Love
Reactions: 17 users

Foxdog

Regular
Intel and neuromorphic computing, I’m wondering if they are using Akida in Loihi now


I highly doubt Intel are using AKIDA in Loihi 2; it's their baby, and it's very unlikely they'll just drop years of expensive research to incorporate the tech of an upstart. I think if that were the case we'd have seen some revenue by now. I'd love to be wrong, and perhaps the next 4C will give us a surprise mammoth number, but for now this is just a bit too fanciful. No offence, just trying to keep it real after past dots have failed to materialise. We simply need actual price-sensitive company announcements from BRN. Until then it's all just guessing and hope.
 
  • Like
  • Fire
Reactions: 10 users
Top Bottom