BRN Discussion Ongoing

alwaysgreen

Top 20

Did your heart skip a beat? 🤔😛

It does show, though, that Musk is man enough to admit his own venture, Neuralink, isn't good enough, and that he's willing to buy into something that's getting there faster..

It could also apply to his approach to ADAS.
Geez, wouldn't it be nice. A little offer of 6 billion to rocket our share price to $3 or $4. Our founders urge us not to accept because, with everything in the pipeline, we are worth so much more, driving our share price to $6.

A man can dream...

K-Pop Daydream GIF by PENTAGON
 
  • Like
  • Love
  • Fire
Reactions: 18 users

Diogenese

Top 20

Combinatorial optimization solving by coherent Ising machines based on spiking neural networks​

Bo Lu, Yong-Pan Gao, Kai Wen, Chuan Wang

Comments: 5 pages, 4 figures, comments are welcome
Subjects: Quantum Physics (quant-ph); Neural and Evolutionary Computing (cs.NE)
Cite as: arXiv:2208.07502 [quant-ph]
(or arXiv:2208.07502v1 [quant-ph] for this version)
https://doi.org/10.48550/arXiv.2208.07502

Submission history​

From: Chuan Wang
[v1] Tue, 16 Aug 2022 02:19:05 UTC (1,024 KB)

"Ising machines": so they're using SNNs in the cake shop now!
 
  • Haha
  • Like
Reactions: 13 users
Hi tls,

Back on the farm, we used to count the legs and divide by four.

This is a proof-of-concept arrangement, so it is not optimized for commercial exploitation. At this stage it is an algorithm running on a Raspberry Pi.

The heat sink indicates the processor is generating a bit of heat. I'm not sure if they could dispense with the heat sink if they had Akida, but they could use a smaller one, which would lighten the load for the drone, and, as we know, Akida would extend the battery charge.

Apparently Edge Impulse does a lot of this proof-of-concept modelling:

https://www.edgeimpulse.com/blog/getting-more-cycles-per-second-with-fomo

Note that this system does not appear to have one-shot learning, and needs a significant training database.

Basically they are using Edge Impulse's FOMO (Faster Objects, More Objects) algorithm:

https://www.edgeimpulse.com/blog/announcing-fomo-faster-objects-more-objects

It appears that there are synergies in using FOMO and Akida.

I hadn't previously looked into EI's FOMO, but the above link provides an excellent introduction, and the synergies become apparent.

FOMO uses a form of CNN and has significant limitations.
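
For anyone wondering what that Raspberry Pi proof-of-concept plumbing roughly looks like, here is a minimal sketch of running an exported FOMO model with the standard TFLite runtime and pulling centroids out of the output grid. The filename, the float 96x96 grayscale input and the exact output layout are my assumptions, not details confirmed in their write-up:

```python
# Rough sketch of the sort of proof-of-concept loop the drone demo would run.
# Assumes an Edge Impulse FOMO model exported as a float32 .tflite file with a
# 96x96 grayscale input and a coarse per-cell class grid as output (channel 0 =
# background). The filename and those layout details are my assumptions.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="fomo_96x96_gray.tflite")  # hypothetical file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def detect_centroids(frame, threshold=0.5):
    """Run one FOMO inference on a 96x96 grayscale frame; return object centroids."""
    x = frame.astype(np.float32).reshape(1, 96, 96, 1)
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    grid = interpreter.get_tensor(out["index"])[0]   # (grid_h, grid_w, 1 + classes)
    hits = np.argwhere(grid[..., 1:].max(axis=-1) > threshold)
    cell = 96 // grid.shape[0]                       # pixels per grid cell
    # FOMO gives cell centres (centroids), not bounding boxes.
    return [((c + 0.5) * cell, (r + 0.5) * cell) for r, c in hits]
```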

Thanks for feedback @Diogenese
Could it relate to the recent initiative? Or am I off course?

Edge Impulse Releases Deployment Support for BrainChip Akida Neuromorphic IP


Edge Impulse, the leading platform for enabling ML at the edge, and BrainChip, the leading provider of neuromorphic AI IP technology, announced support for deploying Edge Impulse projects using the BrainChip Akida development kit.

Edge Impulse enables developers to rapidly build enterprise-grade ML algorithms, trained on real sensor data, in a low to no code environment.

These trained algorithms can now be quantized, optimized and converted to Spiking Neural Networks (SNN), which are compatible and can be deployed with BrainChip Akida™ devices.

This capability is available for new and existing Edge Impulse projects by using the BrainChip MetaTF model deployment block integrated on the platform. This deployment block enables free-tier developers and enterprise developer users to create and validate neuromorphic models for real-world use-cases and deploy on BrainChip Akida development kits.
 
  • Like
  • Fire
  • Love
Reactions: 22 users

Diogenese

Top 20
Thanks for feedback @Diogenese
Could it relate to the recent initiative? Or am I off course?

Edge Impulse Releases Deployment Support for BrainChip Akida Neuromorphic IP


Edge Impulse, the leading platform for enabling ML at the edge, and BrainChip, the leading provider of neuromorphic AI IP technology, announced support for deploying Edge Impulse projects using the BrainChip Akida development kit.

Edge Impulse enables developers to rapidly build enterprise-grade ML algorithms, trained on real sensor data, in a low to no code environment.

These trained algorithms can now be quantized, optimized and converted to Spiking Neural Networks (SNN), which are compatible and can be deployed with BrainChip Akida™ devices.

This capability is available for new and existing Edge Impulse projects by using the BrainChip MetaTF model deployment block integrated on the platform. This deployment block enables free-tier developers and enterprise developer users to create and validate neuromorphic models for real-world use-cases and deploy on BrainChip Akida development kits.
Yes. When EI's FOMO is supercharged with Akida, the G-forces will be overpowering.

Combining EI's FOMO, which uses a CNN, with Akida, which can convert CNN to SNN, enables N-of-M coding, which greatly reduces the compute workload and also brings one-shot learning into play.

This shows the wisdom of incorporating CNN2SNN into Akida. CNN is the pre-existing standard with which many designers working in the field will be comfortable, but now CNN applications can be converted to SNN.
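
For the curious, the MetaTF flow described above looks roughly like this in BrainChip's cnn2snn Python package. Treat it as a sketch only: the model being fed in is a stand-in, and the quantization arguments are from the older MetaTF API as I remember it, so check the current docs before relying on them:

```python
# Sketch of the CNN -> SNN path using BrainChip's MetaTF tooling (cnn2snn package).
# The Keras model here is a stand-in; the quantization settings are assumptions.
from tensorflow import keras
import cnn2snn

# Any trained Keras CNN, e.g. something FOMO-like built on MobileNet.
keras_model = keras.models.load_model("fomo_like_cnn.h5")  # hypothetical file

# Quantize weights/activations down to the low bit-widths Akida expects
# (4-bit weights / 4-bit activations shown here purely as an example setting).
quantized = cnn2snn.quantize(keras_model,
                             weight_quantization=4,
                             activ_quantization=4)

# Convert the quantized CNN into an Akida (spiking) model.
akida_model = cnn2snn.convert(quantized)
akida_model.summary()

# On hardware, akida_model.predict() would then run event-based inference,
# and Akida's on-device learning could be enabled on the final layer.
```

The point being that a designer never has to leave their familiar CNN workflow; the spiking part is handled by the conversion step.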

EI's FOMO can be adapted to locate objects, rather than classifying them, and thus uses centroids instead of bounding boxes derived from the input signals.

https://www.edgeimpulse.com/blog/announcing-fomo-faster-objects-more-objects

"Design choices
Trained on centroid

The main design decisions for FOMO are based on the idea that a lot of object detection problems don't actually need the size of the objects but rather just their location in the frame. Once we know the location of salient objects we can ask further questions such as "Is an object above/below another?" or "How many of these objects are in view?"

Based on this observation, classical bounding boxes are no longer needed. Instead, a detection based on the centroids of objects is enough.

Note that, to keep the interoperability with other models, your training image input still uses bounding boxes."

Even so, they only manage 10 frames per second with their smallest version of FOMO, which they considered "amazingly fast".

"Designed to be small and amazingly fast

One of the first goals, when we started to design FOMO, was to run object detection on microcontrollers where flash and RAM are most of the time very limited. The smallest version of FOMO (96x96 grayscale input, MobileNetV2 0.05 alpha) runs in <100KB RAM and ~10 fps on a Cortex-M4F at 80MHz."
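
To make the centroid idea concrete, here is a toy Keras sketch of a FOMO-style head: a small fully-convolutional backbone that ends in a coarse per-cell class map rather than bounding-box regressions. The real FOMO is built on a truncated MobileNetV2; the layer sizes, filter counts and class count below are purely my own illustrative choices:

```python
# Toy FOMO-style model: outputs a coarse per-cell "which class is centred here" map
# instead of bounding boxes. Shapes and filter counts are illustrative only.
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 2            # assumed number of object classes (+1 background below)
INPUT_SHAPE = (96, 96, 1)  # 96x96 grayscale, as in EI's smallest FOMO config

inputs = keras.Input(shape=INPUT_SHAPE)
x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inputs)
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
# Per-cell classification head: each spatial cell predicts background or a class.
outputs = layers.Conv2D(NUM_CLASSES + 1, 1, activation="softmax")(x)  # (12, 12, classes+1)
model = keras.Model(inputs, outputs, name="fomo_style_centroid_head")
model.summary()
```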

The combination of Akida with EI's FOMO will blow their socks off, even doing both classification and location. This greatly expands the potential field of use of EI's FOMO.
 
  • Like
  • Love
  • Fire
Reactions: 45 users

Cyw

Regular
  • Like
Reactions: 4 users

Fox151

Regular
Wonder what kind of money we are talking about in this type of deal. There may be a licence sold but no royalties.
Wasn't this something we already knew?
 
  • Like
  • Fire
  • Love
Reactions: 14 users

Cyw

Regular

Townyj

Ermahgerd
This has been put out a few times already.


 
  • Like
  • Fire
Reactions: 16 users

TopCat

Regular
Not saying this is Akida, but absolutely amazing technology to watch. Woolworths is to open one in Sydney by 2024. Think of the power savings they could get by using Akida!

 
  • Like
  • Fire
  • Wow
Reactions: 19 users

Potato

Regular
Can I get an Akida Balista, please?
 
  • Like
  • Haha
  • Love
Reactions: 9 users
Liked by RT.

A bit technical for me, but I understood it to mean things are progressing with SiFive and there are big opportunities in the future!


Enjoy
 
  • Like
  • Fire
  • Wow
Reactions: 21 users


Iseki

Regular
Liked by RT.

A bit technical for me, but I understood it to mean things are progressing with SiFive and there are big opportunities in the future!


Enjoy
On the technical front, here is my take:
• Because we are trying to take over the edge, Akida's performance is measured not so much in TOPS as in TOPS/W (i.e. per watt).
• We really want Akida IP to be tied into their smallest chip (so just an MCU), so that it can be placed adjacent to each and every sensor (so the SiFive E2 would be the best choice).
• This Akida-informed MCU will pass on just the information required to whatever vector-based trained neural network is appropriate.
• So what I'm praying for is an ann that says "SuchandSuch SoC developer publicly announces their adoption of SiFive and Akida IP to produce the world's smallest and most versatile sensor MCU".
• So the question is: which sensor manufacturer, apart from Renesas, who seems to have gone down the Arm road, is big enough to push for this?
 
  • Like
  • Fire
Reactions: 21 users

MADX

Regular
For those who understand the tech, you will find the link below interesting. It is from the Weebit nano forum. Along with slymeat we see increasing cross-tech emerging between WBT and BRN. I believe BRN holders should keep an eye on WBT.

 
Last edited:
  • Like
Reactions: 7 users
On the technical front, here is my take:
• Because we are trying to take over the edge, Akida's performance is measured not so much in TOPS as in TOPS/W (i.e. per watt).
• We really want Akida IP to be tied into their smallest chip (so just an MCU), so that it can be placed adjacent to each and every sensor (so the SiFive E2 would be the best choice).
• This Akida-informed MCU will pass on just the information required to whatever vector-based trained neural network is appropriate.
• So what I'm praying for is an ann that says "SuchandSuch SoC developer publicly announces their adoption of SiFive and Akida IP to produce the world's smallest and most versatile sensor MCU".
• So the question is: which sensor manufacturer, apart from Renesas, who seems to have gone down the Arm road, is big enough to push for this?


Hi Iseki,

I agree that the primary focus is edge devices.

There have also been some discussions, headed by @Fullmoonfever, about Akida also being tested in larger devices.

I know it's only dot joining, but I'm still waiting for the day Michael Dell announces a partnership with Brainchip. I'm confident one of the largest computing companies in the world didn't randomly give a podcast interview with a small Australian start-up for no reason. I could be wrong, but from memory Akida wasn't even in silicon at that point.

"10 per cent of the data in the world today is processed outside of the data centre but by 2025, 75 per cent of enterprise data will be processed outside of the traditional, centralised data centre or cloud," Dell noted.

Brainchip is in the perfect position and is building its ecosystem to capitalise on that edge processing/inference need!

:)
 
  • Like
  • Fire
  • Love
Reactions: 32 users

MADX

Regular
For those who understand the tech, you will find the link below interesting. It is from the Weebit nano forum. Along with slymeat we see increasing cross-tech emerging between WBT and BRN. I believe BRN holders should keep an eye on WBT.

Further to the above, I am indebted to Fact Finder, who felt that we can relax about BRN now that its tech is proven and accepted. My focus is now on WBT, in the hope of a repeat of the BRN story. I just wish that a clone of Fact Finder would appear on the WBT threads. 🙏
 
  • Like
  • Love
  • Haha
Reactions: 11 users

Dhm

Regular
Hi Iseki,

I agree that the primary focus is edge devices.

There have also been some discussions, headed by @Fullmoonfever, about Akida also being tested in larger devices.

I know it's only dot joining, but I'm still waiting for the day Michael Dell announces a partnership with Brainchip. I'm confident one of the largest computing companies in the world didn't randomly give a podcast interview with a small Australian start-up for no reason. I could be wrong, but from memory Akida wasn't even in silicon at that point.

"10 per cent of the data in the world today is processed outside of the data centre but by 2025, 75 per cent of enterprise data will be processed outside of the traditional, centralised data centre or cloud," Dell noted.

Brainchip is in the perfect position and is building its ecosystem to capitalise on that edge processing/inference need!

:)
"I know itā€™s only dot joining but Iā€˜m still waiting for the day Michael Dell announces a partnership with Brainchip. Iā€™m confident one of the largest computing companies in the world didnā€™t randomly give a podcast interview with a small Australian start up for no reason. I could be wrong but from memory Akida wasnā€™t even in silicon at that point."

Hi @Stable Genius can you lead me to a link about that podcast?
 
  • Like
  • Fire
Reactions: 9 users

Mt09

Regular
"I know itā€™s only dot joining but Iā€˜m still waiting for the day Michael Dell announces a partnership with Brainchip. Iā€™m confident one of the largest computing companies in the world didnā€™t randomly give a podcast interview with a small Australian start up for no reason. I could be wrong but from memory Akida wasnā€™t even in silicon at that point."

Hi @Stable Genius can you lead me to a link about that podcast?


The bloke on the podcast is Dell engineer Rob Lincourt. His job role is to keep on top of emerging technology. He makes reference to "early on Brainchip releasing some power consumption figures". Anyone's guess as to how long Dell have known about and been assessing Akida.
 
  • Like
  • Fire
  • Love
Reactions: 28 users

equanimous

Norse clairvoyant shapeshifter goddess
The following article makes clear that NASA and DARPA have been anxiously awaiting a chip like AKIDA since at least 2013. At that time they were anticipating an analogue solution, not knowing Peter and Anil were on a digital SNN fast track:

"Spiking Neurons for Analysis of Patterns
High-performance pattern-analysis systems could be implemented as analog VLSI circuits. NASA's Jet Propulsion Laboratory, Pasadena, California
Artificial neural networks comprising spiking neurons of a novel type have been conceived as improved pattern-analysis and pattern-recognition computational systems. These neurons are represented by a mathematical model denoted the state-variable model (SVM), which among other things, exploits a computational parallelism inherent in spiking-neuron geometry. Networks of SVM neurons offer advantages of speed and computational efficiency, relative to traditional artificial neural networks. The SVM also overcomes some of the limitations of prior spiking-neuron models. There are numerous potential pattern-recognition, tracking, and data-reduction (data preprocessing) applications for these SVM neural networks on Earth and in exploration of remote planets.
Spiking neurons imitate biological neurons more closely than do the neurons of traditional artificial neural networks. A spiking neuron includes a central cell body (soma) surrounded by a treelike interconnection network (dendrites). Spiking neurons are so named because they generate trains of output pulses (spikes) in response to inputs received from sensors or from other neurons. They gain their speed advantage over traditional neural networks by using the timing of individual spikes for computation, whereas traditional artificial neurons use averages of activity levels over time. Moreover, spiking neurons use the delays inherent in dendritic processing in order to efficiently encode the information content of incoming signals. Because traditional artificial neurons fail to capture this encoding, they have less processing capability, and so it is necessary to use more gates when implementing traditional artificial neurons in electronic circuitry. Such higher-order functions as dynamic tasking are effected by use of pools (collections) of spiking neurons interconnected by spike-transmitting fibers.
The SVM includes adaptive thresholds and submodels of transport of ions (in imitation of such transport in biological neurons). These features enable the neurons to adapt their responses to high-rate inputs from sensors, and to adapt their firing thresholds to mitigate noise or effects of potential sensor failure. The mathematical derivation of the SVM starts from a prior model, known in the art as the point soma model, which captures all of the salient properties of neuronal response while keeping the computational cost low. The point-soma latency time is modified to be an exponentially decaying function of the strength of the applied potential.
Choosing computational efficiency over biological fidelity, the dendrites surrounding a neuron are represented by simplified compartmental submodels and there are no dendritic spines. Updates to the dendritic potential, calcium-ion concentrations and conductances, and potassium-ion conductances are done by use of equations similar to those of the point soma. Diffusion processes in dendrites are modeled by averaging among nearest-neighbor compartments. Inputs to each of the dendritic compartments come from sensors. Alternatively or in addition, when an affected neuron is part of a pool, inputs can come from other spiking neurons.
At present, SVM neural networks are implemented by computational simulation, using algorithms that encode the SVM and its submodels. However, it should be possible to implement these neural networks in hardware: The differential equations for the dendritic and cellular processes in the SVM model of spiking neurons map to equivalent circuits that can be implemented directly in analog very-large-scale integrated (VLSI) circuits.
This work was done by Terrance Huntsberger of Caltech for NASA's Jet Propulsion Laboratory."
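
For anyone who wants to play with the ideas in that brief, here is a very rough NumPy sketch of a point-soma-style spiking neuron with an adaptive threshold, an input-strength-dependent (exponentially decaying) latency, and nearest-neighbour averaging across a handful of dendritic compartments. All the constants and update rules are my own toy choices to illustrate the concepts; they are not JPL's actual SVM equations:

```python
# Toy, illustrative point-soma-style spiking neuron (NOT the actual JPL SVM model).
import numpy as np

def simulate(inputs, dt=1.0, tau=20.0, v_thresh0=1.0,
             thresh_adapt=0.2, latency0=5.0, latency_k=1.0):
    """inputs: array of shape (timesteps, n_compartments) of sensor drive."""
    n_compartments = inputs.shape[1]
    v_dendrite = np.zeros(n_compartments)   # compartmental potentials
    v_soma, v_thresh = 0.0, v_thresh0
    spikes = []
    for t, drive in enumerate(inputs):
        # Dendritic compartments: leaky integration of their inputs...
        v_dendrite += dt * (-v_dendrite / tau + drive)
        # ...plus diffusion modelled as nearest-neighbour averaging (circular, toy choice).
        v_dendrite = 0.5 * v_dendrite + 0.25 * (np.roll(v_dendrite, 1) + np.roll(v_dendrite, -1))
        # Point soma integrates the summed dendritic potential.
        v_soma += dt * (-v_soma / tau + v_dendrite.sum())
        if v_soma > v_thresh:
            # Latency decays exponentially with the strength of the applied potential.
            latency = latency0 * np.exp(-latency_k * (v_soma - v_thresh))
            spikes.append(t + latency)
            v_soma = 0.0                      # reset after the spike
            v_thresh += thresh_adapt          # adaptive threshold steps up...
        v_thresh += dt * (v_thresh0 - v_thresh) / (10 * tau)  # ...and relaxes back slowly
    return np.array(spikes)

rng = np.random.default_rng(0)
print(simulate(rng.random((200, 4))))  # spike times (with latencies) for random drive
```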


 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 29 users