BRN Discussion Ongoing

Hi Frederick,

1. Akida is asynchronous, responding to input events, so I don't think it can be overclocked. However, producing it at, say, 7 nm would increase speed and reduce power.

2. Because Akida is so low power that it does not need cooling, it would be an ideal buried layer in a stacked-chip arrangement. One potential application would be the Sony/Prophesee image sensor, which uses Sony's 3D stacking technology. I imagine the Prophesee DVS is also low power, as it only fires on events. Some IR night-vision systems also need to be cooled, so Akida would be a good fit there, as it would reduce the cooling requirements. Akida is designed for multi-chip applications.

One of my friends, whom I convinced to buy BRN, was asking about AI following the recent ChatGPT publicity, so I sent the following:

You asked whether BrainChip's Akida was involved in AI, and I subsequently realized that ChatGPT has been in the news lately, not always favourably.

It is not related to ChatGPT. It could be used on the input side but, at this stage, not on the output side generating the responses. That is all done in software: you access ChatGPT through a web browser, and the responses are generated by language-processing software running on remote servers.

Akida is a spiking neural network which imitates the brain's neural/synaptic processes. It does not use the same digital mathematical methods as conventional computers, which consume enormous amounts of processing power when performing object recognition. Think of objects in a field of view as having a line around their edges, i.e., where the pixel illumination changes. Akida compares that edge line with stored edge lines in a library of edge lines, whereas an object-recognition program on a conventional computer does a pixel-by-pixel comparison of full-frame images. Akida also does this in silicon, not in software: it is loaded with its compact object-library data and performs a hardware comparison, not a software comparison.
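(For the technically inclined: the contrast can be sketched in a few lines of Python. This is a purely illustrative toy, not BrainChip's actual algorithm; the edge "signature" and all the function names are my own inventions for the example.)

```python
import numpy as np

def full_frame_match(frame, library_frames):
    """Conventional approach: compare every pixel of the full frame
    against every stored reference frame (costly for large images)."""
    diffs = [np.abs(frame.astype(int) - ref.astype(int)).mean()
             for ref in library_frames]
    return int(np.argmin(diffs))

def edge_signature(frame, threshold=25):
    """Toy 'edge line': mark pixels where illumination jumps relative to a
    neighbour, then summarise the edge map as a compact vector."""
    dx = np.abs(np.diff(frame.astype(int), axis=1)) > threshold
    dy = np.abs(np.diff(frame.astype(int), axis=0)) > threshold
    return np.concatenate([dx.sum(axis=0), dy.sum(axis=1)])

def edge_match(frame, library_signatures):
    """Edge-based approach: compare small signatures, not full frames."""
    sig = edge_signature(frame)
    diffs = [np.abs(sig - ref).mean() for ref in library_signatures]
    return int(np.argmin(diffs))

# Hypothetical 64x64 'images' just to exercise the two functions.
rng = np.random.default_rng(0)
library = [rng.integers(0, 256, (64, 64)) for _ in range(3)]
query = library[1].copy()
print(full_frame_match(query, library))                          # -> 1
print(edge_match(query, [edge_signature(f) for f in library]))   # -> 1
```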

Akida is used to classify input signals, voice, video, etc. It does not generate replies.

It can be used with autonomous-driving sensors such as LiDAR, radar, video, and event cameras (shutterless cameras which detect only changes in pixel illumination), aka DVS (Dynamic Vision Sensors), to recognize the surroundings. It can also be used for in-cabin driver monitoring, speech recognition, and any application which involves interpreting sensor signals, such as vibration sensing to anticipate the need for maintenance. I suppose it could detect when a Boeing spits out a turbine blade.

It is used in the in-vehicle voice recognition system of the Mercedes EQXX concept car (which achieved 1,000 km on a single charge), where it performed 5 to 10 times more efficiently than the other systems they had tested, in a vehicle where every watt counts.

It can be used for autonomous drone navigation and image detection.

NASA is trialling it for navigation and Mars rovers, as well as for wireless communication via a company called Intellisense.

Similarly, ISL (Information Systems Laboratories) is using it in trials for the US Air Force.

Akida is listed by ARM as being compatible with all of their processors. It is also compatible with chips from ARM's upstart challenger SiFive, which uses the RISC-V architecture rather than ARM's own proprietary RISC architecture.

It is also available to Intel Foundry Services (IFS) customers.

It is also part of a few US university computing courses, including at Carnegie Mellon.

BrainChip's main business is licensing the Akida design, a business model similar to ARM's. This does limit the customer base to those who can afford the licence fee, plus the cost of incorporating the design into their product and manufacturing the chips. It also introduces a large time lag for designing, making and testing the chips.

A company called Renesas has licensed the Akida design and will be bringing out microprocessors capable of handling 30 frames per second (fps) later this year. Akida is capable of much higher frame rates (>1,000 fps), but Renesas only licensed two Akida nodes out of a possible 80.

Similarly, MegaChips is also in the process of producing chips containing the Akida design.

A second generation of Akida will also be available later this year. This will have the capability to determine the speed and direction of objects in hardware, rather than relying on software to process the images identified by Akida. Software is much slower and more power-hungry.
Nice 🙂

With regards to point 1, I wasn't thinking of overclocking, as I know it's an SNN, but thinking generally that there must be ways to run it more aggressively? More input? Extreme sensor fusion? More complicated models?

Another perspective could be that it may be easier to achieve 1 nm if the heat dissipation is minuscule?
 
  • Like
Reactions: 6 users
Great post Dio!

It's also important to zoom out and think about who wants Akida and why.
They aren't chasing extra FLOPS (or the equivalent) per watt; they are trying to reduce the watts consumed per operation.
The supercomputers and Nordic self-cooling warehouses can handle the big stuff, but only low-power, edge neuromorphic technology will solve the use cases you mention above.

Great post again, it's easy to lose track of all the connections and reasons why Akida excels.
Thanks. I'm aware that the Edge is where we'll probably find our bread and butter first, but there must be potential in the server market too.

If one Akida-P can challenge some of Nvidia's gear in some tasks, and you can pack 50 Akida-Ps on a card and still not need cooling, then some in the server market may become curious 😀
 
  • Like
  • Fire
  • Thinking
Reactions: 15 users

JDelekto

Regular
Nice 🙂

With regards to point 1, I wasn't thinking of overclocking, as I know it's an SNN, but thinking generally that there must be ways to run it more aggressively? More input? Extreme sensor fusion? More complicated models?

Another perspective could be that it may be easier to achieve 1 nm if the heat dissipation is minuscule?

Because it is event-driven, I think one of the metrics which will probably be used to stress not just Akida but other SNNs would be the timing between the pulses of the spikes.

However, I would think that by processing events more frequently, one would also be consuming more power to do so. I don't know the minimum spike distance that Akida will process comfortably. It may already be capable of keeping up with the sensors that exist.

While spikes with a short time between each pulse might be most beneficial for detecting or inferencing things in a video stream, the actual training of the network itself may not require such rapid input. However, it may require more passes in training to get better accuracy.

Memory, parameters in the model, power consumption, and cost will be factors, but Akida will also require some different benchmarking criteria than the existing AI accelerators that crunch matrices.
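To make that concrete, here is a small sketch of the kind of measurement I have in mind (my own hypothetical framing, not an actual Akida benchmark): take the timestamps of incoming spikes and report the minimum inter-spike interval and the sustained event rate.

```python
import numpy as np

def spike_timing_stats(timestamps_us):
    """Summarise an event stream by its minimum inter-spike interval and
    its average event rate, two quantities one might use to characterise
    how hard an event-driven processor is being pushed. Illustrative only."""
    t = np.sort(np.asarray(timestamps_us, dtype=np.float64))
    intervals = np.diff(t)                      # gaps between spikes, in µs
    min_isi_us = float(intervals.min())
    duration_s = (t[-1] - t[0]) / 1e6
    rate_hz = (len(t) - 1) / duration_s
    return min_isi_us, rate_hz

# Hypothetical stream: ~10,000 events with a 100 µs mean gap (Poisson-like).
rng = np.random.default_rng(1)
events_us = np.cumsum(rng.exponential(100.0, size=10_000))
min_isi, rate = spike_timing_stats(events_us)
print(f"min inter-spike interval: {min_isi:.1f} µs, event rate: {rate:,.0f} Hz")
```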
 
  • Like
Reactions: 12 users

rgupta

Regular
Thanks. I'm aware that the Edge is where we'll probably find our bread and butter first, but there must be potential in the server market too.

If one Akida-P can challenge some of Nvidia's gear in some tasks, and you can pack 50 Akida-Ps on a card and still not need cooling, then some in the server market may become curious 😀
I assume the company is concentrating its efforts on the edge; opening another front would be a lot more challenging.
But it's a nice thought process, and maybe we will be there as well.
DYOR
 
  • Like
Reactions: 1 users
Yes there was some strong buying “up” of stock yesterday and today. Great to see that momentum is shifting.

So strong was the buying today that I observed whole SP levels bought out within seconds at some points, and I am talking 300k to 400k worth of shares bought in a single chunk. That is instos or shorters covering, and either way it's great, as it's a sign that they know the price is cheap and that it's not staying here for very much longer.
It's too early to tell. It's a good start. Look at the price action you got at the end of 2021: that shape is what you want. Staircase action.

It had to start somewhere. That was on Mercedes-Benz hype. One could expect that, with some positive newsflow into H2 of 2023, the market will start pricing it in. So here's to crossing fingers, toes and everything that the corner has been turned, and there's a lot to be looking forward to.
 
  • Like
Reactions: 12 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Hi @SebThatGermanChap,

I just noticed that Joe Guerci (CEO at ISL) liked another BrainChip post 8 hours ago on his Linkedin page.

[Screenshot: Joe Guerci's LinkedIn activity showing the liked BrainChip post]
 
  • Like
  • Fire
  • Love
Reactions: 37 users

Diogenese

Top 20
  • Haha
  • Like
Reactions: 5 users

TopCat

Regular
Another article (12-5-2023) about Know Labs and Edge Impulse, this one mentioning microwave and radio-frequency sensors.

Microwave and Radio Frequency sensors operate over a broader frequency range, and with this comes an extremely broad dataset that requires sophisticated algorithm development. Working with Know Labs, Edge Impulse uses its machine learning tools to train a Neural Network model to interpret this data and make blood glucose level predictions using a popular CGM proxy for blood glucose. Edge Impulse provides a user-friendly approach to machine learning that allows product developers and researchers to optimize the performance of sensory data analysis. This technology is based on AutoML and TinyML to make AI more accessible, enabling quick and efficient machine learning modeling.

 
  • Like
  • Fire
  • Love
Reactions: 10 users

Learning

Learning to the Top 🕵‍♂️
This is one of the reasons I believe BrainChip's Akida is important for Edge AI. Although the article talks about LLMs, the cost of running data centres is exploding. Hence, enter BrainChip's Akida, inferencing at the edge. (JMHO)

[Screenshot of the article]


The link to the article below:


Learning 🏖
 
  • Like
  • Fire
  • Love
Reactions: 25 users

Diogenese

Top 20
Because it is event-driven, I think one of the metrics which will probably be used to stress not just Akida but other SNNs would be the timing between the pulses of the spikes.

However, I would think that by processing events more frequently, one would also be consuming more power to do so. I don't know the minimum spike distance that Akida will process comfortably. It may already be capable of keeping up with the sensors that exist.

While spikes with a short time between each pulse might be most beneficial for detecting or inferencing things in a video stream, the actual training of the network itself may not require such rapid input. However, it may require more passes in training to get better accuracy.

Memory, parameters in the model, power consumption, and cost will be factors, but Akida will also require some different benchmarking criteria than the existing AI accelerators that crunch matrices.
Hi @FrederikSchack , JD,

We know from nViso that Akida 1 can run at better than 1000 fps equivalent; if memory serves, it tops out at about 1600 fps. Akida 1 can process 30 fps with 2 nodes (8 NPUs). Akida 1 can also run several independent parallel threads, and the threads can be interpreting different types of data; that is all down to the configuration and the different model libraries and weights.
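As a rough sanity check on those numbers, here is a back-of-the-envelope sketch. It assumes throughput scales roughly linearly with node count, which is my simplification rather than a BrainChip figure; real throughput depends on the model, the data and the configuration.

```python
def estimated_fps(nodes, reference_nodes=2, reference_fps=30):
    """Naive estimate: if 2 nodes handle ~30 fps, assume throughput scales
    roughly linearly with node count. A back-of-the-envelope sketch only."""
    return reference_fps * nodes / reference_nodes

print(estimated_fps(2))    # 30 fps  - the two-node Renesas configuration
print(estimated_fps(80))   # 1200 fps - a full 80-node Akida 1, in the same
                           # ballpark as the >1000 fps figure quoted above
```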

I think Akida 2 has 64 nodes maximum, but can be connected to a lot more Akida 2s.

CORRECTION: 128 nodes:

https://www.hackster.io/news/brainc...-vision-transformer-acceleration-5fc2d2db9d65
One limiting factor on event rate, apart from the actual rate at which events occur, is the sensor's response/recovery time. A DVS like Prophesee's has to compare the photodiode output of each pixel with a threshold voltage to determine whether an event has been detected. If the diode output falls below the threshold, it is ignored.
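A toy model of that per-pixel thresholding (my own simplification, not Prophesee's actual circuit, and the 20% contrast threshold is an assumed figure): a pixel only emits an event when its intensity has changed by more than the threshold relative to its stored reference level.

```python
import numpy as np

def dvs_events(reference, frame, threshold=0.20):
    """Toy DVS model: each pixel emits an event (+1 brighter, -1 darker)
    only when its intensity changes by more than `threshold` relative to
    its stored reference; sub-threshold changes are ignored and the
    reference for those pixels is left untouched."""
    rel_change = (frame - reference) / np.maximum(reference, 1e-6)
    fired = np.abs(rel_change) >= threshold
    events = [(int(y), int(x), 1 if rel_change[y, x] > 0 else -1)
              for y, x in zip(*np.nonzero(fired))]
    new_reference = np.where(fired, frame, reference)  # reset fired pixels only
    return events, new_reference

# Hypothetical 4x4 frame: one pixel brightens sharply, the rest barely move.
reference = np.full((4, 4), 100.0)
frame = reference + np.random.default_rng(2).normal(0.0, 2.0, (4, 4))
frame[1, 2] = 140.0
events, reference = dvs_events(reference, frame)
print(events)   # only the strongly changed pixel fires, e.g. [(1, 2, 1)]
```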

The signals from each pixel of Prophesee's DVS (event camera) undergo a lot of processing.

This is the circuitry connected to each individual pixel of Prophesee's collision anticipation DVS:

[Patent figure: the per-pixel event-detection circuitry]
US2021056323A1 FAST DETECTION OF SECONDARY OBJECTS THAT MAY INTERSECT THE TRAJECTORY OF A MOVING PRIMARY OBJECT

A system (1) for detecting dynamic secondary objects (55) that have a potential to intersect the trajectory (51) of a moving primary object (50), comprising a vision sensor (2) with a light-sensitive area (20) that comprises event-based pixels (21), so that a relative change in the light intensity impinging onto an event-based pixel (21) of the vision sensor (2) by at least a predetermined percentage causes the vision sensor (2) to emit an event (21a) associated with this event-based pixel (21), wherein the system (1) further comprises a discriminator module (3) that gets both the stream of events (21a) from the vision sensor (2) and information (52) about the heading and/or speed of the motion of the primary object (50) as inputs, and is configured to identify, from said stream of events (21a), based at least in part on said information (52), events (21b) that are likely to be caused by the motion of a secondary object (55), rather than by the motion of the primary object (50).


The spike rate is in the lap of the gods: it is determined by real-world events and the ability of the sensor to respond. Each Akida NPU does packetize input events, but the input spike rate limits the response time:

WO2020092691A1 AN IMPROVED SPIKING NEURAL NETWORK

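As a very rough illustration of the packetising idea (a toy sketch of the general concept, not the scheme described in the patent): incoming spikes are buffered into fixed-size packets before processing, so a sparse input stream takes longer to fill a packet and the input spike rate bounds the response time.

```python
from typing import Iterable, List, Tuple

Event = Tuple[float, int]          # (timestamp in µs, input channel id)

def packetize(events: Iterable[Event], packet_size: int = 4) -> List[List[Event]]:
    """Group an incoming spike stream into fixed-size packets. The slower
    the spikes arrive, the longer each packet takes to fill."""
    packets: List[List[Event]] = []
    current: List[Event] = []
    for ev in events:
        current.append(ev)
        if len(current) == packet_size:
            packets.append(current)
            current = []
    if current:                     # flush any final partial packet
        packets.append(current)
    return packets

# Hypothetical sparse event stream from a sensor.
stream = [(12.0, 7), (15.5, 3), (40.2, 7), (41.0, 1), (90.3, 5)]
for packet in packetize(stream):
    print(packet)
```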
 
  • Like
  • Fire
  • Love
Reactions: 30 users

Diogenese

Top 20
Nice 🙂

With regards to point 1, I wasn't thinking of overclocking, as I know it's an SNN, but thinking generally that there must be ways to run it more aggressively? More input? Extreme sensor fusion? More complicated models?

Another perspective could be that it may be easier to achieve 1 nm if the heat dissipation is minuscule?
I think that, once you get down below about 4 nm or so, heat becomes a problem, as the resistance of the "wires" increases.
 
  • Like
Reactions: 7 users
Those who signed up for the Arm podcast a few days ago should have received an email thanking them, along with Tom's email address for submitting questions. So check your email if you are interested in asking questions.
 
  • Like
Reactions: 6 users

Rskiff

Regular
Those who signed up for the Arm podcast a few days ago should have received an email thanking them, along with Tom's email address for submitting questions. So check your email if you are interested in asking questions.
@Rise from the ashes a good question would be: "What % of ARM products do they see Akida being implemented in?"
 
  • Like
  • Fire
Reactions: 14 users
@Rise from the ashes a good question would be: "What % of ARM products do they see Akida being implemented in?"
Great question, but IMO we won't get an answer to that. But as they say, if you don't ask, you shall not receive.
 
  • Like
Reactions: 5 users
  • Like
Reactions: 8 users

suss

Regular
  • Like
Reactions: 1 users

Newk R

Regular
Hey, why the eff did my orange juice dilution post get moderated? Talk about party poopers... so precious... this is getting as bad as the crapper!

Err, better mention something relevant... hmmm... OK, I think we are at the bottom now, and she will hover in the mid 40s until the AGM.

There u go

You're lucky. I got an email saying my post was moderated and I didn't even post anything.
 
  • Haha
  • Wow
  • Like
Reactions: 11 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Fire
  • Love
Reactions: 12 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
This might be interesting to someone, somewhere. Maybe one for @Diogenese.

IMEC was advertising two weeks ago for a PhD project exploiting neural networks for Extended Reality using 6G radio signals.

It says here "Extended Reality (XR) has been identified as the driving applications for future 6G networks by companies like Nokia [1], Ericsson [2], and Qualcomm [3]".


[Screenshots of the IMEC PhD project advertisement]

I thought it might be interesting given Morse Micro dabbles in this general mmWave / radio-signals area. Also, Michael De Nil (CEO of Morse Micro) used to work in low-power digital IC design at Imec and Broadcom before founding Morse Micro.
 
  • Like
  • Love
  • Fire
Reactions: 6 users