BRN Discussion Ongoing

buena suerte :-)

BOB Bank of Brainchip
  • Like
  • Fire
  • Love
Reactions: 10 users

uiux

Regular
Could someone who knows stuff please have a look at this and confirm ours is better?



Misslou

It's interesting that the GrAI VIP is "a third-gen product" - so we are essentially comparing Akida v1 to GrAI Matter v3

It's on 12nm, which means they can squeeze a lot more into it

From a cursory glance, Akida has 1.2 million neurons and 10 billion synapses vs the VIP's 18 million neurons... GrAI only list "parameters" though and don't specify synapses

My understanding is the # of parameters that the chip can hold is calculated from the # of neurons and the # of synapses - I'll look up how this is calculated, but I remember @FrederikSchack digging extremely deeply into this line of questioning, so maybe he could shed some light on how neurons/synapses/parameters relate
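
For what it's worth, in a conventional network the parameter count is basically the synaptic weights (one per connection) plus a bias per neuron, so a quick back-of-envelope sketch looks something like this (purely illustrative layer sizes, not figures from either spec sheet):

def dense_layer_params(n_in, n_out, bias=True):
    # one weight per input-output connection (i.e. one "synapse"), plus an optional bias per output neuron
    weights = n_in * n_out
    biases = n_out if bias else 0
    return weights + biases

# e.g. a 1024 -> 512 fully connected layer: ~524k weights (synapses) but only 512 output neurons
print(dense_layer_params(1024, 512))  # 524800

So a neuron count and a parameter count measure quite different things, which is probably why the two spec sheets are so hard to line up.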

BrainChip has identified Grai Matter as the "nearest competitor" in their AGM presentation

There is a bunch of shared history between the two companies:


Hopefully this is helpful
 
  • Like
  • Fire
  • Love
Reactions: 51 users

jtardif999

Regular
Misslou

It's interesting that the GrAI VIP is "a third-gen product" - so we are essentially comparing Akida v1 to GrAI Matter v3

It's on 12nm, which means they can squeeze a lot more into it

From a cursory glance, Akida has 1.2 million neurons and 10 billion synapses vs the VIP's 18 million neurons... GrAI only list "parameters" though and don't specify synapses

My understanding is the # of parameters that the chip can hold is calculated from the # of neurons and the # of synapses - I'll look up how this is calculated, but I remember @FrederikSchack digging extremely deeply into this line of questioning, so maybe he could shed some light on how neurons/synapses/parameters relate

BrainChip has identified Grai Matter as the "nearest competitor" in their AGM presentation

There is a bunch of shared history between the two companies:


Hopefully this is helpful
Synapses are co-located memory containing the synaptic weights associated with what has been learned. So if their neuromorphic architecture doesn't contain synapses, how would anything be learned, or indeed how would training be stored? Their neuromorphic architecture is not real Kung Fu 😎
 
  • Like
  • Fire
  • Love
Reactions: 18 users

uiux

Regular
Synapses are co-located memory containing the synaptic weights associated with what has been learned. So if their neuromorphic architecture doesn't contain synapses, how would anything be learned, or indeed how would training be stored? Their neuromorphic architecture is not real Kung Fu 😎

If it's got neurons you would assume synapses follow

My understanding is that there's a relationship between neurons and synapses, like a sliding scale. You can decrease the number of neurons for more synapses and vice versa.

Maybe GrAI Matter see more value in advertising the # of parameters vs the # of synapses
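
To put that sliding-scale idea in rough terms (the memory size and weight width below are assumptions for illustration, not published Akida or GrAI figures): with a fixed pool of on-chip weight memory, the more synapses you give each neuron, the fewer neurons you can fit.

SRAM_BITS = 8 * 1024 * 1024 * 8   # assumed on-chip weight memory budget (8 MB)
BITS_PER_SYNAPSE = 4              # assumed 4-bit weights

def max_neurons(synapses_per_neuron):
    # every neuron's fan-in weights have to fit inside the fixed budget
    return SRAM_BITS // (synapses_per_neuron * BITS_PER_SYNAPSE)

for fan_in in (64, 256, 1024):
    print(fan_in, "synapses/neuron ->", max_neurons(fan_in), "neurons")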




I see zero mention of learning rules associated with the device. It can't perform on-chip learning or one-shot learning.
 
  • Like
  • Fire
Reactions: 19 users

uiux

Regular
  • Like
  • Fire
  • Love
Reactions: 37 users

equanimous

Norse clairvoyant shapeshifter goddess
The following paragraph from the GrAI article is telling when compared with the Nviso BRN demonstration running at 1,000 fps (somewhere I also saw the figure 1,670 fps but cannot recall where - it might have been said by Tim Llewellyn):

“GrAI VIP can handle MobileNetv1–SSD running at 30fps for 184 mW, around 20× the inferences per second per Watt compared to a comparable GPU, the company said, adding that further optimizations in sparsity and voltage scaling could improve this further”

They are saying they are comparable with a GPU on performance per watt, while AKIDA, according to Nviso, is 10 times better than the Jetson Nano's 100 fps.
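
Just to put rough numbers on that quote (the 30 fps and 184 mW figures are from the article; the rest is my own back-of-envelope arithmetic):

grai_fps, grai_watts = 30, 0.184
grai_inf_per_watt = grai_fps / grai_watts
print(round(grai_inf_per_watt))        # ~163 inferences per second per watt
# "around 20x a comparable GPU" would put that GPU at roughly 8 inf/s/W on the same network
print(round(grai_inf_per_watt / 20))   # ~8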

AKD1000: too cool for school. It does this at US$10 to US$15 a chip on 28nm, and as you scale down to 12nm its performance will increase, so GrAI are much more expensive at 12nm and well behind.

The article makes clear they do not have one-shot or incremental learning.

They are using 16-bit activations and boast only a 1 percent accuracy loss when converting from 32-bit activations. AKD1000 at 4-bit activations is offering only a 1 percent loss, because it has on-chip convolution as well as the SNN.
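
For anyone wondering what dropping to 4-bit activations actually does, here is a toy illustration (a generic uniform quantiser, not BrainChip's or GrAI's actual scheme):

import numpy as np

def quantise(x, bits):
    # map values onto 2**bits - 1 uniform levels between 0 and max(x)
    levels = 2**bits - 1
    scale = x.max() / levels if x.max() > 0 else 1.0
    return np.round(x / scale) * scale

acts = np.array([0.05, 0.12, 0.33, 0.71, 0.98], dtype=np.float32)
print(quantise(acts, 16))   # 16-bit: differences from the originals are negligible
print(quantise(acts, 4))    # 4-bit: values snap onto a coarse 15-level grid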
 
  • Like
  • Fire
  • Love
Reactions: 69 users

Iseki

Regular
Could someone who knows stuff please have a look at this and confirm ours is better?

From this article, it's a whole SoC, not IP that can be applied to any new small chips coming out.
It requires dual Arm M7s (extremely high-powered CPUs) to operate it,
and it uses floating-point maths (i.e. graphics-style processing, which involves power and heat).

So not competition, except possibly for a small number of specialised applications. Certainly not at the edge.
 
  • Like
  • Fire
  • Love
Reactions: 24 users

uiux

Regular
The following paragraph from the GrAI article is telling when compared with the Nviso BRN demonstration running at 1,000 fps (somewhere I also saw the figure 1,670 fps but cannot recall where - it might have been said by Tim Llewellyn):

“GrAI VIP can handle MobileNetv1–SSD running at 30fps for 184 mW, around 20× the inferences per second per Watt compared to a comparable GPU, the company said, adding that further optimizations in sparsity and voltage scaling could improve this further”

They are saying they are comparable with a GPU on performance per watt, while AKIDA, according to Nviso, is 10 times better than the Jetson Nano's 100 fps.

AKD1000: too cool for school. It does this at US$10 to US$15 a chip on 28nm, and as you scale down to 12nm its performance will increase, so GrAI are much more expensive at 12nm and well behind.

The article makes clear they do not have one-shot or incremental learning.

They are using 16-bit activations and boast only a 1 percent accuracy loss when converting from 32-bit activations. AKD1000 at 4-bit activations is offering only a 1 percent loss, because it has on-chip convolution as well as the SNN.

It's not a fair comparison looking at NVISO's reported FPS and GrAI's - they are running two different neural network architectures

On Akida hardware, Nviso were only running the gaze tracking in that graph, which from memory was running on 5 nodes, compared to MobileNet's ~30-123 - a significant footprint increase

 
  • Like
  • Fire
Reactions: 18 users

Slade

Top 20
Think we have to stop thinking of Akida as a chip and rather think of it as agnostic IP that improves the performance and efficiency of all chips. There will always be a place for Akida.
 
  • Like
  • Love
  • Fire
Reactions: 54 users

uiux

Regular
From this article, it's a whole SoC, not IP that can be applied to any new small chips coming out.
It requires dual Arm M7s (extremely high-powered CPUs) to operate it,
and it uses floating-point maths (i.e. graphics-style processing, which involves power and heat).

So not competition, except possibly for a small number of specialised applications. Certainly not at the edge.

The article says they are aiming for near-sensor applications, though the wording of the article makes it appear a little cagey, e.g.:

"GrAI Matter sees its offering in between edge server chips and tinyML, though its device is intended to sit next to sensors in the system. An ideal use case would be GrAI VIP next to a camera in a compact camera module, he added."
 
  • Like
Reactions: 8 users
D

Deleted member 118

Guest
The shorter is giving it a good go this morning and there is only 1 thing I need to say.


 
  • Haha
  • Like
Reactions: 8 users

uiux

Regular

It's not a fair comparison looking at NVISO's reported FPS and GrAI's - they are running two different neural network architectures

On Akida hardware, Nviso were only running the gaze tracking in that graph, which from memory was running on 5 nodes, compared to MobileNet's ~30-123 - a significant footprint increase


Actually, Nviso did port all their models over... so I'm not entirely sure about the # of required nodes for all models. Either way, it's not a fair comparison with MobileNet - apples vs oranges.


 
  • Like
  • Love
  • Fire
Reactions: 13 users
D

Deleted member 118

Guest
Accumulator is back and picking everything up at $1.13
 
  • Like
Reactions: 6 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Fire
  • Love
Reactions: 25 users

skutza

Regular
Here is what the shorters drink after work on a Friday.
 
  • Haha
  • Like
  • Fire
Reactions: 25 users

Xhosa12345

Regular
  • Haha
  • Like
  • Love
Reactions: 17 users

Slymeat

Move on, nothing to see.
Could someone who knows stuff please have a look at this and confirm ours is better?

“Better” is open for interpretation. I'd say different and limited. It seems to be a chip that can be trained to do a specific task in a way that it is generally acceptable to call AI - not in my opinion, but seemingly accepted by many others. But then I could do the same in a sequential programming language; no AI need be involved.

for each pixel:
    if (pixel changed from previous state) then
        do something
    end if
    store current pixel state for next iteration
end for

A couple of things that stood out for me were:
1) “GrAI VIP can handle MobileNetv1–SSD running at 30fps for 184 mW, around 20× the inferences per second per Watt compared to a comparable GPU”

Comparing it to a power-hungry GPU is a bit naughty. Everyone knows they are power hungry, and anyway GPUs don't do inferences per se, just sledgehammer, power-hungry, high-level maths (well, considering multiplication to be high level, that is).

Akida has helped achieve 1000 fps and uses ¾W


2) It uses 16-bit floating point in calcs. That would be compute intensive.


3) The system can be trained, but I saw nothing about it learning.

and
4) It seems very specific to processing images only. Although they do also mention audio, their example is only for video.

IMHO it seems like they are closer to a normal, single-tasked CNN and are using the word neuromorphic in a very loose manner - pretty much just as a buzzword, probably to get search engines to find the article. Sure, they call things neurons, but so do many other implementations that call memory cells neurons and call what they have neuromorphic.

As @jtardif999 stated, they don’t mention synapses, and I don’t accept that if you have neurons, then synapses naturally follow. They should, in a true neuromorphic implementation, but so many are using that term for things that are very loosely modelled on only part of the brain.

As an example I refer to ReRAM implementations of "neuromorphic" systems. They store both state and weight in memory cells, and use the resistive state of the memory cells to perform analogue addition and multiplication. But I think all such "neuromorphic" implementations suffer the same limitation of not being able to learn; they can only be trained. And once trained for a task, that is the only task they do until re-trained. And if that is all you want, then that is your definition of "better".
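
As a rough sketch of how those ReRAM crossbars do their multiply-accumulate (illustrative numbers only, not any particular vendor's cells): the trained weights sit as conductances, the inputs arrive as voltages, and Ohm's and Kirchhoff's laws sum the products as a column current.

import numpy as np

V = np.array([0.1, 0.3, 0.0, 0.2])   # input voltages (activations)
G = np.array([2.0, 0.5, 1.0, 3.0])   # cell conductances (trained weights)
I = np.dot(V, G)                     # column current = analogue dot product, I = sum(V_i * G_i)
print(I)                             # 0.95

All the "learning" happened offline when those conductances were programmed, which is exactly the trained-once limitation above.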

This raises a VERY relevant question: is Akida too good? The world has time and time again gone with simple-to-understand, simple-to-use solutions over complex, multi-faceted solutions. The world especially likes mass-produced widgets that do a required task well enough. Some of these other "neuromorphic" solutions may prove to be just that. People seem happy to throw money multiple times at an inferior product rather than pay extra for the product they really need.

There’s enough room in the TAM for multiple players. I’m happy for Akida to occupy the top spot, solving the more difficult problems, and leave the more mundane to others.
 
  • Like
  • Fire
  • Love
Reactions: 35 users

Dang Son

Regular
  • Like
Reactions: 4 users
@uiux @misslou

Not sure if this helps, but it does give some insight into GrAI and their original NeuronFlow AI architecture on the GrAI One. They have since moved to the GrAI VIP as an evolution, so I could be wrong, but I expect the underlying architecture of the VIP is just some enhancements and not something entirely new or ground-breaking.

Whilst obviously suitable for certain cases, and close to Akida in particular areas (e.g. digital, event-based), there would appear to be no on-device learning, which indicates to me it is still not near Akida overall.

Original May 2022 paper attached.

 

Attachments

  • 2205.13037.pdf
    613 KB · Views: 201
  • Like
  • Fire
  • Love
Reactions: 58 users

uiux

Regular
@uiux @misslou

Not sure if this helps, but it does give some insight into GrAI and their original NeuronFlow AI architecture on the GrAI One. They have since moved to the GrAI VIP as an evolution, so I could be wrong, but I expect the underlying architecture of the VIP is just some enhancements and not something entirely new or ground-breaking.

Whilst obviously suitable for certain cases, and close to Akida in particular areas (e.g. digital, event-based), there would appear to be no on-device learning, which indicates to me it is still not near Akida overall.

Original May 2022 paper attached.



This is an awesome find, thank you!



I am looking for the slide from a BrainChip presentation that shows GrAI Matter next to BrainChip with a few of the other chips to the right


Anyone know which one it's from? I thought it was 2022 AGM preso
 
  • Like
Reactions: 10 users