BRN Discussion Ongoing

FJ-215

Regular
If Simon Thorpe had been pressing for years, and if I am correct about the patents being finalised, then he was pressing when, as we all know, at least 27 patents had still to be finalised and lodged.

Simon Thorpe is no longer on the SAB, so was he meddling in management beyond the proper scope of his role?

I did not suggest Carnegie Mellon as the reason the benchmarking may or may not be being undertaken, but as an example of why Brainchip may not be following up on your efforts.

Apart from anything else, choosing an academic in the UK might not play as well with the US Silicon Valley tech companies they are selling to as a US academic at a well-known American university would.

What I am arguing against is your decision to condemn Brainchip’s perceived lack of action when you cannot be in possession of the full facts.

Did you communicate directly with the CEO and Board of Brainchip and present your idea directly to them or did you take your approach to investor relations?

Was your idea placed on a Board agenda for consideration?

Far too many unknowns.

My opinion only DYOR
FF

AKIDA BALLISTA
Hi FF,

I think after all this time everyone just wants to see some numbers. For myself it's partly for reassurance but mostly so I can puff my chest out and say "See, told you so"

I can remember you arguing against Mike Davies' calls for benchmarks in neuromorphic computing. Rightly so, if Intel write the rules to suit Loihi.

One potential problem for benchmarking is: what configuration of Akida do you use? On paper Akida has 1.2M neurons and 10B synapses, but they aren't hard and fast numbers. You can trade them off to suit your application. Unfortunately I had a computer crash a few months back and lost my copy of the email exchange between Peter and Trothlis. If memory serves, Peter blew everyone's minds by claiming the upper level of neurons was approx. 6 million, but BRN didn't publish that to avoid confusion.
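
To make the trade-off concrete, here's a toy back-of-the-envelope sketch. The fan-in values are back-solved from the published figures purely for illustration; the real configuration rules are BrainChip's, not mine.

```python
# Toy sketch only: trading a fixed synapse budget against fan-in per neuron.
# The budget is the published ~10B figure; the fan-in values are back-solved
# guesses chosen to reproduce the 1.2M, ~6M and 8.8M neuron counts.
SYNAPSE_BUDGET = 10_000_000_000

def max_neurons(fan_in_per_neuron: int) -> int:
    """Neurons supportable if each neuron consumes fan_in_per_neuron synapses."""
    return SYNAPSE_BUDGET // fan_in_per_neuron

for fan_in in (8_333, 1_700, 1_136):
    print(f"fan-in {fan_in:>5}: {max_neurons(fan_in) / 1e6:.1f} M neurons")
```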

To this point, FF: your story about Anil's tweaks to NVISO achieving monster frames per second.

It's a struggle being special!
 
  • Like
  • Love
Reactions: 24 users
[quoting FJ-215's post above]
Pump this number into the toilet site: 38685813. That post will have a link to the PDF with the emails sent.
 
  • Like
  • Haha
  • Fire
Reactions: 13 users
Just on benchmarking & a bit of understanding....would appear we're no orphans :)


The Sounds Of The AI Benchmark War: Crickets
Karl Freund
Contributor
Founder and Principal Analyst, Cambrian-AI Research LLC
Apr 6, 2022, 01:00pm EDT

NVIDIA once again dominated a near-empty field of AI competitors in MLPerf Inference V2.0. Let’s explore why other chip companies don’t want to play.

Industry standard benchmarks have been an important feature of the IT landscape for decades, with SPEC, TPC, and other organizations offering benchmark suites to help buyers understand what chips and systems are best for which workloads.

The AI benchmark organization MLCommons has done a great job collaborating across dozens of member companies to define a broad suite of training and inference benchmarks that represent the bulk of AI applications in the data center and the edge.

Where is Everyone Else?

While the number of contributors (NVIDIA, Qualcomm and system vendors) increased dramatically, and performance improved by up to 50%, the number of chip architectures being tested dropped to two: performance-leader NVIDIA and efficiency-leader Qualcomm. Intel demurred this time around, while AWS, Google, AMD, Intel Habana, Graphcore, Cerebras, SambaNova, Groq, Alibaba and Baidu decided to skip the fun with their own chips.

I’ve spoken to a few customers about their experience running inference on these novel platforms, and in general the results are promising. So why not publish and provide a public tabulation of which chips are good at which problems? There are several reasons:

  1. The top reason is the lack of a good ROI. These benchmarks take a lot of engineering effort to run because most platforms will not run them well without optimizations. The effort could be better spent working with a live customer to close the deal.
  2. Performing those optimizations will produce a better product, but publishing the results can be risky. You don’t want to be slower than NVIDIA. Frankly, we suspect everyone is slower than NVIDIA on a chip-to-chip basis. So one would have to find a different way to interpret the results, as Qualcomm has done by touting energy efficiency. But,...
  3. Even if you have an angle, the arguments you make can easily be blunted. For example, a more energy-efficient chip sounds good, but if it is significantly slower, then a buyer may have to purchase more accelerators, reducing or even reversing the supposed advantage.
  4. Apples to apples comparisons are next to impossible, even in a well managed benchmark effort like MLPerf. Pricing, for example, is not considered. Nor is ease of software porting and optimization. And not everyone needs or can afford a Ferrari anyway.
  5. Did we mention that NVIDIA is hard to beat? Yeah, THAT. We count over 5000 results in the MLPerf Inference spreadsheet, with over 95% of them run on NVIDIA. The GPU leader simply overwhelms any startup that wants to pick a few cells and hope for the best.

So, what we end up with is NVIDIA showing how much better they are than they were last year. While NVIDIA did not publish any “Hopper” benchmarks, preferring to await the AI Training regatta in six months, their engineers did publish results for the latest edge SoC, the Jetson Orin, besting its predecessor by 2 to 5X.



So, what is the outlook for MLCommons?​

I believe that MLCommons provides an extremely valuable service to the industry and will continue to do so. All AI chip vendors use the suite of inference and training benchmarks to help determine performance bottlenecks and to refine their software optimizations. End users can run these open source benchmarks themselves to determine which platform best meets their needs.

As for participation, I suspect Intel will rejoin the fray to tout their Sapphire Rapids CPU, which has significant AI acceleration on board, and hopefully their new Ponte Vecchio GPU now being installed at the DOE's Argonne National Laboratory. And I expect more contributions from Graphcore as well, at least in training.

That being said, I doubt that others such as AMD and AWS will step up any time soon, but the Chinese vendors might see an opportunity to show off their silicon.
But let’s acknowledge that NVIDIA is just plain hard to beat; great engineers under a great leader can do, well, great things. Also, Qualcomm has amazing energy efficiency born from over a decade of smart phone chip development and research.

Regardless of the size of the public party, however, everyone will continue to benefit from the rich set of apps and data sets that MLCommons has helped the community develop. These are real applications that cover the waterfront of AI use cases, which in and of itself is a great value to the industry.
 
  • Like
  • Fire
  • Love
Reactions: 16 users

FJ-215

Regular
Pump this number into the toilet site 38685813
And that post will have a link to the pdf with the emails sent.
Cheers Rise,

From the Man himself.......

Travis,

The 1.2M neurons we published is an average figure. The total number of neurons depends on the configuration, and the ratio of synapses to neurons. In the example we have only 10M physical synapses, so we have plenty to spare to increase the number of neurons to 8.8M. We don’t publish that high-end figure because it would confuse people, but this network does fit entirely on the chip. You will see the same if you do the calculations for MobileNet, and that network is also entirely running within our chip. We have a working example of that.

Because the Akida design is so flexible, it is possible to reuse the same synapses connected to different neurons. We reuse the synapses, rather than multiplexing them. How that works is our trade secret. In one instance the synapses are connected to filter #1, the next to filter #2. If you like to call that multiplexing then that is ok with me, but these same synapses are connected to a single neuron in a dense layer, so there is no traditional multiplexer as in your diagram.

Best regards,
Peter


Whoops, I missed by 2.8 million neurons.
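
Read literally (and only as a toy, since Peter says the actual mechanism is a trade secret), the reuse idea might look something like this: one physical set of synapses is reloaded for filter #1, then filter #2, rather than duplicating the hardware.

```python
import numpy as np

# Hypothetical sketch of synapse reuse, NOT BrainChip's actual mechanism:
# one physical weight buffer serves several filters in turn.
physical_synapses = np.empty(9)               # one 3x3 filter's worth of "hardware"
filters = [np.full(9, 0.1), np.full(9, 0.2)]  # filter #1 and filter #2 weights
patch = np.arange(9, dtype=float)             # one input patch

for i, f in enumerate(filters, start=1):
    physical_synapses[:] = f                  # reuse the same synapses
    print(f"filter #{i}: activation = {patch @ physical_synapses:.1f}")
```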
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 24 users
Yes,

and if Brainchip is engaging in Benchmarking,

and if that is being done through just say Carnegie Mellon,

and if just say they have struck a deal where they will give the Carnegie Mellon academics the first right of publication to dramatically reduce the cost,

well, if this is what they are doing, then it would seem an entirely reasonable and responsible approach.

I have long desired that independent benchmarking be undertaken, but have remained silent, as I could understand that until complete patent coverage was in place, giving out the technology for this purpose would carry an unnecessary risk to the company and shareholders.

My opinion only DYOR
FF

AKIDA BALLISTA
FF, has Brainchip got one of our chips involved with assisting a NASA rocket 🚀 travel to the moon?
 
  • Like
Reactions: 5 users



wilzy123

Founding Member
FF, has Brainchip got one of our chips involved with assisting a NASA rocket 🚀 travel to the moon?

Yep, whatever FF says... bank on it.

Not financial advice. DYOR.

:ROFLMAO:
 
  • Like
Reactions: 1 users


AusEire

Founding Member. It's ok to say No to Dot Joining
Ok mate thanks for that.

I might have to buy a pair of specs to straighten the lines next time and if that fails I’ll just ask you to confirm if I’m ever not sure :)

Flick me a message when you’re back in Perth, the home of Brainchip’s HQ (fk you @AusEire ) and we can grab a beer or two 🍻

Stay safe mate and enjoy the island!
You still owe me a beer
 
  • Like
Reactions: 4 users
FF, has Brainchip got one of our chips involved with assisting a NASA rocket 🚀 travel to the moon?
Hi @Frank Zappa

What we have on this question, or should I say the closest we have, is a statement by Professor Iliadis of the Democritus University of Thrace, made to a university newspaper after his spiking neural network cyber security algorithm was licensed to Brainchip, that NASA was using AKIDA for navigation in space.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 26 users

alwaysgreen

Top 20
[quoting FF's reply above]
Purely based on his name, I reckon Professor Iliadis also enjoys a little chicken gyros cooked on a spit :LOL:
 
  • Haha
  • Fire
  • Like
Reactions: 8 users



Diogenese

Top 20
[quoting the Karl Freund article, "The Sounds Of The AI Benchmark War: Crickets", posted above]
In order to benchmark NNs, you would need to start with a standardized heterogeneous model library for each of image and sound.

So already there is a problem, because Akida libraries are different from other libraries, being optimized for digital SNNs, and Akida runs 1-bit to 4-bit weights and activations. Very recently other companies have emerged claiming 1-bit to 8-bit, but higher bit counts are common.

So if the benchmarking is done against a model library designed for an 8-bit NN, this will not show Akida at its best.

So you need to benchmark for each of 1-bit, 2-bit, 4-bit, and 8-bit ... 32-bit ... [In CNN, the more bits, the greater the accuracy.]
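
Here is a minimal sketch of why the bit width matters for both accuracy and memory, using plain uniform quantization (Akida's actual scheme is its own; this is just the textbook version):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=1_000_000).astype(np.float32)

def quantize(w: np.ndarray, bits: int) -> np.ndarray:
    """Uniform symmetric quantization: quantize then dequantize."""
    levels = 2 ** (bits - 1) - 1       # signed integer levels (bits >= 2)
    scale = np.abs(w).max() / levels
    return np.round(w / scale) * scale

# True 1-bit binarisation needs its own scheme, so start at 2 bits.
for bits in (2, 4, 8):
    err = np.abs(weights - quantize(weights, bits)).mean()
    mbytes = weights.size * bits / 8 / 1e6
    print(f"{bits}-bit: mean |error| = {err:.4f}, weights ≈ {mbytes:.2f} MB")
```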

Then you need to identify the benchmark parameters, eg, fps, power per inference, accuracy.
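
In code, those three parameters boil down to something like the hypothetical harness below (model, samples and avg_power_watts are stand-ins of my own; a real rig measures power at the board):

```python
import time

def benchmark(model, samples, labels, avg_power_watts: float):
    """Return (fps, joules per inference, accuracy) for a stand-in model."""
    t0 = time.perf_counter()
    predictions = [model(x) for x in samples]
    elapsed = time.perf_counter() - t0

    fps = len(samples) / elapsed                              # throughput
    joules_per_inference = avg_power_watts * elapsed / len(samples)
    accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
    return fps, joules_per_inference, accuracy

# Toy usage with a threshold "model".
fps, j_inf, acc = benchmark(lambda x: x > 0,
                            samples=[-1.0, 0.5, 2.0, -3.0],
                            labels=[False, True, True, False],
                            avg_power_watts=1.0)
print(f"{fps:,.0f} fps, {j_inf:.2e} J/inference, accuracy {acc:.0%}")
```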

Then it is further complicated by new sensor technologies like DVS (Prophesee), where Akida is incomparable, and by other tasks like NVISO's facial recognition.

But we have had some benchmarking figures released recently:

[courtesy of @uiux ]
[two benchmark charts: Akida vs. GPU/CPU, best-model and five-model-average performance]


In the left chart, the orange bars illustrate the performance of Akida/GPU/CPU against the best-performing model each was tested with, while the gray bars show the average performance of each tested on 5 models. Now, assuming that each was tested against the same 5 models (otherwise the comparison is meaningless), Akida has been run against 8-bit-plus models, requiring quantization before being used on Akida, unless the models were adapted for Akida before testing.

The effect of Akida's quantization is illustrated by the memory requirements shown in this chart:

[chart: memory requirements of the tested models]

So, really, it's a bit like comparing lemons with diamonds.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 62 users

Deadpool

hyper-efficient Ai
Why wait till Christmas, I cooked this up in my airfryer the other night
Good job Rach, that looks so delicious.

Well Done Applause GIF by MOODMAN
 
  • Like
  • Love
  • Haha
Reactions: 7 users