BRN Discussion Ongoing


I’m more worried about this sort of chip - traditional AI done faster - at the edge than Loihi in terms of competition.
"I’m more worried about this sort of chip - traditional AI done faster - at the edge than Loihi in terms of competition"

Yeah, we're likely going to miss the Moon, but maybe we'll land among the Stars?..

Sorry 😔...


Unfortunately we are going to have to deal with competition, and we will not achieve 100% market share in the Far Edge space.

Those OEMs content with animatronic-style "A.I." (an animatron being a puppet or similar figure animated by electromechanical devices) will go with these other options, even if they are less flexible in their applications with respect to sizing (nm, nodes etc.), addition to new SoCs, OEM chip designs and so on.

Would be nice if we had a monopoly, but unfortunately we only have that with truly independent A.I. at the edge.
 
  • Like
  • Fire
  • Love
Reactions: 26 users

VictorG

Member
Howdy Gang!

Following on from Cerence's 25th Annual Needham Growth Conference (as discussed above), I've just had the opportunity to take a peek at the Cerence Investor Day Destination Next PDF, which you can access here if you wish: https://cerence.gcs-web.com/static-files/4fd79f3e-2c0a-429a-841e-dcc68fc52e83

So the way I see it now, AKIDA is 99.999999% likely to be incorporated in the "Cerence Immersive Companion", and I'll explain why.

Firstly, you'll see that one of the slides I've posted lists the "New Innovations", including enhanced in-cabin audio AI, surroundings awareness, teachable AI, and a multi-sensory experience platform, in addition to other features. These New Innovations will be rolled out in FY 23/24.

Secondly, there's a slide that shows Nils Schanz and highlights his work on MBUX, the Hyperscreen and the "Hey Mercedes" voice assistant.

Thirdly, remember at Cerence's 25th Annual Needham Growth Conference when Stefan Ortmanns said, in relation to NVIDIA, something like: "So you're right. They have their own technology, but our technology stack is more advanced. And here we're talking about specifically Mercedes where they're positioned with their hardware and with our solution." I believe this comment was made in reference to the Vision EQXX voice control system, which I think is what Cerence is referring to as the new "enhanced in-cabin audio AI" that will roll out in FY23/24 as part of the Cerence Immersive Companion.

Fourthly, another slide shows "Multisensory Inputs", which includes a full HMI solution integrating multi-modal inputs (incl. conversational AI, DMS, facial, gaze, gesture, emotion, etc.) and multi-modal outputs (incl. voice, graphic, haptic, vibration, etc.). I bolded the "etc." because I thought that could be hinting at smell and taste.

Fifthly, Stefan Ortmanns said that Cerence's technology is more advanced than NVIDIA's and that Qualcomm are using it too, so that rules out NVIDIA and Qualcomm from being the magic edge computing ingredient.

Sixthly, Arm and Cerence are partners.

Seventhly, everything adds up nicely in terms of functionality, timelines and just the general vibe, IMO.

Eighthly, DYOR because I often have no idea what I'm talking about.

B 💋

View attachment 27644


View attachment 27645

View attachment 27646
View attachment 27648
View attachment 27649
View attachment 27650
I believe you are over the target, Bravo.
I hope this has something to do with BRN calling the LDA cap. I'm probably way off target, but my thinking is that the relationship between BRN and Mercedes began when BRN only had Akida 1000, before Akida 1.0. If the timeline is correct, would that also mean that the EQXX platform and flow-on technologies incorporate BRN silicon rather than BRN IP?
 
  • Like
  • Love
  • Thinking
Reactions: 7 users

JDelekto

Regular
Howdy Gang!

Following on from Cerence's 25th Annual Needham Growth Conference (as discussed above), I've just had the opportunity to take a peek at the Cerence Investor Day Destination Next PDF... So the way I see it now, AKIDA is 99.999999% likely to be incorporated in the "Cerence Immersive Companion"...

B 💋
The one bullet point that jumped off the page was "Teachable AI."

At this point, Akida's patented one-shot (or multi-shot) learning seems to be a candidate for an edge-based system that can learn and differentiate between different vehicle drivers with as little friction as possible.

Something I don't see as a talking point often enough is the ability of a shared device using "smart" sensors to learn and identify the individual using it while keeping the identifying information local to the device for security.
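
To make that concrete, here's a toy nearest-prototype sketch of one-shot enrolment (illustrative Python only; Akida's on-chip learning works at the synapse level and differs from this): a new driver is enrolled from a single embedding, and recognition is a local similarity test, so the identifying data never has to leave the device.

```python
import numpy as np

class OneShotDriverID:
    """Toy nearest-prototype classifier: enrol a driver from one sample,
    recognise by cosine similarity. All prototypes stay on-device."""

    def __init__(self, threshold=0.8):
        self.prototypes = {}      # driver name -> embedding, stored locally
        self.threshold = threshold

    def enrol(self, name, embedding):
        """One-shot learning: a single example defines the new class."""
        v = np.asarray(embedding, dtype=float)
        self.prototypes[name] = v / np.linalg.norm(v)

    def identify(self, embedding):
        """Return the best-matching enrolled driver, or None if unknown."""
        v = np.asarray(embedding, dtype=float)
        v = v / np.linalg.norm(v)
        best, score = None, self.threshold
        for name, proto in self.prototypes.items():
            sim = float(np.dot(v, proto))  # cosine similarity of unit vectors
            if sim > score:
                best, score = name, sim
        return best

ids = OneShotDriverID()
ids.enrol("alice", [0.9, 0.1, 0.3])
ids.enrol("bob",   [0.1, 0.8, 0.4])
print(ids.identify([0.85, 0.15, 0.35]))  # -> alice
```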
 
  • Like
  • Fire
  • Love
Reactions: 26 users

Murphy

Life is not a dress rehearsal!
Howdy Gang!

Following on from Cerence's 25th Annual Needham Growth Conference (as discussed above), I've just had the opportunity to take a peek at the Cerence Investor Day Destination Next PDF... So the way I see it now, AKIDA is 99.999999% likely to be incorporated in the "Cerence Immersive Companion"...

B 💋
Once again, you dazzle me Sherlock!! Great research!! :)

If you don't have dreams, you can't have dreams come true!
 
  • Like
  • Love
  • Fire
Reactions: 17 users
  • Like
  • Fire
Reactions: 7 users

equanimous

Norse clairvoyant shapeshifter goddess
When comparing chalk and cheese, are we talking camembert or parmesan?

Perceive's figures are for the YOLO V5 database.

In 2019, the Akida simulator did 30 fps @ 157 mW on MobileNet V1.

View attachment 27636
I don't have the fps/W figures for the Akida 1 SoC performance, so if anyone has those to hand, it would be much appreciated.

The chip simulator seems to have topped out at 80 fps. We know Akida 1 SoC can top 1600 fps (don't recall what database), which is 20 times faster than the simulator, so it wouldn't even get into second gear doing 30 fps.

Back in 2018, Bob Beachler said they expected 1400 frames per second per Watt.
https://www.eejournal.com/article/brainchip-debuts-neuromorphic-chip/

So 30 fps would be roughly 21 mW if there is a linear dependence. And, of course, we were told that the SoC performed better than expected.
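
Making the arithmetic explicit, assuming that linear scaling holds:

```latex
P \approx \frac{30\ \text{fps}}{1400\ \text{fps/W}} \approx 0.021\ \text{W} \approx 21\ \text{mW}
```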

But the main thing is that the comparison databases need to be the same as performance varies depending on database.

Perceive relies on compression to achieve extreme sparsity, ignoring the zeros in the multiplier.

US11003736B2 Reduced dot product computation circuit

View attachment 27638

[0003] Some embodiments provide an integrated circuit (IC) for implementing a machine-trained network (e.g., a neural network) that computes dot products of input values and corresponding weight values (among other operations). The IC of some embodiments includes a neural network computation fabric with numerous dot product computation circuits in order to process partial dot products in parallel (e.g., for computing the output of a node of the machine-trained network). In some embodiments, the weight values for each layer of the network are ternary values (e.g., each weight is either zero, a positive value, or the negation of the positive value), with at least a fixed percentage (e.g., 75%) of the weight values being zero. As such, some embodiments reduce the size of the dot product computation circuits by mapping each of a first number (e.g., 144) input values to a second number (e.g., 36) of dot product inputs, such that each dot product input only receives at most one input value with a non-zero corresponding weight value.

1. A method for implementing a machine-trained network that comprises a plurality of processing nodes, the method comprising:

at a particular dot product circuit performing a dot product computation for a particular node of the machine-trained network:

receiving (i) a first plurality of input values that are output values of a set of previous nodes of the machine-trained network and (ii) a set of machine-trained weight values associated with a set of the input values;

selecting, from the first plurality, a second plurality of input values that is a smaller subset of the first plurality of input values, said selecting comprising (i) selecting the input values from the first plurality of input values that are associated with non-zero weight values, and (ii) not selecting a group of input values from the first plurality of input values that are associated with weight values that are equal to zero;

computing a dot product based on (i) the second plurality of input values and (ii) weight values associated with the second plurality of input values.
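
As I read that claim, the trick is simply to drop every (input, weight) pair whose trained weight is zero before doing any multiplies. A minimal toy sketch of the idea in Python (illustrative only, obviously not Perceive's actual circuit):

```python
def reduced_dot_product(inputs, weights):
    """Dot product that only consumes inputs with non-zero weights.

    With ternary weights (0, +w, -w) and e.g. 75% zeros, three quarters
    of the multiplies disappear entirely -- the idea behind the patent's
    smaller dot product circuits.
    """
    # Select only the (input, weight) pairs with non-zero weights,
    # mirroring the claim's "second plurality of input values".
    selected = [(x, w) for x, w in zip(inputs, weights) if w != 0]
    return sum(x * w for x, w in selected)

# Toy example: 6 of the 8 weights are zero, so only 2 multiplies happen.
inputs  = [0.5, 1.0, 0.25, 2.0, 0.75, 1.5, 0.1, 0.9]
weights = [0,   1,   0,    0,   -1,   0,   0,   0]
print(reduced_dot_product(inputs, weights))  # 1.0 - 0.75 = 0.25
```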

ASIDE: This isn't relevant to the discussion, but I stumbled across it just now and it shows that we developed a dataset for MagikEye in 2020, which I had not observed before.
View attachment 27637
 
  • Haha
  • Like
  • Thinking
Reactions: 26 users

Terroni2105

Founding Member
I just had a thought - with all the layoffs happening at Google, Microsoft, Meta, Amazon etc. there might be some high-class people applying for any jobs going at BrainChip
 
  • Like
  • Wow
  • Fire
Reactions: 26 users
S

Straw

Guest
  • Haha
  • Like
Reactions: 5 users

TheFunkMachine

seeds have the potential to become trees.
  • Like
  • Fire
  • Love
Reactions: 20 users

equanimous

Norse clairvoyant shapeshifter goddess
  • Like
  • Haha
  • Love
Reactions: 16 users

TECH

Regular
I just had a thought - with all the layoffs happening at Google, Microsoft, Meta, Amazon etc. there might be some high-class people applying for any jobs going at BrainChip

Hi Terroni,

That is an excellent thought. I have also noticed the latest tech headlines coming out of the US; there is certainly a downturn, but maybe this is the first true sign that the von Neumann architecture is becoming more than a bottleneck and more like a complete choke point.

I'm sure many tech employees will be looking for new positions, as many would have been paid rather handsomely and become rather accustomed to a certain type of lifestyle.

Our leadership team are very savvy; they know exactly the type of individual they are looking for, but I'd suggest many applicants wouldn't be up to speed with BrainChip's technology, as their previous employers were potentially 3+ years behind us.

I like your thinking though, nice to see.

Regards Tech 🙂🙃🙂
 
  • Like
  • Love
  • Fire
Reactions: 23 users

TheFunkMachine

seeds have the potential to become trees.
Thanks for sharing with us Funk. Markus did the Brainchip sermon 4 days ago but good to see it again
Oh, ok! Haha. Fair enough. Never too much of a good thing, I suppose.
 
  • Like
  • Haha
Reactions: 10 users

Build-it

Regular
I just had a thought - with all the layoffs happening at Google, Microsoft, Meta, Amazon etc. there might be some high-class people applying for any jobs going at BrainChip

Agree Terroni2105, it appears the planets are aligning for BrainChip on all fronts; however, two points from the Alphabet CEO are IMO worth noting.

Sundar Pichai, Alphabet's CEO, said in a staff memo shared with Reuters that the company had rapidly expanded its headcount in recent years "for a different economic reality than the one we face today".

The news comes during a period of economic uncertainty as well as technological promise, in which Google and Microsoft have been investing in a burgeoning area of software known as generative artificial intelligence.

"I am confident about the huge opportunity in front of us thanks to the strength of our mission, the value of our products and services, and our early investments in AI," Mr Pichai said in the note.

I will stick with the science fiction secret ubiquitous sauce, thanks Mr Pichai.




Edge Compute.
 
  • Like
  • Fire
  • Love
Reactions: 28 users

Deadpool

hyper-efficient Ai
  • Haha
  • Like
Reactions: 8 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I believe you are over the target, Bravo.
I hope this has something to do with BRN calling the LDA cap. I'm probably way off target, but my thinking is that the relationship between BRN and Mercedes began when BRN only had Akida 1000, before Akida 1.0. If the timeline is correct, would that also mean that the EQXX platform and flow-on technologies incorporate BRN silicon rather than BRN IP?

Hey @VictorG, is “over the target” good or bad? Just checking…
 
  • Haha
  • Like
  • Thinking
Reactions: 7 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Haha
  • Like
Reactions: 6 users

Diogenese

Top 20
You’re a champion Dio


Here are the block diagrams for Ergo 1 & Ergo 2:
[Image: Ergo 1 block diagram]

[Image: Ergo 2 block diagram]



Among other things, the positions of the CPU subsystem and the NN fabric have been swapped, and the NN memory is now shared between all 4 NN clusters in the UMEM block instead of each cluster having its own memory.

As Perceive is using CNNs with a von Neumann CPU plus an NN fabric of dot product cores, their speed and efficiency seem to derive largely from their compression/quantization of weights and activations. They mention 1-bit and 4-bit values.

Given that they don't have N-of-M coding, I would be interested in their accuracy, because simply compressing the data may mean that meaningful information is discarded, whereas N-of-M seeks to capture the meaningful data.
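
For contrast, here's a toy sketch of the N-of-M idea as I understand it, keeping only the N strongest of M activations so that what survives is the most informative part of the signal (illustrative Python only, not BrainChip's implementation):

```python
import numpy as np

def n_of_m_code(activations, n):
    """Keep the n largest-magnitude values of an m-element vector, zero the rest."""
    activations = np.asarray(activations, dtype=float)
    keep = np.argsort(np.abs(activations))[-n:]   # indices of the n strongest
    coded = np.zeros_like(activations)
    coded[keep] = activations[keep]
    return coded

acts = [0.05, 0.9, -0.02, 0.4, -0.7, 0.1, 0.03, 0.6]
print(n_of_m_code(acts, n=3))  # only the three strongest survive: 0.9, -0.7, 0.6
```

The contrast with uniform compression is that the zeros here are chosen by signal strength, not imposed blindly, which is why accuracy is the question I'd ask of Perceive.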

To my uninformed eye, the Perceive SoC does not seem to be scalable, as, from the patent description, the number of NN clusters seems germane to the operation of the system.


US11049013B1 Encoding of weight values stored on neural network inference circuit

[Image: figure from US11049013B1]


Some embodiments provide a neural network inference circuit for executing a neural network that includes multiple computation nodes at multiple layers. Each of a set of the computation nodes includes a dot product of input values and weight values. The neural network inference circuit includes (i) a first set of memory units allocated to storing input values during execution of the neural network and (ii) a second set of memory units storing encoded weight value data. The weight value data is encoded such that less than one bit of memory is used per weight value of the neural network.

A neural network inference circuit for executing a neural network that comprises a plurality of computation nodes at a plurality of layers, each of a set of the computation nodes comprising a dot product of input values and weight values, the neural network inference circuit comprising:
a first set of memory units allocated to storing input values during execution of the neural network;
a second set of memory units storing encoded weight value data, wherein the weight value data is encoded such that less than one bit of memory is used per weight value of the neural network; and
a plurality of dot product computation circuits, wherein the weight value data is decoded and used to configure the dot product computation circuits.
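
On the "less than one bit of memory per weight" claim: one way to see how that is possible (my back-of-envelope sketch, not necessarily Perceive's actual encoding) is that mostly-zero ternary weights have a Shannon entropy below 1 bit once the zeros are common enough, so an entropy-style code can average under a bit per weight:

```python
import math

def ternary_weight_entropy(p_zero):
    """Shannon entropy (bits/weight) of ternary weights {0, +w, -w},
    assuming the non-zero mass is split evenly between +w and -w."""
    p_nz = (1.0 - p_zero) / 2.0  # probability of +w (same for -w)
    probs = [p_zero, p_nz, p_nz]
    return -sum(p * math.log2(p) for p in probs if p > 0)

for p in (0.75, 0.85, 0.90):
    print(f"{p:.0%} zeros -> {ternary_weight_entropy(p):.2f} bits/weight")
# 75% zeros -> 1.06 bits/weight (still just above 1 bit)
# 85% zeros -> 0.76 bits/weight
# 90% zeros -> 0.57 bits/weight
```

Note that at exactly 75% zeros the raw entropy is still slightly above 1 bit, so getting under a bit per weight needs either more sparsity than the stated minimum or an encoding that exploits structure among the weights.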


 
  • Like
  • Fire
  • Love
Reactions: 37 users

Diogenese

Top 20
  • Haha
  • Like
  • Fire
Reactions: 21 users

VictorG

Member
Hey @VictorG, is “over the target” good or bad? Just checking…
I believe the rating starts at excellent. 👍
 
  • Like
  • Fire
  • Love
Reactions: 11 users