BRN Discussion Ongoing

Taproot

Regular

The electric Mercedes-Benz CLA or EQC will be based on the Vision EQXX, a striking prototype with a drag coefficient of 0.17 that recently completed a journey of more than 1,200 km on a single charge. It cannot be ruled out that the production version will improve on the 0.20 coefficient that the EQS boasts.

The current second-generation CLA-Class arrived on the scene in 2019 as a 2020 model, and in Mercedes tradition we should see the updated version arrive in 2023 as a 2024 model.
 
  • Like
  • Love
  • Fire
Reactions: 19 users

Taproot

Regular
Very cool find generously shared.

I recommend reading the whole article to find the last line:

“Finally, we cannot fail to mention that it will also be in charge of launching the new MB.OS operating system.”

My opinion only DYOR
FF

AKIDA BALLISTA

PS: The author emboldened the words in the quote; it was not my doing.
 
  • Like
  • Love
  • Fire
Reactions: 10 users
Not wanting to dampen the brilliant message of Akida being more efficient (as it is), but it gets up my nose to see wording such as "111x less power". More so because of the "111x less" than because it is actually comparing energy and not power.

From a purely mathematical point of view, "111x less power" is 1x − 111x = −110x, meaning it generates 110 times the amount of power. How fantastic is that: when spotting keywords it actually generates power.

I assume they mean it uses 1/111th of the power or 0.9% of the power, which is 99.1% less and not 111x.

Or maybe even it is 99% more efficient!

I won't get into the inaccuracy of calling it a reduction in power, though: Joules measure energy; power is measured in Watts, i.e. Joules per second. When converting to instantaneous power, the comparison is very different. But it is the overall energy used that IS important anyway. And in this chart Akida uses less energy at 5 MHz, and more energy at 300 MHz.

So saying that it is 99% more efficient is correct.
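
A quick arithmetic check of that reading (a minimal sketch in Python; the factor of 111 is simply taken from the quoted claim, not from any measured data):

```python
# Sanity check of the "111x" wording, treating 111 as the ratio of energies.
factor = 111.0

fraction_used = 1.0 / factor                 # uses 1/111th of the reference energy
percent_used = fraction_used * 100.0         # ~0.9%
percent_saved = 100.0 - percent_used         # ~99.1% less, i.e. roughly 99% more efficient

print(f"uses {percent_used:.1f}% of the energy, i.e. {percent_saved:.1f}% less")
```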

The bar charts appear to give a much better representation—but these are equally confusing in that they are NOT comparing like against like.

But a better thing to report would be total bang for buck, as in the number of inferences per Watt, i.e. standard scientific normalization to a constant denominator (the Watt).

Let's round for ease of visualizing:
ARM Cortex M4 (120MHz) = 150 inf/W
Jetson Nano - (921 MHz) = 1080 inf/W
Jetson Nano (5W - 614 MHz) = 1520 inf/W
Coral Dev Board = 2850 inf/W
AKD 1000 (300MHz) = 17000 inf/W
AKD 1000 (5MHz) = 27000 inf/W

Now we can say roughly 10x the number of inferences for the same power usage, comparing best against the best. And the comparison becomes 180x the number of inferences for the same power usage when comparing best Akida to worst (ARM).
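
For anyone who wants to reproduce that normalization, here is a minimal sketch. The throughput and average-power pairs are illustrative assumptions back-derived to land near the rounded inf/W figures above (only the ARM and Akida inference rates are quoted later in the thread); they are not the measured data behind the chart.

```python
# Normalizing raw throughput (inferences/s) and average power draw (W) to a
# common denominator: inferences per Watt. Note (inf/s) / (J/s) = inferences per Joule.
examples = [
    # (device,                  inferences per second, average power in Watts)
    ("ARM Cortex M4 (120 MHz)", 6,    0.040),   # rate quoted later in the thread; power assumed
    ("Coral Dev Board",         8550, 3.0),     # illustrative assumption
    ("AKD1000 (300 MHz)",       1683, 0.099),   # rate quoted later in the thread; power assumed
]

for device, inf_per_sec, power_w in examples:
    inf_per_watt = inf_per_sec / power_w
    print(f"{device:25s} ~ {inf_per_watt:,.0f} inf/W")
```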

And also, the ARM Cortex now looks far inferior. And I don't think that was the intention of this presentation.

This leads to yet another of my pet hates: lies, damned lies, and statistics. "Facts" can be presented any way to show the message you want to convey.

Charts SHOULD ALWAYS be normalized so as to show same versus same.
So, after all that, is the presenter being untruthful in saying AKIDA1000 is the most efficient of the semiconductors displayed for comparison purposes in the graphic?

Regards
FF

AKIDA BALLISTA
 
Last edited:
  • Like
Reactions: 7 users

Slymeat

Move on, nothing to see.
So, after all that, is the presenter being untruthful in saying AKIDA1000 is the most efficient of the semiconductors displayed for comparison purposes in the graphic?

Regards
FF

AKIDA BALLISTA
Absolutely not being untruthful in presenting Akida as the most efficient. The numbers absolutely support that, as I said.

My only concern is with the poor, but typical, misuse of language and the abuse of charts that do not compare like against like. That DOES mislead!

The block/bar chart showing ARM Cortex M4 with the 2nd smallest value and also containing the words "smaller is better" ABSOLUTELY IS misleading!

People WILL see that chart and WILL draw an incorrect conclusion. And possibly even one that misleads them to believe Akida (at 350 MHz) is not as good as the ARM M4. Which is NOT the case!!

When you normalize the values, ARM is actually the worst, and by a very long way. It only uses less power because it does far less: only 6 inferences per second compared to the worst-case Akida (power-wise, as in the larger bar on the chart) at 1,683 inferences per second.

Joules per inference, and the time taken to perform said inference, is the ONLY meaningful data. I think normalizing as inferences per Watt best presents this information.
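
A small sketch of those unit relationships, with made-up per-inference figures (roughly ARM-M4-like, purely for illustration):

```python
# Energy per inference (J) and latency (s) are the primitive measurements;
# average power, throughput and "inferences per Watt" all follow from them.
energy_per_inference_j = 6.6e-3   # hypothetical: 6.6 mJ spent per inference
latency_s = 0.167                 # hypothetical: 167 ms per inference

avg_power_w = energy_per_inference_j / latency_s       # W = J / s
throughput_inf_per_s = 1.0 / latency_s
inf_per_watt = throughput_inf_per_s / avg_power_w      # == 1 / energy_per_inference_j

print(f"average power       : {avg_power_w * 1000:.1f} mW")
print(f"throughput          : {throughput_inf_per_s:.1f} inf/s")
print(f"inferences per Watt : {inf_per_watt:.0f}")
```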
 
  • Like
  • Fire
  • Love
Reactions: 41 users

Diogenese

Top 20
Me either, Sunday morning musings 😅
My concern is that Mercedes is also made in China:

https://group.mercedes-benz.com/com...enz-production-with-its-partner-in-china.html

February 26, 2018 - Seeing further growth potential in the Chinese automotive market, Daimler and BAIC announced plans to further expand local production for the Mercedes-Benz brand at their joint venture, Beijing Benz Automotive Co. Ltd. (BBAC).
 
  • Like
  • Love
  • Sad
Reactions: 7 users

Dang Son

Regular
Not wanting to dampen the brilliant message of Akida being more efficient (as it is), but it gets up my nose to see wording such as "111x less power". More so because of the "111x less" than because it is actually comparing energy and not power.

From a purely mathematical point of view, "111x less power" is 1x − 111x = −110x, meaning it generates 110 times the amount of power. How fantastic is that: when spotting keywords it actually generates power.

I assume they mean it uses 1/111th of the power or 0.9% of the power, which is 99.1% less and not 111x.

Or maybe even it is 99% more efficient!

I won't get into the inaccuracy of calling it a reduction in power, though: Joules measure energy; power is measured in Watts, i.e. Joules per second. When converting to instantaneous power, the comparison is very different. But it is the overall energy used that IS important anyway. And in this chart Akida uses less energy at 5 MHz, and more energy at 300 MHz.

So saying that it is 99% more efficient is correct.

The bar charts appear to give a much better representation—but these are equally confusing in that they are NOT comparing like against like.

But a better thing to report would be total bang for buck, as in the number of inferences per Watt, i.e. standard scientific normalization to a constant denominator (the Watt).

Let's round for ease of visualizing:
ARM Cortex M4 (120MHz) = 150 inf/W
Jetson Nano - (921 MHz) = 1080 inf/W
Jetson Nano (5W - 614 MHz) = 1520 inf/W
Coral Dev Board = 2850 inf/W
AKD 1000 (300MHz) = 17000 inf/W
AKD 1000 (5MHz) = 27000 inf/W

Now we can say roughly 10x the number of inferences for the same power usage, comparing best against the best. And the comparison becomes 180x the number of inferences for the same power usage when comparing best Akida to worst (ARM).

And also, the ARM Cortex now looks far inferior. And I don't think that was the intention of this presentation.

This leads to yet another of my pet hates: lies, damned lies, and statistics. "Facts" can be presented any way to show the message you want to convey.

Charts SHOULD ALWAYS be normalized so as to show same versus same.
Well said @Slymeat
I think you should definitely submit this message to Chris and Rob at BRN, so AKIDA's benefits can be described more clearly and accurately moving forward.
 
  • Like
  • Love
Reactions: 22 users
Absolutely not being untruthful in presenting Akida as the most efficient. The numbers absolutely support that, as I said.

My only concern is with the poor, but typical, misuse of language and the abuse of charts that do not compare like against like. That DOES mislead!

The block/bar chart showing ARM Cortex M4 with the 2nd smallest value and also containing the words "smaller is better" ABSOLUTELY IS misleading!

People WILL see that chart and WILL draw an incorrect conclusion. And possibly even one that misleads them to believe Akida (at 350 MHz) is not as good as the ARM M4. Which is NOT the case!!

When you normalize the values, ARM is actually the worst, and by a very long way. It only uses less power because it does far less: only 6 inferences per second compared to the worst-case Akida (power-wise, as in the larger bar on the chart) at 1,683 inferences per second.

Joules per inference, and the time taken to perform said inference, is the ONLY meaningful data. I think normalizing as inferences per Watt best presents this information.
I accept part of what you are saying, but you are the target audience, not me, and you were not misled.

This is marketing science not computer science.

Jerome Nadel is a marketing psychologist.

He knows the target audience, he knows you and how you will react, and in doing your own maths you will self-verify that AKIDA is the best by some margin.

He has drawn you in and unless you have a personality defect, which I am sure you do not, you have converted yourself by your own analysis into someone who will want to explore AKIDA and Brainchip further.

In marketing, as you have said, you can present FACTS in whatever way you want so as to try and prove a point.

What you should never do is tell a lie and claim it as a FACT, which in this case would have been claiming that AKIDA was best when the maths did not support it.

Why? Because when your target audience can do the maths and dig into the numbers, they will come to the view that it is all BS.

This sort of marketing is used everywhere, even in courts.

Competent advocates use it all the time, misstating a FACT in the knowledge that the judge or the opponent will correct them, which then reinforces the significance of the FACT in the mind of the judge or the jury.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Haha
Reactions: 11 users
My concern is that Mercedes is also made in China:

https://group.mercedes-benz.com/com...enz-production-with-its-partner-in-china.html

February 26, 2018 - Seeing further growth potential in the Chinese automotive market, Daimler and BAIC announced plans to further expand local production for the Mercedes-Benz brand at their joint venture, Beijing Benz Automotive Co. Ltd. (BBAC).
I think we do not have enough details. In the automotive world, producing a motor vehicle means having a plant where you bolt together bits sourced from all over the world. Mercedes-Benz has been building a very high-tech production facility in Germany to build the electronics as platforms to send to the plants where the vehicles are assembled. If AKIDA was, for example, in a dashboard platform shipped from Germany to their assembly plant in China, it would not be an issue.

Lots of ways this can play out.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
Reactions: 13 users
D

Deleted member 118

Guest
Competitors are moving fast and it seems Intel have brought out something solid with the
Loihi 2

 
  • Like
  • Fire
  • Thinking
Reactions: 6 users

Diogenese

Top 20
This bar chart goes back to mid-2020. I saved a copy, but the link I saved has been 404'd.


BrainChip_tech-brief_6-How-BrainChip-is-Changing-AI_v1.2.pdf (brainchipinc.com)
[Image: bar chart from the BrainChip tech brief]


I was afraid that this would happen to the web page when we switched from being a tech company to a marketing company. The original tech stuff has been buried and is impossible to retrieve.
 
  • Like
  • Love
  • Sad
Reactions: 15 users

VictorG

Member
Competitors are moving fast and it seems Intel have brought out something solid with the
Loihi 2

It's still a research chip at the most minimal scale possible, based on a 3-inch-square board carrying 8 x Loihi 2 chips to deliver UP TO a million neurons and UP TO a billion synapses.
I'm not worried, and neither should you be.


Davies said that Kapoho Point can represent “up to a million neurons” and “up to a billion synapses” — “a pretty good scale of network size just in this very compact form factor.”

Kapoho Point (which, like all of Intel’s neuromorphic releases thus far, is named after Hawaii’s volcanic geography) marks a new form factor for Intel: an ultracompact board about three inches square (Davies likened it to a credit card) with four Loihi 2 chips on its top side and another four on its underside, for a total of eight Loihi 2 chips at “the most minimal scale possible.”
 
  • Like
  • Love
Reactions: 12 users
Competitors are moving fast and it seems Intel have brought out something solid with the
Loihi 2

But is this really catching up:

“Kapoho Point (which, like all of Intel’s neuromorphic releases thus far, is named after Hawaii’s volcanic geography) marks a new form factor for Intel: an ultracompact board about three inches square (Davies likened it to a credit card) with four Loihi 2 chips on its top side and another four on its underside, for a total of eight Loihi 2 chips at “the most minimal scale possible.”

Kapoho Point. Image courtesy of Intel.
Davies said that Kapoho Point can represent “up to a million neurons” and “up to a billion synapses” — “a pretty good scale of network size just in this very compact form factor.” It can solve optimization problems (an area of particular strength for neuromorphic computing) with up to 8 million variables and, according to Intel, with up to 1000× better energy efficiency than a state-of-the-art CPU solver.”

AKIDA 1.0 in a single 28 nm chip has 1.2 million neurons and 10 billion synapses and is 1,000 times more energy efficient than a GPU.

Cascading eight AKIDA 1.0 chips gives you 9.6 million neurons and 80 billion synapses.

If I am remembering correctly, Loihi 2 was fabricated at 7 nm, which straight off makes it many times more expensive, and at 7 nm foundry yields fall off significantly, whereas at 28 nm at TSMC yields are guaranteed above 90%.
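
A trivial sketch of that comparison, taking the per-chip and per-board figures quoted above at face value and assuming capacity simply adds when chips are cascaded:

```python
# Scaling the quoted figures: eight cascaded AKD1000 chips vs the eight-chip
# Kapoho Point board (Loihi 2 "up to" figures). Assumes capacity adds linearly.
akd1000_per_chip = {"neurons": 1.2e6, "synapses": 10e9}
kapoho_point     = {"neurons": 1.0e6, "synapses": 1e9}   # whole 8-chip board

chips = 8
akida_board = {k: v * chips for k, v in akd1000_per_chip.items()}

for k in ("neurons", "synapses"):
    ratio = akida_board[k] / kapoho_point[k]
    print(f"{k:8s}: 8 x AKD1000 = {akida_board[k]:.1e}  vs  Kapoho Point = {kapoho_point[k]:.1e}  ({ratio:.1f}x)")
```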

My opinion only DYOR
FF

AKIDA BALLISTA
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 49 users

Boab

I wish I could paint like Vincent
But is this really catching up:

“Kapoho Point (which, like all of Intel’s neuromorphic releases thus far, is named after Hawaii’s volcanic geography) marks a new form factor for Intel: an ultracompact board about three inches square (Davies likened it to a credit card) with four Loihi 2 chips on its top side and another four on its underside, for a total of eight Loihi 2 chips at “the most minimal scale possible.”

Kapoho Point. Image courtesy of Intel.
Davies said that Kapoho Point can represent “up to a million neurons” and “up to a billion synapses” — “a pretty good scale of network size just in this very compact form factor.” It can solve optimization problems (an area of particular strength for neuromorphic computing) with up to 8 million variables and, according to Intel, with up to 1000× better energy efficiency than a state-of-the-art CPU solver.”

AKIDA 1.0 in a single 28 nm chip has 1.2 million neurons and 10 billion synapses and is 1,000 times more energy efficient than a GPU.

Eight AKIDA 1.0 chips give you 9.6 million neurons and 80 billion synapses.

If I am remembering correctly, Loihi 2 was fabricated at 7 nm, which straight off makes it many times more expensive, and at 7 nm foundry yields fall off significantly, whereas at 28 nm at TSMC yields are guaranteed above 90%.

My opinion only DYOR
FF

AKIDA BALLISTA
Another one put to the sword👍👍

 
  • Like
  • Haha
  • Fire
Reactions: 18 users
Now for my wild weekend thought on how Brainchip is releasing or not releasing performance figures the way those who know what they are talking about would like.

Intel and the other big players have been trying to get agreement on benchmarking for neuromorphic computing.

To my knowledge Brainchip has stayed out of this debate.

Why?

Well, if they joined in, they would be the squeaky little voice at the end of a very long table that nobody would take notice of, and they would be drawn into a set of benchmarks designed to benefit others.

I am not in any way, shape or form a techie, and refer to myself as a technophobe, but one thing I am sure of is that there is an underlying brilliance to the AKIDA technology which no one here comprehends, even the techies.

This is not a criticism; it is why Brainchip refers to its secret sauce, because it is secret.

I glimpsed a little of the secret when I attended the demonstration of Nviso running on AKIDA after the AGM.

Actually, I did not glimpse it but heard about its brilliance from Tim Llewellyn, when he said that Anil Mankar had shown them how to tweak AKIDA to increase the frame rate to run at up to 1,670 fps from the initial rate of 100 fps, which was itself ten times faster than the Nvidia Jetson, which was giving them 10 fps.

Clearly Brainchip can play games with performance and power use, and as such it is, in my opinion, well served by ducking and weaving rather than locking itself into a benchmarking game where the big players set the rules.

When and if a benchmark for neuromorphic computing is agreed by the big players and published, Brainchip can tweak the AKIDA technology so that, out of the box, it blows the benchmark away.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 54 users
D

Deleted member 118

Guest
Not sure if it’s been posted before


[Image attachment]
 
  • Like
  • Fire
  • Love
Reactions: 14 users
D

Deleted member 118

Guest
  • Like
  • Love
  • Fire
Reactions: 26 users
D

Deleted member 118

Guest
  • Like
Reactions: 5 users
[Image attachment]


Sorry, I can’t find the link to it now; but not a bad company to have working on the same vehicle design!

:)
 
  • Like
  • Fire
  • Love
Reactions: 27 users

Boab

I wish I could paint like Vincent
Just reading through some old stuff and I'm reminded (excited) of/by the below comments.

BrainChip is aiming its technology at the edge, where more data is expected to be generated in the coming years. Pointing to IDC and McKinsey research, BrainChip expects the market for edge-based devices needing AI to grow from $44 billion this year to $70 billion by 2025. In addition, at last week’s Dell Technologies World event, CEO Michael Dell reiterated his belief that while 10 percent of data now is generated at the edge, that will shift to 75 percent by 2025. Where data is created, AI will follow. BrainChip has designed Akida for the high-processing, low-power environment and to be able to run AI analytic workloads – particularly inference – on the chip to lessen the data flow to and from the cloud and thus reduce latency in generating results.
 
  • Like
  • Love
  • Fire
Reactions: 21 users