BRN Discussion Ongoing

Last edited:
  • Like
  • Love
Reactions: 22 users

IloveLamp

Top 20

By extending processing and storage capabilities to the edge, we can improve the latency and cyber security of smart systems. Because computing occurs closer to the data source, the risk of data leakage is reduced.

Furthermore, edge storage alleviates the strain on cloud infrastructure by allowing manufacturers to send only relevant data to their cloud solutions. This reduces storage costs while also lightening the load on cloud-based analytics.
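As a rough illustration of the "only relevant data" idea (a minimal sketch of my own, not taken from the article; the event fields and the 0.8 threshold are assumptions), the edge device scores each sample locally and forwards only the detections worth keeping:

```python
from dataclasses import dataclass

@dataclass
class Event:
    sensor_id: str
    score: float    # confidence from the local edge model
    payload: bytes  # raw sample; stays in edge storage unless it is worth uploading

def select_for_cloud(events: list[Event], threshold: float = 0.8) -> list[Event]:
    """Keep only high-confidence detections; the rest never leave the edge."""
    return [e for e in events if e.score >= threshold]

# Example: three readings, only the high-confidence one is sent to the cloud.
readings = [
    Event("cam-1", 0.12, b"..."),
    Event("cam-1", 0.93, b"..."),
    Event("cam-2", 0.40, b"..."),
]
print(f"uploading {len(select_for_cloud(readings))} of {len(readings)} events")  # -> 1 of 3
```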
1000016134.jpg
 
  • Like
  • Love
  • Fire
Reactions: 20 users

7für7

Top 20
Germany closed green! 🤔 I’m scared
 
  • Haha
  • Like
Reactions: 7 users

Boab

I wish I could paint like Vincent
Interesting how AI Labs have some of the same partners as we do.
Hopefully they will find some customers to use Akida 2.0 in the areas they commented on at the release.
So much potential; we sometimes forget how big the ecosystem is.
1717202811130.png
1717202684961.png
 
  • Like
  • Fire
  • Thinking
Reactions: 23 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
An extract from Pat Gelsinger's (Intel CEO) interview with TIME on 3 May 2024.
Screenshot 2024-06-01 at 11.03.29 am.png




 
  • Like
  • Love
  • Fire
Reactions: 23 users
Question for the more technically intelligent on BRN and mobile phones:
How do you see Akida being used in mobile phones, apart from a possible integration with the Prophesee camera?

Are there any opportunities with cyber security, or any other way to integrate Akida, do you think?
 
Last edited:
  • Like
Reactions: 3 users

Diogenese

Top 20
Some time ago I responded to Magnus Ostberg (Mercedes Software guru) querying whether the water-cooled processor in the CLA concept car required the cooling because it was running NN software.

His enigmatic response was:

"STAY TUNED!"

His latest linkedin posts do nothing to dispel my interest in the use of Akida simulation software in MB.OS.

https://www.linkedin.com/feed/update/urn:li:activity:7201230087612940288/

"We've enhanced the 'Hey Mercedes' voice assistant so it can help with a wider range of everyday functions."

https://www.linkedin.com/feed/update/urn:li:activity:7202202756806230016/

"the design of MB.OS demands a different approach because we are decoupling the hardware and software innovation cycles and integration steps".

This could be interpreted to mean that the software is developing faster than the silicon, and the software is updatable, so it would make sense to use software simulations of evolving tech (Akida 2/TeNNs) until the IP was in a sufficiently stable state to move to silicon.

In fact the Akida 2/TeNNs patents were filed 2 years ago, and the EAPs would have been informed about Akida 2 then, so they would have been reluctant to commit to silicon as the silicon was still being developed. Indeed, the TeNNs concept would have been still in its early stages of development.

Mercedes would be anxious to implement the technology which was 5 to 10 times more efficient than alternatives in keyword spotting (KWS), among other things, but they would not want to serve up "yesterday's" day-old bread rolls when they know there's a fresh bun in the oven.

Similarly, we have recently "discovered" that Valeo's Scala 3 does not appear to include Akida SoC, but it comes with software to process the lidar signals.

Akida 2 was ready for tape-out a little while ago, i.e., the engineering samples, but it would not be ready for integration with some other processor (CPU/GPU) for some time, certainly not in time for the 2025 MB release ... and who the heck is doing the engineering samples/production run?!

This blurb from MB CES 2024 presentation confirms that they are using "Hey Mercedes" in software:

Mercedes-Benz heralds a new era for the user interface with human-like virtual assistant powered by generative AI (mbusa.com)

At CES 2024, Mercedes-Benz is showcasing a raft of developments that define its vision for the hyper-personalized user experience of the future – in-car and beyond. Headlining those is the new integrated MBUX Virtual Assistant. It uses advanced software and generative AI to create an even more natural and intuitive relationship with the car, with proactive support that makes the customer's life more convenient. This game-changing development takes the 'Hey Mercedes' voice assistant into a whole new visual dimension with Unity's high-resolution game-engine graphics. Running on the Mercedes-Benz Operating System (MB.OS) developed in-house, its rollout starts with the vehicles on the forthcoming MMA platform (Mercedes-Benz Modular Architecture). The Concept CLA Class, which celebrates its North American premiere at CES 2024, is based on this platform and likewise provides a preview of MB.OS.

The MBUX Virtual Assistant is a further development of the system first showcased in the VISION EQXX. It uses generative AI and proactive intelligence to make life as easy, convenient and comfortable as possible. For instance, it can offer helpful suggestions based on learned behavior and situational context.

So they are using Unity's game-engine graphics, but a quick glance did not find any Unity KWS/NLP patents.

One possible corollary of this is that, when the EQXX Akida reveal was made a couple of years ago, it was about the use of Akida simulation software and not the Akida 1 SoC, but I'd have to go back to check this.

In any event, it seems pretty clear that there is a distinct possibility that the Mercedes CLE MBUX is using Akida simulation software until the design is sufficiently mature to produce the silicon.
 
  • Like
  • Fire
  • Love
Reactions: 76 users
Question for the more technically intelligent on BRN and mobile phones:
How do you see Akida being used in mobile phones, apart from a possible integration with the Prophesee camera?

Are there any opportunities with cyber security, or any other way to integrate Akida, do you think?
Possibly LLMs / SLMs, maybe, IMO.

Processed on device, reducing the time to do so.
 
  • Like
  • Love
Reactions: 5 users

Kachoo

Regular
Some time ago I responded to Magnus Ostberg (Mercedes Software guru) querying whether the water-cooled processor in the CLA concept car required the cooling because it was running NN software.

.................-------.....

In any event, it seems pretty clear that there is a distinct possibility that the Mercedes CLE MBUX is using Akida simulation software until the design is sufficiently mature to produce the silicon.
Hi Dio,

I'm aware that MB played with both chips, the Akida 1000 and, as recently as last October, the Akida 1500. It could be for various trials, so I understand your software point.

We also know that there has been a tonne of talk about the software in the last few years. We have also been told that many put development on Akida 1000 on hold for 2.0, which is much superior in performance and meets what the target audience wants.

This would also line up with Chris Stevens' comments in sales that the hardware product was not ready, hence his departure.

So is it easy enough to implement the Akida 2.0 hardware once it reaches a stage of readiness?

Clearly MB has not abandoned BRN, as we are trusted; the only thing is that the relationship has not been clearly defined, as it's fluid.

So if we have Valeo and MB using software, it should generate revenue to a degree, but not as elevated as hardware sales.

So the big question is: did Sean say we are not putting 2.0 out because they do not want to compete with a customer? Whoever this customer is, they would need to be pretty secure to wait.

As for the 1000 and 1500, they really should put more into production as they have demand, or will it all now have to be 2.0, and will the ones using Akida ES such as Ant61, VVDN, Unigen and others have to purchase a 2.0 variant for their products?
 
  • Like
  • Fire
  • Thinking
Reactions: 14 users

KMuzza

Mad Scientist
Low-Power Image Classification With the BrainChip Akida Edge AI Enablement Platform


Hi Dio,

I'm aware that MB played with both chips Akida 1000 and as recently as last October Akida 1500. It could be for various trials so I understand your software point.

.................-------.....

So if we have valeo and MB using software it should generate revenue to degree but not as elevated as hardware sales.

So the big question is did Sean say we are not putting 2.0 out as they do not want to compete with a customer who ever this customer is would need to be pretty secure to wait.

As for the 1000 and 1500 they really should put more production in as they have demand or will it all now have to be 2.0 and the ones using Akida ESaa Ant61 VVDN Unigen and others have to purchase a 2.0 variant for the products?
Hi Sirod69/Kachoo,

The Edge Impulse/BrainChip on-demand webinar is really worth a watch again, and it's great to see Edge Impulse so out there
with BrainChip Akida 1 and Akida 2.
1717215592920.png


AKIDA BALLISTA UBQTS.
 
  • Like
  • Fire
  • Love
Reactions: 36 users

Diogenese

Top 20
Question for the more technically intelligent on BRN and mobile phones:
How do you see Akida being used in mobile phones, apart from a possible integration with the Prophesee camera?

Are there any opportunities with cyber security, or any other way to integrate Akida, do you think?
Ever since the days of DUTH, I've been hanging out for a USB cybersecurity dongle so every PC/notebook could be protected, so I would think a mobile phone version would not be too much of a stretch.
Hi Dio,

I'm aware that MB played with both chips, the Akida 1000 and, as recently as last October, the Akida 1500. It could be for various trials, so I understand your software point.

.................-------.....

As for the 1000 and 1500, they really should put more into production as they have demand, or will it all now have to be 2.0, and will the ones using Akida ES such as Ant61, VVDN, Unigen and others have to purchase a 2.0 variant for their products?
Hi Kachoo,

I don't know so much about all the talk of software as a commercial product.

When I asked Magnus the naive question about NN software, I was expecting him to say "No. We are using an NN SoC."

Certainly it's been a hard sell for the sales group, especially when we went IP only. There would have been no commissions for the chip sales people.

Implementing Akida 2 in silicon would be no more difficult than Akida 1. In fact, having the Akida 1 template would greatly simplify the Akida 2 tape-out. And, of course, we have Anil.

I'm quietly confident both Valeo and Mercedes are using Akida software, but I don't know this to be a fact.

By the way, Ford also uses software NNs.

As far as dropping the Akida 2 silicon goes (if that is what has happened), I'm guessing that would be at the behest of a Tier 1 chip maker. I doubt that it would be worth our while even for a big customer like Mercedes or Valeo, or Prophesee.

We do have engineering (test) runs of 1000 and 1500, but not commercial production.

Edge box is "Sold out" on the BRN web page, but whether this is 100 units or 1000 units, we do not know.

I wonder if Sean spoke to Rosie Lee in Malaysia?
View attachment 64149 View attachment 64150 View attachment 64152 View attachment 64153
 
  • Like
  • Fire
  • Love
Reactions: 33 users
Diogenese, I wrote to Rob Telson prior to his departure asking him about cyber security for mobile phones, and he said he couldn't say too much because of the NDA, but to watch this space with Quantum Ventures. I am not technically able to clarify what Quantum Ventures have recently announced; however, it seems they are more focused on development for defence and national security. What are your thoughts on this, please? Could their recent development be turned into the dongle for PCs/iPads and for iPhone integration?
MY GUESS IS YES 🙌
 
Last edited:
  • Like
  • Thinking
  • Fire
Reactions: 17 users

KMuzza

Mad Scientist
  • Like
  • Love
Reactions: 18 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Sometimes one has to JUST WONDER.

View attachment 64171

and this with CSIRO Aus-


Here's a "recentish" paper on SNNs and X-ray diagnosis. Traditional X-ray diagnosis systems are computationally complex and thus require high power. Neuromorphic approaches can achieve similar, if not superior, results with much lower energy consumption.

Screenshot 2024-06-01 at 10.13.26 pm.png


Screenshot 2024-06-01 at 10.15.42 pm.png





 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 33 users
Appears our friends at TCS Research Lab were presenting their work with Akida at the recent Morpheus event ;)



Chetan Kadway
ML Researcher at TCS Research Labs | Computer Vision | SpaceTech | Edge AI | Neuromorphic Computing | Azure ML | ML-Ops | Streamlit Applications
2w Edited

Last week, I had the incredible opportunity to visit the Netherlands (with my colleague Sounak Dey) on a business trip that turned out to be much more than I expected.

The primary purpose of the visit was to present our research work (as invited speakers) at the Morpheus Edge AI & Neuromorphic Computing workshop (https://lnkd.in/g-z-dAxJ) organised by Laurent Hili at ESA/ESTEC (European Space Research & Technology Centre). It was a well-rounded workshop where we got to meet SpaceTech stakeholders that are innovating towards embedding intelligence onboard satellites (startups, SoC manufacturers, European university researchers, ESA project leaders, etc.). Excellent work by the workshop organiser and the presenting participants.

Unexpectedly, I also met Sir Ananth Krishnan (Tata Consultancy Services Ex-CTO) on his post-retirement holiday. And the first thing that came to my mind was his retirement address at our IIT-KGP Research Park office: he motivated us to make products & services that are 1] Usable (should pass the Grandma test) 2] Trustworthy/Reliable 3] Frugal (space & time resource efficient, think of bits & bytes). His advice still guides us in our research work.

The research work we presented was done in collaboration with our friends at BrainChip (Gilles Bézard and Alf Kuchenbuch). A brief introduction to our work that got deployed on the BrainChip AKIDA neuromorphic processor:

Currently, there is a delay of many hours or even days in drawing actionable insights from satellite imagery (due to the mismatch between the volume of data acquired and the limited comms bandwidth). We observed that end-users need either RAW images or analytics-ready meta-data as soon as possible. Therefore, embedding intelligence (Edge AI) onboard satellites can result in quicker business decision making across business verticals that rely on geo-spatial data.

To address this, guided by the foresight of Dr. Arpan Pal, we built a bundle of tech capabilities that helps send RAW data & analytics-ready meta-data to the ground station as soon as possible. These tech capabilities include:

1) Cloud Cover Detection Model (high-accuracy, low-latency, low-power).
2) DL based Lossless Compression (around 45% compression ratio).
3) RL based Neural Architecture Search Algorithm (quickly search data+task+hardware specific optimal DL models).

We also had a chance to visit the TCS Pace Port in Amsterdam, and we hope to showcase our research prototype there soon. Looking forward to more future collaborations with Edge AI/neuromorphic hardware accelerator designers & space-grade SoC manufacturers.

Would like to thank Tata Consultancy Services - Research for such great opportunity to build future tech for future demand. Would also like to thank our Edge & Neuromorphic team: Arijit Mukherjee, Sounak Dey, Swarnava Dey, Abhishek Roy Choudhury, Shalini Mukhopadhyay, Syed Mujibul Islam, Sayan Kahali and our academic research advisor Dr. Manan Suri. #spacetech #satellite #edgecomputing #orbital #AI #neuromorphic #SNN #AKIDA #TCS
 
  • Like
  • Fire
  • Love
Reactions: 58 users

Frangipani

Regular
The University of Washington in Seattle, interesting…

UW’s Department of Electrical & Computer Engineering had already spread the word about a summer internship opportunity at BrainChip last May. In fact, it was one of their graduating Master’s students, who was a BrainChip intern himself at the time (April 2023 - July 2023), promoting said opportunity.

I wonder what exactly Rishabh Gupta is up to these days, hiding in stealth mode ever since graduating from UW and simultaneously wrapping up his internship at BrainChip. What he has chosen to reveal on LinkedIn is that he now resides in San Jose, CA and is “Building next-gen infrastructure and aligned services optimized for multi-modal Generative AI”, and that his start-up intends to build said infrastructure and services “to democratise AI”… 🤔 He certainly has a very impressive CV so far, as well as first-hand experience with Akida 2.0, given the time frame of his internship and him mentioning vision transformers.




View attachment 63099

View attachment 63101

View attachment 63102

View attachment 63103

View attachment 63104


Meanwhile, yet another university is encouraging their students to apply for a summer internship at BrainChip:



View attachment 63100


I guess it is just a question of time before USC will be added to the BrainChip University AI Accelerator Program, although Nandan Nayampally is sadly no longer with our company…

Maybe of general interest:

I have just heard an interesting R&D report on the radio about AI-improved noise cancelling. It was noted that other big companies are certainly still developing this exciting technology too.
The idea is to let selected sounds through the cancelling. Until now, there has been no system that allows the user to individually set, define or train which sounds the AI should filter out. The headphones have to learn what the person wants to hear, without the cloud, obviously. To do this, the team uses the time differences between the left and right earpieces relative to the sound source. The team solves it as follows: if the person wearing the noise-cancelling headphones points their face in the direction of what they want to hear despite the suppression, the AI or electronics learns within around three seconds that this source is being targeted, because it recognises the arrival-time differences between left and right, and then lets those sounds through.
So far this is done with an app on the smartphone. He also says that the team is working on earbud-sized (small) headphones, which they want to introduce in about 6 to 8 months.
Up to now this is being done on the phone, he said, but I can very well imagine that the neural network will be placed directly in the headphones, drastically reducing latency even further.
I'm on the road and my research options with the phone are limited, but it's about Shyam Gollakota's team at the University of Washington.
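For the technically minded, the "runtime difference" trick can be pictured with a simple cross-correlation between the left and right microphone signals: when the wearer faces the source, the estimated inter-aural delay is close to zero. This is only my own toy sketch of the principle (the signal, sample rate and delay are made up), not the UW team's actual enrollment code:

```python
import numpy as np

def estimate_itd(left: np.ndarray, right: np.ndarray, sample_rate: int) -> float:
    """Delay (seconds) of the right channel relative to the left, via cross-correlation.
    Positive => the sound reached the left ear first; near zero => roughly straight ahead."""
    corr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(corr)) - (len(left) - 1)  # samples by which the right channel trails
    return lag / sample_rate

# Toy example: wideband noise reaching the right ear 3 samples (~0.19 ms) later.
fs = 16_000
rng = np.random.default_rng(0)
src = rng.standard_normal(fs)
left, right = src, np.roll(src, 3)
print(f"estimated ITD: {estimate_itd(left, right, fs) * 1e3:.2f} ms")  # ~ +0.19 ms
```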

KEYWORDS
Augmented hearing, auditory perception, spatial computing
PDF:

We build an end-to-end hardware system that integrates a noise-canceling headset (Sony WH-1000XM4), a pair of binaural microphones (Sonic Presence SP15C) with our real-time target speech hearing network running on an embedded IoT CPU (Orange Pi 5B).

We deploy our neural network on the embedded device by converting the PyTorch model into an ONNX model using a nightly PyTorch version (2.1.0.dev20230713+cu118), and we use the Python package onnxsim to simplify the resulting ONNX model.
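For anyone curious what that deployment step looks like in practice, here is a minimal sketch of the PyTorch → ONNX → onnxsim flow the paper describes. The tiny network, input shape and file names below are placeholders of mine, not the authors' actual model:

```python
import torch
import torch.nn as nn
import onnx
from onnxsim import simplify

class TinyPlaceholderNet(nn.Module):
    """Stand-in for the real target-speech-hearing network (hypothetical)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(2, 2, kernel_size=33, padding=16)  # binaural in, binaural out

    def forward(self, x):
        return self.conv(x)

model = TinyPlaceholderNet().eval()
dummy = torch.randn(1, 2, 16_000)  # assumed input: (batch, channels=left/right, samples)

# Export the PyTorch model to ONNX, then simplify the graph with onnxsim
# (constant folding, redundant-op removal) before running it on the embedded CPU.
torch.onnx.export(model, dummy, "tsh.onnx",
                  input_names=["audio"], output_names=["enhanced"], opset_version=17)
simplified, ok = simplify(onnx.load("tsh.onnx"))
assert ok, "onnxsim could not validate the simplified graph"
onnx.save(simplified, "tsh_simplified.onnx")
```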

IMO there is a good chance that Akida will be utilised in future versions of that UW prototype @cosors referred to.



“A University of Washington team has developed an artificial intelligence system that lets a user wearing headphones look at a person speaking for three to five seconds to “enroll” them. The system, called “Target Speech Hearing,” then cancels all other sounds in the environment and plays just the enrolled speaker’s voice in real time even as the listener moves around in noisy places and no longer faces the speaker.

The team presented its findings May 14 in Honolulu at the ACM CHI Conference on Human Factors in Computing Systems. The code for the proof-of-concept device is available for others to build on. The system is not commercially available.”



While the paper shared by @cosors (https://homes.cs.washington.edu/~gshyam/Papers/tsh.pdf) indicates that the on-device processing of the end-to-end hardware system UW professor Shyam Gollakota and his team built as a proof-of-concept device is not based on neuromorphic technology, the paper’s future outlook (see below), combined with the fact that UW’s Paul G. Allen School of Computer Science & Engineering* has been encouraging students to apply for the BrainChip Summer Internship Program for the second year in a row, is reason enough for me to speculate those UW researchers could well be playing around with Akida at some point to minimise their prototype’s power consumption and latency.


*(named after the late Paul Gardner Allen who co-founded Microsoft in 1975 with his childhood - and lifelong - friend Bill Gates and donated millions of dollars to UW over the years)

0B2CBBA7-D38C-4290-8A77-91E467D502FE.jpeg

9C6D904D-0DF9-458B-8292-8F085EDA293B.jpeg
The paper was co-authored by Shyam Gollakota (https://homes.cs.washington.edu/~gshyam) and three of his UW PhD students as well as by AssemblyAI’s Director of Research, Takuya Yoshioka (ex-Microsoft).

AssemblyAI (www.AssemblyAI.com) sounds like an interesting company to keep an eye on:

08D996C7-6DD7-4D0B-B6AB-21C0FDB35D05.jpeg

EA96C830-2E21-4E22-BFFE-BD1E936E8287.jpeg


1BF46C71-D4B4-491B-B67D-1EB9D3C6514A.jpeg


9C6D904D-0DF9-458B-8292-8F085EDA293B.jpeg

AA4A39F8-FF05-4001-9922-D8E8DA8F3984.jpeg

9E347DBC-4A2E-4A26-8159-8048E113E018.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 30 users

itsol4605

Regular
To say it again loud and clear:
A study says nothing at all!
A study is by no means a product with sales in the millions!!
 
  • Like
  • Haha
  • Sad
Reactions: 5 users