Possibly LLM / SLM, maybe, imo.
Question for the more technically intelligent on BRN and mobile phones:
How do you see Akida being used in mobile phones, apart from a possible integration with the Prophesee camera?
Are there any opportunities with cyber security, or any other way to integrate Akida, do you think?
Hi Dio,
Some time ago I responded to Magnus Ostberg (Mercedes software guru), querying whether the water-cooled processor in the CLA concept car required the cooling because it was running NN software.
His enigmatic response was:
"STAY TUNED!"
His latest LinkedIn posts do nothing to dispel my interest in the use of Akida simulation software in MB.OS.
https://www.linkedin.com/feed/update/urn:li:activity:7201230087612940288/
"we've enhanced the 'Hey Mercedes' voice assistant so it can help with a wider range of everyday functions."
https://www.linkedin.com/feed/update/urn:li:activity:7202202756806230016/
"the design of MB.OS demands a different approach because we are decoupling the hardware and software innovation cycles and integration steps".
This could be interpreted to mean that the software is developing faster than the silicon, and the software is updatable, so it would make sense to use software simulations of evolving tech (Akida 2/TeNNs) until the IP was in a sufficiently stable state to move to silicon.
In fact, the Akida 2/TeNNs patents were filed 2 years ago, and the EAPs (early access partners) would have been informed about Akida 2 then, so they would have been reluctant to commit to silicon while the silicon was still being developed. Indeed, the TeNNs concept would still have been in its early stages of development.
Mercedes would be anxious to implement technology which was 5 to 10 times more efficient than alternatives in keyword spotting (KWS), among other things, but they would not want to serve up yesterday's day-old bread rolls when they know there's a fresh bun in the oven.
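For anyone wondering what "Akida simulation software" looks like in practice: BrainChip's MetaTF tooling converts a quantized Keras model into an Akida model that, absent actual silicon, executes as a software simulation on an ordinary CPU. A minimal sketch below, assuming the older cnn2snn 2.x API; the toy model is purely illustrative and has nothing to do with MB's actual stack:

```python
import numpy as np
from tensorflow import keras
from cnn2snn import quantize, convert  # BrainChip MetaTF conversion package

# Placeholder Keras model standing in for a real KWS/NLP network.
model = keras.Sequential([
    keras.layers.Conv2D(8, 3, input_shape=(32, 32, 1)),
    keras.layers.ReLU(),
    keras.layers.Flatten(),
    keras.layers.Dense(10),
])

# Quantize weights/activations to Akida-compatible bit widths ...
model_q = quantize(model, weight_quantization=4, activ_quantization=4,
                   input_weight_quantization=8)

# ... and convert. With no Akida device attached, the converted model
# runs entirely in MetaTF's software simulator.
akida_model = convert(model_q)
out = akida_model.predict(np.zeros((1, 32, 32, 1), dtype=np.uint8))
```

The point of such a flow is exactly the one speculated about above: the model (and the simulator itself) can be updated over the air as the IP evolves, and the same network can later be mapped onto Akida silicon once the design is frozen.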
Similarly, we have recently "discovered" that Valeo's Scala 3 does not appear to include an Akida SoC, but it comes with software to process the lidar signals.
Akida 2 was ready for tape-out a little while ago, i.e., the engineering samples, but it would not be ready for integration with some other processor (CPU/GPU) for some time, certainly not in time for the 2025 MB release ... and who the heck is doing the engineering samples/production run?!
This blurb from MB's CES 2024 presentation confirms that they are using "Hey Mercedes" in software:
Mercedes-Benz heralds a new era for the user interface with human-like virtual assistant powered by generative AI (mbusa.com)
At CES 2024, Mercedes-Benz is showcasing a raft of developments that define its vision for the hyper-personalized user experience of the future, in-car and beyond. Headlining those is the new integrated MBUX Virtual Assistant. It uses advanced software and generative AI to create an even more natural and intuitive relationship with the car, with proactive support that makes the customer's life more convenient. This game-changing development takes the 'Hey Mercedes' voice assistant into a whole new visual dimension with Unity's high-resolution game-engine graphics. Running on the Mercedes-Benz Operating System (MB.OS) developed in-house, its rollout starts with the vehicles on the forthcoming MMA platform (Mercedes-Benz Modular Architecture). The Concept CLA Class, which celebrates its North American premiere at CES 2024, is based on this platform and likewise provides a preview of MB.OS.
The MBUX Virtual Assistant is a further development of the system first showcased in the VISION EQXX. It uses generative AI and proactive intelligence to make life as easy, convenient and comfortable as possible. For instance, it can offer helpful suggestions based on learned behavior and situational context.
So they are using Unity's game-engine graphics, but a quick glance did not find any Unity KWS/NLP patents.
One possible corollary of this is that, when the EQXX Akida reveal was made a couple of years ago, it was about the use of Akida simulation software and not the Akida 1 SoC, but I'd have to go back to check this.
In any event, it seems pretty clear that there is a distinct possibility that the Mercedes CLE MBUX is using Akida simulation software until the design is sufficiently mature to produce the silicon.
Ever since the days of DUTH, I've been hanging out for a USB cybersecurity dongle so every PC/notebook could be protected, so I would think a mobile phone version would not be too much of a stretch.
Hi Kachoo,
Hi Dio,
I'm aware that MB played with both chips, the Akida 1000 and, as recently as last October, the Akida 1500. It could be for various trials, so I understand your software point.
We also know that there has been a tonne of talk about the software in the last few years, and we have been told that many put development on Akida 1000 on hold for 2.0, which is much superior in performance and meets what the target audience wants.
So this would tie in with Chris Stevens' comments in sales about how the hardware product was not ready, hence his departure.
So is it easy enough to implement the Akida 2.0 hardware once it reaches a stage of readiness?
Clearly MB has not abandoned BRN, as we are trusted; the only thing is that the relationship has not been clearly defined, as it's fluid.
So if we have Valeo and MB using software, it should generate revenue to a degree, but not as elevated as hardware sales.
So the big question is: did Sean say we are not putting 2.0 out because they do not want to compete with a customer? Whoever this customer is, they would need to be pretty secure for BRN to wait.
As for the 1000 and 1500, they really should put more into production, as they have demand. Or will it all now have to be 2.0, and will the ones using Akida ES (Ant61, VVDN, Unigen and others) have to purchase a 2.0 variant for their products?
Top Neuromorphic Computing Stocks for 2025: Ranked by Pure-Play Focus
This guide covers the top neuromorphic computing stocks and companies for investors to watch, ranked by their pure-play focus. (buff.ly)
Sometimes one has to JUST WONDER.
And this with CSIRO (Australia):
CSIRO study identifies AI models to improve automated chest X-ray diagnoses
The research from CSIRO's Australian e-Health Research Centre demonstrates the potential for AI to better support clinicians. (www.csiro.au)
The University of Washington in Seattle, interesting…
UW's Department of Electrical & Computer Engineering had already spread the word about a summer internship opportunity at BrainChip last May. In fact, it was one of their graduating Master's students, himself a BrainChip intern at the time (April 2023 - July 2023), who promoted said opportunity.
I wonder what exactly Rishabh Gupta is up to these days, hiding in stealth mode ever since graduating from UW and simultaneously wrapping up his internship at BrainChip. What he has chosen to reveal on LinkedIn is that he now resides in San Jose, CA and is "Building next-gen infrastructure and aligned services optimized for multi-modal Generative AI", and that his start-up intends to build said infrastructure and services "to democratise AI"… He certainly has a very impressive CV so far, as well as first-hand experience with Akida 2.0, given the time frame of his internship and him mentioning vision transformers.
Brainchip Inc: Summer Internship Program 2023
"I am Rishabh Gupta, I am a 2nd-year ECE PMP Master's student and currently working at Brainchip." (advisingblog.ece.uw.edu)
Meanwhile, yet another university is encouraging their students to apply for a summer internship at BrainChip:
BrainChip Internship Program 2024 - USC Viterbi | Career Services
BrainChip Internship Opportunity! Apply Today! About BrainChip: BrainChip develops technology that brings commonsense to the processing of sensor data, allowing efficient use for AI inferencing, enabling one to do more with less. Accurately. Elegantly. Meaningfully. They call this Essential AI... (viterbicareers.usc.edu)
I guess it is just a question of time before USC will be added to the BrainChip University AI Accelerator Program, although Nandan Nayampally is sadly no longer with our company…
Maybe of general interest:
I have just heard an interesting R&D report on the radio about AI-improved noise cancelling. It was noted that other big companies are certainly still developing this exciting technology too.
The idea is to let selected sounds through the cancelling. Until now, there has been no system that allows the user to individually set, define, or have the system learn which sounds the AI should filter out. The headphones have to learn what the person wants to hear, without the cloud, obviously. To do this, the team uses the time differences between the noise source and the left and right earpieces. If the person wearing noise-cancelling headphones points their face in the direction of what they want to hear despite the suppression, the AI learns within around three seconds that this source is being targeted, because it recognises the left-to-right arrival-time differences, and lets those sounds through.
So far this runs with an app on the smartphone. He also said the team is working on earbud-style headphones, which they want to introduce in about six to eight months.
Up to now this is done on the phone, he said, but I can very well imagine the neural network eventually being placed directly in the headphones, reducing latency even further.
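For the technically curious, the left/right trick boils down to estimating the interaural time difference (ITD) by cross-correlating the two ear signals: a source the wearer is facing arrives at both ears almost simultaneously, so its ITD is near zero. A toy sketch of the idea below; this is my own illustration, not the UW team's code, and the tolerance value is an assumption:

```python
import numpy as np

def itd_seconds(left, right, fs):
    """Estimate the interaural time difference via cross-correlation;
    the sign tells you which ear hears the source first."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    return lag / fs

def is_on_axis(left, right, fs, tolerance_s=50e-6):
    """A source the wearer is facing reaches both ears at nearly the
    same time, so its ITD is close to zero."""
    return abs(itd_seconds(left, right, fs)) < tolerance_s

# Toy demo: a wideband source straight ahead vs. one off to the side.
fs = 16_000
rng = np.random.default_rng(0)
src = rng.standard_normal(fs)            # 1 s of noise-like sound
side = np.roll(src, 8)                   # ~0.5 ms extra delay at one ear
print(is_on_axis(src, src, fs))          # True  -> let this source through
print(is_on_axis(src, side, fs))         # False -> keep cancelling it
```

A real system would of course run this (or a learned equivalent) continuously on short frames and feed the decision into the enrollment stage, rather than on a whole one-second buffer.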
I'm on the road and my research options with the phone are limited, but it's about Shyam Gollakota's team at the University of Washington.
Keywords: augmented hearing, auditory perception, spatial computing
PDF:
______
Older status:
_______
New AI noise-canceling headphone technology lets wearers pick which sounds they hear
A team led by researchers at the University of Washington has developed deep-learning algorithms that let users pick which sounds filter through their headphones in real time. Either through voice... (www.washington.edu)
Shyam Gollakota - UW CSE
Shyam Gollakota is a professor at UW CSE focusing on mobile intelligence. (homes.cs.washington.edu)
___
We build an end-to-end hardware system that integrates a noise-canceling headset (Sony WH-1000XM4), a pair of binaural microphones (Sonic Presence SP15C) with our real-time target speech hearing network running on an embedded IoT CPU (Orange Pi 5B).
We deploy our neural network on the embedded device by converting the PyTorch model into an ONNX model using a nightly PyTorch version (2.1.0.dev20230713+cu118) and we use the python package onnxsim to simplify the resulting ONNX model.
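The export step they describe is a standard PyTorch-to-ONNX flow, roughly like the sketch below. The model here is a placeholder, since the actual target-speech-hearing network and its input shapes aren't given in the quoted snippet:

```python
import torch
import torch.nn as nn
import onnx
from onnxsim import simplify

# Stand-in for the real target-speech-hearing network.
model = nn.Sequential(
    nn.Conv1d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Conv1d(16, 2, 3, padding=1),
)
model.eval()

dummy = torch.randn(1, 2, 16000)  # e.g. 1 s of binaural audio at 16 kHz
torch.onnx.export(model, dummy, "tsh.onnx", opset_version=17,
                  input_names=["audio"], output_names=["enhanced"])

# onnxsim folds constants and removes redundant ops for the embedded target.
onnx_model = onnx.load("tsh.onnx")
simplified, ok = simplify(onnx_model)
assert ok, "simplified model failed the consistency check"
onnx.save(simplified, "tsh_simplified.onnx")
```

The simplified ONNX file is then what an embedded runtime (e.g. onnxruntime) loads on a device like the Orange Pi 5B for real-time inference.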
To say it again loud and clear:
A study says nothing at all!
A study is by no means a product with sales in the millions!!
Nice game or maybe study.
IMO there is a good chance that Akida will be utilised in future versions of that UW prototype @cosors referred to.
AI headphones let wearer listen to a single person in a crowd, by looking at them just once
A University of Washington team has developed an artificial intelligence system that lets someone wearing headphones look at a person speaking for three to five seconds to 'enroll' them. The... (www.washington.edu)
"A University of Washington team has developed an artificial intelligence system that lets a user wearing headphones look at a person speaking for three to five seconds to 'enroll' them. The system, called 'Target Speech Hearing,' then cancels all other sounds in the environment and plays just the enrolled speaker's voice in real time even as the listener moves around in noisy places and no longer faces the speaker.
The team presented its findings May 14 in Honolulu at the ACM CHI Conference on Human Factors in Computing Systems. The code for the proof-of-concept device is available for others to build on. The system is not commercially available."
While the paper shared by @cosors (https://homes.cs.washington.edu/~gshyam/Papers/tsh.pdf) indicates that the on-device processing of the end-to-end hardware system UW professor Shyam Gollakota and his team built as a proof-of-concept device is not based on neuromorphic technology, the paper's future outlook (see below), combined with the fact that UW's Paul G. Allen School of Computer Science & Engineering* has been encouraging students to apply for the BrainChip Summer Internship Program for the second year in a row, is reason enough for me to speculate that those UW researchers could well be playing around with Akida at some point to minimise their prototype's power consumption and latency.
*(named after the late Paul Gardner Allen who co-founded Microsoft in 1975 with his childhood - and lifelong - friend Bill Gates and donated millions of dollars to UW over the years)
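To make the "enroll, then extract" idea concrete: an enrollment encoder compresses the 3-5 second look-at-the-speaker clip into a speaker embedding, and a separation network conditioned on that embedding masks out everything but that voice. A deliberately tiny PyTorch sketch of that shape follows; every layer size and the conditioning scheme are invented stand-ins, not the architecture from the UW paper:

```python
import torch
import torch.nn as nn

class ToyTargetSpeechHearing(nn.Module):
    """Toy sketch: embed the enrollment clip, then use the embedding to
    condition a mask over the binaural mixture. All sizes are made up."""
    def __init__(self, dim=64):
        super().__init__()
        self.enroll = nn.Sequential(     # enrollment clip -> speaker embedding
            nn.Conv1d(1, dim, 16, stride=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.encode = nn.Conv1d(2, dim, 16, stride=8)   # binaural mixture in
        self.mask = nn.Conv1d(dim, dim, 3, padding=1)
        self.decode = nn.ConvTranspose1d(dim, 2, 16, stride=8)

    def forward(self, mixture, enrollment):
        emb = self.enroll(enrollment)             # (B, dim)
        feats = self.encode(mixture)              # (B, dim, T')
        cond = feats * emb.unsqueeze(-1)          # FiLM-style conditioning
        mask = torch.sigmoid(self.mask(cond))     # keep target, drop the rest
        return self.decode(feats * mask)

net = ToyTargetSpeechHearing()
mixture = torch.randn(1, 2, 16000)     # 1 s binaural mixture at 16 kHz
enrollment = torch.randn(1, 1, 48000)  # 3 s enrollment clip
print(net(mixture, enrollment).shape)  # torch.Size([1, 2, 16000])
```

Porting something of this shape to a neuromorphic target like Akida would mainly be a question of swapping the layers for event-based equivalents, which is exactly the kind of power/latency experiment one could imagine those interns playing with.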
The paper was co-authored by Shyam Gollakota (https://homes.cs.washington.edu/~gshyam) and three of his UW PhD students, as well as by AssemblyAI's Director of Research, Takuya Yoshioka (ex-Microsoft).
AssemblyAI (www.AssemblyAI.com) sounds like an interesting company to keep an eye on.
Your mum's calling you in for breakfast, itso; run along home now.
Nice game or maybe study.
Where is the revenue for Brainchip??
A study has no value - Sorry!!