
Nvidia's newest chips are designed to run AI at home as competition from Intel, AMD looms
Nvidia is playing up its strength in consumer GPUs for so-called "local" AI that can run on a PC or laptop from home or an office.
I'm thinking the same.

Unfortunately, Renesas linked with Plumerai on the RA MCU and M85. However, drawing a long bow, as I haven't really read up on Plumerai, the Renesas tape-out was for a third party as I read it, and then there was the subsequent BRN update about Akida being able to work with the M85 part of this... hmmm, I wish.
Renesas Launches New RA8D1 MCU Group
TOKYO, Japan ― Renesas Electronics Corporation (TSE:6723), a premier supplier of advanced semiconductor solutions, today introduced the RA8D1 microcontroller (MCU) group.
Renesas' Arm® Cortex®-M85 Processor-Based Devices Offer Ultra High Performance, Graphics Capabilities and Leading-Edge Security for Building Automation, Smart Home, Consumer and Medical Applications
(electronicsera.in)
The below is disappointing, Bravo. I was expecting the full-blown version in the 2024 model. I'm not sure if the 2024 model will have the new MBUX etc. or the 5 microcontrollers. Might have to wait another year.

AI Assistants, In-Car Gaming And Tons Of Range: Coming To Your Next Mercedes-Benz
Mercedes has some bold, experimental ideas for how your driving experience could feel in the next few years. Will customers bite?
Jan 8, 2024 at 9:00pm ET
By: Patrick George
The fight for automotive supremacy in the future won't come down to horsepower or mechanical specs. It will probably come down to software; at least, that's what the car companies tell us. That idea may seem radical to anyone driving a car from the last decade who's just happy that their Apple CarPlay or Android Auto setup works so well, but all of the automakers have much bigger, grander plans for what we experience inside their cars than we currently do.
Case in point: Mercedes-Benz's big announcements at CES 2024 aren't about performance or even electric range this time, but the future software-driven features coming to its cars very soon.
Gallery: Mercedes-Benz CES 2024 Tech Announcements (18 photos)
The giant Las Vegas tech show served as a preview for a bevy of Mercedes technologies coming to both its EVs and internal combustion-powered cars starting next year, including its smartest voice-controlled virtual assistant ever thanks to AI; more expansions into in-car gaming; a taste of its new all-encompassing suite, MB.OS; the North American debut of its future electric sedan and Tesla Model 3 competitor; and a unique system co-developed with musician will.i.am that "allows music to react to how the car is being driven," in Mercedes' words. (More on that in its own story right here.) Let's dive in.
Get Fully Charged
In-car software and AI are the big story at CES this year
After years of EV and concept car debuts, automakers like Mercedes, Volkswagen and BMW are focused on upping their software game at the tech mega-show in 2024. All car companies have huge plans for revenue through software features, including via subscriptions.
MB.OS' Party Trick Is Its Virtual Assistant
If you've been fortunate enough to experience Mercedes' MBUX software system across all three parts of the Hyperscreen on certain EQ electric cars, you know how powerful and graphically impressive it is. (At least, that's my take.) Equally impressive are the voice controls, where you can say "Hey, Mercedes" and get a swift response to most things you need without having to fish through the Hyperscreen's menus when you're on the road.
To that, Mercedes says: "You ain't seen nothing yet." The MB.OS system coming soon to the company's all-new MMA platform is a purpose-built chip-to-cloud operating system that brings a number of diverse functions under one roof: in-car content, charging, maps, car setting controls and automated driving.
A big part of that will be the MBUX Virtual Assistant, which Mercedes calls "the most human-like interface with a Mercedes-Benz yet." Powered by ChatGPT—something Volkswagen announced today at CES as well—Mercedes' Chief Technology Officer Markus Schäfer said the system can do more than just follow your voice commands. It can have a "dialogue" with users, and even pick up on the moods they're in to respond with "empathy."
The AI has four different personality traits—Natural, Predictive, Personal and Empathetic—that can be adjusted to your personal tastes or situation behind the wheel. The AI is projected as a glowing three-pointed star logo with various color changes and animations to illustrate its state of being. It could glow a certain way when providing a warning, for example, and a different way when it's listening to you.
"In some cases, hopefully, the assistant predicts already what you want, depending on your moods, the daytime, the location you are in and the circumstances you are in," Schäfer said. But after an easy conversation, Schäfer said the assistant "can address whatever you want from your car, whether you want entertainment, you want information and so on."
The assistant has four different "emotions" and can respond to your requests based on your own mood at the time. "You can have a constant interaction with your car," Schäfer told InsideEVs at a press roundtable earlier today.
In other words, this system should do a lot more than just tell you where the nearest charging station is when you say "Hey, Mercedes." It may also commiserate with you about how annoyed you are to be stuck in traffic.
Surround Navigation Turns Your Driving Experience Into A Video Game
These days, it's not uncommon for high-tech EVs to have displays with visualizations of the cars and objects in the surrounding area; Tesla's center screen is arguably the best example of this. But Mercedes is taking this to a different level.
Using the Unity game engine, the next-gen cars will offer MBUX Surround Navigation: a 3D visualization that allows you to "see what the car sees," as Mercedes puts it. Though your eyes are good at what they do, this system gives more of a bird's-eye view of other cars, trucks, cyclists or even pedestrians—superimposing potential hazards onto the route guidance to help you navigate a tough urban environment beyond just what you yourself can see.
"So for example, navigating through downtown traffic, finding a parking spot, knowing where you're going to make your turn"—all are projected via Surround Navigation, Mercedes' Chief Software Officer Magnus Östberg said during the press conference. The 3D game engine projects "all the sensor data that we're taking from the cameras, the radar, the LIDAR, that really gives the driver situational awareness. It's not only beautiful graphics but also a safe experience," he said.
Though we weren't offered the chance to experience this for ourselves at CES, it does seem like an impressive evolution of the navigation experience—and a good visualization of much of the data the car normally keeps to itself.
More In-Car Gaming Soon
Mercedes also knows you need something to do while you're charging, and at least one answer to that is the video games I had when I was a kid. Bubble Bobble is back, kids. Get excited.
Mercedes announced a wide array of partnerships bringing new music, movies, video games and other content to those giant screens soon. Those include Audible and Amazon Music for podcasts and audiobooks, as well as Antstream Arcade for retro cloud-based gaming. It's not far off what you can already do with your smartphone, but the native access to these services should provide a nice experience on some much bigger, higher-resolution screens when you have downtime in the car.
The Next CLA Makes Its North American Debut
Though some early iterations of these features are available on the new 2024 Mercedes E-Class, which runs a "precursor" to MB.OS, the real show starts when the car above hits the road next year. That's the Concept CLA Class, which first debuted at IAA Munich in 2023 and is making its first-ever appearance in North America for CES.
The final production version of the next CLA isn't expected to veer too far from the concept here, and when it arrives, it will not only be a Model 3 fighter—something badly needed in an SUV-focused world—but it will also debut MB.OS for the first time. That car will also be the first of at least four EVs on the new MMA platform, which, together with MB.OS, represents a much more cohesive transformation of Mercedes than we've seen before. By putting all these technologies together on one software platform, the automaker says it can offer a level of features and personalization it's never been able to offer before.
And the next CLA is expected to be a mighty EV, not just a computer on wheels. With an 800-volt architecture, an estimated 466 miles of range on Europe's WLTP cycle and the ability to add an estimated 248 miles of range in 15 minutes, this should be a winner if it even comes close to those claims. Furthermore, with an estimated 5.2 miles per kilowatt-hour, it should be one of the most efficient EVs on the road.
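Back-of-envelope, and assuming the range, efficiency and fast-charging figures are all quoted on the same WLTP basis (which Mercedes hasn't confirmed), those claims imply roughly a 90 kWh usable pack and an average DC charging rate near 190 kW. A minimal sketch:

```python
# Back-of-envelope check on the quoted CLA figures. Assumption: the range,
# efficiency and charging numbers are all on the same WLTP basis.

range_miles = 466            # claimed WLTP range
efficiency_mi_per_kwh = 5.2  # claimed efficiency
added_miles = 248            # claimed range added in 15 minutes
charge_time_h = 0.25

usable_pack_kwh = range_miles / efficiency_mi_per_kwh    # ~89.6 kWh
energy_added_kwh = added_miles / efficiency_mi_per_kwh   # ~47.7 kWh
avg_charge_power_kw = energy_added_kwh / charge_time_h   # ~191 kW

print(f"implied usable pack:  {usable_pack_kwh:.1f} kWh")
print(f"implied avg DC power: {avg_charge_power_kw:.0f} kW")
```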
We'll see this and more from Mercedes as CES goes on this week.
Contact the author: patrick.george@insideevs.com
AI Assistants, In-Car Gaming And Tons Of Range: Coming To Your Next Mercedes-Benz
The new MB.OS system, ChatGPT-based assistants and in-car "gaming hubs" are part of Mercedes' bold, experimental ideas for your driving experience. (insideevs.com)
Akida assists the M series.
Agree, but having a look at Plumerai, they seem to be working on BNNs (binarised neural networks), not SNNs.
1 + 1 = 0

Renesas to Demonstrate First AI Implementations on the Arm Cortex-M85 Processor Featuring Helium Technology at Embedded World
March 9, 2023 -- TOKYO, Japan ― Renesas Electronics Corporation (TSE:6723), a premier supplier of advanced semiconductor solutions, today announced that it will present the first live demonstrations of artificial intelligence (AI) and machine learning (ML) implementations on an MCU based on the Arm® Cortex®-M85 processor.
(www.design-reuse.com)
BrainChip integrates Akida with Arm Cortex-M85 Processor, Unlocking AI Capabilities for Edge Devices
Laguna Hills, Calif. – March 12, 2023 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, today announced it has validated that its Akida™ processor family integrates with the Arm® Cortex®-M85 processor, unlocking new levels of performance and efficiency for next-generation intelligent edge devices.
(www.design-reuse.com)
But Renesas are going to announce the new 22nm with BrainChip shortly, don't you think? They just didn't want to do it too close to their M85/Arm offering.
Hi Sirod, why 0? When will 1 + 1 come together?
Nice to see ZF listed under the client / partner section like BrainChip.

So, have we got an offshoot, rebranding, or part of the restructure process on NVISO?
BeEmotion.ai.
I thought the layout, images and graphics looked similar when I looked at their homepage initially, and digging deeper I found some of what I snipped below.
When I scrolled to the bottom of the page I can see the site is managed by Tropus here in Perth. I know exactly where their office is, as I drive past it for certain client meetings.
There is also a small Japanese link at the bottom which takes you to the Japanese site.
BeEmotion
beemotion.ai
HUMAN BEHAVIOUR AI
NEUROMORPHIC COMPUTING
BeEmotion empowers system integrators to build AI-driven human machine interfaces to transform our lives using neuromorphic computing. Understand people and their behavior in real-time without an internet connection to make autonomous devices safe, secure, and personalized for humans.
NEUROMORPHIC COMPUTING INTEROPERABILITY
ULTRA-LOW LATENCY WITH LOW POWER
ULTRA-LOW LATENCY (<1MS)
Total BeEmotion Neuro Model latency is similar for the GPU and the BrainChip Akida™ neuromorphic processor (300 MHz); CPU latency, however, is approximately 2.4x slower. All models on all platforms can achieve <10ms latency, and the best model can achieve 0.6ms, which is almost 2x faster than a GPU. On a clock frequency normalization basis, this represents an acceleration of 6x.
HIGH THROUGHPUT (>1000 FPS)
BeEmotion Neuro Model performance can be accelerated by an average of 3.67x using the BrainChip Akida™ neuromorphic processor at 300MHz over a single-core Arm Cortex-A57, as found in an NVIDIA Jetson Nano (4GB), running at close to 5x the clock frequency. On a clock frequency normalization basis, this represents an acceleration of 18.1x.
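For what it's worth, the clock-normalisation arithmetic above roughly checks out. A minimal sketch, assuming the Jetson Nano's Cortex-A57 runs at about 1.43 GHz (the page only says "close to 5x" Akida's 300 MHz clock):

```python
# Sketch of the clock-frequency normalisation quoted above. The Akida clock
# (300 MHz) and the raw speedups come from the page; the comparison clock is
# an assumption inferred from "close to 5x the clock frequency".

akida_mhz = 300.0
a57_mhz = 1430.0                      # assumption: Jetson Nano Cortex-A57

raw_throughput_gain = 3.67            # stated average over single-core A57
clock_ratio = a57_mhz / akida_mhz     # ~4.77x ("close to 5x")
normalised_gain = raw_throughput_gain * clock_ratio

print(f"clock ratio:     {clock_ratio:.2f}x")     # 4.77x
print(f"normalised gain: {normalised_gain:.1f}x")  # ~17.5x; the quoted 18.1x
# implies a clock ratio nearer 4.9x, i.e. an A57 running at about 1.48 GHz.
# The same arithmetic covers the latency figure: ~2x faster raw against a GPU
# clocked around 3x higher than Akida gives the quoted 6x per-clock advantage.
```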
SMALL STORAGE (<1MB)
BeEmotion Neuro Models can achieve a model storage size under 1MB, targeting ultra-low-power MCU systems where onboard flash memory is limited. Removing the need for external flash memory saves cost and power. The BrainChip Akida™ format uses 4-bit quantisation, where the ONNX format uses Float32.
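The storage claim is straightforward arithmetic: 4-bit weights are 8x smaller than Float32 ones. A minimal sketch, with an illustrative (not published) parameter count:

```python
# Why 4-bit quantisation gets weights under 1 MB where Float32 would not.
# The parameter count is an illustrative assumption, not a BeEmotion figure.

num_params = 1_500_000             # assumed ~1.5M-parameter edge model

float32_mb = num_params * 4 / 1e6  # 4 bytes per weight   -> 6.00 MB
int4_mb = num_params * 0.5 / 1e6   # 0.5 bytes per weight -> 0.75 MB

print(f"Float32 (ONNX): {float32_mb:.2f} MB")
print(f"4-bit (Akida):  {int4_mb:.2f} MB ({float32_mb / int4_mb:.0f}x smaller)")
```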
DESIGNED FOR EDGE COMPUTING
NO CLOUD REQUIRED
PRIVACY PRESERVING
By processing video and audio sensor data locally, it does not have to be sent over a network to remote servers for processing. This improves data security and privacy, as all processing can be performed disconnected from the central server, which is a more secure and private architecture that decreases security risks.
This is a great endorsement of Akida!
Good point.
... and BeEmotion will need to do a lot of brand recognition work, and it looks like we will be dragged along on their coattails. They need us to show the benefits of their product.
So nViso must have lost out on the trade mark front to the security software mob. Maybe they thought using lower case with the capital V was sufficient to distinguish them, but, apart from being a feeble point of difference, it doesn't help phonetically.