BRN Discussion Ongoing

SERA2g

Founding Member
Interesting article regarding Sony.

The Sony/Prophesee/Brainchip triangle may end up being more important to Brainchip's success than any of us have ever imagined.

This aged well with the apple/Sony news that has come out this week.

A quick google indicates apple sold 240M iPhones in 2021 and are on track to do the same in 2022.

That excludes iPads and other devices.

Let’s hope we end up in next gen iPhones.
240M units per year generating $1-$1.50 royalty per unit. Yes please.
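
For anyone wanting to sanity-check the napkin maths, here is a minimal sketch (the 240M unit volume and $1-$1.50 royalty are the speculative figures above, not confirmed numbers):

```python
# Hypothetical royalty revenue using the speculative figures above.
units_per_year = 240_000_000             # assumed iPhone volume (unconfirmed)
royalty_low, royalty_high = 1.00, 1.50   # assumed USD royalty per unit (unconfirmed)

low = units_per_year * royalty_low
high = units_per_year * royalty_high
print(f"Hypothetical annual royalties: ${low/1e6:,.0f}M to ${high/1e6:,.0f}M")
# -> Hypothetical annual royalties: $240M to $360M
```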
 
  • Like
  • Fire
  • Love
Reactions: 41 users

BaconLover

Founding Member
(quoting SERA2g above)


The Apple thread @SERA2g ... enjoy!
 
  • Like
  • Fire
  • Love
Reactions: 25 users

TECH

Regular
A Santa Claus rally?

I think it's already been well and truly factored in.

Looks like Santa is trying to shake our Christmas Tree one final time this year.

The decorations and lights may be moving, but long-term shareholders, like myself, are bolted on and really enjoy a good ride.

Bring it on......2023 will be the start of the bonfire, I'll meet you at the top of Everest in January 2025 :ROFLMAO::ROFLMAO:

Love Brainchip and our future direction, northwards 😍
 
  • Like
  • Love
  • Fire
Reactions: 47 users

Damo4

Regular
(quoting TECH above)



[Diamond Hands GIF]
 
  • Haha
  • Like
  • Love
Reactions: 15 users

ndefries

Regular
Found this very interesting article about Sony.

Sony is implementing edge AI sensors into vehicles that use far less power. Hmmm, I have some pretty speculative opinions about the IP being used.


Sony is working on new sensors for self-driving that it claims use 70% less electricity.

The sensors would help significantly extend the range of electric vehicles with autonomous capabilities.

According to a report in Nikkei Asia, they will be made by Sony Semiconductor Solutions and be paired with software developed by Japanese start-up Tier IV.

The companies aim to deliver Level 4 tech, as defined by the Society of Automotive Engineers, by 2030. This means that the car drives itself, with no requirement for human intervention.

To achieve Level 4, autonomous vehicles (AVs) need a wide array of hardware, including sensors and cameras, that transmit massive amounts of data, requiring vast amounts of power.

Sony is hoping to reduce electricity usage via edge computing, with as much data as possible processed through artificial intelligence-equipped sensors and software on the vehicles themselves, rather than being transmitted to external networks.


This approach would potentially make AVs safer, too, by cutting communication lags.

It’s also claimed that Sony will incorporate image recognition and radar technologies into the new sensor, which would assist self-driving in rain and other adverse weather conditions.

The company currently controls around 50% of the global market for image sensors, and also has strong experience in edge computing, having commercialized technology in chips for retailers and industrial equipment.

Tier IV, meanwhile, provides open-source self-driving software. Among its partners are Taiwan consumer electronics company Foxconn, which is planning to challenge car makers with an EV platform of its own, and Japanese company Yamaha, with whom it is developing autonomous transport solutions for factories.

In recent years, Sony has become a much more visible presence in the automotive arena. In 2020, the company displayed an electric sedan concept called the VISION-S at CES in Las Vegas and at the 2022 event it revealed an SUV version, the VISION-S 02.

Earlier this year, it announced it was teaming up with automaker Honda to form a new company to build electric vehicles and “provide services for mobility,” Sony Honda Mobility Inc.

The VISION-S featured a total of 40 sensors – 18 cameras, 18 radar/ultrasonic and four lidar – suggesting automation will have a key role to play in the new company.
 
  • Like
  • Fire
  • Love
Reactions: 87 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Better put this event in the calendar! In January 2023, at the CES, BMW will present its concept car called the "Digital Vision Vehicle". I think there's a very good chance BMW "will do a Merc" at this event. I say this because I know that BMW's “Neue Klasse” is set to "feature the next generation of Valeo’s ultrasonic sensors, the full set of surround view cameras, as well as a new multifunctional interior camera that will contribute to improved safety and create a new level of user experience.”

I can't wait!



Following on from the above, I saw this article today discussing BMW’s Facebook pages and the brand’s main Instagram account which both had their profile pictures changed on Tuesday. Here's some of what the article had to say.




[screenshot of the article]




 
  • Like
  • Fire
  • Love
Reactions: 66 users

TheFunkMachine

seeds have the potential to become trees.
How can we say they have not already been achieved?

Income was not given as a performance indicator by which to judge his and the Board's performance.

My opinion only DYOR
FF

AKIDA BALLISTA
I had the same thought. What he said was “talk about results”. I would argue they have had heaps of results, but the statement about explosive increases in sales, running at break-even, etc. is more related to sales figures.
 
  • Like
  • Love
Reactions: 13 users

Boab

I wish I could paint like Vincent
The attached image comes from the CES website.
[screenshot from the CES website]

This appears to be a new design for the BRN website? Nice work by the designers, I reckon.
[attached image: BrainCog.png]
 
  • Like
  • Fire
  • Love
Reactions: 27 users
He's stalking us TSEers on Twitter!

[Homer Simpson GIF]



 
  • Haha
  • Like
Reactions: 18 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
(quoting my earlier BMW post above)

Oh yeah, it's probably best to read all of the above stuff in context of all of the below stuff whilst also remembering not to forget about the Alexa-enabled assistant named Callisto which is embedded into NASA's Orion spacecraft for the Artemis I mission. 🥴



 
  • Like
  • Fire
  • Love
Reactions: 49 users

AusEire

Founding Member. It's ok to say No to Dot Joining
  • Like
  • Haha
Reactions: 4 users

Diogenese

Top 20
(quoting ndefries' Sony article above)

Talk about morphic resonance ... a couple of days ago we looked at a system (Qualcomm) which implemented an NN by splitting it over 2 chips. Now we see SSS doing the same thing using their 3D IC implementation of a DNN.

Sony Semiconductor Solutions (SSS):

US2022058411A1 SOLID STATE IMAGE CAPTURING SYSTEM, SOLID STATE IMAGE CAPTURING DEVICE, INFORMATION PROCESSING DEVICE, IMAGE PROCESSING METHOD, INFORMATION PROCESSING METHOD, AND PROGRAM



[0038] As illustrated in FIG. 1, a solid-state image capturing system 1 includes a solid-state image capturing device 100 and an information processing device 200 .

Ostensibly, the image capturing device and the information processing device are separated to provide security for personal information.

[0007] To solve the above-mentioned problem, a solid-state image capturing system according to the present disclosure includes a solid-state image capturing device; and an information processing device, wherein the solid-state image capturing device includes a first Deep-Neural-Network (DNN) processing unit that executes, on image data, a part of a DNN algorithm by a first DNN to generate a first result, and the information processing device includes a second DNN processing unit that executes, on the first result acquired from the solid-state image capturing device, remaining of the DNN algorithm by a second DNN to generate a second result.

[0062] The second DNN processing unit 230 executes, on the basis of a DNN model stored in the second storage 240 , for example, DNN on first result input from the solid-state image capturing device 100 so as to execute a recognition process of an object included in image data. Specifically, the second DNN processing unit 230 executes a second DNN on the first result having received from the solid-state image capturing device 100 so as to execute remaining part among algorithms constituting a DNN model, which is not executed in the first DNN.





[0010] FIG. 3 is a diagram illustrating one example of a laminate structure of a solid-state image capturing device according to the first embodiment of the present disclosure.




... and here's their LSTM implementation replete with CNN:
[patent FIG. 6]

[0085] First, in the process illustrated in FIG. 6, for example, a plurality of pieces of image data is input to the first DNN processing unit 130 from the capture processing unit 120 (Step S 11 ).

[0086] Next, the first DNN processing unit 130 executes an image recognizing process on the image data received from the capture processing unit 120 so as to recognize an object included in image data (Step S 12 ). Specifically, the first DNN processing unit 130 executes CNN on each piece of image data so as to recognize an object included in image data. The first DNN processing unit 130 generates metadata from an execution result of CNN for each piece of image data.

[0087] Next, the second DNN processing unit 230 recognizes, by using a Recurrent Neural Network (RNN), relationship of metadata generated by the first DNN processing unit 130 (Step S 13 ). Specifically, the second DNN processing unit 230 recognizes relationship of metadata by using a Long short-term memory (LSTM) network.

[0088] The second DNN processing unit 230 recognizes relationship of metadata so as to execute captioning (Step S 14 ). For example, the second DNN processing unit 230 executes captioning on image data, such as “boy”, “playing”, and “golf”.
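
To make the two-chip split concrete, here is a minimal PyTorch sketch of the pipeline the patent describes: a first DNN (a CNN) on the image sensor producing compact per-frame "metadata", and a second DNN (an LSTM) on a separate information processing device relating that metadata over time for captioning. All module names, layer sizes and the vocabulary are illustrative assumptions on my part, not Sony's actual implementation:

```python
import torch
import torch.nn as nn

class SensorCNN(nn.Module):
    """First DNN: runs on the solid-state image capturing device.
    Produces a compact per-frame feature vector ("metadata")."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, feat_dim)

    def forward(self, frame):                # frame: (B, 3, H, W)
        x = self.backbone(frame).flatten(1)  # (B, 32)
        return self.proj(x)                  # (B, feat_dim)

class HostLSTM(nn.Module):
    """Second DNN: runs on the information processing device.
    Consumes per-frame metadata and relates frames over time."""
    def __init__(self, feat_dim=64, hidden=128, vocab=1000):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.caption_head = nn.Linear(hidden, vocab)  # e.g. "boy", "playing", "golf"

    def forward(self, metadata_seq):          # (B, T, feat_dim)
        out, _ = self.lstm(metadata_seq)
        return self.caption_head(out[:, -1])  # token logits for the clip

# Only the small metadata tensor crosses the chip-to-chip boundary,
# not the raw frames - which is the privacy/bandwidth point of the split.
frames = torch.randn(1, 8, 3, 96, 96)        # an 8-frame clip
sensor, host = SensorCNN(), HostLSTM()
metadata = torch.stack([sensor(frames[:, t]) for t in range(8)], dim=1)
logits = host(metadata)                      # (1, vocab)
```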



The earliest priority date is 2018-09-21, so well before Prophesee/Akida.

CNN uses MACs (multiply-accumulate units) with a multiplication matrix, which takes up a lot of silicon real estate. I imagine SSS would have had engineering samples of this working and in the hands of their customers for quite a while.
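
For a sense of why the MAC array eats real estate, here is a back-of-envelope count for a single conv layer (all numbers are my own illustrative assumptions, not from the patent):

```python
# Back-of-envelope MAC count for one conv layer: H * W * Cout * Cin * k^2.
h_out, w_out = 56, 56        # output feature map size (assumed)
c_in, c_out, k = 64, 128, 3  # channel counts and kernel size (assumed)

macs = h_out * w_out * c_out * c_in * k * k
print(f"{macs/1e9:.2f} GMACs per frame")  # ~0.23 GMACs for this one layer alone
```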

Akida can certainly perform the above CNN function.

Will Akida2 with LSTM be able to label action videos? I wonder when the engineering samples of Akida2 with LSTM will be produced?

There is always the ADE/MetaTF Akida simulation software which is provided to EAP customers ... and of course, Anil would still have a few Akida 1 SoCs at the back of his desk drawer, so, if one were that way inclined, one could patch together an Akida1/MetaTF hybrid.

... and then there's the mysterious visit of Apple CEO to Sony image processing labs.
 
  • Like
  • Fire
  • Love
Reactions: 42 users
I don’t remember seeing or reading this article before, but given Rob Telson's statement that they saw Nvidia more as a partner than as a competitor, and with Nvidia (through Mercedes-Benz at least) fully aware of AKIDA Science Fiction, I would like to think that in their role as a consultant to Sony EV they may have mentioned Brainchip:

Computing Hardware Underpinning the Next Wave of Sony, Hyundai, and Mercedes EVs​

January 30, 2022 by Tyler Charboneau

Major automakers Sony, Hyundai, and Mercedes-Benz have recently announced their EV roadmaps. What computing hardware will appear in these vehicles?

With electric vehicles (EVs) becoming increasingly mainstream, automakers are engaging in the next great development war in hopes of elevating themselves above their competitors. Auto executives expect EVs, on average, to account for 52% of all sales by 2030. Accordingly, investing in new computing technologies and EV platforms is key.
While the battery is the heart of the EV, intelligently engineering the car's “brain” is equally important. The EV’s computer is responsible for controlling a plethora of functions—ranging from regenerative-braking feedback, to infotainment operation, to battery management, to instrument cluster operation. Specifically, embedded chips like the CPU enable these features.

Diagram of some EV subsystems. Image used courtesy of MDPI

Modernized solutions like GM’s Super Cruise and Ultra Cruise claim to effectively handle 95% of driving scenarios. Ultra Cruise alone will leverage a new AI-capable 5nm processor. Drivers are demanding improved safety features like advanced lane centering, emergency braking, and adaptive cruise control. In fact, Volkswagen’s ID.4 EV received poor marks from buyers because it lacked such core capabilities.
What other hardware-level developments have manufacturers unveiled?

Sony Enters the EV Fray​

At CES 2022, Sony announced its intention to form a new company called Sony Mobility. This offshoot will be dedicated solely to exploring EV development—building on Sony’s 2020 VISION-S research initiative. While Sony unveiled its coupe EV prototype two years ago, dubbed VISION-S 01, this year’s VISION-S 02 prototype is an SUV. However, the company hasn’t committed to bringing these cars to mass-market consumers themselves.
It’s said that both Qualcomm and NVIDIA have been involved throughout the development process. However, the two prominent electronics manufacturers haven’t made their involvement with Sony clear (and vice versa). Tesla has adopted NVIDIA hardware to support its machine-learning algorithms; it’s, therefore, possible that Sony has taken similar steps.
Additionally, NVIDIA has long touted its DRIVE Orin SoC, DRIVE Hyperion, and DRIVE AGX Pegasus SoC/GPU. These are specifically built to power autonomous vehicles. The same can be said for its DRIVE Sim program, which enables self-driving simulations based on dynamic data.

The NVIDIA DRIVE Atlan. Image used courtesy of NVIDIA

The Sony VISION-S 02 features a number of internal displays and driver-monitoring features. This is where Qualcomm’s involvement may begin. The chipmaker previously introduced the Snapdragon Digital Chassis, a hardware-software suite that supports the following:
  • Advanced driver-assistance feature development
  • 4G, 5G, Wi-Fi, and Bluetooth connectivity
  • Virtual assistance, voice control, and graphical information
  • Car-to-Cloud connectivity
  • Navigation and GPS
It’s unclear if any of Sony’s EVs are reliant on either supplier for in-cabin functionality or overall development. However, both companies have a vested interest in the EV-AV market, and at least have held consulting roles with Sony for two years.

Hyundai and IonQ Join Forces​


Since Hyundai unveiled its BlueOn electric car in 2010, the company has been hard at work developing improved EVs behind the scenes. These efforts have led to recent releases of the IONIQ EV and Kona Electric. However, the automaker concedes that battery challenges have plagued the ownership experience of EVs following their market launch. Batteries continue to suffer wear and tear from charge and discharge cycling. Capacities have left something to be desired, as have overall durability and safety throughout an EV’s lifespan.
A recent partnership with quantum-computing experts at IonQ aims to solve many of these problems. Additionally, the duo hopes to lower battery costs while improving efficiency along the way. IonQ’s quantum processors are doing the legwork here—alongside the company’s quantum algorithms. The goal is to study lithium-based battery chemistries while leveraging Hyundai’s data and expertise in the area.

One of IonQ’s ion-trap chips announced in August 2021. Image used courtesy of IonQ

By 2025, Hyundai is aiming to introduce more than 12 battery electric vehicles (BEVs) to consumers. Batteries remain the most expensive component in all EVs, and there’s a major incentive to reduce their costs and pass savings down to consumers. This will boost EV uptake. While the partnership isn’t supplying Hyundai vehicles with hardware components at scale, the venture could help Hyundai design better chip-dependent battery-management systems in the future.

Mercedes-Benz Delivers Smarter Operation​

Stemming from time in the lab, including contributions from Formula 1 and Formula E, Mercedes-Benz has developed its next-generation VISION EQXX vehicle. A major selling point of Mercedes’ newest EV is the cockpit design—which features displays and graphics spanning the vehicle’s entire width. The car is designed to be human-centric and actually mimic the human mind during operation.
How is this possible? The German automaker has incorporated BrainChip’s Akida neural processor and associated software suite. This chipset powers the EQXX’s onboard systems and runs spiking neural networks. This operation saves power by only consuming energy during periods of learning or processing. Such coding dramatically lowers energy consumption.

Diagram of some of Akida's IP. Image used courtesy of Brainchip

Additionally, it makes driver interaction much smoother via voice control. Keyword recognition is now five to ten times more accurate than it is within competing systems, according to Mercedes. The result is described as a better driving experience while markedly reducing AI energy needs across the vehicle’s entirety. The EQXX and EVs after it will think in much more humanistic ways and support continuous learning. By doing so, Mercedes hopes to continually refine the driving experience throughout periods of extended ownership, across hundreds of thousands of miles.

The Future of EV Electronics​

While companies have achieved Level 2+ autonomy through driver-assistance packages, upgradeable EV software systems may eventually unlock fully-fledged self-driving. Accordingly, chip-level innovations are surging forward to meet future demand.
It’s clear that EV development has opened numerous doors for electrical engineers and design teams. The inclusion of groundbreaking new components rooted in AI and ML will help drivers connect more effectively with their vehicles. Interestingly, different automakers are taking different approaches on both software and hardware fronts.
Harmonizing these two facets of EV computing will help ensure a better future for battery-powered cars—making them more accessible and affordable to boot.


Brainchip's stated ambition in automotive is first to make every automotive sensor smart, and later to take control by becoming the central processing unit to which all these smart sensors report.

My opinion only so DYOR
FF

AKIDA BALLISTA

PS: As we approach the festive season, when hopefully there will be time for reflection, please use some of that time to settle on a plan if you have been too busy to do so, because 2023 is shaping up as a breakout year for Brainchip.

If it was not clear to you from the MF article, it should be: the manipulators are already planning their activities for 2023 and will be out in force. Even if the price is rising off the back of price-sensitive announcements, they will claim that whatever income starts to appear does not justify the share price, hoping to manipulate retail holders.

The only way to avoid being manipulated is to have a plan locked in before emotion comes into play and hasty decisions are made which later become a cause for regret.
 
  • Like
  • Love
  • Fire
Reactions: 53 users

Diogenese

Top 20
(quoting FF's post above)
Planning ahead:
Level 4/5 capable AD EV with 10+ Akidae plus Talnode-Si battery - not really fussed about voice/gesture control but it seems inevitable. I think I'll have dummy accelerator and brake pedals and a steering wheel - a clutch pedal and gear lever would be excessive?
 
  • Haha
  • Like
  • Love
Reactions: 14 users
(quoting FF's post above)
Wonder if Akida will introduce Intel and NVIDIA properly to the wonderful world of 1 - 4 bit instead ;)



Nvidia, Intel develop memory-optimizing deep learning training standard
Paper: FP8 can deliver training accuracy similar to 16-bit standards

Ben Wodecki
September 20, 2022


Nvidia, Intel and Arm have joined forces to create a new standard designed to optimize memory usage in deep learning applications.

The 8-bit floating point (FP8) standard was developed across several neural network architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs) and Transformer-based models.

The standard is also applicable to language models up to 175 billion parameters, which would cover the likes of GPT-3, OPT-175B and Bloom.

“By adopting an interchangeable format that maintains accuracy, AI models will operate consistently and performantly across all hardware platforms, and help advance the state of the art of AI,” Nvidia’s Shar Narasimhan wrote in a blog post.

Optimizing AI memory usage
When building an AI system, developers need to consider the weights of the system, which govern how effectively the system learns from its training data.

There are several standards in current use, including FP32 and FP16; lower-precision formats reduce the volume of memory required to train a system, but often at the expense of accuracy.

The new approach uses fewer bits than prior formats in order to use memory more efficiently; less memory used by a system means less computational power is needed to run an application.

The trio outlined the new standard in a paper, which covers training and inference evaluation using the standard across a variety of tasks and models.

According to the paper, FP8 achieved “comparable accuracy” to FP16 format across use cases and applications including computer vision.

Results on transformers and GAN networks, like OpenAI’s DALL-E, saw FP8 achieve training accuracy similar to 16-bit precisions while delivering “significant speedups.”

Testing using the MLPerf Inference benchmark, Nvidia Hopper using FP8 achieved 4.5x faster times using the BERT model for natural language processing.

“Using FP8 not only accelerates and reduces resources required to train but also simplifies 8-bit inference deployment by using the same datatypes for training and inference,” according to the paper.
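
As a rough feel for what FP8 buys (and why the 1-4 bit world Akida lives in is even leaner), here is a toy numpy sketch that rounds values to an FP8 E4M3-style grid, i.e. 3 stored mantissa bits and a ~448 maximum normal value. It ignores subnormals and exponent underflow, and it is emphatically a sketch, not the Nvidia/Intel/Arm implementation:

```python
import numpy as np

def quantize_fp8_e4m3(x, mant_bits=3, max_normal=448.0):
    """Toy FP8 (E4M3-style) round-trip: keep 3 stored mantissa bits and
    clamp to E4M3's largest normal value (~448). Subnormals, NaNs and
    exponent underflow are ignored for brevity."""
    x = np.clip(np.asarray(x, dtype=np.float64), -max_normal, max_normal)
    m, e = np.frexp(x)                  # x == m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** (mant_bits + 1)      # 1 implicit + 3 stored mantissa bits
    m_q = np.round(m * scale) / scale   # snap the mantissa to the FP8 grid
    return np.ldexp(m_q, e)

w = np.random.randn(5).astype(np.float32)
w8 = quantize_fp8_e4m3(w)
print("fp32:", w)
print("fp8 :", w8)
print("max abs error:", np.abs(w - w8).max())

# Weight memory for a 175B-parameter model, bytes per parameter:
#   fp32 ~ 700 GB, fp16 ~ 350 GB, fp8 ~ 175 GB, 4-bit ~ 87.5 GB
```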
 
  • Like
  • Fire
  • Love
Reactions: 24 users

Diogenese

Top 20
Wonder if Akida will introduce Intel and NVIDIA properly to the wonderful world of 1 - 4 bit instead ;)



(quoting Fmf's post above)
Hi Fmf,

That just triggered a couple of obscure dots ... Akida works on probability, what does the image most closely resemble?

In fact, I reckon we are on the path of the Infinite Improbability Drive. How many heads does PvdM have?
 
  • Like
  • Haha
  • Fire
Reactions: 30 users