BRN Discussion Ongoing

you tell me. Your closely acquainted with being one
What's your problem Proga?
Are you an English speaking Australian?

If so, not a good look, to be basically saying I'm an idiot, when you start a sentence with lower case and don't know the difference between your and you're?

If English is "your" first language, maybe "you're" just not educated enough, or paid enough attention in school, but I still wouldn't say "you're" an idiot.

Communication is more important than how it's presented.
 
  • Like
  • Haha
Reactions: 5 users
Hi TFM
One of Bravo’s posts in the last week or so contained a Mercedes article wherein they referred to creating their own chips.

This as you state has also been previously confirmed by Mercedes Benz.

Logically, if they are going to deploy AKIDA at scale, including the AKIDA IP in their own semiconductor makes the most sense.

I think it was the Infineon fellow who spoke to the advantages of offloading the AI processing to AKIDA.

Most certainly OnSemi confirmed the good sense of this approach in automotive.

The wild speculation that arises, given OnSemi are partnered with Mercedes Benz, is that the proving out of AKIDA with OnSemi’s intelligent airbag deployment system was requested by Mercedes - HC readers, look up the meaning of ‘wild speculation’.

Given Mercedes Benz’s recognition of the 5 to 10 times better efficiency brought by AKIDA, the confirmation that they are making their own chips, and the need to compensate for the energy excesses of Nvidia, it is hard to imagine they are not finding themselves over an AKIDA barrel.

My opinion only DYOR
Fact Finder
Is Mercedes a partner of Brainchip? Like I've said, partnerships are the new IPs.
As for the balance sheet, for this company to be a powerhouse it needs to be in the green every quarter. Until this is the case we'll be constantly looking over our shoulders.
It's not being negative, it's just the plain hard facts; even the great Fact Finder and his wealth of knowledge couldn't put an argument up. Shorters will disappear. It's a good quarter moving in a good direction, however the LDA Capital announcement tells me the company isn't there yet.
 
  • Like
Reactions: 1 users

wilzy123

Founding Member
Is Mercedes a partner of Brainchip? Like I've said, partnerships are the new IPs.
As for the balance sheet, for this company to be a powerhouse it needs to be in the green every quarter. Until this is the case we'll be constantly looking over our shoulders.
It's not being negative, it's just the plain hard facts; even the great Fact Finder and his wealth of knowledge couldn't put an argument up. Shorters will disappear. It's a good quarter moving in a good direction, however the LDA Capital announcement tells me the company isn't there yet.
 
  • Haha
Reactions: 9 users

Proga

Regular
What's your problem Proga?
Are you an English speaking Australian?

If so, not a good look, to be basically saying I'm an idiot, when you start a sentence with lower case and don't know the difference between your and you're?

If English is "your" first language, maybe "you're" just not educated enough, or paid enough attention in school, but I still wouldn't say "you're" an idiot.

Communication is more important than how it's presented.
ROFL. Is that the best you have, FW? Am I an Australian? Quote: "not a good look, to be basically saying I'm an idiot (which is true), when you start a sentence with lower case and don't know the difference between your and you're?" Unquote.

So you're basing your whole analysis on one post, FW? o_O Not the other thousands of posts, FW?
 
  • Haha
  • Like
Reactions: 2 users
The excitement for 780k in revenue in this forum is actually frightening. At least this 4C isn’t as bad as the prior one. Logically speaking, this company is closer to bankruptcy than it is to profitability. But hopes are high. Let’s see what 2024 brings.
I suspect you are not aware of what constitutes bankruptcy under Australian law.

Bankruptcy is a term that can only be used for a real person.

Companies cannot be bankrupt; they can only become insolvent.

Insolvency has a specific legal definition, and to be close to insolvency a company has to lack sufficient funds to meet its debts as they fall due; as a rule of thumb, the ASX requires a company to have at least two quarters of working capital.

Brainchip has four quarters and a facility to raise more working capital if required.

Your attempt to suggest Brainchip is close to bankruptcy is wrong at law and ignorant at best.

Making this statement when it is so incorrect could be seen as an attempt to influence the market for Brainchip shares on the ASX and as such a criminal offence.

I assume you did not intend to break the law; however, I would suggest you not continue in this vein.

Recently I asked you to provide the evidence you relied upon to assert Brainchip was partnered with Samsung. You responded with a reference to the fact that Brainchip presented at a TinyML conference held in a venue where Samsung had naming rights or owned the premises.

As proof goes this does not even rate as a dot so I think it is true to say you have no evidence to prove your assertion.

As that is the case, your assertion that Samsung’s failure to announce using AKIDA confirms Brainchip is on the brink of collapse is quite ludicrous, except when I put it together with your attempt to falsely raise concerns about Brainchip’s solvency.

I think you should take care with the allegations you make in the future.

My opinion only DYOR
Fact Finder
 
  • Like
  • Fire
  • Love
Reactions: 65 users

Diogenese

Top 20
Hi TFM
One of Bravo’s posts in the last week or so contained a Mercedes article wherein they referred to creating their own chips.

This as you state has also been previously confirmed by Mercedes Benz.

Logically, if they are going to deploy AKIDA at scale, including the AKIDA IP in their own semiconductor makes the most sense.

I think it was the Infineon fellow who spoke to the advantages of offloading the AI processing to AKIDA.

Most certainly OnSemi confirmed the good sense of this approach in automotive.

The wild speculation that arises, given OnSemi are partnered with Mercedes Benz, is that the proving out of AKIDA with OnSemi’s intelligent airbag deployment system was requested by Mercedes - HC readers, look up the meaning of ‘wild speculation’.

Given Mercedes Benz’s recognition of the 5 to 10 times better efficiency brought by AKIDA, the confirmation that they are making their own chips, and the need to compensate for the energy excesses of Nvidia, it is hard to imagine they are not finding themselves over an AKIDA barrel.

My opinion only DYOR
Fact Finder
If MB are making their own processor, I would take that to mean that they are working with a processor maker and adapting an existing processor for MB's specific requirements, because designing a processor from the ground up would be a herculean task.

ARM has released its newest batch of processors, Intel has fallen behind, and SiFive is the new boy. We know BRN's ecosystem encompasses all three, but I haven't seen anything to suggest MB is working with any of these. That doesn't mean they aren't, just that I haven't seen any indication.

We know that MB have partnered with Nvidia and Qualcomm. Qualcomm's strength is communications, and Nvidia are the leaders in GPUs, very fast parallel processors with a wattage appetite to match, but which MB have nevertheless decided is the optimal processor for their purposes.

We know that EV makers are keen to squeeze every km out of the battery they can, and MB is no exception. A major function of an ADAS system is continuously identifying objects in the environment. We also know that implementing CNN (classification/inference) functions in software is ruinously power hungry in general, and in EVs in particular.

We also know that the Akida NN SoC can perform classification/inference using a fraction of the power of a software CNN.

Then there is the enigmatic "more like partners than competitors" comment.

MB recently stated they are continuing to develop their ADAS processor and will announce another partner for their ADAS processor project (having already mentioned Nvidia and Qualcomm). While Nvidia and Qualcomm both have pretensions to NN capabilities, I doubt that they come up to Akida's standards.

As we all know, the only sensible thing for MB and Nvidia would be to work together with BRN to design MB's purpose-built ADAS processor.

If Akida is to be adopted for MB's ADAS processor, it will require a raft of SNN models (image libraries for frame cameras, lidar, ultrasound, radar, ...) adapted for SNN, so the reference to continuing development of MB's ADAS processor could encompass development of such SNN libraries.

The model libraries are crucial for ADAS because they are equivalent to a driver's experience in road use. This is why Tesla and the robotaxi companies have been accumulating this information for years.

MB has also been accumulating such data, but it would not be SNN format, and converting it to SNN would be a major task.

Let's hope common sense prevails.
 
  • Like
  • Love
  • Fire
Reactions: 69 users

manny100

Regular
Hi TFM
One of Bravo’s posts in the last week or so contained a Mercedes article wherein they referred to creating their own chips.

This as you state has also been previously confirmed by Mercedes Benz.

Logically, if they are going to deploy AKIDA at scale, including the AKIDA IP in their own semiconductor makes the most sense.

I think it was the Infineon fellow who spoke to the advantages of offloading the AI processing to AKIDA.

Most certainly OnSemi confirmed the good sense of this approach in automotive.

The wild speculation that arises, given OnSemi are partnered with Mercedes Benz, is that the proving out of AKIDA with OnSemi’s intelligent airbag deployment system was requested by Mercedes - HC readers, look up the meaning of ‘wild speculation’.

Given Mercedes Benz’s recognition of the 5 to 10 times better efficiency brought by AKIDA, the confirmation that they are making their own chips, and the need to compensate for the energy excesses of Nvidia, it is hard to imagine they are not finding themselves over an AKIDA barrel.

My opinion only DYOR
Fact Finder
Hi FF, see the link below from the BRN website. Mercedes gets a mention. I have not seen this link before today. It's titled "Designing Smarter and Safer Cars with Essential AI".

Under the SoC chip drawing it says "AKIDA integrated into SoC".
I presume this may mean the SoC may be provided by a third party, e.g. NVIDIA, Renesas, Microchip etc.
If this is the case, it will be a matter of watching the financials?
Correct me if I am wrong.
I am thinking that BRN may get involved in the design of AI applications as an extra source of revenue and a way of retaining customer loyalty?
 
  • Like
  • Fire
  • Thinking
Reactions: 28 users

Reuben

Founding Member
I suspect you are not aware of what constitutes bankruptcy under Australian law.

Bankruptcy is a term that can only be used for a real person.

Companies cannot be bankrupt; they can only become insolvent.

Insolvency has a specific legal definition, and to be close to insolvency a company has to lack sufficient funds to meet its debts as they fall due; as a rule of thumb, the ASX requires a company to have at least two quarters of working capital.

Brainchip has four quarters and a facility to raise more working capital if required.

Your attempt to suggest Brainchip is close to bankruptcy is wrong at law and ignorant at best.

Making this statement when it is so incorrect could be seen as an attempt to influence the market for Brainchip shares on the ASX and as such a criminal offence.

I assume you did not intend to break the law; however, I would suggest you not continue in this vein.

Recently I asked you to provide the evidence you relied upon to assert Brainchip was partnered with Samsung. You responded with a reference to the fact that Brainchip presented at a TinyML conference held in a venue where Samsung had naming rights or owned the premises.

As proof goes this does not even rate as a dot so I think it is true to say you have no evidence to prove your assertion.

As that is the case, your assertion that Samsung’s failure to announce using AKIDA confirms Brainchip is on the brink of collapse is quite ludicrous, except when I put it together with your attempt to falsely raise concerns about Brainchip’s solvency.

I think you should take care with the allegations you make in the future.

My opinion only DYOR
Fact Finder
Nice one FF....
 
  • Like
  • Love
Reactions: 14 users

IloveLamp

Top 20
  • Haha
  • Fire
  • Like
Reactions: 7 users

Tothemoon24

Top 20
Is Mercedes a partner of Brainchip? Like I've said, partnerships are the new IPs.
As for the balance sheet, for this company to be a powerhouse it needs to be in the green every quarter. Until this is the case we'll be constantly looking over our shoulders.
It's not being negative, it's just the plain hard facts; even the great Fact Finder and his wealth of knowledge couldn't put an argument up. Shorters will disappear. It's a good quarter moving in a good direction, however the LDA Capital announcement tells me the company isn't there yet.

It’s got me stuffed how David Holland isn’t in the Top 20 posters.

🥷🏻
 
  • Haha
  • Like
Reactions: 13 users

IloveLamp

Top 20
If MB are making their own processor, I would take that to mean that they are working with a processor maker and adapting an existing processor for MB's specific requirements, because designing a processor from the ground up would be a herculean task.

ARM has released its newest batch of processors, Intel has fallen behind, and SiFive is the new boy. We know BRN's ecosystem encompasses all three, but I haven't seen anything to suggest MB is working with any of these. That doesn't mean they aren't, just that I haven't seen any indication.

We know that MB have partnered with Nvidia and Qualcomm. Qualcomm's strength is communications, and Nvidia are the leaders in GPUs, very fast parallel processors with a wattage appetite to match, but which MB have nevertheless decided is the optimal processor for their purposes.

We know that EV makers are keen to squeeze every km out of the battery they can, and MB is no exception. A major function of an ADAS system is continuously identifying objects in the environment. We also know that implementing CNN (classification/inference) functions in software is ruinously power hungry in general, and in EVs in particular.

We also know that the Akida NN SoC can perform classification/inference using a fraction of the power of a software CNN.

Then there is the enigmatic "more like partners than competitors" comment.

MB recently stated they are continuing to develop their ADAS processor and will announce another partner for their ADAS processor project (having already mentioned Nvidia and Qualcomm). While Nvidia and Qualcomm both have pretensions to NN capabilities, I doubt that they come up to Akida's standards.

As we all know, the only sensible thing for MB and Nvidia would be to work together with BRN to design MB's purpose-built ADAS processor.

If Akida is to be adopted for MB's ADAS processor, it will require a raft of SNN models (image libraries for frame cameras, lidar, ultrasound, radar, ...) adapted for SNN, so the reference to continuing development of MB's ADAS processor could encompass development of such SNN libraries.

The model libraries are crucial for ADAS because they are equivalent to a driver's experience in road use. This is why Tesla and the robotaxi companies have been accumulating this information for years.

MB has also been accumulating such data, but it would not be SNN format, and converting it to SNN would be a major task.

Let's hope common sense prevails.
"ARM has released its newest batch of processors, Intel has fallen behind, and SiFive is the new boy. We know BRN's ecosystem encompasses all three, but I haven't seen anything to suggest MB is working with any of these."

@Diogenese

The CEO of ARM, Rene Haas, told an interviewer at CES, in one of the videos I posted, that every EV at CES had ARM chips in it, so there is that.

Apologies, I can't remember which post it was.

Edit: found it. Well worth a watch for those who missed it

 
Last edited:
  • Like
  • Fire
Reactions: 16 users
@DingoBorat do you agree you're a FW?
You've got mail Proga, thought I would keep your BS out of the forum, but maybe you haven't worked out how to open it yet..

You're welcome to think what you like about me.

And I guess I'd have to agree, that yes sometimes I can be a FW, but generally speaking, I don't think so.

Maybe, I'm just justifying my behavior 🤔..
 
  • Like
Reactions: 3 users

Harwig

Regular
The excitement for 780k in revenue in this forum is actually frightening. At least this 4C isn’t as bad as the prior one. Logically speaking, this company is closer to bankruptcy than it is to profitability. But hopes are high. Let’s see what 2024 brings.
So boring. Please allow others to form their own opinions and make their decisions based on the evidence available to them. I am not stupid, but am really sick of reading your veiled attempts to cast doubt. Please, please put me on ignore, or I will tomorrow.
 
  • Like
Reactions: 7 users

IloveLamp

Top 20
  • Haha
Reactions: 14 users

Tothemoon24

Top 20


Integrated visual communication is entering a new dimension with #MBUX Surround Navigation. It marries route guidance and driving assistance for a seamless real-time experience brought to life by powerful Unity 3D game engine graphics.

Facilitated by integration with MB.OS, this advanced feature pairs a real-world view of the car’s surroundings using the vehicle sensors. The driver benefits from significantly enhanced situational awareness and “seeing what the car sees.” Incorporation of information into the driver display provides everything the driver needs to know in a single glance. For instance, it shows other cars, vans, trucks, cyclists and even pedestrians and potential road hazards.

It superimposes route guidance into a realistic representation of the surroundings, which is very helpful in busy urban environments. The customer clearly sees exactly where the next turn will take them with easily recognizable buildings and infrastructure. The system also provides integrated visualizations of the car’s current status, including activated indicators, headlights and vehicle movement depicted by spinning wheels.

What cities would you like to test the upcoming MBUX Surround Navigation in first?

 
  • Like
  • Wow
Reactions: 17 users

7für7

Regular

Integrated visual communication is entering a new dimension with #MBUX Surround Navigation. It marries route guidance and driving assistance for a seamless real-time experience brought to life by powerful Unity 3D game engine graphics.

Facilitated by integration with MB.OS, this advanced feature pairs a real-world view of the car’s surroundings using the vehicle sensors. The driver benefits from significantly enhanced situational awareness and “seeing what the car sees.” Incorporation of information into the driver display provides everything the driver needs to know in a single glance. For instance, it shows other cars, vans, trucks, cyclists and even pedestrians and potential road hazards.

It superimposes route guidance into a realistic representation of the surroundings, which is very helpful in busy urban environments. The customer clearly sees exactly where the next turn will take them with easily recognizable buildings and infrastructure. The system also provides integrated visualizations of the car’s current status, including activated indicators, headlights and vehicle movement depicted by spinning wheels.

What cities would you like to test the upcoming MBUX Surround Navigation in first?

Did you just put some pictures together with your own text? 🤔
 
  • Haha
Reactions: 2 users
Excellent recent EE article espousing the need for and use of in-memory computing and SNN AI.



Revolutionizing AI Inference: Unveiling the Future of Neural Processing​

January 12, 2024 Virgile Javerliac
To overcome CPU and GPU limitations, hardware accelerators have been designed specifically for AI inference workloads, enabling highly efficient and optimized processing while minimizing energy consumption.

The AI industry encompasses a dynamic environment influenced by technological advancements, societal needs and regulatory considerations. Technological progress in machine learning, natural-language processing and computer vision has accelerated AI’s development and adoption. Societal demands for automation, personalization and efficiency across various sectors, including healthcare, finance and manufacturing, have further propelled the integration of AI technologies.

Additionally, the evolving regulatory landscape emphasizes the importance of ethical AI deployment, data privacy and algorithmic transparency, guiding the responsible development and application of AI systems.

The AI industry combines both training and inference processes to create and deploy AI solutions effectively. Both AI inference and AI training are integral components of the overall AI lifecycle, and their significance depends on the specific context and application. While AI training is crucial for developing and fine-tuning models by learning patterns and extracting insights from data, AI inference plays a vital role in utilizing these trained models to make real-time predictions and decisions. The growing importance of AI inference—more than 80% of AI tasks today—lies in its pivotal role in driving data-driven decision-making, personalized user experiences and operational efficiency across diverse industries.

Efficient AI inference implementation faces challenges concerning data availability, computational resources, algorithmic complexity, interpretability and regulatory compliance. Adapting to dynamic environments and managing scalability while controlling costs pose additional hurdles. Overcoming these challenges requires comprehensive strategies, including robust data management practices, advancements in hardware capabilities and algorithmic refinements. Developing explainable AI models and adhering to ethical and regulatory guidelines are crucial for building user trust and ensuring compliance. Furthermore, balancing resource allocation and cost management through efficient operational practices and technological innovations is essential for achieving sustainable and effective AI inference solutions across diverse industry sectors.

The pivotal role of AI inference

By automating tasks, enhancing predictive maintenance and enabling advanced analytics, AI inference optimizes processes, reduces errors and improves resource allocation. AI inference powers natural-language processing, improving communication and comprehension between humans and machines. Its impact on manufacturing includes predictive maintenance, quality control and supply chain management, fostering efficiency, reduced waste and enhanced product quality, highlighting its transformative influence on industry operations.

Industry challenges in sustainable AI inference

AI inference faces challenges concerning high energy consumption, intensive computational demands and real-time processing constraints, leading to increased operational costs and environmental impact. More than 60% of total AI power consumption comes from inference, and the rise in inference demand has led to a 2.5× increase in data center capacity over two years (GAFA data). For servers, heat generation during intensive computations necessitates sophisticated cooling systems that further contribute to the overall energy consumption of AI processes.

Furthermore, balancing the need for efficient real-time processing with low-latency requirements, mandatory for servers, advanced driver-assistance systems (ADAS) or manufacturing applications, poses a significant challenge, requiring advanced hardware designs and optimized computational strategies. Prioritizing energy-efficient solutions—without compromising accuracy—with renewable energy sources and eco-friendly initiatives is crucial for mitigating the environmental impact of AI inference processes.

Classical AI inference hardware designs, using a CPU or GPU, face limitations in achieving energy efficiency due to the complexity and specificity of AI algorithms, leading to high power consumption (hundreds of watts per multi-core unit for servers). Inefficient data movement between processing units and memory further impacts energy efficiency and throughput; for instance, access to external DRAM consumes 200× more energy than access to local registers. In the end, due to higher computational demands, next-gen servers using a CPU and GPU could consume up to 1,000 W by 2025. Deploying AI inference on resource-constrained, battery-powered devices is even more challenging, as the most efficient CPU- and GPU-based designs, at 10 mW to a few watts, suffer from strong throughput limitations, limiting AI complexity and the final user experience. Balancing energy efficiency with performance and accuracy requirements necessitates careful tradeoffs during the design process, calling for comprehensive optimization strategies. Inadequate hardware support for complex AI workloads can hinder energy efficiency and performance.
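The data-movement point above can be made concrete with a toy calculation. The absolute picojoule figures below are assumptions for illustration; only the roughly 200× DRAM-to-register energy ratio comes from the article.

```python
# Illustrative arithmetic only: applying the ~200x DRAM-vs-register
# energy gap quoted above to a hypothetical inference workload.
# The absolute picojoule figures are assumptions, not measured data.

REGISTER_ACCESS_PJ = 0.1                    # assumed energy per register access (pJ)
DRAM_ACCESS_PJ = REGISTER_ACCESS_PJ * 200   # ~200x more, per the article

def data_movement_energy_uj(accesses: int, dram_fraction: float) -> float:
    """Energy in microjoules for `accesses` operand fetches, where
    `dram_fraction` of them miss local storage and go out to DRAM."""
    dram = accesses * dram_fraction
    local = accesses - dram
    total_pj = dram * DRAM_ACCESS_PJ + local * REGISTER_ACCESS_PJ
    return total_pj / 1e6

accesses = 10_000_000  # operand fetches for one hypothetical inference pass
naive = data_movement_energy_uj(accesses, dram_fraction=0.5)
tiled = data_movement_energy_uj(accesses, dram_fraction=0.01)
print(f"50% DRAM traffic: {naive:.2f} uJ")
print(f" 1% DRAM traffic: {tiled:.2f} uJ")
print(f"reduction: {naive / tiled:.1f}x")
```

Even with made-up constants, the ratio shows why keeping operands close to the compute units, rather than raising raw FLOPS, dominates the inference energy budget.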

The search for energy-efficient solutions

The industry’s escalating demand for energy-efficient AI inference solutions is driven by sustainability goals, cost-reduction objectives and new usages.

Businesses seek scalable and high-performance solutions to manage complex AI workloads without incurring excessive energy costs. On the other hand, energy-efficient AI inference would enable mobile and resource-constrained devices to perform complex tasks without draining the battery quickly while reducing the reliance on cloud-based processing, minimizing data transmission and latency issues. It will contribute to enhanced user experiences through new usages with advanced features, such as real-time language translation, personalized recommendations and accurate image recognition, fostering greater engagement and satisfaction.

Innovative contributions in AI inference

To overcome CPU and GPU limitations, innovative hardware accelerators have been designed specifically for AI inference workloads, enabling highly efficient and optimized processing while minimizing energy consumption. Such accelerators implement an optimized dataflow with dedicated operators (pooling, activation functions, normalization, etc.) used in AI applications. At the heart of the dataflow is the matrix-multiply unit, a large array of processing elements able to efficiently handle large matrix-vector multiplications, convolutions and other complex operations, given that the majority of neural network workloads reduce to matrix-multiply operations.
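As a minimal sketch of why the matrix-multiply unit sits at the centre (pure Python, with illustrative names that belong to no real accelerator API): a fully connected layer, and after an im2col transform a convolution too, reduces to a matrix product followed by cheap dedicated elementwise operators.

```python
# Toy model of the accelerator dataflow: a big matmul feeding small
# dedicated operators (bias add, activation). Illustrative only.

def matvec(W, x):
    """Matrix-vector product: the work the processing-element array does."""
    return [sum(w * xj for w, xj in zip(row, x)) for row in W]

def relu(v):
    """Dedicated elementwise activation operator."""
    return [max(0.0, a) for a in v]

def dense_layer(W, b, x):
    """One inference layer: matmul, then bias add, then activation."""
    y = matvec(W, x)
    return relu([yi + bi for yi, bi in zip(y, b)])

W = [[1.0, -2.0], [0.5, 0.5]]
b = [0.0, -1.0]
x = [3.0, 1.0]
print(dense_layer(W, b, x))  # → [1.0, 1.0]
```

The design consequence is that accelerating the one matmul primitive, plus a handful of fixed-function operators, covers most of the network.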

To further optimize energy efficiency, AI accelerators have implemented new techniques, such as near-memory computing. Near-memory computing integrates processing elements within the memory subsystem, enabling faster data processing near the memory, thus reducing the energy consumption associated with data transfer. More recently, new approaches using “non-standard” techniques, such as in-memory computing or spiking neural networks (SNNs), are the most aggressive solutions to achieve highly energy-efficient AI inferences.

In-memory computing conducts computations directly within the memory, at circuit level, eliminating the need for data transfer and enhancing processing speed. The processing can be either performed in an analog or a digital way and implement different memory technologies, such as SRAM, flash or new NVM (RRAM, MRAM, PCRAM, FeFET, etc.). This approach is particularly beneficial for complex AI tasks involving large datasets. SNNs also represent an innovative approach to AI inference: They typically consist of interconnected nodes that communicate through spikes, enabling the simulation of complex temporal processes and event-based computations, which can be useful for tasks like processing time-sensitive data or simulating brain-like behavior.
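The spiking idea can be sketched in a few lines. This is a generic leaky integrate-and-fire (LIF) neuron with illustrative parameters, not Akida's or any real SNN toolchain's model: it integrates weighted input events and emits an output spike only when its membrane potential crosses a threshold.

```python
# Hedged sketch of an event-based (spiking) neuron, per the SNN
# description above. All parameters are assumptions for illustration.

def lif_neuron(spike_train, weight=0.6, leak=0.9, threshold=1.0):
    """Return the time steps at which a leaky integrate-and-fire
    neuron emits a spike, for a binary input spike train."""
    v = 0.0                              # membrane potential
    fired_at = []
    for t, spike in enumerate(spike_train):
        v = v * leak + weight * spike    # leak, then integrate the event
        if v >= threshold:
            fired_at.append(t)           # fire...
            v = 0.0                      # ...and reset
    return fired_at

# Computation is sparse and event-driven: between spikes there is almost
# nothing to do, which is where the energy saving comes from.
print(lif_neuron([1, 0, 1, 1, 0, 0, 1, 1]))  # → [2, 6]
```

Because outputs are themselves sparse spike trains, downstream neurons also stay mostly idle, which is the temporal, event-based behaviour the paragraph describes.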

Shaping the future of AI inference

AI accelerators leveraging near-/in-memory computing or SNNs offer significant impacts for the AI industry, including enhanced energy efficiency, improved processing speed and advanced pattern-recognition capabilities. These accelerators drive the optimization of hardware design, leading to the creation of specialized architectures tailored for specific AI workloads.

Additionally, they promote advancements in edge computing, facilitating efficient AI processing directly on edge devices and reducing latency. The transformative potential of these technologies highlights their crucial role in revolutionizing diverse industries, from healthcare and manufacturing to automotive and consumer electronics.

The integration of highly energy-efficient AI inference in healthcare and automotive sectors yields transformative impacts. In healthcare, it facilitates faster diagnostics and personalized patient care through rapid data analysis, leading to improved treatment outcomes and tailored medical interventions. Additionally, it enables the development of remote patient-monitoring systems, ensuring continuous health tracking and proactive intervention for individuals with chronic conditions. Moreover, in the realm of drug discovery, energy-efficient AI inference expedites the identification of potential drug candidates and accelerates pharmaceutical research and development processes, fostering innovation in medical treatments and therapies.

In the automotive industry, energy-efficient AI inference plays a crucial role in advancing safety features and autonomous-driving capabilities. It empowers vehicles with ADAS and real-time collision detection, enhancing overall road safety. Furthermore, it contributes to the development of self-driving technologies, enabling vehicles to make informed decisions based on real-time data analysis, thereby improving navigation systems and autonomous-driving functionalities. Additionally, the implementation of predictive maintenance solutions based on energy-efficient AI inference enables early detection of potential vehicle issues, optimizing performance, reducing downtime and extending vehicle lifespan.

Conclusion

The industry’s critical demand for energy-efficient AI inference solutions is driven by the need to promote sustainable operations, optimize resource utilization and extend device battery life. These solutions play a vital role in fostering eco-friendly practices, reducing operational costs and enhancing competitive advantages. By facilitating edge computing applications and minimizing energy consumption, energy-efficient AI inference solutions enable businesses to improve profitability, streamline processes and ensure uninterrupted functionality in mobile and IoT devices. Addressing this demand necessitates the development of energy-efficient algorithms and optimized hardware architectures heavily based on smart near-/in-memory computing techniques. Many new players come into the market with innovative computing solutions and the promise of running AI everywhere, from sensors to data centers, with the ambition of offering a completely new user experience.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 24 users

Diogenese

Top 20
"ARM has released its newest batch of processors, Intel has fallen behind, and SiFive is the new boy. We know BRN's ecosystem encompasses all three, but I haven't seen anything to suggest MB is working with any of these."

@Diogenese

The CEO of ARM, Rene Haas, told an interviewer at CES, in one of the videos I posted, that every EV at CES had ARM chips in it, so there is that.

Apologies, I can't remember which post it was.

Edit: found it

Hi ILL,

Yes - I remember that. However, if I recall correctly, the water-cooled ADAS processor in the CLA concept car was said to be Nvidia. As has been discussed above, there are 1,000 chips in a car, so there's plenty of room for ARM outside the ADAS processor.

I think MB marketing were making a virtue of necessity by highlighting the water-cooled processor with the sci-fi blue light.

It would be of advantage to MB to have a processor which does not need water cooling because that is just wasted energy. MB have said the ADAS processor is still in development, so they had to use what they had to hand for the CLA concept. I reckon they will be happy to see the back of the water-cooled processor by off-loading the heavy lifting to Akida.
 
  • Like
  • Fire
  • Love
Reactions: 25 users

wilzy123

Founding Member
Did you just put some pictures together with your own text? 🤔
Yeah... that's it.............

 
  • Haha
  • Like
Reactions: 9 users