BRN Discussion Ongoing

Townyj

Ermahgerd

“Mercedes has experimented with a new type of processor that performs tasks in ‘neuromorphic spikes’.”


Oh man they are pulling at our heart strings labeling it as a "Concept" again...

Baby Pacifier GIF by 10e Ave Productions
 
  • Haha
  • Like
  • Fire
Reactions: 12 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Oh man they are pulling at our heart strings labeling it as a "Concept" again...

Baby Pacifier GIF by 10e Ave Productions
The article below posted previously indicates it will be used to "showcase the automaker's technology that will be used in future EV's".

Technology that WILL be used, not might be...


MERC (1).png


 
  • Like
  • Love
  • Fire
Reactions: 48 users

Straw

Guest
With Mercedes creating their own operating system, it's great we are in on the ground floor. Bloody fantastic.
Thanks @Slade
 
  • Like
  • Love
  • Fire
Reactions: 35 users

Townyj

Ermahgerd
The article below posted previously indicates it will be used to "showcase the automaker's technology that will be used in future EV's".

Technology that WILL be used, not might be...


View attachment 41150


Oh I didn't say might be, I know we will be at some point but... Saying it's a "concept" will have flogs like MF on our case again. Ughhh
 
  • Like
  • Sad
  • Fire
Reactions: 9 users

Cartagena

Regular
With Mercedes creating their own operating system, it's great we are in on the ground floor. Bloody fantastic.
Thanks @Slade
Good to see Merc progressing with the right tech for the future, and there aren't many other spiking neuromorphic companies out there besides our BrainChip. We hope :)
Hopefully we can stem this shorters' onslaught with the right announcement; we are now approaching mid-Q3.
 
  • Like
  • Fire
  • Sad
Reactions: 12 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Oh I didn't say might be, I know we will be at some point but... Saying it's a "concept" will have flogs like MF on our case again. Ughhh
The point I was trying to make is that the article itself says the technology WILL be used in Mercedes' future EVs, and this is something that the "flogs" will no doubt have to contend with.
 
  • Like
  • Love
  • Fire
Reactions: 41 users

Townyj

Ermahgerd
The point I was trying to make is that the article itself says the technology WILL be used in Mercedes' future EVs, and this is something that the "flogs" will no doubt have to contend with.

Oh Bravo... love it hahaha. Exactly, they will have to eat their stupid comments at some point.
 
  • Like
  • Fire
Reactions: 11 users

CHIPS

Regular
I don't wanna be that guy that sold Apple. 🤣
This is exactly the reason why I would never sell my stocks (and I have a lot of them) :ROFLMAO: . Just imagine selling them and then ... BOOM ... magic happens ... without us 😱.
 
  • Like
  • Haha
  • Fire
Reactions: 16 users

CHIPS

Regular
“Mercedes has experimented with a new type of processor that performs tasks in ‘neuromorphic spikes’.”


And just 2 hours ago Mercedes posted this on Twitter. Coincidence?? :unsure:

Translation:

Mercedes-Benz Vision EQXX - the most efficient Mercedes of all time
runs more than 1,000 km on a single battery charge.

More information here: https://www.mercedes-benz.de/passengercars/the-brand/eqxx/battery.module.html

1690793953269.png
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 55 users

Rach2512

Regular
Sorry if already posted 🤷‍♀️

Snippets


Look for the new 2024 Mercedes-Benz E-Class Sedan in dealerships later this year.

Key Highlights

  • Latest generation of Driver Assistance Systems available

Design

  • New-generation MBUX Multimedia System with Augmented Reality Navigation, including Natural Language Understanding and Keyword Activation (“Hey Mercedes”) and “Just Talk” function
  • Available MBUX Superscreen



 
  • Like
  • Love
  • Fire
Reactions: 30 users

hamilton66

Regular
Judging by what the company has told us in the latest 4C, the steadily increasing volume of commentary regarding AI and its movement towards the Edge, sensor fusion and the IoT, and the known and supposed partnerships and alliances quietly revealed over the past 6-12 months, a lot is happening that is below the event horizon of an ASX price-sensitive announcement and not currently registering on the share price needle.

We also know that the revelation of same is as deeply desired by Antonio, Sean and the rest of our management as it is by us.
Not only is a significant portion of their fortunes depending on it, but so are their professional reputations.

Lots and lots of pots on the boil, but in virtually all cases the exact timing of the release of this information is in the hands and at the discretion of third parties.
For in most cases we are an enabler rather than a product in and of ourselves.
We help to make the sensor smart, reducing the latency, load and bandwidth required to get the job done.
We sit between the cloud-based server and the onboard intelligence, improving its functionality and providing added privacy and energy efficiency to boot.
The product cycles take time to implement, and time is also required and desired by initiators to extract value from tech they have previously invested in.
Our time is coming, but like adults driving the car we know the journey takes as long as it takes, and no amount of "are we there, yetting" from inexperienced and impatient minds will speed the process.
We all want our cookies now, but some know that a touch more delayed gratification may result in that magical experience of both having one's cake and eating it too.
Coming up on 8 years in for me, gradually building my position, watching others come and go, so believe me, I too am subject to the angst, anxiety and impatience many here are feeling, but I also think we are on the right track and I don't wanna be that guy that sold Apple. 🤣
No advice of course and DYOR, but even after all this time it feels closer to me every day.
GLTAH
Hoppy, matters not which one of us was in first, but I'm of the same vintage. It's been a long, frustrating journey. Never sold one, and have just quietly accumulated. Buy price is under 8c, so healthy profit, but also need to take into account lost opportunity. Is this our year? I certainly hope so. GLTA
 
  • Like
  • Love
  • Fire
Reactions: 33 users

HopalongPetrovski

I'm Spartacus!
Hoppy, matters not which one of us was in first, but I'm of the same vintage. It's been a long, frustrating journey. Never sold one, and have just quietly accumulated. Buy price is under 8c, so healthy profit, but also need to take into account lost opportunity. Is this our year? I certainly hope so. GLTA
Hi Hamilton. 😊
The stars certainly seem to be aligning at last and I too hope we see that revenue snowball gathering momentum and mass this year and into next. We’ve paid our dues and appear to be positioned nicely in the right place at the right time.
Well done you on your patience and perseverance. Bring it, BrainChip!
 
  • Like
  • Love
Reactions: 26 users

buena suerte :-)

BOB Bank of Brainchip
Judging by what the company has told us in the latest 4C, the steadily increasing volume of commentary regarding AI and its movement towards the Edge, sensor fusion and the IoT, and the known and supposed partnerships and alliances quietly revealed over the past 6-12 months, a lot is happening that is below the event horizon of an ASX price-sensitive announcement and not currently registering on the share price needle.

We also know that the revelation of same is as deeply desired by Antonio, Sean and the rest of our management as it is by us.
Not only is a significant portion of their fortunes depending on it, but so are their professional reputations.

Lots and lots of pots on the boil, but in virtually all cases the exact timing of the release of this information is in the hands and at the discretion of third parties.
For in most cases we are an enabler rather than a product in and of ourselves.
We help to make the sensor smart, reducing the latency, load and bandwidth required to get the job done.
We sit between the cloud-based server and the onboard intelligence, improving its functionality and providing added privacy and energy efficiency to boot.
The product cycles take time to implement, and time is also required and desired by initiators to extract value from tech they have previously invested in.
Our time is coming, but like adults driving the car we know the journey takes as long as it takes, and no amount of "are we there, yetting" from inexperienced and impatient minds will speed the process.
We all want our cookies now, but some know that a touch more delayed gratification may result in that magical experience of both having one's cake and eating it too.
Coming up on 8 years in for me, gradually building my position, watching others come and go, so believe me, I too am subject to the angst, anxiety and impatience many here are feeling, but I also think we are on the right track and I don't wanna be that guy that sold Apple. 🤣
No advice of course and DYOR, but even after all this time it feels closer to me every day.
GLTAH
Very nice Hoppy .....👏

As you say ... "a lot is happening"! ... I personally think we are so very close to that all-important 'Power Announcement'! 🙏🙏🙏

So very much looking forward to seeing the excitement on these threads when 'The BIG one drops' ;) .... Soooooooooon!!

Can I swap out the "cookies" for 'Chips', please, with heaps of 'Secret sauce'.... :love:
 
  • Like
  • Love
  • Fire
Reactions: 37 users

Frangipani

Regular
Yet another reminder of the lengthy process from conceptualising a car to it rolling off the production line and into showrooms, and what this signifies with regard to cutting-edge technology…




The Latest Autonomous Technologies Are Already Outdated

July 3, 2023, by James Jeffs
If Mercedes can get Level 3 certification with yesterday’s technologies today, what will its car designers be able to do with today’s technologies tomorrow?

Last year, Mercedes released its Level 3 Drive Pilot system for certified use on German roads. Those wealthy enough to purchase a new Mercedes equipped with the Drive Pilot system, which is an optional extra costing a few thousand euros, could relax as their car drove itself at speeds of up to 60 kph. This year, Mercedes announced that its Level 3 Drive Pilot had been certified for use in Nevada, with delivery of L3-capable vehicles to that state expected in the second half.


Certified Level 3 is a gargantuan milestone on the way to fully autonomous vehicles. In fact, the step from L2 to L3 might be the trickiest of them all. So it might be surprising to learn that most of the cutting-edge technologies that made this step possible for Mercedes are already out of date, according to research findings from IDTechEx.


SAE levels and L3’s importance

“Level 3” here refers to the Society of Automotive Engineers’ six levels of autonomy, where Level 0 is a completely manually driven vehicle and Level 5 is a completely autonomous vehicle. As you might expect, with Level 3 being in the middle, L3 vehicles are somewhat autonomous; specifically, under some predetermined conditions, the vehicle can drive itself, with the driver free to take their concentration away from the road. This is an enormous step up from Level 2, where the driver is always in control of the vehicle. L2 vehicles typically have a combination of lane-keep assistance systems and adaptive cruise control to ease the monotony of driving. Manufacturers will always want to make those features as safe as possible, of course, but at the end of the day, if the vehicle crashes when L2 features are being used, the driver is responsible.

The six levels of autonomy defined by SAE (Source: IDTechEx)

Part of the challenge with Level 3 has been clarifying who is responsible if the vehicle gets into trouble while the L3 system is operating. Mercedes resolved the issue to some extent by saying the company would assume liability if the vehicle crashed while reporting that it was operating at Level 3. But it also requires lots of confidence in the vehicle’s autonomous system; after all, Mercedes will want to ensure that the profit from selling the system dwarfs any bills it needs to pick up if the system is found liable for a crash.
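
To make the L2-versus-L3 responsibility split concrete, here is a minimal Python sketch. It is my own illustration, not from the article: the names SAELevel and responsible_party are invented, and the rule is the simplified one described above (driver liable at L2 and below; manufacturer can assume liability from L3 up while the system reports it is engaged, as Mercedes has accepted for Drive Pilot).

from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, as summarised in the article."""
    L0 = 0  # no automation: the driver does everything
    L1 = 1  # driver assistance: steering or speed support, not both
    L2 = 2  # partial automation: steering and speed, driver always in control
    L3 = 3  # conditional automation: the car drives itself under set conditions
    L4 = 4  # high automation: no driver needed within its operating domain
    L5 = 5  # full automation: drives anywhere, any time

def responsible_party(level: SAELevel, system_engaged: bool) -> str:
    """Simplified liability rule: at L2 and below the driver is always
    responsible; from L3 up, liability can sit with the manufacturer
    while the system reports that it is engaged."""
    if not system_engaged or level <= SAELevel.L2:
        return "driver"
    return "manufacturer"

print(responsible_party(SAELevel.L2, True))  # driver
print(responsible_party(SAELevel.L3, True))  # manufacturer

The real J3016 standard adds operational design domains and takeover requests that this toy rule ignores, which is exactly why the L2-to-L3 step is so hard to certify.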

The autonomous system used in the Mercedes S-Class and EQS is indeed impressive. It features a LiDAR, five radars and six cameras, as well as an Nvidia-powered autonomous brain. All of these are based on semiconductor technologies, all of which are evolving rapidly. Each of the components that Mercedes chose when specifying the vehicle would have been cutting-edge and best-in-class, but semiconductor technologies advance extremely quickly. This means that the tech rolling off Mercedes’s production lines today is a generation behind what the semiconductor foundries can produce.


Here’s why.

How Drive Pilot technologies are being superseded

This situation is not unique to Mercedes. The progression from conceptualizing a car—including deciding on the desired autonomous features and the appropriate sensors to achieve them—to having one roll off the production line is a years-long process. During that long time to market, new and better sensors will be announced, so that the car manufacturer will be stuck shipping “last gen” tech when the car is finally available.

This effect is compounded when considering the speed of development of the semiconductor industry.
Say Company X is designing a car and chooses a radar today. That radar was designed with the semiconductor tech available at the time of its design, maybe one or two years ago. At the pace of the semiconductor industry, that means that the radar could already be out of date—and it won’t even be on the car for another couple of years.

The LiDAR on the Mercedes S-Class and EQS is a great example of this phenomenon. Currently, the vehicles use a second-generation Scala from Valeo. Paris-based Valeo has cemented itself in the automotive LiDAR space, with its design slot in the Audi A8 in 2017 having made it one of the first to market. The second-generation Scala will no doubt be an excellent product, but a little over one year after announcing that the Scala would power its Level 3 system, Mercedes has already announced that it will use Luminar’s LiDAR in the future. In the announcement, Mercedes says that the Luminar Iris will allow its autonomous Level 3 system to operate at speeds of up to 80 mph—double the 40-mph limit today.

The performance increase that Mercedes will get from switching LiDARs will be partly due to the shift from the 905-nm lasers in the Scala to the 1,550-nm lasers in Luminar’s Iris. The longer wavelength means that Luminar can use much more powerful lasers, yielding longer-range LiDARs, while maintaining eye safety. But this change in wavelength also means that different semiconductor technologies will be needed.
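
For a rough sense of scale (my back-of-the-envelope numbers using standard physical constants, not figures from the article), photon energy falls as wavelength grows:

\[ E_{\text{photon}} = \frac{hc}{\lambda} \approx \frac{1240\ \text{eV nm}}{\lambda} \quad\Rightarrow\quad E_{905\,\text{nm}} \approx 1.37\ \text{eV}, \qquad E_{1550\,\text{nm}} \approx 0.80\ \text{eV} \]

The eye-safety headroom, though, comes mainly from where the light is absorbed rather than from photon energy alone: 1,550-nm light is largely soaked up by the watery cornea and lens before it can be focused onto the retina, which is why exposure limits at that wavelength are orders of magnitude more permissive and far stronger pulses can be fired.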

Semiconductor foundry smallest-node capabilities in recent years and on roadmaps for the future (Source: IDTechEx)
LiDAR announcements over recent years by wavelength (Source: IDTechEx)

For example, it is common to use silicon technologies for the detector in 905-nm–laser LiDAR because silicon is readily available, well understood due to its maturity and comparatively cheap. Above ~1,000-nm wavelengths, however, silicon stops absorbing light, so LiDAR manufacturers will have to use something else, such as indium gallium arsenide. InGaAs provides the detection requirements for 1,550-nm LiDAR, but its production processes and supply infrastructure are not as mature as silicon’s, the minerals are rarer and the costs are therefore higher. Pioneering semiconductor startups could offer some hope here; TriEye, for example, has shown that it can build silicon-based image sensors that detect up to ~1,600 nm. Perhaps that technology can be adapted for use in a 1,550-nm LiDAR, a potentially game-changing prospect.
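
The "~1,000 nm" silicon cutoff follows from a standard bandgap calculation (textbook values, not figures from the article): a photon must carry at least the bandgap energy to be absorbed, so

\[ \lambda_{\text{cutoff}} = \frac{hc}{E_g} \approx \frac{1240\ \text{eV nm}}{E_g} \quad\Rightarrow\quad \lambda_{\text{Si}} \approx \frac{1240}{1.12} \approx 1110\ \text{nm}, \qquad \lambda_{\text{InGaAs}} \approx \frac{1240}{0.75} \approx 1650\ \text{nm} \]

So silicon comfortably detects 905 nm but is blind at 1,550 nm, while lattice-matched InGaAs (bandgap roughly 0.75 eV) covers 1,550-nm LiDAR with room to spare.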

Mercedes’s switch to 1,550 nm is in keeping with LiDAR industry trends. The vast majority of LiDAR announcements these days are for products using 1,550-nm technologies. In select cases, companies are still pursuing 905 nm, but IDTechEx thinks that the industry is coming down from the fence on the side of 1,550 nm.

The autonomous brain is another area where it is nearly impossible to keep up with the pace of the semiconductor industry. Mercedes announced in 2020 that it would use the Nvidia Drive Orin SoC to power its autonomous system. That SoC was based on Nvidia’s Ampere architecture, which today is a generation out of date. The Ampere architecture uses an 8-nm process from Samsung and gave the Orin computational power of 255 trillion operations per second (TOPS). But before Mercedes won Level 3 certification in Nevada, Nvidia announced the Thor SoC, promising 2,000 TOPS. Thor will likely share the 4-nm TSMC process used in Nvidia’s Ada Lovelace architecture, found in its 40-series GPUs. Thor is expected to go into production vehicles by 2025, but by that point, TSMC will be capable of 2-nm processes, according to its roadmaps.
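
For scale, simply dividing the headline figures gives

\[ \frac{2000\ \text{TOPS (Thor)}}{255\ \text{TOPS (Orin)}} \approx 7.8\times \]

the raw claimed compute within a single SoC generation, before any architecture-level efficiency differences are counted.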

As you can see, the semiconductor industry moves so fast that a new vehicle can appear out of date by the time it rolls into showrooms. The silver lining is that this leaves ample headroom for technology improvements. The Mercedes S-Class and EQS, with their Level 3 systems, are incredibly advanced and capable machines, but by looking at emerging semiconductor technologies, from sensors to powerful SoCs, it’s clear to see that they are already a generation out of date. So if Mercedes can get Level 3 certification with yesterday’s technologies today, what will its car designers be able to do with today’s technologies tomorrow?
 
  • Like
  • Love
  • Fire
Reactions: 34 users

schuey

Regular
  • Like
  • Haha
  • Fire
Reactions: 44 users

CHIPS

Regular
Someone tell MB that if they come out of the closet, everyone on this forum will buy an MB

I certainly will. :giggle: By the way, I always think of Mr. Schäfer of Mercedes, who was asked by somebody on LinkedIn (some months ago) whether Akida would be part of the new series; Schäfer's reply was: "We will see 😉".
I found this smiley very promising even back then.
 
  • Like
  • Fire
  • Haha
Reactions: 30 users

Pmel

Regular
I certainly will. :giggle: By the way, I always think of Mr. Schäfer of Mercedes, who was asked by somebody on LinkedIn (some months ago) whether Akida would be part of the new series; Schäfer's reply was: "We will see 😉".
I found this smiley very promising even back then.
Here you go. It was me.
 

Attachments

  • Screenshot_20230731_214257_Gallery.jpg
    Screenshot_20230731_214257_Gallery.jpg
    492.9 KB · Views: 363
  • Like
  • Love
  • Fire
Reactions: 50 users

GStocks123

Regular
  • Haha
  • Like
Reactions: 16 users

Jchandel

Regular
  • Like
  • Wow
  • Fire
Reactions: 15 users

Tothemoon24

Top 20

2023 Edge AI Technology Report

The guide to understanding the state of the art in hardware & software in Edge AI.

Edge AI, empowered by the recent advancements in Artificial Intelligence, is driving significant shifts in today's technology landscape. By enabling computation near the data source, Edge AI enhances responsiveness, boosts security and privacy, promotes scalability, enables distributed computing, and improves cost efficiency.
Wevolver has partnered with industry experts, researchers, and tech providers to create a detailed report on the current state of Edge AI. This document covers its technical aspects, applications, challenges, and future trends. It merges practical and technical insights from industry professionals, helping readers understand and navigate the evolving Edge AI landscape.



Report Introduction​

The advent of Artificial Intelligence (AI) over recent years has truly revolutionized our industries and personal lives, offering unprecedented opportunities and capabilities. However, while cloud-based processing and cloud AI took off in the past decade, we have come to experience issues such as latency, bandwidth constraints, and security and privacy concerns, to name a few. That is where the emergence of Edge AI became extremely valuable and transformed the AI landscape.

Edge AI represents a paradigm shift in AI deployment, bringing computational power closer to the data source. It allows for on-device data processing and enables real-time, context-aware decision-making. Instead of relying on cloud-based processing, Edge AI utilizes edge devices such as sensors, cameras, smartphones, and other compact devices to perform AI computations on the device itself. Such an approach offers multitudes of advantages, including reduced latency, improved bandwidth efficiency, enhanced data privacy, and increased reliability in scenarios with limited or intermittent connectivity.

“Even with ubiquitous 5G, connectivity to the cloud isn’t guaranteed, and bandwidth isn’t assured in every case. The move to AIoT increasingly needs that intelligence and computational power at the edge.”
- Nandan Nayampally, CMO, Brainchip
While Cloud AI predominantly performs data processing and analysis in remote servers, Edge AI focuses on enabling AI capabilities directly on the devices. The key distinction here lies in the processing location and the nature of the data being processed. Cloud AI is suitable for processing-intensive applications that can tolerate latency, while Edge AI excels in time-sensitive scenarios where real-time processing is essential. By deploying AI models directly on edge devices, Edge AI minimizes the reliance on cloud connectivity, enabling localized decision-making and response.

The Edge encompasses the entire spectrum from data centers to IoT endpoints. This includes the data center edge, network edge, embedded edge, and on-prem edge, each with its own use cases. The compute requirements essentially determine where a particular application falls on the spectrum, ranging from data-center edge solutions to small sensors embedded in devices like automobile tires. Vibration-related applications would be positioned towards one end of the spectrum, often implemented on microcontrollers, while more complex video analysis tasks might be closer to the other end, sometimes on more powerful microprocessors.

“Applications are gradually moving towards the edge as these edge platforms enhance their compute power.”
- Ian Bratt, Fellow and Senior Director of Technology, Arm
When it comes to Edge AI, the focus is primarily on sensing systems. This includes camera-based systems, audio sensors, and applications like traffic monitoring in smart cities. Edge AI essentially functions as an extensive sensory system, continuously monitoring and interpreting events in the world. In an integrated-technology approach, the collected information can then be sent to the cloud for further processing.

Edge AI shines in applications where rapid decision-making and immediate response to time-sensitive data are required. For instance, in autonomous driving, Edge AI empowers vehicles to process sensor data onboard and make split-second decisions to ensure safe navigation. Similarly, in healthcare, Edge AI enables real-time patient monitoring, detecting anomalies, and facilitating immediate interventions. The ability to process and analyze data locally empowers healthcare professionals to deliver timely and life-saving interventions.

Edge AI application areas can be distinguished based on specific requirements such as power sensitivity, size limitations, weight constraints, and heat dissipation. Power sensitivity is a crucial consideration, as edge devices are often low-power devices used in smartphones, wearables, or Internet of Things (IoT) systems. AI models deployed on these devices must be optimized for efficient power consumption to preserve battery life and prolong operational duration.

Size limitations and weight constraints also play quite a significant role in distinguishing Edge AI application areas. Edge devices are typically compact and portable, making it essential for AI models to be lightweight and space-efficient. This consideration is particularly relevant upon integrating edge devices into drones, robotics, or wearable devices, where size and weight directly impact performance and usability.

Nevertheless, edge computing presents significant advantages that weren’t achievable beforehand. Owning the data, for instance, provides a high level of security, as there is no need for the data to be sent to the cloud, thus mitigating the increasing cybersecurity risks. Edge computing also reduces latency and power usage due to less communication back and forth with the cloud, which is particularly important for constrained devices running on low power. And the advantages don’t stop there, as we are seeing more and more interesting developments in real-time performance and decision-making, improved privacy control, and on-device learning, enabling intelligent devices to operate autonomously and adaptively without relying on constant cloud interaction.
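
A toy worked example of the latency point, sketched in Python. Every figure below is an assumption I chose for illustration; none of the numbers come from the report.

# Toy latency budget: cloud round trip vs. on-device inference.
# Every number is an assumption chosen for illustration, not a measurement.

CLOUD_RTT_MS = 50.0    # assumed network round trip to the nearest cloud region
CLOUD_INFER_MS = 5.0   # assumed server-side inference time on a large GPU
EDGE_INFER_MS = 20.0   # assumed on-device inference time on a small NPU

def cloud_latency_ms(payload_kb: float, uplink_mbps: float = 10.0) -> float:
    """Upload a sensor frame, infer in the cloud, and return the result."""
    upload_ms = payload_kb * 8.0 / (uplink_mbps * 1000.0) * 1000.0
    return upload_ms + CLOUD_RTT_MS + CLOUD_INFER_MS

def edge_latency_ms() -> float:
    """Infer on the device itself: no network leg, and the data never leaves."""
    return EDGE_INFER_MS

frame_kb = 100.0  # assumed compressed camera frame
print(f"cloud: {cloud_latency_ms(frame_kb):.1f} ms")  # 135.0 ms with these numbers
print(f"edge:  {edge_latency_ms():.1f} ms")           # 20.0 ms

Even with generous network assumptions, the cloud path pays for the upload and the round trip on every single frame; removing that leg entirely is the latency, bandwidth and privacy gap that Edge AI closes.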

“The recent surge in AI has been fueled by a harmonious interplay between cutting-edge algorithms and advanced hardware. As we move forward, the symbiosis of these two elements will become even more crucial, particularly for Edge AI.”
- Dr. Bram Verhoef, Head of Machine Learning at Axelera AI
Edge AI holds immense significance in the current and future technology landscape. With decentralized AI processing, improved responsiveness, enhanced privacy and security, cost-efficiency, scalability, and distributed computing, Edge AI is revolutionizing our world as we speak. And with the rapid developments happening constantly, it may be difficult to follow all the new advancements in the field.

That is why Wevolver has collaborated with several industry experts, researchers, professors, and leading companies to create a comprehensive report on the current state of Edge AI, exploring its history, cutting-edge applications, and future developments. This report will provide you with practical and technical knowledge to help you understand and navigate the evolving landscape of Edge AI.

This report would not have been possible without the esteemed contributions and sponsorship of Alif Semiconductor, Arduino, Arm, Axelera AI, BrainChip, Edge Impulse, GreenWaves Technologies, Sparkfun, ST, and Synaptics. Their commitment to objectively sharing knowledge and insights to help inspire innovation and technological evolution aligns perfectly with what Wevolver does and the impact it aims to achieve.

As the world becomes increasingly connected and data-driven, Edge AI is emerging as a vital technology at the core of this transformation, and we hope this comprehensive report provides all the knowledge and inspiration you need to participate in this technological journey.
 
  • Like
  • Fire
  • Love
Reactions: 19 users