BRN Discussion Ongoing

Sirod69

bavarian girl ;-)
Imaging and Machine Vision Europe

In our latest article, Luca Verre, Co-founder and CEO of PROPHESEE, highlights how event-based vision is set to revolutionise mobile photography.

"By combining an event-based sensor with a frame-based system, effective deblurring can be achieved, making photography and video capture more precise and lifelike."

 
  • Like
  • Love
  • Fire
Reactions: 21 users

Tothemoon24

Top 20
Great read.
I couldn't copy the original article in its entirety; the link provides the full read 🐰





Montaseri said one factor driving intelligence at the edge is that connectivity is not a hundred percent available, which means delays in getting information back from the cloud. Another is that microprocessors and microcontrollers are becoming more powerful. This enables the necessary data management at the edge, he said, and allows devices to quickly analyze and make sense of data.

ITTIA is focused on providing the software necessary for edge data streaming, analysis and management for embedded systems and IoT devices – robust data management is foundational for doing AI and machine learning in embedded systems at the edge, Montaseri said.

[Diagram] ITTIA provides software for edge data streaming, analysis and management for embedded systems and IoT devices, for uses such as transportation when it's not feasible to depend on a central cloud. (ITTIA)
Reliability is critical for smart edge devices, he added, whether it’s for industrial robotics, medical or transportation applications. “You want to make sure that they don't go down.”

What’s also becoming apparent is that not all edges are created equal – some will be smarter sooner than others depending on the use case and industry, such as robotics and medical devices. Montaseri said today’s embedded systems that gain intelligence through IoT deployments will be doing the jobs needed for the next generation of computing. “The nature of everything is changing,” he said. “We are seeing more security, more safety, and more functionality, like the ingestion rate and the query rate. Our focus is safety, security, and reliability.”


Not all edges are created equal

What makes the smart edge murky is the definition of edge, which means different things to different people, Nandan Nayampally, CMO at BrainChip, said in an interview with Fierce Electronics. He was previously at ARM for more than 15 years when the edge was viewed as primarily sensor driven. “That's how IoT kind of cropped up,” he said. “IoT is a sensor plus connectivity plus processing.” While a Dell or an Intel might think of the smart edge as another giant box that’s now smaller, the right starting point to him is IoT with AI.

AI on the edge is a step forward from a sensor just doing one function, with devices now having more storage, memory, and processing power. Nayampally said this battle between cloud and the edge has been going on for a while, going back to the days of a terminal connected to a mainframe before the move to a client/server model. “What you realize is however much we think that latency to cloud or connectivity to cloud is guaranteed, and the bandwidth assured, it's never going to be the case,” he said. “You need that intelligence and computational power at the edge.”

[Diagram] BrainChip's Akida processor can learn at the edge to address security and privacy while limiting network congestion. (BrainChip)
Having the smarts at the edge is beneficial for preventative maintenance in factories and patient monitoring, Nayampally said, both in terms of latency and privacy. “Anytime you send raw data or sensitive data out, you are obviously going to have challenges.” Privacy and security have become especially important to the general public, he added. BrainChip was started with the idea that edge computing was necessary and that any approach to AI at the edge had to be different from the cloud. “The cloud kind of assumes almost infinite resources and infinite compute.”

While compute resources at the edge are rather finite, more AI is possible due to advances in low-power hardware, including memory and systems on chip (SoC), which means not all training and machine learning need be shipped back to the cloud. Nayampally said it's a matter of scaling, with neuromorphic computing offering inspiration for how to deliver low-power intelligence at the edge. “Let's try to emulate the efficiency of it and start from there.”

Machine learning will increasingly happen at the edge, both because of inherent capability and out of necessity. Nayampally said some applications that require a real-time response can't afford the delay between the edge and the cloud, or the power. “Any time you use radio and connectivity, especially to cloud, that burns a lot of power,” he said. “Radios are the most expensive parts of devices.” Smaller, more cost-effective devices may not be able to afford connectivity and need to do more compute locally.

Nayampally said the neuromorphic nature of BrainChip’s Akida platform allows it to learn at the edge, which also addresses security and privacy and reduces network congestion – today’s autonomous vehicles can generate a terabyte of data per day, he noted, so it makes sense to be selective about how much data needs to travel the network.

For the smart edge, simple is better and BrainChip’s processor does that from a computational standpoint, as well as from a development and deployment standpoint, Nayampally said. “It's almost like a self-contained processor.” It is neuromorphic and event driven, so it only processes data when needed, and only communicates when needed, he said.
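To make the "only processes data when needed" idea concrete, here is a toy sketch of event-driven versus frame-based processing (a generic illustration only, not BrainChip's implementation; the threshold and sample values are made up):

```python
# Toy comparison of frame-based vs. event-driven processing: in the event-driven
# case, work is only done when the input actually changes.

def frame_based(samples, process):
    return [process(s) for s in samples]                 # every sample is processed

def event_driven(samples, process, threshold=0.0):
    results, last = [], None
    for s in samples:
        if last is None or abs(s - last) > threshold:    # an "event": the input changed
            results.append(process(s))
            last = s
    return results                                       # idle periods cost nothing

readings = [0.0, 0.0, 0.0, 0.4, 0.4, 0.4, 0.9]           # mostly static sensor signal
print(len(frame_based(readings, lambda s: s * 2)))       # 7 invocations
print(len(event_driven(readings, lambda s: s * 2)))      # 3 invocations
```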

Being event driven is an excellent example of how basic machine learning may express itself in a single device for the user or the environment, which is what Synaptics is calling the “edge of the edge,” said Elad Baram, director of product marketing for low-power edge AI. The company has a portfolio of low-power AI offerings operating at the milliwatt scale, enabling machine learning with minimal processing and minimal memory – an initiative in line with the philosophy of the tinyML Foundation, he said. While an ADAS uses gigabytes of memory, Synaptics is using megabytes of memory.

Baram’s expertise is in computer vision, and Synaptics sees a lot of potential around any kind of sensing and doing the compute right next to where the data is coming from. Moving data requires power, increases latency, and creates privacy issues. Organizations like tinyML are an indicator of how smart the edge could get. “We are at an inflection point within this year or next year,” he said. “This inflection point is where it's booming.”

[Diagram] Synaptics has a context-aware Smart Home SoC with an AI accelerator for the 'edge of the edge'. (Synaptics)
Context-aware edge remains application-specific

Baram said just as the GPU boom occurred in the cloud five years ago, the same evolution is happening with TinyML. Workloads at the edge that previously required an Nvidia processor, such as detection and recognition, can now be done on a microcontroller. “There is a natural progression.”

Sound event detection is already relatively mature, Baram said, starting with Alexa and Siri and progressing to detecting glass breaking or a baby crying. “We are seeing a lot of use cases in smart home and home security around the audio space.” In the vision space, he said, Synaptics is supporting “context awareness” for laptops so they can detect whether a user is present or not, and to ensure privacy, any imaging stays on the on-camera module – it’s never stored on the computer’s main processor.

Power, of course, is important for context-awareness applications, Baram said. “You don't want the power of the battery to drain too fast.” But having this awareness actually extends battery life, he said, because now the system understands if the user is engaged with the device and its content and can respond accordingly. “You approach the laptop, it's turned on and you log in and it's like magic. The machine just knows what you want to do, what you are doing, and it can adapt itself.”

Similarly, an HVAC system could adapt based on the number of occupants in a room, or a dishwasher could let you know how full it is. Baram said a fridge could be smart enough so you can know whether or not you need to pick up milk on the way home. Aside from smart laptops and home appliances, there are many safety applications in construction, manufacturing and agriculture that could benefit from context awareness. “The amount of use cases out there in the world is pretty amazing.”

Baram said the hardware is coming together to enable the smart edge, including machine learning, while algorithms and networking are also improving significantly. “The neural networks are way more efficient than they were a decade ago.” As compute capabilities advance, devices will be able to have more general purposes, but for now processor and algorithm constraints mean smart edge devices will have targeted applications.

In the long run, making the edge smarter is ultimately contingent on successfully pulling these elements together, which requires an AI framework, Baram said, such as TensorFlow, an open-source ML platform. “Those frameworks make it much easier to deploy a neural network into edge devices.”
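As a rough illustration of the framework support Baram describes, here is a minimal sketch of exporting a tiny Keras model to TensorFlow Lite with default optimizations for an edge device (the model architecture and the presence_detector.tflite filename are hypothetical, purely for the example):

```python
import tensorflow as tf

# A deliberately tiny Keras model standing in for a real edge workload
# (e.g. a presence detector on a low-resolution grayscale camera).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert to TensorFlow Lite, letting the converter apply its default
# size/latency optimizations (e.g. post-training quantization of weights).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting flatbuffer can be shipped to a phone or microcontroller and
# executed with the TFLite (or TFLite Micro) interpreter.
with open("presence_detector.tflite", "wb") as f:
    f.write(tflite_model)
```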
 
Last edited:
  • Like
  • Fire
Reactions: 26 users

Tothemoon24

Top 20
The Next SiFive Revolution

As SiFive announces its long-term plans to meet the rapidly changing needs of the automotive markets, this blog explores the many advantages the SiFive Automotive™ portfolio brings across a wide range of current and future applications, including electrification, cockpit, ADAS, safety, and others. Our power-efficient, flexible and high-performance cores are ideally suited for the most critical applications, and are of course supported by the global RISC-V ecosystem.

Why SiFive?

Founded by the same engineers who invented the RISC-V ISA, SiFive is uniquely positioned to extend and broaden the reach of RISC-V based processors in new and growing markets. The rate of SiFive innovation and product development in the last few years has been astounding. SiFive has grown rapidly, and today not only offers the most comprehensive portfolio of RISC-V IP but is also rapidly approaching high-end performance that matches or exceeds that of any CPU IP vendor. SiFive's growing portfolio of IP, ranging from small 32-bit real-time CPUs all the way up to high-performance 64-bit application processors, singles out SiFive as the only RISC-V IP supplier that can cater to the entire spectrum of automotive compute requirements as a one-stop shop. From our CEO, who grew Qualcomm's automotive division, to our many automotive SoC experts, we are a team that uniquely understands the needs of today's automotive manufacturer.

A Portfolio Approach

SiFive’s roadmap delivers a portfolio of products that meets the requirements of the vast majority of automotive CPU applications, including MCUs, MPUs and, soon, SoCs. We often read about macro trends in automotive electronic architecture design, including centralization of computing, increased compute at the sensing edge, increased SW complexity due to mixed criticality support, and a shift from domain to zonal controllers, to list but a few of today’s trends. The implications are a need for newer, more capable ECUs, and a higher degree of mixed criticality functional integration into fewer devices, at the price of increased software complexity. Despite the evolution in electronic architectures, the variety of automotive semiconductor products continues to remain broad, from small MCUs all the way to complex SoCs; however, there are significant commonalities across the many use cases including functional safety and security. Virtually every electronic device in a vehicle must comply with ISO26262, ISO21434, and additional local standards as applicable, such as UNECE WP.29 r155. SiFive Automotive products provide tailored levels of integrity, with area optimized products for both ASIL B and ASIL D, while in-field configurable integrity levels can be enabled through the available split-lock variants.

Introducing the SiFive Automotive Family of Products

SiFive has launched its new product portfolio and the first three SiFive Automotive product series, each with area- and performance-optimized variants for their intended use. SiFive is the only RISC-V IP supplier to offer multiple processor series that optimally meet automotive designers' exact compute, integrity, and security requirements across a broad range of computing applications, from MCUs to complex SoCs. All of the product variants offer best-in-class levels of functional safety and security capabilities, with a high degree of configurability to different integrity levels. The first commercially available offering is the E6-A, a 32-bit real-time core with broad availability later in 2022. By the second half of 2023, two more product series will be added to the portfolio, the X280-A and S7-A. The X280-A is a vector-capable processor ideally suited for edge sensing in ADAS, sensor fusion applications, and any ML/AI-accelerated functionality in the vehicle. The S7-A is an optimized 64-bit CPU tailored for safety islands as often used in ADAS, gateways, and domain controllers. A standout feature of the S7-A is native 64-bit support, meaning that the safety island can now access the same memory space as the application CPUs.

Later in 2023, a new high-end processor, configurable up to 16 cores, will be added to the portfolio. Customers should expect best-in-class performance and ASIL support, and a tailored automotive alternative for high-performance CPU IP. And much more is on the way in the coming months as the roadmap expands to meet market needs, and our SW, tools and OS partners add new levels of support.

All of the above would not have been possible without SiFive strategically adding top talent from the industry, both in CPU design and the automotive market, including our CEO, who managed the growth of Qualcomm's automotive business, and a renowned safety expert and co-founder of Yogitech, among others. The automotive team benefits from investments in processor design that are foundational to the underlying technology, while also leveraging expertise in automotive R&D, functional safety, and security. Our team is ready to answer your automotive questions.

Ecosystem and Next Steps

The automotive semiconductor market is dependent on advances from a broad range of SW, tool, and OS vendors, and their support for SiFive and the RISC-V ecosystem is growing at a phenomenal pace. The ecosystem supporting RISC-V today is fast-growing and reaching maturity, with many announcements in the pipeline.

SiFive is collaborating with a wide range of automotive technology companies to create a robust and vibrant RISC-V community.

SiFive's first automotive products are arriving in the market at a time of rapid growth and change, and the company brings a modern architecture, supported by the broad global RISC-V ecosystem, with unique solutions for some of automotive's most critical technologies.
 
  • Like
  • Fire
Reactions: 17 users

IloveLamp

Top 20
If you want to know more about RISC-V / SiFive...




This bit was interesting too

[Screenshots attached]
 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 22 users

cosors

👀
Hi all

To address some recent complaints and concerns regarding off-topic moderation:

I understand that friendly banter can be enjoyable, but it's important to keep in mind that our forum's rules exist to ensure that discussions remain focused and informative for everyone. I enforce these rules evenly for all members, even for those who are regulars or have good intentions, to ensure a fair and consistent experience for everyone. I do have an off-topic section called The Lounge, where members are free to chat about anything they like. Please feel free to continue your friendly banter there. Here's the link: https://thestockexchange.com.au/forums/the-lounge-where-anything-goes.15/. Thank you for your understanding.
Our Talga Bar works perfectly. I can only recommend opening The Brainchip Bar here. Maybe a task for Rise?

Not everyone wants to go to the other end of town to find their way to their neighborhood bar. Some just like to walk across the street.
 
Last edited:
  • Like
  • Love
  • Haha
Reactions: 12 users
D

Deleted member 118

Guest
  • Like
Reactions: 4 users

zeeb0t

Administrator
Staff member
Our Talga Bar works perfectly. I can only recommend opening The Brainchip Bar here. Maybe a task for Rise?

Not everyone wants to go to the other end of town to find their way to their neighborhood bar. Some just like to walk across the street.
That can of course work too.
 
  • Like
Reactions: 4 users

cosors

👀
That can of course work too.
Mostly it's about the people you already know. Many people feel 'safe' there. The lounge is nice but also 'cosmopolitan'. Maybe a place like your own bar could be the solution.
A reflection of the real environment.
 
Last edited:
  • Like
Reactions: 5 users
Around 120 million shares

About 3.38 million shares up for sale on CommSec. If something really goes BRN's way, well shit, I wouldn't be left holding any shorts...
I'd suspect most of those shorts are hedging positions by funds.
Most pros shorting individually wouldn't hold a BRN short borrow overnight, as it always has a chance of a 50% move at the drop of one new IP deal or related Ann.
 
  • Like
  • Thinking
Reactions: 6 users
$3 SP = $5.4B MC. At a high PE100 will require $54M NPAT / 0.7 ATO = $77.14M EBITDA / 0.65 = $118.68M revenue.

To maintain around $3 SP with hot PE100 requires circa $120M revenue.

$5.4B MC / $120M revenue = x45 revenue multiple.

A Nasdaq listing could result in PE150 similar to Nvidia & would require $36M NPAT & $79.12M revenue.

$5.4B MC / $80M revenue = x67.5 revenue multiple.

PE100 & PE150 are quite high so would require high growth rate & high forward looking revenue.

So the question becomes, how long until BRN can generate $80-120M revenue?

Could be quite fast such as via 50c royalty from Qualcomm x 200M smart phones = $100M revenue.

Or slower such as via Mercedes with premium cars having about 70 MCU's x 15% AI equipped = 10 x 2M cars pa = 20M x $1 royalty = $20M revenue. If 50% of the 70 MCU's are AI equipped in future due to extra sensors/Lidar etc = 35 x 2M cars pa = 70M x $1 royalty = $70M revenue.

I allowed higher royalty for Mercedes as their products are more expensive & royalty was going to be higher for more expensive products.

Then there will also be washing machine, dryer, air conditioner etc applications for MCU's as well. Will take a few years for mass adoption & there will be hundreds of millions units per year. May start with 25-50M units first year x 50c royalty = $12.5-25M revenue & within 2-3 years reach $100M revenue.

Combination of the above should result in about $100M revenue by end of next year including more license agreements. Thus SP should maintain around $3 SP towards the end of 2024. But it will need to be at a high PE100-150.

However, BRN SP tends to spike due to market excitement so it would not surprise me to see SP at $3 on the back of a couple of big name license agreements. If Qualcomm were to be announced it would create a frenzy & the $100M revenue will get priced in within 3-4 months of the announcement. 50c SP x 6 run to $3 SP would be highly likely as we had 39c SP x 6 run to $2.34 intraday peak when Mercedes was announced without any revenue.

So $3 SP could come as early as next 6 months or as late as end of 2024. The earlier it gets to $3 SP without revenue the higher the probability of a 50% retrace to $1.50.

Happy Easter to all.
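For anyone who wants to rerun those numbers, here is a minimal Python sketch of the same back-of-envelope chain; the /0.7 tax step and /0.65 revenue-to-pre-tax ratio are the post's assumptions, not anything official, and none of this is financial advice:

```python
# Back-of-envelope check of the revenue needed to support a target market cap,
# using the same illustrative ratios as the post above.

def revenue_required(market_cap, pe_ratio, tax_rate=0.30, pretax_to_revenue=0.65):
    """Work backwards from market cap to the revenue implied by a given P/E."""
    npat = market_cap / pe_ratio              # net profit after tax
    pre_tax = npat / (1 - tax_rate)           # the post's "/ 0.7 ATO" step
    revenue = pre_tax / pretax_to_revenue     # the post's "/ 0.65" step
    return npat, pre_tax, revenue

for pe in (100, 150):
    npat, pre_tax, rev = revenue_required(5.4e9, pe)
    print(f"PE {pe}: NPAT ${npat/1e6:.0f}M, pre-tax ${pre_tax/1e6:.2f}M, revenue ${rev/1e6:.2f}M")

# PE 100 -> NPAT $54M, pre-tax $77.14M, revenue $118.68M
# PE 150 -> NPAT $36M, pre-tax $51.43M, revenue $79.12M
```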
Thanks for that nice share price breakdown on future revenue possibilities. Would love to see 3 bucks in the next year. And happy Easter 🐰 to you also..
 
  • Like
Reactions: 11 users

Steve10

Regular
Chart with BRN shorts & SP. Make your own decision on the effects.

There have been squeezes up & declines correlated to market sentiment as well. ie. short during rising sentiment results in squeeze up & short during declining sentiment results in big decline.

Shorts are like amplifiers during rises & declines. The effect of the move up/down is amplified.


[Chart attachment]
 
  • Like
  • Sad
  • Fire
Reactions: 29 users

JoMo68

Regular
Our Talga Bar works perfectly. I can only recommend opening The Brainchip Bar here. Maybe a task for Rise?

Not everyone wants to go to the other end of town to find their way to their neighborhood bar. Some just like to walk across the street.
Done - we now have our own bar! Come, relax, and enjoy the chat.
 
  • Like
  • Love
Reactions: 10 users
Chart with BRN shorts & SP. Make your own decision on the effects.

There have been squeezes up & declines correlated to market sentiment as well. ie. short during rising sentiment results in squeeze up & short during declining sentiment results in big decline.

Shorts are like amplifiers during rises & declines. The effect of the move up/down is amplified.


[Chart attachment]
More like accelerators (analogue) 😆
They are taking on a co-processor that learns as it goes and they will be crushed.

SC
 
  • Like
  • Fire
Reactions: 7 users

TECH

Regular
  • Like
Reactions: 7 users
$3 SP = $5.4B MC. At a high PE100 will require $54M NPAT / 0.7 ATO = $77.14M EBITDA / 0.65 = $118.68M revenue.

To maintain around $3 SP with hot PE100 requires circa $120M revenue.

$5.4B MC / $120M revenue = x45 revenue multiple.

A Nasdaq listing could result in PE150 similar to Nvidia & would require $36M NPAT & $79.12M revenue.

$5.4B MC / $80M revenue = x67.5 revenue multiple.

PE100 & PE150 are quite high so would require high growth rate & high forward looking revenue.

So the question becomes, how long until BRN can generate $80-120M revenue?

Could be quite fast such as via 50c royalty from Qualcomm x 200M smart phones = $100M revenue.

Or slower such as via Mercedes with premium cars having about 70 MCU's x 15% AI equipped = 10 x 2M cars pa = 20M x $1 royalty = $20M revenue. If 50% of the 70 MCU's are AI equipped in future due to extra sensors/Lidar etc = 35 x 2M cars pa = 70M x $1 royalty = $70M revenue.

I allowed higher royalty for Mercedes as their products are more expensive & royalty was going to be higher for more expensive products.

Then there will also be washing machine, dryer, air conditioner etc applications for MCU's as well. Will take a few years for mass adoption & there will be hundreds of millions units per year. May start with 25-50M units first year x 50c royalty = $12.5-25M revenue & within 2-3 years reach $100M revenue.

Combination of the above should result in about $100M revenue by end of next year including more license agreements. Thus SP should maintain around $3 SP towards the end of 2024. But it will need to be at a high PE100-150.

However, BRN SP tends to spike due to market excitement so it would not surprise me to see SP at $3 on the back of a couple of big name license agreements. If Qualcomm were to be announced it would create a frenzy & the $100M revenue will get priced in within 3-4 months of the announcement. 50c SP x 6 run to $3 SP would be highly likely as we had 39c SP x 6 run to $2.34 intraday peak when Mercedes was announced without any revenue.

So $3 SP could come as early as next 6 months or as late as end of 2024. The earlier it gets to $3 SP without revenue the higher the probability of a 50% retrace to $1.50.

Happy Easter to all.
Thanks Steve that’s exactly what I was looking for.. happy Easter to all as well cheers
 
  • Like
Reactions: 4 users
An article dated 23/2/23 regarding smart homes from our partner TEKSUN:



AI and the Internet of Things are driving the expansion of smart home markets. As home automation solutions have become more affordable, smart living with automation and integrated AI-IoT is no longer considered a luxury. Local hardware or cloud-based intelligence can be used to provide smart home control.

According to a recent study, the smart home market is expected to develop at a 27.01% annual rate and reach a value of $537 billion by 2030. AI is one of the driving forces behind this expansion.


As AI continues to expand automation’s capabilities, such as replicating human decision-making and anticipating human behavior, it offers huge benefits in terms of convenience and smart support.

AI in Smart Homes​

The application of AI in managing the smart home infrastructure helps gather data from home automation devices, anticipate user behavior, provide maintenance data, and improve data security and privacy. Because it can perform certain activities automatically for the user, its presence in home automation enables us to control our home appliances, safeguard our houses, and so on while reducing the need for human intervention.

This automation relies heavily on the data acquired by the devices, which is used to train models utilizing a range of machine learning and deep learning methods. Smart home-linked devices provide the data, and the AI learns from it to perform particular activities without human interaction.

For example, Teksun thermostats learn automatically from their customers' behavior and then utilize that information to adjust the temperature when someone is home or switch to an energy-efficient mode when no one is.

The Internet of Things in Smart Homes​

IoT allows connected devices, vehicles, buildings, and other items embedded with software, sensors, and internet connectivity to communicate with one another; they may either be operated remotely or relay data to a distant user via AI. With the help of AI, these linked devices can monitor the status of every device connected to the same network and offer real-time data.

Important Considerations for Any Smart Home System​


1. Data security and privacy are the two most important issues that any AI and IoT-enabled smart home should solve. Every connected device leaves digital traces of personal data that must be kept safe and secure.
2. Proper AI and IoT integration enables devices to perform more automatically and with expanded features. Security cameras, for example, often warn of threats automatically, but with correct AI integration, they will proactively alert humans to take charge of the situation when something goes wrong.
3. Interoperability is a critical issue that must be addressed by any home automation tool. Smart home devices should be made interoperable so that new use cases such as energy saving, appliance diagnostics, disaster damage prevention, and so on can be applied to the same smart devices.
4. Better customer service is an essential component of any organization. People living in smart homes may face issues inside their IoT environment, ranging from minor troubleshooting to major data protection concerns. Companies that deliver superior customer service will always be ahead of the competition.
5. Incorporating voice commands will allow the user to save time and money, and alleviate certain laborious activities. Voice control of devices and home appliances should be prioritized because providing user-friendly services always benefits the business.

How will the convergence of AI and IoT affect smart homes?​

AI in smart homes can translate raw sensor data from connected smart devices into beneficial behavior patterns in our daily lives. AI-enabled gadgets understand occupants' patterns and forecast the best experience. They will not turn on the heating, fan, or lights if there is no one in the house, and will automatically lock the doors.
A perfect scenario would be for a user to prepare meals in a smart oven or stove while AI checks the meal’s internal temperature. If the meal reaches the ideal temperature, the AI can lower the cooking temperature to prevent it from burning. The AI would notify the user when the meal was ready to be taken from the oven or burner.
Artificial intelligence (AI) may be able to learn and anticipate a user's desires. For example, a smart kitchen may be set up to begin cooking before the user reaches home.
The promise of IoT and AI isn’t restricted to new homes; there are a variety of options that allow current devices, such as switches, to be converted to Smart Switches and old air conditioners to be updated to provide remote access via Smart Apps or AI-based on cloud servers, among other things.
Wireless solutions facilitate deployment, requiring no major electrical or common labor to get the user to Smart Living. Almost any present switch, air conditioner, or light can be converted to IoT-enabled via various brand-agnostic retrofit methods.
The combination of AI and IoT in the smart home is a winning combo for tech-savvy households. AI-enabled personalization, rather than historical usage, can assist your home in keeping track of how you go about your everyday routine. AI and smart home automation have come to a crossroads. Significant gains will be realized as technology progresses and more device integration becomes available.

In a Nutshell

Technology is changing and combining Smart Home requirements on a large scale. As the number of connected devices outnumbers humans, the concept of a smarter, more convenient home is gaining traction. Home automation has virtually endless applications.
Smart Homes, which blend AI and IoT, appeal to the technologically savvy while cutting energy expenditures and enhancing security. As a result, smart homes enable and safeguard the next level of technological existence.
The Internet of Things (IoT) and artificial intelligence (AI) are here to stay and will dramatically improve Smart Home automation.
 
  • Like
  • Fire
Reactions: 18 users

Tothemoon24

Top 20

Four pillars to achieve the software-defined vehicle vision​

Practical approaches for the automotive industry to accelerate the development of software-defined vehicles.​

Posted on 30th March 2023 by Dennis Laudick, VP, Go-to-Market, Automotive, Arm
Reading Time: 9 mins

What an amazing time to be in automotive technology! The industry is currently going through one of the biggest evolutions in its history. Whether it's the rapid conversion to electrical drivetrains, the miraculous rise of driver assistance features or the reinvention of what owners expect from their car and the features and applications they use in the vehicle, the automotive industry is fundamentally reinventing itself.
From a technologist’s perspective, this all translates into a huge rise in software and the technology that underpins this software. This has led to the widely coined industry term, the software-defined vehicle (SDV). While there is a lot of debate about when the SDV will be ‘standard’ and how to make this a reality, few doubt it’s the direction we are headed. Before tackling how we can achieve this SDV vision, there are some initial challenges that will require us to think differently as an industry.

Complex computing and industry collaboration​

Firstly, let’s consider the scale of technical complexity that comes with SDVs. You can see the trajectory during the past 75 years. For example, if we look at a typical vehicle in 1948, this would contain maybe 50 wires, 150 feet/40 meters of cable and no semiconductors or software. Fast forward to today and a high-end vehicle can contain over 3000 wires and 2-3 miles/3-5 kilometers of cable. On top of that, a high-end vehicle today can already contain over 100 million lines of code – that’s more software than many modern aircraft or even entire social media websites! Looking forward, future vehicles are already predicted to need 4 to 5 times this amount of software, alongside vastly more processing power. This is leading many to call the car of the future a “data center on wheels”. To achieve this, vehicle manufacturers are having to rethink their vehicle electrical architecture approaches.
Interestingly, the technology evolution happening with vehicles is a pattern we've been seeing for many years. Take for example the telephone, which has evolved from a largely mechanical, landline device for calling only, to the electrical/hardware-centric first cellphones of the 90s where users could text and play basic games, to today's 'software defined' smartphones that do everything we need for our digital lives – calling, messaging, gaming, video streaming, social media, banking and web browsing. We've seen the same in watches and many other industries. In many ways, vehicles are just following the natural evolution of devices in our lives as they move, ever increasingly, into the software defined era. As predicted back in 2011, software really is "eating the world" and the automobile is finally getting its well-deserved turn of attention.
[Image] Inside future vehicles
The second challenge being faced is around the nature of the industry itself. The automotive industry is made up of some incredibly capable companies, but they are held back as an industry by the legacies of competition between them. The technical scope of the SDV is so large that there is no way to achieve the scale (and investment) required as individual companies. If the industry is to achieve the scale needed in SDV production and if companies are going to thrive in the SDV era, we need to work together like never before around some key principles and standards that companies can then create value around.

The SDV vision​

While we, at Arm, appreciate the scale of the challenges, we also believe that the end-results and, ultimately, the benefits the SDV will deliver will be incredible and make it worthwhile. To me, it’s about taking today’s car from being a largely static, somewhat functional piece of equipment to an enjoyable, constantly evolving, and safer hub in our digital lives that is more than just a way to get us from A to B. What’s more, the SDV will provide the basis for future vehicles that:
  • Are far more sustainable and environmentally efficient.
  • Provide advanced driver assistance and autonomous features saving many lives.
  • Are relaxing, entertainment rich hubs for passengers.
  • Provide features to make traveling easier and more efficient.
  • Are constantly evolving with new features and capabilities over their lifetimes.
  • Provide a platform for the industry to deliver a myriad of services and features we have barely begun to imagine (for example, compare the smartphone app stores when they first launched to the explosion of mobile accessible services offered today).
In my talk at the CAR IT Symposium, a leading industry event for the automotive supply chain, I outlined four key pillars that I believe will make this SDV dream possible. These pillars are all practical approaches that the automotive and technology industries can take now to accelerate our path to the SDV and address the key challenges outlined previously. Underpinning these four pillars are the common, critically important themes of safety and security, while also acknowledging the importance of the real-time nature of automotive software.

1. Industry collaboration​

As I said at the start of this article, we are going through a seismic change in the automotive industry. The traditional automotive technology approaches of monolithic designs, serialized development and hand-crafting hardware to software just won’t scale to what we need. So, we need to build new approaches. However, the scale of the new technology stacks and development environments we need to create are monumental – no one company can do it alone. Even worse, if many different individual companies go and try to solve this in many different ways, it will actually create more complication, expense and hold the industry back. This means we need to work together as an industry on the fundamentals that will allow us to achieve the SDV dream.
As a neutral technology provider across the industry, Arm is in a unique position to help with this and this is why we created the SOAFEE (the Scalable Open Architecture for Embedded Edge) Special Interest Group. SOAFEE brings together the traditional automotive industry and emerging software development community to share their expertise, technologies and products to help define the future of SDVs. Already SOAFEE has more than 50 members from across the automotive supply chain including silicon vendors, software providers, system integrators, cloud service providers, vehicle manufacturers and tier ones – with membership continuing to grow.

2. Standards​

The true value in SDVs – to both companies and consumers – will be the applications and services that allow vehicles to provide increasing value over their lifetime, providing new and better features and capabilities and establishing a lifelong relationship with the consumer. But this value sits right at the top of the SDV technology stack. Enabling companies to focus on these high-value apps and services rather than the infrastructure and frameworks with limited differentiation requires us to build standard foundations that allow scale, provide portability and encourage reuse. Different foundations and approaches mean far greater complexity, but having standardized platforms and approaches save time and costs producing SDVs, while allowing companies to create real, differentiated value on top.
Again, this is where SOAFEE comes into the picture. The overarching aim of SOAFEE is the industry standardization and acceleration of foundational computing standards and frameworks that will enable the scalability and realization of the SDV. There are already various SOAFEE-based ‘blueprints’ in development, such as an SDV focused IVI implementation.
In addition, underpinning the SDV technology stack, Arm provides a range of compute/processors and low-level software. We have a wide range of Cortex-A, M and R CPUs, Mali GPUs and ISPs that can be integrated into a broad range of computing solutions for SDVs. For low-level software, we incorporate standards built on commonly-accepted industry approaches, such as SystemReady and PSA Certified.

3. Modern methodologies​

As mentioned previously, the scale of the technology we are trying to create with the future SDV is massive. Furthermore, consumer expectations are clearly heading in the direction of wanting ever-increasing capabilities, features and services. This will require software architectural approaches that can achieve vast scale, with development methodologies that will allow efficient, repeated and ongoing development, validation and deployment across a variable hardware landscape.
Traditional development approaches will simply not get us there. However, the good news is that other industries have been solving these issues for decades and there are now robust approaches and methodologies we can utilize to achieve our needs. It’s a major change for the automotive industry, but one that is within the goals of the SOAFEE project.
[Image] Vehicle simulation

4. Vehicle simulation​

Once you have addressed the three pillars above, one of the last major hurdles is enabling the vast amounts of continuously developed software built by armies of developers in parallel, while still achieving the validation quality and robustness requirements that are inherent in the automotive industry. If we are to enable this, traditional approaches relying on physical hardware will simply not scale to where we need to go. To unblock this, we are going to have to enable simulation of the vehicle hardware at all levels, from the control unit to the entire vehicle and external environment. This will require advanced simulation methodologies. These methodologies do exist today, but they are new to the automotive industry and need to be embraced and adopted more widely.

Why the pillars matter​

These four pillars are going to be critical to realising the SDV. I believe that one way or another the industry will come to this conclusion – there is no viable long-term alternative. However, if we don't take on board these four pillars now – if we try to solve the SDV challenges in our silos, if we all adopt different approaches to the same problems, if we cling to software methodologies that struggle to scale or don't enable vast, parallel development – then we will be on a very long journey to the SDV future. A journey that will still end at the same principles, but only after huge amounts of wasted time, effort and money, and to the detriment not only of the industry, but of the enjoyment and safety of all of us as consumers and of our environments.
Fortunately, we already have plenty in place to realize this SDV dream. We are collaborating as an industry through SOAFEE and we at Arm continue working to provide the industry with a wide range of compute and processor technologies with common standards built in. These common platforms will give vehicle manufacturers the ability to create high-value, high-volume SDVs and allow the industry to support the vehicles with high-value apps and services. From the established foundations already in place, we have the ability to achieve the SDV vision of the future that is going to positively transform the automotive industry and bring greater consumer value and safety to the driving and in-vehicle experience.
 
  • Like
  • Fire
Reactions: 8 users
The interesting bit
The passenger display supports gaming and streaming video via the internet. It can be operated on the run, but only for viewing by the front seat passenger. A camera continuously monitors the eye movement of the driver; when he or she attempts to view the passenger display, the content is hidden.
If you were not happy with your passenger you could easily make them very annoyed😂
Or if you’re cross eyed 😬
 
  • Haha
  • Like
Reactions: 4 users
Another one for the tech heads from Qualcomm regarding AI and NN. I was trying to see how this would affect the likelihood of using Akida.

To my lay understanding, it is promoting 8-bit integer (INT8) over 8-bit floating point (FP8).

There is a white paper link on the topic included in the article. From memory this came out about the same time Brainchip released the “4 bits are enough” white paper.


I’ve included the start of the article as a teaser but hit the link if you want to read more.

Floating-point arithmetic for AI inference — hit or miss?​

Our latest whitepaper shows that a new floating-point format doesn't measure up to integer when you're quantizing AI models to run on edge devices
APR 6, 2023
Snapdragon and Qualcomm branded products are products of Qualcomm Technologies, Inc. and/or its subsidiaries.






Artificial intelligence (AI) has become pervasive in our lives, improving our phones, cars, homes, medical centers, and more. As currently structured, these models primarily run in power-hungry, network-dependent data centers. Running AI on edge devices such as smartphones and PCs would improve reliability, latency, privacy, network bandwidth usage, and overall cost.
To move AI workloads to devices, we need to make neural networks considerably more efficient. Qualcomm has been investing heavily in the tools to do so, most recently showcasing the world's first Stable Diffusion model on an Android phone. Bringing models like GPT, with its hundreds of billions of parameters, to devices will require even more work.
The Qualcomm AI Research team has been making advances in deep learning model efficiency for the past years with state-of-the-art results in neural architecture search, compilation, conditional compute, and quantization. Quantization, which reduces the number of bits needed to represent information, is particularly important because it allows for the largest effective reduction of the weights and activations to improve power efficiency and performance while maintaining accuracy. It also helps enable use cases that run multiple AI models concurrently, which is relevant for industries such as mobile, XR, automotive, and more.
Recently, a new 8-bit floating-point format (FP8) has been suggested for efficient deep-learning network training. As some layers in neural networks can be trained in FP8 as opposed to the incumbent FP16 and FP32 networks, this format would improve efficiency for training tremendously. However, the integer formats such as INT4 and INT8 have traditionally been used for inference, producing an optimal trade-off between network accuracy and efficiency.
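To make the integer side of that comparison concrete, here is a generic symmetric per-tensor INT8 quantize/dequantize sketch (a toy illustration of the general technique, not Qualcomm's actual scheme):

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization: real value ~= scale * int8 value."""
    scale = np.max(np.abs(x)) / 127.0                      # map the largest magnitude to 127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

weights = np.random.randn(1000).astype(np.float32)         # stand-in for a weight tensor
q, scale = quantize_int8(weights)
error = np.mean(np.abs(weights - dequantize(q, scale)))

print(f"FP32 size: {weights.nbytes} bytes, INT8 size: {q.nbytes} bytes")  # 4000 vs 1000
print(f"mean absolute rounding error: {error:.4f}")
```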

Enjoy

:)
 
  • Like
  • Fire
Reactions: 5 users
D

Deleted member 118

Guest
Looking into it @Rocket577 it looks like old news dating back around 2020. The link in the press release didn’t work but I found a similar article quoting LDN about the subject. I could be mistaken but I don’t think it’s new.

That’s why I deleted it lol
 
  • Haha
Reactions: 2 users