BRN Discussion Ongoing

Sirod69

bavarian girl ;-)
Yes. You can go into any pub in Munich and order a Jeff.

I don't think I want to drink that 🥵🥵

 
  • Haha
  • Like
  • Sad
Reactions: 14 users

Frangipani

Regular
Hi Labsy...very pleased for you, being in your 40s and holding BrainChip stock...such a great decision you have made!

Interesting how "Disney" got a mention...they are working with robotics in the AI space, as well as video streaming etc...such a good fit with the release of AKD 2.0. Were Sean and Geoff talking about our engagements in private prior to the podcast, meaning Geoff's subconscious mind was coughing up the words Mercedes, Disney and Tesla? :ROFLMAO::ROFLMAO::ROFLMAO: Purely speculative of course.

I am truly "hoping" to hear that another 2 companies have signed an IP License by year's end...which 2, any 2!!

Tech ;)

Look what I've found…

"DisneyResearch|Studios in Zurich, Switzerland, focuses on exploring the scientific frontiers in a variety of domains in service to the technical and creative filmmaking process. Our world-class research talent in visual computing, machine learning, and artificial intelligence shapes early-stage ideas into technological innovations that revolutionize the way we produce movies and create media content (…)
To complement our considerable research talent, DisneyResearch|Studios maintains a close academic partnership with ETH Zürich—supporting joint research programs and PhD students—but also collaborates with the best of academia and industry from all over the world."

So yes, maybe there is indeed more to Disney than just a fleeting mention of the name as a random example of a world-class company during the podcast. I wonder if Geoffrey knows Moore than we do. 😄

Mind you, this is mere speculation; I didn't find any direct links to Brainchip.
And keep in mind that Zürich is also home to the renowned Institute of Neuroinformatics (INI), which was established at the University of Zürich and ETH Zürich at the end of 1995. With its focus on neuromorphic engineering, INI has been a fertile breeding ground for spin-offs such as IniLabs, iniVation and SynSense. iniVation are the ones that have been collaborating with WSU's International Centre for Neuromorphic Systems and the RAAF & UNSW Canberra Space resp. US Air Force on neuromorphic event cameras in space - let's keep our fingers crossed that Brainchip will be involved in Falcon Neuro's follow-on experiment Falcon ODIN, planned for later this year.
Then again, Disney could of course be collaborating with several competing companies to find out which ones best suit their needs.

To quote Disney's Aladdin:
"🎼 A whole new world
A new fantastic point of view
No one to tell us 'no', or where to go
Or say we're only dreaming…"




Welcome to DisneyResearch|Studios!​

For over 12 years Disney Research has been at the forefront of technological innovation, pushing the boundaries of what is possible to help the Walt Disney Company differentiate its entertainment products, services, and content.
Our mission is rooted in the long Disney tradition of inventing new technologies to contribute to the magic of the stories we tell and the characters we all love.

Markus Gross​

Chief Scientist
DisneyResearch|Studios in Zurich, Switzerland, focuses on exploring the scientific frontiers in a variety of domains in service to the technical and creative filmmaking process. Our world-class research talent in visual computing, machine learning, and artificial intelligence shapes early-stage ideas into technological innovations that revolutionize the way we produce movies and create media content.
Our inventions are used in almost every Disney feature film production and have dazzled hundreds of millions of people in audiences worldwide. DisneyResearch|Studios is part of a wider innovation ecosystem operating in close partnership with our technology units at the Walt Disney Animation Studios, Pixar Animation Studios, Lucasfilm/ILM, Marvel Studios, and Walt Disney Pictures.
To complement our considerable research talent, DisneyResearch|Studios maintains a close academic partnership with ETH Zürich—supporting joint research programs and PhD students—but also collaborates with the best of academia and industry from all over the world. With such a strong academic and creative grounding, DisneyResearch|Studios is a lab like no other—with the unique mission of bringing the magic to life.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 39 users

goodvibes

Regular
Prophesee…sounds like Akida inside…

We're headed to #AutomateShow in Detroit 22-25 May!
Secure a meeting with our experts at booth #3645 to discuss how Event-Based vision is impacting the future of industrial automation.

We'll be showing demos for various applications in Industry 4.0 including ultra high-speed counting, vibration monitoring and frequency analysis for predictive maintenance, batch homogeneity & gauging and more.

Book your meeting today 👉 https://lnkd.in/dpV8eWhA

A3 - Association for Advancing Automation
#Automate2023 #machinevision #industrialautomation #industry4_0

 
  • Like
  • Fire
  • Love
Reactions: 17 users

goodvibes

Regular
Does anyone know about the EU project REBECCA?


Our MISSION​

The mission of REBECCA is to develop efficient and secure edge-AI systems using open CPU architecture, to enhance European strategic autonomy and sovereignty.

Bonseyes is one of 24 partners…Bonseyes linked to Nviso…


 
  • Like
Reactions: 5 users

Sirod69

bavarian girl ;-)

A LOOK AT THE TOP HOLDERS OF BRAINCHIP SHARES​

(An impressive list of big funds that are taking Brainchip seriously and are invested. For example, Citicorp owns nearly 1 in 10 shares of Brainchip, Merrill Lynch (Australia) 1 in 20 shares. Personally, I find this reassuring that we are on the right track. The professional big boy investors have their hat in the ring with us and believe we are on to something and they want in too.)
According to the company, BrainChip's top 20 shareholders are as follows:
  1. Citicorp, with 9.15% of all outstanding shares
  2. Mr Peter Adrien van der Made, with 8.87%
  3. Merrill Lynch, with 4.88%
  4. BNP Paribas, with 4.75%
  5. HSBC, with 4.44%
  6. JPMorgan, with 2.82%
  7. BNP Paribas (DRP), with 2.53%
  8. HSBC (customer accounts), with 1.17%
  9. National Nominees, with 0.67%
  10. LDA Capital, with 0.52%
  11. BNP Paribas (Retail Clients), with 0.47%
  12. Mrs Rebecca Ossieran-Moisson, with 0.45%
  13. Crossfield Intech (Liebskind Family), with 0.4%
  14. Certane CT Pty Ltd (BrainChip's unallocated long-term incentive plan), with 0.4%
  15. Mr Paul Glendon Hunter, with 0.35%
  16. Certane CT Pty Ltd (BrainChip's allocated long-term incentive plan), with 0.35%
  17. Mr Louis Dinardo, with 0.34%
  18. Mr Jeffrey Brian Wilton, with 0.31%
  19. Mr David James Evans, with 0.31%
  20. Superhero Securities (Client Accounts), with 0.3%

 
  • Like
  • Fire
Reactions: 13 users

Sirod69

bavarian girl ;-)
Exciting news from Plumerai! 🔥 You can now test Plumerai's People Detection right here in your browser. 😱 Click below and witness the accuracy of our tiny AI model running with your webcam. Rest assured, your privacy is protected. We do not capture any images and everything stays on your PC. How does it work? Normally we compile our models for CPUs and NPUs, but here we've compiled them for WebAssembly, which runs in the browser. Give it a try and see for yourself what kind of accuracy we can achieve with our tiny models! 🚀

Plumerai People Detection will run locally in your browser, with no involvement from the cloud. This is how we preserve your privacy.
Your videos or images are not transmitted, not stored, and not shared with Plumerai. For full details, see our privacy policy.

This exact same AI model runs on tiny chips.​

And that's why we can deploy AI in devices where others can't.
We are running an extremely tiny AI model in your browser. There's no involvement from the cloud, so your privacy is preserved. It's so small and so efficient that we can run the exact same AI model on tiny and low-cost chips. That's how we enable our customers to run Plumerai People Detection on nearly any device, while providing the same highly accurate detections that you are seeing here in your browser.

SINGLE CORE ARM CORTEX-A72 @ 1.5 GHz​

29 frames/s

WITH A TINY FOOTPRINT​

2.3 MB
Plumerai People Detection runs on Arm Cortex-A, x86, and RISC-V CPUs and on $1 Arm Cortex-M and ESP32-S3 microcontrollers. It can also easily be adapted to leverage AI accelerators.
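For anyone curious what "the exact same AI model runs on tiny chips" looks like in practice, here is a minimal sketch of running a small quantized person-detection model on a CPU. Plumerai's actual SDK and model are not public in this post, so the model file name and the TFLite-style inference loop below are assumptions, not their implementation.

```python
# Minimal sketch of running a tiny quantized person-detection model on a CPU.
# "person_detect_int8.tflite" is a hypothetical stand-in for any MB-scale detector;
# it is NOT Plumerai's model or API.
import numpy as np
import tflite_runtime.interpreter as tflite  # or: from tensorflow import lite as tflite

interpreter = tflite.Interpreter(model_path="person_detect_int8.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def detect(frame_rgb: np.ndarray) -> np.ndarray:
    """Run one inference on an HxWx3 uint8 frame (NHWC input layout assumed)."""
    h, w = int(inp["shape"][1]), int(inp["shape"][2])
    # naive nearest-neighbour resize, to keep the sketch dependency-free
    ys = np.linspace(0, frame_rgb.shape[0] - 1, h).astype(int)
    xs = np.linspace(0, frame_rgb.shape[1] - 1, w).astype(int)
    resized = frame_rgb[ys][:, xs]
    interpreter.set_tensor(inp["index"], resized[np.newaxis].astype(inp["dtype"]))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])  # output format is model-dependent
```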


 
  • Like
  • Love
  • Fire
Reactions: 36 users

TheDon

Regular
BRN is making noise and it's getting louder and louder!
 
  • Like
Reactions: 22 users

MrRomper

Regular
Unfortunately no. Xperi has a competing NCU chip (Perceive Ergo) that's not exactly truly neuromorphic in architecture (it uses MAC functions, at least in the first version; no info on the second version to suggest a change). @Diogenese, I believe, had a look at Xperi about 2 years ago on that other place where the grass is browner.
Yes, I understand exactly what you are saying in regards to Xperi.
I was essentially referencing part of the post where it states 'powered by PROPHESEE Event-Based Metavision® sensor.'

When looking further into Event-Based Metavision you get the following:
One for Akida.
https://www.linkedin.com/feed/updat...date:(V2,urn:li:activity:6944573360253059072)
For balance. One for Snapdragon (in smartphones)
https://www.linkedin.com/feed/updat...date:(V2,urn:li:activity:7036327512653594624)

Does it have Akida? Ultimately, until it is definitively answered, it is still speculation.
 
  • Like
Reactions: 2 users

RobjHunt

Regular
Would it be too far from the realms of possibility that Elon hasn't yet released his fantastical Pi Phone due to being in cahoots with our little nipper (Akida version??), wanting it to do the things that he envisages it to do?

Now my own speculation is getting me excited possums ;)

Pantene Peeps!
 
  • Like
  • Fire
Reactions: 15 users

RobjHunt

Regular
Would it be too far from the realms of possibility that Elon hasn't yet released his fantastical Pi Phone due to being in cahoots with our little nipper (Akida version??), wanting it to do the things that he envisages it to do?

Now my own speculation is getting me excited possums ;)

Pantene Peeps!
With no disrespect to Barry, Dame Edna or Les. God rest their wonderful souls!
 
Last edited:
  • Like
  • Haha
  • Love
Reactions: 13 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I love this part @Tothemoon24!

According to Arm, more than 90% of in-vehicle infotainment (IVI) systems use the company's chip designs. The architectures are also found in various under-the-hood applications, including meter clusters, e-mirrors, and heating, ventilation, and air conditioning (HVAC) control.

If I were Arm, I would be incorporating AKIDA not just into the Cortex-M based MCUs but into the A and R based MCUs as well, just to cover all bases.


https://armkeil.blob.core.windows.n...ide-to-arm-processing-power-in-automotive.pdf


Continuing on from the above ramblings, if Arm were to incorporate AKIDA 1500 in all of its Cortex-M based MCUs, it would tie in nicely with Renesas' plans for the 22nm RA family, which is being sampled right now with select customers, with general availability planned towards the end of the year. That seems to marry nicely with the tape-out timing of the GlobalFoundries 22nm AKIDA 1500.

We know AKIDA is compatible with all of Arm's product families, so it wouldn't make sense to incorporate it only with the Cortex-M85, would it?

Why stop there?

IMO.


Renesas Makes the Jump to 22nm with a New RA-Class MCU with Software-Defined Radio, Sampling Now
Offering Bluetooth 5.3 Low Energy (BLE) at launch, this cutting-edge Arm Cortex-M33 microcontroller can be upgraded for future releases.


Gareth Halfacree
22 days ago • HW101 / Internet of Things / Communication
Renesas Electronics has announced sampling of its first microcontroller to be built on a 22nm semiconductor process node — an RA-family 32-bit Arm Cortex-M33-based chip with Bluetooth 5.3 Low Energy (BLE) provided via an on-board software-defined radio (SDR).
"Renesas' MCU [Microcontroller Unit] leadership is based on a wide array of products and manufacturing process technologies," boasts Renesas' Roger Wendelken of the sampling. "We are pleased to announce the first 22nm product development in the RA MCU family which will pave the way for next generation devices that will help customers to future proof their design while ensuring long term availability. We are committed to providing the best performance, ease-of-use, and the latest features on the market. This advancement is only the beginning."
Renesas has announced a new RA-class microcontroller with SDR-powered Bluetooth 5.3 Low Energy (BLE) support, built on a 22nm process node. (📷: Renesas)

Modern semiconductor manufacturing processes are measured, after a fashion, in nanometers — once the size of a given feature, then the smallest gap between features, and now a somewhat hand-wavy way of differentiating a next-generation process node from a previous one. While bleeding-edge high-frequency application class processors, like those from Intel or AMD, are now playing with single-digit nanometer process nodes, traditionally microcontrollers — needing to pack in far fewer transistors than high-performance application processors — have stuck with proven, and more affordable, double- or triple-digit process nodes.
That's key to why Renesas' announcement of a part built on a 22nm process node, a node which Intel began using back in 2012 for its Ivy Bridge family of chips before moving to 14nm for Broadwell in 2014, is notable: for microcontrollers, 22nm is an advanced node indeed. It allows the company to pack more components into a given area, and Renesas has taken full advantage of that extra capacity by fitting the chip with a software-defined radio (SDR) — powering Bluetooth 5.3 Low Energy (BLE) connectivity with direction-finding and low-power audio capabilities at launch, but upgradeable post-release to support new radio protocols and standards as-required.
The new microcontroller enters the RA family, alongside the recently-launched entry-line RA4E2. (📷: Renesas)

The shift to a 22nm node will also bring with it an overall reduction in part size and gains in efficiency which can be exploited as either increased performance for the same power draw or a lower power draw for the same performance — or a balanced combination of the two. Renesas has not, however, yet shared full specifications for the part, including frequency and power requirements.
Renesas is now sampling the 22nm RA-family chips to "select customers," with plans for general availability towards the end of the year. Parties interested in requesting a sample should contact their local sales office for more details.

 
  • Like
  • Love
  • Fire
Reactions: 38 users

Diogenese

Top 20
With no disrespect to Barry, Dame Edna or Les. God rest their wonderful souls!
... and sadly, shares in Gladioli Growers Pty Ltd may never recover.
 
  • Haha
  • Sad
  • Like
Reactions: 8 users

TECH

Regular

A LOOK AT THE TOP HOLDERS OF BRAINCHIP SHARES​

(An impressive list of big funds that are taking Brainchip seriously and are invested. For example, Citicorp owns nearly 1 in 10 shares of Brainchip, Merrill Lynch (Australia) 1 in 20 shares. Personally, I find this reassuring that we are on the right track. The professional big boy investors have their hat in the ring with us and believe we are on to something and they want in too.)
According to the company, BrainChip's top 20 shareholders are as follows:
  1. Citicorp, with 9.15% of all outstanding shares
  2. Mr Peter Adrien van der Made, with 8.87%
  3. Merrill Lynch, with 4.88%
  4. BNP Paribas, with 4.75%
  5. HSBC, with 4.44%
  6. JPMorgan, with 2.82%
  7. BNP Paribas (DRP), with 2.53%
  8. HSBC (customer accounts), with 1.17%
  9. National Nominees, with 0.67%
  10. LDA Capital, with 0.52%
  11. BNP Paribas (Retail Clients), with 0.47%
  12. Mrs Rebecca Ossieran-Moisson, with 0.45%
  13. Crossfield Intech (Liebskind Family), with 0.4%
  14. Certane CT Pty Ltd (BrainChip's unallocated long-term incentive plan), with 0.4%
  15. Mr Paul Glendon Hunter, with 0.35%
  16. Certane CT Pty Ltd (BrainChip's allocated long-term incentive plan), with 0.35%
  17. Mr Louis Dinardo, with 0.34%
  18. Mr Jeffrey Brian Wilton, with 0.31%
  19. Mr David James Evans, with 0.31%
  20. Superhero Securities (Client Accounts), with 0.3%


Hi Sirod69.....thanks for that breakdown.

Nice to see Peter and Anil still sitting at number 2 & 3 respectively...holding 13.75% of the company, plus 56.52% currently being held by the rest of us outside the top 20...combined giving a nice 70.27% of the company...just an observation at this point, nothing more than that.

Regarding some nice news dropping just prior to the AGM: legally, the company can't sit on information to make that timing possible, so it either plays out that way or not. Maybe we could ask that the signing of a new IP License takes place on 22 May, that would work :ROFLMAO::ROFLMAO::ROFLMAO:

Tech 😉
 
  • Like
  • Haha
  • Wow
Reactions: 22 users
Just on Prophesee, I see Christoph was involved with a conference not long ago.

He was also one of the authors of a paper being presented, and for mine it seems they are all still working through the best or most suitable systems.


DATE 2023 Detailed Programme​


FS6 Focus session: New perspectives for neuromorphic cameras: algorithms, architectures and circuits for event-based CMOS sensors​

Date: Tuesday, 18 April 2023
Time: 16:30 CEST - 18:00 CEST
Location / Room: Okapi Room 0.8.1

Session chair:
Pascal VIVET, CEA-List, FR

Session co-chair:
Christoph Posch, PROPHESEE, FR


Time: 16:30 CEST
Label: FS6.1
Presentation Title: THE CNN VS. SNN EVENT-CAMERA DICHOTOMY AND PERSPECTIVES FOR EVENT-GRAPH NEURAL NETWORKS
Speaker: Thomas DALGATY, CEA-LIST, FR
Authors: Thomas DALGATY (1), Thomas Mesquida (2), Damien JOUBERT (3), Amos SIRONI (3), Pascal Vivet (4) and Christoph POSCH (3)
(1) CEA-List, FR; (2) Université Grenoble Alpes, CEA, LETI, MINATEC Campus, FR; (3) Prophesee, FR; (4) CEA-Leti, FR
Abstract
Since neuromorphic event-based pixels and cameras were first proposed, the technology has greatly advanced such that there now exists several industrial sensors, processors and toolchains. This has also paved the way for a blossoming new branch of AI dedicated to processing the event-based data these sensors generate. However, there is still much debate about which of these approaches can best harness the inherent sparsity, low-latency and fine spatiotemporal structure of event-data to obtain better performance and do so using the least time and energy. The latter is of particular importance since these algorithms will typically be employed near or inside of the sensor at the edge where the power supply may be heavily constrained. The two predominant methods to process visual events - convolutional and spiking neural networks - are fundamentally opposed in principle. The former converts events into static 2D frames such that they are compatible with 2D convolutions, while the latter computes in an event-driven fashion naturally compatible with the raw data. We review this dichotomy by studying recent algorithmic and hardware advances of both approaches. We conclude with a perspective on an emerging alternative approach whereby events are transformed into a graph data structure and thereafter processed using techniques from the domain of graph neural networks. Despite promising early results, algorithmic and hardware innovations are required before this approach can be applied close or within the Event-based sensor.
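To make the abstract's central contrast concrete, here is a minimal sketch of the "CNN route": collapsing a window of DVS-style (x, y, t, polarity) events into a dense 2D frame. The event layout is the common event-camera convention, not code from the paper; an SNN or event-graph approach would instead consume the raw event stream as it arrives.

```python
import numpy as np

def events_to_frame(events: np.ndarray, height: int, width: int) -> np.ndarray:
    """Accumulate (x, y, t, polarity) events into a signed 2D histogram.

    This is the frame-conversion path the abstract describes: events within a
    time window are collapsed into a dense frame that ordinary 2D convolutions
    can consume, at the cost of discarding fine temporal structure.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    pol = np.where(events[:, 3] > 0, 1, -1)
    np.add.at(frame, (y, x), pol)   # +1 for ON events, -1 for OFF events
    return frame

# By contrast, an SNN/event-graph approach would feed each (x, y, t, polarity)
# tuple to the network individually, preserving sparsity and timing.
```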


They also appear to be involved in the NimbleAI project.

MPP1 Multi-partner projects​

Date: Monday, 17 April 2023
Time: 11:00 CEST - 12:30 CEST
Location / Room: Gorilla Room 1.5.1

Session chair:
Luca Sterpone, Politecnico di Torino, IT

Time: 11:00 CEST
Label: MPP1.1
Presentation Title: NIMBLEAI: TOWARDS NEUROMORPHIC SENSING-PROCESSING 3D-INTEGRATED CHIPS
 
  • Like
  • Love
  • Fire
Reactions: 23 users

Deadpool

hyper-efficient Ai
Continuing on from the above ramblings, if Arm were to incorporate AKIDA 1500 in all of its Cortex-M based MCUs, it would tie in nicely with Renesas' plans for the 22nm RA family, which is being sampled right now with select customers, with general availability planned towards the end of the year. That seems to marry nicely with the tape-out timing of the GlobalFoundries 22nm AKIDA 1500.

We know AKIDA is compatible with all of Arm's product families, so it wouldn't make sense to incorporate it only with the Cortex-M85, would it?

Why stop there?

IMO.


Fantastic detective work once again @Bravo. Fingers crossed this is their thinking as well.

 
  • Like
  • Haha
  • Fire
Reactions: 17 users

Deena

Regular
Extraordinarily low volumes of shares changing hands today. Only just over 1.6 million traded. The price will have to rise considerably if they want more ... but they still won't be mine!
Deena
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 37 users
Excellent, albeit a bit long, article to provide some perspective on design, testing and implementation when dealing with AVs.

Good example of why timelines can push out as well.




Testing Perception and Sensor Fusion Systems​

Updated Apr 5, 2023


Overview​

An autonomous vehicle's (AV) most computationally complex components lie within the perception and sensor fusion system. This system must make sense of the information that the sensors provide, which might include raw point-cloud and video-stream data. The perception and sensor fusion system's job is to crunch all of the data and determine what it is seeing: Lane markings, pedestrians, cyclists, vehicles, or street signs, for example.

To address this computational challenge, automotive suppliers seemingly could build a supercomputer and throw it in the vehicle. However, a supercomputer consumes heaps of power, and that directly conflicts with the automotive industry's goal to create efficient cars. We can't expect Level 4 vehicles to be connected to a huge power supply to run the largest and smartest computer for making huge decisions. The industry must strike a balance between processing power and power consumption.

Such a monumental task requires specialized hardware; for example, "accelerators" that help specific algorithms that perceive the world execute extremely fast and precisely. Learn more about that hardware architecture and its various implementations in the next section. After that, discover methodologies to test the perception and sensor fusion systems from a hardware and system-level test perspective.

Perception and Sensor Fusion Systems​

As noted in the introduction, AV brains can be centralized in a single system, distributed to the edge of the sensors, or a combination of both:

Figure 1. Control Placement Architectures
NI often refers to a centralized platform as the AV compute platform, though other companies have different names for it. AV compute platforms include the Tesla full self-driving platform and the NVIDIA DRIVE AGX platform.

Figure 2. NVIDIA DRIVE AGX Platform
MobilEye's EyeQ4 offers decentralized sensors. If you combine such platforms with a centralized compute platform to offload processing, they become a hybrid system.
When we speak of perception and sensor fusion systems, we isolate the part of the AV compute platform that takes in sensor information from the cameras, RADARs, Lidars, and, occasionally, other sensors, and spits out a representation of the world around the vehicle to the next system: The path planning system. This system locates the vehicle in 3D space and maps it in the world.

Hardware and Software Technologies​

Certain processing units are best-suited for certain types of computation; for example, CPUs are particularly good at utilizing off-the-shelf and open source code to execute high-level commands and handle memory allocation. Graphics processing units can handle general-purpose image processing very well. FPGAs are excellent at executing fixed-point math very quickly and deterministically. You can find tensor processing units and neural network (NN) accelerators built to execute deep learning algorithms with specific activation functions, such as rectified linear units, extremely quickly and in parallel. Redundancy is built into the system to ensure that, if any component fails, it has a backup.
Because backups are critical in any catastrophic failure, there cannot be a single point of failure (SPOF) anywhere, especially if those compute elements are to receive their ASIL-D certification.
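On the "no single point of failure" point: ASIL-D style architectures typically compare the outputs of redundant channels before acting on them. A toy 2-out-of-3 majority voter, not any vendor's actual safety mechanism, illustrates the idea:

```python
from collections import Counter
from typing import Optional, Sequence

def vote_2oo3(outputs: Sequence[int]) -> Optional[int]:
    """Toy 2-out-of-3 majority voter over redundant channel outputs.

    Returns the value at least two channels agree on, or None if all three
    disagree (which a real system would treat as a fault and escalate).
    """
    assert len(outputs) == 3, "expects exactly three redundant channels"
    value, count = Counter(outputs).most_common(1)[0]
    return value if count >= 2 else None

# Example: two healthy channels out-vote one faulty one.
assert vote_2oo3([42, 42, 17]) == 42
assert vote_2oo3([1, 2, 3]) is None
```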

Some of these processing units consume large amounts of power. The more power compute elements consume, the shorter the range of the vehicle (if electric), and the more heat that's generated. That is why you'll often find large fans on centralized AV compute platforms and power-management integrated circuits in the board. These are critical for keeping the platform operating under ideal conditions. Some platforms incorporate liquid cooling, which requires controlling pumps and several additional chips.
Atop the processing units lies plenty of software in the form of firmware, OSs, middleware, and application software. As of this writing, most Level 4 vehicle compute platforms run something akin to the Robot Operating System (ROS) on a Linux Ubuntu or Unix distribution. Most of these implementations are nondeterministic, and engineers recognize that, in order to deploy safety critical vehicles, they must eventually adopt a real-time OS (RTOS). However, ROS and similar robot middleware are excellent prototyping environments due to their vast amount of open source tools, ease of getting started, massive online communities, and data workflow simplicity.
With advanced driver-assistance systems (ADAS), engineers have recognized the need for RTOSs and have been developing and creating their own hardware and OSs to provide it. In many cases, these compute-platform providers incorporate best practices such as the AUTOSAR framework.
Perception and sensor fusion system software architecture varies dramatically due to the number and type of sensors associated with the perception system; types of algorithms used; hardware that's running the software; and platform maturity. One significant difference in software architecture is "late" versus "early" sensor fusion.
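The "late versus early fusion" distinction can be sketched in a few lines. Everything below (the feature extractors and the dummy detector) is illustrative stand-in code rather than an AV stack: early fusion merges low-level sensor data before a single model runs, while late fusion runs per-sensor detectors and merges their object lists afterwards.

```python
import numpy as np

# Toy stand-ins for per-sensor pipelines; names and shapes are illustrative only.
def camera_features(image: np.ndarray) -> np.ndarray:
    return image.astype(float).mean(axis=(0, 1))               # e.g. per-channel stats

def lidar_features(points: np.ndarray) -> np.ndarray:
    return np.array([points[:, 2].min(), points[:, 2].max()])  # e.g. height extent

def detect_objects(features: np.ndarray) -> list:
    return [{"label": "object", "score": float(features.sum())}]  # dummy detector

def early_fusion(image: np.ndarray, points: np.ndarray) -> list:
    """Combine low-level features first, then run a single detector on the result."""
    fused = np.concatenate([camera_features(image), lidar_features(points)])
    return detect_objects(fused)

def late_fusion(image: np.ndarray, points: np.ndarray) -> list:
    """Run independent per-sensor detectors, then merge their object lists."""
    cam = detect_objects(camera_features(image))
    lid = detect_objects(lidar_features(points))
    return cam + lid   # a real system would associate detections (e.g. by IoU)
```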

Product Design Cycle​

To create a sensor fusion compute platform, engineers implement a multistep process. First, they purchase and/or design the chips. Autonomous compute platform providers may employ their own silicon design, particularly for specific NN accelerators. Those chips undergo test as described in the semiconductor section below. After the chips are confirmed good, contract manufacturers assemble and test the boards. Embedded chip software design and simulation occur in parallel to chip design and chip/module bring-up. Once the system is assembled, engineers conduct functional tests and embedded software tests, such as hardware-in-the-loop (HIL). Final compute-platform packaging takes place in-house or at the contract manufacturer, where additional testing occurs.

Semiconductor Hardware-Level Tests​

As engineers design these compute units, they execute tests to ensure that the units operate as expected:

Semiconductor-Level Validation and Verification​

As mentioned, all semiconductor chips undergo a process of chip-level validation and verification. Typically, these help engineers create specifications documents and send the product through the certification process. Often, hardware redundancy and safety are checked at this level. Most of these tests are conducted digitally, though analog tests also ensure that the semiconductor manufacturing process occurred correctly.

Semiconductor-Level Production Test​

After the chip engineering samples are verified, theyā€™re sent into production. Several tests unique to processing units at the production wafer-level test stage revolve around testing the highly dense digital connections on the processors.
At this stage, ASIL-D and ISO 26262 validation occurs, and further testing confirms redundancy, identifies SPOF, and verifies manufacturing process integrity.

Compute-Platform Validation​

After compute-platform manufacturers receive their chips and package them onto a module or subsystem, the compute-platform validation begins. Often, this means testing various subsystem functionality and the entire compute platform as a whole; for example (a minimal scripted version of these checks is sketched after this list):
  • Ensuring that all automotive network ports (controller area network [CAN], local interconnect network, and T1/Ethernet [ENET]) are communicating correctly in both directions
  • Ensuring that all standard network ports (ENET, USB, and PCIe) are communicating correctly in both directions
  • Ensuring that all sensor interfaces can communicate and handle standard loads for each type of sensor
  • Providing a representative "load" on the system and validating that it completes a task
  • Measuring the various element and complete system power consumptions as they complete a task
  • Measuring system thermal performance under various loads
  • Placing the subsystem or entire compute platform in a temperature, environmental, or accelerating (shaker table) chamber to ensure that it can withstand extreme operating conditions
  • Verifying that the system can connect to a GPS or global navigation satellite system (GNSS) port and synchronizing it with the clock to a certain specification and within a certain time
  • Checking onboard system diagnostics
  • Power-cycling the complete system at various voltage and current levels
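As flagged above, those checks map naturally onto an automated functional-test script. The `dut` interface object, its method names, and the thresholds below are hypothetical placeholders rather than NI's API; the point is only the shape of such a check list in code.

```python
# Hypothetical functional-check skeleton for a compute platform under test.
# The `dut` object and all of its methods are placeholders, not a real API.
def run_functional_checks(dut) -> dict:
    results = {}
    # Bidirectional automotive-network loopback (CAN / LIN / automotive ENET)
    results["can_loopback"] = dut.can_send_receive(frame_id=0x123, payload=b"\x01\x02")
    # Standard network ports (ENET, USB, PCIe)
    results["enet_loopback"] = dut.ethernet_ping()
    # Representative compute load, with power and thermal measurements alongside
    with dut.measure_power() as power:
        results["inference_load"] = dut.run_reference_workload(seconds=60)
    results["avg_power_w"] = power.average()
    results["max_temp_c"] = dut.read_max_temperature()
    # GNSS time sync within a specified bound (the bound here is an assumed example)
    results["time_sync_ok"] = dut.gnss_sync_error_s() < 1e-6
    # Power-cycle robustness at nominal supply voltage
    results["power_cycle_ok"] = all(dut.power_cycle(voltage=12.0) for _ in range(10))
    return results
```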

Figure 3. Chip Validation

Functional and Module-Level Test​

Because the compute platform is a perfect mix of both consumer electronics components and automotive components, you have to thoroughly validate it with testing procedures from both industries: You need automotive and consumer electronics network interfaces and methodologies.
NI is uniquely suited to address these complex requirements through our Autonomous Compute Platform Validation product. We selected a mix of the interfaces and test instruments you might require to address the validation steps outlined above, and packaged them into a single configuration. Because we utilize PXI instrumentation, our flexible solution easily addresses changing and growing AV validation needs. Figure 4 shows a solution example:

Figure 4. Autonomous Compute Platform Validation Solution Example

Life Cycle, Environmental, and Reliability Tests​

Functionally validating a single compute platform is fairly straightforward. However, once the scope of the test grows to encompass multiple devices at a time or in various environments, test system size and complexity grows. It's important to incorporate functional test and scale the number of test resources appropriately, with corresponding parallelism or serial testing capabilities. Also, you need to integrate the appropriate chambers, ovens, shaker tables, and dust rooms to simulate environmental factors. And because some tests must run for days, weeks, or even months to represent the life cycle of the devices under test, tests need to execute—uninterrupted—for that duration of time.
All of these life cycle, environmental, and reliability testing challenges are solved with the right combination of test equipment, chambering, DUT knowledge, and integration capability. To learn more about our partner channel that can assist with integrating these complex test systems, please contact us.

Embedded Software and Systems Tests​

Perception and sensor fusion systems are the most complex vehicular elements for both hardware and software. Because the embedded software in these systems is truly cutting-edge, software validation test processes also must be cutting-edge. Later in this document, learn more about isolating the software itself to validate the code as well as testing the software once it has been deployed onto the hardware that will eventually go in the vehicle.

Figure 5. Simulation, Test, and Data Recording

Algorithm Design and Development​

We can't talk about software test without recognizing that the engineers and developers designing the software are constantly trying to improve their software's capability. Without diving in too deeply, know that software that is appropriately architected makes validating that software significantly easier.

Figure 6. Design, Deployment, and Verification and Validation (V and V)

Software Test and Simulation​

99.9% of perception and sensor fusion validation occurs in software. It's the only way to test an extremely high volume within reasonable cost and timeframe constraints because you can utilize cloud-level deployments and run hundreds of simulations simultaneously. Often, this is known as simulation or software-in-the-loop (SIL) testing. As mentioned, we need an extremely realistic environment if we are testing the perception and sensor fusion software stack; otherwise, we will have validated our software against scenarios and visual representation that only exists in cartoon worlds.

Figure 7. Perception Validation Characteristics
Testing AV software stack perception and sensor fusion elements requires a multitude of things: You need a representative "ego" vehicle in a representative environmental worldview. You need to place realistic sensor representations on the ego vehicle in spatially accurate locations, and they need to move with the vehicle. You need accurate ego vehicle and environmental physics and dynamics. You need physics-based sensor models that give you actual information that a real-world sensor would provide, not some idealistic version of it.
After you have equipped the ego vehicle and set up the worldview, you need to execute scenarios for that vehicle and sensors to encounter by playing through a preset or prerecorded scene. You also can let the vehicle drive itself through the scene. Either way, the sensors must have a communication link to the software under test. It can be through some type of TCP link—either running on the same machine or separately—to the software under test.
That software under test is then tasked with identifying its environment, and you can verify how well it did by comparing the results of the perception and sensor fusion stack against the "ground truth" that the simulation environment provides.
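Scoring the stack against simulator ground truth usually comes down to an association metric such as intersection-over-union. A minimal sketch, assuming axis-aligned boxes in (x1, y1, x2, y2) form:

```python
def iou(box_a, box_b) -> float:
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def recall_against_ground_truth(detections, ground_truth, threshold=0.5) -> float:
    """Fraction of ground-truth boxes matched by at least one detection."""
    if not ground_truth:
        return 1.0
    matched = sum(
        any(iou(gt, det) >= threshold for det in detections) for gt in ground_truth
    )
    return matched / len(ground_truth)
```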

Figure 8. Testing Perception and Planning
The real advantage is that you can spin up tens of thousands of simulation environments in the cloud and cover millions of miles per day in simulated test scenarios. To learn more about how to do this, contact us.

Figure 9. Using the Cloud with SystemLink Software

Record and Playback​

If you live in a city that tests AVs, you may have seen those dressed-up cars navigating the road with test drivers hovering their hands over the wheel. Those mule vehicles rack up millions of miles so that engineers can verify their software. There are many steps to validating software with road-and-track test.
The most prevalent methodology for validating embedded software is to record a bunch of real-world sensor information through the sensors placed on vehicles. This is the highest-fidelity way to provide software-under-test sensor data, as it is actual, real-world data. The vehicle can be in autonomous mode or non-autonomous mode. It is the engineer's job to equip the vehicle with a recording system that stores massive amounts of sensor information without impeding the vehicle. A representative recording system is shown in Figure 10:

Figure 10. Vehicle Recording System
Once data records onto large data stores, it needs to move to a place where engineers can play with it. The process of moving the data from a large RAID array to the cloud or on-premise storage is a challenge, because we're talking about moving tens, if not hundreds, of terabytes to storage as quickly as possible. There are dedicated copy centers and server-farm-level interfaces that can help accomplish this.
Engineers then face the daunting task of classifying or labeling stored data. Typically, companies pay millions of dollars to send sensor data to buildings full of people that visually inspect the data and identify things such as pedestrians, cars, and lane markings. These identifications serve as "ground truth" for the next step of the process. Many companies are investing heavily in automated labeling that would ideally eliminate the need for human annotators, but that technology is not yet feasible. As you might imagine, sufficiently developing that technology would greatly reduce the effort of testing embedded software that classifies data, resulting in much more confidence in AVs.
After data has been classified and is ready for use, engineers play it back into the embedded software, typically on a development machine (open-loop software test), or on the actual hardware (open-loop hardware test). This is known as open-loop playback because the embedded software is not able to control the vehicle—it can only identify what it sees, which is then compared against the ground truth data.
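Open-loop playback then amounts to streaming recorded, labelled frames through the software under test and scoring each result against the stored labels. The record format and the `perception_stack` callable below are assumptions, and the recall helper is reused from the SIL sketch above.

```python
# Sketch of open-loop playback: recorded frames go in, detections come out,
# and each result is scored against the human-labelled ground truth.
# `perception_stack` and the record format are hypothetical; the software
# under test never controls a vehicle here.
def open_loop_playback(records, perception_stack, threshold=0.5):
    scores = []
    for record in records:                                   # e.g. dicts from a drive log
        detections = perception_stack(record["sensor_data"])
        scores.append(
            recall_against_ground_truth(detections, record["labels"], threshold)
        )
    return sum(scores) / len(scores) if scores else None
```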

Figure 11. ADAS Data Playback Diagram
One of the more cutting-edge things engineers do is convert real-world sensor data into their simulation environment so that they can make changes to their prerecorded data. This way, they can add weather conditions that the recorded data didn't see, or other scenarios that the recording vehicle didn't encounter. While this provides high-fidelity sensor data and test-case breadth, it is quite complex to implement. It does provide limited capability to perform closed-loop tests, such as SIL, with real-world data while controlling a simulated vehicle.
Mule vehicles equipped with recording systems are often very expensive and take significant time and energy to deploy and rack up miles. Plus, you can't possibly encounter all of the various scenarios you need to validate vehicle software. This is why you see significantly more tests performed in simulation.

HIL Test​

Once there's an SIL workflow in place, it's easier to transition to HIL, which runs the same tests with the software onboard the hardware that eventually makes it into the vehicle. You can take the existing SIL workflow and cut communication between the simulator and the software under test. And you can add an abstraction layer between the commands sent to and from the simulator and hardware that has sensor and network interfaces to communicate with the compute platform under test. The commands talking to the hardware must execute in real time to be validated appropriately. You can take those same sensor interfaces described in the AV Functional Test section and plug them into the SIL system with the real-time software abstraction layer and create a true HIL tester.
You can execute perception and sensor fusion HIL tests either by directly injecting into the sensor interfaces, or, with the sensor in the loop, providing an emulated over-the-air interface to the sensors, as shown in Figure 12.
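The abstraction-layer idea is that the same scenario runner can drive either the software stack in-process (SIL) or the real compute platform through sensor-injection and network hardware (HIL). The classes and method names below are illustrative only, not NI's or anyone's actual tooling.

```python
from abc import ABC, abstractmethod

class SensorTarget(ABC):
    """One interface for the system under test, whether simulated or real."""
    @abstractmethod
    def inject_frame(self, sensor_id: str, data: bytes) -> None: ...
    @abstractmethod
    def read_detections(self) -> list: ...

class SilTarget(SensorTarget):
    def __init__(self, software_stack):
        self.stack = software_stack              # runs on the development machine
    def inject_frame(self, sensor_id, data):
        self.stack.feed(sensor_id, data)         # hypothetical in-process call
    def read_detections(self):
        return self.stack.latest_output()

class HilTarget(SensorTarget):
    def __init__(self, sensor_interface, network_interface):
        self.sensors = sensor_interface          # real-time sensor-injection hardware
        self.network = network_interface         # CAN/ENET link to the compute platform
    def inject_frame(self, sensor_id, data):
        self.sensors.transmit(sensor_id, data)   # must meet real-time deadlines
    def read_detections(self):
        return self.network.receive_objects()

def run_scenario(target: SensorTarget, frames) -> list:
    """Same scenario runner for SIL and HIL: only the target implementation changes."""
    results = []
    for sensor_id, data in frames:
        target.inject_frame(sensor_id, data)
        results.append(target.read_detections())
    return results
```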

Figure 12. Closed-Loop Perception Test
Each of these processes—from road test to SIL to HIL—employs a similar workflow. For more information about this simulation test platform, contact us.

Conclusion​

Now that you understand how to test AV compute platform perception and sensor fusion systems, you may want a supercomputer as the brain of your AV. Know that, as a new market emerges, there are uncertainties. NI offers a software-defined platform that helps solve functional testing challenges to validate the custom computing platform and test automotive network protocols, computer protocols, sensor interfaces, power consumption, and pin measurements. Our platform flexibility, I/O breadth, and customizability cover not only today's testing requirements to bring the AV to market, but can help you swiftly adapt to tomorrow's needs.
 
  • Like
  • Fire
Reactions: 19 users