BRN Discussion Ongoing

manny100

Top 20
  • Like
Reactions: 2 users

Diogenese

Top 20
Look at this! I think we could be onto a goer here! Or at the very least it's definitely something we should keep an eye on from a competition point of view IMO.


Background:
  • Sonova acquired Sennheiser in 2021.
  • VALEO and Sennheiser have been working together, combining their technologies in an immersive sound system for a demo car showcased at CES 2024.

Sonova's new hearing aids:
  • Will be available 2025
  • Incorporate real-time artificial intelligence
  • Can learn from the user
  • Can reduce and cancel unwanted noise



View attachment 70997


View attachment 71002







View attachment 71004



View attachment 71003



View attachment 71001
Hi Bravo,

Probably a competitor.

Sonova have been dabbling with NNs since at least 2006.

Their latest patent does not use PICO:

US12108219B1 - Processing chip for processing audio signals using at least one deep neural network in a hearing device (2023-03-24)

1728867939468.png



A processing chip for processing audio signals using at least one deep neural network (DNN) in a hearing device comprises a first compute unit having a hardware architecture adapted for processing one or more convolutional neural network layers of the at least one DNN, a second compute unit having a hardware architecture adapted for processing one or more recurrent neural network layers of the at least one DNN, a control unit for directing the first and second compute units when to compute a respective layer of the at least one DNN, a shared memory unit for storing data to be processed in respective layers of the at least one DNN, and a data bus system for providing access to the shared memory unit for each of the first and the second compute units.
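For anyone who wants to picture how that claim hangs together, here is a rough, purely illustrative Python sketch of the dispatch idea the abstract describes: a control unit routes convolutional layers to one compute unit and recurrent layers to another, with both reading and writing a shared buffer. All class and function names are invented for illustration and say nothing about Sonova's actual design.

```python
# Illustrative only: a toy model of the dual-compute-unit dispatch described
# in the abstract. Names (CNNUnit, RNNUnit, ControlUnit, SharedMemory) are invented.

import numpy as np

class SharedMemory:
    """Stands in for the shared memory unit both compute units access via the bus."""
    def __init__(self, activations):
        self.activations = activations  # current layer input/output

class CNNUnit:
    def run(self, layer, mem):
        # Placeholder for convolutional-layer hardware: here just a matmul + ReLU.
        mem.activations = np.maximum(layer["weights"] @ mem.activations, 0.0)

class RNNUnit:
    def __init__(self):
        self.state = None
    def run(self, layer, mem):
        # Placeholder for a recurrent-layer unit keeping internal state across calls.
        if self.state is None:
            self.state = np.zeros(layer["weights"].shape[0])
        self.state = np.tanh(layer["weights"] @ mem.activations
                             + layer["recurrent"] @ self.state)
        mem.activations = self.state

class ControlUnit:
    """Directs which compute unit handles each layer, as the abstract describes."""
    def __init__(self):
        self.cnn, self.rnn = CNNUnit(), RNNUnit()
    def process(self, layers, audio_frame):
        mem = SharedMemory(audio_frame)
        for layer in layers:
            unit = self.cnn if layer["kind"] == "conv" else self.rnn
            unit.run(layer, mem)
        return mem.activations

# Tiny usage example with random weights.
rng = np.random.default_rng(0)
layers = [
    {"kind": "conv", "weights": rng.normal(size=(16, 32))},
    {"kind": "rnn",  "weights": rng.normal(size=(8, 16)), "recurrent": rng.normal(size=(8, 8))},
]
out = ControlUnit().process(layers, rng.normal(size=32))
print(out.shape)  # (8,)
```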
 
  • Like
  • Sad
  • Wow
Reactions: 13 users

Guzzi62

Regular
How does that fit with Anil's reaction?

View attachment 70984

Maybe he is just being polite?


But of course, I also hope it's us, but nothing is guaranteed yet!
 
  • Like
Reactions: 2 users

Esq.111

Fascinatingly Intuitive.
[Quoting the Sonova / Sennheiser hearing aids post above]
Morning Chippers ,

Hearing Aids .

Saw this company's announcement a little while ago: AUDEARA LIMITED, Australian listed (code: AUA), tiny market cap of AU$7 million give or take. They recently picked up a contract... dated 8th October.

Patent sleuth required for a gentle probing???


Audeara Limited (AUA) is a hearing health leader specialising in innovative listening solutions for people with hearing challenges. The company is focused on redefining hearing health, particularly on delivering products that provide world-class tailored listening experiences. All Audeara products are proudly designed and engineered in Australia.

Company Details

Name: Audeara Limited
ACN: 604 368 443
Chairman: David Trimboli
MD: Dr James Alexander Fielding
Address: 35 Brookes St, Bowen Hills, QLD 4006
Website: www.audeara.com
Dividend Reinvestment: No
DRP Status: None
Investor Relations Name: Henry Jordan

Share Registry Details

Principal Registry: Computershare Investor Services Pty Ltd
Address: Yarra Falls, 452 Johnston St, Abbotsford, VIC 3067
Postal Address: GPO Box 3224, Melbourne, VIC 3001
Phone: +61 3 9415 4000
Fax: +61 3 9473 2500
Investor Enquiries: +61 3 9415 4000
Toll Free: 1300 787 272
Email: enquiry@computershare.com.au


Regards ,
Esq.

Edit... My limited search came up with three patents which, to my eye, don't use our tech, but who knows. (A rough, illustrative sketch of the per-frequency compensation idea from the first patent follows the search results below.)




Search results: 3 results found
1. Customizable Personal Sound Delivery System
US2017046120A1 • 2017-02-16 •
AUDEARA PTY LTD
Earliest priority: 2015-06-29 • Earliest publication: 2016-07-07
A sound delivery system includes a processing assembly with a user interface coupled to the at least one processing assembly. At least one audio transducer is provided for delivering sound to a user. The audio transducer is responsive to the processing assembly. Typically the audio transducer is a loudspeaker of a pair of headphones or earbuds, though it may also be a bone conduction transducer. The at least one processing assembly is arranged to determine compensatory weights at each of a number of audio frequencies for the user on the basis of user responses via the interface to sounds delivered via the audio transducer and to deliver audio signals to the user modified in accordance with the determined weights via the audio transducer.
2. Calibration Method for Customizable Personal Sound Delivery System
US10936277B2 (A1) • 2021-03-02 •
AUDEARA PTY LTD
Earliest priority: 2015-06-29 • Earliest publication: 2018-12-20
A method (100) for calibrating a sound delivery system (1) having a processing assembly, a data communications assembly (9) coupled to the processing assembly, and at least one audio transducer (21a, 21b) mounted with at least one processor (11) of the processing assembly and responsive thereto for delivering sound to a user (3), the method including the steps of: transmitting from a remote user interface device (6) for the sound delivery system, a sequence of command codes for specifying predetermined characteristics of test sounds; receiving the command code sequence at the communications assembly of the sound delivery system; providing the command code sequence to the processing assembly of the sound delivery system; reproducing by a selected at least one audio transducer, the predetermined test sounds under control of said at least one processor according to the command code sequence; measuring with a reference SPL meter (70) proximate to the audio transducer, characteristics of test sounds reproduced by the sound delivery system; comparing the measured characteristics of the reproduced sounds with the predetermined characteristics of the test sounds; producing a mapping of specified test sounds to sounds reproduced by said at least one audio transducer; and storing the mapping in an electronic memory (12, 82) associated with the processing assembly or remote interface device (6).
3. CALIBRATION METHOD FOR CUSTOMIZABLE PERSONAL SOUND DELIVERY SYSTEMS
EP3827598A1 (A4) • 2021-06-02 •
AUDEARA PTY LTD
Earliest priority: 2018-07-23 • Earliest publication: 2020-01-30
No abstract available
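As a rough illustration of what the first patent describes (compensatory weights determined per audio frequency from the user's responses, then applied to the audio), here is a minimal, hypothetical Python sketch of that general idea: pick a gain per frequency band and apply it to the signal's spectrum. The band edges, gain values and function names are all invented; this is not Audeara's implementation.

```python
# Minimal sketch of per-frequency compensation along the lines of US2017046120A1:
# weights are chosen per frequency band (here hard-coded; in the patent they come
# from the user's responses to test sounds) and applied to the signal's spectrum.
# Band edges and gains are invented for illustration.

import numpy as np

def apply_compensation(signal, sample_rate, band_edges_hz, band_gains_db):
    """Scale each frequency band of `signal` by its compensatory gain."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    gains = np.ones_like(freqs)
    for (lo, hi), gain_db in zip(band_edges_hz, band_gains_db):
        mask = (freqs >= lo) & (freqs < hi)
        gains[mask] = 10 ** (gain_db / 20.0)   # dB -> linear amplitude
    return np.fft.irfft(spectrum * gains, n=len(signal))

# Example: boost the bands where a (hypothetical) user heard the test tones poorly.
sr = 16_000
t = np.arange(sr) / sr
tone_mix = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 4000 * t)
out = apply_compensation(
    tone_mix, sr,
    band_edges_hz=[(0, 1000), (1000, 4000), (4000, 8000)],
    band_gains_db=[0.0, 3.0, 9.0],             # illustrative weights only
)
print(out.shape)  # (16000,)
```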
 
Last edited:
  • Like
  • Thinking
  • Fire
Reactions: 13 users

Diogenese

Top 20
If the market is always right, why does the answer change so often?
 
  • Haha
  • Like
  • Fire
Reactions: 14 users

7für7

Top 20
If the market is always right, why does the answer change so often?
There are situations where the question is mostly the answer!
 
  • Like
Reactions: 1 users

Diogenese

Top 20
Is it just me, or is ASX in witness protection?
 
  • Like
  • Thinking
  • Haha
Reactions: 8 users

7für7

Top 20
  • Haha
Reactions: 3 users

TheDrooben

Pretty Pretty Pretty Pretty Good
Some nice healthy sideways movement over the last couple of trading days. Getting ready for the next rise.........

044ded84-7ed8-4346-88e5-6eeb49973548_text.gif



Happy as Larry
 
  • Haha
  • Like
  • Love
Reactions: 20 users

Tothemoon24

Top 20
IMG_9751.jpeg


Neuromorphic computing has considerable potential for us across many areas.

By mimicking the functionality of the human brain, it can make next-generation AI computation considerably faster and more energy efficient.

One current research project is NAOMI4Radar funded by the German Federal Ministry for Economic Affairs and Climate Action. As consortium leader, we are working with partners to assess how neuromorphic computing can be used to optimise the processing chain for radar data in automated driving systems.

Current Mercedes-Benz models use frontal radar to see 200 metres in front of them. For instance, our DRIVE PILOT system uses radar data as one of its many sources for enabling conditionally automated driving.

The aim of the NAOMI4Radar project is to demonstrate that neuromorphic computing can bring fundamental benefits to future generations of automated and autonomous driving systems.

But as I said, this is just one current research project. More on that soon.
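As a rough idea of what "the processing chain for radar data" typically involves, here is a tiny, generic range-Doppler sketch in Python. It is purely illustrative, with synthetic data and invented thresholds, and says nothing about what NAOMI4Radar actually does; it simply shows the kind of conventional pipeline such projects explore making more efficient.

```python
# Sketch of a conventional FMCW radar processing chain (range FFT -> Doppler FFT
# -> simple thresholding). Synthetic data; all numbers are invented.

import numpy as np

rng = np.random.default_rng(0)
n_chirps, n_samples = 64, 128

# Synthetic beat signal: one target at range bin ~20, Doppler bin ~10, plus noise.
chirp_idx = np.arange(n_chirps)[:, None]
sample_idx = np.arange(n_samples)[None, :]
cube = (np.exp(2j * np.pi * (20 / n_samples) * sample_idx)
        * np.exp(2j * np.pi * (10 / n_chirps) * chirp_idx)
        + 0.5 * (rng.normal(size=(n_chirps, n_samples))
                 + 1j * rng.normal(size=(n_chirps, n_samples))))

range_fft  = np.fft.fft(cube, axis=1)                                  # per-chirp range FFT
range_dopp = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)    # Doppler FFT
power_db   = 20 * np.log10(np.abs(range_dopp) + 1e-12)

# Crude detection: anything 15 dB above the median counts as a target cell.
detections = np.argwhere(power_db > np.median(power_db) + 15)
print(f"{len(detections)} cells above threshold, e.g. (doppler, range) = {detections[0]}")
```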

IMG_9752.jpeg

Loihi 2 - smoke screen or smoked 🚭



Over 12 months ago Mercedes served this up; fingers crossed our time has arrived ⬇️



IMG_9753.jpeg



My new “In the Loop” series kicks off with #Neuromorphic Computing – the clear winner of my poll a few weeks ago.

For those unfamiliar, this highly significant field of computing strives to emulate the multi-tasking of the human brain. Traditional microprocessors function sequentially. However, as the complexity and scale of calculations soar, this way of doing things is rapidly running out of road.

The idea is not new, but trying to “put a brain on a chip” is a mammoth task. To put it into figures: the human brain has 86-100 billion neurons operating on around 20 watts. Current neural chips from leading developers such as BrainChip and Intel Corporation contain around 1 million neurons and consume roughly 1 watt of power.

So, you see, despite impressive advances, there is still a very long way to go. Neuromorphic computing goes well beyond chip design and includes a specific kind of artificial neural network called #spikingneuralnetworks (SNN). They consume far less energy because the neurons are silent most of the time, only firing (or spiking) when needed for events.

Together with intense parallel execution on neuromorphic chips, the new processing principles require us to go beyond the application of existing #AI frameworks to neuromorphic chips. We have to fundamentally rethink the algorithms that ultimately enable future AI functions in our cars, gathering joint inspiration from machine learning, chip design and neuroscience. Our experts are working closely with our partners to examine their potential in new applications.

The thing is, even a tiny fraction of the thinking capacity of the human brain can go a long way in several fields that are extremely relevant to automotive applications. Examples include advanced driving assistance systems #ADAS as well as the on-board analysis of speech and video data, which can unlock major advances in how we communicate with our cars.

We already made some interesting findings here with our VISION EQXX, where we applied neuromorphic principles to the “Hey Mercedes” hot-word detection. That alone made it five to ten times more energy efficient than conventional voice control. As AI and machine learning take on an increasingly important role in the software-defined vehicle, the energy this consumes is likely to become a critical factor.

I’ll touch on our latest findings in an upcoming “In the Loop” and tell you my thoughts on where this is taking us.

In the meantime, for those of you interested in reading up on neuromorphic computing, check out the slider for my recommended sources. I’ve graded them to ensure there’s something for everyone, from absolute beginner to true geeks.
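To put a tiny bit of code behind the "silent most of the time, only firing when needed" idea in the post above, here is a generic leaky integrate-and-fire sketch in Python. It is purely illustrative, with arbitrary constants, and is not tied to Akida, Loihi or any other specific chip.

```python
# Generic leaky integrate-and-fire neuron: the membrane potential leaks over time,
# integrates incoming current, and only emits a spike (an event) when it crosses
# a threshold. Most timesteps produce no output at all, which is the sparsity
# that neuromorphic hardware exploits. All constants are arbitrary.

import numpy as np

def lif_neuron(input_current, threshold=1.0, leak=0.95, reset=0.0):
    potential, spikes = 0.0, []
    for i in input_current:
        potential = leak * potential + i      # leak, then integrate
        if potential >= threshold:            # fire only when needed
            spikes.append(1)
            potential = reset
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(1)
current = rng.uniform(0.0, 0.3, size=1000)    # weak, noisy input
spikes = lif_neuron(current)
print(f"{spikes.sum()} spikes in {spikes.size} timesteps "
      f"({100 * spikes.mean():.1f}% active)")

# Rough sense of the gap mentioned above: ~86e9 neurons / ~20 W for the brain
# versus ~1e6 neurons / ~1 W for current neuromorphic chips, i.e. roughly a
# 4,000x difference in neurons per watt.
```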
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 28 users

IloveLamp

Top 20
1000019057.jpg
 
  • Like
  • Love
  • Fire
Reactions: 13 users

Frangipani

Top 20
Maybe, just maybe, we’ll find out a teeny-weeny bit more about the current status of MB’s neuromorphic research later this week:

I checked out the website of Hochschule Karlsruhe (Karlsruhe University of Applied Sciences aka HKA) - since Markus Schäfer mentioned in his post they were collaborating with them on event-based cameras - and discovered an intriguing hybrid presentation by Dominik Blum, one of MB’s neuromorphic researchers, titled “Intelligente Fahrassistenzsysteme der Zukunft: KI, Sensorik und Neuromorphes Computing” (“Future Intelligent ADAS: AI, Sensor Technology and Neuromorphic Computing”).

The upcoming presentation is part of this week’s Themenwoche Künstliche Intelligenz, a week (Mon-Thu to be precise) devoted to AI, with numerous, mostly hybrid presentations from various HKA research areas (both faculty and external speakers will present), held daily between 5.15 pm and 8.30 pm.

Oct 17 is devoted to the topic of AI & Traffic:


A7B412D4-0DB8-4001-B434-00BEB7C0A144.jpeg


7F5F9451-2C45-4914-8A2E-4894A98A2295.jpeg


If you speak German (or even if you don’t, but are nevertheless interested in the presentation slides) and live in a compatible time zone, you may want to join the following livestream on Oct 17, at 5.15 pm (CEST):







(Since similar June AI Day presentations were recorded and uploaded to the HKA website, I assume this will also apply to the AI Week presentations.)

D4DFFB79-1141-4179-94EB-3E6D01A34059.jpeg

AB478315-521B-4C14-A87A-903D778B68BD.jpeg



The reference to NMC (neuromorphic computing) being considered a "möglicher Lösungsweg" (possible/potential solution) suggests to me - once again - that Mercedes-Benz is nowhere near implementing neuromorphic technology at scale in series-production cars.


2BC2769F-3564-4E86-8B50-1B2DED262AE1.jpeg


F8A861F5-B86E-4CDC-A398-106C346772E9.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 21 users

Frangipani

Top 20
[Quoting Markus Schäfer's neuromorphic computing posts from Tothemoon24's post above]

“The NAOMI4Radar project uses the “Loihi 2” Intel chip with up to 1 million neurons and up to 120 million synapses.”

D854AA6A-00EB-4C87-998E-DE42E0CD712B.jpeg
 
  • Like
  • Sad
Reactions: 7 users

Tothemoon24

Top 20
  • Like
Reactions: 7 users

7für7

Top 20
[Quoting Markus Schäfer's neuromorphic computing posts from Tothemoon24's post above]
Loihi? 🧐 hmmm
 

Guzzi62

Regular
“The NAOMI4Radar project uses the “Loihi 2” Intel chip with up to 1 million neurons and up to 120 million synapses.”

View attachment 71040
That Loihi 2 from Intel is a research chip.

The Akida chip used in the M.B. project car was for something else; was it the screen control and power management?

So the above is just something they are fiddling around with, years away from being in a car.

 
  • Like
  • Love
  • Fire
Reactions: 16 users

Frangipani

Top 20
“The NAOMI4Radar project uses the “Loihi 2” Intel chip with up to 1 million neurons and up to 120 million synapses.”

View attachment 71040

This was uploaded to the website of Uni Lübeck, where MB is collaborating with Sebastian Otte (who leads the Adaptive AI research group at the Institute for Robotics and Cognitive Systems) and his doctoral student Saya Higuchi on the NAOMI4Radar project:







Autonomous driving: Intelligent sensors through innovative neural networks

1728892201287.jpeg


Dynamic representation of innovative radar sensors and neural networks for autonomous driving (image: Anja Stähle, generated with Adobe Firefly)

Prof. Sebastian Otte is involved in the NAOMI4Radar project and is developing energy-efficient radar sensors with project partners from industry and universities.


Autonomous vehicles require precise sensors for fast and reliable detection of their surroundings. In the NAOMI4Radar project, a research team from the University of Lübeck led by Prof. Sebastian Otte is working together with Mercedes-Benz AG, TWT GmbH Science & Innovation, Intel Deutschland GmbH and the Technical University of Munich on energy-efficient radar sensor technology. The use of neuromorphic computing and spiking neural networks (SNNs) is intended to optimize battery life, shorten reaction times and increase safety. The project is funded by the Federal Ministry for Economic Affairs and Climate Protection (BMWK) and supported by the project sponsor TÜV Rheinland.

Autonomous vehicles require precise sensors in order to react quickly and reliably to their environment. Current research aims to improve the energy efficiency of sensor data processing in order to maximize battery life and reduce CO₂ emissions, for example. Prof. Sebastian Otte from the Institute of Robotics and Cognitive Systems and his team are developing innovative solutions in this area. In the NAOMI4Radar project, the team at the University of Lübeck is working together with Mercedes-Benz AG, TWT GmbH Science & Innovation and associated partners Intel Deutschland GmbH and the Technical University of Munich to optimize radar sensor technology for autonomous vehicles through the use of neuromorphic computing. This innovative technology is based on the way the human brain works and enables energy-efficient and fast processing of sensor data. The Lübeck research team will receive funding of around 166,000 euros.

Neural networks for radar data processing
The developments in the field of artificial neural networks, which were awarded the Nobel Prize in Physics in 2024, are also a key technology in radar data processing in the NAOMI4Radar project. The aim of the project is to make radar data processing more efficient using spiking neural networks (SNNs), a special form of neural network. In comparison to conventional AI algorithms, SNNs offer the advantage that individual neurons only become active when they are actually needed. This potential can be exploited by using them in neuromorphic processors, as envisaged in Intel's Loihi 2. This not only reduces energy consumption, but in principle also enables autonomous vehicles to react more quickly, thereby increasing road safety.

Energy-saving neuron models
Prof. Otte and his team are focusing in particular on the further development of the Balanced Resonate-and-Fire (BRF) model, whose special properties make it particularly interesting for the efficient processing of radar data. Efficiency is to be increased even further by using biologically inspired sparse coding approaches. Sparse coding aims to improve the robustness of neural networks, for example to make them more fault-tolerant.

At the same time, the activity, i.e. the amount of spikes circulating in the network, is reduced to a minimum. In collaboration with the project partners, a complete integration of neuromorphic radar data processing is to be realized and tested in a prototype vehicle.

The project, which will run until August 2025, is funded by the Federal Ministry for Economic Affairs and Climate Protection (BMWK). The University of Lübeck is contributing its expertise in the field of artificial intelligence and neuromorphic algorithms to this practice-oriented research project, thereby contributing to the development and testing of sustainable AI solutions in an industrial context.


Original publication on the Lübeck BRF model
Higuchi et al. Balanced Resonate-and-Fire Neurons. Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria, 2024.


Contact:
Prof. Sebastian Otte
Adaptive AI Research Group
Institute for Robotics and Cognitive Systems
University of Lübeck
Email: sebastian.otte(at)uni-luebeck(dot)de


(Translated by DeepL)
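For anyone curious what a resonate-and-fire style neuron looks like in code, here is a generic, discretised sketch in Python. It follows the classic resonate-and-fire idea (a damped complex oscillator that spikes when its imaginary part crosses a threshold, in the spirit of Izhikevich's model), not the exact Balanced Resonate-and-Fire formulation from Higuchi et al., and all constants are arbitrary.

```python
# A generic resonate-and-fire neuron sketch (NOT the exact BRF model from
# Higuchi et al., ICML 2024). The state is a damped complex oscillator, so the
# neuron responds selectively to inputs near its resonant frequency and only
# spikes when the oscillation gets large enough. Constants are illustrative.

import numpy as np

def resonate_and_fire(inputs, dt=1e-3, decay=-10.0, omega=2 * np.pi * 50.0, threshold=1.0):
    z = 0.0 + 0.0j                                # oscillatory membrane state
    kernel = np.exp(dt * (decay + 1j * omega))    # decay + rotation per timestep
    spikes = []
    for drive in inputs:
        z = kernel * z + dt * drive               # integrate the input
        if z.imag >= threshold:                   # fire only on a large swing
            spikes.append(1)
            z = 0.0 + 0.0j                        # simple reset
        else:
            spikes.append(0)
    return np.array(spikes)

# Drive the neuron at its resonant frequency (50 Hz) versus well off it (5 Hz).
t = np.arange(0, 1.0, 1e-3)
at_resonance  = resonate_and_fire(30.0 * np.sin(2 * np.pi * 50 * t))
off_resonance = resonate_and_fire(30.0 * np.sin(2 * np.pi * 5 * t))
print(at_resonance.sum(), "spikes at 50 Hz vs", off_resonance.sum(), "spikes at 5 Hz")
```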



356F2F0C-A2E5-44D5-8C62-EFADCC5796DA.jpeg
 
  • Like
  • Fire
Reactions: 6 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
[Quoting Markus Schäfer's neuromorphic computing posts from Tothemoon24's post above]


It'll be very interesting to see how this plays out.

As one person noted in the comments on Markus Schäfer's LinkedIn post, the first mover will likely gain the advantage, i.e. the greatest market share.

What we do know is that at present Loihi 2 is still a research chip, and in the Smarter Cars article cited below, dated December 2020, Mike Davies stated he thought commercialisation of Loihi 2 would be 5 years from the date of that article, which is approximately December 2025.

If we compare Loihi 2 with AKIDA, we can see AKIDA has the edge (no pun intended) in terms of the number of neurons and synapses.

AKIDA: 1.2 million neurons and 10 billion synapses VERSUS
Loihi 2: 1 million neurons and 120 million synapses

And obviously BrainChip has a commercially available product which Intel doesn't at this particular point in time.
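For what it's worth, the arithmetic behind that comparison (using the figures quoted above) is a trivial Python check:

```python
# Quick check of the neuron/synapse comparison quoted above.
akida  = {"neurons": 1.2e6, "synapses": 10e9}
loihi2 = {"neurons": 1.0e6, "synapses": 120e6}

for name, chip in [("AKIDA", akida), ("Loihi 2", loihi2)]:
    print(f"{name}: {chip['synapses'] / chip['neurons']:.0f} synapses per neuron")

print(f"Synapse count ratio (AKIDA / Loihi 2): {akida['synapses'] / loihi2['synapses']:.0f}x")
```

On those numbers that works out to roughly 8,300 synapses per neuron for AKIDA versus about 120 for Loihi 2, or around 83 times more synapses in total.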


Screenshot 2024-10-14 at 7.12.52 pm.png


Screenshot 2024-10-14 at 7.11.35 pm.png

Screenshot 2024-10-14 at 7.09.47 pm.png

Screenshot 2024-10-14 at 7.06.02 pm.png
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 25 users
Top Bottom