BRN Discussion Ongoing

itsol4605

Regular
I am not so sure whether last week’s announcement by Mercedes-Benz is really a reason for BRN shareholders to celebrate (other than the fact that neuromorphic computing is again confirmed to be a promising technology), given that Chris Eliasmith, who leads neuromorphic research at the University of Waterloo and is the co-founder and CTO of ABR (Applied Brain Research) (https://www.appliedbrainresearch.com/), another company in the Edge AI space, no longer appears to be close to BrainChip.

This is what @uiux shared two years ago:

View attachment 70980



ABR seems more like a competitor in the Edge AI space to me?

View attachment 70981
View attachment 70982



ABR demonstrates the world’s first single chip solution for full vocabulary speech recognition​



ABR-Press-Release.webp

SAN JOSE, CA, [Sep 9] – Applied Brain Research (ABR), a leader in the development of AI solutions, is demonstrating the world’s first self-contained single-chip speech recognition solution at the AI Hardware and Edge AI Summit this week. This is an unveiling of the technology integrated into ABR’s first time series processor chip, the TSP1, capable of performing real-time low latency automatic speech recognition.

The solution employs ABR’s innovations at several levels of the technology. It starts with the world’s first patented state-space network, the Legendre Memory Unit (LMU), that is a breakthrough in efficient computation for time series processing. Next, the networks are trained and compiled using ABR’s advanced full-stack toolchain. And finally, the network runs on ABR’s proprietary computational neural fabric that greatly reduces power consumption through reduction in data movement within the chip.
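For those curious what a state-space network like the LMU actually computes, here is a minimal NumPy sketch of the memory update described in the published LMU paper (Voelker et al., 2019). The matrix definitions and the forward-Euler step are assumptions based on that paper, not on ABR's TSP1 silicon:

```python
import numpy as np

def lmu_matrices(order: int):
    """Fixed (A, B) state-space matrices of the Legendre Memory Unit
    (Voelker et al., 2019). They are derived, not learned."""
    q = np.arange(order)
    r = (2 * q + 1)[:, None]
    i, j = np.meshgrid(q, q, indexing="ij")
    A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * r
    B = ((-1.0) ** q)[:, None] * r
    return A, B

def lmu_memory(u, theta=1.0, order=4, dt=0.01):
    """Integrate theta * m' = A m + B u with forward Euler.
    m holds Legendre coefficients of a sliding window of u."""
    A, B = lmu_matrices(order)
    m = np.zeros((order, 1))
    states = []
    for u_t in u:
        m = m + (dt / theta) * (A @ m + B * u_t)
        states.append(m.copy())
    return np.stack(states)

states = lmu_memory(np.sin(np.linspace(0, 2 * np.pi, 100)), order=4)
```

The point of the construction is that the memory state compresses a sliding window of the input into a few polynomial coefficients, which is what makes time series processing cheap compared with attention or large recurrent states.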

“What ABR is showcasing today has been five years in the making starting with our earliest observations of how the brain processes memories which led to the state space network model that we derived from that study and subsequently patented,” said Dr. Chris Eliasmith, ABR’s co-founder and CTO. “From that starting point, we have innovated at every level of the technology stack to do what has never before been possible for speech processing in low-powered edge devices.”

“ABR’s TSP1 is going to revolutionize how time series AI is integrated into devices at the edge,” said Kevin Conley, ABR’s CEO. “We are showcasing the fastest, most accurate self-contained speech recognition solution ever produced, with both English and Mandarin versions. The TSP1 will deliver these capabilities at 100X lower power than currently available edge GPU solutions. And speech recognition, which we are actively engaged with customers to develop, is only the first step in commercializing the potential of this technology.”

ABR’s TSP1 is a single-chip solution for time series inference applications like real-time speech recognition (including keyword spotting), realistic text-to-speech synthesis, natural language control interfaces and other advanced sensor fusion applications. The TSP1 integrates neural processing fabric, CPU, sensor interfaces and on-chip NVM for a self-contained, easy-to-integrate solution. The TSP1 is supported by an advanced no-code network development toolchain, making it the easiest time series solution on the market to develop and deploy.

ABR has a booth in the Startup Village at the AI Hardware and Edge AI Summit at the Signia by Hilton in San Jose, CA from Sept 10-12.

About Applied Brain Research
Applied Brain Research Inc (ABR) is a pioneer in Artificial Intelligence technology founded by alumni of the Computational Neuroscience Research Group at the University of Waterloo. ABR is leading a new wave of product development targeting ultra-low power Edge AI, enabling a new level of capability in low-power critical applications. ABR’s revolutionary time-series AI processor uses 100x less power than other high-functionality edge AI hardware, and supports AI models up to 10-100x larger than other low-power edge AI hardware.
ABR, headquartered in Waterloo, Ontario, is a Silicon Catalyst Portfolio Company. More company and product information can be found at www.appliedbrainresearch.com.

How does this fit with Anil's reaction?

20241014_005210.jpg


 

Bravo

If ARM was an arm, BRN would be its biceps💪!

AI is driving demand for Edge computing​

Arun Shankar | 13 October, 2024
Pete Hall, Regional Managing Director EMEA, Ciena
AI promises to accelerate advanced automation to unprecedented levels creating a surge in demand for Edge computing. A substantial part of the demand for Edge computing will be comprised of GPU clusters to provide distributed AI inference, explains Pete Hall at Ciena.



Edge data centres are set to become an area of strategic growth as companies strive to minimise latency and enhance the end-user experience in the AI era. A forecast from IDC projects global spending on Edge computing to be $232 billion in 2024, an increase of 15.4% from last year.

In the Middle East, countries like the UAE and Saudi Arabia are investing in Edge data centres to support their digital ambitions and AI initiatives, addressing challenges related to application latency, data sovereignty, and sustainability of information and communications technologies.

In the past two decades, the world has seen an intense process of cloudification of IT infrastructure, with an increasing number of applications moving to the public cloud. The massive scale of cloud data centres with highly flexible consumption models has enabled a compelling business model for compute and storage workloads, effectively discouraging local builds.

However, centralised data processing means longer routes between users and content, and thus higher latency experienced by users accessing this content.

To remedy this issue, service providers have turned to content delivery network architectures, deploying cache servers closer to users, particularly targeting streaming video. This approach has been effective in improving the user experience for streaming services, while also offloading some heavy traffic flows from the network.

Nonetheless, it is only effective for frequently consumed repeatable data, like popular streaming videos, and not economically viable for random workloads.
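That hit-rate asymmetry is easy to demonstrate with a toy LRU cache (a deliberate simplification; real CDNs use far more sophisticated eviction policies and the catalogue sizes here are made up):

```python
import random
from collections import OrderedDict

class LRUCache:
    """Toy edge cache: keeps the `capacity` most recently requested items."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0
        self.misses = 0

    def request(self, key):
        if key in self.store:
            self.store.move_to_end(key)         # refresh recency on a hit
            self.hits += 1
        else:
            self.misses += 1
            self.store[key] = True
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict least recently used

    def hit_rate(self):
        return self.hits / (self.hits + self.misses)

random.seed(0)
popular = LRUCache(100)   # a handful of hot streaming titles, requested repeatedly
uniform = LRUCache(100)   # random, rarely repeated workloads
for _ in range(50_000):
    popular.request(random.randint(1, 20))
    uniform.request(random.randint(1, 10_000))
```

With the popular workload the cache serves nearly every request locally; with the random workload the hit rate collapses to roughly capacity divided by catalogue size, which is the economics problem described above.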

Although content delivery networks have been the most widespread use case of Edge computing, a prominent and largely expected application of Edge computing has been its potential to accelerate automation and machine orchestration.

AdobeStock_950021083_1000x450.jpg

Machine decisions that need to be tightly synchronised require very low latency, at a level that a centralised compute infrastructure cannot deliver.

As AI promises to accelerate advanced automation to unprecedented levels, we are on the verge of a surge in Edge compute demand. And most likely, a substantial part of that Edge compute demand will be comprised of GPU clusters to provide distributed AI inference.

By accelerating the build of decentralised compute infrastructure, the UAE and Saudi Arabia can bolster the performance of AI-driven applications and boost the competitiveness of the region in this flourishing field.

In addition to delivering lower latency, this infrastructure can also help sensitive data stay in the region. AI model training, fine-tuning, and inference deal with data that organisations may prefer to keep locally rather than send to a centralised location.

Even as core data centre buildouts continue to unfold across vast expanses of the world, the shift toward Edge data centres presents both challenges and opportunities. For instance, the environmental impact of data centres cannot be ignored. According to an International Energy Agency forecast, electricity consumption from data centres, cryptocurrencies, and Artificial Intelligence could double between 2022 and 2026.

Consequently, data centre projects are exploring various techniques to enhance sustainability in storage and processing to reduce the burden on existing power grids. This includes adopting the latest optical technology, implementing more efficient cooling methods, and utilising alternative power sources.

This is particularly critical in the Middle East, where there is heavy reliance on cooling systems to counter the effects of extreme heat. There is a shift to alternative power sources such as solar energy to enhance sustainability, with Masdar City in Abu Dhabi integrating sustainable practices into its data centre operations.

Delivering applications closer to the end user is a critical factor for AI applications. However, to realise these gains, the networks within, and between, data centres must be upgraded. Cutting-edge AI services cannot run inside everyday data centre servers; they need computers with high-performance graphics processing units (GPUs).

And those high-performance clusters of GPUs running AI services need high-speed networks to move AI-related data inside a data centre and then out to the wider world. Outside the site, high-speed and high-capacity data centre interconnect networks must remain front of mind for investment.

Regional telcos can capitalise on the proximity to end users and the ability to process data closer to the source to support a plethora of localised services. It will result in ever more responsive business decision-making and an explosion in service innovation.

AdobeStock_313384530_1000x450.jpg


Key takeaways

  • IDC projects global spending on Edge computing to be $232 billion in 2024, an increase of 15.4% from last year.
  • Centralised data processing means longer routes between users and content and higher latency experienced by users accessing content.
  • Service providers have turned to content delivery network architectures, deploying cache servers closer to users.
  • Content delivery networks are only effective for frequently consumed repeatable data, and not economically viable for random workloads.
  • Machine decisions that need to be tightly synchronised require very low latency that centralised compute infrastructure cannot deliver.
  • By accelerating buildup of decentralised compute infrastructure, UAE and Saudi Arabia can bolster performance of AI-driven applications.
  • A substantial part of that Edge compute demand will be comprised of GPU clusters to provide distributed AI inference.
  • Delivering applications closer to the end user is a critical factor for AI applications.
  • AI services cannot run inside everyday data centre servers and need computers with high-performance graphics processing units.
  • High-performance clusters of graphics processing units running AI services need high-speed networks to move AI-related data.
  • High-speed and high-capacity data centre interconnect networks must remain front of mind for investment.
  • Regional telcos can capitalise on the proximity to end users to process data closer to the source to support localised services.
  • The International Energy Agency forecasts that electricity consumption from data centres, cryptocurrencies, and Artificial Intelligence could double between 2022 and 2026.

Screenshot 2024-10-14 at 10.27.31 am.png



Screenshot 2024-10-14 at 10.27.58 am.png


Screenshot 2024-10-14 at 10.40.09 am.png
 
Last edited:

Bravo

If ARM was an arm, BRN would be its biceps💪!
Look at this! I think we could be onto a goer here! Or at the very least it's definitely something we should keep an eye on from a competition point of view IMO.


Background:
  • Sonova acquired Sennheiser in 2021.
  • VALEO and Sennheiser have been working together, combining their technologies for an immersive sound system in a demo car which was showcased at CES 2024.

Sonova's new hearing aids:
  • Will be available 2025
  • Incorporate real-time artificial intelligence
  • Can learn from the user
  • Can reduce and cancel unwanted noise



Screenshot 2024-10-14 at 10.58.38 am.png


Screenshot 2024-10-14 at 10.49.01 am.png







Screenshot 2024-10-14 at 11.16.42 am.png




Screenshot 2024-10-14 at 10.52.06 am.png




Screenshot 2024-10-14 at 11.01.07 am.png
 
Last edited:

7für7

Top 20
1728866211587.gif


Me relaxed and calm enjoying the ride while day traders try to catch the perfect entry and exit… „Oh no, I bought too early… Oh no, I sold too late… Oh no, I sold too early.“
 


Diogenese

Top 20
Look at this! I think we could be onto a goer here! Or at the very least it's definitely something we should keep an eye on from a competition point of view IMO.


Background:
  • Sonova acquired Sennheiser in 2021.
  • VALEO and Sennheiser have been working together, combining their technologies for an immersive sound system in a demo car which was showcased at CES 2024.

Sonova's new hearing aids:
  • Will be available 2025
  • Incorporate real-time artificial intelligence
  • Can learn from the user
  • Can reduce and cancel unwanted noise



View attachment 70997


View attachment 71002







View attachment 71004



View attachment 71003



View attachment 71001
Hi Bravo,

Probably a competitor.

Sonova have been dabbling with NNs since at least 2006.

Their latest patent does not use PICO:

US12108219B1 Processing chip for processing audio signals using at least one deep neural network in a hearing device 20230324

1728867939468.png



A processing chip for processing audio signals using at least one deep neural network (DNN) in a hearing device comprises a first compute unit having a hardware architecture adapted for processing one or more convolutional neural network layers of the at least one DNN, a second compute unit having a hardware architecture adapted for processing one or more recurrent neural network layers of the at least one DNN, a control unit for directing the first and second compute units when to compute a respective layer of the at least one DNN, a shared memory unit for storing data to be processed in respective layers of the at least one DNN, and a data bus system for providing access to the shared memory unit for each of the first and the second compute units.
 

Guzzi62

Regular
How does this fit with Anil's reaction?

View attachment 70984

Maybe he is just being polite?


But of course, I also hope it's us, but nothing is guaranteed yet!
 

Esq.111

Fascinatingly Intuitive.
Look at this! I think we could be onto a goer here! Or at the very least it's definitely something we should keep an eye on from a competition point of view IMO.


Background:
  • Sonova acquired Sennheiser in 2021.
  • VALEO and Sennheiser have been working together, combining their technologies for an immersive sound system in a demo car which was showcased at CES 2024.

Sonova's new hearing aids:
  • Will be available 2025
  • Incorporate real-time artificial intelligence
  • Can learn from the user
  • Can reduce and cancel unwanted noise



View attachment 70997


View attachment 71002







View attachment 71004



View attachment 71003



View attachment 71001
Morning Chippers ,

Hearing Aids .

Saw this company ann a little while ago , AUDEARA LIMITED ,Australian listed (Code . AUA ) , Tiny market cap AU$7 mill give or take , they recently picked up a contract...... Dated 8th October .

Patent Sleuth required for a gentle probing ???.


Audeara Limited (AUA) is a hearing health leader specialising in innovative listening solutions for people with hearing challenges. The company is focused on redefining hearing health, particularly on delivering products that provide world-class tailored listening experiences. All Audeara products are proudly designed and engineered in Australia.

Company Details

Name: Audeara Limited
ACN: 604 368 443
Chairman: David Trimboli
MD: Dr James Alexander Fielding
Address: 35 Brookes St, Bowen Hills, QLD 4006
Website: www.audeara.com
Dividend Reinvestment: No
DRP Status: None
Investor Relations Name: Henry Jordan
Share Registry Details

Principal Registry: Computershare Investor Services Pty Ltd
Address: Yarra Falls, 452 Johnston St, Abbotsford, VIC 3067
Postal Address: GPO Box 3224, Melbourne, VIC 3001
Phone: +61 3 9415 4000
Fax: +61 3 9473 2500
Investor Enquiries: +61 3 9415 4000
Toll Free: 1300 787 272
Email: enquiry@computershare.com.au


Regards ,
Esq.

Edit ... My limited search came up with three patents which, to my eye, don't use our tech but who knows.

Search results: 3 results found
1. Customizable Personal Sound Delivery System
US2017046120A1 • 2017-02-16 •
AUDEARA PTY LTD
Earliest priority: 2015-06-29 • Earliest publication: 2016-07-07
A sound delivery system includes a processing assembly with a user interface coupled to the at least one processing assembly. At least one audio transducer is provided for delivering sound to a user. The audio transducer is responsive to the processing assembly. Typically the audio transducer is a loudspeaker of a pair of headphones or earbuds, though it may also be a bone conduction transducer. The at least one processing assembly is arranged to determine compensatory weights at each of a number of audio frequencies for the user on the basis of user responses via the interface to sounds delivered via the audio transducer and to deliver audio signals to the user modified in accordance with the determined weights via the audio transducer.
2. Calibration Method for Customizable Personal Sound Delivery System
US10936277B2 (A1) • 2021-03-02 •
AUDEARA PTY LTD
Earliest priority: 2015-06-29 • Earliest publication: 2018-12-20
A method (100) for calibrating a sound delivery system (1) having a processing assembly, a data communications assembly (9) coupled to the processing assembly, and at least one audio transducer (21a, 21b) mounted with at least one processor (11) of the processing assembly and responsive thereto for delivering sound to a user (3), the method including the steps of: transmitting from a remote user interface device (6) for the sound delivery system, a sequence of command codes for specifying predetermined characteristics of test sounds; receiving the command code sequence at the communications assembly of the sound delivery system; providing the command code sequence to the processing assembly of the sound delivery system; reproducing by a selected at least one audio transducer, the predetermined test sounds under control of said at least one processor according to the command code sequence; measuring with a reference SPL meter (70) proximate to the audio transducer, characteristics of test sounds reproduced by the sound delivery system; comparing the measured characteristics of the reproduced sounds with the predetermined characteristics of the test sounds; producing a mapping of specified test sounds to sounds reproduced by said at least one audio transducer; and storing the mapping in an electronic memory (12, 82) associated with the processing assembly or remote interface device (6).
3. CALIBRATION METHOD FOR CUSTOMIZABLE PERSONAL SOUND DELIVERY SYSTEMS
EP3827598A1 (A4) • 2021-06-02 •
AUDEARA PTY LTD
Earliest priority: 2018-07-23 • Earliest publication: 2020-01-30
No abstract available
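As a rough illustration of what "compensatory weights at each of a number of audio frequencies" in the first abstract could look like, here is a toy per-band gain applied in the frequency domain. The function name, band layout and gain figure are made up for illustration and are certainly not Audeara's implementation:

```python
import numpy as np

def apply_compensatory_gains(signal, sample_rate, band_gains_db):
    """Boost/cut frequency bands of `signal` by the given dB amounts.
    band_gains_db maps (low_hz, high_hz) tuples to a gain in dB."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    for (lo, hi), gain_db in band_gains_db.items():
        band = (freqs >= lo) & (freqs < hi)
        spectrum[band] *= 10 ** (gain_db / 20)   # dB to linear amplitude
    return np.fft.irfft(spectrum, n=len(signal))

fs = 16_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 4000 * t)
# Hypothetical user profile: high frequencies boosted by 12 dB
out = apply_compensatory_gains(tone, fs, {(2000, 8000): 12.0})
```

In the patents the interesting part is how the weights are determined (from the user's responses to test sounds), but the playback side reduces to per-frequency gains like these.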
 
Last edited:

Diogenese

Top 20
If the market is always right, why does the answer change so often?
 

7für7

Top 20
If the market is always right, why does the answer change so often?
There are situations where the question is mostly the answer!
 

Diogenese

Top 20
Is it just me, or is ASX in witness protection?
 


TheDrooben

Pretty Pretty Pretty Pretty Good
Some nice healthy sideways movement over the last couple of trading days. Getting ready for the next rise.........

044ded84-7ed8-4346-88e5-6eeb49973548_text.gif



Happy as Larry
 

Tothemoon24

Top 20
IMG_9751.jpeg


Neuromorphic computing has considerable potential for us across many areas.

By mimicking the functionality of the human brain, it can make next-generation AI computation considerably faster and more energy efficient.

One current research project is NAOMI4Radar funded by the German Federal Ministry for Economic Affairs and Climate Action. As consortium leader, we are working with partners to assess how neuromorphic computing can be used to optimise the processing chain for radar data in automated driving systems.

Current Mercedes-Benz models use frontal radar to see 200 metres in front of them. For instance, our DRIVE PILOT system uses radar data as one of its many sources for enabling conditionally automated driving.

The aim of the NAOMI4Radar project is to demonstrate that neuromorphic computing can bring fundamental benefits to future generations of automated and autonomous driving systems.

But as I said, this is just one current research. More on that soon.

IMG_9752.jpeg

Loihi 2 - smoke screen or smoked 🚭



Over 12 months ago Mercedes served this up; fingers crossed our time has arrived ⬇️



IMG_9753.jpeg



My new “In the Loop” series kicks off with #Neuromorphic Computing – the clear winner of my poll a few weeks ago.

For those unfamiliar, this highly significant field of computing strives to emulate the multi-tasking of the human brain. Traditional microprocessors function sequentially. However, as the complexity and scale of calculations soars, this way of doing things is rapidly running out of road.

The idea is not new, but trying to “put a brain on a chip” is a mammoth task. To put it into figures: the human brain has 86-100 billion neurons operating on around 20 watts. Current neural chips from leading developers such as BrainChip and Intel Corporation contain around 1 million neurons and consume roughly 1 watt of power.
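Running the quoted figures through a quick back-of-envelope calculation shows how large that gap is on a per-neuron basis:

```python
# All figures taken from the post above (using the 86-billion lower bound).
brain_neurons = 86e9
brain_watts = 20.0
chip_neurons = 1e6
chip_watts = 1.0

brain_w_per_neuron = brain_watts / brain_neurons   # roughly 0.23 nanowatts
chip_w_per_neuron = chip_watts / chip_neurons      # 1 microwatt
gap = chip_w_per_neuron / brain_w_per_neuron       # ~4300x more power per neuron
```

So even before the 100,000x difference in neuron count, today's silicon spends on the order of thousands of times more energy per neuron than biology does.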

So, you see, despite impressive advances, there is still a very long way to go. Neuromorphic computing goes well beyond chip design and includes a specific kind of artificial neural network called #spikingneuralnetworks (SNN). They consume far less energy because the neurons are silent most of the time, only firing (or spiking) when needed for events.
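The "silent most of the time" point is the whole trick. A textbook leaky integrate-and-fire neuron (the simplest SNN building block, not any specific vendor's model) only emits events when its input drives it over threshold:

```python
import numpy as np

def lif_spikes(input_current, tau=20.0, v_thresh=1.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential v decays
    toward zero and a spike is emitted only when v crosses threshold."""
    v = 0.0
    spikes = []
    for i_t in input_current:
        v += (dt / tau) * (i_t - v)   # leaky integration of the input
        if v >= v_thresh:
            spikes.append(1)
            v = 0.0                   # reset after the spike
        else:
            spikes.append(0)
    return spikes

quiet = lif_spikes(np.zeros(1000))      # silent input: no events at all
busy = lif_spikes(np.full(1000, 1.5))   # sustained input: regular events
```

With no input there are literally zero events to compute or transmit, which is where the energy savings over dense, clocked matrix multiplies come from.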

Together with intense parallel execution on neuromorphic chips, the new processing principles require us to go beyond the application of existing #AI frameworks to neuromorphic chips. We have to fundamentally rethink the algorithms that ultimately enable future AI functions in our cars, gathering joint inspiration from machine learning, chip design and neuroscience. Our experts are working closely with our partners to examine their potential in new applications.

The thing is, even a tiny fraction of the thinking capacity of the human brain can go a long way in several fields that are extremely relevant to automotive applications. Examples include advanced driving assistance systems #ADAS as well as the on-board analysis of speech and video data, which can unlock major advances in how we communicate with our cars.

We already made some interesting findings here with our VISION EQXX, where we applied neuromorphic principles to the “Hey Mercedes” hot-word detection. That alone made it five to ten times more energy efficient than conventional voice control. As AI and machine learning take on an increasingly important role in the software-defined vehicle, the energy this consumes is likely to become a critical factor.

I’ll touch on our latest findings in an upcoming “In the Loop” and tell you my thoughts on where this is taking us.

In the meantime, for those of you interested in reading up on neuromorphic computing, check out the slider for my recommended sources. I’ve graded them to ensure there’s something for everyone, from absolute beginner to true geeks.
 
Last edited:

IloveLamp

Top 20
1000019057.jpg
 

Frangipani

Top 20
Maybe, just maybe, we’ll find out a teeny-weeny bit more about the current status of MB’s neuromorphic research later this week:

I checked out the website of Hochschule Karlsruhe (Karlsruhe University of Applied Sciences aka HKA) - since Markus Schäfer mentioned in his post they were collaborating with them on event-based cameras - and discovered an intriguing hybrid presentation by Dominik Blum, one of MB’s neuromorphic researchers, titled “Intelligente Fahrassistenzsysteme der Zukunft: KI, Sensorik und Neuromorphes Computing” (“Future Intelligent ADAS: AI, Sensor Technology and Neuromorphic Computing”).

The upcoming presentation is part of this week’s Themenwoche Künstliche Intelligenz, a week (Mon-Thu to be precise) devoted to AI, with numerous, mostly hybrid presentations from various HKA research areas (both faculty and external speakers will present), held daily between 5.15 pm and 8.30 pm.

Oct 17 is devoted to the topic of AI & Traffic:


A7B412D4-0DB8-4001-B434-00BEB7C0A144.jpeg


7F5F9451-2C45-4914-8A2E-4894A98A2295.jpeg


If you speak German (or even if you don’t, but are nevertheless interested in the presentation slides) and live in a compatible time zone, you may want to join the following livestream on Oct 17, at 5.15 pm (CEST):







(Since similar June AI Day presentations were recorded and uploaded to the HKA website, I assume this will also apply to the AI Week presentations.)

D4DFFB79-1141-4179-94EB-3E6D01A34059.jpeg

AB478315-521B-4C14-A87A-903D778B68BD.jpeg



The reference to NMC (neuromorphic computing) being considered a “möglicher Lösungsweg” (possible/potential solution) suggests to me - once again - that Mercedes-Benz is nowhere near to implementing neuromorphic technology at scale into serial cars.


2BC2769F-3564-4E86-8B50-1B2DED262AE1.jpeg


F8A861F5-B86E-4CDC-A398-106C346772E9.jpeg
 