BRN Discussion Ongoing

BrainShit

Regular
Speaking of EDGX:
I am somewhat surprised no one has yet commented on the fact that EDGX no longer seems to be in an exclusive relationship with us as their neuromorphic partner:

View attachment 70844

Some posters will want to make you believe that as soon as a company / research institution / consultancy has discovered us, they will only have eyes for us, and that the competition can basically pack up and go home. It is a romantic notion for sure, but alas it is not the reality. The companies and institutions truly convinced of the benefits of neuromorphic technology will often take their time to explore different solutions and may end up doing business with / recommending (in the case of a consultancy) either
a) us
b) us and someone else or
c) someone else (as unimaginable as that may seem to certain posters here).


While Accenture did praise Akida earlier this year, they continue to research Loihi (https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-428774) and have also been evaluating SynSense’s ultra-low power offerings:

View attachment 70909

Or take ESA, for example: Laurent Hili didn’t restrict himself to visiting the BrainChip booth at the AI Hardware & Edge AI Summit in September. He and his colleague Luis Mansilla Garcia (who were both guests on Episode 31 of the BrainChip This is Our Mission podcast in March) also dropped by other AI chip companies’ booths, such as those of Intel (-> Gaudi 3) and SpiNNcloud Systems (-> SpiNNaker 2), as evidenced by these recent screenshots I took of photos he posted or reposted on LinkedIn:

View attachment 70846

View attachment 70971

Another example:
We know the neuromorphic researchers from TCS to be BrainChip fans.
Yet, a month ago, in the comment section underneath one of his own posts, Sounak Dey from TCS expressed regret at having missed the chance to meet up with Petrut Antoniu Bogdan from Innatera at Semicon India 2024 (Sept 11-13). No surprise, really, given that in recent months Sounak Dey has liked numerous posts by both BrainChip and Innatera.

View attachment 70848


Of course our competitors are in the same situation, with BrainChip showing up in unexpected places - so standing still is not an option: all of those companies need to continually innovate, and BrainChip is doing just that. Having chosen the path of an IP company may pay off in the long run, but it of course means leaving part of the addressable market to our competitors.

I’d be very cautious about quantifying any lead in months or even years, as some posters have done and still do, despite having no insight whatsoever into the negotiations between any of the companies offering neuromorphic technology and their potential customers. In my opinion, such posts lull us into a false sense of security, which in turn could lead to further disappointment among already disappointed shareholders and provide more fodder for the downrampers should one of our competitors land a juicy contract first - especially if it were one that BrainChip had also been vying for.

And in case you were wondering: No, I don’t have any insider information. I am just a keen observer (taking note of LinkedIn posts like the ones above and below, for example), and prefer to draw my own conclusions rather than rely on contributions by anonymous shareholders wearing rose-coloured glasses or deliberately cherry-picking info or even twisting the truth to suit their narrative (be it negative or positive - this happens on both ends of the spectrum). And I encourage everyone to do the same (which admittedly is hard for many with very limited time to spare).


View attachment 70893

Reading between the lines: We are also exploring other companies’ offerings and won’t make any promises.


View attachment 70894

Reading between the lines: We are also exploring other companies’ offerings and won’t make any promises.

View attachment 70943
View attachment 70944

View attachment 70896

No reading between the lines is necessary here, I’d say...
They just don’t spell it out with the words “You’re in good company” or “Trusted by…”, but to me this is essentially saying the same thing, even though the folks at Innatera cannot claim to have already had their tech publicly validated in an MB concept car.

Innatera and their T1 are indeed quite good. Their mission: to bring intelligence to a billion sensors by 2030.

Innatera's T1 operates using a proprietary analog-mixed-signal computing architecture, rather than a fully digital one. In addition to the SNN accelerator, the T1 also includes a CNN accelerator and a 32-bit RISC-V core with 384 KB of memory for more conventional workloads.

Akida operates digitally. It is a fully digital, event-based neuromorphic processor.

The differences between digital processing and a proprietary analog-mixed signal computing architecture are as follows:

Digital Processing: Digital systems use discrete values (typically 0s and 1s) to represent information. They are highly resistant to noise, allow for efficient error detection and correction, and can be easily integrated with other digital systems.

Digital processing is deterministic, meaning each signal has a specific value at a given time.

Analog-Mixed-Signal Architecture: This combines both analog and digital components to process signals, capturing the benefits of both worlds: the energy efficiency of analog signal processing and the precision and flexibility of digital systems. Mixed-signal architectures are essential for applications requiring conversion between analog and digital signals, such as ADCs (Analog-to-Digital Converters) and DACs (Digital-to-Analog Converters), and are particularly useful in environments where both types of signals are present.

To sum up, digital processing offers robustness and integration ease, while mixed-signal architectures provide versatility in handling both analog and digital signals.
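To make the "fully digital, event-based" idea concrete, here is a toy leaky integrate-and-fire neuron in Python. This is a generic textbook model, not Akida's actual architecture; the point is simply that identical inputs always produce identical spike times, illustrating the deterministic nature of digital processing.

```python
# Toy illustration of a fully digital, event-based (spiking) neuron.
# All state is held in ordinary numbers, so the computation is
# deterministic and noise-free, unlike an analog membrane voltage.
# Generic leaky integrate-and-fire model, NOT Akida's architecture.

def lif_neuron(input_events, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron spikes."""
    potential = 0.0
    spikes = []
    for t, weight in enumerate(input_events):
        potential = potential * leak + weight  # integrate + leak
        if potential >= threshold:             # discrete spike event
            spikes.append(t)
            potential = 0.0                    # reset after spiking
    return spikes

# Identical inputs always yield identical spikes (deterministic):
events = [0.4, 0.4, 0.4, 0.0, 0.9, 0.9]
print(lif_neuron(events))  # -> [2, 5]
```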


... but we got PICO, TENNs, and features such as Vision Transformer acceleration and support for 8-bit weights, enabling larger and more complex models. We also target a wider range of edge applications, including image processing and audio, while the T1 targets applications in battery-powered, power-limited and latency-critical devices.
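The 8-bit weight support mentioned above can be pictured with a toy symmetric quantization sketch. This is my own illustration of the general technique, not BrainChip's actual scheme:

```python
# Toy sketch of symmetric 8-bit weight quantization: map float
# weights onto int8 codes plus one scale factor, so the model
# stores 1 byte per weight instead of 4.
# Generic technique, not BrainChip's actual quantization scheme.

def quantize_int8(weights):
    """Map float weights to int8 codes in [-127, 127] plus a scale."""
    scale = max(abs(w) for w in weights) / 127.0
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from codes and scale."""
    return [c * scale for c in codes]

w = [0.5, -1.27, 0.031]
q, s = quantize_int8(w)
print(q)                 # -> [50, -127, 3]
print(dequantize(q, s))  # close to the original weights
```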

Crossing my fingers we'll win.
 
Reactions: 45 users

Frangipani

Regular
Brilliant 🍻




View attachment 70549



October 8, 2024 – Stuttgart/Toronto
  • Mercedes-Benz and the Ontario government, through the Ontario Vehicle Innovation Network (OVIN), establish incubators to foster startup creation, startup scouting and automotive innovation in Ontario, Canada
  • OVIN Incubators join growing international Mercedes-Benz STARTUP AUTOBAHN network
  • Initiative aims to drive transfer to industrialization, leveraging the region's strong foundation in advanced automotive technology and smart mobility
  • Research collaboration with University of Waterloo complements existing academic research into neuromorphic computing
Mercedes-Benz is partnering with the Ontario Vehicle Innovation Network (OVIN), the Government of Ontario's flagship initiative for the automotive and mobility sector. The purpose is to expand startup creation and scouting activities in North America and to promote the commercialization of automotive innovation. The OVIN Incubators Program will focus on identifying and fostering innovation in future software & AI, future vehicle components and future electric drive. Working with startups, and in partnership with OVIN, Mercedes-Benz will help progress promising projects through the provision of its specialist expertise and use cases. Selected projects will also benefit from the international Mercedes-Benz STARTUP AUTOBAHN network. Separately, the company intends to start a research collaboration with the University of Waterloo, Ontario with a focus on neuromorphic computing for automated driving applications. The move complements a range of ongoing Mercedes-Benz R&D activities in Canada.
"Innovation is part of Mercedes-Benz DNA. In our global R&D strategy, open innovation gives us rapid and direct access to the latest ideas and developments around the world. We are therefore delighted to further expand our activities in Canada as a founding partner of the OVIN Incubators. In a fast-paced environment, it is another important channel for developing exciting future products and elevating our customer experience through new technologies."
Markus Schäfer, Member of the Board of Management of Mercedes-Benz Group AG, Chief Technology Officer, Development & Procurement​
The academic research collaboration and participation in the OVIN Incubators Program are the latest in a series of initiatives underpinned by the company's Memorandum of Understanding (MoU) with the government of Canada, signed in 2022. The aim of the MoU is to strengthen cooperation across the electric vehicle value chain. Through the partnership with the Ontario government through OVIN, Mercedes-Benz is accelerating and expanding its presence by tapping into Ontario's international acclaim as a centre for tech development, recognizing the province's significance for Mercedes-Benz's global innovation network.
Open innovation draws in ideas, inspiration and technologies from a wide variety of external sources and partners. This approach is a long-established part of Mercedes-Benz R&D strategy, enriching and complementing the company's internal R&D work worldwide.
"This new partnership between the Ontario Vehicle Innovation Network (OVIN) and Mercedes‑Benz is going to be a significant boost for our province's automotive and mobility sectors. By bringing together the best of industry, research, and entrepreneurial talent, we're fostering innovation that will strengthen our economy, create good jobs and position Ontario as a leader in the auto and electric vehicle technologies of the future."
Doug Ford, Premier of Ontario
"Ontario continues to build its reputation as a world leader in manufacturing the cars of the future, with $44 billion in new investments by automakers, EV battery manufacturers and parts suppliers coming into the province over the last four years. The launch of OVIN Incubators represents another link in our growing end-to-end, fully integrated, EV supply chain. With a new platform for our world-class tech ecosystem to develop homegrown mobility innovations, Ontario talent will continue to be on the forefront of creating the technologies that will power vehicles all over the world through the Mercedes-Benz STARTUP AUTOBAHN network."
Vic Fedeli, Ontario Minister of Economic Development, Job Creation and Trade
"As Ontario sets its sights on the next decade of growth of its automotive and mobility sector, it is vital that we continue to foster the talent, technical expertise and capacity for innovation to achieve this future. The OVIN Incubators build a robust foundation for nurturing the next generation of innovators by providing a clear pathway from research and development to commercialization and industrialization, in partnership with Ontario's leading postsecondary institutions and major industry players. This platform will further cement the foundation for sustainable economic growth within the sector and beyond, across the entire province."
Raed Kadri, Head of OVIN​
Mercedes-Benz partners in OVIN Incubators to accelerate startup scouting and support commercialization
In its pilot phase, the OVIN Incubators Program will conduct startup scouting to identify opportunities in Ontario relevant to Mercedes-Benz fields of research. The aim is to empower startups to engage with industry and establish a robust pipeline of companies whose growth can be catalyzed. Together, OVIN and Mercedes‑Benz will narrow down an initial longlist through a process of evaluation, ultimately arriving at individual projects that will progress to proof-of-concept based on Mercedes‑Benz use cases. The OVIN Incubators join a growing international network of regional programmes benefitting from the Mercedes‑Benz STARTUP AUTOBAHN platform for open innovation. This globally networked and locally executed approach seeks to maximize the pool of ideas, innovations and technologies that can flow into future Mercedes‑Benz products. Looking to the future, the next phase of the OVIN Incubators will seek to expand its scope through the addition of further partners from industry and academia.
Collaboration with the University of Waterloo to help seed, grow and harvest research in the field of neuromorphic computing
Mercedes-Benz and the University of Waterloo have signed a Memorandum of Understanding to collaborate on research led by Prof. Chris Eliasmith in the field of neuromorphic computing. The focus is on the development of algorithms for advanced driving assistance systems. By mimicking the functionality of the human brain, neuromorphic computing could significantly improve AI computation, making it faster and more energy efficient. While preserving vehicle range, safety systems could, for example, detect traffic signs, lanes and objects much better, even in poor visibility, and react faster. Neuromorphic computing has the potential to reduce the energy required to process data for autonomous driving by 90 percent compared to current systems.
"Industry collaboration is at the heart of our success as Canada's largest engineering school. We recognize that research partnerships with companies such as Mercedes-Benz bring opportunities to directly apply and test our work, while introducing our students to the highest standards in industry."
Mary Wells, Dean, Faculty of Engineering at the University of Waterloo​
The work with the University of Waterloo complements a series of existing Mercedes‑Benz research collaborations on neuromorphic computing. One focus is on neuromorphic end-to-end learning for autonomous driving. To realize the full potential of neuromorphic computing, Mercedes‑Benz is building up a network of universities and research partnerships. The company is, for example, consortium leader in the NAOMI4Radar project funded by the German Federal Ministry for Economic Affairs and Climate Action. Here, the company is working with partners to assess how neuromorphic computing can be used to optimize the processing of radar data in automated driving systems. In addition, Mercedes‑Benz has been cooperating with Karlsruhe University of Applied Sciences. This work centres on neuromorphic cameras, also known as event-based cameras.
# # #
About the Ontario Vehicle Innovation Network OVIN
OVIN is an initiative of the Government of Ontario, led by the Ontario Centre of Innovation (OCI), designed to reinforce Ontario's position as a North American leader in automotive and mobility technology and solutions such as connected vehicles, autonomous vehicles, and electric and low-carbon vehicle technologies. Through resources such as research and development (R&D) support, talent and skills development, technology acceleration, business and technical supports, and demonstration grounds, OVIN provides a competitive advantage to Ontario-made automotive and mobility technology companies.
About STARTUP AUTOBAHN
STARTUP AUTOBAHN is an open innovation platform for startups in the field of mobility. The innovation driver was founded in 2016 by Mercedes‑Benz, formerly Daimler, in cooperation with the innovation platform Plug and Play, the research factory ARENA2036 and the University of Stuttgart. This has resulted in an entire innovation network around the globe - with programmes in the United States, China, India, South Korea and now also in Canada. Since its foundation, a growing number of industrial partners and startups from all over the world have benefited from the STARTUP AUTOBAHN. Several technologies from the network have already been integrated into Mercedes-Benz series-production vehicles.

Thanks for posting @Tothemoon24!

Couldn't help but notice this part of the article!

View attachment 70573

Prof. Chris Eliasmith has published numerous research papers on neuromorphic computing, a few of which I posted below.


View attachment 70574
View attachment 70580

View attachment 70581





And here's the cool part. 🥰


View attachment 70583



View attachment 70582

View attachment 70974 View attachment 70975 View attachment 70976

I am not so sure whether last week’s announcement by Mercedes-Benz is really a reason for BRN shareholders to celebrate (other than the fact that neuromorphic computing is again confirmed to be a promising technology), given that Chris Eliasmith, who leads the neuromorphic research at the University of Waterloo and is the co-founder and CTO of ABR (Applied Brain Research) (https://www.appliedbrainresearch.com/), another company dealing in the edge space, no longer appears to be close to BrainChip.

This is what @uiux shared two years ago:

CC7A43DA-13CC-4F87-A3C4-DE695F45F690.jpeg




ABR seems more like a competitor in the Edge AI space to me?

D56F9B3F-D83C-4442-8CC1-4CF22DB16A10.jpeg

BC87AB71-AB9D-4DD3-B164-D0265E3BA093.jpeg




ABR demonstrates the world’s first single chip solution for full vocabulary speech recognition​




SAN JOSE, CA, [Sep 9] – Applied Brain Research (ABR), a leader in the development of AI solutions, is demonstrating the world’s first self-contained single-chip speech recognition solution at the AI Hardware and Edge AI Summit this week. This is an unveiling of the technology integrated into ABR’s first time series processor chip, the TSP1, capable of performing real-time low latency automatic speech recognition.

The solution employs ABR’s innovations at several levels of the technology. It starts with the world’s first patented state-space network, the Legendre Memory Unit (LMU), that is a breakthrough in efficient computation for time series processing. Next, the networks are trained and compiled using ABR’s advanced full-stack toolchain. And finally, the network runs on ABR’s proprietary computational neural fabric that greatly reduces power consumption through reduction in data movement within the chip.
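For the curious, the state-space idea an LMU builds on can be sketched as a simple linear recurrence. The matrices below are placeholders I chose for illustration, not ABR's patented Legendre-derived parameterization:

```python
# Generic linear state-space recurrence of the kind an LMU builds on:
#   x[t+1] = A @ x[t] + B * u[t]
# The real LMU derives A and B from a Legendre-polynomial basis so
# the state compresses a sliding window of the input; the matrices
# below are placeholders, not ABR's actual parameterization.

def state_space_step(x, u, A, B):
    """One update of hidden state x given scalar input u."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) + B[i] * u
            for i in range(len(x))]

A = [[0.9, 0.1], [-0.1, 0.9]]   # placeholder dynamics matrix
B = [1.0, 0.5]                  # placeholder input matrix
x = [0.0, 0.0]
for u in [1.0, 0.0, 0.0]:       # feed a short input sequence
    x = state_space_step(x, u, A, B)
print(x)  # approximately [0.89, 0.22], up to float rounding
```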

“What ABR is showcasing today has been five years in the making starting with our earliest observations of how the brain processes memories which led to the state space network model that we derived from that study and subsequently patented,” said Dr. Chris Eliasmith, ABR’s co-founder and CTO. “From that starting point, we have innovated at every level of the technology stack to do what has never before been possible for speech processing in low-powered edge devices.”

“ABR’s TSP1 is going to revolutionize how time series AI is integrated into devices at the edge,” said Kevin Conley, ABR’s CEO. “We are showcasing the fastest, most accurate self-contained speech recognition solution ever produced, with both English and Mandarin versions. The TSP1 will deliver these capabilities at 100X lower power than currently available edge GPU solutions. And speech recognition, which we are actively engaged with customers to develop, is only the first step in commercializing the potential of this technology.”

ABR’s TSP1 is a single-chip solution for time series inference applications like real-time speech recognition (including keyword spotting), realistic text-to-speech synthesis, natural language control interfaces and other advanced sensor fusion applications. The TSP1 integrates neural processing fabric, CPU, sensor interfaces and on-chip NVM for a self-contained easy to integrate solution. The TSP1 is supported by an advanced no-code network development toolchain to create the easiest to develop and deploy time series solution on the market.

ABR has a booth in the Startup Village at the AI Hardware and Edge AI Summit at the Signia by Hilton in San Jose, CA from Sept 10-12.

About Applied Brain Research
Applied Brain Research Inc (ABR) is a pioneer in Artificial Intelligence technology founded by alumni of the Computational Neuroscience Research Group at the University of Waterloo. ABR is leading a new wave of product development targeting ultra-low power Edge AI, enabling a new level of capability in low-power critical applications. ABR’s revolutionary time-series AI processor uses 100x less power than other high-functionality edge AI hardware, and supports AI models up to 10-100x larger than other low-power edge AI hardware.
ABR, headquartered in Waterloo, Ontario, is a Silicon Catalyst Portfolio Company. More company and product information can be found at www.appliedbrainresearch.com.
 
Reactions: 12 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Still some very competent posters over there; put the downramping clowns on ignore.

curdlednoodles over there posted the following research paper (dated Aug 2024) from:


Department of Mechanical and Aerospace Engineering, Missouri University of Science and Technology, 400 W. 13th Street, Rolla, MO, USA, 65409
Department of Computer Science, Missouri University of Science and Technology, 500 W. 15th Street, Rolla, MO, USA, 65409

Called:

Few-Shot Transfer Learning for Individualized Braking Intent Detection on Neuromorphic Hardware

Nathan Lutes, Venkata Sriram Siddhardh Nadendla, K. Krishnamurthy

Objective: This work explores use of a few-shot transfer learning method to train and implement a convolutional spiking neural network (CSNN) on a BrainChip Akida AKD1000 neuromorphic system-on-chip for developing individual-level, instead of traditionally used group-level, models using electroencephalographic data. The efficacy of the method is studied on an advanced driver assist system related task of predicting braking intention.

Main Results: Efficacy of the above methodology to develop individual-specific braking intention predictive models by rapidly adapting the group-level model in as few as three training epochs while achieving at least 90% accuracy, true positive rate and true negative rate is presented. Further, results show an energy reduction of over 97% with only a 1.3x increase in latency when using the Akida AKD1000 processor for network inference compared to an Intel Xeon CPU. Similar results were obtained in a subsequent ablation study using a subset of five out of 19 channels.

Significance: Especially relevant to real-time applications, this work presents an energy-efficient, few-shot transfer learning method that is implemented on a neuromorphic processor capable of training a CSNN as new data becomes available, operating conditions change, or to customize group-level models to yield personalized models unique to each individual.
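A quick back-of-the-envelope check of the reported trade-off (97% less energy at 1.3x the latency) shows the energy-delay product still improves by roughly 25x. The baseline values below are illustrative normalized numbers, not figures from the paper:

```python
# Sanity check of the paper's efficiency claim: a 97% energy
# reduction with a 1.3x latency increase still yields a large
# energy-delay-product (EDP) win. Baselines are normalized and
# illustrative, not taken from the paper.

cpu_energy, cpu_latency = 100.0, 1.0     # normalized CPU baseline
akida_energy = cpu_energy * (1 - 0.97)   # 97% less energy
akida_latency = cpu_latency * 1.3        # 1.3x higher latency

edp_cpu = cpu_energy * cpu_latency
edp_akida = akida_energy * akida_latency
print(edp_cpu / edp_akida)  # ~25.6x better energy-delay product
```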




Hi @Guzzi62, yes of course, there are definitely some very competent posters over yonder, and I was definitely not referring to any of them - just the members on that forum that 7für7 was referring to in his post (i.e. the serial down-rampers).
 
Reactions: 9 users

itsol4605

Regular
I am not so sure whether last week’s announcement by Mercedes-Benz is really a reason for BRN shareholders to celebrate (other than the fact that neuromorphic computing is again confirmed to be a promising technology), given Chris Eliasmith, who leads the neuromorphic research at the University of Waterloo and is the co-founder and CTO of ABR (Applied Brain Research) (https://www.appliedbrainresearch.com/), another company dealing in the edge space, no longer appears to be close to BrainChip.


How does that fit with Anil's reaction?

20241014_005210.jpg


 
Reactions: 30 users

7für7

Top 20
Reactions: 3 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
AI is driving demand for Edge computing​

Arun Shankar | 13 October, 2024
Pete Hall, Regional Managing Director EMEA, Ciena
AI promises to accelerate advanced automation to unprecedented levels, creating a surge in demand for Edge computing. A substantial part of the demand for Edge computing will be comprised of GPU clusters providing distributed AI inference, explains Pete Hall at Ciena.



Edge data centres are set to become an area of strategic growth as companies strive to minimise latency and enhance the end-user experience in the AI era. A forecast from IDC projects global spending on Edge computing to be $232 billion in 2024, an increase of 15.4% from last year.

In the Middle East, countries like the UAE and Saudi Arabia are investing in Edge data centres to support their digital ambitions and AI initiatives, addressing challenges related to application latency, data sovereignty, and sustainability of information and communications technologies.

In the past two decades, the world has seen an intense process of cloudification of IT infrastructure, with an increasing number of applications moving to the public cloud. The massive scale of cloud data centres with highly flexible consumption models has enabled a compelling business model for compute and storage workloads, effectively discouraging local builds.

However, centralised data processing means longer routes between users and content, and thus higher latency experienced by users accessing this content.

To remediate this issue, service providers have turned to content delivery network architectures, deploying cache servers closer to users and particularly targeting streaming video. This approach has been effective in improving user experience for streaming services, while also offloading some heavy traffic flows from the network.

Nonetheless, it is only effective for frequently consumed repeatable data, like popular streaming videos, and not economically viable for random workloads.

Although content delivery networks have been the most widespread use case of Edge computing, a prominent and largely expected application of Edge computing has been its potential to accelerate automation and machine orchestration.


Machine decisions that need to be tightly synchronised require very low latency, at a level that a centralised compute infrastructure cannot deliver.

As AI promises to accelerate advanced automation to unprecedented levels, we are on the verge of a surge in Edge compute demand. And most likely, a substantial part of that Edge compute demand will be comprised of GPU clusters providing distributed AI inference.

By accelerating the build of decentralised compute infrastructure, the UAE and Saudi Arabia can bolster the performance of AI-driven applications and boost the competitiveness of the region in this flourishing field.

In addition to delivering lower latency, this infrastructure can also help sensitive data stay in the region. Training, fine-tuning and inference of AI models involve data that organisations may prefer to keep locally rather than send to a centralised location.

Even as core data centre buildouts continue to unfold across vast expanses of the world, the shift toward Edge data centres presents both challenges and opportunities. For instance, the environmental impact of data centres cannot be ignored. According to an International Energy Agency forecast, electricity consumption from data centres, cryptocurrencies, and Artificial Intelligence could double between 2022 and 2026.

Consequently, data centre projects are exploring various techniques to enhance sustainability in storage and processing to reduce the burden on existing power grids. This includes adopting the latest optical technology, implementing more efficient cooling methods, and utilising alternative power sources.

This is particularly critical in the Middle East, where there is heavy reliance on cooling systems to counter the effects of extreme heat. There is a shift to alternative power sources such as solar energy to enhance sustainability, with Masdar City in Abu Dhabi integrating sustainable practices into its data centre operations.

Delivering applications closer to the end user is a critical factor for AI applications. However, to realise these gains, the networks within and between data centres must be upgraded. Cutting-edge AI services cannot run inside everyday data centre servers; they need computers with high-performance graphics processing units (GPUs).

And those high-performance clusters of GPUs running AI services need high-speed networks to move AI-related data inside a data centre and then out to the wider world. Outside the site, high-speed and high-capacity data centre interconnect networks must remain front of mind for investment.

Regional telcos can capitalise on the proximity to end users and the ability to process data closer to the source to support a plethora of localised services. It will result in ever more responsive business decision-making and an explosion in service innovation.



Key takeaways

  • IDC projects global spending on Edge computing to reach $232 billion in 2024, an increase of 15.4% from last year.
  • Centralised data processing means longer routes between users and content, and higher latency for users accessing content.
  • Service providers have turned to content delivery network architectures, deploying cache servers closer to users.
  • Content delivery networks are only effective for frequently consumed repeatable data, and not economically viable for random workloads.
  • Machine decisions that need to be tightly synchronised require very low latency that centralised compute infrastructure cannot deliver.
  • By accelerating the build-out of decentralised compute infrastructure, the UAE and Saudi Arabia can bolster the performance of AI-driven applications.
  • A substantial part of that Edge compute demand will consist of GPU clusters providing distributed AI inference.
  • Delivering applications closer to the end user is a critical factor for AI applications.
  • AI services cannot run inside everyday data centre servers; they need computers with high-performance graphics processing units (GPUs).
  • High-performance GPU clusters running AI services need high-speed networks to move AI-related data.
  • High-speed and high-capacity data centre interconnect networks must remain front of mind for investment.
  • Regional telcos can capitalise on proximity to end users and process data closer to the source to support localised services.
  • The International Energy Agency forecasts that electricity consumption from data centres, cryptocurrencies, and Artificial Intelligence could double between 2022 and 2026.

 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 23 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Look at this! I think we could be onto a goer here! Or at the very least it's definitely something we should keep an eye on from a competition point of view IMO.


Background:
  • Sonova acquired Sennheiser in 2021.
  • VALEO and Sennheiser have been working together, combining their technologies for an immersive sound system in a demo car showcased at CES 2024.

Sonova's new hearing aids:
  • Will be available 2025
  • Incorporate real-time artificial intelligence
  • Can learn from the user
  • Can reduce and cancel unwanted noise



 
Last edited:
  • Like
  • Love
  • Thinking
Reactions: 30 users

7für7

Top 20


Me relaxed and calm enjoying the ride while day traders try to catch the perfect entry and exit… „Oh no, I bought too early… Oh no, I sold too late… Oh no, I sold too early.“
 
  • Like
  • Haha
Reactions: 13 users

manny100

Regular
  • Like
Reactions: 2 users

Diogenese

Top 20
Look at this! I think we could be onto a goer here! Or at the very least it's definitely something we should keep an eye on from a competition point of view IMO.


Background:
  • Sonova acquired Sennheiser in 2021.
  • VALEO and Sennheiser have been working together, combining their technologies for an immersive sound system in a demo car showcased at CES 2024.

Sonova's new hearing aids:
  • Will be available 2025
  • Incorporate real-time artificial intelligence
  • Can learn from the user
  • Can reduce and cancel unwanted noise



View attachment 70997


View attachment 71002







View attachment 71004



View attachment 71003



View attachment 71001
Hi Bravo,

Probably a competitor.

Sonova have been dabbling with NNs since at least 2006.

Their latest patent does not use PICO:

US12108219B1 Processing chip for processing audio signals using at least one deep neural network in a hearing device 20230324




A processing chip for processing audio signals using at least one deep neural network (DNN) in a hearing device comprises a first compute unit having a hardware architecture adapted for processing one or more convolutional neural network layers of the at least one DNN, a second compute unit having a hardware architecture adapted for processing one or more recurrent neural network layers of the at least one DNN, a control unit for directing the first and second compute units when to compute a respective layer of the at least one DNN, a shared memory unit for storing data to be processed in respective layers of the at least one DNN, and a data bus system for providing access to the shared memory unit for each of the first and second compute units.
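The division of labour in that abstract — a control unit steering convolutional layers to one engine and recurrent layers to another over shared memory — can be caricatured in a few lines of Python. This is my own illustrative sketch of the scheduling idea, not Sonova's design; all names are made up.

```python
class ComputeUnit:
    """A hardware engine that can execute certain kinds of NN layers."""
    def __init__(self, name, kinds):
        self.name, self.kinds = name, kinds

    def can_run(self, layer_kind):
        return layer_kind in self.kinds

class ControlUnit:
    """Directs each layer of a network to the compute unit suited to it."""
    def __init__(self, units):
        self.units = units

    def schedule(self, layers):
        # For every layer, pick the first engine that can execute its kind.
        return [(kind, next(u.name for u in self.units if u.can_run(kind)))
                for kind in layers]

conv_unit = ComputeUnit("conv-engine", {"conv"})
rnn_unit = ComputeUnit("rnn-engine", {"gru", "lstm"})
ctrl = ControlUnit([conv_unit, rnn_unit])

# A typical audio model: conv front end feeding recurrent layers.
print(ctrl.schedule(["conv", "conv", "gru", "lstm"]))
```

The point of such a split is that convolutional and recurrent layers have very different memory-access patterns, so dedicating a tuned engine to each can save power in an always-on hearing device.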
 
  • Like
  • Sad
  • Wow
Reactions: 13 users

Guzzi62

Regular
How does it fit with Anil's reaction?

View attachment 70984

Maybe he is just being polite?


But of course, I also hope it's us, but nothing is granted yet!
 
  • Like
Reactions: 2 users

Esq.111

Fascinatingly Intuitive.
Look at this! I think we could be onto a goer here! Or at the very least it's definitely something we should keep an eye on from a competition point of view IMO.


Background:
  • Sonova acquired Sennheiser in 2021.
  • VALEO and Sennheiser have been working together, combining their technologies for an immersive sound system in a demo car showcased at CES 2024.

Sonova's new hearing aids:
  • Will be available 2025
  • Incorporate real-time artificial intelligence
  • Can learn from the user
  • Can reduce and cancel unwanted noise



View attachment 70997


View attachment 71002







View attachment 71004



View attachment 71003



View attachment 71001
Morning Chippers ,

Hearing Aids .

Saw this company ann a little while ago , AUDEARA LIMITED ,Australian listed (Code . AUA ) , Tiny market cap AU$7 mill give or take , they recently picked up a contract...... Dated 8th October .

Patent Sleuth required for a gentle probing ???.


Audeara Limited (AUA) is a hearing health leader specialising in innovative listening solutions for people with hearing challenges. The company is focused on redefining hearing health, particularly on delivering products that provide world-class tailored listening experiences. All Audeara products are proudly designed and engineered in Australia.

Company Details

Name: Audeara Limited
ACN: 604 368 443
Chairman: David Trimboli
MD: Dr James Alexander Fielding
Address: 35 Brookes St, Bowen Hills, QLD 4006
Website: www.audeara.com
Dividend Reinvestment: No
DRP Status: None
Investor Relations Name: Henry Jordan

Share Registry Details

Principal Registry: Computershare Investor Services Pty Ltd
Address: Yarra Falls, 452 Johnston St, Abbotsford, VIC 3067
Postal Address: GPO Box 3224, Melbourne, VIC 3001
Phone: +61 3 9415 4000
Fax: +61 3 9473 2500
Investor Enquiries: +61 3 9415 4000
Toll Free: 1300 787 272
Email: enquiry@computershare.com.au


Regards ,
Esq.

Edit ... My limited search came up with three patents which , to my eye , don't use our tech but who knows.

Search results: 3 results found

1. Customizable Personal Sound Delivery System
US2017046120A1 • 2017-02-16 • AUDEARA PTY LTD
Earliest priority: 2015-06-29 • Earliest publication: 2016-07-07
A sound delivery system includes a processing assembly with a user interface coupled to the at least one processing assembly. At least one audio transducer is provided for delivering sound to a user. The audio transducer is responsive to the processing assembly. Typically the audio transducer is a loudspeaker of a pair of headphones or earbuds, though it may also be a bone conduction transducer. The at least one processing assembly is arranged to determine compensatory weights at each of a number of audio frequencies for the user on the basis of user responses via the interface to sounds delivered via the audio transducer and to deliver audio signals to the user modified in accordance with the determined weights via the audio transducer.

2. Calibration Method for Customizable Personal Sound Delivery System
US10936277B2 (A1) • 2021-03-02 • AUDEARA PTY LTD
Earliest priority: 2015-06-29 • Earliest publication: 2018-12-20
A method (100) for calibrating a sound delivery system (1) having a processing assembly, a data communications assembly (9) coupled to the processing assembly, and at least one audio transducer (21a, 21b) mounted with at least one processor (11) of the processing assembly and responsive thereto for delivering sound to a user (3), the method including the steps of: transmitting from a remote user interface device (6) for the sound delivery system, a sequence of command codes for specifying predetermined characteristics of test sounds; receiving the command code sequence at the communications assembly of the sound delivery system; providing the command code sequence to the processing assembly of the sound delivery system; reproducing by a selected at least one audio transducer, the predetermined test sounds under control of said at least one processor according to the command code sequence; measuring with a reference SPL meter (70) proximate to the audio transducer, characteristics of test sounds reproduced by the sound delivery system; comparing the measured characteristics of the reproduced sounds with the predetermined characteristics of the test sounds; producing a mapping of specified test sounds to sounds reproduced by said at least one audio transducer; and storing the mapping in an electronic memory (12, 82) associated with the processing assembly or remote interface device (6).

3. CALIBRATION METHOD FOR CUSTOMIZABLE PERSONAL SOUND DELIVERY SYSTEMS
EP3827598A1 (A4) • 2021-06-02 • AUDEARA PTY LTD
Earliest priority: 2018-07-23 • Earliest publication: 2020-01-30
No abstract available
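To give a rough feel for what the first patent's "compensatory weights at each of a number of audio frequencies" could look like in practice, here is an illustrative sketch (my own simplification, not Audeara's method): derive a per-band gain from a user's hearing thresholds and scale each frequency band accordingly.

```python
def compensatory_gains_db(user_thresholds_db, reference_db=20.0):
    """Boost each band where the user's hearing threshold exceeds a
    normal-hearing reference level (an illustrative rule of thumb)."""
    return [max(0.0, t - reference_db) for t in user_thresholds_db]

def apply_gains(band_amplitudes, gains_db):
    """Scale each frequency band's amplitude by its gain in dB."""
    return [a * 10 ** (g / 20) for a, g in zip(band_amplitudes, gains_db)]

# Hypothetical audiogram: mild-to-moderate high-frequency loss.
thresholds = [20, 20, 25, 40, 55]   # dB HL at 0.5, 1, 2, 4 and 8 kHz
gains = compensatory_gains_db(thresholds)
print(gains)                        # boost only where loss exceeds reference
print(apply_gains([1.0] * 5, gains))
```

A real product would determine the weights interactively, as the abstract describes, and apply them in a filter bank rather than to raw band amplitudes, but the per-frequency compensation idea is the same.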
 
Last edited:
  • Like
  • Thinking
  • Fire
Reactions: 13 users

Diogenese

Top 20
If the market is always right, why does the answer change so often?
 
  • Haha
  • Like
  • Fire
Reactions: 14 users

7für7

Top 20
If the market is always right, why does the answer change so often?
There are situations where the question is mostly the answer!
 
  • Like
Reactions: 1 users

Diogenese

Top 20
Is it just me, or is ASX in witness protection?
 
  • Like
  • Thinking
  • Haha
Reactions: 8 users

7für7

Top 20
  • Haha
Reactions: 3 users

TheDrooben

Pretty Pretty Pretty Pretty Good
Some nice healthy sideways movement over the last couple of trading days. Getting ready for the next rise.........




Happy as Larry
 
  • Haha
  • Like
  • Love
Reactions: 20 users

Tothemoon24

Top 20


Neuromorphic computing has considerable potential for us across many areas.

By mimicking the functionality of the human brain, it can make next-generation AI computation considerably faster and more energy efficient.

One current research project is NAOMI4Radar funded by the German Federal Ministry for Economic Affairs and Climate Action. As consortium leader, we are working with partners to assess how neuromorphic computing can be used to optimise the processing chain for radar data in automated driving systems.

Current Mercedes-Benz models use frontal radar to see 200 metres in front of them. For instance, our DRIVE PILOT system uses radar data as one of its many sources for enabling conditionally automated driving.

The aim of the NAOMI4Radar project is to demonstrate that neuromorphic computing can bring fundamental benefits to future generations of automated and autonomous driving systems.

But as I said, this is just one current research project. More on that soon.

IMG_9752.jpeg

Loihi 2 - smoke screen or smoked 🚭



Over 12 months ago Mercedes served this up fingers crossed our time has arrived ⬇️






My new “In the Loop” series kicks off with #Neuromorphic Computing – the clear winner of my poll a few weeks ago.

For those unfamiliar, this highly significant field of computing strives to emulate the multi-tasking of the human brain. Traditional microprocessors function sequentially. However, as the complexity and scale of calculations soars, this way of doing things is rapidly running out of road.

The idea is not new, but trying to “put a brain on a chip” is a mammoth task. To put it into figures: the human brain has 86-100 billion neurons operating on around 20 watts. Current neural chips from leading developers such as BrainChip and Intel Corporation contain around 1 million neurons and consume roughly 1 watt of power.

So, you see, despite impressive advances, there is still a very long way to go. Neuromorphic computing goes well beyond chip design and includes a specific kind of artificial neural network called #spikingneuralnetworks (SNN). They consume far less energy because the neurons are silent most of the time, only firing (or spiking) when needed for events.
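The energy argument — neurons silent most of the time, work done only on events — can be seen in a minimal leaky integrate-and-fire simulation. This is a textbook toy model, not BrainChip's or Intel's implementation; the time constant, threshold, and weight are arbitrary assumptions.

```python
def lif_spikes(events, n_steps, tau=0.9, threshold=1.0, weight=0.4):
    """Leaky integrate-and-fire neuron driven by sparse input events.

    events: set of timesteps at which an input spike arrives.
    Returns the list of timesteps at which the neuron fires.
    """
    v = 0.0
    fired = []
    for t in range(n_steps):
        v *= tau                      # membrane potential leaks every step
        if t in events:
            v += weight               # integrate only when an event arrives
        if v >= threshold:
            fired.append(t)           # spike, then reset
            v = 0.0
    return fired

# A burst of input events drives the neuron over threshold; silence does not.
burst = lif_spikes(events={10, 11, 12, 13}, n_steps=100)
quiet = lif_spikes(events=set(), n_steps=100)
print("burst spikes at:", burst)
print("quiet spikes at:", quiet)
```

In the quiet case the neuron does no meaningful work for 100 timesteps, which is the crux of the efficiency claim: energy is spent only where there is signal.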

Together with intense parallel execution on neuromorphic chips, the new processing principles require us to go beyond the application of existing #AI frameworks to neuromorphic chips. We have to fundamentally rethink the algorithms that ultimately enable future AI functions in our cars, gathering joint inspiration from machine learning, chip design and neuroscience. Our experts are working closely with our partners to examine their potential in new applications.

The thing is, even a tiny fraction of the thinking capacity of the human brain can go a long way in several fields that are extremely relevant to automotive applications. Examples include advanced driving assistance systems #ADAS as well as the on-board analysis of speech and video data, which can unlock major advances in how we communicate with our cars.

We already made some interesting findings here with our VISION EQXX, where we applied neuromorphic principles to the “Hey Mercedes” hot-word detection. That alone made it five to ten times more energy efficient than conventional voice control. As AI and machine learning take on an increasingly important role in the software-defined vehicle, the energy this consumes is likely to become a critical factor.

I’ll touch on our latest findings in an upcoming “In the Loop” and tell you my thoughts on where this is taking us.

In the meantime, for those of you interested in reading up on neuromorphic computing, check out the slider for my recommended sources. I’ve graded them to ensure there’s something for everyone, from absolute beginner to true geeks.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 28 users