BRN Discussion Ongoing

Frangipani

Top 20
Also…Sean confirmed the company that is manufacturing the glasses for Onsor is the same one doing them for Meta. I thought that was interesting synergy there.

Maybe you should stop laughing and hear what he says at 14.29
Onsor is moving aggressively and frames are being designed by none other than LUXOTTICA! The same company (Ray-Bans) who are making the Meta glasses. Based on this little nugget I'm gonna buy a shit ton more shares in the next few weeks...
Now back to your whining, crying and laughing.
PS I think Sean presented amazingly well here. I'm a fan....

EssilorLuxottica must have been one of the companies Steve Brightfield was referring to in his CES 2025 interview with Don Baine aka The Gadget Professor, when he said:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-446882

“(…) I talked to some manufacturers that have glasses, where they put microphones in the front and then AI algorithms in the arm of the glasses, and it doesn’t look like you’re wearing hearing aids, but you’re getting this much improved audio quality. Because most people have minor hearing loss and they don't want the stigma of wearing hearing aids, but this kind of removes it - oh, it's an earbud, oh, it's glasses I am wearing, and now I can hear and engage."

DB: “Sure, I could see that, that's great. So does this technology exist? Is that something someone could purchase in a hearing aid, for example?”

SB: “Actually, I’ve seen manufacturers out on the floor doing this. We are working with some manufacturers to put more advanced algorithms in the current ones in there, yes.”


He didn’t specify which manufacturers or even confirm BrainChip were working with any of those manufacturers that were exhibiting at CES (note he didn’t say “I’ve seen manufacturers on the floor doing this. We are working with some of those manufacturers…”).

What this little dialogue does imply though, is that BrainChip’s technology is not yet implemented in any of the models that are already commercially available, such as the Nuance Audio OTC (Over The Counter) Hearing Aid Glasses that EssilorLuxottica launched earlier this year.

In January 2024, @Tothemoon24 had posted about their prototype to be displayed at CES 2024:

In July, EssilorLuxottica announced its plans to expand into the hearing solutions market, enabled by its acquisition of Nuance Hearing earlier in the year. With a dedicated Super Audio team and in-house R&D resources, the Group plans to introduce an integrated technology at the intersection of vision and sound for the 1.25 billion people experiencing mild to moderate hearing loss.



The Group’s Super Audio team is working on its first product, embedding a high-quality hearing technology into fashionable eyeglasses seamlessly, expected to launch in the second half of 2024.



Francesco Milleri, Chairman and CEO of EssilorLuxottica, and Paul du Saillant, deputy CEO said: “While sight remains our core business – and growing the optical market our strategy – we are uniquely positioned to open up a new avenue for the industry by addressing the need for good hearing with innovative technologies. As we did in the vision space, we will be the first to remove the stigma of traditional hearing solutions, replacing it with comfort and style.


Maybe one for the watch list

View attachment 53758



The audio component will be completely invisible, removing a psychological barrier that has historically stood in the way of consumer adoption of traditional hearing aids.



A prototype of the solution will be on display at the upcoming Consumer Electronics Show, the most powerful tech event in the world, which will be taking place from the 9 to the 12 of January 2024 in Las Vegas.

With their fully developed solution, EssilorLuxottica clinched an award in Digital Health at CES 2025:


8FA8AE82-8332-4A75-995F-2BD6F8E81F19.jpeg



A month later, they had received FDA clearance and EU Certifications, which paved the way to make Nuance Audio available to consumers in the US and Europe:


B578FD54-864A-4D8C-9ED6-B964E36D64A7.jpeg





30D396D5-6B5A-4A76-8EB7-D44339B61946.jpeg


5D3A1646-0464-4BB5-927E-3272348DE54A.jpeg




Here are more details about the Nuance Audio hearing glasses courtesy of Soundly, “a single destination for hearing aid research, shopping, and expert care”, including a recent video review:






72460BDE-ECA8-4152-A191-1E2A9CB3E4D4.jpeg



The Nuance Audio hearing glasses are now also available for purchase here in Germany, where they sell for 1100 €.


2D9A8041-D8E5-4218-B9BD-DD2EE0D56589.jpeg



One drawback is their limited battery life (8-10 hours per charge) - that’s where neuromorphic technology could come in handy, extending the battery life of future models of such hearing aid eyewear.
 
  • Like
  • Fire
  • Love
Reactions: 16 users

wilson

Emerged
EssilorLuxottica must have been one of the companies Steve Brightfield was referring to in his CES 2025 interview with Don Baine aka The Gadget Professor, when he said:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-446882

“(…) I talked to some manufacturers that have glasses, where they put microphones in the front and then AI algorithms in the arm of the glasses, and it doesn’t look like you’re wearing hearing aids, but you’re getting this much improved audio quality. Because most people have minor hearing loss and they don't want the stigma of wearing hearing aids, but this kind of removes it - oh, it's an earbud, oh, it's glasses I am wearing, and now I can hear and engage."

DB: “Sure, I could see that, that's great. So does this technology exist? Is that something someone could purchase in a hearing aid, for example?”

SB: “Actually, I’ve seen manufacturers out on the floor doing this. We are working with some manufacturers to put more advanced algorithms in the current ones in there, yes.”


He didn’t specify which manufacturers or even confirm BrainChip were working with any of those manufacturers that were exhibiting at CES (note he didn’t say “I’ve seen manufacturers on the floor doing this. We are working with some of those manufacturers…”).

What this little dialogue does imply though, is that BrainChip’s technology is not yet implemented in any of the models that are already commercially available, such as the Nuance Audio OTC (Over The Counter) Hearing Aid Glasses that EssilorLuxottica launched earlier this year.

In January 2024, @Tothemoon24 had posted about their prototype to be displayed at CES 2024:



With their fully developed solution, EssilorLuxottica clinched an award in Digital Health at CES 2025:


View attachment 85010


A month later, they had received FDA clearance and EU Certifications, which paved the way to make Nuance Audio available to consumers in the US and Europe:


View attachment 85007




View attachment 85005

View attachment 85006



Here are more details about the Nuance Audio hearing glasses courtesy of Soundly, “a single destination for hearing aid research, shopping, and expert care”, including a recent video review:






View attachment 85008


The Nuance Audio hearing glasses are now also available for purchase here in Germany, where they sell for 1100 €.


View attachment 85009


One drawback is their limited battery life (8-10 hours per charge) - that’s where neuromorphic technology could come in handy, extending the battery life of future models of such hearing aid eyewear.



How much longer would neuromorphic technology extend battery life in your opinion?
 
  • Like
Reactions: 1 users

wilson

Emerged
Good presentation.
Sean says from the 3.48min mark that we will be hitting our stride within the next 10 to 20 years.👍
Haha, gotta think long term
 
  • Like
Reactions: 2 users

Baneino

Regular
Correct me if I am wrong, but I think the Innatera T1 does not have on-chip learning as AKIDA does.
As far as I know, the T1 cannot do this.
 

wilson

Emerged
Also…Sean confirmed the company that is manufacturing the glasses for Onsor is the same one doing them for Meta. I thought that was interesting synergy there.

To me it is an indicator that they are a reliable manufacturer at the very least. Many of the quality control processes could potentially be common for both product streams.
 
  • Like
Reactions: 2 users

BrainShit

Regular


Summary for [BrainChip spoke at the Pitt Street Research Semiconductor Conference 2025]


[00:01](https://www.youtube.com/watch?v=IegDQDnNRvU&t=1)

In this presentation, Sean Hehir talks about the role of BrainChip in the field of Artificial Intelligence and the semiconductor industry. He emphasizes the importance of neural processing units (NPUs) and describes the widespread adoption of AI in various industries - Introduction of Sean Hehir and his background in the semiconductor industry.
  • Explanation of BrainChip's role as a provider of neural processing units (NPUs) for AI applications.
  • Discussion of the widespread adoption of AI and its impact on all industries.

[03:28](https://www.youtube.com/watch?v=IegDQDnNRvU&t=208)
This section highlights the importance of inference in AI development and the need for the right hardware to create business value - Nvidia and similar companies specialize in training AI models, while the real value for businesses comes from inference.
  • Over the next 10 to 20 years, the majority of investment will go into inference, while the training market will remain stable.
  • The trend is towards decentralized models, with Edge AI gaining traction and the right technology required for specific business problems.
  • Data centers are ideal for training AI models, while the development of edge AI opens up new opportunities for the implementation of AI in various industries.

[07:00](https://www.youtube.com/watch?v=IegDQDnNRvU&t=420)

In this section, the speaker talks about his company's strengths and opportunities in semiconductor technology and emphasizes the importance of innovative applications - The speaker notes the challenge that companies often have either good hardware or good software and emphasizes that his company is strong in both areas.
  • He explains the size and growth potential of the edge market and emphasizes that the demand for semiconductor solutions is increasing.
  • The speaker shares his excitement about the innovative use cases he is hearing from customers and prospects and emphasizes that it is still early in the development process.
  • He discusses the efficiency of models and applications compared to different hardware solutions and warns that not everything that is possible is useful.

[10:29](https://www.youtube.com/watch?v=IegDQDnNRvU&t=629)

This section highlights the importance of models and innovation in the semiconductor industry, especially for the development of AI solutions and the need to provide a comprehensive solution - customization of software and models to specific requirements is crucial for success in the industry.
  • An extensive model pool is necessary to meet different customer needs and ensure continuous growth.
  • Importance is placed on the company's roadmap to give customers confidence in future technology.
  • Constant innovation is essential to remain relevant in the highly competitive Silicon market.

[14:01](https://www.youtube.com/watch?v=IegDQDnNRvU&t=841)

This section introduces BrainChip's innovative technology that can significantly improve the lives of people with epilepsy. A partner company is developing a pair of glasses equipped with a special chip that predicts the probability of an impending seizure with 98% accuracy - The government is showing great interest in investing in new technologies.
  • The company Onsor is working on a pair of glasses equipped with a BrainChip chip; the glasses are currently undergoing clinical trials.
  • The glasses can predict seizures in epilepsy patients, giving them greater safety and quality of life.
  • The presentation also explains how companies like BrainChip generate revenue through license sales and royalties.

[17:30](https://www.youtube.com/watch?v=IegDQDnNRvU&t=1050)

This section emphasizes the importance of talent and strategic planning for success in the technology industry, stressing the need to attract and retain the best talent while addressing the challenges and risks within the company - challenges include potential risks and talent issues that are critical to the company's success.
  • The importance of talent is emphasized; the company strives to attract the best professionals to drive change.
  • Talent management is a major challenge as the company competes against high salaries in Silicon Valley.
  • There is a risk that the company will not properly execute its strategic plan, highlighting the need for constant review of trends in the industry.
  • The discussion about new model trends in AI, particularly the development of state space models, demonstrates the company's ability to innovate.

[21:00](https://www.youtube.com/watch?v=IegDQDnNRvU&t=1260)

This discussion on BrainChip emphasizes the importance of discrete computing solutions and their applications in various industries - Discrete computing solutions make it possible to work without a network connection, which is especially important in mobile applications.
  • The technology has an impact on many industries, with some such as IoT and medical being in particularly high demand.
  • The technology is also being researched in the automotive sector, but the adoption cycle is longer here due to the complexity of the systems.


My personal thoughts: BrainChip will need money to keep up its development and has to make sure it is able to pay a decent salary to its experts. Otherwise we'll lose grip ... or be taken over.
 
  • Like
  • Fire
Reactions: 8 users

Diogenese

Top 20
How much longer would neuromorphic technology extend battery life in your opinion?
Hi wilson,

Inference is a very compute-heavy task, but Akida excels in this.

It's difficult to find power usage stats for a particular task, much less comparison between different processors. However, this is a comparison between Akida and Jetson for KWS:
1747904508485.png




https://brainchip.com/wp-content/uploads/2023/01/BrainChip_Benchmarking-Edge-AI-Inference-1.pdf

That's 33.17 : 756 less power for KWS, about 250 times less power for the KWS task using 2 nodes (4 NPUs) of Akida. Remember that Akida 1000 does have a relatively large "standby" power usage of about 1 W, but if Akida 1500 is used, there is no dedicated Cortex processor. I don't know the standby power for Nvidia. Akida 2 is said to be 8 times more efficient than Akida 1. Then there's TENNs which is even more efficient.

In performing inference, Akida 2 is potentially more than 1000 times more efficient than the GPU-based Jetson Nano.
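As a rough back-of-the-envelope illustration of what those power figures mean for a wearable, here is a sketch in Python. The battery capacity and voltage are my own assumptions (a small glasses-sized cell), not figures from BrainChip or the benchmark PDF; only the two power numbers come from the comparison above.

```python
# Back-of-the-envelope: average inference power vs. runtime on a small
# wearable battery. Battery figures are illustrative assumptions; the two
# power numbers are the KWS figures quoted above.

BATTERY_MAH = 250   # assumed glasses battery capacity (mAh)
BATTERY_V = 3.7     # assumed nominal cell voltage (V)
battery_j = BATTERY_MAH / 1000 * 3600 * BATTERY_V  # capacity in joules

def runtime_hours(avg_power_mw: float) -> float:
    """Hours of continuous operation at a given average power draw."""
    return battery_j / (avg_power_mw / 1000) / 3600

for label, mw in [("Jetson-class (756 mW)", 756.0),
                  ("Akida KWS (33.17 mW)", 33.17)]:
    print(f"{label}: {runtime_hours(mw):.1f} h")
```

On those assumed battery numbers, the quoted power gap is the difference between roughly a day of continuous operation and roughly an hour; in practice, standby power, duty cycling and the rest of the system would dominate the real figure.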

However, some hearing aid glasses do not use AI, so it's not an apples : apples comparison:

https://www.wired.com/review/essilorluxottica-nuance-audio-glasses/

The Nuance Audio builds on technology the company pioneered last year for its Ray-Ban Meta glasses, which pipe audio to the wearer via open-ear speakers built into each arm of the specs. There’s no camera or intelligence (nor an Awkwafina voice option) on the Nuance Audio, as these speakers are meant to amplify ambient sounds captured by the directional microphones and send them to, or at least near, your ears. You can’t even see the speakers on the glasses or pinpoint where the sound is coming from while you’re wearing them. It’s just … there.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 20 users

wilson

Emerged
Hi wilson,

Inference is a very compute-heavy task, but Akida excels in this.

It's difficult to find power usage stats for a particular task, much less comparison between different processors. However, this is a comparison between Akida and Jetson for KWS:
View attachment 85012



https://brainchip.com/wp-content/uploads/2023/01/BrainChip_Benchmarking-Edge-AI-Inference-1.pdf

That's 33.17 : 756 less power for KWS, about 250 times less power for the KWS task using 2 nodes (4 NPUs) of Akida. Remember that Akida 1000 does have a relatively large "standby" power usage of about 1 W, but if Akida 1500 is used, there is no dedicated Cortex processor. I don't know the standby power for Nvidia. Akida 2 is said to be 8 times more efficient than Akida 1. Then there's TENNs which is even more efficient.

In performing inference, Akida 2 is potentially more than 1000 times more efficient than the GPU-based Jetson Nano.

However, some hearing aid glasses do not use AI, so it's not an apples : apples comparison:

https://www.wired.com/review/essilorluxottica-nuance-audio-glasses/

The Nuance Audio builds on technology the company pioneered last year for its Ray-Ban Meta glasses, which pipe audio to the wearer via open-ear speakers built into each arm of the specs. There’s no camera or intelligence (nor an Awkwafina voice option) on the Nuance Audio, as these speakers are meant to amplify ambient sounds captured by the directional microphones and send them to, or at least near, your ears. You can’t even see the speakers on the glasses or pinpoint where the sound is coming from while you’re wearing them. It’s just … there.

Hey, that is such a detailed reply. Thanks very much for taking the time ☺️☺️
 
  • Like
  • Love
Reactions: 4 users

Frangipani

Top 20
How much longer would neuromorphic technology extend battery life in your opinion?

Hi @wilson,

sorry, I don’t have any technical background, so I need to rely on what we’re being told by our company or others in this field.

In that same CES 2025 interview I quoted from in my previous post, Steve Brightfield said the following:

“… and we announced this fall Akida Pico, which is microwatts, so we can use a hearing aid battery and have this run for days and days doing detection algorithms [?]. So it is really applicable to put on wearables, put it in eye glasses, put it in earbuds, put it in smart watches.” (from 4:18 min)
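For a sense of scale on that "days and days" claim, here is a rough sketch; the cell size and the average draw are my assumptions (a typical zinc-air hearing-aid cell and a notional microwatt-class workload), not BrainChip figures:

```python
# Sanity check of "microwatts on a hearing-aid battery runs for days".
# Cell capacity/voltage and the average draw are assumptions, not
# BrainChip data.

CELL_MAH = 160    # assumed size-312 zinc-air hearing-aid cell (mAh)
CELL_V = 1.4      # nominal zinc-air cell voltage (V)
AVG_UW = 200.0    # assumed average draw for an always-on detection task

cell_j = CELL_MAH / 1000 * 3600 * CELL_V   # ~806 J of stored energy
days = cell_j / (AVG_UW * 1e-6) / 86400
print(f"~{days:.0f} days of continuous operation")
```

Even at a few hundred microwatts, the arithmetic comfortably supports weeks of always-on operation, which is consistent with the quote.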







FABDDD9B-3CFF-4AA4-89FC-B75C2D8EC0DB.jpeg



You may also want to have a look at this:

 
Last edited:
  • Like
  • Fire
Reactions: 10 users

jrp173

Regular
As fascinating as the technology is, I don't think I'm going too far out on a limb by saying for us here it's all about the share price.
Just watching how the price has been manipulated over the last several years, when the inevitable price-sensitive announcement drops 🙏🏻 I hope the share price reflects the achievement and isn't contained and controlled by whoever the scumbags are who have been doing the manipulating.

It's not manipulation that's keeping the share price down, it's our board who refuse to make announcements through the ASX about our own company. The board who won't do anything to keep current (and potential) shareholders informed.

The board who says the share price will do what the share price will do.

The blame lies squarely at their feet.
 
  • Like
  • Fire
Reactions: 5 users
It's not manipulation that's keeping the share price down, it's our board who refuse to make announcements through the ASX about our own company. The board who won't do anything to keep current (and potential) shareholders informed.

The board who says the share price will do what the share price will do.

The blame lies squarely at their feet.
They don't care, they're going to America
 

Rach2512

Regular


Interesting talk, talks about spiking neural networks, not Brainchip, but raising awareness of the energy efficiency.

Perhaps someone with a lot more technical knowledge than me can post a comment 🤔
 

FiveBucks

Regular
It's not manipulation that's keeping the share price down, it's our board who refuse to make announcements through the ASX about our own company. The board who won't do anything to keep current (and potential) shareholders informed.

The board who says the share price will do what the share price will do.

The blame lies squarely at their feet.

It's not the lack of announcements, it's the lack of sales....
 
  • Fire
Reactions: 1 users

JB49

Regular

Hopefully they aren't limiting themselves to analog designs.

Also a mention of RRAM. We heard Coby Hanoch of Weebit recently say that they were working on something with Brainchip.

Role: Analog Devices (ADI) is exploring heterogeneous AI compute platforms that blend traditional digital architectures with emerging, non-conventional approaches, including but not limited to analog compute, in-memory processing, and neuromorphic hardware.
The Frontier AI team is seeking a senior technical contributor to launch and lead this long-term, cross-functional initiative. The role focuses on exploration, prototyping, and platform definition, with the ultimate goal of enabling ultra-efficient AI compute solutions that can scale across ADI’s wide variety of applications, spanning robotics, health wearables, audio systems, energy, and much more!
This is a strategic and ongoing initiative, not a one-off research project.
The role requires technical depth, program execution, and the ability to engage both internal stakeholders across functions and external organizations, in particular the startup ecosystem.
Key responsibilities:
  • Lead the early-stage definition and execution of ADI’s next-generation AI compute initiative
  • Evaluate and prototype non-traditional computing paradigms, including analog signal-chain AI, in-memory compute (e.g., SRAM or RRAM), and neuromorphic systems (e.g., event-driven or spiking architectures).
  • Architect hybrid compute platforms that integrate low-power, unconventional elements with traditional digital compute blocks
  • Translate architectural advances into deployable systems for high-impact ADI markets, driving ultra-low latency, ultra-low power, and ultra-local adaptability (on-device learning) in application domains spanning robotics, consumer devices, automotive, industrial, and other areas.
  • Run internal proof-of-concept trials both with internal groups (e.g., Field-Programmable Gate Arrays, FPGAs) and with external startups focused on non-traditional AI compute.
  • Author and maintain core technical documents, such as Compute Exploration Plan, Architecture Whitepapers, Prototype Specification (initial system build targets)
  • Partner with ADI Corporate Strategy and Development, and platform leaders to align on roadmap, application fit, and integration potential
  • Stay abreast of leading AI compute research and the technical practicalities of implementation, and drive the team towards advancing the state of the art.
Qualifications:
  • 10+ years of experience in hardware systems, embedded compute, and AI platform development
  • Strong understanding of Artificial Intelligence (AI) and Machine Learning (ML) workload requirements and their impact on compute hardware
  • Technical exposure and knowledge of state-of-the-art of non-traditional architectures such as analog multiply-accumulate units, in-memory-compute, spiking neural networks
 

Frangipani

Top 20
EssilorLuxottica must have been one of the companies Steve Brightfield was referring to in his CES 2025 interview with Don Baine aka The Gadget Professor, when he said:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-446882

“(…) I talked to some manufacturers that have glasses, where they put microphones in the front and then AI algorithms in the arm of the glasses, and it doesn’t look like you’re wearing hearing aids, but you’re getting this much improved audio quality. Because most people have minor hearing loss and they don't want the stigma of wearing hearing aids, but this kind of removes it - oh, it's an earbud, oh, it's glasses I am wearing, and now I can hear and engage."

DB: “Sure, I could see that, that's great. So does this technology exist? Is that something someone could purchase in a hearing aid, for example?”

SB: “Actually, I’ve seen manufacturers out on the floor doing this. We are working with some manufacturers to put more advanced algorithms in the current ones in there, yes.”


He didn’t specify which manufacturers or even confirm BrainChip were working with any of those manufacturers that were exhibiting at CES (note he didn’t say “I’ve seen manufacturers on the floor doing this. We are working with some of those manufacturers…”).

What this little dialogue does imply though, is that BrainChip’s technology is not yet implemented in any of the models that are already commercially available, such as the Nuance Audio OTC (Over The Counter) Hearing Aid Glasses that EssilorLuxottica launched earlier this year.

In January 2024, @Tothemoon24 had posted about their prototype to be displayed at CES 2024:



With their fully developed solution, EssilorLuxottica clinched an award in Digital Health at CES 2025:


View attachment 85010


A month later, they had received FDA clearance and EU Certifications, which paved the way to make Nuance Audio available to consumers in the US and Europe:


View attachment 85007




View attachment 85005

View attachment 85006



Here are more details about the Nuance Audio hearing glasses courtesy of Soundly, “a single destination for hearing aid research, shopping, and expert care”, including a recent video review:






View attachment 85008


The Nuance Audio hearing glasses are now also available for purchase here in Germany, where they sell for 1100 €.


View attachment 85009


One drawback is their limited battery life (8-10 hours per charge) - that’s where neuromorphic technology could come in handy, extending the battery life of future models of such hearing aid eyewear.


I am a bit confused now as to whether or not there is any AI involved in the technology inside the Nuance Audio frames, since the Wired article @Diogenese shared claims there isn’t, although an interview from CES 2025 (see video from 5 min or the transcript) suggested to me that the software running on the glasses “capturing the sound, cleaning the sound” was something similar to the audio denoising algorithms BrainChip has been demoing? 🤔


AA8B5A88-F31B-4CC7-8234-54CF54A7C963.jpeg


Anyway, regardless of whether or not there are any AI algorithms in the arm of the Nuance Audio hearing glasses, Steve Brightfield may technically still have referred to EssilorLuxottica when mentioning in his CES interview that he had “talked to some manufacturers that have glasses, where they put microphones in the front and then AI algorithms in the arm of the glasses” and confirming to Don Baine that he had actually seen manufacturers on the CES 2025 floor promoting this technology… 😉

Just prior to CES 2025, EssilorLuxottica had announced the acquisition of Pulse Audition, a French start-up that uses AI-driven noise reduction and voice enhancement embedded in their eyeglasses (“Pulse Frames”).
Their algorithms are designed to run only on the Pulse Frames.


FB50BF20-5A12-4BD1-A3C4-0AB74968308A.jpeg

6F3E0824-9E32-4AB7-B7AA-D58AEA655295.jpeg
E41628E3-FFB1-412C-BC84-9C7CBE86F688.jpeg




EssilorLuxottica Acquires Pulse Audition's Audio Glasses Tech​

The acquisition brings Pulse Frames' AI-driven technology and audio signal processing—as well as a team of highly skilled professionals—into EssilorLuxottica's portfolio, which includes its new Nuance Audio Glasses.

Written by
Karl Strom



EssilorLuxottica, Milan, Italy, the global leader in eyewear and creator of Nuance Audio Glasses, has announced the purchase of Pulse Audition, a French company that has developed AI software for a hearing glasses product, called Pulse Frames. Pulse Audition, based in Vallauris (near Cannes), France, is a startup that uses AI-driven noise reduction and voice enhancement software embedded in eyeglasses to enable individuals with hearing loss to better comprehend speech, even in challenging, noisy environments.

EssilorLuxottica is introducing its Nuance Audio glasses which feature advanced speech-in-noise software and an open-ear amplification system for people with perceived mild to moderate hearing loss. Nuance hopes to gain FDA approval for the product in the first half of 2025.

Nuance Audio Glasses use directional microphones and advanced speech-in-noise processing software to transmit sound to the open ear.

According to EssilorLuxottica, the acquisition brings Pulse Audition’s proprietary technologies, expertise in AI software development, embedded AI, and audio signal processing—as well as a team of highly skilled professionals—into EssilorLuxottica's portfolio. By integrating these capabilities, the Group plans to complement its existing proprietary hardware and software offerings, further elevating the quality of its products and solutions. The move builds on the Group’s strategic focus on hearing solutions, following its acquisition of Nuance Hearing in 2023.

“We continuously explore market opportunities in AI and big data, and this acquisition in France— one of our home countries— is a perfect fit with our long-term goals and investments in hearing solutions,” noted EssilorLuxottica Chairman & CEO Francesco Milleri and Deputy CEO Paul du Saillant in a press statement. “It reinforces our commitment to advancing the next category of computing platforms, also in Europe. We are excited to welcome this talented team in our Group and look forward to further unlocking the enormous potential in the underserved hearing space.”

HearingTracker Founder and Audiologist Abram Bailey, AuD, and Audiologist Matthew Allsop have favorably reviewed Nuance Audio glasses, along with several other experts in the field. It appears to be a promising new form factor for people seeking help for milder hearing loss and a discreet solution for hearing in noise. Both companies are displaying their products at this year's CES 2025 in Las Vegas on January 7-10 (see HearingTracker's CES 2025 preview).
 
  • Like
Reactions: 2 users

BrainShit

Regular
Hi wilson,

Inference is a very compute-heavy task, but Akida excels in this.

It's difficult to find power usage stats for a particular task, much less comparison between different processors. However, this is a comparison between Akida and Jetson for KWS:
View attachment 85012



https://brainchip.com/wp-content/uploads/2023/01/BrainChip_Benchmarking-Edge-AI-Inference-1.pdf

That's 33.17 : 756 less power for KWS, about 250 times less power for the KWS task using 2 nodes (4 NPUs) of Akida. Remember that Akida 1000 does have a relatively large "standby" power usage of about 1 W, but if Akida 1500 is used, there is no dedicated Cortex processor. I don't know the standby power for Nvidia. Akida 2 is said to be 8 times more efficient than Akida 1. Then there's TENNs which is even more efficient.

In performing inference, Akida 2 is potentially more than 1000 times more efficient than the GPU-based Jetson Nano.

However, some hearing aid glasses do not use AI, so it's not an apples : apples comparison:

https://www.wired.com/review/essilorluxottica-nuance-audio-glasses/

The Nuance Audio builds on technology the company pioneered last year for its Ray-Ban Meta glasses, which pipe audio to the wearer via open-ear speakers built into each arm of the specs. There’s no camera or intelligence (nor an Awkwafina voice option) on the Nuance Audio, as these speakers are meant to amplify ambient sounds captured by the directional microphones and send them to, or at least near, your ears. You can’t even see the speakers on the glasses or pinpoint where the sound is coming from while you’re wearing them. It’s just … there.

In addition...

Comparison target / power saving with Akida:
  • Most power-efficient AI chips: up to 10x lower
  • Standard data center architectures (CPU/GPU): up to 1,000x lower
  • Data center (task-specific): ~97% more efficient
  • Typical edge device operation: micro-watts to milli-watts


Akida neuromorphic chips provide a substantial reduction in power consumption—ranging from 10x to 1,000x—compared to conventional AI hardware, making them highly suitable for energy-constrained edge and battery-powered AI applications. Typical energy consumption is about 3 picojoules per synaptic operation (in 28nm technology).
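The 3 picojoules per synaptic operation figure translates directly into per-inference energy; the operation count below is an assumed example for a small keyword-spotting-sized model, not a published Akida number:

```python
# Per-inference energy from an energy-per-synaptic-op figure.
# 3 pJ/op is the figure quoted above; the op count is an assumption.

PJ_PER_SYNOP = 3e-12            # joules per synaptic operation
OPS_PER_INFERENCE = 5_000_000   # assumed ops for a small keyword model

energy_j = PJ_PER_SYNOP * OPS_PER_INFERENCE
print(f"{energy_j * 1e6:.0f} uJ per inference")
```

At around 15 µJ per inference, even one inference per second averages only about 15 µW, which is where micro-watt to milli-watt edge figures like those above come from.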

Brainchip's Akida (and especially Akida Pico) leads in maximum power savings, followed by Innatera Pulsar and T1, which are closely matched.

In practice, you operate within a magic triangle of trade-offs between:
  • Power consumption
  • Workloads
  • Accuracy

Source 1: https://brainchip.com/what-is-the-akida-event-domain-neural-processor-2/
Source 2: https://brainchip.com/brainchip-introduces-lowest-power-ai-acceleration-co-processor/
Source 3: https://www.businesswire.com/news/h...-Neuromorphic-Capabilities-to-M.2-Form-Factor
Source 4: https://marc-kennis.squarespace.com...esearch-initiation-report-20-08-2021-zpgy.pdf
Source 5: https://buyzero.de/blogs/news/unvei...inchip-and-innateras-contributions-to-edge-ai
 
  • Like
  • Fire
Reactions: 3 users

Jamiesam

Emerged
It's not manipulation that's keeping the share price down, it's our board who refuse to make announcements through the ASX about our own company. The board who won't do anything to keep current (and potential) shareholders informed.

The board who says the share price will do what the share price will do.

The blame lies squarely at their feet.

One would think they would have to have a reason for that though surely? If they thought it was in the best interests of the company to release more, I would have thought they would.
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
Hi wilson,

Inference is a very compute-heavy task, but Akida excels in this.

It's difficult to find power usage stats for a particular task, much less comparison between different processors. However, this is a comparison between Akida and Jetson for KWS:
View attachment 85012



https://brainchip.com/wp-content/uploads/2023/01/BrainChip_Benchmarking-Edge-AI-Inference-1.pdf

That's 33.17 : 756 less power for KWS, about 250 times less power for the KWS task using 2 nodes (4 NPUs) of Akida. Remember that Akida 1000 does have a relatively large "standby" power usage of about 1 W, but if Akida 1500 is used, there is no dedicated Cortex processor. I don't know the standby power for Nvidia. Akida 2 is said to be 8 times more efficient than Akida 1. Then there's TENNs which is even more efficient.

In performing inference, Akida 2 is potentially more than 1000 times more efficient than the GPU-based Jetson Nano.

However, some hearing aid glasses do not use AI, so it's not an apples : apples comparison:

https://www.wired.com/review/essilorluxottica-nuance-audio-glasses/
The Nuance Audio builds on technology the company pioneered last year for its Ray-Ban Meta glasses, which pipe audio to the wearer via open-ear speakers built into each arm of the specs. There’s no camera or intelligence (nor an Awkwafina voice option) on the Nuance Audio, as these speakers are meant to amplify ambient sounds captured by the directional microphones and send them to, or at least near, your ears. You can’t even see the speakers on the glasses or pinpoint where the sound is coming from while you’re wearing them. It’s just … there.

Dear Dodgy-Knees,

Thank you! This is such a phenomenal explanation!!!

I know I can’t possibly be the only one to feel so grateful for your contributions to this forum.

B 🌺
 
  • Like
  • Love
  • Fire
Reactions: 7 users

Jamiesam

Emerged
Dear Dodgy-Knees,

Thank you! This is such a phenomenal explanation!!!

I know I can’t possibly be the only one to feel so grateful for your contributions to this forum.

B 🌺
Definitely not the only one!
 
  • Fire
Reactions: 1 users

TanCA

Emerged
What are people's predictions for when we might see the share price back up at its previous highs?
 
Top Bottom