BRN Discussion Ongoing

manny100

Regular
Think of a timeline....12 months, 24 months, 36 months. With the way life and all things AI seem to be continually accelerating,
where would you like to be positioning yourself: at the Data Centre or at the Edge?

Now, currently there's a place for both approaches, but one is based on an aging architecture. Let's think of bottlenecks, let's
think of power consumption, let's think of critical, life-changing scenarios where answers are needed in "real time", not where
latency becomes a major issue, or internet outages become a major issue, or bandwidth becomes a major issue....the companies that talk
about the Edge, are they "really" treating the Edge (the future) with the respect it deserves?

We are clearly, in my opinion, so far out in front (maybe an exaggeration) that when we have Senior Executives (Steve Brightfield) who
know this industry backwards saying they feel that within 5 years we, BrainChip, will be in every product, well, one has to stop and think: is that
the salesman/marketer speaking, or is he really saying, listen up Nvidia and Co, we are the company that is going to be lighting up the future
with all our connected devices, all our intelligence and continuous learning on device, and so if there is going to be a future conduit to this
massive technology revolution, well, we at BrainChip currently hold the "Key (singular) to the Kingdom"...come and let's talk, the only condition
we ask is that you leave your egos at the reception area, thanks.

Go AKIDA...we all love your brilliance !

Tech.
Agree TECH, it's taken a lot of time and patience, but we are closing on deals and the Edge is getting closer to widespread adoption.
Ironically, just as we and the Edge are closing in on success, cloud-based NVIDIA is making record highs.
It just shows that big changes in IT move at a tortoise's pace. This actually works in our favour: when we are 'flying', the next wave of new tech will have to wait its turn, as we did, while we make record highs.
NVIDIA has 25 billion shares on issue and we have 2 billion. If NVIDIA had only 2 billion SOI it would be trading at circa US$1,735. Puts our LDA plans into perspective.
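For anyone who wants to check that arithmetic: holding market cap constant, the implied price scales inversely with shares on issue. A minimal sketch, assuming a roughly US$139 NVIDIA share price (my assumption, backed out of the circa US$1,735 figure, not a number from the post):

```python
# Back-of-envelope: price scales inversely with shares on issue (SOI)
# when market cap is held constant.
nvda_shares = 25e9        # ~25 billion NVIDIA shares on issue
nvda_price = 138.8        # assumed US$ price consistent with the circa $1,735 claim
market_cap = nvda_shares * nvda_price   # ~US$3.5 trillion

brn_soi = 2e9             # BRN has ~2 billion shares on issue
implied_price = market_cap / brn_soi
print(f"Implied price at 2 billion SOI: ${implied_price:,.0f}")  # ~$1,735
```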
 
  • Like
  • Fire
  • Wow
Reactions: 20 users
Hi FMF,

That's a 20-node Akida, so maybe an FPGA test chip. Certainly not the Akida 1000.

https://www.eejournal.com/article/brainchip-debuts-neuromorphic-chip/
...
Brainchip’s test chip benchmarking results are impressive – claiming 1,100 fps with the $10 chip on CIFAR-10 with 82% accuracy, using less than 0.2 Watts – about 6K fps/watt. The company says this compares with 83% at 6K fps/watt from IBM’s “True North” (at a cost of around $1K) and 80% from Xilinx ZC709 at 6K fps/watt (also around $1K).
...
Brainchip’s development environment is available in Q3 2018, and should be accessible to an FPGA-based acceleration board in advance of availability of the Akida chip in 2019.

I'm constantly amazed at how BRN keeps producing Akida 1000s out of their hat. I don't know where the Unigen, BH, VVDN, etc., chips will come from.

Pretty sure the December 2020 date was overoptimistic.
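For what it's worth, the article's "about 6K fps/watt" is just the quoted throughput divided by the quoted power, rounded up. A quick sanity check using only the numbers given above:

```python
# Sanity check on the article's efficiency claim: 1,100 fps at under 0.2 W.
fps = 1100
watts = 0.2
print(f"{fps / watts:,.0f} fps/W")  # 5,500 fps/W, roughly the quoted "6K fps/watt"
```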
Thanks for that.

20 nodes hey....wonder what the writer would think now of 80 haha
 
  • Like
Reactions: 4 users

manny100

Regular
(Quoting Diogenese's reply above about the 20-node test chip.)
Unfortunately all chips look the same to me.😁
 
  • Haha
Reactions: 3 users

7für7

Top 20
If I think about how Bill Eichen explained how easy it is to implement Akida into existing systems using their platform… (short version) I don’t think we will need to wait much longer… just my opinion (fingers crossed)
 
  • Like
Reactions: 14 users

Diogenese

Top 20
Unfortunately all chips look the same to me.😁
Yes. I recall Sean had something to say about the naming convention - or maybe they just had a whole lot of Akida 1000 caps left over.
 
  • Like
Reactions: 3 users

FuzM

Member
Did someone post this already? Maybe I missed it! 🤔


Seems like an old podcast from March 2, 2022.
 
  • Like
Reactions: 3 users

7für7

Top 20
Seems like an old podcast from March 2, 2022.
I haven’t listened to it yet… but it would make sense if it’s older. I was wondering anyway why Rob would talk about BrainChip in a podcast… I’m just wondering why the date is current. Thanks for the correction.
 
  • Like
Reactions: 1 users
Chetan Kadway’s Post

Chetan Kadway
Data Scientist | ML Researcher | Computer Vision | SpaceTech | Automotive | Edge AI | Neuromorphic Computing | Azure ML | ML-Ops


Last week, I had the incredible opportunity to visit the Netherlands (with my colleague Sounak Dey) on a business trip that turned out to be much more than I expected.

The primary purpose of the visit was to present our research work (as invited speakers) at the Morpheus Edge AI & Neuromorphic Computing workshop (https://lnkd.in/g-z-dAxJ) organised by Laurent Hili at ESA/ESTEC (European Space Research & Technology Centre). It was a well-rounded workshop where we got to meet SpaceTech stakeholders that are innovating towards embedding intelligence onboard satellites (startups, SoC manufacturers, European university researchers, ESA project leaders, etc.). Excellent work by the workshop organiser and the presenting participants.

Unexpectedly, I also met Sir Ananth Krishnan (Tata Consultancy Services Ex-CTO) on his post-retirement holiday. And the first thing that came to my mind was his retirement address at our IIT-KGP Research Park office: He motivated us to make products & services that are 1] Usable (should pass the Grandma test) 2] Trustworthy/Reliable 3] Frugal (space & time resource efficient, think of bits & bytes). His advice still guides us in our research work.

The research work we presented was done in collaboration with our friends at BrainChip (Gilles Bézard and Alf Kuchenbuch). A brief introduction to our work that got deployed on the BrainChip AKIDA neuromorphic processor:

Currently, there is a delay of many hours or even days in drawing actionable insights from satellite imagery (due to the mismatch between acquired data volume & limited comms bandwidth). We observed that end-users need either RAW images or analytics-ready metadata as soon as possible. Therefore, embedding intelligence (Edge AI) onboard satellites can result in quicker business decision-making across business verticals that rely on geo-spatial data.

To address this, guided by the foresight of Dr. Arpan Pal, we built a bundle of tech capabilities that helps send RAW data & analytics-ready metadata to the ground station as soon as possible. These tech capabilities include:
1) Cloud Cover Detection Model (high-accuracy, low-latency, low-power).
2) DL based Lossless Compression (around 45% compression ratio).
3) RL based Neural Architecture Search Algorithm (quickly search data+task+hardware specific optimal DL models).
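To make the bandwidth argument concrete, here is a minimal sketch of what an onboard triage loop built from those capabilities might look like. Every name in it (run_cloud_model, compress_lossless, the 70% cloud cutoff) is a hypothetical placeholder, not TCS's or BrainChip's actual API:

```python
# Hypothetical onboard triage: score each captured tile for cloud cover,
# downlink lightweight analytics-ready metadata immediately, and queue only
# the usable (mostly cloud-free) tiles for lossless compression and downlink.

CLOUD_CUTOFF = 0.7  # assumed threshold: tiles more than 70% cloud are not sent

def triage(tiles, run_cloud_model, compress_lossless, downlink):
    for tile in tiles:
        cloud_fraction = run_cloud_model(tile.pixels)   # low-power onboard inference
        downlink.send_metadata({"id": tile.id, "cloud_fraction": cloud_fraction})
        if cloud_fraction < CLOUD_CUTOFF:
            payload = compress_lossless(tile.raw)       # ~45% ratio per the post
            downlink.queue_raw(tile.id, payload)        # only useful RAW goes down
```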

We also had a chance to visit the TCS Paceport in Amsterdam, and hope to showcase our research prototype there soon. Looking forward to more future collaborations with Edge AI/neuromorphic hardware accelerator designers & space-grade SoC manufacturers.

Would like to thank Tata Consultancy Services - Research for such a great opportunity to build future tech for future demand. Would also like to thank our Edge & Neuromorphic team: Arijit Mukherjee, Sounak Dey, Swarnava Dey, Abhishek Roy Choudhury, Shalini Mukhopadhyay, Syed Mujibul Islam, Sayan Kahali and our academic research advisor Dr. Manan Suri.

#spacetech #satellite #edgecomputing #orbital #AI #neuromorphic #SNN #AKIDA #TCS

https://www.linkedin.com/posts/chet...gecomputing-activity-7195083124312088577-SO7P

SC
 
  • Like
  • Fire
  • Love
Reactions: 56 users

TopCat

Regular
[image attachment]
 
  • Like
  • Love
  • Fire
Reactions: 117 users
56 minutes is too long for me, but it might be worth a watch for someone on this lovely Sunday morning.

 
  • Haha
  • Wow
  • Like
Reactions: 3 users

Tothemoon24

Top 20
Brilliant,

Great to have our CTO, a well-respected industry professional who's happy to share company progress & also how we compare to other, less worthy competition.

This post by our CTO is, in my view, a perfect example of how to keep shareholders updated on progress without any NDA infringements.
 
  • Like
  • Fire
  • Love
Reactions: 69 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Our illustrious leader, Sean Hehir participated in a video session with the Industrial Technology Research Institute (ITRI).

Intriguingly, there was an article published online about 16 hours ago describing how ITRI was at CES, showing off a new AI-driven trainer that uses image analysis and data inference to improve an athlete’s performance.

I wonder if Sean got to polish up on his badminton skills? But more importantly, I wonder if we're involved in this AI athlete training system in some way, shape or form?







AI taught me to be a (slightly) better badminton player at CES

The Industrial Technology Research Institute was in Las Vegas to show off a new AI-driven trainer that uses image analysis and data inference to improve an athlete’s performance. The sport that served as our demonstration? Badminton.

US badminton Olympian Howard Shu plays with ITRI’s AI Badminton Trainer at CES 2025. [Photo: ITRI]
By Chris Morris · 3 minute read
I am not what you would call a finely tuned athletic machine. I am, if anything, an outdated lawnmower engine held together by duct tape and rust. So when I was offered the opportunity to let AI help make me a better athlete at CES, I jumped at the opportunity.
The offer came from the Industrial Technology Research Institute (ITRI), a non-profit that uses applied research to drive industrial development. They were showing off a new AI-driven trainer that uses image analysis and data inference to improve an athlete’s performance. The sport that was the focus of this particular demonstration? Badminton.
The upside, I thought, would be that no one at CES would be especially good at badminton—so it wouldn’t be as humbling as it would if the system was tracking, say, my golf swing. Then Howard Shu strolled by.
Shu is a member of the U.S. Olympic badminton team and has been playing the sport for 26 years. He’s tall, in remarkable shape, and knows how to make a shuttlecock do pretty much whatever he wants. He is, in other words, the antithesis of me. We’ll get back to him in a moment, though.
[Photo: ITRI]
To get a sense of my abilities, the training tool used a series of cameras to track my stance, swing, and other movements over five volleys. The data was fed into a Generative AI system, which instantly offered recommendations. At the same time, information like the speed of my volleys, the height with which they cleared the net, and where they landed on the court was captured and factored in as well.
I thought I was doing okay, honestly. I whacked the shuttlecock at 52 miles per hour, cleared the net by two feet, and made my invisible opponent chase it around the court. Then I walked over to see what the AI had to say.
“Ah, my friend!” it wrote. “It looks like we’ve got a bit of a situation here.”


Maybe I hadn’t done quite as well as I thought I had.
The AI (which called itself Bill) noted that I was standing too close to the shuttlecock, which limited my ability to reach for the shot. Also, I needed to work on my weight transfer and balance. My footwork was not exactly ideal either.
And while it had my attention, Bill noted that my grip on the racket “might not be ideal for controlling the shuttlecock effectively or generating power” with my shots. And my follow-through was “abrupt.”
Basically, the machine told me, I suck.
That’s when Shu took a turn. His speeds were closer to 80 mph—and he tightly grouped the shots. (He later told me he felt the system’s speed detection needed some calibrating as he normally hits faster than that.)
I gave the system one more try, with Shu suggesting I stand in a different spot on the court—and while my shots weren’t as powerful, they were much more tightly grouped. I won’t be threatening Shu’s spot on the Olympic team anytime soon, but I could be more of a beast at the local recreation center.
[Photo: ITRI]
This training tool is already used by the Taiwan Olympic badminton team, an ITRI representative told me. Shu said it was the first time he had had an opportunity to try it—adding that he expects a growing number of athletes will begin to incorporate AI into their training.
“It’s able to pick up things you’re not able to pick up with the naked eye,” he said. “I can tell you my smash is fast, but I’m not going to be able to tell you the exact speed. You’re able to dial in exact numbers and get data driven results. As high performing athletes, we’re always trying to find that 1% advantage.”
Bill, I should note, remained unimpressed with my performance on the court.
Shu might be looking for a 1% advantage. I’d settle for the AI being a bit less judgmental.




Here are some other things ITRI is working on that might be worth looking into further. The High-Privacy Digital Caregiver sounds extremely hopeful to me!


ITRI to exhibit wellness and smart medical innovations at CES​

The Industrial Technology Research Institute (ITRI, 工研院) yesterday said it would exhibit wellness and smart medical innovations at the Consumer Electronics Show (CES) in Las Vegas, Nevada, from today to Friday.

This year would be the institute’s ninth consecutive year exhibiting at the CES, the ITRI said in a statement.

Some of the wellness technologies on show include iSleePad, a smart sleep pad that uses noncontact sensing technology to track heart rate, breathing and sleep position, as well as its pet variant, iPetSuite, the institute said.


A visitor tries KneeBo, a portable knee joint exoskeleton, at a news conference organized by the Industrial Technology Research Institute in Las Vegas, Nevada, yesterday.​

Photo courtesy of the Industrial Technology Research Institute
Other highlights include KneeBO, a portable knee joint exoskeleton designed for lower-limb muscle training and walking improvement, MedBobi, a generative artificial intelligence (AI)-powered smart medical assistant system, and the AI Badminton Trainer, which can capture player movements for analysis and provide personalized training programs instantly, it added.

Among the institute’s new smart medical technologies, the High-Privacy AI Digital Caregiver on display offers hospital-level accuracy for vitals monitoring and helps address healthcare workforce shortages, according to the ITRI.







I found a video embedded within this link below on the Privacy AI Digital CareGiver. Benefits are anomaly detection, fall detection, 24/7 real-time monitoring of vital signs such as breathing, heart rate, body temperature, blood oxygen levels, and blood pressure.








And here's some more information from the ITRI Media Centre.



ITRI Unveils Cutting-Edge Smart Medical Technology at CES 2025​


Date: 2025/01/05

The High-Privacy AI Digital Caregiver offers hospital-level accuracy for vitals monitoring and helps address healthcare workforce shortages.
ITRI, Taiwan’s largest and one of the world’s leading high-tech applied research institutions, today announced the introduction of smart medical innovations at CES 2025 booth 8430, North Hall, LVCC, and its event website. The smart medical technologies include the High-Privacy AI Digital Caregiver, the Intelligent Medical Assistant Solution (iMAS), Janus, MedBobi and iKNOBeads.

The High-Privacy AI Digital Caregiver, co-developed by ITRI and Streamteck, is an advanced remote monitoring system that utilizes thermal imaging and millimeter wave radar technologies to detect patients’ vital signs and activities with hospital-level accuracy while maintaining privacy. The system tracks bed exits, falls and prolonged inactivity, and promptly alerts caregivers via a mobile app when abnormalities are detected. With its compact design, it is perfect for use in clinics, wards, nursing homes and homes, offering a cost-effective telemedicine solution.
The High-Privacy AI Digital Caregiver features:

• Privacy thermal imaging: This eliminates the need for cameras, ensuring patient privacy. With a fine resolution of 3.75 cm (compared to a competitor’s 60 cm), it precisely identifies images and boasts an action detection error rate of <1% for activities like prolonged lying, bed exits and falling.
• Real-time alerts: Sent via a mobile app, real-time alerts immediately notify caregivers of any abnormal incidents such as fever, shortness of breath or falls. This feature enables early detection of emergencies, reducing staff response time from one hour to just one minute. It significantly improves remote monitoring, emergency response rates and outcomes.
• Multi-sensory module: Its 60-GHz mmWave radar and thermal imaging sensors accurately detect body temperature, heart rate, respiration rate, blood pressure and blood oxygen saturation, matching the accuracy of hospital equipment.
• AI-powered calibration: AI-based noise filtering and IoT capabilities work together to minimize interference based on the distance from the heat source, ambient temperature and humidity, ensuring accurate body temperature detection and continuous motion analysis.
• Hospital-level precision: The High-Privacy AI Digital Caregiver stands out as the only care product on the market offering hospital-level accuracy. In collaboration with Streamteck, ITRI tested 50 units at a Taipei City Hospital long-term care facility, where they successfully met National Early Warning Score (NEWS) standards. Unlike competitors with up to a 10% margin in respiration detection accuracy, the High-Privacy AI Digital Caregiver achieves accuracy of <±2 bpm for respiratory rate, <5% for heart rate, <±0.3 degrees Celsius for body temperature, <±2% for blood oxygen, and maintains a 90% accuracy rate for blood pressure trends. It also has an action detection error of <1% for activities such as prolonged standing, sitting, lying down, bed exits and falling.
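The alerting logic behind claims like these is conceptually simple: compare each contactless reading against a normal range and notify caregivers on any breach. A minimal illustrative sketch follows; the ranges are placeholders of my own, not ITRI's calibration and not the actual NEWS thresholds:

```python
# Illustrative vitals check in the spirit of the system described above.
NORMAL_RANGES = {              # placeholder ranges, not clinical guidance
    "resp_rate_bpm": (12, 20),
    "heart_rate_bpm": (50, 110),
    "temp_c": (36.0, 38.0),
    "spo2_pct": (94, 100),
}

def check_vitals(reading: dict) -> list[str]:
    """Return a list of alert strings for any out-of-range vital sign."""
    alerts = []
    for key, (low, high) in NORMAL_RANGES.items():
        value = reading.get(key)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{key}={value} outside [{low}, {high}]")
    return alerts

# Example: fever plus elevated respiration triggers two alerts.
print(check_vitals({"resp_rate_bpm": 24, "temp_c": 38.6, "spo2_pct": 97}))
```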
Many care facilities suffer from inadequate equipment and staffing. The High-Privacy AI Digital Caregiver allows care staff to monitor patients remotely. This reduces the need for frequent in-person visits and checks, alleviating the workload on health workers and helping to address workforce shortages.

Booth visitors at CES 2025 can experience how this advanced system tracks their vital signs—without the need for cameras or wearables—with remarkable accuracy comparable to hospital equipment while ensuring privacy and comfort. Visitors can also see the system in action as it detects critical events such as bed exits, falls and prolonged inactivity, providing real-time alerts via a mobile app.

The Intelligent Medical Assistant Solution (iMAS) integrates multiple lightweight and portable medical devices to offer a comprehensive telemedicine solution for patients with limited access to healthcare. Its intelligent medical imaging technology aids in diagnostic interpretation, enabling physicians to provide remote diagnoses. Physicians can monitor patient conditions in real time, helping to prevent treatment delays. During remote visits, physicians can easily carry the portable iMAS to deliver swift diagnosis and care. iMAS serves as a “homespital” solution that seamlessly combines home care and hospital services. This innovative solution has been licensed to 36 vendors, including a collaboration with Thailand’s largest telecom company.

Janus, an AI-powered cybersecurity solution, is transforming medical device manufacturing. By integrating seamlessly into the process, it helps manufacturers meet rigorous global compliance standards like FDA, HIPAA, and NIST. Janus proactively protects devices from cyber threats, monitors M-IoT communications in real time, and detects suspicious activities. It simplifies network logs, updates firewalls dynamically, and streamlines compliance processes. This not only enhances product security but also boosts market competitiveness. As healthcare demands robust data protection, Janus offers a reliable and efficient solution, setting a new standard for smart cybersecurity in medical devices.

MedBobi is a smart medical assistant system that uses multimodal generative AI and retrieval-augmented generation (RAG) technology to rapidly create medical reports from voice input and other data. It can understand up to 96 languages such as English, Mandarin, Taiwanese, Japanese, Spanish, French and Thai and can convert speech into text, reducing medical staff’s administrative work time by 75%. The system integrates voice input, pathological images and patient records to provide AI-powered personalized medical recommendations. Additionally, MedBobi features a professional dementia database, allowing users to better understand the disease through text-based Q&A and obtain personalized care plans.
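For readers unfamiliar with retrieval-augmented generation, the pattern behind a system like that is roughly: transcribe the dictation, retrieve matching reference passages, and have the generator draft a report grounded in them. A generic sketch, with every function name hypothetical and no claim about MedBobi's actual internals:

```python
# Generic RAG pipeline sketch: speech -> text -> retrieve -> grounded draft.
def draft_report(audio, transcribe, retriever, generate):
    text = transcribe(audio)                 # speech-to-text in any supported language
    passages = retriever.top_k(text, k=5)    # pull the most relevant clinical references
    prompt = ("Context:\n" + "\n".join(passages)
              + f"\n\nDictation:\n{text}\n\nDraft a medical report.")
    return generate(prompt)                  # report draft grounded in retrieved context
```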

iKNOBeads, the world’s first and only magnetic microbeads with controllable morphology, are designed to activate immune cells in the fight against diseases. Their uniform bead size ensures batch-to-batch reproducibility, while the bumpy body design maximizes cell interaction, enhancing activation efficiency and reducing the required bead quantity. This innovative approach surpasses traditional activation materials and offers a more effective and economical feeder-free platform for generating high-quality and quantity immune cells for therapy. The adaptability of iKNOBeads technology, through surface modification, allows for a wide range of immunotherapy applications, promising potential breakthroughs in disease treatment that could revolutionize the field and bring hope to millions.

ITRI’s smart medical technologies are making their U.S. debut at CES 2025. ITRI invites potential collaborators for technology transfers and business partnerships. Interested parties can book a meeting here.

Access ITRI’s CES 2025 press kit here. Additional multimedia materials including technology videos, photos, and fact sheets are available here.





They look like they'd be a really good fit for us given their areas of focus. I didn't circle Resilient Society or Sustainable Environment, but we all know that our technology includes ultra-low power consumption, which obviously ticks the box for sustainability; naturally, it also ticks the box for infrastructure and productivity, where products can improve people's health and make people's lives and jobs easier in general.

 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 32 users

Frangipani

Regular
Please do not just rely on other posters’ (and that includes mine) interpretations and/or supposed “transcripts” to find out what Steve Brightfield said in the interview with Don Baine (aka The Gadget Professor), but instead listen carefully to the interview yourself.

In brief: DYOR.

The way I see it, a few of the comments relating to that interview are testimony to some BRN shareholders reading too much into our CMO’s words. They hear what they would love to hear, not necessarily what was actually said.

For example: Did Steve Brightfield really let slip we are involved with Apple or say that in five years from now he sees BrainChip’s offerings “embedded in every product”?

Well no, that is not what I gathered from this interview.

Oh, and LOL! What work of fiction is THAT?! 👇🏻

[screenshot of the purported “transcript”]



This “transcript” by @curlednoodles is NOT what Don Baine and Steve Brightfield actually said. It seems to be some kind of AI-generated version that content-wise resembles the real interview, but apart from some accurate phrases here and there, it is by no means a literal transcript! Do yourself a favour and listen to the original video interview instead.

On top of that, the sequence of snippets is not correct, and those three snippets should have been separated by using an ellipsis (…), too, to clarify other things were said in between. It is not the coherent dialogue it appears to be and misses important context.


Here are some excerpts of what Steve Brightfield ACTUALLY said (please feel free to comment on my transcript, in case you spot something that is not accurate):

From 12:52 min

“Most of the AI chips you hear about, they’re doing one, tens of watts or hundreds of watts or more. This [holding up the newly announced AKD1000 M.2 form factor] is a 1 watt device, and we actually have versions [now holds up an unidentified chip, which I took to be just a single AKD1000 or AKD1500 chip, but I could be wrong] that are fractions of a watt here [puts down the chip], and we announced this fall Akida Pico, which is microwatts, so we can use a hearing aid battery and have this run for days and days doing detection algorithms [?]. So it is really applicable to put on wearables, put it in eye glasses, put it in earbuds, put it in smart watches.”

Don Baine, the interviewer, interrupts him and mentions he himself is “grossly hearing-impaired” and is wearing hearing aids but thinks they are horrible, adding that “I would love to see your technology in better [ones?] than those”.

To this, Steve Brightfield replies:

"One of the things we demonstrate in our suite is an audio denoising algorithm. So if you have a noisy environment like the show here, you can pass your audio through the Machine Learning algorithm, and it cleans it up and it sounds wonderful. And this is the thing you're gonna start seeing with the deregulation of hearing aids by the FDA. You know, Apple Pro earbuds have put some of this technology in theirs. And we're seeing, you know - I talked to some manufacturers that have glasses, where they put microphones in the front and then AI algorithms in the arm of the glasses, and it doesn’t look like you’re wearing hearing aids, but you’re getting this much improved audio quality. Because most people have minor hearing loss and they don't want the stigma of wearing hearing aids, but this kind of removes it - oh, it's an earbud, oh, it's glasses I am wearing, and now I can hear and engage."

DB: “Sure, I could see that, that's great. So does this technology exist? Is that something someone could purchase in a hearing aid, for example?”

SB: “Actually, I’ve seen manufacturers out on the floor doing this.
We are working with some manufacturers to put more advanced algorithms in the current ones in there, yes.”


As for the alleged Apple “name drop”:

Steve Brightfield talks both about BrainChip's audio denoising algorithm demo at CES 2025, but also about the growing importance of audio denoising algorithms in general in the context of the deregulation of hearing aids by the FDA and THEN says “Apple Pro earbuds have put some of this technology in theirs [stressing that last word]”.
I take it he meant audio denoising algorithms in general when he referred to “this technology”, not to BrainChip’s audio denoising algorithm specifically.

Also, the fact that our CMO said he had talked to some manufacturers of smart glasses does not necessarily mean he had been in business negotiations with them or that they are already customers behind an NDA - he may very well have just walked around the CES 2025 floor and checked out the respective manufacturers’ booths and their products, chatting with company representatives and handing out promotional material and business cards to get them interested in BrainChip’s offerings that could improve their current products.

As for hearing aids, my interpretation of this little exchange is that hearing aids with our technology are not yet available for purchase, but that they are working on it and once they will become available, our CMO foresees them surpassing those currently available with less advanced audio denoising algorithms.
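As an aside on the power figures in the first excerpt above: "microwatts" on "a hearing aid battery" for "days and days" is plausible on a back-of-envelope basis. A rough check, assuming a typical size-13 zinc-air cell of about 300 mAh at 1.4 V and around 100 µW average draw (both my assumptions, not figures from the interview):

```python
# Rough energy budget for microwatt-scale inference on a hearing aid battery.
capacity_mah = 300            # assumed size-13 zinc-air cell
voltage_v = 1.4               # nominal cell voltage
energy_wh = capacity_mah / 1000 * voltage_v    # ~0.42 Wh available

power_w = 100e-6              # assumed ~100 microwatts average draw
hours = energy_wh / power_w
print(f"~{hours:,.0f} h, i.e. roughly {hours / 24:,.0f} days")  # ~4,200 h, ~175 days
```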


After talking in more detail about the VVDN Edge AI Box and the newly announced M.2 form factor, Steve Brightfield moved on to the topic of neural networks and explained that there have so far been three waves of NN algorithms: the first wave of AI was based on Convolutional Neural Networks (CNNs); the second wave on transformer-based neural networks, which he said have amazing capabilities in generating images and text (he gave ChatGPT as an example) but are very compute-intensive; and BrainChip is working on the very recent third wave of neural network algorithms, called State-Space Models and popularised by Mamba.

He mentions that BrainChip calls its own version of a State-Space Model TENNs and explains it a little, calling the real-life solution it enables “a personal assistant that can actually go in an earbud. We are not talking to the cloud here with a supercomputer. We have taken basically a ChatGPT algorithm and compressed it into a footprint that will fit on a board like this [briefly picks up the M.2 form factor]. And then you’re not sending your data to the cloud, you don’t need a modem or connectivity, and everything you say is private, it’s just being talked to this local device here. So, there’s privacy, security and low latency for this.”

DB: “Are there devices that are out now that incorporate that, not necessarily, you know, hearing aid-types of devices?”

SB: “Not the State-Space Models I’m talking about. All you’ll see today is transformer-based models that take a lot of computing. So probably the smallest devices you are seeing this on right now are $ 1000 smartphones.”

I understand the word “this” to refer to the just-mentioned transformer-based models. Meaning tiny devices containing State-Space Models such as TENNs (“Personal assistant that can go into an earbud.”) are not yet on the market.
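For context on why State-Space Models suit tiny devices: a transformer attends over its whole stored context at every step, while an SSM carries a fixed-size state and updates it recurrently, so per-step compute and memory stay constant no matter how long the stream runs. A toy linear SSM step (illustrative only; TENNs and Mamba use far more sophisticated parameterisations):

```python
import numpy as np

# Toy linear state-space recurrence: h_t = A @ h_{t-1} + B @ x_t, y_t = C @ h_t.
# The state h has a fixed size, so memory does not grow with sequence length.
rng = np.random.default_rng(0)
d_state, d_in = 16, 4
A = rng.normal(scale=0.1, size=(d_state, d_state))  # state transition
B = rng.normal(size=(d_state, d_in))                # input projection
C = rng.normal(size=(d_in, d_state))                # output projection

h = np.zeros(d_state)
for x in rng.normal(size=(1000, d_in)):  # stream of 1,000 input frames
    h = A @ h + B @ x                    # constant-time, constant-memory update
    y = C @ h                            # output for the current step
print(y.shape)                           # (4,) - state stayed fixed-size throughout
```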


Towards the end of the interview they talk about the advantages of neuromorphic computing that companies will benefit from such as independence from the cloud, which will translate into more privacy and security, and it was in this context of talking about the advantages of neuromorphic computing that Don Baine asked the question: “Where do you see this five years from now?”

So when Steve Brightfield answered “I see this embedded in every product and making our lives easier”, I believe he was referring to the benefits of on-device Edge AI and neuromorphic computing in general, and not specifically to BrainChip’s offerings. Such a statement wouldn’t make sense anyway: To the best of my knowledge, there is no company in the world that has a 100% monopoly on anything.


Something informative I took away from the interview, which I can’t recall having been mentioned, yet, was that each of the VVDN Edge AI Boxes apparently contains two of the newly announced AKD1000 M.2 form factor devices. Could their manufacturing have anything to do with the long delay in the Edge AI Boxes’ production and shipping (people who had pre-ordered and fully prepaid theirs last February did not receive it until early December)?
 
  • Like
  • Love
  • Fire
Reactions: 39 users
Looking back at the chart from last year's CES, the SP took a few weeks to rise to $0.50 after it was over. Let's hope it's the same this year, as things are looking so much better 12 months on.

 
Last edited:
  • Fire
  • Like
Reactions: 6 users
Looking back at the chart from last year's CES, the SP took a few weeks to rise to $0.50 after it was over. Let's hope it's the same this year, as things are looking so much better 12 months on.


That had more to do with Intel Foundry Services (IFS) spruiking us at the time than CES, I think.

Before their Big Presentation, they kept saying..
"Hello BrainChip, we Love you"..
Or something like that.

There will be plenty of other newsflow coming, though, by the sound of things, to spur on investment in this thoroughbred.
 
  • Like
Reactions: 11 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
(Quoting Frangipani’s post above in full.)

I did listen to the interview myself, thank you very much. This is not the first time you’ve made completely false assertions about me, which reflects far more on your character than on mine.

For the record, I posted my own comments on the interview after listening to it a second time, before sharing what Curlednoodles had to say from HC. I made no claims regarding the accuracy of his summary. And I certainly did not encourage people to rely on that information exclusively or not listen to the actual interview themselves.

If you have concerns about Curlednoodles' post, then I suggest you address those concerns directly with him on the Crapper, where you can engage in your forensic critiquing and pathological nitpicking to your heart’s content.

As a side note, Curlednoodles explicitly labeled their post as "some interesting snippets from the interview." He didn't claim it was a precise, chronological transcript. But, it's a free world, so if you want to go chew his balls off, then I can't stop you. With any luck, you'll like it over there so much that you'll decide to stay for an extended holiday.
 
  • Like
  • Fire
  • Love
Reactions: 66 users

IloveLamp

Top 20
1389.gif
 
  • Haha
Reactions: 8 users

7für7

Top 20
(Quoting Bravo’s reply above in full.)
I don’t understand how some people act like this is a contest where the goal is to provide the best news and win something in the end—like free stocks or a new washing machine. Everyone here is doing their best to contribute voluntarily. Imagine that some people sacrifice their free time to continuously search for news because they genuinely enjoy informing others. And then to criticize them because someone thinks they have a monopoly on posting news is just arrogant.





Bravo, don’t let yourself get upset over and over again by him. This is all about exchanging information, writing some nonsense, and having fun together. You’re one of the best here … after me, of course. 👋
 
  • Like
  • Love
  • Fire
Reactions: 41 users

Doz

Regular
Renesas' new RA8E1 and RA8E2 have an event link controller.


 
  • Like
  • Thinking
Reactions: 7 users

HopalongPetrovski

I'm Spartacus!
(Quoting 7für7’s post above in full.)
I wanna get them in a ring somewhere and watch em duke it out. 🤣
We could do a netflix special. 🤣
btw....my money's on Bravo. 🤣

 
  • Haha
  • Like
  • Fire
Reactions: 16 users