Please do not just rely on other posters’ interpretations and/or supposed “transcripts” (and that includes mine) to find out what Steve Brightfield said in the interview with Don Baine (aka The Gadget Professor); instead, listen carefully to the interview yourself.
In brief: DYOR.
The way I see it, a few of the comments relating to that interview show some BRN shareholders reading too much into our CMO’s words. They hear what they would love to hear, not necessarily what was actually said.
For example: Did Steve Brightfield really let slip that we are involved with Apple, or say that five years from now he sees BrainChip’s offerings “embedded in every product”?
Well no, that is not what I gathered from this interview.
Oh, and LOL! What work of fiction is THAT?!
View attachment 75798
This “transcript” by @curlednoodles is NOT what Don Baine and Steve Brightfield actually said. It seems to be some kind of AI-generated version that content-wise resembles the real interview, but apart from some accurate phrases here and there, it is by no means a literal transcript! Do yourself a favour and listen to the original video interview instead.
On top of that, the sequence of the snippets is not correct, and the three snippets should also have been separated by ellipses (…) to make clear that other things were said in between. It is not the coherent dialogue it appears to be, and it is missing important context.
Here are some excerpts of what Steve Brightfield ACTUALLY said (please feel free to comment on my transcript, in case you spot something that is not accurate):
From 12:52 min
“Most of the AI chips you hear about, they’re doing one, tens of watts or hundreds of watts or more. This [holding up the newly announced AKD1000 M.2 form factor] is a 1 watt device, and we actually have versions [now holds up an unidentified chip, which I took to be just a single AKD1000 or AKD1500 chip, but I could be wrong] that are fractions of a watt here [puts down the chip] and we announced this fall Akida Pico, which is microwatts, so we can use a hearing aid battery and have this run for days and days doing detection algorithms [?]. So it is really applicable to put on wearables, put it in eye glasses, put it in earbuds, put it in smart watches.”
Don Baine, the interviewer, interrupts him and mentions he himself is “grossly hearing-impaired” and is wearing hearing aids but thinks they are horrible, adding that “I would love to see your technology in better [ones?] than those”.
To this, Steve Brightfield replies:
"One of the things we demonstrate in our suite is an audio denoising algorithm. So if you have a noisy environment like the show here, you can pass your audio through the Machine Learning algorithm, and it cleans it up and it sounds wonderful. And this is the thing you're gonna start seeing with the deregulation of hearing aids by the FDA. You know, Apple Pro earbuds have put some of this technology in theirs. And we're seeing, you know - I talked to some manufacturers that have glasses, where they put microphones in the front and then AI algorithms in the arm of the glasses, and it doesn’t look like you’re wearing hearing aids, but you’re getting this
much improved audio quality. Because most people have minor hearing loss and they don't want the stigma of wearing hearing aids, but this kind of removes it -
oh, it's an earbud, oh, it's glasses I am wearing, and now I can hear and engage."
DB: “Sure, I could see that, that's great. So does this technology exist? Is that something someone could purchase in a hearing aid, for example?”
SB: “Actually, I’ve seen manufacturers out on the floor doing this. We are working with some manufacturers to put more advanced algorithms in the current ones in there, yes.”
As for the alleged Apple “name drop”:
Steve Brightfield talks both about BrainChip's audio denoising algorithm demo at CES 2025 and about the growing importance of audio denoising algorithms in general in the context of the deregulation of hearing aids by the FDA, and only THEN says “Apple Pro earbuds have put some of this technology in theirs [stressing that last word]”.
I take it he meant audio denoising algorithms in general when he referred to “this technology”, not to BrainChip’s audio denoising algorithm specifically.
Also, the fact that our CMO said he had talked to some manufacturers of smart glasses does not necessarily mean he had been in business negotiations with them or that they are already customers behind an NDA. He may very well have just walked around the CES 2025 floor, checked out the respective manufacturers’ booths and their products, chatted with company representatives, and handed out promotional material and business cards to get them interested in BrainChip’s offerings that could improve their current products.
As for hearing aids, my interpretation of this little exchange is that hearing aids with our technology are not yet available for purchase, but that work on them is underway, and once they become available, our CMO foresees them surpassing those currently on the market with less advanced audio denoising algorithms.
After talking in more detail about the VVDN Edge AI Box and the newly announced M.2 form factor, Steve Brightfield moved on to the topic of neural networks and explained that there have so far been three waves of NN algorithms: the first wave of AI was based on Convolutional Neural Networks (CNNs); the second wave on transformer-based neural networks, which he said have amazing capabilities in generating images and text (he gave ChatGPT as an example) but are very compute-intensive; and the very recent third wave, which BrainChip is working on, consists of neural network algorithms called State-Space Models, popularised by Mamba.
He mentions that BrainChip calls its own version of a State-Space Model TENNs and explains it a little, calling the real-life solution it enables “a personal assistant that can actually go in an earbud. We are not talking to the cloud here with a supercomputer. We have taken basically a ChatGPT algorithm and compressed it into a footprint that will fit on a board like this [briefly picks up the M.2 form factor]. And then you’re not sending your data to the cloud, you don’t need a modem or connectivity, and everything you say is private, it’s just being talked to this local device here. So, there’s privacy, security and low latency for this.”
DB: “Are there devices that are out now that incorporate that, not necessarily, you know, hearing aid-types of devices?”
SB: “Not the State-Space Models I’m talking about. All you’ll see today is transformer-based models that take a lot of computing. So probably the smallest devices you are seeing this on right now are $1000 smartphones.”
I understand the word “this” to refer to the just-mentioned transformer-based models, meaning that tiny devices containing State-Space Models such as TENNs (“a personal assistant that can go into an earbud”) are not yet on the market.
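(A little aside for the more technically inclined: here is a minimal toy sketch in Python/NumPy of why a State-Space Model style recurrence is so much lighter than transformer-style attention. This is purely my own illustration with made-up numbers - it is not code from BrainChip, not TENNs, and not anything shown in the interview.)

```python
import numpy as np

# Toy illustration only - NOT BrainChip's TENNs or any real product code.
# It contrasts the per-step cost of transformer-style self-attention
# (which must re-read a cache that grows with the sequence) with a
# state-space recurrence (which only updates a fixed-size hidden state).

rng = np.random.default_rng(0)
d = 16          # feature dimension (arbitrary small number for the sketch)
T = 1000        # number of audio frames / tokens processed so far

# --- Transformer-style attention for ONE new token ---
# The new token's query is compared against all T cached keys,
# so memory and compute per step grow with the sequence length.
keys    = rng.standard_normal((T, d))
values  = rng.standard_normal((T, d))
query   = rng.standard_normal(d)
scores  = keys @ query / np.sqrt(d)          # T dot products
weights = np.exp(scores - scores.max())
weights /= weights.sum()
attn_out = weights @ values                  # cost and cache scale with T

# --- State-space style recurrence for ONE new token ---
# Only a fixed-size state h is carried forward, no matter how long
# the audio stream gets.
A = rng.standard_normal((d, d)) * 0.05       # state transition (toy values)
B = rng.standard_normal((d, d)) * 0.05       # input projection
C = rng.standard_normal((d, d)) * 0.05       # output projection
h = np.zeros(d)                              # persistent hidden state
x = rng.standard_normal(d)                   # the new input frame
h = A @ h + B @ x                            # cost independent of T
ssm_out = C @ h

print("attention cache entries:", keys.shape[0])   # grows with T
print("state-space state size: ", h.shape[0])      # fixed
```

The point of the toy example: the attention step has to keep and re-read a cache that grows with every frame, while the state-space step only updates a fixed-size state - which, as I understand it, is the reason such models can be squeezed into tiny, battery-powered devices in the first place.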
Towards the end of the interview, they talk about the advantages of neuromorphic computing that companies will benefit from, such as independence from the cloud, which translates into more privacy and security. It was in this context of talking about the advantages of neuromorphic computing that Don Baine asked the question: “Where do you see this five years from now?”
So when Steve Brightfield answered “I see this embedded in every product and making our lives easier”, I believe he was referring to the benefits of on-device Edge AI and neuromorphic computing in general, and not specifically to BrainChip’s offerings. Such a statement wouldn’t make sense anyway: To the best of my knowledge, there is no company in the world that has a 100% monopoly on anything.
Something informative I took away from the interview, which I can’t recall having been mentioned here yet, was that each of the VVDN Edge AI Boxes apparently contains two of the newly announced AKD1000 M.2 form factor devices. Could their manufacturing have anything to do with the long delay in the Edge AI Boxes’ production and shipping (people who had pre-ordered and fully prepaid theirs last February did not receive them until early December)?