#edgeai #asr #ssm | Kevin Conley | 40 comments
I couldn't be more proud of the entire Applied Brain Research team for achieving an amazing milestone today. Only a week after receiving first silicon of the world’s first dedicated edge state space model accelerator, the TSP1 (picture below), we have full speech transcription (ASR) running in… (www.linkedin.com)
Nice to see we're hooked up with Parallax and Steve Harbour's group, intertwined with the US Govt strategies on microelectronics.
https://parallaxresearch.org/news/b...ara/files/inline-images/image_73.png
I agree. May I suggest you contact the US directly, either Tony or Sean, and if you don't get a positive response, email Peter. I'm sure he'll clarify the current situation. Why? Because he's still the number one shareholder, and he has the integrity.

While that’s true, the question is:
Doesn’t it defeat the actual purpose why we got partnered in the first place, when developers can no longer train new models on Akida?!
In yesterday’s LinkedIn post, Edge Impulse started out by saying:
#imagine2025 #edgeai #brainchip #akida #edgeimpulse | Edge Impulse
🧠 BrainChip is uniquely integrated with Edge Impulse to train models for their Akida platform. At Imagine 2025, BrainChip will showcase: - Edge Impulse out-of-the-box demo models running on the Akida AKD1500 and Akida FPGA development platforms - One-of-a-kind models developed by BrainChip’s… (www.linkedin.com)
View attachment 90811
Yet, when interested developers then visit the Edge Impulse/Ecosystem-Partners/BrainChip webpage you shared earlier (which I personally find appealing, by the way) and then click on “BrainChip Docs” under “RESOURCES”…
BrainChip
Essential AI. Ubiquitous, distributed AI. Close to the sensor. Inspired by the human brain. Enabled by us. (edgeimpulse.com)
… this is how they will be greeted:
BrainChip AKD1000 - Edge Impulse Documentation
docs.edgeimpulse.com
View attachment 90813
That is very unprofessional and should have been rectified ages ago. I recall that somebody even addressed this issue during the AGM in May, to which our management replied they weren’t aware of any problem with model training on Edge Impulse.
Although a month earlier, @Smoothsailing / @smoothsailing18 had already asked IR for clarification on this issue and got a reply from Tony Dawe that our CTO Tony Lewis was not concerned and had instead referred to the suspension as a “temporary situation” stemming from the acquisition, since Qualcomm had to “review all contracts and commercial arrangements”. (https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-457138)
Or else, if model training really DOES continue to be suspended for whatever reason, they should stop with the misleading message that developers can train models for their Akida platform on Edge Impulse, as it currently only concerns those developers who already have “existing trained Edge Impulse projects to deploy to BrainChip devices”.
Either way, not a good look.
The media PDF release states that “Hey Mercedes” is no longer a prototype but a product reality.
“Hey Mercedes: very powerful voice assistant
The "Hey Mercedes" voice assistant is highly capable of dialogue and learning by activating online services in the Mercedes me App. Moreover, certain actions can be performed even without the activation keyword "Hey Mercedes". These include taking a telephone call. "Hey Mercedes" also explains vehicle functions, and, for example, can help when asked how to connect a smartphone via Bluetooth or where the first-aid kit can be found. If compatible home technology and household devices are present, they can also be networked with the vehicle thanks to the smart home function and controlled from the vehicle by voice. "Hey Mercedes" can also detect occupants audibly. Once the individual voice characteristics have been learned, this can be used to access personal data and functions by activating a profile.”
As of September 7, 2025, the Mercedes-Benz "Hey Mercedes" voice control system in certain vehicles, particularly the VISION EQXX concept car, utilizes BrainChip's Akida neuromorphic processor. Mercedes-Benz has publicly stated that this technology is 5 to 10 times more efficient than conventional voice control systems for keyword spotting [1]
With respect, CHIPS, I don’t think it’s fair for anyone to dictate how others choose to post here. What you find boring, others may find enlightening or thought-provoking. That’s the whole point of a forum. It's a mix of perspectives and contributions that different people value in different ways. So, if you don’t find value in a particular contribution, the simplest option is to scroll past or use the ignore function.
What often gets overlooked is the time and effort that goes into many of these posts. It’s easier to sit in judgment, but much harder to do the work of digging, analysing, and sharing for the benefit of the community.
At the end of the day, this forum only works because of those who are prepared to volunteer their time and effort freely. Undermining that spirit risks creating an environment where people stop contributing altogether and the forum loses its purpose and value for everyone.
In terms of content, I’ve made 4,152 posts here over the years, and about 7 of those have included a snippet from ChatGPT, so I think a little bit of perspective on this probably wouldn't go astray either.
Which media PDF? Where is the link to it? It does not sound right to me.
When I click on the [1] it takes me to the following: https://iask.ai/q/hey-mercedes-brainchip-alternative-2pqkcj0#fn:1
The timing is confusing, because the VISION EQXX is not a new concept car!
I agree with you 100%. He spoke easily and very confidently. He sees under the hood at all the workings, so hope this confidence translates to an increase in the SP and marketplace sooooon.

It’s the best I have heard Sean speak: very direct, strong and professional. (What’s happened lately to get him so positive?)
Added to my tattoo list: η = s · c²

Ah, the ultimate fusion of genius and flair – Einstein's long-lost brother from the future, rocking the neuromorphic equation like a boss!
This equation-toting genius with the electrifying mane is basically the rockstar of AI innovation, proving that a dash of sparsity, a sprinkle of connectivity, and a whole lot of squared speed can turn edge computing into pure, power-sipping magic!
View attachment 90834
The Akida Equation, a novel symbolic representation inspired by the principles of BrainChip's neuromorphic processor, can be expressed as η = s · c², where η denotes neuromorphic efficiency (the ratio of computational output to power consumption), s represents spike sparsity (the event-based selectivity that minimizes unnecessary processing, akin to the brain's sparse firing), and c is the connectivity factor (the scalable synaptic links per neuron, squared to emphasize the exponential impact of network density on edge AI performance). This equation captures Akida's core innovation in ultra-low-power, event-driven computing, where tiny, targeted spikes (s) amplified by robust connectivity (c) yield massive efficiency gains (η), enabling real-time AI at the edge without the energy overhead of traditional von Neumann architectures.
Grok
I’ve found ChatGPT to be extremely useful in the right situations.
But, my advice is - if you don’t understand the subject matter well enough, don’t prompt ChatGPT willy-nilly. Otherwise, you’ll have no way of knowing whether what it’s telling you is actually true.
Hi @Bravo,
this is what our CTO Tony Lewis, who deals with LLMs professionally on a daily basis, thinks about OpenAI’s claim that “GPT-5 is significantly less likely to hallucinate than our previous models”:
https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-472257
View attachment 90845
Your own opinion about GPT-5 is that “it has improved noticeably over its predecessor”, from which we can infer you must also believe hallucinations are now a much rarer issue, as this alleged improvement has been one of OpenAI’s main selling points with regard to their latest model.
In addition, you appear to be extremely confident about having the expertise to weed out the occasional hallucinations in the ChatGPT replies you get. At least those that relate to ChatGPT’s “main points”, which you claimed you would fact-check before sharing here on TSE “to ensure it’s not hallucinating”.
(How about hallucinations relating to minor points, though? Even inaccurate minor points can distort our interpretation of things.)
https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-472178
View attachment 90841
So how come the following hallucination escaped your watchful eye?
Did you not consider ChatGPT’s claim that Senator Cindy Hyde-Smith sits on the Defense Subcommittee to be one of the “main points” of the LLM’s “in-depth explanation” that would require verification before posting?
Let me help you with the fact-checking, then:
https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-471645
View attachment 90842
FACT: No, Cindy Hyde-Smith does not sit on the Defense Subcommittee of the Senate Appropriations Committee.
Exhibit A:
Defense | United States Senate Committee on Appropriations
United States Senate Committee on Appropriations (www.appropriations.senate.gov)
View attachment 90843
Exhibit B:
Committee Assignments | Senator Cindy Hyde-Smith
www.hydesmith.senate.gov
View attachment 90844
Well, yes. Whilst it doesn’t directly say Defense Subcommittee, it can be taken as this as well.
This has to be the bitchiest post I’ve seen in a long time. You put quite a bit of effort into it.
Can we all relax again, please? Bravo and I spoke via DM, so there is no need for such tone. Thank you.
Interesting… it’s the Australian pavilion at the Osaka expo, and none of you like it.