Could be!
Who knows?
There's something due any day
I will know right away
Soon as it shows
So yes, in my humble opinion, we're with Merc for sure.
Would you rather have a ChatGPT thread or a @Frangipani thread? ... I know what I would prefer... selfless people sharing their own amazing, in-depth, validated research, or a thread based on ChatGPT and other baseless rubbish?
Thank you. I do not read the ChatGPT posts anymore; I just scroll down to the next post.
Finding new information with it is fine, but constantly posting the whole lot and hoping it is right is just annoying. Bravo said a few days ago that she does not have the technical background to judge the results. Why post it then? I'd rather read real research instead of good guesses.
Nooooo, please don't do that, Bravo. I very much appreciate your help in searching for possible opportunities for us BRN holders.
Rude.
You know there's nothing stopping you from contributing your own research here, instead of just playing town critic.
What both you and @TheDrooben seem to have missed is that I posted an excerpt from an article that nobody else had picked up on, which is what formed the basis of my ChatGPT query. Ironically, that post itself was in response to a comment about ChatGPT being “useless,” to show how it can be used in a constructive way.
The excerpt I shared highlighted Arm stating the Mali GPU and AI accelerator are “optional” in their latest platform. That’s a critical point, because it means chipmakers can slot in whichever accelerator they choose.
View attachment 90791
So far, nobody else has discussed this or Arm's Zena platform, and I've been trying to connect the dots.
In a previous post, #83,058, I commented on Rene Haas (Arm's CEO) being asked whether Arm would consider making its own accelerator, and how that ties into this more recent "optional accelerator" comment.
Likewise, in another previous post, #83,075, I pointed out how Paul Williamson (Senior Vice President and General Manager, IoT Line of Business) also hinted that Arm might need a higher-performance NPU.
The interesting angle for me is whether Arm might be weighing RTL versus chiplet integration for Akida/TENNs. That’s what I’m trying to get at, even if I lack the technical depth to do all the heavy lifting myself.
Thanks to ChatGPT, I've learned that AI accelerators can be integrated either as (a) RTL blocks in a monolithic SoC, or (b) chiplets dropped in using frameworks like CSA/UCIe.
ChatGPT is also helping me to work out how Akida/TENNs could slot into that optional accelerator role, either as a companion block alongside Ethos-U85/M85 or as a chiplet via Arm's ecosystem, and how Akida 2 + TENNs versus Akida 3 + TENNs might fit into Arm's longer-term chiplet ambitions.
That’s the line of thinking behind my posts. If it’s not appreciated, fair enough. Maybe I should just keep my research to myself.
#imagine2025 #edgeai #brainchip #akida #edgeimpulse | Edge Impulse (a Qualcomm company)
🧠 BrainChip is uniquely integrated with Edge Impulse to train models for their Akida platform. At Imagine 2025, BrainChip will showcase:
- Edge Impulse out-of-the-box demo models running on the Akida AKD1500 and Akida FPGA development platforms
- One-of-a-kind models developed by BrainChip's...
www.linkedin.com
View attachment 90786
Imagine 2025 - The conference for edge AI
Join us for the premier edge AI event for the real world, with presentations and demonstrations from top technology leaders and innovators.
edgeimpulse.com
Thanks Lamp! I was a bit worried when Qualcomm first acquired Edge Impulse and their website section on BrainChip said "At this time the training of BrainChip models is suspended".
I was on the Edge Impulse website about two weeks ago, and it still had the exact same comment. But I checked the Edge Impulse website again today, which seems to have been updated with a new layout; that note on BrainChip is now gone, and they have us listed as an official partner. https://edgeimpulse.com/ecosystem-partners/brainchip
And based on today's announcement, it looks as though the relationship is alive and well!
Apologies, you're right. Nonetheless, their LinkedIn post today reaffirms that the partnership has not been affected by the acquisition by Qualcomm.
No, the 25 March 2025 note that the training of BrainChip models has been suspended is not gone, but embarrassingly still very much alive on the Edge Impulse website… It's just on a different webpage (where it's always been):
BrainChip AKD1000 - Edge Impulse Documentation
docs.edgeimpulse.com
View attachment 90792
Hi Bravo,
Features from the EQXX concept car have been implemented into the GLC EQ, as per Auto Express. One feature that stands out is in the interior:

The curved 8K screen that stretches 47.2 inches between the A-pillars makes the most of the EQXX's crisp and sparkling game-engine-powered graphics, including a realtime 3D navigation display. The mini-LED backlit screen features more than 3000 local dimming zones, which means it only consumes power as and when specific parts of the screen are in use.

And hopefully it contains 3000 Akida chips!
I forgot to mention...
View attachment 90796
EXTRACT ONLY
View attachment 90795
Arm's Zena CSS aiming to speed up automotive AI
global.chinadaily.com.cn