Rude.
You know there's nothing stopping you from contributing your own research here, instead of just playing town critic.
What both you and @TheDrooben seem to have missed is that I posted an excerpt from an article that nobody else had picked up on, which formed the basis of my ChatGPT query. Ironically, that post was itself a response to a comment calling ChatGPT "useless", to show how it can be used constructively.
The excerpt I shared highlighted Arm stating that the Mali GPU and AI accelerator are "optional" in its latest platform. That's a critical point, because it means chipmakers can slot in whichever accelerator they choose.
View attachment 90791
So far, nobody else has discussed this or Arm's Zena platform, and I've been trying to connect the dots.
In a previous post (#83,058), I commented on Rene Haas (Arm's CEO) being asked whether Arm would consider making its own accelerator, and how that ties into this more recent "optional accelerator" comment.
Likewise, in another post (#83,075), I pointed out how Paul Williamson (Senior Vice President and General Manager of Arm's IoT Line of Business) also hinted that Arm may need a higher-performance NPU.
The interesting angle for me is whether Arm might be weighing RTL versus chiplet integration for Akida/TENNs. That’s what I’m trying to get at, even if I lack the technical depth to do all the heavy lifting myself.
Thanks to ChatGPT, I've learned that AI accelerators can be integrated either (a) as RTL blocks in a monolithic SoC, or (b) as chiplets using frameworks like Arm's CSA and UCIe.
ChatGPT is also helping me explore how Akida/TENNs could slot into that optional accelerator role, either as a companion block alongside Ethos-U85/M85 or as a chiplet via Arm's ecosystem, and how Akida 2 + TENNs versus Akida 3 + TENNs might fit into Arm's longer-term chiplet ambitions.
That’s the line of thinking behind my posts. If it’s not appreciated, fair enough. Maybe I should just keep my research to myself.