BRN Discussion Ongoing

Slade

Top 20
My granddaughter's 1st birthday, and we had to dress up.
 


manny100

Top 20
Rude.

You know there's nothing stopping you from contributing your own research here, instead of just playing town critic.

What both you and @TheDrooben seem to have missed is that I posted an excerpt from an article that nobody else had picked up on, which is what formed the basis of my ChatGPT query. Ironically, that post itself was in response to a comment about ChatGPT being “useless,” to show how it can be used in a constructive way.

The excerpt I shared highlighted Arm stating the Mali GPU and AI accelerator are “optional” in their latest platform. That’s a critical point, because it means chipmakers can slot in whichever accelerator they choose.


View attachment 90791
So far, nobody else has discussed this or Arm’s Zena platform, and I’ve been trying to connect the dots.

In a previous post #83,058, I commented on Rene Haas (Arm's CEO) being asked whether Arm would consider making its own accelerator, and how that ties into this more recent “optional accelerator” comment.

Likewise, in another previous post #83,075, I pointed out how Paul Williamson (Senior Vice President and General Manager, IoT Line of Business) also hinted that Arm might need a higher-performance NPU.

The interesting angle for me is whether Arm might be weighing RTL versus chiplet integration for Akida/TENNs. That’s what I’m trying to get at, even if I lack the technical depth to do all the heavy lifting myself.

Thanks to ChatGPT, I’ve learned that AI accelerators can be integrated either as (a) RTL blocks in a monolithic SoC, or (b) as chiplets dropped in via frameworks like CSA/UCIe.

ChatGPT is also helping me to ascertain how Akida/TENNs could slot into that optional accelerator role, either as a companion block alongside Ethos-U85/M85 or as a chiplet via Arm’s ecosystem, and how Akida 2 + TENNs versus Akida 3 + TENNs might fit into Arm’s longer-term chiplet ambitions.

That’s the line of thinking behind my posts. If it’s not appreciated, fair enough. Maybe I should just keep my research to myself.
IMO we all have to get used to reading AI-produced information, just like Neuromorphic Edge AI isn't going anywhere except up.
The trick is understanding its limitations and, as you do, framing queries in a way that increases the probability of a correct answer.
As long as it's disclosed as 'chatty'-produced, as you do, every poster has the option 'to read or not to read'.
Asking 'chat' itself a specific question will even reveal how to frame queries to improve its accuracy.
I enjoy your posts.
 

CHIPS

Regular

Not rude, but honest! AND it is my decision! I do not have the time, and I also do not want to read all those pages of something an AI is guessing at. ChatGPT might be right sometimes, but it is also wrong many times.

I asked you before to reduce those posts to summaries, and many people here supported that, but you ignored it and continued posting those long stories. Who was rude here?

I always highly appreciated your posts before you started using ChatGPT, but I do not see those posts anymore. Lately it has only been ChatGPT you have been posting. That's your decision.

You cannot win them all!
 

Frangipani

Top 20
Apologies, you're right. Nonetheless, their LinkedIn post today reaffirms that the partnership has remained unaffected following the acquisition by Qualcomm.

While that’s true, the question is:
Doesn’t it defeat the actual purpose of the partnership in the first place if developers can no longer train new models on Akida?!

In yesterday’s LinkedIn post, Edge Impulse started out by saying:

[Screenshot: Edge Impulse LinkedIn post]
Yet, when interested developers then visit the Edge Impulse/Ecosystem-Partners/BrainChip webpage you shared earlier (which I personally find appealing, by the way) and then click on “BrainChip Docs” under “RESOURCES”…

[Screenshot: “BrainChip Docs” link under “RESOURCES” on the Edge Impulse BrainChip partner page]
… this is how they will be greeted:

[Screenshot: BrainChip Docs page]
That is very unprofessional and should have been rectified ages ago - I recall that somebody even addressed this issue during the AGM in May, to which our management replied they weren’t aware of any problem with model training on Edge Impulse.
Although a month earlier, @Smoothsailing / @smoothsailing18 had already asked IR for clarification on this issue and got a reply from Tony Dawe that our CTO Tony Lewis was not concerned and had instead referred to the suspension as a “temporary situation” stemming from the acquisition, since Qualcomm had to “review all contracts and commercial arrangements”. (https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-457138)


Or else, if model training really DOES continue to be suspended for whatever reason, they should stop sending the misleading message that developers can train models for the Akida platform on Edge Impulse, when it currently only applies to those developers who already have “existing trained Edge Impulse projects to deploy to BrainChip devices”.

Either way not a good look.
 
