Rskiff
Regular
dyor
I stopped following BRN on the daily a long time ago, but can someone catch me up on what is going on atm? Serious drop in SP in the last few days, and people talking about BRN re-listing on the American stock exchange?
Use the page back button mate, there it all is in black and white.
Bringing Next-Gen AI Developer Experiences to Africa
Arm has been across Africa attending various events and initiatives to highlight the range of AI-based developer experiences running on Arm.newsroom.arm.com
No, it’s all there in black and white.
Certainly is, imo.
Hi Bravo,
In your orange highlighted bit it talks about running lightweight AI workloads independently of the cloud.
The word lightweight in my mind rules out Akida ... and I don't class TENNs as lightweight because it adds the time factor for image tracking, and, with long skip, speech processing.
Here's one I prepared earlier:
https://www.cnx-software.com/2025/0...rmv9-core-optimized-for-edge-ai-and-iot-socs/
February 27, 2025 by Jean-Luc Aufranc (CNXSoft)
Arm Cortex-A320 low-power CPU is the smallest Armv9 core, optimized for Edge AI and IoT SoCs
Arm Cortex-A320 is a low-power Armv9 CPU core optimized for Edge AI and IoT applications, with up to 50% efficiency improvements over the Cortex-A520 CPU core. It is the smallest Armv9 core unveiled so far.
The Armv9 architecture was first introduced in 2021 with a focus on AI and specialized cores, followed by the first Armv9 cores – Cortex-A510, Cortex-A710, Cortex-X2 – unveiled later that year and targeting flagship mobile devices. Since then we’ve seen Armv9 cores on a wider range of smartphones, high-end Armv9 motherboards, and TV boxes. The upcoming Rockchip RK3688 AIoT SoC also features Armv9 but targets high-end applications. The new Arm Cortex-A320 will expand Armv9 usage to a much wider range of IoT devices, including power-constrained Edge AI devices.
View attachment 78344
Arm Cortex-A320 highlights:
- Architecture – Armv9.2-A (Harvard)
- Extensions
- Up to Armv8.7 extensions
- QARMA3 extensions
- SVE2 extensions
- Memory Tagging Extensions (MTE) (including Asymmetric MTE)
- Cryptography extensions
- RAS extensions
- Microarchitecture
- In-order pipeline
- Partial superscalar support
- NEON/Floating Point Unit
- Optional Cryptography Unit
- Up to 4x CPUs in cluster
- 40-bit Physical Addressing (PA)
- Memory system and external interfaces
- 32KB or 64KB L1 I-Cache / D-Cache
- Optional L2 Cache – 128KB, 192KB, 256KB, 384KB, or 512KB
- No L3 Cache
- ECC Support
- Bus interfaces – AMBA AXI5
- No ACP, No Peripheral Port
- Security – TrustZone, Secure EL2, MTE, PAC/BTI
- Debugging
- Debug – Armv9.2-A features
- CoreSightv3
- Embedded Trace Extension (ETEv1.1)
- Trace Buffer Extension
- Misc
- Interrupts – GIC interface, GICv4.1
- Generic timer – Armv9.2-A
- PMUv3.7
The Cortex-A320 can be combined with the Ethos-U85 NPU for Edge AI, providing an upgrade path from Cortex-M85+Ethos-U85-based Endpoint AI devices, with support for LLMs with up to one billion parameters, and Linux or Android operating systems, besides RTOSes like FreeRTOS or Zephyr OS. We’re also told a quad-core Cortex-A320 can execute up to 256 GOPS of 8-bit MAC throughput when running at 2 GHz.
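For what it's worth, a quick back-of-the-envelope check of that 256 GOPS figure. This assumes the common convention that one MAC counts as two ops (multiply plus accumulate); the per-core figure is my derivation, not something Arm has stated:

```python
# Sanity check of the quoted 256 GOPS for a quad-core Cortex-A320 at 2 GHz.
# Assumption: 1 MAC = 2 ops (multiply + accumulate), the usual vendor convention.
cores = 4
clock_hz = 2e9
peak_gops = 256e9  # quoted peak, 8-bit ops per second

macs_per_sec = peak_gops / 2                       # 128 GMAC/s for the cluster
macs_per_cycle = macs_per_sec / clock_hz           # 64 MACs/cycle for the cluster
macs_per_core_per_cycle = macs_per_cycle / cores   # per-core throughput

print(macs_per_core_per_cycle)  # 16.0
```

16 8-bit MACs per core per cycle is at least plausible for a small in-order core with a 128-bit SIMD datapath, so the headline number hangs together under that counting convention.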
Besides the 50% efficiency improvement over the Cortex-A520, Arm says the performance of the Cortex-A320 has improved by more than 30% in SPECINT2K6 compared to its Armv8 predecessor, the Cortex-A35, thanks to efficient branch predictors and pre-fetchers, and memory system improvements.
The Cortex-A320 also makes use of NEON and SVE2 improvements in the Armv9 architecture to deliver up to 10x better machine learning (ML) performance compared to Cortex-A35, or up to 6x higher ML performance than the Cortex-A53. With these ML improvements and high area and energy efficiencies, Arm claims that the Arm Cortex-A320 is the most efficient core in ML applications across all Arm Cortex-A CPUs.
Consider yourself ogred.
Google DeepMind hires Rain AI engineer for growing "AI hardware design" team
Neuromorphic computing expert leaves Altman-backed startup for Google's AI lab
February 28, 2025 By Sebastian Moss Have your say
Google DeepMind has hired a former Rain AI engineer with a background in neuromorphic computing for an AI hardware team.
Maxence Ernoult spent just over two years at Rain AI, a Sam Altman-backed neuromorphic chip startup that has a letter of intent from OpenAI. The hire has not been previously reported.
Prior to his time working on in-memory computing at Rain, Ernoult worked at IBM and studied neuromorphic computing at Sorbonne Université.
Neuromorphic computing aims to mimic how the brain works, by using hardware and software that simulate the neural and synaptic structures and functions of the brain.
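As an illustration only (my own toy sketch, not BrainChip's or DeepMind's code), the idea can be shown with a minimal leaky integrate-and-fire neuron, the basic building block most neuromorphic hardware emulates; the leak and threshold values here are invented:

```python
# Minimal leaky integrate-and-fire (LIF) neuron - an illustrative sketch of the
# neuron model neuromorphic chips emulate. Parameters are made up for the demo.
def lif_run(inputs, leak=0.9, threshold=1.0):
    """Return the spike train produced by a stream of input currents."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # integrate input with leaky decay
        if potential >= threshold:              # fire a spike and reset
            spikes.append(1)
            potential = 0.0
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.5, 0.5, 0.5, 0.0, 1.2]))  # [0, 0, 1, 0, 1]
```

The point is that computation happens as sparse, event-driven spikes rather than dense matrix maths, which is where the claimed energy savings come from.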
"Absolutely thrilled to announce that I'm starting next week as a senior research engineer at Google DeepMind in Albert Cohen's team dedicated to AI hardware design," Ernoult said in a LinkedIn post.
Cohen is a research scientist at Google DeepMind who in 2020 published research on Computing-In-Memory (CIM) architectures. "Memristor-based, non-von-Neumann architectures performing tensor operations directly in memory are a promising approach to address the ever-increasing demand for energy-efficient, high-throughput hardware accelerators for Machine Learning (ML) inference," the abstract states.
Across Google DeepMind, a number of researchers appear to be working on AI chips, under director Olivier Temam, who says that he is building the "hardware infrastructure for AGI." Temam previously co-led Google's Cloud Tensor Processing Unit (TPU) project for a year and created a TPU project for embedded and low-power applications.
TPUs are Google's broader AI chip family, used to train Gemini and available on its cloud service.
Temam is also part of DeepMind's AlphaChip effort, which aims to use AI to help design chip layouts. The company claims that it has used AlphaChip in the design of TPUs since 2020, as well as for its Axion Arm chips and other processors.
"We believe AlphaChip has the potential to optimize every stage of the chip design cycle, from computer architecture to manufacturing — and to transform chip design for custom hardware found in everyday devices such as smartphones, medical equipment, agricultural sensors, and more," Google said last year.
A job listing for a DeepMind chip design research engineer says that the company aims to "solve some of the most complex tasks in Chip Design (RTL generation, RTL verification, Logic Synthesis, Physical Design, PPA prediction." It adds: "As part of our team at Google DeepMind, you'll have opportunities to advance AI for Chip Design to enable breakthrough capabilities and pioneer next-generation products."
Whether the company is looking at neuromorphic hardware is unclear. This January, Google DeepMind researcher Cliff Young was a co-author on a Nature paper that laid out a roadmap to neuromorphic computing competing at scale with conventional approaches. Young was critical to the development of the TPU, and also designed much of D. E. Shaw Research's seminal Anton supercomputer.
Also pointing to DeepMind's interest in neuromorphic research is Yale Professor Priyadarshini Panda, a highly respected neuromorphic computing figure who leads the university's Intelligent Computing Lab. Prof. Panda became a visiting scholar to Google DeepMind last April.
Google DeepMind declined to comment on whether this hardware project is related to AlphaChip, TPUs, neuromorphic computing, or is a new initiative.
Source: www.datacenterdynamics.com
Professor Priyadarshini Panda, Yale University Intelligent Computing Lab.
View attachment 78372
Here are some extracts from Priya's research, developed with funding from DARPA, which mention the use of SNNs.
View attachment 78373
View attachment 78374
View attachment 78378
View attachment 78376
View attachment 78377
Priya was also on a panel organised in 2022 by Tata Consultancy "Neuromorphic Computing for Transformation of the Industrial Future".
Arpan Paul was on the same panel.
View attachment 78379
If you Google "Temu shooter", Temu will have a few on sale.
I am logged out for a holiday and see for the first time what TSE looks like without a login, so ads. How do the rest of you cope with the shooting of F*Temu?
Keep up Manny, Cortex-A confirmed. I myself have absolutely no doubts that Arm's Ethos-U85 is Akida, as we are seeing far too many clues at this point in time.
View attachment 78390
View attachment 78391
View attachment 78392
Embedded World 2023. Remember the Embeddy Award? The below is from Sally Ward-Foxton spilling the beans.
Anil and Rob both liked this LinkedIn post. Noting that at the time Arm only had the Ethos-U55 and U65.
View attachment 78393
Funny, BrainChip didn't even mention that we were in the Arm booth doing joint demos.
All in my opinion .
Hi Doz,
View attachment 78402
Dodgy, with all due respect and admiration, can I see your Ogre and raise you an Ogre?
I know that you conversed with PVDM about MACs and that he told you that Akida does not perform MAC operations. However, we continually see BrainChip documentation quoting MAC performance. As such, I don't believe we can continue with this thought process.
View attachment 78403
Does the below provide a feasible reason why we now continue to see MAC comparisons in all of BrainChip's technical documentation? Did the industry once accept that the multiplication was not counted as part of a MAC, but has it now unified to provide a level playing field for comparative reasons?
View attachment 78404
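To illustrate the counting-convention point (my own toy example, not from any BrainChip or Arm document): the same dot product can be quoted as N MACs or 2N ops, so a headline "GOPS" figure doubles depending on whether the multiply and the accumulate are counted separately:

```python
# Toy illustration of the MAC-counting debate: one dot product,
# counted under two conventions. Input numbers are invented.
def dot_with_op_counts(a, b):
    """Compute a dot product and tally multiplies and adds separately."""
    acc = 0
    mults = adds = 0
    for x, y in zip(a, b):
        acc += x * y   # one multiply and one accumulate per element
        mults += 1
        adds += 1
    return acc, mults, adds

result, mults, adds = dot_with_op_counts([1, 2, 3], [4, 5, 6])
print(result)        # 32
print(mults)         # 3 "MACs" under the fused convention
print(mults + adds)  # 6 "ops" if multiply and add are counted separately
```

So two datasheets describing identical hardware can legitimately quote numbers a factor of two apart, which would explain why everyone eventually converges on one convention for comparability.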
(Note to self … if Dodgy starts getting technical, I may need to resort to bluffing to win this hand. I know …)
Dodgy, all counter-arguments must be supported with pretty pictures. Thanks.
Much as I would like to be wrong, I don't agree that Ethos U85 contains Akida.
https://armkeil.blob.core.windows.n...rm-ethos-u-processor-series-product-brief.pdf
View attachment 78410
Hi Doz,
Dodgy, thanks for your reply. If I look at the above:
- 256 GOPS - the same as Akida's lowest figure
- MACs - in debate
- SRAM - available
- Interfaces - 6; our edge box has 5, I believe
- External memory - available (flash; think Mercedes)
- Cortex-M & A compatibility with Akida
I'm not sure the above is concrete evidence that the Arm Ethos-U85 is not Akida, unless I am not understanding your reasons correctly.
Hi Doz,
Ogres at 10 paces!
So to address your straw man argument - I have posted a few times about TENNs using MACs, just little ones, and mentioned that we could no longer rely on MACs as a guaranteed discriminator.
For example:
- #94,041 (Jan 5, 2025)
- #83,783 (May 16, 2024)
Of course, if your ogre wishes to engage in combat blindfolded, so be it.
PS: Ditch the "respect and admiration" - we're here for a ~~fight~~ debate.
In all seriousness Dodgy, the Arm Ethos-U85, due for release in 2025, sure has a lot of similarities to BrainChip's Akida, and for all shareholders' sake it needs to be us.