BRN Discussion Ongoing

TheFunkMachine

seeds have the potential to become trees.
I stopped following BRN daily a long time ago, but can someone catch me up on what's going on at the moment? There's been a serious drop in the SP over the last few days, and people are talking about BRN re-listing on an American stock exchange?
 

Pmel

Regular
Keep up, Manny, Cortex-A confirmed. I myself have absolutely no doubts that Arm's Ethos-U85 is Akida, as we are seeing far too many clues at this point in time.



View attachment 78390
View attachment 78391
View attachment 78392
Embedded World 2023 / Remember the Embeddy Award / The below is from Sally Ward-Foxton spilling the beans.
Anil and Rob both liked this LinkedIn post. Note that at the time Arm only had the Ethos-U55 and U65.

View attachment 78393
Funny, BrainChip didn't even mention that we were in the Arm booth doing joint demos.

All in my opinion.
 
  • Like
  • Haha
Reactions: 5 users

Rskiff

Regular
I stopped following BRN daily a long time ago, but can someone catch me up on what's going on at the moment? There's been a serious drop in the SP over the last few days, and people are talking about BRN re-listing on an American stock exchange?
dyor
 
  • Like
Reactions: 6 users
I stopped following BRN daily a long time ago, but can someone catch me up on what's going on at the moment? There's been a serious drop in the SP over the last few days, and people are talking about BRN re-listing on an American stock exchange?
Use the page back button, mate; it's all there in black and white.
 
  • Like
  • Love
Reactions: 4 users

Drewski

Regular
I don't post often, but a webinar would be greatly appreciated by all LTHs, I'm sure. I can only hope that by the time the transition is completed, BRN has some revenue and deals signed and a healthier share price. That would make the issue of consolidation far more appealing and ward off the prospect of being destroyed in the US the way we have been here by the scumbags.
I'm a huge believer in the potential of this company; may we all benefit from its realisation.
 
  • Like
Reactions: 6 users

Doz

Regular

Slowly coming out of the closet, one by one.
1740799037111.png


1740799101818.png

1740799143078.png

All in my opinion …..
 
  • Like
  • Love
Reactions: 9 users

Doz

Regular
1740799619529.png


1740799685216.png


1740799724019.png


1740799781796.png

What else are Arm and Meta up to?

1740799864911.png


All in my opinion ……
 
  • Like
  • Fire
  • Love
Reactions: 10 users

Mccabe84

Regular
I stopped following BRN daily a long time ago, but can someone catch me up on what's going on at the moment? There's been a serious drop in the SP over the last few days, and people are talking about BRN re-listing on an American stock exchange?
Screenshot_20250301_155747_Drive.jpg
 

Doz

Regular
Hi Bravo,

In your orange-highlighted bit, it talks about running lightweight AI workloads independently of the cloud.

The word lightweight in my mind rules out Akida ... and I don't class TENNs as lightweight because it adds the time factor for image tracking and, with long skip, speech processing.

Here's one I prepared earlier:

https://www.cnx-software.com/2025/0...rmv9-core-optimized-for-edge-ai-and-iot-socs/

February 27, 2025, by Jean-Luc Aufranc (CNXSoft)

Arm Cortex-A320 low-power CPU is the smallest Armv9 core, optimized for Edge AI and IoT SoCs

Arm Cortex-A320 is a low-power Armv9 CPU core optimized for Edge AI and IoT applications, with up to 50% efficiency improvements over the Cortex-A520 CPU core. It is the smallest Armv9 core unveiled so far.
The Armv9 architecture was first introduced in 2021 with a focus on AI and specialized cores, followed by the first Armv9 cores – Cortex-A510, Cortex-A710, Cortex-X2 – unveiled later that year and targeting flagship mobile devices. Since then we've seen Armv9 cores on a wider range of smartphones, high-end Armv9 motherboards, and TV boxes. The upcoming Rockchip RK3688 AIoT SoC also features Armv9 but targets high-end applications. The new Arm Cortex-A320 will expand Armv9 usage to a much wider range of IoT devices, including power-constrained Edge AI devices.


View attachment 78344

Arm Cortex-A320 highlights:

  • Architecture – Armv9.2-A (Harvard)
  • Extensions
    • Up to Armv8.7 extensions
    • QARMA3 extensions
    • SVE2 extensions
    • Memory Tagging Extensions (MTE) (including Asymmetric MTE)
    • Cryptography extensions
    • RAS extensions
  • Microarchitecture
    • In-order pipeline
    • Partial superscalar support
    • NEON/Floating Point Unit
    • Optional Cryptography Unit
    • Up to 4x CPUs in cluster
    • 40-bit Physical Addressing (PA)
  • Memory system and external interfaces
    • 32KB or 64KB L1 I-Cache / D-Cache
    • Optional L2 Cache – 128KB, 192KB, 256KB, 384KB, or 512KB
    • No L3 Cache
    • ECC Support
    • Bus interfaces – AMBA AXI5
    • No ACP, No Peripheral Port
  • Security – TrustZone, Secure EL2, MTE, PAC/BTI
  • Debugging
    • Debug – Armv9.2-A features
    • CoreSightv3
    • Embedded Trace Extension (ETEv1.1)
    • Trace Buffer Extension
  • Misc
    • Interrupts – GIC interface, GICv4.1
    • Generic timer – Armv9.2-A
    • PMUv3.7

The Cortex-A320 can be combined with the Ethos-U85 NPU for Edge AI, providing an upgrade path to Cortex-M85+Ethos-U85-based Endpoint AI devices, with support for LLMs with up to one billion parameters, and Linux or Android operating systems, besides RTOSes like FreeRTOS or Zephyr OS. We're also told a quad-core Cortex-A320 can execute up to 256 GOPS, measured in 8-bit MACs/cycle when running at 2 GHz.
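As a quick sanity check on that headline figure, here is my own back-of-envelope sketch (not from the article, and assuming each 8-bit MAC is counted as a single op):

```python
# Back-of-envelope check of the quoted quad-core Cortex-A320 throughput:
# 256 GOPS at 2 GHz. Assumes one 8-bit MAC counts as one op; if Arm counts
# a MAC as two ops (multiply + add), halve the per-core result.
cores = 4
freq_hz = 2e9            # 2 GHz clock
total_ops_per_s = 256e9  # 256 GOPS (quoted)

ops_per_cycle_cluster = total_ops_per_s / freq_hz   # 128 ops/cycle for the cluster
ops_per_cycle_core = ops_per_cycle_cluster / cores  # 32 ops/cycle per core

print(f"{ops_per_cycle_cluster:.0f} MACs/cycle per cluster, "
      f"{ops_per_cycle_core:.0f} MACs/cycle per core")
```

That works out to roughly 32 8-bit MACs per cycle per core, which is the sort of throughput a 128-bit NEON/SVE2 datapath issuing int8 dot-product instructions could plausibly sustain (my inference, not an Arm figure).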

Besides the 50% efficiency improvement over the Cortex-A520, Arm says the performance of the Cortex-A320 has improved by more than 30% in SPECINT2K6 compared to its Armv8 predecessor, the Cortex-A35, thanks to efficient branch predictors and prefetchers, and memory system improvements.

The Cortex-A320 also makes use of NEON and SVE2 improvements in the Armv9 architecture to deliver up to 10x better machine learning (ML) performance compared to the Cortex-A35, or up to 6x higher ML performance than the Cortex-A53. With these ML improvements and high area and energy efficiencies, Arm claims that the Cortex-A320 is the most efficient core for ML applications across all Arm Cortex-A CPUs.

Consider yourself ogred.

1740803830489.png




Dodgy, with all due respect and admiration, can I see your Ogre and raise you an Ogre?

I know that you conversed with PVDM about MACs and that he told you that Akida does not perform MAC operations. However, we continually see BrainChip documentation providing MAC performance figures. As such, I don't believe we can continue with this thought process.

1740804016332.png



Does the below provide a feasible reason why we now continue to see MAC comparisons in all of BrainChip's technical documentation? Did the industry once accept that the multiplication was not counted as a MAC, but has it since unified to provide a level playing field for comparative purposes?


1740804547982.png
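For anyone newer to the jargon: a MAC is just a multiply followed by an accumulate, the unit operation that NPU datasheets count when they quote MACs/cycle or TOPS. A minimal Python sketch with made-up 8-bit values (my own illustration, not from any BrainChip document):

```python
# One MAC = one multiply plus one accumulate. A dot product is just a chain
# of MACs, which is why NN accelerators are rated in MACs/cycle or TOPS.
weights = [3, -1, 2, 5]      # made-up int8 weights
activations = [10, 4, 7, 0]  # made-up int8 activations

acc = 0
for w, a in zip(weights, activations):
    acc += w * a             # one MAC per weight/activation pair

print(acc)  # 3*10 + (-1)*4 + 2*7 + 5*0 = 40
```

One possible reading of PVDM's comment, and it is only my speculation, is that with binary spikes the multiply collapses into a conditional add, so quoting "equivalent MACs" is simply the industry's way of putting everything on the same comparative scale.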





( Note to self … if Dodgy starts getting technical, I may need to resort to bluffing to win this hand. I know … )

Dodgy, all counter-arguments must be supported with pretty pictures. Thanks.

 
Last edited:
  • Like
Reactions: 2 users
Guess no one has looked at any of these videos yet from the Edge AI Foundation event in Austin that we attended.
IMG_2168.png
 
Last edited:
  • Like
Reactions: 1 users

Diogenese

Top 20

Google DeepMind hires Rain AI engineer for growing "AI hardware design" team

Neuromorphic computing expert leaves Altman-backed startup for Google's AI lab
February 28, 2025, by Sebastian Moss

Google DeepMind has hired a former Rain AI engineer with a background in neuromorphic computing for an AI hardware team.
Maxence Ernoult spent just over two years at Rain AI, a Sam Altman-backed neuromorphic chip startup that has a letter of intent from OpenAI. The hire has not been previously reported.


Prior to his time working on in-memory computing at Rain, Ernoult worked at IBM and studied neuromorphic computing at Sorbonne Université.
Neuromorphic computing aims to mimic how the brain works, by using hardware and software that simulate the neural and synaptic structures and functions of the brain.
"Absolutely thrilled to announce that I'm starting next week as a senior research engineer at Google DeepMind in Albert Cohen's team dedicated to AI hardware design," Ernoult said in a LinkedIn post.
Cohen is a research scientist at Google DeepMind who in 2020 published research on Computing-In-Memory (CIM) architectures. "Memristor-based, non-von-Neumann architectures performing tensor operations directly in memory are a promising approach to address the ever-increasing demand for energy-efficient, high-throughput hardware accelerators for Machine Learning (ML) inference," the abstract states.

Across Google DeepMind, a number of researchers appear to be working on AI chips, under director Olivier Temam, who says that he is building the "hardware infrastructure for AGI." Temam previously co-led Google's Cloud Tensor Processing Unit (TPU) project for a year and created a TPU project for embedded and low-power applications.
TPUs are Google's broader AI chip family, used to train Gemini and available on its cloud service.
Temam is also part of DeepMind's AlphaChip effort, which aims to use AI to help design chip layouts. The company claims that it has used AlphaChip in the design of TPUs since 2020, as well as for its Axion Arm chips and other processors.
"We believe AlphaChip has the potential to optimize every stage of the chip design cycle, from computer architecture to manufacturing — and to transform chip design for custom hardware found in everyday devices such as smartphones, medical equipment, agricultural sensors, and more," Google said last year.
A job listing for a DeepMind chip design research engineer says that the company aims to "solve some of the most complex tasks in Chip Design (RTL generation, RTL verification, Logic Synthesis, Physical Design, PPA prediction." It adds: "As part of our team at Google DeepMind, you'll have opportunities to advance AI for Chip Design to enable breakthrough capabilities and pioneer next-generation products."
Whether the company is looking at neuromorphic hardware is unclear. This January, Google DeepMind researcher Cliff Young was a co-author on a Nature paper that laid out a roadmap to neuromorphic computing competing at scale with conventional approaches. Young was critical to the development of the TPU, and also designed much of D. E. Shaw Research's seminal Anton supercomputer.
Also pointing to DeepMind's interest in neuromorphic research is Yale Professor Priyadarshini Panda, a highly respected neuromorphic computing figure who leads the university's Intelligent Computing Lab. Prof. Panda became a visiting scholar to Google DeepMind last April.
Google DeepMind declined to comment on whether this hardware project is related to AlphaChip, TPUs, neuromorphic computing, or is a new initiative.
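As an aside on the article's one-line definition of neuromorphic computing: the "simulate the neural and synaptic structures" part usually boils down to something like a leaky integrate-and-fire neuron. A toy Python sketch, with arbitrary parameters purely for illustration (real neuromorphic silicon implements the equivalent in hardware, event by event):

```python
# Toy leaky integrate-and-fire (LIF) neuron: leak the stored charge,
# integrate the new input, and emit a spike (then reset) when the
# membrane potential crosses a threshold.
def lif_step(v, input_current, leak=0.9, threshold=1.0):
    """Advance the membrane potential one timestep; return (new_v, spiked)."""
    v = v * leak + input_current
    if v >= threshold:
        return 0.0, True   # spike fired, potential reset
    return v, False

v, spikes = 0.0, []
for i in [0.3, 0.4, 0.5, 0.1, 0.8]:   # made-up input current trace
    v, fired = lif_step(v, i)
    spikes.append(fired)

print(spikes)  # [False, False, True, False, False]
```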







Professor Priyadarshini Panda, Yale University Intelligent Computing Lab.

View attachment 78372





Here are some extracts from Priya's research, developed with funding from DARPA, which mention the use of SNNs.


View attachment 78373


View attachment 78374



View attachment 78378



View attachment 78376


View attachment 78377





Priya was also on a panel organised in 2022 by Tata Consultancy, "Neuromorphic Computing for Transformation of the Industrial Future".

Arpan Paul was on the same panel.




View attachment 78379

As well as the reference to Google's compute-in-memory (CIM) research, Ernoult's work at Rain has been in hybrid analog/digital NNs and edge devices.

US2024202594A1 PRIVATE TRAINING OF ARTIFICIAL INTELLIGENCE 20221214

1740806775163.png




US2024249190A1
GRADIENT COMPUTATION IN HYBRID DIGITALLY TIED ANALOG BLOCKS WITH ARBITRARY CONNECTIVITY BY EQUILIBRIUM PROPAGATION 20230119

1740806873352.png
 

Diogenese

Top 20
I am logged out for a holiday and am seeing for the first time what TSE looks like without a login: ads.
How do the rest of you cope with the shooting of F*Temu?
If you Google "Temu shooter", Temu will have a few on sale.
 