BRN Discussion Ongoing

Diogenese

Top 20
Hi fmf,

The authors are from a couple of Italian Unis and ESA. Alexander Hadjiivanov studied at UNSW.

The Abstract provides some informative background which illustrates the context:

"Spiking Neural Networks (SNN) are highly attractive due to their theoretically superior energy efficiency due to their inherently sparse activity induced by neurons communicating by means of binary spikes. Nevertheless, the ability of SNN to reach such efficiency on real world tasks is still to be demonstrated in practice. To evaluate the feasibility of utilizing SNN onboard spacecraft, this work presents a numerical analysis and comparison of different SNN techniques applied to scene classification for the EuroSAT dataset. Such tasks are of primary importance for space applications and constitute a valuable test case given the abundance of competitive methods available to establish a benchmark. Particular emphasis is placed on models based on temporal coding, where crucial information is encoded in the timing of neuron spikes. These models promise even greater efficiency of resulting networks, as they maximize the sparsity properties inherent in SNN. A reliable metric capable of comparing different architectures in a hardware-agnostic way is developed to establish a clear theoretical dependence between architecture parameters and the energy consumption that can be expected onboard the spacecraft. The potential of this novel method and its flexibility to describe specific hardware platforms is demonstrated by its application to predicting the energy consumption of a BrainChip Akida AKD1000 neuromorphic processor."

The tests were carried out on a single Akida node (4 NPUs). While this serves to demonstrate the versatility of Akida, I guess this would have come at a latency penalty if the various "layers" needed to be run sequentially through the node. Still, as they were looking at energy efficiency, this is a secondary consideration.

I wonder if the recycling through a single node had anything to do with the premature-spiking problem, as this increases the latency of processing:

3.4 Summary

"SNN still struggle to scale to deeper architectures: when the number of layers N≥5, higher layers start to fire while only partial information is available from lower layers. Possible causes for this include the large memory consumption at training and error accumulation, both due to the unrolling in time adopted by SG, which can limit the effectiveness of regularization methods such as BNTT."

4 Hardware testing​

In order to evaluate the energy consumption of spiking networks on actual hardware, a series of benchmark models were implemented on the BrainChip Akida AKD1000 [78] device as it has built-in power consumption reporting capabilities. The AKD1000 system-on-chip (SoC) is the first generation of digital neuromorphic accelerator by BrainChip, designed to facilitate the evaluation of models for different types of tasks, such as image classification and online on-chip learning. Three convolutional models compatible with the AKD1000 hardware were trained on the EuroSAT dataset. They consist of convolutional blocks made up of a Conv2D → BatchNorm → MaxPool → ReLU stack of layers. The last convolutional block lacks the MaxPool layer and is followed by a flattening and a dense linear layer at the end. The three models differ in the number of convolutional blocks, and in the number of filters in each Conv2D layer: the architectures are detailed in Fig. 10. The selection of rather small models was determined by the hardware capabilities, as only a single AKD1000 node was available during this work. The Akida architecture can be arranged in multiple parallel nodes to host larger models.
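Just to make the structure concrete, here's a minimal NumPy sketch of one of those Conv2D → BatchNorm → MaxPool → ReLU blocks. The filter counts and kernel sizes are placeholders of my own, not the actual Fig. 10 architectures:

```python
import numpy as np

def conv_block(x, kernels, bn_gamma, bn_beta, eps=1e-5, pool=2):
    """One Conv2D -> BatchNorm -> MaxPool -> ReLU block (stride-1, 'valid' conv)."""
    out_ch, in_ch, kh, kw = kernels.shape           # x: (in_ch, H, W)
    H, W = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    y = np.zeros((out_ch, H, W))
    for o in range(out_ch):                         # naive convolution loop
        for i in range(H):
            for j in range(W):
                y[o, i, j] = np.sum(kernels[o] * x[:, i:i + kh, j:j + kw])
    # BatchNorm (per-channel; uses the activations' own statistics here)
    mean = y.mean(axis=(1, 2), keepdims=True)
    var = y.var(axis=(1, 2), keepdims=True)
    y = bn_gamma[:, None, None] * (y - mean) / np.sqrt(var + eps) + bn_beta[:, None, None]
    # MaxPool over non-overlapping pool x pool windows
    Hp, Wp = H // pool, W // pool
    y = y[:, :Hp * pool, :Wp * pool].reshape(out_ch, Hp, pool, Wp, pool).max(axis=(2, 4))
    return np.maximum(y, 0.0)                       # ReLU

# Example: a 3-channel 8x8 input through a 4-filter block
rng = np.random.default_rng(0)
out = conv_block(rng.standard_normal((3, 8, 8)),
                 rng.standard_normal((4, 3, 3, 3)),
                 np.ones(4), np.zeros(4))
print(out.shape)  # (4, 3, 3)
```

The last block in the paper's models drops the MaxPool and feeds a flatten + dense layer instead, per the quoted section.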

Jonathan Tapson said that Akida 2 is 8 times as efficient as Akida 1000. That gives an even greater power/energy advantage. It can also handle 8-bit activations.

As to the models:

"The EuroSAT RGB dataset, a classification task representative of a class of tasks of potential interest in the field of Earth Observation, was selected as case study. SNN models based on both temporal and rate coding, and their ANN counterparts, were compared in a hardware-agnostic way in terms of accuracy and complexity by means of a novel metric."


"Benchmark SNN models, both latency and rate based, exhibited a minimal loss in accuracy, compared with their equivalent ANN, with significantly lower (from −50 % to −80 %) EMAC per inference."


As we know, Edge Impulse can rapidly develop models for Akida, and then there is on-chip learning, and we're mates with DeGirum.
https://au.video.search.yahoo.com/s...04a57e78d47e4795939bc4ed54b9d967&action=click


Akida uses rank (temporal) coding, rather than rate coding. I think that rank coding is faster than rate coding.
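For anyone new to the distinction, here's a generic toy sketch of the two coding schemes (my own illustration, not BrainChip's actual implementation):

```python
import numpy as np

def rate_encode(x, n_steps, rng):
    """Rate coding: intensity in [0, 1] sets the per-step spike probability,
    so information is carried by the spike *count* over the window."""
    return (rng.random((n_steps,) + x.shape) < x).astype(np.uint8)

def latency_encode(x, n_steps):
    """Latency/rank (temporal) coding: each input spikes exactly once, and
    stronger inputs spike *earlier*, so information is carried by spike timing.
    (Toy version: even a zero input fires, at the very last step.)"""
    t = np.round((1.0 - x) * (n_steps - 1)).astype(int)  # strong -> early
    spikes = np.zeros((n_steps,) + x.shape, dtype=np.uint8)
    np.put_along_axis(spikes, t[None, ...], 1, axis=0)
    return spikes

x = np.array([0.0, 0.5, 1.0])
print(latency_encode(x, 8).sum(axis=0))  # [1 1 1] -> one spike per input
print(rate_encode(x, 8, np.random.default_rng(0)).sum(axis=0))  # counts grow with intensity
```

With latency coding every input contributes a single spike no matter how long the window, while rate coding needs many spikes to represent an intensity precisely, which is where the extra sparsity (and hence speed and energy) argument comes from.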
"You can go to the DeGirum website ... and try Akida with your model or our model"
 
  • Like
  • Fire
Reactions: 8 users
(quoting Diogenese's post above)

"As we know, Edge Impulse can rapidly develop models for Akida, and then there is on-chip learning, and we're mates with DeGirum. [...] Akida uses rank (temporal) coding, rather than rate coding. I think that rank coding is faster than rate coding."
Not sure when it comes to Edge Impulse and new models. The Mar '25 update still hasn't changed.


As for DeGirum, maybe it's limited to certain models(?), but if you see my previous post below, it appears BRN has been a bit tardy in getting back to our supposed partners when they send BRN a request on a user's behalf.


 
  • Like
  • Sad
  • Wow
Reactions: 7 users

Frangipani

Top 20
Recently released paper funded by ESA.

Paper HERE

From what I can understand, the focus is on SNN vs ANN and neuromorphic processing power efficiencies etc.

They used AKD1000 for the test HW.

Appears to have come up alright, but with some work still to be done refining the SNN models.



Released 16/5.

Energy efficiency analysis of Spiking Neural Networks for space applications

Ethics declarations​

This work was funded by the European Space Agency (contract number: 4000135881/21/NL/GLC/my) in the framework of the Ariadna research program. The authors declare that they have no known competing financial interests or personal relationships that are relevant to the content of this article. The EuroSAT dataset used in this activity is publicly available at [43].


1.1 Work objectives​

With neuromorphic research being a relatively new topic, several steps are still needed to move from theoretical and numerical studies to practical implementation on real hardware flying onboard spacecraft. This work aims to contribute to this transition by providing some practical tools needed for the design of future spaceborne neuromorphic systems. The primary objective is to estimate the actual advantages that can be expected from SNN with respect to classical ANN. Here, the focus is put on the trade-off between accuracy and energy, as energy efficiency is the most prominent benefit sought in space applications. To achieve this goal, a novel metric, capable of comparing the model complexity of both ANN and SNN in a hardware-agnostic way, is proposed as a proxy for the energy consumption. The performance of several SNN and ANN models is compared on a scene classification task, using the EuroSAT RGB dataset. Special attention is placed on spiking models based on temporal coding, as they promise even greater efficiency of resulting networks, since they maximize the sparsity properties inherent in SNN, but rate-based models are included in the analysis as well. In order to validate this approach, the energy trend predicted with the proposed metric is compared with actual measurements of energy used by benchmark SNN models running on neuromorphic hardware. The secondary goal of this work is to exploit the data collected in the comparison to analyze the internal dynamics of SNN models, identifying the most significant factors which affect the energy consumption, in order to derive design principles useful in future activities.


5 Conclusion​

An investigation of the potential benefits of Spiking Neural Networks for onboard AI applications in space was carried out in this work. The EuroSAT RGB dataset, a classification task representative of a class of tasks of potential interest in the field of Earth Observation, was selected as case study. SNN models based on both temporal and rate coding, and their ANN counterparts, were compared in a hardware-agnostic way in terms of accuracy and complexity by means of a novel metric, Equivalent MAC operations (EMAC) per inference. EMAC is suitable for assessing the impact of different neuron models, distinguishing the contributions of synaptic operations with respect to neuron updates, and comparing SNN with their ANN counterparts. In its base formulation, EMAC achieves dimensionless estimation, and it should then be considered only a proxy for energy consumption. Nevertheless, internal parameters can be tuned to match the features of specific hardware, if known, achieving also absolute estimation. A preliminary successful demonstration is given for the BrainChip Akida AKD1000 neuromorphic processor. Benchmark SNN models, both latency and rate based, exhibited a minimal loss in accuracy, compared with their equivalent ANN, with significantly lower (from −50 % to −80 %) EMAC per inference. An even greater energy reduction can be expected with SNN implemented on actual neuromorphic devices, with respect to standard ANN running on traditional hardware. While Surrogate Gradient proved to be an easy and effective way to achieve offline, supervised training of SNN, scaling to very deep architectures to achieve state-of-the-art performance is still an issue. Further research is needed particularly in the search of architectures capable to exploit SNN peculiarities, and in the development of regularization techniques and initialization methods suited to latency-based networks. 
Attention should be also given to recent developments in training techniques which do not require backward propagation in time, but only along the network at each time step [81], and new ANN-to-SNN conversion methods tailored for achieving extremely low latency [82]. Overall, the confirmed superior energy efficiency of Spiking Neural Networks is of extreme interest for applications limited in terms of power and energy, which is typical of the space environment, and SNN are a competitive candidate for achieving autonomy in space systems.
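The EMAC metric quoted above separates spike-driven synaptic operations from neuron-state updates. The paper defines it precisely; as a rough back-of-the-envelope illustration of the idea (my own simplification, not the authors' formula):

```python
def ann_macs(fan_in, n_neurons):
    """Dense ANN layer: one MAC per synapse, every inference."""
    return fan_in * n_neurons

def snn_equivalent_macs(fan_in, n_neurons, n_steps, spike_rate, update_cost=1.0):
    """Crude SNN cost proxy: synaptic operations scale with actual spiking
    activity (sparsity), plus a neuron-state update per neuron per time step.
    Illustrative only -- the real EMAC metric accounts for neuron models and
    tunable hardware parameters."""
    synaptic_ops = fan_in * n_neurons * n_steps * spike_rate
    neuron_updates = n_neurons * n_steps * update_cost
    return synaptic_ops + neuron_updates

# A 100-input, 10-neuron layer at 5 % spiking activity over 4 time steps:
print(ann_macs(100, 10))                      # 1000 MACs per inference
print(snn_equivalent_macs(100, 10, 4, 0.05))  # 240.0 equivalent MACs
```

With these made-up parameters the SNN layer comes out 76 % cheaper, which happens to land inside the −50 % to −80 % band the paper reports, though that's down to my chosen numbers, not the authors' measurements.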

Hi @Fullmoonfever,

while this ESA-funded paper was indeed published on arXiv.org only a few days ago, it appears to be a revised version of a conference paper presented at the 75th International Astronautical Congress (IAC), Milan, Italy, 14-18 October 2024 rather than novel research.




The deadline for IAC 2024 paper submissions was originally 28 February 2024
(https://www.iafastro.org/news/submit-your-abstract-for-iac-2024-by-28-february.html), and was later extended by about a week. Which obviously means the research involving AKD1000 referred to in that paper submitted by the six co-authors from Politecnico di Milano and ESA to IAC 2024 must have been conducted even earlier.


Although the papers’ titles don’t match, the connection between those two papers becomes apparent when you compare the section underlined in green of the newly released paper you shared today…


2189E028-6C58-46AE-92F9-A89A90EC2FA7.jpeg



… with my bolded quote below, which is an excerpt from the October 2024 conference paper’s conclusion:


https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-454705

F74FD97F-5424-4E03-B2ED-1E8BBDE75810.jpeg


Also note that both papers refer to the exact same contract number for the ESA funding, which was awarded in the framework of the Ariadna research program: 4000135881/21/NL/GLC/my

Another hint that this research involving AKD1000 must have been conducted quite a while ago is the fact that co-authors Dominik Dold and Alexander Hadjiivanov are still listed as members of the ESA Advanced Concepts Team (ACT) in Noordwijk, The Netherlands, although both of them already left ACT back in September 2024 to become a Marie Skłodowska-Curie Research Fellow with the Faculty of Mathematics at Universität Wien (University of Vienna) and a Research Software Engineer at the Netherlands eScience Center, respectively:


(screenshots attached)



Who knows - maybe the release of the revised paper has to do with Politecnico di Milano being a project partner of the Europe Defense Fund (EDF) research project ARCHYTAS (ARCHitectures based on unconventional accelerators for dependable/energY efficienT AI Systems), which I happened to come across two months ago?

Check out this newly launched project called ARCHYTAS (ARCHitectures based on unconventional accelerators for dependable/energY efficienT AI Systems), funded by the European Defence Fund (EDF):

View attachment 79763

View attachment 79764



View attachment 79765



View attachment 79766 View attachment 79767 View attachment 79768
One of the ARCHYTAS project partners is Politecnico di Milano, whose neuromorphic researchers Paolo Lunghi and Stefano Silvestrini have experimented with AKD1000 in collaboration with Gabriele Meoni, Dominik Dold, Alexander Hadjiivanov and Dario Izzo from the ESA-ESTEC (European Space Research and Technology Centre) Advanced Concepts Team in Noordwijk, the Netherlands*, as evidenced by the conference paper below, presented at the 75th International Astronautical Congress in October 2024: 🚀
*(Gabriele Meoni and Dominik Dold have since left the ACT)

A preliminary successful demonstration is given for the BrainChip Akida AKD1000 neuromorphic processor. Benchmark SNN models, both latency and rate based, exhibited a minimal loss in accuracy, compared with their ANN coun- terparts, with significantly lower (from −50 % to −80 %) EMAC per inference, making SNN of extreme interest for applications limited in power and energy typical of the space environment, especially considering that an even greater improvement (with respect to standard ANN running on traditional hardware) in energy consumption can be expected with SNN when implemented on actual neuromorphic devices. A research effort is still needed, especially in the search of new architectures and training methods capable to fully exploit SNN peculiarities.

The work was funded by the European Space Agency (contract number: 4000135881/21/NL/GLC/my) in the framework of the Ariadna research program.



View attachment 79770 View attachment 79771 View attachment 79772 View attachment 79773
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 22 users

Esq.111

Fascinatingly Intuitive.
Morning Chippers,

Little bit on Lucky.

Five minute watch time.



Regards,
Esq.
 
  • Like
  • Fire
  • Wow
Reactions: 23 users

Esq.111

Fascinatingly Intuitive.
Chippers ,

Small Australian mining company using drones.....

Three min video.



Regards,
Esq.
 
  • Like
  • Thinking
  • Wow
Reactions: 25 users

Labsy

Regular
(quoting Esq.'s post above on the Palmer Luckey video)

The guy is on a winner. Pure genius. Strategic... functional, scalable, cheap. Watch this space closely. Palmer Luckey is a forward thinker and he knows what's the bee's knees... and that's neuromorphic... no doubt about it, we are on the "radar"... ;)
 
  • Like
  • Fire
  • Thinking
Reactions: 21 users

Labsy

Regular
I'm also watching the Meta Ray-Bans space.
This is going to be a boom product. Only issue is 6 hr battery life... We can help with that, I'm sure.
Going back to Anduril, they have their hands firmly grasped on the VR defence contract... Fingers crossed we creep into that too...
 
  • Like
  • Fire
  • Thinking
Reactions: 19 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Arm is saying all the right things but is yet to take a serious punt on us. So frustrating!




AI bigger than internet and smartphones, says Arm Holdings
By Liew Jia Teng
19 May 2025, 05:22 pm


Chris Bergey, Arm's senior vice-president and general manager of client line of business, addresses the audience during the Computex 2025 Arm Executive Session at Grand Hilai Taipei on Monday. (Photo by Liew Jia Teng/The Edge)
TAIPEI (May 19): British chip architect Arm Holdings plc, backed by Japan’s SoftBank Group Corp, believes artificial intelligence (AI) is a bigger technological shift than the internet or the smartphone.
AI models are now evolving and accelerating, Chris Bergey, Arm's senior vice-president and general manager of client line of business, told the audience during the Computex 2025 Arm Executive Session at Grand Hilai Taipei on Monday.
“We are at the doorstep of the most important moment in the history of technology,” he said. “Today, the discussion is no longer about what AI might do, but what it’s doing.”


The Cambridge-headquartered semiconductor and software design firm licenses the instruction sets for modern chips to partners, who then make chips with customisations for their unique applications.
Arm-designed chips are nearly all-present in the modern world, from sensors, automotive, personal computers (PCs), Internet of Things, smartphones to supercomputers.
“Now is a time for us to build what’s next,” Bergey said. “And the best part? We don’t have to do it alone… we can do this together.”
In March, the Malaysian government announced that it will pay Arm a total of US$250 million (RM1.11 billion) over 10 years for semiconductor-related licences and know-how.
The main goals are to create “Made by Malaysia” AI chips in the next five to seven years, as well as to establish 10 chip companies with a combined annual turnover of up to US$20 billion.
The UK firm has shipped over 300 billion chips and today, 99% of smartphones run on Arm-based processors. Some 70% of the world's population uses Arm-based products, while 50% of all chips with processors are Arm-based.
“Yesterday’s legacy architecture can’t power tomorrow’s AI. We need to move faster and scale smartly. If we don’t solve efficiency, AI won’t scale,” he warned.
By the end of 2025, half of new devices and server chips being shipped to hyperscalers will be Arm-based, while over 40% of PCs and tablets shipments will also be Arm-backed, according to Bergey.

 
  • Like
  • Fire
  • Love
Reactions: 32 users

manny100

Top 20
Arm is saying all the right things but is yet to take a serious punt on us. So frustrating!

(quoting the Computex 2025 article on Arm's Chris Bergey, posted above)
If he is 'on the ball', the least we will get is a significant offer for our patents once we have made significant progress on our tech roadmap.
Patents = our safety net
 
  • Like
  • Fire
  • Thinking
Reactions: 17 users

BigDonger101

Founding Member
As I said the other day, 2027 - 2029 is when I expect things to get commercially better for BrainChip.
View attachment 84444
Hahaha mate. In what world is that acceptable.

That is pure hopium right there.
 
  • Like
  • Sad
Reactions: 3 users

7für7

Top 20
Wow… the positive vibes here just hit different! 😵‍💫
Who else was celebrating the New Year like they were gonna get rich in 2025?

I’ll go first…
I did!

 
  • Haha
  • Like
Reactions: 11 users
  • Like
  • Fire
Reactions: 9 users

JB49

Regular


Nanoveu seem to be moving quickly. Listen from 22.15. A few key points:
  • Starting to work with customers in the US such as Garmin, Meta, Amazon
  • Start getting MOUs, LOUs and engineering revenue this year; commercialisation on products next year. Should be profitable very quickly. Expects to be turning a huge profit in 2027 (lol)
 
Last edited:
  • Fire
  • Haha
Reactions: 3 users

Drewski

Regular
(quoting 7für7's post above)

The fat lady hasn't sung yet.
 
  • Like
  • Haha
Reactions: 5 users

BigDonger101

Founding Member


(quoting JB49's Nanoveu post above)

Can imagine he's blowing a tonne of smoke up investors' asses.
 

  • Like
  • Fire
Reactions: 2 users

JB49

Regular
Nice to see the latest May 2025 ESA publication including Akida in the hardware solutions for future TN and NTN communications. (Terrestrial and Non Terrestrial Networks).

Screenshot_2025-05-20-16-02-14-54_e2d5b3f32b79de1d45acd1fad96fbb0f.jpg


IMG_20250520_160630.jpg



Full publication HERE
 
  • Like
  • Fire
  • Love
Reactions: 55 users
Top Bottom