BRN Discussion Ongoing

Frangipani

Top 20



“NEUROBUS: FRUGAL AI SERVING DEFENSE AND AEROSPACE​

(…) Though Space remains a core sector for Neurobus, its technology's practical application in the drone sector unlocks compelling possibilities for autonomy. Drones equipped with Neurobus's frugal AI can execute missions more independently, making real-time decisions with minimal human oversight. While human validation remains crucial for strategic actions, tasks like area surveillance can be managed autonomously.
For instance, a drone could autonomously evade an oncoming object at high speed. However, directing itself toward a target would require prior human authorization.

Although the present application is primarily focused on defense, driven by the current geopolitical climate and pressing demands, Neurobus also foresees a future in the civilian domain, particularly in applications like autonomous drone delivery services.

(…) Identifying the ideal market for a disruptive technology proved difficult. After considering the automotive and space sectors, both with lengthy integration cycles, Neurobus found its niche in drones.”







INNOVATION & ENTREPRENEURSHIP INSTITUTE
Apr 17, 2025

NEUROBUS: FRUGAL AI SERVING DEFENSE AND AEROSPACE​


Combining Deep Tech and Sustainability is the winning bet of Neurobus, an innovative startup founded by Florian Corgnou, an entrepreneur whose successive stints at HEC Paris have profoundly shaped the company's trajectory.

Neurobus HEC paris

AEROSPACE, A CORE SECTOR​

Initially destined for an engineering career in aeronautics, Florian gradually shifted toward entrepreneurship and the business world, a change accelerated by a significant period in the United States. His desire to engage with cutting-edge technologies then drew him to Tesla's European headquarters in the Netherlands for his first professional experience. Though formative and inspiring, this experience ultimately led Florian to make an about-face: the engineer left his favored sector to become an entrepreneur in an unexpected field, finance.

With Trezy, his startup, SME managers could monitor their company's financial situation in real time. The venture proved successful, raising 3.5 million euros and continuing to operate today. Yet, despite Trezy's success, Florian felt compelled to invest in a more meaningful project, one within a field that genuinely excited him.

Aerospace beckoned him once more. It was there, while participating in a space entrepreneurship program in partnership with Airbus Defense & Space, that the idea for Neurobus began to germinate.

NEUROBUS, A SOLUTION THAT COMBINES ENERGY EFFICIENCY AND INTELLIGENCE

By observing technological advances at Tesla and SpaceX, and then participating in the Airbus Defense & Space program, Florian laid the groundwork for Neurobus. Immersion with engineers and space experts allowed him to pinpoint market trends and unmet needs, needs Neurobus was determined to address.

So, what does Neurobus offer? It's an embedded, frugal Artificial Intelligence – specifically, an AI engineered for minimal energy consumption and direct integration into host systems like drones and satellites. Data processing occurs locally, eliminating the costly energy expenditure of transferring data to remote data centers.


Neurobus HEC paris


Neurobus's initial focus was the space sector, a field inherently linked to defense, with partners like Airbus Defense and Space, the European Space Agency, and the French Space Agency. However, the company adroitly adapted its promising technology to the drone sector, a rapidly expanding market with more immediate demands. Winning a European defense innovation competition further validated the potential of their solution for drone detection.

The core of Neurobus's innovation lies in its biologically-inspired approach: the neuromorphic system. This disruptive technology draws inspiration from the human brain and retina to create processors and sensors that are remarkably energy-efficient. For Florian, the human brain serves as an unparalleled source of inspiration:
"The brain is one of the best computers that exists today because it delivers immense computing power with extremely low energy consumption."


DRONES: A TESTED AND VALIDATED FIELD OF APPLICATION​

Neurobus sidesteps the capital-intensive manufacturing of components like processors and sensors. Instead, its value proposition lies in assembling these components and developing tailored software layers to meet specific manufacturer needs. This positions the startup as both an integrator and a software publisher, streamlining the adoption of this cutting-edge technology.

As Florian Corgnou explains, "Neurobus operates precisely between the manufacturer and the industrialist. We don't create the hardware, but we assemble it into a product that specifically addresses our customers' requirements and develop software layers that cater to the unique applications of that industrialist."


Neurobus HEC paris


Though Space remains a core sector for Neurobus, its technology's practical application in the drone sector unlocks compelling possibilities for autonomy. Drones equipped with Neurobus's frugal AI can execute missions more independently, making real-time decisions with minimal human oversight. While human validation remains crucial for strategic actions, tasks like area surveillance can be managed autonomously.
For instance, a drone could autonomously evade an oncoming object at high speed. However, directing itself toward a target would require prior human authorization.

Although the present application is primarily focused on defense, driven by the current geopolitical climate and pressing demands, Neurobus also foresees a future in the civilian domain, particularly in applications like autonomous drone delivery services.


OVERCOMING CHALLENGES, STEP BY STEP​

Like any entrepreneurial venture, Neurobus faced its share of challenges – challenges Florian embraces. As he notes, "The biggest challenge was securing our market within a limited timeframe and on a tight budget." A dual problem indeed.

Identifying the ideal market for a disruptive technology proved difficult. After considering the automotive and space sectors, both with lengthy integration cycles, Neurobus found its niche in drones.

Financially, Florian acknowledges the perpetual challenge for startups: "Financial resources are the lifeblood of any venture, and as a startup, we're constantly in survival mode."

However, Neurobus distinguished itself through its initial financing strategy. Rather than immediately pursuing fundraising to convince investors of a nascent technology, Florian prioritized securing R&D contracts with clients. "Having already gained institutional and industrial validation in previous roles, I ensured clients financed the R&D, guiding us toward the optimal applications."


Neurobus HEC paris


Another inherent difficulty with disruptive technologies is their vast potential scope. Florian stresses the importance of discipline: "In Deep Tech, it's easy to get lost – a trap we all fall into, myself included!" While the technology's versatility is tempting, the real challenge lies in identifying the application offering the greatest business and technological value. "You must focus, master a single use case, execute it flawlessly, and avoid spreading yourself too thin," he advises.

This strategy is paying dividends, with R&D contracts generating roughly €600,000 in revenue in 2024, excluding public subsidies. The team, currently composed of two partners and three employees, plans to expand to approximately ten members by June through four new hires.
With this traction and a refined roadmap, Neurobus is planning an ambitious fundraising round of €5 to €10 million by year-end to initiate the conceptual phase: developing the final product based on client feedback.


HEC PARIS, THE COMMON THREAD OF THE NEUROBUS ADVENTURE​

Would Neurobus be where it is today without Florian's HEC Paris experience? Unlikely. While launching Neurobus, Florian simultaneously pursued HEC Paris's Executive Master of Science in Innovation & Entrepreneurship (EMSIE) to "gain support and surround myself with expertise." This, combined with Neurobus's participation in the Incubateur HEC Paris and the Creative Destruction Lab (CDL) - Paris, provided a framework and strategic guidance instrumental in shaping Neurobus's trajectory. The Challenge+ program further honed the team's skills.

Neurobus HEC paris

CDL Next Gen Computing Session

Mentors from these programs challenged the company's direction and refined its strategy. "The Incubator team provided invaluable assistance in finalizing complex client contracts," Florian explains, "and the CDL mentors challenged us on critical issues. This external perspective enabled us to ask the right questions and minimize mistakes."

Finally, Florian shares advice for aspiring Deep Tech entrepreneurs:
  1. Validate your idea with customers before seeking funding.
  2. Securing a paying customer – a champion and internal advocate – is the best possible validation.
  3. Develop a true 'customer obsession' to deeply understand your clients' needs and confirm their willingness to pay upfront.
This customer-centric approach has fueled Neurobus's success, and we eagerly anticipate following their future achievements.

Neurobus 5

Neurobus team

Our partner Neurobus will be demonstrating their cutting-edge solutions for surveillance drones 👆🏻 at the Paris Air Show (16 - 22 June).



 
  • Fire
  • Like
  • Love
Reactions: 8 users

Diogenese

Top 20
Recently released paper funded by ESA.

Paper HERE

From what I can understand, the intent is focused on SNN vs ANN and neuromorphic processing power efficiencies etc.

They used AKD1000 for the test HW.

Appears to have come up alright, but with some work still to be done refining the SNN models.



Released 16/5.

Energy efficiency analysis of Spiking Neural Networks for space applications

Ethics declarations​

This work was funded by the European Space Agency (contract number: 4000135881/21/NL/GLC/my) in the framework of the Ariadna research program. The authors declare that they have no known competing financial interests or personal relationships that are relevant to the content of this article. The EuroSAT dataset used in this activity is publicly available at [43].


1.1 Work objectives​

With neuromorphic research being a relatively new topic, several steps are still needed to move from theoretical and numerical studies to practical implementation on real hardware flying onboard spacecraft. This work aims to contribute to this transition by providing some practical tools needed for the design of future spaceborne neuromorphic systems. The primary objective is to estimate the actual advantages that can be expected from SNN with respect to classical ANN. Here, the focus is put to the trade-off between accuracy and energy, as energy efficiency is the most prominent benefit sought in space applications. To achieve this goal, a novel metric, capable of comparing the model complexity of both ANN and SNN in a hardware-agnostic way, is proposed as a proxy for the energy consumption. The performance of several SNN and ANN models is compared on a scene classification task, using the EuroSAT RGB dataset. Special attention is placed on spiking models based on temporal coding, as they promise even greater efficiency of resulting networks, as they maximize the sparsity properties inherent in SNN, but rate-based models are included in the analysis as well. In order to validate this approach, the energy trend predicted with the proposed metric is compared with actual measures of energy used by benchmark SNN models running on neuromorphic hardware. The secondary goal of this work is to exploit the data collected in the comparison to analyze the internal dynamics of SNN models, identifying the most significant factors which affect the energy consumption, in order to derive design principles useful in future activities.


5 Conclusion​

An investigation of the potential benefits of Spiking Neural Networks for onboard AI applications in space was carried out in this work. The EuroSAT RGB dataset, a classification task representative of a class of tasks of potential interest in the field of Earth Observation, was selected as case study. SNN models based on both temporal and rate coding, and their ANN counterparts, were compared in a hardware-agnostic way in terms of accuracy and complexity by means of a novel metric, Equivalent MAC operations (EMAC) per inference. EMAC is suitable for assessing the impact of different neuron models, distinguishing the contributions of synaptic operations with respect to neuron updates, and comparing SNN with their ANN counterparts. In its base formulation, EMAC achieves dimensionless estimation, and it should then be considered only a proxy for energy consumption. Nevertheless, internal parameters can be tuned to match the features of specific hardware, if known, achieving also absolute estimation. A preliminary successful demonstration is given for the BrainChip Akida AKD1000 neuromorphic processor. Benchmark SNN models, both latency and rate based, exhibited a minimal loss in accuracy, compared with their equivalent ANN, with significantly lower (from −50 % to −80 %) EMAC per inference. An even greater energy reduction can be expected with SNN implemented on actual neuromorphic devices, with respect to standard ANN running on traditional hardware. While Surrogate Gradient proved to be an easy and effective way to achieve offline, supervised training of SNN, scaling to very deep architectures to achieve state-of-the-art performance is still an issue. Further research is needed particularly in the search of architectures capable to exploit SNN peculiarities, and in the development of regularization techniques and initialization methods suited to latency-based networks. Attention should be also given to recent developments in training techniques which do not require backward propagation in time, but only along the network at each time step [81], and new ANN-to-SNN conversion methods tailored for achieving extremely low latency [82]. Overall, the confirmed superior energy efficiency of Spiking Neural Networks is of extreme interest for applications limited in terms of power and energy, which is typical of the space environment, and SNN are a competitive candidate for achieving autonomy in space systems.

Hi fmf,

The authors are from a couple of Italian Unis and ESA. Alexander Hadjiivanov studied at UNSW.

The Abstract provides some informative background which illustrates the context:

"Spiking Neural Networks (SNN) are highly attractive due to their theoretically superior energy efficiency due to their inherently sparse activity induced by neurons communicating by means of binary spikes. Nevertheless, the ability of SNN to reach such efficiency on real world tasks is still to be demonstrated in practice. To evaluate the feasibility of utilizing SNN onboard spacecraft, this work presents a numerical analysis and comparison of different SNN techniques applied to scene classification for the EuroSAT dataset. Such tasks are of primary importance for space applications and constitute a valuable test case given the abundance of competitive methods available to establish a benchmark. Particular emphasis is placed on models based on temporal coding, where crucial information is encoded in the timing of neuron spikes. These models promise even greater efficiency of resulting networks, as they maximize the sparsity properties inherent in SNN. A reliable metric capable of comparing different architectures in a hardware-agnostic way is developed to establish a clear theoretical dependence between architecture parameters and the energy consumption that can be expected onboard the spacecraft. The potential of this novel method and his flexibility to describe specific hardware platforms is demonstrated by its application to predicting the energy consumption of a BrainChip Akida AKD1000 neuromorphic processor."

The tests were carried out on a single Akida node (4 NPUs). While this serves to demonstrate the versatility of Akida, I guess this would have come at a latency penalty if the various "layers" needed to be run sequentially through the node. Still, as they were looking at energy efficiency, this is a secondary consideration.

I wonder if the recycling through a single node had anything to do with the problem with premature spiking as this increases the latency of processing:

3.4 Summary

"SNN still struggle to scale to deeper architectures: when the number of layers N≥5, higher layers start to fire while only partial information is available from lower layers. Possible causes for this include the large memory consumption at training and error accumulation, both due to the unrolling in time adopted by SG, which can limit the effectiveness of regularization methods such as BNTT."

4 Hardware testing​

In order to evaluate the energy consumption of spiking networks on actual hardware, a series of benchmark models were implemented on the BrainChip Akida AKD1000 [78] device as it has built-in power consumption reporting capabilities. The AKD1000 system-on-chip (SoC) is the first generation of digital neuromorphic accelerator by BrainChip, designed to facilitate the evaluation of models for different types of tasks, such as image classification and online on-chip learning. Three convolutional models compatible with the AKD1000 hardware were trained on the EuroSAT dataset. They consist of convolutional blocks made up of a Conv2D → BatchNorm → MaxPool → ReLU stack of layers. The last convolutional block lacks the MaxPool layer and is followed by a flattening and a dense linear layer at the end. The three models differ in the number of convolutional blocks, and in the number of filters in each Conv2D layer: the architectures are detailed in Fig. 10. The selection of rather small models was determined by the hardware capabilities, as only a single AKD1000 node was available during this work. The Akida architecture can be arranged in multiple parallel nodes to host larger models.
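For anyone who wants to picture the kind of model being described, here is a minimal Keras sketch of that block structure. The filter counts and layer depth are my own guesses, not the paper's Fig. 10 architectures, and the Akida-specific quantisation/conversion step is deliberately left out; EuroSAT RGB patches are 64x64 pixels across 10 land-use classes.

```python
# Rough illustration of the quoted Conv2D -> BatchNorm -> MaxPool -> ReLU block structure.
# Filter counts are invented; no Akida conversion/quantisation is shown here.
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters, pool=True):
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    if pool:                       # the paper's last block omits MaxPool
        x = layers.MaxPool2D()(x)
    return layers.ReLU()(x)

inputs = tf.keras.Input(shape=(64, 64, 3))   # EuroSAT RGB patches are 64x64
x = conv_block(inputs, 16)
x = conv_block(x, 32)
x = conv_block(x, 64, pool=False)            # final convolutional block: no MaxPool
x = layers.Flatten()(x)
outputs = layers.Dense(10)(x)                # 10 EuroSAT land-use classes
model = models.Model(inputs, outputs)
model.summary()
```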

Jonathan Tapson said that Akida 2 is 8 times as efficient as Akida 1000. That gives an even greater power/energy advantage. It can also handle 8-bit activations.

As to the models:

"The EuroSAT RGB dataset, a classification task representative of a class of tasks of potential interest in the field of Earth Observation, was selected as case study. SNN models based on both temporal and rate coding, and their ANN counterparts, were compared in a hardware-agnostic way in terms of accuracy and complexity by means of a novel metric,."


"Benchmark SNN models, both latency and rate based, exhibited a minimal loss in accuracy, compared with their equivalent ANN, with significantly lower (from −50 % to −80 %) EMAC per inference."


As we know, Edge Impulse can rapidly develop models for Akida, and then there is on-chip learning, and we're mates with DeGirum.
https://au.video.search.yahoo.com/s...04a57e78d47e4795939bc4ed54b9d967&action=click


Akida uses rank (temporal) coding, rather than rate coding. I think that rank coding is faster than rate coding.
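A toy way to see the difference, purely my own simplification and not how Akida actually encodes data: with rate coding you have to integrate spikes over many timesteps to read off an intensity, whereas with rank/latency coding the strongest inputs fire first, so the ordering is available almost immediately.

```python
# Toy contrast between rate coding and rank (latency) coding. Purely illustrative.
import numpy as np

intensities = np.array([0.9, 0.2, 0.6, 0.4])    # four input "pixels" in [0, 1]

# Rate coding: intensity -> spike count, so many timesteps must be integrated.
T = 100
rng = np.random.default_rng(0)
rate_spikes = rng.random((T, intensities.size)) < intensities
print("rate code, spike counts over", T, "steps:", rate_spikes.sum(axis=0))

# Rank / latency coding: intensity -> time of first spike; the strongest input
# fires first, so the rank order is known after only a few timesteps.
first_spike_time = np.round((1.0 - intensities) * T).astype(int)
print("rank code, first-spike times:", first_spike_time)
print("rank order (earliest = strongest input):", np.argsort(first_spike_time))
```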
 
  • Like
  • Fire
  • Love
Reactions: 9 users

Diogenese

Top 20
"You can go to the DeGirum website" ... and try Akida with your model or our model"
 
  • Fire
  • Like
Reactions: 2 users
Not sure when it comes to Edge Impulse and new models. The Mar 25 update still hasn't changed.


As for DeGirum, it may be limited to certain models, but if you see my previous post below, it appears BRN has been a bit tardy in getting back to our supposed partners when they send BRN a request on a user's behalf.


 
  • Like
Reactions: 1 user

Frangipani

Top 20

Hi @Fullmoonfever,

while this ESA-funded paper was indeed published on arXiv.org only a few days ago, it appears to be a revised version of a conference paper presented at the 75th International Astronautical Congress (IAC), Milan, Italy, 14-18 October 2024, rather than novel research.




The deadline for IAC 2024 paper submissions was originally 28 February 2024 (https://www.iafastro.org/news/submit-your-abstract-for-iac-2024-by-28-february.html) and was later extended by about a week, which obviously means the research involving AKD1000 referred to in that paper submitted by the six co-authors from Politecnico di Milano and ESA to IAC 2024 must have been conducted even earlier.


Although the papers’ titles don’t match, the connection between those two papers becomes apparent when you compare the section underlined in green of the newly released paper you shared today…


[screenshot: excerpt of the newly released paper with the relevant passage underlined in green]



… with my bolded quote below, which is an excerpt from the October 2024 conference paper’s conclusion:


https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-454705

[screenshot: conclusion of the October 2024 conference paper with the corresponding passage in bold]


Also note that both papers refer to the exact same contract number for the ESA funding, which was awarded in the framework of the Ariadna research program: 4000135881/21/NL/GLC/my

Another hint that this research involving AKD1000 must have been conducted quite a while ago is the fact that co-authors Dominik Dold and Alexander Hadjiivanov are still listed as members of the ESA Advanced Concepts Team (ACT) in Noordwijk, The Netherlands, although both of them already left the ACT back in September 2024 to become, respectively, a Marie Skłodowska-Curie Research Fellow with the Faculty of Mathematics at Universität Wien (University of Vienna) and a Research Software Engineer at the Netherlands eScience Center:


[screenshots: ESA Advanced Concepts Team member listing still showing Dold and Hadjiivanov, and their current positions in Vienna and at the Netherlands eScience Center]



Who knows - maybe the release of the revised paper has to do with Politecnico di Milano being a project partner of the European Defence Fund (EDF) research project ARCHYTAS (ARCHitectures based on unconventional accelerators for dependable/energY efficienT AI Systems), which I happened to come across two months ago?

Check out this newly launched project called ARCHYTAS (ARCHitectures based on unconventional accelerators for dependable/energY efficienT AI Systems), funded by the European Defence Fund (EDF):

One of the ARCHYTAS project partners is Politecnico di Milano, whose neuromorphic researchers Paolo Lunghi and Stefano Silvestrini have experimented with AKD1000 in collaboration with Gabriele Meoni, Dominik Dold, Alexander Hadjiivanov and Dario Izzo from the ESA-ESTEC (European Space Research and Technology Centre) Advanced Concepts Team in Noordwijk, the Netherlands*, as evidenced by the conference paper below, presented at the 75th International Astronautical Congress in October 2024: 🚀
*(Gabriele Meoni and Dominik Dold have since left the ACT)

A preliminary successful demonstration is given for the BrainChip Akida AKD1000 neuromorphic processor. Benchmark SNN models, both latency and rate based, exhibited a minimal loss in accuracy, compared with their ANN counterparts, with significantly lower (from −50 % to −80 %) EMAC per inference, making SNN of extreme interest for applications limited in power and energy typical of the space environment, especially considering that an even greater improvement (with respect to standard ANN running on traditional hardware) in energy consumption can be expected with SNN when implemented on actual neuromorphic devices. A research effort is still needed, especially in the search of new architectures and training methods capable to fully exploit SNN peculiarities.

The work was funded by the European Space Agency (contract number: 4000135881/21/NL/GLC/my) in the framework of the Ariadna research program.



 
  • Like
Reactions: 4 users