BRN Discussion Ongoing

FJ-215

Regular
  • Haha
Reactions: 1 user

Rskiff

Regular
  • Haha
  • Love
Reactions: 3 users

Diogenese

Top 20

AI data centres need round-the-clock energy and could be more power-hungry than we think​


https://www.abc.net.au/news/2025-07...nally-rising-again-amid-surging-use/105238878

We're doomed! Doomed!!

https://www.google.com/search?q=pte...ate=ive&vld=cid:bd5c4671,vid:sxqvwkmTNy8,st:0

...

Matt Rennie, who co-owns and runs energy advisory firm Rennie, says our need for data has the potential to change everything.
...

He says it is a revolution that is being driven by the migration of so many services — from education and games to healthcare and shopping — to the digital realm.

More ominously, he suggests the rise of artificial intelligence is another thing entirely.

In a world where AI becomes "pervasive", he says there is likely to be a step change in power demand that will require round-the-clock supply.

"The thing about AI is that the algorithms that it uses are much more power-intensive," Mr Rennie says.

"So as these begin to pervade the way in which we do business and the way in which we plan and conduct our lives, we can expect that there'll be many more of these data centres specifically allocated to training AI systems and then to operate them after that
.
 
  • Like
  • Fire
  • Love
Reactions: 11 users

FJ-215

Regular
  • Haha
  • Thinking
Reactions: 2 users
Oh no ...

What is wrong with our company that people leave so quickly? They are well-paid, aren't they?
Was he bored because nothing happened, or did he receive a better offer?
Or was he not performing with regard to sales?

I guess we won't find out.
Yes on

 
  • Like
Reactions: 2 users
Or maybe he had

 
  • Haha
  • Love
Reactions: 3 users

jrp173

Regular
At the AGM, that old lawyer FF suggested they hire an IR management company.

But why would you bother engaging an IR firm if they are not going to interact with shareholders who email them questions? I have either received stock-standard non-answers or no reply at all!

One can only assume this is under the instruction of BrainChip. Total waste of money.
 
  • Like
  • Love
Reactions: 5 users

Labsy

Regular
  • Haha
Reactions: 5 users

Esq.111

Fascinatingly Intuitive.

Good evening Diogenese,

Only today one was listening to the radio, of all things. Apparently the individuals whom Australians pay, TAX & HOW IT'S SPENT, to quantify such needs are ... seriously f^#ked up.

Apparently we will need at least 3X the power Australia presently produces, purely to run the data banks, servers etc. that Amazon and co are currently building.


Fu@k me spinning, might be time for the Australian Government to take a SERIOUS STRATEGIC STAKE in our Company, for, amongst many reasons, those which are GLARINGLY OBVIOUS even to those with the most blinded outlook on such matters.


* Appreciate your generosity greatly.

Regards,
Esq.
 
  • Like
  • Fire
  • Love
Reactions: 20 users

manny100

Top 20
Interesting article. Evidently sections of Labor are looking to scrap the CGT discount on property.
"
Labor for Housing argued that scrapping the 50 per cent capital gains tax discount would encourage investors to invest in technology instead of speculating on real estate, and driving up house prices.

'Australia’s capital resources have become landlocked by a CGT discount on property,' it said in a submission to the government's August Economic Reform Roundtable."
Might give BRN a boost??
 
  • Sad
  • Thinking
  • Like
Reactions: 5 users

manny100

Top 20
  • Like
  • Fire
Reactions: 7 users

IloveLamp

Top 20

 
  • Like
  • Fire
  • Love
Reactions: 18 users

Frangipani

Top 20
From the AFRL FY25 Facilities book.

I wonder whose 3UVPX board in the Edge (neuromorphic) space they've been playing with.

Suspect Bascom Hunter Snapcard as we know.





Hi @Fullmoonfever,

I strongly doubt it.

First of all, Bascom Hunter developed the 3U VPX SNAP (Spiking Neuromorphic Advanced Processor) Card with SBIR funding from the Navy, not the Air Force.

The publication you shared, however, is the AFRL (Air Force Research Laboratory) Facilities Book FY25.

NAVAIR, the Naval Air Systems Command, would likely be the first DoD entity to get their hands on the Bascom Hunter 3U VPX SNAP card, given they awarded the SBIR funding for the N202-099 project “Implementing Neural Network Algorithms on Neuromorphic Processors”.




However, as you can see, Bascom Hunter’s SBIR II phase is still ongoing - the SBIR award end date is 18 August 2025, which as of today is still four weeks away. Bascom Hunter as the awardee will then be required to submit a Phase II Final Report.

Which in turn means the 3U VPX SNAP Card is highly likely not a commercially ready product, yet (although one could be forgiven for thinking so when checking out the BH website https://bascomhunter.com/bh-tech/di...c-processors/asic-solutions/3u-vpx-snap-card/), but rather still a prototype, which Bascom Hunter will subsequently aim to commercialise in the ensuing Phase III (which will need to happen without further SBIR funding, though):




I believe that also explains why we had never heard anything “ASX-worthy” about a connection with Bascom Hunter prior to the Appendix 4C and Quarterly Activities Report for the Period Ended 31 December 2024, dated 28 January 2025:




Those two sentences about BH even suggested to me at the time they may have changed their prototype plan and may have decided on using AKD1500 rather than AKD1000 for their commercialisation efforts, possibly having already secured an interested Phase III commercialisation partner/customer.

Of course I could be wrong and the AKD1500 chips are actually slated for a different Bascom Hunter product-in-the-making. But in that case we’d still have to see another major purchase of AKD1000 chips before BH will be able to take their SNAP Card to market, and so far we have neither had an ASX announcement about it nor have we yet seen any evidence of such a deal in the financials. (Even assuming for a moment that a top-secret NDA could have been the reason for a non-announcement, the financials wouldn’t lie. But I don’t buy the NDA “excuse” anyway, as BH has been openly (i.e. on their website) promoting their 3U VPX SNAP Card as having a total of 5 AKD1000 chips for months, and our company also let the cat out of the bag about their connection to BH with the January ASX announcement, hence there is no [more] secrecy required.)

So unless it were a custom made-to-order design and BH were excitedly waiting for their first customer(s) to sign a deal before placing an order with BrainChip, it seems unlikely to me that the 3U VPX SNAP Card is a commercially available product yet.

Happy to be corrected, though…




Rugged VPX chassis systems are commonly used for mission-critical defense and aerospace applications, and VPX cards are available in 3U VPX and 6U VPX form factors (cf. https://militaryembedded.com/avionics/computers/vpx-and-openvpx).

Here are two alternative suggestions as to what the mention on page 66 of the AFRL Facilities Book FY 2025 could possibly refer to:


The Embedded Edge Computing Laboratory is one of 7 labs housed by the AFRL Extreme Computing Facility (ECF), see page 65:

“EXTREME COMPUTING FACILITY
Research and development of unconventional computing and communications architectures and paradigms, trusted systems and edge processing.
A 7,100 Sq. foot multi discipline lab housing 7 Laboratories: Embedded Edge Computing, Nanocomputing, Trusted Systems, Agile Software Development, and 3 Quantum Labs along with
a Video Wall for demonstrations focused on research and development of unconventional computing architectures, networks, and processing that is secure, trusted, and can be done at the tactical edge.
$6.5 Million laboratory possesses world class capabilities in Neuromorphic based hardware characterization and testing, secure processing and Quantum based Communication, Networking, and Computing.

Chief, Mr. Greg Zagar
Deputy Chief, Mr. Michael Hartnett”




Under “Examples”, two AFRL programs are referred to: 6.3 SE3PO and 6.2 NICS+ (Neuromorphic Computing for Space, in collaboration with Intel: https://afresearchlab.com/technology/nics).

On page 72, which covers the “ECF AGILE SOFTWARE DEVELOPMENT LAB”, led by Pete Lamonica (named as Primary Alternate POC [point of contact] of the Embedded Edge Computing Laboratory on page 66), SE3PO is spelled out as the “Secure Extreme Embedded Exploitation and Processing On-board (SE3PO) program”.

The Secure Processor and the adjacent lab referred to in the second sentence underlined in green could possibly be this one:



As you can see, AFRL has been developing a military-grade secure processor with built-in cyber-defensive capabilities, dubbed T-CORE, has tested it on a 3U VPX board, and is currently working on version 2 of T-CORE.

So that’s one of many options as to what the mention of 3U VPX heterogeneous computing (systems that use multiple types of processors such as CPUs, GPUs, ASICs, FPGAs, NPUs) could possibly refer to.



As for neuromorphic research, we know that AFRL has been collaborating with both IBM and Intel for years.

While the AFRL Facilities Book FY25 doesn’t specify whether or not the “3UVPX heterogeneous computing” in their equipment list includes a 3U VPX board with a neuromorphic processor, it could theoretically even refer to IBM’s NorthPole in a 3U VPX form factor (aka NP-VPX):





(…)


The following October 2023 exchange on LinkedIn between IBM’s Dharmendra Modha and AFRL Principal Computer Scientist Mark Barnell (who is also named as the Embedded Edge Computing Lab’s Primary POC on page 66 of the AFRL Facilities Book FY25) on the release of NorthPole is evidence that the collaboration between AFRL and IBM did not end with the TrueNorth era.





 
  • Like
  • Fire
  • Love
Reactions: 21 users

Frangipani

Top 20
OHB Hellas are not only exploring Akida for their ‘Satellite as a Service’ concept (GIASAAS) 👆🏻, but also as a consortium partner for an ESA project called

BOLERO (On-Board Continual Learning for SatCom Systems).

Prime contractor of the BOLERO project is KPLabs - both OHB Hellas and Eutelsat OneWeb are subcontractors.




BOLERO On-Board Continual Learning For SatCom Systems​


Objectives
The project identifies, explores, and implements onboard continual machine learning techniques to enhance reliability and data throughput of communication satellites.

The first objective is to identify 3 most promising use cases and applications of continual learning (CL), together with at least two most promising hardware platforms.

The second objective is to implement different CL techniques in the selected scenarios and assess their performance and feasibility for onboard deployment using the selected hardware platforms. The assessment includes the analysis of advantages and trade-offs of CL in comparison to traditional offline machine learning approaches, and the comparative analysis of hardware platforms for CL.

The final goal of the project is to identify a state-of-the-art, potential gaps, and future roadmap for CL in satellite communication systems.

Challenges
The main challenge is related to the limited resources and limited support for continual machine learning mechanisms in existing onboard processing units, e.g., not all operations and layers are supported and model parameters cannot be updated without hardware-specific recompilation. Therefore, common CL approaches are not straightforward to implement on board. Additionally, CL techniques come with the stability-plasticity trade-offs and the need of continuous validation and monitoring.

Benefits
The project offers a complete software and hardware pipeline to implement 3 different continual machine learning approaches (i.e., class-incremental, domain-incremental, and task-incremental) in 3 different applications for communication satellites. The comparative analysis helps to identify which approach and hardware platform is best suited for different CL scenarios. The project establishes the foundation for future development of CL in SatCom systems.
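For readers wondering what the three continual-learning settings named above actually look like in practice, here is a minimal Python sketch of how class-, domain- and task-incremental splits can be built from a toy dataset. Everything in it (array shapes, label meanings, step sizes) is an illustrative assumption, not part of the BOLERO deliverables.

```python
# Minimal sketch (not from BOLERO) of the three continual-learning scenarios.
import numpy as np

rng = np.random.default_rng(0)

# Toy stream: 4 classes, 3 "domains" (e.g. changing noise regimes), 600 samples.
X = rng.normal(size=(600, 8))
y_class = rng.integers(0, 4, size=600)    # class label, e.g. anomaly type
y_domain = rng.integers(0, 3, size=600)   # domain label, e.g. noise regime

def class_incremental(X, y, classes_per_step=2):
    """Each step introduces new, previously unseen classes."""
    steps = []
    for start in range(0, int(y.max()) + 1, classes_per_step):
        mask = (y >= start) & (y < start + classes_per_step)
        steps.append((X[mask], y[mask]))
    return steps

def domain_incremental(X, y, domains):
    """Same classes throughout, but the input distribution shifts per step."""
    return [(X[domains == d], y[domains == d]) for d in np.unique(domains)]

def task_incremental(X, y, task_of_class):
    """Separate tasks, with the task identity known at train and test time."""
    tasks = {}
    for cls, task in task_of_class.items():
        xs, ys = tasks.setdefault(task, ([], []))
        xs.append(X[y == cls]); ys.append(y[y == cls])
    return {t: (np.vstack(xs), np.concatenate(ys)) for t, (xs, ys) in tasks.items()}

print(len(class_incremental(X, y_class)), "class-incremental steps")
print(len(domain_incremental(X, y_class, y_domain)), "domain-incremental steps")
print(len(task_incremental(X, y_class, {0: 0, 1: 0, 2: 1, 3: 1})), "tasks")
```

The same split logic applies whether the learner runs on a DPU or on a neuromorphic device; only the model update mechanism differs.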

Features
  • Structured and informed report from selecting 3 most promising applications of CL in communication satellites and 2 hardware platforms
  • Code for running a complete onboard CL process for both hardware platforms
  • Report containing technology gaps and future roadmap for CL in SatCom systems

System Architecture
The 3 CL applications identified in the project are implemented for two hardware platforms of very different architectures (KP Labs Leopard DPU and BrainChip Akida neuromorphic computer). For each application and platform, there is a complete CL pipeline architecture proposed from data preprocessing to onboard continual learning.

Current status
The 3 most promising applications of continual machine learning in communication satellites have been identified, i.e., domain-incremental beam hopping optimization, task-incremental inter-satellite links routing, and class-incremental telemetry anomaly classification.

For each application, a state-of-the-art CL approach has been implemented for two diverse hardware platforms identified as the most promising ones for CL (KP Labs Leopard DPU and BrainChip Akida neuromorphic computer). The performance of each CL approach has been assessed and main technology gaps have been identified.

documentation​

Documentation may be requested

Prime Contractor​


KP Labs Sp. z o. o.

Poland
https://kplabs.space

Subcontractors​


OHB HELLAS

Greece
Website

Eutelsat OneWeb (OW)

United Kingdom
https://oneweb.net/

Last update
2025-05-03 12:39






PROJECT
4 min read

BOLERO: On-Board Continual Learning for SatCom Systems​


Published on
January 28, 2025

In an era of exponentially increasing data generation across all domains, satellite communications (SatCom) systems are no exception. The innovative BOLERO project, led by KP Labs, and supported by a consortium including OHB Hellas and Eutelsat OneWeb, is at the forefront of this technological evolution. This project is making significant strides in applying both classic and deep machine learning (ML and DL) techniques within the dynamic realm of satellite data, marking a transformative step in SatCom technology.

Understanding the Need for Continual Learning in SatCom​

Traditionally, satellite applications have relied on supervised ML algorithms trained offline, with all training data prepared before the training process begins. This method is effective in stable data scenarios. For example, a deep learning model can accurately identify brain tumor lesions from magnetic resonance images after being trained on a diverse dataset. However, the dynamic space environment presents unique challenges. Factors such as thermal noise, atmospheric conditions, and on-board noise can significantly alter data characteristics, causing these offline-trained models to struggle or fail when encountering new, unfamiliar data distributions.

The BOLERO Approach​

BOLERO addresses these challenges by adopting an online training paradigm. The training process is shifted directly to the target environment, such as an edge device on a satellite. This innovative approach bypasses the need for downlinking large amounts of data for Earth-based retraining, overcoming bandwidth and time limitations. Training models in their deployment environment accelerates the training-to-deployment cycle and significantly improves model reliability under dynamic conditions.

Tackling New Challenges​

Implementing continual learning brings its own challenges, including catastrophic forgetting, where models may lose previously acquired knowledge. Additionally, the stability-plasticity dilemma must be addressed to ensure models are adaptable and capable of retaining learned information. BOLERO tackles these issues through strategies such as task-incremental learning, allowing models to adapt to new tasks, and domain-incremental learning, enabling them to handle data with evolving distributions.
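As a rough illustration of how catastrophic forgetting is commonly softened during on-board online learning, here is a small Python sketch of rehearsal with a reservoir-sampled replay memory. This is a generic textbook technique on assumed toy data, not KP Labs' or BOLERO's actual method.

```python
# Sketch of rehearsal/replay to reduce catastrophic forgetting in online learning.
import numpy as np

rng = np.random.default_rng(1)

class ReservoirReplay:
    """Fixed-size memory of past (x, y) pairs, filled by reservoir sampling."""
    def __init__(self, capacity=64):
        self.capacity, self.seen = capacity, 0
        self.xs, self.ys = [], []

    def add(self, x, y):
        self.seen += 1
        if len(self.xs) < self.capacity:
            self.xs.append(x); self.ys.append(y)
        else:
            j = int(rng.integers(0, self.seen))
            if j < self.capacity:                # keep each past sample with equal probability
                self.xs[j], self.ys[j] = x, y

    def sample(self, k):
        if not self.xs:
            return []
        idx = rng.integers(0, len(self.xs), size=min(k, len(self.xs)))
        return [(self.xs[i], self.ys[i]) for i in idx]

def sgd_step(w, x, y, lr=0.1):
    """One logistic-regression update on a single labelled sample."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return w - lr * (p - y) * x

w = np.zeros(8)
memory = ReservoirReplay(capacity=64)
for t in range(500):
    x = rng.normal(size=8)
    y = float(x[0] > 0)                          # toy target learnable online
    w = sgd_step(w, x, y)                        # learn from the new on-board sample
    for x_old, y_old in memory.sample(4):        # rehearse a few stored past samples
        w = sgd_step(w, x_old, y_old)            # ...to keep earlier knowledge alive
    memory.add(x, y)

print("final weight on the informative feature:", round(float(w[0]), 3))
```

The size of the replay memory is exactly where the stability-plasticity trade-off mentioned earlier shows up: a larger memory protects old knowledge but slows adaptation to new conditions.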


The Consortium’s Collaborative Dynamics in BOLERO


The BOLERO project is propelled by the synergistic efforts of its consortium members. As the project leader, KP Labs is primarily responsible for developing the Synthetic Data Generators (SDGs) and the continual learning models, ensuring their efficacy across multiple SatCom applications and hardware architectures. OHB Hellas contributes by exploring novel machine learning methodologies suitable for streaming data, assessing continual learning applications in and beyond the space sector, and implementing two use cases in different hardware modalities. Eutelsat OneWeb focuses on identifying strategic space-based applications for continual learning, evaluating their business impact, and analyzing the benefits of continual learning models, particularly in terms of performance and cost-efficiency. Together, these entities combine their unique strengths to advance the BOLERO project, addressing the evolving demands of SatCom systems.

Real-World Applications and Future Impact​

The applications of BOLERO are diverse, ranging from monitoring the operational capabilities of space devices to gas-level sensing and object detection in satellite imagery. These applications highlight the potential of continual learning to enhance the efficiency and accuracy of SatCom systems, potentially revolutionizing the management and processing of satellite data for more responsive, agile, and efficient operations.

The BOLERO project, led by KP Labs and supported by a consortium including OHB Hellas and Eutelsat OneWeb, represents a groundbreaking step in harnessing the full potential of continual learning for SatCom systems. By confronting the unique challenges associated with satellite data and leveraging the latest in ML technology, BOLERO is poised to significantly improve the adaptability and efficiency of SatCom systems, setting a new standard in the field of satellite communications.


The other promising hardware platform being tested is KPLabs’ own Leopard DPU: https://www.kplabs.space/solutions/hardware/leopard



ESA’s BOLERO project webpage (BOLERO stands for On-Board Continual Learning For SatCom Systems) was updated earlier today:



Current status

“In BOLERO, we identified satcom functionalities that could benefit from on-board continual learning, selected them via a quantifiable process, and developed ML models accordingly.

We built data simulators to test various continual learning techniques, emphasizing reproducibility and algorithm generalization in realistic settings. The project delivered an end-to-end pipeline for continual learning and online adaptation for satcom, validated in simulated scenarios. Finally, we benchmarked these methods across different hardware, including KP Labs’ Leopard and BrainChip’s Akida, providing comprehensive results for all algorithms, hardware, and applications explored within BOLERO.”



However, it appears none of the benchmark results have been published so far.




BOLERO On-Board COntinual LEarning FoR SatcOm Systems​


Objectives
The objectives of BOLERO were multi-fold:
  • To identify potential satellite telecommunication (satcom) functionality and applications that could benefit from the use of continual learning methodologies in a fully transparent, quantifiable and reproducible way which will be reusable in future satcom and other missions for an informed selection of on-board machine learning (ML) models that could benefit from continual learning techniques.
  • To explore, develop, and simulate different continual learning implementation techniques for the identified satcom applications. This project objective also aimed to explore the connection between offline and online learning and the current state-of-the-art methodologies that would allow models that have been pre-trained offline to be updated so they can be enhanced by online/continual learning.
  • To identify and justify a suitable system architecture for on-board continual learning applications through performing the benchmarking process of all developed ML algorithms with continual learning techniques for all satcom applications and simulation scenarios, as well as through performing the theoretical trade-off analysis of the hardware and system architectures considered in BOLERO.
Challenges
The most important challenges of BOLERO related to:
  • Availability of hardware architectures that could be used for benchmarking continual learning AI algorithms.
  • Availability of datasets that could be used to verify and validate continual learning scenarios (therefore, we developed synthetic data simulators for all selected satellite communications applications).
  • Building reproducible and unbiased pipelines for the quantitative validation of AI algorithms.
  • Developing an objective procedure for selecting satcom use-cases that would benefit from continual learning paradigms.
Benefits
The technology solutions developed in BOLERO bring important benefits that are transferable to current and future (not only) satcom missions – these potential benefits include:
  • The possibility of synthesising the datasets in selected satcom applications (anomaly detection in telemetry data, congestion prediction with flexible payload, and inter-satellite link optimisation);
  • Ready-to-use and thoroughly validated continual learning algorithms for satcom applications that can adapt their behavior to the changing data characteristics;
  • The possibility of performing fully objective, quantifiable and reproducible analysis of (not only) satcom applications for on-board deployment in continual learning settings, and of hardware architectures that can be considered for on-board deployment in continual learning settings;
  • The possibility of understanding the most pressing research and development gaps through analysing the developed research and development roadmaps that shall be followed to accelerate the adoption of continual learning in real-world satcom missions.
Features
The BOLERO technology is composed of several pivotal components, including:
  • An assessment matrix that can be used to objectively select the most appropriate satcom applications for on-board continual learning implementation (i.e., the use-cases and on-board applications that could benefit most from continual learning);
  • An assessment matrix that can be used to quantify the applicability of the analysed hardware architectures in on-board implementation, with a special emphasis put on on-board continual learning algorithms;
  • Synthetic data generators, developed for each considered satcom application (anomaly detection in satellite telemetry, beam-hopping, and inter-satellite routing) that can be used to synthetically generate data for simulated continual learning scenarios in a fully reproducible and traceable way;
  • Continual learning AI algorithms for the selected satcom applications, dealing with the catastrophic forgetting phenomenon and addressing different continual learning strategies (including class-incremental learning, task-incremental learning, and domain-incremental learning).
  • The roadmaps, presenting the most important activities that need to be followed to accelerate the adoption of continual learning technologies in satcom systems. These roadmaps have been split into those that relate to the algorithms, technologies, hardware as well as programmes, the latter indicating the programmatic gaps that were identified in this activity.

System Architecture
The technology developed in BOLERO is fully modular, and directly relates to the key product features, including:
  • Synthetic data generators;
  • Continual learning artificial intelligence algorithms developed for selected satcom applications;
  • Assessment matrices for selecting a) appropriate satcom applications for on-board continual learning deployment and the b) best hardware architectures for such on-board implementation;
  • Research and development roadmaps. All these components are stand-alone and self-contained entities that can be effectively used separately.
Plan
Project was planned and divided into specific Work Packages focusing on the following:
  • WP100 SOTA: Review and analysis;
  • WP200 Satcom applications: Identification and analysis;
  • WP300 Algorithms: Continual learning for satcom;
  • WP400 Hardware: Performance assessment;
  • WP500 Programmatic and development gaps;
  • WP600 Software;
  • WP700 Management, outreach and dissemination.

Current status

In BOLERO, we identified satcom functionalities that could benefit from on-board continual learning, selected them via a quantifiable process, and developed ML models accordingly.

We built data simulators to test various continual learning techniques, emphasizing reproducibility and algorithm generalization in realistic settings. The project delivered an end-to-end pipeline for continual learning and online adaptation for satcom, validated in simulated scenarios. Finally, we benchmarked these methods across different hardware, including KP Labs’ Leopard and BrainChip’s Akida, providing comprehensive results for all algorithms, hardware, and applications explored within BOLERO.
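To give a rough idea of what such data simulators can look like, here is a tiny Python sketch of a synthetic telemetry stream with a slowly drifting baseline and injected anomalies. All parameter values and names are illustrative assumptions; the actual BOLERO synthetic data generators have not been published.

```python
# Illustrative synthetic telemetry generator with gradual distribution drift.
import numpy as np

def telemetry_stream(n_steps=1000, n_channels=4, drift=0.002,
                     anomaly_rate=0.05, seed=0):
    """Yield (reading, is_anomaly) pairs whose baseline slowly drifts over time."""
    rng = np.random.default_rng(seed)
    baseline = np.zeros(n_channels)
    for _ in range(n_steps):
        baseline += drift * rng.normal(size=n_channels)   # slow domain shift
        reading = baseline + 0.1 * rng.normal(size=n_channels)
        is_anomaly = rng.random() < anomaly_rate
        if is_anomaly:
            reading[rng.integers(n_channels)] += 3.0       # injected spike on one channel
        yield reading, is_anomaly

anomalies = sum(flag for _, flag in telemetry_stream())
print(anomalies, "injected anomalies in 1000 steps")
```

A stream like this is enough to make a fixed offline-trained detector degrade over time, which is what motivates the continual-learning benchmarking described above.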


Related links


KP Labs' blog post on the activity

documentation​

Documentation may be requested

PRIME CONTRACTOR​


KP Labs Sp. z o. o.

Poland
https://kplabs.space


SUBCONTRACTORS​


OHB HELLAS

Greece
Website



Eutelsat OneWeb (OW)

United Kingdom
https://oneweb.net/

Last update
2025-07-22 11:37
 
  • Like
  • Love
  • Fire
Reactions: 21 users

TopCat

Regular
Interesting 🤔


As a result of the ongoing restructuring, Intel seems to be focusing on three major areas:

  1. Edge AI: Non-cloud/data center AI chip deployments in devices (phones/PCs), automobiles, factory floor, and other “local” systems. This category may also include “non-connected” remote systems.
  2. Agentic AI: AI chips that can act autonomously, making decisions and taking actions to achieve specific goals with minimal or no human intervention
  3. Foundry services: Supporting U.S.-based sovereign manufacturing of chips for both itself and other vendors.
 
  • Like
  • Fire
  • Wow
Reactions: 20 users

Frangipani

Top 20
Here is a recent interview with Florian Corgnou, CEO of BrainChip partner Neurobus, which was conducted in the run-up to the 12 June INPI* Pitch Contest at Viva Technology 2025, during which five start-ups competed against each other. Neurobus ended up winning the pitch contest by “showcasing our vision for low-power, bio-inspired edge AI for autonomous systems” (see above post by @itsol4605).
*INPI France is the Institut National de la Propriété Industrielle, France’s National Intellectual Property Office.

In this interview, Florian Corgnou mentions NeurOS, an embedded operating system that Neurobus is developing internally. Interesting…

“CF: Traditional AI, based on deep learning [2], is computationally intensive, data-intensive and energy-intensive. However, the equipment we equip – satellites, micro-drones, fully autonomous robots – operates in environments where these resources are rare, or even absent.

So we adopted a frugal AI, designed from the ground up to work with little: little data (thanks to event cameras), little energy (thanks to neuromorphic chips), and little memory.

This forces us to rethink the entire design chain: from hardware to algorithms, including the embedded operating system that we develop internally, NeurOS.”




I found another reference to NeurOS here: https://dealroom.launchvic.org/companies/neurobus/

MORE ABOUT NEUROBUS

Neurobus is pioneering a new era of ultra-efficient, autonomous intelligence for drones and satellites. Leveraging neuromorphic computing, an AI inspired by the brain’s structure and energy efficiency, our edge AI systems empower aerial and orbital platforms to perceive, decide, and act in real-time, with minimal power consumption and maximum autonomy.

Traditional AI architectures struggle in constrained environments, such as low-Earth orbit or on-board UAVs, where power, weight, and bandwidth are critical limitations. Neurobus addresses this with a disruptive approach: combining event-based sensors with neuromorphic processors that mimic biological neural networks. This unique integration enables fast, asynchronous data processing, up to 100 times more power-efficient than conventional methods, while preserving situational awareness in extreme or dynamic conditions.
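For anyone unfamiliar with how event-based sensing differs from frame-based vision, here is a much-simplified Python sketch that turns a stack of frames into sparse brightness-change events. The threshold and frame sizes are illustrative assumptions; real event cameras (such as Prophesee's sensors) do this at the pixel-circuit level, not in software like this.

```python
# Simplified illustration of the event-based idea: emit an event only where
# log-brightness changes by more than a threshold, instead of reading full frames.
import numpy as np

def frames_to_events(frames, threshold=0.2):
    """Return (t, y, x, polarity) tuples for pixels whose log intensity changed enough."""
    events = []
    ref = np.log1p(frames[0].astype(np.float64))           # per-pixel reference level
    for t, frame in enumerate(frames[1:], start=1):
        log_frame = np.log1p(frame.astype(np.float64))
        diff = log_frame - ref
        ys, xs = np.where(np.abs(diff) > threshold)
        for y, x in zip(ys, xs):
            events.append((t, int(y), int(x), 1 if diff[y, x] > 0 else -1))
            ref[y, x] = log_frame[y, x]                    # update reference only where an event fired
    return events

rng = np.random.default_rng(2)
frames = rng.random((10, 32, 32))                          # 10 synthetic 32x32 "frames"
events = frames_to_events(frames)
print(len(events), "events vs", frames[1:].size, "pixel reads for dense frames")
```

The sparse event list is what a neuromorphic processor then consumes asynchronously, which is where the claimed power savings come from.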

Our embedded AI systems are designed to meet the needs of next-generation autonomous platforms in aerospace, defense, and space. From precision drone navigation in GPS-denied environments to on-orbit space surveillance and threat detection, Neurobus technology supports missions where latency, energy, and reliability matter most.

We offer a modular technology stack that includes hardware integration, a proprietary neuromorphic operating system (NeurOS), and real-time perception algorithms. This enables end-users and integrators to accelerate the deployment of innovative, autonomous capabilities at the edge without compromising performance or efficiency.


Backed by deeptech expertise, partnerships with leading sensor manufacturers, and strategic collaborations in the aerospace sector, Neurobus is building the foundation for intelligent autonomy across air and space.

Our mission is to unlock the full potential of edge autonomy with brain-inspired AI, starting with drones and satellites, and scaling to all autonomous systems.







French original:


NEUROBUS : une IA embarquée qui consomme très peu d’énergie​

Grâce à une intelligence artificielle sobre et efficiente, Neurobus réinvente, entre Toulouse et Paris, la façon dont les machines perçoivent et interagissent avec leur environnement, ouvrant ainsi la voie vers la conquête des milieux hostiles. Florian Corgnou, son dirigeant et fondateur, nous en dit un peu plus sur cette start-up de la Deeptech qu’il présentera à Viva Technology, lors du Pitch Contest INPI organisé en partenariat avec HEC Paris.​




Pouvez-vous vous présenter en quelques mots ?

Florian Corgnou :
Je m’appelle Florian Corgnou, fondateur et CEO de Neurobus, une start-up deeptech que j’ai créée en 2023, entre Paris et Toulouse. Diplômé d’HEC, j’ai fondé une première entreprise dans le secteur du logiciel financier avant de rejoindre le siège européen de Tesla aux Pays-Bas. J’y ai travaillé sur des problématiques d’innovation et de stratégie produit.

Avec Neurobus, je me consacre à une mission : concevoir des systèmes embarqués d’intelligence artificielle neuromorphique, une technologie bio-inspirée qui réinvente la façon dont les machines perçoivent et interagissent avec leur environnement. Cette approche radicalement sobre et efficiente de l’IA ouvre des perspectives inédites pour les applications critiques dans la défense, le spatial, et la robotique autonome.

Notre conviction, c’est que l’autonomie embarquée ne peut émerger qu’en conciliant performance, sobriété énergétique et intelligence contextuelle, même dans les environnements les plus contraints, comme l’espace, les drones légers ou les missions en zones isolées.

Qu’est-ce qui rend votre entreprise innovante ?

F.C. :
Neurobus se distingue par l’intégration de technologies neuromorphiques, c’est-à-dire une IA capable de fonctionner en temps réel avec une consommation énergétique ultra-faible, à l’image du cerveau humain.
Nous combinons des caméras événementielles[1] avec des processeurs neuromorphiques pour traiter directement à la source des signaux complexes, sans avoir besoin d’envoyer toutes les données dans le cloud.

Ce changement de paradigme permet une autonomie décisionnelle embarquée inédite, essentielle dans les applications critiques comme la détection de missiles ou la surveillance orbitale.

Florian Corgnou, fondateur et CEO de Neurobus

Florian Corgnou, fondateur et CEO de Neurobus©

Vous avez choisi une IA sobre, adaptée aux contraintes de son environnement. Pourquoi ce choix et en quoi cela change-t-il la façon de concevoir vos solutions ?

F.C. :
L’IA traditionnelle, basée sur le deep learning [2], est gourmande en calcul, en données et en énergie. Or, les matériels que nous équipons — satellites, micro-drones, robots en autonomie complète — évoluent dans des environnements où ces ressources sont rares, voire absentes.

Nous avons donc adopté une IA frugale, conçue dès le départ pour fonctionner avec peu : peu de données (grâce aux caméras événementielles), peu d’énergie (grâce aux puces neuromorphiques), et peu de mémoire.

Cela nous force à repenser toute la chaîne de conception : du matériel jusqu’aux algorithmes, en passant par le système d’exploitation embarqué que nous développons en interne, le NeurOS.



Quel est le plus gros défi auquel vous avez dû faire face au cours du montage de votre projet ?

F.C. :
L’un des plus grands défis a été de convaincre nos premiers partenaires et financeurs que notre technologie, bien qu’encore émergente, pouvait surpasser les approches conventionnelles.

Cela impliquait de créer de la confiance sans produit final, de prouver la valeur de notre approche avec des démonstrateurs très en amont, et de naviguer dans des écosystèmes exigeants comme le spatial ou la défense, où la crédibilité technologique et la propriété intellectuelle sont clés.


Votre prise en compte de la propriété industrielle a-t-elle été naturelle ? Quel rôle a joué l’INPI ?

F.C. :
Dès le début, nous avons compris que la propriété industrielle serait un levier stratégique essentiel pour valoriser notre R&D et protéger notre avantage technologique.

Cela a été naturel, car notre innovation se situe à l’intersection du hardware, du software et des algorithmes.

L’INPI nous a accompagnés dans cette démarche, en nous aidant à structurer notre propriété industrielle — brevets, marques, enveloppes Soleau… — et à mieux comprendre les enjeux liés à la valorisation de l’innovation dans un contexte européen.

[1] Afin d’éviter des opérations inutilement coûteuses en temps comme en énergie, ce type de caméra n’enregistre une donnée qu’en cas de changement de luminosité.
[2] Le Deep learning est un type d'apprentissage automatique, utilisé dans le cadre de l’élaboration d’intelligence artificielle, basé sur des réseaux neuronaux artificiels, c’est-à-dire des algorithmes reproduisant le fonctionnement du cerveau humain pour apprendre à partir de grandes quantités de données.

Titre​

Données clés :​

Contenu
  • Date de création : avril 2023
  • Secteur d’activité : Deeptech - IA neuromorphique embarquée (spatial, défense, robotique)
  • Effectif : 6
  • Chiffre d’affaires : 600 k€ (2024)
  • Part du CA consacrée à la R&D : 70 % (estimé)
  • Part du CA à l’export : 20 %
  • Site web : https://neurobus.space/

Titre​

Propriété industrielle :​

Contenu
Enveloppe(s) Soleau : 1




English translation provided on the INPI website:


NEUROBUS: an on-board AI that consumes very little energy​

Using a simple and efficient artificial intelligence (AI), Neurobus is reinventing the way machines perceive and interact with their environment between Toulouse and Paris, paving the way for conquering hostile environments. Florian Corgnou, its director and founder, tells us a little more about this Deeptech startup, which he will present at Viva Technology during the INPI Pitch Contest organized in partnership with HEC Paris.​



Can you introduce yourself in a few words?

Florian Corgnou:
My name is Florian Corgnou, founder and CEO of Neurobus, a start-up deeptech which I created in 2023, between Paris and Toulouse. A graduate of HEC, I founded my first company in the financial software sector before joining Tesla's European headquarters in the Netherlands. There, I worked on innovation and product strategy issues.

With Neurobus, I'm dedicated to a mission: to design embedded neuromorphic artificial intelligence systems, a bio-inspired technology that reinvents the way machines perceive and interact with their environment. This radically sober and efficient approach to AI opens up unprecedented perspectives for critical applications in defense, space, and autonomous robotics.

Our belief is that on-board autonomy can only emerge by reconciling performance, energy efficiency and contextual intelligence, even in the most constrained environments, such as space, light drones or missions in isolated areas.

What makes your company innovative?

CF:
Neurobus stands out for its integration of neuromorphic technologies, i.e., an AI capable of operating in real time with ultra-low energy consumption, like the human brain.

We combine event cameras[1] with neuromorphic processors to process complex signals directly at the source, without needing to send all the data into the cloud.

This paradigm shift enables unprecedented on-board decision-making autonomy, essential in critical applications such as missile detection or orbital surveillance.

Florian Corgnou, founder and CEO of Neurobus

Florian Corgnou, founder and CEO of Neurobus©

You've chosen a simple AI, adapted to the constraints of its environment. Why this choice, and how does it change the way you design your solutions?

CF:
Traditional AI, based on deep learning [2], is computationally intensive, data-intensive and energy-intensive. However, the equipment we equip – satellites, micro-drones, fully autonomous robots – operates in environments where these resources are rare, or even absent.

So we adopted a frugal AI, designed from the ground up to work with little: little data (thanks to event cameras), little energy (thanks to neuromorphic chips), and little memory.

This forces us to rethink the entire design chain: from hardware to algorithms, including the embedded operating system that we develop internally, NeurOS.


What was the biggest challenge you faced while setting up your project?

CF:
One of the biggest challenges was convincing our early partners and funders that our technology, while still emerging, could outperform conventional approaches.

This involved building trust without a final product, proving the value of our approach with early demonstrators, and navigating demanding ecosystems like space or defense, where technological credibility and intellectual property are key.


Was your consideration of industrial property a natural one? What role did the INPI play?

CF:
From the outset, we understood that industrial property would be an essential strategic lever to enhance our R&D and protect our technological advantage.

This was natural, because our innovation lies at the intersection of the hardware, with and algorithms. [There seems to be a translation error here, as the French original mentions hardware, software and algorithms: “l’intersection du hardware, du software et des algorithmes.”]

The INPI supported us in this process, helping us to structure our industrial property — patents, trademarks, Soleau envelopes, etc. — and to better understand the issues related to the promotion of innovation in a European context.

[1] To avoid unnecessarily costly operations in terms of time and energy, this type of camera only records data when there is a change in brightness.
[2] Deep learning is a type of machine learning, used in the development of artificial intelligence, based on artificial neural networks, that is, algorithms reproducing the functioning of the human brain to learn from large amounts of data.

Title​

Key data:​

Contents
  • Date created: April 2023
  • Sector of activity: Deeptech - Embedded neuromorphic AI (space, defense, robotics)
  • Number: 6
  • Turnover: €600k (2024)
  • Share of turnover devoted to R&D: 70% (estimated)
  • Share of turnover from exports: 20%
  • Website: https://neurobus.space/

Title​

Industrial property:​

Contents
Soleau envelope(s): 1



*Soleau envelope:


The Soleau envelope (French: Enveloppe Soleau), named after its French inventor, Eugène Soleau, is a sealed envelope serving as proof of priority for inventions valid in France, exclusively to precisely ascertain the date of an invention, idea or creation of a work. It can be applied for at the French National Institute of Industrial Property (INPI). The working principles were defined in the ruling of May 9, 1986, published in the official gazette of June 6, 1986 (Journal officiel de la République française or JORF), although the institution of the Soleau envelope dates back to 1915.[1]

The envelope has two compartments which must each contain the identical version of the element for which registration is sought.[2] The INPI laser-marks some parts of the envelope for the sake of delivery date authentication and sends one of the compartments back to the original depositary who submitted the envelope.[2]

The originator must keep their part of the envelope sealed except in case of litigation.[3] The deposit can be made at the INPI, by airmail, or at the INPI's regional subsidiaries.[2] The envelope is kept for a period of five years, and the term can be renewed once.[3]

The envelope may not contain any hard element such as cardboard, rubber, computer disks, leather, staples, or pins. Each compartment can only contain up to seven A4-size paper sheets, with a maximum of 5 millimetres (0.2 in) thickness. If the envelope is deemed inadmissible, it is sent back to the depositary at their own expense.[2]

Unlike a patent or utility model, the depositor has no exclusivity right over the claimed element. The Soleau envelope, as compared to a later patent, only allows use of the technique, rather than ownership, and multiple people might submit envelopes to support separate similar use, before a patent is later granted to restrict application.

Check this out: The Neurobus website has been redesigned - and the familiar URL https://neurobus.space will now automatically forward to www.neurobus.ai
Their new motto is “Neuromorphic intelligence for autonomous systems inspired by the human biology, built for the edge.”


While BrainChip, Intel and Prophesee - all three of them partners according to the previous Neurobus website - no longer show up on the redesigned website, the team at Neurobus obviously still need suppliers of neuromorphic processors and event-based sensors for the solutions they offer.

Which brings me to the next topic, namely the brand new “Products” section, and apparently all four solutions currently offered by Neurobus can already be ordered (although there is no estimated delivery date given):

- Ground Station for Drone Detection
- Autonomous Drone Detection
- Space-based Surveillance
- Autonomous Defense Intelligence


VERY intriguing, isn’t it?!

Which begs the question, though, whether these are made-to-order solutions.
And whether “pre-order now” would have been a more realistic button than “order now”.

If not - and of course provided we are indeed involved in any of the offered solutions - wouldn’t it be high time for a joint partnership announcement or some other official announcement of a commercial arrangement? Or will watching the financials be the only way we will find out that a Neurobus customer had previously signed on the dotted line for a product that involves BrainChip technology?

Unfortunately you need to email Neurobus to “Ask for Specs”.



In my tagged 23 June post, I had referred to an interview in which CEO Florian Corgnou mentioned NeurOS, an embedded operating system that Neurobus is developing internally. Unfortunately, there is currently no further information available when you click on the above “Custom Software Stack” tile - nor for any of the other “technology foundations” tiles.



A current job opening for an internship with Neurobus as their CEO’s “right hand” reveals two more interesting facts:

1. Neurobus are currently not only working with Airbus and CNES (which we already knew), but also with the French Ministry of Armed Forces

and

2. Neurobus are planning an upcoming €5 million seed fundraising round.


No more mention of any current collaboration with Mercedes-Benz, though.
(see my 12 November 2024 post: https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-441454)


 
Last edited:
  • Like
  • Wow
  • Love
Reactions: 16 users

Frangipani

Top 20

Two more screenshots that didn’t fit in my last post:

The new Neurobus website lists four more open positions that, however, do not (yet) show up on the Station F website https://jobs.stationf.co/companies/neurobus
(The “CEO’s right hand” job ad, on the other hand, is missing here.)

 
Last edited:
  • Like
  • Fire
  • Wow
Reactions: 10 users

manny100

Top 20
Frangipani, very well researched post.
The 3U VPX cards are hardware and software.
Not sure why those using AFRL facilities would use non-commercial IBM or Intel hardware products when AKIDA is available as a commercial product, unless it's for comparison against AKIDA.
Raytheon and Lockheed Martin (LM) likely have their own 3U VPX cards, and both are aware of AKIDA. We also know of the LM connection with game-changing cybersecurity, which would be a perfect fit for defence.
It's interesting and not a lot is made public.
You have put the pieces of the puzzle together very well.
 
  • Like
  • Fire
  • Thinking
Reactions: 8 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
:cry:





EXTRACT







ChatGPT 4 says:

Hailo and BrainChip are competing in several overlapping markets, particularly in the edge AI space. Here's a breakdown of how and where they compete—and where they differ:




🔁 Overlapping Markets


Both BrainChip and Hailo target ultra-low-power, high-efficiency AI inference at the edge. Their core focus areas include:


  1. Industrial Automation / Robotics
    • BrainChip’s Akida: Optimised for spiking neural networks and event-based processing, ideal for ultra-low-power robotics and sensor fusion.
    • Hailo-8: Geared for high-performance real-time processing in industrial machines, robotics, and automation tasks.
  2. Smart Cameras / Surveillance
    • Both provide hardware for object detection, facial recognition, and anomaly detection at the edge.
    • BrainChip supports neuromorphic processing for efficiency and low bandwidth needs.
    • Hailo focuses on high-frame-rate, dense neural networks (e.g., YOLOv5) on traditional CNNs.
  3. Automotive (ADAS & In-Cabin Monitoring)
    • Hailo has certified partnerships with major Tier 1 automotive suppliers (e.g., Renesas, Foresight).
    • BrainChip is developing neuromorphic vision and audio sensing with ultra-low latency, especially suited for sensor fusion and V2X potential.
  4. Medical and Wearables
    • Both are eyeing AI health diagnostics, especially for on-device inference (e.g., EEG, ECG).
    • BrainChip's strength lies in spiking neural networks mimicking brain activity—a possible edge in neurology applications.



🧠 Key Differentiators


Feature | BrainChip | Hailo
Core Architecture | Spiking Neural Network (SNN) (Akida) | Deep Learning Accelerator (Hailo-8)
Processing Style | Event-based, asynchronous | Frame-based, synchronous
Power Consumption | Extremely low (<1mW for some ops) | Low (but higher than Akida)
Data Efficiency | Excellent for sparse, time-series data | Designed for dense CNN workloads
Training Compatibility | Requires SNN conversion/training tools | Standard TensorFlow / PyTorch models
Ecosystem Integration | Early-stage but expanding (Renesas, NVISO) | Strong integrations with NVIDIA, Renesas, ABB




🚀 Summary​


  • Competing Areas: Edge AI for vision, audio, industrial, and automotive.
  • BrainChip Advantage: Ultra-low-power, neuromorphic design, asynchronous event-based inference.
  • Hailo Advantage: Higher throughput for traditional CNNs and more mature ecosystem for frame-based processing.

They are not direct substitutes, but are converging in markets such as automotive and smart surveillance. The choice often comes down to use case characteristics (e.g., sparse vs dense data, power constraints, real-time latency).
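To make the "event-based, asynchronous" row above a little more concrete, here is a toy leaky integrate-and-fire (LIF) neuron in Python. It is the generic textbook model only, not BrainChip's Akida implementation or Hailo's dataflow architecture; all parameter values are illustrative assumptions.

```python
# Toy leaky integrate-and-fire neuron: compute happens only when spikes arrive.
import numpy as np

def lif_neuron(input_spikes, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0, weight=0.3):
    """Integrate weighted input events with leak; emit an output spike on threshold."""
    v, out = 0.0, []
    for s in input_spikes:
        v += dt * (-v / tau) + weight * s   # leak plus weighted input event
        if v >= v_thresh:
            out.append(1)                   # emit an output spike
            v = v_reset                     # reset membrane potential
        else:
            out.append(0)
    return out

rng = np.random.default_rng(3)
spikes_in = (rng.random(100) < 0.2).astype(int)   # sparse input spike train
spikes_out = lif_neuron(spikes_in)
print(int(spikes_in.sum()), "input spikes ->", sum(spikes_out), "output spikes")
```

Because work is only done when a spike arrives, activity (and hence power) scales with data sparsity, whereas a frame-based accelerator processes every frame at a fixed rate regardless of how much changed.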
 
Last edited:
  • Thinking
  • Like
  • Sad
Reactions: 4 users