Broadway Tony I call him
Classic!!!
Tony of the Opera
BariTony
Hopefully not Phantom.
Oh no ...
What is wrong with our company that people leave so quickly? They are well-paid, aren't they?
Was he bored because nothing happened, or did he receive a better offer?
Or was he not performing with regard to sales?
I guess we won't find out.
Or maybe he had... At the AGM, that old lawyer FF suggested they hire an IR management company.
"Tony Soprano" 🫣Tony of the Opera
Good evening Diogenese,

AI data centres need round-the-clock energy and could be more power-hungry than we think
https://www.abc.net.au/news/2025-07...nally-rising-again-amid-surging-use/105238878
We're doomed! Doomed!!
https://www.google.com/search?q=pte+frazer+"we're+doomed"&rlz=1C1RXQR_en-GBAU1166AU1166&oq=pte+frazer+"we're+doomed"&gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIHCAEQIRigATIHCAIQIRigAdIBCTE3MzQ3ajBqN6gCALACAA&sourceid=chrome&ie=UTF-8#fpstate=ive&vld=cid:bd5c4671,vid:sxqvwkmTNy8,st:0
...
Matt Rennie, who co-owns and runs energy advisory firm Rennie, says our need for data has the potential to change everything.
...
He says it is a revolution that is being driven by the migration of so many services — from education and games to healthcare and shopping — to the digital realm.
More ominously, he suggests the rise of artificial intelligence is another thing entirely.
In a world where AI becomes "pervasive", he says there is likely to be a step change in power demand that will require round-the-clock supply.
"The thing about AI is that the algorithms that it uses are much more power-intensive," Mr Rennie says.
"So as these begin to pervade the way in which we do business and the way in which we plan and conduct our lives, we can expect that there'll be many more of these data centres specifically allocated to training AI systems and then to operate them after that.
From the AFRL FY25 Facilities book.
I wonder whose 3U VPX board in the Edge (neuromorphic) space they've been playing with.
Suspect the Bascom Hunter SNAP Card, as we know.
View attachment 88611 View attachment 88612
OHB Hellas are not only exploring Akida for their ‘Satellite as a Service’ concept (GIASAAS), but are also doing so as a consortium partner in an ESA project called
BOLERO (On-Board Continual Learning for SatCom Systems).
The prime contractor of the BOLERO project is KP Labs - both OHB Hellas and Eutelsat OneWeb are subcontractors.
View attachment 83668
BOLERO: On-Board Continual Learning For SatCom Systems (connectivity.esa.int)
Objectives: Next generation of systems
Status: Ongoing
Status date: 2025-05-01
Activity Code: 1B.138
The project identifies, explores, and implements onboard continual machine learning techniques to enhance reliability and data throughput of communication satellites.
The first objective is to identify the 3 most promising use cases and applications of continual learning (CL), together with at least two of the most promising hardware platforms.
The second objective is to implement different CL techniques in the selected scenarios and assess their performance and feasibility for onboard deployment using the selected hardware platforms. The assessment includes the analysis of advantages and trade-offs of CL in comparison to traditional offline machine learning approaches, and the comparative analysis of hardware platforms for CL.
The final goal of the project is to identify the state of the art, potential gaps, and a future roadmap for CL in satellite communication systems.
Challenges
The main challenge is related to the limited resources and limited support for continual machine learning mechanisms in existing onboard processing units, e.g., not all operations and layers are supported and model parameters cannot be updated without hardware-specific recompilation. Therefore, common CL approaches are not straightforward to implement on board. Additionally, CL techniques come with stability-plasticity trade-offs and the need for continuous validation and monitoring.
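To make the trainability constraint concrete, here is a minimal NumPy sketch (my own illustration, not BOLERO or Akida code) of the kind of workaround the paragraph hints at: leave the compiled feature extractor untouched and adapt only a lightweight classifier head on board.

```python
# Minimal sketch (my own illustration, not BOLERO or Akida code) of one common
# workaround for limited on-board trainability: keep the compiled feature
# extractor frozen and continually update only a small linear head.
import numpy as np

rng = np.random.default_rng(0)
IN_DIM, FEATURES, CLASSES = 16, 64, 3

# Stand-in for a compiled, non-updatable on-board feature extractor.
W_backbone = rng.standard_normal((IN_DIM, FEATURES)) * 0.1

def frozen_backbone(x):
    return np.maximum(x @ W_backbone, 0.0)       # fixed weights, ReLU features

head = np.zeros((FEATURES, CLASSES))             # the only trainable parameters

def update_head(x, label, lr=0.01):
    """One online softmax/delta-rule update of the linear head."""
    global head
    f = frozen_backbone(x)
    scores = f @ head
    scores -= scores.max()                       # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    head += lr * np.outer(f, np.eye(CLASSES)[label] - probs)

# Simulated stream of labelled samples arriving on board.
for _ in range(100):
    update_head(rng.standard_normal(IN_DIM), int(rng.integers(0, CLASSES)))
```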
Benefits
The project offers a complete software and hardware pipeline to implement 3 different continual machine learning approaches (i.e., class-incremental, domain-incremental, and task-incremental) in 3 different applications for communication satellites. The comparative analysis helps to identify which approach and hardware platform is best suited for different CL scenarios. The project establishes the foundation for future development of CL in SatCom systems.
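For readers unfamiliar with those three terms, here is a toy Python sketch (my own illustration; the mappings to the BOLERO use cases in the comments are my assumptions) of what actually changes between incremental steps in each scenario:

```python
# Toy sketch (my own, with hypothetical mappings to the BOLERO use cases) of
# what changes between incremental steps in the three CL scenarios named above.
import numpy as np

rng = np.random.default_rng(1)

def make_experience(scenario, step):
    """Return (X, y, task_id) for one incremental step of a toy data stream."""
    X = rng.standard_normal((32, 8))
    if scenario == "class-incremental":
        # new classes appear over time (e.g. newly observed telemetry anomaly
        # types); a single head must separate all classes seen so far
        return X, rng.integers(2 * step, 2 * step + 2, size=32), None
    if scenario == "domain-incremental":
        # same classes, but the input distribution drifts (e.g. changing
        # channel conditions for beam-hopping optimisation)
        return X + 0.5 * step, rng.integers(0, 2, size=32), None
    if scenario == "task-incremental":
        # distinct tasks with their own label spaces and a known task id
        # (e.g. routing for different inter-satellite link topologies)
        return X, rng.integers(0, 2, size=32), step
    raise ValueError(scenario)

for scenario in ("class-incremental", "domain-incremental", "task-incremental"):
    X, y, task_id = make_experience(scenario, step=3)
    print(f"{scenario:20s} classes={sorted(set(y.tolist()))} task_id={task_id}")
```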
Features
- Structured and informed report on the selection of the 3 most promising applications of CL in communication satellites and 2 hardware platforms
- Code for running a complete onboard CL process for both hardware platforms
- Report containing technology gaps and future roadmap for CL in SatCom systems
System Architecture
The 3 CL applications identified in the project are implemented for two hardware platforms of very different architectures (KP Labs Leopard DPU and BrainChip Akida neuromorphic computer). For each application and platform, there is a complete CL pipeline architecture proposed from data preprocessing to onboard continual learning.
Current status
The 3 most promising applications of continual machine learning in communication satellites have been identified, i.e., domain-incremental beam hopping optimization, task-incremental inter-satellite links routing, and class-incremental telemetry anomaly classification.
For each application, a state-of-the-art CL approach has been implemented for two diverse hardware platforms identified as the most promising ones for CL (KP Labs Leopard DPU and BrainChip Akida neuromorphic computer). The performance of each CL approach has been assessed and main technology gaps have been identified.
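As a rough mental model of what such an end-to-end pipeline looks like, here is a schematic Python sketch (entirely my own; the backend is a toy nearest-class-mean learner, not the actual Leopard DPU or Akida interface):

```python
# Schematic only (my own sketch): the backend below is a toy stand-in, not the
# real Leopard DPU or Akida programming interface.
from dataclasses import dataclass
from typing import Callable, Optional
import numpy as np

@dataclass
class OnboardCLPipeline:
    preprocess: Callable[[np.ndarray], np.ndarray]
    infer: Callable[[np.ndarray], int]
    update: Callable[[np.ndarray, int], None]

    def step(self, raw_sample: np.ndarray, label: Optional[int] = None) -> int:
        x = self.preprocess(raw_sample)
        pred = self.infer(x)
        if label is not None:            # labels may arrive late or sparsely
            self.update(x, label)        # continual, on-board model refresh
        return pred

def make_toy_backend() -> OnboardCLPipeline:
    """Nearest-class-mean classifier whose prototypes are updated online."""
    prototypes: dict[int, np.ndarray] = {}
    def infer(x):
        if not prototypes:
            return -1                    # nothing learned yet
        return min(prototypes, key=lambda c: float(np.linalg.norm(x - prototypes[c])))
    def update(x, y):
        prototypes[y] = 0.9 * prototypes.get(y, x) + 0.1 * x   # running class mean
    return OnboardCLPipeline(
        preprocess=lambda r: r / (np.abs(r).max() + 1e-9),     # simple normalisation
        infer=infer,
        update=update,
    )

pipeline = make_toy_backend()
rng = np.random.default_rng(2)
for t in range(20):
    pipeline.step(rng.standard_normal(8) + (t % 3), label=t % 3)
```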
Documentation
Documentation may be requested
Prime Contractor
KP Labs Sp. z o. o.
Poland
https://kplabs.space
Subcontractors
OHB HELLAS
Greece
Website
Eutelsat OneWeb (OW)
United Kingdom
https://oneweb.net/
Last update
2025-05-03 12:39
BOLERO: On-Board Continual Learning for SatCom Systems (www.kplabs.space)
PROJECT
4 min read
BOLERO: On-Board Continual Learning for SatCom Systems
Published on
January 28, 2025
In an era of exponentially increasing data generation across all domains, satellite communications (SatCom) systems are no exception. The innovative BOLERO project, led by KP Labs, and supported by a consortium including OHB Hellas and Eutelsat OneWeb, is at the forefront of this technological evolution. This project is making significant strides in applying both classic and deep machine learning (ML and DL) techniques within the dynamic realm of satellite data, marking a transformative step in SatCom technology.
Understanding the Need for Continual Learning in SatCom
Traditionally, satellite applications have relied on supervised ML algorithms trained offline, with all training data prepared before the training process begins. This method is effective in stable data scenarios. For example, a deep learning model can accurately identify brain tumor lesions from magnetic resonance images after being trained on a diverse dataset. However, the dynamic space environment presents unique challenges. Factors such as thermal noise, atmospheric conditions, and on-board noise can significantly alter data characteristics, causing these offline-trained models to struggle or fail when encountering new, unfamiliar data distributions.
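A toy Python sketch of that failure mode (my own illustration, not KP Labs code): a simple statistic that flags when on-orbit inputs have drifted away from the statistics the model was trained on, which is exactly where an offline-trained model starts to struggle.

```python
# Tiny illustration (my own) of the failure mode described above: flag when
# incoming data drifts away from the statistics the model was trained on.
import numpy as np

rng = np.random.default_rng(7)

train_data = rng.standard_normal((5000, 8))            # "ground" training distribution
mu, sigma = train_data.mean(axis=0), train_data.std(axis=0)

def drift_score(batch):
    """Mean absolute z-score of a batch against the training statistics."""
    return float(np.abs((batch - mu) / sigma).mean())

nominal = rng.standard_normal((256, 8))                 # in-distribution telemetry
shifted = rng.standard_normal((256, 8)) + 1.5           # e.g. an added noise bias

print("nominal:", round(drift_score(nominal), 2))       # ~0.8 for matching data
print("shifted:", round(drift_score(shifted), 2))       # clearly larger -> retrain
```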
The BOLERO Approach
BOLERO addresses these challenges by adopting an online training paradigm. The training process is shifted directly to the target environment, such as an edge device on a satellite. This innovative approach bypasses the need for downlinking large amounts of data for Earth-based retraining, overcoming bandwidth and time limitations. Training models in their deployment environment accelerates the training-to-deployment cycle and significantly improves model reliability under dynamic conditions.
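A minimal sketch of that operational idea (my own illustration, not KP Labs code): raw samples stay on the satellite, the model is refreshed in place, and only a few bytes of health telemetry are downlinked.

```python
# Sketch of the operational idea (my own illustration, not project code):
# raw samples never leave the satellite; the model is refreshed in place and
# only lightweight health metrics go to the ground.
import collections
import numpy as np

rng = np.random.default_rng(3)
weights = np.zeros(8)                       # toy on-board model
recent_errors = collections.deque(maxlen=50)

def local_update(x, target, lr=0.05):
    """On-board gradient step (plain LMS rule) using data that stays local."""
    global weights
    err = target - float(weights @ x)
    weights += lr * err * x
    recent_errors.append(abs(err))

for _ in range(500):                        # continuous on-orbit operation
    x = rng.standard_normal(8)
    target = float(x[:3].sum())             # stand-in for delayed ground truth
    local_update(x, target)

telemetry = {"mean_abs_error": float(np.mean(recent_errors))}   # downlinked
print(telemetry)                            # kilobytes, not the raw data stream
```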
Tackling New Challenges
Implementing continual learning brings its own challenges, including catastrophic forgetting, where models may lose previously acquired knowledge. Additionally, the stability-plasticity dilemma must be addressed to ensure models are adaptable and capable of retaining learned information. BOLERO tackles these issues through strategies such as task-incremental learning, allowing models to adapt to new tasks, and domain-incremental learning, enabling them to handle data with evolving distributions.
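To illustrate the stability-plasticity point, here is a minimal NumPy sketch (my own, not BOLERO code) of one widely used remedy, experience replay: a small buffer of past samples is rehearsed alongside each new update so that learning a new task does not simply overwrite the old one.

```python
# Minimal illustration (my own) of experience replay as a remedy for
# catastrophic forgetting: mix a few stored old samples into every update.
import random
import numpy as np

rng = np.random.default_rng(4)
W = np.zeros((8, 4))                          # toy linear classifier
replay_buffer: list[tuple[np.ndarray, int]] = []
BUFFER_SIZE, REPLAY_PER_STEP = 200, 4

def sgd_step(x, y, lr=0.05):
    global W
    scores = x @ W
    scores -= scores.max()
    probs = np.exp(scores) / np.exp(scores).sum()
    W += lr * np.outer(x, np.eye(4)[y] - probs)

def continual_update(x, y):
    sgd_step(x, y)                                         # plasticity: new sample
    k = min(REPLAY_PER_STEP, len(replay_buffer))
    for xb, yb in random.sample(replay_buffer, k):         # stability: rehearse old ones
        sgd_step(xb, yb)
    if len(replay_buffer) < BUFFER_SIZE:                   # reservoir-style storage
        replay_buffer.append((x, y))
    else:
        replay_buffer[int(rng.integers(0, BUFFER_SIZE))] = (x, y)

for task, classes in enumerate([(0, 1), (2, 3)]):          # two tasks in sequence
    for _ in range(300):
        y = int(rng.choice(classes))
        x = rng.standard_normal(8) + y                     # class-dependent mean
        continual_update(x, y)
```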
The Consortium’s Collaborative Dynamics in BOLERO
The BOLERO project is propelled by the synergistic efforts of its consortium members. As the project leader, KP Labs is primarily responsible for developing the Synthetic Data Generators (SDGs) and the continual learning models, ensuring their efficacy across multiple SatCom applications and hardware architectures. OHB Hellas contributes by exploring novel machine learning methodologies suitable for streaming data, assessing continual learning applications in and beyond the space sector, and implementing two use cases in different hardware modalities. Eutelsat OneWeb focuses on identifying strategic space-based applications for continual learning, evaluating their business impact, and analyzing the benefits of continual learning models, particularly in terms of performance and cost-efficiency. Together, these entities combine their unique strengths to advance the BOLERO project, addressing the evolving demands of SatCom systems.
Real-World Applications and Future Impact
The applications of BOLERO are diverse, ranging from monitoring the operational capabilities of space devices to gas-level sensing and object detection in satellite imagery. These applications highlight the potential of continual learning to enhance the efficiency and accuracy of SatCom systems, potentially revolutionizing the management and processing of satellite data for more responsive, agile, and efficient operations.
The BOLERO project, led by KP Labs and supported by a consortium including OHB Hellas and Eutelsat OneWeb, represents a groundbreaking step in harnessing the full potential of continual learning for SatCom systems. By confronting the unique challenges associated with satellite data and leveraging the latest in ML technology, BOLERO is poised to significantly improve the adaptability and efficiency of SatCom systems, setting a new standard in the field of satellite communications.
The other promising hardware platform being tested is KPLabs’ own Leopard DPU: https://www.kplabs.space/solutions/hardware/leopard
View attachment 83667
Here is a recent interview with Florian Corgnou, CEO of BrainChip partner Neurobus, which was conducted in the run-up to the 12 June INPI* Pitch Contest at Viva Technology 2025, during which five start-ups competed against each other. Neurobus ended up winning the pitch contest by “showcasing our vision for low-power, bio-inspired edge AI for autonomous systems” (see above post by @itsol4605).
*INPI France is the Institut National de la Propriété Industrielle, France’s National Intellectual Property Office.
In this interview, Florian Corgnou mentions NeurOS, an embedded operating system that Neurobus is developing internally. Interesting…
“FC: Traditional AI, based on deep learning [2], is compute-, data- and energy-intensive. However, the hardware we equip – satellites, micro-drones, fully autonomous robots – operates in environments where these resources are scarce, or even absent.
So we adopted a frugal AI, designed from the ground up to work with little: little data (thanks to event cameras), little energy (thanks to neuromorphic chips), and little memory.
This forces us to rethink the entire design chain: from hardware to algorithms, including the embedded operating system that we develop internally, NeurOS.”
I found another reference to NeurOS here: https://dealroom.launchvic.org/companies/neurobus/
MORE ABOUT NEUROBUS
Neurobus is pioneering a new era of ultra-efficient, autonomous intelligence for drones and satellites. Leveraging neuromorphic computing, an AI inspired by the brain’s structure and energy efficiency, our edge AI systems empower aerial and orbital platforms to perceive, decide, and act in real-time, with minimal power consumption and maximum autonomy.
Traditional AI architectures struggle in constrained environments, such as low-Earth orbit or on-board UAVs, where power, weight, and bandwidth are critical limitations. Neurobus addresses this with a disruptive approach: combining event-based sensors with neuromorphic processors that mimic biological neural networks. This unique integration enables fast, asynchronous data processing, up to 100 times more power-efficient than conventional methods, while preserving situational awareness in extreme or dynamic conditions.
Our embedded AI systems are designed to meet the needs of next-generation autonomous platforms in aerospace, defense, and space. From precision drone navigation in GPS-denied environments to on-orbit space surveillance and threat detection, Neurobus technology supports missions where latency, energy, and reliability matter most.
We offer a modular technology stack that includes hardware integration, a proprietary neuromorphic operating system (NeurOS), and real-time perception algorithms. This enables end-users and integrators to accelerate the deployment of innovative, autonomous capabilities at the edge without compromising performance or efficiency.
Backed by deeptech expertise, partnerships with leading sensor manufacturers, and strategic collaborations in the aerospace sector, Neurobus is building the foundation for intelligent autonomy across air and space.
Our mission is to unlock the full potential of edge autonomy with brain-inspired AI, starting with drones and satellites, and scaling to all autonomous systems.
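The “event-based sensors with neuromorphic processors” combination described above can be illustrated with a toy Python sketch (my own, not Neurobus code): only pixels whose brightness changes beyond a threshold produce data, so a mostly static scene yields very little to process.

```python
# Toy illustration (mine, not Neurobus code) of why event-based sensing is
# cheap: only pixels whose brightness changes beyond a threshold produce data.
import numpy as np

rng = np.random.default_rng(5)
H, W, THRESHOLD = 240, 320, 0.1

prev = rng.random((H, W))
curr = prev.copy()
curr[100:110, 150:170] += 0.5            # a small moving object brightens a patch

diff = curr - prev
events = np.argwhere(np.abs(diff) > THRESHOLD)        # (y, x) of changed pixels
polarity = np.sign(diff[events[:, 0], events[:, 1]])  # brighter (+1) / darker (-1)

frame_pixels = H * W
print(f"frame-based: {frame_pixels} pixels per frame")
print(f"event-based: {len(events)} events "
      f"({100 * len(events) / frame_pixels:.2f}% of the frame)")
```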
French original:
NEUROBUS : une IA embarquée qui consomme très peu d’énergie (www.inpi.fr)
Grâce à une intelligence artificielle sobre et efficiente, Neurobus réinvente, entre Toulouse et Paris, la façon dont les machines perçoivent et interagissent avec leur environnement, ouvrant ainsi la voie vers la conquête des milieux hostiles. Florian Corgnou, son dirigeant et fondateur, nous en dit un peu plus sur cette start-up de la Deeptech qu’il présentera à Viva Technology, lors du Pitch Contest INPI organisé en partenariat avec HEC Paris.
Pouvez-vous vous présenter en quelques mots ?
Florian Corgnou : Je m’appelle Florian Corgnou, fondateur et CEO de Neurobus, une start-up deeptech que j’ai créée en 2023, entre Paris et Toulouse. Diplômé d’HEC, j’ai fondé une première entreprise dans le secteur du logiciel financier avant de rejoindre le siège européen de Tesla aux Pays-Bas. J’y ai travaillé sur des problématiques d’innovation et de stratégie produit.
Avec Neurobus, je me consacre à une mission : concevoir des systèmes embarqués d’intelligence artificielle neuromorphique, une technologie bio-inspirée qui réinvente la façon dont les machines perçoivent et interagissent avec leur environnement. Cette approche radicalement sobre et efficiente de l’IA ouvre des perspectives inédites pour les applications critiques dans la défense, le spatial, et la robotique autonome.
Notre conviction, c’est que l’autonomie embarquée ne peut émerger qu’en conciliant performance, sobriété énergétique et intelligence contextuelle, même dans les environnements les plus contraints, comme l’espace, les drones légers ou les missions en zones isolées.
Qu’est-ce qui rend votre entreprise innovante ?
F.C. : Neurobus se distingue par l’intégration de technologies neuromorphiques, c’est-à-dire une IA capable de fonctionner en temps réel avec une consommation énergétique ultra-faible, à l’image du cerveau humain.
Nous combinons des caméras événementielles[1] avec des processeurs neuromorphiques pour traiter directement à la source des signaux complexes, sans avoir besoin d’envoyer toutes les données dans le cloud.
Ce changement de paradigme permet une autonomie décisionnelle embarquée inédite, essentielle dans les applications critiques comme la détection de missiles ou la surveillance orbitale.
Florian Corgnou, fondateur et CEO de Neurobus©
Vous avez choisi une IA sobre, adaptée aux contraintes de son environnement. Pourquoi ce choix et en quoi cela change-t-il la façon de concevoir vos solutions ?
F.C. : L’IA traditionnelle, basée sur le deep learning [2], est gourmande en calcul, en données et en énergie. Or, les matériels que nous équipons — satellites, micro-drones, robots en autonomie complète — évoluent dans des environnements où ces ressources sont rares, voire absentes.
Nous avons donc adopté une IA frugale, conçue dès le départ pour fonctionner avec peu : peu de données (grâce aux caméras événementielles), peu d’énergie (grâce aux puces neuromorphiques), et peu de mémoire.
Cela nous force à repenser toute la chaîne de conception : du matériel jusqu’aux algorithmes, en passant par le système d’exploitation embarqué que nous développons en interne, le NeurOS.
Quel est le plus gros défi auquel vous avez dû faire face au cours du montage de votre projet ?
F.C. : L’un des plus grands défis a été de convaincre nos premiers partenaires et financeurs que notre technologie, bien qu’encore émergente, pouvait surpasser les approches conventionnelles.
Cela impliquait de créer de la confiance sans produit final, de prouver la valeur de notre approche avec des démonstrateurs très en amont, et de naviguer dans des écosystèmes exigeants comme le spatial ou la défense, où la crédibilité technologique et la propriété intellectuelle sont clés.
Votre prise en compte de la propriété industrielle a-t-elle été naturelle ? Quel rôle a joué l’INPI ?
F.C. : Dès le début, nous avons compris que la propriété industrielle serait un levier stratégique essentiel pour valoriser notre R&D et protéger notre avantage technologique.
Cela a été naturel, car notre innovation se situe à l’intersection du hardware, du software et des algorithmes.
L’INPI nous a accompagnés dans cette démarche, en nous aidant à structurer notre propriété industrielle — brevets, marques, enveloppes Soleau… — et à mieux comprendre les enjeux liés à la valorisation de l’innovation dans un contexte européen.
[1] Afin d’éviter des opérations inutilement coûteuses en temps comme en énergie, ce type de caméra n’enregistre une donnée qu’en cas de changement de luminosité.
[2] Le Deep learning est un type d'apprentissage automatique, utilisé dans le cadre de l’élaboration d’intelligence artificielle, basé sur des réseaux neuronaux artificiels, c’est-à-dire des algorithmes reproduisant le fonctionnement du cerveau humain pour apprendre à partir de grandes quantités de données.
Données clés :
- Date de création : avril 2023
- Secteur d’activité : Deeptech - IA neuromorphique embarquée (spatial, défense, robotique)
- Effectif : 6
- Chiffre d’affaires : 600 k€ (2024)
- Part du CA consacrée à la R&D : 70 % (estimé)
- Part du CA à l’export : 20 %
- Site web : https://neurobus.space/
Propriété industrielle :
Enveloppe(s) Soleau : 1
English translation provided on the INPI website:
NEUROBUS: an on-board AI that consumes very little energy (www.inpi.fr)
Thanks to a frugal and efficient artificial intelligence (AI), Neurobus is reinventing, between Toulouse and Paris, the way machines perceive and interact with their environment, paving the way for the conquest of hostile environments. Florian Corgnou, its director and founder, tells us a little more about this deeptech start-up, which he will present at Viva Technology during the INPI Pitch Contest organised in partnership with HEC Paris.
Can you introduce yourself in a few words?
Florian Corgnou: My name is Florian Corgnou, founder and CEO of Neurobus, a deeptech start-up which I created in 2023, between Paris and Toulouse. A graduate of HEC, I founded my first company in the financial software sector before joining Tesla's European headquarters in the Netherlands. There, I worked on innovation and product strategy issues.
With Neurobus, I'm dedicated to a mission: to design embedded neuromorphic artificial intelligence systems, a bio-inspired technology that reinvents the way machines perceive and interact with their environment. This radically sober and efficient approach to AI opens up unprecedented perspectives for critical applications in defense, space, and autonomous robotics.
Our belief is that on-board autonomy can only emerge by reconciling performance, energy efficiency and contextual intelligence, even in the most constrained environments, such as space, light drones or missions in isolated areas.
What makes your company innovative?
FC: Neurobus stands out for its integration of neuromorphic technologies, i.e., an AI capable of operating in real time with ultra-low energy consumption, like the human brain.
We combine event cameras[1] with neuromorphic processors to process complex signals directly at the source, without needing to send all the data into the cloud.
This paradigm shift enables unprecedented on-board decision-making autonomy, essential in critical applications such as missile detection or orbital surveillance.
Florian Corgnou, founder and CEO of Neurobus©
You've chosen a frugal AI, adapted to the constraints of its environment. Why this choice, and how does it change the way you design your solutions?
FC: Traditional AI, based on deep learning [2], is compute-, data- and energy-intensive. However, the hardware we equip – satellites, micro-drones, fully autonomous robots – operates in environments where these resources are scarce, or even absent.
So we adopted a frugal AI, designed from the ground up to work with little: little data (thanks to event cameras), little energy (thanks to neuromorphic chips), and little memory.
This forces us to rethink the entire design chain: from hardware to algorithms, including the embedded operating system that we develop internally, NeurOS.
What was the biggest challenge you faced while setting up your project?
FC: One of the biggest challenges was convincing our early partners and funders that our technology, while still emerging, could outperform conventional approaches.
This involved building trust without a final product, proving the value of our approach with early demonstrators, and navigating demanding ecosystems like space or defense, where technological credibility and intellectual property are key.
Was your consideration of industrial property a natural one? What role did the INPI play?
FC: From the outset, we understood that industrial property would be an essential strategic lever to enhance our R&D and protect our technological advantage.
This was natural, because our innovation lies at the intersection of the hardware, with and algorithms. [There seems to be a translation error here, as the French original mentions hardware, software and algorithms: “l’intersection du hardware, du software et des algorithmes.”]
The INPI supported us in this process, helping us to structure our industrial property — patents, trademarks, Soleau envelopes, etc. — and to better understand the issues related to the promotion of innovation in a European context.
[1] To avoid unnecessarily costly operations in terms of time and energy, this type of camera only records data when there is a change in brightness.
[2] Deep learning is a type of machine learning, used in the development of artificial intelligence, based on artificial neural networks, that is, algorithms reproducing the functioning of the human brain to learn from large amounts of data.
Key data:
- Date created: April 2023
- Sector of activity: Deeptech - Embedded neuromorphic AI (space, defense, robotics)
- Headcount: 6
- Turnover: €600k (2024)
- Share of turnover devoted to R&D: 70% (estimated)
- Share of turnover from exports: 20%
- Website: https://neurobus.space/
Industrial property:
Soleau envelope(s): 1
*Soleau envelope:
Soleau envelope - Wikipedia
en.m.wikipedia.org
The Soleau envelope (French: Enveloppe Soleau), named after its French inventor, Eugène Soleau [fr], is a sealed envelope serving as proof of priority for inventions valid in France, exclusively to precisely ascertain the date of an invention, idea or creation of a work. It can be applied for at the French National Institute of Industrial Property (INPI). The working principles were defined in the ruling of May 9, 1986, published in the official gazette of June 6, 1986 (Journal officiel de la République française or JORF), although the institution of the Soleau envelope dates back to 1915.[1]
The envelope has two compartments which must each contain the identical version of the element for which registration is sought.[2] The INPI laser-marks some parts of the envelope for the sake of delivery date authentication and sends one of the compartments back to the original depositary who submitted the envelope.[2]
The originator must keep their part of the envelope sealed except in case of litigation.[3] The deposit can be made at the INPI, by airmail, or at the INPI's regional subsidiaries.[2] The envelope is kept for a period of five years, and the term can be renewed once.[3]
The envelope may not contain any hard element such as cardboard, rubber, computer disks, leather, staples, or pins. Each compartment can only contain up to seven A4-size paper sheets, with a maximum of 5 millimetres (0.2 in) thickness. If the envelope is deemed inadmissible, it is sent back to the depositary at their own expense.[2]
Unlike a patent or utility model, the depositor has no exclusivity right over the claimed element. The Soleau envelope, as compared to a later patent, only allows use of the technique, rather than ownership, and multiple people might submit envelopes to support separate similar use, before a patent is later granted to restrict application.
Check this out: The Neurobus website has been redesigned - and the familiar URL https://neurobus.space will now automatically forward to www.neurobus.ai
Their new motto is “Neuromorphic intelligence for autonomous systems
inspired by the human biology, built for the edge.”
While BrainChip, Intel and Prophesee - all three listed as partners on the previous Neurobus website - no longer show up on the redesigned website, the team at Neurobus obviously still needs suppliers of neuromorphic processors and event-based sensors for the solutions they offer.
Which brings me to the next topic, namely the brand-new “Products” section: apparently, all four solutions currently offered by Neurobus can already be ordered (although no estimated delivery date is given):
- Ground Station for Drone Detection
- Autonomous Drone Detection
- Space-based Surveillance
- Autonomous Defense Intelligence
VERY intriguing, isn’t it?!
Which raises the question, though, of whether these are made-to-order solutions.
And whether “pre-order now” would have been a more realistic button than “order now”.
If not - and of course provided we are indeed involved in any of the offered solutions - wouldn’t it be high time for a joint partnership announcement or some other official announcement of a commercial arrangement? Or will watching the financials be the only way we will find out that a Neurobus customer had previously signed on the dotted line for a product that involves BrainChip technology?
Unfortunately you need to email Neurobus to “Ask for Specs”.
View attachment 88781
View attachment 88782 View attachment 88783
View attachment 88784
View attachment 88785
View attachment 88786
View attachment 88787
View attachment 88788
View attachment 88789
In my tagged 23 June post, I had referred to an interview in which CEO Florian Corgnou mentioned NeurOS, an embedded operating system that Neurobus is developing internally. Unfortunately, there is currently no further information available when you click on the above “Custom Software Stack” tile - nor for any of the other “technology foundations” tiles.
A current job opening for an internship with Neurobus as their CEO’s “right hand” reveals two more interesting facts:
1. Neurobus are currently not only working with Airbus and CNES (which we already knew), but also with the French Ministry of Armed Forces
and
2. Neurobus are planning an upcoming €5 million seed fundraising round.
No more mention of any current collaboration with Mercedes-Benz, though.
(see my 12 November 2024 post: https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-441454)
CEO Right Hand: Neurobus is hiring!
Neurobus is hiring a CEO Right Hand at its Paris offices. (jobs.stationf.co)
View attachment 88790
Frangipani, very well researched post.

Hi @Fullmoonfever,
I strongly doubt it.
First of all, Bascom Hunter developed the 3U VPX SNAP (Spiking Neuromorphic Advanced Processor) Card with SBIR funding from the Navy, not the Air Force.
The publication you shared, however, is the AFRL (Air Force Research Laboratory) Facilities Book FY25.
NAVAIR, the Naval Air Systems Command, would likely be the first DoD entity to get their hands on the Bascom Hunter 3U VPX SNAP card, given they awarded the SBIR funding for the N202-099 project “Implementing Neural Network Algorithms on Neuromorphic Processors”.
View attachment 88772
Award | SBIR
www.sbir.gov
View attachment 88770 View attachment 88771
However, as you can see, Bascom Hunter’s SBIR Phase II is still ongoing - the SBIR award end date is 18 August 2025, which as of today is still four weeks away. Bascom Hunter as the awardee will then be required to submit a Phase II Final Report.
Which in turn means the 3U VPX SNAP Card is highly likely not yet a commercially ready product (although one could be forgiven for thinking so when checking out the BH website https://bascomhunter.com/bh-tech/di...c-processors/asic-solutions/3u-vpx-snap-card/), but rather still a prototype, which Bascom Hunter will subsequently aim to commercialise in the ensuing Phase III (which will need to happen without further SBIR funding, though):
View attachment 88773
I believe that also explains why we had never heard anything “ASX-worthy” about a connection with Bascom Hunter prior to the Appendix 4C and Quarterly Activities Report for the Period Ended 31 December 2024, dated 28 January 2025:
View attachment 88774
Those two sentences about BH even suggested to me at the time they may have changed their prototype plan and may have decided on using AKD1500 rather than AKD1000 for their commercialisation efforts, possibly having already secured an interested Phase III commercialisation partner/customer.
Of course I could be wrong and the AKD1500 chips are actually slated for a different Bascom Hunter product-in-the-making. But in that case we’d still have to see another major purchase of AKD1000 chips before BH will be able to take their SNAP Card to market, and so far we have neither had an ASX announcement about it nor seen any evidence of such a deal in the financials. (Even assuming for a moment that a top-secret NDA could have been the reason for a non-announcement, the financials wouldn’t lie. But I don’t buy the NDA “excuse” anyway, as BH has been openly (i.e. on their website) promoting their 3U VPX SNAP Card as having a total of 5 AKD1000 chips for months, and our company also let the cat out of the bag about their connection to BH with the January ASX announcement, hence there is no [more] secrecy required.)
So unless it were a custom made-to-order design and BH were excitedly waiting for their first customer(s) to sign a deal before placing an order with BrainChip, it seems unlikely to me that the 3U VPX SNAP Card is a commercially available product yet.
Happy to be corrected, though…
Rugged VPX chassis systems are commonly used for mission-critical defense and aerospace applications, and VPX cards are available in 3U VPX and 6U VPX form factors (cf. https://militaryembedded.com/avionics/computers/vpx-and-openvpx).
Here are two alternative suggestions as to what the mention on page 66 of the AFRL Facilities Book FY 2025 could possibly refer to:
The Embedded Edge Computing Laboratory is one of 7 labs housed by the AFRL Extreme Computing Facility (ECF), see page 65:
“EXTREME COMPUTING FACILITY
Research and development of unconventional computing and communications architectures and paradigms, trusted systems and edge processing.
A 7,100 Sq. foot multi discipline lab housing 7 Laboratories: Embedded Edge Computing, Nanocomputing, Trusted Systems, Agile Software Development, and 3 Quantum Labs along with
a Video Wall for demonstrations focused on research and development of unconventional computing architectures, networks, and processing that is secure, trusted, and can be done at the tactical edge.
$6.5 Million laboratory possesses world class capabilities in Neuromorphic based hardware characterization and testing, secure processing and Quantum based Communication, Networking, and Computing.
Chief, Mr. Greg Zagar
Deputy Chief, Mr. Michael Hartnett”
View attachment 88779
Under “Examples”, two AFRL programs are referred to: 6.3 SE3PO and 6.2 NICS+ (Neuromorphic Computing for Space, in collaboration with Intel: https://afresearchlab.com/technology/nics).
On page 72, which covers the “ECF AGILE SOFTWARE DEVELOPMENT LAB”, led by Pete Lamonica (named as Primary Alternate POC [point of contact] of the Embedded Edge Computing Laboratory on page 66), SE3PO is spelled out as the “Secure Extreme Embedded Exploitation and Processing On-board (SE3PO) program”.
The Secure Processor and the adjacent lab referred to in the second sentence underlined in green could possibly be this one:
View attachment 88780
As you can see, AFRL has been developing a military-grade secure processor with built-in cyber-defensive capabilities, dubbed T-CORE, has been testing it on a 3U VPX board, and is currently working on version 2 of T-CORE.
So that’s one of many option what the mention of 3U VPX heterogeneous computing (systems that use multiple types of processors such as CPUs, GPUs, ASICs, FPGAs, NPUs) could possibly refer to.
As for neuromorphic research, we know that AFRL has been collaborating with both IBM and Intel for years.
While the AFRL Facilities Book FY25 doesn’t specify whether or not the “3UVPX heterogeneous computing” in their equipment list includes a 3U VPX board with a neuromorphic processor, it could theoretically even refer to IBM’s NorthPole in a 3U VPX form factor (aka NP-VPX):
View attachment 88776
(…)
View attachment 88777
The following October 2023 exchange on LinkedIn between IBM’s Dharmendra Modha and AFRL Principal Computer Scientist Mark Barnell (who is also named as the Embedded Edge Computing Lab’s Primary POC on page 66 of the AFRL Facilities Book FY25) on the release of NorthPole is evidence that the collaboration between AFRL and IBM did not end with the TrueNorth era.
New in Science Magazine, a major new article from IBM Research, IBM introducing a new brain-inspired, silicon-optimized chip architecture suitable for neural inference. | Dharmendra Modha (www.linkedin.com)
View attachment 88778
| Feature | BrainChip | Hailo |
| --- | --- | --- |
| Core Architecture | Spiking Neural Network (SNN) (Akida) | Deep Learning Accelerator (Hailo-8) |
| Processing Style | Event-based, asynchronous | Frame-based, synchronous |
| Power Consumption | Extremely low (<1 mW for some ops) | Low (but higher than Akida) |
| Data Efficiency | Excellent for sparse, time-series data | Designed for dense CNN workloads |
| Training Compatibility | Requires SNN conversion/training tools | Standard TensorFlow / PyTorch models |
| Ecosystem Integration | Early-stage but expanding (Renesas, NVISO) | Strong integrations with NVIDIA, Renesas, ABB |
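To ground the “Processing Style” and “Data Efficiency” rows, here is a back-of-the-envelope Python sketch (my own toy model with assumed numbers, not vendor data) of why event-driven, sparse processing does less arithmetic than frame-based, dense processing:

```python
# Back-of-the-envelope comparison (my own toy model, not vendor benchmark data)
# of why an event-driven design can do less arithmetic than a frame-based one:
# a dense layer pays for every input every time step, while an event-driven
# layer only pays for the inputs that actually changed (spiked).
IN, OUT, ACTIVITY = 1024, 256, 0.05          # assume 5% of inputs active per step

dense_macs_per_step = IN * OUT               # frame-based: full matrix-vector product
event_macs_per_step = int(ACTIVITY * IN) * OUT   # event-based: only active inputs

print(f"dense accelerator : {dense_macs_per_step:,} MACs/step")
print(f"event-driven      : {event_macs_per_step:,} MACs/step "
      f"({dense_macs_per_step / event_macs_per_step:.0f}x fewer)")
```

This is only a counting argument, not a power measurement; actual efficiency on either chip depends on the workload, sparsity and implementation.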