BRN Discussion Ongoing

God you talk shit... Peter holding Sean to account... are you serious? How has he held him to account?

And I hope to God Peter is looking at this site (and HC) and gets a serious reality check. People are sick and tired of being kept in the dark, left to rely on hope. Strung along, year after year....

Must be great to be Peter with so many shares that you really don't give a shit; he is set for life regardless (even at today's price). He doesn't care about shareholders.

Stop making him out to be some sort of messiah... you come across as a love-struck puppy and a delusional fool.

And who is holding Peter to account? No-one!

Wake up....
It has been proven in the past that Sean and Antonio have told a fib.
 
  • Fire
  • Like
Reactions: 3 users

manny100

Top 20
Hai Dio, I have no doubt that RTX, Parsons and Bascom Hunter will be doing big business for us in due course.
 
  • Like
  • Love
Reactions: 12 users

Fiendish

Regular
Any ideas on the 44 mil shares traded SCXT at 17:29?
 

Fiendish

Regular
(image attached)
 
  • Wow
  • Like
Reactions: 6 users



The ASX code SCXT is likely a special cross trade for a specific security, where S indicates a special trade, C means cross trade, and XT specifies the type or tier of the special crossing. This type of trade, where a broker buys and sells the same security at the same time for different clients, can result in faster execution at competitive prices. Recent updates have simplified the reporting of special cross trades with the new code SC replacing older codes like S1, S2, and S3 for certain equity trades.

SCXT breakdown
  • S: Indicates a "Special" trade, which might be a block trade or a crossing where pre-trade transparency rules are modified under specific conditions.
  • C: Stands for "Cross" trade, where the broker executes both the buy and sell orders for the same security simultaneously for different clients.
  • XT: Specifies the particular type or tier of the special cross trade.


Key features of special cross trades
  • Faster execution: The trade may be executed faster than if sent to public trading venues, as it's an internal cross.
  • Competitive pricing: The trade is executed at a price within the current National Best Bid and Offer (NBBO), meaning the client is not disadvantaged.
  • Simplified reporting: The ASX has introduced a new SC condition code for special cross trades to simplify reporting, replacing older codes like S1, S2, and S3 for equity market products.
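
To make the decoding concrete, here's a toy sketch in Python. The component meanings just follow the breakdown above, which is an interpretation rather than an official ASX specification:

```python
# Toy decoder for the SCXT condition string, following this post's breakdown.
# The meanings below are an interpretation, not an official ASX reference.
SCXT_PARTS = [
    ("S", "Special trade: block/crossing with modified pre-trade transparency"),
    ("C", "Cross trade: broker executes both buy and sell sides internally"),
    ("XT", "Type/tier of the special crossing"),
]

def explain(code: str) -> None:
    """Walk the code left to right, matching each documented component."""
    cursor = 0
    for part, meaning in SCXT_PARTS:
        if code[cursor:cursor + len(part)] == part:
            print(f"{part:>2} -> {meaning}")
            cursor += len(part)
    if cursor != len(code):
        print(f"unparsed remainder: {code[cursor:]!r}")

explain("SCXT")
```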
 
  • Like
Reactions: 8 users

Diogenese

Top 20
The ASX code SCXT is likely a special cross trade for a specific security, where S indicates a special trade, C means cross trade, and XT specifies the type or tier of the special crossing. [...]
Aka: Thimble and pea ... The TaPXT
 
  • Haha
  • Love
Reactions: 2 users

FJ-215

Regular
And it has begun........20M shorts closed in a day, more to follow no doubt..

(image attached)
 
  • Like
Reactions: 4 users

7für7

Top 20
Aka: Thimble and pea ... The TaPXT

Someone was willing to take 44 million shares at 0.17. That means there is at least one big buyer who accepts the price at this level. That’s neutral to slightly positive rather than bearish… so far so good. But the share price still won’t change tomorrow because it was an off-market trade, most likely between institutional investors.

We still have to wait and see what announcements come from now on, and how strong they are – or not. That alone is what really matters for us.
 
  • Like
Reactions: 1 user

Fiendish

Regular
Should that trade trigger a notice? That's a pretty substantial trade. If a substantial holding changes by more than 1%, we should be expecting a notification by Friday, if I'm not mistaken.
 
  • Like
Reactions: 1 user

Frangipani

Top 20
Bremen is the place to be this week when you want to engage with customers in the space tech business…


(image attached)
 
  • Like
  • Love
  • Fire
Reactions: 14 users
Should that trade trigger a notice? That's a pretty substantial trade. If a substantial holding changes by more than 1%, we should be expecting a notification by Friday, if I'm not mistaken.
Maybe not.

I wonder if brokers can do a block trade made up of multiple parties on one or both sides of the trade. Most likely, I suspect.

The ASX allows them to parcel up large blocks this way rather than smash or inflate the SP on market due to the volume of the trade.

You know..... orderly market n all that crap... we wouldn't want to make them spend time like retailers drip-feeding the buys/sells on market, or have this create liquidity/volatility, like the reason why bots and shorting are allowed.... fuppets (fkn muppets).
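
For what it's worth, here's a rough back-of-envelope sketch of the notice logic in Python. The shares-on-issue figure is my assumption (BrainChip has somewhere around 2 billion shares on issue; check the latest Appendix 2A for the real number), and this is a simplification of the s671B substantial-holder rules, not advice:

```python
# Back-of-envelope: does a 44M-share block force a substantial-holder notice?
SHARES_ON_ISSUE = 2_000_000_000   # assumption, not an official figure
TRADE_SIZE = 44_000_000

trade_pct = 100 * TRADE_SIZE / SHARES_ON_ISSUE
print(f"block is ~{trade_pct:.1f}% of shares on issue")   # ~2.2%

def notice_required(before_pct: float, after_pct: float) -> bool:
    """Simplified s671B logic: a notice is required when a holder crosses
    the 5% substantial-holder threshold in either direction, or when an
    existing substantial holder's interest moves by 1% or more."""
    was_sub, is_sub = before_pct >= 5.0, after_pct >= 5.0
    if was_sub != is_sub:
        return True
    return was_sub and abs(after_pct - before_pct) >= 1.0

# An existing ~6% holder taking the whole block must lodge a notice...
print(notice_required(6.0, 6.0 + trade_pct))   # True
# ...but a ~2% holder taking it stays under 5% and stays invisible:
print(notice_required(2.0, 2.0 + trade_pct))   # False
```

And if the block was split across multiple parties, as speculated above, each leg could stay below these thresholds and never show up in a notice at all.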
 
  • Like
Reactions: 7 users

Diogenese

Top 20
Maybe not.

[...]

You know..... orderly market n all that crap... we wouldn't want to make them spend time like retailers drip-feeding the buys/sells on market, or have this create liquidity/volatility, like the reason why bots and shorting are allowed.... fuppets (fkn muppets).
I just found out that a firkin is 9 imperial gallons of ale, but 70 imperial gallons of wine - I've always preferred firkin wine.
 
  • Haha
  • Like
Reactions: 3 users

Fiendish

Regular
Well... something's afoot! For good or ill, soon we shall see!

Alf Kuchenbuch, BrainChip's VP of Sales for EMEA (the ASX-listed company behind the BRN.AX stock ticker), has been a key figure in pitching their neuromorphic AI tech—specifically the Akida IP—to the space sector. The "GRAIN" you're referring to isn't a literal grain but the GRAIN architecture, a new product line of radiation-hardened, space-grade system-on-chips (SoCs) developed by Frontgrade Gaisler in partnership with BrainChip. It integrates Akida for ultra-low-power, event-based AI processing tailored for satellites, deep-space probes, and autonomous missions (think real-time image recognition, navigation, and data analysis in orbit without draining batteries or failing under radiation).
Launched in April 2025, the first device in the line (GR801 SoC) combines Akida with Gaisler's NOEL-V RISC-V processor. It's designed for harsh space environments, enabling "always-on" AI that's 10-100x more energy-efficient than traditional chips. This is a big deal for BRN.AX because it positions BrainChip's IP as a go-to for the booming $10B+ space AI market, potentially driving licensing revenue and stock upside as adoption ramps.
As for how many space customers are "waiting" for GRAIN: There's no public exact tally (companies like BrainChip don't disclose pipeline specifics to avoid tipping competitors), but recent activities point to solid, growing demand from institutional players:
Confirmed contracts/partners: At least 2 major space agencies so far.
Swedish National Space Agency (SNSA): Awarded a commercialization contract in April 2025 to deploy GRAIN in Swedish-led missions, focusing on power-constrained satellites.
European Space Agency (ESA): BrainChip (with Frontgrade Gaisler, Airbus, and Neurobus) won ESA's "NEURAVIS" ITT (Invitation to Tender) in July 2024 for neuromorphic space tech evaluation, which directly feeds into GRAIN demos. ESA hosted the October 2025 EDHPC event where Alf demoed it.
Broader interest: Alf's been hustling at events, connecting with "space engineers and agencies" (plural, per BrainChip's Oct 2025 EDHPC recap). This includes hands-on demos for Earth observation, autonomous nav, and hazard detection. Sweden's Royal Institute of Technology (KTH) is already building a GRAIN demo app with neuromorphic sensors. The space crowd is hungry for this because traditional AI chips guzzle power and flop in radiation—GRAIN fixes that.
Right now (as of Nov 19, 2025), Alf's at the Space Tech Expo Europe in Bremen, Germany, schmoozing more prospects. Expect announcements on additional pilots or deals soon, as the line's still ramping to full production. For BRN.AX holders, this screams validation: space is a high-margin, sticky market, and GRAIN could be a multi-year revenue catalyst if it scales beyond these early wins. If you're trading on it, watch for Q4 updates from BrainChip's investor calls.
 
  • Like
  • Fire
  • Wow
Reactions: 9 users

gilti

Regular
That's one way to provide liquidity.
An after-hours trade of 44 million after pissing around all day with 1 & 101 trades. Pricks.
Shorts closing after screwing retail shareholders down from 25c to 17c. Pricks.
Pricks Pricks Pricks etc.
 
  • Like
  • Fire
  • Love
Reactions: 10 users

IloveLamp

Top 20
Apologies if posted already

(image attached)
 
  • Like
  • Love
  • Fire
Reactions: 9 users

Frangipani

Top 20
Bremen is the place to be this week when you want to engage with customers in the space tech business…


View attachment 93140

While Frontgrade Gaisler are expectedly spruiking GR801 - their "rad hard, fault tolerant Space SoC with integrated Akida IP" - at Space Tech Expo Bremen…


(image attached)


(still picture taken from booth-setting-up video)

… they have also just signed an MoU with KP Labs from Poland “to join forces on next-generation on-board computing architectures that enhance autonomy and resilience for future space missions.” Their collaboration aims at the integration of FG’s GR716B with KP Labs’ Lion DPU.



(image attached)



We know KP Labs as the prime contractor of the now concluded ESA BOLERO (On-Board Continual Learning for SatCom Systems) project, in which OHB Hellas and Eutelsat OneWeb were the subcontractors. That project involved another of KP Labs’ DPUs - the Leopard DPU.

OHB Hellas are not only exploring Akida for their 'Satellite as a Service' concept (GIASAAS) 👆🏻, but also as a consortium partner for an ESA project called BOLERO (On-Board Continual Learning for SatCom Systems).

Prime contractor of the BOLERO project is KP Labs - both OHB Hellas and Eutelsat OneWeb are subcontractors.

View attachment 83668



BOLERO On-Board Continual Learning For SatCom Systems​


Objectives
The project identifies, explores, and implements onboard continual machine learning techniques to enhance reliability and data throughput of communication satellites.

The first objective is to identify 3 most promising use cases and applications of continual learning (CL), together with at least two most promising hardware platforms.

The second objective is to implement different CL techniques in the selected scenarios and assess their performance and feasibility for onboard deployment using the selected hardware platforms. The assessment includes the analysis of advantages and trade-offs of CL in comparison to traditional offline machine learning approaches, and the comparative analysis of hardware platforms for CL.

The final goal of the project is to identify a state-of-the-art, potential gaps, and future roadmap for CL in satellite communication systems.

Challenges
The main challenge is related to the limited resources and limited support for continual machine learning mechanisms in existing onboard processing units, e.g., not all operations and layers are supported and model parameters cannot be updated without hardware-specific recompilation. Therefore, common CL approaches are not straightforward to implement on board. Additionally, CL techniques come with stability-plasticity trade-offs and the need for continuous validation and monitoring.

Benefits
The project offers a complete software and hardware pipeline to implement 3 different continual machine learning approaches (i.e., class-incremental, domain-incremental, and task-incremental) in 3 different application for communication satellites. The comparative analysis helps to identify which approach and hardware platform is best suited for different CL scenarios. The project establishes the foundation for future development of CL in SatCom systems.

Features
  • Structured and informed report from selecting 3 most promising applications of CL in communication satellites and 2 hardware platforms
  • Code for running a complete onboard CL process for both hardware platforms
  • Report containing technology gaps and future roadmap for CL in SatCom systems

System Architecture
The 3 CL applications identified in the project are implemented for two hardware platforms of very different architectures (KP Labs Leopard DPU and BrainChip Akida neuromorphic computer). For each application and platform, there is a complete CL pipeline architecture proposed from data preprocessing to onboard continual learning.

Current status
The 3 most promising applications of continual machine learning in communication satellites have been identified, i.e., domain-incremental beam hopping optimization, task-incremental inter-satellite links routing, and class-incremental telemetry anomaly classification.

For each application, a state-of-the-art CL approach has been implemented for two diverse hardware platforms identified as the most promising ones for CL (KP Labs Leopard DPU and BrainChip Akida neuromorphic computer). The performance of each CL approach has been assessed and main technology gaps have been identified.

Documentation

Documentation may be requested

Prime Contractor​


KP Labs Sp. z o. o.

Poland
https://kplabs.space

Subcontractors​


OHB HELLAS

Greece

Eutelsat OneWeb (OW)

United Kingdom
https://oneweb.net/

Last update
2025-05-03 12:39







BOLERO: On-Board Continual Learning for SatCom Systems​


Published on
January 28, 2025

In an era of exponentially increasing data generation across all domains, satellite communications (SatCom) systems are no exception. The innovative BOLERO project, led by KP Labs, and supported by a consortium including OHB Hellas and Eutelsat OneWeb, is at the forefront of this technological evolution. This project is making significant strides in applying both classic and deep machine learning (ML and DL) techniques within the dynamic realm of satellite data, marking a transformative step in SatCom technology.

Understanding the Need for Continual Learning in SatCom​

Traditionally, satellite applications have relied on supervised ML algorithms trained offline, with all training data prepared before the training process begins. This method is effective in stable data scenarios. For example, a deep learning model can accurately identify brain tumor lesions from magnetic resonance images after being trained on a diverse dataset. However, the dynamic space environment presents unique challenges. Factors such as thermal noise, atmospheric conditions, and on-board noise can significantly alter data characteristics, causing these offline-trained models to struggle or fail when encountering new, unfamiliar data distributions.

The BOLERO Approach​

BOLERO addresses these challenges by adopting an online training paradigm. The training process is shifted directly to the target environment, such as an edge device on a satellite. This innovative approach bypasses the need for downlinking large amounts of data for Earth-based retraining, overcoming bandwidth and time limitations. Training models in their deployment environment accelerates the training-to-deployment cycle and significantly improves model reliability under dynamic conditions.

Tackling New Challenges​

Implementing continual learning brings its own challenges, including catastrophic forgetting, where models may lose previously acquired knowledge. Additionally, the stability-plasticity dilemma must be addressed to ensure models are adaptable and capable of retaining learned information. BOLERO tackles these issues through strategies such as task-incremental learning, allowing models to adapt to new tasks, and domain-incremental learning, enabling them to handle data with evolving distributions.
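
(As an aside for anyone curious what one of these mitigation strategies looks like in practice: below is a minimal Python sketch of my own, not BOLERO code, of a replay buffer, one standard ingredient for countering catastrophic forgetting. Each new batch is mixed with a uniform sample of past data, so the model keeps seeing old distributions while adapting to new ones.)

```python
# Minimal replay-buffer sketch (illustrative only, not from BOLERO).
# Reservoir sampling keeps a bounded, uniform sample of everything seen,
# so each training step can mix fresh samples with replayed history.
import random

class ReplayBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = []
        self.seen = 0

    def add(self, item) -> None:
        # Reservoir sampling: after n adds, every item ever seen has
        # probability capacity/n of remaining in the buffer.
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

    def sample(self, k: int):
        return random.sample(self.items, min(k, len(self.items)))

# Toy usage: a stream of (sample, task_id) pairs arriving on board.
stream = ((f"sample-{i}", i % 3) for i in range(1000))
buffer = ReplayBuffer(capacity=256)
for item in stream:
    batch = [item] + buffer.sample(8)   # fresh sample + replayed history
    # model.train_step(batch)           # hypothetical training hook
    buffer.add(item)
```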


The Consortium’s Collaborative Dynamics in BOLERO


The BOLERO project is propelled by the synergistic efforts of its consortium members. As the project leader, KP Labs is primarily responsible for developing the Synthetic Data Generators (SDGs) and the continual learning models, ensuring their efficacy across multiple SatCom applications and hardware architectures. OHB Hellas contributes by exploring novel machine learning methodologies suitable for streaming data, assessing continual learning applications in and beyond the space sector, and implementing two use cases in different hardware modalities. Eutelsat OneWeb focuses on identifying strategic space-based applications for continual learning, evaluating their business impact, and analyzing the benefits of continual learning models, particularly in terms of performance and cost-efficiency. Together, these entities combine their unique strengths to advance the BOLERO project, addressing the evolving demands of SatCom systems.

Real-World Applications and Future Impact​

The applications of BOLERO are diverse, ranging from monitoring the operational capabilities of space devices to gas-level sensing and object detection in satellite imagery. These applications highlight the potential of continual learning to enhance the efficiency and accuracy of SatCom systems, potentially revolutionizing the management and processing of satellite data for more responsive, agile, and efficient operations.

The BOLERO project, led by KP Labs and supported by a consortium including OHB Hellas and Eutelsat OneWeb, represents a groundbreaking step in harnessing the full potential of continual learning for SatCom systems. By confronting the unique challenges associated with satellite data and leveraging the latest in ML technology, BOLERO is poised to significantly improve the adaptability and efficiency of SatCom systems, setting a new standard in the field of satellite communications.


The other promising hardware platform being tested is KP Labs' own Leopard DPU: https://www.kplabs.space/solutions/hardware/leopard

View attachment 83667

The BOLERO project, which implemented and benchmarked continual machine learning techniques on "two hardware platforms of very different architectures" (namely KP Labs' Leopard DPU and BrainChip's Akida), gets a mention in a new LinkedIn video post by OHB Hellas, which introduces their Orbital High-Performance Computing (OHPC) team.

View attachment 91475

Giannis Panagiotopoulos, Space ML Engineer at OHB Hellas, describes the project as follows:

“[The] BOLERO project explores on-board continual machine learning techniques to enhance reliability and data throughput [in?] satellite communications systems. OHB Hellas has developed the continual learning models for two use cases and deployed them on selected hardware platforms, including a neuromorphic processing device.”


View attachment 91476


Is Akida possibly also being evaluated as part of the ongoing In-Space PoC-3, in which the OHPC department is also involved (start date was 6 November 2024)?

OHB Hellas and their partners OHB Digital Connect and Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI, German Research Center for Artificial Intelligence) have been selected by ESA “to develop ideas for a networked fleet of autonomously operating in-space transportation vehicles.”

“In-Space Proof-of-Concept 3 is the third milestone in ESA’s in-space transportation roadmap, aiming at demonstrating the key enabling capabilities for on-board & shared intelligence, culminating at an In-Orbit Demonstration (IOD) of these capabilities.”




View attachment 91479



View attachment 91480

View attachment 91481


View attachment 91482

The ESA project webpage on BOLERO (https://connectivity.esa.int/projects/bolero) gets updated from time to time, but still doesn’t tell us anything about the results of benchmarking the developed continual learning techniques on different hardware, that included KP Labs’ Leopard DPU as well as Akida:

“In BOLERO, we identified satcom functionalities that could benefit from on-board continual learning, selected them via a quantifiable process, and developed ML models accordingly.

We built data simulators to test various continual learning techniques, emphasizing reproducibility and algorithm generalization in realistic settings. The project delivered an end-to-end pipeline for continual learning and online adaptation for satcom, validated in simulated scenarios. Finally, we benchmarked these methods across different hardware, including KP Labs’ Leopard and BrainChip’s Akida, providing comprehensive results for all algorithms, hardware, and applications explored within BOLERO.”



Note that the ESA webpage lists the project status as “ongoing”, but hasn’t updated the header since July 2025. It does use past tense to describe the project, though.
On the OHB Hellas website (https://www.ohb-hellas.gr/ohb-hellas-project/bolero/), the project status is marked as “Delivered”.
The only article on BOLERO on KP Labs’ website so far is dated 28 January 2025 (https://www.kplabs.space/news/bolero-on-board-continual-learning-for-satcom-systems).
 
  • Like
  • Love
  • Fire
Reactions: 5 users

Guzzi62

Regular
Well... something's afoot! For good or ill, soon we shall see!

Alf Kuchenbuch, BrainChip's VP of Sales for EMEA (the ASX-listed company behind the BRN.AX stock ticker), has been a key figure in pitching their neuromorphic AI tech—specifically the Akida IP—to the space sector. [...]
Is that from an article somewhere?
 
  • Love
Reactions: 1 user

Frangipani

Top 20
Spotted this intriguing LinkedIn post involving Genoa-based MYWAI, with whom we partnered back in 2023 (https://brainchip.com/brainchip-and-mywai-partner-to-deliver-next-generation-edge-ai-solutions/). What caught MYEYE specifically was “FIONA™ (Fin Ray Inspired Gripper Maintenance Agent)” that “uses edge-optimized neuromorphic AI to monitor the gripper’s condition in real time, improving reliability and maintenance efficiency”.



View attachment 92992


I then googled "I-GENIUS", which stands for "Intelligent Gripper using Edge AI & Neural Imitation for User Support", and found this April 2025 YouTube video, in which MYWAI Founder & CEO Fabrizio Cardinali explains the project proposal and also mentions "neuromorphic chips that can optionally be used to accelerate the delivery", specifically naming BrainChip. Akida also shows up on two of the presentation slides. (See also the tagged October 2024 post by @Stable Genius, which covered a presentation already alluding to the project at the time.)

MYWAI is the prime contractor for I-GENIUS and has partnered with the Stellantis Group Research Centre Centro Ricerche Fiat near Torino/Turin, engaging Università di Genova/University of Genoa as subcontractor.




View attachment 92993 View attachment 92994 View attachment 92995 View attachment 92996
View attachment 92997
View attachment 92998 View attachment 92999



View attachment 93000



View attachment 93001


Our partner MYWAI remains convinced that “Neuromorphic computing isn’t just another buzzword - it’s a paradigm shift.”


(image attached)



In today’s LinkedIn post, they link to an article titled “Neuromorphic computing and the future of edge AI” by Daniel Hoffman that I shared here in early September:



by Daniel Hoffman, Contributor

Neuromorphic computing and the future of edge AI​

Opinion
Sep 8, 2025 · 10 min read
Artificial Intelligence, Neural Networks, Quantum Computing

Forget quantum — neuromorphic AI could be the real disruptor, reshaping everything from medicine to warfare while cutting energy use dramatically.

(Header image credit: MetamorWorks / Getty Images)

By now, most IT professionals are well aware of the current CPU/GPU hardware architecture used for AI and the associated problems with power, cooling and connectivity. While these architectures have enabled breakthrough AI capabilities, they also face serious physical and operational constraints:
  • Power consumption. A single data-center-grade GPU can draw 300–700 watts, and large training runs often require thousands of them operating in parallel.
  • Cooling requirements. A high power draw results in significant thermal output, necessitating extensive liquid or air cooling systems, which add cost and complexity.
  • Bandwidth bottlenecks. Moving massive datasets between memory, storage and compute units strains interconnects, introducing delays and consuming more energy.
  • Latency. Even with high-speed networking, inference requests routed to cloud-based GPU clusters face network latency — problematic for time-critical applications such as robotics, autonomous vehicles or cybersecurity defense.
These constraints are pushing researchers and industry toward alternative computing architectures that can deliver high performance with drastically lower energy and infrastructure demands.

Recently, many of my peers have been speculating about the effect quantum computing (QC) will have on AI, as if that will be the next big catalyst for AI to dominate everything. But due to quantum's major hurdles — qubit instability, error correction overhead and the need for cryogenic cooling — quantum systems are costly and impractical for the continuous, large-scale workloads that drive modern AI.

While QC captures the mainstream headlines, neuromorphic computing has positioned itself as a force in the next era of AI. While conventional AI relies heavily on GPU/TPU-based architectures, neuromorphic systems mimic the parallel and event-driven nature of the human brain. The recently announced Darwin Monkey 3 by the Chinese Academy of Sciences signals a new stage in this race. With claims of outperforming traditional supercomputers in edge and energy‑constrained environments, the DM3 invites analysis of its potential impact.

Life imitates art​

In “Terminator 2: Judgment Day,” the T-800 explains that his brain runs on a “neural-net processor, a learning computer,” a piece of Hollywood sci-fi tech that seemed designed to make cyborgs sound more menacing. And against all reason, scientists have gone ahead and built it. Neuromorphic chips now mimic the very thing the scriptwriters imagined: brain-like processors that learn and adapt in real time. No, they’re not striding around in leather jackets with shotguns, but they are bringing artificial intelligence to the edge, disconnected and learning. What was once a cinematic plot device has quietly become a laboratory reality, proving that sometimes life really does imitate the movies.

Comparative analysis: CPU/GPU vs. neuromorphic systems​

To appreciate the significance of neuromorphic breakthroughs, it is important to compare them directly with the dominant compute platforms of today, CPUs and GPUs.
  • CPUs excel at general-purpose computing with strong single-threaded performance and broad software ecosystems. However, they struggle with massively parallel AI workloads and are power‑hungry at scale.
  • GPUs/TPUs have become the backbone of modern AI training and inference, offering massive parallelism and mature frameworks like TensorFlow and PyTorch. Yet they are highly energy‑intensive, require cooling and infrastructure, and are less suitable for size, weight and power (SWaP)-constrained environments such as IoT or edge devices.
  • Neuromorphic systems, in contrast, are designed around event-driven, spike-based architectures. They compute only when stimuli occur, yielding extraordinary energy efficiency and low-latency processing. This makes them highly suitable for real-time edge applications, adaptive control and on-device intelligence. However, neuromorphic platforms face limitations in tooling, developer familiarity and ecosystem maturity.
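
To make "compute only when stimuli occur" concrete, here is a toy leaky integrate-and-fire (LIF) neuron in Python (the textbook building block of spiking systems, not any vendor's implementation). The membrane potential leaks toward rest, integrates only when an input event arrives, and emits a spike only on crossing threshold:

```python
# Toy leaky integrate-and-fire neuron: a generic textbook model of the
# event-driven, spike-based processing described above.
import numpy as np

rng = np.random.default_rng(42)

V_REST, V_THRESH = 0.0, 1.0   # resting potential and firing threshold
LEAK, W_IN = 0.9, 0.4         # per-step leak factor and input weight

T = 100
input_events = rng.random(T) < 0.2    # sparse input spike train (~20% of steps)

v = V_REST
output_spikes = []
for t in range(T):
    v = V_REST + LEAK * (v - V_REST)  # passive leak toward rest
    if input_events[t]:
        v += W_IN                     # integrate only when an event arrives
    if v >= V_THRESH:
        output_spikes.append(t)       # threshold crossing emits a spike...
        v = V_REST                    # ...and resets the membrane

print(f"{int(input_events.sum())} input events -> {len(output_spikes)} output spikes")
```

Between events the loop does essentially nothing, which is the whole point: in silicon, no events means almost no switching activity and almost no power.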

AI on the edge​

Neuromorphic hardware has shown promise in edge environments where power efficiency, latency and adaptability matter most. From wearable medical devices to battlefield robotics, systems that can “think locally” without requiring constant cloud connectivity offer clear advantages. Recent surveys in neuromorphic computing demonstrate applications spanning real‑time sensory processing, robotics and adaptive control.

Healthcare and R&D bridge​

Healthcare is a leading beneficiary of neuromorphic breakthroughs. Neuromorphic systems are being studied for diagnostics, prosthetics and personalized medicine. A recent review highlights how neuromorphic computing is contributing to diagnostic imaging, brain-computer interfaces and adaptive neuroprosthetics, bridging edge diagnosis with frontier R&D. For example, neuromorphic neuroprosthetics have shown promise in restoring sensory feedback to amputees, while low‑power neuromorphic imaging chips are enabling continuous patient monitoring.

At the same time, Steve Furber and others stress that real‑time event‑driven sensors — such as event‑based vision systems — are where neuromorphic excels today. These systems only compute when stimuli occur, making them particularly valuable in healthcare wearables and medical imaging, where sparse, efficient data capture is critical.

Industrial control systems (ICS)​

Industrial systems demand ultra‑low‑latency and robust decision‑making under uncertainty. Neuromorphic computing offers clear advantages in closed‑loop control, process optimization and anomaly detection. A paper published recently in Nature Communications demonstrates how spiking neural networks can handle nonlinear process control and adapt to disturbances in real time. This aligns with prior work on neuromorphic resilience in ICS, particularly in power grids, oil & gas and manufacturing, where continuous adaptation is critical.

Why this matters in simple terms: Closed‑loop control is like a thermostat constantly adjusting heating to keep a room comfortable — neuromorphic chips allow ICS to make those adjustments instantly and efficiently. Process optimization is similar to cruise control in a car, keeping things running smoothly while using less fuel or energy. Anomaly detection works like a smoke detector, spotting early signs of trouble before they escalate. In industries where downtime costs millions, or where failures can be dangerous, these capabilities translate directly into safer, cheaper and more reliable operations.

Aircraft and shipping​

Applications in aerospace and maritime domains leverage neuromorphic systems’ ability to process complex sensory streams while remaining power efficient. In aviation, neuromorphic processors can aid in autonomous navigation, fault detection and cockpit assistance. In shipping, neuromorphic computing supports sensor fusion and real‑time anomaly detection in harsh, bandwidth‑limited environments.

Logistics​

Beyond ICS and transportation, logistics presents a compelling use case for neuromorphic computing. A 2025 review of supply chain resilience emphasizes dynamic modeling through hybrid complex‑network and agent‑based approaches. Neuromorphic architectures mirror this hybrid paradigm, enabling parallel simulations, disruption response and adaptive re‑routing in real time. Practical applications include warehouse robotics, just‑in‑time inventory management and intermodal transport optimization. By integrating neuromorphic systems, logistics chains could achieve a higher degree of resilience, limiting ripple effects during global disruptions.

Security and SOC applications​

Another promising area is cybersecurity and Security Operations Centers (SOCs). Spiking neural networks (SNNs) process data in an event‑driven fashion, making them ideal for real‑time anomaly detection with minimal energy overhead. Their selective processing also enhances privacy by limiting unnecessary data exposure, a key advantage in handling sensitive information. Emerging work on spiking neural P systems shows effectiveness in malware detection, phishing identification and spam filtering with fewer training cycles than conventional deep learning systems. Early findings also suggest that SNNs may be more resilient to adversarial attacks due to their spike‑based encoding and nonlinear temporal dynamics.

More recently, a US government‑backed study demonstrated that neuromorphic platforms such as BrainChip’s Akida 1000 and Intel’s Loihi 2 can achieve up to 98.4% accuracy in multiclass attack detection, matching full‑precision GPUs while consuming far less power. These chips were tested across nine network traffic types, including multiple attack categories and benign traffic, showing their suitability for deployment in aircraft, UAVs and edge gateways where size, weight, power and cost (SWaP‑C) constraints are critical. This represents a leap over earlier prototypes (~93.7% accuracy), aided by improved tooling like Intel’s Lava framework. Combined with advances in semi‑supervised and continual learning, neuromorphic SOC solutions are now capable of adapting to evolving threats while minimizing catastrophic forgetting.

Equally important, neuromorphic AI is directly tackling the SWaP problem that prevents conventional AI from running effectively at the edge. In 2022, more than 112 million IoT devices were compromised, and IoT malware surged by 400% the following year. Neuromorphic processors, such as Akida 1000, address these challenges by delivering on‑device, event‑driven anomaly detection without heavy infrastructure requirements. This positions neuromorphic SOC technologies as a practical path to securing IoT, UAVs and critical infrastructure endpoints that cannot support traditional AI models.

Market and strategic implications​

Darwin Monkey 3 symbolizes more than a technological achievement; it reflects geopolitical competition in next‑generation AI hardware. The ability to deploy neuromorphic systems across healthcare, ICS, defense, logistics and security may shape both national resilience and private‑sector competitiveness. Importantly, as Furber notes, the hardware is ready — but the ecosystem isn’t. Development tools akin to TensorFlow or PyTorch are still emerging (e.g., PyNN, Lava), and convergence toward standards will be crucial for widespread adoption (IEEE Spectrum, 2024).

Adding to this, a 2025–2035 global market forecast projects significant growth in neuromorphic computing and sensing, spanning sectors such as healthcare, automotive, logistics, aerospace and cybersecurity. The study profiles more than 140 companies, from established giants like Intel and IBM to startups such as BrainChip and Prophesee, which are releasing joint products now, underscoring the breadth of investment and innovation. It also emphasizes challenges in standardization, tooling and supply chain readiness, suggesting that the race will not just be technological but also commercial and regulatory.

Ethics and sustainability​

As neuromorphic computing matures, ethical and sustainability considerations will shape adoption as much as raw performance. Spiking neural networks’ efficiency reduces carbon footprints by cutting energy demands compared to GPUs, aligning with global decarbonization targets. At the same time, ensuring that neuromorphic models are transparent, bias‑aware and auditable is critical for applications in healthcare, defense and finance. Calls for AI governance frameworks now explicitly include neuromorphic AI, reflecting its potential role in high‑stakes decision‑making. Embedding sustainability and ethics into the neuromorphic roadmap will ensure that efficiency gains do not come at the cost of fairness or accountability.

A paradigm shift in AI?​

Will Darwin Monkey 3 spark a paradigm shift in AI? The answer lies in adoption and integration. Neuromorphic computing is no longer theoretical — it is moving into applied domains from healthcare to logistics to cybersecurity. Yet the field still searches for its “killer app” — a domain where neuromorphic’s efficiency and adaptability decisively outperform conventional AI. As industries face rising energy costs and escalating cyber‑physical risks, neuromorphic solutions offer a forward‑looking path that blends efficiency, adaptability, resilience and responsibility.

This article is published as part of the Foundry Expert Contributor Network.


Daniel Hoffman is a veteran IT and cybersecurity leader with over two decades of experience across healthcare, retail, software and finance. He began his career at Symantec and later joined GoldMine Software, where he helped develop content for the GoldSync Server 5.0 technical certification platform. Today, Daniel supports small to mid-sized businesses in navigating cybersecurity, business continuity, disaster recovery and regulatory compliance. He holds certifications including CISSP, ISO 27001 Lead Auditor, and Privacy Law and HIPAA from the University of Pennsylvania. A lifelong learner, Daniel is currently focused on physical security, compliance, and IT/OT convergence. He is an active member of both ISSA and ISC2 and values the opportunity to grow through collaboration with peers in the cybersecurity community. Daniel is also a strong advocate for security awareness training as a foundational layer of risk management.

 
  • Love
  • Like
  • Fire
Reactions: 4 users

Galaxycar

Regular
Just one of the big shorts closing out, one that was opened right after management had a talk with their sophisticated investors in Sydney. Thanks for the heads-up, boys; no worries, we love it when you screw over retail. So that's 60-70 million closed out that we know of, going by Shortman and today's numbers. One nice payday for the shorters, pity it comes at the expense of retail investors. AGM's coming, management and directors: payback's a bitch, and the voting's gunna be ugly. Time for PVDM to step in and say enough is enough: Antonio, Sean, there's the door.
 