BRN Discussion Ongoing

manny100

Top 20
Hi @7für7,

Unfortunately, we can’t take everything ChatGPT says at face value. If BrainChip’s Akida IP had been integrated into the R‑Car V3H SoC, BrainChip would almost certainly be receiving royalties by now.

However, I believe it's much more plausible that BrainChip’s Akida IP could be incorporated into a next-generation platform like the Renesas R‑Car X5H.

See the above post and excerpts below.




EXTRACT - Renesas blog published 24 September 2024, which discusses the 5th-gen R‑Car X5H.




View attachment 88873



EXTRACT - Business Wire article dated 13 November 2024 stating "the R-Car X5H will be sampling to select automotive customers in 1H/2025, with production scheduled in 2H/2027. "




View attachment 88874




Hi Bravo, I read again the news releases concerning Renesas' tape-out in Dec '22 incorporating AKIDA:
"This is part of a move to boost the leading edge performance of its chips for the Internet of Things, Sailesh Chittipeddi became Executive Vice President and General Manager of IoT and Infrastructure Business Unit at Renesas Electronics and the former CEO of IDT tells eeNews Europe."
I had forgotten that it was all about IoT. Well, it was 2.5 years ago - that's my excuse for the memory lapse anyway.
The good news, though: if it's for client IoT, we could see revenue sometime in 2026.
Onsor's first AKIDA contact was in 2022, with an expected launch sometime in 2026 and revenue sometime after that.
 
Reactions: 7 users

Frangipani

Top 20
Well, another piss-poor performance by the SP, but in saying that, we are heading towards the better part of the year: the build towards Christmas and the perception that BrainChip's SP will rise on the back of Sean's $9 million turnover.
Will it actually happen????

Who knows? We live in crazy times: My weather app just informed me that summer temperatures are currently higher in Oslo than in Barcelona! ☀️☀️☀️

217C96CA-8193-4E35-B0D4-D658A7F623A2.jpeg
 
Reactions: 8 users

Getupthere

Regular
Everyone has already heard about the 9 million… so why should it only go up once you see it on paper? I think it’s already priced in.
I don’t believe the $9 million is factored into the price. Unfortunately, the share price won’t believe BRN management until it sees it.
 
Reactions: 11 users
I don’t believe the $9 million is factored into the price. Unfortunately, the share price won’t believe BRN management until it sees it.
I would think there'd be FOMO - "look at this company, its yearly turnover has 10x'd this quarter, that's impressive; I'd better buy in now before the price goes up too much."

And I agree with you - I don't think much of anything has been priced in yet. I think when this actually takes off it's going to go crazy.
 
Reactions: 9 users

Rach2512

Regular


Screenshot_20250725_201046_Samsung Internet.jpg

Screenshot_20250725_201217_Samsung Internet.jpg

Screenshot_20250725_201111_Samsung Internet.jpg
 
Reactions: 15 users

BrainShit

Regular
I think it's got to the stage where it's no longer:

"Golly, Akida got mentioned in the same breath as Loihi!"

Now it's:

"Are they still talking about Loihi when Akida gets mentioned?"

Intel might take another view:

1. "They're still talking about Akida. Should we commercialize Loihi?"
2. "They're still talking about Akida. Should we buy BrainChip?"
 
Reactions: 5 users

CHIPS

Regular
Intel might take another view:

1. "They're still talking about Akida. Should we commercialize Loihi?"
2. "They're still talking about Akida. Should we buy BrainChip?"

Oh no no nooooo ... We are not for sale!!!

Season 5 No GIF by The Office
 
Reactions: 6 users

Frangipani

Top 20
Uni researchers in the United Arab Emirates (from New York University Abu Dhabi and Khalifa University, Abu Dhabi) have experimented with Akida for neuromorphic AI-based robotics - the field our CTO Dr. Tony Lewis is an expert in:


View attachment 60342
View attachment 60343

View attachment 60344


Rachmad Vidya Wicaksana Putra and Muhammad Shafique, two of the New York University (NYU) Abu Dhabi eBrain Lab researchers whose above paper on experimenting with Akida for neuromorphic AI-based robotics was published almost exactly a year ago, appear to be extremely enamoured with our neuromorphic processor! They co-authored another paper on the AKD1000 with their colleague Pasindu Wickramasinghe, which was published yesterday and has also been accepted at the International Joint Conference on Neural Networks (IJCNN), June 30 - July 5, 2025, in Rome, Italy.


“Neuromorphic Processors
1) Overview: The energy efficiency potentials offered by SNNs can be maximized by employing neuromorphic hardware processors [22]. In the literature, several processors have been proposed, and they can be categorized as research and commodity processors. Research processors refer to neuromorphic chips that are designed only for research and not commercially available, hence access to these processors is limited. Several examples in this category are SpiNNaker, NeuroGrid, IBM’s TrueNorth, and Intel’s Loihi [4]. Meanwhile, commodity processors refer to neuromorphic chips that are available commercially, such as BrainChip’s Akida [7] and SynSense’s DYNAP-CNN [8]. In this work, we consider the Akida processor as it supports on-chip learning for SNN fine-tuning, which is beneficial for adaptive edge AI systems [7].

(…)

E. Further Discussion
It is important to compare neuromorphic-based solutions against the state-of-the-art ANN-based solutions, which typically employ conventional hardware platforms, such as CPUs, GPUs, and specialized accelerators (e.g., FPGA or ASIC). To ensure a fair comparison, we select object recognition as the application and YOLOv2 as the network, while considering performance efficiency (FPS/W) as the comparison metric. Summary of the comparison is provided in Table I, and it clearly shows that our Akida-based neuromorphic solution achieves the highest performance efficiency. This is due to the sparse spike-driven computation that is fully exploited by neuromorphic processor, thus delivering highly power/energy-efficient SNN processing. Moreover, our Akida-based neuromorphic solution also offers an on-chip learning capability, which gives it further advantages over the other solutions. This comparison highlights the immense potentials of neuromorphic computing for enabling efficient edge AI systems.

VI. CONCLUSION
We propose a novel design methodology to enable efficient SNN processing on commodity neuromorphic processors. It is evaluated using a real-world edge AI system implementation with the Akida processor. The experimental results demonstrate that our methodology leads the system to achieve high performance and high energy efficiency across different applications. It achieves low latency of inference (i.e., less than 50 ms for image classification, less than 200 ms for real-time object detection in video streaming, and less than 1 ms for keyword recognition) and low latency of on-chip learning (i.e., less than 2 ms for keyword recognition), while consuming less than 250 mW of power. In this manner, our design methodology potentially enables ultra-low power/energy design of edge AI systems for diverse application use-cases.”
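To put those quoted figures in perspective, here is some rough arithmetic of my own (not from the paper): treating each latency bound as one inference per frame and charging the full 250 mW budget gives lower bounds on throughput and on the paper's FPS/W metric. Real pipelines overlap work, so these are conservative.

```python
# Rough bounds implied by the quoted figures (assumes one inference per
# frame and the full "< 250 mW" budget; real pipelines overlap work).
power_w = 0.250

for task, latency_s in {
    "image classification": 0.050,  # "< 50 ms"
    "object detection":     0.200,  # "< 200 ms"
    "keyword recognition":  0.001,  # "< 1 ms"
}.items():
    fps = 1.0 / latency_s
    print(f"{task}: >= {fps:.0f} FPS  ->  >= {fps / power_w:.0f} FPS/W")
```

That works out to at least 20 FPS (80 FPS/W) for classification and 5 FPS (20 FPS/W) for detection, which helps explain why Table I favours the Akida-based solution.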



View attachment 81147
View attachment 81148 View attachment 81149 View attachment 81150 View attachment 81151 View attachment 81152 View attachment 81154 View attachment 81155

Another paper containing an Akida reference by two of the New York University (NYU) Abu Dhabi eBrain Lab researchers, Alberto Marchisio and Muhammad Shafique, came out yesterday.
It is titled “Neuromorphic Computing for Embodied Intelligence in Autonomous Systems: Current Trends, Challenges, and Future Directions”.

While Shafique (who is Director of the eBrain Lab), Marchisio and other colleagues had previously published in detail about evaluating Akida for neuromorphic AI-based robotics 👆🏻 (the first paper above was a collaboration with two other Abu Dhabi-based researchers from Khalifa University), this new paper - as the title suggests - is more foundational and mentions Akida only once, when listing examples of available neuromorphic hardware. It also addresses “key challenges and open research questions”.


Under ACKNOWLEDGMENT it says:

“This work was supported in parts by the NYUAD Center for Cyber Security (CCS), funded by Tamkeen under the NYUAD Research Institute Award G1104, and the NYUAD Center for Artificial Intelligence and Robotics (CAIR), funded by Tamkeen under the NYUAD Research Institute Award CG010.”



“Tamkeen is New York University’s partner in the UAE, enabling NYU Abu Dhabi’s ongoing development as a local and global center of academic excellence and community engagement.”






5E2E960B-4663-43B7-91B6-AAC048BD25F2.jpeg
8D74BFAD-3F94-4DA5-AE16-1A552130C5B3.jpeg
5944E76F-CAD7-4D4F-885D-A040AFB7A07A.jpeg
752517C0-0218-44D1-946F-EDA4DB377505.jpeg
 
Reactions: 16 users

Rach2512

Regular

Screenshot_20250725_221126_Samsung Internet.jpg
Screenshot_20250725_221139_Samsung Internet.jpg
Screenshot_20250725_221154_Samsung Internet.jpg
Screenshot_20250725_221209_Samsung Internet.jpg
Screenshot_20250725_221218_Samsung Internet.jpg
Screenshot_20250725_221228_Samsung Internet.jpg



 
Reactions: 22 users

BrainShit

Regular


How does this compare to Akida?

The Hailo-10H chip and Akida have distinct core architectures and target somewhat different edge AI use cases, but they do compete in the broader edge AI market, where power efficiency and real-time intelligence matter.

The Hailo-10H is more of a classic, powerful AI accelerator for complex neural networks, including generative AI at the edge, with good energy efficiency. Akida specializes in particularly energy-efficient, neuromorphic, real-time AI for sensors and continuous data processing, with a completely different architecture.

The Hailo-10H would probably be better suited to complex AI models (e.g., LLMs): it is optimized for higher-performance AI inference, including generative AI models (LLMs, vision-language models, Stable Diffusion) and multi-modal workloads on edge devices with M.2 integration. It is also a good fit for ADAS, because it is automotive-qualified to AEC-Q100 Grade 2 standards and is aimed at automotive designs with a 2026 start of production. Akida 2.0, by contrast, has advantages in highly energy-efficient, adaptive sensor applications.

Akida focuses on ultra-low-power, always-on, event-based AI, with priority in domains such as medical, environmental, and industrial sensors where power saving is paramount.

Both chips can be applied to real-time vision analytics tasks: while the Hailo-10H handles these with high throughput and generative AI capability, Akida excels at ultra-low-power, always-on vision pattern recognition.

Both target embedded systems and IoT deployments that demand AI at the edge with minimal latency and energy footprint, such as industrial monitoring, smart infrastructure, and real-time anomaly detection.

They overlap in edge vision and sensor-related AI applications but serve different levels of AI complexity and power requirements, making them complementary yet partially competing solutions for low-latency AI at the edge.

Source: https://hailo.ai/company-overview/n...-accelerator-with-generative-ai-capabilities/
Source: https://hailo.ai/products/ai-accelerators/hailo-10h-ai-accelerator/
 
Reactions: 14 users

Frangipani

Top 20
Alican Kiraz has meanwhile received the AKD1000 Mini PCIe Board he had ordered to adapt SNN and GNN (= graph neural network) models for use in robotic arm and autonomous drone projects.

I’m not sure why he calls this an “AKD1000 PCIe M.2 module”, though, as this is evidently not the AKD1000 M.2 form factor that came out in January. 🤔

That said, we wouldn’t mind him purchasing both form factors, would we? 😊

View attachment 88841


Google’s translation from Turkish to English:

“Hello friends! 🙏 I've received my BrainChip Akida AKD1000 PCIe neuromorphic chip, which I purchased to adapt SNN and GNN models for use in my Robotic Arm and Autonomous Drone projects. 🔥🎉

As you know, today's AI projects require high-parameter Transformer-based models and GPUs to reduce the latency between real-time perception and decision-making. However, achieving this with a milliwatt power budget poses a critical engineering challenge for these systems.

To this end, I integrated the BrainChip Akida AKD1000 PCIe M.2 module into the Raspberry Pi 5's PCIe Gen2 x1 lane with a Waveshare PCIe x1, creating a fully edge-driven, event-driven Spiking Neural Network accelerator. This reduces average power consumption by 10x compared to traditional models and accelerates perception-to-decision latency by 5x. 🦾⚙️

In my research plan, I hope to develop excellent examples of hybrid GNN and SNN architectures using Causal Reasoning in energy-constrained edge applications. Friends pursuing related academic research can easily connect with me on LinkedIn. 🧡🙏”
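For anyone curious what driving the board from the Pi actually looks like, here is a minimal, untested sketch based on my reading of BrainChip's public MetaTF docs. The `akida` package calls are from memory, and the `model.fbz` filename and input shape are placeholders of my own, not details from Alican's post:

```python
# Minimal sketch: run one inference on an AKD1000 PCIe board from a
# Raspberry Pi 5. Assumes BrainChip's `akida` (MetaTF) package is installed,
# the PCIe driver is loaded, and `model.fbz` is a pre-converted Akida model.
import numpy as np
import akida

devices = akida.devices()            # enumerate attached Akida hardware
assert devices, "No Akida device found - check the PCIe driver"
print("Found device:", devices[0].version)

model = akida.Model("model.fbz")     # load a quantized Akida model
model.map(devices[0])                # map its layers onto the chip

# Dummy 224x224 RGB frame; a real pipeline would feed camera frames here.
frame = np.random.randint(0, 256, (1, 224, 224, 3), dtype=np.uint8)
outputs = model.forward(frame)
print("Top class:", int(np.argmax(outputs)))
print(model.statistics)              # inference counters, if supported
```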

Here is another post by Istanbul-based Alican Kiraz, Head of Cyber Defense Center @Trendyol Group, which links to a Medium article written by him titled “Hybrid SNN Models and Architectures for Causal Reasoning in Next-Generation Military UAVs” that gives us a little theoretical background regarding the PoC he is envisaging, for which he will be utilising his newly purchased Akida Mini PCIe Board:



52476A72-C41D-49B3-9D8F-20B6072B2910.jpeg



(English version)

EN | Hybrid SNN Models and Architectures for Causal Reasoning in Next-Generation Military UAVs

Alican Kiraz · 7 min read · 1 hour ago

(Header image created with Midjourney)

Although the initial emergence and areas of interest for Unmanned Aerial Vehicles (UAVs) were diverse, their most prominent and widespread use today is undoubtedly in military applications. With the rise of Artificial Intelligence systems and technologies over the past five years, this interest has undergone a significant transformation. Platforms such as the Anduril YFQ-44 and Shield AI MQ-35 V-BAT are now capable of executing missions autonomously, being tasked and coordinated by decision-support AI systems.

Artificial Intelligence is increasingly employed in critical missions such as Intelligence, Surveillance, and Reconnaissance (ISR), autonomous navigation, and target identification. However, the deployment of such AI solutions on the field presents significant challenges — both in terms of mission effectiveness and operational security, as well as resource constraints. These limitations often lead to hesitation and caution in real-world use.


You can access the Turkish version of my article here:

TR | Hybrid SNN Models and Architectures for Causal Reasoning in Next-Generation Military UAVs

Although the initial emergence and areas of interest of Unmanned Aerial Vehicles were different, the field in which they are most used and most talked about in the world today…

alican-kiraz1.medium.com

One of the most critical challenges lies in the high computational and energy demands of conventional Artificial Neural Networks (ANNs) and Convolutional Neural Networks (CNNs), which are widely used in computer vision and autonomous navigation. Additionally, transformer-based Large Language Models (LLMs) operating in an embedded, agentic architecture for decision support further exacerbate these requirements. These model architectures typically necessitate high-performance GPUs or ruggedized onboard supercomputers integrated into the UAV system.

As you may recall, in my previous article, I examined Russia’s Geran-2 drone, which operates with an onboard Nvidia Jetson Orin module. You may also want to revisit that article for further insights.

TR | Nvidia Jetson Orin-Powered AI Kamikaze Drones: Beating GPS Jamming on the Battlefield

The MS series variants of the Shahed-136 are seen as a product of military cooperation between Iran and Russia. The…

alican-kiraz1.medium.com

EN | Nvidia Jetson Orin-Powered AI Kamikaze Drones: Beating GPS Jamming on the Battlefield

The MS series variants of the Shahed-136 are seen as a product of military cooperation between Iran and Russia. The…

alican-kiraz1.medium.com

The integration of Hybrid Spiking Neural Network (SNN) and Causal Reasoning (CR) architectures offers significant advantages in terms of reduced energy consumption, lower latency for real-time decision-making, and increased system reliability. Let us now conceptualize such a system — CR-SNN — built upon a hybrid architecture combining SNN and CR.

This hybrid system leverages the biologically inspired, event-driven nature of SNNs to achieve energy efficiency and computational speed, especially when deployed on neuromorphic hardware platforms. Meanwhile, the Causal Reasoning (CR) component — grounded in Judea Pearl’s causal inference framework, including Structural Causal Models (SCMs) and do-calculus — enables UAVs to develop enhanced situational awareness in complex scenarios. It empowers autonomous systems to perform tasks such as target tracking more effectively and to produce more generalizable and interpretable outputs, even in novel situations not encountered during training.

Moreover, although current AI-based systems can exhibit high accuracy in controlled environments, they tend to lose output quality and reliability when exposed to the complexities of real-world civilian or battlefield environments. In such conditions, the probability of incorrect responses increases significantly. Especially under dynamic circumstances or in the presence of adversarial attacks targeting the UAV, reactive maneuvers become increasingly challenging.

For instance, distinguishing between civilians and combatants in dense urban environments or detecting camouflaged targets remains a considerable challenge for current AI models. Furthermore, the inability of deep learning models — often considered “black boxes” — to provide transparent reasoning for their decisions undermines the trustworthiness and accountability of weaponized UAVs in military operations.

In addition, processing high-bandwidth sensor data in real-time — such as high-resolution visible & thermal imagery or Synthetic Aperture Radar (SAR) outputs — imposes substantial computational burdens on existing architectures, increasing latency and slowing down decision-making.

Even on popular edge computing platforms like the NVIDIA Jetson Orin Nano, optimized models such as YOLOv8n typically operate at around 10 frames per second (FPS), which falls short of the 30 FPS threshold required for real-time navigation. Additionally, the need for active thermal management and an average power consumption of around 20 watts pose significant challenges concerning weight, energy constraints, and thermal design requirements on UAV platforms.
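In the FPS/W terms used in the NYUAD paper quoted earlier in this thread, those rough figures pencil out like this (illustrative arithmetic only, using the article's approximate numbers):

```python
# Rough FPS/W from the quoted Jetson Orin Nano figures (illustrative only).
fps, power_w = 10, 20   # "around 10 FPS" at "around 20 watts"
print(f"YOLOv8n on Orin Nano: {fps / power_w:.2f} FPS/W")  # ~0.50 FPS/W
```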

Source: https://www.seeedstudio.com/blog/20...eYa20KsdYMhX0iJB8GfkWnDoTWWnMlo74zgTLxHoOA3_A

Source: https://dataphoenix.info/a-guide-to-the-yolo-family-of-computer-vision-models/


Furthermore, transmitting data to the C4 (Command, Control, Communication, and Computers) center for decision-support purposes and relaying mission directives back to the UAV can introduce end-to-end latencies exceeding 150 milliseconds. Such delays risk causing the UAV to miss its optimal decision-making window, especially in time-sensitive scenarios.

Beyond performance metrics, next-generation military UAVs are expected to fulfill a range of advanced capabilities. These include: achieving higher levels of autonomy, maintaining superior situational awareness in complex and uncertain environments, making faster and more accurate decisions, extending operational range and flight endurance, and participating in joint missions in coordination with manned combat aircraft. Meeting these requirements is critical for future air combat effectiveness.

To address these demands, we must go beyond current AI architectures. There is a pressing need for solutions that are more efficient, faster, more reliable, and fundamentally smarter. Specifically, we require AI systems — both in software and hardware — that combine low power consumption, low latency, and high performance, while also possessing the ability to comprehend and act upon complex causal relationships within dynamic operational contexts.


Source: https://exoswan.com/is-neuromorphic-computing-the-future-of-ai


At this point, the combination of approaches such as Spiking Neural Networks (SNNs) and Causal Reasoning (CR) presents a promising pathway to overcome these challenges. These hybrid SNN-CR models hold significant potential to deliver strategic advantages for next-generation military UAVs.

Thanks to the temporally encoded and sparsely activated communication mechanisms of SNNs — modeled after biological neural systems — these networks exhibit significantly lower energy consumption compared to traditional Artificial Neural Networks (ANNs), while also enabling faster inference times under real-time operational constraints.


Source: https://peerj.com/articles/cs-2549/


In a study conducted by researchers at ETH Zurich, it was demonstrated that SNNs consumed six times less energy than a CNN when performing an obstacle avoidance task. Moreover, the SNN model operated with an average inference latency of just 2.4 milliseconds, showcasing its suitability for highly dynamic scenarios.

For further details, see:

https://www.sciencedirect.com/science/article/pii/S0925231223010081

Another critical aspect is Causal Reasoning (CR). When integrated — particularly through the framework proposed by Judea Pearl — CR enables AI systems to reason not only through learned correlations, but also through underlying causal relationships. This allows the model to perform robustly and generalize more effectively in environments that deviate from the training data distribution or when faced with unforeseen conditions.

To illustrate with a practical case: in a mission involving the detection of a human group as a potential target, conventional models may focus on features such as camouflage color, presence of vehicles, or group size. In contrast, a causally-informed model could prioritize more nuanced and domain-relevant cues — such as body height, shadows, camouflage patterns, military-style marching formations, or spacing intervals — along with distinct features like carried weapons, by leveraging appropriate sensor modalities and reasoning mechanisms.


Image Source: https://www.newscientist.com/

The component I wish to emphasize most — and which I have long researched for integration into this powerful architecture — is Causal Reasoning (CR). CR refers to the process by which an AI system understands and analyzes causal relationships between events, then uses this understanding to make decisions or support decision-making based on those causal inferences.

What distinguishes CR from conventional approaches? Traditional machine learning and large language models (LLMs) primarily rely on statistical correlations or correlation-based inference. By integrating CR, we aim to enable the model to reason through causality — essentially allowing the system to ask itself “why” and to answer that question autonomously. This line of reasoning is grounded in the foundational work of Judea Pearl, who introduced a formal framework for causality. Pearl’s model is built upon concepts such as Structural Causal Models (SCMs), causal graphs, and do-calculus.

So, what can CR contribute to a UAV system? For instance, it can be highly effective in diagnosing system failures. Imagine a scenario where a UAV encounters multiple issues such as GPS malfunction, engine anomalies, or communication loss. A causal model can represent these as nodes in a causal graph, with the edges denoting the causal links between them. This enables the system to perform counterfactual or interventional reasoning, such as:

“The GPS has failed, an engine fault has also been reported, but the propellers have not decelerated and location signals are still being transmitted — therefore, the fault may lie with the sensor supervisor.”
Or:
“GPS failure occurred, communication is down, and I am experiencing deviations in altitude and trajectory. I must immediately identify a safe emergency landing zone!”

Such inferential capabilities are powerful, aren’t they?
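As a toy illustration of this kind of diagnosis (my own sketch, not from the article - the fault nodes, symptom sets and scoring weights are invented), a hand-built causal graph plus a naive abduction step might look like:

```python
# Toy causal-graph diagnosis in the spirit of the UAV example above.
# Fault nodes, symptom sets and scoring are invented for illustration.
causes = {
    # hypothesis -> symptoms it would cause
    "gps_unit_failure":      {"gps_lost"},
    "engine_fault":          {"engine_anomaly", "altitude_deviation"},
    "comms_jamming":         {"comms_lost", "gps_lost"},
    "sensor_supervisor_bug": {"gps_lost", "engine_anomaly"},
}

def rank_causes(observed):
    """Rank hypotheses by evidence covered, penalising unseen predictions."""
    scores = []
    for cause, predicted in causes.items():
        covered = len(observed & predicted) / len(observed)
        spurious = len(predicted - observed)  # predicted but not observed
        scores.append((cause, covered - 0.1 * spurious))
    return sorted(scores, key=lambda kv: kv[1], reverse=True)

# "GPS has failed and an engine fault is reported, but comms are fine":
for cause, score in rank_causes({"gps_lost", "engine_anomaly"}):
    print(f"{cause:24s} {score:+.2f}")
# -> sensor_supervisor_bug ranks first, echoing the article's inference.
```

A real system would attach probabilities (an SCM) and support interventions via do-calculus; this only shows the shape of the reasoning.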


Source: https://www.nature.com/articles/s42256-023-00744-z


I hope this overview has been insightful. In the next section, we will delve into the technical details and adaptation strategies. Additionally, now that the neuromorphic chip — BrainChip Akida — has arrived, I will be working towards establishing a proof of concept (PoC) based on the proposed architecture.


See you soon!
Tags: Drones · Military · AI · Neural Networks · Military Drones

Written by Alican Kiraz
Head of Cyber Defense Center @Trendyol | CSIE | CSAE | CCISO | CASP+ | OSCP | eCIR | CPENT | eWPTXv2 | eCDFP | eCTHPv2 | OSWP | CEH Master | Pentest+ | CySA+
 
Reactions: 34 users

TheDrooben

Pretty Pretty Pretty Pretty Good

Screenshot_20250726_155702_LinkedIn.jpg
Screenshot_20250726_155755_LinkedIn.jpg
Screenshot_20250726_155805_LinkedIn.jpg
Screenshot_20250726_155818_LinkedIn.jpg


Happy as Larry
 
Reactions: 30 users

manny100

Top 20
Here is another post by Istanbul-based Alican Kiraz, Head of Cyber Defense Center @Trendyol Group, which links to a Medium article written by him titled “Hybrid SNN Models and Architectures for Causal Reasoning in Next-Generation Military UAVs” that gives us a little theoretical background regarding the PoC he is envisaging, for which he will be utilising his newly purchased Akida Mini PCIe Board:

(…)

Great post Frangipani.
" Similarly, components previously unseen in Iranian Shahed models — such as the Jetson Orin AI computer and several infrared (IR) camera modules recently announced by Russia — have been integrated into the MS series. This level of sophistication also necessitates the formation of a global procurement network: U.S.-made Nvidia chips and Japanese-manufactured camera sensors have reportedly reached Iran and Russia through indirect channels. In fact, reports from 2023 mention that over $17 million worth of Nvidia products entered Russia via gray-market routes, including Jetson Orin modules."

How long before Russia, Iran, China, etc. manage to get their hands on AKIDA one way or another? Pretty easy.
The US DoD would be aware of this. They would not want to be going up against Neuromorphic Edge AI with traditional AI.
Easy to see why the DoD is transitioning to Neuromorphic Edge AI.
US software quality will be the lynchpin of AKIDA's success.
If it comes to AKIDA vs AKIDA on the battlefield, the best software will win out, i.e. software produced by RTX, Lockheed Martin, etc.
AKIDA is the brains and the software does the thinking.
 
Reactions: 15 users

Rach2512

Regular


Screenshot_20250726_164002_Samsung Internet.jpg

Screenshot_20250726_164014_Samsung Internet.jpg
Screenshot_20250726_164025_Samsung Internet.jpg

Screenshot_20250726_163943_Samsung Internet.jpg
 
Reactions: 12 users

Frangipani

Top 20
No doubt Frontgrade Gaisler will be spruiking GR801 - their first neuromorphic AI solution for space, powered by Akida - during their upcoming Asia Tour through India, Singapore and Japan:


Yesterday, Frontgrade Gaisler visited ST Engineering Satellite Systems (SatSys) on the Singapore leg of their 2025 Asia Tour.

SatSys is a 🇸🇬 joint venture between ST Engineering, DSO National Laboratories and NTU (Nanyang Technological University).


4F1E3E30-3722-457B-BA51-E11B7C22B9BE.jpeg




455989D7-E523-497A-94DF-D88595295B8F.jpeg






B5724374-70DC-4F68-A3A6-305A261C8E32.jpeg



ST Engineering’s largest shareholder with about 50.9%* of total issued shares as at 31 December 2024 is Temasek Holdings, a global investment company wholly owned by the Singapore Government.

*Total shareholdings of Temasek Holdings and Vestal Investments.




8B585206-9D26-4DE6-AD71-68A98CB3EB8B.jpeg

9948F4DF-1FB4-4BC2-80D8-8CF761BC8B25.jpeg




647A118F-BFE4-49EE-9794-8AD30D647488.jpeg
81C98F13-C381-4242-B86B-10605D88CDF8.jpeg



4676CEC4-F1E2-411F-B621-E855A7317177.jpeg
86AA0F08-7A8D-4E9F-9CD1-C9C04D8CD123.jpeg


F15197E8-7BF8-4A88-90FA-B2DEF55E9877.jpeg



Other SatSys partners are STE-GI (ST Engineering Geo-Insights Pte. Ltd.), an end-to-end satellite downstream services provider as well as CRISP, the NUS (National University of Singapore) Centre for Remote Imaging, Sensing and Processing, which was established with funding from A*STAR, Singapore’s Agency for Science, Technology and Research.
 
Reactions: 10 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Screenshot 2025-07-26 at 9.48.02 pm.png








Screenshot 2025-07-26 at 9.47.47 pm.png
 
Reactions: 12 users

cosors

👀
Could this be interesting or is it impossible because 2nm is too small?

"Google Tensor
Pixel 11’s Tensor G6 will reportedly use 2nm process at TSMC, no longer lagging behind
| Jun 24 2025 - 5:30 am PT

According to a new report, Google is planning to jump to TSMC’s 2nm process for Tensor G6 faster than it has in prior years.

The first Tensor chip in 2021’s Pixel 6 was built on a 5nm process at Samsung, with the following year’s Tensor G2 chip using the same process despite the same year’s Snapdragon 8 Gen 2 moving to a 4nm process. It took Google a year to catch up, with Tensor G3 being the first on a 4nm process, and the current Tensor G4 following suit despite current Qualcomm and MediaTek chips being built on a 3nm process. Google’s move to TSMC later this year with Tensor G5 will mark the transition to a 3nm process.

But, according to a new report, Google isn’t looking to lag behind next year.

It’s being reported (via analyst @dnystedt) that Google will leverage TSMC’s 2nm process for Tensor G6 on the Pixel 11 series in 2026. If true, that marks a shift in how Google has kept up with the competition, as even the next Snapdragon 8 Elite chipset is likely going to still be using a 3nm process. Somewhat unbelievably, that could mean that Google could beat its competitors to the punch in this regard, as a 2nm Snapdragon chip wouldn’t arrive until a few months after Tensor G6. Notably, though, there is word that Qualcomm might use TSMC for a 3nm version of the upcoming Snapdragon flagship, while Samsung may produce a 2nm version specific to Galaxy devices.

The focus, for now, should absolutely be on Google’s initial switch to TSMC’s 3nm process later this year with Tensor G5, but it’s interesting nonetheless to see that Google is looking for further advancements this quickly."
 
Reactions: 4 users

Diogenese

Top 20
This bloke is a professor at Carnegie Mellon and founded spin-off Efficient Computer, which claims 100× energy efficiency.

Brandon Lucia • Professor at Carnegie Mellon University, Co-Founder & CEO at Efficient Computer • 2 months ago

I recently spoke to Embedded Insiders about how Efficient Computer is shaping the future of computing - for AI/ML, but also much more. We discuss the criticality of energy efficiency and the absolute need for general-purpose architectures, as opposed to over-specialized hardware. Generality keeps programmers happy, supports innovation, and actually, somewhat surprisingly, *increases* end-to-end efficiency (ask me about Amdahl's Law!)

We also talked about the future of the computing workforce and the next generation of computer engineers. Implicit in that discussion was the idea that major initiatives like the National Science Foundation are so crucial to shaping future scientists, startup endeavors like Efficient, and generally to creating trillions in economic value.

If you're interested in the energy-efficient future of computing, hit me up. We are doing something unique at Efficient Computer and I'd love to tell you more. Thanks again to Embedded Insiders for the great discussion!

This is their patent:

US2025190189A1 ENERGY-MINIMAL DATAFLOW ARCHITECTURE WITH PROGRAMMABLE ON-CHIP NETWORK 20220922

1753534667241.png




Disclosed herein is a co-designed compiler and CGRA architecture that achieves both high programmability and extreme energy efficiency. The architecture includes a rich set of control-flow operators that support arbitrary control flow and memory access on the CGRA fabric. The architecture is able to realize both energy and area savings over prior art implementations by offloading most control operations into a programmable on-chip network where they can re-use existing network switches.

The US Army has rights to the invention.

The patent dates from 2022, so its development overlapped with the BrainChip University pilot at Carnegie Mellon:


Laguna Hills, Calif. – August 16, 2022 BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of neuromorphic AI IP, is bringing its neuromorphic technology into higher education institutions via the BrainChip University AI Accelerator Program, which shares technical knowledge, promotes leading-edge discoveries and positions students to be next-generation technology innovators.

The Program successfully completed a pilot session at Carnegie Mellon University this past spring semester and will be officially launching with Arizona State University in September. There are five universities and institutes of technology expected to participate in the program during its inaugural academic year. Each program session will include a demonstration and education of a working environment for BrainChip’s AKD1000 on a Linux-based system, combining lecture-based teaching methods with hands-on experiential exploration.

The invention is above my paygrade, but it seems to be like removing traffic lights and allowing the program steps to weave their way through the process using inherent collision-avoidance code. Because it does use code, I would guess it's slower than Akida/TENNs.
 
Reactions: 11 users

Diogenese

Top 20
This bloke is a professor at Carnegie Mellon and founded spin-off Efficient Computer, which claims 100× energy efficiency.

(…)

Oh, of course!

https://www.efficient.computer/resources/the-real-reason-processors-waste-energy-and-how-we-fixed-it

Modern computing systems face a fundamental trade-off: achieving extreme energy efficiency often comes at the cost of general-purpose programmability. Today, we're introducing one of our key innovations in the Electron E1 processor that helps reconcile this tension, called Non-Uniform Processing-Element Access (NUPEA). This work was just published at this year's International Symposium on Computer Architecture, our field's top academic conference.



To understand why NUPEA is needed, it's important to recognize that in modern architectures, data movement—not computation—is the dominant bottleneck for energy, performance, and scalability. Thus, achieving high efficiency is synonymous with keeping the bulk of the computational work as close as possible to the memory that is accessed.
Today’s systems, including CPUs, GPUs, and other programmable accelerators, address this challenge by using distributed data memories amongst processing elements (PEs) [refer to figure of NUMA]. The idea is to predetermine which program data should be mapped to which memory, as well as how tasks should be assigned to PEs near each corresponding memory. This approach is widely known as Non-Uniform Memory Access (NUMA), as different memories have different (i.e., non-uniform) access times to different processors.

NUMA alone fails to balance efficiency and generality. For all but the most simple programs, determining how to co-schedule data and tasks is a nearly impossible compiler and systems problem. Workloads with irregular computational patterns (e.g., sparse ML models) are too hard to analyze, and even with perfect information it might not be possible to map data to significantly reduce data movement. Another common approach is to shift this burden to the programmer using domain-specific languages or low-level APIs. However, this sacrifices generality and limits usability.
NUPEA is an alternative to NUMA that comes from a simple insight: while analyzing and manipulating data is unduly difficult, analyzing and manipulating instructions is trivial for today’s compilers. Thus, rather than distributing memories, NUPEA instead gives PEs varying proximity to memory [refer now to the figure for NUPEA]. The effect is that some PEs are near-memory – thus more efficient — and other PEs are farther away. Efficient’s effcc Compiler can identify the critical instructions (e.g., that fire most often or are on the critical path) and place them closer to memory, significantly reducing data movement. Other, less critical instructions can be placed further away without a significant effect on energy or performance. Additionally, by limiting the fraction of near-memory PEs, NUPEA architectures can use lightweight, energy-efficient memory access networks.
NUPEA is a step toward smarter hardware-software co-design—offering a practical path to energy-efficient, general-purpose computing. In short, NUPEA reduces communication energy by moving the most important instructions to a limited set of near-data PEs. It works without any programmer involvement or profiling, works on any programming language, and does not require the data access pattern to be regular or analyzable – thus simultaneously achieving efficiency and general purpose programmability.
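A quick way to convince yourself of the NUPEA insight numerically (a toy cost model of my own, not from Efficient's paper): score a placement by total memory accesses weighted by PE-to-memory distance, then compare a naive program-order placement against putting the hottest instructions on the near-memory PEs.

```python
# Toy NUPEA-style placement model (my own illustration, not from the paper).
# Energy proxy: sum over instructions of (memory accesses) x (PE distance).
instrs = {"load_a": 900, "mac": 50, "load_b": 700, "branch": 5, "store": 300}
distances = [1, 1, 2, 3, 3]          # two near-memory PEs, three far PEs

def energy(placement):
    return sum(count * placement[name] for name, count in instrs.items())

naive = dict(zip(instrs, distances))                  # program-order placement
hot_first = dict(zip(sorted(instrs, key=instrs.get, reverse=True),
                     sorted(distances)))              # hottest nearest memory

print("naive placement    :", energy(naive))       # 3265
print("hot-first placement:", energy(hot_first))   # 2365, ~28% less movement
```

Even this crude proxy shows the hottest-first placement cutting the data-movement term by roughly a quarter, which is the effect the effcc compiler's critical-instruction placement is chasing.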
 
Reactions: 8 users