BRN Discussion Ongoing


Frangipani

Top 20
And then there are also those who have recently joined BrainChip, such as Winston Tang and Aras Pirbadian, bringing along their individual talents, skills and interests:


View attachment 83933

View attachment 83936



View attachment 83937 View attachment 83938







How to Solve the Size, Weight, Power and Cooling Challenge in Radar & Radio Frequency Modulation Classification​

By Aras Pirbadian and Amir Naderi
June 27, 2025

Modern radar and Radio Frequency (RF) signal processing systems—especially those deployed on platforms like drones, CubeSats, and portable systems—are increasingly limited by strict Size, Weight, Power and Cooling (SWaP-Cool) constraints. These environments demand real-time performance and efficient computation, yet many conventional algorithms are too resource-intensive to operate within such tight margins. As the need for intelligent signal interpretation at the edge grows, it becomes essential to identify processing methods that balance accuracy with efficiency under these constraints.

One such essential task in radar and RF applications is Automatic Modulation Classification (AMC). AMC enables systems to autonomously recognize the modulation type of incoming signals without prior coordination, a function crucial for dynamic spectrum access, electronic warfare, and cognitive radar systems. However, many existing AI-based AMC models, such as deep CNNs or hybrid ensembles, are computationally heavy and ill-suited for low-SWaP-Cool deployment, creating a pressing gap between performance needs and implementation feasibility.

In this post, we’ll show how BrainChip’s Temporal Event-Based Neural Network (TENN), a state space model, overcomes this challenge. You’ll learn why conventional models fall short in AMC tasks—and how TENN enables efficient, accurate, low-latency classification, even in noisy RF environments.

Why Traditional AMC Models Fall Short at the Edge​

AMC is essential for identifying unknown or hostile signals, enabling cognitive electronic warfare, and managing spectrum access. But systems like UAVs, edge sensors, and small satellites can’t afford large models that eat power and memory.
Unfortunately, traditional deep learning architectures used for AMC come with real drawbacks:
  • Hundreds of millions of Multiply Accumulate (MAC) operations, resulting in high power consumption, and large parameter counts demanding large amounts of memory
  • Heavy preprocessing requirements (e.g., Fast Fourier Transforms (FFTs), spectrograms)
  • Degraded accuracy at or below 0 dB Signal-to-Noise Ratio (SNR), where signal and noise have similar power
In mobile, airborne, and space-constrained deployments, these inefficiencies are showstoppers.
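To make the first point above concrete, here is a back-of-the-envelope MAC count for a single 2D convolutional layer; the layer shape is a hypothetical mid-sized example, not taken from any specific AMC model:

```python
# Back-of-the-envelope MAC count for one 2D convolutional layer.
# The shapes below are illustrative assumptions only.

def conv2d_macs(h_out, w_out, c_in, c_out, k_h, k_w):
    # One MAC per (output position, output channel, kernel tap, input channel).
    return h_out * w_out * c_out * (c_in * k_h * k_w)

# 128x128 output feature map, 64 -> 128 channels, 3x3 kernel:
macs = conv2d_macs(128, 128, 64, 128, 3, 3)
print(f"{macs:,}")  # 1,207,959,552 -- over a billion MACs for one layer
```

A single mid-sized layer already costs over a billion MACs per forward pass, so a deep stack of such layers quickly exceeds what a drone or CubeSat power budget allows.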

BrainChip’s TENN Model: A Low-SWaP-Cool Breakthrough for Real-Time RF Signal Processing​

BrainChip’s TENN model provides a game-changing alternative. It replaces traditional CNNs with structured state-space layers and is specifically optimized for low-SWaP-Cool, high-performance RF signal processing. State-Space Models (SSMs) propagate a compact hidden state forward in time, so they need only constant-size memory at every step. Modern SSM layers often recast this recurrent update as a convolution of the input with a small set of basis kernels produced by the recurrence. Inference-time efficiency therefore matches that of classic RNNs, but SSMs enjoy a major edge during training: like Transformers, they expose parallelizable convolutional structure, eliminating the strict step-by-step back-propagation bottleneck that slows RNN training. The result is a sequence model that is memory-frugal in deployment yet markedly faster to train than traditional RNNs, while still capturing long-range dependencies without the quadratic attention cost of Transformers.
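The recurrence-to-convolution duality described above can be sketched in a few lines of NumPy. The matrices here are small random placeholders, not TENN's trained parameterization:

```python
import numpy as np

# Minimal sketch, under illustrative assumptions: A, B, C are small random
# matrices, not TENN's parameters. A linear state-space recurrence
#     x[k+1] = A x[k] + B u[k],   y[k] = C x[k]
# unrolls into a convolution of the input with kernel K = [CB, CAB, CA^2B, ...].

rng = np.random.default_rng(0)
n, T = 4, 16                       # state size, sequence length
A = 0.3 * rng.normal(size=(n, n))  # scaled down for stability
B = rng.normal(size=(n, 1))
C = rng.normal(size=(1, n))
u = rng.normal(size=T)

# 1) Step-by-step recurrence: constant-size memory (just the state x),
#    which is what makes SSM inference cheap at the edge.
x = np.zeros((n, 1))
y_rec = []
for k in range(T):
    y_rec.append((C @ x).item())
    x = A @ x + B * u[k]
y_rec = np.array(y_rec)

# 2) Equivalent convolutional form: the whole sequence is computable in
#    parallel, which is what makes SSM training fast.
K = np.array([(C @ np.linalg.matrix_power(A, j) @ B).item() for j in range(T)])
y_conv = np.array([sum(K[j] * u[k - 1 - j] for j in range(k)) for k in range(T)])

assert np.allclose(y_rec, y_conv)  # both paths give the same output
```

The recurrence touches one sample at a time with a fixed-size state, while the convolutional view processes the whole sequence at once—the same model, evaluated two ways.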

TENN introduces the following innovations:​

  • Compact state-space modeling that simplifies modulation classification by reducing memory usage and computation—offering a leaner alternative to transformer-based models.
  • Tensor contraction optimization, applying efficient strategies to minimize memory footprint and computation while maximizing throughput.
  • A hybrid SSM architecture that replaces CNN layers and avoids attention mechanisms, maintaining feature richness at lower computational cost.
  • Real-time, low-latency inference, eliminating the need for FFTs or buffering at inference time.

Matching Accuracy with a Fraction of the Compute​

The Convolutional Long Short-Term Deep Neural Network (CLDNN), introduced by O’Shea et al. (2018), was selected as the benchmark model for comparison with BrainChip’s TENN. Although the original RadioML paper did not use the CLDNN acronym, it proposed a hybrid architecture combining convolutional layers with LSTM and fully connected layers—an architecture that has since become widely referred to as CLDNN in the AMC literature.
This model was chosen as a reference because it comes from the foundational paper that introduced the RadioML dataset—making it a widely accepted standard for evaluation. As a hybrid of convolutional and LSTM layers, CLDNN offers a meaningful performance baseline by capturing both spectral and temporal features of the input signals in the In-phase (I) and Quadrature (Q) (I/Q) components, which are used to represent complex signals in communication systems.
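As a concrete illustration of I/Q inputs and the 0 dB regime discussed here, the following sketch generates QPSK symbols and adds complex noise with power equal to the signal's. The modulation choice and sample count are arbitrary assumptions for demonstration, not the RadioML 2018.01A configuration:

```python
import numpy as np

# Illustrative sketch of I/Q samples at 0 dB SNR (signal power == noise power).
rng = np.random.default_rng(42)
n_sym = 4096

# QPSK: four equally spaced phases, unit average power.
phases = np.pi / 4 + (np.pi / 2) * rng.integers(0, 4, n_sym)
symbols = np.exp(1j * phases)

# 0 dB SNR: noise power equals signal power (here, 1).
snr_db = 0.0
noise_power = 1.0 / 10 ** (snr_db / 10)
noise = np.sqrt(noise_power / 2) * (
    rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym)
)
rx = symbols + noise

# The classifier consumes the In-phase/Quadrature pair per sample.
iq = np.stack([rx.real, rx.imag], axis=0)  # shape (2, n_sym)
```

At 0 dB, the received constellation is heavily smeared, which is why models that rely on clean spectral features struggle in this regime.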

While more recent models like the Mixture-of-Experts AMC (MoE-AMC) have achieved state-of-the-art accuracy on the RadioML 2018.01A dataset, they rely on complex ensemble strategies involving multiple specialized networks, making them unsuitable for low-SWaP-Cool deployments due to their high computational and memory demands. In contrast, TENN matches or exceeds the accuracy of CLDNN while operating at a fraction of the resource cost—delivering real-time, low-latency AMC performance with under 4 million MACs and no reliance on multi-model ensembles or hand-crafted features like spectral pre-processing.

With just ~3.7 million MACs and 276K parameters, TENN is over 100x more efficient than CLDNN, while matching or exceeding its accuracy—even in low-SNR regimes. The latency figures in the table were simulated on an A30 GPU for both models.
[Table: TENN vs. CLDNN — MACs, parameters, and simulated A30 GPU latency]


On the RadioML 2018.01A dataset (24 modulations, –20 to +30 dB), TENN consistently outperforms CLDNN, especially in mid-to-high SNR scenarios. Here is TENN's performance compared to CLDNN's over the SNR range of –20 to +30 dB:
[Figure: TENN vs. CLDNN classification accuracy across the –20 to +30 dB SNR range]

Ready to bring low SWaP-Cool AI to your RF platforms?​

Today’s RF systems need fast, accurate signal classification that fits into small power and compute envelopes. CLDNN and similar models are simply too resource-intensive. With TENN, BrainChip offers a smarter, more scalable approach—one that’s purpose-built for edge intelligence.

By leveraging efficient state-space modeling, TENN delivers:
  • Dramatically reduced latency, power consumption, and cooling requirements
  • Robust accuracy across noisy environments
  • Seamless deployment on real-time, mobile RF platforms
Whether you're deploying on a drone, CubeSat, or embedded system, TENN enables real-time AMC at the edge—without compromise.

Schedule a demo with our team to benchmark your modulation use cases on BrainChip’s event-driven AI platform and explore how TENN can be tailored to your RF edge deployment.
Book Demo

Tools and Resources Used​

  • Dataset: RadioML 2018.01A – a widely used AMC benchmark with 2 million samples. DeepSig Inc., "Datasets." [Online]. Available: https://www.deepsig.io/datasets
  • BrainChip Paper: Pei, "Let SSMs be ConvNets: State-Space Modeling with Optimal Tensor Contractions," arXiv, 2024. Available: https://arxiv.org/pdf/2501.13230
  • Reference Paper: O’Shea, T. J., Roy, T., & Clancy, T. C. (2018). Over-the-air deep learning based radio signal classification. IEEE Journal of Selected Topics in Signal Processing, 12(1), 168–179. Available: https://doi.org/10.1109/JSTSP.2018.2797022
  • Framework: PyTorch was used to implement and train the TENN-based SSM classifier.
  • Profiling: Thop, a library for profiling PyTorch models; it calculates the number of MACs and parameters.
 

Frangipani

Top 20

Meanwhile, one of our former interns left us after two months and is now interning with Accenture:




Diogenese

Top 20
We should have some feedback from this exhibition soon:

https://brainchip.com/sensors-converge-2025/

June 24-26 | Santa Clara Convention Center, CA


Join BrainChip at Sensors Converge 2025. Discover the future of sensing, processing, and connectivity at Sensors Converge 2025, North America’s leading event for design engineers. BrainChip is proud to exhibit and present how Akida™ technology is driving smarter, energy-efficient solutions for next-gen systems and IoT devices.
 
The momentum is definitely picking up
 

Bravo

If ARM was an arm, BRN would be its biceps💪!

They may as well target us, since these are all the areas we excel at.

EXTRACT

Qualcomm has backed nearly 70 Taiwan startups since 2019 and is now focusing on drone, space technology, and cybersecurity companies as artificial intelligence processing moves from cloud to edge devices.
 

BrainShit

Regular
Hi TECH,

are you privy to more details about the depth of engagement that our competitors have with their partners?

Have a look at the ecosystems of two of those companies that, just like us, already have products on the market. Pretty unknown names, huh? Oh, wait…



View attachment 87728

View attachment 87733



View attachment 87729
View attachment 87730 View attachment 87732


Whoever underestimates INNATERA is a fool....
 

Frangipani

Top 20


Edge AI Milan 2025​


Join BrainChip for Edge AI Milan 2025, July 2-4, an inspiring and forward-thinking event exploring how edge AI is bridging the gap between digital intelligence and real-world impact. Attendees can engage with industry leaders and experience the latest innovations, including BrainChip’s Akida neuromorphic, event-driven AI at the edge.

Register



BrainChip will be exhibiting at Edge AI Milan 2025 next week.
It’s a pity, though, that no one from our company will be giving a presentation at that conference, especially since Innatera will be spruiking their technology.

In addition, Innatera’s Petruț Antoniu Bogdan will give a workshop “on the current state of neuromorphic computing technologies and their potential to transform Edge AI” in his capacity as co-chair of the Edge AI Foundation Working Group on Neuromorphic Computing, which BrainChip also belongs to.
Our CMO Steve Brightfield is co-chair of another Edge AI Working Group, namely Marketing.






TECH

Regular

Hi Frangi,

For the last decade, every poster on HC and this forum has known that I am totally one-eyed.....my posts generally contain a combination of fact and personal opinion.

Yes, we have competition, which is very healthy, and no, we will never command the whole pie, or 50%, or even 25% (guess), but we will establish ourselves as a respected player (IP supplier) in the Edge AI market; to think otherwise would be extremely negative given what we "believe" we have going on currently.

Thank you for your continued dedication (time) devoted to keeping our forum balanced.

Kind regards......Tech
 

Sean has stated at least a couple of times now how, when new technological frontiers are created, it is a bit of a "Wild West" for a while, until two or three dominant players come to the fore.

He wants "us" to be one of those dominant players and so obviously, do we!
I would think this would look like something of a 20 to 30% market share of our targeted markets combined.

With our apparent foothold lead in the Premier, Space, Military and Medical fields, this is well within reach, in my opinion.
But it's obviously the Big breakthrough in "bread and butter" consumer markets, that we eagerly seek.

Despite what our 20 cent share price and lack of any serious revenue show, "we" have been laying the foundations and groundwork to become one of those dominant players for some time now.

Tony Lewis is hinting very strongly at a new technological update coming out imminently, which, while strategically important and value-adding from an IP perspective, is not the kind of announcement we really need.


I know Sean must have mixed up the time zones, as "his" Friday has only recently ended, and the necessity to announce a new large IP deal on the ASX missed even "our" late-announcements deadline.
So Monday morning, is still on the cards, for the Big Announcement, I promised last week. 😉👍




7für7

Top 20
BULLIIIIIIIISH


7für7

Top 20
Have a nice weekend!


FF. On crapper.........This part of the website was updated in the last 12 hours. So much is now ready and being offered to customers.

 

Rach2512

Regular

See Philip Dodge's comment.

