BRN Discussion Ongoing

Are only friends with small noses invited? Just asking for a friend…
GIF by Matea Radic
No, you're invited as well

 
  • Haha
  • Like
Reactions: 13 users
I like the way you think Smoothie! Hehehe! 🤣

If I ever become rich and famous, I’ll need some friends to come sailing with me around the Bahamas and you’ll be the first on my list of invitees.

We can look out from the yacht's jacuzzi and wave at Mr and Mrs Bezos as they float past us in their inferior vessel. And then we can pretend we can't hear what they're saying while they desperately try to discover where we bought our humungous yet totally groovy glasses.
Now that's poetry
 
  • Haha
Reactions: 2 users
The answer is that I personally believe BrainChip has already gained real traction in at least the following markets:

1. Aerospace and Defence

2. Drones (civilian)

3. Medical

4. Industrial

5. Cyber security

6. Transportation

7. RISC-V AI extensions

and yet we still hang around at what I consider to be well below true value, based on the lower of two analyst price targets, which range from A$1.20 to A$1.97.

My opinion only DYOR

Fact Finder
 
  • Like
  • Fire
Reactions: 25 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Are only friends with small noses invited? Just asking for a friend…
GIF by Matea Radic

Not at all Jo! Whatever size your schnoz is, is fine by me.

So count yourself invited!!!🥳 🍾🏖️🪇

And if you do happen to have an abnormally large honker, then you can safeguard my glasses by wearing them on top of your own pair when I go for a swim.
 
  • Haha
  • Love
  • Like
Reactions: 8 users

Frangipani

Top 20
SWaP is clearly one of our strengths moving forward. If you quietly check out with whom we are engaged, their USD market caps prove to all non-believers that BrainChip's technology at the far edge is currently out of the reach of our potential competitors. This isn't a child's little game; this is the real big league, and we are neck deep in these engagements.

Hi TECH,

are you privy to the depth of engagement that our competitors have with their partners?

Have a look at the ecosystems of two of those companies that, just like us, already have products on the market. Pretty unknown names, huh? Oh, wait…



 

  • Like
  • Fire
  • Love
Reactions: 12 users
Not at all Jo! Whatever size your schnoz is, is fine by me.

So count yourself invited!!!🥳 🍾🏖️🪇

And if you do happen to have an abnormally large honker, then you can safeguard my glasses by wearing them on top of your own pair when I go for a swim.
 
  • Like
  • Haha
Reactions: 2 users
Hi TECH,

are you privy to the depth of engagement that our competitors have with their partners?

Have a look at the ecosystems of two of those companies that, just like us, already have products on the market. Pretty unknown names, huh? Oh, wait…



Impressive
 
  • Like
Reactions: 1 user

Frangipani

Top 20
So I guess Loihi 2 will have to wait now … and BrainChip will profit from that situation. More good engineers are now also looking for a new job.
I read somewhere they were axing any division which didn't guarantee at least 30% return on investment, or something along those lines. Perhaps Loihi 2 is in that category?
I hope so. I guess Loihi 2 is not urgent enough for them to invest staff and money in its further development at this stage.
Intel is now reducing costs in all areas.

I doubt the neuromorphic team at Intel Labs will be affected much, if at all…
The appointment of Sachin Katti as Intel’s CTO, AI Officer and Head of Intel Labs suggests otherwise.



“Lip-Bu Tan, the newly appointed chief executive of Intel, has launched a major leadership overhaul aimed at streamlining decision-making at the company, according to Reuters. With the new changes, Sachin Katti will become the chief technology officer of Intel and will lead the company's AI effort. Also, the new management structure will get flatter and technical leaders from key groups will get direct lines to the CEO.

"Sachin Katti is expanding his role to include chief technology and AI officer for Intel," a spokesperson for Intel confirmed to Tom's Hardware. "As part of this, he will lead our overall AI strategy and AI product roadmap, as well as Intel Labs and our relationship with the startup and developer ecosystems."

New CTO

Up until now, Sachin Katti was in charge of Intel's networking and edge computing business unit, and prior to that he was CTO of that unit. However, with the expansion of his role, he will become chief technology officer of the whole company and the head of Intel Labs, and therefore responsible for all fundamental and applied research at Intel, including fundamental research for Intel's process technologies.

The appointment of a dedicated AI chief is perhaps long overdue, as Intel's AI strategy so far has not exactly been a success.
Perhaps the problem is that AI was part of Intel's data center unit and was considered somewhat of a second-class citizen, and therefore competed for both resources and management attention. With a dedicated lead, this could change, but keep in mind that Sachin Katti will not be solely dedicated to AI, as he will be Intel's CTO as well as in charge of the edge and networking business.

(…) That said, Katti will not be the first Intel CTO with additional responsibilities. However, Katti's CTO and AI responsibilities are both strategically important for the company's future and the fusion of the roles may be a strategic move by Lip-Bu Tan (…)”



Here are two press releases from MWC Barcelona 2024, when Sachin Katti was in charge of Intel’s Network and Edge Group. At that event, Ericsson demoed a radio receiver algorithm prototype targeting Loihi 2.



(screenshots of the two press releases)



And check out this video:

(embedded video)
Also, Intel recently hired Jean-Didier Allegrucci:

“Allegrucci has been named VP of AI System on Chip (SoC) Engineering. He will be responsible for managing the development of multiple SoCs that will be part of Intel’s AI roadmap. He joins from Rain AI, an innovative startup where he led AI silicon engineering. Prior to joining Rain, he spent 17 years at Apple where he oversaw the development of more than 30 SoCs used across many of the company’s flagship products.”

 
  • Like
  • Fire
  • Love
Reactions: 12 users

FromBeyond

Member
  • Like
  • Love
  • Haha
Reactions: 6 users

Frangipani

Top 20
And then there are also those who have recently joined BrainChip, such as Winston Tang and Aras Pirbadian, bringing along their individual talents, skills and interests:









How to Solve the Size, Weight, Power and Cooling Challenge in Radar & Radio Frequency Modulation Classification

By Aras Pirbadian and Amir Naderi
June 27, 2025

Modern radar and Radio Frequency (RF) signal processing systems—especially those deployed on platforms like drones, CubeSats, and portable systems—are increasingly limited by strict Size, Weight, Power and Cooling (SWaP-Cool) constraints. These environments demand real-time performance and efficient computation, yet many conventional algorithms are too resource-intensive to operate within such tight margins. As the need for intelligent signal interpretation at the edge grows, it becomes essential to identify processing methods that balance accuracy against these constraints.

One such essential task in radar and RF applications is Automatic Modulation Classification (AMC). AMC enables systems to autonomously recognize the modulation type of incoming signals without prior coordination, a function crucial for dynamic spectrum access, electronic warfare, and cognitive radar systems. However, many existing AI-based AMC models, such as deep CNNs or hybrid ensembles, are computationally heavy and ill-suited for low-SWaP-Cool deployment, creating a pressing gap between performance needs and implementation feasibility.

In this post, we’ll show how BrainChip’s Temporal Event-Based Neural Network (TENN), a state space model, overcomes this challenge. You’ll learn why conventional models fall short in AMC tasks—and how TENN enables efficient, accurate, low-latency classification, even in noisy RF environments.

Why Traditional AMC Models Fall Short at the Edge

AMC is essential for identifying unknown or hostile signals, enabling cognitive electronic warfare, and managing spectrum access. But systems like UAVs, edge sensors, and small satellites can’t afford large models that eat power and memory.
Unfortunately, traditional deep learning architectures used for AMC come with real drawbacks:
  • Hundreds of millions of Multiply-Accumulate (MAC) operations, resulting in high power consumption, and large parameter counts, demanding large memory
  • Heavy preprocessing requirements (e.g., Fast Fourier Transforms (FFTs), spectrograms)
  • Loss of accuracy below 0 dB Signal-to-Noise Ratio (SNR), where signal and noise have similar power
In mobile, airborne, and space-constrained deployments, these inefficiencies are showstoppers.
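As a quick numeric illustration of that 0 dB threshold (my own toy example, not from the post): SNR in decibels is 10·log10 of the signal-to-noise power ratio, so equal powers give 0 dB.

```python
import numpy as np

# Toy illustration of 0 dB SNR: equal signal and noise power.
# SNR_dB = 10 * log10(P_signal / P_noise); equal powers -> 0 dB.
rng = np.random.default_rng(1)
signal = rng.normal(size=100_000)   # unit-power "signal"
noise = rng.normal(size=100_000)    # unit-power noise
snr_db = 10 * np.log10(signal.var() / noise.var())
# snr_db sits near 0 dB: signal and noise are equally strong,
# the regime where conventional AMC models lose accuracy.
```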

BrainChip’s TENN Model: A Low-SWaP-Cool Breakthrough for Real-Time RF Signal Processing

BrainChip’s TENN model provides a game-changing alternative. It replaces traditional CNNs with structured state-space layers and is specifically optimized for low SWaP-Cool high-performance RF signal processing. State‑Space Models (SSMs) propagate a compact hidden state forward in time, so they need only constant‑size memory at every step. Modern SSM layers often recast this recurrent update as a convolution of the input with a small set of basis kernels produced by recurrence. Inference‑time efficiency therefore matches that of classic RNNs, but SSMs enjoy a major edge during training: like Transformers, they expose parallelizable convolutional structure, eliminating the strict step‑by‑step back‑propagation bottleneck that slows RNN training. The result is a sequence model that is memory‑frugal in deployment yet markedly faster to train than traditional RNNs, while still capturing long‑range dependencies without the quadratic cost of attention of Transformers.
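The recurrent-versus-convolutional duality described above can be sketched in a few lines of NumPy. Everything here is illustrative (toy sizes, random parameters); it is not BrainChip's TENN implementation, only the generic linear-SSM identity such models build on:

```python
import numpy as np

# Toy linear state-space layer: h[t] = A h[t-1] + B x[t], y[t] = C.h[t].
# Shows why the recurrent form needs only constant memory per step,
# while an equivalent convolution exposes parallelism across time.
rng = np.random.default_rng(0)
d, T = 4, 16
A = np.diag(rng.uniform(0.5, 0.9, size=d))  # stable diagonal transition
B = rng.normal(size=d)
C = rng.normal(size=d)
x = rng.normal(size=T)                      # input sequence

# 1) Recurrent (inference) form: one O(d) hidden state carried forward.
h = np.zeros(d)
y_rec = np.empty(T)
for t in range(T):
    h = A @ h + B * x[t]
    y_rec[t] = C @ h

# 2) Convolutional (training) form: y[t] = sum_k (C A^k B) x[t-k],
#    a kernel generated by the recurrence, applied in parallel over time.
K = np.array([C @ np.linalg.matrix_power(A, k) @ B for k in range(T)])
y_conv = np.convolve(x, K)[:T]

assert np.allclose(y_rec, y_conv)   # both forms compute the same output
```

The assertion at the end is the whole point: identical outputs, but the second form has no step-by-step dependency, which is what removes the RNN training bottleneck.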

TENN introduces the following innovations:

  • Compact state-space modeling that simplifies modulation classification by reducing memory usage and computation, offering a leaner alternative to transformer-based models
  • Tensor contraction optimization, applying efficient contraction strategies to minimize memory footprint and computation while maximizing throughput
  • A hybrid SSM architecture that replaces CNN layers and avoids attention mechanisms, maintaining feature richness at lower computational cost
  • Real-time, low-latency inference, by eliminating the need for FFTs or buffering at inference time

Matching Accuracy with a Fraction of the Compute

The Convolutional Long Short-Term Deep Neural Network (CLDNN), introduced by O’Shea et al. (2018), was selected as the benchmark model for comparison with BrainChip’s TENN. Although the original RadioML paper did not use the CLDNN acronym, it proposed a hybrid architecture combining convolutional layers with LSTM and fully connected layers—an architecture that has since become widely referred to as CLDNN in the AMC literature.
This model was chosen as a reference because it comes from the foundational paper that introduced the RadioML dataset—making it a widely accepted standard for evaluation. As a hybrid of convolutional and LSTM layers, CLDNN offers a meaningful performance baseline by capturing both spectral and temporal features of the input signals' In-phase (I) and Quadrature (Q) components (I/Q), which are used to represent complex signals in communication systems.
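For readers unfamiliar with I/Q data, here is a minimal sketch of how a RadioML-style frame can be built. The QPSK mapping and the frame length of 1024 are illustrative assumptions, not details taken from the post:

```python
import numpy as np

# A complex baseband signal split into In-phase (real) and Quadrature
# (imaginary) channels, the (2, N) real-valued input format used by
# RadioML-style AMC models. QPSK chosen as an example modulation.
rng = np.random.default_rng(2)
bits = rng.integers(0, 4, size=1024)                   # 2 bits per symbol
symbols = np.exp(1j * (np.pi / 4 + bits * np.pi / 2))  # four QPSK phases
iq = np.stack([symbols.real, symbols.imag])            # shape (2, 1024)
```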

While more recent models like the Mixture-of-Experts AMC (MoE-AMC) have achieved state-of-the-art accuracy on the RadioML 2018.01A dataset, they rely on complex ensemble strategies involving multiple specialized networks, making them unsuitable for low-SWaP-Cool deployments due to their high computational and memory demands. In contrast, TENN matches or exceeds the accuracy of CLDNN while operating at a fraction of the resource cost—delivering real-time, low-latency AMC performance with under 4 million MACs and no reliance on multi-model ensembles or hand-crafted features like spectral pre-processing.

With just ~3.7 million MACs and 276K parameters, TENN is over 100x more efficient than CLDNN, while matching or exceeding its accuracy—even in low-SNR regimes. Moreover, the latency in the table refers to simulated latency on an A30 GPU for both models.
(table: TENN vs. CLDNN — MACs, parameters, and simulated latency)


On the RadioML 2018.01A dataset (24 modulations, –20 to +30 dB), TENN consistently outperforms CLDNN, especially in mid-to-high SNR scenarios. Here is TENN's performance compared to CLDNN's over the SNR range of –20 to +30 dB:
(figure: TENN vs. CLDNN classification accuracy across –20 to +30 dB SNR)

Ready to bring low-SWaP-Cool AI to your RF platforms?

Today's RF systems need fast, accurate signal classification that fits into small power and compute envelopes. CLDNN and similar models are simply too resource-intensive. With TENN, BrainChip offers a smarter, more scalable approach, one that's purpose-built for edge intelligence.

By leveraging efficient state-space modeling, TENN delivers:
  • Dramatic reductions in latency, power consumption, and cooling requirements
  • Robust accuracy across noisy environments
  • Seamless deployment on real-time, mobile RF platforms
Whether you're deploying on a drone, CubeSat, or embedded system, TENN enables real-time AMC at the edge—without compromise.

Schedule a demo with our team to benchmark your modulation use cases on BrainChip’s event-driven AI platform and explore how TENN can be tailored to your RF edge deployment.
Book Demo

Tools and Resources Used

  • Dataset: RadioML 2018.01A – A widely used AMC benchmark with 2 million samples:
  • DeepSig Inc., "Datasets," [Online]. Available: https://www.deepsig.io/datasets
  • BrainChip Paper: Pei, “Let SSMs be ConvNets: State-Space Modeling with Optimal Tensor Contractions,” arXiv, 2024. Available: https://arxiv.org/pdf/2501.13230
  • Reference Paper: O’Shea, T. J., Roy, T., & Clancy, T. C. (2018). Over-the-air deep learning based radio signal classification. IEEE Journal of Selected Topics in Signal Processing, 12(1), 168–179. Available: https://doi.org/10.1109/JSTSP.2018.2797022
  • Framework: PyTorch was used to implement and train the TENN-based SSM classifier
  • Profiler: Thop, a library for profiling PyTorch models; it calculates the number of MACs and parameters.
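As a rough illustration of what those MAC and parameter counts measure (the Thop library mentioned above automates this bookkeeping for PyTorch models), here is a hand count for a hypothetical dense network. The layer sizes are invented for illustration and are not TENN's:

```python
# Hand-counting MACs and parameters for a toy fully connected network,
# the same quantities a profiler like Thop reports for PyTorch models.
# Layer sizes are illustrative only.
layers = [(1024, 256), (256, 64), (64, 24)]   # (fan_in, fan_out) per dense layer

macs = sum(i * o for i, o in layers)          # one MAC per weight per sample
params = sum(i * o + o for i, o in layers)    # weights plus biases

print(macs, params)
```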
 
  • Like
  • Fire
  • Love
Reactions: 21 users

Frangipani

Top 20
And then there are also those who have recently joined BrainChip, such as Winston Tang and Aras Pirbadian, bringing along their individual talents, skills and interests:


View attachment 83933

View attachment 83936



View attachment 83937 View attachment 83938

Meanwhile, one of our former interns left us after two months and is now interning with Accenture:



56DC5F83-0387-4C58-8BF9-D61F0DCE53A6.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 6 users

Diogenese

Top 20
We should have some feedback from this exhibition soon:

https://brainchip.com/sensors-converge-2025/

June 24-26 | Santa Clara Convention Center, CA


Join BrainChip at Sensors Converge 2025. Discover the future of sensing, processing, and connectivity at Sensors Converge 2025, North America’s leading event for design engineers. BrainChip is proud to exhibit and present how Akida™ technology is driving smarter, energy-efficient solutions for next-gen systems and IoT devices.
 
  • Like
  • Fire
Reactions: 17 users
The momentum is definitely picking up
 
  • Like
Reactions: 5 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

They may as well target us, since these are all the areas we excel at.

EXTRACT

Qualcomm has backed nearly 70 Taiwan startups since 2019 and is now focusing on drone, space technology, and cybersecurity companies as artificial intelligence processing moves from cloud to edge devices.
 
  • Like
  • Fire
  • Thinking
Reactions: 9 users

BrainShit

Regular
Hi TECH,

are you privy to the depth of engagement that our competitors have with their partners?

Have a look at the ecosystems of two of those companies that, just like us, already have products on the market. Pretty unknown names, huh? Oh, wait…





Whoever underestimates INNATERA is a fool…
 
  • Fire
  • Like
Reactions: 4 users

Frangipani

Top 20


Edge AI Milan 2025


Join BrainChip for Edge AI Milan 2025, July 2-4, an inspiring and forward-thinking event exploring how edge AI is bridging the gap between digital intelligence and real-world impact. Attendees can engage with industry leaders and experience the latest innovations, including BrainChip's Akida neuromorphic, event-driven AI at the edge.

Register



BrainChip will be exhibiting at Edge AI Milan 2025 next week.
It’s a pity, though, that no one from our company will be giving a presentation at that conference, especially since Innatera will be spruiking their technology.

In addition, Innatera's Petruț Antoniu Bogdan will give a workshop "on the current state of neuromorphic computing technologies and their potential to transform Edge AI" in his capacity as co-chair of the Edge AI Foundation Working Group on Neuromorphic Computing, to which BrainChip also belongs.
Our CMO Steve Brightfield is co-chair of another Edge AI Working Group, namely Marketing.


 
  • Like
  • Love
Reactions: 13 users

Frangipani

Top 20

 
  • Like
  • Fire
  • Love
Reactions: 22 users

Frangipani

Top 20
  • Like
  • Love
  • Fire
Reactions: 10 users

Frangipani

Top 20
  • Like
  • Love
Reactions: 13 users

Frangipani

Top 20

 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 15 users