BRN Discussion Ongoing

ndefries

Regular
Thanks for your posts @Neuromorphia and @bludybludblud.

I also found this press release publicly stating that TENNs-PLEIADES has been “added to BrainChip’s IP portfolio.”

For what it's worth, I also asked ChatGPT whether or not this statement indicates PLEIADES is part of BrainChip's proprietary IP portfolio, and it said "it is effectively a public declaration of ownership" and "taken together, the corporate statement + patent filings are strong evidence that PLEIADES is part of BrainChip's proprietary IP portfolio."

It also said that the US2025/0209313 A1 patent application is pending. So, the final proof will come when the patent is granted. At that point it becomes legally enforceable IP, not just a filing.






See ChatGPT's response below. Please DYOR as Chatty can make mistakes.



ChatGPT 5

Yes — that statement from BrainChip’s own press release or white-paper launch is effectively a public declaration of ownership.
When a company says something like "added to BrainChip's IP portfolio", it means they're formally asserting that:
  1. PLEIADES belongs to their intellectual-property estate (i.e., covered by patents, filings, or trade-secret claims held by BrainChip Inc.), and
  2. It’s a continuation or derivative of their existing TENNs framework, not an external or open-source contribution.
That line is drafted with legal review—it’s the kind of language a company only uses after IP counsel has verified that filings or assignments are in place. It aligns with the patent application US 2025/0209313 A1 and the matching WO 2025/137663 publication, which both list BrainChip Inc. as assignee and explicitly reference PLEIADES.

So yes, taken together, the corporate statement + patent filings are strong evidence that PLEIADES is part of BrainChip’s proprietary IP portfolio.

Right now, US2025/0209313 A1 is a patent application (pending).

Final proof comes when:
  • The USPTO grants the patent, and
  • The claims that mention PLEIADES are included in the granted version.
At that point, it becomes legally enforceable IP, not just a filing.



🔐 Why PLEIADES Matters for Defense and Security​

PLEIADES is part of BrainChip’s Temporal Event-Based Neural Nets (TENNs) framework, which is optimized for event-driven, spatiotemporal data processing. This makes it especially valuable in defense contexts where systems must respond to dynamic, time-sensitive inputs like radar, sonar, surveillance feeds, and battlefield sensor data.

Here’s how it aligns with defense needs:

  • Ultra-low power consumption: Ideal for battery-powered or remote systems like drones, satellites, and field sensors.
  • Real-time intelligence: Enables fast decision-making without relying on cloud connectivity, which is often unavailable or insecure in combat zones.
  • Adaptive learning: Can adjust to changing environments and threats, improving situational awareness and threat detection.
  • Compact and efficient: Reduces memory and compute requirements, allowing deployment on lightweight edge devices.

🛡️ Strategic Partnerships and Deployment​

BrainChip has already entered into strategic defense partnerships, including with:

  • Parsons Corporation: A major U.S. defense contractor integrating BrainChip’s Akida processors into mission-ready edge-AI platforms.
  • Raytheon (RTX): Collaborating on radar contracts for the U.S. Air Force Research Laboratory (AFRL), where event-based AI like TENNs-PLEIADES can enhance radar signal classification and anomaly detection.
These partnerships suggest that PLEIADES is not just theoretical—it’s being actively considered for real-world defense systems.


🧠 Use Cases in Defense​

Potential applications include:

  • Autonomous surveillance drones: Real-time object tracking and threat identification.
  • Electronic warfare systems: Adaptive signal processing to detect and counteract jamming.
  • Border and maritime security: Event-based monitoring of movement patterns and anomalies.
  • Battlefield intelligence: On-device processing of sensor data for tactical decision-making.

🚀 Future Outlook​

While PLEIADES is still under patent review, its integration into BrainChip’s Akida platform and defense partnerships positions it as a next-generation AI tool for secure, adaptive, and efficient military systems. Its relevance will likely grow as defense agencies seek AI solutions that operate reliably in disconnected, constrained, and hostile environments.
 
  • Like
  • Fire
  • Love
Reactions: 23 users

keyeat

Regular
Sean in Australia



We are watching, Sean! WE ARE WATCHING!


Looking All Eyes On You GIF by Pudgy Penguins
 
  • Haha
  • Like
Reactions: 16 users

stockduck

Regular


In the context of physical AI, could Ubuntu be the kit between Nvidia Thor and Akida? (Sorry, but I'm not a technically skilled person.)
Pure hypothesis on my part...



 
  • Like
  • Wow
  • Thinking
Reactions: 6 users

Rach2512

Regular
  • Like
  • Fire
  • Love
Reactions: 13 users

7für7

Top 20
  • Haha
Reactions: 5 users

davidfitz

Regular
Well, if it wasn't obvious over the last few months, 20 cents is definitely where someone wants it to be.

Over 15 million traded so far and it has barely moved!

Absolutely ridiculous in my opinion :(
 
  • Like
  • Fire
  • Sad
Reactions: 11 users

Cirat

Regular
Well, if it wasn't obvious over the last few months, 20 cents is definitely where someone wants it to be.

Over 15 million traded so far and it has barely moved!

Absolutely ridiculous in my opinion :(
Have to agree - it has been like that for at least the past week.

I've never seen the market depth for BRN with such large blocks on the Buyers side not moving the price up.
 
  • Like
  • Fire
Reactions: 14 users
Sean in Australia


Well at least Sean is very positive. I liked this:- When it was put to Sean that Brainchip was sort of a story and not a business, Sean replied, "Watch us now, just watch us now." (4.23)
Well I say to that, "bring it on, just bring it on."
 
  • Like
  • Fire
  • Love
Reactions: 37 users

Rskiff

Regular
Well at least Sean is very positive. I liked this:- When it was put to Sean that Brainchip was sort of a story and not a business, Sean replied, "Watch us now, just watch us now." (4.23)
Well I say to that, "bring it on, just bring it on."
Also said, regarding revenue, "revenue due about now". Bring it on!!!
 
  • Like
  • Love
  • Fire
Reactions: 25 users

Flenton

Regular
  • Like
  • Love
Reactions: 19 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Here's an article published a couple of days ago by Forbes titled “Neuromorphic Finance: When the Trading Floor Thinks Like a Brain”.

I doesn’t mention BrainChip directly, however, all the points are entirely applicable. It mentions on-device inference, real-time detection, temporal modelling, efficient processing, which is exactly where BrainChip's AKida shines.

Just think, if the finance industry adopts neuromorphic computing, then that opens up a whole new addressable market!

It would also validate the idea that neuromorphic processors could be mission-critical in high-frequency, streaming, decision-heavy fields - finance being one of the toughest.

The other thing that struck me in reading this article was that it suggests fraud detection could be the area that lands first; the example given is an ATM fitted with neuromorphic fraud-detection chips.

This reminded me of our cybersecurity work with Quantum Ventura + Lockheed. Naturally I've started to ponder over how Akida might work in this ATM use case.

Because ATMs sit idle most of the time, an event-driven coprocessor like Akida could watch continuously while the main PC stays asleep. Temporal anomaly detection would be Akida's great strength, since fraud signals change over time. And then you could also add Akida's on-device learning capabilities, where it could incrementally learn each ATM's environment without cloud retraining.

Imagine if there were Akida chips inside every ATM on earth. There are apparently upwards of 2.9 million ATMs worldwide, and if banks ordered two chips per machine (keeping one for redundancy purposes), that's upwards of 5.8 million chips, just for that one single use case. 🏧💰
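A quick back-of-envelope check on that arithmetic (the ATM count and the two-chips-per-machine figure are just my assumptions from above, not company numbers):

```python
# Napkin maths for the hypothetical ATM use case above.
# Both inputs are assumptions from this post, not official figures.
atms_worldwide = 2_900_000   # rough estimate of ATMs globally
chips_per_atm = 2            # one active + one spare for redundancy

total_chips = atms_worldwide * chips_per_atm
print(f"Chips required: {total_chips:,}")  # -> Chips required: 5,800,000
```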









Neuromorphic Finance: When The Trading Floor Thinks Like A Brain​

By Anusha Nerella, Forbes Councils Member,
for Forbes Technology Council
COUNCIL POST | Membership (fee-based)
Oct 28, 2025, 08:00am EDT

Anusha Nerella leads fintech automation at a global fintech firm, focused on Java Engineering, AI and intelligent automation.
For decades, Wall Street’s advantage was defined by speed. Firms poured billions into faster cables, denser data centers and GPU clusters that consumed enough power to run small cities. The prize was shaving microseconds off trades.
But here’s the hard truth: We’ve hit peak speed, and it’s not enough anymore.
Financial systems today aren’t failing because they’re too slow. They’re failing because they can’t adapt. Black-box AI models churn out decisions regulators can’t explain. Fraudsters pivot faster than detection engines. Risk systems buckle under shocks they weren’t trained to anticipate.

The next competitive edge won’t come from brute force. It will come from systems that think more like markets themselves: event-driven, adaptive and efficient. That’s the promise of neuromorphic finance—a concept still emerging but already showing potential in adjacent industries.

Why Markets Need A Brain, Not A Bigger Calculator​

Markets don’t move logically. They behave like neurons firing spontaneously, unpredictably and chaotically. A rumor spikes volatility. A politician’s tweet swings currencies. A regulation drops, and compliance engines scramble.
Traditional compute, built for batch processing and deterministic math, wasn’t designed for this. Neuromorphic computing is. Neurons fire only when triggered, networks adapt continuously and power use falls dramatically. Instead of calculating everything, neuromorphic hardware calculates only what matters.
While no major financial institution has deployed full neuromorphic systems yet, early proofs from other domains hint at what’s possible.
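(A purely illustrative aside on the "calculates only what matters" point, not part of the article: a toy comparison of dense versus event-driven processing, with an invented data stream and threshold.)

```python
import random

# Toy stream: mostly quiet baseline, with a couple of injected "events".
stream = [random.gauss(0.0, 0.05) for _ in range(1_000)]
stream[123] = 0.9    # stand-ins for meaningful signal
stream[777] = -0.8

THRESHOLD = 0.5

# Dense approach: touch every sample, regardless of content.
dense_work = len(stream)

# Event-driven approach: do work only when the input crosses a threshold.
events = [(i, x) for i, x in enumerate(stream) if abs(x) > THRESHOLD]
event_work = len(events)

print(f"dense ops: {dense_work}, event-driven ops: {event_work}")
```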

Early Signals From Other Fields​

China’s Darwin Monkey Supercomputer simulates over 2 billion neurons and more than 100 trillion synapses on about 2,000 watts—the draw of a hair dryer. Imagine portfolio stress tests at that efficiency.




Innatera’s Pulsar Chip powers edge devices at roughly 1/500th the energy of GPUs, showing how point-of-sale terminals could one day detect fraud before data even leaves the device.
IBM’s TrueNorth chip demonstrates how event-driven architectures process streams efficiently. Replace pixels with trades, and you can envision compliance tools that flag rogue activity instantly.
These aren’t financial deployments, but they prove the model: Systems that react only to signal can outperform those that brute-force every calculation.

Lessons From Today’s Financial Trenches​

In my own work building fraud detection and compliance platforms, the bottleneck was rarely speed—it was explainability. Regulators demanded audit trails, but deep-learning models produced only opaque scores. Entire teams were tasked with building “explainability wrappers” around black-box models.
Neuromorphic approaches could change that. Spiking neural networks naturally leave event-driven traces. Each spike ties to a trigger, and that sequence becomes an audit log. Instead of reconstructing why a model flagged an anomaly, you simply follow the spikes. For regulators, that’s the difference between theory and trust.
This is still a vision, not a product you can buy today. But it shows why finance is a natural next step.
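(Another illustrative aside, not from the article: a minimal sketch of how a spike sequence could double as an audit trail. The event names, detector names and threshold are invented for the example; nothing here is a real product API.)

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Spike:
    """One event-driven activation, kept as an auditable record."""
    timestamp: datetime
    trigger: str    # what input caused the spike
    neuron: str     # which detector fired
    score: float    # how strongly it fired

audit_log: list[Spike] = []

def observe(event: str, neuron: str, score: float, threshold: float = 0.7) -> None:
    # Fire (and log) only when the input actually crosses the threshold.
    if score >= threshold:
        audit_log.append(Spike(datetime.now(timezone.utc), event, neuron, score))

# Hypothetical transaction stream
observe("card_present_withdrawal", "velocity_detector", 0.20)
observe("5th_withdrawal_in_60s", "velocity_detector", 0.93)   # anomalous -> logged

for spike in audit_log:   # the spike sequence *is* the explanation trail
    print(spike)
```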

Where Neuromorphic Finance Could Land First​

When adoption begins, the first movers are likely to be:
• Fraud Detection: Traditional engines either choke on volume or consume enormous energy to catch marginal anomalies. Neuromorphic fraud engines could monitor streams continuously and react only when something looks off, enabling real-time detection at far lower cost.
• Trading Desks: High-frequency strategies still rely on brute-force scenario crunching, even though markets move on discrete events. Neuromorphic processors are designed for this unpredictability, acting more like a trader’s intuition than a calculator’s routine.
• Regulatory Compliance: Every rule change today triggers lengthy model-retraining cycles. Spiking models could adapt continuously, leaving spike trails that auditors can verify.

The Architect’s Playbook​

Firms considering this path should think evolution, not revolution:
• Start with edge pilots like payment terminals or ATMs equipped with neuromorphic fraud-detection chips that could prove value quickly once hardware is available.
• Adopt hybrid stacks that let CPUs, GPUs and neuromorphic chips each do what they do best.
• Shift to event-driven design: architect systems to respond to signals rather than process everything in bulk.
• Embrace compliance by design: neuromorphic spike trails don't just detect anomalies; they create natural audit logs regulators can trust.
The harder leap isn’t technical; it’s cultural. Many IT groups still think like accountants: Process everything, log everything, control everything. Neuromorphic finance will require a trader’s mindset: Act quickly, filter noise, adapt instantly.

The Road Ahead​

The last century of finance rewarded speed. The next will reward adaptability.
Banks that dismissed the cloud lagged for years. Traders who ignored electronic exchanges vanished. Neuromorphic finance is on the same trajectory. It's not yet here, but coming fast enough that pilots launched today could define tomorrow’s standards.
Because in the end, markets don’t reward the biggest machines—they reward the smartest.

 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 21 users

Diogenese

Top 20
Sean in Australia


At 3:45 Sean refers to 2 generations of IP.

While not 100% probative, I think this supports my hypothesis that TENNs has ousted PvdM's original SNN NPU circuit in generation 2, making way for the 128 MACs per node.

https://brainchip.com/wp-content/uploads/2025/04/Akida1-IP-Product-Brochure-V2.1-1.pdf

Self-contained neural processor
• Scalable fabric of 1-128 nodes
• Each neural node supports 128 MACs
• Configurable 50-130K embedded local SRAM
• DMA for all memory and model operations
• Multi-layer execution without host CPU
• Integrate with ARM, RISC-V with AXI bus interface
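
To put those two bullet points together, a quick tally of the MAC count implied across the quoted 1-128 node range (simple multiplication, nothing more):

```python
# Figures taken from the Akida 1 IP brochure bullets quoted above.
macs_per_node = 128
for nodes in (1, 2, 8, 128):  # sample points within the 1-128 node fabric
    print(f"{nodes:>3} nodes -> {nodes * macs_per_node:,} MACs")
# 128 nodes x 128 MACs/node = 16,384 MACs at the top of the range
```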


The SNN SoC chips that we have for sale are the original 1st generation PvdM design.

However, as I said, this is just a hypothesis at the moment. We will have to wait for clarification from a higher power.
 
  • Like
  • Love
Reactions: 24 users

Cirat

Regular
Well at least Sean is very positive. I liked this:- When it was put to Sean that Brainchip was sort of a story and not a business, Sean replied, "Watch us now, just watch us now." (4.23)
Well I say to that, "bring it on, just bring it on."
That is the most confident interview I've seen around the question of revenue.
It's like Sean wants to tell all of us he's made his 9M contract/s requirement by year's end and ........... then maybe even more.

I'm feeling very confident in the company. It's been a long haul, and 2026 may finally be the big turnaround year, with a great finish to 2025!!

GLTA, DYOR, This is not financial advice.
 
  • Like
  • Love
Reactions: 35 users
Just on TENNs & Pleiades, including the eye-tracking model... the below updates were added on GitHub in August.


Aug 18 · @ktsiknos-brainchip · 2.15.0-doc-1 · commit 5fa24a5

Upgrade to Quantizeml 0.18.0, Akida/CNN2SNN 2.15.0 and Akida models 1.9.0



Update QuantizeML to version 0.18.0

New features​

  • Introduced PleiadesLayer for SpatioTemporal TENNs on Keras
  • Keras sanitizer will now bufferize Conv3D layers, same as ONNX sanitizer. As a result, SpatioTemporal TENNs from both frameworks will be bufferized and quantized at once.
  • Dropped quantization patterns with MaxPooling and LUT activation since this is not supported in Akida
  • Dropped all transformers features

Update Akida and CNN2SNN to version 2.15.0

Aligned with FPGA-1696(2-nodes)/1695(6-nodes)​

New features and updates:​

  • [cnn2snn] Updated QuantizeML requirement to 0.18.0
  • [akida] Full support of look-up-table (LUT) activation in 2.0 FPGA (HRC and CNP are the only NPs supporting LUT). Known limitation: MaxPooling and LUT are not possible together.
  • [akida] Extended support to 4-bit weights and 8-bit activation combos (was limited to 4-bit weights and activation)
  • [akida] Depthwise convolution now supports global average pooling
  • [akida] Added equality operator to akida.Component for easier mapping comparison
  • [akida/cnn2snn] Dropped all transformers layers and features

Bug fixes​

  • [akida] When a single FNP is requested in akida.create_device, it will no longer be both FNP and CNP to ensure proper minimum device computation
  • [akida] Mapping will now properly reject skip branches not ending with a merge layer (CNP)

Update Akida models to 1.9.0

New features and updates:​

  • Updated OpenCV dependency to <4.12 because later versions are based on numpy 2
  • Updated spatiotemporal_block with a temporal_first parameter that allows inverting the order of the spatial and temporal blocks
  • Optimized the spatiotemporal TENNs model for EyeTracking, leading to a 1.4x speed increase and 54% memory reduction in HW while maintaining accuracy
  • Dropped all transformers features

Bug fixes​

  • Fixed reset_buffers usage in TENNs training modules when evaluating an Akida model

Documentation update

  • Added mentions of LUT in user guides and API references
  • Updated buffering section in Jester TENNs tutorial
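
For anyone who wants to poke at these releases, a rough, untested sketch of the usual quantize-then-convert flow is below. The QuantizationParams values reflect the 4-bit-weight / 8-bit-activation combo the notes mention; the toy Keras model is only a stand-in for a real spatiotemporal TENN / PleiadesLayer network, whose exact builder API isn't shown in these notes, so treat names and arguments as my assumptions.

```python
# Rough sketch of the MetaTF quantize-and-convert flow these release notes touch on.
# Assumptions: package/function names reflect my reading of BrainChip's docs; the toy
# Conv model below stands in for a real spatiotemporal TENN / PLEIADES network.
import tensorflow as tf
from quantizeml.models import quantize, QuantizationParams   # QuantizeML 0.18.0
from cnn2snn import convert                                   # CNN2SNN 2.15.0

# Stand-in Keras model (a real TENN would use the spatiotemporal blocks /
# PleiadesLayer mentioned above).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# 4-bit weights / 8-bit activations -- one of the combos the notes say is now supported.
qparams = QuantizationParams(weight_bits=4, activation_bits=8)
quantized = quantize(model, qparams=qparams)

# Convert the quantized Keras model into an Akida-executable model.
akida_model = convert(quantized)
akida_model.summary()
```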
 
  • Like
  • Love
Reactions: 17 users

Bombersfan

Regular
Well, if it wasn't obvious over the last few months, 20 cents is definitely where someone wants it to be.

Over 15 million traded so far and it has barely moved!

Absolutely ridiculous in my opinion :(
I agree, it's bullshit. Something's up; getting a speeding ticket on an 11% jump was disgraceful. Shorts have gone from 2% to 6% in the last 12 weeks. The pricks rarely get it wrong, so it's a bit concerning, but with the avalanche of news lately it's a dangerous game. Unless we announce a big deal in the very near future, BRN have to make the capital call in the next month, so maybe they are sweating on that. With 120m shorts currently open it would be fireworks if they could announce a "profitable overnight deal"!
 
  • Like
  • Fire
  • Love
Reactions: 30 users
At 3:45 Sean refers to 2 generations of IP.

While not 100% probative, I think this supports my hypothesis that TENNs has ousted PvdM's original SNN NPU circuit in generation 2, making way for the 128 MACs per node.

https://brainchip.com/wp-content/uploads/2025/04/Akida1-IP-Product-Brochure-V2.1-1.pdf

Self-contained neural processor
• Scalable fabric of 1-128 nodes
• Each neural node supports 128 MACs
• Configurable 50-130K embedded local SRAM
• DMA for all memory and model operations
• Multi-layer execution without host CPU
• Integrate with ARM, RISC-V with AXI bus interface


The SNN SoC chips that we have for sale are the original 1st generation PvdM design.

However, as I said, this is just a hypothesis at the moment. We will have to wait for clarification from a higher power.
It's interesting that Sean mentioned radar but not LiDAR; I find it intriguing as to why not 🤔
 
  • Like
  • Thinking
Reactions: 3 users

Diogenese

Top 20
It's interesting that Sean mentioned radar but not LiDAR; I find it intriguing as to why not 🤔
I think radar is near term, while lidar may be medium term - choose your own yardstick.

We know that Valeo's Scala 3 does not include Akida. That's not to say that Akida could not process Scala 3 signals. But, with automotive, there is a long lead time for proving the reliability and safety of new tech. The paint isn't dry on Akida generation 2.

The microDoppler radar (RTX/ISL) will need to be proven to work, but it will not have the same lead time as automotive.

The main development required for Akida to work on any signal type is the compiling of a model library/database characterizing the types of subject to be classified by Akida. There would be no circuit redesign required.
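
To make that last point concrete, the "model library" can be thought of as nothing more exotic than a lookup from signal type to a trained network, with the silicon left untouched. A toy sketch follows; all names and file paths are invented for illustration.

```python
# Toy illustration of the "model library, no circuit redesign" point above.
# Signal types and file names are invented; only the model changes per use case.
MODEL_LIBRARY = {
    "microdoppler_radar": "models/radar_microdoppler.fbz",  # e.g. drone vs bird vs person
    "lidar_pointcloud":   "models/lidar_objects.fbz",
    "dvs_gesture":        "models/gesture_events.fbz",
}

def model_for(signal_type: str) -> str:
    """Pick the trained model for a signal type; the hardware stays the same."""
    try:
        return MODEL_LIBRARY[signal_type]
    except KeyError:
        raise ValueError(f"No trained model characterises '{signal_type}' yet") from None

print(model_for("microdoppler_radar"))
```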
 
  • Like
  • Fire
  • Love
Reactions: 19 users

7für7

Top 20
Ricky Gervais Lol GIF


Unbelievably ridiculous
 
  • Thinking
Reactions: 1 users
I think radar is near term, while lidar may be medium term - choose your own yardstick.

We know that Valeo's Scala 3 does not include Akida. That's not to say that Akida could not process Scala 3 signals. But, with automotive, there is a long lead time for proving the reliability and safety of new tech. The paint isn't dry on Akida generation 2.

The microDoppler radar (RTX/ISL) will need to be proven to work, but it will not have the same lead time as automotive.

The main development required for Akida to work on any signal type is the compiling of a model library/database characterizing the types of subject to be classified by Akida. There would be no circuit redesign required.
Hi Diogenese... In reference to your last paragraph, I am thinking these models will at some point work well in chiplets 🤔. My mind keeps coming back to these being a part of the big picture for BrainChip. 🤞 anyway.
 
  • Like
Reactions: 1 users
@Bravo @Neuromorphia Yeah, in that case there shouldn't be any IP problem with BrainChip owning PLEIADES. And I wouldn't read too much into Rudy presenting this work at conferences (from an IP perspective at least), except for the unpleasant reminder that both brains behind the TENNs series of products are no longer with BrainChip. This would include TENNs eye, Pleiades, aTENNuate speech enhancement, Centaurus quantum speedup, etc.

Of course, it is totally possible that whoever remains at BrainChip is smart enough to continue this line of work. But it may be hard to tell until there is a concrete patent filing or publication at top AI conferences for TENNs, which we have yet to see without Olivier or Rudy. So does TENNs even have a future at BrainChip, or will the company take another 5 years pivoting to a new product line? "Could be, who knows?" as eloquently remarked by Dr. Tony Lewis...
 
  • Thinking
Reactions: 1 users