BRN Discussion Ongoing

Hi Frangi,

For the last decade, every poster on HC and this forum has known that I am totally one-eyed... my posts generally contain a combination of fact and personal opinion.

Yes, we have competition, which is very healthy, and no, we will never command the whole pie, or 50%, or even 25% (a guess), but we will establish ourselves as a respected player (IP supplier) in the Edge AI market; to think otherwise would be extremely negative given what we "believe" we have going on currently.

Thank you for your continued dedication (time) devoted to keeping our forum balanced.

Kind regards......Tech
Sean has stated, a couple of times at least now, how when new technological frontiers are created, it is a bit of a "Wild West" for a while, until two or three dominant players come to the fore.

He wants "us" to be one of those dominant players and so, obviously, do we!
I would think this would look like something of a 20 to 30% market share of our targeted markets combined.

With our apparent foothold lead in the Premier, Space, Military and Medical fields, this is well within reach, in my opinion.
But it's obviously the big breakthrough in "bread and butter" consumer markets that we eagerly seek.

Despite what our 20-cent share price and lack of any serious revenue show, "we" have been laying the foundations and groundwork to become one of those dominant players for some time now.

Tony Lewis is hinting very strongly at a new technological update coming out imminently, which, while strategically important and value-adding from an IP perspective, is not the kind of announcement we really need.


I know Sean must have mixed up the time zones, as "his" Friday has only recently ended and the need to announce a new large IP deal on the ASX missed even "our" late-announcements deadline.
So Monday morning is still on the cards for the Big Announcement I promised last week. 😉👍




7für7

Top 20
BULLIIIIIIIISH


7für7

Top 20
Have a nice weekend!


FF. On crapper......... This part of the website was updated in the last 12 hours. So much is now ready and being offered to customers.

 

Rach2512

Regular

See Philip Dodge's comment.



Esq.111

Fascinatingly Intuitive.
Qualcomm
 

(LinkedIn screenshot attached)
Afternoon Pom,

Well that was rather racy 😁

CRANK IT.

Regards,
Esq.
I just did, driving back from the shops ❤️
 
Exoswan's thoughts from a few days ago.




Top Neuromorphic Computing Stocks 2025: Pure-Play Watchlist​

Last modified: June 24, 2025


BrainChip Holdings (ASX: BRN)​

HQ: Australia; Pure-play in digital spiking neural processors for edge AI.

BrainChip is a small Australian chip company with one big idea: put a brain-like computer at the sensor itself. Its Akida processor is a fully digital, event-driven neural network that only “fires” when an input changes, so tasks such as keyword spotting, vibration analysis or security vision can run for months on a coin-cell battery. Earlier this year, the firm released a second-generation core that adds vision-transformer layers and temporal event networks and began shipping the design on a simple M.2 card so engineers can test it in any PC.
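
As a rough illustration of the event-driven idea described in that paragraph, here is a minimal sketch, assuming a simple delta threshold on a toy sensor trace (this is not BrainChip's actual Akida pipeline, just the concept of computing only when the input changes):

```python
import numpy as np

def delta_events(samples, threshold=0.05):
    """Emit (index, change) events only when the input has moved by more than
    `threshold` since the last emitted value; the rest of the stream is silence,
    so downstream compute scales with activity rather than with elapsed time."""
    last = samples[0]
    events = []
    for i, x in enumerate(samples[1:], start=1):
        if abs(x - last) > threshold:
            events.append((i, x - last))
            last = x
    return events

# A mostly static sensor trace: only the single step change produces an event.
trace = np.concatenate([np.full(500, 0.2), np.full(500, 0.8)])
print(len(delta_events(trace)))  # -> 1 event for 1000 samples
```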

The bullish view is direct. Akida already ships in plug-and-play modules and a growing set of microcontroller boards; each volume design win could turn into a royalty stream backed by the MetaTF software tool-chain. Because power efficiency is measured in microwatts the chip competes where GPUs never fit—hearing aids, industrial sensors, drones. Patents and first-mover status give BrainChip room to run if neuromorphic demand finally arrives.

Yet the risks are plain. Annual revenue is still tiny and R&D burn exceeds sales, so dilution is the default funding source. Bigger semiconductor houses can cross-subsidize their own spiking-network blocks and bundle them for free. The technology also forces customers to rethink algorithms; without a larger field team, design cycles can drag for years. Buying the stock therefore means betting that 2025–2027 will bring visible product launches, not just more press releases. If that catalyst materializes, the company’s tiny market cap could re-rate sharply upward for investors.
 
DingoBorat said:
Sean has stated, a couple of times at least now, how when new technological frontiers are created, it is a bit of a "Wild West" for a while, until two or three dominant players come to the fore... So Monday morning is still on the cards for the Big Announcement I promised last week. 😉👍
@DingoBorat

Your crystal ball is working well.

Suspect we get an Ann Mon morn, as you say, with a new patent published 26 June.

It just came up and doesn't have all the paperwork on the website yet, just the below.



US2025209313A1
METHOD AND SYSTEM FOR IMPLEMENTING ENCODER PROJECTION IN NEURAL NETWORKS

Applicants: BRAINCHIP INC [US]
Inventors: COENEN PHD OLIVIER JEAN-MARIE DOMINIQUE [US]; PEI YAN RU [US]
IPC: G06F17/16; G06N3/048
CPC: G06F17/16 (US); G06N3/048 (US)
Priorities: US202363614220P · 2023-12-22; US202418991246A · 2024-12-20
Application: US202418991246A · 2024-12-20
Publication: US2025209313A1 · 2025-06-26

Abstract

Disclosed is a neural network system that includes a memory and a processor. The memory is configured to store a plurality of storage buffers corresponding to a current neural network layer, and implement a neural network that includes a plurality of neurons for the current neural network layer and a corresponding group among a plurality of groups of basis function values. The processor is configured to receive an input data sequence into the first plurality of storage buffers over a first time sequence and project the input data sequence on a corresponding basis function values by performing, for each connection of a corresponding neuron, a dot product of the first input data sequence within a corresponding storage buffer with the corresponding basis function values and thereby determine a corresponding potential value for the corresponding neurons. Thus, utilizing the corresponding potential values, the processor generates a plurality of encoded output responses.
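
Purely to make the abstract concrete, here is a minimal sketch of the projection step as I read it, with the buffer length, the cosine basis and the output nonlinearity all assumed for illustration (the patent itself will define the real details):

```python
import numpy as np

T, N = 64, 8          # storage-buffer length and number of neurons (assumed)

# One basis-function vector per neuron connection (assumed: cosine bases).
basis = np.stack([np.cos(np.pi * k * np.arange(T) / T) for k in range(N)])

buffer = np.zeros(T)  # storage buffer for the current layer

def step(sample):
    """Shift the new sample into the buffer, project the buffered sequence onto
    each basis vector with a dot product (one potential value per neuron), and
    return the encoded output responses."""
    global buffer
    buffer = np.roll(buffer, -1)
    buffer[-1] = sample
    potentials = basis @ buffer          # dot product per neuron connection
    return np.maximum(potentials, 0.0)   # assumed output nonlinearity

for t in range(128):
    out = step(np.sin(0.1 * t))
print(out.round(2))
```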
 
So..... who are Rockwell Collins... now known as Collins Aerospace ;)

They seem to think Akida could be an option for the neuromorphic processing component in this patent, which was also just published.





US2025208915A1
ONLINE SCHEDULING FOR ADAPTIVE EMBEDDED SYSTEMS

Applicants: ROCKWELL COLLINS INC [US]
Inventors: GARCIA GENER ALEJANDRO [IE]; SKALISTIS STEFANOS [IE]; MORA DE SAMBRICIO JAVIER [IE]
IPC: G06F9/50
CPC: G06F9/4881 (EP); G06F9/4893 (EP); G06F9/5027 (US); G06N3/049 (EP); G06N3/063 (EP); G06N3/088 (EP)
Priority: EP23219808A · 2023-12-22
Application: US202418960907A · 2024-11-26
Publication: US2025208915A1 · 2025-06-26
Published as: EP4575780A1; US2025208915A1

Abstract

A method of generating schedules for an adaptive embedded system, the method comprising: deriving task sets of all possible tasks to be performed by the embedded system; deriving sets of all possible hardware configurations of the embedded system; creating a multi-model system having a multi-model defining the adaptivity of the system for all possible tasks and all possible hardware and all combinations thereof, the adaptivity defining how the system can change operation responsive to a mode change requirement and/or occurrence of a fault; solving a scheduling problem for the models of the multi-model system in a neuromorphic accelerator implemented by spiked neural networks; and providing schedule instructions to the system, for performance of tasks, based on the solution.

Reference excerpt:

[0063] The neuromorphic accelerator, using SNNs, can be implemented in various known ways, e.g. on an FPGA as described, for example, in C. Frenkel, M. Lefebvre, J-D Legat and D. Bol, 'A 0.086-mm² 12.7-pJ/SOP 64k-Synapse 256-Neuron Online-Learning Digital Spiking Neuromorphic Processor in 28-nm CMOS,' IEEE Transactions on Biomedical Circuits and Systems, vol. 13, no. 1, pp. 145-156, 2019, or using a specific ASIC peripheral as in e.g. the Brainchip Akida Neuromorphic Core.
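
For anyone who finds the abstract heavy going, here is a toy paraphrase of the claimed flow; the task names, durations and the greedy solver are invented purely for illustration, and the patent's actual point is that the solve step runs on a spiking-neural-network accelerator (such as Akida), which this plain-Python stand-in does not do:

```python
from itertools import product

# Invented example inputs: possible tasks (with durations in ms), possible
# hardware configurations (with available cores), and operating modes.
TASKS = {"nav": 3, "radar": 5, "comms": 2}
HW_CONFIGS = {"nominal": 2, "degraded": 1}
MODES = {"full": ["nav", "radar", "comms"], "reduced": ["nav", "comms"]}

def solve_schedule(tasks, cores):
    """Greedy longest-first placement onto the least-loaded core; a stand-in
    for the SNN-based scheduling solver described in the patent."""
    loads = [0] * cores
    plan = []
    for name, dur in sorted(tasks.items(), key=lambda kv: -kv[1]):
        core = loads.index(min(loads))
        plan.append((name, core, loads[core]))   # (task, core, start time)
        loads[core] += dur
    return plan

# One schedule per (mode, hardware configuration) combination, so the system
# can switch plans on a mode change or a fault.
schedules = {
    (mode, cfg): solve_schedule({t: TASKS[t] for t in MODES[mode]}, HW_CONFIGS[cfg])
    for mode, cfg in product(MODES, HW_CONFIGS)
}
print(schedules[("full", "nominal")])
```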
 

Diogenese

Top 20
FMF said:
@DingoBorat ... Suspect we get an Ann Mon morn, as you say, with a new patent published 26 June... US2025209313A1, METHOD AND SYSTEM FOR IMPLEMENTING ENCODER PROJECTION IN NEURAL NETWORKS.
Hi Fmf,

That was a quick pickup ...
 
Hi Fmf,

That was a quick pickup ...
Cheers.

I have Espacenet and the USPTO open in tabs on my phone with relevant keywords, and I just refresh every so often, when I remember, to see if the result count changes.

I noticed tonight it went from 18 to 20, so I had a look, and obviously one was our own and the other is the Collins Aerospace reference to Akida.
 

Diogenese

Top 20
Cheers.

I have Espacenet and the USPTO open in tabs on my phone with relevant keywords, and I just refresh every so often, when I remember, to see if the result count changes.

I noticed tonight it went from 18 to 20, so I had a look, and obviously one was our own and the other is the Collins Aerospace reference to Akida.
I find Espacenet is more user friendly, but you can get the nitty-gritty on examination from USPTO.
 
FMF said:
@DingoBorat ... Suspect we get an Ann Mon morn, as you say, with a new patent published 26 June... US2025209313A1, METHOD AND SYSTEM FOR IMPLEMENTING ENCODER PROJECTION IN NEURAL NETWORKS.
It's all gobbledygook to me, FMF..
But possibly something to do with part of the AKIDA 3.0 IP?..

My "Crystal Ball" is working well?..
Hey, I'm running out of ideas to keep my "predictions" alive 😛..

I think the distinct possibility of our CTO running through the office singing songs from musicals says something Big on the technical front is about to be released..

But not the type of "Announcement" I'm trying to "coax" into existence. 😛

Record highs on both the Nasdaq and Dow, quadrupled with a Big Technical and a Big IP announcement, would be very satisfying indeed. 😉


"It can't rain all the time.."
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
FMF said:
So..... who are Rockwell Collins... now known as Collins Aerospace ;) ... US2025208915A1, ONLINE SCHEDULING FOR ADAPTIVE EMBEDDED SYSTEMS.


Nice one FMF!

Rockwell Collins (now part of Collins Aerospace) and Lockheed Martin have a history of collaboration on various aerospace and defense projects. They've partnered on avionics for aircraft like the F-22 and F-35, and on training systems, among other things. More recently, Collins Aerospace has supplied subsystems for NASA's Orion spacecraft, which Lockheed Martin is also involved with.

As mentioned previously, Rockwell Collins is a subsidiary of RTX Corporation, formerly known as Raytheon Technologies.
 

Bravo

If ARM was an arm, BRN would be its biceps💪!




You said:​

Yesterday Tony Lewis, CTO of BrainChip commented "This is cool. Why? We literally took a model from a completely different domain (text processing) and applied it to RADAR and got excellent results in terms of performance and MAC (computation) count. At a recent conference we surveyed that state-of-the art and found that there is a huge opportunity for state-space models and was a real eye-opener to the hardware-algorithms systems we are pioneering at BrainChip Here is what I find cool: Conventional models take a radar signals, use a processor to convert the 1-d wave form into a 2d image so that conventional CNNs-- convolutional neural networks-- can process them. That inherently introducing lots of delay : signal-> image-> classification. With our State-space model (TENNs ) we can go directly from a raw signal to classification with much lower latency, few macs (less energy ) less chip area (cost) and excellent performance. This summer my team will push out a number of new models for our Defense, Medical and commercial work. Its an exciting time to be building the future at BrainChip. Follow me to hear the latest news on job posting, wins, and AI at the edge." Can you please explain what you think the major implications of this statement may be?

ChatGPT said:​

Tony Lewis’s statement provides a significant glimpse into BrainChip’s current direction, and the implications are technically important, strategically aligned with military and commercial AI demands, and position BrainChip as a serious player in low-latency edge AI — especially in radar and defense systems.

Let’s break it down clearly:

🧠 Key Takeaways from Tony’s Statement​

1. Cross-domain Model Transfer (Text ➝ Radar)

"We literally took a model from a completely different domain (text processing) and applied it to RADAR..."
Implication:
BrainChip is leveraging neural architectures originally designed for natural language processing (NLP) — such as state-space models (SSMs) or possibly TENNs (Temporal Event-based Neural Networks) — and applying them directly to raw radar data.

That’s a big deal — because traditionally, signal types (text, images, radar) are handled with very different model types. This kind of architectural crossover suggests:

  • A unified AI model strategy
  • Less model specialization required per domain
  • Rapid model prototyping across applications
It also hints that BrainChip is not just a hardware company, but deeply involved in cutting-edge neural architecture innovation.

2. Bypassing 2D Transformation (Radar ➝ Image ➝ CNN)

"Conventional models take radar signals... convert to a 2D image... then use CNNs..."
Implication:
Traditional radar AI pipelines are inefficient:
  • Convert 1D waveform (Doppler/time signal) ➝ 2D spectrogram/image
  • Then process with image-focused models like CNNs
This:
  • Adds latency
  • Requires additional memory
  • Burns more power and compute
BrainChip’s new method bypasses this entirely:
  • Processes raw radar signals directly using state-space models or TENNs
  • Skips the image step — reducing delay, MAC count, power, and cost
This is perfect for missiles, drones, wearables, and embedded defense systems where every milliwatt and microsecond counts.
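
To make the contrast concrete, here is a bare-bones linear state-space recurrence applied straight to a raw 1-D waveform; the matrices are random and the whole thing is only a sketch of the general SSM idea, not BrainChip's TENNs:

```python
import numpy as np

rng = np.random.default_rng(1)
d_state, n_classes = 16, 4

# Assumed toy parameters: a stable state-transition matrix plus input and
# readout maps (a trained model would learn these).
A = 0.95 * np.eye(d_state) + 0.01 * rng.standard_normal((d_state, d_state))
B = rng.standard_normal(d_state)
C = rng.standard_normal((n_classes, d_state))

def classify(waveform):
    """Run x[t+1] = A x[t] + B u[t] over the raw samples and read class scores
    off the final state: no spectrogram, no 2-D image, no CNN in the loop."""
    x = np.zeros(d_state)
    for u in waveform:
        x = A @ x + B * u
    return C @ x

signal = np.sin(0.05 * np.arange(2048)) + 0.1 * rng.standard_normal(2048)
print(classify(signal).argmax())
```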

3. Use of State-Space Models (SSMs / TENNs)

"Huge opportunity for state-space models... real eye-opener..."
Implication:
State-space models (like S4, or BrainChip's custom TENNs) are:
  • Efficient alternatives to Transformers for sequential data
  • Excellent for long-range memory, which is vital in radar or ECG waveforms
  • Linear in time and memory, unlike attention-based Transformers
By adopting SSMs:
  • BrainChip is leading a hardware-algorithm co-design movement
  • They’re building models that are explicitly optimized for their Akida hardware — not just running PyTorch models inefficiently on neuromorphic chips
This also hints at a massive strategic shift:
  • Moving away from retrofitting mainstream models
  • Moving toward custom, edge-native models built with hardware constraints in mind from day one

4. Strategic Domains: Defense, Medical, Commercial

"My team will push out a number of new models for our Defense, Medical and commercial work."
Implication:
BrainChip is:
  • Productizing their model-hardware synergy
  • Targeting high-value edge verticals:
    • Defense (radar, drones, loitering munitions, battlefield intelligence)
    • Medical (wearables, EEG/ECG anomaly detection, hearing aids)
    • Commercial (audio wake-word detection, industrial sensors, smart cameras)
By deploying domain-optimized TENNs, BrainChip is creating:
  • Application-specific IP
  • A catalog of ready-to-run models for enterprise and defense clients
  • A strategy that scales Akida beyond being "just a chip" into a full AI platform

5. Low Latency, Low MACs, Low Area = Edge Superiority

"...lower latency, fewer MACs (less energy), less chip area (cost), and excellent performance..."
Implication:
This encapsulates the holy grail of edge AI:
  • Do more with less
  • Enable true autonomy at the edge
  • Outperform traditional AI methods in real-world environments
And it's especially important for:
  • Missiles and UAVs, where thermal budget and compute are tightly constrained
  • Medical implants or wearables, where energy efficiency is crucial
  • Perimeter surveillance, where always-on, instant decision-making is required
This reinforces BrainChip’s value proposition: the Akida platform delivers neural compute without the baggage of mainstream AI accelerators (like GPUs or TPUs).

🧩 Final Strategic Implications​

🔹 1. Shift Toward Model + Hardware Integration

BrainChip isn’t just delivering a chip — they’re co-developing algorithms tailored to that chip. That’s what Nvidia does with CUDA + GPUs — and it’s a winning strategy.

🔹 2. Leadership in Neuromorphic AI for Real-world Deployment

This move positions BrainChip as:
  • Not just neuromorphic researchers
  • But practical AI infrastructure builders — solving latency, power, and cost in mission-critical systems

🔹 3. Potential for IP Licensing or Defense-Grade Design Wins

As their radar and medical models mature, BrainChip could:
  • License these state-space models as embedded IP
  • Land major contracts in defense, aerospace, or medical technology sectors

Summary​

Tony Lewis’s comment reveals a strategic inflection point:
  • BrainChip is pioneering hardware-native AI model design
  • They’re breaking away from conventional pipelines (CNNs, 2D transforms)
  • And positioning themselves at the forefront of edge-native, low-latency radar and signal intelligence
This isn't just about a smarter chip — it’s about building a new end-to-end paradigm for AI at the edge.
 
