BRN Discussion Ongoing

Looking forward to hearing something solid in BRN's favour.
 
  • Like
Reactions: 3 users

manny100

Top 20
From a recent video interview with Steve Brightfield.
"The primary difference between brain chip and the Intel and the IBM solutions was they were analog. So they truly tried to match the analog waveforms of the brain, whereas the brain chip made a digital equivalent of the analog waveform. So now you could easily manufacture a computer, digital computer chip using the approach. The chips that you, the analog chips that are made today for neuromorphics, they're notorious for, you know, you have to have them biased and temperature stabilized, and there's all the problems with analog, which is the reason we don't have a lot of analog computers today, or the problems that they're faced with their neuromorphic chips."
"There are other companies that are producing analog Neuromorphic chips, but they're kind of dedicated for a specific market second, like speech wake-up, right? Or a biological wake-up. So they're like function-specific Neuromorphic chips. We have a very digital programmable chip that can use any kind of sensor, so we're kind of unique in that aspect. Build the future of multi-agent software with agency. "
My bold above.
 
  • Like
  • Fire
  • Love
Reactions: 15 users

manny100

Top 20
Another quote from Steve Brightfield.
"I think we're trying to ride the neuromorphic computing and brain chip in particular is trying to ride the coattails of the overall market moving to the edge. And when we look at market research reports from companies, they're saying about 10% of these edge products embedded devices are running some AI software on them. But within the next four years, four to five years, 30 to 35% of those products will have AI on. And I think if we look out, the next five years, 90% of them will have it all embedded in it. And there will be a neuromorphic computing in probably half of those devices. Because it's going to be more generally available. "
 
  • Like
  • Fire
  • Love
Reactions: 8 users

Guzzi62

Regular
From a recent video interview with Steve Brightfield.
"The primary difference between brain chip and the Intel and the IBM solutions was they were analog. So they truly tried to match the analog waveforms of the brain, whereas the brain chip made a digital equivalent of the analog waveform. So now you could easily manufacture a computer, digital computer chip using the approach. The chips that you, the analog chips that are made today for neuromorphics, they're notorious for, you know, you have to have them biased and temperature stabilized, and there's all the problems with analog, which is the reason we don't have a lot of analog computers today, or the problems that they're faced with their neuromorphic chips."
"There are other companies that are producing analog Neuromorphic chips, but they're kind of dedicated for a specific market second, like speech wake-up, right? Or a biological wake-up. So they're like function-specific Neuromorphic chips. We have a very digital programmable chip that can use any kind of sensor, so we're kind of unique in that aspect. Build the future of multi-agent software with agency. "
My bold above.
Sadly, Steve got that wrong!

Intel's Loihi 2 is fully digital.


 
  • Like
Reactions: 4 users

manny100

Top 20
  • Like
  • Haha
Reactions: 4 users

perceptron

Regular
So to clarify one last time, I am not asking what it costs BrainChip to manufacture the chips, nor am I asking for a margin analysis.

I am pointing out that we cannot determine the revenue from these orders because the announcement states customers will be charged anywhere between $4 and $50 per chip, depending on volume.

What the announcement does not disclose, however, is where on that sliding scale these orders actually sit.

Specifically: 1) what volume qualifies as a high-volume order? 2) at what quantity does pricing move from $50 to $10 to $4? 3) where does an order of 10,000 units or 1,200 units fall on that curve?

Without that information, revenue could be materially different under perfectly reasonable interpretations.

To illustrate by way of demonstration only:
  • If BrainChip considers anything above 5,000 units to be a volume order, then 11,200 units could be priced at $4, generating roughly $45k in revenue.
  • If instead “volume” means anything exceeding 50,000 units, then the same 11,200 units could be priced far higher — say $20–$30 per chip, resulting in $224k–$336k of revenue.

My point is that until the company clarifies how the volume pricing tiers actually work, any attempt to calculate revenue from these orders is pure guesswork.
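To make that sensitivity concrete, here is a toy tiered-pricing model. The tier thresholds in `scenario_a` and `scenario_b`, and the function names, are hypothetical assumptions for illustration only; the $4 to $50 per-chip range is the only figure taken from the announcement.

```python
# Toy model of volume-tiered chip pricing. Tier boundaries are GUESSES,
# not disclosed figures; only the $4-$50 range comes from the announcement.

def unit_price(quantity, tiers):
    """Per-chip price for an order of `quantity` units.
    `tiers` is a list of (min_quantity, price) pairs, sorted ascending."""
    price = tiers[0][1]
    for min_qty, tier_price in tiers:
        if quantity >= min_qty:
            price = tier_price
    return price

def revenue(quantity, tiers):
    """Total revenue for the order under the assumed tier schedule."""
    return quantity * unit_price(quantity, tiers)

# Scenario A: "volume" kicks in at 5,000 units, so 11,200 chips land at $4.
scenario_a = [(0, 50), (1_000, 10), (5_000, 4)]
print(revenue(11_200, scenario_a))   # 44800, i.e. roughly $45k

# Scenario B: "volume" means 50,000+ units, so the same order sits at $20.
scenario_b = [(0, 50), (10_000, 20), (50_000, 4)]
print(revenue(11_200, scenario_b))   # 224000, i.e. $224k
```

The same 11,200 units produce a five-fold revenue difference depending solely on where the undisclosed tier boundaries sit, which is the point being made above.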
Appreciate your reply, Bravo. So the foundry is only charging its customers the variable costs associated with producing any number of chips, while excluding fixed costs such as capital expenditure, taxes, salaries and the many other costs that remain constant for any given number of chips ordered.
 

Diogenese

Top 20
From a recent video interview with Steve Brightfield.
"The primary difference between brain chip and the Intel and the IBM solutions was they were analog. So they truly tried to match the analog waveforms of the brain, whereas the brain chip made a digital equivalent of the analog waveform. So now you could easily manufacture a computer, digital computer chip using the approach. The chips that you, the analog chips that are made today for neuromorphics, they're notorious for, you know, you have to have them biased and temperature stabilized, and there's all the problems with analog, which is the reason we don't have a lot of analog computers today, or the problems that they're faced with their neuromorphic chips."
"There are other companies that are producing analog Neuromorphic chips, but they're kind of dedicated for a specific market second, like speech wake-up, right? Or a biological wake-up. So they're like function-specific Neuromorphic chips. We have a very digital programmable chip that can use any kind of sensor, so we're kind of unique in that aspect. Build the future of multi-agent software with agency. "
My bold above.
Hi manny,

I'd like your thoughts/corrections to the following.

Tony Lewis mentioned that Akida 3 and GenAI will have a flexible hardware switched communication mesh whereas Akida 1 & 2 have a packet switched type comms mesh, and I'm trying to understand the advantages of the H/W switch.

The packet switched version requires that each event/spike includes an address header to direct it to the destination neuron. This entails transmitting additional bits with each event, increasing latency and power usage.

Thus for a many-to-many neuron connexion, there must be a transistor switch matrix (~ a crossbar switch?). But this only requires one switch per destination neuron compared with a requirement for the header to include the address of each destination neuron. For example, if there were 256 possible destination neurons, then the header would need to include 8 bits per destination neuron. So for 8 destination neurons for example, that would be 64 additional bits (ignoring any protocol overhead). (There may well be a more efficient protocol for this, but that is above my pay grade).

In contrast, for a hardware switching matrix, only one transistor needs to switch per destination neuron, so, in the above example, that would be 8 switch operations.

In addition, the packet switched mesh protocol needs a larger collision avoidance buffer, to ensure only 1 event is transmitted at a time.

Obviously, the H/W switch will still require additional power/time to set up the switch connexions, but I assume this is a one-off for each task.
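The address-overhead arithmetic above can be sketched as a toy model. This is illustrative only, not Akida's actual packet format, and the function names are mine:

```python
import math

# Back-of-envelope comparison of the two mesh styles described above.
# All numbers are illustrative assumptions, not Akida specifications.

def packet_header_bits(total_neurons, fanout):
    """Address bits a packet-switched event carries if its header holds
    one full destination address per target neuron."""
    bits_per_address = math.ceil(math.log2(total_neurons))
    return bits_per_address * fanout

def crossbar_switch_ops(fanout):
    """A hardware switch matrix just closes one switch per destination."""
    return fanout

# 256 possible destinations -> 8-bit addresses; fanout of 8 per event:
print(packet_header_bits(256, 8))  # 64 extra bits per event
print(crossbar_switch_ops(8))      # 8 switch operations
```

This reproduces the 64-bits-versus-8-switch-operations comparison in the post; a real protocol would likely amortise addressing more cleverly, as the post itself concedes.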
 
  • Love
  • Wow
  • Thinking
Reactions: 4 users

Diogenese

Top 20
  • Haha
  • Like
  • Love
Reactions: 4 users

manny100

Top 20
Hi manny,

I'd like your thoughts/corrections to the following.

Tony Lewis mentioned that Akida 3 and GenAI will have a flexible hardware switched communication mesh whereas Akida 1 & 2 have a packet switched type comms mesh, and I'm trying to understand the advantages of the H/W switch.

The packet switched version requires that each event/spike includes an address header to direct it to the destination neuron. This entails transmitting additional bits with each event, increasing latency and power usage.

Thus for a many-to-many neuron connexion, there must be a transistor switch matrix (~ a crossbar switch?). But this only requires one switch per destination neuron compared with a requirement for the header to include the address of each destination neuron. For example, if there were 256 possible destination neurons, then the header would need to include 8 bits per destination neuron. So for 8 destination neurons for example, that would be 64 additional bits (ignoring any protocol overhead). (There may well be a more efficient protocol for this, but that is above my pay grade).

In contrast, for a hardware switching matrix, only one transistor needs to switch per destination neuron, so, in the above example, that would be 8 switch operations.

In addition, the packet switched mesh protocol needs a larger collision avoidance buffer, to ensure only 1 event is transmitted at a time.

Obviously, the H/W switch will still require additional power/time to set up the switch connexions, but I assume this is a one-off for each task.
Hi Dio, it's above my pay grade, but Gen 3 is an upgrade so it has to be an improvement.
Appears ditching the packets will likely improve latency and reduce power costs. It would be designed to work well/better with GenAI.
Basically a guess: why do it if it's not better, and those are key areas.
 
  • Love
  • Thinking
Reactions: 2 users

HopalongPetrovski

I'm Spartacus!
Below is a copy of Marketing Man's recent post over on the crapper.
I am reposting it here because I think it is eloquent and extremely pertinent.
It is in reply to Phil the yank, a recent blow-in and self-proclaimed heavy hitter who, having now "discovered" WBT, seems to be falling out of love with us. 🤣


To add, I agree with the core issue you’re highlighting.

The risk isn't "does Akida work?"; it's whether adoption happens fast enough in a market that often rewards good-enough-and-easy over better-but-harder.

That said, a couple of nuances are worth adding.

  • First, if neuromorphic adoption is being held back by friction, that friction applies to all neuromorphic approaches, not just BrainChip. In that context, BRN actually appears to be ahead of the pack rather than behind. They’ve invested earlier than most in developer tooling, SDKs, MetaTF to bridge into existing ML workflows, and they’re now explicitly recruiting senior leadership to deepen the software and toolchain side. That doesn’t eliminate friction, but it does suggest management understands exactly where the bottleneck is.

  • Second, you’re absolutely right that today BRN’s traction is showing up mainly where SWaP (Space, Weight, and Power) is unavoidable - space, defence, and ultra-low-power wearables like Onsor. That’s not a weakness of the technology; it’s where costs force a different compute approach. In those environments, “good enough” conventional compute doesn't fit. Early adoption almost always starts where the pain is baked-in.

  • Third, while much of edge-AI today is advancing quickly on conventional methods, power consumption and heat rejection are becoming high-priority problems even in data centres. We're already seeing hyper-scalers talk openly about energy limits, cooling constraints, and diminishing returns from brute-force GPUs. For example, several new, large-scale data centres in Western Sydney, particularly projects by AirTrunk, Microsoft, and Amazon in areas like Kemps Creek and Marsden Park, are facing scrutiny over high water consumption for cooling (even though cooling systems are heat-pump based, the cooling towers are typically evaporative). And in the US, Microsoft is recommissioning the Three Mile Island nuclear plant (the site of one of the worst nuclear accidents before Fukushima and Chernobyl) because its new data centres need enormous power. Eventually efficiency stops being a "nice to have" and becomes a design constraint.

  • Finally, this whole debate has a strong historical echo. Valves weren’t displaced by transistors because transistors were immediately easier. Quite the opposite: valves had a huge industrial base, familiar engineering practices, and plenty of off-the-shelf solutions. Transistors were initially viewed as fragile, unproven, and niche. Yet once power, size, heat and reliability became dominant constraints, the market flipped, slowly at first, then decisively.

None of this guarantees BRN wins.

Adoption speed still matters, and the window isn't open-ended. But it's not that BrainChip is falling behind "better" competitors; it's that the market hasn't yet been forced to care enough about the problems neuromorphic solves. Where it has been forced, BRN is already seeing traction.

So I agree with your concern, but I’d say the verdict hinges less on near-term elegance or ease of adoption, and more on how fast power, heat, and autonomy constraints tighten across the broader AI landscape. That’s the real time-frame that matters here.

One final point I’d add is that much of what the market currently labels as “AI progress” is still overwhelmingly centred on LLMs and hyperscale cloud infrastructure and that phase is rapidly maturing.

The next leg of AI growth is far more likely to be about where the compute happens, not just how large models become. As AI pushes out of data centres and into autonomous, mobile, always-on systems, edge constraints like power, latency, heat and connectivity become dominant.

That shift doesn’t automatically guarantee success for neuromorphic approaches, but it does materially expand the addressable market where neuromorphic makes sense. In that context, neuromorphic isn’t competing head-to-head with cloud AI; it’s positioned for the phase after cloud-centric AI saturates, which is where a lot of the longer-term optionality sits.

Of course, a big application is battle drones. I saw a YouTube video recently that reported that 80% of kills in the Ukraine-Russia conflict have been executed by drones (of all shapes and sizes). No military budget can ignore this trend. With US isolationism and Trump forcing countries to re-arm, drone (UAV is the correct term) development is at an all-time high. And AI that is fast, local, small, reliable, low-power, and cheap is what makes UAVs deadlier.

You don't wait for the conflict to start - you fill warehouses with 'em so you're ready.

That's good news for BRN (and us investors) - but World War III - not so good.

***
I had an AI "Chat" which identified the following edge use cases....
“Autonomous, mobile, always-on”
systems share three traits:

  • They move through the world
  • They must perceive continuously
  • They cannot rely on constant connectivity or abundant power

Here are the best, real-world examples, grouped by maturity and importance.

Military & Security (already here, already critical)​

1️⃣ Drones (air, land, sea)​

This is the archetype.

  • ISR drones
  • Loitering munitions
  • Autonomous naval vessels
  • Swarm drones
Why they matter:

  • Battery-powered
  • Highly mobile
  • Must sense continuously
  • Must react in milliseconds
  • Often disconnected or jammed
Cloud AI is impossible here.
Brute-force edge compute is tolerated — but inefficient.

2️⃣ Counter-drone & perimeter defence systems​

  • Radar + RF + EO/IR sensing
  • Always on
  • Must distinguish signal from noise continuously
  • False positives are costly
Event-driven, low-power intelligence is a natural fit.

Robotics & Industrial Autonomy (rapidly emerging)​

3️⃣ Mobile robots (AMRs, humanoids, legged robots)​

  • Warehouses
  • Hospitals
  • Field inspection
  • Construction sites
Constraints:

  • Battery-limited
  • Must operate for hours or days
  • Continuous perception (vision, lidar, audio)
  • Safety-critical decisions
Most current systems are over-provisioned and power-inefficient — by design.

4️⃣ Autonomous vehicles (non-consumer first)​

Not robotaxis — those are cloud-heavy.

But:

  • Mining vehicles
  • Agricultural machinery
  • Military ground vehicles
These:

  • Operate in remote areas
  • Require deterministic response
  • Have limited connectivity

Medical & Human-adjacent Systems (under-appreciated)​

5️⃣ Implantable and wearable medical devices​

Examples:

  • Cochlear implants
  • Neural stimulators
  • Smart insulin pumps
  • Continuous monitoring wearables
Why they’re important:

  • Always on
  • Ultra-low power
  • Safety critical
  • Privacy sensitive
Cloud AI is a non-starter.
“Good enough” compute often isn’t good enough.

6️⃣ Assistive technologies​

  • Prosthetics
  • Exoskeletons
  • Mobility aids
These require:

  • Real-time feedback loops
  • Local learning
  • Minimal latency

Consumer & Infrastructure (early but inevitable)​

7️⃣ Smart sensors & IoT at scale​

  • Environmental monitoring
  • Smart cities
  • Infrastructure health
  • Border security
Thousands or millions of sensors:

  • Always on
  • Mostly idle
  • Occasionally critical
Processing everything centrally is economically insane.

8️⃣ Space systems​

  • Satellites
  • Spacecraft autonomy
  • Debris avoidance
  • On-orbit servicing
Constraints:

  • Power is scarce
  • Communication delayed
  • Reliability is existential

The unifying insight (this is the key)​

In all these systems:

  • Power is a hard constraint
  • Latency is non-negotiable
  • Connectivity is unreliable
  • Intelligence must be continuous, not episodic
That combination is exactly where:

  • Cloud AI fails
  • Brute-force edge AI struggles
  • Event-driven approaches become interesting

Why this matters for your broader thesis​

These systems are not fringe.

They are:

  • Growing in number
  • Growing in strategic importance
  • Becoming more autonomous over time
And importantly:

They don’t tolerate inefficiency forever.
That’s why this category matters more than benchmarks or demos.

Do your own research.
 
  • Fire
  • Like
  • Love
Reactions: 14 users

Diogenese

Top 20
Below is a copy of Marketing Man's recent post over on the crapper.
I am reposting it here because I think it is eloquent and extremely pertinent.
It is in reply to Phil the yank, a recent blow-in and self-proclaimed heavy hitter who, having now "discovered" WBT, seems to be falling out of love with us. 🤣


To add, I agree with the core issue you’re highlighting.

The risk isn't "does Akida work?"; it's whether adoption happens fast enough in a market that often rewards good-enough-and-easy over better-but-harder.

That said, a couple of nuances are worth adding.

  • First, if neuromorphic adoption is being held back by friction, that friction applies to all neuromorphic approaches, not just BrainChip. In that context, BRN actually appears to be ahead of the pack rather than behind. They’ve invested earlier than most in developer tooling, SDKs, MetaTF to bridge into existing ML workflows, and they’re now explicitly recruiting senior leadership to deepen the software and toolchain side. That doesn’t eliminate friction, but it does suggest management understands exactly where the bottleneck is.

  • Second, you’re absolutely right that today BRN’s traction is showing up mainly where SWaP (Space, Weight, and Power) is unavoidable - space, defence, and ultra-low-power wearables like Onsor. That’s not a weakness of the technology; it’s where costs force a different compute approach. In those environments, “good enough” conventional compute doesn't fit. Early adoption almost always starts where the pain is baked-in.

  • Third, while much of edge-AI today is advancing quickly on conventional methods, power consumption and heat rejection are becoming high-priority problems even in data centres. We're already seeing hyper-scalers talk openly about energy limits, cooling constraints, and diminishing returns from brute-force GPUs. For example, several new, large-scale data centres in Western Sydney, particularly projects by AirTrunk, Microsoft, and Amazon in areas like Kemps Creek and Marsden Park, are facing scrutiny over high water consumption for cooling (even though cooling systems are heat-pump based, the cooling towers are typically evaporative). And in the US, Microsoft is recommissioning the Three Mile Island nuclear plant (the site of one of the worst nuclear accidents before Fukushima and Chernobyl) because its new data centres need enormous power. Eventually efficiency stops being a "nice to have" and becomes a design constraint.

  • Finally, this whole debate has a strong historical echo. Valves weren’t displaced by transistors because transistors were immediately easier. Quite the opposite: valves had a huge industrial base, familiar engineering practices, and plenty of off-the-shelf solutions. Transistors were initially viewed as fragile, unproven, and niche. Yet once power, size, heat and reliability became dominant constraints, the market flipped, slowly at first, then decisively.

None of this guarantees BRN wins.

Adoption speed still matters, and the window isn't open-ended. But it's not that BrainChip is falling behind "better" competitors; it's that the market hasn't yet been forced to care enough about the problems neuromorphic solves. Where it has been forced, BRN is already seeing traction.

So I agree with your concern, but I’d say the verdict hinges less on near-term elegance or ease of adoption, and more on how fast power, heat, and autonomy constraints tighten across the broader AI landscape. That’s the real time-frame that matters here.

One final point I’d add is that much of what the market currently labels as “AI progress” is still overwhelmingly centred on LLMs and hyperscale cloud infrastructure and that phase is rapidly maturing.

The next leg of AI growth is far more likely to be about where the compute happens, not just how large models become. As AI pushes out of data centres and into autonomous, mobile, always-on systems, edge constraints like power, latency, heat and connectivity become dominant.

That shift doesn’t automatically guarantee success for neuromorphic approaches, but it does materially expand the addressable market where neuromorphic makes sense. In that context, neuromorphic isn’t competing head-to-head with cloud AI; it’s positioned for the phase after cloud-centric AI saturates, which is where a lot of the longer-term optionality sits.

Of course, a big application is battle drones. I saw a YouTube video recently that reported that 80% of kills in the Ukraine-Russia conflict have been executed by drones (of all shapes and sizes). No military budget can ignore this trend. With US isolationism and Trump forcing countries to re-arm, drone (UAV is the correct term) development is at an all-time high. And AI that is fast, local, small, reliable, low-power, and cheap is what makes UAVs deadlier.

You don't wait for the conflict to start - you fill warehouses with 'em so you're ready.

That's good news for BRN (and us investors) - but World War III - not so good.

***
I had an AI "Chat" which identified the following edge use cases....
“Autonomous, mobile, always-on”
systems share three traits:

  • They move through the world
  • They must perceive continuously
  • They cannot rely on constant connectivity or abundant power

Here are the best, real-world examples, grouped by maturity and importance.

Military & Security (already here, already critical)​

1️⃣ Drones (air, land, sea)​

This is the archetype.

  • ISR drones
  • Loitering munitions
  • Autonomous naval vessels
  • Swarm drones
Why they matter:

  • Battery-powered
  • Highly mobile
  • Must sense continuously
  • Must react in milliseconds
  • Often disconnected or jammed
Cloud AI is impossible here.
Brute-force edge compute is tolerated — but inefficient.

2️⃣ Counter-drone & perimeter defence systems​

  • Radar + RF + EO/IR sensing
  • Always on
  • Must distinguish signal from noise continuously
  • False positives are costly
Event-driven, low-power intelligence is a natural fit.

Robotics & Industrial Autonomy (rapidly emerging)​

3️⃣ Mobile robots (AMRs, humanoids, legged robots)​

  • Warehouses
  • Hospitals
  • Field inspection
  • Construction sites
Constraints:

  • Battery-limited
  • Must operate for hours or days
  • Continuous perception (vision, lidar, audio)
  • Safety-critical decisions
Most current systems are over-provisioned and power-inefficient — by design.

4️⃣ Autonomous vehicles (non-consumer first)​

Not robotaxis — those are cloud-heavy.

But:

  • Mining vehicles
  • Agricultural machinery
  • Military ground vehicles
These:

  • Operate in remote areas
  • Require deterministic response
  • Have limited connectivity

Medical & Human-adjacent Systems (under-appreciated)​

5️⃣ Implantable and wearable medical devices​

Examples:

  • Cochlear implants
  • Neural stimulators
  • Smart insulin pumps
  • Continuous monitoring wearables
Why they’re important:

  • Always on
  • Ultra-low power
  • Safety critical
  • Privacy sensitive
Cloud AI is a non-starter.
“Good enough” compute often isn’t good enough.

6️⃣ Assistive technologies​

  • Prosthetics
  • Exoskeletons
  • Mobility aids
These require:

  • Real-time feedback loops
  • Local learning
  • Minimal latency

Consumer & Infrastructure (early but inevitable)​

7️⃣ Smart sensors & IoT at scale​

  • Environmental monitoring
  • Smart cities
  • Infrastructure health
  • Border security
Thousands or millions of sensors:

  • Always on
  • Mostly idle
  • Occasionally critical
Processing everything centrally is economically insane.

8️⃣ Space systems​

  • Satellites
  • Spacecraft autonomy
  • Debris avoidance
  • On-orbit servicing
Constraints:

  • Power is scarce
  • Communication delayed
  • Reliability is existential

The unifying insight (this is the key)​

In all these systems:

  • Power is a hard constraint
  • Latency is non-negotiable
  • Connectivity is unreliable
  • Intelligence must be continuous, not episodic
That combination is exactly where:

  • Cloud AI fails
  • Brute-force edge AI struggles
  • Event-driven approaches become interesting

Why this matters for your broader thesis​

These systems are not fringe.

They are:

  • Growing in number
  • Growing in strategic importance
  • Becoming more autonomous over time
And importantly:

They don't tolerate inefficiency forever.
That’s why this category matters more than benchmarks or demos.

Do your own research.
Hi Hoppy,

The chatbot overlooked my hobbyhorse, cybersecurity. There is an urgent and explosively growing need, and with the QV CyberNeuro-RT/Akida Edge Box (or whatever it's called now), we have a COTS solution which grew out of a DoE SBIR and an associated MDA SBIR. To be fair, cybersecurity does not necessarily meet the requirement to "move through the world".

On "2. Counter drone", false negatives are more costly.
 
  • Like
  • Fire
Reactions: 10 users

HopalongPetrovski

I'm Spartacus!
Hi Hoppy,

The chatbot overlooked my hobbyhorse, cybersecurity. There is an urgent and explosively growing need, and with the QV CyberNeuro-RT/Akida Edge Box (or whatever it's called now), we have a COTS solution which grew out of a DoE SBIR and an associated MDA SBIR. To be fair, cybersecurity does not necessarily meet the requirement to "move through the world".

On "2. Counter drone", false negatives are more costly.
Thanks Dio, and I agree with both the importance of cybersecurity and the possibility of it becoming a lucrative, near-term opportunity for us. I assume its implementation would necessarily entail a physical device to provide protection?
Is retrofit protection available through something like a Raspberry Pi plug-in device, or would it need to be built in and only available on new, specifically designed hardware, do you think?
 
  • Like
  • Love
Reactions: 2 users

Diogenese

Top 20
Thanks Dio, and I agree with both the importance of cybersecurity and the possibility of it becoming a lucrative, near-term opportunity for us. I assume its implementation would necessarily entail a physical device to provide protection?
Is retrofit protection available through something like a Raspberry Pi plug-in device, or would it need to be built in and only available on new, specifically designed hardware, do you think?
Hi Hoppy,

The first product is the Edge Box, which is designed for small/medium enterprises. I guess Akida will detect the threat first and then the CyberNeuro software deals with the hostile input. The model will be continually updated via federated learning as new threats are detected.
 
  • Fire
  • Like
Reactions: 6 users

Esq.111

Fascinatingly Intuitive.
So to clarify one last time, I am not asking what it costs BrainChip to manufacture the chips, nor am I asking for a margin analysis.

I am pointing out that we cannot determine the revenue from these orders because the announcement states customers will be charged anywhere between $4 and $50 per chip, depending on volume.

What the announcement does not disclose, however, is where on that sliding scale these orders actually sit.

Specifically: 1) what volume qualifies as a high-volume order? 2) at what quantity does pricing move from $50 to $10 to $4? 3) where does an order of 10,000 units or 1,200 units fall on that curve?

Without that information, revenue could be materially different under perfectly reasonable interpretations.

To illustrate by way of demonstration only:
  • If BrainChip considers anything above 5,000 units to be a volume order, then 11,200 units could be priced at $4, generating roughly $45k in revenue.
  • If instead “volume” means anything exceeding 50,000 units, then the same 11,200 units could be priced far higher — say $20–$30 per chip, resulting in $224k–$336k of revenue.

My point is that until the company clarifies how the volume pricing tiers actually work, any attempt to calculate revenue from these orders is pure guesswork.
Top of the evening, Bravo.

Bloody good to see you are still with us, 🍻.

On the .... what the farrk are BRN shareholders getting from these initial chip sales? ..

Who would honestly know? Fucked if I do. Pretty sure our management ???? also have no idea. Personally going with a percentage, per chip, per product.

i.e., a Tomahawk missile costs X; incorporate our tech (one has to fucking well pay for it) not at $50 per chip but as a percentage per use.

OUR DIRECTORS ??????? ACCORDING TO OUR CHAIRMAN, ARE BEING COMPENSATED COMMENSURATE WITH A COMPANY RETURNING $20 TO $30-ODD MILLION ANNUALLY.

FUNNY AS FUCK, THIS CAME FROM NONE OTHER THAN OUR VERY OWN CHAIRMAN'S MOUTH, THREE YEARS AGO.

They have not even .... remotely approached such levels, yet continue to suck the company's reserves dry without remorse. Said it once & will say it again: will be voting ALL OUT next AGM, AGAIN.

They have consistently, without fail, shown a complete disregard for (OUR COMPANY'S) finances to further their own salaries/perks.

The arrogance shown thus far on many fronts is staggering.

Would be happy (and will vote accordingly .... again, third year running) to eject the entire board, then our CEO.

Yep, would be a happy holder employing, not the first, but probably the fourth or fifth smartest graduate out of Harvard or Stanford Business, a young soul WITH A BIT OF DRIVE.

Not in any way directed at you, Bravo. Thank you for one's input, always appreciated.

Regards,
Esq.
 
  • Love
  • Wow
  • Sad
Reactions: 4 users

7für7

Top 20
Sean, when shareholders ask him what he has been doing for almost 5 years:

Tonito Just Hanging Around GIF
 
  • Haha
Reactions: 1 user

Drewski

Regular
SNAFU

The IP is incredibly valuable.

The tech is revolutionary and necessary.

We are told nothing, or there is nothing to tell or they can't tell.

I don't doubt Sean and co are doing everything within their power to make BRN win.

There are insanely powerful forces that BRN has to compete with.

Time will tell, for good or bad.

2026 is make or break, there is no doubt about it.
 
  • Like
Reactions: 1 users

7für7

Top 20
Can you imagine? Next week…..

Price

Sensitive



An….

Nounce….

Ment……
 

manny100

Top 20
I pasted a portion of Steve Brightfield's interview concerning the Air Force Research contract (obviously RTX/US AFRL) and asked 'chat' to outline the implications for BrainChip.
Q. " One of the things that was interesting is we got a contract with Air Force research libraries to work on radar using these algorithms, right? And the results actually surprised us and they surprised the contracting agency and now we're expanding that. And we think that we can, you know, add capabilities to radar that weren't there before. Like, for example, radar can detect things, right? But it can't tell you what it is. Well, we can classify objects now with radar in addition to detecting them. We can improve the tracking and the latency of these radars. But we can also make them a lot smaller, right? So it's that size weight and power. Can I put a radar in a robot? So when it's hand has got a radar signal in it and it can basically navigate, you can paint the scene without a camera. You can use it like a camera to paint the scene and recognize and grasp things that a drone. You can fly it inside tunnels or buildings indoors. You can map out where you're going. We see this shrinking of the conventional radar technologies to really go into anything moving because it's all whether it works in the dark. And if it can replicate some of the things in vision, then, you know, you don't have to worry about rain and fog and all the issues that visual, you know, control of robots. Yeah. And are you working with robotic companies or is this still in the research room? It's still in the research. We're working with companies that are creating components or solutions that go to the robotics companies. We are in active conversations with robotic companies today." quote from Steve Brightfield's interview. What are the implications for Brainchip."
My bold above highlighting important area.
Chat's reply:
The implications for BrainChip are much bigger than the quote sounds at first glance. Brightfield is essentially revealing that neuromorphic radar is not just a research curiosity—it’s showing unexpectedly strong performance and opening entirely new markets.

Here’s what this really means for BrainChip.

1. Validation From a High‑Credibility Customer (the U.S. Air Force)

When a defense research agency tests a technology and is surprised by the results, that’s a major signal.

It implies:

  • BrainChip’s neuromorphic approach is outperforming traditional radar processing.
  • The technology is maturing faster than expected.
  • The Air Force sees enough promise to expand the contract.
For a small semiconductor company, this is enormous validation.


📡 2. Neuromorphic Radar Could Become a New Category of Sensor

Brightfield is describing a capability that traditional radar does not have:

  • Object classification, not just detection
  • Scene reconstruction (like a camera, but using radar)
  • Low‑latency tracking
  • Miniaturization (radar small enough for a robot hand)
This implies BrainChip’s Akida architecture can:

  • Process radar signals in real time
  • Extract features the way a biological brain would
  • Do it with extremely low power
If this holds up, BrainChip isn’t just improving radar—they’re enabling a new sensing modality.


🤖 3. Robotics Becomes a Natural Market

He’s hinting at a future where robots use radar the way humans use vision:

  • Navigating in darkness
  • Operating in smoke, fog, dust, tunnels
  • Grasping objects without cameras
  • Flying drones indoors or underground
This is a huge implication:

BrainChip could become a foundational chip supplier for next‑generation robotics.​

Robotics companies are already “in active conversations,” which means:

  • Early evaluations are underway
  • BrainChip is being considered for integration into future platforms
  • The company is moving from R&D to commercial engagement

📦 4. Size, Weight, and Power (SWaP) Advantage = Competitive Edge

Neuromorphic chips excel at:

  • Ultra‑low power
  • Low heat
  • Small footprint
Shrinking radar systems is a massive competitive advantage in:

  • Drones
  • Autonomous robots
  • Wearables
  • Defense systems
  • Industrial automation
This positions BrainChip as a SWaP‑optimized alternative to GPUs and traditional DSPs.


🧠 5. BrainChip Is Moving Up the Value Chain

He says they’re no longer just giving algorithms—they’re building reference platforms.

This means:

  • BrainChip wants to make adoption easier
  • They’re packaging hardware + software + demo systems
  • They’re targeting OEMs who want turnkey solutions
This is the same strategy NVIDIA used early on: don’t just sell chips—sell ecosystems.


🌐 6. Potential for Dual‑Use Technology

Radar + neuromorphic processing has both:

  • Defense applications (surveillance, navigation, targeting)
  • Commercial applications (robotics, drones, smart sensors)
Dual‑use tech tends to scale faster because:

  • Defense funds early R&D
  • Commercial markets drive volume
BrainChip is positioning itself right at that intersection.
 
  • Like
Reactions: 1 users

Diogenese

Top 20
For a robot, the detection range will normally only need to be <200 m, compared with a conventional radar range of >20 km, so the radar system will also be low power. Since the transmit power required for a given detection performance scales roughly with the fourth power of range, that is potentially around one hundred-millionth of the transmit power, though the receiver chain will probably need rather more than that.

It will use the 8-bit Akida 2, which does use more power than Akida 1 (4-bit).

PS: Prof Google tells me that 10 to 100 kW is the typical peak transmit power for radar.
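The power comparison can be sanity-checked with the standard monostatic radar range equation, under which the transmit power required for a given detection performance scales with the fourth power of range (a simplifying assumption that ignores differences in antenna gain, target cross-section and receiver noise):

```python
# Sketch of radar-equation power scaling, assuming the standard monostatic
# radar equation: required transmit power ~ R^4 for fixed detection SNR.
def transmit_power_ratio(short_range_m: float, long_range_m: float) -> float:
    """Ratio of transmit power needed at short vs long range (R^4 law)."""
    return (short_range_m / long_range_m) ** 4

# A 200 m robot radar vs a 20 km conventional radar: the range ratio is
# 100x, so the transmit power ratio is 100^4 = 1e8.
ratio = transmit_power_ratio(200.0, 20_000.0)
print(ratio)  # ~1e-8, i.e. about one hundred-millionth of the power
```

Against the 10 to 100 kW peak transmit power quoted above, that R^4 scaling would put a 200 m radar's transmit requirement in the sub-milliwatt range, which is why short-range radar-on-robot becomes plausible at all.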
 
  • Like
Reactions: 2 users
Top Bottom