BRN Discussion Ongoing

Looking forward to hearing something solid in BRN's favour.
 
  • Like
Reactions: 3 users

manny100

Top 20
From a recent video interview with Steve Brightfield.
"The primary difference between BrainChip and the Intel and IBM solutions was that they were analog. They truly tried to match the analog waveforms of the brain, whereas BrainChip made a digital equivalent of the analog waveform. So now you could easily manufacture a digital computer chip using the approach. The analog chips that are made for neuromorphics today are notorious - you know, you have to have them biased and temperature stabilized, and there are all the problems with analog, which is the reason we don't have a lot of analog computers today. Those are the problems they're faced with in their neuromorphic chips."
"There are other companies that are producing analog neuromorphic chips, but they're kind of dedicated to a specific market segment, like speech wake-up, right? Or a biological wake-up. So they're like function-specific neuromorphic chips. We have a very digital, programmable chip that can use any kind of sensor, so we're kind of unique in that aspect."
My bold above.
 
  • Like
  • Fire
  • Love
Reactions: 14 users

manny100

Top 20
Another quote from Steve Brightfield.
"I think we're trying to ride the neuromorphic computing and brain chip in particular is trying to ride the coattails of the overall market moving to the edge. And when we look at market research reports from companies, they're saying about 10% of these edge products embedded devices are running some AI software on them. But within the next four years, four to five years, 30 to 35% of those products will have AI on. And I think if we look out, the next five years, 90% of them will have it all embedded in it. And there will be a neuromorphic computing in probably half of those devices. Because it's going to be more generally available. "
 
  • Fire
  • Like
  • Love
Reactions: 7 users

Guzzi62

Regular
From a recent video interview with Steve Brightfield.
"The primary difference between BrainChip and the Intel and IBM solutions was that they were analog. They truly tried to match the analog waveforms of the brain, whereas BrainChip made a digital equivalent of the analog waveform. So now you could easily manufacture a digital computer chip using the approach. The analog chips that are made for neuromorphics today are notorious - you know, you have to have them biased and temperature stabilized, and there are all the problems with analog, which is the reason we don't have a lot of analog computers today. Those are the problems they're faced with in their neuromorphic chips."
"There are other companies that are producing analog neuromorphic chips, but they're kind of dedicated to a specific market segment, like speech wake-up, right? Or a biological wake-up. So they're like function-specific neuromorphic chips. We have a very digital, programmable chip that can use any kind of sensor, so we're kind of unique in that aspect."
My bold above.
Sadly, Steve got that wrong!

Intel's Loihi 2 is fully digital.


 
  • Like
Reactions: 4 users

manny100

Top 20
  • Like
  • Haha
Reactions: 4 users

perceptron

Regular
So to clarify one last time, I am not asking what it costs BrainChip to manufacture the chips, nor am I asking for a margin analysis.

I am pointing out that we cannot determine the revenue from these orders because the announcement states customers will be charged anywhere between $4 and $50 per chip, depending on volume.

What the announcement does not disclose however, is where on that sliding scale these orders actually sit.

Specifically: 1) what volume qualifies as a high-volume order; 2) at what quantity does pricing move from $50 to $10 to $4; and 3) where does an order of 10,000 units or 1,200 units fall on that curve?

Without that information, revenue could be materially different under perfectly reasonable interpretations.

To illustrate, purely by way of demonstration:
  • If BrainChip considers anything above 5,000 units to be a volume order, then 11,200 units could be priced at $4, generating roughly $45k in revenue.
  • If instead “volume” means anything exceeding 50,000 units, then the same 11,200 units could be priced far higher — say $20–$30 per chip, resulting in $224k–$336k of revenue.

My point is that until the company clarifies how the volume pricing tiers actually work, any attempt to calculate revenue from these orders is pure guesswork.
Appreciate your reply, Bravo. So the foundry is only charging its customers the variable costs associated with producing any number of chips, while excluding fixed costs such as capital expenditure, taxes, salaries and the many other costs that remain constant for any given number of chips ordered.
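To make the pricing sensitivity in Bravo's post concrete, here is a quick back-of-the-envelope sketch. Every tier boundary below is invented purely for illustration; the announcement discloses only the $4-$50 range and the two order sizes, so treat the outputs as hypotheticals, not estimates.

```python
# Illustrative only: revenue from an 11,200-unit order under two
# hypothetical readings of BrainChip's $4-$50 volume pricing.
# The tier boundaries are made up; the announcement does not disclose them.

def revenue(units: int, tiers: list[tuple[int, float]]) -> float:
    """Price the whole order at the rate of the highest tier it reaches.

    tiers: (minimum_order_size, price_per_chip) pairs, sorted by
    ascending minimum. The order pays the price attached to the
    largest minimum it meets or exceeds.
    """
    price = tiers[0][1]
    for min_units, per_chip in tiers:
        if units >= min_units:
            price = per_chip
    return units * price

order = 11_200  # 10,000 + 1,200 units from the announcement

# Reading A: anything above 5,000 units already earns the $4 floor.
tiers_a = [(0, 50.0), (1_000, 10.0), (5_000, 4.0)]

# Reading B: the $4 floor needs 50,000+ units; mid-size orders pay ~$25.
tiers_b = [(0, 50.0), (10_000, 25.0), (50_000, 4.0)]

print(f"Reading A: ${revenue(order, tiers_a):,.0f}")  # $44,800
print(f"Reading B: ${revenue(order, tiers_b):,.0f}")  # $280,000
```

Same order, roughly a six-fold difference in revenue, which is exactly why the undisclosed tier boundaries matter.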
 

Diogenese

Top 20
From a recent video interview with Steve Brightfield.
"The primary difference between BrainChip and the Intel and IBM solutions was that they were analog. They truly tried to match the analog waveforms of the brain, whereas BrainChip made a digital equivalent of the analog waveform. So now you could easily manufacture a digital computer chip using the approach. The analog chips that are made for neuromorphics today are notorious - you know, you have to have them biased and temperature stabilized, and there are all the problems with analog, which is the reason we don't have a lot of analog computers today. Those are the problems they're faced with in their neuromorphic chips."
"There are other companies that are producing analog neuromorphic chips, but they're kind of dedicated to a specific market segment, like speech wake-up, right? Or a biological wake-up. So they're like function-specific neuromorphic chips. We have a very digital, programmable chip that can use any kind of sensor, so we're kind of unique in that aspect."
My bold above.
Hi manny,

I'd like your thoughts/corrections to the following.

Tony Lewis mentioned that Akida 3 and GenAI will have a flexible hardware switched communication mesh whereas Akida 1 & 2 have a packet switched type comms mesh, and I'm trying to understand the advantages of the H/W switch.

The packet-switched version requires that each event/spike includes an address header to direct it to the destination neuron. This entails transmitting additional bits with each event, increasing latency and power usage.

Thus for a many-to-many neuron connexion, there must be a transistor switch matrix (~ a crossbar switch?). But this only requires one switch per destination neuron, compared with a requirement for the header to include the address of each destination neuron. For example, if there were 256 possible destination neurons, then the header would need to include 8 bits per destination neuron. So for 8 destination neurons, for example, that would be 64 additional bits (ignoring any protocol overhead). (There may well be a more efficient protocol for this, but that is above my pay grade.)

In contrast, for a hardware switching matrix, only one transistor needs to switch per destination neuron, so, in the above example, that would be 8 switch operations.

In addition, the packet switched mesh protocol needs a larger collision avoidance buffer, to ensure only 1 event is transmitted at a time.

Obviously, the H/W switch will still require additional power/time to set up the switch connexions, but I assume this is a one-off for each task.
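To put rough numbers on the comparison, here is a toy sketch of the per-event arithmetic under the assumptions above (flat address space, one fixed-width address per destination, no framing overhead). The actual Akida mesh protocol has not been published, so this illustrates the counting argument only, not the real hardware.

```python
# Toy model of per-event overhead: packet-switched mesh (address header
# per destination) vs a hardware crossbar (one switch per destination,
# configured once per task rather than per event).
import math

def packet_header_bits(total_neurons: int, destinations: int) -> int:
    """Addressing bits a packet-switched event must carry."""
    address_bits = math.ceil(math.log2(total_neurons))  # 256 neurons -> 8 bits
    return address_bits * destinations

def crossbar_switch_ops(destinations: int) -> int:
    """One transistor switch setting per destination neuron."""
    return destinations

# Fanning one event out to 8 of 256 possible destination neurons:
print(packet_header_bits(256, 8))  # 64 extra bits on *every* event
print(crossbar_switch_ops(8))      # 8 one-off switch settings per task
```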
 
  • Love
  • Wow
  • Thinking
Reactions: 4 users

Diogenese

Top 20
  • Haha
  • Like
  • Love
Reactions: 4 users

manny100

Top 20
Hi manny,

I'd like your thoughts/corrections to the following.

Tony Lewis mentioned that Akida 3 and GenAI will have a flexible hardware switched communication mesh whereas Akida 1 & 2 have a packet switched type comms mesh, and I'm trying to understand the advantages of the H/W switch.

The packet-switched version requires that each event/spike includes an address header to direct it to the destination neuron. This entails transmitting additional bits with each event, increasing latency and power usage.

Thus for a many-to-many neuron connexion, there must be a transistor switch matrix (~ a crossbar switch?). But this only requires one switch per destination neuron, compared with a requirement for the header to include the address of each destination neuron. For example, if there were 256 possible destination neurons, then the header would need to include 8 bits per destination neuron. So for 8 destination neurons, for example, that would be 64 additional bits (ignoring any protocol overhead). (There may well be a more efficient protocol for this, but that is above my pay grade.)

In contrast, for a hardware switching matrix, only one transistor needs to switch per destination neuron, so, in the above example, that would be 8 switch operations.

In addition, the packet switched mesh protocol needs a larger collision avoidance buffer, to ensure only 1 event is transmitted at a time.

Obviously, the H/W switch will still require additional power/time to set up the switch connexions, but I assume this is a one-off for each task.
Hi Dio, it's above my pay grade, but Gen 3 is an upgrade, so it has to be an improvement.
It appears ditching the packets will likely improve latency and reduce power consumption, and it would be designed to work well, or better, with GenAI.
Basically a guess, based on "why do it if it's not better?", and those are the key areas.
 
  • Love
  • Thinking
Reactions: 2 users

HopalongPetrovski

I'm Spartacus!
Below is a copy of Marketing Man's recent post over on the crapper.
I am reposting it here because I think it is eloquent and extremely pertinent.
It is in reply to Phil the yank, a recent blow-in and self-proclaimed heavy hitter who, having now "discovered" WBT, seems to be falling out of love with us. 🤣


To add, I agree with the core issue you’re highlighting.

The risk isn’t “does Akida work?”; it’s whether adoption happens fast enough in a market that often rewards good-enough-and-easy over better-but-harder.

That said, a couple of nuances are worth adding.

  • First, if neuromorphic adoption is being held back by friction, that friction applies to all neuromorphic approaches, not just BrainChip. In that context, BRN actually appears to be ahead of the pack rather than behind. They’ve invested earlier than most in developer tooling, SDKs, MetaTF to bridge into existing ML workflows, and they’re now explicitly recruiting senior leadership to deepen the software and toolchain side. That doesn’t eliminate friction, but it does suggest management understands exactly where the bottleneck is.

  • Second, you’re absolutely right that today BRN’s traction is showing up mainly where SWaP (Size, Weight, and Power) is unavoidable - space, defence, and ultra-low-power wearables like Onsor. That’s not a weakness of the technology; it’s where the constraints force a different compute approach. In those environments, “good enough” conventional compute doesn't fit. Early adoption almost always starts where the pain is baked in.

  • Third, while much of edge-AI today is advancing quickly on conventional methods, power consumption and heat rejection are becoming high-priority problems even in data centres. We’re already seeing hyperscalers talk openly about energy limits, cooling constraints, and diminishing returns from brute-force GPUs. For example, several new, large-scale data centres in Western Sydney, particularly projects by AirTrunk, Microsoft, and Amazon in areas like Kemps Creek and Marsden Park, are facing scrutiny over high water consumption for cooling (even though the cooling systems are heat-pump based, the cooling towers are typically evaporative). And in the US, Microsoft is underwriting the recommissioning of the Three Mile Island nuclear plant (the site of one of the worst nuclear accidents prior to Fukushima and Chernobyl) because its new data centres need enormous power. Eventually efficiency stops being a “nice to have” and becomes a design constraint.

  • Finally, this whole debate has a strong historical echo. Valves weren’t displaced by transistors because transistors were immediately easier. Quite the opposite: valves had a huge industrial base, familiar engineering practices, and plenty of off-the-shelf solutions. Transistors were initially viewed as fragile, unproven, and niche. Yet once power, size, heat and reliability became dominant constraints, the market flipped, slowly at first, then decisively.

None of this guarantees BRN wins.

Adoption speed still matters, and the window isn’t open-ended. But it’s not that BrainChip is falling behind “better” competitors, it’s that the market hasn’t yet been forced to care enough about the problems neuromorphic solves. Where it has been forced, BRN is already seeing traction.

So I agree with your concern, but I’d say the verdict hinges less on near-term elegance or ease of adoption, and more on how fast power, heat, and autonomy constraints tighten across the broader AI landscape. That’s the real time-frame that matters here.

One final point I’d add is that much of what the market currently labels as “AI progress” is still overwhelmingly centred on LLMs and hyperscale cloud infrastructure and that phase is rapidly maturing.

The next leg of AI growth is far more likely to be about where the compute happens, not just how large models become. As AI pushes out of data centres and into autonomous, mobile, always-on systems, edge constraints like power, latency, heat and connectivity become dominant.

That shift doesn’t automatically guarantee success for neuromorphic approaches, but it does materially expand the addressable market where neuromorphic makes sense. In that context, neuromorphic isn’t competing head-to-head with cloud AI; it’s positioned for the phase after cloud-centric AI saturates, which is where a lot of the longer-term optionality sits.

Of course, a big application is battle drones. I saw a YouTube video recently that reported that 80% of kills in the Ukraine-Russia conflict have been executed by drones (of all shapes and sizes). No military budget can ignore this trend. With US isolationism and Trump forcing countries to re-arm, drone development (UAV is the correct term) is at an all-time high. And AI that is fast, local, small, reliable, low-power, and cheap is what makes UAVs deadlier.

You don't wait for the conflict to start - you fill warehouses with 'em so you're ready.

That's good news for BRN (and us investors) - but World War III - not so good.

***
I had an AI "Chat" which identified the following edge use cases....
“Autonomous, mobile, always-on” systems share three traits:

  • They move through the world
  • They must perceive continuously
  • They cannot rely on constant connectivity or abundant power

Here are the best, real-world examples, grouped by maturity and importance.

Military & Security (already here, already critical)​

1️⃣ Drones (air, land, sea)​

This is the archetype.

  • ISR drones
  • Loitering munitions
  • Autonomous naval vessels
  • Swarm drones
Why they matter:

  • Battery-powered
  • Highly mobile
  • Must sense continuously
  • Must react in milliseconds
  • Often disconnected or jammed
Cloud AI is impossible here.
Brute-force edge compute is tolerated — but inefficient.

2️⃣ Counter-drone & perimeter defence systems​

  • Radar + RF + EO/IR sensing
  • Always on
  • Must distinguish signal from noise continuously
  • False positives are costly
Event-driven, low-power intelligence is a natural fit.

Robotics & Industrial Autonomy (rapidly emerging)​

3️⃣ Mobile robots (AMRs, humanoids, legged robots)​

  • Warehouses
  • Hospitals
  • Field inspection
  • Construction sites
Constraints:

  • Battery-limited
  • Must operate for hours or days
  • Continuous perception (vision, lidar, audio)
  • Safety-critical decisions
Most current systems are over-provisioned and power-inefficient — by design.

4️⃣ Autonomous vehicles (non-consumer first)​

Not robotaxis — those are cloud-heavy.

But:

  • Mining vehicles
  • Agricultural machinery
  • Military ground vehicles
These:

  • Operate in remote areas
  • Require deterministic response
  • Have limited connectivity

Medical & Human-adjacent Systems (under-appreciated)​

5️⃣ Implantable and wearable medical devices​

Examples:

  • Cochlear implants
  • Neural stimulators
  • Smart insulin pumps
  • Continuous monitoring wearables
Why they’re important:

  • Always on
  • Ultra-low power
  • Safety critical
  • Privacy sensitive
Cloud AI is a non-starter.
“Good enough” compute often isn’t good enough.

6️⃣ Assistive technologies​

  • Prosthetics
  • Exoskeletons
  • Mobility aids
These require:

  • Real-time feedback loops
  • Local learning
  • Minimal latency

Consumer & Infrastructure (early but inevitable)​

7️⃣ Smart sensors & IoT at scale​

  • Environmental monitoring
  • Smart cities
  • Infrastructure health
  • Border security
Thousands or millions of sensors:

  • Always on
  • Mostly idle
  • Occasionally critical
Processing everything centrally is economically insane.

8️⃣ Space systems​

  • Satellites
  • Spacecraft autonomy
  • Debris avoidance
  • On-orbit servicing
Constraints:

  • Power is scarce
  • Communication delayed
  • Reliability is existential

The unifying insight (this is the key)​

In all these systems:

  • Power is a hard constraint
  • Latency is non-negotiable
  • Connectivity is unreliable
  • Intelligence must be continuous, not episodic
That combination is exactly where:

  • Cloud AI fails
  • Brute-force edge AI struggles
  • Event-driven approaches become interesting

Why this matters for your broader thesis​

These systems are not fringe.

They are:

  • Growing in number
  • Growing in strategic importance
  • Becoming more autonomous over time
And importantly:

They don’t tolerate inefficiency forever.
That’s why this category matters more than benchmarks or demos.

Do your own research.
 
  • Fire
  • Like
  • Love
Reactions: 13 users

Diogenese

Top 20
Below is a copy of Marketing Man's recent post over on the crapper.
I am reposting it here because I think it is eloquent and extremely pertinent.
It is in reply to Phil the yank, a recent blow-in and self-proclaimed heavy hitter who, having now "discovered" WBT, seems to be falling out of love with us. 🤣


To add, I agree with the core issue you’re highlighting.

The risk isn’t “does Akida work?”; it’s whether adoption happens fast enough in a market that often rewards good-enough-and-easy over better-but-harder.

That said, a couple of nuances are worth adding.

  • First, if neuromorphic adoption is being held back by friction, that friction applies to all neuromorphic approaches, not just BrainChip. In that context, BRN actually appears to be ahead of the pack rather than behind. They’ve invested earlier than most in developer tooling, SDKs, MetaTF to bridge into existing ML workflows, and they’re now explicitly recruiting senior leadership to deepen the software and toolchain side. That doesn’t eliminate friction, but it does suggest management understands exactly where the bottleneck is.

  • Second, you’re absolutely right that today BRN’s traction is showing up mainly where SWaP (Size, Weight, and Power) is unavoidable - space, defence, and ultra-low-power wearables like Onsor. That’s not a weakness of the technology; it’s where the constraints force a different compute approach. In those environments, “good enough” conventional compute doesn't fit. Early adoption almost always starts where the pain is baked in.

  • Third, while much of edge-AI today is advancing quickly on conventional methods, power consumption and heat rejection are becoming high-priority problems even in data centres. We’re already seeing hyperscalers talk openly about energy limits, cooling constraints, and diminishing returns from brute-force GPUs. For example, several new, large-scale data centres in Western Sydney, particularly projects by AirTrunk, Microsoft, and Amazon in areas like Kemps Creek and Marsden Park, are facing scrutiny over high water consumption for cooling (even though the cooling systems are heat-pump based, the cooling towers are typically evaporative). And in the US, Microsoft is underwriting the recommissioning of the Three Mile Island nuclear plant (the site of one of the worst nuclear accidents prior to Fukushima and Chernobyl) because its new data centres need enormous power. Eventually efficiency stops being a “nice to have” and becomes a design constraint.

  • Finally, this whole debate has a strong historical echo. Valves weren’t displaced by transistors because transistors were immediately easier. Quite the opposite: valves had a huge industrial base, familiar engineering practices, and plenty of off-the-shelf solutions. Transistors were initially viewed as fragile, unproven, and niche. Yet once power, size, heat and reliability became dominant constraints, the market flipped, slowly at first, then decisively.

None of this guarantees BRN wins.

Adoption speed still matters, and the window isn’t open-ended. But it’s not that BrainChip is falling behind “better” competitors, it’s that the market hasn’t yet been forced to care enough about the problems neuromorphic solves. Where it has been forced, BRN is already seeing traction.

So I agree with your concern, but I’d say the verdict hinges less on near-term elegance or ease of adoption, and more on how fast power, heat, and autonomy constraints tighten across the broader AI landscape. That’s the real time-frame that matters here.

One final point I’d add is that much of what the market currently labels as “AI progress” is still overwhelmingly centred on LLMs and hyperscale cloud infrastructure and that phase is rapidly maturing.

The next leg of AI growth is far more likely to be about where the compute happens, not just how large models become. As AI pushes out of data centres and into autonomous, mobile, always-on systems, edge constraints like power, latency, heat and connectivity become dominant.

That shift doesn’t automatically guarantee success for neuromorphic approaches, but it does materially expand the addressable market where neuromorphic makes sense. In that context, neuromorphic isn’t competing head-to-head with cloud AI; it’s positioned for the phase after cloud-centric AI saturates, which is where a lot of the longer-term optionality sits.

Of course, a big application is battle drones. I saw a YouTube video recently that reported that 80% of kills in the Ukraine-Russia conflict have been executed by drones (of all shapes and sizes). No military budget can ignore this trend. With US isolationism and Trump forcing countries to re-arm, drone development (UAV is the correct term) is at an all-time high. And AI that is fast, local, small, reliable, low-power, and cheap is what makes UAVs deadlier.

You don't wait for the conflict to start - you fill warehouses with 'em so you're ready.

That's good news for BRN (and us investors) - but World War III - not so good.

***
I had an AI "Chat" which identified the following edge use cases....
“Autonomous, mobile, always-on” systems share three traits:

  • They move through the world
  • They must perceive continuously
  • They cannot rely on constant connectivity or abundant power

Here are the best, real-world examples, grouped by maturity and importance.

Military & Security (already here, already critical)​

1️⃣ Drones (air, land, sea)​

This is the archetype.

  • ISR drones
  • Loitering munitions
  • Autonomous naval vessels
  • Swarm drones
Why they matter:

  • Battery-powered
  • Highly mobile
  • Must sense continuously
  • Must react in milliseconds
  • Often disconnected or jammed
Cloud AI is impossible here.
Brute-force edge compute is tolerated — but inefficient.

2️⃣ Counter-drone & perimeter defence systems​

  • Radar + RF + EO/IR sensing
  • Always on
  • Must distinguish signal from noise continuously
  • False positives are costly
Event-driven, low-power intelligence is a natural fit.

Robotics & Industrial Autonomy (rapidly emerging)​

3️⃣ Mobile robots (AMRs, humanoids, legged robots)​

  • Warehouses
  • Hospitals
  • Field inspection
  • Construction sites
Constraints:

  • Battery-limited
  • Must operate for hours or days
  • Continuous perception (vision, lidar, audio)
  • Safety-critical decisions
Most current systems are over-provisioned and power-inefficient — by design.

4️⃣ Autonomous vehicles (non-consumer first)​

Not robotaxis — those are cloud-heavy.

But:

  • Mining vehicles
  • Agricultural machinery
  • Military ground vehicles
These:

  • Operate in remote areas
  • Require deterministic response
  • Have limited connectivity

Medical & Human-adjacent Systems (under-appreciated)​

5️⃣ Implantable and wearable medical devices​

Examples:

  • Cochlear implants
  • Neural stimulators
  • Smart insulin pumps
  • Continuous monitoring wearables
Why they’re important:

  • Always on
  • Ultra-low power
  • Safety critical
  • Privacy sensitive
Cloud AI is a non-starter.
“Good enough” compute often isn’t good enough.

6️⃣ Assistive technologies​

  • Prosthetics
  • Exoskeletons
  • Mobility aids
These require:

  • Real-time feedback loops
  • Local learning
  • Minimal latency

Consumer & Infrastructure (early but inevitable)​

7️⃣ Smart sensors & IoT at scale​

  • Environmental monitoring
  • Smart cities
  • Infrastructure health
  • Border security
Thousands or millions of sensors:

  • Always on
  • Mostly idle
  • Occasionally critical
Processing everything centrally is economically insane.

8️⃣ Space systems​

  • Satellites
  • Spacecraft autonomy
  • Debris avoidance
  • On-orbit servicing
Constraints:

  • Power is scarce
  • Communication delayed
  • Reliability is existential

The unifying insight (this is the key)​

In all these systems:

  • Power is a hard constraint
  • Latency is non-negotiable
  • Connectivity is unreliable
  • Intelligence must be continuous, not episodic
That combination is exactly where:

  • Cloud AI fails
  • Brute-force edge AI struggles
  • Event-driven approaches become interesting

Why this matters for your broader thesis​

These systems are not fringe.

They are:

  • Growing in number
  • Growing in strategic importance
  • Becoming more autonomous over time
And importantly:

They don’t tolerate inefficiency forever.
That’s why this category matters more than benchmarks or demos.

Do your own research.
Hi Hoppy,

The chatbot overlooked my hobbyhorse, cybersecurity. There is an urgent and explosively growing need, and with the QV CyberNeuro-RT/Akida Edge Box (or whatever it's called now), we have a COTS solution which grew out of a DoE SBIR and an associated MDA SBIR. To be fair, cybersecurity does not necessarily meet the requirement to "move through the world".

On "2. Counter drone", false negatives are more costly.
 
  • Like
  • Fire
Reactions: 9 users

HopalongPetrovski

I'm Spartacus!
Hi Hoppy,

The chatbot overlooked my hobbyhorse, cybersecurity. There is an urgent and explosively growing need, and with the QV CyberNeuro-RT/Akida Edge Box (or whatever it's called now), we have a COTS solution which grew out of a DoE SBIR and an associated MDA SBIR. To be fair, cybersecurity does not necessarily meet the requirement to "move through the world".

On "2. Counter drone", false negatives are more costly.
Thanks Dio, and I agree with both the importance of cybersecurity and the possibility of it becoming a lucrative, near-term opportunity for us. I assume its implementation would necessarily entail a physical device to provide protection?
Is retrofit protection available through something like a Raspberry Pi plug-in device, or would it need to be built in and only available on new, specifically designed hardware, do you think?
 
  • Love
Reactions: 1 user

Diogenese

Top 20
Thanks Dio, and I agree with both the importance of cybersecurity and the possibility of it becoming a lucrative, near-term opportunity for us. I assume its implementation would necessarily entail a physical device to provide protection?
Is retrofit protection available through something like a Raspberry Pi plug-in device, or would it need to be built in and only available on new, specifically designed hardware, do you think?
Hi Hoppy,

The first product is the Edge Box, and is designed for small/medium enterprises. I guess Akida will detect it first and then the CyberNeuro software deals with hostile input. The model will be continually updated via federated learning as new threats are detected.
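For anyone unfamiliar with the federated learning step: the generic recipe (not necessarily BrainChip's actual mechanism, which hasn't been published) is that each deployed box trains locally on the threats it sees, and only model weights, never raw traffic, are pooled into the updated global model. A minimal sketch of that averaging step, with made-up numbers:

```python
# Generic federated averaging (FedAvg) sketch - illustration only,
# not BrainChip's published update mechanism.
import numpy as np

def federated_average(local_weights: list[np.ndarray],
                      sample_counts: list[int]) -> np.ndarray:
    """Average site models, weighting each by how much data it trained on."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(local_weights, sample_counts))

# Three Edge Boxes, each with a locally fine-tuned weight vector:
site_models = [np.array([0.2, 0.9]), np.array([0.4, 0.7]), np.array([0.3, 0.8])]
site_samples = [100, 300, 600]  # events seen at each site

global_model = federated_average(site_models, site_samples)
print(global_model)  # [0.32 0.78] - pushed back out to every box
```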
 
  • Fire
  • Like
Reactions: 5 users

Esq.111

Fascinatingly Intuitive.
So to clarify one last time, I am not asking what it costs BrainChip to manufacture the chips, nor am I asking for a margin analysis.

I am pointing out that we cannot determine the revenue from these orders because the announcement states customers will be charged anywhere between $4 and $50 per chip, depending on volume.

What the announcement does not disclose however, is where on that sliding scale these orders actually sit.

Specifically: 1) what volume qualifies as a high-volume order; 2) at what quantity does pricing move from $50 to $10 to $4; and 3) where does an order of 10,000 units or 1,200 units fall on that curve?

Without that information, revenue could be materially different under perfectly reasonable interpretations.

To illustrate, purely by way of demonstration:
  • If BrainChip considers anything above 5,000 units to be a volume order, then 11,200 units could be priced at $4, generating roughly $45k in revenue.
  • If instead “volume” means anything exceeding 50,000 units, then the same 11,200 units could be priced far higher — say $20–$30 per chip, resulting in $224k–$336k of revenue.

My point is that until the company clarifies how the volume pricing tiers actually work, any attempt to calculate revenue from these orders is pure guesswork.
Top of the evening, Bravo.

Bloody good to see you are still with us, 🍻.

On the.... what the farrk are BRN shareholders getting from these initial chip sales?

Who would honestly know, fucked if I do. Pretty sure our management ???? also have no idea. Personally going with a percentage, per chip, per product.

i.e., a Tomahawk missile costs X; incorporate our tech (one has to fucking well pay for it), not $50 per chip but a percentage per use.

OUR DIRECTORS??????? ACCORDING TO OUR CHAIRMAN, ARE BEING REMUNERATED COMMENSURATE WITH A COMPANY RETURNING $20 TO $30-ODD MILLION ANNUALLY.

FUNNY AS FUCK, THIS CAME FROM NONE OTHER THAN OUR VERY OWN CHAIRMAN'S MOUTH, THREE YEARS AGO.

They have not even.... remotely approached such levels, yet continue to suck the company's reserves dry without remorse. Said it once and will say it again: I will be voting ALL OUT next AGM, AGAIN.

They have consistently, without fail, shown a complete disregard for (OUR COMPANY'S) finances to further their own salaries/perks.

The arrogance shown thus far on many fronts is staggering.

Would be happy (and will vote accordingly.... again, third year running) to eject the entire board, then our CEO.

Yep, would be a happy holder employing, not the first, probably the fourth or fifth smartest graduate out of Harvard or Stanford Business, a young soul WITH A BIT OF DRIVE.

Not in any way directed at you Bravo; thank you, one's input is always appreciated.

Regards,
Esq.
 
  • Sad
  • Wow
Reactions: 2 users

7für7

Top 20
Sean, when shareholders ask him what he has been doing for almost 5 years:

Tonito Just Hanging Around GIF
 
  • Haha
Reactions: 1 user

Drewski

Regular
SNAFU

The IP is incredibly valuable.

The tech is revolutionary and necessary.

We are told nothing; either there is nothing to tell, or they can't tell.

I don't doubt that Sean and co are doing everything within their power to make BRN win.

There are insanely powerful forces that BRN has to compete with.

Time will tell, for good or bad.

2026 is make or break, there is no doubt about it.
 