smoothsailing18
Regular
Looking forward to hearing something solid in BRN's favour.
Sadly, Steve got that wrong! From a recent video interview with Steve Brightfield:
"The primary difference between brain chip and the Intel and the IBM solutions was they were analog. So they truly tried to match the analog waveforms of the brain, whereas the brain chip made a digital equivalent of the analog waveform. So now you could easily manufacture a computer, digital computer chip using the approach. The chips that you, the analog chips that are made today for neuromorphics, they're notorious for, you know, you have to have them biased and temperature stabilized, and there's all the problems with analog, which is the reason we don't have a lot of analog computers today, or the problems that they're faced with their neuromorphic chips."
"There are other companies that are producing analog Neuromorphic chips, but they're kind of dedicated for a specific market segment, like speech wake-up, right? Or a biological wake-up. So they're like function-specific Neuromorphic chips. We have a very digital programmable chip that can use any kind of sensor, so we're kind of unique in that aspect."
My bold above.
Appreciate your reply, Bravo. So the foundry is only charging its customers the variable costs of producing any number of chips, while excluding fixed costs such as capital expenditure, taxes, salaries and the many other costs that remain constant for any given quantity of chips ordered. So to clarify one last time, I am not asking what it costs BrainChip to manufacture the chips, nor am I asking for a margin analysis.
I am pointing out that we cannot determine the revenue from these orders because the announcement states customers will be charged anywhere between $4 and $50 per chip, depending on volume.
What the announcement does not disclose, however, is where on that sliding scale these orders actually sit.
Specifically:
1) What volume qualifies as a high-volume order?
2) At what quantity does pricing move from $50 to $10 to $4?
3) Where does an order of 10,000 units or 1,200 units fall on that curve?
Without that information, revenue could be materially different under perfectly reasonable interpretations.
To illustrate, by way of demonstration only:
- If BrainChip considers anything above 5,000 units to be a volume order, then 11,200 units could be priced at $4, generating roughly $45k in revenue.
- If instead “volume” means anything exceeding 50,000 units, then the same 11,200 units could be priced far higher — say $20–$30 per chip, resulting in $224k–$336k of revenue.
My point is that until the company clarifies how the volume pricing tiers actually work, any attempt to calculate revenue from these orders is pure guesswork.
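To make the sensitivity concrete, here is a minimal Python sketch. The tier breakpoints below are invented purely for illustration; the announcement discloses none of them:

```python
def price_per_chip(units, tiers):
    """Pick the per-chip price for an order size.
    `tiers` is a list of (minimum_quantity, price) pairs sorted by
    ascending quantity; the last tier whose threshold is met applies."""
    return [price for min_qty, price in tiers if units >= min_qty][-1]

# Two equally plausible readings of "$4 to $50 depending on volume".
# Neither set of breakpoints is disclosed anywhere; both are made up.
tiers_low_bar = [(0, 50.0), (1_000, 10.0), (5_000, 4.0)]     # "volume" = 5k+
tiers_high_bar = [(0, 50.0), (10_000, 25.0), (50_000, 4.0)]  # "volume" = 50k+

units = 11_200  # the 10,000-unit and 1,200-unit orders combined
print(units * price_per_chip(units, tiers_low_bar))   # 44800.0 (~$45k)
print(units * price_per_chip(units, tiers_high_bar))  # 280000.0 ($280k)
```

Same order book, roughly a six-fold revenue difference, which is exactly why the undisclosed breakpoints matter.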
Hi manny,
From a recent video interview with Steve Brightfield:
"The primary difference between brain chip and the Intel and the IBM solutions was they were analog. So they truly tried to match the analog waveforms of the brain, whereas the brain chip made a digital equivalent of the analog waveform. So now you could easily manufacture a computer, digital computer chip using the approach. The chips that you, the analog chips that are made today for neuromorphics, they're notorious for, you know, you have to have them biased and temperature stabilized, and there's all the problems with analog, which is the reason we don't have a lot of analog computers today, or the problems that they're faced with their neuromorphic chips."
"There are other companies that are producing analog Neuromorphic chips, but they're kind of dedicated for a specific market segment, like speech wake-up, right? Or a biological wake-up. So they're like function-specific Neuromorphic chips. We have a very digital programmable chip that can use any kind of sensor, so we're kind of unique in that aspect."
My bold above.
I wish Steve would speak in paragraphs! See the transcript of the Steve Brightfield interview.
It's a great read for new and 'worn out' investors.
AI Podcast Transcript & Summary - Steven Brightfield: How Neuromorphic Computing Cuts Inference Power by 10x
Transcription: So neuromorphics is really how our biology does computations in our brain. So we don't have circuits, we have neurons,...podcasttranscript.ai
Hi Dio, it's above my pay grade, but Gen 3 is an upgrade so it has to be an improvement.
Hi manny,
I'd like your thoughts/corrections to the following.
Tony Lewis mentioned that Akida 3 and GenAI will have a flexible hardware switched communication mesh whereas Akida 1 & 2 have a packet switched type comms mesh, and I'm trying to understand the advantages of the H/W switch.
The packet switched version requires that each event/spike includes an address header to direct it to the destination neuron. This entails transmitting additional bits with each event, increasing latency and power usage.
Thus for a many-to-many neuron connexion, there must be a transistor switch matrix (~ a crossbar switch?). But this only requires one switch per destination neuron, compared with a requirement for the header to include the address of each destination neuron. For example, if there were 256 possible destination neurons, then the header would need to include 8 bits per destination neuron. So for 8 destination neurons, for example, that would be 64 additional bits (ignoring any protocol overhead). (There may well be a more efficient protocol for this, but that is above my pay grade.)
In contrast, for a hardware switching matrix, only one transistor needs to switch per destination neuron, so, in the above example, that would be 8 switch operations.
In addition, the packet switched mesh protocol needs a larger collision avoidance buffer, to ensure only 1 event is transmitted at a time.
Obviously, the H/W switch will still require additional power/time to set up the switch connexions, but I assume this is a one off for each task.
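A back-of-envelope sketch of the overhead argument above. The 256-neuron mesh size and fan-out of 8 are just the worked example's assumptions, not confirmed Akida internals:

```python
import math

def packet_overhead_bits(total_neurons, fan_out):
    """Address bits a packet-switched mesh carries per event:
    one destination address per target neuron (framing/protocol ignored)."""
    addr_bits = math.ceil(math.log2(total_neurons))  # 8 bits for 256 neurons
    return addr_bits * fan_out  # paid again on EVERY spike

def hw_switch_ops(fan_out):
    """A crossbar-style hardware mesh instead closes one switch per
    destination, configured once when the task is loaded, not per event."""
    return fan_out

print(packet_overhead_bits(256, 8))  # 64 extra bits per event
print(hw_switch_ops(8))              # 8 one-off switch operations per task
```

The contrast is recurring vs one-off cost: the 64 header bits are transmitted with every spike, while the 8 switch settings are configured once per task, which is where the latency and power saving would come from.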
That’s why this category matters more than benchmarks or demos. They don’t tolerate inefficiency forever.
Hi Hoppy,
Below is a copy of Marketing Man's recent post over on the crapper.
I am reposting it here because I think it is eloquent and extremely pertinent.
It is in reply to Phil the Yank, a recent blow-in and self-proclaimed heavy hitter who, having now "discovered" WBT, seems to be falling out of love with us.
To add, I agree with the core issue you’re highlighting.
The risk isn’t “does Akida work?”; it’s whether adoption happens fast enough in a market that often rewards good-enough and easy over better but harder.
That said, a couple of nuances are worth adding.
- First, if neuromorphic adoption is being held back by friction, that friction applies to all neuromorphic approaches, not just BrainChip. In that context, BRN actually appears to be ahead of the pack rather than behind. They’ve invested earlier than most in developer tooling, SDKs, MetaTF to bridge into existing ML workflows, and they’re now explicitly recruiting senior leadership to deepen the software and toolchain side. That doesn’t eliminate friction, but it does suggest management understands exactly where the bottleneck is.
- Second, you’re absolutely right that today BRN’s traction is showing up mainly where SWaP (Space, Weight, and Power) is unavoidable - space, defence, and ultra-low-power wearables like Onsor. That’s not a weakness of the technology; it’s where costs force a different compute approach. In those environments, “good enough” conventional compute doesn't fit. Early adoption almost always starts where the pain is baked-in.
- Third, while much of edge-AI today is advancing quickly on conventional methods, power consumption and heat rejection are becoming high-priority problems even in data centres. We’re already seeing hyperscalers talk openly about energy limits, cooling constraints, and diminishing returns from brute-force GPUs. For example, several new, large-scale data centres in Western Sydney, particularly projects by AirTrunk, Microsoft, and Amazon in areas like Kemps Creek and Marsden Park, are facing scrutiny over high water consumption for cooling (even though the cooling systems are heat-pump based, the cooling towers are typically evaporative). And in the US, Microsoft has signed a deal to restart the Three Mile Island nuclear plant (the site of the worst nuclear accident before Fukushima and Chernobyl) because its new data centres need enormous power. Eventually efficiency stops being a “nice to have” and becomes a design constraint.
- Finally, this whole debate has a strong historical echo. Valves weren’t displaced by transistors because transistors were immediately easier. Quite the opposite: valves had a huge industrial base, familiar engineering practices, and plenty of off-the-shelf solutions. Transistors were initially viewed as fragile, unproven, and niche. Yet once power, size, heat and reliability became dominant constraints, the market flipped, slowly at first, then decisively.
None of this guarantees BRN wins.
Adoption speed still matters, and the window isn’t open-ended. But it’s not that BrainChip is falling behind “better” competitors, it’s that the market hasn’t yet been forced to care enough about the problems neuromorphic solves. Where it has been forced, BRN is already seeing traction.
So I agree with your concern, but I’d say the verdict hinges less on near-term elegance or ease of adoption, and more on how fast power, heat, and autonomy constraints tighten across the broader AI landscape. That’s the real time-frame that matters here.
One final point I’d add is that much of what the market currently labels as “AI progress” is still overwhelmingly centred on LLMs and hyperscale cloud infrastructure and that phase is rapidly maturing.
The next leg of AI growth is far more likely to be about where the compute happens, not just how large models become. As AI pushes out of data centres and into autonomous, mobile, always-on systems, edge constraints like power, latency, heat and connectivity become dominant.
That shift doesn’t automatically guarantee success for neuromorphic approaches, but it does materially expand the addressable market where neuromorphic makes sense. In that context, neuromorphic isn’t competing head-to-head with cloud AI; it’s positioned for the phase after cloud-centric AI saturates, which is where a lot of the longer-term optionality sits.
Of course, a big application is battle drones. I saw a YouTube video recently that reported that 80% of kills in the Ukraine-Russia conflict have been executed by drones (of all shapes and sizes). No military budget can ignore this trend. With US isolationism and Trump forcing countries to re-arm, drone development (UAV is the correct term) is at an all-time high. And AI that is fast, local, small, reliable, low-power, and cheap is what makes UAVs deadlier.
You don't wait for the conflict to start - you fill warehouses with 'em so you're ready.
That's good news for BRN (and us investors) - but World War III - not so good.
***
I had an AI "Chat" which identified the following edge use cases....
“Autonomous, mobile, always-on” systems share three traits:
- They move through the world
- They must perceive continuously
- They cannot rely on constant connectivity or abundant power
Here are the best, real-world examples, grouped by maturity and importance.
Military & Security (already here, already critical)
This is the archetype.
Drones (air, land, sea)
- ISR drones
- Loitering munitions
- Autonomous naval vessels
- Swarm drones
Why they matter:
- Battery-powered
- Highly mobile
- Must sense continuously
- Must react in milliseconds
- Often disconnected or jammed
Cloud AI is impossible here. Brute-force edge compute is tolerated, but inefficient.
Counter-drone & perimeter defence systems
- Radar + RF + EO/IR sensing
- Always on
- Must distinguish signal from noise continuously
- False positives are costly
Event-driven, low-power intelligence is a natural fit.
Robotics & Industrial Autonomy (rapidly emerging)
Mobile robots (AMRs, humanoids, legged robots)
Deployed in:
- Warehouses
- Hospitals
- Field inspection
- Construction sites
Constraints:
- Battery-limited
- Must operate for hours or days
- Continuous perception (vision, lidar, audio)
- Safety-critical decisions
Most current systems are over-provisioned and power-inefficient, by design.
Autonomous vehicles (non-consumer first)
Not robotaxis; those are cloud-heavy. But:
- Mining vehicles
- Agricultural machinery
- Military ground vehicles
These:
- Operate in remote areas
- Require deterministic response
- Have limited connectivity
Medical & Human-adjacent Systems (under-appreciated)
Implantable and wearable medical devices
Examples:
- Cochlear implants
- Neural stimulators
- Smart insulin pumps
- Continuous monitoring wearables
Why they’re important:
- Always on
- Ultra-low power
- Safety critical
- Privacy sensitive
Cloud AI is a non-starter. “Good enough” compute often isn’t good enough.
Assistive technologies
- Prosthetics
- Exoskeletons
- Mobility aids
These require:
- Real-time feedback loops
- Local learning
- Minimal latency
Consumer & Infrastructure (early but inevitable)
Smart sensors & IoT at scale
Thousands or millions of sensors:
- Environmental monitoring
- Smart cities
- Infrastructure health
- Border security
These are:
- Always on
- Mostly idle
- Occasionally critical
Processing everything centrally is economically insane.
Space systems
- Satellites
- Spacecraft autonomy
- Debris avoidance
- On-orbit servicing
Constraints:
- Power is scarce
- Communication delayed
- Reliability is existential
The unifying insight (this is the key)
In all these systems:
- Power is a hard constraint
- Latency is non-negotiable
- Connectivity is unreliable
- Intelligence must be continuous, not episodic
That combination is exactly where:
- Cloud AI fails
- Brute-force edge AI struggles
- Event-driven approaches become interesting
Why this matters for your broader thesis
These systems are not fringe. They are:
- Growing in number
- Growing in strategic importance
- Becoming more autonomous over time
And importantly:
That’s why this category matters more than benchmarks or demos.
Do your own research.