BRN Discussion Ongoing

7für7

Top 20
Morgans.....?

An Australian company… not JP Morgan.
 

Frangipani

Top 20

19 November 2025 - Session 2:

“Towards an Energy-Efficient and Sustainable IIoT using Embedded Neuromorphic AI
Behrooz Azadi, Bernhard Anzengruber-Tanase, Georgios Sopidis, Michael Haslgrübler and Alois Ferscha”









I recall somebody posting the below Pro²Future poster the other day; several of its co-authors are also co-authors of the above paper that will be presented at the conference in Vienna next week:



[Pro²Future poster image]
Last edited:
  • Like
  • Love
  • Fire
Reactions: 24 users
Yep, but I can't see the world turning its back on neuromorphic health wearables or defense applications.
It is possible that when Sean's contract is up they find someone else to take the business to the next step. But IMO Sean has built a great ecosystem and has started to move the business in the right direction. The BOD will evaluate and make a decision.
Holders do not see what is going on inside.
You can bet even MegaChips and Renesas had no idea neuromorphic would take so long to become mainstream when they took licences all those years ago, so it's no surprise that we are still waiting.
It's also no surprise that we saw modest volumes for the 1500 tape-out. There is still uncertainty about our new product. Volumes will come in due course. No one wants to get it wrong.
Hang on, the CEO is stating "Watch us now".
You can't come out and say these words and deliver next to nothing.
All those AGMs in the past were engaging and full of talk, but "we can't talk about it".
Maybe because there was nothing to talk about.
 
  • Fire
  • Like
Reactions: 4 users

manny100

Top 20
Hang on, the CEO is stating "Watch us now".
You can't come out and say these words and deliver next to nothing.
All those AGMs in the past were engaging and full of talk, but "we can't talk about it".
Maybe because there was nothing to talk about.
They delivered what they said they would deliver. Parsons, Bascom Hunter and Onsor see the 1500 in several end solutions.
Posters just forgot, or were not aware, that it was only ever for a few products, and panicked. It's pretty much a proof of concept that BRN expects will snowball from here, hence "watch us now".
It's a start that will no doubt lead to more.
Getting AKIDA into the mainstream is, and has been, a very long, drawn-out process.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 12 users

perceptron

Regular
I am conducting my own analysis of the BrainChip Capital Raise Presentation, November 2025. As a result, I am seeking clarification from the company on the first two rows of the Key Metrics table on the right-hand side of page 14:

Variable cost per chip: $2.94
Price per chip (volume dependent): $4 – $50

I am unsure whether these metrics refer to manufacturing costs only.
 
Last edited:
  • Like
Reactions: 3 users

Frangipani

Top 20
It was uploaded to GitHub in preparation for the Munich Neuromorphic Hackathon 2025, which will take place on 7–8 and 10–12 November and is being co-organised by NeuroTUM and fortiss:









Priya Kannan, who is named as the contact person on GitHub, will also be one of three fortiss researchers giving a presentation titled “Neuromorphic Computing for Industrial Use Cases” on the Hackathon’s first day:


 
  • Like
  • Fire
Reactions: 12 users

TheDrooben

Pretty Pretty Pretty Pretty Good
I am conducting my own analysis of the BrainChip Capital Raise Presentation, November 2025. As a result, I am seeking clarification from the company on the first two rows of the Key Metrics table on the right-hand side of page 14:

Variable cost per chip: $2.94
Price per chip (volume dependent): $4 – $50

I am unsure whether these metrics refer to manufacturing costs only.
My "perceptron" is that the variable cost of $2.94 is to manufacture the chip and we are charging companies $4-$50 to purchase each chip from us - price depends on the volume ordered (higher volume = lower price)

larry-never.gif


Happy as Larry
 
  • Like
  • Haha
  • Love
Reactions: 12 users

IloveLamp

Top 20
  • Like
  • Love
  • Fire
Reactions: 8 users

NewGee101

Emerged
Nembutal Pentobarbital Sodium
 

Attachments

  • PentoBarbital.jpg

perceptron

Regular
My "perceptron" is that the variable cost of $2.94 is to manufacture the chip and we are charging companies $4-$50 to purchase each chip from us - price depends on the volume ordered (higher volume = lower price)

View attachment 92910

Happy as Larry
Hi,
This highlights why I am seeking clarification from the company.
 

Fiendish

Member
Hi,
This highlights why I am seeking clarification from the company.
*This also highlights the fact that though some people struggle with reading comprehension, they can still operate electronic devices well enough to broadcast it to many who might be blissfully unaware of said fact!
 

perceptron

Regular
*This also highlights the fact that though some people struggle with reading comprehension, they can still operate electronic devices well enough to broadcast it to many who might be blissfully unaware of said fact!
Hi Fiendish,
Maybe you can explain what both the "Variable Cost per chip" and "Price/chip (volume dependent)" metrics are? I am finding it hard to research these metrics in relation to chip manufacturing.
 
Last edited:

Bravo

If ARM was an arm, BRN would be its biceps💪!
Hi All,

Here's a MUST READ report by Woodside Capital Partners, published 3 November 2025. I don't believe it's been posted previously.

The report frames edge AI as the next big battleground for silicon, with billions of sensors generating an explosion of data at the edge that has to be processed locally because of latency, bandwidth and privacy constraints.

It lists BrainChip among the “notable edge AI chip players” with its neuromorphic “Akida” spiking neural network processor and claims it targets “extreme-edge” AI and milliwatt-scale power budgets. I've highlighted the relevant parts in orange.

It emphasises that the winners will be those who deliver the most AI per joule, especially on devices with tight power/thermal budgets and “always-on” demand, which is good for us because that's exactly the domain BrainChip targets.

While the report notes that "spiking neural networks still lag in programming ease and general performance, and the software ecosystem is nascent", it also says that "momentum is building".

It also says that "while sales are still modest", the potential payoff is enormous.

AI at the Edge: Low Power, High Stakes

November 3, 2025 – Industry Reports, WCP News
Palo Alto – November 3, 2025 – Woodside Capital Partners (WCP) is pleased to release our Digital Advertising Quarterly Sector Update for Q3 2025, authored by senior bankers Alain Bismuth and George Jones.

Introduction: Intelligence on the Edge


Edge AI isn’t just a buzzword – it’s fast becoming the next battleground in silicon. As billions of sensors and devices come online, they are generating an avalanche of data that can’t always wait for the cloud. IDC projects 41.6 billion connected IoT devices by 2025, producing nearly 79 zettabytes of data per year. Transmitting all that information to distant data centers is impractical due to latency, bandwidth, and privacy constraints. The solution? Push the computing – and the intelligence – out to the network’s edge.

As EE Times Editor-in-Chief Nitin Dahad noted, “While industry commentators have been talking about edge AI for a while, the challenge to date, as with IoT, was in the market fragmentation… But that is changing.” He highlighted how the ecosystem is consolidating through moves like Qualcomm’s acquisitions of Edge Impulse and Arduino, or Google’s collaboration with Synaptics on open-source RISC-V NPUs – signs that “edge AI really pushes connected IoT devices into a new realm – one of ‘ambient intelligence,’ where AI chips put intelligence into things without having to connect to the cloud, consume massive power, or compromise security.”

This new paradigm is ushering in ultra-efficient chips purpose-built for on-device learning and inference. These Edge AI processors are no longer niche curiosities; they are becoming “the heartbeat of a new digital age,” with the market expected to reach $13.5 billion in 2025. The strategic takeaway is clear: the future of AI will be decentralized, and the real innovation—and value creation—is happening at the edge.
Drivers of the Edge AI Chip Boom
Several converging forces are propelling the rapid rise of edge AI chips. For the investment community, understanding these drivers is key to seeing where value will accrue in the coming years:
  • Data Deluge & Latency Sensitivity: The sheer volume of sensor data (from cameras, microphones, wearables, etc.) is overwhelming. Sending it all to the cloud is often infeasible. Edge chips enable real-time processing at the source, avoiding network latency. This is mission-critical for applications like autonomous drones or surgical robots that can’t afford the milliseconds of round-trip cloud delay. For example, IDC estimates that 41.6 billion IoT devices will produce unimaginable data streams by 2025 – processing must be pushed outward to handle this in time.
  • Bandwidth & Connectivity Limits: Not every environment has fat internet pipes or reliable 5G. From rural farms to factory floors, edge AI hardware ensures analytics continue even with spotty connectivity. It’s often more cost-effective to process data locally than to continuously offload gigabytes to the cloud.
  • Privacy and Security: In an era of stricter data regulations and heightened user sensitivity, keeping data on-device is a significant advantage. Edge AI chips let a smartphone analyze your biometrics or photos privately, without ever uploading to a server. This reduces exposure to breaches and eases compliance with laws like GDPR.
  • Power Efficiency: Perhaps counterintuitively, doing AI on the edge can save energy overall. Rather than firing up a distant server (and all the network infrastructure in between) for a small inference task, a low-power chip in a device can do it with less total energy. Moreover, many edge use-cases involve battery-powered gear – think wearables or remote sensors – where ultra-efficient silicon is the only option. A cloud model simply can’t run on a coin cell battery.
  • Cost & Scalability: Lastly, cloud computing at scale is expensive. As AI becomes ubiquitous, offloading every task to centralized GPUs or TPUs racks up cloud bills. Pushing intelligence to millions of cheap, dedicated edge chips distributes the load and can be more cost-efficient at scale. It also enables new experiences and products that wouldn’t be feasible if every interaction required a cloud call (for instance, AI features in areas with no connectivity, or devices that need to respond in under 10 ms).
In short, the edge is where the digital world meets the real world, and it demands silicon that can handle messy, real-time data within tight power and latency budgets. This demand is fueling an “arms race” among chip makers – both established giants and ambitious startups – to build the brains for the edge. And the money is following: by 2025, custom ASICs for edge inference are projected to generate nearly $7.8 billion in revenue, and AI chip startups have already raised over $5.1 billion in VC funding in the first half of 2025 alone. The race is on to create chips that deliver data-center-caliber smarts without the luxury of a data center’s power or cooling.
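To put the power-efficiency argument above in rough numbers, here's an illustrative comparison. Every figure in it (device power, radio power, server share, timings) is an assumption made for the sake of the sketch, not data from the report.

```python
# Rough, illustrative energy comparison for one inference done locally on a
# milliwatt-class edge chip versus offloaded to the cloud. Every number here
# is an assumption chosen for illustration; none come from the report.
EDGE_POWER_W = 0.005     # assumed 5 mW edge accelerator
EDGE_LATENCY_S = 0.010   # assumed 10 ms per on-device inference

RADIO_POWER_W = 0.8      # assumed radio power while transmitting the data
UPLOAD_TIME_S = 0.05     # assumed time to ship the sensor frame upstream
SERVER_POWER_W = 50.0    # assumed share of a GPU server handling the request
SERVER_TIME_S = 0.005    # assumed 5 ms of server compute

edge_energy = EDGE_POWER_W * EDGE_LATENCY_S                              # joules
cloud_energy = RADIO_POWER_W * UPLOAD_TIME_S + SERVER_POWER_W * SERVER_TIME_S

print(f"edge inference : {edge_energy * 1e3:.3f} mJ")
print(f"cloud inference: {cloud_energy * 1e3:.1f} mJ "
      f"(~{cloud_energy / edge_energy:.0f}x more, ignoring network infrastructure)")
```

Even with generous assumptions for the cloud path, the on-device path wins by orders of magnitude once the radio and the server's share are counted, which is the report's core point.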


The New Silicon Landscape: From Cloud Titans to Edge Niche


Not long ago, NVIDIA GPUs dominated the AI hardware narrative, thanks to the deep learning boom. But those power-hungry processors live in server racks, gulping kilowatts – not exactly ideal for a smart camera or a drone. The shift to edge AI has cracked the door open for a new wave of silicon solutions optimized for efficiency, specialization, and integration into smaller devices. This has made the competitive landscape far more diverse and exciting than the old CPU/GPU duopoly.

Tech giants have recognized the trend and are embedding AI acceleration in their edge offerings. Apple’s latest iPhone chip, the A19 Bionic, packs a 35 TOPS neural engine – effectively a dedicated AI brain inside your pocket. Qualcomm now ships NPUs (neural processing units) in hundreds of millions of Snapdragon chips each year, ensuring that nearly every new smartphone has on-device AI capability. Even Google, synonymous with cloud AI, has its Edge TPU chips for on-premise and IoT inference. These incumbents leverage enormous R&D and software ecosystems, but they also have broad mandates (serving many applications), which leaves plenty of room for specialists to outperform in niche areas.

This is where startups and smaller players shine, by laser-focusing on edge use cases and squeezing out efficiencies that general-purpose silicon can’t match. A slew of innovators worldwide are delivering novel architectures for edge AI – many of them fundamentally rethinking how computations are done under the hood. The approaches vary (digital ASICs, analog in-memory computing, neuromorphic designs, and more), but the goal is the same: maximum AI performance per watt on tiny footprints. Below are a few notable edge AI chip players and their strategies:

  • Ambient Scientific (USA): Silicon Valley–based company developing ultra-low power, analog in-memory AI processors that enable always-on, on-device AI for battery-powered edge applications.
  • Axelera AI (Netherlands): Developed the Metis AI processing unit, a high-performance vision accelerator purpose-built for the network edge.
  • Blumind (Canada): Pioneers all-analog AI chips for ultra-low-power, always-on edge tasks, delivering standard neural network performance at up to 1000x less power than traditional digital designs.
  • BrainChip (Australia): A neuromorphic-chip trailblazer that has commercialized the Akida spiking neural network processor for extreme-edge AI. BrainChip’s architecture performs brain-inspired event-based learning on-chip, targeting milliwatt-scale power budgets.
  • GreenWaves Technologies (France): A pioneer in RISC-V-based edge processors for battery-powered devices. Its GAP9 processor combines a multi-core MCU, a DSP, and a neural accelerator, enabling advanced AI features like neural noise cancellation in hearables at exceptionally low power.
  • Hailo (Israel): A leading-edge AI accelerator vendor whose specialized processors combine high throughput with low energy use for deep learning at the edge.
  • Klepsydra (Switzerland): Takes a software-centric approach to edge AI optimization, with a lightweight framework that boosts inference efficiency across existing hardware, achieving up to 4x lower latency and 50% less power consumption on standard processors.
  • Kneron (USA): Provides low-power AI inference chips for smart devices at the edge. Kneron’s “full-stack” edge AI SoCs deliver efficient on-device vision processing and face/pattern recognition, powering everything from smart home cameras to driver-assistance systems.
  • Neuronova (Italy): Builds neuromorphic processors that emulate brain neurons and synapses in silicon, enabling sensor AI tasks with up to 1000x lower energy consumption than conventional chips.
  • Mentium (USA): Using a hybrid in-memory and digital-computation approach, Mentium delivers dependable AI at the Edge with co-processors capable of Cloud-quality inference at ultra-low power. The company has enjoyed success in space-based applications.
  • SiMa.ai (USA): Supplies low-power, high-performance system-on-chip (SoC) solutions for edge machine learning, branded as an MLSoC. SiMa.ai’s platform emphasizes ease of deployment for computer vision and autonomous systems.
  • SynSense (China): Offers event-driven neuromorphic chips that tightly integrate sensing and processing to achieve ultra-low-latency, low-power AI on the edge.
  • Syntiant (USA): Designs ultra-low-power Neural Decision Processors that enable always-on voice and sensor analytics in battery-operated gadgets. Syntiant’s tiny chips have already shipped in over 10 million devices.

Neuromorphic Computing: The Brain as Blueprint



Among all edge AI innovations, neuromorphic computing stands out as the most radical— and arguably the most visionary—approach. Rather than brute-force number crunching, these chips mimic biological brains, using networks of artificial “neurons” and “synapses” that communicate via spiking signals. The appeal is clear: the human brain is a 20-watt wonder that can outperform megawatt supercomputers on specific tasks. After millions of years of evolution, it remains the ultimate proof of concept for efficient intelligence. Why not try to capture some of that magic in silicon?

“The reason for that is evolution,” says Steven Brightfield, CMO of BrainChip. “Our brain had a power budget.” Nature had to optimize for energy efficiency, and neuromorphic chips follow that same rule, making them ideal for battery-powered AI. As Brightfield puts it,
“If you only have a coin-cell battery to run AI, you want a chip that works like the brain; sipping energy only when there’s something worth processing.”

This event-driven paradigm is neuromorphic computing’s secret sauce: neurons fire only when input changes, consuming power only when needed. Intel’s Mike Davies, who leads the company’s neuromorphic lab, explains that such architectures excel at “processing signal streams when you can’t afford to wait to collect the whole stream… suited for a streaming, real-time mode of operation.” Intel’s Loihi chip, for example, matched GPU accuracy on a sensing task while using just one-thousandth the energy.
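A toy sketch of that event-driven idea follows; the frame size and the 2% activity level are made-up assumptions, chosen only to show how an event-driven pipeline's work scales with activity rather than with input size.

```python
import numpy as np

# Toy illustration of the event-driven idea: a dense pipeline touches every
# pixel of every frame, while an event-driven pipeline only does work where
# the input actually changed. Frame size and sparsity are assumptions.
rng = np.random.default_rng(0)
H, W, FRAMES = 64, 64, 100
ACTIVITY = 0.02  # assume ~2% of pixels change between frames (a quiet scene)

dense_ops = 0
event_ops = 0
prev = rng.random((H, W))
for _ in range(FRAMES):
    # a mostly static scene: only a small random subset of pixels changes
    frame = prev.copy()
    changed = rng.random((H, W)) < ACTIVITY
    frame[changed] = rng.random(changed.sum())

    dense_ops += frame.size            # dense: process every pixel regardless
    event_ops += int(changed.sum())    # event-driven: process only the "events"
    prev = frame

print(f"dense ops : {dense_ops:,}")
print(f"event ops : {event_ops:,} (~{dense_ops / event_ops:.0f}x fewer touches)")
```

With a mostly static scene the event-driven path touches roughly fifty times fewer values; as activity approaches 100% the advantage disappears, which is why this style of processing suits sparse, always-on sensing.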

Though still early, the field is advancing fast. Major players like Intel and IBM, along with startups such as BrainChip, SynSense, and Innatera, are proving that brain-inspired computing is more than academic curiosity. Neuromorphic processors now handle keyword spotting, gesture recognition, and anomaly detection at microwatt power levels—a breakthrough for wearables, drones, and IoT devices.

Challenges remain: spiking neural networks still lag in programming ease and general performance, and the software ecosystem is nascent. As Davies cautions, with tiny neural networks there’s a “limited amount of magic you can bring to a problem.” Yet momentum is building. The efficiency gains are too compelling to ignore. Neuromorphic chips mirror the brain’s architecture, offering real-time intelligence at minimal energy – precisely what the edge demands.

While sales are still modest – projected to reach $0.5 billion by 2025 – the potential payoff is enormous. In a world where AI’s power appetite collides with energy constraints, brain-like chips may become essential infrastructure for the next generation of intelligent, efficient devices.



The Edge Is Also About Security



As Thomas Rosteck, President of Infineon’s Connected Secure Systems division, told EE Times in a recent interview, “The future of AI is about intelligence and security moving together to the edge. We can’t separate compute performance from trust – both have to be built into the silicon.”

Rosteck emphasized that this transformation isn’t simply about smaller chips or lower power; it’s about secure intelligence at the system level—combining sensors, connectivity, compute, and protection in a single integrated architecture. In his words, “The edge has to be smart, but also safe. You need to trust the data before you can use it for AI.”

That trust layer is rapidly becoming a differentiator in edge AI. Devices operating outside controlled environments – from industrial sensors to connected vehicles and medical wearables – are continuously exposed to tampering and data interception. Embedding hardware-based security (secure boot, encrypted memory, and trusted execution environments) ensures that models, data, and inferences cannot be altered or spoofed.

Infineon and peers are leading a broader industry shift: security as an enabler, not an afterthought. As energy efficiency defines the viability of edge AI, trust defines its scalability. For AI to permeate the physical world safely, intelligence must be both local and verifiable—a dual mandate that will shape the next generation of edge architectures.


Case Study: Large Player Pivots



Qualcomm is executing a fundamental strategic shift, moving beyond its traditional cellular markets to focus heavily on the high-growth Intelligent Edge and IoT. Suddenly, Qualcomm has a comprehensive, full-stack edge AI platform accelerated by the acquisitions of Edge Impulse (AI/ML tooling) and Arduino (prototyping ecosystem). A massive, diverse, and bottom-up customer base leads to a crucial shift away from serving a small number of large cellular customers (OEMs and carriers). This diversification mitigates risk and establishes a global innovation pipeline.

This strategy establishes a deep competitive moat by securing software mindshare and platform control. Edge Impulse provides the critical AI/ML framework, ensuring models are built and optimized specifically for Qualcomm’s specialized hardware, such as the AI accelerators (NPUs) in its Dragonwing™ platforms. Qualcomm has created a technical lock-in: developers face significant switching costs if they attempt to migrate optimized models to competing, generic hardware platforms. Qualcomm receives real-time market intelligence on successful developer models, enabling it to tailor its silicon roadmap.

For the millions of developers already engaged, the primary outcomes are accessibility, reduced development friction, and guaranteed scalability. The integrated ecosystem effectively democratizes access to robust, complex chip architectures. Arduino offers a universally trusted, user-friendly Integrated Development Environment (IDE) and open-source libraries. Developers can minimize the need for high-cost, specialized engineering talent. Critically, the workflow bridges the gap between prototyping and mass production, enabling a significantly faster time-to-market.

The acquisitions signify a radical departure from Qualcomm’s historical operating model, shifting from concentrated engagements to high-velocity community adoption.




Qualcomm is transitioning from a premium cellular hardware provider to a full-stack platform provider. This strategy ensures revenue diversification and establishes powerful software-based competitive lock-in for the company. For its customers, the result is the democratization of advanced AI hardware and a clear, supported path from concept to global mass production, positioning Qualcomm as the crucial infrastructure partner for Edge AI innovation.

Outlook: Toward an Intelligent, Efficient, and Secure Edge


The edge AI chip arena in 2025 is nothing short of a renaissance in computer architecture. Startups are racing, incumbents are pivoting, and the shakeout is coming. Apple’s grab of Xnor.ai showed the playbook: big semis will buy edge innovation or build it in-house. Meanwhile, NVIDIA’s Jetson, AMD/Xilinx FPGAs, and Qualcomm NPUs are already redefining the edge. Lines between categories are blurring, but the rule is simple: efficiency is king. The winners will deliver the most AI per joule, whether through digital accelerators, analog tricks, or brain-inspired architectures.

For investors, the strategic importance is massive. Edge chips sit at the crossroads of AI, IoT, 5G, and smart everything. The market spans $1 sensors to $1,000+ auto processors, a fragmentation that allows nimble players to dominate niches like hearables, robotics, or satellite imaging. But fragmentation also raises the stakes: chips alone aren’t enough; software ecosystems, partnerships, and timing decide who wins design slots.

Near term, digital ASICs from Hailo, Qualcomm, and Google will capture the lion’s share—aligned with today’s deep learning. Analog and in-memory approaches are next, delivering leaps in efficiency for power-starved devices. And on the horizon, neuromorphic computing looms: if spiking neural nets scale, brain-like chips could rewrite the rules entirely. Giants like Intel and IBM are betting the upside is worth it.

The sustainability angle only adds fuel. Cloud AI guzzles megawatts; edge AI can slash energy costs by orders of magnitude. A 1 W camera chip doing local inference beats streaming to a 100 W GPU farm. In sectors like agriculture and healthcare, efficiency isn’t just about battery; it’s about global carbon footprint.

Bottom line: edge AI chips are evolving faster than the incumbents can dictate. What seemed like sci-fi—analog brains, self-learning silicon—is moving into commercial reality. The smart money is shifting to the edge, where the next generation of AI will be defined not by brute force, but by clever, energy-frugal design. The brain took eons to optimize; edge AI chips are doing it in years, and the race is on.

 
  • Like
  • Fire
  • Love
Reactions: 33 users

Guzzi62

Regular
Hi Fiendish,
Maybe you can explain what both the "Variable Cost per chip" and "Price/chip (volume dependent)" metrics are?
It's quite simple:

The more chips BRN orders, the less they pay per unit.

The more chips a customer buys from BRN, the less they pay per unit.

It's much the same as going to the grocery store to buy some beer: a case is cheaper per unit than buying just a few.

I hope you get it now. If not, maybe better not to invest in BRN but go and buy some beers instead – a case for starters, maybe?
 
  • Haha
  • Like
  • Fire
Reactions: 9 users

White Horse

Regular
  • Like
  • Love
  • Fire
Reactions: 10 users

7für7

Top 20
Hi All,

Here's a MUST READ report by Woodside Capital Partners, published 3 November 2025. I don't believe it's been posted previously. […]

We see articles like this every week positioning BrainChip as a key player alongside the big fish. It'd be nice if the share price actually reflected that for once. Unfortunately, the good-for-nothing short sellers are more active. I hope their schadenfreude over investors losing money ends up burning them.
 
  • Fire
  • Like
Reactions: 5 users

TheDrooben

Pretty Pretty Pretty Pretty Good
[Image: Lest we forget]
 
  • Love
  • Like
  • Wow
Reactions: 30 users

perceptron

Regular
It's quite simple:

The more chips BRN orders, the less they pay per unit.

The more chips a customer buys from BRN, the less they pay per unit.

It's much the same as going to the grocery store to buy some beer: a case is cheaper per unit than buying just a few.

I hope you get it now. If not, maybe better not to invest in BRN but go and buy some beers instead – a case for starters, maybe?
Hi Guzzi62,
Thanks for your example. I will continue my research into chip manufacturing. Beer and chips : )
 
  • Haha
Reactions: 4 users

Diogenese

Top 20
It's quite simple:

The more chips BRN orders, the less they pay per unit.

The more chips a customer buys from BRN, the less they pay per unit.

It's much the same as going to the grocery store to buy some beer: a case is cheaper per unit than buying just a few.

I hope you get it now. If not, maybe better not to invest in BRN but go and buy some beers instead – a case for starters, maybe?
That's unkind, Guzzi. If perceptron invests in BRN, they will be able to buy the brewery. (Not financial advice.)
 
  • Fire
  • Haha
  • Like
Reactions: 5 users

Diogenese

Top 20
Hi All,

Here's a MUST READ report by Woodside Capital Partners, published 3 November 2025. I don't believe it's been posted previously.

The report frames edge AI as the next big battleground for silicon, with billions of sensors, to handle latency/bandwidth/privacy constraints, and an explosion of data at the edge.

It lists BrainChip among the “notable edge AI chip players” with its neuromorphic “Akida” spiking neural network processor and claims it targets “extreme-edge” AI and milliwatt-scale power budgets. I've highlighted the relevant parts in orange.

It emphasises that the winners will be those who deliver the most AI per joule, especially on devices with tight power/thermal budgets and “always-on” demand, which is good for us because that's exactly the domain BrainChip targets.

While the report notes that "spiking neural networks still lag in programming ease and general performance, and the software ecosystem is nascent', it also says that "momentum is building".

It also says "while sales are still modest" the potential payoff is enormous.

AI at the Edge: Low Power, High Stakes​

November 3, 2025Industry Reports, WCP News
Palo Alto – November 3, 2025 – Woodside Capital Partners (WCP) is pleased to release our Digital Advertising Quarterly Sector Update for Q3 2025, authored by senior bankers Alain Bismuth and George Jones.

Introduction: Intelligence on the Edge


Edge AI isn’t just a buzzword – it’s fast becoming the next battleground in silicon. As billions of sensors and devices come online, they are generating an avalanche of data that can’t always wait for the cloud. IDC projects 41.6 billion connected IoT devices by 2025, producing nearly 79 zettabytes of data per year. Transmitting all that information to distant data centers is impractical due to latency, bandwidth, and privacy constraints. The solution? Push the computing – and the intelligence – out to the network’s edge.

As EE Times Editor-in-Chief Nitin Dahad noted, “While industry commentators have been talking about edge AI for a while, the challenge to date, as with IoT, was in the market fragmentation… But that is changing.” He highlighted how the ecosystem is consolidating through moves like Qualcomm’s acquisitions of Edge Impulse and Arduino, or Google’s collaboration with Synaptics on open-source RISC-V NPUs – signs that “edge AI really pushes connected IoT devices into a new realm – one of “ambient intelligence,” where AI chips put intelligence into things without having to connect to the cloud, consume massive power, or compromise security.”

This new paradigm is ushering in ultra-efficient chips purpose-built for on-device learning and inference. These Edge AI processors are no longer niche curiosities; they are becoming “the heartbeat of a new digital age,” with the market expected to reach $13.5 billion in 2025. The strategic takeaway is clear: the future of AI will be decentralized, and the real innovation—and value creation—is happening at the edge.
Drivers of the Edge AI Chip Boom
Several converging forces are propelling the rapid rise of edge AI chips. For the investment community, understanding these drivers is key to seeing where value will accrue in the coming years:
  • Data Deluge & Latency Sensitivity: The sheer volume of sensor data (from cameras, microphones, wearables, etc.) is overwhelming. Sending it all to the cloud is often infeasible. Edge chips enable real-time processing at the source, avoiding network latency. This is mission-critical for applications like autonomous drones or surgical robots that can’t afford the milliseconds of round-trip cloud delay. For example, IDC estimates that 41.6 billion IoT devices will produce unimaginable data streams by 2025 – processing must be pushed outward to handle this in time.
  • Bandwidth & Connectivity Limits: Not every environment has fat internet pipes or reliable 5G. From rural farms to factory floors, edge AI hardware ensures analytics continue even with spotty connectivity. It’s often more cost-effective to process data locally than to continuously offload gigabytes to the cloud.
  • Privacy and Security: In an era of stricter data regulations and heightened user sensitivity, keeping data on-device is a significant advantage. Edge AI chips let a smartphone analyze your biometrics or photos privately, without ever uploading to a server. This reduces exposure to breaches and eases compliance with laws like GDPR.
  • Power Efficiency: Perhaps counterintuitively, doing AI on the edge can save energy overall. Rather than firing up a distant server (and all the network infrastructure in between) for a small inference task, a low-power chip in a device can do it with less total energy. Moreover, many edge use-cases involve battery-powered gear – think wearables or remote sensors – where ultra-efficient silicon is the only option. A cloud model simply can’t run on a coin cell battery.
  • Cost & Scalability: Lastly, cloud computing at scale is expensive. As AI becomes ubiquitous, offloading every task to centralized GPUs or TPUs racks up cloud bills. Pushing intelligence to millions of cheap, dedicated edge chips distributes the load and can be more cost-efficient at scale. It also enables new experiences and products that wouldn’t be feasible if every interaction required a cloud call (for instance, AI features in areas with no connectivity, or devices that need to respond in under 10 ms).
In short, the edge is where the digital world meets the real world, and it demands silicon that can handle messy, real-time data within tight power and latency budgets. This demand is fueling an “arms race” among chip makers – both established giants and ambitious startups – to build the brains for the edge. And the money is following: by 2025, custom ASICs for edge inference are projected to generate nearly $7.8 billion in revenue, and AI chip startups have already raised over $5.1 billion in VC funding in the first half of 2025 alone. The race is on to create chips that deliver data-center-caliber smarts without the luxury of a data center’s power or cooling.


The New Silicon Landscape: From Cloud Titans to Edge Niche


Not long ago, NVIDIA GPUs dominated the AI hardware narrative, thanks to the deep learning boom. But those power-hungry processors live in server racks, gulping kilowatts – not exactly ideal for a smart camera or a drone. The shift to edge AI has cracked the door open for a new wave of silicon solutions optimized for efficiency, specialization, and integration into smaller devices. This has made the competitive landscape far more diverse and exciting than the old CPU/GPU duopoly.

Tech giants have recognized the trend and are embedding AI acceleration in their edge offerings. Apple’s latest iPhone chip, the A19 Bionic, packs a 35 TOPS neural engine – effectively a dedicated AI brain inside your pocket. Qualcomm now ships NPUs (neural processing units) in hundreds of millions of Snapdragon chips each year, ensuring that nearly every new smartphone has on-device AI capability. Even Google, synonymous with cloud AI, has its Edge TPU chips for on-premise and IoT inference. These incumbents leverage enormous R&D and software ecosystems, but they also have broad mandates (serving many applications), which leaves plenty of room for specialists to outperform in niche areas.

This is where startups and smaller players shine, by laser-focusing on edge use cases and squeezing out efficiencies that general-purpose silicon can’t match. A slew of innovators worldwide are delivering novel architectures for edge AI – many of them fundamentally rethinking how computations are done under the hood. The approaches vary (digital ASICs, analog in-memory computing, neuromorphic designs, and more), but the goal is the same: maximum AI performance per watt on tiny footprints. Below are a few notable edge AI chip players and their strategies:

  • Ambient Scientific (USA): Silicon Valley–based company developing ultra-low power, analog in-memory AI processors that enable always-on, on-device AI for battery-powered edge applications.
  • Axelera AI (Netherlands): Developed the Metis AI processing unit, a high-performance vision accelerator purpose-built for the network edge.
  • Blumind (Canada): Pioneers all-analog AI chips for ultra-low-power, always-on edge tasks, delivering standard neural network performance at up to 1000x less power than traditional digital designs.
  • BrainChip (Australia): A neuromorphic-chip trailblazer that has commercialized the Akida spiking neural network processor for extreme-edge AI. BrainChip’s architecture performs brain-inspired event-based learning on-chip, targeting milliwatt-scale power budgets.
  • GreenWaves Technologies (France): A pioneer in RISC-V-based edge processors for battery-powered devices. Its GAP9 processor combines a multi-core MCU, a DSP, and a neural accelerator, enabling advanced AI features like neural noise cancellation in hearables at exceptionally low power.
  • Hailo (Israel): A leading-edge AI accelerator vendor whose specialized processors combine high throughput with low energy use for deep learning at the edge.
  • Klepsydra (Switzerland): Takes a software-centric approach to edge AI optimization, with a lightweight framework that boosts inference efficiency across existing hardware, achieving up to 4x lower latency and 50% less power consumption on standard processors.
  • Kneron (USA): Provides low-power AI inference chips for smart devices at the edge. Kneron’s “full-stack” edge AI SoCs deliver efficient on-device vision processing and face/pattern recognition, powering everything from smart home cameras to driver-assistance systems.
  • Neuronova (Italy): Builds neuromorphic processors that emulate brain neurons and synapses in silicon, enabling sensor AI tasks with up to 1000x lower energy consumption than conventional chips.
  • Mentium (USA): Using a hybrid in-memory and digital-computation approach, Mentium delivers dependable AI at the Edge with co-processors capable of Cloud-quality inference at ultra-low power. The company has enjoyed success in space-based applications.
  • SiMa.ai (USA): Supplies low-power, high-performance system-on-chip (SoC) solutions for edge machine learning, branded as an MLSoC. SiMa.ai’s platform emphasizes ease of deployment for computer vision and autonomous systems.
  • SynSense (China): Offers event-driven neuromorphic chips that tightly integrate sensing and processing to achieve ultra-low-latency, low-power AI on the edge.
  • Syntiant (USA): Designs ultra-low-power Neural Decision Processors that enable always-on voice and sensor analytics in battery-operated gadgets. Syntiant’s tiny chips have already shipped in over 10 million devices.

Neuromorphic Computing: The Brain as Blueprint



Among all edge AI innovations, neuromorphic computing stands out as the most radical – and arguably the most visionary – approach. Rather than brute-force number crunching, these chips mimic biological brains, using networks of artificial “neurons” and “synapses” that communicate via spiking signals. The appeal is clear: the human brain is a 20-watt wonder that can outperform megawatt supercomputers on specific tasks. After millions of years of evolution, it remains the ultimate proof of concept for efficient intelligence. Why not try to capture some of that magic in silicon?

“The reason for that is evolution,” says Steven Brightfield, CMO of BrainChip. “Our brain had a power budget.” Nature had to optimize for energy efficiency, and neuromorphic chips follow that same rule, making them ideal for battery-powered AI. As Brightfield puts it,
“If you only have a coin-cell battery to run AI, you want a chip that works like the brain, sipping energy only when there’s something worth processing.”

This event-driven paradigm is neuromorphic computing’s secret sauce: neurons fire only when input changes, consuming power only when needed. Intel’s Mike Davies, who leads the company’s neuromorphic lab, explains that such architectures excel at “processing signal streams when you can’t afford to wait to collect the whole stream… suited for a streaming, real-time mode of operation.” Intel’s Loihi chip, for example, matched GPU accuracy on a sensing task while using just one-thousandth the energy.
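To make the event-driven idea concrete, here is a minimal sketch (in Python, purely illustrative and not any vendor’s actual architecture) of a leaky integrate-and-fire neuron: it only performs work when an input event arrives, so silent periods cost essentially nothing.

```python
# Toy illustration of event-driven (spiking) processing.
# Conceptual sketch only - not BrainChip's or Intel's actual design.

def run_event_driven(events, threshold=1.0, leak=0.95):
    """Process a sparse stream of (timestep, value) input events.

    Work (a crude proxy for energy) is only spent when an event arrives;
    silent timesteps cost nothing.
    """
    potential = 0.0
    last_t = 0
    spikes = []
    ops = 0
    for t, value in events:
        # Apply the leak for the whole silent interval in one step.
        potential *= leak ** (t - last_t)
        potential += value
        ops += 1
        if potential >= threshold:
            spikes.append(t)   # neuron fires
            potential = 0.0    # reset after the spike
        last_t = t
    return spikes, ops

# Ten thousand timesteps of "video", but only three input events -> only three ops.
events = [(120, 0.6), (130, 0.7), (9000, 0.4)]
print(run_event_driven(events))  # -> ([130], 3)
```

In a dense, frame-based pipeline every one of those 10,000 timesteps would trigger a full computation; here only the three events do, which is the whole point of the paradigm.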

Though still early, the field is advancing fast. Major players like Intel and IBM, along with startups such as BrainChip, SynSense, and Innatera, are proving that brain-inspired computing is more than academic curiosity. Neuromorphic processors now handle keyword spotting, gesture recognition, and anomaly detection at microwatt power levels—a breakthrough for wearables, drones, and IoT devices.

Challenges remain: spiking neural networks still lag in programming ease and general performance, and the software ecosystem is nascent. As Davies cautions, with tiny neural networks there’s a “limited amount of magic you can bring to a problem.” Yet momentum is building. The efficiency gains are too compelling to ignore. Neuromorphic chips mirror the brain’s architecture, offering real-time intelligence at minimal energy – precisely what the edge demands.

While sales are still modest – projected to reach $0.5 billion by 2025 – the potential payoff is enormous. In a world where AI’s power appetite collides with energy constraints, brain-like chips may become essential infrastructure for the next generation of intelligent, efficient devices.



The Edge Is Also About Security



As Thomas Rosteck, President of Infineon’s Connected Secure Systems division, told EE Times in a recent interview, “The future of AI is about intelligence and security moving together to the edge. We can’t separate compute performance from trust – both have to be built into the silicon.”

Rosteck emphasized that this transformation isn’t simply about smaller chips or lower power; it’s about secure intelligence at the system level—combining sensors, connectivity, compute, and protection in a single integrated architecture. In his words, “The edge has to be smart, but also safe. You need to trust the data before you can use it for AI.”

That trust layer is rapidly becoming a differentiator in edge AI. Devices operating outside controlled environments – from industrial sensors to connected vehicles and medical wearables – are continuously exposed to tampering and data interception. Embedding hardware-based security (secure boot, encrypted memory, and trusted execution environments) ensures that models, data, and inferences cannot be altered or spoofed.
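As a minimal sketch of what that hardware-anchored trust looks like in practice, the snippet below checks a model blob against a device-provisioned key before it is allowed to load. The key, the data, and the use of an HMAC are assumptions for illustration; real secure-boot chains typically verify asymmetric signatures from a hardware root of trust.

```python
# Minimal sketch of integrity checking before loading an edge AI model.
# Real secure boot uses hardware-rooted keys and signature verification;
# this HMAC example only illustrates the principle. Key and data are made up.
import hashlib
import hmac

DEVICE_KEY = b"hypothetical-key-provisioned-at-manufacture"

def verify_model(model_bytes: bytes, expected_tag: bytes) -> bool:
    """Return True only if the model blob matches the provisioned tag."""
    tag = hmac.new(DEVICE_KEY, model_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected_tag)

model = b"...compiled network weights..."
good_tag = hmac.new(DEVICE_KEY, model, hashlib.sha256).digest()

assert verify_model(model, good_tag)             # untampered model loads
assert not verify_model(model + b"x", good_tag)  # tampered model is rejected
```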

Infineon and peers are leading a broader industry shift: security as an enabler, not an afterthought. As energy efficiency defines the viability of edge AI, trust defines its scalability. For AI to permeate the physical world safely, intelligence must be both local and verifiable—a dual mandate that will shape the next generation of edge architectures.


Case Study: Large Player Pivots



Qualcomm is executing a fundamental strategic shift, moving beyond its traditional cellular markets to focus on the high-growth Intelligent Edge and IoT. Through the acquisitions of Edge Impulse (AI/ML tooling) and Arduino (prototyping ecosystem), Qualcomm suddenly has a comprehensive, full-stack edge AI platform. Those deals also bring a massive, diverse, bottom-up customer base, shifting the company away from its dependence on a small number of large cellular customers (OEMs and carriers). This diversification mitigates risk and establishes a global innovation pipeline.

This strategy establishes a deep competitive moat by securing software mindshare and platform control. Edge Impulse provides the critical AI/ML framework, ensuring models are built and optimized specifically for Qualcomm’s specialized hardware, such as the AI accelerators (NPUs) in its Dragonwing™ platforms. Qualcomm has created a technical lock-in: developers face significant switching costs if they attempt to migrate optimized models to competing, generic hardware platforms. Qualcomm receives real-time market intelligence on successful developer models, enabling it to tailor its silicon roadmap.

For the millions of developers already engaged, the primary outcomes are accessibility, reduced development friction, and guaranteed scalability. The integrated ecosystem effectively democratizes access to robust, complex chip architectures. Arduino offers a universally trusted, user-friendly Integrated Development Environment (IDE) and open-source libraries. Developers can minimize the need for high-cost, specialized engineering talent. Critically, the workflow bridges the gap between prototyping and mass production, enabling a significantly faster time-to-market.

The acquisitions signify a radical departure from Qualcomm’s historical operating model, shifting from concentrated engagements to high-velocity community adoption.


Edge-AI-Chips-article-1024x382.png


Qualcomm is transitioning from a premium cellular hardware provider to a full-stack platform provider. This strategy ensures revenue diversification and establishes powerful software-based competitive lock-in for the company. For its customers, the result is the democratization of advanced AI hardware and a clear, supported path from concept to global mass production, positioning Qualcomm as the crucial infrastructure partner for Edge AI innovation.

Outlook: Toward an Intelligent, Efficient, and Secure Edge


The edge AI chip arena in 2025 is nothing short of a renaissance in computer architecture. Startups are racing, incumbents are pivoting, and the shakeout is coming. Apple’s grab of Xnor.ai showed the playbook: big semis will buy edge innovation or build it in-house. Meanwhile, NVIDIA’s Jetson, AMD/Xilinx FPGAs, and Qualcomm NPUs are already redefining the edge. Lines between categories are blurring, but the rule is simple: efficiency is king. The winners will deliver the most AI per joule, whether through digital accelerators, analog tricks, or brain-inspired architectures.

For investors, the strategic importance is massive. Edge chips sit at the crossroads of AI, IoT, 5G, and smart everything. The market spans $1 sensors to $1,000+ auto processors, a fragmentation that allows nimble players to dominate niches like hearables, robotics, or satellite imaging. But fragmentation also raises the stakes: chips alone aren’t enough; software ecosystems, partnerships, and timing decide who wins design slots.

Near term, digital ASICs from Hailo, Qualcomm, and Google will capture the lion’s share, since they align with today’s deep learning workloads. Analog and in-memory approaches are next, delivering leaps in efficiency for power-starved devices. And on the horizon, neuromorphic computing looms: if spiking neural nets scale, brain-like chips could rewrite the rules entirely. Giants like Intel and IBM are betting the upside is worth it.

The sustainability angle only adds fuel. Cloud AI guzzles megawatts; edge AI can slash energy costs by orders of magnitude. A 1 W camera chip doing local inference beats streaming to a 100 W GPU farm. In sectors like agriculture and healthcare, efficiency isn’t just about battery; it’s about global carbon footprint.
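A rough back-of-envelope calculation, using made-up but plausible figures (a 1 W camera chip at 30 inferences per second versus a shared 100 W GPU at 1,000 inferences per second, plus roughly 1 J of radio energy per streamed image), illustrates the order-of-magnitude gap:

```python
# Illustrative energy-per-inference comparison; all figures are assumptions.
edge_power_w, edge_fps = 1.0, 30        # hypothetical always-on camera chip
cloud_power_w, cloud_fps = 100.0, 1000  # hypothetical shared GPU server
radio_j_per_image = 1.0                 # hypothetical cost to stream one image

edge_j = edge_power_w / edge_fps                         # ~0.033 J per inference
cloud_j = cloud_power_w / cloud_fps + radio_j_per_image  # ~1.1 J per inference
print(f"edge: {edge_j:.3f} J, cloud: {cloud_j:.3f} J, ratio: {cloud_j / edge_j:.0f}x")
```

Under these assumed numbers the local path uses on the order of 30x less energy per result, before counting any of the network infrastructure in between.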

Bottom line: edge AI chips are evolving faster than the incumbents can dictate. What seemed like sci-fi—analog brains, self-learning silicon—is moving into commercial reality. The smart money is shifting to the edge, where the next generation of AI will be defined not by brute force, but by clever, energy-frugal design. The brain took eons to optimize; edge AI chips are doing it in years, and the race is on.



Qualcomm aversion therapy:

While Qualcomm has done development work in digital NNs, it has invested, and continues to invest, a lot of research in analog approaches over the long term. Compute-in-memory is an analog technique in which each value is stored in the memory array itself – typically as a charge or voltage on a capacitor, or as a conductance in resistive memory. These devices are subject to manufacturing variability, which reduces the reliability of the calculations when many values are accumulated (as in neurons), so Qualcomm keeps developing techniques to mitigate the problem.
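The accumulation problem is easy to see in a toy model: give each stored weight a few percent of random device mismatch and the absolute error of a multiply-and-accumulate grows with the number of summed terms. The sketch below uses illustrative parameters (3% mismatch) and is not a model of any specific Qualcomm circuit.

```python
# Toy model of analog compute-in-memory error accumulation; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def analog_mac(weights, activations, device_sigma=0.03):
    """Dot product where each stored weight deviates by ~3% device mismatch."""
    stored = weights * (1 + rng.normal(0, device_sigma, size=weights.shape))
    return float(stored @ activations)

for n in (16, 256, 4096):
    w = rng.uniform(-1, 1, n)
    x = rng.uniform(0, 1, n)
    exact = float(w @ x)
    noisy = analog_mac(w, x)
    print(f"n={n:5d}  exact={exact:8.3f}  analog={noisy:8.3f}  abs err={abs(noisy - exact):.3f}")
```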

US2025218475A1 Compute-in-Memory with Current Transition Detection 20240103

1762827113191.png






A compute-in-memory system is provided in which a plurality of compute-in-memory bitcells couple to a read bit line. Depending upon sequential binary multiplications in the compute-in-memory bitcells, a current from the read bit line sequentially increases. A transition detection circuit detects and counts the current transitions to provide a multiply-and-accumulate result from the sequential binary multiplications.
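Reading the abstract loosely: each binary multiplication that evaluates to one steps the read-bit-line current up by one unit, and a transition detector counts those steps to obtain the multiply-and-accumulate result. The toy function below mimics that counting idea in software; it is a paraphrase for intuition, not the patented circuit.

```python
# Toy paraphrase of the "count current transitions" idea from the abstract.
def count_transitions(activation_bits, weight_bits):
    current_level = 0
    transitions = 0
    for a, w in zip(activation_bits, weight_bits):
        if a & w:                # binary multiply: contributes current only if both bits are 1
            current_level += 1   # read bit line current steps up
            transitions += 1     # transition detector increments the count
    return transitions           # equals the binary multiply-accumulate result

print(count_transitions([1, 0, 1, 1], [1, 1, 1, 0]))  # -> 2
```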



US2023297335A1 Hybrid Compute-in-Memory 20220315

1762827456004.png



A compute-in-memory array is provided that implements a filter for a layer in a neural network. The filter multiplies a plurality of activation bits by a plurality of filter weight bits for each channel in a plurality of channels through a charge accumulation from a plurality of capacitors. The accumulated charge is digitized to provide the output of the filter.
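Again as a software caricature rather than the actual circuit: unit charges from the capacitor-based bit multiplications are summed on a shared node and then digitized, so the filter output is only as precise as the ADC step size. The 6-bit resolution below is an assumption chosen purely for illustration.

```python
# Caricature of charge-domain accumulate-then-digitize; parameters are assumptions.
import numpy as np

def charge_domain_filter(act_bits, weight_bits, adc_bits=6):
    """Sum unit charges for each 1x1 product, then quantize like an ADC would."""
    charge = float(np.sum(np.asarray(act_bits) & np.asarray(weight_bits)))
    full_scale = len(act_bits)                  # maximum possible accumulated charge
    levels = 2 ** adc_bits - 1
    code = round(charge / full_scale * levels)  # digitized filter output
    return code, charge

rng = np.random.default_rng(1)
a = rng.integers(0, 2, 64)
w = rng.integers(0, 2, 64)
print(charge_domain_filter(a, w))
```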


US2024095492A1 MEMORY MANAGEMENT FOR MATHEMATICAL OPERATIONS IN COMPUTING SYSTEMS WITH HETEROGENEOUS MEMORY ARCHITECTURES 20220901


1762827588032.png




The application describes a method for performing mathematical operations on a processor. The method generally includes initializing at least a portion of weight data for a machine learning model in a first memory component associated with a processor. Input data is stored in a second memory component coupled with the processor. Operations using the machine learning model are executed, via a functional unit associated with the processor, based on the at least the portion of the weight data and the input data. A result of the operations using the machine learning model is stored in the second memory component.

[0019] To improve the performance of operations using machine learning models, various techniques may locate computation near or with memory (e.g., co-located with memory). For example, compute-in-memory techniques may allow for data to be stored in SRAM and for analog computation to be performed in memory using modified SRAM cells. In another example, function-in-memory or processing-in-memory techniques may locate digital computation capacity near memory devices (e.g., DRAM, SRAM, MRAM, etc.) in which the weight data and data to be processed are located. In each of these techniques, however, many data transfer operations may still need to be performed to move data into and out of memory for computation (e.g., when computation is co-located with some, but not all, memory in a computing system).
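The placement idea is simple to sketch: pin the weights once in the memory component next to the compute unit, and let only the activations cross the memory boundary on each inference. The class and variable names below are hypothetical, used only to show why this cuts transfer traffic.

```python
# Illustrative sketch of the weights-near-compute / activations-elsewhere split.
# Class and field names are hypothetical, not taken from the patent.
import numpy as np

class NearMemoryAccelerator:
    def __init__(self, weights):
        self.weights = weights   # loaded once into the "near" memory component
        self.transfers = 0

    def run(self, activations_far):
        self.transfers += 1      # only the activations cross the memory boundary
        return activations_far @ self.weights

rng = np.random.default_rng(2)
acc = NearMemoryAccelerator(rng.standard_normal((128, 10)))
for _ in range(100):             # 100 inferences; weights are never moved again
    _ = acc.run(rng.standard_normal((1, 128)))
print(acc.transfers)             # -> 100 activation transfers, zero weight re-loads
```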
 