BRN Discussion Ongoing

stockduck

Regular

"...800 giga operations per second (GOPS) while operating under 300 milliwatts—setting a new benchmark for edge AI efficiency."

That sounds like incredibly massive power to me… :)



 
  • Like
  • Love
  • Fire
Reactions: 31 users

Tothemoon24

Top 20

BrainChip Unveils Breakthrough AKD1500 Edge AI Co-Processor at Embedded World North America

Share

LAGUNA HILLS, Calif.--(BUSINESS WIRE)--BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), a global leader in ultra-low power, fully digital, event-based neuromorphic AI, today announced the launch of its AKD1500, a neuromorphic Edge AI accelerator co-processor chip, at Embedded World North America.
Designed to deliver exceptional performance with minimal power consumption, the AKD1500 achieves 800 giga operations per second (GOPS) while operating under 300 milliwatts—setting a new benchmark for edge AI efficiency. This makes AKD1500 ideal for deployment in battery powered wearables, smart sensors, and heat-constrained environments where battery life and thermal limits are critical.
The AKD1500 integrates seamlessly with x86, ARM, and RISC-V host processing platforms via PCIe or Serial interfaces, enabling rapid adoption across a wide range of applications. The AKD1500 co-processor approach is ideal for a wide range of environments and industries, upgrading multi-processor SoCs within defense, industrial and enterprise settings, and upgrading embedded microcontrollers for AI solutions in healthcare, wearables, and consumer electronics without a complete system redesign. The AKD1500 product has been delivered and designed into several end solutions in AI enabled sensing for medical and defense related applications, including Parsons, Bascom Hunter and Onsor Technologies.
“The AKD1500 is a catalyst for the next wave of intelligent AIoT devices,” said Sean Hehir, CEO of BrainChip. “We’re empowering developers to break free from cloud dependency and bring adaptive learning directly to the edge in a compact, cost-effective package. This technology will make AI truly ubiquitous in smart factories, homes, and wearable devices.”
“BrainChip’s AKD1500 on our 22FDX® platform delivers outstanding compute and memory efficiency,” said Anand Rangarajan, Director of AI & IOT Compute at GlobalFoundries. “Embedded developers are constantly innovating to get the right level of AI to fit within performance, power and area constraints. Using BrainChip’s neuromorphic architecture combined with GlobalFoundries’ 22FDX® process technology, the AKD1500 offers an excellent performance, power and cost envelope that fits into edge devices. We’re proud to support BrainChip’s end-to-end embedded AI solutions using GlobalFoundries silicon.”
AKD1500 is supported by BrainChip’s MetaTF™ software development tools environment, enabling machine learning engineers to easily convert, quantize, compile and deploy models on Akida using standard TensorFlow/KERAS formats which dramatically reduces development time and cost while expanding accessibility for AI developers. BrainChip’s event-based Akida™ neuromorphic architecture also enables the AKD1500 to provide on-chip learning, a critical differentiator from conventional AI accelerators that rely solely on cloud-based training.
AKD1500 samples are available today with volume production scheduled for Q3’26.
BrainChip’s Chief Development Officer, Jonathan Tapson, will present “The Impact of GenAI Workloads on Compute-in-Memory Architectures” at Embedded World North America on November 4th.
For more information: join us for a demo of AKD1500 at Booth 3080; visit the BrainChip developer site for free tutorials, tools, and models for neuromorphic computing; and check out the Embedded World North America site.

 
  • Like
  • Fire
  • Love
Reactions: 24 users
It has now been a year since BrainChip launched their Pico.
I would think we should have some news soon, as that was at the request of a client, I believe 🤔.
Some news on this would be good.
 
  • Like
Reactions: 4 users

perceptron

Regular

"...800 giga operations per second (GOPS) while operating under 300 milliwatts—setting a new benchmark for edge AI efficiency."

That sounds like incredibly massive power to me… :)



That is impressive performance. No wonder the potential and current applications are varied.
 
  • Like
Reactions: 3 users

Labsy

Regular
If 800 GOPS at 300 mW (≈ 2.7 TOPS/W) on a general-purpose, programmable edge-AI chip were validated and repeatable, it would be highly attractive to the industry — roughly:

🔋 Top-tier efficiency — about 2–3× better than most commercial edge-AI chips today (like Google Edge TPU ≈ 1 TOPS/W, NVIDIA Orin Nano ≈ 0.5–1 TOPS/W).

🧠 Fits the fastest-growing market — always-on, sensor-edge, and embedded AI (wearables, automotive, defense, IoT).

💰 High commercial value — a validated 2.7 TOPS/W neuromorphic part could attract major licensing or acquisition interest, since energy efficiency is now the main bottleneck in deploying AI at scale.

🧩 If it scales linearly across cores or tiles, it could compete directly with specialized AI accelerators from major semiconductor firms.


In short: validated real-world 800 GOPS @ 300 mW would be a breakthrough-level result in edge-AI efficiency — commercially and technically very attractive.

ChatGPT
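For anyone who wants to sanity-check that "≈ 2.7 TOPS/W" parenthetical, the efficiency figure is just throughput divided by power (taking the headline 800 GOPS and 300 mW at face value, not as measured numbers):

```python
# Sanity-check the claimed efficiency: 800 GOPS at 300 mW.
# These are the vendor's headline figures, not independent benchmarks.
gops = 800   # giga-operations per second
watts = 0.3  # 300 milliwatts

tops_per_watt = gops / 1000 / watts   # convert GOPS -> TOPS, divide by power
print(f"{tops_per_watt:.2f} TOPS/W")  # -> 2.67 TOPS/W
```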
 
  • Like
  • Love
Reactions: 4 users

BrainShit

Regular

"BrainChip Unveils Breakthrough AKD1500 Edge AI Co-Processor at Embedded World North America" (press release quoted in full above)


The highest GOPS per watt achieved by an AI chip worldwide as of 2025 is around 10 TOPS/W (tera-operations per second per watt), equivalent to about 10,000 GOPS per watt. For example, the Hailo-8 AI accelerator delivers around 26 TOPS while consuming approximately 2.5 watts, which corresponds to roughly 10 TOPS/W efficiency. This represents one of the best known performance-to-power ratios for AI chips focusing on edge and low-power applications.

In contrast, high-performance data center GPUs like the NVIDIA Jetson AGX Orin can deliver up to 275 TOPS but at configurable power levels from 15 watts up to 60 watts, resulting in lower TOPS/W efficiency suitable for more power-rich environments but not ultra-low-power contexts.

Therefore, for 1 watt power usage specifically, leading edge AI chips reach up to around 10,000 GOPS (10 TOPS) using highly optimized architectures designed for inferencing efficiency at the edge, balancing performance with practical power constraints for embedded and portable devices.

800 GOPS (giga-operations per second) is a high level of performance for edge AI devices, particularly in low-power, battery-operated scenarios. No widely reported AI processor currently achieves that throughput at or below 300 milliwatts, which would make the BrainChip AKD1500 the highest known performer in that power range today. :ninja:

Other edge NPUs and accelerators such as the STM32 series or specialized AI chips have demonstrated efficiencies roughly between 8 and 12 GOPS/W at around 1 watt, often tailored for inference tasks in low-power environments.

But high GOPS values alone do not guarantee good real-world inferencing performance if the system cannot supply data efficiently to the AI processor or if other bottlenecks exist in memory, CPU, or peripheral functions.
 
  • Like
  • Fire
  • Thinking
Reactions: 10 users

Fiendish

Emerged
I asked my friend Grok to break down this development for me whilst I sipped my coffee… here is what they/them/it/thing said.

### Overview of the Announcement
BrainChip Holdings Ltd. (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY) launched the AKD1500 on November 4, 2025, at Embedded World North America. This neuromorphic Edge AI co-processor chip delivers 800 Giga Operations Per Second (GOPS) at under 300 milliwatts, enabling on-chip learning and seamless integration with x86, ARM, and RISC-V platforms via PCIe or serial interfaces. Samples are available now, with volume production slated for Q3 2026. The chip targets battery-constrained devices like wearables, smart sensors, and heat-limited environments, positioning it as a low-power alternative to traditional AI accelerators reliant on cloud training.

This launch builds on BrainChip's Akida architecture, already in evaluations with partners like Bascom Hunter (US$100,000 contract for defense applications) and integrations with GlobalFoundries' 22nm FD-SOI process. Early adopters include medical sensing (e.g., Parsons, Onsor Technologies) and defense, with tape-out completed in October 2025 to meet demand.

### Key Strengths Driving Commercial Appeal
The AKD1500's neuromorphic design—mimicking brain-like event-based processing—offers 3-10x efficiency gains over conventional CNNs for edge tasks, reducing latency and power draw. This aligns with surging demand for on-device AI, where 40% of global Edge AI needs stem from real-time IoT processing. MetaTF software tools simplify deployment using TensorFlow/Keras, lowering barriers for developers and accelerating time-to-market.

### Market Opportunity and Size
The announcement taps into hyper-growth sectors. Neuromorphic computing, a niche within Edge AI, is exploding due to needs for energy-efficient, real-time inference in IoT (projected 75 billion devices by end-2025) and autonomous systems.

| Market Segment | 2025 Size (USD) | 2030-2032 Projection (USD) | CAGR | Relevance to AKD1500 |
|---------------|-----------------|-----------------------------|------|----------------------|
| Neuromorphic Computing | ~$48M-$213M | $1.3B-$20B | 21%-90% | Core fit: Brain-inspired chips for edge learning; BrainChip leads as first commercial producer. |
| Edge AI Overall | $13.5B-$20.8B | $66B-$84B | 22% | High: Low-power accelerators for 20% of data processed at edge; targets wearables/IoT. |
| AI Chips (Broader) | $40.8B | $286B (2029) | N/A | Indirect: Neuromorphic subset (~$480M in 2025) for specialized efficiency. |

Sources: Grand View Research, Fortune Business Insights, MarketsandMarkets, DataM Intelligence. Asia-Pacific (e.g., China's "Made in China 2025") and North America (defense/healthcare) drive adoption, with automotive and industrial IoT as key verticals.
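As a back-of-envelope check on the table's growth rates, a compound annual growth rate is just `(end/start)^(1/years) - 1`. Assuming the 2025 baselines and the upper projections run 2025 to 2032 (7 years), the quoted CAGRs roughly hold together:

```python
# Back-of-envelope CAGR check for the market table above, assuming a
# 2025 -> 2032 horizon (7 years) for the upper projections.
def cagr(start, end, years):
    """Compound annual growth rate implied by start and end values."""
    return (end / start) ** (1 / years) - 1

print(f"Edge AI overall:          {cagr(20.8, 84, 7):.0%}")    # ~22%, matching the table
print(f"Neuromorphic (bull case): {cagr(0.213, 20, 7):.0%}")   # ~91%, near the 90% top end
```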

### Target Applications and Adoption Potential
- **Battery-Powered Wearables & Consumer Electronics**: Ideal for health monitoring (e.g., anomaly detection) and AR/VR; projected $2.3B in AI chip revenue for wearables in 2025. Partnerships like ARQUIMEA for water safety sensors show early traction.
- **Smart Sensors & Industrial IoT**: Upgrades legacy microcontrollers without redesigns; enables condition monitoring in factories (Industry 4.0). Edge AI for telecom (e.g., 5G MEC) could add $1.2B in home robotics sales.
- **Defense & Enterprise**: Radiation-tolerant integrations (e.g., Frontgrade) and contracts like Bascom Hunter signal high-margin deals; negotiations with major contractors ongoing.
- **Automotive & Healthcare**: On-chip learning for real-time sensor fusion; aligns with BMW/Volkswagen's edge AI push.

Rapid adoption via co-processor form factor (M.2 cards) suits multi-processor SoCs, with RISC-V integrations (Andes Technology) expanding to open-source ecosystems. Volume production in Q3 2026 positions BrainChip to capture 5-10% of the neuromorphic niche by 2028, potentially yielding $50M-$100M annual revenue from IP licensing, chip sales, and royalties.

### Revenue Model and Financial Projections
BrainChip's hybrid model—IP licensing (high-margin royalties), direct sales (~$19K in H1 FY25, scaling post-tape-out), and support—leverages marquee partners (Mercedes, NASA, Renesas). Cash reserves at $20M (end-2024) support commercialization, with Q4 FY25 outflows at $4.1M amid growth investments.

- **Short-Term (2026)**: $10M-$20M from samples/evaluations; defense/medical pilots convert to royalties.
- **Medium-Term (2027-2030)**: $100M+ as volume ramps; 20-30% margins from IP (e.g., MegaChips licenses).
- **Upside**: If Edge AI hits 75B IoT devices, BrainChip could license to 10% of low-power sensor makers, adding $500M+ in cumulative royalties.

Investor podcasts highlight CES 2024 demos and 2025 sales wins as execution milestones.

### Competitive Landscape and Differentiation
BrainChip leads in fully digital, event-based neuromorphic IP, outpacing Intel's Loihi (higher idle power) and IBM's TrueNorth (less portable). No direct rivals match AKD1500's 800 GOPS/<300mW benchmark for co-processors. Barriers include software maturity, but MetaTF bridges this. Risks: Slower-than-expected partner integrations or broader AI hype deflation.

### Stock and Investor Sentiment
ASX:BRN traded at ~$0.195 (up 2.63% on Oct 24, 2025), with OTCQX:BRCHF at $0.1338 (Nov 3). YTD +116%, but volatile (down 10% post-Q4 update on cash burn). Analysts forecast +12.8% to $0.220 by Oct 2025 end, driven by commercialization. X buzz is muted (mostly press echoes), but investor relations emphasize "substantial growth potential." Semantic search shows optimism on efficiency but caution on unproven scale.

### Risks and Challenges
- **Execution**: Q3 2026 production delays could erode momentum; current revenue is nascent ($19K H1 FY25).
- **Market Hype**: AI bubble risks (e.g., overvaluation without AGI) may cap multiples; neuromorphic remains experimental (~$480M in 2025).
- **Competition**: NVIDIA/Qualcomm dominate Edge AI; BrainChip must prove 100x power savings in real workloads.
- **Adoption**: Relies on ecosystem (e.g., RISC-V Con 2025 demos); developer uptake via Akida Cloud is key.

### Overall Commercial Potential
High (8/10). The AKD1500 addresses a validated pain point—power-hungry edge AI—in a $66B+ market by 2030, with BrainChip's first-mover IP yielding defensible royalties. Early contracts and tape-out signal $100M+ revenue by 2028, but success hinges on 2026 volume and partner wins. For investors, it's a speculative play on neuromorphic's "hyper-growth" (45x by 2030), balanced by cash burn. Long-term, it could redefine efficient AI, akin to ARM's mobile revolution.
 

Frangipani

Top 20




 
  • Like
Reactions: 1 users