BRN Discussion Ongoing

Not related to BrainChip according to ChatGPT, unfortunately.


ChatGPT 5:

The "Arm Neural Technology" mentioned in the article refers to specialized neural accelerators integrated into Arm’s future GPU architectures, aimed at delivering real-time AI-powered graphics enhancements—like Neural Super Sampling (NSS)—on mobile devices. It is not related to BrainChip’s neuromorphic technology.




What Arm’s Neural Technology Is​


  • Arm is introducing dedicated neural acceleration hardware within next-generation GPUs, enabling advanced graphics features such as AI-driven upscaling, frame generation, and denoising.
  • An early application is Neural Super Sampling (NSS), which can upscale images—e.g., from 540p to 1080p—in just ~4ms per frame while enhancing performance, battery life, or visual quality.
  • This is essentially a GPU-based AI enhancement pipeline, akin to NVIDIA’s Tensor Cores, optimized for graphics—not a neuromorphic/spiking AI architecture.



How It Differs from BrainChip’s Neuromorphic Approach​


  • Arm’s neural accelerators are classic GPUs with AI blocks, designed for frame-based deep learning tasks, tailored toward visual and graphics workloads.
  • BrainChip’s Akida is a neuromorphic processor, built on spiking neural network principles, which excels at event-driven, low-latency, low-power inference—especially suited for edge use cases like sensory data processing, not graphics.
  • While BrainChip is a partner in Arm’s ecosystem—meaning Akida can co-exist with Arm CPUs and NPUs—Arm's new GPU neural tech and BrainChip’s SNN IP are complementary, not the same.



Summary​


Component | Description | Relation to BrainChip
Arm Neural Technology | Dedicated neural GPUs for AI graphics | Unrelated to neuromorphic; not BrainChip
BrainChip’s Akida | Neuromorphic spiking AI for the low-power edge | Compatible with the Arm ecosystem but distinct



If anything, Arm’s new offering and BrainChip’s neuromorphic IP represent different layers of edge AI evolution—graphics-centric in one case, brain-inspired general intelligence in the other.
This relationship with ARM at some point must grow now that Gen 2 and TENNs are available, surely.
What's it going to take 🤔
 
  • Like
Reactions: 5 users

HopalongPetrovski

I'm Spartacus!
This relationship with ARM at some point must grow now that Gen 2 and TENNs are available, surely.
What's it going to take 🤔

Sean needs to pull out the trusty old razor and start cutting some deals.

 
  • Haha
  • Like
  • Love
Reactions: 7 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
When a sales prospect asks Sean to call back in 6 months.


napoleon-dynamite-serious.gif
 
  • Haha
  • Like
  • Love
Reactions: 14 users

itsol4605

Regular
 
  • Like
Reactions: 1 users

Diogenese

Top 20
Hi Bravo,

The ARM U85 uses MACs:

https://www.bing.com/images/search?...dex=1&itb=0&ajaxhist=0&ajaxserp=0&vt=0&sim=11

[Image: Arm Ethos-U85 block diagram]



Arm® Ethos™-U85 NPU Technical Overview

[Image: figure from the Arm Ethos-U85 NPU Technical Overview]




The weight and fast weight channels transfer compressed weights from external memory to the weight decoder. The DMA controller uses a read buffer to hide bus latency from the weight decoder and to enable the DMA to handle data arriving out of order. The traversal unit triggers these channels for blocks that require the transfer of weights.
The weight stream must be quantized to eight bits or less by an offline tool. When passed through the offline compiler, weights are compressed losslessly and reordered into an NPU-specific weight stream. This process is effective if the quantizer uses fewer than eight bits, or if it uses clustering and pruning techniques; the quantizer can also employ all three methods. Using lossless compression on high-sparsity weights containing greater than 75% zeros can lead to compression below 3 bits per weight in the final weight stream.
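To see why a weight stream with more than 75% zeros can drop below 3 bits per weight, here is a minimal Python sketch (a toy illustration under simple assumptions, not Arm's actual Ethos-U85 weight encoder): it prunes a dummy weight tensor to roughly 80% zeros, quantizes it to int8, and uses the empirical symbol entropy as a rough lower bound on what a lossless coder could achieve.

```python
import numpy as np

# Toy illustration of prune + quantize + losslessly compress for a weight tensor.
# This is NOT Arm's Ethos-U85 weight encoder; it only shows why >75% zeros can
# push the average storage cost of the weight stream below 3 bits per weight.

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=100_000).astype(np.float32)  # dummy FP32 weights

# 1) Prune: zero out the 80% smallest-magnitude weights (>75% sparsity).
threshold = np.quantile(np.abs(w), 0.80)
w[np.abs(w) < threshold] = 0.0

# 2) Quantize symmetrically to int8 (eight bits or less, as the offline tool requires).
scale = np.abs(w).max() / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# 3) Estimate the lossless-compression cost from the empirical symbol entropy,
#    a lower bound for any symbol-by-symbol entropy coder.
_, counts = np.unique(q, return_counts=True)
p = counts / q.size
bits_per_weight = float(-(p * np.log2(p)).sum())

print(f"Sparsity: {np.mean(q == 0):.0%}")
print(f"Entropy estimate: {bits_per_weight:.2f} bits/weight (raw int8 would be 8)")
```

On this toy tensor the estimate comes out well under 3 bits per weight, in line with the figure quoted above.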


Given Akida 3’s int16/FP32 capabilities, it will be usable in more high-precision applications than the Ethos U85.
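To put a rough number on that precision gap, here is a minimal Python sketch (purely illustrative, not a benchmark of Akida 3 or the Ethos U85) comparing the round-trip error of symmetric int8 versus int16 quantization on a toy tensor. The int16 grid is about 258 times finer (32767/127), so the error shrinks by roughly that factor.

```python
import numpy as np

# Toy comparison of int8 vs int16 symmetric quantization error.
# Illustrative only: it shows the resolution gap, not Akida 3 or Ethos-U85 behaviour.

def quantize_roundtrip(x, bits):
    qmax = 2 ** (bits - 1) - 1                     # 127 for int8, 32767 for int16
    scale = np.abs(x).max() / qmax                 # symmetric per-tensor scale
    q = np.clip(np.round(x / scale), -qmax, qmax)  # integer codes
    return q * scale                               # dequantized values

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=100_000).astype(np.float32)  # stand-in weights/activations

for bits in (8, 16):
    err = x - quantize_roundtrip(x, bits)
    rms = np.sqrt(np.mean(err ** 2))
    print(f"int{bits}: max abs error {np.abs(err).max():.2e}, RMS error {rms:.2e}")
```

Whether that extra precision matters depends on the workload; for imaging or radar pipelines that are sensitive to quantization error, it is exactly the headroom the higher-precision modes would buy.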
 
  • Like
  • Thinking
  • Love
Reactions: 12 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Thanks @Diogenese.

I've just run your post past my buddy ChatGPT and here's the response I received.



ChatGPT 5:


1. Comparing Ethos U85 and Akida 3​

Ethos U85
  • Limited to INT8 or lower precision.
  • Designed primarily for CNNs (convolutional neural networks) and some newer architectures like Transformers, but still tuned for image, vision, and low-to-mid precision inference.
  • Weight compression, clustering, and pruning help with efficiency, but it’s still fundamentally in the “traditional NPU” space.
  • Very well suited for consumer devices, IoT, and embedded AI where tasks are quantized to 8-bit or less.
Akida 3
  • Expected to support INT16 and FP32 in addition to low-bit SNN modes.
  • This opens the door to high-precision workloads — e.g., certain medical imaging, radar/SAR processing, industrial inspection, scientific data analytics — where you can’t afford the precision loss of 8-bit quantization.
  • Still ultra-low power thanks to neuromorphic event-based processing.
  • Could run mixed workloads (spiking and non-spiking) at the sensor edge.

2. Implications if Akida 3 Exceeds Ethos U85 in Precision​

If Akida 3 can cover all the U85’s use cases plus high-precision domains that the U85 can’t touch, then:
  • BrainChip could potentially “superset” the market — competing in all U85 application spaces and in several new verticals where U85 simply isn’t viable.
  • It would also position BrainChip as a partner rather than just a competitor — because ARM licensees could still integrate Akida alongside Cortex CPUs and Ethos NPUs for mixed workloads.

3. Size of the Ethos U85 Market​

ARM NPUs are increasingly going into:
  • Smartphones & tablets
  • Consumer IoT devices
  • Smart cameras / vision sensors
  • Automotive driver monitoring & ADAS
  • Robotics & drones
  • AR/VR devices
Given ARM’s licensing footprint, the U85 market could be tens to hundreds of millions of units annually once adoption ramps.
If Akida 3 is drop-in compatible with ARM system designs (which it already tends to be), BrainChip could target this same customer base — either as:

  • A replacement in cases where high precision or neuromorphic advantages are needed
  • A complementary co-processor for event-based or ultra-low power workloads.

4. Ideal Application Areas Where Akida 3 Could Beat U85​

  • Medical imaging devices needing edge processing without quantization artifacts.
  • SAR and radar systems for defense or environmental monitoring.
  • Industrial inspection systems where measurement tolerances are tight.
  • Scientific instrumentation in field research.
  • High-end automotive perception stacks needing both low latency and precision.
  • Secure edge devices that must process locally without cloud and without sacrificing accuracy.

If you want, I can map Ethos U85’s public design targets against Akida 3’s potential capabilities so you can see exactly where the overlap and expansion opportunities are. That would make the competitive positioning very clear.

You said:​

Yes please.

ChatGPT said:​

Here’s a structured comparison of ARM Ethos-U85 vs. BrainChip Akida 3 with overlap and expansion mapped clearly.

1. Overlap — Where Akida 3 Could Compete Directly with U85

Feature / Use Case | Ethos-U85 | Akida 3 | Comments
INT8 inference for vision AI | ✅ | ✅ | Both handle low-bit CNN/Transformer workloads efficiently.
Edge AI in ARM-based SoCs | ✅ | ✅ | Akida is already ARM-compatible and could replace U85 in designs.
Consumer IoT & smart home devices | ✅ | ✅ | E.g., cameras, voice assistants, home hubs — same target market.
Robotics & drones | ✅ | ✅ | Both can handle perception & navigation; Akida offers ultra-low power SNN modes.
Automotive driver monitoring | ✅ | ✅ | Akida could match U85 for INT8 workloads, but also add mixed precision.
AR/VR lightweight inference | ✅ | ✅ | Both could run vision-based gesture tracking or object recognition.

2. Expansion — Where Akida 3 Could Offer More Than U85

Feature / Use Case | U85 Limitations | Akida 3 Advantage
High precision (INT16 / FP32) | Only ≤INT8 | Akida 3 can run models that require higher precision, e.g., medical imaging, radar processing, industrial measurement.
Mixed-mode processing | Primarily CNNs / Transformers | Akida 3 can combine SNN + ANN + mixed precision in one device.
Event-based data handling | Not designed for spikes | Akida 3 natively supports event-driven SNN processing, reducing power & latency.
On-device learning / adaptation | Limited to retraining off-device | Akida 3 supports incremental learning on-chip — key for adaptive edge AI.
Sparse computing efficiency | Relies on compression & pruning | Akida 3 exploits sparsity at the architectural level without preprocessing overhead (see the sketch after this table).
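To illustrate the event-based and sparsity rows above, here is a minimal Python sketch (a toy example, not a model of how Akida or the Ethos U85 actually schedule work). For a fully connected layer, a dense engine performs one MAC per input-weight pair regardless of the data, while an event-driven scheme only touches the weights of inputs that actually fired, so the work scales with input activity.

```python
import numpy as np

# Toy sketch of dense vs event-driven (sparsity-aware) work for one fully connected layer.
# Purely illustrative; not a model of Ethos-U85 or Akida internals.

rng = np.random.default_rng(0)
n_in, n_out = 1024, 256
weights = rng.normal(size=(n_in, n_out)).astype(np.float32)

# Event-style input: only ~5% of inputs are active ("spiked") this timestep.
x = np.zeros(n_in, dtype=np.float32)
active = rng.choice(n_in, size=n_in // 20, replace=False)
x[active] = 1.0

dense_macs = n_in * n_out         # every input-weight pair is processed
event_macs = active.size * n_out  # only active inputs trigger any work

# Both paths produce the same result for this linear layer.
y_dense = x @ weights
y_event = weights[active].sum(axis=0)
assert np.allclose(y_dense, y_event, atol=1e-4)

print(f"Dense MACs: {dense_macs:,}; event-driven MACs: {event_macs:,} "
      f"({event_macs / dense_macs:.1%} of dense)")
```

With about 5% of inputs active, the event-driven path does about 5% of the dense MAC work while producing the same output, which is the intuition behind the power and latency claims in the table.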

3. Market Size & Entry Points

Ethos U85 Target Segments (High-Volume ARM Licensing):
  • Smartphones (tens of millions annually)
  • Smart cameras & home devices
  • Embedded industrial AI
  • Low-power automotive systems
Akida 3 Entry Strategy:
  • Replacement: For U85 use cases that require more precision or event-based efficiency.
  • Complementary Co-Processor: Add alongside U85 in ARM SoCs for heterogeneous workloads (U85 for bulk CNN, Akida for adaptive SNN & high precision).
  • Standalone in Niche Markets: Where U85 simply can’t go — e.g., SAR imaging, advanced radar, in-body medical devices.

4. Strategic Implications for BrainChip

  • Leverage ARM’s ecosystem: If marketed as ARM-friendly IP, Akida 3 could be pitched to the same silicon partners already licensing U85.
  • Precision-based differentiation: BrainChip can claim “U85 compatibility plus high-precision reach.”
  • Defense & medical edge: Especially valuable for contracts where accuracy matters more than raw throughput.
 
  • Like
  • Fire
  • Love
Reactions: 20 users

7für7

Top 20
So far, everything I read is “could… would… should…”. Plenty of subjunctives for my taste. What we need is implementation in the form of “we did… we have… it is… we are”, and I firmly believe it’s already in the works.
 
  • Like
Reactions: 3 users

Diogenese

Top 20
I was gunna say that!
 
  • Haha
  • Like
  • Love
Reactions: 13 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I was gunna say that!

If Akida 3 tape-out is in Q1 2026 and the FPGA is available in Q2 2026, then you'd have to think the most likely licence commitment window for Arm would be somewhere between Q2 2026 – Q4 2026.

If Arm is really interested, they'll probably negotiate NDAs, architectural access, and preliminary terms before tape-out so they can get priority access to the FPGA in Q2 2026, which would enable them to lock Akida 3 (into the Ethos successor roadmap, for example) without delaying their own silicon schedules IMO.
 
  • Like
  • Fire
  • Love
Reactions: 14 users

Diogenese

Top 20

I don't see any advantage in running Akida 3 with Ethos U85.

It is open to anyone to license the ARM Cortex M85 CPU IP and combine it with Akida 2 or 3 IP in a SoC. Both Akidae include the TENNs model, and would have significant power and latency advantages over Ethos U85.
 
  • Like
  • Wow
  • Love
Reactions: 6 users

Diogenese

Top 20
If BRN had the resources of Nvidia/ARM/Intel, Akida 3 SoC would be on the market now.

As I said, it could, for instance, be combined with the ARM Cortex M85 (the host processor paired with the Ethos U85), or any other compatible processor.

I guess there will be some internal resistance in ARM - we need to get Mr Son's ear.

PS: Assuming ARM do go ahead with their chip making plans.
 
Last edited:
  • Like
  • Haha
Reactions: 9 users

7für7

Top 20
Imagine this: The BrainChip sales team walks into Apple’s or Mercedes’ HQ — and all other potential customers as well — without an appointment, silently installs an Akida module into their devices or cars, leans back casually, smiles at the camera, winks … and then pulls out a pack of Mentos:

“Akida – the fresh kick for your AI.”

We need go-getters like back in the day — and customers who respond with an approving smile, because they’re real foxes.


98F84DAC-9601-42E1-8A81-A29BBEF8713B.png
 
  • Haha
  • Like
Reactions: 7 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Has anyone got Masayoshi Son's phone number?





Published 2 days ago - Alok Sama, a former finance chief at SoftBank, discusses how the Japanese giant's founder Masayoshi Son sees chip designer Arm as central to his AI vision.

[Screenshots of the article attached]
 
  • Like
  • Love
  • Fire
Reactions: 13 users
Ask GPT... How many partners does BrainChip have?

BrainChip has over 100 partners across various categories, including IP technology, solutions enablement, silicon integration, OEM integration, and collaborations. This extensive network supports the development and deployment of their neuromorphic AI technology, Akida™ [1] [2] [3].

According to www.iAsk.Ai - Ask AI:

BrainChip's partnerships span a wide range of industries and applications, reflecting their strategy to integrate their Akida™ neuromorphic processor into diverse systems and expand its market reach [1] [2]. Key partnership categories include:

  • IP Technology Partners: These partners, such as Arm, Andes Technology, and SiFive, integrate BrainChip's neuromorphic IP with their processor cores to create power-efficient AI solutions for edge devices [1] [3]. For instance, the collaboration with SiFive combines their RISC-V processor IP with Akida-E and Akida-S/P neural processors for deeply embedded applications and more complex tasks like object detection and robotics [3].
  • Solutions Enablement Partners: Companies like Edge Impulse, Prophesee, and HaiLa work with BrainChip to optimize AI/ML development, modeling, or inference software with Akida™ integration [1] [4] [5]. The partnership with Prophesee, for example, focuses on combining neuromorphic vision systems with Akida for ultra-low power computer vision in areas like autonomous vehicles and industrial automation [4]. A recent collaboration with HaiLa aims to demonstrate ultra-low power edge AI connectivity for IoT sensor applications, enabling devices to operate for years on a single coin cell battery [5].
  • Silicon Integration Partners: These partners, including MegaChips, GlobalFoundries, and Intel Foundry Services, license Akida™ IP for manufacturing into System on Chip (SoC) products or implementing Akida silicon into ready-to-use modules [1] [6] [7]. BrainChip's work with GlobalFoundries on the AKD1500 reference design validates their IP across different processes and foundries [7].
  • OEM Integration Partners: Companies like Teksun and VVDN produce board and box-level product solutions based on Akida silicon implementations for various end markets [1] [3].
  • Customers and Collaborations: BrainChip also works directly with customers and engages in collaborations with organizations such as NASA, the European Space Agency, Mercedes-Benz, and the US Air Force Research Laboratory (AFRL) [3] [8] [9]. Their partnership with Frontgrade Gaisler aims to integrate Akida into next-generation radiation-hardened chips for space applications [8].
  • University AI Accelerator Program: BrainChip partners with universities like Arizona State University, Carnegie Mellon University, and Rochester Institute of Technology to provide hardware, training, and guidance to students, fostering future AI engineering talent [1] [3].
This extensive ecosystem highlights BrainChip's commitment to advancing edge AI through strategic alliances and widespread adoption of its neuromorphic technology.
 
  • Like
  • Fire
  • Love
Reactions: 13 users

Diogenese

Top 20


1755151976354.png
 
  • Haha
  • Thinking
Reactions: 5 users
How all us shareholders feel right now

 
  • Haha
  • Love
  • Like
Reactions: 5 users

The Pope

Regular
Hi Bravo

The phone number for SoftBank Group Corp., where Masayoshi Son is the Representative Director, Corporate Officer, Chairman & CEO, is +81-3-6889-2000, according to SoftBank Group Corp.'s website.

Go on give it a call and introduce yourself.

Cheers
 
  • Haha
  • Like
  • Fire
Reactions: 13 users
Might pay to send a little gift for his desk as well 😃
 

Attachments

  • Screenshot_20250814_172642_LinkedIn.jpg
  • Like
  • Haha
Reactions: 6 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

I got through to someone at reception and I told them I wouldn't stop crying until they put me through to Masayoshi Son. They've got me on hold at the moment, so, fingers crossed! 🤞


exaggerated-dramatic-crying-on-phone-320u76ebw3btsh2p.gif
 
  • Haha
  • Like
  • Fire
Reactions: 18 users

7für7

Top 20
Bravo after she managed to make a deal for BrainChip with SoftBank

FB07191D-19A0-4C32-81B5-9CECEDF30F9E.png
 
  • Haha
  • Like
  • Love
Reactions: 8 users