BRN Discussion Ongoing

7für7

Top 20
IBM, unlike most, already have a product in Symphony which is immediately AKIDA-ready. Huge advantage, and it shortens the testing process.
Perhaps your views may reflect your pessimism.
It's not about believing anyone. It's about reading and making your own mind up.
The biggest lesson is not about the tech or commercial possibilities but about the lengthy engagement-to-commercial-product cycle - if you understand and accept that is the way it is, then there is no hole to fall into.
If I did not believe in BrainChip's future I would have offloaded a spike or 2 ago.
I asked AI what the phrase "I am tipping that pretty soon Brainchip will make it a foursome via a partnership with IBM." means.

Reply.
" It means the speaker is predicting (that’s what tipping means in Australian English) that BrainChip will soon announce a fourth major partnership, and they believe that fourth partner will be IBM."
"

" “I am tipping…”

In Australian usage, tipping means predicting, backing, or expecting something to happen. It’s borrowed from “footy tipping,” where you pick the winners of matches."

Don't try to make me look stupid now… I know exactly what that means. Yet you keep presenting your viewpoint as the only true one, which basically cancels out the hedging of your “I'm tipping” and your claims… that's what you don't understand.

Some of your statements are framed like facts, and qualifiers like “I'm tipping” get psychologically downgraded when people read them, because the main message carries more weight than the “comparatively unimportant” side note “I'm tipping.”

And I already said: let’s just drop the topic and everyone can form their own opinion. Bravo represented my view very well.

So I don’t know what you’re trying to achieve now by making me look dumb again.
 
  • Fire
Reactions: 1 users

Diogenese

Top 20
IBM, unlike most, already have a product in Symphony which is immediately AKIDA-ready. Huge advantage, and it shortens the testing process.
Perhaps your views may reflect your pessimism.
It's not about believing anyone. It's about reading and making your own mind up.
The biggest lesson is not about the tech or commercial possibilities but about the lengthy engagement-to-commercial-product cycle - if you understand and accept that is the way it is, then there is no hole to fall into.
If I did not believe in BrainChip's future I would have offloaded a spike or 2 ago.
I asked AI what the phrase "I am tipping that pretty soon Brainchip will make it a foursome via a partnership with IBM." means.

Reply.
" It means the speaker is predicting (that’s what tipping means in Australian English) that BrainChip will soon announce a fourth major partnership, and they believe that fourth partner will be IBM."

" “I am tipping…”

In Australian usage, tipping means predicting, backing, or expecting something to happen. It’s borrowed from “footy tipping,” where you pick the winners of matches."
You and your vernacular.

I dislocated my vernacular once - very nasty operation.
 
  • Haha
  • Like
Reactions: 7 users

itsol4605

Regular
Nobody can stop him – great!
 
  • Like
  • Fire
  • Love
Reactions: 12 users

Learning

Learning to the Top 🕵‍♂️
More from Kevin

Screenshot_20260302_212903_LinkedIn.jpg

Learning
 
  • Like
  • Fire
  • Love
Reactions: 27 users

manny100

Top 20
Dio, Symphony is interesting in that it's so easy for Kevin to insert the plug-and-play M.2 card into Symphony with minimal additional fuss (compared to full integration) and churn out what he describes as great ROI etc.
AKIDA1000 just does what AKIDA does within Symphony: event-based sifting through millions of transactions, saving loads of power at a much quicker pace than traditional AI and giving users an edge in trading.
However, extensive testing would still be needed before unleashing it on clients.
Compare that to, say, Onsor, who have to develop a wearable from scratch - an M.2 card would look a little dodgy hanging off a pair of glasses.
M.2 cards would also not be suitable for the needs of our known defense clients.
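
The "event-based sifting" idea can be sketched in a few lines. This is a toy illustration only, not BrainChip's actual pipeline: the transaction stream, the threshold, and the "heavy scoring" step are all made up. The point it shows is that when expensive work fires only on threshold-crossing events, a mostly-quiet stream costs almost nothing:

```python
import random

random.seed(0)

# Toy transaction stream: routine amounts around $100, one injected outlier.
transactions = [random.gauss(100.0, 5.0) for _ in range(10_000)]
transactions[1234] = 900.0  # hypothetical anomalous transaction

THRESHOLD = 150.0  # made-up "spike" threshold for illustration


def dense_scan(stream):
    """Conventional approach: heavy scoring runs on every transaction."""
    heavy_ops = 0
    flagged = []
    for i, amount in enumerate(stream):
        heavy_ops += 1  # expensive model inference for each transaction
        if amount > THRESHOLD:
            flagged.append(i)
    return flagged, heavy_ops


def event_based_scan(stream):
    """Event-based approach: heavy work fires only on a threshold event."""
    heavy_ops = 0
    flagged = []
    for i, amount in enumerate(stream):
        # A cheap comparison plays the role of spike generation; silence is free.
        if amount > THRESHOLD:
            heavy_ops += 1  # expensive scoring only on an event
            flagged.append(i)
    return flagged, heavy_ops


dense_flags, dense_ops = dense_scan(transactions)
event_flags, event_ops = event_based_scan(transactions)

assert dense_flags == event_flags  # both find the same anomalies
print(f"dense heavy ops: {dense_ops}")  # 10000
print(f"event heavy ops: {event_ops}")  # 1
```

Same anomalies flagged either way, but the event-driven version does the expensive work once instead of 10,000 times - a crude stand-in for why sparse, event-based processing saves power on mostly-routine data.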
 
  • Like
Reactions: 2 users

7für7

Top 20
Kevin likes this


The related article

 
  • Like
  • Love
Reactions: 8 users

Diogenese

Top 20
Dio, Symphony is interesting in that it's so easy for Kevin to insert the plug-and-play M.2 card into Symphony with minimal additional fuss (compared to full integration) and churn out what he describes as great ROI etc.
AKIDA1000 just does what AKIDA does within Symphony: event-based sifting through millions of transactions, saving loads of power at a much quicker pace than traditional AI and giving users an edge in trading.
However, extensive testing would still be needed before unleashing it on clients.
Compare that to, say, Onsor, who have to develop a wearable from scratch - an M.2 card would look a little dodgy hanging off a pair of glasses.
M.2 cards would also not be suitable for the needs of our known defense clients.
Hi manny,

The M.2 card has 20 NPUs (equivalent to 4 nodes in later versions, but the geography is a bit different). I think a full-sized Akida 1 could have 1024 NPUs, and this could be ganged together with other Akida 1s, so the M.2 is simply a demonstrator. For a mass commercial market (> 1 million units), it could all be accommodated on a single chip, taking advantage of Akida's multitasking abilities. How good is that? Multitasking on a massively parallel neuromorphic processor.

https://shop.brainchipinc.com/products/akida-m-2-card

Akida M.2 Card, powered by AKD1000 IC with an ARM M.4 CPU plus 20 Akida 1.0 event-based processing NPUs in a mesh. Akida M.2 AKD1000 Card accelerates CNN-based neural network models using BrainChip’s ultra energy-efficient, and purely digital, event-based processing architecture.

  • Form factor: M.2 2260, B+M Key and E Key available
  • Host interface: PCIe PHY 2-lane
  • Memory interface: LPDDR4 via DMA
  • CPU: 32bit ARM M.4
  • NPU: 20 x Akida 1 Neuron mesh
  • Peak INT8 GOPs: 1.5 TOPs
  • On-chip memory: 8MB high-speed near compute SRAM
  • Clock frequency: 300MHz
  • Operating temperature: 0 – 70°C
  • Thermal solution: no fan or heatsink required
  • Typical application power: 1W
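
A quick back-of-envelope from the spec lines above. Assumption flagged up front: this pairs the peak throughput figure with the "typical application power" figure, which the sheet doesn't actually promise occur together, so treat the result as arithmetic on listed numbers, not a measured efficiency:

```python
# Figures copied from the spec list; pairing peak throughput with "typical"
# power is an assumption for this back-of-envelope only.
peak_int8_ops = 1.5e12   # 1.5 TOPS peak INT8
typical_power_w = 1.0    # 1 W typical application power
clock_hz = 300e6         # 300 MHz clock

tops_per_watt = (peak_int8_ops / 1e12) / typical_power_w
ops_per_cycle = peak_int8_ops / clock_hz

print(f"{tops_per_watt} TOPS/W")         # 1.5 TOPS/W
print(f"{ops_per_cycle:.0f} ops/cycle")  # 5000 INT8 ops per clock at peak
```

Spread across the 20 NPUs, that works out to 250 INT8 ops per NPU per cycle at peak - again just arithmetic on the listed numbers.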

As you say, the advantage for Kevin's project is that the M.2 is fully working COTS with ARM processor and comms interface on chip - plug, train, configure and play, and training is a breeze. I think the ease of training is something which will be of interest to olde worlde CNN AI engineers/programmers.

What this illustrates is that there is a potential market for the clunky old Akida 1 IP, because there will be a price differential for Akida 2 IP, and a much greater price differential for Akida 3. Those later models have higher precision and lower latency, but take comparatively larger slices of wafer real estate per chip.

The other thing is that our very good friends at MF have always cited the big budgets of the tech giants for developing AI as a downer for BRN.
 
  • Like
  • Fire
  • Love
Reactions: 10 users

manny100

Top 20
Hi manny,

The M.2 card has 20 NPUs (equivalent to 4 nodes in later versions, but the geography is a bit different). I think a full-sized Akida 1 could have 1024 NPUs, and this could be ganged together with other Akida 1s, so the M.2 is simply a demonstrator. For a mass commercial market (> 1 million units), it could all be accommodated on a single chip, taking advantage of Akida's multitasking abilities. How good is that? Multitasking on a massively parallel neuromorphic processor.

https://shop.brainchipinc.com/products/akida-m-2-card

Akida M.2 Card, powered by AKD1000 IC with an ARM M.4 CPU plus 20 Akida 1.0 event-based processing NPUs in a mesh. Akida M.2 AKD1000 Card accelerates CNN-based neural network models using BrainChip’s ultra energy-efficient, and purely digital, event-based processing architecture.

  • Form factor: M.2 2260, B+M Key and E Key available
  • Host interface: PCIe PHY 2-lane
  • Memory interface: LPDDR4 via DMA
  • CPU: 32bit ARM M.4
  • NPU: 20 x Akida 1 Neuron mesh
  • Peak INT8 GOPs: 1.5 TOPs
  • On-chip memory: 8MB high-speed near compute SRAM
  • Clock frequency: 300MHz
  • Operating temperature: 0 – 70°C
  • Thermal solution: no fan or heatsink required
  • Typical application power: 1W

As you say, the advantage for Kevin's project is that the M.2 is fully working COTS with ARM processor and comms interface on chip - plug, train, configure and play, and training is a breeze. I think the ease of training is something which will be of interest to olde worlde CNN AI engineers/programmers.

What this illustrates is that there is a potential market for the clunky old Akida 1 IP, because there will be a price differential for Akida 2 IP, and a much greater price differential for Akida 3. Those later models have higher precision and lower latency, but take comparatively larger slices of wafer real estate per chip.

The other thing is that our very good friends at MF have always cited the big budgets of the tech giants for developing AI as a downer for BRN.
Thanks Dio, very informative.
Depending on customers' needs, AKIDA 1000 may well be all that is necessary.
This year should be interesting, with AKD1500 samples available for clients since at least Dec '25 (see the Dec quarterly report).
In late 2026 we might see AKD2500 samples available for early access.
Looking at Kevin's last couple of posts, it appears he is testing Symphony - which was designed for finance - as a tool for security.
If this works, finance companies using both Symphony and Watson would save huge $$, and IBM's moat would be widened.
IBM could also attract those using, say, Symphony for finance but not Watson for security to switch.
Wait and see.
 
  • Like
Reactions: 5 users

Rach2512

Regular
 
  • Like
  • Fire
  • Wow
Reactions: 4 users

Rach2512

Regular
Kevin likes this too.

 
  • Like
  • Fire
Reactions: 10 users

IloveLamp

Top 20
Apologies if posted already

1000020342.jpg
 
  • Like
  • Fire
  • Love
Reactions: 9 users

Frangipani

Top 20
While @ChrisBRN discovered the logos of two new OEM Integration Partners that had appeared on the BRN website overnight (Gbrain and Trusted Semiconductor Solutions), it turns out that the logo of one of our Solutions Enablement Partners has just as silently disappeared: Deep Perception.

Have a look: The Deep Perception logo used to be right there, between the Neurobus and RTX logos:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-478155

C6EF90BA-3902-4F13-958E-B94F5E25B648.jpeg



And now it’s gone:


4EA29ECC-5C3D-4829-8640-F8A402E4CC03.jpeg


Hard to believe, I know, especially given that BrainChip posted the CES 2026 BrainChip & Deep Perception joint demo video only just over a week ago.
(https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-483083)

While it struck me at the time as odd that it was Kurt Manninen who was featured in that demo video - unlike in the BrainChip & HaiLa and Quantum Ventura demo videos, where their own staff would explain what they had come up with - I simply assumed whoever was supposed to be representing Deep Perception had fallen sick or was unable to attend CES for other reasons and couldn't be replaced at short notice. After all, they were a very small company, registered at the CEO's residential address, if I recall correctly.

The LinkedIn profile of said CEO, Chris Clason, now shows that he was Co-Founder and CEO of Deep Perception between September 2023 and January 2026 and is currently “Open to Work”.

And the other Co-Founder and CTO Alex Witt apparently left Deep Perception sometime in November and started working for AWS (Amazon Web Services) in December.

Also, the Deep Perception website (http://deepperception.ai/) is no longer active, and neither is the company’s GitHub page (“This organization was marked as archived by an administrator on Feb 16, 2026. It is no longer maintained.”)

As I couldn’t find any info about an acquisition, the most likely explanation sadly is that the small Austin-based company must have folded.


35366AE0-C672-4941-B88B-8F570F2AF53E.jpeg

02D9CE61-72AF-43AD-A295-696561BD8343.jpeg




9C21F404-7C72-433F-92D4-6AFA0FCFF825.jpeg





6B77C952-11DF-4694-BE6E-D42BB721F59A.jpeg


The GitHub repo “gst-aruco-detector” - updated on 26 November 2025 - must have been developed for the Raytheon Autonomous Vehicle Competition, whose 2025/26 round “Operation Touchdown” is sponsored by BrainChip.



Live Partner Demos


“Visual Computing Pipeline
Full compute pipeline using the AKD1000 for drones and mobile devices.”
 
  • Like
  • Sad
  • Wow
Reactions: 5 users