BRN Discussion Ongoing

CHIPS

Regular
  • Like
Reactions: 2 users
Just managed to skim some of the flurry of posts today and nice to see an Ann at least.

The 3 that jump out at me from memory, who have been already linked to the 1500 as partners:

Onsor.
Andes.
HaiLa.

We know if all goes well for Onsor that they want to be hitting the mkt with a product next year.

Do the 3 above go with IP or is it easier / better for them to have physical 1500s?
 
  • Like
  • Fire
  • Love
Reactions: 14 users
Hi SC,

That's an interesting thought. Removing the ARM Cortex from 5 Akida 1s would remove the cost of the 5 ARM licences. The SNAP Card already has a couple of ARM processors, so one of them could be pressed into service for configuring the 1500s.

https://bascomhunter.com/deg/digita...c-processors/asic-solutions/3u-vpx-snap-card/

FPGA PROCESSOR (PRE/POST PROCESSING)
  • Zynq US+ Quad Core ARM A53
  • Zynq US+ Dual Core ARM R5
  • 930,300 FPGA Logic Cells

Bascom Hunter’s SNAP Card (Spiking Neuromorphic Advanced Processor) is a high-performance 3U OpenVPX AI/ML processor built for rugged, mission-critical environments. The card is SOSA-aligned, HOST-compatible, and conduction cooled. It combines a Xilinx UltraScale+ RFSoC FPGA with five BrainChip AKD1000 spiking neuromorphic processors to achieve the best of both signal processing and neuromorphic computing – featuring a total of 6 million neurons and 50 billion synapses across the card. Unlike traditional machine learning accelerators such as GPUs or TPUs, neuromorphic processors are designed to mimic the biological efficiency of the human brain, allowing the SNAP Card to run multiple ML models in parallel at exceptionally low power — just 1W per model — without sacrificing speed or accuracy while the FPGA enhances input/output operations and performs any desired signal processing tasks. This combination in a rugged, interoperable, and military-hardened package makes Bascom Hunter’s SNAP Card the ideal solution for the concurrent and parallel processing of real-time, multi-modal, and multi sensor data on autonomous, unattended, denied, or otherwise battery constrained military systems.
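Purely as a sanity check on the brochure's headline numbers, here's a tiny arithmetic sketch deriving the card-level totals from the per-chip AKD1000 figures quoted further down in the spec list (1.2M neurons and 10B synapses per chip, "1W per model"). No BrainChip SDK or real API is involved; the variable names are made up for illustration.

```python
# Hedged sketch: derive the SNAP Card totals from per-chip AKD1000 figures.
AKD1000_PER_CARD = 5
NEURONS_PER_CHIP = 1_200_000           # per the spec list below
SYNAPSES_PER_CHIP = 10_000_000_000     # per the spec list below
WATTS_PER_MODEL = 1                    # brochure's "1W per model" figure

total_neurons = AKD1000_PER_CARD * NEURONS_PER_CHIP        # 6 million
total_synapses = AKD1000_PER_CARD * SYNAPSES_PER_CHIP      # 50 billion
parallel_model_power = AKD1000_PER_CARD * WATTS_PER_MODEL  # ~5 W for 5 models

print(total_neurons, total_synapses, parallel_model_power)
# → 6000000 50000000000 5
```

The per-chip numbers do multiply out exactly to the "6 million neurons and 50 billion synapses" the brochure claims for the card.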


FN: Gate arrays, especially FPGAs, have come a long way from the original arrays of AND and OR (NAND & NOR) gates; they now include processors along with roughly a million logic cells, and there are other advanced special-purpose FPGAs.
Bascom Hunter is doubling its production facilities in Baton Rouge. The news is 3 days old; can't find any details as it's behind a paywall.

https://www.opportunitylouisiana.go...uge-footprint-with-a-new-fabrication-facility

Wonder why?

SC
 
Last edited:
  • Like
  • Fire
Reactions: 19 users

Diogenese

Top 20
Just managed to skim some of the flurry of posts today and nice to see an Ann at least.

The 3 that jump out at me from memory, who have been already linked to the 1500 as partners:

Onsor.
Andes.
HaiLa.

We know if all goes well for Onsor that they want to be hitting the mkt with a product next year.

Do the 3 above go with IP or is it easier / better for them to have physical 1500s?
Aha! Thereby hangs a tail - "there by what hangs a tail?" you may well ask.

The cost of implementing an IP licence would exclude most small/medium enterprises.
 
  • Like
  • Fire
Reactions: 6 users

Taproot

Regular
Just managed to skim some of the flurry of posts today and nice to see an Ann at least.

The 3 that jump out at me from memory, who have been already linked to the 1500 as partners:

Onsor.
Andes.
HaiLa.

We know if all goes well for Onsor that they want to be hitting the mkt with a product next year.

Do the 3 above go with IP or is it easier / better for them to have physical 1500s?
+ VVDN
Future VVDN and BrainChip boxes could use BrainChip’s second-gen AKD1500, which is more compact and efficient, Telson said.

+ Bascom Hunter
In late December, BrainChip signed a US$100,000 contract with U.S. defence contractor Bascom Hunter, for the sale and support of AKD1500 chips for full scale evaluation of commercial products, which will provide significant improvements over traditional CPU, GPU, and TPU processors

More than 2 years since AKD1500 Version 1
Another 12 months for Version 2.
Certainly does test one's patience.
 
  • Like
  • Love
Reactions: 15 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Just a thought, but is it possible that the support for the 128 MACs per node is not specifically to replace their original NPU design, but to integrate with sensors that are not neuromorphic in nature, such as Prophesee's event-based vision cameras? This is just speculation on my part, but maybe it is to support the conversion of CNN to SNN in hardware without the need to capture and convert the models offline using their SDK tools. It would also have the direct benefit of allowing customers to use existing models without doing their own conversions.


Hi JD,

For what it's worth, I asked ChatGPT for a neutral take on this.

What I thought was interesting was that it wove Anduril's headset into the response. I didn't prompt it to, but earlier in the day, in a separate chat, I had queried ChatGPT about boards and integration possibilities, and whether an Akida 1500 + Arm Cortex-M85 module (as an example) could power something like Anduril’s helmet.




ChatGPT's Response


What “supports 128 MACs per node” most likely means

  • Each Akida node has a local 4×4 MAC micro-engine (128 MACs) it can invoke for certain ops.
  • Those MACs co-exist with the spiking compute (CNP/FNP or equivalent). They’re there so models can include non-spiking layers/ops without leaving the chip.

Why add MACs if you have an SNN?

Because many modern edge models are hybrid. MACs cover things SNNs don’t excel at (or that are simpler to drop in as-is):
  • Conv/pointwise (1×1) / depthwise conv blocks
  • Linear/projection layers (e.g., classifier heads, QKV in tiny attention)
  • Normalisation / pooling / residual glue
  • Pre/post-processing (e.g., short FIRs, feature projections)
  • Direct reuse of existing INT4/INT8 CNN chunks without full SNN conversion

What a hybrid pipeline might look like in a helmet

Sensors
  • Front/rear frame cameras → light MAC preproc (resize/1×1 conv).
  • Event camera/radar/audio → feed SNN/TENNs directly.
  1. Perception
    • SNN handles temporal/gesture/track (event streams).
    • MAC handles small CNN blocks or projection heads.
  2. Fusion & decisions on the host MCU (e.g., Cortex-M85).

Bottom line

  • The “128 MACs per node” doesn’t signal the end of Akida’s neuromorphic core. It signals a pragmatic hybrid: keep the spiking engine for temporal, sparse, event-driven strengths, and use local MACs to natively run conventional layers (conv/linear/norm) and reuse existing model chunks.
  • That combo generally improves accuracy, eases porting, and lowers total latency/power versus forcing everything into SNN or everything into classic CNN on a separate chip.
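The routing logic the response describes can be sketched as a toy dispatcher: frame-based inputs take a conventional MAC/CNN path, event-based inputs take the spiking path, and a host step fuses the results. Everything here is illustrative; the function names (`route_input`, `snn_infer`, `mac_infer`, `fuse`) and the scoring are invented stand-ins, not BrainChip or Arm APIs.

```python
def snn_infer(events):
    # stand-in for the spiking engine: count active events as a toy "score"
    return {"path": "snn", "score": sum(1 for e in events if e)}

def mac_infer(frame):
    # stand-in for the per-node MAC block running a small conv/linear layer
    return {"path": "mac", "score": sum(frame)}

def route_input(kind, data):
    # event camera / radar / audio → SNN; frame cameras → MAC path
    return snn_infer(data) if kind == "event" else mac_infer(data)

def fuse(results):
    # host-MCU-style fusion step (e.g. on a Cortex-M85): simple max vote here
    return max(results, key=lambda r: r["score"])

results = [route_input("event", [1, 0, 1, 1]), route_input("frame", [1, 1])]
print(fuse(results))
# → {'path': 'snn', 'score': 3}
```

The point of the sketch is only the shape of the pipeline: two compute paths on one chip, with fusion and decisions left to the host.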
 
  • Like
  • Fire
Reactions: 11 users
+ VVDN
Future VVDN and BrainChip boxes could use BrainChip’s second-gen AKD1500, which is more compact and efficient, Telson said.

+ Bascom Hunter
In late December, BrainChip signed a US$100,000 contract with U.S. defence contractor Bascom Hunter, for the sale and support of AKD1500 chips for full scale evaluation of commercial products, which will provide significant improvements over traditional CPU, GPU, and TPU processors

More than 2 years since AKD1500 Version 1
Another 12 months for Version 2.
Certainly does test one's patience.
Who knows...maybe the 1500 is something Bascom would look at but at the mo they appear to have really expanded and outlined their Akida Snap Card offering details since I looked some time ago.

Comes under their mixed signal product line.

The Mixed Signal Product line is currently focused on VPX based solutions – especially those that require both digital and analog or photonic interfaces. Our VPX Cards include 3rd party VPX cards with Bascom Hunter firmware, custom VPX cards and support electronics such as Mezzanine cards to provide additional functionality to VPX users. Our VPX Chassis include custom built chassis and custom backplanes.




Bascom Hunter’s SNAP Card (Spiking Neuromorphic Advanced Processor) is a high-performance 3U OpenVPX AI/ML processor built for rugged, mission-critical environments. The card is SOSA-aligned, HOST-compatible, and conduction cooled. It combines a Xilinx UltraScale+ RFSoC FPGA with five BrainChip AKD1000 spiking neuromorphic processors to achieve the best of both signal processing and neuromorphic computing – featuring a total of 6 million neurons and 50 billion synapses across the card. Unlike traditional machine learning accelerators such as GPUs or TPUs, neuromorphic processors are designed to mimic the biological efficiency of the human brain, allowing the SNAP Card to run multiple ML models in parallel at exceptionally low power — just 1W per model — without sacrificing speed or accuracy while the FPGA enhances input/output operations and performs any desired signal processing tasks. This combination in a rugged, interoperable, and military-hardened package makes Bascom Hunter’s SNAP Card the ideal solution for the concurrent and parallel processing of real-time, multi-modal, and multi sensor data on autonomous, unattended, denied, or otherwise battery constrained military systems.


APPLICATION

A single SNAP Card deployed on an airborne ISR platform, occupying only one 3U VPX slot, could run up to five complex machine learning algorithms, one on each of its onboard processors, to perform the following missions simultaneously.
  • Automated Target Recognition (ATR) on full motion video feeds and imagery (to include 4k and higher)
  • Real-time detection and identification of threat radars and their acquisition/operating modes
  • Detection of FISINT (Foreign Instrumentation Signature Intelligence) and/or hacking across the airframe’s 1553 communications bus
  • Perform communications analysis to include speech-to-text and foreign language translation of intercepted communications
  • Fuse multiple infrared cameras (e.g., SWIR, MWIR, LWIR) to provide a combined infrared operating picture on the ground
  • Multi-user, simultaneous modulation/demodulation



MEMORY
  • 8 GB PS DDR4-2400 with ECC
  • 8 GB PL DDR4-2400 with ECC
  • ~154 Gbps BW per channel
PROCESSING POWER
  • TOPS: 1.5 - 7.5 (based on number of SNAP processors used)
  • TOPS/W: 0.75 - 3.75 (based on number of SNAP processors used)
  • Latency/Energy: 0.045 ms/µJ

SPIKING NEURAL NETWORK PROCESSORS
  • BrainChip AKD1000 (x5)
  • 1.2M neurons/10B synapses per AKD1000
  • 300x more power efficient than GPU
FPGA PROCESSOR (PRE/POST PROCESSING)
  • Zynq US+ Quad Core ARM A53
  • Zynq US+ Dual Core ARM R5
  • 930,300 FPGA Logic Cells

INTEGRATION SPECIFICATIONS
  • Weight: 1.01 lb.
  • Dimensions: ANSI/VITA 46.0-2019 VPX Baseline Standard (100 mm x 170.60 mm w/ backplane)
  • 10-45 W (est over temp)
  • -40 °C to 85 °C (Rugged, Conduction Cooled)
MEZZANINE CARD (PURCHASED SEPARATELY)
  • USB-C (x2) (direct access to PS/PL Serial Ports)
  • USB 3.0 (keyboard/mouse/etc)
  • Display Port
  • Gigabit Ethernet
  • Micro SD Card

VPX FORM FACTOR
  • SOSA-Aligned 3U OpenVPX
  • Slot Profile: SLT3-PAY-1F1U1S1S1U1U2F1H-14.6.11-n
  • Data Plane (DP) - 40G-KR4, 10G-KR
  • Control Plane (CP) - 10G-KR
  • Expansion Plane (EP) - PCIe Gen3 x8
  • Conduction Cooled 1" Pitch
SOFTWARE & PLATFORM SUPPORT
  • PetaLinux
  • REST API (HTTPS/TLS)
  • MQ (MQTT, Kafka)
  • gRPC
  • DIFI
  • Bascom Hunter AI Software Environment (BASE)
  • Modern WebUI for accessing, configuring, and deploying connected 3U VPX SNAP Cards
  • Capable of configuring multiple SNAP cards across a network
  • Fully Containerized (Docker) and DOD STIG Compliant
  • Allows developers to easily translate standard TensorFlow models into spiking models
  • Allows for users to easily deploy models to each processor on connected SNAP cards
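The spec sheet's two performance ranges are mutually consistent, which is easy to check. A minimal sketch, assuming ~1.5 TOPS per AKD1000 against a fixed ~2 W budget for the neuromorphic section: that 2 W figure is inferred here from the quoted ranges (1.5 TOPS / 0.75 TOPS/W), not stated anywhere in the brochure.

```python
# Hedged sketch: reproduce the spec sheet's TOPS and TOPS/W ranges
# (1.5-7.5 TOPS, 0.75-3.75 TOPS/W for 1-5 SNAP processors in use).
TOPS_PER_PROC = 1.5
SNAP_SECTION_WATTS = 2.0  # inferred: 1.5 TOPS / 0.75 TOPS-per-W

def card_perf(n_procs):
    tops = n_procs * TOPS_PER_PROC
    return tops, tops / SNAP_SECTION_WATTS

print(card_perf(1))  # → (1.5, 0.75)
print(card_perf(5))  # → (7.5, 3.75)
```

Both endpoints of both quoted ranges fall out of the same two assumed constants, which suggests the brochure is scaling a single per-processor figure against a fixed power allocation.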
 
  • Like
  • Fire
  • Love
Reactions: 9 users

Galaxycar

Regular
Loved the earlier "shorts will burn" gifs. Idiots. Today's announcement was management telling the shorters: don't worry, we've got you covered, you don't have to close your 60-plus million shorts yet. A capital raise is coming, and we'll look after you sophisticated investors again and let you close by gifting you cheap shares. What a load of wankers our managers and directors are. They've played all retail investors for fools. Remember, when the AGM comes, what they have done by screwing us all yet again.
 
  • Like
Reactions: 2 users

FJ-215

Regular
In my opinion .. as I understand it ..we will continue to offer licensing. We’re simply taking a two-track approach now.

I think the previous “licensing-only” strategy perfectly fit the spirit of the time, since back then the chip industry was facing severe global shortages ..across all sectors, from automotive to small electronic devices.

Now the situation has stabilized, and demand for physical chips is rising again.

That’s why it makes perfect sense for us to offer both: licensing and hardware. It makes us more flexible and opens more doors ..both with existing and new partners.
What do you think?
Hi @7für7 ,

"That’s why it makes perfect sense for us to offer both: licensing and hardware. It makes us more flexible and opens more doors ..both with existing and new partners.
What do you think?"


The original plan of offering both chips and IP made perfect sense and still does.

When COVID hit, it opened the door for BRN in a way. We had slots at TSMC and gave them up, but not everybody did. That is what led to some of the supply chain issues: auto manufacturers had assembled cars they couldn't deliver, because they had cancelled their chip orders and were put at the back of the queue.

Unfortunately, some years ago, Antonio put us on a merry-go-round and decided we needed to "course correct", change horses mid-stream and put all our eggs in one basket. Now we are "pivoting" into a double backflip (maybe with a half twist and pike) and if we stick the landing, will be back to where we started from. Hopefully facing in the same direction and sprinting for the finishing line!

Edit...... or is that the starting line?
 
  • Like
Reactions: 6 users

7für7

Top 20
I had a conversation in the German forum where a well known user (also here some time ago) wrote this … and I would like to hear your opinion… translated in English

“Good morning. A tape-out is not the same as mass production. The AKD1000 has had a tape-out for years, yet there’s still no mass production, because there are no orders.

The AKD1500 isn’t really an update — it’s a stripped-down version of the AKD1000. The difference is that the AKD1500 can’t run as a standalone chip; it only works in combination with a CPU. That basically cancels out the intended power savings.

So there’s definitely no reason for excitement here. In fact, this announcement mainly shows that BrainChip’s “IP-only” business model isn’t getting them anywhere. It doesn’t appear to have any real commercial background either.

If there were a confirmed order for the AKD1500, it would have to be announced anyway, since it would be price-sensitive information.”
 
  • Sad
  • Love
Reactions: 2 users

7für7

Top 20
Of course I disagreed in some points…lots of points actually…and he responded this:

“Well, it has to get technical again — no other way around it. Overall, a move away from SNNs is becoming apparent. It’s actually a logical step, since there are no algorithms that can properly train SNNs, and at present no one can predict if or when they’ll become available.

This already began with the development of TENNs, which are a CNN-based upgrade. Because the AKD1500 requires a CPU, it can only be used as an accelerator within a CNN. But that then requires the troublesome CNN-to-SNN conversion, which leads to increased latency.

In the CNN field there are plenty of competitive solutions, and power consumption there continues to drop. With the introduction of 6G, the whole situation will be reshuffled anyway, as completely different amounts of data will be processed. Integrating neuromorphic chips into CNNs will always remain a compromise solution. Only a true SNN can really tap into the potential of neuromorphic chips.

But as mentioned, those pesky algorithms… Intel is working on suitable ones with its neuromorphic supercomputer Hala Point, but so far without success — and those resources are far beyond anything BrainChip has access to.

A commercial breakthrough certainly won’t come before 2030, but we’ll probably keep seeing the same old motivational slogans posted here over the next five years.”


Thanks for your opinions
 
Last edited:
  • Like
  • Sad
Reactions: 4 users

manny100

Top 20
Hi JD,

For what it's worth, I asked ChatGPT for a neutral take on this.

What I thought was interesting was that it wove Anduril's headset into the response. I didn't prompt it to, but earlier in the day, in a separate chat, I had queried ChatGPT about boards and integration possibilities, and whether an Akida 1500 + Arm Cortex-M85 module (as an example) could power something like Anduril’s helmet.




ChatGPT's Response


What “supports 128 MACs per node” most likely means

  • Each Akida node has a local 4×4 MAC micro-engine (128 MACs) it can invoke for certain ops.
  • Those MACs co-exist with the spiking compute (CNP/FNP or equivalent). They’re there so models can include non-spiking layers/ops without leaving the chip.

Why add MACs if you have an SNN?

Because many modern edge models are hybrid. MACs cover things SNNs don’t excel at (or that are simpler to drop in as-is):
  • Conv/pointwise (1×1) / depthwise conv blocks
  • Linear/projection layers (e.g., classifier heads, QKV in tiny attention)
  • Normalisation / pooling / residual glue
  • Pre/post-processing (e.g., short FIRs, feature projections)
  • Direct reuse of existing INT4/INT8 CNN chunks without full SNN conversion

What a hybrid pipeline might look like in a helmet

Sensors
  • Front/rear frame cameras → light MAC preproc (resize/1×1 conv).
  • Event camera/radar/audio → feed SNN/TENNs directly.
  1. Perception
    • SNN handles temporal/gesture/track (event streams).
    • MAC handles small CNN blocks or projection heads.
  2. Fusion & decisions on the host MCU (e.g., Cortex-M85).

Bottom line

  • The “128 MACs per node” doesn’t signal the end of Akida’s neuromorphic core. It signals a pragmatic hybrid: keep the spiking engine for temporal, sparse, event-driven strengths, and use local MACs to natively run conventional layers (conv/linear/norm) and reuse existing model chunks.
  • That combo generally improves accuracy, eases porting, and lowers total latency/power versus forcing everything into SNN or everything into classic CNN on a separate chip.
Anduril's Lattice is conventional AI and still requires cloud support; they have partnered with Palantir to alleviate some of the problems resulting from this.
They also do not have on-chip learning capability, which is a fairly decent advantage that BRN has. Anduril still needs retraining.
So it's very possible AKIDA is the superior solution, or works as a hybrid to cover all bases.
 
  • Like
Reactions: 3 users

manny100

Top 20
Are Onsor using the AKIDA 1000 or 1500 chip? Is there any confirmation??
Onsor - according to their website:
2022 - " Identification of potential applications for the technology and the discovery of "epilepsy prediction" as a primary application, marking the official start of the project."
BrainChip announced the AKD1500 reference design was taped out on January 29, 2023.
During 2022, when the epilepsy prediction application was discovered, I presume they would have had to use AKIDA 1000 chips, as the 1500 was not yet available.
"Phase 3 - Preliminary results from the research emerge."
" Phase 3.5 - Preparation of documentation for patent application."
They may have upgraded to the 1500, but that would mean they had to change horses during the process.
 
  • Like
Reactions: 1 users

manny100

Top 20
The title of the presentation suggests that we are chasing Institutions for further presence on our share register:
Institutional Investor Roadshow October 2025
 
  • Like
Reactions: 1 users

FJ-215

Regular
The title of the presentation suggests that we are chasing Institutions for further presence on our share register:
Institutional Investor Roadshow October 2025
What was your first clue?????
 
  • Haha
Reactions: 4 users

manny100

Top 20
  • Like
  • Haha
Reactions: 2 users

FJ-215

Regular
Oh, I assumed you must have missed it. Well done.
Just a thought, maybe we could provide a mechanism to supply them with new shares!!

:sneaky:
 

manny100

Top 20
A poster earlier alerted us that Bascom Hunter were planning to double their Baton Rouge footprint with a new fabrication facility.
Bascom Hunter is doubling its Baton Rouge footprint with a new fabrication facility - LED | Louisiana Economic Development
Unfortunately it's behind a paywall.
Interesting from BH website:
" Bascom Hunter produces three types of Neuromorphic Processors.
The first is the deployment of a neural network into an FPGA. Doing this allows us to take existing COTS hardware and help customers bring existing (or proposed) neural networks into this hardware, speeding up the computation and lowering the power requirements. The second is the deployment of a neural network into custom hardware – typically based on a 3rd party ASIC processor. We can take ASICs produced by the vendor of your choice and put it into a platform and mission specific form factor. For example, into VPX or small-sat form factors. The third type of Neuromorphic processor is intended for our future focused customers looking to engage on cutting edge photonic based neuromorphic processors."
" RAPID DEPLOYMENT OF ACTIONABLE INSIGHTS
FPGAs offer tangible improvements over traditional CPU and GPU processors for both sensor deployments and Machine Learning (ML) tasks. Bascom Hunter excels in partnering with military and intelligence partners to rapidly develop custom FPGA solutions that leverage SoC and neuromorphic processors that tackle some of the toughest challenges facing our nation today."
My bold above.
Neuromorphic Processors - Bascom Hunter
 
  • Like
  • Fire
  • Love
Reactions: 10 users

FJ-215

Regular
The title of the presentation suggests that we are chasing Institutions for further presence on our share register:
Institutional Investor Roadshow October 2025
Hi manny,

All jokes aside, looking at the tech roadmap, we do have some big-ticket items coming up that will require a good amount of fresh cash. The AKD1500 tape-out is one more on top of that.

I don't see any of this as a negative!!!!

The market knows that we have the LDA agreement at our disposal to raise the capital required.

What I want to see is an impressive IP announcement to offset the next LDA call.

Not asking much!
 
  • Like
  • Fire
  • Love
Reactions: 7 users