BRN Discussion Ongoing

jtardif999

Regular
Could that update on GitHub possibly be connected to our so far rather secretive partner MulticoreWare, a San Jose-headquartered software development company?

Their name popped up on our website under “Enablement Partners” in early February without any further comment (https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-450082).

Three months later, I spotted MulticoreWare’s VP of Sales & Business Development visiting the BrainChip booth at the Andes RISC-V Con in San Jose:

“The gentleman standing next to Steve Brightfield is Muhammad Helal, VP Sales & Business Development of MulticoreWare, the only Enablement Partner on our website that to this day has not officially been announced.”

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-459763


But so far crickets from either company…

MulticoreWare still doesn’t even list us as a partner on their website:


View attachment 93744

Nor does BrainChip get a mention anywhere in the 11 December article below, titled “Designing Ultra-Low-Power Vision Pipelines on Neuromorphic Hardware - Building Real-Time Elderly Assistance with Neuromorphic hardware”, although “TENNs” is a giveaway to us that the neuromorphic AI architecture referred to is indeed Akida.

What I find confusing, though, is that this neuromorphic AI architecture should consequently be Akida 2.0, given that the author is referring to TENNs, which Akida 1.0 doesn’t support. But then of course we do not yet have Akida 2.0 silicon.

However, at the same time it sounds as if the MulticoreWare researchers used physical neuromorphic hardware, which means it must have been an AKD1000 card:

“In the above demo, we have deployed a complete vision pipeline running seamlessly on a Raspberry Pi with the neuromorphic accelerator attached via the PCIe slot, demonstrating portability and practical deployment and validating real-time, low-power AI at the edge.”

By the way, also note the following quote, which helps to explain why adopting neuromorphic technology takes so much longer than it would if it were a simple plug-and-play solution:

“Developing models for neuromorphic AI requires more than porting existing architectures […] In short, building for neuromorphic acceleration means starting from the ground up, balancing accuracy, efficiency, and strict design rules to unlock the promise of real-time, ultra-low-power AI at the edge”




View attachment 93741

December 11, 2025

Author
Reshi Krish is a software engineer in the Platforms and Compilers Technical Unit at MulticoreWare, focused on building ultra-efficient AI pipelines for resource-constrained platforms. She specializes in optimizing and deploying AI across diverse hardware environments, leveraging techniques like quantization, pruning, and runtime optimization. Her work spans optimizing linear algebra libraries, embedded systems, and edge AI applications.

Introduction: Driving Innovation Beyond Power Constraints

As AI continues to advance at an unprecedented pace, its growing complexity often demands powerful hardware and high energy budgets. When deploying AI at the edge, however, we look for ultra-efficient hardware that runs on as little energy as possible, and this introduces its own engineering challenges. Arm Cortex-M microcontrollers (MCUs) and similar low-power processors have tight compute and memory limits, making optimizations like quantization, pruning, and lightweight runtimes critical for real-time performance. At the same time, these challenges are inspiring innovative solutions that make intelligence more accessible, efficient, and sustainable.
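As a rough illustration of that workflow, here is a minimal sketch of post-training int8 quantization with TensorFlow Lite; the Keras model and the representative calibration images are placeholders for illustration, not artefacts from this project:

```python
# Minimal post-training int8 quantization sketch (TensorFlow Lite).
# `model` (a trained tf.keras model) and `rep_images` (an iterable of
# sample input arrays) are placeholders, not from the article.
import tensorflow as tf

def quantize_int8(model, rep_images):
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    # Calibration data lets the converter choose int8 scales/zero-points.
    converter.representative_dataset = lambda: ([img] for img in rep_images)
    # Force full-integer kernels end to end, as constrained targets require.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()  # serialized .tflite flatbuffer (bytes)
```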

At MulticoreWare, we’ve been exploring multiple paths to push more intelligence onto these constrained devices. This exploration led us to neuromorphic AI architectures and specialized neuromorphic hardware, which provide ultra-low-power inference by mimicking the brain’s event-driven processing. Seeing the novelty of this framework, we aimed to combine it with our deep MCU experience to open new ways of delivering always-on AI across the medical, smart home, and industrial segments.

Designing for Neuromorphic Hardware

The neuromorphic AI framework we identified utilizes a novel type of neural network: Temporal Event-based Neural Networks (TENNs). TENNs employ a state-space architecture that processes events dynamically rather than at fixed intervals, skipping idle periods to minimize energy and memory usage. This design enables real-time inference on milliwatts of power, making it ideal for edge deployments.
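BrainChip has not published TENNs' internals, so the following is only a toy discrete state-space recurrence, sketched to illustrate the event-driven idea: the state advances as x ← Ax + Bu only when an input event arrives, and silent gaps are crossed in closed form rather than stepped through.

```python
# Toy event-driven state-space cell (illustration only, NOT the actual
# TENNs implementation; A, B, C are arbitrary example matrices).
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])  # state transition
B = np.array([[1.0], [0.5]])            # input projection
C = np.array([[1.0, -1.0]])             # readout

def run_events(events):
    """events: {timestep: input value}; idle timesteps cost no update."""
    x = np.zeros((2, 1))
    out, last_t = {}, 0
    for t in sorted(events):
        # Cross the silent gap in one shot: x <- A^(t - last_t) x
        x = np.linalg.matrix_power(A, t - last_t) @ x
        x = x + B * events[t]           # inject the event
        out[t] = (C @ x).item()
        last_t = t
    return out

print(run_events({3: 1.0, 10: -0.5}))
```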

Developing models for neuromorphic AI requires more than porting existing architectures. The framework we utilized mandates full int8 quantization and adherence to strict architectural constraints. Only a limited set of layers is supported, and models must follow rigid layer sequences for compatibility. These restrictions often necessitate significant redesigns, including modifying the model architecture, replacing unsupported activations (e.g., LeakyReLU → ReLU), and simplifying branched topologies. Many deep learning features, such as multi-input/output models, are also not supported, requiring developers to implement workarounds or redesign models entirely.
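For example, a model destined for such a compiler typically ends up as a plain sequential stack of common layers, with ReLU in place of LeakyReLU and no branching; the layer set below is an assumption for illustration, not the framework's published specification:

```python
# Hypothetical "constraint-friendly" model: single input, single output,
# strictly sequential topology, ReLU activations only. The supported-layer
# set is an assumption; consult the target SDK for the real list.
from tensorflow import keras
from tensorflow.keras import layers

def build_compatible_classifier(num_classes=4):
    return keras.Sequential([
        keras.Input(shape=(96, 96, 3)),
        layers.Conv2D(16, 3, strides=2, padding="same"),
        layers.ReLU(),                    # LeakyReLU replaced with ReLU
        layers.Conv2D(32, 3, strides=2, padding="same"),
        layers.ReLU(),
        layers.GlobalAveragePooling2D(),  # no residual/branched paths
        layers.Dense(num_classes),
    ])
```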

In short, building for neuromorphic acceleration means starting from the ground up, balancing accuracy, efficiency, and strict design rules to unlock the promise of real-time, ultra-low-power AI at the edge.

Engineering Real-Time Elderly Assistance on the Edge

To demonstrate the potential of neuromorphic AI, we developed a computer-vision-based elderly-assistance system capable of detecting critical human activities such as sitting, walking, lying down, or falling, all in real time on extremely low-power hardware.

The goal was simple yet ambitious: to deliver a fully on-device, low-power AI pipeline that continuously monitors and interprets human actions while maintaining user privacy and operational efficiency, even in resource-limited environments.

However, due to the framework’s architectural constraints, certain models, such as pose estimation, could not be fully supported. To overcome this, we adopted a hybrid approach combining neuromorphic and conventional compute resources:
  • Neuromorphic Hardware: Executes object detection and activity classification using specialized models.
  • CPU (TensorFlow Lite): Handles pose estimation and intermediate feature extraction.
[Image: ai-inferencing-block.png]

This design maintained functionality while ensuring power-efficient inference at the edge. Our modular vision pipeline leverages neuromorphic acceleration for detection and classification, with pose estimation running on the host device.
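A rough sketch of how the host can orchestrate such a split is shown below; `npu_detect` and `npu_classify` are hypothetical stand-ins for the accelerator's SDK calls, while the pose stage uses the real TensorFlow Lite interpreter API:

```python
# Hybrid pipeline sketch: detection/classification on the neuromorphic
# accelerator (hypothetical `npu_detect`/`npu_classify` stand-ins), pose
# estimation on the host CPU via TensorFlow Lite.
import numpy as np
import tensorflow as tf

pose = tf.lite.Interpreter(model_path="pose_estimation.tflite")
pose.allocate_tensors()
inp = pose.get_input_details()[0]
out = pose.get_output_details()[0]

def estimate_pose(crop):
    # Resizing/normalisation omitted for brevity.
    pose.set_tensor(inp["index"], crop.astype(np.float32)[None, ...])
    pose.invoke()
    return pose.get_tensor(out["index"])[0]

def process_frame(frame, npu_detect, npu_classify):
    alerts = []
    for box in npu_detect(frame):           # person detection on the NPU
        crop = frame[box.y0:box.y1, box.x0:box.x1]
        keypoints = estimate_pose(crop)     # pose on the host CPU
        activity = npu_classify(keypoints)  # activity class on the NPU
        if activity in ("fall", "help_gesture"):
            alerts.append((box, activity))
    return alerts
```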


View attachment 93742
View attachment 93743

Results: Intelligent, Low-Power Assistance at the Edge

In the above demo, we have deployed a complete vision pipeline running seamlessly on a Raspberry Pi with the neuromorphic accelerator attached via the PCIe slot, demonstrating portability and practical deployment and validating real-time, low-power AI at the edge. This system continuously identifies and classifies user activities in real time, instantly detecting events such as falls or help gestures and triggering immediate alerts. All required processing was performed entirely at the edge, ensuring privacy and responsiveness in safety-critical scenarios.

The neuromorphic architecture consumes only a fraction of the power required by conventional deep learning pipelines, while maintaining consistent inference speeds and robust performance.

Application Snapshot:
  • Ultra-low power consumption
  • Portable Raspberry Pi + neuromorphic hardware setup
  • End-to-end application running on edge hardware

Our Playbook for Making Edge AI Truly Low-Power

MulticoreWare applies deep technical expertise across emerging low-power compute ecosystems, enabling AI to run efficiently on resource-constrained platforms. Our approach is summarized in the diagram below:

[Image: Frame-4.jpg]

Broader MCU AI Applications: Industrial, Smart Home & Smart City

With healthcare leading the shift toward embedded-first AI, smart homes, industrial systems, and smart cities are rapidly following. Applications like quality inspection, predictive maintenance, robotic assistance, home security, and occupancy sensing increasingly require AI that runs directly on MCU-class, low-power edge processors.

MulticoreWare’s real-time inference framework for Arm Cortex-M devices supports this transition through highly optimized pipelines, including quantization, pruning, CMSIS-NN kernel tuning, and memory-tight execution paths tailored for constrained MCUs. This enables OEMs to deploy workloads such as wake-word spotting, compact vision models, and sensor-level anomaly detection, allowing even the smallest devices to run intelligent features without relying on external compute. One of these steps, magnitude pruning, is sketched below.
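A generic sketch of magnitude pruning with the TensorFlow Model Optimization toolkit (the sparsity target and step counts are illustrative, not tuned for any particular MCU workload):

```python
# Generic magnitude-pruning sketch using tensorflow_model_optimization.
# The 80% sparsity target and step counts are illustrative only.
import tensorflow_model_optimization as tfmot

def prune_model(model, train_ds, steps):
    schedule = tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.0, final_sparsity=0.8,  # zero out 80% of weights
        begin_step=0, end_step=steps)
    pruned = tfmot.sparsity.keras.prune_low_magnitude(
        model, pruning_schedule=schedule)
    pruned.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    pruned.fit(train_ds, epochs=2,
               callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])
    # Strip pruning wrappers before conversion/deployment.
    return tfmot.sparsity.keras.strip_pruning(pruned)
```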

Conclusion: Redefining Intelligence Beyond the Cloud

The convergence of AI and embedded computing marks a defining moment in how intelligence is designed, deployed, and scaled. By enabling lightweight, power-efficient AI directly at the edge, MulticoreWare empowers customers across healthcare, industrial, and smart city domains to achieve faster response times, higher reliability, and reduced energy footprints.

As the boundary between compute and intelligence continues to fade, MulticoreWare’s Edge AI enablement across MCU and embedded platforms ensures that our partners stay ahead, building the foundation for truly decentralized, real-time intelligence beyond the cloud.


To learn more about MulticoreWare’s edge AI initiatives, write to us at info@multicorewareinc.com.




View attachment 93745

View attachment 93746
Fits like a glove with Brightfield’s interview. IMO a licence agreement with MulticoreWare is imminent.
 
  • Like
  • Fire
  • Thinking
Reactions: 9 users

Andy38

The hope of potential generational wealth is real
So I was just chilling out on Waiheke Island for the Christmas break.
The tunes are playing, and as I watch the sun go down, Echo Beach comes on.
So a little reflecting going on.
It seems that January for the past 5 or so years has been good for holders.
Hopefully this January will be the best ever 🥰
Cable Bay with a vino in hand?
 
  • Fire
  • Like
Reactions: 2 users
Interesting chart. MACD bullish divergence.
Four touches of the highs of the downtrend line since Oct '25, followed by a break of the trendline today.
The trend is still down, but the MACD divergence indicates a momentum shift may be beginning.
It's wait and see over the next few days...
View attachment 93875
Maybe in the new year things might change for the better. At the moment the trading is all just games: 1 share here and one there, then 20 shares. It's just rubbish trading by bots, I guess.
 
  • Like
Reactions: 3 users
Come on BrainChip,
A cent a day until the new year, then get a rocket up ya
 
  • Like
  • Haha
Reactions: 6 users

manny100

Top 20
Merry Chipmas
View attachment 93866

Traditional AI computing relies on machine learning and deep learning methods that demand significant power and memory for both training and inference.
Our researchers have developed a patented neuromorphic computing architecture based on field-programmable gate arrays (FPGAs). This architecture is designed to be parallel and modular, enabling highly efficient, brain-inspired computing directly on the device. Compared to existing techniques, this approach improves energy-per-inference by ~1500 times and latency by ~2400 times.
This paves the way for a new generation of powerful, real-time AI applications in energy-constrained environments.
Know more in the #patent- bit.ly/498XlwC
Inventors: Dhaval Shah, Sounak Dey, Meripe Ajay Kumar, Manoj Nambiar, Arpan Pal
Tata Consultancy Services
#TCSResearch #AI #NeuromorphicComputing

View attachment 93867 View attachment 93869
View attachment 93870
Thanks for posting that. Sounds like they may have been into the AKIDA development hub? Going via FPGA speeds up time to prototype.
 
  • Like
Reactions: 3 users

Wags

Regular
Merry Christmas Chippers.
Stay safe and positive.
Apparently 2026 is our year.
Sincere thank you to all of the marvellous researchers and contributors.
 
  • Like
  • Love
  • Fire
Reactions: 18 users

HarryCool1

Regular
Merry Christmas everyone, cheers!

 
  • Like
  • Haha
  • Love
Reactions: 14 users
So no Christmas present from our mate Sean this year.
I didn't think that after years of disappointment he would redeem himself and present us all with a big fat contract.
What was I thinking!!!!!

But Maybe a New Years Gift?

We can only hope.

Come on BrainChip, just 1 cent a day till New Year's
 
  • Like
  • Fire
Reactions: 2 users

Diogenese

Top 20
Thanks for posting that. Sounds like they may have been into the AKIDA development hub? Going via FPGA speeds up time to prototype.
Hi manny,

Take this with a grain of salt. It's just my (low salt) postprandial idle speculation.

TCS claim reduced latency for their FPGA NN. They have designed a purpose-built FPGA rather than using a COTS FPGA, i.e., they have actual NPUs built into the FPGA with a switchable interconnect fabric providing hardware connexions for the configuration of the NPUs into layers. I think this is different from Akida in that Akida's interconnexion fabric is basically fixed and acts like a packet-switch highway, with the NPUs being electronically configured by having destination addresses for their output data. (Renesas also have a reconfigurable arrangement.)
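To make the packet-switch idea concrete, here's a toy software model (my illustration only, nothing like the real silicon): each NPU simply tags its output with a configurable destination address, so re-routing the network means rewriting addresses, not rewiring.

```python
# Toy packet-switched NPU fabric (illustration only, not Akida's design).
from collections import deque

class NPU:
    def __init__(self, npu_id, dest_id, fn):
        self.npu_id, self.dest_id, self.fn = npu_id, dest_id, fn
    def fire(self, payload):
        # Output packet carries a configurable destination address.
        return {"dest": self.dest_id, "data": self.fn(payload)}

def run(npus, packets):
    mesh = {n.npu_id: n for n in npus}
    queue, results = deque(packets), []
    while queue:
        pkt = queue.popleft()
        target = mesh.get(pkt["dest"])
        if target is None:            # address outside the mesh: final output
            results.append(pkt["data"])
        else:
            queue.append(target.fire(pkt["data"]))
    return results

# Reconfiguring the "layers" = rewriting dest addresses, no rewiring.
npus = [NPU(0, 1, lambda x: x + 1), NPU(1, 99, lambda x: x * 2)]
print(run(npus, [{"dest": 0, "data": 3}]))  # -> [8]
```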

My guess is that they believe this hardware configuration provides faster transmission than the packet switched version.

TCS have designed a dedicated NN FPGA with switchable hardware interconnexions between the NPUs.

US12314845B2 Field programmable gate array (FPGA) based neuromorphic computing architecture 20211014

[Image: US12314845B2, FIG. 1]


[0027] The FIG. 1 illustrates a generic functional system of a neuromorphic FPGA architecture, wherein the plurality of neurons are arranged in plurality of layers in a modular and parallel fashion. The basic component of the neuromorphic FPGA architecture is a bio-plausible high-performance neuron. Each neuron among the plurality of neurons is interconnected with other neurons of a backward or a forward layer only through a plurality of synapses in multiple layers, and each of the neuron is mutually independent. With reference to the FIG. 1 , the plurality of neurons is arranged in the plurality of layers, wherein the neurons in the first layer are represented as a neuron-11 ( 106 ), a neuron-12 ( 108 ) and a neuron-1N ( 110 ) (till a number N). Further neurons in the second layer are represented as a neuron-21 ( 112 ), a neuron-22 ( 114 ) and a neuron-2N ( 116 ) (till a number N). The neuromorphic FPGA architecture can comprise several such layers that can go up to a number (N), wherein the neurons in the Nth layer are represented as a neuron-N1 ( 118 ), a neuron-N2 ( 120 ) and a neuron-NN ( 122 ).


It looks like the arrangement of the NPUs is somewhat constrained by the physical layout and may be less flexible than the Akida arrangement. This would necessitate a significantly larger number of NPUs due to the allocation of NPUs to specific layers. This increases silicon footprint and reduces the number of chips per wafer, increasing the cost per chip. (Again, this is only my assessment.)

If there is a significant reduction in latency, the additional cost may be justified in cases requiring the lower latency.

The design of the NPUs could still include elements of the Akida layout (TENNs) minus the packet address header which is set during configuration.
 
  • Like
  • Thinking
  • Fire
Reactions: 10 users


manny100

Top 20
Thanks Dio, cheers
 
  • Like
Reactions: 2 users
That was a great podcast. I really like Steve; he communicates very clearly, he knows his stuff, and he presents as a great ambassador for BrainChip.

Clearly, Sean's IP-only approach was wrong; a combined IP/Chip approach has now proven to be the best path forward. Who told us that? That's correct: our partners and early customers. They recognised that the financial risk of outlaying tens of millions of dollars on IP blocks within their own products, at this early stage of neuromorphic chip technology, was and still is too great a risk... hence AKD 1000 and AKD 1500 are very, very relevant.

I see this acknowledgement by the company as a positive step. No arrogance here, fantastic!!

If we wish to succeed, we must always be open in our thinking and willing to adapt at short notice. Like I and a number of other long-termers mentioned at the time, the claim that AKD 1000 was too narrow was absolute bullshit. Yes, we were short on funding, BUT the move to an ARM business model was premature and has potentially cost us a few years in progress... purely my self-centred opinion and my biased support for Peter and Anil. AKD 1000 was and will always be the masterstroke that set BrainChip on the road to success, despite taking 4 years longer than I had quietly hoped for.

Thanks for your input over the last year, Manny. Have a nice Christmas mate, God bless.
Tech/Chris 🎄🎄👍
Well, I hope Sean gets told so at the AGM if the runs aren't on the board.
 

TECH

Regular
Well, I hope Sean gets told so at the AGM if the runs aren't on the board.

I can assure you that Sean isn't going anywhere unless he decides to throw in the towel himself. The BOD is clearly happy with how he, along with the key staff he has employed over the last 4 years, has progressed. Many holders either can't see or don't understand how our company has now reached the point where we can look clients and potential clients in the eye, knowing that we have all the structures in place to engage confidently as an IP/Chip company.

We have the software support, engineering support, hardware availability, and the products and documentation for developers to feel at ease. Neuromorphic technology, through continual education, is really starting to hit its stride. We have NEVER BEEN POSITIONED any better than we are now, and a lot of that goes to Sean and his business strategy. Yes, the sole IP route was possibly wrong, meaning the timing and our structures weren't ready for that leap, but come the second half of 2026 and throughout 2027 I personally expect to see a genuine "sales explosion" occur.

I believe that when a tape-out is confirmed, it will mean a client has committed to about a million chips. Roll on AKD 2.0.

I love Akida technology, but sadly many others are too afraid to say that.

💕 AKD Tech.
 
  • Like
  • Love
  • Fire
Reactions: 13 users

CHIPS

Regular
Post found on X.com


[Image: screenshot of the X.com post]
 
  • Like
  • Thinking
  • Fire
Reactions: 11 users

manny100

Top 20
Hi Tech, agree with your sentiments. The BOD granted Sean over 7.5 million RSUs in May '25, which are effectively 'golden handcuffs' ensuring he stays at least until the business-building phase moves to sustained growth and recurring revenue.
Incidentally, a small but important piece of the build is the set of models on GitHub to get developers started.
There are plenty there, and they are "Designed for developers, researchers, and AI enthusiasts, these ready-to-use models make it easier than ever to explore, build, and innovate with the Akida solution."
They are starters/helpers only, and the accuracy figures quoted can obviously be improved considerably by developers.
It is an example of BrainChip leaving 'no stone unturned'.
 
  • Like
Reactions: 6 users

DK6161

Regular
Good morning Chippers,

Just a quick thank you to all; the collective sharing of information once again has been vast and informative.

Wishing all an enjoyable break & a prosperous new year.

Regards,
Esq.
You're very welcome 🤗.
It's the least I can do to put pressure on the company.
💕
 
  • Haha
Reactions: 1 user
Merry Xmas all. Day off TSEX for Xmas. Will catch up on Boxing Day while watching the cricket, while we belt the Poms into submission. Tap out, boys. (Sorry Pom - nah, not really)

SC
 
  • Love
Reactions: 1 user

Guzzi62

Regular
You're very welcome 🤗.
It's the least I can do to put pressure on the company.
💕
You can't put any pressure on the company, dude, keep on dreaming :ROFLMAO:

Space cadet!

People who have done their DD know that BRN is on the right track, thanks to the hard work of the CEO and his crew.
 
  • Like
Reactions: 3 users
You can't put any pressure on the company, dude, keep on dreaming :ROFLMAO:

Space cadet!

People who have done their DD know that BRN is on the right track, thanks to the hard work of the CEO and his crew.
No doubt. Done about 12 years of DD myself, but it wouldn't be near as good without the help and camaraderie of those here. Thanks again everyone, and Merry Xmas.

SC
 
  • Like
  • Love
Reactions: 4 users