BRN Discussion Ongoing

Esq.111

Fascinatingly Intuitive.
Good afternoon, Pom down under,

Believe I may have possibly found a new replacement Australian Director for BrainChip's board; just negotiating a competitive salary package through Corn & Forage ...


She's got the smarts.

Back-scratching bovine leads scientists to reassess intelligence of cows | Animal behaviour | The Guardian https://share.google/zj5H2IJmpo849eBsf

You may all thank me later.

*Damn it, she's Austrian ... Sure we could get her a fake passport & residency; she'd certainly be a welcome replacement to the board.

*This is what happens when one spends 15 hours a day speed reading.


Regards,
Esq.
 

Tothemoon24

Top 20
IMG_2063.jpeg


I’m still buzzing from an incredible week in Las Vegas for my very first #CES! It was an honor to represent Quantum Ventura and showcase our cybersecurity solution, CNRT.

As an ML Engineer, the highlight was demonstrating the flexibility of our architecture. We successfully demoed CNRT running in two distinct environments:

Traditional Inference: Optimized for standard GPU and CPU configurations.

Edge Intelligence: Running seamlessly on the BrainChip AKD1000 processor, proving what’s possible when you bring neuromorphic computing to the edge.

Seeing the contrast between high-power compute and low-latency, energy-efficient edge inference in real-time was a massive learning experience.
A huge thank you to Quantum Ventura Inc. for the opportunity to lead this technical demo, and to our partners at BrainChip for the incredible support in showing the future of hardware-accelerated AI.

#CES2026 #MLEngineer #CyberSecurity #EdgeAI #NeuromorphicComputing #BrainChip #QuantumVentura #MachineLearning

IMG_2064.jpeg
 

Rach2512

Regular
 

Frangipani

Top 20
Could that update on GitHub possibly be connected to our so far rather secretive partner MulticoreWare, a San Jose-headquartered software development company?

Their name popped up on our website under “Enablement Partners” in early February without any further comment (https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-450082).

Three months later, I spotted MulticoreWare’s VP of Sales & Business Development visiting the BrainChip booth at the Andes RISC-V CON in San Jose:

“The gentleman standing next to Steve Brightfield is Muhammad Helal, VP Sales & Business Development of MulticoreWare, the only Enablement Partner on our website that to this day has not officially been announced.”

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-459763


But so far crickets from either company…

MulticoreWare still doesn’t even list us as a partner on their website:


View attachment 93744

Nor does BrainChip get a mention anywhere in the 11 December article below, titled “Designing Ultra-Low-Power Vision Pipelines on Neuromorphic Hardware - Building Real-Time Elderly Assistance with Neuromorphic hardware”, although “TENNs” is a giveaway to us that the neuromorphic AI architecture referred to is indeed Akida.


What I find confusing, though, is that this neuromorphic AI architecture should consequently be Akida 2.0, given that the author is referring to TENNs, which Akida 1.0 doesn’t support. But then of course we do not yet have Akida 2.0 silicon.

However, at the same time it sounds as if the MulticoreWare researchers used physical neuromorphic hardware, which means it must have been an AKD1000 card:

“In the above demo, we have deployed a complete vision pipeline running seamlessly on a Raspberry Pi with the neuromorphic accelerator attached to the PCIe slot, demonstrating portability and practical deployment, validating real-time, low-power AI at the edge.”

By the way, also note the following quote, which helps to explain why the adoption of neuromorphic technology takes so much longer than it would if it were a simple plug-and-play solution:

“Developing models for neuromorphic AI requires more than porting existing architectures […] In short, building for neuromorphic acceleration means starting from the ground up, balancing accuracy, efficiency, and strict design rules to unlock the promise of real-time, ultra-low-power AI at the edge.”




View attachment 93741

December 11, 2025

Author: Reshi Krish is a software engineer in the Platforms and Compilers Technical Unit at MulticoreWare, focused on building ultra-efficient AI pipelines for resource-constrained platforms. She specializes in optimizing and deploying AI across diverse hardware environments, leveraging techniques like quantization, pruning, and runtime optimization. Her work spans linear algebra library optimization, embedded systems, and edge AI applications.

Introduction: Driving Innovation Beyond Power Constraints​

As AI continues to advance at an unprecedented pace, its growing complexity often demands powerful hardware and high energy resources. Deploying AI solutions to the edge, however, calls for ultra-efficient hardware that runs on as little energy as possible, and this introduces its own engineering challenges. ARM Cortex-M microcontrollers (MCUs) and similar low-power processors have tight compute and memory limits, making optimizations like quantization, pruning, and lightweight runtimes critical for real-time performance. These challenges, in turn, are inspiring innovative solutions that make intelligence more accessible, efficient, and sustainable.

At MulticoreWare, we’ve been exploring multiple paths to push more intelligence onto these constrained devices. This exploration led us to neuromorphic AI architectures and specialized neuromorphic hardware, which provide ultra-low-power inference by mimicking the brain’s event-driven processing. We saw the novelty of this framework and aimed to combine it with our deep MCU experience to open new ways of delivering always-on AI across medical, smart home, and industrial segments.

Designing for Neuromorphic Hardware​

The neuromorphic AI framework we identified utilizes a novel type of neural network: Temporal Event-based Neural Networks (TENNs). TENNs employ a state-space architecture that processes events dynamically rather than at fixed intervals, skipping idle periods to minimize energy and memory usage. This design enables real-time inference on milliwatts of power, making it ideal for edge deployments.
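As a rough illustration of that event-driven, state-space idea, here is a toy sketch. To be clear, this is not the actual TENNs implementation (which is unpublished); the matrices and event format below are invented purely for illustration.

```python
import numpy as np

# Toy sketch of event-driven state-space processing: advance the state only
# when an event arrives, folding the idle gap into the transition instead of
# computing every intermediate timestep. NOT the real TENNs internals.
A = np.array([[0.90, 0.00],
              [0.10, 0.80]])        # state transition (illustrative values)
B = np.array([[1.0], [0.5]])        # input projection
C = np.array([[0.3, 0.7]])          # readout

def run_event_driven(events):
    """events: list of (timestep, value) pairs with gaps between timesteps."""
    x = np.zeros((2, 1))
    last_t, outputs = 0, []
    for t, u in events:
        # Skip the idle gap in one shot: A**dt rather than dt separate updates.
        x = np.linalg.matrix_power(A, t - last_t) @ x + B * u
        outputs.append((t, (C @ x).item()))
        last_t = t
    return outputs

# Three sparse events across 1000 timesteps: a frame-based model would run
# ~1000 updates, this loop runs 3.
print(run_event_driven([(3, 1.0), (57, 0.2), (1000, 0.8)]))
```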

Developing models for neuromorphic AI requires more than porting existing architectures. The framework we utilised mandates full int8 quantization and adherence to strict architectural constraints. Only a limited set of layers is supported, and models must follow rigid sequences for compatibility. These restrictions often necessitate significant redesigns, including modifying the model architecture, replacing unsupported activations (e.g., LeakyReLU → ReLU), and simplifying branched topologies. Many deep learning features, such as multi-input/output models, are also not supported, requiring developers to implement workarounds or redesign models entirely.

In short, building for neuromorphic acceleration means starting from the ground up, balancing accuracy, efficiency, and strict design rules to unlock the promise of real-time, ultra-low-power AI at the edge.
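To make the two most common adaptations concrete, here is a minimal sketch using plain Keras/TFLite as a stand-in; the article does not name its actual converter, and the model below is a made-up placeholder, not the one used in the demo.

```python
import numpy as np
import tensorflow as tf

# Step 1: rebuild the network with supported layers only, e.g. swapping
# LeakyReLU for ReLU and keeping a simple, unbranched topology.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),   # was LeakyReLU
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4),    # e.g. sitting / walking / lying / falling
])

# Step 2: full int8 post-training quantization, mirroring the "mandates full
# int8 quantization" constraint. A representative dataset calibrates ranges.
def representative_data():
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]  # placeholder

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
open("activity_int8.tflite", "wb").write(converter.convert())
```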

Engineering Real-Time Elderly Assistance on the Edge​

To demonstrate the potential of neuromorphic AI, we developed a computer-vision-based elderly assistance system capable of detecting critical human activities such as sitting, walking, lying down, or falling, all in real time on extremely low-power hardware.

The goal was simple yet ambitious:
To deliver a fully on-device, low-power AI pipeline that continuously monitors and interprets human actions while maintaining user privacy and operational efficiency, even in resource-limited environments.

However, due to the framework's architectural constraints, certain models, such as pose estimation, could not be fully supported. To overcome this, we adopted a hybrid approach combining neuromorphic and conventional compute resources:
  • Neuromorphic Hardware: Executes object detection and activity classification using specialized models.
  • CPU (TensorFlow Lite): Handles pose estimation and intermediate feature extraction.

This design maintained functionality while ensuring power-efficient inference at the edge. Our modular vision pipeline leverages neuromorphic acceleration for detection and classification, with pose estimation run on the host device.
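A hedged sketch of how such a hybrid dispatch loop might look follows. The accelerator-side interface here (`npu.detect` / `npu.classify`) is a hypothetical placeholder for whatever runtime the hardware's SDK provides; only the TensorFlow Lite CPU path uses a real API.

```python
import numpy as np
import tensorflow as tf

# CPU side: pose estimation stays on the host via TensorFlow Lite.
pose_interp = tf.lite.Interpreter(model_path="pose_estimation.tflite")
pose_interp.allocate_tensors()
pose_in = pose_interp.get_input_details()[0]
pose_out = pose_interp.get_output_details()[0]

def run_pose_on_cpu(crop):
    pose_interp.set_tensor(pose_in["index"], crop[np.newaxis].astype(np.float32))
    pose_interp.invoke()
    return pose_interp.get_tensor(pose_out["index"])

def trigger_alert(box, keypoints):
    print("ALERT: fall detected at", box)   # placeholder for the real alert path

def process_frame(frame, npu):
    # Detection and activity classification run on the neuromorphic
    # accelerator; only the unsupported pose model falls back to the CPU.
    for box in npu.detect(frame):           # hypothetical accelerator API
        crop = frame[box.y0:box.y1, box.x0:box.x1]
        keypoints = run_pose_on_cpu(crop)
        activity = npu.classify(crop, keypoints)
        if activity == "falling":
            trigger_alert(box, keypoints)
```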


View attachment 93742
View attachment 93743

Results: Intelligent, Low-Power Assistance at the Edge​

In the above demo, we have deployed a complete vision pipeline running seamlessly on a Raspberry Pi with the neuromorphic accelerator attached to the PCIe slot, demonstrating portability and practical deployment, validating real-time, low-power AI at the edge. This system continuously identifies and classifies user activities in real time, instantly detecting events such as falls or help gestures and triggering immediate alerts. All the processing required was achieved entirely at the edge, ensuring privacy and responsiveness in safety-critical scenarios.
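If the accelerator is indeed an AKD1000 on a PCIe card, as the setup suggests, the host-side plumbing with BrainChip's MetaTF `akida` package would look roughly like this. This is an assumption on our part: the article never names the hardware, the model file below is hypothetical, and the calls should be checked against the installed MetaTF version.

```python
import akida
import numpy as np

# Enumerate Akida devices exposed to the host (e.g. an AKD1000 PCIe card
# plugged into the Raspberry Pi).
devices = akida.devices()
print("Found", len(devices), "Akida device(s)")

# Load a pre-converted model and map it onto the chip; inference then runs
# on the accelerator rather than the Pi's CPU.
model = akida.Model("activity_classifier.fbz")    # hypothetical model file
model.map(devices[0])

frame = np.zeros((1, 96, 96, 3), dtype=np.uint8)  # placeholder camera frame
potentials = model.forward(frame)                 # executes on the card
print(potentials.shape)
```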

The neuromorphic architecture consumes only a fraction of the power required by conventional deep learning pipelines, while maintaining consistent inference speeds and robust performance.

Application Snapshot:
  • Ultra-low power consumption
  • Portable Raspberry Pi + neuromorphic hardware setup
  • End-to-end application running on the edge hardware

Our Playbook for Making Edge AI Truly Low-Power​

MulticoreWare applies deep technical expertise across emerging low-power compute ecosystems, enabling AI to run efficiently on resource-constrained platforms. Our approach combines model-level optimisations such as quantisation and pruning with runtime- and kernel-level tuning for the target hardware.

Broader MCU AI Applications: Industrial, Smart Home & Smart City​

With healthcare leading the shift toward embedded-first AI, smart homes, industrial systems, and smart cities are rapidly following. Applications like quality inspection, predictive maintenance, robotic assistance, home security, and occupancy sensing increasingly require AI that runs directly on MCU-class, low-power edge processors.

MulticoreWare’s real-time inference framework for Arm Cortex-M devices supports this transition through highly optimised pipelines, including quantisation, pruning, CMSIS-NN kernel tuning, and memory-tight execution paths tailored for constrained MCUs. This enables OEMs to deploy workloads such as wake-word spotting, compact vision models, and sensor-level anomaly detection, allowing even the smallest devices to run intelligent features without relying on external compute.
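For a sense of scale, below is a sketch of the kind of compact model behind a wake-word workload: a small depthwise-separable CNN over audio features in the spirit of DS-CNN. It is illustrative only, not MulticoreWare's actual pipeline; after full int8 conversion it is the sort of network CMSIS-NN kernels accelerate on a Cortex-M.

```python
import tensorflow as tf

# Compact keyword-spotting model in the DS-CNN style: depthwise-separable
# blocks keep parameter count and RAM usage within MCU limits.
def ds_block(x, filters):
    x = tf.keras.layers.DepthwiseConv2D(3, padding="same")(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.ReLU()(x)
    x = tf.keras.layers.Conv2D(filters, 1)(x)    # pointwise
    x = tf.keras.layers.BatchNormalization()(x)
    return tf.keras.layers.ReLU()(x)

inp = tf.keras.Input((49, 10, 1))                # 49 MFCC frames x 10 coeffs
x = tf.keras.layers.Conv2D(64, (10, 4), strides=2, padding="same")(inp)
for _ in range(4):
    x = ds_block(x, 64)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
out = tf.keras.layers.Dense(12, activation="softmax")(x)   # keyword classes
model = tf.keras.Model(inp, out)
model.summary()   # tens of thousands of parameters, tens of KB once int8
```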

Conclusion: Redefining Intelligence Beyond the Cloud​

The convergence of AI and embedded computing marks a defining moment in how intelligence is designed, deployed, and scaled. By enabling lightweight, power-efficient AI directly at the edge, MulticoreWare empowers customers across healthcare, industrial, and smart city domains to achieve faster response times, higher reliability, and reduced energy footprints.

As the boundary between compute and intelligence continues to fade, MulticoreWare’s Edge AI enablement across MCU and embedded platforms ensures that our partners stay ahead, building the foundation for a truly decentralised, real-time intelligence beyond the cloud.


To learn more about MulticoreWare’s edge AI initiatives, write to us at info@multicorewareinc.com.




View attachment 93745

View attachment 93746

The LinkedIn post below by our secretive partner MulticoreWare connects to the 11 December 2025 article “Designing Ultra-Low-Power Vision Pipelines on Neuromorphic Hardware - Building Real-Time Elderly Assistance with Neuromorphic hardware”, written by their software engineer Reshi Krish, which I already shared last month. 👆🏻

Not sure, though, why MulticoreWare paired the post’s text with an image saying “Edge AI on MCUs for Factory Automation”, as the linked article is about building real-time elderly assistance on neuromorphic hardware, not about “running vibration, defect and anomaly detection AI on tiny microcontrollers to cut downtime and boost equipment life”:
From the article: “This system continuously identifies and classifies user activities in real time, instantly detecting events such as falls or help gestures and triggering immediate alerts. All the processing required was achieved entirely at the edge, ensuring privacy and responsiveness in safety-critical scenarios.”

Also, why do MulticoreWare continue to be so shy about naming BrainChip as their partner, when their own name showed up overnight as an Enablement Partner on the BrainChip website almost a year ago?
https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-450082

MulticoreWare list 21 other partners on their website, including some very illustrious ones: https://multicorewareinc.com/explore-us/partners/



2031DD37-3C90-4774-91CD-C64447B5C3D7.jpeg





Like I said in my tagged post above, the mention of one detail - which happens to be the one that allows us to infer they have indeed been using Akida - doesn’t quite make sense to me:

Nor does BrainChip get a mention anywhere in the 11 December article below, titled “Designing Ultra-Low-Power Vision Pipelines on Neuromorphic Hardware - Building Real-Time Elderly Assistance with Neuromorphic hardware”, although “TENNs” is a giveaway to us that the neuromorphic AI architecture referred to is indeed Akida.

What I find confusing, though, is that this neuromorphic AI architecture should consequently be Akida 2.0, given that the author is referring to TENNs, which Akida 1.0 doesn’t support. But then of course we do not yet have Akida 2.0 silicon.

However, at the same time it sounds as if the MulticoreWare researchers used physical neuromorphic hardware, which means it must have been an AKD1000 card:

“In the above demo, we have deployed a complete vision pipeline running seamlessly on a Raspberry Pi with the neuromorphic accelerator attached to the PCIe slot, demonstrating portability and practical deployment, validating real-time, low-power AI at the edge.”

By the way, also note the following quote, which helps to explain why the adoption of neuromorphic technology takes so much longer than it would if it were a simple plug-and-play solution:

“Developing models for neuromorphic AI requires more than porting existing architectures […] In short, building for neuromorphic acceleration means starting from the ground up, balancing accuracy, efficiency, and strict design rules to unlock the promise of real-time, ultra-low-power AI at the edge.”
 

Attachments

  • 668CF8E8-0DEA-4340-AACD-73A185D6BC80.jpeg (356.9 KB)
Maybe not, as she may be too overqualified.

1768900340835.gif
 

IloveLamp

Top 20
Great post Rach, I was just about to post the same. In case people don't have LinkedIn, here is a screenshot.
1000017337.jpg
 

Rach2512

Regular
 

itsol4605

Regular
BrainChip?
Neuromorphic Computing?
I guess not!

Siemens has no deal with BrainChip, but with NVIDIA.
 