BRN Discussion Ongoing

Gazzafish

Regular
With all the talk about Renesas taping out a few years back and Sean’s current optimism, I wouldn’t be surprised if BRN are earning revenue/royalties right now. We’d never know; it might be via Renesas or even MegaChips. No announcement would be warranted, as it’s an existing arrangement. The first we might see of it would be the quarterly report due out late January. I’m just guessing based on no knowledge. DYOR
 
  • Like
  • Wow
  • Fire
Reactions: 4 users
I love the optimism.
We should all believe Sean, as everything he has said so far has been true to his word.
Come on BrainChip, let’s make this year one to remember.
 
  • Like
  • Love
Reactions: 7 users

Tothemoon24

Top 20
Merry Chipmas


Traditional AI computing relies on machine learning and deep learning methods that demand significant power and memory for both training and inference.
Our researchers have developed a patented neuromorphic computing architecture based on field-programmable gate arrays (FPGAs). This architecture is designed to be parallel and modular, enabling highly efficient, brain-inspired computing directly on the device. Compared to existing techniques, this approach improves energy-per-inference by ~1500 times and latency by ~2400 times.
This paves the way for a new generation of powerful, real-time AI applications in energy-constrained environments.
Know more in the #patent- bit.ly/498XlwC
Inventors: Dhaval Shah, Sounak Dey, Meripe Ajay Kumar, Manoj Nambiar, Arpan Pal
Tata Consultancy Services
#TCSResearch #AI #NeuromorphicComputing

 

  • Like
  • Fire
  • Love
Reactions: 32 users

Diogenese

Top 20


Hi TTM,

This looks like the TCS FPGA NN patent:


US12314845B2 Field programmable gate array (FPGA) based neuromorphic computing architecture 20211014






This disclosure relates generally to a method and a system for computing using a field programmable gate array (FPGA) neuromorphic architecture. Implementing energy efficient Artificial Intelligence (AI) applications at power constrained environment/devices is challenging due to huge energy consumption during both training and inferencing. The disclosure is a FPGA architecture based neuromorphic computing platform, the basic components include a plurality of neurons and memory. The FPGA neuromorphic architecture is parameterized, parallel and modular, thus enabling improved energy/inference and Latency-Throughput. Based on values of the plurality of features of the data set, the FPGA neuromorphic architecture is generated in a modular and parallel fashion. The output of the disclosed FPGA neuromorphic architecture is the plurality of output spikes from the neuron, which becomes the basis of inference for computing.
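To picture what the abstract is describing, neurons integrating weighted inputs and emitting output spikes that become the basis of inference, here is a minimal leaky integrate-and-fire (LIF) sketch in Python. The parameter values are arbitrary assumptions for illustration, not taken from the patent; a parallel, modular FPGA fabric would instantiate many such neurons side by side, while this software loop only shows the event-in, spike-out behaviour.

Code:
import numpy as np

def lif_neuron(input_spikes, weight=0.8, v_rest=-65.0, v_thresh=-52.0, leak=0.9):
    """Leaky integration of weighted input events; emits 1 on threshold crossing."""
    v = v_rest
    out = []
    for s in input_spikes:
        v = v_rest + leak * (v - v_rest) + weight * s  # leak toward rest, add input
        if v >= v_thresh:
            out.append(1)   # output spike: the basis of inference downstream
            v = v_rest      # reset membrane potential after firing
        else:
            out.append(0)
    return out

# 20 time steps of random input events with amplitude 20.0
print(lif_neuron(np.random.binomial(1, 0.5, 20) * 20.0))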

I'm not sure why they need an FPGA to be reconfigurable. Akida's NPUs can be connected in any layer configuration needed.

Maybe it's to do with having "copper" interconnects rather than electronic navigation on the comms fabric? That would have potential to improve latency.

The patent dates from October 2021. It predates this Tata Elxsi announcement:

https://brainchip.com/brainchip-and...provide-intelligent-ultralow-power-solutions/

BrainChip and Tata Elxsi Partner on Intelligent Ultra-Low-Power Solutions


Laguna Hills, Calif. – August 28, 2023 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, welcomes leading design and technology services provider Tata Elxsi as a partner to its Essential AI ecosystem.

Akida’s fully customizable, scalable, event-based AI neural processor architecture and its small footprint boosts the efficiency of various applications by orders of magnitude. Greater AI performance at the edge, independent from the cloud, unlocks the growth of the Artificial Intelligence of Things (AIoT) market that is expected to be more than a trillion dollars by 2030.

“The combination of our user-centric design expertise with leading-edge technologies is key to helping enterprises reimagine their products and services to improve operational efficiency, reduce costs and deliver new services to their customers,” said Manoj Raghavan, CEO and MD at Tata Elxsi. “This cannot be possible without our global ecosystem of partners. By partnering with BrainChip and implementing Akida technology into medical and industrial solutions, we are able to deliver innovative solutions at a faster time to market than otherwise possible.”

“BrainChip is very aligned with Tata Elxsi’s mission to innovate with leading edge technology and deliver compelling new products and services that improve customer experience and outcomes,” said Rob Telson, Vice President of Ecosystems & Partnerships at BrainChip. “Our partnership with Tata Elxsi leverages Akida technology to transform applications and results in markets such as healthcare and industrial automation. We look forward to working with them to create new opportunities and drive growth.”


Funny - I thought we'd been with Tata longer than 2023, but as the man in black says, "Time keeps draggin' on ..."

Of course, it is possible to build Akida NPUs in an FPGA - that's how we started (Xilinx). Would it make sense to incorporate Akida NPUs in a purpose-built FPGA (cf. a general-purpose FPGA), or is that an application-specific FPGA (ASFPGA)?

Surely, if Tata are going to build an FPGA NN from the ground up, and they knew about TENNs, they would want to have TENNs in the FPGA.
 
  • Like
  • Love
  • Fire
Reactions: 18 users

Diogenese

Top 20
Interestingly, this Tata patent from mid-2022 deals with time-series data:

US2023334300A1 METHODS AND SYSTEMS FOR TIME-SERIES CLASSIFICATION USING RESERVOIR-BASED SPIKING NEURAL NETWORK 20220418



The present disclosure relates to methods and systems for time-series classification using a reservoir-based spiking neural network, that can be used at edge computing applications. Conventional reservoir based SNN techniques addressed either by using non-bio-plausible backpropagation-based mechanisms, or by optimizing the network weight parameters. The present disclosure solves the technical problems of TSC, using a reservoir-based spiking neural network. According to the present disclosure, the time-series data is encoded first using a spiking encoder. Then the spiking reservoir is used to extract the spatio-temporal features for the time-series data. Lastly, the extracted spatio-temporal features of the time-series data is used to train a classifier to obtain the time-series classification model that is used to classify the time-series data in real-time, received from edge devices present at the edge computing network.

[0074] The reservoir based spiking neural network architecture of the present disclosure is implemented using BindsNet 0.2.7, a GPU-based open-source SNN simulator in Python that supports parallel computing. The parameter values for the LIF neuron (refer to equation 1) used in the experiments are: Vthresh = −52.0 mV, Vrest = −65.0 mV. Table 1 shows other important network parameters for the spiking reservoir of the present disclosure. For the Gaussian encoding, 15 input encoding neurons (i.e. m = 15) are used, resulting in 15× magnification of the input timescale to spike time scale. A set of weight scalar parameters are selected for different connections between the populations to optimize the spiking reservoir performance.
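For anyone curious what "Gaussian encoding" with m = 15 input neurons means in practice, here is a minimal numpy sketch of the general Gaussian receptive-field technique; the centre spacing, width and time scale are my own illustrative assumptions, not necessarily Tata's exact scheme:

Code:
import numpy as np

def gaussian_encode(x, m=15, x_min=0.0, x_max=1.0, t_max=15):
    """Map one scalar sample to first-spike times of m encoding neurons."""
    centers = np.linspace(x_min, x_max, m)   # receptive-field centres
    sigma = (x_max - x_min) / (m - 1)        # receptive-field width
    activation = np.exp(-0.5 * ((x - centers) / sigma) ** 2)
    # Strong activation -> early spike; weak activation -> late spike.
    return np.round(t_max * (1.0 - activation)).astype(int)

print(gaussian_encode(0.3))  # one spike time per neuron, on a 15-step time scale

Mapping each sample onto 15 spike times is also what gives the "15× magnification of the input timescale" mentioned in the patent.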

... but I can't detect any link to TENNs.
 
  • Like
  • Thinking
  • Love
Reactions: 11 users

manny100

Top 20
Interesting chart: MACD bullish divergence.
Four touches of the highs of the downtrend line since Oct '25, followed by a break of the trendline today.
The trend is still down, but the MACD divergence indicates a momentum shift may be beginning.
It's wait and see over the next few days.
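For anyone wanting to check the divergence themselves: MACD is just the 12-period EMA minus the 26-period EMA, with a 9-period EMA of that as the signal line. A minimal sketch, using synthetic prices and the standard 12/26/9 parameters:

Code:
import numpy as np

def ema(x, span):
    alpha = 2.0 / (span + 1)
    out = np.empty_like(x, dtype=float)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

close = np.cumsum(np.random.randn(120)) + 100  # synthetic close prices
macd = ema(close, 12) - ema(close, 26)         # MACD line
signal = ema(macd, 9)                          # signal line
histogram = macd - signal
# Bullish divergence: price prints lower lows while the MACD line prints higher lows.
print(histogram[-5:])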
 
  • Like
  • Haha
Reactions: 11 users

Tothemoon24

Top 20
 
  • Like
  • Fire
  • Love
Reactions: 39 users
As a layman, may I ask: do you know if Tata has any other neuromorphic compute partners apart from BrainChip? AI doesn't seem to think so, which is great.
 
  • Like
Reactions: 5 users

Tothemoon24

Top 20



Arm Holdings has positioned itself at the centre of the AI transformation. In a wide-ranging podcast interview, Vince Jesaitis, head of global government affairs at Arm, offered enterprise decision-makers a look into the company’s international strategy, the evolution of AI as the company sees it, and what lies ahead for the industry.

From cloud to edge

Arm thinks the AI market is about to enter a new phase, moving from cloud-based processing to edge computing. While much of the media’s attention has been focused to date on massive data centres, with models trained in and accessed from the cloud, Jesaitis said that most AI compute, especially inference tasks, is likely to be increasingly decentralised.

“The next ‘aha’ moment in AI is when local AI processing is being done on devices you couldn’t have imagined before,” Jesaitis said. These devices range from smartphones and earbuds to cars and industrial sensors. Arm’s IP is already embedded, literally, in these devices: in the last year alone, its IP has been behind over 30 billion chips, placed in devices of every conceivable description, all over the world.

The deployment of AI in edge environments has several benefits, with the team at Arm citing three main ‘wins’. Firstly, the inherent efficiency of low-power Arm chips means that power bills for running compute and cooling are lower. That keeps the environmental footprint of the technology as small as possible.

Secondly, putting AI in local settings means latency is much lower (with latency determined by the distance between local operations and the site of the AI model). Arm points to uses like instant translation, dynamic scheduling of control systems, and features like the near-immediate triggering of safety functions – for instance in IIoT settings.

Thirdly, ‘keeping it local’ means there’s no potentially sensitive data sent off-premise. The benefits are obvious for any organisation in highly-regulated industries, but the increasing number of data breaches means even companies operating with relatively benign data sets are looking to reduce their attack surface.

Arm silicon, optimised for power-constrained devices, is well-suited for compute where it’s needed on the ground, the company says. The future may well be one where AI is found woven throughout environments, not centralised in a data centre run by one of the large providers.

Arm and global governments

Arm is actively engaged with global policymakers, considering this level of engagement an important part of its role. Governments continue to compete to attract semiconductor investment, the issues of supply chains and concentrated dependencies still fresh in many policymakers’ memories from the time of the COVID-19 pandemic.

Arm lobbies for workforce development, working at present with policymakers in the White House on an education coalition to build an ‘AI-ready workforce’. Domestic independence in technology relies as much on the abilities of the workforce as it does on the availability of hardware.

Jesaitis noted a divergence between regulatory environments: the US prioritises what the government there terms acceleration and innovation, while the EU leads on safety, privacy, security and legally-enforced standards of practice. Arm aims to find the middle ground between these approaches, building products that meet stringent global compliance needs, yet furthering advances in the AI industry.

The enterprise case for edge AI

The case for integrating Arm’s edge-focused AI architecture into enterprise transformation strategies can be persuasive. The company stresses its ability to offer scalable AI without the need to centralise to the cloud, and is also pushing its investment in hardware-level security. That means issues like memory exploits (outside the control of users plugged into centralised AI models) can be avoided.

Of course, sectors already highly-regulated in terms of data practices are unlikely to experience relaxed governance in the future – the opposite is pretty much inevitable. All industries will be seeing more regulation and greater penalties for non-compliance in the years to come. However, to balance that, there are significant competitive advantages available to those that can demonstrate their systems’ inherent safety and security. It’s into this regulatory landscape that Arm sees itself and local, edge AI fitting.

Additionally, in Europe and Scandinavia, ESG goals are going to be increasingly important. Here, the power-sipping nature of Arm chips offers big advantages. That’s a trend that even the US hyperscalers are responding to: AWS’s latest Graviton range of low-cost, low-power Arm-based platforms is there to satisfy that exact demand.

Arm’s collaboration with cloud hyperscalers such as AWS and Microsoft produces chips that combine efficiency with the necessary horsepower for AI applications, the company says.

What’s next from Arm and the industry

Jesaitis pointed out several trends that enterprises may be seeing in the next 12 to 18 months. Global AI exports, particularly from the US and Middle East, are ensuring that local demand for AI can be satisfied by the big providers. Arm is a company that can supply both big providers in these contexts (as part of their portfolios of offerings) and satisfy the rising demand for edge-based AI.

Jesaitis also sees edge AI as something of the hero of sustainability in an industry increasingly under fire for its ecological impact. Because Arm technology’s biggest market has been in low-power compute for mobile, it’s inherently ‘greener’. As enterprises hope to meet energy goals without sacrificing compute, Arm offers a way that combines performance with responsibility.

Redefining “smart”

Arm’s vision of AI at the edge means computers and the software running on them can be context-aware, cheap to run, secure by design, and – thanks to near-zero network latency – highly-responsive. Jesaitis said, “We used to call things ‘smart’ because they were online. Now, they’re going to be truly intelligent.”
 
  • Like
  • Fire
  • Love
Reactions: 21 users

TheDrooben

Pretty Pretty Pretty Pretty Good
  • Like
  • Fire
  • Love
Reactions: 19 users

Esq.111

Fascinatingly Intuitive.
Good Morning Chippers ,

Just a quick thankyou to all , the collective sharing of information once again has been vast and informative.

Wishing all an enjoyable break & a prosperous new year.

Regards,
Esq.
 
  • Like
  • Love
  • Fire
Reactions: 58 users

Sirod69

bavarian girl ;-)
I wish you all a Merry Christmas with all my heart.
We are having such bad times, and that is precisely why we should see a day like this as very valuable.
 
  • Like
  • Love
  • Sad
Reactions: 44 users
This has to be BrainChip, doesn't it?

When asking AI if Tata is working with any other neuromorphic company, the answer is NO.

THIS IS HUGE IF IT'S BRAINCHIP... MY GUT SAYS YES.
 
  • Like
  • Fire
Reactions: 8 users

Guzzi62

Regular
Let's hope so, but:

8/12/2025
Tata and Intel Announce Strategic Alliance to Establish Silicon and Compute Ecosystem in India
Exploring Strategic Collaboration for Silicon and Systems Manufacturing, Packaging, and AI Compute Market Development.


Okay, nothing about neuromorphic but still, not good IMO.

We can hope Tata has progressed so much in research with Akida that they are committed?

Edit: If they don't want to wait years for Intel's Loihi to reach silicon stage, they have no choice but to go with Brainchip!
 
  • Like
Reactions: 7 users

Bravo

Meow Meow 🐾
Merry Christmas Brain Fam!

2025 felt a lot like a dramatic tablecloth pull - lots of anticipation, but in the end... nothing moved.

Here’s hoping 2026 is the year the tablecloth finally comes off and "exposes" something truly scandalous - a share price that rises with confidence, pointing firmly north (no pun intended). 🤭

Wishing everyone a safe and happy holiday.

B 🎄


 
  • Like
  • Haha
  • Love
Reactions: 46 users

buena suerte :-)

BOB Bank of Brainchip
Have a fantastic Christmas spending precious time with family and friends, and may we 'Chippers' get some seriously (much needed!!) good news early in the new year and start it off with a bang!! $$$$$$$$$$$$$$$ 🙏🙏🙏




CHEERS ALL :)
 
  • Like
  • Love
  • Fire
Reactions: 36 users

jtardif999

Regular
Could that update on GitHub possibly be connected to our so far rather secretive partner MulticoreWare, a San Jose-headquartered software development company?

Their name popped up on our website under “Enablement Partners” in early February without any further comment (https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-450082).

Three months later, I spotted MulticoreWare’s VP of Sales & Business Development visiting the BrainChip booth at the Andes RISC V Con in San Jose:

“The gentleman standing next to Steve Brightfield is Muhammad Helal, VP Sales & Business Development of MulticoreWare, the only Enablement Partner on our website that to this day has not officially been announced.”

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-459763


But so far crickets from either company…

MulticoreWare still doesn’t even list us as a partner on their website:



Neither does BrainChip get a mention anywhere in the 11 December article below, titled “Designing Ultra-Low-Power Vision Pipelines on Neuromorphic Hardware - Building Real-Time Elderly Assistance with Neuromorphic Hardware”, although “TENNs” is a giveaway to us that the neuromorphic AI architecture referred to is indeed Akida.

What I find confusing, though, is that this neuromorphic AI architecture should consequently be Akida 2.0, given that the author is referring to TENNs, which Akida 1.0 doesn’t support. But then of course we do not yet have Akida 2.0 silicon.

However, at the same time it sounds as if the MulticoreWare researchers used physical neuromorphic hardware, which means it must have been an AKD1000 card:

“In the above demo, we have deployed a complete vision pipeline running seamlessly on a Raspberry Pi with the neuromorphic accelerator attached at the PCIE slot, demonstrating portability and practical deployment validating real-time, low-power AI at the edge.”

By the way, also note the following quote, which helps to explain why the adoption of neuromorphic technology takes so much longer than it would if it were a simple plug-and-play solution:

“Developing models for neuromorphic AI requires more than porting existing architectures […] In short, building for neuromorphic acceleration means starting from the ground up balancing accuracy, efficiency, and strict design rules to unlock the promise of real-time, ultra-low-power AI at the edge”





December 11, 2025

Author: Reshi Krish is a software engineer in the Platforms and Compilers Technical Unit at MulticoreWare, focused on building ultra-efficient AI pipelines for resource-constrained platforms. She specializes in optimizing and deploying AI across diverse hardware environments, leveraging techniques like quantization, pruning, and runtime optimization. Her work spans optimizing linear algebra libraries, embedded systems, and edge AI applications.

Introduction: Driving Innovation Beyond Power Constraints

As AI continues to advance at an unprecedented pace, its growing complexity often demands powerful hardware and high energy resources. However, when deploying AI solutions to the edge, we look for ultra-efficient hardware that can run using the least amount of energy possible, and this introduces its own engineering challenges. Arm Cortex-M microcontrollers (MCUs) and similar low-power processors have tight compute and memory limits, making optimizations like quantization, pruning, and lightweight runtimes critical for real-time performance. These challenges, on the other hand, are inspiring innovative solutions that make intelligence more accessible, efficient, and sustainable.

At MulticoreWare, we’ve been exploring multiple paths to push more intelligence onto these constrained devices. This exploration led us to neuromorphic AI architectures and specialized neuromorphic hardware, which provide ultra-low-power inference by mimicking the brain’s event-driven processing. We saw the novelty of this framework and aimed to combine it with our deep MCU experience, opening new ways to deliver always-on AI across medical, smart home, and industrial segments.

Designing for Neuromorphic Hardware

The neuromorphic AI framework we had identified utilizes a novel type of neural network: Temporal Event-based Neural Networks (TENNs). TENNs employ a state-space architecture that processes events dynamically rather than at fixed intervals, skipping idle periods to minimize energy and memory usage. This design enables real-time inference on milliwatts of power, making it ideal for edge deployments.
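To make the "state-space" idea concrete, here is a generic discrete linear state-space recurrence in numpy. To be clear, this is not BrainChip's actual TENNs formulation, whose details are proprietary; the matrices, sizes and the idle-skip rule below are arbitrary assumptions, used only to illustrate a cheap recurrent state updated on events.

Code:
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 3                        # state size, input size (arbitrary)
A = 0.9 * np.eye(n)                # decaying state acts as cheap memory
B = rng.normal(size=(n, d))        # input projection
C = rng.normal(size=(1, n))        # readout used for inference

x = np.zeros(n)
y = np.zeros(1)
for u in rng.normal(size=(50, d)): # stream of input frames
    if np.abs(u).max() < 0.1:      # "event-driven" flavour: skip near-idle input
        continue
    x = A @ x + B @ u              # state update: x' = A x + B u
    y = C @ x                      # readout:      y  = C x
print(float(y[0]))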

Developing models for neuromorphic AI requires more than porting existing architectures. The framework we have utilised mandates full int8 quantization and adherence to strict architectural constraints. Only a limited set of layers is supported, and models must follow rigid sequences for compatibility. These restrictions often necessitate significant redesigns, including modification of the model architecture, replacing unsupported activations (e.g., LeakyReLU → ReLU) and simplifying branched topologies. Many deep learning features, like multi-input/output models, are also not supported, requiring developers to implement workarounds or redesign models entirely.
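As an aside, "full int8 quantization" typically means something like the symmetric per-tensor scheme sketched below; the exact scale and rounding rules of the framework in question may differ, so treat this as a generic illustration rather than its actual scheme.

Code:
import numpy as np

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0  # symmetric per-tensor scale
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print(np.abs(w - dequantize(q, s)).max())  # worst-case quantization error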

In short, building for neuromorphic acceleration means starting from the ground up, balancing accuracy, efficiency, and strict design rules to unlock the promise of real-time, ultra-low-power AI at the edge.

Engineering Real-Time Elderly Assistance on the Edge

To demonstrate the potential of neuromorphic AI, we developed a computer-vision-based elderly assistance system capable of detecting critical human activities such as sitting, walking, lying down, or falling, all in real time, running on extremely low-power hardware.

The goal was simple yet ambitious:
To deliver a fully on-device, low-power AI pipeline that continuously monitors and interprets human actions while maintaining user privacy and operational efficiency even in resource-limited environments.

However, due to the framework's architectural constraints, certain models, such as pose estimation, could not be fully supported. To overcome this, we adopted a hybrid approach combining neuromorphic and conventional compute resources:
  • Neuromorphic Hardware: Executes object detection and activity classification using specialized models.
  • CPU (TensorFlow Lite): Handles pose estimation and intermediate feature extraction.

This design maintained functionality while ensuring power-efficient inference at the edge. Our modular vision pipeline leverages neuromorphic acceleration for detection and classification, with pose estimation being run on the host device; a rough orchestration sketch follows.
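Here is a hypothetical Python skeleton of that split. The function bodies are stubs standing in for the real models; none of these names are MulticoreWare's or BrainChip's actual APIs.

Code:
def run_detector(frame):
    """Stub: would invoke the object detector on the neuromorphic accelerator."""
    return [("person", (0, 0, 64, 64))]  # (label, bounding box)

def run_pose(frame, box):
    """Stub: would invoke the TensorFlow Lite pose model on the CPU."""
    return [0.0] * 34                    # 17 keypoints as (x, y) pairs

def run_classifier(keypoints):
    """Stub: would invoke the activity classifier on the neuromorphic accelerator."""
    return "fall" if sum(keypoints) == 0 else "walking"

def process_frame(frame):
    alerts = []
    for label, box in run_detector(frame):  # accelerator: detection
        keypoints = run_pose(frame, box)    # CPU: pose estimation
        activity = run_classifier(keypoints)  # accelerator: classification
        if activity in ("fall", "help_gesture"):
            alerts.append((box, activity))  # would trigger an immediate alert
    return alerts

print(process_frame(frame=None))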



Results: Intelligent, Low-Power Assistance at the Edge

In the above demo, we have deployed a complete vision pipeline running seamlessly on a Raspberry Pi with the neuromorphic accelerator attached at the PCIe slot, demonstrating portability and practical deployment and validating real-time, low-power AI at the edge. This system continuously identifies and classifies user activities in real time, instantly detecting events such as falls or help gestures and triggering immediate alerts. All the required processing was achieved entirely at the edge, ensuring privacy and responsiveness in safety-critical scenarios.

The neuromorphic architecture consumes only a fraction of the power required by conventional deep learning pipelines, while maintaining consistent inference speeds and robust performance.

Application Snapshot:
  • Ultra-low power consumption
  • Portable Raspberry Pi + neuromorphic hardware setup
  • End to end application running on the edge hardware

Our Playbook for Making Edge AI Truly Low-Power

MulticoreWare applies deep technical expertise across emerging low-power compute ecosystems, enabling AI to run efficiently on resource-constrained platforms. Our approach combines:


Broader MCU AI Applications: Industrial, Smart Home & Smart City

With healthcare leading the shift toward embedded-first AI, smart homes, industrial systems, and smart cities are rapidly following. Applications like quality inspection, predictive maintenance, robotic assistance, home security, and occupancy sensing increasingly require AI that runs directly on MCU-class, low-power edge processors.

MulticoreWare’s real-time inference framework for Arm Cortex-M devices supports this transition through highly optimised pipelines including quantisation, pruning, CMSIS-NN kernel tuning, and memory-tight execution paths tailored for constrained MCUs. This enables OEMs to deploy workloads such as wake-word spotting, compact vision models, and sensor-level anomaly detection, allowing even the smallest devices to run intelligent features without relying on external compute.
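Of the techniques listed, pruning is perhaps the easiest to picture; a minimal sketch of global magnitude pruning follows (the 50% sparsity target is an arbitrary example, not a figure from the article).

Code:
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude weights until `sparsity` of them are gone."""
    thresh = np.quantile(np.abs(w), sparsity)
    mask = np.abs(w) >= thresh
    return w * mask, mask

w = np.random.randn(8, 8).astype(np.float32)
pruned, mask = magnitude_prune(w)
print(f"zeroed {100 * (1 - mask.mean()):.0f}% of weights")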

Conclusion: Redefining Intelligence Beyond the Cloud

The convergence of AI and embedded computing marks a defining moment in how intelligence is designed, deployed, and scaled. By enabling lightweight, power-efficient AI directly at the edge, MulticoreWare empowers customers across healthcare, industrial, and smart city domains to achieve faster response times, higher reliability, and reduced energy footprints.

As the boundary between compute and intelligence continues to fade, MulticoreWare’s Edge AI enablement across MCU and embedded platforms ensures that our partners stay ahead, building the foundation for a truly decentralised, real-time intelligence beyond the cloud.


To learn more about MulticoreWare’s edge AI initiatives, write to us at info@multicorewareinc.com.




Fits like a glove with Brightfield's interview. IMO a licence agreement with MulticoreWare is imminent.
 
  • Like
  • Fire
  • Thinking
Reactions: 10 users

Andy38

The hope of potential generational wealth is real
So I was just chilling out on Waiheke Island for the Christmas break.
The tunes are playing, and as I watch the sun go down, Echo Beach comes on.
So a little reflecting going on.
It seems that January for the past five or so years has been good for the holder.
Hopefully this January will be the best ever 🥰
Cable Bay with a vino in hand?
 
  • Like
  • Fire
Reactions: 3 users
Maybe in the new year things might change for the better. At the moment the trading is all just games: one share here and one there, then 20 shares. It's just rubbish trading by bots, I guess.
 
  • Like
Reactions: 3 users