BRN Discussion Ongoing

manny100

Top 20
Just revisiting the Sean video with 'Stocks Down Under' from Oct '25.
From the 5.18 mark Sean talks about how, 12 to 18 months ago, customers were prepared to have a look at AKIDA and explore.
He says clients are now coming in with specific plans, saying "we know what we want".
At the 6.20 mark he talks about wearables models and engagements that are progressing very well - note the plural, engagements. He said we can expect to see a lot more activity in wearables.
Backs up Steve Brightfield's recent remarks.
6.50 mark. From here Sean addresses interest in radar, starting with defense. Most radar is decades old and needs innovation. The second point he made concerning radar was the mobility trend. Drones have changed everything about defense. More mobility around everything, including soldiers, e.g. carrying 'things' - likely meaning small devices. Solutions need long battery life and no network activity. BrainChip is leaning hard into 'mobile' solutions.
They are working with some interesting clients on mobile radar solutions, so expect them to push harder into that market.
From about 8.05: what can we expect in the next 12 to 18 months?
Tech advancements. Gen AI supporting LLMs on the Edge and "a very exciting configuration with world class breakthrough performance" - built with customer interaction and feedback.
8.45 - talks about the commercial side. Chip orders and Gen 2 - "down the track with extensive evaluations with certain customers right now so look for some licensees on that as well".
Given Steve Brightfield's very recent revealing interview, and looking back a couple of months to Sean's video (link below), it's all starting to come together - still early in the piece, but you can sense it's all getting close.
 
  • Like
  • Love
Reactions: 12 users

Diogenese

Top 20
Renesas advances SDV roadmap with 3nm R-Car Gen 5 platform​

New Products | December 22, 2025
By eeNews Europe
Automotive · SDV · Renesas · Embedded Design



Renesas Electronics has taken another step in its software-defined vehicle (SDV) strategy with the expansion of its fifth-generation R-Car platform, built around a new multi-domain automotive SoC and a broader end-to-end development environment. According to the company, the move is aimed squarely at accelerating the design of highly integrated, AI-driven vehicle architectures.


For eeNews Europe readers, the announcement is relevant because it highlights how leading silicon vendors are converging ADAS, infotainment, and gateway workloads onto a single, safety-capable compute platform. It also offers insight into how open software stacks and early silicon access are being used to shorten automotive development cycles.

3nm multi-domain compute for SDVs​

At the core of the platform is the R-Car X5H, the latest device in the Gen 5 R-Car family. According to Renesas, it is the industry’s first multi-domain automotive SoC manufactured on a 3nm process. The company indicates that the device is designed to run advanced driver assistance systems (ADAS), in-vehicle infotainment (IVI), and gateway functions concurrently on a single chip.

The company says the move to an advanced process node delivers up to 35% lower power consumption compared with previous 5nm solutions, while significantly increasing compute density. The SoC targets centralized SDV architectures, combining high performance with mixed-criticality support so that safety-related and non-safety workloads can coexist without compromise.


The R-Car X5H delivers up to 400 TOPS of AI performance, with the option to scale further using chiplet-based accelerators that can boost AI throughput by four times or more. Graphics performance reaches the equivalent of 4 TFLOPS, supported by a CPU complex of 32 Arm Cortex-A720AE cores and six Cortex-R52 lockstep cores with ASIL D capability, providing more than 1,000k DMIPS.

RoX Whitebox SDK targets faster development​

Alongside the new silicon, Renesas says it is expanding its R-Car Open Access (RoX) development platform with the RoX Whitebox Software Development Kit. The SDK is positioned as an open, scalable environment that integrates hardware, operating systems, middleware, and tools needed for next-generation SDV development.

The Whitebox SDK is built on Linux, Android and the XEN hypervisor, with additional support for AUTOSAR, EB corbos Linux, QNX, Red Hat and SafeRTOS through partners. Out of the box, developers can begin work on ADAS, L3/L4 autonomy, intelligent cockpit and gateway applications.

An integrated AI and ADAS software stack supports real-time perception and sensor fusion, while generative AI and large language models are intended to enable more advanced human–machine interfaces in future AI-driven cockpits, the company notes. The SDK also incorporates production-grade software from partners including Candera, DSP Concepts, Nullmax, Smart Eye, STRADVISION and ThunderSoft.

Sampling and demos

Renesas has begun shipping R-Car X5H silicon samples, evaluation boards, and the RoX Whitebox SDK to selected customers and partners. The company noted that it plans to showcase AI-powered multi-domain demonstrations of the platform at CES 2026.



Hi TTM,


Back in December 2022, there was an announcement about Renesas taping out Akida IP:

https://brainchip.com/renesas-is-ta...etwork-snn-technology-developed-by-brainchip/

eeNews Europe — Renesas is taping out a chip using the spiking neural network (SNN) technology developed by Brainchip.​

Dec 2, 2022 – Nick Flaherty

This is part of a move to boost the leading-edge performance of its chips for the Internet of Things, Sailesh Chittipeddi, Executive Vice President and General Manager of the IoT and Infrastructure Business Unit at Renesas Electronics and former CEO of IDT, tells eeNews Europe.

This strategy has seen the company develop the first silicon for ARM’s M85 and RISC-V cores, along with new capacity and foundry deals.

“We are very happy to be at the leading edge and now we have made a rapid transition to address our ARM shortfall but we realise the challenges in the marketplace and introduced the RISC-V products to make sure we don’t fall behind in the new architectures,” he said.

“Our next move is to more advanced technology nodes to push the microcontrollers into the gigahertz regime and that's where there is overlap with microprocessors. The way I look at it is all about the system performance.”

“Now you have accelerators for driving AI with neural processing units rather than a dual core CPU. We are working with a third party taping out a device in December on 22nm CMOS,” said Chittipeddi.

Brainchip and Renesas signed a deal in December 2020 to implement the spiking neural network technology. Tools are vital for this new area. “The partner gives us the training tools that are needed,” he said.


The original Renesas licence was for Akida1 IP, but by the end of 2022, TENNs would have been available for EAPs. Initially, Renesas said they would use their in-house DRP for the heavy AI loads and relegate Akida to sweeping up the swarf. They may be ruing that decision, if they stuck to the plan.

TENNs has supplanted Transformers, which replaced LSTMs, which replaced RNNs. This is a major advance in AI/NN.

Copied from elsewhere:

Update QuantizeML to version 0.18.0​

New features​

  • Introduced PleiadesLayer for SpatioTemporal TENNs on Keras
  • Keras sanitizer will now bufferize Conv3D layers, same as ONNX sanitizer. As a result, SpatioTemporal TENNs from both frameworks will be bufferized and quantized at once.
  • Dropped quantization patterns with MaxPooling and LUT activation since this is not supported in Akida
  • Dropped all transformers features

https://github.com/Brainchip-Inc/akida_examples/releases
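For anyone who wants to see what that looks like in practice, here is a minimal sketch of a spatio-temporal block built from Conv3D layers and pushed through QuantizeML. The factorized temporal-then-spatial kernel shapes and all layer sizes are my own illustrative guesses at a TENN-like topology, not BrainChip's actual architecture, and I haven't touched the new PleiadesLayer; the quantize/QuantizationParams calls follow the API documented in the akida_examples repo.

```python
# Illustrative sketch only: a TENN-flavoured spatio-temporal stack of Conv3D
# layers, quantized with QuantizeML. Layer sizes and kernel shapes are
# assumptions; whether this exact topology maps cleanly to Akida depends on
# the toolkit's supported patterns.
import keras
from quantizeml.models import quantize, QuantizationParams

inputs = keras.Input(shape=(16, 32, 32, 3))                    # (time, H, W, C)
x = keras.layers.Conv3D(8, (5, 1, 1), padding="same")(inputs)  # temporal conv
x = keras.layers.ReLU()(x)
x = keras.layers.Conv3D(8, (1, 3, 3), padding="same")(x)       # spatial conv
x = keras.layers.ReLU()(x)
x = keras.layers.GlobalAveragePooling3D()(x)
outputs = keras.layers.Dense(10)(x)
model = keras.Model(inputs, outputs)

# 8-bit quantization; per the 0.18.0 notes, the sanitizer should bufferize
# the Conv3D layers (convert them to streaming form) along the way.
qparams = QuantizationParams(weight_bits=8, activation_bits=8)
q_model = quantize(model, qparams=qparams)
q_model.summary()
```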


Google developed Transformers in 2017 as the bee's knees for NLP (natural language processing).

https://www.freecodecamp.org/news/how-transformer-models-work-for-language-processing/

September 12, 2025 · #Python

How Transformer Models Work for Language Processing​


If you’ve ever used Google Translate, skimmed through a quick summary, or asked a chatbot for help, then you’ve definitely seen Transformers at work. They’re considered the architects behind today’s biggest advances in natural language processing (NLP).

It all began with Recurrent Neural Networks (RNNs), which read text step by step. RNNs worked, but they struggled with long sentences because older context often got lost. LSTMs (Long Short-Term Memory networks) improved memory, but still processed words in sequence, slow and hard to scale.

The breakthrough came with attention: instead of moving word by word, models could directly “attend” to the most relevant parts of a sentence, no matter where they appeared. In 2017, the paper Attention Is All You Need introduced the Transformer, which replaced recurrence with attention and parallel processing. This made models faster, more accurate, and capable of learning from massive amounts of text.
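The mechanism itself is compact enough to show in a few lines. Here is a minimal NumPy sketch of scaled dot-product attention, the core operation from Attention Is All You Need (a single head, no masking or learned projections):

```python
# Scaled dot-product attention: every position scores every other position
# in parallel, then takes a softmax-weighted mix of the value vectors.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (seq_len, d_k) arrays of query / key / value vectors.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ V                                 # blend of values

rng = np.random.default_rng(0)
Q = K = V = rng.standard_normal((4, 8))                # toy 4-token sequence
print(scaled_dot_product_attention(Q, K, V).shape)     # (4, 8)
```

Because the scores for all positions are computed at once, there is no recurrence to serialize - which is exactly what made Transformers so much faster to train than LSTMs.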

TENNs may have the potential to make Transformers, LSTMs, and RNNs obsolete.

"Could be
Who knows?
There's something due any day
I will know right away soon as it shows
It may come cannonballing down through the sky
Gleam in its eye
Bright as a rose
Who knows?
It's only just out of reach
Down the block, on a beach
Under a tree
I gotta feeling there's a miracle due
Gonna come true
"

Sondheim and Bernstein couldn't have said a truer word ...
 
  • Like
  • Love
Reactions: 10 users

Diogenese

Top 20
Hi manny,

I think that your post is reinforced by the supplanting of Transformers by TENNs. In particular, a better NLP technique will improve GenAI - "world class performance".

But TENNs is not a one-trick pony. It provides the Midas touch to all AI applications.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 8 users

Gazzafish

Regular
With all the talk about Renesas taping out a few years back and Sean's current optimism, I wouldn't be surprised if BRN are earning revenue/royalties right now. We'd never know; it might be via Renesas or even MegaChips. No announcement would be warranted as it's an existing arrangement. The first we might see of it would be the quarterly report due out late January. I'm just guessing based on no knowledge. DYOR.
 
  • Wow
Reactions: 1 users
I love the optimism.
We should all believe Sean, as everything he has said so far has been so true to his word.
Come on BrainChip, let's make this year one to remember!
 
  • Like
  • Love
Reactions: 4 users

Tothemoon24

Top 20
Merry Chipmas
IMG_1943.jpeg


Traditional AI computing relies on machine learning and deep learning methods that demand significant power and memory for both training and inference.
Our researchers have developed a patented neuromorphic computing architecture based on field-programmable gate arrays (FPGAs). This architecture is designed to be parallel and modular, enabling highly efficient, brain-inspired computing directly on the device. Compared to existing techniques, this approach improves energy-per-inference by ~1500 times and latency by ~2400 times.
This paves the way for a new generation of powerful, real-time AI applications in energy-constrained environments.
Know more in the #patent- bit.ly/498XlwC
Inventors: Dhaval Shah, Sounak Dey, Meripe Ajay Kumar, Manoj Nambiar, Arpan Pal
Tata Consultancy Services
#TCSResearch #AI #NeuromorphicComputing

IMG_1942.jpeg
IMG_1940.jpeg

IMG_1941.jpeg
 

  • Like
  • Fire
  • Love
Reactions: 12 users

Diogenese

Top 20


Hi TTM,

This looks like the TCS FPGA NN patent:


US12314845B2 Field programmable gate array (FPGA) based neuromorphic computing architecture 20211014


This disclosure relates generally to a method and a system for computing using a field programmable gate array (FPGA) neuromorphic architecture. Implementing energy efficient Artificial Intelligence (AI) applications at power constrained environment/devices is challenging due to huge energy consumption during both training and inferencing. The disclosure is a FPGA architecture based neuromorphic computing platform, the basic components include a plurality of neurons and memory. The FPGA neuromorphic architecture is parameterized, parallel and modular, thus enabling improved energy/inference and Latency-Throughput. Based on values of the plurality of features of the data set, the FPGA neuromorphic architecture is generated in a modular and parallel fashion. The output of the disclosed FPGA neuromorphic architecture is the plurality of output spikes from the neuron, which becomes the basis of inference for computing.
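For orientation, the "neuron" such an architecture replicates across the fabric is typically a leaky integrate-and-fire (LIF) unit: a few operations per timestep, which is what makes massive parallel replication cheap in hardware. A minimal discrete-time sketch follows; the parameter values are illustrative (they happen to match the -52/-65 mV figures Tata quotes in the reservoir patent further down).

```python
# Minimal leaky integrate-and-fire (LIF) neuron, discrete time. Illustrative
# parameters only; an FPGA fabric replicates many of these units in parallel,
# and the output spikes are the basis of inference.
def lif_step(v, in_current, v_rest=-65.0, v_thresh=-52.0, tau=20.0, dt=1.0):
    """One update: leak toward rest, integrate input, fire on threshold."""
    v = v + (dt / tau) * (v_rest - v) + in_current
    if v >= v_thresh:
        return v_rest, 1          # reset membrane potential, emit a spike
    return v, 0

v, spikes = -65.0, []
for _ in range(50):
    v, s = lif_step(v, in_current=1.0)   # constant drive for the demo
    spikes.append(s)
print(sum(spikes), "spikes in 50 steps")
```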

I'm not sure why they need an FPGA to be reconfigurable. Akida's NPUs can be connected in any layer configuration needed.

Maybe it's to do with having "copper" interconnects rather than electronic navigation on the comms fabric? That would have potential to improve latency.

The patent dates from October 2021. It predates this Tata Elxsi announcement:

https://brainchip.com/brainchip-and...provide-intelligent-ultralow-power-solutions/

BrainChip and Tata Elxsi Partner on Intelligent Ultra-Low-Power Solutions​


Laguna Hills, Calif. – August 28, 2023 BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, welcomes leading design and technology services provider Tata Elxsi as a partner to its Essential AI ecosystem.

Akida’s fully customizable, scalable, event-based AI neural processor architecture and its small footprint boosts the efficiency of various applications by orders of magnitude. Greater AI performance at the edge, independent from the cloud, unlocks the growth of the Artificial Intelligence of Things (AIoT) market that is expected to be more than a trillion dollars by 2030.

“The combination of our user-centric design expertise with leading-edge technologies is key to helping enterprises reimagine their products and services to improve operational efficiency, reduce costs and deliver new services to their customers,” said Manoj Raghavan, CEO and MD at Tata Elxsi. “This cannot be possible without our global ecosystem of partners. By partnering with BrainChip and implementing Akida technology into medical and industrial solutions, we are able to deliver innovative solutions at a faster time to market than otherwise possible.”

“BrainChip is very aligned with Tata Elxsi’s mission to innovate with leading edge technology and deliver compelling new products and services that improve customer experience and outcomes,” said Rob Telson, Vice President of Ecosystems & Partnerships at BrainChip. “Our partnership with Tata Elxsi leverages Akida technology to transform applications and results in markets such as healthcare and industrial automation. We look forward to working with them to create new opportunities and drive growth.”


Funny - I thought we'd been with Tata longer than 2023, but as the man in black says, "Time keeps draggin' along ..."

Of course, it is possible to build Akida NPUs in an FPGA - that's how we started (Xilinx). Would it make sense to incorporate Akida NPUs in a purpose-built FPGA (cf. a general-purpose FPGA), or is that an application-specific FPGA (ASFPGA)?

Surely, if Tata are going to build an FPGA NN from the ground up, and they knew about TENNs, they would want to have TENNs in the FPGA.
 
  • Like
  • Love
  • Fire
Reactions: 6 users

Diogenese

Top 20
Interestingly, this Tata patent from April 2022 deals with time-series data:

US2023334300A1 METHODS AND SYSTEMS FOR TIME-SERIES CLASSIFICATION USING RESERVOIR-BASED SPIKING NEURAL NETWORK 20220418


The present disclosure relates to methods and systems for time-series classification using a reservoir-based spiking neural network, that can be used at edge computing applications. Conventional reservoir based SNN techniques addressed either by using non-bio-plausible backpropagation-based mechanisms, or by optimizing the network weight parameters. The present disclosure solves the technical problems of TSC, using a reservoir-based spiking neural network. According to the present disclosure, the time-series data is encoded first using a spiking encoder. Then the spiking reservoir is used to extract the spatio-temporal features for the time-series data. Lastly, the extracted spatio-temporal features of the time-series data is used to train a classifier to obtain the time-series classification model that is used to classify the time-series data in real-time, received from edge devices present at the edge computing network.

[0074] The reservoir based spiking neural network architecture of the present disclosure is implemented using BindsNet 0.2.7, a GPU-based open-source SNN simulator in Python that supports parallel computing. The parameter values for the LIF neuron (refer to equation 1) used in the experiments are: Vthresh = −52.0 mV, Vrest = −65.0 mV. Table 1 shows other important network parameters for the spiking reservoir of the present disclosure. For the Gaussian encoding, 15 input encoding neurons (i.e. m=15) are used, resulting in 15× magnification of the input timescale to spike time scale. A set of weight scalar parameters are selected for different connections between the populations to optimize the spiking reservoir performance.

... but I can't detect any link to TENNs.
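For anyone wanting to picture the pipeline the abstract describes (spiking encoder → reservoir → readout classifier), here is a minimal sketch using BindsNet, the simulator the patent cites. Only the LIF parameters (thresh −52 mV, rest −65 mV) and the 15-neuron input population come from the text; the reservoir size, weight scalars and input spikes are my own placeholders.

```python
# Sketch of the patent's reservoir pattern in BindsNet (which it cites).
# Sizes and weight scalars are illustrative, not the patent's tuned values.
import torch
from bindsnet.network import Network
from bindsnet.network.nodes import Input, LIFNodes
from bindsnet.network.topology import Connection

net = Network()
inp = Input(n=15)                                   # 15 Gaussian encoding neurons
res = LIFNodes(n=100, thresh=-52.0, rest=-65.0)     # spiking reservoir
net.add_layer(inp, name="X")
net.add_layer(res, name="R")
# Fixed random input and recurrent weights; the patent optimizes these scalars.
net.add_connection(Connection(inp, res, w=0.5 * torch.rand(15, 100)),
                   source="X", target="R")
net.add_connection(Connection(res, res, w=0.05 * torch.rand(100, 100)),
                   source="R", target="R")

# Toy spike train, shape (time, batch, neurons). In the patent this comes from
# the Gaussian spiking encoder; the reservoir's spike counts would then feed
# an ordinary classifier for the time-series label.
spikes = (torch.rand(250, 1, 15) < 0.1).float()
net.run(inputs={"X": spikes}, time=250)
```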
 
  • Like
  • Love
Reactions: 4 users

manny100

Top 20
Interesting chart. MACD bullish divergence.
4 touches of the highs of the downtrend line since Oct'25 followed by a break of the trendline today.
The trend is still down but the MACD divergence indicates a momentum shift may be beginning.
It's wait and see over the next few days.
[Chart: BRN daily - downtrend line from Oct '25 with MACD bullish divergence]
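For anyone who wants the mechanics behind the signal: MACD is just the gap between a fast and a slow exponential moving average, with a signal line that is an EMA of that gap. A minimal pandas sketch (the 12/26/9 spans are the conventional defaults; the function name is mine):

```python
# MACD = fast EMA minus slow EMA; signal = EMA of the MACD line.
# Bullish divergence: price prints a lower low while the MACD line prints a
# higher low, suggesting downside momentum is fading.
import pandas as pd

def macd(close: pd.Series, fast: int = 12, slow: int = 26, signal: int = 9):
    fast_ema = close.ewm(span=fast, adjust=False).mean()
    slow_ema = close.ewm(span=slow, adjust=False).mean()
    macd_line = fast_ema - slow_ema
    signal_line = macd_line.ewm(span=signal, adjust=False).mean()
    return macd_line, signal_line, macd_line - signal_line  # + histogram

# Usage: m, s, hist = macd(daily_close_series)
```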
 
  • Like
Reactions: 4 users
May I ask, do we know if Tata are working with any other neuromorphic compute company apart from BrainChip?
 

Tothemoon24

Top 20
IMG_1946.jpeg
IMG_1948.jpeg
IMG_1947.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 4 users