BRN Discussion Ongoing

manny100

Top 20
Just revisiting the Sean video with 'Stocks Down Under' in Oct '25.
From the 5.18 mark Sean talks about customers 12 to 18 months ago being prepared to have a look at AKIDA and explore.
He says clients are now coming in with specific plans, saying "we know what we want".
At the 6.20 mark he talks about wearables models and engagements that are progressing very well - note the plural, engagements. He said we can expect to see a lot more activity in wearables.
Backs up Steve Brightfield's recent remarks.
6.50 mark. From here Sean addresses interest in radar, starting with defense. Most radar is decades old and needs innovation. The second point he made concerning radar was the mobility trend. Drones have changed everything about defense. There is more mobility around everything, including soldiers, e.g. carrying 'things' - likely meaning small devices. Solutions need long battery life and no reliance on a network. BrainChip is leaning hard into 'mobile' solutions.
They are working with some interesting clients on mobile radar solutions, and we can expect them to push harder into that market.
From about 8.05: what can we expect in the next 12 to 18 months?
Tech advancements. Gen AI supporting LLMs on the Edge and "a very exciting configuration with world class breakthrough performance" - built with customer interaction and feedback.
8.45 - Talks about the commercial side. Chip orders and Gen 2 - "down the track with extensive evaluations with certain customers right now so look for some licensees on that as well".
Given Steve Brightfield's very recent revealing interview, and looking back a couple of months to Sean's video (link below), it's all starting to come together - still early in the piece, but you can sense it is all getting close.
 
  • Like
  • Love
Reactions: 15 users

Diogenese

Top 20

Renesas advances SDV roadmap with 3nm R-Car Gen 5 platform

New Products | December 22, 2025
By eeNews Europe
Automotive | SDV | Renesas | Embedded Design



Renesas Electronics has taken another step in its software-defined vehicle (SDV) strategy with the expansion of its fifth-generation R-Car platform, built around a new multi-domain automotive SoC and a broader end-to-end development environment. According to the company, the move is aimed squarely at accelerating the design of highly integrated, AI-driven vehicle architectures.


For eeNews Europe readers, the announcement is relevant because it highlights how leading silicon vendors are converging ADAS, infotainment, and gateway workloads onto a single, safety-capable compute platform. It also offers insight into how open software stacks and early silicon access are being used to shorten automotive development cycles.

3nm multi-domain compute for SDVs

At the core of the platform is the R-Car X5H, the latest device in the Gen 5 R-Car family. According to Renesas, it is the industry’s first multi-domain automotive SoC manufactured on a 3nm process. The company indicates that the device is designed to run advanced driver assistance systems (ADAS), in-vehicle infotainment (IVI), and gateway functions concurrently on a single chip.

The company says the move to an advanced process node delivers up to 35% lower power consumption compared with previous 5nm solutions, while significantly increasing compute density. The SoC targets centralized SDV architectures, combining high performance with mixed-criticality support so that safety-related and non-safety workloads can coexist without compromise.


The R-Car X5H delivers up to 400 TOPS of AI performance, with the option to scale further using chiplet-based accelerators that can boost AI throughput by four times or more. Graphics performance reaches the equivalent of 4 TFLOPS, supported by a CPU complex of 32 Arm Cortex-A720AE cores and six Cortex-R52 lockstep cores with ASIL D capability, providing more than 1,000k DMIPS.

RoX Whitebox SDK targets faster development

Alongside the new silicon, Renesas says it is expanding its R-Car Open Access (RoX) development platform with the RoX Whitebox Software Development Kit. The SDK is positioned as an open, scalable environment that integrates hardware, operating systems, middleware, and tools needed for next-generation SDV development.

The Whitebox SDK is built on Linux, Android and the XEN hypervisor, with additional support for AUTOSAR, EB corbos Linux, QNX, Red Hat and SafeRTOS through partners. Out of the box, developers can begin work on ADAS, L3/L4 autonomy, intelligent cockpit and gateway applications.

An integrated AI and ADAS software stack supports real-time perception and sensor fusion, while generative AI and large language models are intended to enable more advanced human–machine interfaces in future AI-driven cockpits, the company notes. The SDK also incorporates production-grade software from partners including Candera, DSP Concepts, Nullmax, Smart Eye, STRADVISION and ThunderSoft.

Sampling and demos

Renesas has begun shipping R-Car X5H silicon samples, evaluation boards, and the RoX Whitebox SDK to selected customers and partners. The company noted that it plans to showcase AI-powered multi-domain demonstrations of the platform at CES 2026.



Hi TTM,


Back in December 2022, there was an announcement about Renesas taping out Akida IP:

https://brainchip.com/renesas-is-ta...etwork-snn-technology-developed-by-brainchip/

eeNews Europe — Renesas is taping out a chip using the spiking neural network (SNN) technology developed by Brainchip.

Dec 2, 2022 – Nick Flaherty

This is part of a move to boost the leading edge performance of its chips for the Internet of Things, Sailesh Chittipeddi, Executive Vice President and General Manager of the IoT and Infrastructure Business Unit at Renesas Electronics and former CEO of IDT, tells eeNews Europe.

This strategy has seen the company develop the first silicon for ARM’s M85 and RISC-V cores, along with new capacity and foundry deals.

“We are very happy to be at the leading edge and now we have made a rapid transition to address our ARM shortfall but we realise the challenges in the marketplace and introduced the RISC-V products to make sure we don’t fall behind in the new architectures,” he said.

“Our next move is to more advanced technology nodes to push the microcontrollers into the gigahertz regime and that's where there is overlap with microprocessors. The way I look at it is all about the system performance.”

“Now you have accelerators for driving AI with neural processing units rather than a dual core CPU. We are working with a third party taping out a device in December on 22nm CMOS,” said Chittipeddi.

Brainchip and Renesas signed a deal in December 2020 to implement the spiking neural network technology. Tools are vital for this new area. “The partner gives us the training tools that are needed,” he said.


The original Renesas licence was for Akida 1 IP, but by the end of 2022, TENNs would have been available for EAPs. Initially Renesas said they would use their in-house DRP for the heavy AI loads and relegate Akida to sweeping the swarf. They may be ruing that decision if they stuck to the plan.

TENNs has supplanted Transformers, which replaced LSTMs, which replaced RNNs. This is a major advance in AI/NN.

Copied from elsewhere:

Update QuantizeML to version 0.18.0

New features

  • Introduced PleiadesLayer for SpatioTemporal TENNs on Keras
  • Keras sanitizer will now bufferize Conv3D layers, same as ONNX sanitizer. As a result, SpatioTemporal TENNs from both frameworks will be bufferized and quantized at once.
  • Dropped quantization patterns with MaxPooling and LUT activation since this is not supported in Akida
  • Dropped all transformers features

https://github.com/Brainchip-Inc/akida_examples/releases
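For anyone wanting to see what the Conv3D bufferization item above looks like in practice, here is a minimal sketch of the usual QuantizeML flow. The toy Conv3D model is my own stand-in, not a real TENN; quantize and QuantizationParams are per the QuantizeML docs, though exact parameter names can vary between versions:

```python
import tensorflow as tf
from quantizeml.models import quantize, QuantizationParams

# Toy spatiotemporal model: Conv3D over (time, height, width, channels).
# A real SpatioTemporal TENN is built with BrainChip's tooling; this is only a stand-in.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(16, 32, 32, 3)),
    tf.keras.layers.Conv3D(8, kernel_size=(5, 3, 3), activation="relu"),
    tf.keras.layers.GlobalAveragePooling3D(),
    tf.keras.layers.Dense(10),
])

# 8-bit quantization. Per the 0.18.0 notes, the Keras sanitizer now bufferizes
# Conv3D layers (as the ONNX sanitizer already did), so the spatiotemporal
# model is bufferized and quantized in the one quantize() call.
qparams = QuantizationParams(weight_bits=8, activation_bits=8)
quantized = quantize(model, qparams=qparams)
quantized.summary()
```

As I understand it, the point of bufferization is that a 3D convolution over a time window is rewritten as a streaming, frame-by-frame computation, which is what lets the network run causally on live sensor data.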


Google developed Transformers in 2017 as the bee's knees for NLP (natural language processing).

https://www.freecodecamp.org/news/how-transformer-models-work-for-language-processing/

September 12, 2025 / #Python

How Transformer Models Work for Language Processing


If you’ve ever used Google Translate, skimmed through a quick summary, or asked a chatbot for help, then you’ve definitely seen Transformers at work. They’re considered the architects behind today’s biggest advances in natural language processing (NLP).

It all began with Recurrent Neural Networks (RNNs), which read text step by step. RNNs worked, but they struggled with long sentences because older context often got lost. LSTMs (Long Short-Term Memory networks) improved memory, but still processed words in sequence, slow and hard to scale.

The breakthrough came with attention: instead of moving word by word, models could directly “attend” to the most relevant parts of a sentence, no matter where they appeared. In 2017, the paper Attention Is All You Need introduced the Transformer, which replaced recurrence with attention and parallel processing. This made models faster, more accurate, and capable of learning from massive amounts of text.
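As a concrete illustration, this is the core of that attention operation - a minimal numpy sketch of scaled dot-product self-attention as described in the 2017 paper (dimensions and data here are made up):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V - the core operation from 'Attention Is All You Need'."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # how relevant each key is to each query
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V                                # weighted mix of the values

# Self-attention: every token attends to every other token in one parallel step,
# instead of stepping through the sequence the way an RNN/LSTM must.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))                      # 5 tokens, 8-dim embeddings
out = scaled_dot_product_attention(tokens, tokens, tokens)
```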

TENNs may have the potential to make Transformers, LSTM, and RNN obsolete.

"Could be
Who knows?
There's something due any day
I will know right away soon as it shows
It may come cannonballing down through the sky
Gleam in its eye
Bright as a rose
Who knows?
It's only just out of reach
Down the block, on a beach
Under a tree
I gotta feeling there's a miracle due
Gonna come true
"

Sondheim and Bernstein couldn't have said a truer word ...
 
  • Like
  • Love
  • Fire
Reactions: 15 users

Diogenese

Top 20
[Quoted manny100's 'Stocks Down Under' post above.]
Hi manny,

I think that your post is reinforced by the supplanting of Transformers by TENNs. In particular, a better NLP technique will improve GenAI - "world class performance".

But TENNs is not a one-trick pony. It provides the Midas touch to all AI applications.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 13 users

Gazzafish

Regular
With all the talk about Renesas taping out a few years back and Sean's current optimism, I wouldn't be surprised if BRN are earning revenue/royalties right now. We'd never know; it might be via Renesas or even MegaChips. No announcement would be warranted as it's an existing arrangement. The first we might see of it would be the quarterly report due out late January. I'm just guessing based on no knowledge. DYOR
 
  • Like
  • Wow
  • Fire
Reactions: 4 users
[Quoted Gazzafish's post above.]
I love the optimism
We should all believe Sean as everything he has said so far has been so true to his word
Come on Brainchip let’s make this year one to remember
 
  • Like
  • Love
Reactions: 7 users

Tothemoon24

Top 20
Merry Chipmas


Traditional AI computing relies on machine learning and deep learning methods that demand significant power and memory for both training and inference.
Our researchers have developed a patented neuromorphic computing architecture based on field-programmable gate arrays (FPGAs). This architecture is designed to be parallel and modular, enabling highly efficient, brain-inspired computing directly on the device. Compared to existing techniques, this approach improves energy-per-inference by ~1500 times and latency by ~2400 times.
This paves the way for a new generation of powerful, real-time AI applications in energy-constrained environments.
Know more in the #patent- bit.ly/498XlwC
Inventors: Dhaval Shah, Sounak Dey, Meripe Ajay Kumar, Manoj Nambiar, Arpan Pal
Tata Consultancy Services
#TCSResearch #AI #NeuromorphicComputing

 

  • Like
  • Fire
  • Love
Reactions: 28 users

Diogenese

Top 20
[Quoted Tothemoon24's TCS neuromorphic FPGA post above.]


Hi TTM,

This looks like the TCS FPGA NN patent:


US12314845B2 Field programmable gate array (FPGA) based neuromorphic computing architecture 20211014






This disclosure relates generally to a method and a system for computing using a field programmable gate array (FPGA) neuromorphic architecture. Implementing energy efficient Artificial Intelligence (AI) applications at power constrained environment/devices is challenging due to huge energy consumption during both training and inferencing. The disclosure is a FPGA architecture based neuromorphic computing platform, the basic components include a plurality of neurons and memory. The FPGA neuromorphic architecture is parameterized, parallel and modular, thus enabling improved energy/inference and Latency-Throughput. Based on values of the plurality of features of the data set, the FPGA neuromorphic architecture is generated in a modular and parallel fashion. The output of the disclosed FPGA neuromorphic architecture is the plurality of output spikes from the neuron, which becomes the basis of inference for computing.

I'm not sure why they need an FPGA to be reconfigurable. Akida's NPUs can be connected in any layer configuration needed.

Maybe it's to do with having "copper" interconnects rather than electronic navigation on the comms fabric? That would have potential to improve latency.

The patent dates from October 2021. It predates this Tata Elxsi announcement:

https://brainchip.com/brainchip-and...provide-intelligent-ultralow-power-solutions/

BrainChip and Tata Elxsi Partner on Intelligent Ultra-Low-Power Solutions


Laguna Hills, Calif. – August 28, 2023 BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, welcomes leading design and technology services provider Tata Elxsi as a partner to its Essential AI ecosystem.

Akida’s fully customizable, scalable, event-based AI neural processor architecture and its small footprint boosts the efficiency of various applications by orders of magnitude. Greater AI performance at the edge, independent from the cloud, unlocks the growth of the Artificial Intelligence of Things (AIoT) market that is expected to be more than a trillion dollars by 2030.

“The combination of our user-centric design expertise with leading-edge technologies is key to helping enterprises reimagine their products and services to improve operational efficiency, reduce costs and deliver new services to their customers,” said Manoj Raghavan, CEO and MD at Tata Elxsi. “This cannot be possible without our global ecosystem of partners. By partnering with BrainChip and implementing Akida technology into medical and industrial solutions, we are able to deliver innovative solutions at a faster time to market than otherwise possible.”

“BrainChip is very aligned with Tata Elxsi’s mission to innovate with leading edge technology and deliver compelling new products and services that improve customer experience and outcomes,” said Rob Telson, Vice President of Ecosystems & Partnerships at BrainChip. “Our partnership with Tata Elxsi leverages Akida technology to transform applications and results in markets such as healthcare and industrial automation. We look forward to working with them to create new opportunities and drive growth.”


Funny - I thought we'd been with Tata longer than 2023, but as the man in black says, "Time keeps draggin' along ..."

Of course, it is possible to build Akida NPUs in FPGA - that's how we started (Xilinx). Would it make sense to incorporate Akida NPUs in a purpose built FPGA (cf a general purpose FPGA), or is that an Application Specific FPGA (ASFPGA)?

Surely, if Tata are going to build an FPGA NN from the ground up, and they knew about TENNs, they would want to have TENNs in the FPGA.
 
  • Like
  • Love
  • Fire
Reactions: 15 users

Diogenese

Top 20
[Quoted Diogenese's TCS FPGA patent post above.]
Interestingly, this Tata patent from mid-2022 deals with time-series data:

US2023334300A1 METHODS AND SYSTEMS FOR TIME-SERIES CLASSIFICATION USING RESERVOIR-BASED SPIKING NEURAL NETWORK 20220418



The present disclosure relates to methods and systems for time-series classification using a reservoir-based spiking neural network, that can be used at edge computing applications. Conventional reservoir based SNN techniques addressed either by using non-bio-plausible backpropagation-based mechanisms, or by optimizing the network weight parameters. The present disclosure solves the technical problems of TSC, using a reservoir-based spiking neural network. According to the present disclosure, the time-series data is encoded first using a spiking encoder. Then the spiking reservoir is used to extract the spatio-temporal features for the time-series data. Lastly, the extracted spatio-temporal features of the time-series data is used to train a classifier to obtain the time-series classification model that is used to classify the time-series data in real-time, received from edge devices present at the edge computing network.

[0074] The reservoir based spiking neural network architecture of the present disclosure is implemented using BindsNet 0.2.7, a GPU-based open-source SNN simulator in Python that supports parallel computing. The parameter values for the LIF neuron (refer to equation 1) used in the experiments are: Vthresh =−52.0 mV, Vrest =−65.0 mV. Table 1 shows other important network parameters for the spiking reservoir of the present disclosure. For the Gaussian encoding, 15 input encoding neurons (i.e. m=15) are used, resulting in 15× magnification of the input timescale to spike time scale. A set of weight scalar parameters are selected for different connections between the populations to optimize the spiking reservoir performance.

... but I can't detect any link to TENNs.
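For anyone who wants to poke at that setup, here is a minimal BindsNet sketch of an input-driven spiking reservoir using the LIF values quoted in [0074]. The layer sizes, weights and input are my own illustrative choices (the patent's Table 1 values aren't reproduced above), and argument names follow current BindsNet rather than the 0.2.7 release the patent cites:

```python
import torch
from bindsnet.network import Network
from bindsnet.network.nodes import Input, LIFNodes
from bindsnet.network.topology import Connection
from bindsnet.network.monitors import Monitor

torch.manual_seed(0)
n_in, n_res, sim_time = 15, 100, 250               # 15 Gaussian encoding neurons, per [0074]

net = Network(dt=1.0)
inp = Input(n=n_in)
res = LIFNodes(n=n_res, thresh=-52.0, rest=-65.0)  # Vthresh / Vrest from the patent
net.add_layer(inp, name="I")
net.add_layer(res, name="R")

# Input-to-reservoir plus sparse random recurrence - the "reservoir" part.
net.add_connection(Connection(inp, res, w=0.3 * torch.rand(n_in, n_res)),
                   source="I", target="R")
net.add_connection(Connection(res, res, w=0.05 * (torch.rand(n_res, n_res) < 0.1).float()),
                   source="R", target="R")

net.add_monitor(Monitor(res, state_vars=("s",)), name="spikes")

# Random input spikes stand in for the Gaussian-encoded time series.
spikes_in = (torch.rand(sim_time, 1, n_in) < 0.1).float()   # [time, batch, neurons]
net.run(inputs={"I": spikes_in}, time=sim_time)

# Per-neuron spike counts are the spatio-temporal features fed to the classifier.
features = net.monitors["spikes"].get("s").sum(dim=0).squeeze()
```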
 
  • Like
  • Love
  • Thinking
Reactions: 9 users

manny100

Top 20
Interesting chart: MACD bullish divergence.
Four touches of the highs of the downtrend line since Oct '25, followed by a break of the trendline today.
The trend is still down, but the MACD divergence indicates a momentum shift may be beginning.
It's wait and see over the next few days.
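For reference, MACD and its signal line take only a few lines of pandas. A minimal sketch with the standard 12/26/9 parameters; the divergence test is one common rule of thumb (price makes a lower low while MACD makes a higher low), not necessarily the exact read above:

```python
import pandas as pd

def macd(close: pd.Series, fast: int = 12, slow: int = 26, signal: int = 9):
    """MACD line, signal line and histogram from a closing-price series."""
    ema_fast = close.ewm(span=fast, adjust=False).mean()
    ema_slow = close.ewm(span=slow, adjust=False).mean()
    macd_line = ema_fast - ema_slow
    signal_line = macd_line.ewm(span=signal, adjust=False).mean()
    return macd_line, signal_line, macd_line - signal_line

def bullish_divergence(close: pd.Series, macd_line: pd.Series, lookback: int = 20) -> bool:
    """Crude check: price sets a lower low while MACD sets a higher low."""
    p_prev, p_recent = close.iloc[-2 * lookback:-lookback], close.iloc[-lookback:]
    m_prev, m_recent = macd_line.iloc[-2 * lookback:-lookback], macd_line.iloc[-lookback:]
    return bool(p_recent.min() < p_prev.min() and m_recent.min() > m_prev.min())
```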
 
  • Like
  • Haha
Reactions: 10 users

Tothemoon24

Top 20
 
  • Like
  • Fire
  • Love
Reactions: 32 users
[Quoted Diogenese's Tata time-series patent post above.]
As a layman, may I ask: do you know if Tata has any other neuromorphic compute partners apart from BrainChip? AI doesn't seem to think so, which is great.
 
  • Like
Reactions: 5 users

Tothemoon24

Top 20



Arm Holdings has positioned itself at the centre of AI transformation. In a wide-ranging podcast interview, Vince Jesaitis, head of global government affairs at Arm, offered enterprise decision-makers a look into the company’s international strategy, the evolution of AI as the company sees it, and what lies ahead for the industry.

From cloud to edge

Arm thinks the AI market is about to enter a new phase, moving from cloud-based processing to edge computing. While much of the media’s attention has been focused to date on massive data centres, with models trained in and accessed from the cloud, Jesaitis said that most AI compute, especially inference tasks, is likely to be increasingly decentralised.

“The next ‘aha’ moment in AI is when local AI processing is being done on devices you couldn’t have imagined before,” Jesaitis said. These devices range from smartphones and earbuds to cars and industrial sensors. Arm’s IP is already embedded, literally, in these devices – it’s a company that only in the last year has been the IP behind over 30 billion chips, placed in devices of every conceivable description, all over the world.

The deployment of AI in edge environments has several benefits, with the team at Arm citing three main ‘wins’. Firstly, the inherent efficiency of low-power Arm chips means that power bills for running compute and cooling are lower. That keeps the environmental footprint of the technology as small as possible.

Secondly, putting AI in local settings means latency is much lower (with latency determined by the distance between local operations and the site of the AI model). Arm points to uses like instant translation, dynamic scheduling of control systems, and features like the near-immediate triggering of safety functions – for instance in IIoT settings.

Thirdly, ‘keeping it local’ means there’s no potentially sensitive data sent off-premise. The benefits are obvious for any organisation in highly-regulated industries, but the increasing number of data breaches means even companies operating with relatively benign data sets are looking to reduce their attack surface.
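On the second of those wins, a back-of-envelope illustration of the distance argument (my numbers, not Arm's): light in optical fibre propagates at roughly 200,000 km/s, so distance alone puts a hard floor under round-trip latency before any queuing or compute is counted:

```python
# Physical lower bound on round-trip network latency (illustrative figures only).
FIBRE_SPEED_KM_S = 200_000  # light in fibre travels at roughly two-thirds of c

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time to a server distance_km away, ignoring all processing."""
    return 2 * distance_km / FIBRE_SPEED_KM_S * 1_000

for d in (10, 100, 1_000):  # nearby edge site vs regional vs distant cloud region
    print(f"{d:>5} km -> at least {min_round_trip_ms(d):.1f} ms round trip")
# On-device inference removes this floor entirely - the edge-AI latency 'win'.
```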

Arm silicon, optimised for power-constrained devices, makes it well-suited for compute where it’s needed on the ground, the company says. The future may well be one where AI is found woven throughout environments, not centralised in a data centre run by one of the large providers.

Arm and global governments

Arm is actively engaged with global policymakers, considering this level of engagement an important part of its role. Governments continue to compete to attract semiconductor investment, the issues of supply chain and concentrated dependencies still fresh in many policymakers’ memories from the time of the COVID pandemic.

Arm lobbies for workforce development, working at present with policy-makers in the White House on an education coalition to build an ‘AI-ready workforce’. Domestic independence in technology relies as much on the abilities of the workforce as it does on the availability of hardware.

Jesaitis noted a divergence between regulatory environments: the US prioritises what the government there terms acceleration and innovation, while the EU leads on safety, privacy, security and legally-enforced standards of practice. Arm aims to find the middle ground between these approaches, building products that meet stringent global compliance needs, yet furthering advances in the AI industry.

The enterprise case for edge AI

The case for integrating Arm’s edge-focused AI architecture into enterprise transformation strategies can be persuasive. The company stresses its ability to offer scalable AI without the need to centralise to the cloud, and is also pushing its investment in hardware-level security. That means issues like memory exploits (outside of the control of users plugged into centralised AI models) can be avoided.

Of course, sectors already highly-regulated in terms of data practices are unlikely to experience relaxed governance in the future – the opposite is pretty much inevitable. All industries will be seeing more regulation and greater penalties for non-compliance in the years to come. However, to balance that, there are significant competitive advantages available to those that can demonstrate their systems’ inherent safety and security. It’s into this regulatory landscape that Arm sees itself and local, edge AI fitting.

Additionally, in Europe and Scandinavia, ESG goals are going to be increasingly important. Here, the power-sipping nature of Arm chips offers big advantages. That’s a trend that even the US hyperscalers are responding to: AWS’s latest SHALAR range of low-cost, low-power Arm-based platforms is there to satisfy that exact demand.

Arm’s collaboration with cloud hyperscalers such as AWS and Microsoft produces chips that combine efficiency with the necessary horsepower for AI applications, the company says.

What’s next from Arm and the industry

Jesaitis pointed out several trends that enterprises may be seeing in the next 12 to 18 months. Global AI exports, particularly from the US and Middle East, are ensuring that local demand for AI can be satisfied by the big providers. Arm is a company that can supply both big providers in these contexts (as part of their portfolios of offerings) and satisfy the rising demand for edge-based AI.

Jesaitis also sees edge AI as something of the hero of sustainability in an industry increasingly under fire for its ecological impact. Because Arm technology’s biggest market has been in low-power compute for mobile, it’s inherently ‘greener’. As enterprises hope to meet energy goals without sacrificing compute, Arm offers a way that combines performance with responsibility.

Redefining “smart”

Arm’s vision of AI at the edge means computers and the software running on them can be context-aware, cheap to run, secure by design, and – thanks to near-zero network latency – highly-responsive. Jesaitis said, “We used to call things ‘smart’ because they were online. Now, they’re going to be truly intelligent.”
 
  • Like
  • Fire
  • Love
Reactions: 14 users

TheDrooben

Pretty Pretty Pretty Pretty Good





Happy as Larry (also still watching closely)

 
  • Like
  • Fire
  • Love
Reactions: 13 users

Esq.111

Fascinatingly Intuitive.
Good Morning Chippers ,

Just a quick thank you to all; the collective sharing of information once again has been vast and informative.

Wishing all an enjoyable break & prosperous new year.

Regards,
Esq.
 
  • Like
  • Love
  • Fire
Reactions: 43 users

Sirod69

bavarian girl ;-)
I wish you all a Merry Christmas with all my heart.
We are having such bad times, and that is precisely why we should see a day like this as very valuable.
 
  • Like
  • Love
  • Sad
Reactions: 32 users
[Quoted Tothemoon24's TCS neuromorphic FPGA post above.]
This has to be BrainChip, doesn't it?

When asking AI if Tata is working with any other neuromorphic company, the answer is NO.

THIS IS HUGE IF IT'S BRAINCHIP... MY GUT SAYS YES.
 
  • Like
  • Fire
Reactions: 6 users

Guzzi62

Regular
[Quoted the post above.]
Let's hope so, but:

8/12-2025
Tata and Intel Announce Strategic Alliance to Establish Silicon and Compute Ecosystem in India
Exploring Strategic Collaboration for Silicon and Systems Manufacturing, Packaging, and AI Compute Market Development.


Okay, nothing about neuromorphic but still, not good IMO.

We can hope Tata has progressed so much in research with Akida that they are committed?

Edit: If they don't want to wait years for Intel's Loihi to reach the silicon stage, they have no choice but to go with BrainChip!
 
  • Like
Reactions: 5 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Merry Christmas Brain Fam!

2025 felt a lot like a dramatic tablecloth pull - lots of anticipation, but in the end... nothing moved.

Here’s hoping 2026 is the year the tablecloth finally comes off and "exposes" something truly scandalous - a share price
that rises with confidence, pointing firmly north (no pun intended). 🤭

Wishing everyone a safe and happy holiday.

B 🎄


 
  • Like
  • Love
  • Haha
Reactions: 33 users

buena suerte :-)

BOB Bank of Brainchip
Have a fantastic Christmas spending precious time with family and friends, and may us 'Chippers' get some seriously (much needed!!) good news early in the new year and start it off with a bang!! $$$$$$$$$$$$$$$ 🙏🙏🙏




CHEERS ALL :)
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 24 users