BRN Discussion Ongoing

DK6161

Regular
sarcastic laugh GIF
 

7für7

Top 20
It always dumps faster than it climbs… takes two days to gain 2 cents, and just four hours to drop 3. Same pattern over and over again. But I’m really looking forward to the day when the shorters get completely wiped out …. no excuses, just boom.

Tired Drama GIF by Channel 7
 
Last edited:
  • Like
  • Fire
Reactions: 8 users

itsol4605

Regular
It always dumps faster than it climbs… takes two days to gain 2 cents, and just four hours to drop 3. Same pattern over and over again. But I’m really looking forward to the day when the shorters get completely wiped out …. no excuses, just boom.

Tired Drama GIF by Channel 7
Brand new insight from our stock market guru
 
  • Haha
Reactions: 5 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I just stumbled across an OpenAI job posting on LinkedIn for an edge computing role!

The role is for “Senior Research Engineer — Edge, Future of Computing.” It calls for "experience shipping AI products with models in compute-constrained environments" and "working with compute-constrained hardware for inference".

Since the job title mentions "edge", I'm taking it to mean on-device/edge. And if “compute-constrained” means tight power, memory, and thermal budgets with real-time requirements, then Akida fits that category.

Either way, it's great to see OpenAI thinking beyond the cloud and leaning into edge AI.





[Screenshot of the OpenAI job posting]
 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 26 users

Frangipani

Top 20
Another highly entertaining eejournal.com article featuring BrainChip by Max Maxfield:



October 9, 2025

Bodacious Buzz on the Brain-Boggling Neuromorphic Brain Chip Battlefront​

by Max Maxfield

Hold onto your hippocampus because the latest neuromorphic marvels are firing on all synapses. To ensure we’re all tap-dancing to the same skirl of the bagpipes, let’s remind ourselves that the term “neuromorphic” is a portmanteau that combines the Greek words “neuro” (relating to nerves or the brain) and “morphic” (relating to form or structure).

Thus, “neuromorphic” literally means “in the form of the brain.” In turn, “neuromorphic computing” refers to electronic systems inspired by the human brain’s functioning. Instead of processing data step-by-step, like traditional computers, neuromorphic chips attempt to mimic how neurons and synapses communicate—utilizing spikes of electrical activity, massive parallelism, and event-driven operation.

The focus of this column is on hardware accelerator intellectual property (IP) functions—specifically neural processing units (NPUs)—that designers can incorporate into their System-on-Chip (SoC) devices. Some SoC developers use third-party NPU IPs, while others develop their own IPs in-house.

I was just chatting with Steve Brightfield, who is CMO at BrainChip. As you may recall, BrainChip’s claim to fame is its Akida AI acceleration processor IP, which is inspired by the human brain’s cognitive capabilities and energy efficiency. Akida delivers low-power, real-time AI processing at the edge, utilizing neuromorphic principles for applications such as vision, audio, and sensor fusion.

The vast majority of NPU IPs accelerate artificial neural networks (ANNs) using large arrays of multiply–accumulate (MAC) units. These dense matrix–vector operations are energy-hungry because every neuron participates in every computation, and the hardware must move a lot of data between memory and the MAC array.

By comparison, Akida employs a neuromorphic architecture based on spiking neural networks (SNNs). Akida’s neurons don’t constantly compute weighted sums; instead, they exchange spikes (brief digital pulses) only when their internal “membrane potential” crosses a threshold. This makes Akida event-driven; that is, computation only occurs when new information is available to process.

In contrast to the general-purpose MAC arrays found in conventional NPUs, Akida utilizes synaptic kernels that perform weighted event accumulation upon the arrival of spikes. Each synapse maintains a small local weight and adds its contribution to a neuron’s membrane potential when it receives a spike. This achieves the same effect as multiply-accumulate, but in a sparse, asynchronous, and energy-efficient manner that’s more akin to a biological brain.
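As a toy illustration of the difference, here is a minimal event-driven sketch in Python. It is generic, not BrainChip’s implementation, and it uses a dense NumPy weight matrix purely for brevity (real neuromorphic hardware keeps small local weights per synapse); the point is simply that the inner loop runs once per input spike, so a quiet input costs almost nothing.

```python
import numpy as np

# Toy sketch of event-driven processing: work happens only when an input spike
# arrives, instead of a dense matrix-vector product every clock cycle.
rng = np.random.default_rng(0)

n_in, n_out = 256, 64
weights = rng.normal(0.0, 0.2, size=(n_in, n_out))   # per-synapse weights
membrane = np.zeros(n_out)                            # neuron "membrane potentials"
threshold = 1.0

def step(input_spikes, membrane):
    """One timestep: accumulate weighted events, fire and reset any neuron over threshold."""
    for i in input_spikes:                 # only active synapses do any work
        membrane += weights[i]             # weighted event accumulation
    fired = np.flatnonzero(membrane >= threshold)
    membrane[fired] = 0.0                  # reset the neurons that fired
    return fired

# Suppose only ~5% of inputs spike this timestep -> only ~5% of the usual MAC work.
active = rng.choice(n_in, size=n_in // 20, replace=False)
out_spikes = step(active, membrane)
print(f"{active.size} input spikes -> {out_spikes.size} output spikes")
```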

Akida self-contained AI acceleration processor IP (Source: BrainChip)

According to BrainChip’s website, the Akida self-contained AI neural processor IP features the following:
  • Scalable fabric of 1 to 128 nodes
  • Each neural node supports 128 MACs
  • Configurable 50K to 130K embedded local SRAM per node
  • DMA for all memory and model operations
  • Multi-layer execution without host CPU
  • Integrate with any Microcontroller or Application Processor
  • Efficient algorithmic mesh

Hang on! I just told you that, “In contrast to the general-purpose MAC arrays found in conventional NPUs, Akida utilizes synaptic kernels…” So, it’s a tad embarrassing to see the folks at BrainChip referencing MACs on their website. The thing is that, in the case of the Akida, the term “MAC” is used somewhat loosely—more as an engineering shorthand than as a literal, synchronous multiply–accumulate unit like those found in conventional GPUs and NPUs.

While each Akida neural node contains hardware that can perform multiply-accumulate operations, these operations are event-driven and sparsely activated. When an input spike arrives, only the relevant synapses and neurons in that node perform a small weighted accumulation—there’s no continuous clocked matrix multiplication going on in the background.

So, while BrainChip’s documentation calls them “MACs,” they’re actually implemented as neuromorphic synaptic processors that behave like a MAC when a spike fires while remaining idle otherwise. This is how the Akida achieves orders-of-magnitude lower power consumption than conventional NPUs, despite performing similar mathematical operations in principle.

Another way to think about this is that a conventional MAC array crunches numbers continuously, with every neuron participating in every cycle. By comparison, an Akida node’s neuromorphic synapses sit dormant, only springing into action when a spike arrives, performing their math locally, and then quieting down again. If I were waxing poetical, I might be tempted to say something pithy at this juncture, like “more firefly than furnace,” but I’m not, so I won’t.

But wait, there’s more, because the Akida processor IP uses sparsity to focus on the most important data, inherently avoiding unnecessary computation and saving energy at every step. Meanwhile, BrainChip’s neural network model, known as Temporal Event-based Neural Networks (TENNs), builds on a state-space model architecture to track events over time, rather than sampling at fixed intervals, thereby skipping periods of no change to conserve energy and memory. Together, these little scamps deliver unmatched efficiency for real-time AI.
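For readers who haven’t met state-space models, the generic discrete-time form looks like this (a textbook formulation; the specifics of TENNs are BrainChip’s own and are not spelled out here):

    h(t) = A·h(t−1) + B·u(t)
    y(t) = C·h(t) + D·u(t)

where u(t) is the incoming event, h(t) is a compact internal state that summarises everything seen so far, and y(t) is the output. Because the state is a running summary, an event-driven implementation can in principle advance it only when something actually arrives, which is the article’s point about skipping periods of no change.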

Akida neural processor + TENNs models = awesome (Source: BrainChip)

The name of the game here is “sparse.” We’re talking sparse data (streaming inputs are converted to events at the hardware level, reducing the volume of data by up to 10x before processing begins), sparse weights (unnecessary weights are pruned and compressed, reducing model size and compute demand by up to 10x), and sparse activations (only essential activation functions pass data to the next layers, cutting downstream computation by up to 10x).
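Taken at face value, those three roughly-10x factors compound multiplicatively, so the idealised ceiling is on the order of a 1,000-fold reduction in work (10 × 10 × 10). In practice the factors overlap and are quoted as “up to” figures, so real workloads will land well short of that, but the compounding is the point.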

Since traditional CNNs activate every neural layer at every timestep, they can consume watts of power to process full data streams, even when nothing is changing. By comparison, the fact that the Akida processes only meaningful information enables real-time streaming AI that runs continuously on milliwatts of power, making it possible to deploy always-on intelligence in wearables, sensors, and other battery-powered devices.

Of course, nothing is easy (“If it were easy, everyone would be doing it,” as they say). A significant challenge for people who wish to utilize neuromorphic computing is that spiking networks differ from conventional neural networks. This is why the folks at BrainChip provide a CNN-to-SNN converter. This means developers can start out with a conventional CNN (which they may already have) and then convert it to an SNN to run on an Akida.
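To see why such a conversion is possible at all, here is a minimal rate-coding demonstration in Python. This is a generic textbook illustration, not BrainChip’s converter (their tooling ships with the Akida SDK, so consult their documentation for the actual API): an integrate-and-fire neuron driven by the same weighted input for T timesteps produces a spike rate that approximates the ReLU output of the corresponding CNN neuron.

```python
import numpy as np

# Generic rate-coding demo (illustrative only, not BrainChip's converter):
# a CNN neuron's ReLU activation can be recovered from the spike count of an
# integrate-and-fire neuron receiving the same weighted input over T timesteps.
rng = np.random.default_rng(1)
x = rng.random(8)                  # input activations (non-negative, post-ReLU style)
w = rng.normal(0.0, 0.3, size=8)   # trained CNN weights

cnn_out = max(0.0, float(w @ x))   # what the conventional CNN neuron computes

T, threshold = 1000, 1.0
v, spikes = 0.0, 0
drive = float(w @ x)               # constant input current each timestep
for _ in range(T):
    v += drive
    while v >= threshold:          # fire and "reset by subtraction" (keeps the rate linear)
        spikes += 1
        v -= threshold

print(f"CNN activation: {cnn_out:.3f}  ~  SNN spike rate: {spikes / T:.3f}")
```

Loosely speaking, a converter’s job is to perform that mapping for every layer of a (quantized) network while preserving accuracy.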

As usual, there are more layers to this onion than you might first suppose. Consider, for example, BrainChip’s collaboration with Prophesee. This is one of those rare cases where two technologies fit together as if they’d been waiting for each other all along.

Prophesee’s event-based cameras don’t capture conventional frames at fixed intervals; instead, each pixel generates a spike whenever it detects a change in light intensity. In other words, the output is already neuromorphic in nature—a continuous stream of sparse, asynchronous events (“spikes”) rather than dense video frames.

That makes it the perfect companion for BrainChip’s Akida processor, which is itself a spiking neural network. While traditional cameras must be converted into spiking form to feed an SNN, and while Prophesee must normally “de-spike” its output to feed a conventional convolutional network, Akida and Prophesee can connect directly—spike to spike, neuron to neuron—with no format gymnastics or power-hungry frame buffering in between.
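To get a feel for just how sparse the event representation is, here is a heavily simplified sketch in Python. It uses plain frame differencing with a fixed threshold; real event sensors work per pixel, asynchronously, on log-intensity changes with microsecond timestamps, so treat this as an intuition pump rather than a model of Prophesee’s silicon.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two consecutive 64x64 grayscale "frames"; only a small patch changes.
prev = rng.random((64, 64))
curr = prev.copy()
curr[20:24, 30:34] += 0.5           # a small moving object brightens 16 pixels

threshold = 0.2                     # change threshold (a stand-in for the sensor's contrast threshold)
diff = curr - prev
ys, xs = np.nonzero(np.abs(diff) > threshold)
events = [(int(y), int(x), 1 if diff[y, x] > 0 else -1) for y, x in zip(ys, xs)]

print(f"frame pixels: {curr.size}, events generated: {len(events)}")
# -> 4096 pixels per frame vs just 16 events here; a static scene would emit none.
```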

This native spike-based synergy pays off handsomely in power and latency. As BrainChip’s engineers put it, “We’re working in kilobits per second instead of megabits per second.” Because the Prophesee sensor transmits information only when something changes, and Akida computes only when spikes arrive, the overall system consumes mere milliwatts—compared to the tens of milliwatts required by conventional vision systems.

That difference may not matter in a smartphone, but it’s mission-critical for AR/VR glasses, where the battery is a tenth or even a twentieth the size of a phone’s. By eliminating the need to convert between frames and spikes—and avoiding the energy cost of frame storage, buffering, and transmission—BrainChip and Prophesee have effectively built a neuromorphic end-to-end vision pipeline that mirrors how biological eyes and brains actually work: always on, always responsive, yet sipping power rather than guzzling it.

As another example, I recently heard that BrainChip and HaiLa Technologies have partnered to show what happens when brain-inspired computing meets ultra-efficient wireless connectivity. They’ve created a demonstration that pairs BrainChip’s Akida neuromorphic processor with HaiLa’s BSC2000 backscatter RFIC, a Wi–Fi–compatible chip that communicates by reflecting existing radio signals rather than generating its own (I plan to devote a future column to this technology). The result is a self-contained edge-AI platform that can perform continuous sensing, anomaly detection, and condition monitoring while sipping mere microwatts of power—small enough to run a connected sensor for its entire lifetime on a single coin-cell battery.

This collaboration highlights a new class of intelligent, battery-free edge devices, where sensing, processing, and communication are all optimized for power efficiency. Akida’s event-driven architecture processes only the spikes that matter, while HaiLa’s passive backscatter link eliminates most of the radio’s energy cost. Together they enable always-on, locally intelligent IoT nodes ideal for medical, environmental, and infrastructure monitoring—places where replacing batteries is expensive, impractical, or downright impossible. In short, BrainChip and HaiLa are sketching the blueprint for the next wave of ultra-low-power edge AI systems that think before they speak, and that do both with astonishing efficiency.

Sad to relate, none of the above was what I wanted to talk to you about (stop groaning—it was worth reading). What I originally set out to tell you about is the newly introduced Akida Cloud (imagine a roll of drums and a fanfare of trombones).

The existing Akida 1, which has been extremely well received by the market, supports 4-, 2-, and 1-bit weights and activations. The next-generation Akida 2, which is expected to be available to developers in the very near future, will support 8-, 4-, and 1-bit weights and activations. Also, the Akida 2 will support spatio-temporal and temporal event-based neural networks.

For years, BrainChip’s biggest hurdle in courting developers wasn’t its neuromorphic silicon—it was logistics. Demonstrating the Akida architecture meant physically shipping bulky FPGA-based boxes to customers, powering them up on-site, and juggling loan periods. With the launch of the Akida Cloud, that bottleneck disappears.

Engineers can now log in, spin up a virtual instance of the existing Akida 1 running on an actual Akida 1, or the forthcoming Akida 2 running on an FPGA, and run their own neural workloads directly in the browser. Models can be trained, loaded, executed, and benchmarked in real time—no shipping crates, NDAs, or lab setups required.

Akida Cloud represents more than a convenience upgrade; it’s a strategic move to democratize access to neuromorphic technology. By making their latest architectures available online, the chaps and chapesses at BrainChip are lowering the barrier to entry for researchers, startups, and OEMs who want to experiment with event-based AI but lack specialized hardware.

Users can compare the behavior of Akida 1 and Akida 2 side by side, prototype models, and gather performance data before committing to silicon. For BrainChip, the cloud platform also serves as a rapid feedback loop—turning every connected engineer into an early tester and accelerating SNN adoption across the edge AI ecosystem.

And there you have it—brains in the cloud, spikes on the wire, and AI that thinks before it blinks. If this is what the neuromorphic future looks like, I say “bring it on” (just as soon as my poor old hippocampus cools down). But it’s not all about me (it should be, but it’s not). So, what do you think about all of this?



 
  • Like
  • Fire
Reactions: 16 users

TECH

Regular



As much as I like your research Frangi, please show the same courtesy you would expect from others on this forum. I asked you a question a number of weeks ago, but you chose not to respond. For someone who likes to comment on other posters, for good or bad, can you give me and others your view on how you personally think BrainChip is tracking towards a successful outcome? Going by the amount of time you spend showcasing facts, you must have formed an opinion one way or another.

Be assured, Australians only sue if you are caught out lying... appreciate an honest response this time!

Best regards....Texta 🍷🍷
 
  • Like
  • Fire
Reactions: 15 users

CHIPS

Regular

BrainChip share: A gloomy snapshot?​

BrainChip issues millions of new shares while simultaneously canceling employee stock options. Despite a slight recovery, the stock continues to trend downward with high volatility.​

October 3, 2025, 10:02 a.m. | Andreas Sommer


In short:
  • Issue of 5.7 million new ordinary shares
  • Cancellation of almost 17.5 million employee options
  • Share in downward trend despite daily gain
  • High volatility of almost 48 percent
The AI hardware company BrainChip is facing a crucial test. As the company massively overhauls its capital structure, the markets are sending contradictory signals. Millions of new shares are being issued against canceled employee options – but who will win this battle for the future?

Capital injection or dilution?​

BrainChip has taken drastic measures: The company issued over 5.7 million new common shares, significantly expanding its capital base. At the same time, however, it canceled nearly 17.5 million restricted stock units (RSUs). These conflicting movements create a complex situation for investors. On the one hand, they provide fresh capital for operations, while on the other, they potentially dilute existing shares.

Technical chaos on the stock market​

The stock itself is stuck in a clear downtrend. Despite a slight daily gain of 2.56 percent to $0.20, the recovery was accompanied by declining trading volume – a classic warning sign for experienced traders. Even more worrying: While a short-term buy signal emerged from the low on October 1, both short- and long-term moving averages continue to send sell signals.

The calm before the storm?​

Technical analysis paints a bleak picture: BrainChip is moving within a broad, declining trend, with the RSI of 30.6 already indicating oversold conditions. With an annualized volatility of nearly 48 percent, the stock remains a high-risk investment. The question is: Is the recent recovery a sustained trend reversal or just a brief respite before the downtrend continues?

BrainChip stock: Buy or sell?! New BrainChip analysis from October 10 provides the answer:

The latest BrainChip figures speak for themselves: BrainChip shareholders urgently need to take action. Is it worth buying, or should you sell? Find out what to do now in the current free analysis from October 10th.
 
  • Haha
  • Like
  • Sad
Reactions: 6 users

FJ-215

Regular
We got a mention in the AFR :mad:

Rally in unprofitable ASX stocks propels ‘wealth destruction’ fears

"A sharp rally in unprofitable ASX-listed businesses has intensified concerns of a bubble propelled by brokers spruiking highly speculative stocks, largely to rake in fees as they advise companies on raising money.

There are 36 companies listed on the S&P/ASX 300 that are making no money, according to new research from MST Marquee. The basket of so-called “birds without wings” has returned 30 per cent over the past six months, smashing the sharemarket’s 13 per cent advance."

"The current crop of loss-making companies includes a mix of commodity producers such as Mineral Resources, IGO and Pilbara Minerals, tech stocks including NextDC, Brainchip, Siteminder and Weebit Nano, and healthcare companies such as Healius, Mayne Pharma and Clarity Pharmaceuticals."
 
  • Thinking
  • Like
  • Fire
Reactions: 5 users

Mccabe84

Regular
Now available on YouTube
 
  • Like
  • Love
  • Fire
Reactions: 38 users

Leevon

Member
We got a mention in the AFR :mad:

Rally in unprofitable ASX stocks propels ‘wealth destruction’ fears

"A sharp rally in unprofitable ASX-listed businesses has intensified concerns of a bubble propelled by brokers spruiking highly speculative stocks, largely to rake in fees as they advise companies on raising money.

There are 36 companies listed on the S&P/ASX 300 that are making no money, according to new research from MST Marquee. The basket of so-called “birds without wings” has returned 30 per cent over the past six months, smashing the sharemarket’s 13 per cent advance."

"The current crop of loss-making companies includes a mix of commodity producers such as Mineral Resources, IGO and Pilbara Minerals, tech stocks including NextDC, Brainchip, Siteminder and Weebit Nano, and healthcare companies such as Healius, Mayne Pharma and Clarity Pharmaceuticals."
All ones we should invest in then!!
 
  • Haha
  • Like
Reactions: 4 users

7für7

Top 20
We got a mention in the AFR :mad:

Rally in unprofitable ASX stocks propels ‘wealth destruction’ fears

"A sharp rally in unprofitable ASX-listed businesses has intensified concerns of a bubble propelled by brokers spruiking highly speculative stocks, largely to rake in fees as they advise companies on raising money.

There are 36 companies listed on the S&P/ASX 300 that are making no money, according to new research from MST Marquee. The basket of so-called “birds without wings” has returned 30 per cent over the past six months, smashing the sharemarket’s 13 per cent advance."

"The current crop of loss-making companies includes a mix of commodity producers such as Mineral Resources, IGO and Pilbara Minerals, tech stocks including NextDC, Brainchip, Siteminder and Weebit Nano, and healthcare companies such as Healius, Mayne Pharma and Clarity Pharmaceuticals."

It’s one thing to be a company that isn’t profitable yet because it’s working to bring a groundbreaking technology to market — that takes time.

It’s something entirely different to be an “explorer”, “mineral” or “health” company that has no actual products and survives purely on the fantasy of what might be there someday.

I don’t need to list all of BrainChip’s patents, partners, and existing customers again — they’re well known.

But anyone mentioning BrainChip in such a comparison has clearly looked only at the share price and the numbers, and done zero real research. DYOR
 
  • Like
  • Fire
  • Love
Reactions: 12 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

The Netherlands aims to lead brain-inspired computing development​

With its new Neuromorphic Computing Roadmap, the Netherlands positions itself as a global frontrunner in brain-inspired computing.
Published on October 10, 2025

When a human brain performs complex reasoning, it does so using only about 20 watts of power, roughly the energy needed to keep a dim lightbulb on. A supercomputer performing equivalent tasks would require millions of times more energy. This staggering contrast lies at the heart of a technological revolution now taking shape in the Netherlands: neuromorphic computing, a field that designs chips and algorithms inspired by the way our brains process information.
A new Neuromorphic Computing Roadmap 2025, commissioned by Topsector ICT and coordinated with input from academia, industry, and government, sets out how the Netherlands can turn its early lead into global leadership. “The Netherlands has all the building blocks to take the next step,” says Frits Grotenhuis, director of Topsector ICT. “By applying brain-inspired principles, we can make AI more sustainable, efficient, and applicable in fields such as defense, manufacturing, energy, and telecom.”

A unique ecosystem​

The roadmap, prepared by Birch Consultants, highlights the Netherlands’ rare completeness across the neuromorphic value chain: from materials research to software, algorithms, and applications. It names strong academic centers like CogniGron, NL-ECO, and Mission 10-X, as well as startups such as Innatera, Axelera AI, and Hoursec, among the emerging ecosystem’s core players. Together they form, as the roadmap states, “a community that already covers almost the entire neuromorphic stack.”

That depth is unusual. “In most countries, neuromorphic research is either purely academic or purely industrial,” the roadmap observes. “In the Netherlands, both exist and interact - creating the expertise and momentum to advance the field.”
The document also underscores the strategic importance of neuromorphic computing to the National Technology Strategy (NTS), noting that it connects directly with five of the ten Dutch priority key technologies: AI & Data, Semiconductors, Quantum Technologies, Cybersecurity, and Integrated Photonics.

Neuromorphic Computing and our Watt Matters in AI conference​

With its newly published Neuromorphic Computing Roadmap, the Netherlands positions itself as a global frontrunner in brain-inspired computing — a key to solving AI’s growing energy problem and a central theme at the upcoming Watt Matters in AI conference.

Why it matters​

The roadmap leaves little doubt about the urgency. “The fundamental performance limits of conventional systems, especially regarding energy consumption, are becoming increasingly restrictive,” it warns. Global AI workloads are exploding, and even the world’s most efficient data centers, like Amazon’s new 2.2-gigawatt AI hub, are pushing power grids to their limits.
Neuromorphic architectures, in contrast, promise orders-of-magnitude gains in efficiency by combining memory and processing in the same physical location, reducing data transport and heat generation. In practice, this means the same AI task could run not in megawatts, but in watts, a theme that will resonate deeply at Watt Matters in AI, the international conference in Eindhoven later this year, focused on radical energy efficiency in artificial intelligence.
As Hans Hilgenkamp (University of Twente) noted in announcing the Watt Matters program, “If we want AI to scale responsibly, neuromorphic computing is one of the few realistic ways forward.”

AI has a lot to learn from the brain to be energy efficient​

“The brain has already solved many of the computational challenges we face,” says professor Christian Mayr.
Read More

Building the future brain of computing​

The roadmap calls for immediate investment in shared facilities for benchmarking and prototyping, estimated at around €30 million over five years. These facilities, akin to national testbeds, would allow researchers and companies to co-design hardware and algorithms, accelerating innovation and derisking industry adoption.
In the short term, the focus will be on applying neuromorphic principles to existing hardware, such as spiking neural networks and in-memory computing, while developing new materials and devices over the longer term. “Mixing analog and digital computation is not only possible but necessary,” the document notes, signaling a pragmatic, evolutionary approach.
Beyond infrastructure, the roadmap envisions a coordinated national structure that links academia, startups, corporations, and ministries. A steering group under Topsector ICT will guide strategy, while initiatives like the Future of Compute mission to the UK (November 2025) aim to strengthen international collaboration.

Brain-inspired chips moving out of the lab and into your business​

Neuromorphic computing is maturing into a viable tool for energy-efficient, real-time processing, opening up opportunities for businesses.

From labs to leadership​

The Netherlands’ ambition, as phrased in the roadmap, is nothing short of a moonshot: “To compute as efficiently and functionally as the human brain.”
This ambition aligns closely with Europe’s broader goals for technological sovereignty and sustainability. Neuromorphic computing could play a decisive role in reducing AI’s energy footprint, increasing digital autonomy, and securing industrial competitiveness, all key themes that will converge at Watt Matters in AI on November 26, 2025, in Eindhoven.

For more information on the Watt Matters in AI conference and its focus on neuromorphic and energy-efficient computing, visit wattmattersinai.eu.

 
  • Like
  • Fire
  • Love
Reactions: 9 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Reply is a large-scale European consulting / IT services / digital-transformation company, founded in 1996.

In the Company Profile PDF, it says:
“Through its network of specialist companies, Reply supports some of Europe’s leading industrial groups … in Artificial Intelligence, Big Data, Cloud Computing, Digital Media and the Internet of Things.”

Wikipedia says "Reply's revenue increased from €33.3 million in 2000, the year the company was listed on the STAR segment of the Italian Stock Exchange (Borsa Italiana), to €2.12 billion and 15,000 employees in 2023."




Neuromorphic Computing in Industrial Quality Control​

A New Paradigm in Computation

Neuromorphic computing, a technology that seeks to replicate the brain's own efficiency and processing power, can improve demanding industrial applications, particularly in the realm of real-time visual quality control. Reply’s approach offers significant advantages in energy consumption, speed, and scalability.​




Emulating the Brain's Blueprint​

The term "neuromorphic" is chosen with precision, as this form of computation aims to replicate the intricate activity of biological neurons in every significant aspect. Unlike in classical deep learning, where the concept of a neuron is more abstract, neurons within a neuromorphic system possess their own distinct sense of timing and frequency.
The core operation centres on the "spike", or "firing event". Inside a neuromorphic chip, individual spiking neurons accumulate incoming signals over time, much like the build-up of a membrane potential in their biological counterparts. When this internal charge reaches a specific threshold, the neuron fires a spike, transmitting information to other nodes in the network. This event-driven process forms the foundation of Spiking Neural Networks (SNNs), which are inherently dynamic and recurrent systems adept at processing data where temporality is key.
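One common way to formalise that accumulate-and-fire behaviour is the discrete-time leaky integrate-and-fire update (a textbook model, not necessarily the exact dynamics implemented by any particular chip):

    V_i(t+1) = λ·V_i(t) + Σ_j w_ij·s_j(t)

with neuron i emitting a spike and resetting V_i (to zero, or by subtracting the threshold θ) whenever V_i(t+1) ≥ θ. Here λ ∈ (0, 1] is the leak factor, w_ij are the synaptic weights, and s_j(t) ∈ {0, 1} marks which inputs spiked at time t; λ = 1 gives a pure, non-leaky integrate-and-fire neuron.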

Core hardware architectures​

The hardware that enables this can be broadly categorised into two types: fully asynchronous and partially asynchronous systems. In a fully asynchronous system, every core within the chip operates with its own independent timing and frequency, a design that most faithfully mirrors the sparse and efficient nature of the brain's computations. This architecture is exemplified by Intel's Loihi 2, currently one of the world's most prominent neuromorphic chips. Conversely, partially asynchronous systems employ synchronous processing within the individual cores but use asynchronous protocols for communication between them. This hybrid model can yield significant gains in hardware efficiency and performance speed-up. This kind of hardware design is embedded into two prominent chips: SpiNNcloud's SpiNNaker 2 and IBM’s TrueNorth.

Unprecedented energy efficiency​

A primary advantage of neuromorphic systems is their remarkable energy efficiency. In an environment where power consumption is a critical design constraint, this technology offers a significant breakthrough. Neuromorphic hardware can deliver energy savings of up to 100 times compared to conventional CPUs and around 30 times compared to GPUs, making it a viable solution for edge computing and large-scale AI deployments where power is at a premium.

Enhanced scalability and performance​

The architecture of neuromorphic computing is inherently designed for scalability. These systems can be expanded to massive networks, with platforms like Intel's Hala Point managing over a billion neurons, a scale that poses a considerable challenge for traditional systems. This scalability is matched by a substantial leap in performance. For computer vision tasks, these systems can achieve an inference speed-up of 120 times, enabling complex, real-time decision-making.

Intelligent data acquisition via event cameras​

Neuromorphic principles also extend to data acquisition through the use of event cameras. Unlike traditional cameras that capture entire frames at fixed intervals, often recording redundant data, event cameras operate asynchronously. They capture data only when a change in a pixel's luminosity is detected. This results in a sparse but highly informative data stream that drastically reduces storage and processing requirements. In one practical example, an event-based approach reduced a 32-gigabyte dataset to just 7 gigabytes.

Neuromorphic Computing in Quality Control​

The demanding field of industrial visual quality control represents an ideal application for neuromorphic technology. Manufacturing environments require real-time, high-accuracy defect detection, a task whose stringent demands on speed and efficiency align perfectly with the capabilities of neuromorphic systems.
To address this need, established AI models are being adapted for the neuromorphic paradigm. An example is Spiking-YOLO, a re-engineered version of the well-known YOLO object detection framework. This model utilises spiking neurons to process visual data. The specific architecture used in a recent project is a highly optimised implementation that merges computational layers to improve efficiency for deployment on neuromorphic hardware.

Multi-object detection for autonomous driving​

In a practical application focused on multi-object detection for autonomous driving, a dataset of driving footage was converted into an event-like format using a simulator. The results were striking: while the car was in motion, the event-based data captured the full context of the surrounding environment, including other vehicles and the landscape. However, the moment the car stopped, the static parts of the scene vanished from the data stream, demonstrating the system's inherent focus on relevant change.
The model was pre-trained on the Common Objects in Context (COCO) dataset, and then further trained on the BDD multi-object dataset, with initial inference tests conducted via a classical simulator. The next stage of development involves re-training this model directly on the converted event-based data, accessing the capabilities of the neuromorphic hardware.

An outlook on Intelligent Automation​

Neuromorphic Computing is perfect for deployment on edge devices, which are common in industrial environments but often have significant hardware and memory limitations. The combination of high performance, low energy use, and efficient memory management makes neuromorphic systems an ideal solution for tasks like real-time image recognition on a factory floor.
Reply envisions a production line where an event camera constantly monitors products, with a neuromorphic chip instantly identifying anomalies and alerting personnel, all while operating within the tight constraints of an edge device. While these algorithms are not yet prepared for highly complex cognitive challenges, Reply’s experience shows that their current capabilities already mark a significant step forward for intelligent industrial automation.

 
  • Like
  • Love
Reactions: 5 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Neuromorphic computing gets a mention.

Says that beyond three years' time, neuromorphic computing will gain significant traction, revolutionizing AI applications in robotics and automation.



The AI Silicon Showdown: Nvidia, Intel, and ARM Battle for the Future of Artificial Intelligence​

By: TokenRing AI
October 10, 2025, 4:35 PM EDT

The artificial intelligence landscape is currently in the throes of an unprecedented technological arms race, centered on the very silicon that powers its rapid advancements. At the heart of this intense competition are industry titans like Nvidia (NASDAQ: NVDA), Intel (NASDAQ: INTC), and ARM (NASDAQ: ARM), each vying for dominance in the burgeoning AI chip market. This fierce rivalry is not merely about market share; it's a battle for the foundational infrastructure of the next generation of computing, dictating the pace of innovation, the accessibility of AI, and even geopolitical influence.
The global AI chip market, valued at an estimated $123.16 billion in 2024, is projected to surge to an astonishing $311.58 billion by 2029, exhibiting a compound annual growth rate (CAGR) of 24.4%. This explosive growth is fueled by the insatiable demand for high-performance and energy-efficient processing solutions essential for everything from massive data centers running generative AI models to tiny edge devices performing real-time inference. The immediate significance of this competition lies in its ability to accelerate innovation, drive specialization in chip design, decentralize AI processing, and foster strategic partnerships that will define the technological landscape for decades to come.

Architectural Arenas: Nvidia's CUDA Citadel, Intel's Open Offensive, and ARM's Ecosystem Expansion​

The core of the AI chip battle lies in the distinct architectural philosophies and strategic ecosystems championed by these three giants. Each company brings a unique approach to addressing the diverse and demanding requirements of modern AI workloads.
Nvidia maintains a commanding lead, particularly in high-end AI training and data center GPUs, with an estimated 70% to 95% market share in AI accelerators. Its dominance is anchored by a full-stack approach that integrates advanced GPU hardware with the powerful and proprietary CUDA (Compute Unified Device Architecture) software platform. Key GPU models like the Hopper architecture (H100 GPU), with its 80 billion transistors and fourth-generation Tensor Cores, have become industry standards. The H100 boasts up to 80GB of HBM3/HBM3e memory and utilizes fourth-generation NVLink for 900 GB/s GPU-to-GPU interconnect bandwidth. More recently, Nvidia unveiled its Blackwell architecture (B100, B200, GB200 Superchip) in March 2024, designed specifically for the generative AI era. Blackwell GPUs feature 208 billion transistors and promise up to 40x more inference performance than Hopper, with systems like the 72-GPU NVL72 rack-scale system. CUDA, established in 2007, provides a robust ecosystem of AI-optimized libraries (cuDNN, NCCL, RAPIDS) that have created a powerful network effect and a significant barrier to entry for competitors. This integrated hardware-software synergy allows Nvidia to deliver unparalleled performance, scalability, and efficiency, making it the go-to for training massive models.
Intel is aggressively striving to redefine its position in the AI chip sector through a multifaceted strategy. Its approach combines enhancing its ubiquitous Xeon CPUs with AI capabilities and developing specialized Gaudi accelerators. The latest Xeon 6 P-core processors (Granite Rapids), with up to 128 P-cores and Intel Advanced Matrix Extensions (AMX), are optimized for AI workloads, capable of doubling the performance of previous generations for AI and HPC. For dedicated deep learning, Intel leverages its Gaudi AI accelerators (from Habana Labs). The Gaudi 3, manufactured on TSMC's 5nm process, features eight Matrix Multiplication Engines (MMEs) and 64 Tensor Processor Cores (TPCs), along with 128GB of HBM2e memory. A key differentiator for Gaudi is its native integration of 24 x 200 Gbps RDMA over Converged Ethernet (RoCE v2) ports directly on the chip, enabling scalable communication using standard Ethernet. Intel emphasizes an open software ecosystem with oneAPI, a unified programming model for heterogeneous computing, and the OpenVINO Toolkit for optimized deep learning inference, particularly strong for edge AI. Intel's strategy differs by offering a broader portfolio and an open ecosystem, aiming to be competitive on cost and provide end-to-end AI solutions.
ARM is undergoing a significant strategic pivot, moving beyond its traditional IP licensing model to directly engage in AI chip manufacturing and design. Historically, ARM licensed its power-efficient architectures (like the Cortex-A series) and instruction sets, enabling partners like Apple (M-series) and Qualcomm to create highly customized SoCs. For infrastructure AI, the ARM Neoverse platform is central, providing high-performance, scalable, and energy-efficient designs for cloud computing and data centers. Major cloud providers like Amazon (Graviton), Microsoft (Azure Cobalt), and Google (Axion) extensively leverage ARM Neoverse for their custom chips. The latest Neoverse V3 CPU shows double-digit performance improvements for ML workloads and incorporates Scalable Vector Extensions (SVE). For edge AI, ARM offers Ethos-U Neural Processing Units (NPUs) like the Ethos-U85, designed for high-performance inference. ARM's unique differentiation lies in its power efficiency, its flexible licensing model that fosters a vast ecosystem of custom designs, and its recent move to design its own full-stack AI chips, which positions it as a direct competitor to some of its licensees while still enabling broad innovation.

Reshaping the Tech Landscape: Benefits, Disruptions, and Strategic Plays​

The intense competition in the AI chip market is profoundly reshaping the strategies and fortunes of AI companies, tech giants, and startups, creating both immense opportunities and significant disruptions.
Tech giants and hyperscalers stand to benefit immensely, particularly those developing their own custom AI silicon. Companies like Google (NASDAQ: GOOGL) with its TPUs, Amazon (NASDAQ: AMZN) with Trainium and Inferentia, Microsoft (NASDAQ: MSFT) with Maia and Cobalt, and Meta (NASDAQ: META) with MTIA are driving a trend of vertical integration. By designing in-house chips, these companies aim to optimize performance for their specific workloads, reduce reliance on external suppliers like Nvidia, gain greater control over their AI infrastructure, and achieve better cost-efficiency for their massive AI operations. This allows them to offer specialized AI services to customers, potentially disrupting traditional chipmakers in the cloud AI services market. Strategic alliances are also key, with Nvidia investing $5 billion in Intel, and OpenAI partnering with AMD for its MI450 series chips.
For specialized AI companies and startups, the intensified competition offers a wider range of hardware options, potentially driving down the significant costs associated with running and deploying AI models. Intel's Gaudi chips, for instance, aim for a better price-to-performance ratio against Nvidia's offerings. This fosters accelerated innovation and reduces dependency on a single vendor, allowing startups to diversify their hardware suppliers. However, they face the challenge of navigating diverse architectures and software ecosystems beyond Nvidia's well-established CUDA. Startups may also find new niches in inference-optimized chips and on-device AI, where cost-effectiveness and efficiency are paramount.
The competitive implications are vast. Innovation acceleration is undeniable, with companies continuously pushing for higher performance, efficiency, and specialized features. The "ecosystem wars" are intensifying, as competitors like Intel and AMD invest heavily in robust software stacks (oneAPI, ROCm) to challenge CUDA's stronghold. This could lead to pricing pressure on dominant players as more alternatives enter the market. Furthermore, the push for vertical integration by tech giants could fundamentally alter the dynamics for traditional chipmakers. Potential disruptions include the rise of on-device AI (AI PCs, edge computing) shifting processing away from the cloud, the growing threat of open-source architectures like RISC-V to ARM's licensing model, and the increasing specialization of chips for either training or inference. Overall, the market is moving towards a more diversified and competitive landscape, where robust software ecosystems, specialized solutions, and strategic alliances will be critical for long-term success.

Beyond the Silicon: Geopolitics, Energy, and the AI Epoch​

The fierce competition in the AI chip market extends far beyond technical specifications and market shares; it embodies profound wider significance, shaping geopolitical landscapes, addressing critical concerns, and marking a pivotal moment in the history of artificial intelligence.
This intense rivalry is a direct reflection of, and a primary catalyst for, the accelerating growth of AI technology. The global AI chip market's projected surge underscores the overwhelming demand for AI-specific chips, particularly GPUs and ASICs, which are now selling for tens of thousands of dollars each. This period highlights a crucial trend: AI progress is increasingly tied to the co-development of hardware and software, moving beyond purely algorithmic breakthroughs. We are also witnessing the decentralization of AI, with the rise of AI PCs and edge AI devices incorporating Neural Processing Units (NPUs) directly into chips, enabling powerful AI capabilities without constant cloud connectivity. Major cloud providers are not just buying chips; they are heavily investing in developing their own custom AI chips (like Google's Trillium, offering 4.7x peak compute performance and 67% more energy efficiency than its predecessor) to optimize workloads and reduce dependency.
The impacts are far-reaching. It's driving accelerated innovation in chip design, manufacturing processes, and software ecosystems, pushing for higher performance and lower power consumption. It's also fostering market diversification, with breakthroughs in training efficiency reducing reliance on the most expensive chips, thereby lowering barriers to entry for smaller companies. However, this also leads to disruption across the supply chain, as companies like AMD, Intel, and various startups actively challenge Nvidia's dominance. Economically, the AI chip boom is a significant growth driver for the semiconductor industry, attracting substantial investment. Crucially, AI chips have become a matter of national security and tech self-reliance. Geopolitical factors, such as the "US-China chip war" and export controls on advanced AI chips, are fragmenting the global supply chain, with nations aggressively pursuing self-sufficiency in AI technology.
Despite the benefits, significant concerns loom. Geopolitical tensions and the concentration of advanced chip manufacturing in a few regions create supply chain vulnerabilities. The immense energy consumption required for large-scale AI training, heavily reliant on powerful chips, raises environmental questions, necessitating a strong focus on energy-efficient designs. There's also a risk of market fragmentation and potential commoditization as the market matures. Ethical concerns surrounding the use of AI chip technology in surveillance and military applications also persist.
This AI chip race marks a pivotal moment, drawing parallels to past technological milestones. It echoes the historical shift from general-purpose computing to specialized graphics processing (GPUs) that laid the groundwork for modern AI. The infrastructure build-out driven by AI chips mirrors the early days of the internet boom, but with added complexity. The introduction of AI PCs, with dedicated NPUs, is akin to the transformative impact of the personal computer itself. In essence, the race for AI supremacy is now inextricably linked to the race for silicon dominance, signifying an era where hardware innovation is as critical as algorithmic advancements.

The Horizon of Hyper-Intelligence: Future Trajectories and Expert Outlook​

The future of the AI chip market promises continued explosive growth and transformative developments, driven by relentless innovation and the insatiable demand for artificial intelligence capabilities across every sector. Experts predict a dynamic landscape defined by technological breakthroughs, expanding applications, and persistent challenges.
In the near term (1-3 years), we can expect sustained demand for AI chips at advanced process nodes (3nm and below), with leading chipmakers like TSMC (NYSE: TSM), Samsung, and Intel aggressively expanding manufacturing capacity. The integration and increased production of High Bandwidth Memory (HBM) will be crucial for enhancing AI chip performance. A significant surge in AI server deployment is anticipated, with AI server penetration projected to reach 30% of all servers by 2029. Cloud service providers will continue their massive investments in data center infrastructure to support AI-based applications. There will be a growing specialization in inference chips, which are energy-efficient and high-performing, essential for processing learned models and making real-time decisions.
Looking further into the long term (beyond 3 years), a significant shift towards neuromorphic computing is gaining traction. These chips, designed to mimic the human brain, promise to revolutionize AI applications in robotics and automation. Greater integration of edge AI will become prevalent, enabling real-time data processing and reducing latency in IoT devices and smart infrastructure. While GPUs currently dominate, Application-Specific Integrated Circuits (ASICs) are expected to capture a larger market share, especially for specific generative AI workloads by 2030, due to their optimal performance in specialized AI tasks. Advanced packaging technologies like 3D system integration, exploration of new materials, and a strong focus on sustainability in chip production will also define the future.
Potential applications and use cases are vast and expanding. Data centers and cloud computing will remain primary drivers, handling intensive AI training and inference. The automotive sector shows immense growth potential, with AI chips powering autonomous vehicles and ADAS. Healthcare will see advanced diagnostic tools and personalized medicine. Consumer electronics, industrial automation, robotics, IoT, finance, and retail will all be increasingly powered by sophisticated AI silicon. For instance, Google's Tensor processor in smartphones and Amazon's Alexa demonstrate the pervasive nature of AI chips in consumer devices.
However, formidable challenges persist. Geopolitical tensions and export controls continue to fragment the global semiconductor supply chain, impacting major players and driving a push for national self-sufficiency. The manufacturing complexity and cost of advanced chips, relying on technologies like Extreme Ultraviolet (EUV) lithography, create significant barriers. Technical design challenges include optimizing performance, managing high power consumption (e.g., 500+ watts for an Nvidia H100), and dissipating heat effectively. The surging demand for GPUs could lead to future supply chain risks and shortages. The high energy consumption of AI chips raises environmental concerns, necessitating a strong focus on energy efficiency.
Experts largely predict Nvidia will maintain its leadership in AI infrastructure, with future GPU generations cementing its technological edge. However, the competitive landscape is intensifying, with AMD making significant strides and cloud providers heavily investing in custom silicon. The demand for AI computing power is often described as "limitless," ensuring exponential growth. While China is rapidly accelerating its AI chip development, analysts predict it will be challenging for Chinese firms to achieve full parity with Nvidia's most advanced offerings by 2030. By 2030, ASICs are predicted to handle the majority of generative AI workloads, with GPUs evolving to be more customized for deep learning tasks.

A New Era of Intelligence: The Unfolding Impact​

The intense competition within the AI chip market is not merely a cyclical trend; it represents a fundamental re-architecting of the technological world, marking one of the most significant developments in AI history. This "AI chip war" is accelerating innovation at an unprecedented pace, fostering a future where intelligence is not only more powerful but also more pervasive and accessible.
The key takeaways are clear: Nvidia's dominance, though still formidable, faces growing challenges from an ascendant AMD, an aggressive Intel, and an increasing number of hyperscalers developing their own custom silicon. Companies like Google (NASDAQ: GOOGL) with its TPUs, Amazon (NASDAQ: AMZN) with Trainium, and Microsoft (NASDAQ: MSFT) with Maia are embracing vertical integration to optimize their AI infrastructure and reduce dependency. ARM, traditionally a licensor, is now making strategic moves into direct chip design, further diversifying the competitive landscape. The market is being driven by the insatiable demand for generative AI, emphasizing energy efficiency, specialized processors, and robust software ecosystems that can rival Nvidia's CUDA.
This development's significance in AI history is profound. It's a new "gold rush" that's pushing the boundaries of semiconductor technology, fostering unprecedented innovation in chip architecture, manufacturing, and software. The trend of vertical integration by tech giants is a major shift, allowing them to optimize hardware and software in tandem, reduce costs, and gain strategic control. Furthermore, AI chips have become a critical geopolitical asset, influencing national security and economic competitiveness, with nations vying for technological independence in this crucial domain.
The long-term impact will be transformative. We can expect a greater democratization and accessibility of AI, as increased competition drives down compute costs, making advanced AI capabilities available to a broader range of businesses and researchers. This will lead to more diversified and resilient supply chains, reducing reliance on single vendors or regions. Continued specialization and optimization in AI chip design for specific workloads and applications will result in highly efficient AI systems. The evolution of software ecosystems will intensify, with open-source alternatives gaining traction, potentially leading to a more interoperable AI software landscape. Ultimately, this competition could spur innovation in new materials and even accelerate the development of next-generation computing paradigms like quantum chips.
In the coming weeks and months, watch for: new chip launches and performance benchmarks from all major players, particularly AMD's MI450 series (deploying in 2026 via OpenAI), Google's Ironwood TPU v7 (expected end of 2025), and Microsoft's Maia (delayed to 2026). Monitor the adoption rates of custom chips by hyperscalers and any further moves by OpenAI to develop its own silicon. The evolution and adoption of open-source AI software ecosystems, like AMD's ROCm, will be crucial indicators of future market share shifts. Finally, keep a close eye on geopolitical developments and any further restrictions in the US-China chip trade war, as these will significantly impact global supply chains and the strategies of chipmakers worldwide. The unfolding drama in the AI silicon showdown will undoubtedly shape the future trajectory of AI innovation and its global accessibility.

 
  • Like
  • Fire
  • Wow
Reactions: 12 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
This is interesting.

TokenRing’s latest article on Intel’s Panther Lake and 18A process specifically mentions neuromorphic computing as one of the next big trends in AI hardware, particularly for Edge AI and IoT devices.

It states "Experts predict the emergence of increasingly hybrid architectures, combining conventional CPU/GPU cores with specialized processors like neuromorphic chips, leveraging the unique strengths of each for optimal AI performance and efficiency."

I think it's a pretty important shift in that neuromorphic isn’t being treated as a far-off research topic anymore; it's now grouped alongside NPUs and AI accelerators as part of the next generation of compute.

Intel’s 18A process may enable smaller, faster, denser AI chips but it still faces the same efficiency walls that neuromorphic architectures are built to overcome. The fact it’s being acknowledged in the same breath as leading-edge silicon says a lot about where the market’s heading.

The key point in this for me is that mainstream recognition of neuromorphic computing is growing and BrainChip is well-positioned as one of the few companies already commercialising it.



Published 5 hours ago.

[Extracts 1 and 2: screenshots from the TokenRing AI article]
 
Last edited:
  • Like
  • Love
Reactions: 13 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I believe it's supposed to say "milli-watt", not "million-watt"!

Million-watt? That’s enough juice to power a nuclear reactor. ⚡🚨

How embarrassing...

Million-watt glasses upon your head,
One blink and half the block’s dead!
Akida’s so hot, no chip could match her,
Forget low power - we’ve built a nuclear reactor!



[Screenshot of the source text containing the “million-watt” wording]
 
Last edited:
  • Haha
  • Like
  • Fire
Reactions: 10 users

Diogenese

Top 20
I believe it's supposed to say "milli-watt", not "million-watt"!

Million-watt? That’s enough juice to power a nuclear reactor. ⚡🚨

How embarrassing...


Once there was a silly old ram
Thought he'd punch a hole in a dam
No one could make that ram scram;
He kept buttin' that dam

So any time you're feelin' bad
'Stead of feelin' sad
Just remember that ram
Oops, there goes a billion kilowatt dam
Oops, there goes a billion kilowatt dam
 
  • Haha
  • Like
  • Fire
Reactions: 8 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
BREAKING NEWS

Trump overlooked for Peace Prize as BrainChip’s “million-watt glasses” shortlisted for Nobel in Physics.
Nobel committee calls it “a bright idea.”
Experts warn users to blink responsibly.


 
  • Haha
Reactions: 7 users

Diogenese

Top 20
BREAKING NEWS

Trump overlooked for Peace Prize as BrainChip’s “million-watt glasses” shortlisted for Nobel in Physics.
Nobel committee calls it “a bright idea.”
Experts warn users to blink responsibly.




[attached image]



I think he's got bootiful legs!
 
  • Haha
Reactions: 4 users