BRN Discussion Ongoing

Labsy

Regular
The way this guy thinks is a bit of a worry (in that he's good)..

But their tech doesn't seem "that flash" on the surface..

Their DX-M1 (and future DX-M2) are on a 3nm process and use around 5 W?..
That seems like a lot, and the roughly $50 cost per chip seems high too.. (possibly because of the process?)

AKIDA 2.0 IP in 3nm would romp all over anything they currently offer, or intend to offer, in both performance and energy usage...

But the "guy" worries me..
I wouldn't worry. There is a whole ecosystem of people who compare power/efficiency/functionality/cost.... Trust our technology. Eventually the cream will float to the surface. If he's smart he'll integrate a RISC-V architecture with our IP. Everything else is shit. As opined via the actions of the European Space Agency. And they're pretty smart.
 
  • Like
  • Fire
  • Love
Reactions: 15 users
I wouldn't worry. There is a whole ecosystem of people who compare power/efficiency/functionality/cost.... Trust our technology. Eventually the cream will float to the surface. If he's smart he'll integrate a RISC-V architecture with our IP. Everything else is shit. As opined via the actions of the European Space Agency. And they're pretty smart.
Hey I'm not "worried" worried, if you know what I mean..
I'm not going to lose any sleep over it. 😛
 
  • Like
  • Love
  • Haha
Reactions: 7 users

manny100

Regular
Growth Opportunities in Neuromorphic Computing 2025-2030 |
Google or ask AI whether BRN is a leader in Neuromorphic AI at the Edge.
Then read the link.

"Growth Opportunities in Neuromorphic Computing 2025-2030 | Neuromorphic Technology Poised for Hyper-Growth as Market Surges Over 45x by 2030​

Strategic Investments and R&D Fuel the Next Wave of Growth in Neuromorphic Computing"​

 
  • Like
  • Fire
Reactions: 15 users

Rach2512

Regular
Growth Opportunities in Neuromorphic Computing 2025-2030 |
Google or ask AI whether BRN is a leader in Neuromorphic AI at the Edge.
Then read the link.

"Growth Opportunities in Neuromorphic Computing 2025-2030 | Neuromorphic Technology Poised for Hyper-Growth as Market Surges Over 45x by 2030​

Strategic Investments and R&D Fuel the Next Wave of Growth in Neuromorphic Computing"​



Thanks for sharing @manny100, see the list of featured companies in the paid report, which costs practically $5k.

Screenshot_20250529_155927_Samsung Internet.jpg
Screenshot_20250529_160142_Samsung Internet.jpg
 
  • Like
  • Love
  • Fire
Reactions: 15 users

itsol4605

Regular
  • Like
  • Fire
  • Love
Reactions: 12 users
  • Like
  • Love
  • Fire
Reactions: 31 users
Takashi Sato, Professor in the Graduate School of Informatics at Kyoto University (see @Fullmoonfever ’s post from August 2024 👆🏻), is also co-author of another (though similarly titled) paper describing research done with Akida: “Zero-Aware Regularization for Energy-Efficient Inference on Akida Neuromorphic Processor”, which happens to be presented at ISCAS (International Symposium on Circuits and Systems) 2025 in London tomorrow.

His two co-authors are fellow researchers from Kyoto University’s Graduate School of Informatics, but different ones from last year: PhD student Takehiro Habara and Associate Professor Hiromitsu Awano (who might actually be the son of Sato’s August 2024 co-author Hikaru Awano: https://repository.kulib.kyoto-u.ac.jp/dspace/bitstream/2433/215689/2/djohk00613.pdf, cf. the Acknowledgments, page VI: “Last but not least, I am truly grateful to my parents, Hikaru Awano and Akiko Awano for their support of my long student life.”)

View attachment 85398


View attachment 85399



View attachment 85401

View attachment 85400

First author Takehiro Habara is a PhD student at the “Kyoto University School of Platforms”, an interesting interdisciplinary PhD program. The Graduate School of Informatics he is affiliated with is one of the collaborating Graduate Schools.

He says about himself that he is “researching low-power AI and creating handheld AI devices” and that he aims to build “a platform that enables advanced AI inference with low power consumption”, “an AI system that can be used anywhere and by anyone”.


View attachment 85404


View attachment 85405

Translation courtesy of Google Lens:

View attachment 85406


View attachment 85407
View attachment 85409
Further to @Frangipani's post above and my previous post she kindly linked, I managed to find the abstract of the presso.

No full presso as yet but positive results for their study using Akida.

Information for Paper ID 1590
Paper Information:
Paper Title: Zero-Aware Regularization for Energy-Efficient Inference on Akida Neuromorphic Processor
Student Contest: Yes
Affiliation Type: Academia
Keywords: Edge AI, Energy efficiency, Neuromorphic Chips, Regularization, Spiking Neural Networks
Abstract: Spiking Neural Networks (SNNs) and their hardware accelerators have emerged as promising systems for advanced cognitive processing with low power consumption. Although the development of SNN hardware accelerators is particularly active, research on the intelligent use of these accelerators remains limited. This study focuses on the SNN accelerator Akida, a commercially available neuromorphic processor, and presents a novel training method designed to reduce inference energy by leveraging the unique architecture of the hardware. Specifically, we apply sparse constraints on neuron activations and synaptic connection weights, aiming to minimize the number of firing neurons by considering Akida's batch spike processing feature. Our proposed method was applied to a network consisting of three convolutional layers and two fully connected layers. In the MNIST image classification task, the activations became 76.1% sparser, and the weights became 22.1% sparser, resulting in a 13.8% reduction in energy consumption per image.
Track ID: 8.2
Track Name: Spiking Neural Networks and Systems
Final Decision: Accept as Poster
Session Name: Neural Learning Systems: Circuits & Systems III (Poster)
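
The full paper isn't out yet, but going by the abstract alone, the idea of a "zero-aware" regularizer can be illustrated with standard tools. Below is a minimal PyTorch sketch (my own illustration, not the authors' code) that adds L1 penalties on ReLU activations and on weights to the ordinary task loss, nudging more activations and weights to exactly zero, which is the kind of sparsity an event-based processor like Akida rewards. The small CNN, the MNIST-sized input and the penalty weights act_lambda / weight_lambda are assumptions for illustration only; the trained, sparser network would still need to be quantized and converted with BrainChip's tooling before it could run on Akida.

import torch
import torch.nn as nn

# Hypothetical small CNN echoing the abstract's three conv + two FC layers (1x28x28 MNIST input assumed).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3), nn.ReLU(),
    nn.Conv2d(16, 32, 3), nn.ReLU(),
    nn.Conv2d(32, 32, 3), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 22 * 22, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

def zero_aware_loss(images, labels, act_lambda=1e-4, weight_lambda=1e-5):
    """Task loss plus L1 penalties pushing activations and weights towards zero."""
    activations = []
    x = images
    for layer in model:
        x = layer(x)
        if isinstance(layer, nn.ReLU):
            activations.append(x)  # record post-activation feature maps
    task_loss = nn.functional.cross_entropy(x, labels)
    act_penalty = sum(a.abs().mean() for a in activations)            # fewer "firing" neurons
    weight_penalty = sum(p.abs().mean() for p in model.parameters())  # sparser synaptic weights
    return task_loss + act_lambda * act_penalty + weight_lambda * weight_penalty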


 
  • Like
  • Fire
  • Love
Reactions: 24 users

MDhere

Top 20
Anastasi Nvidia Huawei Spray Tan

Looks like Anastasi was standing downwind of Donald’s morning application.



China's HUGE AI Chip Breakthrough: NVIDIA is out?

Bravo are you up to a run?
 
  • Haha
  • Like
Reactions: 5 users

Frangipani

Top 20
Arijit Mukherjee, Akida-experienced Principal Scientist from TCS Research, will be leading two workshop ‘industry sessions’ during next month’s week-long Summer School & Advanced Training Programme SENSE (Smart Electronics and Next-Generation Systems Engineering) organised by the Defence Institute of Advanced Technology (DIAT) in Pune, India: “Intro talk: Smart at the Edge” as well as “Beyond TinyML - Neuromorphic Computing: Brains, Machines, and the Story of Spikes”.

While we know that TCS Research is not exclusively friends with us, when it comes to neuromorphic computing, I trust BrainChip will get a very favourable mention during those workshop sessions. 😊



EAFFC650-1A90-408C-AF2F-163F10C4B2B7.jpeg


5832738D-DB17-4C8D-B7B2-7CF6E0E4DEA6.jpeg
13C50B9A-4314-49E2-8C3D-AF79EF6CF6FE.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 20 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Growth Opportunities in Neuromorphic Computing 2025-2030 |
Google or ask AI whether BRN is a leader in Neuromorphic AI at the Edge.
Then read the link.

"Growth Opportunities in Neuromorphic Computing 2025-2030 | Neuromorphic Technology Poised for Hyper-Growth as Market Surges Over 45x by 2030​

Strategic Investments and R&D Fuel the Next Wave of Growth in Neuromorphic Computing"​


45 x 20 cents equals $9.00!

Well, if that’s the case Manny, we’ll all be partying like it’s 1999 in our 2030 bodies. 👯💃🕺
 
  • Haha
  • Like
  • Fire
Reactions: 16 users

Rach2512

Regular
  • Like
  • Fire
  • Love
Reactions: 4 users
Hi Bravo, thanks for the video. If it's wearables, then we are streets ahead of anything else on the market. It makes sense for Nanose to use AKIDA for handheld devices, as AKIDA is a no-brainer for wearables when they move into that area.
So far they say dozens of diseases can be detected..
I can hear your friends at hot crapper calling you.
 
Last edited:
  • Haha
  • Thinking
  • Love
Reactions: 4 users

manny100

Regular
I can hear your friends at hot crapper calling you.
I have them on ignore right now.
I had my fun with them but it's worn off now.
The agenda driven downrampers never post anything of substance.
 
  • Love
  • Like
  • Fire
Reactions: 5 users

manny100

Regular
I can hear your friends at hot crapper calling you.
I have them on ignore. The fun of teasing them to the extent they go into mindless rage rants has worn off.
Angry ants have no credibility.
 
  • Haha
  • Love
Reactions: 2 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Why Grok 3’s 1.8 Trillion Parameters Are Pointless Without Neuromorphic Chips: A 2025 Blueprint​

R. Thompson (PhD)

5 min read · Apr 19, 2025

What If We Didn’t Need GPUs to Power the Future of AI?​


2025: The Year AI Hit a Wall 🧠⚡🔥

The generative AI wave, once unstoppable, is now gridlocked by a resource bottleneck. GPU prices have surged. Hardware supply chains are fragile. Electricity consumption is skyrocketing. AI’s relentless progress is now threatened by infrastructure failure.
• TSMC’s January earthquake crippled global GPU production
• Nvidia H100s are priced at $30,000–$40,000 (1,000% above cost)
• Training Grok 3 demands 10²⁴ FLOPs and 100,000 GPUs
• Inference costs for top-tier models now hit $1,000/query
• Data centers draw more power than small nations
This isn’t just a temporary setback. It is a foundational reckoning with how we’ve built and scaled machine learning. As the global AI industry races to meet demand, it now confronts its own unsustainable fuel source: the GPU.

GROK 3: AI’s Biggest Brain with an Unquenchable Thirst​

Launched by xAI in February 2025, Grok 3 represents one of the most ambitious neural architectures ever built.
• A 1.8 trillion-parameter model, dwarfing predecessors
• Trained on Colossus — a 100,000-GPU supercomputer
• Achieves 15–20% performance gains over GPT-4o in reasoning tasks
• Integrates advanced tooling like Think Mode, DeepSearch, and self-correction modules
Yet, Grok 3’s superhuman intelligence is tethered to an aging hardware paradigm. Each inference request draws extraordinary amounts of energy and memory bandwidth. What if that limitation wasn’t necessary?
“Grok 3 is brilliant — but it’s burning the planet. Neuromorphic chips could be the brain transplant it desperately needs.” — Dr. Elena Voss, Stanford

Neuromorphic Chips: Thinking Like the Brain​

Neuromorphic hardware brings a radically different philosophy to computing. Inspired by the brain, it forgoes synchronous operations and instead embraces sparse, event-driven logic.
• Spiking neural networks (SNNs) encode data as temporal spike trains
• Processing is triggered by events, not clock cycles
• Memory and compute are colocated — eliminating latency from data movement
• Power usage is significantly reduced, often by orders of magnitude
This shift makes neuromorphic systems well-suited to inference at the edge, especially in environments constrained by energy or space.
Leading architectures include:
• Intel Loihi 2 — Hala Point scales to 1.15 billion neurons
• IBM NorthPole — tailored for ultra-low-power deployment
• BrainChip Akida — commercially deployed in compact vision/audio systems
What was once academic curiosity is now enterprise-grade silicon.
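
To make the "event-driven" point concrete, here is a tiny illustrative sketch (my own, not from the article) of a leaky integrate-and-fire neuron in Python: the membrane potential leaks and accumulates input, and a spike, and hence any downstream computation, only happens when the threshold is crossed. The decay factor, threshold and input train are arbitrary assumptions.

import numpy as np

def lif_step(mem, input_current, beta=0.9, threshold=1.0):
    """One update of a leaky integrate-and-fire neuron.

    The membrane potential decays by `beta`, accumulates the input, and a
    spike is emitted only when it crosses `threshold` (which also resets it).
    """
    mem = beta * mem + input_current
    spike = 1.0 if mem >= threshold else 0.0
    mem = mem - threshold * spike  # soft reset after firing
    return spike, mem

# Sparse input: most timesteps carry no current, so most steps trigger no downstream work.
mem = 0.0
for current in np.array([0.0, 0.0, 0.6, 0.0, 0.7, 0.0, 0.0, 0.9]):
    spike, mem = lif_step(mem, current)
    if spike:
        print("spike!")  # only here would an event-driven core wake up downstream neurons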

The Synergy: Grok 3 + Neuromorphic Hardware​

Transforming Grok 3 into a brain-compatible model requires strategic rewiring.
• Use GPUs for training and neuromorphic systems for inference
• Convert traditional ANN layers into SNN using rate coding techniques
• Redesign the transformer’s attention layers to operate using spikes instead of matrices
Below is a simplified code example to demonstrate ANN-to-SNN conversion:
import torch
import snntorch as snn
from snntorch import spikegen

def ann_to_snn_attention(ann_weights, input_tokens, timesteps=100):
    # Pretrained ANN projection matrices (torch tensors); V is left unused in this toy sketch.
    query, key, value = ann_weights['Q'], ann_weights['K'], ann_weights['V']
    # Rate-code the token embeddings into a spike train of shape (timesteps, ...).
    spike_inputs = spikegen.rate(input_tokens, num_steps=timesteps)
    lif_neurons = snn.Leaky(beta=0.9, threshold=1.0)
    mem = lif_neurons.init_leaky()  # initial membrane potential
    spike_outputs = []
    for t in range(timesteps):
        spike_query = spike_inputs[t] @ query
        spike_key = spike_inputs[t] @ key
        # Scaled dot-product attention scores computed on the spike-coded inputs.
        attention_scores = (spike_query @ spike_key.T) / key.shape[-1] ** 0.5
        spike_out, mem = lif_neurons(attention_scores, mem)
        spike_outputs.append(spike_out)
    # Averaging spike activity over time approximates the ANN attention output.
    return torch.stack(spike_outputs).mean(dim=0)

Projected Performance Metrics​


If scaled and refined, neuromorphic Grok could outperform conventional setups in energy efficiency, speed, and inference cost — particularly in large-scale, low-latency settings.

Real-World Use Case: AI-Powered Clinics in Sub-Saharan Africa​

Imagine Grok 3 Mini — a distilled 50B parameter version — deployed on neuromorphic hardware in community hospitals:
• Low-cost edge inference for X-ray scans and lab diagnostics
• Solar-compatible deployment with minimal power draw
• Offline reasoning through embedded DeepSearch-like retrieval modules
• Massive cost reduction: from $1,000 per inference to under $10

Now layer in mobile deployment: neuromorphic devices in ambulances, refugee clinics, or rural schools. This model changes access to intelligence in places where GPUs will never go.

Overcoming the Common Hurdles​

Knowledge Gap
Most AI engineers lack familiarity with SNNs. Open frameworks like snnTorch and Intel’s Lava are bridging the gap.
Accuracy Trade-offs
ANN-to-SNN transformation often leads to accuracy drops. Solutions like surrogate gradients and spike-based pretraining are emerging (a sketch of the surrogate-gradient idea follows this section).
Hardware Accessibility
Chips like Akida and Loihi are limited today, but joint development programs are underway to commercialize production-grade boards.
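
On the surrogate gradients mentioned under "Accuracy Trade-offs" above: the trick is to keep the hard spike threshold in the forward pass but substitute a smooth approximation for its derivative (which is zero almost everywhere) in the backward pass, so the SNN can still be trained with ordinary backpropagation. A minimal PyTorch sketch, my own illustration with a sigmoid-based surrogate chosen arbitrarily:

import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate derivative in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential >= 0).float()  # spike where potential crosses the threshold

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Derivative of a steep sigmoid stands in for the true, almost-everywhere-zero gradient.
        sig = torch.sigmoid(5.0 * membrane_potential)
        return grad_output * 5.0 * sig * (1.0 - sig)

spike_fn = SurrogateSpike.apply  # usable as the spiking nonlinearity inside an SNN layer during training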

Rewiring the Future of AI​

We often assume GPUs are the only viable way forward. But that assumption is crumbling. As AI scales toward trillion-parameter architectures, we must ask:
• Must we rely on energy-hungry matrix multiplications?
• Can intelligence evolve outside the cloud?
xAI and others can:
• Build custom neuromorphic accelerators for key Grok 3 modules
• Train SNN-first models on sparse reasoning tasks
• Embrace hybrid architectures that balance power and scalability
Neuromorphic Grok 3 won’t just save energy. It could redefine where and how intelligence lives.

A Crossroads for Computing​

The 2025 GPU crisis marks a civilizational inflection point. Clinging to the von Neumann architecture risks turning AI into a gated technology, hoarded by a handful of cloud monopolies.
Neuromorphic systems could democratize access:
• Empowering small labs to deploy world-class inference
• Enabling cities to host edge-AI environments for traffic, health, and environment
• Supporting educational tools that run offline, even without a data center
This reimagining doesn’t need to wait. Neuromorphic hardware is here. The challenge is will.

 
  • Like
  • Fire
  • Love
Reactions: 26 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Published 3 days ago. Written by Adit Sheth, Senior Software Engineer at Microsoft.






Towards AI


Unlocking AI’s Next Wave: How Self-Improving Systems, Neuromorphic Chips, and Scientific AI are Redefining 2025​

Adit Sheth

3 days ago





Generated by Microsoft Copilot
The year is 2025, and the world is not merely witnessing a technological shift; it’s experiencing a seismic redefinition of intelligence itself. Forget the fleeting hype cycles of yesteryear. The quiet hum of artificial intelligence has swelled into a thunderous roar, transforming industries, reimagining human-computer interaction, and forcing us to fundamentally reconsider what it means to think, to learn, to be. This isn’t just an upgrade; it’s a revolution, catapulting us beyond the generative models that captivated us just months ago into an era where AI is not just performing tasks, but autonomously enhancing its own capabilities, operating with the brain’s own whispered efficiency, and unlocking the universe’s deepest, most guarded secrets.
This article isn’t a dry technical report. It’s an invitation to explore the very frontier of innovation, a deep dive into the paradigm shifts at the heart of AI’s “next wave.” We’re talking about algorithms that learn to outsmart themselves, hardware that breathes like a biological brain, and models that speak the language of the cosmos. Buckle up. Welcome to 2025’s AI frontier — a landscape where intelligence self-evolves, conserves energy with breathtaking finesse, and accelerates scientific discovery with the precision of a cosmic clock.

The AI Renaissance: Beyond Hype to the Self-Evolving Frontier​

For decades, the promise of Artificial Intelligence danced tantalizingly on the horizon, often retreating into the shadows of “AI winters.” But the current moment is different. Profoundly different. What distinguishes this AI Renaissance from all that came before isn’t just faster processors or bigger datasets; it’s a perfect storm of converging forces — the relentless march of computational power, the sheer tsunami of global data, and the algorithmic breakthroughs, epitomized by the transformative Transformer architecture, that didn’t just unlock Large Language Models (LLMs) but flung open the gates to far grander, more audacious ambitions.
By mid-2025, AI is no longer a nascent curiosity; it’s an indispensable, foundational layer, woven intricately into the fabric of global commerce, cutting-edge research, and our everyday lives. But the most electrifying development isn’t simply AI’s pervasive presence. It’s its burgeoning capacity for self-evolution. We are transitioning from AI that meticulously executes instructions to AI that proactively learns, adapts, and fundamentally improves itself. This profound shift is poised to accelerate innovation at a velocity previously unimaginable, enabling AI to conquer challenges of scale and complexity once considered firmly within the realm of speculative fiction. The age of self-optimizing intelligence has not just dawned; it is galloping into full stride.

The Ascent of Self-Improving AI: Intelligence That Learns to Learn​

Imagine an intelligence that doesn’t just process information, but actively refines its own mind. For far too long, the meticulous art of improving an AI model remained a human-centric, often grueling, cycle of endless fine-tuning and manual iteration. Today, the very vanguard is defined by Self-Improving AI — systems endowed with the astonishing ability to autonomously monitor their own performance, diagnose their own flaws, generate new, targeted data (both synthetic and real), and even daringly refine their internal algorithms or fundamental architectures without constant human intervention. This is intelligence that doesn’t just learn from data; it learns how to learn better, initiating a relentless, accelerating spiral of intellectual ascent.
This revolutionary capability is underpinned by sophisticated, dynamic feedback loops that empower AI to become its own architect:
  • Autonomous Learning Cycles: Picture AI agents engaged in a perpetual ballet of perception, decision, action, and then, crucially, self-evaluation. They assess their own outcomes with surgical precision, then dynamically rewrite elements of their decision-making logic or knowledge base for superior performance. In complex strategic games or hyper-realistic simulation environments, an AI can now play millions of rounds, pinpoint optimal strategies, and literally reprogram itself for victory.
  • Reinforcement Learning with Self-Correction and Reflection: Building upon breakthroughs like Reinforcement Learning from Human Feedback (RLHF), cutting-edge techniques now allow AI systems to “reflect” on their past failures with a chilling clarity previously reserved for human introspection. They meticulously analyze precisely why a particular output was flawed, pinpoint subtle fallacies in their reasoning paths, and then autonomously generate new, targeted training examples or modify internal representations to prevent similar missteps. This concept, often termed “Recursive Self-Improvement” (RSI) or “self-healing AI,” isn’t just about iteration; it hints at a future where AI perpetually bootstraps its own intelligence, pushing the boundaries of its own cognitive capacity.
  • Meta-Learning and AutoML for System Optimization: Beyond simply fine-tuning individual models, meta-learning enables AI to grasp the very principles of learning itself. This means an AI can become adept at rapidly adapting to entirely new tasks with minimal data, or even autonomously generate novel, more efficient machine learning algorithms specifically tailored to emerging problems. Modern Automated Machine Learning (AutoML) platforms are deeply integrating these meta-learning capabilities, allowing AI to autonomously design, optimize, and even deploy complex AI pipelines, from initial data preprocessing to final model integration. The result? A paradigm where AI actively participates, and even leads, in its own engineering. One exciting example of this can be seen in C3 AI’s advancements in multi-agent automation, showcasing how self-improving agents are tackling enterprise-scale challenges by refining their own workflows and reasoning. (Explore more on C3 AI’s “Agents Unleashed” here.)
The ramifications of self-improving AI in 2025 are, quite frankly, profoundly staggering:
  • Unprecedented Autonomy and Resilience: Systems can now adapt to highly dynamic, unpredictable environments and novel situations in real-time, making them fundamentally more robust for mission-critical applications. Imagine autonomous vehicles that learn from every near-miss, refining their driving algorithms instantly; or dynamic infrastructure management systems that self-optimize in response to sudden demands; or next-gen cybersecurity platforms that don’t just detect threats, but autonomously engineer and deploy countermeasures against zero-day attacks. The system learns to fail forward, building resilience through continuous, relentless introspection.
  • Exponential Development Cycles: AI is now, quite literally, accelerating its own evolution. As AI systems become more adept at identifying and fixing their own shortcomings, the very pace of innovation within the AI landscape itself is poised for an exponential surge. This could lead to breakthroughs emerging at a velocity previously deemed impossible, creating a virtuous cycle of accelerating intelligence.
  • Radical Reduction in Human Intervention: While human oversight remains utterly crucial for alignment, ethical guardrails, and ultimate accountability, the need for constant, granular human intervention in optimization, debugging, and iteration decreases dramatically. This frees human engineers and researchers to focus on higher-level strategic challenges, abstract problem definition, and the profound ethical implications of guiding ever-smarter machines.
Imagine an AI system orchestrating a global logistics network that doesn’t just learn from real-time traffic fluctuations, dynamic weather patterns, and unforeseen supply chain disruptions, but also self-revises its entire optimization algorithm to achieve efficiencies far beyond what even the most brilliant human experts could manually program. This isn’t distant futurism; this is the tangible, thrilling promise of self-improving AI, a true game-changer in humanity’s quest for intelligent autonomy. It marks a pivotal moment where AI transitions from a powerful tool to an active, evolving partner in its own progress.

Neuromorphic Computing: Building Brain-Inspired, Energy-Efficient AI​

As the computational and energy demands of large-scale AI — particularly the colossal LLMs and the resource-hungry self-improving systems — continue their meteoric rise, they cast a looming shadow: an undeniable bottleneck. This pressing challenge is precisely what Neuromorphic Computing steps forward to address, representing nothing less than a fundamental paradigm shift in how we design and build AI hardware. Drawing profound inspiration from the astonishing energy efficiency and parallel processing power of the human brain, neuromorphic chips bravely jettison the traditional von Neumann architecture, which, for decades, has inefficiently separated processing from memory, leading to constant, energy-intensive data movement.
Key principles defining this quiet revolution in silicon include:
  • In-Memory Computing (Processing-in-Memory): In stark contrast to conventional architectures, neuromorphic systems ingeniously co-locate processing units directly within or immediately adjacent to memory. This radical approach dramatically curtails the energy consumption associated with constantly shuttling data between distinct processing and storage components — the infamous “von Neumann bottleneck.” This architecture fundamentally mirrors the brain’s seamless, integrated computation and memory, operating with a fluidity unmatched by current digital systems.
  • Event-Driven (Spiking Neural Networks — SNNs): Unlike typical deep learning models that process all inputs continuously, consuming power constantly, neuromorphic chips primarily operate on Spiking Neural Networks (SNNs). These artificial neurons “fire” (generate a computational event) only when a certain threshold of input is reached, mimicking the sparse, asynchronous, and incredibly efficient communication of biological neurons. This event-driven processing leads to extraordinarily low power consumption, as computations are performed only when genuinely necessary, minimizing idle energy drain. Imagine a light switch that only consumes power when it’s actively flipping.
  • Intrinsic Parallelism and On-Chip Adaptability: Neuromorphic architectures are inherently massively parallel, allowing for millions of concurrent computations, much like the brain’s distributed processing. Furthermore, many neuromorphic designs are built for continuous, on-device learning and adaptation, making them uniquely suited for dynamic, real-world edge environments where constant cloud connectivity is impractical or impossible.
The critical and rapidly escalating role of neuromorphic computing in 2025 cannot be overstated:
  • Addressing the Energy Crisis of AI: The monumental carbon footprint and staggering operational costs associated with training and running today’s colossal AI models are simply unsustainable. Neuromorphic chips offer a revolutionary path to orders of magnitude lower power consumption for demanding AI tasks, making large-scale AI deployment far more environmentally responsible and economically viable. This isn’t just an optimization; it’s an existential necessity for AI’s long-term, widespread scalability.
  • Fueling the Edge AI Revolution: By enabling sophisticated AI to run directly on tiny, power-constrained devices — from next-generation wearables and smart sensors to agile drones and truly autonomous robotics — neuromorphic chips unleash the full potential of real-time, on-device intelligence. This dramatically reduces latency, enhances data privacy (as less sensitive data needs to be transmitted to the cloud), and facilitates always-on AI capabilities crucial for applications where consistent cloud connectivity isn’t feasible or desirable. Picture smart eyewear that provides real-time contextual awareness without draining its battery in minutes, or a drone performing complex environmental analysis on its own, far from any network.
  • Opening New Frontiers in AI Application: This unprecedented energy efficiency and real-time processing ability enable novel AI applications that were previously confined to laboratories or supercomputers due to power constraints. Consider medical implants with embedded AI that continuously monitor biomarkers and adapt their function for years without external power, or vast smart city sensor networks that process complex visual and auditory data locally to manage traffic or detect anomalies without overwhelming central servers.
Leading the charge in this hardware revolution are innovators like Intel, with its groundbreaking Loihi series. Loihi 2, in particular, is pushing the boundaries of AI with its support for low-precision, event-driven computation, showing promising results for efficient LLM inference, demonstrating capabilities like real-time gesture recognition and pattern learning with vastly reduced power requirements. (Loihi 2 and its capabilities). IBM also continues its advancements in neuromorphic computing, with ongoing research pushing the boundaries of brain-inspired architectures. Meanwhile, companies like Brainchip are commercializing their Akida chip, a fully digital, event-based AI processor ideal for ultra-low power edge computing, demonstrating advanced capabilities in areas like event-based vision for autonomous vehicles and industrial automation. (See how Brainchip’s Akida is enabling breakthroughs in edge AI.). As these specialized processors mature and become more widely accessible, they promise to fundamentally reshape the hardware landscape of AI, driving us towards a future where intelligence is not just powerful, but also profoundly efficient, always-on, and truly pervasive.
 
  • Like
  • Fire
  • Love
Reactions: 29 users
I have them on ignore right now.
I had my fun with them but it's worn off now.
The agenda driven downrampers never post anything of substance.
That wasn’t meant for you 😂
 

7für7

Top 20
Published 3 days ago. Written by Adit Sheth, Senior Software Engineer at Microsoft.



Nothing positive we find about BrainChip online and post here in the forum seems to help the stock price go up. We all already know that the product is great and offers a wide range of solutions. What I can’t understand is why they still can’t manage to turn it into a profitable company. With all the positive articles and opinions out there, you’d think we’d be a billion-dollar enterprise by now.

But instead, I see random no-name companies popping up out of nowhere, with less technical capability than BrainChip’s Akida, yet they present themselves more confidently — and meanwhile, BrainChip seems to be slipping further into irrelevance.

They focus too heavily on military applications… almost as if they’re clinging to that sector instead of targeting companies with mass-market products. I just don’t get it — and the stock keeps dropping…
 
  • Like
Reactions: 3 users

DK6161

Regular
Nothing positive we find about BrainChip online and post here in the forum seems to help the stock price go up. We all already know that the product is great and offers a wide range of solutions. What I can’t understand is why they still can’t manage to turn it into a profitable company. With all the positive articles and opinions out there, you’d think we’d be a billion-dollar enterprise by now.

But instead, I see random no-name companies popping up out of nowhere, with less technical capability than BrainChip’s Akida, yet they present themselves more confidently — and meanwhile, BrainChip seems to be slipping further into irrelevance.

They focus too heavily on military applications… almost as if they’re clinging to that sector instead of targeting companies with mass-market products. I just don’t get it — and the stock keeps dropping…
I was hoping Sean would get us some deals with other Silicon Valley companies. Weren't his connections the reason why he was the chosen one?
 
  • Like
  • Fire
Reactions: 4 users