BRN Discussion Ongoing

Diogenese

Top 20
If somebody has the technical background and wants to have a look at Loihi 2

Legendre-SNN on Loihi-2 : Programming Lakemont Cores - ONM Student Talks​



Legendre gets a mention in:

TENNs-PLEIADES: Building Temporal Kernels with Orthogonal Polynomials​

Yan Ru Pei, Olivier Coenen

https://arxiv.org/html/2405.12179v3

Abstract​

We introduce a neural network named PLEIADES (PoLynomial Expansion In Adaptive Distributed Event-based Systems), belonging to the TENNs (Temporal Neural Networks) architecture. We focus on interfacing these networks with event-based data to perform online spatiotemporal classification and detection with low latency. By virtue of using structured temporal kernels and event-based data, we have the freedom to vary the sample rate of the data along with the discretization step-size of the network without additional finetuning. We experimented with three event-based benchmarks and obtained state-of-the-art results on all three by large margins with significantly smaller memory and compute costs. We achieved: 1) 99.59% accuracy with 192K parameters on the DVS128 hand gesture recognition dataset and 100% with a small additional output filter; 2) 99.58% test accuracy with 277K parameters on the AIS 2024 eye tracking challenge; and 3) 0.556 mAP with 576k parameters on the PROPHESEE 1 Megapixel Automotive Detection Dataset.

1 Introduction

Temporal convolutional networks (TCNs) [18] have been a staple for processing time-series data, from speech enhancement [22] to action segmentation [17]. However, in most cases the temporal kernel is very short (usually of size 3), making it difficult for the network to capture long-range temporal correlations. The temporal kernels are intentionally kept short, because a long temporal kernel with a large number of trainable kernel values usually leads to unstable training. In addition, a large amount of memory is needed to store the weights during inference. One popular solution has been to parameterize the temporal kernel function with a simple multilayer perceptron (MLP), which promotes stability [28] and compresses the parameters, but often increases the computational load considerably.
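The MLP-parameterization idea described above can be sketched in a few lines. This is purely illustrative (the layer sizes, activation, and random weights here are assumptions for the sketch, not the cited papers' code): the stored parameters are the small MLP's weights, while the kernel itself is generated on demand by evaluating the MLP at each time point, which is also where the extra compute comes from.

```python
import numpy as np

# Hypothetical tiny MLP that maps a normalized time point to a kernel tap.
# Shapes are illustrative: 1 -> 16 -> 1.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def mlp_kernel(num_taps):
    # Evaluate the MLP at each time position in [0, 1]; one output per tap.
    t = np.linspace(0.0, 1.0, num_taps)[:, None]
    h = np.tanh(t @ W1 + b1)          # hidden layer
    return (h @ W2 + b2).ravel()      # kernel values

k = mlp_kernel(256)  # a 256-tap kernel generated from only 49 MLP parameters
```

Note the trade-off the text points at: memory is the 49 MLP parameters rather than 256 explicit taps, but the MLP must be evaluated once per tap to materialize the kernel.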

Here, we introduce a method of parameterizing temporal kernels, named PLEIADES (PoLynomial Expansion In Adaptive Distributed Event-based Systems), that can in many cases reduce the memory and computational costs compared to explicit convolutions. The design is fairly modular and can be used as a drop-in replacement for any 1D-like convolutional layer, allowing it to perform long temporal convolutions effectively. In fact, we augment a previously proposed (1+2)D causal spatiotemporal network [23] by replacing its temporal kernels with this new polynomial parameterization. This new network architecture serves as the backbone for a wide range of online spatiotemporal tasks ranging from action recognition to object detection. This network belongs to a broader class of networks named Temporal Neural Networks (TENNs) developed by BrainChip Inc.
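A minimal sketch of the polynomial-kernel idea, under stated assumptions (the coefficients below are arbitrary, not trained values, and this is not the paper's implementation): a long temporal kernel is expressed as a weighted sum of the first few Legendre polynomials, so only the handful of coefficients is stored, and the same coefficients can be re-sampled at any discretization step, which is what allows the sample rate to change without retraining.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_kernel(coeffs, num_taps):
    # Evaluate the weighted Legendre sum on [-1, 1], the polynomials'
    # natural domain, at `num_taps` evenly spaced points.
    t = np.linspace(-1.0, 1.0, num_taps)
    return legendre.legval(t, coeffs)

coeffs = np.array([0.5, -0.3, 0.2, 0.1])   # 4 stored parameters
k_fine = legendre_kernel(coeffs, 128)      # 128-tap kernel
k_coarse = legendre_kernel(coeffs, 32)     # same kernel at a coarser step
```

Because both kernels sample the same continuous function, their endpoints agree exactly; only the discretization differs.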


...

The seminal work proposing a memory encoding using orthogonal Legendre polynomials in a recurrent state-space model is the Legendre Memory Unit (LMU) [33], where Legendre polynomials (a special case of Jacobi polynomials) are used. The HiPPO formalism [11] then generalized this to other orthogonal functions including Chebyshev polynomials, Laguerre polynomials, and Fourier modes. Later, this sparked a cornucopia of works interfacing with deep state-space models including S4 [12], H3 [2], and Mamba [10], achieving impressive results on a wide range of tasks from audio generation to language modeling. There are several common themes among these networks from which PLEIADES differs. First, these models typically only interface with 1D temporal data, and usually flatten high-dimensional data into 1D before processing [12, 37], with some exceptions [21]. Second, instead of explicitly performing finite-window temporal convolutions, a running approximation of the effects of such convolutions is performed, essentially yielding a system with infinite impulse responses where the effective polynomial structures are distorted [31, 11]. In more recent works, the polynomial structures are tenuously used only for initialization, but then made fully trainable. Finally, these networks mostly use an underlying depthwise structure [14] for long convolutions, which may limit the network capacity, albeit reducing the compute requirement of the network.
[33] Aaron Voelker, Ivana Kajić, and Chris Eliasmith. Legendre Memory Units: Continuous-time representation in recurrent neural networks. Advances in Neural Information Processing Systems, 32, 2019. [Uni of Waterloo]
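The finite-window (FIR) versus running-approximation (IIR) distinction drawn above can be shown with a toy contrast; this is not any of the cited models' code, just a generic illustration. The FIR output depends on exactly the last 16 inputs, while the leaky recurrent state lets past inputs decay but never fully leave, giving an infinite impulse response.

```python
import numpy as np

x = np.sin(np.linspace(0, 6 * np.pi, 200))  # toy input signal

# FIR: explicit finite-window convolution with a 16-tap kernel.
k = np.ones(16) / 16.0
y_fir = np.convolve(x, k, mode="full")[: len(x)]

# IIR: a single leaky state updated recurrently; the effective window
# is infinite because contributions only decay geometrically.
alpha, state, y_iir = 0.9, 0.0, []
for xt in x:
    state = alpha * state + (1 - alpha) * xt
    y_iir.append(state)
y_iir = np.array(y_iir)
```

In the state-space models discussed above, the recurrence tracks a vector of polynomial coefficients rather than a scalar average, but the structural point is the same: the running approximation distorts the exact finite-window polynomial kernel.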
 
  • Like
  • Fire
  • Thinking
Reactions: 12 users
 
  • Haha
  • Like
Reactions: 5 users

Dallas

Regular
 
  • Like
  • Fire
  • Love
Reactions: 14 users
Maybe we should turn the BRN thread into a dating site since so many of us are getting ex-communicated by our significant others.

I'll start the ball rolling.

I'm a fun-loving lass with a quirky sense of humour and an even quirkier sense of fashion. I like to practice taekwondo in my spare time and to watch documentaries about true crime and evil psychopaths on Netflix, as well as eating tubs of Connoisseur ice-cream. I'm currently learning how to play the maracas and the bugle 🎷. I have 10 cats and a blue-tongued lizard called Gertrude.

I don’t like chewing sounds, so if you’re one of those people who can eat with your mouth closed, then please feel free to call me.☎️
 
  • Haha
  • Like
  • Wow
Reactions: 8 users

MegaportX

Regular
Maybe we should turn the BRN thread into a dating site since so many of us are getting ex-communicated by our significant others.

I'll start the ball rolling.

I'm a fun-loving lass with a quirky sense of humour and an even quirkier sense of fashion. I like to practice taekwondo in my spare time and to watch documentaries about true crime and evil psychopaths on Netflix, as well as eating tubs of Connoisseur ice-cream. I'm currently learning how to play the maracas and the bugle 🎷. I have 10 cats and a blue-tongued lizard called Gertrude.

I don’t like chewing sounds, so if you’re one of those people who can eat with your mouth closed, then please feel free to call me.☎️
will ferrell snl GIF by Saturday Night Live
 
  • Haha
Reactions: 4 users

7für7

Top 20

Ladies and gentlemen, we have a Karen on board. Mimimimi


 
Last edited:
  • Like
Reactions: 1 users

Andy38

The hope of potential generational wealth is real
1.3m in engineering revenue from 3 CUSTOMERS!!! I like.
Now to see Sean and his 9 million to start rolling in!
 
  • Like
  • Fire
  • Love
Reactions: 35 users

Esq.111

Fascinatingly Intuitive.
Good Morning Chippers,

Quarterly is out, though I have not read it yet......



Regards,
Esq.
 
  • Haha
  • Like
Reactions: 12 users

DK6161

Regular
Wow! Over 1 mill revenue! This will go to 50 cents today for sure. Not advice
 
  • Haha
  • Like
Reactions: 8 users

Mccabe84

Regular
  • Like
  • Love
  • Fire
Reactions: 15 users

7für7

Top 20
  • Haha
  • Like
Reactions: 3 users

7für7

Top 20
In short



Sydney - 30 July 2025 - BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, BCHPY) (Company), the world's first commercial producer of neuromorphic artificial intelligence technology, today provides the Quarterly Activities Report in conjunction with its Appendix 4C lodged for the quarter ending 30 June 2025.


Key Highlights


  • Cash balance of US$13.5M provides sufficient capital for growth and investment in research and development of new and existing products.
  • Evaluation of redomiciling Company listing has been completed with input from a range of domestic and international advisors, including legal, investment banking groups, and shareholders. After detailed evaluation and analysis, the Board made the decision that shareholder value is best achieved by remaining listed on the ASX.
  • Cash inflow from customers in the current quarter of US$1.4M was higher than the prior quarter (US$0.14M).
  • Total payments to suppliers and employees of US$4.4M in the current quarter were lower than the prior quarter (US$4.9M).
  • Collaboration with multiple high-quality companies during the quarter further demonstrates the commercial application of BrainChip's technology.
  • Continued expansion of global intellectual property portfolio, now comprising 55 issued and pending patents across the United States, Europe, and APAC regions.

Redomicile Update


On 27 February 2025, the Company announced it was evaluating the possibility of redomiciling to an alternative stock exchange with a focus on the US. Post an extensive review that included input and advice from a range of experts, including foreign and domestic legal advisors, investment banks and feedback from shareholders, the Board has made the decision that shareholder value is best achieved by remaining listed on the ASX.


BrainChip remains committed to the ASX listing and ensuring that the Company continues its path to commercial success. The Board acknowledges and appreciates the ongoing commitment of shareholders. This sustained support is instrumental to the Company's progress and underpins its pursuit of long-term growth.
 
  • Like
  • Love
Reactions: 16 users

Gazzafish

Regular
Good 4c in my opinion 😁👍
 
  • Like
  • Love
Reactions: 40 users

TheDrooben

Pretty Pretty Pretty Pretty Good



Happy as Larry
 
  • Like
  • Love
  • Fire
Reactions: 33 users

itsol4605

Regular
Not Brainchip Akida ... but a good sign

 
  • Like
Reactions: 3 users

Slade

Top 20
  • Like
  • Fire
Reactions: 18 users

TheDrooben

Pretty Pretty Pretty Pretty Good
Good 4c in my opinion 😁👍
Have we finally reached the inflection point????.........revenue growing (albeit from a low base). Almost 12 months of cash left at this burn rate.....what will the next 4C bring?? Only 14 Notification regarding Unquoted Securities announcements until then.......




Happy as Larry
 
  • Like
  • Haha
  • Love
Reactions: 30 users

7für7

Top 20
Bravo is nitpicking and TheDrooben would have loved to see the 35 million already today… Man, oh man, you guys are starting to sound like T&J and the dean… get it together… the process is gradual… both the growth and the transformation into bashers.

😜🤭
 
  • Haha
  • Like
  • Fire
Reactions: 7 users

Bravo

Meow Meow 🐾
This is a positive article for BrainChip. By listing neuromorphic computing as a core theme for “Future AI-as-a-Service,” NASSCOM implicitly endorses BrainChip’s innovation roadmap and relevance in India’s emerging tech landscape.

The Future of AIaaS: Quantum Computing, Neuromorphic Chips, and Next-Gen Architectures​

Shreesh Chaurasi
July 28, 2025



In 2019, Google's Sycamore quantum processor solved a calculation in 200 seconds that would take the world's fastest supercomputer 10,000 years. Fast forward to 2025, and we're witnessing the convergence of quantum computing, neuromorphic chips, and revolutionary architectures that promise to redefine AI-as-a-Service (AIaaS) from the ground up. The question isn't whether these technologies will transform enterprise AI—it's how quickly your organization can adapt to the seismic shift that's already underway.

The Current AIaaS Landscape: At the Precipice of Transformation​

Today's AIaaS market, valued at $15.7 billion in 2024 and projected to reach $148.4 billion by 2030, operates predominantly on traditional silicon-based architectures. However, we're approaching the physical limits of Moore's Law, with transistor scaling becoming increasingly challenging and expensive. Current GPU-based training of large language models like GPT-4 consumes approximately 50 GWh of electricity—enough to power 4,600 homes for a year. This computational bottleneck is driving the urgent need for revolutionary computing paradigms.
The enterprise reality is stark: 73% of organizations report that AI workloads are constrained by current computational limitations, while 68% cite energy costs as a significant barrier to AI adoption at scale. These challenges are catalyzing investment in next-generation computing architectures that promise to deliver exponential improvements in performance, efficiency, and capability.

Quantum Computing: Redefining the Computational Frontier​


The Quantum Advantage in AIaaS​

Quantum computing represents a fundamental departure from classical computing, leveraging quantum mechanical phenomena like superposition and entanglement to process information in ways that are impossible with traditional bits. For AIaaS providers, quantum computing offers unprecedented opportunities to tackle computationally intractable problems.
Current quantum systems like IBM's 1,121-qubit Condor processor and Google's 70-qubit Sycamore are already demonstrating quantum advantage in specific domains. By 2030, industry analysts predict quantum computers will achieve 1 million physical qubits, enabling fault-tolerant quantum computation that could revolutionize AI workloads.

Quantum Machine Learning: Beyond Classical Limitations​

Quantum machine learning (QML) algorithms are showing remarkable promise in several key areas:
Optimization Problems: Quantum annealing systems like D-Wave's Advantage can solve complex optimization problems with 5,000+ variables—critical for supply chain optimization, portfolio management, and resource allocation at enterprise scale.
Pattern Recognition: Quantum neural networks demonstrate exponential speedups for certain pattern recognition tasks, with recent research showing 16x improvements in training time for specific classification problems.
Cryptographic Applications: Quantum computers threaten current encryption methods but simultaneously enable quantum-secure communication protocols, creating new opportunities for secure AIaaS offerings.

Market Impact and Investment Trends​

The quantum computing market is experiencing explosive growth, with global investment reaching $2.4 billion in 2024. Major cloud providers are rapidly expanding quantum offerings:
  • IBM Quantum Network: Over 200 members, including Fortune 500 companies
  • Amazon Braket: Providing access to quantum hardware from multiple vendors
  • Microsoft Azure Quantum: Full-stack quantum development platform
  • Google Quantum AI: Focus on fault-tolerant quantum computing
Enterprise adoption is accelerating, with 32% of large organizations planning quantum computing investments within the next three years, primarily for optimization, simulation, and machine learning applications.

Neuromorphic Computing: Mimicking the Brain's Efficiency​

The Biological Inspiration​

The human brain processes information using approximately 20 watts of power—roughly equivalent to a light bulb—while performing complex cognitive tasks that require massive computational resources in traditional systems. Neuromorphic chips attempt to replicate this efficiency by mimicking the brain's neural structure and information processing methods.

Leading Neuromorphic Architectures​

Intel Loihi 2: This second-generation neuromorphic chip contains 1 million artificial neurons and supports up to 120 million synapses. In benchmark tests, Loihi 2 demonstrates 1000x better energy efficiency compared to conventional processors for certain AI workloads.
IBM TrueNorth: With 1 million programmable neurons and 256 million synapses, TrueNorth consumes only 65 milliwatts during operation—orders of magnitude less than traditional processors performing similar tasks.
BrainChip Akida: This commercial neuromorphic processor offers edge AI capabilities with ultra-low power consumption, targeting applications in autonomous vehicles, smart cameras, and IoT devices.

Neuromorphic Advantages for AIaaS​

Neuromorphic computing offers several compelling advantages for AIaaS providers:
Energy Efficiency: Neuromorphic chips can reduce power consumption by 100-1000x for specific AI workloads, dramatically reducing operational costs and environmental impact.
Real-time Processing: Event-driven processing enables microsecond response times, crucial for real-time AI applications like autonomous driving and industrial automation.
Adaptive Learning: Unlike traditional processors, neuromorphic chips can learn and adapt in real-time without explicit reprogramming, enabling more dynamic and responsive AI services.
Edge Computing: Ultra-low power consumption makes neuromorphic chips ideal for edge AI deployments, extending AIaaS capabilities to resource-constrained environments.

Next-Generation Architectures: Beyond Traditional Paradigms​

Photonic Computing: The Speed of Light​

Photonic computing leverages light instead of electricity to process information, offering potential advantages in speed, power efficiency, and parallel processing capabilities. Lightmatter's photonic interconnects have demonstrated 10x improvements in data center efficiency, while Xanadu's photonic quantum computers are exploring new frontiers in quantum machine learning.
Key benefits include:
  • Bandwidth: Optical systems can handle terahertz frequencies, far exceeding electronic limitations
  • Parallel Processing: Light-based systems can perform matrix operations in parallel, ideal for AI workloads
  • Energy Efficiency: Photonic systems consume significantly less power than electronic equivalents

DNA Computing: Biological Information Processing​

DNA computing represents an exotic but promising approach to computation, leveraging the information storage and processing capabilities of biological molecules. Microsoft's DNA storage system has demonstrated the ability to store 1 exabyte of data in a space smaller than a sugar cube, with potential applications in long-term AI model storage and retrieval.

Hybrid Architectures: The Best of All Worlds​

The future of AIaaS likely lies not in any single technology but in hybrid architectures that combine the strengths of different computing paradigms:
  • Quantum-Classical Hybrids: Combining quantum processors for specific optimization tasks with classical systems for general computation
  • Neuromorphic-Digital Integration: Using neuromorphic chips for real-time processing while leveraging traditional processors for training and complex calculations
  • Photonic-Electronic Systems: Integrating photonic interconnects and processing units with electronic control systems

Industry Implications and Strategic Considerations​

Performance Revolution​

The convergence of these technologies promises dramatic performance improvements:
  • Training Efficiency: Quantum algorithms could reduce training time for certain neural networks from weeks to hours
  • Inference Speed: Neuromorphic chips enable microsecond inference times for real-time applications
  • Scalability: New architectures support massive parallelization, enabling AIaaS providers to serve millions of concurrent users

Cost Optimization​

Next-generation computing architectures offer significant cost advantages:
  • Energy Savings: Neuromorphic and photonic systems could reduce data center energy consumption by 90%
  • Hardware Costs: Quantum cloud services eliminate the need for expensive quantum hardware investments
  • Operational Efficiency: Automated optimization and adaptive systems reduce manual intervention requirements

New Service Categories​

These technologies enable entirely new categories of AIaaS offerings:
  • Quantum-Enhanced AI: Services that leverage quantum algorithms for optimization, simulation, and machine learning
  • Ultra-Low Latency AI: Real-time AI services powered by neuromorphic chips
  • Distributed Intelligence: Edge AI services that bring intelligence closer to data sources

Market Projections and Investment Opportunities​

Market Size and Growth​

The next-generation computing market is experiencing unprecedented growth:
  • Quantum Computing: $1.3 billion in 2024, projected to reach $15.4 billion by 2030
  • Neuromorphic Chips: $78 million in 2024, expected to grow to $7.8 billion by 2030
  • Photonic Computing: $1.8 billion in 2024, forecasted to reach $11.9 billion by 2030

Investment Landscape​

Venture capital and corporate investment in next-generation computing reached $8.2 billion in 2024, with major technology companies leading the charge:
  • Google: $3+ billion invested in quantum computing research and development
  • IBM: $2.5 billion committed to quantum and neuromorphic computing
  • Intel: $1.8 billion in neuromorphic and photonic computing initiatives
  • Microsoft: $1.2 billion in quantum computing and related technologies

Implementation Roadmap for Enterprises​


Technical Challenges and Mitigation Strategies​

Quantum Computing Challenges​

Quantum Decoherence: Current quantum systems are extremely sensitive to environmental interference, limiting computation time to microseconds. Mitigation strategies include improved error correction codes, better isolation techniques, and hybrid quantum-classical algorithms that minimize quantum computation time.
Scalability: Building fault-tolerant quantum computers requires millions of physical qubits to create thousands of logical qubits. Companies should focus on near-term intermediate-scale quantum (NISQ) applications while hardware matures.
Skill Gap: Quantum programming requires specialized knowledge of quantum mechanics and novel programming paradigms. Organizations should invest in quantum education and partner with universities and specialized training providers.

Neuromorphic Computing Challenges​

Programming Complexity: Neuromorphic systems require fundamentally different programming approaches compared to traditional processors. Development of high-level programming frameworks and tools is essential for broader adoption.
Limited Ecosystems: The neuromorphic software ecosystem is still immature compared to traditional computing. Early adopters should focus on specific applications where neuromorphic advantages are clear and significant.
Integration Difficulties: Combining neuromorphic chips with existing infrastructure requires careful system design and potentially custom hardware solutions.

Security and Compliance Considerations​

Quantum Security Implications​

The advent of large-scale quantum computers poses both threats and opportunities for cybersecurity:
Cryptographic Vulnerabilities: Shor's algorithm could break current RSA and elliptic curve cryptography within decades. Organizations must begin transitioning to post-quantum cryptographic standards.
Quantum Key Distribution: Quantum communication protocols offer theoretically unbreakable security, creating new opportunities for secure AIaaS offerings.

Data Privacy in Neuromorphic Systems​

Neuromorphic systems' always-on, adaptive nature raises new privacy considerations:
Continuous Learning: Systems that continuously adapt based on input data require careful privacy controls and data governance frameworks.
Edge Processing: While edge processing reduces data transmission, it requires robust security measures to protect distributed processing nodes.

Environmental Impact and Sustainability​

Energy Efficiency Revolution​

Next-generation computing architectures offer dramatic improvements in energy efficiency:
  • Current State: Data centers consume approximately 1% of global electricity, with AI workloads representing a growing portion
  • Future Potential: Neuromorphic and photonic systems could reduce AI computation energy requirements by 100-1000x

Carbon Footprint Reduction​

Organizations implementing next-generation architectures can achieve significant sustainability improvements:
  • Immediate Impact: Neuromorphic edge computing reduces data transmission requirements
  • Long-term Benefits: Quantum algorithms could optimize supply chains and reduce waste across entire industries

Future Outlook: The Next Decade of AIaaS​

Technology Convergence Timeline​

  • 2025-2026: Commercial deployment of NISQ algorithms for optimization problems
  • 2027-2028: Widespread adoption of neuromorphic chips in edge applications
  • 2029-2030: Fault-tolerant quantum computers enabling breakthrough AI capabilities

Competitive Landscape Evolution​

The Artificial Intelligence as a Service (AIaaS) market will likely consolidate around companies that successfully integrate next-generation computing architectures. Organizations that delay adoption risk being left behind as quantum advantage and neuromorphic efficiency create insurmountable competitive gaps.

Economic Impact​

McKinsey estimates that quantum computing could create $850 billion in annual value by 2040, with significant portions coming from quantum-enhanced AI applications. Similarly, neuromorphic computing could enable $120 billion in new edge AI applications by 2035.

Conclusion: Preparing for the Quantum-Neuromorphic Era​

The convergence of quantum computing, neuromorphic chips, and next-generation architectures represents the most significant transformation in computing since the advent of the microprocessor. For enterprise leaders, the message is clear: the organizations that begin preparing today will dominate the AIaaS landscape of tomorrow.
The path forward requires bold vision, strategic investment, and careful execution. Companies must balance the immediate benefits of current AI technologies with the transformative potential of emerging architectures. Those who successfully navigate this transition will not only achieve competitive advantages but will fundamentally reshape entire industries.
The quantum-neuromorphic era of AIaaS isn't just approaching—it's already begun. The question isn't whether these technologies will transform your business, but whether you'll be leading the transformation or struggling to catch up.

 
  • Like
  • Fire
  • Love
Reactions: 35 users