BRN Discussion Ongoing

itsol4605

Regular
Just another reminder that we are not alone in our thoughts of where the brn price will be -

The BrainChip stock forecast for 2025 from algorithm-based forecasting service Wallet Investor projected that the share price could rise to A$1.777 by the end of the year, up from A$1.071 at the end of 2023. Its BRN stock forecast suggested that the price could continue rising to reach A$2.438 by December 2027.
Great!! 👍😊
Simple question: Why ??
 

itsol4605

Regular
One of Ericsson’s Stockholm-based neuromorphic researchers, Ahsan Javed Awan, who describes himself on LinkedIn as “Technology Specialist - Emerging Compute Algorithms” is looking for 3 Master students to work on “Brain-Inspired Algorithms for Telecom Networks” relating to RAN (Radio Access Network) workloads:


View attachment 91457


As per my June 2024 post above, Ahsan Javed Awan has been very enamoured with Loihi over the past few years, but may be open to evaluating other neuromorphic processors as well.

The job ad posted on LinkedIn doesn’t specify what specific neuromorphic hardware the algorithms would be implemented on:



Ericsson
Master Thesis: Brain-Inspired Algorithms for Telecom Networks​



Stockholm, Stockholm County, Sweden
Full-time


About the job​

Join our Team

About this opportunity:

With the rapid adoption of machine learning in telecommunication networks, the energy consumption associated with training cognitive algorithms and running inference engines is of increasing concern. Bio-inspired computing architectures such as neuromorphic systems could process cognitive tasks in an energy-efficient manner, thereby making the networks more sustainable. A variety of tasks, such as deep learning inference, dynamic programming, and quadratic unconstrained binary optimization, can exploit neuromorphic hardware by reformulating the problem into a brain-inspired neural network architecture. To harness the potential of neuromorphic hardware in telco networks, it is imperative to understand how brain-inspired neural networks can solve relevant computational problems in an energy-efficient manner.

This master thesis aims at developing brain-inspired neural networks (SNN, BCPNN, etc.) for certain telco use cases and demonstrating the potential energy-efficiency gains.

What you will do:


  • Understand Radio Access Network workloads that need to be energy efficient.
  • Reformulate the RAN workloads into a customized brain-inspired neural network architecture and validate the functionality using a neuromorphic simulator.
  • Devise a technique to estimate the energy-efficiency gains of the brain-inspired neural network for the RAN workloads.
  • Document the solution and evaluation.
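For context, "validating the functionality using a neuromorphic simulator" typically means simulating spiking neurons in software before touching hardware. A minimal leaky integrate-and-fire (LIF) sketch of the kind of model involved (the function name and constants are illustrative, not from the ad):

```python
# Minimal leaky integrate-and-fire (LIF) neuron simulation -- an
# illustrative sketch of what a neuromorphic simulator computes.
# Threshold and leak constants are arbitrary illustrations.

def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Integrate input current with leak; emit a spike (1) when the
    membrane potential crosses the threshold, then reset to zero."""
    v = 0.0
    spikes = []
    for i_t in input_current:
        v = leak * v + i_t          # leaky integration
        if v >= threshold:
            spikes.append(1)        # spike event
            v = 0.0                 # reset after spike
        else:
            spikes.append(0)
    return spikes

# A constant drive of 0.3 with leak 0.9 crosses threshold 1.0 every
# four steps, so the neuron spikes periodically.
print(simulate_lif([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```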
The skills you bring:

  • MSc student in Physics/Computer Science/Mathematics/Embedded Systems or other related fields
  • Proficient in Probability Theory, Deep Neural Networks, Spiking Neural Networks
  • Understanding of Neuromorphic Computing hardware and software stacks
  • Good programming skills are required; knowledge of C++, Python, Linux
Application:

Your application should include: CV, Cover letter, Transcripts of studies (both B.Sc. and up-to-date M.Sc.).

Why join Ericsson?

At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?

Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson, which is why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.
Primary country and city: Sweden (SE) || Stockholm

Req ID: 773239



I’m of course aware of the December 2023 paper “Towards 6G Zero-Energy Internet of Things: Standards, Trends and Recent Results”, in which six Ericsson researchers had experimented with Akida for a ZE-IoT device… (https://d197for5662m48.cloudfront.n...rint_pdf/dfcbe2c260b5426434db681b0f637243.pdf)

…but this is a different area of research, in which Ericsson has been collaborating with Intel for years:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-446092

View attachment 91460
Intel Loihi2 .. not BrainChip Akida 😪
 
Brn
 


TopCat

Regular

Here’s the link for Chips 😉
 

IMG_5414.jpeg
 

yogi

Regular

I argued the case with ChatGPT, exploring why the world persists with GPU technology (developed for video games) to process AI workloads when breakthrough developments like BrainChip's Akida™ neuromorphic technology seem to offer a potentially better solution.

During the conversation, it became apparent that ChatGPT was parroting the world of AI hardware and software as it exists now and was initially blind to the future possibilities.

And this is not surprising because Large Language Model engines like ChatGPT get their information from readily available sources - they are therefore prone to tapping into the dominant information, dominant opinions, and reflect industry best practices. Sometimes (not always) you have to push them hard to consider disruption.

I wanted to peer into the future.

I didn't get the answer I was hoping for - I got something better...

#Brainchip #Akida #Neuromorphic #AI
 

7für7

Top 20
Today would be a perfect day for dropping a price-sensitive Ann, which would send the share price directly and decently to the Andromeda galaxy IMO… no financial advice 👍🏻
star wars lightspeed GIF
 
From crapper.

A quote from Elon Musk… I can't find the link, so let's say it's my interpretation of it. I cut and pasted it.

In Musk's words: "The future is gonna be weird, but pretty cool." Neuromorphic could make Optimus truly "sentient" and efficient.

Currently Elon Musk doesn't have edge technology with the capabilities of Akida. In my opinion Akida would take his robot technology to the next level.

Elon Musk would be well aware of Akida.
 

TECH

Regular
(quoting the post above)

Come on Elon, share the love :ROFLMAO: $4.99 USD a share sounds very nice at this moment in time :ROFLMAO:
 

FJ-215

Regular
The Snapdragon Summit is about to start. We might at least hear their plans for Edge Impulse.
 

Guzzi62

Regular
I just noticed on LinkedIn that the person behind Neuromorphiccore.AI (highly likely Bradley Susser, who also writes about neuromorphic topics on Medium) referred to that paper co-authored by Fernando Sevilla Martínez, Jordi Casas-Roma, Laia Subirats and Raúl Parada Medina earlier today:


View attachment 91443

Here is the extract that FMF found in an article, as posted above:


Building Budget-Friendly Neuromorphic AI with Raspberry Pi and Akida​



The immense energy costs and data-processing bottlenecks of conventional AI systems present a growing problem for industries from finance to logistics. Companies deploying machine learning models face escalating infrastructure expenses, latency constraints, and sustainability concerns as their computational demands multiply. A new study published in IEEE Networking Letters presents a practical framework that could solve this problem by deploying Spiking Neural Networks (SNNs) on ultra-low-cost hardware, offering a blueprint for energy-efficient artificial intelligence that operates at a fraction of current costs.
The research, led by Fernando Sevilla Martínez and colleagues from multiple European institutions, combines Raspberry Pi 5 single-board computers with BrainChip Akida neuromorphic accelerators to create distributed AI systems that consume minimal energy while maintaining real-time performance. This approach could reshape how businesses deploy intelligent systems, from high-frequency trading operations requiring sub-millisecond responses to fraud detection networks processing millions of transactions.

The Business Case for Brain-Inspired Computing
💡

Consider the difference between a classroom where every student constantly works on problems regardless of new information, versus one where students only contribute when they have genuine insights. Conventional neural networks operate like the first scenario—continuously processing data through energy-intensive calculations. Spiking Neural Networks function like the second, activating only when specific conditions trigger responses. This event-driven approach can reduce energy consumption by orders of magnitude while maintaining computational capability.
The underlying technical reason for this dramatic improvement lies in how these systems communicate. Conventional neural networks rely on continuous floating-point operations—complex mathematical calculations that demand significant computational resources. SNNs communicate via discrete spikes, performing simple additions or accumulations of spike events and their associated weights rather than constant complex multiplications. This fundamental difference explains why their power consumption drops from joules to the microjoule range.
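The contrast can be made concrete with a toy operation count. In this hypothetical sketch (layer sizes and spike rate invented for illustration), a dense layer performs a multiply-accumulate for every weight on every input, while an event-driven layer only accumulates weights for inputs that actually fired:

```python
# Toy comparison of work done by a dense layer versus an event-driven
# (spiking) layer. Operation counts, not energy, are compared; the
# layer sizes and ~10% spike rate are arbitrary illustrations.

def dense_ops(n_in, n_out):
    # One multiply-accumulate per (input, output) pair, always.
    return n_in * n_out

def spiking_ops(spikes, n_out):
    # Weights are only accumulated for inputs that emitted a spike.
    return sum(spikes) * n_out

n_in, n_out = 1024, 256
spikes = [1 if i % 10 == 0 else 0 for i in range(n_in)]  # ~10% activity

print("dense MACs:   ", dense_ops(n_in, n_out))      # → 262144
print("spike accums: ", spiking_ops(spikes, n_out))  # → 26368
```

With roughly 10% of inputs active, the event-driven layer does about a tenth of the work, and the accumulations it does perform are additions rather than multiplications.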
For financial services firms, this reduction translates directly to operational advantages. High-frequency trading systems could deploy autonomous processing nodes near exchanges, analyzing market data and executing trades with sub-millisecond latency while consuming less power than a smartphone charger. The distributed nature of these systems enables redundancy and geographical optimization without the infrastructure costs associated with cloud-based processing.
Fraud detection represents another compelling application. Rather than transmitting sensitive transaction data to centralized servers, financial institutions could deploy neuromorphic processors locally, identifying suspicious patterns in real-time while keeping customer information secure. The event-driven nature of SNNs makes them particularly suited for detecting anomalies—unusual spikes or deviations from normal transaction patterns trigger immediate analysis without continuous background processing. This capability becomes even more valuable as the framework enables true distributed intelligence across entire networks.
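The event-driven idea for anomaly detection can be sketched as: nothing beyond a cheap baseline update runs until an observation deviates strongly from normal (a hypothetical toy, not the paper's detector; thresholds and data are invented):

```python
# Toy event-driven anomaly monitor: maintain an exponential moving
# average of transaction amounts and emit an event ("spike") only when
# a new amount deviates strongly from it. Heavy analysis happens per
# event, not continuously. All constants are illustrative.

def monitor(amounts, alpha=0.2, k=3.0):
    mean, events = amounts[0], []
    for t, x in enumerate(amounts[1:], start=1):
        if abs(x - mean) > k * max(mean, 1.0):   # deviation triggers event
            events.append((t, x))                # analyse only on spike
        mean = (1 - alpha) * mean + alpha * x    # cheap baseline update
    return events

txns = [20, 22, 19, 21, 500, 20, 23]             # one anomalous amount
print(monitor(txns))  # → [(4, 500)]
```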

From Cloud Training to Edge Deployment
🔗

The study details a comprehensive pipeline that bridges high-powered model development with resource-constrained deployment. This process centers on Quantization-Aware Training (QAT), a critical technique that allows complex models trained in GPU-rich cloud environments to perform effectively on tiny, low-power chips.
QAT represents the essential bridge between these two worlds. Rather than compressing models after training—which typically degrades performance—this approach simulates the constraints of target hardware during the learning process. Models adapt to operate under low-bitwidth conditions (4-8 bits) while maintaining accuracy levels comparable to full-precision versions.
“In contrast to post-training quantization, which discretizes weights and activations after full-precision training, QAT simulates quantization effects during training,” the researchers explain. Their method achieves 5-10% better accuracy compared to naive compression techniques, ensuring that sophisticated AI capabilities survive the transition to edge hardware.
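As an illustration of the idea only (a generic sketch, not BrainChip's actual QAT toolchain), "fake quantization" in the forward pass lets training see the rounding error the target hardware will impose:

```python
# Illustrative "fake quantization" as used in quantization-aware
# training: weights are snapped to a low-bitwidth grid and dequantized
# back to floats, so training adapts to the precision of the target
# hardware. Generic sketch, not BrainChip's toolchain.

def fake_quantize(values, bits=4):
    """Quantize to a symmetric 'bits'-wide integer grid, then
    dequantize, mimicking what the hardware will compute."""
    levels = 2 ** (bits - 1) - 1   # max integer magnitude, 7 for 4 bits
    scale = max(abs(v) for v in values) / levels
    return [round(v / scale) * scale for v in values]

weights = [0.82, -0.41, 0.05, -0.77]
q = fake_quantize(weights, bits=4)
print(q)  # each weight snapped to the nearest representable value
```

In real QAT this rounding runs inside every training step (with a straight-through gradient), so the learned weights end up robust to 4-8-bit precision instead of being degraded by it afterwards.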
The conversion process transforms trained TensorFlow models into Akida’s spike-based format, requiring careful consideration of supported operations. Standard neural network components—convolutional layers, dense connections, batch normalization—transfer seamlessly, while more complex operations must be restructured or avoided. This constraint actually encourages efficient model design, often resulting in more robust and interpretable systems.
The hardware setup pairs the Raspberry Pi 5 with the BrainChip Akida board through a PCIe accelerator—a high-speed interface that allows the Raspberry Pi to connect directly to and offload intensive computations to the specialized neuromorphic chip, bypassing the constraints of its main processor. But the real power of this framework isn’t in a single device—it’s in the network of intelligent agents it creates.

Network-Ready Intelligence at Scale
🌐

Beyond individual device capabilities, the research emphasizes distributed computing architectures that enable sophisticated coordination between multiple AI nodes. The platform supports secure remote access through SSH, allowing administrators to manage networks of neuromorphic devices from any location—crucial for deploying systems across multiple trading floors, branch offices, or geographical regions.
Multiple communication protocols enable different types of coordination:
  • MQTT provides publish-subscribe messaging ideal for sensor networks and market data distribution
  • WebSockets enable real-time bidirectional communication for applications requiring immediate feedback
  • Vehicle-to-Everything (V2X) protocols show infrastructure-free coordination capabilities applicable to mobile trading platforms or disaster-resilient financial networks
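The publish-subscribe pattern these protocols share can be sketched with an in-process stand-in (not actual MQTT; a real deployment would use an MQTT client library and a network broker):

```python
# Minimal in-process publish-subscribe broker illustrating the MQTT-
# style pattern used to broadcast inference results to many
# subscribers. A stand-in for a real broker, for illustration only.

from collections import defaultdict

class MiniBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of the topic.
        for callback in self.subscribers[topic]:
            callback(message)

broker = MiniBroker()
received = []
# Two hypothetical "trading desks" subscribe to classification results.
broker.subscribe("akida/inference", received.append)
broker.subscribe("akida/inference", lambda m: received.append(m.upper()))
broker.publish("akida/inference", "anomaly: txn 4711")
print(received)
```

The design choice MQTT embodies is the same as here: the publisher (the neuromorphic node) never needs to know who is listening, which is what makes fan-out to many subscribers cheap.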
The team validated their approach through three practical scenarios that showcase business-relevant capabilities. First, they proved real-time inference broadcasting via MQTT, where classification results from neuromorphic processors reach multiple subscribers instantly—valuable for distributing market analysis or risk assessments across trading teams. Second, they implemented V2X-style communication for autonomous coordination without centralized infrastructure—applicable to decentralized trading networks or backup systems. Third, they enabled federated learning protocols where multiple devices improve their models collectively while maintaining data privacy—essential for financial institutions sharing insights without exposing proprietary information.
This federated learning capability deserves particular attention for financial services organizations. Given strict data privacy regulations like GDPR and CCPA, the fact that models can be collectively improved without ever sharing raw, sensitive data represents a major compliance and security advantage that sets this approach apart from many cloud-based AI solutions. Banks can collaborate on fraud detection improvements without sharing customer information, while trading firms can enhance market prediction models while protecting proprietary strategies. These distributed capabilities form the foundation for truly scalable intelligent systems.

Performance Metrics With Business Impact
⚡

The energy consumption differences between computing platforms reveal significant cost implications. Training neural networks on high-end hardware like Apple’s M1 Max processor consumes 144 joules per operation, while inference on the Raspberry Pi-Akida combination requires only 10-30 microjoules—representing potential energy cost reductions of 99% or more. For organizations processing millions of transactions or market data points daily, these savings compound rapidly.
Latency measurements prove equally compelling for time-sensitive applications. Neuromorphic inference completes in under 1 millisecond compared to 10-20 milliseconds for CPU-based processing. In high-frequency trading where microseconds determine profitability, this performance advantage could justify deployment costs within days or weeks.
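The headline figures can be sanity-checked with back-of-envelope arithmetic, using the numbers as quoted above (144 J versus a worst case of 30 µJ, and 10 ms versus 1 ms):

```python
# Back-of-envelope check of the quoted figures: 144 J on a high-end
# processor versus 10-30 microjoules per inference on the Pi+Akida
# combination, and ~1 ms versus 10-20 ms latency.

high_end_energy_j = 144.0   # joules, as quoted in the article
edge_energy_j = 30e-6       # joules, worst case quoted for Akida

ratio = high_end_energy_j / edge_energy_j
savings = 1 - edge_energy_j / high_end_energy_j

print(f"energy ratio: {ratio:,.0f}x")   # roughly 4.8 million x
print(f"savings: {savings:.4%}")        # well above 99%

cpu_ms, akida_ms = 10.0, 1.0            # best case quoted for the CPU
print(f"latency speedup: at least {cpu_ms / akida_ms:.0f}x")
```

Even taking the CPU's best quoted latency and Akida's worst quoted energy, the "99% or more" claim in the article holds by a wide margin.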
These performance gains enable new business models previously constrained by infrastructure costs:
  • Battery-powered devices operate for extended periods without charging
  • Mobile applications make complex decisions locally without cellular connectivity
  • Edge computing deployments function autonomously in remote locations
  • Handheld devices provide instant risk assessments without constant internet connectivity
For predictive analytics applications, portfolio managers could carry devices providing real-time optimization suggestions during client meetings or field visits, enhancing service delivery while maintaining data security. The combination of ultra-low power consumption and high-speed processing creates opportunities for always-on intelligence that adapts to changing market conditions without overwhelming infrastructure costs.

Scaling Distributed Intelligence
📈

The modular architecture enables horizontal scaling across multiple devices, supporting applications from individual trading desks to global financial networks. Networks of Raspberry Pi-Akida nodes can collaborate on complex analytical tasks, sharing computational loads while providing redundancy against hardware failures.
Communication overhead remains minimal despite distributed coordination. MQTT message delivery across local networks averages 6.2 milliseconds with low variance, while broadcast protocols enable infrastructure-free coordination between mobile devices. These capabilities support applications ranging from algorithmic trading clusters to disaster-recovery systems that maintain functionality even when primary data centers fail.
The researchers implemented federated learning protocols particularly relevant to financial services. Multiple nodes can improve their models collectively while keeping sensitive data local—enabling banks to collaborate on fraud detection improvements without sharing customer information, or allowing trading firms to enhance market prediction models while protecting proprietary strategies. This approach transforms what was once a competitive disadvantage (keeping data private) into a collaborative advantage that strengthens the entire network.

Democratizing Advanced AI Technology
🚀

Previous neuromorphic computing research often required expensive specialized hardware accessible only to well-funded research institutions or major technology companies. This study provides a reproducible implementation using commercially available components, significantly lowering barriers for organizations interested in neuromorphic systems.
By providing a complete blueprint using affordable, widely available hardware, this research doesn’t just advance technology—it democratizes access to it. The total cost of a Raspberry Pi-Akida development platform remains under $500, compared to tens of thousands for specialized neuromorphic research systems. This accessibility enables startups, regional banks, investment firms, and individual developers to build and experiment with next-generation AI systems, potentially leading to innovation that isn’t confined to a few well-funded technology giants.
The complete codebase, including documentation and example applications, is publicly available. This transparency accelerates adoption while enabling customization for specific business requirements. Organizations can modify the framework for their particular use cases without starting from scratch or licensing proprietary platforms. The democratization of this technology could spark innovation across industries that previously couldn’t afford to experiment with cutting-edge AI capabilities.

Future Business Implications
🔮

As regulatory pressure increases around AI explainability, energy consumption, and data privacy, neuromorphic systems offer advantages beyond pure performance. The event-driven nature of spike-based processing creates inherent audit trails—it’s easier to understand why a system activated and what information triggered specific decisions. Lower power consumption supports corporate sustainability goals while reducing operational costs. Local processing capabilities enhance data security and regulatory compliance.
The researchers acknowledge current constraints, including restricted support for advanced neural network operations and bounded model depth due to memory requirements. However, they anticipate that future hardware and software revisions will expand these capabilities while maintaining core advantages of event-driven processing.
For business leaders evaluating AI strategy, this research suggests a viable alternative to increasingly expensive cloud-based solutions. The combination of low acquisition costs, minimal operational expenses, and distributed capabilities makes neuromorphic systems attractive for organizations seeking sustainable competitive advantages through artificial intelligence.
This work represents a significant step toward making advanced AI accessible to organizations beyond technology giants, providing practical tools and methods for building intelligent systems that operate effectively within existing infrastructure. As industries demand ever-greater automation and analytical capabilities, spike-based computing may well become the foundation for ubiquitous artificial intelligence that enhances business operations without breaking budgets.

Source: Sevilla Martínez, F., Casas-Roma, J., Subirats, L., & Parada, R. (2025). Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware. IEEE Networking Letters. DOI: 10.1109/LNET.2025.3611426

 