BRN Discussion Ongoing

IloveLamp

Top 20

1000020727.jpg
 
  • Fire
  • Like
  • Love
Reactions: 22 users

HopalongPetrovski

I'm Spartacus!
I don’t care bro… I knew that you would respond to that… you only respond to that kind of post… question is, who is the real wanker here… btw, when I realise by myself that something is nonsense, I just delete it… it’s called self-reflection. And… I also posted in the past, and still post, BrainChip-related stuff… mostly positive, for years… and sometimes negative, as I said, because of the share price only… but you may also have a personal problem with me, which is why you attack me constantly
Oh dear.
Looks like I've inadvertently killed off Fur.
Sorry everyone.
 
  • Haha
  • Love
  • Like
Reactions: 22 users

Rach2512

Regular
Sorry if already posted; I may have even posted it previously myself, my memory isn't the best. Is it right to assume that wherever you see Prophesee, it's a given that Akida is part of their offering? Please don't shoot me if I've got this wrong.



Screenshot_20260309_151320_Samsung Internet.jpg
Screenshot_20260309_151329_Samsung Internet.jpg
Screenshot_20260309_151340_Samsung Internet.jpg
Screenshot_20260309_151349_Samsung Internet.jpg
 
Last edited:
  • Like
  • Fire
  • Wow
Reactions: 11 users

HopalongPetrovski

I'm Spartacus!
Sorry if already posted; I may have even posted it previously myself, my memory isn't the best. Is it right to assume that wherever you see Prophesee, it's a given that Akida is part of their offering? Please don't shoot me if I've got this wrong.



View attachment 95876 View attachment 95877 View attachment 95878 View attachment 95879
Maybe. It's hard to keep up these days. 🤣
Reading this, though, brings to mind PVDM's talk of autonomous cars needing to be able to differentiate between a plastic bag blown by the wind and a child chasing a ball out onto the road.
 
  • Love
  • Like
  • Fire
Reactions: 7 users

miaeffect

Oat latte lover
1773043535788.png
 
  • Haha
Reactions: 6 users

manny100

Top 20
Just a reminder that MegaChips has been demonstrating robotics at its premises in Japan since September 2025.
"This section describes the construction of the proposed framework shown in Fig. 2. We utilized a desktop PC equipped with a GPU (Nvidia RTX3090) for updating the policies and an Akida Neural Processor SoC as a neurochip [9, 12]."
My bold above.
The quote above is from a paper co-written by MegaChips; see the link below.
Robust Iterative Value Conversion: Deep Reinforcement Learning for Neurochip-driven Edge Robots
 
  • Like
  • Fire
  • Love
Reactions: 24 users

Earlyrelease

Regular
Just a reminder that MegaChips has been demonstrating robotics at its premises in Japan since September 2025.
"This section describes the construction of the proposed framework shown in Fig. 2. We utilized a desktop PC equipped with a GPU (Nvidia RTX3090) for updating the policies and an Akida Neural Processor SoC as a neurochip [9, 12]."
My bold above.
The quote above is from a paper co-written by MegaChips; see the link below.
Robust Iterative Value Conversion: Deep Reinforcement Learning for Neurochip-driven Edge Robots
Manny.
That is great (appears from paper possibly a Aug 2024 paper) but what we need is confirmation they are still researching after Nov 2025 when their BRN licence expired. I recently sent a letter to IR at brainchip asking what is going on with licence status and if not renewed will this be a ASX release since the original announcement from brainchip resulted in an ASX announcement. Thus in my mind they must then counter and advise the market that the licence was not extended as this would be price sensitive (in a negative way albeit but required by the rules). Anyway as per the last Company broadcast re future shareholder correspondence/responses, nothing not even a got your letter and thanks.
 
  • Fire
  • Like
Reactions: 4 users
FF
KK Mookhey
AI + Cybersecurity | CEO Network Intelligence | Co-Founder Transilience AI | CISA, CISSP, AZ-500 | Avid Mountaineer
(LinkedIn, 1d)

That small LED on the Ray-Ban Meta glasses? Most people selling them can't explain what it means.

A joint investigation by Swedish journalists just revealed that footage captured by these glasses, when users invoke the AI assistant, is reviewed by human workers at a subcontractor in Nairobi, Kenya. Workers described seeing bathroom footage, people undressing, bank cards, intimate moments. All captured by someone who thought they were just asking their glasses a question.

The person wearing the glasses consented to a privacy policy. Everyone else in the frame never got the chance to.

This isn't just a Meta story. It's a preview of what happens when ambient AI recording meets inadequate consent frameworks at scale.

Four things every security professional needs to understand about this:

→ Wearable AI devices are ambient surveillance by design. A single LED is not a sufficient consent mechanism.
→ "Processed locally" is a marketing phrase. Packet analysis on these glasses found constant communication with Meta servers, contradicting what store employees were telling customers.
→ Third-party data annotation is the invisible layer of every major AI product. The people reviewing your footage are not Meta employees, may operate under weaker data protections, and are bound by NDAs.
→ Bystander consent in wearable tech is still an unresolved legal gap. NOYB and EPIC have both filed actions. Watch this space.

We are putting cameras on faces before we've built the legal and ethical infrastructure to govern them.

***€€€€******€€€€€******€€€€******

If we go back to Sean Hehir's first AGM, I well remember Sean Hehir, in addressing shareholder questions, telling anyone prepared or able to listen that there were many companies making claims about what their technology could do at the Edge, but these claims did not stand up to the light of day. The point he was making was not that AKIDA was incredible technology, but that the false claims by these companies created barriers that BrainChip had to break down because these companies had dominant market positions.

Here we have Meta employees claiming on-device intelligence, a feature that clearly does not exist.

On-device intelligence is at the very core of BrainChip's AKIDA technology.

The truth of what BrainChip's AKIDA technology is bringing to the world, from the Edge to the Cloud, is becoming clearer with every passing day. Exaggeration and untruths will only carry competitors so far.
 
  • Like
  • Fire
  • Wow
Reactions: 11 users
FF
KK Mookhey
AI + Cybersecurity | CEO Network Intelligence | Co-Founder Transilience AI | CISA, CISSP, AZ-500 | Avid Mountaineer
(LinkedIn, 1d)

That small LED on the Ray-Ban Meta glasses? Most people selling them can't explain what it means.

A joint investigation by Swedish journalists just revealed that footage captured by these glasses, when users invoke the AI assistant, is reviewed by human workers at a subcontractor in Nairobi, Kenya. Workers described seeing bathroom footage, people undressing, bank cards, intimate moments. All captured by someone who thought they were just asking their glasses a question.

The person wearing the glasses consented to a privacy policy. Everyone else in the frame never got the chance to.

This isn't just a Meta story. It's a preview of what happens when ambient AI recording meets inadequate consent frameworks at scale.

Four things every security professional needs to understand about this:

→ Wearable AI devices are ambient surveillance by design. A single LED is not a sufficient consent mechanism.
→ "Processed locally" is a marketing phrase. Packet analysis on these glasses found constant communication with Meta servers, contradicting what store employees were telling customers.
→ Third-party data annotation is the invisible layer of every major AI product. The people reviewing your footage are not Meta employees, may operate under weaker data protections, and are bound by NDAs.
→ Bystander consent in wearable tech is still an unresolved legal gap. NOYB and EPIC have both filed actions. Watch this space.

We are putting cameras on faces before we've built the legal and ethical infrastructure to govern them.

***€€€€******€€€€€******€€€€******

If we go back to Sean Hehir's first AGM, I well remember Sean Hehir, in addressing shareholder questions, telling anyone prepared or able to listen that there were many companies making claims about what their technology could do at the Edge, but these claims did not stand up to the light of day. The point he was making was not that AKIDA was incredible technology, but that the false claims by these companies created barriers that BrainChip had to break down because these companies had dominant market positions.

Here we have Meta employees claiming on-device intelligence, a feature that clearly does not exist.

On-device intelligence is at the very core of BrainChip's AKIDA technology.

The truth of what BrainChip's AKIDA technology is bringing to the world, from the Edge to the Cloud, is becoming clearer with every passing day. Exaggeration and untruths will only carry competitors so far.

Gotta do what they say though :LOL:

they-live-john-carpenter.gif
 
  • Haha
  • Fire
  • Like
Reactions: 6 users

Frangipani

Top 20
New preprint mentioning Akida (mainly Gen 1) by three researchers from Edith Cowan University, including Alexander Rassau, who co-authored several papers with Anup Vanarse, Adam Osseiran and Peter van der Made between 2016 and 2020…


View attachment 63082

View attachment 63086


View attachment 63084



5. Conclusion

SLAM based on event cameras and neuromorphic computing represents an innovative approach to spatial perception and mapping in dynamic environments.
Event cameras capture visual information asynchronously, responding immediately to changes in the scene with high temporal resolution. Neuromorphic computing, inspired by the brain's processing principles, has the capacity to efficiently handle this event-based data, enabling real-time, low-power computation. By combining event cameras and neuromorphic processing, SLAM systems could achieve several advantages, including low latency, low power consumption, robustness to changing lighting conditions, and adaptability to dynamic environments. This integrated approach offers efficient, scalable, and robust solutions for applications such as robotics, augmented reality, and autonomous vehicles, with the potential to transform spatial perception and navigation capabilities in various domains.


5.1. Summary of Key Findings


VSLAM systems based on traditional image sensors such as monocular, stereo or RGB-D cameras have gained significant development attention in recent decades. These sensors can gather detailed data about the scene and are available at affordable prices. They also have relatively low power requirements, making them feasible for autonomous systems such as self-driving cars, unmanned aerial vehicles and other mobile robots. VSLAM systems employing these sensors have achieved reasonable performance and accuracy but have often struggled in real-world contexts due to high computational demands, limited adaptability to dynamic environments, and susceptibility to motion blur and lighting changes. Moreover, they face difficulties in real-time processing, especially in resource-constrained settings like autonomous drones or mobile robots.

To overcome the drawbacks of these conventional sensors, event cameras have begun to be explored. They have been inspired by the working of biological retinas and attempt to mimic the characteristics of human eyes. This biological design influence for event cameras means they consume minimal power and operate with lower bandwidth in addition to other notable features such as very low latency, high temporal resolution, and wide dynamic range. These attractive features make event cameras highly suitable for robotics, autonomous vehicles, drone navigation, and high-speed tracking applications. However, they operate on a fundamentally different principle compared to traditional cameras; event cameras respond to the brightness changes of the scene and generate events rather than capturing the full frame at a time. This poses challenges as algorithms and approaches employed in conventional image processing and SLAM systems cannot be directly applied and novel methods are required to realize their potential.
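As a concrete aside (my own illustration, not from the paper): the event-generation principle just described is straightforward to express in code. In the standard model, a pixel fires an event whenever its log-intensity moves beyond a contrast threshold from the level at which it last fired. The Python toy below simulates that from a stack of frames; the function name and threshold value are illustrative assumptions.

```python
import numpy as np

def generate_events(frames, timestamps, contrast=0.2):
    """Toy DVS simulator: emit (t, x, y, polarity) whenever a pixel's
    log-intensity moves more than `contrast` away from the level at
    which that pixel last fired. Real sensors do this asynchronously
    per pixel; this frame-based version only illustrates the idea."""
    log_ref = np.log(frames[0].astype(np.float64) + 1e-6)
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_now = np.log(frame.astype(np.float64) + 1e-6)
        diff = log_now - log_ref
        fired = np.abs(diff) >= contrast
        for y, x in zip(*np.nonzero(fired)):
            events.append((t, x, y, 1 if diff[y, x] > 0 else -1))
        log_ref[fired] = log_now[fired]  # reset reference for fired pixels
    return events
```

Static pixels never fire, which is where the bandwidth and power savings described above come from.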

For SLAM systems based on event cameras, relevant methods can be selected based on the event representations and the hardware platform being used. Commonly employed methods for event-based SLAM are feature-based, direct, motion-compensated and deep-learning methods. Feature-based methods can be computationally efficient as they only deal with the small number of events produced by fast-moving cameras for processing. However, their efficacy diminishes when dealing with texture-less environments. On the other hand, direct methods can achieve robustness in texture-less environments, but they can only be employed for moderate camera motions. Motion-compensated methods can offer robustness in high-speed motion as well as in large-scale settings, but they can only be employed for rotational camera motions. Deep learning methods can be effectively used to acquire the required attributes of the event data and generate the map while being robust to noise and outliers. However, this requires large amounts of training data, and performance cannot be guaranteed for different environment settings. SNNs have emerged in recent years as alternatives to CNNs and are considered well-suited for data generated by event cameras. The development of practical SNN-based systems is, however, still in the early stages, and relevant methods and techniques need considerable further development before they can be implemented in an event camera-based SLAM system.

For conventional SLAM systems, traditional computing platforms usually require additional hardware such as GPU co-processors to perform the heavy computational loads, particularly when deep learning methods are employed. This high computational requirement means power requirements are also high, making them impractical for deployment in mobile autonomous systems. However, neuromorphic event-driven processors utilizing SNNs to model cognitive and interaction capabilities show promise in providing a solution. The research on implementing and integrating these emerging technologies is still in the early stages, however, and additional research effort will be required to realize this potential.

This review has identified that a system based on event cameras and neuromorphic processing presents a promising pathway for enhancing state-of-the-art solutions in SLAM. The unique features of event cameras, such as adaptability to changing lighting conditions, support for high dynamic range and lower power consumption due to the asynchronous nature of event data generation are the driving factors that can help to enhance the performance of the SLAM system. In addition, neuromorphic processors, which are designed to efficiently process and support parallel incoming event streams, can help to minimize the computational cost and increase the efficiency of the system. Such a neuromorphic SLAM system has the possibility of overcoming significant obstacles in autonomous navigation, such as the need for quick and precise perception, while simultaneously reducing problems relating to real-time processing requirements and energy usage. Moreover, if appropriate algorithms and methods can be developed, this technology has the potential to transform the realm of mobile autonomous systems by enhancing their agility, energy efficiency, and ability to function in a variety of complex and unpredictable situations.


5.2. Current State-of-the-Art and Future Scope


During the last few decades, much research has focused on implementing SLAM based on frame-based cameras and laser scanners. Nonetheless, a fully reliable and adaptable solution has yet to be discovered due to the computational complexities and sensor limitations, leading to systems requiring high power consumption and having difficulty adapting to changes in the environment, rendering them impractical for many use cases, particularly for mobile autonomous systems. For this reason, researchers have begun to shift focus to finding alternative or new solutions to address these problems. One promising direction for further exploration was found to be the combination of an event camera and neuromorphic computing technology due to the unique benefits that these complementary approaches can bring to the SLAM problem.

The research to incorporate event cameras and neuromorphic computing technology into a functional SLAM system is, however, currently in the early stages. Given that the algorithms and techniques employed in conventional SLAM approaches are not directly applicable to these emerging technologies, the necessity of finding new algorithms and methods within the neuromorphic computing paradigm is the main challenge faced by researchers. Some promising approaches to applying event cameras to the SLAM problem have been identified in this paper, but future research focus needs to be applied to the problem of utilizing emerging neuromorphic processing capabilities to implement these methods practically and efficiently.


5.3. Neuromorphic SLAM Challenges

Developing SLAM algorithms that effectively utilize event-based data from event cameras and harness the computational capabilities of neuromorphic processors presents a significant challenge. These algorithms must be either heavily modified or newly conceived to fully exploit the strengths of both technologies. Furthermore, integrating data from event cameras with neuromorphic processors and other sensor modalities, such as IMUs or traditional cameras, necessitates the development of new fusion techniques. Managing the diverse data formats, temporal characteristics, and noise profiles from these sensors while maintaining consistency and accuracy throughout the SLAM process will be a complex task.
In terms of scalability, expanding event cameras and neuromorphic processor-based SLAM systems to accommodate large-scale environments with intricate dynamics will pose challenges in computational resource allocation. It is essential to ensure scalability while preserving real-time performance for practical deployment. Additionally, event cameras and neuromorphic processors must adapt to dynamic environments where scene changes occur rapidly. Developing algorithms capable of swiftly updating SLAM estimates based on incoming event data while maintaining robustness and accuracy is critical.

Leveraging the learning capabilities of neuromorphic processors for SLAM tasks, such as map building and localization, necessitates the design of training algorithms and methodologies proficient in learning from event data streams. The development of adaptive learning algorithms capable of enhancing SLAM performance over time in real-world environments presents a significant challenge. Moreover, ensuring the correctness and reliability of event camera and neuromorphic processor-based SLAM systems poses hurdles in verification and validation. Rigorous testing methodologies must also be developed to validate the performance and robustness of these systems. If these challenges can be overcome, however, the potential rewards are significant.


Author Contributions: Conceptualization, S.T., A.R. and D.C.; methodology, S.T.; validation, S.T., A.R. and D.C.; formal analysis, S.T.; investigation, S.T.; writing—original draft preparation, S.T.; writing—review and editing, S.T., A.R. and D.C.; supervision, A.R. and D.C. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Institutional Review Board Statement: Not applicable.

Data Availability Statement: Not applicable.

Acknowledgments: This publication is partially supported by an ECU, School of Engineering Scholarship.

Conflicts of Interest: The authors declare no conflict of interest.

Good to see the Edith Cowan University (ECU) relationship turning out some work (a thesis) around neuromorphic computing, benchmarked on Akida platforms with substantial benefits in latency and energy.

Can't read the full thesis as it's embargoed till 2027, which can be for reasons like:

Intellectual Property and Patents
Future Publication
Sensitive Data/Confidentiality
Book Deal/Copyright
National Security

Maybe if it's something incredibly useful to industry, BC and Akida, we should pony up and get it assigned to us, haha.



Towards neuromorphic visual SLAM: A spiking neural network for efficient pose estimation and loop closure based on event camera data

Author

Sangay Tenzin, Edith Cowan University

Author Identifier

Sangay Tenzin: http://orcid.org/0000-0001-5257-0302

Date of Award

2026

Document Type

Thesis

Publisher

Edith Cowan University

Degree Name

Master of Engineering Science

School

School of Engineering

First Supervisor

Alexander Rassau

Second Supervisor

Douglas Chai

Abstract

The need for effective Simultaneous Localisation and Mapping (SLAM) solutions has been pivotal across a wide range of applications, including autonomous vehicles, industrial robotics, and mobile service platforms where accurate localisation and environmental perception are essential. Visual SLAM (VSLAM) has emerged as a popular approach due to its cost effectiveness and ease of deployment. However, state-of-the-art VSLAM systems using conventional cameras face significant limitations, including high computational requirements, sensitivity to motion blur, restricted dynamic range, and poor performance under variable lighting conditions.
Event cameras present a promising alternative by producing asynchronous, high-temporal resolution data with low latency and power consumption. These characteristics make them ideal for use in dynamic and resource-constrained environments. Complementing this, neuromorphic processors designed for efficient event-driven computation are inherently compatible with sparse temporal data. Despite their synergy, the adoption of event cameras and neuromorphic computing in SLAM remains limited due to the scarcity of public datasets, underdeveloped algorithmic tools, and challenges in multimodal sensor fusion.
This thesis develops integrated Visual Odometry (VO) and Loop Closure (LC) models that leverage neuromorphic sensing, spiking neural networks (SNN), and probabilistic factor-graph optimisation as a pathway towards full event camera-based SLAM. A synchronised multimodal dataset, captured with a Prophesee STM32-GENx320 event camera, Livox MID-360 LiDAR, and Pixhawk 6C Mini IMU, spans indoor and outdoor scenarios over 2,285 s. Raw events are aggregated into voxel-grid tensors using 100 ms windows with 20 temporal bins. LiDAR odometry from point clouds is refined with inertial constraints in Georgia Tech Smoothing and Mapping (GTSAM) to produce pseudo-ground-truth trajectories for supervised learning.
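For anyone wondering what "voxel-grid tensors using 100 ms windows with 20 temporal bins" looks like in practice, here is a minimal Python sketch of one common aggregation scheme. It illustrates the idea only and is not the thesis code; the function name and the (t, x, y, polarity) event layout are assumptions.

```python
import numpy as np

def events_to_voxel_grids(events, height, width,
                          window_s=0.1, num_bins=20):
    """Aggregate (t, x, y, polarity) events into voxel-grid tensors:
    one (num_bins, height, width) tensor per 100 ms window, with
    signed polarity accumulated into 20 temporal bins."""
    events = sorted(events)                 # sort by timestamp
    t0, t_end = events[0][0], events[-1][0]
    grids, i = [], 0
    while t0 < t_end:
        grid = np.zeros((num_bins, height, width), dtype=np.float32)
        t1 = t0 + window_s
        while i < len(events) and events[i][0] < t1:
            t, x, y, p = events[i]
            b = min(int((t - t0) / window_s * num_bins), num_bins - 1)
            grid[b, y, x] += p              # signed accumulation
            i += 1
        grids.append(grid)
        t0 = t1
    return grids
```

Each window thus becomes a dense tensor an SNN can consume, while the fine timing structure is preserved in the bin dimension.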
Two SNN models are developed using the SpikingJelly framework: a spiking VO network that predicts six-degree-of-freedom (6-DOF) pose increments from voxel grids, and a LC network that estimates inter-frame similarity scores for global trajectory correction. Both models are trained with surrogate gradient learning and employ Leaky Integrate-and-Fire neurons. The VO model uses a hybrid loss function that combines Root Mean Square Error for translation with a geodesic loss on the Special Orthogonal Group in 3D for rotation prediction. The LC model is optimised using a joint loss comprising a triplet margin loss for learning discriminative embeddings and a cross-entropy loss for binary classification.
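The hybrid loss described above (RMSE for translation plus a geodesic distance on SO(3) for rotation) is compact enough to write out. Below is a PyTorch sketch under assumed tensor shapes (batched 3-vectors and 3x3 rotation matrices); the weighting factor alpha is a placeholder, not a value from the thesis.

```python
import torch

def hybrid_pose_loss(t_pred, t_gt, R_pred, R_gt, alpha=1.0):
    """RMSE on translation increments plus the mean geodesic angle
    between predicted and ground-truth rotation matrices."""
    # Translation: root-mean-square error over the batch
    t_loss = torch.sqrt(torch.mean((t_pred - t_gt) ** 2))

    # Rotation: theta = arccos((trace(R_p^T R_g) - 1) / 2) on SO(3)
    rel = torch.matmul(R_pred.transpose(-1, -2), R_gt)
    trace = rel.diagonal(dim1=-2, dim2=-1).sum(-1)
    cos_theta = torch.clamp((trace - 1.0) / 2.0, -1.0 + 1e-7, 1.0 - 1e-7)
    r_loss = torch.mean(torch.arccos(cos_theta))

    return t_loss + alpha * r_loss
```

The clamp keeps arccos numerically stable at the boundaries, and the geodesic term penalises rotation error by actual angle rather than by a naive elementwise difference.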
These frontend models are integrated into a modular backend system based on a sliding window factor graph. The backend fuses VO predictions with IMU pre-integration and LC constraints and performs real-time optimisation using GTSAM. Empirical evaluation on kilometre-scale sequences demonstrates robust performance in diverse indoor and outdoor environments, achieving sub-metre Absolute Trajectory Error and competitive Relative Pose Error. Additionally, hardware benchmarking across conventional and neuromorphic processors such as the BrainChip Akida platform reveals up to a four-times reduction in latency and an order-of-magnitude gain in energy efficiency on neuromorphic hardware.
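For readers unfamiliar with GTSAM, a backend of the kind described (VO between-factors plus loop-closure constraints, jointly optimised) can be assembled in a few lines with the gtsam 4.x Python bindings. The sketch below is illustrative only: IMU pre-integration and the sliding window are omitted, and the poses and noise sigmas are made-up placeholders, not the thesis configuration.

```python
import gtsam
import numpy as np

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

vo_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.05))
lc_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.02))

# Anchor the first pose at the origin
graph.add(gtsam.PriorFactorPose3(
    0, gtsam.Pose3(), gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-4))))
initial.insert(0, gtsam.Pose3())

# Chain of (placeholder) SNN visual-odometry increments: 1 m forward each step
for k in range(1, 5):
    vo_step = gtsam.Pose3(gtsam.Rot3(), np.array([1.0, 0.0, 0.0]))
    graph.add(gtsam.BetweenFactorPose3(k - 1, k, vo_step, vo_noise))
    initial.insert(k, gtsam.Pose3(gtsam.Rot3(), np.array([float(k), 0.0, 0.0])))

# Loop-closure factor from the LC network: pose 4 is recognised as pose 0,
# so the optimiser redistributes the accumulated odometry drift
graph.add(gtsam.BetweenFactorPose3(4, 0, gtsam.Pose3(), lc_noise))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(4))
```

This is the standard pattern the abstract alludes to: frontend networks supply relative-pose measurements, and the factor graph fuses them into a globally consistent trajectory.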
The main contributions of this work include a pipeline towards a full VSLAM architecture combining SNNs, event-based vision, and multimodal sensor fusion for 6-DOF pose estimation and LC; a novel training pipeline using voxelised asynchronous event data and GTSAM-refined pseudo-ground-truth; a modular backend architecture that performs drift-resilient optimisation using VO, IMU, and LC constraints; a cross-platform benchmarking study that highlights the advantages of neuromorphic hardware; and a synchronised multimodal dataset supporting the above components.
Overall, this thesis provides a pipeline towards a scalable and energy-efficient SLAM solution that bridges neuromorphic sensing, spiking computation, and probabilistic inference, contributing substantially to the advancement of real-time robotic perception and autonomy and laying a strong foundation for next-generation lightweight, intelligent robotic systems. However, the system's performance is sensitive to sensor calibration and timestamp alignment, and the dataset's specificity may limit generalisation across broader deployment scenarios without further adaptation.

Related Publications

Tenzin, S., Rassau, A., & Chai, D. (2024). Application of event cameras and neuromorphic computing to VSLAM: A survey. Biomimetics, 9(7). https://doi.org/10.3390/biomimetics9070444
https://ro.ecu.edu.au/ecuworks2022-2026/4278/

Access Note

Access to this thesis is embargoed until 10th February 2027

Preprint related to the above Master's thesis by the same author, Sangay Tenzin, his thesis supervisors Alex Rassau and Douglas Chai, as well as MD Moniruzzaman, all of them from Edith Cowan University (ECU):

Neuromorphic Visual Odometry with Spiking Neural Networks: Evaluation and Benchmarking on the Akida Platform



DCE61E95-4DC1-430D-9CA7-FE5AAA0E4560.jpeg
F83CCEB3-E306-45FC-AE02-C132FDC6C4A7.jpeg
C5DE7F20-5AEF-494F-B0AB-F9A7BA16C5E6.jpeg

[…]

03B53440-41CB-44C2-87F4-8439572621DD.jpeg

[…]

062DC491-71F0-4BAA-9A42-02866F7A3956.jpeg
A0E3E454-6FDE-4BC8-8AFD-88B97B2D7B19.jpeg
01F79850-BDB9-4960-8475-D653685353FB.jpeg
 
  • Like
  • Love
Reactions: 8 users

Anthropic's casual blog post announcing Claude Code's ability to read COBOL caused IBM stock to drop 13% in one day, wiping out $30 billion in market value.
 

Attachments

  • Screenshot_20260310_090011_LinkedIn.jpg (108.6 KB)
  • Like
Reactions: 1 user