BRN Discussion Ongoing

FuzM

Regular
Took a peek at one of our partners, Bascom Hunter. They recently posted the 3U VPX SNAP Card, which is ISR focused. Around the same time, they are also looking to hire a Chief Engineer - ISR, with the job posted a couple of days ago.

Would be interesting to see how all this plays out in the next few months.



One thing that caught my attention is the Program Execution scope, which includes:
  • Support transition of technologies into Phase III and program-of-record funding, including production scaling considerations



Another interesting factor is that the hiring person is Samuel Subbarao, who is the Principal Investigator for the "Implementing Neural Network Algorithms on Neuromorphic Processors" SBIR.

https://www.sbir.gov/awards/195640
 

Diogenese

Top 20
Afternoon Chippers,

Be thinking Kevin D. Johnston may well be a candidate for the NOBEL PEACE PRIZE this year.

* Ability to detect a human emotion shift 2 to 3 seconds before the market explodes 15 times.

Imagine if one will...... every mobile phone has such technology embedded within .... monitoring our partner's mood & giving a little alert.

World Peace is finally within our grasp.

😇

On a side note, is anyone able to pinch the video off LinkedIn & post it on this forum?

I don't have a LinkedIn account, hence can't view it.

Thank you in advance.

Regards,
Esq.
I know where he can get a second-hand one, but the gold plating has been licked off.
 


IloveLamp

Top 20

 
Good to see the ECU (Edith Cowan University) relationship turning out some work (a thesis) around neuromorphic computing, benchmarked on Akida platforms with substantial latency and energy benefits.

Can't read the full thesis as it's embargoed until 2027, which can be for reasons such as:

  • Intellectual property and patents
  • Future publication
  • Sensitive data/confidentiality
  • Book deal/copyright
  • National security

Maybe if it's something incredibly useful to industry, BC and Akida, we should pony up and get it assigned to us, haha.



Towards neuromorphic visual SLAM: A spiking neural network for efficient pose estimation and loop closure based on event camera data

Author: Sangay Tenzin, Edith Cowan University
Author Identifier: http://orcid.org/0000-0001-5257-0302
Date of Award: 2026
Document Type: Thesis
Publisher: Edith Cowan University
Degree Name: Master of Engineering Science
School: School of Engineering
First Supervisor: Alexander Rassau
Second Supervisor: Douglas Chai

Abstract

The need for effective Simultaneous Localisation and Mapping (SLAM) solutions has been pivotal across a wide range of applications, including autonomous vehicles, industrial robotics, and mobile service platforms where accurate localisation and environmental perception are essential. Visual SLAM (VSLAM) has emerged as a popular approach due to its cost effectiveness and ease of deployment. However, state-of-the-art VSLAM systems using conventional cameras face significant limitations, including high computational requirements, sensitivity to motion blur, restricted dynamic range, and poor performance under variable lighting conditions.
Event cameras present a promising alternative by producing asynchronous, high-temporal resolution data with low latency and power consumption. These characteristics make them ideal for use in dynamic and resource-constrained environments. Complementing this, neuromorphic processors designed for efficient event-driven computation are inherently compatible with sparse temporal data. Despite their synergy, the adoption of event cameras and neuromorphic computing in SLAM remains limited due to the scarcity of public datasets, underdeveloped algorithmic tools, and challenges in multimodal sensor fusion.
This thesis develops integrated Visual Odometry (VO) and Loop Closure (LC) models that leverage neuromorphic sensing, spiking neural networks (SNN), and probabilistic factor-graph optimisation as a pathway towards full event camera-based SLAM. A synchronised multimodal dataset, captured with a Prophesee STM32-GENx320 event camera, Livox MID-360 LiDAR, and Pixhawk 6C Mini IMU spans indoor and outdoor scenarios over 2,285 s. Raw events are aggregated into voxel-grid tensors using 100 ms windows with 20 temporal bins. LiDAR odometry from point clouds is refined with inertial constraints in Georgia Tech Smoothing and Mapping (GTSAM) to produce pseudo-ground-truth trajectories for supervised learning.
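The event-to-voxel aggregation described above (100 ms windows, 20 temporal bins) can be sketched roughly as below. This is a minimal illustration, not the thesis code: the (t, x, y, polarity) event tuple format, the sensor resolution defaults, and the signed-polarity accumulation are all my assumptions.

```python
import numpy as np

def events_to_voxel_grid(events, window_s=0.1, n_bins=20, height=320, width=320):
    """Aggregate one window of raw events into an (n_bins, H, W) voxel grid.

    events: iterable of (t, x, y, polarity), t in seconds, ascending.
    Sketch only; resolution and polarity handling are assumptions.
    """
    grid = np.zeros((n_bins, height, width), dtype=np.float32)
    events = list(events)
    if not events:
        return grid
    t0 = events[0][0]  # window start taken from the first event
    for t, x, y, p in events:
        # Map the timestamp to one of n_bins temporal bins inside the window.
        b = min(int((t - t0) / window_s * n_bins), n_bins - 1)
        # Signed accumulation: ON events add, OFF events subtract.
        grid[b, y, x] += 1.0 if p > 0 else -1.0
    return grid
```

The 100 ms / 20-bin split gives each bin a 5 ms slice, which preserves some of the event stream's fine timing while producing a dense tensor an SNN can consume.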
Two SNN models are developed using the SpikingJelly framework: a spiking VO network that predicts six-degree-of-freedom (6-DOF) pose increments from voxel grids, and a LC network that estimates inter-frame similarity scores for global trajectory correction. Both models are trained with surrogate gradient learning and employ Leaky Integrate-and-Fire neurons. The VO model uses a hybrid loss function that combines Root Mean Square Error for translation with a geodesic loss on the Special Orthogonal Group in 3D for rotation prediction. The LC model is optimised using a joint loss comprising a triplet margin loss for learning discriminative embeddings and a cross-entropy loss for binary classification.
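The hybrid VO loss above (RMSE on translation plus a geodesic loss on SO(3) for rotation) can be illustrated in NumPy. Since the thesis itself is embargoed, the rotation-matrix representation and the weighting scheme here are assumptions; the geodesic distance formula is the standard one for SO(3).

```python
import numpy as np

def geodesic_so3(R_pred, R_true):
    """Geodesic distance on SO(3): the angle of the relative rotation,
    theta = arccos((trace(R_pred^T @ R_true) - 1) / 2), in radians."""
    cos_theta = (np.trace(R_pred.T @ R_true) - 1.0) / 2.0
    # Clip guards against arccos domain errors from floating-point noise.
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def hybrid_pose_loss(t_pred, t_true, R_pred, R_true, w_rot=1.0):
    """RMSE on the translation increment plus a weighted rotation term.
    w_rot is a hypothetical balancing weight, not a value from the thesis."""
    rmse = float(np.sqrt(np.mean((np.asarray(t_pred) - np.asarray(t_true)) ** 2)))
    return rmse + w_rot * geodesic_so3(R_pred, R_true)
```

Using a geodesic term rather than elementwise error on the matrix entries respects the manifold structure of rotations: a 90-degree error costs pi/2 regardless of the axis.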
These frontend models are integrated into a modular backend system based on a sliding-window factor graph. The backend fuses VO predictions with IMU pre-integration and LC constraints and performs real-time optimisation using GTSAM. Empirical evaluation on kilometre-scale sequences demonstrates robust performance in diverse indoor and outdoor environments, achieving sub-metre Absolute Trajectory Error and competitive Relative Pose Error. Additionally, hardware benchmarking across conventional and neuromorphic processors such as the BrainChip Akida platform reveals up to a four-times reduction in latency and an order-of-magnitude gain in energy efficiency on neuromorphic hardware.
The main contributions of this work include a pipeline towards full VSLAM architecture combining SNNs, event-based vision, and multimodal sensor fusion for 6-DOF pose estimation and LC; a novel training pipeline using voxelised asynchronous event data and GTSAM refined pseudo-ground-truth; a modular backend architecture that performs drift-resilient optimisation using VO, IMU, and LC constraints; a cross-platform benchmarking study that highlights the advantages of neuromorphic hardware; and a synchronised multimodal dataset supporting the above components.
Overall, this thesis provides a pipeline towards a scalable and energy-efficient SLAM solution that bridges neuromorphic sensing, spiking computation, and probabilistic inference, contributing substantially to the advancement of real-time robotic perception and autonomy and laying a strong foundation for next-generation lightweight, intelligent robotic systems. However, the system's performance is sensitive to sensor calibration and timestamp alignment, and the dataset's specificity may limit generalisation across broader deployment scenarios without further adaptation.

Related Publications

Tenzin, S., Rassau, A., & Chai, D. (2024). Application of event cameras and neuromorphic computing to VSLAM: A survey. Biomimetics, 9(7). https://doi.org/10.3390/biomimetics9070444
https://ro.ecu.edu.au/ecuworks2022-2026/4278/

Access Note

Access to this thesis is embargoed until 10th February 2027
 
Are we seeing momentum rising on neuromorphic?

A Google India Software Engineer III with 10k LinkedIn followers believes the shift is happening now too.



Umanshi Bakshi
2w

The Von Neumann bottleneck has long been the ceiling for AI efficiency, but a massive industry shift is underway. We are moving toward Neuromorphic Computing: hardware designed to mimic the human brain's neural structure. Unlike traditional chips that consume constant power, these brain-inspired architectures are event-driven, firing only when necessary. We are seeing potential for 100x to 1000x improvements in energy efficiency.

Key Industry Drivers:
  • Decentralized Intelligence: Moving complex AI from the cloud to low-power edge devices (drones, wearables, sensors).
  • Real-Time Processing: Enabling microsecond reaction times for robotics and autonomous systems.
  • Sustainability: Reducing the massive carbon footprint associated with traditional AI scaling.

The transition from simulating neural networks to running them on native biologically inspired silicon marks the next great era of computing.
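The "firing only when necessary" point can be made concrete with a back-of-envelope sketch: on event-driven hardware, compute scales with spikes actually fired rather than with every neuron on every clock tick. The figures below are purely illustrative assumptions, not measurements of any real chip.

```python
def dense_ops(neurons, timesteps):
    """Clocked dense accelerator: every neuron updates every timestep."""
    return neurons * timesteps

def event_driven_ops(neurons, timesteps, spike_rate):
    """Event-driven processor: work is done only when a neuron fires.
    spike_rate is the assumed average fraction of neurons active per step."""
    return int(neurons * timesteps * spike_rate)

# With an assumed 1% average activity, the event-driven path does ~100x
# less work, which is roughly where such efficiency claims come from.
ratio = dense_ops(10_000, 100) / event_driven_ops(10_000, 100, 0.01)
```

Real-world gains depend heavily on workload sparsity and memory traffic, so the 100x-1000x range quoted in the post should be read as workload-dependent potential, not a guarantee.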
 