I decided to revisit this project to check on the dates again, as it was originally posted earlier this year.
No @Rise from the ashes, I still enjoy a beer in a nice location. I will bring back the beer pics when we hit $5 and are all in full party mode. Ok, look forward to some pictures by this Xmas then. 😉
thestockexchange.com.au
The project supposedly started in March for 6 months, so you would think possible outcomes later this year or early 2024.
Some of this has been previously covered/discussed, but some may not have been yet, so this is for those who may have missed it as well.
Evaluation of neuromorphic AI with embedded Spiking Neural Networks
Context
AI is proliferating everywhere, even to embedded systems, to integrate intelligence closer to the sensors (IoT, drones, vehicles, satellites …). But the energy consumption of current deep learning solutions makes classical AI hardly compatible with energy- and resource-constrained devices. Edge AI is a recent subject of research that needs to take into account the cost of the neural models both during training and during prediction. An original and promising solution to face these constraints is to merge compression techniques for deep neural networks with event-based encoding of information thanks to spiking neural networks (SNNs). SNNs are considered the third generation of artificial neural networks and are inspired by the way information is encoded in the brain, and previous works tend to conclude that SNNs are more efficient than classical deep networks [3]. This internship project aims at confirming this assumption by converting classical CNNs to SNNs from standard machine learning frameworks (Keras)
and deploying the resulting neural models onto the Akida neuromorphic processor from the BrainChip company [4]. The results obtained in terms of accuracy, latency and energy will be compared to other existing embedded solutions for Edge AI [2].
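For anyone wondering what that Keras-to-Akida conversion actually looks like, here's a minimal sketch using BrainChip's publicly documented cnn2snn toolkit. Exact function names and arguments have shifted between MetaTF releases (newer versions move quantization into the quantizeml package), and the toy model here is purely hypothetical, so treat this as illustrative rather than the project's actual code:

```python
# Minimal sketch of the Keras -> Akida flow described above, using
# BrainChip's cnn2snn toolkit. Signatures vary between MetaTF releases,
# so treat this as illustrative only.
import tensorflow as tf
from cnn2snn import quantize, convert

# Hypothetical small Keras CNN standing in for the trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Quantize weights/activations to low bit-widths (argument names as in
# older cnn2snn releases; newer MetaTF uses the quantizeml package).
model_quantized = quantize(model, weight_quantization=4, activ_quantization=4)

# Convert the quantized CNN into a spiking Akida model.
model_akida = convert(model_quantized)
model_akida.summary()
```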
Project mission
The project mission will be organized in several periods:
- Bibliographic study on spiking neural network training
- Introduction to the existing SW framework from BrainChip
- Training of convolutional neural networks for embedded applications [1] and conversion from CNN to SNN from Keras
- Deployment of the SNN onto Akida processing platform
- Experiments and measurements
- Publication in an international conference.
Practical information
Location: LEAT Lab / SophiaTech Campus, Sophia Antipolis
Duration: 6 months from March 2023
Grant: from ANR project DeepSee
Profile: Machine learning, artificial intelligence, artificial neural networks, Python, Keras, PyTorch
Research keywords: Spiking neural networks, Edge AI, neuromorphic computing
So... digging a little deeper into project DeepSee, I found what appears to be another parallel project in collaboration with Prophesee and Renault (Renault was picked up in the original post as being involved to some level).
leat.univ-cotedazur.fr
Development of a prototype HW platform for embedded object detection with bio-inspired retinas
Context
The LEAT lab is the leader of the national ANR project DeepSee, in collaboration with Renault, Prophesee and two other labs in neuroscience (CERCO) and computer science (I3S). This project aims at exploring a bio-inspired approach to develop energy-efficient solutions for image processing in automotive applications (ADAS), as explored by [3]. The main mechanisms used to follow this approach are event-based cameras (EBCs, considered artificial retinas) and spiking neural networks (SNNs).
The first is a type of sensor that detects changes of luminosity at very high temporal resolution and low power consumption; the second is a type of artificial neural network mimicking the way information is encoded in the brain.
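To make the "artificial retina" idea concrete: an event-based sensor only reports pixels whose (log) luminosity changes past a threshold, as a stream of (x, y, timestamp, polarity) events. Here's a rough NumPy sketch of that behaviour (my own illustration, not Prophesee code; the threshold and log encoding are simplifying assumptions):

```python
# Illustrative sketch (not Prophesee code): how an event-based sensor
# differs from a frame-based one. Events (x, y, timestamp, polarity) are
# emitted only where log-luminance changes by more than a threshold.
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2):
    """Approximate DVS-style events from a sequence of grayscale frames."""
    events = []
    ref = np.log(frames[0] + 1e-6)           # per-pixel reference level
    for frame, t in zip(frames[1:], timestamps[1:]):
        logf = np.log(frame + 1e-6)
        diff = logf - ref
        ys, xs = np.where(np.abs(diff) > threshold)
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else 0
            events.append((x, y, t, polarity))
            ref[y, x] = logf[y, x]           # reset reference after an event
    return events
```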
LEAT has developed the first SNN model able to perform object detection on event-based data [1] and the related hardware accelerator on FPGA [2]. The goal of this internship project is to deploy this spike-based AI solution onto an embedded smart camera provided by the Prophesee company [4]. The camera is composed of an event-based sensor and an FPGA. The work will mainly consist of deploying the existing software code (in C) on the embedded CPU, integrating the HW accelerator (VHDL) onto the FPGA, and establishing communication between them through an AXI-STREAM bus. The last part of the project will consist of running experiments with the resulting smart camera to evaluate its real-time performance and energy consumption before validation on a driving vehicle.
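The C-on-CPU plus VHDL-on-FPGA split with AXI-STREAM in between is a fairly standard Zynq-style setup. The internship targets C, but the CPU-to-accelerator handshake is easiest to sketch with the PYNQ Python API on a generic Zynq board; the bitstream name, DMA instance name and 32-bit event packing below are all my assumptions, not the actual Prophesee/LEAT design:

```python
# Illustrative only: the internship targets C on the embedded CPU, but the
# CPU <-> accelerator handshake over AXI-STREAM is easiest to sketch with
# the PYNQ Python API on a generic Zynq board. Bitstream and DMA instance
# names ("snn_accel.bit", "axi_dma_0") are assumptions, not the real design.
import numpy as np
from pynq import Overlay, allocate

overlay = Overlay("snn_accel.bit")   # hypothetical FPGA design with the SNN core
dma = overlay.axi_dma_0              # AXI DMA bridging CPU memory and AXI-STREAM

# Events packed as 32-bit words (e.g. x/y/polarity fields); format is assumed.
in_buf = allocate(shape=(4096,), dtype=np.uint32)
out_buf = allocate(shape=(256,), dtype=np.uint32)
in_buf[:] = 0  # would be filled with sensor events in a real run

dma.sendchannel.transfer(in_buf)     # stream events into the accelerator
dma.recvchannel.transfer(out_buf)    # receive detections back
dma.sendchannel.wait()
dma.recvchannel.wait()
print(out_buf[:8])                   # raw detection words from the SNN core
```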
Project mission
The project mission will be organized in several periods:
- Bibliographic study on event-based processing
- Introduction to the existing SW and HW solutions at LEAT, and to the dev kit from Prophesee
- Deployment of the SW part on the CPU and the HW part on the FPGA
- Experiments and validation
- Publication in an international conference.
Practical information
Location: LEAT Lab / SophiaTech Campus, Sophia Antipolis
Duration: 6 months from March 2023
Grant: from ANR project DeepSee
Profile: VHDL programming, FPGA design, C programming, signal/image processing
Research keywords: Embedded systems, event-based cameras, artificial neural networks, Edge AI
Autonomous and intelligent embedded solutions are mainly designed as cognitive systems composed of a three-step process: perception, decision and action, periodically invoked in a closed loop in order to detect changes in the environment and appropriately choose the actions to be performed according to the mission to be achieved. In an autonomous agent such as a robot, a drone or a vehicle, these three stages are quite naturally instantiated in the form of i) the fusion of information from different sensors, ii) the scene analysis typically performed by artificial neural networks, and iii) the selection of an action to be operated on actuators such as engines, mechanical arms or any means of interacting with the environment.

In that context, the growing maturity of the complementary technologies of Event-Based Sensors (EBS) and Spiking Neural Networks (SNN) is proven by recent results. The nature of these sensors questions the very way in which autonomous systems interact with their environment. Indeed, an Event-Based Sensor reverses the perception paradigm currently adopted by Frame-Based Sensors (FBS), from systematic and periodic sampling (whether an event has happened or not) to an approach reflecting the true causal relationship, where the event triggers the sampling of the information. We propose to study this disruptive change of the perception stage and how event-based processing can cooperate with the current frame-based approach to make the system more reactive and robust.

SNN models have been studied for several years as an interesting alternative to Formal Neural Networks (FNN), both for their reduction of computational complexity in deep network topologies and for their natural ability to support unsupervised and bio-inspired learning rules. The most recent results show that these methods are becoming more and more mature and are almost catching up with the performance of formal networks, even though most of the learning is done without data labels. But should we compare the two approaches when the very nature of their input data is different? In the context of image processing, one (FNN) deals with whole frames and categorizes objects; the other (SNN) is particularly suitable for event-based sensors and is therefore better suited to capturing spatio-temporal regularities in a constant flow of events.
The approach we propose to follow in the DeepSee project is to associate spiking networks with formal networks rather than putting them in competition.
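For those newer to SNNs, the basic unit behind all of this is something like the textbook leaky integrate-and-fire neuron below (my sketch, not LEAT's model): it consumes a flow of events over time and only produces output spikes when enough input has accumulated, which is why it pairs so naturally with event-based sensors:

```python
# A textbook leaky integrate-and-fire (LIF) neuron, not LEAT's model: just
# to make concrete how a spiking unit consumes a flow of events over time
# rather than a whole frame at once.
import numpy as np

def lif_neuron(input_spikes, leak=0.9, threshold=1.0, w=0.3):
    """Integrate weighted input spikes; fire and reset when over threshold."""
    v = 0.0
    output = []
    for s in input_spikes:
        v = leak * v + w * s       # leaky integration of incoming events
        if v >= threshold:
            output.append(1)       # emit a spike
            v = 0.0                # reset membrane potential
        else:
            output.append(0)
    return output

# Sparse input: the neuron only does meaningful work when events arrive.
spikes_in = np.random.binomial(1, 0.2, size=50)
print(lif_neuron(spikes_in))
```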
Partners
RENAULT SAS - GUYANCOURT
Laboratoire d'Electronique, Antennes et Télécommunications
Laboratoire informatique, signaux systèmes de Sophia Antipolis
Centre de recherche cerveau et cognition UMR5549 CNRS/UPS
The other interesting person I found involved in DeepSee is none other than Timothée Masquelier of CERCO, which was listed in the retina project with Renault & Prophesee. He's also involved in the BrainNet project. (CV attached)
2021 – now Senior Research Scientist (Directeur de Recherche), CNRS (CERCO), Toulouse, France. Spike-based computing and learning in brains and machines.
Awards / Grants (as PI)
2021 – 2024 ANR PRCE. "BrainNet" Project. 144 k€ / 643 k€ in total.
2021 – 2024 ANR PRCE. "DeepSee" Project. 158 k€ / 711 k€ in total.
He is also one of the authors of the following paper, which was posted previously (attached), where they consider Akida a good fit.
StereoSpike: Depth Learning With a Spiking Neural Network
Ulysse Rançon, Javier Cuadrado-Anibarro, Benoit R. Cottereau, and Timothée Masquelier
I also found a Mar 2023 presso on Edge AI by LEAT & the Côte d'Azur Uni (attached). No mention of us unfortunately, but there's good info in there about their thinking and other partners including Valeo, Prophesee etc., and we know they're using Akida in the DeepSee project as one of the tools.