Mercedes seeking a PhD commencing Feb 2025!
Going by MB's previous neuromorphic studies, I'd put it at 50/50 that the chip being simulated is Loihi.
Project overview:
We are looking for a PhD student for our innovative project to develop an end-to-end autonomous neuromorphic driving system. Using methods of neuromorphic computing (e.g. spiking neural networks), new models will be developed and their energy consumption and computing efficiency tested in a simulator for a neuromorphic chip. The project is a collaboration between Mercedes-Benz AG and the University of Waterloo (Canada). The successful applicant will be supervised by Prof. Hendrik Lensch (University of Tübingen) and Prof. Chris Eliasmith (University of Waterloo). Several research stays at the University of Waterloo are planned during the doctorate.
Doctoral candidate for End-to-End Autonomous Neuromorphic Driving System starting February 2025 - Böblingen | Stellenblatt
Life is always about becoming… Life is about setting out on a journey to become the best version of our future selves. www.stellenblatt.de
Chris Eliasmith, who seems to be quite a polymath, is also CTO of ABR (Applied Brain Research). Some of ABR's patents seem to have sprung from the National Research Council, Canada's equivalent of CSIRO.
MB signed an MOU with Ontario in 2022 for cooperation across the electric vehicle supply chain. In addition to the University of Waterloo research centre linked to Eliasmith, MB has joined OVIN (the Ontario Vehicle Innovation Network), the province's start-up incubator.
ABR holds patents on its own in-house NN processing designs. A sampling follows:
WO2024197396A1 EVENT-BASED NEURAL NETWORK PROCESSING SYSTEM 20230326
[0027] The disclosed processing system is highly flexible, in that individual processing elements performing the neural network evaluation can in principle be arranged and connected in any manner. For example, the processing system may be customized for specific purposes by instantiating the individual components of the system in a hardware description language (e.g., SystemVerilog™) and can be modified by the person of ordinary skill in the art without requiring deep understanding of low-level implementation details. Furthermore, the individual components of the processing system place relatively few constraints on possible connectivity. For instance, the event bus system supports an arbitrary number of input ports by automatically distributing events through a balanced binary tree of bus arbiters, and phase synchronisation and pipelining happen automatically through stream connections. This enables the processing system to be scaled to field programmable gate arrays (FPGAs) of various sizes, and to be adapted to support specific neural network topologies. For example, a specific class of networks may require many neurons in the first layers, but only a few neurons in subsequent layers. The network topology can be adapted accordingly.
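If I'm reading [0027] right, the arbiter tree is just pairwise event merging repeated until a single stream remains. Here's a minimal Python sketch of that idea; all names (Arbiter, Port, build_tree) are mine for illustration, not from the patent or ABR's hardware:

# Toy sketch (my own naming, not the patent's) of merging an arbitrary
# number of input ports through a balanced binary tree of two-way
# arbiters, as described in paragraph [0027].
from collections import deque

class Port:
    """Input port backed by a FIFO of pending events."""
    def __init__(self):
        self.fifo = deque()
    def push(self, ev):
        self.fifo.append(ev)
    def pop(self):
        return self.fifo.popleft() if self.fifo else None

class Arbiter:
    """Two-input event arbiter; alternates priority for fairness."""
    def __init__(self, left, right):
        self.left, self.right = left, right
        self.prefer_left = True
    def pop(self):
        first, second = ((self.left, self.right) if self.prefer_left
                         else (self.right, self.left))
        self.prefer_left = not self.prefer_left   # round-robin
        for src in (first, second):
            ev = src.pop()
            if ev is not None:
                return ev
        return None

def build_tree(ports):
    """Pair nodes level by level until a single root remains."""
    nodes = list(ports)
    while len(nodes) > 1:
        nodes = [Arbiter(nodes[i], nodes[i + 1]) if i + 1 < len(nodes)
                 else nodes[i]
                 for i in range(0, len(nodes), 2)]
    return nodes[0]

ports = [Port() for _ in range(5)]        # arbitrary number of ports
for i, p in enumerate(ports):
    p.push(("spike", i))
root = build_tree(ports)
while (ev := root.pop()) is not None:
    print(ev)

The appeal, presumably, is that adding ports only grows the tree logarithmically in depth, which would be what lets the design scale across FPGA sizes.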
[0029] To support 2D convolution, the disclosed processing system tags events with 2D coordinates that describe what source location the event corresponds to. This source location determines what target neurons need to be updated.
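Paragraph [0029] amounts to scatter-style event-driven convolution: an event tagged (x, y) only touches the output neurons whose receptive fields cover (x, y). A hedged Python sketch, assuming stride 1 and 'valid' padding (my assumptions, not stated in the patent):

# One incoming event carries (x, y) source coordinates; only the
# output neurons whose receptive fields cover (x, y) get updated.
import numpy as np

def apply_event(out, kernel, x, y, value):
    """Scatter one event at source location (x, y) into the output map.

    out    : (H, W) accumulator for the output feature map
    kernel : (K, K) convolution weights (stride 1, 'valid' padding)
    """
    K = kernel.shape[0]
    H, W = out.shape
    for dy in range(K):
        for dx in range(K):
            oy, ox = y - dy, x - dx   # output neuron covering (x, y)
            if 0 <= oy < H and 0 <= ox < W:
                out[oy, ox] += value * kernel[dy, dx]

out = np.zeros((6, 6))                # e.g. 8x8 input with a 3x3 kernel
apply_event(out, np.ones((3, 3)), x=4, y=4, value=1.0)

Accumulating events this way means work scales with spike activity rather than with frame size, which is the usual selling point of event-based processing.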
US2021342668A1 Methods And Systems For Efficient Processing Of Recurrent Neural Networks 20200429
US2021133568A1 METHODS AND SYSTEMS FOR TRAINING MULTI-BIT SPIKING NEURAL NETWORKS FOR EFFICIENT IMPLEMENTATION ON DIGITAL HARDWARE 20191101
Recurrent neural networks are efficiently mapped to hardware computation blocks specifically designed for Legendre Memory Unit (LMU) cells, Projected LSTM cells, and Feed Forward cells. Iterative resource allocation algorithms are used to partition recurrent neural networks and time multiplex them onto a spatial distribution of computation blocks, guided by multivariable optimizations for power, performance, and accuracy. Embodiments of the invention provide systems for low power, high performance deployment of recurrent neural networks for battery sensitive applications such as automatic speech recognition (ASR), keyword spotting (KWS), biomedical signal processing, and other applications that involve processing time-series data.
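For those unfamiliar, the LMU is ABR's recurrent cell from Voelker, Kajić & Eliasmith (NeurIPS 2019); it compresses a sliding window of input history into a small state via fixed Legendre-derived matrices. A rough Python sketch of the state update follows. The A/B definitions match the published paper, but the Euler discretization and sizes are my simplification, not ABR's hardware mapping:

# Sketch of the Legendre Memory Unit state update (Voelker et al. 2019).
import numpy as np

def lmu_matrices(d):
    """Continuous-time (A, B) for a d-dimensional Legendre memory."""
    Q = np.arange(d)
    A = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            A[i, j] = (2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1))
    B = ((2 * Q + 1) * (-1.0) ** Q).reshape(d, 1)
    return A, B

def step(m, u, A, B, dt, theta):
    """One Euler step of theta * m'(t) = A m(t) + B u(t)."""
    return m + (dt / theta) * (A @ m + B * u)

d, theta, dt = 8, 1.0, 0.01
A, B = lmu_matrices(d)
m = np.zeros((d, 1))
for t in range(100):                  # feed a constant input for 1 s
    m = step(m, 1.0, A, B, dt, theta)
print(m.ravel())                      # compressed history of the input

Since A and B are fixed rather than learned, the recurrent weights never change at inference time, which would presumably be what makes the dedicated LMU computation blocks in the abstract feasible.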
The research may involve NN players other than the usual suspects.