Master Thesis - Efficient Training of Spiking Neural Networks on the Graphcore IPU
Your Job:
We investigate the use of Graphcore's Intelligence Processing Unit (IPU) [1, 2] for training spiking neural networks (SNNs), since traditional training on graphics processing units (GPUs) is inefficient and does not fully exploit the benefits of SNNs. Your task will be to implement local synaptic plasticity rules (such as e-prop [4] or other gradient-based techniques [3, 5, 6, 7]) for SNNs on the IPU. With an architecture of 1472 tiles, each with its own processor and local memory, the IPU provides massive parallelism and fine-grained control. To program the IPU, a Python interface is available for higher-level functions and frameworks, while a C++-based SDK can be used for customized deployment. The IPU thus offers an exciting opportunity to explore the potential of SNNs for more efficient, brain-inspired computing.
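To give a first impression of the programming model, here is a minimal sketch (not part of the project code) of what a tile-local leaky integrate-and-fire (LIF) update could look like as a Poplar codelet; the vertex name, field names, and the soft-reset behaviour are illustrative assumptions:

```cpp
// lif_codelet.cpp -- a hypothetical Poplar vertex for a LIF neuron update.
// One vertex instance runs on one tile and updates only the neurons whose
// state lives in that tile's local memory.
#include <poplar/Vertex.hpp>

using namespace poplar;

class LIFVertex : public Vertex {          // illustrative name
public:
  Input<Vector<float>> syn_input;          // summed synaptic input per neuron
  InOut<Vector<float>> v;                  // membrane potentials (tile-local state)
  Output<Vector<float>> z;                 // emitted spikes (0 or 1)
  float alpha;                             // membrane decay factor
  float v_th;                              // firing threshold

  bool compute() {
    for (unsigned i = 0; i < v.size(); ++i) {
      v[i] = alpha * v[i] + syn_input[i];  // leaky integration
      if (v[i] >= v_th) {
        z[i] = 1.0f;
        v[i] -= v_th;                      // soft reset after a spike
      } else {
        z[i] = 0.0f;
      }
    }
    return true;                           // vertex completed successfully
  }
};
```

A codelet like this would be compiled with popc (or registered via poplar::Graph::addCodelets) and instantiated once per tile, so that all 1472 tiles update their own subsets of neurons in parallel.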
Your tasks will include:
- Investigating training algorithms for SNNs/RNNs beyond backpropagation through time (BPTT) [4, 5, 6, 7]
- Implementing the aforementioned local synaptic plasticity rules and other extensions on the IPU in C++ using Graphcore's Poplar SDK (see the e-prop sketch after this list)
- Creating the machine learning model in Python, based on TensorFlow and Keras
- Profiling code using Graphcore's tools, e.g. the PopVision Graph Analyser
- Conducting performance benchmarks comparing the IPU implementation with GPU implementations
- Writing test cases and performing general maintenance of the software stack
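To sketch how the plasticity-rule task might look, the following plain C++ fragment mirrors the structure of e-prop [4] for one layer of LIF neurons: a low-pass-filtered presynaptic trace and a pseudo-derivative form an eligibility trace, which is combined with a per-neuron learning signal into a purely local weight update. All names, the triangular surrogate shape, and the simplifications (membrane dynamics handled elsewhere, no extra filtering of the eligibility trace) are assumptions for illustration, not a definitive implementation:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical online e-prop update for a single LIF layer, loosely
// following Bellec et al. [4]; all names and constants are illustrative.
struct EpropLayer {
  std::size_t n_in, n_out;
  std::vector<float> w;       // weights, n_out x n_in, row-major
  std::vector<float> z_bar;   // filtered presynaptic spike traces, size n_in
  std::vector<float> v;       // membrane potentials, size n_out
  float alpha;                // decay factor of membrane and trace
  float v_th;                 // firing threshold
  float gamma;                // surrogate-derivative scale
  float eta;                  // learning rate

  // Triangular pseudo-derivative of the spike w.r.t. the membrane potential.
  float surrogate(float v_j) const {
    return gamma * std::max(0.0f, 1.0f - std::abs((v_j - v_th) / v_th));
  }

  // One time step: update traces and apply the local learning rule.
  // z_in are the presynaptic spikes, L the per-neuron learning signals;
  // v is assumed to have been updated by the neuron dynamics beforehand.
  void step(const std::vector<float>& z_in, const std::vector<float>& L) {
    for (std::size_t i = 0; i < n_in; ++i)
      z_bar[i] = alpha * z_bar[i] + z_in[i];       // low-pass filter inputs
    for (std::size_t j = 0; j < n_out; ++j) {
      const float h_j = surrogate(v[j]);           // pseudo-derivative
      for (std::size_t i = 0; i < n_in; ++i) {
        const float e_ji = h_j * z_bar[i];         // eligibility trace
        w[j * n_in + i] -= eta * L[j] * e_ji;      // local weight update
      }
    }
  }
};
```

Updates of this kind are attractive on the IPU because they are local in time and space: each tile can keep its neurons' traces and weights in local memory and learn online, without storing the activity history that BPTT would require.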
Your Profile:
- Ongoing master's studies in physics, computer science, mathematics, electrical/electronic engineering, or a related science or engineering field
- Strong background in mathematics, e.g., linear algebra, differential/integral calculus
- Prior programming experience in Python and C++; CUDA experience is a plus
- Hands-on experience with machine learning frameworks (PyTorch, TensorFlow, etc.) and/or neural simulators (NEST, Brian, etc.) is a plus
- Experience with spiking neural networks and/or neuromorphic computing is a plus
Please feel free to apply even if you do not yet have all of the required skills and knowledge; we may be able to teach you the missing skills during your induction.
Our Offer:
We work on the very latest issues that impact our society, and we offer you the chance to actively help shape that change! We support you in your work with:
- A world-leading, international research environment with state-of-the-art equipment
- An interesting and socially relevant topic for your thesis with future-oriented themes
- Ideal conditions for gaining practical experience alongside your studies
- An interdisciplinary collaboration on projects in an international, committed and collegial team
- Excellent technical equipment and the newest technology
- Qualified support through your scientific colleagues
- The chance to independently prepare and work on your tasks
- Flexible working hours as well as reasonable remuneration
Place of employment: Aachen
We welcome applications from people with diverse backgrounds, e.g. in terms of age, gender, disability, sexual orientation / identity, and social, ethnic and religious origin. A diverse and inclusive working environment with equal opportunities in which everyone can realize their potential is important to us.
References:
[1] The Graphcore IPU
https://www.graphcore.ai/products/ipu
[2] Zhe Jia, Blake Tillman, Marco Maggioni, and Daniele Paolo Scarpazza. Dissecting the Graphcore IPU Architecture via Microbenchmarking. arXiv:1912.03413, 2019.
[3] Emre O. Neftci, Hesham Mostafa, and Friedemann Zenke. Surrogate gradient learning in spiking neural networks: bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Processing Magazine, 36(6), 51–63, 2019.
[4] Guillaume Bellec, Franz Scherr, Anand Subramoney, Elias Hajek, Darjan Salaj, Robert Legenstein, and Wolfgang Maass. A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications, 11(1), 1–15, 2020.
[5] Jacques Kaiser, Hesham Mostafa, and Emre Neftci. Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE). Frontiers in Neuroscience, 14, 424, 2020.
[6] David Silver, Anirudh Goyal, Ivo Danihelka, Matteo Hessel, and Hado van Hasselt. Learning by Directional Gradient Descent. International Conference on Learning Representations, 2022.
[7] Mengye Ren, Simon Kornblith, Renjie Liao, and Geoffrey Hinton. Scaling Forward Gradient With Local Losses. International Conference on Learning Representations, 2023.