Hi Frangipani,

Has anyone else stumbled upon this three-year EU-funded research project called NimbleAI, kick-started in November 2022, which "aims to unlock the potential of neuromorphic vision"? I couldn't find anything here on TSE with the help of the search function except a reference to the US-based company Nimble Robotics, but the two seem totally unrelated.
The 19 project partners include imec in Leuven (Belgium) as well as Paris-based GrAI Matter Labs, which other posters consider highly likely to be Brainchip's most serious competitor.
An article about NimbleAI's ambitious project was published today:
What do you make of the consortium's claim that their 3D neuromorphic vision chip will have more than an edge over Akida once it is ready to hit the market?
NimbleAI: Ultra-Energy Efficient and Secure Neuromorphic Sensing and Processing at the Endpoint (www.hipeac.net)
“Today only very light AI processing tasks are executed in ubiquitous IoT endpoint devices, where sensor data are generated and access to energy is usually constrained. However, this approach is not scalable and results in high penalties in terms of security, privacy, cost, energy consumption, and latency as data need to travel from endpoint devices to remote processing systems such as data centres. Inefficiencies are especially evident in energy consumption.
To keep up pace with the exponentially growing amount of data (e.g. video) and allow more advanced, accurate, safe and timely interactions with the surrounding environment, next-generation endpoint devices will need to run AI algorithms (e.g. computer vision) and other compute intense tasks with very low latency (i.e. units of ms or less) and energy envelops (i.e. tens of mW or less).
NimbleAI will harness the latest advances in microelectronics and integrated circuit technology to create an integral neuromorphic sensing-processing solution to efficiently run accurate and diverse computer vision algorithms in resource- and area-constrained chips destined to endpoint devices. Biology will be a major source of inspiration in NimbleAI, especially with a focus to reproduce adaptivity and experience-induced plasticity that allow biological structures to continuously become more efficient in processing dynamic visual stimuli.
NimbleAI is expected to allow significant improvements compared to state-of-the-art (e.g. commercially available neuromorphic chips), and at least 100x improvement in energy efficiency and 50x shorter latency compared to state-of-the-practice (e.g. CPU/GPU/NPU/TPUs processing frame-based video). NimbleAI will also take a holistic approach for ensuring safety and security at different architecture levels, including silicon level.”
What I find a little odd, though, is that this claim regarding expected superiority over "state-of-the-art (e.g. commercially available neuromorphic chips)" doesn't get any mention on the official NimbleAI website (https://www.nimbleai.eu/), in contrast to the expectation of "at least 100x improvement in energy efficiency and 50x shorter latency compared to state-of-the-practice (e.g. CPU/GPU/NPU/TPUs processing frame-based video)".
Rolling NPUs in with CPUs and GPUs suggests to me that they are talking about software implementations, probably involving MAC (multiply-accumulate) operations over frame-based video, which would make the efficiency and latency figures plausible.
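As a rough back-of-the-envelope sketch in Python (all the numbers below, i.e. resolution, frame rate, MACs per pixel and activity ratio, are my own illustrative assumptions, not figures from the project), this is the kind of arithmetic that makes such claims plausible against a dense, frame-based baseline:

```python
# Back-of-envelope sketch: compare the MAC workload of frame-based processing,
# where every pixel is processed every frame, with an event-driven approach
# that only touches pixels that changed. All figures are illustrative assumptions.

FRAME_W, FRAME_H = 640, 480        # assumed sensor resolution
FPS = 30                           # assumed frame rate
MACS_PER_PIXEL = 1_000             # assumed MACs a small CNN spends per pixel
ACTIVE_PIXEL_RATIO = 0.02          # assumed fraction of pixels that change per frame

dense_macs_per_s = FRAME_W * FRAME_H * FPS * MACS_PER_PIXEL
sparse_macs_per_s = dense_macs_per_s * ACTIVE_PIXEL_RATIO

print(f"frame-based:  {dense_macs_per_s:.2e} MAC/s")
print(f"event-driven: {sparse_macs_per_s:.2e} MAC/s")
print(f"ratio: {dense_macs_per_s / sparse_macs_per_s:.0f}x fewer operations")
```

With only 2% of pixels active per frame, the event-driven path already needs 50x fewer operations, so quoting 50x to 100x gains against CPU/GPU/NPU/TPU frame-based baselines is not a particularly bold claim.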
Analog may provide some improvement over digital, but it would encounter the accuracy problems inherent in the manufacturing process, which become more pronounced with multi-bit weights and activations.
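To make that concrete, here is a toy Python sketch (the range, mismatch figure and bit depths are entirely my own assumptions, not data from either project) of why a fixed amount of analog device mismatch hurts more as weight precision grows: the full-scale range stays the same, so each extra bit halves the step between levels and the same mismatch flips stored values more often.

```python
# Toy illustration of analog device mismatch versus weight precision.
# A fixed mismatch (as a fraction of full scale) barely matters at 1-2 bits,
# but corrupts a large share of 8-bit weights because the step size shrinks.

import random

FULL_SCALE = 1.0          # assumed weight range [0, 1]
MISMATCH_SIGMA = 0.015    # assumed std-dev of analog error, as a fraction of full scale
TRIALS = 100_000

for bits in (1, 2, 4, 8):
    levels = 2 ** bits
    step = FULL_SCALE / (levels - 1)
    errors = 0
    for _ in range(TRIALS):
        ideal_level = random.randrange(levels)
        stored = ideal_level * step + random.gauss(0.0, MISMATCH_SIGMA)
        read_back = min(levels - 1, max(0, round(stored / step)))
        errors += (read_back != ideal_level)
    print(f"{bits}-bit weights: {errors / TRIALS:.1%} of cells read back wrong")
```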
GrAI Matter is digital:
https://www.graimatterlabs.ai/
The GrAI VIP edge AI processor is based on GML’s innovative NeuronFlow™ technology that combines high-precision 16-bit FP operations with sparse computing to produce massively parallel in-network processing at low power.
...
- Few ms inference latencies for audio and vision networks
- 20 times better power efficiencies compared to alternatives at high fidelity
- Uncompromising accuracy with 16-bit FP MACs
- 8mmx8mm compact package with memory included for best BOM
- Ready to use popular audio and vision networks for faster TTM
Their insistence that their 16-bit FP MAC processor is uncompromisingly accurate is misguided in that it is oblivious to N-of-M coding, where the information is carried by the rank of the strongest events rather than by high-precision arithmetic. In addition, it consumes much more power than Akida.
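For anyone unfamiliar with the idea, here is a minimal Python sketch of one simplified reading of N-of-M coding: keep only the N strongest events out of M and drop the rest, so what matters is which events rank highest, not their exact 16-bit magnitudes. The vector and the choice of N are arbitrary illustrations, not anything from Akida's actual implementation.

```python
# Minimal sketch of the "keep the N strongest of M events" idea behind N-of-M coding.

def n_of_m(activations, n):
    """Zero out everything except the n largest-magnitude activations."""
    ranked = sorted(range(len(activations)), key=lambda i: abs(activations[i]), reverse=True)
    keep = set(ranked[:n])
    return [a if i in keep else 0.0 for i, a in enumerate(activations)]

acts = [0.03, 1.7, -0.2, 0.9, 0.01, -1.1, 0.4, 0.05]   # M = 8 toy activations
coded = n_of_m(acts, n=3)                               # keep the 3 strongest
print(coded)   # [0.0, 1.7, 0.0, 0.9, 0.0, -1.1, 0.0, 0.0]
```

Most of the activations never need to be computed to full precision at all, which is where the sparsity and power savings come from.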
So if GrAI Matter is NimbleAI's example of state-of-the-art commercially available neuromorphic chips, it's not surprising that they think they can improve on its performance.
PS: A camel is a horse designed by a committee.
Launched in November 2022, the NimbleAI consortium now gathers 19 project partners from across Europe: Ikerlan, Barcelona Supercomputing Center, Menta, Universiteit Leiden, Codasip, GrAI Matter, University of Manchester, Consejo Superior De Investigaciones Científicas, Universitat Politècnica De València, Monozukuri, Politecnico Di Milano, CEA, IMEC, Raytrix GmbH, AVL List, ULMA Medical Technologies, Viewpointsystems, Queen Mary University of London and Technische Universität Wien.