Some more research indicates that STMicroelectronics (STM) is attempting to do neuromorphic computing with RRAM (resistive RAM).
In a February 2022 article, STM said that neuromorphic chips are not yet mature enough, and that it wants to move computation into the memory.
In the plenary session of the ISSCC conference, Marco Cassis, president of ST’s Analog, MEMS and Sensors Group, looked at the various AI technologies for sensors. He ruled out spiking neural network chips, also called neuromorphic, as not mature, saying current convolutional neural networks can tap into reduced precision and semiconductor scaling to get more performance. However, these CNN devices struggle with power consumption and memory bandwidth challenges that get in the way of scalability.
“To overcome this limitation is to partially or completely move the computation to the memory,” he said. “In Memory Computing can bring big benefits, 100x densities and efficiencies compared to current state of the art solutions. Here an especially promising avenue is the use of non volatile resistive memory devices to perform computations in the memory itself.”
ST hints at analog in-memory computing chip
www.eenewseurope.com
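As a rough picture of what "performing computations in the memory itself" means: in a resistive crossbar the weights sit in the cells as conductances, the inputs are applied as voltages, and the multiply-accumulate falls out of Ohm's and Kirchhoff's laws as a summed column current. The sketch below is purely my own illustration with made-up sizes and values, not anything from ST:

```python
import numpy as np

# Illustrative only: an idealised 4x3 resistive crossbar doing a vector-matrix
# product "in memory". Weights are stored as conductances G, inputs arrive as
# row voltages V, and each column wire sums its cell currents, giving I = G^T . V.
rng = np.random.default_rng(0)

G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # cell conductances in siemens (made up)
V = np.array([0.1, 0.2, 0.0, 0.3])         # input voltages on the rows, in volts

I_columns = G.T @ V                        # one multiply-accumulate result per column
print(I_columns)                           # three column currents, in amperes
```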
From May 2019 to January 2023, a consortium of companies including STM, together with the European Commission, ran the TEMPO project.
Technology and hardware for neuromorphic computing
Project description
New ways to integrate emerging memories to enable neuromorphic computing systems
Artificial intelligence (AI) and machine learning are used today for computing all kinds of data, making predictions and solving problems. These processes are increasingly based on deep neural network (DNN) models. As the growing volume of data slows machines down and consumes greater amounts of energy, a new generation of neural units has emerged. Spiking neural networks (SNNs) incorporate biologically plausible spiking neurons with their temporal dynamics. The EU-funded TEMPO project will leverage emerging memory technology to design innovative technological solutions that make data integration simpler and easier via new DNN and SNN computing engines. Core computational kernels of these neuromorphic algorithms, reduced to practice, will serve as demonstrators.
Objective
Massive adoption of computing in all aspects of human activity has led to unprecedented growth in the amount of data generated. Machine learning has been employed to classify and infer patterns from this abundance of raw data, at various levels of abstraction. Among the algorithms used, brain-inspired, or “neuromorphic”, computation provides a wide range of classification and/or prediction tools. Additionally, certain implementations come about with a significant promise of energy efficiency: highly optimized Deep Neural Network (DNN) engines, ranging up to the efficiency promise of exploratory Spiking Neural Networks (SNN). Given the slowdown of silicon-only scaling, it is important to extend the roadmap of neuromorphic implementations by leveraging fitting technology innovations. Along these lines, the current project aims to sweep technology options, covering emerging memories and 3D integration, and attempt to pair them with contemporary (DNN) and exploratory (SNN) neuromorphic computing paradigms. The process- and design-compatibility of each technology option will be assessed with respect to established integration practices. Core computational kernels of such DNN/SNN algorithms (e.g. dot-product/integrate-and-fire engines) will be reduced to practice in representative demonstrators.
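As an aside, for anyone wondering what an "integrate-and-fire engine" boils down to, here is a minimal leaky integrate-and-fire neuron in Python. The parameters are illustrative only and have nothing to do with the TEMPO demonstrators:

```python
def lif(input_currents, threshold=1.0, leak=0.9, v_reset=0.0):
    """Toy leaky integrate-and-fire neuron: returns a 0/1 spike per time step."""
    v, spikes = 0.0, []
    for i in input_currents:
        v = leak * v + i          # integrate the input, with leakage
        if v >= threshold:        # fire once the membrane potential crosses threshold
            spikes.append(1)
            v = v_reset           # reset after the spike
        else:
            spikes.append(0)
    return spikes

print(lif([0.3, 0.4, 0.5, 0.1, 0.6, 0.6]))   # -> [0, 0, 1, 0, 0, 1]
```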
Some other well-known companies involved are Valeo, Philips, Thales, Bosch, Infineon and SynSense.
Technology and hardware for neuromorphic computing | TEMPO Project | Fact Sheet | H2020 | CORDIS | European Commission
cordis.europa.eu
In June 2022, STM released new inertial sensors containing an intelligent sensor processing unit (ISPU) for on-device processing.
The ISM330IS embeds a new ST category of processing, ISPU (intelligent sensor processing unit) to support real-time applications that rely on sensor data. The ISPU is an ultra-low-power, high-performance programmable core which can execute signal processing and AI algorithms in the edge. The main benefits of the ISPU are C programming and an enhanced ecosystem with libraries and 3rd party tools/IDE.
The ISM330ISN is scheduled to enter production in H2 2022 and will be available from st.com or distributors for $3.48 for orders of 1,000 pieces. NanoEdge AI Studio, which enables the creation of libraries designed for specific ISPU part numbers, is available at no charge on ST.com.
Given the release date, the ISPU is unlikely to contain Akida IP. As neuromorphic hardware becomes available and matures, they may be interested.
STMicroelectronics’ new inertial modules enable AI training inside the sensor
www.eejournal.com
STM has a big ecosystem and more than 200,000 customers.
An integrated device manufacturer, we work with more than 200,000 customers and thousands of partners to design and build products, solutions, and ecosystems that address their challenges and opportunities, and the need to support a more sustainable world. Our technologies enable smarter mobility, more efficient power and energy management, and the wide-scale deployment of the Internet of Things and connectivity.
I haven't researched ReRAM NNs in depth, but I think Marco Cassis, president of ST’s Analog, MEMS and Sensors Group, was not talking about Akida when he ruled out "spiking neural network chips, also called neuromorphic, as not mature, saying current convolutional neural networks can tap into reduced precision and semiconductor scaling to get more performance. However these CNN devices struggle with power consumption and memory bandwidth challenges that get in the way of scalability."
As we have discussed repeatedly, ReRAMs have their own problems. It is true that, in theory, they provide a much closer synaptic analogy with wetware, but the imprecision of IC manufacturing at such small scales means they lack accuracy because of resistance variations between individual ReRAM cells. The currents from a few hundred (or more) cells need to be added together to reach a synaptic threshold voltage, so while some errors may cancel out, there is the possibility of cumulative errors.
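A quick simulation makes the point. Assume, purely for illustration, that each cell's conductance lands within roughly ±10% of its programmed value; summing a few hundred of them still leaves a residual error in the column current after partial cancellation (all numbers here are mine, not measured data):

```python
import numpy as np

rng = np.random.default_rng(1)

n_inputs = 512                               # synaptic inputs summed on one column
g_ideal = rng.uniform(1e-6, 1e-4, n_inputs)  # intended conductances (siemens)
v_in = rng.uniform(0.0, 0.2, n_inputs)       # input voltages (volts)

ideal_current = np.sum(g_ideal * v_in)

# Device-to-device variation: each cell lands within about +/-10% of its target.
g_actual = g_ideal * (1 + rng.normal(0.0, 0.10, n_inputs))
actual_current = np.sum(g_actual * v_in)

error_pct = 100 * (actual_current - ideal_current) / ideal_current
print(f"column current error: {error_pct:+.2f}%")
```

Run it a few times with different seeds and the error wanders around zero but rarely sits at it, which is exactly the accuracy problem.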
There are techniques to compensate for the inherent variability, but they immediately erode a major advantage of ReRAM, the small footprint of each ReRAM cell on the silicon wafer. This is from Weebit:
https://www.weebit-nano.com/technology/reram-memory-module-technology/
"An efficient ReRAM module must be designed and developed in close relation with the memory bitcell so it can optimize the functionality of the memory array. Due to the inherent variability of ReRAM (RRAM) cells, specially developed algorithms are key to the process of programming and erasing cells. These algorithms must be delicately balanced between programming time (the quicker, the better), current (the lower, the better), and cell endurance (allowing each individual cell to operate for as many program/erase [P/E] cycles as possible). Voltage levels, P/E pulse widths and the number of such pulses must be optimized to work with a given bitcell technology.
When reading any given bit, the data must be verified against other assistive information to make sure there are no read errors that could impair overall system performance.
Voltage and current levels must be carefully examined throughout the memory module for any operation – including read, program and erase – to keep power consumption to a minimum and ensure the robustness and reliability of the memory array."
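To make the quoted point concrete, here is a hedged sketch of the sort of program-and-verify loop being described: pulse, read back, and step the pulse voltage up until the cell reaches its target resistance. The FakeCell model and every number are invented for illustration; this is not Weebit's actual algorithm or device model:

```python
import random

class FakeCell:
    """Toy stand-in for a ReRAM bitcell so the loop below actually runs."""
    def __init__(self):
        self.resistance = 100_000.0                 # start in a high-resistance state (ohms)

    def apply_set_pulse(self, voltage):
        # Stronger pulses lower the resistance more, with some cell-to-cell randomness.
        self.resistance *= random.uniform(0.55, 0.75) / voltage

    def read_resistance(self):
        return self.resistance

def program_cell(cell, target_ohms, v_start=1.2, v_step=0.1, max_pulses=10):
    """Incremental-step pulse programming: fewer pulses means faster writes and less wear."""
    voltage = v_start
    for pulse in range(1, max_pulses + 1):
        cell.apply_set_pulse(voltage)               # program pulse
        if cell.read_resistance() <= target_ohms:   # verify read
            return pulse                            # converged
        voltage += v_step                           # try a slightly stronger pulse next time
    raise RuntimeError("cell did not reach the target resistance")

print(program_cell(FakeCell(), target_ohms=20_000))  # number of pulses needed
```

A real algorithm also has to avoid overshooting, manage RESET (erase) pulses, and keep current low to preserve endurance, which is the balancing act Weebit describes.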
In addition, they need larger operating voltages than digital CMOS, because the voltage has to be divided into a number of steps corresponding to the number of synaptic inputs that are summed to reach the synaptic threshold. The size of the operating voltage in turn limits how small the manufacturing process can go, e.g. 22 nm, before the voltage can jump between adjacent conductors.
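Some back-of-the-envelope arithmetic (my own illustrative numbers) shows the squeeze: on a fixed supply rail, the more levels you need to resolve, the smaller each voltage step becomes, which pushes designs towards higher rails and hence older, larger process nodes:

```python
# Purely illustrative: volts per distinguishable level for a few supply rails.
for supply_v in (0.8, 1.8, 3.3):
    for levels in (16, 64, 256):
        step_mv = 1000 * supply_v / levels
        print(f"{supply_v:.1f} V rail, {levels:3d} levels -> {step_mv:6.2f} mV per step")
```

At 256 levels on a 0.8 V rail, each step is only about 3 mV, which is uncomfortably close to the device variation discussed above.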
Our friend Weebit has planted their flag at 12 nm, but I don't know whether this is achievable or aspirational.
https://www.weebit-nano.com/technology/overview/
https://www.weebit-nano.com/technology/reram-bitcell/
Weebit scaling down its ReRAM technology to 22nm
Weebit is scaling its embedded ReRAM technology down to 22nm – one of the industry’s most common process nodes.

To be useful in a CPU or GPU, ReRAM output must be converted to digital in an ADC (analog-to-digital converter).
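That conversion is a quantisation step. A minimal sketch with made-up values:

```python
def quantise(current_amps, full_scale_amps, bits=8):
    """Map an analogue column current onto one of 2**bits digital codes."""
    levels = 2 ** bits
    code = round(current_amps / full_scale_amps * (levels - 1))
    return max(0, min(levels - 1, code))             # clamp to the ADC's input range

print(quantise(3.7e-5, full_scale_amps=1e-4))        # -> 94 out of 0..255
```

The ADCs (and the DACs on the input side) take area and power, which eats into the density and efficiency gains claimed for in-memory computing.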
It sounds like a lot of fluster, but the Weebit ReRAM hybrid analog/digital neuromorphic circuit (something I have previously dubbed Frankenstein) is well received in the market, even though it is not spruiked as having the capabilities of Akida, but rather for its memory.
How many more bells and whistles are needed to develop a ReRAM NN?