Thanks Dio. Hi ILL,
This is pretty much blue-sky research and won't be out of the lab any time soon.
https://spectrum.ieee.org/neuromorphic-computing-ai-device
The scientists note that they fabricated their devices using semiconductor-foundry-compatible techniques, suggesting they might readily find use within the electronics industry. However, "the status of our research is in its infancy," Zhang says. "Much more work is required to fabricate large-scale integrated test circuitry with these devices."
The article also talks about analog NNs built with memristors.
An adaptable new device can transform into all the key electric components needed for artificial-intelligence hardware, for potential use in robotics and autonomous systems, a new study finds.
It has an extraordinarily short working life: 1.6 million switching operations at 300 MHz is only about 5 milliseconds of continuous operation, and the hydrogen ions in the perovskite only hang about for 6 months. Hydrogen is notoriously itinerant and can penetrate many molecular lattices. Optical fibres often have a nitride cladding to prevent hydrogen penetration, as hydrogen has an absorption band at the desired laser wavelengths.
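For anyone checking the arithmetic, here is the back-of-envelope calculation (taking the quoted endurance figure and the 300 MHz switching rate at face value):

```python
# Back-of-envelope: how long does 1.6 million switching cycles last if the
# device is switched continuously at 300 MHz? (Rate taken at face value.)
endurance_cycles = 1.6e6       # reported switching endurance
switching_rate_hz = 300e6      # assumed continuous switching frequency

lifetime_s = endurance_cycles / switching_rate_hz
print(f"continuous lifetime: {lifetime_s * 1e3:.1f} ms")   # -> 5.3 ms
```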
The new device proved stable over 1.6 million cycles of switching between states. "Also, hydrogen ions remain in the device for a long period of time after its initial treatment—over six months—which is encouraging," Park says.
Switching the function of millions of devices seems to me to be an enormously complicated exercise.
The scientists incorporated protons into perovskite nickelate. Electric pulses applied to this material could shuffle the protons [hydrogen ions] around within the material's lattice, altering its electronic properties. The researchers could electrically reconfigure a device made from this proton-doped perovskite nickelate into a resistor, a memory capacitor, a neuron, or a synapse on demand.
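To make that concrete, here is a toy Python sketch of the idea: one physical element that can be electrically switched between circuit functions. The mode behaviours below are cartoon placeholders of my own, not the actual nickelate physics:

```python
from enum import Enum

class Mode(Enum):
    RESISTOR = "resistor"
    CAPACITOR = "memory capacitor"
    NEURON = "neuron"
    SYNAPSE = "synapse"

class ReconfigurableDevice:
    """Toy model: one physical element, electrically switched between functions.

    The behaviours below are cartoon placeholders, not the nickelate physics.
    """
    def __init__(self):
        self.mode = Mode.RESISTOR
        self.state = 0.0     # stored charge / membrane potential, by mode
        self.weight = 0.5    # synaptic weight, used in SYNAPSE mode

    def reconfigure(self, mode: Mode):
        # In the real device this is an electric pulse shuffling protons.
        self.mode = mode

    def apply(self, v: float) -> float:
        if self.mode is Mode.RESISTOR:
            return v / 1_000.0            # Ohm's law with a fixed 1 kOhm
        if self.mode is Mode.CAPACITOR:
            self.state += v               # integrate and retain charge
            return self.state
        if self.mode is Mode.NEURON:
            self.state += v               # crude integrate-and-fire
            if self.state > 1.0:
                self.state = 0.0
                return 1.0                # spike
            return 0.0
        return v * self.weight            # SYNAPSE: scale input by weight

dev = ReconfigurableDevice()
dev.reconfigure(Mode.NEURON)
print([dev.apply(0.4) for _ in range(5)])  # [0.0, 0.0, 1.0, 0.0, 0.0]
```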
...
The versatility of this device "could simplify AI circuit design for complex computational tasks by avoiding an agglomeration of different functional units that are area- and power-consuming," says study colead author Michael Tae Joon Park, an electrical engineer and materials scientist at Purdue. Potential applications include robotics and autonomous systems, he notes.
In simulations using the new device in an artificial neural network, which mimics the structure of neurons in biological brains, the scientists found that the reconfigurable nature of the new device enabled the neural network "to make its decisions more efficiently, compared to conventional static networks, in complex and ever-changing environments," Zhang says.
The researchers suggest their device could find use in grow-when-required networks, which are neural networks that can grow their computing power on demand. Such networks can also shrink, becoming more efficient by removing nodes that are regularly inactive.
Akida is reconfigurable in that the library of models changes the weights of the neurons. I am not familiar with "grow-when-required networks", but Akida would seem to incorporate this capability in determining the number of layers and the weights of the neurons. For example, Akida 1000 uses far fewer nodes (4 NPUs) for keyword spotting than it uses for image classification.
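Having since looked them up: grow-when-required (GWR) networks go back to Marsland et al. (2002). The idea is to add a node whenever no existing node matches the input well enough, and to prune nodes that stay inactive. A minimal sketch of that grow/prune rule (the thresholds and distance measure here are my own arbitrary choices):

```python
import numpy as np

class GrowWhenRequired:
    """Minimal GWR-flavoured sketch: grow a node when the best match is poor,
    prune nodes that stay inactive. Thresholds are arbitrary illustrations."""

    def __init__(self, dim, activity_threshold=0.8, max_idle=100):
        self.nodes = [np.random.rand(dim)]   # start with a single random node
        self.idle = [0]                      # steps since each node last won
        self.activity_threshold = activity_threshold
        self.max_idle = max_idle

    def step(self, x):
        # Find the best-matching node; activity decays with distance.
        dists = [np.linalg.norm(x - n) for n in self.nodes]
        best = int(np.argmin(dists))
        activity = np.exp(-dists[best])

        if activity < self.activity_threshold:
            # No node matches well enough: grow a new node near the input.
            self.nodes.append((x + self.nodes[best]) / 2)
            self.idle.append(0)
        else:
            # Good match: nudge the winner towards the input instead.
            self.nodes[best] += 0.1 * (x - self.nodes[best])

        # Age every node, reset the winner, prune the long-inactive ones.
        self.idle = [i + 1 for i in self.idle]
        self.idle[best] = 0
        keep = [i < self.max_idle for i in self.idle]
        if 0 < sum(keep) < len(self.nodes):
            self.nodes = [n for n, k in zip(self.nodes, keep) if k]
            self.idle = [i for i, k in zip(self.idle, keep) if k]

net = GrowWhenRequired(dim=2)
for _ in range(500):
    net.step(np.random.rand(2))
print(f"network settled at {len(net.nodes)} nodes")
```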
Their reference to "conventional static networks" presumably means analog NNs. Akida is not a "conventional static network". Making decisions may be more efficient, but what about the time and energy spent reconfiguring the function of the devices across the whole network?
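To picture what that whole-network reconfiguration involves, here is the toy device class from the sketch above being repurposed between roles mid-run (again purely illustrative). Every reconfigure call stands in for a real electric pulse, which is exactly where the time/energy cost would accrue:

```python
# Reuse the toy ReconfigurableDevice sketched earlier: one pool of devices
# acts as synapses for one task, then is repurposed as neurons for another.
pool = [ReconfigurableDevice() for _ in range(4)]

for d in pool:                        # task A: pool acts as synapses
    d.reconfigure(Mode.SYNAPSE)
weighted = sum(d.apply(1.0) for d in pool)

for d in pool:                        # task B: repurpose the pool as neurons
    d.reconfigure(Mode.NEURON)        # in hardware, each call costs a pulse
spikes = [d.apply(1.5) for d in pool]

print(weighted, spikes)               # 2.0 [1.0, 1.0, 1.0, 1.0]
```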
Honestly, I only understood maybe half of that, but I get the gist. Thank you for explaining the difference.