I recently posted about the lead AKIDA has over Loihi 2. To achieve what these researchers have with Loihi 1, they needed 22 Loihi 1 chips, and they expect to improve these results with Loihi 2.
As you read this article, remember that AKIDA 1 is already ahead of Loihi 2, and AKIDA 2, with LSTM, is about to be released on the world.
AKIDA replacing the GPU: now there is a market we did not estimate the value of, but Peter did.
My opinion only DYOR
FF
AKIDA BALLISTA
Artificial Neurons Could Be 16 Times More Efficient Than GPUs
Austrian researchers are making the processing of data sequences more efficient by optimizing short-term memory.
Neuromorphic computers should make artificial intelligence more efficient by replicating real neurons. However, processing sequential data, such as sentences, requires additional circuitry and negates the efficiency advantage.
Researchers at the Graz University of Technology were able to show that these circuits are not required. Their results were published in Nature Machine Intelligence, with a preliminary version available free of charge.
Instead of large matrices, neuromorphic computers use artificial neurons. This goes hand in hand with another network model, the Spiking Neural Network (SNN). In this case, the neurons are inactive most of the time and only generate a short current impulse when their action potential is reached. Only when a neuron triggers an impulse do the potentials of the neurons connected to it have to be recalculated, which saves energy.
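The event-driven update described above can be illustrated with a minimal sketch. All parameters here (network size, weights, threshold, leak) are hypothetical, chosen only to show the mechanism: neurons integrate input, and downstream potentials are recomputed only when a spike actually occurs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                # number of neurons (hypothetical)
w = rng.normal(0.0, 0.5, (n, n))     # synaptic weights (hypothetical)
v = np.zeros(n)                      # membrane potentials
threshold, decay = 1.0, 0.9          # firing threshold and leak (hypothetical)

def step(input_current):
    """One simulation step of a leaky integrate-and-fire network."""
    global v
    v = decay * v + input_current    # leaky integration of the input
    spikes = v >= threshold          # neurons whose action potential is reached
    v[spikes] = 0.0                  # fired neurons reset
    if spikes.any():                 # event-driven: only on a spike do the
        v += w[:, spikes].sum(axis=1)  # connected potentials get updated
    return spikes

for t in range(10):
    fired = step(rng.uniform(0.0, 0.4, n))
```

Because most steps produce no spikes, most steps also skip the weight propagation entirely, which is where the energy saving of SNN hardware comes from.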
However, large neural networks typically rely on long short-term memory (LSTM); only with it do such sequence tasks become manageable at all. Neuromorphic computers such as Intel's Loihi, which the researchers used, need dedicated circuitry to provide this memory.
Even more copied from the brain
This is not particularly efficient, which is why the researchers used a different approach, also inspired by real neurons. Real neurons, after firing once, are less excitable for a time. According to Wolfgang Maass, a professor at TU Graz, this is believed to be the basis of short-term memory in the real brain. With the artificial neurons, the lower sensitivity is achieved via a small current that counteracts the incoming activating current. It decreases over time, and this function is also easy to implement on the Loihi chip. The neuron thus briefly remembers what happened in the past.
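A minimal sketch of that adaptation mechanism, with hypothetical constants (the paper's actual parameters are not given here): each spike adds a small opposing current that decays over time, so the neuron fires readily at first and then becomes less excitable.

```python
threshold = 1.0
v_decay = 0.9          # membrane leak per step (hypothetical constant)
a_decay = 0.95         # the opposing current fades slowly (hypothetical)
a_jump = 0.5           # extra opposing current added per spike (hypothetical)
drive = 0.3            # constant activating input current (hypothetical)

v = 0.0                # membrane potential
a = 0.0                # adaptation current that counteracts the input
spike_times = []

for t in range(50):
    v = v_decay * v + drive - a   # adaptation opposes the activating current
    if v >= threshold:
        spike_times.append(t)
        v = 0.0                   # reset after firing
        a += a_jump               # the neuron becomes less excitable
    a *= a_decay                  # the opposing current decreases over time
```

With these constants the first spike comes quickly, but the next one only after the adaptation current has largely faded: the lengthened gap is the neuron "remembering" its recent activity.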
The researchers tested their approach with the bAbI data set, in which the network reads a sequence of sentences and then has to answer a question about their content. To do this, they converted an existing network into an SNN and pitted it against the original, which ran on an RTX 2070 Super GPU. The neuromorphic implementation was up to 16 times more efficient, but only for the smallest task, with sequences of two sentences. If the network had to remember 20 sentences, the Loihi chips were only four times more efficient.
Even more efficient with a new chip
The more sentences the model had to process, the smaller the gain in efficiency. The reason is that more artificial neurons, and thus more of the 32 Loihi chips on the Nahuku board used, are required. The network used to process a sequence of 20 sentences occupied 22 chips. The researchers hope for further increases in efficiency from the successor to the Loihi they used: since it contains eight times as many artificial neurons per chip, far less communication between the individual chips is needed.