This article was written in 2017. I love the title! Hehehe! Makes you appreciate how far BrainChip has come. You know, I can't help but think that we'll get an announcement from Qualcomm sometime in the near future, especially in light of the following statement.
"Qualcomm is also a player in neuromorphic designs, but seems to have cooled on the technology – publicly, at least. Back in 2013, it was working on processors called Zeroth, but has since morphed the Zeroth project into a software-based proposition that will find its way into the core Snapdragon system-on-chip offerings – as a software code stack, rather than as a standalone processor."
Akida is a perfect match for the next generation of the Snapdragon Cockpit (digital cockpit and infotainment system). Please Qualcomm, just spill the beans already!
18 October 2017
Intel joins IBM and Qualcomm in race for a ‘brain chip’
By Caroline Gabriel
Intel has announced its first self-learning neuromorphic chip, named Loihi, which it says will get smarter over time and enable extremely power-efficient designs. This follows IBM’s work on neuromorphic processors, which draw inspiration from the human brain and are poised to heavily disrupt any application that needs processing power in a mobile device.
Following the Loihi announcement, Intel also unveiled a new quantum computing chip, developed over the past 18 months in collaboration with QuTech (a Dutch research institute that secured $50m of Intel investment in 2015). That’s a very different proposition from the machine learning elements in Loihi, but Intel is pouring money into its R&D divisions, hoping to keep its processors ahead of the curve and fend off incursions by alternative architectures such as ARM (Qualcomm also has a ‘brain chip’) or Nvidia’s GPUs (graphics processing units).
Intel is far from the only neuromorphic player. IBM’s TrueNorth processors form the most advanced platform on the market, and IBM already has contracts with laboratories in the USA, including the Lawrence Livermore National Lab, which is using the technology to run simulations that evaluate the safety of the US nuclear arsenal. IBM also has a development partnership with Samsung, which sees the Korean firm use IBM’s newer SyNAPSE design in its Dynamic Vision Sensor – providing a 2,000 frames-per-second view of the world in a 300mW power package.
Qualcomm is also a player in neuromorphic designs, but seems to have cooled on the technology – publicly, at least. Back in 2013, it was working on processors called Zeroth, but has since morphed the Zeroth project into a software-based proposition that will find its way into the core Snapdragon system-on-chip offerings – as a software code stack, rather than as a standalone processor.
The promise of neuromorphic chips lies in their potential for extraordinary power efficiency. The human brain is a remarkably efficient processing engine, and chips built to mimic its design appear to reap the rewards. Intel claims Loihi is about 1,000 times more energy efficient than the general-purpose processors needed to train neural networks that rival Loihi’s performance.
In theory, this means that chips like Loihi can be turned far more quickly to tasks that use pattern recognition and intuition, which currently rely on vast banks of CPUs and GPUs to achieve results. One chip could replace all the hard work of a traditional machine learning instance, as well as power a device out in the wild that carries out advanced pattern recognition – thanks to having both training and inference on the same silicon.
Intel says that this dual system will allow machines to operate independently of a cloud connection, and that its researchers have demonstrated a learning rate one million times better than that of typical spiking neural networks on digit recognition problems. Intel says this is far more efficient than using convolutional neural networks (CNNs) or deep learning neural networks (DLNNs).
“The brain’s neural networks relay information with pulses or spikes, modulate the synaptic strengths or weight of the interconnections based on timing of these spikes, and store these changes locally at the interconnections. Intelligent behaviors emerge from the cooperative and competitive interactions between multiple regions within the brain’s neural networks and its environment,” explained Michael Mayberry, managing director of Intel Labs.
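For readers unfamiliar with the mechanism Mayberry describes, the timing-based weight update he refers to is commonly modelled as spike-timing-dependent plasticity (STDP). Below is a minimal, purely illustrative Python sketch of that rule – the constants and function names are assumptions chosen for clarity, not anything taken from Intel’s Loihi toolchain.

import math

# Illustrative STDP parameters (assumed values, not Loihi's).
A_PLUS = 0.05    # potentiation amplitude
A_MINUS = 0.055  # depression amplitude
TAU = 20.0       # time constant in milliseconds

def stdp_weight_change(t_pre, t_post):
    """Weight change for one pre/post spike pair.

    If the presynaptic spike precedes the postsynaptic spike (dt > 0),
    the synapse is strengthened; otherwise it is weakened, with the
    effect fading exponentially as the spikes move further apart.
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)

# Example: the presynaptic neuron fires at 10ms and the postsynaptic
# neuron at 15ms, so the connection between them gets slightly stronger.
weight = 0.5
weight += stdp_weight_change(t_pre=10.0, t_post=15.0)
print(round(weight, 4))

The important point, as the quote says, is that the change is computed and stored locally at each interconnection – no central training loop is involved.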
The reason Intel in particular is so keen to be at the heart of these new brain chips is their ability to generalize – something the current model of training doesn’t do well. Today, a machine learning model could be trained to identify cats expertly, but it would be awful at spotting dogs, even though the two share many characteristics. This is because the training data set and model would not account for dogs, so a completely different model would be needed for that purpose.
With a chip that can self-learn without the need for a new training data set, it should be far easier to tweak the system to spot smaller, generalized differences in data or events.
Intel points to a system for monitoring a person’s heartbeat, taking readings after events such as exercise or eating, and using the neuromorphic chip to normalize the data and work out the ‘normal’ heartbeat. It can then spot abnormalities, but also deal with any new events or conditions the user is subject to – without having to create a training data set that covers all bases. That should mean shorter development times and better performance in the wild. Again, these are the promises – we’re still a long way from benchmarking these claims in real-world situations.
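Stripped of the neuromorphic hardware, the heartbeat scenario is essentially online anomaly detection: the baseline is learned from the incoming stream itself rather than from a pre-built training set, and readings that stray too far from it are flagged. Here is a rough, non-neuromorphic Python sketch of that idea using a running mean and variance – a deliberate simplification of what the article describes, with made-up parameter values.

class HeartbeatBaseline:
    """Running estimate of a 'normal' heart rate, updated online.

    A plain statistical stand-in for the on-chip learning described in
    the article, not an emulation of Loihi itself.
    """

    def __init__(self, learning_rate=0.05, threshold=3.0):
        self.mean = None
        self.var = 1.0
        self.lr = learning_rate
        self.threshold = threshold  # flag readings this many std-devs out

    def update(self, bpm):
        if self.mean is None:
            self.mean = bpm
            return False
        deviation = bpm - self.mean
        is_anomaly = abs(deviation) > self.threshold * (self.var ** 0.5)
        # Adapt the baseline so new conditions (exercise, rest) are absorbed
        # without ever rebuilding a training set.
        self.mean += self.lr * deviation
        self.var += self.lr * (deviation ** 2 - self.var)
        return is_anomaly

monitor = HeartbeatBaseline()
for bpm in [72, 74, 71, 73, 75, 140, 72]:
    print(bpm, "anomaly" if monitor.update(bpm) else "normal")

The pitch for a neuromorphic chip is that this kind of continuous adaptation would happen in the synapses themselves, at very low power, rather than in software running on a general-purpose processor.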
Loihi will be released to research institutions and universities in the first half of 2018. They will get to test its fully asynchronous neuromorphic many-core mesh – a design that Intel says supports a wide range of neural network topologies and allows each neuron to communicate with the other on-chip neurons (hence mesh). Each neuromorphic core has a programmable learning engine, which developers can use to adapt the neural network parameters and support the different ‘learning paradigms’ – mainly supervised, unsupervised, and reinforcement learning.
The first iteration of the Loihi chip was made using Intel’s 14nm fabrication process, and houses 128 neuromorphic cores with 1,024 artificial neurons each – around 130,000 simulated neurons in total – giving it 130m synapses. That is still a rather long way from the human brain’s estimated 100 trillion or so synapses, and behind IBM’s TrueNorth, which has around 256m synapses but uses 4,096 cores. It seems Intel is getting more synapses out of fewer cores, but there’s no practical way of benchmarking the two chips as yet.
As the neurons spike and communicate with other neurons, and as they process information, the inter-neuron connections are strengthened – which leads to improved performance (the learning element). That learning takes place on the chip and doesn’t require enormous training data sets, but it still requires knowing what the question is and how to gauge a correct answer.
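To make the ‘connections strengthen as neurons spike together’ idea concrete, here is a tiny toy simulation of two leaky integrate-and-fire neurons with a Hebbian-style local update – again just a sketch of the general principle, with invented constants, not a model of Loihi’s actual cores.

import random

# Toy two-neuron network: presynaptic neuron A drives postsynaptic neuron B.
# When B fires shortly after A has been active, the A->B synapse strengthens.
# All constants are illustrative assumptions, not Loihi parameters.
THRESHOLD = 1.0
LEAK = 0.9           # membrane potential decay per time step
TRACE_DECAY = 0.8    # decay of the "A fired recently" trace
LEARNING_RATE = 0.02

weight = 0.1         # synaptic weight from A to B
v_a = v_b = 0.0      # membrane potentials
trace_a = 0.0        # recent-activity trace of neuron A

random.seed(1)
for step in range(2000):
    # Neuron A: driven by random external input, fires on crossing threshold.
    v_a = LEAK * v_a + random.uniform(0.0, 0.4)
    spike_a = v_a >= THRESHOLD
    if spike_a:
        v_a = 0.0
    trace_a = TRACE_DECAY * trace_a + (1.0 if spike_a else 0.0)

    # Neuron B: background input plus the weighted spike arriving from A.
    v_b = LEAK * v_b + random.uniform(0.0, 0.3) + (weight if spike_a else 0.0)
    spike_b = v_b >= THRESHOLD
    if spike_b:
        v_b = 0.0
        # Learning happens locally at the synapse: if A was recently active
        # when B fired, the connection between them is strengthened.
        weight = min(1.0, weight + LEARNING_RATE * trace_a)

print("final A->B weight:", round(weight, 3))

Note that the weight only ever changes in response to local spike activity – there is no global error signal or training set in the loop, which is exactly the property the article is describing.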
Source: rethinkresearch.biz