Fullmoonfever
Article from last year was interesting on Huawei's thoughts on neuromorphic computing.
Though we're not mentioned, it appears Huawei has finally cottoned on to a couple of the key aspects (highlighted).
Huawei embraces neuromorphic computing for IoT
30 March 2021
By Phil Hunter
The convention of IoT devices being lightweight in processing capability is being turned on its head by the rise of neuromorphic computing.
The aim is to mimic the plasticity of the human brain in a new generation of chips optimized for data analytics, employing algorithms under the banners of AI and machine learning. This is being driven by several factors, including demand for ultra-low latency edge computing and desire to save network bandwidth by cutting down on data transmission between end IoT devices and the cloud or centralized data centers.
It is true that edge computing can be deployed in distributed servers, but this itself imposes an overhead and cost, as well as requiring a lot of local bandwidth in some cases.
The sticking point might appear to be power consumption, given that many IoT devices are deployed for long time periods in locations that are not convenient to visit frequently for battery changes. By the same token, direct connections to the electricity grid are usually either unavailable or impractical, while having dedicated solar or wind panels would elevate costs per device too much in most use cases.
But this calculation ignores the high power consumption of radios, as we were reminded when talking recently to Henk Koopmans, CEO of R&D at Huawei UK. He actually cited the desire to boost battery life as a motivation for massive increases in IoT device processor capabilities, alongside the need to reduce latency and save on data transfers to the cloud.
“As many IoT devices are battery powered, often in hard-to-access places, replacing the batteries is time-consuming and affects the cost efficiency of the business model,” Koopmans noted. “Local processing reduces the need for wireless transmissions, the part of the device using the most energy, thereby greatly extending the battery life.”
But this assumes that such a hike in local processing power can be achieved affordably, without the extra compute offsetting the energy gained by drastically cutting wireless transmission. As Koopmans put it, “The challenge, therefore, is to come up with a new type of processor, capable of a level of artificial intelligence to enable the device to locally analyze the data and locally make decisions, while still retaining the very low power consumption level required for IoT devices.”
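To see why the balance can work out, a rough back-of-envelope sketch helps. All of the figures below are illustrative assumptions rather than measurements from any particular device or radio: the point is simply that if each wireless transmission costs far more energy than a local inference, then reporting only occasional results instead of streaming raw readings can stretch battery life considerably.

```python
# Back-of-envelope battery-life comparison: stream raw sensor data vs.
# analyse locally and transmit only occasional results.
# All figures below are illustrative assumptions, not measured values.

BATTERY_WH = 10.4          # usable battery capacity in watt-hours (assumed)
RADIO_J_PER_MSG = 0.5      # energy per wireless transmission in joules (assumed)
IDLE_W = 50e-6             # sleep-mode draw of the device in watts (assumed)

def battery_life_days(msgs_per_day, inferences_per_day=0, j_per_inference=0.0):
    """Rough battery life in days for a duty-cycled IoT node."""
    daily_j = (msgs_per_day * RADIO_J_PER_MSG
               + inferences_per_day * j_per_inference
               + IDLE_W * 86_400)                 # seconds in a day
    return (BATTERY_WH * 3_600) / daily_j         # convert Wh to joules

# Scenario A: ship every reading to the cloud (one message per minute).
cloud = battery_life_days(msgs_per_day=1_440)

# Scenario B: run a low-power local model per reading, report hourly.
local = battery_life_days(msgs_per_day=24,
                          inferences_per_day=1_440,
                          j_per_inference=0.001)  # ~1 mJ per inference (assumed)

print(f"stream everything : {cloud:6.0f} days")
print(f"local analysis    : {local:6.0f} days")
```

Under these made-up numbers the local-analysis scenario lasts dozens of times longer, but only because the assumed per-inference energy is far below the per-transmission energy, which is exactly the property the new class of processor is supposed to deliver.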
Koopmans, and Huawei, are convinced that such capability will be achieved through the emerging field of neuromorphic computing, or the third generation of AI as it is sometimes dubbed. The first generation of AI, sometimes called expert systems, emerged over 40 years ago in the 1970s as rule-based systems that emulated classical logical processes to draw reasoned conclusions within a specific, narrowly defined problem domain or field of expertise.
The poster child of this first generation was a medical diagnostic system called Mycin, developed at Stanford University in the early 1970s, which demonstrated the genre well but was limited in scope and gained little traction in the clinic. Indeed, it was initially confined to identifying the bacteria causing severe infections, such as meningitis, and then recommending appropriate antibiotics with dosages adjusted for the patient’s body weight.
Then, after a prolonged lull in the AI field, the second generation emerged during the noughties, brought on by the phenomenal advances in computational power that enabled the application of sophisticated statistical regression at scale to very large data sets. This enabled pattern matching and identification at far higher resolution and granularity, leading to valuable applications in sensing and perception under the banners of neural networks and deep learning.
The ability to identify video streams on the basis of objects within individual frames, as well as to diagnose medical conditions such as some cancers automatically through analysis of X-ray or MRI scanned images, are examples of proven applications.
This second generation has been said to be modelled on the structure and processes of the human brain, but in reality it has just been loosely inspired by that. The neuroscience behind human cognition was just not well enough understood for direct translation into AI algorithms.
The mantra of mimicking the human brain is still being used for the third generation of AI, or neuromorphic computing, but with rather more humility, or perhaps realism. There is much talk of incorporating aspects of biological neural networks more directly into electronic circuits, but with the admission that this is as much to provide tools for neuroscientists to develop and test theories of how human brains operate in more detail as it is to take inspiration from the brain for cognitive computing.
Indeed, this is already proving to be a two-way process, with neuroscientists working alongside cognitive computing specialists. It is already clear that even if neuromorphic computing does not mimic the brain exactly, an approach in which complex multilayered networks are embodied directly in the architecture of Very Large Scale Integration (VLSI) systems containing analog electronic circuits can greatly accelerate machine learning processes with higher efficiency and much reduced power consumption.
It can also mimic some of the flexibility or plasticity of the human brain, with the ability to reconfigure rapidly in near real time to tackle problems more adaptively in response to feedback. Such a structure is also more resilient against failures in the system. Finally, there are also possible security gains, as Koopmans noted, from retaining personal data at the local level rather than sending it to a cloud where it could be used in an unintended way.
A critical aspect of research therefore lies in investigating how the morphology, or structure, of individual neurons, circuits, applications, and large-scale architectures enables the desired level and type of computation, achieved through available fundamental components such as transistors and spintronic memories.
The usual suspects beyond Huawei are engaging in such research, notably the leading chipmakers. Intel has developed a chip called Loihi, introduced in November 2017, which it describes as its fifth-generation self-learning neuromorphic research test chip. This is a 128-core design based on a specialized architecture that is fabricated on 14-nanometer process technology. The key design feature is operation around spiking neural networks (SNNs), developed specifically to arrange logic elements so as to emulate neural networks as understood to exist in the brains of humans and indeed many animals.
The key property is adaptability or plasticity, the ability to learn from experience at the silicon level so that the overall networks become more capable, or smarter, over time. This is achieved by adopting a concept known to exist in animal brains, that of activation, whereby neurons fire only when their membrane potential exceeds a set threshold. At this point the neuron generates and transmits a signal, which causes the other neurons receiving it either to increase or decrease their own potentials. This leads to coordinated activations and firings that can correspond with, or execute, cognitive processes.
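The firing behaviour described above can be illustrated with a minimal leaky integrate-and-fire neuron in a few lines of Python. This is a generic textbook model, not Intel's actual Loihi programming interface; the threshold, leak factor and input values are arbitrary numbers chosen purely for the sketch.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, illustrating the
# threshold-based firing behaviour described above. A generic textbook
# model with arbitrary parameters, not any vendor's programming interface.

import random

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.95):
        self.potential = 0.0      # membrane potential (arbitrary units)
        self.threshold = threshold
        self.leak = leak          # fraction of charge retained per time step

    def step(self, weighted_input):
        """Integrate incoming spikes; return True if the neuron fires."""
        self.potential = self.potential * self.leak + weighted_input
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after emitting a spike
            return True
        return False

neuron = LIFNeuron()
for t in range(20):
    # Incoming spikes may be excitatory (+) or inhibitory (-),
    # raising or lowering the membrane potential.
    incoming = random.choice([0.0, 0.3, 0.3, -0.2])
    if neuron.step(incoming):
        print(f"t={t:2d}: spike!")
```

Because computation is event-driven, nothing happens between spikes, which is one reason such designs promise the low idle power that IoT devices need.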
It can be seen, then, that such a system is a valuable tool for neuroscientists to investigate hypotheses, as well as a vehicle for cognitive computing R&D. There are various research projects working with such ideas, including the European Human Brain Project, which has designed its own chip and is working on a project called ‘BrainScaleS-2’.
The key point for Koopmans is that the underlying concepts are being proven and that the prizes are huge. “By trying to figure out whether processors can in some way copy the functions of the brain would, even on a small scale, represent a major advance,” said Koopmans. “For example, by replacing the commonly accepted processor architecture, with its separation between CPU and memory, the interconnection between the two being a major bottleneck in processor speeds, with in-memory processing, would be revolutionary.” This is why much of the R&D effort is focused on this area.
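One commonly researched embodiment of the in-memory processing Koopmans describes is the analog crossbar, in which weights are stored as conductances and a matrix-vector product is computed in place via Ohm's and Kirchhoff's laws, so data never shuttles between a separate CPU and memory. The sketch below is an idealized simulation of that principle with made-up numbers; it is not tied to any Huawei or Intel design, and real devices contend with noise, drift and limited precision.

```python
# Idealised simulation of an analog in-memory crossbar: weights are stored
# as conductances G, input voltages V are applied to the row lines, and the
# column currents I = G^T V realise a matrix-vector product in place.
# Purely illustrative with random numbers; real hardware is far messier.
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # conductances = stored weights (4 rows x 3 columns)
V = np.array([0.2, 0.0, 0.8, 0.5])       # input voltages on the 4 row lines

I = G.T @ V                               # currents summed along each column line
print("column currents (result computed where the weights are stored):", I)
```

The multiplication and accumulation happen physically at the memory cells themselves, which is why this style of architecture sidesteps the CPU-to-memory bottleneck Koopmans highlights.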
The biggest challenge facing this field is not so much at the level of technical design but scaling up for commercial deployment in the field. It is hard to overestimate the importance of, and dependence on, the testing and development ecosystems that have grown up around conventional chip development and manufacture. “Silicon processor chips are designed using CAD (computer-aided design) tools,” said Koopmans. “These tools don’t just allow for the design of the chip, they are also capable of simulating the performance. The investment in such tools is enormous because chip design complexity is increasing all the time.”
As a result, Koopmans admitted that despite the optimism, large scale deployment is a long way off. “What is clear is that the first step is to create the tools to both design and simulate these new chips, which can take years, and we’re still in the research stage.”