Just came across this awesome article about Brainchip on Medium, written by someone with the moniker NeuroCortex.AI - and a follow-up is already in the works!
BrainChip’s Akida: Neuromorphic Processor Bringing AI to the Edge
NeuroCortex.AI · 8 min read
As our regular readers might recall, we covered the basics of neuromorphic computing in a blog series last year and are now pursuing further research into its implementation. One of the blockers to real-time implementation of spiking neural networks (SNNs) is the availability of actual neuromorphic chips on which to run SNN algorithms.
Thus we started reaching out to industry professionals involved in developing neuromorphic chips. Soon enough we connected with BrainChip's US operations team (based in California) and began discussing a potential collaboration. They were kind enough to help us out and agreed to send BrainChip Akida chips our way.
Before we start implementing SNN models on Akida, let us tell you about BrainChip the company, the Akida chipset, and why it is useful for us.
Akida by BrainChip mimics the human brain to analyze only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy consumption.
BrainChip is an Australian company that specializes in edge artificial intelligence (AI) on-chip processing and learning, and describes itself as the worldwide leader in this space, offering solutions that bring common sense to the processing of sensor data and enable machines to do more with less. BrainChip has a global presence, with engineering teams located in California, Toulouse (France), Hyderabad (India), and Perth (Australia).
BrainChip’s flagship product, Akida™, is a fully digital, event-based AI processor that mimics the human brain, analyzing essential sensor inputs at the point of acquisition with high efficiency, precision, and energy economy. This technology allows for learning local to the chip at the edge, reducing latency, improving privacy, and enhancing data security. Akida, Greek for ‘spike’, is a neuromorphic SoC that implements a spiking neural network. In many ways it is similar to well-known research projects presented over the past several years, such as IBM’s TrueNorth, SpiNNaker, and Intel’s Loihi. With Akida, BrainChip is attempting to seize this early market opportunity with one of the first commercial neuromorphic products, targeting a wide range of markets from sub-1W edge applications to higher-power, higher-performance applications in the data center.
Timeline (BrainChip)
Here’s a breakdown of what Akida is all about:
- Inspired by the Brain: Unlike traditional processors that rely on complex clock cycles, Akida uses event-based processing, similar to how neurons fire and communicate in the brain. This neuromorphic approach, mimicking the human brain’s structure and function, lets the chip focus on essential information and reduce power consumption (see the toy spiking-neuron sketch after this list).
- High Performance, Low Power Consumption: BrainChip claims that Akida offers superior performance per watt compared to other solutions. Its event-based processing focuses compute on essential information, significantly reducing energy use, which makes it well suited to edge AI applications where power efficiency and battery life are constraints. The low power draw is not a compromise: Akida still delivers the performance per watt needed for real-time AI tasks at the network’s edge.
- On-Chip Learning: Akida can perform some machine learning tasks, including a degree of training, directly on the chip, reducing reliance on cloud-based training and processing. This improves privacy and reduces latency.
- Anomaly Detection: Akida can be trained to identify unusual patterns in data, making it well suited to security and fraud detection.
- Sensor Processing: From analyzing data from cameras and microphones to interpreting readings from industrial sensors, Akida can handle various sensor data streams.
- Autonomous Systems: Akida’s low power consumption and real-time processing capabilities make it suitable for autonomous systems like drones and robots.
- Supported Neural Networks: Akida is designed to accelerate various neural networks directly in hardware, including Convolutional Neural Networks (CNNs) commonly used for image recognition, Recurrent Neural Networks (RNNs) for sequence analysis, and even custom Temporal Event-based Neural Networks (TENNs) optimized for processing complex time-series data.
- Akida Development Environment: BrainChip offers a complete development environment called MetaTF for seamless creation, training, and testing of neural networks specifically designed for the Akida platform. This includes tools for simulating models and integrating them with Python-based machine learning frameworks for easier development.
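To make the event-based idea above concrete, here is a toy leaky integrate-and-fire neuron in plain Python. This is purely our illustration of spike-driven processing, not BrainChip code or the actual Akida neuron model: work is done only when input events arrive, and the neuron itself emits an event only when its potential crosses a threshold.

```python
import numpy as np

def lif_neuron(input_spikes, weights, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron: integrates weighted input
    spikes, leaks potential each step, and emits a spike (event) only
    when the membrane potential crosses the threshold."""
    potential = 0.0
    output_spikes = []
    for spikes_t in input_spikes:            # one time step of binary input events
        potential = leak * potential + np.dot(weights, spikes_t)
        if potential >= threshold:           # event: the neuron fires
            output_spikes.append(1)
            potential = 0.0                  # reset after firing
        else:
            output_spikes.append(0)          # no event, almost no downstream work
    return output_spikes

# Example: 3 input synapses over 5 time steps; computation happens only
# where spikes occur, which is where the power savings come from.
events = np.array([[1, 0, 0], [0, 0, 0], [1, 1, 0], [0, 0, 0], [0, 1, 1]])
print(lif_neuron(events, weights=np.array([0.6, 0.5, 0.3])))
```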
Akida NSoC Architecture
The Akida NSoC neuron fabric comprises cores that are organized in groups of four to form nodes, which are mesh-networked. The cores can be implemented for either convolutional layers or fully-connected layers. This flexibility allows users to develop networks with ultra-low-power Event-Based Convolution as well as Incremental Learning. The nodes can also be used to implement multiple networks on a single device.
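As a purely illustrative way to picture that organization (this is not a BrainChip API, just a sketch based on the description above), the fabric can be modelled as mesh-networked nodes of four cores, each core configured for either convolutional or fully-connected layers:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class CoreType(Enum):
    CONVOLUTIONAL = "conv"        # event-based convolution layers
    FULLY_CONNECTED = "fc"        # fully-connected layers / incremental learning

@dataclass
class Node:
    """A mesh-networked node grouping four cores, as described above."""
    cores: List[CoreType] = field(default_factory=lambda: [CoreType.CONVOLUTIONAL] * 4)

# A toy fabric: several nodes, with one node repurposed for a second,
# fully-connected network hosted on the same device.
fabric = [Node() for _ in range(4)]
fabric[3].cores = [CoreType.FULLY_CONNECTED] * 4
print([node.cores[0].value for node in fabric])
```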
Akida Development Environment
The development environment looks similar to any machine learning framework. Users describe their SNN model, which is stored in the model zoo; the chip will come with three pre-created models (CIFAR-10, ImageNet and MNIST), or users can create their own architecture. A Python script specifies the data location and model type, and this is shipped off to the Akida execution engine along with the Akida neuron model, the training methodology, and the required conversions (from pixels to spikes, etc.). The engine then runs in training mode or inference mode depending on user settings.

The Akida NSoC uses a pure CMOS logic process, ensuring high yields and low cost. Spiking neural networks (SNNs) are inherently lower power than traditional convolutional neural networks (CNNs), as they replace the math-intensive convolutions and back-propagation training methods with biologically inspired neuron functions and feed-forward training methodologies.
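As a rough sketch of what such a Python script could look like with BrainChip's MetaTF tooling (the cnn2snn converter and the akida runtime package): the model file name below is hypothetical, and exact module, function, and method names are assumptions that may differ between MetaTF releases.

```python
# Hypothetical MetaTF-style workflow; package and function names are assumptions.
import numpy as np
from tensorflow import keras
from cnn2snn import convert          # converts a (quantized) Keras model to an Akida model
import akida

# 1. Start from a quantized Keras/TensorFlow model (e.g. trained on MNIST).
keras_model = keras.models.load_model("mnist_quantized.h5")   # hypothetical file

# 2. Convert it into an event-based Akida model.
akida_model = convert(keras_model)

# 3. Map it onto real hardware if an Akida device is present; otherwise the
#    akida package runs the model in software simulation.
devices = akida.devices()
if devices:
    akida_model.map(devices[0])

# 4. Run inference; inputs are plain uint8 tensors, and the data-to-spike
#    conversion is handled by the toolchain/hardware.
x = np.random.randint(0, 255, size=(1, 28, 28, 1), dtype=np.uint8)
predictions = akida_model.predict(x)
print(predictions.shape)
```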
BrainChip’s claim is that while a convolutional approach models the neuron as a large filter with weights, the iterative linear-algebra matrix multiplications on data within an activation layer, plus the associated memory and MAC units, yield a more power-hungry chip. Instead of this convolutional approach, an SNN models the neuron function with synapses and neurons that exchange spikes. The network learns through reinforcement and inhibition of these spikes (repeated spikes act as reinforcement).
The ability to change the firing threshold of the neuron itself, and its sensitivity to those spikes, is a different and more efficient way to train, albeit with limits on model complexity. This means far less memory (there is 6MB per neural core) and a more efficient end result. Neurons learn through selective reinforcement or inhibition of synapses. The Akida NSoC has a neuron fabric comprising 1.2 million neurons and 10 billion synapses.
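To illustrate the kind of spike-driven learning rule being described, reinforcing synapses whose input spikes coincide with an output spike and inhibiting the rest, here is a toy Python sketch. It is our simplification, not BrainChip's actual training rule.

```python
import numpy as np

def update_synapses(weights, input_spikes, output_spike,
                    reinforce=0.05, inhibit=0.01):
    """Toy spike-driven learning step: if the neuron fired, strengthen
    synapses that received an input spike and weaken those that stayed
    silent. This mirrors the 'reinforcement and inhibition of spikes'
    idea above, not BrainChip's proprietary rule."""
    if output_spike:
        weights = weights + reinforce * input_spikes       # coincident spikes -> reinforcement
        weights = weights - inhibit * (1 - input_spikes)   # silent synapses are inhibited
    return np.clip(weights, 0.0, 1.0)

w = np.array([0.5, 0.5, 0.5])
w = update_synapses(w, input_spikes=np.array([1, 0, 1]), output_spike=True)
print(w)   # synapses 0 and 2 strengthened, synapse 1 weakened
```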
Akida Neuron Fabric
The “Akida” device has an on-chip processor complex for system and data management; it also puts the neuron fabric into training or inference mode, which is a matter of setting the thresholds in the neuron fabric. The real key, however, is the data-to-spike converter, especially in areas like computer vision where pixel data needs to be transformed into spikes. This is not a computationally expensive problem from an efficiency perspective, but it does add some compiler and software footwork. There are audio, pixel, and fintech converters for now, each with its own dedicated place on-chip. The Akida NSoC is designed for use as a stand-alone embedded accelerator or as a co-processor. It includes sensor interfaces for traditional pixel-based imaging, dynamic vision sensors (DVS), lidar, audio, and analog signals, as well as high-speed data interfaces such as PCI-Express, USB, and Ethernet. Embedded in the NSoC are data-to-spike converters designed to optimally convert popular data formats into spikes for training and processing by the Akida neuron fabric.
The PCIe links allow for data-center deployments and can scale with the multi-chip expansion port, a basic high-speed serial interface that sends spikes between neural processing cores and is expandable to 1024 devices for very large spiking neural networks. The Akida neuron fabric shown below has its own 6MB of on-chip memory and the ability to interface with flash and DDR.
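The pixel-to-spike step mentioned above can be pictured as simple rate coding: brighter pixels emit more spike events over a time window. The sketch below is a generic illustration of that idea, not the actual converter BrainChip implements in hardware.

```python
import numpy as np

def pixels_to_spikes(image, time_steps=16, rng=None):
    """Toy rate coding: each pixel's intensity (0-255) becomes the
    probability of emitting a spike at each of `time_steps` steps, so
    bright pixels produce dense spike trains and dark pixels few events."""
    rng = rng or np.random.default_rng(0)
    prob = image.astype(np.float32) / 255.0                 # per-pixel firing probability
    return (rng.random((time_steps, *image.shape)) < prob).astype(np.uint8)

image = np.array([[0, 128], [255, 64]], dtype=np.uint8)     # tiny 2x2 "image"
spikes = pixels_to_spikes(image)
print(spikes.shape)          # (16, 2, 2): a spike train per pixel
print(spikes.sum(axis=0))    # brighter pixels fired more often
```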
The CIFAR-10 benchmark on which BrainChip rates Akida's performance and efficiency
BrainChip’s “Akida” chip is aimed at both datacenter and edge deployments, for training as well as inference. This includes vision systems in particular, but also financial-tech applications where users cannot tolerate intermittent connectivity or latency from the cloud.
BrainChip’s Commitment to Development
BrainChip is actively developing Akida, with the second generation offering improved capabilities for handling complex neural networks. The company also continues to build out MetaTF, its development environment, which simplifies the creation and deployment of neural networks designed for Akida.
The Future of AI is Neuromorphic
The Akida neuromorphic processor represents a significant leap forward in AI technology. With its efficient processing, on-chip learning capabilities, and wide range of applications, Akida is poised to revolutionize the way AI is used at the edge. As BrainChip continues to develop Akida, we can expect even more exciting possibilities to emerge in the future of AI.
Conclusion
In essence, the Akida Neuromorphic Processor is a powerful yet energy-efficient AI processor designed to bring intelligence to the edge of networks by mimicking the human brain’s processing style. Its unique features make it a promising solution for various applications requiring real-time and low-power AI capabilities. Akida is still under development, with BrainChip working on newer generations to address the growing intelligence chip market. Overall, BrainChip is a company at the forefront of neuromorphic computing, aiming to revolutionize AI processing with brain-inspired hardware.
Some good news: we actually received two Akida chips a few days back, thanks to BrainChip. We will soon publish a detailed write-up on how to install them and run AI models on top of them. Stay tuned!
The two Akida chips we received from BrainChip