Many of the long-term holders have been here for going on ten years, so it's not surprising that patience occasionally wears thin. However, Akida itself has only been available for the last five years. Before that there was BrainChip Studio (software) and subsequently BrainChip Accelerator (hardware adapted to improve the performance of Studio). Akida has gone through a number of iterations since then, and the business model has shifted from IP plus ASIC, to IP only, and now back to IP plus a somewhat limited hardware embodiment plus "software" in the form of models and MetaTF.
One of the significant developments in Akida 1 was on-chip learning, which removes the need to retrain the entire model in the cloud whenever new items have to be added to it. Cloud retraining is a major cost in power and energy for the major AI engines.
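BrainChip's actual edge-learning algorithm is proprietary, but the general idea of adding a new class without retraining the whole network can be sketched with a toy nearest-prototype classifier over fixed feature embeddings (all names and values here are hypothetical, purely for illustration):

```python
import numpy as np

class PrototypeClassifier:
    """Toy nearest-prototype classifier: adding a class just stores
    a new prototype vector -- no gradient retraining of old classes."""
    def __init__(self):
        self.prototypes = {}  # label -> mean feature vector

    def add_class(self, label, examples):
        # "Learn" a new class from a handful of feature vectors
        self.prototypes[label] = np.mean(examples, axis=0)

    def predict(self, feature):
        # Classify by nearest stored prototype (Euclidean distance)
        return min(self.prototypes,
                   key=lambda l: np.linalg.norm(self.prototypes[l] - feature))

rng = np.random.default_rng(0)
clf = PrototypeClassifier()
clf.add_class("cat", rng.normal(0.0, 0.1, size=(5, 16)))
clf.add_class("dog", rng.normal(1.0, 0.1, size=(5, 16)))
# A new class is added on the fly -- nothing already learned is touched
clf.add_class("bird", rng.normal(2.0, 0.1, size=(5, 16)))
print(clf.predict(np.full(16, 2.05)))  # -> bird
```

The point of the sketch is the cost structure: adding a class is one averaging step on-device, instead of a full retraining pass over the whole dataset in the cloud.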
Our tech has continued to evolve at a rapid rate under the pressure of expanding market requirements. The commercial version of Akida 1 went from 1-bit weights and activations to 4-bit. Akida 1 has been incorporated in groundbreaking on-chip cybersecurity applications (QV, DoE), which is surely a precursor for Akida 2/TENNs in micro-Doppler radar for AFRL/RTX. Akida 2 is embodied in FPGA for demonstration purposes.
Akida 2 went to 8 bits. It also introduced new features such as skip connections, TENNs, and state-space models.
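TENNs are built on state-space models, and while BrainChip's implementation is its own, the core of any linear state-space model is a simple recurrence: the state is updated from the input, and the output is read off the state. A minimal sketch with illustrative (not Akida) matrices:

```python
import numpy as np

def ssm_step(A, B, C, x, u):
    """One step of a discrete linear state-space model:
    x' = A @ x + B @ u   (state update)
    y  = C @ x'          (output readout)"""
    x = A @ x + B @ u
    return x, C @ x

# Tiny 2-state, 1-input, 1-output example (values chosen arbitrarily)
A = np.array([[0.9, 0.1], [0.0, 0.8]])  # state transition
B = np.array([[1.0], [0.5]])            # input projection
C = np.array([[1.0, 0.0]])              # output readout
x = np.zeros((2, 1))
for u in [1.0, 0.0, 0.0]:               # impulse input, then silence
    x, y = ssm_step(A, B, C, x, np.array([[u]]))
print(float(y))  # decaying impulse response after three steps
```

Because each step only touches the current state and input, the memory footprint is constant regardless of sequence length, which is what makes this family of models attractive for low-power temporal processing at the edge.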
The roadmap also included Akida GenAI with INT16, again in FPGA, and Akida 3 (INT16, FP32), due to be embodied in FPGA within a year.
There is a clear trend to increased precision (INT16/FP32) in Akida 3. This indicates an improved ability to distinguish smaller features, albeit at the expense of greater power consumption. To at least partially offset that, Jonathan Tapson did mention a couple of still-under-wraps patent applications aimed at improving the efficiency of data movement in memory. So I see Akida 3 as a highly specialized chip for applications needing extremely high accuracy, while I think Akida 2/TENNs will be the basis for our general IP/SoC business.
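The accuracy gain from wider datatypes is easy to see in simple weight quantization. This is a generic symmetric uniform quantizer, not Akida's (BrainChip's quantization scheme is not public), but it shows how round-trip error shrinks as the bit width grows from 4 to 8 to 16:

```python
import numpy as np

def quantize(w, bits):
    """Symmetric uniform quantization to signed `bits`-bit integers,
    then dequantize back to float to measure round-trip error."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale

rng = np.random.default_rng(1)
w = rng.normal(size=10_000)
for bits in (4, 8, 16):
    err = np.mean(np.abs(w - quantize(w, bits)))
    print(f"INT{bits}: mean abs round-trip error {err:.2e}")
# Each extra 4 bits shrinks the quantization step, and hence the
# error, by roughly a factor of 16 -- the flip side being wider
# multipliers and more data movement, i.e. more power.
```

That trade is exactly the one described above: finer precision resolves smaller features, but every extra bit costs silicon area, memory traffic, and energy.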
The fact that Akida has evolved so rapidly in such large steps is a tribute to the flexibility of its basic design.
https://brainchip.com/brainchip-sho...essing-ip-and-device-at-tinyml-summit-2020-2/
BrainChip Showcases Vision and Learning Capabilities of its Akida Neural Processing IP and Device at tinyML Summit 07.02.2020
SAN FRANCISCO–(BUSINESS WIRE)–
BrainChip Holdings Ltd. (ASX: BRN), a leading provider of ultra-low power, high-performance edge AI technology, today announced that it will present its revolutionary new breed of neuromorphic processing IP and Device in two sessions at the tinyML Summit at the Samsung Strategy & Innovation Center in San Jose, California February 12-13.
In the Poster Session, “Bio-Inspired Edge Learning on the Akida Event-Based Neural Processor,” representatives from BrainChip will explain to attendees how the company’s Akida™ Neuromorphic System-on-Chip processes standard vision CNNs using industry standard flows and distinguishes itself from traditional deep-learning accelerators through key design choices and its bio-inspired learning algorithm. These features allow Akida to require 40 to 60 percent fewer computations to process a given CNN than a DLA, as well as to perform learning directly on the chip.
BrainChip will also demonstrate “On-Chip Learning with Akida” in a presentation by Senior Field Applications Engineer Chris Anastasi. The demonstration will involve capturing a few hand gestures and hand positions from the audience using a Dynamic Vision Sensor camera and performing live learning and classification using the Akida neuromorphic platform. This will showcase the fast and lightweight unsupervised live learning capability of the spiking neural network (SNN) and the Akida neuromorphic chip, which requires much less data than a traditional deep neural network (DNN) counterpart and consumes much less power during training.
“We look forward to having the opportunity to share the advancements we have made with our flexible neural processing technology in our Poster Session and Demonstration at the tinyML Summit,” said Louis DiNardo, CEO of BrainChip. “We recognize the growing need for low-power machine learning for emerging applications and architectures and have worked diligently to provide a solution that performs complex neural network training and inference for these systems. We believe that as a high-performance and ultra-low power neural processor, Akida is ideally suited to be implemented at the Edge and in IoT applications.”
Akida is available as a licensable IP technology that can be integrated into ASIC devices and will be available as an integrated SoC, both suitable for applications such as surveillance, advanced driver assistance systems (ADAS), autonomous vehicles (AV), vision-guided robotics, drones, augmented and virtual reality (AR/VR), acoustic analysis, and the Industrial Internet of Things (IIoT). Akida performs neural processing on the edge, which vastly reduces the computing resources required of the system host CPU. This unprecedented efficiency not only delivers faster results, it consumes only a tiny fraction of the power resources of traditional AI processing while enabling customers to develop solutions with industry standard flows, such as TensorFlow/Keras. Functions like training, learning, and inferencing are orders of magnitude more efficient with Akida.
Tiny machine learning is broadly defined as a fast-growing field of machine learning technologies and applications, including hardware (dedicated integrated circuits), algorithms, and software capable of performing on-device sensor (vision, audio, IMU, biomedical, etc.) data analytics at extremely low power, typically in the mW range and below, thereby enabling a variety of always-on use cases in battery-operated devices. tinyML Summit 2020 will continue the tradition of high-quality invited talks, poster and demo presentations, open and stimulating discussions, and significant networking opportunities. It will cover the whole stack of technologies (Systems-Hardware-Algorithms-Software-Applications) at a deep technical level, a unique feature of the tinyML Summits. Additional information about the event is available at https://tinymlsummit.org/