Ok...was pretty certain this had been posted previously but did quick keyword search and couldn't find it so apols if has been already.
Just a nice reminder & summary imo.
Neuromorphic computing will need partners to enter the data center
May 12, 2022
The emerging field of neuromorphic processing is not an easy one to navigate. There are major players in the field who are leveraging their size and ample resources, the highest profile being Intel with its Loihi processors and IBM with its TrueNorth initiative, along with a growing list of startups that includes SynSense, Innatera Nanosystems, and GrAI Matter Labs.
Also in that last group is BrainChip, a company that has been developing its Akida chip (Akida is Greek for “spike”) and accompanying intellectual property for more than a decade. We have been following BrainChip for the last few years, speaking with the company in 2018 and
then again two years later, and the company has proven to be adaptable in a rapidly changing space. The initial plan was to bring the commercial SoC to market by 2019, but BrainChip extended the timeframe to add the ability to run convolutional neural networks (CNNs) alongside spiking neural networks (SNNs).
In January, the company announced the full release of its AKD1000 platform, including its Mini PCIe board that leverages the Akida Neural Network processor. It’s a key part of BrainChip’s strategy of using the technology in reference designs while seeking partnerships with hardware and chip vendors who will incorporate it into their own designs.
“Looking at our fundamental business model, is it chips or IP or both?” Jerome Nadel, chief marketing officer at BrainChip, tells The Next Platform. “It is an IP license model. We have reference chips, but our go-to-market is definitely working with ecosystem partners, especially those who take a license, such as a chip supplier or ASIC designer and top-tier OEM. … If we are connected with a reference design for sensors across various sensor modalities, or for the development of application software, then when someone puts together the AI enablement, they want to run it on our hardware and there is already interoperability. You’ll see a lot of these building blocks as we try to break into the ecosystem, because ultimately, when you look at the categorical growth in edge AI, it’s really going to come from building blocks leveraging smart sensors.”
BrainChip is aiming its technology at the edge, where more data is expected to be generated in the coming years. Citing research from IDC and McKinsey, BrainChip expects the market for edge devices that need AI to grow from $44 billion this year to $70 billion by 2025. Further, at last week’s Dell Technologies World event, CEO Michael Dell reiterated his belief that while 10 percent of data is now generated at the edge, that will shift to 75 percent by 2025. Where data is created, AI will follow. BrainChip has designed Akida for high-performance, low-power environments and to run analytical AI workloads, particularly inference, on the chip to reduce the flow of data to and from the cloud and thus cut the latency of generating results.
Neuromorphic chips are designed to mimic the brain through the use of SNNs. BrainChip extended the workloads Akida can run by adding support for CNNs, which are useful in edge environments for tasks such as embedded vision, embedded audio, automated driving with LiDAR and radar sensing, and industrial IoT. The company is eyeing sectors like autonomous driving, smart health, and smart cities as growth areas.
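For readers who have not worked with SNNs, the sketch below shows a textbook leaky integrate-and-fire neuron in Python: it integrates incoming spikes, leaks potential over time, and emits an output spike (an event) only when a threshold is crossed. This is a generic illustration of the event-driven idea, not BrainChip’s Akida design, and the parameter values are arbitrary.

import numpy as np

def lif_neuron(spike_train, leak=0.9, weight=0.4, threshold=1.0):
    """Simulate one leaky integrate-and-fire neuron.

    spike_train: binary array of incoming events (1 = spike at that timestep).
    Returns a binary array of output spikes.
    Generic textbook model for illustration; not the Akida design.
    """
    potential = 0.0
    out = np.zeros_like(spike_train)
    for t, s in enumerate(spike_train):
        potential = potential * leak + weight * s   # leak, then integrate the event
        if potential >= threshold:                  # fire only when the threshold is crossed
            out[t] = 1
            potential = 0.0                         # reset after the spike
    return out

# A sparse input: the neuron only does real work when events arrive.
inp = np.array([0, 1, 1, 0, 0, 1, 1, 1, 0, 0])
print(lif_neuron(inp))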
BrainChip is already seeing some success. Its Akida 1000 platform is being used in Mercedes-Benz’s Vision EQXX concept car for in-cabin AI, including driver and voice authentication, keyword detection and contextual understanding.
The vendor sees the partnerships as a way to increase its presence in the field of neuromorphic chips.
“If we look at a five-year strategic plan, our outer three years probably look different than our inner two,” says Nadel. “In the inner two, we are still going to focus on chip suppliers and designers and top-tier OEMs. But the outer three, if you look at the categories, are really going to come from basic devices, whether in the car or in the cabin, or in consumer electronics looking for this AI enablement. We need to be in the ecosystem. Our IP is de facto and the business model revolves around that.”
The company has announced a number of partnerships, including with nViso, an AI analytics company. That collaboration will focus on battery-powered applications in the robotics and automotive industries, using Akida chips with nViso’s AI technology for social robots and in-cabin monitoring systems. BrainChip is also working with SiFive to integrate Akida technology with SiFive’s RISC-V processors for edge AI compute workloads, and with MosChip, running its Akida IP with that vendor’s ASIC platform for intelligent edge devices.
BrainChip is also working with Arm.
To accelerate the strategy, the company this week launched its AI Enablement Program to offer vendors working prototypes of BrainChip IP on Akida hardware to demonstrate the platform’s capabilities to run AI inference and learning on a chip and on a device. The vendor also offers support for identifying use cases for model and sensor integration.
The program includes three levels, from basic and advanced prototypes up to a working solution, with the number of AKD1000 chips scaling to 100, custom models for some users, 40 to 160 hours with machine learning experts, and two to ten development systems. The prototypes will allow BrainChip to bring its commercial products to users at a time when competitors are still developing their own technologies in a relatively nascent market.
“There is a step of being clear about the use cases and maybe a roadmap of further sensor integration and sensor fusion,” says Nadel. “This is not how we make a living as a business model. The intention is to demonstrate real and tangible working systems based on our technology. The idea is that we can put them in the hands of people and they can see what we do.”
BrainChip’s Akida IP includes support for up to 1,024 nodes that can be configured in arrays of two to 256 nodes connected over a mesh network, with each node comprising four neural processing units. Each NPU includes configurable SRAM, can be configured to run CNNs if required, and is event-based, or spike-based, exploiting data sparsity, activations, and weights to reduce the number of operations by at least two times. The Akida Neural SoC can be used standalone or integrated as a co-processor in a variety of use cases, and it provides 1.2 million neurons and 10 billion synapses.
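The claim that event-based processing of sparse data cuts the number of operations by at least two times can be illustrated with a toy comparison: a dense matrix-vector multiply touches every weight, while an event-driven version only accumulates the columns where an input event (a non-zero activation) actually occurred. This is a schematic sketch of the principle, with made-up layer sizes and sparsity, not the Akida NPU datapath.

import numpy as np

rng = np.random.default_rng(0)

# Toy layer: 64 inputs -> 32 outputs, with a sparse (event-like) activation vector.
weights = rng.normal(size=(32, 64))
activations = rng.normal(size=64)
activations[rng.random(64) < 0.8] = 0.0   # roughly 80% of inputs carry no event

# Dense path: every weight participates, regardless of the input value.
dense_ops = weights.size

# Event-driven path: only columns with a non-zero activation are accumulated.
events = np.nonzero(activations)[0]
sparse_ops = len(events) * weights.shape[0]
out = np.zeros(32)
for j in events:
    out += weights[:, j] * activations[j]   # accumulate one column per event

assert np.allclose(out, weights @ activations)   # same result, far fewer operations
print(f"dense MACs: {dense_ops}, event-driven MACs: {sparse_ops} "
      f"({dense_ops / sparse_ops:.1f}x fewer)")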
The offering also includes the MetaTF machine learning framework for developing neural networks for edge applications, as well as three reference development systems: PCIe, shuttle PC, and Raspberry Pi.
The platform can be used for one-shot on-chip learning, using the trained model to extract features and add new classes to it, or for multi-pass processing that leverages parallel processing to reduce the number of NPUs needed.
[Figures in the original article: single-pass (one-shot) and multi-pass processing flows]
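The one-shot flow described above, extracting features with the trained model and attaching new classes to it, can be approximated with a frozen feature extractor and a nearest-prototype classifier: adding a class is just storing the feature vector of a single example, with no retraining and no cloud round trip. The article does not describe Akida’s actual learning rule, so the classifier and the extract_features placeholder below are illustrative assumptions.

import numpy as np

class PrototypeClassifier:
    """Nearest-prototype classifier over features from a frozen backbone.

    Illustrates incremental class addition without retraining: adding a class
    is just storing a feature vector. A generic stand-in, not BrainChip's
    on-chip learning rule.
    """
    def __init__(self):
        self.prototypes = {}   # label -> feature vector

    def add_class(self, label, feature_vec):
        self.prototypes[label] = np.asarray(feature_vec, dtype=float)

    def predict(self, feature_vec):
        f = np.asarray(feature_vec, dtype=float)
        return min(self.prototypes,
                   key=lambda label: np.linalg.norm(self.prototypes[label] - f))

def extract_features(x):
    # Placeholder for the frozen, pre-trained network running on the device.
    return np.asarray(x, dtype=float)

clf = PrototypeClassifier()
clf.add_class("keyword_hello", extract_features([0.9, 0.1, 0.0]))
clf.add_class("keyword_stop",  extract_features([0.0, 0.2, 0.9]))
# One-shot addition of a new class, no cloud round trip and no retraining:
clf.add_class("keyword_go",    extract_features([0.1, 0.9, 0.1]))

print(clf.predict(extract_features([0.15, 0.85, 0.05])))  # -> "keyword_go"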
“The idea of our accelerator being close to the sensor means you’re not sending sensor data, you’re sending inference data,” Nadel says. “It’s really a system architecture play in which we envision our hardware being combined with sensors. The sensor captures data, it is preprocessed, and we do the inference from that, and the learning, at the core, but especially the inference. Take an in-car advanced driver assistance system: you’re not shipping all the computation and data inference off to the GPU-loaded server box. You get the inference data, the metadata, and your load will be lighter.”
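Nadel’s point about sending inference results rather than raw sensor streams can be made concrete with a back-of-the-envelope sketch: an uncompressed camera frame is several orders of magnitude larger than a small record carrying just a label, a confidence score, and a timestamp. The frame size and message format below are illustrative assumptions, not figures from BrainChip.

import json
import time

# Illustrative assumption: a modest VGA frame sent raw vs. a small inference record.
FRAME_BYTES = 640 * 480 * 3          # uncompressed RGB frame, roughly 900 KB

def inference_record(label, confidence):
    """Metadata an edge device might upload instead of the frame itself."""
    return json.dumps({
        "label": label,
        "confidence": round(confidence, 3),
        "ts": time.time(),
    }).encode("utf-8")

msg = inference_record("driver_present", 0.97)
print(f"raw frame: {FRAME_BYTES} bytes, inference record: {len(msg)} bytes "
      f"(~{FRAME_BYTES // len(msg)}x smaller uplink per event)")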
On-chip data processing is part of BrainChip’s belief that, for much of edge AI, the future won’t require the cloud. Instead of sending all the data to the cloud, which brings higher latency and costs, the key will be to do it all on the chip itself. Nadel says it is a “small provocation for the semiconductor industry to talk about cloud independence. It’s not anti-cloud, but the idea is that hyperscaling down to the edge is probably the wrong approach. You have to scale up from the sensor.”
Going back to the cloud also means having to retrain the model if there is a change in object classification, Anil Mankar, co-founder and chief development officer, tells The Next Platform. Adding more classes means changing the weights in the classification.
“Learning on-chip,” says Mankar. “It’s called incremental learning or continuous learning, and that’s only possible because … we’re working with spikes, and we are actually copying how our brain learns faces and objects and things like that. People don’t want to do transfer learning: go back to the cloud, get new weights. Now you can classify more objects. Once you have an activity on the device, you don’t need the cloud, you don’t need to go back. Whatever you have learned, you have learned,” and that does not change when something new is added.