LAGUNA HILLS, CA / ACCESSWIRE / March 12, 2023 / BrainChip Holdings Ltd (ASX:BRN) (OTCQX:BRCHF) (ADR:BCHPY), the world's first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, today announced it has validated that its Akida™ processor family integrates with the Arm® Cortex®-M85 processor, unlocking new levels of performance and efficiency for next-generation intelligent edge devices.
Arm Cortex-M85 delivers the highest levels of performance in the ...
>>> Read more:
BrainChip integrates Akida with Arm Cortex-M85 Processor, Unlocking AI Capabilities for Edge Devices
This Arm news is like a tidal wave of ecosystem building. The article shows as only 10 hours old.
EE Times seems to be on top of things. Lots of brilliant TSE folks here have pointed them out.
Yeah, free publicity: they have a big high-tech following to broadcast Team BrainChip's spiking neural network attributes.
https://www.eetimes.com/tinyml-comes-to-embedded-world-2023/
Arm
Following the launch of the Cortex-M85 AI-capable MCU core last year, Paul Williamson, senior VP and GM of the IoT line of business at Arm, told EE Times that Arm will continue to invest in its Ethos product line of dedicated AI accelerator cores. While the Ethos line is “very active” and “a concentration of our continued investment,” Williamson said, Arm believes that in the world of MCUs it will be important to have a stable, software-targetable framework that complements the growing machine learning capability in the MCU.
Announced at the show was enhanced integration of Arm Virtual Hardware (AVH) into the latest version of the Keil MCU development kit. AVH allows developers to quickly perform “does it fit” checks for their algorithms on specific Arm cores with specific memory sizes, helping them decide whether dedicated accelerators like Ethos are required.
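For a sense of what such a “does it fit” check boils down to, here is a toy Python sketch of the memory arithmetic involved. The MCU configurations and model footprints below are made-up illustrative numbers, not figures from Arm or BrainChip; the real AVH workflow runs this kind of check against actual compiled binaries on simulated cores.

```python
# Toy "does it fit" check: compare a model's memory needs against an
# MCU configuration's budgets. All numbers are illustrative only.

def fits(model, mcu):
    """Return True if weights fit in flash and activations fit in SRAM."""
    return (model["weights_kb"] <= mcu["flash_kb"]
            and model["activations_kb"] <= mcu["sram_kb"])

# Hypothetical MCU configurations (flash/SRAM sizes are made up).
mcus = {
    "cortex_m85_small": {"flash_kb": 512,  "sram_kb": 256},
    "cortex_m85_large": {"flash_kb": 2048, "sram_kb": 1024},
}

# Hypothetical quantized model footprint.
model = {"name": "kws_int8", "weights_kb": 700, "activations_kb": 180}

for name, mcu in mcus.items():
    verdict = "fits" if fits(model, mcu) else "does NOT fit"
    print(f"{model['name']} {verdict} on {name}")
```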
Arm is also working closely with third-party accelerator IP providers for applications that require more acceleration than the Ethos line can offer, including BrainChip (on Arm's booth, a demo showed an Arm M85 working with BrainChip Akida IP).
https://www.mdpi.com/1424-8220/23/6/3037
Spiking neural networks (SNNs) are a topic gaining more and more interest nowadays. They resemble the biological neural networks of the brain more closely than their second-generation counterparts, artificial neural networks (ANNs). SNNs have the potential to be more energy efficient than ANNs on event-driven neuromorphic hardware, which could yield drastic maintenance cost reductions for neural network models, since energy consumption would be much lower than for the regular deep learning models hosted in the cloud today. However, such hardware is still not widely available. On standard computer architectures, consisting mainly of central processing units (CPUs) and graphics processing units (GPUs), ANNs have the upper hand in execution speed thanks to their simpler models of neurons and of the connections between neurons. They generally also win in terms of learning algorithms, as SNNs do not reach the same levels of performance as their second-generation counterparts on typical machine learning benchmark tasks, such as classification. In this paper, we review existing learning algorithms for spiking neural networks, divide them into categories by type, and assess their computational complexity.
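As a concrete illustration of the event-driven computation the abstract refers to, here is a minimal leaky integrate-and-fire (LIF) neuron in Python with NumPy. The parameter values are arbitrary, and this is a textbook-style sketch rather than the model of any particular paper or chip.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron. Arbitrary parameters.
tau, v_thresh, v_reset, dt = 20.0, 1.0, 0.0, 1.0  # time constant (ms), threshold, reset, step (ms)

rng = np.random.default_rng(0)
# Sparse binary input spikes over 100 timesteps from 10 presynaptic neurons.
in_spikes = (rng.random((100, 10)) < 0.05).astype(float)
weights = rng.normal(0.3, 0.1, size=10)

v = 0.0
out_spikes = []
for t in range(in_spikes.shape[0]):
    # Membrane potential leaks toward rest and integrates weighted input.
    # The input term is nonzero only at timesteps where spikes occurred,
    # which is the sparsity that event-driven hardware exploits to save energy.
    v += dt / tau * (-v) + weights @ in_spikes[t]
    if v >= v_thresh:          # threshold crossing emits an output spike
        out_spikes.append(t)
        v = v_reset            # hard reset after firing

print("output spike times (ms):", out_spikes)
```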
1. Introduction
In the last decade, significant progress has been made in the field of neural networks. This progress mostly resides in the area of deep learning, which achieves high performance in fields like computer vision and natural language processing. Some notable tasks include object detection [1], image segmentation [2], text translation, and question answering [3]. SNNs, however, are still not up to par with artificial neural networks in terms of performance on common machine learning tasks. Classification datasets such as MNIST [4] and CIFAR-10 [5] still prove to be a challenge for these types of networks. Despite that, researchers have developed some applications. One such example is object detection: an SNN trained with stochastic gradient descent on the KITTI dataset achieved results similar to an ANN while being much more energy efficient in terms of computation [6]. Another example from the domain of computer vision is image segmentation with a UNET-based SNN; in this case, an ANN was trained on the ISBI 2D EM dataset and converted to an SNN [7]. SNNs have also been applied to LiDAR-based vehicles, where the ability to autonomously control speed and steering in static and dynamic environments has been demonstrated [8].
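The ANN-to-SNN conversion mentioned for the UNET example typically relies on rate coding, where a ReLU activation is approximated by a neuron's firing rate. The sketch below shows that idea in its simplest form; it is a generic illustration of the principle, not the specific conversion method of reference [7].

```python
# Rate-coding illustration: an integrate-and-fire neuron driven by a
# constant input approximates a ReLU activation with its firing rate
# over T timesteps. Generic textbook construction, not the method of [7].

def relu(x):
    return max(x, 0.0)

def spike_rate(x, T=1000, v_thresh=1.0):
    """Firing rate of an IF neuron with constant input x (no leak)."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += x
        if v >= v_thresh:
            spikes += 1
            v -= v_thresh  # "reset by subtraction" keeps the residual charge
    return spikes / T

for x in (-0.3, 0.0, 0.25, 0.7):
    print(f"x={x:+.2f}  relu={relu(x):.3f}  rate={spike_rate(x):.3f}")
```

For inputs in [0, 1], the firing rate converges to the ReLU output as T grows, which is why converted networks need enough timesteps to match the accuracy of the source ANN.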
An important trend in spiking neural network-based computer vision is the use of event-based cameras.
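Event cameras output an asynchronous stream of (timestamp, x, y, polarity) tuples rather than full frames, and a common preprocessing step for SNN pipelines is to accumulate events over a short time window into a 2D tensor. The sketch below uses synthetic events; the resolution and window length are arbitrary assumptions.

```python
import numpy as np

# Synthetic event stream: (timestamp_us, x, y, polarity) tuples, mimicking
# the raw output of an event camera. Resolution and timings are arbitrary.
rng = np.random.default_rng(1)
n = 5000
events = np.stack([
    np.sort(rng.integers(0, 10_000, n)),   # timestamps in microseconds
    rng.integers(0, 128, n),               # x coordinate
    rng.integers(0, 128, n),               # y coordinate
    rng.choice([-1, 1], n),                # polarity: brightness up/down
], axis=1)

def accumulate(events, t0, t1, shape=(128, 128)):
    """Sum event polarities in [t0, t1) into a 2D frame."""
    frame = np.zeros(shape, dtype=np.int32)
    window = events[(events[:, 0] >= t0) & (events[:, 0] < t1)]
    np.add.at(frame, (window[:, 2], window[:, 1]), window[:, 3])
    return frame

frame = accumulate(events, 0, 2_000)
print("frame shape:", frame.shape, "nonzero pixels:", np.count_nonzero(frame))
```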