I don't think this has been posted here before, but it comes from what BrainChip calls the new Blog series on their website:
10.5.22
"According to Gartner, traditional computing technologies
will hit a digital wall in 2025 and force a shift to new strategies, including those involving neuromorphic computing. With neuromorphic computing, endpoints can create a truly intelligent edge by efficiently identifying, extracting, analyzing, and inferring only the most meaningful data. Untethered from the cloud, neuromorphic edge AI silicon is already enabling people to seamlessly interact with smarter devices that independently learn new skills, intelligently anticipate requests, and instantly deliver services.
Unlocking the full potential of edge AI with BrainChip
At BrainChip, we believe edge AI presents both a challenge and an opportunity for the semiconductor industry. Specific strategies for unlocking the full potential of edge AI will undoubtedly vary, which is why we are launching a new company blog series to explore how neuromorphic edge silicon can mimic the human brain to analyze only essential sensor inputs at the point of acquisition.
We’ll take an in-depth look at the primary design principles of neuromorphic edge silicon, discuss scaling and optimizing on-chip memory, review key strategies for efficiently leveraging incremental and one-shot learning, and detail how to write more efficient machine learning models. We’ll also highlight real-world edge AI use cases powered by BrainChip’s Akida neural networking processor, including medical sensors, automotive edge learning at high speeds, object detection and classification, and keyword spotting.
The future’s not only bright, it’s essential
In recent years, neuromorphic computing has enabled new learning models and architectures for edge AI. Smart edge silicon that follows the principles of essential AI—doing more with less—now supports a new generation of advanced multimodal use cases with independent learning and inference capabilities, faster response times, and a lower power budget. By keeping machine learning on the device, neuromorphic edge silicon dramatically reduces latency, minimizes power consumption, and improves security.
We are excited to launch our new company blog series to explore how neuromorphic computing supports the unique learning and performance requirements of edge AI. We look forward to provoking conversation and collaboration as we deploy effective edge compute across real-world applications such as connected cars, consumer electronics, industrial and commercial IoT, and other areas."
8.5.22
"Moore’s Law and distributed cloud computing have enabled artificial intelligence (AI) and machine learning (ML) applications to effectively overcome Von Neumann bottlenecks that once limited data throughput on conventional systems. With enormous amounts of targeted compute power and memory available in the cloud, AI/ML training and inference models continue to increase in both size and sophistication.
However, cloud-based data centers have created a new set of challenges for AI applications at the edge such as latency, power, and security. Untethering edge AI from the cloud helps address some of these issues—and creates opportunities for the semiconductor industry to design new products with smarter and more independent sensors, devices, and systems.
For example, autonomous vehicles leverage cloud-free edge AI learning at high speeds to continuously update safety parameters, making it easier for onboard systems to detect anomalous structural vibrations and engine sounds.
Cloud-free edge AI also enables gesture control with faster response times, allowing doctors and therapists to help people with disabilities interact with sophisticated robotic assistance devices. In the future, field hospitals in disaster zones will be able to deploy medical robots with advanced edge AI capabilities, even where connectivity is limited.
Lastly, smart farms in remote areas depend on cloud-free AI to help lower food prices by efficiently increasing crop yields with intelligent soil sensors, irrigation systems, and autonomous drones.
The human brain shows us the way
Untethering edge AI from the cloud is an important first step to enabling smarter endpoints. However, the semiconductor industry must also acknowledge the inefficiencies of simply scaling down AI hardware at the edge. Indeed, advanced edge applications are already hitting the limits of conventional AI silicon and standard learning models.
Many chips used in edge applications today are still general-purpose processors such as GPUs that consume approximately 1,000X more power than purpose-built edge silicon. Although some smart devices are equipped with low-power digital signal processors (DSPs), these single-purpose chips typically lack advanced learning and analytics capabilities.
In recent years, neuromorphic computing has unlocked more efficient architectures and learning models such as incremental and one-shot learning for edge AI. Smart edge silicon that follows the principles of essential AI—doing more with less—now supports a new generation of advanced multimodal and autonomous use cases with independent learning and inference capabilities, faster response times, and a lower power budget.
These include self-driving cars that personalize cabin settings for individual drivers, automated factories and warehouses, advanced speech and facial recognition applications, and robots that use sophisticated sensors to see, hear, smell, touch, and taste.
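[An aside from me, not part of the blog: for anyone wondering what "one-shot learning" actually means in practice, below is a minimal illustrative sketch of one common approach, a prototype-based classifier that learns a new class from a single example. This is my own assumption about the general technique, not how BrainChip's Akida implements it, and the class and label names are hypothetical.]

```python
# Minimal sketch of one-shot learning via class prototypes (illustrative only;
# not BrainChip's Akida implementation). A new class is "learned" from a single
# example by storing its feature vector as a prototype, and inference becomes a
# nearest-prototype lookup -- no gradient-based retraining required.
import numpy as np

class OneShotPrototypeClassifier:
    def __init__(self):
        self.prototypes = {}  # label -> normalized feature vector

    def learn(self, label, feature_vector):
        # One-shot "training": store a single normalized example as the prototype.
        v = np.asarray(feature_vector, dtype=float)
        self.prototypes[label] = v / np.linalg.norm(v)

    def predict(self, feature_vector):
        # Classify by cosine similarity to the stored prototypes.
        v = np.asarray(feature_vector, dtype=float)
        v = v / np.linalg.norm(v)
        return max(self.prototypes, key=lambda label: self.prototypes[label] @ v)

# Hypothetical usage: in a real system the feature vectors would come from an
# upstream encoder (e.g., activations of a sensor-processing network).
clf = OneShotPrototypeClassifier()
clf.learn("keyword_hello", [0.9, 0.1, 0.2])
clf.learn("keyword_stop", [0.1, 0.8, 0.3])
print(clf.predict([0.85, 0.15, 0.25]))  # -> keyword_hello
```

The appeal for edge devices is that enrolling a new class amounts to writing one vector to on-chip memory, which is far cheaper than retraining a model in the cloud.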
The future’s not only bright, it’s essential
At BrainChip, we believe edge AI presents both a challenge and an opportunity for the semiconductor industry to look well beyond cutting the cord to the cloud and simply scaling down conventional chip architectures. Neuromorphic edge AI silicon is already enabling people to seamlessly interact with smarter devices that independently learn new skills, intelligently anticipate requests, and instantly deliver services.
Specific strategies for unlocking the full potential of edge AI will undoubtedly vary, which is why it is important to explore a future in which semiconductor companies play a collaborative role in helping to design and implement new neuromorphic architectures.
In this context, BrainChip and the Global Semiconductor Alliance (GSA) have jointly published a white paper titled “Edge AI: The Cloud-Free Future is Now” that discusses:
- Untethering edge AI from the cloud
- Overcoming the limits of conventional AI silicon and learning methodologies
- Unlocking new architectures and learning models with neuromorphic computing
- Supporting autonomous edge AI applications
“We are excited to support the publication of this white paper as it is a call to our industry to work together to address the unique learning and performance requirements of edge AI,” said BrainChip CMO Jerome Nadel. “We hope it provokes conversation and collaboration—and ultimately enables effective edge compute to be universally deployable across real-world applications such as connected cars, consumer electronics, and industrial IoT.”
Click (here) to read the full white paper.
Over the next few months, the BrainChip blog will explore how neuromorphic edge silicon can mimic the human brain to analyze only essential sensor inputs at the point of acquisition. Specifically, we’ll take a closer look at the four primary design principles of neuromorphic edge silicon, how to efficiently scale and optimize on-chip memory, as well as strategies for leveraging incremental and one-shot learning. We’ll also discuss real-world edge AI use cases powered by BrainChip’s Akida neural networking processor, including automotive edge learning at high speeds, wine tasting, object detection and classification, and keyword spotting."
Regards
FF
AKIDA BALLISTA