Fullmoonfever
Older article by our mate that I found interesting.
I know Silicon Labs has been mentioned before, and I'm pondering any connection to Akida via Edge Impulse, given Edge Impulse is one of Silicon Labs’ tech partners and Silicon Labs now has an edge AI/ML accelerator for its devices. Obviously compatible with M-class as well.
Wireless processors for IoT get AI accelerator upgrade
February 9, 2022 | Sally Ward-Foxton, www.embedded.com
Silicon Labs’ latest families of wireless-enabled SoCs for IoT applications for the first time include a hardware AI/ML accelerator. The upgrade is indicative of the growing popularity of AI/ML techniques for a variety of IoT markets, including smart home, medical and industrial. Dedicated on-chip AI/ML hardware reduces power consumption, critical to many IoT designs, bringing AI/ML within reach of even more power-sensitive applications.
“You’ve always been able to run machine learning algorithms on an M-class processor, the trick is can you do it in an energy efficient way?” said Ross Sabolcik, general manager for IoT industrial and commercial products at Silicon Labs. “If you burn so much energy making the calculations, you might as well push it to the cloud, if you have the bandwidth. Our focus was not only to be able to run AI and ML, but to be able to do it in a really efficient way.”
The BG24 and MG24 families, offering Bluetooth and multi-protocol capability respectively, will be the first devices in Silicon Labs’ portfolio to feature a new, in-house-developed AI/ML accelerator. The accelerator offloads AI/ML workloads from the adjacent Arm Cortex-M33 microcontroller cores in applications such as smart home, medical and industrial IoT.
Sabolcik said the company’s hardware accelerator can speed up IoT AI/ML workloads by up to four times, with a resulting six-fold power saving compared to running on the Cortex-M33 alone. Savings of that order matter for battery-powered IoT devices. Latency also improves compared to sending data back and forth to the cloud for processing.
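A quick back-of-envelope on how those two figures fit together (my own arithmetic, assuming both numbers describe the same workload): energy is power multiplied by time, so a run that finishes in a quarter of the time on a sixth of the energy draws an average power of (1/6) ÷ (1/4) ≈ 0.67 of the Cortex-M33 running the same job, with the rest of the six-fold saving coming from finishing sooner and dropping back to sleep.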
The new devices natively support TensorFlow, and Silicon Labs has partnered with SensiML and Edge Impulse for a full toolchain to simplify application development and dataset management.
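For anyone curious what that toolchain actually produces, below is a minimal sketch of a typical TensorFlow Lite for Microcontrollers setup of the sort Edge Impulse or SensiML generate for a Cortex-M-class part. It is illustrative only, not the Silicon Labs SDK: g_model, MlSetup, MlRunOnce, the 16 KB arena and the float input/output layout are all my placeholder assumptions.

// Minimal TensorFlow Lite for Microcontrollers sketch (hypothetical helper names).
// g_model stands in for a model converted offline to a C array; the arena size,
// op resolver choice and float I/O layout are placeholder assumptions.
#include <cstdint>

#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model[];  // flatbuffer model, generated offline

namespace {
tflite::MicroErrorReporter error_reporter;
const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
constexpr int kTensorArenaSize = 16 * 1024;   // scratch memory for activations
uint8_t tensor_arena[kTensorArenaSize];
}  // namespace

// Call once at boot: map the model and allocate its tensors from the arena.
void MlSetup() {
  model = tflite::GetModel(g_model);
  static tflite::AllOpsResolver resolver;  // maps model ops to kernel implementations
  // Note: the exact constructor signature varies across TFLM releases.
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize, &error_reporter);
  interpreter = &static_interpreter;
  interpreter->AllocateTensors();
}

// Call per sample window: copy features in, run the model, read one score out.
float MlRunOnce(const float* features, int count) {
  TfLiteTensor* input = interpreter->input(0);
  for (int i = 0; i < count; ++i) {
    input->data.f[i] = features[i];
  }
  if (interpreter->Invoke() != kTfLiteOk) {
    return -1.0f;  // inference failed
  }
  return interpreter->output(0)->data.f[0];
}

On parts with the hardware accelerator, the expectation would be that optimized kernels behind the interpreter handle the heavy layers, leaving this application-level flow unchanged.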
According to Sabolcik, customers are interested in executing AI/ML workloads at lower power at the network edge in applications like wake-word detection and sound detection in security scenarios. AI would also accelerate predictive-maintenance analysis for industrial machinery. For applications not yet using AI, the availability of accelerators could mean better reliability, fewer false positives and improved accuracy, Sabolcik said. Vision use cases, such as presence detection or people counting with low-resolution cameras, are also possible on these devices.
Matter is enabled
The BG24 and MG24 also boost flash and RAM capacities, the largest in Silicon Labs’ portfolio. The memory increase stems not from AI/ML requirements, said Sabolcik, but from the desire to offer future-proofing functionality such as multi-protocol support and over-the-air app upgrades. Both series offer up to 1536 kB of flash and 256 kB of RAM.
“Across the board, this is the most capable device that we know how to build for the 2.4GHz space, in terms of sensing, computing, connectivity and security,” he said.
While the BG24 is designed for Bluetooth applications, the MG24 adds multi-protocol support, including Matter, the home automation connectivity standard previously known as Connected Home over IP, which is gaining popularity. The families also deliver Silicon Labs’ highest radio output power, +20 dBm, for range and reliability, along with the company’s best receive-side sensitivity.
Robust security features in both families also meet PSA Level 3.
BG24 and MG24 parts are already shipping to more than 40 early-access customers, with general availability expected in April 2022. Modules based on the new SoCs will be available in the second half of this year, the company said.