5 reasons MLops teams are using more Edge ML
As the number of machine learning (ML) use cases grows and evolves, an increasing number of MLops organizations are using more ML at the edge; that is, they are investing in running ML models on devices at the periphery of a network, including smart cameras, IoT computing devices, mobile devices or embedded systems.
ABI Research, a global technology intelligence firm, recently forecast that the edge ML enablement market will exceed $5 billion by 2027. While the market is still in a “nascent stage,” according to Lian Jye Su, research director at ABI Research, companies looking to ease the challenges of edge ML applications are turning to a variety of platforms, tools and solutions to support an end-to-end MLops workflow.
“We are absolutely seeing MLops organizations increase the use of EdgeML,” said Lou Flynn, senior product manager for AI and analytics at SAS. “Enterprises big and small are running to the cloud for various reasons, but the cloud doesn’t lend itself to every use case. So organizations from nearly every industry, including aerospace, manufacturing, energy and automotive, leverage Edge AI to gain competitive advantage.”
Here are five reasons MLops teams are giving edge ML a thumbs-up:
1. Edge devices have become faster and more powerful.
“We have seen multiple companies focus on end-to-end processes around edge ML,” said Frederik Hvilshøj, lead ML engineer at data-centric computer vision company Encord. He pointed to two major reasons: first, edge devices have become increasingly powerful while model compression has become more effective, which makes it possible to run stronger models at higher speed; second, edge devices typically sit much closer to the data source, which removes the need to move large volumes of data.
“The combination of the two means that high performance models can be run on edge devices at a close-to-real time speed,” he said. “Previously, GPUs living on central servers were necessary to get the high model throughput — but at the cost of having to transfer data back and forth, which made the use case less practical.”
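For illustration, here is a minimal sketch of the kind of model compression Hvilshøj describes, using PyTorch dynamic quantization. The MobileNetV2 backbone and input shape are assumptions for the example, not details from the article.

import torch
import torchvision.models as models

# Load a reference model; in practice this would be the team's own network.
model = models.mobilenet_v2(weights=None).eval()

# Quantize the weights of Linear layers to 8-bit integers. Activations are
# quantized on the fly at inference time; conv layers are left untouched here.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# The compressed model runs the same forward pass with a smaller footprint.
dummy_input = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = quantized(dummy_input)
print(logits.shape)  # torch.Size([1, 1000])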
2. Edge ML offers greater efficiency.
Today’s distributed data landscape is ripe with opportunities to analyze content where it is generated and gain efficiencies, said SAS’s Flynn.
“Many data sources originate from remote locations, such as a warehouse, a standalone sensor at a large agricultural site or even a CubeSat [a cube-shaped miniature satellite] as part of a constellation of electro-optical imaging sensors,” he explained. “Each of these scenarios depicts use cases that could gain efficiencies by running edge ML vs. waiting for data to reconcile in cloud storage.”
3. Bandwidth and cost savings are key.
“You need to run ML models on the edge because of physics (bandwidth limitations, latency) and cost,” said Kjell Carlsson, head of data science strategy at Domino Data Lab. Carlsson explained that IoT is not feasible if data from every sensor needs to be streamed to the cloud to be analyzed.
“The network in a supermarket would not support the high-definition streaming from a couple dozen cameras, let alone the hundreds of cameras and other sensors you would want in a smart store,” he said. By running ML on the edge, you also avoid the cost of data transfer, he added.
“For example, a Fortune 500 manufacturer is using edge ML to continuously monitor equipment to predict equipment failure and alert staff to potential issues,” he said. “Using Domino’s MLops platform, they are monitoring 5,000+ signals with 150+ deep learning models.”
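As a rough sketch of the bandwidth argument, the loop below runs detection on-device and forwards only compact event records upstream instead of raw video. The detect() helper, frame source and endpoint URL are hypothetical placeholders, not part of any vendor's API.

import json
import time
import urllib.request

def detect(frame):
    # Placeholder for an on-device model; returns a list of detections.
    return [{"label": "person", "confidence": 0.91}]

def run_camera_loop(get_frame, endpoint="http://gateway.local/events"):
    # Hypothetical endpoint: a local gateway that aggregates edge events.
    while True:
        frame = get_frame()  # a raw HD frame is several megabytes
        events = [d for d in detect(frame) if d["confidence"] > 0.8]
        if events:
            # Each event payload is on the order of 100 bytes, so the
            # network carries detections, not video streams.
            payload = json.dumps({"ts": time.time(), "events": events}).encode()
            req = urllib.request.Request(
                endpoint, data=payload,
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req)
        time.sleep(0.1)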
4. Edge ML helps scale the right data.
The real value of edge ML, said Hvilshøj, is that with distributed devices, you can scale your model inference without having to buy larger servers.
“With scaling inference out of the way, the next issue is collecting the right data for the next training iteration,” he said. In many cases, collecting raw data is not hard; the hard part, at large volumes, is choosing which data to label next. The compute resources on edge devices can help identify which samples are most relevant to label.
“For example, if the edge device is a phone and the user of the phone dismisses a prediction, this can be a good indicator that the model was wrong,” he said. “In turn, the particular piece of data would be good for retraining the model with proper labels.”
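A minimal sketch of that feedback loop might look like the following; the class and field names are illustrative, not from any particular MLops product.

from dataclasses import dataclass, field

@dataclass
class LabelingQueue:
    samples: list = field(default_factory=list)

    def record_feedback(self, sample_id: str, prediction: str, dismissed: bool):
        # A dismissal is a cheap signal that the model was likely wrong,
        # so that sample is worth more to labelers than randomly chosen data.
        if dismissed:
            self.samples.append({"id": sample_id, "model_said": prediction})

queue = LabelingQueue()
queue.record_feedback("img_0042", "cat", dismissed=True)
queue.record_feedback("img_0043", "dog", dismissed=False)
print(queue.samples)  # [{'id': 'img_0042', 'model_said': 'cat'}]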
5. MLops organizations want more flexibility.
According to Flynn, MLops organizations should use their models not only to make better decisions, but also to optimize those models for different hardware profiles, for example by using technology like Apache TVM (Tensor Virtual Machine) to compile models to run more efficiently on different cloud providers and across devices with varying hardware (CPUs, GPUs and/or FPGAs). One SAS customer, Georgia-Pacific, an American pulp and paper company, uses edge computing at many of its remote manufacturing facilities, where high-speed connectivity often isn’t reliable or cost-effective.
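As a sketch of that hardware-targeting workflow, the snippet below uses Apache TVM's Relay API to compile one ONNX model for two different targets; the model file, input name and shape are assumptions for the example.

import onnx
import tvm
from tvm import relay

# Load a trained model exported to ONNX (hypothetical file and input shape).
onnx_model = onnx.load("model.onnx")
mod, params = relay.frontend.from_onnx(
    onnx_model, shape={"input": (1, 3, 224, 224)}
)

# Compile the same model for two hardware profiles: a server-class x86 CPU
# and a 64-bit ARM edge board.
for name, target in [
    ("x86_server", "llvm -mcpu=skylake"),
    ("arm_edge", "llvm -mtriple=aarch64-linux-gnu"),
]:
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=tvm.target.Target(target), params=params)
    # Exporting as .tar packages the compiled objects without needing a
    # cross-linker on the build machine.
    lib.export_library(f"model_{name}.tar")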
“This flexibility gives MLops teams agility to support a wide variety of use cases, enabling them to bring processing to their data on a growing pool of devices,” Flynn said. “While the range of devices is vast, they often come with resource limitations that could constrain model deployment. This is where model compression comes into play. Model compression reduces the footprint of the model and enables it to run on more compact devices (like an edge device) while improving the model’s computational performance.”
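Extending the earlier quantization sketch, a quick way to see the footprint reduction Flynn describes is to compare on-disk size before and after compression. Exact numbers will vary by model, and only Linear layers shrink in this simple example.

import os
import torch
import torchvision.models as models

model = models.mobilenet_v2(weights=None).eval()
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Serialize both variants and compare file sizes. Only the final Linear
# classifier is compressed here, so savings are modest for conv-heavy nets.
for name, m in [("fp32", model), ("int8_dynamic", quantized)]:
    path = f"{name}.pt"
    torch.save(m.state_dict(), path)
    print(name, os.path.getsize(path) // 1024, "KB")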