Tiny ML
Source: Medium
About TinyML:
TinyML is one of the fastest-growing areas of deep learning. In a nutshell, it is an emerging field of study that explores the types of models you can run on small, low-power devices like microcontrollers.
TinyML sits at the intersection of embedded ML applications, algorithms, hardware, and software. The goal is to enable low-latency inference on edge devices that typically consume only a few milliwatts of power. By comparison, a desktop CPU consumes about 100 watts (thousands of times more). Such a drastically reduced power draw lets TinyML devices run unplugged on batteries for weeks, months, and possibly even years, all while running always-on ML applications at the edge or endpoint.
Although most of us are new to TinyML, it may surprise you to learn that TinyML has been used in production ML systems for years. You may have already experienced the benefits of TinyML when you say "OK Google" to wake up an Android device; that feature is powered by an always-on, low-power keyword spotter.
Why TinyML?
According to a forecast by ABI Research, around 2.5 billion devices shipping by 2030 are likely to use TinyML techniques. The primary benefit is the creation of smart IoT devices and, more than that, making them widespread through a possible reduction in costs.
Most IoT devices perform a specific task. They receive input via a sensor, perform calculations, and send data or perform an action.
The usual IoT approach is to collect data and send it to a centralized server, where machine learning is then used to draw conclusions.
But why don’t we make these devices smart at the embedded system level? We can build solutions like smart traffic signs based on traffic density, send an alert when your refrigerator runs out of stock, or even predict rain based on weather data.
The challenge with embedded systems is that they are tiny, and most of them run on batteries. ML models demand a lot of processing power, and full-scale machine learning frameworks like TensorFlow are not suited to running models on IoT devices.
Building models in TinyML:
In TinyML, the same ML architectures and approaches are used, but on smaller devices capable of performing different functions, from answering audio commands to executing actions based on chemical sensor readings.
The most famous framework is TensorFlow Lite. With TensorFlow Lite, you can convert your TensorFlow models to run on embedded systems. TensorFlow Lite produces small binaries capable of running on low-power embedded systems.
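To make the conversion step concrete, here is a minimal sketch in Python. The tiny Keras model, its 10-value input shape, and the output file name "model.tflite" are placeholders chosen for illustration; in practice you would convert a model trained for your own task.

```python
import tensorflow as tf

# A tiny placeholder model; in practice, use a model trained for your task.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Convert the Keras model into the TensorFlow Lite flat-buffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Post-training quantization shrinks the binary further, which matters on microcontrollers.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting bytes are the small binary an embedded runtime can load.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

print(f"Model size: {len(tflite_model)} bytes")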
One example is the use of TinyML in environmental sensors. Imagine a device trained to recognize temperature and gas readings in a forest. Such a device could be essential for risk assessment and for detecting the early signs of a fire.
Connecting to the network is an energy-consuming operation. Using TensorFlow Lite, you can deploy machine learning models without needing an Internet connection. This also reduces security risks, since embedded systems are relatively easy to exploit.
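Below is a minimal sketch of what offline, on-device inference looks like with the converted model from the previous example. The file name "model.tflite" and the 10-value input are the same illustrative assumptions as above; on an actual microcontroller the equivalent step would be done with the TensorFlow Lite Micro C++ runtime, also fully offline.

```python
import numpy as np
import tensorflow as tf

# Load the converted model from local storage; no network access is needed.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A placeholder sensor reading shaped to match the model's input (here: 10 values).
reading = np.random.rand(1, 10).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], reading)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print("On-device prediction:", prediction)
```

Because every step runs locally, the device can keep making predictions even when it is disconnected, which is exactly the scenario the advantages below describe.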
Advantages of TinyML:
Data security: Because information never has to leave the device, data privacy is easier to guarantee.
Energy savings: Transmitting data requires extensive server infrastructure. When no data is transmitted, energy and resources are saved, and consequently costs.
No connection dependency: If a device depends on the Internet to work and the connection goes down, data cannot reach the server; you try to use a voice assistant, and it does not respond because it is offline.
Latency: Data transfer takes time and often introduces a delay. When this step is removed, the result is near-instantaneous.
“Where there is data smoke, there is business fire.” — Thomas Redman