Four Edge AI Trends To Watch
Ravi Annavajjhala
Forbes Councils Member
Forbes Technology Council
Mar 15, 2023, 06:45am EDT
Ravi Annavajjhala - CEO, Kinara Inc.
As 2023 progresses, demand for AI-powered devices
continues growing, driving new opportunities and challenges for businesses and developers. Technology advancements will make it possible to run more AI models on edge devices, delivering real-time results without cloud reliance.
Based on these developments, here are some key predictions to expect:
Increased Adoption
Edge AI technology has proven its value, and we can expect even wider adoption in 2023 and beyond. Companies will
continue to invest in edge AI to improve their operations, enhance products (e.g., with safety improvements and additional features) and gain competitive advantages. Adoption will also be driven by innovative applications such as ChatGPT, generative AI models (e.g., avatars) and other state-of-the-art AI models used in medtech, industrial safety and security.
We are also witnessing edge AI transition from a technology problem to a deployment problem. In other words, companies understand edge AI's capabilities, but getting it running in a commercial product is a new challenge, sometimes requiring multiple AI models to run in parallel to fulfill an application's requirements.
Nevertheless, I expect continued progress in this area as companies see the benefits of edge AI and work to overcome these challenges. Growing awareness of the cost, energy consumption and latency of running AI in the cloud will likely drive more users to run AI at the edge.
Furthermore, as businesses grow their trust in the technology, edge AI will become increasingly integrated into a wide range of devices, from smartphones and laptops to industrial machines and surveillance systems. This will create new opportunities for businesses to harness AI’s power and improve their products and services.
Improved Performance And More Advanced AI Models
With advancements in hardware and software, edge AI devices will become more powerful, delivering faster and more accurate results. Although edge devices will remain compute-limited compared to cloud processing on expensive, power-hungry GPUs, I expect a trend towards
higher tera operations per second (TOPS) and better real-world performance for edge AI processors. As a result, there will be a shift towards more compute-intensive (and more accurate) models.
For AI processing, developers are most interested in using leading-edge neural networks for improved accuracy. These network models include YOLO (You Only Look Once), Transformers and MovieNet. Due to its good out-of-the-box performance, YOLO is expected to remain the dominant form of object detection in the years to come. And edge AI processors should advance alongside this technology as newer, more compute-intensive versions of YOLO become available.
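To give a rough sense of that out-of-the-box usability, here is a minimal sketch that runs a pretrained YOLO model on a single image. It assumes the open-source Ultralytics package and a hypothetical image path, neither of which is named in this article; treat it as an illustration rather than a recommended deployment path.

```python
# Minimal sketch: out-of-the-box object detection with a pretrained YOLO model.
# Assumes the Ultralytics package (pip install ultralytics) and a local image;
# neither is prescribed by the article.
from ultralytics import YOLO

# Load a small pretrained checkpoint (downloaded automatically on first use).
model = YOLO("yolov8n.pt")

# Run inference on one image and print detected classes with confidences.
results = model("street_scene.jpg")  # hypothetical image path
for box in results[0].boxes:
    class_id = int(box.cls[0])
    confidence = float(box.conf[0])
    print(f"{model.names[class_id]}: {confidence:.2f}")
```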
Transformer models are also increasing in popularity for vision applications, as they are being actively researched as new approaches to solving complex vision tasks. Additionally, the ability to perform computations in parallel and capture long-range dependencies in visual features makes transformers a powerful tool for processing high-dimensional data in computer vision. With the increasing compute capability of edge AI processors, we'll see a shift towards more transformer models as they become more accessible for edge deployment.
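To make deploying a vision transformer concrete, here is a minimal classification sketch using torchvision's pretrained ViT-B/16. The library, model choice and image path are my own illustrative assumptions, not something specified in the article.

```python
# Minimal sketch: image classification with a pretrained vision transformer.
# Assumes torchvision >= 0.13 and PIL; the model choice (ViT-B/16) is illustrative.
import torch
from PIL import Image
from torchvision.models import vit_b_16, ViT_B_16_Weights

weights = ViT_B_16_Weights.DEFAULT
model = vit_b_16(weights=weights).eval()
preprocess = weights.transforms()  # resize, crop and normalize as the model expects

image = Image.open("sample.jpg")  # hypothetical image path
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    logits = model(batch)
top = logits.softmax(dim=1).argmax(dim=1).item()
print(weights.meta["categories"][top])
```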
Activity recognition is the next frontier for edge AI as businesses seek to gain insights into human behavior. For example, in retail, depth cameras can determine when a customer's hand reaches into a shelf. This shift from image-based tasks to analyzing sequences of video frames is driving the popularity of models like MovieNet.
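As a rough sketch of what moving from single images to frame sequences looks like, the example below classifies a short clip with a pretrained 3D CNN from torchvision (r3d_18). This is a stand-in chosen only for illustration; it is not the MovieNet family mentioned above, and the clip here is random data used just to show the expected input shape.

```python
# Minimal sketch: action recognition over a sequence of video frames.
# Uses torchvision's pretrained r3d_18 as an illustrative stand-in for the
# video models discussed in the article; the clip tensor is random data.
import torch
from torchvision.models.video import r3d_18, R3D_18_Weights

weights = R3D_18_Weights.DEFAULT
model = r3d_18(weights=weights).eval()

# A real pipeline would decode frames from a camera or file; here we fake a
# clip of 16 RGB frames at 112x112 just to show the expected input shape:
# (batch, channels, frames, height, width).
clip = torch.rand(1, 3, 16, 112, 112)

with torch.no_grad():
    logits = model(clip)
label = logits.softmax(dim=1).argmax(dim=1).item()
print(weights.meta["categories"][label])  # Kinetics-400 action label
```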
March Towards Greater Interoperability Of AI Frameworks
As the edge AI market matures, expect to see increased standardization and interoperability between devices. This will make it easier for businesses to integrate edge AI into existing systems, improving efficiency and reducing costs. From a software perspective, standards such as Tensor Virtual Machine (TVM) and Multi-Level Intermediate Representation (MLIR) are two emerging trends in the edge AI space.
TVM and MLIR are open-source deep-learning compiler stacks (or frameworks for building compilers) that aim to standardize the deployment of AI models across different hardware platforms. They provide a unified API for AI models, enabling developers to write code once and run it efficiently on a wide range of devices, including cloud instances and hardware accelerators.
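To make the "write once, run on many targets" idea concrete, here is a minimal sketch of compiling an ONNX model to a CPU target with Apache TVM's Relay API. The model file, input name and shape are assumptions for illustration; other hardware targets would slot in where the CPU target is specified.

```python
# Minimal sketch: compiling an ONNX model with Apache TVM for a CPU target.
# The model path, input name and shape are illustrative assumptions.
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

onnx_model = onnx.load("model.onnx")  # hypothetical exported model
input_name, input_shape = "input", (1, 3, 224, 224)

# Import the model into Relay, TVM's high-level intermediate representation.
mod, params = relay.frontend.from_onnx(onnx_model, shape={input_name: input_shape})

# Compile for a generic CPU; a GPU or accelerator target would swap in here.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Run the compiled module with dummy input data.
dev = tvm.cpu()
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input(input_name, np.random.rand(*input_shape).astype("float32"))
module.run()
print(module.get_output(0).numpy().shape)
```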
While these standards are becoming more stable, they are still not expected to see mass adoption in 2023. Neural network operator coverage remains an issue, and targeting different accelerators is still a challenge. However, the industry will see continued work in this area as these technologies evolve.
Increased Focus On Security
As edge AI becomes more widely adopted, there will be a greater focus on securing the sensors generating the data and the AI processors consuming the data. This will include efforts to secure both the hardware and software, as well as the data transmitted and stored. State-of-the-art edge AI processors will include special hardware features to secure all data associated with the neural network’s activity.
In the context of AI models, encryption can protect the sensitive data a model is trained on; for many companies, this model information is the crown jewel. In addition, it will be important to secure the model's parameters and outputs during deployment and inference. Encrypting and decrypting the data and model helps prevent unauthorized access to that information, ensuring its confidentiality and integrity. Encryption and decryption can introduce latency and computational overhead, so the trick for edge AI processor companies will lie in choosing efficient encryption methods and carefully weighing the trade-offs between security and performance.
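As a simplified illustration of protecting model weights at rest, the sketch below encrypts a serialized model file with symmetric (AES-based Fernet) encryption and decrypts it just before loading. Key management, secure storage and the hardware-backed features mentioned above are out of scope here, and the library and file names are assumptions rather than anything this article prescribes.

```python
# Minimal sketch: encrypting serialized model weights at rest and decrypting
# them at load time. Uses the 'cryptography' package's Fernet (AES-based)
# scheme; file names and key handling are simplified for illustration.
from cryptography.fernet import Fernet

# In practice the key would come from a secure element or key-management
# service, not be generated alongside the model.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt the serialized weights before shipping them to the edge device.
with open("model_weights.bin", "rb") as f:   # hypothetical model file
    encrypted = cipher.encrypt(f.read())
with open("model_weights.enc", "wb") as f:
    f.write(encrypted)

# On the device, decrypt in memory just before handing the bytes to the runtime.
with open("model_weights.enc", "rb") as f:
    plaintext_weights = cipher.decrypt(f.read())
```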
Conclusion
In conclusion, 2023 promises to be an exciting year for the edge AI industry, with new opportunities and challenges for businesses and developers alike. As edge AI continues to mature and evolve, we’ll see increased adoption, improved performance, greater interoperability, more AI-powered devices, increased focus on security and new business models. The limitations and challenges we face today will be overcome, and I have no doubt that edge AI will bring about incredible advancements.