AI Frontiers in 2022
Takeaways from AI Hardware Summit and Edge AI Summit 2022
Jennifer Fu · Sep 20, 2022 · 7 min read
Image by author — object model was made by the Mythic demo system
The AI Hardware Summit was held September 13–15, 2022, at the Santa Clara Marriott in California. It focuses on systems-first machine learning to reduce time-to-value in the ML lifecycle and to unlock new possibilities for AI development. It is a great event for feeling the pulse of AI and learning about its frontiers in 2022.
This year, for the first time, AI Hardware Summit was co-located with Edge AI Summit, which focuses on economical, efficient, and optimized AI at the edge. One pass for two conferences. It is a great opportunity for AI experts to get together and build better technologies.
Here is a list of what we learned at the two summits:
- Edge AI is a huge growth and performance improvement opportunity
- AI chips can detect human emotions
- TPUs can be used for edge AI
- Foundation models bring a new era of AI
- Next steps for large-scale AI infrastructure
Edge AI Is a Huge Growth and Performance Improvement Opportunity
Edge AI means deploying AI applications on devices throughout the physical world, at the network's edge, close to where the data is located.
Image by author
- Edge computing can happen on IoT devices, which are smart devices or sensors. This layer has the lowest latency and the highest bandwidth to devices.
- There are multiple layers of edge with different AI capabilities. IoT devices are the least capable; more capable edges include retail stores, factories, hospitals, airports, railways, smart cities, and gas stations. These larger-scale edges have more computing power and storage capacity to perform AI processing.
- Cloud data centers, virtual or physical, offer the best programming environment in terms of computing power, scalability, security, energy, and storage. However, because the cloud is far from where the data is located, it has the worst latency and bandwidth to reach devices.
An AI solution is not limited to one layer. Typically, it spans multiple layers, taking advantage of the quick response of smart devices, the increased AI capability of more powerful edges, and the full power of the cloud.
For example, a vehicle sensor beeps on detecting an object, and the vehicle network automatically adjusts the car's direction to avoid a collision. The records are sent to a traffic center for preliminary processing. All data is then aggregated and processed in the traffic cloud, which sends driving recommendations to drivers less urgently.
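To make this layered flow concrete, here is a minimal, purely illustrative Python sketch of the three tiers. The Detection type, function names, and confidence thresholds are hypothetical stand-ins, not from any vendor API.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def device_react(d: Detection) -> None:
    # Tier 1 (IoT device): lowest latency, act immediately on the vehicle
    if d.label == "obstacle" and d.confidence > 0.9:
        print("Adjusting direction to avoid a collision")

def edge_preprocess(events: list) -> list:
    # Tier 2 (traffic-center edge): preliminary filtering of records
    return [e for e in events if e.confidence > 0.5]

def cloud_aggregate(events: list) -> str:
    # Tier 3 (traffic cloud): aggregate everything, respond less urgently
    return f"{len(events)} record(s) analyzed; driving recommendations sent"

events = [Detection("obstacle", 0.95), Detection("shadow", 0.30)]
for e in events:
    device_react(e)
print(cloud_aggregate(edge_preprocess(events)))
```

Each tier trades latency for capability: the device reacts in milliseconds, while the cloud sees the whole picture but responds last.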
Edge AI has advanced due to the maturation of deep learning (neural network training and inference) and enhanced computing power. 5G also boosts IoT devices with faster, more stable, and more secure connectivity. Edge AI brings AI to real-life devices as well as powerful data centers. It is a huge growth and performance improvement opportunity.
AI Chips Can Detect Human Emotions
Edge computing can happen on IoT devices. There are traditional AI models, such as Regression Analysis, Logistic Regression, Neural Networks, Support Vector Machines, Multiclass Classification, and K-Means Clustering. Edge AI models are more task-specific: general detectors, high-speed detectors, classifiers, density estimation, re-identification, personal protective equipment (PPE) detectors, thermal detectors, face detection, face identification, face feature detection, scene segmentation, and skeleton detectors.
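As a minimal illustration of one of the traditional models listed above, here is a K-Means clustering sketch using scikit-learn. The toy data is invented for the example.

```python
import numpy as np
from sklearn.cluster import KMeans  # pip install scikit-learn

# Toy sensor readings forming two obvious groups
X = np.array([[1.0, 2.0], [1.2, 1.9], [0.8, 2.1],
              [8.0, 8.5], [8.2, 7.9], [7.9, 8.1]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)           # cluster index per reading, e.g., [0 0 0 1 1 1]
print(kmeans.cluster_centers_)  # one center per cluster
```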
We have seen face and face feature detection at a number of booths. Here is a photo taken at the BrainChip booth.
Image by author
BrainChip's neuromorphic processor IP, Akida™, mimics the human brain to analyze essential sensor inputs at the point of acquisition. Akida's fully customizable, event-based AI neural processor performs real-time inference and learning at the edge. Through inference, the chip concluded that the face above is in "Happiness" mode. The other available modes are "Neutral" and "Sadness."
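The chip's actual interface is proprietary, but the generic post-processing pattern such emotion classifiers implement can be sketched in a few lines. The MODES list and the logits below are hypothetical stand-ins; this is not BrainChip's API.

```python
import numpy as np

MODES = ["Neutral", "Happiness", "Sadness"]  # the three modes shown at the booth

def classify_emotion(logits: np.ndarray) -> str:
    # Numerically stable softmax over the model's raw outputs
    exp = np.exp(logits - logits.max())
    probs = exp / exp.sum()
    return MODES[int(np.argmax(probs))]

# Hypothetical raw outputs for one detected face
print(classify_emotion(np.array([0.2, 2.1, -0.5])))  # "Happiness"
```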
Object detection is important in edge AI. Here are two aspects of its current status:
- Accuracy: The original images obtained from IoT devices may be distorted by reflection, blur, soiling, snow, rain, fog, etc., so calibration is required for object recognition and classification. Model accuracy is continuously improving.
- Efficiency: Image analytics needs to run in real time, likely at a high frame rate. It includes geographic information system (GIS) calibration and object tracking. Edge computing reduces server latency and bandwidth (a minimal frame-loop sketch follows this list).
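As a rough sketch of the efficiency concern, the loop below measures frame throughput with OpenCV. The input file name is a placeholder, and a grayscale conversion stands in for real per-frame analytics.

```python
import time
import cv2  # pip install opencv-python

cap = cv2.VideoCapture("sample.mp4")  # hypothetical input video
frames, start = 0, time.perf_counter()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Stand-in for per-frame analytics (detection, tracking, GIS calibration)
    _ = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frames += 1
cap.release()
elapsed = time.perf_counter() - start
print(f"Processed {frames} frames at {frames / max(elapsed, 1e-9):.1f} FPS")
```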
The following is the crowd management video played at the Mythic booth.
Video by author
It runs AI inference in a real-world scenario. It identifies objects in the crowd in real time.
As we have discussed in another article, technology advancement must follow regulations. The General Data Protection Regulation (GDPR) is an EU law on data protection and privacy. It protects individuals with regard to the processing of personal data and the free movement of such data.
Since AI processors are powerful enough to read our emotional state and store it somewhere, how much privacy do we have regarding our data? The ripening edge markets bring opportunities along with challenges.
TPUs Can Be Used for Edge AI
A Central Processing Unit (CPU) is the electronic circuitry that executes the instructions comprising a computer program. It is considered the brain of the computer.
A Graphics Processing Unit (GPU) is a specialized processor designed to manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. GPUs are used in gaming, workstations, the cloud, AI training, self-driving automobiles, and more.
A Tensor Processing Unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google for neural network machine learning, using Google's own TensorFlow software. Google began using TPUs internally in 2015 and made them available for third-party use in 2018. In July 2018, Google announced the Edge TPU, designed to run machine learning models for edge computing.
TPUs can be used for edge AI, and more vendors are working on TPUs as AI accelerators.
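For instance, a model compiled for Google's Edge TPU can run through the TensorFlow Lite runtime with the Edge TPU delegate. This is a minimal sketch, assuming a hypothetical model_edgetpu.tflite file and an installed Edge TPU runtime:

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load an Edge TPU-compiled model via the Edge TPU delegate
interpreter = Interpreter(
    model_path="model_edgetpu.tflite",  # hypothetical compiled model
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Dummy input matching the model's expected shape and dtype
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))  # raw model output
```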
Foundation Models Bring a New Era of AI
Machine learning is a part of artificial intelligence: the study of computer algorithms that can improve automatically through experience and by using data. Deep learning is a subset of machine learning that uses neural networks with three or more layers; it attempts to simulate the human brain's behavior to learn from large amounts of data.
Lollixzc, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons
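To ground the "three or more layers" definition above, here is a minimal three-layer network in PyTorch. The layer sizes are arbitrary illustration, e.g., for flattened 28×28 images and 10 output classes.

```python
import torch
import torch.nn as nn

# A minimal "deep" network: three layers of learnable weights
model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),  # layer 1
    nn.Linear(128, 64), nn.ReLU(),   # layer 2
    nn.Linear(64, 10),               # layer 3 (output)
)

x = torch.randn(1, 784)  # one flattened 28x28 input
print(model(x).shape)    # torch.Size([1, 10])
```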
AI has gone from a purely academic endeavor to a force powering actions across myriad industries and affecting the lives of millions each day. Machine learning prevailed in the 2000s, and deep learning dominated the 2010s. In the 2020s, it is a new era for foundation models.
Image by author
For this new era, AI looks to replace the task-specific models that have dominated the AI landscape. The term foundation model is defined by the Stanford Institute for Human-Centered Artificial Intelligence:
In recent years, a new successful paradigm for building AI systems has emerged: Train one model on a huge amount of data and adapt it to many applications. We call such a model a foundation model.
It also mentioned that foundation models have demonstrated impressive behavior, but they can fail unexpectedly, harbor biases, and are poorly understood. Nonetheless, they are being deployed at scale. Here are two successful examples from OpenAI:
- Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that produces human-like text. Input a short prompt, and the system generates an entire essay (a minimal API sketch follows this list).
- DALL-E 2 is a new AI system that can create realistic images and art from natural language descriptions. The head image of this article was generated by Anupam Chugh using DALL-E 2.
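As a minimal sketch of the prompt-in, essay-out workflow, here is the 2022-era OpenAI Python Completion API for GPT-3. The API key and prompt are placeholders.

```python
import openai  # pip install openai (the 2022-era Completion API is shown)

openai.api_key = "YOUR_API_KEY"  # placeholder

# A short prompt in; human-like text out
response = openai.Completion.create(
    model="text-davinci-002",
    prompt="Write a short essay about edge AI.",
    max_tokens=200,
)
print(response["choices"][0]["text"])
```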
SambaNova showed its foundation model platform, DataScale®, with four layers of offerings: silicon, software, systems, and as-a-service.
Next Steps for Large-Scale AI Infrastructure
According to International Data Corporation (IDC), the global spending on AI systems will jump from $85.3 billion in 2021 to more than $204 billion in 2025. The compound annual growth rate (CAGR) for the 2021–2025 period will be 24.5%.
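As a quick sanity check of IDC's figures, the compound annual growth rate over the four year-to-year steps from 2021 to 2025 can be computed directly:

```python
# CAGR = (end / start) ** (1 / periods) - 1
start, end, periods = 85.3, 204.0, 4  # $85.3B (2021) to $204B (2025)
cagr = (end / start) ** (1 / periods) - 1
print(f"{cagr:.1%}")  # ~24.4%, consistent with the cited 24.5% ("more than $204B")
```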
Researchers and developers are working on the next steps of large-scale AI infrastructure:
- Hardware architecture is important. High-performance computers with AI-optimized accelerators must deliver more computing power for AI models.
- Software matters more than hardware. Software that efficiently uses computing power, such as training sparse neural networks, is more meaningful for AI development.
- In data center environments, computing power, scalability, security, energy, and storage are all important.
- Even cooling matters. With highly densified computing equipment, the data center cooling environment needs to be thoughtfully designed. There are choices of cooling with outside air, cold water, temperate water, and/or warm water.
Conclusion
The three-day summits have concluded. We learned that edge AI is a huge growth and performance improvement opportunity, and that the ripening edge markets bring opportunities along with challenges.
After machine learning and deep learning, foundation models have started a new era. GPT-3 and DALL-E 2 are large-scale projects from OpenAI. Researchers and developers are working on the next steps of large-scale AI infrastructure.
Thanks for reading.
Want to Connect?
If you are interested, check out my directory of web development articles.
Notes:
- Thanks to Kisaco Research for inviting me to both summits to meet AI experts and exchange ideas for future AI development.
- Thanks to many speakers for providing content for this article.
- Thanks to many booths that showed me great AI products in the works.
Thanks to Anupam Chugh