Couple of articles I just had a read of.
Not sure if they've been posted previously, but whilst both are from Oct, pretty good reads imo.
One is about Socionext and given what we know currently, helps broaden the picture somewhat I feel.
The other is by a Strategic Marketing Mgr at Synopsys on IP and AI SoCs and isn't actually a product pitch per se.
Also an interesting read given our IP strategy these days.
www.rtinsights.com
SoCs for Electric and Autonomous Car Makers
By Rick Fiorenzi | Monday, October 17, 2022
ADAS applications will be needed to succeed in the future; the question is not “if” but “when”.
Next-generation autonomous driving platforms require higher levels of performance to make split-second decisions. A vehicle needs to comprehend, interpret, and accurately perceive its surrounding environment and react to changes as quickly and safely as possible. Future ADAS and autonomous implementations (Figure 1) require high-performance, real-time edge computing with AI processing capabilities, along with high-bandwidth interfaces to a host of high-resolution sensors, including radar, LiDAR, and cameras.
Improving the “seeing/vision” capabilities of advanced driver assistance systems (ADAS) is extending beyond cameras and LiDAR by incorporating smart sensors to handle the complex driving scenarios the auto industry calls Level 4, or “high”, automation.
Figure 1: ADAS and autonomous driving require multiple sensors
Custom SoC Versus OTS (Off-the-Shelf) Solutions
There are many factors to consider when auto OEMs decide whether to go with a customized SoC or “off-the-shelf” products. Some questions include whether the car is intended for a broad-based market with little differentiation from others, which key IP should be brought in-house versus relying on external providers, and what are the trade-offs in terms of power, performance, size and costs.
In the end, automotive vendors must decide what is most suitable for them, based on the options available. The diagram in Figure 2 lists some key deciding factors between custom SoC and standard off-the-shelf products.
Figure 2: Off-the-Shelf vs Custom SoC solutions
Benefits of Custom SoC Solutions
The reasons why custom SoC solutions might be the optimal choice when designing your next automotive application are listed below:
- Custom SoCs are built from IP blocks specifically architected and integrated to achieve the functions the application use case requires. They are designed to reach optimal levels of performance and efficiency while reducing size and overall BOM cost.
- Standard OTS, or ‘off-the-shelf’, silicon solutions are intended to appeal to a broader market. As such, OTS silicon devices support functions that are not fully optimized, or in some cases not even utilized. This often results in a larger footprint, unnecessary power consumption, and performance inefficiency.
- In addition, custom SoC solutions give OEMs and Tier-1s the opportunity for complete ownership of key differentiating technologies in the areas of ADAS and autonomy. Proprietary chips offer companies an opportunity to develop in-depth knowledge and in-house expertise, enabling greater control over future designs and products.
Figure 3 summarizes the main benefits of a custom SoC solution.
Figure 3: Key benefits of custom SoC solution
Supply Chain – A Major Factor for Consideration
Supply chain interruptions are a primary concern for auto OEMs today. Unanticipated ‘black swan’ events, such as natural disasters, border blockades, government sanctions, economic downturns, and geopolitical or social unrest, can disrupt the flow of supply. The supply of materials is never guaranteed, but the odds of continued production are more favorable when a company doesn’t have to compete with several others for the same product.
More and more car manufacturers are realizing that general-purpose chips offer features that cater to multiple customers, limiting their products’ competitiveness and tying them to suppliers’ timelines and delivery schedules.
Why Custom SoCs?
Every now and again a new company comes along that alters the familiar, established business model. Much as Netflix disrupted the video rental industry, Tesla has shattered the traditional automotive business model with its early launch of autonomous technology, direct purchasing program, unconventional automotive designs with large interior displays, and construction of battery gigafactories. Tesla’s success is driving traditional automakers to rapidly adapt their playbooks.
Unlike other automakers, Tesla recognized early on the importance of OTA (over-the-air) software updates for adding features and improving safety and performance. The company has developed its own chips since 2016. At Autonomy Day in 2019, Tesla unveiled Hardware 3.0, a chip that Elon Musk claimed was “objectively the best chip in the world.” Earlier in 2022, it was rumored that Tesla was working with Samsung to develop a new 5nm semiconductor chip to assist with its autonomous driving software.
Tesla, along with other tech giants such as Google, Amazon, and Cruise, has decided to develop its own proprietary autonomous driving platform.
Tesla was also one of the earliest companies to implement autonomous driving technologies, launching its first-generation Autopilot in 2016. To build a self-driving car, car makers need hardware, software, and data working together to train the deep neural networks that allow the vehicle to perceive and move safely through its environment. These deep neural networks are the artificial intelligence engine: a series of algorithms specifically designed to mimic the way neurons in the human brain work, and the backbone of deep learning. The evolution of Tesla’s Autopilot and Full Self-Driving features forced carmakers to take a closer look at the use of cameras and ultrasonic sensors.
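To make the “mimic the way neurons work” idea concrete, here is a deliberately tiny sketch of an artificial neuron and a two-layer network in plain NumPy. This is a toy illustration only (nothing like Tesla’s actual networks); all names and numbers are made up:

```python
import numpy as np

def relu(x):
    """Rectified linear unit: the non-linearity most deep networks use."""
    return np.maximum(0.0, x)

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of inputs plus a bias,
    passed through a non-linearity -- a loose analogy to a biological
    neuron 'firing' once stimulation crosses a threshold."""
    return relu(np.dot(weights, inputs) + bias)

def tiny_network(x, w1, b1, w2, b2):
    """Stacking layers of such neurons is what makes a network 'deep'."""
    hidden = relu(w1 @ x + b1)   # first layer extracts simple features
    return w2 @ hidden + b2      # output layer combines them

# Illustrative made-up weights; in practice they are learned from data.
x = np.array([1.0, 2.0])
w1 = np.array([[0.5, -0.25], [0.1, 0.4]])
b1 = np.array([0.0, 0.1])
w2 = np.array([[1.0, -1.0]])
b2 = np.array([0.5])
print(tiny_network(x, w1, b1, w2, b2))
```

Training consists of nudging the weights and biases so the network’s outputs match labeled examples, which is where the enormous volumes of fleet data come in.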
Tesla acquires a tremendous amount of data from its nearly two million Autopilot-enabled vehicles, each equipped with an 8-camera array, and uses that data to train neural networks to detect objects, segment images, and measure depth in real time. The car’s onboard FSD (Full Self-Driving) supercomputer chip runs the deep neural networks and analyzes the computer-vision inputs from the cameras in real time to understand the environment, make decisions, and move the car through it.
As AI becomes more important and costly to deploy, other companies that are heavily invested in the technology—including Google, Amazon, and Microsoft—are also designing their own chips.
The bottom line is that, beyond being a crucial component of full self-driving capability, proprietary chips let autonomous vehicle OEMs differentiate themselves from the competition.
Socionext’s SoC solutions
Creating a proprietary chip requires a complex, highly structured framework with a complete support system for each phase of the development process. Most companies seeking to design their own chips do not have the full capabilities in-house. They require assistance from highly specialized companies with the engineering skills, know-how, and experience to support full system-level SoC design, development, and implementation.
A company such as Socionext (Figure 4) offers the right combination of IP, design expertise, and support to implement large-scale, fully customizable automotive SoC solutions that meet the most demanding and rigorous automotive performance requirements.
Figure 4: Socionext Custom SoC Design and Integration
Additionally, Socionext has established an in-house automotive design team to help facilitate the early development and large-scale production of high-performance SoCs for automotive applications. As a leading “Solution SoC” provider, Socionext is committed to using leading-edge technologies, such as 5nm and 7nm processes, to produce automotive-grade SoCs that ensure functional safety while accelerating software development and system verification.
Beyond Silicon: Nurturing AI SoCs with IP
SoC designers face a variety of challenges when balancing specific computing requirements with the implementation of deep learning capabilities.
While artificial intelligence (AI) is not a new technology, it wasn’t until 2015 that a steep rise in new investment made advances in processor technology and AI algorithms possible. No longer seen as merely an academic discipline, AI began to be recognized as a technology that could exceed human capabilities in specific tasks. Driving this new generation of investment is the migration of AI from mainframes to embedded applications at the edge, leading to a distinct shift in hardware requirements for memory, processing, and connectivity in AI systems-on-chip (SoCs).
In the past ten years, AI has emerged to enable safer automated transportation, home assistants catered to individual users, and more interactive entertainment. These applications have become increasingly dependent on deep-learning neural networks. Compute-intensive methodologies and purpose-built chip designs power deep learning and machine learning to meet the demand for smart everything. The on-chip silicon must be capable of delivering the advanced math functions that fuel unprecedented real-time applications such as facial recognition, object and voice identification, and more.
Defining AI
There are three fundamental building blocks that most AI applications follow: perception, decision-making, and response. Using these three building blocks, AI has the capacity to recognize its environment, use input from the environment to inform itself and make a decision, and then, of course, act on it. The technology can be broken up into two broad categories: “weak AI or narrow AI” and “strong AI or artificial general intelligence.” Weak AI is the ability to solve specific tasks, while strong AI includes the machine’s capability to resolve a problem when faced with a never-before-seen task. Weak AI makes up most of the current market, while strong AI is considered a forward-looking goal the industry hopes to employ within the next few years. While both categories will yield exciting innovations to the AI SoC industry, strong AI opens up a plethora of new applications.
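The perception/decision-making/response loop can be sketched with a deliberately trivial, rule-based example — firmly in “weak AI” territory, and with every name here invented purely for illustration:

```python
# Hypothetical thermostat "agent" illustrating the three building blocks.

def perceive(environment):
    """Perception: read a sensor value from the environment."""
    return environment["temperature_c"]

def decide(temperature_c, setpoint_c=21.0):
    """Decision-making: a fixed rule -- heat when below the setpoint.
    A learned system would replace this rule with a trained model."""
    return "heat_on" if temperature_c < setpoint_c else "heat_off"

def respond(environment, action):
    """Response: act on the environment (here, nudge the temperature)."""
    delta = 0.5 if action == "heat_on" else -0.5
    environment["temperature_c"] += delta
    return environment

env = {"temperature_c": 18.0}
for _ in range(3):          # run the perceive -> decide -> respond loop
    observation = perceive(env)
    action = decide(observation)
    env = respond(env, action)
print(env["temperature_c"])
```

Weak AI swaps the hand-written `decide` rule for a model trained on one narrow task; strong AI would need the loop to cope with tasks it has never seen.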
Machine vision applications are a driving catalyst for new investment in AI in the semiconductor market. An advantage of machine vision applications that utilize neural network technology is increased accuracy. Deep learning algorithms such as convolutional neural networks (CNNs) have become the AI bread and butter within SoCs. Deep learning is primarily employed to solve complex problems, such as providing answers in a chatbot or a recommender function in your video streaming app. However, AI has wider capabilities that are now being leveraged by everyday citizens.
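At the heart of a CNN is the convolution operation itself. The sketch below shows it in plain NumPy with a hand-crafted edge-detector kernel; a real CNN learns thousands of such kernels during training rather than using hard-coded ones:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D convolution (strictly, cross-correlation, as in most
    deep-learning frameworks): slide the kernel over the image and take
    a weighted sum at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A classic vertical-edge detector applied to a toy image whose left
# half is dark (0) and right half is bright (1).
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)
edge_kernel = np.array([[-1.0, 1.0]])
print(conv2d(image, edge_kernel))  # responds only where brightness changes
```

Each output element is an independent multiply-accumulate over a small window, which is exactly the pattern AI accelerators are built to execute in bulk.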
The evolution of process technology, microprocessors, and AI algorithms has led to the deployment of AI in embedded applications at the edge. To make AI more user-friendly for broader markets such as automotive, data centers, and the internet of things (IoT), a variety of specific tasks have been implemented, including facial detection, natural language understanding, and more. But looking ahead, edge computing — and more specifically, the on-device AI category — is driving the fastest growth and bringing the most hardware challenges in adding AI capabilities to traditional application processors.
While a large chunk of the industry runs AI accelerators in the cloud, another emerging category is mobile AI. The AI capability of mobile processors has increased from single-digit TOPS to well over 20 TOPS in the past few years. These performance-per-watt improvements show no signs of slowing down, and as data collection shifts toward edge servers and plug-in accelerator cards, optimization remains the top design requirement for edge device accelerators. Because some edge device accelerators have limited computing power and memory, algorithms are compressed to meet power and performance requirements while preserving the desired accuracy level, even as designers are pushed to increase on-chip compute and memory. And given the huge amount of data being generated, the compressed algorithms can only focus on designated areas of interest.
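One common form of the compression described above is quantization: storing weights as 8-bit integers instead of 32-bit floats. The sketch below shows simple symmetric int8 quantization in NumPy; it is a generic illustration, not any particular accelerator's scheme:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8: map the
    largest-magnitude weight to +/-127 and scale everything else
    proportionally. Cuts storage 4x (1 byte vs 4 per weight) and lets
    integer-only edge compute units run the math."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights, e.g. for accuracy checks."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print(q.dtype, np.max(np.abs(w - w_hat)))  # small reconstruction error
```

The engineering work lies in keeping that reconstruction error small enough that the model's accuracy is preserved, which is why compression is tuned per model and per target chip.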
While the appetite for AI steadily increases, there has been a noticeable uptick in non-traditional semiconductor companies investing in technology to solidify their place among the innovative ranks. Many companies are currently developing their own ASICs to support their individual AI software and business requirements. Implementing AI in SoC design does not come without many challenges.
The AI SoC Obstacle Course
The overarching obstacle for AI integration into SoCs is that design modifications to support deep learning architectures have a sweeping impact on AI SoC designs in both specialized and general-purpose chips. This is where IP comes into play; the choice and configuration of IP can determine the final capabilities of the AI SoC. For example, integrating custom processors can accelerate the extensive math that AI applications require.
SoC designers face a variety of other challenges when balancing specific computing requirements with the implementation of deep learning capabilities:
- Data connectivity: Real-time data connectivity is needed between sensors, such as CMOS image sensors for vision, and deep learning AI accelerators. Once compressed and trained, an AI model carries out its tasks through a variety of interface IP solutions.
- Security: As security breaches become more common in both personal and business virtual environments, AI offers a unique challenge in securing important data. Protecting AI systems must be a top priority for ensuring user safety and privacy as well as for business investments.
- Memory performance: Advanced AI models require high-performance memory that supports efficient architectures for different memory constraints, including bandwidth, capacity, and cache coherency.
- Specialized processing: To manage massive and changing compute requirements for machine and deep learning tasks, designers are implementing specialized processing functions. With the addition of neural network abilities, SoCs must be able to manage both heterogeneous and massively parallel computations.
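To see why neural workloads reward massively parallel, specialized processing, note that a fully connected layer over a batch of inputs reduces to a single matrix multiply in which every output element is an independent multiply-accumulate chain. A rough NumPy sketch (all sizes here are arbitrary, chosen only for illustration):

```python
import numpy as np

# A fully connected layer over a batch is one matrix multiply:
# (batch x in_features) @ (in_features x out_features).
batch, n_in, n_out = 32, 128, 64
x = np.random.rand(batch, n_in).astype(np.float32)   # activations
w = np.random.rand(n_in, n_out).astype(np.float32)   # learned weights
y = x @ w

# Every output element is an independent dot product, so the
# multiply-accumulate (MAC) operations can all run in parallel --
# which is why accelerators are built as arrays of MAC units and
# quote throughput in TOPS (tera-operations per second).
macs = batch * n_in * n_out
print(y.shape, macs)
```

Convolutions decompose the same way, which is why one array of MAC units can serve both heterogeneous layer types in a network.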
Charting AI’s Future Path for SoCs
To sort through trillions of bytes of data and power tomorrow’s innovations, designers are developing chips that can meet the advanced and ever-evolving computational demand. Top-quality IP is one key to success, as it allows for optimizations to create more effective AI SoC architectures.
This SoC design process is innately arduous as decades of expertise, advanced simulation, and prototyping solutions are necessary to optimize, test, and benchmark the overall performance. The ability to “nurture” the design through necessary customizations will be the ultimate test in determining the SoC’s viability in the market.
Machine learning and deep learning are on a strong innovation path. It’s safe to anticipate that the AI market will be driven by demand for faster processing and computations, increased intelligence at the edge, and, of course, automating more functions. Specialized IP solutions such as new processing, memory, and connectivity architectures will be the catalyst for the next generation of designs that enhance human productivity.