Siemens/Synopsys/Cadence

Probs worthwhile to understand what the three do and how we may link in. Some recent articles and blogs on their EDA offerings.

First up, a bit on Siemens EDA



Siemens EDA
Pushing Acceleration to the Edge
by Dave Bursky on 11-04-2022 at 6:00 am
Categories: AI, EDA, Events, Siemens EDA

As more AI applications turn to edge computing to reduce latencies, the need for computational performance at the edge continues to increase. However, commodity compute engines either lack the compute power or are too power-hungry to meet the needs of edge systems. Thus, when designing AI accelerators for the edge, Joe Sawicki, Executive VP of the IC EDA Division of Siemens, suggested at last month's AI Hardware Summit in Santa Clara, Calif., that there are several approaches to consider: custom hardware optimized for performance, high-level synthesis to radically reduce design cost, and hybrid verification to significantly reduce validation cost.

When these approaches are combined, designers can craft high-performance AI accelerators for edge computing applications. That high performance will be needed, since the model sizes of AI algorithms keep growing: over the past five years, explained Sawicki, models (such as the ImageNet algorithm) have increased in computational load by more than 100X, and that growth shows no sign of slowing down.

Industry estimates from International Business Strategies show that AI's value contribution to overall IC market revenue will grow from today's 18% to 66% by 2030, while total IC market revenue grows from $529 billion today to $1,144 billion. The gain in AI value demonstrates the growing momentum behind custom accelerators to improve both edge device performance and overall AI performance. Although customized accelerators can deliver exceptional performance, they have one drawback: limited flexibility, since they are typically optimized for a small number of algorithms.

In an example described by Sawicki, a configurable block of AI intellectual property is compared to a custom AI accelerator design. Area, speed, power, and energy all show significant improvements for the custom accelerator (see table): area is 50% smaller, speed improved by 94%, power dropped by 60%, and energy consumed per inference fell by 97%. There is no magic here, explained Sawicki; the architecture was targeted specifically at implementing that one algorithm.

[Table: configurable AI IP vs. custom AI accelerator (Siemens EDA) — area −50%, speed +94%, power −60%, energy per inference −97%]

Part of the optimization challenge is determining the best level of quantization. For example, 32-bit floating-point accuracy is often preferred, but accepting a small loss in result precision allows a 10-bit fixed-point alternative that saves 20X in area and power, improving compute throughput and reducing chip area. Additionally, by applying high-level synthesis in the hardware design flow, designers can go from the neural network architecture to a bit-accurate C++ algorithm, then to a C++ Catapult architecture, and on through high-level synthesis to a synthesizable RTL design that can be implemented using RTL synthesis tools and back-end flows.
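The float-to-fixed-point tradeoff described above can be sketched in a few lines. The bit widths below (10 total bits, 7 fractional) are illustrative assumptions, not actual Catapult settings:

```python
import numpy as np

def quantize_fixed_point(x, total_bits=10, frac_bits=7):
    """Round x onto a signed fixed-point grid with the given bit widths."""
    scale = 2.0 ** frac_bits
    lo = -(2 ** (total_bits - 1))       # most negative representable code
    hi = 2 ** (total_bits - 1) - 1      # most positive representable code
    codes = np.clip(np.round(x * scale), lo, hi)
    return codes / scale                # dequantize, to measure the error introduced

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=10_000).astype(np.float32)  # stand-in for trained weights

approx = quantize_fixed_point(weights)
max_err = float(np.max(np.abs(weights - approx)))
print(f"worst-case quantization error: {max_err:.5f}")
```

Absent clipping, the worst-case error is half a quantization step (2⁻⁸ ≈ 0.0039 here); whether that precision loss is acceptable depends on the network, which is why quantization level is an exploration parameter rather than a fixed choice.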

The use of C++ lets designers easily explore architectural alternatives, suggests Sawicki. In a second example, he described design exploration of a RISC-V Rocket core with three design options in a 16-nm process: one optimized for low power using an accelerator plus the Rocket core, a second focused on shrinking core area to minimize silicon cost, and a third optimized for speed. The low-power option consumed 86.54 mW, completed its run in 25.67 ms, and occupied about 3 million square microns. The area-optimized option cut total silicon area by about one-third to 2 million square microns, slowed execution to 37.54 ms, and kept power just under 90 mW. The speed-optimized version brought the area back to roughly the first option's level, cut the runtime to just 12.07 ms, but raised power consumption to 93.45 mW. These tradeoffs show how strongly design choices affect the performance and area of a potential design.
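Energy per run for the three options can be checked directly from the quoted figures as power × runtime. Note the 89 mW value for the area-optimized option is an assumption, since the article says only "just under 90 mW":

```python
# Energy per run = power (W) × runtime (s), using the figures quoted above.
options = {
    "low-power (core + accelerator)": (86.54e-3, 25.67e-3),
    "area-optimized":                 (89.00e-3, 37.54e-3),  # power assumed: "just under 90 mW"
    "speed-optimized":                (93.45e-3, 12.07e-3),
}

energy_mj = {name: p * t * 1e3 for name, (p, t) in options.items()}  # joules -> millijoules
for name, e in energy_mj.items():
    print(f"{name}: {e:.2f} mJ per run")
```

Notably, the speed-optimized variant also uses the least energy per run (about 1.13 mJ versus about 2.22 mJ for the low-power option), a reminder that minimizing power draw and minimizing energy consumed are different optimization goals.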

Incorporating AI/ML functions in an edge AI design also adds verification challenges. The verification tools must deal with the training data set, the AI network mapping, and the AI accelerator logic (the structured RTL). As Sawicki explained, functional benchmarking has to cover virtual platform performance, modeling of the hybrid platform, and simulation/emulation of the modeling platform, and throughout, the tools must also perform power and performance analysis. To do that, the verification technology has to be matched to the needs of the project: hybrid verification and run-fast/run-accurate operation (the ability to switch between model fidelities in a single run) make it possible to test real-world workloads in the verification environment.

By using open standards, Sawicki expects designers to leverage a rich ecosystem of modeling capabilities in a heterogeneous environment for multi-domain, multi-vendor modeling. Tools for scenario generation, algorithmic modeling, TLM modeling, and physics simulation can all be tied together via a system-modeling interconnect approach that lets analog and digital simulation, hardware-assisted verification using digital twins, and virtual platform models interact.

 
One bit of Synopsys.



Synopsys: IC Electronic Design Automation – Higher Performance, Lower Cost and Faster Time-to-market
November 14, 2022
Is “designing AI chips with AI” feasible?

TAIPEI, Nov. 14, 2022 /PRNewswire/ — This article is based on an interview undertaken by FusionMedium’s technology online media, TechOrange, and published with permission:

Semiconductor chips have become the core driver of innovation in tomorrow's edge computing. Smartphones were the world's first step into the Era of Intelligence. As reliance on electronic products continues to grow, designing higher-performing, more cost-effective chips in less time has become a key challenge for IC designers.

Synopsys is an S&P 500 company and has a long history of being a global leader in IC electronic design automation (EDA) and IC interface IP. The company is dedicated to providing the best “Silicon to Software” solutions.

EDA has greatly increased the speed at which technology evolves, while the number of transistors deployed in a chip has risen sharply

Thanks to the simple and rapid dissemination of knowledge, human technology has evolved enormously: the words once printed on the Gutenberg press have become digital zeros and ones and logic gates, and the tool that miniaturizes these designs into tiny silicon chips is Electronic Design Automation (EDA).

The emergence of EDA greatly increased the speed at which technology is developed. Li pointed out that semiconductor technology became a key component of all kinds of electronic devices and systems, even though IC circuits were still designed manually at the time.

Li recalled the early 1990s, when telephone chipsets held only 12 million transistors and engineers still drew circuits by hand, taking a year and a half to complete a single chip design. "The number of transistors in cell phone chips on the market today runs between 1.2 billion and 2 billion, and each transistor circuit has to be connected correctly for the phone to work, which would be virtually impossible if the old design method were still in use. EDA makes this possible."

EDA design involves three processes to meet the needs for higher performance, lower cost and faster time-to-market

EDA first describes the circuit; it then builds a circuit model to simulate operation, analyze feasibility, and optimize performance; finally, the flow is automated.

This process is similar to today's AI machine learning algorithms, which also collect large amounts of data, model it with a neural-network-like structure, and then train the model so the system can make inferences, predict behavior, and design corresponding actions.

Li continued, "AI technology can be applied to EDA in every aspect of IC design, from specification, functional and circuit design, to real-world verification, to IC production and testing, in ways that human beings cannot match. The key indicator of IC design is PPA (power, performance, area), which means less power, higher performance, and more transistors packed into the same geometry."

In addition to PPA, various considerations such as information security and stability, among others, must now be included, which makes IC design more and more complex.

The era of “designing AI chips with AI” is coming

As a global EDA leader, Synopsys launched an AI-enabled EDA platform in 2020 and joined Taiwan's Industrial Technology Research Institute (ITRI) to establish the AI Chip Design Lab. The firm then launched the public AI system-on-chip (SoC) project to assist IC designers in shortening development timelines. "If we had continued to use the old design method, it would have taken 100 engineers more than three years to complete the design of an AI accelerator for social networking sites. After leveraging the AI SoC, the Taiwan start-up team needed only 30 engineers and completed the product design within a year and a half."

The AI on Chip Industry Cooperation Strategic Alliance established by the Smart Electronics Industry Promotion Office (SIPO) of the Industrial Development Bureau, connects the upstream and downstream supply chains of global industries and assists semiconductor, AI, and IoT manufacturers in establishing international partnerships, giving Taiwan’s manufacturers an opportunity to explore innovative business opportunities and enhance international competitiveness.

Lastly, Li said that the global industry relies ever more heavily on ICs as we enter the Era of Intelligence. Statistics show that future market demand for IC design engineers will be 1,000 times higher than it is now. To meet this demand, AI is imperative: the era of designing AI chips with AI has come, and IC designers must ready themselves for this trend to remain competitive in the industrial environment of tomorrow.
 
Something on Cadence.


Four Killer Edge Computing Applications
Author: Cadence PCB Solutions

The debate over the future of edge computing still runs strong in some corners of the electronics industry. Like most new technologies, it may not live up to all the hype, but this important computing paradigm will likely create immense value in a few key areas and applications. As an electronics engineer or systems architect, it's your job to figure out what those applications are and how they can be practically implemented in commercial systems.

To highlight the important time-critical applications enabled by edge computing, we prepared this article covering four applications where edge computing creates significant value for end users and systems architects. The goal is to cut through the hype and shed some light on the more practical aspects of this important technology.

Four Applications in an Edge Computing Ecosystem

The edge computing model is based on a simple concept: bring the compute required by some applications closer to the end user, eliminating the need to send data to the cloud and thus reducing network traffic. Edge computing can be a critical enabler of applications that require high compute and low latency simultaneously; the two goals tend to conflict, since data-intensive service delivery usually carries higher latency.

The four application areas outlined below are chosen because they are time-critical, yet they tend to require more compute than could typically be fit onto the end device. These are also just a few areas where edge computing can offer a low-latency solution; systems designers could certainly envision many more application areas where bringing processing closer to end users creates major value and improves service delivery.

Edge AI Processing

AI is probably the most compute-intensive application being implemented in consumer and commercial devices. Typically, AI processing is performed in the cloud as part of a larger application whenever compute resources are not available on end devices or user equipment. With an edge computing node specialized for AI processing (either on-chip or in a co-processor architecture), computation time and load can be significantly reduced.

As part of model development for deployment in an edge computing system or end-user devices, certain acceleration steps can be implemented to further reduce computational requirements in neural networks. These are outlined below and will be discussed in more depth in a later article.

Quantization

Use fixed-point/integer number representation for weights and activations rather than full floating-point representation

Pruning

Remove model weights below some threshold in a model’s neural network architecture

Exploit sparsity

Remove zero-valued results from tensorial calculations to reduce the computational load in each layer in a neural network

Pre-processing

Apply some fast logical processing to input data so that it is more easily processed in a neural network
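As a rough illustration of the pruning and sparsity steps above (the layer size and magnitude threshold here are arbitrary assumptions; production flows typically use structured sparsity and retraining):

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(0.0, 1.0, size=(64, 64)).astype(np.float32)  # one dense layer

# Pruning: zero out weights whose magnitude falls below a threshold.
threshold = 0.5
pruned = np.where(np.abs(weights) < threshold, 0.0, weights).astype(np.float32)
sparsity = float(np.mean(pruned == 0.0))
print(f"sparsity after pruning: {sparsity:.0%}")

# Exploiting sparsity: keep only the surviving weights so zero entries
# contribute no multiply-accumulate work during inference.
rows, cols = np.nonzero(pruned)
values = pruned[rows, cols]

x = rng.normal(0.0, 1.0, size=64).astype(np.float32)
dense_out = pruned @ x                            # reference dense result
sparse_out = np.zeros(64, dtype=np.float32)
np.add.at(sparse_out, rows, values * x[cols])     # accumulate nonzero products only
print("results agree:", bool(np.allclose(dense_out, sparse_out, atol=1e-3)))
```

With this threshold, roughly a third of the weights are removed, and the sparse evaluation produces the same output while performing proportionally fewer multiply-accumulates.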

With a sufficiently high-compute processor or chipset architecture, and model optimization practices like those listed above, it’s possible to segment low and high compute tasks between the end device and an edge server without increasing traffic in the network backhaul. The pre-processing tasks can also reduce the amount of data sent over wireless links to further improve latency in service delivery.
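A toy example of that partitioning, assuming a camera device that downscales and quantizes frames before shipping them to an edge server (the frame size and reduction factors are made up for illustration):

```python
import numpy as np

# Raw 1080p RGB frame as float32, the kind an AI pipeline might ingest.
frame = np.random.default_rng(2).random((1080, 1920, 3)).astype(np.float32)

# On-device pre-processing: downscale 4x in each spatial dimension,
# then quantize to 8-bit, before sending over the wireless link.
small = frame[::4, ::4, :]
quantized = (small * 255).astype(np.uint8)

raw_bytes = frame.nbytes
sent_bytes = quantized.nbytes
print(f"raw: {raw_bytes / 1e6:.1f} MB, sent: {sent_bytes / 1e6:.3f} MB, "
      f"reduction: {raw_bytes / sent_bytes:.0f}x")
```

Here the 16x spatial reduction and 4x narrower data type combine for a 64x cut in bytes sent, which directly reduces both backhaul traffic and link latency.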

Smart Infrastructure

Infrastructure is slowly becoming smarter, and as more data becomes available, computing workloads will continue to increase. An edge computing approach lets companies create a more secure network for sharing and processing infrastructure data across many tasks, reducing the need for human monitoring and maintenance. Another important area supported by edge computing is the integration of data from ADAS and traffic-monitoring systems to support autonomous vehicles. This high-compute area will continue to see growth, driven primarily by vehicles and infrastructure monitoring.

Smart Manufacturing

As much of the world begins to geographically diversify its supply chains, automation in smart manufacturing will see new investment and development. Edge computing can support further automation with on-demand processing serving multiple production assets. For greater security in a production environment, these systems can be deployed on-premises, eliminating the need for a public network and giving companies greater control over manufacturing operations.

Security

This is another area where systems are becoming more complex, with more devices interconnected and sharing more data. Devices deployed for security place greater emphasis on signal acquisition from sensors and the subsequent processing of multiple data types; the latter is where edge computing can play an important role. The data captured by advanced security systems falls into the following areas:

  • Computer vision (both still images and streaming video)
  • Low-frequency and high-frequency radio sensors
  • Acoustic and optical sensors
  • On-device processing and fusion of data and autonomous decision-making with an embedded AI model

In some environments where internet access is compromised, unreliable, or denied, an edge computing approach can offer direct access to high-compute resources without a link to the cloud. An edge server allows data capture and warehousing in a much more secure environment compared to a publicly accessible telecom network or cloud service. The defense industry in the US and Europe is currently taking this approach to embedded computing very seriously, and many new embedded products are reaching the market.

When you’re ready to design the electronics and peripherals for your edge computing systems, use Allegro PCB Designer, the industry’s best PCB design and analysis software from Cadence. Allegro users can access a complete set of schematic capture features, mixed-signal simulation in PSpice, powerful CAD features, and much more.
 