EE Times-Asia

The golden age of simulation-driven design

Posted: 21 Sep 2012

Keywords: implementation process, integrated circuits, electronic design automation

For several decades, the electronic design automation (EDA) industry has been polishing and perfecting the design and implementation process for integrated circuits. Reliability verification, however, has been slow to catch up, largely because of the complex nature of failure mechanisms. Chip designers in the past were willing to take risks with reliability verification because reliability problems did not manifest as functional failures or yield fallout. Times have changed, and the EDA and simulation software community has responded swiftly with a simulation-driven reliability analysis model. This article delves into what it takes to design for reliability today.

Over a decade ago, the IC design and verification process included design margin for almost every form of physical verification. Margins were added for checks such as timing, IR drop, and decap requirements. Essentially, these margins were built into the verification sign-off process because the true operating conditions could not be modeled accurately. For example, voltage drop at the full-chip level was simulated only with static analysis; neither the tools nor the compute power were adequate to simulate the entire chip, package, and system in a transient analysis, so design margins were added to the static IR drop results to account for dynamic behavior. Timing sign-off is another example: set-up and hold margins were built in to account for voltage drop effects and aggressor-induced slow-down or speed-up of interconnect delays. Reliability verification such as electro-migration and self-heat analysis was typically done with worst-case switching, temperature, and recovery factors, because there was no clear way to capture realistic switching behavior or a realistic die temperature profile at sign-off.
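To see why worst-case electro-migration sign-off leaves so much margin on the table, consider Black's equation for the mean time to failure (MTTF) of a wire. The sketch below compares a worst-case assumption (100% switching at a hot die temperature) against more realistic conditions; all numerical values (current densities, temperatures, activation energy, exponent) are illustrative assumptions, not sign-off numbers from any real process.

```python
import math

def mttf_black(j, temp_k, a=1.0, n=2.0, ea_ev=0.9):
    """Black's equation: MTTF = A * J^-n * exp(Ea / (k*T)).

    j: current density (A/cm^2), temp_k: metal temperature (K).
    a, n, ea_ev are process-dependent fitting parameters (assumed here).
    """
    k_ev = 8.617e-5  # Boltzmann constant, eV/K
    return a * j ** (-n) * math.exp(ea_ev / (k_ev * temp_k))

# Worst-case sign-off: full switching activity at 398 K (125 degC)
worst = mttf_black(j=2.0e6, temp_k=398.0)

# Realistic conditions: lower effective current density at 358 K (85 degC)
real = mttf_black(j=0.5e6, temp_k=358.0)

print(f"Realistic vs. worst-case MTTF ratio: {real / worst:.0f}x")
```

With these assumed numbers the realistic MTTF comes out orders of magnitude longer than the worst-case figure, which is exactly the pessimism that simulation-driven switching and thermal profiles remove.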

Fast forward to today, the age of simulation-driven product design. Every IC designer has a toolbox of EDA products to help simulate and verify various reliability phenomena. Reliability verification for ICs now covers not only classic electro-migration and self-heat, but also power/ground noise, substrate noise, thermal reliability, electro-magnetic interference (EMI), and electrostatic discharge (ESD) events.

Multi-physics modeling disciplines such as electro-magnetic, thermo-mechanical, electro-mechanical, and thermo-electric analysis are mature in the simulation industry, albeit still evolving. Failure mechanisms in ICs are often caused by one physical phenomenon affecting another. For example, the effect of temperature on the electrical resistance of wires, and the effect of current flow on heat dissipation in wires (Joule heating), are both thermo-electric multi-physics phenomena. Other examples include the impact of temperature on IC mechanical failures and electro-magnetic interference between multiple ICs in a system. Simulation tools no longer analyze one phenomenon in isolation; they seamlessly straddle different domains of analysis to model the true behavior of the system, using multi-physics principles and complex model exchanges to simulate failure mechanisms.
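The thermo-electric coupling described above can be sketched as a simple fixed-point iteration: resistance depends on temperature, Joule heating depends on resistance, and temperature depends on dissipated power. The material and thermal values below are illustrative assumptions for a single copper wire, not data from any particular process.

```python
# Fixed-point iteration coupling wire resistance R(T) with Joule self-heating.
ALPHA = 0.0039   # temperature coefficient of resistance for Cu, 1/K (assumed)
R0 = 10.0        # wire resistance at reference temperature, ohms (assumed)
T0 = 300.0       # reference / ambient temperature, K
R_TH = 50.0      # lumped thermal resistance to ambient, K/W (assumed)
I = 0.05         # DC current through the wire, A

temp = T0
for _ in range(100):
    r = R0 * (1.0 + ALPHA * (temp - T0))  # electrical: R rises with T
    power = I ** 2 * r                    # thermal load: Joule heating
    temp_new = T0 + R_TH * power          # thermal: T rises with P
    if abs(temp_new - temp) < 1e-9:       # converged to self-consistency
        break
    temp = temp_new

print(f"Self-consistent state: T = {temp:.3f} K, R = {r:.4f} ohms")
```

Full multi-physics tools solve the same kind of coupled loop, but over extracted full-chip thermal and electrical networks rather than one lumped wire.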

Design for reliability
The IC design community has started to rely on the mantra of 'first silicon success' with a keen focus on 'design for reliability'. Every IC being designed is analyzed for various reliability failure mechanisms using a simulation-driven approach. Designers are no longer building in margins to account for unknown phenomena or designing with a 'correct by construction' approach. State-of-the-art EDA tools not only have the ability to simulate complex failure mechanisms with multi-physics interactions, but also the capacity to simulate the entire IC subsystem, including chip, package, and board. Issues such as power delivery noise, substrate noise, electro-magnetic interference, and thermal stability can only be accurately simulated when the entire IC subsystem is considered. Reliability failure mechanisms in ICs can be broken down into three major types.

Operational reliability failures
Operational reliability failures are very different from functional failures in the IC world. A functional failure occurs when an improper logic condition arises during normal operation of a circuit. An operational failure, on the other hand, occurs when the operating condition of an IC falls outside its normal range. Functional failures are very uncommon today, largely thanks to sophisticated logic verification, synthesis, and test tools. Operational failures, however, are more complex to capture and model due to the uncertainties of operating conditions and multi-physics interactions.

The most common operational failure is transient voltage noise on the power delivery networks (PDNs) of ICs. PDN noise is very complex to model and simulate: different noise coupling pathways exist for every aggressor-victim pair, and the entire PDN, from the system and package all the way to the die, must be modeled and simulated.
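A first-order feel for PDN transient noise comes from lumping the network into a resistance and a loop inductance: the droop seen at the die is roughly the resistive drop plus the L·di/dt term from a current step. The element values below are illustrative assumptions for a small lumped model, not extracted sign-off data; real PDN analysis uses distributed models of the die, package, and board.

```python
# First-order PDN droop estimate: V_droop ~ I_avg * R + L * di/dt.
r_pdn = 0.002    # lumped PDN resistance, ohms (die + package + board, assumed)
l_pdn = 20e-12   # lumped loop inductance, henries (assumed)
i_avg = 10.0     # average supply current, A
di = 2.0         # current step when a block switches, A
dt = 1e-9        # duration of the step, s (roughly one clock edge)

resistive_drop = i_avg * r_pdn        # steady-state IR component
inductive_drop = l_pdn * di / dt      # transient L*di/dt component
droop = resistive_drop + inductive_drop

print(f"Estimated supply droop: {droop * 1000:.1f} mV")
```

Even in this toy model the inductive term rivals the resistive one, which is why static IR analysis alone, as noted earlier, cannot capture transient PDN noise.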
