
Succeed at 65nm design

Posted: 17 Mar 2008

Keywords: DFM at 65nm, process node, lithography-aware design, manufacturing variability

True design-for-manufacturing (DFM) has become critical at the 65nm technology node and below. As the structures on the chip shrink, the same absolute physical variations result in relatively large electrical variations, and at 65nm and below, lithographic effects become the biggest contributor to manufacturing variability.

The problem is that the features (structures) on the silicon chip are now smaller than the wavelength of the light used to create them. If a feature were printed exactly as drawn on the photomask, the shape appearing on the silicon would drift farther and farther from the ideal as feature sizes shrink at each new technology node.

Traditional tools
In conventional design flows, this problem is addressed by postprocessing the GDSII file with a variety of reticle enhancement techniques (RETs), such as optical proximity correction (OPC) and phase-shift mask (PSM).

For example, the physical design tool modifies the GDSII file by augmenting existing features or adding new features, known as subresolution assist features, to obtain better printability. This means that if the tool projects that the printing process will be distorted in a certain way, it can add its own distortion in the opposite direction, attempting to make the two distortions cancel each other out.
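
As a rough illustration of this opposing-distortion idea (a toy sketch, not any vendor's OPC algorithm; the Gaussian-blur "optical model", resist threshold and grid are all invented for this example), the following Python snippet iteratively biases the drawn width of a one-dimensional feature until its simulated printed width matches the target:

import numpy as np

def print_image(mask, sigma=3.0):
    # Crude optical model: blur the drawn mask pattern with a Gaussian kernel.
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(mask, kernel, mode="same")

def printed_width(mask, threshold=0.6):
    # Printed width = number of samples where the image clears the resist threshold.
    return int((print_image(mask) >= threshold).sum())

def opc_bias(target_width, grid=200, iterations=25):
    # Grow or shrink the drawn feature until the simulated print hits the target,
    # i.e. add a distortion opposite to the one the "printing process" introduces.
    drawn = target_width
    for _ in range(iterations):
        mask = np.zeros(grid)
        start = (grid - drawn) // 2
        mask[start:start + drawn] = 1.0
        error = printed_width(mask) - target_width
        if error == 0:
            break
        drawn -= int(np.sign(error))
    return drawn

target = 40   # intended feature width in grid units
corrected = opc_bias(target)
print("target width:", target, "corrected (biased) mask width:", corrected)

Real OPC operates on full two-dimensional layouts with calibrated optical and resist models, but the feedback loop (simulate, compare against intent, bias the mask, repeat) is the same basic idea.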

Problem categories
Manufacturing and yield problems typically fall into four main categories: catastrophic, parametric, systematic (feature-driven) and statistical (random). Catastrophic problems are those, such as a missing via, that cause the chip to fail completely. By comparison, parametric problems leave the chip functioning, but out of its specified range (e.g. a 500MHz device that runs at only 300MHz, or a part specified to consume less than 5W that actually consumes 8W). The origins of both catastrophic and parametric problems can be subdivided into systematic effects and statistical occurrences.

A true DFM-aware solution has to address each of these problem categories, which means that it must be able to model all systematic and statistical effects during implementation, analysis, optimization and verification. The way to reach acceptable performance and yield goals is to make the entire design flow DFM-aware, including cell characterization, IC implementation, analysis, optimization and sign-off. Within such a flow, manufacturability issues are understood and addressed at the most appropriate and efficient step, creating tighter links between design and manufacturing so that design intent feeds forward to manufacturing, while fab data feeds back to design.

Design tools (particularly the implementation, analysis and optimization engines) have traditionally been rules-based. In other words, they were provided with a set of rules, and they analyzed and modified the design to ensure that none of the rules were violated. In today's ultra-deep submicron technologies, however, these rules no longer reflect the underlying physics of the fabrication process. Even if the design tools meticulously follow all of the rules provided by the foundry, the ensuing chips can still exhibit parametric (or even catastrophic) problems.

To address these problems, tools now need to employ model-based techniques. This means that the tools model the way in which the chips will be fabricated.
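
To make the distinction concrete, here is a hypothetical comparison (the spacing rule, blur sigma and bridging threshold are invented numbers, not foundry data): a rule-based check simply compares a spacing value against a limit, whereas a model-based check simulates the image between two neighboring lines and can flag a bridging hotspot even when the rule is technically satisfied:

import numpy as np

def rule_check(spacing_nm, min_spacing_nm=90):
    # Rule-based DRC: pass/fail purely on a numeric spacing limit.
    return spacing_nm >= min_spacing_nm

def model_check(spacing_nm, blur_sigma_nm=45, bridge_threshold=0.25):
    # Toy model-based check: simulate the aerial image of two lines separated by
    # 'spacing_nm' and flag a bridging hotspot if the intensity in the gap
    # never drops below the threshold.
    x = np.linspace(-300, 300, 601)                            # 1nm grid
    mask = ((x < -spacing_nm / 2) | (x > spacing_nm / 2)).astype(float)
    kernel = np.exp(-x**2 / (2 * blur_sigma_nm**2))
    kernel /= kernel.sum()
    image = np.convolve(mask, kernel, mode="same")
    gap_intensity = image[np.abs(x) <= spacing_nm / 2].min()
    return gap_intensity < bridge_threshold                    # True = prints cleanly

spacing = 95   # nm: legal by the rule, but marginal under the toy optical model
print("rule-based check passes :", rule_check(spacing))
print("model-based check passes:", model_check(spacing))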

DFM-aware characterization
A true DFM-aware design environment begins with DFM-aware characterization. This involves taking the various files associated with the standard-cell libraries, along with the process design kit and the DFM data and models provided by the foundry, and characterizing the libraries with respect to process variations and lithographic effects to create statistical probability density functions (PDFs) in the context of timing, power, noise and yield. As part of this process, various technology rules are automatically extracted and/or generated for use by downstream tools.
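
As a simplified sketch of what such characterization produces (the delay model, parameter sigmas and cell name are invented for illustration, not foundry data), a Monte Carlo sweep over normalized process deviations turns a single nominal delay into a PDF summarized by its mean and standard deviation:

import numpy as np

rng = np.random.default_rng(seed=1)

def cell_delay(dl_eff, dv_th, nominal_ps=50.0):
    # Toy first-order delay model: delay grows with normalized deviations in
    # effective channel length and threshold voltage.
    return nominal_ps * (1.0 + 2.0 * dl_eff + 1.5 * dv_th)

def characterize(samples=100_000):
    # Sample the process parameters and summarize the resulting delay PDF.
    dl_eff = rng.normal(0.0, 0.03, samples)   # assumed 3% sigma on channel length
    dv_th = rng.normal(0.0, 0.05, samples)    # assumed 5% sigma on threshold voltage
    delays = cell_delay(dl_eff, dv_th)
    return delays.mean(), delays.std()

mu, sigma = characterize()
print(f"NAND2 delay PDF: mean = {mu:.2f}ps, sigma = {sigma:.2f}ps")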

A true DFM-aware characterization environment also provides yield scoring for individual cells, considering chemical-mechanical polishing effects and using techniques such as critical-area analysis to account for random particulate defects. This allows the model characterization process to provide both sensitivity and robustness metrics that can be subsequently exploited by the implementation, analysis and optimization engines.
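
Critical-area analysis typically feeds a Poisson-style yield estimate. The sketch below uses made-up critical areas and defect densities simply to show how per-mechanism numbers can be folded into a single yield score, and how near-unity per-cell scores compound across a full design:

import math

def yield_score(critical_areas_cm2, defect_densities_per_cm2):
    # Poisson yield model: Y = exp(-sum(D_i * A_crit_i)), where A_crit_i is the
    # area in which a defect of mechanism i causes a short or an open.
    exponent = sum(d * a for d, a in zip(defect_densities_per_cm2, critical_areas_cm2))
    return math.exp(-exponent)

# Hypothetical per-cell numbers: metal shorts, metal opens and via failures.
areas = [2.0e-8, 1.5e-8, 0.5e-8]     # critical areas in cm^2
densities = [0.4, 0.25, 0.6]         # defect densities in defects per cm^2

cell_score = yield_score(areas, densities)
chip_score = cell_score ** 2_000_000   # compounded over an assumed 2 million instances
print(f"per-cell yield score: {cell_score:.10f}")
print(f"chip-level estimate : {chip_score:.4f}")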

Conventional synthesis engines perform their selections and optimizations based on the timing, area and power characteristics of the various cells in the library, coupled with the design constraints provided by the designer. In a DFM-aware environment, the synthesis engine takes into account each cell's noise and yield characteristics. It also considers the variability characteristics (process and lithographic) of the cells forming the library and the way in which these characteristics affect each cell's timing, power, noise and yield.

RET tools
Regarding the physical design portion of the flow, every structure in the design is affected by its surrounding environment in the form of other structures in close proximity, often in non-intuitive ways. This requires the placement tool to be lithography-aware and to heed the limitations and requirements of the downstream manufacturing RET tools.

Similarly, embedding lithographic simulation capability in the routing engine allows it to identify patterns that must be avoided and locations where the layout must be modified to avoid creating lithography hotspots that downstream RET cannot fix. The combination of lithographic-aware placement and routing helps minimize the need for postlayout RET, and increases the effectiveness of any RET that is required.

A true DFM-aware design environment must enable the analysis and optimization of timing, power, noise and yield effects. First, consider timing. Each element forming a path through the chip, such as a wire segment, via or cell (logic gate), has a delay associated with it. These delays vary as a function of process, voltage and temperature. Traditional design environments have been based on worst-case analysis engines such as static timing analysis (STA).

Statistics-based approach
STA assumes worst-case delays for the different paths; for example, it assumes that all of the delays forming a particular path are simultaneously at their minimum or maximum, which is both unrealistic and pessimistic. To address these issues, a DFM-aware design environment must employ statistics-based approaches using, for example, a statistical static timing analyzer (SSTA).
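
A minimal way to see the difference, assuming independent, normally distributed stage delays (a simplification; production SSTA engines also handle correlated and non-Gaussian variations), is to compare a worst-case corner sum with a statistical sum along one hypothetical path:

import math

# Hypothetical path: (mean delay, sigma) in picoseconds for each stage.
stages = [(50.0, 5.0), (30.0, 4.0), (80.0, 8.0), (45.0, 6.0)]

# Traditional STA: every stage is simultaneously at its 3-sigma worst corner.
sta_worst = sum(mu + 3.0 * sigma for mu, sigma in stages)

# SSTA with independent normal stages: means add, variances add, so the
# path's 3-sigma delay is considerably less pessimistic.
path_mu = sum(mu for mu, _ in stages)
path_sigma = math.sqrt(sum(sigma ** 2 for _, sigma in stages))
ssta_3sigma = path_mu + 3.0 * path_sigma

print(f"STA worst-case path delay: {sta_worst:.1f}ps")
print(f"SSTA 3-sigma path delay  : {ssta_3sigma:.1f}ps")

The gap between the two numbers is the pessimism that the statistical approach removes.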

A key point about a true DFM-aware design environment is that DFM-aware analysis is of limited use without a corresponding DFM-aware optimization capability. To perform variability-aware timing optimization, for example, the DFM-aware SSTA engine must account for sensitivity and criticality.

In traditional STA, the most critical path is the one that affects the circuit delay the most, that is, the one with the most negative slack. By comparison, in DFM-aware SSTA, the most critical path is the one with the highest probability of affecting the circuit delay the most. It is for this reason that DFM-aware SSTA optimizations must be based on functions such as a criticality metric, which is used to determine the critical paths: those with the greatest likelihood of becoming the limiting factor.
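
A criticality metric of this kind can be estimated by sampling: the fraction of process samples in which a given path turns out to be the slowest. The path delay distributions below are invented purely for illustration; note that the path with the largest mean delay is not automatically the only one that matters:

import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical (mean, sigma) path delays in ps. Path B has a smaller mean than
# path A but a larger spread, so it still limits the circuit some of the time.
paths = {"A": (260.0, 8.0), "B": (250.0, 20.0), "C": (230.0, 10.0)}

samples = 200_000
delays = np.column_stack([rng.normal(mu, sigma, samples) for mu, sigma in paths.values()])
slowest = delays.argmax(axis=1)   # index of the limiting path in each sample

for i, name in enumerate(paths):
    criticality = (slowest == i).mean()
    print(f"path {name}: criticality = {criticality:.1%}")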

Figure 1: DFM-aware SSTA must account for sensitivity and criticality as applied to the two timing PDF curves.

In addition to timing analysis and optimization, all of the other analysis and optimization engines (leakage power, noise and yield) must also employ variability-aware statistical techniques to efficiently account for variability. Using these techniques, it is possible to make the design more robust and less sensitive to variations, thereby maximizing yield throughout the lifespan of the device.

Sign-off verification
Lastly, the environment must provide DFM-aware sign-off verification. In this stage, the DFM-optimized design is passed to a suite of verification engines for checks such as design rule check (DRC) and lithography process check. Once again, all of these engines must analyze and verify the design with respect to process variations and lithographic effects in the context of timing, power, noise and yield. Because many manufacturability issues are difficult to encode as hard-and-fast rules, the physical verification environment must accommodate model-based solutions. Furthermore, a huge amount of design data needs to be processed, so the verification solution must be efficient and scalable.

A key requirement of a true DFM design flow is that it employs a unified data model, so that all of the implementation, analysis and optimization engines have immediate and concurrent access to exactly the same data. What this means in real terms is that at the same time as the router is laying down a track, the RC parasitics are being extracted; delay, power, noise and yield calculations are being performed; the signal integrity of that route is being evaluated; and the router is using this data to automatically and invisibly make any necessary modifications.
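
One way to picture this (a generic observer-style sketch under assumed names, not any vendor's actual architecture) is a single in-memory design database that notifies extraction and analysis callbacks the moment the router commits a track:

class DesignDatabase:
    # Toy unified data model: one copy of the design; analysis engines register
    # callbacks and are refreshed incrementally whenever a net's route changes.
    def __init__(self):
        self.routes = {}        # net name -> route geometry
        self.observers = []     # analysis engines sharing the same data

    def register(self, callback):
        self.observers.append(callback)

    def commit_route(self, net, geometry):
        self.routes[net] = geometry
        for refresh in self.observers:   # extraction, timing, noise, yield, ...
            refresh(net, geometry)

def extract_parasitics(net, geometry):
    print(f"[RC]   re-extracting parasitics for {net} ({len(geometry)} segments)")

def update_timing(net, geometry):
    print(f"[SSTA] incrementally updating timing through {net}")

db = DesignDatabase()
db.register(extract_parasitics)
db.register(update_timing)
db.commit_route("clk_net", [(0, 0), (10, 0), (10, 5)])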

By integrating DFM within the implementation flow, the design iterations caused by separate point-tool approaches are eliminated, and any design decisions or trade-offs are made within the context of the whole design. Thus, any core improvements (e.g. area reduction, and dynamic and static power reduction) remain immediately accessible, and designers can ensure that potential DFM consequences do not interfere with or degrade such benefits. After the design has been completed, automated DFM-aware sign-off verification prior to tape-out can be performed using the DRC/LVS/litho engines.

Recap
A true DFM-aware environment accounts for process variability and lithographic effects in the context of timing, power, noise and yield at every stage of the flow. This begins with the characterization of the cell library, continues through implementation, analysis and optimization, and ends with sign-off verification.

-Dwayne Burek
Magma Design Automation Inc.



