EE Times-Asia

Statistical static timing analysis ensures IC performance

Posted: 26 Dec 2003

Keywords: static timing analysis, IC design, ICCAD conference

Static timing analysis is one of the pilings upon which the whole edifice of modern IC design has been erected. But this vital technique itself rests upon assumptions that may no longer hold water. As papers at the recent ICCAD conference in San Jose, California, indicated, those assumptions are rapidly being swept away by an irresistible current of change. At stake is the very ability of design teams to complete ambitious chip designs.

As interconnect became dominant, the critical variables in timing were not just transistors' critical dimensions, but also the dimensions and actual shapes of the wire segments and the nature of their immediate neighborhoods.

The first line of defense against variations has always been worst-case estimation. Now worst-case numbers are often so high that they would result in unacceptable performance: a slower clock speed than would have been possible with a previous process.

"We are having to introduce guardbanding at every stage in the process," warned Andrew Kahng, professor at the University of California, San Diego. "And along the way we are losing the designer's original intent." Clearly the industry cannot live with an approach that preserves STA by giving up on performance.

This has led to the advent of another type of tool: the statistical static timing analyzer. Basically, an SSTA, as such tools are becoming known, would compute a probability function for the arrival time of each signal at each node. Then designers could decide how they wanted to trade off delay and yield. If there is a 95 percent probability that the signal with the least slack will get there in 30ns on any given die, but only a 15 percent probability that it will arrive in 25ns, then designers can make performance calls as business decisions: Can we accept 33MHz performance? Can we accept 15 percent yields? Do we have to redesign?
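The kind of trade-off described above can be sketched with a toy Monte Carlo model. Everything below is invented for illustration: the path, its nominal stage delays, and the 8 percent per-stage variation are assumptions, and real SSTA tools propagate distributions analytically rather than by sampling.

```python
import random

random.seed(0)

# Hypothetical critical path: nominal per-stage delays in ns (sums to 26ns).
NOMINAL_STAGE_DELAYS = [6.0, 8.0, 5.0, 7.0]
SIGMA = 0.08  # assumed 8% relative standard deviation per stage

def sample_path_delay():
    """Draw one die's path delay, varying each stage independently."""
    return sum(d * random.gauss(1.0, SIGMA) for d in NOMINAL_STAGE_DELAYS)

def yield_at(target_ns, trials=100_000):
    """Fraction of simulated dice whose critical path meets target_ns."""
    return sum(sample_path_delay() <= target_ns for _ in range(trials)) / trials

print(f"P(delay <= 30 ns) = {yield_at(30.0):.3f}")  # near-certain
print(f"P(delay <= 25 ns) = {yield_at(25.0):.3f}")  # a minority of dice
```

With these made-up numbers, almost every die meets 30ns but only a small fraction meets 25ns, which is exactly the shape of decision the article describes: accept the slower clock, accept the lower yield, or redesign.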

"There has been an attempt to adapt STA to this new environment," Kahng said, "by developing statistical functions and then plugging in mean-plus-three-sigma values for the delays. This is at least a step forward from using nominal or worst-case values."

But probability functions are difficult to compute. It is fairly simple to find easily computed approximations if you assume the process variations on a die are statistically independent. But that assumption is wrong, and academia has not yet found a way to simplify the computations without it. So SSTA remains a promising but not yet deliverable approach.
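A quick simulation shows why the independence assumption matters. In the sketch below (all stage delays and sigmas are hypothetical), fully correlated on-die variation, where one die-wide parameter shifts every stage together, produces a much wider path-delay distribution than the independent model predicts; an analyzer built on independence would understate the spread.

```python
import random
import statistics

random.seed(1)

NOM = [6.0, 8.0, 5.0, 7.0]  # hypothetical nominal stage delays, ns
SIGMA = 0.08                # assumed relative sigma per stage
TRIALS = 50_000

def path_delay(correlated):
    if correlated:
        # One die-wide random variable shifts every stage together.
        x = random.gauss(1.0, SIGMA)
        return sum(d * x for d in NOM)
    # Independent per-stage variables: the convenient but wrong assumption.
    return sum(d * random.gauss(1.0, SIGMA) for d in NOM)

indep = [path_delay(False) for _ in range(TRIALS)]
corr = [path_delay(True) for _ in range(TRIALS)]
print("sigma, assuming independence:", statistics.stdev(indep))
print("sigma, fully correlated     :", statistics.stdev(corr))
```

For independent stages the sigmas add in quadrature; for correlated stages they add linearly, so the correlated spread here is roughly twice the independent one. Real on-die correlation lies somewhere between these two extremes, which is what makes the exact computation hard.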

Meanwhile, chip designers are taking matters into their own hands. A primary force driving the structured-ASIC market is that this device category removes most of the timing analysis from the design cycle. The timing for the base array is done once, when the array is developed. From then on, the only timing paths that must be analyzed are those the customer creates - usually confined to a few upper metal layers with relaxed feature size and spacing. This effectively moves the timing-closure loop away from the layers in which it is most likely to run into problems with process variations.

This simplification is brought to its extreme in those structured-ASIC architectures that allow customers to specify only one or two via masks: the approach used by eASIC and ViASIC. In this case, the customer-specific part of the design includes no features at all except for vias, and hence virtually every segment of metal can be subjected to full analysis, including even the slowest SSTA.

Such approaches reduce the amount of design that must be analyzed but don't erase the problem. The chip can still run no faster than the frequency set by its slowest path once variations are taken into account. But an announcement this month from PMC-Sierra Inc. and Fulcrum Microsystems suggests one way out.

The companies announced that PMC would use Fulcrum's asynchronous crossbar switch as hard intellectual property in place of bus structures in PMC's system-on-chip designs. The implication was definitely left open that PMC had other uses for the design methodology as well, namely possible joint development of an asynchronous MIPS CPU. Both companies have MIPS licenses.

Self-timed logic simply doesn't give a damn about timing variations. Fulcrum's design methodology uses a two-rail logic system that has three states: 1, 0 and not ready. Each stage signals the next when results are available, and there is no need for clocks and registers. Also, there is no need to predict exact path delays; on any given die and set of conditions, the signal gets there when it gets there, and the next stage waits for it.

There has been intense debate about the overhead, the novel tool chain and the complexity of self-timed logic. But all those debates may be silenced by the simple fact that in the near future, self-timed design may be the only way to get both a reasonable percentage of latent performance and reasonable yield out of a design.

- Ron Wilson

EE Times
