EE Times-Asia > Memory/Storage

Memory overwhelms current verification techniques

Posted: 18 Aug 2003

Keywords: memory, SoC, verification, SPICE, turbo-SPICE

The modern SoC could reasonably be called an island of logic, surrounded by a sea of memory. The SIA roadmap predicts that embedded memories will comprise more than 70 percent of the SoC die by 2005. Adding to the challenge, some chips may have more than 100 types of memories, dramatically increasing design and verification complexity.

Unfortunately, existing verification techniques cannot scale to meet these challenges. Circuit simulation (SPICE, turbo-SPICE and switch-level Verilog) cannot provide adequate functional verification coverage for memories. These simulators require ever more resources and time just to complete basic verification, and they leave the designer at risk of missing logic errors in memory control circuits that will cause the silicon to fail.

Symbolic simulation offers improved coverage and performance, and has been used for system-level memory verification and partial equivalence checking of transistor-level vs. behavioral memory models. It maximizes vector throughput by propagating Boolean equations rather than ones and zeros. It is faster than simulation and can cover more of the design space within the user-specified testbench.

Unfortunately, symbolic simulation, like its simulation predecessors, fails to completely verify large designs, since it cannot exhaustively test for, or identify, functional errors outside the user-specified conditions. To handle complex designs, symbolic simulation must constrain the design space to the point that little of the functionality is actually checked. As a result, these techniques often require additional tools to identify coverage holes.

The optimal approach is formal equivalence checking, which employs rigorous, mathematical algorithms to prove that an RTL model's behavior is identical to its low-level implementation under all possible inputs and state conditions. It provides exhaustive coverage orders of magnitude faster than any simulation-based methods.

When equivalence checking finds a problem, it produces a counterexample that leads to the source of the problem for faster debug. In contrast, when bugs are detected using simulation-based approaches, the source of the problem is often not apparent. It may be deeply embedded within the design and difficult to observe, causing the designer to create additional vectors to isolate the error.
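
The exhaustive comparison and counterexample generation described above can be sketched in miniature (a hedged Python illustration, not how a production checker is built): compare a behavioral specification against a gate-level implementation over every input, and return the first mismatch as a counterexample.

```python
from itertools import product

def rtl_spec(a, b, cin):
    # Behavioral full adder: returns (sum, carry-out).
    total = a + b + cin
    return total & 1, total >> 1

def gate_impl(a, b, cin):
    # Gate-level full adder -- with a deliberate bug on the carry path.
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin)      # BUG: missing the (b & cin) term
    return s, cout

def check_equivalence(spec, impl, n_inputs):
    # Brute-force stand-in for formal proof: try every input vector.
    for vec in product((0, 1), repeat=n_inputs):
        if spec(*vec) != impl(*vec):
            return vec              # counterexample, for faster debug
    return None                     # exhaustively proven equivalent

cex = check_equivalence(rtl_spec, gate_impl, 3)
print("counterexample:", cex)       # → counterexample: (0, 1, 1)
```

Real equivalence checkers avoid this exponential enumeration with SAT and BDD algorithms, but the contract is the same: either a proof over all inputs, or a concrete failing vector that leads straight to the source of the problem.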

For multiple reasons, past equivalence checkers have faced challenges when applied to memory verification. The primary reason was that memories are designed at the transistor level in SPICE while these equivalence checkers did not support SPICE as input. Consequently, they were incapable of recognizing complex SPICE memory circuit structures such as bidirectional RAM cells, dynamic logic and sense amps. However, advances in abstraction technology, or the automatic extraction of functional models from the SPICE-level, have overcome these limitations.

Equivalence checkers also demanded that RTL models representing embedded memories have structures similar to their low-level design implementations. For example, both the RTL and its implementation needed identical memory-cell row-column organization. Unfortunately, some circuit behavior cannot easily be described at the RTL, which inhibits the use of equivalence checking; working around this would require extensive transistor-logic remodeling in the structural low-level implementation. Such structural models penalize the designer with simulation times too long to meet practical schedules.

Early equivalence checkers also left a gap between implementation and system-level verification. Complete memory verification requires both. Implementation verification using equivalence checking must be performed at the transistor level (SPICE), while system-level verification using simulation is typically performed at higher levels of abstraction (RTL). This forces designers to create two different memory models - one for system-level verification and another for the implementation.

To overcome the limitations of these traditional approaches, a new methodology is now available that bridges implementation and system-level verification using a library of parameterized memory models. These RTL models accept user-defined parameters that specify the lower-level memory's structural and circuit characteristics. By setting parameter values that correspond to the implementation structure, equivalence-checking comparisons between RTL and transistor level are readily achieved.
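
The parameterization idea can be sketched as follows (a minimal Python model; the class and parameter names are hypothetical, standing in for a parameterized RTL memory model): one behavioral model whose depth, width and port counts are user-defined, so the same model can be configured to match a given implementation.

```python
# Sketch of a parameterized behavioral memory model: depth, width and
# port counts are parameters, so one model covers many implementations.
class ParamRAM:
    def __init__(self, depth, width, read_ports=1, write_ports=1):
        self.depth, self.width = depth, width
        self.mask = (1 << width) - 1      # data is masked to the word width
        self.read_ports, self.write_ports = read_ports, write_ports
        self.mem = [0] * depth

    def write(self, port, addr, data):
        assert port < self.write_ports and addr < self.depth
        self.mem[addr] = data & self.mask

    def read(self, port, addr):
        assert port < self.read_ports and addr < self.depth
        return self.mem[addr]

# The same model, differently parameterized, matches a different design:
ram = ParamRAM(depth=256, width=16, read_ports=2, write_ports=1)
ram.write(0, 0x10, 0xBEEF)
print(hex(ram.read(0, 0x10)))   # → 0xbeef
```

In the actual methodology the parameterized model is RTL, not Python, and the parameters describe circuit-level characteristics as well, but the principle is the same: configuration, not remodeling.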

Transistor level checking

For the first time, using this technique, designers can perform equivalence checking on memories at the transistor level. Additionally, simulation models are generated that provide fast system-level verification via simulation. Both equivalence checking and system-level simulation use the same RTL model.

RTL memory models are almost always developed independently of the circuit implementation, resulting in two vastly different control and state schemes. Even when they are functionally equivalent, these differences would prevent formal tools from exhaustively comparing them. The new parameterized approach generates RTL memory models that are state equivalent, ensuring correct construction.

An integrated approach using parameterized RTL memory models, automatic transistor abstraction, equivalence checking and system-level model generation, provides multiple advantages. These include ease of use, state-point mapping, master-slave compression, dynamic logic support, hierarchical verification, signal naming conventions and debug. Automatic memory model generation, coupled with transistor abstraction and integrated debug, creates a powerful verification solution that easily identifies design errors and guarantees 100 percent functional verification of designs with embedded memories.

The primary benefit of an integrated formal memory equivalence checking solution, such as Verplex's Conformal MEM, is its ability to identify difficult errors that simulation and symbolic simulation overlook. For example, corner cases buried deep within the state space are commonly missed by manually crafted testbenches. Other examples range from RTL modeling mistakes, wrong wire connections, erroneous inversions and scan chain issues, to using the wrong library or library elements.

A number of different memory primitives are currently available. These range from single read/write-port to multiple read/write-port designs, as well as content-addressable memories (CAMs). Because the primitives are easy to use and highly flexible, designers can quickly model a virtually unlimited assortment of RAMs, CAMs and register files.

Verplex primitives also have embedded Open Verification Library (OVL) assertions to trap illegal usage, such as simultaneous read/write operations or write collisions in multiport memories. These OVL assertions are provided as an extension library and are coded in non-proprietary Verilog constructs, making them usable across all Verilog simulation platforms.
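
The effect of such an assertion can be sketched in a few lines (a hedged Python stand-in for an OVL monitor, not the library's actual code): a write collision on a multiport memory is trapped and reported instead of silently corrupting the array.

```python
# Sketch of an OVL-style runtime assertion on a dual-write-port memory:
# two ports writing the same address in one cycle is trapped as illegal.
class DualPortRAM:
    def __init__(self, depth):
        self.mem = [0] * depth

    def cycle(self, writes):
        # writes: list of (addr, data) pairs, one per write port.
        addrs = [addr for addr, _ in writes]
        assert len(addrs) == len(set(addrs)), \
            "assertion fired: write collision -- two ports, same address"
        for addr, data in writes:
            self.mem[addr] = data

ram = DualPortRAM(16)
ram.cycle([(3, 7), (5, 9)])          # legal: distinct addresses
try:
    ram.cycle([(3, 1), (3, 2)])      # illegal: same-address collision
except AssertionError as err:
    print("trapped:", err)
```

An OVL assertion does the same job in simulation: the illegal condition is flagged at the cycle it occurs, rather than surfacing later as corrupted data.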

The memory primitives also model illegal input values (X and Z) accurately, warning the user of the problem and updating the memory core or output appropriately. Verplex memory primitives let the designer initialize the array, and provide an interface to the memory array to support uncommon configurations such as shadow bits or complex compare (CAM) functions.
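
The pessimistic X semantics can be illustrated with a small sketch (Python, with X modeled as `None`; this is an assumption about typical 4-state behavior, not the primitives' actual code): an unknown read address returns X, and an unknown write address may have hit any location, so the whole core becomes X.

```python
# Sketch of X-propagation in a behavioral memory model.
X = None  # stand-in for the 4-state 'unknown' value

class XAwareRAM:
    def __init__(self, depth):
        self.mem = [0] * depth

    def write(self, addr, data):
        if addr is X:
            # Unknown write address: any location may have been written,
            # so pessimistically corrupt the entire core.
            self.mem = [X] * len(self.mem)
        else:
            self.mem[addr] = data

    def read(self, addr):
        # Unknown read address: the output itself is unknown.
        return X if addr is X else self.mem[addr]

ram = XAwareRAM(4)
ram.write(1, 0xA)
print(ram.read(1))     # → 10
print(ram.read(X))     # → None (X on the output)
ram.write(X, 0xB)
print(ram.read(1))     # → None (core corrupted by the X write)
```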

In addition, Verplex memory equivalence checking supports memories with redundant memory space and circuitry for silicon defect diagnostics, features that do not exist in the behavioral RTL model.

In an example of a memory bug that Conformal MEM found, the engineers had used the memory in many designs that were successfully taped out. Both the RTL model and circuit designs were functionally equivalent at the output boundaries. When reusing this same memory in another design, the specification called for a one-clock-cycle delay on the write enable. Based on the specification, designers added D flip-flops to the inputs of both RTL and gate designs.

Circuit designers then verified the circuit using the same comprehensive testbenches they had used in prior tapeouts and found no design errors. They then ran Conformal MEM, which quickly produced a counterexample showing a non-equivalence whenever simultaneous read and write enables occurred. The circuit design passed the data through to the outputs on the same clock cycle, while the RTL model did not. The legacy testbench that had worked flawlessly before could not uncover the problem because it never exercised this particular condition.
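
The mismatch in this example is easy to reproduce in miniature (a Python sketch of the two behaviors described above, not the actual designs): the circuit writes through to the read output in the same cycle, while the RTL model returns the old contents, and sweeping the enables exposes the one diverging case.

```python
# Two one-cycle models of the same single-port memory.
def rtl_cycle(mem, we, re, addr, data):
    rdata = mem[addr] if re else None    # RTL: read returns OLD contents
    if we:
        mem[addr] = data
    return rdata

def circuit_cycle(mem, we, re, addr, data):
    if we:
        mem[addr] = data
    rdata = mem[addr] if re else None    # circuit: write-through, NEW data
    return rdata

# Sweep the read/write enables on one address: simultaneous read+write
# is the only case where the two models diverge.
for we in (0, 1):
    for re in (0, 1):
        m1, m2 = [5], [5]
        r1 = rtl_cycle(m1, we, re, 0, 9)
        r2 = circuit_cycle(m2, we, re, 0, 9)
        if r1 != r2:
            print(f"counterexample: we={we} re={re} rtl={r1} circuit={r2}")
```

A directed testbench finds this only if someone thought to assert both enables at once on the same address; an exhaustive comparison finds it unconditionally.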

Functional verification of memories is a difficult problem to attack using traditional vector-based methods such as simulation, and even with more advanced methods such as symbolic simulation. A tightly integrated approach - one that includes parameterized RTL models, transistor abstraction, equivalence checking and graphical debug - is the only complete solution that automatically addresses both system-level simulation and implementation verification for designs with embedded memories. This approach quickly pinpoints bugs that other methods miss and verifies the memories that comprise the majority of real estate on SoC designs.

- Bret Siarkowski

Strategic Technology Director

Verplex Systems Inc.



