EE Times-Asia > EDA/IP

Resolving giga-scale challenges in memory design

Posted: 16 Sep 2015

Keywords: embedded memory IP, DRAM, SRAM, flash, EDA

Advanced memory designers rely on cutting-edge fabrication technologies for greater integration density and faster operating speed while keeping power consumption low. After all, memory chips are critical components that determine system performance and power.

Similarly, embedded memory IP is a critical piece of the SoC puzzle, consuming more than 50% of the die area of a chip. No matter the type of memory IP or memory IC, whether DRAM, SRAM or flash, growing design complexity is creating massive, giga-scale challenges, which are the last thing circuit designers need or want.

Advanced designs are larger and more complex than ever before. Transistor counts are into the mind-boggling millions or billions, and memory circuit sizes have grown in step. At smaller process geometries, designers must also factor in reduced supply voltages and growing process variation. Shrinking design margins leave designers less room for manoeuvre and raise the risk of a failed tape-out. On top of this, design and fabrication costs at advanced nodes are increasing dramatically.

To meet these challenges, design flows and the associated electronic design automation (EDA) tools need new capabilities, or even wholesale re-engineering. In particular, verification tools must be accurate enough to give correct simulation results and to capture all of the small signals, such as leakage currents. Designers also need variation-aware design tools to trade off chip yield against performance.
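To make the leakage point concrete, the following sketch shows why a simulator's current tolerance setting matters. The tolerance values, per-cell leakage and array size below are illustrative assumptions, not figures from the article: with a loose absolute current tolerance, a nanoamp-scale per-cell leakage looks like converged numerical noise, yet summed over a large SRAM array it dominates standby power.

```python
# Sketch: why simulator current tolerances matter for leakage analysis.
# All numbers are illustrative assumptions.

abstol_loose = 1e-6      # 1 uA: loose tolerance a speed-oriented tool might use
abstol_tight = 1e-12     # 1 pA: SPICE-accurate setting

leak_per_cell = 10e-9    # assumed 10 nA of leakage per bit cell
cells = 8 * 1024 * 1024  # an assumed 8 Mb SRAM array

total_leak = leak_per_cell * cells

# Each cell's leakage sits below the loose tolerance, so it can be
# silently rounded away; the tight tolerance resolves it.
print(leak_per_cell < abstol_loose)   # True
print(leak_per_cell > abstol_tight)   # True
print(round(total_leak, 3))           # ~0.084 A of array standby current
```

The point is not the exact numbers but the asymmetry: an error invisible at cell level becomes a first-order term at array level, which is why leakage verification demands SPICE-grade accuracy.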

One example of an EDA tool that cannot keep up with this trend is FastSPICE, a transistor-level fast circuit simulator found in virtually every verification and signoff flow. It is the faster counterpart to the classic transistor-level simulator, Simulation Program with Integrated Circuit Emphasis (SPICE), which is accurate but slow. FastSPICE simulators offer the capacity to handle large circuits and the speed to do so efficiently. However, they are not accurate enough and, worse, may give wrong results. As a consequence, FastSPICE can fail to verify power, leakage, noise or timing accurately.

In today's FastSPICE-based flow, designers balance accuracy against performance for large-scale memory simulation and verification, fine-tuning options and settings for each circuit type. This is tedious and, even then, does not deliver the confidence designers need as they move to advanced nodes. Unreliable FastSPICE results are accumulating, and the situation is dire.

Another issue with FastSPICE stems from a core technology limitation. FastSPICE relies mostly on partitioning and event-driven techniques to accelerate simulation, and these techniques make it difficult to exploit modern multi-core parallel hardware. As a result, designers have generally not seen scalable speed-ups on multi-core CPUs, which have become mainstream.
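The scaling ceiling described above can be sketched with Amdahl's law: if the event-scheduling and partition-management work stays serial, adding cores only accelerates the remaining fraction. The 40% serial fraction below is an assumed illustrative number, not a measured FastSPICE figure.

```python
# Amdahl's-law sketch of why a simulator with a serial event-scheduling
# core sees limited multi-core speed-up. The 40% serial fraction is an
# illustrative assumption.

def amdahl_speedup(serial_fraction, cores):
    """Best-case speed-up when only (1 - serial_fraction) parallelises."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (2, 4, 8, 16, 64):
    print(cores, round(amdahl_speedup(0.4, cores), 2))
# Even with 64 cores the speed-up saturates near 1/0.4 = 2.5x.
```

This is why simulators built around a largely parallelisable numerical core scale far better on today's many-core machines than event-driven, partition-based engines.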

Figure: New GigaSpice simulators are replacing FastSPICE transistor-level fast circuit simulators for memory verification and signoff.

Memory designers require highly accurate simulation to predict power and leakage, as power consumption is a key specification for advanced memories. Ideally, they would use SPICE simulators throughout the entire design and verification flow, from small cell and block designs to full-chip memory verification. However, SPICE simulators cannot handle large-scale memory circuit simulation, so designers have had to rely on FastSPICE for the characterisation of large embedded SRAM blocks and the simulation and verification of large memory designs.
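To illustrate the SPICE capacity limit, the toy example below does what a SPICE engine does at its core: build a nodal-analysis conductance matrix and solve it with dense Gaussian elimination. The circuit (a three-node resistor ladder) and all values are illustrative assumptions. Dense elimination costs O(n³) in the node count n, which is why full SPICE becomes intractable at giga-scale node counts even though it is exact on small blocks.

```python
# Toy nodal analysis of a 3-node resistor ladder, solved by dense
# Gaussian elimination -- a minimal sketch of the linear solve inside
# a SPICE engine. Values are illustrative, not from the article.

def solve(A, b):
    """Gaussian elimination with partial pivoting on a dense system."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        # Pivot: bring the largest remaining entry into row k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# A 1 V source through four equal 1 kOhm resistors to ground:
# internal nodes v1, v2, v3 sit at 0.75 V, 0.50 V, 0.25 V.
g = 1e-3                 # conductance of each 1 kOhm resistor
G = [[2 * g, -g, 0.0],   # KCL at v1 (neighbours: source, v2)
     [-g, 2 * g, -g],    # KCL at v2 (neighbours: v1, v3)
     [0.0, -g, 2 * g]]   # KCL at v3 (neighbours: v2, ground)
b = [g * 1.0, 0.0, 0.0]  # current injected from the 1 V source

v = solve(G, b)
print([round(x, 3) for x in v])   # -> [0.75, 0.5, 0.25]
```

Production SPICE engines use sparse solvers rather than dense ones, but the cost still grows steeply with node count, which is the capacity wall the paragraph above describes.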
