EE Times-Asia > Memory/Storage

Fitting large-capacity memory into advanced-node SoCs

Posted: 28 Jan 2013

Keywords: system-on-chip, through-silicon vias, SRAMs, eDRAMs

In system-on-chip (SoC) designs nowadays, memory content can take up over 50 per cent of chip area. In addition, the size of individual memories has grown to approximately 40 Mb of contiguous memory. This combined increase can significantly impact the overall power, performance, and area of the chip, as well as manufacturing yield. Successful integration of large-capacity memory in advanced-node SoCs requires the right approach.

Recent advances have made silicon interposer technology an alternative worth considering (figure 1). A silicon interposer consists of additional layers, or even a separate chip, designed to connect a processor with a separate block of memory. Used in conjunction with through-silicon vias (TSVs), this technology lets a processor pass data to and from external memory without paying the high price of additional pins. The interposer structure, however, consists of multiple layers, which adds steps, time, and cost to the fabrication process. Despite the performance boost, the approach remains too expensive to be considered mainstream packaging technology.

Figure 1: A silicon interposer is a structure that provides direct access to external memory, but is costly.

An earlier approach to the growth in memory content was embedded DRAM (eDRAM). When an eDRAM macro occupied one third to one quarter of the area of the equivalent embedded SRAM, eDRAM made economic sense despite the extra processing costs. This solution did not scale well with advancing process technology, however: SRAM area shrank much faster than eDRAM area, and the two reached per-bit cost parity at 40 nm, even at 20-Mbit capacities. Furthermore, the longer development timelines of eDRAM relative to the logic process made it even less attractive.
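The cost crossover described above can be sketched with a back-of-envelope per-bit cost model. All numbers below are illustrative assumptions, not figures from the article:

```python
# Toy per-bit cost model: cost scales with cell area and with the
# (process-dependent) cost per mm^2 of finished silicon.
# All numbers are illustrative assumptions, not data from the article.

def per_bit_cost(cell_area_um2, cost_per_mm2):
    """Cost of one bit = cell area (converted to mm^2) x cost density."""
    return cell_area_um2 * 1e-6 * cost_per_mm2

# Mature node: an eDRAM cell at roughly 1/3 the SRAM cell area wins,
# even with ~20% extra process cost for the DRAM-specific steps.
sram_old  = per_bit_cost(cell_area_um2=0.50, cost_per_mm2=10.0)
edram_old = per_bit_cost(cell_area_um2=0.17, cost_per_mm2=12.0)

# Advanced node: SRAM shrank faster, so the area advantage narrows
# while the extra-process penalty remains.
sram_new  = per_bit_cost(cell_area_um2=0.15, cost_per_mm2=20.0)
edram_new = per_bit_cost(cell_area_um2=0.13, cost_per_mm2=24.0)

assert edram_old < sram_old   # eDRAM cheaper per bit on the mature node
assert edram_new > sram_new   # advantage lost on the advanced node
```

The point of the model is simply that the eDRAM advantage depends on the area ratio outpacing the extra process cost; once SRAM cells shrink faster, the crossover follows.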

A good alternative solution is to use large-capacity SRAMs, which can dramatically reduce leakage and deliver higher system performance while keeping mask costs in check (figure 2). The challenges of designing large memory blocks with embedded SRAM must be understood and best-in-class strategies must be employed to ensure chip-level power, performance, and area targets are met.

Figure 2: Properly designed and integrated, large-capacity SRAMs can provide a low-power, economical, high performance alternative to eDRAM.

Designing large memory blocks
The default approach to implementing a large-capacity SRAM is to build it out of smaller blocks of memory. Coupling smaller blocks into a large-capacity memory, however, can negatively impact both area and performance if not handled correctly. Many nets must run between and through the building blocks. To keep the area under control without consuming too many layers of metal, and to limit the impact on implementation speed, the inter-block routing must be kept uncomplicated. Inefficient wiring also tends to slow the memory down, resulting in long access times and slow cycle times.
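A minimal sketch of the building-block approach, assuming a simple power-of-two banking scheme (the bank count and words-per-bank values are invented for illustration): the upper address bits select a bank and the lower bits index within it, which keeps the inter-block decode logic, and therefore the routing, simple.

```python
# Hypothetical model of a large logical SRAM composed of smaller banks.
# Bank geometry is an illustrative assumption, not taken from the article.

BANK_BITS = 3              # 8 banks
WORDS_PER_BANK = 1 << 14   # 16K words per bank

class BankedSRAM:
    def __init__(self, banks=1 << BANK_BITS, words=WORDS_PER_BANK):
        self.banks = [dict() for _ in range(banks)]  # sparse model of each bank
        self.words = words

    def _decode(self, addr):
        # Upper address bits pick the bank; lower bits index inside it.
        return addr // self.words, addr % self.words

    def write(self, addr, value):
        bank, offset = self._decode(addr)
        self.banks[bank][offset] = value

    def read(self, addr):
        bank, offset = self._decode(addr)
        return self.banks[bank].get(offset, 0)

mem = BankedSRAM()
mem.write(0x12345, 0xDEADBEEF)
assert mem.read(0x12345) == 0xDEADBEEF
```

In hardware, the same decode split means only one bank's wordlines and sense amplifiers are exercised per access, which is also what makes the power techniques discussed below possible.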

Dynamic and leakage power are particularly important concerns in large-capacity SRAM implementations. The total power dissipated by the combined memory must be less than the sum of the power dissipated by individual blocks of memory, or there may be no benefit over the basic implementation.

As we can see, there are implementation challenges related to timing, power, and area. To make the combined memory implementation worth doing, it must provide benefits in at least two of the three criteria and be on par in the third compared to the baseline implementation.

Solutions
When looking at improving the power or speed of a memory it is important to consider its access profile. If the target clock frequency is not high, then there are implementation options that can be used to reduce the memory's power consumption. For example, if the reads and writes are localized in time to a certain region, then there are a lot of benefits to using address-based leakage reduction. By source biasing the core array of a large memory to just above the bitcell's retention voltage, memory leakage can be dramatically reduced.
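As an illustration of why access locality matters for this technique, here is a toy leakage model in which only the banks touched in a given window stay fully powered, while idle banks are source-biased into a retention state. The bank geometry and leakage figures are invented for illustration, not taken from the article:

```python
# Illustrative model of address-based leakage reduction: banks that are
# not being accessed are source-biased to just above retention voltage,
# cutting their leakage. All numbers are assumptions for illustration.

ACTIVE_LEAKAGE_UW = 100.0     # assumed per-bank leakage when fully on
RETENTION_LEAKAGE_UW = 10.0   # assumed per-bank leakage when source-biased
NUM_BANKS = 8
WORDS_PER_BANK = 1 << 14

def leakage_for_trace(addresses):
    """Total leakage if only banks touched in this window stay active."""
    active = {addr // WORDS_PER_BANK for addr in addresses}
    idle = NUM_BANKS - len(active)
    return len(active) * ACTIVE_LEAKAGE_UW + idle * RETENTION_LEAKAGE_UW

# A localized access pattern keeps one bank active...
localized = leakage_for_trace(range(0, 64))
# ...while a scattered pattern touches every bank.
scattered = leakage_for_trace(
    range(0, NUM_BANKS * WORDS_PER_BANK, WORDS_PER_BANK))
assert localized < scattered
```

With these assumed numbers the localized trace leaks 170 uW against 800 uW for the scattered one, which is the intuition behind applying the bias only where the address stream is not currently pointing.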
