
End of DDR marks surge of 3D, TSV-based memory

Posted: 04 Dec 2014

Keywords: Yole Developpement, TSV, HMC, DDR, DRAM

According to the latest report from Yole Developpement, many expect both the compute variety (DDR3/DDR4) and the mobile variety (LPDDR3/LPDDR4) of DDR to reach the end of their road soon, as the interface allegedly cannot run at data rates higher than 3.2Gb/s in a traditional computer main-memory environment. Consequently, several DRAM memory architectures based on 3D layer stacking and TSV have evolved to accommodate increasing memory requirements.

The challenges for DRAM are to reduce power consumption, satisfy bandwidth requirements and increase density (miniaturisation), all while maintaining low cost. Applications are evolving with different demands on these basic requirements. For example, graphics in a smartphone may require a bandwidth of 15GB/s, while a networking router may require 300GB/s.
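A rough back-of-the-envelope calculation shows why such figures push beyond a conventional DDR interface. The 64-bit channel width below is an illustrative assumption, not a figure from the report:

# Peak bandwidth = per-pin data rate x bus width / 8 (bits to bytes).
# A conventional 64-bit DDR channel at the 3.2Gb/s ceiling cited above:
def peak_bandwidth_gbps(rate_gbit_per_pin, bus_width_bits):
    """Peak bandwidth in GB/s for a given per-pin rate and bus width."""
    return rate_gbit_per_pin * bus_width_bits / 8

ddr_channel = peak_bandwidth_gbps(3.2, 64)   # ~25.6 GB/s per channel
print(f"One 64-bit channel at 3.2Gb/s: {ddr_channel:.1f} GB/s")
print(f"Channels needed for a 300GB/s router: {300 / ddr_channel:.0f}")

On these assumptions, a single conventional channel tops out near 25.6GB/s, so a 300GB/s router would need on the order of a dozen such channels, which is where stacked, wide-interface memory becomes attractive.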

Memory is also known to be the biggest consumer of power in server farms; thus portable devices as well as networking and server applications all require low-power memory solutions.

Technology options for future DRAM

With the recent Samsung announcement of mass production of 64GB DDR4 DIMMs that use TSV technology for enterprise servers and cloud-based applications, all three of the major DRAM memory manufacturers, Samsung, Hynix and Micron, have announced the commercialisation of TSV-based memory architectures.

Hynix has announced that it will release multiple memory solutions over the next two years.

Emerging DRAM technologies such as Wide I/O, HMC and HBM are being optimised for different applications and present different approaches to address bandwidth, power and area challenges. The common element of HMC, HBM and Wide I/O is 3D technology, i.e. stacked die connected with TSVs.

Wide I/O increases the bandwidth between memory and its driver IC logic by widening the I/O data bus between the two circuits. Wide I/O typically uses TSVs, interposers and 3D stacking technologies.

The 2014 Wide I/O 2 standard, JESD229-2 from JEDEC, is designed for high-end mobile applications that require high bandwidth at the lowest possible power. Wide I/O 2 provides up to 68GB/s bandwidth at lower power consumption (better bandwidth/W) with a 1.1V supply voltage. From a packaging standpoint, Wide I/O 2 is optimised to stack on top of a SoC to minimise power consumption and footprint. This standard trades a significantly larger I/O pin count for a lower operating frequency. Stacking reduces interconnect length and capacitance. The overall effect is to reduce I/O power while enabling higher bandwidth.
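The trade-off can be illustrated with simple arithmetic. The 512-bit interface width and roughly 1Gb/s per-pin rate below are assumptions chosen to reproduce the 68GB/s figure, not values quoted from the standard:

# Many slow pins versus few fast ones: bandwidth = bus width x per-pin rate / 8.
def bandwidth_gbps(bus_width_bits, rate_gbit_per_pin):
    return bus_width_bits * rate_gbit_per_pin / 8

wide_io2 = bandwidth_gbps(512, 1.066)   # ~68 GB/s from a wide, slow interface
ddr_like = bandwidth_gbps(64, 3.2)      # ~25.6 GB/s from a narrow, fast channel
print(f"Wide I/O 2-style interface: {wide_io2:.0f} GB/s")
print(f"Narrow DDR-style channel:   {ddr_like:.1f} GB/s")

In other words, running many pins at roughly a third of the clock rate still multiplies the total bandwidth, while the lower frequency and shorter stacked interconnects keep the I/O power down.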

In the 2.5D-stacked configuration, cooling solutions can be placed on top of the two dies. With the 3D-stacked form of Wide I/O 2, heat dissipation can be an issue since there is no standard way to cool stacked die. The Hybrid Memory Cube (HMC) is a specialised form of the Wide I/O architecture.

The HMC, developed by Micron and IBM, is expected to be in mass production in 2014. This architecture consists of 3D-stacked DRAM layers on top of a controller logic layer. For example, four DRAM die are divided into 16 "cores" and then stacked. The logic base at the bottom has 16 different logic segments, each controlling the four DRAM cores that sit directly on top of it. This type of memory architecture supports a very large number of I/O pins between the logic and DRAM cores, which deliver bandwidths as high as 400GB/s. According to the Hybrid Memory Cube Consortium, a single HMC can deliver more than 15x the performance of a DDR3 module and consume 70 per cent less energy per bit than DDR3.
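Expressed against an assumed DDR3 baseline (the 12.8GB/s module bandwidth and 20pJ/bit energy figures below are illustrative assumptions, not consortium numbers), those claims translate as follows:

# Putting the consortium's HMC claims into numbers against an assumed DDR3 baseline.
ddr3_module_bw_gbps = 12.8      # e.g. one DDR3-1600 64-bit module (assumed)
ddr3_energy_pj_per_bit = 20.0   # assumed ballpark energy per bit

hmc_bw_gbps = 15 * ddr3_module_bw_gbps                  # ">15x the performance of a DDR3 module"
hmc_energy_pj_per_bit = 0.3 * ddr3_energy_pj_per_bit    # "70 per cent less energy per bit"

print(f"HMC bandwidth at 15x the DDR3 baseline: {hmc_bw_gbps:.0f} GB/s")
print(f"HMC energy per bit at 30% of DDR3:      {hmc_energy_pj_per_bit:.1f} pJ/bit")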

In addition to Micron and IBM, the HMC architecture developer members include Samsung, Hynix, ARM, Open Silicon, Altera and Xilinx.

3D, TSV-based memory products

The 2013 JEDEC HBM memory standard, JESD235, was developed for high-end graphics and gaming applications. HBM, consisting of stacked DRAM die built with Wide I/O and TSV, supports bandwidths of 128GB/s to 256GB/s. TSMC has recently compared these different memory architectures in terms of bandwidth, power and price.
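That bandwidth range follows from a very wide stack interface. The 1,024-bit width and 1-2Gb/s per-pin rates below are assumptions chosen to reproduce the article's figures rather than values quoted from JESD235:

# How the 128-256 GB/s HBM range falls out of a very wide stack interface.
def stack_bandwidth_gbps(interface_bits, rate_gbit_per_pin):
    return interface_bits * rate_gbit_per_pin / 8

print(f"1024-bit HBM stack at 1 Gb/s/pin: {stack_bandwidth_gbps(1024, 1.0):.0f} GB/s")
print(f"1024-bit HBM stack at 2 Gb/s/pin: {stack_bandwidth_gbps(1024, 2.0):.0f} GB/s")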

Different applications will have different requirements in terms of bandwidth, power consumption and footprint. Because thermal characteristics are critical in high-end smartphones, the industry consensus is that Wide I/O 2 is probably the best choice there: it meets heat dissipation, power, bandwidth and density requirements. However, it is more costly than LPDDR4.

Given its lower silicon cost, LPDDR4 is probably better suited for tablets, low-end smartphones and other cost-sensitive mobile markets. For high-end computer graphics processing, which is less constrained by cost than mobile devices, HBM memory may be the best choice. In addition, high performance computing (HPC) or a networking router requiring 300GB/s of bandwidth is probably best matched to the HMC.

As we move into 2015, several industry segments have announced applications using the new memory stacks. Intel recently announced that its Xeon Phi processor "Knights Landing", due to debut in 2015, will use 16GB of Micron HMC stacked DRAM on-package, providing up to 500GB/s of memory bandwidth for high performance computing applications. AMD and Nvidia have also announced the use of HBM in their next-generation graphics modules, such as the Nvidia Pascal due out in 2016.




