Addressing memory scaling concerns

Posted: 08 Jul 2013

Keywords: DRAM, NAND flash memory, memory scaling, PCM, STT-MRAM

The memory system is a crucial performance and energy bottleneck in almost all computing systems. Recent system design, application, and technology trends that require more capacity, bandwidth, efficiency, and predictability out of the memory system make it an even more important system bottleneck. At the same time, DRAM technology is experiencing difficult technology scaling challenges that make the maintenance and enhancement of its capacity, energy-efficiency, and reliability significantly more costly with conventional techniques.

In this paper, after describing the demands and challenges faced by the memory system, we examine some promising research and design directions for overcoming them. Specifically, we survey three key solution directions: 1) enabling new DRAM architectures, functions, interfaces, and better integration of DRAM and the rest of the system; 2) designing a memory system that employs emerging memory technologies and takes advantage of multiple different technologies; and 3) providing predictable performance and QoS to applications sharing the memory system. We also briefly describe our ongoing related work in combating the scaling challenges of NAND flash memory.

Introduction
Main memory is a critical component of all computing systems, whether they be server, embedded, desktop, mobile, or sensor systems. Memory capacity, energy, cost, performance, and management algorithms must scale as we scale the size of the computing system in order to maintain performance growth and enable new applications. Unfortunately, such scaling has become difficult because recent trends in systems, applications, and technology exacerbate the memory system bottleneck.

Trends and requirements
In particular, on the systems/architecture front, energy and power consumption have become key design limiters, as the memory system continues to be responsible for a significant fraction of overall system energy/power [42]. More, and increasingly heterogeneous [14, 68, 28], processing cores and agents/clients are sharing the memory system, leading to growing demand for memory capacity and bandwidth along with a relatively new demand for predictable performance and QoS from the memory system [50, 55, 67]. On the applications front, important applications are usually very data intensive and are becoming increasingly so [6], requiring both real-time and offline manipulation of large amounts of data. For example, next-generation genome sequencing technologies produce massive amounts of sequence data that overwhelm the memory capacity and bandwidth of today's high-end desktop and laptop systems [69, 3, 72], yet researchers have the goal of enabling low-cost personalized medicine.

Creation of new killer applications and usage models for computers likely depends on how well the memory system can support the efficient storage and manipulation of data in such data-intensive applications. In addition, there is an increasing trend towards consolidation of applications on a chip, which leads to the sharing of the memory system across many heterogeneous applications with diverse performance requirements, exacerbating the aforementioned need for predictable performance guarantees from the memory system. On the technology front, two key trends profoundly affect memory systems. First, there is increasing difficulty scaling the well-established charge-based memory technologies, such as DRAM [47, 4, 37, 1] and flash memory [34, 46, 9, 10, 11], to smaller technology nodes. Such scaling has enabled memory systems with reasonable capacity and efficiency; lack of it will make it difficult to achieve high capacity and efficiency at low cost. Second, some emerging resistive memory technologies, such as phase change memory (PCM) [64, 71, 37, 38, 63] or spin-transfer torque magnetic memory (STT-MRAM) [13, 35] appear more scalable, have latency and bandwidth characteristics much closer to DRAM than flash memory and hard disks, and are non-volatile with little idle power consumption.

Such emerging technologies can enable new opportunities in system design, including, for example, the unification of memory and storage sub-systems. They have the potential to be employed as part of main memory, alongside or in place of less scalable and leaky DRAM, but they also have various shortcomings depending on the technology (e.g., some have cell endurance problems, some have very high write latency/power, some have low density) that need to be overcome.
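To make the hybrid main-memory idea concrete, the sketch below shows one simple, hypothetical placement policy for a two-tier memory: a small, fast DRAM tier holds the most frequently accessed ("hot") pages, while a larger, denser but slower tier such as PCM holds the rest. The class, epoch-based promotion scheme, thresholds, and capacities are illustrative assumptions for this sketch, not the specific mechanisms surveyed in this paper.

# A minimal sketch (assumptions only) of hot/cold page placement in a
# hypothetical two-tier DRAM + PCM main memory.
from collections import Counter

class HybridMemory:
    def __init__(self, dram_capacity_pages: int):
        self.dram_capacity = dram_capacity_pages
        self.dram = set()            # pages currently held in the DRAM tier
        self.counts = Counter()      # per-page access counts in this epoch

    def access(self, page: int) -> str:
        """Record an access and report which tier serviced it."""
        self.counts[page] += 1
        return "DRAM" if page in self.dram else "PCM"

    def rebalance(self) -> None:
        """At an epoch boundary, promote the hottest pages into DRAM."""
        hottest = [p for p, _ in self.counts.most_common(self.dram_capacity)]
        self.dram = set(hottest)
        self.counts.clear()

# Example: pages 1 and 2 are touched most often, so they end up in DRAM.
mem = HybridMemory(dram_capacity_pages=2)
for page in [1, 1, 1, 2, 2, 3, 4, 2, 1]:
    mem.access(page)
mem.rebalance()
print(sorted(mem.dram))   # -> [1, 2]

A real design would also have to account for the technology-specific shortcomings noted above, such as PCM's limited write endurance and high write latency/power, which this toy policy ignores.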

Solution directions
As a result of these system, application, and technology trends and the resulting requirements, it is our position that researchers and designers need to fundamentally rethink the way we design memory systems today in order to 1) overcome scaling challenges with DRAM, 2) enable the use of emerging memory technologies, and 3) design memory systems that provide predictable performance and quality of service to applications and users. The rest of the paper describes our solution ideas in these three directions, with pointers to specific techniques when possible. Since the scaling challenges themselves arise from difficulties in enhancing memory components at solely one level of the computing stack (e.g., the device and/or circuit levels in the case of DRAM scaling), we believe effective solutions to the above challenges will require cooperation across different layers of the computing stack, from algorithms to software to microarchitecture to devices, as well as between different components of the system, including processors, memory controllers, memory chips, and the storage sub-system.

Challenge 1: New DRAM architectures
DRAM has been the technology of choice for implementing main memory due to its relatively low latency and low cost. DRAM process technology scaling has long enabled lower cost per unit area by reducing DRAM cell size. Unfortunately, further scaling of DRAM cells has become costly [4, 47, 37, 1] due to increased manufacturing complexity/cost, reduced cell reliability, and potentially increased cell leakage leading to high refresh rates. Several key issues to tackle include:
1) reducing the negative impact of refresh on energy, performance, QoS, and density scaling [44] (a back-of-envelope sketch of this overhead follows the list),
2) improving DRAM parallelism/bandwidth [33], latency [41], and energy efficiency [33, 41, 44],
3) improving reliability of DRAM cells at low cost,
4) reducing the significant amount of waste present in today's main memories in which much of the fetched/stored data can be unused due to coarse-granularity management [49, 74],
5) minimising data movement between DRAM and processing elements, which causes high latency, energy, and bandwidth consumption [66].
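As a rough illustration of issue 1, the sketch below estimates the fraction of time a DRAM rank is unavailable because it is being refreshed. The timing values used are representative DDR3-era figures assumed for illustration, not numbers taken from this paper; the point is that the per-command refresh latency (tRFC) grows with device density while the refresh interval (tREFI) stays roughly fixed, so refresh consumes a growing share of a rank's time as capacity scales.

# Back-of-envelope estimate of DRAM refresh overhead. In DDRx DRAM, the
# controller issues an auto-refresh command roughly every tREFI; each command
# makes the rank unavailable for tRFC, so the busy fraction is tRFC / tREFI.
# The tRFC values below are representative DDR3-era figures (assumptions for
# illustration, not data from this article).

def refresh_busy_fraction(t_rfc_ns: float, t_refi_ns: float = 7800.0) -> float:
    """Fraction of time a DRAM rank is busy performing refresh."""
    return t_rfc_ns / t_refi_ns

for density, t_rfc_ns in [("1 Gb", 110.0), ("4 Gb", 260.0), ("8 Gb", 350.0)]:
    print(f"{density} device: rank busy refreshing "
          f"~{refresh_busy_fraction(t_rfc_ns):.1%} of the time")

Under these assumed timings, the refresh-busy fraction grows from roughly 1% at 1 Gb to around 4-5% at 8 Gb per device, which is one reason refresh is singled out as a scaling concern.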
