EE Times-Asia

Address jitter, noise with DDR4 (Part 1)

Posted: 05 Jun 2013

Keywords: DDR memory, DDR4, PCI Express, SuperSpeed USB, memory interconnect

The latest generation of DDR memory, DDR4, doubles the speed of the current generation of DRAMs, DDR3, with end-of-life data rates of 3.2 GT/s. Compared to the first generation of DDR memory, which started out at 200 MT/s, DDR4 will be running almost 16 times faster. When DDR was first introduced thirteen years ago, the typical semiconductor feature size was 130 nm and a 2.5-V operating voltage was the standard. Now, devices promise to approach 14 nm feature sizes and 1 V operating voltages. The DRAM specs need to keep up. As DDR3 speeds approached 1600 MT/s and the DRAM data valid windows shrank from 800 ps at 600 mV to less than 60 ps at 270 mV, the DDR4 standards development team recognised the need to take a new approach.

To address the challenges posed by higher data rates, the DDR4 specification (JESD79-4) adopts several proven strategies from modern high-speed serial specifications such as PCI Express and SuperSpeed USB. In fact, the DDR4 specification goes one step further by giving designers a way to allocate the system timing and noise budget among the controller, the memory interconnect and the DRAM that is difficult, if not impossible, to accomplish with other high-speed interfaces.

Figure 1: The first signal (green) meets GDDR5 minimum pulse width (tDIPW) and data valid (tDIVW) specifications while the second signal (red) exceeds the tDIVW limit. In theory, this should result in successful data transfer for the first signal and 100% error for the second, but the reality is that neither result holds true.

When DDR assumptions become risky
The traditional specification for DDR AC parametrics has relied on several assumptions that were valid for many years but have become increasingly risky in today's high-speed systems:

Setup and hold times defined a clear boundary between reliable and unreliable data transfers. In other words, if an input signal met the specified Ts (setup) and Th (hold) timings, the data transfer would be 100% reliable.

Random jitter made up a negligible portion of the system and DRAM timing.

Clock jitter only had to be controlled over a relatively short period, such as the time for the DRAM DLLs to lock, typically a couple hundred cycles.

Random noise sources would be similarly negligible compared to the difference between the specified driver voltage swings and required receiver voltage swings.

The setup and hold timing assumption and random jitter
It has always been a given in designing digital interfaces that if the required signal setup and hold times are met, then data transfers will complete successfully. High-speed digital designers have known for years, however, that this is never literally true. Consider an example based on the JEDEC GDDR5 specification (JESD212). GDDR5 defines a minimum pulse width (tDIPW) and a data valid window width (tDIVW) for the signal transfer (figure 1). Because GDDR5 allows the controller to discover the optimal position for each data bus (DQ) signal during bus training, the actual values of Ts and Th do not need to be defined individually, only their total. The timing of the DQ signal for the first data transfer (green) exactly meets the tDIVW requirement, while the second transfer (red) is too narrow, violating the spec by only fractions of a femtosecond. A strict interpretation of these timings would predict that the first data transfer (green) will succeed 100% of the time, that is, with zero errors, while the second transfer could not be assumed to succeed at all. Can this really be what actually happens? Experienced designers know, in fact, that there is never a point where the error probability is truly zero. They also know that the transition from "nearly zero" errors to a higher error rate is not instantaneous but occurs over a range of timing values.
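The gradual transition described above can be sketched with a simple Gaussian-jitter model. This is a minimal illustration only, not part of the GDDR5 or DDR4 specifications; the 5 ps rms jitter value is an illustrative assumption:

```python
import math

def edge_error_probability(margin_ps: float, sigma_ps: float = 5.0) -> float:
    """Probability that Gaussian edge jitter (sigma_ps rms) erodes
    'margin_ps' of timing margin and the edge crosses the sampling point."""
    # One-sided Gaussian tail: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(margin_ps / (sigma_ps * math.sqrt(2.0)))

# The error probability never reaches exactly zero; it falls off smoothly
# as timing margin grows -- one wall of the classic "bathtub" curve.
for margin in (0.0, 10.0, 20.0, 30.0, 40.0):
    p = edge_error_probability(margin)
    print(f"margin {margin:5.1f} ps -> error probability ~ {p:.2e}")
```

With zero margin the edge lands on the sampling point half the time; each additional 10 ps of margin buys several orders of magnitude, but the probability is never literally zero.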

Effects of random jitter
As long as the amount of random jitter in the data signal was negligible, the setup and hold assumption was also reasonable. For DDR speeds up to about 1.6 GT/s, the contribution of random jitter to signal timing was small compared to the total bit time, which is the inverse of the data transfer rate. At higher speeds, however, random jitter effects become significant, potentially taking up much more than half of the entire data valid window. Table 1 (below) shows just how much of the data valid window can be consumed by 5 ps (rms) of random jitter in a variety of DDR designs (assuming a 10⁻¹⁸ BER goal).

Table 1: Amount of DDR data valid window consumed by random jitter.
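The kind of arithmetic behind Table 1 can be reconstructed as a sketch. It assumes the common dual-Dirac-style conversion, where peak-to-peak random jitter at a BER target is 2·Q·RJrms; the 5 ps rms figure and the data rates come from the text, while the comparison here is against the raw bit time (the spec's data valid window is much smaller, so the fraction it consumes is correspondingly larger):

```python
import math

def q_for_ber(ber: float) -> float:
    """Invert the Gaussian tail Q(x) = 0.5*erfc(x/sqrt(2)) by bisection."""
    lo, hi = 0.0, 12.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if 0.5 * math.erfc(mid / math.sqrt(2.0)) > ber:
            lo = mid          # tail still too fat; need a larger Q
        else:
            hi = mid
    return (lo + hi) / 2.0

rj_rms_ps = 5.0
q = q_for_ber(1e-18)               # ~8.8 for a 1e-18 BER target
tj_pp_ps = 2.0 * q * rj_rms_ps     # peak-to-peak random jitter (both tails)

for rate_gts in (1.6, 2.133, 2.667, 3.2):
    ui_ps = 1000.0 / rate_gts      # bit time (unit interval) in ps
    print(f"{rate_gts:5.3f} GT/s: UI = {ui_ps:6.1f} ps, "
          f"RJ pk-pk = {tj_pp_ps:.1f} ps ({100.0 * tj_pp_ps / ui_ps:.0f}% of UI)")
```

At a 10⁻¹⁸ BER target, 5 ps rms expands to roughly 88 ps peak to peak, which is already more than a quarter of the 312.5 ps bit time at 3.2 GT/s and well over the sub-60 ps data valid windows cited earlier.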

Although an error rate of 1 in 10¹⁸ bits seems vanishingly small, DDR buses are usually 64 bits or more wide, so this actually results in a memory-channel error rate greater than 10⁻¹⁶, or more than one failure every two weeks. The traditional assumption of zero errors when Ts+Th is satisfied cannot be achieved in practice, but for the lower DDR speeds this goal can be effectively met by adding a relatively small amount of extra timing margin to the device specification. This has in fact been the industry practice through DDR3. It is interesting to note that at 1.6 GT/s, where the portion of the data valid window that can be consumed by random jitter becomes significant, the traditional allocations of timing budget for the controller, the DRAM and the memory bus begin to result in negative system timing margin.
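The back-of-the-envelope failure-interval arithmetic can be sketched as follows. The parameters are illustrative: the exact interval depends on the assumed bus width, data rate and number of concurrently running channels, which the text does not fully specify:

```python
def mean_time_between_failures_s(ber: float, bus_width: int,
                                 rate_ts: float, channels: int = 1) -> float:
    """Expected seconds between word errors on a parallel memory bus.
    Any single bit error corrupts the whole transfer, so the word error
    rate is approximately bus_width * ber (union bound)."""
    word_error_rate = bus_width * ber       # errors per transfer
    transfers_per_s = rate_ts * channels    # transfers/s across all channels
    return 1.0 / (word_error_rate * transfers_per_s)

# One 64-bit channel at 3.2 GT/s with a 1e-18 per-bit error rate:
mtbf = mean_time_between_failures_s(ber=1e-18, bus_width=64, rate_ts=3.2e9)
print(f"one failure every ~{mtbf / 86400:.0f} days per channel")
```

Running several channels in parallel divides the interval accordingly, which is how a "vanishingly small" per-bit rate turns into system-level failures every few weeks.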

