EE Times-Asia

GDDR5 eyes next-gen graphics controllers

Posted: 27 May 2008

Keywords: graphics controller, GDDR5, memory interface, DRAM

Rivals Advanced Micro Devices Inc. and Nvidia Corp. are expected to ship next-generation graphics controllers as early as June using a high-speed memory interface that delivers a whopping 4Gbit/s per pin and scales to as much as 7Gbit/s per pin.

The full impact of the Graphics Double Data Rate, version 5 (GDDR5) interconnect, defined by Jedec, is still unclear at a time when the graphics chips, including a new architecture from Intel Corp., remain under a cloak of secrecy. But excitement is building as players plow a path to new levels of performance.

Hynix Semiconductor, Qimonda AG and Samsung Electronics are already shipping memory chips using GDDR5. The spec, which could be officially published in September, adds features to cut power and cost while increasing bandwidth over today's mainstream GDDR3 interface.

Set for take-off
Joe Macri, senior director of circuit engineering for AMD's graphics group, said he expects AMD, Nvidia and Intel to use the interface on their next-generation graphics controllers. The interface will also be a good choice for next-generation videogame consoles, said Macri, who chairs Jedec's DRAM committee as well as the task group that defined GDDR5.

"We would not have three vendors bringing DRAM products to market in such a short space of time if only one graphics vendor was going to support it," he said, citing a May 21 press release in which AMD vowed to use GDDR5 for its next Radeon graphics chips. "Intel, Nvidia and others were all there defining [GDDR5] and, I suspect, will design with it."

The question is when. Barry Wagner, a director of technical marketing for Nvidia and vice chairman of the GDDR5 group, would not comment on Nvidia's next-gen controllers. But he downplayed the memory interconnect's significance. "Memory bandwidth is not really a predictor of success or performance. It's a second- or third-order effect on performance, which is primarily influenced by the controller architecture," Wagner said.

For its previous-generation controller, the GeForce 8800, Nvidia used a 384bit GDDR3 interconnect that it later scaled down to a 256bit-wide version. The part competed against an AMD R600 using a 512bit GDDR4 interconnect. Both links were clocked at about 1GHz, but the Nvidia part was widely seen as having superior performance. "With less memory bandwidth, we had a sufficiently better product," Wagner claimed.
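As a rough illustration of why a wider bus did not decide that contest, the peak bandwidth of each configuration can be computed from bus width and clock. This sketch assumes double data rate (two transfers per clock) at the roughly 1GHz clock quoted above; the shipping parts ran slightly below that, so these are ballpark figures, not measured numbers.

```python
def bus_bandwidth_gbyte_s(width_bits: int, clock_ghz: float,
                          transfers_per_clock: int = 2) -> float:
    """Peak bandwidth in GByte/s for a memory bus of the given width."""
    return width_bits * clock_ghz * transfers_per_clock / 8

# GeForce 8800-class 384bit GDDR3 bus vs. R600-class 512bit GDDR4 bus
nvidia_gddr3 = bus_bandwidth_gbyte_s(384, 1.0)
amd_gddr4 = bus_bandwidth_gbyte_s(512, 1.0)

print(f"384bit GDDR3 @ ~1GHz: {nvidia_gddr3:.0f} GByte/s")  # 96 GByte/s
print(f"512bit GDDR4 @ ~1GHz: {amd_gddr4:.0f} GByte/s")     # 128 GByte/s
```

Despite roughly a third less raw bandwidth on paper, the Nvidia part was widely seen as faster, which is the point Wagner makes about controller architecture dominating memory bandwidth.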

For next-generation parts, Nvidia could continue using GDDR3, now sampling at 1.3GHz frequencies. "It's not completely out of gas," Wagner said. DDR3 system memories in x16 and x32 widths could become options if they hit low enough price points, he added.

Clearly, AMD does not want to see a repeat of the GDDR4 experience, when only AMD and two DRAM makers supported the spec. "The ecosystem never fully developed," said Macri. "It achieved its technical goals, but it never did as well in the market as it could have."

Macri said Nvidia helped define GDDR4 but decided not to use it, because the spec used 8 bits minimum for certain burst operations. Nvidia's graphics chip at the time was designed for a more traditional, 4bit burst.

Graphics market
Just what the next-generation controllers will offer and when they will ship is still unclear. Thus, it's also unclear what mix of memory interfaces AMD and Nvidia will use. Overall, the former ATI graphics group is regaining some momentum and market share after having fallen behind during the period of its acquisition by AMD. Nvidia currently commands about two-thirds of the market for discrete graphics chips; AMD takes the other third.

"It looks like the graphics group at AMD has been making up time, and we are getting back to more of a horse race," said Dean McCarron, principal of market watcher Mercury Research. "Typically, the companies release a new controller every 12 to 18 months, with a process shrink as a midlife kicker between generations. It's an insane pace."

Intel is jumping into the fray with a discrete graphics controller it calls Larrabee, based on an array of modified x86 cores, but it will not release the part until sometime in 2009 and is keeping the design details close to the vest. The chip giant is expected to demonstrate a working prototype of Larrabee at Siggraph in August.

Intel did say it has crafted 100 new graphics instructions for Larrabee, which includes a vector-processing unit capable of teraflops performance. The part will also sport a new cache architecture and support the standard Microsoft DirectX and OpenGL graphics application programming interfaces.

The chip likely will not have a huge impact on the graphics market at launch, in part because Intel will have to bring up a complex software stack to take advantage of it. Nevertheless, the debut is a novel one. "We haven't seen a new discrete graphics chip player in about a decade," said McCarron.

Inside GDDR5
GDDR5 will initially appear on 512Mbit and 1Gbit chips supporting data transfers at up to 4Gbit/s per pin at gigahertz frequencies using a quad-rate clock. It can stretch to data rates as fast as 7Gbit/s per pin, to deliver throughput of 12- to 28GByte/s per chip.
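The per-chip throughput figures follow directly from the per-pin rates, assuming the x32 interface width that is standard for graphics DRAMs (an assumption here; the article gives only the per-pin rates and the resulting per-chip range):

```python
def chip_throughput_gbyte_s(gbit_per_pin: float, pins: int = 32) -> float:
    """Convert a per-pin data rate (Gbit/s) to per-chip throughput (GByte/s)."""
    return gbit_per_pin * pins / 8

# The initial 4Gbit/s parts and the 3-7Gbit/s range the spec covers
for rate in (3.0, 4.0, 5.0, 7.0):
    print(f"{rate} Gbit/s per pin -> "
          f"{chip_throughput_gbyte_s(rate):.0f} GByte/s per chip")
```

At the low end (3Gbit/s per pin) a x32 chip delivers 12GByte/s; at the 7Gbit/s ceiling it reaches 28GByte/s, matching the 12- to 28GByte/s range above.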

The interface retains the single-ended structure of the previous generation but uses a new clocking technique and new low-power modes to consume an average of 2.5W at 5Gbit/s running at 1.5V. Macri estimates the interface reduces power by about 30 percent compared with mainstream GDDR3.

The interface is backward compatible with previous graphics and systems memory interconnects from Jedec. "It's possible to build a controller that handles everything from DDR2 up to GDDR5 running from 400MHz to 5Gbit/s, which is pretty amazing," Macri said.

Pin reductions and other streamlining steps aim to keep die size, and therefore cost, as low as possible. "Our GDDR5 physical-layer block is not a whole lot bigger than our competitor's GDDR3 PHY," said Macri.

GDDR5 extracts clock information from the data stream in a way that allows it to be flexible across different operating conditions, a fact that will help optimize performance when PC gamers overclock the chips. "This is much more flexible than any DRAM we have ever worked with in terms of going up and down in frequency and power," said Macri.

The spec supports error detection in both the read and write directions and can do real-time error detection and repair. The clocking scheme simplifies board routing. The spec itself is in a final stage at Jedec. "There's [still] a lot of cleanup, but after a September ballot, it should be ready for publication," said Macri.

Rambus option
Macri said advanced signaling technologies from Rambus will not be competitive, in part because they use a differential (two-wire) approach rather than the single-wire technique in GDDR5. The extra wire typically requires more pins and power. "We don't think a differential solution makes sense until you get to speeds of 8- to 10Gbit/s," he said.

The Rambus technology is used as an interconnect for main memory in the Sony PlayStation 3, but all three major videogame platforms today use GDDR3 as their graphics memory interconnect, Macri said. "XDR doesn't have a footprint in any console for graphics today," he said. "I believe GDDR5 will be a nice fit for the console space."

Wagner of Nvidia said vendors will use single-ended approaches to maintain compatibility with the broadest set of memories for their next-generation controllers. Compatibility "has been the biggest challenge for Rambus," he said.

Ultimately, the industry will have to switch to differential technology, said Michael Ching, director of product marketing at Rambus. A Qimonda white paper shows today's techniques running out of steam at 5- to 6GHz, he said.

"That's pretty much the end of the line for the single-ended approach," Ching said, adding that the Rambus XDR approach can consume less power than single-ended techniques. "Our analysis shows differential technology results in lower power even at 4 GHz or so, and the difference between the two grows as you go faster," he said.

The Rambus XDR technology is available at data rates from 3.2- to 4.8GHz and will scale to 6.4GHz. Within weeks, Rambus will disclose its XDR-2 technology, which will start at 8 GHz and has been demonstrated at 16 GHz, Ching said.

Besides its use in the PlayStation 3, XDR is employed as a graphics link in a Toshiba notebook. Toshiba is also using XDR in an HDTV chipset.

- Rick Merritt
EE Times
