DRAM makers regroup for the post-PC era

Posted: 15 Apr 2001

Keywords: DRAM, PC memory, Rambus, high-speed memory interface

It is no stretch to say that system designers expect DRAMs to behave like ducks: to swim gracefully on the surface while paddling furiously below the water line. In other words, the industry wants DRAMs to keep pace with ever-growing performance and density demands, while maintaining the simple, unsophisticated feel of a cost-driven commodity market.

When you tally the list of forces making waves, it is clear why vendors are having a hard time keeping their ducks in a row. First, there is uncertainty about what the next mainstream DRAM architecture will be. Add to that the difficulty of high-speed memory interface design, the shift away from the PC as the main influence on DRAM specs, and a vigorous legal battle between Rambus and DRAM makers over licensing issues. This turmoil probably makes most system designers long for the straightforward predictability of days past, when fast-page-mode DRAMs caused little fuss aside from quadrupling their density every couple of years.

That said, the DRAM industry is responding to these transitions not by throwing its hands up in dismay, but by working hard to map its future. The industry is focusing on providing innovative, high-volume memory solutions to increasingly segmented markets. Chipmakers are directing more DRAM-development resources to supporting networking, communications and other segments.

"Over the past five to seven years PCs have dictated the DRAM flavor de jour," said Jim Sogas, vice president of marketing and sales for Elpida Memory Inc., the company that was born when Hitachi Semiconductor and NEC Electronics merged their DRAM divisions. "Now it is expected that the datacom segment will be dictating architecture de jour. But I do not see that that's going to change the direction.

"Anyone who's been around DRAMs a long time knows that the pain points are usually in price," Sogas continued. "Any designer would love to get a new market-specific DRAM if it does not cost more. But it also has to be low-risk to implement; something that does not take a major change in the way they design their systems. No one wants to throw away the entire old interface ASICs and start over again.

"We know that DRAM architectures will probably continue to move in evolutionary steps," Sogas said. "That is proven to be the way."

If Sogas is right, the DRAM flavor used by PC OEMs will likely remain the industry's mainstream architecture, even now that PCs are no longer the dominant consumers of DRAM.

Because of the fragmented DRAM market, chipmakers are considering alternative architectures that suit special segments. Even core DRAM concepts like page hits are under fire. SDRAM and Rambus supply increased peak bandwidth by boosting same-page data transfer rates. But those DRAM types do not go far enough in shrinking the timing needed between commands. If one address is followed by a new address that falls within the same bank, the controller needs to wait about 70ns before it is allowed to issue a new command. A 70ns lag time is still too long for the new generation of networking and communications systems now on the drawing boards.
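To put rough numbers on that lag, consider the back-of-the-envelope C sketch below. It is illustrative only: the 1.6GB/s peak figure is an assumed Direct Rambus-class channel rate, not one given in the article, and the 64-byte transfer size is the packet size discussed further on. The point is that no matter how fast the interface bursts data out of an open page, back-to-back random accesses to the same bank can start no more often than once per 70ns row cycle.

#include <stdio.h>

int main(void)
{
    const double peak_gbps    = 1.6;   /* assumed peak channel bandwidth, GB/s */
    const double row_cycle_ns = 70.0;  /* same-bank command-to-command lag     */
    const double packet_bytes = 64.0;  /* assumed small-packet transfer size   */

    /* One packet per row cycle: bytes per ns is numerically equal to GB/s. */
    double random_gbps = packet_bytes / row_cycle_ns;

    printf("peak burst bandwidth       : %.2f GB/s\n", peak_gbps);
    printf("same-bank random 64B limit : %.2f GB/s\n", random_gbps);
    printf("usable fraction of peak    : %.0f%%\n", 100.0 * random_gbps / peak_gbps);
    return 0;
}

By this crude measure, more than 40 percent of the channel's peak bandwidth goes unused on same-bank random traffic, and that is the gap that shrinking command-to-command timing is meant to close.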

In his article, Fumio Baba of Fujitsu Microelectronics Inc. explores an alternative called the fast-cycle DRAM architecture. Fast-cycle DRAM uses a non-multiplexed addressing scheme that activates only a minimal sub-array block along the column axis.

Communications applications are better off with a combined fast-cycle/multibank DRAM scheme than with page-based solutions. A lot of communications gear is built around data buffers that exhibit almost no locality of reference, since data is transferred from the network into the buffer and back onto the network again. A data-switching system cannot take advantage of page hits in its main DRAM memory because the relatively small packets (usually around 64 bytes) are scattered randomly throughout the buffer memory.
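A minimal Monte Carlo sketch in C makes the point. The 2KB page size and 64MB buffer size are assumed for illustration and do not come from the article; with tens of thousands of pages in the buffer, consecutive packet reads almost never land in the row that the previous access left open.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const unsigned long page_bytes   = 2UL * 1024;          /* assumed DRAM page (row) size */
    const unsigned long buffer_bytes = 64UL * 1024 * 1024;  /* assumed packet-buffer size   */
    const unsigned long pages        = buffer_bytes / page_bytes;
    const int accesses               = 1000000;

    srand(1);
    unsigned long open_page = pages;  /* sentinel: no row open yet */
    int hits = 0;

    for (int i = 0; i < accesses; i++) {
        /* Each ~64-byte packet lands at a random spot in the buffer. */
        unsigned long page = (unsigned long)rand() % pages;
        if (page == open_page)
            hits++;
        open_page = page;             /* this row is now the open one */
    }

    printf("page-hit rate: %.4f%% across %lu pages\n",
           100.0 * hits / accesses, pages);
    return 0;
}

Under these assumptions the simulation reports a hit rate of a few thousandths of a percent, which is why buffer-bound switching designs gain so little from page-mode bandwidth.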

- Jeff Child

EE Times




