EE Times-Asia

Network data flow is key to SRAM choice

Posted: 01 Aug 2003

Keywords: SRAM, network design, memory, synchronous burst, SyncBurst

The insatiable demand for higher-bandwidth networks has made it necessary to create increasingly innovative memory architectures to support new system implementations.

SyncBurst SRAM

The network design community first adopted the synchronous burst SRAM, also known as the SyncBurst or pipelined burst SRAM, for this purpose in the mid-1990s. At that time, these implementations had to rely on unidirectional data streaming, keeping the SRAM bus in one direction for several cycles before reversing the data flow direction. This was necessary for reasonable bus utilization employing a device originally designed for cache line operations; that is, four data words associated with each address.

As network speeds increased, such an approach to data streaming became inadequate, and rapid bus turnarounds became essential. But the original device definition could not perform that way: wait states added to permit contention-free bus turnarounds degraded performance, higher clock frequencies were required to achieve the necessary data transfers, and both data and address bus efficiency suffered.

Once it was understood how these new systems were exercising the SyncBurst SRAMs, manufacturers began to develop architectural improvements, resulting in the zero bus turnaround SRAM pioneered by Micron, IDT and others. Optimized for network data flow, this type of SRAM has an equal read and write cycle pipeline length. Thus, reads and writes could be randomly interspersed with no bus turnaround penalty. The performance-hungry network design community snapped them up, and this architecture remains the most widely used SRAM in network applications.

However, there is a problem. Simply put, the zero bus turnaround architecture assumes a specific frequency target, basically a range of 50MHz through 166MHz. Only systems with frequencies in that range can use these SRAMs to avoid wasted bus turnaround cycles. To be sure, zero bus turnaround designs can be clocked at higher frequencies, but only with the addition of the dreaded wait state to avoid bus contention.
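As a rough illustration of the turnaround penalty described above, bus utilization can be modeled as the fraction of clock cycles that actually carry data. The traffic figures here are assumed for the example, not taken from the article:

```python
def bus_efficiency(transfers, turnarounds, wait_states_per_turnaround):
    """Fraction of clock cycles carrying data when each read/write
    direction change costs a number of idle wait cycles."""
    total_cycles = transfers + turnarounds * wait_states_per_turnaround
    return transfers / total_cycles

# Zero-bus-turnaround SRAM within its design frequency range:
# direction changes are free, so every cycle moves data.
print(bus_efficiency(transfers=100, turnarounds=50,
                     wait_states_per_turnaround=0))  # 1.0

# The same traffic clocked above the design range, with one wait
# state per turnaround, wastes a third of the bus cycles.
print(bus_efficiency(transfers=100, turnarounds=50,
                     wait_states_per_turnaround=1))  # ~0.667
```

The second case shows why "the dreaded wait state" matters: raw clock frequency rises, but delivered bandwidth does not rise proportionally.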

SRAMs are used in a plethora of places in network applications - so many that it is easiest to classify them in terms of the nature of traffic flows that the SRAM bus needs to accommodate.

Lookup-style accesses entail mostly read cycles and occasional writes to update the table. Packet-buffering-style accesses entail balanced read and write operations, acting essentially as an addressable buffer. Packet-classification or QoS processing touches the packet more than once, resulting in an unbalanced ratio between reads and writes and, indeed, in ratios that vary depending on network traffic. Any new architecture will need to deal with this diversity of bus operations.

Similarly, increased bandwidth must be addressed. Obviously, bandwidth can be increased by simply making the bus wider: a 100MHz, 144bit zero-turnaround bus could easily provide a sustainable bandwidth of 14.4Gbps. However, employing 144 data signals is very costly. And bandwidth in network data flow management configurations cannot be provided blindly; it demands maximum pin efficiency. If, for example, an ASIC technology is capable of 400Mbps/pin toggle rates while maintaining signal integrity, then it must be used at that rate or the implementation will sacrifice pin efficiency.
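The arithmetic behind those figures is straightforward and can be checked directly. This sketch uses the article's own numbers (144 bits at 100MHz, and a hypothetical 400Mbps/pin ASIC limit):

```python
def bus_bandwidth_gbps(width_bits, rate_mbps_per_pin):
    """Aggregate bus bandwidth for a given width and per-pin data rate."""
    return width_bits * rate_mbps_per_pin / 1000.0

# 144-bit bus, single data rate at 100MHz -> 100Mbps per pin.
print(bus_bandwidth_gbps(144, 100))  # 14.4 (Gbps)

# At 400Mbps/pin, the same 14.4Gbps needs only a quarter of the pins.
print(14.4 * 1000 / 400)  # 36.0 data pins
```

This is the pin-efficiency argument in miniature: running pins below their achievable toggle rate forces a wider, costlier bus for the same bandwidth.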

Data transfer size is also important. Each SRAM bus design uses the smallest word as the basis to set the device burst length requirement, or how many bits must be transferred per address.
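The relationship between the smallest transfer and burst length can be sketched as follows; the specific word and bus sizes are illustrative assumptions, not figures from the article:

```python
def burst_length(transfer_bits, bus_width_bits):
    """Words per address needed to move a transfer of a given size.
    A partial final word still consumes a full bus cycle (ceiling)."""
    return -(-transfer_bits // bus_width_bits)

# A 72-bit minimum transfer on a 36-bit bus implies burst-of-two.
print(burst_length(72, 36))   # 2

# A 144-bit transfer on the same bus implies burst-of-four,
# like the cache-line-oriented SyncBurst devices described earlier.
print(burst_length(144, 36))  # 4
```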

Device latency, while not the most critical parameter, is nevertheless a factor. Since the classic two-stage SRAM pipeline (array access plus data transfer) is a reasonably efficient solution, it is best to maintain this concept wherever possible.

This diversity of issues has resulted in a solution requiring three seemingly different yet complementary SRAM architectures: quad-data-rate (QDR) SRAM, DDR SRAM and DDR separate-I/O (SIO) SRAM. Choosing the optimal SRAM for a new application is still a matter of establishing clear goals after weighing trade-offs among data signal count, bus width, frequency, bus utilization and optimal data transfer size. The good news is that these network-optimized SRAM architectures accommodate nearly every reasonably imaginable set of goals.

- J. Thomas Pawlowski

Senior Fellow

Micron Technology Inc.
