
Too many specifications confuse server design

Posted: 16 Apr 2003

Keywords: chipset, PCI, PCI Express, interconnect, I/O adapter

There seem to be more "industry-standard" technologies than ever before for connecting server subsystems, but many of them overlap. Today's interconnects can be divided into three groups: chip-to-chip links, I/O expansion adapters and external server communications.

At the chip-to-chip level, the integration of advanced functions into core chipsets is creating greater functional differences among them, so vendors must develop more chipset variants to deliver the right permutations of function, cost and performance. To give server designers maximum flexibility and to cut the cost of developing new chipsets, a standard chipset interconnect would be valuable.

Such an interconnect is of minimal value to server designers unless a broad range of standards-based components is available from multiple vendors. However, if mix-and-match designs can produce better servers than just procuring a whole chipset, a standard could begin to displace proprietary bus solutions.

Support base

PCI Express could be such a common standard. But unless other major chipset vendors jump on the bandwagon, the value of PCI Express as a chip-to-chip standard will be extremely limited.

At the slot level, it is vital to have a widely accepted industry standard to ensure the availability of the widest possible range of adapters. PCI is the unchallenged leader for connecting I/O adapters to servers.

In desktop systems, PCI is indeed running out of gas. Advanced graphics long ago moved from the PCI bus to the accelerated graphics port (AGP), which has scaled through 2x, 4x and now 8x speeds. Express is being positioned as the next step for desktop graphics after AGP 8x. While not significant to servers as a graphics slot, such a slot could provide an entry point for other functions. But it is unknown if and when other adapters will appear to fill this slot.
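For a sense of the bandwidth ladder involved, here is the back-of-the-envelope arithmetic (a minimal sketch using the published AGP bus parameters; the figures are nominal peaks, not measured throughput):

    # AGP: 32-bit bus at 66MHz, with 2x/4x/8x transfers per clock.
    base_mb_s = (32 / 8) * 66.67  # ~266 MB/s at 1x
    for mult in (1, 2, 4, 8):
        print(f"AGP {mult}x: ~{base_mb_s * mult:,.0f} MB/s")
    # AGP 8x tops out near 2.1GB/s -- the mark a graphics-class
    # Express slot is being positioned to clear.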

Speedup needed

With 10Gbps 4x InfiniBand and early 10GbE just coming to market, individual adapters can now demand more bandwidth than PCI-X's 8Gbps can deliver. PCI-X needs a 2x speedup to reasonably handle the full-duplex traffic of either 10Gb technology.
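The arithmetic behind that claim is simple; this sketch merely restates the nominal figures above (assumed peak rates, ignoring protocol overhead):

    # Worst-case two-way demand of one 10Gb adapter vs. PCI-X capacity.
    link_gbps = 10.0               # 4x InfiniBand or 10GbE, each direction
    duplex_demand = 2 * link_gbps  # both directions saturated at once
    pcix_gbps = 8.5                # 64-bit/133MHz PCI-X, shared both ways
    print(f"peak demand:   {duplex_demand:.0f} Gbps")
    print(f"PCI-X today:   {pcix_gbps:.1f} Gbps")
    print(f"PCI-X doubled: {2 * pcix_gbps:.0f} Gbps")  # within reach of real traffic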

PCI-X 2.0 is set to deliver this needed 2x speed boost through a double-data-rate update to PCI-X chips; a second doubling, quad data rate, is also defined. Backward and forward compatibility among PCI, PCI-X and now PCI-X 2.0 is the key to migrating to higher-performance I/O: vendors can seamlessly add PCI-X 2.0 capability to their products while retaining support for PCI bus systems.
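In rough numbers, PCI-X 2.0 keeps the 64-bit bus and 133MHz clock and raises the transfers per clock (again nominal figures, not measured throughput):

    # PCI-X 2.0 raises data rate by clocking more transfers per cycle.
    bus_bytes, clock_mhz = 8, 133  # 64-bit bus, 133MHz
    for name, per_clock in (("PCI-X", 1), ("PCI-X 2.0 DDR", 2), ("PCI-X 2.0 QDR", 4)):
        gbps = bus_bytes * clock_mhz * per_clock * 8 / 1000
        print(f"{name}: {gbps:.1f} Gbps")
    # prints 8.5, 17.0 and 34.0 Gbps respectively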

This seamless migration to PCI-X 2.0 is in sharp contrast to the discontinuity that would occur in a move to Express. Without backward compatibility of the slot/adapter, Express will not easily replace PCI slots in servers. Adapter vendors would need to provide two separate product lines during the transition, and server vendors would have to provide multiple servers with different mixes of PCI and Express slots to satisfy customers in various stages of transition. Customers would, for the first time in 10 years, have to manage deployment of incompatible adapter types among their servers.

As for connectivity between systems, InfiniBand and 10GbE both promise higher-speed communications. InfiniBand has the functionality to deliver a cost-effective performance boost today, but 10GbE will need to wait for Moore's Law and emerging TCP offload engines with RDMA before it is positioned to benefit customers.

InfiniBand host-channel adapters being released over the next few months deliver a low-latency interconnect for database and high-performance computing clusters by virtue of three protocols. Database applications already exploit the Virtual Interface Architecture (VIA) protocol available on InfiniBand. Many high-performance applications already use the Message Passing Interface (MPI), which InfiniBand also supports. Sockets Direct Protocol (SDP) is a new protocol that provides a sockets-level software interface; it can thus support existing socket-based applications without the performance overhead of TCP/IP, and without recoding applications to use one of the more exotic low-latency protocols.
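That is SDP's whole appeal: the application keeps its ordinary sockets code. The sketch below is a plain stream-socket client in Python (the host name and port are hypothetical); with an SDP implementation underneath, the same byte-stream calls can be carried over InfiniBand rather than TCP/IP, typically by remapping the socket's address family or preloading a shim library, with no change to the application source:

    import socket

    # Nothing here names InfiniBand or SDP: it is ordinary sockets code.
    # An SDP layer can redirect the connection over the fabric while the
    # application keeps the familiar stream-socket semantics.
    with socket.create_connection(("db-host.example", 7777)) as conn:  # hypothetical endpoint
        conn.sendall(b"ping")
        print(conn.recv(64))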

- Tom Bradicich & Bill Holland

IBM Corp.




