EE Times-Asia

Robust designs cut vendor hype

Posted: 16 Apr 2003

Keywords: PCI Express, accelerated graphics port (AGP), Ethernet, USB

Beneath the hype about PCI Express bandwidth and QoS, vendor cost and design flexibility are the primary reasons to use Express in standalone systems. Economic conditions, combined with the need for higher component and board integration, are what drive vendors to push for Express.

The current Express specifications provide the starting point for developing silicon, software drivers and a PCI-compatible form factor. But from a solution perspective, more remains to be developed, such as Express-to-PCI bridges and firmware. Given that this is a new, complex technology for all vendors, and that specification development is ongoing, it becomes clear that credible solutions will not ship until the second half of next year.

Aggressive signal rates

The first use for Express is graphics. If Express adopts an aggressive signaling rate to match the historic accelerated graphics port (AGP) curve, then we should see it used to double graphics bandwidth every three years through the rest of the decade. For I/O, the answer is not as clear. The Express PCI-compatible form factor is nice, but it yields no customer-visible value while increasing vendor and channel transition costs as well as customer confusion.
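The "double every three years" curve is easy to sketch numerically. A minimal illustration follows; the baseline (a 4-GB/s-class graphics link in 2004) is an assumption of this sketch, not a figure from the article.

```python
# Sketch: project graphics-interconnect bandwidth under a curve that
# doubles every three years, as the article describes for AGP/Express.
# The 4 GB/s baseline in 2004 is an illustrative assumption.

def projected_bandwidth(base_gbps, base_year, target_year, doubling_years=3):
    """Bandwidth if it doubles every `doubling_years` years from a baseline."""
    periods = (target_year - base_year) / doubling_years
    return base_gbps * 2 ** periods

for year in (2004, 2007, 2010):
    print(year, projected_bandwidth(4.0, 2004, year))
# -> 2004 4.0, 2007 8.0, 2010 16.0
```

Under that assumption, the curve reaches roughly four times the baseline by the end of the decade, which is the trajectory the article projects for graphics.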

What is required is a new module form that fundamentally changes the way vendors develop platforms and customers buy them. While today's platforms are not going away, a new modular platform could alleviate frustration and breathe life into this rather stagnant and homogeneous systems space. The module form factor for PCI Express is being developed and should be completed in time for second-half of 2004 solution delivery.

Systems in the data center have their own set of dynamics. For computer servers, appliances and I/O modules, three interconnects warrant examination: PCI-X, PCI Express and InfiniBand. Given the recent announcements concerning InfiniBand, we will focus on PCI-X and PCI Express.

Data center computers span offerings ranging from $500 to $1 million platforms. Customer demands for quality, performance and stability lead to complex trade-offs and a more conservative interconnect transition strategy. Thus, Hewlett-Packard and others helped create PCI-X and its latest incarnation, PCI-X 2.0. Given the ability to scale PCI-X 2.0 to 8Gbps and beyond, it is clear that PCI-X solutions will be around for many years to come.
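The raw throughput of a parallel bus like PCI-X is approximately bus width times transfer rate. A back-of-the-envelope sketch, ignoring protocol overhead: at the classic 64-bit, 133-MHz PCI-X rate the bus carries roughly the 8Gbps cited above, and PCI-X 2.0's 266 and 533 MT/s signaling rates scale well beyond it.

```python
# Sketch: raw parallel-bus throughput = width (bits) x transfer rate.
# Protocol and arbitration overhead are ignored; the 133/266/533 MT/s
# figures are the commonly cited PCI-X and PCI-X 2.0 signaling rates.

def bus_throughput_gbps(width_bits, transfers_per_sec_m):
    """Raw throughput in Gbps for a parallel bus."""
    return width_bits * transfers_per_sec_m * 1e6 / 1e9

for mts in (133, 266, 533):
    print(f"64-bit at {mts} MT/s -> {bus_throughput_gbps(64, mts):.1f} Gbps")
```

The 133 MT/s case lands near 8.5Gbps, matching the article's "8Gbps and beyond" characterization, with PCI-X 2.0 roughly doubling and quadrupling that.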

The PCI-compatible form factor of Express fails the test of the four critical questions on multiple counts, including customer-visible value and reasonable transition costs and impacts; it is therefore somewhat defective-on-arrival (DOA).

Again, what is required is a new module form factor that will bring improved hot-plug management and the potential adaptation of standalone-focused form factors to new small-footprint solutions such as server blades. Work is already under way to create this form factor and it is expected to be completed in the first half of this year.

Thus, we believe that PCI Express should ship in the second half of next year as a chip-to-chip interconnect that bridges to PCI-X 2.0 with only a handful of on-board native I/O devices. Once the module work is completed and vendors can migrate a sufficient number of I/O devices to Express, expect to see native Express I/O slots by 2006. Then designers will face a tough set of choices on how many PCI-X 2.0 and PCI Express slots to provide at a given design point.

Ethernet will continue to be the primary link for the switches, routers and other devices used to interconnect systems in a data center. But Ethernet must evolve.

First, remote direct memory access (RDMA) over TCP/IP fundamentally changed application development and solution delivery when products arrived late last year. To make effective use of RDMA for clustering and storage, Ethernet switch providers must modify their implementations to provide low-latency switching of 100ns to 300ns and true QoS.

Differentiating 802.1p-tagged packets in switches and adapters requires dedicated buffer resources. It also requires standardized 802.1p arbitration algorithms to reduce head-of-line blocking in switches and adapters, and standardized switch ingress-to-egress port scheduling per priority set to manage fabric bandwidth and latency on a given path between end nodes.
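The idea behind dedicated per-priority buffers can be sketched in a few lines: if each 802.1p priority has its own queue, a full low-priority buffer cannot block higher-priority frames behind it. The strict-priority scheduler below is one possible arbitration policy, not the standardized algorithm the article calls for; all names are illustrative.

```python
# Sketch: per-priority egress buffering in the spirit of 802.1p.
# Each priority gets a dedicated queue, so congestion in one class
# does not cause head-of-line blocking for another. Strict priority
# is used here purely as a simple example arbitration policy.
from collections import deque

class PriorityEgressPort:
    def __init__(self, priorities=8, depth=4):
        # one dedicated buffer per 802.1p priority (0 = lowest, 7 = highest)
        self.queues = [deque() for _ in range(priorities)]
        self.depth = depth

    def enqueue(self, priority, frame):
        q = self.queues[priority]
        if len(q) >= self.depth:
            return False        # backpressure affects only this priority
        q.append(frame)
        return True

    def dequeue(self):
        # strict priority: serve the highest-numbered non-empty queue
        for q in reversed(self.queues):
            if q:
                return q.popleft()
        return None

port = PriorityEgressPort()
port.enqueue(1, "bulk-A")
port.enqueue(7, "voice-1")
port.enqueue(1, "bulk-B")
print(port.dequeue())  # -> voice-1, despite arriving after bulk-A
```

A production switch would replace strict priority with a weighted or deficit-based scheduler to avoid starving low priorities, which is exactly the kind of algorithm the article argues should be standardized.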

These QoS improvements are already on the way for other interconnects, along with new OS and middleware-management software. The creation of standard methods and algorithms for Ethernet can build upon these efforts and create a more robust end-to-end QoS solution.

- Michael Krause

Senior Interconnect Architect

Hewlett-Packard Co.




