EE Times-Asia > Networks

Can PCIe compete and win against Ethernet?

Posted: 02 Jul 2014

Keywords: PCI Express, PCIe, Ethernet, Fibre Channel, SR-IOV

The boundaries between PCI Express (PCIe) and Ethernet remain clearly defined: PCIe as a chip-to-chip interconnect and Ethernet as a system-to-system technology. There are very good reasons (and a few less compelling ones) why these boundaries have endured, and the two technologies have long co-existed. While nothing on the horizon will change this fundamentally, PCIe is showing every sign of growing and competing with Ethernet for space that was once solely Ethernet's domain, specifically within the rack. Can it really compete and win against Ethernet?

Current architecture
Traditional systems currently being deployed in volume have several interconnect technologies that need to be supported. As figure 1 shows, Fibre Channel and Ethernet are two examples of these interconnects (there could be more; InfiniBand, for example).

Figure 1: Example of a traditional I/O system in use today.

This architecture has several limitations:

- Multiple I/O interconnect technologies must coexist
- Low utilisation rates of I/O end-points
- High system power and cost due to the need for multiple I/O end-points
- I/O is fixed at architecture and build time, with no flexibility to change later
- Management software must handle multiple I/O protocols, adding overhead
This architecture is severely disadvantaged by the use of multiple I/O interconnect technologies, which increases latency, cost, board space, and power. The design would be reasonable if every end-point were utilised 100 per cent of the time; more often than not, however, the end-points are under-utilised, and system users pay the overhead for that limited utilisation. The added latency arises because the PCIe interface native to the processors in these systems must be converted to each of the other protocols. (Designers can reduce system latency by using the PCIe that is native on the processors and converging all end-points over PCIe.)
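The utilisation argument above can be made concrete with a back-of-envelope model. The figures below (per-adapter cost, power, and average utilisation) are assumptions for illustration only, not numbers from the article; the point is that when each server carries its own dedicated end-points, the rack pays for capacity that mostly sits idle, whereas pooling end-points lets the rack provision for aggregate load.

```python
import math

SERVERS = 16

# Hypothetical per-endpoint figures for a dedicated design (illustrative only).
ENDPOINTS = {
    "ethernet_nic":      {"cost_usd": 300, "power_w": 10, "utilisation": 0.20},
    "fibre_channel_hba": {"cost_usd": 900, "power_w": 12, "utilisation": 0.15},
}

def dedicated_totals(servers, endpoints):
    """Every server carries its own copy of every endpoint."""
    cost = servers * sum(e["cost_usd"] for e in endpoints.values())
    power = servers * sum(e["power_w"] for e in endpoints.values())
    return cost, power

def shared_totals(servers, endpoints):
    """Endpoints are pooled: provision just enough units to cover the
    aggregate load (servers x average utilisation, rounded up)."""
    cost = power = 0
    for e in endpoints.values():
        needed = math.ceil(servers * e["utilisation"])
        cost += needed * e["cost_usd"]
        power += needed * e["power_w"]
    return cost, power

d_cost, d_power = dedicated_totals(SERVERS, ENDPOINTS)
s_cost, s_power = shared_totals(SERVERS, ENDPOINTS)
print(f"dedicated: ${d_cost}, {d_power} W")   # dedicated: $19200, 352 W
print(f"shared:    ${s_cost}, {s_power} W")   # shared:    $3900, 76 W
```

Under these assumed numbers, pooling cuts adapter cost and power by roughly four-fifths, which is the kind of saving that motivates the shared-I/O designs discussed next.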

Clearly, sharing I/O end-points (figure 2) is the solution to these limitations. This concept appeals to system designers because it lowers cost and power, improves performance and utilisation, and simplifies design. With so many advantages to sharing end-points, it is no surprise that multiple organisations have tried to achieve this; the PCI-SIG, for example, published the Multi-Root I/O Virtualisation (MR-IOV) specification to achieve just that goal. However, due to a combination of technical and business factors, MR-IOV as a specification hasn't taken off, even though it has been more than five years since it was released.
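The sharing idea behind SR-IOV and MR-IOV can be sketched in a few lines. In this minimal model (the class and names are hypothetical, not a real driver API), one physical end-point exposes several virtual functions, and hosts in the rack claim a virtual function instead of each owning a whole adapter:

```python
class SharedEndpoint:
    """Toy model of an SR-IOV-style shared endpoint: one physical device
    exposing a fixed pool of virtual functions (VFs)."""

    def __init__(self, name, num_vfs):
        self.name = name
        self.free = [f"{name}-vf{i}" for i in range(num_vfs)]
        self.assigned = {}  # host -> vf

    def attach(self, host):
        """Give the host a VF (idempotent per host)."""
        if host in self.assigned:
            return self.assigned[host]
        if not self.free:
            raise RuntimeError(f"{self.name}: no virtual functions left")
        vf = self.free.pop(0)
        self.assigned[host] = vf
        return vf

    def detach(self, host):
        """Return the host's VF to the pool."""
        vf = self.assigned.pop(host)
        self.free.append(vf)

nic = SharedEndpoint("10gbe", num_vfs=4)
print(nic.attach("server-01"))  # 10gbe-vf0
print(nic.attach("server-02"))  # 10gbe-vf1
```

SR-IOV solves this for a single root (one host sharing a device among virtual machines); MR-IOV was meant to extend the same pooling across multiple hosts, which is where, as noted above, adoption stalled.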

Figure 2: A traditional I/O system using PCI Express for shared I/O.

