Architecture supports Ethernet, Infiniband

Posted: 01 Jan 2007

Keywords: Ethernet, Infiniband, ConnectX architecture, Fibre Channel, Internet Protocol

In the heat of the dot-com boom, a handful of companies bet that data-center interconnects would converge around Infiniband. After the bust, others put their money on an accelerated version of Ethernet.

Today, it's looking less likely that the data center will standardize on any one technology. But it is becoming increasingly clear that multiple network technologies will converge in hybrid chips.

In perhaps the biggest sign of that silicon-level convergence so far, California-based Mellanox Technologies Ltd announced its ConnectX architecture, which packs support for Ethernet and Infiniband into one chip. The chips also have limited support for Fibre Channel. The move marks a significant shift for the last Infiniband chip company standing after a shakeout in this sector over the last five years.

Mellanox isn't alone in spreading its bets across multiple networks in the data center. In November, Infiniband switch maker Voltaire announced it would ship by June 2007 a high-end switch that also supports 10GbE. The company will use its own ASIC to support bridging between the two networks as well as carrying Internet Protocol (IP) over Infiniband.

Following a similar trend, Fibre Channel leader QLogic Corp. acquired Infiniband switch maker SilverStorm Technologies Inc. in October. Earlier last year, QLogic acquired Infiniband card maker PathScale Inc.

"Convergence is the buzz for many people, but buzz can take years to translate into real production environments, and customers are a skittish crowd in the enterprise," said Michael Krause, an interconnect specialist in the PC server division at Hewlett-Packard Co.

A top goal for the 90nm ConnectX architecture from Mellanox is to let server blade makers consolidate support for Infiniband and Ethernet into one chip that also provides some support for Fibre Channel and iSCSI. "That's the major application," said Thad Omura, VP of product marketing at Mellanox.

HP's Krause noted that products for the KR version of the 10GbE standard will be coming to market shortly. They will "enable fewer wires on a blade's enclosure than Infiniband to transfer 10Gbps of application data," he said.

Despite that fact, Mellanox said that all the top server makers are already using Infiniband in their blade servers. Some are using the interconnect at data rates of 20Gbps, Omura said.

Products shipping early this year based on ConnectX are expected to come in several flavors. One product will support 10Gbps and 20Gbps Infiniband only. Another will support a mix of 10Gbps and 20Gbps Infiniband and 1Gbps and 10Gbps Ethernet. Later this year, Mellanox will ship 40Gbps Infiniband adapters based on ConnectX.

"We have our own 10GbE MAC and all our Serdes are homegrown now as well. We like to have all our intellectual property internally developed," said Omura.

The ConnectX chips can encapsulate Fibre Channel commands in Infiniband packets. However, they would require an Infiniband-to-Fibre Channel bridge to link to native Fibre Channel storage networks. Omura suggested such products will emerge this year, but would not say whether Mellanox will make them.

"There's no specific Fibre Channel controller in the hardware, but there are hardware mechanisms to accelerate the Fibre Channel protocol," he said.

Performance hit?
Some observers note that letting one protocol, such as IP or Fibre Channel, ride on top of another can come at a cost in performance. IP over Infiniband, for example, can yield as little as 2 to 3Gbps of throughput in some implementations, even though the underlying 10Gbps Infiniband link can carry up to 8Gbps once its encoding overhead is accounted for.
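The arithmetic behind that 8Gbps figure follows from Infiniband's 8b/10b line coding, which spends 10 signalling bits for every 8 bits of payload. The short C sketch below works through the numbers for the standard 4x link rates; the SDR/DDR/QDR labels and the 8b/10b efficiency are standard Infiniband parameters assumed here for illustration, not figures supplied by Mellanox.

    /* Back-of-the-envelope sketch: usable Infiniband bandwidth after
     * 8b/10b encoding (8 payload bits for every 10 signalling bits).
     * Compile with any C compiler, e.g. cc rates.c -o rates */
    #include <stdio.h>

    int main(void)
    {
        const double efficiency = 8.0 / 10.0;                   /* 8b/10b line coding */
        const double signalling_gbps[] = { 10.0, 20.0, 40.0 };  /* 4x SDR, DDR, QDR */
        const char *names[] = { "SDR 10Gbps", "DDR 20Gbps", "QDR 40Gbps" };

        for (int i = 0; i < 3; i++)
            printf("%-11s -> %.0fGbps usable after encoding overhead\n",
                   names[i], signalling_gbps[i] * efficiency);
        return 0;
    }

A 10Gbps SDR link thus tops out at about 8Gbps of payload, which is why the 2 to 3Gbps seen with IP over Infiniband represents a substantial loss.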

On the Ethernet side, the chips do not support TCP offload. Instead, they rely on the remote DMA (RDMA) capabilities at Layer 4 of their Infiniband blocks.
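In the RDMA model, the adapter moves data directly between registered application buffers, so there is no kernel TCP stack to offload. The fragment below is a minimal sketch of that setup using the generic libibverbs API; the article does not describe Mellanox's software stack, so the device handling, buffer size and access flags here are illustrative assumptions only.

    /* Minimal sketch: open an RDMA-capable device and register a buffer
     * with libibverbs so the adapter can DMA into it directly.
     * Link with -libverbs; requires an RDMA device at runtime. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        if (!ctx) { ibv_free_device_list(devs); return 1; }
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Registering memory pins it and hands the adapter the keys it
         * needs to read or write the buffer without involving the kernel. */
        size_t len = 4096;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (mr) {
            printf("registered %zu bytes, lkey=0x%x, rkey=0x%x\n",
                   len, mr->lkey, mr->rkey);
            ibv_dereg_mr(mr);
        }

        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }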

When Mellanox demonstrates 40Gbps Infiniband links over the ConnectX chips early this year, the so-called quad-data-rate Infiniband capability will have latency as low as 1µs, down from 2.25µs in today's chips. The 40Gbps parts will consume less than 6W per port and will be deployed in systems in early 2008.

- Rick Merritt
EE Times



