EE Times-Asia > Controls/MCUs

Tehuti finds middle ground for network offloading

Posted: 10 Jan 2005

Keywords: FPGA, CPU, media access controller, hardware

There are generally two approaches to hardware design for Internet connections: Either you send raw packets from the media access controller into main memory and let the CPU deal with the TCP/IP protocol stack, or you handle almost the entire stack in a TCP/IP offload engine (TOE) and send completed messages into main memory.

Which you choose depends on what you are doing and how big your budget is. If you are an individual, surfing and using a few low-bandwidth Internet applications, you probably pick the first approach. Packet traffic will be bursty and generally light, and a good PC or workstation CPU won't notice the overhead of executing the protocol layers in software. But you would certainly notice the $200-or-so OEM cost of the TOE. On the other hand, if you are assembling an application server, a gateway to a storage network or a network appliance, where packet traffic will be intense, you may find the TOE worth the money.

But what if there were another option: a hardware approach that cost less than a TOE but still freed up CPU cycles? That thinking led Tehuti Networks to introduce Bordeaux, which can best be described as a partial TOE.

"We started out with a clean sheet of paper," said Tehuti CEO Arie Brish. "We asked what functions we could put in hardware that would do the most to free up the CPU, but at the least cost." The answer, Tehuti's founders believed, should lie somewhere between a simple network interface card (NIC) and a TOE.

After considerable experimentation with traffic profiles, the founders came up with an engine that attached to an Ethernet physical-layer device and did the preliminary processing on packets. Brish hesitated to state exactly how much work the device does, but did say the design passes a stream of packets to the host CPU in a form that eliminates much of the grunt work for the software, apparently in terms of finding and establishing connections, sorting out packets by connection and transferring them. "The chip does lots of work on the packets, but far from everything," Brish said.
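The article does not disclose exactly what the engine does, but the per-connection sorting it mentions can be sketched in software. The snippet below is purely illustrative (the function names and the idea of grouping by the TCP/IP 5-tuple are assumptions, not Tehuti's design); it shows the kind of classification work that would otherwise burn host CPU cycles.

```python
# Hypothetical sketch of per-connection packet sorting: classify raw
# IPv4/TCP packets by their 5-tuple so the host stack receives them
# pre-grouped. Header offsets follow the standard IPv4 and TCP layouts;
# everything else is illustrative, not Tehuti's actual scheme.
import struct
from collections import defaultdict

def five_tuple(packet: bytes):
    """Extract (src_ip, dst_ip, src_port, dst_port, proto) from a raw
    IPv4 packet (Ethernet header already stripped)."""
    ihl = (packet[0] & 0x0F) * 4                      # IPv4 header length
    proto = packet[9]                                 # 6 = TCP
    src_ip, dst_ip = struct.unpack_from("!4s4s", packet, 12)
    src_port, dst_port = struct.unpack_from("!HH", packet, ihl)
    return (src_ip, dst_ip, src_port, dst_port, proto)

def sort_by_connection(packets):
    """Group packets by connection, the way a partial-offload engine
    might before handing them to the host."""
    conns = defaultdict(list)
    for p in packets:
        conns[five_tuple(p)].append(p)
    return conns
```

In hardware this grouping would typically be a hash lookup on the 5-tuple rather than a dictionary, but the effect on the host is the same: the driver sees packets already separated by connection.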

Today, the design is implemented in an FPGA, with a PCI-X interface to the host system. The complexity is on the order of half a million gates, according to Brish, and the chip can more than keep up with traffic loads when operating at 100 MHz.

The company has done extensive in-system profiling with various traffic loads, and it finds that CPU utilization can be reduced by a factor of around 2.5, or throughput increased by a similar factor, if that is the object. Several patented ideas bring about a major reduction in main-memory footprint compared with a traditional NIC, the company said.
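To make the trade-off concrete: the 2.5x figure comes from the article, but the baseline utilization below is a made-up example value, not a Tehuti measurement.

```python
# Back-of-the-envelope arithmetic for the claimed 2.5x reduction.
# baseline_cpu is hypothetical; offload_factor is the article's figure.
baseline_cpu = 0.60               # fraction of CPU spent on the stack (assumed)
offload_factor = 2.5              # reduction reported by Tehuti

with_offload = baseline_cpu / offload_factor   # 0.24 of the CPU
freed = baseline_cpu - with_offload            # 0.36 returned to applications

# If the host is CPU-bound on packet processing, those freed cycles can
# instead carry roughly offload_factor times the original traffic.
```

Either reading of the number (lower utilization at fixed load, or higher load at fixed utilization) follows from the same measurement; which one matters depends on whether the box is an application server or a traffic-heavy gateway.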

The FPGA solution is an interim step; the company is hardening the design for implementation in an ASIC. Brish expects to use a relatively mature technology and to offer the chip with supporting driver software for a price in the $50 range.

Initially, Brish said, Tehuti will aim the chip at Web appliances and application servers. "When a corporate network upgrades from 100 Mbits/s to 1 Gbit/s, people see a significant improvement in apparent speed right away, so they are happy with their networking cards," he said. "But as applications come online to start using the new bandwidth, people see that their traditional solutions aren't enough, but they won't want to spend to upgrade to TOE-based cards. That's our window."

Further out, Brish said he sees the scenario spreading to end-user PCs and workstations.

- Ron Wilson

EE Times


