
Gigabit Ethernet transceiver ignites port-count explosion

Posted: 11 Aug 2006

Keywords: Texas Instruments, TI, Ethernet

By Atul Patel
Texas Instruments

Ethernet has become one of the most prolific enterprise communication standards in the world. As enterprises switch from legacy information systems to Web-based applications, bandwidth use is increasing drastically. In addition, new technologies such as desktop conferencing, net-based virtual meetings, and group-oriented productivity applications are keeping today's networks at peak utilization.

This escalating bandwidth requirement is driving switch-router equipment developers to design systems with the ability to scale. The result is reflected in switch-router port counts. A typical switch-router port count has scaled from eight ports per system to more than 72.

Switch-router designers now face the critical issue of increasing port density while lowering system cost, reducing power consumption, and shrinking system footprint. A key development in attacking these issues has been the introduction of multi-port Gigabit Ethernet transceiver (serdes) devices (Figure 1).

Gigabit Ethernet transceivers can be broken into two groups. The first group is collectively called 1000BaseT transceivers (IEEE 802.3ab-compliant Gigabit Ethernet copper PHYs). The second group is electrical transceivers used to drive optical modules, such as small form-factor pluggable (SFP) modules and Gigabit interface converters (GBICs); these are typically known as Gigabit Ethernet transceivers, or serdes (IEEE 802.3z-compliant). It's this latter type that has a significant impact on the cost, power, and size of a system.

It's important to understand that as more 10/100/1000BaseT ports are deployed in enterprise networks, the bottleneck at the router will be at the Gigabit Ethernet uplink ports. Not only are more ports needed at the switch router, but larger Gigabit Ethernet switches are also required in the wiring closets and data centers (Figure 2). Gigabit Ethernet transceivers that drive the optical uplink ports have evolved over the last 10 years from single-port devices to large, complex multi-port systems-on-a-chip. This growth underscores the fact that Gigabit Ethernet switch-router designers must continue to innovate, adding uplink ports as 10/100/1000BaseT ports continue to multiply.


Figure 1: Gigabit Ethernet transceiver is part of the signal chain


Figure 2: Gigabit Ethernet fiber ports continue a strong growth pattern

Over successive generations of product design, switch-router designers have addressed key issues such as power dissipation, cost, and board area. These issues relate back to important customer requirements, including affordability, scalability, usability, and serviceability.

Select the right transceiver
Switch-router developers must consider multiple criteria when selecting a Gigabit Ethernet transceiver for an application. This is especially true in higher-port-count designs, where problems such as high power consumption, heat dissipation, and limited board space loom large. Four key areas that designers must consider when selecting a transceiver are interfaces, power dissipation, packaging, and cost.

Gigabit Ethernet transceivers have multiple interfaces that designers must consider: the parallel-data interface, the high-speed serial interface, and the control interface. The parallel interfaces for Gigabit Ethernet transceivers have been evolving rapidly of late. For many years, 10-bit parallel interfaces dominated the Gigabit Ethernet transceiver landscape because they were easy to design and implement. The 10-bit interface evolved from the Gigabit media-independent interface (GMII), which was defined by the standards body that controls the Ethernet specification.

Both the 10-bit and GMII interfaces use LVTTL signaling levels, with data clocked on the rising edge of the reference clock. The main difference between the two is that GMII also incorporates physical coding sublayer (PCS) functions, which aren't traditionally found in devices that support the 10-bit interface. The choice between the 10-bit and GMII interfaces will depend on the media access controller (MAC) being used and whether it has the necessary PCS functions or requires them to be present in the transceiver.

In recent years, the 10-bit/GMII interface has given way to reduced-bit interfaces as a way to lower overall pin count and chip size while increasing port density. This change was directly driven by the need to increase system port counts while minimizing the associated cost increases in ASIC/MAC devices. The most prevalent reduced-bit interfaces are the reduced 10-bit interface (RTBI) and the reduced Gigabit media-independent interface (RGMII). RTBI, as the name implies, is a reduced version of the traditional 10-bit interface: the raw parallel bit width is reduced from 8/10 bits to 4/5 bits (8/10 or 4/5 depending on whether 8b/10b encoding/decoding occurs on- or off-chip). Signaling levels vary from LVTTL to SSTL2 to HSTL.

These 4/5 bits are clocked on both the rising and falling edges of the reference clock. Effectively, the reduced-bit interfaces achieve the same data rates as the traditional 10-bit interfaces with half the pin count. This pin savings becomes significant in a system with 24 or 48 GbE ports.

RGMII was developed as an alternative to 10-bit/GMII and RTBI. Basically, RGMII reduces the maximum number of I/O pins from 23 (on the parallel side of the 10-bit interface) to just 12 (counting the control pins). This is accomplished by multiplexing four data signals with a control signal on both edges of the reference clock.
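
To make the pin and data-rate arithmetic concrete, here is a minimal C sketch comparing the interfaces at the port counts mentioned above. The per-port pin counts (23 for 10-bit/GMII, 12 for RGMII, roughly half of 23 for RTBI) are taken from this article's figures; treat them as approximations, since exact counts vary by vendor.

#include <stdio.h>

/* Per-port parallel-side pin counts from this article's figures;
 * exact counts vary by vendor, so treat these as approximations. */
#define PINS_10BIT 23   /* traditional 10-bit/GMII, SDR           */
#define PINS_RTBI  12   /* roughly half of the 10-bit interface   */
#define PINS_RGMII 12   /* data and control muxed, DDR            */

int main(void)
{
    /* A reduced-bit DDR interface matches the 10-bit SDR data rate:
     * 10 bits x 125MHz x 1 edge = 5 bits x 125MHz x 2 edges = 1.25Gbps. */
    printf("10-bit SDR: %.2fGbps\n", 10 * 125e6 * 1 / 1e9);
    printf("5-bit DDR : %.2fGbps\n",  5 * 125e6 * 2 / 1e9);

    const int ports[] = { 24, 48 };
    for (int i = 0; i < 2; i++) {
        int n = ports[i];
        printf("%d ports: 10-bit %4d pins, RGMII %3d pins (saves %d)\n",
               n, n * PINS_10BIT, n * PINS_RGMII,
               n * (PINS_10BIT - PINS_RGMII));
    }
    return 0;
}

At 48 ports, the reduced interfaces save on the order of 500 package pins, which is exactly the scaling pressure described above.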

To determine which parallel interface is best for a particular application, designers must carefully plan and understand the signal chain, especially the MAC/ASIC design that interfaces to the Gigabit Ethernet transceivers. If the objective is to increase the Gigabit Ethernet port count as much as possible while maintaining system size, then one of the reduced-bit interfaces (RTBI/RGMII) should be considered.

Serial interfaces
The high-speed serial interfaces available on today's Gigabit Ethernet transceivers fall into the following signaling-level categories: LVPECL (low-voltage pseudo emitter-coupled logic), CML (current-mode logic), and VML (voltage-mode logic) (Figure 3). In the past, serial-interface selection was driven by the type of optical module the transceiver had to interface with. Because today's designs use ac-coupled SFP optical modules, this consideration is now less significant than factors such as power drawn from the I/Os and termination requirements. For example, the latest transceivers are implemented using VML technology with built-in termination. VML drivers, when ac-coupled, are compatible with LVPECL. VML drivers are implemented in CMOS and don't require external pull-up resistors because their architecture uses NMOS and PMOS transistors to drive both the falling and rising signal edges.


Figure 3: Logic-level comparisons are shown for today's Gigabit Ethernet transceivers

Key considerations in selecting the correct serial interface are the overall impact on power dissipation, the overall impact on implementation, and interoperability with optical/electrical modules.

Control interfaces
As the number of ports on a single piece of silicon has increased, control interfaces have also evolved, from separate I/O pins for each port to robust serial communication buses. For Ethernet, this serial communication bus is known as management data I/O (MDIO). The bus has been defined by the IEEE through various clauses of the Ethernet standard IEEE 802.3 (Clause 22). MDIO is a simple two-wire serial interface that connects a management device (such as a microprocessor) to management-capable transceivers (such as multi-port Gigabit Ethernet transceivers or 10GbE XAUI transceivers) to control the transceiver and gather status information from it. This information includes:

- link status
- speed ability and selection
- power-down
- power/hibernate status
- TX/RX mode selection
- auto-negotiation control
- loopback mode control

Transceiver vendors can add more information-gathering capabilities on top of those required by the IEEE. For multi-port Gigabit Ethernet transceivers, it's recommended that devices with MDIO be selected over devices without it.
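
As a rough illustration of what driving this bus involves, here is a minimal bit-banged Clause 22 read in C. The frame format (32-bit preamble, start, opcode, 5-bit PHY and register addresses, turnaround, 16 data bits) follows IEEE 802.3 Clause 22; the mdio_write_bit/mdio_read_bit pin helpers are hypothetical platform functions, not part of any particular device's API.

#include <stdint.h>

/* Hypothetical platform GPIO helpers (assumed, not a real API). */
extern void mdio_write_bit(int bit);    /* drive MDIO, then clock MDC       */
extern int  mdio_read_bit(void);        /* release MDIO, clock MDC, sample  */

/* Clause 22 read frame: preamble, start (01), opcode read (10),
 * 5-bit PHY address, 5-bit register address, turnaround, 16 data bits. */
uint16_t mdio_read(uint8_t phy_addr, uint8_t reg_addr)
{
    int i;
    uint16_t data = 0;

    for (i = 0; i < 32; i++)            /* preamble: 32 ones */
        mdio_write_bit(1);

    uint16_t hdr = (0x1 << 12) | (0x2 << 10) |   /* ST=01, OP=10 (read) */
                   ((phy_addr & 0x1F) << 5) |
                   (reg_addr & 0x1F);
    for (i = 13; i >= 0; i--)           /* 14 header bits, MSB first */
        mdio_write_bit((hdr >> i) & 1);

    mdio_read_bit();                    /* turnaround: station releases bus */
    mdio_read_bit();                    /* PHY drives 0 here                */

    for (i = 0; i < 16; i++)            /* 16 data bits, MSB first */
        data = (uint16_t)((data << 1) | (mdio_read_bit() & 1));

    return data;
}

For example, mdio_read(0, 1) would fetch the Clause 22 status register of the PHY at address 0; bit 2 of that register is the link-status bit listed above.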

Power dissipation
Another key consideration in selecting a Gigabit Ethernet transceiver is power dissipation, a critical constraint in almost every electrical system today. For Ethernet routers and switches, it has become a dominating factor in many high-port-count designs. Even as Gigabit Ethernet port density increases for a given form factor, power budgets have remained nearly constant. Basically, designers have to pack more ports into the same chassis while maintaining, or only slightly increasing, overall power use.

New Gigabit Ethernet transceivers give system designers some relief in this area. In the past five years, Gigabit Ethernet transceivers have moved down the silicon process curve from biCMOS/bipolar to low-power CMOS. As a result, per-port power consumption has decreased from around 1W to less than 200mW.

A good rule of thumb in selecting Gigabit Ethernet transceivers is to identify devices built on a CMOS process. The power numbers for these devices should range from 200mW/channel to 300mW/channel. Supply voltages should also scale with process, with most devices supporting 1.8V or 2.5V supplies.
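
A back-of-the-envelope check like the C sketch below is often enough to see whether a candidate device fits the power budget. The per-port numbers are the biCMOS and CMOS figures cited above; the 10W budget for the transceiver portion of a line card is a made-up example, not a figure from this article.

#include <stdio.h>

/* Per-port power from the ranges cited in the text (mW). */
#define BICMOS_MW_PER_PORT 1000.0   /* older biCMOS/bipolar parts   */
#define CMOS_MW_PER_PORT    250.0   /* midpoint of 200-300mW range  */

int main(void)
{
    const double budget_w = 10.0;   /* hypothetical per-card budget */
    const int ports[] = { 24, 48, 72 };

    for (int i = 0; i < 3; i++) {
        double bicmos_w = ports[i] * BICMOS_MW_PER_PORT / 1000.0;
        double cmos_w   = ports[i] * CMOS_MW_PER_PORT   / 1000.0;
        printf("%2d ports: biCMOS %5.1fW, CMOS %5.1fW (%s %.1fW budget)\n",
               ports[i], bicmos_w, cmos_w,
               cmos_w <= budget_w ? "fits" : "exceeds", budget_w);
    }
    return 0;
}

The arithmetic makes the scaling problem obvious: at 72 ports, biCMOS parts would consume 72W for the transceivers alone, while CMOS parts stay in the tens of watts.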

Packaging
Most of today's large multi-channel transceivers are housed in large plastic ball-grid array (PBGA) packages, with ball pitches ranging from 1mm down to 0.8mm. Package selection often plays a critical role in determining how many ports can be packed into a design. In the case of Gigabit Ethernet transceivers, meeting port-count goals can require larger BGA packages or packages whose parallel channels run in DDR modes. Package-selection decisions should be weighed against board-design constraints as well as overall system architecture. At some point, BGA sizes will reach a point where implementation cost outweighs the benefit of greater port count.

Today's Gigabit Ethernet transceivers trade increased channel count for more sophisticated, and sometimes cumbersome, interfacing modes. In selecting multi-channel Gigabit Ethernet transceivers, look for BGA packages with fewer than 300 pins, a ball pitch of about 1mm, and a package body size of less than 20-by-20mm.

Cost
The cost per port of a Gigabit Ethernet transceiver is one key variable most designers use to compare one vendor against another. Most new transceivers are developed using a low-cost CMOS process, and these processes have, for the most part, been cost-optimized. Advances in packaging technology, along with competition in the contract packaging market, have also brought more cost-competitive solutions to market. Today, the cost per port for a transceiver ranges from $2.50 down to as low as $1.50 in high volumes.

Other criteria that directly affect cost are implementation costs (external components, etc.), power-management needs, and thermal-management costs (heatsinks, fans, etc.). Also, transceiver selection is often made with specific ASIC/FPGA/MAC development objectives in mind; in this case, the overall cost of the combined ASIC-plus-transceiver design, rather than the cost of the Gigabit Ethernet transceiver by itself, should be the decision point. In selecting a transceiver, designers must consider their entire signal chain and relate key design objectives to their component selection.

In performance terms, all Gigabit Ethernet transceivers must meet IEEE requirements to qualify as Gigabit Ethernet (IEEE 802.3z) transceivers. However, just meeting standards-driven performance targets isn't enough in today's market. Designers should look for the differentiating factors (power, size, cost, etc.) that have the most impact on their particular design.

To see the impact of using one of today's newer multi-channel Gigabit Ethernet transceivers, consider an application that requires many ports: a 24-port switch-router. In this example, we'll compare the impact of using single-channel transceivers versus a multi-channel device.

To implement the 24-port solution, three TI TLK2208B devices can support all 24 ports. If the design used older single-channel transceivers, it would require 24 individual transceivers. Using the eight-channel part saves about 1,300mm² of raw board space.

Large multi-channel devices tend to use newer CMOS process technologies and have more efficient architectures than their predecessors, giving new transceiver designs a natural advantage in power dissipation. For example, the TLK2208B dissipates about 165mW/channel. In comparison, traditional single-channel devices dissipate 250mW/channel to 600mW/channel, depending on the technology.
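
Putting the example's numbers together, the following C sketch compares the two 24-port implementations in device count and total power. The per-channel figures are the ones quoted above, and the board-space figure is the article's roughly 1,300mm² estimate rather than a computed value.

#include <stdio.h>

int main(void)
{
    const int ports = 24;

    /* Multi-channel: three 8-channel TLK2208B devices, ~165mW/channel. */
    int multi_devices = ports / 8;
    double multi_w = ports * 165.0 / 1000.0;

    /* Single-channel: 24 discrete transceivers, 250-600mW/channel. */
    double single_lo_w = ports * 250.0 / 1000.0;
    double single_hi_w = ports * 600.0 / 1000.0;

    printf("Multi-channel : %d devices, %.1fW total\n", multi_devices, multi_w);
    printf("Single-channel: %d devices, %.1f-%.1fW total\n",
           ports, single_lo_w, single_hi_w);
    printf("Power saved   : %.1f-%.1fW; board space saved: ~1,300 mm^2\n",
           single_lo_w - multi_w, single_hi_w - multi_w);
    return 0;
}

The eight-channel device ends up at about 4W for all 24 ports, versus 6W to 14.4W for the discrete approach, on top of the 21-device reduction in placement and routing effort.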

About the author
Atul Patel is a marketing manager for Gigabit serdes products at Texas Instruments. He can be reached at atulp@ti.com.



