EE Times-Asia

Finisar chair contemplates fiber's future

Posted: 16 Feb 2007

Keywords: Jerry Rawls, Finisar, shift to 100Gbit Ethernet, shift from 10 to 100Gbit Ethernet, Loring Wirbel

Rawls: We'll do whatever our customers want, but I'll tell you that the leap from 10Gbit to 100Gbit is a big cliff.

Founded in 1989, Finisar Corp. was one of the few optical-component companies to sustain a healthy business. It survived the telecom crash of 2001, as well-funded startups tanked, and giants like Lucent and Nortel shuttered their internal optoelectronics operations. In an interview with EE Times, Finisar chairman and CEO Jerry Rawls weighed in on single-mode fiber's rise in the data center and whether the shift from 10Gbit to 100Gbit Ethernet might be too much, too soon.

EE Times: Intel and the University of California, Santa Barbara reported last year on the bonding of an indium phosphide (InP) light source with silicon, and Intel predicted that the hybrid bonding of InP and silicon would give rise to the first practical silicon laser. Any comment on the announcement and the press frenzy it generated?
Jerry Rawls: It's an interesting manufacturing breakthrough, to be sure. But when I saw the coverage, the first thing that came to mind was Hewlett-Packard and its bubble switch.

Remember, at the height of the boom, when HP demonstrated an optical switch that came out of its bubble jet printing work? The interest among analysts and the press was incredible. But what remains of that work today?

The data center may be more important to fiber than telecom right now. But it's hard to figure out whether corporate centers will stick with single-mode fiber, or use the new IEEE long-range multimode (LRM) standard that can use the Fiber Distributed Data Interface (FDDI) legacy installed base. And some vendors are promoting twisted-pair copper networks using the new 10GBase-T standard. Where does Finisar see the best opportunity?
In the short run, LRM is the best bet because enterprise managers can take advantage of installed fiber, and OEMs can look to component vendors to move from last-generation Xenpak to XFP (the 10Gbit small form-factor pluggable module). We'll see a transition from Xenpak using the LX-4 standard, with four channels at roughly 3Gbps each, to the single-channel LRM.

[Editor's Note: LX-4 uses four-laser, coarse wavelength-division multiplexing for a 10Gbit connection and requires larger modules like Xenpak, while the single-laser LRM can use smaller modules such as XFP. While LRM is often cited as inherently cheaper per port than LX-4, it requires higher-performance lasers and dedicated electronic dispersion compensation chips.]
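As a back-of-the-envelope check on the lane math behind LX-4-style links (the per-lane figures below come from the published XAUI/LX-4 specifications, not from the interview): four lanes at 3.125Gbaud with 8b/10b line coding deliver 10Gbits of usable payload.

```python
# Illustrative lane math for a four-lane, 10Gbit link of the LX-4/XAUI type.
# Assumed figures: 4 lanes at 3.125 Gbaud, 8b/10b coding (8 data bits per 10 line bits).
lanes = 4
raw_per_lane_gbps = 3.125
coding_efficiency = 8 / 10  # 8b/10b imposes 20% line-coding overhead

payload_gbps = lanes * raw_per_lane_gbps * coding_efficiency
print(payload_gbps)  # 10.0 Gbit/s of payload across the four channels
```

This is why the interview's round number of "four channels at 3Gbps" adds up to a 10Gbit connection: the extra 0.125Gbaud per lane is consumed by coding overhead.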

The XFP module was supposed to be the ideal 10Gbit fiber transceiver. Cisco pushed it, but then seemed to back off. Should smaller OEMs follow whatever Cisco does?
It's difficult to take a contrarian approach regarding Cisco. Their problem with XFP is that Cisco's Catalyst enterprise switching group decided that the module did not meet their needs, particularly in reaching 300m over multimode. But in the meantime, Cisco's routing groups were working on XFP, and many other telecom equipment manufacturers saw XFP as the path to pluggable optics. We see XFP as a strong player, and it has helped take our company beyond our enterprise roots and into metropolitan applications.

The Fibre-Channel people have promoted the SFP+ small-form-factor pluggable package as the best option for 10Gbits, since dispersion compensation chips could be placed outside the module. What's your take?
SFP+ has the potential to be a very large market because the package is small enough to allow high port density. We're very active in the T11 working group standards. The issue goes beyond the ultimate cost of a port, in both the 10GbE silicon cost and the optics cost. Adding 10Gbit links is a way to bring value, and SFP+ provides a path to increasing overall system value.

What do you think of the 10GBase-T standard in the enterprise: contender or niche?
Right now, it appears to be pretty nichey because of the cost of the DSP solutions, the power dissipation and the fact that copper solutions at 10Gbits require you to look at new wiring types, like Category 6a. The more interesting issue in our mind is the mix we will see between multimode and single-mode fiber. Multimode has represented virtually all of our data center wiring since the days of FDDI. It's the legacy multimode fiber that resulted in standards like LX-4.

There's going to be a point at which the data center is going to have to start using single-mode fiber, because it has almost unlimited bandwidth and you can use large numbers of wavelengths. In multimode, we have some limits on the number of wavelengths that can be used, and we have clear limits on speed. As you move to the data center of the next decade, where we're going to have 40Gbit and 100Gbit links, I think you're going to see more and more data centers wired in high-speed trunks using single-mode fiber.

Does it make sense to try to move from 10GbE to 100GbE directly, or should we look at 40 or 80?
We'll do whatever our customers want, whether it's 100Gbit or 40Gbit. But I'll tell you that the leap from 10 to 100 is a big cliff.

The absolute difference between the two speeds, along with the difficulty the industry has had with 10GbE, has caused the industry to be about two years late on the implementation of 10Gbit links, compared with original projections. The delta between 1 and 10 is 9Gbits; that's already a big cliff to climb. The jump from 10 to 100 is enormous by comparison, even though it's the same factor of 10. In absolute terms, it's a formidable obstacle.
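Rawls's "big cliff" argument can be put in numbers: each generational jump is the same relative factor of 10, but the absolute bandwidth gap to engineer across grows tenfold each time. A minimal illustration:

```python
# Relative vs. absolute jumps between Ethernet generations (Gbit/s).
# The factor is constant, but the absolute delta grows by 10x per step.
steps = [(1, 10), (10, 100)]
for lo, hi in steps:
    print(f"{lo} -> {hi} Gbit/s: factor {hi / lo:.0f}x, absolute delta {hi - lo} Gbit/s")
```

The 1-to-10 step required closing a 9Gbit/s gap; the 10-to-100 step requires closing a 90Gbit/s gap, which is the sense in which a "factor of 10" understates the difficulty.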

We have customers today who say their next system will be based on 40Gbits, and others who say, 'No, we're going to go directly to 100.' They don't yet know the economics to get to those two levels, nor do they know the time frame.

The 10Gbit market was plagued with way too many transceiver packages. Will this get any better at higher speeds?
I don't think it's likely to change much in the future. We may learn a little from the lessons of the past. The problem is the move back and forth between serial and parallel interfaces to fiber.

The first thing defined was XAUI (the 10Gbit Attachment Unit Interface), a four-lane interface to the host system. Once that was defined, Xenpak was the best package they could come up with for putting all the ICs together in a less than fully integrated way.

As the ICs become more integrated, packages become smaller and circuitry becomes more sophisticated; that allows the evolution of a module into a smaller form.

XFP was always an alternative to a 4bit parallel interface. In theory, it was going to be lower-cost and easier to implement. But its shortcoming was its inability to do 300m over multimode fiber, as Cisco demanded.

I think in the future, we're always going to follow the path of starting with a product and then meeting customers' demand for higher speed, lower cost and lower power. Then size will evolve accordingly.

- Loring Wirbel
EE Times
