
Bridging options enable configurable computing

Posted: 29 Jan 2009

Keywords: PCIe, PCI Express, FPGA, configurable computing

By Mike Alford
Market Manager, Computing
Gennum Corp.

PCIe has become a mainstream PC technology that has quickly supplanted parallel PCI and accelerated graphics port (AGP) in the PC platform, much as parallel PCI displaced ISA and the Video Electronics Standards Association's local bus (VLB) over a decade ago. And now, history is about to repeat itself in the embedded industry, only with a twist.

Parallel PCI was first introduced in the early 1990s by Intel Corp. to provide a high-speed peripheral pipe into its increasingly faster PC processors. Earlier processor buses, including the ISA bus, required only simple interfaces that could be glued together with TTL devices and some small PLDs. PCI, on the other hand, required a "bridge" to handle the more complex protocol and tight timing that the spec demanded in order to achieve significantly higher throughput. Graphics chipmakers such as ATI and Nvidia were among the first to embrace the high bandwidth that PCI enabled, incorporating the interface directly on their GPUs. Other common PC peripherals, such as disk controllers and Ethernet, were serviced by silicon with native PCI interfaces.

Entering embedded
With the momentum around PCI created by the PC industry, it was inevitable that embedded systems would embrace the technology. The diversity of requirements in the embedded market created the need for bridging solutions that could bring PCI to non-PC systems. Thus, the PCI-to-local-bus bridge was introduced by companies such as PLX Technology, V3 Semiconductor (QuickLogic), AMCC, and Tundra. This helped fuel the transition away from proprietary embedded buses and established PCI as the single most ubiquitous interconnect technology in embedded systems.

The introduction of PCIe has seeded another transition. Again, the move to PCIe is driven by Intel and the insatiable appetite of the PC processor for higher data rates than legacy PCI can accommodate. Quixotic attempts to beef up parallel PCI to meet the challenge, such as PCI-X, have had lackluster success and will represent only a blip in history compared with the impact of PCIe.

The transition to PCIe in the embedded industry is well underway, and much of this requirement is being serviced by legacy PCI bridges. Much of the pain associated with the original transition to parallel PCI was software, as PCI imposes a discipline for system resource discovery, enumeration, and driver interaction. For PCIe, compatibility with legacy PCI has ensured that this software bottleneck is mitigated. Consequently, the transition to PCIe can be as simple as slapping a legacy bridge onto an existing design. For many applications, where the legacy PCI bus isn't a performance bottleneck, this approach will be adequate. While the additional cost, area, and power of the legacy bridge are obvious drawbacks, the simplicity of the solution is compelling.

Real-time video capture
Video has become one of the 'killer apps' for PCIe due to the high bandwidth of high-definition (HD) content, especially when uncompressed. With features such as simultaneous transmit/receive, virtual channels, and scalable link width, PCIe is certainly up to the task of HD video. However, the advantages of PCIe cannot be realized when legacy PCI-to-PCIe bridges sit in the signal path. Take the case of HD video, including 1080p60. The raw data rate for 1080p60 video is:

Bitrate = 1,080 (vertical lines) x 1,920 (horizontal pixels) x 24BPP (bits per pixel) x 60fps = 2.99Gbit/s (373MBps)

For 32 bits per pixel, this increases to 3.98Gbit/s (498MBps).
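
For readers who want to check the arithmetic, here is a minimal sketch (Python, purely illustrative; the helper functions are not from any library) that reproduces the figures above:

```python
# Back-of-the-envelope check of the raw, uncompressed video bitrates quoted above.

def raw_bitrate_gbps(lines, pixels, bits_per_pixel, fps):
    """Raw video bitrate in Gbit/s (no blanking, audio, or metadata included)."""
    return lines * pixels * bits_per_pixel * fps / 1e9

def gbps_to_mbps(gbps):
    """Convert Gbit/s to MByte/s, using decimal units as the article does."""
    return gbps * 1e9 / 8 / 1e6

for bpp in (24, 32):
    gbps = raw_bitrate_gbps(1080, 1920, bpp, 60)
    # Prints roughly 2.99 Gbit/s (373 MBps) and 3.98 Gbit/s (498 MBps)
    print(f"1080p60 at {bpp} bpp: {gbps:.2f} Gbit/s ({gbps_to_mbps(gbps):.0f} MBps)")
```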

Considering that video typically includes audio and other metadata, the bandwidth requirements increase still further. If we compare this to the peak raw bandwidth of PCI/PCI-X, we have:

- PCI, 32bits at 33MHz = 132MBps
- PCI/PCI-X, 64bits at 66MHz = 528MBps

While a 64bit, 66MHz PCI/PCI-X bus has sufficient bandwidth in theory, in practice it is tenuous even at 24 bits/pixel. This is due to the overheads of PCI and PCI-X, which prevent them from achieving anywhere close to their theoretical peak bandwidth in real systems. Typically, there can be a 30 percent overhead (a little lower for PCI-X), resulting in a usable bandwidth of roughly 370MBps. This is on the edge of the requirement for 1080p60/24BPP video and provides no margin.
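
Extending the sketch above, and assuming the same roughly 30 percent overhead for both bus flavors (an approximation; the exact figure varies by system), the margin, or lack of it, falls out directly:

```python
# Rough comparison of parallel PCI/PCI-X bandwidth against the 1080p60 capture
# requirement, assuming ~30 percent protocol overhead (the article notes PCI-X
# overhead is a little lower in practice).

VIDEO_MBPS = 373  # 1080p60 at 24 bits/pixel, from the calculation above

def pci_peak_mbps(bus_width_bits, clock_mhz):
    """Theoretical peak bandwidth of a parallel PCI/PCI-X bus in MByte/s."""
    return bus_width_bits / 8 * clock_mhz

for name, width, clock in (("PCI 32bit/33MHz", 32, 33),
                           ("PCI/PCI-X 64bit/66MHz", 64, 66)):
    peak = pci_peak_mbps(width, clock)
    usable = peak * 0.7                      # ~30 percent overhead assumed
    print(f"{name}: peak {peak:.0f} MBps, usable ~{usable:.0f} MBps, "
          f"margin vs 1080p60/24bpp: {usable - VIDEO_MBPS:+.0f} MBps")
```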

Another factor to consider is that conditions in the system, such as virtual memory spill/fill to disk, may create periods of latency where data flow is restricted for a few milliseconds. This creates the need to "catch up" after such an event. Unless there is sufficient additional bandwidth, it may be impossible to catch up before the next such event, resulting in dropped video frames.
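
To make the catch-up argument concrete, the sketch below assumes an illustrative 5ms stall and compares a link with no headroom against one with roughly 1,000MBps per direction (approximately the x4 PCIe figure discussed below); both numbers are assumptions for illustration only:

```python
# Illustrative only: how a brief stall turns into a buffering and catch-up
# requirement. The 5 ms stall duration and the link figures are assumptions
# chosen to match the scenarios discussed in the text.

VIDEO_MBPS = 373      # 1080p60 at 24 bits/pixel
STALL_MS = 5          # assumed stall, e.g. virtual-memory spill/fill to disk

def catch_up_ms(link_mbps, video_mbps=VIDEO_MBPS, stall_ms=STALL_MS):
    """Time to drain the backlog buffered during a stall, or None if the link
    has no headroom above the steady-state video rate."""
    backlog_mb = video_mbps * stall_ms / 1000   # data buffered during the stall
    headroom = link_mbps - video_mbps           # spare bandwidth for draining it
    return None if headroom <= 0 else backlog_mb / headroom * 1000

for name, link in (("PCI/PCI-X, ~370 MBps usable", 370),
                   ("x4 PCIe, ~1,000 MBps per direction", 1000)):
    t = catch_up_ms(link)
    print(name, "->", "can never catch up" if t is None else f"catches up in {t:.1f} ms")
```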

By contrast, a four-lane PCIe link provides about twice the bandwidth of a 64bit, 66MHz PCI/PCI-X bus. Unlike legacy PCI, PCIe can also move data in both directions simultaneously, which doubles the bandwidth again for applications that operate full duplex. With that much additional bandwidth, a four-lane PCIe video capture solution can ensure continuous capture.
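
For reference, here is a minimal sketch of where that bandwidth comes from, assuming first-generation PCIe signalling (2.5GT/s per lane with 8b/10b encoding, the rate in common use when this article was written):

```python
# Per-direction throughput of a PCIe link, assuming Gen1 signalling: 2.5 GT/s per
# lane with 8b/10b encoding gives 2.0 Gbit/s (250 MByte/s) of payload-carrying
# bandwidth per lane, per direction, before transaction-layer overhead.

def pcie_gen1_mbps(lanes):
    symbol_rate = 2.5e9          # raw transfer rate per lane, transfers/s
    encoding = 8 / 10            # 8b/10b line-encoding efficiency
    return lanes * symbol_rate * encoding / 8 / 1e6   # MByte/s per direction

x4 = pcie_gen1_mbps(4)
print(f"x4 PCIe: {x4:.0f} MBps each direction "
      f"(~{x4 / 528:.1f}x a 64bit/66MHz PCI-X bus, with full duplex on top)")
```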

PCIe provides the bandwidth and features to enable a new level of performance for applications such as video capture. At the same time, 1080p50/60 video equipment is invading both broadcast studios and living rooms. What are the options available to implement an application like video capture over PCIe?

