
Attain audio, video performance goals

Posted: 18 Feb 2008

Keywords: DSP core, high-quality audio/video communications, HD video

By Jian Wang, Thanh Tran, Ivan Garcia, Pradeep Bardia
Texas Instruments Inc.

As the migration to high definition (HD) picks up speed, video system designers face new challenges related to bandwidth requirements, image quality, transcoding and digital media codec flexibility. These are difficult issues even for relatively closed systems that operate in proximity to each other.

An order of magnitude more processing power is required to encode and decode HD video than standard-definition (SD) video. While a single DSP can handle a stream of SD video encode/decode, for example, up to five may be needed for HD 720p at 30fps, and 12 to 13 DSPs may be required for HD 1080p at 30fps.
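The gap is easy to see from the raw pixel rates alone. The C sketch below compares the per-second macroblock workload of the three formats; the per-macroblock codec work also grows at HD resolutions (larger motion-search ranges, for instance), which is why the DSP counts above exceed the bare pixel-rate ratios:

/* Back-of-the-envelope pixel-rate comparison between SD and HD.
 * Illustrative only; real DSP counts also depend on codec complexity
 * (search range, reference frames), not on pixel rate alone. */
#include <stdio.h>

static double mb_per_sec(int width, int height, int fps)
{
    /* Codec work is commonly budgeted per 16x16 macroblock. */
    return (width / 16.0) * (height / 16.0) * fps;
}

int main(void)
{
    double sd     = mb_per_sec(720, 480, 30);    /* SD            */
    double hd720  = mb_per_sec(1280, 720, 30);   /* HD 720p       */
    double hd1080 = mb_per_sec(1920, 1080, 30);  /* HD 1080p      */

    printf("720p  is %.1fx the SD macroblock rate\n", hd720 / sd);
    printf("1080p is %.1fx the SD macroblock rate\n", hd1080 / sd);
    return 0;
}

Running it shows 720p at roughly 2.7 times and 1080p at 6 times the SD macroblock rate, before any codec overhead is counted.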

Trick of transmission
No one has a trickier set of problems to solve than designers whose systems interface with the IP network, or private networks, to transmit video to distant endpoints. IP-based videoconferencing is one of the most obvious examples of this application. There are many subsets of video communications, from IP video telephony to sophisticated applications in which enhanced wideband audio, presentation data and video boxes are integral parts of the complete real-time system solution.

In addition to handling the difficult migration from SD to HD video, engineers must find ways to minimize end-to-end latency. Visual artifacts such as distorted video caused by network contention/congestion or inadequate video compression implementations can pose problems. The most challenging aspect of the user experience is ensuring a natural video communication flow among participants, a task that sets aggressive limits on system latency.

For a natural communications experience, system designers typically target a less-than-250ms end-to-end delay, which includes a nondeterministic network, audio and video compression and decompression, and other system latencies. Video communications design engineers must also deal with endpoints that can vary widely and sometimes dynamically.
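To make the 250ms target concrete, the sketch below tallies one possible budget. The per-stage figures are hypothetical placeholders, not measurements from this article, but they show how quickly the nondeterministic network segment consumes the allowance:

/* Illustrative end-to-end latency budget against a 250ms target.
 * All stage figures are assumed values for demonstration only. */
#include <stdio.h>

struct stage { const char *name; int ms; };

int main(void)
{
    const struct stage budget[] = {
        { "capture + pre-processing", 20 },
        { "video encode",             50 },
        { "packetization + send",     10 },
        { "network + jitter buffer", 100 },  /* the nondeterministic part */
        { "video decode",             40 },
        { "render/display",           20 },
    };
    int total = 0;
    for (unsigned i = 0; i < sizeof budget / sizeof budget[0]; i++) {
        printf("%-26s %4d ms\n", budget[i].name, budget[i].ms);
        total += budget[i].ms;
    }
    printf("%-26s %4d ms (target: under 250 ms)\n", "total", total);
    return 0;
}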

From a system-level perspective, designers have an urgent need to manage heavy processing yet still fill out a large number of end-user products by simply adding or subtracting peripherals. An IP-based video phone, for example, would require both a camera and an LCD, but a streaming IP-based set-top would require only the display. At the other end of the product spectrum, a sophisticated videoconferencing system might need to handle multiple HD video streams.
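One way to picture this is as a single platform description with peripherals switched in or out per product. The configuration struct below is a hypothetical sketch of that idea, not a TI API:

/* Sketch of deriving several products from one SoC platform by
 * enabling or omitting peripherals. Types and product names here
 * are illustrative assumptions, not part of any SDK. */
#include <stdbool.h>
#include <stdio.h>

struct av_product_config {
    bool camera_in;   /* image pipeline / sensor front end */
    bool lcd_out;     /* display back end                  */
    int  hd_streams;  /* simultaneous HD encode/decode     */
};

static const struct av_product_config ip_video_phone   = { true,  true, 1 };
static const struct av_product_config streaming_settop = { false, true, 1 };
static const struct av_product_config conf_room_system = { true,  true, 4 };

int main(void)
{
    const struct av_product_config *p = &streaming_settop;
    printf("camera:%d lcd:%d hd_streams:%d\n",
           p->camera_in, p->lcd_out, p->hd_streams);
    return 0;
}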

Familiar formula
Ideally, system designers would handle these challenges with a silicon platform that is scalable to up to 16 data streams; software configurable to accommodate new algorithms as well as upgrades to existing ones; and flexible enough to change product types without much additional design effort. In other words, they seek the familiar formula: performance, flexibility and scalability at attractive price points. Conventional architectures, however, all fall short.

Thanks to the processing and power efficiencies that come from being designed for a specific system, ASICs have historically been a viable choice for video. However, the case for ASICs is not strong in HD video. It is difficult to use a "boxed" compression engine to perform codec processing, because bit rate and delay requirements can only be fulfilled by tuning the codec to user-scenario conditions. Since user scenarios are constantly evolving and may change dynamically during a video session, conventional hardwired ASIC technology fails the flexibility test. The cost of ASIC development is another inhibitor: NRE costs could easily reach millions of dollars.

Integrating multiple conventional, programmable DSPs on a board can provide the flexibility needed for HD video. Performance goals can also be attained. But the silicon cost multiplies too quickly to provide the magic combination of a sophisticated, scalable solution that hits the right price point. Interchip communication among too many chips can also break the total system latency budget for some product types.

SoC technology offers the most viable alternative. Careful chip partitioning and design can turbocharge performance. That goes a long way toward addressing the order-of-magnitude higher performance required for the leap from SD to HD. It also addresses total system latency by paring the latency contribution of encoding, decoding and, frequently, transcoding. With the right architectural choices, system performance can be efficiently scaled by putting multiple SoCs and/or a cluster of programmable DSPs on a board.

Figure: The DSP is just one piece in a six-element SoC architecture.

DSP core, peripherals
To handle its share of the processing, the DSP core should run at about 600MHz and be capable of executing 4,800MIPS. Signal processing metrics must be at the high end of the performance spectrum. TI's TMS320C64x+ core, for example, can execute four 16bit multiply-accumulates (MACs) per cycle, for a total of 2,376 million MACs per second (MMACS), or eight 8bit MACs per cycle, for a total of 4,752MMACS.
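As a quick sanity check, the arithmetic below reproduces those figures; the quoted MMACS numbers work out to a 594MHz clock, consistent with the "about 600MHz" guidance:

/* Reproducing the MAC-throughput arithmetic quoted above. */
#include <stdio.h>

int main(void)
{
    const double clock_mhz = 594.0;                /* implied by the quoted figures */
    printf("16bit: %.0f MMACS\n", clock_mhz * 4);  /* 4 MACs/cycle -> 2,376 MMACS  */
    printf(" 8bit: %.0f MMACS\n", clock_mhz * 8);  /* 8 MACs/cycle -> 4,752 MMACS  */
    return 0;
}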

The DSP core in the SoC would be assigned to perform rate control, mode decision, high-level motion estimation and slice-level processing. The codec processing assigned to the DSP is usually light enough to leave headroom for other algorithms (e.g. camera panning detection, skip-macroblock detection and light detection). To perform this additional work, the DSP needs access to the accelerator buffer via the high-bandwidth crossbar switch fabric.
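A minimal sketch of that partitioning, assuming every function is a hypothetical stub rather than a TI API, might look like this:

/* The DSP core handles rate control, mode decisions and slice-level
 * work; block-level kernels are offloaded to a hardware accelerator
 * reached through the crossbar switch fabric. All stubs below are
 * illustrative placeholders. */
#include <stdio.h>

struct frame { int num_slices; };

static void update_rate_control(struct frame *f) { (void)f; }           /* DSP: QP from bit budget */
static void choose_modes(struct frame *f, int s) { (void)f; (void)s; }  /* DSP: inter/intra modes  */
static void offload_slice(struct frame *f, int s)                       /* fabric: ME/DCT/quant    */
{ (void)f; printf("slice %d queued to accelerator\n", s); }
static void detect_camera_panning(struct frame *f) { (void)f; }         /* spare DSP cycles        */

void encode_frame(struct frame *f)
{
    update_rate_control(f);              /* DSP-side rate control          */
    for (int s = 0; s < f->num_slices; s++) {
        choose_modes(f, s);              /* DSP-side mode decision         */
        offload_slice(f, s);             /* hand block work to accelerator */
    }
    detect_camera_panning(f);            /* light side algorithm on DSP    */
}

int main(void) { struct frame f = { 4 }; encode_frame(&f); return 0; }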

The signal processing performance of the core can be enhanced by integrating a video processing subsystem on the SoC. Having many A/V features integrated in hardware saves programming time and reduces the need to dedicate software cycles to interfacing with and controlling external devices.

Some peripherals would handle specific video requirements at the front end and back end. Front-end functionality should include an image pipeline for camera image capture and processing. Because of the unpredictability of user scenarios, the image pipeline should support both BT.656/BT.1120-compliant devices and CCD/CMOS sensors.

Back-end functionality should include an integrated on-screen display driver and integrated DACs to provide analog and/or digital RGB/YCbCr video output. Other desirable integrated features would include networking peripherals, A/V interfaces and an enhanced DMA controller with support for up to 64 simultaneous transfer channels.
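The flavor of such a multichannel DMA controller can be sketched as below. The descriptor layout is a generic illustration of 2D video transfers, not the actual EDMA register map:

/* Generic sketch of a 64-channel DMA setup for 2D video blocks.
 * Field names and the dma_queue helper are illustrative assumptions. */
#include <stdint.h>

#define DMA_MAX_CHANNELS 64

struct dma_xfer {
    uint32_t src;         /* source address (e.g. capture buffer)    */
    uint32_t dst;         /* destination (e.g. accelerator input)    */
    uint16_t line_bytes;  /* bytes per video line                    */
    uint16_t num_lines;   /* lines per transfer (2D block)           */
    int16_t  src_pitch;   /* stride between lines at the source      */
    int16_t  dst_pitch;   /* stride between lines at the destination */
};

static struct dma_xfer channels[DMA_MAX_CHANNELS];

/* Queue a 2D transfer on one of the 64 channels; a real driver would
 * also program trigger events and completion interrupts. */
int dma_queue(int ch, const struct dma_xfer *x)
{
    if (ch < 0 || ch >= DMA_MAX_CHANNELS) return -1;
    channels[ch] = *x;
    return 0;
}

int main(void)
{
    struct dma_xfer luma_copy = { 0x80000000u, 0x90000000u,
                                  1280 * 2, 720, 1280 * 2, 1280 * 2 };
    return dma_queue(0, &luma_copy);
}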

About the authors
Jian Wang is a video systems engineer, Thanh Tran is a video infrastructure systems manager, Ivan Garcia is a video infrastructure systems engineer and Pradeep Bardia is worldwide marketing manager of the DSP video solutions unit; all the authors work at Texas Instruments Inc.



