EE Times-Asia > Processors/DSPs
Open-source APIs up multimedia performance

Posted: 16 Feb 2007

Keywords: video design, DSP in digital video application, digital video design, open-source API, GStreamer, Rishi Bhattacharya

DSPs offer outstanding multimedia performance. Typically, they require just 40 to 50 percent as many cycles as a general-purpose processor (GPP) core to run a codec, and they offer far greater flexibility and reconfigurability than ASICs. Yet until now, programmers have had to learn proprietary languages to take advantage of DSPs in digital video applications. The emergence of APIs is eliminating that requirement, making it possible to easily leverage DSPs from applications running on the GPP.

Open-source multimedia frameworks, which typically run under the Linux OS on the GPP, are a suitable target for these APIs. The computational burden of video codecs can be offloaded by leveraging the APIs, which abstract many of the complexities of DSP programming. This approach only requires programmers to have basic knowledge of the DSP and eliminates the need to write code to stitch together DSP functions with those that run on the GPP. Those advantages, plus the ability to use the capabilities that free open-source plug-ins and frameworks offer, can substantially reduce time-to-market for new video products.

Integration challenges
Developers of digital video systems face integration challenges. Digital video systems are composed of multiple encoders, decoders, codecs, algorithms and other software components, which must all be integrated together into an executable image long before any content can run on the system. Stitching all these elements together and making sure they function cohesively can be a difficult task. Some systems will require distinct video, imaging, speech, audio and other multimedia modules. Developers that manually integrate each software module or algorithm are distracted from working on value-added functionality, such as adding innovative features.

Many digital video developers have taken the open-source path to build software. A common approach is to obtain significant parts of the software from an open source, and leverage in-house expertise in the areas of usability and hardware integration. Developers often participate in open-source projects to develop technology to fulfill specific needs and integrate the open-source code with internally developed code to create a product.

Texas Instruments Inc. (TI) has developed an API that allows DSPs to be leveraged from open-source multimedia frameworks such as GStreamer. The API enables multimedia programmers to use the DSP codec engine from within a familiar environment. The interface frees digital video programmers from the complexity of programming DSPs, making it easy for ARM/Linux developers to exploit DSP codec acceleration without having to understand the hardware. The interface also automatically and efficiently partitions work between the ARM and the DSP, eliminating the need to write code to interface between functions that run on the DSP and those that run on GPP cores. TI delivered the interface as a GStreamer plug-in developed in compliance with open-source community standards.

GStreamer has become popular in the digital-video programming community because it abstracts the manipulation of different media in a way that simplifies the programming process. It makes it possible to write a general video or music player that can support many different formats and networks. Most operations are performed, not by the GStreamer core, but by plug-ins. GStreamer base functionality is primarily concerned with registering and loading plug-ins and providing base classes that define their fundamental capabilities.


GStreamer filters, buffers
Source filters present the raw multimedia data for processing. They may get it from a file on an HDD, CD or DVD drive, or they may get it from a "live" source such as a TV receiver card or a network. Some source filters simply pass on the raw data to a parser or splitter filter, while other source filters also perform the parsing step themselves. Transform filters accept either raw or partially processed data and process it further before passing it on.

There are many types of transform filters (including parsers) that split raw byte streams into samples or frames, compressors and decompressors, and format converters. Renderer filters generally accept fully processed data and play it on the system's monitor, through the speakers or possibly through some external device. Also included in this category are "file writer" filters that save data to disk or other persistent storage, and network transmission filters.

Data processing takes place in the plug-in's chain() or loop() function. This function can be as simple as a scaling element or as complicated as a full MP3 decoder. After data is processed, it is sent out from the source pad of the GStreamer element using gst_pad_push(), which pushes the data to the next element in the linked pipeline.
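The chain-and-push data flow described above can be modeled in a few lines. The sketch below is a hypothetical, simplified illustration in Python (the Element class, transform callback and link() method are inventions for this example, not the real GStreamer API); chain() plays the role of the plug-in's chain function, and pushing to the downstream element stands in for gst_pad_push().

```python
# Hypothetical, simplified model of GStreamer's chain-based data flow.
# Each element processes a buffer in chain() and hands the result to
# the next linked element, the way a plug-in's chain function calls
# gst_pad_push(). All names here are illustrative only.

class Element:
    def __init__(self, transform=None):
        self.transform = transform   # per-buffer processing step, if any
        self.downstream = None       # next element in the pipeline
        self.received = []           # a sink element records buffers here

    def link(self, downstream):
        self.downstream = downstream
        return downstream

    def chain(self, buf):
        """Process one buffer, then push it to the linked element."""
        if self.transform is not None:
            buf = self.transform(buf)
        if self.downstream is not None:
            self.downstream.chain(buf)   # analogue of gst_pad_push()
        else:
            self.received.append(buf)    # terminal (sink) element

# Build a tiny pipeline: scaler (a transform filter) -> sink.
scaler = Element(transform=lambda b: [x * 2 for x in b])
sink = Element()
scaler.link(sink)

scaler.chain([1, 2, 3, 4])
print(sink.received)   # [[2, 4, 6, 8]]
```

A real transform plug-in follows the same shape: it never pulls data itself; upstream elements drive it by calling its chain function.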

Buffers are the basic unit of data transfer in GStreamer. The GstBuffer type provides all the state necessary to define a region of memory as part of a stream. Representation of data within GStreamer via GstBuffer structures follows the approach taken by several other operating systems and their respective multimedia frameworks (e.g. the media sample concept in Microsoft DirectShow). Sub-buffers are also supported, allowing a smaller region of a buffer to become its own buffer, with mechanisms in place to ensure that neither memory space goes away prematurely.

Buffers are usually created with gst_buffer_new(). After a buffer has been created, one will typically allocate memory for it and set the size of the buffer data.
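The buffer and sub-buffer semantics can be sketched as follows. This is a hypothetical model of the concept, not the GstBuffer API itself: the Buffer class, its new() constructor and sub_buffer() method are illustrative stand-ins for gst_buffer_new() and GStreamer's sub-buffer mechanism. The key idea it demonstrates is that a sub-buffer views a region of its parent's memory and holds a reference to the parent, so the backing memory cannot go away while the sub-buffer is alive.

```python
# Hypothetical sketch of GstBuffer-style buffers and sub-buffers.
# A sub-buffer exposes a region of its parent's memory and keeps a
# reference to the parent so neither goes away prematurely.

class Buffer:
    def __init__(self, data, parent=None, offset=0, size=None):
        self._data = data       # backing memory (shared with the parent)
        self.parent = parent    # reference that keeps the parent alive
        self.offset = offset
        self.size = len(data) - offset if size is None else size

    @classmethod
    def new(cls, size):
        """Like gst_buffer_new() followed by allocating 'size' bytes."""
        return cls(bytearray(size))

    def sub_buffer(self, offset, size):
        """Make a smaller buffer that views a region of this one."""
        assert offset + size <= self.size
        return Buffer(self._data, parent=self,
                      offset=self.offset + offset, size=size)

    def bytes(self):
        return bytes(self._data[self.offset:self.offset + self.size])

buf = Buffer.new(8)
buf._data[:] = b"ABCDEFGH"      # fill the allocated memory
sub = buf.sub_buffer(2, 3)      # view bytes 2..4 of the parent
print(sub.bytes())              # b'CDE'
```

Because the sub-buffer shares the parent's memory rather than copying it, creating one is cheap, which matters when splitting large video frames into smaller processing units.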

Sync processing
A/V synchronization processing during playback generally requires three types of decisions:

Decision to repeat a frame: This step is typically taken when the presentation time of the frame from the stream is more than one frame interval ahead of the time to display.

Decision to display a frame: This is typically made when the presentation time of the frame from the stream falls between a minimum and a maximum threshold around the time to display.

Decision to skip a frame: This is typically done when the presentation time of the frame is at least two frame intervals behind the time to display. The current frame is then skipped and the next one is processed in hopes of catching up on the next frame interval. This continues until either a frame is displayed or there are no more frames left to compare.

Furthermore, a common clock should be used by all elements in the pipeline to facilitate these activities. Fortunately, all of these decisions are made by the A/V base sink classes within the GStreamer core libraries. Thus, many of the complexities of A/V synchronization are abstracted away from the user.
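The three decisions above reduce to comparing a frame's presentation time against the common clock's display time. The sketch below is a minimal illustration of that logic; the function name and the exact thresholds are assumptions chosen to match the description in the text, and in a real pipeline GStreamer's A/V base sink classes make these decisions internally.

```python
# Hypothetical sketch of the three A/V sync decisions described above.
# Thresholds are illustrative assumptions: more than one frame interval
# early -> repeat; at least two frame intervals late -> skip; otherwise
# the frame is within threshold and is displayed.

FRAME_INTERVAL = 1.0   # one frame duration, in arbitrary time units

def sync_decision(presentation_time, display_time):
    """Return 'repeat', 'display' or 'skip' for the current frame."""
    delta = presentation_time - display_time
    if delta > FRAME_INTERVAL:
        return "repeat"    # frame is early: show the previous frame again
    if delta <= -2 * FRAME_INTERVAL:
        return "skip"      # at least two frames behind: drop to catch up
    return "display"       # within threshold: present this frame now

print(sync_decision(2.5, 1.0))   # repeat
print(sync_decision(1.2, 1.0))   # display
print(sync_decision(0.0, 2.5))   # skip
```

The skip branch is the one that runs repeatedly when decoding falls behind: each late frame is dropped until a frame's presentation time lands back inside the display threshold.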

TI developed a GStreamer transform filter plug-in, which leverages the DSP for video decoding and runs on the ARM under the Linux OS.

The new interfaces make it possible to use the GStreamer Linux multimedia framework to leverage the software infrastructure of TI's DaVinci platform of processors. This combined infrastructure provides a flexible framework that can accommodate new generations of multimedia codecs.

The software infrastructure enables design of a wide variety of video products. Leveraging this open-source framework provides video equipment designers with access to a community-supported, robust infrastructure, which can decrease time-to-market.

- Rishi Bhattacharya
Systems and Software Architect, DSP Systems Unit
Texas Instruments Inc.



