EE Times-Asia > RF/Microwave

Mobile net demands optimized design

Posted: 01 Feb 2001

Keywords: mobile internet, internet appliances, 3G, multimedia, JVM

Manufacturers of wireless Internet access systems will play a key role in delivering a new class of wireless services and applications for entertainment, e-commerce and multimedia messaging. Many of these handheld devices will combine the functionality of mobile phones and PDAs, creating appliances that blend computing and wireless communication capabilities. These wireless systems must access the Web, process text, graphics and video, and display this information on a high-resolution screen.

Hybrid mobile Internet appliances must also be able to complete those tasks without compromising responsiveness, application performance or display quality while also maximizing battery life. To meet these objectives, designs must optimally manage the handheld system's scarce resources, such as CPU cycles, memory and memory bandwidth. Designers of wireless handheld Internet appliances must consider the requirements of delivering new media-rich applications that are becoming available, particularly as wireless service providers implement 3G wireless networks.

These new 3G networks will support high-speed data-oriented services at data rates of up to 384Kbps to enable service providers to deliver content for rich data and multimedia applications, such as MPEG-4-based multimedia streams on 3G networks. These applications will allow consumers to watch news clips or access localized video content (such as driving directions and quick tours) via wireless services.

The 3G wireless information appliance must have the necessary CPU processing power to run applications that support value-added telephony services that make full use of the 3G standard's ample bandwidth. For example, the handheld platform must provide enhanced telephone functions, such as servicing an incoming phone call and displaying caller ID while the user is performing traditional handheld computing functions (using the appliance's calendar or accessing a personal information management application).

A second major requirement is the need for high-quality, high-resolution displays that offer the quality that will draw users into the new wireless services. Multimedia content, such as MP3 or MPEG-4 files, will challenge the Internet appliance's ability to deliver high audio quality, as well as provide a viewing experience that will satisfy consumers who may be accustomed to the high-quality images found on multimedia PCs and consumer electronics.

A third major requirement is the need to accommodate the widespread adoption of Java programming. Unlike PCs, which primarily use Intel or AMD processors, there is no dominant processor architecture for handheld platforms. Software developers will use Java since it can run on virtually any CPU architecture found in a handheld system. In addition, Java can implement a comprehensive security model and can be used to deliver applications dynamically over wireless networks. New generations of wireless handheld systems will use 150-MIPS CPUs in PDAs or 40-MIPS CPUs in mobile phones to concurrently run multiple communications protocol stacks, the Java Virtual Machine (JVM) and multimedia decoding. The processor will also handle applications such as Wireless Application Protocol browsers and e-mail programs that run on top of the OS. Moreover, the CPU has to manage system resources, such as graphics display refresh, as well as service interrupts from wireline connectivity interfaces, such as a USB peripheral interface.

Problematic design

To meet all these requirements, designers must overcome a major performance bottleneck in today's handheld system. It is caused by the implementation of the graphics frame buffer as part of the main system memory. A major problem with this approach, particularly on small form-factor devices, is that the CPU and the graphics subsystem compete for system-memory bandwidth.

Limited memory bandwidth slows application response and takes away the ability to manage high-resolution (320 x 320 pixels), 16-bits-per-pixel color screens. As resolutions and color depth increase, the demands on memory bandwidth will continue to grow.

For example, refreshing a 320 x 320-pixel, 16-bit-per-pixel color screen at 75Hz requires reading 14.65MB of frame-buffer data per second. Since screen refresh is an isochronous task, that 14.65MB/s of system memory bandwidth must always be available just for display refresh.

Assuming that sustained memory bandwidth is 60 percent of peak bandwidth, display refresh traffic alone continuously utilizes anywhere from 7 percent to 37 percent of sustainable system memory bandwidth. This becomes even more problematic for portable Web tablets and other wireless information appliance systems with larger screens.
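These figures can be reproduced with a few lines of arithmetic (a sketch: the screen geometry and 60 percent sustained-bandwidth factor come from the text, but the example peak bandwidths are assumptions chosen to bracket the 7 to 37 percent range):

```python
# Display refresh bandwidth for a 320 x 320, 16-bpp screen at 75 Hz.
WIDTH, HEIGHT = 320, 320
BYTES_PER_PIXEL = 2          # 16 bits per pixel
REFRESH_HZ = 75

bytes_per_second = WIDTH * HEIGHT * BYTES_PER_PIXEL * REFRESH_HZ
mb_per_second = bytes_per_second / (1024 * 1024)
print(f"Refresh traffic: {mb_per_second:.2f} MB/s")   # ~14.65 MB/s

# Share of sustainable bandwidth, assuming sustained = 60% of peak.
# The two peak figures below are hypothetical memory configurations,
# e.g. a narrow/slow versus a wide/fast memory bus.
for peak_mb in (66, 350):
    sustained = 0.60 * peak_mb
    share = 100 * mb_per_second / sustained
    print(f"peak {peak_mb} MB/s -> {share:.0f}% of sustainable bandwidth")
```

Note that the refresh load is fixed by the panel, so the percentage scales inversely with whatever memory system the designer can afford.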

Some sophisticated PDA systems use CPUs with 16KB instruction and 16KB data Level 1 caches and have no Level 2 cache. Since memory accesses for graphics operations are fairly random, these caches do not necessarily mitigate the effect of reduced system bandwidth. In fact, graphics data can often pollute caches by displacing application data currently being processed by the cache, or by causing extra cache fills and writeback transactions that will further introduce memory latency for the CPU pipeline and degrade system performance.
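The cache-pollution effect can be illustrated with a toy direct-mapped cache model (purely illustrative: the 16KB cache geometry matches the text, but the working-set and frame-data sizes, and the 2:1 interleaving ratio, are assumptions):

```python
# Toy direct-mapped cache: streaming graphics accesses evict an
# application's working set, collapsing its hit rate.
LINE = 32                    # bytes per cache line (assumed)
SETS = 512                   # 512 lines x 32 bytes = 16KB data cache

def run(trace):
    """Return the hit rate of an address trace on an empty cache."""
    cache = [None] * SETS
    hits = 0
    for addr in trace:
        tag, index = addr // LINE // SETS, (addr // LINE) % SETS
        if cache[index] == tag:
            hits += 1
        else:
            cache[index] = tag   # miss: fill, evicting prior occupant
    return hits / len(trace)

app = [i * LINE for i in range(256)] * 8           # 8KB working set, reused
gfx = [0x100000 + j * LINE for j in range(4096)]   # 128KB streaming frame data

interleaved = []
for i, a in enumerate(app):
    interleaved.append(a)
    interleaved += gfx[i * 2 : i * 2 + 2]          # graphics traffic mixed in

print(f"app alone:       {run(app):.0%} hit rate")
print(f"with gfx stream: {run(interleaved):.0%} hit rate")
```

With the streaming accesses interleaved, every application line is displaced before it is reused, so nearly every access misses; this is the thrashing behavior described above.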

Although a Level 2 cache would mitigate this problem by caching larger data sets, there are no cost-effective implementations of a Level 2 cache scheme in PDA-class systems. Heavy utilization of the system or memory bus will also increase system power consumption, resulting in low battery life. Running applications while managing graphics operations will likely cause significant cache thrashing, eventually degrading system performance.

Another possible solution to the display refresh and cache-thrashing problems is to increase the memory bandwidth by providing a wider data bus to the memory. However, this increases board design complexity because of the form factor constraints of wireless handheld devices. Finally, the designer can increase system bus speed, which typically runs at one-third or one-half of CPU speed. But that requires higher CPU speeds, resulting in much higher CPU power consumption.

Given the adoption of Java in wireless applications, it is also important to consider the hardware and software trade-offs a system designer must make to run Java applications efficiently on wireless appliances. Executing Java code requires a JVM, and running a JVM consumes CPU cycles for bytecode translation and verification, garbage collection and other tasks that slow down application performance. Employing a just-in-time (JIT) compiler, which uses a software-based acceleration scheme to improve JVM performance, requires a large amount of memory capacity. The compiler often uses 100KB of memory and post-JIT code will take up several hundred kilobytes more. Those memory requirements can increase the cost and size of handheld systems.

Since memory capacity, system memory bandwidth and CPU cycles are at a premium on a resource-limited wireless information appliance, system designers may choose to exploit hardware-based acceleration solutions. The designer can choose from several third-party solutions, including Java coprocessors, integrated CPU solutions and hardware accelerators. Those solutions typically execute the bytecode translation functionality of the JVM in hardware, thus increasing Java application performance without causing the memory bloat associated with JIT techniques. Similarly, decoding of multimedia streams can be implemented purely in software or by hardware-assist solutions.

In any hardware-accelerated solution, the key to enhancing system performance is to conserve memory bandwidth by offloading graphics operations from a standard CPU to another hardware implementation such as a dedicated ASIC or a controller chip that integrates the CPU. Using either approach, the frame buffer must be separated from the system memory and implemented as a part of the hardware graphics accelerator. This step removes the display refresh and graphics-rendering overhead from the CPU and the memory bus, freeing memory bandwidth for other CPU operations.

A dedicated hardware accelerator must also be designed for high-bandwidth and low-arbitration latency access for each of the functional units. Those chips will provide hardware functionality to support a mix of isochronous (for screen refresh) and bursty traffic over their interfaces.

The key to developing optimal hardware accelerator architecture is to make the right latency and bandwidth trade-offs. As previously mentioned, the designer can increase the amount of system memory bandwidth for the CPU by implementing the frame buffer in embedded memory in the hardware accelerator.

That approach enables the designer to use this frame buffer for I/O transactions from peripheral interfaces, such as USB or I2S, so that transaction data can be sent to the system memory or read by the CPU in bursts. That enables efficient utilization of the system memory bus while reducing the interrupt overhead on the CPU. Instead of interrupting the CPU on every transaction, the CPU is now interrupted once every multiple transactions.
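The interrupt-coalescing benefit described above reduces to simple arithmetic (a sketch; the burst size and transaction count are hypothetical):

```python
# Interrupt coalescing: buffer peripheral transactions in the accelerator's
# memory and interrupt the CPU once per burst rather than once per
# transaction. A burst size of 16 is an arbitrary assumption.
BURST = 16

def interrupts_per_transfer(n_transactions, burst=1):
    """Number of CPU interrupts needed to move n_transactions."""
    return -(-n_transactions // burst)   # ceiling division

n = 4096   # e.g. audio samples arriving over an I2S interface
print("per-transaction interrupts:", interrupts_per_transfer(n))         # 4096
print("burst-buffered interrupts: ", interrupts_per_transfer(n, BURST))  # 256
```

Each avoided interrupt saves a context switch and its associated cache disturbance, which is why buffering in the accelerator helps beyond just batching the bus traffic.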

- Manish Singh

Product Marketing Manager

MediaQ Inc.




