EE Times-Asia > Processors/DSPs

Digitization of analog, RF circuits

Posted: 16 May 2005

Keywords: digital, analog, RF, circuit, 2G

When discussing the integration of analog and RF into digital SoCs, the analog and RF portions usually emerge as the most important factors. However, changes in digital processing are shifting that focus. In recent years, engineers have been working to simplify analog and RF systems by transferring resources and operations to the digital domain, particularly in wireless base station design.

In wireless, the transition from narrowband first-generation systems (e.g. AMPS, with 30kHz dedicated per user) to 2G (GSM, which shares 200kHz among eight time-division multiple-access slots) to 3G (5MHz shared among many sessions) steadily reduces the need for analog filters by shifting filtering, channel selection and processing to the DSP. In the past, such a change was not viable, but the steady decline in dollars per MIPS has made it economical.

While increased DSP performance eases the burden on the analog portions, the reverse is also true. For example, higher-order modulation places tighter demands on the analog design, and improved analog performance in turn assists the digital processing.

Wideband code-division multiple access (W-CDMA) requires approximately 100x the DSP performance of GSM. So, a GSM design supporting 64 users would require eight transceiver units: eight separate power amplifiers, expensive sets of analog filters for adjacent-channel rejection, separate IF and RF strips and separate data converters. In contrast, W-CDMA can do the same with only one RF system, with a consequent reduction in cost. All the separation and channelization is done digitally.

In the early years, GSM was far cheaper, partly because the analog parts cost less than their digital equivalents. Over time, however, the savings from Moore's Law for DSP drive down 3G costs in a way that is not available to 2G, where the costs of analog components decline more slowly. Hence, the "cost per Erlang" of 3G is much lower than that of 2G. Incidentally, this is a major reason why the technology will succeed; new services are a bonus, but increased efficiency and reduced cost for voice are the "killer app."

While developers are shaping new standards specifically to capitalize on the benefits of 3G, the same wideband approach can also be applied to older standards. For example, while GSM is inherently a narrowband protocol, it is possible to deploy a multichannel system using a wideband approach: consolidating a number of channels into one using a wideband ADC and DAC, a single RF stage and a multichannel power amplifier (MCPA). Then, between the multichannel converters and the separate per-channel signal processing, a digital filtering stage separates the distinct channels.
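A minimal sketch of that digital channelization step, assuming a hypothetical complex-baseband capture at 1MHz with a GSM-like 200kHz channel raster (all numbers illustrative, not from the article): each channel is mixed to DC and selected with a lowpass FIR, which rejects its neighbors.

```python
import numpy as np

# Hypothetical wideband capture: 1 MHz complex sample rate,
# GSM-like 200 kHz channel raster (numbers are illustrative only)
fs = 1_000_000
t = np.arange(4096) / fs

# Composite signal: two narrowband carriers on different channels
centers = [100_000, 300_000]
x = sum(np.exp(2j * np.pi * f * t) for f in centers)

def extract_channel(x, fc, fs, bw=200_000, ntaps=129):
    """Digital downconversion: mix channel fc to baseband, then
    lowpass-filter to select it and reject adjacent channels."""
    n = np.arange(len(x))
    bb = x * np.exp(-2j * np.pi * fc * n / fs)
    # Windowed-sinc lowpass FIR, cutoff at half the channel bandwidth
    cutoff = bw / 2 / fs
    k = np.arange(ntaps) - (ntaps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * k) * np.hamming(ntaps)
    return np.convolve(bb, h, mode='same')

ch0 = extract_channel(x, centers[0], fs)   # carrier at 100 kHz survives
empty = extract_channel(x, 500_000, fs)    # unused channel: mostly rejected
```

In a real MCPA base station this filter bank would run per channel (or as a polyphase channelizer) between the wideband converters and the per-channel DSPs; the sketch only shows the mix-filter principle.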

This implies that for a limited number of protocols, dedicated, optimized logic is most efficient. As the number of standards rises, though, a flexible architecture proves more economical. The breakeven is probably around three. For one or perhaps two protocols, the cost of the all-digital approach may exceed that of a "more efficient" dedicated, part-analog system; but to support more (say, W-CDMA, GSM, Bluetooth and Wi-Fi), a degree of programmability is more efficient. Ultimately, there may be a requirement to support approximately 14 air interfaces, comprising different flavors of 2G and 3G, different WLAN variants, Bluetooth, GPS, WiMax and the like.

In addition to ensuring multimode capability, a second argument exists for using flexible digital sections: The new standards are not static. Indeed, the greater complexity of designs inevitably means that the standards change rapidly. In the W-CDMA world, we have seen four versions of the standard in just four years: pre-Release 99 systems (Freedom of Mobile Multimedia Access, for example), formal Release 99, Release 4 and Release 5 with its hugely important high-speed downlink packet-access mode. Similarly, in Wi-Fi we have seen 802.11b, 802.11g and 802.11n (all with associated media access control-layer changes). Developing a more cost-efficient architecture is desirable, but only if it offers a usable life span, which means that flexibility and the ability to be updated are mandatory.

Newer technologies like W-CDMA and orthogonal frequency-division multiplexing share a couple of interesting properties. First, as discussed, they shift functionality from analog to digital. Second, the desire for improved performance and bandwidth efficiency tends to increase the complexity of the modulation; hence, these signals are far closer to Gaussian than the previous generation of constant-envelope waveforms. This stems from the increased depth of modulation (from QPSK to 16QAM or beyond) and from the use of multitone modulation or CDMA. However, there is a price to pay: while bandwidth efficiency increases, power efficiency drops.
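The contrast can be seen numerically in the peak-to-average power ratio (PAPR) of the two waveform families. The following sketch compares a constant-envelope signal with an OFDM-like multitone signal; the waveform parameters (64 subcarriers, random QPSK) are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Constant-envelope waveform (GMSK-like): only the phase varies, so the
# magnitude is fixed and the PA can run near saturation. PAPR is 0 dB.
phase = np.cumsum(rng.uniform(-0.3, 0.3, 1024))
const_env = np.exp(1j * phase)

# Multitone (OFDM-like) waveform: 16 symbols of 64 subcarriers carrying
# random QPSK. The sum of many tones is approximately Gaussian, with
# occasional large peaks that force the PA to back off.
sym = (rng.choice([-1, 1], (16, 64)) + 1j * rng.choice([-1, 1], (16, 64)))
ofdm = np.fft.ifft(sym / np.sqrt(2), axis=1).ravel() * np.sqrt(64)

print(papr_db(const_env))  # 0 dB: no back-off needed
print(papr_db(ofdm))       # several dB above 0: back-off required
```

The gap between the two numbers is exactly the back-off discussed below: every dB of PAPR is a dB of headroom the amplifier must hold in reserve.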

For simple, constant-envelope technologies like GSM and Bluetooth, you can obtain around 40 percent efficiency in a power amplifier. For complex technologies, though, a very linear amplifier is needed, and to achieve linearity the amplifier must be run inefficiently. The situation is made worse by the high peak-to-average ratio, which requires that you "back off" the power amplifier so that it always stays in its linear range, wasting still more efficiency. In effect, you have traded transmission efficiency (more bits per second per Hz per cell, enabling fewer base stations and higher rates) against circuit efficiency (burning far more heat and power).

For 3G, this efficiency could be as low as 3 percent, with a 20W power amplifier burning roughly 700W of heat. Heat causes failures; it demands air conditioning, the lack of which is a prime failure point; and you are paying for electricity that you do not actually want to use.
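The arithmetic behind that figure is straightforward (the article's 700W is a round number):

```python
# Back-of-the-envelope check of the 3 percent efficiency figure
p_out = 20.0                # W of RF actually transmitted
efficiency = 0.03           # 3 percent drain efficiency
p_dc = p_out / efficiency   # power drawn from the supply
p_heat = p_dc - p_out       # everything that is not RF becomes heat
print(round(p_dc), round(p_heat))  # 667 647
```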

Traditionally, analog techniques have been used to address this need. Ultraprecise transistors with careful matching and incredibly clever process techniques improve device linearity.

More recently, digital predistortion (DPD) has been used to replace these inefficient techniques, boosting efficiency by 20 percent or possibly more. It allows the power amplifier to be nonlinear, so it can run more efficiently, closer to saturation, but adds DSP resources to model that nonlinearity, predicting and "reversing" it. In effect, you input a deliberately "wonky" signal, knowing that the nonlinearity will distort it into the output you wanted all along.
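A toy illustration of the "reversing" idea, assuming a hypothetical memoryless PA model with mild cubic compression (real DPD systems use complex-gain polynomials with memory terms and adaptive feedback, which this sketch omits): the predistorter is fitted as the polynomial inverse of the measured PA curve, so the cascade comes out approximately linear.

```python
import numpy as np

def pa(x):
    # Hypothetical memoryless PA model: gain compresses as drive rises
    return x - 0.1 * x ** 3

# Characterize the PA over its (monotonic) operating range
drive = np.linspace(0.0, 1.2, 200)
measured = pa(drive)

# Fit the predistorter as the inverse curve: drive as a function of
# desired output, so that pa(predistort(x)) is approximately x
predistort = np.poly1d(np.polyfit(measured, drive, 5))

x = np.linspace(0.0, 1.0, 50)
raw_err = np.max(np.abs(pa(x) - x))              # uncorrected compression
dpd_err = np.max(np.abs(pa(predistort(x)) - x))  # residual after DPD
```

With the correction in place, the PA can be driven much closer to saturation for the same distortion budget, which is where the efficiency gain comes from.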

- Rupert Baines

VP of Marketing

picoChip Designs Ltd




