
Steps to reduce non-stationary mobile phone noise up to 25dB

Posted: 22 Apr 2008

Keywords: reduce mobile phone noise, non-stationary noise

By Lloyd Watts
Audience

Mobile carriers are intimately aware of the role that voice quality plays in customer retention. One of the primary factors affecting voice quality is environmental noise, and so any means of suppressing noise provides a potential differentiator for handset manufacturers.

Until recently, noise suppression technology focused on reducing slow-changing stationary noise sources. However, many noise sources, known as non-stationary noise, change quickly and so go unsuppressed. As a result, subscribers cannot reliably use their handsets on busy streets, in crowded restaurants, or even at home.

Suppressing non-stationary noise brings substantial benefits to both subscribers and carriers. Users gain the freedom to speak and hear clearly wherever and whenever they want, enjoy increased privacy by being able to speak softly in noisy environments, and won't be asked to leave important conference calls. Carriers see a reduction in customer churn, increased airtime usage, more efficient use of network bandwidth, and significant savings of capital and operational expenses.

1. Understand the difference between stationary and non-stationary noise
Because of its relatively constant nature (a loud fan running in the background, for example), stationary noise can readily be recognized and effectively subtracted through conventional signal processing techniques. Non-stationary noise, in contrast, is characterized by rapid or random change, such as a person talking, background music, or keyboard typing. By the time non-stationary noise is recognized as noise, it has already passed, and so more sophisticated noise suppression techniques are required.
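
As a rough illustration of the conventional approach, the sketch below (in Python, with illustrative frame sizes) performs simple spectral subtraction: the noise spectrum is estimated once from frames assumed to contain only noise and then subtracted from every later frame, which is exactly why it copes only with slowly changing, stationary noise.

import numpy as np

def spectral_subtract(x, frame_len=512, noise_frames=10):
    # Classic spectral subtraction: estimate a fixed noise spectrum from the
    # first few frames (assumed noise-only), then subtract it from all frames.
    hop = frame_len // 2
    win = np.hanning(frame_len)
    starts = range(0, len(x) - frame_len, hop)
    spectra = [np.fft.rfft(x[s:s + frame_len] * win) for s in starts]
    noise_mag = np.mean([np.abs(f) for f in spectra[:noise_frames]], axis=0)

    out = np.zeros(len(x))
    for k, f in enumerate(spectra):
        mag = np.maximum(np.abs(f) - noise_mag, 0.0)       # floor at zero
        frame = np.fft.irfft(mag * np.exp(1j * np.angle(f)), frame_len)
        out[k * hop:k * hop + frame_len] += frame * win    # overlap-add
    return out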

2. Use two microphones to improve understanding of the auditory scene
Next-generation noise suppression techniques such as Auditory Scene Analysis (ASA), Beam Forming (BF) and Blind Source Separation (BSS) use multiple microphones to more accurately identify, locate, and group noise sources than is possible with a single microphone. Today's handset manufacturers already recognize this trend and have begun to introduce a second microphone into handset architectures.
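
One way to see the benefit of a second microphone is the inter-mic level difference: the talker's mouth is close to the primary microphone, so the voice arrives much louder there than at the secondary microphone, while distant noise arrives at similar levels in both. The sketch below is only an illustration of that idea (the threshold and frame size are arbitrary), not any particular vendor's algorithm.

import numpy as np

def two_mic_frame_gains(primary, secondary, frame_len=256, ild_threshold_db=6.0):
    # Score each frame by the inter-microphone level difference (ILD) and
    # attenuate frames that look like distant (noise-dominated) sound.
    gains = []
    n = min(len(primary), len(secondary))
    for i in range(0, n - frame_len, frame_len):
        p = primary[i:i + frame_len]
        s = secondary[i:i + frame_len]
        ild = 10 * np.log10((np.sum(p ** 2) + 1e-12) / (np.sum(s ** 2) + 1e-12))
        gains.append(1.0 if ild > ild_threshold_db else 0.1)
    return np.array(gains)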

3. Use grouping principles to separate voice of interest
Grouping technologies simplify noise suppression while making it possible to identify non-stationary noise sources. Auditory Scene Analysis (ASA), for example, uses the human auditory pathway as a model and so processes noise the way people actually listen to specific sounds. By grouping acoustic energy to recreate the original sound, ASA enables accurate grouping of sounds from multiple sources while avoiding any blending of sounds that should be perceived as separate. Grouping principles can be broadly described as sequential (those that operate across time) and simultaneous (those that operate across frequency).

4. Use multiple cues to group otherwise difficult-to-group sounds correctly
Each grouping cue has limitations. Using multiple cues enables otherwise difficult-to-analyze sounds to be grouped correctly. Some of the more important cues include:

• Pitch: Harmonics generated from a pitched sound source form distinct frequency patterns and so can be used to distinguish one sound from another. Pitch is one of the primary cues for distinguishing between male and female voices; a simple pitch estimate is sketched after this list.

• Spatial information: The location of a sound, based on its distance and direction, can be used to group sounds and so differentiate them from the voice of interest.

• Onset time: When two bursts of sound energy and their corresponding harmonics are aligned in time, they are likely from the same source.
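
As a concrete example of the pitch cue, the sketch below estimates the fundamental frequency of a short frame by autocorrelation; harmonics that share this fundamental can then be grouped with the same talker. The function name and search range are illustrative only.

import numpy as np

def estimate_pitch(frame, fs, fmin=80.0, fmax=400.0):
    # Autocorrelation pitch estimate over the typical range of human voices.
    frame = frame - np.mean(frame)
    corr = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(corr[lo:hi])
    return fs / lag if corr[lag] > 0 else None   # Hz, or None if unvoiced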

5. Reduce convergence time for more instantaneous noise removal
Traditional noise suppression techniques must first converge before they can remove noise, making them ineffective in suppressing non-stationary noise sources. Fast-acting cues that characterize sound as it arrives allow even instantaneous events, such as a finger snap, to be identified and removed.
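
A minimal sketch of a fast-acting cue, assuming nothing beyond frame energies: a sudden frame-to-frame rise in energy flags a transient such as a finger snap immediately, with no adaptive filter that needs time to converge. The 12dB rise threshold is illustrative.

import numpy as np

def detect_transients(x, frame_len=256, rise_db=12.0):
    # Flag frames whose energy jumps sharply relative to the previous frame.
    hop = frame_len // 2
    energy = np.array([np.sum(x[i:i + frame_len] ** 2) + 1e-12
                       for i in range(0, len(x) - frame_len, hop)])
    rise = 10 * np.log10(energy[1:] / energy[:-1])
    return np.where(rise > rise_db)[0] + 1       # indices of sudden-onset frames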

6. Employ logarithmic versus linear frequency scales (FCT vs. FFT)
The familiar FFT decomposes a signal into frequency components on a linear scale, which limits spectral resolution at low frequencies, and it uses a constant frame size and frequency-independent bandwidth. In contrast, an approach such as the Fast Cochlea Transform (FCT), based on characteristics of the human cochlea, operates on a logarithmic frequency scale and so does not sacrifice low-frequency resolution. By operating continuously instead of in frames, the FCT also reduces processing latency, making it better suited to identifying non-stationary noise sources. Additionally, the FCT operates with frequency-dependent bandwidth and so can more precisely match the time-frequency tradeoff at each frequency across the range of human hearing.
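
The FCT itself is proprietary, so the sketch below only contrasts the two frequency scales: FFT bins are spaced uniformly, while a logarithmically spaced set of analysis bands (generated here with NumPy's geomspace) concentrates resolution at low frequencies, where voiced speech carries its pitch and first harmonics.

import numpy as np

fs, n_fft, n_bands = 8000, 512, 64
fft_bins = np.fft.rfftfreq(n_fft, d=1.0 / fs)      # linear: constant spacing
log_bands = np.geomspace(100.0, fs / 2, n_bands)   # logarithmic spacing

print("FFT bin spacing everywhere: %.1f Hz" % (fft_bins[1] - fft_bins[0]))
print("Log-band spacing near 100 Hz: %.1f Hz" % (log_bands[1] - log_bands[0]))
print("Log-band spacing near 4 kHz: %.1f Hz" % (log_bands[-1] - log_bands[-2]))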

7. Use omni-directional microphones to reduce cost
Certain techniques such as Beam Forming require a specialized cardioid (unidirectional) microphone. Cardioid microphones cost more than omni-directional microphones, have tighter tolerances, must be individually calibrated and matched to within 1dB, introduce restrictions on spacing, and add up to 12dB of noise because of sensitivity to wind and breath. Beam Forming is also limited in that any distractors within the beam of interest will be incorrectly passed through as part of the voice of interest.

It is also important to manage the number of microphones a system requires. Blind Source Separation, for example, uses a simple linear unmixing technique which runs optimally in the presence of at least as many microphones as there are sound sources.
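
The sketch below illustrates that constraint with FastICA from scikit-learn, one common linear unmixing method (not necessarily the one used in any handset): two synthetic sources are mixed onto two "microphones" and then separated. With more sources than microphones, the same unmixing would no longer be well posed.

import numpy as np
from sklearn.decomposition import FastICA

fs = 8000
t = np.arange(0, 2.0, 1.0 / fs)
voice = np.sin(2 * np.pi * 220 * t)               # stand-in for the talker
noise = np.sign(np.sin(2 * np.pi * 3 * t))        # stand-in for a distractor
sources = np.c_[voice, noise]

mixing = np.array([[1.0, 0.6],                    # two microphones hear
                   [0.5, 1.0]])                   # different mixtures
mics = sources @ mixing.T

ica = FastICA(n_components=2, random_state=0)
estimates = ica.fit_transform(mics)               # unmixed source estimates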

8. Treat echoes as independent sound sources
Traditionally, echoes are removed using separate echo cancellation techniques. Such techniques can be compute-intensive, because they must model the echo reflections, and they perform poorly in the presence of rapidly changing noise sources. Grouping cues enable echoes to be treated as simply another noise source. Because the echoes no longer need to be calculated or tracked as they change, suppression becomes instantaneous, with echo suppression performance of up to 46dB.
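
For contrast, here is a minimal sketch of the conventional approach this step moves away from: a normalized LMS adaptive filter that explicitly models the echo path sample by sample and must converge before it removes anything. Tap count and step size are illustrative.

import numpy as np

def nlms_echo_cancel(far_end, mic, taps=128, mu=0.5, eps=1e-6):
    # Adapt an FIR estimate of the echo path and subtract the predicted echo.
    w = np.zeros(taps)
    out = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        x = far_end[n - taps:n][::-1]        # most recent far-end samples
        e = mic[n] - np.dot(w, x)            # residual after echo estimate
        w += mu * e * x / (np.dot(x, x) + eps)
        out[n] = e
    return out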

9. Adopt new testing standards
The mobile industry continues to drive test standards to reflect the higher levels of voice quality made possible by innovations in noise suppression. To ensure the best quality for its products, the industry has recently amended the ITU-T P.835 specification to provide a consistent test methodology for measuring and reporting voice quality with noise suppression technology active.

Figure 1: A time waveform of a signal before and after instantaneous non-stationary noise suppression is applied.

Effective suppression of environmental noise, both stationary and non-stationary, is essential if handset manufacturers and carriers are to keep pace with their competitors. By employing next-generation noise suppression techniques (Figure 1), developers can reduce noise levels in handsets by up to 25dB under a wide range of operating conditions.

About the author
Lloyd Watts
is the founder and chairman of Audience, and as CTO provides ongoing guidance and impetus for the company's core technology direction, as well as the vision of neuro-morphic computing for voice systems. Prior to Audience, he was principal researcher at Paul Allen's Interval Research Corporation. Before Interval, Watts developed ICs and software for satellite communications systems, telephony systems, optical character recognition systems, and LCDs at Microtel Pacific Research, Synaptics, and Arithmos. He also invented a low-delay digital speech-coding algorithm that was sold to Cisco in 1999.




