EE Times-Asia

Configuring complex audio use cases

Posted: 03 Jul 2014

Keywords: smartphone, beamforming, applications processor, Wolfson, WISCE

Now imagine you have made a mistake, and the right headphone has been connected to the unused EQ3 instead of the intended EQ2. This is an easy typo to make, but you would only hear half the signal. Working through all the register fields to spot the one set to EQ3 instead of EQ2 would be laborious in the extreme.

With WISCE, it becomes obvious. From figure 2, you can immediately see the break in the chain: the input to OUT4 right is connected to a floating block (EQ3) instead of to EQ2, which should be feeding the right-channel output. The graphical representation makes it much easier to debug.
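The same point can be shown in miniature. The sketch below is purely illustrative (the output and EQ names mirror the article's example, but the route table is invented, not Wolfson's register map): a tool that compares each output's configured source against the intended route flags the typo instantly, where a flat register dump would not.

```python
# Illustrative only: a tiny stand-in for a routing check such as the
# one WISCE performs graphically. The dictionaries are invented.
route = {       # output -> source actually configured in the registers
    "OUT4L": "EQ1",
    "OUT4R": "EQ3",   # the typo: should have been EQ2
}
intended = {"OUT4L": "EQ1", "OUT4R": "EQ2"}

# Flag every output whose configured source differs from the intent
errors = [f"{out}: {src} (expected {intended[out]})"
          for out, src in route.items() if src != intended[out]]
print(errors)  # ['OUT4R: EQ3 (expected EQ2)']
```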

Software processing blocks
As well as connecting up hardware components, a particular use case may involve some run-time loadable software effects, running on a Digital Signal Processing (DSP) core as part of the CODEC or on a separate dedicated chip. When the use case is loaded, the operating system may need to load a firmware image to the DSP core to get the appropriate effect. This can take a while, sometimes even of the order of hundreds of milliseconds, so the operating system will try to avoid changing firmwares where possible.

Figure 2: A phone call route with a mistake highlighted by WISCE.

Sometimes, however, it is unavoidable. For example, switching from a handset phone call to speakerphone may involve replacing the ambient noise reduction designed for a headset with a beamforming algorithm which tracks the current speaker.

In Android this switching of firmwares is typically handled by a use case manager, specifying any firmware images required and where they should be loaded. The use case manager will track which firmware image is currently loaded on the core. If the use case calls for the firmware currently on the core, there's no need to reload it. If, however, the use case calls for a different firmware it will have to be changed.
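The caching behaviour described above can be sketched as follows. This is a minimal illustration of the idea, not the actual Android use case manager; the class and firmware file names are invented for the example.

```python
# Hypothetical sketch of a use case manager's firmware caching:
# only reload the DSP core when the requested image differs from
# the one already loaded, since a reload can cost 100s of ms.
class DspFirmwareManager:
    def __init__(self):
        self.loaded = None  # firmware currently on the DSP core

    def apply_use_case(self, firmware_name, load_fn):
        """Load firmware only if it differs from what is on the core."""
        if firmware_name == self.loaded:
            return False          # same image: skip the costly reload
        load_fn(firmware_name)    # e.g. hand the image to the DSP driver
        self.loaded = firmware_name
        return True

loads = []  # record what actually gets loaded
mgr = DspFirmwareManager()
mgr.apply_use_case("noise_reduction.bin", loads.append)
mgr.apply_use_case("noise_reduction.bin", loads.append)  # cached, no reload
mgr.apply_use_case("beamforming.bin", loads.append)      # speakerphone switch
print(loads)  # ['noise_reduction.bin', 'beamforming.bin']
```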

Tuning for the best sound
Connecting up the modules is only the first step. They then need to be configured for the best sound. The small form factor of smartphones means compromises have to be made in terms of speakers and their location, which often means they boost certain frequencies more than others. In figure 3, the top plot shows the measured frequency response of the speaker: a slight peak at about 900Hz and another, higher peak at around 3800Hz. These peaks correspond to resonances in the case and would be heard as buzzing.

Figure 3: Five band EQ compensation for speakers.

A five-band parametric EQ with three band-pass filters has been used to tune the audio output to compensate for these peaks, making the overall response more linear across the 800Hz-4kHz range. The middle plot shows the frequency response of the EQ, and the bottom plot shows the effect on the speaker output.
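A common way to realise one band of such a parametric EQ is a peaking biquad filter. The sketch below (a generic textbook formulation, not Wolfson's implementation; sample rate, Q and gain values are illustrative) builds a cut centred on the 900Hz resonance and checks its response at the centre frequency.

```python
import math

def peaking_eq(fs, f0, gain_db, q):
    """Peaking-EQ biquad coefficients (standard audio-EQ form),
    normalised so the leading denominator coefficient is 1."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a]
    den = [1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a]
    return [x / den[0] for x in b], [x / den[0] for x in den]

def gain_at(b, a_coef, fs, f):
    """Magnitude response in dB of the biquad at frequency f."""
    z = complex(math.cos(2 * math.pi * f / fs),
                math.sin(2 * math.pi * f / fs))
    num = b[0] + b[1] / z + b[2] / z**2
    den = 1 + a_coef[1] / z + a_coef[2] / z**2
    return 20 * math.log10(abs(num / den))

# A -6dB cut centred on the measured 900Hz case resonance
b1, a1 = peaking_eq(48000, 900, -6.0, 2.0)
print(round(gain_at(b1, a1, 48000, 900), 1))  # -6.0 at the centre
```

At the centre frequency the filter applies exactly the requested gain, while frequencies well away from 900Hz pass almost unchanged; a second band would be centred on the 3800Hz peak in the same way.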

Figure 3 also shows how the performance tails off below 800Hz as would be expected for a small speaker. This might be compensated for with some bass boost, or with psycho-acoustic tricks to fool the ear into thinking the lower frequencies are actually there, but realistically it's difficult to get a good bass response from a small speaker.

The response of the speakers and microphones, and any resonances, is critically sensitive to the shape and composition of the case and the acoustic chambers within the phone. At the same time, adjusting the settings on the chip requires access to its control interfaces, which makes accurate tuning on the assembled handset challenging.

When creating our settings on an evaluation board, we are not taking the real design of the phone into account. We can simulate the speaker and microphone responses by playing our audio through a tool like Matlab, but there may be subtle aspects that a recording may miss, leading to a suboptimal tuning.

If we bore through the case to introduce wires to access test points or control interfaces, we change the acoustic properties of the phone, so the settings we come up with will not be appropriate for the final handset without the holes.

Another option is to set the phone up with a particular configuration, run the test and record it. The recording can then be analysed to suggest changes, the phone configuration can be updated and the test run again. This makes for slow testing, and discourages the engineer from trying too many tuning options.

The ideal is to do the tuning on the real handset without modification, controlled interactively from the tuning/configuration tool using a remoting technology such as Wolfson's WISCEBridge. A PC running WISCE communicates with a WISCEBridge server over TCP/IP, sending configuration commands and queries to the server, which updates the device or returns its current settings.

The simple protocol and use of TCP/IP mean it can be implemented and deployed in a wide range of form-factor products. There is a version which runs on Linux (and hence Android) and communicates with the operating system to configure the device; this can run over any connection, even Wi-Fi, enabling completely wire-free tuning. There is also a version which communicates with an Android device over ADB (the Android Debug Bridge), requiring just a USB cable and nothing unusual installed on the device.
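The article does not document the WISCEBridge wire format, so the sketch below invents a trivial text protocol ("W addr value" / "R addr" in hex) purely to illustrate the client-server register read/write model over TCP/IP; none of it should be read as the real protocol.

```python
import socket
import threading

registers = {}  # stand-in for the CODEC's register map

def handle(conn):
    """Serve one client: apply writes, answer reads."""
    with conn, conn.makefile("rw") as f:
        for line in f:
            parts = line.split()
            if parts and parts[0] == "W":    # W <addr> <value>
                registers[int(parts[1], 16)] = int(parts[2], 16)
                f.write("OK\n")
            elif parts and parts[0] == "R":  # R <addr>
                f.write(f"{registers.get(int(parts[1], 16), 0):04X}\n")
            f.flush()

# "Device side": listen on an ephemeral local port
server = socket.create_server(("127.0.0.1", 0))
port = server.getsockname()[1]
threading.Thread(target=lambda: handle(server.accept()[0]),
                 daemon=True).start()

# "PC side": write a register, then read it back
with socket.create_connection(("127.0.0.1", port)) as c, \
        c.makefile("rw") as f:
    f.write("W 1A 0042\nR 1A\n")
    f.flush()
    print(f.readline().strip(), f.readline().strip())  # OK 0042
```

Because the transport is plain TCP, the same client logic works whether the link underneath is USB, Ethernet or Wi-Fi, which is what makes the wire-free tuning described above possible.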

About the author
Ian Brockbank is Software Tools Manager at Wolfson Microelectronics.
