Calibration of gain, timing errors in ADCs
Keywords: RF, A/D converters, CMOS, time-interleaved, calibration
Mismatch errors in two-channel TIADC
An efficient way to double the speed of an ADC is to operate two ADCs in parallel with out-of-phase sampling clocks. The unavoidable small mismatches between the transfer functions of the sub-ADCs result in spurious tones that significantly degrade the achievable dynamic range. There are four types of error in this kind of ADC:
- DC offset error,
- Static gain error,
- Timing error,
- Bandwidth error.
The DC offset error is very simple to handle in practice through digital calibration. The bandwidth error is the most difficult to manage and it is usually mitigated through careful design and layout. In this article we will focus on gain and timing error calibration as they are the major contributors to dynamic range loss.
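The spurious tones caused by gain and timing mismatch can be reproduced numerically. The sketch below (all values hypothetical, not from the article) interleaves two ideal sub-channels with a 1% gain mismatch and a small sampling skew, and measures the resulting image spur at Fs/2 - Fin:

```python
import numpy as np

# Two-channel TIADC model: odd samples come from a sub-ADC with a small
# static gain error g and a timing skew tau (illustrative values).
Fs = 1.0                 # normalized sampling rate of the interleaved ADC
Fin = 0.1013 * Fs        # input tone frequency
N = 4096                 # number of interleaved output samples

g = 0.01                 # 1 % gain mismatch on the odd channel
tau = 0.001 / Fs         # timing skew of the odd channel

n = np.arange(N)
t = n / Fs
t[1::2] += tau           # odd samples are taken slightly late
x = np.sin(2 * np.pi * Fin * t)
x[1::2] *= (1 + g)       # odd samples also see the gain error

# Windowed spectrum, normalized to the fundamental
spec = np.abs(np.fft.rfft(x * np.hanning(N)))
spec /= spec.max()
freqs = np.fft.rfftfreq(N, d=1 / Fs)

# The interleaving image spur sits at Fs/2 - Fin
spur_bin = np.argmin(np.abs(freqs - (Fs / 2 - Fin)))
spur_db = 20 * np.log10(spec[spur_bin - 2:spur_bin + 3].max())
print(f"image spur at Fs/2 - Fin: {spur_db:.1f} dBc")
```

With these values the spur lands in the -50 to -40 dBc range, which illustrates why even percent-level mismatch is unacceptable for a 14-bit converter.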
Proposed calibration method
In practice the Nyquist bandwidth of an ADC is never fully used, and a fraction of it is usually dedicated to the roll-off of the anti-aliasing filter. This free band is exploited to inject a constrained calibration signal. A sine wave is selected as the calibration signal because it is easy to generate with high spectral purity. Two main constraints are imposed on it:
1. The amplitude is kept small enough to avoid any impact on the dynamic range while providing enough estimation accuracy. Experiments show that a level in the -40 dBFS to -35 dBFS range provides the best tradeoff for a 14-bit ADC.
2. The frequency is limited to the following discrete values in order to reduce the complexity of the digital signal processing algorithms:
[Equation 1 — image not recovered: the allowed calibration frequencies as a function of Fs, P, K and S]
where Fs is the TIADC sampling frequency, P and K are unsigned integers, and S = ±1 depending on the location of the calibration signal relative to the edge of the Nyquist zone (figure 1). This signal can easily be generated on-chip with a fractional-N PLL using the ADC clock as the reference. By choosing K high enough, the harmonics of the calibration signal alias outside the useful band, which relaxes their filtering requirements. The swing adjustment can be achieved with a programmable attenuator placed at the output of the PLL.
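Checking where the harmonics of a candidate calibration tone land after sampling is a simple folding computation. The sketch below uses a hypothetical Fcal of 0.47 Fs (the actual frequency grid is given by Equation 1) and a standard first-Nyquist-zone folding formula:

```python
# Frequency-plan check: fold each harmonic of the calibration tone back
# into the first Nyquist zone [0, Fs/2]. Values are illustrative only.

def alias(f, fs):
    """Frequency to which a tone at f folds after sampling at fs."""
    return abs(((f + fs / 2) % fs) - fs / 2)

Fs = 1.0          # normalized
Fcal = 0.47 * Fs  # hypothetical calibration tone near the Nyquist edge

for h in range(2, 6):
    fa = alias(h * Fcal, Fs)
    print(f"harmonic {h}: {h * Fcal:.2f} Fs aliases to {fa:.2f} Fs")
```

Running this table for each candidate (P, K, S) is how one verifies that the harmonics stay clear of the useful band for a given frequency plan.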
[Figure 1: Frequency plan showing the location of the calibration signal — image not recovered]
If x0 and x1 denote the outputs of the two sub-ADCs with the calibration signal as input, it can be shown using Equation 1 that these two signals are linked by the following expression (the noise has been ignored):
[Equation 2 — image not recovered: x1 expressed as a linear filtering of x0 with coefficients h0 and h1]
The coefficients h0 and h1 of this linear filtering formula are related explicitly to the gain error g and the timing error t by:
[Equation 3 — image not recovered: h0 and h1 as nonlinear functions of the gain error g and the timing error t]
This nonlinear set of equations can be linearized and inverted using a first-order approximation, since the mismatch errors are kept small by design.
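Since Equation 3 is not reproduced here, the inversion can only be sketched generically: expanding the two coefficients around the matched operating point and keeping first-order terms in the small errors gives a 2x2 linear system,

```latex
\begin{bmatrix} h_0 - h_0^{(0)} \\ h_1 - h_1^{(0)} \end{bmatrix}
\approx
J \begin{bmatrix} g \\ t \end{bmatrix},
\qquad
J = \begin{bmatrix}
\partial h_0/\partial g & \partial h_0/\partial t \\
\partial h_1/\partial g & \partial h_1/\partial t
\end{bmatrix}_{g = t = 0}
\;\;\Longrightarrow\;\;
\begin{bmatrix} g \\ t \end{bmatrix}
\approx
J^{-1} \begin{bmatrix} h_0 - h_0^{(0)} \\ h_1 - h_1^{(0)} \end{bmatrix}
```

where h0(0) and h1(0) denote the coefficient values of a perfectly matched TIADC, so recovering g and t reduces to a fixed 2x2 matrix inversion.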
The estimation algorithm consists of three steps:
1. The calibration signal is extracted and cancelled from the output of the sub-ADCs using an LMS algorithm, yielding the discrete-time signals x0 and x1. This algorithm requires digital cosine/sine reference signals at the calibration frequency. The cosine signal is generated with a small Look-Up Table (LUT) of size 4K (K …).
2. The coefficients h0 and h1 are estimated adaptively from the extracted x0 and x1 signals using an LMS algorithm, as shown in figure 2.
3. The gain and timing errors are then computed from the linearized set of equations as derived from Equation 3.
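The adaptive estimation of step 2 can be sketched in a few lines. The signal model below is an assumption inferred from figure 2's 2-tap filter, namely x1[n] ≈ h0·x0[n] + h1·x0[n-1] (the exact relation is Equation 2), and all numeric values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic extracted calibration signals on the two sub-channels
N = 20000
w = 2 * np.pi * 0.11             # calibration frequency in rad/sample (hypothetical)
x0 = np.cos(w * np.arange(N))
h_true = np.array([0.98, 0.05])  # coefficients hiding the gain/timing errors
x1 = h_true[0] * x0 + h_true[1] * np.roll(x0, 1)
x1 += 1e-4 * rng.standard_normal(N)   # small residual noise

# Standard LMS update on the 2-tap filter coefficients
h = np.zeros(2)
mu = 0.01                        # LMS step size
for n in range(1, N):
    u = np.array([x0[n], x0[n - 1]])
    e = x1[n] - h @ u            # prediction error
    h += mu * e * u              # gradient-descent style correction

print(h)                         # converges toward h_true
```

Once h0 and h1 have converged, the linearized Equation 3 turns them into the gain and timing error estimates used by the correction stage.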
[Figure 2: Background estimation of gain and timing errors through a 2-tap digital adaptive filter — image not recovered]