
Delta-Sigma analog-to-digital converter

Signal processing holds a good deal of internal beauty. To give you a taste of it, I prepared this long article. Although I tried to keep the discussion simple, the article is still long and still requires some basic signal-theory knowledge.

I decided to present this charming element using the delta-sigma converter as an example. A somewhat non-standard approach is used in this article, so if you failed to understand other similar articles, you may try this one. On the other hand, you may also choose to skip the introductory chapters.

Sampling frequency (sampling rate)

Often we want to acquire an analogue signal into a digital computer, because it is so easy and cheap to process signals on digital computers. As digital computers cannot work with analogue signals directly, we have to digitize them. Digitizing means that we have to do two things to an analogue signal: quantize it and sample it.

As you know, the Nyquist theorem tells us that we have to sample an analogue signal at a rate of at least twice the frequency of its highest frequency component. Only then will we be able to perfectly reproduce the original analogue signal.

Unfortunately, to understand the Nyquist theorem you first have to accept the fact that every analogue signal can be represented as a sum of many pure sine waves of different frequencies and amplitudes (the Fourier transform). This really is a fact – even the nastiest-looking signal waveform can be separated into a palette of pure sine waves. I will call these sine waves the “frequency components” of the original analogue signal. Note that each frequency component has two important properties – its frequency and its amplitude (these correspond to the frequency and the amplitude of the pure sine wave).


In this example you have a complex signal that consists of five frequency components


A more practical way to display a signal's frequency components - a frequency spectrum diagram

Now we better understand the Nyquist theorem (also known as the Nyquist-Shannon sampling theorem). If we want to sample an analogue signal that has frequency components at 4Hz, 13Hz, 35Hz, 66Hz and 100Hz, then we have to sample it at a rate of at least 200Hz to be able to reproduce the signal later.
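
If you want to see such a frequency spectrum yourself, here is a minimal sketch in Python (using the numpy library). The component amplitudes are my own arbitrary choice, just for illustration:

    import numpy as np

    fs = 400.0                        # sampling rate, comfortably above 2 x 100Hz
    t = np.arange(0, 1.0, 1.0 / fs)   # exactly one second of signal
    freqs = [4, 13, 35, 66, 100]      # frequency components (Hz)
    amps = [1.0, 0.7, 0.5, 0.3, 0.2]  # arbitrary example amplitudes

    x = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, freqs))

    spectrum = np.abs(np.fft.rfft(x)) / (len(x) / 2)   # amplitude of each frequency component
    for f, a in zip(freqs, amps):
        print(f, "Hz -> measured amplitude", round(spectrum[f], 2), ", expected", a)

Because the signal lasts exactly one second, each FFT bin is exactly 1Hz wide, so spectrum[f] reads off the component at f hertz.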

It is also worth knowing that the sum of the energies carried by the frequency components is equal to the energy carried by the signal itself (Parseval’s theorem).
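
A quick numeric sanity check of Parseval's theorem (again a Python/numpy sketch, with an arbitrary random test signal):

    import numpy as np

    x = np.random.default_rng(0).standard_normal(4096)    # any test signal will do
    X = np.fft.fft(x)                                      # its frequency components

    energy_time = np.sum(x ** 2)                    # energy computed from the waveform
    energy_freq = np.sum(np.abs(X) ** 2) / len(x)   # energy computed from the components
    print(energy_time, energy_freq)                 # the two values agree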

Sure, there are signals whose highest frequency component lies very high – maybe in the gigahertz or terahertz range (or even higher, there are no limits). We can hardly sample such signals. If you look at the waveform of a signal you can get a feeling for how high its frequency components go – if the signal changes a lot in a tiny fraction of time, then surely it has to contain some really high-frequency components.

One example of such a signal is a square wave. Its highest frequency component actually lies at an unlimited frequency (clear, because in an infinitesimal fraction of time the signal changes by its full amplitude). We can call such signals “unlimited-frequency signals” or “unlimited-bandwidth signals”. What do we do when we want to sample such a signal?

Well, nothing much. The fact is that the higher-frequency components in such signals usually become less and less important, because the higher-frequency sine-wave components become smaller and smaller in amplitude. So we can simply cut off the higher frequency components. Doing this cut-off, we will surely distort the signal to some degree – it all depends on how high we set the cut-off frequency. If we set the cut-off frequency high enough, the distortion will be acceptably small.


This is how a perfect square wave can look after we cut off its higher frequencies

The device that cuts off higher frequency components to make sampling possible is called an anti-aliasing filter. The anti-aliasing filter is just a low-pass filter (low-pass means that it only passes low-frequency components).
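
To get a feeling for what such a cut-off does to a square wave, here is a small Python/numpy sketch. It builds the square wave from its Fourier series but keeps only the harmonics below a chosen cut-off frequency – effectively an ideal low-pass (anti-aliasing) filter. The fundamental frequency and the cut-off are arbitrary example values:

    import numpy as np

    t = np.linspace(0, 1, 1000, endpoint=False)
    f0 = 5.0        # square wave fundamental (Hz), arbitrary example value
    cutoff = 100.0  # keep only frequency components below this (Hz)

    # a unit square wave is the sum of its odd harmonics with amplitudes 4/(pi*n)
    x = np.zeros_like(t)
    n = 1
    while n * f0 <= cutoff:
        x += (4 / (np.pi * n)) * np.sin(2 * np.pi * n * f0 * t)
        n += 2

    # x is no longer a perfect square wave: its edges are rounded and slightly rippled
    print("harmonics kept:", (n - 1) // 2, " peak value:", round(x.max(), 3))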

There are, however, some signals whose higher-frequency components do not become less important. We simply have no way to sample such signals without distorting them a lot (a Dirac pulse train or white noise are examples of such signals).

Aliasing

What happens if we sample our analogue signal using a lower sampling frequency (sampling rate) than the Nyquist theorem demands? We will not acquire a correctly sampled image of the signal. If we try to reproduce the original signal from such an ill-sampled image, we will not succeed. The reproduced signal will look quite different from the original.

This distortion of the sampled image, caused by a too-low sampling frequency, is very specific and is called aliasing.

All the frequency components with frequencies lower than half of the sampling frequency will be sampled well, while frequency components with frequencies higher than half of the sampling frequency will not. But they will not be completely missing from the sampled image; these frequency components will still be present, but their frequencies will be misrecognized. The frequencies of these components will be misplaced into the lower portion of the frequency spectrum (at so-called alias frequencies).

If, for example, our signal has frequency components at 4Hz, 13Hz, 35Hz, 66Hz and 100Hz and we sample it with a sampling frequency of 160Hz, we will have a problem, because half of 160Hz is 80Hz and this is lower than the highest frequency component of our signal, placed at 100Hz. In our sampled image we will have components at 4Hz, 13Hz, 35Hz and 66Hz, but our 100Hz component will be mistakenly placed at 60Hz. Of course, when we reproduce such a signal, with components at 4Hz, 13Hz, 35Hz, 60Hz and 66Hz, it will not be the same as the original one.


An example of sampling without aliasing (top) and with aliasing (bottom)

What happens is that higher-frequency components get mirrored around the half-sampling frequency. In our example above, the half-sampling frequency is 80Hz. The 100Hz component, when mirrored around 80Hz, lands at exactly 60Hz (if we had a 120Hz component it would land at exactly 40Hz, and if we had a 150Hz component it would land at exactly 10Hz).
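
The folding rule is easy to put into a few lines of code. A small Python sketch (the helper function name is my own):

    def alias_frequency(f, fs):
        """Apparent frequency of a component at f hertz when sampled at fs hertz."""
        f = f % fs           # the spectrum repeats every fs...
        if f > fs / 2:
            f = fs - f       # ...and mirrors around the half-sampling frequency
        return f

    fs = 160.0
    for f in [4, 13, 35, 66, 100, 120, 150]:
        print(f, "Hz appears at", alias_frequency(f, fs), "Hz")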

All the frequency components get folded into the frequency range that extends from zero up to the half-sampling frequency. When aliasing occurs, no frequency component is lost; the offending components are just misplaced to wrong frequencies.


This is how frequencies get folded in the case of aliasing

As a consequence we can say that, although a signal cannot be reproduced correctly if aliasing occurred, the energy carried by the signal is preserved. The energy carried by the reproduced signal will be the same as the energy carried by the original signal. This conclusion will help us when dealing with quantization noise.

The white noise

White noise is a completely random signal. It has an interesting property: the amplitudes of its frequency components do not depend on frequency. This is what distinguishes it from other kinds of noise.

White noise has no highest frequency component. Even worse, the amplitudes of components at higher frequencies do not tend to become any smaller. Instead, the amplitudes tend to be about equal (on average) across the whole frequency range, from zero to infinity.

What does white noise look like? At any moment it can be at any level – one cannot predict at what level the signal will be in the next moment, no matter how short-term our forecast has to be. The only thing you can measure about this signal is its power (the energy carried in a period of time). Oddly, the average power of a white noise signal tends to be about equal no matter how short the period of time we measure it over.


The white noise signal

Okay, what are the amplitudes of the frequency components that form white noise? They all tend to zero. Really – as there is an infinite range of frequency components, and as their amplitudes do not become any smaller at higher frequencies, it is obvious that the amplitudes must tend to zero; otherwise the sum of the energies carried by these components would be infinite, and we know that the energy carried by a white noise signal is not infinite (remember, the sum of the energies carried by the frequency components equals the energy carried by the signal itself).

Note that in mathematics it is possible to sum up an infinite number of terms that all tend to zero and still get a non-zero result.

Now that we have learned a bit about white noise and its properties, let’s consider what happens if we try to sample a white noise signal.

First, you cannot sample a white noise signal without aliasing occurring – this is because white noise has an infinite frequency range. However, as already explained, although we will not be able to get a correct sampled image of the white noise (and will not be able to reproduce it afterwards), the energy carried by the original white noise signal will still be conserved in our aliasing-poisoned sampled image.

As we explained in the aliasing chapter, the infinite range of white noise frequency components will be folded into the finite frequency range from zero to the half-sampling frequency. No component will be lost. As the power of the signal is still finite and equal to what it was before, and as the signal now occupies a finite frequency range, it is clear that its frequency components can now have non-zero amplitudes. This is a direct consequence of the frequency folding (aliasing) that occurred during sampling.

As it extends infinitely, the frequency range of a white noise signal will be folded an infinite number of times into the zero to half-sampling frequency range. If we fold it into a twice narrower frequency range (because we halved our sampling frequency), we will have to fold it twice as many times (still an infinite number). As a consequence, the noise power per unit of frequency doubles, and the average amplitude of the frequency components grows accordingly.

This is an important conclusion... We may sample a white noise signal with a high sampling frequency or with a low sampling frequency. In both cases aliasing will occur, but the power of the obtained signal will be conserved and equal to the power of the original white noise signal (aliasing doesn’t affect signal power). But if we sample it using a higher sampling frequency, the average amplitude of the frequency components in the obtained signal will be lower than if we sampled it with a lower frequency.


When we sample with a higher sampling frequency, the power of the white noise is redistributed
over a larger frequency range, and so the amplitudes get smaller.
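
This redistribution is easy to observe numerically. In the Python sketch below I use a very densely generated random sequence as a stand-in for the “continuous” white noise, and a crude sampler that just keeps every N-th value (so everything aliases). The rates are arbitrary example values:

    import numpy as np

    rng = np.random.default_rng(0)
    analog = rng.standard_normal(1 << 20)   # stand-in for "continuous" white noise
    f_analog = float(1 << 20)               # pretend rate of the stand-in signal (Hz)

    def sample(step):
        x = analog[::step]                  # crude sampler: keeps every step-th value
        fs = f_analog / step
        X = np.fft.rfft(x)
        density = np.mean(np.abs(X) ** 2) / (len(x) * fs)   # rough noise power per Hz
        return x.var(), density

    for step in (4, 16):                    # a higher and a 4x lower sampling rate
        power, density = sample(step)
        print(f"step {step:2d}: total power {power:.3f}, noise power per Hz {density:.2e}")

The total power stays the same, but at the higher sampling rate it is spread over a four times wider band, so the power per hertz (and with it the component amplitudes) is correspondingly lower.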

Digitization

Now we know enough about sampling, aliasing and white noise, so we can try to digitize a signal. In our example we will first quantize it, and then we will sample it in a separate step.


A very elaborate view of the digitization process
Usually all three steps are done at once

As a result we obtain a digitized signal that can be stored and processed by digital computers. Great.

Let’s consider the first step of our example – the quantization. A quantized signal can only take a limited number of levels. In the quantization process we have to approximate the original signal using only a limited number of pre-determined levels (quantization levels).

In the quantization process we necessarily introduce an error into the signal. I will call this error the “quantization error” or “quantization noise”. The quantization error can be quite small if we use a quantizer that has a large number of pre-determined levels but, in general, it always exists. The quantization error is a function of time – thus we can think of it as just another signal – the quantization noise signal.


The quantization error (quantization noise) is by definition the difference between the original
signal and the quantized signal. It is itself a signal that can be observed.

The fun part here is that we can think of the quantized signal as the sum of the original analogue signal and the error signal. That is, the quantized signal still completely contains our original analogue signal, only polluted (superimposed) with the quantization error signal.

What can we say about the quantization error signal? In general, it is not much related to the original signal – they do not share properties. In fact, the quantization error signal is more similar to white noise. It really has an infinite frequency range, and the amplitudes of its frequency components are roughly equal across the whole infinite frequency range.

If the quantization error signal is a form of white noise, then what is its power? Even its power doesn’t depend much on the original analogue signal – instead it depends on the number of pre-determined quantization levels. If we make a coarse quantization, having only a few quantization levels, then the power of the quantization error signal will be high.
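
You can convince yourself of this with a small Python sketch (the quantize helper is my own). It quantizes an arbitrary test signal to different numbers of levels and measures the power of the error signal; for reasonably fine quantization the error power approaches the quantization step squared divided by 12, a standard textbook estimate:

    import numpy as np

    def quantize(x, levels, lo=-1.0, hi=1.0):
        """Round x to the nearest of `levels` equally spaced values between lo and hi."""
        step = (hi - lo) / (levels - 1)
        return lo + np.round((x - lo) / step) * step

    t = np.linspace(0, 1, 100000, endpoint=False)
    x = 0.9 * np.sin(2 * np.pi * 7 * t)               # an arbitrary test signal

    for levels in (4, 16, 64):
        error = quantize(x, levels) - x                # the quantization noise signal
        step = 2.0 / (levels - 1)
        print(levels, "levels: error power =", round(float(np.mean(error ** 2)), 6),
              " (step^2/12 =", round(step * step / 12, 6), ")")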

Now, what happens when the quantized signal reaches the sampler? We said that the quantized signal has two components: the first is the original analogue signal and the second is the quantization error signal. We suppose that the sampling frequency is high enough to sample the original analogue signal part without aliasing. However, aliasing will surely occur for the quantization error part of the quantized signal (because, like white noise, the quantization error signal has an unlimited frequency range – see the white noise chapter and the aliasing chapter).

Oversampling

We have seen one thing – when a quantized signal (consisting of the original analogue signal and the quantization noise signal) reaches a sampler, aliasing will occur for the quantization noise part of the signal. We have, however, also learned that the amplitudes of the frequency components of a sampled white noise signal get smaller as we use higher sampling frequencies (see the white noise chapter).

Does it make any sense, then, to make the sampling frequency of a quantized signal any higher than is needed to capture the original analogue signal? Well, yes, but only if some additional actions are taken.


Although we could sample this signal at only 60Hz, because its highest frequency component stands at 30Hz,
we sampled it using 4-times oversampling

You see, although the amplitudes of the frequency components of an aliased white noise signal get smaller as we use a higher sampling frequency, the overall power of the noise signal doesn’t change, and so the signal-to-noise ratio doesn’t improve. And the signal-to-noise ratio is what we want to improve.

How can we use oversampling (using a higher sampling frequency than needed) to reduce the relative noise power, that is, to improve the signal-to-noise ratio?

We first oversample the quantized signal. This makes the amplitudes of the frequency components of the quantization noise smaller than they would be if we didn’t use oversampling. Yes, we know that the frequency range is now wider, so we gained no actual decrease in noise power, but we now have the possibility to simply cut off the higher frequency range, where only noise is present, using a low-pass filter. Cute.

This is how we can achieve some decrease in the noise power of a quantized signal. We oversample the quantized signal and then simply cut off the frequencies that we don’t need. The analogue signal occupies only the lower frequency range, while the noise occupies the full frequency range. By cutting off the higher frequencies we cut only frequency components that belong to the noise – reducing the noise level and improving the signal-to-noise ratio.


Useful signal frequency components (green) are concentrated at low frequencies, while quantization noise
components are distributed over the full frequency range. We can simply cut off the higher frequencies.

This cut-off is done on the digital side of the converter (in the digital domain, by a digital computer). This is another good point – implementing it on a digital computer is really cheap. The cut-off is done by a simple process of digital low-pass filtering and decimation. After passing through the strong digital low-pass filter, the quantized signal will be represented by a greater number of levels than it had before. This is simply because the digital filter performs some arithmetic on the signal, and the result of this arithmetic, being a real number, is re-cast to a higher number of levels (a higher number of bits) than the original quantized signal at the filter input. The decimation process is even simpler – you simply forward every N-th sample and discard the others.
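
Here is a rough Python sketch of the idea (it uses scipy's decimate, which low-pass filters before discarding samples). A small random dither is added before quantizing so that the quantization error really behaves like broadband noise; the rates, the oversampling ratio of 64 and the quantizer step are just example values:

    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(1)
    fs, osr = 48000, 64                          # oversampled rate and oversampling ratio
    t = np.arange(fs) / fs
    x = 0.8 * np.sin(2 * np.pi * 100 * t)        # useful signal, well below the 375Hz decimated band edge

    dither = rng.uniform(-0.25, 0.25, t.shape)   # dither spreads the quantization error like white noise
    q = np.round((x + dither) * 2) / 2           # coarse quantizer with steps of 0.5

    error = q - x                                # broadband quantization (plus dither) noise
    error_lowband = signal.decimate(error, osr, ftype='fir')   # digital low-pass filter + decimation

    print("noise power before filtering:", round(float(error.var()), 5))
    print("noise power after filtering :", round(float(error_lowband.var()), 5))   # much smaller: only the in-band noise remains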

As a final result you have a smaller number of samples, but each of them is finer-quantized – much as if you had used a slow, fine-leveled quantizer in the first place. However, in practice it is easier and cheaper to manufacture a high-speed coarse-level quantizer than a low-speed fine-level quantizer. This is where our effort pays off.

Look at this the other way around. If we can decrease the noise power by oversampling, then we can also use this gain to decrease the number of pre-determined levels in our quantizer and still keep the noise at an acceptable level. This makes our quantizer simpler and cheaper.

Example... you sample an analogue signal using 4-level (2-bit) quantization and 64-times oversampling. Then, in the digital domain, you apply low-pass filtering. At the filter output you re-cast the samples to 32 levels (5 bits). Finally, you decimate, forwarding only every 64th sample. The result is about the same as if you had used 32-level quantization without oversampling in the first place.
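
The arithmetic behind this example: every doubling of the sampling frequency halves the in-band noise power (about 3dB), while every extra quantizer bit reduces it about four times (about 6dB). A tiny Python check:

    import math

    osr = 64                          # the oversampling ratio from the example
    gain_db = 10 * math.log10(osr)    # in-band noise reduction from plain oversampling
    extra_bits = gain_db / 6.02       # about 6dB of signal-to-noise ratio per quantizer bit

    print(round(gain_db, 1), "dB ->", round(extra_bits, 1), "extra bits")
    # prints 18.1 dB -> 3.0 extra bits: the 2-bit quantizer ends up behaving like a 5-bit one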


The digital part of the oversampled-signal post-processing chain. The input signal is low-pass filtered and decimated.
The result is a smaller number of higher-quality samples

Note that if you double the number of levels of your quantizer, you get four times less power in your noise signal, but unfortunately, when you double the sampling frequency you get only two times less noise power left after the low-pass filtering. Is oversampling worth it, then?

You may think that we don’t get enough from oversampling – it simply cannot compete with increasing the number of quantization levels. But wait until you read the noise-shaping chapter.

Quantization noise shaping

Any signal can be frequency-shaped using filters. This is what filters do – they change the amplitudes of the frequency components of a signal that passes through them. They can make high-frequency components weaker (a low-pass filter), make them stronger (a high-pass filter), or frequency-shape a signal in almost any way you want.

We already used some frequency shaping in the oversampling chapter – to cut off high frequencies from the quantized signal (we weakened the high-frequency components down to zero).

The quantization noise can be shaped to improve the signal-to-noise ratio of a quantized signal. But how do we do it?

We cannot simply place a filter at the output of our quantizer, because the quantizer must be the last element in our digitization chain – if we place a filter after it, the filter’s output signal will no longer be quantized and we will not be able to feed it to a digital computer.

It makes no sense to place the filter in front of the quantizer either. The quantizer is the element that generates the quantization noise, so a noise-shaping filter placed in front of the quantizer has no effect on the noise.

We can put it in a feedback (loop-back) position. It can be seen, from the accompanying equation, that it affects both the input signal and the quantization noise signal. But this is quite useless to us, because it has exactly the same effect on both. The signal and the noise are shaped the same way, so we have made no progress in separating the two.

And finally, we can place our noise-shaping filter in the following position. From the accompanying equation it can be seen that the signal and the noise are now affected somewhat differently.

This is really useful – imagine that you oversampled your signal and now have a lot of frequency space at higher frequencies that is not used by your actual signal. Only the quantization noise occupies this region. We could simply cancel that noise portion by filtering out these frequencies, as explained in the oversampling chapter. However, we can do much better – let’s re-shape the quantization noise so that it is mostly pushed away from the lower-frequency region and forced into the higher-frequency region, and only then cancel it out. That would be really charming.


We shaped the quantization noise so it is pushed into the higher-frequency region. Then we apply the high-frequency cut-off.
Compare with the equivalent picture in the "Oversampling" chapter.
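
To get a feeling for the numbers, here is a small sketch that evaluates the simplest (first-order) case, where the quantization noise is commonly modelled as being shaped by the transfer function 1 - z^-1. The sampling rate and the probe frequencies are arbitrary example values:

    import numpy as np

    fs = 48000.0                                   # sampling rate (example value)
    f = np.array([100.0, 375.0, 6000.0, 20000.0])  # a few probe frequencies (Hz)

    # first-order noise shaping: the transfer function 1 - z^-1 has magnitude
    # 2*sin(pi*f/fs) - tiny at low frequencies, large near the half-sampling frequency
    gain = 2 * np.abs(np.sin(np.pi * f / fs))
    for fi, g in zip(f, gain):
        print(int(fi), "Hz: noise is scaled by", round(float(g), 3))

In the band we care about (the low frequencies) the quantization noise is strongly attenuated, while near the half-sampling frequency it is actually amplified – but that part gets thrown away by the digital low-pass filter anyway.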

This is what is used in the delta-sigma converter.

The delta-sigma converter

Also known as the sigma-delta converter, the delta-sigma is a simple analogue-to-digital or digital-to-analogue converter design. Here we consider only the analogue-to-digital version (ADC). The picture below depicts the signal flow inside a delta-sigma ADC.


The signal flow through a delta-sigma converter. The analogue part is on the left, and the digital part is on the right

First of all, the delta-sigma is designed to be cheap. It is designed so that a minimum of its components actually deal with analogue signals, while most of its parts are digital and work with the digitized signal. A delta-sigma converter can be made using common CMOS technology, making it potentially cheaper than competing analogue-to-digital converters.

The delta-sigma converter goes to the extreme. It uses only two quantization levels (a 1-bit quantizer), but it also uses oversampling heavily – in some applications it samples at a frequency 64 times higher than the input signal’s bandwidth requires. And finally, it uses the noise-shaping technique to get a better signal-to-noise ratio.

Regarding the noise-shaping filter, we can say that it is usually made very simple. A simple filter is used because it sits on the analogue side of the converter, and we know that analogue circuitry tends to be expensive. Sometimes the noise-shaping filter is just a simple integrator circuit (an integrator acts as a first-order low-pass filter).

After quantization and sampling, the output signal has the form of a bit-stream and consists of two components: the low-pass filtered analogue input signal, and the high-pass filtered quantization noise. The power of the noise is mostly pushed (reshaped) into the high-frequency region. Later, in the digital domain, the signal is low-pass filtered and decimated. This is where most of the quantization noise is cancelled out, together with the whole high-frequency region.

Note that if a more complicated (higher-order) noise-shaping filter is used, even better noise rejection will result, but this affects the delta-sigma converter’s price. So a tradeoff must be made.

The most charming part of the delta-sigma converter is its simplicity. The noise-shaping filter can be as simple as an op-amp integrator circuit, and the quantizer is reduced to a comparator. Everything else is implemented digitally.


A block diagram of the simplest delta-sigma modulator (without its digital filters).
Notice that all the elements are very simple.
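
For the curious, here is a toy Python simulation of such a first-order modulator (integrator, comparator and a 1-bit feedback), followed by the digital low-pass filter and decimation from the earlier chapters. All the numbers are arbitrary example values, and a real converter would of course do the left half of this in analogue circuitry:

    import numpy as np
    from scipy import signal

    fs, osr = 64000, 64
    t = np.arange(fs) / fs
    x = 0.5 * np.sin(2 * np.pi * 50 * t)            # slow analogue input, kept within the +/-1 range

    # first-order delta-sigma modulator: difference (delta), integration (sigma), comparator
    integrator = 0.0
    feedback = 0.0
    bits = np.empty_like(x)
    for i, v in enumerate(x):
        integrator += v - feedback                  # delta, then sigma
        bits[i] = 1.0 if integrator >= 0 else -1.0  # 1-bit quantizer (a comparator)
        feedback = bits[i]                          # the 1-bit DAC in the feedback loop

    # digital side: low-pass filter and decimate the bit-stream
    recovered = signal.decimate(bits, osr, ftype='fir')
    target = x[::osr]
    trim = slice(20, -20)                           # skip the filter's edge transients
    err = recovered[trim] - target[trim]
    print("rms error of the recovered signal:", round(float(np.sqrt(np.mean(err ** 2))), 4))
    # the error is a small fraction of the 0.5 signal amplitude

The bit-stream itself looks nothing like the input, yet after low-pass filtering and decimation the slow sine wave comes back with only a small residue of the shaped quantization noise.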

Another advantage is that the anti-aliasing filter (which must be used in front of any type of analogue-to-digital converter) can in this case be very cheap, if it is needed at all. This is because oversampling is used, and thus the anti-aliasing filter can have a very gentle attenuation curve and still reach high rejection at the half-sampling frequency.

Now I want to say something about this converter’s name. The delta-sigma, or sigma-delta, gets its name because it was derived from the delta converter by placing an integrator (sigma) in front of the quantizer. The name delta-sigma also tells us something about how it works in its simplest version – it calculates the difference (delta) between the output and the input signal and integrates (sigma) it. The integrated differences build the original signal up again, and it is this signal that gets quantized.

But don’t be misled by the presence of an integrator – think of it only as a noise-shaping filter. Other filter types, which don’t have to be pure integrators, may do equally well or even better.

Danijel Gorupec, 2006.

