Tutorial Contents

Nyquist frequency

Decimation and interpolation

Interpolation algorithm


The Nyquist frequency and data acquisition

There is an important general point to be made about sampling frequency. This may be common knowledge amongst experts, but many people (all right, I, at any rate) have been confused by it.

It is commonly stated, and true, that if an AD converter runs at a sample rate that is twice the highest frequency contained within the analogue signal (the Nyquist frequency), then the digitized signal will contain all the information that was present in the analogue signal. However, this does not mean that a straightforward display of the digitized data by “joining the dots” will look like the original signal. To reconstruct the original signal from data sampled at the Nyquist criterion, it is necessary to output the data as a series of impulses, and to pass these impulses through a low-pass filter with a cut-off frequency equal to the Nyquist frequency. If this is done, the output of the filter will indeed be a replica of the original input signal. However, this is not easily achieved in practice, and it is not part of the processing of most digital data display systems, including DataView.

If you want the digital display to look something like the original analogue signal using a simple join-the-dots algorithm, you will almost certainly have to sample at well above twice the Nyquist frequency. But whatever the circumstances, you absolutely must ensure that the sample frequency of the AD converter is more than twice the frequency of any signal contained in the analogue data, or the dreaded aliasing will occur (an example is given below).
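
To see the difference between the two display strategies, here is a minimal sketch in Python with NumPy (purely illustrative; it is not DataView's code). It samples a 40 Hz sine wave at 100 Hz, just above the Nyquist criterion, and compares join-the-dots linear interpolation with ideal reconstruction, in which each sample is replaced by a scaled sinc function – the impulse response of an ideal low-pass filter cut off at half the sample rate:

    import numpy as np

    fs = 100.0                           # sample rate, Hz (2.5 x the 40 Hz signal)
    f = 40.0
    n = np.arange(50)                    # half a second of sample indices
    x = np.sin(2 * np.pi * f * n / fs)   # the digitized data

    t = np.linspace(0, (len(n) - 1) / fs, 2000)   # dense time axis for display

    # "Join the dots": linear interpolation between samples. This is what a
    # simple display does, and it does not look like the original sine wave.
    joined = np.interp(t, n / fs, x)

    # Ideal reconstruction: sum a sinc centred on every sample. Away from the
    # edges of the file this closely matches the original sin(2*pi*f*t).
    sincs = np.sinc((t[:, None] - n[None, :] / fs) * fs)
    reconstructed = sincs @ x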

Decimation and interpolation

You can investigate these aspects of sampling theory with DataView.

The first panel of the figure below shows a 40 Hz sine wave sampled at 1000 Hz. The display looks like a pretty good sine wave, but only because it is heavily oversampled relative to the Nyquist criterion (25 samples per cycle).

[Figure: a. sine 40 Hz; b. sine 40 Hz decimated]
Sampling above and near the Nyquist criterion. a. A 40 Hz sine wave sampled at 1000 Hz looks like a sine wave. The sample rate is well above the Nyquist criterion. b. A 40 Hz sine wave sampled at 100 Hz does not look like a sine wave, but it is still sampled within the Nyquist criterion.

If you decimate this file by a factor of 10, the new file does not look much like a sine wave. However, since the new sampling frequency of 100 Hz is still more than twice the frequency contained within the signal (40 Hz), according to the Nyquist criterion it ought to contain all the information in the original signal.
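
In code terms (again a sketch, not DataView's implementation), decimating by a factor of 10 just means keeping every 10th sample. No anti-alias filter is needed in this particular case, because the 40 Hz signal already lies below the new 50 Hz Nyquist limit:

    import numpy as np

    fs = 1000
    x1000 = np.sin(2 * np.pi * 40 * np.arange(fs) / fs)  # 1 s of 40 Hz at 1000 Hz
    x100 = x1000[::10]                                    # decimated to 100 Hz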

To find out whether this is true, use the Increase frequency command (whose algorithm is described at the end of this tutorial) to interpolate the decimated file back up by a factor of 10:

When the operation completes, the new interpolated file will load. Note that the original sine wave has been reconstructed very accurately, although there has been a slight loss of amplitude because the process is not perfectly efficient (nothing ever is). Also note that the end part of the file has been set to 0, because filter windowing inevitably loses some data at the edges of the file.
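
The same round trip can be sketched with SciPy's polyphase resampler, which works by zero-insertion followed by low-pass filtering (the scheme described in the Interpolation algorithm section below), with the amplitude loss compensated automatically. Again, this is an illustration, not DataView's own code:

    import numpy as np
    from scipy.signal import resample_poly

    fs = 1000
    x1000 = np.sin(2 * np.pi * 40 * np.arange(fs) / fs)   # 1 s of 40 Hz at 1000 Hz
    x100 = x1000[::10]                                     # decimated to 100 Hz
    x_back = resample_poly(x100, up=10, down=1)            # interpolated back to 1000 Hz

    err = np.abs(x_back - x1000)
    # err is small in the middle of the file but grows near its ends, where
    # the filter window runs off the data -- the same edge effect noted above.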

If you want to be persuaded of the importance of the Nyquist criterion, repeat the process, but this time reduce the sample frequency by a factor of 20. This breaks the Nyquist rule – the resulting 50 Hz sample frequency is less than twice the frequency of the 40 Hz sine wave contained in the data. If you now increase the frequency by a factor of 20, you end up with a very nice-looking sine wave, but it has completely the wrong frequency: 10 Hz rather than 40 Hz! This is an example of data aliasing – a phenomenon in which data can look convincing but be completely wrong.
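
The alias frequency is easy to verify in a sketch (illustrative Python, not DataView code): sample at 50 Hz, interpolate back up, and measure the dominant frequency of the result with an FFT:

    import numpy as np
    from scipy.signal import resample_poly

    fs = 1000
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 40 * t)          # 40 Hz sine sampled at 1000 Hz

    x50 = x[::20]                           # 50 Hz sample rate: breaks the rule
    x_alias = resample_poly(x50, 20, 1)     # interpolate back up to 1000 Hz

    spectrum = np.abs(np.fft.rfft(x_alias))
    freqs = np.fft.rfftfreq(len(x_alias), d=1 / fs)
    print(freqs[np.argmax(spectrum)])       # ~10 Hz, not 40 Hz: the alias

The 40 Hz component is reflected about the 25 Hz Nyquist frequency of the 50 Hz recording, so it reappears at |50 − 40| = 10 Hz.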

Interpolation algorithm

The Increase frequency command works as follows. It produces (but does not display) an intermediate file consisting of the original (reduced-frequency) data values, with each real sample followed by an appropriate number of 0-valued samples so as to bring the file up to the length appropriate for the new, increased sample frequency. This file is then passed through a low-pass filter with a cut-off frequency set at the Nyquist frequency of the original (reduced-frequency) file.
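
As a concrete sketch of this scheme (illustrative Python; the function name and the choice of an 8th-order Butterworth filter are assumptions, since the tutorial specifies only a low-pass cut-off, not the filter design DataView uses):

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def increase_frequency(x, factor, fs_old):
        # Intermediate file: each real sample followed by factor-1 zeros.
        stuffed = np.zeros(len(x) * factor)
        stuffed[::factor] = x

        # Low-pass filter with cut-off at the Nyquist frequency of the
        # ORIGINAL (reduced-frequency) file, i.e. fs_old / 2, applied at
        # the new sample rate. (Assumed filter: 8th-order Butterworth,
        # run forwards and backwards for zero phase shift.)
        fs_new = fs_old * factor
        sos = butter(8, (fs_old / 2) / (fs_new / 2), output='sos')
        y = sosfiltfilt(sos, stuffed)

        # Only 1 sample in every 'factor' carried any amplitude, so the
        # filtered output is smaller by that factor; multiply to restore it.
        return y * factor

    x100 = np.sin(2 * np.pi * 40 * np.arange(100) / 100.0)  # 40 Hz at 100 Hz
    x1000 = increase_frequency(x100, 10, fs_old=100.0)      # back to 1000 Hz

The reconstruction shows a slight amplitude droop at 40 Hz, because no real filter is perfectly flat right up to its cut-off – the “not perfectly efficient” loss noted earlier.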

Like the previous file, this has a sample frequency of 1000 Hz. The top trace shows a 40 Hz sine wave, with the initial 100 ms given a slight DC offset to provide a timing reference. The lower trace shows the same data, but with 9 out of every 10 samples replaced by a 0 value. (This was done outside DataView, in Excel.) The peaks of the resulting positive and negative spikes trace out the envelope that the upper waveform would have if its sample rate had been reduced by a factor of 10. This trace is what is contained in the (hidden) intermediate file that is used when increasing the frequency by a factor of 10.

When the new file loads, you should see that the third (bottom) trace looks like a reduced-amplitude version of the original (top-trace) sine wave. In fact, the process reduces the amplitude by the same factor as the sample-rate change, i.e. 10-fold. This is because 9 out of every 10 samples in the source had amplitude 0, so during reconstruction the amplitude of each real sample is, in effect, “shared out” across 10 output samples.
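
The 10-fold reduction can be verified directly on the zero-filled trace (a sketch; with 1000 samples at 1000 Hz the FFT bins are 1 Hz wide, so bin 40 holds the 40 Hz component):

    import numpy as np

    x100 = np.sin(2 * np.pi * 40 * np.arange(100) / 100.0)  # 40 Hz at 100 Hz
    stuffed = np.zeros(1000)
    stuffed[::10] = x100             # the zero-filled intermediate file

    # Amplitude of the 40 Hz component of the zero-filled trace.
    amp = 2 * np.abs(np.fft.rfft(stuffed))[40] / len(stuffed)
    print(amp)                       # ~0.1: one tenth of the original unit amplitude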

We can check this by removing everything above the 50 Hz Nyquist frequency of the decimated data with a 0-50 Hz band-pass filter, amplifying the result by a factor of 10, and superimposing it on the original signal:

The reconstructed sine wave (blue trace) now overlies the original sine wave (red trace) almost exactly, except at the time of the apparent step change in amplitude at 100 ms, where there is a gradual rather than abrupt transition.

[Figure: reconstructing the 40 Hz sine wave]
Interpolation by zero-filling and filtering. The original 40 Hz sine wave (red, top trace) was recorded at a 1000 Hz sample rate. There is a step DC shift at 25 ms. The signal was decimated by a factor of 10 and the intervening values set to 0 (bottom trace). The zero-filled signal was passed through a 0-50 Hz band pass filter, amplified by a factor of 10, and superimposed on the original signal (blue, top trace). Apart from the transition at the DC step shift at 25 ms, the original and reconstructed signals are almost identical.
Interpolation by zero-filling and filtering. The original 40 Hz sine wave (red, top trace) was recorded at a 1000 Hz sample rate. There is a step DC shift at 25 ms. The signal was decimated by a factor of 10 and the intervening values set to 0 (bottom trace). The zero-filled signal was passed through a 0-50 Hz band pass filter, amplified by a factor of 10, and superimposed on the original signal (blue, top trace). Apart from the transition at the DC step shift at 25 ms, the original and reconstructed signals are almost identical.