You must always avoid the temptation to “connect the dots” (to naively interpolate between samples) when working with discrete signals.1 Let’s recall the example from the previous section, and see how connecting the dots can get us into trouble.

Imagine someone asks you what the altitude of the plane was at 65 minutes into the flight. How should you respond? You don’t actually have a sample for the altitude at 65 minutes, but you do have measurements for the altitude at 60 minutes and 70 minutes. You might feel tempted to draw a line between these two samples, perform some simple linear interpolation, and infer that the altitude was about 31,000 feet. This sort of temptation is completely natural, but really unhealthy when working with discrete signals. The most appropriate response is simply to say, “I don’t know”. Anything else would be a fib.
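For concreteness, here is what that naive interpolation looks like in code. This is just a sketch: the two altitude values are hypothetical, chosen so the straight line lands at 31,000 feet, since the actual sample values aren't the point here.

```python
def lerp(t, t0, y0, t1, y1):
    """Naively interpolate a straight line between two samples."""
    return y0 + (y1 - y0) * (t - t0) / (t1 - t0)

# Hypothetical samples: 30,000 ft at 60 min, 32,000 ft at 70 min.
altitude_at_65 = lerp(65, 60, 30_000, 70, 32_000)
print(altitude_at_65)  # 31000.0
```

The arithmetic is perfectly fine; the problem is that nothing in our discrete signal justifies drawing the straight line in the first place.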

Given our measurements and context, we cannot confidently report an altitude for 65 minutes into the flight. Our discrete signal simply tells us nothing about the plane's altitude at that moment. Think about it like this: our discrete signal could represent many potential flight histories, most of which have different altitudes at 65 minutes into the flight. In fact, there are infinitely many possible flight histories which pass through the samples of our discrete signal but assume decidedly different altitudes at the 65 minute mark. Figure 1 shows four examples. I hope you’ll notice that connecting the dots in a naive way can be extremely misleading.

Figure 1.  Altitude of a Plane as it Traveled from Paris to Berlin
Samples connected dubiously with a grey line

We would call the red, blue, green, and orange curves aliases of one another, since they are indistinguishable when sampled with a period of 10 minutes. In other words, all four signals look exactly the same after being sampled every ten minutes.
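This idea is easy to reproduce numerically. In the sketch below, `f` is a made-up "true" altitude profile and `alias_of_f` is a hypothetical alias: it differs from `f` everywhere except at multiples of ten minutes, because sin(πt/10) vanishes there, so the two signals produce identical ten-minute samples.

```python
import math

def f(t):
    # Hypothetical "true" altitude profile in feet (purely illustrative).
    return 30_000 + 100 * t

def alias_of_f(t):
    # Differs from f between samples, but sin(pi * t / 10) is zero at
    # t = 0, 10, 20, ... so both signals yield identical 10-minute samples.
    return f(t) + 500 * math.sin(math.pi * t / 10)

coarse_f = [f(t) for t in range(0, 100, 10)]
coarse_a = [alias_of_f(t) for t in range(0, 100, 10)]
# The two sampled sequences agree (up to floating-point noise),
# even though f(65) and alias_of_f(65) differ by about 500 feet.
```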

I’ve strongly advised that you should not “connect the dots” when working with discrete signals. Over the next thirty or so pages I will break this rule. I’m breaking it because all of the signals in this document are contrivances, and I feel that showing the continuous counterpart to each discrete signal makes the visualizations more readable and easier to understand. Always remember that I’m being naughty when you see connected dots. Do as I say, not as I do, when you’re practicing DSP in the real world.
1. If you sample properly, it is possible to interpolate between samples with confidence. The point I'm trying to make here is that naïve sampling will almost always leave you with a discrete signal that doesn't represent the measured signal with very good fidelity.

P.S. If you’re feeling eager to point out the contradictory and awkward nature of presenting continuous signals using a computer program and digital screen, I'm way ahead of you, and fully aware of the embarrassing meta concerns here.


Whenever there is a gap between samples, uncertainty can creep into our measurements. As the gaps get larger, we become less confident that our discrete signal faithfully represents the physical phenomenon that it was meant to measure. Rapid fluctuations and movements which occur between samples will be lost. We can improve the fidelity of our discrete signals by shortening the gap between samples, that is, by reducing the sampling period.

Imagine that instead of sampling every ten minutes, we had decided to sample the plane’s altitude every five minutes. This simple choice would allow us to rule out the orange, green, and red curves as possible aliases of our true signal based upon the sample at 65 minutes. It would be clear that the blue curve is the only possible candidate that still intersects with our samples.
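The same made-up altitude profiles from the aliasing sketch above can illustrate this (again, these are hypothetical signals, not the actual flight data): the alias agrees with the true signal at every ten-minute sample, but a five-minute period includes a sample at 65 minutes, where the two clearly differ.

```python
import math

def f(t):
    # Hypothetical "true" altitude profile in feet (purely illustrative).
    return 30_000 + 100 * t

def alias_of_f(t):
    # Matches f at every multiple of 10 minutes, but not in between.
    return f(t) + 500 * math.sin(math.pi * t / 10)

# With a 10-minute period the two signals are indistinguishable...
agree_at_10 = all(abs(f(t) - alias_of_f(t)) < 1e-6 for t in range(0, 100, 10))
# ...but a 5-minute period includes t = 65, where they differ by ~500 feet.
agree_at_5 = all(abs(f(t) - alias_of_f(t)) < 1e-6 for t in range(0, 100, 5))
```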

Figure 2.  Decreasing the Sampling Period to 5 Minutes.

Unfortunately, there is a cost associated with sampling. Each sample must be stored somewhere, and space (e.g. memory) is not free. We cannot reduce our sampling period without incurring some overhead. The fundamental trick to sampling is understanding precisely how often you must sample in order to avoid information loss. When we sample more often than is necessary, we say that we are oversampling. Generally, oversampling implies that memory or computational resources are being wasted.2 When we sample too infrequently and lose information we say that we are undersampling.
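The storage cost scales directly with how often you sample. A quick back-of-the-envelope sketch, assuming a hypothetical size of 8 bytes per sample:

```python
def storage_bytes(duration_min, period_min, bytes_per_sample=8):
    # Number of samples over the duration, times a hypothetical sample size.
    return int(duration_min / period_min) * bytes_per_sample

two_hours = 120
cost_10min = storage_bytes(two_hours, 10)  # 12 samples -> 96 bytes
cost_5min  = storage_bytes(two_hours, 5)   # 24 samples -> 192 bytes
```

Halving the sampling period doubles the number of samples, and therefore the storage.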

Proper understanding of sampling theory is much easier when we understand the notion of frequency. In the next few sections we'll leave altitudes behind and look at a class of signals for which frequency is a more natural concept: sound waves.

2. Oversampling and undersampling are sometimes done deliberately. For example, audio plugins will often deliberately oversample to ensure that they don't produce audible artifacts as a result of their processing. In this case you’re trading memory and computation resources for better audio quality.