In building an ultrasensitive instrument, we're always fighting to improve our signal-to-noise ratio (SNR). The SNR is the ratio of signal power to noise power in the measurement bandwidth, and is limited by noise in the instrument itself and the noise of any background signals, such as the shot noise of the background light or the slight hiss of a microphone.
If the signal is weak, the noise is proportionally more significant, so the apparatus has to be designed to get rid of as much noise as possible. There are a number of ways to do this. The best is to get more signal or to reduce the noise at its source, for instance by increasing the laser power and using a laser noise canceller, but eventually we hit a practical limit. At that point, we're left with several options, all of which boil down to filtering in one form or another.
Filters can be hardware or software, but their job is to pass the desired signal frequencies and reject noise at other frequencies. Of course some of the noise lands on top of our signal and so makes it through the filter anyway.
A low-pass filter passes frequencies below its cutoff and attenuates higher ones. If the signal is concentrated below the cutoff frequency, the filter rejects the high-frequency noise while preserving the signal (and the low-frequency noise, of course). By slowing down the measurement, for example by reducing the scan speed, the bandwidth of the signal's frequency spectrum can be reduced and the filter made correspondingly narrower.
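As an illustration (a Python sketch, not part of the original apparatus; the sample rate, cutoff, and signal frequency are made-up values), here is a single-pole low-pass filter pulling a slow sine wave out of broadband white noise. The noise power that survives is roughly the fraction of the noise bandwidth that fits under the cutoff:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10_000.0                       # sample rate, Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t)         # 10 Hz signal, well below cutoff
noise = rng.normal(0.0, 1.0, t.size)        # broadband white noise
x = signal + noise

# Single-pole (first-order) low-pass filter with ~50 Hz cutoff:
fc = 50.0
alpha = 1 - np.exp(-2 * np.pi * fc / fs)    # discrete-time pole coefficient
y = np.empty_like(x)
acc = 0.0
for i, v in enumerate(x):
    acc += alpha * (v - acc)                # y[n] = y[n-1] + alpha*(x[n] - y[n-1])
    y[i] = acc

def snr_db(sig, meas):
    """SNR in dB, treating everything that isn't the known signal as noise."""
    resid_power = np.mean((meas - sig) ** 2)
    return 10 * np.log10(np.mean(sig ** 2) / resid_power)

print(f"raw SNR:      {snr_db(signal, x):5.1f} dB")
print(f"filtered SNR: {snr_db(signal, y):5.1f} dB")
```

The noise outside the ~50 Hz band is attenuated, so the SNR improves by roughly 10 dB or more; the residual is the in-band noise plus a little phase-lag distortion of the signal, which a real design would trade off by choosing the cutoff.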
A problem with this simple approach is that in most cases there's a concentration of noise at low frequencies (near DC), so filtering doesn't help as much as one might expect--in fact, it's not uncommon for the noise to get worse as the measurement gets slower, which is rather unintuitive. The reason is that there is a lower limit to the signal spectrum as well as an upper one. If we're taking 1000 measurements, each with an averaging time of a millisecond, then the signal spectrum is predominantly contained between 1 Hz and 1 kHz. A measurement that takes a second doesn't contain much signal information or noise between 0 Hz (DC) and 1 Hz. Slowing the scan down by a factor of 100, so that each measurement averages for 0.1 s and the whole scan takes 100 s, reduces the lower cutoff to 0.01 Hz and the upper cutoff to 10 Hz. That narrows the bandwidth, all right, but interestingly it typically makes the noise worse rather than better. Let's look at why.
To find the total noise, we have to add up the noise contributions at all frequencies in the filter passband. In other words, the total noise power is the integral of the noise power spectral density (PSD). The low frequency noise PSD often goes like 1/f, whose integral is ln(f). Thus if the passband is between f1 and f2, the total noise goes as ln(f2) - ln(f1) = ln(f2 / f1). Because the ratio f2 / f1 is the same in both the fast and slow measurements, the 1/f noise is also the same—sacrificing a factor of 100 in speed hasn't improved things at all. In fact, since things like thermal drifts rise more steeply than 1/f, going slower is likely to make things worse in real cases. So low-pass filtering can help, but only up to a point. In Part 2, we'll look at ways to get round this roadblock.
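We can check this numerically (a Python sketch; the band edges are the ones from the fast and slow scans discussed above, and the midpoint-rule integrator is just the simplest thing that works):

```python
import numpy as np

def one_over_f_noise(f1, f2, n=100_000):
    """Integrate a unit 1/f power spectral density from f1 to f2 (midpoint rule)."""
    df = (f2 - f1) / n
    f = f1 + (np.arange(n) + 0.5) * df
    return np.sum(df / f)

fast = one_over_f_noise(1.0, 1000.0)    # fast scan: 1 Hz to 1 kHz
slow = one_over_f_noise(0.01, 10.0)     # 100x slower: 0.01 Hz to 10 Hz

# Both come out to ln(1000) ~ 6.91: the integrated 1/f noise depends only
# on the ratio f2/f1, so slowing the scan down by 100x has bought nothing.
print(fast, slow)
```

Swapping in a PSD steeper than 1/f (say 1/f^2, characteristic of drift) makes the slow band strictly worse, which is the real-world case mentioned above.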