Measuring the Allan variance by sinusoidal fitting

Rev Sci Instrum. 2018 Feb;89(2):024702. doi: 10.1063/1.5010140.

Abstract

The Allan variance of signal and reference frequencies is measured by a least-squares fit of the output of two analog-to-digital converters to ideal sine waves. The difference between the fitted phases of the two channels provides the timing data needed for the Allan variance. The fits are performed at the signal frequency (≈10 MHz) without heterodyning. Experimental data from a modified digital oscilloscope yield a residual Allan deviation of 3 × 10⁻¹³/τ, where τ is the observation time in seconds. This corresponds to a standard deviation of <300 fs in time, or 20 μrad in phase. The experimental results are supported by statistical theory and Monte Carlo simulations, which suggest that optimized devices may perform one to two orders of magnitude better.
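As a rough illustration of the processing chain described above, the sketch below implements its two generic steps: a three-parameter least-squares sine fit at a known frequency (a standard estimator in the style of IEEE Std 1057, not necessarily the authors' exact implementation) and the conversion of per-block phase differences into time errors and a non-overlapping Allan deviation. All function names, sample rates, and block lengths are illustrative assumptions, not values from the paper.

import numpy as np

def sine_fit_phase(samples, fs, f0):
    """Three-parameter least-squares sine fit at a known frequency f0.
    samples : ADC samples of one analysis block
    fs      : sample rate (Hz), f0 : assumed signal frequency (Hz)
    Returns the fitted phase in radians (block-local time origin)."""
    t = np.arange(len(samples)) / fs
    # Linear design matrix: in-phase, quadrature, and DC-offset columns.
    M = np.column_stack([np.cos(2 * np.pi * f0 * t),
                         np.sin(2 * np.pi * f0 * t),
                         np.ones_like(t)])
    (a, b, _), *_ = np.linalg.lstsq(M, samples, rcond=None)
    # a*cos(wt) + b*sin(wt) = R*cos(wt - phi), so phi = atan2(b, a).
    return np.arctan2(b, a)

def allan_deviation(x, tau):
    """Non-overlapping Allan deviation from time-error samples x spaced
    tau seconds apart: sigma_y(tau)^2 = <(x[k+2] - 2 x[k+1] + x[k])^2> / (2 tau^2)."""
    d2 = x[2:] - 2 * x[1:-1] + x[:-2]
    return np.sqrt(np.mean(d2 ** 2) / (2 * tau ** 2))

# Example with simulated data (all parameters assumed): two 10 MHz
# channels sampled at 125 MS/s, analysed in blocks of tau = 0.1 ms.
fs, f0, tau = 125e6, 10e6, 1e-4
n_block = int(tau * fs)
rng = np.random.default_rng(0)
t = np.arange(10 * n_block) / fs
sig = np.cos(2 * np.pi * f0 * t + 0.1) + 1e-3 * rng.standard_normal(t.size)
ref = np.cos(2 * np.pi * f0 * t) + 1e-3 * rng.standard_normal(t.size)

dphi = []
for k in range(10):
    s = slice(k * n_block, (k + 1) * n_block)
    dphi.append(sine_fit_phase(sig[s], fs, f0) - sine_fit_phase(ref[s], fs, f0))
x = np.unwrap(dphi) / (2 * np.pi * f0)   # phase difference -> time error (s)
print(allan_deviation(x, tau))

Because both channels are fitted over the same sample window, the block-local phase offset is common to the two fits and cancels in the difference; only the relative timing between signal and reference survives, which is exactly the quantity the Allan variance requires.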