Spectral analysis

The preceding sections have covered the analysis of time series in the temporal domain. A time series that has, or is suspected of having, complex periodicity can also be analyzed in the frequency domain, using special procedures to examine the frequency patterns in the series. This form of analysis, sometimes referred to as harmonic analysis, spectral analysis or Fourier analysis, is similar to the autocorrelation and autoregression methods previously discussed but uses trigonometric functions to model the data. It can also be shown that spectral analysis has a direct relationship with autocovariance and autocorrelation measures. However, outside of fields such as the analysis of electromagnetic (EM) radiation, communications engineering and some complex processes in the natural environment, spectral analysis is not widely used for the kinds of time series encountered in many applications; very often it is simpler and more effective to use the techniques and tools of the temporal domain. For this reason we only cover spectral methods briefly in this section. Readers wishing to learn more are advised to read the book by Hannan (1960 [HAN1]) and the two chapters in Chatfield (1975 [CHA1]) on this subject. Below we follow the key points from Chatfield's presentation of this topic.

We saw in the discussion of autoregressive models that we can model the value of x at time t as a weighted sum of prior values plus a current error term:

x_t = α_1·x_{t-1} + α_2·x_{t-2} + ... + α_p·x_{t-p} + z_t

If it is suspected that the time series data contains a range of different periodicities, p_i, each with a different duration, then each will occur with a frequency of ω_i = 1/p_i time periods. Hence if we have periodicities of 3, 6 and 12 months in a monthly series, these correspond to frequencies of 1/3, 1/6 and 1/12. The natural functions to use in order to model periodicity are trigonometric expressions, notably the sin() and cos() functions. Without adjustment, these functions oscillate smoothly around the horizontal axis (the time axis in time series, t) with a value range [-1,1]. Since sin(0)=0 this function crosses the t-axis at 0 and ±nπ, for n=1,2,... The cos() function is similar, but has the value 1 at t=0. To scale and shift a sin() or cos() function to match the kinds of pattern observed in time series, additional parameters are required: r, a scale parameter that increases the magnitude (or amplitude) of the function; ω, a frequency parameter that alters the number of cycles the function undergoes in multiples of π; and a parameter, θ, that shifts the pattern of cycles left or right along the axis so they cross the axis at the desired points (altering the phase of the function). These additional parameters can be incorporated into an initial simple time series model in the frequency domain:

x_t = r·cos(ωt + θ) + z_t

This expression could equally well have been expressed in terms of the sin() function. Note also that since

cos(ωt + θ) = cos(θ)·cos(ωt) - sin(θ)·sin(ωt)

and, substituting a and b for the amplitude and phase elements (a = r·cos(θ), b = -r·sin(θ)), and adding a constant term for the mean, we have:

x_t = μ + Σ_j [ a_j·cos(ω_j·t) + b_j·sin(ω_j·t) ] + z_t,  j = 1, 2, ..., k

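As a quick numerical sketch of this substitution (the variable names here are invented for the example, not taken from the text), the shifted cosine and its cos/sin expansion can be checked to agree at every time point:

```python
import numpy as np

# Check: r*cos(w*t + theta) equals a*cos(w*t) + b*sin(w*t)
# with a = r*cos(theta) and b = -r*sin(theta).
r, theta = 2.0, 0.7        # amplitude and phase (arbitrary values)
w = 2 * np.pi / 12         # frequency corresponding to a 12-period cycle
t = np.arange(48)

lhs = r * np.cos(w * t + theta)
a, b = r * np.cos(theta), -r * np.sin(theta)
rhs = a * np.cos(w * t) + b * np.sin(w * t)

assert np.allclose(lhs, rhs)
```

The same check works for any choice of r, θ and ω, since it is simply the cosine addition formula.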
This model represents x at time t by two expressions, each of which is a weighted sum of a finite number (k) of frequencies. The model can be extended to an infinite number of frequencies by letting k→∞, with the resulting expressions being integrals rather than summations. In this form (excluding the random error term for the time being) the model is known as the spectral representation of the process. The usual form for this representation is:

x_t = ∫[0,π] cos(ωt)·du(ω) + ∫[0,π] sin(ωt)·dv(ω)

where u(ω) and v(ω) are continuous processes in the range [0,π]. It turns out that the autocovariance at lag k of a discrete time series, γ(k), is closely related to the first term in the spectral representation. Indeed, the autocovariance (and variance) can be written as:

γ(k) = ∫[0,π] cos(ωk)·f(ω)·dω,  and hence  γ(0) = σ² = ∫[0,π] f(ω)·dω

where f(ω) is known as the spectral density function. Many software packages that provide spectral analysis offer the option to plot the spectral density function as the angular value ω is altered from 0 to π. Specifically, f(ω)dω represents the contribution to the overall variance made by frequencies in the range [ω,ω+dω], hence a peak in the spectral density function equates to a large contribution of variance in that region. The total variance is found as the integral of the spectral density function, which is also the autocovariance at lag 0 (by definition). The ratio of the lag k and lag 0 autocovariances then provides the autocorrelation coefficient. The two representations, in the time domain and in the frequency domain, are thus seen to be equivalent.

In practice a more general, standardized representation is used in spectral analysis, known as the finite Fourier series or harmonic representation (shown here for N even):

x_t = a_0 + Σ_p [ a_p·cos(ω_p·t) + b_p·sin(ω_p·t) ] + a_{N/2}·cos(πt),  p = 1, 2, ..., N/2-1

where

ω_p = 2πp/N

This formula has N parameters and is used to fit N data points, so there is an exact fit and no error term. The coefficients for each frequency value can be obtained as the least squares solution to the original set of expressions (i.e. those including the error term, z_t, above). The expression:

R_p² = a_p² + b_p²

is the squared amplitude of the pth frequency or harmonic. With a time series of length N, the contribution of the pth harmonic to the total sum of squares is:

N·R_p²/2

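This decomposition can be illustrated numerically. The sketch below (names are my own; an odd series length is used so that the special a_{N/2} term drops out) computes the Fourier coefficients directly and confirms that the harmonic contributions N·R_p²/2 sum to the corrected total sum of squares:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 99                      # odd length: harmonics run from p = 1 to (N-1)/2
t = np.arange(N)
x = rng.normal(size=N)      # any series will do; the identity is exact

total_ss = np.sum((x - x.mean()) ** 2)

contrib = []
for p in range(1, (N - 1) // 2 + 1):
    w = 2 * np.pi * p / N
    a_p = (2 / N) * np.sum(x * np.cos(w * t))
    b_p = (2 / N) * np.sum(x * np.sin(w * t))
    R2 = a_p ** 2 + b_p ** 2        # squared amplitude of the pth harmonic
    contrib.append(N * R2 / 2)      # its contribution to the sum of squares

assert np.isclose(sum(contrib), total_ss)
```

The equality holds exactly because the cos and sin terms at the harmonic frequencies are mutually orthogonal over the N observation times.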
This expression can be plotted against ω to give a graph that shows how the contribution of different frequencies varies as the frequency varies. The graph is known as a periodogram (although this is something of a misnomer, since it shows frequencies rather than time periods). Spectral analysis software packages provide the periodogram plot as the standard tool for spectral analysis. High values in the periodogram mean that those frequencies account for more of the variance than other frequencies. The periodogram is the frequency domain equivalent of the correlogram that arises in the time domain with autocorrelation analysis. Interestingly enough, the periodogram provides an unbiased estimator of the spectral density function, which we discussed earlier. However, it is not a consistent estimator, in that as the length of the time series increases, the variance of the estimator does not decrease. To overcome this difficulty it is usual to smooth the periodogram using some form of simple averaging (an approach originally due to Daniell) or by applying a weighted moving average window to the autocovariance function (using methods devised by Tukey, Parzen and others). Spectral analysis software generally provides a number of options for smoothing, with the resultant output displayed via the spectral density function plot, which as a result is a much smoother function than the periodogram. For example, SPSS provides the Tukey-Hamming window as its default, but each of the other windows mentioned is also available. Likewise, in R the standard function spectrum() provides periodogram calculation and plotting, and can be augmented with Daniell or other smoothing via the spans= argument or via the spec.pgram() function, which includes a range of options for smoothing and tapering of the time series. Many of these packages utilize formulas and recommendations in Bloomfield (1976 [BLO1]) and Brockwell and Davis (1991 [BRO1]).
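To make these ideas concrete without tying them to any particular package, the sketch below (data and variable names invented for the example) computes a raw periodogram from the FFT of a noisy monthly-style series with a 12-period cycle, applies a simple Daniell (moving average) smoother, and locates the dominant frequency:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 240
t = np.arange(N)
# A 12-period cycle buried in white noise
x = 2.0 * np.cos(2 * np.pi * t / 12) + rng.normal(scale=1.0, size=N)

x = x - x.mean()
fft_vals = np.fft.rfft(x)
freqs = np.fft.rfftfreq(N, d=1.0)          # frequencies in cycles per time unit
pgram = (np.abs(fft_vals) ** 2) / N        # raw periodogram ordinates

# Daniell smoothing: a centred moving average of the periodogram ordinates
m = 3
kernel = np.ones(2 * m + 1) / (2 * m + 1)
smoothed = np.convolve(pgram, kernel, mode="same")

peak_freq = freqs[np.argmax(pgram[1:]) + 1]   # skip the zero frequency
print(peak_freq)                               # expect ~1/12 ≈ 0.0833
```

Plotting `smoothed` against `freqs` gives the (estimated) spectral density plot described above; widening the Daniell window (larger m) trades resolution for a smoother, lower-variance estimate.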

References

[BLO1] Bloomfield P (1976) Fourier Analysis of Time Series: An Introduction. Wiley

[BRO1] Brockwell P J, Davis R A (1991) Time Series: Theory and Methods. 2nd ed., Springer

[CHA1] Chatfield C (1975) The Analysis of Time Series: Theory and Practice. Chapman and Hall, London (see also, 6th ed., 2003)

[HAN1] Hannan E J (1960) Time Series Analysis. Methuen, London