## Trend Analysis

As noted in the introduction to this overall topic, where time series include trend and/or periodic behavior it is usual for these components to be identified and removed before further analysis. In this section we look at different ways of identifying and removing trends and seasonal (periodic) effects, and at tools for separating a time series into trend, seasonal and residual components (known as decomposition). This then leads on to consideration of more advanced, multi-component models that build upon these elements and provide the basis for statistical forecasting.

Many time series contain trends, i.e. they are non-stationary. Trends may be linear, or have some more complex form, such as polynomial or logistic. Whatever the form of the trend it is usually preferable to remove and/or specify the trend explicitly prior to further analysis and modeling. There are several ways in which this may be carried out. The first step is usually to examine the graph of the time series visually, to see if any trend-line behavior can be observed. It may be that a series has no observable trend, has a trend across its range, or exhibits a trend in part of its range. In the latter case it would be sensible to separate the time series into a number of subsets, each of which could be modeled separately. Autocorrelation analysis is a very useful technique for identifying trends and periodicities in the data, in a manner that is often more precise than can be obtained with simple visual inspection.

Having identified that a trend exists, one can then consider procedures for identifying and managing trends. These include:

•curve-fitting — for example applying a least-squares linear regression or growth curve (e.g. Logistic or Gompertz function) to the data

•filtering — single or multiple smoothing using linear filters/moving averages

•differencing — a simple difference operator, ∇x_t = x_t − x_{t−1}, will produce a new series {y_t} from an existing series {x_t}, in which any linear trend has been removed

In each of these procedures the objective is to remove the trend and produce a stationary time series consisting of the results of the operation, or of the residuals left after trend removal. This new dataset is then analyzed further, for example to model the behavior of the residuals, and can then be recombined with the trend element to produce a model that describes the time series and provides the possibility of forecasting its future behavior.
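As a minimal illustration of the differencing approach, the following Python sketch (the function name `difference` is my own, not from any package) applies the first-difference operator to a series containing a pure linear trend, showing that the result is a constant (and hence stationary) series:

```python
def difference(x, lag=1):
    """Apply the difference operator: y_t = x_t - x_{t-lag}."""
    return [x[t] - x[t - lag] for t in range(lag, len(x))]

# A series with a pure linear trend: x_t = 3 + 2t
series = [3 + 2 * t for t in range(10)]

# First differencing reduces the linear trend to a constant series
detrended = difference(series)  # every element equals the slope, 2
```

For higher-order (e.g. quadratic) trends the operator can simply be applied repeatedly, each application reducing the polynomial order of the trend by one.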

In the previous subsection, on exponentially weighted moving averages (EWMA), we showed that the moving average at time t could be obtained as the weighted sum of the latest observed value, x_t, and the previous EWMA:

S_t = αx_t + (1−α)S_{t−1}

where 0≤α≤1. This model does not allow for trends in the data. If a time series includes a simple linear trend, and we let b_t represent the rate of change of the data per unit of time, then we can augment the previous expression by including the rate component, which gives the revised recurrence relation:

S_t = αx_t + (1−α)(S_{t−1} + b_{t−1})

This then leaves the question of how to determine the rate of change, b_t, and whether this should be taken as a constant or also allowed to vary with time, t. It is usual to assume the rate varies with time and to estimate it iteratively as follows:

b_t = γ(S_t − S_{t−1}) + (1−γ)b_{t−1}

This provides the model known as double exponential smoothing. The process introduces a new parameter, γ, and requires initial values for both S_t and b_t. The simplest initialization values are S_1 = x_1 and b_1 = x_2 − x_1. The parameters are typically estimated by some form of iterative least squares optimization procedure.
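The pair of recurrence relations for double exponential smoothing can be sketched in a few lines of Python. This is a hand-rolled illustration, not a library implementation; it uses the simple initializations S_1 = x_1 and b_1 = x_2 − x_1 described above:

```python
def double_exponential_smoothing(x, alpha, gamma):
    """Double (Holt's) exponential smoothing.

    Level:  S_t = alpha*x_t + (1 - alpha)*(S_{t-1} + b_{t-1})
    Trend:  b_t = gamma*(S_t - S_{t-1}) + (1 - gamma)*b_{t-1}
    """
    S, b = x[0], x[1] - x[0]           # simple initializations S_1, b_1
    smoothed = [S]
    for t in range(1, len(x)):
        S_prev = S
        S = alpha * x[t] + (1 - alpha) * (S_prev + b)
        b = gamma * (S - S_prev) + (1 - gamma) * b
        smoothed.append(S)
    return smoothed, b                  # level series and final trend estimate

# A noiseless linear series is tracked exactly, whatever the parameter values
x = [5 + 3 * t for t in range(12)]
levels, slope = double_exponential_smoothing(x, alpha=0.5, gamma=0.5)
```

On noise-free linear data the level series reproduces the observations and the trend estimate settles at the true slope; in practice α and γ would be chosen by minimizing the squared one-step-ahead forecast errors.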

Time series data often exhibits periodicity, which in some cases reflects seasonality, as for example in plant growth or in shopping patterns. Such patterns can be identified by visual inspection of the time series graph and by autocorrelation analysis (see the section on Spectral Analysis for discussion of analysis in the frequency domain). Having identified such patterns, they can then be removed and/or explicitly built into models, which may be used for descriptive and/or forecasting purposes.

A simple moving average, for example a centered moving average spanning the current time, t, ±p/2, where p is the period in question, will eliminate a seasonal effect of period p. The moving average in this case consists of p+1 terms, with the first and last given a weight of 0.5 and the divisor taken as p. For example, with monthly data (p=12) seasonal effects can often be removed (smoothed or filtered) by a moving average of the form:

Sm(x_t) = (0.5x_{t−6} + x_{t−5} + ⋯ + x_{t+5} + 0.5x_{t+6}) / 12

Simple differencing can also remove seasonal effects, e.g. using an operator of the form:

∇x_t = x_t − x_{t−12}
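Both seasonal-adjustment devices can be illustrated directly. In the following sketch the helper names `centered_ma` and `seasonal_difference` are my own, and a monthly period (p=12) is assumed; applied to a synthetic series with an exact 12-month cycle, the moving average recovers the underlying level while seasonal differencing cancels the cycle entirely:

```python
import math

def centered_ma(x, p=12):
    """Centered moving average over p+1 terms, with end weights of 0.5
    and divisor p (p even). Eliminates a seasonal effect of period p."""
    h = p // 2
    out = []
    for t in range(h, len(x) - h):
        total = 0.5 * x[t - h] + 0.5 * x[t + h] + sum(x[t - h + 1:t + h])
        out.append(total / p)
    return out

def seasonal_difference(x, p=12):
    """Seasonal difference operator: y_t = x_t - x_{t-p}."""
    return [x[t] - x[t - p] for t in range(p, len(x))]

# Synthetic monthly series: a 12-month sinusoidal cycle around a level of 10
x = [10 + math.sin(2 * math.pi * t / 12) for t in range(48)]

smoothed = centered_ma(x)            # the cycle averages out, leaving ~10
deseasoned = seasonal_difference(x)  # the exact cycle cancels, leaving ~0
```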

If, as is often the case, a time series includes both trend and seasonal effects, and these are relatively well defined, it is possible to extend the ideas of double exponential smoothing described above to triple exponential smoothing. This requires three equations: the initial exponential smoothing equation, the trend equation, and an additional seasonal smoothing equation. As before, these equations have parameters whose optimal values are found by iteration. The procedure, known as the Holt-Winters forecasting procedure, is described in more detail in Chatfield (1975, pp 87-89 [CHA1]) and on the NIST website (with worked examples; see further, below).
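The three Holt-Winters equations, in their additive form, can be sketched as follows. This is an illustrative hand-written implementation with deliberately simple initializations (cycle averages), not the optimized procedure a statistical package would use:

```python
def holt_winters_additive(x, p, alpha, beta, gamma):
    """Triple exponential smoothing (Holt-Winters, additive seasonality).

    Level:    S_t = alpha*(x_t - c_{t-p}) + (1 - alpha)*(S_{t-1} + b_{t-1})
    Trend:    b_t = beta*(S_t - S_{t-1}) + (1 - beta)*b_{t-1}
    Seasonal: c_t = gamma*(x_t - S_t) + (1 - gamma)*c_{t-p}
    """
    # Simple initializations: level from the first cycle's mean, trend from
    # the change between the first two cycles, seasonal indices from cycle 1
    S = sum(x[:p]) / p
    b = (sum(x[p:2 * p]) - sum(x[:p])) / (p * p)
    season = [x[i] - S for i in range(p)]
    fitted = []
    for t in range(p, len(x)):
        c_old = season[t % p]
        fitted.append(S + b + c_old)     # one-step-ahead fitted value
        S_prev = S
        S = alpha * (x[t] - c_old) + (1 - alpha) * (S_prev + b)
        b = beta * (S - S_prev) + (1 - beta) * b
        season[t % p] = gamma * (x[t] - S) + (1 - gamma) * c_old
    return fitted

# A trend-free series with an exact additive seasonal pattern of period 4
p = 4
s = [3, -1, -2, 0]
x = [10 + s[t % p] for t in range(20)]
fitted = holt_winters_additive(x, p=p, alpha=0.3, beta=0.1, gamma=0.2)
```

On this exactly-seasonal series the one-step-ahead fitted values reproduce the observations; with real data the three parameters would be tuned by iterative minimization of the squared fitting errors, as noted above.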

As Chatfield notes, when seasonal components are included in models of this type, the effects can be included on a simple additive basis (the seasonal effect is a factor that is added to the mean effect) or a multiplicative basis, in which the seasonal effect increases or decreases the mean level by a multiple rather than a fixed amount. If a strong multiplicative effect is suspected, a log transform may be helpful, as this will convert it into an additive effect. Software packages that provide time series analysis support will typically offer a range of options for constructing models of this type. For example, as illustrated in the selection dialog form below, SPSS supports exponential smoothing models of seasonal and non-seasonal types, with and without data transformation, but with prior definition of the periodicity in the data. In this particular case the PM10 kerbside pollution dataset described in the previous subsections of this topic was analyzed using the SPSS exponential smoothing model with a pre-defined periodicity of 7 days (to account for weekly fluctuations in traffic). Tools of this type enable alternative models and transformations to be tested, with outputs compared by quality of fit to the data. A wide range of measures of fit can be selected, including correlation and mean absolute error (MAE).

As previously observed, analysis of this particular dataset suggests a high level of randomness with some periodicity, and this is borne out by the exponential fitting process: the best fit is obtained for log-transformed data with an additive model. Even so, this model explains only about 30% of the variance in the observed data using the standard r² correlation statistic, although it is better than a simple model representing the mean of the differenced series; the latter comparison is measured by the so-called stationary r² statistic. This is defined as:

r²_S = 1 − Σ_t(x_t − x̂_t)² / Σ_t(∇x_t − ∇x̄)²

The numerator of the ratio is the sum of squared differences between the observed and modeled data values, whilst the denominator is computed from the series differenced at the specified periodicity (7 days in the above example).
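The stationary r² computation just described can be reproduced directly. The following is an illustrative hand computation following that description, not SPSS's own implementation (whose exact conventions may differ in detail):

```python
def stationary_r2(observed, fitted, p=7):
    """Stationary r-squared: 1 minus the ratio of the model's sum of squared
    errors to the squared deviations of the lag-p differenced series about
    its mean (i.e. relative to a simple mean-difference model)."""
    sse = sum((o - f) ** 2 for o, f in zip(observed, fitted))
    diffs = [observed[t] - observed[t - p] for t in range(p, len(observed))]
    mean_d = sum(diffs) / len(diffs)
    sst = sum((d - mean_d) ** 2 for d in diffs)
    return 1 - sse / sst

# Synthetic series with a weekly pattern plus curvature; a perfect model
# scores exactly 1, and a biased model scores strictly less
obs = [0.1 * t * t + (t % 7) for t in range(30)]
r2_perfect = stationary_r2(obs, obs)
r2_biased = stationary_r2(obs, [o + 5 for o in obs])
```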

If we apply a similar analysis to the Heathrow Airport temperature data discussed when we looked at temporal autocorrelation (shown below), which is clearly seasonal, a simple seasonal model is sufficient to produce an excellent fit, with an r2 value of 0.925 and a mean absolute error level of only 1 degree.

Monthly maximum temperature, Heathrow Airport, 1948-2009

The NIST data example referred to above consists of data for 24 quarterly sales periods from 1990 onwards. The data items are:

362,385,432,341,382,409,498,387,473,513,582,474,544,582,681,557,628,707,773,592,627,725,854,661

We can model this dataset using the SPSS forecasting facility, automated option, and extend the model to provide a forward forecast for 1996 and 1997. In this example the software selected exponential smoothing with a multiplicative factor as the best model and computed the fit, confidence limits and forecasts based on this model (see chart below). As can be seen the fit is excellent (r2=0.973) and the forecast clearly develops this pattern into future quarters — which, of course, may or may not be a realistic model in practice!

NIST model sales data

Given a time series that is suspected of exhibiting a mix of (local) trend, seasonal and residual components, tools exist that make separation of these components a quick and simple process. The procedure is known as decomposing the time series and is available in many software packages. The procedure in R has the advantage that it simultaneously decomposes and plots the results, so we shall use it to illustrate the process, applying it to the Heathrow temperature dataset with periodicity specified as 12 months.

The steps involved are: (i) read in the temperature dataset; (ii) create a time series object from the dataset using the ts() function with a frequency defined as 12 (months); (iii) use the decomposition function, stl() (seasonal-trend decomposition using loess), with the parameter s.window="periodic" to break the data into its components; and finally (iv) plot the results (as illustrated below). The loess procedure is a local polynomial fitting utility that is called from the stl() function (see the NIST website, loess section, for a fuller description). The 'details' section of the stl() documentation describes its operation as follows: "The seasonal component is found by loess smoothing the seasonal sub-series (the series of all January values, ...); if s.window = "periodic" smoothing is effectively replaced by taking the mean. The seasonal values are removed, and the remainder smoothed to find the trend. The overall level is removed from the seasonal component and added to the trend component. This process is iterated a few times. The remainder component is the residuals from the seasonal plus trend fit."
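The same idea can be sketched without R. The following Python fragment (function name `decompose` is my own) performs a simplified classical decomposition — centered moving-average trend, period-average seasonal indices, and a remainder — rather than the loess-based fitting that stl() performs, purely to illustrate what the decomposition computes:

```python
def decompose(x, p=12):
    """Classical additive decomposition into trend (centered moving average),
    seasonal (centered period averages of the detrended series) and
    remainder components. Trend/remainder are None at the ends, where the
    centered average is undefined."""
    h = p // 2
    # Trend: centered moving average over p+1 terms with end weights of 0.5
    trend = [None] * len(x)
    for t in range(h, len(x) - h):
        trend[t] = (0.5 * x[t - h] + 0.5 * x[t + h]
                    + sum(x[t - h + 1:t + h])) / p
    # Seasonal: average detrended value at each position in the cycle,
    # centered so the seasonal indices sum to (approximately) zero
    detr = [x[t] - trend[t] for t in range(len(x)) if trend[t] is not None]
    buckets = [[] for _ in range(p)]
    for i, v in enumerate(detr):
        buckets[(i + h) % p].append(v)   # detr[0] is at time index h
    seas = [sum(b) / len(b) for b in buckets]
    mean_seas = sum(seas) / p
    seas = [v - mean_seas for v in seas]
    seasonal = [seas[t % p] for t in range(len(x))]
    remainder = [x[t] - trend[t] - seasonal[t] if trend[t] is not None
                 else None for t in range(len(x))]
    return trend, seasonal, remainder

# Synthetic monthly data: constant level 10 plus an exact seasonal pattern
s12 = [5, 3, 1, -1, -3, -5, -5, -3, -1, 1, 3, 5]   # sums to zero
x = [10 + s12[t % 12] for t in range(60)]
trend, seasonal, remainder = decompose(x, p=12)
```

On this synthetic series the recovered trend is the constant level, the seasonal component reproduces the true pattern, and the remainder is zero; stl() applies the same decomposition logic but with loess smoothing and iteration, as quoted above.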

Decomposition of Heathrow temperature dataset

References

[CHA1] Chatfield C (1975) The Analysis of Time Series: Theory and Practice. Chapman and Hall, London