Let us introduce some of the most important concepts of time series
analysis by considering an example where we look for simple models for
predicting the monthly prices of wheat.

In the following, let P_{t} denote
the price of wheat at time (month) t. The
first naive guess would be to say that the price next month is the same
as in this month. Hence, the predictor is
P̂_{t+1|t} = P_{t}.    (1.1)

This predictor is called the naive predictor
or the persistent predictor. The syntax
used is short for a prediction (or estimate) of P_{t+1}
given the observations P_{t}, P_{t-1} , . . ..

Next month, i.e., at time t + 1, the actual
price is P_{t+1}. This means that
the prediction error or innovation may be computed as

ε_{t+1} = P_{t+1} - P̂_{t+1|t}.    (1.2)

By combining Equations (1.1) and (1.2) we obtain the stochastic model for the wheat price

P_{t} = P_{t-1} + ε_{t}.    (1.3)

If {ε_{t}} is a sequence of
uncorrelated zero mean random variables (white
noise), the process (1.3) is called a random
walk. The random walk model is very often seen in finance and
econometrics. For this model the optimal predictor is the naive
predictor (1.1).

The random walk can be rewritten as

P_{t} = ε_{t} + ε_{t-1} + · · ·    (1.4)
which shows that the random walk is an integration of the noise, and
that the variance of P_{t} is
unbounded; therefore, no stationary distribution exists. This is an
example of a non-stationary process.
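The unbounded variance can be checked with a small simulation. The sketch below (plain Python; standard Gaussian white noise and the replication counts are arbitrary choices, not from the text) simulates many independent random walks and estimates Var(P_t) across the replications, which grows roughly linearly in t:

```python
import random

random.seed(1)

# Simulate M independent random walks P_t = P_{t-1} + e_t of length N,
# with {e_t} standard Gaussian white noise.
M, N = 2000, 400
paths = []
for _ in range(M):
    p, path = 0.0, []
    for _ in range(N):
        p += random.gauss(0.0, 1.0)   # P_t = P_{t-1} + e_t
        path.append(p)
    paths.append(path)

def var_at(t):
    """Sample variance of P_t over the M replications (0-indexed t)."""
    vals = [path[t] for path in paths]
    m = sum(vals) / M
    return sum((v - m) ** 2 for v in vals) / (M - 1)

# Var(P_t) = t * sigma^2 grows linearly in t: no stationary distribution.
for t in (50, 100, 200, 400):
    print(t, round(var_at(t - 1), 1))
```

The printed variances track the time index itself (since σ² = 1 here), illustrating why no stationary distribution exists.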

However, it is natural to consider the more general model

P_{t} = φP_{t-1} + ε_{t}    (1.5)

called the AR(1) model (the autoregressive model of first order). For this process a stationary distribution exists for |φ| < 1. Notice that the random walk is
obtained for φ = 1.
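Stationarity for |φ| < 1 can be illustrated numerically. The sketch below (plain Python; φ = 0.8, unit noise variance, and the burn-in length are arbitrary choices) simulates (1.5) and compares the sample variance with the stationary variance σ²/(1 − φ²), which follows from Var(P_t) = φ²Var(P_{t-1}) + σ² in stationarity:

```python
import random

random.seed(2)

def simulate_ar1(phi, n, burn=500, sigma=1.0):
    """Simulate P_t = phi * P_{t-1} + e_t with Gaussian white noise e_t."""
    p, out = 0.0, []
    for i in range(burn + n):
        p = phi * p + random.gauss(0.0, sigma)
        if i >= burn:          # discard burn-in so the series is ~stationary
            out.append(p)
    return out

phi = 0.8
x = simulate_ar1(phi, n=100_000)
mean = sum(x) / len(x)
var = sum((v - mean) ** 2 for v in x) / (len(x) - 1)

# Sample variance vs. the stationary variance sigma^2 / (1 - phi^2).
print(round(var, 2), round(1 / (1 - phi**2), 2))
```

For φ = 1 the same recursion gives the random walk, and no such finite limiting variance exists.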

Another candidate model for the wheat prices is

P_{t} = ψP_{t-12} + ε_{t}    (1.6)

which assumes that the price this month is explained by the price in the same
month last year. This seems to be a reasonable guess for a simple
model, since it is well known that wheat prices exhibit a seasonal variation. (The noise processes in
(1.5) and (1.6) are, despite the notation used, of course not the
same.)

For wheat prices it is obvious that both the current price and the price
in the same month of the previous year might be used in a description
of the expected price next month. Such a model is obtained if we assume
that the innovation ε_{t} in model
(1.5) shows an annual variation, i.e., the combined model is

(P_{t} - φP_{t-1}) - ψ(P_{t-12} - φP_{t-13}) = ε_{t}.    (1.7)

Models such as (1.6) and (1.7) are
called seasonal models, and they are used
very often in econometrics.

Notice that for ψ = 0 we obtain the AR(1)
model (1.5), while for φ = 0 the simplest
seasonal model (1.6) is obtained.
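To see that (1.7) indeed combines both effects, one can solve it for P_{t} and simulate. The sketch below (plain Python; the parameter values φ = 0.8, ψ = 0.5 are hypothetical) generates a series from the expanded recursion and verifies that the factored form of (1.7) recovers exactly the innovations that were fed in; setting ψ = 0 in the recursion gives (1.5), and φ = 0 gives (1.6):

```python
import random

random.seed(3)

phi, psi, sigma = 0.8, 0.5, 1.0          # hypothetical parameter values
p = [0.0] * 13                            # zero initial conditions
noise = [0.0] * 13

# Solve (1.7) for P_t:
#   P_t = phi*P_{t-1} + psi*P_{t-12} - phi*psi*P_{t-13} + e_t
for _ in range(200):
    e = random.gauss(0.0, sigma)
    noise.append(e)
    p.append(phi * p[-1] + psi * p[-12] - phi * psi * p[-13] + e)

# Recover the innovation from the factored form of (1.7) at each t and
# check that it equals the noise that was fed in.
ok = all(
    abs((p[t] - phi * p[t - 1]) - psi * (p[t - 12] - phi * p[t - 13]) - noise[t]) < 1e-9
    for t in range(13, len(p))
)
print(ok)   # -> True
```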

By introducing the backward shift operator
B by

B^{k}P_{t} = P_{t-k}    (1.8)

the models can be written in a more compact form. The AR(1) model can be
written as (1 - φB)P_{t} = ε_{t}, and
the seasonal model in (1.7) as

(1 - φB)(1 - ψB^{12})P_{t} = ε_{t}.    (1.9)

If we furthermore introduce the difference operator

∇ = (1 - B)    (1.10)

then the random walk can be written ∇P_{t}
= ε_{t} using a very compact notation. In this book these
kinds of notations will be widely used in order to obtain compact
equations.
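The operator definitions translate directly into code. A minimal sketch (plain Python; the function names B and nabla are illustrative only) implements (1.8) and (1.10) on finite series and checks that differencing a simulated random walk recovers the noise, i.e., ∇P_{t} = ε_{t}:

```python
import random

def B(series, k=1):
    """Backward shift (1.8): (B^k P)_t = P_{t-k}; drops the last k values."""
    return series[:-k] if k > 0 else list(series)

def nabla(series):
    """Difference operator (1.10): (∇P)_t = P_t - P_{t-1} = ((1 - B)P)_t."""
    return [p - q for p, q in zip(series[1:], B(series))]

random.seed(4)
e = [random.gauss(0.0, 1.0) for _ in range(50)]

# Build a random walk P_t = e_1 + ... + e_t by integrating the noise.
P, s = [], 0.0
for v in e:
    s += v
    P.append(s)

# ∇P_t = e_t: differencing the random walk recovers the noise sequence.
ok = all(abs(d - v) < 1e-9 for d, v in zip(nabla(P), e[1:]))
print(ok)   # -> True
```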

Given a time series of observed monthly
wheat prices, P_{1} , P_{2} , . . .
, P_{N} , the model structure
can be identified, and, for a given model, the time series can be used
for parameter estimation.

The model identification is most often
based on the estimated autocorrelation function, since, as will be
shown in Chapter 6, the autocorrelation function fulfils the same
difference equation as the model. The autocorrelation function shows
how the price is correlated with previous prices; more specifically,
the autocorrelation at lag k, called ρ(k), is simply the correlation between P_{t} and P_{t-k}
for stationary processes. For the monthly values of the wheat price we
might expect a dominant annual variation and, hence, that the
autocorrelation at lag 12, i.e., ρ(12), is
high.
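The estimated autocorrelation function is easily computed from a series. The sketch below (plain Python; ψ = 0.8 is a hypothetical value) simulates the purely seasonal model (1.6) and estimates ρ(k) as the lag-k sample correlation; the estimate at lag 12 is close to ψ, while the estimate at lag 1 is near zero:

```python
import random

random.seed(5)

# Simulate the purely seasonal model (1.6), P_t = psi*P_{t-12} + e_t,
# with Gaussian white noise.
psi, n = 0.8, 20_000
p = [0.0] * 12
for _ in range(n):
    p.append(psi * p[-12] + random.gauss(0.0, 1.0))
p = p[1000:]                      # drop a burn-in period

mean = sum(p) / len(p)
c0 = sum((v - mean) ** 2 for v in p)

def rho(k):
    """Estimated autocorrelation at lag k."""
    ck = sum((p[t] - mean) * (p[t - k] - mean) for t in range(k, len(p)))
    return ck / c0

print(round(rho(1), 2), round(rho(12), 2))  # rho(12) should be near psi
```

For real wheat price data the same estimator would be applied to the observed series P_1, ..., P_N.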

The models above will, of course, be generalized in the book. It is
important to notice that these processes all belong to the more general
class of linear processes, which again is strongly related to the
theory of linear systems as demonstrated in the book.