The autoregressive moving average (ARMA) model

$$y_t = a_0 + \sum_{i=1}^{p} a_i y_{t-i} + \sum_{i=0}^{q} \beta_i \varepsilon_{t-i}$$
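As a concrete illustration, here is a minimal simulation sketch of this model, assuming Gaussian white noise; the function name `simulate_arma`, the coefficient values, and the burn-in length are illustrative choices rather than anything fixed by the model.

```python
import numpy as np

def simulate_arma(a, beta, a0=0.0, T=500, sigma=1.0, burn=200, seed=0):
    """Simulate y_t = a0 + sum_i a[i-1]*y_{t-i} + sum_j beta[j]*e_{t-j}, with beta[0] the weight on e_t."""
    rng = np.random.default_rng(seed)
    p, q = len(a), len(beta)
    e = rng.normal(0.0, sigma, size=T + burn)   # Gaussian white noise
    y = np.zeros(T + burn)
    for t in range(max(p, q), T + burn):
        ar_part = sum(a[i] * y[t - 1 - i] for i in range(p))
        ma_part = sum(beta[i] * e[t - i] for i in range(q))
        y[t] = a0 + ar_part + ma_part
    return y[burn:]                             # drop the burn-in so start-up values are negligible

# ARMA(1,1) with a0 = 1, a1 = 0.7, beta0 = 1, beta1 = 0.4
y = simulate_arma(a=[0.7], beta=[1.0, 0.4], a0=1.0, T=5000)
print(y.mean())   # should be close to a0 / (1 - a1) ≈ 3.33
```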

Stationarity

A stochastic process is covariance stationary, or weakly stationary, if for all $t$ and $s$

$$
\begin{aligned}
E(y_t) &= E(y_{t-s}) = \mu \\
E[(y_t - \mu)^2] &= E[(y_{t-s} - \mu)^2] = \sigma_y^2 = \gamma_0 \\
E[(y_t - \mu)(y_{t-s} - \mu)] &= E[(y_{t-j} - \mu)(y_{t-j-s} - \mu)] = \gamma_s
\end{aligned}
$$

If a process is covariance stationary, the covariance between $y_t$ and $y_{t-s}$ depends only on $s$, the length of time separating the observations. It follows that for a covariance stationary process, $\gamma_s$ and $\gamma_{-s}$ would represent the same magnitude.

For a covariance stationary series, we can define the autocorrelation between $y_t$ and $y_{t-s}$ as

$$\rho_s = \frac{\gamma_s}{\gamma_0}$$

where $\gamma_0$ is the variance of $y_t$.

Ergodicity

Imagine a battery of $I$ computers generating sequences $\{y_t^{(1)}\}_{t=-\infty}^{\infty}$, $\{y_t^{(2)}\}_{t=-\infty}^{\infty}$, $\ldots$, $\{y_t^{(I)}\}_{t=-\infty}^{\infty}$, and consider selecting the observation associated with date $t$ from each sequence: $\{y_t^{(1)}, y_t^{(2)}, \ldots, y_t^{(I)}\}$. This would be described as a sample of $I$ realizations of the random variable $Y_t$. The expectation of the $t$th observation of a time series refers to the mean of this probability distribution

$$E(Y_t) = \int_{-\infty}^{\infty} y_t \, f_{Y_t}(y_t) \, dy_t$$

We might view this as the probability limit of the ensemble average

$$E(Y_t) = \operatorname*{plim}_{I \to \infty} \frac{1}{I} \sum_{i=1}^{I} Y_t^{(i)}$$

Describing the expectation of a time series in terms of ensemble averages may seem a bit contrived. Usually we have a single realization of size $T$ from the process, $\{y_1^{(1)}, y_2^{(1)}, \ldots, y_T^{(1)}\}$. From these observations we would calculate the sample mean $\bar{y}$, which is a time average

$$\bar{y} = \frac{1}{T} \sum_{t=1}^{T} y_t^{(1)}$$

A covariance stationary process is said to be ergodic for the mean if $\bar{y}$ converges in probability to $E(Y_t)$ as $T \to \infty$.
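A small simulation sketch of the two kinds of averages, assuming a stationary Gaussian AR(1) process (introduced formally later in these notes); the parameter values and sample sizes are arbitrary illustrations. For this ergodic process the ensemble average at a fixed date and the time average over a single realization should both approach $E(Y_t)$.

```python
import numpy as np

rng = np.random.default_rng(1)
c, phi, sigma, T, I = 1.0, 0.5, 1.0, 2000, 2000
mu = c / (1 - phi)                                                   # theoretical mean E(Y_t)

# I independent realizations of a stationary Gaussian AR(1), each of length T
y = np.zeros((I, T))
y[:, 0] = mu + rng.normal(0.0, sigma / np.sqrt(1 - phi**2), size=I)  # start in the stationary distribution
for t in range(1, T):
    y[:, t] = c + phi * y[:, t - 1] + rng.normal(0.0, sigma, size=I)

ensemble_avg = y[:, -1].mean()     # average across realizations at one date
time_avg = y[0, :].mean()          # average over time for a single realization
print(mu, ensemble_avg, time_avg)  # all three should be close
```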

Moving Average Processes

The First-Order Moving Average Process

Let $\{\varepsilon_t\}$ be white noise and consider the process

$$y_t = \mu + \varepsilon_t + \theta \varepsilon_{t-1}$$

where $\mu$ and $\theta$ could be any constants. This time series is called a first-order moving average process, denoted MA(1).

Expectation The expectation of $y_t$ is

$$E(y_t) = E(\mu + \varepsilon_t + \theta \varepsilon_{t-1}) = \mu + E(\varepsilon_t) + \theta E(\varepsilon_{t-1}) = \mu$$

Variance The variance of $y_t$ is

$$E(y_t - \mu)^2 = E(\varepsilon_t + \theta \varepsilon_{t-1})^2 = E(\varepsilon_t^2 + 2\theta \varepsilon_t \varepsilon_{t-1} + \theta^2 \varepsilon_{t-1}^2) = \sigma^2 + 0 + \theta^2 \sigma^2 = (1 + \theta^2)\sigma^2$$

Autocovariance The first autocovariance of $y_t$ is

$$E(y_t - \mu)(y_{t-1} - \mu) = E[(\varepsilon_t + \theta \varepsilon_{t-1})(\varepsilon_{t-1} + \theta \varepsilon_{t-2})] = E(\varepsilon_t \varepsilon_{t-1} + \theta \varepsilon_{t-1}^2 + \theta \varepsilon_t \varepsilon_{t-2} + \theta^2 \varepsilon_{t-1} \varepsilon_{t-2}) = 0 + \theta \sigma^2 + 0 + 0 = \theta \sigma^2$$

Higher autocovariances are all zero: for all $j > 1$,

$$E(y_t - \mu)(y_{t-j} - \mu) = 0$$

Autocorrelation The $j$th autocorrelation of a covariance stationary process is $\rho_j = \gamma_j / \gamma_0$, so

$$\rho_1 = \frac{\gamma_1}{\gamma_0} = \frac{\theta \sigma^2}{(1 + \theta^2)\sigma^2} = \frac{\theta}{1 + \theta^2}, \qquad \rho_j = 0, \quad j > 1$$
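A quick numerical sanity check of these MA(1) moments; this is a sketch assuming Gaussian white noise and the arbitrary values $\mu = 2$, $\theta = 0.6$, $\sigma = 1$.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, theta, sigma, T = 2.0, 0.6, 1.0, 200_000

e = rng.normal(0.0, sigma, size=T + 1)
y = mu + e[1:] + theta * e[:-1]                  # y_t = mu + e_t + theta * e_{t-1}

yc = y - y.mean()
gamma0 = (yc * yc).mean()                        # sample variance
gamma1 = (yc[1:] * yc[:-1]).mean()               # sample first autocovariance

print(y.mean(), mu)                              # ≈ 2.0
print(gamma0, (1 + theta**2) * sigma**2)         # ≈ 1.36
print(gamma1 / gamma0, theta / (1 + theta**2))   # ≈ 0.441
```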

The qth-Order Moving Average Process

A $q$th-order moving average process, denoted MA($q$), is characterized by

$$y_t = \mu + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2} + \cdots + \theta_q \varepsilon_{t-q}$$

Expectation The expectation of $y_t$ is

$$E(y_t) = E(\mu) + E(\varepsilon_t) + \theta_1 E(\varepsilon_{t-1}) + \theta_2 E(\varepsilon_{t-2}) + \cdots + \theta_q E(\varepsilon_{t-q}) = \mu$$

Variance The variance of $y_t$ is (the cross terms vanish because white noise is serially uncorrelated)

$$
\begin{aligned}
E(y_t - \mu)^2 &= E(\varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2} + \cdots + \theta_q \varepsilon_{t-q})^2 \\
&= E(\varepsilon_t)^2 + E(\theta_1 \varepsilon_{t-1})^2 + E(\theta_2 \varepsilon_{t-2})^2 + \cdots + E(\theta_q \varepsilon_{t-q})^2 \\
&= \sigma^2 + \theta_1^2 \sigma^2 + \theta_2^2 \sigma^2 + \cdots + \theta_q^2 \sigma^2 = (1 + \theta_1^2 + \theta_2^2 + \cdots + \theta_q^2)\sigma^2
\end{aligned}
$$

Autocovariance The autocovariance of $y_t$ is

$$
\begin{aligned}
E(y_t - \mu)(y_{t-j} - \mu) &= E[(\varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q})(\varepsilon_{t-j} + \theta_1 \varepsilon_{t-j-1} + \cdots + \theta_q \varepsilon_{t-j-q})] \\
&= E(\theta_j \varepsilon_{t-j}^2 + \theta_{j+1}\theta_1 \varepsilon_{t-j-1}^2 + \theta_{j+2}\theta_2 \varepsilon_{t-j-2}^2 + \cdots + \theta_q \theta_{q-j} \varepsilon_{t-q}^2) \\
&= (\theta_j + \theta_{j+1}\theta_1 + \theta_{j+2}\theta_2 + \cdots + \theta_q \theta_{q-j})\sigma^2, \qquad j = 1, 2, \ldots, q
\end{aligned}
$$

For all $j > q$,

$$E(y_t - \mu)(y_{t-j} - \mu) = 0$$
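The autocovariance formula above translates directly into code. Below is a minimal sketch; the helper name `ma_autocovariances` and the example coefficients are hypothetical choices.

```python
import numpy as np

def ma_autocovariances(thetas, sigma2=1.0, max_lag=None):
    """Theoretical autocovariances of an MA(q):
    gamma_j = (theta_j + theta_{j+1}*theta_1 + ... + theta_q*theta_{q-j}) * sigma^2."""
    psi = np.concatenate(([1.0], np.asarray(thetas, dtype=float)))  # psi_0 = 1, psi_j = theta_j
    q = len(psi) - 1
    max_lag = q if max_lag is None else max_lag
    gammas = []
    for j in range(max_lag + 1):
        if j > q:
            gammas.append(0.0)                                      # gamma_j = 0 for j > q
        else:
            gammas.append(sigma2 * float(np.dot(psi[j:], psi[:len(psi) - j])))
    return np.array(gammas)

# MA(2) with theta_1 = 0.5, theta_2 = -0.3: gamma_0 = 1.34, gamma_1 = 0.35, gamma_2 = -0.3
print(ma_autocovariances([0.5, -0.3]))
```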

The Infinite-Order Moving Average Process

Consider the process as $q \to \infty$:

$$y_t = \mu + \sum_{j=0}^{\infty} \psi_j \varepsilon_{t-j} = \mu + \psi_0 \varepsilon_t + \psi_1 \varepsilon_{t-1} + \psi_2 \varepsilon_{t-2} + \cdots$$

This could be described as an MA($\infty$) process.

The MA($\infty$) process is covariance stationary if its coefficients are square summable,

$$\sum_{j=0}^{\infty} \psi_j^2 < \infty$$

It is often convenient to work with the slightly stronger condition of absolute summability,

$$\sum_{j=0}^{\infty} |\psi_j| < \infty$$

Expectation The mean of an MA($\infty$) process with absolutely summable coefficients is

$$E(y_t) = \lim_{T \to \infty} E(\mu + \psi_0 \varepsilon_t + \psi_1 \varepsilon_{t-1} + \psi_2 \varepsilon_{t-2} + \cdots + \psi_T \varepsilon_{t-T}) = \mu$$

Autocovariance The autocovariances of an MA($\infty$) process with absolutely summable coefficients are

$$
\begin{aligned}
\gamma_0 &= E(y_t - \mu)^2 = \lim_{T \to \infty} E(\psi_0 \varepsilon_t + \psi_1 \varepsilon_{t-1} + \psi_2 \varepsilon_{t-2} + \cdots + \psi_T \varepsilon_{t-T})^2 = \lim_{T \to \infty} (\psi_0^2 + \psi_1^2 + \psi_2^2 + \cdots + \psi_T^2)\sigma^2 \\
\gamma_j &= E(y_t - \mu)(y_{t-j} - \mu) = (\psi_j \psi_0 + \psi_{j+1}\psi_1 + \psi_{j+2}\psi_2 + \psi_{j+3}\psi_3 + \cdots)\sigma^2
\end{aligned}
$$

Autoregressive Processes

The First-Order Autoregressive Process

A first-order autoregressive process, denoted AR(1), satisfies the following difference equation

$$y_t = c + \phi y_{t-1} + \varepsilon_t$$

When $|\phi| < 1$, this process is covariance stationary. Substituting recursively, it can be rewritten as

$$
\begin{aligned}
y_t &= c + \phi y_{t-1} + \varepsilon_t \\
&= c + \varepsilon_t + \phi(c + \varepsilon_{t-1}) + \phi^2 y_{t-2} \\
&= c + \varepsilon_t + \phi(c + \varepsilon_{t-1}) + \phi^2(c + \varepsilon_{t-2}) + \phi^3 y_{t-3} \\
&= c + \varepsilon_t + \phi(c + \varepsilon_{t-1}) + \phi^2(c + \varepsilon_{t-2}) + \phi^3(c + \varepsilon_{t-3}) + \phi^4 y_{t-4} \\
&= c + \varepsilon_t + \phi(c + \varepsilon_{t-1}) + \phi^2(c + \varepsilon_{t-2}) + \phi^3(c + \varepsilon_{t-3}) + \cdots \\
&= (c + \phi c + \phi^2 c + \phi^3 c + \cdots) + \varepsilon_t + \phi \varepsilon_{t-1} + \phi^2 \varepsilon_{t-2} + \phi^3 \varepsilon_{t-3} + \cdots \\
&= \frac{c}{1 - \phi} + \varepsilon_t + \phi \varepsilon_{t-1} + \phi^2 \varepsilon_{t-2} + \phi^3 \varepsilon_{t-3} + \cdots
\end{aligned}
$$

We can derive the expectation and autocovariances of the AR(1) process from its MA($\infty$) representation above, or directly by assuming that the AR(1) process is covariance stationary.

Expectation Taking expectations on both sides,

$$
\begin{aligned}
E(y_t) &= c + \phi E(y_{t-1}) + E(\varepsilon_t) \\
\mu &= c + \phi \mu \\
\mu &= \frac{c}{1 - \phi}
\end{aligned}
$$

Autocovariance The autocovariances follow from the MA($\infty$) representation, $\gamma_s = \dfrac{\phi^s \sigma^2}{1 - \phi^2}$; the same result is derived below (with $a_1$ in place of $\phi$) in the section on the autocorrelation function.

The pth-Order Autoregressive Process

The Autocorrelation Function

The autocorrelation function (ACF) and the partial autocorrelation function (PACF) are useful for identifying the type of process that generated a time series.

For the AR(1) model:

  • Method 1

$$
\begin{aligned}
y_t &= a_0 + a_1 y_{t-1} + \varepsilon_t \quad (\text{assume stationarity}) \\
E(y_t) &= a_0 + a_1 E(y_{t-1}) + E(\varepsilon_t) \;\Rightarrow\; \mu = a_0 + a_1 \mu \;\Rightarrow\; \mu = \frac{a_0}{1 - a_1} \\
\operatorname{Var}(y_t) &= a_1^2 \operatorname{Var}(y_{t-1}) + \operatorname{Var}(\varepsilon_t) \;\Rightarrow\; \gamma_0 = a_1^2 \gamma_0 + \sigma^2 \;\Rightarrow\; \gamma_0 = \frac{\sigma^2}{1 - a_1^2} \\
\operatorname{Cov}(y_t, y_{t-s}) &= \operatorname{Cov}(a_0 + a_1 y_{t-1} + \varepsilon_t,\; a_0 + a_1 y_{t-s-1} + \varepsilon_{t-s}) = \operatorname{Cov}(a_1 y_{t-1} + \varepsilon_t,\; a_1 y_{t-s-1} + \varepsilon_{t-s}) \\
&\Rightarrow\; \gamma_s = a_1^2 \gamma_s + a_1^s \sigma^2 \;\Rightarrow\; \gamma_s = \frac{a_1^s \sigma^2}{1 - a_1^2}
\end{aligned}
$$

So the autocorrelation function for AR(1) is

$$\rho_s = \frac{\gamma_s}{\gamma_0} = a_1^s$$

  • Method 2 If the process started at time zero,

$$y_t = a_0 \sum_{i=0}^{t-1} a_1^i + a_1^t y_0 + \sum_{i=0}^{t-1} a_1^i \varepsilon_{t-i}$$

Take the expectation of $y_t$ and $y_{t+s}$:

$$
\begin{aligned}
E(y_t) &= a_0 \sum_{i=0}^{t-1} a_1^i + a_1^t y_0 \\
E(y_{t+s}) &= a_0 \sum_{i=0}^{t+s-1} a_1^i + a_1^{t+s} y_0
\end{aligned}
$$

If $|a_1| < 1$, then as $t \to \infty$,

$$
\begin{aligned}
\lim_{t \to \infty} y_t &= \frac{a_0}{1 - a_1} + \sum_{i=0}^{\infty} a_1^i \varepsilon_{t-i} \\
\operatorname{Var}(y_t) &= E[(y_t - \mu)^2] = E[(\varepsilon_t + a_1 \varepsilon_{t-1} + a_1^2 \varepsilon_{t-2} + \cdots)^2] = \sigma^2 (1 + a_1^2 + a_1^4 + \cdots) = \frac{\sigma^2}{1 - a_1^2} \\
\operatorname{Cov}(y_t, y_{t-s}) &= E[(y_t - \mu)(y_{t-s} - \mu)] \\
&= E[(\varepsilon_t + a_1 \varepsilon_{t-1} + a_1^2 \varepsilon_{t-2} + \cdots)(\varepsilon_{t-s} + a_1 \varepsilon_{t-s-1} + a_1^2 \varepsilon_{t-s-2} + \cdots)] \\
&= E(a_1^s \varepsilon_{t-s}^2 + a_1^{s+2} \varepsilon_{t-s-1}^2 + a_1^{s+4} \varepsilon_{t-s-2}^2 + \cdots) \\
&= \sigma^2 a_1^s (1 + a_1^2 + a_1^4 + \cdots) = \frac{\sigma^2 a_1^s}{1 - a_1^2}
\end{aligned}
$$
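A short simulation sketch checking that the sample autocorrelations of a long AR(1) realization approach $a_1^s$; the parameter values are arbitrary and the noise is assumed Gaussian.

```python
import numpy as np

rng = np.random.default_rng(3)
a0, a1, sigma, T = 0.0, 0.8, 1.0, 100_000

y = np.zeros(T)
for t in range(1, T):
    y[t] = a0 + a1 * y[t - 1] + rng.normal(0.0, sigma)

yc = y - y.mean()
gamma0 = (yc * yc).mean()
for s in range(1, 5):
    r_s = (yc[s:] * yc[:-s]).mean() / gamma0
    print(s, round(r_s, 3), round(a1**s, 3))   # sample autocorrelation vs theoretical a1^s
```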

The Autocorrelation Function of an AR(2) Process

We assume that $a_0 = 0$, which implies that $E(y_t) = 0$. Adding or subtracting any constant from a variable does not change its variance, covariance, correlation coefficient, etc.

Using the Yule-Walker equations: multiply the second-order difference equation by $y_{t-s}$ for $s = 0, 1, 2, \ldots$ and take expectations:

$$
\begin{aligned}
E(y_t y_t) &= a_1 E(y_{t-1} y_t) + a_2 E(y_{t-2} y_t) + E(\varepsilon_t y_t) \\
E(y_t y_{t-1}) &= a_1 E(y_{t-1} y_{t-1}) + a_2 E(y_{t-2} y_{t-1}) + E(\varepsilon_t y_{t-1}) \\
E(y_t y_{t-2}) &= a_1 E(y_{t-1} y_{t-2}) + a_2 E(y_{t-2} y_{t-2}) + E(\varepsilon_t y_{t-2}) \\
&\;\;\vdots \\
E(y_t y_{t-s}) &= a_1 E(y_{t-1} y_{t-s}) + a_2 E(y_{t-2} y_{t-s}) + E(\varepsilon_t y_{t-s})
\end{aligned}
$$

By definition, the autocovariances of a stationary series are such that

$$E(y_t y_{t-s}) = E(y_{t-s} y_t) = E(y_{t-k} y_{t-k-s}) = \gamma_s$$

We also know that the coefficient on $\varepsilon_t$ is unity, so that $E(\varepsilon_t y_t) = \sigma^2$ and $E(\varepsilon_t y_{t-s}) = 0$ for $s > 0$. Therefore

$$
\begin{aligned}
\gamma_0 &= a_1 \gamma_1 + a_2 \gamma_2 + \sigma^2 \\
\gamma_1 &= a_1 \gamma_0 + a_2 \gamma_1 \\
\gamma_2 &= a_1 \gamma_1 + a_2 \gamma_0 \\
&\;\;\vdots \\
\gamma_s &= a_1 \gamma_{s-1} + a_2 \gamma_{s-2}
\end{aligned}
$$

Dividing by $\gamma_0$, we can get the ACF:

$$
\begin{aligned}
\rho_1 &= a_1 \rho_0 + a_2 \rho_1 \\
\rho_s &= a_1 \rho_{s-1} + a_2 \rho_{s-2}
\end{aligned}
$$

We know $\rho_0 = 1$, so

$$\rho_1 = a_1 + a_2 \rho_1 \;\Rightarrow\; \rho_1 = \frac{a_1}{1 - a_2}$$
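The recursion $\rho_s = a_1\rho_{s-1} + a_2\rho_{s-2}$ with these starting values is easy to evaluate numerically. A sketch follows; the function name and the example coefficients are illustrative, and the coefficients are assumed to satisfy the AR(2) stationarity conditions.

```python
def ar2_acf(a1, a2, max_lag=10):
    """ACF of a stationary AR(2) from the Yule-Walker recursion:
    rho_0 = 1, rho_1 = a1/(1 - a2), rho_s = a1*rho_{s-1} + a2*rho_{s-2}."""
    rho = [1.0, a1 / (1.0 - a2)]
    for s in range(2, max_lag + 1):
        rho.append(a1 * rho[s - 1] + a2 * rho[s - 2])
    return rho

# Example with a1 = 0.6, a2 = 0.2 (stationary): the ACF decays toward zero
print([round(r, 4) for r in ar2_acf(0.6, 0.2, max_lag=5)])
```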

The Autocorrelation Function of an MA(1) Process

Consider the MA(1) process $y_t = \varepsilon_t + \beta \varepsilon_{t-1}$.

Applying the Yule-Walker equations

$$
\begin{aligned}
\gamma_0 &= E(y_t y_t) = E[(\varepsilon_t + \beta \varepsilon_{t-1})(\varepsilon_t + \beta \varepsilon_{t-1})] = (1 + \beta^2)\sigma^2 \\
\gamma_1 &= E(y_t y_{t-1}) = E[(\varepsilon_t + \beta \varepsilon_{t-1})(\varepsilon_{t-1} + \beta \varepsilon_{t-2})] = \beta \sigma^2 \\
\gamma_2 &= E(y_t y_{t-2}) = E[(\varepsilon_t + \beta \varepsilon_{t-1})(\varepsilon_{t-2} + \beta \varepsilon_{t-3})] = 0 \\
\gamma_s &= E(y_t y_{t-s}) = E[(\varepsilon_t + \beta \varepsilon_{t-1})(\varepsilon_{t-s} + \beta \varepsilon_{t-s-1})] = 0, \qquad s > 1
\end{aligned}
$$

So the ACF of the MA(1) process is

$$\rho_0 = 1, \qquad \rho_1 = \frac{\gamma_1}{\gamma_0} = \frac{\beta}{1 + \beta^2}, \qquad \rho_s = 0, \quad s > 1$$

The Autocorrelation Function of an ARMA(1,1) Process

Consider the ARMA(1,1) process $y_t = a_1 y_{t-1} + \varepsilon_t + \beta_1 \varepsilon_{t-1}$. Applying the Yule-Walker approach,

$$
\begin{aligned}
E(y_t y_t) &= a_1 E(y_{t-1} y_t) + E(\varepsilon_t y_t) + \beta_1 E(\varepsilon_{t-1} y_t) \;\Rightarrow\; \gamma_0 = a_1 \gamma_1 + \sigma^2 + \beta_1 (a_1 + \beta_1)\sigma^2 \\
E(y_t y_{t-1}) &= a_1 E(y_{t-1} y_{t-1}) + E(\varepsilon_t y_{t-1}) + \beta_1 E(\varepsilon_{t-1} y_{t-1}) \;\Rightarrow\; \gamma_1 = a_1 \gamma_0 + \beta_1 \sigma^2 \\
E(y_t y_{t-2}) &= a_1 E(y_{t-1} y_{t-2}) + E(\varepsilon_t y_{t-2}) + \beta_1 E(\varepsilon_{t-1} y_{t-2}) \;\Rightarrow\; \gamma_2 = a_1 \gamma_1 \\
&\;\;\vdots \\
E(y_t y_{t-s}) &= a_1 E(y_{t-1} y_{t-s}) + E(\varepsilon_t y_{t-s}) + \beta_1 E(\varepsilon_{t-1} y_{t-s}) \;\Rightarrow\; \gamma_s = a_1 \gamma_{s-1}
\end{aligned}
$$

Solve the equations and get

$$\gamma_0 = \frac{1 + \beta_1^2 + 2 a_1 \beta_1}{1 - a_1^2}\,\sigma^2, \qquad \gamma_1 = \frac{(1 + a_1 \beta_1)(a_1 + \beta_1)}{1 - a_1^2}\,\sigma^2$$

And the ACF is

$$\rho_0 = 1, \qquad \rho_1 = \frac{\gamma_1}{\gamma_0} = \frac{(1 + a_1 \beta_1)(a_1 + \beta_1)}{1 + \beta_1^2 + 2 a_1 \beta_1}, \qquad \rho_s = a_1 \rho_{s-1}, \quad s > 1$$
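These formulas can be coded directly. A minimal sketch follows; the function name and the parameter values are illustrative, and $|a_1| < 1$ is assumed.

```python
import numpy as np

def arma11_acf(a1, b1, max_lag=10):
    """Theoretical ACF of y_t = a1*y_{t-1} + e_t + b1*e_{t-1}, assuming |a1| < 1."""
    rho = [1.0, (1 + a1 * b1) * (a1 + b1) / (1 + b1**2 + 2 * a1 * b1)]
    for s in range(2, max_lag + 1):
        rho.append(a1 * rho[s - 1])            # rho_s = a1 * rho_{s-1} for s > 1
    return np.array(rho)

print(arma11_acf(0.5, 0.3, max_lag=5).round(4))

# Optional cross-check with statsmodels' ArmaProcess (if installed); note that its
# lag-polynomial convention writes the AR part as [1, -a1]:
# from statsmodels.tsa.arima_process import ArmaProcess
# print(ArmaProcess(ar=[1, -0.5], ma=[1, 0.3]).acf(lags=6))
```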

The Partial Autocorrelation Function

In an AR(1) process, $y_t$ and $y_{t-2}$ are correlated even though $y_{t-2}$ does not appear directly in the model: $\rho_2 = \operatorname{corr}(y_t, y_{t-1}) \times \operatorname{corr}(y_{t-1}, y_{t-2}) = \rho_1^2$. All such “indirect” correlations are present in the ACF. In contrast, the partial autocorrelation between $y_t$ and $y_{t-s}$ eliminates the effects of the intervening values $y_{t-1}$ through $y_{t-s+1}$.

Method to find the PACF:

  • Form the series $\{y_t^*\}$, where $y_t^* \equiv y_t - \mu$
  • Form the first-order and second-order autoregression equations:

$$
\begin{aligned}
y_t^* &= \phi_{11} y_{t-1}^* + e_t \\
y_t^* &= \phi_{21} y_{t-1}^* + \phi_{22} y_{t-2}^* + e_t
\end{aligned}
$$

where $\phi_{11}$ is the partial autocorrelation between $y_t$ and $y_{t-1}$, and $\phi_{22}$ is the partial autocorrelation between $y_t$ and $y_{t-2}$. Repeating this process for all additional lags $s$ yields the PACF.
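A sketch of this procedure using ordinary least squares regressions (the function name `pacf_by_regression` and the AR(1) test series are illustrative assumptions, not a standard routine):

```python
import numpy as np

def pacf_by_regression(y, max_lag=10):
    """Partial autocorrelations: for each s, regress y*_t on y*_{t-1}, ..., y*_{t-s}
    by OLS and keep the coefficient on y*_{t-s}, i.e. phi_ss."""
    ystar = np.asarray(y, dtype=float) - np.mean(y)          # y*_t = y_t - mean
    pacf = []
    for s in range(1, max_lag + 1):
        Y = ystar[s:]
        X = np.column_stack([ystar[s - k : len(ystar) - k] for k in range(1, s + 1)])
        coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)
        pacf.append(coeffs[-1])                              # last coefficient is phi_ss
    return np.array(pacf)

# For an AR(1) series, phi_11 should be near a1 and phi_ss near 0 for s >= 2
rng = np.random.default_rng(4)
y = np.zeros(5000)
for t in range(1, len(y)):
    y[t] = 0.7 * y[t - 1] + rng.normal()
print(pacf_by_regression(y, max_lag=4).round(3))
```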

| Process | ACF | PACF |
| --- | --- | --- |
| White noise | $\rho_s = 0$ | $\phi_{ss} = 0$ |
| AR(1): $a_1 > 0$ | Direct geometric decay: $\rho_s = a_1^s$ | $\phi_{11} = \rho_1$; $\phi_{ss} = 0$ for $s \geq 2$ |
| AR(1): $a_1 < 0$ | Oscillating decay: $\rho_s = a_1^s$ | $\phi_{11} = \rho_1$; $\phi_{ss} = 0$ for $s \geq 2$ |
| AR($p$) | Decays toward zero; coefficients may oscillate | Spikes through lag $p$; $\phi_{ss} = 0$ for $s > p$ |
| MA(1): $\beta > 0$ | Positive spike at lag 1; $\rho_s = 0$ for $s \geq 2$ | Oscillating decay: $\phi_{11} > 0$ |
| MA(1): $\beta < 0$ | Negative spike at lag 1; $\rho_s = 0$ for $s \geq 2$ | Geometric decay: $\phi_{11} < 0$ |
| ARMA(1,1): $a_1 > 0$ | Geometric decay beginning after lag 1 | Oscillating decay after lag 1 |
| ARMA(1,1): $a_1 < 0$ | Oscillating decay beginning after lag 1 | Geometric decay after lag 1 |
| ARMA($p$,$q$) | Decay (direct or oscillating) beginning after lag $q$ | Decay (direct or oscillating) beginning after lag $p$ |
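In practice these patterns are usually inspected visually from the sample ACF and PACF. Below is a usage sketch with statsmodels' plotting helpers, assuming statsmodels and matplotlib are installed and using a simulated AR(1) series as stand-in data.

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# Hypothetical example data: a simulated AR(1) series with a1 = 0.7
rng = np.random.default_rng(5)
y = np.zeros(1000)
for t in range(1, len(y)):
    y[t] = 0.7 * y[t - 1] + rng.normal()

fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(y, lags=20, ax=axes[0])    # AR(1): geometric decay in the ACF
plot_pacf(y, lags=20, ax=axes[1])   # AR(1): a single significant spike at lag 1
plt.tight_layout()
plt.show()
```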

Sample Autocorrelations

Given that a series is stationary, we can use the sample mean, variance and autocorrelations to estimate the parameters of the actual data-generating process.

The estimates of $\mu$, $\sigma^2$, and $\rho_s$ are

$$\bar{y} = \frac{1}{T}\sum_{t=1}^{T} y_t, \qquad \hat{\sigma}^2 = \frac{1}{T}\sum_{t=1}^{T} (y_t - \bar{y})^2, \qquad r_s = \frac{\sum_{t=s+1}^{T} (y_t - \bar{y})(y_{t-s} - \bar{y})}{\sum_{t=1}^{T} (y_t - \bar{y})^2}$$

Box and Pierce used the sample autocorrelations to form the statistic

$$Q = T \sum_{k=1}^{s} r_k^2$$

If the data are generated from a stationary ARMA process, $Q$ asymptotically follows a $\chi^2$ distribution with $s$ degrees of freedom.

The Ljung-Box test statistic is

$$Q = T(T+2) \sum_{k=1}^{s} \frac{r_k^2}{T-k} \sim \chi^2(s)$$
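Both statistics are available in statsmodels. A usage sketch follows, assuming statsmodels is installed and using simulated white noise as stand-in data; the exact return format depends on the statsmodels version.

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(6)
white_noise = rng.normal(size=500)     # stand-in series; replace with the data of interest

# Q statistics and p-values at s = 10 lags; boxpierce=True also reports the
# Box-Pierce version. Recent statsmodels versions return a DataFrame.
result = acorr_ljungbox(white_noise, lags=[10], boxpierce=True)
print(result)                          # large p-values: no evidence against the white-noise null
```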