The Random Walk Model

Consider the following special case AR(1) process:

\[\begin{eqnarray*} y_{t}&=&y_{t-1}+\varepsilon_{t}\\ \Delta y_{t}&=&\varepsilon_{t} \end{eqnarray*}\]

If \(y_{0}\) is a given initial condition, its solution is

\[\begin{eqnarray*} y_{t}&=&y_{0}+\sum_{i=1}^{t}\varepsilon_{i} \end{eqnarray*}\]

Taking expectations:

\[\begin{eqnarray*} Ey_{t}&=& E\left(y_{0}+\sum_{i=1}^{t}\varepsilon_{i}\right)=y_{0} \end{eqnarray*}\]

Taking conditional expectations, the variances, and the covariance:

\[\begin{eqnarray*} E_{t}y_{t+1}&=&E_{t}\left(y_{t}+\varepsilon_{t+1}\right)=y_{t}\\ E_{t}y_{t+s}&=&E_{t}\left(y_{t}+\sum_{i=1}^{s}\varepsilon_{t+i}\right)=y_{t}\\ Var(y_{t})&=&Var\left(\sum_{i=1}^{t}\varepsilon_{i}\right)=t\sigma^{2}\\ Var(y_{t-s})&=&Var\left(\sum_{i=1}^{t-s}\varepsilon_{i}\right)=(t-s)\sigma^{2}\\ E\left[(y_{t}-y_{0})(y_{t-s}-y_{0})\right]&=&E\left[\left(\sum_{i=1}^{t}\varepsilon_{i}\right)\left(\sum_{i=1}^{t-s}\varepsilon_{i}\right)\right]\\ &=&E\left[\sum_{i=1}^{t-s}\varepsilon_{i}^{2}\right]\\ &=&(t-s)\sigma^{2} \end{eqnarray*}\]

The correlation coefficient \(\rho_{s}\) is

\[\begin{eqnarray*} \rho_{s}&=&\frac{(t-s)\sigma^{2}}{\sqrt{(t-s)\sigma^{2}}\times \sqrt{t\sigma^{2}}}\\ &=&\sqrt{\frac{t-s}{t}} \end{eqnarray*}\]

When \(t\) is large relative to \(s\), \(\rho_{s}\) is close to unity and decays very slowly as \(s\) increases.
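To illustrate, the following sketch (Python with NumPy; the sample size and the helper function are my own choices, not from the text) simulates a pure random walk and computes a few sample autocorrelations, which stay near one and die out only slowly, as the formula suggests.

```python
import numpy as np

rng = np.random.default_rng(42)

T = 1000
eps = rng.normal(0.0, 1.0, size=T)   # epsilon_t ~ N(0, sigma^2) with sigma = 1
y = np.cumsum(eps)                   # y_t = y_0 + sum of shocks, taking y_0 = 0

def sample_autocorr(x, s):
    """Sample autocorrelation of x at lag s (s >= 1)."""
    x = x - x.mean()
    return np.dot(x[s:], x[:-s]) / np.dot(x, x)

# The autocorrelations decay very slowly, consistent with rho_s = sqrt((t-s)/t).
for s in (1, 5, 10, 50):
    print(f"lag {s:3d}: {sample_autocorr(y, s):.3f}")
```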

The Random Walk plus Drift Model

Adding a constant term \(a_{0}\):

\[\begin{eqnarray*} y_{t}&=&y_{t-1}+a_{0}+\varepsilon_{t} \end{eqnarray*}\]

Given the initial condition \(y_{0}\), the solution is

\[\begin{eqnarray*} y_{t}&=&y_{0}+a_{0}t+\sum_{i=1}^{t}\varepsilon_{i} \end{eqnarray*}\]

The behavior of \(y_{t}\) is governed by two nonstationary components: a linear deterministic trend and a stochastic trend.
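Taking expectations of this solution separates the two components explicitly: the shocks have mean zero, so only the drift enters the mean, while the variance is driven entirely by the stochastic trend.

\[\begin{eqnarray*} Ey_{t}&=&y_{0}+a_{0}t\\ E_{t}y_{t+s}&=&y_{t}+a_{0}s\\ Var(y_{t})&=&Var\left(\sum_{i=1}^{t}\varepsilon_{i}\right)=t\sigma^{2} \end{eqnarray*}\]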

Function Spaces

  • \(x\): an element of \(C\), that is, any continuous curve traversing the unit interval.
  • Coordinates of \(x\): the values \(x(r)\in\mathbb{R}\) of \(x\) at points \(r\in[0,1]\) are called the coordinates of \(x\).
For two members of \(C\), \(x\in C\) and \(y\in C\), we need to say how close together they are. Technically, \(C\) must be assigned a metric. For example, the Euclidean metric for any pair of real numbers \(x\) and \(y\) is \(d_{E}(x,y)=|x-y|\).

The pair \((\mathbb{R},d_{E})\) is known as the Euclidean space.

We can also define a metric on \(C\), called the uniform metric, as

\[\begin{eqnarray*} d_{U}(x,y)&=&\sup_{0\leq r\leq 1}|x(r)-y(r)| \end{eqnarray*}\]

This is just the largest vertical separation between the pair of functions over the interval. \((C,d_{U})\) is a metric space.
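As a concrete illustration (a minimal sketch; the two functions and the grid approximation are arbitrary choices, not from the text), the uniform distance between two continuous functions on \([0,1]\) can be approximated by the largest vertical gap over a fine grid:

```python
import numpy as np

# Two continuous functions on the unit interval (arbitrary examples).
x = lambda r: r**2
y = lambda r: np.sin(r)

# Approximate d_U(x, y) = sup_{0 <= r <= 1} |x(r) - y(r)| on a fine grid.
r = np.linspace(0.0, 1.0, 10_001)
d_U = np.max(np.abs(x(r) - y(r)))
print(f"approximate uniform distance: {d_U:.4f}")
```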

Brownian Motion

A Brownian motion \(B\) is a real random function on the unit interval, with the following properties:

  • \(B\in C\) with probability 1.
  • \(B(0)=0\) with probability 1.
  • for any set of subintervals defined by arbitrary \(0\leq r_{1} < r_{2} < \dots < r_{k}\leq 1\), the increments \(B(r_{1})\), \(B(r_{2})-B(r_{1})\), \(\cdots\), \(B(r_{k})-B(r_{k-1})\) are independent.
  • \(B(t)-B(s)\sim N(0,t-s)\) for \(0\leq s< t \leq 1\).
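The definition above can be approximated on a grid: cumulating independent Gaussian increments whose variance equals the length of each subinterval reproduces the independent-increments and \(N(0,t-s)\) properties at the grid points. A minimal simulation sketch (Python with NumPy; the grid size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000                                   # number of grid points on [0, 1]
dr = 1.0 / n
# Independent Gaussian increments with variance equal to the interval length.
increments = rng.normal(0.0, np.sqrt(dr), size=n)

B = np.concatenate(([0.0], np.cumsum(increments)))   # B(0) = 0
r = np.linspace(0.0, 1.0, n + 1)

print(B[-1])   # one draw of B(1), which is distributed N(0, 1)
```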

The Functional Central Limit Theorem

We construct a function \(X_{T}(r)\), \(r\in[0,1]\), from the partial sum of the first \([Tr]\) observations, scaled by \(1/T\), defined by

\[\begin{eqnarray*} X_{T}(r)&\equiv&\frac{1}{T}\sum_{t=1}^{[Tr]}u_{t} \end{eqnarray*}\]
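For i.i.d. \(u_{t}\) with mean zero and variance \(\sigma^{2}\), the functional central limit theorem states that \(\sqrt{T}X_{T}(\cdot)\) converges weakly to \(\sigma B(\cdot)\). The sketch below (Python with NumPy; the sample size, \(\sigma\), and the number of replications are arbitrary choices) checks the implication at \(r=1\): \(\sqrt{T}X_{T}(1)\) should be approximately \(N(0,\sigma^{2})\).

```python
import numpy as np

rng = np.random.default_rng(1)

T = 2000
sigma = 2.0

def X_T(r, u):
    """X_T(r) = (1/T) * (u_1 + ... + u_[Tr])."""
    T = len(u)
    return u[: int(np.floor(T * r))].sum() / T

# sqrt(T) * X_T(1) should be approximately N(0, sigma^2) when T is large.
draws = [np.sqrt(T) * X_T(1.0, rng.normal(0.0, sigma, size=T)) for _ in range(1000)]
print(np.mean(draws), np.std(draws))   # roughly 0 and sigma
```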

Dickey-Fuller Tests

Subtracting \(y_{t-1}\) from each side of the equation \(y_{t}=a_{1}y_{t-1}+\varepsilon_{t}\), we get \(\Delta y_{t}=\gamma y_{t-1}+\varepsilon_{t}\), where \(\gamma=a_{1}-1\). Testing the hypothesis \(a_{1}=1\) is equivalent to testing \(\gamma=0\).

Dickey and Fuller consider three different regression equations:

\[\begin{eqnarray*} &&\mbox{random walk model}\\ \Delta y_{t}&=&\gamma y_{t-1}+\varepsilon_{t} \\ && \mbox{add a drift}\\ \Delta y_{t}&=&a_{0}+\gamma y_{t-1}+\varepsilon_{t}\\ &&\mbox{add a drift and linear time trend}\\ \Delta y_{t}&=&a_{0}+\gamma y_{t-1}+a_{2}t+\varepsilon_{t} \end{eqnarray*}\]

Run OLS to obtain the estimate of \(\gamma\) and its standard error for each of the three models. Under the null hypothesis \(\gamma=0\), however, the t-statistic does not follow the usual t-distribution, and the appropriate critical values depend on whether a drift and/or a time trend is included in the regression. Note that the critical values also depend on the sample size: for any given level of significance, they become smaller in absolute value as the sample size increases.
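For reference, a minimal sketch using Python's statsmodels (assuming a recent version in which the regression option takes 'n', 'c', and 'ct' for no deterministic terms, a drift, and a drift plus trend, respectively; setting maxlag=0 with autolag=None is intended to force the basic, non-augmented Dickey-Fuller regression):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)

# Simulate a pure random walk, i.e. a series with a true unit root.
y = np.cumsum(rng.normal(size=500))

# The three Dickey-Fuller regressions: no drift, drift, drift plus trend.
for spec in ("n", "c", "ct"):
    stat, pvalue, *_ = adfuller(y, maxlag=0, regression=spec, autolag=None)
    print(f"regression={spec!r}: t-statistic {stat:.2f}, p-value {pvalue:.3f}")
```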

Augmented Dickey-Fuller Test

Consider the pth-order autoregressive process:

\[\begin{eqnarray*} y_{t}&=&a_{0}+a_{1}y_{t-1}+a_{2}y_{t-2}+a_{3}y_{t-3}+\cdots+a_{p-2}y_{t-p+2}+a_{p-1}y_{t-p+1}+a_{p}y_{t-p}+\varepsilon_{t} \end{eqnarray*}\]

Add and subtract \(a_{p}y_{t-p+1}\)

\[\begin{eqnarray*} y_{t}&=&a_{0}+a_{1}y_{t-1}+a_{2}y_{t-2}+\cdots+a_{p-2}y_{t-p+2}+a_{p-1}y_{t-p+1}+a_{p}y_{t-p+1}+a_{p}y_{t-p}-a_{p}y_{t-p+1}+\varepsilon_{t}\\ &=&a_{0}+a_{1}y_{t-1}+a_{2}y_{t-2}+\cdots+a_{p-2}y_{t-p+2}+(a_{p-1}+a_{p})y_{t-p+1}-a_{p}\Delta y_{t-p+1}+\varepsilon_{t} \end{eqnarray*}\]

Add and subtract \((a_{p-1}+a_{p})y_{t-p+2}\)

\[\begin{eqnarray*} y_{t}&=&a_{0}+\cdots+a_{p-2}y_{t-p+2}+(a_{p-1}+a_{p})y_{t-p+2}+(a_{p-1}+a_{p})y_{t-p+1}-(a_{p-1}+a_{p})y_{t-p+2}-a_{p}\Delta y_{t-p+1}+\varepsilon_{t}\\ &=&a_{0}+\cdots+(a_{p-2}+a_{p-1}+a_{p})y_{t-p+2}-(a_{p-1}+a_{p})\Delta y_{t-p+2}-a_{p}\Delta y_{t-p+1}+\varepsilon_{t} \end{eqnarray*}\]

Continuing in this fashion, we get

\[\begin{eqnarray*} \Delta y_{t}&=&a_{0}+\gamma y_{t-1}+\sum_{i=2}^{p}\beta_{i}\Delta y_{t-i+1}+\varepsilon_{t}\\ \mbox{where}\ \ \gamma&=& -\left( 1-\sum_{i=1}^{p}a_{i}\right)\\ \beta_{i}&=& -\sum_{j=i}^{p}a_{j} \end{eqnarray*}\]
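As a quick check of these expressions, consider the case \(p=2\). Adding and subtracting \(a_{2}y_{t-1}\) and then subtracting \(y_{t-1}\) from both sides gives

\[\begin{eqnarray*} y_{t}&=&a_{0}+a_{1}y_{t-1}+a_{2}y_{t-2}+\varepsilon_{t}\\ \Delta y_{t}&=&a_{0}-\left(1-a_{1}-a_{2}\right)y_{t-1}-a_{2}\Delta y_{t-1}+\varepsilon_{t} \end{eqnarray*}\]

so that \(\gamma=-(1-a_{1}-a_{2})\) and \(\beta_{2}=-a_{2}\), in agreement with the general formulas.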

We can then test \(\gamma=0\) using the same Dickey-Fuller statistics, whose critical values depend on the regression specification and the sample size.