10/15/2018
Last lecture:
Autoregressive Model (AR): representation and stationary results.
This lecture:
Complex roots for AR(p), Moving Average Model (MA), Stationary results for general AR(p) and MA(q).
Recall that the recursive representation of the series \(Y_{t}\) contains its initial condition \(Y_1\) and a sequence of noises \(\{\phi^k \varepsilon_{t-k}\}_{k=0}^{t-2}\).
An AR(p) process \(Y_{t} = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \cdots + \phi_p Y_{t-p} + \varepsilon_t\) can be written compactly as \[\phi(\mathbb{L})Y_{t}= \varepsilon_{t}\] \[\phi(\mathbb{L})= 1-\phi_{1}\mathbb{L}-\cdots-\phi_{p}\mathbb{L}^{p}\]
We have an operator \(\phi(\mathbb{L})\) acting on \(Y_t\) on the left and a white noise term on the right.
When \(\phi(\mathbb{L})= 1-\phi_{1}\mathbb{L}\) and \(|\phi_{1}|<1\), this AR(1) process can be represented as \[Y_{t}= (1- \phi_1\mathbb{L})^{-1}\varepsilon_{t}= \sum_{j=0}^{\infty} (\phi_1 \mathbb{L})^j \varepsilon_t = (1 + \phi_1 \mathbb{L} + (\phi_1 \mathbb{L})^2 + \cdots ) \varepsilon_t = \varepsilon_t +\phi_1 \varepsilon_{t-1} + \phi_1^2 \varepsilon_{t-2} + \cdots\]
That is, \(Y_t\) is a weighted sum of infinitely many white noise terms.
\(\mathbb{L}^{\infty}\varepsilon_t=\varepsilon_{-\infty}\) looks like a noise from the emergence of the universe. But the impact of this noise on the current \(Y_t\) is negligible because \(\phi_1^{k} \rightarrow 0\) as \(k \rightarrow \infty\) when \(|\phi_1|<1\).
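We can see this decay numerically. Base R's ARMAtoMA returns the noise weights \(\psi_j = \phi_1^j\) of an AR(1); the value \(\phi_1 = 0.9\) below is an arbitrary illustration, not part of the lecture's examples:

psi = ARMAtoMA(ar = 0.9, lag.max = 30)  # psi_j = 0.9^j, the weight on epsilon_{t-j}
round(psi[c(1, 5, 10, 30)], 4)          # weights decay geometrically toward zero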
Let’s return to the discussion of the stationarity of AR(p), but now from the viewpoint of the noises.
For a general AR(p), \[\phi(\mathbb{L})Y_{t}= \varepsilon_{t}\] where \(\phi(\mathbb{L})= 1-\phi_{1}\mathbb{L}-\cdots-\phi_{p}\mathbb{L}^{p}\). If the process \(Y_t\) can be represented in terms of noises, \[Y_{t}= \phi(\mathbb{L})^{-1} \varepsilon_{t},\] and these noises are “well-behaved” (for example, constant means and finite variances), then \(Y_{t}\) should be stationary.
A “well-behaved” noise sequence in fact means that the series formed by \(\phi(\mathbb{L})^{-1}\) converges. But \(\phi(\mathbb{L})^{-1}\) is an operator in \(\mathbb{L}\), so it is difficult to deal with directly.
Replacing \(\mathbb{L}\) with a complex variable \(z\) turns \(\phi(z)\) into an ordinary polynomial in \(z\). (\(\phi(z)\) is called the generating function, and \(\phi(z)=0\) the characteristic equation.)
Finding the roots of \(\phi(z)=0\) is equivalent to detecting whether \(\phi(\mathbb{L})^{-1}\) is convergent.
Extend the idea to AR(p): the roots of \[\phi(z)= 1-\phi_{1} z-\cdots-\phi_{p} z^{p}=0\] always exist in the complex plane (fundamental theorem of algebra).
For example, the roots of \[ 1- \phi_{1} z -\phi_2 z^2 =0,\] \[z = \frac{-\phi_1 \pm \sqrt{\phi_1^2 + 4\phi_2}}{2 \phi_2},\] always exist as complex numbers, even if the discriminant \(\phi_1^2 + 4\phi_2\) is negative.
For the process to be stationary, the roots of the polynomial \(\phi(z)=0\) must all lie outside the unit circle in the complex plane.
AR(2): \(Y_t = -0.25Y_{t-2} +\varepsilon_t\) is stationary. \(\phi(z) = (4+z^2)/4=0\) gives \(z=\pm 2i\), and the modulus of \(\pm 2i\) is \(2>1\).
AR(2): \(Y_t = Y_{t-1} - 0.25Y_{t-2} +\varepsilon_t\) is stationary. \(\phi(z) = (z^2-4z+4)/4=(z-2)^2/4=0\) gives the double root \(z=2>1\).
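Both examples can be checked numerically with base R's polyroot, which takes the coefficients of \(\phi(z)\) in increasing order of degree:

Mod(polyroot(c(1, 0, 0.25)))   # 1 + 0.25 z^2 = 0 : roots +-2i, both of modulus 2
Mod(polyroot(c(1, -1, 0.25)))  # 1 - z + 0.25 z^2 = 0 : double root at z = 2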
library(Quandl)
library(xts)
gold_price = Quandl("LBMA/GOLD", type = "xts", collapse = "daily",
start_date="2017-09-01", end_date="2018-10-12")
head(gold_price,2)
##            USD (AM) USD (PM) GBP (AM) GBP (PM) EURO (AM) EURO (PM)
## 2017-09-01   1318.4   1320.4  1020.18  1019.74   1107.98   1114.15
## 2017-09-04   1334.6   1333.1  1030.98  1029.02   1120.53   1119.67
plot.ts(gold_price[,5], xlab="",ylab="Price",main="Daily closing price of gold (GBP) ")
ar2 = arima(gold_price[,5], order=c(2,0,0), method="ML")
ar2
## 
## Call:
## arima(x = gold_price[, 5], order = c(2, 0, 0), method = "ML")
## 
## Coefficients:
##          ar1      ar2  intercept
##       0.9983  -0.0208  1077.8396
## s.e.  0.0603   0.0608    12.1462
## 
## sigma^2 estimated as 27.4:  log likelihood = -871.57,  aic = 1751.13
polyroot(c(1, -ar2$coef[1:2]))
## [1]  1.023493-0i 47.076927+0i

Both roots lie outside the unit circle (moduli about 1.02 and 47.08), so the fitted AR(2) is stationary, although the smaller root is close to one.
Any weakly stationary time series \(\{Y_{t}\}\) has the following representation (the Wold representation):
\[Y_{t}= \mu+\sum_{j=0}^{\infty}\psi_{j}\varepsilon_{t-j},\;\varepsilon_{t}\sim WN(0,\sigma^{2})\] \[\psi_{0}= 1,\quad\sum_{j=0}^{\infty}\psi_{j}^{2}<\infty\]
\(\mathbb{E}[Y_{t}]= \mu\)
\(\gamma_{0}= \mbox{Var}(Y_{t})=\sigma^{2}\sum_{j=0}^{\infty}\psi_{j}^{2}<\infty\)
\(\gamma_{i}= \mathbb{E}[(Y_{t}-\mu)(Y_{t-i}-\mu)]=\sigma^{2}(\psi_{i}+\psi_{i+1}\psi_{1}+\cdots)=\sigma^{2}\sum_{k=0}^{\infty}\psi_{k}\psi_{k+i}.\)
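We can check the last formula numerically. This is a minimal sketch for an AR(1) with \(\phi_1 = 0.6\) and \(\sigma^2 = 1\) (arbitrary choices), truncating the \(\psi\)-weights at 200 terms:

psi = c(1, ARMAtoMA(ar = 0.6, lag.max = 200))  # psi_0 = 1, then psi_j = 0.6^j
gamma0 = sum(psi^2)                            # = 1/(1 - 0.36) = 1.5625
gamma1 = sum(psi[-1] * head(psi, -1))          # sum over k of psi_k * psi_{k+1}
gamma1 / gamma0                                # rho_1 = 0.6, the AR(1) coefficient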
A dual representation can exist between the AR process \(Y_t\) and the sequence of white noises.
The stationarity of AR(p) depends on whether its linear combination of white noises converges.
One can decompose a stationary process into noises.
This motivates us to study a purely noise-driven process.
\[Y_{t}= \mu+\varepsilon_{t}+\theta\varepsilon_{t-1}=\mu+\theta(\mathbb{L})\varepsilon_{t}\]
\[\theta(\mathbb{L})= 1+\theta\mathbb{L},\quad\varepsilon_{t}\sim WN(0,\sigma^{2})\]
Autocovariances and autocorrelations: \[\gamma_{0}= \mbox{Var}(Y_{t})=\mathbb{E}\left[(\varepsilon_{t}+\theta\varepsilon_{t-1})^{2}\right]=\sigma^{2}(1+\theta^{2})\] \[\gamma_{1}= \mathbb{E}[(Y_{t}-\mu)(Y_{t-1}-\mu)] = \mathbb{E}\left[(\varepsilon_{t}+\theta\varepsilon_{t-1})(\varepsilon_{t-1}+\theta\varepsilon_{t-2})\right] = \sigma^{2}\theta\] \[\rho_{1}= \frac{\gamma_{1}}{\gamma_{0}}=\frac{\theta}{1+\theta^{2}}\] \[\gamma_{j}= 0,\quad j>1\]
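These moments are easy to verify with base R's ARMAacf; \(\theta = 0.5\) below is an arbitrary illustrative value:

round(ARMAacf(ma = 0.5, lag.max = 3), 4)  # rho_1 = 0.5/(1 + 0.25) = 0.4, then zeros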
y1 = arima.sim(n=250, list(ar=0, ma=.95))
y2 = arima.sim(n=250, list(ar=0, ma=.4))
y3 = arima.sim(n=250, list(ar=0, ma=-.4))
y4 = arima.sim(n=250, list(ar=0, ma=-.95))
# plot time series
layout(matrix(c(1,3,2,4),2,2))
plot(y1); abline(h=0)
mtext("theta = 0.95")
plot(y2); abline(h=0)
mtext("theta = 0.40")
plot(y3); abline(h=0)
mtext("theta = -0.40")
plot(y4); abline(h=0)
mtext("theta = -0.95")
MA(q): a process \(Y_t\) that depends linearly on a finite number \(q\) of previous noises.
It has a compact notation: \[Y_t = \theta(\mathbb{L}) \varepsilon_t\] where \(\theta(\mathbb{L})= 1 + \theta_1 \mathbb{L} + \theta_2 \mathbb{L}^2 + \cdots + \theta_q \mathbb{L}^{q}.\)
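A defining feature of MA(q) is that its autocorrelations cut off after lag \(q\), generalizing \(\gamma_j = 0\) for \(j>1\) in the MA(1) case. A quick check with base R's ARMAacf for an MA(2), with arbitrary illustrative coefficients:

round(ARMAacf(ma = c(0.6, 0.3), lag.max = 5), 3)  # nonzero at lags 1-2, zero beyond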
Similarly, we can transform an MA model into an AR model with infinite lags: \[Y_{t}-\mu= (1+\theta\mathbb{L})\varepsilon_{t},\quad|\theta|<1\] \[= (1-\theta^{*}\mathbb{L})\varepsilon_{t},\quad\theta^{*}=-\theta\]
then \[(1-\theta^{*}\mathbb{L})^{-1}(Y_{t}-\mu)= \varepsilon_{t}\] \[\sum_{j=0}^{\infty}(\theta^{*})^{j}\mathbb{L}^{j}(Y_{t}-\mu)= \varepsilon_{t}\]
Hence: \[\varepsilon_{t}= (Y_{t}-\mu)+\theta^{*}(Y_{t-1}-\mu)+(\theta^{*})^{2}(Y_{t-2}-\mu)+\dots\] an AR model with infinite lags.
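A small simulation illustrates this duality. This is a sketch, with \(\theta = 0.5\) and the AR order 6 chosen arbitrarily: a long autoregression fitted to a simulated MA(1) should recover the weights \(-(\theta^{*})^{j} = \theta, -\theta^{2}, \theta^{3}, \dots\)

set.seed(42)
y = arima.sim(n = 5000, list(ma = 0.5))
fit = ar(y, order.max = 6, aic = FALSE)  # Yule-Walker fit of an AR(6)
round(fit$ar, 3)                         # estimated AR coefficients
round(-(-0.5)^(1:6), 3)                  # theoretical weights: 0.5 -0.25 0.125 ...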
AR(p): \(\phi(\mathbb{L}) Y_t =\varepsilon_t\), with \(\phi(\mathbb{L}) =1 -\phi_1 \mathbb{L} -\cdots - \phi_p \mathbb{L}^p\).
MA(q): \(Y_t = \theta(\mathbb{L}) \varepsilon_t\), with \(\theta(\mathbb{L}) =1 + \theta_1 \mathbb{L} +\cdots + \theta_q \mathbb{L}^q\).
A combination leads to a more general representation, ARMA(p,q): \[ \phi(\mathbb{L}) Y_t = \theta(\mathbb{L}) \varepsilon_t.\]
Similarly, ARMA(p,q) can be represented as a pure MA or a pure AR when the inverses of \(\phi(\mathbb{L})\) and \(\theta(\mathbb{L})\) form convergent series: \[ Y_t = \phi(\mathbb{L})^{-1} \theta(\mathbb{L}) \varepsilon_t \] \[ \theta(\mathbb{L})^{-1} \phi(\mathbb{L}) Y_t = \varepsilon_t .\]
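Base R's ARMAtoMA computes exactly these MA(\(\infty\)) weights of \(\phi(\mathbb{L})^{-1}\theta(\mathbb{L})\); the ARMA(1,1) coefficients below are arbitrary illustrative values:

ARMAtoMA(ar = 0.5, ma = 0.4, lag.max = 6)  # psi_j = 0.5^(j-1) * (0.5 + 0.4)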
Hence, for the stationary process \(Y_t\), the dual representation (AR or MA) can enhance one’s understanding.
Stable (non-explosive) operators induce stationary processes.
Study the processes from the standpoint of the noises.
The AR and MA structures form a duality.