Problem 1.5


For the two series in Problem 1.2 (a) and (b):

Let \(\small x_{t_1}\) and \(\small x_{t_2}\) be

\[\small x_{t_1} = s_{t_1} + w_{t_1}\] where: \[\small s_{t_1} = \begin{cases} 0 & \text{if } t_1=1,2,\ldots,100 \\ 10e^{-\frac{(t_1-100)}{20}}\cos\left(2{\pi}\frac{t_1}{4}\right) & \text{if } t_1=101,102,\ldots,200 \end{cases}\]
and

\[\small x_{t_2} = s_{t_2} + w_{t_2}\] where: \[\small s_{t_2} = \begin{cases} 0 & \text{if } t_2=1,2,\ldots,100 \\ 10e^{-\frac{(t_2-100)}{200}}\cos\left(2{\pi}\frac{t_2}{4}\right) & \text{if } t_2=101,102,\ldots,200 \end{cases}\]


    1. Compute and plot the mean functions \(\small \mu_x(t)\), for \(\small t\) = 1,…,200.

      Solution:
      Solving for the mean functions \(\small \mu_x(t)\) of the two series in Problem 1.2 (a) and (b), we have

      \[\small \begin{align}\mu_x(t_1)&=E\left(s_{t_1}+w_{t_1}\right) \\ &=E\left(s_{t_1}\right)+E\left(w_{t_1}\right) \\ &=\begin{cases} 0+E(w_{t_1}) & \text{if } t_1=1,2,\ldots,100 \\ 10e^{-\frac{(t_1-100)}{20}}\cos\left(2{\pi}\frac{t_1}{4}\right)+E(w_{t_1}) & \text{if } t_1=101,102,\ldots,200 \end{cases}\end{align}\]

      \[\small \begin{align}\mu_x(t_2)&=E\left(s_{t_2}+w_{t_2}\right) \\ &=E\left(s_{t_2}\right)+E\left(w_{t_2}\right) \\ &=\begin{cases} 0+E(w_{t_2}) & \text{if } t_2=1,2,\ldots,100 \\ 10e^{-\frac{(t_2-100)}{200}}\cos\left(2{\pi}\frac{t_2}{4}\right)+E(w_{t_2}) & \text{if } t_2=101,102,\ldots,200 \end{cases}\end{align}\]

      But \(\small w_{t_1}\) and \(\small w_{t_2}\) are both Gaussian white noise with variance 1 and mean 0. Hence, we can simplify the above calculation into

      \[\small \begin{align}\mu_x(t_1)&=\begin{cases} 0 & \text{if } t_1=1,2,\ldots,100 \\ 10e^{-\frac{(t_1-100)}{20}}\cos\left(2{\pi}\frac{t_1}{4}\right) & \text{if } t_1=101,102,\ldots,200 \end{cases} \\ &=s_{t_1} \end{align}\]

      \[\small \begin{align}\mu_x(t_2)&=\begin{cases} 0 & \text{if } t_2=1,2,\ldots,100 \\ 10e^{-\frac{(t_2-100)}{200}}\cos\left(2{\pi}\frac{t_2}{4}\right) & \text{if } t_2=101,102,\ldots,200 \end{cases} \\ &=s_{t_2} \end{align}\]

      Modifying the R code in Problem Set 1 (Problems 1.2a and 1.2b), with the addition of \(\small \mu_x(t_1)=s_{t_1}\) and \(\small \mu_x(t_2)=s_{t_2}\), we have the following code and its corresponding plots:

      s1 = c(rep(0,100),10*exp(-(1:100)/20)*cos(2*pi*1:100/4))  # signal: zero for t <= 100, damped cosine after
      x1 = s1 + rnorm(200,0,1)  # add Gaussian white noise
      plot.ts(x1, ylim=c(-9,9), main="10*exp(-(t-100)/20)*cos(2*pi*t/4)", ylab="x1", col="#003399")
      lines(s1, col="#990000")  # overlay the mean function mu_x(t) = s_t in red

      s2 = c(rep(0,100),10*exp(-(1:100)/200)*cos(2*pi*1:100/4))  # same signal with a much slower decay
      x2 = s2 + rnorm(200,0,1)
      plot.ts(x2, ylim=c(-9,9), main="10*exp(-(t-100)/200)*cos(2*pi*t/4)", ylab="x2", col="#003399")
      lines(s2, col="#990000")



    1. Calculate the autocovariance functions \(\small \gamma_x(s,t)\), for \(\small s,t\) = 1,…,200.

      Solution:
      Solving for the autocovariance function, we have \[\small \gamma_x(s,t)=E\left[\left(x_s-\mu_s\right)\left(x_t-\mu_t\right)\right]\]

      But since \(\small x_s=s_s+w_s\), \(\small x_t=s_t+w_t\), \(\small \mu_s=s_s\), and \(\small \mu_t=s_t\), we can rewrite it as \[\small \begin{align}\gamma_x(s,t)&=E\left[\left(s_s+w_s-s_s\right)\left(s_t+w_t-s_t\right)\right] \\ &=E\left[w_sw_t\right] \\ &=\begin{cases} 1 & \text{if } s=t \\ 0 & \text{if } s\ne t \end{cases}\end{align}\] for \(\small s,t=1,2,\ldots,200\), since the noise is white with unit variance.
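
      As a quick Monte Carlo sanity check (a sketch; the seed, the 2,000 replications, and the object names are our own choices), we can estimate \(\small \gamma_x(s,t)\) by simulating many independent copies of the series in (a) and computing sample covariances across copies:

      set.seed(1)  # assumed seed, for reproducibility only
      s1 = c(rep(0,100), 10*exp(-(1:100)/20)*cos(2*pi*1:100/4))
      X = replicate(2000, s1 + rnorm(200,0,1))  # 200 x 2000 matrix; one simulated series per column
      cov(X[50,], X[50,])    # s = t = 50: close to 1
      cov(X[50,], X[51,])    # s != t: close to 0
      cov(X[150,], X[150,])  # inside the signal region, still close to 1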





Problem 1.6


Consider the time series \[\small x_t=\beta_1+\beta_2t+w_t\] where \(\small \beta_1\) and \(\small \beta_2\) are known constants and \(\small w_t\) is a white noise process with variance \(\small \sigma^2_w\).



    1. Determine whether \(\small x_t\) is stationary.

      Solution:
      From the problem above, we are given \(\small x_t=\beta_1+\beta_2t+w_t\), with \(\small E(w_t)=0\) and \(\small var(w_t)=\sigma^2_w\). Hence, we have

      \[\small \begin{align}E[x_t]&=E[\beta_1+\beta_2t+w_t] \\ &=\beta_1+\beta_2t+E[w_t] \\ &=\beta_1+\beta_2t \end{align}\] since \(\small \beta_1\), \(\small \beta_2\), and the time index \(\small t\) are non-random.

      Since the first moment (mean) of \(\small x_t\) is a function of time \(\small t\), \(\small x_t\) is not stationary.



    1. Show that the process \(\small y_t=x_t-x_{t-1}\) is stationary.

      Solution:
      Finding \(\small y_t\), we have \[\small \begin{align}y_t&=\beta_1+\beta_2t+w_t-\left[\beta_1+\beta_2(t-1)+w_{t-1}\right] \\ &=\beta_1+\beta_2t+w_t-\beta_1-\beta_2t+\beta_2-w_{t-1} \\ &=\beta_2+w_t-w_{t-1} \end{align}\]

      Finding the mean of \(\small y_t\), we have \[\small \begin{align}E[y_t]&=E\left[\beta_2+w_t-w_{t-1}\right] \\ &=\beta_2+E[w_t]-E[w_{t-1}] \\ &=\beta_2 \end{align}\]

      Finding the covariance, we have \[\small \begin{align}cov(y_{t+h},y_t)&=cov(x_{t+h}-x_{t+h-1},x_t-x_{t-1}) \\ &=cov(\beta_2+w_{t+h}-w_{t+h-1},\beta_2+w_t-w_{t-1}) \end{align}\]

      \(\small \circ\) when \(\small h=0\), \[\small \begin{align}cov(y_t,y_t)&=cov(\beta_2+w_t-w_{t-1},\beta_2+w_t-w_{t-1}) \\ &=cov(w_t,w_t)+cov(w_{t-1},w_{t-1}) \\ &=2\sigma^2_w \end{align}\] where the constant \(\small \beta_2\) and the cross terms such as \(\small cov(w_t,w_{t-1})\) contribute nothing.

      \(\small \circ\) when \(\small h=1\), \[\small \begin{align}cov(y_{t+1},y_t)&=cov(\beta_2+w_{t+1}-w_t,\beta_2+w_t-w_{t-1}) \\ &=-cov(w_t,w_t) \\ &=-\sigma^2_w \end{align}\]

      \(\small \circ\) when \(\small h=-1\), \[\small \begin{align}cov(y_{t-1},y_t)&=cov(\beta_2+w_{t-1}-w_{t-2},\beta_2+w_t-w_{t-1}) \\ &=-cov(w_{t-1},w_{t-1}) \\ &=-\sigma^2_w \end{align}\]

      \(\small \circ\) when \(\small h=2\), \[\small \begin{align}cov(y_{t+2},y_t)&=cov(\beta_2+w_{t+2}-w_{t+1},\beta_2+w_t-w_{t-1}) \\ &=0 \end{align}\]

      \(\small \circ\) when \(\small h=-2\), \[\small \begin{align}cov(y_{t-2},y_t)&=cov(\beta_2+w_{t-2}-w_{t-3},\beta_2+w_t-w_{t-1}) \\ &=0 \end{align}\]

      Hence, the autocovariance of \(\small y_t\) is

      \[\small \gamma_y(h)=\begin{cases} 2\sigma^2_w & \text{if } h=0 \\ -\sigma^2_w & \text{if } |h|=1 \\ 0 & \text{if } |h|\ge 2 \end{cases}\]

      Now, since the mean is constant and the autocovariance depends only on the lag \(\small h\) (not on \(\small t\)), \(\small y_t\) is stationary.
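
      This can also be checked numerically. The sketch below (the seed and the constants \(\small \beta_1=2\), \(\small \beta_2=0.5\), \(\small \sigma_w=3\) are arbitrary illustrative choices) differences a simulated trend-plus-noise series and compares the sample mean and autocovariances with \(\small \beta_2\), \(\small 2\sigma^2_w\), and \(\small -\sigma^2_w\):

      set.seed(1)  # assumed seed
      beta1 = 2; beta2 = 0.5; sigw = 3  # arbitrary illustrative constants
      x = beta1 + beta2*(1:5000) + rnorm(5000, 0, sigw)
      y = diff(x)  # y_t = x_t - x_{t-1} = beta2 + w_t - w_{t-1}
      mean(y)  # close to beta2 = 0.5
      acf(y, 2, type="covariance", plot=FALSE)  # lag 0 near 2*sigw^2 = 18, lag 1 near -sigw^2 = -9, lag 2 near 0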



    1. Show that the mean of the moving average \[\small v_t=\frac{1}{2q+1}\sum_{j=-q}^{q} x_{t-j}\] is \(\small \beta_1+\beta_2t\), and give a simplified expression of the autocovariance function.

      Solution:
      Finding the mean of \(\small v_t\), we have \[\small \begin{align}E[v_t]&=E\left[\frac{1}{2q+1}\sum_{j=-q}^{q} x_{t-j}\right]=\frac{1}{2q+1}\sum_{j=-q}^{q} E[x_{t-j}]=\frac{1}{2q+1}\sum_{j=-q}^{q}\left[\beta_1+\beta_2(t-j)\right] \\ &=\frac{1}{2q+1}\left[\beta_1(2q+1)+\beta_2t(2q+1)-\beta_2\sum_{j=-q}^{q} j\right] \\ &=\frac{1}{2q+1}\left[\beta_1(2q+1)+\beta_2t(2q+1)-0\right] \\ &=\beta_1+\beta_2t \end{align}\] using \(\small E[x_{t-j}]=\beta_1+\beta_2(t-j)\) from part (a) and the fact that the symmetric sum \(\small \sum_{j=-q}^{q}j\) vanishes.

      Now, finding the autocovariance function, we have \[\small \gamma_v(h)=cov(v_{t+h},v_t)=cov\left(\frac{1}{2q+1}\sum_{j=-q}^{q} x_{t+h-j},\frac{1}{2q+1}\sum_{k=-q}^{q} x_{t-k} \right)\]

      Since the deterministic trend does not affect covariances, \[\small \begin{align}\gamma_v(h)&=\frac{1}{(2q+1)^2}cov\left(\sum_{j=-q}^{q} w_{t+h-j},\sum_{k=-q}^{q} w_{t-k}\right) \\ &=\frac{1}{(2q+1)^2}\sum_{j=-q}^{q}\sum_{k=-q}^{q}cov\left(w_{t+h-j},w_{t-k}\right) \end{align}\] A term survives only when \(\small t+h-j=t-k\), that is, when \(\small j=h+k\). For \(\small |h|\le 2q\) there are exactly \(\small 2q+1-|h|\) such pairs \(\small (j,k)\), each contributing \(\small \sigma^2_w\), so \[\small \gamma_v(h)=\frac{(2q+1-|h|)\sigma^2_w}{(2q+1)^2}\]

      Hence, \[\small \gamma_v(h)=\begin{cases} \frac{(2q+1-|h|)\sigma^2_w}{(2q+1)^2} & \text{if } 0\le|h|\le 2q \\ 0 & \text{if } |h|>2q \end{cases}\]
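
      A numerical check is possible here too (a sketch; \(\small q=2\), the sample size, and the object names are arbitrary, and the trend is omitted because it contributes nothing to \(\small \gamma_v\)):

      set.seed(1)  # assumed seed
      q = 2; n = 10000; sigw = 1  # arbitrary illustrative values
      w = rnorm(n, 0, sigw)  # trend omitted: it does not affect the autocovariance
      v = filter(w, rep(1, 2*q+1)/(2*q+1))  # centered moving average, NA at the ends
      emp = drop(acf(v, 2*q+1, type="covariance", plot=FALSE, na.action=na.pass)$acf)
      theo = pmax(2*q+1 - abs(0:(2*q+1)), 0)*sigw^2/(2*q+1)^2  # (2q+1-|h|)*sigma^2/(2q+1)^2
      round(cbind(lag=0:(2*q+1), empirical=emp, theoretical=theo), 3)  # agree to within sampling error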





Problem 1.7


For a moving average process of the form \[\small x_t=w_{t-1}+2w_t+w_{t+1}\] where \(\small w_t\) are independent with zero means and variance \(\small \sigma^2_w\), determine the autocovariance and autocorrelation functions as a function of lag \(\small h=s-t\) and plot the ACF as a function of \(\small h\).


    Solution:
    \(\small \circ\) when \(\small h=0\), \[\small \begin{align}\gamma_x(0)&=cov(x_t,x_t) \\ &=cov(w_{t-1}+2w_t+w_{t+1},w_{t-1}+2w_t+w_{t+1}) \\ &=cov(w_{t-1},w_{t-1})+4cov(w_t,w_t)+cov(w_{t+1},w_{t+1}) \\ &=6\sigma^2_w \end{align}\]

    \(\small \circ\) when \(\small h=1\), \[\small \begin{align}\gamma_x(1)&=cov(x_{t+1},x_t) \\ &=cov(w_{t}+2w_{t+1}+w_{t+2},w_{t-1}+2w_t+w_{t+1}) \\ &=2cov(w_t,w_t)+2cov(w_{t+1},w_{t+1})+cov(w_{t+2},w_{t-1}) \\ &=4\sigma^2_w \end{align}\]

    \(\small \circ\) when \(\small h=-1\), \[\small \begin{align}\gamma_x(-1)&=cov(x_{t-1},x_t) \\ &=cov(w_{t-2}+2w_{t-1}+w_t,w_{t-1}+2w_t+w_{t+1}) \\ &=2cov(w_{t-1},w_{t-1})+2cov(w_t,w_t)+cov(w_{t-2},w_{t+1}) \\ &=4\sigma^2_w \end{align}\]

    \(\small \circ\) when \(\small h=2\), \[\small \begin{align}\gamma_x(2)&=cov(x_{t+2},x_t) \\ &=cov(w_{t+1}+2w_{t+2}+w_{t+3},w_{t-1}+2w_t+w_{t+1}) \\ &=cov(w_{t+1},w_{t+1})+4cov(w_{t+2},w_t)+cov(w_{t+3},w_{t-1}) \\ &=\sigma^2_w \end{align}\]

    \(\small \circ\) when \(\small h=-2\), \[\small \begin{align}\gamma_x(-2)&=cov(x_{t-2},x_t) \\ &=cov(w_{t-3}+2w_{t-2}+w_{t-1},w_{t-1}+2w_t+w_{t+1}) \\ &=cov(w_{t-1},w_{t-1})+4cov(w_{t-2},w_t)+cov(w_{t-3},w_{t+1}) \\ &=\sigma^2_w \end{align}\]

    \(\small \circ\) when \(\small h=3\), \[\small \begin{align}\gamma_x(3)&=cov(x_{t+3},x_t) \\ &=cov(w_{t+2}+2w_{t+3}+w_{t+4},w_{t-1}+2w_t+w_{t+1}) \\ &=0 \end{align}\]

    \(\small \circ\) when \(\small h=-3\), \[\small \begin{align}\gamma_x(-3)&=cov(x_{t-3},x_t) \\ &=cov(w_{t-4}+2w_{t-3}+w_{t-2},w_{t-1}+2w_t+w_{t+1}) \\ &=0 \end{align}\]

    To sum it up, the autocovariance function in this problem can be defined as \[\small \gamma_x(h)=\begin{cases} 6\sigma^2_w & \text{if } h=0 \\ 4\sigma^2_w & \text{if } |h|=1 \\ \sigma^2_w & \text{if } |h|=2 \\ 0 & \text{if } |h|\ge 3 \end{cases}\]

    Computing the autocorrelation function (ACF), we have \[\small \begin{align} &\rho_x(0)=\frac{\gamma(0)}{\gamma(0)}=\frac{6\sigma^2_w}{6\sigma^2_w}=1 \\ &\rho_x(1)=\frac{\gamma(1)}{\gamma(0)}=\frac{4\sigma^2_w}{6\sigma^2_w}=\frac{2}{3} \\ &\rho_x(2)=\frac{\gamma(2)}{\gamma(0)}=\frac{\sigma^2_w}{6\sigma^2_w}=\frac{1}{6} \\ &\rho_x(3)=\frac{\gamma(3)}{\gamma(0)}=\frac{0}{6\sigma^2_w}=0 \end{align}\]

    Hence, the autocorrelation function can be defined as \[\small \rho_x(h)=\begin{cases} 1 & \text{if } h=0 \\ \frac{2}{3} & \text{if } |h|=1 \\ \frac{1}{6} & \text{if } |h|=2 \\ 0 & \text{if } |h|\ge 3 \end{cases}\]

    Plotting the ACF in R, we have

    w = rnorm(500)  # generate 500 Gaussian white noise observations
    x = filter(w, filter=c(1,2,1), method="convolution")[2:499]  # x_t = w_{t-1} + 2w_t + w_{t+1}; drop the NA end points
    print(acf(x, type="correlation"))

    ## 
    ## Autocorrelations of series 'x', by lag
    ## 
    ##      0      1      2      3      4      5      6      7      8      9     10 
    ##  1.000  0.687  0.228  0.080  0.045 -0.018 -0.052 -0.048 -0.031 -0.017 -0.001 
    ##     11     12     13     14     15     16     17     18     19     20     21 
    ##  0.005 -0.003  0.002  0.011  0.004 -0.017 -0.049 -0.073 -0.058  0.002  0.059 
    ##     22     23     24     25     26 
    ##  0.052  0.003 -0.037 -0.074 -0.101
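
    The printed sample values match the theory well: 0.687 at lag 1 against \(\small 2/3\), and 0.228 at lag 2 against \(\small 1/6\). To see this on the plot, the exact ACF can be overlaid (a sketch; rho_true is our own name, and the 27 lags assume R's default lag.max for this series length):

    acf(x, type="correlation")  # redraw the sample ACF
    rho_true = c(1, 2/3, 1/6, rep(0, 24))  # exact ACF from the derivation above, lags 0..26
    points(0:26, rho_true, col="#990000", pch=19)  # exact values in red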





Problem 1.8


Consider the random walk with drift model \[\small x_t=\delta+x_{t-1}+w_t\] for \(\small t\) = 1,2,…, with \(\small x_0=0\), where \(\small w_t\) is white noise with variance \(\small \sigma^2_w\).



    1. Show that the model can be written as \[\small x_t=\delta t+\sum_{k=1}^{t} w_k\]

      Solution:
      Let us consider \(\small x_1\). By definition, \[\small \begin{align}x_1&=\delta+x_0+w_1 \\ &=\delta + w_1 \end{align}\]

      Similarly, \[\small \begin{align}x_2&=\delta+x_1+w_2 \\ &=\delta + \delta + w_1 + w_2 \\ &=2\delta +\sum_{k=1}^{2} w_k \end{align}\]

      Iterating in this way (formally, by induction on \(\small t\): substituting the expression for \(\small x_{t-1}\) into the recursion adds one more \(\small \delta\) and one more noise term), \(\small x_t\) accumulates one \(\small \delta\) from each of the \(\small t\) steps together with the noise terms \(\small w_1,w_2,\ldots,w_t\).

      Thus, we can write the model as \[\small x_t=\delta t+\sum_{k=1}^{t} w_k\]
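
      A quick numeric confirmation of this representation (a sketch; the seed, \(\small \delta=0.2\), and the object names are our own) builds \(\small x_t\) both from the recursion and from the closed form:

      set.seed(1)  # assumed seed
      delta = 0.2; n = 200  # arbitrary illustrative values
      w = rnorm(n)
      x_rec = numeric(n)  # x_t built from the recursion, with x_0 = 0
      x_rec[1] = delta + w[1]
      for (t in 2:n) x_rec[t] = delta + x_rec[t-1] + w[t]
      x_sum = delta*(1:n) + cumsum(w)  # x_t = delta*t + sum_{k=1}^t w_k
      max(abs(x_rec - x_sum))  # 0, up to floating-point error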



    1. Find the mean function and the autocovariance function of \(\small x_t\).

      Solution:
      Mean of \(\small x_t\): \[\small \begin{align} \mu_x(t)&=E[x_t] \\ &=E\left[\delta t+\sum_{k=1}^{t} w_k \right] \\ &=\delta t+\sum_{k=1}^{t} E[w_k] \\ &=\delta t+0 \\ &=\delta t \end{align}\]

      Covariance of \(\small x_t\), for \(\small h\ge 0\): \[\small \begin{align} \gamma_x(t+h,t)&=cov(x_{t+h},x_t) \\ &=E\left[\left(x_{t+h}-\mu_{t+h}\right)\left(x_t-\mu_t\right)\right] \\ &=E\left[\left(\sum_{j=1}^{t+h} w_j\right)\left(\sum_{k=1}^{t} w_k\right)\right]\\ &=E\left[\left(w_1+w_2+...+w_t+w_{t+1}+...+w_{t+h}\right)\left(w_1+w_2+...+w_t\right)\right] \\ &=\sum_{j=1}^{t} E\left[w^2_j\right] \\ &=t\sigma^2_w \end{align}\] In general, \(\small \gamma_x(s,t)=\min\{s,t\}\,\sigma^2_w\).



    1. Argue that \(\small x_t\) is not stationary.

      Solution:
      From (b), we know that \(\small \gamma_x(t+h,t)\) depends on time \(\small t\). This implies that \(\small x_t\) is not stationary.



    1. Show \(\small \rho_x(t-1,t)=\sqrt{\frac{t-1}{t}}\to 1\) as \(\small t\to\infty\). What is the implication of this result?

      Solution:
      We know that \[\small \rho_x(t-1,t)=\frac{\gamma(t-1,t)}{\sqrt{\gamma(t-1,t-1)\gamma(t,t)}}\]

      where \[\small \gamma(s,t)=E\left[\left(x_s-\mu_s\right)\left(x_t-\mu_t\right)\right]\]

      Solving, we get \[\small \begin{align} \rho_x(t-1,t)&=\frac{(t-1)\sigma^2_w}{\sqrt{(t-1)\sigma^2_w \cdot t \sigma^2_w}} \\ &=\frac{(t-1)\sigma^2_w}{\sqrt{t(t-1)}\sigma^2_w} \\ &=\frac{t-1}{\sqrt{t(t-1)}} \\ &=\sqrt{\frac{t-1}{t}} \end{align}\]

      Hence, as \(\small t\to\infty\), we have \(\small \rho_x(t-1,t)\to 1\).

      This implies that, for large \(\small t\), the series and its one-step-lagged version are almost perfectly correlated: the random walk drifts slowly relative to its accumulated variance, so adjacent observations become nearly identical.



    1. Suggest a transformation to make the series stationary and prove that the transformed series is stationary.

      Solution:
      Note that the series \(\small x_t=\delta + x_{t-1} + w_t\) can be rewritten as \(\small x_t-x_{t-1}=\delta + w_t\).

      Now, let \(\small y_t=x_t-x_{t-1}\), so we have \[\small y_t=\delta + w_t\]

      Since \(\small y_t\) is a constant plus white noise, its mean \(\small E[y_t]=\delta+E[w_t]=\delta\) does not depend on \(\small t\), and its autocovariance \[\small \gamma_y(h)=cov(\delta+w_{t+h},\delta+w_t)=\begin{cases} \sigma^2_w & \text{if } h=0 \\ 0 & \text{if } h\ne 0 \end{cases}\] depends only on the lag \(\small h\). Hence the transformed series \(\small y_t\) is stationary.
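
      In R the transformation is diff(). The sketch below (assumed seed and an arbitrary \(\small \delta=0.2\)) shows the differenced random walk has mean near \(\small \delta\) and negligible sample autocorrelation at all nonzero lags:

      set.seed(1)  # assumed seed
      delta = 0.2  # arbitrary drift
      x = cumsum(delta + rnorm(500))  # random walk with drift, x_0 = 0
      y = diff(x)  # y_t = delta + w_t
      mean(y)  # close to delta = 0.2
      acf(y, 10, plot=FALSE)  # near zero at every nonzero lag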





Problem 1.13


Consider the two series \[\small x_t=w_t\] \[\small y_t=w_t-\theta w_{t-1} + u_t\] where \(\small w_t\) and \(\small u_t\) are independent white noise series with variances \(\small \sigma^2_w\) and \(\small \sigma^2_u\), respectively, and \(\small \theta\) is an unspecified constant.



    1. Express the ACF, \(\small \rho_y(h)\), for \(\small h\) = 0,\(\small \pm 1\),\(\small \pm 2\),… of the series \(\small y_t\) as a function of \(\small \sigma^2_w\), \(\small \sigma^2_u\), and \(\small \theta\).

      Solution:
      Note that \(\small x_t\) is just white noise. Hence, \[\small \gamma_x(h)=\begin{cases} \sigma^2_w & \text{if } h=0 \\ 0 & \text{if } h\ne 0 \end{cases}\] and \[\small \rho_x(h)=\begin{cases} 1 & \text{if } h=0 \\ 0 & \text{if } h\ne 0 \end{cases}\]

      Now, to solve for \(\small \rho_y(h)\), we note that the mean of \(\small y_t\) is zero. That is, \[\small \begin{align} \mu_y&=E[y_t] \\ &=E[w_t-\theta w_{t-1}+u_t] \\ &=E[w_t]-\theta E[w_{t-1}]+E[u_t] \\ &=0 \end{align}\] Finding the covariance of the series \(\small y_t\), we have

      \(\small \circ\) when \(\small h=0\), \[\small \begin{align} \gamma_y(0)&=cov(w_t -\theta w_{t-1}+u_t, w_t-\theta w_{t-1}+u_t)\\ &=cov(w_t,w_t)+\theta^2 cov(w_{t-1},w_{t-1})+cov(u_t,u_t) \\ &=(1+\theta^2)\sigma^2_w+\sigma^2_u \end{align}\]

      \(\small \circ\) when \(\small h=1\), \[\small \begin{align} \gamma_y(1)&=cov(y_{t+1},y_t)\\ &=cov(w_{t+1}-\theta w_t + u_{t+1},w_t-\theta w_{t-1}+u_t)\\ &=-\theta cov(w_t,w_t)-\theta cov(w_{t+1},w_{t-1})+cov(u_{t+1},u_t)\\ &=-\theta \sigma^2_w \end{align}\]

      \(\small \circ\) when \(\small h=-1\), \[\small \begin{align} \gamma_y(-1)&=cov(y_{t-1},y_t)\\ &=cov(w_{t-1}-\theta w_{t-2} + u_{t-1},w_t-\theta w_{t-1}+u_t)\\ &=-\theta cov(w_{t-1},w_{t-1})-\theta cov(w_{t-1},w_t)+cov(u_{t-1},u_t)\\ &=-\theta \sigma^2_w \end{align}\]

      \(\small \circ\) when \(\small h=2\), \[\small \begin{align} \gamma_y(2)&=cov(y_{t+2},y_t)\\ &=cov(w_{t+2}-\theta w_{t+1}+u_{t+2},w_t-\theta w_{t-1}+u_t)\\ &=0 \end{align}\]

      \(\small \circ\) when \(\small h=-2\), \[\small \begin{align} \gamma_y(-2)&=cov(y_{t-2},y_t)\\ &=cov(w_{t-2}-\theta w_{t-3}+u_{t-2},w_t-\theta w_{t-1}+u_t)\\ &=0 \end{align}\]

      Hence, we get \[\small \gamma_y(h)=\begin{cases} (1+\theta^2)\sigma^2_w + \sigma^2_u & \text{if } h=0 \\ -\theta \sigma^2_w & \text{if } |h|=1 \\ 0 & \text{if } |h|\ge 2 \end{cases}\]

      Computing the autocorrelation function (ACF), we have

      \[\small \begin{align} &\rho_y(0)=\frac {\gamma_y (0)}{\gamma_y (0)}=\frac {(1+\theta^2)\sigma^2_w + \sigma^2_u}{(1+\theta^2)\sigma^2_w + \sigma^2_u}=1\\ &\rho_y(\pm1)=\frac {\gamma_y (\pm1)}{\gamma_y (0)}=\frac {-\theta \sigma^2_w}{(1+\theta^2)\sigma^2_w + \sigma^2_u}\\ &\rho_y(\pm2)=\frac {\gamma_y (\pm2)}{\gamma_y (0)}=\frac {0}{(1+\theta^2)\sigma^2_w + \sigma^2_u}=0 \end{align}\]

      Hence the ACF, \(\small \rho_y (h)\), can be defined as \[\small \rho_y(h)=\begin{cases} 1 & \text{if } h=0 \\ \frac {-\theta \sigma^2_w}{(1+\theta^2)\sigma^2_w + \sigma^2_u} & \text{if } |h|=1 \\ 0 & \text{if } |h|\ge 2 \end{cases}\]
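
      A simulation check (a sketch; \(\small \theta=0.8\), \(\small \sigma_w=1\), \(\small \sigma_u=2\), the sample size, and the seed are arbitrary choices) compares the sample ACF of \(\small y_t\) with the formula above; with these values \(\small \rho_y(1)=-0.8/5.64\approx-0.142\):

      set.seed(1)  # assumed seed
      theta = 0.8; sigw = 1; sigu = 2; n = 50000  # arbitrary illustrative values
      w = rnorm(n+1, 0, sigw); u = rnorm(n, 0, sigu)
      y = w[2:(n+1)] - theta*w[1:n] + u  # y_t = w_t - theta*w_{t-1} + u_t
      acf(y, 2, plot=FALSE)  # lag 1 near -0.142, lag 2 near 0
      -theta*sigw^2 / ((1+theta^2)*sigw^2 + sigu^2)  # theoretical rho_y(1)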



    1. Determine the CCF, \(\small \rho_{xy} (h)\), relating \(\small x_t\) and \(\small y_t\).

      Solution:
      Since \(\small x_t\) is just white noise, we have \[\small \gamma_x(h)=\begin{cases} \sigma^2_w & \text{if } h=0 \\ 0 & \text{if } h\ne 0 \end{cases}\]

      And from (a), we have \[\small \gamma_y(h)=\begin{cases} (1+\theta^2)\sigma^2_w + \sigma^2_u & \text{if } h=0 \\ -\theta \sigma^2_w & \text{if } |h|=1 \\ 0 & \text{if } |h|\ge 2 \end{cases}\]

      Finding the cross-covariance function relating \(\small x_t\) and \(\small y_t\), we have

      \(\small \circ\) when \(\small h=0\), \[\small \begin{align} \gamma_{xy} (0)&=cov(x_t,y_t)\\ &=cov(w_t,w_t-\theta w_{t-1}+u_t)\\ &=cov(w_t,w_t)\\ &=\sigma^2_w \end{align}\]

      \(\small \circ\) when \(\small h=1\), \[\small \begin{align}\gamma_{xy} (1)&=cov(x_{t+1},y_t)\\ &=cov(w_{t+1},w_t-\theta w_{t-1}+u_t)\\ &=0 \end{align}\]

      \(\small \circ\) when \(\small h=-1\), \[\small \begin{align}\gamma_{xy} (-1)&=cov(x_{t-1},y_t)\\ &=cov(w_{t-1},w_t-\theta w_{t-1}+u_t)\\ &=-\theta cov(w_{t-1},w_{t-1})\\ &=-\theta \sigma^2_w \end{align}\]

      \(\small \circ\) when \(\small h=2\), \[\small \begin{align}\gamma_{xy} (2)&=cov(x_{t+2},y_t)\\ &=cov(w_{t+2},w_t-\theta w_{t-1}+u_t)\\ &=0 \end{align}\]

      \(\small \circ\) when \(\small h=-2\), \[\small \begin{align}\gamma_{xy} (-2)&=cov(x_{t-2},y_t)\\ &=cov(w_{t-2},w_t-\theta w_{t-1}+u_t)\\ &=0 \end{align}\]

      Hence, \[\small \gamma_{xy}(h)=\begin{cases} \sigma^2_w & \text{if } h=0 \\ -\theta \sigma^2_w & \text{if } h=-1 \\ 0 & \text{otherwise} \end{cases}\]

      Then computing the cross-correlation function (CCF), we have \[\small \begin{align} &\rho_{xy}(0)=\frac{\gamma_{xy}(0)}{\sqrt{\gamma_x(0)\gamma_y(0)}}=\frac{\sigma^2_w}{\sqrt{\sigma^2_w [(1+\theta^2)\sigma^2_w + \sigma^2_u]}}=\frac{\sigma_w}{\sqrt{(1+\theta^2)\sigma^2_w + \sigma^2_u}}\\ &\rho_{xy}(1)=\frac{\gamma_{xy}(1)}{\sqrt{\gamma_x(0)\gamma_y(0)}}=\frac{0}{\sqrt{\sigma^2_w [(1+\theta^2)\sigma^2_w + \sigma^2_u]}}=0\\ &\rho_{xy}(-1)=\frac{\gamma_{xy}(-1)}{\sqrt{\gamma_x(0)\gamma_y(0)}}=\frac{-\theta \sigma^2_w}{\sqrt{\sigma^2_w [(1+\theta^2)\sigma^2_w + \sigma^2_u]}}=\frac{-\theta \sigma_w}{\sqrt{(1+\theta^2)\sigma^2_w + \sigma^2_u}}\\ &\rho_{xy}(\pm2)=\frac{\gamma_{xy}(\pm2)}{\sqrt{\gamma_x(0)\gamma_y(0)}}=0 \end{align}\]

      Therefore, the CCF is \[\small \rho_{xy}(h)=\begin{cases} \frac{\sigma_w}{\sqrt{(1+\theta^2)\sigma^2_w + \sigma^2_u}} & \text{if } h=0 \\ \frac{-\theta \sigma_w}{\sqrt{(1+\theta^2)\sigma^2_w + \sigma^2_u}} & \text{if } h=-1 \\ 0 & \text{otherwise} \end{cases}\]
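
      R's ccf() estimates this directly; its lag-\(\small h\) value correlates \(\small x_{t+h}\) with \(\small y_t\), matching the convention above, so the nonzero spikes should appear at lags 0 and \(\small -1\). A sketch with the same illustrative parameters as in (a) (with those values, \(\small \rho_{xy}(0)\approx 0.42\) and \(\small \rho_{xy}(-1)\approx -0.34\)):

      set.seed(1)  # assumed seed
      theta = 0.8; sigw = 1; sigu = 2; n = 50000  # arbitrary illustrative values
      w = rnorm(n+1, 0, sigw); u = rnorm(n, 0, sigu)
      x = w[2:(n+1)]  # x_t = w_t
      y = x - theta*w[1:n] + u  # y_t = w_t - theta*w_{t-1} + u_t
      ccf(x, y, 5, plot=FALSE)  # spike near 0.42 at lag 0 and near -0.34 at lag -1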



    1. Show that \(\small x_t\) and \(\small y_t\) are jointly stationary.

      Solution:
      From (a), we know that \(\small x_t\) and \(\small y_t\) are both stationary.

      From (b), we know that \[\small \gamma_{xy}(h)=\begin{cases} \sigma^2_w & \text{if } h=0 \\ -\theta \sigma^2_w & \text{if } h=-1 \\ 0 & \text{otherwise} \end{cases}\] which is independent of time \(\small t\). Since the autocovariance and cross-covariance functions depend only on the lag \(\small h\), it follows that the series are jointly stationary.





Problem 1.15


Let \(\small w_t\), for \(\small t=0, \pm1, \pm2, ...\) be a normal white noise process, and consider the series
\(\small x_t=w_t w_{t-1}\).

Determine the mean and autocovariance function of \(\small x_t\), and state whether it is stationary.


    Solution:

    Finding the mean of the function \(\small x_t\), we have \[\small \begin{align} E[x_t]&=E[w_t w_{t-1}]\\ &=E[w_t]E[w_{t-1}]\\ &=0 \cdot 0\\ &=0 \end{align}\] where the expectation factors because \(\small w_t\) and \(\small w_{t-1}\) are independent.

    Finding the covariance of the function \(\small x_t\), we have \[\small \begin{align} \gamma_x(h)&=cov(x_{t+h},x_t)\\ &=cov(w_{t+h}w_{t+h-1},w_t w_{t-1}) \end{align}\]

    \(\small \circ\) when \(\small h=0\), \[\small \begin{align} \gamma_x(0)&=cov(w_t w_{t-1},w_t w_{t-1})\\ &=var(w_t w_{t-1})\\ &=E[w^2_t w^2_{t-1}]-\left(E[w_t w_{t-1}]\right)^2\\ &=E[w^2_t]E[w^2_{t-1}]-0\\ &=\sigma^2_w \sigma^2_w\\ &=\sigma^4_w \end{align}\]

    \(\small \circ\) when \(\small h \ne 0\), \[\small \begin{align} \gamma_x(h)&=cov(w_{t+h}w_{t+h-1},w_t w_{t-1})\\ &=E[w_{t+h}w_{t+h-1}w_t w_{t-1}]\\ &=0 \end{align}\] since for \(\small h\ne 0\) at least one index appears exactly once in the product, and its zero-mean factor makes the expectation vanish by independence.

    Hence, the autocovariance function of \(\small x_t\) is \[\small \gamma_x(h)=\begin{cases} \sigma^4_w & \text{if } h=0 \\ 0 & \text{if } h \ne 0 \end{cases}\]

    Since the mean is constant (zero) and the autocovariance depends only on the lag \(\small h\), \(\small x_t\) is stationary; in fact it has the second-order structure of white noise with variance \(\small \sigma^4_w\).
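
    A short simulation supports the \(\small \sigma^4_w\) value (a sketch with an assumed seed; \(\small \sigma_w=2\) is chosen so that \(\small \sigma^4_w=16\) is clearly distinguishable from \(\small \sigma^2_w=4\)):

    set.seed(1)  # assumed seed
    sigw = 2  # arbitrary, so that sigw^4 = 16 differs visibly from sigw^2 = 4
    w = rnorm(100001, 0, sigw)
    x = w[2:100001] * w[1:100000]  # x_t = w_t * w_{t-1}
    mean(x)  # close to 0
    acf(x, 2, type="covariance", plot=FALSE)  # lag 0 near sigw^4 = 16; lags 1, 2 near 0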





Problem 1.20


    1. Simulate a series of \(\small n\) = 500 Gaussian white noise observations as in Example 1.8 and compute the sample ACF, \(\small \hat{\rho}(h)\), to lag 20. Compare the sample ACF you obtain to the actual ACF, \(\small \rho(h)\).

      Solution:
      The model in Example 1.8 is defined as \[\small v_t=\frac{1}{3} (w_{t-1}+w_t+w_{t+1})\]

      First, we need to calculate the covariance to obtain the actual ACF. So we have \[\small \begin{align} \gamma_v(h)&=cov(v_{t+h},v_t)\\ &=cov\bigg( \frac{1}{3}(w_{t+h-1}+w_{t+h}+w_{t+h+1}),\frac{1}{3}(w_{t-1}+w_t+w_{t+1}) \bigg)\\ &=\frac{1}{9}cov(w_{t+h-1}+w_{t+h}+w_{t+h+1},w_{t-1}+w_t+w_{t+1}) \end{align}\]

      \(\small \circ\) when \(\small h=0\), \[\small \begin{align} \gamma_v(0)&=\frac{1}{9}cov(w_{t-1}+w_t+w_{t+1},w_{t-1}+w_t+w_{t+1})\\ &=\frac{1}{3} \sigma^2_w \end{align}\]

      \(\small \circ\) when \(\small h=1\), \[\small \begin{align} \gamma_v(1)&=\frac{1}{9}cov(w_t+w_{t+1}+w_{t+2},w_{t-1}+w_t+w_{t+1})\\ &=\frac{2}{9} \sigma^2_w \end{align}\]

      \(\small \circ\) when \(\small h=-1\), \[\small \begin{align} \gamma_v(-1)&=\frac{1}{9}cov(w_{t-2}+w_{t-1}+w_t,w_{t-1}+w_t+w_{t+1})\\ &=\frac{2}{9} \sigma^2_w \end{align}\]

      \(\small \circ\) when \(\small h=2\), \[\small \begin{align} \gamma_v(2)&=\frac{1}{9}cov(w_{t+1}+w_{t+2}+w_{t+3},w_{t-1}+w_t+w_{t+1})\\ &=\frac{1}{9} \sigma^2_w \end{align}\]

      \(\small \circ\) when \(\small h=-2\), \[\small \begin{align} \gamma_v(-2)&=\frac{1}{9}cov(w_{t-3}+w_{t-2}+w_{t-1},w_{t-1}+w_t+w_{t+1})\\ &=\frac{1}{9} \sigma^2_w \end{align}\]

      \(\small \circ\) when \(\small h=3\), \[\small \begin{align} \gamma_v(3)&=\frac{1}{9}cov(w_{t+2}+w_{t+3}+w_{t+4},w_{t-1}+w_t+w_{t+1})\\ &=0 \end{align}\]

      \(\small \circ\) when \(\small h=-3\), \[\small \begin{align} \gamma_v(-3)&=\frac{1}{9}cov(w_{t-4}+w_{t-3}+w_{t-2},w_{t-1}+w_t+w_{t+1})\\ &=0 \end{align}\]

      That is, the covariance function is \[\small \gamma_v(h)=\begin{cases} \frac{1}{3} \sigma^2_w & \text{if } h=0 \\ \frac{2}{9} \sigma^2_w & \text{if } |h|=1 \\ \frac{1}{9} \sigma^2_w & \text{if } |h|=2 \\ 0 & \text{if } |h|\ge 3 \end{cases}\]

      Finding the ACF of the series, we have \[\small \begin{align} &\rho_v(0)=\frac{\gamma_v(0)}{\gamma_v(0)}=\frac{\frac{1}{3}\sigma^2_w}{\frac{1}{3}\sigma^2_w}=1 \\ &\rho_v(\pm1)=\frac{\gamma_v(\pm1)}{\gamma_v(0)}=\frac{\frac{2}{9}\sigma^2_w}{\frac{1}{3}\sigma^2_w}=\frac{2}{3} \\ &\rho_v(\pm2)=\frac{\gamma_v(\pm2)}{\gamma_v(0)}=\frac{\frac{1}{9}\sigma^2_w}{\frac{1}{3}\sigma^2_w}=\frac{1}{3} \\ &\rho_v(\pm3)=\frac{\gamma_v(\pm3)}{\gamma_v(0)}=\frac{0}{\frac{1}{3}\sigma^2_w}=0 \\ \end{align}\]

      That is, \[\small \rho_v(h)=\begin{cases} 1 & \text{if } h=0 \\ \frac{2}{3} & \text{if } |h|=1 \\ \frac{1}{3} & \text{if } |h|=2 \\ 0 & \text{if } |h|\ge 3 \end{cases}\]

      Now, if we are to consider \(\small n\) = 500 observations, we can plot the ACF in R using the code below. Note that we generate 2 extra observations due to loss of the end points in making the moving average.

      w_a = rnorm(502,0,1)  # 2 extra observations to cover the lost end points
      v_a = filter(w_a, sides=2, rep(1,3)/3)  # centered 3-point moving average
      print(acf(v_a, 20, na.action = na.pass))

      ## 
      ## Autocorrelations of series 'v_a', by lag
      ## 
      ##      0      1      2      3      4      5      6      7      8      9     10 
      ##  1.000  0.623  0.294 -0.068 -0.047 -0.047 -0.006  0.026  0.076  0.063  0.073 
      ##     11     12     13     14     15     16     17     18     19     20 
      ##  0.027  0.025  0.011  0.068  0.085  0.062 -0.006 -0.054 -0.071 -0.085

      As shown above, the sample ACF has small nonzero positive and negative values at lags greater than 2, whereas the actual ACF is exactly zero there. This is sampling variability rather than a feature of the model: sample autocorrelations computed from \(\small n\) = 500 observations deviate from their true values by amounts on the order of \(\small 1/\sqrt{n}\approx 0.045\).
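
      To make the comparison visual, the exact ACF derived above can be overlaid on the sample ACF (a sketch; rho_true is our own name):

      acf(v_a, 20, na.action = na.pass)  # redraw the sample ACF
      rho_true = c(1, 2/3, 1/3, rep(0, 18))  # exact ACF, lags 0..20
      points(0:20, rho_true, col="#990000", pch=19)  # exact values in red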



    1. Repeat part (a) using only \(\small n\) = 50. How does changing \(\small n\) affect the results?

      Solution:
      Using the same logic in part (a), we plot the ACF in R using the code below with \(\small n\) = 50. Similarly, we generate 2 extra observations due to loss of the end points in making the moving average. That is,

      w_b = rnorm(52,0,1)
      v_b = filter(w_b, sides=2, rep(1,3)/3)
      print(acf(v_b, 20, na.action = na.pass))

      ## 
      ## Autocorrelations of series 'v_b', by lag
      ## 
      ##      0      1      2      3      4      5      6      7      8      9     10 
      ##  1.000  0.761  0.471  0.183  0.076 -0.095 -0.240 -0.373 -0.412 -0.321 -0.157 
      ##     11     12     13     14     15     16     17     18     19     20 
      ## -0.069 -0.106 -0.161 -0.133 -0.065 -0.007 -0.021 -0.041 -0.062 -0.015

      The sample ACF at higher lags using \(\small n\) = 500 stays much closer to zero (the true value) than the one using \(\small n\) = 50. This illustrates that as the sample size increases, the sample ACF converges to the population ACF, its sampling error shrinking at roughly the \(\small 1/\sqrt{n}\) rate.





Problem 1.23


Simulate a series of \(\small n\) = 500 observations from the signal-plus-noise model presented in Example 1.12 with \(\small \sigma^2_w=1\). Compute the sample ACF to lag 100 of the data you generated and comment.


Solution:
The signal-plus-noise model in Example 1.12 is defined as \[\small \begin{align} x_t&=2\cos\bigg( 2\pi \frac{t+15}{50} \bigg)+w_t \\ &=2\cos\bigg( \frac{2\pi}{50}t+0.6\pi \bigg)+w_t \end{align}\]

Using the R code below, we can generate the plot of the signal (red) and the signal with noise (blue).

x = 2*cos(2*pi*(1:500)/50+0.6*pi) #signal
xt = x + rnorm(500,0,1) #signal plus noise
plot.ts(xt, main="Cosine Wave (red) and Cosine Wave with Noise (blue)", col="#003399")
lines(x, col="#990000")

Computing the ACF to lag 100, we have

print(acf(xt,100))

## 
## Autocorrelations of series 'xt', by lag
## 
##      0      1      2      3      4      5      6      7      8      9     10 
##  1.000  0.656  0.641  0.618  0.576  0.557  0.468  0.434  0.360  0.261  0.216 
##     11     12     13     14     15     16     17     18     19     20     21 
##  0.143  0.051 -0.034 -0.122 -0.206 -0.241 -0.350 -0.396 -0.462 -0.519 -0.573 
##     22     23     24     25     26     27     28     29     30     31     32 
## -0.594 -0.595 -0.615 -0.646 -0.592 -0.602 -0.599 -0.553 -0.517 -0.436 -0.403 
##     33     34     35     36     37     38     39     40     41     42     43 
## -0.330 -0.283 -0.193 -0.102 -0.039  0.042  0.119  0.181  0.272  0.356  0.422 
##     44     45     46     47     48     49     50     51     52     53     54 
##  0.432  0.473  0.549  0.567  0.573  0.575  0.595  0.563  0.560  0.562  0.484 
##     55     56     57     58     59     60     61     62     63     64     65 
##  0.497  0.407  0.374  0.306  0.215  0.177  0.077  0.021 -0.069 -0.082 -0.191 
##     66     67     68     69     70     71     72     73     74     75     76 
## -0.263 -0.334 -0.375 -0.417 -0.479 -0.513 -0.536 -0.552 -0.551 -0.571 -0.571 
##     77     78     79     80     81     82     83     84     85     86     87 
## -0.554 -0.518 -0.491 -0.452 -0.414 -0.343 -0.293 -0.217 -0.155 -0.083 -0.036 
##     88     89     90     91     92     93     94     95     96     97     98 
##  0.056  0.115  0.189  0.249  0.278  0.359  0.385  0.445  0.503  0.514  0.538 
##     99    100 
##  0.539  0.521

As shown in the ACF output, the sample autocorrelations follow a regular sinusoidal pattern that completes one full cycle every 50 lags, mirroring the period-50 cosine signal; the added noise merely damps the amplitude, so the peaks sit near \(\small \pm 0.6\) rather than \(\small \pm 1\).
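
One quick way to read off the cycle length from the sample ACF (a sketch; the index arithmetic assumes lags 0 through 100 as computed above):

a = acf(xt, 100, plot=FALSE)
which.max(a$acf[2:101])  # lag with the largest sample ACF after lag 0: 50, one full cycle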



— Nothing follows —