Creating the data generating process (DGP)

The DGP is a bivariate VAR(1) with coefficient matrix:

\[ \phi= \left( \begin{array}{ll} 0.3 & 0 \\ 0.5 & 0.6 \end{array} \right) \]

\[ z_{t}= \left( \begin{array}{ll} y_{t}\\ x_{t} \end {array} \right), \ w_{t}= \left( \begin{array}{ll} u_{t}\\ v_{t} \end {array} \right) \]

Phi<-matrix(c(0.3,0,0.5,0.6),2,2,byrow=TRUE)
eigen(Phi)
## eigen() decomposition
## $values
## [1] 0.6 0.3
## 
## $vectors
##      [,1]       [,2]
## [1,]    0  0.5144958
## [2,]    1 -0.8574929
  1. The eigenvalues of \(\phi\) are 0.3 and 0.6. Both are smaller than one in absolute value, so both series in the system are stationary.
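
As a quick programmatic check (a minimal sketch using the Phi defined above), the system is stationary when every eigenvalue of \(\phi\) has modulus strictly below one:

all(Mod(eigen(Phi)$values) < 1)   # stability condition for a VAR(1)
## [1] TRUE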
set.seed(1000)                            # for reproducibility
n=200
z=as.matrix(cbind(rep(0,n),rep(0,n)))     # the two series, initialized at zero
w=as.matrix(cbind(rnorm(n),rnorm(n)))     # i.i.d. N(0,1) errors u_t and v_t
phi=as.matrix(cbind(c(0.3,0.5),c(0,0.6))) # same coefficient matrix as Phi
for(i in 2:n){
  z[i,] = phi %*% z[i-1,] + w[i,]         # z_t = phi z_{t-1} + w_t
}
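
To visualize the simulated system (a small sketch; the axis labels are ours):

matplot(z, type = "l", lty = 1, ylab = "z", xlab = "t")  # y = column 1, x = column 2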

Unit root tests

In practice, each series is tested for a unit root before estimating the VAR.

Test for the first variable

library(tseries)
## Warning: package 'tseries' was built under R version 4.1.3
## Registered S3 method overwritten by 'quantmod':
##   method            from
##   as.zoo.data.frame zoo
adf.test(z[,1],k=1)
## Warning in adf.test(z[, 1], k = 1): p-value smaller than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  z[, 1]
## Dickey-Fuller = -9.8373, Lag order = 1, p-value = 0.01
## alternative hypothesis: stationary
adf.test(z[,1],k=4)
## Warning in adf.test(z[, 1], k = 4): p-value smaller than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  z[, 1]
## Dickey-Fuller = -5.8509, Lag order = 4, p-value = 0.01
## alternative hypothesis: stationary

Test for the second variable

adf.test(z[,2],k=1)
## Warning in adf.test(z[, 2], k = 1): p-value smaller than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  z[, 2]
## Dickey-Fuller = -6.339, Lag order = 1, p-value = 0.01
## alternative hypothesis: stationary
adf.test(z[,2],k=4)
## Warning in adf.test(z[, 2], k = 4): p-value smaller than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  z[, 2]
## Dickey-Fuller = -4.2773, Lag order = 4, p-value = 0.01
## alternative hypothesis: stationary

In all cases the p-values are below 0.05, so the null hypothesis of non-stationarity (a unit root) is rejected. If the presence of a unit root could not be rejected, one would have to:
1. Run the VAR on differenced data if the series are not cointegrated (see the sketch after this list).
2. Run a VECM if the series are cointegrated.
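
For completeness, a minimal sketch of option 1, assuming unit roots had not been rejected (not needed here, since both simulated series are stationary):

# Hypothetical: difference the data before estimating the VAR
zd = diff(z)   # first differences of both series
# the VAR would then be estimated on zd, e.g. VAR(zd, p = 1) with library(vars)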

Selecting the number of lags p

In theory we should include enough lags for the errors to be free of serial correlation. In practice, p is chosen by minimizing the AIC.

library(vars)
## Loading required package: MASS
## Warning: package 'MASS' was built under R version 4.1.3
## Loading required package: strucchange
## Loading required package: zoo
## 
## Attaching package: 'zoo'
## The following objects are masked from 'package:base':
## 
##     as.Date, as.Date.numeric
## Loading required package: sandwich
## Loading required package: urca
## Loading required package: lmtest
VARselect(z)
## $selection
## AIC(n)  HQ(n)  SC(n) FPE(n) 
##      1      1      1      1 
## 
## $criteria
##                  1           2           3           4           5           6
## AIC(n) -0.06176601 -0.04826772 -0.05658535 -0.03983166 -0.02112784 0.002446965
## HQ(n)  -0.02022953  0.02095977  0.04033312  0.08477780  0.13117261 0.182438416
## SC(n)   0.04077159  0.12262829  0.18266905  0.26778114  0.35484336 0.446776575
## FPE(n)  0.94010777  0.95290182  0.94504889  0.96108756  0.97934766 1.002879385
##                 7          8          9        10
## AIC(n) 0.02916935 0.05837397 0.09514411 0.1069014
## HQ(n)  0.23685179 0.29374741 0.35820854 0.3976568
## SC(n)  0.54185736 0.63942039 0.74454892 0.8246646
## FPE(n) 1.03027719 1.06112921 1.10129361 1.1148446

We choose p = 1, since the VAR(1) has the lowest AIC = -0.06176601.

VAR estimation

  1. First we estimate a VAR(1) with intercept and trend.
var.1c=VAR(z,p=1,type="both")
## Warning in VAR(z, p = 1, type = "both"): No column names supplied in y, using: y1, y2 , instead.
summary(var.1c)
## 
## VAR Estimation Results:
## ========================= 
## Endogenous variables: y1, y2 
## Deterministic variables: both 
## Sample size: 199 
## Log Likelihood: -546.002 
## Roots of the characteristic polynomial:
## 0.4243 0.4243
## Call:
## VAR(y = z, p = 1, type = "both")
## 
## 
## Estimation results for equation y1: 
## =================================== 
## y1 = y1.l1 + y2.l1 + const + trend 
## 
##        Estimate Std. Error t value Pr(>|t|)   
## y1.l1  0.227132   0.069460   3.270  0.00127 **
## y2.l1 -0.083555   0.049185  -1.699  0.09095 . 
## const -0.179921   0.140330  -1.282  0.20132   
## trend  0.002479   0.001232   2.013  0.04553 * 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## 
## Residual standard error: 0.9492 on 195 degrees of freedom
## Multiple R-Squared: 0.0859,  Adjusted R-squared: 0.07183 
## F-statistic: 6.108 on 3 and 195 DF,  p-value: 0.0005426 
## 
## 
## Estimation results for equation y2: 
## =================================== 
## y2 = y1.l1 + y2.l1 + const + trend 
## 
##        Estimate Std. Error t value Pr(>|t|)    
## y1.l1  0.576700   0.071714   8.042  8.4e-14 ***
## y2.l1  0.580314   0.050781  11.428  < 2e-16 ***
## const -0.186417   0.144883  -1.287    0.200    
## trend  0.001542   0.001272   1.212    0.227    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## 
## Residual standard error: 0.98 on 195 degrees of freedom
## Multiple R-Squared: 0.5378,  Adjusted R-squared: 0.5307 
## F-statistic: 75.64 on 3 and 195 DF,  p-value: < 2.2e-16 
## 
## 
## 
## Covariance matrix of residuals:
##          y1       y2
## y1  0.90099 -0.05112
## y2 -0.05112  0.96041
## 
## Correlation matrix of residuals:
##          y1       y2
## y1  1.00000 -0.05496
## y2 -0.05496  1.00000

Note that the constant and the trend in equation 2 are not significant, while in equation 1 only the trend is significant at the 5% level.


2. Next we estimate a VAR(1) without intercept or trend.

var.1c=VAR(z,p=1,type="none")
## Warning in VAR(z, p = 1, type = "none"): No column names supplied in y, using: y1, y2 , instead.
summary(var.1c)
## 
## VAR Estimation Results:
## ========================= 
## Endogenous variables: y1, y2 
## Deterministic variables: none 
## Sample size: 199 
## Log Likelihood: -549.51 
## Roots of the characteristic polynomial:
##  0.43  0.43
## Call:
## VAR(y = z, p = 1, type = "none")
## 
## 
## Estimation results for equation y1: 
## =================================== 
## y1 = y1.l1 + y2.l1 
## 
##       Estimate Std. Error t value Pr(>|t|)    
## y1.l1  0.25493    0.06887   3.702 0.000278 ***
## y2.l1 -0.05589    0.04771  -1.171 0.242875    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## 
## Residual standard error: 0.9565 on 197 degrees of freedom
## Multiple R-Squared: 0.06981, Adjusted R-squared: 0.06037 
## F-statistic: 7.392 on 2 and 197 DF,  p-value: 0.0008023 
## 
## 
## Estimation results for equation y2: 
## =================================== 
## y2 = y1.l1 + y2.l1 
## 
##       Estimate Std. Error t value Pr(>|t|)    
## y1.l1  0.58693    0.07050   8.325  1.4e-14 ***
## y2.l1  0.59648    0.04885  12.211  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## 
## Residual standard error: 0.9792 on 197 degrees of freedom
## Multiple R-Squared: 0.5342,  Adjusted R-squared: 0.5295 
## F-statistic:   113 on 2 and 197 DF,  p-value: < 2.2e-16 
## 
## 
## 
## Covariance matrix of residuals:
##         y1      y2
## y1  0.9104 -0.0391
## y2 -0.0391  0.9578
## 
## Correlation matrix of residuals:
##          y1       y2
## y1  1.00000 -0.04187
## y2 -0.04187  1.00000

The estimated regressions for \(y_{t}\) and \(x_{t}\) are: \[ y_{t}=0.25493y_{t-1}-0.05589x_{t-1} \\ x_{t}=0.58693y_{t-1}+0.59648x_{t-1} \]
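
These coefficients can also be extracted programmatically (a sketch assuming the Acoef accessor in vars):

Acoef(var.1c)   # list containing the estimated 2x2 lag-1 coefficient matrix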

Equation-by-equation estimation by OLS

We can obtain the same results with this method.

y=z[,1]
ylag1=c(NA,y[1:(n-1)])   # y lagged one period
x=z[,2]
xlag1=c(NA,x[1:(n-1)])   # x lagged one period
eqy=lm(y~ylag1+xlag1-1)  # no intercept
summary(eqy)
## 
## Call:
## lm(formula = y ~ ylag1 + xlag1 - 1)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -2.28415 -0.55354  0.09757  0.64302  2.51792 
## 
## Coefficients:
##       Estimate Std. Error t value Pr(>|t|)    
## ylag1  0.25493    0.06887   3.702 0.000278 ***
## xlag1 -0.05589    0.04771  -1.171 0.242875    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.9565 on 197 degrees of freedom
##   (1 observation deleted due to missingness)
## Multiple R-squared:  0.06981,    Adjusted R-squared:  0.06037 
## F-statistic: 7.392 on 2 and 197 DF,  p-value: 0.0008023
eqx=lm(x~ylag1+xlag1-1)
summary(eqx)
## 
## Call:
## lm(formula = x ~ ylag1 + xlag1 - 1)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.6594 -0.7106 -0.1286  0.7087  2.2417 
## 
## Coefficients:
##       Estimate Std. Error t value Pr(>|t|)    
## ylag1  0.58693    0.07050   8.325  1.4e-14 ***
## xlag1  0.59648    0.04885  12.211  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.9792 on 197 degrees of freedom
##   (1 observation deleted due to missingness)
## Multiple R-squared:  0.5342, Adjusted R-squared:  0.5295 
## F-statistic:   113 on 2 and 197 DF,  p-value: < 2.2e-16

Variance-covariance matrix

The true variance-covariance matrix (VCM) of the error vector is the identity matrix.

\[ \Omega=E(w_{t}w'_{t})=E [\left( \begin{array}{ll} u_{t} \\ v_{t} \end{array} \right) \left( \begin{array}{ll} u_{t} & v_{t} \end{array} \right)] = \left( \begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array} \right) \] The estimated VCM is:

\[ \hat{\Omega}= \left( \begin{array}{ll} 0.9104 & -0.0391 \\ -0.0391 & 0.9578 \end{array} \right) \] which is very close to the true VCM.
To compute \(\hat{\Omega}\) we need to extract the residuals and then compute the two variances and the covariance (adjusted for degrees of freedom):

sigmau2=var(eqy$residuals)*(n-1)/(n-2)               # variance of the u residuals
sigmav2=var(eqx$residuals)*(n-1)/(n-2)               # variance of the v residuals
sigmauv=cov(eqy$residuals,eqx$residuals)*(n-1)/(n-2) # covariance of the residuals
omegahat=as.matrix(cbind(c(sigmau2,sigmauv),c(sigmauv,sigmav2))) # assemble 2x2 VCM
omegahat
##             [,1]        [,2]
## [1,]  0.91038779 -0.03910035
## [2,] -0.03910035  0.95780791
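
As a cross-check, the same matrix is reported by the fitted VAR (a one-line sketch relying on the covres component returned by summary.varest):

summary(var.1c)$covres   # residual covariance matrix from the VAR summary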

Testing for serial autocorrelation in the residuals

H0: No autocorrelation
H1: Autocorrelation is present

LM = (number of observations) \(\times\) \(R^{2}\) \(\sim\) \(\chi_{df}^{2}\), where df is the number of coefficient restrictions (df = 20 in the output below).

var1c.bg = serial.test(var.1c, lags.pt = 8, type = "BG")
var1c.bg
## 
##  Breusch-Godfrey LM test
## 
## data:  Residuals of VAR object var.1c
## Chi-squared = 26.558, df = 20, p-value = 0.1482

The p-value is greater than 0.05, so there is no statistical evidence of serial autocorrelation.

Cholesky decomposition of \(\hat{\Omega}\)

\[ \hat{\Omega} = AA' \]

where A is a lower triangular matrix.

Aprime=chol(omegahat)   # chol() returns the upper triangular factor A'
A=t(Aprime)             # transpose to obtain the lower triangular A
A
##             [,1]      [,2]
## [1,]  0.95414244 0.0000000
## [2,] -0.04097957 0.9778183
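
A quick sanity check: multiplying the factors back together reproduces \(\hat{\Omega}\) up to rounding:

A %*% Aprime   # A A' = omegahat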

The matrix A is used to construct the impulse responses to orthogonal errors (the structural form).

Impulse responses to orthogonal errors

The VAR in reduced form: \[ \left( \begin{array}{ll} y_{t}\\ x_{t} \end{array} \right) \ =\left( \begin{array}{ll} 0.3 & 0 \\ 0.5 & 0.6 \end{array} \right) \left( \begin{array}{ll} y_{t-1}\\ x_{t-1} \end{array} \right)+ \left( \begin{array}{ll} u_{t}\\ v_{t} \end{array} \right) \]

Wold representation

\[ \left(\begin{array}{ll}y_{t}\\x_{t}\end{array}\right)= \left(\begin{array}{ll}u_{t}\\v_{t}\end{array}\right)+ \left(\begin{array}{ll}0.3 & 0 \\0.5 & 0.6\end{array}\right) \left(\begin{array}{ll}u_{t-1}\\v_{t-1}\end{array}\right)+ \left(\begin{array}{ll}0.3 & 0 \\0.5 & 0.6\end{array}\right)^{2} \left(\begin{array}{ll}u_{t-2}\\v_{t-2}\end{array}\right)+\cdots+ \left(\begin{array}{ll}0.3 & 0 \\0.5 & 0.6\end{array}\right)^{j} \left(\begin{array}{ll}u_{t-j}\\v_{t-j}\end{array}\right)+\cdots \]

Wold representation with orthogonal errors (structural form)

\[ z_{t}=w_{t}+\phi w_{t-1}+\phi^{2} w_{t-2}+\cdots+ \phi^{j} w_{t-j}+\cdots \\ z_{t}=AA^{-1} w_{t}+\phi AA^{-1}w_{t-1}+\phi^{2} AA^{-1} w_{t-2}+\cdots+ \phi^{j} AA^{-1} w_{t-j}+\cdots \\ z_{t}=A \tilde{w}_{t}+\phi A \tilde{w}_{t-1} +\phi^{2} A \tilde{w}_{t-2}+\cdots+ \phi^{j} A \tilde{w}_{t-j}+\cdots \] The impulse response function IR(j), a 2x1 vector giving the response after j periods, can be computed recursively as: \[ IR(j)=\frac{\partial z_{t}}{\partial\tilde{w}_{t-j}}=\phi^{j}A \\ \text{Impulse response to } u_{t}: \\ IR(1)=A(1c) \\ IR(2)=\phi IR(1) \\ IR(3)=\phi IR(2) \\ \vdots \\ IR(j)=\phi IR(j-1) \] where A(1c) is the first column of A; to obtain the impulse response to the second structural error, replace A(1c) with A(2c).

wtilde=as.matrix(c(1,0))   # unit shock to the first structural error
A1c=A%*%wtilde             # selects the first column of A
A1c
##             [,1]
## [1,]  0.95414244
## [2,] -0.04097957
if1=A1c
if1
##             [,1]
## [1,]  0.95414244
## [2,] -0.04097957
phihat=as.matrix(cbind(c(0.25493, 0.58693), c(-0.05589,  0.59648)))
phihat
##         [,1]     [,2]
## [1,] 0.25493 -0.05589
## [2,] 0.58693  0.59648
if2=phihat%*%if1
if2
##           [,1]
## [1,] 0.2455299
## [2,] 0.5355713
if3=phihat%*%if2
if3
##            [,1]
## [1,] 0.03265985
## [2,] 0.46356643
if4=phihat%*%if3
if4
##             [,1]
## [1,] -0.01758275
## [2,]  0.29567715
if5=phihat%*%if4
if5
##             [,1]
## [1,] -0.02100777
## [2,]  0.16604566
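
The step-by-step recursion above can be condensed into a loop (a small sketch; the container IRmat is ours):

# IR(1) = A1c, then IR(j) = phihat %*% IR(j-1)
IRmat = matrix(NA, 5, 2)
ir = A1c
for (j in 1:5) {
  IRmat[j,] = ir
  ir = phihat %*% ir
}
IRmat   # row j = response of (y, x) j periods after the shock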

The same responses can also be obtained as follows:

var1c.11 = irf(var.1c, impulse = "y1", response="y1", boot=TRUE,n.ahead=4)
var1c.11
## 
## Impulse response coefficients
## $y1
##               y1
## [1,]  0.95652435
## [2,]  0.24626191
## [3,]  0.03284138
## [4,] -0.01756413
## [5,] -0.02102573
## 
## 
## Lower Band, CI= 0.95 
## $y1
##               y1
## [1,]  0.88010995
## [2,]  0.11740187
## [3,] -0.04255830
## [4,] -0.07847371
## [5,] -0.06362584
## 
## 
## Upper Band, CI= 0.95 
## $y1
##              y1
## [1,] 1.02161842
## [2,] 0.35271039
## [3,] 0.11126349
## [4,] 0.04137723
## [5,] 0.01641894
plot(var1c.11)

var1c.21 = irf(var.1c, impulse = "y1", response="y2", boot=TRUE,n.ahead=4)
var1c.21
## 
## Impulse response coefficients
## $y1
##               y2
## [1,] -0.04315339
## [2,]  0.53567355
## [3,]  0.46405986
## [4,]  0.29608025
## [5,]  0.16629841
## 
## 
## Lower Band, CI= 0.95 
## $y1
##               y2
## [1,] -0.18911474
## [2,]  0.40745358
## [3,]  0.32669393
## [4,]  0.18101172
## [5,]  0.07919409
## 
## 
## Upper Band, CI= 0.95 
## $y1
##             y2
## [1,] 0.1089663
## [2,] 0.6843819
## [3,] 0.5707819
## [4,] 0.3979103
## [5,] 0.2540366
plot(var1c.21)

Granger causality test

• The Granger causality (GC) test asks whether lagged \(x_{t}\) helps explain \(y_{t}\), and vice versa.
• The null hypotheses are:
H0 : \(\phi_{12}\) = 0 (x does not Granger cause y)
H0 : \(\phi_{21}\) = 0 (y does not Granger cause x)
Each null hypothesis is tested with an F test; it is rejected when the p-value is below 0.05 (a manual version is sketched right after this list).
Remember that Granger causality \(\neq\) causality.
Granger causality is about in-sample fit; it says nothing about out-of-sample forecasting.
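
As an illustration, the first hypothesis can also be tested with nested OLS regressions from the equation-by-equation section (a sketch; anova() computes the F test of the exclusion restriction, using equation-wise rather than system degrees of freedom):

eqy.r = lm(y ~ ylag1 - 1)   # restricted y equation: xlag1 excluded
anova(eqy.r, eqy)           # F test of H0: phi_12 = 0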

causality(var.1c,cause=c("y2"))
## $Granger
## 
##  Granger causality H0: y2 do not Granger-cause y1
## 
## data:  VAR object var.1c
## F-Test = 1.372, df1 = 1, df2 = 394, p-value = 0.2422
## 
## 
## $Instant
## 
##  H0: No instantaneous causality between: y2 and y1
## 
## data:  VAR object var.1c
## Chi-squared = 0.38572, df = 1, p-value = 0.5346
causality(var.1c,cause=c("y1"))
## $Granger
## 
##  Granger causality H0: y1 do not Granger-cause y2
## 
## data:  VAR object var.1c
## F-Test = 69.304, df1 = 1, df2 = 394, p-value = 1.332e-15
## 
## 
## $Instant
## 
##  H0: No instantaneous causality between: y1 and y2
## 
## data:  VAR object var.1c
## Chi-squared = 0.38572, df = 1, p-value = 0.5346

Structural VAR (Blanchard-Quah long-run restrictions)

The long-run impact of the structural shocks \(B\tilde{w}_{t}\) is: \[ (I-\phi)^{-1}B=Q \] where Q is a lower triangular matrix.
Q is the Cholesky decomposition of:

\[ QQ'=(I-\phi)^{-1}\Omega((I-\phi)^{-1})' \] The matrix B is then obtained as: \[ B=(I-\phi)Q \]

var_BQ=BQ(var.1c)
var_BQ
## 
## SVAR Estimation Results:
## ======================== 
## 
## 
## Estimated contemporaneous impact matrix:
##         y1     y2
## y1  0.9472 0.1333
## y2 -0.1791 0.9627
## 
## Estimated identified long run impact matrix:
##       y1    y2
## y1 1.176 0.000
## y2 1.267 2.386

Another way to obtain the long-run impact of \(B\tilde{w}_{t}\):

LRP = solve(diag(2)-phihat)%*%omegahat%*%solve(diag(2)-t(phihat))
Q = t(chol(LRP))
Q
##          [,1]     [,2]
## [1,] 1.173003 0.000000
## [2,] 1.267031 2.385263

The contemporaneous impact matrix:

B = (diag(2)-phihat)%*%Q
B
##            [,1]      [,2]
## [1,]  0.9447834 0.1333123
## [2,] -0.1771979 0.9625013

The long-run impact matrix:

solve(diag(2)-phihat)%*%B
##          [,1]          [,2]
## [1,] 1.173003 -2.775558e-17
## [2,] 1.267031  2.385263e+00

BQ impulse responses to the structural error of y1

wtilde = as.matrix(c(1,0))   # unit shock to the first structural error
B1c = B%*%wtilde             # first column of B
if1 = B1c
if2 = phihat%*%if1
if3 = phihat%*%if2
if4 = phihat%*%if3
plot(c(if1[1], if2[1], if3[1], if4[1]), type="l")

plot(c(if1[2], if2[2], if3[2], if4[2]), type="l")

plot(irf(var_BQ, impulse = "y1", response="y1", boot=TRUE,n.ahead=4))

plot(irf(var_BQ, impulse = "y1", response="y2", boot=TRUE,n.ahead=4))

Forecast error variance decomposition (FEVD)

The forecast error variance is decomposed into the contributions of the individual structural shocks. This shows how important each shock is in explaining the variation of the variables in the model.

var1c.fvd = fevd(var.1c, n.ahead=5)
plot(var1c.fvd)

We see that the variation in y1 is explained almost entirely by the structural shocks to y1, while the variation in y2 is explained mostly by the structural shocks to y2 but also, to a growing extent over the horizon, by the structural shocks to y1 (about 30% at horizon 5).

var1c.fvd$y1
##             y1          y2
## [1,] 1.0000000 0.000000000
## [2,] 0.9969451 0.003054924
## [3,] 0.9947479 0.005252059
## [4,] 0.9938673 0.006132705
## [5,] 0.9935938 0.006406153
var1c.fvd$y2
##               y1        y2
## [1,] 0.001942077 0.9980579
## [2,] 0.182061322 0.8179387
## [3,] 0.265135962 0.7348640
## [4,] 0.293687415 0.7063126
## [5,] 0.302330557 0.6976694
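
Each row of an FEVD sums to one, since the shares exhaust the forecast error variance; a quick check:

rowSums(var1c.fvd$y1)   # each horizon sums to 1
rowSums(var1c.fvd$y2)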

Another way to obtain these values:

FEVD for y1

bigv = omegahat        # one-step-ahead forecast error variance
sumbigv = bigv         # running sum: total forecast error variance of y1
thet = A               # orthogonalized responses, theta_0 = A
sumthet = thet[1,1]**2 # running sum: variance of y1 due to shock 1
fevd.11=rep(1,5)
for (i in 2:5) {
                  bigv = phihat%*%bigv%*%t(phihat)  # phi^(i-1) Omega (phi^(i-1))'
                  sumbigv = sumbigv + bigv
                  thet = phihat%*%thet              # theta_(i-1) = phi theta_(i-2)
                  sumthet = sumthet + thet[1,1]**2
                  fevd.11[i] = sumthet/sumbigv[1,1] # share of var(y1) from shock 1
}
cbind(fevd.11, 1-fevd.11)
##        fevd.11            
## [1,] 1.0000000 0.000000000
## [2,] 0.9969326 0.003067450
## [3,] 0.9947265 0.005273538
## [4,] 0.9938423 0.006157723
## [5,] 0.9935677 0.006432257

FEVD for y2

bigv = omegahat        # same recursion as above, now for equation y2
sumbigv = bigv
thet = A
sumthet = thet[2,2]**2 # variance of y2 due to shock 2
fevd.22=rep(1,5)
for (i in 2:5) {
                  bigv = phihat%*%bigv%*%t(phihat)
                  sumbigv = sumbigv + bigv
                  thet = phihat%*%thet
                  sumthet = sumthet + thet[2,2]**2
                  fevd.22[i] = sumthet/sumbigv[2,2]
}
cbind(1-fevd.22, fevd.22)
##                  fevd.22
## [1,] 0.0000000 1.0000000
## [2,] 0.1820492 0.8179508
## [3,] 0.2650278 0.7349722
## [4,] 0.2935345 0.7064655
## [5,] 0.3021622 0.6978378
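
Comparing with the fevd() output above, the shares agree to roughly three decimals; the small differences come from the degrees-of-freedom adjustment used in omegahat:

round(cbind(fevd = var1c.fvd$y2[,"y2"], manual = fevd.22), 4)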