library(forecast)
## Registered S3 method overwritten by 'quantmod':
##   method            from
##   as.zoo.data.frame zoo
library(seasonal)
library(tseries)
library(readxl)
Datos <- read_excel(file.choose(), sheet = 2)  # interactively pick the Excel file; the PIB series is on sheet 2
data <- ts(Datos$PIB, frequency = 1)  # non-seasonal time series (frequency = 1)
plot(data)
tseries::adf.test(data)
##
## Augmented Dickey-Fuller Test
##
## data: data
## Dickey-Fuller = -3.7499, Lag order = 3, p-value = 0.02805
## alternative hypothesis: stationary
Since the p-value (0.028) is below 0.05, we reject the null hypothesis of a unit root in the augmented Dickey-Fuller test; the series is stationary, so no differencing is needed (d = 0).
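As an optional cross-check on the ADF conclusion (not part of the original analysis), forecast::ndiffs() estimates the order of differencing required and tseries::kpss.test() tests the complementary null hypothesis of stationarity; a minimal sketch on the same data object:
forecast::ndiffs(data)    # expected to return 0 for a stationary series
tseries::kpss.test(data)  # null hypothesis here is level stationarity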
acf(data)
pacf(data)
Both the ACF and the PACF tail off gradually rather than cutting off sharply, so we consider mixed models with both AR (p) and MA (q) components.
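The chunk that fitted the candidate models is not shown; the sketch below is consistent with the Call: lines and AIC values printed in the output that follows (m1 is the object used later by checkresiduals() and forecast(); m2 and m3 are assumed names).
m1 <- arima(data, order = c(2, 0, 0))  # AR(2), referenced again below
m2 <- arima(data, order = c(2, 0, 1))  # assumed name
m3 <- arima(data, order = c(2, 0, 2))  # assumed name
summary(m1)  # with forecast loaded, summary() also prints training set error measures
summary(m2)
summary(m3)
AIC(m1)
AIC(m2)
AIC(m3)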
##
## Call:
## arima(x = data, order = c(2, 0, 0))
##
## Coefficients:
## ar1 ar2 intercept
## 0.1029 -0.1332 2.3760
## s.e. 0.1299 0.1345 0.6782
##
## sigma^2 estimated as 29.58: log likelihood = -189.88, aic = 387.76
##
## Training set error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set -0.005233816 5.438559 4.635858 258.4085 270.2521 0.7884764
## ACF1
## Training set 0.0005872113
##
## Call:
## arima(x = data, order = c(2, 0, 1))
##
## Coefficients:
## ar1 ar2 ma1 intercept
## 0.0935 -0.1324 0.0093 2.3757
## s.e. 1.0344 0.1621 1.0399 0.6795
##
## sigma^2 estimated as 29.58: log likelihood = -189.88, aic = 389.76
##
## Training set error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set -0.005038276 5.438556 4.635539 258.2844 270.1384 0.7884221
## ACF1
## Training set 0.0006050009
##
## Call:
## arima(x = data, order = c(2, 0, 2))
##
## Coefficients:
## ar1 ar2 ma1 ma2 intercept
## 0.1120 -0.8971 -0.0933 1.0000 2.3617
## s.e. 0.3053 0.0853 0.1836 0.0578 0.7306
##
## sigma^2 estimated as 28.37: log likelihood = -189.6, aic = 391.19
##
## Training set error measures:
## ME RMSE MAE MPE MAPE MASE ACF1
## Training set -0.006715404 5.326258 4.515296 311.93 317.425 0.7679708 0.05780488
## [1] 387.7633
## [1] 389.7633
## [1] 391.1915
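As an optional cross-check on this manual comparison (not part of the original analysis), forecast::auto.arima() searches over (p, d, q) orders automatically; a minimal sketch on the same data object:
forecast::auto.arima(data)  # automated order selection, minimizing AICc by default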
We select the model with the lowest AIC of the three, the ARIMA(2,0,0) stored in m1, and run the Ljung-Box test to check whether its residuals show any autocorrelation.
checkresiduals(m1$residuals)  # passing the residuals instead of the fitted model triggers the degrees-of-freedom warning below
## Warning in modeldf.default(object): Could not find appropriate degrees of
## freedom for this model.
Box.test(m1$residuals, type = "Ljung-Box")
##
## Box-Ljung test
##
## data: m1$residuals
## X-squared = 2.2086e-05, df = 1, p-value = 0.9963
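A possible refinement (an assumption, not shown in the original output): Box.test() defaults to lag = 1 and fitdf = 0, whereas for ARMA residuals it is common to test several lags jointly and subtract the number of estimated ARMA coefficients; lag = 10 is an arbitrary choice and fitdf = 2 matches the two AR coefficients of m1.
Box.test(m1$residuals, lag = 10, type = "Ljung-Box", fitdf = 2)  # joint test over 10 lags, adjusted for the 2 fitted AR terms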
a <- forecast::forecast(m1)  # forecasts from the selected ARIMA(2,0,0) with the default horizon
plot(a)
The Ljung-Box p-value (0.996) is greater than 0.05, so we fail to reject the null hypothesis and conclude that the residuals of m1 are independent, with no remaining autocorrelation; the model is therefore adequate for forecasting.
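To inspect the numeric point forecasts and prediction intervals behind the plot above, a minimal sketch (the horizon h = 10 is an arbitrary assumption; the forecast() call above used its default):
a10 <- forecast::forecast(m1, h = 10)  # forecast 10 periods ahead, arbitrary choice
print(a10)  # point forecasts with 80% and 95% prediction intervals
plot(a10)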