Loading Libraries

library(foreign)   # read data stored in foreign formats (SPSS, Stata, etc.)
library(tidyverse) # a collection of packages for data science (dplyr, ggplot2, readr, etc.)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr     1.1.4     ✔ readr     2.1.5
## ✔ forcats   1.0.0     ✔ stringr   1.5.1
## ✔ ggplot2   3.4.4     ✔ tibble    3.2.1
## ✔ lubridate 1.9.3     ✔ tidyr     1.3.1
## ✔ purrr     1.0.2     
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
library(TSstudio)  # provides a set of tools for descriptive and predictive analysis of time series data
library(forecast)  # provides methods and tools for displaying and analyzing univariate time series forecasts 
## Registered S3 method overwritten by 'quantmod':
##   method            from
##   as.zoo.data.frame zoo
library(corrplot)  # provides a visual exploratory tool on correlation matrix 
## corrplot 0.92 loaded
library(ggplot2)   # dedicated to data visualization
library(tseries)   # time series analysis 
library(mFilter)   # implements several time series filters useful for smoothing and extracting trend and cyclical components of a time series 
library(dygraphs)  # an interface that provides rich facilities for charting time series
#library(quantmod) 
library(stats)     # functions for statistical calculations and random number generation
library(astsa)     # applied statistical time series analysis 
## 
## Attaching package: 'astsa'
## 
## The following object is masked from 'package:forecast':
## 
##     gas
library(xts)       # convert objects to time series format
## Loading required package: zoo
## 
## Attaching package: 'zoo'
## 
## The following objects are masked from 'package:base':
## 
##     as.Date, as.Date.numeric
## 
## 
## ######################### Warning from 'xts' package ##########################
## #                                                                             #
## # The dplyr lag() function breaks how base R's lag() function is supposed to  #
## # work, which breaks lag(my_xts). Calls to lag(my_xts) that you type or       #
## # source() into this session won't work correctly.                            #
## #                                                                             #
## # Use stats::lag() to make sure you're not using dplyr::lag(), or you can add #
## # conflictRules('dplyr', exclude = 'lag') to your .Rprofile to stop           #
## # dplyr from breaking base R's lag() function.                                #
## #                                                                             #
## # Code in packages is not affected. It's protected by R's namespace mechanism #
## # Set `options(xts.warn_dplyr_breaks_lag = FALSE)` to suppress this warning.  #
## #                                                                             #
## ###############################################################################
## 
## Attaching package: 'xts'
## 
## The following objects are masked from 'package:dplyr':
## 
##     first, last
library(zoo)       # displays time series of numeric vectors / matrices / factors
library(AER)       # applied econometrics with R
## Loading required package: car
## Loading required package: carData
## 
## Attaching package: 'car'
## 
## The following object is masked from 'package:dplyr':
## 
##     recode
## 
## The following object is masked from 'package:purrr':
## 
##     some
## 
## Loading required package: lmtest
## Loading required package: sandwich
## Loading required package: survival
library(plm)       # linear models for panel data
## 
## Attaching package: 'plm'
## 
## The following objects are masked from 'package:dplyr':
## 
##     between, lag, lead
library(vars)      # estimation, lag selection, diagnostic testing, forecasting, causality analysis and estimation of VAR, SVAR, and SVEC models 
## Loading required package: MASS
## 
## Attaching package: 'MASS'
## 
## The following object is masked from 'package:dplyr':
## 
##     select
## 
## Loading required package: strucchange
## 
## Attaching package: 'strucchange'
## 
## The following object is masked from 'package:stringr':
## 
##     boundary
## 
## Loading required package: urca
library(dynlm)     # dynamic linear models and time series regression 
library(dplyr)     # useful package / tool for working with a data frame. It focuses on data manipulation
library(panelvar)  # provides a comprehensive framework for panel vector autoregression models
## Welcome to panelvar! Please cite our package in your publications -- see citation("panelvar")
## 
## Attaching package: 'panelvar'
## 
## The following object is masked from 'package:vars':
## 
##     stability
## 
## The following object is masked from 'package:tidyr':
## 
##     extract
library(GGally)    # extends ggplot2 (e.g. ggpairs scatterplot matrices)
## Registered S3 method overwritten by 'GGally':
##   method from   
##   +.gg   ggplot2
library(lubridate) # date parsing helpers (mdy, ymd)
library(xts)       # (already attached above)

Section I

### Reading the Data

house_price <- read.csv("C:/Users/HP/OneDrive - FEMSA Comercio/Escritorio/Bloque 1/M1/Examen/House_Price_USA.csv")

### a. Exploratory Data Analysis

summary(house_price) # Balanced dataset: there are no NA's
##      state           year         names              plate          
##  Min.   : 1.0   Min.   :1975   Length:1421        Length:1421       
##  1st Qu.:18.0   1st Qu.:1982   Class :character   Class :character  
##  Median :30.0   Median :1989   Mode  :character   Mode  :character  
##  Mean   :29.8   Mean   :1989                                        
##  3rd Qu.:42.0   3rd Qu.:1996                                        
##  Max.   :56.0   Max.   :2003                                        
##      region      region.name            price            income      
##  Min.   :1.000   Length:1421        Min.   : 58.09   Min.   : 5.910  
##  1st Qu.:3.000   Class :character   1st Qu.: 87.55   1st Qu.: 8.677  
##  Median :5.000   Mode  :character   Median : 96.87   Median : 9.718  
##  Mean   :4.327                      Mean   : 99.89   Mean   : 9.933  
##  3rd Qu.:6.000                      3rd Qu.:108.06   3rd Qu.:11.099  
##  Max.   :8.000                      Max.   :224.12   Max.   :18.219  
##       pop              intrate      
##  Min.   :  380477   Min.   :-5.544  
##  1st Qu.: 1472595   1st Qu.: 3.419  
##  Median : 3495939   Median : 4.572  
##  Mean   : 5076286   Mean   : 4.363  
##  3rd Qu.: 5840774   3rd Qu.: 5.687  
##  Max.   :35484453   Max.   :11.225
str(house_price)
## 'data.frame':    1421 obs. of  10 variables:
##  $ state      : int  1 1 1 1 1 1 1 1 1 1 ...
##  $ year       : int  1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 ...
##  $ names      : chr  "Alabama" "Alabama" "Alabama" "Alabama" ...
##  $ plate      : chr  "AL" "AL" "AL" "AL" ...
##  $ region     : int  5 5 5 5 5 5 5 5 5 5 ...
##  $ region.name: chr  "Plains" "Plains" "Plains" "Plains" ...
##  $ price      : num  105 106 111 113 109 ...
##  $ income     : num  6.45 6.83 7.01 7.27 7.24 ...
##  $ pop        : int  3680533 3737204 3782736 3834120 3869444 3900368 3918533 3925263 3934100 3951824 ...
##  $ intrate    : num  0.26 3.88 1.97 1.76 -0.2 ...

The variables stored as integer are converted to numeric.

columns_to_convert <- c("state", "year", "region", "pop") 
house_price[columns_to_convert] <- lapply(house_price[columns_to_convert], as.numeric)

A data frame containing only the numeric variables is created.

df_numeric <- house_price %>% select_if(is.numeric)
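Note that MASS (attached as a dependency of vars; see the loading messages above) masks dplyr::select(), so an unqualified select() call would dispatch to the wrong function here; select_if() sidesteps the conflict. An equivalent using the newer dplyr idiom with an explicit namespace would be:

# Same result with where(); the dplyr:: prefix avoids the MASS::select() masking
df_numeric <- dplyr::select(house_price, where(is.numeric))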

#### Plots

  1. Scatter plot of income vs. house prices
scatter_plot <- ggplot(house_price, aes(x = income, y = price)) +
  geom_point(color = "blue", size = 3, alpha = 0.7) +
  labs(title = "Relación entre Ingresos y Precio de Casas USA",
       x = "income" , y = "price") +
  theme_minimal()

# Display the scatter plot
print(scatter_plot)

This scatter plot shows many scattered blue points, each representing an individual observation of income and house price. Most points cluster in the lower left, which could indicate a concentration of observations with lower incomes, although even there house prices vary widely. As income increases, the spread of prices widens, meaning that price variability grows with income. In general, house prices tend to rise as income rises, but not uniformly.

  2. Histogram of house prices
hist(house_price$price, prob = TRUE, col = 'steelblue', main = 'Histograma de Precios de Casas USA')
lines(density(house_price$price), col = 3, lwd = 4)

This plot shows that most houses are concentrated around a price of $100 (in index units), as indicated by the tallest bar and the peak of the density curve. In other words, a large number of houses have prices close to 100.

  3. Correlation plot
cor_matrix <- cor(df_numeric)
corrplot(cor_matrix, method = "square")

Positive correlation: the variable pairs with positive correlation are income and year, price and income, and pop and income. From this we can infer that income is positively correlated with the other variables, i.e., when income increases, the other variables tend to increase as well, and vice versa.

Negative correlation: the pairs with the strongest negative correlation in this plot are price and region, income and region, and intrate and price. These variables move inversely to one another.

Based on the correlation plot, the following variables will be used in the models: income, year, price, intrate, and pop.
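The numeric matrix behind the correlation plot can also be inspected directly:

# Rounded correlation matrix for readability
round(cor_matrix, 2)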

scatter_matrix_plot <- ggpairs(house_price, 
                               columns = c("price","income", "intrate", "pop", "state", "year"),
                               lower = list(continuous = wrap("points", alpha = 0.3, size = 0.5)),
                               diag = list(continuous = wrap("barDiag", alpha = 0.8, bins = 20)))

# Display the scatter matrix plot
print(scatter_matrix_plot)

par(mfrow=c(2,3))
hist(log(house_price$price))
hist(log(house_price$income))
hist(log(house_price$pop))
hist(log(house_price$intrate))
## Warning in log(house_price$intrate): NaNs produced

(The NaNs arise because intrate takes negative values, for which the logarithm is undefined.)

Converting the data to panel data

panel_data <- pdata.frame(house_price, index = c("state", "year"), drop.index = TRUE) # we set our dataset to a panel dataset where state and year are both indexed. 
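The panel dimensions can be confirmed with pdim() from plm (a check not in the original; the model summaries below report the same structure):

# Verify the panel structure: balanced, n = 49 states, T = 29 years, N = 1421
pdim(panel_data)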

### b. Specification and Estimation of the Baseline Regression Model

ols <- lm(log(price) ~ lag(income) + intrate + log(pop), data=panel_data)
summary(ols)
## 
## Call:
## lm(formula = log(price) ~ lag(income) + intrate + log(pop), data = panel_data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.54640 -0.08981  0.00037  0.09258  0.62677 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  3.920050   0.062439  62.782  < 2e-16 ***
## lag(income)  0.046636   0.002423  19.244  < 2e-16 ***
## intrate     -0.018730   0.001606 -11.665  < 2e-16 ***
## log(pop)     0.019059   0.004218   4.518 6.75e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.1557 on 1417 degrees of freedom
## Multiple R-squared:  0.2724, Adjusted R-squared:  0.2708 
## F-statistic: 176.8 on 3 and 1417 DF,  p-value: < 2.2e-16

Multicollinearity

vif(ols)
## lag(income)     intrate    log(pop) 
##    1.069763    1.018199    1.053930

The Variance Inflation Factor (VIF) values are all close to 1, which suggests there is no significant multicollinearity among the independent variables in the regression model.
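As a reminder, the VIF of predictor j equals 1 / (1 - R²_j), where R²_j is the R-squared from regressing that predictor on the remaining ones. A manual check for one predictor (a sketch equivalent in spirit to car::vif(), not part of the original code):

# Manual VIF for intrate: regress it on the other regressors of the OLS model
r2_intrate <- summary(lm(intrate ~ lag(income) + log(pop), data = panel_data))$r.squared
1 / (1 - r2_intrate) # should be close to the vif() value reported above (~1.02)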

### c. Estimation of Panel Data Regression Models with the Following Specifications

panel_model1 <- plm(log(price) ~ log(income) + intrate + log(pop), data=panel_data, model="within") #FE
summary(panel_model1)
## Oneway (individual) effect Within Model
## 
## Call:
## plm(formula = log(price) ~ log(income) + intrate + log(pop), 
##     data = panel_data, model = "within")
## 
## Balanced Panel: n = 49, T = 29, N = 1421
## 
## Residuals:
##       Min.    1st Qu.     Median    3rd Qu.       Max. 
## -0.5388387 -0.0747516 -0.0080046  0.0700524  0.3657791 
## 
## Coefficients:
##               Estimate Std. Error  t-value Pr(>|t|)    
## log(income)  0.4235835  0.0313836  13.4970   <2e-16 ***
## intrate     -0.0188621  0.0012187 -15.4777   <2e-16 ***
## log(pop)    -0.0059007  0.0339458  -0.1738    0.862    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Total Sum of Squares:    24.393
## Residual Sum of Squares: 18.482
## R-Squared:      0.24232
## Adj. R-Squared: 0.21409
## F-statistic: 145.944 on 3 and 1369 DF, p-value: < 2.22e-16
panel_model_tfe <- plm(log(price) ~ log(income) + intrate + log(pop) + factor(house_price$year), data= panel_data, model="within", effect = "time")     # fixed model
summary(panel_model_tfe)
## Oneway (time) effect Within Model
## 
## Call:
## plm(formula = log(price) ~ log(income) + intrate + log(pop) + 
##     factor(house_price$year), data = panel_data, effect = "time", 
##     model = "within")
## 
## Balanced Panel: n = 49, T = 29, N = 1421
## 
## Residuals:
##       Min.    1st Qu.     Median    3rd Qu.       Max. 
## -0.5640882 -0.0973420  0.0020047  0.0864150  0.5715175 
## 
## Coefficients:
##               Estimate Std. Error t-value  Pr(>|t|)    
## log(income)  0.5298396  0.0323991 16.3536 < 2.2e-16 ***
## intrate     -0.0280262  0.0043399 -6.4578 1.466e-10 ***
## log(pop)     0.0185704  0.0042172  4.4034 1.147e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Total Sum of Squares:    41.917
## Residual Sum of Squares: 32.928
## R-Squared:      0.21446
## Adj. R-Squared: 0.19692
## F-statistic: 126.4 on 3 and 1389 DF, p-value: < 2.22e-16
panel_model2 <- plm(log(price) ~ log(income) + intrate + log(pop), data=panel_data, model="random") # RE
summary(panel_model2)
## Oneway (individual) effect Random Effect Model 
##    (Swamy-Arora's transformation)
## 
## Call:
## plm(formula = log(price) ~ log(income) + intrate + log(pop), 
##     data = panel_data, model = "random")
## 
## Balanced Panel: n = 49, T = 29, N = 1421
## 
## Effects:
##                   var std.dev share
## idiosyncratic 0.01350 0.11619 0.519
## individual    0.01252 0.11188 0.481
## theta: 0.8106
## 
## Residuals:
##       Min.    1st Qu.     Median    3rd Qu.       Max. 
## -0.5368978 -0.0770703 -0.0042198  0.0729515  0.3972974 
## 
## Coefficients:
##               Estimate Std. Error  z-value Pr(>|z|)    
## (Intercept)  3.4938819  0.2100503  16.6335   <2e-16 ***
## log(income)  0.4139645  0.0262962  15.7424   <2e-16 ***
## intrate     -0.0189411  0.0012127 -15.6191   <2e-16 ***
## log(pop)     0.0154700  0.0148811   1.0396   0.2985    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Total Sum of Squares:    25.212
## Residual Sum of Squares: 19.104
## R-Squared:      0.24228
## Adj. R-Squared: 0.24068
## Chisq: 453.093 on 3 DF, p-value: < 2.22e-16
panel_model3 <- plm(log(price) ~ log(income) + intrate + log(pop), data=panel_data, model="pooling") #Pooling
summary(panel_model3)
## Pooling Model
## 
## Call:
## plm(formula = log(price) ~ log(income) + intrate + log(pop), 
##     data = panel_data, model = "pooling")
## 
## Balanced Panel: n = 49, T = 29, N = 1421
## 
## Residuals:
##        Min.     1st Qu.      Median     3rd Qu.        Max. 
## -0.55501905 -0.09341850 -0.00072342  0.09396888  0.63096928 
## 
## Coefficients:
##               Estimate Std. Error  t-value  Pr(>|t|)    
## (Intercept)  3.3762239  0.0751159  44.9469 < 2.2e-16 ***
## log(income)  0.4378571  0.0249382  17.5577 < 2.2e-16 ***
## intrate     -0.0189192  0.0016378 -11.5519 < 2.2e-16 ***
## log(pop)     0.0196848  0.0043016   4.5761 5.151e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Total Sum of Squares:    47.234
## Residual Sum of Squares: 35.605
## R-Squared:      0.2462
## Adj. R-Squared: 0.24461
## F-statistic: 154.272 on 3 and 1417 DF, p-value: < 2.22e-16

Tests to choose which model is best

# do we select fixed effects estimation or OLS estimation? 
pFtest(panel_model_tfe, ols) 
## 
##  F test for time effects
## 
## data:  log(price) ~ log(income) + intrate + log(pop) + factor(house_price$year)
## F = 2.1715, df1 = 28, df2 = 1389, p-value = 0.0003999
## alternative hypothesis: significant effects
# If p-value is less than 5% fixed effects model is a better estimation than ols. 
# p-value of 0.0004 (< 5%): FE is better

# do we add time fixed effects? 
pFtest(panel_model_tfe, panel_model3) # pFtest test for individual and / or time effects. If p-value is less than 5%, it is recommended to consider using time - fixed effects. 
## 
##  F test for time effects
## 
## data:  log(price) ~ log(income) + intrate + log(pop) + factor(house_price$year)
## F = 4.0338, df1 = 28, df2 = 1389, p-value = 1.335e-11
## alternative hypothesis: significant effects
# p-value of 1.335e-11 (< 5%): time fixed effects are preferred

# estimate Hausman test to select either fixed or random effects
phtest(panel_model_tfe, panel_model2) # Hausman Test for Panel Models. P-value less than 5% suggest to consider FE rather than RE. 
## 
##  Hausman Test
## 
## data:  log(price) ~ log(income) + intrate + log(pop) + factor(house_price$year)
## chisq = 41.156, df = 3, p-value = 6.059e-09
## alternative hypothesis: one model is inconsistent
# p-value of 6.059e-09 (< 5%): Fixed Effects is recommended

# do we select random effects estimation or OLS estimation?
plmtest(panel_model3, type=c("bp")) # If p-value is less than 5%, random effects is appropriate rather than ols. 
## 
##  Lagrange Multiplier Test - (Breusch-Pagan)
## 
## data:  log(price) ~ log(income) + intrate + log(pop)
## chisq = 4239.5, df = 1, p-value < 2.2e-16
## alternative hypothesis: significant effects
# p-value < 5%: RE is recommended over pooled OLS

Taking all of the above into account, the time fixed effects model is the best choice.

### d. Diagnostic Tests

# Heteroskedasticity
bptest(panel_model_tfe)           
## 
##  studentized Breusch-Pagan test
## 
## data:  panel_model_tfe
## BP = 181.3, df = 31, p-value < 2.2e-16
# p-value < 5%: heteroskedasticity is present.

# Serial autocorrelation
residuos <- residuals(panel_model_tfe)
Box.test(residuos, type="Ljung-Box")
## 
##  Box-Ljung test
## 
## data:  residuos
## X-squared = 1173.7, df = 1, p-value < 2.2e-16
# p-value < 5%: there is significant serial autocorrelation in the model residuals.
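Given the heteroskedasticity and serial correlation just detected, a common remedy (a sketch, not part of the original analysis) is to re-test the coefficients with a robust covariance matrix, e.g. plm's Arellano estimator, which is consistent under both problems:

# Coefficient tests with heteroskedasticity- and autocorrelation-robust SEs
coeftest(panel_model_tfe, vcov = vcovHC(panel_model_tfe, method = "arellano"))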

Section II

### a. Description of ALFA's Business Activity

ALFA is a Mexican company that manages several multinational businesses. Among its subsidiaries is Sigma, which operates in the food industry, specializing in the production, marketing, and distribution of food products. It also owns Alpek, recognized as one of the leading producers of polyester (PTA, PET, recycled PET) and expandable polystyrene (EPS).

ALFA <- read.csv("C:/Users/HP/OneDrive - FEMSA Comercio/Escritorio/ALFAA.csv")
summary(ALFA) # Balanced dataset: there are no NA's
##      Date                Open             High             Low        
##  Length:365         Min.   : 6.559   Min.   : 6.879   Min.   : 5.591  
##  Class :character   1st Qu.:12.196   1st Qu.:12.589   1st Qu.:11.675  
##  Mode  :character   Median :13.841   Median :14.288   Median :13.320  
##                     Mean   :15.647   Mean   :16.130   Mean   :15.099  
##                     3rd Qu.:20.199   3rd Qu.:20.857   3rd Qu.:19.322  
##                     Max.   :25.571   Max.   :26.128   Max.   :24.803  
##      Close          Adj.Close          Volume         
##  Min.   : 6.349   Min.   : 6.306   Min.   : 10927684  
##  1st Qu.:12.114   1st Qu.:12.041   1st Qu.: 25043651  
##  Median :13.886   Median :13.796   Median : 34412834  
##  Mean   :15.617   Mean   :15.493   Mean   : 43393794  
##  3rd Qu.:20.089   3rd Qu.:19.841   3rd Qu.: 51832081  
##  Max.   :25.680   Max.   :25.363   Max.   :832008391

Modifications to the data frame

# Convert Date from character to Date format
ALFA$Date <- mdy(ALFA$Date)

# Verify that the changes were applied
str(ALFA)
## 'data.frame':    365 obs. of  7 variables:
##  $ Date     : Date, format: "2017-01-02" "2017-01-09" ...
##  $ Open     : num  23.5 23.7 24 23.3 24.8 ...
##  $ High     : num  24 23.9 24 25.5 25.4 ...
##  $ Low      : num  23 22.7 22.4 23.3 24.4 ...
##  $ Close    : num  23.6 23.9 23.3 24.9 24.7 ...
##  $ Adj.Close: num  23.3 23.5 23 24.6 24.4 ...
##  $ Volume   : int  24476022 29631442 34347218 48427622 38867753 37344072 62087978 58242390 54195160 32387751 ...
summary(ALFA)
##       Date                 Open             High             Low        
##  Min.   :2017-01-02   Min.   : 6.559   Min.   : 6.879   Min.   : 5.591  
##  1st Qu.:2018-10-01   1st Qu.:12.196   1st Qu.:12.589   1st Qu.:11.675  
##  Median :2020-06-29   Median :13.841   Median :14.288   Median :13.320  
##  Mean   :2020-06-29   Mean   :15.647   Mean   :16.130   Mean   :15.099  
##  3rd Qu.:2022-03-28   3rd Qu.:20.199   3rd Qu.:20.857   3rd Qu.:19.322  
##  Max.   :2023-12-25   Max.   :25.571   Max.   :26.128   Max.   :24.803  
##      Close          Adj.Close          Volume         
##  Min.   : 6.349   Min.   : 6.306   Min.   : 10927684  
##  1st Qu.:12.114   1st Qu.:12.041   1st Qu.: 25043651  
##  Median :13.886   Median :13.796   Median : 34412834  
##  Mean   :15.617   Mean   :15.493   Mean   : 43393794  
##  3rd Qu.:20.089   3rd Qu.:19.841   3rd Qu.: 51832081  
##  Max.   :25.680   Max.   :25.363   Max.   :832008391

### b. Generate and Display a Time Series Plot of the Adj Close Variable

We visualize the behavior of ALFA's stock price over the years.

Trend

## [1] TRUE
## [1] 52
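The chunk that builds the ALFA_ST time series object is not shown in the rendered document; the output above (TRUE, 52) and the decomposition below (2017 week 1 through 2023 week 1) are consistent with a construction along these lines (an assumed sketch):

# Assumed construction: weekly ts of Adj.Close, truncated at week 1 of 2023
ALFA_ST <- ts(ALFA$Adj.Close, start = c(2017, 1), end = c(2023, 1), frequency = 52)
is.ts(ALFA_ST)     # [1] TRUE
frequency(ALFA_ST) # [1] 52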

We can see that the trend line has a negative slope, meaning that ALFA's stock price has declined over time, though not in a linear fashion.
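The trend line described here could be produced, for example, by regressing the series on time (an assumed reconstruction, since the original chunk is hidden):

# Overlay a linear trend on the series
plot(ALFA_ST, main = "ALFA Adj. Close", ylab = "Adj. Close")
abline(reg = lm(ALFA_ST ~ time(ALFA_ST)), col = "red", lwd = 2)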

Seasonality

decompose_ts <- decompose(ALFA_ST, type = "multiplicative")
decompose_ts
## $x
## Time Series:
## Start = c(2017, 1) 
## End = c(2023, 1) 
## Frequency = 52 
##   [1] 23.318956 23.535206 22.976555 24.589424 24.391197 23.967707 22.661192
##   [8] 23.454113 23.805519 24.093847 24.722548 25.363163 24.722548 24.244335
##  [15] 23.287914 22.539019 23.342052 23.206709 24.307495 23.991695 24.442835
##  [22] 24.560137 24.451859 23.919514 23.287914 23.296936 23.333027 24.298473
##  [29] 24.379673 22.800678 22.268335 20.987093 22.511950 22.890909 22.223223
##  [36] 21.790125 22.502928 21.970581 20.743481 19.841194 17.910311 18.180994
##  [43] 18.289268 18.063698 17.865194 18.632133 18.965981 18.839663 18.731388
##  [50] 19.507351 19.155462 19.507351 20.093834 20.211130 21.474329 21.952534
##  [57] 20.896866 20.553995 20.785116 20.369411 19.700674 20.568228 21.119486
##  [64] 21.209856 21.047192 21.291187 21.255039 21.273117 21.499041 20.649561
##  [71] 20.071192 19.474749 19.294008 18.354158 18.806011 20.089268 20.089268
##  [78] 20.875488 20.866449 20.721859 21.218893 22.402739 22.556368 21.887630
##  [85] 21.824371 22.185850 22.709999 22.619625 22.845552 22.176811 21.806299
##  [92] 21.146595 21.038155 21.996075 20.550154 19.980822 20.776081 19.348232
##  [99] 19.013861 18.489717 19.935635 19.953712 20.703783 20.730894 21.462891
## [106] 21.978001 22.429853 21.417706 21.752075 21.029112 20.504972 20.107340
## [113] 19.339195 18.462605 18.853157 18.228943 18.663181 18.988861 17.858028
## [120] 18.427969 18.165613 17.441885 16.880995 16.093941 15.551141 16.184404
## [127] 16.057751 16.790527 17.378557 17.061926 16.464851 15.650656 14.474594
## [134] 14.519828 15.053578 15.198322 13.171880 14.311754 15.487815 15.442583
## [141] 15.659700 15.915265 15.743158 15.797507 15.906209 15.715983 15.100027
## [148] 15.462357 15.525763 14.411603 13.768471 13.913402 13.831877 14.248555
## [155] 14.918863 14.330078 14.819223 14.511243 14.348197 13.714121 12.853592
## [162] 12.101760 11.214057 11.884364 11.386164 11.150650  9.218879  7.331548
## [169]  6.778053  6.306221  7.903192  9.191658  7.821528  8.946668  8.846856
## [176]  9.554605 10.598081 10.652523 11.868400 12.040798 11.859325 11.705072
## [183] 11.478229 10.951955 10.752334 10.108101 10.924734 13.202233 12.920948
## [190] 12.866506 13.011683 12.893726 12.294863 11.723220 12.394673 12.131536
## [197] 12.812063 13.093348 14.200339 12.857431 13.256675 13.964423 13.873686
## [204] 13.819243 15.261962 14.944384 12.966315 12.784842 13.047980 12.889660
## [211] 12.381336 11.836702 11.709621 11.219450 10.947133 11.028830 10.338960
## [218] 11.056061 11.155910 11.110524 10.765589 10.711125 10.611278 11.049329
## [225] 12.668078 12.886336 12.813583 12.631701 12.731736 12.686267 13.450170
## [232] 14.050380 13.422888 13.732087 13.904875 13.895781 13.768463 13.913969
## [239] 13.786652 13.395606 12.895431 12.167903 12.877241 12.622608 12.458914
## [246] 12.649890 12.959088 13.286477 13.359228 13.923064 13.986722 13.559300
## [253] 14.723345 14.723345 14.523274 14.505085 13.959439 14.195887 13.095501
## [260] 13.795746 13.659334 13.231913 13.459265 13.504734 13.286477 13.050029
## [267] 14.395956 13.968534 13.586582 13.832124 14.677874 13.823029 13.568827
## [274] 13.723846 13.258786 13.459400 13.240548 12.465447 12.283071 12.401616
## [281] 13.322618 12.985222 12.894033 12.511041 12.337784 12.465447 13.140242
## [288] 13.632658 13.568827 13.094647 12.775489 12.675180 12.884913 12.866677
## [295] 12.511041 11.963911 11.991268 12.000387 11.918318 11.672109 11.207047
## [302] 10.897008 11.544445 11.845366 12.638705 12.757250 12.921390 12.629586
## [309] 12.328665 12.091576 12.118932 11.745060 11.334711
## 
## $seasonal
## Time Series:
## Start = c(2017, 1) 
## End = c(2023, 1) 
## Frequency = 52 
##   [1] 1.0605844 1.0556788 1.0678737 1.0465822 1.0229462 0.9908473 0.9912536
##   [8] 0.9908574 0.9536115 0.9662777 0.9609398 0.9125960 0.8992297 0.9006345
##  [15] 0.9075675 0.9475286 0.9476596 0.9434890 0.9272632 0.9248442 0.9517606
##  [22] 0.9463000 0.9819517 1.0109603 1.0040919 1.0141367 1.0245016 1.0139872
##  [29] 1.0020211 0.9928569 1.0104510 1.0298356 1.0068712 1.0178208 1.0473660
##  [36] 1.0381523 1.0388990 1.0259739 1.0271345 1.0156457 1.0128781 1.0374554
##  [43] 1.0372944 1.0081310 1.0414742 1.0323674 1.0203256 1.0170849 1.0470113
##  [50] 1.0617908 1.0301982 1.0350058 1.0605844 1.0556788 1.0678737 1.0465822
##  [57] 1.0229462 0.9908473 0.9912536 0.9908574 0.9536115 0.9662777 0.9609398
##  [64] 0.9125960 0.8992297 0.9006345 0.9075675 0.9475286 0.9476596 0.9434890
##  [71] 0.9272632 0.9248442 0.9517606 0.9463000 0.9819517 1.0109603 1.0040919
##  [78] 1.0141367 1.0245016 1.0139872 1.0020211 0.9928569 1.0104510 1.0298356
##  [85] 1.0068712 1.0178208 1.0473660 1.0381523 1.0388990 1.0259739 1.0271345
##  [92] 1.0156457 1.0128781 1.0374554 1.0372944 1.0081310 1.0414742 1.0323674
##  [99] 1.0203256 1.0170849 1.0470113 1.0617908 1.0301982 1.0350058 1.0605844
## [106] 1.0556788 1.0678737 1.0465822 1.0229462 0.9908473 0.9912536 0.9908574
## [113] 0.9536115 0.9662777 0.9609398 0.9125960 0.8992297 0.9006345 0.9075675
## [120] 0.9475286 0.9476596 0.9434890 0.9272632 0.9248442 0.9517606 0.9463000
## [127] 0.9819517 1.0109603 1.0040919 1.0141367 1.0245016 1.0139872 1.0020211
## [134] 0.9928569 1.0104510 1.0298356 1.0068712 1.0178208 1.0473660 1.0381523
## [141] 1.0388990 1.0259739 1.0271345 1.0156457 1.0128781 1.0374554 1.0372944
## [148] 1.0081310 1.0414742 1.0323674 1.0203256 1.0170849 1.0470113 1.0617908
## [155] 1.0301982 1.0350058 1.0605844 1.0556788 1.0678737 1.0465822 1.0229462
## [162] 0.9908473 0.9912536 0.9908574 0.9536115 0.9662777 0.9609398 0.9125960
## [169] 0.8992297 0.9006345 0.9075675 0.9475286 0.9476596 0.9434890 0.9272632
## [176] 0.9248442 0.9517606 0.9463000 0.9819517 1.0109603 1.0040919 1.0141367
## [183] 1.0245016 1.0139872 1.0020211 0.9928569 1.0104510 1.0298356 1.0068712
## [190] 1.0178208 1.0473660 1.0381523 1.0388990 1.0259739 1.0271345 1.0156457
## [197] 1.0128781 1.0374554 1.0372944 1.0081310 1.0414742 1.0323674 1.0203256
## [204] 1.0170849 1.0470113 1.0617908 1.0301982 1.0350058 1.0605844 1.0556788
## [211] 1.0678737 1.0465822 1.0229462 0.9908473 0.9912536 0.9908574 0.9536115
## [218] 0.9662777 0.9609398 0.9125960 0.8992297 0.9006345 0.9075675 0.9475286
## [225] 0.9476596 0.9434890 0.9272632 0.9248442 0.9517606 0.9463000 0.9819517
## [232] 1.0109603 1.0040919 1.0141367 1.0245016 1.0139872 1.0020211 0.9928569
## [239] 1.0104510 1.0298356 1.0068712 1.0178208 1.0473660 1.0381523 1.0388990
## [246] 1.0259739 1.0271345 1.0156457 1.0128781 1.0374554 1.0372944 1.0081310
## [253] 1.0414742 1.0323674 1.0203256 1.0170849 1.0470113 1.0617908 1.0301982
## [260] 1.0350058 1.0605844 1.0556788 1.0678737 1.0465822 1.0229462 0.9908473
## [267] 0.9912536 0.9908574 0.9536115 0.9662777 0.9609398 0.9125960 0.8992297
## [274] 0.9006345 0.9075675 0.9475286 0.9476596 0.9434890 0.9272632 0.9248442
## [281] 0.9517606 0.9463000 0.9819517 1.0109603 1.0040919 1.0141367 1.0245016
## [288] 1.0139872 1.0020211 0.9928569 1.0104510 1.0298356 1.0068712 1.0178208
## [295] 1.0473660 1.0381523 1.0388990 1.0259739 1.0271345 1.0156457 1.0128781
## [302] 1.0374554 1.0372944 1.0081310 1.0414742 1.0323674 1.0203256 1.0170849
## [309] 1.0470113 1.0617908 1.0301982 1.0350058 1.0605844
## 
## $trend
## Time Series:
## Start = c(2017, 1) 
## End = c(2023, 1) 
## Frequency = 52 
##   [1]       NA       NA       NA       NA       NA       NA       NA       NA
##   [9]       NA       NA       NA       NA       NA       NA       NA       NA
##  [17]       NA       NA       NA       NA       NA       NA       NA       NA
##  [25]       NA       NA 22.21266 22.14969 22.10328 22.06348 22.00453 21.93810
##  [33] 21.88724 21.83954 21.77041 21.69704 21.62849 21.55391 21.47864 21.41490
##  [41] 21.36696 21.33524 21.30535 21.26304 21.19772 21.11355 21.02061 20.91143
##  [49] 20.79747 20.70635 20.63877 20.58473 20.53773 20.47962 20.41484 20.38062
##  [57] 20.37956 20.39099 20.39304 20.37965 20.37755 20.39021 20.40148 20.40675
##  [65] 20.41896 20.44173 20.48436 20.55111 20.60954 20.64971 20.69613 20.73101
##  [73] 20.73835 20.73545 20.74366 20.75954 20.77872 20.80537 20.83030 20.86045
##  [81] 20.88663 20.89067 20.89375 20.90654 20.90842 20.90321 20.89721 20.87349
##  [89] 20.83145 20.78100 20.72941 20.68435 20.62955 20.56953 20.51012 20.44722
##  [97] 20.38570 20.32252 20.25402 20.19717 20.14988 20.09174 20.03396 19.97122
## [105] 19.89223 19.80115 19.68754 19.54689 19.39895 19.26249 19.11497 18.95606
## [113] 18.81090 18.67245 18.53435 18.40504 18.28654 18.17680 18.07602 17.96629
## [121] 17.85350 17.75765 17.66372 17.56577 17.46786 17.37343 17.27073 17.15719
## [129] 17.04670 16.92953 16.80411 16.66843 16.51892 16.36714 16.20751 16.03611
## [137] 15.86093 15.69253 15.53699 15.39021 15.22727 15.02985 14.81078 14.57455
## [145] 14.35689 14.17236 13.98408 13.80294 13.64400 13.50387 13.39337 13.29255
## [153] 13.19908 13.11312 13.01438 12.90981 12.81035 12.71722 12.63625 12.55804
## [161] 12.47592 12.41702 12.39542 12.37911 12.34140 12.29309 12.23622 12.16356
## [169] 12.09106 12.02361 11.95861 11.90364 11.86977 11.83607 11.78921 11.76309
## [177] 11.75980 11.75991 11.77275 11.79319 11.78111 11.74748 11.71559 11.68297
## [185] 11.64846 11.61150 11.58244 11.56296 11.55191 11.54112 11.52282 11.51184
## [193] 11.52956 11.58452 11.65920 11.73990 11.80829 11.85219 11.91666 12.00114
## [201] 12.07716 12.14489 12.19499 12.23506 12.26983 12.30436 12.33872 12.37324
## [209] 12.41607 12.46771 12.52501 12.59061 12.65472 12.68410 12.68571 12.67875
## [217] 12.67074 12.66684 12.66581 12.67630 12.69064 12.70717 12.72354 12.73678
## [225] 12.74270 12.74739 12.76825 12.78965 12.80319 12.81603 12.81010 12.79038
## [233] 12.78442 12.79539 12.81099 12.82015 12.83381 12.86021 12.89141 12.92418
## [241] 12.97494 13.03637 13.09586 13.15378 13.21434 13.27429 13.32732 13.38325
## [249] 13.43767 13.48630 13.51498 13.51644 13.50729 13.49998 13.50345 13.51200
## [257] 13.50953 13.48938 13.46415 13.44153 13.42200 13.41212 13.40767 13.39787
## [265] 13.38027 13.36362 13.35659 13.36321 13.36641 13.35655 13.34572 13.33498
## [273] 13.31873 13.29320 13.25698 13.20719 13.15461 13.11465 13.07812 13.03917
## [281] 13.00487 12.97143 12.93772 12.90180 12.87218 12.84307 12.80100       NA
## [289]       NA       NA       NA       NA       NA       NA       NA       NA
## [297]       NA       NA       NA       NA       NA       NA       NA       NA
## [305]       NA       NA       NA       NA       NA       NA       NA       NA
## [313]       NA
## 
## $random
## Time Series:
## Start = c(2017, 1) 
## End = c(2023, 1) 
## Frequency = 52 
##   [1]        NA        NA        NA        NA        NA        NA        NA
##   [8]        NA        NA        NA        NA        NA        NA        NA
##  [15]        NA        NA        NA        NA        NA        NA        NA
##  [22]        NA        NA        NA        NA        NA 1.0253164 1.0818796
##  [29] 1.1007641 1.0408475 1.0015219 0.9289350 1.0215232 1.0297890 0.9746349
##  [36] 0.9673824 1.0014733 0.9935256 0.9402591 0.9122407 0.8275671 0.8213923
##  [43] 0.8275717 0.8426833 0.8092265 0.8548049 0.8842828 0.8857928 0.8602171
##  [50] 0.8872698 0.9009238 0.9156097 0.9224974 0.9348391 0.9850398 1.0291861
##  [57] 1.0023825 1.0173050 1.0282192 1.0087199 1.0138125 1.0439345 1.0772725
##  [64] 1.1388991 1.1462780 1.1564681 1.1433012 1.0924546 1.1007748 1.0598881
##  [71] 1.0458778 1.0157408 0.9775083 0.9353888 0.9232538 0.9572214 0.9628795
##  [78] 0.9893836 0.9777785 0.9796537 1.0138589 1.0800953 1.0684088 1.0165965
##  [85] 1.0366846 1.0427780 1.0376009 1.0438289 1.0556229 1.0401511 1.0241597
##  [92] 1.0065987 1.0068408 1.0307455 0.9659283 0.9693086 0.9785644 0.9222091
##  [99] 0.9200686 0.9000828 0.9449442 0.9353351 1.0031416 1.0029300 1.0173245
## [106] 1.0513954 1.0668791 1.0469404 1.0961492 1.1017977 1.0821832 1.0705214
## [113] 1.0780955 1.0232687 1.0585483 1.0852908 1.1349679 1.1599328 1.0885581
## [120] 1.0824969 1.0736785 1.0410487 1.0306539 0.9906647 0.9353943 0.9844245
## [127] 0.9468558 0.9680197 1.0153127 0.9937715 0.9563785 0.9259882 0.8744757
## [134] 0.8935152 0.9191962 0.9202988 0.8247935 0.8960426 0.9517542 0.9665278
## [141] 0.9898928 1.0321030 1.0348717 1.0672128 1.0938286 1.0688827 1.0409784
## [148] 1.1111873 1.0926039 1.0337601 1.0075282 1.0291243 1.0008897 1.0233536
## [155] 1.1127340 1.0724723 1.0907350 1.0808878 1.0633084 1.0434529 1.0071617
## [162] 0.9836131 0.9126764 0.9688920 0.9674787 0.9387226 0.7840334 0.6604750
## [169] 0.6234047 0.5823522 0.7281870 0.8149327 0.6953395 0.8011556 0.8092847
## [176] 0.8782594 0.9468900 0.9572375 1.0266538 1.0099264 1.0025367 0.9825009
## [183] 0.9563088 0.9244983 0.9212072 0.8767882 0.9334592 1.1086907 1.1108787
## [190] 1.0953210 1.0781424 1.0788784 1.0264496 0.9863533 1.0349968 1.0174412
## [197] 1.0712107 1.0648357 1.1487944 1.0627101 1.0539529 1.1137690 1.1149914
## [204] 1.1105056 1.1880110 1.1438787 1.0200601 0.9983183 0.9908639 0.9793166
## [211] 0.9256982 0.8982777 0.9045600 0.8926992 0.8705639 0.8778934 0.8556641
## [218] 0.9032960 0.9165914 0.9604250 0.9433736 0.9359177 0.9189267 0.9155542
## [225] 1.0490517 1.0714482 1.0822718 1.0679103 1.0448206 1.0460477 1.0692644
## [232] 1.0866022 1.0456620 1.0582460 1.0594292 1.0689496 1.0706634 1.0897232
## [239] 1.0583833 1.0064484 0.9870895 0.9170389 0.9388370 0.9243519 0.9075307
## [246] 0.9288363 0.9466821 0.9774760 0.9815223 0.9951132 0.9976966 0.9950801
## [253] 1.0466217 1.0564263 1.0540984 1.0554638 0.9869073 0.9911322 0.9441099
## [260] 0.9916391 0.9595487 0.9345305 0.9400440 0.9631122 0.9707159 0.9855544
## [267] 1.0873267 1.0549427 1.0659185 1.0717475 1.1445239 1.1358795 1.1329452
## [274] 1.1462986 1.1019963 1.0755309 1.0621249 1.0074289 1.0128813 1.0283943
## [281] 1.0763561 1.0578709 1.0149416 0.9591997 0.9545786 0.9570674 1.0019519
## [288]        NA        NA        NA        NA        NA        NA        NA
## [295]        NA        NA        NA        NA        NA        NA        NA
## [302]        NA        NA        NA        NA        NA        NA        NA
## [309]        NA        NA        NA        NA        NA
## 
## $figure
##  [1] 1.0605844 1.0556788 1.0678737 1.0465822 1.0229462 0.9908473 0.9912536
##  [8] 0.9908574 0.9536115 0.9662777 0.9609398 0.9125960 0.8992297 0.9006345
## [15] 0.9075675 0.9475286 0.9476596 0.9434890 0.9272632 0.9248442 0.9517606
## [22] 0.9463000 0.9819517 1.0109603 1.0040919 1.0141367 1.0245016 1.0139872
## [29] 1.0020211 0.9928569 1.0104510 1.0298356 1.0068712 1.0178208 1.0473660
## [36] 1.0381523 1.0388990 1.0259739 1.0271345 1.0156457 1.0128781 1.0374554
## [43] 1.0372944 1.0081310 1.0414742 1.0323674 1.0203256 1.0170849 1.0470113
## [50] 1.0617908 1.0301982 1.0350058
## 
## $type
## [1] "multiplicative"
## 
## attr(,"class")
## [1] "decomposed.ts"

Stationarity

adf.test(ALFA_ST) 
## 
##  Augmented Dickey-Fuller Test
## 
## data:  ALFA_ST
## Dickey-Fuller = -2.2576, Lag order = 6, p-value = 0.4678
## alternative hypothesis: stationary
#H0: Non-stationary and HA: Stationary. p-values < 0.05 reject the H0. 

The p-value is 0.47 (> 0.05), so we fail to reject the null hypothesis: the series is non-stationary.
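A standard next step (not shown in the original, but consistent with the d = 1 chosen in the ARIMA specifications later) is to difference the series and re-test:

# First-difference the series and re-run the ADF test
adf.test(diff(ALFA_ST)) # differencing usually renders price series stationary
ndiffs(ALFA_ST)         # forecast::ndiffs estimates the required order of differencing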

Serial Autocorrelation

acf(ALFA_ST,main= "Significant Autocorrelation") 
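The partial autocorrelation function complements the ACF when identifying AR orders; a companion plot (not in the original) would be:

# Partial ACF to help identify the AR order
pacf(ALFA_ST, main = "Partial Autocorrelation")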

### d. Forecasting the Stock Price (Adj Close) for the First 2 Months of 2024 by Specifying:

  1. ARMA 1
ARMA1 <- arima(ALFA_ST, order = c(1,0,1)) 
print(ARMA1)
## 
## Call:
## arima(x = ALFA_ST, order = c(1, 0, 1))
## 
## Coefficients:
##          ar1     ma1  intercept
##       0.9895  0.0716    16.5752
## s.e.  0.0078  0.0594     3.0155
## 
## sigma^2 estimated as 0.4345:  log likelihood = -315.7,  aic = 639.4
Box.test(ARMA1$residuals,lag=1,type="Ljung-Box")      
## 
##  Box-Ljung test
## 
## data:  ARMA1$residuals
## X-squared = 0.015027, df = 1, p-value = 0.9024
# Box.test is to examine the hypothesis of independence of a regression residuals. 
# Ho: Regression Residuals are independently distributed. 
# Ha: Regression Residuals are not independently distributed, but exhibit serial correlation. 
  2. ARMA 2
ARMA2 <- arima(ALFA_ST, order = c(1,0,2)) 
print(ARMA2)
## 
## Call:
## arima(x = ALFA_ST, order = c(1, 0, 2))
## 
## Coefficients:
##          ar1     ma1      ma2  intercept
##       0.9904  0.0691  -0.0392    16.6008
## s.e.  0.0075  0.0573   0.0584     3.1068
## 
## sigma^2 estimated as 0.4339:  log likelihood = -315.47,  aic = 640.95
Box.test(ARMA2$residuals,lag=1,type="Ljung-Box")      # Box.test is to examine the hypothesis of independence of a regression residuals. 
## 
##  Box-Ljung test
## 
## data:  ARMA2$residuals
## X-squared = 0.0077541, df = 1, p-value = 0.9298
# Ho: Regression Residuals are independently distributed. 
# Ha: Regression Residuals are not independently distributed, but exhibit serial correlation. 
  3. ARIMA 1
ARIMA1 <- arima(ALFA_ST, order = c(1,1,2)) 
print(ARIMA1)
## 
## Call:
## arima(x = ALFA_ST, order = c(1, 1, 2))
## 
## Coefficients:
##           ar1     ma1     ma2
##       -0.9569  1.0267  0.0439
## s.e.   0.0342  0.0681  0.0608
## 
## sigma^2 estimated as 0.4335:  log likelihood = -312.41,  aic = 632.81
Box.test(ARIMA1$residuals,lag=1,type="Ljung-Box")      # Box.test is to examine the hypothesis of independence of a regression residuals. 
## 
##  Box-Ljung test
## 
## data:  ARIMA1$residuals
## X-squared = 0.0050454, df = 1, p-value = 0.9434
# Ho: Regression Residuals are independently distributed. 
# Ha: Regression Residuals are not independently distributed, but exhibit serial correlation. 
  4. ARIMA 2
ARIMA2 <- arima(ALFA_ST, order = c(1,1,1)) 
print(ARIMA2)
## 
## Call:
## arima(x = ALFA_ST, order = c(1, 1, 1))
## 
## Coefficients:
##           ar1     ma1
##       -0.9490  0.9796
## s.e.   0.0362  0.0233
## 
## sigma^2 estimated as 0.4342:  log likelihood = -312.67,  aic = 631.33
Box.test(ARIMA2$residuals,lag=1,type="Ljung-Box")      # Box.test is to examine the hypothesis of independence of a regression residuals. 
## 
##  Box-Ljung test
## 
## data:  ARIMA2$residuals
## X-squared = 0.36385, df = 1, p-value = 0.5464
# Ho: Regression Residuals are independently distributed. 
# Ha: Regression Residuals are not independently distributed, but exhibit serial correlation. 
  • The p-value is approximately 0.55, suggesting no significant autocorrelation in the residuals at the 5% significance level.

Comparing the four models presented:

  1. ARMA(1,0,1), with an AIC of approximately 639.40

  2. ARMA(1,0,2), with an AIC of approximately 640.95

  3. ARIMA(1,1,2), with an AIC of approximately 632.81

  4. ARIMA(1,1,1), with an AIC of approximately 631.33

Since a lower AIC indicates a better fit, the fourth model, ARIMA(1,1,1), appears to be the most suitable here, as its AIC is lower than that of all the other models.
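The comparison can also be collected programmatically (a small convenience not in the original code):

# Gather the AIC of each candidate model into one named vector
sapply(list(ARMA1 = ARMA1, ARMA2 = ARMA2, ARIMA1 = ARIMA1, ARIMA2 = ARIMA2), AIC)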

Forecast: plotting the series with the fitted values.

ts.plot(ALFA_ST)
ARIMA_fit <- ALFA_ST - residuals(ARIMA2) # in-sample fitted values of the ARIMA(1,1,1) model
points(ARIMA_fit, type = "l", col = 2, lty = 2)

# Use predict() to make a 1-step forecast
predict_ARIMA <- predict(ARIMA2)

# Extract the 1-step forecast with $pred[1]
predict_ARIMA$pred[1]
## [1] 11.42791
# Alternatively, use predict() to produce 1- through 8-step-ahead forecasts
predict(ARIMA2, n.ahead = 8)
## $pred
## Time Series:
## Start = c(2023, 2) 
## End = c(2023, 9) 
## Frequency = 52 
## [1] 11.42791 11.33946 11.42340 11.34374 11.41934 11.34759 11.41569 11.35106
## 
## $se
## Time Series:
## Start = c(2023, 2) 
## End = c(2023, 9) 
## Frequency = 52 
## [1] 0.6589267 0.9462197 1.1536335 1.3381685 1.4924662 1.6389296 1.7674706
## [8] 1.8924897
# The next 8 periods are estimated
# Plot the ALFA series plus the forecast and its 95% prediction interval
ts.plot(ALFA_ST, xlim = c(2017, 2023))
ARIMA_forecast <- predict(ARIMA2, n.ahead = 8)$pred
ARIMA_forecast_se <- predict(ARIMA2, n.ahead = 8)$se
points(ARIMA_forecast, type = "l", col = 2)
points(ARIMA_forecast - 2*ARIMA_forecast_se, type = "l", col = 2, lty = 2)
points(ARIMA_forecast + 2*ARIMA_forecast_se, type = "l", col = 2, lty = 2)

alfa_df <- as.data.frame(ALFA)
alfa_df <- dplyr::select(alfa_df, Date, Adj.Close)

nuevos <- data.frame(
  Date = timeDate::timeDate(c("20240105","20240112","20240119","20240126","20240202","20240209","20240216","20240223")) %>% ymd(),
  Adj.Close = c(11.42791,11.33946 ,11.42340,11.34374, 11.41934, 11.34759,11.41569,11.35106 )
)

With the code we used in class, the dates would not display in the proper format; for example, when I added 01-01-2024 to my alfa_df data frame, dates like 06-07-1975 appeared. For this reason, the code above was developed so that the data frame would pick up the correct dates.
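A simpler base R alternative that yields the same eight weekly dates (a sketch; the original uses timeDate):

# Equivalent construction: eight consecutive Fridays starting 2024-01-05
nuevos <- data.frame(
  Date = seq(as.Date("2024-01-05"), by = "week", length.out = 8),
  Adj.Close = as.numeric(ARIMA_forecast)
)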

# Append the new rows to the existing data frame
alfa_df <- rbind(alfa_df, nuevos)

ts_data_forecast <- ts(alfa_df$Adj.Close, start = c(2017), end = c(2024), frequency = 52) # Weekly time series data (frequency = 52)
plot(ts_data_forecast, type="l",col="blue", lwd=2, xlab ="Date",ylab ="Adj. Close", main = "Adj. Close") 

The plot above does not take the new observations into account, so new code had to be written so that the next plot can show how Adj. Close moves during the first 2 months of 2024.

# Convert alfa_df dates to Date format
alfa_df$Date <- as.Date(alfa_df$Date)

# Convert alfa_df to an xts object
alfa_xts <- xts(alfa_df$Adj.Close, order.by = alfa_df$Date)

# Plot the time series with xts
plot(alfa_xts, type = "l", col = "blue", lwd = 2, main = "Adj. Close", ylab = "Adj. Close")

forecast_dates <- seq(max(index(alfa_xts)) + 7, by = "week", length.out = 8)
forecast_xts <- xts(ARIMA_forecast, order.by = forecast_dates)
lines(forecast_xts, col = "red", lwd = 2)

autoplot(forecast(ARIMA2))

### f. Evaluation and Selection of the Forecasting Model

alfa_yahoo <- read.csv("C:/Users/HP/OneDrive - FEMSA Comercio/Escritorio/ALFAA.MX.csv")
ts_data_yahoo <- ts(alfa_yahoo$Adj.Close, start = c(2017), end = c(2024), frequency = 52) # Weekly time series data (frequency = 52)
plot(ts_data_yahoo, type="l",col="blue", lwd=2, xlab ="Date",ylab ="Adj. Close", main = "Adj. Close") 

# Plot the two time series in the same frame
plot(ts_data_forecast, type = "l", col = "blue", ylim = range(c(ts_data_forecast, ts_data_yahoo)), ylab = "Valores", xlab = "Tiempo")
lines(ts_data_yahoo, col = "red")

# Add a legend
legend("topleft", legend = c("ts_data_forecast", "ts_data_yahoo"), col = c("blue", "red"), lty = 1)

# This plots the two time series in the same frame with different colors and adds a legend to distinguish them.

With the actual data we obtained for the forecast period, we can see that the chosen ARIMA model is good enough: as the plot shows, the real values closely resemble the forecasts. Likewise, judging from the autoplot, ALFA's stock will most likely remain flat over the coming months of the year.
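A quantitative complement to this visual check (a sketch that assumes alfa_yahoo ends with the same eight weeks that were forecast):

# Compare the 8 forecast values against the corresponding actual observations
actual_2024 <- tail(alfa_yahoo$Adj.Close, 8)
forecast::accuracy(as.numeric(ARIMA_forecast), actual_2024)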