For this week’s discussion, I picked monthly stock price data for Rosetta Stone Inc. (RST). Source: https://finance.yahoo.com/quote/RST/history?period1=1437523200&period2=1595376000&interval=1wk&filter=history&frequency=1wk

#Reading our dataset
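#The chunk below is a minimal sketch of how the data could have been loaded; the CSV file name ("RST.csv") is an assumption, and the forecast and ggplot2 packages are needed for auto.arima(), ets(), forecast(), and autoplot() used later.

library(forecast)  # auto.arima(), ets(), forecast(), accuracy()
library(ggplot2)   # autoplot()

rose = read.csv("RST.csv", stringsAsFactors = FALSE)  # assumed file name for the Yahoo Finance download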

#Summary of our dataset

str(rose)
## 'data.frame':    60 obs. of  7 variables:
##  $ Date     : chr  "2015-08-01" "2015-09-01" "2015-10-01" "2015-11-01" ...
##  $ Open     : num  7.24 6.97 6.75 6.58 7.28 6.71 6.6 7.86 6.68 8 ...
##  $ High     : num  7.31 7.29 7.2 8.22 7.47 7.47 8.28 8.6 8.11 8.46 ...
##  $ Low      : num  6.4 6.65 6.31 6.58 6.58 6.17 6.6 6.51 6.68 7.26 ...
##  $ Close    : num  6.87 6.7 6.58 7.27 6.69 6.66 7.91 6.71 7.95 7.58 ...
##  $ Adj.Close: num  6.87 6.7 6.58 7.27 6.69 6.66 7.91 6.71 7.95 7.58 ...
##  $ Volume   : int  2002700 946200 1061200 1396200 1815500 2071100 1434100 1951800 1068400 1298500 ...
head(rose)
##         Date Open High  Low Close Adj.Close  Volume
## 1 2015-08-01 7.24 7.31 6.40  6.87      6.87 2002700
## 2 2015-09-01 6.97 7.29 6.65  6.70      6.70  946200
## 3 2015-10-01 6.75 7.20 6.31  6.58      6.58 1061200
## 4 2015-11-01 6.58 8.22 6.58  7.27      7.27 1396200
## 5 2015-12-01 7.28 7.47 6.58  6.69      6.69 1815500
## 6 2016-01-01 6.71 7.47 6.17  6.66      6.66 2071100
tail(rose)
##          Date  Open  High   Low Close Adj.Close  Volume
## 55 2020-02-01 17.24 20.70 16.29 17.27     17.27 2512600
## 56 2020-03-01 17.37 17.96  8.85 14.02     14.02 6480700
## 57 2020-04-01 13.38 18.04 12.55 17.08     17.08 3322600
## 58 2020-05-01 16.39 19.57 16.00 18.58     18.58 3768900
## 59 2020-06-01 18.71 19.10 15.87 16.86     16.86 3130800
## 60 2020-07-01 16.91 24.31 15.39 23.88     23.88 7757700

#Creating our time series

rose.ts = ts(rose$Adj.Close, frequency = 12, start = c(2015, 8))  # monthly series starting Aug 2015
autoplot(rose.ts) + xlab("Year") + ylab("Adjusted Closing Price ($)")

#No clear seasonality is visible in our dataset (a quick check is sketched below)
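#A quick way to back up the seasonality observation (an optional sketch, assuming the forecast package is loaded):

ggseasonplot(rose.ts) + ggtitle("Seasonal plot: RST adjusted close")  # overlays each year; a common monthly shape would indicate seasonality
nsdiffs(rose.ts)  # estimated number of seasonal differences required; 0 means no seasonal differencing is needed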

#Building ARIMA and ETS Models (without splitting the data)

M1.Arima = auto.arima(rose.ts)
summary(M1.Arima)  
## Series: rose.ts 
## ARIMA(0,1,0) 
## 
## sigma^2 estimated as 4.092:  log likelihood=-125.29
## AIC=252.57   AICc=252.64   BIC=254.65
## 
## Training set error measures:
##                     ME     RMSE      MAE      MPE     MAPE     MASE        ACF1
## Training set 0.2836145 2.006003 1.389614 1.289255 9.715281 0.344177 -0.06049492
#Calling summary(M1.Arima) is more informative than simply printing M1.Arima, because summary() reports not only the AIC/BIC but also error measures such as the MAPE.
M1.ETS = ets(rose.ts, model = "ZZZ")  # "ZZZ" lets ets() select the error, trend, and seasonal components automatically
summary(M1.ETS)
## ETS(M,N,N) 
## 
## Call:
##  ets(y = rose.ts, model = "ZZZ") 
## 
##   Smoothing parameters:
##     alpha = 0.7871 
## 
##   Initial states:
##     l = 6.6983 
## 
##   sigma:  0.1346
## 
##      AIC     AICc      BIC 
## 308.0187 308.4473 314.3017 
## 
## Training set error measures:
##                     ME     RMSE      MAE      MPE     MAPE      MASE       ACF1
## Training set 0.3334069 2.015439 1.402622 1.633332 9.644094 0.3473986 0.09239768
#Based on these results, the ARIMA model appears to fit slightly better than the ETS model: every training error measure except the MAPE is lower, and its AIC, AICc, and BIC are lower as well. (Note that information criteria are not strictly comparable between ARIMA and ETS models, so the training error measures are the fairer comparison here.)
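#As an optional check (a sketch, not part of the original write-up), the residuals of the selected ARIMA model can be inspected with checkresiduals() from the forecast package; residuals that look like white noise suggest the model has captured the available structure.

checkresiduals(M1.Arima)  # residual time plot, ACF, histogram, and a Ljung-Box test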

#Splitting our dataset into training and testing

train.ts = window(rose.ts, end=c(2020,6))   # training window: Aug 2015 through Jun 2020
test.ts = window(rose.ts, start=c(2015,8))  # full series, so the red "actual values" line spans the forecast plots; accuracy() only scores the overlap with the forecast horizon (Jul 2020)
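#A quick optional sketch to confirm what each window contains:

length(train.ts)                     # 59 monthly observations, Aug 2015 through Jun 2020
window(rose.ts, start = c(2020, 7))  # the single month not in the training window (Jul 2020, adjusted close 23.88)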

#Building new ARIMA and ETS models

M2.ARIMA = auto.arima(train.ts)
M2.ETS = ets(train.ts, model = "ZZZ")

#Forecast from the new ARIMA model

fcast.M2.ARIMA = forecast(M2.ARIMA,h=6)
plot(fcast.M2.ARIMA)
lines(test.ts, col="red")  
legend("topleft",lty=1,col=c("red","blue"),c("actual values","forecast"))

#Forecast from the new ETS model

fcast.M2.ETS = forecast(M2.ETS,h=6)
plot(fcast.M2.ETS)
lines(test.ts, col="red")  
legend("topleft",lty=1,col=c("red","blue"),c("actual values","forecast"))
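#The base plot() calls above do the job; as an optional alternative (a sketch, assuming a forecast package version with autolayer()), the same overlay can be drawn in ggplot2 style:

autoplot(fcast.M2.ETS) +
  autolayer(test.ts, series = "Actual values") +
  xlab("Year") + ylab("Adjusted Closing Price ($)")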

#Determining the best model

acc.ARIMA = accuracy(fcast.M2.ARIMA, test.ts)
acc.ETS = accuracy(fcast.M2.ETS, test.ts)
acc.ARIMA
##                     ME     RMSE      MAE        MPE      MAPE      MASE
## Training set 0.1694385 1.804714 1.294184  0.8128534  9.381692 0.3153601
## Test set     7.0199980 7.019998 7.019998 29.3969778 29.396978 1.7105969
##                      ACF1
## Training set -0.007029003
## Test set               NA
acc.ETS
##                     ME     RMSE      MAE       MPE     MAPE      MASE      ACF1
## Training set 0.2119084 1.823125 1.303141  1.095218  9.31249 0.3175427 0.1297034
## Test set     6.7827341 6.782734 6.782734 28.403410 28.40341 1.6527816        NA
#Based on our results, the ETS model appears slightly better than the ARIMA model: all of its test-set error measures are lower than the ARIMA model's. Keep in mind, though, that the scored test period contains only a single month (July 2020), so this out-of-sample comparison rests on one observation.
#From a visual standpoint, the two forecast plots look nearly identical, which suggests that even though the ETS model is numerically better than the ARIMA model, it is not better by a large margin.
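#For a compact side-by-side view of the out-of-sample error measures, the two test-set rows can be stacked into one table (an optional sketch):

rbind(ARIMA = acc.ARIMA["Test set", c("RMSE", "MAE", "MAPE", "MASE")],
      ETS = acc.ETS["Test set", c("RMSE", "MAE", "MAPE", "MASE")])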