Exercises 8.1, 8.5, 8.6, 8.7, 8.8, 8.9 in Hyndman
Consider the number of pigs slaughtered in Victoria, available in the aus_livestock dataset.
Use the ETS() function to estimate the equivalent model for simple exponential smoothing. Find the optimal values of α and ℓ0, and generate forecasts for the next four months.
vp <- aus_livestock %>%
filter(Animal == 'Pigs' & State == 'Victoria')
fit <- vp %>%
model(ETS(Count ~ error("A") + trend("N") + season("N")))
params <- tidy(fit)  # parameter estimates: alpha and the initial level l[0]
alpha <- params$estimate[params$term == "alpha"]
l0 <- params$estimate[params$term == "l[0]"]
forecast_values <- forecast(fit, h = 4)
print(paste("Optimal alpha:", round(alpha, 4)))
## [1] "Optimal alpha: 0.3221"
print(paste("Optimal l0:", round(l0, 1)))
## [1] "Optimal l0: 100646.6"
# Print the forecasted values
print(forecast_values)
## # A fable: 4 x 6 [1M]
## # Key: Animal, State, .model [1]
## Animal State .model Month Count .mean
## <fct> <fct> <chr> <mth> <dist> <dbl>
## 1 Pigs Victoria "ETS(Count ~ error(\"A\") +… 2019 Jan N(95187, 8.7e+07) 95187.
## 2 Pigs Victoria "ETS(Count ~ error(\"A\") +… 2019 Feb N(95187, 9.7e+07) 95187.
## 3 Pigs Victoria "ETS(Count ~ error(\"A\") +… 2019 Mar N(95187, 1.1e+08) 95187.
## 4 Pigs Victoria "ETS(Count ~ error(\"A\") +… 2019 Apr N(95187, 1.1e+08) 95187.
Compute a 95% prediction interval for the first forecast using ^y±1.96s where s is the standard deviation of the residuals. Compare your interval with the interval produced by R.
The hilo() function extracts the lower and upper bounds of a forecast's prediction intervals, which makes it easy to read off the interval R produces. The manual interval below, "76871.01 to 113502.1", is slightly narrower than R's "76854.79 to 113518.3": the manual calculation uses the sample standard deviation of the residuals with the rounded multiplier 1.96, whereas R uses the model's estimated forecast standard deviation, and the two variance estimates differ slightly.
# Calculate interval using hilo() function
intervals <- forecast_values %>% hilo()
intervals$`95%`[1]
## <hilo[1]>
## [1] [76854.79, 113518.3]95
# Get first predicted value
y_hat <- forecast_values$.mean[1]
# Get residuals from augment(fit)
residuals <- augment(fit)$.resid
# Calculate standard deviation of residuals
s <- sd(residuals)
# Manually calculate the interval
u_95 <- y_hat + (s * 1.96)
l_95 <- y_hat - (s * 1.96)
cat('Prediction interval for h=1 from model:', l_95, 'to', u_95)
## Prediction interval for h=1 from model: 76871.01 to 113502.1
Data set global_economy contains the annual Exports from many countries. Select one country to analyse.
Plot the Exports series and discuss the main features of the data.
global_economy
## # A tsibble: 15,150 x 9 [1Y]
## # Key: Country [263]
## Country Code Year GDP Growth CPI Imports Exports Population
## <fct> <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Afghanistan AFG 1960 537777811. NA NA 7.02 4.13 8996351
## 2 Afghanistan AFG 1961 548888896. NA NA 8.10 4.45 9166764
## 3 Afghanistan AFG 1962 546666678. NA NA 9.35 4.88 9345868
## 4 Afghanistan AFG 1963 751111191. NA NA 16.9 9.17 9533954
## 5 Afghanistan AFG 1964 800000044. NA NA 18.1 8.89 9731361
## 6 Afghanistan AFG 1965 1006666638. NA NA 21.4 11.3 9938414
## 7 Afghanistan AFG 1966 1399999967. NA NA 18.6 8.57 10152331
## 8 Afghanistan AFG 1967 1673333418. NA NA 14.2 6.77 10372630
## 9 Afghanistan AFG 1968 1373333367. NA NA 15.2 8.90 10604346
## 10 Afghanistan AFG 1969 1408888922. NA NA 15.0 10.1 10854428
## # ℹ 15,140 more rows
kor_exports <- global_economy %>%
filter(Code == 'KOR')
head(kor_exports)
## # A tsibble: 6 x 9 [1Y]
## # Key: Country [1]
## Country Code Year GDP Growth CPI Imports Exports Population
## <fct> <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Korea, Rep. KOR 1960 3957873926. NA 1.48 12.0 2.62 25012374
## 2 Korea, Rep. KOR 1961 2417237754. 6.88 1.60 14.0 4.04 25765673
## 3 Korea, Rep. KOR 1962 2813933900. 3.84 1.70 15.6 3.87 26513030
## 4 Korea, Rep. KOR 1963 3988246109. 9.19 2.06 14.7 3.95 27261747
## 5 Korea, Rep. KOR 1964 3458518494. 9.46 2.66 12.4 4.96 27984155
## 6 Korea, Rep. KOR 1965 3120307808. 7.18 3.02 14.3 7.13 28704674
kor_exports %>%
autoplot(Exports) +
labs(title = 'South Korea Annual Exports')
The exports series trends strongly upward over 1960–2017, rising from under 5% of GDP in the early 1960s to many times that by the end of the sample; because the data are annual, there is no seasonal pattern.
Use an ETS(A,N,N) model to forecast the series, and plot the forecasts.
fit <- kor_exports %>%
model(ANN = ETS(Exports ~ error('A') + trend('N') + season('N')))
KEF <- fit %>%
forecast(h = 4)
KEF %>% autoplot(kor_exports) +
labs(title = 'South Korea Annual Exports Forecast')
Compute the RMSE values for the training data.
accuracy_test <- accuracy(fit)
accuracy_test
## # A tibble: 1 × 11
## Country .model .type ME RMSE MAE MPE MAPE MASE RMSSE ACF1
## <fct> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Korea, Rep. ANN Training 0.698 3.41 2.42 3.94 9.31 0.983 0.991 0.0227
rmse_test <- accuracy_test$RMSE
rmse_test
## [1] 3.408323
Compare the results to those from an ETS(A,A,N) model. (Remember that the trended model is using one more parameter than the simpler model.) Discuss the merits of the two forecasting methods for this data set.
fit2 <- kor_exports %>%
model(AAN = ETS(Exports ~ error('A') + trend('A') + season('N')))
accuracy_test2 <- accuracy(fit2)
accuracy_test2
## # A tibble: 1 × 11
## Country .model .type ME RMSE MAE MPE MAPE MASE RMSSE ACF1
## <fct> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Korea, Rep. AAN Training 0.0474 3.34 2.35 1.03 9.38 0.956 0.971 0.0249
accuracy_test
## # A tibble: 1 × 11
## Country .model .type ME RMSE MAE MPE MAPE MASE RMSSE ACF1
## <fct> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Korea, Rep. ANN Training 0.698 3.41 2.42 3.94 9.31 0.983 0.991 0.0227
Comparing the two models, ETS(A,A,N) fits the training data slightly better than ETS(A,N,N): its RMSE, MAE, MASE, and RMSSE are all lower, although its MAPE is marginally higher (9.38 versus 9.31). Both models have similarly small ACF1 values, suggesting little autocorrelation remains in either set of residuals.
Compare the forecasts from both methods. Which do you think is best?
fit3 <- kor_exports %>%
model(AAN = ETS(Exports ~ error('A') + trend('A') + season('N')),
ANN = ETS(Exports ~ error('A') + trend('N') + season('N'))
)
fit3 %>%
forecast(h = 4) %>%
autoplot(kor_exports, level = NULL) +
labs(title = 'South Korea Annual Exports ANN Vs AAN Forecast')
As noted in part (d), the AAN model's RMSE, MASE, and RMSSE are slightly lower than the ANN model's, so by those measures ETS(A,A,N) is slightly better for the Korean export data. The plot reinforces this: the ANN forecasts are flat, while the AAN forecasts continue the clear upward trend of the series, which makes the trended model the more plausible choice.
Calculate a 95% prediction interval for the first forecast for each model, using the RMSE values and assuming normal errors. Compare your intervals with those produced using R.
qnorm(0.975) ≈ 1.96 is the z value for a 95% interval under the standard normal distribution, so each half-width below is 1.96 × RMSE: 6.545 for AAN and 6.680 for ANN. The AAN half-width is slightly smaller, reflecting that model's lower residual variance on the training data; a narrower interval is only preferable insofar as it remains well calibrated. The full intervals are the point forecasts ± these half-widths, compared against R's intervals in the sketch after the code below.
rmse2 <- accuracy(fit2)$RMSE
prediction_intervals2 <- qnorm(0.975) * rmse2
prediction_intervals2
## [1] 6.545052
rmse <- accuracy(fit)$RMSE
prediction_intervals <- qnorm(0.975) * rmse
prediction_intervals
## [1] 6.680189
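To finish the comparison, the half-widths can be combined with each model's first point forecast and set against the intervals R produces via hilo(). A minimal sketch (fc_ann and fc_aan are new names; assuming fit and fit2 from above; output not shown):
# One-step-ahead forecasts from each model
fc_ann <- fit %>% forecast(h = 1)   # ETS(A,N,N)
fc_aan <- fit2 %>% forecast(h = 1)  # ETS(A,A,N)
# Manual 95% intervals: point forecast +/- 1.96 * training RMSE
c(fc_ann$.mean[1] - prediction_intervals, fc_ann$.mean[1] + prediction_intervals)
c(fc_aan$.mean[1] - prediction_intervals2, fc_aan$.mean[1] + prediction_intervals2)
# Intervals produced by R, for comparison
fc_ann %>% hilo(level = 95)
fc_aan %>% hilo(level = 95)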
Forecast the Chinese GDP from the global_economy data set using an ETS model. Experiment with the various options in the ETS() function to see how much the forecasts change with damped trend, or with a Box-Cox transformation. Try to develop an intuition of what each is doing to the forecasts.
[Hint: use a relatively large value of h when forecasting, so you can clearly see the differences between the various options when plotting the forecasts.]
Here I switch to the forecast package's ets() function. A damped trend requires a trend component, so damped = TRUE cannot be combined with model = "ANN"; the comparisons below therefore cover an ANN fit, an AAN fit, and an AAN fit to the Box-Cox transformed series, with a damped AAN fit sketched after the discussion.
CGDP <- global_economy %>%
filter(Country == 'China')
CGDP %>%
autoplot(GDP) +
labs(title = 'China GDP')
hes1 <- ets(CGDP$GDP, model = "ANN")
summary(hes1)
## ETS(A,N,N)
##
## Call:
## ets(y = CGDP$GDP, model = "ANN")
##
## Smoothing parameters:
## alpha = 0.9999
##
## Initial states:
## l = 67537060017.8105
##
## sigma: 4.23085e+11
##
## AIC AICc BIC
## 3344.888 3345.332 3351.069
##
## Training set error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set 209849588304 415726481306 2.13215e+11 8.140109 10.99508 0.983468
## ACF1
## Training set 0.7893944
hes2 <- forecast(hes1, h=12)
plot(hes2, col = "black", lwd=2, flty=3, shadecols = c("gray", "mistyrose"))
hes1 <- ets(CGDP$GDP, model = "AAN")
summary(hes1)
## ETS(A,A,N)
##
## Call:
## ets(y = CGDP$GDP, model = "AAN")
##
## Smoothing parameters:
## alpha = 0.9999
## beta = 0.5519
##
## Initial states:
## l = 50284778074.482
## b = 3288256684.0531
##
## sigma: 196901248160
##
## AIC AICc BIC
## 3258.053 3259.207 3268.356
##
## Training set error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set 23629943600 189990266650 95916434827 1.405428 7.617734 0.4424207
## ACF1
## Training set 0.009046233
hes2 <- forecast(hes1, h=12)
plot(hes2, col = "black", lwd=2, flty=3, shadecols = c("gray", "mistyrose"))
lambda <- BoxCox.lambda(CGDP$GDP)  # automatically chosen Box-Cox parameter
hes3 <- BoxCox(CGDP$GDP, lambda)   # transformed series
hes3 <- ets(hes3, model = "AAN")   # note: hes3 is reused for the fitted model
summary(hes3)
## ETS(A,A,N)
##
## Call:
## ets(y = hes3, model = "AAN")
##
## Smoothing parameters:
## alpha = 0.9999
## beta = 0.078
##
## Initial states:
## l = 16.6569
## b = 0.0223
##
## sigma: 0.0378
##
## AIC AICc BIC
## -138.5628 -137.4089 -128.2606
##
## Training set error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set 0.004632284 0.03648204 0.02872569 0.02624792 0.1653141 0.6260019
## ACF1
## Training set 0.1775163
hes4 <- forecast(hes3, h = 12)
plot(hes4, col = "black", lwd=2, flty=3, shadecols = c("gray", "mistyrose"))
Initial states: the model on the raw series starts at ℓ ≈ 5.03 × 10^10, while the model on the Box-Cox transformed series starts at ℓ ≈ 16.66; the much smaller value simply reflects the change of scale introduced by the transformation.
Sigma, AIC, AICc, BIC: all three information criteria are far lower after the Box-Cox transformation, but because the likelihood is computed on the transformed scale, AIC, AICc, and BIC cannot be compared directly between transformed and untransformed models.
Training set error measures: these are likewise on different scales (dollars versus transformed units), so the transformed model's much smaller errors do not by themselves demonstrate better forecasts.
Overall, each option reshapes the long-horizon forecasts: ETS(A,N,N) projects a flat line, ETS(A,A,N) extrapolates the recent steep trend linearly, and the Box-Cox model's forecasts, once back-transformed, grow faster than linearly on the original scale.
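The exercise also mentions a damped trend. Damping is compatible with a trended specification, so a minimal sketch of a damped fit (hes_damped is a new name; assuming the same CGDP data; output not shown):
# ETS(A,Ad,N): the damping parameter phi < 1 flattens the trend as the
# horizon grows, so long-run forecasts level off instead of rising linearly
hes_damped <- ets(CGDP$GDP, model = "AAN", damped = TRUE)
summary(hes_damped)
plot(forecast(hes_damped, h = 12))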
Find an ETS model for the Gas data from aus_production and forecast the next few years. Why is multiplicative seasonality necessary here? Experiment with making the trend damped. Does it improve the forecasts?
The automatic ets() fit below selects ETS(M,A,N), a trended model with no seasonal component. This happens because aus_production$Gas is passed to ets() as a bare numeric vector, so the quarterly frequency, and with it the seasonality, is invisible to the function; a genuinely seasonal fit is sketched at the end of this part.
Multiplicative seasonality is necessary here because the seasonal fluctuations grow in proportion to the level of the series: the quarterly swings in later decades are far larger in absolute terms than in the 1950s, so a multiplicative ('M') seasonal term fits where an additive term of fixed size cannot.
aus_production
## # A tsibble: 218 x 7 [1Q]
## Quarter Beer Tobacco Bricks Cement Electricity Gas
## <qtr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1956 Q1 284 5225 189 465 3923 5
## 2 1956 Q2 213 5178 204 532 4436 6
## 3 1956 Q3 227 5297 208 561 4806 7
## 4 1956 Q4 308 5681 197 570 4418 6
## 5 1957 Q1 262 5577 187 529 4339 5
## 6 1957 Q2 228 5651 214 604 4811 7
## 7 1957 Q3 236 5317 227 603 5259 7
## 8 1957 Q4 320 6152 222 582 4735 6
## 9 1958 Q1 272 5758 199 554 4608 5
## 10 1958 Q2 233 5641 229 620 5196 7
## # ℹ 208 more rows
aus_production %>%
autoplot(Gas) +
labs(title = 'Gas Production')
gas <- ets(aus_production$Gas)
summary(gas)
## ETS(M,A,N)
##
## Call:
## ets(y = aus_production$Gas)
##
## Smoothing parameters:
## alpha = 0.1111
## beta = 0.1019
##
## Initial states:
## l = 5.7619
## b = 0.0725
##
## sigma: 0.1631
##
## AIC AICc BIC
## 2137.521 2137.804 2154.443
##
## Training set error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set -0.07874544 16.35735 11.92955 -1.321798 13.49607 0.8199909
## ACF1
## Training set 0.08876701
plot(forecast(gas, h = 12))
lines(fitted(gas), type="o", col="indianred")
gas_damped <- ets(aus_production$Gas, damped = TRUE)
summary(gas_damped)
## ETS(M,Ad,N)
##
## Call:
## ets(y = aus_production$Gas, damped = TRUE)
##
## Smoothing parameters:
## alpha = 0.1064
## beta = 0.1064
## phi = 0.967
##
## Initial states:
## l = 5.7348
## b = 0.1328
##
## sigma: 0.1639
##
## AIC AICc BIC
## 2138.989 2139.387 2159.296
##
## Training set error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set 0.2482358 16.21027 11.88644 -0.9168213 13.43345 0.8170283
## ACF1
## Training set 0.07464713
plot(forecast(gas_damped, h = 12))
lines(fitted(gas_damped) , type="o", col="indianred")
There is little to choose between ETS(M,Ad,N) and ETS(M,A,N): the damped model has a slightly higher AIC and a slightly lower RMSE than the undamped model, so damping does not clearly improve the forecasts here, and in practice both models give very similar short-horizon predictions.
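Since neither fit above includes a seasonal component, here is a minimal sketch of a seasonal fit, assuming the quarterly series starts in 1956 Q1 (gas_ts and gas_seasonal are new names; output not shown):
# Restore the quarterly frequency so ets() can consider seasonal models
gas_ts <- ts(aus_production$Gas, start = c(1956, 1), frequency = 4)
gas_seasonal <- ets(gas_ts)  # automatic selection can now choose a seasonal model
summary(gas_seasonal)
plot(forecast(gas_seasonal, h = 12))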
Recall your retail time series data (from Exercise 7 in Section 2.10).
set.seed(12345678)
myseries <- aus_retail |>
filter(`Series ID` == sample(aus_retail$`Series ID`,1))
myseries %>% autoplot()
## Plot variable not specified, automatically selected `.vars = Turnover`
Why is multiplicative seasonality necessary for this series?
Multiplicative seasonality is necessary because the seasonal fluctuations in Turnover grow roughly in proportion to the level of the series: the seasonal peaks late in the sample are much larger in absolute terms than those of the late 1980s, so an additive seasonal term of constant size would systematically misfit. An additive fit is sketched after the Holt-Winters' code below for contrast.
Apply Holt-Winters’ multiplicative method to the data. Experiment with making the trend damped.
months <- myseries$Month
turnover <- myseries$Turnover
new_df <- data.frame(Month = months, Turnover = turnover)
myseries_ts <- ts(myseries$Turnover, start = c(1988, 4), frequency = 12)
fit1 <- hw(myseries_ts, seasonal="multiplicative", h=12)
plot(fit1)
fit2 <- hw(myseries_ts,seasonal="multiplicative", damped=TRUE, h=5)
plot(fit2)
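For contrast with the multiplicative fits above, a minimal sketch of an additive seasonal fit (fit_add is a new name; output not shown). Its seasonal swings have constant width, so it cannot track the growing seasonal amplitude in the later years:
# Additive seasonality assumes constant-size seasonal swings,
# which this series visibly violates
fit_add <- hw(myseries_ts, seasonal = "additive", h = 12)
plot(fit_add)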
Compare the RMSE of the one-step forecasts from the two methods. Which do you prefer?
summary(fit1)
##
## Forecast method: Holt-Winters' multiplicative method
##
## Model Information:
## Holt-Winters' multiplicative method
##
## Call:
## hw(y = myseries_ts, h = 12, seasonal = "multiplicative")
##
## Smoothing parameters:
## alpha = 0.5497
## beta = 1e-04
## gamma = 0.2017
##
## Initial states:
## l = 2.3528
## b = 0.0334
## s = 0.8126 0.7983 0.7749 1.3226 0.992 1.0592
## 1.004 1.1374 1.2081 1.0221 0.9978 0.8707
##
## sigma: 0.0697
##
## AIC AICc BIC
## 1803.503 1805.246 1869.986
##
## Error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set -0.001192482 0.5995194 0.4434511 -0.2651947 5.210359 0.5062745
## ACF1
## Training set -0.04221779
##
## Forecasts:
## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
## Jan 2019 11.61263 10.575842 12.64942 10.026998 13.19827
## Feb 2019 10.41539 9.354232 11.47656 8.792487 12.03830
## Mar 2019 11.55862 10.251803 12.86543 9.560018 13.55721
## Apr 2019 11.85447 10.394026 13.31491 9.620915 14.08802
## May 2019 13.49349 11.705119 15.28187 10.758410 16.22858
## Jun 2019 14.11504 12.121509 16.10857 11.066197 17.16388
## Jul 2019 16.17241 13.756178 18.58865 12.477101 19.86772
## Aug 2019 16.02063 13.503318 18.53794 12.170734 19.87052
## Sep 2019 14.79555 12.362103 17.22900 11.073914 18.51718
## Oct 2019 14.78261 12.247681 17.31754 10.905770 18.65945
## Nov 2019 14.80620 12.167794 17.44460 10.771109 18.84128
## Dec 2019 21.07043 17.179784 24.96108 15.120199 27.02066
summary(fit2)
##
## Forecast method: Damped Holt-Winters' multiplicative method
##
## Model Information:
## Damped Holt-Winters' multiplicative method
##
## Call:
## hw(y = myseries_ts, h = 5, seasonal = "multiplicative", damped = TRUE)
##
## Smoothing parameters:
## alpha = 0.5562
## beta = 1e-04
## gamma = 0.2073
## phi = 0.8441
##
## Initial states:
## l = 2.6391
## b = -0.0516
## s = 0.8028 0.8056 0.8196 1.3074 0.9719 1.0633
## 1.0268 1.0957 1.2165 1.0508 0.9988 0.8408
##
## sigma: 0.0707
##
## AIC AICc BIC
## 1809.599 1811.553 1879.993
##
## Error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set 0.05823794 0.6024724 0.4486334 0.4987534 5.230405 0.5121909
## ACF1
## Training set -0.04902864
##
## Forecasts:
## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
## Jan 2019 11.55410 10.507376 12.60083 9.953272 13.15494
## Feb 2019 10.32985 9.258337 11.40136 8.691112 11.96859
## Mar 2019 11.43668 10.116881 12.75648 9.418220 13.45514
## Apr 2019 11.71500 10.238747 13.19125 9.457267 13.97273
## May 2019 13.30040 11.493986 15.10682 10.537728 16.06307
The model with the lower one-step (training) RMSE fits the data more closely: fit1, the undamped method, has RMSE 0.5995 against 0.6025 for the damped fit2. The difference is small, but on this basis I prefer fit1.
Check that the residuals from the best method look like white noise.
The residual diagnostics for fit1 are consistent with white noise: the Ljung-Box test below gives p = 0.7657, so there is no evidence of remaining autocorrelation; the ACF spikes stay within the significance bounds, and the histogram of the residuals looks roughly normal.
checkresiduals(fit1)
##
## Ljung-Box test
##
## data: Residuals from Holt-Winters' multiplicative method
## Q* = 18.741, df = 24, p-value = 0.7657
##
## Model df: 0. Total lags used: 24
Now find the test set RMSE, while training the model to the end of 2010. Can you beat the seasonal naïve approach from Exercise 7 in Section 5.11?
# NB: this samples 70% of the rows at random, which does not preserve the
# chronological order required for a time series train/test split
samp_dt <- sample(1:nrow(new_df), nrow(new_df) * 0.7, replace = FALSE)
myseries_train <- new_df[samp_dt, ]
myseries_test <- new_df[-samp_dt, ]
# train set
myseries_train %>%
filter(year(Month) < 2011) %>%
head(10)
## Month Turnover
## 1 1998 Aug 6.8
## 2 1997 Jul 6.5
## 3 1992 Dec 7.6
## 4 1989 Sep 3.8
## 5 1995 Nov 9.2
## 6 1993 Jul 7.0
## 7 1995 Dec 12.9
## 8 2010 Jul 16.1
## 9 2001 Sep 9.5
## 10 2004 May 9.0
# test set
myseries_test %>%
filter(year(Month) < 2011) %>%
head(10)
## Month Turnover
## 1 1988 Oct 3.1
## 2 1988 Nov 3.0
## 3 1989 Jan 2.7
## 4 1989 Apr 2.5
## 5 1989 May 2.8
## 6 1989 Nov 3.5
## 7 1989 Dec 5.3
## 8 1990 Sep 5.5
## 9 1991 May 5.5
## 10 1991 Jul 7.0
myseries_train_ts <- ts(myseries_train$Turnover, start = c(1988, 4), frequency = 12)
myseries_test_ts <- ts(myseries_test$Turnover, start = c(1988, 4), frequency = 12)
fit <- hw(myseries_train_ts, seasonal = "multiplicative", h = 36)
fit2 <- snaive(myseries_train_ts, h = 36)
autoplot(myseries_train_ts) +
autolayer(fit, series = "HW", PI = FALSE) +
autolayer(fit2, series = "SNAIVE", PI = FALSE) +
xlab("Year") +
ylab("Turnover") +
labs(title = "Train Data") +
theme(legend.position = "top")
fit <- hw(myseries_test_ts, seasonal = "multiplicative", h = 36)
fit2 <- snaive(myseries_test_ts, h = 36)
forecast_hw <- forecast(fit, h = 36)
forecast_snaive <- forecast(fit2, h = 36)
autoplot(myseries_test_ts) +
autolayer(forecast_hw, series = "HW Forecast", PI = TRUE) +
autolayer(forecast_snaive, series = "SNAIVE Forecast", PI = TRUE) +
xlab("Year") +
ylab("Turnover") +
labs(title = "Test Data") +
theme(legend.position = "top")
forecast_hw %>% accuracy()
## ME RMSE MAE MPE MAPE MASE ACF1
## Training set -0.1436105 1.787079 1.309919 -5.032193 15.15427 0.5962389 0.033213
forecast_snaive %>% accuracy()
## ME RMSE MAE MPE MAPE MASE ACF1
## Training set 1.19899 2.951374 2.19697 10.27658 22.42436 1 0.1618902
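Two caveats on the comparison above: the random split scrambles the chronological order before the series are re-dated with ts(), and the final accuracy() calls report in-sample errors from models refit to the test series rather than true out-of-sample errors. A minimal sketch of the chronological evaluation the exercise intends (train_ts, test_ts, fit_hw, and fit_sn are new names; assuming myseries_ts from above):
# Chronological split: train to the end of 2010, test on 2011 onward
train_ts <- window(myseries_ts, end = c(2010, 12))
test_ts <- window(myseries_ts, start = c(2011, 1))
fit_hw <- hw(train_ts, seasonal = "multiplicative", h = length(test_ts))
fit_sn <- snaive(train_ts, h = length(test_ts))
# Genuine test-set RMSE for each method
accuracy(fit_hw, test_ts)["Test set", "RMSE"]
accuracy(fit_sn, test_ts)["Test set", "RMSE"]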
On these error measures the HW model beats the SNAIVE model across the board: lower RMSE, MAE, MAPE, and MASE, and a smaller ACF1. Bear in mind, though, that these are training-set errors; the chronological hold-out sketched above is the fairer comparison.
For the same retail data, try an STL decomposition applied to the Box-Cox transformed series, followed by ETS on the seasonally adjusted data. How does that compare with your best previous forecasts on the test set?
lambda <- BoxCox.lambda(myseries$Turnover)
myseries_transformed <- mutate(myseries, Turnover_BoxCox = BoxCox(myseries$Turnover, lambda))
fit5 <- myseries_transformed %>%
model(
'STL Box-Cox' = STL(Turnover_BoxCox ~ season(window = 'periodic'), robust = TRUE),
'ETS Box-Cox' = ETS(Turnover_BoxCox)
)
fit5 %>% accuracy()
## # A tibble: 2 × 12
## State Industry .model .type ME RMSE MAE MPE MAPE MASE RMSSE
## <chr> <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Norther… Clothin… STL B… Trai… 1.80e-4 0.0307 0.0211 -0.0863 1.61 0.384 0.380
## 2 Norther… Clothin… ETS B… Trai… 3.83e-3 0.0335 0.0252 0.277 1.88 0.458 0.416
## # ℹ 1 more variable: ACF1 <dbl>
forecast_hw %>% accuracy()
## ME RMSE MAE MPE MAPE MASE ACF1
## Training set -0.1436105 1.787079 1.309919 -5.032193 15.15427 0.5962389 0.033213
The RMSE of the STL fit is 0.0307 and of the ETS fit 0.0335, against 1.787079 for the Holt-Winters' method. However, the first two are computed on the Box-Cox transformed scale while the Holt-Winters' RMSE is on the original scale, so they are not directly comparable; the transformed-scale forecasts would need to be back-transformed to the original units before a fair comparison with the earlier test-set results.
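Note also that the exercise asks for ETS fitted to the seasonally adjusted series from the STL decomposition rather than to the whole transformed series. In fable that combination can be expressed with decomposition_model(); a minimal sketch (fit_dcmp is a new name; assuming myseries_transformed from above):
# STL decomposes the Box-Cox series; ETS models the seasonally adjusted
# component; the seasonal component is re-added automatically when forecasting
# (unmodelled seasonal components default to SNAIVE)
fit_dcmp <- myseries_transformed %>%
  model(
    stl_ets = decomposition_model(
      STL(Turnover_BoxCox ~ season(window = 'periodic'), robust = TRUE),
      ETS(season_adjust ~ season('N'))
    )
  )
fit_dcmp %>% forecast(h = 36) %>% autoplot(myseries_transformed)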