require(fpp2)

7.1. Consider the pigs series — the number of pigs slaughtered in Victoria each month.

  a. Use the ses() function in R to find the optimal values of α and ℓ0, and generate forecasts for the next four months.
pigsfc <- ses(pigs, h = 4)
pigsfc$model
## Simple exponential smoothing 
## 
## Call:
##  ses(y = pigs, h = 4) 
## 
##   Smoothing parameters:
##     alpha = 0.2971 
## 
##   Initial states:
##     l = 77260.0561 
## 
##   sigma:  10308.58
## 
##      AIC     AICc      BIC 
## 4462.955 4463.086 4472.665
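
The four point forecasts themselves can be printed straight from the forecast object (forecast objects store them in the mean component):

pigsfc$mean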
  b. Compute a 95% prediction interval for the first forecast using ŷ ± 1.96s, where s is the standard deviation of the residuals. Compare your interval with the interval produced by R.
s <- sd(residuals(pigsfc))
pigs95 <- c(Lower = pigsfc$mean[1] - 1.96*s, Upper = pigsfc$mean[1] + 1.96*s)
pigs95
##     Lower     Upper 
##  78679.97 118952.84
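
For comparison, R’s own 95% bounds are stored in the lower and upper components of the forecast object. They come out slightly wider than the hand-computed interval, because R uses the model’s estimated sigma rather than the raw residual standard deviation:

c(Lower = pigsfc$lower[1, "95%"], Upper = pigsfc$upper[1, "95%"])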

7.5. Data set books contains the daily sales of paperback and hardcover books at the same store. The task is to forecast the next four days’ sales for paperback and hardcover books.

  a. Plot the series and discuss the main features of the data.
autoplot(books)

The data cover 30 days of daily sales for both Paperback and Hardcover books. Both series trend upward, with considerable day-to-day fluctuation around the trend.

  b. Use the ses() function to forecast each series, and plot the forecasts.
bookspaper <- ses(books[,1])
bookshardcover <- ses(books[,2])
summary(bookspaper)
## 
## Forecast method: Simple exponential smoothing
## 
## Model Information:
## Simple exponential smoothing 
## 
## Call:
##  ses(y = books[, 1]) 
## 
##   Smoothing parameters:
##     alpha = 0.1685 
## 
##   Initial states:
##     l = 170.8271 
## 
##   sigma:  34.8183
## 
##      AIC     AICc      BIC 
## 318.9747 319.8978 323.1783 
## 
## Error measures:
##                    ME     RMSE     MAE       MPE     MAPE      MASE       ACF1
## Training set 7.175981 33.63769 27.8431 0.4736071 15.57784 0.7021303 -0.2117522
## 
## Forecasts:
##    Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
## 31       207.1097 162.4882 251.7311 138.8670 275.3523
## 32       207.1097 161.8589 252.3604 137.9046 276.3147
## 33       207.1097 161.2382 252.9811 136.9554 277.2639
## 34       207.1097 160.6259 253.5935 136.0188 278.2005
## 35       207.1097 160.0215 254.1979 135.0945 279.1249
## 36       207.1097 159.4247 254.7946 134.1818 280.0375
## 37       207.1097 158.8353 255.3840 133.2804 280.9389
## 38       207.1097 158.2531 255.9663 132.3899 281.8294
## 39       207.1097 157.6777 256.5417 131.5099 282.7094
## 40       207.1097 157.1089 257.1105 130.6400 283.5793
autoplot(bookspaper)

summary(bookshardcover)
## 
## Forecast method: Simple exponential smoothing
## 
## Model Information:
## Simple exponential smoothing 
## 
## Call:
##  ses(y = books[, 2]) 
## 
##   Smoothing parameters:
##     alpha = 0.3283 
## 
##   Initial states:
##     l = 149.2861 
## 
##   sigma:  33.0517
## 
##      AIC     AICc      BIC 
## 315.8506 316.7737 320.0542 
## 
## Error measures:
##                    ME     RMSE      MAE      MPE     MAPE      MASE       ACF1
## Training set 9.166735 31.93101 26.77319 2.636189 13.39487 0.7987887 -0.1417763
## 
## Forecasts:
##    Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
## 31       239.5601 197.2026 281.9176 174.7799 304.3403
## 32       239.5601 194.9788 284.1414 171.3788 307.7414
## 33       239.5601 192.8607 286.2595 168.1396 310.9806
## 34       239.5601 190.8347 288.2855 165.0410 314.0792
## 35       239.5601 188.8895 290.2306 162.0662 317.0540
## 36       239.5601 187.0164 292.1038 159.2014 319.9188
## 37       239.5601 185.2077 293.9124 156.4353 322.6848
## 38       239.5601 183.4574 295.6628 153.7584 325.3618
## 39       239.5601 181.7600 297.3602 151.1625 327.9577
## 40       239.5601 180.1111 299.0091 148.6406 330.4795
autoplot(bookshardcover)

  c. Compute the RMSE values for the training data in each case.
accuracy(bookspaper)
##                    ME     RMSE     MAE       MPE     MAPE      MASE       ACF1
## Training set 7.175981 33.63769 27.8431 0.4736071 15.57784 0.7021303 -0.2117522
accuracy(bookshardcover)
##                    ME     RMSE      MAE      MPE     MAPE      MASE       ACF1
## Training set 9.166735 31.93101 26.77319 2.636189 13.39487 0.7987887 -0.1417763

7.6. We will continue with the daily sales of paperback and hardcover books in data set books.

  a. Apply Holt’s linear method to the paperback and hardback series and compute four-day forecasts in each case.
holtpaper <- holt(books[,1], h = 4)
holtpaper
##    Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
## 31       209.4668 166.6035 252.3301 143.9130 275.0205
## 32       210.7177 167.8544 253.5811 145.1640 276.2715
## 33       211.9687 169.1054 254.8320 146.4149 277.5225
## 34       213.2197 170.3564 256.0830 147.6659 278.7735
autoplot(holtpaper)

holthardcover <- holt(books[,2], h = 4)
holthardcover
##    Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
## 31       250.1739 212.7390 287.6087 192.9222 307.4256
## 32       253.4765 216.0416 290.9113 196.2248 310.7282
## 33       256.7791 219.3442 294.2140 199.5274 314.0308
## 34       260.0817 222.6468 297.5166 202.8300 317.3334
autoplot(holthardcover)

  b. Compare the RMSE measures of Holt’s method for the two series to those of simple exponential smoothing in the previous question. (Remember that Holt’s method is using one more parameter than SES.) Discuss the merits of the two forecasting methods for these data sets.

Holt’s method models a linear trend in addition to the level, while SES models the level only, at the cost of one extra smoothing parameter.

SES

accuracy(bookspaper)
##                    ME     RMSE     MAE       MPE     MAPE      MASE       ACF1
## Training set 7.175981 33.63769 27.8431 0.4736071 15.57784 0.7021303 -0.2117522
accuracy(bookshardcover)
##                    ME     RMSE      MAE      MPE     MAPE      MASE       ACF1
## Training set 9.166735 31.93101 26.77319 2.636189 13.39487 0.7987887 -0.1417763

HOLT

accuracy(holtpaper)
##                     ME     RMSE      MAE       MPE     MAPE      MASE
## Training set -3.717178 31.13692 26.18083 -5.508526 15.58354 0.6602122
##                    ACF1
## Training set -0.1750792
accuracy(holthardcover)
##                      ME     RMSE      MAE       MPE    MAPE      MASE
## Training set -0.1357882 27.19358 23.15557 -2.114792 12.1626 0.6908555
##                     ACF1
## Training set -0.03245186

Holt’s method gives a lower training RMSE for both series (31.14 vs 33.64 for paperback; 27.19 vs 31.93 for hardcover), which is consistent with the clear upward trend in the data.

  c. Compare the forecasts for the two series using both methods. Which do you think is best?
fc <- data.frame(Paperback=c(bookspaper$mean[1], holtpaper$mean[1]),
                 Hardcover=c(bookshardcover$mean[1], holthardcover$mean[1]),
                 row.names = c("SES", "HOLT"))
fc
##      Paperback Hardcover
## SES   207.1097  239.5601
## HOLT  209.4668  250.1739

Holt’s forecasts (209.47 for paperback, 250.17 for hardcover) look more plausible: they continue the upward trend, whereas the SES forecasts are flat.
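
To see the difference, both sets of point forecasts can be overlaid on the original series with autolayer (a sketch, shown here for the paperback series; PI = FALSE suppresses the interval shading):

autoplot(books[, 1]) +
  autolayer(bookspaper, series = "SES", PI = FALSE) +
  autolayer(holtpaper, series = "Holt", PI = FALSE)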

  d. Calculate a 95% prediction interval for the first forecast for each series, using the RMSE values and assuming normal errors. Compare your intervals with those produced using ses and holt.

SES

cat("Paperback ses:", bookspaper$mean[1] - 1.96 * accuracy(bookspaper)[2], "to", bookspaper$mean[1] + 1.96 * accuracy(bookspaper)[2])
## Paperback ses: 141.1798 to 273.0395
cat("Hardcover ses:", bookshardcover$mean[1] - 1.96 * accuracy(bookshardcover)[2], "to", bookshardcover$mean[1] + 1.96 * accuracy(bookshardcover)[2])
## Hardcover ses: 176.9753 to 302.1449

HOLT

cat("Paperback Holt:", holtpaper$mean[1] - 1.96 * accuracy(holtpaper)[2], "to", holtpaper$mean[1] + 1.96 * accuracy(holtpaper)[2])
## Paperback Holt: 148.4384 to 270.4951
cat("Hardcover Holt:", holthardcover$mean[1] - 1.96 * accuracy(holthardcover)[2], "to", holthardcover$mean[1] + 1.96 * accuracy(holthardcover)[2])
## Hardcover Holt: 196.8745 to 303.4733

Compared with R’s own 95% intervals for the first forecast (SES paperback: 138.87 to 275.35; Holt paperback: 143.91 to 275.02; SES hardcover: 174.78 to 304.34; Holt hardcover: 192.92 to 307.43), the hand-computed intervals are slightly narrower in every case, because R uses the model’s estimated sigma rather than the training RMSE. For both series the Holt intervals are tighter than the SES ones, consistent with Holt’s lower RMSE.
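
The repeated arithmetic above can be wrapped in a small helper; interval95() is a name introduced here for illustration, not a package function:

# Naive 95% interval for the first forecast of any forecast object
interval95 <- function(fc) {
  rmse <- accuracy(fc)[1, "RMSE"]  # training-set RMSE
  c(Lower = fc$mean[1] - 1.96 * rmse,
    Upper = fc$mean[1] + 1.96 * rmse)
}
interval95(bookspaper)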

7.7. For this exercise use data set eggs, the price of a dozen eggs in the United States from 1900–1993. Experiment with the various options in the holt() function to see how much the forecasts change with damped trend, or with a Box-Cox transformation. Try to develop an intuition of what each argument is doing to the forecasts.

[Hint: use h=100 when calling holt() so you can clearly see the differences between the various options when plotting the forecasts.]

Which model gives the best RMSE?

Holt’s method with a Box-Cox transformation gives the best training RMSE (26.39), although all four variants are within a fraction of one another.

Holt

eggs1 <- holt(eggs, h = 100)
autoplot(eggs) +
  autolayer(eggs1, series="Holt", PI=TRUE)

eggs1$model
## Holt's method 
## 
## Call:
##  holt(y = eggs, h = 100) 
## 
##   Smoothing parameters:
##     alpha = 0.8124 
##     beta  = 1e-04 
## 
##   Initial states:
##     l = 314.7232 
##     b = -2.7222 
## 
##   sigma:  27.1665
## 
##      AIC     AICc      BIC 
## 1053.755 1054.437 1066.472
accuracy(eggs1)
##                      ME     RMSE      MAE       MPE     MAPE      MASE
## Training set 0.04499087 26.58219 19.18491 -1.142201 9.653791 0.9463626
##                    ACF1
## Training set 0.01348202

Holt Damped

eggs2 <- holt(eggs, h = 100, damped = TRUE)
autoplot(eggs) +
  autolayer(eggs2, series="Damped", PI=TRUE)

eggs2$model
## Damped Holt's method 
## 
## Call:
##  holt(y = eggs, h = 100, damped = TRUE) 
## 
##   Smoothing parameters:
##     alpha = 0.8462 
##     beta  = 0.004 
##     phi   = 0.8 
## 
##   Initial states:
##     l = 276.9842 
##     b = 4.9966 
## 
##   sigma:  27.2755
## 
##      AIC     AICc      BIC 
## 1055.458 1056.423 1070.718
accuracy(eggs2)
##                     ME     RMSE     MAE       MPE     MAPE      MASE
## Training set -2.891496 26.54019 19.2795 -2.907633 10.01894 0.9510287
##                      ACF1
## Training set -0.003195358

Holt Boxcox

# lambda = "auto" selects the Box-Cox parameter internally via BoxCox.lambda()
eggs3 <- holt(eggs, h = 100, lambda = "auto")
autoplot(eggs) +
  autolayer(eggs3, series="BoxCox", PI=TRUE)

eggs3$model
## Holt's method 
## 
## Call:
##  holt(y = eggs, h = 100, lambda = "auto") 
## 
##   Box-Cox transformation: lambda= 0.3956 
## 
##   Smoothing parameters:
##     alpha = 0.809 
##     beta  = 1e-04 
## 
##   Initial states:
##     l = 21.0322 
##     b = -0.1144 
## 
##   sigma:  1.0549
## 
##      AIC     AICc      BIC 
## 443.0310 443.7128 455.7475
accuracy(eggs3)
##                     ME     RMSE      MAE       MPE     MAPE      MASE
## Training set 0.7736844 26.39376 18.96387 -1.072416 9.620095 0.9354593
##                    ACF1
## Training set 0.03887152

Holt Damped BoxCox

eggs4 <- holt(eggs, h = 100, damped = TRUE, lambda = "auto")
autoplot(eggs) +
  autolayer(eggs4, series="Damped Boxcox", PI=TRUE)

eggs4$model
## Damped Holt's method 
## 
## Call:
##  holt(y = eggs, h = 100, damped = TRUE, lambda = "auto") 
## 
##   Box-Cox transformation: lambda= 0.3956 
## 
##   Smoothing parameters:
##     alpha = 0.8356 
##     beta  = 1e-04 
##     phi   = 0.98 
## 
##   Initial states:
##     l = 21.6922 
##     b = -0.1429 
## 
##   sigma:  1.068
## 
##      AIC     AICc      BIC 
## 446.2962 447.2617 461.5560
accuracy(eggs4)
##                      ME     RMSE      MAE       MPE     MAPE      MASE
## Training set -0.8200445 26.53321 19.45654 -2.019718 9.976131 0.9597618
##                     ACF1
## Training set 0.005852382
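
The four training RMSEs can be collected in a single call for easier comparison (this assumes the eggs1 to eggs4 objects fitted above):

sapply(list(Holt = eggs1, Damped = eggs2,
            BoxCox = eggs3, DampedBoxCox = eggs4),
       function(f) accuracy(f)[1, "RMSE"])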

7.8. Recall your retail time series data (from Exercise 3 in Section 2.10).

  a. Why is multiplicative seasonality necessary for this series?

Multiplicative seasonality is necessary because the seasonal fluctuations grow in proportion to the level of the series: as the trend rises, the seasonal swings get wider, which an additive seasonal model cannot capture.

retaildata <- readxl::read_excel("retail.xlsx", skip = 1)
myts <- ts(retaildata[,"A3349352V"], frequency=12, start=c(1982,4))
autoplot(myts) 

  b. Apply Holt-Winters’ multiplicative method to the data. Experiment with making the trend damped.

hwretail <- hw(myts, seasonal = "multiplicative")
autoplot(hwretail)

hwretaild <- hw(myts, seasonal = "multiplicative", damped = TRUE)
autoplot(hwretaild)
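
Overlaying the two sets of point forecasts makes the effect of damping easier to see (a sketch; PI = FALSE suppresses the interval shading):

autoplot(myts) +
  autolayer(hwretail, series = "HW multiplicative", PI = FALSE) +
  autolayer(hwretaild, series = "HW damped", PI = FALSE)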

  c. Compare the RMSE of the one-step forecasts from the two methods. Which do you prefer?

The undamped method is preferable here: its training RMSE (65.18) is slightly lower than that of the damped version (65.87).

accuracy(hwretail)
##                     ME     RMSE      MAE         MPE     MAPE      MASE
## Training set 0.7104427 65.18499 50.22738 -0.07829382 2.021084 0.3271807
##                   ACF1
## Training set -0.096935
accuracy(hwretaild)
##                   ME     RMSE      MAE       MPE    MAPE      MASE       ACF1
## Training set 7.87931 65.86629 51.03572 0.2581645 2.03756 0.3324462 -0.1085988
  d. Check that the residuals from the best method look like white noise.
checkresiduals(hwretail)

## 
##  Ljung-Box test
## 
## data:  Residuals from Holt-Winters' multiplicative method
## Q* = 409.43, df = 8, p-value < 2.2e-16
## 
## Model df: 16.   Total lags used: 24

With Q* = 409.43 and a p-value below 2.2e-16, the Ljung-Box test strongly rejects the white-noise hypothesis: the residuals are still autocorrelated, so even the preferred model leaves some structure in the data unexplained.
  e. Now find the test set RMSE, while training the model to the end of 2010. Can you beat the seasonal naïve approach from Exercise 8 in Section 3.7?
myts.train <- window(myts, end = c(2010, 12))
myts.test <- window(myts, start=2011)

hw.train <- hw(myts.train, seasonal = "multiplicative")
hw.accuracy <- accuracy(hw.train, x = myts)
hw.accuracy
##                       ME      RMSE       MAE         MPE     MAPE      MASE
## Training set    2.007842  62.56536  48.43616 -0.04379885 2.099009 0.3101652
## Test set     -174.504914 213.39837 177.26710 -3.22807317 3.281807 1.1351454
##                    ACF1 Theil's U
## Training set -0.1036767        NA
## Test set      0.4176727 0.3674889
sn.train <- snaive(myts.train)
sn.accuracy <- accuracy(sn.train, myts.test)
sn.accuracy
##                    ME     RMSE      MAE      MPE     MAPE     MASE      ACF1
## Training set 149.9174 186.7455 156.1625 5.721577 6.008671 1.000000 0.6541086
## Test set     173.1417 198.5256 173.1417 3.324160 3.324160 1.108728 0.3146049
##              Theil's U
## Training set        NA
## Test set     0.3612413
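
Pulling out just the test-set RMSEs makes the comparison direct (the accuracy matrices are indexed by row and column name):

c(HW = hw.accuracy["Test set", "RMSE"],
  SNaive = sn.accuracy["Test set", "RMSE"])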

On the test set the Holt-Winters model (RMSE 213.40) does not beat the seasonal naïve approach (RMSE 198.53), despite fitting the training data far better; the multiplicative model’s advantage on the training period does not carry over to the held-out data.

7.9. For the same retail data, try an STL decomposition applied to the Box-Cox transformed series, followed by ETS on the seasonally adjusted data. How does that compare with your best previous forecasts on the test set?

On the training data, the STL + Box-Cox approach gives the lowest RMSE (55.27), beating both the Holt-Winters model from exercise 7.8 (62.57) and an ETS model fitted to the seasonally adjusted data (66.07).

boxcoxtrain <- stlf(myts.train, lambda = BoxCox.lambda(myts.train))
stlbox <- accuracy(boxcoxtrain)
# ETS fitted to the seasonally adjusted series (classical multiplicative decomposition)
etstrain <- ets(seasadj(decompose(myts.train, "multiplicative")))
stlets <- accuracy(etstrain)
stlbox
##                     ME     RMSE      MAE         MPE     MAPE      MASE
## Training set 0.2500331 55.27263 42.72477 -0.02201167 1.851543 0.2735918
##                     ACF1
## Training set -0.04498648
stlets
##                    ME     RMSE      MAE        MPE     MAPE      MASE
## Training set 4.082729 66.06871 49.57428 0.07058345 2.057793 0.3172782
##                     ACF1
## Training set -0.07659893
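
The exercise also asks about test-set performance; a sketch of that comparison, reusing myts.test from exercise 7.8 and matching the forecast horizon to the test window:

boxcoxfc <- stlf(myts.train, h = length(myts.test),
                 lambda = BoxCox.lambda(myts.train))
accuracy(boxcoxfc, myts.test)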