library(fpp2)
library(kableExtra)
mod <- ses(pigs, h = 4)
summary(mod)
##
## Forecast method: Simple exponential smoothing
##
## Model Information:
## Simple exponential smoothing
##
## Call:
## ses(y = pigs, h = 4)
##
## Smoothing parameters:
## alpha = 0.2971
##
## Initial states:
## l = 77260.0561
##
## sigma: 10308.58
##
## AIC AICc BIC
## 4462.955 4463.086 4472.665
##
## Error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set 385.8721 10253.6 7961.383 -0.922652 9.274016 0.7966249
## ACF1
## Training set 0.01282239
##
## Forecasts:
## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
## Sep 1995 98816.41 85605.43 112027.4 78611.97 119020.8
## Oct 1995 98816.41 85034.52 112598.3 77738.83 119894.0
## Nov 1995 98816.41 84486.34 113146.5 76900.46 120732.4
## Dec 1995 98816.41 83958.37 113674.4 76092.99 121539.8
autoplot(mod) + autolayer(fitted(mod), series = 'Fitted')
SES is a reasonable choice for these data because the series appears cyclical rather than trending. It’s unclear whether there is a seasonal pattern.
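To probe the seasonality question, a season plot is a quick check (a sketch; ggseasonplot() comes with the forecast package that fpp2 loads):
# Overlay each year to look for a repeating within-year pattern
ggseasonplot(pigs)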
# Estimate sigma from the residuals, using n - 2 degrees of freedom
sig <- sqrt(sum(residuals(mod) ** 2)/(length(pigs) - 2))
alpha <- 0.2971  # rounded smoothing parameter from the summary above
mu <- mod$mean   # point forecasts
# 95% half-widths for h = 1..4
interval <- qnorm(.975) * sqrt(sig**2 * (1 + (0:3 * alpha ** 2)))
frame_it <- function(mean, interval){
low95 <- as.vector(mean - interval)
high95 <- as.vector(mean + interval)
data.frame(`low 95` = low95, `high 95` = high95)
}
frame_it(mu, interval) %>% kable() %>%
kable_styling()
| low.95 | high.95 |
|---|---|
| 78611.97 | 119020.8 |
| 77739.12 | 119893.7 |
| 76901.00 | 120731.8 |
| 76093.78 | 121539.0 |
We used n-2 degrees of freedom to make \(s\) (one parameter) unbiased. This comes close to matching the interval from SES; the remaining difference is probably due to rounding alpha.
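For reference, the calculation above implements the h-step forecast variance for SES given in fpp2:
\[
\hat\sigma_h^2 = \hat\sigma^2\left[1 + (h-1)\alpha^2\right]
\]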
autoplot(books)
my_books <- books
pap <- my_books[, 1]
hrd <- my_books[, 2]
se1 <- ses(pap, h = 4)
summary(se1)
##
## Forecast method: Simple exponential smoothing
##
## Model Information:
## Simple exponential smoothing
##
## Call:
## ses(y = pap, h = 4)
##
## Smoothing parameters:
## alpha = 0.1685
##
## Initial states:
## l = 170.8271
##
## sigma: 34.8183
##
## AIC AICc BIC
## 318.9747 319.8978 323.1783
##
## Error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set 7.175981 33.63769 27.8431 0.4736071 15.57784 0.7021303
## ACF1
## Training set -0.2117522
##
## Forecasts:
## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
## 31 207.1097 162.4882 251.7311 138.8670 275.3523
## 32 207.1097 161.8589 252.3604 137.9046 276.3147
## 33 207.1097 161.2382 252.9811 136.9554 277.2639
## 34 207.1097 160.6259 253.5935 136.0188 278.2005
autoplot(se1)
se2 <- ses(hrd, h = 4)
summary(se2)
##
## Forecast method: Simple exponential smoothing
##
## Model Information:
## Simple exponential smoothing
##
## Call:
## ses(y = hrd, h = 4)
##
## Smoothing parameters:
## alpha = 0.3283
##
## Initial states:
## l = 149.2861
##
## sigma: 33.0517
##
## AIC AICc BIC
## 315.8506 316.7737 320.0542
##
## Error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set 9.166735 31.93101 26.77319 2.636189 13.39487 0.7987887
## ACF1
## Training set -0.1417763
##
## Forecasts:
## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
## 31 239.5601 197.2026 281.9176 174.7799 304.3403
## 32 239.5601 194.9788 284.1414 171.3788 307.7414
## 33 239.5601 192.8607 286.2595 168.1396 310.9806
## 34 239.5601 190.8347 288.2855 165.0410 314.0792
autoplot(se2)
Given the increasing trend, these forecasts are certainly not optimal.
acc_se1 <- accuracy(se1)
acc_se1
## ME RMSE MAE MPE MAPE MASE
## Training set 7.175981 33.63769 27.8431 0.4736071 15.57784 0.7021303
## ACF1
## Training set -0.2117522
acc_se2 <- accuracy(se2)
acc_se2
## ME RMSE MAE MPE MAPE MASE
## Training set 9.166735 31.93101 26.77319 2.636189 13.39487 0.7987887
## ACF1
## Training set -0.1417763
With an in-sample RMSE of roughly one sixth of the series’ average level, these predictions aren’t great and probably wouldn’t add much value over the naive method.
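As a quick check on that claim, we could pull the naive method’s in-sample RMSE for each series (a sketch using forecast’s naive() and the accuracy() matrix):
# Naive in-sample RMSE, for comparison with the SES fits above
accuracy(naive(pap))["Training set", "RMSE"]
accuracy(naive(hrd))["Training set", "RMSE"]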
h1 <- forecast::holt(pap, h = 4)
summary(h1)
##
## Forecast method: Holt's method
##
## Model Information:
## Holt's method
##
## Call:
## forecast::holt(y = pap, h = 4)
##
## Smoothing parameters:
## alpha = 1e-04
## beta = 1e-04
##
## Initial states:
## l = 170.699
## b = 1.2621
##
## sigma: 33.4464
##
## AIC AICc BIC
## 318.3396 320.8396 325.3456
##
## Error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set -3.717178 31.13692 26.18083 -5.508526 15.58354 0.6602122
## ACF1
## Training set -0.1750792
##
## Forecasts:
## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
## 31 209.4668 166.6035 252.3301 143.9130 275.0205
## 32 210.7177 167.8544 253.5811 145.1640 276.2715
## 33 211.9687 169.1054 254.8320 146.4149 277.5225
## 34 213.2197 170.3564 256.0830 147.6659 278.7735
autoplot(h1)
h2 <- forecast::holt(hrd, h = 4)
summary(h2)
##
## Forecast method: Holt's method
##
## Model Information:
## Holt's method
##
## Call:
## forecast::holt(y = hrd, h = 4)
##
## Smoothing parameters:
## alpha = 1e-04
## beta = 1e-04
##
## Initial states:
## l = 147.7935
## b = 3.303
##
## sigma: 29.2106
##
## AIC AICc BIC
## 310.2148 312.7148 317.2208
##
## Error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set -0.1357882 27.19358 23.15557 -2.114792 12.1626 0.6908555
## ACF1
## Training set -0.03245186
##
## Forecasts:
## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
## 31 250.1739 212.7390 287.6087 192.9222 307.4256
## 32 253.4765 216.0416 290.9113 196.2248 310.7282
## 33 256.7791 219.3442 294.2140 199.5274 314.0308
## 34 260.0817 222.6468 297.5166 202.8300 317.3334
autoplot(h2)
Including a trend makes these much more reasonable.
acc_h1 <- accuracy(h1)
acc_h1
## ME RMSE MAE MPE MAPE MASE
## Training set -3.717178 31.13692 26.18083 -5.508526 15.58354 0.6602122
## ACF1
## Training set -0.1750792
acc_h2 <- accuracy(h2)
acc_h2
## ME RMSE MAE MPE MAPE MASE
## Training set -0.1357882 27.19358 23.15557 -2.114792 12.1626 0.6908555
## ACF1
## Training set -0.03245186
The RMSE is somewhat improved over the SES model. With only 30 data points, we probably can’t improve much on this model without overfitting.
Just from visual inspection, it’s clear the second fit is a little better. This carries over to the accuracy statistics across the board, so I would conclude that the second model fits the data better.
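A cross-validated comparison would back this up better than in-sample statistics; a sketch with tsCV():
# One-step CV RMSE for SES vs. Holt on the hardcover series
sqrt(mean(tsCV(hrd, ses, h = 1)^2, na.rm = TRUE))
sqrt(mean(tsCV(hrd, forecast::holt, h = 1)^2, na.rm = TRUE))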
# 95% half-widths for h = 1..4, using the class-1 forecast-variance formulas from fpp2
ses_interval <- function (alpha, sig) qnorm(.975) * sqrt(sig**2 * (1 + (0:3 * alpha ** 2)))
holt_interval <- function (alpha, sig, beta){
  qnorm(.975) * sqrt(sig**2 * (1 + (0:3 * (alpha ** 2 + alpha * beta * 1:4 + (beta ** 2 * 1:4 * (2*1:4 - 1))/6 ))))
}
calc_sig <- function(model) sqrt(sum(residuals(model) ** 2)/(nrow(books) - 2))
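The holt_interval() function implements fpp2’s forecast-variance formula for Holt’s linear method:
\[
\hat\sigma_h^2 = \hat\sigma^2\left[1 + (h-1)\left\{\alpha^2 + \alpha\beta h + \tfrac{1}{6}\beta^2 h (2h-1)\right\}\right]
\]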
SES intervals
frame_it(se1$mean, ses_interval(.1685, acc_se1[2])) %>% kable(caption = 'Paperback') %>% kable_styling()
| low.95 | high.95 |
|---|---|
| 141.1810 | 273.0384 |
| 140.2517 | 273.9677 |
| 139.3350 | 274.8844 |
| 138.4306 | 275.7888 |
frame_it(se2$mean, ses_interval(0.3283 , acc_se2[2])) %>% kable(caption = 'Hardcover') %>% kable_styling()
| low.95 | high.95 |
|---|---|
| 176.9765 | 302.1437 |
| 173.6901 | 305.4301 |
| 170.5601 | 308.5601 |
| 167.5660 | 311.5542 |
Holt Intervals
frame_it(h1$mean, holt_interval(.0001, acc_h1[2], .0001)) %>% kable(caption = 'Paperback') %>% kable_styling()
| low.95 | high.95 |
|---|---|
| 148.4396 | 270.4940 |
| 149.6905 | 271.7449 |
| 150.9415 | 272.9959 |
| 152.1925 | 274.2469 |
frame_it(h2$mean, holt_interval(.0001, acc_h2[2], .0001)) %>% kable(caption = 'Hardcover') %>% kable_styling()
| low.95 | high.95 |
|---|---|
| 196.8755 | 303.4723 |
| 200.1781 | 306.7749 |
| 203.4807 | 310.0775 |
| 206.7833 | 313.3801 |
These are again close to the intervals from R, but differ a bit more this time because we used the training RMSE from the accuracy output, which is not the same as the sigma estimate the model itself reports.
Unaltered Holt
base_mod <- forecast::holt(eggs, h = 100)
autoplot(base_mod)
Prices can’t go negative, so this forecast isn’t reasonable.
We’ll still compute the one-step CV RMSE for future reference.
fholt <- function (data, h){
  forecast::holt(data, h = h)  # holt() already returns a forecast object
}
sqrt(mean((tsCV(eggs, fholt) ** 2), na.rm = T))
## [1] 28.65963
Exponential = TRUE
If we exponentiate the trend, the forecasts are restricted to positive values.
exp_mod <- forecast::holt(eggs, exponential = TRUE, h = 100)
autoplot(exp_mod)
The jagged prediction intervals are an interesting feature, and they are relatively narrow. Exponential trends are not covered in the book, so I don’t have much intuition here; the jaggedness is likely because these intervals are computed by simulating future sample paths rather than from a closed-form formula.
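For what it’s worth, with exponential = TRUE the trend enters multiplicatively, so (hedging, since the book doesn’t cover this case) the point forecasts should grow geometrically from the final level \(\ell_T\) and growth rate \(b_T\):
\[
\hat{y}_{T+h|T} = \ell_T b_T^h
\]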
RMSE:
fholt <- function (data, h){
forecast::holt(data, h = h, exponential = TRUE)
}
sqrt(mean((tsCV(eggs, fholt) ** 2), na.rm = T))
## [1] 29.52781
Box-Cox
lambda <- BoxCox.lambda(eggs)
bc <- forecast::holt(eggs, lambda = lambda, h = 100)
autoplot(bc) + ggtitle('Box-Cox')
The forecast is very similar to the exponential trend.
RMSE:
fholt <- function (data, h){
forecast::holt(data, h = h, lambda = lambda, biasadj = T)
}
sqrt(mean(tsCV(eggs, fholt)[10:93]**2))
## [1] 29.99576
Damped Trend Box-Cox
damped <- forecast::holt(eggs, lambda = lambda, damped = T, h = 100)
autoplot(damped)
This is probably the most intuitively reasonable pattern. It’s unlikely the price of eggs will drop below a certain level.
RMSE:
fholt <- function (data, h){
forecast::holt(data, h = h, lambda = lambda, biasadj = T, damped = T)
}
sqrt(mean(tsCV(eggs, fholt)[10:93]**2))
## [1] 29.95962
The RMSEs are all similar, so I would weigh explainability, intuition, and the prediction intervals. I can’t explain the exponential trend, so that’s out. The damped Box-Cox forecast is the most intuitively plausible, and its prediction interval doesn’t balloon until we get 10+ years out. That makes it my choice.
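To back up the interval claim, we can inspect how the 95% band widens with the horizon (a sketch; lower and upper are standard components of a forecast object):
# Width of the 95% interval at selected horizons for the damped Box-Cox model
h_sel <- c(1, 12, 60, 100)
(damped$upper[, "95%"] - damped$lower[, "95%"])[h_sel]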
Now we’ll examine the fitted values for different parameter levels.
ha <- forecast::holt(eggs, alpha = .8, h = 100)
autoplot(ha) + autolayer(fitted(ha), series = 'Fitted Values') + ggtitle('High Alpha')
la <- forecast::holt(eggs, alpha = .1, h = 5)
autoplot(la) + autolayer(fitted(la), series = 'Fitted Values') + ggtitle('Low Alpha')
hb <- forecast::holt(eggs, beta = .4, h = 5)
autoplot(hb) + autolayer(fitted(hb), series = 'Fitted Values')+ ggtitle('High Beta')
The most prominent feature when experimenting with the parameters is that a high beta produces more extreme outliers in the fitted values. After a big change, the trend and the level both adjust, so the predicted next step is extreme.
library(tidyverse)
retaildata <- read_csv("https://raw.githubusercontent.com/TheFedExpress/Data/master/retail.csv", skip=1)
myts <- ts(retaildata[[12]],frequency=12, start=c(1982,4))
autoplot(myts)
The multiplicative version is necessary because the effect of seasonality increases as the level of the series increases.
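One way to support this choice numerically (a sketch): compare AICc across the two seasonal specifications, which is valid here because both are fit to the same untransformed series:
# A lower AICc should favor the multiplicative specification
hw(myts, seasonal = 'additive')$model$aicc
hw(myts, seasonal = 'multiplicative')$model$aicc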
Standard Trend
hw_mod1 <- hw(myts, seasonal = 'multiplicative')
summary(hw_mod1)
##
## Forecast method: Holt-Winters' multiplicative method
##
## Model Information:
## Holt-Winters' multiplicative method
##
## Call:
## hw(y = myts, seasonal = "multiplicative")
##
## Smoothing parameters:
## alpha = 0.2712
## beta = 1e-04
## gamma = 0.1857
##
## Initial states:
## l = 124.5622
## b = 1.4387
## s = 0.8815 0.7806 0.9057 1.4771 1.0712 1.0208
## 0.9659 0.9347 0.9583 0.9917 1.0793 0.9331
##
## sigma: 0.0507
##
## AIC AICc BIC
## 4407.234 4408.920 4474.261
##
## Error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set -0.2553259 17.71997 12.66088 -0.418381 3.803932 0.5318911
## ACF1
## Training set 0.1150271
##
## Forecasts:
## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
## Jan 2014 605.2081 565.9135 644.5026 545.1123 665.3039
## Feb 2014 494.9516 461.6567 528.2466 444.0314 545.8718
## Mar 2014 559.7261 520.8117 598.6406 500.2116 619.2407
## Apr 2014 591.4964 549.0857 633.9070 526.6348 656.3579
## May 2014 644.9935 597.3893 692.5977 572.1892 717.7979
## Jun 2014 607.6242 561.5388 653.7096 537.1426 678.1057
## Jul 2014 584.3144 538.8404 629.7884 514.7679 653.8609
## Aug 2014 568.6646 523.3146 614.0146 499.3078 638.0214
## Sep 2014 616.1380 565.8488 666.4273 539.2272 693.0488
## Oct 2014 631.8937 579.1660 684.6213 551.2537 712.5336
## Nov 2014 666.2761 609.4944 723.0579 579.4360 753.1163
## Dec 2014 1004.6824 917.3162 1092.0486 871.0674 1138.2975
## Jan 2015 621.7319 564.6717 678.7922 534.4658 708.9981
## Feb 2015 508.4348 460.9566 555.9129 435.8232 581.0463
## Mar 2015 574.9396 520.3472 629.5321 491.4477 658.4316
## Apr 2015 607.5374 548.9131 666.1617 517.8792 697.1956
## May 2015 662.4463 597.5216 727.3710 563.1526 761.7401
## Jun 2015 624.0292 561.9432 686.1151 529.0769 718.9815
## Jul 2015 600.0550 539.4793 660.6307 507.4125 692.6975
## Aug 2015 583.9496 524.1632 643.7361 492.5141 675.3851
## Sep 2015 632.6624 566.9975 698.3272 532.2367 733.0881
## Oct 2015 648.8032 580.5644 717.0420 544.4409 753.1654
## Nov 2015 684.0664 611.1865 756.9464 572.6062 795.5267
## Dec 2015 1031.4495 920.1761 1142.7230 861.2715 1201.6276
autoplot(hw_mod1)
Damped Trend
hw_mod2 <- hw(myts, seasonal = 'multiplicative', damped = T)
summary(hw_mod2)
##
## Forecast method: Damped Holt-Winters' multiplicative method
##
## Model Information:
## Damped Holt-Winters' multiplicative method
##
## Call:
## hw(y = myts, seasonal = "multiplicative", damped = T)
##
## Smoothing parameters:
## alpha = 0.3456
## beta = 0.0074
## gamma = 0.1292
## phi = 0.98
##
## Initial states:
## l = 124.9688
## b = 1.1551
## s = 0.8496 0.7561 0.9241 1.5295 1.0872 1.0314
## 0.9812 0.9242 0.9833 0.966 1.045 0.9226
##
## sigma: 0.0528
##
## AIC AICc BIC
## 4433.887 4435.776 4504.857
##
## Error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set 1.950698 17.90964 12.59081 0.3413179 3.790673 0.5289474
## ACF1
## Training set 0.05250455
##
## Forecasts:
## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
## Jan 2014 606.1953 565.1744 647.2163 543.4592 668.9315
## Feb 2014 496.9136 461.2577 532.5696 442.3826 551.4447
## Mar 2014 562.3069 519.7070 604.9068 497.1559 627.4579
## Apr 2014 593.8184 546.4944 641.1424 521.4425 666.1942
## May 2014 646.2480 592.2412 700.2548 563.6517 728.8443
## Jun 2014 608.8359 555.6282 662.0437 527.4617 690.2102
## Jul 2014 587.8240 534.2326 641.4153 505.8631 669.7849
## Aug 2014 572.2565 517.9470 626.5659 489.1973 655.3156
## Sep 2014 618.5263 557.5385 679.5140 525.2536 711.7989
## Oct 2014 637.6047 572.3998 702.8096 537.8824 737.3270
## Nov 2014 671.1605 600.0858 742.2352 562.4612 779.8599
## Dec 2014 1002.6020 892.8146 1112.3895 834.6967 1170.5074
## Jan 2015 622.4224 550.4864 694.3584 512.4058 732.4390
## Feb 2015 509.9189 449.2104 570.6274 417.0732 602.7645
## Mar 2015 576.6964 506.0408 647.3520 468.6380 684.7549
## Apr 2015 608.6773 532.0065 685.3482 491.4194 725.9353
## May 2015 662.0611 576.3959 747.7263 531.0475 793.0747
## Jun 2015 623.4048 540.6140 706.1956 496.7872 750.0224
## Jul 2015 601.5804 519.6441 683.5166 476.2697 726.8910
## Aug 2015 585.3543 503.6481 667.0606 460.3954 710.3133
## Sep 2015 632.3729 541.9735 722.7723 494.1189 770.6269
## Oct 2015 651.5663 556.2373 746.8952 505.7732 797.3593
## Nov 2015 685.5362 582.9466 788.1257 528.6390 842.4333
## Dec 2015 1023.6092 867.0208 1180.1977 784.1279 1263.0906
autoplot(hw_mod2)
\(\phi\) is near the top of its allowed range, so the damping doesn’t have much of an effect.
The in-sample RMSE is slightly better without damping. That is expected, since damping introduces bias; if it reduces the variance of the forecasts, the tradeoff could still be worthwhile. We can try cross validation to see whether this holds for one-step forecasts.
fhw <- function (data, h){
  hw(data, seasonal = 'multiplicative', damped = F, h = h)
}
sqrt(mean((tsCV(myts, fhw) ** 2), na.rm = T))
## [1] 19.25559
fhw <- function (data, h){
  hw(data, seasonal = 'multiplicative', damped = T, h = h)
}
sqrt(mean((tsCV(myts, fhw) ** 2), na.rm = T))
## [1] 20.14191
With CV of the one-step forecasts, the undamped method remains better. Perhaps the benefits would appear as the forecast horizon increases, amplifying the effect of damping.
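That conjecture could be tested directly, at the cost of a slow computation (a sketch; the twelfth column of the tsCV error matrix holds the 12-step-ahead errors):
fhw12 <- function (data, h){
  hw(data, seasonal = 'multiplicative', damped = T, h = h)
}
e <- tsCV(myts, fhw12, h = 12)
sqrt(mean(e[, 12]^2, na.rm = TRUE))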
checkresiduals(hw_mod1)
##
## Ljung-Box test
##
## data: Residuals from Holt-Winters' multiplicative method
## Q* = 65.76, df = 8, p-value = 3.414e-11
##
## Model df: 16. Total lags used: 24
The autocorrelations at lags around 1, 2, and 3 years are just past the significance bounds. Perhaps there is some explainable variance left in the seasonal component, though I wouldn’t consider the model invalid based on these small autocorrelations. Given ample time, I might look for a better model.
We compare our model to the baseline seasonal naive approach. For this we’ll use the damped trend, because its bias effect was minimal on the training set and it should have a considerable variance-reducing effect on the test set.
myts.train <- window(myts, end=c(2010,12))
myts.test <- window(myts, start=2011)
mhw <- hw(myts.train, seasonal = 'multiplicative', damped = T)
accuracy(mhw, myts.test)
## ME RMSE MAE MPE MAPE MASE
## Training set 0.7190557 16.26705 11.81180 0.1075218 3.876624 0.5319264
## Test set -23.8629305 33.47260 28.77762 -5.1003474 5.715782 1.2959559
## ACF1 Theil's U
## Training set 0.007729446 NA
## Test set 0.330842260 0.3197526
m1 <- snaive(myts.train)
accuracy(m1,myts.test)
## ME RMSE MAE MPE MAPE MASE
## Training set 15.13063 28.69434 22.20571 4.854662 6.873099 1.000000
## Test set -21.02500 31.50373 24.22500 -4.276868 4.878829 1.090936
## ACF1 Theil's U
## Training set 0.4670908 NA
## Test set 0.5491969 0.3020243
Surprisingly, we were not able to beat the seasonal naive baseline, which tested better across the board.
BoxCox.lambda(myts.train)
## [1] 0.06330008
We’ll round lambda to zero, which corresponds to the log transform.
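Recall the Box-Cox definition used in fpp2, under which \(\lambda = 0\) is exactly the log:
\[
w_t = \begin{cases} \log(y_t) & \text{if } \lambda = 0; \\ (y_t^\lambda - 1)/\lambda & \text{otherwise.} \end{cases}
\]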
newts_bc <- BoxCox(myts.train, 0)
newts_stl <- stl(newts_bc, t.window = 12, s.window = 'periodic')
newts.train <- newts_stl %>% seasadj()
# One full year of the (periodic) seasonal component, repeated across the 36-month test horizon
seasonal <- rep(window(newts_stl$time.series[, 1], start = c(2009,1), end = c(2009, 12)), 3) %>%
  as.vector
autoplot(newts.train)
The variability of the data has definitely been dampened. It’s also unclear whether any seasonal pattern remains after the STL decomposition.
# Damped additive-trend ETS on the seasonally adjusted, logged series
modets <- ets(newts.train, damped = T, model = 'AAN')
preds <- forecast(modets, 36)
# Re-add the seasonal component and invert the log before scoring
accuracy(exp(as.vector(preds$mean) + seasonal), myts.test)
## ME RMSE MAE MPE MAPE ACF1 Theil's U
## Test set -7.16472 42.1297 33.85158 -2.265762 6.131789 0.6639146 0.3822294
m1 <- naive(newts.train)
# Hold the naive point forecast flat for 36 months, then back-transform
preds <- rep(forecast(m1)$mean[[1]], 36)
accuracy(exp(preds + seasonal), myts.test)
## ME RMSE MAE MPE MAPE ACF1
## Test set -34.86168 55.09948 47.98297 -7.419127 9.250294 0.7289705
## Theil's U
## Test set 0.5342036
The ETS model predicts better than the naive on the transformed data, but it predicts worse than both earlier models on the original scale. Part of the gap may be back-transformation bias: simply exponentiating gives the forecast median rather than the mean.
fcast <- stlf(myts.train, method='ets')
accuracy(fcast, myts.test)
## ME RMSE MAE MPE MAPE MASE
## Training set 0.1369716 16.55338 11.28664 -0.1575794 3.642597 0.5082765
## Test set -51.5867138 57.89974 54.75693 -10.7038991 11.068668 2.4658944
## ACF1 Theil's U
## Training set 0.08073633 NA
## Test set 0.07494666 0.5463551
We’ll now try just the Box-Cox transform and use additive seasonality because of the logging.
modets <- ets(newts_bc, damped = T, model = 'AAA')
fcast <- forecast(modets, 36)$mean
accuracy(exp(fcast), myts.test)
## ME RMSE MAE MPE MAPE ACF1
## Test set -4.502593 40.08141 31.91802 -1.698341 5.781319 0.7281946
## Theil's U
## Test set 0.3571224
Finally, we let the function choose the optimal model.
modets <- ets(myts.train, damped = T, model = 'ZZZ')
fcast <- forecast(modets, 36)$mean
accuracy(fcast, myts.test)
## ME RMSE MAE MPE MAPE ACF1 Theil's U
## Test set -3.18005 40.87076 32.5157 -1.469796 5.869069 0.7349412 0.3616382
This is still not an improvement. It is somewhat curious that no model could beat the seasonal naive method, and that the transformed models couldn’t beat the original Holt-Winters. Some digging turned up this:
summary(ets(myts.train, damped = T, model = 'AAM', restrict = F))
## ETS(A,Ad,M)
##
## Call:
## ets(y = myts.train, model = "AAM", damped = T, restrict = F)
##
## Smoothing parameters:
## alpha = 0.3666
## beta = 0.0161
## gamma = 1e-04
## phi = 0.9799
##
## Initial states:
## l = 125.0101
## b = 1.2998
## s = 0.8999 0.7956 0.966 1.5249 1.0494 1.0064
## 0.9655 0.9024 0.9438 0.9651 1.0303 0.9507
##
## sigma: 16.6833
##
## AIC AICc BIC
## 3976.531 3978.629 4045.714
##
## Training set error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set 0.7190557 16.26705 11.8118 0.1075218 3.876624 0.5319264
## ACF1
## Training set 0.007729446
summary(hw(myts.train, seasonal = 'multiplicative', damped = T, h = 1))
##
## Forecast method: Damped Holt-Winters' multiplicative method
##
## Model Information:
## Damped Holt-Winters' multiplicative method
##
## Call:
## hw(y = myts.train, h = 1, seasonal = "multiplicative", damped = T)
##
## Smoothing parameters:
## alpha = 0.3666
## beta = 0.0161
## gamma = 1e-04
## phi = 0.9799
##
## Initial states:
## l = 125.0101
## b = 1.2998
## s = 0.8999 0.7956 0.966 1.5249 1.0494 1.0064
## 0.9655 0.9024 0.9438 0.9651 1.0303 0.9507
##
## sigma: 0.0543
##
## AIC AICc BIC
## 3965.935 3968.033 4035.119
##
## Error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set 0.7190557 16.26705 11.8118 0.1075218 3.876624 0.5319264
## ACF1
## Training set 0.007729446
##
## Forecasts:
## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
## Jan 2011 530.1785 493.271 567.086 473.7333 586.6237
Comparing the summaries, these models are identical, but the ets fit required restrict = F because this combination is among the “restricted” models, excluded by default due to numerical instability. The hw call apparently uses additive errors, which are not recommended because of that instability. In this prediction’s case that might have been a feature rather than a bug, but it makes me trust the ets function more.
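One way to check which error form hw() actually used (a sketch; components is a standard slot on the underlying ets model object, listing the error, trend, and seasonal types plus the damping flag):
mhw2 <- hw(myts.train, seasonal = 'multiplicative', damped = T, h = 1)
mhw2$model$components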