Sigco, Inc. is a glass and architectural metal fabricator/distributor in Westbrook, ME that serves much of New England. The company manufactures a number of glass products and believes demand for these products is seasonal.
Inventory management is a major challenge. It is predominantly manual: inventory is counted three times a week to ensure accuracy, demand is forecasted only one week in advance, and replenishment quantities (including safety stock levels) are set subjectively. In particular, safety stock is computed by assigning an extra percentage of product to keep on hand for some of the products, but not all, and the percentage used varies by product. There is an opportunity to improve demand forecasting using historical seasonal data. Further, safety stock could be calculated to make inventory more efficient, maintaining or improving the service level while minimizing excess stock on hand.
The goal of this project is to build a systematic forecasting and model-evaluation tool the company can use to calculate safety stock and reorder points with the EOQ model. An interactive forecasting tool, such as an R Shiny app, gives management a simple way to upload data sets, visualize the historical use of products graphically, identify the components inherent in the data, partition the data into training and validation sets, fit several candidate models, evaluate their performance, choose the best model to deploy a forecast, and use that forecast to calculate safety stock and reorder points.
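As a rough illustration (not the actual tool), a minimal Shiny skeleton for this workflow might look like the sketch below; the column names, the January 2012 start date, and the short model list are assumptions for the example.

```r
# Minimal sketch of the kind of interactive forecasting app described above.
# Assumes an uploaded CSV in long format with columns Item_Code, Month, Used;
# only the first SKU in the file is forecast, to keep the example short.
library(shiny)
library(forecast)
library(ggplot2)

ui <- fluidPage(
  fileInput("file", "Upload usage history (CSV: Item_Code, Month, Used)"),
  selectInput("model", "Forecasting model",
              c("Seasonal naive" = "snaive", "ETS" = "ets", "Neural net" = "nnetar")),
  plotOutput("fcPlot")
)

server <- function(input, output) {
  series <- reactive({
    req(input$file)
    dat <- read.csv(input$file$datapath, stringsAsFactors = FALSE)
    dat <- dat[dat$Item_Code == dat$Item_Code[1], ]   # first SKU in the file only
    ts(dat$Used, start = c(2012, 1), frequency = 12)  # assumed start date
  })
  output$fcPlot <- renderPlot({
    y  <- series()
    fc <- switch(input$model,
                 snaive = snaive(y, h = 12),
                 ets    = forecast(ets(y), h = 12),
                 nnetar = forecast(nnetar(y), h = 12))
    autoplot(fc)
  })
}

shinyApp(ui, server)
```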
Everything we do in the class relies on data. Where and how do you expect to obtain the necessary time series data to utilize the tools and models we covered in class?
The data is easy to come by. The historical use of each item is documented in annual Excel spreadsheets that the company keeps. There are roughly 200 SKUs in Sigco’s inventory, and I have chosen to look at the 25 most utilized items and forecast their use.
Step 1
Data Preparation
Import the company spreadsheets into R, clean them up, and combine them into a single time series data set starting in January 2012. Not all items have been in the company’s inventory since then, but the majority have.
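A sketch of how this step might look, assuming one workbook per year and illustrative column names (the real spreadsheets may be laid out differently):

```r
# Read every annual workbook, stack the rows, and standardize the columns.
# The folder, file layout, and column names (Item_Code, Month, Used) are assumptions.
library(readxl)
library(dplyr)

files <- list.files("data", pattern = "\\.xlsx$", full.names = TRUE)

usage <- lapply(files, read_excel) %>%
  bind_rows() %>%
  transmute(Item_Code = as.character(Item_Code),
            Month     = as.Date(Month),
            Used      = as.numeric(Used)) %>%
  arrange(Item_Code, Month)

# Monthly time series for one SKU, e.g. item E (SS06S6T100144)
e    <- filter(usage, Item_Code == "SS06S6T100144")
e.ts <- ts(e$Used, start = c(2012, 1), frequency = 12)
```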
The final data set will look something like this:
Item_Code | Month (MM-YY) | Used |
---|---|---|
SS02CLR7284 | 01-12 | 468 |
SS02LOE7284 | 01-12 | NA |
SS03S6A7284 | 01-12 | 3 |
SS03S6T7284 | 01-12 | 2 |
SS04CLR96130 | 01-12 | 157 |
SS05CLR96130 | 01-12 | 278 |
SS05S6T96130 | 01-12 | 19 |
SS06BRN96130 | 01-12 | 75 |
SS06CLR96130 | 01-12 | 1074 |
SS06GRY96130 | 01-12 | 34 |
SS06LOE96130 | 01-12 | 15 |
SS06S6A100144 | 01-12 | 69 |
SS06S6T100144 | 01-12 | 104 |
SS06STP96130 | 01-12 | 24 |
SS10CLR96130 | 01-12 | 196 |
SS10SHG96130 | 01-12 | 73 |
SS10STP96130 | 01-12 | 17 |
SS12CLR102130 | 01-12 | 168 |
SS12SHG96130 | 01-12 | 15 |
SS12STP96130 | 01-12 | 12 |
Step 2
Visualize the data. For brevity I have chosen 6 items, narrowing the scope to SKU numbers SS06S6T100144, SS10CLR96130, SS06S6A100144, SS03S6T7284, SS04CLR96130, and SS06BRN96130. I will refer to them as E, F, I, M, N, and S respectively.
I chose these particular SKUs because, as a group, they let me illustrate how the company can model data that exhibits differing components. None of the items is simple to model, and each exhibits non-stationarity, with varying degrees of seasonality and trend.
Seasonality
Below I show each of the SKU items plotted. The first group are seasonal plots: the months of the year are on the x axis and each colored line represents one year of the series. We can see varying degrees of seasonality.
The next six plots show seasonality from a different angle, with the years on the x axis and a separate line tracking each month’s sales across the years.
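These two views can be produced with the forecast package; a sketch for item E (the plotting code actually used may differ):

```r
library(forecast)

ggseasonplot(e.ts)     # months on the x axis, one colored line per year
ggsubseriesplot(e.ts)  # one mini-series per month, plotted across the years
```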
Trend
Trend can be visualized in the plots below.
Finally, we can get an idea of whether seasonality and/or trend are present by looking at Acf plots.
All items display stickiness in the Acf plots, meaning there is some lag-1 autocorrelation (trend) that can probably be modeled. Additionally, we see seasonality that can be modeled.
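A sketch of the trend and autocorrelation checks for item E:

```r
library(forecast)
library(ggplot2)

autoplot(e.ts)           # level and trend over time
Acf(e.ts, lag.max = 24)  # high lag-1 autocorrelation suggests trend;
                         # a spike at lag 12 suggests monthly seasonality
```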
Are these data series random walks?
We can determine this with a hypothesis test: fit an AR(1) model to each series and test whether the ar1 coefficient is significantly different from 1, since an AR(1) coefficient equal to 1 is a random walk.
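A sketch of one way to run this test with the forecast package; the output below reports two p-values for each item, and the exact variant used to produce each is not shown, so the normal-approximation version here is an assumption.

```r
# Random-walk check: if the AR(1) coefficient is not significantly different
# from 1, the series behaves like a random walk and a naive forecast is hard to beat.
library(forecast)

fit <- Arima(e.ts, order = c(1, 0, 0))     # AR(1) with non-zero mean
summary(fit)

ar1 <- coef(fit)["ar1"]
se  <- sqrt(diag(fit$var.coef))["ar1"]

# two-sided p-value for H0: ar1 = 1 (normal approximation)
2 * pnorm(-abs((1 - ar1) / se))
```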
We can say that E is not a random walk if we use \(\alpha = 0.05\).
## Series: e.ts
## ARIMA(1,0,0) with non-zero mean
##
## Coefficients:
## ar1 mean
## 0.8718 336.0494
## s.e. 0.0587 90.6882
##
## sigma^2 estimated as 10820: log likelihood=-387.76
## AIC=781.52 AICc=781.92 BIC=788
## ar1
## 0.03267771
## ar1
## 0.02894776
We can say that F is not a random walk if we use \(\alpha = 0.01\).
## Series: f.ts
## ARIMA(1,0,0) with non-zero mean
##
## Coefficients:
## ar1 mean
## 0.7624 335.8390
## s.e. 0.0800 36.0101
##
## sigma^2 estimated as 5314: log likelihood=-364.73
## AIC=735.47 AICc=735.87 BIC=741.94
ar1 |
---|
0.004195 |
ar1 |
---|
0.002966 |
We can say that I is not a random walk if we use \(\alpha = 0.01\).
## Series: i.ts
## ARIMA(1,0,0) with non-zero mean
##
## Coefficients:
## ar1 mean
## 0.7227 178.9316
## s.e. 0.0860 27.3758
##
## sigma^2 estimated as 4115: log likelihood=-356.48
## AIC=718.97 AICc=719.37 BIC=725.45
ar1 |
---|
0.002002 |
ar1 |
---|
0.001262 |
We cannot say that M is not a random walk: \(p \approx 0.08\), so \(p > \alpha\) at both \(\alpha = 0.05\) and \(\alpha = 0.01\), and we fail to reject the random-walk hypothesis.
## Series: m.ts
## ARIMA(1,0,0) with non-zero mean
##
## Coefficients:
## ar1 mean
## 0.8968 81.4548
## s.e. 0.0589 35.9754
##
## sigma^2 estimated as 1141: log likelihood=-315.88
## AIC=637.77 AICc=638.17 BIC=644.25
ar1 |
---|
0.08497 |
ar1 |
---|
0.0801 |
We can say that N is not a random walk if we use \(\alpha = 0.01\).
## Series: n.ts
## ARIMA(1,0,0) with non-zero mean
##
## Coefficients:
## ar1 mean
## 0.6517 220.4260
## s.e. 0.0931 17.6684
##
## sigma^2 estimated as 2643: log likelihood=-342.22
## AIC=690.44 AICc=690.84 BIC=696.92
ar1 |
---|
0.0004009 |
ar1 |
---|
0.0001847 |
We can say that S is not a random walk if we use \(\alpha = 0.01\).
## Series: s.ts
## ARIMA(1,0,0) with non-zero mean
##
## Coefficients:
## ar1 mean
## 0.0879 78.4328
## s.e. 0.1239 4.3789
##
## sigma^2 estimated as 1057: log likelihood=-312.62
## AIC=631.24 AICc=631.64 BIC=637.72
ar1 |
---|
4.666e-10 |
ar1 |
---|
1.814e-13 |
All of this leads us toward choosing forecasting models that accommodate trend and seasonality for all but one of the SKUs, M. However, the M plots of trend and seasonality suggest those components are present, so I think it is worth trying to model M with methods other than the naive as well.
Forecast with the appropriate models, and include a naive model as a baseline. I will model the series and deploy forecasts for item E; the process for the other items is the same.
SS06S6T100144 - Item E
Naive
| | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2016 | NA | NA | NA | NA | 238 | 415 | 652 | 424 | 469 | 414 | 637 | 612 |
| 2017 | 605 | 487 | 478 | 575 | NA | NA | NA | NA | NA | NA | NA | NA |
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | 118.3250 | 170.6247 | 132.5750 | 26.39194 | 42.24987 | 1.000000 | 0.2443971 | NA |
| Test set | 122.4167 | 242.3460 | 185.4167 | 14.80013 | 27.51436 | 1.398579 | 0.1713987 | 1.560893 |
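A sketch of how the partition and this baseline might have been produced; the validation window (May 2016 through April 2017) matches the tables above, and the month-to-month variation in the forecast suggests a seasonal naive model.

```r
library(forecast)

salesTrainE <- window(e.ts, end   = c(2016, 4))   # Jan 2012 - Apr 2016 (training)
salesValidE <- window(e.ts, start = c(2016, 5))   # May 2016 - Apr 2017 (validation)

naiveE <- snaive(salesTrainE, h = length(salesValidE))   # seasonal naive baseline
accuracy(naiveE, salesValidE)
```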
Exponential Smoothing
## ETS(M,A,N)
##
## Call:
## ets(y = salesTrainE, model = "ZZZ", restrict = FALSE)
##
## Smoothing parameters:
## alpha = 0.1801
## beta = 1e-04
##
## Initial states:
## l = 108.0528
## b = 7.8754
##
## sigma: 0.3358
##
## AIC AICc BIC
## 674.1312 675.4355 683.8874
##
## Training set error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set 3.020445 87.34413 68.38085 -18.24478 35.33935 0.5157899
## ACF1
## Training set 0.3262123
## Point Forecast Lo 0 Hi 0
## May 2016 552.5477 552.5477 552.5477
## Jun 2016 560.4389 560.4389 560.4389
## Jul 2016 568.3300 568.3300 568.3300
## Aug 2016 576.2211 576.2211 576.2211
## Sep 2016 584.1123 584.1123 584.1123
## Oct 2016 592.0034 592.0034 592.0034
## Nov 2016 599.8945 599.8945 599.8945
## Dec 2016 607.7857 607.7857 607.7857
## Jan 2017 615.6768 615.6768 615.6768
## Feb 2017 623.5679 623.5679 623.5679
## Mar 2017 631.4591 631.4591 631.4591
## Apr 2017 639.3502 639.3502 639.3502
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | 3.020446 | 87.34413 | 68.38085 | -18.2447769 | 35.33935 | 0.5157899 | 0.3262123 | NA |
| Test set | 26.967689 | 143.60805 | 115.09006 | 0.3889655 | 18.01269 | 0.8681129 | 0.4154351 | 1.07865 |
Regression
##
## Call:
## tslm(formula = salesTrainE ~ trend + season)
##
## Residuals:
## Min 1Q Median 3Q Max
## -167.328 -55.580 -1.703 50.466 170.516
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 66.9745 41.8034 1.602 0.1172
## trend 9.5130 0.7724 12.316 5.13e-15 ***
## season2 -64.5130 52.4391 -1.230 0.2260
## season3 -100.6260 52.4561 -1.918 0.0624 .
## season4 -68.9391 52.4846 -1.314 0.1967
## season5 -138.7740 55.6355 -2.494 0.0170 *
## season6 -59.0370 55.6194 -1.061 0.2950
## season7 5.4500 55.6140 0.098 0.9224
## season8 -10.8130 55.6194 -0.194 0.8469
## season9 -26.8260 55.6355 -0.482 0.6324
## season10 -24.8391 55.6623 -0.446 0.6579
## season11 11.6479 55.6998 0.209 0.8354
## season12 -3.6151 55.7479 -0.065 0.9486
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 82.9 on 39 degrees of freedom
## Multiple R-squared: 0.817, Adjusted R-squared: 0.7607
## F-statistic: 14.51 on 12 and 39 DF, p-value: 7.565e-11
## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
## May 2016 432.3906 307.8375 556.9438 239.1373 625.6439
## Jun 2016 521.6406 397.0875 646.1938 328.3873 714.8939
## Jul 2016 595.6406 471.0875 720.1938 402.3873 788.8939
## Aug 2016 588.8906 464.3375 713.4438 395.6373 782.1439
## Sep 2016 582.3906 457.8375 706.9438 389.1373 775.6439
## Oct 2016 593.8906 469.3375 718.4438 400.6373 787.1439
## Nov 2016 639.8906 515.3375 764.4438 446.6373 833.1439
## Dec 2016 634.1406 509.5875 758.6938 440.8873 827.3939
## Jan 2017 647.2688 523.4505 771.0870 455.1556 839.3819
## Feb 2017 592.2688 468.4505 716.0870 400.1556 784.3819
## Mar 2017 565.6687 441.8505 689.4870 373.5556 757.7819
## Apr 2017 606.8687 483.0505 730.6870 414.7556 798.9819
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | 0.00000 | 71.79738 | 58.4113 | -7.908828 | 33.42363 | 0.4405906 | 0.3164708 | NA |
| Test set | 39.50417 | 161.24688 | 132.0771 | 2.112034 | 20.67654 | 0.9962443 | 0.3818900 | 1.119411 |
An Acf plot of the regression residuals shows some uncaptured trend and seasonality that we can model further.
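A sketch of the adjustment: fit an AR(1) to the regression residuals, forecast the residuals, and add them back to the regression forecast. The choice of an AR(1) as the second-layer model is an assumption here.

```r
library(forecast)

regE    <- tslm(salesTrainE ~ trend + season)
regE.fc <- forecast(regE, h = 12)

resid.ar <- Arima(residuals(regE), order = c(1, 0, 0))   # second-layer model on residuals
resid.fc <- forecast(resid.ar, h = 12)

adjE.fc <- regE.fc$mean + resid.fc$mean                  # adjusted point forecast
accuracy(adjE.fc, salesValidE)
```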
Below are the adjusted regression forecast and its accuracy. It is not much of an improvement, but since there appeared to be more signal left to capture, it was worth trying.
## Jan Feb Mar Apr May Jun Jul
## 2016 459.3504 530.9191 599.2686
## 2017 648.2457 593.2438 566.6431 607.8429
## Aug Sep Oct Nov Dec
## 2016 590.7128 583.6358 594.9513 640.8924 635.1236
## 2017
| | ME | RMSE | MAE | MPE | MAPE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|
| Test set | 35.34753 | 156.4548 | 128.7389 | 1.497218 | 20.22199 | 0.3691283 | 1.109454 |
Neural Network
After we get a neural network forecast, we can model any uncaptured components in the residuals to try to improve it. In this case, the forecast accuracy is essentially unchanged.
The neural net gives the best MAPE scores of any model in both the training and validation periods.
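A sketch of the neural-network fit; using nnetar() with default settings is an assumption, since the call itself is not shown in the output.

```r
library(forecast)

nnE    <- nnetar(salesTrainE)       # autoregressive neural net, default settings
nnE.fc <- forecast(nnE, h = 12)
accuracy(nnE.fc, salesValidE)
```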
## Jan Feb Mar Apr May Jun Jul
## 2016 574.4792 634.4689 560.3893
## 2017 565.4610 563.0646 633.9663 639.4796
## Aug Sep Oct Nov Dec
## 2016 548.6557 565.7550 668.2788 632.2778 632.9401
## 2017
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | 0.0040966 | 1.666347 | 0.7207818 | -0.4435348 | 0.7456993 | 0.0054368 | 0.0684582 | NA |
| Test set | 21.3153110 | 135.306011 | 103.7266647 | -0.0782943 | 16.0479365 | 0.7823999 | 0.4430976 | 1.001827 |
Further modeling may be warranted
## Jan Feb Mar Apr May Jun Jul
## 2016 574.2426 635.3384 560.5980
## 2017 565.4135 563.0942 633.9934 639.4698
## Aug Sep Oct Nov Dec
## 2016 548.2667 565.6147 668.4496 632.3645 632.8697
## 2017
| | ME | RMSE | MAE | MPE | MAPE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|
| Test set | 21.27374 | 135.3217 | 103.7162 | -0.0842045 | 16.04682 | 0.4424514 | 1.00188 |
However, the forecast accuracy is essentially the same after modeling the residual autocorrelation, which suggests the original neural net model had already captured the trend and seasonality.
Other Considerations
As we read earlier in the course, forecasts can sometimes be improved by averaging the forecast values from several models. Below are the averaged forecast and its accuracy metrics. Although the accuracy is not ideal and not as good as the neural network’s, the averaged model would be interesting to test.
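A sketch of an equal-weight average of the point forecasts, reusing the objects from the sketches above; esE.fc stands in for the ETS forecast, and the exact set of models averaged is an assumption.

```r
library(forecast)

esE.fc <- forecast(ets(salesTrainE, model = "ZZZ", restrict = FALSE), h = 12)

# simple (equal-weight) average of the point forecasts over the validation period
avgE.fc <- (naiveE$mean + esE.fc$mean + regE.fc$mean + nnE.fc$mean) / 4
accuracy(avgE.fc, salesValidE)
```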
| | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2016 | NA | NA | NA | NA | 449.4 | 532.9 | 594.1 | 534.4 | 550.3 | 567 | 627.3 | 621.7 |
| 2017 | 608.4 | 566.5 | 577.3 | 615.2 | NA | NA | NA | NA | NA | NA | NA | NA |
| | ME | RMSE | MAE | MPE | MAPE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|
| Test set | 52.55096 | 163.7467 | 131.5483 | 4.305709 | 20.12362 | 0.3375512 | 1.137395 |
Using the forecastHybrid package in R, we can get an ensemble average. I’ve tried this using equal weights.
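A sketch of the hybrid fit; the default model set matches the fitting messages below.

```r
library(forecastHybrid)

hybE    <- hybridModel(salesTrainE, weights = "equal")   # auto.arima, ets, thetam,
                                                         # nnetar, stlm, tbats
hybE.fc <- forecast(hybE, h = 12)
accuracy(hybE.fc, salesValidE)
```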
## Fitting the auto.arima model
## Fitting the ets model
## Fitting the thetam model
## Fitting the nnetar model
## Fitting the stlm model
## Fitting the tbats model
## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
## May 2016 512.0804 314.7279 790.3676 188.8337 916.2618
## Jun 2016 554.3334 315.0201 805.8577 185.1032 935.7745
## Jul 2016 551.8526 315.3594 821.3007 181.4448 955.2152
## Aug 2016 574.8607 315.7394 836.7029 177.8487 974.5936
## Sep 2016 573.5164 312.5858 852.0700 174.3063 993.9182
## Oct 2016 575.5531 305.9079 867.4069 170.8101 1027.1951
## Nov 2016 578.4258 298.6795 882.7181 167.3532 1061.6045
## Dec 2016 576.4629 290.4193 898.0077 163.9294 1094.7273
## Jan 2017 572.7404 280.3255 913.2791 160.5333 1126.7520
## Feb 2017 555.9694 270.8568 928.5358 157.1598 1157.8254
## Mar 2017 551.4869 261.9364 943.7806 153.8045 1188.0643
## Apr 2017 570.2692 253.5023 959.0163 146.7856 1217.5636
| | ME | RMSE | MAE | MPE | MAPE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|
| Test set | 60.62073 | 139.8875 | 108.7653 | 6.404726 | 16.20717 | 0.3291958 | 0.9903161 |
This plot shows the original series in black; it is obscured by the neural net fit model. The forecasts are in dotted lines.
Has a similar problem been solved by others? If yes, then please cite those sources.
Yes. Last semester, in Supply Chain with Professor Kohli, my team tackled inventory management at Sigco. We attempted to build a system that used historical data to forecast several months out, and with the forecasted values we used the Economic Order Quantity (EOQ) model to calculate safety stock and reorder points. I was tasked with the forecasting and the EOQ calculations. I am a little embarrassed to say how abbreviated that process was; now, armed with Predictive Analytics, I wanted to revisit the material to find the best forecasting model and to develop a systematic workflow for tackling various forecasting problems.
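For reference, a sketch of how a deployed forecast could feed the EOQ, safety-stock, and reorder-point calculations; the ordering cost, holding cost, lead time, and 95% service level are illustrative assumptions, not Sigco's actual figures.

```r
# Inputs from the forecasting step (here the hybrid forecast for item E from above)
demand.fc <- hybE.fc$mean                  # monthly point forecasts
D         <- sum(demand.fc)                # forecast annual demand (12 months)

# Assumed cost and lead-time parameters (illustrative only)
S           <- 50     # cost per order
H           <- 2      # holding cost per unit per year
lead.months <- 0.5    # two-week lead time, expressed in months

EOQ <- sqrt(2 * D * S / H)                 # economic order quantity

z       <- qnorm(0.95)                     # 95% cycle service level
sigma.d <- sd(salesValidE - demand.fc)     # monthly forecast-error variability (proxy)

safety.stock <- z * sigma.d * sqrt(lead.months)
reorder.pt   <- mean(demand.fc) * lead.months + safety.stock

c(EOQ = EOQ, safety.stock = safety.stock, reorder.pt = reorder.pt)
```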
SS10CLR96130 - Item F
Naive
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | 59.42500 | 104.71115 | 81.12500 | 14.131633 | 22.40321 | 1.0000000 | 0.4230421 | NA |
| Test set | 36.58333 | 82.01067 | 64.91667 | 6.635059 | 13.49779 | 0.8002054 | -0.0636735 | 0.8644523 |
Smoothing
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | 8.686765 | 67.33701 | 53.80847 | -1.087094 | 18.39327 | 0.6632785 | 0.0945714 | NA |
| Test set | 51.272257 | 77.49537 | 58.76113 | 9.560009 | 11.51168 | 0.7243282 | -0.0392103 | 0.894827 |
Regression
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | 0.0 | 57.03477 | 48.00000 | -3.2064104 | 17.18337 | 0.5916795 | 0.4010158 | NA |
| Test set | 11.6 | 66.06529 | 57.08333 | 0.9713111 | 12.15496 | 0.7036466 | 0.0445771 | 0.7738384 |
Neural Net
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | 0.0002352 | 1.40819 | 0.5181391 | -0.0240968 | 0.1664162 | 0.0063869 | -0.011977 | NA |
| Test set | 67.5524255 | 94.62602 | 77.7741946 | 13.2426397 | 15.8755564 | 0.9586958 | 0.072223 | 0.9867861 |
A forecast of the validation period averaged from the forecasts above shows a slight improvement for Item F, with a MAPE of roughly 11.7% (see the table below). I did not include an AR adjustment because the AR(1) coefficient on the residuals was not statistically significant.
| Averaged forecast (validation period) |
|---|
| 422.5203 |
| 440.9856 |
| 420.8139 |
| 451.2269 |
| 454.9842 |
| 400.8383 |
| 429.4176 |
| 471.0476 |
| 363.3738 |
| 422.7669 |
| 461.1446 |
| 414.8563 |
| | ME | RMSE | MAE | MPE | MAPE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|
| Test set | 41.752 | 73.03856 | 58.01135 | 7.602255 | 11.73361 | 0.0315091 | 0.7937082 |
Using the forecastHybrid package in R, we can get an ensemble average. I’ve tried this using equal weights.
## Fitting the auto.arima model
## Fitting the ets model
## Fitting the thetam model
## Fitting the nnetar model
## Fitting the stlm model
## Fitting the tbats model
## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
## May 2016 446.3148 297.2575 557.3755 232.29331 607.6622
## Jun 2016 472.0956 285.3829 625.5841 214.13264 683.7029
## Jul 2016 451.5615 274.3689 610.9070 197.28822 669.9871
## Aug 2016 464.9239 264.0321 640.8084 181.47949 700.8344
## Sep 2016 469.5398 254.2448 649.7249 166.51115 710.6821
## Oct 2016 443.6383 244.9140 608.1169 152.24085 687.7146
## Nov 2016 441.1531 235.9693 611.3011 138.56107 701.3944
## Dec 2016 449.5602 227.3560 657.3450 125.38825 721.0220
## Jan 2017 391.6485 186.8709 620.9246 111.18929 727.2995
## Feb 2017 431.8902 210.9585 628.9970 100.31039 739.6451
## Mar 2017 465.7235 203.1100 665.2750 88.30717 751.6483
## Apr 2017 446.7164 195.4612 644.4943 76.60927 763.3462
| | ME | RMSE | MAE | MPE | MAPE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|
| Test set | 23.35285 | 60.84033 | 51.25404 | 3.654906 | 10.61441 | 0.0180377 | 0.6897032 |
This plot shows the original series in black; it is obscured by the neural net fit model. The forecasts are in dotted lines.
SS06S6A100144 - Item I
Naive
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | 47.62500 | 84.79755 | 71.17500 | 16.797377 | 42.79801 | 1.0000000 | 0.1997600 | NA |
| Test set | 23.83333 | 81.13158 | 67.33333 | 6.000587 | 24.19084 | 0.9460251 | 0.0945242 | 1.020058 |
Smoothing
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | -3.673546 | 55.60147 | 45.16578 | -20.366560 | 38.61434 | 0.6345736 | 0.3152869 | NA |
| Test set | 3.181353 | 63.38607 | 43.14888 | -2.975198 | 15.38072 | 0.6062364 | 0.2180748 | 0.9465873 |
Regression
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | 0.000000 | 44.80830 | 36.44135 | -8.485234 | 27.86224 | 0.5119964 | 0.2449057 | NA |
| Test set | -3.308333 | 58.80067 | 48.77708 | -4.371698 | 17.72427 | 0.6853120 | 0.1383254 | 0.8244113 |
Neural Net
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | -0.0031909 | 0.9950608 | 0.4942731 | -0.0531801 | 0.3295865 | 0.0069445 | -0.3387125 | NA |
| Test set | -31.5140598 | 71.1659420 | 60.1171115 | -15.5243643 | 22.8793611 | 0.8446380 | 0.0363118 | 1.10175 |
A forecast of the validation period averaged from the forecasts above shows a MAPE of 17.3% for Item I. The validation forecast generated with the ets() function has a slightly lower MAPE.
| Averaged forecast (validation period) |
|---|
| 212.7410 |
| 242.7858 |
| 267.4868 |
| 253.6946 |
| 284.3508 |
| 324.4258 |
| 312.0073 |
| 315.5073 |
| 278.2871 |
| 286.7616 |
| 287.6429 |
| 294.7321 |
| | ME | RMSE | MAE | MPE | MAPE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|
| Test set | -1.951927 | 58.02839 | 47.20203 | -4.217668 | 17.30837 | 0.0781997 | 0.8249908 |
Using the forecastHybrid package in R, we can get an ensemble average. I’ve tried this using equal weights.
## Fitting the auto.arima model
## Fitting the ets model
## Fitting the thetam model
## Fitting the nnetar model
## Fitting the stlm model
## Fitting the tbats model
## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
## May 2016 256.6805 126.7862 402.1459 85.83098 470.0997
## Jun 2016 267.5406 144.5147 413.9139 102.18900 483.0035
## Jul 2016 305.8831 191.1664 425.8818 151.42218 496.1119
## Aug 2016 281.1042 145.9957 438.0714 103.57218 509.4607
## Sep 2016 296.6487 192.5464 450.5057 143.61897 523.0874
## Oct 2016 333.9771 190.9239 463.2092 140.07371 537.0319
## Nov 2016 333.2072 189.4338 476.2074 136.73098 551.3360
## Dec 2016 345.3758 188.0626 489.5273 133.57011 566.0435
## Jan 2017 316.7112 186.7990 503.1969 130.57374 581.1996
## Feb 2017 313.4112 178.6107 517.2448 127.72711 596.8513
## Mar 2017 325.2770 184.5572 531.7006 125.01752 613.0464
## Apr 2017 336.9771 183.5635 546.5943 122.43399 629.8336
| | ME | RMSE | MAE | MPE | MAPE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|
| Test set | -31.31614 | 69.39335 | 56.08105 | -15.51609 | 21.81169 | 0.1981013 | 1.116172 |
This plot shows the original series in black; it is obscured by the neural net fit model. The forecasts are in dotted lines.
SS03S6T7284 - Item M
Naive
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | 31.70000 | 49.76645 | 35.25000 | 13.45736 | 81.23248 | 1.000000 | 0.4232189 | NA |
| Test set | 61.66667 | 71.20978 | 61.66667 | 34.48862 | 34.48862 | 1.749409 | 0.0592041 | 1.751342 |
Smoothing
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | 4.753476 | 30.07296 | 20.68440 | -163.6502 | 194.29521 | 0.5867915 | 0.3818068 | NA |
| Test set | 57.346891 | 68.56287 | 60.54177 | 28.5995 | 31.64224 | 1.7174970 | 0.2460468 | 1.400521 |
Regression
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | 0.00000 | 26.29192 | 19.15721 | -102.52566 | 215.5284 | 0.543467 | 0.4344196 | NA |
| Test set | 54.92917 | 63.11317 | 56.51354 | 28.25757 | 29.7665 | 1.603221 | 0.0640557 | 1.335918 |
Adjusted Regression, 2nd layer
| | ME | RMSE | MAE | MPE | MAPE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|
| Test set | 52.74583 | 61.47627 | 54.45349 | 26.89222 | 28.51856 | 0.0800638 | 1.311077 |
Neural Net
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | -0.0076164 | 0.4022962 | 0.2476183 | -0.9907413 | 1.823636 | 0.0070246 | -0.3860262 | NA |
| Test set | 33.8152361 | 49.9233146 | 42.7500576 | 16.1206022 | 22.926653 | 1.2127676 | 0.0693361 | 1.103316 |
Adjusted Neural Net
| | ME | RMSE | MAE | MPE | MAPE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|
| Test set | 31.6319 | 48.93914 | 41.52983 | 14.75525 | 22.21025 | 0.1125532 | 1.09599 |
A forecast of the validation period averaged from the forecasts above shows a MAPE of 28.46% for Item M. The validation forecast generated using a neural net has a lower MAPE.
Average
| | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2016 | NA | NA | NA | NA | 98.9 | 132.9 | 116.4 | 128.5 | 157.5 | 142.3 | 125.2 | 143.1 |
| 2017 | 115.1 | 89.33 | 128.2 | 140.4 | NA | NA | NA | NA | NA | NA | NA | NA |
| | ME | RMSE | MAE | MPE | MAPE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|
| Test set | 51.93949 | 60.22005 | 53.61843 | 26.86657 | 28.46556 | 0.1061761 | 1.312795 |
Using the forecastHybrid package in R, we can get an ensemble average. I’ve tried this using equal weights.
## Fitting the auto.arima model
## Fitting the ets model
## Fitting the thetam model
## Fitting the nnetar model
## Fitting the stlm model
## Fitting the tbats model
## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
## May 2016 121.8632 25.61916 202.8826 -21.29964 249.8014
## Jun 2016 135.0413 24.48847 223.4124 -23.68519 275.3785
## Jul 2016 133.1990 23.36472 243.5718 -26.06013 308.3702
## Aug 2016 146.3383 22.24667 261.2452 -28.42635 331.8117
## Sep 2016 156.1184 21.13318 294.7240 -30.78559 339.8034
## Oct 2016 150.1534 20.02323 275.3751 -33.13943 331.6203
## Nov 2016 139.6438 18.91585 251.9757 -35.48933 312.0824
## Dec 2016 144.1291 17.81019 290.5302 -37.83660 319.9426
## Jan 2017 125.2073 16.70544 231.6331 -40.18247 288.5210
## Feb 2017 109.4724 15.60087 235.2173 -42.52807 293.3462
## Mar 2017 129.6952 14.49580 253.9524 -44.87445 298.1722
## Apr 2017 138.3580 13.38959 266.3558 -47.22256 302.9999
| | ME | RMSE | MAE | MPE | MAPE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|
| Test set | 42.64838 | 53.60892 | 46.01626 | 20.89573 | 24.10323 | 0.1728412 | 1.142197 |
This plot shows the original series in black; it is obscured by the neural net fit model. The forecasts are in dotted lines. All validation forecasts under-forecast.
SS04CLR96130 - Item N
Naive
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | 6.075 | 43.22644 | 33.82500 | 0.1282712 | 17.26042 | 1.0000000 | 0.3155666 | NA |
| Test set | -11.000 | 39.31709 | 30.16667 | -6.0323055 | 13.27579 | 0.8918453 | 0.3968885 | 0.7798844 |
Smoothing
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | 1.260821 | 26.46651 | 21.01479 | -1.117445 | 10.22671 | 0.6212798 | 0.1052558 | NA |
| Test set | -7.319592 | 35.25757 | 25.24042 | -4.855386 | 10.75110 | 0.7462061 | -0.0204894 | 0.6298392 |
Regression
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | 0.000000 | 27.61653 | 22.19255 | -2.1360152 | 11.64914 | 0.6560990 | 0.3219670 | NA |
| Test set | 3.841667 | 36.33801 | 25.75521 | 0.0067359 | 10.38407 | 0.7614252 | 0.0600393 | 0.6331893 |
Adjusted Regression, 2nd layer
| | ME | RMSE | MAE | MPE | MAPE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|
| Test set | 4.011323 | 36.43325 | 25.67414 | 0.0616684 | 10.34501 | 0.0692196 | 0.6349095 |
Neural Net
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | 0.0024011 | 0.4911374 | 0.2613425 | -0.0100761 | 0.1264097 | 0.0077263 | -0.0954704 | NA |
| Test set | 9.5688481 | 42.4134310 | 32.0484990 | 0.8003957 | 13.0123840 | 0.9474796 | 0.6339037 | 0.7125205 |
Adjusted Neural Net
| | ME | RMSE | MAE | MPE | MAPE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|
| Test set | 9.738504 | 42.5598 | 32.32121 | 0.8553281 | 13.12752 | 0.6425076 | 0.7150103 |
A forecast of the validation period averaged from the forecasts above shows a MAPE of 9.23% for Item N, lower than any of the individual models.
Average
| | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2016 | NA | NA | NA | NA | 244.7 | 262.7 | 296.7 | 287.3 | 270.9 | 324.3 | 277.4 | 200.9 |
| 2017 | 162.2 | 144.6 | 183.9 | 211.1 | NA | NA | NA | NA | NA | NA | NA | NA |
| | ME | RMSE | MAE | MPE | MAPE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|
| Test set | -1.227269 | 31.47252 | 22.38427 | -2.52014 | 9.231017 | 0.384626 | 0.5470095 |
Using the forecastHybrid package in R, we can get an ensemble average. I’ve tried this using equal weights.
## Fitting the auto.arima model
## Fitting the ets model
## Fitting the thetam model
## Fitting the nnetar model
## Fitting the stlm model
## Fitting the tbats model
## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
## May 2016 258.0678 187.6351 327.0125 161.26810 348.9808
## Jun 2016 280.1646 206.7271 349.0158 177.48343 371.5685
## Jul 2016 313.4935 235.1501 381.9589 205.28361 405.0813
## Aug 2016 296.4048 220.3486 364.1240 190.34061 387.8024
## Sep 2016 287.3077 219.9439 344.0036 189.90347 368.2251
## Oct 2016 347.4824 259.8386 442.6747 229.79078 467.4276
## Nov 2016 269.6604 203.9979 340.7654 180.10361 363.2624
## Dec 2016 203.6689 148.9412 275.3500 119.81087 296.1335
## Jan 2017 165.8680 113.9145 227.4463 83.86438 257.4964
## Feb 2017 152.3938 98.5328 226.0558 71.76003 256.1059
## Mar 2017 187.0999 109.5549 277.1800 82.30053 307.2301
## Apr 2017 227.3524 157.4277 299.6713 127.37760 327.3989
| | ME | RMSE | MAE | MPE | MAPE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|
| Test set | -11.41368 | 34.71745 | 25.29313 | -6.798074 | 11.11042 | 0.1838163 | 0.6214731 |
This plot shows the original series in black; it is obscured by the neural net fit model. The forecasts are in dotted lines. All validation forecasts under-forecast.
SS06BRN96130 - Item S
Naive
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | -1.37500 | 43.05433 | 30.825 | -13.66871 | 43.26580 | 1.0000000 | -0.1816915 | NA |
| Test set | 13.08333 | 25.69857 | 18.750 | 14.34804 | 21.17458 | 0.6082725 | 0.0662009 | 0.9427777 |
Smoothing
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | -0.0039956 | 33.90733 | 26.89572 | -20.801814 | 42.48986 | 0.8725294 | 0.0416140 | NA |
| Test set | 8.6509539 | 23.17158 | 20.71698 | 3.744613 | 24.53520 | 0.6720838 | 0.2560415 | 0.9053182 |
Regression
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | 0.000000 | 26.59540 | 20.17644 | -12.080545 | 29.99343 | 0.654548 | -0.1907813 | NA |
| Test set | 8.404167 | 25.75182 | 21.22760 | 5.651266 | 24.56574 | 0.688649 | 0.2103834 | 1.139387 |
Neural Net
| | ME | RMSE | MAE | MPE | MAPE | MASE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|---|
| Training set | 0.0000808 | 0.075242 | 0.0500806 | -0.0067769 | 0.086876 | 0.0016247 | 0.2417458 | NA |
| Test set | 4.6843148 | 34.874508 | 28.3823641 | -2.5890850 | 35.054424 | 0.9207580 | 0.3953903 | 1.780403 |
A forecast of the validation period averaged from the forecasts above shows a MAPE of 21.27% for Item S; here the averaging does not help, as the naive forecast alone has a marginally lower MAPE.
Average
| | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2016 | NA | NA | NA | NA | 76.09 | 81.67 | 86.79 | 99.8 | 85.58 | 84.82 | 65.21 | 78.36 |
| 2017 | 66.63 | 56.27 | 77.66 | 62.65 | NA | NA | NA | NA | NA | NA | NA | NA |
| | ME | RMSE | MAE | MPE | MAPE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|
| Test set | 8.705692 | 22.35973 | 18.54079 | 5.288709 | 21.26671 | 0.5000356 | 0.8795697 |
Using the forecastHybrid package in R, we can get an ensemble average. I’ve tried this using equal weights.
## Fitting the auto.arima model
## Fitting the ets model
## Fitting the thetam model
## Fitting the nnetar model
## Fitting the stlm model
## Fitting the tbats model
## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
## May 2016 75.31035 29.78108 124.0371 6.9727737 143.3062
## Jun 2016 69.53808 27.98427 120.3030 5.1759620 263.8151
## Jul 2016 81.68887 32.09831 136.3090 9.0951562 168.7960
## Aug 2016 89.79008 32.07419 148.5303 9.0710374 185.4254
## Sep 2016 88.45519 32.05007 150.7200 9.0469185 188.0969
## Oct 2016 83.01150 30.90409 140.4585 8.0957810 175.5828
## Nov 2016 71.20356 30.00568 122.3110 7.1973752 213.7870
## Dec 2016 76.80475 31.97771 124.5146 8.9745620 146.6223
## Jan 2017 68.29337 27.35145 120.3030 8.9504432 143.3062
## Feb 2017 64.57241 23.71684 133.8052 0.9085343 288.4490
## Mar 2017 67.21698 23.96159 122.6910 7.2720891 145.4993
## Apr 2017 64.38326 22.97090 120.3030 2.7053460 270.4249
| | ME | RMSE | MAE | MPE | MAPE | ACF1 | Theil’s U |
|---|---|---|---|---|---|---|---|
| Test set | 10.47763 | 21.70208 | 16.94499 | 7.415807 | 18.85166 | 0.2048708 | 0.8548713 |
This plot shows the original series in black; it is obscured by the neural net fit model. The forecasts are in dotted lines. All validation forecasts under-forecast.