Introduction

My task is to forecast how much cash will be withdrawn from 4 different ATMs in May 2010. The data were provided for this project, and the cash values are in hundreds of dollars.

Data Exploration

I will begin by plotting the cash withdrawn by each ATM.
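Here is a minimal sketch of how such a plot could be produced, assuming the data were read into a data frame named atm with DATE, ATM, and Cash columns (the frame and column names are assumptions):

    library(ggplot2)

    # Assumed data frame `atm` with columns DATE, ATM, and Cash
    ggplot(atm, aes(x = DATE, y = Cash)) +
      geom_line() +
      facet_wrap(~ ATM, ncol = 1, scales = "free_y") +
      labs(title = "Daily withdrawals by ATM", y = "Cash (hundreds of USD)")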

ATM #4 is utilized at a much higher rate than every other ATM in the data series, and there is an unusual spike in its series. According to a 2010 article in Time magazine, an average-sized ATM can hold about $200,000, so the spike of 10,919.76 (roughly $1.09 million, since the data are in hundreds of dollars) is an implausible figure. There are also some rows with missing ATM labels. Let's remove ATM #4 and the unlabeled rows from the dataset and replot the visualization:

It looks like ATM #3 was not utilized until recently. ATM #1 and ATM #2 seem very similar to each other, and there doesn't appear to be much of a trend in the data. Let's explore the distributions further with this summary:

ATM     Minimum   1st Qu.      Mean    Median   3rd Qu.    Maximum   NA's
ATM1       1.00     73.00     83.89     91.00    108.00     180.00      3
ATM2       0.00     25.50     62.58     67.00     93.00     147.00      2
ATM3       0.00      0.00      0.72      0.00      0.00      96.00      0
ATM4       1.56    124.33    474.04    403.84    704.51   10919.76      0

Are There Any Daily Effects?

There is a pattern in the time series, but the fluctuations are very tight. I wonder if it is explained by the day of the week: Friday and Saturday might see heavier usage, for example, while a Tuesday night might be calm. Let's examine this hypothesis:
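A minimal sketch of how this comparison could be drawn, again assuming the hypothetical atm data frame from above:

    library(ggplot2)

    # Assumed data frame `atm`; weekdays() extracts the day-of-week label
    atm$Weekday <- weekdays(as.Date(atm$DATE))
    ggplot(atm, aes(x = Weekday, y = Cash)) +
      geom_boxplot() +
      facet_wrap(~ ATM, scales = "free_y")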

It looks like there is some variation based on the day of the week; Thursday looks to be less busy than the other days. There is so much variation between the 4 ATMs that each will be modeled separately.

A Quick Note on Process

I will clean up the data before producing the forecasts. I will then explore some candidate modeling techniques. Once I have a handful of candidates, I will perform cross-validation on the models and compute the RMSE. The model that minimizes the RMSE during cross-validation will be selected as the model of choice.

ATM #1

Data Cleanup

I will begin with ATM #1. There are 3 missing observations that will need to be cleaned up.

Now that we have a complete data set, I will create the time series objects. Since there is a weekly effect, I will use a frequency of 7. This does distort the dates on the plot, however, so please disregard them.
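A minimal sketch of this step, assuming ATM #1's withdrawals sit in a vector named atm1 (the name is an assumption); tsclean from the forecast package interpolates the missing values:

    library(forecast)

    # Assumed vector `atm1` of daily withdrawals for ATM #1
    atm1_ts <- ts(atm1, frequency = 7)   # frequency 7 captures the weekly effect
    atm1_ts <- tsclean(atm1_ts)          # interpolates the 3 missing observations
    ggtsdisplay(atm1_ts)                 # series with ACF/PACF panels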

The repeating peaks at every 7th lag are readily apparent. There also appear to be interesting negative autocorrelations at the 3rd and 5th lags.

Model Creation

STL Decomposition Models

I will try a couple of seasonal decomposition models. I will set the seasonal window to 7 so it picks up the day of the week variation.

This seems promising to me. I will fit STL decomposition forecasts using both the ETS and ARIMA models and check their residual plots.
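A sketch of both fits using the stlf convenience function from the forecast package, carrying over the assumed atm1_ts object from the cleanup step:

    library(forecast)

    # STL decomposition forecast, with ETS and then ARIMA on the remainder
    fit_stl_ets   <- stlf(atm1_ts, s.window = 7, method = "ets")
    fit_stl_arima <- stlf(atm1_ts, s.window = 7, method = "arima")
    checkresiduals(fit_stl_ets)     # produces the Ljung-Box output below
    checkresiduals(fit_stl_arima)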

STL + ETS


    Ljung-Box test

data:  Residuals from STL +  ETS(A,N,N)
Q* = 13.759, df = 12, p-value = 0.3164

Model df: 2.   Total lags used: 14

The Ljung-Box test does not reject white-noise residuals (p = 0.32), so this model holds merit and should be taken into consideration.

STL + ARIMA


    Ljung-Box test

data:  Residuals from STL +  ARIMA(0,1,2)
Q* = 8.4479, df = 12, p-value = 0.7492

Model df: 2.   Total lags used: 14

This model also performed well. It is definitely a candidate for the cross-validation stage.

Holt-Winters


    Ljung-Box test

data:  Residuals from Holt-Winters' additive method
Q* = 28.877, df = 3, p-value = 0.000002376

Model df: 11.   Total lags used: 14

The Ljung-Box test strongly rejects white-noise residuals here (p < 0.001), so this method did not do as well as the STL models. I will still move it on to the next phase for comparison.

Holt-Winters with Box-Cox Adjustment


    Ljung-Box test

data:  Residuals from Holt-Winters' additive method
Q* = 18.824, df = 3, p-value = 0.0002973

Model df: 11.   Total lags used: 14

The Box-Cox adjustment improves the Ljung-Box statistic, though the test still rejects white-noise residuals. We will see how it fares in the cross-validation.
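For reference, a sketch of both Holt-Winters fits under the same assumed atm1_ts object; the Box-Cox variant simply passes a lambda estimated with BoxCox.lambda:

    library(forecast)

    # Holt-Winters' additive method, with and without a Box-Cox transform
    fit_hw    <- hw(atm1_ts, seasonal = "additive")
    fit_hw_bc <- hw(atm1_ts, seasonal = "additive",
                    lambda = BoxCox.lambda(atm1_ts))
    checkresiduals(fit_hw_bc)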

ARIMA


    Ljung-Box test

data:  Residuals from ARIMA(0,0,1)(0,1,2)[7]
Q* = 15.247, df = 11, p-value = 0.1715

Model df: 3.   Total lags used: 14
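For reference, a minimal sketch of the fit, assuming the order was selected automatically with auto.arima (an assumption; the selected model here is ARIMA(0,0,1)(0,1,2)[7]):

    library(forecast)

    fit_arima <- auto.arima(atm1_ts)   # selects the seasonal ARIMA order
    checkresiduals(fit_arima)          # Ljung-Box test shown above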

This model looks like it is performing well. Let's see how all of them stack up.

Model Selection

I will select the ARIMA model to produce the forecast for ATM #1, because it is the model that performed the best in the cross-validation.

ATM #2

Now I will repeat the above process for ATM #2. As this is a repeat, I will not include as much explanatory text.

Data Cleanup

There are 2 missing observations. These will be cleaned up using the tsclean function again.

Once again the ACF plot has the regular spikes at every multiple of 7.

Model Creation

STL Decomposition Models

Again I will try a couple of seasonal decomposition models. I will set the seasonal window to 7 so it picks up the day of the week variation.

This seems very similar to ATM #1.

STL + ETS


    Ljung-Box test

data:  Residuals from STL +  ETS(A,N,N)
Q* = 9.2372, df = 12, p-value = 0.6825

Model df: 2.   Total lags used: 14

This model performed much better than it did for the ATM #1 time series. It will be interesting to see how it does in cross-validation.

STL + ARIMA


    Ljung-Box test

data:  Residuals from STL +  ARIMA(2,1,2)
Q* = 7.4932, df = 10, p-value = 0.6782

Model df: 4.   Total lags used: 14

Interesting. The residuals have a bit more spread, but the left tail is shorter than in the STL + ETS model.

Holt-Winters


    Ljung-Box test

data:  Residuals from Holt-Winters' additive method
Q* = 37.435, df = 3, p-value = 0.00000003722

Model df: 11.   Total lags used: 14

This method did not perform as well as the others. I will, however, keep it in so I can compare it to ATM #1's statistics.

Holt-Winters with Box-Cox Adjustment


    Ljung-Box test

data:  Residuals from Holt-Winters' additive method
Q* = 34.776, df = 3, p-value = 0.0000001358

Model df: 11.   Total lags used: 14

The Box-Cox adjustment improves the statistic only slightly; the test still rejects white-noise residuals.

ARIMA


    Ljung-Box test

data:  Residuals from ARIMA(2,0,2)(0,1,1)[7]
Q* = 10.231, df = 9, p-value = 0.3321

Model df: 5.   Total lags used: 14

This model looks like it is performing well. Let's see how all of them stack up.

Cross Validation

In order to understand how well a model is likely to perform at predicting out-of-sample data, I will use the tsCV function to evaluate the models. As previously noted, my goal is to minimize the RMSE. First I will get the errors from the cross-validation process, then I will compute the RMSE.
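A minimal sketch of this procedure for one candidate, assuming the cleaned series is in atm2_ts; each candidate model gets wrapped in its own forecast function:

    library(forecast)

    # Forecast function wrapping the candidate model for tsCV
    f_arima <- function(y, h) forecast(auto.arima(y), h = h)
    e <- tsCV(atm2_ts, f_arima, h = 1)   # one-step cross-validation errors
    sqrt(mean(e^2, na.rm = TRUE))        # RMSE for this candidate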

Model                      RMSE
ARIMA                  33.56421
STL ARIMA              34.17962
STL ETS                34.96679
Adjusted Holt-Winters  41.53970
Holt-Winters           55.33134

Interesting. It looks like the ARIMA model was the top performer again. I find it interesting that the RMSEs are higher and more spread out for ATM #2: ATM #1's RMSEs ranged from roughly 30 to 39, while the RMSEs for ATM #2 ranged from 34 to 55. There is more variability in this data that isn't captured by the models.

Model Selection

As stated in the previous section I will use the ARIMA model to produce the forecast for ATM #2.

ATM #3

This series is quite different from the two preceding cases. You can see it in the plot:

Most of the values are zeros (points in red above), except for 3 points (shown in blue). The three non-zero points are the most recent. This presents a serious challenge.

Model Creation/Selection

There is one fundamental question with this dataset: are the three points outliers, or are they the beginning of a new normal? If they are outliers, one would expect the cash value to return to zero. If the three points are an indication of change, then the historical data have little relevance.

I will assume the new data are the beginning of a new normal. The challenge is that we have only three data points. In the absence of more data, I will calculate the average of these three points and use it for the forecast, with the recommendation that it be revised frequently. As the average is based on only three data points, it should not be considered stable.
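The computation itself is simple; a sketch, assuming ATM #3's withdrawals sit in a vector named atm3:

    # Mean of the three non-zero observations, repeated for the 31 days of May
    nonzero       <- atm3[atm3 > 0]
    atm3_forecast <- rep(mean(nonzero), 31)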

ATM #4

This ATM is different too, but not as radically different as ATM #3. I will use the same approach here as for ATM #1 and #2.

Data Cleanup

There is one major outlier in the data set. It will be cleaned up simply by using the tsclean function once again.

Once again there is the familiar pattern of peaks at the multiples of seven.

Model Creation

STL Decomposition Models

Again I will try a couple of seasonal decomposition models. I will set the seasonal window to 7 so it picks up the day of the week variation.

This seems very similar to ATM #1.

STL + ETS


    Ljung-Box test

data:  Residuals from STL +  ETS(A,N,N)
Q* = 30.61, df = 12, p-value = 0.002258

Model df: 2.   Total lags used: 14

This model did only a fair job: the Ljung-Box test rejects white-noise residuals (p = 0.002), and a couple of ACF spikes fall outside the bands. Let's see if the ARIMA model does better:

STL + ARIMA


    Ljung-Box test

data:  Residuals from STL +  ARIMA(1,0,1) with non-zero mean
Q* = 21.78, df = 11, p-value = 0.02614

Model df: 3.   Total lags used: 14

This is an improvement over the STL + ETS model, though there are still a couple of ACF spikes that fall outside the threshold.

Holt-Winters


    Ljung-Box test

data:  Residuals from Holt-Winters' additive method
Q* = 18.279, df = 3, p-value = 0.0003852

Model df: 11.   Total lags used: 14

This method seems to have performed better than any of the STL models.

Holt-Winters with Box-Cox Adjustment


    Ljung-Box test

data:  Residuals from Holt-Winters' additive method
Q* = 18.934, df = 3, p-value = 0.0002822

Model df: 11.   Total lags used: 14

This model performed very well. It doesn't look like any of the ACF spikes are outside the bands. This is a contender for sure.

ARIMA


    Ljung-Box test

data:  Residuals from ARIMA(0,0,3)(1,0,0)[7] with non-zero mean
Q* = 16.96, df = 9, p-value = 0.04934

Model df: 5.   Total lags used: 14

This model did fairly well, but it seems like the Holt-Winters with the Box-Cox adjustment did better. It's time to cross-validate and see how all of them perform.

Model Selection

Once again I will use the forecasts produced by the ARIMA model as the projections for ATM #4.

Summary

I set out to create forecasts for 4 different ATMs. After testing multiple approaches using cross-validation, I selected ARIMA models for ATM #1, #2, and #4, as ARIMA was the modeling technique with the lowest RMSE in each case. For ATM #3, I used the mean of the non-zero data (3 observations); this forecast should be updated as more data become available. I will finish this project by exporting my forecasts in the same file format as the original data.
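As a sketch of that final step, assuming the forecasts were collected into a data frame named results mirroring the original DATE/ATM/Cash layout, and that a CSV matches the original format (both assumptions):

    # Write the May 2010 forecasts in the original layout
    write.csv(results, "ATM_forecasts_May2010.csv", row.names = FALSE)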