Background

The purpose of this presentation is to explore a real-world application of a Random Forest model on time series data. Random Forest is one of the simplest, most robust, and most popular ML algorithms, and we wanted to come “full circle” with it: taking a subsection of this week’s tree-based model topic and applying it to time series data, something we covered in depth earlier in the semester.

Approach

In completing this exploration of a Random Forest model we:

  • pre-process and visualize our data upon reading it in,
  • difference and transform the data so as to make it stationary,
  • utilize time delay embedding to bring our data into a form an ML model can handle,
  • train our model, review its accuracy statistics, and compare its performance with that of a seasonal naive (SNAIVE) model.

Data

We use a time series from the German Statistical Office on German wage and income tax revenue from 1999 to 2019 (after tax redistribution). Note: we end up dropping the 2019 entries because they don’t cover the full year.

The data can be downloaded from STATWORX’s GitHub.


Pre-process and Visualize

In order to use random forest for time series data we transform, difference, and embed our data.

We read in and pre-process the German tax data so as to transform our csv file into a compatible time series format:

To do so, we drop the “Type” column, exclude data from 2019, combine month and year into one column, and convert our data from tabular to time series format, as sketched below.
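A minimal sketch of these steps (the file name and the column names "year", "type", and "amount" are illustrative assumptions, not necessarily those of the actual file):

# read in the raw data; file and column names are assumptions
tax <- read.csv("tax.csv", stringsAsFactors = FALSE)

tax <- tax[tax$year < 2019, ]  # exclude incomplete 2019 entries
tax$type <- NULL               # drop the "Type" column

# convert from tabular to time series format: a monthly series
# starting in January 1999
tax_ts <- ts(tax$amount, start = c(1999, 1), frequency = 12)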

Next, we verify that there are no missing data issues and proceed to visualize German Wage and Income Tax from 1999 to 2018:

We observe seasonality and trend in our data. German tax income, likely similar to American tax incomes, appears to have a major spike once a year (likely when personal taxes are due) with multiple smaller spikes throughout the year. Additionally, the total tax income is climbing on a year-to-year basis.

Data Preparation

What does this mean?

We have to difference our data to make our non-stationary time series stationary and to remove the level-to-level variance present in the data. Differencing time series data to make it stationary is an essential step for classical time series models, and it is just as essential for our random forest model.

Reminder: making our data stationary means that the mean and variance of the series are stable and do not change over time.

Difference and Transform

We apply first-order differencing, the difference between consecutive observations (i.e., one month to the next), to stabilize the mean of our time series, and a log transformation to stabilize the variance. The prior plot is set above our transformed plot to highlight the difference:
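In R this can be sketched in a couple of lines, assuming the tax_ts series from above (the log is applied first so that the differencing acts on the log scale):

# log to stabilize the variance, then first-order differencing
# (month to month) to stabilize the mean
tax_stat <- diff(log(tax_ts), lag = 1, differences = 1)

# plot the original series above the transformed one
plot(cbind(original = tax_ts, transformed = tax_stat))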

It appears that first order differencing and log transformation have set our data straight. We proceed with stationary data.

Time Delay Embedding

From here, we’ll train our random forest model and make forecasts for 2018, which we’ll later compare to the actual values to assess the accuracy of our model. In order to do so, we’ll later have to reverse our transformations to bring them to the original scale.

Before doing so, we have to bring our data into a form that a machine learning algorithm can handle. What we need to do, essentially, is transform a vector into a matrix, a structure that an ML algorithm can work with.

To do so, we make use of a concept called time delay embedding using the embed() function:

Time delay embedding represents a time series in a Euclidean space whose dimension is set by the number of lags, and it allows us to apply any linear or non-linear regression method to time series data, whether Random Forest, Gradient Boosting, etc. We set our desired number of lags to 6 months and the duration of our forecast to 12 months.
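A sketch of the embedding step (variable names are ours): embed(x, d) returns a matrix whose i-th row is (x[i+d-1], ..., x[i]), so with d = lags + 1 the first column holds the response and the remaining columns hold the six lagged predictors.

lags  <- 6   # number of lagged predictors (months)
ahead <- 12  # forecast horizon (months)

tax_mat <- embed(as.numeric(tax_stat), lags + 1)
colnames(tax_mat) <- c("y", paste0("lag", 1:lags))

y_col <- tax_mat[, 1]    # response: the current value
X_lag <- tax_mat[, -1]   # predictors: the six previous values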

Model Building

We make use of a direct forecasting strategy, split our data into training and test sets, and fit our “out of the box” random forest model:
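A sketch of the split and the direct-strategy fit with randomForest(), building on the embedding above (object names are ours; the last 12 rows are held out as the test period):

library(randomForest)

n_rows <- nrow(tax_mat)
train  <- tax_mat[1:(n_rows - ahead), ]

y_tr <- train[, 1]
X_tr <- train[, -1]

# newest available lag window: the six most recent values at the
# end of the training period
X_new <- train[nrow(train), 1:lags, drop = FALSE]
colnames(X_new) <- colnames(X_tr)

# direct strategy: one model per horizon h = 1, ..., 12, each
# regressing y_{t+h} on the same six lags known at time t
fc <- numeric(ahead)
for (h in 1:ahead) {
  y_h <- y_tr[h:length(y_tr)]                         # response shifted h ahead
  X_h <- X_tr[1:(nrow(X_tr) - h + 1), , drop = FALSE] # matching lag rows
  fit <- randomForest(x = X_h, y = y_h, ntree = 500)  # out-of-the-box RF
  fc[h] <- predict(fit, newdata = X_new)
}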

We train 12 models and get 12 separate forecasts (one for 1 month ahead, one for 2 months ahead, and so on up to 12 months). Before we can assess our Random Forest model, we have to transform the forecasts back to the original scale:

##                ME   RMSE      MAE     MPE     MAPE      ACF1 Theil's U
## Test set 173403.6 368796 253322.1 2.05931 2.646831 0.2240158 0.0734833

We reverse the effects of the earlier (log) transformation and differencing by exponentiating the cumulative sum of our transformed forecasts (reversing the log) and multiplying the result by the last observation of our time series (reversing the differencing). The resulting plot and output statistics hold some promise.
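A sketch of that back-transformation, assuming fc holds the 12 differenced-log forecasts from above and that December 2017 is the last observation before the forecast window:

library(forecast)  # for accuracy()

# last actual value before the forecast window
y_last <- as.numeric(window(tax_ts, start = c(2017, 12), end = c(2017, 12)))

# exp(cumsum(.)) reverses the differencing and the log; scaling by
# the last observation anchors the forecasts at the original level
fc_orig <- y_last * exp(cumsum(fc))

# compare the back-transformed forecasts against the actual 2018 values
accuracy(ts(fc_orig, start = c(2018, 1), frequency = 12),
         window(tax_ts, start = c(2018, 1)))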

Based on the plot, our forecast appears to capture the trend and seasonality of the data.

When we consider the accuracy metrics, the RMSE appears high on the surface, but recall that we divided the tax values by 1,000 when plotting; we are dealing with large tax revenue values, so a large RMSE is to be expected. We therefore put more emphasis on the MAPE. The MAPE (2.65%) is excellent, especially considering that this performance was measured on unseen test data.

For comparison, we visit the performance statistics of a Seasonal Naive model:
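A sketch of how this benchmark might be produced with the forecast package, assuming the same 2018 hold-out (the statistics follow):

library(forecast)

train_ts <- window(tax_ts, end = c(2017, 12))

# seasonal naive: each month's forecast is that month's value from
# the previous year
fit_snaive <- snaive(train_ts, h = 12)
accuracy(fit_snaive, window(tax_ts, start = c(2018, 1)))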

##                    ME     RMSE      MAE      MPE     MAPE     MASE      ACF1
## Training set 216938.7 470874.8 388618.5 2.852533 6.648711 1.000000 0.6303982
## Test set     485006.8 495096.8 485006.8 5.507943 5.507943 1.248028 0.1382461
##               Theil's U
## Training set         NA
## Test set     0.09267227

The plot looks decent, although it doesn’t quite capture the upward trend the way our random forest model did. When we compare accuracy statistics, we see that our Random Forest model had a far lower RMSE and a MAPE roughly half that of the seasonal naive model.

Our Random Forest model was far better than a seasonal naive model at predicting German wage and income tax for 2018.

Tuning (A Different Approach)

  1. Pick hyperparameters,
  2. Select an appropriate range of values for each, and
  3. Determine the “best” configuration.

The optimal method for creating training and testing sets for a time series is actually to use multiple training and testing sets, but that is computationally expensive. So, for the purposes of this presentation, we use caret’s createTimeSlices() function to create a hold-out test set; hold-out simply means splitting the dataset into one training set and one test set. We then tune our model using two approaches to selecting from a tuning grid: one with hold-out and the other with k-fold cross-validation.
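The slice shown below can be reproduced with createTimeSlices(); the arguments are inferred from the output (197 training indices, 12 test indices, a fixed window, and 209 embedded observations in this setup):

library(caret)

timeslice <- createTimeSlices(1:209,
                              initialWindow = 197,
                              horizon = 12,
                              fixedWindow = TRUE)
timeslice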

## $train
## $train$Training197
##   [1]   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18
##  [19]  19  20  21  22  23  24  25  26  27  28  29  30  31  32  33  34  35  36
##  [37]  37  38  39  40  41  42  43  44  45  46  47  48  49  50  51  52  53  54
##  [55]  55  56  57  58  59  60  61  62  63  64  65  66  67  68  69  70  71  72
##  [73]  73  74  75  76  77  78  79  80  81  82  83  84  85  86  87  88  89  90
##  [91]  91  92  93  94  95  96  97  98  99 100 101 102 103 104 105 106 107 108
## [109] 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
## [127] 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
## [145] 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162
## [163] 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180
## [181] 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197
## 
## 
## $test
## $test$Testing197
##  [1] 198 199 200 201 202 203 204 205 206 207 208 209

A Random Forest model’s two most important hyperparameters are ntree, the number of trees, and mtry, the number of predictors sampled as split candidates at each node.

Typically, good values for ntree sit in the range of a few hundred to a thousand; more trees generally mean greater accuracy but also greater computational expense, so for graduate students putting together a demonstration, 500 trees is enough.

The mtry parameter is the number of predictors considered as splitting candidates at each node. A good rule of thumb for regression is p/3, where p is the number of predictors; with our p = 6 lagged predictors, that suggests mtry = 2.

We will use the traditional hold-out methodology to find mtry first. The resulting object, holdout_result, says the best model is produced with mtry = 6:
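A sketch of that hold-out tuning run with caret::train(); the rolling-origin settings and the mtry grid are inferred from the output below, and X_lag and y_col are the embedded predictors and response from earlier:

library(caret)

holdout_ctrl <- trainControl(method = "timeslice",
                             initialWindow = 197,
                             horizon = 1,      # 1 observation held out
                             fixedWindow = TRUE)

tune_grid <- expand.grid(mtry = c(2, 3, 6))

holdout_result <- train(x = X_lag, y = y_col,
                        method = "rf",
                        ntree = 500,
                        tuneGrid = tune_grid,
                        trControl = holdout_ctrl)
holdout_result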

## Random Forest 
## 
## 209 samples
##   6 predictor
## 
## No pre-processing
## Resampling: Rolling Forecasting Origin Resampling (1 held-out with a fixed window) 
## Summary of sample sizes: 197, 197, 197, 197, 197, 197, ... 
## Resampling results across tuning parameters:
## 
##   mtry  RMSE        Rsquared  MAE       
##   2     0.05468396  NaN       0.05468396
##   3     0.05232497  NaN       0.05232497
##   6     0.05199089  NaN       0.05199089
## 
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was mtry = 6.

Here we set up k-fold cross-validation to help us figure out which configuration is ‘best’. In k-fold cross-validation, the dataset is randomly split into k groups (folds); each fold takes a turn as the validation set while the model is trained on the remaining folds.

Cross-validation is valid for a time series if the model is purely autoregressive. This means that k-fold CV works if the predictors in the model are lagged versions of the response variable, which is exactly our setup after time delay embedding.

Using repeated cross-validation, we instead get an mtry value of 2 for the best model:
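A sketch of the repeated cross-validation run (10 folds, 3 repeats, matching the output below, and reusing the same tuning grid):

cv_ctrl <- trainControl(method = "repeatedcv",
                        number = 10,   # 10 folds
                        repeats = 3)   # repeated 3 times

cv_result <- train(x = X_lag, y = y_col,
                   method = "rf",
                   ntree = 500,
                   tuneGrid = tune_grid,
                   trControl = cv_ctrl)
cv_result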

## Random Forest 
## 
## 209 samples
##   6 predictor
## 
## No pre-processing
## Resampling: Cross-Validated (10 fold, repeated 3 times) 
## Summary of sample sizes: 189, 189, 189, 188, 188, 189, ... 
## Resampling results across tuning parameters:
## 
##   mtry  RMSE        Rsquared   MAE       
##   2     0.09288249  0.9710107  0.06447414
##   3     0.09344366  0.9701619  0.06346654
##   6     0.10117577  0.9645287  0.06610264
## 
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was mtry = 2.

##                ME     RMSE      MAE      MPE     MAPE
## Test set 453852.4 485538.1 453852.4 5.118042 5.118042
## Test set 350752.4 459037.6 362697.2 3.727779 3.881447

The hyperparameter-tuned model (2nd row) outperformed the un-tuned model (1st row), but it did not outperform our earliest RF model (recall that model’s MAPE was under 3%).

Conclusion

It appears that this was a good real-world application of a Random Forest model. We were able to predict German wage and income taxes for a full year with less than 3% mean absolute percentage error (MAPE), while also highlighting the impact hyperparameter tuning can have during the model-building and selection phase.

That said, we would have anticipated that our hyperparameter-tuned model would be the best model, but it was not. While we aren’t certain why, our first inkling is that we may have overfit the tuned model, which would explain why it didn’t perform as well as it should have on test data.

With regard to improving the approach, we believe the workflow outlined above could be improved by:

  1. experimenting with a Box-Cox rather than a log transformation, and
  2. comparing our RF (bagging) model with a boosting model (e.g., AdaBoost or XGBoost).

Reference

This presentation was made with reference to the following resource(s):

  • STATWORX’s blog post on time series forecasting with Random Forest, together with the accompanying data and code on STATWORX’s GitHub.