Machine learning models are widely used and have many applications in classification and regression tasks. Due to increasing computational power and the availability of new data sources and methods, ML models are becoming more and more complex. Models created with techniques like boosting, bagging, or neural networks are true black boxes: it is hard to trace the link between input variables and model outcomes. They are used because of their high performance, but their lack of interpretability is one of their weakest sides.

In many applications we need to know, understand, or prove how input variables are used in the model and what impact they have on the final model prediction. DALEX is a set of tools that help to understand how complex models work.

[All content is summarized from https://pbiecek.github.io/DALEX_docs/index.html and the package’s documentation.]

Exploring the data

Create Model Explainer

Black-box models may have very different structures. The explain() function creates a unified representation of a model, which can be further processed by various explainers.
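A minimal sketch of how such an explainer is built (the four-variable formula below is an assumption reconstructed from the printed output; the wine data frame is assumed to be the white wine quality data, available e.g. in the breakDown package):

library(DALEX)

# Fit a simple linear model on four variables (hypothetical reconstruction)
wine_lm_model4 <- lm(quality ~ pH + residual.sugar + sulphates + alcohol,
                     data = wine)

# Wrap the model in a unified explainer; y enables residual and
# importance explainers later on
wine_lm_explainer4 <- explain(wine_lm_model4,
                              data = wine, y = wine$quality,
                              label = "model_4v")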

Let’s print the explainer created for the linear model:

wine_lm_explainer4
## Model label:  model_4v 
## Model class:  lm 
## Data head  :
##   fixed.acidity volatile.acidity citric.acid residual.sugar chlorides
## 1           7.0             0.27        0.36           20.7     0.045
## 2           6.3             0.30        0.34            1.6     0.049
##   free.sulfur.dioxide total.sulfur.dioxide density  pH sulphates alcohol
## 1                  45                  170   1.001 3.0      0.45     8.8
## 2                  14                  132   0.994 3.3      0.49     9.5
##   quality
## 1       6
## 2       6

Similarly, an explainer for the random forest model can be created:
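A corresponding sketch for the forest (assuming the randomForest package and the same data and formula as above):

library(randomForest)

wine_rf_model4 <- randomForest(quality ~ pH + residual.sugar + sulphates + alcohol,
                               data = wine)
wine_rf_explainer4 <- explain(wine_rf_model4,
                              data = wine, y = wine$quality,
                              label = "model_rf")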

wine_rf_explainer4
## Model label:  model_rf 
## Model class:  randomForest.formula,randomForest 
## Data head  :
##   fixed.acidity volatile.acidity citric.acid residual.sugar chlorides
## 1           7.0             0.27        0.36           20.7     0.045
## 2           6.3             0.30        0.34            1.6     0.049
##   free.sulfur.dioxide total.sulfur.dioxide density  pH sulphates alcohol
## 1                  45                  170   1.001 3.0      0.45     8.8
## 2                  14                  132   0.994 3.3      0.49     9.5
##   quality
## 1       6
## 2       6

Model Performance Plots

Moreover, the model’s residuals can be plotted easily.

Again, let’s start with viewing the data:

explainer_lm
## Model label:  lm 
## Model class:  lm 
## Data head  :
##   satisfaction_level last_evaluation number_project average_montly_hours
## 1               0.38            0.53              2                  157
## 2               0.80            0.86              5                  262
##   time_spend_company Work_accident left promotion_last_5years sales salary
## 1                  3             0    1                     0 sales    low
## 2                  6             0    1                     0 sales medium
explainer_rf
## Model label:  randomForest 
## Model class:  randomForest.formula,randomForest 
## Data head  :
##   satisfaction_level last_evaluation number_project average_montly_hours
## 1               0.38            0.53              2                  157
## 2               0.80            0.86              5                  262
##   time_spend_company Work_accident left promotion_last_5years sales salary
## 1                  3             0    1                     0 sales    low
## 2                  6             0    1                     0 sales medium
Now, we can plot the residuals to compare our fitted linear and random forest models:
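A minimal sketch of the call (model_performance() computes the residuals; the default plot shows the reversed empirical CDF of absolute residuals, and the exact look depends on your DALEX version):

mp_lm <- model_performance(explainer_lm)
mp_rf <- model_performance(explainer_rf)

# Passing both objects to plot() overlays the two models
plot(mp_lm, mp_rf)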
Comparison of residuals for linear model and random forest

Alternatively, we can view the residuals as a boxplot:
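Assuming the same model_performance objects as above, the boxplot version is requested with the geom argument:

# Boxplots of absolute residuals, one per model
plot(mp_lm, mp_rf, geom = "boxplot")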
Comparison of residuals for linear model and random forest

Feature importance

Usually, we are not only interested in the accuracy and validity of the model, but would also like to know which features influence the predictions. With DALEX, we can perform both model-agnostic and model-specific analyses:

Model-agnostic

Model-agnostic variable importance is calculated by means of permutations. We simply subtract the loss calculated for the validation dataset from the loss calculated for the validation dataset with permuted values of a single variable. This concept and some extensions are described in (Fisher, Rudin, and Dominici 2018).

How does feature importance differ between the linear and the random forest models? Since we are using the same loss function and the same method for variable permutations, the losses calculated for both models can be directly compared.
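A minimal sketch of the comparison (variable_importance() was renamed model_parts() in later DALEX releases; loss_root_mean_square is one of the loss functions shipped with DALEX):

vi_lm <- variable_importance(explainer_lm, loss_function = loss_root_mean_square)
vi_rf <- variable_importance(explainer_rf, loss_function = loss_root_mean_square)

# Same loss and permutation scheme, so the losses are directly comparable
plot(vi_lm, vi_rf)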

Model-agnostic variable importance plot. Right edges correspond to the loss function after permutation of a single variable; left edges correspond to the loss of the full model

Model-specific

Some models have built-in tools for the calculation of variable importance. Random forest uses two different measures: one based on out-of-bag data and a second based on gains in nodes. Read more about this approach in (Liaw and Wiener 2002).

Let’s check out the default importance measure for random forest models:
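A minimal sketch, assuming the fitted forest is available as HR_rf_model (the model name used later in this section); importance() and varImpPlot() come from the randomForest package:

library(randomForest)

# Numeric importance measures stored inside the fitted forest
importance(HR_rf_model)

# Dot chart of the default importance measure
varImpPlot(HR_rf_model)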
Built-in variable importance plot for random forest

It is easy to assess variable importance for linear and generalized linear models, since model coefficients have a direct interpretation.

Forest plots were initially used in meta-analysis to visualize effects in different studies. At present, however, they are frequently used to present summary characteristics of models with linear structure, such as those created with the lm or glm functions.
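A minimal sketch, assuming a fitted linear model named HR_lm_model; forest_model() is the central function of the forestmodel package:

library(forestmodel)

# Forest plot: one row per coefficient with its confidence interval
forest_model(HR_lm_model)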

Forest plot created with forestmodel package

We can also visualize coefficients easily:
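A minimal sketch with sjPlot (older sjPlot releases used sjp.lm() instead of plot_model(); HR_lm_model is again an assumed model name):

library(sjPlot)

# Dot-and-whisker plot of the model coefficients
plot_model(HR_lm_model)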
Model coefficients plotted with sjPlot package

Variable responses

The explainers presented here are designed to help us better understand the relation between a variable and the model output.

First, we look at Partial Dependence Plots (PDP), one of the most popular methods for exploring the relation between a continuous variable and the model outcome. Then, we look at Accumulated Local Effects Plots (ALEP), an extension of PDP better suited for highly correlated variables. Finally, we will use Merging Path Plots (MPM), a method for exploring the relation between a categorical variable and the model outcome.

PDP
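A minimal sketch of a PDP call (in the DALEX version used here the function is variable_response() with type = "pdp", wrapping the pdp package; later releases renamed it model_profile()):

# Partial dependence of the model output on satisfaction_level
pdp_rf <- variable_response(explainer_rf,
                            variable = "satisfaction_level",
                            type = "pdp")
plot(pdp_rf)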

Relation between output from HR_rf_model and variable satisfaction_level

ALEP
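The same function produces ALE plots via type = "ale" (wrapping the ALEPlot package); passing several objects to plot() overlays the models:

ale_rf <- variable_response(explainer_rf,
                            variable = "satisfaction_level",
                            type = "ale")
ale_lm <- variable_response(explainer_lm,
                            variable = "satisfaction_level",
                            type = "ale")
plot(ale_rf, ale_lm)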

Relation between output from models HR_rf_model and HR_lm_model and the variable satisfaction_level, calculated with accumulated local effects

MPM
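For a categorical variable like salary, type = "factor" produces Merging Path Plots (wrapping the factorMerger package); this is a sketch under the same assumptions as above:

mpp_rf <- variable_response(explainer_rf, variable = "salary", type = "factor")
mpp_lm <- variable_response(explainer_lm, variable = "salary", type = "factor")
plot(mpp_rf, mpp_lm)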

Merging Path Plot for the relation between output from models HR_rf_model and HR_lm_model and the variable salary