TidyModels is the newer iteration of Max Kuhn's caret and can be used for a wide range of machine learning tasks. This modelling framework takes a different approach to modelling, allowing for a more structured workflow, and, like the tidyverse, provides a whole set of packages for making the machine learning process easier. I will touch on a number of these packages in the following sub-sections.
This framework supersedes the modelling content in R for Data Science; Hadley Wickham admitted he needed a better modelling solution at the time, and Max Kuhn and team have delivered on this.
The aim of this webinar is to walk through each stage of building a model with TidyModels, from importing and splitting the data through to preprocessing, model fitting, evaluation and tuning.
The framework of a TidyModels approach flows as follows: split the data with rsample, preprocess it with recipes, specify the model with parsnip, bundle the pieces together with workflows, and then evaluate and tune with yardstick, tune and dials.
I will show you the steps in the following tutorials.
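To make that flow concrete before we get to the healthcare example, here is a minimal, self-contained sketch on a built-in dataset (mtcars); the dataset and object names are purely illustrative, and the stranded patient model below follows exactly the same shape.
library(tidymodels)

# Illustrative only: predict automatic vs manual transmission from mtcars
mt_df <- mtcars
mt_df$am <- factor(mt_df$am)

set.seed(123)
mt_split <- initial_split(mt_df, prop = 3/4)              # rsample: split the data
mt_rec   <- recipe(am ~ mpg + wt + hp,
                   data = training(mt_split)) %>%
  step_normalize(all_predictors())                        # recipes: preprocessing steps
mt_mod   <- logistic_reg() %>% set_engine("glm")          # parsnip: model specification
mt_wf    <- workflow() %>%
  add_recipe(mt_rec) %>%
  add_model(mt_mod)                                       # workflows: bundle recipe + model
mt_fit   <- fit(mt_wf, data = training(mt_split))         # fit on the training data
predict(mt_fit, new_data = testing(mt_split))             # predict on the hold-out test data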
I will load in the stranded patient data - a stranded patient is a patient who has been in hospital for longer than 7 days; we also call these patients Long Waiters. The import steps are below and use the NHSRdatasets package to load the data in:
# Load the core packages used throughout (tidyverse for data wrangling, tidymodels for the modelling framework)
library(tidyverse)
library(tidymodels)

# Read in the data
library(NHSRdatasets)
strand_pat <- NHSRdatasets::stranded_data %>%
  setNames(c("stranded_class", "age", "care_home_ref_flag", "medically_safe_flag",
             "hcop_flag", "needs_mental_health_support_flag",
             "previous_care_in_last_12_month", "admit_date", "frail_descrip")) %>%
  mutate(stranded_class = factor(stranded_class)) %>%
  drop_na()

print(head(strand_pat))
## # A tibble: 6 × 9
## stranded_class age care_home_ref_flag medically_safe_flag hcop_flag
## <fct> <int> <int> <int> <int>
## 1 Not Stranded 50 0 0 0
## 2 Not Stranded 31 1 0 1
## 3 Not Stranded 32 0 1 0
## 4 Not Stranded 69 1 1 0
## 5 Not Stranded 33 0 0 1
## 6 Stranded 75 1 1 0
## # … with 4 more variables: needs_mental_health_support_flag <int>,
## # previous_care_in_last_12_month <int>, admit_date <chr>, frail_descrip <chr>
As this is a classification problem we need to look at the class imbalance in the outcome variable, i.e. the thing we are trying to predict.
The following code looks at the class balance as a count and as a proportion. I will then use the second element of the class balance table, the stranded patients, as the number of people who are long waiters should be lower than those who aren't; otherwise we would be offering a very poor service to patients.
class_bal_table <- table(strand_pat$stranded_class)
prop_tab <- prop.table(class_bal_table)
upsample_ratio <- (class_bal_table[2] / sum(class_bal_table))

print(prop_tab)
##
## Not Stranded     Stranded
##    0.6552217    0.3447783
print(class_bal_table)
##
## Not Stranded     Stranded
##          458          241
It is always a good idea to inspect the data types of the variables we are working with. I generally separate the variable names out into factors, integer/numeric columns and character vectors:
strand_pat$admit_date <- as.Date(strand_pat$admit_date, format = "%d/%m/%Y") # Format as a Date to work with the recipes date steps
factors <- names(select_if(strand_pat, is.factor))
numbers <- names(select_if(strand_pat, is.numeric))
characters <- names(select_if(strand_pat, is.character))

print(factors); print(numbers); print(characters)
## [1] "stranded_class"
## [1] "age" "care_home_ref_flag"
## [3] "medically_safe_flag" "hcop_flag"
## [5] "needs_mental_health_support_flag" "previous_care_in_last_12_month"
## [1] "frail_descrip"
The rsample package makes it easy to divide your data up. To view all of its functionality, navigate to the rsample vignette.
We will divide the data into a training and a test sample. This is the simplest approach to testing a model's accuracy and likely future performance on unseen data. Here we treat the test data as the unseen data, allowing us to evaluate whether the model is fit to be released into the wild, or not.
# Partition into training and hold out test / validation sample
set.seed(123)
split <- rsample::initial_split(strand_pat, prop = 3/4)
train_data <- rsample::training(split)
test_data <- rsample::testing(split)

Recipes is an excellent package. I have for years done feature engineering, dummy coding and feature selection with caret, also a great package, but recipes makes the process much simpler. The first part of the recipe specifies the model formula and the data, and then you add recipe steps; the analogy is baking, with each step adding a specific ingredient. For all the particular steps that recipes contains, go directly to the recipes site.
stranded_rec <-
  recipe(stranded_class ~ ., data = train_data) %>%
  # The stranded class is what we are trying to predict and we are using the training data
  step_date(admit_date, features = c("dow", "month")) %>%
  # step_date creates additional features (day of week, month) from the date
  step_rm(admit_date) %>%
  # Remove the raw date, as we have created features off of it and keeping it adds redundant information
  themis::step_upsample(stranded_class, over_ratio = as.numeric(upsample_ratio)) %>%
  # themis upsampling step to add extra copies of the minority class i.e. stranded patients
  step_dummy(all_nominal(), -all_outcomes()) %>%
  # Automatically create dummy variables for all categorical (nominal) predictors
  step_zv(all_predictors()) %>%
  # Get rid of features that have zero variance
  step_normalize(all_predictors()) # ML models train better when the data is centered and scaled

print(stranded_rec) # Terminology is to use recipe
## Recipe
##
## Inputs:
##
## role #variables
## outcome 1
## predictor 8
##
## Operations:
##
## Date features from admit_date
## Delete terms admit_date
## Up-sampling based on stranded_class
## Dummy variables from all_nominal(), -all_outcomes()
## Zero variance filter on all_predictors()
## Centering and scaling for all_predictors()
To look up some of these steps, I have previously covered them in a caret tutorial. For the full list of recipe steps refer to the link above the code chunk.
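As an optional sanity check, which is not part of the original webinar flow, you can prep() the recipe and bake() it to inspect exactly what the processed training data will look like once the date features, dummy variables, scaling and upsampling have been applied:
# prep() estimates the recipe steps on the training data;
# bake(new_data = NULL) returns the processed (and upsampled) training set
baked_train <- stranded_rec %>%
  prep() %>%
  bake(new_data = NULL)

glimpse(baked_train)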
The parsnip package is the modelling interface for TidyModels. parsnip does not yet expose as many algorithms as caret, but it makes working in the tidy way much simpler.
Here we will create a basic logistic regression as our baseline model. If you want a second tutorial around model ensembling in TidyModels with baguette and stacks, then I would be happy to arrange this, but that is a session in itself.
Logistic regression is the choice here because it is a nice generalised linear model that most people have encountered.
TidyModels has a workflow structure which we will build in the next few steps:
In TidyModels you have to create an instance of the model in memory before working with it:
lr_mod <-
  parsnip::logistic_reg() %>%
  set_engine("glm")

print(lr_mod)
## Logistic Regression Model Specification (classification)
##
## Computational engine: glm
The next step is to create the model workflow, connecting the newly instantiated model to the recipe we built earlier:
# Create model workflow
strand_wf <-
  workflow() %>%
  add_model(lr_mod) %>%
  add_recipe(stranded_rec)

print(strand_wf)
## ══ Workflow ════════════════════════════════════════════════════════════════════
## Preprocessor: Recipe
## Model: logistic_reg()
##
## ── Preprocessor ────────────────────────────────────────────────────────────────
## 6 Recipe Steps
##
## • step_date()
## • step_rm()
## • step_upsample()
## • step_dummy()
## • step_zv()
## • step_normalize()
##
## ── Model ───────────────────────────────────────────────────────────────────────
## Logistic Regression Model Specification (classification)
##
## Computational engine: glm
The next step is fitting the model to our data:
# Create the model fit
strand_fit <-
  strand_wf %>%
  fit(data = train_data)

The final step is to use extract_fit_parsnip() to retrieve the fitted model from the workflow, and tidy() to return the coefficients:
strand_fitted <- strand_fit %>%
  extract_fit_parsnip() %>%
  tidy()

print(strand_fitted)
## # A tibble: 18 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) -0.284 0.147 -1.94 0.0526
## 2 age 0.724 0.237 3.05 0.00226
## 3 care_home_ref_flag 0.0717 0.110 0.653 0.514
## 4 medically_safe_flag 0.00922 0.111 0.0831 0.934
## 5 hcop_flag -0.178 0.110 -1.62 0.106
## 6 needs_mental_health_support_flag -0.0825 0.113 -0.731 0.465
## 7 previous_care_in_last_12_month 2.35 0.405 5.81 0.00000000632
## 8 frail_descrip_Fall.patient.history -0.105 0.148 -0.711 0.477
## 9 frail_descrip_Mobility.problems -0.228 0.142 -1.61 0.108
## 10 frail_descrip_No.index.item 0.828 0.254 3.26 0.00113
## 11 admit_date_dow_Mon -0.0405 0.143 -0.284 0.777
## 12 admit_date_dow_Tue -0.198 0.146 -1.36 0.173
## 13 admit_date_dow_Wed -0.0206 0.138 -0.149 0.881
## 14 admit_date_dow_Thu 0.127 0.137 0.926 0.355
## 15 admit_date_dow_Fri -0.0170 0.139 -0.122 0.903
## 16 admit_date_dow_Sat 0.0752 0.130 0.577 0.564
## 17 admit_date_month_Feb 0.333 0.127 2.63 0.00855
## 18 admit_date_month_Dec 0.0215 0.116 0.186 0.853
As an optional step I have created a plot to visualise significance. This only applies to linear and generalised linear models, which report a p value for each coefficient (derived from t or Wald z statistics). The visualisation code is below:
# Add significance column to tibble using mutate
strand_fitted <- strand_fitted %>%
  mutate(Significance = ifelse(p.value < 0.05, "Significant", "Insignificant")) %>%
  arrange(desc(p.value))

# Create a ggplot object to visualise significance
plot <- strand_fitted %>%
  ggplot(aes(x = term, y = p.value, fill = Significance)) +
  geom_col() +
  theme(axis.text.x = element_text(face = "bold", color = "#0070BA",
                                   size = 8, angle = 90)) +
  labs(y = "P value", x = "Terms",
       title = "P value significance chart",
       subtitle = "A chart to represent the significant variables in the model",
       caption = "Produced by Gary Hutson")

plotly::ggplotly(plot)
#ggsave("Figures/p_val_plot.png", plot) # Save the plot

Now we will assess how well the model predicts on the test (hold-out) data, to evaluate whether we want to productionise the model or abandon it at this stage. This is implemented below:
class_pred <- predict(strand_fit, test_data) # Get the class label predictions
prob_pred <- predict(strand_fit, test_data, type = "prob") # Get the probability predictions

lr_predictions <- data.frame(class_pred, prob_pred) %>%
  setNames(c("LR_Class", "LR_NotStrandedProb", "LR_StrandedProb")) # Combine into a tibble and rename

stranded_preds <- test_data %>%
  bind_cols(lr_predictions)

print(tail(lr_predictions))
##         LR_Class LR_NotStrandedProb LR_StrandedProb
## 170 Not Stranded 0.7742604862 0.2257395
## 171 Not Stranded 0.8561677713 0.1438322
## 172 Not Stranded 0.8630439698 0.1369560
## 173 Stranded 0.0001198593 0.9998801
## 174 Not Stranded 0.7920077663 0.2079922
## 175 Stranded 0.4827049681 0.5172950
Yardstick is another tool in the TidyModels arsenal. It is useful for generating quick summary statistics and evaluation metrics. I will grab the area under the curve estimates to show how well the model fits:
roc_plot <-
  stranded_preds %>%
  roc_curve(truth = stranded_class, LR_NotStrandedProb) %>%
  autoplot()

print(roc_plot)

I like ROC plots, but they only show the sensitivity (how good the model is at predicting stranded patients) and the specificity (how good it is at predicting those who are not stranded). For binary classification problems I also like to look at the overall accuracy and balanced accuracy on a confusion matrix.
I use the caret package and its confusionMatrix() function to do this:
library(caret)
## Loading required package: lattice
##
## Attaching package: 'caret'
## The following objects are masked from 'package:yardstick':
##
## precision, recall, sensitivity, specificity
## The following object is masked from 'package:purrr':
##
## lift
cm <- caret::confusionMatrix(stranded_preds$stranded_class,
                             stranded_preds$LR_Class)

print(cm)
## Confusion Matrix and Statistics
##
## Reference
## Prediction Not Stranded Stranded
## Not Stranded 115 7
## Stranded 31 22
##
## Accuracy : 0.7829
## 95% CI : (0.7144, 0.8415)
## No Information Rate : 0.8343
## P-Value [Acc > NIR] : 0.9698561
##
## Kappa : 0.4103
##
## Mcnemar's Test P-Value : 0.0001907
##
## Sensitivity : 0.7877
## Specificity : 0.7586
## Pos Pred Value : 0.9426
## Neg Pred Value : 0.4151
## Prevalence : 0.8343
## Detection Rate : 0.6571
## Detection Prevalence : 0.6971
## Balanced Accuracy : 0.7731
##
## 'Positive' Class : Not Stranded
##
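As an aside, and not part of the caret approach used here, the same style of summary can also be produced with yardstick alone, which avoids attaching caret and masking yardstick's functions:
# yardstick-native confusion matrix and summary metrics
stranded_preds %>%
  yardstick::conf_mat(truth = stranded_class, estimate = LR_Class) %>%
  summary()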
On the back of the Advanced Modelling course I did for the NHS-R Community I have created a package to work with the outputs of a confusion matrix. This package is aimed at flattening binary and multi-class confusion matrix results.
library(ConfusionTableR)
# binary_visualiseR() is a plotting function, so we call it for its side effect
# rather than assigning its result over the caret cm object created above
ConfusionTableR::binary_visualiseR(
  train_labels = stranded_preds$stranded_class,
  truth_labels = stranded_preds$LR_Class,
  class_label1 = "Not Stranded",
  class_label2 = "Stranded",
  quadrant_col1 = "#28ACB4",
  quadrant_col2 = "#4397D2",
  custom_title = "Stranded Patient Confusion Matrix",
  text_col = "black",
  cm_stat_size = 1.2,
  round_dig = 2)

As this is a binary classification problem, there is the potential to store the outputs of the model in a database. The ConfusionTableR package can do this for you: for a binary classification model you would use binary_class_cm() to output the results to a list, from which you can retrieve the flattened, one-row version of the confusion matrix. This works very much like the broom package does for linear regression outputs.
cm_binary_class <- ConfusionTableR::binary_class_cm(
  train_labels = stranded_preds$stranded_class,
  truth_labels = stranded_preds$LR_Class)
## [INFO] Building a record level confusion matrix to store in dataset
## [INFO] Build finished and to expose record level cm use the record_level_cm list item

# Expose the record level confusion matrix
glimpse(cm_binary_class$record_level_cm)
## Rows: 1
## Columns: 23
## $ Pred_Not.Stranded_Ref_Not.Stranded <int> 115
## $ Pred_Stranded_Ref_Not.Stranded <int> 31
## $ Pred_Not.Stranded_Ref_Stranded <int> 7
## $ Pred_Stranded_Ref_Stranded <int> 22
## $ Accuracy <dbl> 0.7828571
## $ Kappa <dbl> 0.4102519
## $ AccuracyLower <dbl> 0.7143576
## $ AccuracyUpper <dbl> 0.8415195
## $ AccuracyNull <dbl> 0.8342857
## $ AccuracyPValue <dbl> 0.9698561
## $ McnemarPValue <dbl> 0.0001906511
## $ Sensitivity <dbl> 0.7876712
## $ Specificity <dbl> 0.7586207
## $ Pos.Pred.Value <dbl> 0.942623
## $ Neg.Pred.Value <dbl> 0.4150943
## $ Precision <dbl> 0.942623
## $ Recall <dbl> 0.7876712
## $ F1 <dbl> 0.858209
## $ Prevalence <dbl> 0.8342857
## $ Detection.Rate <dbl> 0.6571429
## $ Detection.Prevalence <dbl> 0.6971429
## $ Balanced.Accuracy <dbl> 0.773146
## $ cm_ts <dttm> 2021-12-01 17:47:30
The next markdown document will look at how to improve your models with model selection, K-fold cross validation and hyperparameter tuning. I was thinking of doing an ensembling course off the back of this, so please contact me if that would be interesting to you.
I will now save the R image data into file, as we will pick this up in the next markdown document.
save.image(file="Data/stranded_data.rdata")

The first markdown document showed you how to build your first TidyModels model on a healthcare dataset. This could be an ML model you simply tweak for your own uses. I will now load the data back in and resume where we left off:

load(file="Data/stranded_data.rdata")

The first step will involve something called cross validation (see the supporting workshop slides). The essence of cross validation is that you take sub-samples of the training dataset. This is done to emulate how well the model will perform on unseen data samples when out in the wild (production).
As the image shows, each fold takes a sample of the training set and each randomly selected fold acts in turn as the test sample. We then use a final hold-out validation set to test the model. This is shown in the following section.
set.seed(123)
# Set a random seed for replication of results
ten_fold <- vfold_cv(train_data, v = 10)

We will now use the previously created logistic regression workflow with these resamples, to get a more robust estimate of model performance:
set.seed(123)
lr_fit_rs <-
  strand_wf %>%
  fit_resamples(ten_fold)

We will now collect the metrics using the tune package and the collect_metrics() function:
# To collect the resamples you need to call collect_metrics() to average the accuracy across the folds
collected_mets <- tune::collect_metrics(lr_fit_rs)

print(collected_mets)
## # A tibble: 2 × 6
## .metric .estimator mean n std_err .config
## <chr> <chr> <dbl> <int> <dbl> <chr>
## 1 accuracy binary 0.763 10 0.0208 Preprocessor1_Model1
## 2 roc_auc binary 0.711 10 0.0280 Preprocessor1_Model1
# Now I can compare the resampled accuracy with the accuracy from the hold-out test set
# that I had already generated a confusion matrix for
accuracy_resamples <- collected_mets$mean[1] * 100
accuracy_validation_set <- as.numeric(cm$overall[1] * 100)

cat(paste0("The true accuracy of the model lies somewhere between the resample estimate: ",
           round(accuracy_resamples, 2), "\nand the validation sample estimate: ",
           round(accuracy_validation_set, 2), "."))
## The true accuracy of the model lies somewhere between the resample estimate: 76.32
## and the validation sample estimate: 78.29
This shows that the true accuracy value is somewhere between the reported results from the resampling method and those in our validation sample.
The following example will move on from the logistic regression and aim to build a random forest, and later a decision tree. Other options in parsnip would be to use a gradient boosted tree to amp up the results further. In addition, I aim to teach a follow-up webinar on ensembling, specifically model stacking (stacks package) and bagging (baguette package).
The first step, as with the logistic regression example, is to define and instantiate the model:
rf_mod <-
  rand_forest(trees = 500) %>%
  set_engine("ranger") %>%
  set_mode("classification")

print(rf_mod)
## Random Forest Model Specification (classification)
##
## Main Arguments:
## trees = 500
##
## Computational engine: ranger
Then we are going to fit the model to the previous training data:
rf_fit <-
  rf_mod %>%
  fit(stranded_class ~ ., data = train_data)

print(rf_fit)
## parsnip model object
##
## Fit time: 102ms
## Ranger result
##
## Call:
## ranger::ranger(x = maybe_data_frame(x), y = y, num.trees = ~500, num.threads = 1, verbose = FALSE, seed = sample.int(10^5, 1), probability = TRUE)
##
## Type: Probability estimation
## Number of trees: 500
## Sample size: 524
## Number of independent variables: 8
## Mtry: 2
## Target node size: 10
## Variable importance mode: none
## Splitrule: gini
## OOB prediction error (Brier s.): 0.1751094
We will now get a more robust estimate of this model's performance by fitting it across the resamples object we created with rsample:
# Create workflow step
rf_wf <-
  workflow() %>%
  add_model(rf_mod) %>%
  add_formula(stranded_class ~ .) # The outcome and predictors are supplied via add_formula()

set.seed(123)
rf_fit_rs <-
  rf_wf %>%
  fit_resamples(ten_fold)

print(rf_fit_rs)
## # Resampling results
## # 10-fold cross-validation
## # A tibble: 10 × 4
## splits id .metrics .notes
## <list> <chr> <list> <list>
## 1 <split [471/53]> Fold01 <tibble [2 × 4]> <tibble [0 × 1]>
## 2 <split [471/53]> Fold02 <tibble [2 × 4]> <tibble [0 × 1]>
## 3 <split [471/53]> Fold03 <tibble [2 × 4]> <tibble [0 × 1]>
## 4 <split [471/53]> Fold04 <tibble [2 × 4]> <tibble [0 × 1]>
## 5 <split [472/52]> Fold05 <tibble [2 × 4]> <tibble [0 × 1]>
## 6 <split [472/52]> Fold06 <tibble [2 × 4]> <tibble [0 × 1]>
## 7 <split [472/52]> Fold07 <tibble [2 × 4]> <tibble [0 × 1]>
## 8 <split [472/52]> Fold08 <tibble [2 × 4]> <tibble [0 × 1]>
## 9 <split [472/52]> Fold09 <tibble [2 × 4]> <tibble [0 × 1]>
## 10 <split [472/52]> Fold10 <tibble [2 × 4]> <tibble [0 × 1]>
The next step is to collect the resample metrics:
# Collect the metrics using another model with resampling
rf_resample_mean_preds <- tune::collect_metrics(rf_fit_rs)

print(rf_resample_mean_preds)
## # A tibble: 2 × 6
## .metric .estimator mean n std_err .config
## <chr> <chr> <dbl> <int> <dbl> <chr>
## 1 accuracy binary 0.775 10 0.0211 Preprocessor1_Model1
## 2 roc_auc binary 0.719 10 0.0254 Preprocessor1_Model1
The model predictive power is maxing out at about 78%. I know this is due to the fact that the data is dummy data and most of the features that are contained in the model have a weak association to the outcome variable.
What you would need to do after this is look for more representative features of what causes a patient to stay a long time in hospital. This is where the clinical context comes into play.
We are going to now create a decision tree and we are going to tune the hyperparameters using the dials package. The dials package contains a list of hyperparameter tuning methods and is useful for creating quick hyperparameter grids and aiming to optimise them.
Like all the other steps, the first thing to do is to build the decision tree specification. Note: the reason for set_mode("classification") is that the thing we are predicting is a factor. If it were a continuous variable, you would switch this to regression (see the sketch after the printed specification below); the model development process for regression is otherwise identical to classification.
tune_tree <-
  decision_tree(
    cost_complexity = tune(), # tune() is a placeholder for a value the tuning grid will supply
    tree_depth = tune()       # we will fill these in the next section
  ) %>%
  set_engine("rpart") %>%
  set_mode("classification")

print(tune_tree)
## Decision Tree Model Specification (classification)
##
## Main Arguments:
## cost_complexity = tune()
## tree_depth = tune()
##
## Computational engine: rpart
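As noted above, switching to a regression problem would only require changing the mode. A minimal sketch, purely illustrative and not fitted anywhere in this tutorial:
# Hypothetical regression version of the same specification
tune_tree_reg <-
  decision_tree(
    cost_complexity = tune(),
    tree_depth = tune()
  ) %>%
  set_engine("rpart") %>%
  set_mode("regression") # the only change needed for a continuous outcome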
The next step is to fill in these blank values for cost complexity and tree depth - see the parsnip documentation for what these mean, but briefly: the cost complexity value penalises additional splits (pruning back the tree) and the tree depth is how far down the tree is allowed to grow.
We will now create the tuning grid:
grid_tree_tune <- grid_regular(dials::cost_complexity(),
                               dials::tree_depth(),
                               levels = 10)

print(head(grid_tree_tune, 20))
## # A tibble: 20 × 2
## cost_complexity tree_depth
## <dbl> <int>
## 1 0.0000000001 1
## 2 0.000000001 1
## 3 0.00000001 1
## 4 0.0000001 1
## 5 0.000001 1
## 6 0.00001 1
## 7 0.0001 1
## 8 0.001 1
## 9 0.01 1
## 10 0.1 1
## 11 0.0000000001 2
## 12 0.000000001 2
## 13 0.00000001 2
## 14 0.0000001 2
## 15 0.000001 2
## 16 0.00001 2
## 17 0.0001 2
## 18 0.001 2
## 19 0.01 2
## 20 0.1 2
The tuning process, and the modelling process in general, normally requires the ML engineer to use the full potential of their machine. The next steps show how to register the cores on your machine and use them for training the model and grid searching:
library(doParallel) # Provides makePSOCKcluster() and registerDoParallel()

all_cores <- parallel::detectCores(logical = FALSE) - 1
# Detects the physical cores and subtracts one, so you still have some capacity left to work with
print(all_cores)
## [1] 3

cl <- makePSOCKcluster(all_cores)
# Makes an in-memory socket cluster to utilise your cores
print(cl)
## socket cluster with 3 nodes on host 'localhost'

registerDoParallel(cl)
# Registers that we want to do parallel processing

Next, I will create the model workflow, as we have done a few times before:
set.seed(123)
tree_wf <- workflow() %>%
  add_model(tune_tree) %>%
  add_formula(stranded_class ~ .)
# Make the decision tree workflow - always postfix with wf for convention
# Add the registered model
# Add the formula of the outcome class you are predicting against all IVs

tree_pred_tuned <-
  tree_wf %>%
  tune::tune_grid(
    resamples = ten_fold,  # This is the 10 fold cross validation object we created earlier
    grid = grid_tree_tune  # This is the tuning grid
  )

This ggplot helps to visualise how the tuning has gone, and shows where the best accuracy and ROC AUC occur across the tree depth and cost complexity values:
tune_plot <- tree_pred_tuned %>%
  collect_metrics() %>% # Collect metrics from tuning
  mutate(tree_depth = factor(tree_depth)) %>%
  ggplot(aes(cost_complexity, mean, color = tree_depth)) +
  geom_line(size = 1, alpha = 0.7) +
  geom_point(size = 1.5) +
  facet_wrap(~ .metric, scales = "free", nrow = 2) +
  scale_x_log10(labels = scales::label_number()) +
  scale_color_viridis_d(option = "plasma", begin = .9, end = 0) +
  theme_minimal()

print(tune_plot)
ggsave(filename = "Figures/hyperparameter_tree.png", tune_plot)
## Saving 7 x 5 in image
This shows that you only need a depth of 4 to get the optimal accuracy. However, the tune package helps us out with this as well.
The tune package allows us to select the best candidate model, with the most optimal set of hyperparameters:
# To get the best ROC - area under the curve - values we will use the following:
tree_pred_tuned %>%
  tune::show_best("roc_auc")
## # A tibble: 5 × 8
## cost_complexity tree_depth .metric .estimator mean n std_err .config
## <dbl> <int> <chr> <chr> <dbl> <int> <dbl> <chr>
## 1 0.0000000001 10 roc_auc binary 0.739 10 0.0236 Preprocesso…
## 2 0.000000001 10 roc_auc binary 0.739 10 0.0236 Preprocesso…
## 3 0.00000001 10 roc_auc binary 0.739 10 0.0236 Preprocesso…
## 4 0.0000001 10 roc_auc binary 0.739 10 0.0236 Preprocesso…
## 5 0.000001 10 roc_auc binary 0.739 10 0.0236 Preprocesso…
# Select the best tree
best_tree <- tree_pred_tuned %>%
  tune::select_best("roc_auc")

print(best_tree)
## # A tibble: 1 × 3
## cost_complexity tree_depth .config
## <dbl> <int> <chr>
## 1 0.0000000001 10 Preprocessor1_Model061
The next step is to use the best tree to finalise the workflow and make our predictions.

final_wf <-
  tree_wf %>%
  finalize_workflow(best_tree) # finalize_workflow() passes in our best tree's hyperparameters

print(final_wf)
## ══ Workflow ════════════════════════════════════════════════════════════════════
## Preprocessor: Formula
## Model: decision_tree()
##
## ── Preprocessor ────────────────────────────────────────────────────────────────
## stranded_class ~ .
##
## ── Model ───────────────────────────────────────────────────────────────────────
## Decision Tree Model Specification (classification)
##
## Main Arguments:
## cost_complexity = 1e-10
## tree_depth = 10
##
## Computational engine: rpart
Next, fit this finalised workflow to the training data:

final_tree_pred <-
  final_wf %>%
  fit(data = train_data)

print(final_tree_pred)
## ══ Workflow [trained] ══════════════════════════════════════════════════════════
## Preprocessor: Formula
## Model: decision_tree()
##
## ── Preprocessor ────────────────────────────────────────────────────────────────
## stranded_class ~ .
##
## ── Model ───────────────────────────────────────────────────────────────────────
## n= 524
##
## node), split, n, loss, yval, (yprob)
## * denotes terminal node
##
## 1) root 524 188 Not Stranded (0.64122137 0.35877863)
## 2) previous_care_in_last_12_month< 1.5 432 105 Not Stranded (0.75694444 0.24305556)
## 4) admit_date< 18667.5 424 100 Not Stranded (0.76415094 0.23584906)
## 8) age< 34 120 22 Not Stranded (0.81666667 0.18333333)
## 16) admit_date< 18658.5 110 18 Not Stranded (0.83636364 0.16363636)
## 32) admit_date>=18623.5 62 6 Not Stranded (0.90322581 0.09677419)
## 64) admit_date< 18638 25 0 Not Stranded (1.00000000 0.00000000) *
## 65) admit_date>=18638 37 6 Not Stranded (0.83783784 0.16216216)
## 130) care_home_ref_flag>=0.5 16 1 Not Stranded (0.93750000 0.06250000) *
## 131) care_home_ref_flag< 0.5 21 5 Not Stranded (0.76190476 0.23809524)
## 262) age< 25.5 14 1 Not Stranded (0.92857143 0.07142857) *
## 263) age>=25.5 7 3 Stranded (0.42857143 0.57142857) *
## 33) admit_date< 18623.5 48 12 Not Stranded (0.75000000 0.25000000)
## 66) age>=24.5 28 4 Not Stranded (0.85714286 0.14285714) *
## 67) age< 24.5 20 8 Not Stranded (0.60000000 0.40000000)
## 134) admit_date< 18607 9 1 Not Stranded (0.88888889 0.11111111) *
## 135) admit_date>=18607 11 4 Stranded (0.36363636 0.63636364) *
## 17) admit_date>=18658.5 10 4 Not Stranded (0.60000000 0.40000000) *
## 9) age>=34 304 78 Not Stranded (0.74342105 0.25657895)
## 18) age>=60.5 229 48 Not Stranded (0.79039301 0.20960699)
## 36) admit_date>=18636.5 94 15 Not Stranded (0.84042553 0.15957447) *
## 37) admit_date< 18636.5 135 33 Not Stranded (0.75555556 0.24444444)
## 74) admit_date< 18627.5 100 18 Not Stranded (0.82000000 0.18000000)
## 148) age< 68.5 43 3 Not Stranded (0.93023256 0.06976744) *
## 149) age>=68.5 57 15 Not Stranded (0.73684211 0.26315789)
## 298) age>=74.5 32 6 Not Stranded (0.81250000 0.18750000) *
## 299) age< 74.5 25 9 Not Stranded (0.64000000 0.36000000)
## 598) hcop_flag>=0.5 12 2 Not Stranded (0.83333333 0.16666667) *
## 599) hcop_flag< 0.5 13 6 Stranded (0.46153846 0.53846154) *
## 75) admit_date>=18627.5 35 15 Not Stranded (0.57142857 0.42857143)
## 150) age< 74.5 24 8 Not Stranded (0.66666667 0.33333333) *
## 151) age>=74.5 11 4 Stranded (0.36363636 0.63636364) *
## 19) age< 60.5 75 30 Not Stranded (0.60000000 0.40000000)
## 38) admit_date>=18621.5 50 16 Not Stranded (0.68000000 0.32000000)
## 76) age< 43.5 25 5 Not Stranded (0.80000000 0.20000000) *
## 77) age>=43.5 25 11 Not Stranded (0.56000000 0.44000000)
## 154) care_home_ref_flag< 0.5 7 1 Not Stranded (0.85714286 0.14285714) *
## 155) care_home_ref_flag>=0.5 18 8 Stranded (0.44444444 0.55555556) *
## 39) admit_date< 18621.5 25 11 Stranded (0.44000000 0.56000000)
## 78) hcop_flag>=0.5 11 3 Not Stranded (0.72727273 0.27272727) *
## 79) hcop_flag< 0.5 14 3 Stranded (0.21428571 0.78571429) *
## 5) admit_date>=18667.5 8 3 Stranded (0.37500000 0.62500000) *
## 3) previous_care_in_last_12_month>=1.5 92 9 Stranded (0.09782609 0.90217391) *
We will look at global variable importance. As mentioned prior, to look at local patient level importance, use the LIME package.
library(vip) # For variable importance plots

plot <- final_tree_pred %>%
  extract_fit_parsnip() %>%
  vip(aesthetics = list(color = "black", fill = "#26ACB5")) +
  theme_minimal()

print(plot)
ggsave("Figures/VarImp.png", plot)
## Saving 7 x 5 in image

We already saw from the logistic regression significance plot that these were likely to be the important variables, given their linear significance.
The last step is to create the final predictions from the tuned decision tree:
# Fit the finalised workflow on the training data and evaluate on the test data in one step
final_fit <-
  final_wf %>%
  last_fit(split)

final_fit_fitted_metrics <- final_fit %>%
  collect_metrics()

print(final_fit_fitted_metrics)
## # A tibble: 2 × 4
## .metric .estimator .estimate .config
## <chr> <chr> <dbl> <chr>
## 1 accuracy binary 0.697 Preprocessor1_Model1
## 2 roc_auc binary 0.708 Preprocessor1_Model1
# Create the final predictions
final_fit_predictions <- final_fit %>%
  collect_predictions()

print(final_fit_predictions)
## # A tibble: 175 × 7
## id `.pred_Not Strand… .pred_Stranded .row .pred_class stranded_class
## <chr> <dbl> <dbl> <int> <fct> <fct>
## 1 train/tes… 0.857 0.143 1 Not Strand… Not Stranded
## 2 train/tes… 0.429 0.571 3 Stranded Not Stranded
## 3 train/tes… 0.667 0.333 4 Not Strand… Not Stranded
## 4 train/tes… 1 0 7 Not Strand… Not Stranded
## 5 train/tes… 0.0978 0.902 9 Stranded Not Stranded
## 6 train/tes… 0.0978 0.902 15 Stranded Stranded
## 7 train/tes… 0.857 0.143 18 Not Strand… Not Stranded
## 8 train/tes… 0.930 0.0698 22 Not Strand… Stranded
## 9 train/tes… 0.214 0.786 27 Stranded Not Stranded
## 10 train/tes… 0.840 0.160 32 Not Strand… Not Stranded
## # … with 165 more rows, and 1 more variable: .config <chr>
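As an optional extra that is not part of the original walkthrough, the fitted workflow itself can be pulled out of the last_fit() object with recent versions of tune, for example to save it to disk and score new data later (the file path below is illustrative):
# Extract the trained workflow from the last_fit() result
final_fitted_wf <- tune::extract_workflow(final_fit)

# Save it for reuse later (illustrative path)
# saveRDS(final_fitted_wf, "Data/final_tree_workflow.rds")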
You could do something similar by viewing this object with the confusion matrix functions shown earlier, but here I will view it as an ROC plot:

roc_plot <- final_fit_predictions %>%
  roc_curve(stranded_class, `.pred_Not Stranded`) %>%
  autoplot()

print(roc_plot)
ggsave(filename = "Figures/tuned_tree.png", plot = roc_plot)
## Saving 7 x 5 in image
One last point to note - to inspect any of the tuning parameters and hyperparameters for the models you can use the args function to return these - examples below:
args(decision_tree)
## function (mode = "unknown", engine = "rpart", cost_complexity = NULL,
##     tree_depth = NULL, min_n = NULL)
## NULL
args(logistic_reg)
## function (mode = "classification", engine = "glm", penalty = NULL,
##     mixture = NULL)
## NULL
args(rand_forest)
## function (mode = "unknown", engine = "ranger", mtry = NULL, trees = NULL,
##     min_n = NULL)
## NULL
Finally, we will look at implementing a really powerful model that I use a lot in Kaggle competitions. Last month I finished in the top 6% of Kaggle entrants just using this model and some preprocessing.
The logic of XGBoost is awesome. Essentially it performs multiple training iterations, and at each iteration the model learns from the errors of the previous round. To visualise this diagrammatically:
This is how the model learns, but it can easily overfit, which is why hyperparameter tuning is needed.
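To make that residual-learning idea concrete, here is a minimal conceptual sketch of boosting on toy data, using plain rpart stumps; this is only an illustration of the principle and not the actual xgboost algorithm, which adds gradient-based optimisation, regularisation and much more:
library(rpart)

# Toy data: a single noisy predictor-outcome relationship
set.seed(123)
toy <- data.frame(x = runif(200, 0, 10))
toy$y <- sin(toy$x) + rnorm(200, sd = 0.2)

learn_rate <- 0.1
preds <- rep(mean(toy$y), nrow(toy)) # start from a constant prediction

for (m in 1:100) {
  toy$resid <- toy$y - preds                                    # current errors of the ensemble
  stump <- rpart(resid ~ x, data = toy,
                 control = rpart.control(maxdepth = 2, cp = 0)) # small tree fit to the errors
  preds <- preds + learn_rate * predict(stump, toy)             # add a scaled correction
}

# The ensemble's predictions should now track the underlying signal closely
cor(preds, toy$y)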
Let’s learn how to train one of these Kaggle-beaters!
Using the recipe we created earlier to apply the preprocessing steps to the stranded data, we are going to try to improve our model using tuning, resampling and a more powerful model training technique: gradient boosted trees.
# Use stranded recipe
# Prepare the recipe
strand_rec_preped <- stranded_rec %>%
  prep()

# Bake the recipe and create cross-validation folds from the processed training data
strand_folds <-
  recipes::bake(
    strand_rec_preped,
    new_data = training(split)
  ) %>%
  rsample::vfold_cv(v = 10)

xgboost_model <-
  parsnip::boost_tree(
    mode = "classification",
    trees = 1000,
    min_n = tune(),
    tree_depth = tune(),
    learn_rate = tune(),
    loss_reduction = tune()
  ) %>%
  set_engine("xgboost")

The next step, as we covered earlier, is to create a grid search in dials to go over each one of these hyperparameters to tune the model.
xgboost_params <-
  dials::parameters(
    min_n(),
    tree_depth(),
    learn_rate(),
    loss_reduction()
  )

xgboost_grid <-
  dials::grid_max_entropy(
    xgboost_params, size = 100 # Indicates the size of the search space
  )

xgboost_grid
## # A tibble: 100 × 4
## min_n tree_depth learn_rate loss_reduction
## <int> <int> <dbl> <dbl>
## 1 37 10 2.90e- 3 5.07e-10
## 2 38 3 1.14e- 2 2.07e- 2
## 3 10 4 1.33e- 5 1.20e- 5
## 4 2 2 1.24e- 9 1.78e- 4
## 5 25 1 4.23e- 7 1.20e- 5
## 6 32 8 2.37e- 6 5.56e-10
## 7 5 8 4.23e- 5 2.28e- 8
## 8 40 2 4.09e-10 1.39e-10
## 9 14 9 3.94e- 2 1.51e- 6
## 10 8 11 3.44e- 5 9.46e- 9
## # … with 90 more rows
The next step is to set up the workflow for the grid.
xgboost_wf <-
  workflows::workflow() %>%
  add_model(xgboost_model) %>%
  add_formula(stranded_class ~ .)

The next step is to tune the model using the tuning grid:
xgboost_tuned <- tune::tune_grid(
  object = xgboost_wf,
  resamples = strand_folds,
  grid = xgboost_grid,
  metrics = yardstick::metric_set(accuracy, roc_auc),
  control = tune::control_grid(verbose = TRUE)
)

Let's now get the best hyperparameters for the model:
xgboost_tuned %>%
  tune::show_best(metric = "roc_auc")
## # A tibble: 5 × 10
## min_n tree_depth learn_rate loss_reduction .metric .estimator mean n
## <int> <int> <dbl> <dbl> <chr> <chr> <dbl> <int>
## 1 9 7 4.58e- 2 0.327 roc_auc binary 0.743 10
## 2 16 8 1.42e-10 0.408 roc_auc binary 0.743 10
## 3 16 13 2.21e-10 0.211 roc_auc binary 0.735 10
## 4 12 11 2.08e-10 0.0000490 roc_auc binary 0.734 10
## 5 14 9 3.94e- 2 0.00000151 roc_auc binary 0.733 10
## # … with 2 more variables: std_err <dbl>, .config <chr>
Now let's select the most performant set of hyperparameters for the model:

xgboost_best_params <- xgboost_tuned %>%
  tune::select_best(metric = "roc_auc")

The final stage is finalising the model to use the best parameters:

xgboost_model_final <- xgboost_model %>%
  tune::finalize_model(xgboost_best_params)

# Create training set
train_proc <- bake(strand_rec_preped,
                   new_data = training(split))

train_prediction <- xgboost_model_final %>%
  fit(
    formula = stranded_class ~ .,
    data = train_proc
  ) %>%
  predict(new_data = train_proc) %>%
  bind_cols(training(split))
## [17:54:04] WARNING: amalgamation/../src/learner.cc:1115: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.

xgboost_score_train <-
  train_prediction %>%
  yardstick::metrics(stranded_class, .pred_class)
# Create testing set
test_proc <- bake(strand_rec_preped,
                  new_data = testing(split))

test_prediction <- xgboost_model_final %>%
  fit(
    formula = stranded_class ~ .,
    data = train_proc
  ) %>%
  predict(new_data = test_proc)
## [17:54:05] WARNING: amalgamation/../src/learner.cc:1115: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.

# Bind test predictions to labels
test_prediction <- cbind(test_prediction, testing(split))

xgboost_score <-
  test_prediction %>%
  yardstick::metrics(stranded_class, .pred_class)

print(xgboost_score)
## # A tibble: 2 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 accuracy binary 0.8
## 2 kap binary 0.473
As a final step we will use ConfusionTableR, together with data.table for fast file writing, to store the classification results in a csv file:

library(data.table)
##
## Attaching package: 'data.table'
## The following object is masked from 'package:rlang':
##
## :=
## The following object is masked from 'package:purrr':
##
## transpose
## The following objects are masked from 'package:dplyr':
##
## between, first, last
library(ConfusionTableR)
cm_outputs <- ConfusionTableR::binary_class_cm(train_labels = test_prediction$.pred_class,
                                               truth_labels = test_prediction$stranded_class)
## [INFO] Building a record level confusion matrix to store in dataset
## [INFO] Build finished and to expose record level cm use the record_level_cm list item

data.table::fwrite(cm_outputs$record_level_cm,
                   file = paste0("Data/xgboost_results_", as.character(Sys.time()),
                                 ".csv"))

That wraps up this part of the demo. The XGBoost model has improved the headline accuracy (to 80%), but it has done so in the wrong direction, as it is mainly picking up more true negative (not stranded) examples.
If you are interested in a further session on ensembling, then I would be happy to go over the stacks and baguette packages for model stacking and bagging. These are relatively new additions to TidyModels and are not yet as optimised as some of caret's equivalents, but I would be happy to show you how they are implemented.