In this episode of SLICED, contestants are challenged to use a variety of features to predict whether a batter’s hit results in a home run.

The evaluation metric is log loss.
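
Log loss heavily penalizes predictions that are confident and wrong, so well-calibrated probabilities matter more than hard class calls. As a quick, self-contained illustration (toy numbers, not competition data), yardstick's mn_log_loss() can be computed on a small tibble:

library(yardstick)

# Toy example: four batted balls with made-up home-run probabilities
toy <- tibble::tibble(
  truth    = factor(c("HR", "no", "no", "HR"), levels = c("HR", "no")),
  .pred_HR = c(0.90, 0.10, 0.40, 0.20)
)

# Mean log loss; the confident-but-wrong 0.20 for an actual HR contributes the most
mn_log_loss(toy, truth = truth, .pred_HR)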

I’ll be learning how to use racing methods by coding along with Julia Silge!

suppressWarnings(if(!require(pacman)) install.packages("pacman"))
## Loading required package: pacman
pacman::p_load("tidyverse", "tidymodels", "here", "scales", "glmnet", "stacks", "janitor", "finetune", "vip")

doParallel::registerDoParallel()

Against my better judgement, I’m moving straight to modelling!

Build an xgboost model

Data Budgeting

In this section, we allocate specific subsets of data for different tasks.

set.seed(2056)
# Load data
train_raw <- read_csv("train.csv", show_col_types = FALSE)
holdout <- read_csv("test.csv", show_col_types = FALSE)

# Convert 0s and 1s (outcome) into a factor and split data
bb_split <- train_raw %>% 
  mutate(is_home_run = if_else(as.logical(is_home_run), "HR", "no"),
         is_home_run = factor(is_home_run)) %>% 
  initial_split(strata = is_home_run)
bb_train <- training(bb_split)
bb_test <- testing(bb_split)

# Training folds
set.seed(2056)
bb_folds <- bb_train %>% 
  vfold_cv(v = 10, strata = is_home_run)

eval_metrics <- metric_set(mn_log_loss)
theme_set(theme_light())

Feature Engineering

Feature engineering encompasses activities that reformat predictor values to make them easier for a model to use effectively.

bb_rec <- recipe(is_home_run ~ launch_angle + launch_speed + plate_x + plate_z
       + inning + balls + 
         strikes + game_date +
         bb_type + bearing + 
         pitch_mph + is_batter_lefty + is_pitcher_lefty, data = bb_train) %>%
  # Extract week of year from date
  step_date(game_date, features = c("week"), keep_original_cols = FALSE) %>% 
  
  # Assign missing factor to "unknown"
  step_unknown(all_nominal_predictors()) %>% 
  # Convert nominal features to numeric
  step_dummy(all_nominal_predictors(), one_hot = TRUE) %>% 
  
  # Impute missing numeric values using the median
  step_impute_median(all_numeric_predictors(), - launch_angle, - launch_speed) %>% 
  step_impute_linear(launch_angle, launch_speed, impute_with = imp_vars(plate_x, plate_z, pitch_mph)) %>% 
  step_nzv(all_predictors())

prep(bb_rec)
## Recipe
## 
## Inputs:
## 
##       role #variables
##    outcome          1
##  predictor         13
## 
## Training data contained 34683 data points and 15303 incomplete rows. 
## 
## Operations:
## 
## Date features from game_date [trained]
## Unknown factor level assignment for bb_type, bearing [trained]
## Dummy variables from bb_type, bearing [trained]
## Median Imputation for plate_x, plate_z, inning, balls, strikes, pitch... [trained]
## Linear regression imputation for launch_angle, launch_speed [trained]
## Sparse, unbalanced variable filter removed bb_type_unknown, bearing_unknown [trained]
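
To sanity-check what the recipe produces (a quick peek that isn't part of the original walkthrough), the prepped recipe can be baked on the training set:

# Inspect the preprocessed training data: dummy columns, imputed values,
# and the extracted game_date_week feature
bb_rec %>% 
  prep() %>% 
  bake(new_data = NULL) %>% 
  glimpse()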

Model specification

# Model specification
xgb_spec <- 
  boost_tree(
    trees = tune(),
    min_n = tune(),
    mtry = tune(),
    learn_rate = 0.01
  ) %>% 
  set_engine("xgboost") %>% 
  set_mode("classification")

# Workflow
xgb_wf <- workflow() %>% 
  add_recipe(bb_rec) %>% 
  add_model(xgb_spec)
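
Before tuning, it can help to confirm which parameters are actually marked for tuning. In recent tune/hardhat versions this looks roughly like the sketch below; note that mtry has an unknown upper bound until the preprocessed predictors are seen, which is why the racing output later reports finalizing it.

# Optional check: the three tunable parameters, with mtry's upper range still
# unknown until it is finalized against the preprocessed data
extract_parameter_set_dials(xgb_wf)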

Using racing methods to tune our model

In racing methods, the tuning process evaluates all candidate models on an initial subset of resamples. Based on their performance so far, it then eliminates tuning parameter combinations that are unlikely to be the best, using a repeated measures ANOVA model. This is unlike regular grid search, where every model must be fit on every resample before any tuning parameters can be compared.

doParallel::registerDoParallel()
library(finetune)

set.seed(2056)

# Grid search via racing 
xgb_rs <- tune_race_anova(
  object = xgb_wf,
  resamples = bb_folds,
  # Try out 15 different combinations of parameters
  # i.e. 15 different models
  grid = 15,
  metrics = metric_set(mn_log_loss),
  control = control_race(verbose_elim = TRUE)
)
## i Creating pre-processing data to finalize unknown parameter: mtry
## i Racing will minimize the mn_log_loss metric.
## i Resamples are analyzed in a random order.
## i Fold10: 12 eliminated;  3 candidates remain.
## i Fold05: All but one parameter combination were eliminated.
xgb_rs$.metrics
## [[1]]
## # A tibble: 15 x 7
##     mtry trees min_n .metric     .estimator .estimate .config              
##    <int> <int> <int> <chr>       <chr>          <dbl> <chr>                
##  1     1  1965    13 mn_log_loss binary         0.113 Preprocessor1_Model01
##  2     3  1793    37 mn_log_loss binary         0.109 Preprocessor1_Model02
##  3     4  1527    34 mn_log_loss binary         0.108 Preprocessor1_Model03
##  4     5   165    32 mn_log_loss binary         0.195 Preprocessor1_Model04
##  5     6  1687    26 mn_log_loss binary         0.107 Preprocessor1_Model05
##  6     7   407    16 mn_log_loss binary         0.118 Preprocessor1_Model06
##  7     8   589    12 mn_log_loss binary         0.111 Preprocessor1_Model07
##  8    10   836     3 mn_log_loss binary         0.109 Preprocessor1_Model08
##  9    10  1322    20 mn_log_loss binary         0.106 Preprocessor1_Model09
## 10    12  1456     9 mn_log_loss binary         0.105 Preprocessor1_Model10
## 11    12   750    19 mn_log_loss binary         0.109 Preprocessor1_Model11
## 12    14  1080    24 mn_log_loss binary         0.107 Preprocessor1_Model12
## 13    16    53     7 mn_log_loss binary         0.395 Preprocessor1_Model13
## 14    16   357    29 mn_log_loss binary         0.122 Preprocessor1_Model14
## 15    17   967    37 mn_log_loss binary         0.109 Preprocessor1_Model15
## 
## [[2]]
## # A tibble: 15 x 7
##     mtry trees min_n .metric     .estimator .estimate .config              
##    <int> <int> <int> <chr>       <chr>          <dbl> <chr>                
##  1     1  1965    13 mn_log_loss binary         0.111 Preprocessor1_Model01
##  2     3  1793    37 mn_log_loss binary         0.104 Preprocessor1_Model02
##  3     4  1527    34 mn_log_loss binary         0.104 Preprocessor1_Model03
##  4     5   165    32 mn_log_loss binary         0.194 Preprocessor1_Model04
##  5     6  1687    26 mn_log_loss binary         0.102 Preprocessor1_Model05
##  6     7   407    16 mn_log_loss binary         0.114 Preprocessor1_Model06
##  7     8   589    12 mn_log_loss binary         0.106 Preprocessor1_Model07
##  8    10   836     3 mn_log_loss binary         0.104 Preprocessor1_Model08
##  9    10  1322    20 mn_log_loss binary         0.102 Preprocessor1_Model09
## 10    12  1456     9 mn_log_loss binary         0.101 Preprocessor1_Model10
## 11    12   750    19 mn_log_loss binary         0.105 Preprocessor1_Model11
## 12    14  1080    24 mn_log_loss binary         0.103 Preprocessor1_Model12
## 13    16    53     7 mn_log_loss binary         0.394 Preprocessor1_Model13
## 14    16   357    29 mn_log_loss binary         0.117 Preprocessor1_Model14
## 15    17   967    37 mn_log_loss binary         0.104 Preprocessor1_Model15
## 
## [[3]]
## # A tibble: 15 x 7
##     mtry trees min_n .metric     .estimator .estimate .config              
##    <int> <int> <int> <chr>       <chr>          <dbl> <chr>                
##  1     1  1965    13 mn_log_loss binary         0.113 Preprocessor1_Model01
##  2     3  1793    37 mn_log_loss binary         0.109 Preprocessor1_Model02
##  3     4  1527    34 mn_log_loss binary         0.109 Preprocessor1_Model03
##  4     5   165    32 mn_log_loss binary         0.196 Preprocessor1_Model04
##  5     6  1687    26 mn_log_loss binary         0.107 Preprocessor1_Model05
##  6     7   407    16 mn_log_loss binary         0.119 Preprocessor1_Model06
##  7     8   589    12 mn_log_loss binary         0.113 Preprocessor1_Model07
##  8    10   836     3 mn_log_loss binary         0.110 Preprocessor1_Model08
##  9    10  1322    20 mn_log_loss binary         0.107 Preprocessor1_Model09
## 10    12  1456     9 mn_log_loss binary         0.106 Preprocessor1_Model10
## 11    12   750    19 mn_log_loss binary         0.111 Preprocessor1_Model11
## 12    14  1080    24 mn_log_loss binary         0.109 Preprocessor1_Model12
## 13    16    53     7 mn_log_loss binary         0.395 Preprocessor1_Model13
## 14    16   357    29 mn_log_loss binary         0.123 Preprocessor1_Model14
## 15    17   967    37 mn_log_loss binary         0.110 Preprocessor1_Model15
## 
## [[4]]
## # A tibble: 3 x 7
##    mtry trees min_n .metric     .estimator .estimate .config              
##   <int> <int> <int> <chr>       <chr>          <dbl> <chr>                
## 1     6  1687    26 mn_log_loss binary        0.0927 Preprocessor1_Model05
## 2    10  1322    20 mn_log_loss binary        0.0924 Preprocessor1_Model09
## 3    12  1456     9 mn_log_loss binary        0.0928 Preprocessor1_Model10
## 
## [[5]]
## # A tibble: 1 x 7
##    mtry trees min_n .metric     .estimator .estimate .config              
##   <int> <int> <int> <chr>       <chr>          <dbl> <chr>                
## 1    12  1456     9 mn_log_loss binary        0.0937 Preprocessor1_Model10
## 
## [[6]]
## # A tibble: 1 x 7
##    mtry trees min_n .metric     .estimator .estimate .config              
##   <int> <int> <int> <chr>       <chr>          <dbl> <chr>                
## 1    12  1456     9 mn_log_loss binary         0.105 Preprocessor1_Model10
## 
## [[7]]
## # A tibble: 1 x 7
##    mtry trees min_n .metric     .estimator .estimate .config              
##   <int> <int> <int> <chr>       <chr>          <dbl> <chr>                
## 1    12  1456     9 mn_log_loss binary        0.0955 Preprocessor1_Model10
## 
## [[8]]
## # A tibble: 1 x 7
##    mtry trees min_n .metric     .estimator .estimate .config              
##   <int> <int> <int> <chr>       <chr>          <dbl> <chr>                
## 1    12  1456     9 mn_log_loss binary        0.0884 Preprocessor1_Model10
## 
## [[9]]
## # A tibble: 1 x 7
##    mtry trees min_n .metric     .estimator .estimate .config              
##   <int> <int> <int> <chr>       <chr>          <dbl> <chr>                
## 1    12  1456     9 mn_log_loss binary        0.0983 Preprocessor1_Model10
## 
## [[10]]
## # A tibble: 1 x 7
##    mtry trees min_n .metric     .estimator .estimate .config              
##   <int> <int> <int> <chr>       <chr>          <dbl> <chr>                
## 1    12  1456     9 mn_log_loss binary        0.0977 Preprocessor1_Model10

The metrics show the incremental elimination of tuning parameter combinations that are unlikely to end up as the best: all 15 candidates are scored on the first few folds, after which the field is cut to 3 and finally to a single combination.
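
A tidier summary than the raw .metrics list comes from collect_metrics(); for racing results it returns, by default, only the configuration(s) that were resampled on every fold:

# Resampling summary for the candidate(s) that finished the race
collect_metrics(xgb_rs)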

grid: An integer or data frame. When an integer is used, the function creates a space-filling design with grid number of candidate parameter combinations. Space-filling designs generally find a configuration of points that cover the parameter space with the smallest chance of overlapping or redundant values.
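
For illustration, an explicit space-filling grid similar to what grid = 15 generates could be built with dials. This is only a sketch: mtry's range is finalized here against the raw predictors as a rough stand-in for the preprocessed data that tune_race_anova() actually uses.

# One kind of space-filling design: a Latin hypercube over the three parameters
set.seed(2056)
grid_latin_hypercube(
  trees(),
  min_n(),
  finalize(mtry(), bb_train %>% select(-is_home_run)),
  size = 15
)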

# See the race visually
plot_race(xgb_rs)

# Best model
show_best(xgb_rs)

Finalize workflow

Update the xgboost workflow with the best tuning parameters.

# Finalize workflow
xgb_last <- xgb_wf %>% 
  finalize_workflow(select_best(xgb_rs, metric = "mn_log_loss")) %>% 
  last_fit(bb_split)

# Collect predictions
xgb_last %>% 
  collect_predictions() %>% 
  mn_log_loss(is_home_run, .pred_HR)
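
For the competition itself, the unlabeled holdout set still needs predicted probabilities. A minimal sketch using the workflow fitted by last_fit(); the submission id column name below is a placeholder, not taken from the competition's data dictionary.

# Score the competition holdout with the fitted workflow
fitted_wf <- extract_workflow(xgb_last)

holdout_preds <- predict(fitted_wf, new_data = holdout, type = "prob")

# Assemble a submission-style tibble; "bip_id" is a hypothetical id column
submission <- holdout %>% 
  select(bip_id) %>% 
  mutate(is_home_run = holdout_preds$.pred_HR)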

Interpret the fitted model

library(vip)

# Extract the fitted xgboost workflow
extract_workflow(xgb_last) %>% 
  extract_fit_parsnip() %>% 
  vip(geom = "point", num_features = 15)
