Developing a model to predict permeability (see Sect. 1.4) could save significant resources for a pharmaceutical company, while at the same time more rapidly identifying molecules that have a sufficient permeability to become a drug:
Start R and use these commands to load the data:
library(AppliedPredictiveModeling)
library(caret)
data(permeability)
str(fingerprints)
## num [1:165, 1:1107] 0 0 0 0 0 0 0 0 0 0 ...
## - attr(*, "dimnames")=List of 2
## ..$ : chr [1:165] "1" "2" "3" "4" ...
## ..$ : chr [1:1107] "X1" "X2" "X3" "X4" ...
The matrix fingerprints contains the 1,107 binary molecular predictors for the 165 compounds, while permeability contains the permeability response.
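Before modeling, a quick look at the response distribution can be useful (an optional check not shown in the original output; permeability measurements are often right-skewed):
# Summarize and plot the distribution of the permeability response.
summary(permeability)
hist(permeability, main = "Permeability response", xlab = "Permeability")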
The fingerprint predictors indicate the presence or absence of substructures of a molecule and are often sparse, meaning that relatively few of the molecules contain each substructure. Filter out the predictors that have low frequencies using the nearZeroVar function from the caret package. How many predictors are left for modeling?
After filtering out the low-frequency predictors with the nearZeroVar function, 388 predictors remain for modeling.
filter <- fingerprints[, -nearZeroVar(fingerprints)]
str(filter)
## num [1:165, 1:388] 0 0 0 0 0 0 0 0 0 0 ...
## - attr(*, "dimnames")=List of 2
## ..$ : chr [1:165] "1" "2" "3" "4" ...
## ..$ : chr [1:388] "X1" "X2" "X3" "X4" ...
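As a sanity check, the counts of removed and retained predictors can be computed directly; they follow from the dimensions reported above (1,107 - 388 = 719 removed):
# Number of near-zero-variance predictors removed (719).
length(nearZeroVar(fingerprints))
# Number of predictors left for modeling (388).
ncol(fingerprints) - length(nearZeroVar(fingerprints))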
Split the data into a training and a test set, pre-process the data, and tune a PLS model. How many latent variables are optimal and what is the corresponding resampled estimate of R2?
set.seed(1)
trainingRows <- createDataPartition(permeability,
p = .80,
list = FALSE)
TrainX <- filter[trainingRows, ]
TrainY <- permeability[trainingRows]
TestX <- filter[-trainingRows, ]
TestY <- permeability[-trainingRows]
plsfit <- train(TrainX, TrainY,
method = "pls",
tuneLength = 20,
trControl = trainControl(method = "cv"),
preProc = c("center", "scale"))
plsfit
## Partial Least Squares
##
## 133 samples
## 388 predictors
##
## Pre-processing: centered (388), scaled (388)
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 120, 121, 119, 120, 120, 120, ...
## Resampling results across tuning parameters:
##
## ncomp RMSE Rsquared MAE
## 1 12.69853 0.3419188 9.510461
## 2 11.26461 0.4862419 7.961848
## 3 11.23192 0.5020667 8.443290
## 4 11.34704 0.5151263 8.525333
## 5 11.36740 0.4843217 8.217022
## 6 11.17569 0.5067142 8.111659
## 7 10.70101 0.5296468 7.822282
## 8 10.54862 0.5419219 7.788303
## 9 10.68495 0.5356735 7.877981
## 10 10.45770 0.5534189 7.634547
## 11 10.61471 0.5495913 7.802781
## 12 10.60305 0.5554128 7.829622
## 13 10.53495 0.5554981 7.761453
## 14 10.57287 0.5551807 7.813457
## 15 10.82956 0.5352628 7.981578
## 16 10.99621 0.5257060 8.088185
## 17 11.44273 0.5069297 8.222767
## 18 11.48512 0.5058010 8.218605
## 19 11.62309 0.4996874 8.252432
## 20 11.79233 0.4908691 8.322718
##
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was ncomp = 10.
The optimal number of components is 10, with a cross-validated RMSE of 10.45770 and R^2 of 0.5534189.
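The selected tuning value and its resampled metrics can also be pulled out programmatically rather than read off the table:
# Tuning parameter chosen by the smallest RMSE, and its CV metrics.
plsfit$bestTune
subset(plsfit$results, ncomp == plsfit$bestTune$ncomp)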
plot(plsfit)
Predict the response for the test set. What is the test set estimate of R2?
plsPred <- predict(plsfit, TestX)
plsValues1 <- data.frame(obs = TestY, pred = plsPred)
defaultSummary(plsValues1)
## RMSE Rsquared MAE
## 14.6758044 0.3274115 10.9299035
The test set RMSE is 14.6758044 and the test set estimate of R^2 is 0.3274115.
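An observed-versus-predicted plot makes the test-set fit easier to judge (a quick base-graphics sketch):
# Points close to the 45-degree line indicate accurate predictions.
plot(plsPred, TestY,
     xlab = "Predicted permeability",
     ylab = "Observed permeability")
abline(0, 1, col = "red")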
Try building other models discussed in this chapter. Do any have better predictive performance?
set.seed(2)
ridgeRegFit <- train(TrainX, TrainY,
method = "ridge",
tuneGrid = data.frame(.lambda = seq(0.00001, .1, length = 10)),
trControl = trainControl(method = "cv"),
preProc = c("center", "scale"))
ridgeRegFit
## Ridge Regression
##
## 133 samples
## 388 predictors
##
## Pre-processing: centered (388), scaled (388)
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 121, 121, 119, 119, 119, 120, ...
## Resampling results across tuning parameters:
##
## lambda RMSE Rsquared MAE
## 0.00001 2029.28022 0.2261796 964.525137
## 0.01112 11.70853 0.5391032 8.737736
## 0.02223 11.31131 0.5605033 8.284931
## 0.03334 10.94164 0.5735674 8.010468
## 0.04445 10.73920 0.5827179 7.844390
## 0.05556 10.66706 0.5859917 7.789898
## 0.06667 10.55978 0.5910037 7.733575
## 0.07778 10.50718 0.5937604 7.710072
## 0.08889 10.42716 0.5979405 7.674604
## 0.10000 10.37857 0.6005431 7.674802
##
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was lambda = 0.1.
The optimal lambda is 0.1, with a resampled RMSE of 10.37857 and R^2 of 0.6005431. Note that the chosen lambda sits at the upper edge of the tuning grid, so a wider grid might find a better value. To compare with the previous model, we should use the test set RMSE and R^2.
plot(ridgeRegFit)
ridgePred <- predict(ridgeRegFit, TestX)
ridgeValues1 <- data.frame(obs = TestY, pred = ridgePred)
defaultSummary(ridgeValues1)
## RMSE Rsquared MAE
## 15.0822363 0.3047575 11.4460367
The test set RMSE is 15.0822363 and the test set estimate of R^2 is 0.3047575. Compared with the PLS model, the RMSE is higher and the R^2 is lower, so the PLS model predicts the test set more accurately.
set.seed(3)
lassoFit <- train(TrainX, TrainY,
method = "lasso",
tuneLength = 20,
trControl = trainControl(method = "cv"),
preProc = c("center", "scale"))
## Warning: model fit failed for Fold03: fraction=0.9 Error in if (zmin < gamhat) { : missing value where TRUE/FALSE needed
## Warning: model fit failed for Fold10: fraction=0.9 Error in if (zmin < gamhat) { : missing value where TRUE/FALSE needed
## Warning in nominalTrainWorkflow(x = x, y = y, wts = weights, info = trainInfo,
## : There were missing values in resampled performance measures.
lassoFit
## The lasso
##
## 133 samples
## 388 predictors
##
## Pre-processing: centered (388), scaled (388)
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 120, 119, 121, 119, 120, 118, ...
## Resampling results across tuning parameters:
##
## fraction RMSE Rsquared MAE
## 0.1000000 11.95055 0.5646770 8.285235
## 0.1421053 11.66126 0.5687350 8.180527
## 0.1842105 11.39533 0.5675687 8.126356
## 0.2263158 11.29728 0.5531254 8.214378
## 0.2684211 11.21937 0.5455604 8.290208
## 0.3105263 11.09924 0.5466052 8.263106
## 0.3526316 11.04888 0.5487770 8.227926
## 0.3947368 11.12723 0.5431076 8.245136
## 0.4368421 11.21093 0.5379914 8.299488
## 0.4789474 11.33047 0.5335253 8.369220
## 0.5210526 11.49121 0.5296504 8.452024
## 0.5631579 11.71734 0.5232284 8.570514
## 0.6052632 11.92871 0.5169482 8.705313
## 0.6473684 12.10642 0.5135509 8.831677
## 0.6894737 12.30414 0.5084839 8.956643
## 0.7315789 12.51336 0.5029586 9.108752
## 0.7736842 12.70216 0.4996455 9.253687
## 0.8157895 12.93623 0.4944714 9.407275
## 0.8578947 13.11425 0.4908026 9.537103
## 0.9000000 13.23285 0.4881201 9.630285
##
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was fraction = 0.3526316.
The optimal fraction is 0.3526316, with a resampled RMSE of 11.04888 and R^2 of 0.5487770.
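One appeal of the lasso is sparsity. caret fits method = "lasso" with the elasticnet package, so the coefficients at the selected fraction can be extracted to count how many predictors remain active (a sketch; the exact return structure of predict.enet is an assumption worth verifying):
# Coefficients at the chosen fraction; the nonzero entries are the
# predictors the lasso keeps (return structure assumed).
lassoCoefs <- predict(lassoFit$finalModel,
                      s = lassoFit$bestTune$fraction,
                      mode = "fraction",
                      type = "coefficients")$coefficients
sum(lassoCoefs != 0)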
plot(lassoFit)
lassoPred <- predict(lassoFit, TestX)
lassoValues1 <- data.frame(obs = TestY, pred = lassoPred)
defaultSummary(lassoValues1)
## RMSE Rsquared MAE
## 16.5608349 0.2083087 11.8788030
The test set RMSE is 16.5608349 and the test set estimate of R^2 is 0.2083087, the weakest test-set performance of the models fit so far.
set.seed(4)
enetFit <- train(TrainX, TrainY,
method = "enet",
tuneGrid = expand.grid(.lambda = c(0, 0.01, .1),
.fraction = seq(.05, 1, length = 20)),
trControl = trainControl(method = "cv"),
preProc = c("center", "scale"))
## Warning: model fit failed for Fold01: lambda=0.00, fraction=1 Error in if (zmin < gamhat) { : missing value where TRUE/FALSE needed
## Warning: model fit failed for Fold08: lambda=0.00, fraction=1 Error in if (zmin < gamhat) { : missing value where TRUE/FALSE needed
## Warning: model fit failed for Fold09: lambda=0.00, fraction=1 Error in if (zmin < gamhat) { : missing value where TRUE/FALSE needed
## Warning in nominalTrainWorkflow(x = x, y = y, wts = weights, info = trainInfo,
## : There were missing values in resampled performance measures.
enetFit
## Elasticnet
##
## 133 samples
## 388 predictors
##
## Pre-processing: centered (388), scaled (388)
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 120, 120, 119, 118, 120, 120, ...
## Resampling results across tuning parameters:
##
## lambda fraction RMSE Rsquared MAE
## 0.00 0.05 13.60367 0.3411740 10.252519
## 0.00 0.10 13.01068 0.3672401 9.373719
## 0.00 0.15 13.02795 0.3669037 9.079703
## 0.00 0.20 12.84501 0.3729087 8.947470
## 0.00 0.25 12.76712 0.3797844 8.960690
## 0.00 0.30 12.72145 0.3835777 9.002673
## 0.00 0.35 12.65867 0.3848945 9.032075
## 0.00 0.40 12.50102 0.3964283 8.987314
## 0.00 0.45 12.36119 0.4077676 8.928778
## 0.00 0.50 12.25379 0.4161283 8.931356
## 0.00 0.55 12.15473 0.4266032 8.887154
## 0.00 0.60 12.05800 0.4377691 8.792456
## 0.00 0.65 11.93098 0.4526419 8.683748
## 0.00 0.70 11.87263 0.4599611 8.607908
## 0.00 0.75 11.86780 0.4641193 8.584768
## 0.00 0.80 11.87473 0.4676627 8.579892
## 0.00 0.85 11.84874 0.4722429 8.571856
## 0.00 0.90 11.80311 0.4782670 8.525586
## 0.00 0.95 11.78385 0.4821372 8.471248
## 0.00 1.00 11.77030 0.4854085 8.445759
## 0.01 0.05 23.68575 0.4890447 13.731837
## 0.01 0.10 36.68140 0.4703021 19.130869
## 0.01 0.15 49.67276 0.4893495 25.189192
## 0.01 0.20 62.37387 0.5090474 31.152066
## 0.01 0.25 75.07794 0.5133499 37.019894
## 0.01 0.30 87.90897 0.5129277 42.916664
## 0.01 0.35 100.62758 0.5178019 48.787976
## 0.01 0.40 113.29466 0.5250210 54.693094
## 0.01 0.45 125.92239 0.5304432 60.851004
## 0.01 0.50 138.54356 0.5364716 66.990727
## 0.01 0.55 151.19585 0.5409660 73.138401
## 0.01 0.60 163.89630 0.5412559 79.307329
## 0.01 0.65 176.69398 0.5355985 85.544322
## 0.01 0.70 189.48128 0.5309717 91.757734
## 0.01 0.75 202.24499 0.5272231 97.941062
## 0.01 0.80 214.86978 0.5238325 104.105957
## 0.01 0.85 227.28209 0.5222732 110.249045
## 0.01 0.90 239.70173 0.5208690 116.394498
## 0.01 0.95 252.16807 0.5162703 122.579935
## 0.01 1.00 264.62364 0.5126777 128.760065
## 0.10 0.05 11.99639 0.4938224 9.098122
## 0.10 0.10 11.58657 0.4621466 8.243516
## 0.10 0.15 11.71843 0.4658549 8.259867
## 0.10 0.20 11.72949 0.4681291 8.328248
## 0.10 0.25 11.48006 0.4904683 8.215461
## 0.10 0.30 11.19284 0.5145909 8.059629
## 0.10 0.35 11.07295 0.5272563 7.979742
## 0.10 0.40 10.98169 0.5388366 7.854584
## 0.10 0.45 10.84987 0.5542696 7.704535
## 0.10 0.50 10.76561 0.5655920 7.604397
## 0.10 0.55 10.70251 0.5743298 7.519999
## 0.10 0.60 10.62086 0.5830012 7.461470
## 0.10 0.65 10.57325 0.5895564 7.454661
## 0.10 0.70 10.55429 0.5942930 7.468944
## 0.10 0.75 10.56856 0.5952866 7.493278
## 0.10 0.80 10.57820 0.5962017 7.509437
## 0.10 0.85 10.59843 0.5961642 7.527610
## 0.10 0.90 10.60711 0.5964890 7.534710
## 0.10 0.95 10.59606 0.5983857 7.529420
## 0.10 1.00 10.58250 0.6003715 7.523100
##
## RMSE was used to select the optimal model using the smallest value.
## The final values used for the model were fraction = 0.7 and lambda = 0.1.
plot(enetFit)
enetPred <- predict(enetFit, TestX)
enetValues1 <- data.frame(obs = TestY, pred = enetPred)
defaultSummary(enetValues1)
## RMSE Rsquared MAE
## 14.1795387 0.3415212 10.7570758
With the selected fraction = 0.7 and lambda = 0.1, the test set RMSE is 14.1795387 and the test set estimate of R^2 is 0.3415212. The resampling warnings and the very large RMSE values at small lambda reflect the instability of a nearly unpenalized fit when the predictors (388) far outnumber the samples (133).
Would you recommend any of your models to replace the permeability laboratory experiment?
| Model | Test RMSE | Test R^2 |
|---|---|---|
| PLS | 14.6758044 | 0.3274115 |
| Ridge | 15.0822363 | 0.3047575 |
| Lasso | 16.5608349 | 0.2083087 |
| Elastic Net | 14.1795387 | 0.3415212 |
The elastic net has the lowest test RMSE and the highest test R^2 among the four models, so it performed the best. I would recommend the elastic net model to replace the permeability laboratory experiment, with the caveat that its test R^2 of about 0.34 is modest, so the model may be better suited to screening candidate molecules than to fully replacing the assay.
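The table above can also be assembled programmatically from the prediction data frames created earlier:
# Collect the test-set metrics of all four models into one matrix.
rbind(PLS = defaultSummary(plsValues1),
      Ridge = defaultSummary(ridgeValues1),
      Lasso = defaultSummary(lassoValues1),
      "Elastic Net" = defaultSummary(enetValues1))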
A chemical manufacturing process for a pharmaceutical product was discussed in Sect. 1.4. In this problem, the objective is to understand the relationship between biological measurements of the raw materials (predictors), measurements of the manufacturing process (predictors), and the response of product yield. Biological predictors cannot be changed but can be used to assess the quality of the raw material before processing. On the other hand, manufacturing process predictors can be changed in the manufacturing process. Improving product yield by 1 % will boost revenue by approximately one hundred thousand dollars per batch:
Start R and use these commands to load the data:
data(ChemicalManufacturingProcess)
The matrix processPredictors contains the 57 predictors
(12 describing the input biological material and 45 describing the
process predictors) for the 176 manufacturing runs. yield
contains the percent yield for each run.
A small percentage of cells in the predictor set contain missing values. Use an imputation function to fill in these missing values (e.g., see Sect. 3.8).
# Referring to the text, p. 54: impute missing values via k-nearest neighbors
library(RANN)
trans <- preProcess(ChemicalManufacturingProcess,
                    method = "knnImpute")
transformed <- predict(trans, ChemicalManufacturingProcess)
sum(is.na(transformed))
## [1] 0
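For reference, the extent of the missingness can be checked directly before imputing (an optional look not shown above):
# Total missing cells, and the predictors with the most missing values.
sum(is.na(ChemicalManufacturingProcess))
head(sort(colSums(is.na(ChemicalManufacturingProcess)), decreasing = TRUE))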
Split the data into a training and a test set, pre-process the data, and tune a model of your choice from this chapter. What is the optimal value of the performance metric?
chem_filtered <- transformed[, -nearZeroVar(transformed)]
set.seed(6)
trainingRows <- createDataPartition(chem_filtered$Yield,
                                    p = .80,
                                    list = FALSE)
TrainX <- chem_filtered[trainingRows, -1]
TrainY <- chem_filtered[trainingRows, 1]
TestX <- chem_filtered[-trainingRows, -1]
TestY <- chem_filtered[-trainingRows, 1]
plsfit <- train(TrainX, TrainY,
method = "pls",
tuneLength = 20,
trControl = trainControl(method = "cv"),
preProc = c("center", "scale"))
plsfit
## Partial Least Squares
##
## 144 samples
## 56 predictor
##
## Pre-processing: centered (56), scaled (56)
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 129, 131, 129, 130, 128, 128, ...
## Resampling results across tuning parameters:
##
## ncomp RMSE Rsquared MAE
## 1 0.8098087 0.3906361 0.6341267
## 2 0.8799861 0.4550474 0.6085317
## 3 0.8587511 0.5118507 0.6053258
## 4 1.0583470 0.5074134 0.6642414
## 5 1.3125516 0.4734144 0.7500397
## 6 1.3378854 0.4926556 0.7392859
## 7 1.4081861 0.4952382 0.7707932
## 8 1.4447883 0.5134021 0.7874488
## 9 1.4419237 0.4894127 0.8076173
## 10 1.4870557 0.4751936 0.8238646
## 11 1.4837750 0.4587447 0.8262115
## 12 1.4645264 0.4464050 0.8239693
## 13 1.4193131 0.4380433 0.8134875
## 14 1.3470804 0.4396263 0.7864815
## 15 1.2391253 0.4584315 0.7534894
## 16 1.2069641 0.4692778 0.7402663
## 17 1.2267787 0.4812788 0.7391359
## 18 1.2157897 0.4905495 0.7331337
## 19 1.2723112 0.4941548 0.7391426
## 20 1.3882389 0.4919690 0.7597895
##
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was ncomp = 1.
The optimal ncomp is 1, with a resampled RMSE of 0.8098087 and R^2 of 0.3906361.
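Although RMSE selects ncomp = 1, the resampled R^2 is higher at several larger values of ncomp; caret's plot method can display the alternative metric for comparison:
# Plot the resampled R-squared profile instead of RMSE; the two
# metrics disagree here about the best number of components.
plot(plsfit, metric = "Rsquared")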
Predict the response for the test set. What is the value of the performance metric and how does this compare with the resampled performance metric on the training set?
plsPred <- predict(plsfit, TestX)
plsValues1 <- data.frame(obs = TestY, pred = plsPred)
defaultSummary(plsValues1)
## RMSE Rsquared MAE
## 0.7909615 0.3543885 0.6676219
The test set RMSE is 0.7909615 and the test set estimate of R^2 is 0.3543885. Both the test RMSE and the test R^2 are slightly lower than the resampled training estimates (0.8098087 and 0.3906361). Because the test and resampled estimates are close, the model appears to generalize reasonably well rather than overfit.
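The two sets of estimates can be placed side by side with caret's getTrainPerf and postResample helpers:
# Resampled (cross-validated) performance of the selected model...
getTrainPerf(plsfit)
# ...versus performance on the held-out test set.
postResample(pred = plsPred, obs = TestY)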
Which predictors are most important in the model you have trained? Do either the biological or process predictors dominate the list?
To optimize yield, we can focus on adjusting the top predictors, such as ManufacturingProcess32 and ManufacturingProcess36. Neither the biological nor the process predictors clearly dominate the list.
varimps <- varImp(plsfit, scale = TRUE)
##
## Attaching package: 'pls'
## The following object is masked from 'package:caret':
##
## R2
## The following object is masked from 'package:stats':
##
## loadings
plot(varimps, top = 25)
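To put a count behind the claim about predictor types, the top of the importance ranking can be tabulated by name prefix (a sketch; it assumes the predictors are named BiologicalMaterialXX or ManufacturingProcessXX, as in this data set):
# Classify the 25 most important predictors by type and count them.
imp <- varimps$importance
top25 <- rownames(imp)[order(imp$Overall, decreasing = TRUE)][1:25]
table(ifelse(grepl("^Biological", top25), "Biological", "Process"))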
Explore the relationships between each of the top predictors and the response. How could this information be helpful in improving yield in future runs of the manufacturing process?
The correlation plot reveals the following:

- Certain manufacturing process predictors show a strong positive correlation with yield (ManufacturingProcess32, ManufacturingProcess09). Prioritizing control of these process steps can improve yield.
- Certain manufacturing process predictors show a strong negative correlation with yield (ManufacturingProcess13, ManufacturingProcess17), so these steps warrant tighter control as well.
- Among the top predictors, all biological materials show a positive correlation with yield. Knowing this can help identify high-quality raw material before processing.
- There are strong positive correlations among the biological materials themselves, indicating multicollinearity. This can make it difficult to interpret each predictor's individual effect on yield.
library(dplyr)
##
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
library(corrplot)
## corrplot 0.95 loaded
##
## Attaching package: 'corrplot'
## The following object is masked from 'package:pls':
##
## corrplot
top_predictors <- varImp(plsfit)$importance %>%
mutate(Predictor = rownames(.)) %>%
arrange(desc(Overall)) %>%
head(10)
top_predictor_names <- top_predictors$Predictor
corrplot(cor(chem_filtered[, c("Yield",top_predictor_names )]))
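Beyond the correlation matrix, plotting yield directly against the top-ranked predictor (ManufacturingProcess32, per the importance plot above) makes the relationship concrete; note that both variables are on the transformed, centered-and-scaled scale:
# Yield vs. the most important predictor; an upward trend supports
# prioritizing tighter control of this process step.
plot(chem_filtered$ManufacturingProcess32, chem_filtered$Yield,
     xlab = "ManufacturingProcess32 (transformed)",
     ylab = "Yield (transformed)")
abline(lm(Yield ~ ManufacturingProcess32, data = chem_filtered), col = "red")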