Question 1:
Developing a model to predict permeability (see Sect. 1.4) could save significant resources for a pharmaceutical company, while at the same time more rapidly identifying molecules that have a sufficient permeability to become a drug:
a) Start R and use these commands to load the data:
> library(AppliedPredictiveModeling)
> data(permeability)
The matrix fingerprints contains the 1,107 binary molecular predictors for the 165 compounds, while permeability contains the permeability response.
Answer:
data("permeability")
dim(fingerprints)
## [1] 165 1107
dim(permeability)
## [1] 165 1
The objects have the expected dimensions: fingerprints is a 165 x 1,107 matrix of binary molecular predictors for the 165 compounds, and permeability is a 165 x 1 matrix holding the permeability response.
b) The fingerprint predictors indicate the presence or absence of substructures of a molecule and are often sparse, meaning that relatively few of the molecules contain each substructure. Filter out the predictors that have low frequencies using the nearZeroVar function from the caret package. How many predictors are left for modeling?
lowfreq <- nearZeroVar(fingerprints)
fingerprints <- fingerprints[, -lowfreq]
dim(fingerprints)
## [1] 165 388
Of the original 1,107 predictors, 388 remain for modeling after removing the near-zero-variance columns.
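As an aside, nearZeroVar can also return the metrics (frequency ratio and percent of unique values) it uses to flag sparse columns, which shows why each predictor was dropped. A short sketch, run on the original unfiltered fingerprints matrix (it has already been overwritten above):
# Sketch: inspect the near-zero-variance metrics behind the filtering
nzv_metrics <- nearZeroVar(fingerprints, saveMetrics = TRUE)
head(nzv_metrics[nzv_metrics$nzv, ])   # a few of the flagged (dropped) predictors
table(nzv_metrics$nzv)                 # how many predictors are flagged vs. kept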
c) Split the data into a training and a test set, pre-process the data, and tune a PLS model. How many latent variables are optimal and what is the corresponding resampled estimate of \(R^2\)?
set.seed(624)
# index for training
index <- createDataPartition(permeability, p = .8, list = FALSE)
# train
train_perm <- permeability[index, ]
train_fp <- fingerprints[index, ]
# test
test_perm <- permeability[-index, ]
test_fp <- fingerprints[-index, ]
# 10-fold cross-validation to make reasonable estimates
ctrl <- trainControl(method = "cv", number = 10)
plsTune <- train(train_fp, train_perm, method = "pls", metric = "Rsquared",
tuneLength = 20, trControl = ctrl, preProc = c("center", "scale"))
plot(plsTune)
plsTune
## Partial Least Squares
##
## 133 samples
## 388 predictors
##
## Pre-processing: centered (388), scaled (388)
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 118, 119, 120, 120, 121, 120, ...
## Resampling results across tuning parameters:
##
## ncomp RMSE Rsquared MAE
## 1 13.25656 0.2953172 10.202424
## 2 11.90191 0.4662745 8.710959
## 3 12.05579 0.4624570 9.271134
## 4 12.03297 0.4793759 9.314195
## 5 12.06645 0.4870447 8.986610
## 6 11.82964 0.5012972 8.788270
## 7 11.91363 0.5011672 9.153386
## 8 11.79990 0.4960881 9.119966
## 9 11.81946 0.4959475 9.301787
## 10 11.85288 0.4924486 9.176136
## 11 11.79654 0.5025443 9.128199
## 12 11.62869 0.5131115 8.965070
## 13 11.78348 0.5080595 8.920097
## 14 12.01377 0.4935108 9.101865
## 15 12.09297 0.4862359 9.131109
## 16 12.10087 0.4953868 9.053161
## 17 12.38093 0.4847366 9.218277
## 18 12.59348 0.4768971 9.402569
## 19 12.61895 0.4807222 9.338592
## 20 12.77045 0.4682401 9.549745
##
## Rsquared was used to select the optimal model using the largest value.
## The final value used for the model was ncomp = 12.
The optimal model uses 12 latent variables, with a corresponding resampled \(R^2\) of 0.5131.
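The same numbers can be pulled out of the train object programmatically rather than read off the table; a small sketch:
# Sketch: extract the chosen number of components and its resampled performance
plsTune$bestTune$ncomp
subset(plsTune$results, ncomp == plsTune$bestTune$ncomp)[, c("ncomp", "RMSE", "Rsquared")]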
d) Predict the response for the test set. What is the test set estimate of \(R^2\)?
fp_predict <- predict(plsTune, test_fp)
postResample(fp_predict, test_perm)
## RMSE Rsquared MAE
## 11.4895371 0.4741832 9.3113125
The test set estimate of \(R^2\) is 0.4741832.
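caret's postResample reports \(R^2\) as the squared correlation between the observed and predicted values, so the same estimate can be reproduced directly as a quick check:
# Quick check: the R^2 reported above equals the squared correlation
cor(fp_predict, test_perm)^2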
e) Try building other models discussed in this chapter. Do any have better predictive performance?
Elastic Net Regression Model:
set.seed(624)
# grid of penalties
enetGrid <- expand.grid(.lambda = c(0, 0.01, .1), .fraction = seq(.05, 1, length = 20))
# tuning penalized regression model
enetTune <- train(train_fp, train_perm, method = "enet",
tuneGrid = enetGrid, trControl = ctrl, preProc = c("center", "scale"))
plot(enetTune)
enetTune
## Elasticnet
##
## 133 samples
## 388 predictors
##
## Pre-processing: centered (388), scaled (388)
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 121, 118, 119, 121, 119, 120, ...
## Resampling results across tuning parameters:
##
## lambda fraction RMSE Rsquared MAE
## 0.00 0.05 12.53022 0.4195116 9.219840
## 0.00 0.10 11.83087 0.4482164 8.465318
## 0.00 0.15 11.94876 0.4475951 8.738893
## 0.00 0.20 11.98463 0.4484245 8.890257
## 0.00 0.25 11.79106 0.4640731 8.861128
## 0.00 0.30 11.66486 0.4742758 8.848355
## 0.00 0.35 11.72130 0.4749181 8.912993
## 0.00 0.40 11.90989 0.4695334 8.998669
## 0.00 0.45 12.30625 0.4529253 9.193462
## 0.00 0.50 12.68544 0.4362691 9.340897
## 0.00 0.55 13.00847 0.4214423 9.471613
## 0.00 0.60 13.29088 0.4089528 9.590155
## 0.00 0.65 13.44531 0.4004275 9.646439
## 0.00 0.70 13.61111 0.3907010 9.737890
## 0.00 0.75 13.82251 0.3811970 9.865902
## 0.00 0.80 13.94077 0.3755526 9.951804
## 0.00 0.85 14.00446 0.3726513 10.006198
## 0.00 0.90 14.15593 0.3660339 10.096242
## 0.00 0.95 14.35377 0.3580099 10.182112
## 0.00 1.00 14.48818 0.3547683 10.231509
## 0.01 0.05 12.91277 0.3962042 9.210662
## 0.01 0.10 14.49519 0.3998262 10.363051
## 0.01 0.15 16.02608 0.4115378 11.216868
## 0.01 0.20 17.63731 0.4189926 12.226957
## 0.01 0.25 19.54774 0.4096559 13.401832
## 0.01 0.30 21.65430 0.3895637 14.606572
## 0.01 0.35 23.60756 0.3748645 15.739320
## 0.01 0.40 25.60311 0.3611846 16.904536
## 0.01 0.45 27.57928 0.3504770 18.090540
## 0.01 0.50 29.55483 0.3408871 19.320808
## 0.01 0.55 31.52744 0.3320255 20.544216
## 0.01 0.60 33.47925 0.3280021 21.731405
## 0.01 0.65 35.46538 0.3231206 22.929380
## 0.01 0.70 37.43374 0.3183345 24.106466
## 0.01 0.75 39.37733 0.3156452 25.290183
## 0.01 0.80 41.31187 0.3139363 26.468389
## 0.01 0.85 43.29420 0.3110890 27.711822
## 0.01 0.90 45.37309 0.3072853 28.998914
## 0.01 0.95 47.41765 0.3041533 30.254470
## 0.01 1.00 49.32673 0.3034756 31.415359
## 0.10 0.05 12.55808 0.4101914 9.475823
## 0.10 0.10 12.00149 0.4328341 8.560342
## 0.10 0.15 12.08155 0.4311345 8.705317
## 0.10 0.20 12.22936 0.4267428 8.969534
## 0.10 0.25 12.11297 0.4349270 8.932117
## 0.10 0.30 12.04598 0.4378837 8.978154
## 0.10 0.35 12.04834 0.4369033 9.003314
## 0.10 0.40 12.08992 0.4341602 9.037805
## 0.10 0.45 12.16888 0.4291846 9.099988
## 0.10 0.50 12.25788 0.4233290 9.143842
## 0.10 0.55 12.32683 0.4187918 9.177390
## 0.10 0.60 12.41168 0.4130270 9.253065
## 0.10 0.65 12.47522 0.4078620 9.287724
## 0.10 0.70 12.53494 0.4039400 9.335168
## 0.10 0.75 12.58185 0.4016082 9.363731
## 0.10 0.80 12.61736 0.4001737 9.380885
## 0.10 0.85 12.65290 0.3987804 9.398010
## 0.10 0.90 12.69026 0.3974567 9.419637
## 0.10 0.95 12.73757 0.3953611 9.442814
## 0.10 1.00 12.77339 0.3939567 9.457362
##
## RMSE was used to select the optimal model using the smallest value.
## The final values used for the model were fraction = 0.3 and lambda = 0.
enet_predict <- predict(enetTune, test_fp)
postResample(enet_predict, test_perm)
## RMSE Rsquared MAE
## 11.9836978 0.4157479 9.7314675
Least Angle Regression:
set.seed(624)
larsTune <- train(train_fp, train_perm, method = "lars", metric = "Rsquared",
tuneLength = 20, trControl = ctrl, preProc = c("center", "scale"))
plot(larsTune)
lars_predict <- predict(larsTune, test_fp)
postResample(lars_predict, test_perm)
## RMSE Rsquared MAE
## 11.6906258 0.4342625 9.6425013
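Since all three Question 1 models were tuned with 10-fold cross-validation, caret's resamples() can collect their resampled metrics for a side-by-side comparison; a minimal sketch (note that the fold assignments are not guaranteed to be identical across the three train calls):
# Sketch: compare the cross-validated performance of the three models
resamps <- resamples(list(PLS = plsTune, ENET = enetTune, LARS = larsTune))
summary(resamps)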
f) Would you recommend any of your models to replace the permeability laboratory experiment?
I would recommend the Partial Least Squares model, as it produced the best test-set statistics of the three models: the highest \(R^2\) and the lowest RMSE and MAE.
Question 2:
A chemical manufacturing process for a pharmaceutical product was discussed in Sect. 1.4. In this problem, the objective is to understand the relationship between biological measurements of the raw materials (predictors), measurements of the manufacturing process (predictors), and the response of product yield. Biological predictors cannot be changed but can be used to assess the quality of the raw material before processing. On the other hand, manufacturing process predictors can be changed in the manufacturing process. Improving product yield by 1% will boost revenue by approximately one hundred thousand dollars per batch:
a) Start R and use these commands to load the data:
> library(AppliedPredictiveModeling)
> data(chemicalManufacturing)
The matrix processPredictors contains the 57 predictors (12 describing the input biological material and 45 describing the process predictors) for the 176 manufacturing runs. yield contains the percent yield for each run.
data("ChemicalManufacturingProcess")
b) A small percentage of cells in the predictor set contain missing values. Use an imputation function to fill in these missing values (e.g., see Sect. 3.8).
sum(is.na(ChemicalManufacturingProcess))
## [1] 106
There are 106 missing values in the data, so they need to be imputed before the data can be fed to a model.
miss <- preProcess(ChemicalManufacturingProcess, method = "bagImpute")
Chemical <- predict(miss, ChemicalManufacturingProcess)
Bagged-tree imputation was used: for each predictor with missing values, a bagged tree model is built from all of the other variables, and its predictions fill in the gaps.
sum(is.na(Chemical))
## [1] 0
The data now contain no missing values.
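If it is useful to know where the gaps were, the missing counts can also be tabulated per column on the original data before imputation; a short sketch:
# Sketch: number of missing values per column of the original data
na_counts <- colSums(is.na(ChemicalManufacturingProcess))
sort(na_counts[na_counts > 0], decreasing = TRUE)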
c) Split the data into a training and a test set, pre-process the data, and tune a model of your choice from this chapter. What is the optimal value of the performance metric?
Before fitting the models, split the data into training and test sets.
set.seed(624)
Chemical <- Chemical[, -nearZeroVar(Chemical)]
# index for training
index <- createDataPartition(Chemical$Yield, p = .8, list = FALSE)
# train
train_chem <- Chemical[index, ]
# test
test_chem <- Chemical[-index, ]
With the data split, we can fit the candidate models.
Partial Least Squares (PLS):
set.seed(624)
plsTune <- train(Yield ~ ., Chemical , method = "pls",
tuneLength = 20, trControl = ctrl, preProc = c("center", "scale"))
plot(plsTune)
plsTune
## Partial Least Squares
##
## 176 samples
## 56 predictor
##
## Pre-processing: centered (56), scaled (56)
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 160, 157, 158, 159, 158, 159, ...
## Resampling results across tuning parameters:
##
## ncomp RMSE Rsquared MAE
## 1 1.436891 0.4568805 1.147590
## 2 1.872742 0.4711564 1.185897
## 3 1.292614 0.5633698 1.020010
## 4 1.480526 0.5381868 1.085319
## 5 1.707358 0.5156812 1.131007
## 6 1.821904 0.4903840 1.156300
## 7 2.006142 0.4802835 1.211850
## 8 2.092370 0.4622998 1.253598
## 9 2.220647 0.4485854 1.290999
## 10 2.322021 0.4410360 1.315081
## 11 2.446697 0.4264842 1.352393
## 12 2.475260 0.4206118 1.367989
## 13 2.464162 0.4197124 1.377783
## 14 2.418661 0.4227280 1.364288
## 15 2.375812 0.4242080 1.350175
## 16 2.368363 0.4267259 1.337946
## 17 2.386174 0.4254577 1.339398
## 18 2.376321 0.4271412 1.334877
## 19 2.400843 0.4255697 1.348122
## 20 2.422785 0.4231162 1.359970
##
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was ncomp = 3.
The optimal PLS model has 3 components, with a resampled \(R^2\) of 0.5634.
Least Angle Regression (LAR):
set.seed(624)
larsTune <- train(Yield ~ ., Chemical , method = "lars", metric = "Rsquared",
tuneLength = 20, trControl = ctrl, preProc = c("center", "scale"))
plot(larsTune)
larsTune
## Least Angle Regression
##
## 176 samples
## 56 predictor
##
## Pre-processing: centered (56), scaled (56)
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 160, 157, 158, 159, 158, 159, ...
## Resampling results across tuning parameters:
##
## fraction RMSE Rsquared MAE
## 0.05 1.268059 0.6252093 1.037325
## 0.10 1.157751 0.6229682 0.936117
## 0.15 1.163964 0.6233226 0.922333
## 0.20 1.425304 0.5567018 1.005712
## 0.25 1.717206 0.4959408 1.101890
## 0.30 1.936760 0.4676172 1.176116
## 0.35 1.922115 0.4611422 1.188942
## 0.40 1.875680 0.4585226 1.187174
## 0.45 1.838067 0.4577825 1.183074
## 0.50 1.731858 0.4637976 1.161242
## 0.55 1.508554 0.4924491 1.104212
## 0.60 1.286634 0.5676612 1.027343
## 0.65 1.637094 0.4985405 1.138608
## 0.70 2.069278 0.4782574 1.247884
## 0.75 2.478866 0.4698927 1.355965
## 0.80 2.854003 0.4595411 1.460073
## 0.85 3.231161 0.4425963 1.561937
## 0.90 3.619450 0.4262587 1.662857
## 0.95 4.056912 0.4124876 1.774808
## 1.00 4.501021 0.4019904 1.887089
##
## Rsquared was used to select the optimal model using the largest value.
## The final value used for the model was fraction = 0.05.
The optimal model has a fraction of 0.05 and \(R^2\) of 0.6252.
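At such a small fraction of the full solution path, most coefficients are still exactly zero. One way to see how many predictors the selected LARS fit actually uses is to pull the coefficients from the final model at the chosen fraction; a sketch using the lars package's predict method (the fit stored by caret is on the centered and scaled predictors):
# Sketch: coefficients of the final LARS fit at fraction = 0.05
lars_coefs <- predict(larsTune$finalModel, s = 0.05, type = "coefficients",
                      mode = "fraction")$coefficients
sum(lars_coefs != 0)   # number of predictors with non-zero coefficients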
Ridge:
set.seed(624)
## Define the candidate set of values
ridgeGrid <- data.frame(.lambda = seq(0, .1, length = 15))
ridgeTune <- train(Yield ~ ., Chemical , method = "ridge",
tuneGrid = ridgeGrid, trControl = ctrl, preProc = c("center", "scale"))
plot(ridgeTune)
ridgeTune
## Ridge Regression
##
## 176 samples
## 56 predictor
##
## Pre-processing: centered (56), scaled (56)
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 160, 157, 158, 159, 158, 159, ...
## Resampling results across tuning parameters:
##
## lambda RMSE Rsquared MAE
## 0.000000000 4.501021 0.4019904 1.887089
## 0.007142857 1.951471 0.4493837 1.224896
## 0.014285714 2.093121 0.4429601 1.247997
## 0.021428571 2.098017 0.4475006 1.242244
## 0.028571429 2.077066 0.4519785 1.232565
## 0.035714286 2.050861 0.4560167 1.222873
## 0.042857143 2.024673 0.4596597 1.213933
## 0.050000000 1.999988 0.4629748 1.206382
## 0.057142857 1.977156 0.4660169 1.200121
## 0.064285714 1.956157 0.4688278 1.194388
## 0.071428571 1.936855 0.4714397 1.189526
## 0.078571429 1.919085 0.4738781 1.185097
## 0.085714286 1.902686 0.4761635 1.180984
## 0.092857143 1.887513 0.4783129 1.177421
## 0.100000000 1.873437 0.4803403 1.174175
##
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was lambda = 0.1.
The optimal model has λ of 0.1 and \(R^2\) of 0.4803403.
d) Predict the response for the test set. What is the value of the performance metric and how does this compare with the resampled performance metric on the training set?
Of the three models tuned above, the LARS model was chosen because it had the highest resampled \(R^2\).
lars_predict <- predict(larsTune, test_chem[ ,-1])
postResample(lars_predict, test_chem[ ,1])
## RMSE Rsquared MAE
## 1.399505 0.718109 1.095894
The test-set \(R^2\) is 0.718, which is higher than the resampled training estimate of 0.625.
e) Which predictors are most important in the model you have trained? Do either the biological or process predictors dominate the list?
varImp(larsTune)
## loess r-squared variable importance
##
## only 20 most important variables shown (out of 56)
##
## Overall
## ManufacturingProcess32 100.00
## ManufacturingProcess13 90.02
## BiologicalMaterial06 84.56
## ManufacturingProcess36 76.03
## ManufacturingProcess17 74.88
## BiologicalMaterial03 73.53
## ManufacturingProcess09 70.37
## BiologicalMaterial12 67.98
## BiologicalMaterial02 65.33
## ManufacturingProcess31 60.38
## ManufacturingProcess06 58.03
## ManufacturingProcess33 49.39
## BiologicalMaterial11 48.11
## BiologicalMaterial04 47.13
## ManufacturingProcess11 42.47
## BiologicalMaterial08 41.88
## BiologicalMaterial01 39.14
## ManufacturingProcess12 33.02
## ManufacturingProcess30 32.91
## BiologicalMaterial09 32.41
The five most important variables are ManufacturingProcess32, ManufacturingProcess13, BiologicalMaterial06, ManufacturingProcess36, and ManufacturingProcess17. Process predictors dominate the list: among the top 20, the ratio of process to biological predictors is 11:9.
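The 11:9 count can be verified directly from the importance scores; a small sketch, assuming the standard ManufacturingProcess/BiologicalMaterial naming of the columns:
# Sketch: tally process vs. biological predictors among the top 20
imp <- varImp(larsTune)$importance
top20 <- rownames(imp)[order(-imp$Overall)][1:20]
table(ifelse(grepl("^Manufacturing", top20), "Process", "Biological"))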
f) Explore the relationships between each of the top predictors and the response. How could this information be helpful in improving yield in future runs of the manufacturing process?
library(dplyr)
library(corrplot)
top10 <- varImp(larsTune)$importance %>%
  arrange(-Overall) %>%
  head(10)
Chemical %>%
  select(c("Yield", row.names(top10))) %>%
  cor() %>%
  corrplot()
Based on the correlation plot, ManufacturingProcess32 has the strongest positive correlation with Yield, while three of the top ten predictors are negatively correlated with Yield. This information could be helpful in future runs of the manufacturing process: because these predictors strongly influence yield, adjusting the controllable manufacturing-process variables in the directions indicated by these correlations, and screening raw materials on the important biological measurements, may help improve yield.
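To attach numbers to the plot, the correlation of each top-ten predictor with Yield can also be printed directly; a sketch:
# Sketch: correlations between Yield and the top-ten predictors, sorted
yield_cor <- cor(Chemical[, row.names(top10)], Chemical$Yield)
sort(yield_cor[, 1])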