suppressPackageStartupMessages({
  library(caret)
  library(pls)
  library(VIM)
  library(dplyr)
  library(tidyr)
})
Developing a model to predict permeability could save significant resources for a pharmaceutical company, while at the same time more rapidly identifying molecules that have a sufficient permeability to become a drug:
Start R and use these commands to load the data:
library(AppliedPredictiveModeling)
data(permeability)
The matrix “fingerprints” contains 1,107 binary molecular predictors for the 165 compounds, while “permeability” contains the permeability response.
The fingerprint predictors indicate the presence or absence of substructures of a molecule and are often sparse, meaning that relatively few of the molecules contain each substructure. Filter out the predictors that have low frequencies using the nearZeroVar function from the caret package. How many predictors are left for modeling?
fingerprints <- as.data.frame(fingerprints)
fingerprints_near_zero_var <- nearZeroVar(fingerprints)
#remove predictors with low frequencies
fingerprints <- fingerprints[,-fingerprints_near_zero_var]
dim(fingerprints)
## [1] 165 388
nearZeroVar flagged 719 sparse predictors; removing them leaves 388 of the original 1,107 for modeling.
Split the data into a training and a test set, pre-process the data, and tune a PLS model. How many latent variables are optimal and what is the corresponding resampled estimate of \(R^2\)?
#check for missing values
sum(is.na(fingerprints))
## [1] 0
There are no missing values in the data, and since the predictors are binary, centering, scaling, or transforming them would accomplish little. We could look for highly correlated predictors and remove them, but that is unnecessary here: PLS is the recommended linear-regression-type model precisely when the predictors are correlated.
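As a quick check of how much collinearity the model faces, we could examine the pairwise correlations among the filtered fingerprints (a minimal sketch; the 0.9 cutoff is only an illustration):
cor_mat <- cor(fingerprints)
#proportion of predictor pairs with absolute correlation above 0.9
mean(abs(cor_mat[upper.tri(cor_mat)]) > 0.9)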
#create train and test sets
set.seed(1989)
trainingRows <- createDataPartition(permeability, p=0.80, list=FALSE)
fingerprints_train <- fingerprints[trainingRows,]
fingerprints_test <- fingerprints[-trainingRows,]
permeability_train <- permeability[trainingRows]
permeability_test <- permeability[-trainingRows]
#combine predictors and response into one train set
fingerprints_train_combined <- cbind(fingerprints_train, permeability = permeability_train)
fingerprints_train_combined <- as.data.frame(fingerprints_train_combined)
#fit model
plsFit <- train(permeability ~ ., data = fingerprints_train_combined,
                method = "pls", tuneLength = 10,
                trControl = trainControl(method = "cv"))
plsFit$bestTune
## ncomp
## 8 8
plsFit$results %>%
filter(ncomp == 8)
## ncomp RMSE Rsquared MAE RMSESD RsquaredSD MAESD
## 1 8 10.76961 0.5440447 8.112289 2.121611 0.2033106 1.673211
Eight latent variables are optimal, and the corresponding resampled value of \(R^2\) is approximately 0.54.
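To see how the resampled RMSE changes across the candidate numbers of components, the tuning profile can be plotted (output not shown):
plot(plsFit)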
Predict the response for the test set. What is the test set estimate of \(R^2\)?
#save predicted values; predict() on a train object automatically uses the optimal ncomp
permeability_predictions <- predict(plsFit, fingerprints_test)
postResample(pred = permeability_predictions, obs = permeability_test)
## RMSE Rsquared MAE
## 12.3011159 0.3964415 9.6506066
The test set estimate of \(R^2\) is approximately 0.40.
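An observed-versus-predicted plot makes the quality of the fit easier to judge than \(R^2\) alone (output not shown):
plot(permeability_predictions, permeability_test,
     xlab = "Predicted permeability", ylab = "Observed permeability")
abline(0, 1, lty = 2) #points on this line would be perfect predictions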
Try building other models discussed in this chapter. Do any have better predictive performance?
lmFit <- train(permeability ~ ., data = fingerprints_train_combined, method = "lm")
permeability_pred_lm <- predict(lmFit, fingerprints_test)
postResample(pred = permeability_pred_lm, obs = permeability_test)
## RMSE Rsquared MAE
## 29.9432697 0.1768542 18.3793439
An ordinary linear regression model has a much lower \(R^2\) of approximately 0.18. This is unsurprising: with 388 predictors and only about 130 training compounds, many of them highly correlated, ordinary least squares is unstable, whereas partial least squares handles both problems through dimension reduction.
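Another linear model from this chapter that tolerates correlated predictors is the elastic net. A minimal sketch using caret’s "glmnet" method (tuning settings are illustrative; results not shown):
set.seed(1989)
enetFit <- train(permeability ~ ., data = fingerprints_train_combined,
                 method = "glmnet", tuneLength = 10,
                 trControl = trainControl(method = "cv"))
permeability_pred_enet <- predict(enetFit, fingerprints_test)
postResample(pred = permeability_pred_enet, obs = permeability_test)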
Would you recommend any of your models to replace the permeability laboratory experiment?
I would recommend the partial least squares model with the optimal 8 latent variables, since it had the best predictive performance on the test set, with the highest \(R^2\) of the models considered.
A chemical manufacturing process for a pharmaceutical product was discussed in Sect. 1.4. In this problem, the objective is to understand the relationship between biological measurements of the raw materials (predictors), measurements of the manufacturing process (predictors), and the response of product yield. Biological predictors cannot be changed but can be used to assess the quality of the raw material before processing. On the other hand, manufacturing process predictors can be changed in the manufacturing process. Improving product yield by 1% will boost revenue by approximately one hundred thousand dollars per batch:
Start R and use these commands to load the data:
data(ChemicalManufacturingProcess)
The matrix “processPredictors” contains the 57 predictors (12 describing the input biological material and 45 describing the process predictors) for the 176 manufacturing runs. “yield” contains the percent yield for each run.
A small percentage of cells in the predictor set contain missing values. Use an imputation function to fill in these missing values (e.g., See Sect. 3.8).
#use k-nearest neighbors imputation with k=5
#note: VIM's kNN() also appends a logical *_imp indicator column for each variable
ChemicalManufacturingProcess <- kNN(ChemicalManufacturingProcess, k=5)
sum(is.na(ChemicalManufacturingProcess))
## [1] 0
All missing values have been filled in.
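The *_imp indicator columns added by kNN() are mostly removed by the near-zero-variance filter below, but a few survive and show up in the model as *_impTRUE terms. If that is undesirable, the indicators can be turned off at imputation time (this variant was not used for the results shown here):
ChemicalManufacturingProcess <- kNN(ChemicalManufacturingProcess, k = 5, imp_var = FALSE)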
Split the data into a training and a test set, pre-process the data, and tune a model of your choice from this chapter. What is the optimal value of the performance metric?
#remove predictors with near zero variance
cmp_near_zero_var <- nearZeroVar(ChemicalManufacturingProcess)
ChemicalManufacturingProcess <- ChemicalManufacturingProcess[,-cmp_near_zero_var]
#create train/test split
set.seed(1989)
trainingRows_2 <- createDataPartition(ChemicalManufacturingProcess$Yield, p=0.80, list=FALSE)
cmp_train <- ChemicalManufacturingProcess[trainingRows_2,]
cmp_test <- ChemicalManufacturingProcess[-trainingRows_2,]
#fit model with pre-processing to center and scale
plsFit_cmp <- train(Yield ~ ., data = cmp_train, method = "pls",
                    preProcess = c("center", "scale"), tuneLength = 10,
                    trControl = trainControl(method = "cv"))
plsFit_cmp$bestTune
## ncomp
## 1 1
plsFit_cmp$results %>%
filter(ncomp == 1)
## ncomp RMSE Rsquared MAE RMSESD RsquaredSD MAESD
## 1 1 1.58088 0.3678611 1.225164 0.6255172 0.2565932 0.3044073
A single latent variable is optimal, and the corresponding resampled \(R^2\) is approximately 0.37.
Predict the response for the test set. What is the value of the performance metric and how does this compare with the resampled performance metric on the training set?
#save predicted values
cmp_predictions <- predict(plsFit_cmp, cmp_test[,-1]) #drop column 1, which is Yield
postResample(pred = cmp_predictions, obs = cmp_test$Yield)
## RMSE Rsquared MAE
## 1.3838875 0.5771233 1.1289079
The test set \(R^2\) of 0.58 is higher than the resampled \(R^2\) of 0.37 on the training set. With only about 35 runs in the test set, this gap could simply reflect random variability in the split.
Which predictors are most important in the model you have trained? Do either the biological or process predictors dominate the list?
varImp(plsFit_cmp, scale = FALSE)
## pls variable importance
##
## only 20 most important variables shown (out of 59)
##
## Overall
## ManufacturingProcess32 0.12565
## ManufacturingProcess09 0.10781
## ManufacturingProcess36 0.10707
## ManufacturingProcess13 0.10434
## BiologicalMaterial06 0.08944
## BiologicalMaterial02 0.08840
## ManufacturingProcess17 0.08592
## ManufacturingProcess33 0.08452
## BiologicalMaterial03 0.08268
## ManufacturingProcess11 0.08201
## ManufacturingProcess06 0.07993
## BiologicalMaterial04 0.07936
## ManufacturingProcess12 0.07288
## BiologicalMaterial08 0.07163
## BiologicalMaterial12 0.07023
## BiologicalMaterial01 0.06668
## BiologicalMaterial11 0.06307
## ManufacturingProcess30 0.05034
## ManufacturingProcess28 0.04964
## ManufacturingProcess10 0.04929
The most important predictors are ManufacturingProcess32, 09, 36, and 13, so at first glance the manufacturing processes dominate the list. However, the top 20 includes 8 of the 11 biological materials that remain in the model but only 12 of the 45 manufacturing process predictors, so the biological materials are actually overrepresented relative to their numbers.
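The same ranking can be viewed graphically (output not shown):
plot(varImp(plsFit_cmp, scale = FALSE), top = 20)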
Explore the relationships between each of the top predictors and the response. How could this information be helpful in improving yield in future runs of the manufacturing process?
coef(plsFit_cmp$finalModel)
## , , 1 comps
##
## .outcome
## BiologicalMaterial01 0.066684067
## BiologicalMaterial02 0.088398581
## BiologicalMaterial03 0.082677753
## BiologicalMaterial04 0.079355531
## BiologicalMaterial05 0.017760694
## BiologicalMaterial06 0.089440165
## BiologicalMaterial08 0.071628233
## BiologicalMaterial09 0.011975948
## BiologicalMaterial10 0.039830447
## BiologicalMaterial11 0.063066639
## BiologicalMaterial12 0.070233599
## ManufacturingProcess01 -0.027608956
## ManufacturingProcess02 -0.037281194
## ManufacturingProcess03 -0.011048430
## ManufacturingProcess04 -0.044043066
## ManufacturingProcess05 0.022409935
## ManufacturingProcess06 0.079933377
## ManufacturingProcess07 -0.008725332
## ManufacturingProcess08 -0.004327776
## ManufacturingProcess09 0.107814688
## ManufacturingProcess10 0.049293158
## ManufacturingProcess11 0.082010562
## ManufacturingProcess12 0.072879332
## ManufacturingProcess13 -0.104337624
## ManufacturingProcess14 -0.007646035
## ManufacturingProcess15 0.041968495
## ManufacturingProcess16 -0.008986411
## ManufacturingProcess17 -0.085918350
## ManufacturingProcess18 -0.014399902
## ManufacturingProcess19 0.023322608
## ManufacturingProcess20 -0.014913667
## ManufacturingProcess21 -0.003226452
## ManufacturingProcess22 -0.008000476
## ManufacturingProcess23 -0.025907144
## ManufacturingProcess24 -0.042591433
## ManufacturingProcess25 0.001234298
## ManufacturingProcess26 0.007215405
## ManufacturingProcess27 0.001038972
## ManufacturingProcess28 0.049641819
## ManufacturingProcess29 0.029384143
## ManufacturingProcess30 0.050339606
## ManufacturingProcess31 -0.013790702
## ManufacturingProcess32 0.125645436
## ManufacturingProcess33 0.084521003
## ManufacturingProcess34 0.043192932
## ManufacturingProcess35 -0.035032901
## ManufacturingProcess36 -0.107070678
## ManufacturingProcess37 -0.026195483
## ManufacturingProcess38 -0.017238444
## ManufacturingProcess39 0.005642967
## ManufacturingProcess40 -0.005278148
## ManufacturingProcess41 -0.001685532
## ManufacturingProcess42 -0.005030122
## ManufacturingProcess43 0.040296414
## ManufacturingProcess44 0.015201925
## ManufacturingProcess45 0.010461819
## ManufacturingProcess03_impTRUE 0.037414303
## ManufacturingProcess10_impTRUE 0.042944137
## ManufacturingProcess11_impTRUE 0.042765451
ManufacturingProcess32 and 09 have positive coefficients and thus a positive influence on yield, while ManufacturingProcess36 and 13 have negative coefficients. Since the process predictors, unlike the biological materials, can be changed, I would recommend adjusting processes 36 and 13 to mitigate their negative influence on the overall yield, and reinforcing the settings behind processes 32 and 09.
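To explore these relationships directly rather than through coefficients alone, the top predictors can be plotted against the response (a minimal sketch; plots not shown):
top_predictors <- c("ManufacturingProcess32", "ManufacturingProcess09",
                    "ManufacturingProcess36", "ManufacturingProcess13")
featurePlot(x = cmp_train[, top_predictors], y = cmp_train$Yield,
            plot = "scatter", type = c("p", "smooth"))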