6.2. Developing a model to predict permeability (see Sect. 1.4) could save significant resources for a pharmaceutical company, while at the same time more rapidly identifying molecules that have a sufficient permeability to become a drug:

The matrix fingerprints contains the 1,107 binary molecular predictors for the 165 compounds, while permeability contains the permeability response.

(a) Start R and use these commands to load the data:
library(AppliedPredictiveModeling)
library(caret)
## Loading required package: ggplot2
## Loading required package: lattice
library(dplyr)
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
library(corrplot)
## Warning: package 'corrplot' was built under R version 4.3.3
## corrplot 0.94 loaded
data(permeability)
(b) The fingerprint predictors indicate the presence or absence of substructures of a molecule and are often sparse, meaning that relatively few of the molecules contain each substructure. Filter out the predictors that have low frequencies using the nearZeroVar function from the caret package. How many predictors are left for modeling?
dim(fingerprints)
## [1]  165 1107
fingerprints <- fingerprints[, -nearZeroVar(fingerprints)]

dim(fingerprints)
## [1] 165 388

Filtering out the near-zero-variance fingerprints removes 719 of the 1,107 predictors, leaving 388 for modeling.
(c) Split the data into a training and a test set, pre-process the data, and tune a PLS model. How many latent variables are optimal and what is the corresponding resampled estimate of R²?
set.seed(213)

# index for training
index <- createDataPartition(permeability, p = .7, list = FALSE)

# training sets
train_perm <- permeability[index, ]
train_fp <- fingerprints[index, ]
# testing sets
test_perm <- permeability[-index, ]
test_fp <- fingerprints[-index, ]

# 10-fold cross-validation to make reasonable estimates
ctrl <- trainControl(method = "cv", number = 10)

pls_model <- train(train_fp, train_perm, method = "pls", metric = "Rsquared",
                   tuneLength = 20, trControl = ctrl, preProc = c("center", "scale"))

plot(pls_model)
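To read the optimum off numerically rather than from the tuning plot, the results stored in the train object can be queried directly (a minimal sketch using standard caret accessors):

# optimal number of latent variables chosen by resampling
pls_model$bestTune

# resampled RMSE and R-squared at that ncomp
subset(pls_model$results, ncomp == pls_model$bestTune$ncomp,
       select = c(ncomp, RMSE, Rsquared))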

(d) Predict the response for the test set. What is the test set estimate of R²?
# Make predictions on the test set
pls_predictions <- predict(pls_model, test_fp)

# Calculate test set R-squared for PLS
test_R2 <- postResample(pls_predictions, test_perm)[["Rsquared"]]
print(test_R2)
## [1] 0.4687978
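An observed-versus-predicted plot (a quick base-graphics sketch) helps judge whether that R² is driven by a handful of extreme compounds:

# observed vs. predicted permeability on the test set
plot(test_perm, pls_predictions,
     xlab = "Observed permeability", ylab = "Predicted permeability")
abline(0, 1, lty = 2)  # ideal 45-degree line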
(e) Try building other models discussed in this chapter. Do any have better predictive performance?
ridgeTune <- train(
  x = train_fp, y = train_perm,
  method = "glmnet",
  tuneGrid = expand.grid(alpha = 0, lambda = seq(0.001, 1, length = 20)), 
  metric = "Rsquared",
  trControl = ctrl,
  preProc = c("center", "scale")
)

plot(ridgeTune)
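Since glmnet covers the whole elastic net family, a lasso fit is a natural companion to ridge; a sketch (lassoTune is a hypothetical name; alpha = 1 selects the lasso penalty):

lassoTune <- train(
  x = train_fp, y = train_perm,
  method = "glmnet",
  tuneGrid = expand.grid(alpha = 1, lambda = seq(0.001, 1, length = 20)),
  metric = "Rsquared",
  trControl = ctrl,
  preProc = c("center", "scale")
)

# test-set performance for comparison
postResample(predict(lassoTune, test_fp), test_perm)[["Rsquared"]]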

(f) Would you recommend any of your models to replace the permeability laboratory experiment?
# Optimal lambda
best_lambda <- ridgeTune$bestTune$lambda
best_lambda
## [1] 1

The selected lambda sits at the upper boundary of the tuning grid, so a wider grid might be worth exploring.
# Predict on the test set
ridge_predictions <- predict(ridgeTune, newdata = test_fp)

# Calculate test set R-squared
ridge_r2 <- postResample(ridge_predictions, test_perm)[["Rsquared"]]
ridge_r2
## [1] 0.5403888

Ridge regression improves on PLS (test-set R² of about 0.54 versus 0.47), but with roughly half of the variation in permeability still unexplained, neither model seems accurate enough to replace the laboratory experiment outright; at best it could serve as a cheap pre-screen to triage compounds before testing.

6.3. A chemical manufacturing process for a pharmaceutical product was discussed in Sect. 1.4. In this problem, the objective is to understand the relationship between biological measurements of the raw materials (predictors), measurements of the manufacturing process (predictors), and the response of product yield. Biological predictors cannot be changed but can be used to assess the quality of the raw material before processing. On the other hand, manufacturing process predictors can be changed in the manufacturing process. Improving product yield by 1% will boost revenue by approximately one hundred thousand dollars per batch.

(a) Start R and use these commands to load the data:

library(AppliedPredictiveModeling)
data("ChemicalManufacturingProcess")

The matrix processPredictors contains the 57 predictors (12 describing the input biological material and 45 describing the process predictors) for the 176 manufacturing runs. yield contains the percent yield for each run.

(b) A small percentage of cells in the predictor set contain missing values. Use an imputation function to fill in these missing values (e.g., see Sect. 3.8).
sum(is.na(ChemicalManufacturingProcess))
## [1] 106
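To see where those 106 missing cells are concentrated before imputing, a per-column count helps (a minimal base-R sketch):

# count missing values per column, keeping only the affected ones
na_counts <- colSums(is.na(ChemicalManufacturingProcess))
na_counts[na_counts > 0]

With the affected columns identified, bagged-tree imputation fills them in: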
missing <- preProcess(ChemicalManufacturingProcess, method = "bagImpute")
Chemical <- predict(missing, ChemicalManufacturingProcess)

sum(is.na(Chemical))
## [1] 0
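bagImpute fits a bagged-tree model for every predictor and can be slow on wider data; knnImpute is a cheaper alternative also supported by preProcess (a sketch; note that caret's knnImpute centers and scales the data as part of its distance calculation):

# hypothetical alternative: k-nearest-neighbour imputation
missing_knn <- preProcess(ChemicalManufacturingProcess, method = "knnImpute")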
(c) Split the data into a training and a test set, pre-process the data, and tune a model of your choice from this chapter. What is the optimal value of the performance metric?
Chemical <- Chemical[, -nearZeroVar(Chemical)]
set.seed(453)

# index for training
index2 <- createDataPartition(Chemical$Yield, p = .8, list = FALSE)

# train 
train_chem <- Chemical[index2, ]

# test
test_chem <- Chemical[-index2, ]
set.seed(453)

pls_model_chem <- train(
  Yield ~ ., data = train_chem, method = "pls",
  trControl = trainControl("cv", number = 10),
  tuneLength = 25
)

plot(pls_model_chem) 

print(pls_model_chem)
## Partial Least Squares 
## 
## 144 samples
##  56 predictor
## 
## No pre-processing
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 131, 130, 130, 129, 129, 128, ... 
## Resampling results across tuning parameters:
## 
##   ncomp  RMSE       Rsquared   MAE     
##    1      1.703369  0.2086908  1.356838
##    2      1.696314  0.2173439  1.350182
##    3      2.738152  0.2106363  1.660486
##    4      3.108309  0.2269650  1.766030
##    5      6.119173  0.2503410  2.526392
##    6      5.764987  0.3052312  2.405522
##    7      6.121132  0.3231199  2.458867
##    8      8.907536  0.3351044  3.190655
##    9     12.896144  0.3098155  4.259362
##   10     10.654305  0.3404871  3.671088
##   11      7.774692  0.3876969  2.864527
##   12      7.107442  0.4293301  2.662620
##   13      5.224185  0.4414374  2.128175
##   14      5.913008  0.4705509  2.280858
##   15      5.693617  0.4760092  2.245974
##   16      4.998067  0.4921636  2.048582
##   17      6.552369  0.4995208  2.427595
##   18      5.003371  0.5036211  2.034329
##   19      5.367763  0.4973387  2.130746
##   20      4.383367  0.4998666  1.878918
##   21      4.104056  0.5038056  1.792917
##   22      3.782727  0.5091043  1.717620
##   23      4.056420  0.4984389  1.800891
##   24      3.921929  0.5136381  1.765530
##   25      4.032128  0.4908455  1.798149
## 
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was ncomp = 2.

The optimal ncomp is 2, with a resampled R² of 0.22.
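Note that this PLS model was fit without pre-processing; since the chemical predictors sit on very different scales, that likely explains the erratic RMSE values at larger ncomp. A refit with the same centering and scaling used in Exercise 6.2 would be worth checking (a sketch; pls_model_chem_pp is a hypothetical name):

set.seed(453)
pls_model_chem_pp <- train(
  Yield ~ ., data = train_chem, method = "pls",
  trControl = trainControl("cv", number = 10),
  tuneLength = 25,
  preProc = c("center", "scale")
)

As an alternative model class, a least angle regression (LARS) fit is tuned next.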

set.seed(624)

# NOTE: tuned on the full Chemical data set (all 176 rows), not just train_chem,
# so the test rows are seen during tuning and the test-set estimate below is optimistic
larsTune <- train(Yield ~ ., data = Chemical, method = "lars", metric = "Rsquared",
                  tuneLength = 20, trControl = ctrl, preProc = c("center", "scale"))

plot(larsTune)

larsTune
## Least Angle Regression 
## 
## 176 samples
##  56 predictor
## 
## Pre-processing: centered (56), scaled (56) 
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 160, 157, 158, 159, 158, 159, ... 
## Resampling results across tuning parameters:
## 
##   fraction  RMSE      Rsquared   MAE      
##   0.05      1.283584  0.6225257  1.0463960
##   0.10      1.159889  0.6232028  0.9365626
##   0.15      1.166757  0.6216601  0.9278850
##   0.20      1.285217  0.5766354  0.9686902
##   0.25      1.509900  0.5402258  1.0423415
##   0.30      1.758242  0.4879534  1.1289880
##   0.35      1.887215  0.4714128  1.1779997
##   0.40      1.856049  0.4677619  1.1771152
##   0.45      1.797200  0.4703301  1.1664351
##   0.50      1.803936  0.4620279  1.1739839
##   0.55      1.753039  0.4604957  1.1642258
##   0.60      1.659286  0.4694330  1.1445798
##   0.65      1.562180  0.4996689  1.1178144
##   0.70      1.558313  0.5305094  1.1136644
##   0.75      1.388695  0.5278279  1.0758170
##   0.80      1.516388  0.4997529  1.1142540
##   0.85      1.834859  0.4698781  1.2050280
##   0.90      2.178869  0.4515275  1.2998587
##   0.95      2.522669  0.4362574  1.3880903
##   1.00      2.928664  0.4224834  1.4940821
## 
## Rsquared was used to select the optimal model using the largest value.
## The final value used for the model was fraction = 0.1.

The optimal fraction is 0.1, with an R² of 0.62.

(d) Predict the response for the test set. What is the value of the performance metric and how does this compare with the resampled performance metric on the training set?

The LARS model had a higher resampled R² than the PLS model (0.62 versus 0.22), so it was chosen for prediction.

lars_predict <- predict(larsTune, test_chem[, -1])

postResample(lars_predict, test_chem[, 1])
##      RMSE  Rsquared       MAE 
## 0.9921079 0.6524316 0.7802339

The test-set R² of 0.65 is a bit higher than the resampled estimate of 0.62; since the LARS model was tuned on the full data set (including the test rows), this figure is likely optimistic.
(e) Which predictors are most important in the model you have trained? Do either the biological or process predictors dominate the list?
varImp(larsTune)
## loess r-squared variable importance
## 
##   only 20 most important variables shown (out of 56)
## 
##                        Overall
## ManufacturingProcess32  100.00
## ManufacturingProcess13   90.02
## BiologicalMaterial06     84.56
## ManufacturingProcess36   75.70
## ManufacturingProcess17   74.89
## BiologicalMaterial03     73.53
## ManufacturingProcess09   70.37
## BiologicalMaterial12     67.98
## BiologicalMaterial02     65.33
## ManufacturingProcess06   58.17
## ManufacturingProcess31   56.47
## ManufacturingProcess33   49.89
## BiologicalMaterial11     48.12
## BiologicalMaterial04     47.13
## ManufacturingProcess11   42.21
## BiologicalMaterial08     41.88
## BiologicalMaterial01     39.14
## ManufacturingProcess12   32.70
## BiologicalMaterial09     32.42
## ManufacturingProcess27   23.15

The most important predictors in the model are ManufacturingProcess32, ManufacturingProcess13, BiologicalMaterial06, ManufacturingProcess36, ManufacturingProcess17, BiologicalMaterial03, and ManufacturingProcess09.

Manufacturing process predictors dominate slightly: 11 of the top 20 predictors (including the two most important) are process measurements, versus 9 biological measurements.
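That tally can be verified directly from the importance scores (a short sketch built on the varImp output above):

# count biological vs. process predictors among the 20 highest-ranked
imp <- varImp(larsTune)$importance
top20 <- rownames(imp)[order(-imp$Overall)][1:20]
table(ifelse(grepl("^Biological", top20), "Biological", "Process"))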

(f) Explore the relationships between each of the top predictors and the response. How could this information be helpful in improving yield in future runs of the manufacturing process?
# dplyr::arrange drops row names, so order() is used on the importance scores
# to keep the predictor names intact
imp <- varImp(larsTune)$importance
top_predictors <- rownames(imp)[order(-imp$Overall)][1:10]
top_10_pre <- Chemical[, c("Yield", top_predictors)]


cor_matrix <- cor(top_10_pre, use = "complete.obs")

# Plot
corrplot(cor_matrix, method = 'circle')
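The same matrix can also be reduced to the one row of interest, the correlation of each top predictor with Yield (a one-line base-R sketch):

# correlations with Yield only, strongest first
sort(cor_matrix["Yield", -1], decreasing = TRUE)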

The correlation plot shows that Yield correlates most strongly with ManufacturingProcess32. Because the process predictors, unlike the biological ones, can be adjusted, relationships like this point to the process settings most worth targeting in future runs to improve yield.