Problem 6.2

Developing a model to predict permeability (see Sect. 1.4) could save significant resources for a pharmaceutical company, while at the same time more rapidly identifying molecules that have a sufficient permeability to become a drug:

a) Start R and use these commands to load the data:
library(AppliedPredictiveModeling)
library(caret)       # nearZeroVar(), createDataPartition(), train(), preProcess()
library(tidyverse)   # pipes, purrr, tidyr, and ggplot2 used throughout

data(permeability)

The matrix fingerprints contains the 1,107 binary molecular predictors for the 165 compounds, while permeability contains the permeability response.
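As a quick sanity check, we can confirm those dimensions directly:

dim(fingerprints)   # expect 165 x 1107
dim(permeability)   # expect 165 x 1 (one response per compound)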



b) The fingerprint predictors indicate the presence or absence of substructures of a molecule and are often sparse, meaning that relatively few of the molecules contain each substructure. Filter out the predictors that have low frequencies using the nearZeroVar function from the caret package. How many predictors are left for modeling?
# Drop the near-zero-variance fingerprint columns
prints <- fingerprints[,-nearZeroVar(fingerprints)]
dim(prints)
## [1] 165 388

After removing the variables with little to no variance, we are left with 388 of the original 1,107 fingerprint predictors.
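To see why so many columns were dropped, nearZeroVar() can return its screening metrics instead of column indices (a quick diagnostic sketch):

# saveMetrics = TRUE returns per-predictor frequency and uniqueness metrics
nzv <- nearZeroVar(fingerprints, saveMetrics = TRUE)
head(nzv[order(-nzv$freqRatio), ])   # the most lopsided fingerprint columns
table(nzv$nzv)                       # how many columns were flagged near-zero-variance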



c) Split the data into a training and test set, preprocess the data, and tune a PLS model. How many latent variables are optimal and what is the corresponding resampled estimate of \(R^2\)?
# Combine the data
perm <- cbind(permeability, as.data.frame(prints))

Let’s look at our binary predictors and make sure they are truly binary:

# Helper function: TRUE if a vector takes exactly the values 0 and 1
is.binary <- function(x){
  vals <- unique(x)
  length(vals) == 2 && min(vals) == 0 && max(vals) == 1
}

apply(perm[,-1], MARGIN = 2, FUN = is.binary) %>% table()
## .
## TRUE 
##  388

As expected, the only non-binary variable is the outcome (dependent) variable.
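For reference, the same check can be written as a one-liner with base R's setequal(); this is just a more compact alternative to the helper above:

# TRUE only if every retained predictor takes exactly the values 0 and 1
all(apply(perm[,-1], 2, function(x) setequal(unique(x), c(0, 1))))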

Next we’ll separate the data into a training and test set:

# Keep 20% of observations for testing
set.seed(28130)

ttSplit <- createDataPartition(perm$permeability, times=1, p=0.20, list=F)

perm.train <- perm[-ttSplit,]
perm.test <- perm[ttSplit,]

Now we will train a partial least squares (PLS) model:

# Use 8-fold cross-validation
control <- trainControl(method="cv", 8)

# Train the PLS model, tuning up to 20 latent variables
# (the fingerprint predictors are 0/1 indicators, so no centering/scaling is applied)
pls.model <- train(perm.train[,-1], perm.train[,1],
                   method="pls", trControl=control,
                   metric="RMSE", tuneLength = 20)

# Output summary of the fit
summary(pls.model)
## Data:    X dimension: 129 388 
##  Y dimension: 129 1
## Fit method: oscorespls
## Number of components considered: 3
## TRAINING: % variance explained
##           1 comps  2 comps  3 comps
## X           29.56    42.21    49.58
## .outcome    32.40    54.85    60.30
plot(pls.model)

Cross-validation selected 3 latent variables, with a resampled RMSE of 11.813 and \(R^2\) of 0.453.

Let’s look at the residuals:

qqnorm(pls.model$finalModel$residuals)

The residuals are reasonably well behaved: they fall close to the normal line in the Q-Q plot.



d) Predict the response for the test set. What is the test estimate for \(R^2\)?
# Predict on test set
pls.pred <- predict(pls.model, newdata=perm.test)

# Output RMSE and R^2
paste0("RMSE  = ",round(RMSE(pls.pred, obs = perm.test$permeability),2))
## [1] "RMSE  = 14.49"
paste0("R-Squared = ",round(caret::R2(pls.pred, obs = perm.test$permeability),3))
## [1] "R-Squared = 0.269"

The test-set \(R^2\) (0.269) is substantially worse than the resampled training estimate (0.453).
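Plotting predicted against observed values makes the drop-off visible (a minimal base-graphics sketch using the objects above):

# Predicted vs. observed permeability on the held-out test set
plot(pls.pred, perm.test$permeability,
     xlab = "Predicted permeability", ylab = "Observed permeability")
abline(0, 1, lty = 2)   # reference line for perfect predictions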



e) Try building other models discussed in this chapter. Do any have better predictive performance?

First we will try an elastic net model, since it handles correlated predictors (via its ridge penalty) while also selecting the most important ones (via its lasso penalty):

enet.model <- train(perm.train[,-1], perm.train[,1],
                    method="glmnet", trControl=control,
                    tuneGrid = expand.grid(alpha=seq(0,1,by=0.05),
                                           lambda=seq(0,1,by=0.05)))

enet.model$bestTune
##     alpha lambda
## 294  0.65      1
plot(enet.model)

The tuned hyperparameter values are 0.65 and 1 for the alpha and lambda parameters, respectively.

Let’s check how this model does on the test data:

# Predict our permeability values
enet.pred <- predict(enet.model, perm.test)

# Check measures
postResample(enet.pred,perm.test$permeability)
##       RMSE   Rsquared        MAE 
## 13.4472015  0.3169109  9.9800195

The elastic net did a little better than the PLS model on the test data (RMSE 13.45 vs. 14.49, \(R^2\) 0.317 vs. 0.269).
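One attraction of the elastic net here is its built-in variable selection. We can count how many fingerprint coefficients survive at the tuned penalty (a sketch using glmnet's coef() method, which takes the lambda value through its s argument):

# Sparse coefficient vector at the tuned lambda
enet.coefs <- coef(enet.model$finalModel, s = enet.model$bestTune$lambda)
sum(enet.coefs != 0) - 1   # number of non-zero predictors, excluding the intercept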



f) Would you recommend any of your models to replace the permeability laboratory experiment?

No, I would not. None of the models adequately capture the relationship between the molecular fingerprints and the permeability measure; with a test-set \(R^2\) of roughly 0.3 at best, they do not explain enough of the variability to stand in for the laboratory experiment on unknown samples.






Problem 6.3

A chemical manufacturing process for a pharmaceutical product was discussed in Sect. 1.4. In this problem, the objective is to understand the relationship between biological measurements of the raw materials (predictors), measurements of the manufacturing process (predictors), and the response of product yield. Biological predictors cannot be changed but can be used to assess the quality of the raw material before processing. On the other hand, manufacturing process predictors can be changed in the manufacturing process. Improving product yield by 1% will boost revenue by approximately one hundred thousand dollars per batch:



a) Start R and use these commands to load the data:
library(AppliedPredictiveModeling)

# The book had this incorrectly listed. In this version of the package,
#   the data is in one data frame now.
data("ChemicalManufacturingProcess")

In this version of the package, ChemicalManufacturingProcess is a single data frame: the column Yield contains the percent yield for each of the 176 manufacturing runs, and the remaining 57 columns are predictors (12 describing the input biological material and 45 describing the manufacturing process).



b) A small percentage of cells in the predictor set contain missing values. Use an imputation function to fill in these missing values (e.g. see Sect. 3.8)
library(knitr)       # kable()
library(kableExtra)  # kable_styling()

numNA <- function(x){
  return(sum(is.na(x)))   # count of missing values in a vector
}

map_df(ChemicalManufacturingProcess, numNA) %>%
  pivot_longer(cols=everything(), names_to="variable") %>%
  filter(value > 0) %>% arrange(-value) %>%
  select("Variable Name" = variable, "# NAs" = value) %>%
  kable() %>% kable_styling(bootstrap_options = "striped", full_width = F)
Variable Name # NAs
ManufacturingProcess03 15
ManufacturingProcess11 10
ManufacturingProcess10 9
ManufacturingProcess25 5
ManufacturingProcess26 5
ManufacturingProcess27 5
ManufacturingProcess28 5
ManufacturingProcess29 5
ManufacturingProcess30 5
ManufacturingProcess31 5
ManufacturingProcess33 5
ManufacturingProcess34 5
ManufacturingProcess35 5
ManufacturingProcess36 5
ManufacturingProcess02 3
ManufacturingProcess06 2
ManufacturingProcess01 1
ManufacturingProcess04 1
ManufacturingProcess05 1
ManufacturingProcess07 1
ManufacturingProcess08 1
ManufacturingProcess12 1
ManufacturingProcess14 1
ManufacturingProcess22 1
ManufacturingProcess23 1
ManufacturingProcess24 1
ManufacturingProcess40 1
ManufacturingProcess41 1

There are some missing values in the Manufacturing Process variables. ManufacturingProcess03 has the most with 15 NAs, while ManufacturingProcess11 and ManufacturingProcess10 have 10 and 9, respectively. The rest have 5 or fewer.

We’ll use a KNN imputation method to fill in these values. The function that does this also centers and scales the data, so we won’t need to do that step in section c) below.

# Impute with KNN; preProcess also centers and scales the predictors.
# No imputation on the outcome (and we don't want it scaled), hence [,-1].
preproc <- preProcess(ChemicalManufacturingProcess[,-1], method = "knnImpute",
                      k = 5, knnSummary = mean)

chem <- predict(preproc, ChemicalManufacturingProcess)

# Check for NAs
map_df(chem, numNA) %>%
  pivot_longer(cols=everything(), names_to="variable") %>%
  filter(value > 0) %>% arrange(-value) %>%
  select("Variable Name" = variable, "# NAs" = value) %>%
  kable() %>% kable_styling(bootstrap_options = "striped", full_width = F)
Variable Name # NAs

Our imputation removed the NA values, replacing them with imputed values.
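Since knnImpute also centers and scales, a quick spot check (a minimal sketch on the chem data frame) should show predictor means near 0 and standard deviations near 1:

# Predictor columns should now be roughly standardized
summary(colMeans(chem[,-1]))        # means near 0
summary(apply(chem[,-1], 2, sd))    # standard deviations near 1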



c) Split the data into a training and a test set, pre-process the data, and tune a model of your choice from this chapter. What is the optimal value of the performance metric?

First let’s look at the distribution of the outcome variable Yield:

chem %>% ggplot(aes(x=Yield)) + geom_density() + theme_light() +
  labs(title="Distribution of Dependent Variable") +
  stat_function(fun = dnorm,
                args = list(mean = mean(chem$Yield),
                            sd = sd(chem$Yield)),
                lty = 3, color = "red")

The Yield variable (black line) is relatively close to normal (red line) in the plot above. So, there is no real need to transform it.

Next we will split our data:

ttSplit <- createDataPartition(chem$Yield, times=1, p=0.20, list=F)

chem.train <- chem[-ttSplit,]
chem.test <- chem[ttSplit,]

Now we will try to train a PLS model:

# Use 8-fold cross-validation
control <- trainControl(method="cv", 8)

# Train the PLS model, tuning up to 20 latent variables
chem.model <- train(chem.train[,-1], chem.train[,1],
                    method="pls", trControl=control,
                    metric="RMSE", tuneLength = 20)

# Output summary of the fit
summary(chem.model)
## Data:    X dimension: 140 57 
##  Y dimension: 140 1
## Fit method: oscorespls
## Number of components considered: 4
## TRAINING: % variance explained
##           1 comps  2 comps  3 comps  4 comps
## X           16.28    23.96    30.42    37.07
## .outcome    48.84    65.08    70.58    73.06
# Plot the fit
plot(chem.model)

The PLS model achieves its minimum cross-validated RMSE with 4 components.

# Investigate the residuals
qqnorm(chem.model$finalModel$residuals)

plot(chem.model$finalModel$residuals)

The plots above show that the residuals are randomly distributed about a mean of zero and seem to mostly follow a normal distribution.

Cross-validation selected 4 latent variables, with a resampled RMSE of 1.129 and \(R^2\) of 0.627.
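The same numbers can be read directly from the resampling results rather than off the plot (a small sketch against the caret objects above):

# Cross-validated metrics at the selected number of components
subset(chem.model$results, ncomp == chem.model$bestTune$ncomp)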



d) Predict the response for the test set. What is the value of the performance metric and how does this compare with the resampled performance metric on the training set?
# Predict on test set
chem.pred <- predict(chem.model, newdata=chem.test)

# Output RMSE and R^2
paste0("RMSE  = ",round(RMSE(chem.pred, obs = chem.test$Yield),3))
## [1] "RMSE  = 1.303"
paste0("R-Squared = ",round(caret::R2(chem.pred, obs = chem.test$Yield),3))
## [1] "R-Squared = 0.5"

The test set results compare reasonably well with the training set metrics: the RMSE is slightly higher (1.303 vs. 1.129) and the \(R^2\) dropped only a small amount (0.500 vs. 0.627), the usual pattern when moving from resampled estimates to a held-out test set.



e) Which predictors are most important in the model you have trained? Do either the biological or process predictors dominate the list?
# Get the coefficients of the 4-component model
chem.coef <- as.data.frame(coef(chem.model$finalModel))

chem.coef <- chem.coef %>% rownames_to_column(var = "Variable Name")

colnames(chem.coef) <- c("Variable Name","Coefficient")

chem.coef %>% arrange(-abs(Coefficient))
##             Variable Name   Coefficient
## 1  ManufacturingProcess32  0.4362600402
## 2  ManufacturingProcess09  0.3064472592
## 3  ManufacturingProcess36 -0.2786309931
## 4  ManufacturingProcess34  0.2771928053
## 5  ManufacturingProcess37 -0.2250403773
## 6  ManufacturingProcess13 -0.2248494746
## 7  ManufacturingProcess17 -0.2204898326
## 8  ManufacturingProcess33  0.1854744656
## 9  ManufacturingProcess15  0.1670658415
## 10 ManufacturingProcess06  0.1662332827
## 11 ManufacturingProcess19  0.1564143927
## 12 ManufacturingProcess11  0.1402491403
## 13   BiologicalMaterial09 -0.1399647547
## 14 ManufacturingProcess04  0.1385601974
## 15   BiologicalMaterial07 -0.1382620004
## 16   BiologicalMaterial03  0.1193092804
## 17 ManufacturingProcess38 -0.1106321157
## 18 ManufacturingProcess45  0.1013999512
## 19   BiologicalMaterial06  0.1005602221
## 20 ManufacturingProcess07 -0.0989939553
## 21 ManufacturingProcess28 -0.0955735582
## 22 ManufacturingProcess44  0.0948874610
## 23 ManufacturingProcess43  0.0942894556
## 24 ManufacturingProcess39  0.0857894983
## 25   BiologicalMaterial04  0.0737575381
## 26 ManufacturingProcess10  0.0732209293
## 27 ManufacturingProcess24 -0.0695918584
## 28 ManufacturingProcess30  0.0666333855
## 29 ManufacturingProcess21 -0.0603005869
## 30   BiologicalMaterial10 -0.0602382662
## 31 ManufacturingProcess31 -0.0593461345
## 32 ManufacturingProcess18  0.0582132902
## 33   BiologicalMaterial11 -0.0575490117
## 34 ManufacturingProcess35 -0.0562246253
## 35 ManufacturingProcess40 -0.0559951865
## 36   BiologicalMaterial02  0.0529218433
## 37 ManufacturingProcess41 -0.0519885312
## 38 ManufacturingProcess20  0.0515595130
## 39 ManufacturingProcess29  0.0486880904
## 40 ManufacturingProcess42  0.0446902656
## 41 ManufacturingProcess16 -0.0358673149
## 42 ManufacturingProcess23 -0.0337128789
## 43 ManufacturingProcess25 -0.0303878351
## 44 ManufacturingProcess27 -0.0294915195
## 45   BiologicalMaterial12 -0.0245863525
## 46   BiologicalMaterial08 -0.0244240031
## 47 ManufacturingProcess05 -0.0220445792
## 48 ManufacturingProcess02  0.0218736552
## 49 ManufacturingProcess14  0.0208404438
## 50 ManufacturingProcess22 -0.0199084450
## 51 ManufacturingProcess08 -0.0197023320
## 52   BiologicalMaterial01 -0.0180949959
## 53 ManufacturingProcess26 -0.0095188334
## 54 ManufacturingProcess12  0.0093343103
## 55   BiologicalMaterial05  0.0071213849
## 56 ManufacturingProcess03 -0.0025915670
## 57 ManufacturingProcess01  0.0007419237

Here we see the coefficients of the selected PLS model. The first few have the most impact (positive or negative) on the predicted Yield.

The manufacturing process predictors dominate the list: the largest biological coefficient (BiologicalMaterial09, about 0.14 in absolute value) is roughly one-third the size of the largest coefficient overall (ManufacturingProcess32, about 0.44).
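As a cross-check on the raw coefficients, caret's varImp() gives a model-specific importance score for the fitted PLS model (a quick sketch using the chem.model object above):

# Model-specific variable importance, top 15 predictors
plot(varImp(chem.model), top = 15)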



f) Explore the relationships between each of the top predictors and the response. How could this information be helpful in improving yield in future runs of the manufacturing process?
# Top predictor
chem %>%
  ggplot(aes(x=ManufacturingProcess32, y=Yield)) +
  geom_point() + theme_light() +
  labs(title="Process 32 vs. Yield",
       y="Yield", x="Process 32 (Normalized and Centered)")

The top predictor, ManufacturingProcess32, has an obvious positive correlation with Yield. It is not strictly linear, but it is relatively strong.

# Second-Highest predictor
chem %>%
  ggplot(aes(x=ManufacturingProcess09, y=Yield)) +
  geom_point() + theme_light() +
  labs(title="Process 09 vs. Yield",
       y="Yield", x="Process 09 (Normalized and Centered)")

The second-most influential predictor also seems to correlate somewhat with the output.

# Third-most influential predictor
chem %>%
  ggplot(aes(x=ManufacturingProcess36, y=Yield)) +
  geom_point() + theme_light() +
  labs(title="Process 36 vs. Yield",
       y="Yield", x="Process 36 (Normalized and Centered)")

Finally, the third-most influential predictor, ManufacturingProcess36, also correlates with the outcome Yield, though the correlation is negative.

In general, these three processes seem to have a good deal of impact on the output of this chemical manufacturing process.

# Seven most influential predictors, plus the outcome
bestVars <- chem.coef %>% arrange(-abs(Coefficient)) %>%
  top_n(7, abs(Coefficient)) %>% rbind(c("Yield", 0))

cor(chem[, bestVars$`Variable Name`]) %>% corrplot::corrplot()

The plot above shows how the 7 most influential predictors correlate with one another and with the outcome variable Yield.

Looking at the 3 variables called out above, ManufacturingProcess32 and ManufacturingProcess09 should increase Yield as they increase, while ManufacturingProcess36 does the opposite. Helpfully, 32 and 36 are negatively correlated with each other, so adjusting those three settings in concert should have the greatest impact on output.
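To put numbers on those relationships, the marginal correlations with Yield are easy to compute from the imputed data frame (a quick sketch using the chem object above):

# Marginal correlations of the three highlighted processes with Yield
cor(chem[, c("ManufacturingProcess32", "ManufacturingProcess09",
             "ManufacturingProcess36")], chem$Yield)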