library(fpp3)
library(tidyverse)
library(caret)
library(pls)
library(yardstick)
library(MASS)   # attached last, so MASS::select() masks dplyr::select()

6.2.

Developing a model to predict permeability (see Sect. 1.4) could save significant resources for a pharmaceutical company, while at the same time more rapidly identifying molecules that have a sufficient permeability to become a drug:

  1. Start R and use these commands to load the data:
library(AppliedPredictiveModeling)
data(permeability)

The matrix fingerprints contains the 1,107 binary molecular predictors for the 165 compounds, while permeability contains the permeability response.

prints <- fingerprints |>
  as_tibble() |>
  print()
## # A tibble: 165 × 1,107
##       X1    X2    X3    X4    X5    X6    X7    X8    X9   X10   X11   X12   X13
##    <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
##  1     0     0     0     0     0     1     1     1     0     0     0     1     0
##  2     0     0     0     0     0     0     1     1     0     0     0     1     1
##  3     0     0     0     0     0     1     1     1     0     0     0     0     1
##  4     0     0     0     0     0     0     1     1     0     0     0     1     1
##  5     0     0     0     0     0     0     1     1     0     0     0     1     1
##  6     0     0     0     0     0     0     1     1     0     0     0     1     1
##  7     0     0     0     0     0     1     1     1     0     0     0     0     1
##  8     0     0     0     0     0     0     1     1     0     0     0     0     1
##  9     0     0     0     0     0     1     1     1     0     0     0     0     1
## 10     0     0     0     0     0     0     1     1     0     0     0     1     1
## # ℹ 155 more rows
## # ℹ 1,094 more variables: X14 <dbl>, X15 <dbl>, X16 <dbl>, X17 <dbl>,
## #   X18 <dbl>, X19 <dbl>, X20 <dbl>, X21 <dbl>, X22 <dbl>, X23 <dbl>,
## #   X24 <dbl>, X25 <dbl>, X26 <dbl>, X27 <dbl>, X28 <dbl>, X29 <dbl>,
## #   X30 <dbl>, X31 <dbl>, X32 <dbl>, X33 <dbl>, X34 <dbl>, X35 <dbl>,
## #   X36 <dbl>, X37 <dbl>, X38 <dbl>, X39 <dbl>, X40 <dbl>, X41 <dbl>,
## #   X42 <dbl>, X43 <dbl>, X44 <dbl>, X45 <dbl>, X46 <dbl>, X47 <dbl>, …
perm <- permeability |>
  as_tibble() |>
  print()
## # A tibble: 165 × 1
##    permeability
##           <dbl>
##  1        12.5 
##  2         1.12
##  3        19.4 
##  4         1.73
##  5         1.68
##  6         0.51
##  7        25.4 
##  8         0.55
##  9        39.5 
## 10         4.91
## # ℹ 155 more rows
  2. The fingerprint predictors indicate the presence or absence of substructures of a molecule and are often sparse, meaning that relatively few of the molecules contain each substructure. Filter out the predictors that have low frequencies using the nearZeroVar function from the caret package. How many predictors are left for modeling?

After filtering with nearZeroVar, 388 of the original 1,107 predictors remain for modeling.

lessprints <- prints[, -nearZeroVar(prints)] |> print()
## # A tibble: 165 × 388
##       X1    X2    X3    X4    X5    X6   X11   X12   X15   X16   X20   X21   X25
##    <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
##  1     0     0     0     0     0     1     0     1     0     0     0     0     0
##  2     0     0     0     0     0     0     0     1     0     0     0     0     0
##  3     0     0     0     0     0     1     0     0     0     0     0     0     0
##  4     0     0     0     0     0     0     0     1     0     0     0     0     0
##  5     0     0     0     0     0     0     0     1     0     0     0     0     0
##  6     0     0     0     0     0     0     0     1     0     0     0     0     0
##  7     0     0     0     0     0     1     0     0     0     1     0     0     0
##  8     0     0     0     0     0     0     0     0     0     0     0     0     0
##  9     0     0     0     0     0     1     0     0     1     1     0     0     0
## 10     0     0     0     0     0     0     0     1     0     0     0     0     0
## # ℹ 155 more rows
## # ℹ 375 more variables: X26 <dbl>, X27 <dbl>, X28 <dbl>, X29 <dbl>, X35 <dbl>,
## #   X36 <dbl>, X37 <dbl>, X38 <dbl>, X39 <dbl>, X40 <dbl>, X41 <dbl>,
## #   X42 <dbl>, X43 <dbl>, X44 <dbl>, X46 <dbl>, X47 <dbl>, X48 <dbl>,
## #   X49 <dbl>, X50 <dbl>, X51 <dbl>, X52 <dbl>, X53 <dbl>, X54 <dbl>,
## #   X55 <dbl>, X56 <dbl>, X57 <dbl>, X58 <dbl>, X59 <dbl>, X60 <dbl>,
## #   X61 <dbl>, X62 <dbl>, X63 <dbl>, X64 <dbl>, X65 <dbl>, X66 <dbl>, …
  3. Split the data into a training and a test set, pre-process the data, and tune a PLS model. How many latent variables are optimal and what is the corresponding resampled estimate of R2?

Using a 124/41 training/test split and 10-fold cross-validation, the cross-validated RMSEP below is minimized at three latent variables (CV RMSEP = 11.84), which is the model size used for prediction later; the R2 validation plot gives the corresponding resampled R2 estimate.
set.seed(1)

# 124 of the 165 compounds (~75%) for training; the remaining 41 for testing
samp <- sample(nrow(lessprints), 124)

testprints <- lessprints[-samp,] |> print()
## # A tibble: 41 × 388
##       X1    X2    X3    X4    X5    X6   X11   X12   X15   X16   X20   X21   X25
##    <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
##  1     0     0     0     0     0     0     0     1     0     0     0     0     0
##  2     0     0     0     0     0     0     0     1     0     0     0     0     0
##  3     0     0     0     0     0     0     0     0     0     0     0     0     0
##  4     0     0     0     0     0     1     0     0     1     1     0     0     0
##  5     1     1     0     0     0     0     0     1     0     0     0     0     1
##  6     1     1     1     1     1     0     0     1     0     0     1     1     1
##  7     0     0     0     0     0     0     0     1     0     0     0     0     0
##  8     0     0     0     0     0     0     1     1     0     0     0     0     0
##  9     1     1     1     1     1     0     0     1     0     0     1     1     1
## 10     0     0     0     0     0     0     0     1     0     0     0     0     0
## # ℹ 31 more rows
## # ℹ 375 more variables: X26 <dbl>, X27 <dbl>, X28 <dbl>, X29 <dbl>, X35 <dbl>,
## #   X36 <dbl>, X37 <dbl>, X38 <dbl>, X39 <dbl>, X40 <dbl>, X41 <dbl>,
## #   X42 <dbl>, X43 <dbl>, X44 <dbl>, X46 <dbl>, X47 <dbl>, X48 <dbl>,
## #   X49 <dbl>, X50 <dbl>, X51 <dbl>, X52 <dbl>, X53 <dbl>, X54 <dbl>,
## #   X55 <dbl>, X56 <dbl>, X57 <dbl>, X58 <dbl>, X59 <dbl>, X60 <dbl>,
## #   X61 <dbl>, X62 <dbl>, X63 <dbl>, X64 <dbl>, X65 <dbl>, X66 <dbl>, …
testperm <- perm[-samp,] |> print()
## # A tibble: 41 × 1
##    permeability
##           <dbl>
##  1         1.73
##  2         1.68
##  3         0.55
##  4        39.5 
##  5         0.55
##  6         1.57
##  7         2.43
##  8         2.71
##  9         3.61
## 10        19.4 
## # ℹ 31 more rows
trainprints <- lessprints[samp,] |> print()
## # A tibble: 124 × 388
##       X1    X2    X3    X4    X5    X6   X11   X12   X15   X16   X20   X21   X25
##    <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
##  1     0     1     1     1     1     1     1     1     0     0     1     1     1
##  2     0     0     0     0     0     0     0     1     1     1     0     0     0
##  3     0     0     0     0     0     0     0     0     0     0     0     0     0
##  4     0     0     0     0     0     0     0     0     0     0     0     0     0
##  5     1     1     1     1     1     1     0     1     0     0     1     1     1
##  6     0     0     0     0     0     0     0     1     0     0     0     0     0
##  7     0     0     0     0     0     1     0     0     0     0     0     0     0
##  8     1     1     1     1     1     1     0     1     0     0     1     1     1
##  9     0     0     0     0     0     1     0     1     0     0     0     0     0
## 10     0     0     0     0     0     1     0     1     0     1     0     0     0
## # ℹ 114 more rows
## # ℹ 375 more variables: X26 <dbl>, X27 <dbl>, X28 <dbl>, X29 <dbl>, X35 <dbl>,
## #   X36 <dbl>, X37 <dbl>, X38 <dbl>, X39 <dbl>, X40 <dbl>, X41 <dbl>,
## #   X42 <dbl>, X43 <dbl>, X44 <dbl>, X46 <dbl>, X47 <dbl>, X48 <dbl>,
## #   X49 <dbl>, X50 <dbl>, X51 <dbl>, X52 <dbl>, X53 <dbl>, X54 <dbl>,
## #   X55 <dbl>, X56 <dbl>, X57 <dbl>, X58 <dbl>, X59 <dbl>, X60 <dbl>,
## #   X61 <dbl>, X62 <dbl>, X63 <dbl>, X64 <dbl>, X65 <dbl>, X66 <dbl>, …
trainperm <- perm[samp,] |> print()
## # A tibble: 124 × 1
##    permeability
##           <dbl>
##  1       28.1  
##  2        8.59 
##  3        0.525
##  4        2.46 
##  5        5.56 
##  6        1.76 
##  7       18.9  
##  8        3.8  
##  9        1.70 
## 10        5.36 
## # ℹ 114 more rows
# Combine predictors and response so plsr() can use the formula interface
trainingData <- bind_cols(trainprints, trainperm)

# PLS with 10-fold cross-validation, considering up to 10 components
plsFit <- plsr(permeability ~ ., data = trainingData, validation = "CV", ncomp = 10)

summary(plsFit)
## Data:    X dimension: 124 388 
##  Y dimension: 124 1
## Fit method: kernelpls
## Number of components considered: 10
## 
## VALIDATION: RMSEP
## Cross-validated using 10 random segments.
##        (Intercept)  1 comps  2 comps  3 comps  4 comps  5 comps  6 comps
## CV           15.63    13.59    12.17    11.84    12.22    12.10    12.22
## adjCV        15.63    13.59    12.12    11.77    12.00    11.69    11.99
##        7 comps  8 comps  9 comps  10 comps
## CV       12.25    12.16    12.49     13.07
## adjCV    12.01    11.93    12.22     12.74
## 
## TRAINING: % variance explained
##               1 comps  2 comps  3 comps  4 comps  5 comps  6 comps  7 comps
## X               29.77    43.64    49.85    52.58    57.99    65.76    68.91
## permeability    28.83    49.81    57.31    65.49    69.99    72.75    75.11
##               8 comps  9 comps  10 comps
## X               71.53    73.82     76.12
## permeability    76.99    79.10     80.62
# Cross-validated RMSEP and R2 against the number of components
validationplot(plsFit)

validationplot(plsFit, val.type = "R2")
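The same conclusion can be reached programmatically. A minimal sketch using helpers from the pls package itself (RMSEP extracts the cross-validated error; selectNcomp applies a parsimony heuristic, so it may choose fewer components than the raw CV minimum):

cvRMSEP <- RMSEP(plsFit, estimate = "CV")

# Index of the smallest CV RMSEP; subtract 1 for the intercept-only entry
which.min(cvRMSEP$val["CV", 1, ]) - 1

# One-sigma rule: the most parsimonious model within one SE of the best
selectNcomp(plsFit, method = "onesigma", plot = FALSE)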


# Equivalent tuning with caret (y must be a numeric vector, and a resampling
# control object is needed), e.g.:
# ctrl <- trainControl(method = "cv", number = 10)
# plsTune <- train(trainprints, trainperm$permeability, method = "pls",
#                  tuneLength = 20, trControl = ctrl, preProc = c("center", "scale"))
  4. Predict the response for the test set. What is the test set estimate of R2?

Using the three-component model, the test set estimate of R2 is 0.432.

plsPredict <- predict(plsFit, testprints, ncomp = 3)

data <- bind_cols(testperm, as.data.frame(plsPredict)) |>
  rename("actual" = "permeability",
         "predict" = "permeability.3 comps")

print(rsq(data, actual, predict))
## # A tibble: 1 × 3
##   .metric .estimator .estimate
##   <chr>   <chr>          <dbl>
## 1 rsq     standard       0.432
  5. Try building other models discussed in this chapter. Do any have better predictive performance?

With 388 predictors and only 124 training compounds, an ordinary linear model overfits badly, so its test-set performance is far worse than the PLS model's:

olmPredict <- lm(permeability ~ ., data = trainingData) |>
  predict(testprints)

lmValues1 <- data.frame(obs = testperm, pred = olmPredict) |> 
  rename("obs" = "permeability")

print("Ordinary Linear Regression: ")
## [1] "Ordinary Linear Regression: "
defaultSummary(lmValues1)
##       RMSE   Rsquared        MAE 
## 48.3780921  0.0105273 29.5056826
# Ridge regression via MASS. lm.ridge() has no predict() method for new data,
# so only the lambda diagnostics are shown; note that select() here dispatches
# to MASS::select(), since MASS masks dplyr::select.
ridgeModel <- lm.ridge(permeability ~ ., data = trainingData, lambda = 0.001)
MASS::select(ridgeModel)
## modified HKB estimator is -1.401459e-27 
## modified L-W estimator is -113.5653 
## smallest value of GCV  at 0.001
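For a penalized model that can actually be scored on the test set, here is a minimal sketch using the glmnet package (an assumption: glmnet is not loaded above), fitting an elastic net with a cross-validated lambda:

library(glmnet)

set.seed(1)
# glmnet expects a numeric predictor matrix and a response vector
enetFit <- cv.glmnet(as.matrix(trainprints), trainperm$permeability, alpha = 0.5)

enetPred <- predict(enetFit, as.matrix(testprints), s = "lambda.min")

enetData <- tibble(actual = testperm$permeability,
                   predict = as.numeric(enetPred))
rsq(enetData, actual, predict)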
  6. Would you recommend any of your models to replace the permeability laboratory experiment?

Probably not as an outright replacement: the best of these models (PLS with three components) explains only about 43% of the permeability variation on the test set. It could still be useful as an inexpensive screen to prioritize which compounds are sent to the laboratory assay.

6.3.

A chemical manufacturing process for a pharmaceutical product was discussed in Sect. 1.4. In this problem, the objective is to understand the relationship between biological measurements of the raw materials (predictors), measurements of the manufacturing process (predictors), and the response of product yield. Biological predictors cannot be changed but can be used to assess the quality of the raw material before processing. On the other hand, manufacturing process predictors can be changed in the manufacturing process. Improving product yield by 1% will boost revenue by approximately one hundred thousand dollars per batch:

  1. Start R and use these commands to load the data:
library(AppliedPredictiveModeling)
data(ChemicalManufacturingProcess)

The matrix processPredictors contains the 57 predictors (12 describing the input biological material and 45 describing the process predictors) for the 176 manufacturing runs, while yield contains the percent yield for each run. (In the current version of the package the data load as a single data frame, ChemicalManufacturingProcess, whose first column Yield is the response.)

  2. A small percentage of cells in the predictor set contain missing values. Use an imputation function to fill in these missing values (e.g., see Sect. 3.8).
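A sketch of one approach from Sect. 3.8, using caret's preProcess (the names cmpX and cmpY are introduced here; note that knnImpute also centers and scales the predictors as a side effect):

cmpX <- ChemicalManufacturingProcess[, -1]   # the 57 predictors
cmpY <- ChemicalManufacturingProcess$Yield   # percent yield

# K-nearest-neighbour imputation; this also centers and scales
imputeModel <- preProcess(cmpX, method = "knnImpute")
cmpX <- predict(imputeModel, cmpX)

sum(is.na(cmpX))   # expect 0 after imputation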

  3. Split the data into a training and a test set, pre-process the data, and tune a model of your choice from this chapter. What is the optimal value of the performance metric?
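Continuing the sketch, a 75/25 split with createDataPartition and a cross-validated PLS tune via caret (the predictors were already centered and scaled by knnImpute, so no preProc argument is needed; the resampled RMSE and R2 for each candidate ncomp live in the train object):

set.seed(1)
inTrain <- createDataPartition(cmpY, p = 0.75, list = FALSE)

ctrl <- trainControl(method = "cv", number = 10)
cmpTune <- train(cmpX[inTrain, ], cmpY[inTrain],
                 method = "pls", tuneLength = 20, trControl = ctrl)

cmpTune$bestTune    # optimal number of latent variables
cmpTune$results     # resampled RMSE and R2 for each candidate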

  4. Predict the response for the test set. What is the value of the performance metric and how does this compare with the resampled performance metric on the training set?
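The held-out evaluation then follows the same pattern as 6.2, and its RMSE and R2 can be compared directly with the resampled values above:

cmpPred <- predict(cmpTune, cmpX[-inTrain, ])

defaultSummary(data.frame(obs = cmpY[-inTrain], pred = cmpPred))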

  5. Which predictors are most important in the model you have trained? Do either the biological or process predictors dominate the list?
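caret's varImp ranks predictors for a train object; since the column names begin with BiologicalMaterial or ManufacturingProcess, a quick tally of the top 20 answers the second question:

cmpImp <- varImp(cmpTune)
plot(cmpImp, top = 20)

# Tally biological vs. process predictors among the top 20
topNames <- rownames(cmpImp$importance)[order(-cmpImp$importance$Overall)][1:20]
table(substr(topNames, 1, 3))   # "Bio" vs. "Man"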

  6. Explore the relationships between each of the top predictors and the response. How could this information be helpful in improving yield in future runs of the manufacturing process?
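A scatterplot of each top predictor against yield makes the direction of each relationship visible; for the controllable process variables, positive slopes suggest settings to increase and negative slopes settings to decrease in future runs (dplyr::select is written out because MASS masks select here):

as_tibble(cmpX) |>
  mutate(Yield = cmpY) |>
  dplyr::select(all_of(topNames[1:6]), Yield) |>
  pivot_longer(-Yield, names_to = "predictor") |>
  ggplot(aes(value, Yield)) +
  geom_point(alpha = 0.5) +
  geom_smooth(method = "lm", se = FALSE) +
  facet_wrap(~ predictor, scales = "free_x")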