library(tidyverse)
## Warning: package 'ggplot2' was built under R version 4.3.3
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr     1.1.4     ✔ readr     2.1.5
## ✔ forcats   1.0.0     ✔ stringr   1.5.1
## ✔ ggplot2   3.5.1     ✔ tibble    3.2.1
## ✔ lubridate 1.9.3     ✔ tidyr     1.3.0
## ✔ purrr     1.0.2     
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
library(caret)
## Warning: package 'caret' was built under R version 4.3.3
## Loading required package: lattice
## 
## Attaching package: 'caret'
## 
## The following object is masked from 'package:purrr':
## 
##     lift

6.2

Developing a model to predict permeability (see Sect. 1.4) could save significant resources for a pharmaceutical company, while at the same time more rapidly identifying molecules that have sufficient permeability to become a drug:

(a) Start R and use these commands to load the data:

library(AppliedPredictiveModeling)
## Warning: package 'AppliedPredictiveModeling' was built under R version 4.3.3
data(permeability)

dim(fingerprints) 
## [1]  165 1107
dim(permeability)
## [1] 165   1

The matrix fingerprints contains the 1,107 binary molecular predictors for the 165 compounds, while permeability contains the permeability response.

(b) The fingerprint predictors indicate the presence or absence of substructures of a molecule and are often sparse, meaning that relatively few of the molecules contain each substructure. Filter out the predictors that have low frequencies using the nearZeroVar function from the caret package. How many predictors are left for modeling?
# Filter out near zero variance predictors
nzv <- nearZeroVar(fingerprints)
filtered_fingerprints <- fingerprints[, -nzv]

# Check the number of predictors left
dim(filtered_fingerprints)[2]
## [1] 388

There are 388 predictors left for modeling.
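To see what "low frequency" means here, nearZeroVar can also return its screening metrics (the frequency ratio of the most common to the second most common value, and the percent of unique values) instead of column indices. A quick look, using the same fingerprints matrix:

# Inspect the screening metrics behind nearZeroVar
nzvMetrics <- nearZeroVar(fingerprints, saveMetrics = TRUE)
head(nzvMetrics)

# Number of predictors flagged as near-zero variance (1107 - 388 = 719)
sum(nzvMetrics$nzv)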
(c) Split the data into a training and a test set, pre-process the data, and tune a PLS model. How many latent variables are optimal and what is the corresponding resampled estimate of R^2?

set.seed(24324) 
trainIndex <- createDataPartition(permeability, p = 0.8, list = FALSE)
trainX <- filtered_fingerprints[trainIndex, ]
testX <- filtered_fingerprints[-trainIndex, ]
trainY <- permeability[trainIndex]
testY <- permeability[-trainIndex]
preProc <- preProcess(trainX, method = c("center", "scale"))
trainX <- predict(preProc, trainX)
testX <- predict(preProc, testX)
plsFit <- train(
    x = trainX, y = trainY,
    method = "pls",
    tuneLength = 20,
    trControl = trainControl(method = "repeatedcv", repeats = 5)
)
print(plsFit)
## Partial Least Squares 
## 
## 133 samples
## 388 predictors
## 
## No pre-processing
## Resampling: Cross-Validated (10 fold, repeated 5 times) 
## Summary of sample sizes: 119, 119, 121, 119, 120, 120, ... 
## Resampling results across tuning parameters:
## 
##   ncomp  RMSE      Rsquared   MAE      
##    1     12.83940  0.3723068  10.003266
##    2     11.32325  0.5232548   8.163803
##    3     11.38391  0.5117540   8.736887
##    4     11.41817  0.5073625   8.982007
##    5     10.90094  0.5500008   8.317762
##    6     10.77007  0.5583918   8.234721
##    7     10.49573  0.5799407   8.187043
##    8     10.47844  0.5876281   8.348120
##    9     10.72103  0.5754349   8.533767
##   10     10.91757  0.5675198   8.571109
##   11     11.25002  0.5497627   8.790371
##   12     11.47414  0.5308934   8.854460
##   13     11.68265  0.5196960   8.948358
##   14     11.90548  0.5075917   9.059684
##   15     12.09592  0.5000765   9.107593
##   16     12.41389  0.4863809   9.314562
##   17     12.55197  0.4807324   9.457369
##   18     12.60562  0.4806969   9.528100
##   19     12.62939  0.4815019   9.552291
##   20     12.82935  0.4736206   9.719977
## 
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was ncomp = 8.
plot(plsFit)

The optimal model uses 8 latent variables, with a corresponding resampled R^2 of 0.588.

(d) Predict the response for the test set. What is the test set estimate of R^2?
# Predict on the test set
plsPred <- predict(plsFit, testX )
postResample(plsPred, testY)
##       RMSE   Rsquared        MAE 
## 12.6847680  0.2841403  8.4170652
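The test set R^2 (0.28) is well below the resampled estimate (0.59). A predicted-versus-observed plot makes the drop visible; a minimal sketch using the objects above:

# Plot observed vs. predicted permeability for the test set
ggplot(data.frame(observed = testY, predicted = as.vector(plsPred)),
       aes(x = observed, y = predicted)) +
    geom_point() +
    geom_abline(slope = 1, intercept = 0, linetype = "dashed") +
    labs(title = "PLS test set: predicted vs. observed permeability")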
(e) Try building other models discussed in this chapter. Do any have better predictive performance?
# Train an SVM model
svmFit <- train(
    x = trainX, y = trainY,
    method = "svmRadial",
    tuneLength = 5,
    trControl = trainControl(method = "repeatedcv", repeats = 5)
)
print(svmFit)
## Support Vector Machines with Radial Basis Function Kernel 
## 
## 133 samples
## 388 predictors
## 
## No pre-processing
## Resampling: Cross-Validated (10 fold, repeated 5 times) 
## Summary of sample sizes: 119, 119, 120, 120, 121, 119, ... 
## Resampling results across tuning parameters:
## 
##   C     RMSE      Rsquared   MAE     
##   0.25  12.17492  0.5254212  8.304999
##   0.50  11.63221  0.5244405  8.030649
##   1.00  10.95986  0.5545334  7.662535
##   2.00  10.30771  0.5846630  7.280411
##   4.00  10.15781  0.5984959  7.234496
## 
## Tuning parameter 'sigma' was held constant at a value of 0.0021461
## RMSE was used to select the optimal model using the smallest value.
## The final values used for the model were sigma = 0.0021461 and C = 4.
# Predict on test set
svmPred <- predict(svmFit, newdata = testX)

# Calculate test set R^2
svmTestR2 <- cor(testY, svmPred)^2
cat("SVM Test R^2:", svmTestR2, "\n")
## SVM Test R^2: 0.4003382
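caret's resamples() collects the cross-validation results from both fits for a side-by-side summary (the two train() calls did not share resampling seeds, so this is an informal comparison):

# Compare resampled performance of the PLS and SVM models
modelComparison <- resamples(list(PLS = plsFit, SVM = svmFit))
summary(modelComparison)

On both the resampled RMSE (10.16 vs. 10.48) and the test set R^2 (0.40 vs. 0.28), the SVM outperforms the PLS model.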

6.3

A chemical manufacturing process for a pharmaceutical product was discussed in Sect. 1.4. In this problem, the objective is to understand the relationship between biological measurements of the raw materials (predictors), measurements of the manufacturing process (predictors), and the response of product yield. Biological predictors cannot be changed but can be used to assess the quality of the raw material before processing. On the other hand, manufacturing process predictors can be changed in the manufacturing process. Improving product yield by 1 % will boost revenue by approximately one hundred thousand dollars per batch:

(a) Start R and use these commands to load the data:
library(AppliedPredictiveModeling)
data(ChemicalManufacturingProcess)
# Check the dimensions and structure of the data
chemical <- ChemicalManufacturingProcess
dim(chemical)
## [1] 176  58

The matrix processPredictors contains the 57 predictors (12 describing the input biological material and 45 describing the process predictors) for the 176 manufacturing runs. yield contains the percent yield for each run.

(b) A small percentage of cells in the predictor set contain missing values. Use an imputation function to fill in these missing values (e.g., see Sect. 3.8).
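Before imputing, it is worth confirming how small that percentage actually is; a quick check (output not shown):

# Count missing cells and express them as a fraction of all cells
sum(is.na(chemical))
mean(is.na(chemical))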

# Impute missing values using bagged trees
preProc <- preProcess(chemical, method = "bagImpute")
imputed_dataset <- predict(preProc, chemical)

# Alternative: k-nearest-neighbor imputation (used for the split in part (c));
# note that knnImpute also centers and scales every column
imputed <- preProcess(chemical, method = "knnImpute")

# Confirm no missing values remain after imputation
sum(is.na(imputed_dataset))
## [1] 0
(c) Split the data into a training and a test set, pre-process the data, and tune a model of your choice from this chapter. What is the optimal value of the performance metric?
library(RANN)
## Warning: package 'RANN' was built under R version 4.3.3
# Apply the knn imputation to get a complete data set
trans <- predict(imputed, chemical)

# Randomly assign roughly 80% of the rows to the training set
sample <- sample(c(TRUE, FALSE), nrow(trans), replace = TRUE, prob = c(0.8, 0.2))
train  <- trans[sample, ]
test   <- trans[!sample, ]
# Pre-process data (e.g., centering and scaling)
pre_process <- preProcess(train, method = c("center", "scale"))
train_preprocessed <- predict(pre_process, train)
test_preprocessed <- predict(pre_process, test)

# Separate predictors and response
train_x <- train_preprocessed[, -which(names(train_preprocessed) == "Yield")]
train_y <- train_preprocessed$Yield
test_x <- test_preprocessed[, -which(names(test_preprocessed) == "Yield")]
test_y <- test_preprocessed$Yield

# Train a model (Random Forest)
set.seed(123)
rf_model <- train(
  x = train_x,
  y = train_y,
  method = "rf",
  trControl = trainControl(method = "cv", number = 10),
  metric = "RMSE"
)

# Model evaluation
rf_model
## Random Forest 
## 
## 147 samples
##  57 predictor
## 
## No pre-processing
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 131, 133, 133, 133, 131, 134, ... 
## Resampling results across tuning parameters:
## 
##   mtry  RMSE       Rsquared   MAE      
##    2    0.6438771  0.6787738  0.5192924
##   29    0.5901099  0.6824375  0.4496367
##   57    0.5973413  0.6634521  0.4497093
## 
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was mtry = 29.
The optimal resampled RMSE is 0.590, at mtry = 29. (Because knnImpute standardizes every column, Yield is on the centered-and-scaled scale here.)

# Predict on test set
test_predictions <- predict(rf_model, test_x)

# Calculate performance metrics
rmse <- sqrt(mean((test_predictions - test_y)^2))
cat("Test RMSE:", rmse, "\n")
## Test RMSE: 0.5779966
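postResample() reports R^2 and MAE alongside the RMSE computed above (values depend on the random split):

# Full test set metrics for the random forest
postResample(test_predictions, test_y)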
(d) Predict the response for the test set. What is the value of the performance metric and how does this compare with the resampled performance metric on the training set?

For the random forest, the test RMSE of 0.578 is slightly better than the resampled RMSE of 0.590, so the cross-validation estimate holds up on held-out data. For comparison, a PLS model tuned on the same split:
chemtuned <- train(train %>% dplyr::select(-c("Yield")), train$Yield,
                 method = "pls",
                 tuneLength = 25,
                 trControl = trainControl("cv", number = 10), # use 10-fold cross-validation
                 preProc = c("center", "scale"))
# Print out tuning results
chemtuned
## Partial Least Squares 
## 
## 147 samples
##  57 predictor
## 
## Pre-processing: centered (57), scaled (57) 
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 133, 132, 133, 131, 132, 132, ... 
## Resampling results across tuning parameters:
## 
##   ncomp  RMSE       Rsquared   MAE      
##    1     0.8055235  0.4507798  0.6310476
##    2     1.0089536  0.5076486  0.6434741
##    3     0.7139061  0.5905980  0.5491270
##    4     0.9401839  0.5681328  0.6279718
##    5     1.1751195  0.5363313  0.6979205
##    6     1.2654779  0.5316961  0.7283897
##    7     1.3424418  0.5219613  0.7390583
##    8     1.3865617  0.4943538  0.7478407
##    9     1.5643977  0.4569937  0.7980841
##   10     1.7069939  0.4482318  0.8465827
##   11     1.8680414  0.4357595  0.8946845
##   12     2.0198275  0.4172813  0.9480298
##   13     2.0145436  0.4122623  0.9506365
##   14     1.9942225  0.4161104  0.9347440
##   15     1.9366731  0.4159897  0.9195333
##   16     1.8685600  0.4188856  0.8987671
##   17     1.7838880  0.4202636  0.8771889
##   18     1.7000882  0.4229186  0.8573963
##   19     1.6295681  0.4269886  0.8423494
##   20     1.5171765  0.4355971  0.8146559
##   21     1.4196113  0.4460516  0.7904533
##   22     1.3683986  0.4597046  0.7796760
##   23     1.3162155  0.4870407  0.7605238
##   24     1.3991134  0.4397084  0.7982461
##   25     1.4757661  0.4232730  0.8247767
## 
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was ncomp = 3.
plot(chemtuned)
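To complete the comparison for the PLS fit as well, a sketch predicting on the held-out set (train() stored the centering and scaling, so predict() applies them automatically; values depend on the random split):

# Test set performance of the tuned PLS model
chemPred <- predict(chemtuned, test %>% dplyr::select(-Yield))
postResample(chemPred, test$Yield)

Its resampled RMSE of 0.714 at ncomp = 3 trails the random forest's 0.590.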

(e) Which predictors are most important in the model you have trained? Do either the biological or process predictors dominate the list?
# Extract variable importance
importance <- varImp(rf_model, scale = FALSE)
print(importance)
## rf variable importance
## 
##   only 20 most important variables shown (out of 57)
## 
##                        Overall
## ManufacturingProcess32  35.085
## ManufacturingProcess13   9.992
## BiologicalMaterial12     8.370
## BiologicalMaterial03     8.289
## ManufacturingProcess31   7.351
## ManufacturingProcess17   6.287
## BiologicalMaterial06     5.891
## ManufacturingProcess09   4.550
## ManufacturingProcess06   4.428
## BiologicalMaterial11     3.530
## ManufacturingProcess36   3.527
## BiologicalMaterial02     2.611
## BiologicalMaterial04     2.271
## ManufacturingProcess11   2.126
## ManufacturingProcess30   2.026
## ManufacturingProcess28   1.983
## BiologicalMaterial05     1.938
## BiologicalMaterial08     1.903
## ManufacturingProcess27   1.818
## ManufacturingProcess20   1.770
# Plot variable importance
plot(importance, main = "Variable Importance")
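Process predictors dominate the list: 12 of the top 20 are manufacturing process variables, led by ManufacturingProcess32 at more than three times the importance of the runner-up. A quick tally (a minimal sketch using the importance scores above):

# Count process vs. biological predictors among the top 20 by importance
imp_df <- varImp(rf_model, scale = FALSE)$importance
top20 <- rownames(imp_df)[order(imp_df$Overall, decreasing = TRUE)][1:20]
table(ifelse(grepl("^Manufacturing", top20), "Process", "Biological"))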

(f) Explore the relationships between each of the top predictors and the response. How could this information be helpful in improving yield in future runs of the manufacturing process?
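A simple way to explore these relationships is to correlate each top predictor with Yield directly (a minimal sketch, assuming the bag-imputed data frame imputed_dataset from part (b)):

# Correlation of each of the top ten predictors with Yield
imp_df <- varImp(rf_model, scale = FALSE)$importance
top10 <- rownames(imp_df)[order(imp_df$Overall, decreasing = TRUE)][1:10]
sort(sapply(imputed_dataset[, top10], cor, y = imputed_dataset$Yield), decreasing = TRUE)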

Based on this correlation check, ManufacturingProcess32 has the strongest positive correlation with Yield, while three of the top ten predictors are negatively correlated with it. This is directly actionable: the process predictors can be adjusted in future runs, so increasing the positively correlated ones and reducing the negatively correlated ones offers a concrete lever for improving yield, while the biological predictors, which cannot be changed, can instead be used to screen incoming raw material.