In Kuhn and Johnson do problems 6.2 and 6.3. There are only two but they consist of many parts. Please submit a link to your Rpubs and submit the .rmd file as well.
knitr::opts_chunk$set(warning = FALSE, message = FALSE)
Developing a model to predict permeability (see Sect. 1.4) could save significant resources for a pharmaceutical company, while at the same time more rapidly identifying molecules that have a sufficient permeability to become a drug:
Start R and use these commands to load the data:
library(AppliedPredictiveModeling)
data(permeability)
The matrix fingerprints contains the 1,107 binary molecular predictors for the 165 compounds, while permeability contains the permeability response.
The fingerprint predictors indicate the presence or absence of substructures of a molecule and are often sparse, meaning that relatively few of the molecules contain each substructure. Filter out the predictors that have low frequencies using the nearZeroVar function from the caret package. How many predictors are left for modeling?
library(caret)
# Filter out predictors that have low frequencies
nzv <- nearZeroVar(fingerprints)
length(nzv)
## [1] 719
# Should have 1107 - 719 predictors remaining
filtered <- fingerprints[ , -nzv]
ncol(filtered)
## [1] 388
There are 388 predictors left for modeling.
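As a sanity check, nearZeroVar() can also return the frequency metrics it uses to flag predictors. A minimal sketch, assuming the fingerprints matrix loaded above:

# Inspect why predictors were flagged: saveMetrics = TRUE returns the
# frequency ratio and percent-unique values behind the filter
nzvMetrics <- nearZeroVar(fingerprints, saveMetrics = TRUE)
head(nzvMetrics[nzvMetrics$nzv, ])
sum(!nzvMetrics$nzv)  # should match ncol(filtered), i.e. 388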
Split the data into a training and a test set, pre-process the data, and tune a PLS model. How many latent variables are optimal and what is the corresponding resampled estimate of R2?
# Set seed and splitting data into training and test set
set.seed(123)
trainIndex <- createDataPartition(permeability, p = 0.8, list = FALSE)
trainX <- filtered[trainIndex, ]
testX <- filtered[-trainIndex, ]
trainY <- permeability[trainIndex]
testY <- permeability[-trainIndex]
# Pre-process the data and train a PLS model
plstrain <- train(
x = trainX,
y = trainY,
method = "pls",
preProc = c("center", "scale"),
tuneLength = 10,
trControl = trainControl(method = "cv", number = 10)
)
# Results
plot(plstrain)
max(plstrain$results$Rsquared)
## [1] 0.5335956
The plot shows the lowest cross-validated RMSE at 2 components, indicating that two latent variables are optimal for summarizing the useful information in the hundreds of predictors.
The corresponding resampled R^2 is about 0.53, meaning that roughly 53% of the variability in permeability is explained by the two-component model.
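Note that max(plstrain$results$Rsquared) is the best R^2 over all candidate component counts, which is not necessarily the value at the RMSE-optimal tune. A minimal sketch of pulling the resampled metrics at the selected number of components, assuming the plstrain object above:

# Resampled RMSE and R^2 at the number of components chosen by caret
plstrain$bestTune
subset(plstrain$results, ncomp == plstrain$bestTune$ncomp,
       select = c(ncomp, RMSE, Rsquared))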
Predict the response for the test set. What is the test set estimate of R2?
y_hat <- predict(plstrain, newdata = testX)
r2_test <- cor(y_hat, testY)^2
r2_test
## [1] 0.3244542
The test-set R^2 is 0.32, which is lower than the resampled R^2 from training (about 0.53), suggesting some overfitting.
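caret's postResample() computes RMSE, R^2, and MAE in one call and serves as a cross-check on the manual calculation above; a minimal sketch, assuming y_hat and testY from above:

# RMSE, Rsquared, and MAE for the test-set predictions
postResample(pred = y_hat, obs = testY)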
Try building other models discussed in this chapter. Do any have better predictive performance?
ctrl <- trainControl(method = "cv", number = 10)
# Linear Regression
lmfit <- train(
x = trainX,
y = trainY,
method = "lm",
trControl = ctrl)
# Ridge Regression
ridgeFit <- train(
x = trainX,
y = trainY,
method = "ridge",
preProc = c("center", "scale"),
tuneLength = 25,
trControl = ctrl
)
# Elastic Net
enetFit <- train(
x = trainX,
y = trainY,
method = "enet",
preProc = c("center", "scale"),
tuneLength = 10,
trControl = ctrl
)
# Robust Linear Model (RLM)
# A plain RLM fit fails here because the predictor matrix is nearly singular, so we pre-process with PCA
rlmPCA <- train(
x = trainX,
y = trainY,
method = "rlm",
preProcess = "pca",
trControl = ctrl
)
# Comparing the results
models <- list(
PLS = plstrain,
LM = lmfit,
Ridge = ridgeFit,
ENet = enetFit,
RLM_PCA = rlmPCA
)
results <- data.frame(Model = character(), R2 = numeric(), RMSE = numeric())
for (m in names(models)) {
preds <- predict(models[[m]], newdata = testX)
r2 <- cor(preds, testY)^2
rmse <- sqrt(mean((preds - testY)^2))
results <- rbind(results, data.frame(Model = m, R2 = r2, RMSE = rmse))
}
# Print table
knitr::kable(results, digits = 4)
| Model | R2 | RMSE |
|---|---|---|
| PLS | 0.3245 | 12.3487 |
| LM | 0.0785 | 29.9065 |
| Ridge | 0.4004 | 12.8430 |
| ENet | 0.3854 | 11.1370 |
| RLM_PCA | 0.2622 | 14.2518 |
The penalized models performed best on the test set: ridge regression had the highest R^2 (0.40), while the elastic net had the lowest RMSE (11.14) with a nearly identical R^2 (0.39). Both clearly improved on the PLS model's R^2 of 0.32.
Elastic net combines the lasso (L1) and ridge (L2) regularization penalties, allowing it to shrink the coefficients of correlated predictors and drop irrelevant features. This reduces overfitting and, on this data, gives better predictions than PLS, which relies only on latent components.
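Since the single test split here is small (only about 30 compounds), it can also help to compare the models on their cross-validation resamples with caret's resamples(); a minimal sketch, assuming the models list defined below. The folds were not shared across the train() calls, so the comparison is approximate; a strictly paired comparison would pass common indices via trainControl(index = ...).

# Summarize the 10-fold CV metrics collected during tuning for each model
cvResults <- resamples(models)
summary(cvResults)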
Would you recommend any of your models to replace the permeability laboratory experiment?
No, I would not recommend any of these models to replace the permeability laboratory experiment. Even the best-performing models, ridge regression and the elastic net, achieved a test-set R^2 of only about 0.39 to 0.40, meaning they explain roughly 40% of the variance in permeability.
A chemical manufacturing process for a pharmaceutical product was discussed in Sect. 1.4. In this problem, the objective is to understand the relationship between biological measurements of the raw materials (predictors), measurements of the manufacturing process (predictors), and the response of product yield. Biological predictors cannot be changed but can be used to assess the quality of the raw material before processing. On the other hand, manufacturing process predictors can be changed in the manufacturing process. Improving product yield by 1% will boost revenue by approximately one hundred thousand dollars per batch:
Start R and use these commands to load the data:
library(AppliedPredictiveModeling)
data(ChemicalManufacturingProcess) # data(ChemicalManufacturing), as listed in the text, could not be found; the object in the package is ChemicalManufacturingProcess
The matrix processPredictors contains the 57 predictors (12 describing the input biological material and 45 describing the process predictors) for the 176 manufacturing runs. yield contains the percent yield for each run.
A small percentage of cells in the predictor set contain missing values. Use an imputation function to fill in these missing values (e.g., see Sect. 3.8).
# Separating the target (Yield) and predictor variables
yield <- ChemicalManufacturingProcess$Yield
processPredictors <- ChemicalManufacturingProcess[, -1]
# Check for missing values
sum(is.na(processPredictors))
## [1] 106
# Impute missing values with the median
preProc <- preProcess(processPredictors, method = "medianImpute")
processPredictors_imputed <- predict(preProc, processPredictors)
sum(is.na(processPredictors_imputed))
## [1] 0
The missing values were imputed with each predictor's median.
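Median imputation is simple; an alternative discussed in Sect. 3.8 is K-nearest-neighbor imputation, also available through preProcess(). A minimal sketch, assuming the processPredictors data frame above (note that method = "knnImpute" in caret also centers and scales the predictors as a side effect):

# Alternative: impute each missing cell from its 5 nearest neighbors
knnPre <- preProcess(processPredictors, method = "knnImpute", k = 5)
processPredictors_knn <- predict(knnPre, processPredictors)
sum(is.na(processPredictors_knn))  # should also be 0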
Split the data into a training and a test set, pre-process the data, and tune a model of your choice from this chapter. What is the optimal value of the performance metric?
set.seed(1234)
trainIndex <- createDataPartition(yield, p = 0.8, list = FALSE)
trainX <- processPredictors_imputed[trainIndex, ]
testX <- processPredictors_imputed[-trainIndex, ]
trainY <- yield[trainIndex]
testY <- yield[-trainIndex]
ctrl <- trainControl(method = "cv", number = 10)
# PLS model
plsFit <- train(
x = trainX,
y = trainY,
method = "pls",
preProc = c("center", "scale"),
tuneLength = 20,
trControl = ctrl
)
# Results
plot(plsFit)
plsFit$bestTune
## ncomp
## 3 3
plsFit$results[which.max(plsFit$results$Rsquared), ]
## ncomp RMSE Rsquared MAE RMSESD RsquaredSD MAESD
## 3 3 1.186747 0.6296581 0.9716607 0.2160935 0.1525181 0.1565355
The optimal model used 3 latent components, achieving a resampled R^2 of 0.63 and an RMSE of about 1.19.
Predict the response for the test set. What is the value of the performance metric and how does this compare with the resampled performance metric on the training set?
# Predict test set
y_pred <- predict(plsFit, newdata = testX)
# Results
r2_test <- cor(y_pred, testY)^2
rmse_test <- sqrt(mean((y_pred - testY)^2))
print(paste("R-squared:", r2_test))
## [1] "R-squared: 0.175784643761978"
print(paste("RMSE:", rmse_test))
## [1] "RMSE: 2.12205694450488"
The PLS model fit the training data reasonably well (resampled R^2 of 0.63) but performed poorly on the test set (R^2 = 0.18). This large drop in performance indicates overfitting and suggests the model does not generalize well to new, unseen data.
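A quick diagnostic for this drop is to plot observed versus predicted yield on the held-out runs; a minimal sketch, assuming y_pred and testY from above:

# Observed vs. predicted yield on the test set; points far from the
# 45-degree reference line show where the PLS model misses
plot(testY, y_pred, xlab = "Observed yield", ylab = "Predicted yield")
abline(0, 1, lty = 2)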
Which predictors are most important in the model you have trained? Do either the biological or process predictors dominate the list?
To find the most important predictors in the model, we can use varImp() from the caret package, which measures how much each predictor contributes to the model’s predictions.
# Top 10 predictors
vip <- varImp(plsFit)
plot(vip, top = 10)
This plot shows that ManufacturingProcess32 contributes the most to the PLS model’s predictions, followed by ManufacturingProcess36 and ManufacturingProcess13. Manufacturing Process predictors dominate the list over Biological Material predictors.
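To quantify that claim, the importance scores returned by varImp() can be tallied by predictor type; a minimal sketch, assuming the vip object above (predictor names in this data set start with either "ManufacturingProcess" or "BiologicalMaterial"):

# Count how many of the 10 most important predictors are process vs. biological
imp <- vip$importance
imp <- imp[order(-imp$Overall), , drop = FALSE]
top10 <- head(rownames(imp), 10)
table(ifelse(grepl("^Manufacturing", top10), "Process", "Biological"))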
Explore the relationships between each of the top predictors and the response. How could this information be helpful in improving yield in future runs of the manufacturing process?
library(corrplot)
# Correlation plot of the top 3 predictors
corr_data <- ChemicalManufacturingProcess[,
c("Yield", "ManufacturingProcess32", "ManufacturingProcess36", "ManufacturingProcess13")]
corr_matrix <- cor(corr_data, use = "pairwise.complete.obs")
corrplot(corr_matrix, method = "color", addCoef.col = "black", tl.col = "black")
ManufacturingProcess32 has a strong positive correlation with yield: higher values of this process variable are associated with higher yield. ManufacturingProcess36 and ManufacturingProcess13 both show moderate negative correlations with yield, suggesting that increasing them tends to decrease yield. ManufacturingProcess36 and ManufacturingProcess32 are themselves strongly negatively correlated (-0.79), so as one increases the other tends to decrease.
The positive correlation between ManufacturingProcess32 and yield suggests that optimizing this step could lead to higher yield. Conversely, the negative relationships for ManufacturingProcess36 and ManufacturingProcess13 indicate that lowering these settings may help prevent yield loss.
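Scatterplots of yield against each top predictor make the direction and strength of these relationships easier to see than the correlation matrix alone; a minimal sketch, assuming the ChemicalManufacturingProcess data and yield vector from above:

# Yield vs. each of the three most important process predictors
topPreds <- c("ManufacturingProcess32", "ManufacturingProcess36", "ManufacturingProcess13")
par(mfrow = c(1, 3))
for (p in topPreds) {
  plot(ChemicalManufacturingProcess[[p]], yield, xlab = p, ylab = "Yield", main = p)
}
par(mfrow = c(1, 1))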