DATA 624 Homework 7
Question 6.2
Developing a model to predict permeability (see Sect. 1.4) could save significant resources for a pharmaceutical company, while at the same time more rapidly identifying molecules that have a sufficient permeability to become a drug:
Start R and use these commands to load the data:
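The loading chunk did not render in this document; per the exercise, the data come from the AppliedPredictiveModeling package, and the code that follows assumes caret and the tidyverse are loaded:
library(AppliedPredictiveModeling)
library(caret)
library(tidyverse)
data(permeability)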
The fingerprint predictors indicate the presence or absence of substructures of a molecule and are often sparse, meaning that relatively few of the molecules contain each substructure. Filter out the predictors that have low frequencies using the nearZeroVar function from the caret package. How many predictors are left for modeling?
fingerprintsdf <- as.data.frame(fingerprints)
print(paste('Total predictors:', ncol(fingerprintsdf)))
## [1] "Total predictors: 1107"
The nearZeroVar function identifies the sparse, near-zero-variance predictors; removing them leaves only the features with enough variation to be useful for modeling:
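The filtering statement itself did not render; a minimal sketch that produces the count below:
# nearZeroVar() returns the column indices of the near-zero-variance predictors
nzv <- nearZeroVar(fingerprintsdf)
print(paste('Non-Sparse predictors:', ncol(fingerprintsdf) - length(nzv)))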
## [1] "Non-Sparse predictors: 388"
Split the data into a training and a test set, pre-process the data, and tune a PLS model. How many latent variables are optimal and what is the corresponding resampled estimate of R2?
# Drop the near-zero-variance columns and attach the response
df <- as.data.frame(fingerprintsdf[, -nearZeroVar(fingerprintsdf)]) %>% mutate(permeability = permeability)
# Make this reproducible
set.seed(42)
in_train <- createDataPartition(df$permeability, times = 1, p = 0.8, list = FALSE)
train_df <- df[in_train, ]
test_df <- df[-in_train, ]
pls_model <- train(
permeability ~ ., data = train_df, method = "pls",
center = TRUE,
trControl = trainControl("cv", number = 10),
tuneLength = 25
)
# Plot model RMSE vs different values of components
title <- paste("Training Set RMSE Minimized at",
pls_model$bestTune$ncomp,
"Components")
ggplot(pls_model) + xlab("Number of Components") + ggtitle(title)
## ncomp RMSE Rsquared
## 1 5 14.29229 0.263486
The optimal number of latent variables is 5, and the corresponding resampled (10-fold cross-validated) estimate of R-squared is about 0.26.
Predict the response for the test set. What is the test set estimate of R2?
The R-squared on the test set is approximately 0.30:
# Make predictions
predictions <- predict(pls_model, test_df)
# Model performance metrics
results <- data.frame(Model = "PLS",
RMSE = caret::RMSE(predictions, test_df$permeability),
Rsquared = caret::R2(predictions, test_df$permeability))
results
## Model RMSE Rsquared
## permeability PLS 13.23639 0.3016825
Try building other models discussed in this chapter. Do any have better predictive performance?
library(lars)
library(glmnet)
pcr_model <- train(
permeability ~ ., data = train_df, method = "pcr",
center = TRUE,
trControl = trainControl("cv", number = 10),
tuneLength = 25
)
pcr_predictions <- predict(pcr_model, test_df)
pcr_results <- data.frame(Model = "PCR",
RMSE = caret::RMSE(pcr_predictions, test_df$permeability),
Rsquared = caret::R2(pcr_predictions, test_df$permeability))
pcr_results
## Model RMSE Rsquared
## permeability PCR 15.90717 0.03445949
# Build model matrices for glmnet, dropping the intercept column (glmnet adds its own)
x <- model.matrix(permeability ~ ., data = train_df)[, -1]
x_test <- model.matrix(permeability ~ ., data = test_df)[, -1]
# Ridge regression (alpha = 0), with lambda chosen by cross-validation
rr_cv <- cv.glmnet(x, train_df$permeability, alpha = 0)
rr_model <- glmnet(x, train_df$permeability, alpha = 0, lambda = rr_cv$lambda.min)
rr_predictions <- as.vector(predict(rr_model, x_test))
rr_results <- data.frame(Model = "Ridge Regression",
RMSE = caret::RMSE(rr_predictions, test_df$permeability),
Rsquared = caret::R2(rr_predictions, test_df$permeability))
rr_results
## Model RMSE Rsquared
## permeability Ridge Regression 14.59042 0.1426659
# Lasso regression (alpha = 1), with lambda chosen by cross-validation
lr_cv <- cv.glmnet(x, train_df$permeability, alpha = 1)
lr_model <- glmnet(x, train_df$permeability, alpha = 1, lambda = lr_cv$lambda.min)
lr_predictions <- as.vector(predict(lr_model, x_test))
lr_results <- data.frame(Model = "Lasso Regression",
RMSE = caret::RMSE(lr_predictions, test_df$permeability),
Rsquared = caret::R2(lr_predictions, test_df$permeability))
lr_results
## Model RMSE Rsquared
## permeability Lasso Regression 13.98552 0.2417885
en_model <- train(
permeability ~ ., data = train_df, method = "glmnet",
trControl = trainControl("cv", number = 10),
tuneLength = 10
)
## Warning in nominalTrainWorkflow(x = x, y = y, wts = weights, info = trainInfo, :
## There were missing values in resampled performance measures.
# The model was fit with the formula interface, so predict on the test data frame
en_predictions <- en_model %>% predict(test_df)
# Model performance metrics
en_results <- data.frame(Model = "Elastic Net Regression",
RMSE = caret::RMSE(en_predictions, test_df$permeability),
Rsquared = caret::R2(en_predictions, test_df$permeability))
en_results
## Model RMSE Rsquared
## permeability Elastic Net Regression 14.08593 0.2055749
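For an at-a-glance comparison, the per-model result frames can be stacked; bind_rows comes from dplyr, which the pipes above already assume is loaded:
bind_rows(results, pcr_results, rr_results, lr_results, en_results) %>%
arrange(desc(Rsquared))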
As far as R-squared values go, none of the other methods improved upon the results from the partial least squares model.
Would you recommend any of your models to replace the permeability laboratory experiment?
No. With the best R-squared only around 0.3, the model explains too little of the variance in permeability to replace the laboratory experiment.
Question 6.3
A chemical manufacturing process for a pharmaceutical product was discussed in Sect. 1.4. In this problem, the objective is to understand the relationship between biological measurements of the raw materials (predictors), measurements of the manufacturing process (predictors), and the response of product yield. Biological predictors cannot be changed but can be used to assess the quality of the raw materials before processing. On the other hand, manufacturing process predictors can be changed in the manufacturing process. Improving product yield by 1% will boost revenue by approximately one hundred thousand dollars per batch:
Start R and use these commands to load the data:
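As in 6.2, the loading chunk did not render; the data come from the AppliedPredictiveModeling package:
library(AppliedPredictiveModeling)
data(ChemicalManufacturingProcess)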
The matrix processPredictors contains the 57 predictors (12 describing the input biological material and 45 describing the process predictors) for the 176 manufacturing runs; yield contains the percent yield for each run.
A small percentage of cells in the predictor set contain missing values. Use
an imputation function to fill in these missing values (e.g., see Sect. 3.8).
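The imputation chunk did not render here. A minimal sketch using caret's knnImpute, which relies on the RANN package for the neighbor search; note the assumption that knnImpute also centers and scales every numeric column (including Yield), which is consistent with the standardized RMSE values reported below:
# Impute each missing cell from its k nearest neighbors (also centers and scales)
impute_model <- preProcess(ChemicalManufacturingProcess, method = "knnImpute")
df <- predict(impute_model, ChemicalManufacturingProcess)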
Split the data into a training and a test set, pre-process the data, and tune a model of your choice from this chapter. What is the optimal value of the performance metric?
Remove the near-zero-variance variables from the dataset:
df <- df %>%
select_at(vars(-one_of(nearZeroVar(., names = TRUE))))
# Make this reproducible
set.seed(42)
in_train <- createDataPartition(df$Yield, times = 1, p = 0.8, list = FALSE)
train_df <- df[in_train, ]
test_df <- df[-in_train, ]
pls_model <- train(
Yield ~ ., data = train_df, method = "pls",
center = TRUE,
scale = TRUE,
trControl = trainControl("cv", number = 10),
tuneLength = 25
)
# Plot model RMSE vs different values of components
title <- paste("Training Set RMSE Minimized at",
pls_model$bestTune$ncomp,
"Components")
plot(pls_model, main = title)
## ncomp RMSE Rsquared
## 1 3 0.7501872 0.5518453
The optimal model uses 3 latent variables, with a resampled RMSE of 0.75 and a resampled R-squared of about 0.55.
Predict the response for the test set. What is the value of the performance metric and how does this compare with the resampled performance metric on the training set?
# Make predictions
pls_predictions <- predict(pls_model, test_df)
# Model performance metrics
results <- data.frame(RMSE = caret::RMSE(pls_predictions, test_df$Yield),
Rsquared = caret::R2(pls_predictions, test_df$Yield))
results
## RMSE Rsquared
## 1 0.6595847 0.6534918
The test set RMSE (0.66) is lower, and the R-squared (0.65) higher, than the resampled training estimates (0.75 and 0.55), so the model holds up well on unseen data.
Which predictors are most important in the model you have trained? Do either the biological or process predictors dominate the list?
pls_importance <- varImp(pls_model)$importance %>%
as.data.frame() %>%
rownames_to_column("Variable") %>%
filter(Overall >= 50) %>%
arrange(desc(Overall)) %>%
mutate(importance = row_number())
varImp(pls_model) %>%
plot(top = max(pls_importance$importance), main = "Important Variables")
The manufacturing process predictors dominate the list: six of the top ten variables are process measurements, against four biological materials.
Explore the relationships between each of the top predictors and the response. How could this information be helpful in improving yield in future runs of the manufacturing process?
library(corrplot)
df %>%
select(c('ManufacturingProcess09', 'ManufacturingProcess13', 'ManufacturingProcess32', 'ManufacturingProcess17', 'ManufacturingProcess36',
'BiologicalMaterial02', 'BiologicalMaterial03', 'BiologicalMaterial06', 'ManufacturingProcess06', 'BiologicalMaterial04', 'Yield')) %>%
cor() %>%
corrplot(method = 'circle')
Because the process predictors can be adjusted while the biological ones cannot, these correlations show which process variables to push up (those positively correlated with Yield) and which to dial back (those negatively correlated) in future runs.