Homework 7: Linear Regression

Instructions

In Kuhn and Johnson, do problems 6.2 and 6.3. There are only two problems, but they consist of many parts. Please submit a link to your Rpubs page and submit the .rmd file as well.

Packages

library(AppliedPredictiveModeling)
library(caret)
library(dplyr)
library(mice)
library(corrplot)
library(elasticnet)
library(pls)

6.2

6.2. Developing a model to predict permeability (see Sect. 1.4) could save significant resources for a pharmaceutical company, while at the same time more rapidly identifying molecules that have a sufficient permeability to become a drug:

(a) Start R and use these commands to load the data:

data(permeability)

The matrix fingerprints contains the 1,107 binary molecular predictors for the 165 compounds, while permeability contains permeability response.

str(fingerprints)
##  num [1:165, 1:1107] 0 0 0 0 0 0 0 0 0 0 ...
##  - attr(*, "dimnames")=List of 2
##   ..$ : chr [1:165] "1" "2" "3" "4" ...
##   ..$ : chr [1:1107] "X1" "X2" "X3" "X4" ...

(b) The fingerprint predictors indicate the presence or absence of substructures of a molecule and are often sparse meaning that relatively few of the molecules contain each substructure. Filter out the predictors that have low frequencies using the nearZeroVar function from the caret package. How many predictors are left for modeling?

low_frequency <- nearZeroVar(fingerprints)

#remove the near-zero-variance columns using base R matrix indexing: x[rows, columns]
predictors <- fingerprints[,-low_frequency]

dim(predictors)
## [1] 165 388

There are 388 predictors left for modeling.
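
As an optional check (not required by the problem), the sparsity that nearZeroVar screens for can be inspected directly, since the column means of a binary matrix give the proportion of compounds containing each substructure. A minimal sketch reusing the objects above:

#proportion of compounds containing each substructure, for removed vs. retained columns
summary(colMeans(fingerprints[, low_frequency]))
summary(colMeans(fingerprints[, -low_frequency]))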

(c) Split the data into a training and a test set, pre-process the data, and tune a PLS model. How many latent variables are optimal and what is the corresponding resampled estimate of \(R^2\)?

set.seed(624)
#approximate 70/30 train/test split
split1<- sample(c(rep(0, 0.7 * nrow(permeability)), 
                  rep(1, 0.3 * nrow(permeability))))

X_train <- predictors[split1 == 0,]
X_test <- predictors[split1 == 1,]
y_train <- permeability[split1 == 0]
y_test <- permeability[split1 == 1]

#PLS model 
plsTune <- train(X_train, y_train, 
                method='pls', metric='Rsquared',
                tuneLength=20, 
                trControl=trainControl(method='cv'),
                preProc=c('center', 'scale')
                )
plsTune
## Partial Least Squares 
## 
## 116 samples
## 388 predictors
## 
## Pre-processing: centered (388), scaled (388) 
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 104, 105, 105, 104, 104, 104, ... 
## Resampling results across tuning parameters:
## 
##   ncomp  RMSE      Rsquared   MAE      
##    1     12.98974  0.3897237  10.091790
##    2     11.71406  0.5029046   8.701257
##    3     11.83257  0.5312902   9.214913
##    4     11.42981  0.5632994   9.013731
##    5     11.03805  0.5914051   8.691079
##    6     10.59821  0.6316227   8.231370
##    7     10.26537  0.6443277   7.986676
##    8     10.46235  0.6304089   8.231908
##    9     10.73985  0.6017092   8.546290
##   10     10.83152  0.5891046   8.531458
##   11     10.99323  0.5805043   8.418635
##   12     11.13834  0.5746015   8.593098
##   13     11.37617  0.5655660   8.816645
##   14     11.58639  0.5586207   9.061223
##   15     11.77675  0.5451381   9.186252
##   16     12.04135  0.5270379   9.397151
##   17     12.31818  0.5175956   9.491485
##   18     12.67136  0.4995409   9.714319
##   19     12.76746  0.4976270   9.845285
##   20     12.90804  0.4913546  10.027778
## 
## Rsquared was used to select the optimal model using the largest value.
## The final value used for the model was ncomp = 7.
plsTune$results %>% 
  dplyr::filter(ncomp == 7)
##   ncomp     RMSE  Rsquared      MAE  RMSESD RsquaredSD    MAESD
## 1     7 10.26537 0.6443277 7.986676 2.36905  0.1930055 1.832539

The optimal model uses ncomp = 7 latent variables, with a resampled \(R^2\) of 0.644.
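
The resampling profile can also be inspected visually with caret's plot method for train objects; this is an optional sketch, and the curve should peak at ncomp = 7.

#cross-validated R^2 versus number of PLS components
plot(plsTune, metric = "Rsquared")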

(d) Predict the response for the test set. What is the test set estimate of \(R^2\)?

#prediction using model and testing data
plsPred <- predict(plsTune, newdata=X_test)
#evaluation
postResample(pred=plsPred, obs=y_test)
##      RMSE  Rsquared       MAE 
## 15.504629  0.298245 10.263821

The test set estimate of \(R^2\) is 0.298, noticeably lower than the resampled estimate of 0.644 from cross-validation.
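
As a sanity check, this test-set \(R^2\) can also be computed by hand as the squared correlation between predicted and observed values, which should match the Rsquared value reported by postResample above; a minimal sketch:

#squared correlation between predicted and observed permeability on the test set
cor(plsPred, y_test)^2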

(e) Try building other models discussed in this chapter. Do any have better predictive performance?

set.seed(123)
pcr_Tune <- train(X_train, y_train, 
                   method = "pcr",
                   tuneLength = 20,
                   trControl = trainControl("cv"),
                   preProc=c('center', 'scale')
)

#pcr_Tune
pcrPred <- predict(pcr_Tune, newdata=X_test)
#evaluation
postResample(pred=pcrPred, obs=y_test)
##       RMSE   Rsquared        MAE 
## 13.9929634  0.2698699  9.8225701
set.seed(456)
enetGrid <- expand.grid(.lambda = c(0, 0.01, .1),
                        .fraction = seq(.05, 1, length = 20)) 
enet_Tune <- train(X_train, y_train, 
                    method = "enet",
                    tuneGrid = enetGrid,
                    trControl = trainControl("cv"),
                    preProc = c("center", "scale"))
#enet_Tune
#generate prediction using model and testing data
enetPred <- predict(enet_Tune, newdata=X_test)
#evaluation metrics
postResample(pred=enetPred, obs=y_test)
##       RMSE   Rsquared        MAE 
## 14.8780513  0.3165505  9.3951063
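
To compare predictive performance directly, the test-set metrics for the three models can be collected into one table; a minimal sketch reusing the prediction objects above:

#test-set performance of the PLS, PCR, and elastic net models
rbind(
  PLS  = postResample(pred = plsPred,  obs = y_test),
  PCR  = postResample(pred = pcrPred,  obs = y_test),
  ENet = postResample(pred = enetPred, obs = y_test)
)

Based on the metrics already shown, PCR gives the lowest test RMSE while the elastic net gives the highest test \(R^2\), so neither alternative is clearly better than the PLS model.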

(f) Would you recommend any of your models to replace the permeability laboratory experiment?

The lowest test RMSE (13.99) comes from the principal component regression model (pcr_Tune). However, all three models have test \(R^2\) values of only about 0.27–0.32, so none of them explains enough of the variation in permeability to be trusted on its own; I would not recommend replacing the permeability laboratory experiment with any of these models.

6.3

6.3. A chemical manufacturing process for a pharmaceutical product was discussed in Sect. 1.4. In this problem, the objective is to understand the relationship between biological measurements of the raw materials (predictors), measurements of the manufacturing process (predictors), and the response of product yield. Biological predictors cannot be changed but can be used to assess the quality of the raw material before processing. On the other hand, manufacturing process predictors can be changed in the manufacturing process. Improving product yield by 1 % will boost revenue by approximately one hundred thousand dollars per batch:

(a) Start R and use these commands to load the data:

data(ChemicalManufacturingProcess)

The matrix processPredictors contains the 57 predictors (12 describing the input biological material and 45 describing the process predictors) for the 176 manufacturing runs. yield contains the percent yield for each run.

(b) A small percentage of cells in the predictor set contain missing values. Use an imputation function to fill in these missing values (e.g., see Sect. 3.8).

#impute missing values with CART-based imputation via the mice package
imputed_data <- mice(ChemicalManufacturingProcess, printFlag=F, method="cart", seed = 1)

#extract a completed copy of the data
full_data <- complete(imputed_data)
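
A quick sanity check that the imputation left no missing cells (the second count should drop to zero):

#number of missing cells before and after imputation
sum(is.na(ChemicalManufacturingProcess))
sum(is.na(full_data))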

(c) Split the data into a training and a test set, pre-process the data, and tune a model of your choice from this chapter. What is the optimal value of the performance metric?

#remove near-zero-variance predictors
low_values <- nearZeroVar(full_data)

chem_predictors <- full_data[,-low_values]

#approximate 70/30 train/test split
split2 <- sample(c(rep(0, 0.7 * nrow(chem_predictors)), 
                  rep(1, 0.3 * nrow(chem_predictors))))

#split the data
chem_train <- chem_predictors[split2 == 0,]
chem_test <- chem_predictors[split2 == 1,]

#PLS model 
chem_pls <- train(Yield~., chem_train, 
                method='pls', metric='Rsquared',
                tuneLength=30, 
                trControl=trainControl(method='cv'),
                preProc=c('center', 'scale')
                )
chem_pls
## Partial Least Squares 
## 
## 124 samples
##  56 predictor
## 
## Pre-processing: centered (56), scaled (56) 
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 112, 112, 112, 110, 111, 112, ... 
## Resampling results across tuning parameters:
## 
##   ncomp  RMSE      Rsquared   MAE     
##    1     1.529677  0.4010646  1.180821
##    2     1.755482  0.4392847  1.190576
##    3     1.512907  0.4935478  1.081644
##    4     1.415093  0.5253042  1.061878
##    5     1.427493  0.5192771  1.076707
##    6     1.511009  0.4951565  1.110234
##    7     1.563757  0.4905256  1.138808
##    8     1.546491  0.5043460  1.149353
##    9     1.484077  0.5213452  1.143936
##   10     1.555641  0.4992690  1.169854
##   11     1.773537  0.4662366  1.245791
##   12     2.061358  0.4493184  1.332951
##   13     2.334952  0.4426627  1.415567
##   14     2.487741  0.4290738  1.482593
##   15     2.550284  0.4236253  1.505839
##   16     2.625538  0.4268477  1.535556
##   17     2.718648  0.4205033  1.564783
##   18     2.773839  0.4203954  1.584750
##   19     2.891574  0.4162470  1.622794
##   20     3.126069  0.4149081  1.693775
##   21     3.446181  0.4129242  1.787790
##   22     3.837902  0.4085571  1.908807
##   23     4.071110  0.4013838  1.986113
##   24     4.250297  0.3959688  2.040170
##   25     4.394987  0.3883497  2.088477
##   26     4.554736  0.3765970  2.149645
##   27     4.678290  0.3648323  2.201199
##   28     4.730504  0.3598082  2.223963
##   29     4.771777  0.3585355  2.239628
##   30     4.726215  0.3590860  2.229927
## 
## Rsquared was used to select the optimal model using the largest value.
## The final value used for the model was ncomp = 4.
chem_pls$results %>% 
  filter(ncomp==4)
##   ncomp     RMSE  Rsquared      MAE   RMSESD RsquaredSD     MAESD
## 1     4 1.415093 0.5253042 1.061878 0.574831  0.1420566 0.2542517

(d) Predict the response for the test set. What is the value of the performance metric and how does this compare with the resampled performance metric on the training set?

#predict on the test set
chem_predict <- predict(chem_pls, chem_test)
#evaluate test-set performance
postResample(pred = chem_predict, obs = chem_test$Yield)
##      RMSE  Rsquared       MAE 
## 1.6455102 0.4269652 1.1066459

The test set \(R^2\) of 0.427 is lower than the resampled training \(R^2\) of 0.525, and the test RMSE of 1.65 is higher than the resampled RMSE of 1.42, so the model performs somewhat worse on the test set than the resampling results suggested.
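
A predicted-versus-observed plot gives a visual check of the same test-set fit; a minimal sketch in base graphics:

#predicted vs. observed yield on the test set; the dashed line marks perfect agreement
plot(chem_test$Yield, chem_predict,
     xlab = "Observed yield", ylab = "Predicted yield")
abline(0, 1, lty = 2)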

(e) Which predictors are most important in the model you have trained? Do either the biological or process predictors dominate the list?

varImp(chem_pls)
## pls variable importance
## 
##   only 20 most important variables shown (out of 56)
## 
##                        Overall
## ManufacturingProcess32  100.00
## ManufacturingProcess36   73.26
## ManufacturingProcess09   69.84
## ManufacturingProcess13   68.00
## ManufacturingProcess33   60.94
## ManufacturingProcess17   60.81
## BiologicalMaterial03     53.70
## BiologicalMaterial02     53.17
## BiologicalMaterial06     52.88
## BiologicalMaterial08     52.28
## ManufacturingProcess06   51.22
## BiologicalMaterial12     47.72
## ManufacturingProcess12   47.63
## ManufacturingProcess11   47.25
## BiologicalMaterial04     46.72
## ManufacturingProcess34   45.47
## BiologicalMaterial01     45.17
## ManufacturingProcess29   44.51
## ManufacturingProcess30   44.36
## BiologicalMaterial11     44.07

The varImp() output shows that the manufacturing process predictors dominate the list: they hold the top six positions and account for 12 of the 20 most important variables, while the remaining 8 are biological material predictors.
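
The same ranking can be displayed graphically with caret's plot method for variable importance objects; an optional sketch:

#20 most important predictors in the PLS model
plot(varImp(chem_pls), top = 20)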

(f) Explore the relationships between each of the top predictors and the response. How could this information be helpful in improving yield in future runs of the manufacturing process?

Using the correlation matrix below, the Yield response does not correlate strongly with most of the important predictors identified in the previous step. The strongest positive correlations are with ManufacturingProcess32 and ManufacturingProcess09 (0.61 and 0.50, respectively), and the strongest negative correlations are with ManufacturingProcess36 and ManufacturingProcess13 (-0.52 and -0.50, respectively). The response does not have strong correlations with the important BiologicalMaterial predictors from part (e). Because the process predictors can be adjusted while the biological measurements cannot, this suggests focusing future runs on the settings of ManufacturingProcess32 and ManufacturingProcess09 (positively associated with yield) and ManufacturingProcess36 and ManufacturingProcess13 (negatively associated), and monitoring whether changing them improves yield.

corr_vals <- chem_predictors %>% 
  select('Yield', 'ManufacturingProcess32','ManufacturingProcess36',
         'ManufacturingProcess13','ManufacturingProcess17',
         'ManufacturingProcess09','ManufacturingProcess12',
         'ManufacturingProcess11','ManufacturingProcess33',
         'BiologicalMaterial02', 'BiologicalMaterial08',
         'BiologicalMaterial06', 'BiologicalMaterial12')
corr_plot_vals <- cor(corr_vals)
corrplot.mixed(corr_plot_vals, tl.col = 'black', tl.pos = 'lt', 
         upper = "number", lower="circle")
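
For a more direct look at these relationships, the response can also be plotted against the strongest process predictors; a minimal sketch using caret's featurePlot (the four predictors chosen here are the ones highlighted above):

#scatterplots of yield against the four process predictors with the strongest correlations
featurePlot(x = chem_predictors[, c("ManufacturingProcess32", "ManufacturingProcess09",
                                    "ManufacturingProcess36", "ManufacturingProcess13")],
            y = chem_predictors$Yield,
            plot = "scatter")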