In Kuhn and Johnson, do problems 6.2 and 6.3. There are only two, but they consist of many parts. Please submit a link to your Rpubs document and submit the .rmd file as well.
library(tidyverse)
library(caret)
library(AppliedPredictiveModeling)
library(corrplot)
data("permeability")
The matrix fingerprints contains the 1,107 binary molecular predictors for the 165 compounds, while permeability contains the permeability response.
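A quick look at the response before modeling (an illustrative sketch, not part of the required output):
# the response is a 165 x 1 matrix; summary() and hist() treat it as a numeric vector
summary(permeability)
hist(permeability, breaks = 30, main = "Permeability response", xlab = "permeability")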
dim(fingerprints)
## [1] 165 1107
fingerprints <- fingerprints[, -nearZeroVar(fingerprints)]
dim(fingerprints)
## [1] 165 388
The fingerprints matrix has 1,107 predictors; after filtering with the nearZeroVar function, 388 predictors remain.
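As a sanity check, saveMetrics = TRUE exposes the diagnostics nearZeroVar uses; a short sketch run on the already-filtered matrix:
# after filtering, no near-zero-variance columns should remain
nzv <- nearZeroVar(fingerprints, saveMetrics = TRUE)
sum(nzv$nzv) # expect 0; nzv also holds freqRatio and percentUnique per column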
set.seed(624)
# index for training
index <- createDataPartition(permeability, p = .8, list = FALSE)
# train
train_perm <- permeability[index, ]
train_fp <- fingerprints[index, ]
# test
test_perm <- permeability[-index, ]
test_fp <- fingerprints[-index, ]
# 10-fold cross-validation to make reasonable estimates
ctrl <- trainControl(method = "cv", number = 10)
plsTune <- train(train_fp, train_perm, method = "pls", metric = "Rsquared",
tuneLength = 20, trControl = ctrl, preProc = c("center", "scale"))
plot(plsTune)
plsTune
## Partial Least Squares
##
## 133 samples
## 388 predictors
##
## Pre-processing: centered (388), scaled (388)
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 118, 119, 120, 120, 121, 120, ...
## Resampling results across tuning parameters:
##
## ncomp RMSE Rsquared MAE
## 1 13.25656 0.2953172 10.202424
## 2 11.90191 0.4662745 8.710959
## 3 12.05579 0.4624570 9.271134
## 4 12.03297 0.4793759 9.314195
## 5 12.06645 0.4870447 8.986610
## 6 11.82964 0.5012972 8.788270
## 7 11.91363 0.5011672 9.153386
## 8 11.79990 0.4960881 9.119966
## 9 11.81946 0.4959475 9.301787
## 10 11.85288 0.4924486 9.176136
## 11 11.79654 0.5025443 9.128199
## 12 11.62869 0.5131115 8.965070
## 13 11.78348 0.5080595 8.920097
## 14 12.01377 0.4935108 9.101865
## 15 12.09297 0.4862359 9.131109
## 16 12.10087 0.4953868 9.053161
## 17 12.38093 0.4847366 9.218277
## 18 12.59348 0.4768971 9.402569
## 19 12.61895 0.4807222 9.338592
## 20 12.77045 0.4682401 9.549745
##
## Rsquared was used to select the optimal model using the largest value.
## The final value used for the model was ncomp = 12.
We can see in the table above that the optimal tuning has 12 components. The plot shows the cross-validated R2 peaking there, at roughly 0.513.
fp_predict <- predict(plsTune, test_fp)
postResample(fp_predict, test_perm)
## RMSE Rsquared MAE
## 11.4895371 0.4741832 9.3113125
The test set estimate of R2 is 0.4741832.
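A quick visual check of the PLS test predictions (a base-graphics sketch using the objects above):
# observed vs. predicted permeability; points near the dashed identity line fit well
plot(test_perm, fp_predict, xlab = "Observed permeability", ylab = "Predicted permeability")
abline(0, 1, lty = 2)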
set.seed(247)
# grid of penalties
enetGrid <- expand.grid(.lambda = c(0, 0.01, .1), .fraction = seq(.05, 1, length = 20))
# tuning penalized regression model
enetTune <- train(train_fp, train_perm, method = "enet",
tuneGrid = enetGrid, trControl = ctrl, preProc = c("center", "scale"))
## Warning: model fit failed for Fold02: lambda=0.00, fraction=1 Error in if (zmin < gamhat) { : missing value where TRUE/FALSE needed
## Warning in nominalTrainWorkflow(x = x, y = y, wts = weights, info = trainInfo,
## : There were missing values in resampled performance measures.
These warnings mean the unpenalized (lambda = 0) fit failed in one fold; caret drops that resample and summarizes over the rest.
plot(enetTune)
enet_predict <- predict(enetTune, test_fp)
postResample(enet_predict, test_perm)
## RMSE Rsquared MAE
## 11.2682652 0.4652716 9.2858416
The test-set R2 is 0.4652716, and the RMSE (11.27 versus 11.49 for PLS) is lower using the penalized elastic net regression model.
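The tuning values caret settled on can be read directly from the fitted object:
# best lambda and fraction chosen by cross-validation
enetTune$bestTune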
set.seed(624)
larsTune <- train(train_fp, train_perm, method = "lars", metric = "Rsquared",
tuneLength = 20, trControl = ctrl, preProc = c("center", "scale"))
plot(larsTune)
lars_predict <- predict(larsTune, test_fp)
postResample(lars_predict, test_perm)
## RMSE Rsquared MAE
## 11.6906258 0.4342625 9.6425013
Least angle regression performs slightly worse than PLS: its test-set R2 is lower and its RMSE is higher.
I would recommend the elastic net regression model: it produced the lowest test-set RMSE and MAE of the three models, with an R2 only slightly below that of PLS.
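To make the comparison explicit, the three hold-out results can be stacked into one table (a sketch using the predictions computed above):
# side-by-side test-set metrics for the three models
rbind(PLS = postResample(fp_predict, test_perm),
      ENet = postResample(enet_predict, test_perm),
      LARS = postResample(lars_predict, test_perm))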
data(ChemicalManufacturingProcess)
The matrix processPredictors contains the 57 predictors (12 describing the input biological material and 45 describing the process predictors) for the 176 manufacturing runs. yield contains the percent yield for each run.
sum(is.na(ChemicalManufacturingProcess))
## [1] 106
miss <- preProcess(ChemicalManufacturingProcess, method = "bagImpute")
Chemical <- predict(miss, ChemicalManufacturingProcess)
sum(is.na(Chemical))
## [1] 0
There were 106 missing values in ChemicalManufacturingProcess. Bagged trees were used to impute them: for each predictor with missing entries, a bagged-tree model is built from all the other variables and used to fill in the gaps.
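It can also help to see where the missingness is concentrated before imputing (a quick check, not required by the problem):
# count missing values per column and keep only the columns that have any
na_counts <- colSums(is.na(ChemicalManufacturingProcess))
na_counts[na_counts > 0]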
# filtering low frequencies
Chemical <- Chemical[, -nearZeroVar(Chemical)]
set.seed(279)
# index for training
index <- createDataPartition(Chemical$Yield, p = .8, list = FALSE)
# train
train_chem <- Chemical[index, ]
# test
test_chem <- Chemical[-index, ]
set.seed(624)
# note: this fits on the full Chemical data, so test_chem is not strictly held out;
# using train_chem here would keep the test set unseen
plsTune <- train(Yield ~ ., data = Chemical, method = "pls",
                 tuneLength = 20, trControl = ctrl, preProc = c("center", "scale"))
plot(plsTune)
plsTune
## Partial Least Squares
##
## 176 samples
## 56 predictor
##
## Pre-processing: centered (56), scaled (56)
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 160, 157, 158, 159, 158, 159, ...
## Resampling results across tuning parameters:
##
## ncomp RMSE Rsquared MAE
## 1 1.436891 0.4568805 1.147590
## 2 1.872742 0.4711564 1.185897
## 3 1.292614 0.5633698 1.020010
## 4 1.480526 0.5381868 1.085319
## 5 1.707358 0.5156812 1.131007
## 6 1.821904 0.4903840 1.156300
## 7 2.006142 0.4802835 1.211850
## 8 2.092370 0.4622998 1.253598
## 9 2.220647 0.4485854 1.290999
## 10 2.322021 0.4410360 1.315081
## 11 2.446697 0.4264842 1.352393
## 12 2.475260 0.4206118 1.367989
## 13 2.464162 0.4197124 1.377783
## 14 2.418661 0.4227280 1.364288
## 15 2.375812 0.4242080 1.350175
## 16 2.368363 0.4267259 1.337946
## 17 2.386174 0.4254577 1.339398
## 18 2.376321 0.4271412 1.334877
## 19 2.400843 0.4255697 1.348122
## 20 2.422785 0.4231162 1.359970
##
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was ncomp = 3.
The optimal tuning has 3 components, with a cross-validated R2 of 0.5633698.
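A hold-out check of this PLS fit would look like the sketch below; note that, as fit above, the model already saw the test rows, so the estimate would be optimistic:
# predict yield for the held-out runs and compute test-set metrics
pls_chem_pred <- predict(plsTune, test_chem)
postResample(pls_chem_pred, test_chem$Yield)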
set.seed(624)
enetTune <- train(Yield ~ ., Chemical , method = "enet",
tuneGrid = enetGrid, trControl = ctrl, preProc = c("center", "scale"))
plot(enetTune)
enetTune
## Elasticnet
##
## 176 samples
## 56 predictor
##
## Pre-processing: centered (56), scaled (56)
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 160, 157, 158, 159, 158, 159, ...
## Resampling results across tuning parameters:
##
## lambda fraction RMSE Rsquared MAE
## 0.00 0.05 1.263954 0.6225860 1.0294423
## 0.00 0.10 1.171467 0.6195893 0.9388695
## 0.00 0.15 1.285779 0.6043223 0.9537550
## 0.00 0.20 1.455526 0.5547255 1.0089814
## 0.00 0.25 1.743737 0.4948981 1.1067214
## 0.00 0.30 1.939587 0.4680604 1.1795594
## 0.00 0.35 1.908244 0.4619370 1.1924860
## 0.00 0.40 1.850254 0.4606806 1.1877837
## 0.00 0.45 1.794856 0.4594057 1.1771407
## 0.00 0.50 2.101187 0.4481459 1.2573220
## 0.00 0.55 2.407435 0.4725106 1.3304398
## 0.00 0.60 2.744013 0.5217990 1.4001566
## 0.00 0.65 3.225543 0.4863491 1.5339631
## 0.00 0.70 3.771484 0.4747361 1.6693508
## 0.00 0.75 4.094773 0.4681231 1.7557563
## 0.00 0.80 4.135408 0.4589587 1.7764653
## 0.00 0.85 4.225706 0.4419317 1.8076852
## 0.00 0.90 4.318549 0.4257147 1.8358006
## 0.00 0.95 4.415063 0.4120784 1.8636370
## 0.00 1.00 4.501021 0.4019904 1.8870890
## 0.01 0.05 1.536442 0.5973476 1.2453271
## 0.01 0.10 1.309336 0.6258493 1.0618555
## 0.01 0.15 1.201950 0.6195671 0.9778981
## 0.01 0.20 1.176854 0.6175939 0.9523896
## 0.01 0.25 1.161852 0.6236987 0.9319051
## 0.01 0.30 1.175811 0.6198597 0.9301295
## 0.01 0.35 1.246551 0.6032199 0.9540424
## 0.01 0.40 1.296291 0.5952774 0.9647967
## 0.01 0.45 1.362599 0.5626257 0.9909640
## 0.01 0.50 1.460207 0.5489330 1.0197566
## 0.01 0.55 1.600805 0.5172511 1.0731396
## 0.01 0.60 1.793769 0.4868596 1.1322410
## 0.01 0.65 1.885961 0.4712944 1.1672050
## 0.01 0.70 1.953361 0.4609687 1.1927301
## 0.01 0.75 2.016199 0.4534714 1.2156241
## 0.01 0.80 2.076369 0.4480579 1.2356802
## 0.01 0.85 2.143715 0.4433120 1.2568727
## 0.01 0.90 2.179751 0.4401770 1.2700016
## 0.01 0.95 2.129338 0.4398117 1.2610389
## 0.01 1.00 2.043851 0.4421279 1.2426724
## 0.10 0.05 1.649315 0.5363459 1.3354459
## 0.10 0.10 1.487591 0.6095911 1.2076167
## 0.10 0.15 1.354079 0.6236486 1.0996757
## 0.10 0.20 1.258077 0.6219261 1.0220867
## 0.10 0.25 1.200809 0.6201101 0.9778990
## 0.10 0.30 1.181463 0.6179041 0.9615112
## 0.10 0.35 1.171335 0.6193380 0.9468400
## 0.10 0.40 1.165095 0.6229576 0.9424421
## 0.10 0.45 1.167185 0.6235463 0.9390511
## 0.10 0.50 1.212130 0.6118562 0.9527442
## 0.10 0.55 1.310136 0.6005375 0.9787766
## 0.10 0.60 1.383384 0.5963431 0.9968869
## 0.10 0.65 1.442303 0.5805640 1.0192090
## 0.10 0.70 1.511715 0.5573960 1.0504530
## 0.10 0.75 1.598893 0.5315716 1.0836263
## 0.10 0.80 1.707558 0.5079105 1.1181461
## 0.10 0.85 1.772034 0.4960139 1.1387951
## 0.10 0.90 1.812766 0.4895472 1.1528253
## 0.10 0.95 1.846664 0.4841682 1.1650245
## 0.10 1.00 1.873437 0.4803403 1.1741747
##
## RMSE was used to select the optimal model using the smallest value.
## The final values used for the model were fraction = 0.25 and lambda = 0.01.
The optimal model has a fraction of 0.25 and λ of 0.01, with a cross-validated R2 of 0.6236987.
lm_model <- lm(Yield ~ ., Chemical)
summary(lm_model)
##
## Call:
## lm(formula = Yield ~ ., data = Chemical)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.17844 -0.53656 -0.02842 0.50526 2.00415
##
## Coefficients: (1 not defined because of singularities)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.234e+00 8.608e+01 0.049 0.96085
## BiologicalMaterial01 2.483e-01 3.342e-01 0.743 0.45900
## BiologicalMaterial02 -1.120e-01 1.281e-01 -0.874 0.38375
## BiologicalMaterial03 1.636e-01 2.354e-01 0.695 0.48843
## BiologicalMaterial04 -1.044e-01 5.235e-01 -0.199 0.84233
## BiologicalMaterial05 1.513e-01 1.061e-01 1.426 0.15641
## BiologicalMaterial06 3.336e-03 3.014e-01 0.011 0.99119
## BiologicalMaterial08 3.808e-01 6.358e-01 0.599 0.55034
## BiologicalMaterial09 -8.180e-01 1.370e+00 -0.597 0.55162
## BiologicalMaterial10 7.954e-02 1.367e+00 0.058 0.95370
## BiologicalMaterial11 -8.954e-02 8.230e-02 -1.088 0.27874
## BiologicalMaterial12 3.493e-01 6.346e-01 0.551 0.58300
## ManufacturingProcess01 6.695e-02 9.596e-02 0.698 0.48672
## ManufacturingProcess02 1.343e-02 4.311e-02 0.311 0.75601
## ManufacturingProcess03 -3.377e+00 5.103e+00 -0.662 0.50934
## ManufacturingProcess04 6.282e-02 2.940e-02 2.137 0.03464 *
## ManufacturingProcess05 7.326e-04 3.859e-03 0.190 0.84974
## ManufacturingProcess06 3.261e-02 4.341e-02 0.751 0.45401
## ManufacturingProcess07 -1.810e-01 2.126e-01 -0.851 0.39623
## ManufacturingProcess08 -6.282e-02 2.522e-01 -0.249 0.80374
## ManufacturingProcess09 2.614e-01 1.812e-01 1.443 0.15176
## ManufacturingProcess10 -1.166e-01 5.742e-01 -0.203 0.83950
## ManufacturingProcess11 1.942e-01 7.132e-01 0.272 0.78590
## ManufacturingProcess12 3.761e-05 1.013e-04 0.371 0.71120
## ManufacturingProcess13 -2.670e-01 3.843e-01 -0.695 0.48859
## ManufacturingProcess14 3.058e-04 1.115e-02 0.027 0.97816
## ManufacturingProcess15 1.972e-03 8.903e-03 0.222 0.82506
## ManufacturingProcess16 -4.937e-05 3.190e-04 -0.155 0.87728
## ManufacturingProcess17 -1.402e-01 3.011e-01 -0.466 0.64240
## ManufacturingProcess18 4.245e-03 4.450e-03 0.954 0.34211
## ManufacturingProcess19 -2.233e-03 7.301e-03 -0.306 0.76021
## ManufacturingProcess20 -4.517e-03 4.721e-03 -0.957 0.34062
## ManufacturingProcess21 NA NA NA NA
## ManufacturingProcess22 -1.666e-02 4.209e-02 -0.396 0.69299
## ManufacturingProcess23 -4.181e-02 8.289e-02 -0.504 0.61495
## ManufacturingProcess24 -1.931e-02 2.340e-02 -0.825 0.41100
## ManufacturingProcess25 -6.493e-03 1.365e-02 -0.476 0.63506
## ManufacturingProcess26 6.101e-03 1.041e-02 0.586 0.55909
## ManufacturingProcess27 -7.061e-03 7.781e-03 -0.907 0.36601
## ManufacturingProcess28 -7.882e-02 3.094e-02 -2.547 0.01212 *
## ManufacturingProcess29 1.393e+00 8.961e-01 1.555 0.12261
## ManufacturingProcess30 -3.693e-01 6.233e-01 -0.592 0.55463
## ManufacturingProcess31 4.783e-02 1.203e-01 0.398 0.69168
## ManufacturingProcess32 3.333e-01 6.833e-02 4.877 3.34e-06 ***
## ManufacturingProcess33 -4.068e-01 1.286e-01 -3.164 0.00197 **
## ManufacturingProcess34 -1.496e+00 2.753e+00 -0.543 0.58792
## ManufacturingProcess35 -1.879e-02 1.765e-02 -1.064 0.28926
## ManufacturingProcess36 2.833e+02 3.132e+02 0.904 0.36765
## ManufacturingProcess37 -6.935e-01 2.889e-01 -2.401 0.01789 *
## ManufacturingProcess38 -1.900e-01 2.417e-01 -0.786 0.43333
## ManufacturingProcess39 7.077e-02 1.307e-01 0.542 0.58907
## ManufacturingProcess40 4.605e-01 6.545e+00 0.070 0.94403
## ManufacturingProcess41 2.549e-01 4.736e+00 0.054 0.95716
## ManufacturingProcess42 4.372e-02 2.102e-01 0.208 0.83557
## ManufacturingProcess43 2.268e-01 1.182e-01 1.919 0.05741 .
## ManufacturingProcess44 -4.385e-01 1.186e+00 -0.370 0.71222
## ManufacturingProcess45 9.547e-01 5.444e-01 1.754 0.08204 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.039 on 120 degrees of freedom
## Multiple R-squared: 0.7826, Adjusted R-squared: 0.683
## F-statistic: 7.854 on 55 and 120 DF, p-value: < 2.2e-16
The ordinary linear regression model has a multiple R2 of 0.7826 and an adjusted R2 of 0.683. Note that the coefficient for ManufacturingProcess21 could not be estimated because of singularities: it is linearly dependent on other predictors.
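The NA row for ManufacturingProcess21 reflects that dependency; alias() can identify it exactly (a diagnostic sketch):
# show which linear combination of other predictors makes ManufacturingProcess21 redundant
alias(lm_model)$Complete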
set.seed(624)
## Define the candidate set of values
ridgeGrid <- data.frame(.lambda = seq(0, .1, length = 15))
ridgeTune <- train(Yield ~ ., Chemical , method = "ridge",
tuneGrid = ridgeGrid, trControl = ctrl, preProc = c("center", "scale"))
plot(ridgeTune)
ridgeTune
## Ridge Regression
##
## 176 samples
## 56 predictor
##
## Pre-processing: centered (56), scaled (56)
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 160, 157, 158, 159, 158, 159, ...
## Resampling results across tuning parameters:
##
## lambda RMSE Rsquared MAE
## 0.000000000 4.501021 0.4019904 1.887089
## 0.007142857 1.951471 0.4493837 1.224896
## 0.014285714 2.093121 0.4429601 1.247997
## 0.021428571 2.098017 0.4475006 1.242244
## 0.028571429 2.077066 0.4519785 1.232565
## 0.035714286 2.050861 0.4560167 1.222873
## 0.042857143 2.024673 0.4596597 1.213933
## 0.050000000 1.999988 0.4629748 1.206382
## 0.057142857 1.977156 0.4660169 1.200121
## 0.064285714 1.956157 0.4688278 1.194388
## 0.071428571 1.936855 0.4714397 1.189526
## 0.078571429 1.919085 0.4738781 1.185097
## 0.085714286 1.902686 0.4761635 1.180984
## 0.092857143 1.887513 0.4783129 1.177421
## 0.100000000 1.873437 0.4803403 1.174175
##
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was lambda = 0.1.
The optimal model has λ of 0.1 and a cross-validated R2 of 0.4803403.
set.seed(624)
larsTune <- train(Yield ~ ., Chemical , method = "lars", metric = "Rsquared",
tuneLength = 20, trControl = ctrl, preProc = c("center", "scale"))
plot(larsTune)
larsTune
## Least Angle Regression
##
## 176 samples
## 56 predictor
##
## Pre-processing: centered (56), scaled (56)
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 160, 157, 158, 159, 158, 159, ...
## Resampling results across tuning parameters:
##
## fraction RMSE Rsquared MAE
## 0.05 1.268059 0.6252093 1.037325
## 0.10 1.157751 0.6229682 0.936117
## 0.15 1.163964 0.6233226 0.922333
## 0.20 1.425304 0.5567018 1.005712
## 0.25 1.717206 0.4959408 1.101890
## 0.30 1.936760 0.4676172 1.176116
## 0.35 1.922115 0.4611422 1.188942
## 0.40 1.875680 0.4585226 1.187174
## 0.45 1.838067 0.4577825 1.183074
## 0.50 1.731858 0.4637976 1.161242
## 0.55 1.508554 0.4924491 1.104212
## 0.60 1.286634 0.5676612 1.027343
## 0.65 1.637094 0.4985405 1.138608
## 0.70 2.069278 0.4782574 1.247884
## 0.75 2.478866 0.4698927 1.355965
## 0.80 2.854003 0.4595411 1.460073
## 0.85 3.231161 0.4425963 1.561937
## 0.90 3.619450 0.4262587 1.662857
## 0.95 4.056912 0.4124876 1.774808
## 1.00 4.501021 0.4019904 1.887089
##
## Rsquared was used to select the optimal model using the largest value.
## The final value used for the model was fraction = 0.05.
The optimal model has a fraction of 0.05 and a cross-validated R2 of 0.6252093.
lars_predict <- predict(larsTune, test_chem[ ,-1])
postResample(lars_predict, test_chem[ ,1])
## RMSE Rsquared MAE
## 1.192691 0.662927 1.012612
The ordinary linear model had the highest R2, but an un-resampled fit with 56 predictors is prone to overfitting. Among the tuned models, LARS had the highest cross-validated R2, so it was chosen. Its test-set R2 is 0.662927, which is higher than the cross-validation estimate (and likely optimistic, since the test rows were included when the model was fit).
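Because all four tuned models were fit after set.seed(624) with the same ctrl, their folds match and caret's resamples() can compare them directly (a sketch):
# collect the cross-validation results and compare R-squared across models
res <- resamples(list(PLS = plsTune, ENet = enetTune, Ridge = ridgeTune, LARS = larsTune))
summary(res, metric = "Rsquared")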
varImp(larsTune)
## loess r-squared variable importance
##
## only 20 most important variables shown (out of 56)
##
## Overall
## ManufacturingProcess32 100.00
## ManufacturingProcess13 90.02
## BiologicalMaterial06 84.56
## ManufacturingProcess36 76.03
## ManufacturingProcess17 74.88
## BiologicalMaterial03 73.53
## ManufacturingProcess09 70.37
## BiologicalMaterial12 67.98
## BiologicalMaterial02 65.33
## ManufacturingProcess31 60.38
## ManufacturingProcess06 58.03
## ManufacturingProcess33 49.39
## BiologicalMaterial11 48.11
## BiologicalMaterial04 47.13
## ManufacturingProcess11 42.47
## BiologicalMaterial08 41.88
## BiologicalMaterial01 39.14
## ManufacturingProcess12 33.02
## ManufacturingProcess30 32.91
## BiologicalMaterial09 32.41
The five most important variables in the model are ManufacturingProcess32, ManufacturingProcess13, BiologicalMaterial06, ManufacturingProcess36, and ManufacturingProcess17. Process predictors dominate the list: among the top 20, the ratio of process to biological predictors is 11:9.
top10 <- varImp(larsTune)$importance %>%
arrange(-Overall) %>%
head(10)
Chemical %>%
select(c("Yield", row.names(top10))) %>%
cor() %>%
corrplot()
According to the correlation plot, ManufacturingProcess32 has the strongest positive correlation with Yield, while three of the top ten variables are negatively correlated with it. This information can be helpful in future runs of the manufacturing process, since these are the predictors that most affect the yield. To maximize or improve yield, the manufacturer may want to focus on these process settings and on the biological measurements of the raw materials.
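The same relationships can be read off numerically from the top10 object built above (a short sketch):
# signed correlations between Yield and the ten most important predictors
sort(cor(Chemical[, row.names(top10)], Chemical$Yield)[, 1], decreasing = TRUE)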