Do problems 7.2 and 7.5 in Kuhn and Johnson.

library(caret)
library(tidyverse)
library(AppliedPredictiveModeling)
library(corrplot)
library(e1071)
library(mlbench)

Question 7.2

Friedman (1991) introduced several benchmark data sets created by simulation. One of these simulations used the following nonlinear equation to create data:

y = 10 sin(pi * x1 * x2) + 20 (x3 - 0.5)^2 + 10 x4 + 5 x5 + N(0, sigma^2)

where the x values are random variables uniformly distributed on [0, 1] (five additional non-informative predictors are also created in the simulation). The package mlbench contains a function called mlbench.friedman1 that simulates these data:

set.seed(200)
trainingData <- mlbench.friedman1(200, sd = 1)
trainingData$x <- data.frame(trainingData$x)
featurePlot(trainingData$x, trainingData$y)

testData <- mlbench.friedman1(5000, sd = 1)
testData$x <- data.frame(testData$x)
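
As a quick sanity check (a sketch, not required by the exercise), the noiseless Friedman surface can be reconstructed from X1-X5 and compared with the simulated response; this assumes the default column names X1-X10 that data.frame() assigns above, and the leftover variation should look like the added N(0, 1) noise.

# Rebuild the noiseless Friedman (1991) mean function from the informative predictors
friedman_mean <- with(trainingData$x,
                      10 * sin(pi * X1 * X2) + 20 * (X3 - 0.5)^2 + 10 * X4 + 5 * X5)
cor(friedman_mean, trainingData$y)   # should be close to 1
sd(trainingData$y - friedman_mean)   # should be close to the simulation's sd = 1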

Tune several models on these data. For example:

knnModel <- train(x = trainingData$x,
                  y = trainingData$y,
                  method = "knn",
                  preProc = c("center", "scale"),
                  tuneLength = 10)

knnModel
## k-Nearest Neighbors 
## 
## 200 samples
##  10 predictor
## 
## Pre-processing: centered (10), scaled (10) 
## Resampling: Bootstrapped (25 reps) 
## Summary of sample sizes: 200, 200, 200, 200, 200, 200, ... 
## Resampling results across tuning parameters:
## 
##   k   RMSE      Rsquared   MAE     
##    5  3.466085  0.5121775  2.816838
##    7  3.349428  0.5452823  2.727410
##    9  3.264276  0.5785990  2.660026
##   11  3.214216  0.6024244  2.603767
##   13  3.196510  0.6176570  2.591935
##   15  3.184173  0.6305506  2.577482
##   17  3.183130  0.6425367  2.567787
##   19  3.198752  0.6483184  2.592683
##   21  3.188993  0.6611428  2.588787
##   23  3.200458  0.6638353  2.604529
## 
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was k = 17.
knnPred <- predict(knnModel, newdata = testData$x)
knnR <- postResample(pred = knnPred, obs = testData$y)

Which models appear to give the best performance? Does MARS select the informative predictors (those named X1–X5)?

# 10-fold cross-validation is used to tune every model below
cv <- trainControl(method = "cv", number = 10)

# Candidate weight-decay values and hidden-unit counts for the averaged neural network
grid <- expand.grid(.decay = c(0, 0.01, 0.1),
                    .size = 1:10,
                    .bag = FALSE)

# Candidate interaction degrees and numbers of retained terms for MARS
marsGrid <- expand.grid(.degree = 1:2,
                        .nprune = 2:38)

Neural Network model

set.seed(100)
nnetTune <- train(x = trainingData$x,
                  y = trainingData$y,
                  method = "avNNet",
                  tuneGrid = grid,
                  trControl = cv,
                  preProc = c("center", "scale"),
                  trace=FALSE,
                  linout=TRUE,
                  maxit=500)
nnetTune
## Model Averaged Neural Network 
## 
## 200 samples
##  10 predictor
## 
## Pre-processing: centered (10), scaled (10) 
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 180, 180, 180, 180, 180, 180, ... 
## Resampling results across tuning parameters:
## 
##   decay  size  RMSE      Rsquared   MAE     
##   0.00    1    2.392711  0.7610354  1.897330
##   0.00    2    2.410532  0.7567109  1.907478
##   0.00    3    2.043693  0.8224284  1.630766
##   0.00    4    2.285447  0.8138382  1.747363
##   0.00    5    2.445627  0.7708133  1.824123
##   0.00    6    2.903022  0.7376507  2.060185
##   0.00    7    3.477815  0.6517491  2.525041
##   0.00    8    6.534209  0.4377882  3.579067
##   0.00    9    4.494356  0.5629194  2.887114
##   0.00   10    3.509194  0.6097876  2.476476
##   0.01    1    2.385381  0.7602926  1.887906
##   0.01    2    2.425125  0.7510903  1.935991
##   0.01    3    2.151209  0.8016018  1.701951
##   0.01    4    2.091925  0.8154383  1.676653
##   0.01    5    2.169745  0.7999252  1.738716
##   0.01    6    2.262033  0.8056618  1.817194
##   0.01    7    2.313236  0.7870298  1.847297
##   0.01    8    2.413890  0.7772585  1.937986
##   0.01    9    2.317190  0.7847502  1.857640
##   0.01   10    2.480713  0.7407632  1.996338
##   0.10    1    2.393965  0.7596431  1.894191
##   0.10    2    2.423612  0.7525959  1.935872
##   0.10    3    2.169915  0.7982379  1.726855
##   0.10    4    2.059080  0.8224160  1.648610
##   0.10    5    1.975656  0.8394000  1.578979
##   0.10    6    2.152197  0.8098019  1.693054
##   0.10    7    2.161512  0.8163011  1.693526
##   0.10    8    2.273716  0.7922525  1.822714
##   0.10    9    2.315336  0.7811271  1.785409
##   0.10   10    2.334803  0.7692182  1.872733
## 
## Tuning parameter 'bag' was held constant at a value of FALSE
## RMSE was used to select the optimal model using the smallest value.
## The final values used for the model were size = 5, decay = 0.1 and bag = FALSE.
nnetTune$bestTune
##    size decay   bag
## 25    5   0.1 FALSE

MARS model

set.seed(100)
marsTune = train(x = trainingData$x, 
                 y = trainingData$y, 
                 method = "earth", 
                 tuneGrid = marsGrid,
                 preProc = c("center", "scale"),
                 trControl = cv)
marsTune
## Multivariate Adaptive Regression Spline 
## 
## 200 samples
##  10 predictor
## 
## Pre-processing: centered (10), scaled (10) 
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 180, 180, 180, 180, 180, 180, ... 
## Resampling results across tuning parameters:
## 
##   degree  nprune  RMSE      Rsquared   MAE      
##   1        2      4.327937  0.2544880  3.6004742
##   1        3      3.572450  0.4912720  2.8958113
##   1        4      2.596841  0.7183600  2.1063410
##   1        5      2.370161  0.7659777  1.9186686
##   1        6      2.276141  0.7881481  1.8100006
##   1        7      1.766728  0.8751831  1.3902146
##   1        8      1.780946  0.8723243  1.4013449
##   1        9      1.665091  0.8819775  1.3255147
##   1       10      1.663804  0.8821283  1.3276573
##   1       11      1.657738  0.8822967  1.3317299
##   1       12      1.653784  0.8827903  1.3315041
##   1       13      1.648496  0.8823663  1.3164065
##   1       14      1.639073  0.8841742  1.3128329
##   1       15      1.639073  0.8841742  1.3128329
##   1       16      1.639073  0.8841742  1.3128329
##   1       17      1.639073  0.8841742  1.3128329
##   1       18      1.639073  0.8841742  1.3128329
##   1       19      1.639073  0.8841742  1.3128329
##   1       20      1.639073  0.8841742  1.3128329
##   1       21      1.639073  0.8841742  1.3128329
##   1       22      1.639073  0.8841742  1.3128329
##   1       23      1.639073  0.8841742  1.3128329
##   1       24      1.639073  0.8841742  1.3128329
##   1       25      1.639073  0.8841742  1.3128329
##   1       26      1.639073  0.8841742  1.3128329
##   1       27      1.639073  0.8841742  1.3128329
##   1       28      1.639073  0.8841742  1.3128329
##   1       29      1.639073  0.8841742  1.3128329
##   1       30      1.639073  0.8841742  1.3128329
##   1       31      1.639073  0.8841742  1.3128329
##   1       32      1.639073  0.8841742  1.3128329
##   1       33      1.639073  0.8841742  1.3128329
##   1       34      1.639073  0.8841742  1.3128329
##   1       35      1.639073  0.8841742  1.3128329
##   1       36      1.639073  0.8841742  1.3128329
##   1       37      1.639073  0.8841742  1.3128329
##   1       38      1.639073  0.8841742  1.3128329
##   2        2      4.327937  0.2544880  3.6004742
##   2        3      3.572450  0.4912720  2.8958113
##   2        4      2.661826  0.7070510  2.1734709
##   2        5      2.404015  0.7578971  1.9753867
##   2        6      2.243927  0.7914805  1.7830717
##   2        7      1.856336  0.8605482  1.4356822
##   2        8      1.754607  0.8763186  1.3968406
##   2        9      1.653859  0.8870129  1.2813884
##   2       10      1.434159  0.9166537  1.1339203
##   2       11      1.320482  0.9289120  1.0347278
##   2       12      1.317547  0.9306879  1.0359899
##   2       13      1.296910  0.9306902  1.0146112
##   2       14      1.221407  0.9395223  0.9631486
##   2       15      1.230516  0.9390469  0.9761484
##   2       16      1.236911  0.9387407  0.9745362
##   2       17      1.236911  0.9387407  0.9745362
##   2       18      1.236911  0.9387407  0.9745362
##   2       19      1.236911  0.9387407  0.9745362
##   2       20      1.236911  0.9387407  0.9745362
##   2       21      1.236911  0.9387407  0.9745362
##   2       22      1.236911  0.9387407  0.9745362
##   2       23      1.236911  0.9387407  0.9745362
##   2       24      1.236911  0.9387407  0.9745362
##   2       25      1.236911  0.9387407  0.9745362
##   2       26      1.236911  0.9387407  0.9745362
##   2       27      1.236911  0.9387407  0.9745362
##   2       28      1.236911  0.9387407  0.9745362
##   2       29      1.236911  0.9387407  0.9745362
##   2       30      1.236911  0.9387407  0.9745362
##   2       31      1.236911  0.9387407  0.9745362
##   2       32      1.236911  0.9387407  0.9745362
##   2       33      1.236911  0.9387407  0.9745362
##   2       34      1.236911  0.9387407  0.9745362
##   2       35      1.236911  0.9387407  0.9745362
##   2       36      1.236911  0.9387407  0.9745362
##   2       37      1.236911  0.9387407  0.9745362
##   2       38      1.236911  0.9387407  0.9745362
## 
## RMSE was used to select the optimal model using the smallest value.
## The final values used for the model were nprune = 14 and degree = 2.
marsTune$bestTune
##    nprune degree
## 50     14      2

SVM model

set.seed(100)
svmRTune = train(x = trainingData$x, 
                 y = trainingData$y, 
                 method =  "svmRadial", 
                 tuneLength = 14,
                 preProc = c("center", "scale"),
                 trControl = cv)
svmRTune
## Support Vector Machines with Radial Basis Function Kernel 
## 
## 200 samples
##  10 predictor
## 
## Pre-processing: centered (10), scaled (10) 
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 180, 180, 180, 180, 180, 180, ... 
## Resampling results across tuning parameters:
## 
##   C        RMSE      Rsquared   MAE     
##      0.25  2.530787  0.7922715  2.013175
##      0.50  2.259539  0.8064569  1.789962
##      1.00  2.099789  0.8274242  1.656154
##      2.00  2.002943  0.8412934  1.583791
##      4.00  1.943618  0.8504425  1.546586
##      8.00  1.918711  0.8547582  1.532981
##     16.00  1.920651  0.8536189  1.536116
##     32.00  1.920651  0.8536189  1.536116
##     64.00  1.920651  0.8536189  1.536116
##    128.00  1.920651  0.8536189  1.536116
##    256.00  1.920651  0.8536189  1.536116
##    512.00  1.920651  0.8536189  1.536116
##   1024.00  1.920651  0.8536189  1.536116
##   2048.00  1.920651  0.8536189  1.536116
## 
## Tuning parameter 'sigma' was held constant at a value of 0.06509124
## RMSE was used to select the optimal model using the smallest value.
## The final values used for the model were sigma = 0.06509124 and C = 8.
svmRTune$bestTune
##        sigma C
## 6 0.06509124 8

Prediction

nnetPred <- predict(nnetTune, testData$x)
svmRPred <- predict(svmRTune, testData$x)
marsPred <- predict(marsTune, testData$x)
nnetR <- postResample(nnetPred, testData$y)
marsR <- postResample(marsPred , testData$y)
svmRR<- postResample(svmRPred, testData$y)
(data.frame(rbind(knnR, nnetR, marsR, svmRR)))
##           RMSE  Rsquared      MAE
## knnR  3.204059 0.6819919 2.568346
## nnetR 2.111396 0.8277556 1.573901
## marsR 1.277999 0.9338365 1.014707
## svmRR 2.063191 0.8275736 1.566221

The MARS model gives the best test-set performance here, with the lowest RMSE (1.278) and the highest R-squared (0.934).
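
As a cross-check on the test-set ranking, the resampling distributions of the three cross-validated models can be compared directly; a minimal sketch, assuming the tuned objects above are still in memory (knnModel is omitted because it was resampled with the bootstrap rather than 10-fold CV):

res <- resamples(list(NNet = nnetTune, MARS = marsTune, SVM = svmRTune))
summary(res)
dotplot(res, metric = "RMSE")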

varImp(marsTune)
## earth variable importance
## 
##    Overall
## X1  100.00
## X4   75.40
## X2   49.00
## X5   15.72
## X3    0.00

The MARS model does focus on the informative predictors: X1 is ranked most important, followed by X4, X2, and X5, while X3 receives zero importance, and none of the non-informative predictors (X6-X10) appear in the importance table at all.
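
This can be confirmed directly by listing the basis functions kept in the final fit; a minimal sketch using the earth package's summary method on the tuned model:

# The hinge terms printed here should involve only X1-X5, confirming that the
# noise predictors X6-X10 were pruned away during model selection.
summary(marsTune$finalModel)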

Question 7.5

Exercise 6.3 describes data for a chemical manufacturing process. Use the same data imputation, data splitting, and pre-processing steps as before and train several nonlinear regression models.

data(ChemicalManufacturingProcess)
yield_index <- which(names(ChemicalManufacturingProcess) == "Yield")

# Impute the missing predictor values with bagged-tree imputation, as in Exercise 6.3
imputed <- preProcess(ChemicalManufacturingProcess[, -yield_index], method = c("bagImpute"))
preProcess_data <- predict(imputed, ChemicalManufacturingProcess)

# Drop near-zero-variance columns before splitting the data
low_values <- nearZeroVar(preProcess_data)
chem_data <- preProcess_data[, -low_values]
set.seed(100)

train_index <- createDataPartition(chem_data$Yield, p = 0.80, list = FALSE)

train_cmp <- chem_data[train_index, ]
test_cmp <- chem_data[-train_index, ]
set.seed(100)
knnModel_cmp <- train(Yield~.,
                      data = train_cmp,
                      method = "knn",
                      preProc = c("center", "scale"),
                      tuneLength = 10,
                      trControl = cv)
knnModel_cmp
## k-Nearest Neighbors 
## 
## 144 samples
##  56 predictor
## 
## Pre-processing: centered (56), scaled (56) 
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 129, 130, 130, 130, 130, 130, ... 
## Resampling results across tuning parameters:
## 
##   k   RMSE      Rsquared   MAE     
##    5  1.373961  0.4667009  1.095281
##    7  1.412541  0.4459759  1.138165
##    9  1.421226  0.4456633  1.160480
##   11  1.421813  0.4443863  1.167361
##   13  1.429989  0.4386378  1.178164
##   15  1.440356  0.4450619  1.186098
##   17  1.453428  0.4468128  1.193517
##   19  1.463862  0.4443903  1.200036
##   21  1.475792  0.4380007  1.199345
##   23  1.484624  0.4398006  1.201379
## 
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was k = 5.
set.seed(100)
nnetTune_cmp <- train(Yield~.,
                      data = train_cmp,
                      method = "avNNet",
                      tuneGrid = grid,
                      trControl = cv,
                      preProcess = c("center", "scale"),
                      trace=FALSE,
                      linout=TRUE,
                      maxit=500)
nnetTune_cmp
## Model Averaged Neural Network 
## 
## 144 samples
##  56 predictor
## 
## Pre-processing: centered (56), scaled (56) 
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 129, 130, 130, 130, 130, 130, ... 
## Resampling results across tuning parameters:
## 
##   decay  size  RMSE      Rsquared   MAE     
##   0.00    1    1.638939  0.3060453  1.333894
##   0.00    2    1.486210  0.3847363  1.188725
##   0.00    3    1.657188  0.3501601  1.304169
##   0.00    4    2.108686  0.3129627  1.698305
##   0.00    5    1.983628  0.2541799  1.631068
##   0.00    6    2.196480  0.3978687  1.781554
##   0.00    7    3.291651  0.1731803  2.540150
##   0.00    8    3.643258  0.2126245  2.857396
##   0.00    9    4.832106  0.1393810  3.607765
##   0.00   10    6.390868  0.2436376  4.625732
##   0.01    1    1.396127  0.5235518  1.099805
##   0.01    2    1.640534  0.4291540  1.321997
##   0.01    3    1.728319  0.4235641  1.384122
##   0.01    4    1.792854  0.3915515  1.339865
##   0.01    5    1.517019  0.4644799  1.213641
##   0.01    6    1.611472  0.4279721  1.258881
##   0.01    7    1.537370  0.4480238  1.238919
##   0.01    8    1.533148  0.4653270  1.269169
##   0.01    9    1.830142  0.3796636  1.368616
##   0.01   10    2.392958  0.3197990  1.645841
##   0.10    1    1.360899  0.5113350  1.076587
##   0.10    2    1.689871  0.4309316  1.233542
##   0.10    3    1.723641  0.4231475  1.285794
##   0.10    4    1.549739  0.4720650  1.210876
##   0.10    5    1.785768  0.4215289  1.313711
##   0.10    6    1.788261  0.4322991  1.360416
##   0.10    7    1.631988  0.4147471  1.300764
##   0.10    8    1.710090  0.4084997  1.281407
##   0.10    9    1.574347  0.4615441  1.266253
##   0.10   10    1.547599  0.4596403  1.209105
## 
## Tuning parameter 'bag' was held constant at a value of FALSE
## RMSE was used to select the optimal model using the smallest value.
## The final values used for the model were size = 1, decay = 0.1 and bag = FALSE.
set.seed(100)
marsTune_cmp = train(Yield~.,
                     data = train_cmp,
                     method = "earth", 
                     preProcess = c("center", "scale"),
                     tuneGrid = marsGrid,
                     trControl = cv)
marsTune_cmp
## Multivariate Adaptive Regression Spline 
## 
## 144 samples
##  56 predictor
## 
## Pre-processing: centered (56), scaled (56) 
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 129, 130, 130, 130, 130, 130, ... 
## Resampling results across tuning parameters:
## 
##   degree  nprune  RMSE      Rsquared   MAE      
##   1        2      1.410080  0.4643855  1.1074276
##   1        3      1.247764  0.5626662  1.0176984
##   1        4      1.206859  0.5942762  0.9827068
##   1        5      1.229023  0.5658374  0.9979404
##   1        6      1.262274  0.5564844  1.0176942
##   1        7      1.291974  0.5446800  1.0329472
##   1        8      1.294378  0.5370697  1.0377210
##   1        9      1.314516  0.5271532  1.0511347
##   1       10      1.341264  0.5207636  1.0545367
##   1       11      1.351373  0.5213581  1.0445329
##   1       12      1.364405  0.5083554  1.0563251
##   1       13      1.416718  0.4848593  1.0867636
##   1       14      1.422226  0.4880648  1.0943299
##   1       15      1.417461  0.4935215  1.0785626
##   1       16      1.417461  0.4935215  1.0785626
##   1       17      1.417461  0.4935215  1.0785626
##   1       18      1.417461  0.4935215  1.0785626
##   1       19      1.417461  0.4935215  1.0785626
##   1       20      1.417461  0.4935215  1.0785626
##   1       21      1.417461  0.4935215  1.0785626
##   1       22      1.417461  0.4935215  1.0785626
##   1       23      1.417461  0.4935215  1.0785626
##   1       24      1.417461  0.4935215  1.0785626
##   1       25      1.417461  0.4935215  1.0785626
##   1       26      1.417461  0.4935215  1.0785626
##   1       27      1.417461  0.4935215  1.0785626
##   1       28      1.417461  0.4935215  1.0785626
##   1       29      1.417461  0.4935215  1.0785626
##   1       30      1.417461  0.4935215  1.0785626
##   1       31      1.417461  0.4935215  1.0785626
##   1       32      1.417461  0.4935215  1.0785626
##   1       33      1.417461  0.4935215  1.0785626
##   1       34      1.417461  0.4935215  1.0785626
##   1       35      1.417461  0.4935215  1.0785626
##   1       36      1.417461  0.4935215  1.0785626
##   1       37      1.417461  0.4935215  1.0785626
##   1       38      1.417461  0.4935215  1.0785626
##   2        2      1.410080  0.4643855  1.1074276
##   2        3      1.214882  0.5735123  0.9855761
##   2        4      1.156652  0.6218120  0.9384927
##   2        5      1.155263  0.6178114  0.9383847
##   2        6      1.168812  0.6150147  0.9471030
##   2        7      1.227627  0.5845404  0.9981379
##   2        8      1.264432  0.5605302  1.0177758
##   2        9      1.254636  0.5737630  1.0075434
##   2       10      1.254565  0.5727663  0.9955087
##   2       11      1.384108  0.5274736  1.0539201
##   2       12      1.463255  0.4881863  1.0852403
##   2       13      1.484462  0.4936979  1.1055955
##   2       14      1.476672  0.5108879  1.1221508
##   2       15      1.472352  0.5082951  1.1106012
##   2       16      1.529975  0.4941522  1.1309789
##   2       17      1.579660  0.4708478  1.1736475
##   2       18      1.532025  0.4863905  1.1462062
##   2       19      1.591484  0.4627879  1.1730159
##   2       20      1.598942  0.4602243  1.1717916
##   2       21      1.620553  0.4555104  1.1872562
##   2       22      1.641640  0.4527199  1.2033009
##   2       23      1.648353  0.4533590  1.2053419
##   2       24      1.639030  0.4589137  1.1956405
##   2       25      1.640340  0.4568640  1.1963551
##   2       26      1.640340  0.4568640  1.1963551
##   2       27      1.640340  0.4568640  1.1963551
##   2       28      1.640340  0.4568640  1.1963551
##   2       29      1.640340  0.4568640  1.1963551
##   2       30      1.640340  0.4568640  1.1963551
##   2       31      1.640340  0.4568640  1.1963551
##   2       32      1.640340  0.4568640  1.1963551
##   2       33      1.640340  0.4568640  1.1963551
##   2       34      1.640340  0.4568640  1.1963551
##   2       35      1.640340  0.4568640  1.1963551
##   2       36      1.640340  0.4568640  1.1963551
##   2       37      1.640340  0.4568640  1.1963551
##   2       38      1.640340  0.4568640  1.1963551
## 
## RMSE was used to select the optimal model using the smallest value.
## The final values used for the model were nprune = 5 and degree = 2.
set.seed(100)
svmRTune_cmp = train(Yield~.,
                      data = train_cmp,
                     method =  "svmRadial",
                     preProcess = c("center", "scale"),
                     tuneLength = 14,
                     trControl = cv)
svmRTune_cmp
## Support Vector Machines with Radial Basis Function Kernel 
## 
## 144 samples
##  56 predictor
## 
## Pre-processing: centered (56), scaled (56) 
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 129, 130, 130, 130, 130, 130, ... 
## Resampling results across tuning parameters:
## 
##   C        RMSE      Rsquared   MAE      
##      0.25  1.467336  0.4597930  1.1884523
##      0.50  1.345416  0.5135482  1.0955498
##      1.00  1.234258  0.5764261  1.0036685
##      2.00  1.187865  0.5869733  0.9478727
##      4.00  1.160688  0.5967224  0.9161900
##      8.00  1.141551  0.6129996  0.8999537
##     16.00  1.135015  0.6182180  0.8948105
##     32.00  1.135015  0.6182180  0.8948105
##     64.00  1.135015  0.6182180  0.8948105
##    128.00  1.135015  0.6182180  0.8948105
##    256.00  1.135015  0.6182180  0.8948105
##    512.00  1.135015  0.6182180  0.8948105
##   1024.00  1.135015  0.6182180  0.8948105
##   2048.00  1.135015  0.6182180  0.8948105
## 
## Tuning parameter 'sigma' was held constant at a value of 0.01231136
## RMSE was used to select the optimal model using the smallest value.
## The final values used for the model were sigma = 0.01231136 and C = 16.
knnPred_cmp  <- predict(knnModel_cmp, test_cmp)
nnetPred_cmp <- predict(nnetTune_cmp, test_cmp)
svmRPred_cmp <- predict(svmRTune_cmp, test_cmp)
marsPred_cmp <- predict(marsTune_cmp, test_cmp)
knnR_cmp <- postResample(knnPred_cmp, test_cmp$Yield)
nnetR_cmp <- postResample(nnetPred_cmp, test_cmp$Yield)
marsR_cmp <- postResample(marsPred_cmp, test_cmp$Yield)
svmRR_cmp<- postResample(svmRPred_cmp, test_cmp$Yield)

a.) Which nonlinear regression model gives the optimal resampling and test set performance?

data.frame(rbind(
  "knn"  = knnR_cmp,
  "nnet" = nnetR_cmp,
  "mars" = marsR_cmp,
  "svmR" = svmRR_cmp
))
##          RMSE  Rsquared       MAE
## knn  1.225526 0.4636845 0.9768750
## nnet 1.285657 0.4872098 1.0457226
## mars 1.205773 0.5285850 0.8955427
## svmR 1.068149 0.5901801 0.8417024

The radial SVM gives the best test-set performance, with the lowest RMSE (1.068) and the highest R-squared (0.590); it also had the lowest cross-validated RMSE during tuning (1.135 versus 1.155 for MARS), so it is the optimal nonlinear model here, with MARS a close second.
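
The resampling side of the question can be checked directly by pooling the cross-validation results of the four tuned models; a minimal sketch, assuming the fitted objects above (all four were tuned with the same 10-fold CV control):

res_cmp <- resamples(list(KNN = knnModel_cmp, NNet = nnetTune_cmp,
                          MARS = marsTune_cmp, SVM = svmRTune_cmp))
summary(res_cmp)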

b.) Which predictors are most important in the optimal nonlinear regression model? Do either the biological or process variables dominate the list? How do the top ten important predictors compare to the top ten predictors from the optimal linear model?

plot(varImp(marsTune_cmp))



The importance plot above is for the MARS model (caret's varImp for a radial-kernel SVM falls back to a model-free filter score, so the MARS ranking is the more interpretable one). Process variables still dominate the list in the nonlinear setting: whereas the optimal linear model from the previous homework included some biological-variable contributions among its top predictors, here the top of the list is essentially a handful of manufacturing process variables.
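
To line these up against the top ten predictors from the optimal linear model in Exercise 6.3, the importance table can be sorted and truncated; a minimal sketch, assuming the marsTune_cmp object above:

varImp(marsTune_cmp)$importance %>%
  rownames_to_column("Predictor") %>%
  arrange(desc(Overall)) %>%
  head(10)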

c.) Explore the relationships between the top predictors and the response for the predictors that are unique to the optimal nonlinear regression model. Do these plots reveal intuition about the biological or process predictors and their relationship with yield?

Based on the correlation matrix below, Yield exhibits only weak to moderate correlations with several of the important predictors identified in the previous step.

rel_exp <- chem_data %>%
  select(Yield, ManufacturingProcess32, ManufacturingProcess09,
         ManufacturingProcess29, ManufacturingProcess17,
         ManufacturingProcess38)

cor_matrix <- cor(rel_exp)

corrplot(cor_matrix, 
         method="color",
         addCoef.col = "black", 
         type="upper")