Exercise 7.2

Friedman (1991)¹ introduced several benchmark data sets created by simulation. One of these simulations used the following nonlinear equation to create data:

\[ y = 10\sin(\pi x_1 x_2) + 20(x_3 - 0.5)^2 + 10x_4 + 5x_5 + N(0, \sigma^2) \]

where the \(x\) values are random variables uniformly distributed on [0, 1] (five additional non-informative variables are also created in the simulation). The package mlbench contains a function called mlbench.friedman1 that simulates these data:
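A minimal sketch of the simulation setup, following the code given in the exercise:

```r
library(mlbench)
library(caret)

set.seed(200)
trainingData <- mlbench.friedman1(200, sd = 1)
# Convert the 'x' matrix to a data frame for use with caret
trainingData$x <- data.frame(trainingData$x)

# A large test set to estimate the true error rate
testData <- mlbench.friedman1(5000, sd = 1)
testData$x <- data.frame(testData$x)
```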

Tune several models on these data. For example:
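The KNN results below come from a call along these lines (a sketch; knnModel is a placeholder name, and caret's default bootstrap resampling matches the output):

```r
knnModel <- train(x = trainingData$x, y = trainingData$y,
                  method = "knn",
                  preProc = c("center", "scale"),
                  tuneLength = 10)
knnModel

# Test-set performance of the chosen k (the unlabeled RMSE/Rsquared/MAE
# triple at the end of the output)
postResample(predict(knnModel, newdata = testData$x), testData$y)
```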

## k-Nearest Neighbors 
## 
## 200 samples
##  10 predictor
## 
## Pre-processing: centered (10), scaled (10) 
## Resampling: Bootstrapped (25 reps) 
## Summary of sample sizes: 200, 200, 200, 200, 200, 200, ... 
## Resampling results across tuning parameters:
## 
##   k   RMSE      Rsquared   MAE     
##    5  3.565620  0.4887976  2.886629
##    7  3.422420  0.5300524  2.752964
##    9  3.368072  0.5536927  2.715310
##   11  3.323010  0.5779056  2.669375
##   13  3.275835  0.6030846  2.628663
##   15  3.261864  0.6163510  2.621192
##   17  3.261973  0.6267032  2.616956
##   19  3.286299  0.6281075  2.640585
##   21  3.280950  0.6390386  2.643807
##   23  3.292397  0.6440392  2.656080
## 
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was k = 15.
##      RMSE  Rsquared       MAE 
## 3.1750657 0.6785946 2.5443169

Since KNN was already done for us, let's try neural network, MARS, and SVM models:

Neural Network
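A sketch of a tuning call consistent with the output below; the grid is read off the results table, while the seed, maxit, and linout/trace settings are assumptions:

```r
nnetGrid <- expand.grid(size = 1:10, decay = c(0, 0.01, 0.1), bag = FALSE)

set.seed(200)  # assumed seed
nnetTune <- train(x = trainingData$x, y = trainingData$y,
                  method = "avNNet",
                  tuneGrid = nnetGrid,
                  trControl = trainControl(method = "cv", number = 10),
                  preProc = c("center", "scale"),
                  linout = TRUE,   # linear output units for regression
                  trace = FALSE,
                  maxit = 500)     # assumed iteration cap
```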

## Model Averaged Neural Network 
## 
## 200 samples
##  10 predictor
## 
## Pre-processing: centered (10), scaled (10) 
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 180, 180, 180, 180, 180, 180, ... 
## Resampling results across tuning parameters:
## 
##   decay  size  RMSE      Rsquared   MAE     
##   0.00    1    2.465834  0.7652678  1.942441
##   0.00    2    2.445164  0.7675359  1.944883
##   0.00    3    2.147319  0.8161300  1.677458
##   0.00    4    1.979448  0.8349126  1.569148
##   0.00    5    2.268260  0.7990066  1.759661
##   0.00    6    3.000213  0.7168376  2.154902
##   0.00    7    3.926401  0.5881065  2.722061
##   0.00    8    3.839723  0.6511982  2.576947
##   0.00    9    4.477948  0.5726123  2.651716
##   0.00   10    3.651913  0.6335229  2.647879
##   0.01    1    2.428559  0.7705163  1.898501
##   0.01    2    2.422050  0.7700046  1.889313
##   0.01    3    2.097381  0.8196632  1.664673
##   0.01    4    1.992370  0.8332997  1.543294
##   0.01    5    2.086844  0.8219336  1.661976
##   0.01    6    2.053629  0.8321765  1.627448
##   0.01    7    2.382882  0.7779714  1.870126
##   0.01    8    2.491082  0.7549978  1.983412
##   0.01    9    2.537928  0.7501353  1.970496
##   0.01   10    2.424314  0.7674509  1.994493
##   0.10    1    2.434707  0.7693413  1.900737
##   0.10    2    2.410750  0.7714357  1.866653
##   0.10    3    2.073922  0.8235621  1.644433
##   0.10    4    2.095264  0.8233267  1.636508
##   0.10    5    2.025602  0.8297170  1.600506
##   0.10    6    2.136210  0.8063575  1.700064
##   0.10    7    2.209184  0.8079517  1.724491
##   0.10    8    2.230213  0.8012644  1.761082
##   0.10    9    2.239971  0.7930149  1.773179
##   0.10   10    2.343935  0.7751607  1.822638
## 
## Tuning parameter 'bag' was held constant at a value of FALSE
## RMSE was used to select the optimal model using the smallest value.
## The final values used for the model were size = 4, decay = 0 and bag = FALSE.
##      RMSE  Rsquared       MAE 
## 1.9076746 0.8554136 1.4611545

Multivariate Adaptive Regression Splines
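A sketch for MARS; the degree and nprune ranges are taken from the results table, and the seed is an assumption:

```r
marsGrid <- expand.grid(degree = 1:2, nprune = 2:38)

set.seed(200)  # assumed seed
marsTune <- train(x = trainingData$x, y = trainingData$y,
                  method = "earth",
                  tuneGrid = marsGrid,
                  trControl = trainControl(method = "cv", number = 10))
```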

## Multivariate Adaptive Regression Spline 
## 
## 200 samples
##  10 predictor
## 
## No pre-processing
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 180, 180, 180, 180, 180, 180, ... 
## Resampling results across tuning parameters:
## 
##   degree  nprune  RMSE      Rsquared   MAE      
##   1        2      4.315331  0.2630204  3.5755464
##   1        3      3.666000  0.4738162  2.9661726
##   1        4      2.630968  0.7220292  2.0994662
##   1        5      2.366544  0.7741619  1.9089091
##   1        6      2.374161  0.7773955  1.9092685
##   1        7      1.801501  0.8616195  1.4555533
##   1        8      1.717858  0.8802526  1.3803634
##   1        9      1.665762  0.8873283  1.3181056
##   1       10      1.604615  0.8908877  1.2677719
##   1       11      1.539444  0.8976754  1.2163330
##   1       12      1.550580  0.8981426  1.2189330
##   1       13      1.565713  0.8974297  1.2220535
##   1       14      1.579878  0.8953298  1.2406902
##   1       15      1.579878  0.8953298  1.2406902
##   1       16      1.579878  0.8953298  1.2406902
##   1       17      1.579878  0.8953298  1.2406902
##   1       18      1.579878  0.8953298  1.2406902
##   1       19      1.579878  0.8953298  1.2406902
##   1       20      1.579878  0.8953298  1.2406902
##   1       21      1.579878  0.8953298  1.2406902
##   1       22      1.579878  0.8953298  1.2406902
##   1       23      1.579878  0.8953298  1.2406902
##   1       24      1.579878  0.8953298  1.2406902
##   1       25      1.579878  0.8953298  1.2406902
##   1       26      1.579878  0.8953298  1.2406902
##   1       27      1.579878  0.8953298  1.2406902
##   1       28      1.579878  0.8953298  1.2406902
##   1       29      1.579878  0.8953298  1.2406902
##   1       30      1.579878  0.8953298  1.2406902
##   1       31      1.579878  0.8953298  1.2406902
##   1       32      1.579878  0.8953298  1.2406902
##   1       33      1.579878  0.8953298  1.2406902
##   1       34      1.579878  0.8953298  1.2406902
##   1       35      1.579878  0.8953298  1.2406902
##   1       36      1.579878  0.8953298  1.2406902
##   1       37      1.579878  0.8953298  1.2406902
##   1       38      1.579878  0.8953298  1.2406902
##   2        2      4.315331  0.2630204  3.5755464
##   2        3      3.666000  0.4738162  2.9661726
##   2        4      2.630968  0.7220292  2.0994662
##   2        5      2.294427  0.7832589  1.8357912
##   2        6      2.301732  0.7850174  1.8032504
##   2        7      1.827058  0.8576576  1.4429599
##   2        8      1.729448  0.8746330  1.3474910
##   2        9      1.528057  0.9038235  1.2109976
##   2       10      1.437891  0.9164554  1.1309884
##   2       11      1.363933  0.9255647  1.0637296
##   2       12      1.277488  0.9325898  0.9901803
##   2       13      1.226813  0.9391704  0.9556211
##   2       14      1.203084  0.9400635  0.9254790
##   2       15      1.248063  0.9361253  0.9701682
##   2       16      1.239398  0.9371530  0.9666075
##   2       17      1.248585  0.9361578  0.9766556
##   2       18      1.242270  0.9369071  0.9710791
##   2       19      1.242270  0.9369071  0.9710791
##   2       20      1.242270  0.9369071  0.9710791
##   2       21      1.242270  0.9369071  0.9710791
##   2       22      1.242270  0.9369071  0.9710791
##   2       23      1.242270  0.9369071  0.9710791
##   2       24      1.242270  0.9369071  0.9710791
##   2       25      1.242270  0.9369071  0.9710791
##   2       26      1.242270  0.9369071  0.9710791
##   2       27      1.242270  0.9369071  0.9710791
##   2       28      1.242270  0.9369071  0.9710791
##   2       29      1.242270  0.9369071  0.9710791
##   2       30      1.242270  0.9369071  0.9710791
##   2       31      1.242270  0.9369071  0.9710791
##   2       32      1.242270  0.9369071  0.9710791
##   2       33      1.242270  0.9369071  0.9710791
##   2       34      1.242270  0.9369071  0.9710791
##   2       35      1.242270  0.9369071  0.9710791
##   2       36      1.242270  0.9369071  0.9710791
##   2       37      1.242270  0.9369071  0.9710791
##   2       38      1.242270  0.9369071  0.9710791
## 
## RMSE was used to select the optimal model using the smallest value.
## The final values used for the model were nprune = 14 and degree = 2.
##      RMSE  Rsquared       MAE 
## 1.1722635 0.9448890 0.9324923

Support Vector Machine
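A sketch for the radial-basis SVM; tuneLength = 20 reproduces the C grid from 0.25 to 131072, and sigma is estimated once by kernlab's sigest (seed assumed):

```r
set.seed(200)  # assumed seed
svmRTune <- train(x = trainingData$x, y = trainingData$y,
                  method = "svmRadial",
                  preProc = c("center", "scale"),
                  tuneLength = 20)
```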

## Support Vector Machines with Radial Basis Function Kernel 
## 
## 200 samples
##  10 predictor
## 
## Pre-processing: centered (10), scaled (10) 
## Resampling: Bootstrapped (25 reps) 
## Summary of sample sizes: 200, 200, 200, 200, 200, 200, ... 
## Resampling results across tuning parameters:
## 
##   C          RMSE      Rsquared   MAE     
##        0.25  2.580462  0.7702802  2.052974
##        0.50  2.363056  0.7843928  1.865800
##        1.00  2.237299  0.7997309  1.744713
##        2.00  2.148749  0.8115339  1.665674
##        4.00  2.088755  0.8208669  1.617509
##        8.00  2.080988  0.8220429  1.615589
##       16.00  2.079821  0.8220994  1.614947
##       32.00  2.079821  0.8220994  1.614947
##       64.00  2.079821  0.8220994  1.614947
##      128.00  2.079821  0.8220994  1.614947
##      256.00  2.079821  0.8220994  1.614947
##      512.00  2.079821  0.8220994  1.614947
##     1024.00  2.079821  0.8220994  1.614947
##     2048.00  2.079821  0.8220994  1.614947
##     4096.00  2.079821  0.8220994  1.614947
##     8192.00  2.079821  0.8220994  1.614947
##    16384.00  2.079821  0.8220994  1.614947
##    32768.00  2.079821  0.8220994  1.614947
##    65536.00  2.079821  0.8220994  1.614947
##   131072.00  2.079821  0.8220994  1.614947
## 
## Tuning parameter 'sigma' was held constant at a value of 0.06042887
## RMSE was used to select the optimal model using the smallest value.
## The final values used for the model were sigma = 0.06042887 and C = 16.
##      RMSE  Rsquared       MAE 
## 2.0666793 0.8267973 1.5699537

Model Comparison

Which models appear to give the best performance?
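The comparison table below can be assembled from each model's test-set postResample results; a sketch using the object names from the earlier sketches:

```r
results <- rbind(
  marsPR   = postResample(predict(marsTune, testData$x), testData$y),
  avNNetPR = postResample(predict(nnetTune, testData$x), testData$y),
  svmRPR   = postResample(predict(svmRTune, testData$x), testData$y),
  knnPR    = postResample(predict(knnModel, testData$x), testData$y))
results[order(results[, "RMSE"]), ]  # best model first
```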

          RMSE      Rsquared   MAE
marsPR    1.172263  0.9448890  0.9324923
avNNetPR  1.907675  0.8554136  1.4611545
svmRPR    2.066679  0.8267973  1.5699537
knnPR     3.175066  0.6785946  2.5443169

The MARS model has the best performance as measured by all three metrics.

Does MARS select the informative predictors (those named X1–X5)?
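The ranking below is caret's variable importance for the tuned MARS model (object name as in the sketch above):

```r
varImp(marsTune)
```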

## earth variable importance
## 
##     Overall
## X1   100.00
## X4    85.13
## X2    69.22
## X5    49.28
## X3    39.95
## X10    0.00
## X7     0.00
## X8     0.00
## X6     0.00
## X9     0.00

Yes, the MARS model selected the five informative predictors (X1–X5) and assigned zero importance to the noise variables.

Exercise 7.5

Exercise 6.3 describes data for a chemical manufacturing process. Use the same data imputation, data splitting, and pre-processing steps as before and train several nonlinear regression models.
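A sketch of those steps, assuming knnImpute for imputation (its built-in centering and scaling would also explain why Yield appears standardized in the linear models later); the split ratio and seed are assumptions:

```r
library(AppliedPredictiveModeling)
library(caret)

data(ChemicalManufacturingProcess)

# knnImpute fills the missing values; it also centers and scales
# every column as a side effect
pp      <- preProcess(ChemicalManufacturingProcess, method = "knnImpute")
imputed <- predict(pp, ChemicalManufacturingProcess)

# Drop near-zero-variance predictors (one is removed, leaving 56)
chem <- imputed[, -nearZeroVar(imputed)]

# Stratified 80/20 split; ratio and seed are assumptions
set.seed(100)
inTrain <- createDataPartition(chem$Yield, p = 0.8, list = FALSE)
train <- chem[inTrain, ]
test  <- chem[-inTrain, ]
```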

variable                n_miss   pct_miss
ManufacturingProcess03      15  8.5227273
ManufacturingProcess11      10  5.6818182
ManufacturingProcess10       9  5.1136364
ManufacturingProcess25       5  2.8409091
ManufacturingProcess26       5  2.8409091
ManufacturingProcess27       5  2.8409091
ManufacturingProcess28       5  2.8409091
ManufacturingProcess29       5  2.8409091
ManufacturingProcess30       5  2.8409091
ManufacturingProcess31       5  2.8409091
ManufacturingProcess33       5  2.8409091
ManufacturingProcess34       5  2.8409091
ManufacturingProcess35       5  2.8409091
ManufacturingProcess36       5  2.8409091
ManufacturingProcess02       3  1.7045455
ManufacturingProcess06       2  1.1363636
ManufacturingProcess01       1  0.5681818
ManufacturingProcess04       1  0.5681818
ManufacturingProcess05       1  0.5681818
ManufacturingProcess07       1  0.5681818
ManufacturingProcess08       1  0.5681818
ManufacturingProcess12       1  0.5681818
ManufacturingProcess14       1  0.5681818
ManufacturingProcess22       1  0.5681818
ManufacturingProcess23       1  0.5681818
ManufacturingProcess24       1  0.5681818
ManufacturingProcess40       1  0.5681818
ManufacturingProcess41       1  0.5681818

After removing variables with near-zero variance, 56 of the original 57 predictors remain for modeling.

K-Nearest Neighbors
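Each model in this exercise is tuned on the training set and then scored on both the training and the test set with postResample(); a sketch of that pattern for KNN (the other models follow the same shape):

```r
set.seed(100)  # assumed seed
knnTune <- train(Yield ~ ., data = train,
                 method = "knn",
                 preProc = c("center", "scale"),
                 tuneLength = 10)

knnTrainPR <- postResample(predict(knnTune, train), train$Yield)
knnTestPR  <- postResample(predict(knnTune, test),  test$Yield)
rbind(knnTrainPR, knnTestPR)
```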

## k-Nearest Neighbors 
## 
## 144 samples
##  56 predictor
## 
## Pre-processing: centered (56), scaled (56) 
## Resampling: Bootstrapped (25 reps) 
## Summary of sample sizes: 144, 144, 144, 144, 144, 144, ... 
## Resampling results across tuning parameters:
## 
##   k   RMSE       Rsquared   MAE      
##    5  0.7848549  0.3985066  0.6145319
##    7  0.7921264  0.3889122  0.6245157
##    9  0.7940208  0.3940038  0.6340025
##   11  0.8046103  0.3848223  0.6484965
##   13  0.8046193  0.3882349  0.6475569
##   15  0.8080558  0.3867707  0.6486827
##   17  0.8152381  0.3818176  0.6546360
##   19  0.8177902  0.3858107  0.6563324
##   21  0.8196397  0.3916575  0.6553754
##   23  0.8253488  0.3897930  0.6585352
## 
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was k = 5.
            RMSE       Rsquared   MAE
knnTrainPR  0.5469640  0.7451672  0.4270063
knnTestPR   0.6830351  0.5072823  0.5613149

Neural Network

## Model Averaged Neural Network 
## 
## 144 samples
##  56 predictor
## 
## Pre-processing: centered (56), scaled (56) 
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 128, 130, 130, 128, 129, 130, ... 
## Resampling results across tuning parameters:
## 
##   decay  size  RMSE       Rsquared   MAE      
##   0.00    1    0.8396470  0.4005108  0.6615245
##   0.00    2    0.8796699  0.4908523  0.6968518
##   0.00    3    0.7433092  0.5676666  0.5985656
##   0.00    4    0.8228909  0.4630506  0.6234666
##   0.00    5    0.7983387  0.5051035  0.6391375
##   0.00    6    0.6770709  0.6231819  0.5359852
##   0.00    7    0.7001850  0.5622057  0.5601646
##   0.00    8    0.7657453  0.5158849  0.5953332
##   0.00    9    0.6648047  0.6104093  0.5403146
##   0.00   10    0.7303300  0.5333304  0.5613485
##   0.01    1    0.7817747  0.5071088  0.6219232
##   0.01    2    0.7938073  0.5113558  0.6357742
##   0.01    3    0.7071143  0.6049079  0.5598437
##   0.01    4    0.6410067  0.6234780  0.5159379
##   0.01    5    0.6829545  0.5948473  0.5363130
##   0.01    6    0.6226229  0.6530780  0.5017593
##   0.01    7    0.6589511  0.6238842  0.5234140
##   0.01    8    0.6405720  0.6354143  0.5116185
##   0.01    9    0.6348979  0.6545056  0.5033677
##   0.01   10    0.6329309  0.6446533  0.4986836
##   0.10    1    0.7340652  0.5321038  0.5979146
##   0.10    2    0.7029103  0.5902275  0.5642606
##   0.10    3    0.6624017  0.6105338  0.5157650
##   0.10    4    0.6398857  0.6392670  0.5024366
##   0.10    5    0.6331925  0.6399911  0.5030135
##   0.10    6    0.5859516  0.6838226  0.4648973
##   0.10    7    0.6405224  0.6420978  0.5021698
##   0.10    8    0.6099027  0.6628481  0.4862918
##   0.10    9    0.6298129  0.6532951  0.4984566
##   0.10   10    0.6349071  0.6497655  0.4986965
## 
## Tuning parameter 'bag' was held constant at a value of FALSE
## RMSE was used to select the optimal model using the smallest value.
## The final values used for the model were size = 6, decay = 0.1 and bag = FALSE.
               RMSE       Rsquared   MAE
avNNetTrainPR  0.0369488  0.9989146  0.0274775
avNNetTestPR   0.5507293  0.7386185  0.4693944

Multivariate Adaptive Regression Splines

## Multivariate Adaptive Regression Spline 
## 
## 144 samples
##  56 predictor
## 
## No pre-processing
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 130, 128, 131, 130, 129, 129, ... 
## Resampling results across tuning parameters:
## 
##   degree  nprune  RMSE       Rsquared   MAE      
##   1        2      0.7870167  0.3922139  0.6150402
##   1        3      0.6809260  0.5505581  0.5529802
##   1        4      0.6361125  0.6106564  0.5228998
##   1        5      0.6417366  0.5916332  0.5119515
##   1        6      0.6092059  0.6346037  0.4746314
##   1        7      0.5983950  0.6478127  0.4733128
##   1        8      0.6083035  0.6391785  0.4856206
##   1        9      0.6126940  0.6409056  0.4847138
##   1       10      0.6407015  0.6213885  0.5010489
##   1       11      0.6440577  0.6207329  0.5095536
##   1       12      0.6506050  0.6157313  0.5182507
##   1       13      0.6631668  0.6066322  0.5332929
##   1       14      0.6530777  0.6183577  0.5288227
##   1       15      0.6529136  0.6191417  0.5270874
##   1       16      0.6529136  0.6191417  0.5270874
##   1       17      0.6529136  0.6191417  0.5270874
##   1       18      0.6529136  0.6191417  0.5270874
##   1       19      0.6529136  0.6191417  0.5270874
##   1       20      0.6529136  0.6191417  0.5270874
##   1       21      0.6529136  0.6191417  0.5270874
##   1       22      0.6529136  0.6191417  0.5270874
##   1       23      0.6529136  0.6191417  0.5270874
##   1       24      0.6529136  0.6191417  0.5270874
##   1       25      0.6529136  0.6191417  0.5270874
##   1       26      0.6529136  0.6191417  0.5270874
##   1       27      0.6529136  0.6191417  0.5270874
##   1       28      0.6529136  0.6191417  0.5270874
##   1       29      0.6529136  0.6191417  0.5270874
##   1       30      0.6529136  0.6191417  0.5270874
##   1       31      0.6529136  0.6191417  0.5270874
##   1       32      0.6529136  0.6191417  0.5270874
##   1       33      0.6529136  0.6191417  0.5270874
##   1       34      0.6529136  0.6191417  0.5270874
##   1       35      0.6529136  0.6191417  0.5270874
##   1       36      0.6529136  0.6191417  0.5270874
##   1       37      0.6529136  0.6191417  0.5270874
##   1       38      0.6529136  0.6191417  0.5270874
##   2        2      0.7870167  0.3922139  0.6150402
##   2        3      0.6970169  0.5214454  0.5726509
##   2        4      0.6906755  0.5466012  0.5616738
##   2        5      0.6812860  0.5675180  0.5594349
##   2        6      0.6745882  0.5806514  0.5531024
##   2        7      0.6871805  0.5798363  0.5539456
##   2        8      0.7054934  0.5794917  0.5795675
##   2        9      0.7053006  0.5908765  0.5716725
##   2       10      0.7085830  0.5912413  0.5705302
##   2       11      0.7317398  0.5807714  0.5950023
##   2       12      0.7349106  0.5857587  0.6007345
##   2       13      0.7461829  0.5917587  0.6009880
##   2       14      0.7586610  0.5898372  0.6150160
##   2       15      0.8371269  0.5604203  0.6392707
##   2       16      0.8442284  0.5581161  0.6458228
##   2       17      0.8368997  0.5588828  0.6460793
##   2       18      0.8411941  0.5570694  0.6499184
##   2       19      0.8850663  0.5472341  0.6654185
##   2       20      0.8867846  0.5414673  0.6682072
##   2       21      0.8707424  0.5463752  0.6621355
##   2       22      0.8784568  0.5460701  0.6650358
##   2       23      0.8902135  0.5334734  0.6698804
##   2       24      0.8902135  0.5334734  0.6698804
##   2       25      0.9102032  0.5277492  0.6877720
##   2       26      0.9102032  0.5277492  0.6877720
##   2       27      0.9102032  0.5277492  0.6877720
##   2       28      0.9102032  0.5277492  0.6877720
##   2       29      0.9102032  0.5277492  0.6877720
##   2       30      0.9102032  0.5277492  0.6877720
##   2       31      0.9102032  0.5277492  0.6877720
##   2       32      0.9102032  0.5277492  0.6877720
##   2       33      0.9102032  0.5277492  0.6877720
##   2       34      0.9102032  0.5277492  0.6877720
##   2       35      0.9102032  0.5277492  0.6877720
##   2       36      0.9102032  0.5277492  0.6877720
##   2       37      0.9102032  0.5277492  0.6877720
##   2       38      0.9102032  0.5277492  0.6877720
## 
## RMSE was used to select the optimal model using the smallest value.
## The final values used for the model were nprune = 7 and degree = 1.
             RMSE       Rsquared   MAE
marsTrainPR  0.5372419  0.7162805  0.4268017
marsTestPR   0.6007029  0.6146672  0.4785827

Support Vector Machine

## Support Vector Machines with Radial Basis Function Kernel 
## 
## 144 samples
##  56 predictor
## 
## Pre-processing: centered (56), scaled (56) 
## Resampling: Bootstrapped (25 reps) 
## Summary of sample sizes: 144, 144, 144, 144, 144, 144, ... 
## Resampling results across tuning parameters:
## 
##   C          RMSE       Rsquared   MAE      
##        0.25  0.8124985  0.4306397  0.6563446
##        0.50  0.7554534  0.4841798  0.6061790
##        1.00  0.7087280  0.5342116  0.5606701
##        2.00  0.6919763  0.5484540  0.5409400
##        4.00  0.6874974  0.5517676  0.5344535
##        8.00  0.6839277  0.5547142  0.5311237
##       16.00  0.6837900  0.5548126  0.5309246
##       32.00  0.6837900  0.5548126  0.5309246
##       64.00  0.6837900  0.5548126  0.5309246
##      128.00  0.6837900  0.5548126  0.5309246
##      256.00  0.6837900  0.5548126  0.5309246
##      512.00  0.6837900  0.5548126  0.5309246
##     1024.00  0.6837900  0.5548126  0.5309246
##     2048.00  0.6837900  0.5548126  0.5309246
##     4096.00  0.6837900  0.5548126  0.5309246
##     8192.00  0.6837900  0.5548126  0.5309246
##    16384.00  0.6837900  0.5548126  0.5309246
##    32768.00  0.6837900  0.5548126  0.5309246
##    65536.00  0.6837900  0.5548126  0.5309246
##   131072.00  0.6837900  0.5548126  0.5309246
## 
## Tuning parameter 'sigma' was held constant at a value of 0.0138164
## RMSE was used to select the optimal model using the smallest value.
## The final values used for the model were sigma = 0.0138164 and C = 16.
             RMSE       Rsquared   MAE
svmRTrainPR  0.0978128  0.9930548  0.0963316
svmRTestPR   0.5445090  0.6689024  0.4521679

Part (a)

Model Comparison

Which nonlinear regression model gives the optimal resampling and test set performance?

               RMSE       Rsquared   MAE
avNNetTrainPR  0.0369488  0.9989146  0.0274775
svmRTrainPR    0.0978128  0.9930548  0.0963316
marsTrainPR    0.5372419  0.7162805  0.4268017
knnTrainPR     0.5469640  0.7451672  0.4270063

              RMSE       Rsquared   MAE
svmRTestPR    0.5445090  0.6689024  0.4521679
avNNetTestPR  0.5507293  0.7386185  0.4693944
marsTestPR    0.6007029  0.6146672  0.4785827
knnTestPR     0.6830351  0.5072823  0.5613149

Although the neural net had significantly better performance on the training set, the support vector machine gave slightly better RMSE on the test set. The \(R^2\) is still higher for the neural net, but since they are different model types you cannot really compare \(R^2\) across models; that comparison is only meaningful between differently tuned versions of the same type of model.

Part (b)

Which predictors are most important in the optimal nonlinear regression model? Do either the biological or process variables dominate the list?

How do the top ten important predictors compare to the top ten predictors from the optimal linear model?
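caret has no model-specific importance measure for SVMs, so varImp() falls back to a loess R-squared filter, which matches the heading of the output below (object name assumed):

```r
varImp(svmRTune)  # the SVM fit on the chemical manufacturing data
```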

## loess r-squared variable importance
## 
##   only 20 most important variables shown (out of 56)
## 
##                        Overall
## ManufacturingProcess13  100.00
## ManufacturingProcess32   85.77
## ManufacturingProcess17   83.74
## BiologicalMaterial06     69.74
## ManufacturingProcess09   66.15
## BiologicalMaterial03     63.98
## ManufacturingProcess36   63.70
## BiologicalMaterial12     63.02
## BiologicalMaterial02     59.13
## ManufacturingProcess06   57.10
## ManufacturingProcess31   55.86
## ManufacturingProcess11   49.90
## ManufacturingProcess30   45.80
## BiologicalMaterial11     44.33
## BiologicalMaterial04     41.02
## ManufacturingProcess33   37.63
## ManufacturingProcess29   37.51
## BiologicalMaterial08     37.09
## BiologicalMaterial01     33.39
## ManufacturingProcess12   33.37

The manufacturing processes still dominate the list of most important predictors in the support vector machine model, although four of the top ten (and two of the top six) are biological materials, so they are not completely absent as they were in the optimal linear model.

Part (c)

Explore the relationships between the top predictors and the response for the predictors that are unique to the optimal nonlinear regression model.

Do these plots reveal intuition about the biological or process predictors and their relationship with yield?
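A sketch of how those relationships can be plotted with caret's featurePlot, using the top predictors from the importance ranking above:

```r
top6 <- c("ManufacturingProcess13", "ManufacturingProcess32",
          "ManufacturingProcess17", "ManufacturingProcess09",
          "BiologicalMaterial06",   "BiologicalMaterial03")
featurePlot(x = train[, top6], y = train$Yield,
            plot = "scatter", type = c("p", "smooth"))
```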

The plots show a roughly linear relationship, especially evident for ManufacturingProcess32 and ManufacturingProcess09. These were also the top two predictors in the linear model from HW7.

## 
## Call:
## lm(formula = Yield ~ ManufacturingProcess13 + ManufacturingProcess32 + 
##     ManufacturingProcess17 + ManufacturingProcess09 + BiologicalMaterial06 + 
##     BiologicalMaterial03, data = train)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.5162 -0.5023  0.0548  0.4293  1.6572 
## 
## Coefficients:
##                         Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            -0.006241   0.053581  -0.116   0.9074    
## ManufacturingProcess13 -0.123407   0.119763  -1.030   0.3046    
## ManufacturingProcess32  0.517059   0.066841   7.736 2.01e-12 ***
## ManufacturingProcess17 -0.200728   0.101527  -1.977   0.0500 .  
## ManufacturingProcess09  0.233529   0.097282   2.401   0.0177 *  
## BiologicalMaterial06    0.113148   0.122875   0.921   0.3588    
## BiologicalMaterial03   -0.037138   0.109328  -0.340   0.7346    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.6417 on 137 degrees of freedom
## Multiple R-squared:  0.6149, Adjusted R-squared:  0.598 
## F-statistic: 36.45 on 6 and 137 DF,  p-value: < 2.2e-16

Once again, a simple linear model using the top six predictors from the support vector machine model gives performance that is almost as good as the more complicated SVM. Using the predictors marked as statistically significant in this model and in the model from HW7 gives the model below:

## 
## Call:
## lm(formula = Yield ~ ManufacturingProcess32 + ManufacturingProcess17 + 
##     ManufacturingProcess09 + ManufacturingProcess04 + ManufacturingProcess13 + 
##     ManufacturingProcess37, data = train)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.53635 -0.50042 -0.00428  0.43254  1.56325 
## 
## Coefficients:
##                         Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            -0.008187   0.052628  -0.156  0.87661    
## ManufacturingProcess32  0.590668   0.058066  10.172  < 2e-16 ***
## ManufacturingProcess17 -0.127397   0.096709  -1.317  0.18993    
## ManufacturingProcess09  0.306156   0.093650   3.269  0.00136 ** 
## ManufacturingProcess04  0.105317   0.060301   1.747  0.08296 .  
## ManufacturingProcess13 -0.156000   0.121000  -1.289  0.19948    
## ManufacturingProcess37 -0.105486   0.056874  -1.855  0.06578 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.6296 on 137 degrees of freedom
## Multiple R-squared:  0.6293, Adjusted R-squared:  0.6131 
## F-statistic: 38.77 on 6 and 137 DF,  p-value: < 2.2e-16
            RMSE       Rsquared   MAE
svmRTestPR  0.5445090  0.6689024  0.4521679
lmTestPR    0.5594587  0.6509815  0.4203717
lmTrainPR   0.6140760  0.6293247  0.5026644

This last linear model, built from the top predictors found in HW7 and in this assignment, has performance almost equal to our best nonlinear model. In fact, if we use MAE as the measure of performance, the linear model actually outperforms the SVM on the test data.

Footnotes


  1. Friedman J (1991). “Multivariate Adaptive Regression Splines.” The Annals of Statistics, 19(1), 1–141.