We have seen that we can fit an SVM with a non-linear kernel in order to perform classification using a non-linear decision boundary. We will now see that we can also obtain a non-linear decision boundary by performing logistic regression using non-linear transformations of the features.
(a) Generate a data set with n = 500 and p = 2, such that the observations belong to two classes with a quadratic decision boundary between them. For instance, you can do this as follows:
set.seed(1)
x1 <- runif(500) - 0.5
x2 <- runif(500) - 0.5
# class 1 whenever x1^2 - x2^2 > 0, i.e. a quadratic decision boundary
y <- 1 * (x1^2 - x2^2 > 0)
df = data.frame(x1, x2)
(b) Plot the observations, colored according to their class labels. Your plot should display X1 on the x-axis, and X2 on the y-axis.
plot(x1 , x2, col = (y + 2))
(c) Fit a logistic regression model to the data, using X1 and X2 as predictors.
set.seed(1)
logr = glm(y ~ x1 + x2, family = binomial)
summary(logr)
##
## Call:
## glm(formula = y ~ x1 + x2, family = binomial)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.179 -1.139 -1.112 1.206 1.257
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -0.087260 0.089579 -0.974 0.330
## x1 0.196199 0.316864 0.619 0.536
## x2 -0.002854 0.305712 -0.009 0.993
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 692.18 on 499 degrees of freedom
## Residual deviance: 691.79 on 497 degrees of freedom
## AIC: 697.79
##
## Number of Fisher Scoring iterations: 3
(d) Apply this model to the training data in order to obtain a predicted class label for each training observation. Plot the observations, colored according to the predicted class labels. The decision boundary should be linear.
set.seed(1)
logr.prob = predict(logr, df, type="response")
logr.pred = rep(0,500)
logr.pred[logr.prob > .5] = 1
plot(x1, x2, col = logr.pred + 1)
(e) Now fit a logistic regression model to the data using non-linear functions of X1 and X2 as predictors (e.g. X1^2, X1 × X2, log(X2), and so forth).
# log() is undefined for negative x1 and x2, hence the NaN warnings and the dropped observations below
logr2 = glm(y ~ log(x1) + log(x2), data = df, family = 'binomial')
## Warning in log(x1): NaNs produced
## Warning in log(x2): NaNs produced
## Warning: glm.fit: algorithm did not converge
## Warning: glm.fit: fitted probabilities numerically 0 or 1 occurred
logr2
##
## Call: glm(formula = y ~ log(x1) + log(x2), family = "binomial", data = df)
##
## Coefficients:
## (Intercept) log(x1) log(x2)
## -44.49 497.86 -528.98
##
## Degrees of Freedom: 120 Total (i.e. Null); 118 Residual
## (379 observations deleted due to missingness)
## Null Deviance: 167.7
## Residual Deviance: 1.895e-07 AIC: 6
(f) Apply this model to the training data in order to obtain a predicted class label for each training observation. Plot the observations, colored according to the predicted class labels. The decision boundary should be obviously non-linear. If it is not, then repeat (a)-(e) until you come up with an example in which the predicted class labels are obviously non-linear.
set.seed(1)
logr2.prob = predict(logr2, df, type="response")
## Warning in log(x1): NaNs produced
## Warning in log(x2): NaNs produced
logr2.pred = rep(0,500)
logr2.pred[logr2.prob > .5] = 1
plot(x1,x2, col=logr2.pred+2)
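Since the data in (a) were generated with the boundary x1^2 - x2^2 = 0, a logistic regression with squared terms recovers the quadratic boundary almost exactly and keeps all 500 observations. A minimal sketch (logr.q is an illustrative name of ours; a perfect-separation warning from glm() is expected):
# sketch: quadratic terms that match the true boundary
logr.q = glm(y ~ I(x1^2) + I(x2^2), data = df, family = binomial)
logr.q.prob = predict(logr.q, df, type = "response")
plot(x1, x2, col = 1 * (logr.q.prob > .5) + 2)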
(g) Fit a support vector classifier to the data with X1 and X2
as predictors. Obtain a class prediction for each training observation.
Plot the observations, colored according to the predicted class
labels.
library(e1071)
svm.fit = svm(y ~ x1 + x2,data = df,kernel='linear', cost = 0.01)
svm.prob = predict(svm.fit, df)
svm.pred = rep(0,500)
svm.pred[svm.prob > .5] = 1
plot(x1, x2, col = svm.pred + 1)
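Because y was left numeric, svm() above actually fits an eps-regression model whose fitted values are then thresholded at 0.5. A support vector classifier proper would code the response as a factor; a minimal sketch (df.c and svc.fit are illustrative names):
# sketch: factor response => C-classification
df.c = data.frame(x1, x2, y = as.factor(y))
svc.fit = svm(y ~ x1 + x2, data = df.c, kernel = 'linear', cost = 0.01)
plot(x1, x2, col = as.numeric(predict(svc.fit, df.c)))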
(h) Fit an SVM using a non-linear kernel to the data. Obtain a class prediction for each training observation. Plot the observations, colored according to the predicted class labels.
set.seed(1)
svm2.fit = svm(y ~ x1 + x2, df, kernel='radial', gamma=1)
svm2.prob = predict(svm2.fit, df)
svm2.pred = rep(0,500)
svm2.pred[svm2.prob > .5] = 1
plot(x1,x2, col=svm2.pred + 2)
(i) Comment on your results. The SVM with a radial kernel was the most effective: its predicted class labels trace the quadratic boundary clearly, whereas logistic regression on the raw features and the support vector classifier with a linear kernel both produce linear boundaries that fit the data poorly. The logistic regression with log-transformed features does give a non-linear boundary, but log() is undefined for negative x1 and x2, so most observations are dropped from that fit.
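To back up this comparison, the training misclassification rate of each set of predicted labels computed above can be checked directly (a quick sketch; output not shown):
mean(y != logr.pred)   # logistic regression on raw features
mean(y != logr2.pred)  # logistic regression with log terms
mean(y != svm.pred)    # linear support vector classifier (thresholded)
mean(y != svm2.pred)   # radial-kernel SVM (thresholded)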
## Problem 7 ##
In this problem, you will use support vector approaches in order to predict whether a given car gets high or low gas mileage based on the Auto data set.
library(ISLR2)
attach(Auto)
(a) Create a binary variable that takes on a 1 for cars with gas mileage above the median, and a 0 for cars with gas mileage below the median.
Auto$mpg.b = ifelse(Auto$mpg > median(Auto$mpg), 1, 0)
auto2 = subset(Auto, select=-c(mpg))
median(Auto$mpg)
## [1] 22.75
(b) Fit a support vector classifier to the data with various values of cost, in order to predict whether a car gets high or low gas mileage. Report the cross-validation errors associated with different values of this parameter. Comment on your results. Note you will need to fit the classifier without the gas mileage variable to produce sensible results.
set.seed(1)
svm2 = tune(svm, mpg.b~., data = auto2, kernel = "linear", ranges = list(cost = c(0.001, 0.01, .1, 1, 5, 10)))
summary(svm2)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 1
##
## - best performance: 0.09603609
##
## - Detailed performance results:
## cost error dispersion
## 1 1e-03 0.10881486 0.02537281
## 2 1e-02 0.10421950 0.03138085
## 3 1e-01 0.10227373 0.03634911
## 4 1e+00 0.09603609 0.03666741
## 5 5e+00 0.10034346 0.03612147
## 6 1e+01 0.10531309 0.03683207
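Note that mpg.b was left numeric in auto2, so svm() performs eps-regression here and the reported CV error is a squared-error measure (the eps-regression type also appears in the best.model summary under (c)). A minimal sketch of the same tuning with the response recoded as a factor, which should make tune() report the 10-fold misclassification rate instead (auto2.f is an illustrative name; output not shown):
# sketch (assumption): factor response => C-classification and misclassification error
auto2.f = auto2
auto2.f$mpg.b = as.factor(auto2.f$mpg.b)
set.seed(1)
tune(svm, mpg.b ~ ., data = auto2.f, kernel = "linear",
     ranges = list(cost = c(0.001, 0.01, 0.1, 1, 5, 10)))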
(c) Now repeat (b), this time using SVMs with radial and polynomial basis kernels, with different values of gamma and degree and cost. Comment on your results. The radial kernel attains its best cross-validation error of about 0.065 at cost = 5 and gamma = 0.1, while the polynomial kernel's best is about 0.104 at cost = 10 and degree = 1. Compared with the linear classifier's best of about 0.096 at cost = 1, the radial kernel performs best overall.
set.seed(1)
svm.r = tune(svm, mpg.b~., data=auto2, kernel="radial", ranges= list(cost=c(0.001, 0.01, .1, 1, 5,10), gamma = c(0.1, 1, 5, 10)))
summary(svm.r)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost gamma
## 5 0.1
##
## - best performance: 0.0651571
##
## - Detailed performance results:
## cost gamma error dispersion
## 1 1e-03 0.1 0.44966460 0.036851866
## 2 1e-02 0.1 0.15038588 0.023971541
## 3 1e-01 0.1 0.07282379 0.025938178
## 4 1e+00 0.1 0.07186118 0.030174404
## 5 5e+00 0.1 0.06515710 0.029987281
## 6 1e+01 0.1 0.06878597 0.031960255
## 7 1e-03 1.0 0.49585247 0.039308236
## 8 1e-02 1.0 0.47238675 0.039412960
## 9 1e-01 1.0 0.27951125 0.036343997
## 10 1e+00 1.0 0.09918732 0.020523476
## 11 5e+00 1.0 0.10353541 0.020801755
## 12 1e+01 1.0 0.10442706 0.020690277
## 13 1e-03 5.0 0.49797109 0.039387310
## 14 1e-02 5.0 0.49255017 0.039569487
## 15 1e-01 5.0 0.44175647 0.040898107
## 16 1e+00 5.0 0.23810450 0.007973452
## 17 5e+00 5.0 0.23812027 0.007956301
## 18 1e+01 5.0 0.23812027 0.007956301
## 19 1e-03 10.0 0.49808972 0.039349882
## 20 1e-02 10.0 0.49332769 0.039200376
## 21 1e-01 10.0 0.44838377 0.037776360
## 22 1e+00 10.0 0.24380724 0.004605136
## 23 5e+00 10.0 0.24380416 0.004607188
## 24 1e+01 10.0 0.24380416 0.004607188
set.seed(1)
svm.ply = tune(svm, mpg.b~., data=auto2, kernel="polynomial", ranges= list(cost=c(0.001, 0.01, 0.1, 1, 5, 10),degree = c(1:5)))
summary(svm.ply)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost degree
## 10 1
##
## - best performance: 0.1039033
##
## - Detailed performance results:
## cost degree error dispersion
## 1 1e-03 1 0.4935679 0.03915358
## 2 1e-02 1 0.4508053 0.03828434
## 3 1e-01 1 0.1670639 0.03132832
## 4 1e+00 1 0.1039924 0.02744167
## 5 5e+00 1 0.1044447 0.03235532
## 6 1e+01 1 0.1039033 0.03360908
## 7 1e-03 2 0.4984497 0.03930707
## 8 1e-02 2 0.4982382 0.03934745
## 9 1e-01 2 0.4960905 0.03974005
## 10 1e+00 2 0.4752293 0.04525479
## 11 5e+00 2 0.3981761 0.07334314
## 12 1e+01 2 0.3375643 0.08311313
## 13 1e-03 3 0.4984673 0.03930311
## 14 1e-02 3 0.4984136 0.03930767
## 15 1e-01 3 0.4978768 0.03935396
## 16 1e+00 3 0.4924827 0.03986701
## 17 5e+00 3 0.4692575 0.04334894
## 18 1e+01 3 0.4418838 0.04913560
## 19 1e-03 4 0.4984731 0.03930263
## 20 1e-02 4 0.4984719 0.03930290
## 21 1e-01 4 0.4984602 0.03930559
## 22 1e+00 4 0.4983427 0.03933257
## 23 5e+00 4 0.4978212 0.03945572
## 24 1e+01 4 0.4971505 0.03961089
## 25 1e-03 5 0.4984732 0.03930261
## 26 1e-02 5 0.4984731 0.03930264
## 27 1e-01 5 0.4984718 0.03930295
## 28 1e+00 5 0.4984585 0.03930611
## 29 5e+00 5 0.4983996 0.03932013
## 30 1e+01 5 0.4983260 0.03933773
top = svm.ply$best.model
top
##
## Call:
## best.tune(METHOD = svm, train.x = mpg.b ~ ., data = auto2, ranges = list(cost = c(0.001,
## 0.01, 0.1, 1, 5, 10), degree = c(1:5)), kernel = "polynomial")
##
##
## Parameters:
## SVM-Type: eps-regression
## SVM-Kernel: polynomial
## cost: 10
## degree: 1
## gamma: 0.003215434
## coef.0: 0
## epsilon: 0.1
##
##
## Number of Support Vectors: 240
top$cost
## [1] 10
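The parameters and CV errors quoted in (c) can also be pulled directly from the tune objects (a small sketch; output not shown):
svm2$best.parameters;    svm2$best.performance     # linear
svm.r$best.parameters;   svm.r$best.performance    # radial
svm.ply$best.parameters; svm.ply$best.performance  # polynomial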
(d) Make some plots to back up your assertions in (b) and (c). Hint: In the lab, we used the plot() function for svm objects only in cases with p = 2. When p > 2, you can use the plot() function to create plots displaying pairs of variables at a time. Essentially, instead of typing
plot(svmfit, dat)
where svmfit contains your fitted model and dat is a data frame containing your data, you can type
plot(svmfit, dat, x1 ~ x4)
in order to plot just the first and fourth variables. However, you must replace x1 and x4 with the correct variable names. To find out more, type ?plot.svm.
# refit each kernel at the parameters selected by cross-validation
svm.linear2 = svm(mpg.b ~ ., data = auto2, kernel = "linear", cost = 1)
svm.poly2 = svm(mpg.b ~ ., data = auto2, kernel = "polynomial", cost = 10, degree = 1)
svm.r2 = svm(mpg.b ~ ., data = auto2, kernel = "radial", cost = 5, gamma = 0.1)
costlist = as.numeric(svm2$performances[, 'cost'])
errorlist = as.numeric(svm2$performances[, 'error'])
plot(x = costlist, y = errorlist, xlab = "cost", ylab = "error")
title("Linear SVM")
lines(x = costlist, y = errorlist)
costlist.p = as.numeric(svm.ply$performances[, 'cost'])
errorlist.p = as.numeric(svm.ply$performances[, 'error'])
plot(x = costlist.p, y = errorlist.p, xlab = "cost", ylab = "error")
title("Polynomial SVM")
lines(x = costlist.p, y = errorlist.p)
costlist.r = as.numeric(svm.r$performances[, 'cost'])
errorlist.r = as.numeric(svm.r$performances[, 'error'])
plot(x = costlist.r, y = errorlist.r, xlab = "cost", ylab = "error")
title("Radial SVM")
lines(x = costlist.r, y = errorlist.r)
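Because svm.r was tuned over both cost and gamma, the radial plot above mixes the four gamma values along the cost axis. Restricting to the selected value gamma = 0.1 gives a cleaner error-vs-cost curve; a sketch (perf.r01 is an illustrative name):
# keep only the gamma = 0.1 rows of the radial tuning results
perf.r01 = svm.r$performances[svm.r$performances$gamma == 0.1, ]
plot(perf.r01$cost, perf.r01$error, type = "b", xlab = "cost", ylab = "error")
title("Radial SVM, gamma = 0.1")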
## Problem 8 ##
This problem involves the OJ data set, which is part of the ISLR2 package.
attach(OJ)
(a) Create a training set containing a random sample of 800 observations, and a test set containing the remaining observations.
set.seed(1)
sample.oj = sample(nrow(OJ), 800)
train.oj = OJ[sample.oj,]
test.oj = OJ[-sample.oj,]
(b) Fit a support vector classifier to the training data using cost = 0.01, with Purchase as the response and the other variables as predictors. Use the summary() function to produce summary statistics, and describe the results obtained. At cost = 0.01 the classifier uses 435 support vectors, 219 from class CH and 216 from class MM.
set.seed(1)
svm.l2 = svm(Purchase ~ ., kernel = "linear", data = train.oj, cost = 0.01)
summary(svm.l2)
##
## Call:
## svm(formula = Purchase ~ ., data = train.oj, kernel = "linear", cost = 0.01)
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: linear
## cost: 0.01
##
## Number of Support Vectors: 435
##
## ( 219 216 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
(c) What are the training and test error rates? The training error rate is 0.175 and the test error rate is approximately 0.178.
set.seed(1)
train.pred = predict(svm.l2, train.oj)
table(train.oj$Purchase, train.pred)
## train.pred
## CH MM
## CH 420 65
## MM 75 240
train.e = mean(train.oj$Purchase != train.pred)
train.e
## [1] 0.175
test.pred = predict(svm.l2, test.oj)
table(test.oj$Purchase, test.pred)
## test.pred
## CH MM
## CH 153 15
## MM 33 69
test.e = mean(test.oj$Purchase != test.pred)
test.e
## [1] 0.1777778
(d) Use the tune() function to select an optimal cost. Consider values in the range 0.01 to 10.
set.seed(1)
# note: 0.01:10 has length 10, so seq(0.01:10) evaluates to the integers 1 through 10
tune.oj = tune(svm, Purchase ~ ., data = train.oj, kernel = "linear", ranges = list(cost = seq(0.01:10)))
summary(tune.oj)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 3
##
## - best performance: 0.16875
##
## - Detailed performance results:
## cost error dispersion
## 1 1 0.17500 0.02946278
## 2 2 0.17250 0.02874698
## 3 3 0.16875 0.03019037
## 4 4 0.17000 0.02958040
## 5 5 0.17250 0.03162278
## 6 6 0.17500 0.03333333
## 7 7 0.17500 0.03333333
## 8 8 0.17375 0.03197764
## 9 9 0.17375 0.03197764
## 10 10 0.17375 0.03197764
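For a cost grid that actually spans 0.01 to 10 as the prompt asks, the values could be listed explicitly; a sketch (output not shown):
set.seed(1)
tune(svm, Purchase ~ ., data = train.oj, kernel = "linear",
     ranges = list(cost = c(0.01, 0.05, 0.1, 0.5, 1, 3, 5, 10)))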
(e) Compute the training and test error rates using this new value for cost.
set.seed(1)
cost.fit = svm(Purchase ~ ., data = train.oj, kernel = 'linear', cost = tune.oj$best.parameters$cost)
summary(cost.fit)
##
## Call:
## svm(formula = Purchase ~ ., data = train.oj, kernel = "linear", cost = tune.oj$best.parameters$cost)
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: linear
## cost: 3
##
## Number of Support Vectors: 328
##
## ( 164 164 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
train.pred2 = predict(cost.fit, train.oj)
table(train.pred2, train.oj$Purchase)
##
## train.pred2 CH MM
## CH 422 70
## MM 63 245
traincost.e = mean(train.oj$Purchase != train.pred2)
traincost.e
## [1] 0.16625
test.pred2=predict(cost.fit, test.oj)
table(test.pred2, test.oj$Purchase)
##
## test.pred2 CH MM
## CH 156 29
## MM 12 73
testcost.e = mean(test.oj$Purchase != test.pred2)
testcost.e
## [1] 0.1518519
(f) Repeat parts (b) through (e) using a support vector machine with a radial kernel. Use the default value for gamma.
set.seed(1)
# note: this initial low-cost fit reuses the polynomial kernel; the radial kernel asked for in (f) is tuned and fit below
svm.p2 = svm(Purchase ~ ., kernel = "polynomial", degree = 2, data = train.oj, cost = 0.01)
summary(svm.p2)
##
## Call:
## svm(formula = Purchase ~ ., data = train.oj, kernel = "polynomial",
## degree = 2, cost = 0.01)
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: polynomial
## cost: 0.01
## degree: 2
## coef.0: 0
##
## Number of Support Vectors: 636
##
## ( 321 315 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
set.seed(1)
train.pred3 = predict(svm.p2, train.oj)
table(train.oj$Purchase, train.pred3)
## train.pred3
## CH MM
## CH 484 1
## MM 297 18
train.e3 = mean(train.oj$Purchase != train.pred3)
train.e3
## [1] 0.3725
test.pred3 = predict(svm.p2, test.oj)
table(test.oj$Purchase, test.pred3)
## test.pred3
## CH MM
## CH 167 1
## MM 98 4
test.e3 = mean(test.oj$Purchase != test.pred3)
test.e3
## [1] 0.3666667
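The low-cost fit above reuses the polynomial kernel from part (g); the radial-kernel fit with the default gamma that part (f) asks for would look as follows (a sketch, output not reproduced; svm.rad0 is an illustrative name):
set.seed(1)
svm.rad0 = svm(Purchase ~ ., kernel = "radial", data = train.oj, cost = 0.01)
summary(svm.rad0)
mean(train.oj$Purchase != predict(svm.rad0, train.oj))  # training error
mean(test.oj$Purchase != predict(svm.rad0, test.oj))    # test error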
set.seed(1)
tune.ojr = tune(svm, Purchase ~ ., data = train.oj, kernel = "radial", ranges = list(cost = seq(0.01, 10, length.out = 75)))
summary(tune.ojr)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 0.685
##
## - best performance: 0.1675
##
## - Detailed performance results:
## cost error dispersion
## 1 0.010 0.39375 0.04007372
## 2 0.145 0.18750 0.02825971
## 3 0.280 0.18000 0.03073181
## 4 0.415 0.17625 0.02531057
## 5 0.550 0.16875 0.02651650
## 6 0.685 0.16750 0.02648375
## 7 0.820 0.17000 0.02513851
## 8 0.955 0.16875 0.02062395
## 9 1.090 0.17125 0.01958777
## 10 1.225 0.17250 0.02108185
## 11 1.360 0.17375 0.02389938
## 12 1.495 0.17375 0.02239947
## 13 1.630 0.17625 0.02161050
## 14 1.765 0.17625 0.02079162
## 15 1.900 0.17625 0.02079162
## 16 2.035 0.17750 0.02188988
## 17 2.170 0.18000 0.02220485
## 18 2.305 0.17625 0.02079162
## 19 2.440 0.17750 0.02266912
## 20 2.575 0.17625 0.02239947
## 21 2.710 0.17625 0.02239947
## 22 2.845 0.17625 0.02239947
## 23 2.980 0.17625 0.02239947
## 24 3.115 0.17625 0.02239947
## 25 3.250 0.17750 0.02266912
## 26 3.385 0.17875 0.02360703
## 27 3.520 0.17875 0.02360703
## 28 3.655 0.17875 0.02360703
## 29 3.790 0.18000 0.02371708
## 30 3.925 0.18000 0.02371708
## 31 4.060 0.18125 0.02301117
## 32 4.195 0.18125 0.02301117
## 33 4.330 0.18125 0.02301117
## 34 4.465 0.18125 0.02144923
## 35 4.600 0.18125 0.02144923
## 36 4.735 0.18125 0.02144923
## 37 4.870 0.18125 0.02144923
## 38 5.005 0.18000 0.02220485
## 39 5.140 0.18000 0.02220485
## 40 5.275 0.18000 0.02220485
## 41 5.410 0.18000 0.02220485
## 42 5.545 0.18000 0.02220485
## 43 5.680 0.18000 0.02220485
## 44 5.815 0.18000 0.02220485
## 45 5.950 0.18000 0.02220485
## 46 6.085 0.18000 0.02220485
## 47 6.220 0.18000 0.02220485
## 48 6.355 0.18125 0.02301117
## 49 6.490 0.18125 0.02301117
## 50 6.625 0.18125 0.02301117
## 51 6.760 0.18125 0.02301117
## 52 6.895 0.18250 0.02371708
## 53 7.030 0.18375 0.02503470
## 54 7.165 0.18250 0.02443813
## 55 7.300 0.18250 0.02443813
## 56 7.435 0.18375 0.02638523
## 57 7.570 0.18375 0.02638523
## 58 7.705 0.18375 0.02638523
## 59 7.840 0.18375 0.02638523
## 60 7.975 0.18375 0.02638523
## 61 8.110 0.18250 0.02648375
## 62 8.245 0.18125 0.02447363
## 63 8.380 0.18000 0.02443813
## 64 8.515 0.18000 0.02443813
## 65 8.650 0.18000 0.02443813
## 66 8.785 0.18375 0.02703521
## 67 8.920 0.18375 0.02703521
## 68 9.055 0.18375 0.02703521
## 69 9.190 0.18375 0.02703521
## 70 9.325 0.18625 0.02853482
## 71 9.460 0.18625 0.02853482
## 72 9.595 0.18625 0.02853482
## 73 9.730 0.18625 0.02853482
## 74 9.865 0.18625 0.02853482
## 75 10.000 0.18625 0.02853482
set.seed(1)
cost.ojrf = svm(Purchase ~ ., data = train.oj, kernel ="radial", cost = 0.685)
summary(cost.ojrf)
##
## Call:
## svm(formula = Purchase ~ ., data = train.oj, kernel = "radial", cost = 0.685)
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: radial
## cost: 0.685
##
## Number of Support Vectors: 387
##
## ( 194 193 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
train.pred4 = predict(cost.ojrf, train.oj)
table(train.pred4, train.oj$Purchase)
##
## train.pred4 CH MM
## CH 438 74
## MM 47 241
test.pred4=predict(cost.ojrf, test.oj)
table(test.pred4, test.oj$Purchase)
##
## test.pred4 CH MM
## CH 150 31
## MM 18 71
cost.traine = mean(train.oj$Purchase != train.pred4)
cost.traine
## [1] 0.15125
cost.teste= mean(test.oj$Purchase != test.pred4)
cost.teste
## [1] 0.1814815
(g) Repeat parts (b) through (e) using a support vector machine with a polynomial kernel. Set degree = 2.
set.seed(1)
svm.p3= svm(Purchase ~ ., kernel="polynomial", degree=2, data = train.oj, cost = 0.01)
summary(svm.p3)
##
## Call:
## svm(formula = Purchase ~ ., data = train.oj, kernel = "polynomial",
## degree = 2, cost = 0.01)
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: polynomial
## cost: 0.01
## degree: 2
## coef.0: 0
##
## Number of Support Vectors: 636
##
## ( 321 315 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
set.seed(1)
train.pred5 = predict(svm.p3, train.oj)
table(train.oj$Purchase, train.pred5)
## train.pred5
## CH MM
## CH 484 1
## MM 297 18
train.e4 = mean(train.oj$Purchase != train.pred5)
train.e4
## [1] 0.3725
test.pred5 = predict(svm.p3, test.oj)
table(test.oj$Purchase, test.pred5)
## test.pred5
## CH MM
## CH 167 1
## MM 98 4
test.e4 = mean(test.oj$Purchase != test.pred5)
test.e4
## [1] 0.3666667
set.seed(1)
tune.oj3 = tune(svm, Purchase ~ ., data = train.oj, kernel = "polynomial", degree = 2, ranges = list(cost = seq(0.01, 10, length.out = 75)))
summary(tune.oj3)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 2.44
##
## - best performance: 0.17125
##
## - Detailed performance results:
## cost error dispersion
## 1 0.010 0.39125 0.04210189
## 2 0.145 0.27375 0.05447030
## 3 0.280 0.20125 0.04348132
## 4 0.415 0.20125 0.04185375
## 5 0.550 0.20500 0.04257347
## 6 0.685 0.20375 0.03998698
## 7 0.820 0.20250 0.04322101
## 8 0.955 0.20250 0.04479893
## 9 1.090 0.20375 0.04041881
## 10 1.225 0.19500 0.04257347
## 11 1.360 0.19375 0.04419417
## 12 1.495 0.19000 0.04322101
## 13 1.630 0.18750 0.04166667
## 14 1.765 0.18500 0.04199868
## 15 1.900 0.18125 0.04177070
## 16 2.035 0.18000 0.04216370
## 17 2.170 0.17750 0.04158325
## 18 2.305 0.17375 0.03793727
## 19 2.440 0.17125 0.03729108
## 20 2.575 0.17375 0.03884174
## 21 2.710 0.17500 0.03818813
## 22 2.845 0.17500 0.03818813
## 23 2.980 0.17625 0.03793727
## 24 3.115 0.17750 0.03670453
## 25 3.250 0.18000 0.03545341
## 26 3.385 0.17875 0.03586723
## 27 3.520 0.17875 0.03586723
## 28 3.655 0.17875 0.03537988
## 29 3.790 0.18000 0.03395258
## 30 3.925 0.18250 0.03395258
## 31 4.060 0.18375 0.03438447
## 32 4.195 0.18500 0.03425801
## 33 4.330 0.18500 0.03425801
## 34 4.465 0.18500 0.03425801
## 35 4.600 0.18375 0.03387579
## 36 4.735 0.18250 0.03496029
## 37 4.870 0.18250 0.03496029
## 38 5.005 0.18250 0.03496029
## 39 5.140 0.18250 0.03496029
## 40 5.275 0.18375 0.03537988
## 41 5.410 0.18625 0.03143004
## 42 5.545 0.18375 0.03064696
## 43 5.680 0.18500 0.03162278
## 44 5.815 0.18625 0.03304563
## 45 5.950 0.18625 0.03304563
## 46 6.085 0.18500 0.03162278
## 47 6.220 0.18500 0.03162278
## 48 6.355 0.18500 0.03162278
## 49 6.490 0.18500 0.03162278
## 50 6.625 0.18625 0.03356689
## 51 6.760 0.18500 0.03162278
## 52 6.895 0.18500 0.03162278
## 53 7.030 0.18625 0.03251602
## 54 7.165 0.18500 0.03525699
## 55 7.300 0.18375 0.03387579
## 56 7.435 0.18375 0.03387579
## 57 7.570 0.18375 0.03387579
## 58 7.705 0.18250 0.03291403
## 59 7.840 0.18125 0.03448530
## 60 7.975 0.18000 0.03395258
## 61 8.110 0.18125 0.03240906
## 62 8.245 0.18000 0.03129164
## 63 8.380 0.18250 0.02898755
## 64 8.515 0.18250 0.02898755
## 65 8.650 0.18000 0.02838231
## 66 8.785 0.17875 0.02766993
## 67 8.920 0.17875 0.02766993
## 68 9.055 0.17750 0.02751262
## 69 9.190 0.17750 0.02751262
## 70 9.325 0.17750 0.02751262
## 71 9.460 0.17750 0.02751262
## 72 9.595 0.17875 0.02766993
## 73 9.730 0.17875 0.02766993
## 74 9.865 0.17875 0.02766993
## 75 10.000 0.18125 0.02779513
set.seed(1)
cost.fit2 = svm(Purchase ~ ., data = train.oj, kernel="polynomial", degree=2, cost = 2.44)
summary(cost.fit2)
##
## Call:
## svm(formula = Purchase ~ ., data = train.oj, kernel = "polynomial",
## degree = 2, cost = 2.44)
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: polynomial
## cost: 2.44
## degree: 2
## coef.0: 0
##
## Number of Support Vectors: 398
##
## ( 202 196 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
train.pred6 = predict(cost.fit2, train.oj)
table(train.pred6, train.oj$Purchase)
##
## train.pred6 CH MM
## CH 452 92
## MM 33 223
cost.e = mean(train.oj$Purchase != train.pred6)
cost.e
## [1] 0.15625
test.pred6=predict(cost.fit2, test.oj)
table(test.pred6, test.oj$Purchase)
##
## test.pred6 CH MM
## CH 154 42
## MM 14 60
cost.e2 = mean(test.oj$Purchase != test.pred6)
cost.e2
## [1] 0.2074074
(h) Overall, which approach seems to give the best results on this data? Comparing the tuned models' test error rates, the linear kernel gives the best results: linear (cost = 3) ≈ 0.152, radial (cost = 0.685) ≈ 0.181, polynomial (degree = 2, cost = 2.44) ≈ 0.207.
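These test error rates can be collected in one place from the tuned fits above (a small sketch; output not shown):
c(linear     = mean(test.oj$Purchase != predict(cost.fit, test.oj)),
  radial     = mean(test.oj$Purchase != predict(cost.ojrf, test.oj)),
  polynomial = mean(test.oj$Purchase != predict(cost.fit2, test.oj)))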