1. We have seen that we can fit an SVM with a non-linear kernel in order to perform classification using a non-linear decision boundary. We will now see that we can also obtain a non-linear decision boundary by performing logistic regression using non-linear transformations of the features.
  (a) Generate a data set with n = 500 and p = 2, such that the observations belong to two classes with a quadratic decision boundary between them:
set.seed(421)
x1 = runif(500) - 0.5
x2 = runif(500) - 0.5
y = 1 * (x1^2 - x2^2 > 0)
  (b) Plot the observations, colored according to their class labels. Your plot should display X1 on the x-axis and X2 on the y-axis.
plot(x1[y == 0], x2[y == 0], col = "red", xlab = "X1", ylab = "X2")
points(x1[y == 1], x2[y == 1], col = "blue")

  (c) Fit a logistic regression model to the data, using X1 and X2 as predictors.
lm.fit = glm(y ~ x1 + x2, family = binomial)
summary(lm.fit)
## 
## Call:
## glm(formula = y ~ x1 + x2, family = binomial)
## 
## Deviance Residuals: 
##    Min      1Q  Median      3Q     Max  
## -1.278  -1.227   1.089   1.135   1.175  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept)  0.11999    0.08971   1.338    0.181
## x1          -0.16881    0.30854  -0.547    0.584
## x2          -0.08198    0.31476  -0.260    0.795
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 691.35  on 499  degrees of freedom
## Residual deviance: 690.99  on 497  degrees of freedom
## AIC: 696.99
## 
## Number of Fisher Scoring iterations: 3
None of the coefficients is statistically significant, which is expected: the true boundary depends on the squares of X1 and X2, not on the raw features.

  (d) Apply this model to the training data in order to obtain a predicted class label for each training observation. Plot the observations, colored according to the predicted class labels. The decision boundary should be linear.
data = data.frame(x1 = x1, x2 = x2, y = y)
lm.prob = predict(lm.fit, data, type = "response")
lm.pred = ifelse(lm.prob > 0.52, 1, 0)  # cutoff nudged above 0.5: nearly all fitted probabilities exceed 0.5 here, so a 0.5 cutoff would predict a single class
data.pos = data[lm.pred == 1, ]
data.neg = data[lm.pred == 0, ]
plot(data.pos$x1, data.pos$x2, col = "blue", xlab = "X1", ylab = "X2")
points(data.neg$x1, data.neg$x2, col = "red", pch = 4)

  (e) Now fit a logistic regression model to the data using non-linear functions of X1 and X2 as predictors (e.g. X1^2, X1×X2, log(X2), and so forth).
lm.fit = glm(y ~ poly(x1, 2) + poly(x2, 2) + I(x1 * x2), data = data, family = binomial)
## Warning: glm.fit: algorithm did not converge
## Warning: glm.fit: fitted probabilities numerically 0 or 1 occurred
summary(lm.fit)
## 
## Call:
## glm(formula = y ~ poly(x1, 2) + poly(x2, 2) + I(x1 * x2), family = binomial, 
##     data = data)
## 
## Deviance Residuals: 
##       Min         1Q     Median         3Q        Max  
## -0.003575   0.000000   0.000000   0.000000   0.003720  
## 
## Coefficients:
##                Estimate Std. Error z value Pr(>|z|)
## (Intercept)      236.09   34920.61   0.007    0.995
## poly(x1, 2)1    3608.97  246381.97   0.015    0.988
## poly(x1, 2)2   88150.22 1333540.93   0.066    0.947
## poly(x2, 2)1    3256.75  177352.91   0.018    0.985
## poly(x2, 2)2  -87128.37 1164195.57  -0.075    0.940
## I(x1 * x2)       -33.23  446735.64   0.000    1.000
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 6.9135e+02  on 499  degrees of freedom
## Residual deviance: 3.3069e-05  on 494  degrees of freedom
## AIC: 12
## 
## Number of Fisher Scoring iterations: 25
The warnings indicate that the classes are perfectly separated: with the quadratic terms the model can reproduce the true boundary exactly, so the fitted probabilities are driven to 0 or 1 and the residual deviance collapses to essentially zero.

  (f) Apply this model to the training data in order to obtain a predicted class label for each training observation. Plot the observations, colored according to the predicted class labels. The decision boundary should be obviously non-linear. If it is not, then repeat (a)-(e) until you come up with an example in which the predicted class labels are obviously non-linear.
lm.prob = predict(lm.fit, data, type = "response")
lm.pred = ifelse(lm.prob > 0.5, 1, 0)
data.pos = data[lm.pred == 1, ]
data.neg = data[lm.pred == 0, ]
plot(data.pos$x1, data.pos$x2, col = "blue", xlab = "X1", ylab = "X2")
points(data.neg$x1, data.neg$x2, col = "red")

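The predicted classes now trace out the true quadratic boundary almost exactly. To draw the fitted boundary itself, one can also predict over a fine grid and overlay the p = 0.5 contour; a minimal sketch, reusing the lm.fit object from above:

grid.x1 = seq(-0.5, 0.5, length.out = 200)
grid.x2 = seq(-0.5, 0.5, length.out = 200)
grid = expand.grid(x1 = grid.x1, x2 = grid.x2)
# predicted probabilities on the grid; rows of the matrix vary with x1, columns with x2
grid.prob = predict(lm.fit, grid, type = "response")
contour(grid.x1, grid.x2, matrix(grid.prob, 200, 200), levels = 0.5, add = TRUE, drawlabels = FALSE)
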
  (g) Fit a support vector classifier to the data with X1 and X2 as predictors. Obtain a class prediction for each training observation. Plot the observations, colored according to the predicted class labels.
library(e1071)
## Warning: package 'e1071' was built under R version 3.6.2
svm.fit = svm(as.factor(y) ~ x1 + x2, data, kernel = "linear", cost = 0.1)
svm.pred = predict(svm.fit, data)
data.pos = data[svm.pred == 1, ]
data.neg = data[svm.pred == 0, ]
plot(data.pos$x1, data.pos$x2, col = "blue", xlab = "X1", ylab = "X2")
points(data.neg$x1, data.neg$x2, col = "red")

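The support vector classifier, like the linear logistic regression, can only produce a linear boundary here. For comparison, an SVM with a radial kernel recovers the quadratic boundary from the raw features; a sketch (gamma = 1 is an illustrative, untuned choice):

svm.rbf = svm(as.factor(y) ~ x1 + x2, data, kernel = "radial", gamma = 1)
svm.pred = predict(svm.rbf, data)
# plot the observations colored by the radial SVM's predicted class
data.pos = data[svm.pred == 1, ]
data.neg = data[svm.pred == 0, ]
plot(data.pos$x1, data.pos$x2, col = "blue", xlab = "X1", ylab = "X2")
points(data.neg$x1, data.neg$x2, col = "red")
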
2. In this problem, you will use support vector approaches in order to predict whether a given car gets high or low gas mileage based on the Auto data set.
  (a) Create a binary variable that takes on a 1 for cars with gas mileage above the median, and a 0 for cars with gas mileage below the median.
library(ISLR)
gas.med = median(Auto$mpg)
new.var = ifelse(Auto$mpg > gas.med, 1, 0)
Auto$mpglevel = as.factor(new.var)
  (b) Fit a support vector classifier to the data with various values of cost, in order to predict whether a car gets high or low gas mileage. Report the cross-validation errors associated with different values of this parameter. Comment on your results.
set.seed(1)
tune.out = tune(svm, mpglevel ~ ., data = Auto, kernel = "linear", ranges = list(cost = c(0.01, 
    0.1, 1, 5, 10, 100)))
summary(tune.out)
## 
## Parameter tuning of 'svm':
## 
## - sampling method: 10-fold cross validation 
## 
## - best parameters:
##  cost
##     1
## 
## - best performance: 0.01025641 
## 
## - Detailed performance results:
##    cost      error dispersion
## 1 1e-02 0.07653846 0.03617137
## 2 1e-01 0.04596154 0.03378238
## 3 1e+00 0.01025641 0.01792836
## 4 5e+00 0.02051282 0.02648194
## 5 1e+01 0.02051282 0.02648194
## 6 1e+02 0.03076923 0.03151981

The lowest cross-validation error, about 1%, is obtained for cost = 1; both smaller and larger values of cost perform somewhat worse.
  (c) Now repeat (b), this time using SVMs with radial and polynomial basis kernels, with different values of gamma, degree, and cost. Comment on your results.
set.seed(1)
tune.out = tune(svm, mpglevel ~ ., data = Auto, kernel = "polynomial", ranges = list(cost = c(0.1, 1, 5, 10), degree = c(2, 3, 4)))
summary(tune.out)
## 
## Parameter tuning of 'svm':
## 
## - sampling method: 10-fold cross validation 
## 
## - best parameters:
##  cost degree
##    10      2
## 
## - best performance: 0.5130128 
## 
## - Detailed performance results:
##    cost degree     error dispersion
## 1   0.1      2 0.5511538 0.04366593
## 2   1.0      2 0.5511538 0.04366593
## 3   5.0      2 0.5511538 0.04366593
## 4  10.0      2 0.5130128 0.08963366
## 5   0.1      3 0.5511538 0.04366593
## 6   1.0      3 0.5511538 0.04366593
## 7   5.0      3 0.5511538 0.04366593
## 8  10.0      3 0.5511538 0.04366593
## 9   0.1      4 0.5511538 0.04366593
## 10  1.0      4 0.5511538 0.04366593
## 11  5.0      4 0.5511538 0.04366593
## 12 10.0      4 0.5511538 0.04366593

For a polynomial kernel, the lowest cross-validation error (0.513) is obtained for a degree of 2 and a cost of 10, although every polynomial fit performs poorly here, with errors of roughly 51-55%.

set.seed(1)
tune.out = tune(svm, mpglevel ~ ., data = Auto, kernel = "radial", ranges = list(cost = c(0.1, 
    1, 5, 10), gamma = c(0.01, 0.1, 1, 5, 10, 100)))
summary(tune.out)
## 
## Parameter tuning of 'svm':
## 
## - sampling method: 10-fold cross validation 
## 
## - best parameters:
##  cost gamma
##    10  0.01
## 
## - best performance: 0.02557692 
## 
## - Detailed performance results:
##    cost gamma      error dispersion
## 1   0.1 1e-02 0.08929487 0.04382379
## 2   1.0 1e-02 0.07403846 0.03522110
## 3   5.0 1e-02 0.04852564 0.03303346
## 4  10.0 1e-02 0.02557692 0.02093679
## 5   0.1 1e-01 0.07903846 0.03874545
## 6   1.0 1e-01 0.05371795 0.03525162
## 7   5.0 1e-01 0.02820513 0.03299190
## 8  10.0 1e-01 0.03076923 0.03375798
## 9   0.1 1e+00 0.55115385 0.04366593
## 10  1.0 1e+00 0.06384615 0.04375618
## 11  5.0 1e+00 0.05884615 0.04020934
## 12 10.0 1e+00 0.05884615 0.04020934
## 13  0.1 5e+00 0.55115385 0.04366593
## 14  1.0 5e+00 0.49493590 0.04724924
## 15  5.0 5e+00 0.48217949 0.05470903
## 16 10.0 5e+00 0.48217949 0.05470903
## 17  0.1 1e+01 0.55115385 0.04366593
## 18  1.0 1e+01 0.51794872 0.05063697
## 19  5.0 1e+01 0.51794872 0.04917316
## 20 10.0 1e+01 0.51794872 0.04917316
## 21  0.1 1e+02 0.55115385 0.04366593
## 22  1.0 1e+02 0.55115385 0.04366593
## 23  5.0 1e+02 0.55115385 0.04366593
## 24 10.0 1e+02 0.55115385 0.04366593

For a radial kernel, the lowest cross-validation error (about 0.026) is obtained for a gamma of 0.01 and a cost of 10.

  (d) Make some plots to back up your assertions in (b) and (c).
svm.linear = svm(mpglevel ~ ., data = Auto, kernel = "linear", cost = 1)
svm.poly = svm(mpglevel ~ ., data = Auto, kernel = "polynomial", cost = 10, 
    degree = 2)
svm.radial = svm(mpglevel ~ ., data = Auto, kernel = "radial", cost = 10, gamma = 0.01)
# plot the fitted classifier against mpg and each remaining predictor
plotpairs = function(fit) {
    for (name in names(Auto)[!(names(Auto) %in% c("mpg", "mpglevel", "name"))]) {
        plot(fit, Auto, as.formula(paste("mpg~", name, sep = "")))
    }
}
plotpairs(svm.linear)
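
The same plots can be produced for the tuned polynomial and radial fits, to compare the three decision surfaces:

plotpairs(svm.poly)
plotpairs(svm.radial)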

3. This problem involves the OJ data set, which is part of the ISLR package.
  (a) Create a training set containing a random sample of 800 observations, and a test set containing the remaining observations.
set.seed(1)
train = sample(dim(OJ)[1], 800)
OJ.train = OJ[train, ]
OJ.test = OJ[-train, ]
  (b) Fit a support vector classifier to the training data using cost = 0.01, with Purchase as the response and the other variables as predictors. Use the summary() function to produce summary statistics, and describe the results obtained.
svm.linear = svm(Purchase ~ ., kernel = "linear", data = OJ.train, cost = 0.01)
summary(svm.linear)
## 
## Call:
## svm(formula = Purchase ~ ., data = OJ.train, kernel = "linear", 
##     cost = 0.01)
## 
## 
## Parameters:
##    SVM-Type:  C-classification 
##  SVM-Kernel:  linear 
##        cost:  0.01 
## 
## Number of Support Vectors:  435
## 
##  ( 219 216 )
## 
## 
## Number of Classes:  2 
## 
## Levels: 
##  CH MM

The support vector classifier creates 435 support vectors out of the 800 training points: 219 belong to level CH and the remaining 216 to level MM.

  (c) What are the training and test error rates?
train.pred = predict(svm.linear, OJ.train)
table(OJ.train$Purchase, train.pred)
##     train.pred
##       CH  MM
##   CH 420  65
##   MM  75 240
(75 + 65) / (420 + 240 + 75 + 65)
## [1] 0.175
test.pred = predict(svm.linear, OJ.test)
table(OJ.test$Purchase, test.pred)
##     test.pred
##       CH  MM
##   CH 153  15
##   MM  33  69
(33 + 15) / (153 + 15 + 33 + 69)
## [1] 0.1777778

The training error rate is 17.5% and the test error rate is about 17.8%.
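
Rather than copying entries out of the confusion matrix by hand, the error rate can be computed directly from the predictions; a small helper sketch, reusable for the fits below:

# error rate = fraction of observations whose predicted Purchase differs from the truth
err.rate = function(fit, data) mean(predict(fit, data) != data$Purchase)
err.rate(svm.linear, OJ.train)  # training error
err.rate(svm.linear, OJ.test)   # test error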

  (d) Use the tune() function to select an optimal cost. Consider values in the range 0.01 to 10.
set.seed(1)
tune.out = tune(svm, Purchase ~ ., data = OJ.train, kernel = "linear", ranges = list(cost = 10^seq(-2, 
    1, by = 0.25)))
summary(tune.out)
## 
## Parameter tuning of 'svm':
## 
## - sampling method: 10-fold cross validation 
## 
## - best parameters:
##      cost
##  3.162278
## 
## - best performance: 0.16875 
## 
## - Detailed performance results:
##           cost   error dispersion
## 1   0.01000000 0.17625 0.02853482
## 2   0.01778279 0.17625 0.03143004
## 3   0.03162278 0.17125 0.02829041
## 4   0.05623413 0.17625 0.02853482
## 5   0.10000000 0.17250 0.03162278
## 6   0.17782794 0.17125 0.02829041
## 7   0.31622777 0.17125 0.02889757
## 8   0.56234133 0.17125 0.02703521
## 9   1.00000000 0.17500 0.02946278
## 10  1.77827941 0.17375 0.02729087
## 11  3.16227766 0.16875 0.03019037
## 12  5.62341325 0.17375 0.03304563
## 13 10.00000000 0.17375 0.03197764

Cross-validation selects an optimal cost of about 3.16.

  (e) Compute the training and test error rates using this new value for cost.
svm.linear = svm(Purchase ~ ., kernel = "linear", data = OJ.train, cost = tune.out$best.parameters$cost)
train.pred = predict(svm.linear, OJ.train)
table(OJ.train$Purchase, train.pred)
##     train.pred
##       CH  MM
##   CH 423  62
##   MM  70 245
(70+62)/(423+245+70+62)
## [1] 0.165
test.pred = predict(svm.linear, OJ.test)
table(OJ.test$Purchase, test.pred)
##     test.pred
##       CH  MM
##   CH 156  12
##   MM  29  73
(29+12)/(156+73+29+12)
## [1] 0.1518519

With the best cost, the training error rate is now 16.5% and the test error rate is 15.2%.

  (f) Repeat parts (b) through (e) using a support vector machine with a radial kernel. Use the default value for gamma.

svm.radial = svm(Purchase ~ ., data = OJ.train, kernel = "radial")
summary(svm.radial)
## 
## Call:
## svm(formula = Purchase ~ ., data = OJ.train, kernel = "radial")
## 
## 
## Parameters:
##    SVM-Type:  C-classification 
##  SVM-Kernel:  radial 
##        cost:  1 
## 
## Number of Support Vectors:  373
## 
##  ( 188 185 )
## 
## 
## Number of Classes:  2 
## 
## Levels: 
##  CH MM
train.pred = predict(svm.radial, OJ.train)
table(OJ.train$Purchase, train.pred)
##     train.pred
##       CH  MM
##   CH 441  44
##   MM  77 238
(77+44)/(441+238+77+44)
## [1] 0.15125
test.pred = predict(svm.radial, OJ.test)
table(OJ.test$Purchase, test.pred)
##     test.pred
##       CH  MM
##   CH 151  17
##   MM  33  69
(33+17)/(151+69+33+17)
## [1] 0.1851852

The radial kernel with the default gamma uses 373 support vectors, of which 188 belong to level CH and the remaining 185 to level MM. The classifier has a training error of 15.1% and a test error of 18.5%: an improvement over the linear kernel on the training set, but slightly worse on the test set. We now use cross-validation to find the optimal cost.

set.seed(1)
tune.out = tune(svm, Purchase ~ ., data = OJ.train, kernel = "radial", ranges = list(cost = 10^seq(-2, 
    1, by = 0.25)))
summary(tune.out)
## 
## Parameter tuning of 'svm':
## 
## - sampling method: 10-fold cross validation 
## 
## - best parameters:
##       cost
##  0.5623413
## 
## - best performance: 0.16875 
## 
## - Detailed performance results:
##           cost   error dispersion
## 1   0.01000000 0.39375 0.04007372
## 2   0.01778279 0.39375 0.04007372
## 3   0.03162278 0.35750 0.05927806
## 4   0.05623413 0.19500 0.02443813
## 5   0.10000000 0.18625 0.02853482
## 6   0.17782794 0.18250 0.03291403
## 7   0.31622777 0.17875 0.03230175
## 8   0.56234133 0.16875 0.02651650
## 9   1.00000000 0.17125 0.02128673
## 10  1.77827941 0.17625 0.02079162
## 11  3.16227766 0.17750 0.02266912
## 12  5.62341325 0.18000 0.02220485
## 13 10.00000000 0.18625 0.02853482
svm.radial = svm(Purchase ~ ., data = OJ.train, kernel = "radial", cost = tune.out$best.parameters$cost)
train.pred = predict(svm.radial, OJ.train)
table(OJ.train$Purchase, train.pred)
##     train.pred
##       CH  MM
##   CH 437  48
##   MM  71 244
(71+48)/(437+244+71+48)
## [1] 0.14875
test.pred = predict(svm.radial, OJ.test)
table(OJ.test$Purchase, test.pred)
##     test.pred
##       CH  MM
##   CH 150  18
##   MM  30  72
(30+18)/(150+72+30+18)
## [1] 0.1777778

With the tuned cost, the radial kernel's training error falls to about 14.9% and its test error to about 17.8%.
  (g) Repeat parts (b) through (e) using a support vector machine with a polynomial kernel. Set degree = 2.
svm.poly = svm(Purchase ~ ., kernel = "polynomial", data = OJ.train, degree = 2)
summary(svm.poly)
## 
## Call:
## svm(formula = Purchase ~ ., data = OJ.train, kernel = "polynomial", 
##     degree = 2)
## 
## 
## Parameters:
##    SVM-Type:  C-classification 
##  SVM-Kernel:  polynomial 
##        cost:  1 
##      degree:  2 
##      coef.0:  0 
## 
## Number of Support Vectors:  447
## 
##  ( 225 222 )
## 
## 
## Number of Classes:  2 
## 
## Levels: 
##  CH MM
train.pred = predict(svm.poly, OJ.train)
table(OJ.train$Purchase, train.pred)
##     train.pred
##       CH  MM
##   CH 449  36
##   MM 110 205
(110+36)/(449+205+110+36)
## [1] 0.1825
test.pred <- predict(svm.poly, OJ.test)
table(OJ.test$Purchase, test.pred)
##     test.pred
##       CH  MM
##   CH 153  15
##   MM  45  57
(45+15)/(153+57+45+15)
## [1] 0.2222222

The polynomial kernel with degree 2 and the default cost uses 447 support vectors, of which 225 belong to level CH and the remaining 222 to level MM. The classifier has a training error of 18.3% and a test error of 22.2%, which is no improvement over the linear kernel. We now use cross-validation to find the optimal cost.

set.seed(1)
tune.out = tune(svm, Purchase ~ ., data = OJ.train, kernel = "polynomial", degree = 2, ranges = list(cost = 10^seq(-2, 
    1, by = 0.25)))
summary(tune.out)
## 
## Parameter tuning of 'svm':
## 
## - sampling method: 10-fold cross validation 
## 
## - best parameters:
##      cost
##  3.162278
## 
## - best performance: 0.1775 
## 
## - Detailed performance results:
##           cost   error dispersion
## 1   0.01000000 0.39125 0.04210189
## 2   0.01778279 0.37125 0.03537988
## 3   0.03162278 0.36500 0.03476109
## 4   0.05623413 0.33750 0.04714045
## 5   0.10000000 0.32125 0.05001736
## 6   0.17782794 0.24500 0.04758034
## 7   0.31622777 0.19875 0.03972562
## 8   0.56234133 0.20500 0.03961621
## 9   1.00000000 0.20250 0.04116363
## 10  1.77827941 0.18500 0.04199868
## 11  3.16227766 0.17750 0.03670453
## 12  5.62341325 0.18375 0.03064696
## 13 10.00000000 0.18125 0.02779513
svm.poly = svm(Purchase ~ ., kernel = "polynomial", degree = 2, data = OJ.train, cost = tune.out$best.parameters$cost)
summary(svm.poly)
## 
## Call:
## svm(formula = Purchase ~ ., data = OJ.train, kernel = "polynomial", 
##     degree = 2, cost = tune.out$best.parameters$cost)
## 
## 
## Parameters:
##    SVM-Type:  C-classification 
##  SVM-Kernel:  polynomial 
##        cost:  3.162278 
##      degree:  2 
##      coef.0:  0 
## 
## Number of Support Vectors:  385
## 
##  ( 197 188 )
## 
## 
## Number of Classes:  2 
## 
## Levels: 
##  CH MM
train.pred <- predict(svm.poly, OJ.train)
table(OJ.train$Purchase, train.pred)
##     train.pred
##       CH  MM
##   CH 451  34
##   MM  90 225
(90+34)/(451+225+90+34)
## [1] 0.155
test.pred <- predict(svm.poly, OJ.test)
table(OJ.test$Purchase, test.pred)
##     test.pred
##       CH  MM
##   CH 154  14
##   MM  41  61
(41+14)/(154+61+41+14)
## [1] 0.2037037

Tuning reduces both the training error (18.3% to 15.5%) and the test error (22.2% to 20.4%) for the polynomial kernel. Overall, the tuned linear kernel gives the lowest test error on this data (15.2%), followed by the tuned radial kernel (17.8%) and the tuned polynomial kernel (20.4%).