5. We have seen that we can fit an SVM with a non-linear kernel in order to perform classification using a non-linear decision boundary. We will now see that we can also obtain a non-linear decision boundary by performing logistic regression using non-linear transformations of the features.

(a) Generate a data set with n = 500 and p = 2, such that the observations belong to two classes with a quadratic decision boundary between them. For instance, you can do this as follows:

set.seed(421)
x1 = runif(500) - 0.5
x2 = runif(500) - 0.5
y = 1 * (x1^2 - x2^2 > 0)
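
The boundary x1^2 - x2^2 = 0 consists of the two lines X2 = X1 and X2 = -X1, so the true class boundary is non-linear in (X1, X2). A quick check (output not shown) that both classes occur with the seed above:

table(y)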

(b) Plot the observations, colored according to their class labels. Your plot should display X1 on the x-axis, and X2 on the y-axis.

plot(x1[y == 0], x2[y == 0], col = "green", xlab = "X1", ylab = "X2", pch = "+")
points(x1[y == 1], x2[y == 1], col = "blue", pch = 4)

(c) Fit a logistic regression model to the data, using X1 and X2 as predictors.

lm.fit = glm(y ~ x1 + x2, family = binomial)
summary(lm.fit)
## 
## Call:
## glm(formula = y ~ x1 + x2, family = binomial)
## 
## Deviance Residuals: 
##    Min      1Q  Median      3Q     Max  
## -1.278  -1.227   1.089   1.135   1.175  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept)  0.11999    0.08971   1.338    0.181
## x1          -0.16881    0.30854  -0.547    0.584
## x2          -0.08198    0.31476  -0.260    0.795
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 691.35  on 499  degrees of freedom
## Residual deviance: 690.99  on 497  degrees of freedom
## AIC: 696.99
## 
## Number of Fisher Scoring iterations: 3

(d) Apply this model to the training data in order to obtain a predicted class label for each training observation. Plot the observations, colored according to the predicted class labels. The decision boundary should be linear.

data = data.frame(x1 = x1, x2 = x2, y = y)
lm.prob = predict(lm.fit, data, type = "response")
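# Note: given the fitted coefficients, nearly all predicted probabilities lie just
# above 0.5, so a 0.5 cutoff would assign almost every observation to class 1;
# the 0.52 threshold below is presumably chosen so that both predicted classes
# appear in the plot.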
lm.pred = ifelse(lm.prob > 0.52, 1, 0)
data.pos = data[lm.pred == 1, ]
data.neg = data[lm.pred == 0, ]
plot(data.pos$x1, data.pos$x2, col = "blue", xlab = "X1", ylab = "X2", pch = "+")
points(data.neg$x1, data.neg$x2, col = "green", pch = 4)
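
As a quick check (not part of the original output), the training error rate of this fit can be computed directly; since a linear model cannot capture the quadratic boundary, it should be close to chance (about 50%):

mean(lm.pred != y)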

(e) Now fit a logistic regression model to the data using non-linear functions of X1 and X2 as predictors.

lm.fit = glm(y ~ poly(x1, 2) + poly(x2, 2) + I(x1 * x2), data = data, family = binomial)
## Warning: glm.fit: algorithm did not converge
## Warning: glm.fit: fitted probabilities numerically 0 or 1 occurred

(f) Apply this model to the training data in order to obtain a predicted class label for each training observation. Plot the observations, colored according to the predicted class labels. The decision boundary should be obviously non-linear. If it is not, then repeat (a)-(e) until you come up with an example in which the predicted class labels are obviously non-linear.

lm.prob = predict(lm.fit, data, type = "response")
lm.pred = ifelse(lm.prob > 0.5, 1, 0)
data.pos = data[lm.pred == 1, ]
data.neg = data[lm.pred == 0, ]
plot(data.pos$x1, data.pos$x2, col = "blue", xlab = "X1", ylab = "X2", pch = "+")
points(data.neg$x1, data.neg$x2, col = "green", pch = 4)
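
To see how closely the fitted boundary tracks the truth, the true boundary X2 = ±X1 can be overlaid on this plot (a small addition, not in the original code):

abline(a = 0, b = 1, lty = 2)
abline(a = 0, b = -1, lty = 2)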

(g) Fit a support vector classifier to the data with X1 and X2 as predictors. Obtain a class prediction for each training observation. Plot the observations, colored according to the predicted class labels.

library(e1071)
svm.fit = svm(as.factor(y) ~ x1 + x2, data, kernel = "linear", cost = 0.1)
svm.pred = predict(svm.fit, data)
data.pos = data[svm.pred == 1, ]
data.neg = data[svm.pred == 0, ]
plot(data.pos$x1, data.pos$x2, col = "blue", xlab = "X1", ylab = "X2", pch = "+")
points(data.neg$x1, data.neg$x2, col = "green", pch = 4)

(h) Fit an SVM using a non-linear kernel to the data. Obtain a class prediction for each training observation. Plot the observations, colored according to the predicted class labels.

svm.fit = svm(as.factor(y) ~ x1 + x2, data, gamma = 1)
svm.pred = predict(svm.fit, data)
data.pos = data[svm.pred == 1, ]
data.neg = data[svm.pred == 0, ]
plot(data.pos$x1, data.pos$x2, col = "blue", xlab = "X1", ylab = "X2", pch = "+")
points(data.neg$x1, data.neg$x2, col = "green", pch = 4)

(i) Comment on your results.

SVMs with a non-linear kernel are very effective at finding a non-linear decision boundary. Both logistic regression with only linear terms and the SVM with a linear kernel fail to find the true boundary. Adding quadratic and interaction terms to the logistic regression gives it roughly the same power as the radial-kernel SVM. However, choosing the right non-linear terms requires manual effort and tuning, and this effort can become prohibitive when the number of features is large.
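
As a rough supporting sketch (not part of the original solution), the four models above can be refit and their training error rates compared side by side; this assumes the data frame data from part (d) and the e1071 package are still loaded, and uses a plain 0.5 cutoff for both logistic fits.

fits = list(
    logit.linear    = glm(y ~ x1 + x2, data = data, family = binomial),
    logit.nonlinear = glm(y ~ poly(x1, 2) + poly(x2, 2) + I(x1 * x2),
                          data = data, family = binomial),
    svm.linear      = svm(as.factor(y) ~ x1 + x2, data, kernel = "linear", cost = 0.1),
    svm.radial      = svm(as.factor(y) ~ x1 + x2, data, gamma = 1)
)
# training error rate for each fit (glm returns probabilities, svm returns factor labels)
sapply(names(fits), function(nm) {
    fit = fits[[nm]]
    pred = if (inherits(fit, "glm")) {
        as.numeric(predict(fit, data, type = "response") > 0.5)
    } else {
        as.numeric(as.character(predict(fit, data)))
    }
    mean(pred != data$y)
})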

7. In this problem, you will use support vector approaches in order to predict whether a given car gets high or low gas mileage based on the Auto data set.

(a) Create a binary variable that takes on a 1 for cars with gas mileage above the median, and a 0 for cars with gas mileage below the median.

library(ISLR)
gas.med = median(Auto$mpg)
new.var = ifelse(Auto$mpg > gas.med, 1, 0)
Auto$mpglevel = as.factor(new.var)
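
A quick check (output not shown) that the two classes are roughly balanced:

table(Auto$mpglevel)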

(b) Fit a support vector classifier to the data with various values of cost, in order to predict whether a car gets high or low gas mileage. Report the cross-validation errors associated with different values of this parameter. Comment on your results.

library(e1071)
set.seed(3255)
tune.out = tune(svm, mpglevel ~ ., data = Auto, kernel = "linear", ranges = list(cost = c(0.01, 
    0.1, 1, 5, 10, 100)))
summary(tune.out)
## 
## Parameter tuning of 'svm':
## 
## - sampling method: 10-fold cross validation 
## 
## - best parameters:
##  cost
##     1
## 
## - best performance: 0.01282051 
## 
## - Detailed performance results:
##    cost      error dispersion
## 1 1e-02 0.07384615 0.04219942
## 2 1e-01 0.04083333 0.03008810
## 3 1e+00 0.01282051 0.02179068
## 4 5e+00 0.01538462 0.02477158
## 5 1e+01 0.02044872 0.02354784
## 6 1e+02 0.03070513 0.02357884
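
The lowest cross-validation error, about 1.3%, is obtained for cost = 1; both much smaller and much larger values of cost give higher errors.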

(c) Now repeat (b), this time using SVMs with radial and polynomial basis kernels, with different values of gamma and degree and cost. Comment on your results.

set.seed(21)
tune.out = tune(svm, mpglevel ~ ., data = Auto, kernel = "polynomial", ranges = list(cost = c(0.1, 
    1, 5, 10), degree = c(2, 3, 4)))
summary(tune.out)
## 
## Parameter tuning of 'svm':
## 
## - sampling method: 10-fold cross validation 
## 
## - best parameters:
##  cost degree
##    10      2
## 
## - best performance: 0.5508974 
## 
## - Detailed performance results:
##    cost degree     error dispersion
## 1   0.1      2 0.5867308 0.07310319
## 2   1.0      2 0.5867308 0.07310319
## 3   5.0      2 0.5867308 0.07310319
## 4  10.0      2 0.5508974 0.11697667
## 5   0.1      3 0.5867308 0.07310319
## 6   1.0      3 0.5867308 0.07310319
## 7   5.0      3 0.5867308 0.07310319
## 8  10.0      3 0.5867308 0.07310319
## 9   0.1      4 0.5867308 0.07310319
## 10  1.0      4 0.5867308 0.07310319
## 11  5.0      4 0.5867308 0.07310319
## 12 10.0      4 0.5867308 0.07310319

For the polynomial kernel, the lowest cross-validation error is obtained for cost = 10 and degree = 2, although all of the cross-validation errors here are high (at least 0.55), so none of these polynomial fits perform well.

set.seed(463)
tune.out = tune(svm, mpglevel ~ ., data = Auto, kernel = "radial", ranges = list(cost = c(0.1, 
    1, 5, 10), gamma = c(0.01, 0.1, 1, 5, 10, 100)))
summary(tune.out)
## 
## Parameter tuning of 'svm':
## 
## - sampling method: 10-fold cross validation 
## 
## - best parameters:
##  cost gamma
##    10  0.01
## 
## - best performance: 0.02288462 
## 
## - Detailed performance results:
##    cost gamma      error dispersion
## 1   0.1 1e-02 0.08916667 0.04526330
## 2   1.0 1e-02 0.07397436 0.03896185
## 3   5.0 1e-02 0.05102564 0.03813274
## 4  10.0 1e-02 0.02288462 0.03286718
## 5   0.1 1e-01 0.07903846 0.04724112
## 6   1.0 1e-01 0.05602564 0.03950993
## 7   5.0 1e-01 0.02801282 0.02231663
## 8  10.0 1e-01 0.02551282 0.02093755
## 9   0.1 1e+00 0.55102564 0.03813274
## 10  1.0 1e+00 0.06365385 0.04199145
## 11  5.0 1e+00 0.06108974 0.04358351
## 12 10.0 1e+00 0.06108974 0.04358351
## 13  0.1 5e+00 0.55102564 0.03813274
## 14  1.0 5e+00 0.48717949 0.03963085
## 15  5.0 5e+00 0.49224359 0.04525523
## 16 10.0 5e+00 0.49224359 0.04525523
## 17  0.1 1e+01 0.55102564 0.03813274
## 18  1.0 1e+01 0.50506410 0.04235779
## 19  5.0 1e+01 0.49993590 0.04269277
## 20 10.0 1e+01 0.49993590 0.04269277
## 21  0.1 1e+02 0.55102564 0.03813274
## 22  1.0 1e+02 0.55102564 0.03813274
## 23  5.0 1e+02 0.55102564 0.03813274
## 24 10.0 1e+02 0.55102564 0.03813274

For the radial kernel, the lowest cross-validation error is obtained for cost = 10 and gamma = 0.01.

(d) Make some plots to back up your assertions in (b) and (c).

svm.linear = svm(mpglevel ~ ., data = Auto, kernel = "linear", cost = 1)
svm.poly = svm(mpglevel ~ ., data = Auto, kernel = "polynomial", cost = 10, 
    degree = 2)
svm.radial = svm(mpglevel ~ ., data = Auto, kernel = "radial", cost = 10, gamma = 0.01)
plotpairs = function(fit) {
    for (name in names(Auto)[!(names(Auto) %in% c("mpg", "mpglevel", "name"))]) {
        plot(fit, Auto, as.formula(paste("mpg~", name, sep = "")))
    }
}
plotpairs(svm.linear)
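
The same helper can be applied to the polynomial and radial fits to compare their decision regions (plots omitted here):

plotpairs(svm.poly)
plotpairs(svm.radial)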

8. This problem involves the OJ data set which is part of the ISLR package.

(a) Create a training set containing a random sample of 800 observations, and a test set containing the remaining observations.

library(ISLR)
set.seed(9004)
train = sample(dim(OJ)[1], 800)
OJ.train = OJ[train, ]
OJ.test = OJ[-train, ]
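
The OJ data set has 1070 rows, so the test set contains the remaining 270 observations; a quick check (output not shown):

dim(OJ.train)
dim(OJ.test)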

(b) Fit a support vector classifier to the training data using cost=0.01, with Purchase as the response and the other variables as predictors. Use the summary() function to produce summary statistics, and describe the results obtained.

library(e1071)
svm.linear = svm(Purchase ~ ., kernel = "linear", data = OJ.train, cost = 0.01)
summary(svm.linear)
## 
## Call:
## svm(formula = Purchase ~ ., data = OJ.train, kernel = "linear", 
##     cost = 0.01)
## 
## 
## Parameters:
##    SVM-Type:  C-classification 
##  SVM-Kernel:  linear 
##        cost:  0.01 
##       gamma:  0.05555556 
## 
## Number of Support Vectors:  432
## 
##  ( 217 215 )
## 
## 
## Number of Classes:  2 
## 
## Levels: 
##  CH MM

(c) What are the training and test error rates?

train.pred = predict(svm.linear, OJ.train)
table(OJ.train$Purchase, train.pred)
##     train.pred
##       CH  MM
##   CH 439  53
##   MM  82 226
test.pred = predict(svm.linear, OJ.test)
table(OJ.test$Purchase, test.pred)
##     test.pred
##       CH  MM
##   CH 142  19
##   MM  29  80
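
From these tables, the training error rate is (53 + 82)/800 ≈ 16.9% and the test error rate is (19 + 29)/270 ≈ 17.8%. They can also be computed directly:

mean(train.pred != OJ.train$Purchase)
mean(test.pred != OJ.test$Purchase)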

(d) Use the tune() function to select an optimal cost. Consider values in the range 0.01 to 10.

set.seed(1554)
tune.out = tune(svm, Purchase ~ ., data = OJ.train, kernel = "linear", ranges = list(cost = 10^seq(-2, 
    1, by = 0.25)))
summary(tune.out)
## 
## Parameter tuning of 'svm':
## 
## - sampling method: 10-fold cross validation 
## 
## - best parameters:
##       cost
##  0.3162278
## 
## - best performance: 0.16875 
## 
## - Detailed performance results:
##           cost   error dispersion
## 1   0.01000000 0.16875 0.03691676
## 2   0.01778279 0.16875 0.03397814
## 3   0.03162278 0.17125 0.03230175
## 4   0.05623413 0.17250 0.03162278
## 5   0.10000000 0.17000 0.03291403
## 6   0.17782794 0.17125 0.03335936
## 7   0.31622777 0.16875 0.03498512
## 8   0.56234133 0.17000 0.03129164
## 9   1.00000000 0.16875 0.03397814
## 10  1.77827941 0.16875 0.03240906
## 11  3.16227766 0.16875 0.03294039
## 12  5.62341325 0.17125 0.03120831
## 13 10.00000000 0.17125 0.03283481

Tuning shows that the optimal cost is about 0.316.

(e) Compute the training and test error rates using this new value for cost.

svm.linear = svm(Purchase ~ ., kernel = "linear", data = OJ.train, cost = tune.out$best.parameters$cost)
train.pred = predict(svm.linear, OJ.train)
table(OJ.train$Purchase, train.pred)
##     train.pred
##       CH  MM
##   CH 435  57
##   MM  71 237
test.pred = predict(svm.linear, OJ.test)
table(OJ.test$Purchase, test.pred)
##     test.pred
##       CH  MM
##   CH 141  20
##   MM  29  80

Using the best cost, the training error decreases to 16.0%, while the test error increases slightly to about 18.1%.

(f) Repeat parts (b) through (e) using a support vector machine with a radial kernel. Use the default value for gamma.

set.seed(410)
svm.radial = svm(Purchase ~ ., data = OJ.train, kernel = "radial")
summary(svm.radial)
## 
## Call:
## svm(formula = Purchase ~ ., data = OJ.train, kernel = "radial")
## 
## 
## Parameters:
##    SVM-Type:  C-classification 
##  SVM-Kernel:  radial 
##        cost:  1 
##       gamma:  0.05555556 
## 
## Number of Support Vectors:  367
## 
##  ( 184 183 )
## 
## 
## Number of Classes:  2 
## 
## Levels: 
##  CH MM
train.pred = predict(svm.radial, OJ.train)
table(OJ.train$Purchase, train.pred)
##     train.pred
##       CH  MM
##   CH 452  40
##   MM  78 230
test.pred = predict(svm.radial, OJ.test)
table(OJ.test$Purchase, test.pred)
##     test.pred
##       CH  MM
##   CH 146  15
##   MM  27  82
set.seed(755)
tune.out = tune(svm, Purchase ~ ., data = OJ.train, kernel = "radial", ranges = list(cost = 10^seq(-2, 
    1, by = 0.25)))
summary(tune.out)
## 
## Parameter tuning of 'svm':
## 
## - sampling method: 10-fold cross validation 
## 
## - best parameters:
##       cost
##  0.5623413
## 
## - best performance: 0.165 
## 
## - Detailed performance results:
##           cost   error dispersion
## 1   0.01000000 0.38500 0.06258328
## 2   0.01778279 0.38500 0.06258328
## 3   0.03162278 0.37625 0.06908379
## 4   0.05623413 0.21000 0.03855011
## 5   0.10000000 0.18625 0.03143004
## 6   0.17782794 0.18375 0.03230175
## 7   0.31622777 0.17125 0.03438447
## 8   0.56234133 0.16500 0.03763863
## 9   1.00000000 0.17500 0.03584302
## 10  1.77827941 0.17375 0.04059026
## 11  3.16227766 0.17625 0.03747684
## 12  5.62341325 0.17625 0.03839216
## 13 10.00000000 0.17375 0.03458584
svm.radial = svm(Purchase ~ ., data = OJ.train, kernel = "radial", cost = tune.out$best.parameters$cost)
train.pred = predict(svm.radial, OJ.train)
table(OJ.train$Purchase, train.pred)
##     train.pred
##       CH  MM
##   CH 452  40
##   MM  77 231
test.pred = predict(svm.radial, OJ.test)
table(OJ.test$Purchase, test.pred)
##     test.pred
##       CH  MM
##   CH 146  15
##   MM  28  81
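
With the tuned cost, the radial SVM has a training error of (40 + 77)/800 ≈ 14.6% and a test error of (15 + 28)/270 ≈ 15.9%, essentially unchanged from the default fit and better than the linear kernel on both sets.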

(g) Repeat parts (b) through (e) using a support vector machine with a polynomial kernel. Set degree=2.

set.seed(8112)
svm.poly = svm(Purchase ~ ., data = OJ.train, kernel = "poly", degree = 2)
summary(svm.poly)
## 
## Call:
## svm(formula = Purchase ~ ., data = OJ.train, kernel = "poly", 
##     degree = 2)
## 
## 
## Parameters:
##    SVM-Type:  C-classification 
##  SVM-Kernel:  polynomial 
##        cost:  1 
##      degree:  2 
##       gamma:  0.05555556 
##      coef.0:  0 
## 
## Number of Support Vectors:  452
## 
##  ( 232 220 )
## 
## 
## Number of Classes:  2 
## 
## Levels: 
##  CH MM
train.pred = predict(svm.poly, OJ.train)
table(OJ.train$Purchase, train.pred)
##     train.pred
##       CH  MM
##   CH 460  32
##   MM 105 203
test.pred = predict(svm.poly, OJ.test)
table(OJ.test$Purchase, test.pred)
##     test.pred
##       CH  MM
##   CH 149  12
##   MM  37  72
set.seed(322)
tune.out = tune(svm, Purchase ~ ., data = OJ.train, kernel = "poly", degree = 2, 
    ranges = list(cost = 10^seq(-2, 1, by = 0.25)))
summary(tune.out)
## 
## Parameter tuning of 'svm':
## 
## - sampling method: 10-fold cross validation 
## 
## - best parameters:
##      cost
##  5.623413
## 
## - best performance: 0.18375 
## 
## - Detailed performance results:
##           cost   error dispersion
## 1   0.01000000 0.38500 0.05426274
## 2   0.01778279 0.36750 0.05075814
## 3   0.03162278 0.35750 0.05177408
## 4   0.05623413 0.34250 0.04937104
## 5   0.10000000 0.31500 0.05230785
## 6   0.17782794 0.24875 0.03928617
## 7   0.31622777 0.20875 0.05684103
## 8   0.56234133 0.20875 0.05653477
## 9   1.00000000 0.20000 0.06095308
## 10  1.77827941 0.19375 0.04497299
## 11  3.16227766 0.18625 0.04185375
## 12  5.62341325 0.18375 0.03335936
## 13 10.00000000 0.18375 0.04041881
svm.poly = svm(Purchase ~ ., data = OJ.train, kernel = "poly", degree = 2, cost = tune.out$best.parameters$cost)
train.pred = predict(svm.poly, OJ.train)
table(OJ.train$Purchase, train.pred)
##     train.pred
##       CH  MM
##   CH 455  37
##   MM  84 224
test.pred = predict(svm.poly, OJ.test)
table(OJ.test$Purchase, test.pred)
##     test.pred
##       CH  MM
##   CH 148  13
##   MM  34  75
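
With the tuned cost, the polynomial SVM has a training error of (37 + 84)/800 ≈ 15.1% and a test error of (13 + 34)/270 ≈ 17.4%, an improvement over the default cost but still worse than the radial kernel.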

(h) Overall, which approach seems to give the best results on this data?

Overall, the radial kernel seems to produce the lowest misclassification error on both the training and test data.