Problem 5

We have seen that we can fit an SVM with a non-linear kernel in order to perform classification using a non-linear decision boundary. We will now see that we can also obtain a non-linear decision boundary by performing logistic regression using non-linear transformations of the features.

(a) Generate a data set with n = 500 and p = 2, such that the observations belong to two classes with a quadratic decision boundary between them. For instance, you can do this as follows:

> x1 <- runif(500) - 0.5

> x2 <- runif(500) - 0.5

> y <- 1 * (x1^2 - x2^2 > 0)

set.seed(421)

x1 = runif(500) - 0.5
x2 = runif(500) - 0.5
y = 1 * (x1^2 - x2^2 > 0)  # class 1 wherever |x1| > |x2|

(b) Plot the observations, colored according to their class labels. Your plot should display X1 on the x-axis, and X2 on the y-axis.

par(bg="black", col.lab="white", col.axis="white")
plot(x1[y == 0], x2[y == 0], col = "red", xlab = "X1", ylab = "X2", pch = 5)
points(x1[y == 1], x2[y == 1], col = "blue", pch = 4)
box(col="white")

(c) Fit a logistic regression model to the data, using X1 and X2 as predictors.

lm.data=data.frame(x1 = x1, x2 = x2, y = as.factor(y))
lm.fit=glm(y~., data = lm.data, family="binomial")
summary(lm.fit)
## 
## Call:
## glm(formula = y ~ ., family = "binomial", data = lm.data)
## 
## Deviance Residuals: 
##    Min      1Q  Median      3Q     Max  
## -1.278  -1.227   1.089   1.135   1.175  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept)  0.11999    0.08971   1.338    0.181
## x1          -0.16881    0.30854  -0.547    0.584
## x2          -0.08198    0.31476  -0.260    0.795
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 691.35  on 499  degrees of freedom
## Residual deviance: 690.99  on 497  degrees of freedom
## AIC: 696.99
## 
## Number of Fisher Scoring iterations: 3

Neither X1 nor X2 is statistically significant, as expected: the true boundary is quadratic, so the classes have no linear relationship with the raw features.

(d) Apply this model to the training data in order to obtain a predicted class label for each training observation. Plot the observations, colored according to the predicted class labels. The decision boundary should be linear.

lm.prob = predict(lm.fit, newdata = lm.data, type = "response")
# with no real linear signal, the fitted probabilities all sit near 0.5, so a
# cutoff slightly above 0.5 is used to make both predicted classes appear
lm.pred = ifelse(lm.prob > 0.52, 1, 0)

data.pos = lm.data[lm.pred == 1, ]
data.neg = lm.data[lm.pred == 0, ]

par(bg="black", col.lab="white", col.axis="white")
plot(data.pos$x1, data.pos$x2, col = "blue", xlab = "X1", ylab = "X2", pch = 5)
points(data.neg$x1, data.neg$x2, col = "red", pch = 4)
box(col="white")
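
Because the model is linear in X1 and X2, the p = 0.52 cutoff corresponds to a straight line. As a quick check (an addition to the original answer, using the coefficients from (c)), the implied boundary can be overlaid on the plot:

# boundary: b0 + b1*x1 + b2*x2 = log(0.52/0.48), solved for x2 in terms of x1
b = coef(lm.fit)
abline(a = (log(0.52/0.48) - b[1])/b[3], b = -b[2]/b[3], col = "white", lty = 2)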

(e) Now fit a logistic regression model to the data using non-linear functions of X1 and X2 as predictors (e.g. X1^2, X1 × X2, log(X2), and so forth).

lm.fit2 = glm(y ~ poly(x1, 2) + poly(x2, 2) + I(x1 * x2), data = lm.data, family = binomial)
## Warning: glm.fit: algorithm did not converge
## Warning: glm.fit: fitted probabilities numerically 0 or 1 occurred
summary(lm.fit2)
## 
## Call:
## glm(formula = y ~ poly(x1, 2) + poly(x2, 2) + I(x1 * x2), family = binomial, 
##     data = lm.data)
## 
## Deviance Residuals: 
##       Min         1Q     Median         3Q        Max  
## -0.003575   0.000000   0.000000   0.000000   0.003720  
## 
## Coefficients:
##                Estimate Std. Error z value Pr(>|z|)
## (Intercept)      236.09   34920.61   0.007    0.995
## poly(x1, 2)1    3608.97  246381.97   0.015    0.988
## poly(x1, 2)2   88150.22 1333540.93   0.066    0.947
## poly(x2, 2)1    3256.75  177352.91   0.018    0.985
## poly(x2, 2)2  -87128.37 1164195.57  -0.075    0.940
## I(x1 * x2)       -33.23  446735.64   0.000    1.000
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 6.9135e+02  on 499  degrees of freedom
## Residual deviance: 3.3069e-05  on 494  degrees of freedom
## AIC: 12
## 
## Number of Fisher Scoring iterations: 25

The warnings and the essentially zero residual deviance indicate perfect separation: with squared terms available, the model can reproduce the true quadratic boundary exactly, so the coefficient estimates diverge and their standard errors balloon.

(f) Apply this model to the training data in order to obtain a predicted class label for each training observation. Plot the observations, colored according to the predicted class labels. The decision boundary should be obviously non-linear. If it is not, then repeat (a)-(e) until you come up with an example in which the predicted class labels are obviously non-linear.

lm.prob2 = predict(lm.fit2, lm.data, type = "response")
lm.pred2 = ifelse(lm.prob2 > 0.5, 1, 0)
data.pos2 = lm.data[lm.pred2 == 1, ]
data.neg2 = lm.data[lm.pred2 == 0, ]

par(bg="black", col.lab="white", col.axis="white")
plot(data.pos2$x1, data.pos2$x2, col = "green", xlab = "X1", ylab = "X2", pch = 8)
points(data.neg2$x1, data.neg2$x2, col = "white", pch = 7)
box(col="white")
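
The predicted labels now trace the true boundary from (a), which is x1^2 = x2^2, i.e. the two diagonals x2 = x1 and x2 = -x1. Overlaying those lines (an addition to the original answer) makes the comparison explicit:

abline(0, 1, col = "grey", lty = 2)   # x2 = x1
abline(0, -1, col = "grey", lty = 2)  # x2 = -x1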

(g) Fit a support vector classifier to the data with X1 and X2 as predictors. Obtain a class prediction for each training observation. Plot the observations, colored according to the predicted class labels.

library(e1071)
svm.lin = svm(y ~ ., data = lm.data, kernel = "linear", cost = 0.01)
plot(svm.lin, lm.data)
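
plot.svm shades the feature space by predicted class; to color the individual observations by their predicted labels, as the question asks, one option (a sketch added here, not part of the original answer) is to call predict() directly:

svm.pred = predict(svm.lin, lm.data)  # factor of predicted labels "0"/"1"
par(bg = "black", col.lab = "white", col.axis = "white")
plot(x1, x2, col = ifelse(svm.pred == 1, "blue", "red"),
     pch = ifelse(svm.pred == 1, 5, 4), xlab = "X1", ylab = "X2")
box(col = "white")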

(h) Fit a SVM using a non-linear kernel to the data. Obtain a class prediction for each training observation. Plot the observations, colored according to the predicted class labels.

train = sample(nrow(lm.data), 350)  # random training subset of 350 of the 500 points
svm.nonlin = svm(y ~ ., data = lm.data[train, ], kernel = "radial", gamma = 1, cost = 1e5)
plot(svm.nonlin, lm.data[train, ])
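
The same predict()-based plot (again an optional addition) works for the radial fit on its training subset:

svm.pred2 = predict(svm.nonlin, lm.data[train, ])
par(bg = "black", col.lab = "white", col.axis = "white")
plot(lm.data$x1[train], lm.data$x2[train], col = ifelse(svm.pred2 == 1, "blue", "red"),
     pch = ifelse(svm.pred2 == 1, 5, 4), xlab = "X1", ylab = "X2")
box(col = "white")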

(i) Comment on your results.

The support vector classifier (linear kernel) cannot represent the quadratic boundary, so it misclassifies a large share of points regardless of cost. The radial kernel recovers the boundary well, although the very large cost used here yields an irregular, overfit-looking boundary. This mirrors the logistic regression results: the raw-feature fit was linear and poor, while the fit with quadratic terms captured the boundary almost exactly.

Problem 7

In this problem, you will use support vector approaches in order to predict whether a given car gets high or low gas mileage based on the Auto data set.

library(ISLR)
library(e1071)
summary(Auto)
##       mpg          cylinders      displacement     horsepower        weight    
##  Min.   : 9.00   Min.   :3.000   Min.   : 68.0   Min.   : 46.0   Min.   :1613  
##  1st Qu.:17.00   1st Qu.:4.000   1st Qu.:105.0   1st Qu.: 75.0   1st Qu.:2225  
##  Median :22.75   Median :4.000   Median :151.0   Median : 93.5   Median :2804  
##  Mean   :23.45   Mean   :5.472   Mean   :194.4   Mean   :104.5   Mean   :2978  
##  3rd Qu.:29.00   3rd Qu.:8.000   3rd Qu.:275.8   3rd Qu.:126.0   3rd Qu.:3615  
##  Max.   :46.60   Max.   :8.000   Max.   :455.0   Max.   :230.0   Max.   :5140  
##                                                                                
##   acceleration        year           origin                      name    
##  Min.   : 8.00   Min.   :70.00   Min.   :1.000   amc matador       :  5  
##  1st Qu.:13.78   1st Qu.:73.00   1st Qu.:1.000   ford pinto        :  5  
##  Median :15.50   Median :76.00   Median :1.000   toyota corolla    :  5  
##  Mean   :15.54   Mean   :75.98   Mean   :1.577   amc gremlin       :  4  
##  3rd Qu.:17.02   3rd Qu.:79.00   3rd Qu.:2.000   amc hornet        :  4  
##  Max.   :24.80   Max.   :82.00   Max.   :3.000   chevrolet chevette:  4  
##                                                  (Other)           :365

(a) Create a binary variable that takes on a 1 for cars with gas mileage above the median, and a 0 for cars with gas mileage below the median.

med.mpg=median(Auto$mpg)
bin.var=ifelse(Auto$mpg>med.mpg,1,0)
Auto$mpglevel=as.factor(bin.var)
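
A quick sanity check (an addition to the original answer): cutting at the median should leave the two classes roughly balanced.

table(Auto$mpglevel)  # counts of 0s and 1s; close to a 50/50 split by construction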

(b) Fit a support vector classifier to the data with various values of cost, in order to predict whether a car gets high or low gas mileage. Report the cross-validation errors associated with different values of this parameter. Comment on your results. Note you will need to fit the classifier without the gas mileage variable to produce sensible results.

set.seed(333)
# per the note above, mpg itself (which defines mpglevel) is excluded from the predictors
tune.oof = tune(svm, mpglevel ~ . - mpg, data = Auto, kernel = "linear",
    ranges = list(cost = c(0.01, 0.1, 1, 5, 10, 100)))
summary(tune.oof)
## 
## Parameter tuning of 'svm':
## 
## - sampling method: 10-fold cross validation 
## 
## - best parameters:
##  cost
##     1
## 
## - best performance: 0.01025641 
## 
## - Detailed performance results:
##    cost      error dispersion
## 1 1e-02 0.07397436 0.03896185
## 2 1e-01 0.04089744 0.04041773
## 3 1e+00 0.01025641 0.01324097
## 4 5e+00 0.01532051 0.01788871
## 5 1e+01 0.02301282 0.02549182
## 6 1e+02 0.03326923 0.02974993

Cross-validation favors an intermediate cost: the error drops from about 7.4% at cost = 0.01 to its minimum at cost = 1, then creeps back up as larger costs begin to overfit.

(c) Now repeat (b), this time using SVMs with radial and polynomial basis kernels, with different values of gamma and degree and cost. Comment on your results.

set.seed(221)
tune.ooooff = tune(svm, mpglevel ~ . - mpg, data = Auto, kernel = "polynomial",
    ranges = list(cost = c(0.1, 1, 5, 10), degree = c(2, 3, 4)))
summary(tune.ooooff)
## 
## Parameter tuning of 'svm':
## 
## - sampling method: 10-fold cross validation 
## 
## - best parameters:
##  cost degree
##    10      2
## 
## - best performance: 0.5330769 
## 
## - Detailed performance results:
##    cost degree     error dispersion
## 1   0.1      2 0.5638462 0.03963569
## 2   1.0      2 0.5638462 0.03963569
## 3   5.0      2 0.5638462 0.03963569
## 4  10.0      2 0.5330769 0.07780372
## 5   0.1      3 0.5638462 0.03963569
## 6   1.0      3 0.5638462 0.03963569
## 7   5.0      3 0.5638462 0.03963569
## 8  10.0      3 0.5638462 0.03963569
## 9   0.1      4 0.5638462 0.03963569
## 10  1.0      4 0.5638462 0.03963569
## 11  5.0      4 0.5638462 0.03963569
## 12 10.0      4 0.5638462 0.03963569
set.seed(48)
# the parameter must be spelled "gamma": a misspelling is silently swallowed by
# svm's "..." argument and the kernel width is then never actually varied
tune.oooofff = tune(svm, mpglevel ~ . - mpg, data = Auto, kernel = "radial",
    ranges = list(cost = c(0.1, 1, 5, 10), gamma = c(0.01, 1, 5, 10, 100)))
summary(tune.oooofff)
## 
## Parameter tuning of 'svm':
## 
## - sampling method: 10-fold cross validation 
## 
## - best parameters:
##  cost gama
##    10 0.01
## 
## - best performance: 0.05878205 
## 
## - Detailed performance results:
##    cost  gama      error dispersion
## 1   0.1 1e-02 0.09955128 0.04443422
## 2   1.0 1e-02 0.07660256 0.04018600
## 3   5.0 1e-02 0.07147436 0.03381912
## 4  10.0 1e-02 0.05878205 0.02985043
## 5   0.1 1e+00 0.09955128 0.04443422
## 6   1.0 1e+00 0.07660256 0.04018600
## 7   5.0 1e+00 0.07147436 0.03381912
## 8  10.0 1e+00 0.05878205 0.02985043
## 9   0.1 5e+00 0.09955128 0.04443422
## 10  1.0 5e+00 0.07660256 0.04018600
## 11  5.0 5e+00 0.07147436 0.03381912
## 12 10.0 5e+00 0.05878205 0.02985043
## 13  0.1 1e+01 0.09955128 0.04443422
## 14  1.0 1e+01 0.07660256 0.04018600
## 15  5.0 1e+01 0.07147436 0.03381912
## 16 10.0 1e+01 0.05878205 0.02985043
## 17  0.1 1e+02 0.09955128 0.04443422
## 18  1.0 1e+02 0.07660256 0.04018600
## 19  5.0 1e+02 0.07147436 0.03381912
## 20 10.0 1e+02 0.05878205 0.02985043

The polynomial kernel does poorly here, never getting below 53% cross-validation error, while the radial kernel reaches about 5.9% at cost = 10. (The errors repeat across the gamma column in the printout because of the misspelled argument corrected in the code above.)

(d) Make some plots to back up your assertions in (b) and (c).

Hint: In the lab, we used the plot() function for svm objects only in cases with p = 2. When p > 2, you can use the plot() function to create plots displaying pairs of variables at a time.

Essentially, instead of typing

> plot(svmfit, dat)

where svmfit contains your fitted model and dat is a data frame containing your data, you can type

> plot(svmfit, dat, x1 ~ x4)

in order to plot just the first and fourth variables. However, you must replace x1 and x4 with the correct variable names. To find out more, type ?plot.svm.

svm.lin = svm(mpglevel ~ . - mpg, data = Auto, kernel = "linear", cost = 1)
svm.poly = svm(mpglevel ~ . - mpg, data = Auto, kernel = "polynomial", cost = 10,
    degree = 2)
svm.rad = svm(mpglevel ~ . - mpg, data = Auto, kernel = "radial", cost = 10, gamma = 0.01)
plotpairs = function(fit) {
    # mpg is no longer a predictor, so it cannot serve as a plotting axis; each
    # remaining predictor is plotted against weight (an arbitrary common axis)
    preds = names(Auto)[!(names(Auto) %in% c("mpg", "mpglevel", "name", "weight"))]
    for (pred in preds) {
        plot(fit, Auto, as.formula(paste("weight ~", pred)))
    }
}
plotpairs(svm.lin)

plotpairs(svm.poly)

plotpairs(svm.rad)

Problem 8

This problem involves the OJ data set, which is part of the ISLR2 package.

library(ISLR)
library(e1071)

(a) Create a training set containing a random sample of 800 observations, and a test set containing the remaining observations.

set.seed(96)
train = sample(dim(OJ)[1], 800)
OJ.train = OJ[train, ]
OJ.test = OJ[-train, ]

(b) Fit a support vector classifier to the training data using cost = 0.01, with Purchase as the response and the other variables as predictors. Use the summary() function to produce summary statistics, and describe the results obtained.

OJ.svm.lin=svm(Purchase~., data = OJ.train, kernel="linear", cost=.01)
summary(OJ.svm.lin)
## 
## Call:
## svm(formula = Purchase ~ ., data = OJ.train, kernel = "linear", cost = 0.01)
## 
## 
## Parameters:
##    SVM-Type:  C-classification 
##  SVM-Kernel:  linear 
##        cost:  0.01 
## 
## Number of Support Vectors:  433
## 
##  ( 217 216 )
## 
## 
## Number of Classes:  2 
## 
## Levels: 
##  CH MM

The small cost produces a wide margin, so a large fraction of the training data, 433 of the 800 observations (217 on the CH side, 216 on MM), serve as support vectors.

(c) What are the training and test error rates?

train.pred=predict(OJ.svm.lin, OJ.train)
table(OJ.train$Purchase, train.pred)
##     train.pred
##       CH  MM
##   CH 436  52
##   MM  78 234
test.pred=predict(OJ.svm.lin, OJ.test)
table(OJ.test$Purchase, test.pred)
##     test.pred
##       CH  MM
##   CH 144  21
##   MM  29  76
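
The error rates follow directly from these tables; a convenient way to compute them (an addition to the original answer):

mean(train.pred != OJ.train$Purchase)  # (52 + 78)/800 = 0.1625
mean(test.pred != OJ.test$Purchase)    # (21 + 29)/270 ≈ 0.1852

So the training error rate is about 16.3% and the test error rate about 18.5%.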

(d) Use the tune() function to select an optimal cost. Consider values in the range 0.01 to 10.

set.seed(1554)
tune.ooooofffff=tune(svm,Purchase~., data = OJ.train, kernel = "linear", ranges = list(cost=10^seq(-2,1, by = .25)))
summary(tune.ooooofffff)
## 
## Parameter tuning of 'svm':
## 
## - sampling method: 10-fold cross validation 
## 
## - best parameters:
##       cost
##  0.5623413
## 
## - best performance: 0.1575 
## 
## - Detailed performance results:
##           cost   error dispersion
## 1   0.01000000 0.17125 0.03682259
## 2   0.01778279 0.16875 0.03596391
## 3   0.03162278 0.16250 0.03435921
## 4   0.05623413 0.16375 0.03508422
## 5   0.10000000 0.16375 0.03304563
## 6   0.17782794 0.16250 0.03535534
## 7   0.31622777 0.15875 0.03387579
## 8   0.56234133 0.15750 0.03641962
## 9   1.00000000 0.16500 0.03670453
## 10  1.77827941 0.16625 0.04210189
## 11  3.16227766 0.16625 0.04126894
## 12  5.62341325 0.17000 0.03184162
## 13 10.00000000 0.16750 0.03545341

(e) Compute the training and test error rates using this new value for cost.

svm.linear = svm(Purchase ~ ., kernel = "linear", data = OJ.train, cost = tune.ooooofffff$best.parameters$cost)
train.pred = predict(svm.linear, OJ.train)
table(OJ.train$Purchase, train.pred)
##     train.pred
##       CH  MM
##   CH 433  55
##   MM  70 242
test.pred = predict(svm.linear, OJ.test)
table(OJ.test$Purchase, test.pred)
##     test.pred
##       CH  MM
##   CH 141  24
##   MM  28  77
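
Computed the same way as in (c):

mean(train.pred != OJ.train$Purchase)  # (55 + 70)/800 ≈ 0.1563
mean(test.pred != OJ.test$Purchase)    # (24 + 28)/270 ≈ 0.1926

Tuning cost lowers the training error slightly, but on this split the test error creeps up relative to cost = 0.01.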

(f) Repeat parts (b) through (e) using a support vector machine with a radial kernel. Use the default value for gamma.

set.seed(410)
svm.radial=svm(Purchase~., data = OJ.train, kernel="radial")
summary(svm.radial)
## 
## Call:
## svm(formula = Purchase ~ ., data = OJ.train, kernel = "radial")
## 
## 
## Parameters:
##    SVM-Type:  C-classification 
##  SVM-Kernel:  radial 
##        cost:  1 
## 
## Number of Support Vectors:  370
## 
##  ( 188 182 )
## 
## 
## Number of Classes:  2 
## 
## Levels: 
##  CH MM
train.pred=predict(svm.radial, OJ.train)
table(OJ.train$Purchase, train.pred)
##     train.pred
##       CH  MM
##   CH 448  40
##   MM  77 235
test.pred=predict(svm.radial, OJ.test)
table(OJ.test$Purchase, test.pred)
##     test.pred
##       CH  MM
##   CH 146  19
##   MM  33  72
set.seed(755)
tune.ooooooffffff=tune(svm,Purchase~., data = OJ.train, kernel = "radial", ranges = list(cost=10^seq(-2,1, by = .25)))
summary(tune.ooooooffffff)
## 
## Parameter tuning of 'svm':
## 
## - sampling method: 10-fold cross validation 
## 
## - best parameters:
##       cost
##  0.5623413
## 
## - best performance: 0.16875 
## 
## - Detailed performance results:
##           cost   error dispersion
## 1   0.01000000 0.39000 0.04851976
## 2   0.01778279 0.39000 0.04851976
## 3   0.03162278 0.35125 0.06883202
## 4   0.05623413 0.20250 0.03670453
## 5   0.10000000 0.18125 0.03596391
## 6   0.17782794 0.17500 0.03864008
## 7   0.31622777 0.17125 0.04126894
## 8   0.56234133 0.16875 0.03076005
## 9   1.00000000 0.17125 0.02949223
## 10  1.77827941 0.17375 0.03408018
## 11  3.16227766 0.17000 0.03291403
## 12  5.62341325 0.17250 0.03670453
## 13 10.00000000 0.17625 0.03884174
svm.radial = svm(Purchase ~ ., data = OJ.train, kernel = "radial", cost = tune.ooooooffffff$best.parameters$cost)
train.pred=predict(svm.radial, OJ.train)
table(OJ.train$Purchase, train.pred)
##     train.pred
##       CH  MM
##   CH 445  43
##   MM  79 233
test.pred=predict(svm.radial, OJ.test)
table(OJ.test$Purchase, test.pred)
##     test.pred
##       CH  MM
##   CH 145  20
##   MM  30  75
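
Error rates for the tuned radial fit:

mean(train.pred != OJ.train$Purchase)  # (43 + 79)/800 = 0.1525
mean(test.pred != OJ.test$Purchase)    # (20 + 30)/270 ≈ 0.1852

The tuned fit edges out the default radial kernel (52/270 ≈ 19.3% test error) and matches the 18.5% test error of the linear classifier from (c).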

(g) Repeat parts (b) through (e) using a support vector machine with a polynomial kernel. Set degree = 2.

set.seed(21)
# note: strictly, part (d) should be repeated to re-tune cost for the polynomial
# kernel; the cost tuned for the radial kernel is reused here
svm.poly = svm(Purchase ~ ., data = OJ.train, kernel = "polynomial", cost = tune.ooooooffffff$best.parameters$cost, degree = 2)
summary(svm.poly)
## 
## Call:
## svm(formula = Purchase ~ ., data = OJ.train, kernel = "polynomial", 
##     cost = tune.ooooooffffff$best.parameters$cost, degree = 2)
## 
## 
## Parameters:
##    SVM-Type:  C-classification 
##  SVM-Kernel:  polynomial 
##        cost:  0.5623413 
##      degree:  2 
##      coef.0:  0 
## 
## Number of Support Vectors:  489
## 
##  ( 248 241 )
## 
## 
## Number of Classes:  2 
## 
## Levels: 
##  CH MM
train.pred=predict(svm.poly, OJ.train)
table(OJ.train$Purchase, train.pred)
##     train.pred
##       CH  MM
##   CH 463  25
##   MM 125 187
test.pred=predict(svm.poly, OJ.test)
table(OJ.test$Purchase, test.pred)
##     test.pred
##       CH  MM
##   CH 152  13
##   MM  44  61
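
And for the polynomial kernel:

mean(train.pred != OJ.train$Purchase)  # (25 + 125)/800 = 0.1875
mean(test.pred != OJ.test$Purchase)    # (13 + 44)/270 ≈ 0.2111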

(h) Overall, which approach seems to give the best results on this data?

Judging by test error, the polynomial kernel actually does worst (57/270 ≈ 21.1%). The linear classifier with cost = 0.01 and the tuned radial SVM tie for the best test performance (50/270 ≈ 18.5%), and the tuned radial fit also has a slightly lower training error, so overall the radial kernel with tuned cost seems the best choice on this data.