Assignment 8
Question 5
We have seen that we can fit an SVM with a non-linear kernel in order to perform classification using a non-linear decision boundary. We will now see that we can also obtain a non-linear decision boundary by performing logistic regression using non-linear transformations of the features.
(a) Generate a data set with n = 500 and p = 2, such that the observations belong to two classes with a quadratic decision boundary between them. For instance, you can do this as follows:
set.seed(421)
x1 = runif(500) - 0.5
x2 = runif(500) - 0.5
y = 1 * (x1^2 - x2^2 > 0)
(b) Plot the observations, colored according to their class labels. Your plot should display X1 on the x-axis and X2 on the y-axis.
plot(x1[y == 0], x2[y == 0], col = "red", xlab = "X1", ylab = "X2", pch = "+")
points(x1[y == 1], x2[y == 1], col = "blue", pch = 4)
(c) Fit a logistic regression model to the data, using X1 and X2 as predictors.
lm.fit = glm(y ~ x1 + x2, family = binomial)
summary(lm.fit)
Call:
glm(formula = y ~ x1 + x2, family = binomial)
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.11999 0.08971 1.338 0.181
x1 -0.16881 0.30854 -0.547 0.584
x2 -0.08198 0.31476 -0.260 0.795
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 691.35 on 499 degrees of freedom
Residual deviance: 690.99 on 497 degrees of freedom
AIC: 696.99
Number of Fisher Scoring iterations: 3
(d) Apply this model to the training data in order to obtain a predicted class label for each training observation. Plot the observations, colored according to the predicted class labels. The decision boundary should be linear.
data = data.frame(x1 = x1, x2 = x2, y = y)
lm.prob = predict(lm.fit, data, type = "response")
lm.pred = ifelse(lm.prob > 0.52, 1, 0)
data.pos = data[lm.pred == 1, ]
data.neg = data[lm.pred == 0, ]
plot(data.pos$x1, data.pos$x2, col = "blue", xlab = "X1", ylab = "X2", pch = "+")
points(data.neg$x1, data.neg$x2, col = "red", pch = 4)
(e) Now fit a logistic regression model to the data using non-linear functions of X1 and X2 as predictors (e.g. X1^2, X1 × X2, log(X2), and so forth).
lm.fit = glm(y ~ poly(x1, 2) + poly(x2, 2) + I(x1 * x2), data = data, family = binomial)
Warning: glm.fit: algorithm did not converge
Warning: glm.fit: fitted probabilities numerically 0 or 1 occurred
These warnings occur because the quadratic terms can separate the two classes perfectly (y is a deterministic function of x1^2 - x2^2), so the fitted probabilities are pushed to 0 and 1.
(f) Apply this model to the training data in order to obtain a predicted class label for each training observation. Plot the observations, colored according to the predicted class labels. The decision boundary should be obviously non-linear. If it is not, then repeat (a)–(e) until you come up with an example in which the predicted class labels are obviously non-linear.
lm.prob = predict(lm.fit, data, type = "response")
lm.pred = ifelse(lm.prob > 0.5, 1, 0)
data.pos = data[lm.pred == 1, ]
data.neg = data[lm.pred == 0, ]
plot(data.pos$x1, data.pos$x2, col = "blue", xlab = "X1", ylab = "X2", pch = "+")
points(data.neg$x1, data.neg$x2, col = "red", pch = 4)
(g) Fit a support vector classifier to the data with X1 and X2 as predictors. Obtain a class prediction for each training observation. Plot the observations, colored according to the predicted class labels.
library(e1071)
svm.fit = svm(as.factor(y) ~ x1 + x2, data, kernel = "linear", cost = 0.1)
svm.pred = predict(svm.fit, data)
data.pos = data[svm.pred == 1, ]
data.neg = data[svm.pred == 0, ]
plot(data.pos$x1, data.pos$x2, col = "blue", xlab = "X1", ylab = "X2", pch = "+")
points(data.neg$x1, data.neg$x2, col = "red", pch = 4)
(h) Fit an SVM using a non-linear kernel to the data. Obtain a class prediction for each training observation. Plot the observations, colored according to the predicted class labels.
svm.fit = svm(as.factor(y) ~ x1 + x2, data, gamma = 1)  # radial kernel (the svm() default), gamma = 1
svm.pred = predict(svm.fit, data)
data.pos = data[svm.pred == 1, ]
data.neg = data[svm.pred == 0, ]
plot(data.pos$x1, data.pos$x2, col = "blue", xlab = "X1", ylab = "X2", pch = "+")
points(data.neg$x1, data.neg$x2, col = "red", pch = 4)
(i) Comment on your results.
This experiment shows that an SVM with a non-linear kernel recovers the non-linear decision boundary with little effort, whereas logistic regression only does so once suitable non-linear transformations of the features are supplied by hand; tuning the SVM comes down to choosing the gamma (and cost) parameters, which is straightforward with cross-validation.
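As a quick check on this comment, the training error rates of the two non-linear fits can be compared directly. A minimal sketch, assuming the most recent lm.fit (quadratic logistic regression) and svm.fit (radial-kernel SVM) objects from the chunks above are still in the workspace:
log.pred = ifelse(predict(lm.fit, data, type = "response") > 0.5, 1, 0)
svm.pred = predict(svm.fit, data)
mean(log.pred != y)                     # training error, quadratic logistic regression
mean(svm.pred != as.factor(y))          # training error, radial-kernel SVM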
Question 7
In this problem, you will use support vector approaches in order to predict whether a given car gets high or low gas mileage based on the Auto data set.
(a) Create a binary variable that takes on a 1 for cars with gas mileage above the median, and a 0 for cars with gas mileage below the median.
library(ISLR2)
gas.med = median(Auto$mpg)
new.var = ifelse(Auto$mpg > gas.med, 1, 0)
Auto$mpglevel = as.factor(new.var)
(b) Fit a support vector classifier to the data with various values of cost, in order to predict whether a car gets high or low gas mileage. Report the cross-validation errors associated with different values of this parameter. Comment on your results.
library(e1071)
set.seed(3255)
tune.out = tune(svm, mpglevel ~ ., data = Auto, kernel = "linear",
    ranges = list(cost = c(0.01, 0.1, 1, 5, 10, 100)))
summary(tune.out)
Parameter tuning of 'svm':
- sampling method: 10-fold cross validation
- best parameters:
cost
1
- best performance: 0.01269231
- Detailed performance results:
cost error dispersion
1 1e-02 0.07397436 0.06863413
2 1e-01 0.05102564 0.06923024
3 1e+00 0.01269231 0.02154160
4 5e+00 0.01519231 0.01760469
5 1e+01 0.02025641 0.02303772
6 1e+02 0.03294872 0.02898463
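The lowest cross-validation error (about 1.3%) is obtained at cost = 1; both smaller and larger values of cost give somewhat higher errors. For reference, the selected cost and its CV error can also be pulled directly out of the tune object; a small sketch using the best.parameters, best.performance and best.model fields returned by e1071::tune:
tune.out$best.parameters     # cost = 1
tune.out$best.performance    # CV error of the best model
bestmod = tune.out$best.model
summary(bestmod)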
(c) Now repeat (b), this time using SVMs with radial and polynomial basis kernels, with different values of gamma and degree and cost. Comment on your results.
set.seed(21)
tune.out = tune(svm, mpglevel ~ ., data = Auto, kernel = "polynomial",
    ranges = list(cost = c(0.1, 1, 5, 10), degree = c(2, 3, 4)))
summary(tune.out)
Parameter tuning of 'svm':
- sampling method: 10-fold cross validation
- best parameters:
cost degree
10 2
- best performance: 0.5435897
- Detailed performance results:
cost degree error dispersion
1 0.1 2 0.5587821 0.04538579
2 1.0 2 0.5587821 0.04538579
3 5.0 2 0.5587821 0.04538579
4 10.0 2 0.5435897 0.05611162
5 0.1 3 0.5587821 0.04538579
6 1.0 3 0.5587821 0.04538579
7 5.0 3 0.5587821 0.04538579
8 10.0 3 0.5587821 0.04538579
9 0.1 4 0.5587821 0.04538579
10 1.0 4 0.5587821 0.04538579
11 5.0 4 0.5587821 0.04538579
12 10.0 4 0.5587821 0.04538579
set.seed(463)
tune.out = tune(svm, mpglevel ~ ., data = Auto, kernel = "radial",
    ranges = list(cost = c(0.1, 1, 5, 10), gamma = c(0.01, 0.1, 1, 5, 10, 100)))
summary(tune.out)
Parameter tuning of 'svm':
- sampling method: 10-fold cross validation
- best parameters:
cost gamma
10 0.01
- best performance: 0.02551282
- Detailed performance results:
cost gamma error dispersion
1 0.1 1e-02 0.09429487 0.04814900
2 1.0 1e-02 0.07897436 0.03875105
3 5.0 1e-02 0.05352564 0.02532795
4 10.0 1e-02 0.02551282 0.02417610
5 0.1 1e-01 0.07891026 0.03847631
6 1.0 1e-01 0.05602564 0.02881876
7 5.0 1e-01 0.03826923 0.03252085
8 10.0 1e-01 0.03320513 0.02964746
9 0.1 1e+00 0.57660256 0.05479863
10 1.0 1e+00 0.06628205 0.02996211
11 5.0 1e+00 0.06115385 0.02733573
12 10.0 1e+00 0.06115385 0.02733573
13 0.1 5e+00 0.57660256 0.05479863
14 1.0 5e+00 0.51538462 0.06642516
15 5.0 5e+00 0.50775641 0.07152757
16 10.0 5e+00 0.50775641 0.07152757
17 0.1 1e+01 0.57660256 0.05479863
18 1.0 1e+01 0.53833333 0.05640443
19 5.0 1e+01 0.53070513 0.05708644
20 10.0 1e+01 0.53070513 0.05708644
21 0.1 1e+02 0.57660256 0.05479863
22 1.0 1e+02 0.57660256 0.05479863
23 5.0 1e+02 0.57660256 0.05479863
24 10.0 1e+02 0.57660256 0.05479863
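Collecting the best cross-validated error for each kernel (values copied from the three summaries above) makes the comparison explicit; a small sketch:
best.cv = c(linear = 0.01269231, polynomial = 0.5435897, radial = 0.02551282)
sort(best.cv)
With these grids, the linear and radial kernels both do well (roughly 1-3% CV error), while the polynomial kernel performs poorly (over 50% error) for every cost/degree combination tried.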
(d) Make some plots to back up your assertions in (b) and (c). Hint: In the lab, we used the plot() function for svm objects only in cases with p = 2. When p > 2, you can use the plot() function to create plots displaying pairs of variables at a time. Essentially, instead of typing plot(svmfit, dat), where svmfit contains your fitted model and dat is a data frame containing your data, you can type plot(svmfit, dat, x1 ~ x4) in order to plot just the first and fourth variables. However, you must replace x1 and x4 with the correct variable names. To find out more, type ?plot.svm.
svm.linear = svm(mpglevel ~ ., data = Auto, kernel = "linear", cost = 1)
svm.poly = svm(mpglevel ~ ., data = Auto, kernel = "polynomial", cost = 10, degree = 2)
svm.radial = svm(mpglevel ~ ., data = Auto, kernel = "radial", cost = 10, gamma = 0.01)
# Plot the fitted SVM against mpg and every remaining predictor in turn
# (skipping mpg itself, the derived mpglevel variable, and the name column).
plotpairs = function(fit) {
    for (name in names(Auto)[!(names(Auto) %in% c("mpg", "mpglevel", "name"))]) {
        plot(fit, Auto, as.formula(paste("mpg~", name, sep = "")))
    }
}
plotpairs(svm.linear)
plotpairs(svm.poly)
plotpairs(svm.radial)
Question 8
This problem involves the OJ data set which is part of the ISLR package.
(a) Create a training set containing a random sample of 800 observations, and a test set containing the remaining observations.
library(ISLR2)
set.seed(9004)
train = sample(dim(OJ)[1], 800)
OJ.train = OJ[train, ]
OJ.test = OJ[-train, ]
(b) Fit a support vector classifier to the training data using cost = 0.01, with Purchase as the response and the other variables as predictors. Use the summary() function to produce summary statistics, and describe the results obtained.
library(e1071)
svm.linear = svm(Purchase ~ ., kernel = "linear", data = OJ.train, cost = 0.01)
summary(svm.linear)
Call:
svm(formula = Purchase ~ ., data = OJ.train, kernel = "linear", cost = 0.01)
Parameters:
SVM-Type: C-classification
SVM-Kernel: linear
cost: 0.01
Number of Support Vectors: 442
( 222 220 )
Number of Classes: 2
Levels:
CH MM
(c) What are the training and test error rates?
train.pred = predict(svm.linear, OJ.train)
table(OJ.train$Purchase, train.pred)
    train.pred
      CH  MM
  CH 432  51
  MM  80 237
test.pred = predict(svm.linear, OJ.test)
table(OJ.test$Purchase, test.pred)
    test.pred
      CH  MM
  CH 146  24
  MM  22  78
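The error rates implied by these confusion matrices are (51 + 80)/800 ≈ 16.4% on the training set and (24 + 22)/270 ≈ 17.0% on the test set. They can also be computed directly from the predictions; a minimal sketch (the same pattern applies to the refitted models in parts (e) and (f)):
mean(train.pred != OJ.train$Purchase)   # training error rate
mean(test.pred != OJ.test$Purchase)     # test error rate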
(d) Use the tune() function to select an optimal cost. Consider values in the range 0.01 to 10.
set.seed(1554)
tune.out = tune(svm, Purchase ~ ., data = OJ.train, kernel = "linear",
    ranges = list(cost = 10^seq(-2, 1, by = 0.25)))
summary(tune.out)
Parameter tuning of 'svm':
- sampling method: 10-fold cross validation
- best parameters:
cost
3.162278
- best performance: 0.1625
- Detailed performance results:
cost error dispersion
1 0.01000000 0.16750 0.03395258
2 0.01778279 0.16875 0.02960973
3 0.03162278 0.16625 0.02638523
4 0.05623413 0.16875 0.03076005
5 0.10000000 0.16875 0.02901748
6 0.17782794 0.16750 0.02838231
7 0.31622777 0.17000 0.02898755
8 0.56234133 0.16875 0.02841288
9 1.00000000 0.16500 0.03106892
10 1.77827941 0.16500 0.03106892
11 3.16227766 0.16250 0.03118048
12 5.62341325 0.16375 0.02664713
13 10.00000000 0.16750 0.02581989
(e) Compute the training and test error rates using this new value for cost.
svm.linear = svm(Purchase ~ ., kernel = "linear", data = OJ.train, cost = tune.out$best.parameters$cost)
train.pred = predict(svm.linear, OJ.train)
table(OJ.train$Purchase, train.pred)
    train.pred
      CH  MM
  CH 428  55
  MM  74 243
test.pred = predict(svm.linear, OJ.test)
table(OJ.test$Purchase, test.pred)
    test.pred
      CH  MM
  CH 146  24
  MM  20  80
(f) Repeat parts (b) through (e) using a support vector machine with a radial kernel. Use the default value for gamma.
set.seed(410)
svm.radial = svm(Purchase ~ ., data = OJ.train, kernel = "radial")
summary(svm.radial)
Call:
svm(formula = Purchase ~ ., data = OJ.train, kernel = "radial")
Parameters:
SVM-Type: C-classification
SVM-Kernel: radial
cost: 1
Number of Support Vectors: 371
( 188 183 )
Number of Classes: 2
Levels:
CH MM
train.pred = predict(svm.radial, OJ.train)
table(OJ.train$Purchase, train.pred)
    train.pred
      CH  MM
  CH 441  42
  MM  74 243
test.pred = predict(svm.radial, OJ.test)
table(OJ.test$Purchase, test.pred)
    test.pred
      CH  MM
  CH 148  22
  MM  27  73
set.seed(755)
tune.out = tune(svm, Purchase ~ ., data = OJ.train, kernel = "radial",
    ranges = list(cost = 10^seq(-2, 1, by = 0.25)))
summary(tune.out)
Parameter tuning of 'svm':
- sampling method: 10-fold cross validation
- best parameters:
cost
0.3162278
- best performance: 0.1675
- Detailed performance results:
cost error dispersion
1 0.01000000 0.39625 0.06615691
2 0.01778279 0.39625 0.06615691
3 0.03162278 0.35375 0.09754807
4 0.05623413 0.20000 0.04249183
5 0.10000000 0.17750 0.04073969
6 0.17782794 0.17125 0.03120831
7 0.31622777 0.16750 0.04216370
8 0.56234133 0.16750 0.03782269
9 1.00000000 0.17250 0.03670453
10 1.77827941 0.17750 0.03374743
11 3.16227766 0.18000 0.04005205
12 5.62341325 0.18000 0.03446012
13 10.00000000 0.18625 0.04427267
svm.radial = svm(Purchase ~ ., data = OJ.train, kernel = "radial", cost = tune.out$best.parameters$cost)
train.pred = predict(svm.radial, OJ.train)
table(OJ.train$Purchase, train.pred)
    train.pred
      CH  MM
  CH 440  43
  MM  81 236
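The section breaks off here with the training confusion matrix for the tuned radial fit; the matching test-set predictions would follow the same pattern used in parts (c) and (e). A sketch only (output not reproduced here):
test.pred = predict(svm.radial, OJ.test)
table(OJ.test$Purchase, test.pred)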