5. We have seen that we can fit an SVM with a non-linear
kernel in order to perform classification using a non-linear decision
boundary. We will now see that we can also obtain a non-linear decision
boundary by performing logistic regression using non-linear
transformations of the features.
(a) Generate a data set with n = 500 and p = 2, such that the
observations belong to two classes with a quadratic decision boundary
between them. For instance, you can do this as follows:
> x1 <- runif(500) - 0.5
> x2 <- runif(500) - 0.5
> y <- 1 * (x1^2 - x2^2 > 0)
set.seed(2020)                      # for reproducibility
x1 <- runif(500) - 0.5              # n = 500 observations, p = 2 features
x2 <- runif(500) - 0.5
y <- ifelse(x1^2 - x2^2 > 0, 1, 0)  # quadratic boundary: x1^2 - x2^2 = 0
data <- data.frame(x1 = x1, x2 = x2, y = as.factor(y))
(b) Plot the observations, colored according to their class labels. Your plot should display X1 on the x-axis, and X2 on the y-axis.
library(ggplot2)
ggplot(data, aes(x = x1, y = x2, color = y)) +
  geom_point() +
  labs(title = "Original Class Labels")
(c) Fit a logistic regression model to the data, using X1 and X2 as predictors.
# glmnet is loaded now for the ridge fit in part (e); this part uses base glm()
library(glmnet)
## Loading required package: Matrix
## Loaded glmnet 4.1-9
glm_fit <- glm(y ~ x1 + x2, data = data, family = binomial)
(d) Apply this model to the training data in order to obtain a predicted class label for each training observation. Plot the observations, colored according to the predicted class labels. The decision boundary should be linear.
data$pred_glm_linear <- ifelse(predict(glm_fit, type = "response") > 0.5, 1, 0)
ggplot(data, aes(x = x1, y = x2, color = as.factor(pred_glm_linear))) +
  geom_point() +
  labs(title = "Logistic Regression (Linear Terms Only)")
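As a check, the implied boundary can be drawn directly from the fitted coefficients; a minimal sketch (at probability 0.5 the linear predictor is zero, so the boundary is a straight line):
# b0 + b1*x1 + b2*x2 = 0, i.e. x2 = -(b0 + b1*x1) / b2
b <- coef(glm_fit)
ggplot(data, aes(x = x1, y = x2, color = as.factor(pred_glm_linear))) +
  geom_point() +
  geom_abline(intercept = -b[1] / b[3], slope = -b[2] / b[3], linetype = "dashed") +
  labs(title = "Logistic Regression: Implied Linear Boundary")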
(e) Now fit a logistic regression model to the data using non-linear functions of X1 and X2 as predictors (e.g. X1^2, X1 × X2, log(X2), and so forth).
# Expanded design matrix with quadratic and interaction terms; alpha = 0 gives
# a ridge-penalized logistic regression, with lambda chosen by cross-validation
X <- model.matrix(y ~ x1 + x2 + I(x1^2) + I(x2^2) + I(x1 * x2), data = data)[, -1]
y_vec <- data$y
cv_fit <- cv.glmnet(X, y_vec, family = "binomial", alpha = 0)
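The penalized fit above is one choice; the exercise can equally be answered with an unpenalized glm() on the same expanded terms (a sketch; glm() may warn that fitted probabilities of 0 or 1 occurred, since the quadratic terms can separate these classes perfectly):
# Unpenalized alternative with the same non-linear terms
glm_quad <- glm(y ~ x1 + x2 + I(x1^2) + I(x2^2) + I(x1 * x2),
                data = data, family = binomial)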
(f) Apply this model to the training data in order to obtain a predicted class label for each training observation. Plot the observations, colored according to the predicted class labels. The decision boundary should be obviously non-linear. If it is not, then repeat (a)-(e) until you come up with an example in which the predicted class labels are obviously non-linear.
# Get predicted probabilities using best lambda
pred_probs <- predict(cv_fit, newx = X, s = "lambda.min", type = "response")
# Classify based on 0.5 threshold
data$pred_ridge <- ifelse(pred_probs > 0.5, 1, 0)
# Plot (ggplot2 is already loaded from part (b))
ggplot(data, aes(x = x1, y = x2, color = as.factor(pred_ridge))) +
  geom_point() +
  labs(title = "Ridge Logistic Regression (Non-linear Terms)", color = "Predicted Class")
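To make the shape of the fitted boundary explicit, the model can also be evaluated over a grid of (x1, x2) values and the predicted regions shaded; a sketch (the grid resolution is arbitrary):
# Evaluate the ridge fit on a fine grid and shade the predicted regions
grid <- expand.grid(x1 = seq(-0.5, 0.5, length.out = 200),
                    x2 = seq(-0.5, 0.5, length.out = 200))
grid_X <- model.matrix(~ x1 + x2 + I(x1^2) + I(x2^2) + I(x1 * x2), data = grid)[, -1]
grid$pred <- as.numeric(predict(cv_fit, newx = grid_X, s = "lambda.min",
                                type = "response") > 0.5)
ggplot() +
  geom_raster(data = grid, aes(x = x1, y = x2, fill = as.factor(pred)), alpha = 0.3) +
  geom_point(data = data, aes(x = x1, y = x2, color = y)) +
  labs(fill = "Predicted Region", color = "True Class")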
(g) Fit a support vector classifier to the data with X1 and X2 as predictors. Obtain a class prediction for each training observation. Plot the observations, colored according to the predicted class labels.
library(e1071)
# SVM with linear kernel
svm_linear <- svm(y ~ x1 + x2, data = data, kernel = "linear", cost = 1)
# Predict
data$pred_svm_linear <- predict(svm_linear)
# Plot
ggplot(data, aes(x = x1, y = x2, color = pred_svm_linear)) +
  geom_point() +
  labs(title = "SVM with Linear Kernel", color = "Predicted Class")
(h) Fit a SVM using a non-linear kernel to the data. Obtain a class prediction for each training observation. Plot the observations, colored according to the predicted class labels.
# SVM with radial (RBF) kernel
svm_rbf <- svm(y ~ x1 + x2, data = data, kernel = "radial", cost = 1, gamma = 1)
# Predict
data$pred_svm_rbf <- predict(svm_rbf)
# Plot
ggplot(data, aes(x = x1, y = x2, color = pred_svm_rbf)) +
  geom_point() +
  labs(title = "SVM with Radial Kernel", color = "Predicted Class")
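Since p = 2 here, e1071's own plot method is also available (a quick sketch; the formula chooses which variable goes on each axis, and support vectors are marked with "x"):
# Built-in e1071 plot: shaded predicted regions with support vectors marked
plot(svm_rbf, data, x2 ~ x1)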
(i) Comment on your results.
Based on the plots:
5(b): The original data shows a quadratic decision boundary, with
classes separated by x1^2 - x2^2 > 0.
5(d): The linear logistic regression produces a straight decision
boundary, misclassifying many points due to the non-linear true
boundary.
5(f): The ridge logistic regression with non-linear terms captures the
quadratic boundary more effectively, improving classification
accuracy.
5(g): The linear SVM also produces a linear boundary, similar to 5(d),
and struggles with the non-linear separation.
5(h): The SVM with a radial kernel closely matches the quadratic
boundary, offering the best classification performance.
Conclusion: Non-linear models (ridge logistic with polynomial terms and
radial SVM) outperform linear models (linear logistic and linear SVM)
due to the quadratic nature of the data, with the radial SVM providing
the most accurate predictions.
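These comparisons can be checked numerically; a small sketch computing the training accuracy of each fit from the prediction columns created above:
# Training accuracy for each model (columns created in (d), (f), (g), (h))
sapply(list(logistic_linear = data$pred_glm_linear,
            ridge_quadratic = data$pred_ridge,
            svm_linear      = data$pred_svm_linear,
            svm_radial      = data$pred_svm_rbf),
       function(p) mean(as.character(p) == as.character(data$y)))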
7. In this problem, you will use support vector approaches in
order to predict whether a given car gets high or low gas mileage based
on the Auto data set.
(a) Create a binary variable that takes on a 1 for cars with gas
mileage above the median, and a 0 for cars with gas mileage below the
median.
library(ISLR2)  # provides the Auto data set
Auto <- na.omit(Auto)
Auto$mpg01 <- ifelse(Auto$mpg > median(Auto$mpg), 1, 0)
Auto$mpg01 <- as.factor(Auto$mpg01)
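A quick sanity check: splitting at the median should leave the two classes roughly balanced.
# Class counts for the binary response
table(Auto$mpg01)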
(b) Fit a support vector classifier to the data with various values of cost, in order to predict whether a car gets high or low gas mileage. Report the cross-validation errors associated with different values of this parameter. Comment on your results. Note you will need to fit the classifier without the gas mileage variable to produce sensible results.
library(e1071)
set.seed(2020)
# Tune cost for a linear-kernel SVM with 5-fold CV (mpg excluded per the hint;
# parts (c)-(d) additionally drop the high-cardinality name factor)
tune_linear <- tune(svm, mpg01 ~ . - mpg, data = Auto, kernel = "linear",
                    ranges = list(cost = c(0.01, 0.1, 1, 5, 10)),
                    tunecontrol = tune.control(cross = 5))
summary(tune_linear)
##
## Parameter tuning of 'svm':
##
## - sampling method: 5-fold cross validation
##
## - best parameters:
## cost
## 0.01
##
## - best performance: 0.09681921
##
## - Detailed performance results:
## cost error dispersion
## 1 0.01 0.09681921 0.02587303
## 2 0.10 0.09685167 0.02447476
## 3 1.00 0.09688413 0.02285627
## 4 5.00 0.10191496 0.03192624
## 5 10.00 0.10447907 0.03121116
Comment on results: The lowest cross-validation error occurs at the smallest cost (0.01), although the differences among the three smallest costs are tiny (0.09682 vs. 0.09685 vs. 0.09688), so any of them would perform comparably. The pattern still suggests that a wider margin, which tolerates more misclassifications, works well for this data: the larger cost values perform slightly worse, consistent with mild overfitting as the model becomes less tolerant of errors. This points to some noise or class overlap, where a softer margin (lower cost) is more appropriate.
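The cross-validated winner is stored in the tune object, so it can be reused without refitting (a small sketch):
# Best linear classifier found by tune()
best_linear_cv <- tune_linear$best.model
summary(best_linear_cv)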
(c) Now repeat (b), this time using SVMs with radial and polynomial basis kernels, with different values of gamma and degree and cost. Comment on your results.
# Radial kernel SVM
tune_radial <- tune(svm, mpg01 ~ . - mpg - name, data = Auto, kernel = "radial",
                    ranges = list(cost = c(0.01, 0.1, 1, 5, 10),
                                  gamma = c(0.01, 0.1, 1)),
                    tunecontrol = tune.control(cross = 5))
summary(tune_radial)
##
## Parameter tuning of 'svm':
##
## - sampling method: 5-fold cross validation
##
## - best parameters:
## cost gamma
## 5 1
##
## - best performance: 0.06640701
##
## - Detailed performance results:
## cost gamma error dispersion
## 1 0.01 0.01 0.45121714 0.167041563
## 2 0.10 0.01 0.11230120 0.014456576
## 3 1.00 0.01 0.08932165 0.013074570
## 4 5.00 0.01 0.08672509 0.010608026
## 5 10.00 0.01 0.08928919 0.009086799
## 6 0.01 0.10 0.17825381 0.076173963
## 7 0.10 0.10 0.09188575 0.016991322
## 8 1.00 0.10 0.09185329 0.010879147
## 9 5.00 0.10 0.08425836 0.023552409
## 10 10.00 0.10 0.07656605 0.009360903
## 11 0.01 1.00 0.44096073 0.189878252
## 12 0.10 1.00 0.09198312 0.028206417
## 13 1.00 1.00 0.07406686 0.026498110
## 14 5.00 1.00 0.06640701 0.021304933
## 15 10.00 1.00 0.07153522 0.037117060
# Polynomial kernel SVM (degree = 2)
tune_poly <- tune(svm, mpg01 ~ . - mpg - name, data = Auto, kernel = "polynomial",
                  ranges = list(cost = c(0.01, 0.1, 1, 5, 10),
                                gamma = c(0.01, 0.1, 1)),
                  degree = 2,
                  tunecontrol = tune.control(cross = 5))
summary(tune_poly)
##
## Parameter tuning of 'svm':
##
## - sampling method: 5-fold cross validation
##
## - best parameters:
## cost gamma
## 5 1
##
## - best performance: 0.1711133
##
## - Detailed performance results:
## cost gamma error dispersion
## 1 0.01 0.01 0.5434599 0.02854541
## 2 0.10 0.01 0.5434599 0.02854541
## 3 1.00 0.01 0.5408958 0.02347389
## 4 5.00 0.01 0.3773775 0.07241380
## 5 10.00 0.01 0.3316456 0.05360474
## 6 0.01 0.10 0.5408958 0.02347389
## 7 0.10 0.10 0.3316456 0.05360474
## 8 1.00 0.10 0.2910419 0.04882669
## 9 5.00 0.10 0.1862707 0.02842499
## 10 10.00 0.10 0.1836741 0.01138507
## 11 0.01 1.00 0.2910419 0.04882669
## 12 0.10 1.00 0.1836741 0.01138507
## 13 1.00 1.00 0.1812399 0.02547445
## 14 5.00 1.00 0.1711133 0.03668459
## 15 10.00 1.00 0.1711133 0.03668459
Overall Comparison:
The radial kernel outperforms both the linear kernel (7(b)) and the
polynomial kernel (degree = 2), with the lowest cross-validation error
(0.06640701). This suggests the data likely contains non-linear patterns
best captured by the RBF kernel’s flexibility.
The polynomial kernel’s higher error (0.1711133) indicates that a
quadratic boundary (degree = 2) is insufficient, and higher degrees
might be worth exploring, though the radial kernel’s adaptability seems
more effective here.
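If higher degrees were explored, degree could simply be added to the tuning grid; a sketch (not run here, and results would vary with the CV folds):
# Tune cost, gamma, and degree jointly for the polynomial kernel
tune_poly_deg <- tune(svm, mpg01 ~ . - mpg - name, data = Auto,
                      kernel = "polynomial",
                      ranges = list(cost = c(1, 5, 10),
                                    gamma = c(0.1, 1),
                                    degree = c(2, 3, 4)),
                      tunecontrol = tune.control(cross = 5))
summary(tune_poly_deg)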
(d) Make some plots to back up your assertions in (b) and
(c).
Hint: In the lab, we used the plot() function for svm objects
only in cases with p = 2. When p > 2, you can use the plot() function
to create plots displaying pairs of variables at a time. Essentially,
instead of typing
> plot(svmfit, dat)
where svmfit contains your fitted model and dat is a data frame
containing your data, you can type
> plot(svmfit, dat, x1 ~ x4)
in order to plot just the first and fourth variables. However,
you must replace x1 and x4 with the correct variable names. To find out
more, type ?plot.svm.
# Prepare the data (mpg01 was already created in (a); repeated here so this
# chunk runs standalone), dropping mpg and name for modeling
Auto$mpg01 <- ifelse(Auto$mpg > median(Auto$mpg), 1, 0)
Auto$mpg01 <- as.factor(Auto$mpg01)
Auto_sub <- Auto[, !(names(Auto) %in% c("mpg", "name"))]
# Fit models (using previous best parameters)
set.seed(2020)
best_linear <- svm(mpg01 ~ ., data = Auto_sub, kernel = "linear", cost = 0.01)
best_radial <- svm(mpg01 ~ ., data = Auto_sub, kernel = "radial", cost = 5, gamma = 1)
best_poly <- svm(mpg01 ~ ., data = Auto_sub, kernel = "polynomial", cost = 5, gamma = 1, degree = 2)
# Generate plots; plot.svm draws its own title ("SVM classification plot") and,
# by default, holds the predictors not shown at 0 (numerics) or their first
# level (factors), so a main argument is unnecessary here
plot(best_linear, Auto_sub, horsepower ~ weight)
plot(best_radial, Auto_sub, horsepower ~ weight)
plot(best_poly, Auto_sub, horsepower ~ weight)
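Other predictor pairs can be inspected the same way, for example (remaining predictors again held at the plot.svm defaults):
plot(best_radial, Auto_sub, displacement ~ acceleration)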
8. This problem involves the OJ data set which is part of the
ISLR2 package.
(a) Create a training set containing a random sample of 800
observations, and a test set containing the remaining
observations.
library(ISLR2)
library(e1071)
set.seed(2020)
train_idx <- sample(1:nrow(OJ), 800)
train <- OJ[train_idx, ]
test <- OJ[-train_idx, ]
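A quick check of the split (OJ has 1070 rows, so the test set should have 270):
dim(train)
dim(test)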
(b) Fit a support vector classifier to the training data using cost = 0.01, with Purchase as the response and the other variables as predictors. Use the summary() function to produce summary statistics, and describe the results obtained.
library(e1071)
svm_fit <- svm(Purchase ~ ., data = train, kernel = "linear", cost = 0.01)
summary(svm_fit)
##
## Call:
## svm(formula = Purchase ~ ., data = train, kernel = "linear", cost = 0.01)
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: linear
## cost: 0.01
##
## Number of Support Vectors: 433
##
## ( 216 217 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
The support vector classifier fitted with a linear kernel and cost = 0.01 uses 433 support vectors (216 for CH, 217 for MM), over 54% of the 800 training observations. So many support vectors indicate a wide margin: the low cost tolerates many margin violations in exchange for a simpler, more stable fit, which suggests the classes are not cleanly linearly separable. The near-even split of support vectors between the two classes mirrors the roughly balanced class frequencies in the training set, and sets the stage for comparing training and test error rates.
(c) What are the training and test error rates?
# Training error
train_pred <- predict(svm_fit, train)
train_err <- mean(train_pred != train$Purchase)
cat("Training error rate:", train_err, "\n")
## Training error rate: 0.17125
# Test error
test_pred <- predict(svm_fit, test)
test_err <- mean(test_pred != test$Purchase)
cat("Test error rate:", test_err, "\n")
## Test error rate: 0.1851852
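A confusion matrix on the test set gives a finer view than the overall error rate (a quick check):
# Test-set confusion matrix for the cost = 0.01 linear fit
table(predicted = test_pred, actual = test$Purchase)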
(d) Use the tune() function to select an optimal cost. Consider values in the range 0.01 to 10.
tune_out <- tune(svm, Purchase ~ ., data = train, kernel = "linear",
                 ranges = list(cost = c(0.01, 0.1, 1, 5, 10)),
                 tunecontrol = tune.control(cross = 5))
summary(tune_out)
##
## Parameter tuning of 'svm':
##
## - sampling method: 5-fold cross validation
##
## - best parameters:
## cost
## 10
##
## - best performance: 0.16875
##
## - Detailed performance results:
## cost error dispersion
## 1 0.01 0.17375 0.02774043
## 2 0.10 0.17375 0.02270738
## 3 1.00 0.17000 0.02270738
## 4 5.00 0.17125 0.03112475
## 5 10.00 0.16875 0.03451675
best_cost <- tune_out$best.parameters$cost
cat("Optimal cost:", best_cost, "\n")
## Optimal cost: 10
(e) Compute the training and test error rates using this new value for cost.
svm_fit_tuned <- svm(Purchase ~ ., data = train, kernel = "linear", cost = best_cost)
# Training error
train_pred_tuned <- predict(svm_fit_tuned, train)
train_err_tuned <- mean(train_pred_tuned != train$Purchase)
cat("Tuned training error rate:", train_err_tuned, "\n")
## Tuned training error rate: 0.16125
# Test error
test_pred_tuned <- predict(svm_fit_tuned, test)
test_err_tuned <- mean(test_pred_tuned != test$Purchase)
cat("Tuned test error rate:", test_err_tuned, "\n")
## Tuned test error rate: 0.1814815
(f) Repeat parts (b) through (e) using a support vector machine with a radial kernel. Use the default value for gamma.
# (b) Fit radial SVM
svm_radial <- svm(Purchase ~ ., data = train, kernel = "radial", cost = 0.01)
summary(svm_radial)
##
## Call:
## svm(formula = Purchase ~ ., data = train, kernel = "radial", cost = 0.01)
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: radial
## cost: 0.01
##
## Number of Support Vectors: 604
##
## ( 304 300 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
# (c) Error rates
train_pred_radial <- predict(svm_radial, train)
train_err_radial <- mean(train_pred_radial != train$Purchase)
cat("Radial training error rate:", train_err_radial, "\n")
## Radial training error rate: 0.375
test_pred_radial <- predict(svm_radial, test)
test_err_radial <- mean(test_pred_radial != test$Purchase)
cat("Radial test error rate:", test_err_radial, "\n")
## Radial test error rate: 0.4333333
# (d) Tune cost
tune_radial <- tune(svm, Purchase ~ ., data = train, kernel = "radial",
                    ranges = list(cost = c(0.01, 0.1, 1, 5, 10)),
                    tunecontrol = tune.control(cross = 5))
summary(tune_radial)
##
## Parameter tuning of 'svm':
##
## - sampling method: 5-fold cross validation
##
## - best parameters:
## cost
## 1
##
## - best performance: 0.17125
##
## - Detailed performance results:
## cost error dispersion
## 1 0.01 0.37500 0.03186887
## 2 0.10 0.18625 0.03862884
## 3 1.00 0.17125 0.03440340
## 4 5.00 0.18375 0.03918107
## 5 10.00 0.18750 0.03507804
best_cost_radial <- tune_radial$best.parameters$cost
cat("Optimal cost for radial:", best_cost_radial, "\n")
## Optimal cost for radial: 1
# (e) Error rates with tuned cost
svm_radial_tuned <- svm(Purchase ~ ., data = train, kernel = "radial", cost = best_cost_radial)
train_pred_radial_tuned <- predict(svm_radial_tuned, train)
train_err_radial_tuned <- mean(train_pred_radial_tuned != train$Purchase)
cat("Tuned radial training error rate:", train_err_radial_tuned, "\n")
## Tuned radial training error rate: 0.145
test_pred_radial_tuned <- predict(svm_radial_tuned, test)
test_err_radial_tuned <- mean(test_pred_radial_tuned != test$Purchase)
cat("Tuned radial test error rate:", test_err_radial_tuned, "\n")
## Tuned radial test error rate: 0.1925926
(g) Repeat parts (b) through (e) using a support vector machine with a polynomial kernel. Set degree = 2.
# (b) Fit polynomial SVM
svm_poly <- svm(Purchase ~ ., data = train, kernel = "polynomial", cost = 0.01, degree = 2)
summary(svm_poly)
##
## Call:
## svm(formula = Purchase ~ ., data = train, kernel = "polynomial",
## cost = 0.01, degree = 2)
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: polynomial
## cost: 0.01
## degree: 2
## coef.0: 0
##
## Number of Support Vectors: 605
##
## ( 305 300 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
# (c) Error rates
train_pred_poly <- predict(svm_poly, train)
train_err_poly <- mean(train_pred_poly != train$Purchase)
cat("Polynomial training error rate:", train_err_poly, "\n")
## Polynomial training error rate: 0.375
test_pred_poly <- predict(svm_poly, test)
test_err_poly <- mean(test_pred_poly != test$Purchase)
cat("Polynomial test error rate:", test_err_poly, "\n")
## Polynomial test error rate: 0.4333333
# (d) Tune cost
tune_poly <- tune(svm, Purchase ~ ., data = train, kernel = "polynomial",
                  ranges = list(cost = c(0.01, 0.1, 1, 5, 10)),
                  degree = 2,
                  tunecontrol = tune.control(cross = 5))
summary(tune_poly)
##
## Parameter tuning of 'svm':
##
## - sampling method: 5-fold cross validation
##
## - best parameters:
## cost
## 5
##
## - best performance: 0.18125
##
## - Detailed performance results:
## cost error dispersion
## 1 0.01 0.37500 0.03156095
## 2 0.10 0.32875 0.04873397
## 3 1.00 0.20250 0.02005851
## 4 5.00 0.18125 0.03451675
## 5 10.00 0.18250 0.04361587
best_cost_poly <- tune_poly$best.parameters$cost
cat("Optimal cost for polynomial:", best_cost_poly, "\n")
## Optimal cost for polynomial: 5
# (e) Error rates with tuned cost
svm_poly_tuned <- svm(Purchase ~ ., data = train, kernel = "polynomial", cost = best_cost_poly, degree = 2)
train_pred_poly_tuned <- predict(svm_poly_tuned, train)
train_err_poly_tuned <- mean(train_pred_poly_tuned != train$Purchase)
cat("Tuned polynomial training error rate:", train_err_poly_tuned, "\n")
## Tuned polynomial training error rate: 0.16125
test_pred_poly_tuned <- predict(svm_poly_tuned, test)
test_err_poly_tuned <- mean(test_pred_poly_tuned != test$Purchase)
cat("Tuned polynomial test error rate:", test_err_poly_tuned, "\n")
## Tuned polynomial test error rate: 0.1962963
(h) Overall, which approach seems to give the best results on
this data?
To determine the best approach, we compare the tuned test error rates from (e), (f), and (g), since the test error reflects how well each model generalizes to unseen data.
The linear SVM with a tuned cost of 10 achieves the lowest test error rate (0.1814815), so it generalizes best among the three approaches. This suggests that the relationship between the predictors and Purchase in the OJ data set is captured adequately by a linear decision boundary, despite the initially large number of support vectors and moderate error rates. The radial and polynomial kernels, despite their non-linear flexibility, produced higher test errors (0.1925926 and 0.1962963), possibly because of mild overfitting or because only cost was tuned while gamma and degree were left at fixed values.
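For reference, the tuned test error rates can be collected side by side (using the variables computed in (e), (f), and (g) above):
# Summary of tuned test error rates across kernels
data.frame(kernel = c("linear", "radial", "polynomial"),
           test_error = c(test_err_tuned, test_err_radial_tuned, test_err_poly_tuned))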