We have seen that we can fit an SVM with a non-linear kernel in order to perform classification using a non-linear decision boundary. We will now see that we can also obtain a non-linear decision boundary by performing logistic regression using non-linear transformations of the features.
library(tidyverse)
set.seed(6543)
df <- data.frame(x1 = runif(500) - 0.5, x2 = runif(500) - 0.5) %>%
mutate(y = as_factor(1 * (x1*x1 - x2*x2 > 0)))
head(df)
## x1 x2 y
## 1 0.24326656 -0.02067898 1
## 2 0.04431692 -0.39834571 0
## 3 -0.49872592 -0.48420299 1
## 4 0.46610643 0.36148768 1
## 5 -0.09963889 0.15664937 0
## 6 0.37059420 0.08471074 1
df %>%
ggplot(aes(x = x1, y = x2, color = y)) +
geom_point(size = 2.5, alpha = .8) +
labs(x = expression(x[1]), y = expression(x[2]), color = "Response")
Fit a logistic regression model to the data, using \(x_1\) and \(x_2\) as predictors.
Neither \(x_1\) nor \(x_2\) is significant in predicting \(y\).
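A minimal sketch of the fit, reconstructed from the Call line in the output below (the object name glm.fit matches the prediction code further down):
glm.fit <- glm(y ~ x1 + x2, data = df, family = binomial)
summary(glm.fit)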
##
## Call:
## glm(formula = y ~ x1 + x2, family = binomial, data = df)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.153 -1.099 -1.041 1.267 1.346
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -0.21926 0.09010 -2.434 0.0149 *
## x1 0.09701 0.32049 0.303 0.7621
## x2 -0.25269 0.30344 -0.833 0.4050
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 687.30 on 499 degrees of freedom
## Residual deviance: 686.54 on 497 degrees of freedom
## AIC: 692.54
##
## Number of Fisher Scoring iterations: 3
df %>%
mutate(glm.prob = predict(glm.fit, df, type = "response"),
glm.pred = as_factor(ifelse(glm.prob > 0.45, 1, 0))) %>%
ggplot(aes(x = x1, y = x2, color = glm.pred)) +
geom_point(size = 2.5, alpha = .8) +
labs(x = expression(x[1]), y = expression(x[2]), color = "Prediction")
Now fit a logistic regression model to the data using non-linear functions of \(x_1\) and \(x_2\) as predictors (e.g. \(x_1^2\), \(x_1 \times x_2\), \(\log(x_2)\), and so forth).
The quadratic term of \(x_1\) and the log transformation of \(x_2\) are significant in predicting \(y\), but the interaction of \(x_1\) and \(x_2\) is not.
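A sketch of the corresponding fit, matching the Call line below (glm2 is the name used by the prediction code further down; log(abs(x2)) is used because x2 takes negative values):
glm2 <- glm(y ~ x1 * x2 + I(x1^2) + log(abs(x2)), data = df, family = binomial)
summary(glm2)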
##
## Call:
## glm(formula = y ~ x1 * x2 + I(x1^2) + log(abs(x2)), family = binomial,
## data = df)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -4.1661 -0.3174 -0.0715 0.2129 1.9216
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -10.0447 0.9774 -10.277 <2e-16 ***
## x1 0.1172 0.5981 0.196 0.845
## x2 0.1671 0.6012 0.278 0.781
## I(x1^2) 40.2995 3.9514 10.199 <2e-16 ***
## log(abs(x2)) -4.2393 0.4606 -9.203 <2e-16 ***
## x1:x2 -2.0343 1.7120 -1.188 0.235
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 687.30 on 499 degrees of freedom
## Residual deviance: 228.58 on 494 degrees of freedom
## AIC: 240.58
##
## Number of Fisher Scoring iterations: 7
Apply this model to the training data in order to obtain a predicted class label for each training observation. Plot the observations, colored according to the predicted class labels. The decision boundary should be obviously non-linear. If it is not, then repeat (a)-(e) until you come up with an example in which the predicted class labels are obviously non-linear.
With non-linear terms added to the model, the predictions better resemble the true class boundary.
df %>%
mutate(glm.prob2 = predict(glm2, df, type = "response"),
glm.pred2 = as_factor(ifelse(glm.prob2 > 0.45, 1, 0))) %>%
ggplot(aes(x = x1, y = x2, color = glm.pred2)) +
geom_point(size = 2.5, alpha = .8) +
labs(x = expression(x[1]), y = expression(x[2]), color = "Prediction")
Fit a support vector classifier to the data with \(x_1\) and \(x_2\) as predictors. Obtain a class prediction for each training observation. Plot the observations, colored according to the predicted class labels.
Using a linear kernel, the SVM classifies every observation as class 0.
library(e1071)  # provides svm() and tune()
svm.lin <- svm(y ~ x1 + x2, data = df, kernel = "linear", cost = 0.001)
df %>%
mutate(svm.preds = as_factor(predict(svm.lin, df))) %>%
ggplot(aes(x = x1, y = x2, color = svm.preds)) +
geom_point(size = 2.5, alpha = .8) +
labs(x = expression(x[1]), y = expression(x[2]), color = "Linear SVM \nPredictions")
Now fit an SVM with a non-linear (radial) kernel to the data, again using \(x_1\) and \(x_2\) as predictors, and obtain a class prediction for each training observation.
svm2 <- svm(y ~ x1 + x2, data = df, kernel = "radial", gamma = 0.5, cost = 2)
df %>%
mutate(svm.preds2 = as_factor(predict(svm2, df))) %>%
ggplot(aes(x = x1, y = x2, color = svm.preds2)) +
geom_point(size = 2.5, alpha = .8) +
labs(x = expression(x[1]), y = expression(x[2]), color = "Radial SVM\nPredictions")
Comment on your results.
When the true class boundary is non-linear, an SVM with a radial kernel provides a better fit to the data and produces more accurate predictions. Here the linear SVM classified every observation as 0, while logistic regression was able to approximate the decision boundary closely once non-linear transformations of the features were added to the model.
In this problem, you will use support vector approaches in order to predict whether a given car gets high or low gas mileage based on the Auto data set.
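A sketch of the initial data inspection behind the output below (the final 0 is presumably a missing-value check; sum(is.na(Auto)) is an assumption):
library(ISLR)    # provides the Auto data set
str(Auto)
sum(is.na(Auto)) # assumed: returns 0 if there are no missing values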
## 'data.frame': 392 obs. of 9 variables:
## $ mpg : num 18 15 18 16 17 15 14 14 14 15 ...
## $ cylinders : num 8 8 8 8 8 8 8 8 8 8 ...
## $ displacement: num 307 350 318 304 302 429 454 440 455 390 ...
## $ horsepower : num 130 165 150 150 140 198 220 215 225 190 ...
## $ weight : num 3504 3693 3436 3433 3449 ...
## $ acceleration: num 12 11.5 11 12 10.5 10 9 8.5 10 8.5 ...
## $ year : num 70 70 70 70 70 70 70 70 70 70 ...
## $ origin : num 1 1 1 1 1 1 1 1 1 1 ...
## $ name : Factor w/ 304 levels "amc ambassador brougham",..: 49 36 231 14 161 141 54 223 241 2 ...
## [1] 0
Auto <- Auto %>%
mutate(mpg01 = as_factor(ifelse(mpg > median(mpg), 1, 0))) %>%
select(-name)
median(Auto$mpg)
## [1] 22.75
## mpg mpg01
## 1 18 0
## 2 15 0
## 3 18 0
## 4 16 0
## 5 17 0
## 6 15 0
Fit a support vector classifier to the data with various values of cost, in order to predict whether a car gets high or low gas mileage. Report the cross-validation errors associated with different values of this parameter. Comment on your results.
set.seed(21)
lin.tune <- tune(svm, mpg01 ~. -mpg, data = Auto, kernel = "linear",
ranges = list(cost = c(0.001, 0.01, 0.1, 1, 5, 10, 100)))
summary(lin.tune)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 0.01
##
## - best performance: 0.09448718
##
## - Detailed performance results:
## cost error dispersion
## 1 1e-03 0.12512821 0.05190575
## 2 1e-02 0.09448718 0.03815069
## 3 1e-01 0.10730769 0.03583022
## 4 1e+00 0.09461538 0.04386976
## 5 5e+00 0.10230769 0.04383645
## 6 1e+01 0.10230769 0.04383645
## 7 1e+02 0.10230769 0.04383645
The best value of cost is 0.01, which minimizes the cross-validation error to 9.45%. Using the optimal cost parameter, the linear SVM creates 149 support vectors out of 392 training observations: 74 belong to cars with gas mileage below the median and 75 to cars above it.
svm.lin <- svm(mpg01 ~., data = Auto, kernel = "linear",
cost = lin.tune$best.parameters$cost)
summary(svm.lin)
##
## Call:
## svm(formula = mpg01 ~ ., data = Auto, kernel = "linear", cost = lin.tune$best.parameters$cost)
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: linear
## cost: 0.01
##
## Number of Support Vectors: 149
##
## ( 74 75 )
##
##
## Number of Classes: 2
##
## Levels:
## 0 1
Now repeat the previous part, this time using SVMs with radial and polynomial basis kernels, with different values of gamma, degree, and cost. Comment on your results.
set.seed(21)
poly.tune <- tune(svm, mpg01 ~. -mpg, data = Auto, kernel = "polynomial",
ranges = list(cost = c(0.01, 0.1, 1, 5, 10, 100), degree = c(2, 3, 4)))
summary(poly.tune)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost degree
## 5 3
##
## - best performance: 0.08153846
##
## - Detailed performance results:
## cost degree error dispersion
## 1 1e-02 2 0.45666667 0.10707504
## 2 1e-01 2 0.27301282 0.08206822
## 3 1e+00 2 0.25250000 0.10823212
## 4 5e+00 2 0.17833333 0.06847335
## 5 1e+01 2 0.17326923 0.07151199
## 6 1e+02 2 0.18858974 0.05862089
## 7 1e-02 3 0.25769231 0.08063288
## 8 1e-01 3 0.21435897 0.10280785
## 9 1e+00 3 0.09416667 0.04091693
## 10 5e+00 3 0.08153846 0.03331936
## 11 1e+01 3 0.08416667 0.02705174
## 12 1e+02 3 0.08935897 0.04210128
## 13 1e-02 4 0.38012821 0.09852304
## 14 1e-01 4 0.26794872 0.08504220
## 15 1e+00 4 0.22724359 0.09593291
## 16 5e+00 4 0.20128205 0.07777367
## 17 1e+01 4 0.18096154 0.06028978
## 18 1e+02 4 0.13500000 0.05328033
The best choice of parameters for the polynomial kernel is cost = 5 and degree = 3, which provides a lower cross-validation error than the linear SVM at 8.15%. The polynomial SVM produces only 91 support vectors, with 42 belonging to cars with gas mileage below the median and 49 above.
svm.poly <- svm(mpg01 ~., data = Auto, kernel = "polynomial",
cost = poly.tune$best.parameters$cost,
degree = poly.tune$best.parameters$degree)
summary(svm.poly)
##
## Call:
## svm(formula = mpg01 ~ ., data = Auto, kernel = "polynomial", cost = poly.tune$best.parameters$cost,
## degree = poly.tune$best.parameters$degree)
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: polynomial
## cost: 5
## degree: 3
## coef.0: 0
##
## Number of Support Vectors: 91
##
## ( 42 49 )
##
##
## Number of Classes: 2
##
## Levels:
## 0 1
set.seed(21)
rad.tune <- tune(svm, mpg01 ~. -mpg, data = Auto, kernel = "radial",
ranges = list(cost = c(0.1, 1, 5, 10, 100),
gamma = c(0.01, 0.1, 1, 5, 10, 100)))
summary(rad.tune)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost gamma
## 1 1
##
## - best performance: 0.06634615
##
## - Detailed performance results:
## cost gamma error dispersion
## 1 0.1 1e-02 0.11230769 0.04025360
## 2 1.0 1e-02 0.09448718 0.03410674
## 3 5.0 1e-02 0.10480769 0.04246845
## 4 10.0 1e-02 0.10480769 0.04415509
## 5 100.0 1e-02 0.08435897 0.04854915
## 6 0.1 1e-01 0.09698718 0.02897809
## 7 1.0 1e-01 0.09711538 0.03756219
## 8 5.0 1e-01 0.08685897 0.03861230
## 9 10.0 1e-01 0.07923077 0.04443232
## 10 100.0 1e-01 0.07660256 0.04483235
## 11 0.1 1e+00 0.08942308 0.03873036
## 12 1.0 1e+00 0.06634615 0.03804529
## 13 5.0 1e+00 0.08423077 0.03175533
## 14 10.0 1e+00 0.08673077 0.02723676
## 15 100.0 1e+00 0.08935897 0.01844848
## 16 0.1 5e+00 0.55878205 0.04538579
## 17 1.0 5e+00 0.08935897 0.03228009
## 18 5.0 5e+00 0.08423077 0.02432559
## 19 10.0 5e+00 0.08423077 0.02432559
## 20 100.0 5e+00 0.08679487 0.02481615
## 21 0.1 1e+01 0.55878205 0.04538579
## 22 1.0 1e+01 0.12012821 0.04540706
## 23 5.0 1e+01 0.11500000 0.04236075
## 24 10.0 1e+01 0.11500000 0.04236075
## 25 100.0 1e+01 0.11500000 0.04236075
## 26 0.1 1e+02 0.55878205 0.04538579
## 27 1.0 1e+02 0.52820513 0.06031624
## 28 5.0 1e+02 0.52307692 0.05254933
## 29 10.0 1e+02 0.52307692 0.05254933
## 30 100.0 1e+02 0.52307692 0.05254933
The radial SVM with cost = 1 and gamma = 1 decreases the cross-validation error to 6.63%, lower than both the linear and polynomial kernels. Using the best parameters, the radial SVM creates 198 support vectors, more than the previous two classifiers.
svm.rad <- svm(mpg01 ~., data = Auto, kernel = "radial",
cost = rad.tune$best.parameters$cost,
gamma = rad.tune$best.parameters$gamma)
summary(svm.rad)
##
## Call:
## svm(formula = mpg01 ~ ., data = Auto, kernel = "radial", cost = rad.tune$best.parameters$cost,
## gamma = rad.tune$best.parameters$gamma)
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: radial
## cost: 1
##
## Number of Support Vectors: 198
##
## ( 95 103 )
##
##
## Number of Classes: 2
##
## Levels:
## 0 1
The X's in the SVM plots represent the support vectors and the closed circles represent the data points. The color represents the two classes: white is for cars with gas mileage above the median and grey is for cars below the median.
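The plots described here can be produced with e1071's plot method for svm objects; a sketch using the radial fit and an illustrative pair of predictors (the variable pairs are assumptions, and predictors not named in the formula are held at the slice defaults):
# X's mark support vectors, circles the remaining observations;
# unspecified predictors default to 0 unless set via the slice argument.
plot(svm.rad, Auto, weight ~ horsepower)
plot(svm.rad, Auto, displacement ~ weight)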
This problem involves the OJ data set, which is part of the ISLR package.
## 'data.frame': 1070 obs. of 18 variables:
## $ Purchase : Factor w/ 2 levels "CH","MM": 1 1 1 2 1 1 1 1 1 1 ...
## $ WeekofPurchase: num 237 239 245 227 228 230 232 234 235 238 ...
## $ StoreID : num 1 1 1 1 7 7 7 7 7 7 ...
## $ PriceCH : num 1.75 1.75 1.86 1.69 1.69 1.69 1.69 1.75 1.75 1.75 ...
## $ PriceMM : num 1.99 1.99 2.09 1.69 1.69 1.99 1.99 1.99 1.99 1.99 ...
## $ DiscCH : num 0 0 0.17 0 0 0 0 0 0 0 ...
## $ DiscMM : num 0 0.3 0 0 0 0 0.4 0.4 0.4 0.4 ...
## $ SpecialCH : num 0 0 0 0 0 0 1 1 0 0 ...
## $ SpecialMM : num 0 1 0 0 0 1 1 0 0 0 ...
## $ LoyalCH : num 0.5 0.6 0.68 0.4 0.957 ...
## $ SalePriceMM : num 1.99 1.69 2.09 1.69 1.69 1.99 1.59 1.59 1.59 1.59 ...
## $ SalePriceCH : num 1.75 1.75 1.69 1.69 1.69 1.69 1.69 1.75 1.75 1.75 ...
## $ PriceDiff : num 0.24 -0.06 0.4 0 0 0.3 -0.1 -0.16 -0.16 -0.16 ...
## $ Store7 : Factor w/ 2 levels "No","Yes": 1 1 1 1 2 2 2 2 2 2 ...
## $ PctDiscMM : num 0 0.151 0 0 0 ...
## $ PctDiscCH : num 0 0 0.0914 0 0 ...
## $ ListPriceDiff : num 0.24 0.24 0.23 0 0 0.3 0.3 0.24 0.24 0.24 ...
## $ STORE : num 1 1 1 1 0 0 0 0 0 0 ...
## [1] 0
Fit a support vector classifier to the training data using cost = 0.01, with Purchase as the response and the other variables as predictors. Use the summary() function to produce summary statistics, and describe the results obtained.
The linear SVM creates 432 support vectors out of 800 observations in the training set, with 215 belonging to Citrus Hill purchases and the remaining 217 belonging to Minute Maid.
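A sketch of the train/test split and the fit behind the summary below; the seed and the name svm.lin1 are illustrative assumptions (oj.train and oj.test are the names used by the later code):
set.seed(42)  # illustrative seed only
train.idx <- sample(nrow(OJ), 800)
oj.train <- OJ[train.idx, ]
oj.test <- OJ[-train.idx, ]
svm.lin1 <- svm(Purchase ~ ., data = oj.train, kernel = "linear", cost = 0.01)
summary(svm.lin1)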
##
## Call:
## svm(formula = Purchase ~ ., data = oj.train, kernel = "linear", cost = 0.01)
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: linear
## cost: 0.01
##
## Number of Support Vectors: 432
##
## ( 215 217 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
What are the training and test error rates?
The training error is 16% and the test error is about 18.5%.
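A sketch of the error computation, assuming the object name svm.lin1 from the sketch above; the same pattern applies to the later models:
mean(oj.train$Purchase != predict(svm.lin1, oj.train)) # training error
mean(oj.test$Purchase != predict(svm.lin1, oj.test))   # test error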
## [1] 0.16
## [1] 0.1851852
Use the tune() function to select an optimal cost. Consider values in the range of 0.01 to 10.
Tuning the linear SVM gives us the lowest cross-validation error when cost = 10.
set.seed(81)
tuned <- tune(svm, Purchase ~ ., data = oj.train, kernel = "linear",
ranges = list(cost = 10^seq(-2, 1, by = 0.2)))
summary(tuned)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 10
##
## - best performance: 0.1625
##
## - Detailed performance results:
## cost error dispersion
## 1 0.01000000 0.17000 0.04721405
## 2 0.01584893 0.16875 0.04340139
## 3 0.02511886 0.16875 0.04177070
## 4 0.03981072 0.16625 0.04332131
## 5 0.06309573 0.16250 0.03952847
## 6 0.10000000 0.16500 0.04116363
## 7 0.15848932 0.16625 0.04372023
## 8 0.25118864 0.16500 0.04479893
## 9 0.39810717 0.16750 0.04495368
## 10 0.63095734 0.16250 0.04208127
## 11 1.00000000 0.16500 0.04518481
## 12 1.58489319 0.16500 0.04779877
## 13 2.51188643 0.16500 0.04518481
## 14 3.98107171 0.16500 0.04199868
## 15 6.30957344 0.16375 0.04619178
## 16 10.00000000 0.16250 0.04487637
Compute the training and test error rates using this new value for cost.
Using the best value for cost decreases the training error to 15.9% and the test error to 17%. Tuning the linear SVM also reduces the number of support vectors to 322.
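The summary below is for the best model selected by tune(); a sketch of extracting it (svm.lin2 is the name used by the error computation further down):
svm.lin2 <- tuned$best.model
summary(svm.lin2)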
##
## Call:
## best.tune(method = svm, train.x = Purchase ~ ., data = oj.train,
## ranges = list(cost = 10^seq(-2, 1, by = 0.2)), kernel = "linear")
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: linear
## cost: 10
##
## Number of Support Vectors: 322
##
## ( 159 163 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
train.preds <- svm.lin2 %>%
predict(oj.train)
l.train.error <- mean(oj.train$Purchase != train.preds)
l.train.error
## [1] 0.15875
test.preds <- svm.lin2 %>%
predict(oj.test)
l.test.error <- mean(oj.test$Purchase != test.preds)
l.test.error
## [1] 0.1703704
Repeat the previous parts using a support vector machine with a radial kernel. Use the default value for gamma, \(\gamma\).
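A sketch of the radial fit with the default gamma, matching the Call line below (the name svm.rad1 is an assumption):
svm.rad1 <- svm(Purchase ~ ., data = oj.train, kernel = "radial")
summary(svm.rad1)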
##
## Call:
## svm(formula = Purchase ~ ., data = oj.train, kernel = "radial")
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: radial
## cost: 1
##
## Number of Support Vectors: 364
##
## ( 181 183 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
Using the default value for \(\gamma\), the radial basis kernel creates 364 support vectors, with 181 belonging to class CH and 183 belonging to class MM. The radial SVM gives us a lower training error than the linear kernel at 13.75%, but a higher test error of 19.63%.
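A sketch of the corresponding error computation, again assuming the name svm.rad1:
mean(oj.train$Purchase != predict(svm.rad1, oj.train)) # training error
mean(oj.test$Purchase != predict(svm.rad1, oj.test))   # test error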
## [1] 0.1375
## [1] 0.1962963
set.seed(5)
tuned <- tune(svm, Purchase ~ ., data = oj.train, kernel = "radial",
ranges = list(cost = 10^seq(-2, 1, by = 0.2)))
summary(tuned)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 0.6309573
##
## - best performance: 0.16625
##
## - Detailed performance results:
## cost error dispersion
## 1 0.01000000 0.39000 0.04031129
## 2 0.01584893 0.39000 0.04031129
## 3 0.02511886 0.39000 0.04031129
## 4 0.03981072 0.26000 0.03855011
## 5 0.06309573 0.19875 0.02664713
## 6 0.10000000 0.17375 0.03653860
## 7 0.15848932 0.17125 0.03955042
## 8 0.25118864 0.17000 0.04495368
## 9 0.39810717 0.17125 0.04752558
## 10 0.63095734 0.16625 0.05304937
## 11 1.00000000 0.16875 0.04938862
## 12 1.58489319 0.16625 0.04896498
## 13 2.51188643 0.17625 0.04693746
## 14 3.98107171 0.18375 0.04126894
## 15 6.30957344 0.18125 0.04723243
## 16 10.00000000 0.17875 0.04966904
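As before, the summary below corresponds to the best model from tune(); a sketch of extracting it (svm.rad2 is the name used by the error computation further down):
svm.rad2 <- tuned$best.model
summary(svm.rad2)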
##
## Call:
## best.tune(method = svm, train.x = Purchase ~ ., data = oj.train,
## ranges = list(cost = 10^seq(-2, 1, by = 0.2)), kernel = "radial")
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: radial
## cost: 0.6309573
##
## Number of Support Vectors: 389
##
## ( 193 196 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
Tuning the radial SVM gives us a higher training error rate of 14.5% but a lower test error rate of 18.5%, and produces an additional 25 support vectors.
train.preds <- svm.rad2 %>%
predict(oj.train)
r.train.error <- mean(oj.train$Purchase != train.preds)
r.train.error
## [1] 0.145
test.preds <- svm.rad2 %>%
predict(oj.test)
r.test.error <- mean(oj.test$Purchase != test.preds)
r.test.error
## [1] 0.1851852
Repeat the previous parts using a support vector machine with a polynomial kernel. Set degree = 2.
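A sketch of the polynomial fit, matching the Call line below (svm.poly is the name used by the error computation further down):
svm.poly <- svm(Purchase ~ ., data = oj.train, kernel = "polynomial", degree = 2)
summary(svm.poly)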
##
## Call:
## svm(formula = Purchase ~ ., data = oj.train, kernel = "polynomial",
## degree = 2)
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: polynomial
## cost: 1
## degree: 2
## coef.0: 0
##
## Number of Support Vectors: 444
##
## ( 220 224 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
The polynomial kernel creates 444 support vectors, with 220 belonging to CH and 224 belonging to MM. The training error is 16.9% and the test error is 20%, which is not an improvement over the linear or radial SVM.
train.preds <- svm.poly %>%
predict(oj.train)
p.train.error <- mean(oj.train$Purchase != train.preds)
p.train.error
## [1] 0.16875
test.preds <- svm.poly %>%
predict(oj.test)
p.test.error <- mean(oj.test$Purchase != test.preds)
p.test.error
## [1] 0.2
set.seed(1121)
tuned <- tune(svm, Purchase ~ ., data = oj.train, kernel = "polynomial", degree = 2,
ranges = list(cost = 10^seq(-2, 1, by = 0.2)))
summary(tuned)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 1.584893
##
## - best performance: 0.1775
##
## - Detailed performance results:
## cost error dispersion
## 1 0.01000000 0.38750 0.02886751
## 2 0.01584893 0.37000 0.04005205
## 3 0.02511886 0.36500 0.04073969
## 4 0.03981072 0.34625 0.04528076
## 5 0.06309573 0.32125 0.05653477
## 6 0.10000000 0.31125 0.05787019
## 7 0.15848932 0.26000 0.06713378
## 8 0.25118864 0.22625 0.05447030
## 9 0.39810717 0.20500 0.05210833
## 10 0.63095734 0.19625 0.05434266
## 11 1.00000000 0.18000 0.04684490
## 12 1.58489319 0.17750 0.04322101
## 13 2.51188643 0.18250 0.03736085
## 14 3.98107171 0.17875 0.03488573
## 15 6.30957344 0.17750 0.03162278
## 16 10.00000000 0.18000 0.03129164
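The summary below is again the best model from tune(); a sketch of extracting it (the name svm.poly2 is an assumption):
svm.poly2 <- tuned$best.model
summary(svm.poly2)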
##
## Call:
## best.tune(method = svm, train.x = Purchase ~ ., data = oj.train,
## ranges = list(cost = 10^seq(-2, 1, by = 0.2)), kernel = "polynomial",
## degree = 2)
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: polynomial
## cost: 1.584893
## degree: 2
## coef.0: 0
##
## Number of Support Vectors: 421
##
## ( 207 214 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
Tuning the polynomial kernel decreases the training error rate to 16% but increases the test error rate to 20.4%.
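A sketch of the error computation for the tuned polynomial fit, using the assumed name svm.poly2:
mean(oj.train$Purchase != predict(svm.poly2, oj.train)) # training error
mean(oj.test$Purchase != predict(svm.poly2, oj.test))   # test error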
## [1] 0.16
## [1] 0.2037037
Overall, which approach seems to give the best results on this data?
The linear SVM gave the best results overall, with the lowest misclassification error rate on the test set (17.04%).
library(scales)      # percent()
library(knitr)       # kable()
library(kableExtra)  # kable_styling()
data.frame(svm = c("Linear", "Radial", "Polynomial"),
train = percent(c(l.train.error, r.train.error, p.train.error), accuracy = 0.01),
test = percent(c(l.test.error, r.test.error, p.test.error), accuracy = 0.01)) %>%
arrange(test) %>%
rename("SVM Kernel" = svm, "Train Error" = train, "Test Error" = test) %>%
kable(align = c('l', 'c', 'c')) %>%
kable_styling(bootstrap_options = c("striped", "hover"))
| SVM Kernel | Train Error | Test Error |
|------------|:-----------:|:----------:|
| Linear     | 15.88%      | 17.04%     |
| Radial     | 14.50%      | 18.52%     |
| Polynomial | 16.88%      | 20.00%     |