library(ISLR2)
library(MASS)
library(class)
library(tidyverse)
library(caret)
We have seen that we can fit an SVM with a non-linear kernel in order to perform classification using a non-linear decision boundary. We will now see that we can also obtain a non-linear decision boundary by performing logistic regression using non-linear transformations of the features.
(a) Generate a data set with n = 500 and p = 2, such that the observations belong to two classes with a quadratic decision boundary between them. For instance, you can do this as follows:
x1 <- runif(500) - 0.5
x2 <- runif(500) - 0.5
y <- 1 * (x1^2 - x2^2 > 0)
set.seed(1)
x1 = runif(500) - 0.5
x2 = runif(500) - 0.5
y = 1 * (x1^2 - x2^2 > 0)
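As a quick sanity check (my addition, not required by the problem): y = 1 exactly when x1^2 > x2^2, which by symmetry covers about half the unit square, so the two classes should be roughly balanced.
table(y)  # class counts; roughly 250/250 is expected by symmetry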
(b) Plot the observations, colored according to their class labels. Your plot should display X1 on the x-axis, and X2 on the y-axis.
plot(x1, x2, col = ifelse(y == 1, 'red', 'blue'))  # class 1 in red, class 0 in blue
(c) Fit a logistic regression model to the data, using X1 and X2 as predictors.
glm.fit = glm(y ~ x1 + x2, family = 'binomial')
summary(glm.fit)
##
## Call:
## glm(formula = y ~ x1 + x2, family = "binomial")
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.179 -1.139 -1.112 1.206 1.257
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -0.087260 0.089579 -0.974 0.330
## x1 0.196199 0.316864 0.619 0.536
## x2 -0.002854 0.305712 -0.009 0.993
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 692.18 on 499 degrees of freedom
## Residual deviance: 691.79 on 497 degrees of freedom
## AIC: 697.79
##
## Number of Fisher Scoring iterations: 3
In this logistic regression on the raw features, neither X1 nor X2 is statistically significant (p = 0.54 and 0.99). This is expected: the true boundary depends on the squared features, not on X1 and X2 themselves.
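As a small sketch to quantify this (using only objects defined above), the training accuracy of the raw-feature fit should sit near chance:
glm.prob = predict(glm.fit, type = 'response')  # fitted probabilities
mean((glm.prob > 0.5) == y)  # training accuracy; near 0.5 indicates a useless fit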
(d) Apply this model to the training data in order to obtain a predicted class label for each training observation. Plot the observations, colored according to the predicted class labels. The decision boundary should be linear.
data = data.frame(x1 = x1, x2 = x2, y = y)
glm.pred = predict(glm.fit, data.frame(x1, x2))  # log-odds scale; > 0 means predict class 1
plot(x1, x2, col = ifelse(glm.pred > 0, 'red', 'blue'),
     pch = ifelse(as.integer(glm.pred > 0) == y, 1, 4))  # circle = correct, x = misclassified
In this plot, predicted class 1 is shown in red, with circles marking correctly classified points and x's marking misclassifications. The boundary is clearly linear.
(e) Now fit a logistic regression model to the data using non-linear functions of X1 and X2 as predictors (e.g. X1^2, X1×X2, log(X2), and so forth).
lm.fit = glm(y ~ poly(x1, 2) + poly(x2, 2) + I(x1 * x2), data = data, family = binomial)
(f) Apply this model to the training data in order to obtain a predicted class label for each training observation. Plot the observations, colored according to the predicted class labels. The decision boundary should be obviously non-linear. If it is not, then repeat (a)-(e) until you come up with an example in which the predicted class labels are obviously non-linear.
data = data.frame(x1 = x1, x2 = x2, y = y)
lm.pred = predict(lm.fit, data.frame(x1, x2))  # log-odds under the expanded model
plot(x1, x2, col = ifelse(lm.pred > 0, 'red', 'blue'),
     pch = ifelse(as.integer(lm.pred > 0) == y, 1, 4))
The predicted labels now trace an obviously non-linear boundary that closely matches the true quadratic boundary.
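A one-line check of that claim (a sketch on the training data only): the expanded model should classify essentially every training point correctly.
mean((predict(lm.fit) > 0) != y)  # training error; expect ~0 given the well-specified features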
(g) Fit a support vector classifier to the data with X1 and X2 as predictors. Obtain a class prediction for each training observation. Plot the observations, colored according to the predicted class labels.
For (g) and (h), this could alternatively have been accomplished in caret by specifying the kernel; I chose the svm() function from the e1071 package here in order to validate against solutions online.
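For reference, here is a minimal sketch of that caret route; it assumes the kernlab backend used by caret's 'svmLinear' method is installed ('svmPoly' would be the analogue for part (h)):
svm_caret = train(y ~ ., data = data.frame(x1, x2, y = as.factor(y)),
                  method = 'svmLinear')  # caret + kernlab linear SVM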
library(e1071)
svm.fit = svm(y ~ ., data = data.frame(x1, x2, y = as.factor(y)), kernel = 'linear')
svm.pred = predict(svm.fit, data.frame(x1, x2), type = 'response')
plot(x1, x2, col = ifelse(svm.pred != 0, 'red', 'blue'),
     pch = ifelse(svm.pred == y, 1, 4))
Here every point is blue: the linear support vector classifier assigns all observations to a single class.
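A quick tabulation backs that up:
table(svm.pred)  # every prediction should land in a single class, per the plot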
(h) Fit a SVM using a non-linear kernel to the data. Obtain a class prediction for each training observation. Plot the observations, colored according to the predicted class labels.
svm.fit2 = svm(y ~ ., data = data.frame(x1, x2, y = as.factor(y)), kernel = 'polynomial', degree = 2)
svm.pred2 = predict(svm.fit2, data.frame(x1, x2), type = 'response')
plot(x1, x2, col = ifelse(svm.pred2 != 0, 'red', 'blue'),
     pch = ifelse(svm.pred2 == y, 1, 4))
Here we changed the kernel to polynomial (degree 2); using the same plotting scheme, the result is much closer to the true boundary, with only a handful of points misclassified.
(i) Comment on your results.
The closest fit appears to be the polynomial logistic regression model, with the polynomial-kernel SVM a close second. The failure of the linear SVM is not surprising: the true boundary is non-linear, so a linear decision rule cannot separate the classes, whereas SVMs with non-linear kernels are built to find non-linear boundaries. The non-linear logistic regression likely performed so well because its feature expansion matches the true quadratic boundary almost exactly.
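To make that ranking concrete, here is a small sketch computing each model's training error from the objects above (training, not test, error, so it only supports the qualitative comparison):
data.frame(linear_logit = mean((glm.pred > 0) != y),
           poly_logit   = mean((lm.pred > 0) != y),
           linear_svm   = mean(svm.pred != y),
           poly_svm     = mean(svm.pred2 != y))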
In this problem, you will use support vector approaches in order to predict whether a given car gets high or low gas mileage based on the Auto data set.
attach(Auto)
(a) Create a binary variable that takes on a 1 for cars with gas mileage above the median, and a 0 for cars with gas mileage below the median.
library(ISLR)
gas.med = median(Auto$mpg)
new.var = ifelse(Auto$mpg > gas.med, 1, 0)
Auto$mpglevel = as.factor(new.var)
table(Auto$mpglevel)
##
## 0 1
## 196 196
The new variable, mpglevel, is split evenly between 0s and 1s (196 each), as expected from a split at the median.
(b) Fit a support vector classifier to the data with various values of cost, in order to predict whether a car gets high or low gas mileage. Report the cross-validation errors associated with different values of this parameter. Comment on your results. Note you will need to fit the classifier without the gas mileage variable to produce sensible results.
A note about problem 7: I struggled for days trying to overcome an error about “logical subscript too long.” Even when copying code word for word from working sources, I got the same error.
summary(Auto)
## mpg cylinders displacement horsepower weight
## Min. : 9.00 Min. :3.000 Min. : 68.0 Min. : 46.0 Min. :1613
## 1st Qu.:17.00 1st Qu.:4.000 1st Qu.:105.0 1st Qu.: 75.0 1st Qu.:2225
## Median :22.75 Median :4.000 Median :151.0 Median : 93.5 Median :2804
## Mean :23.45 Mean :5.472 Mean :194.4 Mean :104.5 Mean :2978
## 3rd Qu.:29.00 3rd Qu.:8.000 3rd Qu.:275.8 3rd Qu.:126.0 3rd Qu.:3615
## Max. :46.60 Max. :8.000 Max. :455.0 Max. :230.0 Max. :5140
##
## acceleration year origin name
## Min. : 8.00 Min. :70.00 Min. :1.000 amc matador : 5
## 1st Qu.:13.78 1st Qu.:73.00 1st Qu.:1.000 ford pinto : 5
## Median :15.50 Median :76.00 Median :1.000 toyota corolla : 5
## Mean :15.54 Mean :75.98 Mean :1.577 amc gremlin : 4
## 3rd Qu.:17.02 3rd Qu.:79.00 3rd Qu.:2.000 amc hornet : 4
## Max. :24.80 Max. :82.00 Max. :3.000 chevrolet chevette: 4
## (Other) :365
## mpglevel
## 0:196
## 1:196
##
##
##
##
##
str(Auto)
## 'data.frame': 392 obs. of 10 variables:
## $ mpg : num 18 15 18 16 17 15 14 14 14 15 ...
## $ cylinders : num 8 8 8 8 8 8 8 8 8 8 ...
## $ displacement: num 307 350 318 304 302 429 454 440 455 390 ...
## $ horsepower : num 130 165 150 150 140 198 220 215 225 190 ...
## $ weight : num 3504 3693 3436 3433 3449 ...
## $ acceleration: num 12 11.5 11 12 10.5 10 9 8.5 10 8.5 ...
## $ year : num 70 70 70 70 70 70 70 70 70 70 ...
## $ origin : num 1 1 1 1 1 1 1 1 1 1 ...
## $ name : Factor w/ 304 levels "amc ambassador brougham",..: 49 36 231 14 161 141 54 223 241 2 ...
## $ mpglevel : Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
After a lot of investigation and trial and error, I found that the origin and name variables were causing errors in my code. The workaround I developed is to make sure name is explicitly coded as a factor (and, later, to leave origin and name out of the plots). Going back through my notes, I do see the reminder “char - make factor or dummy.”
Auto.new = Auto
# Per the note in (b), mpg itself should arguably be dropped before fitting:
# Auto.new = subset(Auto.new, select = -c(mpg))
Auto.new$name = as.factor(Auto.new$name)  # ensure name is coded as a factor
str(Auto.new)
## 'data.frame': 392 obs. of 10 variables:
## $ mpg : num 18 15 18 16 17 15 14 14 14 15 ...
## $ cylinders : num 8 8 8 8 8 8 8 8 8 8 ...
## $ displacement: num 307 350 318 304 302 429 454 440 455 390 ...
## $ horsepower : num 130 165 150 150 140 198 220 215 225 190 ...
## $ weight : num 3504 3693 3436 3433 3449 ...
## $ acceleration: num 12 11.5 11 12 10.5 10 9 8.5 10 8.5 ...
## $ year : num 70 70 70 70 70 70 70 70 70 70 ...
## $ origin : num 1 1 1 1 1 1 1 1 1 1 ...
## $ name : Factor w/ 304 levels "amc ambassador brougham",..: 49 36 231 14 161 141 54 223 241 2 ...
## $ mpglevel : Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
Now, back to the assignment.
set.seed(1)
tune.lin <- tune(svm, mpglevel ~ ., data = Auto.new, kernel = "linear", ranges = list(cost = c(0.01, 0.1, 1, 5, 10, 100, 1000)))
summary(tune.lin)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 1
##
## - best performance: 0.01025641
##
## - Detailed performance results:
## cost error dispersion
## 1 1e-02 0.07653846 0.03617137
## 2 1e-01 0.04596154 0.03378238
## 3 1e+00 0.01025641 0.01792836
## 4 5e+00 0.02051282 0.02648194
## 5 1e+01 0.02051282 0.02648194
## 6 1e+02 0.03076923 0.03151981
## 7 1e+03 0.03076923 0.03151981
Of the costs we looked at, cost = 1 gives the lowest cross-validation error (about 1%). Keep in mind that mpg itself is still among the predictors here, so, per the note in (b), these error rates are likely optimistic.
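As a side note, tune() already stores the model refit at the winning cost, so it can be reused without refitting:
best.lin = tune.lin$best.model  # svm object refit on the full data at cost = 1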
(c) Now repeat (b), this time using SVMs with radial and polynomial basis kernels, with different values of gamma and degree and cost. Comment on your results.
set.seed(1)
tune.pol <- tune(svm, mpglevel ~ ., data = Auto.new, kernel = "polynomial", ranges = list(cost = c(0.01, 0.1, 1, 5, 10, 100), degree = c(2, 3, 4)))
summary(tune.pol)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost degree
## 100 2
##
## - best performance: 0.3013462
##
## - Detailed performance results:
## cost degree error dispersion
## 1 1e-02 2 0.5511538 0.04366593
## 2 1e-01 2 0.5511538 0.04366593
## 3 1e+00 2 0.5511538 0.04366593
## 4 5e+00 2 0.5511538 0.04366593
## 5 1e+01 2 0.5130128 0.08963366
## 6 1e+02 2 0.3013462 0.09961961
## 7 1e-02 3 0.5511538 0.04366593
## 8 1e-01 3 0.5511538 0.04366593
## 9 1e+00 3 0.5511538 0.04366593
## 10 5e+00 3 0.5511538 0.04366593
## 11 1e+01 3 0.5511538 0.04366593
## 12 1e+02 3 0.3446154 0.09821588
## 13 1e-02 4 0.5511538 0.04366593
## 14 1e-01 4 0.5511538 0.04366593
## 15 1e+00 4 0.5511538 0.04366593
## 16 5e+00 4 0.5511538 0.04366593
## 17 1e+01 4 0.5511538 0.04366593
## 18 1e+02 4 0.5511538 0.04366593
We see a substantially lower error rate at degree 2 and cost 100 than at any of the other settings, though at roughly 30% it is still far worse than the linear kernel's best.
set.seed(1)
tune.rad <- tune(svm, mpglevel ~ ., data = Auto.new, kernel = "radial", ranges = list(cost = c(0.01, 0.1, 1, 5, 10, 100), gamma = c(0.01, 0.1, 1, 5, 10, 100)))
summary(tune.rad)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost gamma
## 100 0.01
##
## - best performance: 0.01282051
##
## - Detailed performance results:
## cost gamma error dispersion
## 1 1e-02 1e-02 0.55115385 0.04366593
## 2 1e-01 1e-02 0.08929487 0.04382379
## 3 1e+00 1e-02 0.07403846 0.03522110
## 4 5e+00 1e-02 0.04852564 0.03303346
## 5 1e+01 1e-02 0.02557692 0.02093679
## 6 1e+02 1e-02 0.01282051 0.01813094
## 7 1e-02 1e-01 0.21711538 0.09865227
## 8 1e-01 1e-01 0.07903846 0.03874545
## 9 1e+00 1e-01 0.05371795 0.03525162
## 10 5e+00 1e-01 0.02820513 0.03299190
## 11 1e+01 1e-01 0.03076923 0.03375798
## 12 1e+02 1e-01 0.03583333 0.02759051
## 13 1e-02 1e+00 0.55115385 0.04366593
## 14 1e-01 1e+00 0.55115385 0.04366593
## 15 1e+00 1e+00 0.06384615 0.04375618
## 16 5e+00 1e+00 0.05884615 0.04020934
## 17 1e+01 1e+00 0.05884615 0.04020934
## 18 1e+02 1e+00 0.05884615 0.04020934
## 19 1e-02 5e+00 0.55115385 0.04366593
## 20 1e-01 5e+00 0.55115385 0.04366593
## 21 1e+00 5e+00 0.49493590 0.04724924
## 22 5e+00 5e+00 0.48217949 0.05470903
## 23 1e+01 5e+00 0.48217949 0.05470903
## 24 1e+02 5e+00 0.48217949 0.05470903
## 25 1e-02 1e+01 0.55115385 0.04366593
## 26 1e-01 1e+01 0.55115385 0.04366593
## 27 1e+00 1e+01 0.51794872 0.05063697
## 28 5e+00 1e+01 0.51794872 0.04917316
## 29 1e+01 1e+01 0.51794872 0.04917316
## 30 1e+02 1e+01 0.51794872 0.04917316
## 31 1e-02 1e+02 0.55115385 0.04366593
## 32 1e-01 1e+02 0.55115385 0.04366593
## 33 1e+00 1e+02 0.55115385 0.04366593
## 34 5e+00 1e+02 0.55115385 0.04366593
## 35 1e+01 1e+02 0.55115385 0.04366593
## 36 1e+02 1e+02 0.55115385 0.04366593
When we use a radial kernel, we get our lowest error rate at gamma = 0.01 and, once again, cost = 100. At about 1.3%, this is comparable to, though slightly above, the linear kernel's best (about 1.0%).
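Pulling the three cross-validated minima together makes the comparison explicit (a small sketch using the tune objects above):
c(linear     = tune.lin$best.performance,
  polynomial = tune.pol$best.performance,
  radial     = tune.rad$best.performance)  # best 10-fold CV error per kernel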
(d) Make some plots to back up your assertions in (b) and (c). Hint: In the lab, we used the plot() function for svm objects only in cases with p = 2. When p > 2, you can use the plot() function to create plots displaying pairs of variables at a time. Essentially, instead of typing plot(svmfit, dat) where svmfit contains your fitted model and dat is a data frame containing your data, you can type plot(svmfit, dat, x1 ~ x4) in order to plot just the first and fourth variables. However, you must replace x1 and x4 with the correct variable names. To find out more, type ?plot.svm.
So first, let's fit models at each kernel's best settings.
svm_radial = svm(mpglevel ~ ., data = Auto.new, kernel = "radial", scale = TRUE, gamma = 0.01, cost = 100)
svm_poly = svm(mpglevel ~ ., data = Auto.new, kernel = "polynomial", cost = 100, degree = 2)
svm_linear = svm(mpglevel ~ ., data = Auto.new, kernel = "linear", cost = 1)
And now for some plots. (Note: I saw several analysts using the same for loop to iterate through all the variables. This seems like a great approach; I had to modify it slightly to skip name, origin, and the response columns, which cannot be plotted correctly this way.)
plotpairs = function(fit) {
  # For each remaining numeric predictor, plot the fitted SVM with mpg on one axis,
  # skipping the response columns and the problem columns noted above.
  for (name in names(Auto.new)[!(names(Auto.new) %in% c("mpg", "mpglevel", "name", "origin"))]) {
    plot(fit, Auto.new, as.formula(paste("mpg~", name, sep = "")))
  }
}
plotpairs(svm_linear)
plotpairs(svm_poly)
plotpairs(svm_radial)
detach(Auto)
This problem involves the OJ data set, which is part of the ISLR2 package.
attach(OJ)
(a) Create a training set containing a random sample of 800 observations, and a test set containing the remaining observations.
set.seed(1)
train_index <- sample(1:nrow(OJ), 800)
train = OJ[train_index, ]
test = OJ[-train_index, ]
nrow(train)/nrow(OJ)
## [1] 0.7476636
nrow(test)/nrow(OJ)
## [1] 0.2523364
(b) Fit a support vector classifier to the training data using cost = 0.01, with Purchase as the response and the other variables as predictors. Use the summary() function to produce summary statistics, and describe the results obtained.
svm_lin_oj = svm(Purchase ~ ., data = train, kernel = "linear", scale = T, cost = 0.01)
summary(svm_lin_oj)
##
## Call:
## svm(formula = Purchase ~ ., data = train, kernel = "linear", cost = 0.01,
## scale = T)
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: linear
## cost: 0.01
##
## Number of Support Vectors: 435
##
## ( 219 216 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
With cost = 0.01 the margin is wide, so more than half of the training observations (435 of 800; 219 for CH, 216 for MM) end up as support vectors.
(c) What are the training and test error rates?
data.frame(train_error = mean(predict(svm_lin_oj, train) != train$Purchase),
           test_error = mean(predict(svm_lin_oj, test) != test$Purchase))
## train_error test_error
## 1 0.175 0.1777778
The test error is only slightly higher than the training error.
(d) Use the tune() function to select an optimal cost. Consider values in the range 0.01 to 10.
tune.out = tune(svm, Purchase ~ ., data = train, kernel = "linear", ranges = list(cost = 10^seq(-2, 1, by = 0.25)))
summary(tune.out)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 10
##
## - best performance: 0.17125
##
## - Detailed performance results:
## cost error dispersion
## 1 0.01000000 0.17375 0.03884174
## 2 0.01778279 0.17500 0.03996526
## 3 0.03162278 0.17750 0.03717451
## 4 0.05623413 0.18000 0.03073181
## 5 0.10000000 0.17875 0.03064696
## 6 0.17782794 0.17875 0.03537988
## 7 0.31622777 0.17875 0.03438447
## 8 0.56234133 0.17625 0.03197764
## 9 1.00000000 0.17500 0.03061862
## 10 1.77827941 0.17375 0.02972676
## 11 3.16227766 0.17250 0.03270236
## 12 5.62341325 0.17250 0.03322900
## 13 10.00000000 0.17125 0.03488573
Our best cross-validation error is 0.17125, occurring at cost = 10; costs of 1.78 and 3.16 perform nearly as well. Since the refit below pulls tune.out$best.parameters$cost, we proceed with cost = 10.
(e) Compute the training and test error rates using this new value for cost.
svm.linear = svm(Purchase ~ ., kernel = "linear", data = train, cost = tune.out$best.parameters$cost)
train.pred = predict(svm.linear, train)
table(train$Purchase, train.pred)
## train.pred
## CH MM
## CH 423 62
## MM 69 246
(62 + 69) / (62 + 69 + 423 + 246)  # training error: misclassified / total
## [1] 0.16375
test.pred = predict(svm.linear, test)
table(test$Purchase, test.pred)
## test.pred
## CH MM
## CH 156 12
## MM 28 74
(28 + 12) / (156 + 74 + 28 + 12)  # test error: misclassified / total
## [1] 0.1481481
Our confusion matrices show better performance on test than on train, which looks suspicious; let's verify with our original error-rate code.
data.frame(train_error = mean(predict(svm.linear, train) != train$Purchase),
test_error = mean(predict(svm.linear, test) != test$Purchase))
## train_error test_error
## 1 0.16375 0.1481481
Both calculation methods agree, and both the train and test error rates improved relative to cost = 0.01.
(f) Repeat parts (b) through (e) using a support vector machine with a radial kernel. Use the default value for gamma.
set.seed(1)
svm.radial = svm(Purchase ~ ., data = train, kernel = "radial")
summary(svm.radial)
##
## Call:
## svm(formula = Purchase ~ ., data = train, kernel = "radial")
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: radial
## cost: 1
##
## Number of Support Vectors: 373
##
## ( 188 185 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
data.frame(train_error = mean(predict(svm.radial, train) != train$Purchase),
test_error = mean(predict(svm.radial, test) != test$Purchase))
## train_error test_error
## 1 0.15125 0.1851852
Using the radial kernel with default settings, our train error improved and our test error got a little worse; all of the error values have been similar. We had 373 support vectors, 188 for CH and 185 for MM. However, we still need to tune the cost (the problem says to keep the default gamma):
set.seed(1)
tune.out = tune(svm, Purchase ~ ., data = train, kernel = "radial", ranges = list(cost = 10^seq(-2, 1, by = 0.25)))
summary(tune.out)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 0.5623413
##
## - best performance: 0.16875
##
## - Detailed performance results:
## cost error dispersion
## 1 0.01000000 0.39375 0.04007372
## 2 0.01778279 0.39375 0.04007372
## 3 0.03162278 0.35750 0.05927806
## 4 0.05623413 0.19500 0.02443813
## 5 0.10000000 0.18625 0.02853482
## 6 0.17782794 0.18250 0.03291403
## 7 0.31622777 0.17875 0.03230175
## 8 0.56234133 0.16875 0.02651650
## 9 1.00000000 0.17125 0.02128673
## 10 1.77827941 0.17625 0.02079162
## 11 3.16227766 0.17750 0.02266912
## 12 5.62341325 0.18000 0.02220485
## 13 10.00000000 0.18625 0.02853482
svm.radial = svm(Purchase ~ ., data = train, kernel = "radial", cost = tune.out$best.parameters$cost)
train.pred = predict(svm.radial, train)
table(train$Purchase, train.pred)
## train.pred
## CH MM
## CH 437 48
## MM 71 244
Confusion matrix above.
data.frame(train_error = mean(predict(svm.radial, train) != train$Purchase),
test_error = mean(predict(svm.radial, test) != test$Purchase))
## train_error test_error
## 1 0.14875 0.1777778
We were able to reduce our training error rate to its lowest level yet, 0.149. The test error, while improved over the untuned radial fit, still lags behind the tuned linear model's.
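For completeness, here is the test-set confusion matrix for the tuned radial fit (only the training table was shown above):
table(test$Purchase, predict(svm.radial, test))  # rows = truth, columns = prediction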
(g) Repeat parts (b) through (e) using a support vector machine with a polynomial kernel. Set degree = 2.
set.seed(1)
svm.poly = svm(Purchase ~ ., data = train, kernel = "poly", degree = 2)
summary(svm.poly)
##
## Call:
## svm(formula = Purchase ~ ., data = train, kernel = "poly", degree = 2)
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: polynomial
## cost: 1
## degree: 2
## coef.0: 0
##
## Number of Support Vectors: 447
##
## ( 225 222 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
With the degree-2 polynomial kernel, the number of support vectors increases to 447 (225 for CH, 222 for MM).
train.pred = predict(svm.poly, train)
table(train$Purchase, train.pred)
## train.pred
## CH MM
## CH 449 36
## MM 110 205
Confusion matrix above.
data.frame(train_error = mean(predict(svm.poly, train) != train$Purchase),
test_error = mean(predict(svm.poly, test) != test$Purchase))
## train_error test_error
## 1 0.1825 0.2222222
Now let's tune the cost.
set.seed(1)
tune.out = tune(svm, Purchase ~ ., data = train, kernel = "poly", degree = 2, ranges = list(cost = 10^seq(-2, 1, by = 0.25)))
summary(tune.out)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 3.162278
##
## - best performance: 0.1775
##
## - Detailed performance results:
## cost error dispersion
## 1 0.01000000 0.39125 0.04210189
## 2 0.01778279 0.37125 0.03537988
## 3 0.03162278 0.36500 0.03476109
## 4 0.05623413 0.33750 0.04714045
## 5 0.10000000 0.32125 0.05001736
## 6 0.17782794 0.24500 0.04758034
## 7 0.31622777 0.19875 0.03972562
## 8 0.56234133 0.20500 0.03961621
## 9 1.00000000 0.20250 0.04116363
## 10 1.77827941 0.18500 0.04199868
## 11 3.16227766 0.17750 0.03670453
## 12 5.62341325 0.18375 0.03064696
## 13 10.00000000 0.18125 0.02779513
svm.poly = svm(Purchase ~ ., data = train, kernel = "poly", degree = 2, cost = tune.out$best.parameters$cost)
train.pred = predict(svm.poly, train)
table(train$Purchase, train.pred)
## train.pred
## CH MM
## CH 451 34
## MM 90 225
data.frame(train_error = mean(predict(svm.poly, train) != train$Purchase),
test_error = mean(predict(svm.poly, test) != test$Purchase))
## train_error test_error
## 1 0.155 0.2037037
The tuned polynomial kernel's training error (0.155) falls between the linear and radial fits, but its test error (0.204) is the worst of the three.
(h) Overall, which approach seems to give the best results on this data?
The radial kernel achieved the lowest training error, but the linear kernel gave the lowest test error, which matters most; overall, the linear support vector classifier gives the best results on this data.
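As a final check, this sketch recomputes the three tuned models' test errors side by side (it assumes svm.linear, svm.radial, and svm.poly still hold the tuned fits from above):
sapply(list(linear = svm.linear, radial = svm.radial, poly = svm.poly),
       function(fit) mean(predict(fit, test) != test$Purchase))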
detach(OJ)