This question should be answered using the Weekly data set, which is part of the ISLR package. This data is similar in nature to the Smarket data from this chapter’s lab, except that it contains 1,089 weekly returns for 21 years, from the beginning of 1990 to the end of 2010.
Produce some numerical and graphical summaries of the Weekly data. Do there appear to be any patterns?
library(ISLR)
## Warning: package 'ISLR' was built under R version 4.0.5
library(corrplot)
## corrplot 0.90 loaded
summary(Weekly)
## Year Lag1 Lag2 Lag3
## Min. :1990 Min. :-18.1950 Min. :-18.1950 Min. :-18.1950
## 1st Qu.:1995 1st Qu.: -1.1540 1st Qu.: -1.1540 1st Qu.: -1.1580
## Median :2000 Median : 0.2410 Median : 0.2410 Median : 0.2410
## Mean :2000 Mean : 0.1506 Mean : 0.1511 Mean : 0.1472
## 3rd Qu.:2005 3rd Qu.: 1.4050 3rd Qu.: 1.4090 3rd Qu.: 1.4090
## Max. :2010 Max. : 12.0260 Max. : 12.0260 Max. : 12.0260
## Lag4 Lag5 Volume Today
## Min. :-18.1950 Min. :-18.1950 Min. :0.08747 Min. :-18.1950
## 1st Qu.: -1.1580 1st Qu.: -1.1660 1st Qu.:0.33202 1st Qu.: -1.1540
## Median : 0.2380 Median : 0.2340 Median :1.00268 Median : 0.2410
## Mean : 0.1458 Mean : 0.1399 Mean :1.57462 Mean : 0.1499
## 3rd Qu.: 1.4090 3rd Qu.: 1.4050 3rd Qu.:2.05373 3rd Qu.: 1.4050
## Max. : 12.0260 Max. : 12.0260 Max. :9.32821 Max. : 12.0260
## Direction
## Down:484
## Up :605
##
##
##
##
corrplot(cor(Weekly[,-9]), method="square")
pairs(Weekly)
From the 2 plots above it looks that the only variables that are significantly related are Year and Volume. A scatter plot of the Year and Volume relationship is shown below.
plot(Weekly$Volume, ylab = "Volume")
Use the full data set to perform a logistic regression with Direction as the response and the five lag variables plus Volume as predictors. Use the summary function to print the results. Do any of the predictors appear to be statistically significant? If so, which ones?
glm.fit = glm(Direction ~ . - Year - Today, data = Weekly, family = "binomial")
summary(glm.fit)
##
## Call:
## glm(formula = Direction ~ . - Year - Today, family = "binomial",
## data = Weekly)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.6949 -1.2565 0.9913 1.0849 1.4579
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.26686 0.08593 3.106 0.0019 **
## Lag1 -0.04127 0.02641 -1.563 0.1181
## Lag2 0.05844 0.02686 2.175 0.0296 *
## Lag3 -0.01606 0.02666 -0.602 0.5469
## Lag4 -0.02779 0.02646 -1.050 0.2937
## Lag5 -0.01447 0.02638 -0.549 0.5833
## Volume -0.02274 0.03690 -0.616 0.5377
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 1496.2 on 1088 degrees of freedom
## Residual deviance: 1486.4 on 1082 degrees of freedom
## AIC: 1500.4
##
## Number of Fisher Scoring iterations: 4
There appears to be only 1 statistically significant predictor which is Lag2. Lag2 has a p-value of 0.0296, so we reject the null hypothesis and conclude that Lag2 is significant condsidering Direction. The estimate coefficient for Lag2 is 0.05844, meaning that with an increase of 1 in Lag2 gives us an increase of e^0.05844 = 1.05844 odds of going Up.
Compute the confusion matrix and overall fraction of correct predictions. Explain what the confusion matrix is telling you about the types of mistakes made by logistic regression.
glm.fit.probs = predict(glm.fit, type = "response")
glm.fit.pred = rep("Down", 1089)
glm.fit.pred[glm.fit.probs > 0.5] = "Up"
table(glm.fit.pred, Weekly$Direction)
##
## glm.fit.pred Down Up
## Down 54 48
## Up 430 557
mean(glm.fit.pred == Weekly$Direction)
## [1] 0.5610652
There is a much larger number of predicted Up’s. The number 557 is the number of predicted Up’s that turned out to be True. But the number 430 is the number of predicted Up’s that actually turned out to be Down’s. This is quite a large number as well, and we see this in the 0.5610652 mean which represents the model accuracy of 56%. Although it’s higher than 50% and gives some prediction, it’s low.
Now fit the logistic regression model using a training data period from 1990 to 2008, with Lag2 as the only predictor. Compute the confusion matrix and the overall fraction of correct predictions for the held out data (that is, the data from 2009 and 2010).
train = (Weekly$Year < 2009)
glm.fit2 = glm(Direction ~ Lag2, data = Weekly, subset = train, family = "binomial")
summary(glm.fit2)
##
## Call:
## glm(formula = Direction ~ Lag2, family = "binomial", data = Weekly,
## subset = train)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.536 -1.264 1.021 1.091 1.368
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.20326 0.06428 3.162 0.00157 **
## Lag2 0.05810 0.02870 2.024 0.04298 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 1354.7 on 984 degrees of freedom
## Residual deviance: 1350.5 on 983 degrees of freedom
## AIC: 1354.5
##
## Number of Fisher Scoring iterations: 4
glm.probs = predict(glm.fit2, Weekly[!train, ], type = "response")
glm.pred = rep("Down", dim(Weekly[!train, ])[1])
glm.pred[glm.probs > 0.5] = "Up"
table(glm.pred, Weekly[!train, ]$Direction)
##
## glm.pred Down Up
## Down 9 5
## Up 34 56
mean(glm.pred == Weekly[!train, ]$Direction)
## [1] 0.625
After only using Lag2, we see the model prediction increase to 62%.
Repeat (d) using LDA.
library(MASS)
lda.fit = lda(Direction ~ Lag2, data = Weekly, subset = train)
lda.fit
## Call:
## lda(Direction ~ Lag2, data = Weekly, subset = train)
##
## Prior probabilities of groups:
## Down Up
## 0.4477157 0.5522843
##
## Group means:
## Lag2
## Down -0.03568254
## Up 0.26036581
##
## Coefficients of linear discriminants:
## LD1
## Lag2 0.4414162
lda.pred = predict(lda.fit, Weekly[!train, ])
table(lda.pred$class, Weekly[!train, ]$Direction)
##
## Down Up
## Down 9 5
## Up 34 56
mean(lda.pred$class == Weekly[!train, ]$Direction)
## [1] 0.625
LDA gives same prediction power of 62.5% shown previously.
Repeat (d) using QDA.
qda.fit = qda(Direction ~ Lag2, data = Weekly, subset = train)
qda.fit
## Call:
## qda(Direction ~ Lag2, data = Weekly, subset = train)
##
## Prior probabilities of groups:
## Down Up
## 0.4477157 0.5522843
##
## Group means:
## Lag2
## Down -0.03568254
## Up 0.26036581
qda.pred = predict(qda.fit, Weekly[!train, ])
table(qda.pred$class, Weekly[!train, ]$Direction)
##
## Down Up
## Down 0 0
## Up 43 61
mean(qda.pred$class == Weekly[!train, ]$Direction)
## [1] 0.5865385
The QDA gives a prediction power of 58% which is less than both the Logistic Regression and the LDA at 62.5%. This suggest that Logistic Reg and LDA models are better for this data so far.
Repeat (d) using KNN with K = 1.
library(class)
train.X = data.frame(Weekly[train, ]$Lag2)
test.X = data.frame(Weekly[!train, ]$Lag2)
train.Direction = Weekly[train, ]$Direction
set.seed(1)
knn.pred = knn(train.X, test.X, train.Direction, k = 1)
table(knn.pred, Weekly[!train, ]$Direction)
##
## knn.pred Down Up
## Down 21 30
## Up 22 31
mean(knn.pred == Weekly[!train, ]$Direction)
## [1] 0.5
The K-nearest neighbor technique gives us a prediction power of 50%. This is again low and lower than all previous models.
Which of these methods appears to provide the best results on this data?
It appears that the Logistic Regression and LDA models gave the best prediction capabilities on this data.
Experiment with different combinations of predictors, including possible transformations and interactions, for each of the methods. Report the variables, method, and associated confusion matrix that appears to provide the best results on the held out data. Note that you should also experiment with values for K in the KNN classifier.
In this problem, you will develop a model to predict whether a given car gets high or low gas mileage based on the Auto data set.
summary(Auto)
## mpg cylinders displacement horsepower weight
## Min. : 9.00 Min. :3.000 Min. : 68.0 Min. : 46.0 Min. :1613
## 1st Qu.:17.00 1st Qu.:4.000 1st Qu.:105.0 1st Qu.: 75.0 1st Qu.:2225
## Median :22.75 Median :4.000 Median :151.0 Median : 93.5 Median :2804
## Mean :23.45 Mean :5.472 Mean :194.4 Mean :104.5 Mean :2978
## 3rd Qu.:29.00 3rd Qu.:8.000 3rd Qu.:275.8 3rd Qu.:126.0 3rd Qu.:3615
## Max. :46.60 Max. :8.000 Max. :455.0 Max. :230.0 Max. :5140
##
## acceleration year origin name
## Min. : 8.00 Min. :70.00 Min. :1.000 amc matador : 5
## 1st Qu.:13.78 1st Qu.:73.00 1st Qu.:1.000 ford pinto : 5
## Median :15.50 Median :76.00 Median :1.000 toyota corolla : 5
## Mean :15.54 Mean :75.98 Mean :1.577 amc gremlin : 4
## 3rd Qu.:17.02 3rd Qu.:79.00 3rd Qu.:2.000 amc hornet : 4
## Max. :24.80 Max. :82.00 Max. :3.000 chevrolet chevette: 4
## (Other) :365
Create a binary variable, mpg01, that contains a 1 if mpg contains a value above its median, and a 0 if mpg contains a value below its median. You can compute the median using the median() function. Note you may find it helpful to use the data.frame() function to create a single data set containing both mpg01 and the other Auto variables.
mpg01 = rep(0, dim(Auto)[1])
mpg01[Auto$mpg > median(Auto$mpg)] = 1
Auto = data.frame(Auto, mpg01)
head(Auto)
## mpg cylinders displacement horsepower weight acceleration year origin
## 1 18 8 307 130 3504 12.0 70 1
## 2 15 8 350 165 3693 11.5 70 1
## 3 18 8 318 150 3436 11.0 70 1
## 4 16 8 304 150 3433 12.0 70 1
## 5 17 8 302 140 3449 10.5 70 1
## 6 15 8 429 198 4341 10.0 70 1
## name mpg01
## 1 chevrolet chevelle malibu 0
## 2 buick skylark 320 0
## 3 plymouth satellite 0
## 4 amc rebel sst 0
## 5 ford torino 0
## 6 ford galaxie 500 0
Explore the data graphically in order to investigate the association between mpg01 and the other features. Which of the other features seem most likely to be useful in predicting mpg01? Scatterplots and boxplots may be useful tools to answer this question. Describe your findings.
par(mfrow = c(2, 3))
plot(factor(Auto$mpg01), Auto$cylinders, ylab = "Cylinders")
plot(factor(Auto$mpg01), Auto$displacement, ylab = "Displacement")
plot(factor(Auto$mpg01), Auto$horsepower, ylab = "Horsepower")
plot(factor(Auto$mpg01), Auto$weight, ylab = "Weight")
plot(factor(Auto$mpg01), Auto$acceleration, ylab = "Acceleration")
plot(factor(Auto$mpg01), Auto$year, ylab = "Year")
mtext("Cars with above(1) and below(0) median mpg", outer = TRUE, line= -1)
From the scatter plots above we see that on ‘Cylinders’ that 4-cylinder engines account for the majority of cars with above median mpg. On ‘Horsepower’, cars with less horsepower tend to account for the above median mpd (1). Similar with ‘Weight’, cars with less weight have above average mpg. A car with a more recent ‘Year’ and the higher the ‘Acceleration’ also more often account for cars with above average mpg.
par(mfrow = c(3, 2))
plot(Auto$cylinders, Auto$mpg01, xlab = "Cylinders")
plot(Auto$displacement, Auto$mpg01, xlab = "Displacement)")
plot(Auto$horsepower, Auto$mpg01, xlab = "Horsepower")
plot(Auto$weight, Auto$mpg01, xlab = "Weight")
plot(Auto$acceleration, Auto$mpg01, xlab = "Acceleration")
plot(Auto$year, Auto$mpg01, xlab = "Year")
mtext("Cars with above(1) and below(0) median mpg", outer = TRUE, line = -1)
The scatterplots above look to show that Horsepwer and Weight could be significant predictors of above average mpg since they are showing decent clusters.
Split the data into a training set and a test set.
set.seed(1)
train = sample(dim(Auto)[1], size = 0.75*dim(Auto)[1])
Perform LDA on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b). What is the test error of the model obtained?
lda.fit = lda(mpg01 ~ cylinders + displacement + horsepower + weight + year, data = Auto, subset = train)
lda.fit
## Call:
## lda(mpg01 ~ cylinders + displacement + horsepower + weight +
## year, data = Auto, subset = train)
##
## Prior probabilities of groups:
## 0 1
## 0.4863946 0.5136054
##
## Group means:
## cylinders displacement horsepower weight year
## 0 6.804196 273.8881 129.60839 3625.434 74.39860
## 1 4.198675 118.0265 79.66225 2347.728 77.56954
##
## Coefficients of linear discriminants:
## LD1
## cylinders -0.396649943
## displacement -0.003191981
## horsepower 0.010953830
## weight -0.001045186
## year 0.121474065
lda.pred = predict(lda.fit, Auto[-train, ])
table(lda.pred$class, Auto[-train, "mpg01"], dnn = c("Predicted", "Actual"))
## Actual
## Predicted 0 1
## 0 41 0
## 1 12 45
1 - mean(lda.pred$class == Auto[-train, "mpg01"])
## [1] 0.122449
The LDA model above gives us a test error of 12.2%, which is fair.
Perform QDA on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b). What is the test error of the model obtained?
qda.fit = qda(mpg01 ~ cylinders + displacement + horsepower + weight + year, data = Auto, subset = train)
qda.fit
## Call:
## qda(mpg01 ~ cylinders + displacement + horsepower + weight +
## year, data = Auto, subset = train)
##
## Prior probabilities of groups:
## 0 1
## 0.4863946 0.5136054
##
## Group means:
## cylinders displacement horsepower weight year
## 0 6.804196 273.8881 129.60839 3625.434 74.39860
## 1 4.198675 118.0265 79.66225 2347.728 77.56954
qda.pred = predict(qda.fit, Auto[-train, ])
table(qda.pred$class, Auto[-train, "mpg01"], dnn = c("Predicted", "Actual"))
## Actual
## Predicted 0 1
## 0 44 3
## 1 9 42
1 - mean(qda.pred$class == Auto[-train, "mpg01"])
## [1] 0.122449
The QDA model above gives us a test error of 12.2%, which is the same as the LDA.
Perform logistic regression on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b). What is the test error of the model obtained?
glm.fit = glm(mpg01 ~ cylinders + displacement + horsepower + weight + year, data = Auto, subset = train,
family = "binomial")
summary(glm.fit)
##
## Call:
## glm(formula = mpg01 ~ cylinders + displacement + horsepower +
## weight + year, family = "binomial", data = Auto, subset = train)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.19202 -0.12464 0.03779 0.26337 3.05390
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -16.104909 5.463342 -2.948 0.00320 **
## cylinders 0.071647 0.461111 0.155 0.87652
## displacement -0.005088 0.011361 -0.448 0.65426
## horsepower -0.031040 0.017852 -1.739 0.08208 .
## weight -0.004063 0.001085 -3.746 0.00018 ***
## year 0.409944 0.083764 4.894 9.88e-07 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 407.35 on 293 degrees of freedom
## Residual deviance: 125.87 on 288 degrees of freedom
## AIC: 137.87
##
## Number of Fisher Scoring iterations: 7
glm.probs = predict(glm.fit, Auto[-train, ], type = "response")
glm.pred = rep(0, dim(Auto[-train, ])[1])
glm.pred[glm.probs > 0.5] = 1
table(glm.pred, Auto[-train, "mpg01"], dnn = c("Predicted", "Actual"))
## Actual
## Predicted 0 1
## 0 46 2
## 1 7 43
1 - mean(glm.pred == Auto[-train, "mpg01"])
## [1] 0.09183673
The Logistic Regression model shows a 9.1% test error. This is better than the QDA or LDA shown previously.
Perform KNN on the training data, with several values of K, in order to predict mpg01. Use only the variables that seemed most associated with mpg01 in (b). What test errors do you obtain? Which value of K seems to perform the best on this data set?
scaled.auto = scale(Auto[, -c(8, 9, 10)])
head(scaled.auto)
## mpg cylinders displacement horsepower weight acceleration year
## 1 -0.6977467 1.482053 1.075915 0.6632851 0.6197483 -1.283618 -1.623241
## 2 -1.0821153 1.482053 1.486832 1.5725848 0.8422577 -1.464852 -1.623241
## 3 -0.6977467 1.482053 1.181033 1.1828849 0.5396921 -1.646086 -1.623241
## 4 -0.9539925 1.482053 1.047246 1.1828849 0.5361602 -1.283618 -1.623241
## 5 -0.8258696 1.482053 1.028134 0.9230850 0.5549969 -1.827320 -1.623241
## 6 -1.0821153 1.482053 2.241772 2.4299245 1.6051468 -2.008554 -1.623241
cols = c("cylinders", "displacement", "horsepower", "weight", "year")
train.X1 = scaled.auto[train, cols]
test.X1 = scaled.auto[-train, cols]
train.mpg01 = Auto[train, "mpg01"]
set.seed(1)
k1.vals = (1:10)*2 - 1
knn1.error = rep(0, 10)
knn1.tables = list()
for (i in 1:10){
knn1.pred = knn(train.X1, test.X1, train.mpg01, k = 2*i - 1)
knn1.tables[[k1.vals[i]]] = table(knn1.pred, Auto[-train, "mpg01"], dnn = c("Predicted", "Actual"))
knn1.error[i] = 1 - mean(knn1.pred == Auto[-train, "mpg01"])
}
cbind(k1.vals, knn1.error)
## k1.vals knn1.error
## [1,] 1 0.05102041
## [2,] 3 0.07142857
## [3,] 5 0.07142857
## [4,] 7 0.08163265
## [5,] 9 0.08163265
## [6,] 11 0.10204082
## [7,] 13 0.11224490
## [8,] 15 0.11224490
## [9,] 17 0.10204082
## [10,] 19 0.10204082
knn1.tables[[1]]
## Actual
## Predicted 0 1
## 0 48 0
## 1 5 45
We see that k=1 has the lowest error at 5.1 %, and that generally as we increase k, we increase our error. We can conclude that K-nearest neighbor has the lowest error of all the models we’ve attempted.
Using the Boston data set, fit classification models in order to predict whether a given suburb has a crime rate above or below the median. Explore logistic regression, LDA, and KNN models using various subsets of the predictors. Describe your findings.
bstn = MASS::Boston
summary(bstn)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08205 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
crime01 <- rep(0, length(bstn$crim))
crime01[bstn$crim > median(bstn$crim)] <- 1
bstn= data.frame(bstn,crime01)
head(bstn)
## crim zn indus chas nox rm age dis rad tax ptratio black lstat
## 1 0.00632 18 2.31 0 0.538 6.575 65.2 4.0900 1 296 15.3 396.90 4.98
## 2 0.02731 0 7.07 0 0.469 6.421 78.9 4.9671 2 242 17.8 396.90 9.14
## 3 0.02729 0 7.07 0 0.469 7.185 61.1 4.9671 2 242 17.8 392.83 4.03
## 4 0.03237 0 2.18 0 0.458 6.998 45.8 6.0622 3 222 18.7 394.63 2.94
## 5 0.06905 0 2.18 0 0.458 7.147 54.2 6.0622 3 222 18.7 396.90 5.33
## 6 0.02985 0 2.18 0 0.458 6.430 58.7 6.0622 3 222 18.7 394.12 5.21
## medv crime01
## 1 24.0 0
## 2 21.6 0
## 3 34.7 0
## 4 33.4 0
## 5 36.2 0
## 6 28.7 0
cor(bstn[, -15])[, "crim"]
## crim zn indus chas nox rm
## 1.00000000 -0.20046922 0.40658341 -0.05589158 0.42097171 -0.21924670
## age dis rad tax ptratio black
## 0.35273425 -0.37967009 0.62550515 0.58276431 0.28994558 -0.38506394
## lstat medv
## 0.45562148 -0.38830461
corrplot(cor(bstn), method="square")
From the correlation numbers and correlation plot above, it seems that indus, nox, age, rad, and tax are strong associations with the crime variable.
train = 1:(dim(bstn)[1]/2)
test = (dim(bstn)[1]/2 + 1):dim(bstn)[1]
bstn.train = Boston[train, ]
bstn.test = bstn[test, ]
crime01.test = crime01[test]
{r} set.seed(1) bstn.fit <-glm(crime01~ indus+nox+age+rad+tax, data=bstn.train,family=binomial) bstn.probs = predict(bstn.fit, bstn.test, type = “response”) bstn.pred = rep(0, length(bstn.probs)) bstn.pred[bstn.probs > 0.5] = 1 table(bstn.pred, crime01.test)
{r}
summary(bstn.fit)
{r}
mean(bstn.pred != crime01.test)
The Logistic Regression gives us a very high prediction power of 94%. This could be too high but we will test other models to see.
### Linear Discriminat Analysis
{r}
bstn.lda <-lda(crime01~ indus+nox+age+dis+rad+tax, data=bstn.train,family=binomial)
bstn_lda.pred = predict(bstn.lda, bstn.test)
table(bstn_lda.pred$class, crime01.test)
`
{r}
bstn.lda
`
{r}
mean(bstn_lda.pred$class != crime01.test)
The LDA model has a low test error of 10.6%
{r} train.K1=cbind(indus,nox,age,rad,tax)[train,] test.K1=cbind(indus,nox,age,rad,tax)[test,] bstnk1.pred=knn(train.K1, test.K1, crime01.test, k=1) table(bstnk1.pred,crime01.test)
{r}
mean(bstnk1.pred !=crime01.test)
{r} train.K5=cbind(indus,nox,age,rad,tax)[train,] test.K5=cbind(indus,nox,age,rad,tax)[test,] bstnk5.pred=knn(train.K5, test.K5, crime01.test, k=5) table(bstnk5.pred,crime01.test)
{r}
mean(bstnk5.pred !=crime01.test)
{r} train.K3=cbind(indus,nox,age,rad,tax)[train,] test.K3=cbind(indus,nox,age,rad,tax)[test,] bstnk3.pred=knn(train.K3, test.K3, crime01.test, k=3) table(bstnk3.pred,crime01.test) ` {r} mean(bstnk3.pred !=crime01.test) ```
Between k=1 and k=5, we see that k=1 has a higher error rate at 84% compared to k=5 at 26%. K-nearest neighbor may not be the best model for this data.
In conclusion, the Logistic Regression looked to be the best model for this particular data set.