library(ISLR)
library(MASS)
## Warning: package 'MASS' was built under R version 3.6.2
library(class)
Applied Problem #10.
a.) Produce some numerical and graphical summaries of the Weekly data. Do there appear to be any patterns?
summary(Weekly)
## Year Lag1 Lag2 Lag3
## Min. :1990 Min. :-18.1950 Min. :-18.1950 Min. :-18.1950
## 1st Qu.:1995 1st Qu.: -1.1540 1st Qu.: -1.1540 1st Qu.: -1.1580
## Median :2000 Median : 0.2410 Median : 0.2410 Median : 0.2410
## Mean :2000 Mean : 0.1506 Mean : 0.1511 Mean : 0.1472
## 3rd Qu.:2005 3rd Qu.: 1.4050 3rd Qu.: 1.4090 3rd Qu.: 1.4090
## Max. :2010 Max. : 12.0260 Max. : 12.0260 Max. : 12.0260
## Lag4 Lag5 Volume
## Min. :-18.1950 Min. :-18.1950 Min. :0.08747
## 1st Qu.: -1.1580 1st Qu.: -1.1660 1st Qu.:0.33202
## Median : 0.2380 Median : 0.2340 Median :1.00268
## Mean : 0.1458 Mean : 0.1399 Mean :1.57462
## 3rd Qu.: 1.4090 3rd Qu.: 1.4050 3rd Qu.:2.05373
## Max. : 12.0260 Max. : 12.0260 Max. :9.32821
## Today Direction
## Min. :-18.1950 Down:484
## 1st Qu.: -1.1540 Up :605
## Median : 0.2410
## Mean : 0.1499
## 3rd Qu.: 1.4050
## Max. : 12.0260
pairs(Weekly)
cor(Weekly[,-9])
## Year Lag1 Lag2 Lag3 Lag4
## Year 1.00000000 -0.032289274 -0.03339001 -0.03000649 -0.031127923
## Lag1 -0.03228927 1.000000000 -0.07485305 0.05863568 -0.071273876
## Lag2 -0.03339001 -0.074853051 1.00000000 -0.07572091 0.058381535
## Lag3 -0.03000649 0.058635682 -0.07572091 1.00000000 -0.075395865
## Lag4 -0.03112792 -0.071273876 0.05838153 -0.07539587 1.000000000
## Lag5 -0.03051910 -0.008183096 -0.07249948 0.06065717 -0.075675027
## Volume 0.84194162 -0.064951313 -0.08551314 -0.06928771 -0.061074617
## Today -0.03245989 -0.075031842 0.05916672 -0.07124364 -0.007825873
## Lag5 Volume Today
## Year -0.030519101 0.84194162 -0.032459894
## Lag1 -0.008183096 -0.06495131 -0.075031842
## Lag2 -0.072499482 -0.08551314 0.059166717
## Lag3 0.060657175 -0.06928771 -0.071243639
## Lag4 -0.075675027 -0.06107462 -0.007825873
## Lag5 1.000000000 -0.05851741 0.011012698
## Volume -0.058517414 1.00000000 -0.033077783
## Today 0.011012698 -0.03307778 1.000000000
library(corrplot)
## corrplot 0.84 loaded
corrplot(cor(Weekly[,-9]), method="square")
attach(Weekly)
Looking at the pairwise scatterplot matrix, it is hard to say whether there are any patterns. So I created a correlation table, and the only notable correlation is 0.8419, between "Volume" and "Year".
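To double-check that one pattern, here is a minimal sketch plotting Volume against Year (the log scale is my own choice, since volume grows roughly exponentially over the period):
plot(Weekly$Year, Weekly$Volume, log="y",
     xlab="Year", ylab="Volume (log scale)") # volume trends steadily upward with year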
b.) Use the full data set to perform a logistic regression with Direction as the response and the five lag variables plus Volume as predictors. Use the summary function to print the results. Do any of the predictors appear to be statistically significant? If so, which ones?
glm.fits = glm(Direction~Lag1+Lag2+Lag3+Lag4+Lag5+Volume,family = "binomial", data=Weekly)
summary(glm.fits)
##
## Call:
## glm(formula = Direction ~ Lag1 + Lag2 + Lag3 + Lag4 + Lag5 +
## Volume, family = "binomial", data = Weekly)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.6949 -1.2565 0.9913 1.0849 1.4579
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.26686 0.08593 3.106 0.0019 **
## Lag1 -0.04127 0.02641 -1.563 0.1181
## Lag2 0.05844 0.02686 2.175 0.0296 *
## Lag3 -0.01606 0.02666 -0.602 0.5469
## Lag4 -0.02779 0.02646 -1.050 0.2937
## Lag5 -0.01447 0.02638 -0.549 0.5833
## Volume -0.02274 0.03690 -0.616 0.5377
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 1496.2 on 1088 degrees of freedom
## Residual deviance: 1486.4 on 1082 degrees of freedom
## AIC: 1500.4
##
## Number of Fisher Scoring iterations: 4
Out of all the predictors, only "Lag2" appears to be statistically significant (p = 0.0296) for predicting whether the market had a positive or negative return in a given week.
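Rather than reading significance off the printed table, the coefficient matrix returned by summary() can also be filtered programmatically. A small sketch (the 0.05 cutoff is my own choice):
coefs = coef(summary(glm.fits))                # estimate, std. error, z value, p-value
coefs[coefs[,"Pr(>|z|)"] < 0.05, , drop=FALSE] # keeps only the intercept and Lag2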
c.) Compute the confusion matrix and overall fraction of correct predictions. Explain what the confusion matrix is telling you about the types of mistakes made by logistic regression.
glm.probs=predict(glm.fits,type="response")
glm.probs[1:10]
## 1 2 3 4 5 6 7
## 0.6086249 0.6010314 0.5875699 0.4816416 0.6169013 0.5684190 0.5786097
## 8 9 10
## 0.5151972 0.5715200 0.5554287
contrasts(Direction)
## Up
## Down 0
## Up 1
glm.pred=rep("Down",1089)
glm.pred[glm.probs>.5]="Up"
table(glm.pred,Direction)
## Direction
## glm.pred Down Up
## Down 54 48
## Up 430 557
Looking at the confusion matrix, we see that the model is not especially accurate. It produces 430 false positives (weeks predicted "Up" that were actually "Down"), while correctly identifying 557 true positives. Overall, the model predicts correctly (54 + 557)/1089 = 56.11% of the time.
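The headline rates can be recomputed directly from the confusion matrix; a short sketch:
conf = table(glm.pred, Direction)
sum(diag(conf))/sum(conf)               # overall accuracy: (54+557)/1089 = 0.5611
conf["Up","Up"]/sum(conf[,"Up"])        # fraction of "Up" weeks caught: 557/605 = 0.9207
conf["Down","Down"]/sum(conf[,"Down"])  # fraction of "Down" weeks caught: 54/484 = 0.1116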
d.) Now fit the logistic regression model using a training data period from 1990 to 2008, with Lag2 as the only predictor. Compute the confusion matrix and the overall fraction of correct predictions for the held out data (that is, the data from 2009 and 2010).
train=(Year<2009)
Weekly.2009=Weekly[!train,]
dim(Weekly.2009)
## [1] 104 9
Direction.2009=Direction[!train]
glm.fits=glm(Direction~Lag2,data=Weekly,family=binomial,subset=train)
glm.probs=predict(glm.fits,Weekly.2009,type="response")
glm.pred=rep("Down",104)
glm.pred[glm.probs>.5]="Up"
table(glm.pred,Direction.2009)
## Direction.2009
## glm.pred Down Up
## Down 13 15
## Up 30 46
mean(glm.pred == Direction.2009)
## [1] 0.5673077
We see a test accuracy of 56.73%.
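For context, the test set contains 61 "Up" weeks out of 104 (from the confusion matrix, 15 + 46), so a naive rule that always predicts "Up" would already be right 58.65% of the time; a one-line check:
mean(Direction.2009 == "Up")  # naive always-"Up" baseline: 61/104 = 0.5865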
e.) Repeat d.) using LDA
lda.fit=lda(Direction~Lag2,data=Weekly,subset=train)
lda.fit
## Call:
## lda(Direction ~ Lag2, data = Weekly, subset = train)
##
## Prior probabilities of groups:
## Down Up
## 0.4477157 0.5522843
##
## Group means:
## Lag2
## Down -0.03568254
## Up 0.26036581
##
## Coefficients of linear discriminants:
## LD1
## Lag2 0.4414162
plot(lda.fit)
lda.pred=predict(lda.fit, Weekly.2009)
names(lda.pred)
## [1] "class" "posterior" "x"
lda.class=lda.pred$class
table(lda.class,Direction.2009)
## Direction.2009
## lda.class Down Up
## Down 0 0
## Up 43 61
mean(lda.class==Direction.2009)
## [1] 0.5865385
We see a test accuracy of 58.65%.
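Note from the confusion matrix that LDA classifies every test week as "Up", so the 58.65% is just the share of "Up" weeks in the test set. A quick check on the posterior probabilities confirms this (sketch):
summary(lda.pred$posterior[,"Up"])  # every posterior for "Up" is at or above 0.5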
f.) Repeat d.) using QDA
qda.fit=qda(Direction~Lag2,data=Weekly,subset=train)
qda.fit
## Call:
## qda(Direction ~ Lag2, data = Weekly, subset = train)
##
## Prior probabilities of groups:
## Down Up
## 0.4477157 0.5522843
##
## Group means:
## Lag2
## Down -0.03568254
## Up 0.26036581
qda.class=predict(qda.fit,Weekly.2009)$class
table(qda.class,Direction.2009)
## Direction.2009
## qda.class Down Up
## Down 0 0
## Up 43 61
mean(qda.class==Direction.2009)
## [1] 0.5865385
We see a test accuracy of 58.65%.
g.) Repeat d.) using KNN with K = 1.
train.X=as.matrix(Lag2[train])
test.X=as.matrix(Lag2[!train])
train.Direction=Direction[train]
set.seed(1)
knn.pred=knn(train.X,test.X,train.Direction,k=1)
table(knn.pred,Direction.2009)
mean(knn.pred==Direction.2009)
h.) Which of these methods appears to provide the best results on this data?
LDA and QDA appear to provide the best results on this data, each with a test accuracy of 58.65%, ahead of logistic regression at 56.73%.
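To make the comparison concrete, a small sketch that gathers the test accuracies side by side (it assumes the prediction objects from parts d. through g. are still in the workspace):
accs = c(Logistic = mean(glm.pred == Direction.2009),
         LDA = mean(lda.class == Direction.2009),
         QDA = mean(qda.class == Direction.2009),
         KNN1 = mean(knn.pred == Direction.2009))
sort(accs, decreasing=TRUE)  # sorts the methods by held-out accuracy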
detach(Weekly)
Applied Problem #11.
a.) Create a binary variable, mpg01, that contains a 1 if mpg contains a value above its median, and a 0 if mpg contains a value below its median. You can compute the median using the median() function. Note you may find it helpful to use the data.frame() function to create a single data set containing both mpg01 and the other Auto variables.
summary(Auto)
## mpg cylinders displacement horsepower
## Min. : 9.00 Min. :3.000 Min. : 68.0 Min. : 46.0
## 1st Qu.:17.00 1st Qu.:4.000 1st Qu.:105.0 1st Qu.: 75.0
## Median :22.75 Median :4.000 Median :151.0 Median : 93.5
## Mean :23.45 Mean :5.472 Mean :194.4 Mean :104.5
## 3rd Qu.:29.00 3rd Qu.:8.000 3rd Qu.:275.8 3rd Qu.:126.0
## Max. :46.60 Max. :8.000 Max. :455.0 Max. :230.0
##
## weight acceleration year origin
## Min. :1613 Min. : 8.00 Min. :70.00 Min. :1.000
## 1st Qu.:2225 1st Qu.:13.78 1st Qu.:73.00 1st Qu.:1.000
## Median :2804 Median :15.50 Median :76.00 Median :1.000
## Mean :2978 Mean :15.54 Mean :75.98 Mean :1.577
## 3rd Qu.:3615 3rd Qu.:17.02 3rd Qu.:79.00 3rd Qu.:2.000
## Max. :5140 Max. :24.80 Max. :82.00 Max. :3.000
##
## name
## amc matador : 5
## ford pinto : 5
## toyota corolla : 5
## amc gremlin : 4
## amc hornet : 4
## chevrolet chevette: 4
## (Other) :365
library(ggplot2)
attach(Auto)
mpg01 = rep(0, length(mpg))
mpg01[mpg > median(mpg)] = 1
Auto = data.frame(Auto, mpg01)
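Since mpg01 is a median split, the two classes should be roughly balanced; a quick sanity check:
table(Auto$mpg01)  # expect roughly equal counts of 0s and 1s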
b.) Explore the data graphically in order to investigate the association between mpg01 and the other features. Which of the other features seem most likely to be useful in predicting mpg01? Scatterplots and boxplots may be useful tools to answer this question. Describe your findings.
cor(Auto[,-9])
## mpg cylinders displacement horsepower weight
## mpg 1.0000000 -0.7776175 -0.8051269 -0.7784268 -0.8322442
## cylinders -0.7776175 1.0000000 0.9508233 0.8429834 0.8975273
## displacement -0.8051269 0.9508233 1.0000000 0.8972570 0.9329944
## horsepower -0.7784268 0.8429834 0.8972570 1.0000000 0.8645377
## weight -0.8322442 0.8975273 0.9329944 0.8645377 1.0000000
## acceleration 0.4233285 -0.5046834 -0.5438005 -0.6891955 -0.4168392
## year 0.5805410 -0.3456474 -0.3698552 -0.4163615 -0.3091199
## origin 0.5652088 -0.5689316 -0.6145351 -0.4551715 -0.5850054
## acceleration year origin
## mpg 0.4233285 0.5805410 0.5652088
## cylinders -0.5046834 -0.3456474 -0.5689316
## displacement -0.5438005 -0.3698552 -0.6145351
## horsepower -0.6891955 -0.4163615 -0.4551715
## weight -0.4168392 -0.3091199 -0.5850054
## acceleration 1.0000000 0.2903161 0.2127458
## year 0.2903161 1.0000000 0.1815277
## origin 0.2127458 0.1815277 1.0000000
corrplot(cor(Auto[,-9]), method="square")
Looking at the correlation table, mpg has strong negative correlations with cylinders (-0.78), displacement (-0.81), horsepower (-0.78), and weight (-0.83). Since mpg01 is defined by the median split of mpg, these four features seem the most likely to be useful in predicting mpg01.
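The question also suggests boxplots; a minimal sketch for the four candidate predictors (the 2x2 panel layout is my own choice):
par(mfrow=c(2,2))
boxplot(cylinders ~ mpg01, data=Auto, main="cylinders by mpg01")
boxplot(displacement ~ mpg01, data=Auto, main="displacement by mpg01")
boxplot(horsepower ~ mpg01, data=Auto, main="horsepower by mpg01")
boxplot(weight ~ mpg01, data=Auto, main="weight by mpg01")
par(mfrow=c(1,1))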
c.) Split the data into a training set and a test set.
train = (year %% 2 == 0)
train.1 = Auto[train,]
test.1 = Auto[!train,]
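A quick check of the resulting split sizes:
dim(train.1)  # rows from even model years
dim(test.1)   # rows from odd model years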
d.) Perform LDA on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b). What is the test error of the model obtained?
lda.fit1 = lda(mpg01~displacement+horsepower+weight+year+cylinders, data=train.1)
lda.pred1 = predict(lda.fit1, test.1)
table(lda.pred1$class, test.1$mpg01)
mean(lda.pred1$class != test.1$mpg01)
e.) Perform QDA on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b). What is the test error of the model obtained?
qda.fit1 = qda(mpg01~displacement+horsepower+weight+year+cylinders, data=train.1)
qda.pred1 = predict(qda.fit1, test.1)
table(qda.pred1$class, test.1$mpg01)
mean(qda.pred1$class != test.1$mpg01)
f.) Perform a Logistic Regression on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b). What is the test error of the model obtained?
glm.fit1 = glm(mpg01~displacement+horsepower+weight+year+cylinders+origin, data=train.1,family=binomial)
glm.probs1 = predict(glm.fit1, test.1, type = "response")
glm.pred1 = rep(0, length(glm.probs1))
glm.pred1[glm.probs1 > 0.5] = 1
table(glm.pred1, test.1$mpg01)
mean(glm.pred1 != test.1$mpg01)
g.) Perform KNN on the training data, with several values of K, in order to predict mpg01. Use only the variables that seemed most associated with mpg01 in (b). What test errors do you obtain? Which value of K seems to perform the best on this data set?
train.X1=cbind(displacement,horsepower,weight,cylinders,year,origin)[train,]
test.X1=cbind(displacement,horsepower,weight,cylinders,year,origin)[!train,]
set.seed(1)
k.pred=knn(train.X1,test.X1,train.1$mpg01,k=1)
mean(k.pred != test.1$mpg01)
k.pred=knn(train.X1,test.X1,train.1$mpg01,k=5)
mean(k.pred != test.1$mpg01)
k.pred=knn(train.X1,test.X1,train.1$mpg01,k=10)
mean(k.pred != test.1$mpg01)
When K = 1, we obtain the lowest test error rate; as K increases, the test error also increases.
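Rather than refitting by hand for each K, a small loop sweeps a grid of K values (sketch; the grid itself is my own choice):
for (k in c(1, 3, 5, 10, 20, 50)) {
  set.seed(1)
  k.pred = knn(train.X1, test.X1, train.1$mpg01, k=k)
  cat("K =", k, "test error =", mean(k.pred != test.1$mpg01), "\n")
}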