10. Equation 4.32 derived an expression for \(\log \frac{\Pr(Y=k\mid X=x)}{\Pr(Y=K\mid X=x)}\) in the setting where \(p > 1\), so that the mean for the \(k\)th class, \(\mu_k\), is a \(p\)-dimensional vector, and the shared covariance \(\Sigma\) is a \(p \times p\) matrix. However, in the setting with \(p = 1\), (4.32) takes a simpler form, since the means \(\mu_1,\ldots,\mu_K\) and the variance \(\sigma^2\) are scalars. In this simpler setting, repeat the calculation in (4.32), and provide expressions for \(a_k\) and \(b_k\) in terms of \(\pi_k\), \(\pi_K\), \(\mu_k\), \(\mu_K\), and \(\sigma^2\).
Assuming all classes share the same variance \(\sigma^2\), the posterior probability that an observation with \(X = x\) belongs to class \(k\) is:
\[\begin{equation} p_k(x)=\frac{\pi_k\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{1}{2\sigma^2}(x-\mu_k)^2\right)}{\sum_{l=1}^{K}\pi_l\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{1}{2\sigma^2}(x-\mu_l)^2\right)} \end{equation}\]
Taking the log of the ratio \(p_k(x)/p_K(x)\), the normalizing constants cancel and the expression simplifies to:
\[\begin{equation} \log\frac{p_k(x)}{p_K(x)}=\log\frac{\pi_k}{\pi_K}-\frac{(x-\mu_k)^2-(x-\mu_K)^2}{2\sigma^2}=\log\frac{\pi_k}{\pi_K}-\frac{\mu_k^2-\mu_K^2}{2\sigma^2}+\frac{\mu_k-\mu_K}{\sigma^2}x \end{equation}\]
This is linear in \(x\), of the form \(a_k + b_k x\), where \(a_k = \log\frac{\pi_k}{\pi_K}-\frac{\mu_k^2-\mu_K^2}{2\sigma^2}\) and \(b_k = \frac{\mu_k-\mu_K}{\sigma^2}\).
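As a quick sanity check (not part of the exercise), the sketch below compares the log-odds computed directly from the two normal densities with \(a_k + b_k x\); all of the parameter values are arbitrary made-up numbers.
pi_k <- 0.3; pi_K <- 0.7   # made-up prior probabilities
mu_k <- 2;   mu_K <- -1    # made-up class means
sigma <- 1.5               # shared standard deviation
x <- 0.8
# log-odds computed directly from the two normal densities
direct <- log(pi_k * dnorm(x, mu_k, sigma)) - log(pi_K * dnorm(x, mu_K, sigma))
# log-odds computed from a_k + b_k * x using the expressions above
a_k <- log(pi_k / pi_K) - (mu_k^2 - mu_K^2) / (2 * sigma^2)
b_k <- (mu_k - mu_K) / sigma^2
all.equal(direct, a_k + b_k * x)   # should be TRUE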
Work out the detailed forms of \(a_k\), \(b_{kj}\), and \(c_{kjl}\) in (4.33). Your answer should involve \(\pi_k\), \(\pi_K\), \(\mu_k\), \(\mu_K\), \(\Sigma_k\), and \(\Sigma_K\).
Equation 4.33 is the following:
\[\begin{equation} \log \frac{\Pr(Y=k\mid X=x)}{\Pr(Y=K\mid X=x)}= a_{k} + \sum_{j=1}^{p}b_{kj}x_{j} + \sum_{j=1}^{p}\sum_{l=1}^{p}c_{kjl}x_jx_l \end{equation}\]
Writing out the log ratio of the two multivariate normal densities and expanding the quadratic forms, the terms that do not involve \(x\) give \(a_k\), the terms linear in the \(x_j\) give the \(b_{kj}\), and the terms in \(x_jx_l\) give the \(c_{kjl}\).
In this case \(a_k\) equals:
\[\begin{equation} \log\frac{\pi_k}{\pi_K}-\frac{1}{2}\log\frac{|\Sigma_k|}{|\Sigma_K|}-\frac{1}{2}\left(\mu_k^T\Sigma_k^{-1}\mu_k-\mu_K^T\Sigma_K^{-1}\mu_K\right) \end{equation}\]
In this case \(b_{kj}\) equals the \(j\)th element of:
\[\begin{equation} \Sigma_k^{-1}\mu_k-\Sigma_K^{-1}\mu_K \end{equation}\]
In this case \(c_{kjl}\) equals:
\[\begin{equation} -\frac{1}{2}\left[\left(\Sigma_k^{-1}\right)_{jl}-\left(\Sigma_K^{-1}\right)_{jl}\right] \end{equation}\]
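Again as an optional check (not required by the exercise), the sketch below verifies this decomposition numerically for p = 2; all parameter values are made up, and the multivariate normal log-density is written out by hand to avoid extra packages.
pi_k <- 0.4; pi_K <- 0.6                         # made-up priors
mu_k <- c(1, -0.5); mu_K <- c(-1, 0.5)           # made-up means
Sigma_k <- matrix(c(1.0, 0.3, 0.3, 2.0), 2, 2)   # made-up covariance matrices
Sigma_K <- matrix(c(1.5, -0.2, -0.2, 1.0), 2, 2)
x <- c(0.7, -1.2)
# multivariate normal log-density
log_mvn <- function(x, mu, Sigma) {
  -0.5 * log(det(2 * pi * Sigma)) - 0.5 * t(x - mu) %*% solve(Sigma) %*% (x - mu)
}
# log-odds computed directly from the two densities
direct <- log(pi_k) + log_mvn(x, mu_k, Sigma_k) - log(pi_K) - log_mvn(x, mu_K, Sigma_K)
# log-odds computed from a_k, b_kj and c_kjl
a_k <- log(pi_k / pi_K) - 0.5 * log(det(Sigma_k) / det(Sigma_K)) -
  0.5 * (t(mu_k) %*% solve(Sigma_k) %*% mu_k - t(mu_K) %*% solve(Sigma_K) %*% mu_K)
b_k <- solve(Sigma_k) %*% mu_k - solve(Sigma_K) %*% mu_K   # vector of b_kj
C_k <- -0.5 * (solve(Sigma_k) - solve(Sigma_K))            # matrix of c_kjl
all.equal(as.numeric(direct), as.numeric(a_k + sum(b_k * x) + t(x) %*% C_k %*% x))   # should be TRUE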
13. This question should be answered using the Weekly data set, which is part of the ISLR2 package. This data is similar in nature to the Smarket data from this chapter’s lab, except that it contains 1,089 weekly returns for 21 years, from the beginning of 1990 to the end of 2010.
library(ISLR)
(a) Produce some numerical and graphical summaries of the Weekly data. Do there appear to be any patterns?
summary(Weekly)
## Year Lag1 Lag2 Lag3
## Min. :1990 Min. :-18.1950 Min. :-18.1950 Min. :-18.1950
## 1st Qu.:1995 1st Qu.: -1.1540 1st Qu.: -1.1540 1st Qu.: -1.1580
## Median :2000 Median : 0.2410 Median : 0.2410 Median : 0.2410
## Mean :2000 Mean : 0.1506 Mean : 0.1511 Mean : 0.1472
## 3rd Qu.:2005 3rd Qu.: 1.4050 3rd Qu.: 1.4090 3rd Qu.: 1.4090
## Max. :2010 Max. : 12.0260 Max. : 12.0260 Max. : 12.0260
## Lag4 Lag5 Volume Today
## Min. :-18.1950 Min. :-18.1950 Min. :0.08747 Min. :-18.1950
## 1st Qu.: -1.1580 1st Qu.: -1.1660 1st Qu.:0.33202 1st Qu.: -1.1540
## Median : 0.2380 Median : 0.2340 Median :1.00268 Median : 0.2410
## Mean : 0.1458 Mean : 0.1399 Mean :1.57462 Mean : 0.1499
## 3rd Qu.: 1.4090 3rd Qu.: 1.4050 3rd Qu.:2.05373 3rd Qu.: 1.4050
## Max. : 12.0260 Max. : 12.0260 Max. :9.32821 Max. : 12.0260
## Direction
## Down:484
## Up :605
##
##
##
##
Here is a summary of the variables in the Weekly data set.
pairs(Weekly)
Based on this scatterplot matrix, the only clear pattern is the relationship between the Year and Volume variables.
cor(Weekly[,-9])
## Year Lag1 Lag2 Lag3 Lag4
## Year 1.00000000 -0.032289274 -0.03339001 -0.03000649 -0.031127923
## Lag1 -0.03228927 1.000000000 -0.07485305 0.05863568 -0.071273876
## Lag2 -0.03339001 -0.074853051 1.00000000 -0.07572091 0.058381535
## Lag3 -0.03000649 0.058635682 -0.07572091 1.00000000 -0.075395865
## Lag4 -0.03112792 -0.071273876 0.05838153 -0.07539587 1.000000000
## Lag5 -0.03051910 -0.008183096 -0.07249948 0.06065717 -0.075675027
## Volume 0.84194162 -0.064951313 -0.08551314 -0.06928771 -0.061074617
## Today -0.03245989 -0.075031842 0.05916672 -0.07124364 -0.007825873
## Lag5 Volume Today
## Year -0.030519101 0.84194162 -0.032459894
## Lag1 -0.008183096 -0.06495131 -0.075031842
## Lag2 -0.072499482 -0.08551314 0.059166717
## Lag3 0.060657175 -0.06928771 -0.071243639
## Lag4 -0.075675027 -0.06107462 -0.007825873
## Lag5 1.000000000 -0.05851741 0.011012698
## Volume -0.058517414 1.00000000 -0.033077783
## Today 0.011012698 -0.03307778 1.000000000
Based on this correlation matrix, we can see that there is a strong positive correlation between Volume and Year (about 0.84); the correlations among the lag variables and Today are all close to zero.
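One way to see this relationship directly (a quick sketch, not required by the question) is to plot Volume against Year:
plot(Weekly$Year, Weekly$Volume, xlab = "Year", ylab = "Volume")
# Trading volume rises over the 1990-2010 period, consistent with the 0.84 correlation.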
(b) Use the full data set to perform a logistic regression with Direction as the response and the five lag variables plus Volume as predictors. Use the summary function to print the results. Do any of the predictors appear to be statistically significant? If so, which ones?
glm.fits=glm(Direction~Lag1+Lag2+Lag3+Lag4+Lag5+Volume, data = Weekly, family = binomial)
summary (glm.fits)
##
## Call:
## glm(formula = Direction ~ Lag1 + Lag2 + Lag3 + Lag4 + Lag5 +
## Volume, family = binomial, data = Weekly)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.6949 -1.2565 0.9913 1.0849 1.4579
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.26686 0.08593 3.106 0.0019 **
## Lag1 -0.04127 0.02641 -1.563 0.1181
## Lag2 0.05844 0.02686 2.175 0.0296 *
## Lag3 -0.01606 0.02666 -0.602 0.5469
## Lag4 -0.02779 0.02646 -1.050 0.2937
## Lag5 -0.01447 0.02638 -0.549 0.5833
## Volume -0.02274 0.03690 -0.616 0.5377
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 1496.2 on 1088 degrees of freedom
## Residual deviance: 1486.4 on 1082 degrees of freedom
## AIC: 1500.4
##
## Number of Fisher Scoring iterations: 4
Based on these results, Lag2 is the only predictor that is statistically significant at the 5% level (p = 0.0296). Its coefficient is 0.05844, meaning that a larger return two weeks ago is associated with a higher probability that the market goes up this week.
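To put the Lag2 coefficient on a more interpretable scale, we can exponentiate it to get an odds ratio (a sketch using the fit above):
exp(coef(glm.fits)["Lag2"])
# exp(0.05844) is roughly 1.06: a one-unit increase in Lag2 multiplies the odds
# of an Up week by about 1.06, holding the other predictors fixed.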
(c) Compute the confusion matrix and overall fraction of correct predictions. Explain what the confusion matrix is telling you about the types of mistakes made by logistic regression.
str(Weekly)
## 'data.frame': 1089 obs. of 9 variables:
## $ Year : num 1990 1990 1990 1990 1990 1990 1990 1990 1990 1990 ...
## $ Lag1 : num 0.816 -0.27 -2.576 3.514 0.712 ...
## $ Lag2 : num 1.572 0.816 -0.27 -2.576 3.514 ...
## $ Lag3 : num -3.936 1.572 0.816 -0.27 -2.576 ...
## $ Lag4 : num -0.229 -3.936 1.572 0.816 -0.27 ...
## $ Lag5 : num -3.484 -0.229 -3.936 1.572 0.816 ...
## $ Volume : num 0.155 0.149 0.16 0.162 0.154 ...
## $ Today : num -0.27 -2.576 3.514 0.712 1.178 ...
## $ Direction: Factor w/ 2 levels "Down","Up": 1 1 2 2 2 1 2 2 2 1 ...
glm.probs = predict(glm.fits,type="response")
glm.probs[1:10]
## 1 2 3 4 5 6 7 8
## 0.6086249 0.6010314 0.5875699 0.4816416 0.6169013 0.5684190 0.5786097 0.5151972
## 9 10
## 0.5715200 0.5554287
contrasts(Weekly$Direction)
## Up
## Down 0
## Up 1
glm.pred=rep("Down" , 1089)
glm.pred[glm.probs >.5] = "Up"
table(glm.pred, Weekly$Direction)
##
## glm.pred Down Up
## Down 54 48
## Up 430 557
Above is the confusion matrix.
(54+557)/1089
## [1] 0.5610652
The model correctly predicted the movement of the market 56.1% of the time.
1-.5610652
## [1] 0.4389348
Because these predictions were made on the same 1,089 observations used to fit the model, 0.4389348 is a training error rate, which likely understates the error we would see on held-out data.
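The same rates can be computed directly from the predictions rather than from the confusion matrix (a sketch using the objects created above):
mean(glm.pred != Weekly$Direction)   # training error rate, about 0.439
mean(glm.pred == Weekly$Direction)   # training accuracy, about 0.561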
(d) Now fit the logistic regression model using a training data period from 1990 to 2008, with Lag2 as the only predictor. Compute the confusion matrix and the overall fraction of correct predictions for the held out data (that is, the data from 2009 and 2010).
train=(Weekly$Year<2009)
Weekly.2009=Weekly[!train ,]
Direction.2009=Weekly$Direction[!train]
dim(Weekly.2009)
## [1] 104 9
Above we see the dimensions of the test data set, which consists of the data from 2009 and 2010. There are 104 observations in the held-out set.
glm.fits=glm(Direction~Lag2, data=Weekly, family=binomial, subset=train)
glm.probs=predict (glm.fits,Weekly.2009, type="response")
glm.pred=rep("Down",104)
glm.pred[glm.probs >.5]="Up"
table(glm.pred, Direction.2009)
## Direction.2009
## glm.pred Down Up
## Down 9 5
## Up 34 56
From this confusion matrix, we see that the model correctly predicted an increase in the stock market 56 times and correctly predicted a decrease in the market 9 times.
(56+9)/104
## [1] 0.625
This result tells us that the model correctly identified the weekly movement of the stock market 62.5% of the time for the years 2009 and 2010.
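Since the same accuracy calculation comes up in every remaining part, a small helper function (hypothetical, not part of the lab code) could be used to avoid retyping it:
accuracy <- function(pred, truth) mean(pred == truth)
accuracy(glm.pred, Direction.2009)   # should again give 0.625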
(e) Repeat (d) using LDA.
library(MASS)
lda.fit=lda(Direction~Lag2 ,data = Weekly ,subset=train)
lda.fit
## Call:
## lda(Direction ~ Lag2, data = Weekly, subset = train)
##
## Prior probabilities of groups:
## Down Up
## 0.4477157 0.5522843
##
## Group means:
## Lag2
## Down -0.03568254
## Up 0.26036581
##
## Coefficients of linear discriminants:
## LD1
## Lag2 0.4414162
Above we see that the prior probabilities of the groups are 0.4477157 and 0.5522843 for Down and Up respectively.
lda.pred=predict(lda.fit , Weekly.2009)
names(lda.pred)
## [1] "class" "posterior" "x"
lda.class=lda.pred$class
table(lda.class , Direction.2009)
## Direction.2009
## lda.class Down Up
## Down 9 5
## Up 34 56
From this we see that the LDA model correctly predicted a weekly decrease in the market 9 times and a weekly increase in the market 56 times.
(9+56)/104
## [1] 0.625
The LDA model makes a correct prediction 62.5% of the time, matching the logistic regression model from (d).
(f) Repeat (d) using QDA.
qda.fit=qda(Direction~Lag2 ,data=Weekly ,subset=train)
qda.fit
## Call:
## qda(Direction ~ Lag2, data = Weekly, subset = train)
##
## Prior probabilities of groups:
## Down Up
## 0.4477157 0.5522843
##
## Group means:
## Lag2
## Down -0.03568254
## Up 0.26036581
Above we see that the prior probabilities of the groups are 0.4477157 and 0.5522843 for Down and Up respectively.
qda.class=predict(qda.fit ,Weekly.2009)$class
table(qda.class ,Direction.2009)
## Direction.2009
## qda.class Down Up
## Down 0 0
## Up 43 61
Based on this we see that the QDA model predicts Up for every week in the test set, so it is correct on the 61 weeks when the market went up and wrong on the 43 weeks when it went down.
61/104
## [1] 0.5865385
This means that the QDA model made a correct prediction 58.65% of the time.
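To confirm that QDA never predicts Down on the held-out weeks, we can look at the posterior probabilities it assigns (a quick sketch):
qda.post <- predict(qda.fit, Weekly.2009)$posterior
summary(qda.post[, "Up"])   # every posterior probability of Up is at least 0.5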
(g) Repeat (d) using KNN with K = 1.
library(class)
train.X=cbind(Weekly$Lag2)[train ,]
test.X=cbind(Weekly$Lag2)[!train ,]
train.Direction =Weekly$Direction [train]
dim(train.X)= c(985,1)
dim(test.X)=c(104,1)
set.seed(1)
knn.pred=knn(train.X,test.X,train.Direction ,k=1)
table(knn.pred ,Direction.2009)
## Direction.2009
## knn.pred Down Up
## Down 21 30
## Up 22 31
We can see from the result of the KNN approach that the model correctly predicts Down 21 times and correctly predicts Up 31 times.
(21+31)/104
## [1] 0.5
This means that it makes correct predictions 50 percent of the time.
(h) Repeat (d) using naive Bayes.
library(e1071)
library(ISLR)
nb.fit=naiveBayes(Direction~Lag2 ,data=Weekly ,subset=train)
nb.fit
##
## Naive Bayes Classifier for Discrete Predictors
##
## Call:
## naiveBayes.default(x = X, y = Y, laplace = laplace)
##
## A-priori probabilities:
## Y
## Down Up
## 0.4477157 0.5522843
##
## Conditional probabilities:
## Lag2
## Y [,1] [,2]
## Down -0.03568254 2.199504
## Up 0.26036581 2.317485
nb.class=predict(nb.fit ,Weekly.2009)
table(nb.class ,Direction.2009)
## Direction.2009
## nb.class Down Up
## Down 0 0
## Up 43 61
We can see from this that the prior probabilities for Down and Up are 0.4477 and 0.5523 respectively. The table shows that naive Bayes, like QDA, predicts Up every week, so it correctly predicts an increase in the stock market on 61 weeks.
61/104
## [1] 0.5865385
The naive Bayes makes a correct prediction 58.65% of the time.
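If we want the estimated posterior probabilities rather than just the predicted classes, predict() for a naiveBayes fit also accepts type = "raw" (a sketch):
nb.probs <- predict(nb.fit, Weekly.2009, type = "raw")
head(nb.probs)   # one row per held-out week, columns give P(Down) and P(Up)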
(i) Which of these methods appears to provide the best results on this data?
Logistic regression and LDA give the highest fraction of correct predictions on the held-out data (62.5% each), so they perform best here. QDA and naive Bayes are next at 58.65%, and KNN with K = 1 does worst at 50%.
(j) Experiment with different combinations of predictors, including possible transformations and interactions, for each of the methods. Report the variables, method, and associated confusion matrix that appears to provide the best results on the held out data. Note that you should also experiment with values for K in the KNN classifier.
lda.fit=lda(Direction~Lag2^2 ,data = Weekly ,subset=train)
lda.fit
## Call:
## lda(Direction ~ Lag2^2, data = Weekly, subset = train)
##
## Prior probabilities of groups:
## Down Up
## 0.4477157 0.5522843
##
## Group means:
## Lag2
## Down -0.03568254
## Up 0.26036581
##
## Coefficients of linear discriminants:
## LD1
## Lag2 0.4414162
lda.pred=predict(lda.fit , Weekly.2009)
names(lda.pred)
## [1] "class" "posterior" "x"
lda.class=lda.pred$class
table(lda.class , Direction.2009)
## Direction.2009
## lda.class Down Up
## Down 9 5
## Up 34 56
We can see from this that the LDA method correctly predicted an increase in the market on 56 weeks and correctly predicted a decrease on 9 weeks.
65/104
## [1] 0.625
Note that in an R formula, Lag2^2 expands to just Lag2 (one would need I(Lag2^2) to actually square the predictor), so this is the same LDA fit as in part (e); it again predicts 62.5% of weeks correctly.
qda.fit=qda(Direction~Lag2^2 ,data=Weekly ,subset=train)
qda.fit
## Call:
## qda(Direction ~ Lag2^2, data = Weekly, subset = train)
##
## Prior probabilities of groups:
## Down Up
## 0.4477157 0.5522843
##
## Group means:
## Lag2
## Down -0.03568254
## Up 0.26036581
qda.class=predict(qda.fit ,Weekly.2009)$class
table(qda.class ,Direction.2009)
## Direction.2009
## qda.class Down Up
## Down 0 0
## Up 43 61
As with LDA, Lag2^2 in the formula is equivalent to Lag2, so this QDA fit is identical to the one in part (f) and still predicts 58.65% of weeks correctly.
glm.fits=glm(Direction~Lag2*Lag1, data = Weekly,family=binomial ,subset=train)
glm.probs=predict (glm.fits,Weekly.2009, type="response")
glm.pred=rep("Down",104)
glm.pred[glm.probs >.5]="Up"
table(glm.pred, Direction.2009)
## Direction.2009
## glm.pred Down Up
## Down 7 8
## Up 36 53
Using Lag1, Lag2, and their interaction as predictors, the logistic model correctly predicted an increase in the market on 53 weeks and a decrease on 7 weeks.
60/104
## [1] 0.5769231
The model made a correct prediction 57.69 percent of the time.
library(class)
train.X=cbind(Weekly$Lag2)[train ,]
test.X=cbind(Weekly$Lag2)[!train ,]
train.Direction =Weekly$Direction [train]
dim(train.X)= c(985,1)
dim(test.X)=c(104,1)
set.seed(1)
knn.pred=knn(train.X,test.X,train.Direction ,k=5)
table(knn.pred ,Direction.2009)
## Direction.2009
## knn.pred Down Up
## Down 16 21
## Up 27 40
With K=5, we see that the KNN model correctly predicted the outcome on 56 out of 104 weeks.
set.seed(1)
knn.pred=knn(train.X,test.X,train.Direction ,k=10)
table(knn.pred ,Direction.2009)
## Direction.2009
## knn.pred Down Up
## Down 17 21
## Up 26 40
With k=10, the KNN model correctly predicted the outcome on 57 out of 104 weeks.
set.seed(1)
knn.pred=knn(train.X,test.X,train.Direction ,k=20)
table(knn.pred ,Direction.2009)
## Direction.2009
## knn.pred Down Up
## Down 21 21
## Up 22 40
With k=20, the KNN model correctly predicted the outcome on 61 of 104 weeks.
set.seed(1)
knn.pred=knn(train.X,test.X,train.Direction ,k=30)
table(knn.pred ,Direction.2009)
## Direction.2009
## knn.pred Down Up
## Down 20 24
## Up 23 37
With k=30, the model made a correct prediction on 57 out of 104 weeks. This suggests that the optimal value of K lies somewhere between 10 and 30, with K=20 performing best among the values tried. None of these experiments improves on the 62.5% achieved by logistic regression and LDA with Lag2 alone.
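Rather than trying one value of K at a time, a short loop over a grid of K values makes the comparison easier (a sketch re-using train.X, test.X, and train.Direction from above; the grid itself is arbitrary):
k.values <- c(1, 5, 10, 15, 20, 25, 30, 40, 50)
acc <- sapply(k.values, function(k) {
  set.seed(1)   # same seed as the individual runs above, so ties break the same way
  knn.pred <- knn(train.X, test.X, train.Direction, k = k)
  mean(knn.pred == Direction.2009)
})
data.frame(K = k.values, accuracy = acc)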