This problem relates to the QDA model, in which the observations within each class are drawn from a normal distribution with a class-specific mean vector and a class-specific covariance matrix. We consider the simple case where \(p=1\); i.e. there is only one feature. Suppose that we have \(K\) classes, and that if an observation belongs to the \(k\)-th class then \(X\) comes from a one-dimensional normal distribution, \(X \sim N\left(\mu_k, \sigma_k^2\right)\). Recall that the density function for the one-dimensional normal distribution is given in (4.16). Prove that in this case, the Bayes classifier is not linear. Argue that it is in fact quadratic.
Hint: For this problem, you should follow the arguments laid out in Section 4.4.1, but without making the assumption that \(\sigma_1^2=\ldots=\sigma_K^2\).
========================================================
Equation 4.17 changes to allow \(\sigma\) to vary with \(k\), which affects our discriminant function. Substituting \(\sigma\) with \(\sigma_k\) in equation 4.17 yields:
\[ p_k(x) = \frac {\pi_k \frac {1} {\sqrt{2 \pi} \sigma_k} \exp\left(- \frac {1} {2 \sigma_k^2} (x - \mu_k)^2\right) } {\sum_{l=1}^K \pi_l \frac {1} {\sqrt{2 \pi} \sigma_l} \exp\left(- \frac {1} {2 \sigma_l^2} (x - \mu_l)^2\right) } \\\]
Then, taking the log of both sides, we get the following:
\[ \log(p_k(x)) = \log(\pi_k) + \log\left(\frac {1} {\sqrt{2 \pi} \sigma_k}\right) - \frac {1} {2 \sigma_k^2} (x - \mu_k)^2 - \log\left(\sum_{l=1}^K \pi_l \frac {1} {\sqrt{2 \pi} \sigma_l} \exp\left(- \frac {1} {2 \sigma_l^2} (x - \mu_l)^2\right) \right) \\ \]
This allows us to simplify to the discriminant by removing all the terms that do not vary with \(k\) (the \(\sqrt{2\pi}\) constant and the log of the denominator). When we simplify, we keep the additional \(x^2\) term, because the variance in the exponent now varies with \(k\) (whereas in equation 4.18 the \(x^2\) term is dropped because it does not vary with \(k\)). Thus, the function \(\delta_k(x)\) is a quadratic function of \(x\):
\[ \delta_k(x) = - \frac {(x - \mu_k)^2} {2 \sigma_k^2} - \log(\sigma_k) + \log(\pi_k) = - \frac {1} {2 \sigma_k^2} x^2 + \frac {\mu_k} {\sigma_k^2} x - \frac {\mu_k^2} {2 \sigma_k^2} - \log(\sigma_k) + \log(\pi_k) \\ \]
Hence, allowing the variance to vary by class yields a quadratic discriminant, \(\delta_k(x)\), so the Bayes classifier is quadratic rather than linear in \(x\).
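As a quick numerical illustration (a minimal sketch with made-up class parameters, not part of the proof), we can evaluate the \(p=1\) QDA discriminant for two hypothetical classes and confirm that their difference, and hence the decision boundary, is quadratic in \(x\) whenever the variances differ:
delta <- function(x, mu, sigma, prior) {
  # p = 1 QDA discriminant derived above
  -(x - mu)^2 / (2 * sigma^2) - log(sigma) + log(prior)
}
x <- seq(-4, 4, length.out = 200)
diff12 <- delta(x, mu = 0, sigma = 1, prior = 0.5) - delta(x, mu = 1, sigma = 2, prior = 0.5)
# With unequal variances the difference is quadratic in x (here the x^2
# coefficient is 1/8 - 1/2 = -0.375), so it can cross zero twice and the
# Bayes decision boundary cannot be described by a single linear rule.
round(coef(lm(diff12 ~ x + I(x^2))), 3)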
========================================================
When the number of features \(p\) is large, there tends to be a deterioration in the performance of \(\mathrm{KNN}\) and other local approaches that perform prediction using only observations that are near the test observation for which a prediction must be made. This phenomenon is known as the curse of dimensionality, and it ties into the fact that non-parametric approaches often perform poorly when \(p\) is large. We will now investigate this curse.
Suppose that we have a set of observations, each with measurements on \(p=1\) feature, \(X\). We assume that \(X\) is uniformly (evenly) distributed on \([0,1]\). Associated with each observation is a response value. Suppose that we wish to predict a test observation’s response using only observations that are within \(10 \%\) of the range of \(X\) closest to that test observation. For instance, in order to predict the response for a test observation with \(X=0.6\), we will use observations in the range \([0.55,0.65]\). On average, what fraction of the available observations will we use to make the prediction?
Now suppose that we have a set of observations, each with measurements on \(p=3\) features, \(X_1, X_2\) and \(X_3\). We assume that \(\left(X_1, X_2, X_3\right)\) are uniformly distributed on \([0,1] \times[0,1] \times[0,1]\). We wish to predict a test observation’s response using only observations that are within \(10 \%\) of the range of \(X_1\), within \(10 \%\) of the range of \(X_2\), and within \(10 \%\) of the range of \(X_3\) closest to that test observation. For instance, in order to predict the response for a test observation with \(X_1=0.6, X_2=0.35\) and \(X_3=0.15\), we will use observations in the range \([0.55,0.65]\) for \(X_1\), in the range \([0.3,0.4]\) for \(X_2\), and in the range \([0.1,0.2]\) for \(X_3\). On average, what fraction of the available observations will we use to make the prediction?
Now suppose that we have a set of observations on \(p=200\) features. Again the observations are uniformly distributed on each feature, and again each feature ranges in value from 0 to 1. We wish to predict a test observation’s response using observations within the \(10 \%\) of each feature’s range that is closest to that test observation. What fraction of the available observations will we use to make the prediction?
Using your answers to parts (a)-(c), argue that a drawback of KNN when \(p\) is large is that there are very few training observations “near” any given test observation.
Now suppose that we wish to make a prediction for a test observation by creating a \(p\)-dimensional hypercube centered around the test observation that contains, on average, \(10 \%\) of the training observations. For \(p=1, 3\), and 200, what is the length of each side of the hypercube? Comment on your answer.
Note: A hypercube is a generalization of a cube to an arbitrary number of dimensions. When \(p=1\), a hypercube is simply a line segment, when \(p=2\) it is a square, when \(p=3\) a cube, and when \(p=200\) it is a 200-dimensional cube.
========================================================
As \(X\) is uniformly distributed and we take the nearest 10% of its range, we would expect our sample to contain, on average, 10% of all observations. (For test values below 0.05 or above 0.95, the nearest 10% would be the fixed intervals \([0,0.1]\) and \([0.9,1]\), respectively, which still cover 10% of the range.)
Assuming all the features are uniformly distributed (and are independent in those distributions), we now have a total joint probability that is represented by the intersection of the nearest 10% of observations for each feature. Thus, this intersection would be 0.001 or 0.1%: \[ \frac{1}{10} \times \frac{1}{10}\times \frac{1}{10} = \frac{1}{1000} \]
By the properties of joint distributions, the fraction of all observations we would use under those assumptions would be \(\frac{1}{10^{200}}\), a sample more than 100 orders of magnitude smaller than sampling a single atom out of all the matter in the known universe. (This is a bad way to do this.)
If our KNN-style rule keeps only observations within the nearest 10% of each feature’s range, and the features are independently uniform, then the fraction of available “nearby” observations decreases exponentially with each feature we add to \(p\). The comparison in part (c), a sample incomprehensibly smaller than a single atom out of all the atoms in the universe, illustrates how quickly this breaks down on a real-world dataset.
For \(p = 1\), the nearest 10% of values have a length of \(0.1\) as was used in our first example.
As we increase the number of dimensions of our hypercube, the hypervolume needs to stay fixed at 10% of the total space, so the “length” of each side of the hypercube is the \(p\)th root of 0.1. Thus, for \(p = 3\), \(length = 0.1^{\frac{1}{3}} \approx 0.46\), and for \(p = 200\), \(length = 0.1^{\frac{1}{200}} \approx 0.989\). This suffers from the opposite problem of the previous sampling method: as we increase the number of features, the neighborhood spans nearly the entire range of every feature, so the observations we keep are no longer “near” the test point and the nonparametric method degenerates toward simply computing an overall mean (losing the value of a more flexible model).
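Purely as a sanity check of the arithmetic above (not part of the original answer, and ignoring the small edge effects near 0 and 1), the fractions and hypercube side lengths can be computed in a few lines of R:
p <- c(1, 3, 200)
fraction_used <- 0.1^p        # parts (a)-(c): 0.1, 0.001, 1e-200
side_length   <- 0.1^(1 / p)  # part (e): 0.1, ~0.464, ~0.989
data.frame(p, fraction_used, side_length)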
========================================================
Suppose we collect data for a group of students in a statistics class with variables \(X_1=\) hours studied, \(X_2=\) undergrad GPA, and \(Y=\) receive an A. We fit a logistic regression and produce estimated coefficients \(\hat{\beta}_0=-10, \hat{\beta}_1=0.15, \hat{\beta}_2=2\).
Estimate the probability that a student who studies for \(45 \mathrm{~h}\) and has an undergrad GPA of \(3.75\) gets an \(\mathrm{A}\) in the class.
How many hours would the student in part (a) need to study to have a \(75 \%\) chance of getting an A in the class?
========================================================
A 98.6% chance:
\[ \log\left(\frac{P(X)}{1 - P(X)}\right) = \hat\beta_0 + \hat\beta_1X_1 + \hat\beta_2X_2 , \\ P(X) = \frac{\exp(\hat\beta_0 + \hat\beta_1X_1 + \hat\beta_2X_2)}{1 + \exp(\hat\beta_0 + \hat\beta_1X_1 + \hat\beta_2X_2)}\\ P(X) = \frac{\exp(-10 + 0.15 \times 45 + 2 \times 3.75)}{1 + \exp(-10 + 0.15 \times 45 + 2 \times 3.75)}\\ P(X) = \frac{\exp(4.25)}{1+\exp(4.25)} \approx 0.986 \\ \]
About 24 hours:
\[ \log\left(\frac{P(X)}{1 - P(X)}\right) = \hat\beta_0 + \hat\beta_1X_1 + \hat\beta_2X_2\\ \log\left(\frac{0.75}{1 - 0.75}\right) = -10 + 0.15X_1 + 7.5\\ 1.099 = -2.5 + 0.15X_1\\ X_1 \approx 24 \text{ hours} \\ \]
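Both numbers are easy to verify in R with the built-in logistic CDF and its inverse (a small check, not part of the original answer):
# (a) predicted probability for 45 hours studied and a 3.75 GPA
plogis(-10 + 0.15 * 45 + 2 * 3.75)         # ~0.986
# (b) hours needed for a 75% chance, holding GPA at 3.75
(qlogis(0.75) - (-10) - 2 * 3.75) / 0.15   # ~24 hours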
========================================================
Equation \(4.32\) derived an expression for \(\log \left(\frac{\operatorname{Pr}(Y=k \mid X=x)}{\operatorname{Pr}(Y=K \mid X=x)}\right)\) in the setting where \(p>1\), so that the mean for the \(k\)th class, \(\mu_k\), is a \(p\)-dimensional vector, and the shared covariance \(\boldsymbol{\Sigma}\) is a \(p \times p\) matrix. However, in the setting with \(p=1\), (4.32) takes a simpler form, since the means \(\mu_1, \ldots, \mu_K\) and the variance \(\sigma^2\) are scalars. In this simpler setting, repeat the calculation in (4.32), and provide expressions for \(a_k\) and \(b_{kj}\) in terms of \(\pi_k, \pi_K, \mu_k, \mu_K\), and \(\sigma^2\).
========================================================
If we reduce the dimensionality to \(p=1\), the inverse of the covariance matrix, \(\boldsymbol{\Sigma}^{-1}\), reduces to the scalar \(\frac{1}{\sigma^2}\), and the vectors \(x\), \(\mu_k\), and \(\mu_K\) reduce to scalars (here \(\mu_K\) is the mean of the baseline class \(K\), not the mean across all classes). The sum of \(b_{kj}x_j\) across \(j\) also collapses to a single term \(b_k x\), since there is only one feature. Thus,
\[ \log \left(\frac{\operatorname{Pr}(Y=k \mid X=x)}{\operatorname{Pr}(Y=K \mid X=x)}\right) = \log\left(\frac{\pi_kf_k(x)}{\pi_Kf_K(x)}\right)\\ = \log(\frac{\pi_k}{\pi_K}) - \frac{1}{2\sigma^2}(x-\mu_k)^2 + \frac{1}{2\sigma^2}(x-\mu_K)^2 \\ = \log(\frac{\pi_k}{\pi_K}) - \frac{1}{2\sigma^2}(\mu_k^2 -\mu_K^2) + \frac{1}{\sigma^2}(\mu_k-\mu_K)x\\ \]
Matching terms with equation (4.32) gives us \(a_k = \log(\frac{\pi_k}{\pi_K}) - \frac{1}{2\sigma^2}(\mu_k^2-\mu_K^2)\) and \(b_k = \frac{1}{\sigma^2}(\mu_k-\mu_K)\).
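As a quick sanity check of these expressions (a minimal sketch with made-up values for \(\pi_k, \pi_K, \mu_k, \mu_K\), and \(\sigma\), which are not from the text), the log odds computed directly from the normal densities should coincide with \(a_k + b_k x\):
pi_k <- 0.3; pi_K <- 0.7; mu_k <- 1; mu_K <- -0.5; sigma <- 1.5
x <- seq(-3, 3, length.out = 5)
# log odds computed straight from the class densities (shared sigma)
direct <- log(pi_k * dnorm(x, mu_k, sigma)) - log(pi_K * dnorm(x, mu_K, sigma))
# log odds from the derived linear form a_k + b_k * x
a_k <- log(pi_k / pi_K) - (mu_k^2 - mu_K^2) / (2 * sigma^2)
b_k <- (mu_k - mu_K) / sigma^2
all.equal(direct, a_k + b_k * x)  # TRUE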
========================================================
Suppose that you wish to classify an observation \(X \in \mathbb{R}\) into apples and oranges. You fit a logistic regression model and find that
\[ \widehat{\operatorname{Pr}}(Y=\text { orange } \mid X=x)=\frac{\exp \left(\hat{\beta}_0+\hat{\beta}_1 x\right)}{1+\exp \left(\hat{\beta}_0+\hat{\beta}_1 x\right)} . \]
Your friend fits a logistic regression model to the same data using the softmax formulation in (4.13), and finds that
\[ \begin{array}{r} \widehat{\operatorname{Pr}}(Y=\text { orange } \mid X=x)= \\ \qquad \frac{\exp \left(\hat{\alpha}_{\text {orange0 }}+\hat{\alpha}_{\text {orange1 }} x\right)}{\exp \left(\hat{\alpha}_{\text {orange } 0}+\hat{\alpha}_{\text {orange } 1} x\right)+\exp \left(\hat{\alpha}_{\text {apple0 }}+\hat{\alpha}_{\text {apple1 }} x\right)} . \end{array} \]
What is the log odds of orange versus apple in your model?
What is the log odds of orange versus apple in your friend’s model?
Suppose that in your model, \(\hat{\beta}_0=3\) and \(\hat{\beta}_1=-2\). What are the coefficient estimates in your friend’s model? Be as specific as possible.
Now suppose that you and your friend fit the same two models on a different data set. This time, your friend gets the coefficient estimates \(\hat{\alpha}_{\text {orange0 }}=1.5, \hat{\alpha}_{\text {orange } 1}=-2.4, \hat{\alpha}_{\text {apple } 0}=3.6, \hat{\alpha}_{\text {apple1 }}=0.8\). What are the coefficient estimates in your model?
Finally, suppose you apply both models from (d) to a data set with 2,000 test observations. What fraction of the time do you expect the predicted class labels from your model to agree with those from your friend’s model? Explain your answer.
========================================================
Rearranging our model into the form of equation (4.3) and taking the log of the odds gives the log odds of orange versus apple for this simple binary classification (the only other class is apples): \[ logodds(\widehat{\operatorname{Pr}}(Y=\text { orange } \mid X=x)) = \hat{\beta}_0+\hat{\beta}_1 x \]
Because our friend is taking the softmax approach, we divide the odds of oranges by the odds of apples using his equation (with the denominators cancelling out): \[ odds \left(\frac{\widehat{\operatorname{Pr}}(Y=\text { orange } \mid X=x)}{\widehat{\operatorname{Pr}}(Y=\text { apple } \mid X=x)}\right) = \frac{\exp \left(\hat{\alpha}_{\text {orange0 }}+\hat{\alpha}_{\text {orange1 }} x\right)}{\exp \left(\hat{\alpha}_{\text {apple0 }}+\hat{\alpha}_{\text {apple1 }} x\right)} \] And then, taking the log of the odds: \[ \\ logodds \left(\frac{\widehat{\operatorname{Pr}}(Y=\text { orange } \mid X=x)}{\widehat{\operatorname{Pr}}(Y=\text { apple } \mid X=x)}\right) = \hat{\alpha}_{\text {orange0 }}+\hat{\alpha}_{\text {orange1 }} x - \hat{\alpha}_{\text {apple0 }}-\hat{\alpha}_{\text {apple1 }} x \]
Our friend’s individual coefficients are not uniquely determined (the softmax parameterization is over-specified), but their differences are pinned down exactly by our model:
\[ \\ \hat{\beta_0} = \hat{\alpha}_{\text {orange0 }}- \hat{\alpha}_{\text {apple0 }} = 3\\ \hat{\beta_1} = \hat{\alpha}_{\text {orange1 }} -\hat{\alpha}_{\text {apple1 }} = -2 \]
\[ \\ \hat{\beta_0} = 1.5 - 3.6 = -2.1\\ \hat{\beta_1} = -2.4 - 0.8 = -3.2 \]
The two models parameterize exactly the same set of fitted probabilities, because the relationships in equation (4.14) make our coefficients equal to the differences of the softmax coefficients. Since the predicted probabilities are identical, the predicted class labels from the two models should agree 100% of the time on the 2,000 test observations.
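To make this concrete, here is a small sketch using the coefficients from part (d) (the grid of x values is made up and simply stands in for the 2,000 test observations): the two parameterizations produce identical probabilities, so the 0.5-threshold labels must agree.
x <- seq(-5, 5, length.out = 2000)            # stand-in for 2,000 test observations
p_mine   <- plogis(-2.1 - 3.2 * x)            # my model, from part (d)
p_friend <- exp(1.5 - 2.4 * x) /
  (exp(1.5 - 2.4 * x) + exp(3.6 + 0.8 * x))   # friend's softmax model, from part (d)
all.equal(p_mine, p_friend)                   # TRUE: identical probabilities
mean((p_mine > 0.5) == (p_friend > 0.5))      # 1: labels agree on every observation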
========================================================
This question should be answered using the Weekly data set, which is part of the ISLR2 package. This data is similar in nature to the Smarket data from this chapter’s lab, except that it contains 1,089 weekly returns for 21 years, from the beginning of 1990 to the end of 2010.
Produce some numerical and graphical summaries of the Weekly data. Do there appear to be any patterns?
Use the full data set to perform a logistic regression with Direction as the response and the first four lag variables plus Volume as predictors. Use the summary function to print the results. Do any of the predictors appear to be statistically significant? If so, which ones?
Compute the confusion matrix and overall fraction of correct predictions. Explain what the confusion matrix is telling you about the types of mistakes made by logistic regression.
Now fit the logistic regression model using a training data period from 1990 to 2008, with Lag3 as the only predictor. Compute the confusion matrix and the overall fraction of correct predictions for the held out data (that is, the data from 2009 and 2010).
Repeat (d) using LDA.
Repeat (d) using QDA.
Repeat (d) using KNN with \(K=1\).
Repeat (d) using naive Bayes.
Which of these methods appears to provide the best results on this data?
Experiment with different combinations of predictors, including possible transformations and interactions, for each of the methods. Report the variables, method, and associated confusion matrix that appears to provide the best results on the held out data. Note that you should also experiment with values for \(K\) in the KNN classifier.
========================================================
library(ISLR2) # for the Weekly (and later Default) data sets
summary(Weekly)
## Year Lag1 Lag2 Lag3
## Min. :1990 Min. :-18.1950 Min. :-18.1950 Min. :-18.1950
## 1st Qu.:1995 1st Qu.: -1.1540 1st Qu.: -1.1540 1st Qu.: -1.1580
## Median :2000 Median : 0.2410 Median : 0.2410 Median : 0.2410
## Mean :2000 Mean : 0.1506 Mean : 0.1511 Mean : 0.1472
## 3rd Qu.:2005 3rd Qu.: 1.4050 3rd Qu.: 1.4090 3rd Qu.: 1.4090
## Max. :2010 Max. : 12.0260 Max. : 12.0260 Max. : 12.0260
## Lag4 Lag5 Volume Today
## Min. :-18.1950 Min. :-18.1950 Min. :0.08747 Min. :-18.1950
## 1st Qu.: -1.1580 1st Qu.: -1.1660 1st Qu.:0.33202 1st Qu.: -1.1540
## Median : 0.2380 Median : 0.2340 Median :1.00268 Median : 0.2410
## Mean : 0.1458 Mean : 0.1399 Mean :1.57462 Mean : 0.1499
## 3rd Qu.: 1.4090 3rd Qu.: 1.4050 3rd Qu.:2.05373 3rd Qu.: 1.4050
## Max. : 12.0260 Max. : 12.0260 Max. :9.32821 Max. : 12.0260
## Direction
## Down:484
## Up :605
##
##
##
##
# Using the psych package for a good summary.
library(psych)
pairs.panels(Weekly[,-9])
plot(Weekly$Volume)
The scatterplot/correlation matrix shows a positive correlation between Year and Volume (which is due to the fact that volume increases over time), but, as we would expect, we don’t see much relationship among the lag variables. From the summary statistics, we can observe that the Lag variables have very similar distributions to each other and to Today.
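For a purely numerical view of the same pattern (a small addition, not in the original write-up), we can also print the correlation matrix of the quantitative columns:
round(cor(Weekly[, -9]), 2)  # drop the qualitative Direction column (column 9)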
glm.fit = glm(Direction ~ Lag1+Lag2+Lag3+Lag4+Volume, data=Weekly, family=binomial)
summary(glm.fit)
##
## Call:
## glm(formula = Direction ~ Lag1 + Lag2 + Lag3 + Lag4 + Volume,
## family = binomial, data = Weekly)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.5844 -1.2571 0.9923 1.0872 1.4354
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.26246 0.08553 3.069 0.00215 **
## Lag1 -0.04083 0.02638 -1.548 0.12166
## Lag2 0.05927 0.02677 2.214 0.02684 *
## Lag3 -0.01661 0.02661 -0.624 0.53240
## Lag4 -0.02668 0.02637 -1.012 0.31172
## Volume -0.02148 0.03681 -0.584 0.55945
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 1496.2 on 1088 degrees of freedom
## Residual deviance: 1486.7 on 1083 degrees of freedom
## AIC: 1498.7
##
## Number of Fisher Scoring iterations: 4
We have some statistical significance for the Lag2 variable, but I’m not yet convinced that it is practically significant. Additionally, our intercept is significant, which probably just reflects the overall imbalance of one Direction over the other.
glm.probs = predict(glm.fit, type="response")
glm.preds = rep("Down", 1089) # Like the lab: "1089 "Down"s
glm.preds[glm.probs>0.5] = "Up"
attach(Weekly)
table(glm.preds,Direction)
## Direction
## glm.preds Down Up
## Down 55 45
## Up 429 560
mean(glm.preds == Direction) #get the accuracy/%correct
## [1] 0.5647383
While the overall accuracy of the model was 56%, the errors were far from evenly distributed: only 7.4% (45/605) of “Up” weeks were misclassified, but 88.6% (429/484) of “Down” weeks were misclassified as “Up”. Because there are more up weeks than down weeks (about 55% of all weeks were Up), the logistic regression leans on the intercept (which was statistically significant) and classifies almost everything as Up. This suggests that the model is not predicting labels well from the features, since it performs close to simply guessing the majority class.
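The class-specific error rates quoted above can be pulled directly from the confusion matrix (the conf object here is just a convenience name):
conf <- table(glm.preds, Direction)         # same confusion matrix as above
conf["Down", "Up"] / sum(conf[, "Up"])      # share of Up weeks called Down (~7.4%)
conf["Up", "Down"] / sum(conf[, "Down"])    # share of Down weeks called Up (~88.6%)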
attach(Weekly)
## The following objects are masked from Weekly (pos = 3):
##
## Direction, Lag1, Lag2, Lag3, Lag4, Lag5, Today, Volume, Year
pre.2008 = (Year<2009) # from 1990 to 2008.
post.2008 = Weekly[!pre.2008 ,] # from 2009 to 2010.
post.2008.labels = Direction[!pre.2008]
glm.fit.2 = glm(Direction ~ Lag3, data=Weekly, family=binomial, subset=pre.2008)
glm.probs.2 = predict(glm.fit.2, post.2008, type="response") # Predictions for post '08
glm.preds.2 = rep("Down", 104)
glm.preds.2[glm.probs.2>0.5] = "Up"
table(glm.preds.2,post.2008.labels) # Confusion matrix.
## post.2008.labels
## glm.preds.2 Down Up
## Up 43 61
mean(glm.preds.2 == post.2008.labels) #get the accuracy/%correct
## [1] 0.5865385
The model makes correct predictions on 61/104 = 58.7% of the weeks, which is still bad: it predicts “Up” for every single observation, so its accuracy is just the base rate of Up weeks in the test period. This is not what is known, in the industry, as a good model.
library(MASS) # for lda() and qda()
lda.fit = lda(Direction ~ Lag3, data=Weekly, subset=pre.2008)
lda.preds = predict(lda.fit,post.2008) #predict
preds.labels = lda.preds$class
table(preds.labels,post.2008.labels) # Confusion matrix.
## post.2008.labels
## preds.labels Down Up
## Down 0 0
## Up 43 61
mean(preds.labels == post.2008.labels)
## [1] 0.5865385
Perhaps unsurprisingly, we aren’t seeing any predictive power in the single-feature LDA model either. It defaults to the most likely label because it does not have enough information to make a better decision, again leaving us at 58.7%. And considering that it is unable to guess anything other than “Up”, my hopes for QDA are not exactly “Up”.
qda.fit = qda(Direction ~ Lag3, data=Weekly, subset=pre.2008)
qda.pred = predict(qda.fit,post.2008)
qda.labels = qda.pred$class
table(qda.labels,post.2008.labels)
## post.2008.labels
## qda.labels Down Up
## Down 0 0
## Up 43 61
mean(qda.labels == post.2008.labels)
## [1] 0.5865385
In a surprise to no one making it this far into these responses, the
QDA also performed objectively poorly, again only using the most common
label as the only predicted outcome (58.7% again). We have fairly
robustly shown that Lag3 alone is not a great predictor of
market direction.
library(class) # for knn()
train.X <- as.matrix(Lag3[pre.2008])
test.X <- as.matrix(Lag3[!pre.2008])
train.labels <- Direction[pre.2008]
knn.preds = knn(train.X, test.X, train.labels, k=1)
table(knn.preds, post.2008.labels)
## post.2008.labels
## knn.preds Down Up
## Down 21 30
## Up 22 31
mean(knn.preds == post.2008.labels)
## [1] 0.5
I hope you weren’t holding your breath for KNN with a single nearest neighbor on Lag3, because it isn’t pretty either. In fact, it did worse than just guessing “Up” every time, at only 50% correct.
library(e1071) # for naiveBayes()
nb.fit <- naiveBayes(Direction ~ Lag3, data = Weekly, subset = pre.2008)
nb.labels = predict(nb.fit, post.2008)
table(nb.labels, post.2008.labels)
## post.2008.labels
## nb.labels Down Up
## Down 0 0
## Up 43 61
mean(nb.labels == post.2008.labels)
## [1] 0.5865385
In another crushing defeat for Lag3 as a solo predictor, our naive Bayes model ends up doing exactly what most of the previous models did: its posterior probability of “Up” exceeds 0.5 for every test observation, so it predicts “Up” every time. It’s still a bad model, still just 58.7%.
I’m a little surprised by this question, but if we are generous with the word “best”, all of the models that predicted “Up” every time tie for “best” at 58.7%. This is actually the logic behind index funds: models (and, even worse, humans) have historically not performed as well as just predicting “Up” every time, i.e. putting the money into an indexed portfolio. So let this be a lesson: a model built on our trusty Lag3 alone does no better than always betting the market goes up. None of these models is recommended; the “best” here is effectively a model that behaves like an index fund: all “Up”.
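For reference, the “always predict Up” baseline that these models keep collapsing to is just the no-information rate on the held-out weeks:
mean(post.2008.labels == "Up")  # 61/104, about 0.587: the accuracy of always predicting "Up"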
Below are my experiments with different combinations of predictors for each method.
Beginning out of order and purely for comedic effect, we find that no matter the neighborhood size, Lag3 is not a good single feature: its best model (\(K=10\)) does barely better than just guessing “Up”. LOL.
# For sh*ts and giggles, here are a bunch of neighbors of our exceptionally performant Lag3
for (K in 2:12){
knn.preds = knn(train.X, test.X, train.labels, k=K)
print(table(knn.preds, post.2008.labels))
print(mean(knn.preds == post.2008.labels))
}
## post.2008.labels
## knn.preds Down Up
## Down 22 27
## Up 21 34
## [1] 0.5384615
## post.2008.labels
## knn.preds Down Up
## Down 16 20
## Up 27 41
## [1] 0.5480769
## post.2008.labels
## knn.preds Down Up
## Down 18 20
## Up 25 41
## [1] 0.5673077
## post.2008.labels
## knn.preds Down Up
## Down 16 21
## Up 27 40
## [1] 0.5384615
## post.2008.labels
## knn.preds Down Up
## Down 16 19
## Up 27 42
## [1] 0.5576923
## post.2008.labels
## knn.preds Down Up
## Down 16 20
## Up 27 41
## [1] 0.5480769
## post.2008.labels
## knn.preds Down Up
## Down 14 18
## Up 29 43
## [1] 0.5480769
## post.2008.labels
## knn.preds Down Up
## Down 17 21
## Up 26 40
## [1] 0.5480769
## post.2008.labels
## knn.preds Down Up
## Down 18 17
## Up 25 44
## [1] 0.5961538
## post.2008.labels
## knn.preds Down Up
## Down 18 21
## Up 25 40
## [1] 0.5576923
## post.2008.labels
## knn.preds Down Up
## Down 19 21
## Up 24 40
## [1] 0.5673077
Now that that fun is over, I’m going to use a little bit of stock-market (i.e. unskilled horse-racing/gambling) understanding to try to predict something better than just “Up”.
I hypothesize that the lags closer to today will be more valuable, so I will look at more interactions and variability there. I will include an interaction with volume, because if there is a large shift in volume, perhaps it is felt only as it relates to previous values (doubtful). Finally, I will try some non-linear terms, since I don’t know how to fit an ARIMA model to make proper use of these lags. My guess is that the best thing would actually be to use price volatility (rather than price) to pick horses, but we have what we have. And out of spite, I’m going to exclude Lag3.
glm.model <- glm(Direction ~ log(Volume) + I(Lag1^2) + I(Lag2^2)+ I(Lag5^2) + Volume*(Lag1 + Lag2 + Lag4 + Lag5), data = Weekly, family = binomial, subset = pre.2008)
summary(glm.model)
##
## Call:
## glm(formula = Direction ~ log(Volume) + I(Lag1^2) + I(Lag2^2) +
## I(Lag5^2) + Volume * (Lag1 + Lag2 + Lag4 + Lag5), family = binomial,
## data = Weekly, subset = pre.2008)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.5998 -1.2464 0.9561 1.0871 1.8315
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.183229 0.197816 0.926 0.3543
## log(Volume) -0.087094 0.143496 -0.607 0.5439
## I(Lag1^2) 0.001891 0.006702 0.282 0.7778
## I(Lag2^2) 0.012039 0.006344 1.898 0.0577 .
## I(Lag5^2) -0.004264 0.005148 -0.828 0.4075
## Volume -0.013265 0.124716 -0.106 0.9153
## Lag1 -0.033449 0.042520 -0.787 0.4315
## Lag2 0.021600 0.042873 0.504 0.6144
## Lag4 -0.034463 0.042246 -0.816 0.4146
## Lag5 -0.089848 0.043521 -2.064 0.0390 *
## Volume:Lag1 -0.015961 0.017672 -0.903 0.3664
## Volume:Lag2 0.025545 0.018813 1.358 0.1745
## Volume:Lag4 0.008426 0.018246 0.462 0.6442
## Volume:Lag5 0.035373 0.020518 1.724 0.0847 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 1354.7 on 984 degrees of freedom
## Residual deviance: 1333.9 on 971 degrees of freedom
## AIC: 1361.9
##
## Number of Fisher Scoring iterations: 4
glm.model.probs = predict(glm.model, post.2008, type="response")
glm.model.preds = rep("Down", 104) # Like the lab: "1089 "Down"s
glm.model.preds[glm.model.probs>0.5] = "Up"
table(glm.model.preds,post.2008.labels)
## post.2008.labels
## glm.model.preds Down Up
## Down 23 23
## Up 20 38
mean(glm.model.preds == post.2008.labels) #get the accuracy/%correct
## [1] 0.5865385
glm.model <- glm(Direction ~ log(Volume) + Lag1 + I(Lag2^2)+ I(Lag5^2) + Lag1:Lag5+ Lag1:Lag4 + Lag2:Lag5 + Volume:(Lag1 + Lag2 + Lag4 + Lag5) +Lag5 + Lag4, data = Weekly, family = binomial, subset = pre.2008)
summary(glm.model)
##
## Call:
## glm(formula = Direction ~ log(Volume) + Lag1 + I(Lag2^2) + I(Lag5^2) +
## Lag1:Lag5 + Lag1:Lag4 + Lag2:Lag5 + Volume:(Lag1 + Lag2 +
## Lag4 + Lag5) + Lag5 + Lag4, family = binomial, data = Weekly,
## subset = pre.2008)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.5693 -1.2499 0.9467 1.0794 1.9102
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.154525 0.078849 1.960 0.0500 .
## log(Volume) -0.101041 0.067688 -1.493 0.1355
## Lag1 -0.028892 0.041971 -0.688 0.4912
## I(Lag2^2) 0.015654 0.006624 2.363 0.0181 *
## I(Lag5^2) -0.003302 0.005861 -0.563 0.5731
## Lag5 -0.091704 0.043726 -2.097 0.0360 *
## Lag4 -0.029600 0.041618 -0.711 0.4769
## Lag1:Lag5 -0.013469 0.011643 -1.157 0.2473
## Lag1:Lag4 -0.011153 0.009749 -1.144 0.2526
## Lag5:Lag2 0.003959 0.010520 0.376 0.7067
## Lag1:Volume -0.022337 0.016618 -1.344 0.1789
## Lag2:Volume 0.038097 0.014833 2.568 0.0102 *
## Lag4:Volume 0.009297 0.016066 0.579 0.5628
## Lag5:Volume 0.040323 0.019931 2.023 0.0431 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 1354.7 on 984 degrees of freedom
## Residual deviance: 1330.8 on 971 degrees of freedom
## AIC: 1358.8
##
## Number of Fisher Scoring iterations: 4
glm.model.probs = predict(glm.model, post.2008, type="response")
glm.model.preds = rep("Down", 104) # Like the lab: "1089 "Down"s
glm.model.preds[glm.model.probs>0.5] = "Up"
table(glm.model.preds,post.2008.labels)
## post.2008.labels
## glm.model.preds Down Up
## Down 23 21
## Up 20 40
mean(glm.model.preds == post.2008.labels) #get the accuracy/%correct
## [1] 0.6057692
glm.model <- glm(Direction ~ log(Volume) + Lag1 + I(Lag2^2)+I(Lag1^2):I(Lag2^2)+ I(Lag1^2):I(Lag5^2) + Lag1:Lag5+ Lag1:Lag4 + Volume:(Lag1 + Lag2 + Lag4 + Lag5) +Lag5 + Lag4, data = Weekly, family = binomial, subset = pre.2008)
summary(glm.model)
##
## Call:
## glm(formula = Direction ~ log(Volume) + Lag1 + I(Lag2^2) + I(Lag1^2):I(Lag2^2) +
## I(Lag1^2):I(Lag5^2) + Lag1:Lag5 + Lag1:Lag4 + Volume:(Lag1 +
## Lag2 + Lag4 + Lag5) + Lag5 + Lag4, family = binomial, data = Weekly,
## subset = pre.2008)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.6463 -1.2536 0.9494 1.0756 1.7839
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.1750289 0.0772705 2.265 0.02350 *
## log(Volume) -0.0977139 0.0675861 -1.446 0.14824
## Lag1 -0.0340013 0.0439990 -0.773 0.43966
## I(Lag2^2) 0.0059594 0.0089021 0.669 0.50322
## Lag5 -0.0858081 0.0446044 -1.924 0.05438 .
## Lag4 -0.0296252 0.0423108 -0.700 0.48381
## I(Lag2^2):I(Lag1^2) 0.0005257 0.0004499 1.168 0.24264
## I(Lag1^2):I(Lag5^2) -0.0004320 0.0003600 -1.200 0.23019
## Lag1:Lag5 -0.0170639 0.0122534 -1.393 0.16374
## Lag1:Lag4 -0.0143309 0.0119375 -1.200 0.22995
## Lag1:Volume -0.0189928 0.0200294 -0.948 0.34301
## Volume:Lag2 0.0412224 0.0148220 2.781 0.00542 **
## Lag4:Volume 0.0077047 0.0172890 0.446 0.65585
## Lag5:Volume 0.0345907 0.0219068 1.579 0.11434
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 1354.7 on 984 degrees of freedom
## Residual deviance: 1328.9 on 971 degrees of freedom
## AIC: 1356.9
##
## Number of Fisher Scoring iterations: 6
glm.model.probs = predict(glm.model, post.2008, type="response")
glm.model.preds = rep("Down", 104) # Like the lab: "1089 "Down"s
glm.model.preds[glm.model.probs>0.5] = "Up"
table(glm.model.preds,post.2008.labels)
## post.2008.labels
## glm.model.preds Down Up
## Down 23 23
## Up 20 38
mean(glm.model.preds == post.2008.labels) #get the accuracy/%correct
## [1] 0.5865385
glm.model <- glm(Direction ~ Lag2 + I(Lag1^2) + Lag1:Lag5 + Lag4:Lag1, data = Weekly, family = binomial, subset = pre.2008)
summary(glm.model)
##
## Call:
## glm(formula = Direction ~ Lag2 + I(Lag1^2) + Lag1:Lag5 + Lag4:Lag1,
## family = binomial, data = Weekly, subset = pre.2008)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.500 -1.256 1.014 1.093 1.515
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.177624 0.069200 2.567 0.0103 *
## Lag2 0.063175 0.029971 2.108 0.0350 *
## I(Lag1^2) 0.005331 0.005159 1.033 0.3015
## Lag1:Lag5 -0.009903 0.008644 -1.146 0.2520
## Lag1:Lag4 0.004128 0.007834 0.527 0.5982
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 1354.7 on 984 degrees of freedom
## Residual deviance: 1348.2 on 980 degrees of freedom
## AIC: 1358.2
##
## Number of Fisher Scoring iterations: 4
glm.model.probs = predict(glm.model, post.2008, type="response")
glm.model.preds = rep("Down", 104) # Like the lab: "1089 "Down"s
glm.model.preds[glm.model.probs>0.5] = "Up"
table(glm.model.preds,post.2008.labels)
## post.2008.labels
## glm.model.preds Down Up
## Down 7 4
## Up 36 57
mean(glm.model.preds == post.2008.labels) #get the accuracy/%correct
## [1] 0.6153846
glm.model <- glm(Direction ~ Lag5 + I(Lag2^2)+ Volume:(Lag1 + Lag2+ Lag5), data = Weekly, family = binomial, subset = pre.2008)
summary(glm.model)
##
## Call:
## glm(formula = Direction ~ Lag5 + I(Lag2^2) + Volume:(Lag1 + Lag2 +
## Lag5), family = binomial, data = Weekly, subset = pre.2008)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.7523 -1.2434 0.9698 1.0962 1.7100
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.178874 0.069882 2.560 0.0105 *
## Lag5 -0.092042 0.042237 -2.179 0.0293 *
## I(Lag2^2) 0.011061 0.005948 1.860 0.0629 .
## Volume:Lag1 -0.025728 0.012112 -2.124 0.0337 *
## Volume:Lag2 0.030647 0.013100 2.339 0.0193 *
## Lag5:Volume 0.041634 0.018365 2.267 0.0234 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 1354.7 on 984 degrees of freedom
## Residual deviance: 1338.6 on 979 degrees of freedom
## AIC: 1350.6
##
## Number of Fisher Scoring iterations: 4
glm.model.probs = predict(glm.model, post.2008, type="response")
glm.model.preds = rep("Down", 104) # Like the lab: "1089 "Down"s
glm.model.preds[glm.model.probs>0.5] = "Up"
table(glm.model.preds,post.2008.labels)
## post.2008.labels
## glm.model.preds Down Up
## Down 19 15
## Up 24 46
mean(glm.model.preds == post.2008.labels) #get the accuracy/%correct
## [1] 0.625
glm.model <- glm(Direction ~ Lag5 + I(Lag2^2)+ Volume:(Lag1 + Lag2+ Lag5 + Lag3:Lag4) + Lag5:Lag2, data = Weekly, family = binomial, subset = pre.2008)
summary(glm.model)
##
## Call:
## glm(formula = Direction ~ Lag5 + I(Lag2^2) + Volume:(Lag1 + Lag2 +
## Lag5 + Lag3:Lag4) + Lag5:Lag2, family = binomial, data = Weekly,
## subset = pre.2008)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.5978 -1.2456 0.9667 1.0964 1.7177
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.1697306 0.0705359 2.406 0.01612 *
## Lag5 -0.0930558 0.0422911 -2.200 0.02778 *
## I(Lag2^2) 0.0141065 0.0066128 2.133 0.03291 *
## Volume:Lag1 -0.0245230 0.0116426 -2.106 0.03518 *
## Volume:Lag2 0.0368888 0.0142868 2.582 0.00982 **
## Lag5:Volume 0.0397368 0.0184429 2.155 0.03119 *
## Lag5:Lag2 0.0110143 0.0090897 1.212 0.22561
## Volume:Lag3:Lag4 0.0002885 0.0020107 0.143 0.88590
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 1354.7 on 984 degrees of freedom
## Residual deviance: 1337.1 on 977 degrees of freedom
## AIC: 1353.1
##
## Number of Fisher Scoring iterations: 4
glm.model.probs = predict(glm.model, post.2008, type="response")
glm.model.preds = rep("Down", 104) # Like the lab: "1089 "Down"s
glm.model.preds[glm.model.probs>0.5] = "Up"
table(glm.model.preds,post.2008.labels)
## post.2008.labels
## glm.model.preds Down Up
## Down 20 15
## Up 23 46
mean(glm.model.preds == post.2008.labels) #get the accuracy/%correct
## [1] 0.6346154
glm.model <- glm(Direction ~ Lag5 + I(Lag2^2)+ Volume:(Lag1 + Lag2+ Lag5 ) + Lag5:Lag2, data = Weekly, family = binomial, subset = pre.2008)
summary(glm.model)
##
## Call:
## glm(formula = Direction ~ Lag5 + I(Lag2^2) + Volume:(Lag1 + Lag2 +
## Lag5) + Lag5:Lag2, family = binomial, data = Weekly, subset = pre.2008)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.6020 -1.2455 0.9654 1.0969 1.6508
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.168791 0.070250 2.403 0.0163 *
## Lag5 -0.093728 0.042019 -2.231 0.0257 *
## I(Lag2^2) 0.014298 0.006490 2.203 0.0276 *
## Volume:Lag1 -0.024444 0.011561 -2.114 0.0345 *
## Volume:Lag2 0.037218 0.014099 2.640 0.0083 **
## Lag5:Volume 0.040412 0.017802 2.270 0.0232 *
## Lag5:Lag2 0.011067 0.009084 1.218 0.2231
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 1354.7 on 984 degrees of freedom
## Residual deviance: 1337.1 on 978 degrees of freedom
## AIC: 1351.1
##
## Number of Fisher Scoring iterations: 4
glm.model.probs = predict(glm.model, post.2008, type="response")
glm.model.preds = rep("Down", 104) # Like the lab: "1089 "Down"s
glm.model.preds[glm.model.probs>0.5] = "Up"
table(glm.model.preds,post.2008.labels)
## post.2008.labels
## glm.model.preds Down Up
## Down 20 15
## Up 23 46
mean(glm.model.preds == post.2008.labels) #get the accuracy/%correct
## [1] 0.6346154
Of the combinations I tried, the best held-out performance came from this last logistic regression, Direction ~ Lag5 + I(Lag2^2) + Volume:(Lag1 + Lag2 + Lag5) + Lag5:Lag2, which reaches 63.5% accuracy on 2009-2010 (confusion matrix above) and, unlike the single-predictor models, actually predicts “Down” for a meaningful number of weeks.
========================================================
This problem involves writing functions.
Write a function, LogPower(), that prints out the result of raising \(\ln(2)\) to the 3rd power. In other words, your function should compute \(\ln(2)^3\) and print out the results. Hint: Recall that in R, x^a raises x to the power a, and log(x) computes the natural logarithm of \(x\). Use the print() function to output the result.
Create a new function, LogPower2(), that allows you to pass any two numbers, \(x\) and \(a\), and prints out the value of log(x)^a. You can do this by beginning your function with the line
LogPower2 <- function (x, a)
You should be able to call your function by entering, for instance,
LogPower2(3,8)
on the command line. This should output the value of \(\ln(3)^8\), namely, 2.122.
Using the LogPower2() function that you just wrote, compute \(\ln(10)^3\), \(\ln(8)^{17}\), and \(\ln(131)^3\).
Now create a new function, LogPower3(), that actually returns the result log(x)^a as an R object, rather than simply printing it to the screen. That is, if you store the value log(x)^a in an object called result within your function, then you can simply return() this result, using the following line:
return (result)
The line above should be the last line in your function, before the } symbol.
Now using the LogPower3() function, create a plot of \(f(x)=\ln(x)^2\). The \(x\)-axis should display a range of integers from 1 to 10, and the \(y\)-axis should display \(\ln(x)^2\). Label the axes appropriately, and use an appropriate title for the figure. Consider displaying either the \(x\)-axis, the \(y\)-axis, or both on the log scale. You can do this by using log = "x", log = "y", or log = "xy" as arguments to the plot() function.
Create a function, PlotLogPower(), that allows you to create a plot of \(x\) against \(\ln(x)^a\) for a fixed \(a\) and for a range of values of \(x\). For instance, if you call PlotLogPower(1:10,3) then a plot should be created with an \(x\)-axis taking on values \(1,2, \ldots, 10\), and a \(y\)-axis taking on values \(\ln(1)^3, \ln(2)^3, \ldots, \ln(10)^3\).
========================================================
I’m going to generalize the function and just set the defaults to 2
and 3 for x and a respectively.
LogPower <- function(x=2,a=3){
print(log(x)^a)
}
LogPower()
## [1] 0.3330247
Oops, I should have read the whole problem before generalizing the function in part (a). So this function is identical to the one above.
LogPower2 <- function(x=2,a=3){
print(log(x)^a)
}
LogPower2(3,8)
## [1] 2.12205
LogPower2(10,3)
## [1] 12.20807
LogPower2(8,17)
## [1] 254152.4
LogPower2(131,3)
## [1] 115.8715
LogPower3 <- function(x=2,a=3){
return (log(x)^a)
}
x <- 1:10
y <- LogPower3(x,2)
plot(x,y,log="xy", main="Plot of ln(x)^2")
## Warning in xy.coords(x, y, xlabel, ylabel, log): 1 y value <= 0 omitted from
## logarithmic plot
plot(x,y,log="x", main="Plot of ln(x)^2")
plot(x,y,log="y", main="Plot of ln(x)^2")
## Warning in xy.coords(x, y, xlabel, ylabel, log): 1 y value <= 0 omitted from
## logarithmic plot
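Part (e) also asks for appropriately labelled axes; a version of the same plot with explicit labels and a title might look like this (same data as above, only the annotation changes):
plot(x, y, log = "xy",
     xlab = "x", ylab = "ln(x)^2",
     main = "f(x) = ln(x)^2 on log-log axes")
# (the x = 1 point again has y = 0 and is dropped from the log-scale plot)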
PlotLogPower <- function(x,a){
plot(x,LogPower3(x,a))
}
PlotLogPower(1:10,3)
========================================================
In Chapter 4, we used logistic regression to predict the probability of default using income and balance on the Default data set. We will now estimate the test error of this logistic regression model using the validation set approach. Do not forget to set a random seed before beginning your analysis.
Fit a logistic regression model that uses income to predict default.
Using the validation set approach, estimate the test error of this model. In order to do this, you must perform the following steps:
Split the sample set into a training set and a validation set.
Fit a multiple logistic regression model using only the training observations.
Obtain a prediction of default status for each individual in the validation set by computing the posterior probability of default for that individual, and classifying the individual to the default category if the posterior probability is greater than \(0.5\).
Compute the validation set error, which is the fraction of the observations in the validation set that are misclassified.
Repeat the process in (b) three times, using three different splits of the observations into a training set and a validation set. Comment on the results obtained.
Now consider a logistic regression model that predicts the probability of default using income and a dummy variable for student. Estimate the test error for this model using the validation set approach. Comment on whether or not including a dummy variable for student leads to a reduction in the test error rate.
========================================================
log.r.m.fit <- glm(Default$default ~ Default$income, family = binomial)
summary(log.r.m.fit)
##
## Call:
## glm(formula = Default$default ~ Default$income, family = binomial)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -0.2968 -0.2723 -0.2576 -0.2478 2.7111
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.094e+00 1.463e-01 -21.156 <2e-16 ***
## Default$income -8.353e-06 4.207e-06 -1.985 0.0471 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 2920.6 on 9999 degrees of freedom
## Residual deviance: 2916.7 on 9998 degrees of freedom
## AIC: 2920.7
##
## Number of Fisher Scoring iterations: 6
df = as.data.frame(Default[sample(1:nrow(Default)),]) #I decided to shuffle first, just in case there's some kind of ordering.
train <- df[1:(0.8*dim(df)[1]),]
val <- df[(0.8*dim(df)[1]+1):(dim(df)[1]),]
glm.fit <- glm(default ~ income, data = train, family = binomial) #step 2
summary(glm.fit)
##
## Call:
## glm(formula = default ~ income, family = binomial, data = train)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -0.2994 -0.2746 -0.2599 -0.2501 2.6895
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.076e+00 1.630e-01 -18.88 <2e-16 ***
## income -8.346e-06 4.688e-06 -1.78 0.075 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 2367.4 on 7999 degrees of freedom
## Residual deviance: 2364.2 on 7998 degrees of freedom
## AIC: 2368.2
##
## Number of Fisher Scoring iterations: 6
glm.preds <- predict(glm.fit, val, type = 'response')
pred.labels = rep("No", dim(val)[1]) # Like the lab
pred.labels[glm.preds>0.5] = "Yes"
table(pred.labels,val$default)
##
## pred.labels No Yes
## No 1938 62
## Compute the misclassification:
1 - mean(pred.labels == val$default)
## [1] 0.031
Despite classifying everything as “No”, the model performed at nearly 97% accuracy, because our “Yes” values are so rare. The validation set error is 3.1%.
for (i in 1:3){
df = as.data.frame(Default[sample(1:nrow(Default)),]) #I decided to shuffle first, just in case there's some kind of ordering.
train <- df[1:(0.8*dim(df)[1]),]
val <- df[(0.8*dim(df)[1]+1):(dim(df)[1]),]
glm.fit <- glm(default ~ income, data = train, family = binomial) #step 2
print(summary(glm.fit))
glm.preds <- predict(glm.fit, val, type = 'response')
pred.labels = rep("No", dim(val)[1]) # Like the lab
pred.labels[glm.preds>0.5] = "Yes"
print(table(pred.labels,val$default))
## Compute the misclassification:
print(1 - mean(pred.labels == val$default))
}
##
## Call:
## glm(formula = default ~ income, family = binomial, data = train)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -0.2821 -0.2696 -0.2618 -0.2565 2.6547
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.201e+00 1.643e-01 -19.48 <2e-16 ***
## income -4.413e-06 4.646e-06 -0.95 0.342
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 2374.1 on 7999 degrees of freedom
## Residual deviance: 2373.2 on 7998 degrees of freedom
## AIC: 2377.2
##
## Number of Fisher Scoring iterations: 6
##
##
## pred.labels No Yes
## No 1939 61
## [1] 0.0305
##
## Call:
## glm(formula = default ~ income, family = binomial, data = train)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -0.3179 -0.2744 -0.2492 -0.2330 2.8034
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -2.949e+00 1.625e-01 -18.149 < 2e-16 ***
## income -1.446e-05 4.828e-06 -2.995 0.00274 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 2259.2 on 7999 degrees of freedom
## Residual deviance: 2250.1 on 7998 degrees of freedom
## AIC: 2254.1
##
## Number of Fisher Scoring iterations: 6
##
##
## pred.labels No Yes
## No 1922 78
## [1] 0.039
##
## Call:
## glm(formula = default ~ income, family = binomial, data = train)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -0.2883 -0.2697 -0.2581 -0.2503 2.6835
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.154e+00 1.643e-01 -19.196 <2e-16 ***
## income -6.523e-06 4.680e-06 -1.394 0.163
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 2333.8 on 7999 degrees of freedom
## Residual deviance: 2331.9 on 7998 degrees of freedom
## AIC: 2335.9
##
## Number of Fisher Scoring iterations: 6
##
##
## pred.labels No Yes
## No 1933 67
## [1] 0.0335
The model continually predicts “No” for all the values, because the defaults we are interested in are rare and income alone does not carry enough information. The validation set errors for the three splits (3.05%, 3.9%, and 3.35%) are printed below the corresponding confusion matrices.
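Since the exercise asks for a random seed, one way to tidy up the repeated splitting is a small helper that seeds, splits, fits, and returns the validation error. This is only a sketch; the function name val.error, the 80/20 split, and the default seed are my own choices rather than anything from the lab:
val.error <- function(formula, data = Default, train.frac = 0.8, seed = 1) {
  set.seed(seed)
  train.idx <- sample(seq_len(nrow(data)), size = floor(train.frac * nrow(data)))
  fit <- glm(formula, data = data[train.idx, ], family = binomial)
  probs <- predict(fit, newdata = data[-train.idx, ], type = "response")
  preds <- ifelse(probs > 0.5, "Yes", "No")
  mean(preds != data[-train.idx, "default"])  # validation-set misclassification rate
}
# For example: val.error(default ~ income, seed = 1)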
i = 0.8
for (i in c(0.8,0.8,0.8,0.8,0.8) ){ #c(0.5,0.6,0.7,0.8,0.9) I messed around with sample size
df = as.data.frame(Default[sample(1:nrow(Default)),]) #I decided to shuffle first, just in case there's some kind of ordering.
train <- df[1:(i*dim(df)[1]),]
val <- df[(i*dim(df)[1]+1):(dim(df)[1]),]
glm.fit <- glm(default ~ income + student, data = train, family = binomial) #step 2
print(summary(glm.fit))
glm.preds <- predict(glm.fit, val, type = 'response')
pred.labels = rep("No", dim(val)[1]) # Like the lab
pred.labels[glm.preds>0.5] = "Yes"
print(table(pred.labels,val$default))
## Compute the misclassification:
print(1 - mean(pred.labels == val$default))
}
##
## Call:
## glm(formula = default ~ income + student, family = binomial,
## data = train)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -0.2882 -0.2845 -0.2465 -0.2450 2.6632
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.536e+00 3.081e-01 -11.476 <2e-16 ***
## income 1.280e-06 7.435e-06 0.172 0.863
## studentYes 3.329e-01 2.101e-01 1.584 0.113
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 2306.8 on 7999 degrees of freedom
## Residual deviance: 2301.5 on 7997 degrees of freedom
## AIC: 2307.5
##
## Number of Fisher Scoring iterations: 6
##
##
## pred.labels No Yes
## No 1929 71
## [1] 0.0355
##
## Call:
## glm(formula = default ~ income + student, family = binomial,
## data = train)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -0.2869 -0.2856 -0.2435 -0.2429 2.6637
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.525e+00 3.109e-01 -11.339 <2e-16 ***
## income 4.834e-07 7.499e-06 0.064 0.949
## studentYes 3.400e-01 2.118e-01 1.606 0.108
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 2286.5 on 7999 degrees of freedom
## Residual deviance: 2280.4 on 7997 degrees of freedom
## AIC: 2286.4
##
## Number of Fisher Scoring iterations: 6
##
##
## pred.labels No Yes
## No 1926 74
## [1] 0.037
##
## Call:
## glm(formula = default ~ income + student, family = binomial,
## data = train)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -0.3234 -0.3012 -0.2374 -0.2294 2.7551
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.878e+00 3.232e-01 -12.000 < 2e-16 ***
## income 7.207e-06 7.685e-06 0.938 0.348397
## studentYes 7.159e-01 2.151e-01 3.329 0.000873 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 2300.0 on 7999 degrees of freedom
## Residual deviance: 2280.9 on 7997 degrees of freedom
## AIC: 2286.9
##
## Number of Fisher Scoring iterations: 6
##
##
## pred.labels No Yes
## No 1928 72
## [1] 0.036
##
## Call:
## glm(formula = default ~ income + student, family = binomial,
## data = train)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -0.3125 -0.2970 -0.2447 -0.2391 2.7088
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.716e+00 3.123e-01 -11.900 < 2e-16 ***
## income 4.999e-06 7.477e-06 0.669 0.50374
## studentYes 5.564e-01 2.105e-01 2.643 0.00821 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 2340.6 on 7999 degrees of freedom
## Residual deviance: 2328.5 on 7997 degrees of freedom
## AIC: 2334.5
##
## Number of Fisher Scoring iterations: 6
##
##
## pred.labels No Yes
## No 1934 66
## [1] 0.033
##
## Call:
## glm(formula = default ~ income + student, family = binomial,
## data = train)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -0.3073 -0.2919 -0.2471 -0.2413 2.7029
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.701e+00 3.106e-01 -11.919 <2e-16 ***
## income 5.095e-06 7.426e-06 0.686 0.4926
## studentYes 5.038e-01 2.102e-01 2.397 0.0165 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 2340.6 on 7999 degrees of freedom
## Residual deviance: 2331.2 on 7997 degrees of freedom
## AIC: 2337.2
##
## Number of Fisher Scoring iterations: 6
##
##
## pred.labels No Yes
## No 1934 66
## [1] 0.033
The model still predicts default = “No” for every observation, so including the dummy variable for student does not lead to a reduction in the test error rate, despite the student dummy being statistically significant in several of the fits.
========================================================
We continue to consider the use of a logistic regression model to predict the probability of default using income and student on the Default data set. In particular, we will now compute estimates for the standard errors of the income and student logistic regression coefficients in two different ways: (1) using the bootstrap, and (2) using the standard formula for computing the standard errors in the \(\operatorname{glm}()\) function. Do not forget to set a random seed before beginning your analysis.
Using the summary() and glm() functions, determine the estimated standard errors for the coefficients associated with income in a multiple logistic regression model that uses both predictors.
Write a function, boot.fn(), that takes as input the Default data set as well as an index of the observations, and that outputs the coefficient estimates for income and student in the multiple logistic regression model.
Use the boot() function together with your boot.fn() function to estimate the standard errors of the logistic regression coefficients for income and student.
Comment on the estimated standard errors obtained using the glm() function and using your bootstrap function.
========================================================
set.seed(42)
i <- 0.8 # train fraction, reused from the loop above
df = as.data.frame(Default[sample(1:nrow(Default)),]) #I decided to shuffle first, just in case there's some kind of ordering.
train <- df[1:(i*dim(df)[1]),]
val <- df[(i*dim(df)[1]+1):(dim(df)[1]),]
glm.fit <- glm(default ~ income + student, data = train, family = binomial) #step 2
summary(glm.fit)
##
## Call:
## glm(formula = default ~ income + student, family = binomial,
## data = train)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -0.3139 -0.2982 -0.2475 -0.2417 2.7018
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.699e+00 3.090e-01 -11.969 < 2e-16 ***
## income 5.110e-06 7.396e-06 0.691 0.48967
## studentYes 5.440e-01 2.085e-01 2.609 0.00908 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 2374.1 on 7999 degrees of freedom
## Residual deviance: 2362.5 on 7997 degrees of freedom
## AIC: 2368.5
##
## Number of Fisher Scoring iterations: 6
std.errors <- as.data.frame(summary(glm.fit)$coefficients)$`Std. Error`
std.errors
## [1] 3.090097e-01 7.396153e-06 2.084938e-01
The standard errors obtained using the glm() function are printed both in the summary table and again, extracted on their own, at the bottom of the output, for the intercept, income, and student coefficients, respectively.
boot.fn <- function(df, index){
glm.fit <- glm(default ~ income + student, data = df, subset = index, family = binomial)
return(coef(glm.fit))
}
library(boot)
##
## Attaching package: 'boot'
## The following object is masked from 'package:psych':
##
## logit
## The following object is masked from 'package:car':
##
## logit
boot(Default, boot.fn, R = 500)
##
## ORDINARY NONPARAMETRIC BOOTSTRAP
##
##
## Call:
## boot(data = Default, statistic = boot.fn, R = 500)
##
##
## Bootstrap Statistics :
## original bias std. error
## t1* -3.797124e+00 1.818556e-02 2.801736e-01
## t2* 7.260586e-06 -5.682173e-07 6.693186e-06
## t3* 5.670596e-01 -7.398965e-03 1.902999e-01
The standard errors from the bootstrap are slightly smaller than the errors from the glm() function. The main reason is sample size rather than the method itself: the glm() fit above was run on only the 8,000-observation training split, while the bootstrap resampled the full 10,000-observation Default data set, and standard errors shrink roughly in proportion to \(1/\sqrt{n}\).
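One way to make the comparison more apples-to-apples (a follow-up suggestion, not part of the original answer) would be to refit glm() on the full Default data set, the same 10,000 observations that the bootstrap resamples; the formula-based standard errors should then land much closer to the bootstrap ones.
glm.full <- glm(default ~ income + student, data = Default, family = binomial)
summary(glm.full)$coefficients[, "Std. Error"]  # compare with the bootstrap output above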