Review of k-fold cross-validation.
(a) Explain how k-fold cross-validation is implemented.
K-fold cross-validation (CV) involves randomly splitting the dataset into \(K\) non-overlapping subsets (folds) of roughly equal size; common choices are \(K = 5\) or \(K = 10\). We train the model on all folds but one and evaluate its performance (computing the MSE) on the held-out fold. This process is repeated \(K\) times, each time holding out a different fold. Finally, the \(K\) calculated MSEs are averaged to obtain an estimated validation (test) error rate for new observations.
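To make this concrete, here is a minimal sketch of the procedure for a linear model, where the data frame df and response y are hypothetical placeholders rather than objects from this analysis.
# Minimal k-fold CV sketch (df and y are hypothetical placeholders)
set.seed(1)
K <- 5
folds <- sample(rep(1:K, length.out = nrow(df)))   # random fold labels
cv.mse <- sapply(1:K, function(k) {
  fit <- lm(y ~ ., data = df[folds != k, ])        # train on the other K-1 folds
  pred <- predict(fit, newdata = df[folds == k, ]) # predict on the held-out fold
  mean((df$y[folds == k] - pred)^2)                # MSE on the held-out fold
})
mean(cv.mse)                                       # CV estimate of the test MSE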
(b) What are the advantages and disadvantages of k-fold
cross-validation relative to:
1. The validation set approach?
The validation set approach is simpler and easier to implement than the k-fold CV method. However, the validation MSE can be highly variable, since it depends on exactly which observations fall into the training and validation sets. Moreover, only a subset of the data is used to fit the model, so the validation error tends to overestimate the test error, particularly when the number of observations is small. Likewise, only the remaining subset of the data is used for validation, which leads to higher variance in the estimated performance compared to k-fold CV. In summary, k-fold CV will often produce a more stable and less biased estimate of the test MSE.
2. LOOCV?
Compared to Leave-One-Out Cross-Validation (LOOCV), k-fold CV is far less computationally intensive. LOOCV is simply the special case of k-fold CV in which \(K = n\). When \(n\) is large, k-fold CV is preferred because we can choose a small value of \(K\) (say 5 or 10), whereas LOOCV must fit the model \(n\) times. On the other hand, LOOCV tends to have lower bias than k-fold CV because each fit uses almost all of the data, so every trained model closely resembles the model fit on the full dataset. Because those \(n\) fitted models are highly correlated with one another, however, LOOCV also tends to have higher variance. In summary, in the bias-variance trade-off between these two methods, k-fold CV generally has lower variance but higher bias than LOOCV. The major advantage of k-fold CV is that we can experiment with different values of \(K\) to achieve the most appropriate trade-off for our needs and goals.
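In R, this trade-off can be explored with cv.glm() from the boot package, which performs LOOCV by default and k-fold CV when K is supplied. A minimal sketch, where fit, df, y, and x are placeholders:
library(boot)
fit <- glm(y ~ x, data = df)      # placeholder model and data frame
cv.glm(df, fit)$delta[1]          # LOOCV: n model fits (K defaults to n)
cv.glm(df, fit, K = 10)$delta[1]  # 10-fold CV: only 10 fits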
In Chapter 4, we used logistic regression to predict the
probability of default using income and
balance on the Default data set.
We will now estimate the test error of this logistic regression model
using the validation set approach. Do not forget to set a random seed
before beginning your analysis.
(a) Fit a logistic regression model that uses
income and balance to predict
default.
library(ISLR2)  # provides the Default data set
set.seed(123)
glm.fit <- glm(default ~ income + balance, data = Default, family = "binomial")
summary(glm.fit)
##
## Call:
## glm(formula = default ~ income + balance, family = "binomial",
## data = Default)
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -1.154e+01 4.348e-01 -26.545 < 2e-16 ***
## income 2.081e-05 4.985e-06 4.174 2.99e-05 ***
## balance 5.647e-03 2.274e-04 24.836 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 2920.6 on 9999 degrees of freedom
## Residual deviance: 1579.0 on 9997 degrees of freedom
## AIC: 1585
##
## Number of Fisher Scoring iterations: 8
(b) Using the validation set approach, estimate the test
error of this model. In order to do this, you must perform the following
steps:
1. Split the sample set into a training set and a validation
set.
2. Fit a multiple logistic regression model using only the training
observations.
3. Obtain a prediction of default status for each individual in the
validation set by computing the posterior probability of default for
that individual, and classifying the individual to the default category
if the posterior probability is greater than 0.5.
4. Compute the validation set error, which is the fraction of the
observations in the validation set that are misclassified.
set.seed(123)
#1. Split the sample into a training set and a validation set
D_train = sample(10000, 5000)
#2. Fit the logistic regression using only the training observations
glm.fit = glm(default ~ income + balance, data = Default, family = "binomial", subset = D_train)
summary(glm.fit)
##
## Call:
## glm(formula = default ~ income + balance, family = "binomial",
## data = Default, subset = D_train)
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -1.194e+01 6.451e-01 -18.504 < 2e-16 ***
## income 2.210e-05 7.381e-06 2.995 0.00275 **
## balance 5.874e-03 3.362e-04 17.474 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 1429.88 on 4999 degrees of freedom
## Residual deviance: 752.69 on 4997 degrees of freedom
## AIC: 758.69
##
## Number of Fisher Scoring iterations: 8
#3. Classify as "Yes" when the posterior probability of default exceeds 0.5
glm.probs = predict(glm.fit, newdata = Default[-D_train, ], type = "response")
glm.pred = rep("No", 5000)
glm.pred[glm.probs>0.5] = "Yes"
#4. Compute the validation set error (fraction misclassified)
mean(glm.pred != Default[-D_train, ]$default)
## [1] 0.0276
(c) Repeat the process in (b) three times, using three different splits of the observations into a training set and a validation set. Comment on the results obtained.
#1.
D_train = sample(10000, 5000)
#2.
glm.fit = glm(default ~ income + balance, data = Default, family = "binomial", subset = D_train)
summary(glm.fit)
##
## Call:
## glm(formula = default ~ income + balance, family = "binomial",
## data = Default, subset = D_train)
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -1.087e+01 5.575e-01 -19.503 < 2e-16 ***
## income 1.827e-05 6.826e-06 2.677 0.00744 **
## balance 5.332e-03 2.922e-04 18.252 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 1543.58 on 4999 degrees of freedom
## Residual deviance: 860.28 on 4997 degrees of freedom
## AIC: 866.28
##
## Number of Fisher Scoring iterations: 8
#3.
glm.probs = predict(glm.fit, newdata = Default[-D_train, ], type = "response")
glm.pred = rep("No", 5000)
glm.pred[glm.probs>0.5] = "Yes"
#4.
mean(glm.pred != Default[-D_train, ]$default)
## [1] 0.0246
#1.
D_train = sample(10000, 5000)
#2.
glm.fit = glm(default ~ income + balance, data = Default, family = "binomial", subset = D_train)
summary(glm.fit)
##
## Call:
## glm(formula = default ~ income + balance, family = "binomial",
## data = Default, subset = D_train)
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -1.120e+01 6.037e-01 -18.554 <2e-16 ***
## income 1.012e-05 7.106e-06 1.424 0.154
## balance 5.676e-03 3.248e-04 17.479 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 1436.67 on 4999 degrees of freedom
## Residual deviance: 771.89 on 4997 degrees of freedom
## AIC: 777.89
##
## Number of Fisher Scoring iterations: 8
#3.
glm.probs = predict(glm.fit, newdata = Default[-D_train, ], type = "response")
glm.pred = rep("No", 5000)
glm.pred[glm.probs>0.5] = "Yes"
#4.
mean(glm.pred != Default[-D_train, ]$default)
## [1] 0.0262
#1.
D_train = sample(10000, 5000)
#2.
glm.fit = glm(default ~ income + balance, data = Default, family = "binomial", subset = D_train)
summary(glm.fit)
##
## Call:
## glm(formula = default ~ income + balance, family = "binomial",
## data = Default, subset = D_train)
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -1.201e+01 6.646e-01 -18.068 < 2e-16 ***
## income 2.040e-05 7.407e-06 2.755 0.00588 **
## balance 5.910e-03 3.513e-04 16.824 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 1354.4 on 4999 degrees of freedom
## Residual deviance: 728.1 on 4997 degrees of freedom
## AIC: 734.1
##
## Number of Fisher Scoring iterations: 8
#3.
glm.probs = predict(glm.fit, newdata = Default[-D_train, ], type = "response")
glm.pred = rep("No", 5000)
glm.pred[glm.probs>0.5] = "Yes"
#4.
mean(glm.pred != Default[-D_train, ]$default)
## [1] 0.0282
The results show that the validation estimate of the test error rate varies with the random split into training and validation sets. Using set.seed() for reproducibility, the three splits above produced test error rates of 0.0246, 0.0262, and 0.0282, respectively. The variation between the different splits is nonetheless fairly small. Even after running the code several times without a seed, I found that all validation estimates fell between 0.0245 and 0.0295.
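One way to gauge this variability directly is to repeat the split many times and summarize the resulting error estimates. A sketch (output omitted; the helper val.err is ours, not part of the exercise):
# Repeat the validation-split estimate to gauge its sampling variability
# (val.err is a hypothetical helper, not part of the exercise)
val.err <- function() {
  train <- sample(10000, 5000)
  fit <- glm(default ~ income + balance, data = Default,
             family = "binomial", subset = train)
  probs <- predict(fit, newdata = Default[-train, ], type = "response")
  mean(ifelse(probs > 0.5, "Yes", "No") != Default$default[-train])
}
set.seed(123)
summary(replicate(100, val.err()))  # spread of 100 validation error estimates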
(d) Now consider a logistic regression model that
predicts the probability of default using income, balance, and a dummy
variable for student. Estimate the test error for this model using the
validation set approach. Comment on whether or not including a dummy
variable for student leads to a reduction in the test error
rate.
#1.
D_train = sample(10000, 5000)
#2.
glm.fit = glm(default ~ income + balance + student, data = Default, family = "binomial", subset = D_train)
summary(glm.fit)
##
## Call:
## glm(formula = default ~ income + balance + student, family = "binomial",
## data = Default, subset = D_train)
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -1.047e+01 6.795e-01 -15.401 <2e-16 ***
## income -2.973e-06 1.146e-05 -0.259 0.7953
## balance 5.607e-03 3.206e-04 17.491 <2e-16 ***
## studentYes -7.783e-01 3.222e-01 -2.416 0.0157 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 1443.44 on 4999 degrees of freedom
## Residual deviance: 791.18 on 4996 degrees of freedom
## AIC: 799.18
##
## Number of Fisher Scoring iterations: 8
#3.
glm.probs = predict(glm.fit, newdata = Default[-D_train, ], type = "response")
glm.pred = rep("No", 5000)
glm.pred[glm.probs>0.5] = "Yes"
#4.
mean(glm.pred != Default[-D_train, ]$default)
## [1] 0.0268
It appears that adding the dummy variable for student produces a validation estimate of the test error rate (0.0268) similar to that of the model without this predictor, so including student does not appear to reduce the test error rate.
We continue to consider the use of a logistic regression model to
predict the probability of default using
income and balance on the Default data set.
In particular, we will now compute estimates for the standard errors of
the income and balance logistic regression
coefficients in two different ways: (1) using the bootstrap, and (2)
using the standard formula for computing the standard errors in the
glm() function. Do not forget to set a random seed before
beginning your analysis.
(a) Using the summary() and
glm() functions, determine the estimated standard errors
for the coefficients associated with income and balance in a multiple
logistic regression model that uses both predictors.
set.seed(123)
glm.fit <- glm(default ~ income + balance, data = Default, family = "binomial")
summary(glm.fit)
##
## Call:
## glm(formula = default ~ income + balance, family = "binomial",
## data = Default)
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -1.154e+01 4.348e-01 -26.545 < 2e-16 ***
## income 2.081e-05 4.985e-06 4.174 2.99e-05 ***
## balance 5.647e-03 2.274e-04 24.836 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 2920.6 on 9999 degrees of freedom
## Residual deviance: 1579.0 on 9997 degrees of freedom
## AIC: 1585
##
## Number of Fisher Scoring iterations: 8
From the summary, the estimated standard errors are \(SE(\hat{\beta}_{0}) \approx 4.348 \times 10^{-1}\) for the intercept, \(SE(\hat{\beta}_{1}) \approx 4.985 \times 10^{-6}\) for income, and \(SE(\hat{\beta}_{2}) \approx 2.274 \times 10^{-4}\) for balance.
(b) Write a function, boot.fn(), that takes
as input the Default data set as well as an index of the
observations, and that outputs the coefficient estimates for
income and balance in the multiple logistic
regression model.
boot.fn <- function(data, index) {
return (coef(glm(default ~ income + balance, data = data, family = "binomial", subset = index)))
}
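As a quick sanity check (output omitted), applying boot.fn() to the full set of row indices should reproduce the coefficient estimates from part (a):
boot.fn(Default, 1:nrow(Default))  # should match coef(glm.fit) from (a)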
(c) Use the boot() function together with
your boot.fn() function to estimate the standard errors of
the logistic regression coefficients for income and
balance.
library(boot)
boot(Default, boot.fn, 1000)
##
## ORDINARY NONPARAMETRIC BOOTSTRAP
##
##
## Call:
## boot(data = Default, statistic = boot.fn, R = 1000)
##
##
## Bootstrap Statistics :
## original bias std. error
## t1* -1.154047e+01 -2.744827e-02 4.199078e-01
## t2* 2.080898e-05 1.581698e-07 4.730370e-06
## t3* 5.647103e-03 1.290768e-05 2.216651e-04
The bootstrap estimates of the standard errors (the "std. error" column above) are \(SE(\hat{\beta}_{0}) \approx 4.199 \times 10^{-1}\), \(SE(\hat{\beta}_{1}) \approx 4.730 \times 10^{-6}\), and \(SE(\hat{\beta}_{2}) \approx 2.217 \times 10^{-4}\).
(d) Comment on the estimated standard errors obtained
using the glm() function and using your bootstrap function.
The estimated standard errors from the two methods are very close for all three coefficients (for example, \(4.348 \times 10^{-1}\) versus \(4.199 \times 10^{-1}\) for the intercept), suggesting that the model-based formula used by glm() is reliable for these data.
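For a side-by-side view, the two sets of estimates can be bound together; a sketch (output omitted), with the bootstrap values transcribed from the output above:
# glm() formula-based SEs versus bootstrap SEs (transcribed from above)
rbind(glm = summary(glm.fit)$coefficients[, "Std. Error"],
      boot = c(4.199078e-01, 4.730370e-06, 2.216651e-04))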
We will now consider the Boston housing data set from the ISLR2 library.
(a) Based on this data set, provide an estimate for the
population mean of medv. Call this estimate \(\hat{\mu}\).
mu.hat = mean(Boston$medv)
mu.hat
## [1] 22.53281
Noting that medv is measured in $1000s, the estimate for the population mean of medv, \(\hat{\mu}\), is $22,532.81.
(b) Provide an estimate of the standard error of \(\hat{\mu}\). Interpret this
result.
We can compute the standard error of the sample mean by dividing the sample standard deviation by the square root of the number of observations: \(SE(\hat{\mu}) = s/\sqrt{n}\).
mu.hat.err = sd(Boston$medv)/sqrt(length(Boston$medv))
mu.hat.err
## [1] 0.4088611
The estimated standard error of \(\hat{\mu}\) is $408.86; that is, if we repeatedly drew samples of this size, the sample mean of medv would typically vary around the population mean by roughly this amount.
(c) Now estimate the standard error of \(\hat{\mu}\) using the bootstrap. How does this compare to your answer from (b)?
set.seed(123)
mu.boot.fn <- function(var, index){
return(mean(var[index]))
}
boot(Boston$medv, mu.boot.fn, 1000)
##
## ORDINARY NONPARAMETRIC BOOTSTRAP
##
##
## Call:
## boot(data = Boston$medv, statistic = mu.boot.fn, R = 1000)
##
##
## Bootstrap Statistics :
## original bias std. error
## t1* 22.53281 -0.01607372 0.4045557
Using the bootstrap, we get a standard error of \(\hat{\mu}\) of approximately $404.56. This is almost indistinguishable from the formula-based estimate above ($408.86) relative to the mean of $22,532.81.
(d) Based on your bootstrap estimate from (c), provide a
95% confidence interval for the mean of medv. Compare it to
the results obtained using
t.test(Boston$medv).
We can approximate a 95% confidence interval using the formula \([\hat{\mu} - 2\,SE(\hat{\mu}),\ \hat{\mu} + 2\,SE(\hat{\mu})]\):
conf.lower <- 22.53281 - (2*0.4045557)
conf.upper <- 22.53281 + (2*0.4045557)
cat("95% Confidence Interval using Bootstrap: [", conf.lower, ", ", conf.upper, "]\n")
## 95% Confidence Interval using Bootstrap: [ 21.7237 , 23.34192 ]
t.test(Boston$medv)
##
## One Sample t-test
##
## data: Boston$medv
## t = 55.111, df = 505, p-value < 2.2e-16
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## 21.72953 23.33608
## sample estimates:
## mean of x
## 22.53281
The bootstrap-based 95% confidence interval is almost identical to the one obtained from the t-test: [21.7237, 23.34192] versus [21.72953, 23.33608], respectively.
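The boot package can also construct the interval directly from the bootstrap replicates via boot.ci(). A sketch (output omitted), re-running the bootstrap from (c) and saving the result this time:
set.seed(123)
boot.out <- boot(Boston$medv, mu.boot.fn, R = 1000)  # same bootstrap as (c), object saved
boot.ci(boot.out, type = "norm")                     # normal-approximation 95% CI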
(e) Based on this data set, provide an estimate, \(\hat{\mu}_{med}\), for the median value of
medv in the population.
mu.hat.med <- median(Boston$medv)
mu.hat.med
## [1] 21.2
The median value of medv is $21,200.
(f) We now would like to estimate the standard error of \(\hat{\mu}_{med}\). Unfortunately, there is no simple formula for computing the standard error of the median. Instead, estimate the standard error of the median using the bootstrap. Comment on your findings.
set.seed(123)
med.boot.fn <- function(var, index){
return(median(var[index]))
}
boot(Boston$medv, med.boot.fn, 1000)
##
## ORDINARY NONPARAMETRIC BOOTSTRAP
##
##
## Call:
## boot(data = Boston$medv, statistic = med.boot.fn, R = 1000)
##
##
## Bootstrap Statistics :
## original bias std. error
## t1* 21.2 -0.0203 0.3676453
The standard error of the median using the bootstrap is approximately $367.65. Again, in comparison to the median ($21,200), the standard error is fairly small.
(g) Based on this data set, provide an estimate for the
tenth percentile of medv in Boston census tracts. Call this
quantity \(\hat{\mu}_{0.1}\). You can
use the quantile() function.
mu.hat.01 <- quantile(Boston$medv, c(0.1))
mu.hat.01
## 10%
## 12.75
The tenth percentile of medv is $12,750.
(h) Use the bootstrap to estimate the standard error of \(\hat{\mu}_{0.1}\). Comment on your findings.
set.seed(123)
p01.boot.fn <- function(var, index){
return(quantile(var[index],c(0.1)))
}
boot(Boston$medv, p01.boot.fn, 1000)
##
## ORDINARY NONPARAMETRIC BOOTSTRAP
##
##
## Call:
## boot(data = Boston$medv, statistic = p01.boot.fn, R = 1000)
##
##
## Bootstrap Statistics :
## original bias std. error
## t1* 12.75 -0.012 0.527868
The standard error of the tenth percentile is approximately $527.87. Relative to its estimate ($12,750), this is noticeably larger than the standard errors of the mean and median, which is expected: quantiles in the tail of the distribution are supported by fewer observations and are therefore estimated less precisely.