Conceptual

3. We now review k-fold cross-validation.

  • a) Explain how k-fold cross-validation is implemented.
    • K-fold cross-validation is implemented by randomly dividing the observations into k non-overlapping folds of approximately equal size. The first fold is held out as a validation set, the model is fit on the remaining k-1 folds, and the MSE is computed on the held-out fold. This process is repeated k times, holding out a different fold each time, and the k resulting MSE values are averaged to obtain the k-fold cross-validation estimate of the test error for new observations. A minimal sketch of the procedure appears below.
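
For concreteness, here is a minimal sketch of manual k-fold cross-validation in R. The data frame df, its response y, and predictor x are hypothetical stand-ins, and k = 5 is an arbitrary choice.

set.seed(1)
k <- 5
folds <- sample(rep(1:k, length.out = nrow(df)))   # random fold assignment for each row
cv_errors <- sapply(1:k, function(j) {
  fit <- lm(y ~ x, data = df[folds != j, ])        # fit on the other k-1 folds
  preds <- predict(fit, newdata = df[folds == j, ])
  mean((df$y[folds == j] - preds)^2)               # MSE on the held-out fold
})
mean(cv_errors)                                    # k-fold CV estimate of the test MSE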


  • b) What are the advantages and disadvantages of k-fold cross-validation relative to:
    • i. The validation set approach?
    • ii. LOOCV?
      • The main advantage of k-fold cross-validation relative to the validation set approach is that its estimate of the test error rate is much less variable: the validation set estimate depends heavily on which observations happen to land in the training set versus the validation set, whereas with k-fold cross-validation every observation is used for both training and validation over the k fits. The validation set approach also tends to overestimate the test error, since the model is fit on only a subset of the observations, and statistical methods generally perform worse when trained on fewer observations.

      • LOOCV is the special case of k-fold cross-validation with k = n. Its main disadvantage relative to k-fold cross-validation is computational cost: the model must be fit n times, so the expense grows with the number of observations. (An exception is least squares linear or polynomial regression, where a leverage-based shortcut yields the LOOCV error from a single fit.) K-fold cross-validation requires only k fits, with k typically chosen to be 5 or 10, which is far cheaper for large data sets. On the other hand, LOOCV has less bias than k-fold cross-validation with k < n, but higher variance, so the choice of k reflects the bias-variance trade-off. Both procedures are illustrated below.
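
In R, both procedures can be run with cv.glm() from the boot package; the sketch below again assumes a hypothetical data frame df with response y and predictor x.

library(boot)
fit <- glm(y ~ x, data = df)        # least squares fit via glm() so cv.glm() applies
cv.glm(df, fit)$delta[1]            # LOOCV (default K = n): n model fits
cv.glm(df, fit, K = 10)$delta[1]    # 10-fold CV: only 10 model fits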


Applied

5. In Chapter 4, we used logistic regression to predict the probability of default using income and balance on the Default data set. We will now estimate the test error of this logistic regression model using the validation set approach. Do not forget to set a random seed before beginning your analysis.

library(ISLR2)
attach(Default)
set.seed(1)

a) Fit a logistic regression model that uses income and balance to predict default.

glm_fit <- glm(default ~ income + balance, data = Default, family = binomial)
summary(glm_fit)
## 
## Call:
## glm(formula = default ~ income + balance, family = binomial, 
##     data = Default)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.4725  -0.1444  -0.0574  -0.0211   3.7245  
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -1.154e+01  4.348e-01 -26.545  < 2e-16 ***
## income       2.081e-05  4.985e-06   4.174 2.99e-05 ***
## balance      5.647e-03  2.274e-04  24.836  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 2920.6  on 9999  degrees of freedom
## Residual deviance: 1579.0  on 9997  degrees of freedom
## AIC: 1585
## 
## Number of Fisher Scoring iterations: 8


b) Using the validation set approach, estimate the test error of this model. In order to do this, you must perform the following steps:


  • i. Split the sample into a training set and a validation set.
index <- sample(1:nrow(Default), 0.8 * nrow(Default))
train <- Default[index, ]
test <- Default[-index, ]
  • ii. Fit a multiple logistic regression model using only the training observations.
glm_fit2 <- glm(default ~ income + balance, data = Default, family = binomial, subset = index)
summary(glm_fit2)
## 
## Call:
## glm(formula = default ~ income + balance, family = binomial, 
##     data = Default, subset = index)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.4758  -0.1413  -0.0563  -0.0210   3.4620  
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -1.168e+01  4.893e-01 -23.879  < 2e-16 ***
## income       2.547e-05  5.631e-06   4.523  6.1e-06 ***
## balance      5.613e-03  2.531e-04  22.176  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 2313.6  on 7999  degrees of freedom
## Residual deviance: 1239.2  on 7997  degrees of freedom
## AIC: 1245.2
## 
## Number of Fisher Scoring iterations: 8


  • iii. Obtain a prediction of default status for each individual in the validation set by computing the posterior probability of default for that individual, and classifying the individual to the default category if the posterior probability is greater than 0.5.
glm_probs <- predict(glm_fit2, test, type = "response")
glm_pred <- rep("No", length(glm_probs))
glm_pred[glm_probs > 0.5] <- "Yes"
  • iv. Compute the validation set error, which is the fraction of the observations in the validation set that are misclassified.
mean(glm_pred != test$default)
## [1] 0.026
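
For a finer-grained view than the overall error rate, the predictions can also be cross-tabulated against the actual outcomes (output omitted, since it depends on the split):

table(glm_pred, test$default)  # confusion matrix: rows = predicted, cols = actual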


c) Repeat the process in b) three times, using three different splits of the observations into a training set and a validation set. Comment on the results obtained.

index2 <- sample(1:nrow(Default), 0.8 * nrow(Default))
train2 <- Default[index2, ]
test2 <- Default[-index2, ]
glm_fit2 <- glm(default ~ income + balance, data = Default, family = binomial, subset = index2)
summary(glm_fit2)
## 
## Call:
## glm(formula = default ~ income + balance, family = binomial, 
##     data = Default, subset = index2)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.4882  -0.1436  -0.0567  -0.0206   3.7292  
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -1.156e+01  4.817e-01 -24.002   <2e-16 ***
## income       1.992e-05  5.510e-06   3.615    3e-04 ***
## balance      5.691e-03  2.533e-04  22.472   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 2380.8  on 7999  degrees of freedom
## Residual deviance: 1267.1  on 7997  degrees of freedom
## AIC: 1273.1
## 
## Number of Fisher Scoring iterations: 8
glm_probs2 <- predict(glm_fit2, test2, type = "response")
glm_pred2 <- rep("No", length(glm_probs2))
glm_pred2[glm_probs2 > 0.5] <- "Yes"
mean(glm_pred2 != test2$default)
## [1] 0.024
index3 <- sample(1:nrow(Default), 0.8 * nrow(Default))
train3 <- Default[index3, ]
test3 <- Default[-index3, ]
glm_fit3 <- glm(default ~ income + balance, data = Default, family = binomial, subset = index3)
summary(glm_fit3)
## 
## Call:
## glm(formula = default ~ income + balance, family = binomial, 
##     data = Default, subset = index3)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.4547  -0.1421  -0.0562  -0.0208   3.7322  
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -1.157e+01  4.930e-01 -23.471  < 2e-16 ***
## income       2.118e-05  5.600e-06   3.782 0.000156 ***
## balance      5.633e-03  2.566e-04  21.951  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 2293.2  on 7999  degrees of freedom
## Residual deviance: 1246.8  on 7997  degrees of freedom
## AIC: 1252.8
## 
## Number of Fisher Scoring iterations: 8
glm_probs3 <- predict(glm_fit3, test3, type = "response")
glm_pred3 <- rep("No", length(glm_probs3))
glm_pred3[glm_probs3 > 0.5] <- "Yes"
mean(glm_pred3 != test3$default)
## [1] 0.0265
index4 <- sample(1:nrow(Default), 0.8 * nrow(Default))
train4 <- Default[index4, ]
test4 <- Default[-index4, ]
glm_fit4 <- glm(default ~ income + balance, data = Default, family = binomial, subset = index4)
summary(glm_fit4)
## 
## Call:
## glm(formula = default ~ income + balance, family = binomial, 
##     data = Default, subset = index4)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.4420  -0.1402  -0.0561  -0.0206   3.7385  
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -1.155e+01  4.929e-01 -23.427  < 2e-16 ***
## income       1.938e-05  5.687e-06   3.408 0.000654 ***
## balance      5.647e-03  2.580e-04  21.890  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 2259.2  on 7999  degrees of freedom
## Residual deviance: 1219.0  on 7997  degrees of freedom
## AIC: 1225
## 
## Number of Fisher Scoring iterations: 8
glm_probs4 <- predict(glm_fit4, test4, type = "response")
glm_pred4 <- rep("No", length(glm_probs4))
glm_pred4[glm_probs4 > 0.5] <- "Yes"
mean(glm_pred4 != test4$default)
## [1] 0.0315


  • There is slight variation in the validation set error rates across the three splits, but it is small: 2.4%, 2.65%, and 3.15%, compared with 2.6% from the original split in part b). This variation is due to the randomness of which observations fall into the training set versus the validation set. A more compact way to run such repeats is sketched below.
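
The three repeats above duplicate the same steps; as a sketch, they can be wrapped in a helper function (val_err is a hypothetical name) and repeated with replicate():

val_err <- function() {
  idx <- sample(1:nrow(Default), 0.8 * nrow(Default))
  fit <- glm(default ~ income + balance, data = Default,
             family = binomial, subset = idx)
  probs <- predict(fit, Default[-idx, ], type = "response")
  mean(ifelse(probs > 0.5, "Yes", "No") != Default$default[-idx])
}
set.seed(1)
replicate(3, val_err())  # three validation set error estimates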


d) Now consider a logistic regression model that predicts the probability of default using income, balance, and a dummy variable for student. Estimate the test error for this model using the validation set approach. Comment on whether or not including a dummy variable for student leads to a reduction in the test error rate.

glm_fit6 <- glm(default ~ income + balance + student, data = Default, family = binomial, subset = index)
summary(glm_fit6)
## 
## Call:
## glm(formula = default ~ income + balance + student, family = binomial, 
##     data = Default, subset = index)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.4736  -0.1389  -0.0543  -0.0202   3.5171  
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -1.094e+01  5.517e-01 -19.834  < 2e-16 ***
## income       5.877e-06  9.172e-06   0.641  0.52170    
## balance      5.713e-03  2.585e-04  22.100  < 2e-16 ***
## studentYes  -7.282e-01  2.683e-01  -2.714  0.00664 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 2313.6  on 7999  degrees of freedom
## Residual deviance: 1231.9  on 7996  degrees of freedom
## AIC: 1239.9
## 
## Number of Fisher Scoring iterations: 8
glm_probs6 <- predict(glm_fit6, test, type = "response")
glm_pred6 <- rep("No", length(glm_probs6))
glm_pred6[glm_probs6 > 0.5] <- "Yes"
mean(glm_pred6 != test$default)
## [1] 0.0275


  • With a validation set error rate of 2.75%, slightly above the 2.6% obtained without the student dummy on the same split in part b), adding the student variable to the logistic regression model does not appear to reduce the test error.


6. We continue to consider the use of a logistic regression model to predict the probability of default using income and balance on the Default data set. In particular, we will now compute estimates for the standard errors of the income and balance logistic regression coefficients in two different ways: 1) using the bootstrap, and 2) using the standard formula for computing the standard errors in the glm() function. Do not forget to set a random seed before beginning your analysis.


a) Using the summary() and glm() functions, determine the estimated standard errors for the coefficients associated with income and balance in a multiple logistic regression model that uses both predictors.

set.seed(1)

fit_glm <- glm(default ~ income + balance, data = Default, family = "binomial")
summary(fit_glm)
## 
## Call:
## glm(formula = default ~ income + balance, family = "binomial", 
##     data = Default)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.4725  -0.1444  -0.0574  -0.0211   3.7245  
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -1.154e+01  4.348e-01 -26.545  < 2e-16 ***
## income       2.081e-05  4.985e-06   4.174 2.99e-05 ***
## balance      5.647e-03  2.274e-04  24.836  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 2920.6  on 9999  degrees of freedom
## Residual deviance: 1579.0  on 9997  degrees of freedom
## AIC: 1585
## 
## Number of Fisher Scoring iterations: 8


  • The estimated standard error for income is \(4.985 \times 10^{-6}\) and for balance is \(2.274 \times 10^{-4}\).
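
These values can also be pulled directly from the summary object rather than read off the printout:

summary(fit_glm)$coefficients[, "Std. Error"]  # standard errors for all coefficients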


b) Write a function, boot.fn(), that takes as input the Default data set as well as an index of the observations, and that outputs the coefficient estimates for income and balance in the multiple logistic regression model.

boot_fn <- function(data, idx){
  coef(glm(default ~ income + balance, data = data, family = "binomial", subset = idx))
}
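
As a quick sanity check, calling the function on the full set of row indices should reproduce the coefficients from part a) (output not shown):

boot_fn(Default, 1:nrow(Default))  # should match coef(fit_glm)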


c) Use the boot() function together with your boot.fn() function to estimate the standard errors of the logistic regression coefficients for income and balance.

library(boot)
set.seed(1)

boot(Default, boot_fn, 1000)
## 
## ORDINARY NONPARAMETRIC BOOTSTRAP
## 
## 
## Call:
## boot(data = Default, statistic = boot_fn, R = 1000)
## 
## 
## Bootstrap Statistics :
##          original        bias     std. error
## t1* -1.154047e+01 -3.945460e-02 4.344722e-01
## t2*  2.080898e-05  1.680317e-07 4.866284e-06
## t3*  5.647103e-03  1.855765e-05 2.298949e-04
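
The "std. error" column is simply the standard deviation of the bootstrap replicates; had the boot object been assigned (b is a hypothetical name), essentially the same numbers could be computed directly, up to the random seed:

b <- boot(Default, boot_fn, 1000)
apply(b$t, 2, sd)  # bootstrap SEs for the intercept, income, and balance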


d) Comment on the estimated standard errors obtained using the glm() function and using your bootstrap function.

  • The estimated standard errors obtained by the bootstrap are \(4.866284 \times 10^{-6}\) for income and \(2.298949 \times 10^{-4}\) for balance. These are quite close to the formula-based estimates from the glm() function in the previous part. Since the bootstrap does not rely on the model-based assumptions behind the glm() formulas, the close agreement suggests that those assumptions are reasonable for this data set.


9. We will now consider the Boston housing data set, from the ISLR2 library.


a) Based on this data set, provide an estimate for the population mean of medv. Call the estimate \(\hat{\mu}\).

attach(Boston)

mu_hat <- mean(medv)
mu_hat
## [1] 22.53281


  • The estimate for the population mean of medv is \(\hat{\mu} = 22.53281\). Since medv is the median value of owner-occupied homes in thousands of dollars, the estimated population mean is about $22,533.


b) Provide an estimate of the standard error of \(\hat{\mu}\). Interpret this result.

semu_hat <- sd(medv) / sqrt(length(medv))
semu_hat
## [1] 0.4088611


  • The estimate of the standard error of \(\hat{\mu}\) is \(0.4088611\). Roughly speaking, the sample mean computed from a sample of this size typically differs from the true population mean by about $409.


c) Now estimate the standard error of \(\hat{\mu}\) using the bootstrap. How does this compare to your answer from b)?

set.seed(1)

boot_fn2 <- function(data, index){
  mu <- mean(data[index])
  return (mu)
}
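
Evaluated on the full set of indices, the statistic simply reproduces \(\hat{\mu}\):

boot_fn2(medv, 1:length(medv))  # equals mu_hat, 22.53281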

boot(medv, boot_fn2, 1000)
## 
## ORDINARY NONPARAMETRIC BOOTSTRAP
## 
## 
## Call:
## boot(data = medv, statistic = boot_fn2, R = 1000)
## 
## 
## Bootstrap Statistics :
##     original      bias    std. error
## t1* 22.53281 0.007650791   0.4106622


  • The estimated standard error of \(\hat{\mu}\) provided by the bootstrap, \(0.4106622\), is very close to the estimate found in part b), which was \(0.4088611\).


d) Based on your bootstrap estimate from c), provide a 95% confidence interval for the mean of medv. Compare it to the results obtained using t.test(Boston$medv). Hint: You can approximate a 95% confidence interval using the formula \([\hat{\mu} - 2SE(\hat{\mu}),\ \hat{\mu} + 2SE(\hat{\mu})]\).

t.test(medv)
## 
##  One Sample t-test
## 
## data:  medv
## t = 55.111, df = 505, p-value < 2.2e-16
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
##  21.72953 23.33608
## sample estimates:
## mean of x 
##  22.53281
mu_hat_CI <- c(mu_hat - 2 * semu_hat, mu_hat + 2 * semu_hat)
mu_hat_CI
## [1] 21.71508 23.35053


  • The 95% confidence interval produced using the two-standard-error rule is very close to the 95% confidence interval produced by the t.test() function. (The interval above uses the formula-based standard error from part b); since the bootstrap standard error, 0.4106622, is nearly identical, plugging it in gives essentially the same interval.)
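
Alternatively, if the bootstrap object is saved, the boot package can compute confidence intervals directly; a sketch (boot_out is a hypothetical name, output not shown):

boot_out <- boot(medv, boot_fn2, 1000)
boot.ci(boot_out, type = "norm")  # normal-approximation bootstrap CI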


e) Based on this data set, provide an estimate, \(\hat{\mu}_{med}\), for the median value of medv in the population.

medmu_hat <- median(medv)
medmu_hat
## [1] 21.2


  • The estimate of the population median of medv is \(\hat{\mu}_{med} = 21.2\), or $21,200.


f) We would like to estimate the standard error of \(\hat{\mu}_{med}\). Unfortunately, there is no simple formula for computing the standard error of the median. Instead, estimate the standard error of the median using the bootstrap. Comment on your findings.

set.seed(1)

boot_fn3 <- function(data, index){
  med <- median(data[index])
  return(med)
}

boot(medv, boot_fn3, 1000)
## 
## ORDINARY NONPARAMETRIC BOOTSTRAP
## 
## 
## Call:
## boot(data = medv, statistic = boot_fn3, R = 1000)
## 
## 
## Bootstrap Statistics :
##     original  bias    std. error
## t1*     21.2 0.02295   0.3778075


  • The bootstrap estimate of the standard error of the median of medv is 0.3778075. That is, the estimated standard error of the median is about $378, slightly smaller than the estimated standard error of the mean (about $409).


g) Based on this data set, provide an estimate for the tenth percentile of medv in Boston census tracts. Call this quantity \(\hat{\mu}_{0.1}\). (You can use the quantile() function.)

mu_point1 <- quantile(medv, c(0.1))
mu_point1
##   10% 
## 12.75


  • The estimate for the tenth percentile of medv is \(\hat{\mu}_{0.1} = 12.75\).


h) Use the bootstrap to estimate the standard error of \(\hat{\mu}_{0.1}\). Comment on your findings.

set.seed(1)

boot_fn4 <- function(data, index){
  quant <- quantile(data[index], c(0.1))
  return(quant)
}

boot(medv, boot_fn4, 1000)
## 
## ORDINARY NONPARAMETRIC BOOTSTRAP
## 
## 
## Call:
## boot(data = medv, statistic = boot_fn4, R = 1000)
## 
## 
## Bootstrap Statistics :
##     original  bias    std. error
## t1*    12.75  0.0339   0.4767526


  • The bootstrap estimate of the standard error of the 10th percentile of medv is 0.4767526, or about $477. This is somewhat larger than the standard errors of the mean and the median, which is reasonable: quantiles toward the tail of the distribution are determined by fewer nearby observations and so tend to be estimated less precisely.