ISLR Chapter 5: Resampling Methods

Assignment 4 - STA 6543

Olivia Shipley

2025-07-24


knitr::opts_chunk$set(echo = TRUE)

# ✨ Problem 3

(a) Explain how k-fold cross-validation is implemented.

Response:

K-fold cross-validation is implemented by randomly dividing the set of observations into k equally sized groups, or folds. The first fold acts as a validation set, and the remaining k-1 folds together form the training set.

The model is fit on the training set, and the mean squared error (MSE) is computed on the held-out validation fold. This procedure is repeated k times, with a different fold serving as the validation set each time. The result is k estimates of the test MSE; averaging these estimates gives the k-fold cross-validation estimate of the test error.
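To make the procedure concrete, here is a minimal hand-rolled sketch of 5-fold CV, using the Auto data (mpg regressed on horsepower) purely as an illustration:

library(ISLR)   # Auto data, used here only to illustrate the mechanics
set.seed(1)
k=5
n=nrow(Auto)
folds=sample(rep(1:k, length.out=n))   # randomly assign each observation to one of k folds
cv.errors=rep(NA, k)
for (j in 1:k) {
  fit=lm(mpg~horsepower, data=Auto[folds!=j, ])       # fit on the other k-1 folds
  pred=predict(fit, Auto[folds==j, ])                 # predict the held-out fold
  cv.errors[j]=mean((Auto$mpg[folds==j]-pred)^2)      # validation MSE for fold j
}
mean(cv.errors)   # average of the k MSE estimates = the k-fold CV estimate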

(b) What are the advantages and disadvantages of k-fold cross-validation relative to:

i. The validation set approach?

The validation set approach is simpler and easier to implement than k-fold CV. However, its test error estimates tend to be much more variable than k-fold CV estimates, since they depend on a single random split, and the validation set method also tends to overestimate the test error rate because the model is trained on only a subset of the observations.

ii. LOOCV?

LOOCV has the potential to be computationally expensive, since the model must be refit n times, whereas k-fold CV is typically performed with k = 5 or k = 10, making it far more feasible. The k-fold approach also tends to give more accurate estimates of the test error rate, because its estimates have lower variance than those from LOOCV. However, if the main goal were to minimize bias, LOOCV would be preferred over k-fold CV.
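The computational gap is easy to see with cv.glm() from the boot package, which performs LOOCV by default and k-fold CV when K is supplied (reusing the Auto illustration from part (a) above):

library(boot)
glm.auto=glm(mpg~horsepower, data=Auto)   # glm() without a family argument fits least squares
cv.glm(Auto, glm.auto)$delta[1]           # LOOCV: requires n separate model fits
cv.glm(Auto, glm.auto, K=10)$delta[1]     # 10-fold CV: requires only 10 model fits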

# ✨ Problem 5

In Chapter 4, we used logistic regression to predict the probability of default using income and balance on the Default data set. We will now estimate the test error of this logistic regression model using the validation set approach. Do not forget to set a random seed before beginning your analysis.

library(ISLR)
set.seed(1)
summary(Default)
##  default    student       balance           income     
##  No :9667   No :7056   Min.   :   0.0   Min.   :  772  
##  Yes: 333   Yes:2944   1st Qu.: 481.7   1st Qu.:21340  
##                        Median : 823.6   Median :34553  
##                        Mean   : 835.4   Mean   :33517  
##                        3rd Qu.:1166.3   3rd Qu.:43808  
##                        Max.   :2654.3   Max.   :73554

(a) Fit a logistic regression model that uses income and balance to predict default.

glm.fit=glm(default~income+balance, data=Default, family=binomial)
summary(glm.fit)
## 
## Call:
## glm(formula = default ~ income + balance, family = binomial, 
##     data = Default)
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -1.154e+01  4.348e-01 -26.545  < 2e-16 ***
## income       2.081e-05  4.985e-06   4.174 2.99e-05 ***
## balance      5.647e-03  2.274e-04  24.836  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 2920.6  on 9999  degrees of freedom
## Residual deviance: 1579.0  on 9997  degrees of freedom
## AIC: 1585
## 
## Number of Fisher Scoring iterations: 8

(b) Using the validation set approach, estimate the test error of this model. In order to do this, you must perform the following steps:

i. Split the sample set into a training set and a validation set.

train=sample(dim(Default)[1], dim(Default)[1]/2)

ii. Fit a multiple logistic regression model using only the training observations.

glm.fit2=glm(default~income+balance, data=Default, family="binomial", subset=train)
summary(glm.fit2)
## 
## Call:
## glm(formula = default ~ income + balance, family = "binomial", 
##     data = Default, subset = train)
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -1.194e+01  6.178e-01 -19.333  < 2e-16 ***
## income       3.262e-05  7.024e-06   4.644 3.41e-06 ***
## balance      5.689e-03  3.158e-04  18.014  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 1523.8  on 4999  degrees of freedom
## Residual deviance:  803.3  on 4997  degrees of freedom
## AIC: 809.3
## 
## Number of Fisher Scoring iterations: 8

iii. Obtain a prediction of default status for each individual in the validation set by computing the posterior probability of default for that individual, and classifying the individual to the default category if the posterior probability is greater than 0.5.

glm.prob=predict(glm.fit2, Default[-train, ], type="response")
glm.pred=rep("No", dim(Default)[1])
glm.pred[glm.prob > 0.5]="Yes"

iv. Compute the validation set error, which is the fraction of the observations in the validation set that are misclassified.

mean(glm.pred != Default[-train, "default"])
## [1] 0.0254

(c) Repeat the process in (b) three times, using three different splits of the observations into a training set and a validation set. Comment on the results obtained.

train=sample(dim(Default)[1], dim(Default)[1]/2)
glm.fit3=glm(default~income+balance, data=Default, family="binomial", subset=train)
glm.prob=predict(glm.fit3, Default[-train, ], type="response")
glm.pred=rep("No", dim(Default)[1])
glm.pred[glm.prob > 0.5]="Yes"
mean(glm.pred != Default[-train, "default"])
## [1] 0.0274
train=sample(dim(Default)[1], dim(Default)[1]/2)
glm.fit4=glm(default~income+balance, data=Default, family="binomial", subset=train)
glm.prob=predict(glm.fit4, Default[-train, ], type="response")
glm.pred=rep("No", dim(Default)[1])
glm.pred[glm.prob > 0.5]="Yes"
mean(glm.pred != Default[-train, "default"])
## [1] 0.0244
train=sample(dim(Default)[1], dim(Default)[1]/2)
glm.fit5=glm(default~income+balance, data=Default, family="binomial", subset=train)
glm.prob=predict(glm.fit5, Default[-train, ], type="response")
glm.pred=rep("No", dim(Default)[1])
glm.pred[glm.prob > 0.5]="Yes"
mean(glm.pred != Default[-train, "default"])
## [1] 0.0244

Response: The validation set errors across the three random splits demonstrate the variability inherent in the validation set approach. While the error rates fall within a fairly narrow range, the variation between splits shows that the test error estimate depends on which observations happen to be assigned to the training set versus the validation set. This highlights a key limitation of the validation set method: the result varies with the random split.
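To see this variability more systematically, the split can be repeated many times; a quick sketch:

val.errors=replicate(100, {
  train=sample(dim(Default)[1], dim(Default)[1]/2)
  fit=glm(default~income+balance, data=Default, family="binomial", subset=train)
  prob=predict(fit, Default[-train, ], type="response")
  mean(ifelse(prob > 0.5, "Yes", "No") != Default[-train, "default"])
})
c(mean=mean(val.errors), sd=sd(val.errors))   # average error and its spread across splits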

(d) Now consider a logistic regression model that predicts the probability of default using income, balance, and a dummy variable for student. Estimate the test error for this model using the validation set approach. Comment on whether or not including a dummy variable for student leads to a reduction in the test error rate.

train=sample(dim(Default)[1], dim(Default)[1]/2)
glm.fit6=glm(default~income+balance+student, data=Default, family="binomial", subset=train)
glm.prob=predict(glm.fit6, Default[-train, ], type="response")
glm.pred=rep("No", dim(Default)[1])
glm.pred[glm.prob > 0.5]="Yes"
mean(glm.pred != Default[-train, "default"])
## [1] 0.0278

Response: Including the student dummy variable does not meaningfully improve the test error rate, which remains within the same range as the previous models. This suggests that student status adds little predictive value beyond what income and balance already provide, and that the additional model complexity is not justified by better predictive performance.
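Since each fit above was evaluated on a different random split, a cleaner comparison fits both models on the same split; a small sketch (the seed choice is arbitrary):

set.seed(2)   # arbitrary seed for this side-by-side comparison
train=sample(dim(Default)[1], dim(Default)[1]/2)
val.err=function(formula) {
  fit=glm(formula, data=Default, family="binomial", subset=train)
  prob=predict(fit, Default[-train, ], type="response")
  mean(ifelse(prob > 0.5, "Yes", "No") != Default[-train, "default"])
}
val.err(default~income+balance)           # without the student dummy
val.err(default~income+balance+student)   # with the student dummy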

# ✨ Problem 6

We continue to consider the use of a logistic regression model to predict the probability of default using income and balance on the Default data set. In particular, we will now compute estimates for the standard errors of the income and balance logistic regression coefficients in two different ways: (1) using the bootstrap, and (2) using the standard formula for computing the standard errors in the glm() function. Do not forget to set a random seed before beginning your analysis.

set.seed(1)

(a) Using the summary() and glm() functions, determine the estimated standard errors for the coefficients associated with income and balance in a multiple logistic regression model that uses both predictors.

glm.fit = glm(default~income+balance, data=Default, family="binomial")
summary(glm.fit)
## 
## Call:
## glm(formula = default ~ income + balance, family = "binomial", 
##     data = Default)
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -1.154e+01  4.348e-01 -26.545  < 2e-16 ***
## income       2.081e-05  4.985e-06   4.174 2.99e-05 ***
## balance      5.647e-03  2.274e-04  24.836  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 2920.6  on 9999  degrees of freedom
## Residual deviance: 1579.0  on 9997  degrees of freedom
## AIC: 1585
## 
## Number of Fisher Scoring iterations: 8

(b) Write a function, boot.fn(), that takes as input the Default data set as well as an index of the observations, and that outputs the coefficient estimates for income and balance in the multiple logistic regression model.

boot.fn=function(data, index)
  return(coef(glm(default~income+balance, data=data, subset=index, family="binomial")))
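As a quick sanity check, boot.fn() can be called directly: passing the full index reproduces the coefficients from (a), while a resampled index yields one bootstrap replicate:

boot.fn(Default, 1:dim(Default)[1])                       # original coefficient estimates
boot.fn(Default, sample(dim(Default)[1], replace=TRUE))   # one bootstrap replicate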

(c) Use the boot() function together with your boot.fn() function to estimate the standard errors of the logistic regression coefficients for income and balance.

library(boot)
boot(Default, boot.fn, 1000)
## 
## ORDINARY NONPARAMETRIC BOOTSTRAP
## 
## 
## Call:
## boot(data = Default, statistic = boot.fn, R = 1000)
## 
## 
## Bootstrap Statistics :
##          original        bias     std. error
## t1* -1.154047e+01 -3.945460e-02 4.344722e-01
## t2*  2.080898e-05  1.680317e-07 4.866284e-06
## t3*  5.647103e-03  1.855765e-05 2.298949e-04

(d) Comment on the estimated standard errors obtained using the glm() function and using your bootstrap function.

Response: The standard error estimates from the glm() function and the bootstrap are very similar, which validates both approaches. The close agreement confirms that the formula-based standard errors from glm() are reliable for this dataset, and the bootstrap provides a non-parametric check that arrives at essentially the same uncertainty estimates, suggesting the model's distributional assumptions are reasonable here.
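For intuition, boot() above is essentially carrying out the following (a stripped-down sketch with R = 1000 resamples, not the function's actual internals):

set.seed(1)
coef.mat=t(replicate(1000, boot.fn(Default, sample(dim(Default)[1], replace=TRUE))))
apply(coef.mat, 2, sd)   # standard deviation of each coefficient across resamples = bootstrap SE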

# ✨ Problem 9

We will now consider the Boston housing data set, from the ISLR2 library.

library(ISLR2)
## 
## Attaching package: 'ISLR2'
## The following objects are masked from 'package:ISLR':
## 
##     Auto, Credit
attach(Boston)

(a) Based on this data set, provide an estimate for the population mean of medv. Call this estimate \(\hat{\mu}\).

mu.hat=mean(medv)
mu.hat
## [1] 22.53281

(b) Provide an estimate of the standard error of \(\hat{\mu}\). Interpret this result.

Hint: We can compute the standard error of the sample mean by dividing the sample standard deviation by the square root of the number of observations.
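In symbols:

\[SE(\hat{\mu}) = \frac{s}{\sqrt{n}}\]

where \(s\) is the sample standard deviation of medv and \(n\) is the number of observations.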

sd(medv)/sqrt(dim(Boston)[1])
## [1] 0.4088611

Response: We can be fairly confident that the true population mean of median home values lies close to our estimate of $22,533, give or take about $409. The small standard error indicates this is a reliable estimate, likely due to having a sufficiently large sample size from the Boston dataset.

(c) Now estimate the standard error of \(\hat{\mu}\) using the bootstrap. How does this compare to your answer from (b)?

set.seed(1)

boot.fn=function(data, index)
  return(mean(data[index]))

boot(medv, boot.fn, 1000)
## 
## ORDINARY NONPARAMETRIC BOOTSTRAP
## 
## 
## Call:
## boot(data = medv, statistic = boot.fn, R = 1000)
## 
## 
## Bootstrap Statistics :
##     original      bias    std. error
## t1* 22.53281 0.007650791   0.4106622

Response: The bootstrap standard error estimate is very close to the theoretical value of 0.4089 from the previous question. This demonstrates that bootstrap resampling provides an excellent alternative to the formula-based approach. The close agreement validates both methods and shows that our standard error estimate is robust regardless of the method used.

(d) Based on your bootstrap estimate from (c), provide a 95% confidence interval for the mean of medv. Compare it to the results obtained using t.test(Boston$medv).

Hint: You can approximate a 95% confidence interval using the formula

\[[\hat{\mu} - 2\,SE(\hat{\mu}),\ \hat{\mu} + 2\,SE(\hat{\mu})]\]

t.test(Boston$medv)
## 
##  One Sample t-test
## 
## data:  Boston$medv
## t = 55.111, df = 505, p-value < 2.2e-16
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
##  21.72953 23.33608
## sample estimates:
## mean of x 
##  22.53281

\[[22.53281 - 2(0.4106622),\ 22.53281 + 2(0.4106622)] \approx [21.71, 23.35]\]
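The same interval computed in R, using mu.hat from (a) and the bootstrap standard error from (c):

mu.hat + c(-2, 2)*0.4106622   # approximate 95% CI: roughly [21.71, 23.35]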

Response: The bootstrap-based 95% confidence interval is approximately [21.71, 23.35], which closely matches the t.test() result of [21.72953, 23.33608].

This close agreement occurs because the sample size is large enough for the Central Limit Theorem to apply, making the sampling distribution approximately normal.

(e) Based on this data set, provide an estimate, \(\hat{\mu}_{med}\), for the median value of medv in the population.

median(medv)
## [1] 21.2

(f) We now would like to estimate the standard error of \(\hat{\mu}_{med}\). Unfortunately, there is no simple formula for computing the standard error of the median. Instead, estimate the standard error of the median using the bootstrap. Comment on your findings.

boot.fn=function(data, index)
  return(median(data[index]))

boot(medv, boot.fn, 1000)
## 
## ORDINARY NONPARAMETRIC BOOTSTRAP
## 
## 
## Call:
## boot(data = medv, statistic = boot.fn, R = 1000)
## 
## 
## Bootstrap Statistics :
##     original  bias    std. error
## t1*     21.2 -0.0386   0.3770241

Response: The median estimate of 21.2 ($21,200) is the value that splits the tracts' median home values in half. The bootstrap standard error of about 0.38 is relatively small, indicating that the median is estimated with good precision and is stable across bootstrap samples.

(g) Based on this data set, provide an estimate for the tenth percentile of medv in Boston census tracts. Call this quantity \(\hat{\mu}_{0.1}\).

(You can use the quantile() function.)

mu.percentile=quantile(medv, c(0.1))
mu.percentile
##   10% 
## 12.75

(h) Use the bootstrap to estimate the standard error of \(\hat{\mu}_{0.1}\). Comment on your findings.

boot.fn=function(data, index)
  return(quantile(data[index], c(0.1)))

boot(medv, boot.fn, 1000)
## 
## ORDINARY NONPARAMETRIC BOOTSTRAP
## 
## 
## Call:
## boot(data = medv, statistic = boot.fn, R = 1000)
## 
## 
## Bootstrap Statistics :
##     original  bias    std. error
## t1*    12.75  0.0186   0.4925766

Response: The 10th percentile estimate of 12.75 indicates that 10% of Boston census tracts have median home values at or below $12,750. The bootstrap standard error of about 0.49 is small relative to the estimate, indicating that this percentile is estimated with good precision.