Question 3

A.) Explain how k-fold cross-validation is implemented.

K-fold cross-validation involves randomly dividing the set of observations into k groups, or ‘folds’, of approximately equal size. The first fold is treated as a validation set, and the statistical learning method is fit on the remaining k-1 folds. The test error (e.g., MSE1 for a regression problem) is then computed on the observations in the held-out fold. This procedure is repeated k times, with a different fold serving as the validation set each time. The process yields k estimates of the test error, which are then averaged to produce the k-fold CV estimate. A minimal sketch of these steps is given below.
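
As an illustration of the procedure described above, here is a minimal by-hand sketch of k-fold CV in R (with k = 5), using a simulated data set and a simple linear model; the data and model are hypothetical, and in practice the cv.glm() function in the boot package carries out these steps automatically.

set.seed(1)
n <- 100
sim <- data.frame(x = rnorm(n))
sim$y <- 2 * sim$x + rnorm(n)

k <- 5
folds <- sample(rep(1:k, length.out = n))  # randomly assign each observation to one of k folds
cv.errors <- rep(NA, k)
for (i in 1:k) {
  fit <- lm(y ~ x, data = sim[folds != i, ])          # fit on the other k - 1 folds
  pred <- predict(fit, newdata = sim[folds == i, ])   # predict on the held-out fold
  cv.errors[i] <- mean((sim$y[folds == i] - pred)^2)  # MSE_i on the held-out fold
}
mean(cv.errors)  # the k-fold CV estimate is the average of the k fold MSEs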

B.) What are the advantages and disadvantages of k-fold CV relative to:

I.) The Validation Set Approach

The advantage of k-fold CV over the validation set approach is that it gives more accurate estimates of the test error rate. The validation set approach fits the model on only about half of the data, so its error estimate tends to overestimate the true test error and can vary considerably depending on the split; its main advantages are that it is quick and easy to implement.

II.) LOOCV

The main advantage of k-fold CV (with k &lt; n) over LOOCV is computational: the model is fit only k times rather than n times. Beyond this, k-fold CV typically gives more accurate estimates of the test error rate, because its estimate has lower variance than LOOCV’s; the n fits in LOOCV are trained on nearly identical data sets, so their errors are highly correlated. LOOCV has the advantage in terms of bias: since each training set contains n-1 observations, almost the entire data set, LOOCV gives approximately unbiased estimates of the test error rate. A brief illustration of the computational difference follows.
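
To make the computational comparison concrete, here is a brief illustrative sketch (on a small simulated data set, not part of the assignment data) using cv.glm() from the boot package; with no K argument cv.glm() performs LOOCV, while K = 10 gives 10-fold CV.

library(boot)
set.seed(1)
sim <- data.frame(x = rnorm(200))
sim$y <- sim$x - 2 * sim$x^2 + rnorm(200)
fit <- glm(y ~ x, data = sim)       # glm() with the default gaussian family is just linear regression

cv.glm(sim, fit)$delta[1]           # LOOCV: the model is refit n = 200 times
cv.glm(sim, fit, K = 10)$delta[1]   # 10-fold CV: the model is refit only 10 times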

Question 5

A.) Fit a logistic regression model that uses income and balance to predict default. Do not forget to set a random seed before beginning your analysis.

library(ISLR)
set.seed(1)
attach(Default)
summary(Default)
##  default    student       balance           income     
##  No :9667   No :7056   Min.   :   0.0   Min.   :  772  
##  Yes: 333   Yes:2944   1st Qu.: 481.7   1st Qu.:21340  
##                        Median : 823.6   Median :34553  
##                        Mean   : 835.4   Mean   :33517  
##                        3rd Qu.:1166.3   3rd Qu.:43808  
##                        Max.   :2654.3   Max.   :73554
fit.glm <- glm(default ~ income + balance, data = Default, family = "binomial")
summary(fit.glm)
## 
## Call:
## glm(formula = default ~ income + balance, family = "binomial", 
##     data = Default)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.4725  -0.1444  -0.0574  -0.0211   3.7245  
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -1.154e+01  4.348e-01 -26.545  < 2e-16 ***
## income       2.081e-05  4.985e-06   4.174 2.99e-05 ***
## balance      5.647e-03  2.274e-04  24.836  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 2920.6  on 9999  degrees of freedom
## Residual deviance: 1579.0  on 9997  degrees of freedom
## AIC: 1585
## 
## Number of Fisher Scoring iterations: 8

B.) Using the validation set approach, estimate the test error of this model.

I.) Split data into training set and validation set.

train <- sample(dim(Default)[1], dim(Default)[1] / 2)

II.) Fit a multiple logistic regression model using only the training observations.

fit.glm <- glm(default ~ income + balance, data = Default, family = "binomial", subset = train)
summary(fit.glm)
## 
## Call:
## glm(formula = default ~ income + balance, family = "binomial", 
##     data = Default, subset = train)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.3583  -0.1268  -0.0475  -0.0165   3.8116  
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -1.208e+01  6.658e-01 -18.148   <2e-16 ***
## income       1.858e-05  7.573e-06   2.454   0.0141 *  
## balance      6.053e-03  3.467e-04  17.457   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 1457.0  on 4999  degrees of freedom
## Residual deviance:  734.4  on 4997  degrees of freedom
## AIC: 740.4
## 
## Number of Fisher Scoring iterations: 8

III.) Obtain a prediction of default status for each individual in the validation set by computing the posterior probability of default for that individual, and classify an individual to the default category if the posterior probability is greater than 0.5.

validateSet <- Default[-train, ]  # observations not in the training set
probdef <- predict(fit.glm, newdata = validateSet, type = "response")
pred.glm <- rep("No", length(probdef))
pred.glm[probdef > 0.5] <- "Yes"

IV.) Compute the validation error, which is the fraction of the observations in the validation set that are misclassified.

mean(pred.glm != validateSet$default)
## [1] 0.0477

From the output above, we can see that the validation set approach gives an estimated test error rate of approximately 4.77%.

C.) Repeat the process in B three times, using three different splits of the observations into a training set and a validation set.

iterate <- function() {
  # Draw a new random split, refit the model on the training half,
  # and return the validation set error rate.
  train <- sample(dim(Default)[1], dim(Default)[1] / 2)
  validateSet <- Default[-train, ]
  fit.glm <- glm(default ~ income + balance, data = Default, family = "binomial", subset = train)
  probdef <- predict(fit.glm, newdata = validateSet, type = "response")
  pred.glm <- rep("No", length(probdef))
  pred.glm[probdef > 0.5] <- "Yes"
  return(mean(pred.glm != validateSet$default))
}
iterate()
## [1] 0.0457
iterate()
## [1] 0.0467
iterate()
## [1] 0.0479

The test error estimates from the three different splits are close but not identical: the estimate produced by the validation set approach varies depending on exactly which observations fall into the training and validation sets.

D.) Now consider a logistic regression model that predicts the probability of default using income, balance, and a dummy variable for student. Estimate the test error for this model using the validation set approach. Comment on whether or not including a dummy variable for student leads to a reduction in the test error rate.

student.glm <- glm(default ~ income + balance + student, data = Default, family = "binomial", subset = train)
stuprob <- predict(student.glm, newdata = validateSet, type = "response")
pred.student <- rep("No", length(stuprob))
pred.student[stuprob > 0.5] <- "Yes"
mean(pred.student != validateSet$default)
## [1] 0.0477

After fitting a logistic regression model with the added student dummy variable, the estimated test error rate is essentially unchanged, so including a dummy variable for student does not appear to lead to a reduction in the test error rate.

Question 6

We continue to consider the use of a logistic regression model to predict the probability of default using income and balance on the Default data set. In particular, we will now compute estimates for the standard errors of the income and balance logistic regression coefficients in two different ways: (1) using the bootstrap, and (2) using the standard formula for computing the standard errors in the glm() function. Do not forget to set a random seed before beginning your analysis.

A.) Using the summary() and glm() functions, determine the estimated standard errors for the coefficients associated with income and balance in a multiple logistic regression model that uses both predictors.

set.seed(1)
attach(Default)
## The following objects are masked from Default (pos = 3):
## 
##     balance, default, income, student
fit.glm6<-glm(default ~ income + balance, data = Default, family = "binomial")
summary(fit.glm6)
## 
## Call:
## glm(formula = default ~ income + balance, family = "binomial", 
##     data = Default)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.4725  -0.1444  -0.0574  -0.0211   3.7245  
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -1.154e+01  4.348e-01 -26.545  < 2e-16 ***
## income       2.081e-05  4.985e-06   4.174 2.99e-05 ***
## balance      5.647e-03  2.274e-04  24.836  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 2920.6  on 9999  degrees of freedom
## Residual deviance: 1579.0  on 9997  degrees of freedom
## AIC: 1585
## 
## Number of Fisher Scoring iterations: 8

From the results above we can see that the estimated standard errors for the coefficients are 4.985e-06 for income and 2.274e-04 for balance. Both of these predictors are significant.

B.) Write a function, boot.fn(), that takes as input the Default data set as well as an index of the observations, and that outputs the coefficient estimates for income and balance in the multiple logistic regression model.

library(boot)
boot.fn <- function(data, index) {
  # Fit the logistic regression on the observations selected by 'index'
  # and return the coefficient estimates.
  fit.boot <- glm(default ~ income + balance, data = data, family = "binomial", subset = index)
  return(coef(fit.boot))
}

C.) Use the boot() function together with your boot.fn() function to estimate the standard errors of the logistic regression coefficients for “income” and “balance”.

boot(Default,boot.fn,50)
## 
## ORDINARY NONPARAMETRIC BOOTSTRAP
## 
## 
## Call:
## boot(data = Default, statistic = boot.fn, R = 50)
## 
## 
## Bootstrap Statistics :
##          original        bias     std. error
## t1* -1.154047e+01  1.181200e-01 4.202402e-01
## t2*  2.080898e-05 -5.466926e-08 4.542214e-06
## t3*  5.647103e-03 -6.974834e-05 2.282819e-04

D.) Comment on the estimated standard errors obtained using the glm() function and using your bootstrap function.

Comparing the two approaches: the glm() function in part ‘A’ gave standard errors of 4.985e-06 for income and 2.274e-04 for balance, while the bootstrap in part ‘C’ gives 4.542214e-06 for income and 2.282819e-04 for balance. The bootstrap estimates are very close to the formula-based estimates from glm().

Question 9

I will now consider the Boston housing data set, from the MASS library.

A.) Based on this data set, provide an estimate for the population mean of medv. Call this estimate mu.hat.

library(MASS)
set.seed(1)
attach(Boston)
mu.hat<-mean(medv)
mu.hat
## [1] 22.53281

B.) Provide an estimate of the standard error of mu.hat. Interpret this result.

mu.hat.err <- sd(medv) / sqrt(length(medv))
mu.hat.err
## [1] 0.4088611

The estimated standard error of about 0.41 indicates how much mu.hat, the sample mean of medv (22.53), would typically vary from the true population mean across repeated samples of this size.

C.) Now estimate the standard error of mu.hat using the bootstrap. How does this compare to your answer from (b)?

boot.func <- function(data, index) {
    mu <- mean(data[index])
    return (mu)
}
boot(medv, boot.func, 1000)
## 
## ORDINARY NONPARAMETRIC BOOTSTRAP
## 
## 
## Call:
## boot(data = medv, statistic = boot.func, R = 1000)
## 
## 
## Bootstrap Statistics :
##     original      bias    std. error
## t1* 22.53281 0.008517589   0.4119374

In part B the formula-based estimate of the standard error was 0.4089, while the bootstrap gives 0.4119; the two estimates are very close.

D.) Based on your bootstrap estimate from (c), provide a 95% confidence interval for the mean of “medv”. Compare it to the results obtained using t.test(Boston$medv).

t.test(medv)
## 
##  One Sample t-test
## 
## data:  medv
## t = 55.111, df = 505, p-value < 2.2e-16
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
##  21.72953 23.33608
## sample estimates:
## mean of x 
##  22.53281
CI.mu.hat <- c(22.53 - 2 * 0.4119, 22.53 + 2 * 0.4119)
CI.mu.hat
## [1] 21.7062 23.3538

The bootstrap-based confidence interval shown above is quite close to the one from the one-sample t-test: the t-test gives (21.72953, 23.33608), while the interval built from the bootstrap standard error gives (21.7062, 23.3538).

E.) Provide an estimate, mu.hat.med, for the median value of medv in the Boston data set.

mu.hat.med<-median(medv)
mu.hat.med
## [1] 21.2

F.) Use the bootstrap to estimate the standard error of mu.hat.med.

boot.fn.med <- function(data, index) {
    mu.med <- median(data[index])
    return (mu.med)
}
boot(medv, boot.fn.med, 1000)
## 
## ORDINARY NONPARAMETRIC BOOTSTRAP
## 
## 
## Call:
## boot(data = medv, statistic = boot.fn.med, R = 1000)
## 
## 
## Bootstrap Statistics :
##     original  bias    std. error
## t1*     21.2 -0.0098   0.3874004

G.) Provide an estimate for the tenth percentile of “medv” in Boston suburbs. Call this quantity mu.hat.ten:

mu.hat.ten <- quantile(medv, c(0.1))
mu.hat.ten
##   10% 
## 12.75

H.) Use the bootstrap to estimate the standard error of mu.hat.ten:

boot.fn.ten <- function(data, index) {
    mu.quant <- quantile(data[index], c(0.1))
    return (mu.quant)
}
boot(medv, boot.fn.ten, 1000)
## 
## ORDINARY NONPARAMETRIC BOOTSTRAP
## 
## 
## Call:
## boot(data = medv, statistic = boot.fn.ten, R = 1000)
## 
## 
## Bootstrap Statistics :
##     original  bias    std. error
## t1*    12.75 0.00515   0.5113487