##3. We now review k-fold cross-validation.##

(a) Explain how k-fold cross-validation is implemented.

This technique involves randomly dividing the set of observations into k groups, or folds, of roughly equal size. The first fold is treated as a validation set, and the method is fit on the remaining k-1 folds. The MSE is then computed on the observations in the held-out fold. This process is repeated k times, with each fold serving once as the validation set, and the k-fold CV estimate is the average of the k resulting MSEs.
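For concreteness, here is a minimal sketch of 10-fold CV in R using cv.glm() from the boot package (illustrative only; the question asks for a verbal description). It assumes the Default data from ISLR2, which is used later in this document, and a misclassification cost function taken from the boot documentation.

library(boot)
library(ISLR2)
set.seed(1)
fit = glm(default ~ balance + income, data = Default, family = binomial)
# misclassification cost: fraction of observations whose predicted
# probability falls on the wrong side of 0.5
cost = function(r, pi = 0) mean(abs(r - pi) > 0.5)
cv.err = cv.glm(Default, fit, cost = cost, K = 10)
cv.err$delta[1]   # 10-fold CV estimate of the test error rate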

(b) What are the advantages and disadvantages of k-fold cross-validation relative to:

i. The validation set approach?

The validation set approach is simple and easy to implement. However, the validation estimate of the test MSE can be highly variable, depending on which observations end up in the training set, and because the model is fit on only a fraction of the data, it tends to overestimate the test error. k-fold CV reduces both problems at the cost of fitting the model k times. A concrete comparison is sketched after part ii.

ii. LOOCV?

LOOCV has less bias than k-fold CV, since each fit uses n-1 observations, and it always gives the same result because there is no randomness in the splits. On the other hand, it requires fitting the model n times, which can be computationally expensive, and its error estimates tend to have higher variance than k-fold CV because the n fitted models are highly correlated with one another.
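To make the variance and determinism contrast concrete, here is a small illustrative sketch (assuming the Auto data from ISLR2, as in the chapter's lab): the validation-set MSE changes with every random split, while LOOCV returns the same value on every run.

library(ISLR2)
library(boot)
# validation set approach: three different random splits give three different MSEs
for (s in 1:3) {
  set.seed(s)
  train = sample(nrow(Auto), nrow(Auto) / 2)
  fit = lm(mpg ~ horsepower, data = Auto, subset = train)
  print(mean((Auto$mpg - predict(fit, Auto))[-train]^2))
}
# LOOCV: no randomness in the splits, so the estimate is identical on every run
loocv.fit = glm(mpg ~ horsepower, data = Auto)
cv.glm(Auto, loocv.fit)$delta[1]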

##5. In Chapter 4, we used logistic regression to predict the probability of default using income and balance on the Default data set. We will now estimate the test error of this logistic regression model using the validation set approach. Do not forget to set a random seed before beginning your analysis.##

library(ISLR2)
data(Default)
summary(Default)
##  default    student       balance           income     
##  No :9667   No :7056   Min.   :   0.0   Min.   :  772  
##  Yes: 333   Yes:2944   1st Qu.: 481.7   1st Qu.:21340  
##                        Median : 823.6   Median :34553  
##                        Mean   : 835.4   Mean   :33517  
##                        3rd Qu.:1166.3   3rd Qu.:43808  
##                        Max.   :2654.3   Max.   :73554

(a) Fit a logistic regression model that uses income and balance to predict default.

logmodel1 = glm(default ~ balance + income, data = Default, family = binomial)
summary(logmodel1)
## 
## Call:
## glm(formula = default ~ balance + income, family = binomial, 
##     data = Default)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.4725  -0.1444  -0.0574  -0.0211   3.7245  
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -1.154e+01  4.348e-01 -26.545  < 2e-16 ***
## balance      5.647e-03  2.274e-04  24.836  < 2e-16 ***
## income       2.081e-05  4.985e-06   4.174 2.99e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 2920.6  on 9999  degrees of freedom
## Residual deviance: 1579.0  on 9997  degrees of freedom
## AIC: 1585
## 
## Number of Fisher Scoring iterations: 8

(b) Using the validation set approach, estimate the test error of this model. In order to do this, you must perform the following steps:

i. Split the sample set into a training set and a validation set.

# Note: the exercise asks for a random seed (e.g., set.seed(1)) before splitting;
# none is set here, so the split and the numbers that follow change from run to run.
trainDefault = sample(dim(Default)[1], dim(Default)[1]*0.50)
testDefault = Default[-trainDefault, ]

ii. Fit a multiple logistic regression model using only the training observations.

Logmodel2 = glm(default ~ balance + income, data = Default, family = binomial, subset = trainDefault)

iii. Obtain a prediction of default status for each individual in the validation set by computing the posterior probability of default for that individual, and classifying the individual to the default category if the posterior probability is greater than 0.5.

log.prob_def = predict(Logmodel2, testDefault, type = "response")
log.pred_def = rep("No", dim(Default)[1]*0.50)
log.pred_def[log.prob_def > 0.5] = "Yes"
table(log.pred_def, testDefault$default)
##             
## log.pred_def   No  Yes
##          No  4808  114
##          Yes   19   59

iv. Compute the validation set error, which is the fraction of the observations in the validation set that are misclassified.

mean(log.pred_def !=testDefault$default)
## [1] 0.0266

(c) Repeat the process in (b) three times, using three different splits of the observations into a training set and a validation set. Comment on the results obtained.

The estimated test error varies from split to split (0.0266 in (b), and 0.0282 and 0.0278 in the additional splits below), illustrating the variability of the validation set approach.

#i
trainDefault = sample(dim(Default)[1], dim(Default)[1]*0.50)
testDefault = Default[-trainDefault, ]

#ii
Logmodel2 = glm(default ~ balance + income, data = Default, family = binomial, subset = trainDefault)

#iii
log.prob_def = predict(Logmodel2, testDefault, type = "response")
log.pred_def = rep("No", dim(Default)[1]*0.50)
log.pred_def[log.prob_def > 0.5] = "Yes"
table(log.pred_def, testDefault$default)
##             
## log.pred_def   No  Yes
##          No  4809  122
##          Yes   19   50
#iv
mean(log.pred_def !=testDefault$default)
## [1] 0.0282
#i
trainDefault = sample(dim(Default)[1], dim(Default)[1]*0.50)
testDefault = Default[-trainDefault, ]

#ii
Logmodel2 = glm(default ~ balance + income, data = Default, family = binomial, subset = trainDefault)

#iii
log.prob_def = predict(Logmodel2, testDefault, type = "response")
log.pred_def = rep("No", dim(Default)[1]*0.50)
log.pred_def[log.prob_def > 0.5] = "Yes"
table(log.pred_def, testDefault$default)
##             
## log.pred_def   No  Yes
##          No  4806  123
##          Yes   16   55
#iv
mean(log.pred_def !=testDefault$default)
## [1] 0.0278

(d) Now consider a logistic regression model that predicts the probability of default using income, balance, and a dummy variable for student. Estimate the test error for this model using the validation set approach. Comment on whether or not including a dummy variable for student leads to a reduction in the test error rate.

The test error estimate is 0.0266 (about 2.7%), essentially the same as with the previous splits, so including a dummy variable for student does not appear to reduce the test error rate.

trainDefault = sample(dim(Default)[1], dim(Default)[1]*0.50)
testDefault = Default[-trainDefault, ]


LOGmodelc = glm(default ~ balance + income + student, data = Default, family = binomial, subset = trainDefault)

log.prob_def = predict(LOGmodelc, testDefault, type = "response")
log.pred_def = rep("No", dim(Default)[1]*0.50)
log.pred_def[log.prob_def > 0.5] = "Yes"
table(log.pred_def, testDefault$default)
##             
## log.pred_def   No  Yes
##          No  4820  118
##          Yes   15   47
mean(log.pred_def !=testDefault$default)
## [1] 0.0266

##6. We continue to consider the use of a logistic regression model to predict the probability of default using income and balance on the Default data set. In particular, we will now compute estimates for the standard errors of the income and balance logistic regression coefficients in two different ways: (1) using the bootstrap, and (2) using the standard formula for computing the standard errors in the glm() function. Do not forget to set a random seed before beginning your analysis.##

(a) Using the summary() and glm() functions, determine the estimated standard errors for the coefficients associated with income and balance in a multiple logistic regression model that uses both predictors.

LOGmodelc = glm(default ~ balance + income, data = Default, family = binomial)

summary(LOGmodelc)
## 
## Call:
## glm(formula = default ~ balance + income, family = binomial, 
##     data = Default)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.4725  -0.1444  -0.0574  -0.0211   3.7245  
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -1.154e+01  4.348e-01 -26.545  < 2e-16 ***
## balance      5.647e-03  2.274e-04  24.836  < 2e-16 ***
## income       2.081e-05  4.985e-06   4.174 2.99e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 2920.6  on 9999  degrees of freedom
## Residual deviance: 1579.0  on 9997  degrees of freedom
## AIC: 1585
## 
## Number of Fisher Scoring iterations: 8

(b) Write a function, boot.fn(), that takes as input the Default data set as well as an index of the observations, and that outputs the coefficient estimates for income and balance in the multiple logistic regression model.

boot.fn = function(data, index) return(coef(glm(default ~ balance + income, data = data, family = binomial, subset = index)))
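As a quick sanity check (illustrative only, not required by the question), boot.fn() can be called directly on a single bootstrap sample of row indices; this is exactly what boot() does repeatedly behind the scenes. The seed below is arbitrary.

set.seed(1)   # arbitrary seed for this one-off illustration
boot.fn(Default, sample(nrow(Default), nrow(Default), replace = TRUE))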

(c) Use the boot() function together with your boot.fn() function to estimate the standard errors of the logistic regression coefficients for income and balance.

library(boot)
boot(Default, boot.fn, 100)
## 
## ORDINARY NONPARAMETRIC BOOTSTRAP
## 
## 
## Call:
## boot(data = Default, statistic = boot.fn, R = 100)
## 
## 
## Bootstrap Statistics :
##          original        bias     std. error
## t1* -1.154047e+01 -2.860955e-02 4.431434e-01
## t2*  5.647103e-03  1.257411e-05 2.408367e-04
## t3*  2.080898e-05  1.895874e-07 4.920288e-06
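For the comparison in (d) below, the glm() standard errors can also be pulled out programmatically (a small illustrative extract; LOGmodelc is the model fit in (a) above):

summary(LOGmodelc)$coefficients[c("balance", "income"), "Std. Error"]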

(d) Comment on the estimated standard errors obtained using the glm() function and using your bootstrap function.

The standard errors from glm() are 2.274e-04 for balance and 4.985e-06 for income, while the bootstrap gives 2.408e-04 and 4.920e-06 respectively. The two sets of estimates are very close, suggesting that the standard-formula estimates (which rely on the model's assumptions) are reasonable here.

##9. We will now consider the Boston housing data set, from the ISLR2 library.##

library(ISLR2)
data("Boston")
summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08205   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          lstat      
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   : 1.73  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.: 6.95  
##  Median : 5.000   Median :330.0   Median :19.05   Median :11.36  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :12.65  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:16.95  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :37.97  
##       medv      
##  Min.   : 5.00  
##  1st Qu.:17.02  
##  Median :21.20  
##  Mean   :22.53  
##  3rd Qu.:25.00  
##  Max.   :50.00

(a) Based on this data set, provide an estimate for the population mean of medv. Call this estimate µ̂.

attach(Boston)
mean.medv= mean(medv)
mean.medv
## [1] 22.53281

(b) Provide an estimate of the standard error of µ̂. Interpret this result. Hint: We can compute the standard error of the sample mean by dividing the sample standard deviation by the square root of the number of observations.

The standard error computed below is about 0.41, meaning that across repeated samples of this size the sample mean of medv would typically differ from the population mean by roughly 0.41.

std.mean = sd(medv)/sqrt(length(medv))
std.mean
## [1] 0.4088611

(c) Now estimate the standard error of µ̂ using the bootstrap. How does this compare to your answer from (b)?

The bootstrap estimate below (about 0.45) is close to the formula-based estimate of 0.409 from (b).

boot.fn2= function(data,index) return(mean(data[index]))
boota= boot(medv,boot.fn2,100)
boota
## 
## ORDINARY NONPARAMETRIC BOOTSTRAP
## 
## 
## Call:
## boot(data = medv, statistic = boot.fn2, R = 100)
## 
## 
## Bootstrap Statistics :
##     original      bias    std. error
## t1* 22.53281 -0.05899407   0.4495242

(d) Based on your bootstrap estimate from (c), provide a 95% confidence interval for the mean of medv. Compare it to the results obtained using t.test(Boston$medv). Hint: You can approximate a 95% confidence interval using the formula [µ̂ - 2·SE(µ̂), µ̂ + 2·SE(µ̂)].

The bootstrap-based interval computed below, (21.70, 23.36), is very close to the t.test() interval of (21.73, 23.34).

t.test(Boston$medv)
## 
##  One Sample t-test
## 
## data:  Boston$medv
## t = 55.111, df = 505, p-value < 2.2e-16
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
##  21.72953 23.33608
## sample estimates:
## mean of x 
##  22.53281
# rule-of-thumb 95% CI: mean +/- 2*SE, using a bootstrap SE estimate
# (bootstrap SEs differ from run to run here because no seed was set)
Cn.bos= c(22.53-2*0.4174872, 22.53+2*0.4174872)
Cn.bos
## [1] 21.69503 23.36497
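Equivalently, the interval can be built directly from the bootstrap object computed in (c) rather than from a hard-coded standard error (a small sketch; the exact numbers will differ slightly because the bootstrap replicates are random):

# rule-of-thumb CI using the original estimate and the SD of the bootstrap replicates
boota$t0 + c(-2, 2) * sd(boota$t[, 1])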

(e) Based on this data set, provide an estimate, µ̂_med, for the median value of medv in the population.

median.medv = median(medv)
median.medv
## [1] 21.2

(f) We now would like to estimate the standard error of µ̂_med. Unfortunately, there is no simple formula for computing the standard error of the median. Instead, estimate the standard error of the median using the bootstrap. Comment on your findings.

The bootstrap gives a standard error of about 0.37 for the median, somewhat smaller than the standard error of the mean, and the estimated bias for the median of 21.2 is negligible.

boot.fn4= function(data,index)return(median(data[index]))
bootb= boot(medv, boot.fn4,100)
bootb
## 
## ORDINARY NONPARAMETRIC BOOTSTRAP
## 
## 
## Call:
## boot(data = medv, statistic = boot.fn4, R = 100)
## 
## 
## Bootstrap Statistics :
##     original  bias    std. error
## t1*     21.2 -0.0445   0.3744285

(g) Based on this data set, provide an estimate for the tenth percentile of medv in Boston census tracts. Call this quantity µ̂_0.1. (You can use the quantile() function.)

tn.medv = quantile(medv,c(0.1))
tn.medv
##   10% 
## 12.75

(h) Use the bootstrap to estimate the standard error of µ̂_0.1. Comment on your findings.

The bootstrap standard error of the tenth percentile is about 0.50, somewhat larger than the standard errors of the mean and the median, but still small relative to the estimate of 12.75.

boot.fn4 = function(data, index) return(quantile(data[index], c(0.1)))
bootc= boot(medv, boot.fn4, 1000)
bootc
## 
## ORDINARY NONPARAMETRIC BOOTSTRAP
## 
## 
## Call:
## boot(data = medv, statistic = boot.fn4, R = 1000)
## 
## 
## Bootstrap Statistics :
##     original  bias    std. error
## t1*    12.75 0.02555   0.5048263