library(tidyverse)
library(tinytex)
library(readxl)
library(openintro)
library(ggplot2)
library(gridExtra)
library(DescTools)
library(ISLR)
library(ISLR2)
library(dplyr)

CHAPTER 3

QUESTION 2

Carefully explain the differences between the KNN classifier and KNN regression methods.

KNN stands for K-Nearest Neighbors, a method that uses the training observations closest to a new data point to determine either which class the point belongs to or what value should be predicted for it. There are two versions of this machine learning algorithm: the KNN classifier and KNN regression.

The two methods are closely related but are applied to different kinds of problems. Both rely on the same simple idea: to analyze a new data point, look at the known data points nearest to it and use them either to decide which class the new point belongs to or to predict its value.

KNN classification deals primarily with categorical responses. Recall the Palmer Penguins data set in R, where a scatter plot of bill measurements separates the penguin species. If a new data point is placed on that scatter plot, the KNN classifier assigns it to a species by majority vote among its K nearest neighbors in the training data. The flexibility of the model is controlled by the choice of K: a small K can lead to overfitting and more jagged boundaries between the categories, which typically means low bias but high variance, while a larger K smooths the boundaries but can introduce more bias.
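
For illustration, here is a minimal sketch of KNN classification on the penguin bill measurements. It assumes the palmerpenguins and class packages are installed; neither is loaded in the setup chunk above, and the 70/30 split and K = 5 are arbitrary choices.

library(palmerpenguins)   # penguins data (not loaded above)
library(class)            # knn()

peng <- na.omit(penguins[, c("species", "bill_length_mm", "bill_depth_mm")])

set.seed(1)
train_idx <- sample(nrow(peng), size = round(0.7 * nrow(peng)))

# Standardize the predictors so both bill measurements are on the same scale
train_X <- scale(peng[train_idx, c("bill_length_mm", "bill_depth_mm")])
test_X  <- scale(peng[-train_idx, c("bill_length_mm", "bill_depth_mm")],
                 center = attr(train_X, "scaled:center"),
                 scale  = attr(train_X, "scaled:scale"))

# Classify each held-out penguin by majority vote of its 5 nearest neighbors
pred <- knn(train = train_X, test = test_X,
            cl = peng$species[train_idx], k = 5)
mean(pred == peng$species[-train_idx])   # proportion classified correctly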

KNN regression is similar to the classifier but deals with a continuous response. A nice feature of KNN regression is that, even when the relationship is not linear, the same technique still produces a sensible fit. For a new data point, the responses of its K nearest neighbors are averaged (or weighted) to produce the prediction. The same trade-off applies when choosing a small K versus a large K.
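
A comparable sketch for KNN regression, using horsepower from the Auto data to predict mpg. This assumes the FNN package is installed (it is not loaded in the setup chunk above), and again the split and K = 5 are arbitrary.

library(FNN)   # knn.reg()

set.seed(1)
idx      <- sample(nrow(Auto), size = round(0.7 * nrow(Auto)))
train_hp <- Auto[idx, "horsepower", drop = FALSE]
test_hp  <- Auto[-idx, "horsepower", drop = FALSE]

# Each prediction is the average mpg of the 5 nearest training cars (by horsepower)
knn_fit <- knn.reg(train = train_hp, test = test_hp, y = Auto$mpg[idx], k = 5)

mean((knn_fit$pred - Auto$mpg[-idx])^2)   # test-set mean squared error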

QUESTION 9

This question involves the use of multiple linear regression on the Auto data set.

> (a) Produce a scatterplot matrix which includes all of the variables in the data set.
pairs(Auto)

> (b) Compute the matrix of correlations between the variables using the function cor(). You will need to exclude the name variable, which is qualitative.

names(Auto)
## [1] "mpg"          "cylinders"    "displacement" "horsepower"   "weight"      
## [6] "acceleration" "year"         "origin"       "name"
Auto_NN <- subset(Auto, select = -name)
cor(Auto_NN)
##                     mpg  cylinders displacement horsepower     weight
## mpg           1.0000000 -0.7776175   -0.8051269 -0.7784268 -0.8322442
## cylinders    -0.7776175  1.0000000    0.9508233  0.8429834  0.8975273
## displacement -0.8051269  0.9508233    1.0000000  0.8972570  0.9329944
## horsepower   -0.7784268  0.8429834    0.8972570  1.0000000  0.8645377
## weight       -0.8322442  0.8975273    0.9329944  0.8645377  1.0000000
## acceleration  0.4233285 -0.5046834   -0.5438005 -0.6891955 -0.4168392
## year          0.5805410 -0.3456474   -0.3698552 -0.4163615 -0.3091199
## origin        0.5652088 -0.5689316   -0.6145351 -0.4551715 -0.5850054
##              acceleration       year     origin
## mpg             0.4233285  0.5805410  0.5652088
## cylinders      -0.5046834 -0.3456474 -0.5689316
## displacement   -0.5438005 -0.3698552 -0.6145351
## horsepower     -0.6891955 -0.4163615 -0.4551715
## weight         -0.4168392 -0.3091199 -0.5850054
## acceleration    1.0000000  0.2903161  0.2127458
## year            0.2903161  1.0000000  0.1815277
## origin          0.2127458  0.1815277  1.0000000
> (c) Use the lm() function to perform a multiple linear regression with mpg as the response and all other variables except name as the predictors. Use the summary() function to print the results. Comment on the output.
lm.fit.Auto <- lm(mpg~.,data = Auto_NN)
summary(lm.fit.Auto)
## 
## Call:
## lm(formula = mpg ~ ., data = Auto_NN)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -9.5903 -2.1565 -0.1169  1.8690 13.0604 
## 
## Coefficients:
##                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  -17.218435   4.644294  -3.707  0.00024 ***
## cylinders     -0.493376   0.323282  -1.526  0.12780    
## displacement   0.019896   0.007515   2.647  0.00844 ** 
## horsepower    -0.016951   0.013787  -1.230  0.21963    
## weight        -0.006474   0.000652  -9.929  < 2e-16 ***
## acceleration   0.080576   0.098845   0.815  0.41548    
## year           0.750773   0.050973  14.729  < 2e-16 ***
## origin         1.426141   0.278136   5.127 4.67e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 3.328 on 384 degrees of freedom
## Multiple R-squared:  0.8215, Adjusted R-squared:  0.8182 
## F-statistic: 252.4 on 7 and 384 DF,  p-value: < 2.2e-16
> i. Is there a relationship between the predictors and the response?

Yes. The F-statistic for the model is 252.4 with a p-value below 2.2e-16, so we can reject the null hypothesis that all of the coefficients are zero, and several individual predictors are statistically significant in relation to mpg.

> ii. Which predictors appear to have a statistically significant relationship to the response?

Specifically, displacement, weight, year, and origin have statistically significant relationships to mpg (p < 0.01 for each).

> iii. What does the coefficient for the year variable suggest?

The coefficients in a linear regression model represent the estimated effect of each predictor variable on the response, holding the other predictors fixed. Since the coefficient for year is 0.750773, each one-unit increase in year (one model year) is associated with an increase of about 0.75 in mpg.

The relatively large coefficient for year suggests that, compared with the other predictors, model year plays a substantial role in determining mpg: newer cars tend to be more fuel efficient.
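
To see this directly, we can predict mpg for two hypothetical cars that are identical except for model year; the specific predictor values below are made up purely for illustration.

# Two hypothetical cars differing only in model year (illustrative values)
new_cars <- data.frame(
  cylinders = 4, displacement = 150, horsepower = 90,
  weight = 2500, acceleration = 15, year = c(76, 77), origin = 1
)
predict(lm.fit.Auto, newdata = new_cars)

# The gap between the two predictions equals the year coefficient (~0.75 mpg)
diff(predict(lm.fit.Auto, newdata = new_cars))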

> (d) Use the plot() function to produce diagnostic plots of the linear regression fit. Comment on any problems you see with the fit. Do the residual plots suggest any unusually large outliers? Does the leverage plot identify any observations with unusually high leverage?
par(mfrow=c(2,2))
plot(lm.fit.Auto)

In the Residuals vs. Fitted plot we can see that the model tends to over-predict at the low and high ends of the fitted values and under-predict in the middle, which suggests the relationship may be slightly non-linear. There is also some funneling (increasing spread) at the higher fitted values, hinting at non-constant variance. The Residuals vs. Leverage plot, with its Cook's distance contours, shows a few observations with noticeably higher leverage than the rest, and the Q-Q plot shows the residuals are mostly normal apart from the outliers in the upper tail.
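
To identify which observations the leverage plot is flagging, we can pull the hat values and Cook's distances out of the fit directly; a quick sketch:

# Observations with the largest leverage and Cook's distance in the fit above
lev   <- hatvalues(lm.fit.Auto)
cooks <- cooks.distance(lm.fit.Auto)

head(sort(lev, decreasing = TRUE), 3)     # highest-leverage observations
head(sort(cooks, decreasing = TRUE), 3)   # most influential observations

# Rule-of-thumb flag: leverage more than 3 times the average hat value
which(lev > 3 * mean(lev))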

> (e) Use the * and : symbols to fit linear regression models with interaction effects. Do any interactions appear to be statistically significant?
Horse_Weight <- lm(mpg~horsepower*weight,data = Auto_NN)
summary(Horse_Weight)
## 
## Call:
## lm(formula = mpg ~ horsepower * weight, data = Auto_NN)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -10.7725  -2.2074  -0.2708   1.9973  14.7314 
## 
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)        6.356e+01  2.343e+00  27.127  < 2e-16 ***
## horsepower        -2.508e-01  2.728e-02  -9.195  < 2e-16 ***
## weight            -1.077e-02  7.738e-04 -13.921  < 2e-16 ***
## horsepower:weight  5.355e-05  6.649e-06   8.054 9.93e-15 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 3.93 on 388 degrees of freedom
## Multiple R-squared:  0.7484, Adjusted R-squared:  0.7465 
## F-statistic: 384.8 on 3 and 388 DF,  p-value: < 2.2e-16
Origin_Year <- lm(mpg~origin:year, data=Auto_NN)
summary(Origin_Year)
## 
## Call:
## lm(formula = mpg ~ origin:year, data = Auto_NN)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -12.7945  -4.7198  -0.8051   3.6099  18.4823 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 14.485655   0.677588   21.38   <2e-16 ***
## origin:year  0.074469   0.004987   14.93   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 6.233 on 390 degrees of freedom
## Multiple R-squared:  0.3638, Adjusted R-squared:  0.3622 
## F-statistic:   223 on 1 and 390 DF,  p-value: < 2.2e-16
Acc_Cyn <- lm(mpg~acceleration*cylinders,data = Auto_NN)
summary(Acc_Cyn)
## 
## Call:
## lm(formula = mpg ~ acceleration * cylinders, data = Auto_NN)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -13.2257  -3.1788  -0.7045   2.4031  17.4642 
## 
## Coefficients:
##                        Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            31.37192    5.27599   5.946 6.13e-09 ***
## acceleration            0.73498    0.33724   2.179   0.0299 *  
## cylinders              -1.84692    0.85564  -2.159   0.0315 *  
## acceleration:cylinders -0.11179    0.05806  -1.926   0.0549 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 4.895 on 388 degrees of freedom
## Multiple R-squared:  0.6097, Adjusted R-squared:  0.6067 
## F-statistic:   202 on 3 and 388 DF,  p-value: < 2.2e-16

We can see that the horsepower:weight interaction is statistically significant, as is the origin:year interaction. In the last model, acceleration and cylinders are each significant on their own, but their interaction is only marginally significant, with a p-value of 0.0549.

> (f) Try a few different transformations of the variables, such as log(X), sqrt(X), X². Comment on your findings.
Log_Disp <- lm(mpg~. + log(displacement), data = Auto_NN)
summary(Log_Disp)
## 
## Call:
## lm(formula = mpg ~ . + log(displacement), data = Auto_NN)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -12.1562  -1.8388  -0.0423   1.6999  11.7871 
## 
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)        4.529e+01  8.485e+00   5.337 1.62e-07 ***
## cylinders          3.391e-03  3.025e-01   0.011 0.991060    
## displacement       7.744e-02  9.655e-03   8.021 1.29e-14 ***
## horsepower        -4.380e-02  1.304e-02  -3.358 0.000864 ***
## weight            -4.536e-03  6.404e-04  -7.083 6.80e-12 ***
## acceleration      -1.352e-02  9.142e-02  -0.148 0.882479    
## year               7.827e-01  4.695e-02  16.671  < 2e-16 ***
## origin             4.485e-01  2.799e-01   1.602 0.109926    
## log(displacement) -1.537e+01  1.804e+00  -8.520 3.70e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 3.055 on 383 degrees of freedom
## Multiple R-squared:  0.8499, Adjusted R-squared:  0.8468 
## F-statistic: 271.1 on 8 and 383 DF,  p-value: < 2.2e-16
Square_Disp <- lm(mpg~.+I(displacement^2), data = Auto_NN)
summary(Square_Disp)
## 
## Call:
## lm(formula = mpg ~ . + I(displacement^2), data = Auto_NN)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -10.3584  -1.7222   0.0236   1.5766  11.9298 
## 
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)       -9.776e+00  4.271e+00  -2.289   0.0226 *  
## cylinders          7.073e-01  3.191e-01   2.216   0.0273 *  
## displacement      -1.041e-01  1.490e-02  -6.984 1.28e-11 ***
## horsepower        -5.848e-02  1.323e-02  -4.421 1.28e-05 ***
## weight            -4.231e-03  6.362e-04  -6.650 1.01e-10 ***
## acceleration      -2.391e-02  9.001e-02  -0.266   0.7907    
## year               7.622e-01  4.607e-02  16.544  < 2e-16 ***
## origin             4.035e-01  2.741e-01   1.472   0.1419    
## I(displacement^2)  2.077e-04  2.222e-05   9.345  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 3.007 on 383 degrees of freedom
## Multiple R-squared:  0.8546, Adjusted R-squared:  0.8516 
## F-statistic: 281.4 on 8 and 383 DF,  p-value: < 2.2e-16
Log_Weight <- lm(mpg~. + log(weight), data = Auto_NN)
summary(Log_Weight)
## 
## Call:
## lm(formula = mpg ~ . + log(weight), data = Auto_NN)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -9.6516 -1.6398 -0.1671  1.5973 12.7247 
## 
## Coefficients:
##                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  269.474171  31.136919   8.654  < 2e-16 ***
## cylinders     -0.498204   0.292415  -1.704  0.08924 .  
## displacement   0.013527   0.006832   1.980  0.04843 *  
## horsepower    -0.022137   0.012483  -1.773  0.07696 .  
## weight         0.007657   0.001631   4.694 3.73e-06 ***
## acceleration   0.045763   0.089486   0.511  0.60936    
## year           0.797808   0.046383  17.200  < 2e-16 ***
## origin         0.719552   0.262819   2.738  0.00647 ** 
## log(weight)  -41.320927   4.446725  -9.292  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 3.01 on 383 degrees of freedom
## Multiple R-squared:  0.8543, Adjusted R-squared:  0.8513 
## F-statistic: 280.8 on 8 and 383 DF,  p-value: < 2.2e-16

We can see that non-linear transformations of certain predictors can change their significance and the overall fit. Adding log(displacement) makes the transformed term highly significant, although the improvement over the original model is fairly modest, and adding displacement² behaves in much the same way.

However, adding log(weight) noticeably changed the significance of several of the predictors (the coefficient on weight even changes sign), so it might be beneficial to continue with this model.
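
A quick way to compare these fits side by side is to extract the adjusted \(R^2\) from each of the model objects fitted above:

# Adjusted R-squared for the baseline fit and each transformed fit
fits <- list(baseline   = lm.fit.Auto,
             log_disp   = Log_Disp,
             sq_disp    = Square_Disp,
             log_weight = Log_Weight)
sapply(fits, function(m) summary(m)$adj.r.squared)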

QUESTION 10

This question should be answered using the Carseats data set.

> (a) Fit a multiple regression model to predict Sales using Price, Urban and US.
names(Carseats)
##  [1] "Sales"       "CompPrice"   "Income"      "Advertising" "Population" 
##  [6] "Price"       "ShelveLoc"   "Age"         "Education"   "Urban"      
## [11] "US"
Carseats.lm <- lm(Sales~Price+Urban+US, data = Carseats)
summary(Carseats.lm)
## 
## Call:
## lm(formula = Sales ~ Price + Urban + US, data = Carseats)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -6.9206 -1.6220 -0.0564  1.5786  7.0581 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 13.043469   0.651012  20.036  < 2e-16 ***
## Price       -0.054459   0.005242 -10.389  < 2e-16 ***
## UrbanYes    -0.021916   0.271650  -0.081    0.936    
## USYes        1.200573   0.259042   4.635 4.86e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.472 on 396 degrees of freedom
## Multiple R-squared:  0.2393, Adjusted R-squared:  0.2335 
## F-statistic: 41.52 on 3 and 396 DF,  p-value: < 2.2e-16
> (b) Provide an interpretation of each coefficient in the model. Be careful - some of the variables in the model are qualitative!

The coefficients in the linear regression model represent the estimated effect of each predictor variable on the response. Since the coefficient for Price is -0.054459, each one-unit increase in Price is associated with a decrease in Sales of about 0.05, holding the other variables fixed.

Using ?Carseats we can see that Urban is “A factor with levels No and Yes to indicate whether the store is in an urban or rural location.” and US is “A factor with levels No and Yes to indicate whether the store is in the US or not.” The coefficients for these qualitative variables represent the difference in the mean response between each level and the baseline level (No): UrbanYes is about 0.02 units below the mean response for UrbanNo, and USYes is about 1.2 units above the mean response for USNo.

Carseats_NQ <- Carseats %>% select_if(is.numeric)  # keep only the quantitative columns to confirm which variables are qualitative
names(Carseats_NQ)
## [1] "Sales"       "CompPrice"   "Income"      "Advertising" "Population" 
## [6] "Price"       "Age"         "Education"
> (c) Write out the model in equation form, being careful to handle the qualitative variables properly.

\[ Sales = 13.04 - 0.05\,(Price) - 0.02\,(UrbanYes) + 1.20\,(USYes) \]

> (d) For which of the predictors can you reject the null hypothesis \(H_0 : \beta_j = 0\)?

The p-values for Price and USYes are statistically significant, so we can reject the null hypothesis for those two predictors; we cannot reject it for UrbanYes.

> (e) On the basis of your response to the previous question, fit a smaller model that only uses the predictors for which there is evidence of association with the outcome.
Carseats.lm2 <- lm(Sales~Price+US, data = Carseats)
summary(Carseats.lm2)
## 
## Call:
## lm(formula = Sales ~ Price + US, data = Carseats)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -6.9269 -1.6286 -0.0574  1.5766  7.0515 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 13.03079    0.63098  20.652  < 2e-16 ***
## Price       -0.05448    0.00523 -10.416  < 2e-16 ***
## USYes        1.19964    0.25846   4.641 4.71e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.469 on 397 degrees of freedom
## Multiple R-squared:  0.2393, Adjusted R-squared:  0.2354 
## F-statistic: 62.43 on 2 and 397 DF,  p-value: < 2.2e-16
> (f) How well do the models in (a) and (e) fit the data?

To assess the fit of each model we can look at the \(R^2\) and adjusted \(R^2\). The model in part (a) has \(R^2 = 0.2393\) and adjusted \(R^2 = 0.2335\); the model in part (e) has \(R^2 = 0.2393\) and adjusted \(R^2 = 0.2354\).

The \(R^2\) is the same for both, but because the adjusted \(R^2\) penalizes each additional predictor, removing Urban increases the adjusted \(R^2\) from 0.2335 to 0.2354. Both models still explain only about 24% of the variation in Sales.
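
Because the model in (e) is nested inside the model in (a), we can also compare them formally with a partial F-test using anova():

# Partial F-test: does adding Urban back into the smaller model improve it?
anova(Carseats.lm2, Carseats.lm)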

> (g) Using the model from (e), obtain 95% confidence intervals for the coefficient(s).
confint(Carseats.lm2, level = .95)
##                   2.5 %      97.5 %
## (Intercept) 11.79032020 14.27126531
## Price       -0.06475984 -0.04419543
## USYes        0.69151957  1.70776632
> (h) Is there evidence of outliers or high leverage observations in the model from (e)?
par(mfrow = c(2, 2))
plot(Carseats.lm2)

The Residuals vs. Leverage plot identifies a few observations with noticeably higher leverage than the rest, and the Q-Q plot shows a few mild outliers in the tails.

QUESTION 12

This problem involves simple linear regression without an intercept.

> (a) Recall that the coefficient estimate \(\hat{\beta}\) for the linear regression of \(Y\) onto \(X\) without an intercept is given by (3.38). Under what circumstance is the coefficient estimate for the regression of \(X\) onto \(Y\) the same as the coefficient estimate for the regression of \(Y\) onto \(X\)?

The coefficient estimate for the regression of \(Y\) onto \(X\) is \[\hat{\beta} = \frac{\sum_ix_iy_i}{\sum_jx_j^2}\] The coefficient estimate for the regression of \(X\) onto \(Y\) is \[\hat{\beta}' = \frac{\sum_ix_iy_i}{\sum_jy_j^2}\] The coefficients are the same if \[\sum_jx_j^2 = \sum_jy_j^2\]
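
As a quick numerical check of the formula (using arbitrary simulated data, chosen only for illustration), we can compare it to the estimate lm() produces:

set.seed(42)
x <- rnorm(100)
y <- 3 * x + rnorm(100)   # arbitrary data-generating values

sum(x * y) / sum(x^2)     # formula for the regression of Y onto X (no intercept)
coef(lm(y ~ x + 0))       # should match

sum(x * y) / sum(y^2)     # formula for the regression of X onto Y
coef(lm(x ~ y + 0))       # should match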

> (b) Generate an example in R with \(n = 100\) observations in which the coefficient estimate for the regression of \(X\) onto \(Y\) is different from the coefficient estimate for the regression of \(Y\) onto \(X\).
set.seed(1)
x <- 1:100
sum(x^2)
## [1] 338350
y <- 2 * x + rnorm(100, sd = 0.1)
sum(y^2)
## [1] 1353606
fit.Y <- lm(y ~ x + 0)
fit.X <- lm(x ~ y + 0)
summary(fit.Y)
## 
## Call:
## lm(formula = y ~ x + 0)
## 
## Residuals:
##       Min        1Q    Median        3Q       Max 
## -0.223590 -0.062560  0.004426  0.058507  0.230926 
## 
## Coefficients:
##    Estimate Std. Error t value Pr(>|t|)    
## x 2.0001514  0.0001548   12920   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.09005 on 99 degrees of freedom
## Multiple R-squared:      1,  Adjusted R-squared:      1 
## F-statistic: 1.669e+08 on 1 and 99 DF,  p-value: < 2.2e-16
summary(fit.X)
## 
## Call:
## lm(formula = x ~ y + 0)
## 
## Residuals:
##       Min        1Q    Median        3Q       Max 
## -0.115418 -0.029231 -0.002186  0.031322  0.111795 
## 
## Coefficients:
##   Estimate Std. Error t value Pr(>|t|)    
## y 5.00e-01   3.87e-05   12920   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.04502 on 99 degrees of freedom
## Multiple R-squared:      1,  Adjusted R-squared:      1 
## F-statistic: 1.669e+08 on 1 and 99 DF,  p-value: < 2.2e-16
> (c) Generate an example in R with \(n = 100\) observations in which the coefficient estimate for the regression of \(X\) onto \(Y\) is the same as the coefficient estimate for the regression of \(Y\) onto \(X\).
x <- 1:100
sum(x^2)
## [1] 338350
y <- 100:1
sum(y^2)
## [1] 338350
fit.Y <- lm(y ~ x + 0)
fit.X <- lm(x ~ y + 0)
summary(fit.Y)
## 
## Call:
## lm(formula = y ~ x + 0)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -49.75 -12.44  24.87  62.18  99.49 
## 
## Coefficients:
##   Estimate Std. Error t value Pr(>|t|)    
## x   0.5075     0.0866    5.86 6.09e-08 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 50.37 on 99 degrees of freedom
## Multiple R-squared:  0.2575, Adjusted R-squared:   0.25 
## F-statistic: 34.34 on 1 and 99 DF,  p-value: 6.094e-08
summary(fit.X)
## 
## Call:
## lm(formula = x ~ y + 0)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -49.75 -12.44  24.87  62.18  99.49 
## 
## Coefficients:
##   Estimate Std. Error t value Pr(>|t|)    
## y   0.5075     0.0866    5.86 6.09e-08 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 50.37 on 99 degrees of freedom
## Multiple R-squared:  0.2575, Adjusted R-squared:   0.25 
## F-statistic: 34.34 on 1 and 99 DF,  p-value: 6.094e-08