1. This question involves the use of multiple linear regression on the Auto data set.
(a) Produce a scatterplot matrix which includes all of the variables in the data set.
# Requires: ISLR (for Auto), tidyverse, GGally (for ggpairs and ggcorr)
Auto %>% select(mpg:origin) %>% 
  ggpairs()

(b) Compute the matrix of correlations between the variables using the function cor(). You will need to exclude the name variable, which is qualitative.
Auto %>% select(mpg:origin) %>% 
  ggcorr(palette = "RdBu", label = TRUE)  # plots the correlation matrix as a labelled heatmap
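
The question also asks for the numeric correlation matrix itself; cor() on the quantitative columns (everything except name) gives it directly. A minimal sketch:

# Correlation matrix of the quantitative variables, excluding the qualitative name column
Auto %>% 
  select(-name) %>% 
  cor() %>% 
  round(2)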

(c) Use the lm() function to perform a multiple linear regression with mpg as the response and all other variables except name as the predictors. Use the summary() function to print the results.
Auto %>% select(mpg:origin) -> dt
fit <- lm(mpg ~ ., data = dt)
summary(fit)
## 
## Call:
## lm(formula = mpg ~ ., data = dt)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -9.5903 -2.1565 -0.1169  1.8690 13.0604 
## 
## Coefficients:
##                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  -17.218435   4.644294  -3.707  0.00024 ***
## cylinders     -0.493376   0.323282  -1.526  0.12780    
## displacement   0.019896   0.007515   2.647  0.00844 ** 
## horsepower    -0.016951   0.013787  -1.230  0.21963    
## weight        -0.006474   0.000652  -9.929  < 2e-16 ***
## acceleration   0.080576   0.098845   0.815  0.41548    
## year           0.750773   0.050973  14.729  < 2e-16 ***
## origin         1.426141   0.278136   5.127 4.67e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 3.328 on 384 degrees of freedom
## Multiple R-squared:  0.8215, Adjusted R-squared:  0.8182 
## F-statistic: 252.4 on 7 and 384 DF,  p-value: < 2.2e-16

Comment on the output. For instance:

i. Is there a relationship between the predictors and the response?

Yes: the F-statistic of 252.4 on 7 and 384 degrees of freedom has a p-value below 2.2e-16, so we can reject the null hypothesis that all of the regression coefficients are zero; at least some predictors are related to mpg.

ii. Which predictors appear to have a statistically significant relationship to the response?

Judging by the individual p-values, displacement, weight, year and origin have a statistically significant relationship with mpg, while cylinders, horsepower and acceleration do not.

iii. What does the coefficient for the year variable suggest?

The coefficient of about 0.75 suggests that, holding the other predictors fixed, mpg increases by roughly 0.75 for each additional model year: newer cars are more fuel efficient. (A short code check of (i) and (iii) follows.)
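
As a quick check of (i) and (iii), the overall F-test p-value and a confidence interval for the year coefficient can be pulled straight from the fitted model; a minimal sketch:

# p-value of the overall F-test (full model vs intercept-only model)
fstat <- summary(fit)$fstatistic
pf(fstat["value"], fstat["numdf"], fstat["dendf"], lower.tail = FALSE)

# 95% confidence interval for the per-model-year change in mpg
confint(fit, "year")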

(d) Use the plot() function to produce diagnostic plots of the linear regression fit. Comment on any problems you see with the fit. Do the residual plots suggest any unusually large outliers? Does the leverage plot identify any observations with unusually high leverage?

library(broom)
# augment() returns the data together with fitted values (.fitted), residuals (.resid),
# standardised residuals (.std.resid), leverage (.hat) and Cook's distance (.cooksd)
model <- broom::augment(fit)

# Residuals vs fitted values
residual <- function(model) {
  ggplot(model, aes(.fitted, .resid)) +
    geom_point() +
    geom_hline(yintercept = 0) +
    geom_smooth(se = FALSE)
}

# Standardised residuals vs fitted values
stdResidual <- function(model) {
  ggplot(model, aes(.fitted, .std.resid)) +
    geom_point() +
    geom_hline(yintercept = 0) +
    geom_smooth(se = FALSE)
}

# Normal Q-Q plot of the standardised residuals
ggqqplot <- function(model) {
  ggplot(model) +
    stat_qq(aes(sample = .std.resid)) +
    geom_abline()
}

# Scale-location plot: sqrt(|standardised residuals|) vs fitted values
sqrtResidual <- function(model) {
  ggplot(model, aes(.fitted, sqrt(abs(.std.resid)))) +
    geom_point() +
    geom_smooth(se = FALSE)
}

# Cook's distance for each observation (observation index on the x-axis)
cooksD <- function(model) {
  ggplot(model, aes(seq_along(.cooksd), .cooksd)) +
    geom_col()
}

# Standardised residuals vs leverage
hatPlot <- function(model) {
  ggplot(model, aes(.hat, .std.resid)) +
    geom_vline(size = 2, colour = "white", xintercept = 0) +
    geom_hline(size = 2, colour = "white", yintercept = 0) +
    geom_point() +
    geom_smooth(se = FALSE)
}

# Standardised residuals vs leverage, with point size mapped to Cook's distance
hatCooksD <- function(model) {
  ggplot(model, aes(.hat, .std.resid)) +
    geom_point(aes(size = .cooksd)) +
    geom_smooth(se = FALSE, size = 0.5)
}

# Cook's distance vs leverage, with white reference lines of increasing slope
hatCooksD2 <- function(model) {
  ggplot(model, aes(.hat, .cooksd)) +
    geom_vline(xintercept = 0, colour = NA) +
    geom_abline(slope = seq(0, 3, by = 0.5), colour = "white") +
    geom_smooth(se = FALSE) +
    geom_point()
}

# Apply each plotting function to the augmented data and arrange the panels in a 2-column grid
purrr::invoke_map(.f = list(residual, stdResidual, ggqqplot, sqrtResidual,
                            cooksD, hatPlot, hatCooksD, hatCooksD2),
                  .x = list(list(model))) %>% 
    gridExtra::grid.arrange(grobs = ., ncol = 2, 
                          top = paste("Diagnostic plots for", as.expression(fit$call)))

There is a trend in the variance of the residuals: the model fits noticeably worse at higher fitted values, with the residual plot showing both curvature and increasing spread. The leverage and Cook's distance panels also flag a few influential, high-leverage observations that likely contribute to this.
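
To pin down which observations the leverage and Cook's distance panels are flagging, the augmented data can be filtered with the usual rules of thumb (leverage above twice the mean leverage, Cook's distance above 4/n; these thresholds are conventions, not part of the exercise):

# Observations with unusually high leverage or large Cook's distance
model %>% 
  mutate(obs = row_number()) %>% 
  filter(.hat > 2 * mean(.hat) | .cooksd > 4 / n()) %>% 
  select(obs, .hat, .cooksd, .std.resid) %>% 
  arrange(desc(.cooksd))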

(e) Use the * and : symbols to fit linear regression models with interaction effects. Do any interactions appear to be statistically significant?
summary(lm(mpg ~ .*., dt))
## 
## Call:
## lm(formula = mpg ~ . * ., data = dt)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -7.6303 -1.4481  0.0596  1.2739 11.1386 
## 
## Coefficients:
##                             Estimate Std. Error t value Pr(>|t|)   
## (Intercept)                3.548e+01  5.314e+01   0.668  0.50475   
## cylinders                  6.989e+00  8.248e+00   0.847  0.39738   
## displacement              -4.785e-01  1.894e-01  -2.527  0.01192 * 
## horsepower                 5.034e-01  3.470e-01   1.451  0.14769   
## weight                     4.133e-03  1.759e-02   0.235  0.81442   
## acceleration              -5.859e+00  2.174e+00  -2.696  0.00735 **
## year                       6.974e-01  6.097e-01   1.144  0.25340   
## origin                    -2.090e+01  7.097e+00  -2.944  0.00345 **
## cylinders:displacement    -3.383e-03  6.455e-03  -0.524  0.60051   
## cylinders:horsepower       1.161e-02  2.420e-02   0.480  0.63157   
## cylinders:weight           3.575e-04  8.955e-04   0.399  0.69000   
## cylinders:acceleration     2.779e-01  1.664e-01   1.670  0.09584 . 
## cylinders:year            -1.741e-01  9.714e-02  -1.793  0.07389 . 
## cylinders:origin           4.022e-01  4.926e-01   0.816  0.41482   
## displacement:horsepower   -8.491e-05  2.885e-04  -0.294  0.76867   
## displacement:weight        2.472e-05  1.470e-05   1.682  0.09342 . 
## displacement:acceleration -3.479e-03  3.342e-03  -1.041  0.29853   
## displacement:year          5.934e-03  2.391e-03   2.482  0.01352 * 
## displacement:origin        2.398e-02  1.947e-02   1.232  0.21875   
## horsepower:weight         -1.968e-05  2.924e-05  -0.673  0.50124   
## horsepower:acceleration   -7.213e-03  3.719e-03  -1.939  0.05325 . 
## horsepower:year           -5.838e-03  3.938e-03  -1.482  0.13916   
## horsepower:origin          2.233e-03  2.930e-02   0.076  0.93931   
## weight:acceleration        2.346e-04  2.289e-04   1.025  0.30596   
## weight:year               -2.245e-04  2.127e-04  -1.056  0.29182   
## weight:origin             -5.789e-04  1.591e-03  -0.364  0.71623   
## acceleration:year          5.562e-02  2.558e-02   2.174  0.03033 * 
## acceleration:origin        4.583e-01  1.567e-01   2.926  0.00365 **
## year:origin                1.393e-01  7.399e-02   1.882  0.06062 . 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.695 on 363 degrees of freedom
## Multiple R-squared:  0.8893, Adjusted R-squared:  0.8808 
## F-statistic: 104.2 on 28 and 363 DF,  p-value: < 2.2e-16

At the 5% level, the only statistically significant interaction terms (those marked with at least one *) are displacement:year, acceleration:year and acceleration:origin.
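
The same conclusion can be read off programmatically by tidying the interaction fit and keeping only the interaction rows; a minimal sketch:

# Interaction terms significant at the 5% level
broom::tidy(lm(mpg ~ . * ., data = dt)) %>% 
  filter(grepl(":", term), p.value < 0.05)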

(f) Try a few different transformations of the variables, such as log(X), √X, X². Comment on your findings.
# Log-transform the skewed predictors and compare adjusted R-squared with the original fit
fit2 <- lm(mpg ~ cylinders + log(displacement) + log(horsepower) + log(weight) +
             acceleration + year + origin, data = dt)
map(list(fit, fit2), summary) %>% 
  map('adj.r.squared')
## [[1]]
## [1] 0.8182238
## 
## [[2]]
## [1] 0.8450547

The relationships between mpg and predictors such as displacement, horsepower and weight do not look linear in the pairwise and residual plots, so a log transformation of those predictors is worth considering; it raises the adjusted R² from about 0.818 to 0.845.
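
The same adjusted-R² comparison extends to the other transformations the exercise mentions (√X and X²); the particular terms below are illustrative choices only:

# Compare a few candidate specifications by adjusted R-squared
fits <- list(
  original   = fit,
  logs       = fit2,
  sqrt_hp    = lm(mpg ~ . - horsepower + sqrt(horsepower), data = dt),
  hp_squared = lm(mpg ~ . + I(horsepower^2), data = dt)
)
map_dbl(fits, ~ summary(.x)$adj.r.squared)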

Problem 2: Work on Problem 10 in Section 4.7 (Exercises) on page 171. You are required to work on parts (a)-(d), but I encourage you to try parts (e)-(i) if you are familiar with KNN, LDA and QDA.

This question should be answered using the Weekly data set, which is part of the ISLR package. This data is similar in nature to the Smarket data from this chapter's lab, except that it contains 1,089 weekly returns for 21 years, from the beginning of 1990 to the end of 2010.
(a) Produce some numerical and graphical summaries of the Weekly data. Do there appear to be any patterns?
summary(Weeklyd)
##       Year           Lag1               Lag2               Lag3         
##  Min.   :1990   Min.   :-18.1950   Min.   :-18.1950   Min.   :-18.1950  
##  1st Qu.:1995   1st Qu.: -1.1540   1st Qu.: -1.1540   1st Qu.: -1.1580  
##  Median :2000   Median :  0.2410   Median :  0.2410   Median :  0.2410  
##  Mean   :2000   Mean   :  0.1506   Mean   :  0.1511   Mean   :  0.1472  
##  3rd Qu.:2005   3rd Qu.:  1.4050   3rd Qu.:  1.4090   3rd Qu.:  1.4090  
##  Max.   :2010   Max.   : 12.0260   Max.   : 12.0260   Max.   : 12.0260  
##       Lag4               Lag5              Volume       
##  Min.   :-18.1950   Min.   :-18.1950   Min.   :0.08747  
##  1st Qu.: -1.1580   1st Qu.: -1.1660   1st Qu.:0.33202  
##  Median :  0.2380   Median :  0.2340   Median :1.00268  
##  Mean   :  0.1458   Mean   :  0.1399   Mean   :1.57462  
##  3rd Qu.:  1.4090   3rd Qu.:  1.4050   3rd Qu.:2.05373  
##  Max.   : 12.0260   Max.   : 12.0260   Max.   :9.32821  
##      Today          Direction 
##  Min.   :-18.1950   Down:484  
##  1st Qu.: -1.1540   Up  :605  
##  Median :  0.2410             
##  Mean   :  0.1499             
##  3rd Qu.:  1.4050             
##  Max.   : 12.0260
ggpairs(Weeklyd)

ggcorr(Weeklyd, palette = "RdBu", label = TRUE)

The only clear pattern is a very high correlation between Year and Volume.
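
The strength of that relationship can be checked directly; a minimal sketch (the plot simply visualises the growth in trading volume over time):

# Correlation between Year and Volume, and the volume trend over time
cor(Weeklyd$Year, Weeklyd$Volume)
ggplot(Weeklyd, aes(Year, Volume)) +
  geom_point(alpha = 0.3) +
  geom_smooth(se = FALSE)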

(b) Use the full data set to perform a logistic regression with Direction as the response and the five lag variables plus Volume as predictors. Use the summary() function to print the results. Do any of the predictors appear to be statistically significant? If so, which ones?

# Direction ~ Lag1 + ... + Lag5 + Volume (everything except Today and Year)
fit <- glm(Direction ~ . - Today - Year, family = binomial, data = Weeklyd)
summary(fit)
## 
## Call:
## glm(formula = Direction ~ . - Today - Year, family = binomial, 
##     data = Weeklyd)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.6949  -1.2565   0.9913   1.0849   1.4579  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)   
## (Intercept)  0.26686    0.08593   3.106   0.0019 **
## Lag1        -0.04127    0.02641  -1.563   0.1181   
## Lag2         0.05844    0.02686   2.175   0.0296 * 
## Lag3        -0.01606    0.02666  -0.602   0.5469   
## Lag4        -0.02779    0.02646  -1.050   0.2937   
## Lag5        -0.01447    0.02638  -0.549   0.5833   
## Volume      -0.02274    0.03690  -0.616   0.5377   
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 1496.2  on 1088  degrees of freedom
## Residual deviance: 1486.4  on 1082  degrees of freedom
## AIC: 1500.4
## 
## Number of Fisher Scoring iterations: 4

Only Lag2 appears to be statistically significant (p ≈ 0.03).
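
Because this is a logistic model, the Lag2 coefficient is easier to interpret on the odds scale; a minimal sketch:

# Multiplicative change in the odds of an Up week per one-unit increase in Lag2
exp(coef(fit)["Lag2"])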

(c) Compute the confusion matrix and overall fraction of correct predictions. Explain what the confusion matrix is telling you about the types of mistakes made by logistic regression.
# Direction is coded 0 = Down, 1 = Up in the observed column
# Note: augment() returns .fitted on the link (log-odds) scale by default, so the 0.5 cutoff
# below is applied to the log-odds rather than to the predicted probability
augment(fit) %>% mutate(observed = as.numeric(Direction) - 1) %>% 
  select(observed, .fitted) %>% mutate(fitted = ifelse(.fitted > .5, '1', '0')) %>% 
  select(-.fitted) %>% 
  count(observed, fitted) %>% 
  spread(fitted, n)
## # A tibble: 2 x 3
##   observed   `0`   `1`
## *    <dbl> <int> <int>
## 1        0   465    19
## 2        1   563    42

With this cutoff the model classifies the great majority of weeks as Down: it recovers most of the actual Down weeks (465 of 484) but misses most of the Up weeks (563 of 605), for an overall fraction correct of (465 + 42)/1089 ≈ 0.47.
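
Part (c) also asks for the overall fraction of correct predictions; with predictions on the probability scale and the conventional 0.5 cutoff, it can be computed as follows (a minimal sketch):

# Overall fraction correct with a 0.5 cutoff on the predicted probability of Up
augment(fit, type.predict = "response") %>% 
  mutate(predicted = ifelse(.fitted > 0.5, "Up", "Down")) %>% 
  summarise(accuracy = mean(predicted == Direction))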

(d) Now fit the logistic regression model using a training data period from 1990 to 2008, with Lag2 as the only predictor. Compute the confusion matrix and the overall fraction of correct predictions for the held out data (that is, the data from 2009 and 2010).
fit2 <- glm(Direction ~ Lag2, family = binomial, data = filter(Weeklyd, Year <= 2008))
# Confusion matrix on the training data (1990-2008); as in (c), .fitted is on the log-odds scale
augment(fit2) %>% mutate(observed = as.numeric(Direction) - 1) %>% 
  select(observed, .fitted) %>% mutate(fitted = ifelse(.fitted > .5, '1', '0')) %>% 
  select(-.fitted) %>% 
  count(observed, fitted) %>% 
  spread(fitted, n)
## # A tibble: 2 x 3
##   observed   `0`   `1`
## *    <dbl> <int> <int>
## 1        0   439     2
## 2        1   533    11
# Confusion matrix on the held-out 2009-2010 weeks, shown as proportions of the test set
select(filter(Weeklyd, Year > 2008), Direction) %>% 
  mutate(fitted = ifelse(predict.glm(fit2, filter(Weeklyd, Year > 2008), type = 'response') > .5, 1, 0),
         observed = as.numeric(Direction) - 1) %>% 
  count(observed, fitted) %>% 
  mutate(prop = n / sum(n)) %>% 
  select(-n) %>% 
  spread(fitted, prop)
## # A tibble: 2 x 3
##   observed        `0`       `1`
## *    <dbl>      <dbl>     <dbl>
## 1        0 0.08653846 0.3269231
## 2        1 0.04807692 0.5384615

On the held-out 2009-2010 weeks the overall fraction of correct predictions is about 0.087 + 0.538 ≈ 0.625, better than a coin flip; the model achieves this largely by predicting Up, so most of its errors are Down weeks classified as Up.
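
The held-out fraction correct can also be computed directly rather than read off the proportions; a minimal sketch:

# Overall fraction correct on the 2009-2010 test weeks
testWeeks <- filter(Weeklyd, Year > 2008)
testPred  <- ifelse(predict(fit2, testWeeks, type = "response") > 0.5, "Up", "Down")
mean(testPred == testWeeks$Direction)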

(e) Repeat (d) using LDA.
library(MASS)  # note: MASS masks dplyr::select
# LDA with Lag2 only, trained on 1990-2008 and applied to the held-out 2009-2010 weeks
fitLDA <- lda(formula(fit2), data = filter(Weeklyd, Year <= 2008))
predict(fitLDA, filter(Weeklyd, Year > 2008))$class
##   [1] Up   Up   Down Down Up   Up   Up   Down Down Down Down Up   Up   Up  
##  [15] Up   Up   Up   Up   Up   Up   Down Up   Up   Up   Up   Up   Up   Up  
##  [29] Up   Up   Up   Up   Up   Up   Up   Up   Up   Up   Up   Up   Up   Up  
##  [43] Up   Up   Down Up   Up   Up   Up   Up   Up   Up   Up   Up   Up   Up  
##  [57] Down Up   Up   Up   Up   Up   Up   Up   Up   Up   Up   Up   Up   Up  
##  [71] Up   Down Up   Down Up   Up   Up   Up   Down Down Up   Up   Up   Up  
##  [85] Up   Down Up   Up   Up   Up   Up   Up   Up   Up   Up   Up   Up   Up  
##  [99] Up   Up   Up   Up   Up   Up  
## Levels: Down Up
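
To complete the comparison with (d), the LDA predictions can be tabulated against the observed 2009-2010 directions; a minimal sketch:

# Confusion matrix for LDA on the held-out weeks
ldaPred <- predict(fitLDA, filter(Weeklyd, Year > 2008))$class
table(predicted = ldaPred, observed = filter(Weeklyd, Year > 2008)$Direction)
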
(f) Repeat (d) using QDA.
# NOTE: this calls lda() rather than MASS::qda(), so it simply refits the LDA model from (e);
# that is why the predictions below are identical to those above (a corrected sketch follows)
fitQDA <- lda(formula(fit2), data = filter(Weeklyd, Year <= 2008))
predict(fitQDA, filter(Weeklyd, Year > 2008))$class
##   [1] Up   Up   Down Down Up   Up   Up   Down Down Down Down Up   Up   Up  
##  [15] Up   Up   Up   Up   Up   Up   Down Up   Up   Up   Up   Up   Up   Up  
##  [29] Up   Up   Up   Up   Up   Up   Up   Up   Up   Up   Up   Up   Up   Up  
##  [43] Up   Up   Down Up   Up   Up   Up   Up   Up   Up   Up   Up   Up   Up  
##  [57] Down Up   Up   Up   Up   Up   Up   Up   Up   Up   Up   Up   Up   Up  
##  [71] Up   Down Up   Down Up   Up   Up   Up   Down Down Up   Up   Up   Up  
##  [85] Up   Down Up   Up   Up   Up   Up   Up   Up   Up   Up   Up   Up   Up  
##  [99] Up   Up   Up   Up   Up   Up  
## Levels: Down Up
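
As flagged in the comment above, that chunk refits LDA; a corrected version would call qda() instead (its output is not reproduced here). A minimal sketch:

# QDA with Lag2 only, trained on 1990-2008 and evaluated on the 2009-2010 weeks
fitQDA <- MASS::qda(Direction ~ Lag2, data = filter(Weeklyd, Year <= 2008))
qdaPred <- predict(fitQDA, filter(Weeklyd, Year > 2008))$class
table(predicted = qdaPred, observed = filter(Weeklyd, Year > 2008)$Direction)
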
(g) Repeat (d) using KNN with K = 1.
detach("package:MASS", unload=TRUE)
library(class)
train <- sample(1:nrow(Weekly), 900)
train.X <- cbind(Weekly$Lag1 ,Weekly$Lag2)[train ,]
test.X <-  cbind(Weekly$Lag1 ,Weekly$Lag2)[-train,]
train.Direction <-  Weekly$Direction[train]
test.Direction <-  Weekly$Direction[-train]

set.seed(1)
knn.pred <- knn(train.X, test.X, train.Direction, k=1)
table(knn.pred , test.Direction)
##         test.Direction
## knn.pred Down Up
##     Down   44 37
##     Up     37 71
# Repeat with K = 3 for comparison
knn.pred <- knn(train.X, test.X, train.Direction, k = 3)
table(knn.pred, test.Direction)
##         test.Direction
## knn.pred Down Up
##     Down   34 39
##     Up     47 69
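
For a like-for-like comparison with parts (d)-(f), KNN can also be run with the year-based split and Lag2 as the only predictor; a minimal sketch, assuming the Weeklyd data frame used above:

# KNN (K = 1) on Lag2 with the 1990-2008 training / 2009-2010 test split from (d)
trainIdx <- Weeklyd$Year <= 2008
set.seed(1)
knnPred <- knn(train = matrix(Weeklyd$Lag2[trainIdx]),
               test  = matrix(Weeklyd$Lag2[!trainIdx]),
               cl    = Weeklyd$Direction[trainIdx], k = 1)
table(predicted = knnPred, observed = Weeklyd$Direction[!trainIdx])
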
(h) Which of these methods appears to provide the best results on this data?
(i) Experiment with different combinations of predictors, including possible transformations and interactions, for each of the methods. Report the variables, method, and associated confusion matrix that appears to provide the best results on the held out data. Note that you should also experiment with values for K in the KNN classifier (a starting sketch for the K sweep follows below).
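
One way to start on (i) is to sweep over several values of K with the random split already defined in part (g) and record the held-out accuracy; a minimal sketch, reusing train.X, test.X, train.Direction and test.Direction from above:

# Held-out accuracy for a range of K, reusing the split from part (g)
ks <- c(1, 3, 5, 10, 25, 50)
acc <- sapply(ks, function(k) {
  set.seed(1)
  pred <- knn(train.X, test.X, train.Direction, k = k)
  mean(pred == test.Direction)
})
setNames(acc, paste0("K = ", ks))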