Load the necessary packages.

suppressMessages(library(UsingR))
suppressMessages(library(ggplot2))
suppressMessages(library(stats))
suppressMessages(library(lmtest)) # provides lrtest(), used in Question 3
suppressMessages(library(dplyr))
data("mtcars")

Question 1

Consider the mtcars data set. Fit a model with mpg as the outcome that includes number of cylinders as a factor variable and weight as a confounder. Give the adjusted estimate for the expected change in mpg comparing 8 cylinders to 4.

Answer:

mcyl<-relevel(factor(mtcars$cyl),"4")
fit<-lm(mpg~mcyl+wt, data = mtcars)
summary(fit)$coef[3] # third element of the coefficient matrix: the mcyl8 (8 vs. 4 cylinders) estimate
## [1] -6.07086
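
A slightly more defensive way to pull the same number is to index the coefficient table by row and column name rather than by position (a small sketch reusing the fit above):

summary(fit)$coef["mcyl8", "Estimate"] # same adjusted 8-vs-4 estimate, about -6.07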

Question 2

Consider the mtcars data set. Fit a model with mpg as the outcome that includes number of cylinders as a factor variable and weight as a possible confounding variable. Compare the effect of 8 versus 4 cylinders on mpg for the adjusted and unadjusted by weight models. Here, adjusted means including the weight variable as a term in the regression model and unadjusted means the model without weight included. What can be said about the effect comparing 8 and 4 cylinders after looking at models with and without weight included?

Answer:

mcyl<-relevel(factor(mtcars$cyl),"4")
fit_adjusted<-lm(mpg~mcyl+wt, data = mtcars)
summary(fit_adjusted)$coefficients
##              Estimate Std. Error   t value     Pr(>|t|)
## (Intercept) 33.990794  1.8877934 18.005569 6.257246e-17
## mcyl6       -4.255582  1.3860728 -3.070244 4.717834e-03
## mcyl8       -6.070860  1.6522878 -3.674214 9.991893e-04
## wt          -3.205613  0.7538957 -4.252065 2.130435e-04
fit_unadjusted<-lm(mpg~mcyl, data = mtcars)
summary(fit_unadjusted)$coefficients
##               Estimate Std. Error   t value     Pr(>|t|)
## (Intercept)  26.663636  0.9718008 27.437347 2.688358e-22
## mcyl6        -6.920779  1.5583482 -4.441099 1.194696e-04
## mcyl8       -11.563636  1.2986235 -8.904534 8.568209e-10

Holding weight constant, the number of cylinders appears to have less of an impact on mpg than when weight is disregarded.

It is both true and sensible that including weight attenuates the effect of the number of cylinders on mpg: heavier cars tend to have more cylinders, so part of the unadjusted cylinder effect is really a weight effect.
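
To see the attenuation side by side, the two 8-versus-4-cylinder estimates can be printed directly (a small sketch reusing fit_adjusted and fit_unadjusted from above):

coef(fit_unadjusted)["mcyl8"] # about -11.56 mpg when weight is ignored
coef(fit_adjusted)["mcyl8"]   # about -6.07 mpg once weight is held constant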

Question 3

Consider the mtcars data set. Fit a model with mpg as the outcome that considers number of cylinders as a factor variable and weight as a confounder. Now fit a second model with mpg as the outcome that considers the interaction between number of cylinders (as a factor variable) and weight. Give the P-value for the likelihood ratio test comparing the two models and suggest a model using 0.05 as a type I error rate significance benchmark.

Answer:

mcyl<-factor(mtcars$cyl)
fit1<-lm(mpg~mcyl+wt, data = mtcars)
fit2<-lm(mpg~mcyl+wt+mcyl*wt, data = mtcars)
lrtest(fit1, fit2) # likelihood ratio test from the lmtest package loaded above

## Likelihood ratio test
## 
## Model 1: mpg ~ mcyl + wt
## Model 2: mpg ~ mcyl + wt + mcyl * wt
##   #Df  LogLik Df  Chisq Pr(>Chisq)  
## 1   5 -73.311                       
## 2   7 -70.741  2 5.1412    0.07649 .
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The P-value (0.0765) is larger than 0.05, so according to our criterion we fail to reject the null hypothesis; this suggests that the interaction terms may not be necessary.
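
As a cross-check, the same nested-model comparison can be run with the base-R anova() F test, which does not need the lmtest package. Its P-value differs slightly from the likelihood ratio test but leads to the same conclusion here (a minimal sketch reusing fit1 and fit2 from above):

anova(fit1, fit2) # nested-model F test comparing the additive and interaction fits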

Question 4

Consider the mtcars data set. Fit a model with mpg as the outcome that includes number of cylinders as a factor variable and weight included in the model as

lm(mpg ~ I(wt * 0.5) + factor(cyl), data = mtcars)
## 
## Call:
## lm(formula = mpg ~ I(wt * 0.5) + factor(cyl), data = mtcars)
## 
## Coefficients:
##  (Intercept)   I(wt * 0.5)  factor(cyl)6  factor(cyl)8  
##       33.991        -6.411        -4.256        -6.071

How is the wt coefficient interpreted?

Answer:

mcyl<-factor(mtcars$cyl)
lm(mpg ~ I(wt * 0.5) + factor(cyl), data = mtcars)
## 
## Call:
## lm(formula = mpg ~ I(wt * 0.5) + factor(cyl), data = mtcars)
## 
## Coefficients:
##  (Intercept)   I(wt * 0.5)  factor(cyl)6  factor(cyl)8  
##       33.991        -6.411        -4.256        -6.071

The estimated expected change in MPG per one-ton increase in weight, holding the number of cylinders (4, 6, 8) constant. Since wt is recorded in units of 1,000 lb, multiplying by 0.5 rescales it to units of 2,000 lb (one ton), so a one-unit change in I(wt * 0.5) is a one-ton change in weight.
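
A quick way to confirm this is to compare against the unscaled model: the I(wt * 0.5) coefficient should be exactly twice the wt coefficient, because a one-unit change of wt * 0.5 corresponds to a two-unit (one-ton) change of wt. A minimal sketch:

2 * coef(lm(mpg ~ wt + factor(cyl), data = mtcars))["wt"]               # about -6.411
coef(lm(mpg ~ I(wt * 0.5) + factor(cyl), data = mtcars))["I(wt * 0.5)"] # about -6.411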

Question 5

x <- c(0.586, 0.166, -0.042, -0.614, 11.72)
y <- c(0.549, -0.026, -0.127, -0.751, 1.344)

Give the hat diagonal for the most influential point

Answer:

fit<-lm(y~x)
max(influence(fit)$hat)
## [1] 0.9945734
influence(fit)$hat
##         1         2         3         4         5 
## 0.2286650 0.2438146 0.2525027 0.2804443 0.9945734
## showing how the hat values are actually calculated
xm <- cbind(1, x)                          # design matrix with an intercept column
diag(xm %*% solve(t(xm) %*% xm) %*% t(xm)) # diagonal of the hat matrix H = X(X'X)^(-1)X'
## [1] 0.2286650 0.2438146 0.2525027 0.2804443 0.9945734
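
The same diagonal is returned by the convenience function hatvalues(), which also makes it easy to identify the influential observation (a small sketch reusing the fit above):

which.max(hatvalues(fit)) # observation 5 (x = 11.72) has the largest leverage
hatvalues(fit)[5]         # about 0.9946, matching the maximum shown above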

Question 6

Consider the following data set

x <- c(0.586, 0.166, -0.042, -0.614, 11.72)
y <- c(0.549, -0.026, -0.127, -0.751, 1.344)

Give the slope dfbeta for the point with the highest hat value.

Answer:

fit<-lm(y~x)
influence.measures(fit) 
## Influence measures of
##   lm(formula = y ~ x) :
## 
##    dfb.1_     dfb.x     dffit cov.r   cook.d   hat inf
## 1  1.0621 -3.78e-01    1.0679 0.341 2.93e-01 0.229   *
## 2  0.0675 -2.86e-02    0.0675 2.934 3.39e-03 0.244    
## 3 -0.0174  7.92e-03   -0.0174 3.007 2.26e-04 0.253   *
## 4 -1.2496  6.73e-01   -1.2557 0.342 3.91e-01 0.280   *
## 5  0.2043 -1.34e+02 -149.7204 0.107 2.70e+02 0.995   *
influence.measures(fit)$infmat[5, 'dfb.x'] ## extracting the slope dfbeta for point 5, the point with the highest hat value
## [1] -133.8226
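
For reference, dfbeta() gives the raw change in each coefficient when an observation is deleted, while influence.measures() and dfbetas() report the standardized version; the value extracted above is the standardized one. A short sketch using the same fit:

dfbeta(fit)[5, "x"]  # raw change in the slope from deleting point 5
dfbetas(fit)[5, "x"] # standardized version, about -133.8, matching the table above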

Question 7

Consider a regression relationship between Y and X with and without adjustment for a third variable Z. Which of the following is true about comparing the regression coefficient between Y and X with and without adjustment for Z?

Answer:

It is possible for the coefficient to reverse sign after adjustment. For example, it can be strongly significant and positive before adjustment and strongly significant and negative after adjustment.
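
A small simulation (made-up data, purely for illustration) shows how such a sign reversal can occur when X is strongly correlated with Z and Z has a large effect of the opposite sign:

set.seed(42)
n <- 100
z <- rnorm(n)
x <- z + rnorm(n, sd = 0.3)          # x is strongly correlated with z
y <- -x + 3 * z + rnorm(n, sd = 0.3) # y depends negatively on x but strongly positively on z
coef(lm(y ~ x))["x"]     # positive slope when z is ignored
coef(lm(y ~ x + z))["x"] # close to -1 once z is adjusted for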