Consider the following data with x as the predictor and y as the outcome.
x <- c(0.61, 0.93, 0.83, 0.35, 0.54, 0.16, 0.91, 0.62, 0.62)
y <- c(0.67, 0.84, 0.6, 0.18, 0.85, 0.47, 1.1, 0.65, 0.36)
Give a P-value for the two-sided hypothesis test of whether β1 from a linear regression model is 0 or not.
x <- c(0.61, 0.93, 0.83, 0.35, 0.54, 0.16, 0.91, 0.62, 0.62)
y <- c(0.67, 0.84, 0.6, 0.18, 0.85, 0.47, 1.1, 0.65, 0.36)
n <- length(y)
beta1 <- cor(y, x) * sd(y) / sd(x)
beta0 <- mean(y) - beta1 * mean(x)
yhat <- beta0 + beta1 * x
e <- y - yhat # residuals
sigma <- sqrt(sum(e^2) / (n - 2)) # estimate of the residual standard deviation
ssx <- sum((x - mean(x))^2) # sum of squares of the centered x
seBeta0 <- (1 / n + mean(x) ^ 2 / ssx) ^ .5 * sigma # se for beta0
seBeta1 <- sigma / sqrt(ssx) # se for beta1
tBeta0 <- beta0 / seBeta0 # t-statistic for beta0
tBeta1 <- beta1 / seBeta1 # t-statistic for beta1
pBeta0 <- 2 * pt(abs(tBeta0), df = n - 2, lower.tail = FALSE) # p-value beta0
pBeta1 <- 2 * pt(abs(tBeta1), df = n - 2, lower.tail = FALSE) # p-value beta1
coefTable <- rbind(c(beta0, seBeta0, tBeta0, pBeta0), c(beta1, seBeta1, tBeta1, pBeta1))
colnames(coefTable) <- c("Estimate", "Std. Error", "t value", "P(>|t|)")
rownames(coefTable) <- c("(Intercept)", "x")
coefTable
## Estimate Std. Error t value P(>|t|)
## (Intercept) 0.1884572 0.2061290 0.9142681 0.39098029
## x 0.7224211 0.3106531 2.3254912 0.05296439
# Of course, R's built-in summary does this for us: summary(lm(y ~ x))$coefficients
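As a quick check with base R's lm and summary (fit0 is just a throwaway name for this check), the built-in coefficient table reproduces the values above, and the requested p-value can be read off directly.
fit0 <- lm(y ~ x) # throwaway name for this check
summary(fit0)$coefficients # same table as coefTable above
summary(fit0)$coefficients["x", "Pr(>|t|)"] # the two-sided p-value for beta1, about 0.053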
Consider the previous problem. Give the estimate of the residual standard deviation.
sigma
## [1] 0.2229981
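Similarly, the built-in summary reports the same residual standard deviation:
summary(lm(y ~ x))$sigma # same value as the hand-computed sigma, about 0.223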
In the mtcars data set, fit a linear regression model of weight (predictor) on mpg (outcome). Get a 95% confidence interval for the expected mpg at the average weight. What is the lower endpoint?
data(mtcars)
fit <- lm(mpg~wt, data=mtcars)
predict(fit, newdata = data.frame(wt = mean(mtcars$wt)), interval = "confidence")
## fit lwr upr
## 1 20.09062 18.99098 21.19027
#or
fit <- lm(mpg ~ I(wt - mean(wt)), data = mtcars)
confint(fit)
## 2.5 % 97.5 %
## (Intercept) 18.990982 21.190268
## I(wt - mean(wt)) -6.486308 -4.202635
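A hand computation is a useful cross-check here (a sketch using the same standard-error formulas as in the first problem): at the mean weight the fitted value is just the mean mpg, and its standard error reduces to sigma / sqrt(n).
n <- nrow(mtcars)
s <- summary(lm(mpg ~ wt, data = mtcars))$sigma # residual standard deviation
mean(mtcars$mpg) + c(-1, 1) * qt(.975, df = n - 2) * s / sqrt(n) # roughly 18.99 to 21.19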
Refer to the previous question. Read the help file for mtcars. What is the weight coefficient interpreted as?
The estimated expected change in mpg per 1,000 lb increase in weight.
It can’t be interpreted without further information.
The estimated 1,000 lb change in weight per 1 mpg increase.
The estimated expected change in mpg per 1 lb increase in weight.
#help("mtcars")
Consider again the mtcars data set and a linear regression model with mpg as predicted by weight (1,000 lbs). A new car is coming that weighs 3,000 pounds. Construct a 95% prediction interval for its mpg. What is the upper endpoint?
fit <- lm(mpg ~ wt, data = mtcars)
predict(fit, newdata = data.frame(wt = 3), interval = "prediction")
## fit lwr upr
## 1 21.25171 14.92987 27.57355
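The same interval can be reproduced by hand (a sketch using the standard prediction-interval formula, with sigma and the sum of squares of wt recomputed for the mtcars fit):
n <- nrow(mtcars)
s <- summary(fit)$sigma
ssx <- sum((mtcars$wt - mean(mtcars$wt))^2)
sePred <- s * sqrt(1 + 1 / n + (3 - mean(mtcars$wt))^2 / ssx) # se for a new observation at wt = 3
sum(coef(fit) * c(1, 3)) + c(-1, 1) * qt(.975, df = n - 2) * sePred # roughly 14.93 to 27.57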
Consider again the mtcars data set and a linear regression model with mpg as predicted by weight (in 1,000 lbs). A “short” ton is defined as 2,000 lbs. Construct a 95% confidence interval for the expected change in mpg per 1 short ton increase in weight. Give the lower endpoint.
sumCoef <- summary(fit)$coefficients
meanCoef <- sumCoef[2, 1] # slope estimate (mpg per 1,000 lbs)
stdErrCoef <- sumCoef[2, 2] # standard error of the slope
df <- fit$df.residual
(meanCoef + c(-1, 1) * qt(.975, df = df) * stdErrCoef) * 2 # multiply by 2 for a 2,000 lb (short ton) change
## [1] -12.97262 -8.40527
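Equivalently, confint gives the per-1,000 lb interval directly, and doubling it converts to a per-short-ton change (a quick check):
confint(fit)[2, ] * 2 # same endpoints, roughly -12.97 and -8.41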
If my X from a linear regression is measured in centimeters and I convert it to meters, what would happen to the slope coefficient?
It would get multiplied by 100.
It would get multiplied by 10.
It would get divided by 100.
It would get divided by 10.
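No new data are needed; a minimal sketch with mtcars (treating wt as if it were measured in centimeters) shows that dividing the predictor by 100 multiplies the slope by 100:
fit_cm <- lm(mpg ~ wt, data = mtcars) # pretend wt is in "centimeters"
fit_m <- lm(mpg ~ I(wt / 100), data = mtcars) # the same predictor converted to "meters"
unname(coef(fit_m)[2] / coef(fit_cm)[2]) # ratio of slopes is 100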
I have an outcome, Y, and a predictor, X, and fit a linear regression model with Y = β0 + β1 X + ε to obtain the estimates β̂0 and β̂1. What would be the consequence for the subsequent slope and intercept if I were to refit the model with a new regressor, X + c, for some constant c?
x <- mtcars$wt
y <- mtcars$mpg
c <- 1 # constant
fit<-lm(y~x) # red
fit2 <- lm(y ~ I(x + c)) # blue
{plot(x,y)
abline(fit,col="red")
abline(fit2,col="blue")}
# The new intercept would be β̂0 - c * β̂1; the slope is unchanged
# fit2$coefficients - fit$coefficients
# shows that the new intercept equals the old intercept minus c * (-5.344), i.e. it increases by 5.344, while the slope difference is 0
# Coursera note:
# If Y = β0 + β1 X + ε then Y = (β0 - c β1) + β1 (X + c) + ε, so the intercept has c β1 subtracted from it and the slope does not change
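A direct numerical check on the two fits above confirms the note (all.equal comparisons, ignoring coefficient names):
all.equal(unname(coef(fit2)[1]), unname(coef(fit)[1] - c * coef(fit)[2])) # TRUE: intercept becomes beta0_hat - c * beta1_hat
all.equal(unname(coef(fit2)[2]), unname(coef(fit)[2])) # TRUE: slope is unchanged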
Refer back to the mtcars data set with mpg as an outcome and weight (wt) as the predictor. About what is the ratio of the sum of the squared errors, \[ \sum_{i=1}^n (Y_i - \hat Y_i)^2, \] when comparing a model with just an intercept (denominator) to the model with the intercept and slope (numerator)?
fit <- lm(mpg ~ wt, data = mtcars)
sum((mtcars$mpg - (fit$coefficients[1] + fit$coefficients[2] * mtcars$wt))^2) / sum((mtcars$mpg - mean(mtcars$mpg))^2) # SSE of the intercept-and-slope model over SSE of the intercept-only model
## [1] 0.2471672
# Coursera answer: via R squared (the SSE ratio equals 1 - R^2)
fit1 <- lm(mpg ~ wt, data = mtcars)
fit2 <- lm(mpg ~ 1, data = mtcars)
1 - summary(fit1)$r.squared
## [1] 0.2471672
# Coursera answer: via the explicit sums of squared errors
sse1 <- sum((predict(fit1) - mtcars$mpg)^2)
sse2 <- sum((predict(fit2) - mtcars$mpg)^2)
sse1/sse2
## [1] 0.2471672
Do the residuals always have to sum to 0 in linear regression?
The residuals never sum to zero.
The residuals must always sum to zero.
If an intercept is included, then they will sum to 0.
If an intercept is included, the residuals most likely won’t sum to zero.
(When an intercept is included, the least-squares normal equations force the residuals to sum to zero, as the checks below show.)
sum(resid(fit)) # intercept and slope (mpg ~ wt): residuals sum to zero up to numerical error
## [1] -1.637579e-15
sum(resid(lm(y ~ x - 1))) # no intercept: the residuals need not sum to zero
## [1] 98.11672
sum(resid(lm(y ~ 1))) # intercept-only model: residuals again sum to zero
## [1] -5.995204e-15