Carefully explain the differences between the KNN classifier and KNN regression methods.
The KNN classifier estimates the conditional probability of class \(j\) as the fraction of points in the neighbourhood \(N_0\) whose response values equal \(j\), and then assigns the test observation to the class with the largest estimated probability. KNN regression, in contrast, predicts \(Y\) for a given value of \(X\) by averaging the responses of the \(K\) training observations closest to \(X\). In short, the classifier handles a qualitative response by majority vote, while the regression method handles a quantitative response by averaging.
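As a minimal sketch of the two methods side by side (assuming the class and FNN packages are installed; the data is simulated purely for illustration):

library(class)   # knn() for classification
library(FNN)     # knn.reg() for regression
x.train <- matrix(rnorm(100), ncol = 2)                  # 50 training points, 2 predictors
x.test  <- matrix(rnorm(10), ncol = 2)                   # 5 test points
cl <- factor(sample(c("A", "B"), 50, replace = TRUE))    # qualitative response
y  <- rnorm(50)                                          # quantitative response
knn(x.train, x.test, cl, k = 3)            # classification: majority vote among the 3 nearest neighbours
knn.reg(x.train, x.test, y, k = 3)$pred    # regression: average response of the 3 nearest neighbours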
This question involves the use of multiple linear regression on the Auto data set.
library(ISLR)
attach(Auto)
(a) Produce a scatterplot matrix which includes all of the variables in the data set.
pairs(Auto)
(b) Compute the matrix of correlations between the variables using the function cor(). You will need to exclude the name variable, which is qualitative.
names(Auto)
## [1] "mpg" "cylinders" "displacement" "horsepower" "weight"
## [6] "acceleration" "year" "origin" "name"
cor(Auto[1:8])
## mpg cylinders displacement horsepower weight
## mpg 1.0000000 -0.7776175 -0.8051269 -0.7784268 -0.8322442
## cylinders -0.7776175 1.0000000 0.9508233 0.8429834 0.8975273
## displacement -0.8051269 0.9508233 1.0000000 0.8972570 0.9329944
## horsepower -0.7784268 0.8429834 0.8972570 1.0000000 0.8645377
## weight -0.8322442 0.8975273 0.9329944 0.8645377 1.0000000
## acceleration 0.4233285 -0.5046834 -0.5438005 -0.6891955 -0.4168392
## year 0.5805410 -0.3456474 -0.3698552 -0.4163615 -0.3091199
## origin 0.5652088 -0.5689316 -0.6145351 -0.4551715 -0.5850054
## acceleration year origin
## mpg 0.4233285 0.5805410 0.5652088
## cylinders -0.5046834 -0.3456474 -0.5689316
## displacement -0.5438005 -0.3698552 -0.6145351
## horsepower -0.6891955 -0.4163615 -0.4551715
## weight -0.4168392 -0.3091199 -0.5850054
## acceleration 1.0000000 0.2903161 0.2127458
## year 0.2903161 1.0000000 0.1815277
## origin 0.2127458 0.1815277 1.0000000
(c) Use the lm() function to perform a multiple linear regression with mpg as the response and all other variables except name as the predictors. Use the summary() function to print the results.
fit<-lm(mpg~.-name,data=Auto)
summary(fit)
##
## Call:
## lm(formula = mpg ~ . - name, data = Auto)
##
## Residuals:
## Min 1Q Median 3Q Max
## -9.5903 -2.1565 -0.1169 1.8690 13.0604
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -17.218435 4.644294 -3.707 0.00024 ***
## cylinders -0.493376 0.323282 -1.526 0.12780
## displacement 0.019896 0.007515 2.647 0.00844 **
## horsepower -0.016951 0.013787 -1.230 0.21963
## weight -0.006474 0.000652 -9.929 < 2e-16 ***
## acceleration 0.080576 0.098845 0.815 0.41548
## year 0.750773 0.050973 14.729 < 2e-16 ***
## origin 1.426141 0.278136 5.127 4.67e-07 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.328 on 384 degrees of freedom
## Multiple R-squared: 0.8215, Adjusted R-squared: 0.8182
## F-statistic: 252.4 on 7 and 384 DF, p-value: < 2.2e-16
Comment on the output. For instance:
i. Is there a relationship between the predictors and the response?
The p-value of the F-statistic is far below 0.05, so we can reject the null hypothesis that all of the coefficients are zero; there is evidence of a relationship between at least some of the predictors and the response.
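To make this concrete, the p-value of the overall F-test can be recovered from the fitted model (a small sketch using the fit object defined above):

fstat <- summary(fit)$fstatistic
pf(fstat["value"], fstat["numdf"], fstat["dendf"], lower.tail = FALSE)   # essentially zero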
ii. Which predictors appear to have a statistically significant relationship to the response?
According to the coefficient table, displacement, weight, year, and origin have a statistically significant relationship to the response (p < 0.05).
iii. What does the coefficient for the year variable suggest?
It suggests that, holding the other predictors fixed, an increase of one year is associated with an increase of about 0.75 in mpg; that is, fuel efficiency improved by roughly three-quarters of a mile per gallon per model year.
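As an illustration (the predictor values for the two hypothetical cars below are made up, not taken from the data), two cars that are identical except for model year differ in predicted mpg by exactly the year coefficient:

car.77 <- data.frame(cylinders = 4, displacement = 100, horsepower = 90,
                     weight = 2500, acceleration = 15, year = 77, origin = 1)
car.78 <- transform(car.77, year = 78)          # the same car, one model year later
predict(fit, car.78) - predict(fit, car.77)     # about 0.75 mpg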
(d) Use the plot() function to produce diagnostic plots of the linear regression fit. Comment on any problems you see with the fit. Do the residual plots suggest any unusually large outliers? Does the leverage plot identify any observations with unusually high leverage?
par(mfrow = c(2,2))
plot(fit)
The residuals-vs-fitted plot shows some curvature, suggesting the linear fit does not fully capture the relationship. The standardized-residual plots flag a few unusually large outliers, and the residuals-vs-leverage plot identifies observation 14 as having unusually high leverage.
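These impressions can be checked numerically with base-R helpers (a quick sketch on the same fit):

sort(abs(rstudent(fit)), decreasing = TRUE)[1:3]   # largest studentized residuals (candidate outliers)
which.max(hatvalues(fit))                          # index of the highest-leverage observation (point 14 in the plot)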
(e) Use the * and : symbols to fit linear regression models with interaction effects. Do any interactions appear to be statistically significant?
summary(lm(mpg~cylinders*displacement+weight*displacement+weight*cylinders+acceleration*weight))
##
## Call:
## lm(formula = mpg ~ cylinders * displacement + weight * displacement +
## weight * cylinders + acceleration * weight)
##
## Residuals:
## Min 1Q Median 3Q Max
## -12.886 -2.540 -0.361 1.778 17.262
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 5.279e+01 9.222e+00 5.724 2.1e-08 ***
## cylinders 1.596e+00 2.103e+00 0.759 0.44840
## displacement -1.082e-01 4.244e-02 -2.549 0.01119 *
## weight -1.069e-02 3.612e-03 -2.961 0.00326 **
## acceleration -9.211e-02 4.033e-01 -0.228 0.81945
## cylinders:displacement 1.134e-03 4.032e-03 0.281 0.77866
## displacement:weight 2.625e-05 8.966e-06 2.927 0.00363 **
## cylinders:weight -4.834e-04 6.910e-04 -0.700 0.48462
## weight:acceleration 1.318e-04 1.406e-04 0.937 0.34910
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 4.07 on 383 degrees of freedom
## Multiple R-squared: 0.7337, Adjusted R-squared: 0.7281
## F-statistic: 131.9 on 8 and 383 DF, p-value: < 2.2e-16
Using the three predictors most highly correlated with mpg from the correlation matrix, plus the seemingly related combination of acceleration and weight, only the displacement:weight interaction appears statistically significant.
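Equivalently, a single interaction can be specified with the `:` operator and tested against the additive model with anova(); a sketch (the object names fit.add and fit.int are just illustrative, output not shown):

fit.add <- lm(mpg~displacement+weight, data=Auto)
fit.int <- lm(mpg~displacement+weight+displacement:weight, data=Auto)
anova(fit.add, fit.int)   # F-test for the displacement:weight interaction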
(f) Try a few different transformations of the variables, such as \(\log(X)\), \(\sqrt{X}\), \(X^2\). Comment on your findings.
lm.fit2=lm((mpg)~.-name+log(weight)+sqrt(weight)+I(weight^2),data=Auto)
summary(lm.fit2)
##
## Call:
## lm(formula = (mpg) ~ . - name + log(weight) + sqrt(weight) +
## I(weight^2), data = Auto)
##
## Residuals:
## Min 1Q Median 3Q Max
## -9.6949 -1.6067 -0.2302 1.6548 12.2013
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -5.420e+03 2.617e+03 -2.071 0.03899 *
## cylinders -2.764e-01 3.214e-01 -0.860 0.39035
## displacement 1.517e-02 6.807e-03 2.228 0.02646 *
## horsepower -2.820e-02 1.255e-02 -2.246 0.02525 *
## weight 7.703e-01 3.920e-01 1.965 0.05011 .
## acceleration 7.095e-02 8.966e-02 0.791 0.42924
## year 7.964e-01 4.604e-02 17.299 < 2e-16 ***
## origin 7.700e-01 2.613e-01 2.947 0.00341 **
## log(weight) 1.204e+03 5.835e+02 2.064 0.03970 *
## sqrt(weight) -1.169e+02 5.724e+01 -2.043 0.04175 *
## I(weight^2) -1.861e-05 1.064e-05 -1.748 0.08126 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.985 on 381 degrees of freedom
## Multiple R-squared: 0.8575, Adjusted R-squared: 0.8538
## F-statistic: 229.3 on 10 and 381 DF, p-value: < 2.2e-16
For this example using weight, the log and square-root terms are statistically significant at the 5% level, whereas the squared term is only marginally significant (p ≈ 0.08).
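To judge whether the three added weight terms are worthwhile as a group, one option is a nested-model comparison against the fit from (c); other transformations, such as log(horsepower), can be tried the same way (a sketch, output not shown):

anova(fit, lm.fit2)   # joint F-test for log(weight), sqrt(weight) and I(weight^2)
summary(lm(mpg~.-name-horsepower+log(horsepower), data=Auto))   # e.g. replace horsepower by log(horsepower)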
This question should be answered using the Carseats data set.
library(ISLR)
attach(Carseats)
(a) Fit a multiple regression model to predict Sales using Price,Urban, and US.
fit<-lm(Sales~Price+Urban+US)
summary(fit)
##
## Call:
## lm(formula = Sales ~ Price + Urban + US)
##
## Residuals:
## Min 1Q Median 3Q Max
## -6.9206 -1.6220 -0.0564 1.5786 7.0581
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 13.043469 0.651012 20.036 < 2e-16 ***
## Price -0.054459 0.005242 -10.389 < 2e-16 ***
## UrbanYes -0.021916 0.271650 -0.081 0.936
## USYes 1.200573 0.259042 4.635 4.86e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.472 on 396 degrees of freedom
## Multiple R-squared: 0.2393, Adjusted R-squared: 0.2335
## F-statistic: 41.52 on 3 and 396 DF, p-value: < 2.2e-16
(b) Provide an interpretation of each coefficient in the model. Be careful—some of the variables in the model are qualitative!
From the table above, Price and US are significant predictors of Sales. Holding the other predictors fixed, each $1 increase in price is associated with a decrease of about 54 units in sales (Sales is recorded in thousands of units), and stores in the US sell about 1,201 more units on average than stores outside the US. The coefficient on Urban is small and not statistically significant, so there is no evidence that urban location affects Sales.
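The dummy coding behind UrbanYes and USYes can be checked with contrasts(), which confirms that R codes Yes as 1 and treats No as the baseline level:

contrasts(Urban)   # Yes = 1, No = 0 (baseline)
contrasts(US)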
(c) Write out the model in equation form, being careful to handle the qualitative variables properly. \(Sales=13.043469 - 0.054459Price - 0.021916Urban_{Yes} + 1.200573US_{Yes}\)
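For illustration, a non-urban store in the US charging \(Price = 100\) has predicted sales of \(13.043469 - 0.054459 \times 100 + 1.200573 \approx 8.80\), i.e. roughly 8,800 car seats, since Sales is recorded in thousands of units.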
(d) For which of the predictors can you reject the null hypothesis \(H_0 : \beta_j = 0\)?
We can reject the null hypothesis for Price and US (and the intercept), but not for Urban.
(e) On the basis of your response to the previous question, fit a smaller model that only uses the predictors for which there is evidence of association with the outcome.
fit<-lm(Sales~Price+US)
summary(fit)
##
## Call:
## lm(formula = Sales ~ Price + US)
##
## Residuals:
## Min 1Q Median 3Q Max
## -6.9269 -1.6286 -0.0574 1.5766 7.0515
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 13.03079 0.63098 20.652 < 2e-16 ***
## Price -0.05448 0.00523 -10.416 < 2e-16 ***
## USYes 1.19964 0.25846 4.641 4.71e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.469 on 397 degrees of freedom
## Multiple R-squared: 0.2393, Adjusted R-squared: 0.2354
## F-statistic: 62.43 on 2 and 397 DF, p-value: < 2.2e-16
(f) How well do the models in (a) and (e) fit the data?
Not especially well: each model explains only about 24% of the variance in Sales (\(R^2 \approx 0.239\)), although the overall F-tests are highly significant.
(g) Using the model from (e), obtain 95 % confidence intervals for the coefficient(s).
confint(fit)
## 2.5 % 97.5 %
## (Intercept) 11.79032020 14.27126531
## Price -0.06475984 -0.04419543
## USYes 0.69151957 1.70776632
(h) Is there evidence of outliers or high leverage observations in the model from (e)?
R has built-in functions that can help identify influential points using various statistics with one simple command. Researchers have suggested several cutoff levels for how much influence an observation may have before it is considered an outlier or a high-leverage point. For example, the average leverage is \(\frac{p+1}{n}\), which for this model is \(\frac{2+1}{400} = 0.0075\).
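As a quick check against that cutoff before turning to influence.measures(), the hat values can be compared with a multiple of the average leverage (a sketch; twice the average leverage is a common rule of thumb):

lev <- hatvalues(fit)
which(lev > 2*(2+1)/400)   # observations with more than twice the average leverage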
par (mfrow=c(2,2))
plot(fit)
summary(influence.measures(fit))
## Potentially influential observations of
## lm(formula = Sales ~ Price + US) :
##
## dfb.1_ dfb.Pric dfb.USYs dffit cov.r cook.d hat
## 26 0.24 -0.18 -0.17 0.28_* 0.97_* 0.03 0.01
## 29 -0.10 0.10 -0.10 -0.18 0.97_* 0.01 0.01
## 43 -0.11 0.10 0.03 -0.11 1.05_* 0.00 0.04_*
## 50 -0.10 0.17 -0.17 0.26_* 0.98 0.02 0.01
## 51 -0.05 0.05 -0.11 -0.18 0.95_* 0.01 0.00
## 58 -0.05 -0.02 0.16 -0.20 0.97_* 0.01 0.01
## 69 -0.09 0.10 0.09 0.19 0.96_* 0.01 0.01
## 126 -0.07 0.06 0.03 -0.07 1.03_* 0.00 0.03_*
## 160 0.00 0.00 0.00 0.01 1.02_* 0.00 0.02
## 166 0.21 -0.23 -0.04 -0.24 1.02 0.02 0.03_*
## 172 0.06 -0.07 0.02 0.08 1.03_* 0.00 0.02
## 175 0.14 -0.19 0.09 -0.21 1.03_* 0.02 0.03_*
## 210 -0.14 0.15 -0.10 -0.22 0.97_* 0.02 0.01
## 270 -0.03 0.05 -0.03 0.06 1.03_* 0.00 0.02
## 298 -0.06 0.06 -0.09 -0.15 0.97_* 0.01 0.00
## 314 -0.05 0.04 0.02 -0.05 1.03_* 0.00 0.02_*
## 353 -0.02 0.03 0.09 0.15 0.97_* 0.01 0.00
## 357 0.02 -0.02 0.02 -0.03 1.03_* 0.00 0.02
## 368 0.26 -0.23 -0.11 0.27_* 1.01 0.02 0.02_*
## 377 0.14 -0.15 0.12 0.24 0.95_* 0.02 0.01
## 384 0.00 0.00 0.00 0.00 1.02_* 0.00 0.02
## 387 -0.03 0.04 -0.03 0.05 1.02_* 0.00 0.02
## 396 -0.05 0.05 0.08 0.14 0.98_* 0.01 0.00
R flags a few observations that violate the rule of thumb for one or more influence measures. A typical approach is to report both a regression fit to all of the data and one fit with the flagged observations removed, and compare the two.
outlying.obs<-c(26,29,43,50,51,58,69,126,160,166,172,175,210,270,298,314,353,357,368,377,384,387,396)
Carseats.small<-Carseats[-outlying.obs,]
fit2<-lm(Sales~Price+US,data=Carseats.small)
summary(fit2)
##
## Call:
## lm(formula = Sales ~ Price + US, data = Carseats.small)
##
## Residuals:
## Min 1Q Median 3Q Max
## -5.263 -1.605 -0.039 1.590 5.428
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 12.925232 0.665259 19.429 < 2e-16 ***
## Price -0.053973 0.005511 -9.794 < 2e-16 ***
## USYes 1.255018 0.248856 5.043 7.15e-07 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.29 on 374 degrees of freedom
## Multiple R-squared: 0.2387, Adjusted R-squared: 0.2347
## F-statistic: 58.64 on 2 and 374 DF, p-value: < 2.2e-16
With these potential outliers and influential observations removed, very little changes relative to the model fit to the full data set. The confidence intervals for the coefficients from the full-data fit contain the coefficient estimates from the reduced fit, so it is safe to include all of the data points in the model.
This problem involves simple linear regression without an intercept.
(a) Recall that the coefficient estimate \(\hat{\beta}\) for the linear regression of \(Y\) onto \(X\) without an intercept is given by \((3.38)\). Under what circumstance is the coefficient estimate for the regression of \(X\) onto \(Y\) the same as the coefficient estimate for the regression of \(Y\) onto \(X\)?
From (3.38), the coefficient for the regression of \(Y\) onto \(X\) without an intercept is \(\hat{\beta} = \sum_{i=1}^{n} x_i y_i \big/ \sum_{i=1}^{n} x_i^2\), and for the regression of \(X\) onto \(Y\) it is \(\sum_{i=1}^{n} x_i y_i \big/ \sum_{i=1}^{n} y_i^2\). Setting the two estimates equal, the numerators cancel, leaving the condition \(\sum_{i=1}^{n} x_i^2 = \sum_{i=1}^{n} y_i^2\): the two coefficient estimates are the same exactly when the sums of squares of the observed \(x\) and \(y\) values are equal.
(b) Generate an example in R with \(n = 100\) observations in which the coefficient estimate for the regression of \(X\) onto \(Y\) is different from the coefficient estimate for the regression of \(Y\) onto \(X\).
x=rnorm(100, mean=10, sd=10)
y=2*x+rnorm(100,mean=5,sd=20)
lm.fit.b1<-lm(y~x+0)
lm.fit.b2<-lm(x~y+0)
coef(lm.fit.b1)
## x
## 2.263171
coef(lm.fit.b2)
## y
## 0.3129044
(c) Generate an example in R with \(n = 100\) observations in which the coefficient estimate for the regression of \(X\) onto \(Y\) is the same as the coefficient estimate for the regression of \(Y\) onto \(X\).
x=rnorm(100, mean=10, sd=10)
sum.X.e2=sum(x^2)         # target: sum(y^2) must equal sum(x^2)
y.ult.e2=sum.X.e2/50      # largest term of an arithmetic sequence of squared values
r=y.ult.e2/(100-1)        # common difference so that 100 terms run from 0 to y.ult.e2
y.e2=seq(0,y.ult.e2,r)    # 100 equally spaced squared values summing to sum(x^2)
y=sqrt(y.e2)              # take square roots so that sum(y^2) = sum(x^2)
lm.fit.c1<-lm(y~x+0)
lm.fit.c2<-lm(x~y+0)
coef(lm.fit.c1)
## x
## 0.5501627
coef(lm.fit.c2)
## y
## 0.5501627
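A simpler alternative construction (not the approach used above, just an illustrative sketch) is to let y be a permutation of x, since then \(\sum_{i=1}^{n} y_i^2 = \sum_{i=1}^{n} x_i^2\) holds automatically:

x=rnorm(100)
y=sample(x)        # same values in a different order, so sum(y^2) equals sum(x^2)
coef(lm(y~x+0))
coef(lm(x~y+0))    # the two coefficient estimates are identical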