ANSWER 2)
KNN classifier:
Given a positive integer K and a test observation x0, the KNN classifier first identifies the K points in the training data that are closest to x0, represented by N0. It then estimates the conditional probability of each class as the fraction of points in N0 belonging to that class, and assigns x0 to the class with the highest estimated probability.
Just as in the regression setting, there is not a strong relationship between the training error rate and the test error rate. With K = 1, the KNN training error rate is 0, but the test error rate may be quite high.
In general, as we use more flexible classification methods, the training error rate will decline but the test error rate may not. With K = 1, the decision boundary is overly flexible, while with K = 100 it is not sufficiently flexible.
KNN regression: given K and a prediction point x0, KNN regression also identifies the K training observations closest to x0 (again N0), but instead of taking a majority vote over classes it estimates f(x0) as the average of the training responses in N0. The classifier therefore produces a qualitative label, while KNN regression produces a quantitative prediction.
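A minimal sketch (an illustrative addition, not part of the original answer) of the K = 1 training-error claim, using class::knn on simulated data; the simulated data and variable names here are assumptions.
require(class)  # provides knn()
set.seed(1)
train.X <- matrix(rnorm(200), ncol = 2)
train.y <- factor(ifelse(train.X[, 1] + train.X[, 2] > 0, "A", "B"))
# With K = 1 each training point is its own nearest neighbor, so training error is 0
pred.k1 <- knn(train.X, train.X, train.y, k = 1)
mean(pred.k1 != train.y)
# With a larger K the decision boundary is smoother and the training error typically rises
pred.k50 <- knn(train.X, train.X, train.y, k = 50)
mean(pred.k50 != train.y)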
(9a) Produce a scatterplot matrix which includes all of the variables in the data set.
ANSWER 9a)
require(ISLR)
## Loading required package: ISLR
pairs(Auto)
(9b) Compute the matrix of correlations between the variables using the function cor(). You will need to exclude the name variable, which is qualitative.
ANSWER 9b)
cor(subset(Auto, select=-name))
## mpg cylinders displacement horsepower weight
## mpg 1.0000000 -0.7776175 -0.8051269 -0.7784268 -0.8322442
## cylinders -0.7776175 1.0000000 0.9508233 0.8429834 0.8975273
## displacement -0.8051269 0.9508233 1.0000000 0.8972570 0.9329944
## horsepower -0.7784268 0.8429834 0.8972570 1.0000000 0.8645377
## weight -0.8322442 0.8975273 0.9329944 0.8645377 1.0000000
## acceleration 0.4233285 -0.5046834 -0.5438005 -0.6891955 -0.4168392
## year 0.5805410 -0.3456474 -0.3698552 -0.4163615 -0.3091199
## origin 0.5652088 -0.5689316 -0.6145351 -0.4551715 -0.5850054
## acceleration year origin
## mpg 0.4233285 0.5805410 0.5652088
## cylinders -0.5046834 -0.3456474 -0.5689316
## displacement -0.5438005 -0.3698552 -0.6145351
## horsepower -0.6891955 -0.4163615 -0.4551715
## weight -0.4168392 -0.3091199 -0.5850054
## acceleration 1.0000000 0.2903161 0.2127458
## year 0.2903161 1.0000000 0.1815277
## origin 0.2127458 0.1815277 1.0000000
(9c) Use the lm() function to perform a multiple linear regression with mpg as the response and all other variables except name as the predictors. Use the summary() function to print the results. Comment on the output.
df.lm <- lm(mpg~.-name, data=Auto)
summary(df.lm)
##
## Call:
## lm(formula = mpg ~ . - name, data = Auto)
##
## Residuals:
## Min 1Q Median 3Q Max
## -9.5903 -2.1565 -0.1169 1.8690 13.0604
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -17.218435 4.644294 -3.707 0.00024 ***
## cylinders -0.493376 0.323282 -1.526 0.12780
## displacement 0.019896 0.007515 2.647 0.00844 **
## horsepower -0.016951 0.013787 -1.230 0.21963
## weight -0.006474 0.000652 -9.929 < 2e-16 ***
## acceleration 0.080576 0.098845 0.815 0.41548
## year 0.750773 0.050973 14.729 < 2e-16 ***
## origin 1.426141 0.278136 5.127 4.67e-07 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.328 on 384 degrees of freedom
## Multiple R-squared: 0.8215, Adjusted R-squared: 0.8182
## F-statistic: 252.4 on 7 and 384 DF, p-value: < 2.2e-16
ANSWER 9c-i)
Yes, there is a relationship between the predictors and the response: the F-statistic of 252.4 has a p-value below 2.2e-16, so we reject the null hypothesis that all regression coefficients are zero.
ANSWER 9c-ii)
The following predictors have a statistically significant relationship to the response: weight, year, origin, and displacement.
ANSWER 9c-iii)
The coefficient for year (0.7508) suggests that, with the other predictors held fixed, mpg increases by about 0.75 per model year; in other words, cars have become more fuel efficient over time.
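The list in 9c-ii can also be pulled programmatically (an illustrative addition, not part of the original answer):
coefs <- summary(df.lm)$coefficients
rownames(coefs)[coefs[, "Pr(>|t|)"] < 0.05]  # note this also returns the intercept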
(9d) Use the plot() function to produce diagnostic plots of the linear regression fit. Comment on any problems you see with the fit.
ANSWER 9d)
The diagnostic plots below speak to two questions (see Section 3.3.3, Potential Problems): Do the residual plots suggest any unusually large outliers? Does the leverage plot identify any observations with unusually high leverage?
par(mfrow=c(2,2))
plot(df.lm)
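Two quick numerical checks that complement the plots (an illustrative addition, not part of the original answer):
rstudent(df.lm)[abs(rstudent(df.lm)) > 3]  # studentized residuals beyond |3| flag possible outliers
which.max(hatvalues(df.lm))                # observation with the highest leverage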
(9e) Use the * and : symbols to fit linear regression models with interaction effects. Do any interactions appear to be statistically significant?
df.lm_0 <- lm(mpg~displacement+weight+year+origin, data=Auto)
df.lm_1 <- lm(mpg~displacement+weight+year*origin, data=Auto)
df.lm_2 <- lm(mpg~displacement+origin+year*weight, data=Auto)
df.lm_3 <- lm(mpg~year+origin+displacement*weight, data=Auto)
summary(df.lm_0)
##
## Call:
## lm(formula = mpg ~ displacement + weight + year + origin, data = Auto)
##
## Residuals:
## Min 1Q Median 3Q Max
## -9.8102 -2.1129 -0.0388 1.7725 13.2085
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -1.861e+01 4.028e+00 -4.620 5.25e-06 ***
## displacement 5.588e-03 4.768e-03 1.172 0.242
## weight -6.575e-03 5.571e-04 -11.802 < 2e-16 ***
## year 7.714e-01 4.981e-02 15.486 < 2e-16 ***
## origin 1.226e+00 2.670e-01 4.593 5.92e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.346 on 387 degrees of freedom
## Multiple R-squared: 0.8181, Adjusted R-squared: 0.8162
## F-statistic: 435.1 on 4 and 387 DF, p-value: < 2.2e-16
summary(df.lm_1)
##
## Call:
## lm(formula = mpg ~ displacement + weight + year * origin, data = Auto)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.7541 -1.8722 -0.0936 1.6900 12.4650
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 7.927e+00 8.873e+00 0.893 0.372229
## displacement 1.551e-03 4.859e-03 0.319 0.749735
## weight -6.394e-03 5.526e-04 -11.571 < 2e-16 ***
## year 4.313e-01 1.130e-01 3.818 0.000157 ***
## origin -1.449e+01 4.707e+00 -3.079 0.002225 **
## year:origin 2.023e-01 6.047e-02 3.345 0.000904 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.303 on 386 degrees of freedom
## Multiple R-squared: 0.8232, Adjusted R-squared: 0.8209
## F-statistic: 359.5 on 5 and 386 DF, p-value: < 2.2e-16
summary(df.lm_2)
##
## Call:
## lm(formula = mpg ~ displacement + origin + year * weight, data = Auto)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.9402 -1.8736 -0.0966 1.5924 12.2125
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -1.076e+02 1.290e+01 -8.339 1.34e-15 ***
## displacement -4.020e-04 4.558e-03 -0.088 0.929767
## origin 9.116e-01 2.547e-01 3.579 0.000388 ***
## year 1.962e+00 1.716e-01 11.436 < 2e-16 ***
## weight 2.605e-02 4.552e-03 5.722 2.12e-08 ***
## year:weight -4.305e-04 5.967e-05 -7.214 2.89e-12 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.145 on 386 degrees of freedom
## Multiple R-squared: 0.8397, Adjusted R-squared: 0.8376
## F-statistic: 404.4 on 5 and 386 DF, p-value: < 2.2e-16
summary(df.lm_3)
##
## Call:
## lm(formula = mpg ~ year + origin + displacement * weight, data = Auto)
##
## Residuals:
## Min 1Q Median 3Q Max
## -10.6119 -1.7290 -0.0115 1.5609 12.5584
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -8.007e+00 3.798e+00 -2.108 0.0357 *
## year 8.194e-01 4.518e-02 18.136 < 2e-16 ***
## origin 3.567e-01 2.574e-01 1.386 0.1666
## displacement -7.148e-02 9.176e-03 -7.790 6.27e-14 ***
## weight -1.054e-02 6.530e-04 -16.146 < 2e-16 ***
## displacement:weight 2.104e-05 2.214e-06 9.506 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.016 on 386 degrees of freedom
## Multiple R-squared: 0.8526, Adjusted R-squared: 0.8507
## F-statistic: 446.5 on 5 and 386 DF, p-value: < 2.2e-16
ANSWER 9e) Yes, the interactions appear to be statistically significant: in each model above the interaction term (year:origin, year:weight, displacement:weight) has a p-value well below 0.05.
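The same conclusion can be reached with a nested-model F-test (an illustrative addition, not part of the original answer):
anova(df.lm_0, df.lm_1)  # tests whether adding year:origin improves on the additive model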
(9f) Try a few different transformations of the variables, such as log(X), √X, X². Comment on your findings.
df.lm_4 <- lm(mpg~poly(displacement,3)+weight+year+origin, data=Auto)
df.lm_5 <- lm(mpg~displacement+I(log(weight))+year+origin, data=Auto)
df.lm_6 <- lm(mpg~displacement+I(weight^2)+year+origin, data=Auto)
summary(df.lm_4)
##
## Call:
## lm(formula = mpg ~ poly(displacement, 3) + weight + year + origin,
## data = Auto)
##
## Residuals:
## Min 1Q Median 3Q Max
## -11.8131 -1.8012 0.0788 1.5566 12.3181
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -2.342e+01 3.802e+00 -6.160 1.84e-09 ***
## poly(displacement, 3)1 -1.701e+01 9.820e+00 -1.732 0.0840 .
## poly(displacement, 3)2 2.840e+01 3.610e+00 7.866 3.74e-14 ***
## poly(displacement, 3)3 -7.996e+00 3.164e+00 -2.527 0.0119 *
## weight -5.285e-03 5.419e-04 -9.753 < 2e-16 ***
## year 8.189e-01 4.660e-02 17.572 < 2e-16 ***
## origin 2.422e-01 2.761e-01 0.877 0.3810
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.102 on 385 degrees of freedom
## Multiple R-squared: 0.8445, Adjusted R-squared: 0.842
## F-statistic: 348.4 on 6 and 385 DF, p-value: < 2.2e-16
summary(df.lm_5)
##
## Call:
## lm(formula = mpg ~ displacement + I(log(weight)) + year + origin,
## data = Auto)
##
## Residuals:
## Min 1Q Median 3Q Max
## -9.7136 -1.9214 0.0447 1.5790 12.9864
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 131.274483 11.082986 11.845 < 2e-16 ***
## displacement 0.007711 0.004052 1.903 0.057810 .
## I(log(weight)) -21.584745 1.451851 -14.867 < 2e-16 ***
## year 0.804835 0.046532 17.296 < 2e-16 ***
## origin 0.836143 0.250485 3.338 0.000925 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.113 on 387 degrees of freedom
## Multiple R-squared: 0.8425, Adjusted R-squared: 0.8409
## F-statistic: 517.7 on 4 and 387 DF, p-value: < 2.2e-16
summary(df.lm_6)
##
## Call:
## lm(formula = mpg ~ displacement + I(weight^2) + year + origin,
## data = Auto)
##
## Residuals:
## Min 1Q Median 3Q Max
## -10.0988 -2.2549 -0.1057 1.8704 13.4702
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -2.609e+01 4.349e+00 -5.999 4.56e-09 ***
## displacement -9.114e-03 5.118e-03 -1.781 0.0757 .
## I(weight^2) -7.068e-07 9.075e-08 -7.789 6.28e-14 ***
## year 7.336e-01 5.380e-02 13.635 < 2e-16 ***
## origin 1.488e+00 2.900e-01 5.132 4.56e-07 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.628 on 387 degrees of freedom
## Multiple R-squared: 0.7861, Adjusted R-squared: 0.7839
## F-statistic: 355.7 on 4 and 387 DF, p-value: < 2.2e-16
ANSWER 9f)
Relative to the linear baseline df.lm_0 (R² = 0.8181), a cubic polynomial in displacement improves the fit (R² = 0.8445), as does replacing weight with log(weight) (R² = 0.8425), while replacing weight with weight² alone degrades it (R² = 0.7861). The improvement from the log and polynomial terms suggests that the relationships between mpg and these predictors are non-linear.
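The fits can be compared side by side (an illustrative addition, not part of the original answer):
sapply(list(linear = df.lm_0, poly3 = df.lm_4, log.weight = df.lm_5, weight.sq = df.lm_6),
       function(m) summary(m)$adj.r.squared)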
(10a) Fit a multiple regression model to predict Sales using Price, Urban, and US.
ANSWER 10a)
df2.lm <- lm(Sales ~ Price + Urban + US, data=Carseats)
summary(df2.lm)
##
## Call:
## lm(formula = Sales ~ Price + Urban + US, data = Carseats)
##
## Residuals:
## Min 1Q Median 3Q Max
## -6.9206 -1.6220 -0.0564 1.5786 7.0581
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 13.043469 0.651012 20.036 < 2e-16 ***
## Price -0.054459 0.005242 -10.389 < 2e-16 ***
## UrbanYes -0.021916 0.271650 -0.081 0.936
## USYes 1.200573 0.259042 4.635 4.86e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.472 on 396 degrees of freedom
## Multiple R-squared: 0.2393, Adjusted R-squared: 0.2335
## F-statistic: 41.52 on 3 and 396 DF, p-value: < 2.2e-16
(10b) Provide an interpretation of each coefficient in the model. Be careful—some of the variables in the model are qualitative!
ANSWER 10b)
For Price (-0.054459): Sales decrease by about 54 units (0.054 thousand) for each one-dollar increase in Price, holding the other predictors fixed. This variable is statistically significant, since its p-value is below the 0.05 significance level.
For UrbanYes (-0.021916): Sales are about 22 units lower in urban locations, but this variable is not statistically significant, since its p-value is well above the 0.05 significance level.
For USYes (1.200573): Sales are about 1,201 units higher in US stores. This variable is statistically significant, since its p-value is below the 0.05 significance level.
(10c) Write out the model in equation form, being careful to handle the qualitative variables properly.
ANSWER 10c)
Sales = 13.043 - 0.054 × Price - 0.022 × UrbanYes + 1.201 × USYes, where UrbanYes = 1 if the store is in an urban location and 0 otherwise, and USYes = 1 if the store is in the US and 0 otherwise.
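The dummy coding behind UrbanYes and USYes can be verified directly (an illustrative addition, not part of the original answer):
contrasts(Carseats$Urban)  # No -> 0, Yes -> 1
contrasts(Carseats$US)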
(10d) For which of the predictors can you reject the null hypothesis H0 : βj = 0?
ANSWER 10d)
We can reject the null hypothesis for Price and USYes, since their p-values are below the 0.05 significance level.
(10e) On the basis of your response to the previous question, fit a smaller model that only uses the predictors for which there is evidence of association with the outcome.
ANSWER 10e)
df2.lm_1 <- lm(Sales ~ Price + US, data=Carseats)
summary(df2.lm_1)
##
## Call:
## lm(formula = Sales ~ Price + US, data = Carseats)
##
## Residuals:
## Min 1Q Median 3Q Max
## -6.9269 -1.6286 -0.0574 1.5766 7.0515
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 13.03079 0.63098 20.652 < 2e-16 ***
## Price -0.05448 0.00523 -10.416 < 2e-16 ***
## USYes 1.19964 0.25846 4.641 4.71e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.469 on 397 degrees of freedom
## Multiple R-squared: 0.2393, Adjusted R-squared: 0.2354
## F-statistic: 62.43 on 2 and 397 DF, p-value: < 2.2e-16
(10f) How well do the models in (a) and (e) fit the data?
ANSWER 10f)
df2.lm (Price, Urban, US): RSE = 2.472, R² = 0.2393, adjusted R² = 0.2335.
df2.lm_1 (Price, US): RSE = 2.469, R² = 0.2393, adjusted R² = 0.2354.
As shown above, df2.lm_1 achieves a slightly lower RSE and a higher adjusted R² with 2 predictors instead of 3. Even so, both models explain only about 24% of the variance in Sales, so neither fits the data especially well.
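These quantities can be pulled programmatically (an illustrative addition, not part of the original answer):
c(full = sigma(df2.lm), reduced = sigma(df2.lm_1))  # residual standard errors
c(full = summary(df2.lm)$adj.r.squared, reduced = summary(df2.lm_1)$adj.r.squared)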
(10g) Using the model from (e), obtain 95% confidence intervals for the coefficient(s).
ANSWER 10g)
confint(df2.lm_1)
## 2.5 % 97.5 %
## (Intercept) 11.79032020 14.27126531
## Price -0.06475984 -0.04419543
## USYes 0.69151957 1.70776632
(10h) Is there evidence of outliers or high leverage observations in the model from (10e)?
ANSWER 10h)
par(mfrow=c(2,2))
plot(df2.lm_1)
par(mfrow=c(1,1))
plot(predict(df2.lm_1), rstudent(df2.lm_1))
require(car)
## Loading required package: car
## Loading required package: carData
leveragePlots(df2.lm_1)
plot(hatvalues(df2.lm_1))
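Numerical cutoffs that complement the plots (an illustrative addition, not part of the original answer); the average leverage here is (p + 1)/n = 3/400:
avg.lev <- length(coef(df2.lm_1)) / nrow(Carseats)  # (p + 1)/n = 3/400 = 0.0075
which(hatvalues(df2.lm_1) > 2 * avg.lev)            # conventional high-leverage flag
sum(abs(rstudent(df2.lm_1)) > 3)                    # count of large studentized residuals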
(12a) Recall that the coefficient estimate β̂ for the linear regression of Y onto X without an intercept is β̂ = (Σᵢ xᵢyᵢ) / (Σᵢ xᵢ²). Under what circumstance is the coefficient estimate for the regression of X onto Y the same as the coefficient estimate for the regression of Y onto X?
ANSWER 12a)
The regression of Y onto X has β̂ = Σxᵢyᵢ / Σxᵢ², while the regression of X onto Y has β̂ = Σxᵢyᵢ / Σyᵢ². The two estimates are therefore equal exactly when Σᵢ xᵢ² = Σᵢ yᵢ².
(12b) Generate an example with n = 100 observations in which the coefficient estimate for the regression of X onto Y is different from the coefficient estimate for the regression of Y onto X.
ANSWER 12b)
In the simulation below the two slope estimates differ (1.999 for y ~ x versus 0.389 for x ~ y) because the sums of squares of x and y differ.
set.seed(1)
x <- rnorm(100)
y <- 2*x + rnorm(100)
df.lm_Y <- lm(y ~ x)
df.lm_X <- lm(x ~ y)
summary(df.lm_Y)
##
## Call:
## lm(formula = y ~ x)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.8768 -0.6138 -0.1395 0.5394 2.3462
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -0.03769 0.09699 -0.389 0.698
## x 1.99894 0.10773 18.556 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.9628 on 98 degrees of freedom
## Multiple R-squared: 0.7784, Adjusted R-squared: 0.7762
## F-statistic: 344.3 on 1 and 98 DF, p-value: < 2.2e-16
summary(df.lm_X)
##
## Call:
## lm(formula = x ~ y)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.90848 -0.28101 0.06274 0.24570 0.85736
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.03880 0.04266 0.91 0.365
## y 0.38942 0.02099 18.56 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.4249 on 98 degrees of freedom
## Multiple R-squared: 0.7784, Adjusted R-squared: 0.7762
## F-statistic: 344.3 on 1 and 98 DF, p-value: < 2.2e-16
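A direct check of why the two slopes differ (an illustrative addition, not part of the original answer); since these fits include an intercept, the relevant quantities are the centered sums of squares:
c(Sxx = sum((x - mean(x))^2), Syy = sum((y - mean(y))^2))  # unequal, hence different slopes
(12c) Generate an example with n = 100 observations in which the coefficient estimate for the regression of X onto Y is the same as the coefficient estimate for the regression of Y onto X.
ANSWER 12c)
Drawing x and y from the same distribution with a large mean and a small spread makes their sums of squares nearly equal, so the two slope estimates below approximately coincide (both ≈ 0).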
set.seed(1)
x <- rnorm(100, mean=1000, sd=0.1)
y <- rnorm(100, mean=1000, sd=0.1)
df.lm_Y <- lm(y ~ x)
df.lm_X <- lm(x ~ y)
summary(df.lm_Y)
##
## Call:
## lm(formula = y ~ x)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.18768 -0.06138 -0.01395 0.05394 0.23462
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 1001.05662 107.72820 9.292 4.16e-15 ***
## x -0.00106 0.10773 -0.010 0.992
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.09628 on 98 degrees of freedom
## Multiple R-squared: 9.887e-07, Adjusted R-squared: -0.0102
## F-statistic: 9.689e-05 on 1 and 98 DF, p-value: 0.9922
summary(df.lm_X)
##
## Call:
## lm(formula = x ~ y)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.232416 -0.060361 0.000536 0.058305 0.229316
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 1.001e+03 9.472e+01 10.57 <2e-16 ***
## y -9.324e-04 9.472e-02 -0.01 0.992
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.09028 on 98 degrees of freedom
## Multiple R-squared: 9.887e-07, Adjusted R-squared: -0.0102
## F-statistic: 9.689e-05 on 1 and 98 DF, p-value: 0.9922
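An alternative construction for (12c) that makes the estimates match exactly (an illustrative addition, not part of the original answer): permuting x guarantees sum(x^2) == sum(y^2).
set.seed(1)
x <- rnorm(100)
y <- sample(x)        # a permutation of x, so sum(x^2) == sum(y^2) exactly
coef(lm(y ~ x + 0))   # slope = sum(x*y) / sum(x^2)
coef(lm(x ~ y + 0))   # slope = sum(x*y) / sum(y^2), identical to the above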