Problem 2

Carefully explain the differences between the KNN classifier and KNN regression methods.

The KNN classifier is used when the response (dependent) variable is categorical, whereas KNN regression is used when the response is quantitative. The KNN classifier predicts the class of a point by taking a majority vote among the k training observations nearest to that point, while KNN regression predicts y for a given value of x by finding the k training observations closest to x and averaging their responses.
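As a concrete illustration, the two methods can be run side by side on simulated data. The sketch below is not part of the assigned problem; it assumes the class and FNN packages are installed and uses made-up training data.

# A minimal sketch contrasting KNN classification and KNN regression
# (assumes the class and FNN packages are installed)
set.seed(1)
x.train <- matrix(rnorm(200), ncol = 2)   # 100 training points in 2 dimensions
x.test  <- matrix(rnorm(20),  ncol = 2)   # 10 test points

# Classification: the response is categorical; knn() returns predicted labels
class.train <- factor(ifelse(x.train[, 1] + x.train[, 2] > 0, "A", "B"))
class::knn(train = x.train, test = x.test, cl = class.train, k = 5)

# Regression: the response is numeric; knn.reg() averages the responses of
# the k nearest training points
y.train <- x.train[, 1] + rnorm(100)
FNN::knn.reg(train = x.train, test = x.test, y = y.train, k = 5)$pred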

Problem 9

This question involves the use of multiple linear regression in the Auto data set.

(a) Produce a scatterplot matrix which includes all of the variables in the data set.

# The dataset file was read from a local folder on a personal computer; the path is machine-specific:
# Auto <- read.csv(".../auto.csv", header = T, na.strings = '?')
Auto <- na.omit(Auto)
summary(Auto)
##       mpg          cylinders      displacement     horsepower        weight    
##  Min.   : 9.00   Min.   :3.000   Min.   : 68.0   Min.   : 46.0   Min.   :1613  
##  1st Qu.:17.00   1st Qu.:4.000   1st Qu.:105.0   1st Qu.: 75.0   1st Qu.:2225  
##  Median :22.75   Median :4.000   Median :151.0   Median : 93.5   Median :2804  
##  Mean   :23.45   Mean   :5.472   Mean   :194.4   Mean   :104.5   Mean   :2978  
##  3rd Qu.:29.00   3rd Qu.:8.000   3rd Qu.:275.8   3rd Qu.:126.0   3rd Qu.:3615  
##  Max.   :46.60   Max.   :8.000   Max.   :455.0   Max.   :230.0   Max.   :5140  
##   acceleration        year           origin          name          
##  Min.   : 8.00   Min.   :70.00   Min.   :1.000   Length:392        
##  1st Qu.:13.78   1st Qu.:73.00   1st Qu.:1.000   Class :character  
##  Median :15.50   Median :76.00   Median :1.000   Mode  :character  
##  Mean   :15.54   Mean   :75.98   Mean   :1.577                     
##  3rd Qu.:17.02   3rd Qu.:79.00   3rd Qu.:2.000                     
##  Max.   :24.80   Max.   :82.00   Max.   :3.000
plot(Auto)

(b) Compute the matrix of correlations between the variables using the function cor(). You will need to exclude the name variable, which is qualitative.

Cylinders, displacement, horsepower, and weight are strongly and positively correlated with one another (all pairwise correlations > 0.8), while acceleration is negatively correlated with each of these variables (correlations < -0.41).

cor(subset(Auto, select = -name))
##                     mpg  cylinders displacement horsepower     weight
## mpg           1.0000000 -0.7776175   -0.8051269 -0.7784268 -0.8322442
## cylinders    -0.7776175  1.0000000    0.9508233  0.8429834  0.8975273
## displacement -0.8051269  0.9508233    1.0000000  0.8972570  0.9329944
## horsepower   -0.7784268  0.8429834    0.8972570  1.0000000  0.8645377
## weight       -0.8322442  0.8975273    0.9329944  0.8645377  1.0000000
## acceleration  0.4233285 -0.5046834   -0.5438005 -0.6891955 -0.4168392
## year          0.5805410 -0.3456474   -0.3698552 -0.4163615 -0.3091199
## origin        0.5652088 -0.5689316   -0.6145351 -0.4551715 -0.5850054
##              acceleration       year     origin
## mpg             0.4233285  0.5805410  0.5652088
## cylinders      -0.5046834 -0.3456474 -0.5689316
## displacement   -0.5438005 -0.3698552 -0.6145351
## horsepower     -0.6891955 -0.4163615 -0.4551715
## weight         -0.4168392 -0.3091199 -0.5850054
## acceleration    1.0000000  0.2903161  0.2127458
## year            0.2903161  1.0000000  0.1815277
## origin          0.2127458  0.1815277  1.0000000
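Since several predictors are so strongly correlated, variance inflation factors give a quick numerical summary of the multicollinearity; a minimal sketch, assuming the car package is installed:

# Variance inflation factors for the full model (values well above 5-10
# reflect the collinearity visible in the correlation matrix)
library(car)
vif(lm(mpg ~ . - name, data = Auto))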

(c) Use the lm() function to perform a multiple linear regression with mpg as the response and all other variables except name as the predictors. Use the summary() function to print the results. Comment on the output. For instance:

i. Is there a relationship between the predictors and the response?

The overall model is statistically significant, with an F-statistic of 252.4 and a p-value near 0, so the null hypothesis that all slope coefficients are equal to 0 is rejected.

ii. Which predictors appear to have a statistically significant relationship to the response?

Displacement, weight, year and origin appear to have statistically significant relationships to the response.

iii. What does the coefficient for the year variable suggest?

A one-unit increase in model year is associated with an increase of about 0.75 miles per gallon, holding the other predictors fixed.

lm.auto <- lm(mpg~.-name, data = Auto)
summary(lm.auto)
## 
## Call:
## lm(formula = mpg ~ . - name, data = Auto)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -9.5903 -2.1565 -0.1169  1.8690 13.0604 
## 
## Coefficients:
##                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  -17.218435   4.644294  -3.707  0.00024 ***
## cylinders     -0.493376   0.323282  -1.526  0.12780    
## displacement   0.019896   0.007515   2.647  0.00844 ** 
## horsepower    -0.016951   0.013787  -1.230  0.21963    
## weight        -0.006474   0.000652  -9.929  < 2e-16 ***
## acceleration   0.080576   0.098845   0.815  0.41548    
## year           0.750773   0.050973  14.729  < 2e-16 ***
## origin         1.426141   0.278136   5.127 4.67e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 3.328 on 384 degrees of freedom
## Multiple R-squared:  0.8215, Adjusted R-squared:  0.8182 
## F-statistic: 252.4 on 7 and 384 DF,  p-value: < 2.2e-16
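As a quick check on the precision of the year effect discussed in (iii), a 95% confidence interval for that coefficient can be extracted directly (a sketch):

# 95% confidence interval for the year coefficient
confint(lm.auto)["year", ]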

(d) Use the plot() function to produce diagnostic plots of the linear regression fit. Comment on any problems you see with the fit. Do the residual plots suggest any unusually large outliers? Does the leverage plot identify any observations with unusually high leverage?

The residuals vs. fitted plot shows curvature, suggesting a non-linear relationship between the predictors and the response. Observation 14 stands out in the residuals vs. leverage plot, but its Cook's distance is not extreme. Observations 327 and 394 are shown as having high leverage, and observations 323 and 26 also appear to be outliers.

par(mfrow = c(2, 2))
plot(lm.auto)
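The points highlighted in the diagnostic plots can also be identified numerically; a minimal sketch using base R functions:

# Observation with the largest leverage, and the largest studentized residuals
which.max(hatvalues(lm.auto))
head(sort(abs(rstudent(lm.auto)), decreasing = TRUE), 3)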

(e) Use the * and : symbols to fit linear regression models with interaction effects. Do any interactions appear to be statistically significant?

The interaction effects cylinders:displacement and horsepower:displacement were first added to the original model (lm.auto); of these, only displacement:horsepower was statistically significant. A second model (lm.autoint1) was then estimated without the cylinders:displacement interaction. This produced statistically significant coefficient estimates for all predictors, with very little change in the \(R^2\) statistic.

lm.autoint <- lm(mpg ~ cylinders*displacement + horsepower*displacement + 
                   weight + acceleration + year + origin, data = Auto)
summary(lm.autoint)
## 
## Call:
## lm(formula = mpg ~ cylinders * displacement + horsepower * displacement + 
##     weight + acceleration + year + origin, data = Auto)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -9.0692 -1.5453 -0.1215  1.4814 12.7232 
## 
## Coefficients:
##                           Estimate Std. Error t value Pr(>|t|)    
## (Intercept)             -1.006e+00  4.426e+00  -0.227   0.8203    
## cylinders                2.309e-01  5.702e-01   0.405   0.6858    
## displacement            -8.149e-02  1.336e-02  -6.099 2.61e-09 ***
## horsepower              -1.873e-01  2.367e-02  -7.915 2.71e-14 ***
## weight                  -3.235e-03  6.557e-04  -4.933 1.21e-06 ***
## acceleration            -1.923e-01  9.382e-02  -2.050   0.0410 *  
## year                     7.402e-01  4.472e-02  16.550  < 2e-16 ***
## origin                   6.435e-01  2.583e-01   2.492   0.0131 *  
## cylinders:displacement   1.997e-03  2.325e-03   0.859   0.3908    
## displacement:horsepower  4.812e-04  6.897e-05   6.976 1.34e-11 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.913 on 382 degrees of freedom
## Multiple R-squared:  0.8639, Adjusted R-squared:  0.8607 
## F-statistic: 269.4 on 9 and 382 DF,  p-value: < 2.2e-16
lm.autoint1 <- lm(mpg ~ cylinders + horsepower*displacement + weight + 
                    acceleration + year + origin, data = Auto)
summary(lm.autoint1)
## 
## Call:
## lm(formula = mpg ~ cylinders + horsepower * displacement + weight + 
##     acceleration + year + origin, data = Auto)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -8.7010 -1.6009 -0.0967  1.4119 12.6734 
## 
## Coefficients:
##                           Estimate Std. Error t value Pr(>|t|)    
## (Intercept)             -1.894e+00  4.302e+00  -0.440  0.66007    
## cylinders                6.466e-01  3.017e-01   2.143  0.03275 *  
## horsepower              -1.975e-01  2.052e-02  -9.624  < 2e-16 ***
## displacement            -7.487e-02  1.092e-02  -6.859 2.80e-11 ***
## weight                  -3.147e-03  6.475e-04  -4.861 1.71e-06 ***
## acceleration            -2.131e-01  9.062e-02  -2.351  0.01921 *  
## year                     7.379e-01  4.463e-02  16.534  < 2e-16 ***
## origin                   6.891e-01  2.527e-01   2.727  0.00668 ** 
## horsepower:displacement  5.236e-04  4.813e-05  10.878  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.912 on 383 degrees of freedom
## Multiple R-squared:  0.8636, Adjusted R-squared:  0.8608 
## F-statistic: 303.1 on 8 and 383 DF,  p-value: < 2.2e-16
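Because lm.autoint1 is lm.autoint with the cylinders:displacement term dropped, the two fits can also be compared with a partial F-test; a minimal sketch:

# F-test for whether the cylinders:displacement interaction adds anything
anova(lm.autoint1, lm.autoint)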

(f) Try a few different transformations of the variables, such as \(\log(X)\), \(\sqrt{X}\), \(X^2\). Comment on your findings.

All three models had overall F-test p-values below \(2.2 \times 10^{-16}\), so in each case the null hypothesis that all \(\hat{\beta}\) coefficients are equal to 0 is rejected. The log model explained the highest percentage of total variation, with \(R^2\) = 0.8482, and had 5 \(\hat{\beta}\) coefficients that were statistically significant (not counting the intercept).

lm.autolog <- lm(mpg ~ log(cylinders) + log(horsepower) + log(displacement) + 
                   log(weight) + log(acceleration) + log(year) + 
                   log(origin), data = Auto)
summary(lm.autolog)
## 
## Call:
## lm(formula = mpg ~ log(cylinders) + log(horsepower) + log(displacement) + 
##     log(weight) + log(acceleration) + log(year) + log(origin), 
##     data = Auto)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -9.5987 -1.8172 -0.0181  1.5906 12.8132 
## 
## Coefficients:
##                   Estimate Std. Error t value Pr(>|t|)    
## (Intercept)       -66.5643    17.5053  -3.803 0.000167 ***
## log(cylinders)      1.4818     1.6589   0.893 0.372273    
## log(horsepower)    -6.9657     1.5569  -4.474 1.01e-05 ***
## log(displacement)  -1.0551     1.5385  -0.686 0.493230    
## log(weight)       -12.5728     2.2251  -5.650 3.12e-08 ***
## log(acceleration)  -4.9831     1.6078  -3.099 0.002082 ** 
## log(year)          54.9857     3.5555  15.465  < 2e-16 ***
## log(origin)         1.5822     0.5083   3.113 0.001991 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 3.069 on 384 degrees of freedom
## Multiple R-squared:  0.8482, Adjusted R-squared:  0.8454 
## F-statistic: 306.5 on 7 and 384 DF,  p-value: < 2.2e-16

The square-root model explained the second-highest percentage of total variation, with \(R^2\) = 0.8338, and had the fewest statistically significant \(\hat{\beta}\) coefficients (only 4, not counting the intercept).

lm.autosqrt <- lm(mpg ~ sqrt(cylinders) + sqrt(horsepower) + sqrt(displacement) + 
                    sqrt(weight) + sqrt(acceleration) + sqrt(year) + 
                    sqrt(origin), data = Auto)
summary(lm.autosqrt)
## 
## Call:
## lm(formula = mpg ~ sqrt(cylinders) + sqrt(horsepower) + sqrt(displacement) + 
##     sqrt(weight) + sqrt(acceleration) + sqrt(year) + sqrt(origin), 
##     data = Auto)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -9.5250 -1.9822 -0.1111  1.7347 13.0681 
## 
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)        -49.79814    9.17832  -5.426 1.02e-07 ***
## sqrt(cylinders)     -0.23699    1.53753  -0.154   0.8776    
## sqrt(horsepower)    -0.77976    0.30788  -2.533   0.0117 *  
## sqrt(displacement)   0.22580    0.22940   0.984   0.3256    
## sqrt(weight)        -0.62172    0.07898  -7.872 3.59e-14 ***
## sqrt(acceleration)  -0.82529    0.83443  -0.989   0.3233    
## sqrt(year)          12.79030    0.85891  14.891  < 2e-16 ***
## sqrt(origin)         3.26036    0.76767   4.247 2.72e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 3.21 on 384 degrees of freedom
## Multiple R-squared:  0.8338, Adjusted R-squared:  0.8308 
## F-statistic: 275.3 on 7 and 384 DF,  p-value: < 2.2e-16

The squared model explained the lowest percentage of total variation, with \(R^2\) = 0.7981, but had the most statistically significant \(\hat{\beta}\) coefficients (6, not counting the intercept).

lm.autosq <- lm(mpg ~ I(cylinders^2) + I(horsepower^2) + I(displacement^2) + 
              I(weight^2) + I(acceleration^2) + I(year^2) + I(origin^2), 
              data = Auto)
summary(lm.autosq)
## 
## Call:
## lm(formula = mpg ~ I(cylinders^2) + I(horsepower^2) + I(displacement^2) + 
##     I(weight^2) + I(acceleration^2) + I(year^2) + I(origin^2), 
##     data = Auto)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -9.6786 -2.3227 -0.0582  1.9073 12.9807 
## 
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)        1.208e+00  2.356e+00   0.513 0.608382    
## I(cylinders^2)    -8.829e-02  2.521e-02  -3.502 0.000515 ***
## I(horsepower^2)   -3.621e-05  4.975e-05  -0.728 0.467201    
## I(displacement^2)  5.680e-05  1.382e-05   4.109 4.87e-05 ***
## I(weight^2)       -9.351e-07  8.978e-08 -10.416  < 2e-16 ***
## I(acceleration^2)  6.278e-03  2.690e-03   2.334 0.020130 *  
## I(year^2)          4.999e-03  3.530e-04  14.160  < 2e-16 ***
## I(origin^2)        4.129e-01  6.914e-02   5.971 5.37e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 3.539 on 384 degrees of freedom
## Multiple R-squared:  0.7981, Adjusted R-squared:  0.7944 
## F-statistic: 216.8 on 7 and 384 DF,  p-value: < 2.2e-16
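To compare the three transformed fits at a glance, their adjusted \(R^2\) values can be collected side by side; a minimal sketch:

# Adjusted R-squared of the log, square-root, and squared models
sapply(list(log = lm.autolog, sqrt = lm.autosqrt, squared = lm.autosq),
       function(m) summary(m)$adj.r.squared)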

Problem 10

This question should be answered using the Carseats data set.

library(ISLR)
attach(Carseats)

(a) Fit a multiple regression model to predict Sales using Price, Urban, and US.

fit<-lm(Sales~Price+Urban+US)
summary(fit)
## 
## Call:
## lm(formula = Sales ~ Price + Urban + US)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -6.9206 -1.6220 -0.0564  1.5786  7.0581 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 13.043469   0.651012  20.036  < 2e-16 ***
## Price       -0.054459   0.005242 -10.389  < 2e-16 ***
## UrbanYes    -0.021916   0.271650  -0.081    0.936    
## USYes        1.200573   0.259042   4.635 4.86e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.472 on 396 degrees of freedom
## Multiple R-squared:  0.2393, Adjusted R-squared:  0.2335 
## F-statistic: 41.52 on 3 and 396 DF,  p-value: < 2.2e-16

(b) Provide an interpretation of each coefficient in the model. Be careful: some of the variables in the model are qualitative!

From the table above, Price and US are statistically significant predictors of Sales, while Urban is not. Because Sales is recorded in thousands of units, the Price coefficient (-0.0545) means that a $1 increase in price is associated with a decrease of roughly 54 car seats sold, holding the other predictors fixed. The USYes coefficient (1.2006) means that stores in the US sell about 1,200 more car seats on average than stores outside the US, all else equal. The UrbanYes coefficient is small and not statistically significant, so there is no evidence that urban location affects Sales.
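The dummy coding behind these interpretations can be inspected directly; a sketch, assuming Urban and US are stored as factors as in the ISLR package:

# How R encodes the qualitative predictors ("No" is the baseline level)
contrasts(Carseats$Urban)
contrasts(Carseats$US)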

(c) Write out the model in equation form, being careful to handle the qualitative variables properly.

\(Sales = 13.043469 -0.054459Price-0.021916Urban_{Yes}+1.200573US_{Yes}\)

(d) For which of the predictors can you reject the null hypothesis \(H_{0}: \beta_{j}= 0\)?

The null hypothesis \(H_{0}: \beta_{j} = 0\) can be rejected for Price and US (both p-values are far below 0.05), but not for Urban.

(e) On the basis of your response to the previous question, fit a smaller model that only uses the predictors for which there is evidence of association with the outcome.

fit<-lm(Sales~Price+US)
summary(fit)
## 
## Call:
## lm(formula = Sales ~ Price + US)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -6.9269 -1.6286 -0.0574  1.5766  7.0515 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 13.03079    0.63098  20.652  < 2e-16 ***
## Price       -0.05448    0.00523 -10.416  < 2e-16 ***
## USYes        1.19964    0.25846   4.641 4.71e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.469 on 397 degrees of freedom
## Multiple R-squared:  0.2393, Adjusted R-squared:  0.2354 
## F-statistic: 62.43 on 2 and 397 DF,  p-value: < 2.2e-16

(f) How well do the models in (a) and (e) fit the data?

Not especially well: each model explains only about 24% of the variance in Sales (\(R^2 \approx 0.239\)), and dropping Urban leaves the fit essentially unchanged.
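The two fits can be compared side by side by refitting both models and extracting their residual standard errors and adjusted \(R^2\) values; a minimal sketch:

# Fit statistics for the models from (a) and (e)
fit.a <- lm(Sales ~ Price + Urban + US, data = Carseats)
fit.e <- lm(Sales ~ Price + US, data = Carseats)
sapply(list(full = fit.a, reduced = fit.e),
       function(m) c(RSE = summary(m)$sigma, adj.R2 = summary(m)$adj.r.squared))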

(g) Using the model from (e), obtain 95 % confidence intervals for the coefficient(s).

confint(fit)
##                   2.5 %      97.5 %
## (Intercept) 11.79032020 14.27126531
## Price       -0.06475984 -0.04419543
## USYes        0.69151957  1.70776632

(h) Is there evidence of outliers or high leverage observations in the model from (e)?

R has built-in functions that can help identify influential points using various statistics with a single command. Researchers have suggested several cutoffs for how much influence an observation may have before it is flagged as unusual. One common benchmark is the average leverage \(\frac{(p+1)}{n}\), which here is \(\frac{(2+1)}{400} = 0.0075\).
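Before turning to the diagnostic plots below, the leverage values can be checked against this benchmark numerically; a minimal sketch:

# Average leverage is (p + 1)/n = 3/400 = 0.0075; flag points well above it
lev <- hatvalues(fit)
mean(lev)
which(lev > 3 * mean(lev))   # a common, though arbitrary, cutoff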

par(mfrow=c(2,2))
plot(fit)

summary(influence.measures(fit))
## Potentially influential observations of
##   lm(formula = Sales ~ Price + US) :
## 
##     dfb.1_ dfb.Pric dfb.USYs dffit   cov.r   cook.d hat    
## 26   0.24  -0.18    -0.17     0.28_*  0.97_*  0.03   0.01  
## 29  -0.10   0.10    -0.10    -0.18    0.97_*  0.01   0.01  
## 43  -0.11   0.10     0.03    -0.11    1.05_*  0.00   0.04_*
## 50  -0.10   0.17    -0.17     0.26_*  0.98    0.02   0.01  
## 51  -0.05   0.05    -0.11    -0.18    0.95_*  0.01   0.00  
## 58  -0.05  -0.02     0.16    -0.20    0.97_*  0.01   0.01  
## 69  -0.09   0.10     0.09     0.19    0.96_*  0.01   0.01  
## 126 -0.07   0.06     0.03    -0.07    1.03_*  0.00   0.03_*
## 160  0.00   0.00     0.00     0.01    1.02_*  0.00   0.02  
## 166  0.21  -0.23    -0.04    -0.24    1.02    0.02   0.03_*
## 172  0.06  -0.07     0.02     0.08    1.03_*  0.00   0.02  
## 175  0.14  -0.19     0.09    -0.21    1.03_*  0.02   0.03_*
## 210 -0.14   0.15    -0.10    -0.22    0.97_*  0.02   0.01  
## 270 -0.03   0.05    -0.03     0.06    1.03_*  0.00   0.02  
## 298 -0.06   0.06    -0.09    -0.15    0.97_*  0.01   0.00  
## 314 -0.05   0.04     0.02    -0.05    1.03_*  0.00   0.02_*
## 353 -0.02   0.03     0.09     0.15    0.97_*  0.01   0.00  
## 357  0.02  -0.02     0.02    -0.03    1.03_*  0.00   0.02  
## 368  0.26  -0.23    -0.11     0.27_*  1.01    0.02   0.02_*
## 377  0.14  -0.15     0.12     0.24    0.95_*  0.02   0.01  
## 384  0.00   0.00     0.00     0.00    1.02_*  0.00   0.02  
## 387 -0.03   0.04    -0.03     0.05    1.02_*  0.00   0.02  
## 396 -0.05   0.05     0.08     0.14    0.98_*  0.01   0.00

R flags several observations that exceed the cutoffs for one or more influence measures. A common practice is to report both the regression fit to all of the data and the fit with the flagged observations removed, and compare the two.

outlying.obs <- c(26, 29, 43, 50, 51, 58, 69, 126, 160, 166, 172, 175, 210, 270,
                  298, 314, 353, 357, 368, 377, 384, 387, 396)
Carseats.small <- Carseats[-outlying.obs, ]
fit2<-lm(Sales~Price+US,data=Carseats.small)
summary(fit2)
## 
## Call:
## lm(formula = Sales ~ Price + US, data = Carseats.small)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -5.263 -1.605 -0.039  1.590  5.428 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 12.925232   0.665259  19.429  < 2e-16 ***
## Price       -0.053973   0.005511  -9.794  < 2e-16 ***
## USYes        1.255018   0.248856   5.043 7.15e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.29 on 374 degrees of freedom
## Multiple R-squared:  0.2387, Adjusted R-squared:  0.2347 
## F-statistic: 58.64 on 2 and 374 DF,  p-value: < 2.2e-16

With these potential outliers or influential observations removed, very little changes relative to the model fit to the full data set. The confidence intervals for the coefficients from the full-data fit contain the coefficient estimates from the fit with the outliers removed, so it seems safe to keep all of the observations in the model.
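That claim can be checked directly by lining up the full-data intervals with the reduced-data estimates; a minimal sketch:

# Full-data 95% intervals next to the coefficient estimates without the outliers
cbind(confint(fit), estimate.no.outliers = coef(fit2))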

Problem 12

This problem involves simple linear regression without an intercept.

(a) Recall that the coefficient estimate \(\hat{\beta}\) for the linear regression of Y onto X without an intercept is given by (3.38). Under what circumstance is the coefficient estimate for the regression of X onto Y the same as the coefficient estimate for the regression of Y onto X?

From (3.38), the slope estimate for the regression of Y onto X without an intercept is \(\hat{\beta} = \sum_{i=1}^{n} x_i y_i / \sum_{i=1}^{n} x_i^2\), while for the regression of X onto Y it is \(\sum_{i=1}^{n} x_i y_i / \sum_{i=1}^{n} y_i^2\). The numerators are identical, so the two coefficient estimates are the same exactly when the denominators agree, that is, when the sum of squares of the observed y values equals the sum of squares of the observed x values, \(\sum_i y_i^2 = \sum_i x_i^2\).

(b) Generate an example in R with n = 100 observations in which the coefficient estimate for the regression of X onto Y is different from the coefficient estimate for the regression of Y onto X.

set.seed(1)
x = rnorm(100)
y = 2*x

yfit12b <- lm(y ~ x + 0)
summary(yfit12b)
## 
## Call:
## lm(formula = y ~ x + 0)
## 
## Residuals:
##        Min         1Q     Median         3Q        Max 
## -3.776e-16 -3.378e-17  2.680e-18  6.113e-17  5.105e-16 
## 
## Coefficients:
##    Estimate Std. Error   t value Pr(>|t|)    
## x 2.000e+00  1.296e-17 1.543e+17   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.167e-16 on 99 degrees of freedom
## Multiple R-squared:      1,  Adjusted R-squared:      1 
## F-statistic: 2.382e+34 on 1 and 99 DF,  p-value: < 2.2e-16
xfit12b <- lm(x ~ y + 0)
summary(xfit12b)
## 
## Call:
## lm(formula = x ~ y + 0)
## 
## Residuals:
##        Min         1Q     Median         3Q        Max 
## -1.888e-16 -1.689e-17  1.339e-18  3.057e-17  2.552e-16 
## 
## Coefficients:
##   Estimate Std. Error   t value Pr(>|t|)    
## y 5.00e-01   3.24e-18 1.543e+17   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.833e-17 on 99 degrees of freedom
## Multiple R-squared:      1,  Adjusted R-squared:      1 
## F-statistic: 2.382e+34 on 1 and 99 DF,  p-value: < 2.2e-16

(c) Generate an example in R with n = 100 observations in which the coefficient estimate for the regression of X onto Y is the same as the coefficient estimate for the regression of Y onto X.

Because y is generated by permuting x with sample(), the two variables have exactly the same sum of squares, so by part (a) the two regressions give the same slope; in both models the \(\hat{\beta}\) coefficients were equal out to 5 decimal places.

set.seed(1)
x <- rnorm(100)
y <- sample(x, 100)

yfit12c <- lm(y ~ x + 0)
summary(yfit12c)
## 
## Call:
## lm(formula = y ~ x + 0)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.1665 -0.4995  0.1140  0.6945  2.2833 
## 
## Coefficients:
##   Estimate Std. Error t value Pr(>|t|)
## x -0.07768    0.10020  -0.775     0.44
## 
## Residual standard error: 0.9021 on 99 degrees of freedom
## Multiple R-squared:  0.006034,   Adjusted R-squared:  -0.004006 
## F-statistic: 0.601 on 1 and 99 DF,  p-value: 0.4401
xfit12c <- lm(x ~ y + 0)
summary(xfit12c)
## 
## Call:
## lm(formula = x ~ y + 0)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.2182 -0.4969  0.1595  0.6782  2.4017 
## 
## Coefficients:
##   Estimate Std. Error t value Pr(>|t|)
## y -0.07768    0.10020  -0.775     0.44
## 
## Residual standard error: 0.9021 on 99 degrees of freedom
## Multiple R-squared:  0.006034,   Adjusted R-squared:  -0.004006 
## F-statistic: 0.601 on 1 and 99 DF,  p-value: 0.4401
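Finally, the condition from part (a) can be verified for this example: because y is a permutation of x, the two sums of squares are identical, so the slope estimates must coincide. A minimal sketch:

# The sums of squares are equal, so the two regressions give the same slope
sum(x^2)
sum(y^2)
coef(yfit12c)
coef(xfit12c)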