2. Carefully explain the differences between the KNN classifier and KNN regression methods.

A K-Nearest Neighbors (KNN) classifier is used when the response is qualitative: for a given test point, it finds the K nearest training observations and assigns the class held by the majority of them. KNN regression is analogous but for a quantitative response: it predicts the average of the responses of the K nearest neighbors.
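
A minimal sketch of the two methods on made-up one-dimensional data, assuming the class and FNN packages are installed:

set.seed(1)
train_x = matrix(rnorm(50))
test_x  = matrix(0)                      # one query point at x = 0

# Classifier: majority vote among the K = 5 nearest training points
cls = factor(ifelse(train_x > 0, "A", "B"))
class::knn(train = train_x, test = test_x, cl = cls, k = 5)

# Regression: average response of the K = 5 nearest training points
y = 2 * as.numeric(train_x) + rnorm(50)
FNN::knn.reg(train = train_x, test = test_x, y = y, k = 5)$pred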

9. This question involves the use of multiple linear regression on the Auto data set.

auto = read.csv("Auto.csv", header = TRUE, na.strings = "?", stringsAsFactors = TRUE)   # "?" encodes missing values
autodata = read.table("Auto.data", header = TRUE, na.strings = "?", stringsAsFactors = TRUE)   # alternate copy of the same data; unused below
(a) Produce a scatterplot matrix which includes all of the variables in the data set.
pairs(auto)

(b) Compute the matrix of correlations between the variables using the function cor(). You will need to exclude the name variable, which is qualitative.

numeric_auto = auto[sapply(auto, is.numeric)]
cor_matrix = cor(numeric_auto, use = "complete.obs")
cor_matrix
                    mpg  cylinders displacement horsepower     weight acceleration       year     origin
mpg           1.0000000 -0.7776175   -0.8051269 -0.7784268 -0.8322442    0.4233285  0.5805410  0.5652088
cylinders    -0.7776175  1.0000000    0.9508233  0.8429834  0.8975273   -0.5046834 -0.3456474 -0.5689316
displacement -0.8051269  0.9508233    1.0000000  0.8972570  0.9329944   -0.5438005 -0.3698552 -0.6145351
horsepower   -0.7784268  0.8429834    0.8972570  1.0000000  0.8645377   -0.6891955 -0.4163615 -0.4551715
weight       -0.8322442  0.8975273    0.9329944  0.8645377  1.0000000   -0.4168392 -0.3091199 -0.5850054
acceleration  0.4233285 -0.5046834   -0.5438005 -0.6891955 -0.4168392    1.0000000  0.2903161  0.2127458
year          0.5805410 -0.3456474   -0.3698552 -0.4163615 -0.3091199    0.2903161  1.0000000  0.1815277
origin        0.5652088 -0.5689316   -0.6145351 -0.4551715 -0.5850054    0.2127458  0.1815277  1.0000000
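
As a convenience (not part of the exercise), sorting the mpg column of the matrix shows at a glance which predictors track mpg most strongly:

sort(cor_matrix[, "mpg"])   # weight, displacement, horsepower, and cylinders are strongly negative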

(c) Use the lm() function to perform a multiple linear regression with mpg as the response and all other variables except name as the predictors. Use the summary() function to print the results. Comment on the output. For instance:

auto_model = lm(mpg ~ . - name, data = auto)
summary(auto_model)

Call:
lm(formula = mpg ~ . - name, data = auto)

Residuals:
    Min      1Q  Median      3Q     Max 
-9.5903 -2.1565 -0.1169  1.8690 13.0604 

Coefficients:
               Estimate Std. Error t value Pr(>|t|)    
(Intercept)  -17.218435   4.644294  -3.707  0.00024 ***
cylinders     -0.493376   0.323282  -1.526  0.12780    
displacement   0.019896   0.007515   2.647  0.00844 ** 
horsepower    -0.016951   0.013787  -1.230  0.21963    
weight        -0.006474   0.000652  -9.929  < 2e-16 ***
acceleration   0.080576   0.098845   0.815  0.41548    
year           0.750773   0.050973  14.729  < 2e-16 ***
origin         1.426141   0.278136   5.127 4.67e-07 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.328 on 384 degrees of freedom
  (5 observations deleted due to missingness)
Multiple R-squared:  0.8215,    Adjusted R-squared:  0.8182 
F-statistic: 252.4 on 7 and 384 DF,  p-value: < 2.2e-16
  i. Is there a relationship between the predictors and the response?
    Yes: the F-statistic of 252.4 has a p-value below 2.2e-16, so we reject the null hypothesis that all coefficients are zero; at least one predictor is related to mpg.
  ii. Which predictors appear to have a statistically significant relationship to the response?
    Displacement, weight, year, and origin, whose p-values are all well below 0.05.
  iii. What does the coefficient for the year variable suggest?
    Holding the other predictors fixed, each additional model year is associated with roughly 0.75 more mpg; newer cars are more fuel efficient. A quick check of this interpretation is sketched below.
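
As a sanity check on the year coefficient, predict mpg for two otherwise-identical cars one model year apart (the predictor values here are hypothetical, chosen purely for illustration):

car = data.frame(cylinders = 4, displacement = 150, horsepower = 100,
                 weight = 2500, acceleration = 15, year = c(76, 77), origin = 1)
diff(predict(auto_model, newdata = car))   # about 0.75 mpg for one extra model year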

(d) Use the plot() function to produce diagnostic plots of the linear regression fit. Comment on any problems you see with the fit. Do the residual plots suggest any unusually large outliers? Does the leverage plot identify any observations with unusually high leverage?

par(mfrow = c(2,2))
plot(auto_model)
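
The plots are easier to interpret alongside a few numbers. A small sketch using base R's rstudent() and hatvalues() to flag candidate outliers and high-leverage points (the +/- 3 and three-times-average cutoffs are common rules of thumb, not part of the exercise):

which(abs(rstudent(auto_model)) > 3)     # studentized residuals beyond +/- 3 suggest outliers
hv = hatvalues(auto_model)
head(sort(hv[hv > 3 * mean(hv)], decreasing = TRUE))   # leverage far above the average (p + 1)/n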

(e) Use the * and : symbols to fit linear regression models with interaction effects. Do any interactions appear to be statistically significant?

model = lm(mpg ~ horsepower * weight + cylinders:displacement + year:origin, data = auto)
summary(model)

Call:
lm(formula = mpg ~ horsepower * weight + cylinders:displacement + 
    year:origin, data = auto)

Residuals:
     Min       1Q   Median       3Q      Max 
-11.3599  -2.5220  -0.2283   1.9314  14.8406 

Coefficients:
                         Estimate Std. Error t value Pr(>|t|)    
(Intercept)             5.738e+01  2.703e+00  21.228  < 2e-16 ***
horsepower             -2.349e-01  2.690e-02  -8.732  < 2e-16 ***
weight                 -9.412e-03  8.444e-04 -11.147  < 2e-16 ***
horsepower:weight       4.719e-05  6.782e-06   6.959 1.47e-11 ***
cylinders:displacement  3.321e-04  6.212e-04   0.535    0.593    
year:origin             1.835e-02  4.002e-03   4.586 6.12e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.835 on 386 degrees of freedom
  (5 observations deleted due to missingness)
Multiple R-squared:  0.7616,    Adjusted R-squared:  0.7585 
F-statistic: 246.7 on 5 and 386 DF,  p-value: < 2.2e-16
model1 = lm(mpg ~ horsepower:weight + cylinders * displacement + year * origin, data = auto)
summary(model1)

Call:
lm(formula = mpg ~ horsepower:weight + cylinders * displacement + 
    year * origin, data = auto)

Residuals:
     Min       1Q   Median       3Q      Max 
-12.9951  -1.7644  -0.2853   1.6297  14.6801 

Coefficients:
                         Estimate Std. Error t value Pr(>|t|)    
(Intercept)             3.339e+01  8.990e+00   3.715 0.000234 ***
cylinders              -4.065e+00  4.253e-01  -9.559  < 2e-16 ***
displacement           -1.383e-01  1.361e-02 -10.165  < 2e-16 ***
year                    2.731e-01  1.108e-01   2.465 0.014139 *  
origin                 -2.031e+01  4.648e+00  -4.369 1.60e-05 ***
horsepower:weight      -1.937e-05  2.353e-06  -8.232 2.91e-15 ***
cylinders:displacement  1.991e-02  1.812e-03  10.990  < 2e-16 ***
year:origin             2.698e-01  5.962e-02   4.526 8.03e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.268 on 384 degrees of freedom
  (5 observations deleted due to missingness)
Multiple R-squared:  0.8278,    Adjusted R-squared:  0.8247 
F-statistic: 263.7 on 7 and 384 DF,  p-value: < 2.2e-16

Yes: horsepower:weight and year:origin are highly significant in both fits, while cylinders:displacement is significant only in the second model, where its main effects are also included.

(f) Try a few different transformations of the variables, such as log(X), √X, X². Comment on your findings.

model_log = lm(mpg ~ log(horsepower) * log(weight) + log(cylinders) * log(displacement), data = auto)
summary(model_log)

Call:
lm(formula = mpg ~ log(horsepower) * log(weight) + log(cylinders) * 
    log(displacement), data = auto)

Residuals:
     Min       1Q   Median       3Q      Max 
-11.6310  -2.2681  -0.3507   1.8963  15.7422 

Coefficients:
                                 Estimate Std. Error t value Pr(>|t|)    
(Intercept)                      457.4668    98.0172   4.667 4.22e-06 ***
log(horsepower)                  -76.0956    25.0162  -3.042 0.002512 ** 
log(weight)                      -50.5747    14.5533  -3.475 0.000569 ***
log(cylinders)                     7.5831    16.2073   0.468 0.640134    
log(displacement)                  0.8374     5.1441   0.163 0.870774    
log(horsepower):log(weight)        8.6252     3.1643   2.726 0.006708 ** 
log(cylinders):log(displacement)  -1.7244     3.0861  -0.559 0.576649    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.937 on 385 degrees of freedom
  (5 observations deleted due to missingness)
Multiple R-squared:  0.7495,    Adjusted R-squared:  0.7456 
F-statistic:   192 on 6 and 385 DF,  p-value: < 2.2e-16
model_root = lm(mpg ~ sqrt(horsepower)*sqrt(weight) + sqrt(cylinders) * sqrt(displacement), data = auto)
summary(model_root)

Call:
lm(formula = mpg ~ sqrt(horsepower) * sqrt(weight) + sqrt(cylinders) * 
    sqrt(displacement), data = auto)

Residuals:
     Min       1Q   Median       3Q      Max 
-11.6705  -2.2496  -0.3115   1.8862  15.6795 

Coefficients:
                                    Estimate Std. Error t value Pr(>|t|)    
(Intercept)                        120.52680    9.98018  12.077  < 2e-16 ***
sqrt(horsepower)                    -5.93648    1.22800  -4.834 1.93e-06 ***
sqrt(weight)                        -1.22163    0.23946  -5.102 5.30e-07 ***
sqrt(cylinders)                     -5.00139    5.52743  -0.905  0.36612    
sqrt(displacement)                  -1.04018    0.94668  -1.099  0.27256    
sqrt(horsepower):sqrt(weight)        0.07879    0.02222   3.545  0.00044 ***
sqrt(cylinders):sqrt(displacement)   0.32297    0.38558   0.838  0.40276    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.926 on 385 degrees of freedom
  (5 observations deleted due to missingness)
Multiple R-squared:  0.7508,    Adjusted R-squared:  0.7469 
F-statistic: 193.3 on 6 and 385 DF,  p-value: < 2.2e-16
summary(model_log)$adj.r.squared
[1] 0.7455929
summary(model_root)$adj.r.squared
[1] 0.7469293

The two transformed models fit almost identically (adjusted R² of about 0.746 for the log model and 0.747 for the square-root model), and both trail the untransformed fit from (c) (adjusted R² of 0.8182), though they also use fewer predictors.

10. This question should be answered using the Carseats data set.

library(ISLR2)
attach(Carseats)

(a) Fit a multiple regression model to predict Sales using Price, Urban, and US.

fit = lm(Sales~Price + Urban + US, data = Carseats)
summary(fit)

Call:
lm(formula = Sales ~ Price + Urban + US, data = Carseats)

Residuals:
    Min      1Q  Median      3Q     Max 
-6.9206 -1.6220 -0.0564  1.5786  7.0581 

Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept) 13.043469   0.651012  20.036  < 2e-16 ***
Price       -0.054459   0.005242 -10.389  < 2e-16 ***
UrbanYes    -0.021916   0.271650  -0.081    0.936    
USYes        1.200573   0.259042   4.635 4.86e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 2.472 on 396 degrees of freedom
Multiple R-squared:  0.2393,    Adjusted R-squared:  0.2335 
F-statistic: 41.52 on 3 and 396 DF,  p-value: < 2.2e-16
coef(fit)["Price"]
      Price 
-0.05445885 

(b) Provide an interpretation of each coefficient in the model. Be careful—some of the variables in the model are qualitative!

The coefficient for Price is -0.054459: Sales is measured in thousands of units, so each one-dollar increase in price is associated with about 54 fewer car seats sold on average, holding Urban and US fixed.

The coefficient for UrbanYes is -0.021916, but its p-value of 0.936 gives no evidence that sales differ between urban and rural stores.

The coefficient for USYes is 1.200573: on average, US stores sell about 1,201 more car seats (1.2 thousand units) than stores outside the US, holding Price and Urban fixed.

(c) Write out the model in equation form, being careful to handle the qualitative variables properly.
\(\widehat{Sales} = 13.04 - 0.054\,Price - 0.022\,I(Urban = Yes) + 1.20\,I(US = Yes)\), where the indicator \(I(\cdot)\) equals 1 when the condition holds and 0 otherwise.
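
R builds these indicators automatically by dummy-coding each factor with "No" as the baseline; a quick way to confirm the coding on the same data:

contrasts(Carseats$Urban)    # Yes -> 1, No -> 0 (baseline)
contrasts(Carseats$US)
head(model.matrix(Sales ~ Price + Urban + US, data = Carseats))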

(d) For which of the predictors can you reject the null hypothesis \(H_0 : \beta_j = 0\)?
Price and US: the p-values for Price and USYes are far below 0.05, so we reject \(H_0\) for both (see part (b) for interpretation). We cannot reject \(H_0\) for Urban.

(e) On the basis of your response to the previous question, fit a smaller model that only uses the predictors for which there is evidence of association with the outcome.

fit = lm(Sales~Price + US, data = Carseats)
summary(fit)

Call:
lm(formula = Sales ~ Price + US, data = Carseats)

Residuals:
    Min      1Q  Median      3Q     Max 
-6.9269 -1.6286 -0.0574  1.5766  7.0515 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 13.03079    0.63098  20.652  < 2e-16 ***
Price       -0.05448    0.00523 -10.416  < 2e-16 ***
USYes        1.19964    0.25846   4.641 4.71e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 2.469 on 397 degrees of freedom
Multiple R-squared:  0.2393,    Adjusted R-squared:  0.2354 
F-statistic: 62.43 on 2 and 397 DF,  p-value: < 2.2e-16

(f) How well do the models in (a) and (e) fit the data?

Not well: adjusted R² is 0.2335 for (a) and 0.2354 for (e), so either model explains less than a quarter of the variance in Sales; dropping Urban nudges the adjusted R² up slightly.

(g) Using the model from (e), obtain 95 % confidence intervals for the coefficient(s).

confint(fit)
                  2.5 %      97.5 %
(Intercept) 11.79032020 14.27126531
Price       -0.06475984 -0.04419543
USYes        0.69151957  1.70776632

(h) Is there evidence of outliers or high leverage observations in the model from (e)?

par(mfrow=c(2,2))
plot(fit)
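
As in 9(d), the plots can be backed up numerically (a small sketch; the 2(p + 1)/n leverage cutoff is one common rule of thumb):

range(rstudent(fit))                     # values within about +/- 3 argue against extreme outliers
hv = hatvalues(fit)
which(hv > 2 * length(coef(fit)) / nrow(Carseats))   # leverage above twice the average (p + 1)/n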

12. This problem involves simple linear regression without an intercept.
(a) Recall that the coefficient estimate \(\hat{\beta}\) for the linear regression of Y onto X without an intercept is given by (3.38). Under what circumstance is the coefficient estimate for the regression of X onto Y the same as the coefficient estimate for the regression of Y onto X?

The two estimates share the same numerator and differ only in their denominators, so they are equal exactly when the sum of squares of x equals the sum of squares of y; the formulas are written out below.
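From (3.38), the two no-intercept estimates are

\[
\hat{\beta}_{Y \sim X} = \frac{\sum_{i} x_i y_i}{\sum_{i} x_i^2},
\qquad
\hat{\beta}_{X \sim Y} = \frac{\sum_{i} x_i y_i}{\sum_{i} y_i^2},
\]

so they coincide if and only if \(\sum_i x_i^2 = \sum_i y_i^2\).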

(b) Generate an example in R with n = 100 observations in which the coefficient estimate for the regression of X onto Y is different from the coefficient estimate for the regression of Y onto X.

set.seed(69)
n = 100
x = rnorm(n, mean = 50, sd = 10)
y = 2*x + rnorm(n, mean = 0, sd = 20)   # sum(y^2) differs from sum(x^2), so the slopes will differ

XY = lm(x ~ y + 0)   # regression of X onto Y, no intercept
betaXY = coef(XY)

YX = lm(y ~ x + 0)   # regression of Y onto X, no intercept
betaYX = coef(YX)

cat("Beta (Y ~ X):", betaYX, "\n")
Beta (Y ~ X): 2.057628 
cat("Beta (X ~ Y):", betaXY, "\n")
Beta (X ~ Y): 0.470096 

(c) Generate an example in R with n = 100 observations in which the coefficient estimate for the regression of X onto Y is the same as the coefficient estimate for the regression of Y onto X.

set.seed(69)
n = 100
x = rnorm(n, mean = 50, sd = 10)
y = x   # identical variables: sum(x^2) equals sum(y^2) trivially, so both slopes are 1
XY2 = lm(x ~ y + 0)
betaXY2 = coef(XY2)

YX2 = lm(y ~ x + 0)
betaYX2 = coef(YX2)

cat("Beta (Y ~ X):", betaYX2, "\n")
Beta (Y ~ X): 1 
cat("Beta (X ~ Y):", betaXY2, "\n")
Beta (X ~ Y): 1 
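
Setting y = x works but is trivial, since both slopes are exactly 1. A slightly less trivial sketch: permuting x preserves the sum of squares, so the two estimates still agree even though y differs from x observation by observation.

set.seed(69)
x = rnorm(100, mean = 50, sd = 10)
y = sample(x)        # same values reshuffled, so sum(x^2) == sum(y^2)
coef(lm(y ~ x + 0))
coef(lm(x ~ y + 0))  # identical to the slope above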