ISLR Chapter 4 Exercises: Q13 (Weekly) and Q14 (Auto)

# ISLR2 supersedes ISLR; attaching ISLR as well would mask ISLR2's Auto and
# Credit data sets, so only ISLR2 is loaded here.
library(ISLR2)      # Weekly and Auto data sets
library(class)      # knn()
library(MASS)       # lda() and qda(); masks dplyr::select() and ISLR2::Boston
library(e1071)      # naiveBayes()
library(naivebayes) # naive_bayes()
library(corrplot)   # correlation plots
library(tidyverse)  # dplyr, ggplot2, etc.

Q13. This question should be answered using the Weekly data set, which is part of the ISLR2 package. This data is similar in nature to the Smarket data from this chapter’s lab, except that it contains 1,089 weekly returns for 21 years, from the beginning of 1990 to the end of 2010.

(a) Produce some numerical and graphical summaries of the Weekly data. Do there appear to be any patterns?

(b) Use the full data set to perform a logistic regression with Direction as the response and the five lag variables plus Volume as predictors. Use the summary function to print the results. Do any of the predictors appear to be statistically significant? If so, which ones?

(c) Compute the confusion matrix and overall fraction of correct predictions. Explain what the confusion matrix is telling you about the types of mistakes made by logistic regression.

(d) Now fit the logistic regression model using a training data period from 1990 to 2008, with Lag2 as the only predictor. Compute the confusion matrix and the overall fraction of correct predictions for the held out data (that is, the data from 2009 and 2010).

(e) Repeat (d) using LDA.

(f) Repeat (d) using QDA.

(g) Repeat (d) using KNN with K = 1.

(h) Repeat (d) using naive Bayes.

(i) Which of these methods appears to provide the best results on this data?

(j) Experiment with different combinations of predictors, including possible transformations and interactions, for each of the methods. Report the variables, method, and associated confusion matrix that appears to provide the best results on the held out data. Note that you should also experiment with values for K in the KNN classifier.

# Load the Weekly dataset
data("Weekly")
# (a) Numerical and graphical summaries
summary(Weekly)
##       Year           Lag1               Lag2               Lag3         
##  Min.   :1990   Min.   :-18.1950   Min.   :-18.1950   Min.   :-18.1950  
##  1st Qu.:1995   1st Qu.: -1.1540   1st Qu.: -1.1540   1st Qu.: -1.1580  
##  Median :2000   Median :  0.2410   Median :  0.2410   Median :  0.2410  
##  Mean   :2000   Mean   :  0.1506   Mean   :  0.1511   Mean   :  0.1472  
##  3rd Qu.:2005   3rd Qu.:  1.4050   3rd Qu.:  1.4090   3rd Qu.:  1.4090  
##  Max.   :2010   Max.   : 12.0260   Max.   : 12.0260   Max.   : 12.0260  
##       Lag4               Lag5              Volume            Today         
##  Min.   :-18.1950   Min.   :-18.1950   Min.   :0.08747   Min.   :-18.1950  
##  1st Qu.: -1.1580   1st Qu.: -1.1660   1st Qu.:0.33202   1st Qu.: -1.1540  
##  Median :  0.2380   Median :  0.2340   Median :1.00268   Median :  0.2410  
##  Mean   :  0.1458   Mean   :  0.1399   Mean   :1.57462   Mean   :  0.1499  
##  3rd Qu.:  1.4090   3rd Qu.:  1.4050   3rd Qu.:2.05373   3rd Qu.:  1.4050  
##  Max.   : 12.0260   Max.   : 12.0260   Max.   :9.32821   Max.   : 12.0260  
##  Direction 
##  Down:484  
##  Up  :605  
##            
##            
##            
## 
# Plot weekly returns over time
plot(Weekly$Year, Weekly$Today, type = "l", col = "blue", xlab = "Year", ylab = "Weekly Returns")

pairs(Weekly)

The means and medians of Lag1 through Lag5 are all close to zero, and the lag variables show no obvious relationships with one another in the pairs plot, so there is no clear trend in the lagged weekly returns.

The mean of Today is 0.1499, a slightly positive average weekly return, consistent with Up weeks outnumbering Down weeks (605 to 484). The one conspicuous pattern is that Volume grows substantially over time (Year).
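Since corrplot is already loaded, a correlation plot of the numeric columns (a quick sketch, not part of the original output) makes both points explicit: the lag variables are essentially uncorrelated with everything else, while Year and Volume are strongly related.

weekly_numeric <- Weekly[, sapply(Weekly, is.numeric)]  # drop the Direction factor
corrplot(cor(weekly_numeric), method = "number")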

# (b) Logistic regression
logistic_model <- glm(Direction ~ Lag1 + Lag2 + Lag3 + Lag4 + Lag5 + Volume, data = Weekly, family = "binomial")
summary(logistic_model)
## 
## Call:
## glm(formula = Direction ~ Lag1 + Lag2 + Lag3 + Lag4 + Lag5 + 
##     Volume, family = "binomial", data = Weekly)
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)   
## (Intercept)  0.26686    0.08593   3.106   0.0019 **
## Lag1        -0.04127    0.02641  -1.563   0.1181   
## Lag2         0.05844    0.02686   2.175   0.0296 * 
## Lag3        -0.01606    0.02666  -0.602   0.5469   
## Lag4        -0.02779    0.02646  -1.050   0.2937   
## Lag5        -0.01447    0.02638  -0.549   0.5833   
## Volume      -0.02274    0.03690  -0.616   0.5377   
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 1496.2  on 1088  degrees of freedom
## Residual deviance: 1486.4  on 1082  degrees of freedom
## AIC: 1500.4
## 
## Number of Fisher Scoring iterations: 4

Among the predictors, only Lag2 is statistically significant at the 5% level, with a positive estimate (0.05844, z = 2.175, p = 0.0296). Its positive sign suggests that a higher return two weeks ago is associated with a higher probability of the market moving up this week.

Lag1, Lag3, Lag4, Lag5, and Volume all have p-values above the conventional 0.05 threshold, so there is not enough evidence to reject the null hypothesis that their coefficients are zero.
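As a quick aside (not required by the question), the Lag2 coefficient is easier to read as an odds ratio: each additional percentage point of return two weeks ago multiplies the odds of an Up week by exp(0.05844), roughly 1.06.

exp(coef(logistic_model)["Lag2"])  # odds ratio for a one-unit increase in Lag2, about 1.06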

# (c) Confusion matrix and overall fraction of correct predictions
predicted_direction <- ifelse(predict(logistic_model, type = "response") > 0.5, "Up", "Down")
confusion_matrix <- table(Actual = Weekly$Direction, Predicted = predicted_direction)
accuracy <- sum(diag(confusion_matrix)) / sum(confusion_matrix)
print(confusion_matrix)
##       Predicted
## Actual Down  Up
##   Down   54 430
##   Up     48 557
cat("Overall accuracy:", accuracy, "\n")
## Overall accuracy: 0.5610652

The confusion matrix shows that the errors are overwhelmingly false “Up” calls: the model mislabels 430 of the 484 Down weeks as Up, while missing only 48 of the 605 Up weeks. In other words, it predicts “Up” most of the time, so it captures Up weeks well but almost never identifies Down weeks. Note also that this 56.1% accuracy is computed on the training data, so it is likely optimistic.
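Treating “Up” as the positive class, the class-wise rates computed from the matrix above make this asymmetry explicit:

# Fraction of Up weeks correctly called Up, and of Down weeks correctly called Down
sens <- confusion_matrix["Up", "Up"] / sum(confusion_matrix["Up", ])       # 557/605, about 0.92
spec <- confusion_matrix["Down", "Down"] / sum(confusion_matrix["Down", ]) # 54/484, about 0.11
c(sensitivity = sens, specificity = spec)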

# (d) Logistic regression with Lag2 as the only predictor
training_data <- subset(Weekly, Year < 2009)
testing_data <- subset(Weekly, Year >= 2009)
logistic_model_lag2 <- glm(Direction ~ Lag2, data = training_data, family = "binomial")
predicted_direction_lag2 <- ifelse(predict(logistic_model_lag2, newdata = testing_data, type = "response") > 0.5, "Up", "Down")
confusion_matrix_lag2 <- table(Actual = testing_data$Direction, Predicted = predicted_direction_lag2)
accuracy_lag2 <- sum(diag(confusion_matrix_lag2)) / sum(confusion_matrix_lag2)
print(confusion_matrix_lag2)
##       Predicted
## Actual Down Up
##   Down    9 34
##   Up      5 56
cat("Overall accuracy with Lag2:", accuracy_lag2, "\n")
## Overall accuracy with Lag2: 0.625

With Lag2 as the only predictor, the model attains 62.5% accuracy on the held-out 2009–2010 data, which compares favorably with the 56.1% the full model achieved on its own training data. The improvement is modest, though, and the model still calls “Up” for 90 of the 104 test weeks; additional metrics would be needed to assess how well it generalizes.
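Parts (e) through (h) repeat the same confusion-matrix and accuracy boilerplate; a small helper function (hypothetical, not used in the code below) would tidy this up:

evaluate <- function(predicted, actual) {
  cm <- table(Actual = actual, Predicted = predicted)
  list(confusion = cm, accuracy = sum(diag(cm)) / sum(cm))
}
# e.g. evaluate(predicted_direction_lag2, testing_data$Direction)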

# (e) LDA
lda_model <- lda(Direction ~ Lag2, data = training_data)
predicted_direction_lda <- predict(lda_model, newdata = testing_data)$class
confusion_matrix_lda <- table(Actual = testing_data$Direction, Predicted = predicted_direction_lda)
accuracy_lda <- sum(diag(confusion_matrix_lda)) / sum(confusion_matrix_lda)
print(confusion_matrix_lda)
##       Predicted
## Actual Down Up
##   Down    9 34
##   Up      5 56
cat("Overall accuracy with LDA:", accuracy_lda, "\n")
## Overall accuracy with LDA: 0.625

LDA gives exactly the same confusion matrix as the Lag2-only logistic regression, so both achieve 62.5% accuracy on the held-out data. This is unsurprising: with a single predictor, LDA and logistic regression typically produce very similar linear decision boundaries, differing only in how the boundary is estimated.
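Identical confusion matrices strongly suggest the two classifiers make exactly the same calls on the test set; a one-line check (a sketch) verifies whether the predictions agree week by week:

mean(as.character(predicted_direction_lda) == predicted_direction_lag2)  # 1 if the calls are identical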

# (f) QDA
qda_model <- qda(Direction ~ Lag2, data = training_data)
predicted_direction_qda <- predict(qda_model, newdata = testing_data)$class
confusion_matrix_qda <- table(Actual = testing_data$Direction, Predicted = predicted_direction_qda)
accuracy_qda <- sum(diag(confusion_matrix_qda)) / sum(confusion_matrix_qda)
print(confusion_matrix_qda)
##       Predicted
## Actual Down Up
##   Down    0 43
##   Up      0 61
cat("Overall accuracy with QDA:", accuracy_qda, "\n")
## Overall accuracy with QDA: 0.5865385

QDA is less accurate than logistic regression and LDA on the held-out data. Its confusion matrix shows that it predicts “Up” for every test week, so its 58.65% accuracy simply equals the fraction of Up weeks in the test set (61 of 104).

# (g) KNN with K=1
# Extract the relevant columns for training and testing
train_features <- training_data[, "Lag2", drop = FALSE]
test_features <- testing_data[, "Lag2", drop = FALSE]

# Run KNN on the test set (class::knn() classifies directly; there is no separate fitted model)
knn_model <- knn(train = train_features, test = test_features, cl = training_data$Direction, k = 1)
confusion_matrix_knn <- table(Actual = testing_data$Direction, Predicted = knn_model)
accuracy_knn <- sum(diag(confusion_matrix_knn)) / sum(confusion_matrix_knn)
print(confusion_matrix_knn)
##       Predicted
## Actual Down Up
##   Down   21 22
##   Up     30 31
cat("Overall accuracy with KNN:", accuracy_knn, "\n")
## Overall accuracy with KNN: 0.5

At 50%, KNN with K = 1 does no better than a coin flip on the held-out data and is the worst performer so far. Its errors are spread across both classes, so unlike QDA it shows no strong bias toward one class; it simply fails to find useful local structure in a single noisy predictor.
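One caveat (an aside, not part of the original run): class::knn() breaks voting ties at random, and with a single predictor many observations share identical Lag2 values, so K = 1 results can vary slightly between runs. Fixing the seed before the call makes them reproducible:

set.seed(1)  # any fixed seed; ties in knn() are otherwise broken at random
knn_model <- knn(train = train_features, test = test_features, cl = training_data$Direction, k = 1)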

# (h) Naive Bayes

# Convert Direction to a factor if not already
training_data$Direction <- as.factor(training_data$Direction)

# Fit Naive Bayes model
naive_bayes_model <- naiveBayes(Direction ~ Lag2, data = training_data)

# Extract Lag2 for testing
test_features_nb <- data.frame(Lag2 = testing_data$Lag2)

# Predict using Naive Bayes model
predicted_direction_nb <- predict(naive_bayes_model, newdata = test_features_nb)
confusion_matrix_nb <- table(Actual = testing_data$Direction, Predicted = predicted_direction_nb)
accuracy_nb <- sum(diag(confusion_matrix_nb)) / sum(confusion_matrix_nb)
print(confusion_matrix_nb)
##       Predicted
## Actual Down Up
##   Down    0 43
##   Up      0 61
cat("Overall accuracy with Naive Bayes:", accuracy_nb, "\n")
## Overall accuracy with Naive Bayes: 0.5865385

Naive Bayes produces exactly the same confusion matrix as QDA: it predicts “Up” for every test week and therefore achieves the same 58.65% accuracy, below logistic regression and LDA.

(i.) Based on the overall accuracy on the held-out 2009–2010 data:

Logistic regression with Lag2 and LDA tie for the best performance at 62.5%, with identical confusion matrices. QDA and naive Bayes follow at 58.7% (both predict “Up” for every test week), and KNN with K = 1 trails at 50%.
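All of the accuracy objects are still in scope, so the comparison can be assembled in one small data frame (a sketch):

data.frame(
  method   = c("Logistic (Lag2)", "LDA", "QDA", "KNN (K = 1)", "Naive Bayes"),
  accuracy = c(accuracy_lag2, accuracy_lda, accuracy_qda, accuracy_knn, accuracy_nb)
)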

(j.)

# Experimenting with different predictors
predictors_combination <- c("Lag2", "Lag3")  # Modify this with different combinations
training_data_sub <- subset(training_data, select = c("Direction", predictors_combination))
testing_data_sub <- subset(testing_data, select = c("Direction", predictors_combination))

# Fit the model (e.g., logistic regression)
model_sub <- glm(Direction ~ ., data = training_data_sub, family = "binomial")

# Predict using the model
predicted_direction_sub <- ifelse(predict(model_sub, newdata = testing_data_sub, type = "response") > 0.5, "Up", "Down")

# Evaluate performance
confusion_matrix_sub <- table(Actual = testing_data_sub$Direction, Predicted = predicted_direction_sub)
accuracy_sub <- sum(diag(confusion_matrix_sub)) / sum(confusion_matrix_sub)
print(confusion_matrix_sub)
##       Predicted
## Actual Down Up
##   Down    8 35
##   Up      4 57
cat("Overall accuracy with subset of predictors:", accuracy_sub, "\n")
## Overall accuracy with subset of predictors: 0.625

Adding Lag3 to Lag2 leaves the held-out accuracy at 62.5%, with a confusion matrix nearly identical to the Lag2-only model's, so Lag3 contributes essentially nothing. A more systematic search over predictors, interactions, and KNN values of K is sketched below.
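A fuller search under the same 1990–2008 / 2009–2010 split (a sketch; the exact numbers depend on the run and are not reported here):

# Several K values for KNN on Lag2 alone
for (k in c(1, 5, 10, 20, 50)) {
  set.seed(1)
  pred_k <- knn(train = train_features, test = test_features, cl = training_data$Direction, k = k)
  cat("K =", k, "accuracy:", mean(pred_k == testing_data$Direction), "\n")
}

# Logistic regression with an interaction between Lag1 and Lag2
inter_model <- glm(Direction ~ Lag1 * Lag2, data = training_data, family = "binomial")
inter_pred <- ifelse(predict(inter_model, newdata = testing_data, type = "response") > 0.5, "Up", "Down")
cat("Lag1*Lag2 accuracy:", mean(inter_pred == testing_data$Direction), "\n")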

Q14. In this problem, you will develop a model to predict whether a given car gets high or low gas mileage based on the Auto data set.

(a) Create a binary variable, mpg01, that contains a 1 if mpg contains a value above its median, and a 0 if mpg contains a value below its median. You can compute the median using the median() function. Note you may find it helpful to use the data.frame() function to create a single data set containing both mpg01 and the other Auto variables.

(b) Explore the data graphically in order to investigate the association between mpg01 and the other features. Which of the other features seem most likely to be useful in predicting mpg01? Scatterplots and boxplots may be useful tools to answer this question. Describe your findings.

(c) Split the data into a training set and a test set.

(d) Perform LDA on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b). What is the test error of the model obtained?

(e) Perform QDA on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b). What is the test error of the model obtained?

(f) Perform logistic regression on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b). What is the test error of the model obtained?

(g) Perform naive Bayes on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b). What is the test error of the model obtained?

(h) Perform KNN on the training data, with several values of K, in order to predict mpg01. Use only the variables that seemed most associated with mpg01 in (b). What test errors do you obtain? Which value of K seems to perform the best on this data set?

# Load the Auto dataset
data(Auto)
summary(Auto)
##       mpg          cylinders      displacement     horsepower        weight    
##  Min.   : 9.00   Min.   :3.000   Min.   : 68.0   Min.   : 46.0   Min.   :1613  
##  1st Qu.:17.00   1st Qu.:4.000   1st Qu.:105.0   1st Qu.: 75.0   1st Qu.:2225  
##  Median :22.75   Median :4.000   Median :151.0   Median : 93.5   Median :2804  
##  Mean   :23.45   Mean   :5.472   Mean   :194.4   Mean   :104.5   Mean   :2978  
##  3rd Qu.:29.00   3rd Qu.:8.000   3rd Qu.:275.8   3rd Qu.:126.0   3rd Qu.:3615  
##  Max.   :46.60   Max.   :8.000   Max.   :455.0   Max.   :230.0   Max.   :5140  
##                                                                                
##   acceleration        year           origin                      name    
##  Min.   : 8.00   Min.   :70.00   Min.   :1.000   amc matador       :  5  
##  1st Qu.:13.78   1st Qu.:73.00   1st Qu.:1.000   ford pinto        :  5  
##  Median :15.50   Median :76.00   Median :1.000   toyota corolla    :  5  
##  Mean   :15.54   Mean   :75.98   Mean   :1.577   amc gremlin       :  4  
##  3rd Qu.:17.02   3rd Qu.:79.00   3rd Qu.:2.000   amc hornet        :  4  
##  Max.   :24.80   Max.   :82.00   Max.   :3.000   chevrolet chevette:  4  
##                                                  (Other)           :365
# (a) Create binary variable mpg01
Auto$mpg01 <- ifelse(Auto$mpg > median(Auto$mpg), 1, 0)
Auto$mpg01 <- as.factor(Auto$mpg01)  # Convert to factor
# (b) Explore the data graphically
pairs(Auto[, c("mpg01", "cylinders", "horsepower", "weight", "acceleration")])

boxplot(weight ~ mpg01, data = Auto)

The scatterplot matrix shows clear separation: cars with above-median mpg (mpg01 = 1) tend to have fewer cylinders, lower horsepower, and lower weight, while acceleration separates the two classes less cleanly.

The boxplot of weight confirms this: low-mpg cars are markedly heavier. Features whose distributions show little overlap between the two mpg01 groups (weight, horsepower, and cylinders) are the most promising predictors; boxplots for the remaining candidates are sketched just below.
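Along the same lines as the weight plot (a sketch, not part of the original output):

par(mfrow = c(1, 3))  # three panels side by side
boxplot(cylinders ~ mpg01, data = Auto, main = "cylinders")
boxplot(horsepower ~ mpg01, data = Auto, main = "horsepower")
boxplot(acceleration ~ mpg01, data = Auto, main = "acceleration")
par(mfrow = c(1, 1))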

# (c) Split the data into a training set and a test set
set.seed(123)  # Set seed for reproducibility
train_index <- sample(1:nrow(Auto), 0.7 * nrow(Auto))
train_data <- Auto[train_index, ]
test_data <- Auto[-train_index, ]
# (d) Perform LDA
lda_model <- lda(mpg01 ~ cylinders + horsepower + weight + acceleration, data = train_data)
lda_pred <- predict(lda_model, test_data)
lda_error <- mean(lda_pred$class != test_data$mpg01)
print(paste("LDA Test Error:", lda_error))
## [1] "LDA Test Error: 0.110169491525424"
# (e) Perform QDA
qda_model <- qda(mpg01 ~ cylinders + horsepower + weight + acceleration, data = train_data)
qda_pred <- predict(qda_model, test_data)
qda_error <- mean(qda_pred$class != test_data$mpg01)
print(paste("QDA Test Error:", qda_error))
## [1] "QDA Test Error: 0.0932203389830508"
# (f) Perform Logistic Regression
logreg_model <- glm(mpg01 ~ cylinders + horsepower + weight + acceleration, family = binomial, data = train_data)
logreg_pred <- predict(logreg_model, test_data, type = "response")
logreg_pred_class <- ifelse(logreg_pred > 0.5, 1, 0)
logreg_error <- mean(logreg_pred_class != test_data$mpg01)
print(paste("Logistic Regression Test Error:", logreg_error))
## [1] "Logistic Regression Test Error: 0.0847457627118644"
# (g) Perform Naive Bayes using the naivebayes package
nb_model <- naive_bayes(mpg01 ~ cylinders + horsepower + weight + acceleration, data = train_data)

# Predict classes directly, passing only the model's predictors so that
# predict.naive_bayes() does not warn about extra columns in newdata
nb_vars <- c("cylinders", "horsepower", "weight", "acceleration")
nb_pred_class <- predict(nb_model, newdata = test_data[, nb_vars])

# Compare factor to factor. Caution: as.numeric() on the mpg01 factor returns
# the internal level codes (1/2), not the labels (0/1); using it here inflated
# the apparent test error to a spurious 0.92, so the factors are compared directly.
nb_error <- mean(nb_pred_class != test_data$mpg01)
print(paste("Naive Bayes Test Error:", nb_error))
# (h) Perform KNN with different values of K
k_values <- c(1, 3, 5, 7)  # Example K values to try
for (k in k_values) {
  knn_pred <- knn(train_data[, c("cylinders", "horsepower", "weight", "acceleration")],
                  test_data[, c("cylinders", "horsepower", "weight", "acceleration")],
                  train_data$mpg01, k = k)
  knn_error <- mean(knn_pred != test_data$mpg01)
  print(paste("KNN Test Error (K =", k, "):", knn_error))
}
## [1] "KNN Test Error (K = 1 ): 0.194915254237288"
## [1] "KNN Test Error (K = 3 ): 0.144067796610169"
## [1] "KNN Test Error (K = 5 ): 0.127118644067797"
## [1] "KNN Test Error (K = 7 ): 0.127118644067797"