Introduction:

In this homework, you will apply logistic regression to a real-world dataset: the Pima Indians Diabetes Database. This dataset contains medical records from 768 women of Pima Indian heritage, aged 21 or older, and is used to predict the onset of diabetes (binary outcome: 0 = no diabetes, 1 = diabetes) based on physiological measurements.

The data is publicly available from the UCI Machine Learning Repository and can be imported directly.

Dataset URL: https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv

Columns (no header in the CSV, so we need to assign them manually):

  1. Pregnancies: Number of times pregnant
  2. Glucose: Plasma glucose concentration (2-hour test)
  3. BloodPressure: Diastolic blood pressure (mm Hg)
  4. SkinThickness: Triceps skin fold thickness (mm)
  5. Insulin: 2-hour serum insulin (mu U/ml)
  6. BMI: Body mass index (weight in kg/(height in m)^2)
  7. DiabetesPedigreeFunction: Diabetes pedigree function (a function scoring genetic risk)
  8. Age: Age in years
  9. Outcome: Class variable (0 = no diabetes, 1 = diabetes)

Task Overview: You will load the data, build a logistic regression model to predict diabetes onset using a subset of predictors (Glucose, BMI, Age), interpret the model, evaluate it with a confusion matrix and metrics, and analyze the ROC curve and AUC.

Cleaning the dataset

Don’t change the following code:

library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr     1.2.0     ✔ readr     2.2.0
## ✔ forcats   1.0.1     ✔ stringr   1.6.0
## ✔ ggplot2   4.0.2     ✔ tibble    3.3.1
## ✔ lubridate 1.9.5     ✔ tidyr     1.3.2
## ✔ purrr     1.2.1     
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
url <- "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"

data <- read.csv(url, header = FALSE)

colnames(data) <- c("Pregnancies", "Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI", "DiabetesPedigreeFunction", "Age", "Outcome")

data$Outcome <- as.factor(data$Outcome)

# Handle missing values (replace 0s with NA because 0 makes no sense here)
data$Glucose[data$Glucose == 0] <- NA
data$BloodPressure[data$BloodPressure == 0] <- NA
data$BMI[data$BMI == 0] <- NA


colSums(is.na(data))
##              Pregnancies                  Glucose            BloodPressure 
##                        0                        5                       35 
##            SkinThickness                  Insulin                      BMI 
##                        0                        0                       11 
## DiabetesPedigreeFunction                      Age                  Outcome 
##                        0                        0                        0

Question 1: Create and Interpret a Logistic Regression Model

Fit a logistic regression model to predict Outcome using Glucose, BMI, and Age.

# Keep only rows with no missing values in Glucose, BMI, or Age
data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]

# Numeric 0/1 version of the outcome (used later for the confusion matrix)
data_subset$Outcome_num <- ifelse(data_subset$Outcome == "1", 1, 0)

# Logistic regression: binomial family with the default logit link
model <- glm(Outcome ~ Glucose + BMI + Age, 
             data = data_subset, 
             family = binomial)

summary(model)
## 
## Call:
## glm(formula = Outcome ~ Glucose + BMI + Age, family = binomial, 
##     data = data_subset)
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -9.032377   0.711037 -12.703  < 2e-16 ***
## Glucose      0.035548   0.003481  10.212  < 2e-16 ***
## BMI          0.089753   0.014377   6.243  4.3e-10 ***
## Age          0.028699   0.007809   3.675 0.000238 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 974.75  on 751  degrees of freedom
## Residual deviance: 724.96  on 748  degrees of freedom
## AIC: 732.96
## 
## Number of Fisher Scoring iterations: 4
# McFadden's pseudo R-squared: 1 - residual deviance / null deviance
model_r2 <- 1 - (model$deviance / model$null.deviance)
model_r2
## [1] 0.25626

What does the intercept represent (log-odds of diabetes when predictors are zero)?

The intercept (-9.03) represents the log-odds of having diabetes when Glucose, BMI, and Age are all 0. This value has no direct real-world meaning, since those predictor values are physiologically impossible, but it anchors the model: the other coefficients describe how the log-odds change relative to this baseline.
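
To see what that baseline implies on the probability scale (a quick sketch, not required by the assignment), the intercept can be passed through the inverse-logit function:

# Inverse-logit of the intercept: the fitted probability of diabetes
# when Glucose, BMI, and Age are all 0 (an unrealistic baseline).
plogis(coef(model)["(Intercept)"])
# plogis(-9.032) is roughly 0.00012, i.e., essentially zero.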

For each predictor (Glucose, BMI, Age), does a one-unit increase raise or lower the odds of diabetes? Are they significant (p-value < 0.05)?

The coefficients for Glucose, BMI, and Age are all positive, so a one-unit increase in any of them raises the odds of diabetes, holding the other predictors fixed. All three predictors are statistically significant: the p-values are < 2e-16 for Glucose, 4.3e-10 for BMI, and 0.000238 for Age, each well below 0.05.
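
One way to make these effect sizes concrete (a small sketch using only the fitted model above) is to exponentiate the coefficients, which converts changes in log-odds into odds ratios:

# Odds ratios: the multiplicative change in the odds of diabetes
# for a one-unit increase in each predictor.
exp(coef(model))
# From the estimates above: exp(0.0355) is about 1.036 for Glucose,
# exp(0.0898) about 1.094 for BMI, and exp(0.0287) about 1.029 for Age,
# so each one-unit increase multiplies the odds by a factor greater than 1.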

Question 2: Confusion Matrix and Important Metrics

Calculate and report the metrics:

- Accuracy: (TP + TN) / Total
- Sensitivity (Recall): TP / (TP + FN)
- Specificity: TN / (TN + FP)
- Precision: TP / (TP + FP)

Use the following starter code:

# Keep only rows with no missing values in Glucose, BMI, or Age
data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]

# Create a numeric version of the outcome (0 = no diabetes, 1 = diabetes).
# This is required for calculating confusion matrices.
data_subset$Outcome_num <- ifelse(data_subset$Outcome == "1", 1, 0)

# Predicted probabilities
predicted_probs <- predict(model, type = "response")

# Predicted classes
predicted_class <- factor(ifelse(predicted_probs > 0.5, 1, 0), levels = c(0, 1))

# Confusion matrix
actual_class <- factor(data_subset$Outcome_num, levels = c(0, 1))
conf_matrix <- table(Predicted = predicted_class, Actual = actual_class)

conf_matrix
##          Actual
## Predicted   0   1
##         0 429 114
##         1  59 150
# Extract Values:
TN <- conf_matrix["0", "0"]
FP <- conf_matrix["1", "0"]
FN <- conf_matrix["0", "1"]
TP <- conf_matrix["1", "1"]

# Metrics    
accuracy <- (TP + TN) / sum(conf_matrix)
sensitivity <- TP / (TP + FN)
specificity <- TN / (TN + FP)
precision <- TP / (TP + FP)

cat("Accuracy:", round(accuracy, 3), "\nSensitivity:", round(sensitivity, 3), "\nSpecificity:", round(specificity, 3), "\nPrecision:", round(precision, 3))
## Accuracy: 0.77 
## Sensitivity: 0.568 
## Specificity: 0.879 
## Precision: 0.718
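
As an optional cross-check (assuming the caret package is installed; it is not part of the starter code), caret's confusionMatrix() reproduces these metrics from the same two factors:

# Optional cross-check with caret (an extra step, not required).
# positive = "1" tells caret that diabetes is the positive class.
library(caret)
confusionMatrix(data = predicted_class,
                reference = actual_class,
                positive = "1")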

Interpret: How well does the model perform? Is it better at detecting diabetes (sensitivity) or non-diabetes (specificity)? Why might this matter for medical diagnosis?

The model's accuracy is 0.77, its sensitivity is 0.568, and its specificity is 0.879.

The model performs moderately well overall. Sensitivity measures how well it detects people who actually have diabetes; specificity measures how well it identifies people who do not. Since specificity (0.879) is much higher than sensitivity (0.568), the model is better at identifying non-diabetes cases than at catching diabetes cases. This matters for medical diagnosis because a false negative, a missed diabetes case, could delay treatment, so the low sensitivity is the more serious weakness here.

Question 3: ROC Curve, AUC, and Interpretation

# Sweep classification thresholds from 0 to 1 in steps of 0.01
thresholds <- seq(0, 1, by = 0.01)

roc_data <- data.frame(
  threshold = thresholds,
  sensitivity = NA,
  specificity = NA
)

# At each threshold, classify every observation and record
# the resulting sensitivity and specificity
for (i in seq_along(thresholds)) {
  predicted_temp <- ifelse(predicted_probs > thresholds[i], 1, 0)
  
  TP_temp <- sum(predicted_temp == 1 & data_subset$Outcome_num == 1)
  TN_temp <- sum(predicted_temp == 0 & data_subset$Outcome_num == 0)
  FP_temp <- sum(predicted_temp == 1 & data_subset$Outcome_num == 0)
  FN_temp <- sum(predicted_temp == 0 & data_subset$Outcome_num == 1)
  
  roc_data$sensitivity[i] <- TP_temp / (TP_temp + FN_temp)
  roc_data$specificity[i] <- TN_temp / (TN_temp + FP_temp)
}

# The ROC curve plots sensitivity against the false positive rate (1 - specificity)
roc_data$false_positive_rate <- 1 - roc_data$specificity

plot(roc_data$false_positive_rate, roc_data$sensitivity,
     type = "l",
     main = "ROC Curve",
     xlab = "False Positive Rate",
     ylab = "Sensitivity")

# Diagonal reference line: the performance of random guessing
abline(0, 1, lty = 2)

# Trapezoidal-rule approximation of the area under the ROC curve.
# The false positive rate decreases as the threshold rises, so the
# signed sum comes out negative; abs() flips it to the usual AUC.
auc_value <- sum(diff(roc_data$false_positive_rate) *
                   (head(roc_data$sensitivity, -1) + tail(roc_data$sensitivity, -1)) / 2)

auc_value <- abs(auc_value)
auc_value
## [1] 0.8280784
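
To validate the hand-rolled trapezoid estimate (a sketch assuming the pROC package is available; it is not used elsewhere in this homework), pROC computes the AUC directly from the predicted probabilities:

# Optional cross-check of the manual AUC calculation.
library(pROC)
roc_obj <- roc(response = data_subset$Outcome_num,
               predictor = predicted_probs)
auc(roc_obj)
# This should land very close to the 0.828 above; any small difference
# comes from pROC using every observed probability as a cut point
# rather than a fixed grid of 101 thresholds.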

What does AUC indicate (0.5 = random, 1.0 = perfect)?

The AUC value is 0.828. AUC measures how well the model separates people with diabetes from people without diabetes: an AUC of 0.5 means the model is no better than random guessing, while an AUC of 1.0 means perfect classification. An AUC of 0.828 is well above 0.5 and is generally considered good discrimination; equivalently, if we pick one diabetic and one non-diabetic person at random, the model assigns the diabetic the higher predicted probability about 83% of the time.

For diabetes diagnosis, prioritize sensitivity (catching cases) or specificity (avoiding false positives)? Suggest a threshold and explain.

For diabetes diagnosis, I would prioritize sensitivity, because it is more important to catch people who may have diabetes. A lower threshold, such as 0.4 instead of 0.5, would classify more borderline cases as positive, increasing sensitivity and reducing false negatives. This comes at the cost of more false positives, but in medical screening a follow-up test for a false positive is usually far less harmful than missing someone who actually has diabetes. The sketch below shows one way to check this trade-off.
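
To illustrate the suggested trade-off (a sketch reusing the objects defined above; the exact numbers depend on the fitted model), the confusion matrix can be recomputed at the 0.4 cutoff:

# Recompute predictions at the lower 0.4 threshold suggested above.
predicted_class_04 <- factor(ifelse(predicted_probs > 0.4, 1, 0),
                             levels = c(0, 1))
conf_matrix_04 <- table(Predicted = predicted_class_04, Actual = actual_class)
conf_matrix_04

# Sensitivity should rise (fewer missed diabetes cases) while
# specificity falls somewhat relative to the 0.5 threshold.
conf_matrix_04["1", "1"] / sum(conf_matrix_04[, "1"])  # sensitivity
conf_matrix_04["0", "0"] / sum(conf_matrix_04[, "0"])  # specificity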