Introduction:

In this homework, you will apply logistic regression to a real-world dataset: the Pima Indians Diabetes Database. This dataset contains medical records from 768 women of Pima Indian heritage, aged 21 or older, and is used to predict the onset of diabetes (binary outcome: 0 = no diabetes, 1 = diabetes) based on physiological measurements.

The data is publicly available from the UCI Machine Learning Repository and can be imported directly.

Dataset URL: https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv

Columns (no header in the CSV, so we need to assign them manually):

  1. Pregnancies: Number of times pregnant
  2. Glucose: Plasma glucose concentration (2-hour test)
  3. BloodPressure: Diastolic blood pressure (mm Hg)
  4. SkinThickness: Triceps skin fold thickness (mm)
  5. Insulin: 2-hour serum insulin (mu U/ml)
  6. BMI: Body mass index (weight in kg/(height in m)^2)
  7. DiabetesPedigreeFunction: Diabetes pedigree function (a function scoring genetic risk)
  8. Age: Age in years
  9. Outcome: Class variable (0 = no diabetes, 1 = diabetes)

Task Overview: You will load the data, build a logistic regression model to predict diabetes onset using a subset of predictors (Glucose, BMI, Age), interpret the model, evaluate it with a confusion matrix and metrics, and analyze the ROC curve and AUC.

Cleaning the dataset

Don’t change the following code.

library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr     1.1.4     ✔ readr     2.1.5
## ✔ forcats   1.0.0     ✔ stringr   1.5.2
## ✔ ggplot2   4.0.0     ✔ tibble    3.3.0
## ✔ lubridate 1.9.4     ✔ tidyr     1.3.1
## ✔ purrr     1.1.0     
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
url <- "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"

data <- read.csv(url, header = FALSE)

colnames(data) <- c("Pregnancies", "Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI", "DiabetesPedigreeFunction", "Age", "Outcome")

data$Outcome <- as.factor(data$Outcome)

# Handle missing values (replace 0s with NA because 0 makes no sense here)
data$Glucose[data$Glucose == 0] <- NA
data$BloodPressure[data$BloodPressure == 0] <- NA
data$BMI[data$BMI == 0] <- NA


colSums(is.na(data))
##              Pregnancies                  Glucose            BloodPressure 
##                        0                        5                       35 
##            SkinThickness                  Insulin                      BMI 
##                        0                        0                       11 
## DiabetesPedigreeFunction                      Age                  Outcome 
##                        0                        0                        0
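For reference only (the cleaning chunk above should stay exactly as given), the same zero-to-NA replacement can be written with the tidyverse functions already attached. A minimal equivalent sketch:

# Equivalent tidyverse idiom: treat implausible zeros as missing
# in the three clinical columns at once
data <- data %>%
  mutate(across(c(Glucose, BloodPressure, BMI), ~ na_if(.x, 0)))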

Question 1: Create and Interpret a Logistic Regression Model

Fit a logistic regression model to predict Outcome using Glucose, BMI, and Age.

## Enter your code here
logistic <- glm(Outcome ~ Glucose + BMI + Age, data = data, family = "binomial")
summary(logistic)
## 
## Call:
## glm(formula = Outcome ~ Glucose + BMI + Age, family = "binomial", 
##     data = data)
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -9.032377   0.711037 -12.703  < 2e-16 ***
## Glucose      0.035548   0.003481  10.212  < 2e-16 ***
## BMI          0.089753   0.014377   6.243  4.3e-10 ***
## Age          0.028699   0.007809   3.675 0.000238 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 974.75  on 751  degrees of freedom
## Residual deviance: 724.96  on 748  degrees of freedom
##   (16 observations deleted due to missingness)
## AIC: 732.96
## 
## Number of Fisher Scoring iterations: 4
# McFadden's pseudo-R-squared: 1 - (residual deviance / null deviance)
r_square <- 1 - (logistic$deviance/logistic$null.deviance)

r_square
## [1] 0.25626
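As an optional complement to the pseudo-R-squared, a likelihood-ratio test checks whether the model improves significantly on the intercept-only null model. A quick sketch using quantities already stored on the glm object:

# Likelihood-ratio test: under the null hypothesis, the drop in deviance
# (null minus residual) is chi-squared with df = number of predictors
with(logistic, pchisq(null.deviance - deviance,
                      df.null - df.residual, lower.tail = FALSE))

Here the drop is 974.75 - 724.96 ≈ 249.8 on 3 degrees of freedom, so the p-value is effectively zero: the three predictors jointly add real explanatory power.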

What does the intercept represent (log-odds of diabetes when predictors are zero)? The intercept represents the log-odds of diabetes when all predictors (Glucose, BMI, and Age) are zero, i.e., the model's baseline before glucose, BMI, or age have any effect. Since a glucose or BMI of zero is not physiologically possible (and everyone in the data is at least 21), the intercept is a mathematical anchor rather than a realistic prediction.

For each predictor (Glucose, BMI, Age), does a one-unit increase raise or lower the odds of diabetes? Are they significant (p-value < 0.05)? All three predictors are statistically significant, and all three coefficients are positive, so a one-unit increase in Glucose, BMI, or Age raises the odds of diabetes:

  1. Glucose: p < 2e-16
  2. BMI: p = 4.3e-10
  3. Age: p = 0.000238
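To make "raises the odds" concrete, the coefficients can be exponentiated into odds ratios. This is an optional check beyond the assigned code, using only the model fitted above:

# Odds ratios: exp(coefficient) gives the multiplicative change in the
# odds of diabetes per one-unit increase in each predictor
exp(coef(logistic))
# From the summary above: exp(0.035548) is about 1.036 for Glucose,
# exp(0.089753) about 1.094 for BMI, and exp(0.028699) about 1.029 for Age.
# On the probability scale the intercept, plogis(coef(logistic)[1]),
# is essentially zero, consistent with the interpretation above.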

Question 2: Confusion Matrix and Important Metrics

Calculate and report the metrics:

  1. Accuracy: (TP + TN) / Total
  2. Sensitivity (Recall): TP / (TP + FN)
  3. Specificity: TN / (TN + FP)
  4. Precision: TP / (TP + FP)

Use the following starter code

# Keep only rows with no missing values in Glucose, BMI, or Age
data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]

# Create a numeric version of the outcome (0 = no diabetes, 1 = diabetes).
# This is required for calculating confusion matrices.
data_subset$diagnose_num <- ifelse(data_subset$Outcome == "1", 1, 0)

# Predicted probabilities from the fitted model. These line up row-for-row
# with data_subset because glm() dropped the same incomplete rows when fitting.
predicted.probs <- logistic$fitted.values

# Predicted classes at the default 0.5 threshold
predicted.classes <- ifelse(predicted.probs > 0.5, 1, 0)


# Confusion matrix
confusion <- table(
  Predicted = factor(predicted.classes, levels = c(0, 1)),
  Actual   = factor(data_subset$diagnose_num, levels = c(0, 1))
)

confusion
##          Actual
## Predicted   0   1
##         0 429 114
##         1  59 150
# Extract values from the confusion matrix (rows = Predicted, cols = Actual)
TN <- confusion["0", "0"]   # 429 true negatives
FP <- confusion["1", "0"]   # 59 false positives
FN <- confusion["0", "1"]   # 114 false negatives
TP <- confusion["1", "1"]   # 150 true positives

# Metrics
accuracy <- (TP + TN) / (TP + TN + FP + FN)
sensitivity <- TP / (TP + FN)   
specificity <- TN / (TN + FP)  
precision <- TP / (TP + FP)     
f1_score <- 2 * (precision * sensitivity) / (precision + sensitivity)

# Print values
cat("Accuracy:    ", round(accuracy, 3), "\n")
## Accuracy:     0.77
cat("Sensitivity: ", round(sensitivity, 3), "\n")
## Sensitivity:  0.568
cat("Specificity: ", round(specificity, 3), "\n")
## Specificity:  0.879
cat("Precision:   ", round(precision, 3), "\n")
## Precision:    0.718
cat("F1 Score:    ", round(f1_score, 3), "\n")
## F1 Score:     0.634
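As an optional cross-check (assuming the caret package is installed; it is not part of the starter code), caret::confusionMatrix reports the same metrics in one call:

# install.packages("caret") # if needed
library(caret)
# positive = "1" makes diabetes the positive class, so Sensitivity,
# Specificity, and Pos Pred Value should match the hand-computed values
confusionMatrix(factor(predicted.classes, levels = c(0, 1)),
                factor(data_subset$diagnose_num, levels = c(0, 1)),
                positive = "1")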

Interpret: How well does the model perform? Is it better at detecting diabetes (sensitivity) or non-diabetes (specificity)? Why might this matter for medical diagnosis?

The model performs reasonably well, with 77% accuracy, so it classifies most patients correctly. However, specificity (about 88%) is considerably higher than sensitivity (about 57%), which means the model is better at identifying patients WITHOUT diabetes than patients WITH diabetes. This matters for medical diagnosis because low sensitivity produces more false negatives, i.e., diabetic patients labeled as healthy. That can be life-threatening if those individuals do not receive the treatment they need in time.

Question 3: ROC Curve, AUC, and Interpretation

# Enter your code here
# install.packages("pROC") # if needed
library(pROC)
## Type 'citation("pROC")' for a citation.
## 
## Attaching package: 'pROC'
## The following objects are masked from 'package:stats':
## 
##     cov, smooth, var
# Create a factor outcome with labels "no diabetes" and "diabetes"
data_subset$Outcome_label <- ifelse(data_subset$Outcome == "1",
                                    "diabetes", "no diabetes")

# ROC curve & AUC using data_subset
roc_obj <- roc(response  = data_subset$Outcome_label,
               predictor = logistic$fitted.values,
               levels    = c("no diabetes", "diabetes"),
               direction = "<")  


# Print AUC value
auc_val <- auc(roc_obj); auc_val
## Area under the curve: 0.828
# Plot ROC with AUC displayed
plot.roc(roc_obj, print.auc = TRUE, legacy.axes = TRUE,
         xlab = "False Positive Rate (1 - Specificity)",
         ylab = "True Positive Rate (Sensitivity)")

What does AUC indicate (0.5 = random, 1.0 = perfect)? The AUC = 0.828 indicates the model is good at distinguishing between patients with diabetes and those without it.

On the plot, the ROC curve is well above the diagonal “random guess” line, showing that the model performs much better than chance.

In plain words: if you randomly pick one person with diabetes and one without, the model has about an 82.8% chance of assigning the higher predicted probability to the person with diabetes.

For diabetes diagnosis, prioritize sensitivity (catching cases) or specificity (avoiding false positives)? Suggest a threshold and explain. For diabetes diagnosis, we should prioritize sensitivity (catching cases), because a false positive is far less harmful than a false negative. Missing someone who actually has diabetes (a false negative) can delay treatment and be life-threatening if that person is not given proper care in time. To prioritize sensitivity, we can lower the classification threshold below 0.5, for example to about 0.3, so the model labels more people as at risk and produces fewer false negatives (at the cost of more false positives).
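To see the trade-off concretely, one can re-score the predictions at the lower cutoff and query the ROC object. A sketch; the 0.3 threshold is illustrative, not a value fixed by the assignment:

# Re-classify at an illustrative 0.3 threshold
classes_03 <- ifelse(predicted.probs > 0.3, 1, 0)
table(Predicted = factor(classes_03, levels = c(0, 1)),
      Actual    = factor(data_subset$diagnose_num, levels = c(0, 1)))

# Sensitivity/specificity at that cutoff, straight from the ROC object
coords(roc_obj, x = 0.3, input = "threshold",
       ret = c("threshold", "sensitivity", "specificity"))

# pROC can also suggest the cutoff that maximizes Youden's J
# (sensitivity + specificity - 1)
coords(roc_obj, x = "best", best.method = "youden")

Lowering the threshold raises sensitivity at the cost of specificity, which is usually the right trade for a screening test.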