In this homework, you will apply logistic regression to a real-world dataset: the Pima Indians Diabetes Database. This dataset contains medical records from 768 women of Pima Indian heritage, aged 21 or older, and is used to predict the onset of diabetes (binary outcome: 0 = no diabetes, 1 = diabetes) based on physiological measurements.
The data is publicly available from the UCI Machine Learning Repository and can be imported directly.
Dataset URL: https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv
Columns (no header in the CSV, so we need to assign them manually): Pregnancies, Glucose, BloodPressure, SkinThickness, Insulin, BMI, DiabetesPedigreeFunction, Age, Outcome.
Task Overview: You will load the data, build a logistic regression model to predict diabetes onset using a subset of predictors (Glucose, BMI, Age), interpret the model, evaluate it with a confusion matrix and metrics, and analyze the ROC curve and AUC.
Cleaning the dataset
Don’t change the following code.
library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr 1.1.4 ✔ readr 2.1.5
## ✔ forcats 1.0.1 ✔ stringr 1.5.2
## ✔ ggplot2 4.0.0 ✔ tibble 3.3.0
## ✔ lubridate 1.9.4 ✔ tidyr 1.3.1
## ✔ purrr 1.1.0
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
url <- "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
data <- read.csv(url, header = FALSE)
colnames(data) <- c("Pregnancies", "Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI", "DiabetesPedigreeFunction", "Age", "Outcome")
data$Outcome <- as.factor(data$Outcome)
# Handle missing values (replace 0s with NA because 0 makes no sense here)
data$Glucose[data$Glucose == 0] <- NA
data$BloodPressure[data$BloodPressure == 0] <- NA
data$BMI[data$BMI == 0] <- NA
colSums(is.na(data))
## Pregnancies Glucose BloodPressure
## 0 5 35
## SkinThickness Insulin BMI
## 0 0 11
## DiabetesPedigreeFunction Age Outcome
## 0 0 0
Question 1: Create and Interpret a Logistic Regression Model
Fit a logistic regression model to predict Outcome using Glucose, BMI, and Age.
Provide the model summary.
Calculate and interpret R²: 1 - (model$deviance / model$null.deviance). What does it indicate about the model’s explanatory power?
logistic <- glm(Outcome ~ Glucose + BMI + Age, data=data, family="binomial")
summary(logistic)
##
## Call:
## glm(formula = Outcome ~ Glucose + BMI + Age, family = "binomial",
## data = data)
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -9.032377 0.711037 -12.703 < 2e-16 ***
## Glucose 0.035548 0.003481 10.212 < 2e-16 ***
## BMI 0.089753 0.014377 6.243 4.3e-10 ***
## Age 0.028699 0.007809 3.675 0.000238 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 974.75 on 751 degrees of freedom
## Residual deviance: 724.96 on 748 degrees of freedom
## (16 observations deleted due to missingness)
## AIC: 732.96
##
## Number of Fisher Scoring iterations: 4
r_square <- 1 - (logistic$deviance/logistic$null.deviance)
r_square
## [1] 0.25626
What does the R² represent?
This value is McFadden's pseudo-R²: the model accounts for only ~25.6% of the null deviance in the outcome. That modest explanatory power suggests we are likely missing key predictors in our model.
What does the intercept represent (log-odds of diabetes when predictors are zero)?
The intercept (-9.03) is the log-odds of diabetes when Glucose, BMI, and Age are all zero, which corresponds to an essentially zero probability of diabetes (~0.0001). This makes sense only as a mathematical baseline, because those predictor values are physiologically impossible.
For each predictor (Glucose, BMI, Age), does a one-unit increase raise or lower the odds of diabetes? Are they significant (p-value < 0.05)?
Holding the other predictors constant, a one-unit increase in Glucose raises the odds of diabetes by ~3.6%, an additional BMI point raises the odds by ~9.4%, and each additional year of Age raises the odds by ~2.9%. All three p-values are < 0.05, so every predictor is significant.
Question 2: Confusion Matrix and Important Metrics
Predict probabilities using the fitted model.
Create predicted classes with a 0.5 threshold (1 if probability > 0.5, else 0).
Build a confusion matrix (Predicted vs. Actual Outcome).
Calculate and report the metrics:
Accuracy: (TP + TN) / Total
Sensitivity (Recall): TP / (TP + FN)
Specificity: TN / (TN + FP)
Precision: TP / (TP + FP)
Use the following starter code
# Keep only rows with no missing values in Glucose, BMI, or Age
data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]
#Create a numeric version of the outcome (0 = no diabetes, 1 = diabetes).This is required for calculating confusion matrices.
data_subset$Outcome_num <- ifelse(data_subset$Outcome == "1", 1, 0)
# Predicted probabilities
predicted.probs <- logistic$fitted.values
# Predicted classes
predicted.classes <- ifelse(predicted.probs > 0.5, 1, 0)
# Confusion matrix
confusion <- table(
Predicted = factor(predicted.classes, levels = c(0, 1)),
Actual = factor(data_subset$Outcome_num, levels = c(0, 1))
)
confusion
## Actual
## Predicted 0 1
## 0 429 114
## 1 59 150
# Extract values from the confusion matrix (rows = Predicted, columns = Actual)
TN <- confusion["0", "0"] # 429
FP <- confusion["1", "0"] # 59
FN <- confusion["0", "1"] # 114
TP <- confusion["1", "1"] # 150
#Metrics
accuracy <- (TP + TN) / (TP + TN + FP + FN)
sensitivity <- TP / (TP + FN)
specificity <- TN / (TN + FP)
precision <- TP / (TP + FP)
cat("Accuracy:", round(accuracy, 3), "\nSensitivity:", round(sensitivity, 3), "\nSpecificity:", round(specificity, 3), "\nPrecision:", round(precision, 3))
## Accuracy: 0.77
## Sensitivity: 0.568
## Specificity: 0.879
## Precision: 0.718
Interpret: How well does the model perform? Is it better at detecting diabetes (sensitivity) or non-diabetes (specificity)? Why might this matter for medical diagnosis?
The model’s overall accuracy of 0.77 is decent, but it is much better at identifying non-diabetics (specificity 0.879) than diabetics (sensitivity 0.568). In other words, it does a good job of identifying healthy patients but misses roughly 43% of diabetic patients. This is concerning in a medical context, because false negatives can delay or entirely miss treatment.
Question 3: ROC Curve, AUC, and Interpretation
Plot the ROC curve, using the “data_subset” from Q2.
Calculate AUC.
library(pROC)
## Warning: package 'pROC' was built under R version 4.5.2
## Type 'citation("pROC")' for a citation.
##
## Attaching package: 'pROC'
## The following objects are masked from 'package:stats':
##
## cov, smooth, var
# ROC curve & AUC on the complete-case data (data_subset)
roc_obj <- roc(response = data_subset$Outcome,
predictor = logistic$fitted.values,
levels = c(0, 1),
direction = "<") # smaller prob = Healthy
# Print AUC value
auc_val <- auc(roc_obj); auc_val
## Area under the curve: 0.828
# Plot ROC with AUC displayed
plot.roc(roc_obj, print.auc = TRUE, legacy.axes = TRUE,
xlab = "False Positive Rate (1 - Specificity)",
ylab = "True Positive Rate (Sensitivity)")
What does AUC indicate (0.5 = random, 1.0 = perfect)?
An AUC of 0.828 means that if we pick one diabetic and one non-diabetic patient at random, the model assigns the diabetic patient the higher predicted probability ~83% of the time. This is a good score, but it leaves room for improvement.
For diabetes diagnosis, prioritize sensitivity (catching cases) or specificity (avoiding false positives)? Suggest a threshold and explain.
For diabetes diagnosis, we should prioritize sensitivity, because missing a true diabetes diagnosis has larger consequences than falsely flagging a healthy patient. We should lower the threshold from 0.5 to something closer to 0.25, so that the model’s sensitivity increases even at the expense of specificity. This way there will be fewer false negatives, which is worth the increase in false positives.