In this homework, you will apply logistic regression to a real-world dataset: the Pima Indians Diabetes Database. This dataset contains medical records from 768 women of Pima Indian heritage, aged 21 or older, and is used to predict the onset of diabetes (binary outcome: 0 = no diabetes, 1 = diabetes) based on physiological measurements.
The data is publicly available from the UCI Machine Learning Repository and can be imported directly.
Dataset URL: https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv
Columns (no header in the CSV, so we need to assign them manually): Pregnancies, Glucose, BloodPressure, SkinThickness, Insulin, BMI, DiabetesPedigreeFunction, Age, Outcome.
Task Overview: You will load the data, build a logistic regression model to predict diabetes onset using a subset of predictors (Glucose, BMI, Age), interpret the model, evaluate it with a confusion matrix and metrics, and analyze the ROC curve and AUC.
Cleaning the dataset
Don’t change the following code.
library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr 1.1.4 ✔ readr 2.1.6
## ✔ forcats 1.0.1 ✔ stringr 1.6.0
## ✔ ggplot2 4.0.1 ✔ tibble 3.3.0
## ✔ lubridate 1.9.4 ✔ tidyr 1.3.1
## ✔ purrr 1.2.0
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
url <- "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
data <- read.csv(url, header = FALSE)
colnames(data) <- c("Pregnancies", "Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI", "DiabetesPedigreeFunction", "Age", "Outcome")
data$Outcome <- as.factor(data$Outcome)
# Handle missing values (replace 0s with NA because 0 makes no sense here)
data$Glucose[data$Glucose == 0] <- NA
data$BloodPressure[data$BloodPressure == 0] <- NA
data$BMI[data$BMI == 0] <- NA
colSums(is.na(data))
## Pregnancies Glucose BloodPressure
## 0 5 35
## SkinThickness Insulin BMI
## 0 0 11
## DiabetesPedigreeFunction Age Outcome
## 0 0 0
Question 1: Create and Interpret a Logistic Regression Model
Fit a logistic regression model to predict Outcome using Glucose, BMI, and Age.
Provide the model summary.
Calculate and interpret R²: 1 - (model$deviance / model$null.deviance). What does it indicate about the model’s explanatory power?
## Enter your code here
# Keep only rows with complete data
data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]
# Fit logistic regression model (Outcome ~ Glucose + BMI + Age)
logit_model <- glm(Outcome ~ Glucose + BMI + Age, data = data_subset, family = binomial)
# Model summary
summary(logit_model)
##
## Call:
## glm(formula = Outcome ~ Glucose + BMI + Age, family = binomial,
## data = data_subset)
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -9.032377 0.711037 -12.703 < 2e-16 ***
## Glucose 0.035548 0.003481 10.212 < 2e-16 ***
## BMI 0.089753 0.014377 6.243 4.3e-10 ***
## Age 0.028699 0.007809 3.675 0.000238 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 974.75 on 751 degrees of freedom
## Residual deviance: 724.96 on 748 degrees of freedom
## AIC: 732.96
##
## Number of Fisher Scoring iterations: 4
# Calculate pseudo R²
r_squared <- 1 - (logit_model$deviance / logit_model$null.deviance)
r_squared
## [1] 0.25626
What does the intercept represent (log-odds of diabetes when predictors are zero)?
Answer: The intercept is the log-odds of diabetes when Glucose, BMI, and Age are all 0. It is not clinically meaningful on its own, since none of these values can realistically be zero.
For each predictor (Glucose, BMI, Age), does a one-unit increase raise or lower the odds of diabetes? Are they significant (p-value < 0.05)?
Answer:
Glucose: positive coefficient (0.0355) → higher glucose increases the odds of diabetes; significant (p < 2e-16).
BMI: positive coefficient (0.0898) → higher BMI increases the odds of diabetes; significant (p = 4.3e-10).
Age: positive coefficient (0.0287) → older age increases the odds of diabetes; also significant (p = 0.000238).
R² (pseudo R²): measures the proportion of the null deviance explained by the model. Here it is about 0.256, so the model explains roughly 26% of the deviance, indicating moderate explanatory power.
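To make the coefficient interpretation concrete, one optional check (not part of the required answer) is to exponentiate the coefficients, converting log-odds into odds ratios, using the logit_model fitted above:
# Odds ratios: exp(coefficient) is the multiplicative change in the odds
# of diabetes for a one-unit increase in that predictor
exp(coef(logit_model))
# Approximate (Wald) 95% confidence intervals on the odds-ratio scale
exp(confint.default(logit_model))
For example, exp(0.0355) ≈ 1.036, so each additional unit of Glucose increases the odds of diabetes by about 3.6%, holding BMI and Age fixed.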
Question 2: Confusion Matrix and Important Metrics
Predict probabilities using the fitted model.
Create predicted classes with a 0.5 threshold (1 if probability > 0.5, else 0).
Build a confusion matrix (Predicted vs. Actual Outcome).
Calculate and report the metrics:
Accuracy: (TP + TN) / Total
Sensitivity (Recall): TP / (TP + FN)
Specificity: TN / (TN + FP)
Precision: TP / (TP + FP)
Use the following starter code
# Keep only rows with no missing values in Glucose, BMI, or Age
data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]
# Create a numeric version of the outcome (0 = no diabetes, 1 = diabetes).
# This is required for calculating the confusion matrix.
data_subset$Outcome_num <- ifelse(data_subset$Outcome == "1", 1, 0)
# Predicted probabilities
pred_probs <- predict(logit_model, type = "response")
# Predicted classes
pred_classes <- ifelse(pred_probs > 0.5, 1, 0)
# Confusion matrix
conf_matrix <- table(Predicted = pred_classes, Actual = data_subset$Outcome_num)
conf_matrix
## Actual
## Predicted 0 1
## 0 429 114
## 1 59 150
# Extract values (rows = Predicted, columns = Actual)
TN <- conf_matrix[1, 1] # predicted 0, actual 0
FP <- conf_matrix[2, 1] # predicted 1, actual 0
FN <- conf_matrix[1, 2] # predicted 0, actual 1
TP <- conf_matrix[2, 2] # predicted 1, actual 1
# Metrics
accuracy <- (TP + TN) / (TP + TN + FP + FN)
sensitivity <- TP / (TP + FN)
specificity <- TN / (TN + FP)
precision <- TP / (TP + FP)
cat("Accuracy:", round(accuracy, 3), "\nSensitivity:", round(sensitivity, 3), "\nSpecificity:", round(specificity, 3), "\nPrecision:", round(precision, 3))
## Accuracy: 0.77
## Sensitivity: 0.568
## Specificity: 0.879
## Precision: 0.718
Interpret: How well does the model perform? Is it better at detecting diabetes (sensitivity) or non-diabetes (specificity)? Why might this matter for medical diagnosis?
Answer / Interpretation:
Accuracy (0.77): overall proportion correctly classified.
Sensitivity (Recall, 0.568): how well the model detects true diabetes cases.
Specificity (0.879): how well the model detects non-diabetes cases.
Precision (0.718): among predicted positives, the fraction that are true positives.
The model is clearly better at identifying non-diabetic patients (specificity 0.879) than diabetic ones (sensitivity 0.568); see the threshold sweep below.
Medical interpretation: for diabetes screening, high sensitivity is prioritized to catch as many true cases as possible. Specificity is also important to avoid false positives, but missing a true diabetic patient is more harmful.
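To see this trade-off directly, an optional sketch is to recompute both metrics over a grid of thresholds, reusing pred_probs and data_subset$Outcome_num from above:
# Sweep classification thresholds to expose the sensitivity/specificity trade-off
thresholds <- seq(0.1, 0.9, by = 0.1)
tradeoff <- sapply(thresholds, function(t) {
  pred <- ifelse(pred_probs > t, 1, 0)
  c(sensitivity = mean(pred[data_subset$Outcome_num == 1] == 1),
    specificity = mean(pred[data_subset$Outcome_num == 0] == 0))
})
round(rbind(threshold = thresholds, tradeoff), 3)
Lowering the threshold raises sensitivity at the cost of specificity, which is exactly the lever discussed in Question 3.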
Question 3: ROC Curve, AUC, and Interpretation
Plot the ROC curve; use the “data_subset” from Q2.
Calculate AUC.
# Enter your code here
library(pROC)
## Type 'citation("pROC")' for a citation.
##
## Attaching package: 'pROC'
## The following objects are masked from 'package:stats':
##
## cov, smooth, var
# ROC curve
roc_obj <- roc(data_subset$Outcome_num, pred_probs)
## Setting levels: control = 0, case = 1
## Setting direction: controls < cases
plot(roc_obj, main = "ROC Curve for Logistic Regression", col = "blue", lwd = 2)
# AUC
auc_value <- auc(roc_obj)
auc_value
## Area under the curve: 0.828
What does AUC indicate (0.5 = random, 1.0 = perfect)?
Answer / Interpretation:
AUC: measures the model’s ability to discriminate between positive (diabetes) and negative (no diabetes) cases.
Range: 0.5 → random guessing, 1.0 → perfect discrimination; higher AUC means better discrimination.
Here AUC ≈ 0.828, which indicates good (though not perfect) discrimination.
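As an optional sanity check on this interpretation: the AUC equals the proportion of diabetic/non-diabetic pairs in which the diabetic patient receives the higher predicted probability (ties counted as half). The following sketch reuses pred_probs and data_subset$Outcome_num from above:
# AUC as a pairwise ranking probability (Mann-Whitney form)
probs_pos <- pred_probs[data_subset$Outcome_num == 1]
probs_neg <- pred_probs[data_subset$Outcome_num == 0]
mean(outer(probs_pos, probs_neg, ">") + 0.5 * outer(probs_pos, probs_neg, "=="))
This should agree with auc(roc_obj) up to rounding.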
For diabetes diagnosis, prioritize sensitivity (catching cases) or specificity (avoiding false positives)? Suggest a threshold and explain.
Answer: For diabetes diagnosis, prioritize sensitivity: use a slightly lower threshold (e.g., 0.4 instead of 0.5) to catch more true positive cases, while tolerating some false positives.
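As a sketch of that suggestion (optional, not part of the required answer), you can reclassify at the 0.4 threshold, or let pROC propose a threshold that balances the two metrics via Youden’s J, reusing pred_probs and roc_obj from above:
# Reclassify at the lower 0.4 threshold to favor sensitivity
pred_04 <- ifelse(pred_probs > 0.4, 1, 0)
table(Predicted = pred_04, Actual = data_subset$Outcome_num)
# Threshold maximizing sensitivity + specificity (Youden's J) on the ROC curve
coords(roc_obj, "best", ret = c("threshold", "sensitivity", "specificity"))
Moving the threshold down shifts patients from the false-negative cell into the true-positive cell, raising sensitivity while lowering specificity.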