In this homework, you will apply logistic regression to a real-world dataset: the Pima Indians Diabetes Database. This dataset contains medical records from 768 women of Pima Indian heritage, aged 21 or older, and is used to predict the onset of diabetes (binary outcome: 0 = no diabetes, 1 = diabetes) based on physiological measurements.
The data is publicly available from the UCI Machine Learning Repository and can be imported directly.
Dataset URL: https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv
Columns (no header in the CSV, so we need to assign them manually): Pregnancies, Glucose, BloodPressure, SkinThickness, Insulin, BMI, DiabetesPedigreeFunction, Age, Outcome.
Task Overview: You will load the data, build a logistic regression model to predict diabetes onset using a subset of predictors (Glucose, BMI, Age), interpret the model, evaluate it with a confusion matrix and metrics, and analyze the ROC curve and AUC.
Cleaning the dataset
Don’t change the following code.
library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr 1.1.4 ✔ readr 2.1.5
## ✔ forcats 1.0.1 ✔ stringr 1.5.2
## ✔ ggplot2 4.0.0 ✔ tibble 3.3.0
## ✔ lubridate 1.9.4 ✔ tidyr 1.3.1
## ✔ purrr 1.1.0
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
url <- "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
data <- read.csv(url, header = FALSE)
colnames(data) <- c("Pregnancies", "Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI", "DiabetesPedigreeFunction", "Age", "Outcome")
data$Outcome <- as.factor(data$Outcome)
# Handle missing values (replace 0s with NA because 0 makes no sense here)
data$Glucose[data$Glucose == 0] <- NA
data$BloodPressure[data$BloodPressure == 0] <- NA
data$BMI[data$BMI == 0] <- NA
colSums(is.na(data))
## Pregnancies Glucose BloodPressure
## 0 5 35
## SkinThickness Insulin BMI
## 0 0 11
## DiabetesPedigreeFunction Age Outcome
## 0 0 0
head(data)
## Pregnancies Glucose BloodPressure SkinThickness Insulin BMI
## 1 6 148 72 35 0 33.6
## 2 1 85 66 29 0 26.6
## 3 8 183 64 0 0 23.3
## 4 1 89 66 23 94 28.1
## 5 0 137 40 35 168 43.1
## 6 5 116 74 0 0 25.6
## DiabetesPedigreeFunction Age Outcome
## 1 0.627 50 1
## 2 0.351 31 0
## 3 0.672 32 1
## 4 0.167 21 0
## 5 2.288 33 1
## 6 0.201 30 0
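To connect the NA counts above to the model output in Question 1, here is a quick optional check (not part of the original assignment) of how many rows are complete for the three predictors used there:
# 752 of the 768 rows have Glucose, BMI, and Age; glm() drops the other 16
sum(complete.cases(data[, c("Glucose", "BMI", "Age")]))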
Question 1: Create and Interpret a Logistic Regression Model
Fit a logistic regression model to predict Outcome using Glucose, BMI, and Age.
## Enter your code here
logistic_model <- glm(Outcome ~ Glucose + BMI + Age, data = data, family = "binomial")
summary(logistic_model)
##
## Call:
## glm(formula = Outcome ~ Glucose + BMI + Age, family = "binomial",
## data = data)
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -9.032377 0.711037 -12.703 < 2e-16 ***
## Glucose 0.035548 0.003481 10.212 < 2e-16 ***
## BMI 0.089753 0.014377 6.243 4.3e-10 ***
## Age 0.028699 0.007809 3.675 0.000238 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 974.75 on 751 degrees of freedom
## Residual deviance: 724.96 on 748 degrees of freedom
## (16 observations deleted due to missingness)
## AIC: 732.96
##
## Number of Fisher Scoring iterations: 4
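# McFadden's pseudo R-squared: 1 - (residual deviance / null deviance)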
R2 <- 1 - (logistic_model$deviance / logistic_model$null.deviance)
R2
## [1] 0.25626
The McFadden pseudo R² of 0.256 indicates that the model explains about 25.6% of the variation in diabetes outcomes.
What does the intercept represent (log-odds of diabetes when predictors are zero)?
The intercept of -9.03 represents the log-odds of diabetes when Glucose, BMI, and Age are all zero, values that cannot occur in this dataset. Therefore, the intercept is not interpretable in a practical way.
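As an optional sanity check, plogis() converts log-odds to a probability, confirming that the baseline implied by the intercept is essentially zero:
# Probability implied by the intercept alone (Glucose = BMI = Age = 0)
plogis(-9.032377)  # ~0.00012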
For each predictor (Glucose, BMI, Age), does a one-unit increase raise or lower the odds of diabetes? Are they significant (p-value < 0.05)?
For each of the three predictors (Glucose, BMI, and Age), the coefficient is positive, so a one-unit increase raises the odds of diabetes: by about 3.6% for Glucose, 9.4% for BMI, and 2.9% for Age (obtained by exponentiating the coefficients). All three are statistically significant (p < 0.001), so there is strong evidence that these predictors have a real effect on the odds of diabetes.
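As a quick check, the odds ratios quoted above can be computed by exponentiating the fitted coefficients:
# Odds ratios: exp(beta) for each term
exp(coef(logistic_model))  # roughly 1.036 (Glucose), 1.094 (BMI), 1.029 (Age)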
Question 2: Confusion Matrix and Important Metrics
Predict probabilities using the fitted model.
Create predicted classes with a 0.5 threshold (1 if probability > 0.5, else 0).
Build a confusion matrix (Predicted vs. Actual Outcome).
Calculate and report the metrics:
Accuracy: (TP + TN) / Total
Sensitivity (Recall): TP / (TP + FN)
Specificity: TN / (TN + FP)
Precision: TP / (TP + FP)
Use the following starter code:
# Keep only rows with no missing values in Glucose, BMI, or Age
data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]
# Create a numeric version of the outcome (0 = no diabetes, 1 = diabetes).
# This is required for building the confusion matrix.
data_subset$Outcome_num <- ifelse(data_subset$Outcome == "1", 1, 0)
# Predicted probabilities (glm already dropped the 16 incomplete rows, so fitted values align with data_subset)
pred_probs <- logistic_model$fitted.values
# Predicted classes
pred_classes <- ifelse(pred_probs > 0.5, 1, 0)
# Confusion matrix
confusion <- table(
  Predicted = factor(pred_classes, levels = c(0, 1)),
  Actual = factor(data_subset$Outcome_num, levels = c(0, 1))
)
confusion
## Actual
## Predicted 0 1
## 0 429 114
## 1 59 150
# Extract values from the confusion matrix (rows = Predicted, columns = Actual)
TN <- confusion["0", "0"]  # 429
FP <- confusion["1", "0"]  # 59
FN <- confusion["0", "1"]  # 114
TP <- confusion["1", "1"]  # 150
#Metrics
accuracy <- (TP + TN) / (TP + TN + FP + FN)
sensitivity <- TP / (TP + FN)
specificity <- TN / (TN + FP)
precision <- TP / (TP + FP)
cat("Accuracy:", round(accuracy, 3), "\nSensitivity:", round(sensitivity, 3), "\nSpecificity:", round(specificity, 3), "\nPrecision:", round(precision, 3))
## Accuracy: 0.77
## Sensitivity: 0.568
## Specificity: 0.879
## Precision: 0.718
Interpret: How well does the model perform? Is it better at detecting diabetes (sensitivity) or non-diabetes (specificity)? Why might this matter for medical diagnosis?
The model has good overall accuracy (77%), but its performance is not balanced: it is much stronger at identifying non-diabetic individuals (specificity = 88%) than at catching actual diabetes cases (sensitivity = 57%). For medical diagnosis this asymmetry matters, because a false negative (a missed diabetes case) delays treatment, while a false positive only triggers further testing. Overall, these results indicate the model performs reasonably well for a binary classification problem on medical data, but the default 0.5 threshold favors specificity at the expense of sensitivity.
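As an optional extra beyond the assignment, the F1 score summarizes the trade-off between the precision and sensitivity computed above in a single number:
# F1 score: harmonic mean of precision and sensitivity (recall)
f1 <- 2 * precision * sensitivity / (precision + sensitivity)
round(f1, 3)  # ~0.634 given the values above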
Question 3: ROC Curve, AUC, and Interpretation
Plot the ROC curve, use the “data_subset” from Q2.
Calculate AUC.
library(pROC)
## Warning: package 'pROC' was built under R version 4.5.2
## Type 'citation("pROC")' for a citation.
##
## Attaching package: 'pROC'
## The following objects are masked from 'package:stats':
##
## cov, smooth, var
# Enter your code here
ROC_ob <- roc(response = data_subset$Outcome,
              predictor = logistic_model$fitted.values,
              direction = "<")
## Setting levels: control = 0, case = 1
# Print the AUC value
auc_value <- auc(ROC_ob)
auc_value
## Area under the curve: 0.828
# ROC curve with AUC displayed
plot.roc(ROC_ob, print.auc = TRUE, legacy.axes = TRUE,
         xlab = "False Positive Rate (1 - Specificity)",
         ylab = "True Positive Rate (Sensitivity)",
         col = "purple", lwd = 2, main = "ROC Curve for Diabetes Prediction")
What does AUC indicate (0.5 = random, 1.0 = perfect)?
AUC measures how well the model distinguishes between diabetic and non-diabetic cases across all possible classification thresholds: 0.5 means the model is no better than random guessing, while 1.0 means perfect separation. An AUC of 0.828 shows the model does a good job of telling apart those with and without diabetes.
For diabetes diagnosis, prioritize sensitivity (catching cases) or specificity (avoiding false positives)? Suggest a threshold and explain.
For diabetes diagnosis, it is better to prioritize sensitivity, because catching as many true cases as possible is crucial for early treatment and for reducing complications; missing a diabetes case (a false negative) can have serious health consequences. I suggest lowering the threshold from 0.5 to around 0.3 or 0.4 to increase sensitivity, even though this will also produce more false positives. The model would then flag more people as potentially diabetic for further testing, which is safer than missing real cases.
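A minimal sketch of how this trade-off could be checked with pROC's coords() (the 0.3 threshold is the suggestion above, not a value prescribed by the assignment):
# Sensitivity and specificity at a lowered threshold of 0.3
coords(ROC_ob, x = 0.3, input = "threshold",
       ret = c("threshold", "sensitivity", "specificity"))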