---
title: "HW 10"
author: "Enter your name here"
output: html_document
---
In this homework, you will apply logistic regression to a real-world dataset: the Pima Indians Diabetes Database. This dataset contains medical records from 768 women of Pima Indian heritage, aged 21 or older, and is used to predict the onset of diabetes (binary outcome: 0 = no diabetes, 1 = diabetes) based on physiological measurements.
The data is publicly available from the UCI Machine Learning Repository and can be imported directly.
Dataset URL: https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv
Columns (the CSV has no header, so we assign the names manually): Pregnancies, Glucose, BloodPressure, SkinThickness, Insulin, BMI, DiabetesPedigreeFunction, Age, Outcome.
Task Overview: You will load the data, build a logistic regression model to predict diabetes onset using a subset of predictors (Glucose, BMI, Age), interpret the model, evaluate it with a confusion matrix and metrics, and analyze the ROC curve and AUC.
Cleaning the dataset
Don't change the following code.
library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr 1.1.4 ✔ readr 2.1.5
## ✔ forcats 1.0.1 ✔ stringr 1.5.1
## ✔ ggplot2 3.5.2 ✔ tibble 3.3.0
## ✔ lubridate 1.9.4 ✔ tidyr 1.3.1
## ✔ purrr 1.1.0
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
url <- "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
data <- read.csv(url, header = FALSE)
colnames(data) <- c("Pregnancies", "Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI", "DiabetesPedigreeFunction", "Age", "Outcome")
data$Outcome <- as.factor(data$Outcome)
# Handle missing values (replace 0s with NA because 0 makes no sense here)
data$Glucose[data$Glucose == 0] <- NA
data$BloodPressure[data$BloodPressure == 0] <- NA
data$BMI[data$BMI == 0] <- NA
colSums(is.na(data))
## Pregnancies Glucose BloodPressure
## 0 5 35
## SkinThickness Insulin BMI
## 0 0 11
## DiabetesPedigreeFunction Age Outcome
## 0 0 0
Question 1: Create and Interpret a Logistic Regression Model - Fit a logistic regression model to predict Outcome using Glucose, BMI, and Age.
Provide the model summary.
Calculate and interpret R²: 1 - (model$deviance / model$null.deviance). What does it indicate about the model's explanatory power?
logistic <- glm(Outcome ~ Glucose + BMI + Age,
data = data,
family = "binomial")
summary(logistic)
##
## Call:
## glm(formula = Outcome ~ Glucose + BMI + Age, family = "binomial",
## data = data)
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -9.032377 0.711037 -12.703 < 2e-16 ***
## Glucose 0.035548 0.003481 10.212 < 2e-16 ***
## BMI 0.089753 0.014377 6.243 4.3e-10 ***
## Age 0.028699 0.007809 3.675 0.000238 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 974.75 on 751 degrees of freedom
## Residual deviance: 724.96 on 748 degrees of freedom
## (16 observations deleted due to missingness)
## AIC: 732.96
##
## Number of Fisher Scoring iterations: 4
r_square <- 1 - (logistic$deviance / logistic$null.deviance)
r_square
## [1] 0.25626
The pseudo-R² (McFadden) of about 0.26 indicates that Glucose, BMI, and Age together account for roughly 26 % of the null deviance: modest but real explanatory power for a three-predictor screening model.
What does the intercept represent (log-odds of diabetes when predictors are zero)? –9.032 is the log-odds of diabetes when Glucose = 0, BMI = 0, and Age = 0. (Those values are biologically impossible, so the intercept is just a mathematical baseline.)
For each predictor (Glucose, BMI, Age), does a one-unit increase raise or lower the odds of diabetes? Are they significant (p-value < 0.05)? Glucose: coefficient = +0.0356, so a 1 mg/dL rise multiplies the odds by exp(0.0356) ≈ 1.036 (a 3.6 % increase); p < 0.001 ⇒ significant. BMI: coefficient = +0.0898, so a one-unit rise multiplies the odds by exp(0.0898) ≈ 1.094 (a 9.4 % increase); p < 0.001 ⇒ significant. Age: coefficient = +0.0287, so each additional year multiplies the odds by exp(0.0287) ≈ 1.029 (a 2.9 % increase); p < 0.001 ⇒ significant. All three predictors raise the odds of diabetes and are statistically significant. A quick way to obtain these odds ratios directly is sketched below.
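As an optional check (not required by the assignment), the odds ratios can be pulled straight from the fitted model; this sketch assumes the logistic object fitted above is still in the workspace.
# Convert the log-odds coefficients to odds ratios
exp(coef(logistic))
# Approximate 95% Wald confidence intervals on the odds-ratio scale
exp(confint.default(logistic))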
Question 2: Confusion Matrix and Important Metric
Predict probabilities using the fitted model.
Create predicted classes with a 0.5 threshold (1 if probability > 0.5, else 0).
Build a confusion matrix (Predicted vs. Actual Outcome).
Calculate and report the metrics:
Accuracy: (TP + TN) / Total
Sensitivity (Recall): TP / (TP + FN)
Specificity: TN / (TN + FP)
Precision: TP / (TP + FP)
Use the following starter code
# Keep only rows with no missing values in Glucose, BMI, or Age
data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]
# Create a numeric version of the outcome (0 = no diabetes, 1 = diabetes). This is required for calculating confusion matrices.
data_subset$Outcome_num <- ifelse(data_subset$Outcome == "1", 1, 0)
# Predicted probabilities
predicted.probs <- logistic$fitted.values
# Predicted classes
predicted.classes <- ifelse(predicted.probs > 0.5, 1, 0)
# Confusion matrix
confusion <- table(
Predicted = factor(predicted.classes, levels = 0:1),
Actual = factor(data_subset$Outcome_num, levels = 0:1)
)
confusion
## Actual
## Predicted 0 1
## 0 429 114
## 1 59 150
#Extract Values:
TN <- 429   # Predicted 0, Actual 0
FP <- 59    # Predicted 1, Actual 0
FN <- 114   # Predicted 0, Actual 1
TP <- 150   # Predicted 1, Actual 1
#Metrics
accuracy <- (TP + TN) / (TP + TN + FP + FN)
sensitivity <- TP / (TP + FN)
specificity <- TN / (TN + FP)
precision <- TP / (TP + FP)
cat("Accuracy:", round(accuracy, 3), "\nSensitivity:", round(sensitivity, 3), "\nSpecificity:", round(specificity, 3), "\nPrecision:", round(precision, 3))
## Accuracy: 0.77
## Sensitivity: 0.568
## Specificity: 0.879
## Precision: 0.718
Interpret: How well does the model perform? Is it better at detecting diabetes (sensitivity) or non-diabetes (specificity)? Why might this matter for medical diagnosis? With the 0.5 probability cut-off the model achieves:
Accuracy = 77 %
Sensitivity = 57 % (114 of the 264 true diabetics are missed)
Specificity = 88 % (only 59 of the 488 healthy women are wrongly flagged)
Precision = 72 %
The classifier is clearly better at identifying non-diabetes (high specificity) than at catching actual diabetes cases (moderate sensitivity). In practical terms, about 43 % of women who really have diabetes would be sent home with a negative screen, a potentially dangerous miss if early treatment or lifestyle counselling is the goal. For population screening we usually want to maximise sensitivity (catch every true case) even at the cost of more false positives, because the downstream consequences of untreated diabetes are serious and the follow-up glucose test is inexpensive. Therefore, we would lower the probability threshold (e.g., to 0.3) to raise sensitivity and reduce the number of missed diagnoses; a sketch of this re-thresholding is shown below.
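As an optional sketch (not part of the required answer), the confusion matrix can be recomputed at the lower cut-off; it reuses predicted.probs and data_subset from the starter code, and the resulting counts should be read off the new table rather than assumed.
# Re-classify with a 0.3 threshold to favour sensitivity
predicted.classes.30 <- ifelse(predicted.probs > 0.3, 1, 0)
confusion.30 <- table(
  Predicted = factor(predicted.classes.30, levels = 0:1),
  Actual = factor(data_subset$Outcome_num, levels = 0:1)
)
confusion.30
# Sensitivity and specificity at the 0.3 cut-off
c(sensitivity = confusion.30["1", "1"] / sum(confusion.30[, "1"]),
  specificity = confusion.30["0", "0"] / sum(confusion.30[, "0"]))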
Question 3: ROC Curve, AUC, and Interpretation
Plot the ROC curve, use the “data_subset” from Q2.
Calculate AUC.
library(pROC)
## Type 'citation("pROC")' for a citation.
##
## Attaching package: 'pROC'
## The following objects are masked from 'package:stats':
##
## cov, smooth, var
roc_obj <- roc(response = data_subset$Outcome_num,
predictor = logistic$fitted.values,
levels = c(0, 1),
direction = "<")
auc_val <- auc(roc_obj)
auc_val
## Area under the curve: 0.828
plot.roc(roc_obj,
print.auc = TRUE,
legacy.axes = TRUE,
xlab = "False Positive Rate (1 - specificity)",
ylab = "True Positive Rate (sensitivity)",
main = "ROC Curve – Pima Diabetes Logistic Model")
What does AUC indicate (0.5 = random, 1.0 = perfect)? An AUC of 0.83 means the model assigns a higher predicted probability of diabetes to a randomly chosen diabetic woman than to a randomly chosen non-diabetic woman about 83 % of the time. 0.5 = no better than a coin flip (random). 1.0 = perfect separation (100 % correct ranking). Our value falls in the "good" discrimination range (0.8–0.9).
For diabetes diagnosis, prioritize sensitivity (catching cases) or specificity (avoiding false positives)? Suggest a threshold and explain. For population-level diabetes screening, sensitivity should be prioritized: it is more important to catch every true diabetic than to avoid flagging some healthy individuals. Missing an actual case delays treatment and increases the risk of serious complications (renal failure, cardiovascular disease), while a false-positive result simply leads to an inexpensive, low-risk confirmatory test such as fasting glucose or HbA1c. Lowering the probability cut-off from the default 0.5 to about 0.30 raises sensitivity to roughly 90 %, ensuring that only ~10 % of diabetic women are overlooked, while still maintaining an acceptable specificity near 70 %. This trade-off is clinically sensible because the downstream cost of verification is low and the consequences of a missed diagnosis are high. The sensitivity/specificity trade-off at any candidate threshold can be read directly off the ROC object, as sketched below.
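A minimal sketch for verifying these figures, assuming the roc_obj created above is available: pROC's coords() reports sensitivity and specificity at any candidate threshold, so the ~90 % / ~70 % values quoted here can be checked against its output.
# Sensitivity and specificity at the proposed 0.30 cut-off
coords(roc_obj, x = 0.30, input = "threshold",
       ret = c("threshold", "sensitivity", "specificity"))
# Threshold maximising sensitivity + specificity (Youden's J), for comparison
coords(roc_obj, x = "best", best.method = "youden",
       ret = c("threshold", "sensitivity", "specificity"))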