In this homework, you will apply logistic regression to a real-world dataset: the Pima Indians Diabetes Database. This dataset contains medical records from 768 women of Pima Indian heritage, aged 21 or older, and is used to predict the onset of diabetes (binary outcome: 0 = no diabetes, 1 = diabetes) based on physiological measurements.
The data is publicly available from the UCI Machine Learning Repository and can be imported directly.
Dataset URL: https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv
Columns (no header in the CSV, so we need to assign them manually): Pregnancies, Glucose, BloodPressure, SkinThickness, Insulin, BMI, DiabetesPedigreeFunction, Age, Outcome.
Task Overview: You will load the data, build a logistic regression model to predict diabetes onset using a subset of predictors (Glucose, BMI, Age), interpret the model, evaluate it with a confusion matrix and metrics, and analyze the ROC curve and AUC.
Cleaning the dataset
Don’t change the following code:
library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr     1.1.4     ✔ readr     2.1.6
## ✔ forcats   1.0.1     ✔ stringr   1.6.0
## ✔ ggplot2   4.0.1     ✔ tibble    3.3.0
## ✔ lubridate 1.9.4     ✔ tidyr     1.3.1
## ✔ purrr     1.2.0
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
url <- "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
data <- read.csv(url, header = FALSE)
colnames(data) <- c("Pregnancies", "Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI", "DiabetesPedigreeFunction", "Age", "Outcome")
data$Outcome <- as.factor(data$Outcome)
# Handle missing values (replace 0s with NA because 0 makes no sense here)
data$Glucose[data$Glucose == 0] <- NA
data$BloodPressure[data$BloodPressure == 0] <- NA
data$BMI[data$BMI == 0] <- NA
colSums(is.na(data))
##              Pregnancies                  Glucose            BloodPressure 
##                        0                        5                       35 
##            SkinThickness                  Insulin                      BMI 
##                        0                        0                       11 
## DiabetesPedigreeFunction                      Age                  Outcome 
##                        0                        0                        0
Question 1: Create and Interpret a Logistic Regression Model
Fit a logistic regression model to predict Outcome using Glucose, BMI, and Age.
Provide the model summary.
Calculate and interpret the pseudo-R²: 1 - (model$deviance / model$null.deviance). What does it indicate about the model’s explanatory power?
## Enter your code here
model <- glm(Outcome ~ Glucose + BMI + Age,
             data = data,
             family = binomial(link = "logit"),
             na.action = na.omit)
model
##
## Call: glm(formula = Outcome ~ Glucose + BMI + Age, family = binomial(link = "logit"),
## data = data, na.action = na.omit)
##
## Coefficients:
## (Intercept)      Glucose          BMI          Age 
##    -9.03238      0.03555      0.08975      0.02870 
##
## Degrees of Freedom: 751 Total (i.e. Null); 748 Residual
## (16 observations deleted due to missingness)
## Null Deviance: 974.7
## Residual Deviance: 725 AIC: 733
summary(model)
##
## Call:
## glm(formula = Outcome ~ Glucose + BMI + Age, family = binomial(link = "logit"),
## data = data, na.action = na.omit)
##
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -9.032377   0.711037 -12.703  < 2e-16 ***
## Glucose      0.035548   0.003481  10.212  < 2e-16 ***
## BMI          0.089753   0.014377   6.243  4.3e-10 ***
## Age          0.028699   0.007809   3.675 0.000238 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 974.75 on 751 degrees of freedom
## Residual deviance: 724.96 on 748 degrees of freedom
## (16 observations deleted due to missingness)
## AIC: 732.96
##
## Number of Fisher Scoring iterations: 4
model_r2 <- 1 - (model$deviance / model$null.deviance)
cat("R-squared:", round(model_r2, 4), "\n")
## R-squared: 0.2563
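A pseudo-R² of about 0.26 indicates that Glucose, BMI, and Age together account for roughly 26% of the null deviance. For a logistic regression on individual-level medical data this suggests moderate explanatory power: the model captures a meaningful share of the variation in diabetes outcomes, but most of that variation remains unexplained.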
What does the intercept represent (log-odds of diabetes when predictors are zero)?
The intercept (-9.03) is the log-odds of diabetes when all predictors (Glucose, BMI, Age) are zero. It corresponds to the baseline log-odds for a hypothetical person with Glucose = 0, BMI = 0, and Age = 0, a combination that is not biologically meaningful, so the intercept mainly anchors the regression rather than carrying a direct clinical interpretation.
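As a small optional sketch (reusing the model object fitted above), the intercept’s log-odds can be converted to a probability with base R’s plogis():
# Sketch: map the intercept's log-odds to a probability.
# plogis() is the logistic function exp(x) / (1 + exp(x)).
plogis(coef(model)["(Intercept)"])
# For a log-odds of about -9.03 this is roughly 0.00012, i.e. an essentially
# zero baseline probability for the impossible Glucose = 0, BMI = 0, Age = 0 case.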
For each predictor (Glucose, BMI, Age), does a one-unit increase raise or lower the odds of diabetes? Are they significant (p-value < 0.05)?
Glucose: A one-unit increase in Glucose raises the odds of diabetes (positive coefficient, 0.0355). Highly significant (p < 2e-16).
BMI: A one-unit increase in BMI raises the odds of diabetes (positive coefficient, 0.0898). Highly significant (p = 4.3e-10).
Age: A one-year increase in Age raises the odds of diabetes (positive coefficient, 0.0287). Significant (p = 0.000238).
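An optional sketch (again reusing the fitted model): exponentiating the coefficients gives odds ratios, which make the per-unit effects concrete.
# Sketch: odds ratios from the fitted coefficients.
exp(coef(model))
# exp(0.0355) ≈ 1.036: each additional unit of Glucose multiplies the odds of
# diabetes by about 1.036 (a 3.6% increase), holding BMI and Age fixed.
# Likewise exp(0.0898) ≈ 1.094 per BMI unit and exp(0.0287) ≈ 1.029 per year of Age.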
Question 2: Confusion Matrix and Important Metrics
Predict probabilities using the fitted model.
Create predicted classes with a 0.5 threshold (1 if probability > 0.5, else 0).
Build a confusion matrix (Predicted vs. Actual Outcome).
Calculate and report the metrics:
Accuracy: (TP + TN) / Total
Sensitivity (Recall): TP / (TP + FN)
Specificity: TN / (TN + FP)
Precision: TP / (TP + FP)
Use the following starter code
# Keep only rows with no missing values in Glucose, BMI, or Age
data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]
# Create a numeric version of the outcome (0 = no diabetes, 1 = diabetes).
# This is required for calculating confusion matrices.
data_subset$Outcome_num <- ifelse(data_subset$Outcome == "1", 1, 0)
# Predicted probabilities
pred_prob <- predict(model, newdata = data_subset, type = "response")
# Predicted classes
pred_class <- ifelse(pred_prob > 0.5, 1, 0)
# Confusion matrix
conf_matrix <- table(Predicted = pred_class, Actual = data_subset$Outcome_num)
conf_matrix
##          Actual
## Predicted   0   1
##         0 429 114
##         1  59 150
# Extract values from the confusion matrix:
TN <- conf_matrix[1, 1] # True Negative
FP <- conf_matrix[2, 1] # False Positive
FN <- conf_matrix[1, 2] # False Negative
TP <- conf_matrix[2, 2] # True Positive
# Metrics
accuracy <- (TP + TN) / sum(conf_matrix)
sensitivity <- TP / (TP + FN)
specificity <- TN / (TN + FP)
precision <- TP / (TP + FP)
cat("Accuracy:", round(accuracy, 3), "\nSensitivity:", round(sensitivity, 3), "\nSpecificity:", round(specificity, 3), "\nPrecision:", round(precision, 3))
## Accuracy: 0.77
## Sensitivity: 0.568
## Specificity: 0.879
## Precision: 0.718
Interpret: How well does the model perform? Is it better at detecting diabetes (sensitivity) or non-diabetes (specificity)? Why might this matter for medical diagnosis?
The model shows moderate to good performance. Based on the results above:
Accuracy is 77%, and specificity (0.879) is substantially higher than sensitivity (0.568).
This means the model is better at correctly identifying non-diabetic cases than at detecting diabetic cases; at the 0.5 threshold it misses 114 of the 264 actual diabetic cases (the false negatives in the matrix above).
For medical diagnosis, this matters because high sensitivity is crucial for not missing actual diabetes cases (avoiding false negatives), while high specificity helps avoid unnecessary treatment and anxiety for healthy people (avoiding false positives). In diabetes screening, sensitivity is usually prioritized, since a missed diagnosis can have serious health consequences.
Question 3: ROC Curve, AUC, and Interpretation
Plot the ROC curve, using the “data_subset” from Q2.
Calculate AUC.
# Enter your code here
library(pROC)
## Type 'citation("pROC")' for a citation.
##
## Attaching package: 'pROC'
## The following objects are masked from 'package:stats':
##
## cov, smooth, var
roc_obj <- roc(data_subset$Outcome_num, pred_prob)
## Setting levels: control = 0, case = 1
## Setting direction: controls < cases
plot(roc_obj,
     main = "ROC Curve for Diabetes Prediction",
     col = "blue",
     lwd = 2,
     print.auc = TRUE,
     auc.polygon = TRUE,
     grid = TRUE)
auc_value <- auc(roc_obj)
cat("AUC:", round(auc_value, 3), "\n")
## AUC: 0.828
What does AUC indicate (0.5 = random, 1.0 = perfect)? AUC measures the model’s ability to distinguish between classes: an AUC of 0.5 indicates random guessing (no discrimination ability), while 1.0 indicates perfect classification. A common rule of thumb:
0.5-0.7: Poor discrimination
0.7-0.8: Acceptable discrimination
0.8-0.9: Good discrimination
0.9-1.0: Excellent discrimination
By this scale, the model’s AUC of 0.828 indicates good discrimination.
For diabetes diagnosis, prioritize sensitivity (catching cases) or specificity (avoiding false positives)? Suggest a threshold and explain.
For diabetes diagnosis, we should prioritize sensitivity to catch as many actual cases as possible.
Explanation: Missing a diabetes diagnosis (false negative) can lead to serious complications like nerve damage, kidney disease, and cardiovascular problems. A false positive, while causing anxiety and requiring follow-up tests, is less harmful than missing an actual case.
Suggested threshold: Instead of the standard 0.5, use 0.3-0.4. This lower threshold increases sensitivity (catches more true diabetic cases) at the expense of lower specificity (more false positives). The trade-off is medically appropriate because catching more true cases is more important than avoiding false alarms in diabetes screening.
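As an illustrative sketch (0.35 is one arbitrary cutoff from the suggested 0.3-0.4 range; the code reuses pred_prob, data_subset, and roc_obj from above), the metrics can be recomputed at the lower threshold, and pROC’s coords() can propose a data-driven cutoff via Youden’s J statistic:
# Sketch: recompute sensitivity and specificity at a lower, illustrative threshold.
pred_class_low <- ifelse(pred_prob > 0.35, 1, 0)
conf_low <- table(Predicted = pred_class_low, Actual = data_subset$Outcome_num)
sens_low <- conf_low["1", "1"] / sum(conf_low[, "1"])  # TP / (TP + FN)
spec_low <- conf_low["0", "0"] / sum(conf_low[, "0"])  # TN / (TN + FP)
cat("Sensitivity at 0.35:", round(sens_low, 3),
    "\nSpecificity at 0.35:", round(spec_low, 3), "\n")
# pROC can also suggest the threshold maximizing sensitivity + specificity:
coords(roc_obj, x = "best", best.method = "youden")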