In this homework, you will apply logistic regression to a real-world dataset: the Pima Indians Diabetes Database. This dataset contains medical records from 768 women of Pima Indian heritage, aged 21 or older, and is used to predict the onset of diabetes (binary outcome: 0 = no diabetes, 1 = diabetes) based on physiological measurements. The data originates from the UCI Machine Learning Repository and can be imported directly from the URL below.

Dataset URL: https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv
Columns (the CSV has no header, so we assign names manually):

1. Pregnancies: Number of times pregnant
2. Glucose: Plasma glucose concentration (2-hour oral glucose tolerance test)
3. BloodPressure: Diastolic blood pressure (mm Hg)
4. SkinThickness: Triceps skin fold thickness (mm)
5. Insulin: 2-hour serum insulin (mu U/ml)
6. BMI: Body mass index (weight in kg/(height in m)^2)
7. DiabetesPedigreeFunction: Diabetes pedigree function (a function scoring genetic risk)
8. Age: Age in years
9. Outcome: Class variable (0 = no diabetes, 1 = diabetes)
Task Overview: You will load the data, build a logistic regression model to predict diabetes onset using a subset of predictors (Glucose, BMI, Age), interpret the model, evaluate it with a confusion matrix and metrics, and analyze the ROC curve and AUC.
Cleaning the dataset

Don't change the following code.
library("pROC")
## Type 'citation("pROC")' for a citation.
##
## Attaching package: 'pROC'
## The following objects are masked from 'package:stats':
##
## cov, smooth, var
library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr 1.1.4 ✔ readr 2.1.5
## ✔ forcats 1.0.1 ✔ stringr 1.5.2
## ✔ ggplot2 4.0.0 ✔ tibble 3.3.0
## ✔ lubridate 1.9.4 ✔ tidyr 1.3.1
## ✔ purrr 1.1.0
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
url <- "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
data <- read.csv(url, header = FALSE)
colnames(data) <- c("Pregnancies", "Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI", "DiabetesPedigreeFunction", "Age", "Outcome")
data$Outcome <- as.factor(data$Outcome)
# Handle missing values (replace 0s with NA because 0 makes no sense here)
data$Glucose[data$Glucose == 0] <- NA
data$BloodPressure[data$BloodPressure == 0] <- NA
data$BMI[data$BMI == 0] <- NA
colSums(is.na(data))
## Pregnancies Glucose BloodPressure
## 0 5 35
## SkinThickness Insulin BMI
## 0 0 11
## DiabetesPedigreeFunction Age Outcome
## 0 0 0
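Aside: the zero-to-NA replacement above can also be written as a single dplyr call. This is an equivalent sketch shown only for comparison (the assignment says not to change the provided code); data_alt is a hypothetical name.

# Equivalent tidyverse version of the cleaning step above, shown only for
# comparison; produces the same NA pattern as the three assignments above
data_alt <- data %>%
  mutate(across(c(Glucose, BloodPressure, BMI), ~ na_if(., 0)))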
Question 1: Create and Interpret a Logistic Regression Model

- Fit a logistic regression model to predict Outcome using Glucose, BMI, and Age.
- Provide the model summary.
- Calculate and interpret R²: 1 - (model$deviance / model$null.deviance). What does it indicate about the model's explanatory power?
logistic_model <- glm(Outcome ~ Glucose + BMI + Age,
data = data,
family = binomial(link = "logit"))
summary(logistic_model)
##
## Call:
## glm(formula = Outcome ~ Glucose + BMI + Age, family = binomial(link = "logit"),
## data = data)
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -9.032377 0.711037 -12.703 < 2e-16 ***
## Glucose 0.035548 0.003481 10.212 < 2e-16 ***
## BMI 0.089753 0.014377 6.243 4.3e-10 ***
## Age 0.028699 0.007809 3.675 0.000238 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 974.75 on 751 degrees of freedom
## Residual deviance: 724.96 on 748 degrees of freedom
## (16 observations deleted due to missingness)
## AIC: 732.96
##
## Number of Fisher Scoring iterations: 4
R_squared <- 1 - (logistic_model$deviance / logistic_model$null.deviance)
print(R_squared)
## [1] 0.25626
The pseudo-R² equals 0.256. This is McFadden's pseudo-R², for which values between 0.2 and 0.4 are conventionally taken to indicate a good fit, so the model has moderate but meaningful explanatory power: Glucose, BMI, and Age together account for about a quarter of the null deviance.
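As a cross-check, assuming the pscl package is installed (it is not used elsewhere in this homework), the same McFadden value can be obtained directly:

# Cross-check of McFadden's pseudo-R-squared; the "McFadden" entry should
# match the manual calculation above (about 0.256)
# install.packages("pscl")  # if needed
pscl::pR2(logistic_model)["McFadden"]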
What does the intercept represent (log-odds of diabetes when predictors are zero)? The intercept is the log-odds of having diabetes when Glucose = 0, BMI = 0, and Age = 0. Since none of these values is physiologically possible, the intercept has no practical interpretation in this context.
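If an interpretable intercept is wanted, one option (a sketch, not part of the assignment) is to mean-center the predictors, so the intercept becomes the log-odds of diabetes at average Glucose, BMI, and Age; data_c and the _c column names are hypothetical:

# Refit with mean-centered predictors (illustrative only); the slopes are
# unchanged, and the intercept moves to the log-odds at the sample means
data_c <- data %>%
  mutate(Glucose_c = Glucose - mean(Glucose, na.rm = TRUE),
         BMI_c = BMI - mean(BMI, na.rm = TRUE),
         Age_c = Age - mean(Age, na.rm = TRUE))
centered_model <- glm(Outcome ~ Glucose_c + BMI_c + Age_c,
                      data = data_c, family = binomial(link = "logit"))
coef(centered_model)[["(Intercept)"]]          # log-odds at average values
plogis(coef(centered_model)[["(Intercept)"]])  # the same as a probability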
For each predictor (Glucose, BMI, Age), does a one-unit increase raise or lower the odds of diabetes? Are they significant (p-value < 0.05)? All three coefficients are positive, so a one-unit increase in any of them raises the odds of diabetes, and all three are statistically significant (p < 0.001). For Glucose, a one-unit increase adds 0.036 to the log-odds of diabetes; equivalently, it multiplies the odds by e^0.036 ≈ 1.036. For BMI, a one-unit increase adds 0.090 to the log-odds, multiplying the odds by e^0.090 ≈ 1.094. Finally, for Age, each additional year adds 0.029 to the log-odds, multiplying the odds by e^0.029 ≈ 1.029.
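These odds ratios need not be computed by hand; a short sketch pulling them from the fitted model, with confidence intervals:

# Odds ratios (multiplicative change in odds per one-unit increase) and
# profile-likelihood 95% confidence intervals; intervals that exclude 1
# correspond to significant predictors
exp(coef(logistic_model))
exp(confint(logistic_model))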
Question 2: Confusion Matrix and Important Metrics

- Predict probabilities using the fitted model.
- Create predicted classes with a 0.5 threshold (1 if probability > 0.5, else 0).
- Build a confusion matrix (Predicted vs. Actual Outcome).
Calculate and report the metrics:

- Accuracy: (TP + TN) / Total
- Sensitivity (Recall): TP / (TP + FN)
- Specificity: TN / (TN + FP)
- Precision: TP / (TP + FP)
Use the following starter code
# Keep only rows with no missing values in Glucose, BMI, or Age
data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]
# Create a numeric version of the outcome (0 = no diabetes, 1 = diabetes);
# this is required for building the confusion matrix
data_subset$Outcome_num <- ifelse(data_subset$Outcome == "1", 1, 0)
# Predicted probabilities. predict() on the fitted model returns one value per
# row used in fitting; these align with data_subset because glm() dropped the
# same incomplete rows.
predicted_probs <- predict(logistic_model, type = "response")
# Predicted classes
predicted_classes <- ifelse(predicted_probs > 0.5, 1, 0)
# Confusion matrix
confusion_matrix <- table(Actual = data_subset$Outcome_num,
Predicted = predicted_classes)
# Extract values from the confusion matrix:
TN <- confusion_matrix[1, 1]
FP <- confusion_matrix[1, 2]
FN <- confusion_matrix[2, 1]
TP <- confusion_matrix[2, 2]
# Metrics
accuracy <- (TP + TN) / (TP + TN + FP + FN)
sensitivity <- TP / (TP + FN)
specificity <- TN / (TN + FP)
precision <- TP / (TP + FP)
cat("Accuracy:", round(accuracy, 3), "\nSensitivity:", round(sensitivity, 3), "\nSpecificity:", round(specificity, 3), "\nPrecision:", round(precision, 3))
## Accuracy: 0.77
## Sensitivity: 0.568
## Specificity: 0.879
## Precision: 0.718
Interpret: How well does the model perform? Is it better at detecting diabetes (sensitivity) or non-diabetes (specificity)? Why might this matter for medical diagnosis? The model performs reasonably well overall, with an accuracy of 0.77 (compared with roughly 0.65 from always predicting "no diabetes", the majority class). It is much better at detecting non-diabetes cases than diabetes cases (specificity = 0.879 vs. sensitivity = 0.568). For medical diagnosis this matters because low sensitivity means many true diabetes cases are missed (false negatives), and untreated diabetes can lead to major complications. A missed case is typically far more costly than a false positive, which only triggers additional confirmatory testing, so this model's low sensitivity is a real concern for medical screening.
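As a sanity check, assuming the caret package is available (it is not used elsewhere in this homework), the same metrics can be reproduced in one call; note that positive = "1" is needed so that sensitivity refers to detecting diabetes:

# Cross-check of the hand-computed metrics with caret
library(caret)
confusionMatrix(factor(predicted_classes, levels = c(0, 1)),
                factor(data_subset$Outcome_num, levels = c(0, 1)),
                positive = "1")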
Question 3: ROC Curve, AUC, and Interpretation - Plot the ROC curve, use the “data_subset” from Q2. - Calculate AUC.
roc_obj <- roc(data_subset$Outcome_num, predicted_probs)
## Setting levels: control = 0, case = 1
## Setting direction: controls < cases
plot(roc_obj,
main = "ROC Curve for Diabetes Prediction Model",
col = "#2E86AB",
lwd = 3,
print.auc = TRUE,
auc.polygon = TRUE,
auc.polygon.col = rgb(46, 134, 171, alpha = 50, maxColorValue = 255),
grid = TRUE,
legacy.axes = TRUE,
xlab = "False Positive Rate (1 - Specificity)",
ylab = "True Positive Rate (Sensitivity)")
auc_value <- auc(roc_obj)
print(auc_value)
## Area under the curve: 0.828
What does AUC indicate (0.5 = random, 1.0 = perfect)? The AUC is 0.828, indicating good discriminatory ability. Concretely, this means that if we pick one random diabetic patient and one random non-diabetic patient, the model assigns the higher predicted probability to the diabetic patient about 82.8% of the time; the model is far better than random (0.5) but not perfect (1.0).
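This ranking interpretation can be checked empirically; the following sketch estimates the probability that a randomly drawn diabetic patient receives a higher predicted probability than a randomly drawn non-diabetic patient, which should land near the reported AUC of 0.828:

# Monte Carlo estimate of P(score of random case > score of random control)
set.seed(123)
case_scores <- predicted_probs[data_subset$Outcome_num == 1]
control_scores <- predicted_probs[data_subset$Outcome_num == 0]
mean(sample(case_scores, 1e5, replace = TRUE) >
       sample(control_scores, 1e5, replace = TRUE))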
For diabetes diagnosis, prioritize sensitivity (catching cases) or specificity (avoiding false positives)? Suggest a threshold and explain. For diabetes screening I would prioritize sensitivity: a missed case (false negative) can go untreated, whereas a false positive only leads to follow-up testing. Lowering the threshold from 0.5 to about 0.35 classifies more patients as positive, which catches more true diabetes cases and reduces false negatives, at the cost of some additional false positives.
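The trade-off at the suggested cutoff can be read directly off the ROC object; a sketch using pROC::coords comparing the default 0.5 threshold with the lowered 0.35:

# Sensitivity/specificity trade-off at the 0.5 and 0.35 cutoffs
coords(roc_obj, x = c(0.5, 0.35), input = "threshold",
       ret = c("threshold", "sensitivity", "specificity", "precision"))
# Youden-optimal threshold, for comparison
coords(roc_obj, x = "best", best.method = "youden",
       ret = c("threshold", "sensitivity", "specificity"))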