Introduction:

In this homework, you will apply logistic regression to a real-world dataset: the Pima Indians Diabetes Database. This dataset contains medical records from 768 women of Pima Indian heritage, aged 21 or older, and is used to predict the onset of diabetes (binary outcome: 0 = no diabetes, 1 = diabetes) based on physiological measurements.

The data is publicly available from the UCI Machine Learning Repository and can be imported directly.

Dataset URL: https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv

Columns (no header in the CSV, so we need to assign them manually):

  1. Pregnancies: Number of times pregnant
  2. Glucose: Plasma glucose concentration (2-hour test)
  3. BloodPressure: Diastolic blood pressure (mm Hg)
  4. SkinThickness: Triceps skin fold thickness (mm)
  5. Insulin: 2-hour serum insulin (mu U/ml)
  6. BMI: Body mass index (weight in kg/(height in m)^2)
  7. DiabetesPedigreeFunction: Diabetes pedigree function (a function scoring genetic risk)
  8. Age: Age in years
  9. Outcome: Class variable (0 = no diabetes, 1 = diabetes)

Task Overview: You will load the data, build a logistic regression model to predict diabetes onset using a subset of predictors (Glucose, BMI, Age), interpret the model, evaluate it with a confusion matrix and metrics, and analyze the ROC curve and AUC.

Cleaning the dataset (do not change the following code):

library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr     1.1.4     ✔ readr     2.1.5
## ✔ forcats   1.0.1     ✔ stringr   1.5.2
## ✔ ggplot2   4.0.0     ✔ tibble    3.3.0
## ✔ lubridate 1.9.4     ✔ tidyr     1.3.1
## ✔ purrr     1.1.0     
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
url <- "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"

data <- read.csv(url, header = FALSE)

colnames(data) <- c("Pregnancies", "Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI", "DiabetesPedigreeFunction", "Age", "Outcome")

data$Outcome <- as.factor(data$Outcome)

# Handle missing values (replace 0s with NA because 0 makes no sense here)
data$Glucose[data$Glucose == 0] <- NA
data$BloodPressure[data$BloodPressure == 0] <- NA
data$BMI[data$BMI == 0] <- NA


colSums(is.na(data))
##              Pregnancies                  Glucose            BloodPressure 
##                        0                        5                       35 
##            SkinThickness                  Insulin                      BMI 
##                        0                        0                       11 
## DiabetesPedigreeFunction                      Age                  Outcome 
##                        0                        0                        0

Question 1: Create and Interpret a Logistic Regression Model

Fit a logistic regression model to predict Outcome using Glucose, BMI, and Age.

## Enter your code here

# Keep only rows with non-missing Glucose, BMI, and Age

data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]

# Logistic regression model

logit_model <- glm(Outcome ~ Glucose + BMI + Age,
                   data = data_subset,
                   family = binomial)

# Summary

summary(logit_model)
## 
## Call:
## glm(formula = Outcome ~ Glucose + BMI + Age, family = binomial, 
##     data = data_subset)
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -9.032377   0.711037 -12.703  < 2e-16 ***
## Glucose      0.035548   0.003481  10.212  < 2e-16 ***
## BMI          0.089753   0.014377   6.243  4.3e-10 ***
## Age          0.028699   0.007809   3.675 0.000238 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 974.75  on 751  degrees of freedom
## Residual deviance: 724.96  on 748  degrees of freedom
## AIC: 732.96
## 
## Number of Fisher Scoring iterations: 4
# McFadden's pseudo R-squared: 1 - (residual deviance / null deviance)

R2 <- 1 - (logit_model$deviance / logit_model$null.deviance)
R2
## [1] 0.25626

What does the intercept represent (log-odds of diabetes when predictors are zero)?

The intercept is approximately -9.03, the log-odds of having diabetes when Glucose = 0, BMI = 0, and Age = 0. Since none of these zero values is physiologically plausible, the intercept is a mathematical baseline rather than a directly interpretable quantity.
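
To make the intercept concrete, it can be converted from log-odds to a probability with the logistic function. A minimal sketch, assuming logit_model from the fit above is still in the workspace:

# Baseline log-odds when Glucose = BMI = Age = 0
b0 <- coef(logit_model)["(Intercept)"]

# Logistic transform: p = exp(b0) / (1 + exp(b0))
plogis(b0)  # roughly 0.00012, a negligible baseline probability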

For each predictor (Glucose, BMI, Age), does a one-unit increase raise or lower the odds of diabetes? Are they significant (p-value < 0.05)?

All three predictors have positive coefficients and very small p-values, so a one-unit increase in each raises the odds of diabetes, and all are statistically significant at the 0.05 level:

- Glucose (β = 0.035548, p < 2e-16): each additional unit of plasma glucose increases the log-odds of diabetes by about 0.036; highly significant.
- BMI (β = 0.089753, p = 4.3e-10): each additional BMI unit increases the log-odds by about 0.090; highly significant.
- Age (β = 0.028699, p = 0.000238): each additional year of age increases the log-odds by about 0.029; significant at any conventional level.

Exponentiating the coefficients converts these effects to the odds scale, as shown below.
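
A minimal sketch, again assuming logit_model is available (confint() profiles the likelihood and may print a progress message):

# Odds ratios: the multiplicative change in odds per one-unit increase
exp(coef(logit_model))

# 95% confidence intervals on the odds-ratio scale
exp(confint(logit_model))

For example, exp(0.035548) ≈ 1.036, so each additional unit of glucose raises the odds of diabetes by about 3.6%.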

Question 2: Confusion Matrix and Important Metrics

Calculate and report the metrics:

- Accuracy: (TP + TN) / Total
- Sensitivity (Recall): TP / (TP + FN)
- Specificity: TN / (TN + FP)
- Precision: TP / (TP + FP)

Use the following starter code:

# Keep only rows with no missing values in Glucose, BMI, or Age
data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]

# Create a numeric version of the outcome (0 = no diabetes, 1 = diabetes).
# This is required for calculating the confusion matrix.
data_subset$Outcome_num <- ifelse(data_subset$Outcome == "1", 1, 0)


# Predicted probabilities
data_subset$pred_prob <- predict(logit_model, type = "response")


# Predicted classes
data_subset$pred_class <- ifelse(data_subset$pred_prob > 0.5, 1, 0)


# Confusion matrix
conf_mat <- table(Predicted = data_subset$pred_class,
                  Actual = data_subset$Outcome_num)

conf_mat
##          Actual
## Predicted   0   1
##         0 429 114
##         1  59 150
# Make sure missing factor levels don't break the table when knitting
all_levels <- c("0","1")
conf_mat_full <- matrix(0, nrow=2, ncol=2,
                        dimnames=list(Predicted=all_levels, Actual=all_levels))
conf_mat_full[rownames(conf_mat), colnames(conf_mat)] <- conf_mat

#Extract Values:
TN <- conf_mat_full["0","0"]
FP <- conf_mat_full["1","0"]
FN <- conf_mat_full["0","1"]
TP <- conf_mat_full["1","1"]

#Metrics    
accuracy <- (TP + TN) / (TP + TN + FP + FN)
sensitivity <- TP / (TP + FN)
specificity <- TN / (TN + FP)
precision <- TP / (TP + FP)

cat("Accuracy:", round(accuracy, 3), "\nSensitivity:", round(sensitivity, 3), "\nSpecificity:", round(specificity, 3), "\nPrecision:", round(precision, 3))
## Accuracy: 0.77 
## Sensitivity: 0.568 
## Specificity: 0.879 
## Precision: 0.718
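
As an optional cross-check (not part of the starter code, and assuming the caret package is installed), caret's confusionMatrix() reproduces these metrics directly:

library(caret)

# confusionMatrix() expects factors; treat "1" (diabetes) as the positive class
confusionMatrix(factor(data_subset$pred_class, levels = c("0", "1")),
                factor(data_subset$Outcome_num, levels = c("0", "1")),
                positive = "1")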

Interpret: How well does the model perform? Is it better at detecting diabetes (sensitivity) or non-diabetes (specificity)? Why might this matter for medical diagnosis?

The logistic regression model achieved an accuracy of 0.77, meaning it correctly classified about 77% of all cases, a reasonably strong overall result.

Sensitivity measures how well the model detects actual diabetes cases (true positives). This model has a sensitivity of 0.568, meaning the model correctly identifies only 56.8% of people who truly have diabetes.

Specificity measures how well the model detects non-diabetes cases (true negatives). This model's specificity of 0.879 means it correctly identifies 87.9% of people who do not have diabetes.

The model performs reasonably well overall, but it is much better at detecting non-diabetes (specificity = 0.879) than diabetes itself (sensitivity = 0.568). This imbalance means the model is more likely to miss actual diabetes cases than to raise false alarms. For medical diagnosis this matters: a false negative leaves a patient undiagnosed and untreated, whereas a false positive typically costs only a follow-up test.
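
One way to shift this balance toward catching more diabetes cases is to lower the classification threshold. A minimal sketch; the 0.3 cutoff is an illustrative choice, not one prescribed by the assignment:

# Reclassify with a lower cutoff, trading specificity for sensitivity
pred_class_03 <- ifelse(data_subset$pred_prob > 0.3, 1, 0)

conf_mat_03 <- table(Predicted = pred_class_03,
                     Actual = data_subset$Outcome_num)

# Sensitivity and specificity at the new threshold
tp <- conf_mat_03["1", "1"]; fn <- conf_mat_03["0", "1"]
tn <- conf_mat_03["0", "0"]; fp <- conf_mat_03["1", "0"]
c(sensitivity = tp / (tp + fn), specificity = tn / (tn + fp))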

Question 3: ROC Curve, AUC, and Interpretation

# Enter your code here

# Load pROC
library(pROC)
## Type 'citation("pROC")' for a citation.
## 
## Attaching package: 'pROC'
## The following objects are masked from 'package:stats':
## 
##     cov, smooth, var
# ROC curve and AUC
roc_obj <- roc(response = data_subset$Outcome_num,
               predictor = data_subset$pred_prob)
## Setting levels: control = 0, case = 1
## Setting direction: controls < cases
# Plot ROC curve
plot(roc_obj, col = "blue", main = "ROC Curve for Diabetes Logistic Regression")
abline(a = 0, b = 1, lty = 2)

# AUC value
auc_value <- auc(roc_obj)
auc_value
## Area under the curve: 0.828

What does AUC indicate (0.5 = random, 1.0 = perfect)?

An AUC of 0.828 means the model has good ability to distinguish between people with diabetes and those without it. In about 82.8% of randomly chosen pairs (one diabetic, one non-diabetic), the model assigns a higher predicted probability to the diabetic patient. This indicates a well-performing model.
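
The pairwise-ranking interpretation of AUC can be checked empirically. A minimal sketch, assuming data_subset with pred_prob and Outcome_num is still in the workspace:

set.seed(42)  # reproducible sampling

# Predicted probabilities for cases (diabetes) and controls (no diabetes)
p_case    <- data_subset$pred_prob[data_subset$Outcome_num == 1]
p_control <- data_subset$pred_prob[data_subset$Outcome_num == 0]

# Fraction of random (case, control) pairs ranked correctly (ties count half);
# this Monte Carlo estimate should land close to the AUC of 0.828
cases    <- sample(p_case, 10000, replace = TRUE)
controls <- sample(p_control, 10000, replace = TRUE)
mean((cases > controls) + 0.5 * (cases == controls))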

For diabetes diagnosis, prioritize sensitivity (catching cases) or specificity (avoiding false positives)? Suggest a threshold and explain.

In diabetes screening, it is more important to prioritize sensitivity, even if that reduces specificity, so that as many true cases as possible are caught: a missed case goes undiagnosed and untreated, while a false positive can be resolved with confirmatory testing. A threshold below the default 0.5, for example around 0.3, trades some specificity for the higher sensitivity a screening setting calls for.
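
A data-driven way to pick a cutoff is Youden's index (the threshold maximizing sensitivity + specificity - 1). A minimal sketch using pROC's coords(); this is one defensible choice, and for screening one might deliberately pick an even lower threshold to favor sensitivity:

# Threshold that maximizes Youden's J, with the metrics it achieves
coords(roc_obj, x = "best", best.method = "youden",
       ret = c("threshold", "sensitivity", "specificity"))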