Introduction:

In this homework, you will apply logistic regression to a real-world dataset: the Pima Indians Diabetes Database. This dataset contains medical records from 768 women of Pima Indian heritage, aged 21 or older, and is used to predict the onset of diabetes (binary outcome: 0 = no diabetes, 1 = diabetes) based on physiological measurements.

The data is publicly available from the UCI Machine Learning Repository and can be imported directly.

Dataset URL: https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv

Columns (no header in the CSV, so we need to assign them manually):

  1. Pregnancies: Number of times pregnant
  2. Glucose: Plasma glucose concentration (2-hour oral glucose tolerance test)
  3. BloodPressure: Diastolic blood pressure (mm Hg)
  4. SkinThickness: Triceps skin fold thickness (mm)
  5. Insulin: 2-hour serum insulin (μU/mL)
  6. BMI: Body mass index (weight in kg/(height in m)^2)
  7. DiabetesPedigreeFunction: Diabetes pedigree function (a function scoring genetic risk)
  8. Age: Age in years
  9. Outcome: Class variable (0 = no diabetes, 1 = diabetes)

Task Overview: You will load the data, build a logistic regression model to predict diabetes onset using a subset of predictors (Glucose, BMI, Age), interpret the model, evaluate it with a confusion matrix and metrics, and analyze the ROC curve and AUC.

Cleaning the dataset. Don't change the following code:

library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr     1.1.4     ✔ readr     2.1.5
## ✔ forcats   1.0.0     ✔ stringr   1.5.2
## ✔ ggplot2   4.0.0     ✔ tibble    3.3.0
## ✔ lubridate 1.9.4     ✔ tidyr     1.3.1
## ✔ purrr     1.1.0     
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
url <- "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"

data <- read.csv(url, header = FALSE)

colnames(data) <- c("Pregnancies", "Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI", "DiabetesPedigreeFunction", "Age", "Outcome")

data$Outcome <- as.factor(data$Outcome)

# Handle missing values (replace 0s with NA because 0 makes no sense here)
data$Glucose[data$Glucose == 0] <- NA
data$BloodPressure[data$BloodPressure == 0] <- NA
data$BMI[data$BMI == 0] <- NA


colSums(is.na(data))
##              Pregnancies                  Glucose            BloodPressure 
##                        0                        5                       35 
##            SkinThickness                  Insulin                      BMI 
##                        0                        0                       11 
## DiabetesPedigreeFunction                      Age                  Outcome 
##                        0                        0                        0

Question 1: Create and Interpret a Logistic Regression Model - Fit a logistic regression model to predict Outcome using Glucose, BMI, and Age.

## Enter your code here
# Keep only rows with no missing values in Glucose, BMI, Age
data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]

# Fit model
model <- glm(Outcome ~ Glucose + BMI + Age,
             data = data_subset,
             family = binomial(link = "logit"))

# Model summary
summary(model)
## 
## Call:
## glm(formula = Outcome ~ Glucose + BMI + Age, family = binomial(link = "logit"), 
##     data = data_subset)
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -9.032377   0.711037 -12.703  < 2e-16 ***
## Glucose      0.035548   0.003481  10.212  < 2e-16 ***
## BMI          0.089753   0.014377   6.243  4.3e-10 ***
## Age          0.028699   0.007809   3.675 0.000238 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 974.75  on 751  degrees of freedom
## Residual deviance: 724.96  on 748  degrees of freedom
## AIC: 732.96
## 
## Number of Fisher Scoring iterations: 4
# Calculate pseudo R-squared (likelihood-based)
pseudoR2 <- 1 - (model$deviance / model$null.deviance)
pseudoR2
## [1] 0.25626
cat("Pseudo R^2:", round(pseudoR2, 4), "\n")
## Pseudo R^2: 0.2563
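# A quick sketch (not part of the assignment): the same pseudo R^2 can be
# recovered from log-likelihoods, since the binomial deviance here equals
# -2 * logLik; "null_model" is an assumed name for an intercept-only refit
null_model <- glm(Outcome ~ 1, data = data_subset, family = binomial)
1 - as.numeric(logLik(model)) / as.numeric(logLik(null_model))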
# Odds ratios and 95% CIs
exp_coef <- exp(coef(model))
exp_ci <- exp(confint(model))
## Waiting for profiling to be done...
data.frame(Estimate = coef(model),
           OR = exp_coef,
           CI_low = exp_ci[,1],
           CI_high = exp_ci[,2])
##                Estimate           OR       CI_low      CI_high
## (Intercept) -9.03237731 0.0001194781 2.820409e-05 0.0004596464
## Glucose      0.03554795 1.0361873320 1.029326e+00 1.0434890533
## BMI          0.08975307 1.0939041273 1.064055e+00 1.1258354595
## Age          0.02869913 1.0291149212 1.013524e+00 1.0450856242

What does the intercept represent (log-odds of diabetes when predictors are zero)? The intercept is the log-odds of having diabetes when Glucose, BMI, and Age are all equal to zero. Because zero glucose, zero BMI, and an age of zero are impossible for this population, the intercept has no meaningful medical interpretation on its own.
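
To make this concrete, the intercept can be converted from the log-odds scale to a probability. A minimal sketch using the fitted model; the near-zero result underscores how far the point Glucose = BMI = Age = 0 lies outside the data:

# Probability of diabetes implied by the intercept alone (all predictors = 0)
plogis(coef(model)["(Intercept)"])   # inverse logit of -9.032, ~0.00012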

For each predictor (Glucose, BMI, Age), does a one-unit increase raise or lower the odds of diabetes? Are they significant (p-value < 0.05)? A one-unit increase in Glucose raises the odds of diabetes by a factor of about 1.036, and the effect is statistically significant (p < 0.05). BMI also shows a positive, significant association (OR ≈ 1.094 per unit), as does Age (OR ≈ 1.029 per year). Each predictor therefore contributes meaningfully to the model, and higher glucose, BMI, and age all correspond to higher estimated odds of diabetes.
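
Because a one-unit change in glucose (1 mg/dL) is tiny, it can help to rescale the odds ratios to larger increments. A brief sketch; the 10-unit and 5-unit increments are illustrative choices, not part of the assignment:

exp(10 * coef(model)["Glucose"])  # odds ratio for a 10 mg/dL glucose increase, ~1.43
exp(5  * coef(model)["BMI"])      # odds ratio for a 5-unit BMI increase, ~1.57
exp(10 * coef(model)["Age"])      # odds ratio for a 10-year age increase, ~1.33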

Question 2: Confusion Matrix and Important Metrics

Calculate and report the metrics:

Accuracy: (TP + TN) / Total
Sensitivity (Recall): TP / (TP + FN)
Specificity: TN / (TN + FP)
Precision: TP / (TP + FP)

Use the following starter code:

# Keep only rows with no missing values in Glucose, BMI, or Age
data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]

# Create a numeric version of the outcome (0 = no diabetes, 1 = diabetes).
# This is required for calculating the confusion matrix.
data_subset$Outcome_num <- ifelse(data_subset$Outcome == "1", 1, 0)


# Predicted probabilities
data_subset$pred_prob <- predict(model, newdata = data_subset, type = "response")



# Predicted classes
threshold <- 0.5
data_subset$pred_class <- ifelse(data_subset$pred_prob > threshold, 1, 0)



# Confusion matrix
conf_mat <- table(Predicted = data_subset$pred_class, Actual = data_subset$Outcome_num)
conf_mat
##          Actual
## Predicted   0   1
##         0 429 114
##         1  59 150
# Extract values; a class that is never predicted would be missing from the
# table entirely, so index by name and fall back to 0 rather than relying on is.na()
cell <- function(m, r, c) if (r %in% rownames(m) && c %in% colnames(m)) m[r, c] else 0
TN <- cell(conf_mat, "0", "0")
FP <- cell(conf_mat, "1", "0")
FN <- cell(conf_mat, "0", "1")
TP <- cell(conf_mat, "1", "1")

# Metrics
total <- TN + FP + FN + TP
accuracy <- (TP + TN) / total
sensitivity <- ifelse((TP + FN) == 0, NA, TP / (TP + FN))
specificity <- ifelse((TN + FP) == 0, NA, TN / (TN + FP))
precision <- ifelse((TP + FP) == 0, NA, TP / (TP + FP))


cat("Accuracy:", round(accuracy, 3), "\nSensitivity:", round(sensitivity, 3), "\nSpecificity:", round(specificity, 3), "\nPrecision:", round(precision, 3))
## Accuracy: 0.77 
## Sensitivity: 0.568 
## Specificity: 0.879 
## Precision: 0.718

Interpret: How well does the model perform? Is it better at detecting diabetes (sensitivity) or non-diabetes (specificity)? Why might this matter for medical diagnosis?

The model shows moderate overall performance: accuracy is 0.77, but sensitivity (0.568) is well below specificity (0.879), so the model detects non-diabetes cases more reliably than diabetes cases. In medical settings this distinction matters because low sensitivity means some individuals who actually have diabetes are missed. Missing true cases can delay treatment and increase health risks, so models used for screening generally prioritize higher sensitivity even if that reduces specificity.
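
To see the trade-off directly, sensitivity and specificity can be recomputed at a few candidate thresholds. A minimal sketch reusing pred_prob and Outcome_num from data_subset above; the threshold grid is an illustrative choice:

# Sensitivity/specificity trade-off across candidate thresholds
sapply(c(0.3, 0.4, 0.5), function(t) {
  pred <- ifelse(data_subset$pred_prob > t, 1, 0)
  c(threshold   = t,
    sensitivity = sum(pred == 1 & data_subset$Outcome_num == 1) / sum(data_subset$Outcome_num == 1),
    specificity = sum(pred == 0 & data_subset$Outcome_num == 0) / sum(data_subset$Outcome_num == 0))
})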

Question 3: ROC Curve, AUC, and Interpretation

# Enter your code here
# Install pROC if it is not already available, then load it
if (!requireNamespace("pROC", quietly = TRUE)) {
  install.packages("pROC", repos = "https://cloud.r-project.org")
}
library(pROC)
## Warning: package 'pROC' was built under R version 4.5.2
## Type 'citation("pROC")' for a citation.
## 
## Attaching package: 'pROC'
## The following objects are masked from 'package:stats':
## 
##     cov, smooth, var
# Predicted probabilities (from fitted model)
pred_prob <- predict(model, type = "response")

# Numeric outcome
Outcome_num <- ifelse(data_subset$Outcome == "1", 1, 0)

# ROC and AUC
roc_obj <- roc(Outcome_num, pred_prob)
## Setting levels: control = 0, case = 1
## Setting direction: controls < cases
plot(roc_obj, main = "ROC Curve", col = "blue", lwd = 2)

# Print AUC
auc(roc_obj)
## Area under the curve: 0.828

What does AUC indicate (0.5 = random, 1.0 = perfect)? The AUC (Area Under the ROC Curve) measures the model’s ability to discriminate between positive and negative cases.

AUC = 0.5 → model performs no better than random chance

AUC = 1.0 → model perfectly distinguishes between diabetes and non-diabetes

In fact, the AUC equals the probability that a randomly chosen diabetic case receives a higher predicted probability than a randomly chosen non-diabetic case, so higher values reflect better discrimination. The AUC of 0.828 obtained here indicates good discriminative ability.
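
This probabilistic interpretation can be checked empirically by comparing predicted probabilities across all case-control pairs, with ties counted as 0.5 per the standard concordance definition. A small sketch; the result should closely match auc(roc_obj):

# Empirical concordance: fraction of (case, control) pairs in which the
# case receives the higher predicted probability
cases    <- pred_prob[Outcome_num == 1]
controls <- pred_prob[Outcome_num == 0]
mean(outer(cases, controls, ">") + 0.5 * outer(cases, controls, "=="))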

For diabetes diagnosis, prioritize sensitivity (catching cases) or specificity (avoiding false positives)? Suggest a threshold and explain.

In diabetes screening, sensitivity should be prioritized to catch as many true cases as possible: false negatives (missed diabetics) can delay treatment and increase health risks. To improve sensitivity, a lower classification threshold can be used, for example 0.3–0.4 instead of the default 0.5. This increases the number of detected cases, at the cost of reduced specificity and more false positives.
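
pROC can also suggest a data-driven cutoff. A sketch using Youden's J statistic (the threshold maximizing sensitivity + specificity - 1); this optimizes one particular criterion and is only one reasonable way to pick a threshold:

# Threshold maximizing Youden's J, with the resulting sensitivity/specificity
coords(roc_obj, "best", best.method = "youden",
       ret = c("threshold", "sensitivity", "specificity"))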