Introduction:

In this homework, you will apply logistic regression to a real-world dataset: the Pima Indians Diabetes Database. This dataset contains medical records from 768 women of Pima Indian heritage, aged 21 or older, and is used to predict the onset of diabetes (binary outcome: 0 = no diabetes, 1 = diabetes) based on physiological measurements.

The data is publicly available from the UCI Machine Learning Repository and can be imported directly.

Dataset URL: https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv

Columns (no header in the CSV, so we need to assign them manually):

  1. Pregnancies: Number of times pregnant
  2. Glucose: Plasma glucose concentration (2-hour test)
  3. BloodPressure: Diastolic blood pressure (mm Hg)
  4. SkinThickness: Triceps skin fold thickness (mm)
  5. Insulin: 2-hour serum insulin (mu U/ml)
  6. BMI: Body mass index (weight in kg/(height in m)^2)
  7. DiabetesPedigreeFunction: Diabetes pedigree function (a function scoring genetic risk)
  8. Age: Age in years
  9. Outcome: Class variable (0 = no diabetes, 1 = diabetes)

Task Overview: You will load the data, build a logistic regression model to predict diabetes onset using a subset of predictors (Glucose, BMI, Age), interpret the model, evaluate it with a confusion matrix and metrics, and analyze the ROC curve and AUC.

Cleaning the dataset:

Don't change the following code.

library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr     1.1.4     ✔ readr     2.1.5
## ✔ forcats   1.0.1     ✔ stringr   1.5.1
## ✔ ggplot2   4.0.0     ✔ tibble    3.3.0
## ✔ lubridate 1.9.4     ✔ tidyr     1.3.1
## ✔ purrr     1.1.0     
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
url <- "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"

data <- read.csv(url, header = FALSE)

colnames(data) <- c("Pregnancies", "Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI", "DiabetesPedigreeFunction", "Age", "Outcome")

data$Outcome <- as.factor(data$Outcome)

# Handle missing values (replace 0s with NA because 0 makes no sense here)
data$Glucose[data$Glucose == 0] <- NA
data$BloodPressure[data$BloodPressure == 0] <- NA
data$BMI[data$BMI == 0] <- NA


colSums(is.na(data))
##              Pregnancies                  Glucose            BloodPressure 
##                        0                        5                       35 
##            SkinThickness                  Insulin                      BMI 
##                        0                        0                       11 
## DiabetesPedigreeFunction                      Age                  Outcome 
##                        0                        0                        0
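As an optional sanity check (not required, and separate from the fixed code above), we can confirm that the zero placeholders are gone from the three cleaned columns:

# Optional check: minimums should now be realistic, with NAs counted separately
summary(data[, c("Glucose", "BloodPressure", "BMI")])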

Question 1: Create and Interpret a Logistic Regression Model - Fit a logistic regression model to predict Outcome using Glucose, BMI, and Age.

## Enter your code here

logistic <- glm(Outcome ~ Glucose + BMI + Age, data = data, family = "binomial")

summary(logistic)
## 
## Call:
## glm(formula = Outcome ~ Glucose + BMI + Age, family = "binomial", 
##     data = data)
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -9.032377   0.711037 -12.703  < 2e-16 ***
## Glucose      0.035548   0.003481  10.212  < 2e-16 ***
## BMI          0.089753   0.014377   6.243  4.3e-10 ***
## Age          0.028699   0.007809   3.675 0.000238 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 974.75  on 751  degrees of freedom
## Residual deviance: 724.96  on 748  degrees of freedom
##   (16 observations deleted due to missingness)
## AIC: 732.96
## 
## Number of Fisher Scoring iterations: 4

What does the intercept represent (log-odds of diabetes when predictors are zero)?

Intercept (−9.03):

It represents the log-odds of having diabetes when Glucose = 0, BMI = 0, and Age = 0. On the probability scale:

p = 1 / (1 + e^9.03) ≈ 0.00012

So the baseline probability of diabetes is about 0.012% when all predictors are zero. Since Glucose = 0 and BMI = 0 are physiologically impossible, the intercept is a mathematical anchor for the model rather than a meaningful baseline for a real patient.
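As a quick sketch, the same back-transformation can be reproduced in R from the fitted model above (plogis() is the inverse logit):

# Inverse logit of the intercept: log-odds -> probability
plogis(coef(logistic)[["(Intercept)"]])  # approximately 0.00012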

For each predictor (Glucose, BMI, Age), does a one-unit increase raise or lower the odds of diabetes? Are they significant (p-value < 0.05)?

Glucose (0.0355):

A one-unit increase raises the odds of diabetes. Odds ratio: e^0.0355 ≈ 1.036, i.e., each additional unit of glucose increases the odds by about 3.6%. This effect is statistically significant (p < 0.001).

BMI (0.0898):

A one-unit increase raises the odds of diabetes. Odds ratio: e^0.0898 ≈ 1.094, i.e., each additional BMI unit increases the odds by about 9.4%. This effect is statistically significant (p < 0.001).

Age (0.0287):

A one-unit increase raises the odds of diabetes. Odds ratio: e^0.0287 ≈ 1.029, i.e., each additional year of age increases the odds by about 2.9%. This effect is statistically significant (p < 0.001).
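All three odds ratios can be computed in one step from the fitted model; as a sketch, confint.default() adds Wald 95% confidence intervals:

# Odds ratios with Wald 95% confidence intervals
exp(cbind(OR = coef(logistic), confint.default(logistic)))

An interval lying entirely above 1 agrees with the significant positive effects reported above.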

Question 2: Confusion Matrix and Important Metrics

Calculate and report the metrics:

  Accuracy: (TP + TN) / Total
  Sensitivity (Recall): TP / (TP + FN)
  Specificity: TN / (TN + FP)
  Precision: TP / (TP + FP)

Use the following starter code

# Keep only rows with no missing values in Glucose, BMI, or Age

data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]

# Create numeric outcome

data_subset$Outcome_num <- ifelse(data_subset$Outcome == "1", 1, 0)

# Fit the logistic model again on this subset

logistic <- glm(Outcome_num ~ Glucose + BMI + Age, data = data_subset, family = "binomial")

# Predicted probabilities

pred_prob <- predict(logistic, type = "response")

# Predicted classes (0.5 threshold)

pred_class <- ifelse(pred_prob > 0.5, 1, 0)

# Confusion matrix

conf_mat <- table(Predicted = pred_class, Actual = data_subset$Outcome_num)
conf_mat
##          Actual
## Predicted   0   1
##         0 429 114
##         1  59 150
# Extract TN, FN, FP, TP (rows = Predicted, columns = Actual)

TN <- conf_mat[1, 1]  # predicted 0, actual 0
FN <- conf_mat[1, 2]  # predicted 0, actual 1
FP <- conf_mat[2, 1]  # predicted 1, actual 0
TP <- conf_mat[2, 2]  # predicted 1, actual 1

# Metrics

accuracy <- (TP + TN) / sum(conf_mat)
sensitivity <- TP / (TP + FN)
specificity <- TN / (TN + FP)
precision <- TP / (TP + FP)

cat("Accuracy:", round(accuracy, 3),
"\nSensitivity:", round(sensitivity, 3),
"\nSpecificity:", round(specificity, 3),
"\nPrecision:", round(precision, 3))
## Accuracy: 0.77 
## Sensitivity: 0.568 
## Specificity: 0.879 
## Precision: 0.718

Interpret: How well does the model perform? Is it better at detecting diabetes (sensitivity) or non-diabetes (specificity)? Why might this matter for medical diagnosis?

The model's overall accuracy is about 77%, but accuracy alone hides an important asymmetry.

It is much better at detecting non-diabetes (specificity ≈ 0.879) than detecting diabetes (sensitivity ≈ 0.568): it correctly identifies healthy individuals far more often than it identifies those with diabetes, missing roughly 43% of true diabetes cases at the 0.5 cutoff.

In a medical context this matters because low sensitivity means the model misses true diabetes cases, which is more serious than a false alarm. Missing a diabetic patient delays treatment, so a diagnostic model should prioritize higher sensitivity, even if specificity decreases; the sketch below shows the effect of lowering the cutoff.
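To make the trade-off concrete, here is a sketch (reusing pred_prob and data_subset from the starter code above) that recomputes sensitivity and specificity at a lower 0.3 cutoff:

# Sketch: lower the cutoff from 0.5 to 0.3 to favor sensitivity
pred_class_03 <- ifelse(pred_prob > 0.3, 1, 0)
cm_03 <- table(Predicted = pred_class_03, Actual = data_subset$Outcome_num)
sens_03 <- cm_03[2, 2] / (cm_03[2, 2] + cm_03[1, 2])  # TP / (TP + FN)
spec_03 <- cm_03[1, 1] / (cm_03[1, 1] + cm_03[2, 1])  # TN / (TN + FP)
cat("Sensitivity at 0.3:", round(sens_03, 3),
    "\nSpecificity at 0.3:", round(spec_03, 3), "\n")

Sensitivity rises and specificity falls relative to the 0.5 cutoff, which is the expected direction for a screening-oriented threshold.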

Question 3: ROC Curve, AUC, and Interpretation

# Enter your code here
# install.packages("pROC") # if needed

library(pROC)
## Warning: package 'pROC' was built under R version 4.5.2
## Type 'citation("pROC")' for a citation.
## 
## Attaching package: 'pROC'
## The following objects are masked from 'package:stats':
## 
##     cov, smooth, var
# ROC curve & AUC using data_subset from Q2

roc_obj <- roc(response = data_subset$Outcome_num,
               predictor = pred_prob,
               levels = c(0, 1),
               direction = "<")  # smaller probabilities correspond to class 0 (no diabetes)

# Print AUC value

auc_val <- auc(roc_obj); auc_val
## Area under the curve: 0.828
# Plot ROC with AUC displayed

plot.roc(roc_obj, print.auc = TRUE, legacy.axes = TRUE,
         xlab = "False Positive Rate (1 - Specificity)",
         ylab = "True Positive Rate (Sensitivity)")

What does AUC indicate (0.5 = random, 1.0 = perfect)?

The AUC of 0.828 means the model discriminates well between diabetic and non-diabetic patients: if one diabetic and one non-diabetic patient are chosen at random, the model assigns the diabetic patient the higher predicted probability about 83% of the time.

On the plot, the curve sits well above the diagonal "random guess" line, confirming that the model performs much better than chance (AUC = 0.5) while remaining short of perfect discrimination (AUC = 1.0).

For diabetes diagnosis, prioritize sensitivity (catching cases) or specificity (avoiding false positives)? Suggest a threshold and explain.

For diabetes diagnosis, we should prioritize sensitivity so that true cases are not missed. A lower threshold, around 0.3–0.4, catches more positive patients at the cost of more false alarms; a false positive typically triggers a follow-up test, while a false negative delays treatment. One way to ground the choice in the data is shown below.
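As a sketch, pROC's coords() on the roc_obj from above can report the threshold that maximizes Youden's J (sensitivity + specificity − 1), along with the metrics it achieves:

# Threshold maximizing Youden's J, with the sensitivity/specificity it achieves
coords(roc_obj, "best", best.method = "youden",
       ret = c("threshold", "sensitivity", "specificity"))

Youden's J weights misses and false alarms equally; for a screening setting, we might deliberately choose a cutoff below this "best" value to push sensitivity even higher.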