Introduction:

In this homework, you will apply logistic regression to a real-world dataset: the Pima Indians Diabetes Database. This dataset contains medical records from 768 women of Pima Indian heritage, aged 21 or older, and is used to predict the onset of diabetes (binary outcome: 0 = no diabetes, 1 = diabetes) based on physiological measurements.

The data is publicly available from the UCI Machine Learning Repository and can be imported directly.

Dataset URL: https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv

Columns (no header in the CSV, so we need to assign them manually):

  1. Pregnancies: Number of times pregnant
  2. Glucose: Plasma glucose concentration (2-hour test)
  3. BloodPressure: Diastolic blood pressure (mm Hg)
  4. SkinThickness: Triceps skin fold thickness (mm)
  5. Insulin: 2-hour serum insulin (mu U/ml)
  6. BMI: Body mass index (weight in kg/(height in m)^2)
  7. DiabetesPedigreeFunction: Diabetes pedigree function (a function scoring genetic risk)
  8. Age: Age in years
  9. Outcome: Class variable (0 = no diabetes, 1 = diabetes)

Task Overview: You will load the data, build a logistic regression model to predict diabetes onset using a subset of predictors (Glucose, BMI, Age), interpret the model, evaluate it with a confusion matrix and metrics, and analyze the ROC curve and AUC.

Cleaning the dataset

Don’t change the following code.

library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr     1.1.4     ✔ readr     2.1.5
## ✔ forcats   1.0.1     ✔ stringr   1.5.2
## ✔ ggplot2   4.0.0     ✔ tibble    3.3.0
## ✔ lubridate 1.9.4     ✔ tidyr     1.3.1
## ✔ purrr     1.1.0     
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
url <- "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"

data <- read.csv(url, header = FALSE)

colnames(data) <- c("Pregnancies", "Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI", "DiabetesPedigreeFunction", "Age", "Outcome")

data$Outcome <- as.factor(data$Outcome)

# Handle missing values (replace 0s with NA because 0 makes no sense here)
data$Glucose[data$Glucose == 0] <- NA
data$BloodPressure[data$BloodPressure == 0] <- NA
data$BMI[data$BMI == 0] <- NA


colSums(is.na(data))
##              Pregnancies                  Glucose            BloodPressure 
##                        0                        5                       35 
##            SkinThickness                  Insulin                      BMI 
##                        0                        0                       11 
## DiabetesPedigreeFunction                      Age                  Outcome 
##                        0                        0                        0
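Aside: since tidyverse is already loaded, the same zero-to-NA replacement can be written with dplyr (an equivalent sketch, not part of the required starter code; it is idempotent if run after the base R version above):

# Tidyverse equivalent of the zero-to-NA replacement above (idempotent)
data <- data %>%
  mutate(across(c(Glucose, BloodPressure, BMI), ~ na_if(.x, 0)))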

Question 1: Create and Interpret a Logistic Regression Model

Fit a logistic regression model to predict Outcome using Glucose, BMI, and Age.

## Enter your code here

model <- glm(Outcome ~ Glucose + BMI + Age,
             data = data,
             family = "binomial")

# Model summary
summary(model)
## 
## Call:
## glm(formula = Outcome ~ Glucose + BMI + Age, family = "binomial", 
##     data = data)
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -9.032377   0.711037 -12.703  < 2e-16 ***
## Glucose      0.035548   0.003481  10.212  < 2e-16 ***
## BMI          0.089753   0.014377   6.243  4.3e-10 ***
## Age          0.028699   0.007809   3.675 0.000238 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 974.75  on 751  degrees of freedom
## Residual deviance: 724.96  on 748  degrees of freedom
##   (16 observations deleted due to missingness)
## AIC: 732.96
## 
## Number of Fisher Scoring iterations: 4
# McFadden's pseudo R²: 1 - (residual deviance / null deviance)
r_square <- 1 - (model$deviance / model$null.deviance)
r_square
## [1] 0.25626

What does the intercept represent (log-odds of diabetes when predictors are zero)?

The intercept of –9.032377 represents the log odds of diabetes when Glucose, BMI, and Age are all zero. Since such values are not realistic in practice, the intercept is not clinically meaningful, but it serves as a baseline for the model.
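To see how extreme that baseline is, the intercept can be run through the logistic function (a quick check using the coefficient from the summary above):

# Fitted P(diabetes) when Glucose, BMI, and Age are all zero
plogis(-9.032377)  # ≈ 0.00012, i.e., essentially zero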

For each predictor (Glucose, BMI, Age), does a one-unit increase raise or lower the odds of diabetes? Are they significant (p-value < 0.05)?

All three predictors (Glucose, BMI, and Age) have positive and statistically significant coefficients (p < 0.05). This means a one-unit increase in any of these variables raises the odds of having diabetes, holding the others fixed. The McFadden pseudo R² of 0.2563 means the model reduces the null deviance by about 25.63% relative to a model with no predictors.
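To put these effects on a more interpretable scale, the coefficients can be exponentiated into odds ratios (a quick sketch using the model fitted above; the approximate values follow directly from the summary output):

# Odds ratios: multiplicative change in the odds per one-unit increase
exp(coef(model))
# Glucose: exp(0.035548) ≈ 1.036, i.e., ~3.6% higher odds per unit
# BMI:     exp(0.089753) ≈ 1.094, i.e., ~9.4% higher odds per unit
# Age:     exp(0.028699) ≈ 1.029, i.e., ~2.9% higher odds per year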

Question 2: Confusion Matrix and Important Metrics

Calculate and report the metrics:

  1. Accuracy: (TP + TN) / Total
  2. Sensitivity (Recall): TP / (TP + FN)
  3. Specificity: TN / (TN + FP)
  4. Precision: TP / (TP + FP)

Use the following starter code

# Keep only rows with no missing values in Glucose, BMI, or Age
data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]

# Create a numeric version of the outcome (0 = no diabetes, 1 = diabetes).
# This is needed for the confusion matrix and ROC calculations below.
data_subset$Outcome_num <- ifelse(data_subset$Outcome == "1", 1, 0)


model_subset <- glm(Outcome ~ Glucose + BMI + Age,
                    data = data_subset,
                    family = "binomial")

# Predicted probabilities

predicted.probs <- model_subset$fitted.values

# Predicted classes

predicted.classes <- ifelse(predicted.probs > 0.5, 1, 0)

# Confusion matrix

confusion <- table(
  Predicted = factor(predicted.classes, levels = c(0,1)),
  Actual = factor(data_subset$Outcome_num, levels = c(0,1)))
confusion
##          Actual
## Predicted   0   1
##         0 429 114
##         1  59 150
# Extract values from the confusion matrix (rows = predicted, cols = actual)
TN <- confusion["0", "0"]  # 429: correctly predicted no diabetes
FP <- confusion["1", "0"]  # 59:  predicted diabetes, actually none
FN <- confusion["0", "1"]  # 114: missed diabetes cases
TP <- confusion["1", "1"]  # 150: correctly predicted diabetes

#Metrics    
accuracy <- (TP + TN) / (TP + TN + FP + FN)
sensitivity <- TP / (TP + FN)
specificity <- TN / (TN + FP)
precision <- TP / (TP + FP)

cat("Accuracy:", round(accuracy, 3), "\nSensitivity:", round(sensitivity, 3), "\nSpecificity:", round(specificity, 3), "\nPrecision:", round(precision, 3))
## Accuracy: 0.77 
## Sensitivity: 0.568 
## Specificity: 0.879 
## Precision: 0.718

Interpret: How well does the model perform? Is it better at detecting diabetes (sensitivity) or non-diabetes (specificity)? Why might this matter for medical diagnosis?

The logistic regression model shows an overall accuracy of 77%, meaning it correctly classifies the majority of individuals. However, its performance differs between the two classes. The sensitivity is 56.8%, indicating the model detects only a little over half of actual diabetes cases, while the specificity is much higher at 87.9%, meaning it is far better at identifying people who do not have diabetes. This imbalance suggests that the model is more effective at ruling out diabetes than detecting it. In a medical context, this is important because missing true diabetes cases (false negatives) can delay diagnosis and treatment. While the model performs reasonably well overall, its relatively low sensitivity means it may not be ideal as a screening tool without adjusting the probability threshold to improve the detection of positive cases.

Question 3: ROC Curve, AUC, and Interpretation

#Enter your code here

#install.packages("pROC")
library(pROC)
## Warning: package 'pROC' was built under R version 4.5.2
## Type 'citation("pROC")' for a citation.
## 
## Attaching package: 'pROC'
## The following objects are masked from 'package:stats':
## 
##     cov, smooth, var
roc_obj <- roc(response = data_subset$Outcome_num,
               predictor = predicted.probs)
## Setting levels: control = 0, case = 1
## Setting direction: controls < cases
auc_value <- auc(roc_obj)
auc_value
## Area under the curve: 0.828
plot(roc_obj, main = "ROC Curve for Diabetes Prediction Model",
     ylab = "Sensitivity",
     xlab = "Specificity")

What does AUC indicate (0.5 = random, 1.0 = perfect)?

The model’s AUC is 0.828, which indicates strong discriminatory ability. This means that about 82.8% of the time, the model ranks a randomly chosen diabetic individual higher in predicted risk than a randomly chosen non-diabetic individual.
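That ranking interpretation can be checked directly from the fitted probabilities (a sketch: it recomputes the AUC as the proportion of correctly ordered case/control pairs, counting ties as half):

# AUC as a pairwise ranking probability
cases    <- predicted.probs[data_subset$Outcome_num == 1]
controls <- predicted.probs[data_subset$Outcome_num == 0]
mean(outer(cases, controls, ">") + 0.5 * outer(cases, controls, "=="))
# Should agree with auc(roc_obj), i.e., about 0.828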

For diabetes diagnosis, prioritize sensitivity (catching cases) or specificity (avoiding false positives)? Suggest a threshold and explain.

For diabetes diagnosis, it is more important to prioritize sensitivity (correctly identifying people who actually have diabetes) than specificity. Missing a diabetic patient (a false negative) can delay treatment and lead to serious medical complications. False positives, on the other hand, can be followed up with additional testing and do not usually pose an immediate health risk. Because the model’s sensitivity at the default 0.5 threshold (56.8%) is much lower than its specificity (87.9%), I would lower the classification threshold from 0.5 to something like 0.40. This would increase sensitivity, allowing the model to catch more true diabetes cases, even though it will produce more false positives.
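A minimal sketch of that adjustment (the 0.40 cutoff is the suggestion above, not a value tuned on this data; pROC’s coords() can also propose a data-driven threshold via Youden’s J):

# Re-classify at the lower threshold and rebuild the confusion matrix
predicted.classes.40 <- ifelse(predicted.probs > 0.40, 1, 0)
table(Predicted = factor(predicted.classes.40, levels = c(0, 1)),
      Actual = factor(data_subset$Outcome_num, levels = c(0, 1)))

# Threshold maximizing sensitivity + specificity - 1 (Youden's J)
coords(roc_obj, "best", ret = c("threshold", "sensitivity", "specificity"))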