Introduction:

In this homework, you will apply logistic regression to a real-world dataset: the Pima Indians Diabetes Database. This dataset contains medical records from 768 women of Pima Indian heritage, aged 21 or older, and is used to predict the onset of diabetes (binary outcome: 0 = no diabetes, 1 = diabetes) based on physiological measurements.

The data is publicly available from the UCI Machine Learning Repository and can be imported directly.

Dataset URL: https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv

Columns (no header in the CSV, so we need to assign them manually):

  1. Pregnancies: Number of times pregnant
  2. Glucose: Plasma glucose concentration at 2 hours in an oral glucose tolerance test
  3. BloodPressure: Diastolic blood pressure (mm Hg)
  4. SkinThickness: Triceps skin fold thickness (mm)
  5. Insulin: 2-hour serum insulin (μU/ml)
  6. BMI: Body mass index (weight in kg/(height in m)^2)
  7. DiabetesPedigreeFunction: Diabetes pedigree function (a function scoring genetic risk)
  8. Age: Age in years
  9. Outcome: Class variable (0 = no diabetes, 1 = diabetes)

Task Overview: You will load the data, build a logistic regression model to predict diabetes onset using a subset of predictors (Glucose, BMI, Age), interpret the model, evaluate it with a confusion matrix and metrics, and analyze the ROC curve and AUC.

Cleaning the dataset

Don’t change the following code.

library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr     1.1.4     ✔ readr     2.1.5
## ✔ forcats   1.0.1     ✔ stringr   1.5.2
## ✔ ggplot2   4.0.0     ✔ tibble    3.3.0
## ✔ lubridate 1.9.4     ✔ tidyr     1.3.1
## ✔ purrr     1.1.0     
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
url <- "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"

data <- read.csv(url, header = FALSE)

colnames(data) <- c("Pregnancies", "Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI", "DiabetesPedigreeFunction", "Age", "Outcome")

data$Outcome <- as.factor(data$Outcome)

# Handle missing values (replace 0s with NA because 0 makes no sense here)
data$Glucose[data$Glucose == 0] <- NA
data$BloodPressure[data$BloodPressure == 0] <- NA
data$BMI[data$BMI == 0] <- NA


colSums(is.na(data))
##              Pregnancies                  Glucose            BloodPressure 
##                        0                        5                       35 
##            SkinThickness                  Insulin                      BMI 
##                        0                        0                       11 
## DiabetesPedigreeFunction                      Age                  Outcome 
##                        0                        0                        0
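For reference only (the starter code above must stay unchanged): the same 0-to-NA replacement can be written with dplyr, which the tidyverse already loads. Here data_tidy is a hypothetical name for the result, not part of the assignment:

data_tidy <- data %>%
  mutate(across(c(Glucose, BloodPressure, BMI), ~ na_if(.x, 0)))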

Question 1: Create and Interpret a Logistic Regression Model

Fit a logistic regression model to predict Outcome using Glucose, BMI, and Age.

## Enter your code here

data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]

log_model <- glm(Outcome ~ Glucose + BMI + Age,
                 data = data_subset,
                 family = "binomial")

summary(log_model)
## 
## Call:
## glm(formula = Outcome ~ Glucose + BMI + Age, family = "binomial", 
##     data = data_subset)
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -9.032377   0.711037 -12.703  < 2e-16 ***
## Glucose      0.035548   0.003481  10.212  < 2e-16 ***
## BMI          0.089753   0.014377   6.243  4.3e-10 ***
## Age          0.028699   0.007809   3.675 0.000238 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 974.75  on 751  degrees of freedom
## Residual deviance: 724.96  on 748  degrees of freedom
## AIC: 732.96
## 
## Number of Fisher Scoring iterations: 4
# McFadden's pseudo-R^2: proportional reduction in deviance relative to the null model
r2 <- 1 - (log_model$deviance / log_model$null.deviance)
r2
## [1] 0.25626
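The value of about 0.256 is McFadden’s pseudo-R²: the three predictors reduce the null deviance by roughly 26 percent, which is generally read as a reasonably good fit for a logistic regression model.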

What does the intercept represent (log-odds of diabetes when predictors are zero)?

The intercept is -9.032377, representing the log-odds of having diabetes when Glucose, BMI, and Age are all 0. Since a glucose level, BMI, or age of 0 is not physiologically possible, the intercept is a mathematical baseline for the model rather than a clinically meaningful prediction.
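To see just how small that baseline is, the intercept can be converted from log-odds to a probability with plogis() from base R (a quick sketch reusing the fitted log_model):

plogis(coef(log_model)[["(Intercept)"]])
# ≈ 0.00012: essentially zero probability of diabetes at Glucose = BMI = Age = 0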

For each predictor (Glucose, BMI, Age), does a one-unit increase raise or lower the odds of diabetes? Are they significant (p-value < 0.05)?

Glucose, BMI, and Age all have positive coefficients, meaning a one-unit increase in any of them (holding the others constant) raises the odds of diabetes. All three p-values are well below 0.05, so each predictor is statistically significant.
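To quantify how much the odds change, one quick check (a sketch reusing the log_model fit above) is to exponentiate the coefficients into odds ratios:

# Multiplicative change in the odds of diabetes per one-unit increase in each predictor
exp(coef(log_model))
# From the coefficients above: Glucose ≈ 1.036, BMI ≈ 1.094, Age ≈ 1.029

A one-unit increase in BMI, for example, multiplies the odds of diabetes by about 1.09, a 9 percent increase.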

Question 2: Confusion Matrix and Important Metrics

Calculate and report the metrics:

Accuracy: (TP + TN) / Total
Sensitivity (Recall): TP / (TP + FN)
Specificity: TN / (TN + FP)
Precision: TP / (TP + FP)

Use the following starter code

# Keep only rows with no missing values in Glucose, BMI, or Age
data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]

# Create a numeric version of the outcome (0 = no diabetes, 1 = diabetes).
# This is required for calculating the confusion matrix.
data_subset$Outcome_num <- ifelse(data_subset$Outcome == "1", 1, 0)

log_model <- glm(Outcome ~ Glucose + BMI + Age,
                 data = data_subset,
                 family = "binomial")


# Predicted probabilities
predicted_probs <- predict(log_model, type = "response")


# Predicted classes
predicted_class <- ifelse(predicted_probs > 0.5, 1, 0)


# Confusion matrix
conf_matrix <- table(
  Predicted = factor(predicted_class, levels = c(0,1)),
  Actual = factor(data_subset$Outcome_num, levels = c(0,1))
)

conf_matrix
##          Actual
## Predicted   0   1
##         0 429 114
##         1  59 150
# Extract values (rows of conf_matrix are Predicted, columns are Actual)
TN <- conf_matrix[1, 1]  # predicted 0, actual 0
FN <- conf_matrix[1, 2]  # predicted 0, actual 1
FP <- conf_matrix[2, 1]  # predicted 1, actual 0
TP <- conf_matrix[2, 2]  # predicted 1, actual 1

# Metrics
accuracy <- (TP + TN) / (TP + TN + FP + FN)
sensitivity <- TP / (TP + FN)
specificity <- TN / (TN + FP)
precision <- TP / (TP + FP)

cat("Accuracy:", round(accuracy, 3), "\nSensitivity:", round(sensitivity, 3), "\nSpecificity:", round(specificity, 3), "\nPrecision:", round(precision, 3))
## Accuracy: 0.77
## Sensitivity: 0.568
## Specificity: 0.879
## Precision: 0.718

Interpret: How well does the model perform? Is it better at detecting diabetes (sensitivity) or non-diabetes (specificity)? Why might this matter for medical diagnosis?

This model is considerably better at identifying non-diabetics (based on specificity) than diabetics (based on sensitivity): it correctly identifies about 88 percent of people who do not have diabetes but only about 57 percent of people who do. When the model predicts that someone has diabetes, it is correct about 72 percent of the time (based on precision).

In medical contexts, it would likely be more important for this model to become better at identifying diabetics, as missing a diabetic case could lead to a lack of treatment, which would be very detrimental (possibly even fatal).

Question 3: ROC Curve, AUC, and Interpretation

# Enter your code here

library(pROC)
## Warning: package 'pROC' was built under R version 4.5.2
## Type 'citation("pROC")' for a citation.
## 
## Attaching package: 'pROC'
## The following objects are masked from 'package:stats':
## 
##     cov, smooth, var
roc_obj <- roc(response = data_subset$Outcome_num,
               predictor = predicted_probs,
               levels = c(0, 1),
               direction = "<")

plot.roc(roc_obj,
         print.auc = TRUE,
         legacy.axes = TRUE,
         xlab = "False Positive Rate (1 - Specificity)",
         ylab = "True Positive Rate (Sensitivity)",
         main = "Diabetes Prediction Model ROC Curve")

auc_val <- auc(roc_obj)
auc_val
## Area under the curve: 0.828

What does AUC indicate (0.5 = random, 1.0 = perfect)?

The AUC is 0.828, indicating good discriminative ability. An AUC of 0.828 means there is an 82.8 percent chance that the model assigns a higher predicted probability to a randomly chosen diabetic patient than to a randomly chosen non-diabetic patient.
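The ROC curve can also guide threshold selection. pROC’s coords() reports the operating point that maximizes Youden’s J statistic (sensitivity + specificity - 1); a sketch, assuming the roc_obj created above:

coords(roc_obj, "best",
       ret = c("threshold", "sensitivity", "specificity"),
       best.method = "youden")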

For diabetes diagnosis, prioritize sensitivity (catching cases) or specificity (avoiding false positives)? Suggest a threshold and explain.

From a medical standpoint, it is more important to prioritize sensitivity, especially for a condition as serious as diabetes, because a false negative can prevent necessary treatment and lead to severe (possibly fatal) health outcomes. An undiagnosed patient may also continue showing symptoms, forcing them and treatment facilities to spend more time and resources on further testing. Lowering the threshold from 0.5 to 0.4 would catch more true positives at the cost of more false positives. This is an acceptable trade-off, since false negatives are more dangerous in this context than false positives.
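As a sanity check on that suggestion, a minimal sketch (reusing predicted_probs and data_subset from Question 2) that re-evaluates the model at the 0.4 cutoff:

# Reclassify at the lower threshold
predicted_class_04 <- ifelse(predicted_probs > 0.4, 1, 0)

conf_matrix_04 <- table(
  Predicted = factor(predicted_class_04, levels = c(0, 1)),
  Actual = factor(data_subset$Outcome_num, levels = c(0, 1))
)

# Sensitivity should rise and specificity fall relative to the 0.5 cutoff
TN_04 <- conf_matrix_04[1, 1]; FN_04 <- conf_matrix_04[1, 2]
FP_04 <- conf_matrix_04[2, 1]; TP_04 <- conf_matrix_04[2, 2]
cat("Sensitivity:", round(TP_04 / (TP_04 + FN_04), 3),
    "\nSpecificity:", round(TN_04 / (TN_04 + FP_04), 3))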