Introduction:

In this homework, you will apply logistic regression to a real-world dataset: the Pima Indians Diabetes Database. This dataset contains medical records from 768 women of Pima Indian heritage, aged 21 or older, and is used to predict the onset of diabetes (binary outcome: 0 = no diabetes, 1 = diabetes) based on physiological measurements.

The data is publicly available from the UCI Machine Learning Repository and can be imported directly.

Dataset URL: https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv

Columns (no header in the CSV, so we need to assign them manually):

  1. Pregnancies: Number of times pregnant
  2. Glucose: Plasma glucose concentration (2-hour test)
  3. BloodPressure: Diastolic blood pressure (mm Hg)
  4. SkinThickness: Triceps skin fold thickness (mm)
  5. Insulin: 2-hour serum insulin (mu U/ml)
  6. BMI: Body mass index (weight in kg/(height in m)^2)
  7. DiabetesPedigreeFunction: Diabetes pedigree function (a function scoring genetic risk)
  8. Age: Age in years
  9. Outcome: Class variable (0 = no diabetes, 1 = diabetes)

Task Overview: You will load the data, build a logistic regression model to predict diabetes onset using a subset of predictors (Glucose, BMI, Age), interpret the model, evaluate it with a confusion matrix and metrics, and analyze the ROC curve and AUC.

Cleaning the dataset

Don’t change the following code.

library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr     1.1.4     ✔ readr     2.1.5
## ✔ forcats   1.0.0     ✔ stringr   1.5.1
## ✔ ggplot2   3.5.2     ✔ tibble    3.3.0
## ✔ lubridate 1.9.4     ✔ tidyr     1.3.1
## ✔ purrr     1.1.0     
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
url <- "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"

data <- read.csv(url, header = FALSE)

colnames(data) <- c("Pregnancies", "Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI", "DiabetesPedigreeFunction", "Age", "Outcome")

data$Outcome <- as.factor(data$Outcome)

# Handle missing values: replace 0s with NA, since a value of 0 is physiologically impossible for these measurements
data$Glucose[data$Glucose == 0] <- NA
data$BloodPressure[data$BloodPressure == 0] <- NA
data$BMI[data$BMI == 0] <- NA


colSums(is.na(data))
##              Pregnancies                  Glucose            BloodPressure 
##                        0                        5                       35 
##            SkinThickness                  Insulin                      BMI 
##                        0                        0                       11 
## DiabetesPedigreeFunction                      Age                  Outcome 
##                        0                        0                        0
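
As an aside (the cleaning code above should stay as-is per the instructions), the same zero-to-NA replacement could be written more compactly with dplyr, which the tidyverse already loads. A minimal equivalent sketch:

# Equivalent cleaning step using dplyr: na_if() turns 0 into NA
# across the three columns where a value of 0 is impossible.
data <- data %>%
  mutate(across(c(Glucose, BloodPressure, BMI), ~ na_if(.x, 0)))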

Question 1: Create and Interpret a Logistic Regression Model

Fit a logistic regression model to predict Outcome using Glucose, BMI, and Age.

## Enter your code here
logistic <- glm(Outcome ~ Glucose + BMI + Age, data=data, 
                family="binomial")

summary(logistic)
## 
## Call:
## glm(formula = Outcome ~ Glucose + BMI + Age, family = "binomial", 
##     data = data)
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -9.032377   0.711037 -12.703  < 2e-16 ***
## Glucose      0.035548   0.003481  10.212  < 2e-16 ***
## BMI          0.089753   0.014377   6.243  4.3e-10 ***
## Age          0.028699   0.007809   3.675 0.000238 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 974.75  on 751  degrees of freedom
## Residual deviance: 724.96  on 748  degrees of freedom
##   (16 observations deleted due to missingness)
## AIC: 732.96
## 
## Number of Fisher Scoring iterations: 4
# McFadden's pseudo-R-squared: 1 - (residual deviance / null deviance)
r_square <- 1 - (logistic$deviance / logistic$null.deviance)
r_square
## [1] 0.25626

What does the intercept represent (log-odds of diabetes when predictors are zero)?

The intercept of this logistic regression model is –9.03, which represents the log-odds of having diabetes when all predictors (Glucose, BMI, and Age) equal zero. A glucose level of 0 or a BMI of 0 is not physiologically meaningful, so the intercept is not directly interpretable on its own; statistically, however, it anchors the model by fixing the baseline log-odds. Because the intercept is strongly negative, the baseline probability of diabetes, before any contribution from the predictors, is essentially zero.
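
As a quick sanity check (not required by the assignment), the logistic function converts that log-odds value into a probability. A minimal sketch:

# plogis() is the inverse logit, 1 / (1 + exp(-x)); applying it to the
# intercept gives the baseline probability at Glucose = BMI = Age = 0.
plogis(-9.032377)  # roughly 0.00012, i.e., essentially zero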

For each predictor (Glucose, BMI, Age), does a one-unit increase raise or lower the odds of diabetes? Are they significant (p-value < 0.05)?

All three predictors (Glucose, BMI, Age) have positive coefficient estimates, so a one-unit increase in any of them raises the odds of diabetes, holding the others constant. Specifically, the log-odds increase by 0.0355 per unit of Glucose, 0.0898 per unit of BMI, and 0.0287 per year of Age. All three p-values are well below 0.05 (each is flagged '***' in the summary), so all three predictors are statistically significant: Glucose, BMI, and Age each make a meaningful contribution to predicting diabetes.
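
Because the coefficients are on the log-odds scale, exponentiating them gives odds ratios, which are often easier to interpret. A short optional sketch:

# exp() of each coefficient is the multiplicative change in the odds
# of diabetes per one-unit increase in that predictor.
exp(coef(logistic))
# For example, exp(0.035548) is about 1.036, so each additional unit of
# Glucose multiplies the odds of diabetes by roughly 1.036 (a 3.6% increase).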

Question 2: Confusion Matrix and Important Metrics

Calculate and report the following metrics:

  Accuracy: (TP + TN) / Total
  Sensitivity (Recall): TP / (TP + FN)
  Specificity: TN / (TN + FP)
  Precision: TP / (TP + FP)

Use the following starter code

# Keep only rows with no missing values in Glucose, BMI, or Age
data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]

# Create a numeric version of the outcome (0 = no diabetes, 1 = diabetes).
# This is required for building the confusion matrix.
data_subset$Outcome_num <- ifelse(data_subset$Outcome == "1", 1, 0)


# Predicted probabilities
predicted.probs <- logistic$fitted.values


# Predicted classes
predicted.classes <- ifelse(predicted.probs > 0.5, 1, 0)


# Confusion matrix
confusion <- table(
  Predicted = factor(predicted.classes, levels = c(0, 1)),
  Actual = factor(data_subset$Outcome_num, levels = c(0, 1))
)

confusion
##          Actual
## Predicted   0   1
##         0 429 114
##         1  59 150
# Extract values from the confusion matrix (rows = Predicted, columns = Actual):
TN <- 429
FP <- 59
FN <- 114
TP <- 150

# Metrics
accuracy <- (TP + TN) / (TP + TN + FP + FN)
sensitivity <- TP / (TP + FN)
specificity <- TN / (TN + FP)
precision <- TP / (TP + FP)

cat("Accuracy:", round(accuracy, 3),
    "\nSensitivity:", round(sensitivity, 3),
    "\nSpecificity:", round(specificity, 3),
    "\nPrecision:", round(precision, 3))
## Accuracy: 0.77 
## Sensitivity: 0.568 
## Specificity: 0.879 
## Precision: 0.718
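
Hard-coding the four counts works here, but they can also be pulled directly from the confusion table, which avoids transcription mistakes if the data ever changes. An equivalent sketch (same table, rows = Predicted, columns = Actual):

# Index the table by its dimnames: confusion[Predicted, Actual]
TN <- confusion["0", "0"]  # predicted 0, actual 0
FP <- confusion["1", "0"]  # predicted 1, actual 0
FN <- confusion["0", "1"]  # predicted 0, actual 1
TP <- confusion["1", "1"]  # predicted 1, actual 1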

Interpret: How well does the model perform? Is it better at detecting diabetes (sensitivity) or non-diabetes (specificity)? Why might this matter for medical diagnosis?

Overall, the model performs reasonably well based on the confusion matrix and metrics: an accuracy of 0.77 means it classifies 77% of cases correctly. However, because specificity is 0.879 while sensitivity is only 0.568, the model is far better at identifying non-diabetes than diabetes: it correctly detects nearly 88% of people who do not have diabetes, but only about 57% of people who do. This imbalance matters in a medical setting, since low sensitivity means a large number of patients with actual diabetes are missed (false negatives). A missed diabetes diagnosis can delay treatment and cause serious health consequences. So while the model does well at ruling out non-diabetic cases, it underperforms at catching people with diabetes, which suggests lowering the classification threshold or improving the model to achieve higher sensitivity for safer medical use.

Question 3: ROC Curve, AUC, and Interpretation

#Enter your code here
library(pROC)
## Warning: package 'pROC' was built under R version 4.5.2
## Type 'citation("pROC")' for a citation.
## 
## Attaching package: 'pROC'
## The following objects are masked from 'package:stats':
## 
##     cov, smooth, var
roc_obj <- roc(response = data_subset$Outcome,
               predictor = logistic$fitted.values,
               levels = c("0", "1"),
               direction = "<")

auc_val <- auc(roc_obj); auc_val
## Area under the curve: 0.828
plot.roc(roc_obj, print.auc = TRUE, legacy.axes = TRUE,
         xlab = "False Positive Rate (1 - Specificity)",
         ylab = "True Positive Rate (Sensitivity)"
         )

What does AUC indicate (0.5 = random, 1.0 = perfect)?

The AUC measures how well the model distinguishes people who have diabetes from people who do not. An AUC of 0.5 means the model does no better than random guessing, while 1.0 means it separates the two classes perfectly; higher is better. Our AUC of 0.828 indicates the model does a good job overall of separating high-risk from low-risk patients.
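
If desired, pROC can also attach a confidence interval to the AUC estimate, which is useful when comparing models. A one-line sketch:

# 95% confidence interval for the AUC (DeLong method by default)
ci.auc(roc_obj)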

For diabetes diagnosis, prioritize sensitivity (catching cases) or specificity (avoiding false positives)? Suggest a threshold and explain.

For diabetes diagnosis, sensitivity should usually be prioritized, because catching as many true diabetes cases as possible matters most. Failing to identify someone with diabetes has serious long-term consequences, while a false positive typically just leads to additional testing, which is far safer than missing a true case. The threshold can therefore be lowered below 0.5 to increase sensitivity: setting it around 0.35 or 0.40 would let the model flag more true cases, at the cost of more false positives. That trade-off is acceptable in medical screening, where early detection is vital.
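
To see what a lower cutoff actually buys, the confusion matrix can be recomputed at, say, 0.40, and pROC's coords() can suggest the threshold that maximizes Youden's J. A hedged sketch (the 0.40 cutoff is illustrative, not prescribed by the assignment):

# Re-classify at a lower threshold, trading specificity for sensitivity
predicted.classes.40 <- ifelse(predicted.probs > 0.40, 1, 0)
table(Predicted = factor(predicted.classes.40, levels = c(0, 1)),
      Actual = factor(data_subset$Outcome_num, levels = c(0, 1)))

# Threshold that maximizes Youden's J (sensitivity + specificity - 1)
coords(roc_obj, x = "best", best.method = "youden",
       ret = c("threshold", "sensitivity", "specificity"))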