Introduction:

In this homework, you will apply logistic regression to a real-world dataset: the Pima Indians Diabetes Database. This dataset contains medical records from 768 women of Pima Indian heritage, aged 21 or older, and is used to predict the onset of diabetes (binary outcome: 0 = no diabetes, 1 = diabetes) based on physiological measurements.

The data is publicly available from the UCI Machine Learning Repository and can be imported directly.

Dataset URL: https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv

Columns (no header in the CSV, so we need to assign them manually):

  1. Pregnancies: Number of times pregnant
  2. Glucose: Plasma glucose concentration (2-hour test)
  3. BloodPressure: Diastolic blood pressure (mm Hg)
  4. SkinThickness: Triceps skin fold thickness (mm)
  5. Insulin: 2-hour serum insulin (mu U/ml)
  6. BMI: Body mass index (weight in kg/(height in m)^2)
  7. DiabetesPedigreeFunction: Diabetes pedigree function (a function scoring genetic risk)
  8. Age: Age in years
  9. Outcome: Class variable (0 = no diabetes, 1 = diabetes)

Task Overview: You will load the data, build a logistic regression model to predict diabetes onset using a subset of predictors (Glucose, BMI, Age), interpret the model, evaluate it with a confusion matrix and metrics, and analyze the ROC curve and AUC.

Cleaning the dataset

Don’t change the following code

library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr     1.1.4     ✔ readr     2.1.5
## ✔ forcats   1.0.1     ✔ stringr   1.5.2
## ✔ ggplot2   4.0.0     ✔ tibble    3.3.0
## ✔ lubridate 1.9.4     ✔ tidyr     1.3.1
## ✔ purrr     1.1.0     
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
url <- "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"

data <- read.csv(url, header = FALSE)

colnames(data) <- c("Pregnancies", "Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI", "DiabetesPedigreeFunction", "Age", "Outcome")

data$Outcome <- as.factor(data$Outcome)

# Handle missing values (replace 0s with NA because 0 makes no sense here)
data$Glucose[data$Glucose == 0] <- NA
data$BloodPressure[data$BloodPressure == 0] <- NA
data$BMI[data$BMI == 0] <- NA


colSums(is.na(data))
##              Pregnancies                  Glucose            BloodPressure 
##                        0                        5                       35 
##            SkinThickness                  Insulin                      BMI 
##                        0                        0                       11 
## DiabetesPedigreeFunction                      Age                  Outcome 
##                        0                        0                        0
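As an aside, the same zero-to-NA replacement can be written more compactly with dplyr’s across() and na_if(). This is just an equivalent sketch, not a substitute for the required cleaning code above:

# Equivalent, more concise cleaning step (sketch; same effect as the code above)
data <- data %>%
  mutate(across(c(Glucose, BloodPressure, BMI), ~ na_if(.x, 0)))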

Question 1: Create and Interpret a Logistic Regression Model

## Enter your code here

logistic_model <- glm(Outcome ~ Glucose + BMI + Age, data = data, family = "binomial")

summary(logistic_model)
## 
## Call:
## glm(formula = Outcome ~ Glucose + BMI + Age, family = "binomial", 
##     data = data)
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -9.032377   0.711037 -12.703  < 2e-16 ***
## Glucose      0.035548   0.003481  10.212  < 2e-16 ***
## BMI          0.089753   0.014377   6.243  4.3e-10 ***
## Age          0.028699   0.007809   3.675 0.000238 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 974.75  on 751  degrees of freedom
## Residual deviance: 724.96  on 748  degrees of freedom
##   (16 observations deleted due to missingness)
## AIC: 732.96
## 
## Number of Fisher Scoring iterations: 4
## Enter your code here


# McFadden's pseudo-R-squared: 1 - (residual deviance / null deviance)
r_square <- 1 - (logistic_model$deviance / logistic_model$null.deviance)

r_square
## [1] 0.25626

What does the intercept represent (log-odds of diabetes when predictors are zero)?

The intercept represents the log-odds of diabetes when Glucose, BMI, and Age are all zero. Since no individual can have a glucose level, BMI, or age of zero, the intercept has no meaningful real-world interpretation on its own; it simply anchors the model.
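To make the log-odds scale concrete, the sketch below predicts the probability of diabetes for a hypothetical patient; the values 120, 32, and 40 are made up for illustration, not taken from the dataset:

# Hypothetical patient (illustrative values only)
new_patient <- data.frame(Glucose = 120, BMI = 32, Age = 40)

# type = "response" converts the linear predictor from log-odds to a probability
predict(logistic_model, newdata = new_patient, type = "response")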

For each predictor (Glucose, BMI, Age), does a one-unit increase raise or lower the odds of diabetes? Are they significant (p-value < 0.05)?

For Glucose, a one-unit increase raises the odds of diabetes (positive coefficient, 0.036), and its p-value is well below 0.05, so it is significant. For BMI, a one-unit increase likewise raises the odds (coefficient 0.090) and is significant. For Age, a one-unit increase also raises the odds (coefficient 0.029) and is significant (p ≈ 0.0002). All three predictors are positively associated with diabetes and statistically significant.
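Because these coefficients are on the log-odds scale, exponentiating them converts them to odds ratios, which make the effect sizes concrete. A minimal sketch using the fitted model above:

# Odds ratios: multiplicative change in the odds per one-unit increase
exp(coef(logistic_model))

# For example, exp(0.0355) is roughly 1.036, so each additional unit of
# glucose multiplies the odds of diabetes by about 1.04.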

Question 2: Confusion Matrix and Key Metrics

Calculate and report the metrics:

  Accuracy: (TP + TN) / Total
  Sensitivity (Recall): TP / (TP + FN)
  Specificity: TN / (TN + FP)
  Precision: TP / (TP + FP)

Use the following starter code

# Keep only rows with no missing values in Glucose, BMI, or Age
data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]

# Create a numeric version of the outcome (0 = no diabetes, 1 = diabetes).
# This is required for calculating the confusion matrix.
data_subset$Outcome_num <- ifelse(data_subset$Outcome == "1", 1, 0)


# Predicted probabilities

predicted_prob <- logistic_model$fitted.values

# Predicted classes

predicted_class <- ifelse(predicted_prob > 0.5, 1, 0)


# Confusion matrix

confusion_mat <- table(Predicted = factor(predicted_class, levels = c(0, 1)),
                       Actual = factor(data_subset$Outcome_num, levels = c(0, 1)))

confusion_mat
##          Actual
## Predicted   0   1
##         0 429 114
##         1  59 150
# Extract values (from the printed matrix: rows = Predicted, columns = Actual)
TN <- 429  # predicted 0, actual 0
FP <- 59   # predicted 1, actual 0
FN <- 114  # predicted 0, actual 1
TP <- 150  # predicted 1, actual 1

# Metrics
accuracy <- (TP + TN) / (TP + TN + FP + FN)
sensitivity <- TP / (TP + FN)
specificity <- TN / (TN + FP)
precision <-  TP / (TP + FP) 

cat("Accuracy:", round(accuracy, 3), "\nSensitivity:", round(sensitivity, 3), "\nSpecificity:", round(specificity, 3), "\nPrecision:", round(precision, 3))
## Accuracy: 0.77 
## Sensitivity: 0.568 
## Specificity: 0.879 
## Precision: 0.718
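
Hardcoding the four counts from the printed table works, but it breaks silently if the data change. As a sketch, the same values can be pulled directly from the table object (rows are Predicted, columns are Actual in the layout above):

# Programmatic extraction (sketch; yields the same counts as above)
TN <- confusion_mat["0", "0"]  # predicted 0, actual 0
FP <- confusion_mat["1", "0"]  # predicted 1, actual 0
FN <- confusion_mat["0", "1"]  # predicted 0, actual 1
TP <- confusion_mat["1", "1"]  # predicted 1, actual 1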

Interpret: How well does the model perform? Is it better at detecting diabetes (sensitivity) or non-diabetes (specificity)? Why might this matter for medical diagnosis?

The model achieves 77% accuracy, indicating moderately good overall performance. It is better at detecting non-diabetes (specificity = 88%) than diabetes (sensitivity = 57%). High specificity limits false positives, sparing healthy patients unnecessary follow-up testing. The lower sensitivity, however, means that roughly four in ten actual diabetics are missed. In a medical setting these false negatives matter most, because a missed diagnosis delays treatment and endangers the patient’s health.

Question 3: ROC Curve, AUC, and Interpretation

#Enter your code here

library(pROC)
## Type 'citation("pROC")' for a citation.
## 
## Attaching package: 'pROC'
## The following objects are masked from 'package:stats':
## 
##     cov, smooth, var
roc_obj <- roc(response = data_subset$Outcome, 
               predictor = logistic_model$fitted.values, levels = c("0", "1"), direction = "<" )

auc_val <- auc(roc_obj); auc_val
## Area under the curve: 0.828
plot.roc(roc_obj, print.auc = TRUE, legacy.axes = TRUE, 
         xlab = "False Positive Rate (1 - Specificity)", ylab = "True Positive Rate (Sensitivity)")

What does AUC indicate (0.5 = random, 1.0 = perfect)?

The AUC measures the model’s overall ability to distinguish diabetics from non-diabetics across all classification thresholds. Here the AUC is 0.828, much closer to 1.0 (perfect discrimination) than to 0.5 (random guessing), indicating that the model separates the two classes well.

For diabetes diagnosis, prioritize sensitivity (catching cases) or specificity (avoiding false positives)? Suggest a threshold and explain.

For diabetes diagnosis, it is better to prioritize sensitivity (catching cases) over specificity (avoiding false positives): a missed diabetic goes untreated, while a false positive can be ruled out with follow-up exams. Instead of the default threshold of 0.5, a lower threshold such as 0.4 or even 0.3 would classify more patients as positive. This raises sensitivity and catches more true cases at the cost of additional false positives, a trade-off that is acceptable here.
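As a sketch, pROC’s coords() reports sensitivity and specificity at any candidate threshold, which helps justify the choice; 0.3, 0.4, and 0.5 below are the thresholds discussed above:

# Sensitivity/specificity trade-off at candidate thresholds
coords(roc_obj, x = c(0.3, 0.4, 0.5), input = "threshold",
       ret = c("threshold", "sensitivity", "specificity"))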