Introduction:

In this homework, you will apply logistic regression to a real-world dataset: the Pima Indians Diabetes Database. This dataset contains medical records from 768 women of Pima Indian heritage, aged 21 or older, and is used to predict the onset of diabetes (binary outcome: 0 = no diabetes, 1 = diabetes) based on physiological measurements.

The data is publicly available from the UCI Machine Learning Repository and can be imported directly.

Dataset URL: https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv

Columns (no header in the CSV, so we need to assign them manually):

  1. Pregnancies: Number of times pregnant
  2. Glucose: Plasma glucose concentration (2-hour test)
  3. BloodPressure: Diastolic blood pressure (mm Hg)
  4. SkinThickness: Triceps skin fold thickness (mm)
  5. Insulin: 2-hour serum insulin (mu U/ml)
  6. BMI: Body mass index (weight in kg/(height in m)^2)
  7. DiabetesPedigreeFunction: Diabetes pedigree function (a function scoring genetic risk)
  8. Age: Age in years
  9. Outcome: Class variable (0 = no diabetes, 1 = diabetes)

Task Overview: You will load the data, build a logistic regression model to predict diabetes onset using a subset of predictors (Glucose, BMI, Age), interpret the model, evaluate it with a confusion matrix and metrics, and analyze the ROC curve and AUC.

Cleaning the dataset: Don’t change the following code.

library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr     1.1.4     ✔ readr     2.1.5
## ✔ forcats   1.0.1     ✔ stringr   1.5.1
## ✔ ggplot2   4.0.0     ✔ tibble    3.3.0
## ✔ lubridate 1.9.4     ✔ tidyr     1.3.1
## ✔ purrr     1.1.0     
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
url <- "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"

data <- read.csv(url, header = FALSE)

colnames(data) <- c("Pregnancies", "Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI", "DiabetesPedigreeFunction", "Age", "Outcome")

data$Outcome <- as.factor(data$Outcome)

# Handle missing values: recode 0s as NA, since a value of 0 is physiologically impossible for these measurements
data$Glucose[data$Glucose == 0] <- NA
data$BloodPressure[data$BloodPressure == 0] <- NA
data$BMI[data$BMI == 0] <- NA


colSums(is.na(data))
##              Pregnancies                  Glucose            BloodPressure 
##                        0                        5                       35 
##            SkinThickness                  Insulin                      BMI 
##                        0                        0                       11 
## DiabetesPedigreeFunction                      Age                  Outcome 
##                        0                        0                        0
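As an optional sanity check (not part of the cleaning code above), you can count how many rows are incomplete in the three predictors the model will use. Glucose has 5 NAs and BMI has 11; if none of them overlap, 16 rows should be incomplete, matching the “(16 observations deleted due to missingness)” note in the glm() output below.

# Optional: rows with an NA in any of the three model predictors
sum(!complete.cases(data[, c("Glucose", "BMI", "Age")]))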

Question 1: Create and Interpret a Logistic Regression Model - Fit a logistic regression model to predict Outcome using Glucose, BMI, and Age.

logistic <- glm(Outcome ~ Glucose + BMI + Age, data = data, family = "binomial")

summary(logistic)
## 
## Call:
## glm(formula = Outcome ~ Glucose + BMI + Age, family = "binomial", 
##     data = data)
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -9.032377   0.711037 -12.703  < 2e-16 ***
## Glucose      0.035548   0.003481  10.212  < 2e-16 ***
## BMI          0.089753   0.014377   6.243  4.3e-10 ***
## Age          0.028699   0.007809   3.675 0.000238 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 974.75  on 751  degrees of freedom
## Residual deviance: 724.96  on 748  degrees of freedom
##   (16 observations deleted due to missingness)
## AIC: 732.96
## 
## Number of Fisher Scoring iterations: 4
# McFadden's pseudo-R-squared: proportional reduction in deviance
r_square <- 1 - (logistic$deviance / logistic$null.deviance)
r_square
## [1] 0.25626

What does the intercept represent (log-odds of diabetes when predictors are zero)? The intercept is -9.032377: the log-odds of having diabetes when Glucose, BMI, and Age are all equal to 0. On the probability scale, p = 1/(1 + e^9.032) ≈ 0.00012, i.e. roughly a 0.012% chance of diabetes when all predictors are 0. (This is an extrapolation, since values of 0 for these measurements lie far outside the observed data.)
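As a quick check, the same probability can be computed with base R’s plogis(), the inverse-logit function (an optional sketch, not required by the assignment):

# Inverse logit of the intercept: P(diabetes) when all predictors are 0
plogis(coef(logistic)[["(Intercept)"]])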

For each predictor (Glucose, BMI, Age), does a one-unit increase raise or lower the odds of diabetes? Are they significant (p-value < 0.05)?

All three predictors are highly significant: for Glucose, p < 2e-16; for BMI, p = 4.3e-10; and for Age, p = 0.000238. All three estimates are positive, so a one-unit increase in any of these predictors raises the odds of diabetes, holding the others constant (see the odds-ratio sketch below).
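A minimal sketch of this (optional, not part of the required output): exponentiating the coefficients converts them from log-odds to odds ratios.

# Odds ratios: multiplicative change in the odds of diabetes
# per one-unit increase in each predictor
exp(coef(logistic))
# e.g., exp(0.035548) is about 1.036, so each additional unit of
# glucose multiplies the odds of diabetes by roughly 1.036 (+3.6%)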

Question 2: Confusion Matrix and Important Metrics

Calculate and report the metrics:

  Accuracy: (TP + TN) / Total
  Sensitivity (Recall): TP / (TP + FN)
  Specificity: TN / (TN + FP)
  Precision: TP / (TP + FP)

Use the following starter code

# Keep only rows with no missing values in Glucose, BMI, or Age
data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]

# Create a numeric version of the outcome (0 = no diabetes, 1 = diabetes).
# This is required for calculating confusion matrices.
data_subset$Outcome_num <- ifelse(data_subset$Outcome == "1", 1, 0)



# Predicted probabilities

predicted.probs <- logistic$fitted.values

# Predicted classes

predicted.classes <- ifelse(predicted.probs > 0.5, 1, 0)

# Confusion matrix

confusion <- table(
  Predicted = factor(predicted.classes, levels = c(0, 1)),
  Actual = factor(data_subset$Outcome_num, levels = c(0, 1))
)
confusion
##          Actual
## Predicted   0   1
##         0 429 114
##         1  59 150
# Extract values from the confusion matrix (rows = Predicted, columns = Actual)
TN <- 429
FP <- 59
FN <- 114
TP <- 150
Total <- TN + FN + FP + TP
Total
## [1] 752
# Metrics
accuracy <-  (TP + TN)/ Total
sensitivity <- TP/(TP+FN)
specificity <- TN/(TN+FP)
precision <- TP/(TP+FP)

cat("Accuracy:", round(accuracy, 3), "\nSensitivity:", round(sensitivity, 3), "\nSpecificity:", round(specificity, 3), "\nPrecision:", round(precision, 3))
## Accuracy: 0.77 
## Sensitivity: 0.568 
## Specificity: 0.879 
## Precision: 0.718
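Rather than hard-coding the counts, the same four values could be pulled from the table by name. A minimal sketch, assuming the Predicted-by-Actual orientation printed above:

# Rows are Predicted, columns are Actual
TN <- confusion["0", "0"]  # predicted no diabetes, actually none
FP <- confusion["1", "0"]  # predicted diabetes, actually none
FN <- confusion["0", "1"]  # predicted no diabetes, actually diabetic
TP <- confusion["1", "1"]  # predicted diabetes, actually diabetic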

Interpret: How well does the model perform? Is it better at detecting diabetes (sensitivity) or non-diabetes (specificity)? Why might this matter for medical diagnosis?

The model performs reasonably well, with an overall accuracy of 77%. It is clearly better at detecting non-diabetes (specificity = 87.9%) than diabetes (sensitivity = 56.8%). This matters for medical diagnosis: the high specificity means few false alarms, so a negative prediction can be trusted fairly well, but the low sensitivity means the model misses a substantial share of true diabetes cases. Since a missed diagnosis (false negative) is usually the costlier error in screening, the model’s predictions should be treated with caution.

Question 3: ROC Curve, AUC, and Interpretation

library(pROC)
## Warning: package 'pROC' was built under R version 4.5.2
## Type 'citation("pROC")' for a citation.
## 
## Attaching package: 'pROC'
## The following objects are masked from 'package:stats':
## 
##     cov, smooth, var
roc_obj <- roc(response = data_subset$Outcome,
               predictor = logistic$fitted.values,
               levels = c("0", "1"),
               direction = "<") 


auc_val <- auc(roc_obj); auc_val
## Area under the curve: 0.828
plot.roc(roc_obj, print.auc = TRUE, legacy.axes = TRUE,
         xlab = "False Positive Rate (1 - Specificity)",
         ylab = "True Positive Rate (Sensitivity)")

What does AUC indicate (0.5 = random, 1.0 = perfect)?

The AUC is 0.828, well above the 0.5 expected from random guessing, though short of a perfect 1.0. This indicates good (but not perfect) discrimination: a randomly chosen diabetic patient receives a higher predicted probability than a randomly chosen non-diabetic patient about 83% of the time.
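If you want a data-driven cutoff rather than eyeballing the curve, pROC’s coords() can report the threshold that maximizes Youden’s J (sensitivity + specificity − 1). An optional sketch:

# Threshold maximizing sensitivity + specificity - 1 (Youden's J)
coords(roc_obj, x = "best", best.method = "youden",
       ret = c("threshold", "sensitivity", "specificity"))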

For diabetes diagnosis, prioritize sensitivity (catching cases) or specificity (avoiding false positives)? Suggest a threshold and explain.

Prioritize sensitivity, because it is better to be safe than sorry. If the model mistakenly flags someone who does not have diabetes, follow-up testing will eventually rule it out; but if someone does have diabetes and is told they don’t, they miss treatment and the consequences are far more harmful. Lowering the threshold to around 0.3 would catch more true diabetes cases at the cost of more false positives, which is an acceptable trade-off for a screening test (see the sketch below).
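A minimal sketch of re-classifying at the proposed 0.3 threshold, reusing the fitted probabilities from Question 2 (sensitivity should rise and specificity fall relative to the 0.5 cutoff):

# Re-classify at a lower threshold to favor sensitivity
predicted.classes.03 <- ifelse(predicted.probs > 0.3, 1, 0)
confusion.03 <- table(
  Predicted = factor(predicted.classes.03, levels = c(0, 1)),
  Actual = factor(data_subset$Outcome_num, levels = c(0, 1))
)
confusion.03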