Introduction:

In this homework, you will apply logistic regression to a real-world dataset: the Pima Indians Diabetes Database. This dataset contains medical records from 768 women of Pima Indian heritage, aged 21 or older, and is used to predict the onset of diabetes (binary outcome: 0 = no diabetes, 1 = diabetes) based on physiological measurements.

The data is publicly available from the UCI Machine Learning Repository and can be imported directly.

Dataset URL: https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv

Columns (no header in the CSV, so we need to assign them manually):

  1. Pregnancies: Number of times pregnant
  2. Glucose: Plasma glucose concentration (2-hour test)
  3. BloodPressure: Diastolic blood pressure (mm Hg)
  4. SkinThickness: Triceps skin fold thickness (mm)
  5. Insulin: 2-hour serum insulin (mu U/ml)
  6. BMI: Body mass index (weight in kg/(height in m)^2)
  7. DiabetesPedigreeFunction: Diabetes pedigree function (a function scoring genetic risk)
  8. Age: Age in years
  9. Outcome: Class variable (0 = no diabetes, 1 = diabetes)

Task Overview: You will load the data, build a logistic regression model to predict diabetes onset using a subset of predictors (Glucose, BMI, Age), interpret the model, evaluate it with a confusion matrix and metrics, and analyze the ROC curve and AUC.

Cleaning the dataset

Don’t change the following code.

library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr     1.1.4     ✔ readr     2.1.5
## ✔ forcats   1.0.0     ✔ stringr   1.5.1
## ✔ ggplot2   3.5.2     ✔ tibble    3.2.1
## ✔ lubridate 1.9.4     ✔ tidyr     1.3.1
## ✔ purrr     1.0.4     
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
url <- "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"

data <- read.csv(url, header = FALSE)

colnames(data) <- c("Pregnancies", "Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI", "DiabetesPedigreeFunction", "Age", "Outcome")

data$Outcome <- as.factor(data$Outcome)

# Handle missing values (replace 0s with NA because 0 makes no sense here)
data$Glucose[data$Glucose == 0] <- NA
data$BloodPressure[data$BloodPressure == 0] <- NA
data$BMI[data$BMI == 0] <- NA


colSums(is.na(data))
##              Pregnancies                  Glucose            BloodPressure 
##                        0                        5                       35 
##            SkinThickness                  Insulin                      BMI 
##                        0                        0                       11 
## DiabetesPedigreeFunction                      Age                  Outcome 
##                        0                        0                        0

Question 1: Create and Interpret a Logistic Regression Model - Fit a logistic regression model to predict Outcome using Glucose, BMI, and Age.

# Enter your code here
# Logistic regression model
fit <- glm(Outcome ~ Glucose + BMI + Age, data = data, family = binomial)
summary(fit)
## 
## Call:
## glm(formula = Outcome ~ Glucose + BMI + Age, family = binomial, 
##     data = data)
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -9.032377   0.711037 -12.703  < 2e-16 ***
## Glucose      0.035548   0.003481  10.212  < 2e-16 ***
## BMI          0.089753   0.014377   6.243  4.3e-10 ***
## Age          0.028699   0.007809   3.675 0.000238 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 974.75  on 751  degrees of freedom
## Residual deviance: 724.96  on 748  degrees of freedom
##   (16 observations deleted due to missingness)
## AIC: 732.96
## 
## Number of Fisher Scoring iterations: 4
# Calculate McFadden's pseudo R-squared
pseudo_R2 <- 1 - (fit$deviance / fit$null.deviance)
pseudo_R2
## [1] 0.25626

What does the intercept represent (log-odds of diabetes when predictors are zero)? The intercept, -9.032377, is the log-odds of diabetes for a patient with Glucose = 0, BMI = 0, and Age = 0. Since those values are not physiologically possible, the intercept acts as a mathematical baseline rather than a directly interpretable quantity.
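A more informative reference point than the all-zero baseline is the predicted probability for a patient at the sample averages. A minimal sketch, reusing the fit object and data from above (the avg_patient data frame is introduced here for illustration):

# Predicted probability of diabetes for a hypothetical patient at the
# average Glucose, BMI, and Age (NAs dropped when averaging)
avg_patient <- data.frame(Glucose = mean(data$Glucose, na.rm = TRUE),
                          BMI = mean(data$BMI, na.rm = TRUE),
                          Age = mean(data$Age))
predict(fit, newdata = avg_patient, type = "response")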

For each predictor (Glucose, BMI, Age), does a one-unit increase raise or lower the odds of diabetes? Are they significant (p-value < 0.05)?

Glucose: The coefficient estimate is 0.035548. Because the coefficient is positive, a one-unit increase in glucose raises the odds of diabetes, multiplying them by exp(0.0355) ≈ 1.036 (about a 3.6% increase). Glucose is significant, since its p-value is less than 0.05.

BMI: The coefficient estimate is 0.089753, so a one-unit increase in BMI raises the odds of diabetes, multiplying them by exp(0.0898) ≈ 1.094 (about a 9.4% increase). BMI is significant, since its p-value is less than 0.05.

Age: The coefficient estimate is 0.028699, so each additional year of age raises the odds of diabetes, multiplying them by exp(0.0287) ≈ 1.029 (about a 2.9% increase). Age is significant, since its p-value is less than 0.05.
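These multiplicative effects can be computed directly by exponentiating the coefficients. A minimal sketch, using the fit object from above (confint() profiles the likelihood, so it may take a moment and print a progress message):

# Odds ratios: exponentiate the log-odds coefficients
exp(coef(fit))

# 95% profile-likelihood confidence intervals, on the odds-ratio scale
exp(confint(fit))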

Pseudo-R²: The McFadden pseudo-R² is 0.256, meaning the model reduces the null deviance by about 25.6%. This is not the share of variance explained in the OLS sense, but a value in this range generally indicates a reasonably good fit for a logistic regression model.

Question 2: Confusion Matrix and Important Metrics

Calculate and report the metrics:

  Accuracy: (TP + TN) / Total
  Sensitivity (Recall): TP / (TP + FN)
  Specificity: TN / (TN + FP)
  Precision: TP / (TP + FP)

Use the following starter code

# Keep only rows with no missing values in Glucose, BMI, or Age
data_subset <- data[complete.cases(data[, c("Glucose", "BMI", "Age")]), ]

# Create a numeric version of the outcome (0 = no diabetes, 1 = diabetes).
# This is required for calculating the confusion matrix.
data_subset$Outcome_num <- ifelse(data_subset$Outcome == "1", 1, 0)


# Predicted probabilities
data_subset$pred_prob <- predict(fit, newdata = data_subset, type = "response")



# Predicted classes
data_subset$pred_class <- ifelse(data_subset$pred_prob > 0.5, 1, 0)




# Confusion matrix
conf_matrix <- table(Predicted = data_subset$pred_class, Actual = data_subset$Outcome_num)
conf_matrix
##          Actual
## Predicted   0   1
##         0 429 114
##         1  59 150
# Extract values from the confusion matrix
TN <- conf_matrix["0", "0"]
FP <- conf_matrix["1", "0"]
FN <- conf_matrix["0", "1"]
TP <- conf_matrix["1", "1"]


# Metrics
accuracy <- (TP + TN) / sum(conf_matrix)
sensitivity <- TP / (TP + FN)
specificity <- TN / (TN + FP)
precision <- TP / (TP + FP)

cat("Accuracy:", round(accuracy, 3), "\nSensitivity:", round(sensitivity, 3), "\nSpecificity:", round(specificity, 3), "\nPrecision:", round(precision, 3))
## Accuracy: 0.77 
## Sensitivity: 0.568 
## Specificity: 0.879 
## Precision: 0.718

Interpret: How well does the model perform? Is it better at detecting diabetes (sensitivity) or non-diabetes (specificity)? Why might this matter for medical diagnosis?

The model’s accuracy is 0.77, meaning it correctly classifies about 77% of all cases. Specificity is high (0.879): the model correctly identifies 87.9% of non-diabetic cases. Sensitivity is much lower (0.568): only 56.8% of actual diabetes cases are caught. The model is therefore better at detecting non-diabetes than diabetes. This matters for medical diagnosis because the two errors are not equally costly: a false negative means a diabetic patient goes untreated, while a false positive only triggers further testing. For a screening application, higher sensitivity is usually the priority, which suggests the default 0.5 cutoff is too conservative here.
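As an optional cross-check on the hand-computed metrics, the caret package (not loaded above, so this assumes it is installed) reports the same quantities in a single call, with precision appearing as "Pos Pred Value":

# Cross-check accuracy, sensitivity, specificity, and precision with caret
library(caret)
confusionMatrix(factor(data_subset$pred_class, levels = c(0, 1)),
                factor(data_subset$Outcome_num, levels = c(0, 1)),
                positive = "1")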

Question 3: ROC Curve, AUC, and Interpretation

# Enter your code here
library(pROC)
## Warning: package 'pROC' was built under R version 4.5.2
## Type 'citation("pROC")' for a citation.
## 
## Attaching package: 'pROC'
## The following objects are masked from 'package:stats':
## 
##     cov, smooth, var
roc_obj <- roc(data_subset$Outcome_num, data_subset$pred_prob)
## Setting levels: control = 0, case = 1
## Setting direction: controls < cases
plot(roc_obj, main = "ROC Curve for Diabetes Prediction")

auc_value <- auc(roc_obj)
auc_value
## Area under the curve: 0.828

What does AUC indicate (0.5 = random, 1.0 = perfect)? The AUC is 0.828: if we pick one diabetic and one non-diabetic patient at random, the model assigns the diabetic patient a higher predicted probability about 82.8% of the time. The model therefore has good discriminating ability.
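Because the AUC is an estimate from a single sample, it can be worth attaching a confidence interval. A short sketch using the roc_obj created above; pROC's ci.auc() uses the DeLong method by default:

# 95% confidence interval for the AUC
ci.auc(roc_obj)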

For diabetes diagnosis, prioritize sensitivity (catching cases) or specificity (avoiding false positives)? Suggest a threshold and explain. For diabetes diagnosis we prioritize sensitivity over specificity, because the aim is to catch as many true diabetes cases as possible: a missed case delays treatment, while a false positive only leads to confirmatory testing. A threshold of roughly 0.3 to 0.4, rather than the default 0.5, is a reasonable choice, since lowering the threshold increases sensitivity at the cost of some specificity.
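A sketch of how this trade-off could be checked empirically; the 0.35 cutoff is an illustrative value within the suggested range, not a validated clinical threshold:

# Recompute the confusion matrix at a lower, illustrative cutoff
pred_class_low <- ifelse(data_subset$pred_prob > 0.35, 1, 0)
table(Predicted = pred_class_low, Actual = data_subset$Outcome_num)

# pROC can also report the threshold that maximizes Youden's J
# (sensitivity + specificity - 1)
coords(roc_obj, "best", ret = c("threshold", "sensitivity", "specificity"))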