Introduction

Statistical modeling is a fundamental tool in epidemiology that allows us to:

  • Describe relationships between variables
  • Predict outcomes based on risk factors
  • Estimate associations while controlling for confounding

This lecture introduces key concepts in regression modeling using real-world data from the Behavioral Risk Factor Surveillance System (BRFSS) 2023.


Setup and Data Preparation

# Load required packages
library(tidyverse)
library(haven)
library(knitr)
library(kableExtra)
library(plotly)
library(broom)
library(car)
library(ggeffects)
library(gtsummary)
library(ggstats)
# Load the analysis dataset (adjust the path to your local copy of the file)
brfss_clean <- read_rds("C:/Users/MY789914/OneDrive - University at Albany - SUNY/Desktop/Stat 553 (R)/brfss_subset_2023.rds")

Descriptive Statistics

# Summary table by diabetes status
desc_table <- brfss_clean %>%
  group_by(diabetes) %>%
  summarise(
    N = n(),
    `Mean Age` = round(mean(age_cont), 1),
    `% Male` = round(100 * mean(sex == "Male"), 1),
    `% Obese` = round(100 * mean(bmi_cat == "Obese", na.rm = TRUE), 1),
    `% Physically Active` = round(100 * mean(phys_active), 1),
    `% Current Smoker` = round(100 * mean(current_smoker), 1),
    `% Hypertension` = round(100 * mean(hypertension), 1),
    `% High Cholesterol` = round(100 * mean(high_chol), 1)
  ) %>%
  mutate(diabetes = ifelse(diabetes == 1, "Diabetes", "No Diabetes"))

desc_table %>%
  kable(caption = "Descriptive Statistics by Diabetes Status",
        align = "lrrrrrrrr") %>%
  kable_styling(bootstrap_options = c("striped", "hover", "condensed"),
                full_width = FALSE)
Descriptive Statistics by Diabetes Status

| diabetes    |    N | Mean Age | % Male | % Obese | % Physically Active | % Current Smoker | % Hypertension | % High Cholesterol |
|-------------|-----:|---------:|-------:|--------:|--------------------:|-----------------:|---------------:|-------------------:|
| No Diabetes | 1053 |     58.2 |   49.0 |    34.8 |                69.4 |             29.3 |           47.5 |               42.5 |
| Diabetes    |  228 |     63.1 |   53.9 |    56.1 |                53.5 |             27.6 |           76.8 |               67.1 |
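The gtsummary package loaded above can produce a similar table with much less manual work. A minimal sketch, assuming the same variable names in brfss_clean:

# Alternative: automated descriptive table with gtsummary
brfss_clean %>%
  select(diabetes, age_cont, sex, bmi_cat, phys_active,
         current_smoker, hypertension, high_chol) %>%
  tbl_summary(by = diabetes, missing = "no")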

Part 1: Statistical Modeling Concepts

1. What is Statistical Modeling?

A statistical model is a mathematical representation of the relationship between:

  • An outcome variable (dependent variable, response)
  • One or more predictor variables (independent variables, exposures, covariates)

General Form of a Statistical Model

\[f(Y) = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_p X_p + \epsilon\]

Where:

  • \(f(Y)\) is a function of the outcome (identity, log, logit, etc.)
  • \(\beta_0\) is the intercept (baseline value)
  • \(\beta_1, \beta_2, \ldots, \beta_p\) are coefficients (effect sizes)
  • \(X_1, X_2, \ldots, X_p\) are predictor variables
  • \(\epsilon\) is the error term (random variation)

2. Types of Regression Models

The choice of regression model depends on the type of outcome variable:

Common Regression Models in Epidemiology

| Outcome Type  | Regression Type           | Link Function       | Example                   |
|---------------|---------------------------|---------------------|---------------------------|
| Continuous    | Linear                    | Identity: Y         | Blood pressure, BMI       |
| Binary        | Logistic                  | Logit: log(p/(1-p)) | Disease status, mortality |
| Count         | Poisson/Negative Binomial | Log: log(Y)         | Number of infections      |
| Time-to-event | Cox Proportional Hazards  | Log: log(h(t))      | Survival time             |
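In R, each outcome type maps onto a different fitting function. A minimal sketch (the outcome and predictor names here are hypothetical, and coxph() comes from the survival package):

# Illustrative model calls for each outcome type (hypothetical variables)
m_linear   <- lm(sbp ~ age, data = dat)                          # continuous: linear regression
m_logistic <- glm(disease ~ age, family = binomial, data = dat)  # binary: logistic regression
m_poisson  <- glm(n_infections ~ age, family = poisson, data = dat)  # count: Poisson regression
m_cox      <- survival::coxph(survival::Surv(time, event) ~ age,
                              data = dat)                        # time-to-event: Cox model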

Simple vs. Multiple Regression

  • Simple regression: One predictor variable
  • Multiple regression: Two or more predictor variables (controls for confounding)

3. Linear Regression Example

Let’s begin by modeling diabetes status (coded 0/1) as a function of age using ordinary linear regression (a linear probability model); Section 4 explains why logistic regression is usually preferred for binary outcomes.

Simple Linear Regression

# Simple linear regression: diabetes ~ age
model_linear_simple <- lm(diabetes ~ age_cont, data = brfss_clean)

# Display results
tidy(model_linear_simple, conf.int = TRUE) %>%
  kable(caption = "Simple Linear Regression: Diabetes ~ Age",
        digits = 4,
        col.names = c("Term", "Estimate", "Std. Error", "t-statistic", "p-value", "95% CI Lower", "95% CI Upper")) %>%
  kable_styling(bootstrap_options = c("striped", "hover"),
                full_width = FALSE)
Simple Linear Regression: Diabetes ~ Age

| Term        | Estimate | Std. Error | t-statistic | p-value | 95% CI Lower | 95% CI Upper |
|-------------|---------:|-----------:|------------:|--------:|-------------:|-------------:|
| (Intercept) |  -0.0632 |     0.0481 |     -1.3125 |  0.1896 |      -0.1576 |       0.0312 |
| age_cont    |   0.0041 |     0.0008 |      5.1368 |  0.0000 |       0.0025 |       0.0056 |

Interpretation:

  • Intercept (\(\beta_0\)): -0.0632, the expected probability of diabetes at age 0 (not meaningful here, since age 0 lies far outside the data)
  • Slope (\(\beta_1\)): 0.0041, meaning each 1-year increase in age raises the predicted probability of diabetes by 0.0041 (0.41 percentage points)

Visualization

With continuous age

# Create scatter plot with regression line
p1 <- ggplot(brfss_clean, aes(x = age_cont, y = diabetes)) +
  geom_jitter(alpha = 0.2, width = 0.5, height = 0.02, color = "steelblue") +
  geom_smooth(method = "lm", se = TRUE, color = "red", linewidth = 1.2) +
  labs(
    title = "Relationship Between Age and Diabetes",
    subtitle = "Simple Linear Regression",
    x = "Age (years)",
    y = "Probability of Diabetes"
  ) +
  theme_minimal(base_size = 12)

ggplotly(p1) %>%
  layout(hovermode = "closest")

Diabetes Prevalence by Age


4. Logistic Regression: The Preferred Model for Binary Outcomes

Problem with linear regression for binary outcomes:

  • Predicted probabilities can fall outside [0, 1]
  • Assumes constant variance (violated for binary data)

Solution: Logistic Regression

Uses the logit link function to ensure predicted probabilities stay between 0 and 1:

\[\text{logit}(p) = \log\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1 X_1 + \cdots + \beta_p X_p\]
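Base R implements the logit and its inverse as qlogis() and plogis(), which makes it easy to move between probabilities and log odds:

# logit and inverse logit in base R
qlogis(0.2)    # log(0.2 / 0.8) ≈ -1.39 (probability -> log odds)
plogis(-1.39)  # ≈ 0.20 (log odds -> probability)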

Simple Logistic Regression

# Simple logistic regression: diabetes ~ age
model_logistic_simple <- glm(diabetes ~ age_cont,
                              data = brfss_clean,
                              family = binomial(link = "logit"))

# Display results with odds ratios
# (exponentiate = TRUE converts estimates and CIs to odds ratios,
#  but the standard errors remain on the log-odds scale)
tidy(model_logistic_simple, exponentiate = TRUE, conf.int = TRUE) %>%
  kable(caption = "Simple Logistic Regression: Diabetes ~ Age (Odds Ratios)",
        digits = 3,
        col.names = c("Term", "Odds Ratio", "Std. Error", "z-statistic", "p-value", "95% CI Lower", "95% CI Upper")) %>%
  kable_styling(bootstrap_options = c("striped", "hover"),
                full_width = FALSE)
Simple Logistic Regression: Diabetes ~ Age (Odds Ratios)

| Term        | Odds Ratio | Std. Error | z-statistic | p-value | 95% CI Lower | 95% CI Upper |
|-------------|-----------:|-----------:|------------:|--------:|-------------:|-------------:|
| (Intercept) |      0.029 |      0.423 |      -8.390 |       0 |        0.012 |        0.064 |
| age_cont    |      1.034 |      0.007 |       4.978 |       0 |        1.021 |        1.048 |

Interpretation:

  • Odds Ratio (OR): 1.034
  • For each 1-year increase in age, the odds of diabetes increase by 3.4%
  • The relationship is highly statistically significant (p < 0.001)
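Because coefficients add on the log-odds scale, the OR for a 10-year age difference is exp(10 × β), not 10 × 3.4%. A quick computation:

# OR for a 10-year increase in age (coefficients add on the log-odds scale)
beta_age <- coef(model_logistic_simple)["age_cont"]
exp(10 * beta_age)   # ≈ 1.034^10 ≈ 1.40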

Predicted Probabilities

# From ggeffects package
pp <- predict_response(model_logistic_simple, terms = "age_cont")
plot(pp)
Predicted Diabetes Probability by Age

# Generate predicted probabilities with 95% confidence intervals
# (predict on the link scale, then back-transform with the inverse logit)
pred_data <- data.frame(age_cont = seq(18, 80, by = 1))
link_pred <- predict(model_logistic_simple, newdata = pred_data,
                     type = "link", se.fit = TRUE)
pred_data$predicted_prob <- plogis(link_pred$fit)
pred_data$conf_low  <- plogis(link_pred$fit - 1.96 * link_pred$se.fit)
pred_data$conf_high <- plogis(link_pred$fit + 1.96 * link_pred$se.fit)

# Plot
p2 <- ggplot(pred_data, aes(x = age_cont, y = predicted_prob)) +
  geom_line(color = "darkred", linewidth = 1.5) +
  geom_ribbon(aes(ymin = conf_low, ymax = conf_high),
              alpha = 0.2, fill = "darkred") +
  labs(
    title = "Predicted Probability of Diabetes by Age",
    subtitle = "Simple Logistic Regression",
    x = "Age (years)",
    y = "Predicted Probability of Diabetes"
  ) +
  scale_y_continuous(labels = scales::percent_format(), limits = c(0, 0.6)) +
  theme_minimal(base_size = 12)

ggplotly(p2)

Predicted Diabetes Probability by Age


5. Multiple Regression: Controlling for Confounding

What is Confounding?

A confounder is a variable that:

  1. Is associated with both the exposure and the outcome
  2. Is not on the causal pathway between exposure and outcome
  3. Distorts the true relationship between exposure and outcome

Example: The relationship between age and diabetes may be confounded by BMI, physical activity, and other factors.

Multiple Logistic Regression

# Multiple logistic regression with potential confounders
model_logistic_multiple <- glm(diabetes ~ age_cont + sex + bmi_cat +
                                phys_active + current_smoker + education,
                               data = brfss_clean,
                               family = binomial(link = "logit"))

# Display results
tidy(model_logistic_multiple, exponentiate = TRUE, conf.int = TRUE) %>%
  kable(caption = "Multiple Logistic Regression: Diabetes ~ Age + Covariates (Odds Ratios)",
        digits = 3,
        col.names = c("Term", "Odds Ratio", "Std. Error", "z-statistic", "p-value", "95% CI Lower", "95% CI Upper")) %>%
  kable_styling(bootstrap_options = c("striped", "hover"),
                full_width = FALSE) %>%
  scroll_box(height = "400px")
Multiple Logistic Regression: Diabetes ~ Age + Covariates (Odds Ratios)

| Term                          | Odds Ratio | Std. Error | z-statistic | p-value | 95% CI Lower | 95% CI Upper |
|-------------------------------|-----------:|-----------:|------------:|--------:|-------------:|-------------:|
| (Intercept)                   |      0.009 |      1.177 |      -4.001 |   0.000 |        0.000 |        0.065 |
| age_cont                      |      1.041 |      0.007 |       5.515 |   0.000 |        1.027 |        1.057 |
| sexMale                       |      1.191 |      0.154 |       1.133 |   0.257 |        0.880 |        1.613 |
| bmi_catNormal                 |      1.971 |      1.052 |       0.645 |   0.519 |        0.378 |       36.309 |
| bmi_catOverweight             |      3.155 |      1.044 |       1.101 |   0.271 |        0.621 |       57.679 |
| bmi_catObese                  |      6.834 |      1.041 |       1.845 |   0.065 |        1.354 |      124.675 |
| phys_active                   |      0.589 |      0.157 |      -3.373 |   0.001 |        0.433 |        0.802 |
| current_smoker                |      1.213 |      0.178 |       1.085 |   0.278 |        0.852 |        1.716 |
| educationHigh school graduate |      0.634 |      0.288 |      -1.579 |   0.114 |        0.364 |        1.131 |
| educationSome college         |      0.542 |      0.294 |      -2.081 |   0.037 |        0.307 |        0.977 |
| educationCollege graduate     |      0.584 |      0.305 |      -1.763 |   0.078 |        0.324 |        1.074 |

Interpretation:

  • Age (adjusted OR): 1.041
    • After adjusting for sex, BMI, physical activity, smoking, and education, each 1-year increase in age is associated with a 4.1% increase in the odds of diabetes
  • Sex (Male vs Female): OR = 1.191
    • Males have 19.1% higher odds of diabetes than females after adjustment, although this difference is not statistically significant (p = 0.257)
  • BMI (Obese vs reference): OR = 6.834
    • Because bmi_catNormal appears as a dummy in the output, the omitted reference level is the lowest BMI category (underweight); obese individuals have 6.83 times the odds of diabetes relative to that reference group (the very wide CI reflects the small reference group)
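One informal way to quantify confounding is the change-in-estimate approach: compare the crude and adjusted ORs for age (here, 1.034 vs 1.041). A minimal sketch using the two models fit above:

# Informal change-in-estimate check for confounding of the age-diabetes association
crude_or    <- exp(coef(model_logistic_simple)["age_cont"])    # 1.034
adjusted_or <- exp(coef(model_logistic_multiple)["age_cont"])  # 1.041
round(100 * (adjusted_or - crude_or) / crude_or, 1)            # % change in the OR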

6. Dummy Variables: Coding Categorical Predictors

Categorical variables with \(k\) levels are represented using \(k-1\) dummy variables (indicator variables).

Example: Education Level

Education has 4 levels:

  1. < High school (reference category)
  2. High school graduate
  3. Some college
  4. College graduate

R automatically creates 3 dummy variables:

# Extract dummy variable coding
dummy_table <- data.frame(
  Education = c("< High school", "High school graduate", "Some college", "College graduate"),
  `Dummy 1 (HS grad)` = c(0, 1, 0, 0),
  `Dummy 2 (Some college)` = c(0, 0, 1, 0),
  `Dummy 3 (College grad)` = c(0, 0, 0, 1),
  check.names = FALSE
)

dummy_table %>%
  kable(caption = "Dummy Variable Coding for Education",
        align = "lccc") %>%
  kable_styling(bootstrap_options = c("striped", "hover"),
                full_width = FALSE) %>%
  row_spec(1, bold = TRUE, background = "#ffe6e6")  # Highlight reference category
Dummy Variable Coding for Education

| Education            | Dummy 1 (HS grad) | Dummy 2 (Some college) | Dummy 3 (College grad) |
|----------------------|:-----------------:|:----------------------:|:----------------------:|
| < High school        |         0         |           0            |           0            |
| High school graduate |         1         |           0            |           0            |
| Some college         |         0         |           1            |           0            |
| College graduate     |         0         |           0            |           1            |

Reference Category: The category with all zeros (< High school) is the reference group. All other categories are compared to this reference.
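Rather than building the coding table by hand, you can inspect the design matrix R actually generates. A quick check, assuming education is a factor in brfss_clean:

# Inspect the dummy coding R creates for education
head(model.matrix(~ education, data = brfss_clean))
# The (Intercept) column absorbs the reference level (< High school);
# the remaining three columns are the dummy variables shown above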

Visualizing Education Effects

# Extract education coefficients
educ_coefs <- tidy(model_logistic_multiple, exponentiate = TRUE, conf.int = TRUE) %>%
  filter(str_detect(term, "education")) %>%
  mutate(
    education_level = str_remove(term, "education"),
    education_level = factor(education_level,
                             levels = c("High school graduate",
                                       "Some college",
                                       "College graduate"))
  )

# Add reference category
ref_row <- data.frame(
  term = "education< High school",
  estimate = 1.0,
  std.error = 0,
  statistic = NA,
  p.value = NA,
  conf.low = 1.0,
  conf.high = 1.0,
  education_level = factor("< High school (Ref)",
                          levels = c("< High school (Ref)",
                                    "High school graduate",
                                    "Some college",
                                    "College graduate"))
)

educ_coefs_full <- bind_rows(ref_row, educ_coefs) %>%
  mutate(education_level = factor(education_level,
                                 levels = c("< High school (Ref)",
                                           "High school graduate",
                                           "Some college",
                                           "College graduate")))

# Plot
p3 <- ggplot(educ_coefs_full, aes(x = education_level, y = estimate)) +
  geom_hline(yintercept = 1, linetype = "dashed", color = "gray50") +
  geom_pointrange(aes(ymin = conf.low, ymax = conf.high),
                  size = 0.8, color = "darkblue") +
  coord_flip() +
  labs(
    title = "Association Between Education and Diabetes",
    subtitle = "Adjusted Odds Ratios (reference: < High school)",
    x = "Education Level",
    y = "Odds Ratio (95% CI)"
  ) +
  theme_minimal(base_size = 12)

ggplotly(p3)

Odds Ratios for Education Levels

# Plot model coefficients with `ggcoef_model()`
ggcoef_model(model_logistic_multiple, exponentiate = TRUE,
  include = c("education"),
  variable_labels = c(
    education = "Education"),
  facet_labeller = ggplot2::label_wrap_gen(10)
)


7. Interactions (Effect Modification)

An interaction exists when the effect of one variable on the outcome differs across levels of another variable.

Epidemiologic term: Effect modification

Example: Age × Sex Interaction

Does the effect of age on diabetes differ between males and females?

# Model with interaction term
model_interaction <- glm(diabetes ~ age_cont * sex + bmi_cat + phys_active,
                         data = brfss_clean,
                         family = binomial(link = "logit"))

# Display interaction results
tidy(model_interaction, exponentiate = TRUE, conf.int = TRUE) %>%
  filter(str_detect(term, "age_cont")) %>%
  kable(caption = "Age × Sex Interaction Model (Odds Ratios)",
        digits = 3,
        col.names = c("Term", "Odds Ratio", "Std. Error", "z-statistic", "p-value", "95% CI Lower", "95% CI Upper")) %>%
  kable_styling(bootstrap_options = c("striped", "hover"),
                full_width = FALSE)
Age × Sex Interaction Model (Odds Ratios)

| Term             | Odds Ratio | Std. Error | z-statistic | p-value | 95% CI Lower | 95% CI Upper |
|------------------|-----------:|-----------:|------------:|--------:|-------------:|-------------:|
| age_cont         |      1.031 |      0.009 |       3.178 |   0.001 |        1.012 |        1.051 |
| age_cont:sexMale |      1.015 |      0.014 |       1.084 |   0.278 |        0.988 |        1.044 |

Interpretation:

  • Main effect of age: OR among females (the reference group)
  • Interaction term (age_cont:sexMale): multiplicative change in the age OR among males
  • If the interaction term is significant, the age-diabetes relationship differs by sex; here it is not (p = 0.278), so there is little evidence of effect modification by sex
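Beyond the Wald p-value on the product term, a likelihood ratio test against the model without the interaction gives an overall test. A sketch using the same covariates:

# LRT for the age × sex interaction
model_no_int <- glm(diabetes ~ age_cont + sex + bmi_cat + phys_active,
                    data = brfss_clean,
                    family = binomial(link = "logit"))
anova(model_no_int, model_interaction, test = "LRT")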

Visualizing Interaction

# Generate predicted probabilities by sex
pred_interact <- ggpredict(model_interaction, terms = c("age_cont [18:80]", "sex"))

# Plot
p4 <- ggplot(pred_interact, aes(x = x, y = predicted, color = group, fill = group)) +
  geom_line(linewidth = 1.2) +
  geom_ribbon(aes(ymin = conf.low, ymax = conf.high), alpha = 0.2, color = NA) +
  labs(
    title = "Predicted Probability of Diabetes by Age and Sex",
    subtitle = "Testing for Age × Sex Interaction",
    x = "Age (years)",
    y = "Predicted Probability of Diabetes",
    color = "Sex",
    fill = "Sex"
  ) +
  scale_y_continuous(labels = scales::percent_format()) +
  scale_color_manual(values = c("Female" = "#E64B35", "Male" = "#4DBBD5")) +
  scale_fill_manual(values = c("Female" = "#E64B35", "Male" = "#4DBBD5")) +
  theme_minimal(base_size = 12) +
  theme(legend.position = "bottom")

ggplotly(p4)

Age-Diabetes Relationship by Sex


8. Model Diagnostics

Every regression model makes assumptions about the data. If assumptions are violated, results may be invalid.

Key Assumptions for Logistic Regression

  1. Linearity of log odds: Continuous predictors have a linear relationship with the log odds of the outcome (a quick check is sketched after this list)
  2. Independence of observations: Each observation is independent
  3. No perfect multicollinearity: Predictors are not perfectly correlated
  4. No influential outliers: Individual observations don’t overly influence the model
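Assumption 1 can be probed informally by adding a curvature term for a continuous predictor and testing whether it improves the fit; more formal approaches (e.g., Box-Tidwell) also exist. A minimal sketch for age:

# Informal check of linearity in the log odds: does a quadratic age term help?
model_age_sq <- glm(diabetes ~ age_cont + I(age_cont^2),
                    data = brfss_clean,
                    family = binomial(link = "logit"))
anova(model_logistic_simple, model_age_sq, test = "LRT")
# A significant LRT suggests the linear-in-age specification is inadequate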

Checking for Multicollinearity

Variance Inflation Factor (VIF): Measures how much the variance of a coefficient is inflated due to correlation with other predictors.

  • VIF < 5: Generally acceptable
  • VIF > 10: Serious multicollinearity problem

Note: for models with categorical predictors, car::vif() returns a generalized VIF (GVIF). The quantity GVIF^(1/(2·Df)) is on the standard-error scale, so strictly it should be squared before comparing against the usual VIF cutoffs of 5 and 10.

# Calculate VIF
vif_values <- vif(model_logistic_multiple)

# Create VIF table
# For models with categorical variables, vif() returns GVIF (Generalized VIF)
if (is.matrix(vif_values)) {
  # If matrix (categorical variables present), extract GVIF^(1/(2*Df))
  vif_df <- data.frame(
    Variable = rownames(vif_values),
    VIF = vif_values[, "GVIF^(1/(2*Df))"]
  )
} else {
  # If vector (only continuous variables)
  vif_df <- data.frame(
    Variable = names(vif_values),
    VIF = as.numeric(vif_values)
  )
}

# Add interpretation
vif_df <- vif_df %>%
  arrange(desc(VIF)) %>%
  mutate(
    Interpretation = case_when(
      VIF < 5 ~ "Low (No concern)",
      VIF >= 5 & VIF < 10 ~ "Moderate (Monitor)",
      VIF >= 10 ~ "High (Problem)"
    )
  )

vif_df %>%
  kable(caption = "Variance Inflation Factors (VIF) for Multiple Regression Model",
        digits = 2,
        align = "lrc") %>%
  kable_styling(bootstrap_options = c("striped", "hover"),
                full_width = FALSE) %>%
  row_spec(which(vif_df$VIF >= 10), bold = TRUE, color = "white", background = "#DC143C") %>%
  row_spec(which(vif_df$VIF >= 5 & vif_df$VIF < 10), background = "#FFA500") %>%
  row_spec(which(vif_df$VIF < 5), background = "#90EE90")
Variance Inflation Factors (VIF) for Multiple Regression Model

| Variable       |  VIF | Interpretation   |
|----------------|-----:|------------------|
| age_cont       | 1.05 | Low (No concern) |
| current_smoker | 1.05 | Low (No concern) |
| phys_active    | 1.02 | Low (No concern) |
| sex            | 1.01 | Low (No concern) |
| education      | 1.01 | Low (No concern) |
| bmi_cat        | 1.01 | Low (No concern) |

Influential Observations

Cook’s Distance: Measures how much the model would change if an observation were removed.

  • Cook’s D > 1: Potentially influential observation
# Calculate Cook's distance
cooks_d <- cooks.distance(model_logistic_multiple)

# Create data frame
influence_df <- data.frame(
  observation = 1:length(cooks_d),
  cooks_d = cooks_d
) %>%
  mutate(influential = ifelse(cooks_d > 1, "Yes", "No"))

# Plot
p5 <- ggplot(influence_df, aes(x = observation, y = cooks_d, color = influential)) +
  geom_point(alpha = 0.6) +
  geom_hline(yintercept = 1, linetype = "dashed", color = "red") +
  labs(
    title = "Cook's Distance: Identifying Influential Observations",
    subtitle = "Values > 1 indicate potentially influential observations",
    x = "Observation Number",
    y = "Cook's Distance",
    color = "Influential?"
  ) +
  scale_color_manual(values = c("No" = "steelblue", "Yes" = "red")) +
  theme_minimal(base_size = 12)

ggplotly(p5)

Cook’s Distance for Influential Observations

# Count influential observations
n_influential <- sum(influence_df$influential == "Yes")
cat("Number of potentially influential observations:", n_influential, "\n")
## Number of potentially influential observations: 0

9. Model Comparison and Selection

Comparing Nested Models

Use Likelihood Ratio Test to compare nested models:

# Model 1: Age only
model1 <- glm(diabetes ~ age_cont,
              data = brfss_clean,
              family = binomial)

# Model 2: Age + Sex
model2 <- glm(diabetes ~ age_cont + sex,
              data = brfss_clean,
              family = binomial)

# Model 3: Full model
model3 <- model_logistic_multiple

# Likelihood ratio test
lrt_1_2 <- anova(model1, model2, test = "LRT")
lrt_2_3 <- anova(model2, model3, test = "LRT")

# Create comparison table
model_comp <- data.frame(
  Model = c("Model 1: Age only",
            "Model 2: Age + Sex",
            "Model 3: Full model"),
  AIC = c(AIC(model1), AIC(model2), AIC(model3)),
  BIC = c(BIC(model1), BIC(model2), BIC(model3)),
  `Deviance` = c(deviance(model1), deviance(model2), deviance(model3)),
  check.names = FALSE
)

model_comp %>%
  kable(caption = "Model Comparison: AIC, BIC, and Deviance",
        digits = 2,
        align = "lrrr") %>%
  kable_styling(bootstrap_options = c("striped", "hover"),
                full_width = FALSE) %>%
  row_spec(which.min(model_comp$AIC), bold = TRUE, background = "#d4edda")
Model Comparison: AIC, BIC, and Deviance

| Model               |     AIC |     BIC | Deviance |
|---------------------|--------:|--------:|---------:|
| Model 1: Age only   | 1175.08 | 1185.39 |  1171.08 |
| Model 2: Age + Sex  | 1175.85 | 1191.32 |  1169.85 |
| Model 3: Full model | 1122.65 | 1179.36 |  1100.65 |

Interpretation:

  • Lower AIC/BIC indicates better model fit
  • Model 3 (full model) has the lowest AIC, suggesting it provides the best fit to the data
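For reference, AIC = -2 log L + 2k, where k is the number of estimated parameters (BIC replaces 2k with k log n). A quick verification for Model 1:

# AIC by hand: -2 * log-likelihood + 2 * number of parameters
ll <- logLik(model1)
-2 * as.numeric(ll) + 2 * attr(ll, "df")   # matches AIC(model1) = 1175.08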

10. Error Term in Statistical Models

All statistical models include an error term (\(\epsilon\)) to account for:

  • Random variation in the outcome
  • Unmeasured variables not included in the model
  • Measurement error in variables

\[Y = \beta_0 + \beta_1 X_1 + \cdots + \beta_p X_p + \epsilon\]

Key points:

  • The model cannot perfectly predict every outcome
  • The difference between observed and predicted values is the error (residual)
  • We assume errors are normally distributed with mean 0 (for linear regression)
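For a fitted linear model, the residuals are exactly these observed-minus-predicted differences. A quick check using the earlier linear model:

# Residuals = observed outcome minus fitted values
mf <- model.frame(model_linear_simple)           # data actually used in the fit
head(residuals(model_linear_simple))
head(mf$diabetes - fitted(model_linear_simple))  # identical values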

Part 2: Student Lab Activity

Lab Overview

In this lab, you will:

  1. Build your own logistic regression model predicting hypertension (high blood pressure)
  2. Create dummy variables for categorical predictors
  3. Interpret regression coefficients
  4. Test for confounding and interaction
  5. Perform model diagnostics

Lab Instructions

Task 1: Explore the Outcome Variable

# YOUR CODE HERE: Create a frequency table of hypertension status

# Summary table by hypertension status
desc_table <- brfss_clean %>%
  group_by(hypertension) %>%
  summarise(
    N = n(),
    `Mean Age` = round(mean(age_cont, na.rm = TRUE), 1),
    `% Hypertension` = round(100 * mean(hypertension, na.rm = TRUE), 1)
  ) %>%
  mutate(
    rfhype6 = ifelse(hypertension == 1, "Hypertension", "No Hypertension")
  )

desc_table %>%
  kable(
    caption = "Descriptive Statistics by Hypertension Status",
    align = "lccc"
  ) %>%
  kable_styling(
    bootstrap_options = c("striped", "hover", "condensed"),
    full_width = FALSE)
Descriptive Statistics by Hypertension Status

| hypertension |   N | Mean Age | % Hypertension | rfhype6         |
|--------------|----:|---------:|---------------:|-----------------|
| 0            | 606 |     54.5 |              0 | No Hypertension |
| 1            | 675 |     63.1 |            100 | Hypertension    |
# YOUR CODE HERE: Calculate the prevalence of hypertension by age group

# Simple linear regression: hypertension ~ age
model_linear_simple <- lm(hypertension ~ age_cont, data = brfss_clean)

# Display results
tidy(model_linear_simple, conf.int = TRUE) %>%
  kable(caption = "Simple Linear Regression: Hypertension ~ Age",
        digits = 4,
        col.names = c("Term", "Estimate", "Std. Error", "t-statistic", "p-value", "95% CI Lower", "95% CI Upper")) %>%
  kable_styling(bootstrap_options = c("striped", "hover"),
                full_width = FALSE)

Simple Linear Regression: Hypertension ~ Age

| Term        | Estimate | Std. Error | t-statistic | p-value | 95% CI Lower | 95% CI Upper |
|-------------|---------:|-----------:|------------:|--------:|-------------:|-------------:|
| (Intercept) |  -0.1891 |      0.060 |     -3.1495 |  0.0017 |      -0.3069 |      -0.0713 |
| age_cont    |   0.0121 |      0.001 |     12.2239 |  0.0000 |       0.0102 |       0.0141 |

Questions:

  1. What is the overall prevalence of hypertension in the dataset?

Prevalence = 675 / (675 + 606) × 100 = 52.7%. The overall prevalence of hypertension in the dataset was 52.7%, indicating that slightly more than half of the study population reported having hypertension (see the one-line check after these questions).

  2. How does hypertension prevalence vary by age group?

Participants with hypertension had a higher mean age (63.1 years) than those without hypertension (54.5 years), and age was significantly associated with hypertension (p < 0.001), indicating that prevalence increases with age.
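A one-line check of the overall prevalence (assuming hypertension is coded 0/1, as in the table above):

# Overall prevalence of hypertension (0/1 coding assumed)
round(100 * mean(brfss_clean$hypertension, na.rm = TRUE), 1)  # 675/1281 = 52.7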


Task 2: Build a Simple Logistic Regression Model

# YOUR CODE HERE: Fit a simple logistic regression model
# Outcome: hypertension
# Predictor: age_cont
model_logistic_simple <- glm(hypertension ~ age_cont,
                              data = brfss_clean,
                              family = binomial(link = "logit"))

# YOUR CODE HERE: Display the results with odds ratios
tidy(model_logistic_simple, exponentiate = TRUE, conf.int = TRUE) %>%
  kable(caption = "Simple Logistic Regression: Hypertension ~ Age (Odds Ratios)",
        digits = 3,
        col.names = c("Term", "Odds Ratio", "Std. Error", "z-statistic", "p-value", "95% CI Lower", "95% CI Upper")) %>%
  kable_styling(bootstrap_options = c("striped", "hover"),
                full_width = FALSE)
Simple Logistic Regression: Hypertension ~ Age (Odds Ratios)

| Term        | Odds Ratio | Std. Error | z-statistic | p-value | 95% CI Lower | 95% CI Upper |
|-------------|-----------:|-----------:|------------:|--------:|-------------:|-------------:|
| (Intercept) |      0.048 |      0.296 |     -10.293 |       0 |        0.026 |        0.084 |
| age_cont    |      1.055 |      0.005 |      10.996 |       0 |        1.045 |        1.065 |

Questions:

  1. What is the odds ratio for age? Interpret this value. The odds ratio is 1.055: for each 1-year increase in age, the odds of hypertension increase by 5.5%.

  2. Is the association statistically significant? Yes; the relationship is highly statistically significant (p < 0.001).

  3. What is the 95% confidence interval for the odds ratio? The 95% confidence interval is 1.045 to 1.065.

Task 3: Create a Multiple Regression Model

# YOUR CODE HERE: Fit a multiple logistic regression model
# Outcome: hypertension
# Predictors: age_cont, sex, bmi_cat, phys_active, current_smoker
model_logistic_multiple <- glm(hypertension ~ age_cont + sex + bmi_cat +
                                phys_active + current_smoker,
                               data = brfss_clean,
                               family = binomial(link = "logit"))



# YOUR CODE HERE: Display the results
tidy(model_logistic_multiple, exponentiate = TRUE, conf.int = TRUE) %>%
  kable(caption = "Multiple Logistic Regression: Hypertension ~ Age + Covariates (Odds Ratios)",
        digits = 3,
        col.names = c("Term", "Odds Ratio", "Std. Error", "z-statistic", "p-value", "95% CI Lower", "95% CI Upper")) %>%
  kable_styling(bootstrap_options = c("striped", "hover"),
                full_width = FALSE) %>%
  scroll_box(height = "400px")
Multiple Logistic Regression: Hypertension ~ Age + Covariates (Odds Ratios)

| Term              | Odds Ratio | Std. Error | z-statistic | p-value | 95% CI Lower | 95% CI Upper |
|-------------------|-----------:|-----------:|------------:|--------:|-------------:|-------------:|
| (Intercept)       |      0.008 |      0.653 |      -7.355 |   0.000 |        0.002 |        0.028 |
| age_cont          |      1.061 |      0.005 |      11.234 |   0.000 |        1.050 |        1.073 |
| sexMale           |      1.270 |      0.123 |       1.950 |   0.051 |        0.999 |        1.616 |
| bmi_catNormal     |      2.097 |      0.546 |       1.356 |   0.175 |        0.759 |        6.756 |
| bmi_catOverweight |      3.241 |      0.543 |       2.166 |   0.030 |        1.183 |       10.385 |
| bmi_catObese      |      6.585 |      0.545 |       3.459 |   0.001 |        2.394 |       21.176 |
| phys_active       |      0.900 |      0.130 |      -0.808 |   0.419 |        0.697 |        1.162 |
| current_smoker    |      1.071 |      0.139 |       0.495 |   0.621 |        0.817 |        1.407 |

Questions:

  1. How did the odds ratio for age change after adjusting for other variables? The adjusted OR for age is 1.061: after adjusting for sex, BMI, physical activity, and smoking, each 1-year increase in age is associated with a 6.1% increase in the odds of hypertension (versus 5.5% in the crude model).

  2. What does this suggest about confounding? The change in the odds ratio after adjustment suggests some confounding of the crude association between age and hypertension. Variables such as BMI and sex are plausible confounders because they are associated with both age and hypertension. After controlling for these factors, the association between age and hypertension became slightly stronger, indicating that age is an independent predictor of hypertension.

  3. Which variables are the strongest predictors of hypertension? The strongest predictor of hypertension was obesity, with obese individuals having 6.6 times the odds of hypertension compared to the reference BMI group (p = 0.001). Overweight individuals also had significantly higher odds (OR = 3.24, p = 0.030). Age was another important predictor, with each additional year increasing the odds of hypertension by 6.1% (p < 0.001). Sex, physical activity, and smoking were not statistically significant predictors after adjustment.

Task 4: Interpret Dummy Variables

# YOUR CODE HERE: Create a table showing the dummy variable coding for bmi_cat
# (the model output includes a bmi_catNormal dummy, so the omitted
#  reference level is the lowest BMI category, underweight)
dummy_table <- data.frame(
  BMI = c("Underweight", "Normal", "Overweight", "Obese"),
  `Dummy 1 (Normal)` = c(0, 1, 0, 0),
  `Dummy 2 (Overweight)` = c(0, 0, 1, 0),
  `Dummy 3 (Obese)` = c(0, 0, 0, 1),
  check.names = FALSE
)
dummy_table %>%
  kable(caption = "Dummy Variable Coding for BMI",
        align = "lccc") %>%
  kable_styling(bootstrap_options = c("striped", "hover"),
                full_width = FALSE) %>%
  row_spec(1, bold = TRUE, background = "#ffe6e6")  # Highlight reference category

Dummy Variable Coding for BMI

| BMI         | Dummy 1 (Normal) | Dummy 2 (Overweight) | Dummy 3 (Obese) |
|-------------|:----------------:|:--------------------:|:---------------:|
| Underweight |        0         |          0           |        0        |
| Normal      |        1         |          0           |        0        |
| Overweight  |        0         |          1           |        0        |
| Obese       |        0         |          0           |        1        |
# YOUR CODE HERE: Extract and display the odds ratios for BMI categories
model_logistic_bmi <- glm(hypertension ~ bmi_cat,
                               data = brfss_clean,
                               family = binomial(link = "logit"))



# YOUR CODE HERE: Display the results
tidy(model_logistic_bmi, exponentiate = TRUE, conf.int = TRUE) %>%
  kable(caption = "Logistic Regression: Hypertension ~ bmi_cat",
        digits = 3,
        col.names = c("Term", "Odds Ratio", "Std. Error", "z-statistic", "p-value", "95% CI Lower", "95% CI Upper")) %>%
  kable_styling(bootstrap_options = c("striped", "hover"),
                full_width = FALSE) %>%
  scroll_box(height = "400px")

Logistic Regression: Hypertension ~ bmi_cat

| Term              | Odds Ratio | Std. Error | z-statistic | p-value | 95% CI Lower | 95% CI Upper |
|-------------------|-----------:|-----------:|------------:|--------:|-------------:|-------------:|
| (Intercept)       |      0.333 |      0.516 |      -2.127 |   0.033 |        0.108 |        0.860 |
| bmi_catNormal     |      2.026 |      0.529 |       1.335 |   0.182 |        0.764 |        6.354 |
| bmi_catOverweight |      3.266 |      0.525 |       2.255 |   0.024 |        1.242 |       10.181 |
| bmi_catObese      |      4.968 |      0.525 |       3.055 |   0.002 |        1.890 |       15.477 |

Questions:

  1. What is the reference category for BMI? The reference category is the omitted level of bmi_cat (underweight, the lowest BMI category): it has no dummy variable of its own, and bmi_catNormal appears in the output, so all odds ratios are interpreted relative to this omitted group.

  2. Interpret the odds ratio for “Obese” compared to the reference category. Individuals classified as obese had approximately 4.97 times the odds of hypertension compared to individuals in the reference BMI category, and this association was statistically significant (p = 0.002).

  3. How would you explain this to a non-statistician? People who are obese are much more likely to have high blood pressure than people in the lowest weight category: roughly five times the odds of hypertension. This shows that excess body weight is an important risk factor for high blood pressure.

Task 5: Test for Interaction

# YOUR CODE HERE: Fit a model with Age × BMI interaction
# Test if the effect of age on hypertension differs by BMI category
model_interaction <- glm(hypertension ~ age_cont * bmi_cat,
                         data = brfss_clean,
                         family = binomial(link = "logit"))

# Display interaction results
tidy(model_interaction, exponentiate = TRUE, conf.int = TRUE) %>%
  filter(str_detect(term, "age_cont")) %>%
  kable(caption = "Age × BMI Interaction Model (Odds Ratios)",
        digits = 3,
        col.names = c("Term", "Odds Ratio", "Std. Error", "z-statistic", "p-value", "95% CI Lower", "95% CI Upper")) %>%
  kable_styling(bootstrap_options = c("striped", "hover"),
                full_width = FALSE)
Age × BMI Interaction Model (Odds Ratios)

| Term                       | Odds Ratio | Std. Error | z-statistic | p-value | 95% CI Lower | 95% CI Upper |
|----------------------------|-----------:|-----------:|------------:|--------:|-------------:|-------------:|
| age_cont                   |      1.004 |      0.042 |       0.102 |   0.918 |        0.929 |        1.108 |
| age_cont:bmi_catNormal     |      1.058 |      0.043 |       1.306 |   0.192 |        0.957 |        1.147 |
| age_cont:bmi_catOverweight |      1.063 |      0.043 |       1.423 |   0.155 |        0.962 |        1.151 |
| age_cont:bmi_catObese      |      1.054 |      0.042 |       1.232 |   0.218 |        0.954 |        1.140 |
# YOUR CODE HERE: Perform a likelihood ratio test comparing models with and without interaction
model_no_interaction <- glm(hypertension ~ age_cont + bmi_cat,
                            data = brfss_clean,
                            family = binomial(link = "logit"))
anova(model_no_interaction, model_interaction, test = "LRT")

# YOUR CODE HERE: Visualize predicted probabilities by age and BMI category
pred_interact <- ggpredict(model_interaction, terms = c("age_cont [18:80]", "bmi_cat"))

# Plot
p4 <- ggplot(pred_interact, aes(x = x, y = predicted, color = group, fill = group)) +
  geom_line(linewidth = 1.2) +
  geom_ribbon(aes(ymin = conf.low, ymax = conf.high), alpha = 0.2, color = NA) +
  labs(
    title = "Predicted Probability of Hypertension by Age and BMI",
    subtitle = "Testing for Age × BMI Interaction",
    x = "Age (years)",
    y = "Predicted Probability of Hypertension",
    color = "BMI",
    fill = "BMI"
  ) +
  scale_y_continuous(labels = scales::percent_format()) +
  # One color per BMI level; the fill scale must match the color scale
  scale_color_manual(values = c("Underweight" = "gray40", "Normal" = "#E64B35",
                                "Overweight" = "#4DBBD5", "Obese" = "purple")) +
  scale_fill_manual(values = c("Underweight" = "gray40", "Normal" = "#E64B35",
                               "Overweight" = "#4DBBD5", "Obese" = "purple")) +
  theme_minimal(base_size = 12) +
  theme(legend.position = "bottom")

ggplotly(p4)

Questions:

  1. Is the interaction term statistically significant? No, the interaction terms between age and BMI category are not statistically significant (all p > 0.05). This indicates there is no statistically significant interaction between age and BMI in predicting hypertension.

  2. What does this mean in epidemiologic terms (effect modification)? This means that BMI does not significantly modify the relationship between age and hypertension. In epidemiologic terms, the effect of age on hypertension risk appears to be similar across all BMI categories. The increase in hypertension risk with age is consistent regardless of whether an individual has normal weight, is overweight, or is obese.

  3. Create a visualization showing predicted probabilities by age and BMI category. (See the plot above, which shows the predicted probability of hypertension by age for each BMI group.)


Task 6: Model Diagnostics

# YOUR CODE HERE: Calculate VIF for your multiple regression model
vif_values <- vif(model_logistic_multiple)

# Create VIF table
# For models with categorical variables, vif() returns GVIF (Generalized VIF)
if (is.matrix(vif_values)) {
  # If matrix (categorical variables present), extract GVIF^(1/(2*Df))
  vif_df <- data.frame(
    Variable = rownames(vif_values),
    VIF = vif_values[, "GVIF^(1/(2*Df))"]
  )
} else {
  # If vector (only continuous variables)
  vif_df <- data.frame(
    Variable = names(vif_values),
    VIF = as.numeric(vif_values)
  )
}

# Add interpretation
vif_df <- vif_df %>%
  arrange(desc(VIF)) %>%
  mutate(
    Interpretation = case_when(
      VIF < 5 ~ "Low (No concern)",
      VIF >= 5 & VIF < 10 ~ "Moderate (Monitor)",
      VIF >= 10 ~ "High (Problem)"
    )
  )

vif_df %>%
  kable(caption = "Variance Inflation Factors (VIF) for Multiple Regression Model",
        digits = 2,
        align = "lrc") %>%
  kable_styling(bootstrap_options = c("striped", "hover"),
                full_width = FALSE) %>%
  row_spec(which(vif_df$VIF >= 10), bold = TRUE, color = "white", background = "#DC143C") %>%
  row_spec(which(vif_df$VIF >= 5 & vif_df$VIF < 10), background = "#FFA500") %>%
  row_spec(which(vif_df$VIF < 5), background = "#90EE90")
Variance Inflation Factors (VIF) for Multiple Regression Model

| Variable       |  VIF | Interpretation   |
|----------------|-----:|------------------|
| age_cont       | 1.06 | Low (No concern) |
| current_smoker | 1.04 | Low (No concern) |
| bmi_cat        | 1.02 | Low (No concern) |
| phys_active    | 1.01 | Low (No concern) |
| sex            | 1.01 | Low (No concern) |
# YOUR CODE HERE: Create a Cook's distance plot to identify influential observations
cooks_d <- cooks.distance(model_logistic_multiple)

# Create data frame
influence_df <- data.frame(
  observation = 1:length(cooks_d),
  cooks_d = cooks_d
) %>%
  mutate(influential = ifelse(cooks_d > 1, "Yes", "No"))

# Plot
p5 <- ggplot(influence_df, aes(x = observation, y = cooks_d, color = influential)) +
  geom_point(alpha = 0.6) +
  geom_hline(yintercept = 1, linetype = "dashed", color = "red") +
  labs(
    title = "Cook's Distance: Identifying Influential Observations",
    subtitle = "Values > 1 indicate potentially influential observations",
    x = "Observation Number",
    y = "Cook's Distance",
    color = "Influential?"
  ) +
  scale_color_manual(values = c("No" = "steelblue", "Yes" = "red")) +
  theme_minimal(base_size = 12)

ggplotly(p5)

Questions:

  1. Are there any concerns about multicollinearity? No, there are no concerns about multicollinearity. All VIF values are close to 1, indicating very low correlation between predictor variables. This suggests that the regression coefficients are stable and not distorted by multicollinearity.

  2. Are there any influential observations that might affect your results? There is no evidence of influential observations affecting the model results.

  3. What would you do if you found serious violations? If serious violations were found, I would investigate data quality, consider removing or modifying problematic variables, and perform sensitivity analyses to ensure robust results.

Task 7: Model Comparison

# YOUR CODE HERE: Compare three models using AIC and BIC
# Model 1: Age only
model1 <- glm(hypertension ~ age_cont,
              data = brfss_clean,
              family = binomial)
# Model 2: Age + sex + bmi_cat
model2 <- glm(hypertension ~ age_cont + sex + bmi_cat,
              data = brfss_clean,
              family = binomial)
# Model 3: Age + sex + bmi_cat + phys_active + current_smoker (full model)
model3 <- glm(hypertension ~ age_cont + sex + bmi_cat + phys_active + current_smoker,
              data = brfss_clean,
              family = binomial)

# Likelihood ratio test
lrt_1_2 <- anova(model1, model2, test = "LRT")
lrt_2_3 <- anova(model2, model3, test = "LRT")


# YOUR CODE HERE: Create a comparison table
model_comp <- data.frame(
  Model = c("Model 1: Age only",
            "Model 2: Age + sex + bmi_cat",
            "Model 3: Full model"),
  AIC = c(AIC(model1), AIC(model2), AIC(model3)),
  BIC = c(BIC(model1), BIC(model2), BIC(model3)),
  `Deviance` = c(deviance(model1), deviance(model2), deviance(model3)),
  check.names = FALSE
)

model_comp %>%
  kable(caption = "Model Comparison: AIC, BIC, and Deviance",
        digits = 2,
        align = "lrrr") %>%
  kable_styling(bootstrap_options = c("striped", "hover"),
                full_width = FALSE) %>%
  row_spec(which.min(model_comp$AIC), bold = TRUE, background = "#d4edda")
Model Comparison: AIC, BIC, and Deviance

| Model                        |     AIC |     BIC | Deviance |
|------------------------------|--------:|--------:|---------:|
| Model 1: Age only            | 1636.61 | 1646.92 |  1632.61 |
| Model 2: Age + sex + bmi_cat | 1576.49 | 1607.42 |  1564.49 |
| Model 3: Full model          | 1579.50 | 1620.74 |  1563.50 |

Questions:

  1. Which model has the best fit based on AIC? Model 2 has the best fit because it has the lowest AIC (1576.49).

  2. Is the added complexity of the full model justified? No, the added complexity of the full model is not justified because it has higher AIC and BIC values and only minimal improvement in deviance.

  3. Which model would you choose for your final analysis? Why? Model 2 would be chosen for the final analysis because it provides the best balance between goodness-of-fit and model simplicity.

Lab Report Guidelines

Write a brief report (1-2 pages) summarizing your findings:

  1. Introduction: State your research question. Hypertension is a major risk factor for cardiovascular disease and affects a large proportion of adults worldwide. Age and body mass index (BMI) are known risk factors, but their independent effects and interaction require evaluation. The research question for this analysis was: what is the association between age and hypertension, and how do BMI, sex, and physical activity influence this relationship among U.S. adults in the BRFSS 2023 dataset?
  2. Methods: Describe your analytic approach. This cross-sectional analysis used data from the Behavioral Risk Factor Surveillance System (BRFSS) 2023. The outcome variable was hypertension (yes/no), and the primary predictor was age (continuous). Covariates included sex, BMI category, and physical activity. Descriptive statistics were calculated to estimate hypertension prevalence. Logistic regression models were used to examine associations between age and hypertension, including adjusted models controlling for covariates. Interaction between age and BMI was tested to assess effect modification. Model fit was evaluated using AIC and BIC, and multicollinearity was assessed using variance inflation factors (VIF).
  3. Results: Present key findings with tables and figures. The sample included 1,281 participants, and 52.7% had hypertension. Participants with hypertension were older (mean age 63.1 years) than those without (mean age 54.5 years). In the adjusted logistic regression model, age was significantly associated with hypertension (OR = 1.061, 95% CI: 1.050-1.073, p < 0.001), indicating a 6.1% increase in odds per year of age. BMI was also a strong predictor: overweight individuals had higher odds of hypertension (OR = 3.27, p = 0.024), and obese individuals had nearly five times the odds (OR = 4.97, p = 0.002) relative to the reference BMI category. No statistically significant interaction between age and BMI was observed. Model comparison showed that the model including age, sex, and BMI provided the best fit, and VIF values indicated no multicollinearity.
  4. Interpretation: Explain what your results mean. Age and BMI were strong independent predictors of hypertension. Older individuals and those with higher BMI had significantly increased odds of hypertension. The lack of a significant interaction suggests that age affects hypertension risk similarly across BMI groups. These findings support the importance of weight management and hypertension screening, particularly among older adults.
  5. Limitations: Discuss potential issues with your analysis. This study has several limitations. The cross-sectional design prevents causal inference. Hypertension was self-reported, which may introduce reporting bias. Residual confounding from unmeasured factors may also be present. Despite these limitations, the results are consistent with existing epidemiologic evidence.

Submission: Submit your completed R Markdown file and knitted HTML report.

Summary

Key Concepts Covered

  1. Statistical modeling describes relationships between variables
  2. Regression types depend on the outcome variable type
  3. Logistic regression is appropriate for binary outcomes
  4. Multiple regression controls for confounding
  5. Dummy variables represent categorical predictors
  6. Interactions test for effect modification
  7. Model diagnostics check assumptions and identify problems
  8. Model comparison helps select the best model

Important Formulas

Logistic Regression:

\[\text{logit}(p) = \log\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1 X_1 + \cdots + \beta_p X_p\]

Odds Ratio:

\[\text{OR} = e^{\beta_i}\]

Predicted Probability:

\[p = \frac{e^{\beta_0 + \beta_1 X_1 + \cdots + \beta_p X_p}}{1 + e^{\beta_0 + \beta_1 X_1 + \cdots + \beta_p X_p}}\]
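These pieces fit together: the predicted probability is the inverse logit of the linear predictor. A quick check for a 60-year-old, using the most recently fit simple age-only logistic model:

# Predicted probability "by hand" vs. predict() for age = 60
b  <- coef(model_logistic_simple)
lp <- b[1] + b[2] * 60            # linear predictor: beta0 + beta1 * age
exp(lp) / (1 + exp(lp))           # formula above
plogis(lp)                        # same value via the inverse logit
predict(model_logistic_simple, newdata = data.frame(age_cont = 60),
        type = "response")        # same value via predict()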


References

  • Agresti, A. (2018). An Introduction to Categorical Data Analysis (3rd ed.). Wiley.
  • Hosmer, D. W., Lemeshow, S., & Sturdivant, R. X. (2013). Applied Logistic Regression (3rd ed.). Wiley.
  • Vittinghoff, E., Glidden, D. V., Shiboski, S. C., & McCulloch, C. E. (2012). Regression Methods in Biostatistics (2nd ed.). Springer.
  • Centers for Disease Control and Prevention. (2023). Behavioral Risk Factor Surveillance System.

Session Info

sessionInfo()
## R version 4.5.1 (2025-06-13 ucrt)
## Platform: x86_64-w64-mingw32/x64
## Running under: Windows 11 x64 (build 22631)
## 
## Matrix products: default
##   LAPACK version 3.12.1
## 
## locale:
## [1] LC_COLLATE=English_United States.utf8 
## [2] LC_CTYPE=English_United States.utf8   
## [3] LC_MONETARY=English_United States.utf8
## [4] LC_NUMERIC=C                          
## [5] LC_TIME=English_United States.utf8    
## 
## time zone: America/New_York
## tzcode source: internal
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] ggstats_0.12.0   gtsummary_2.5.0  ggeffects_2.3.2  car_3.1-3       
##  [5] carData_3.0-5    broom_1.0.11     plotly_4.12.0    kableExtra_1.4.0
##  [9] knitr_1.51       haven_2.5.5      lubridate_1.9.4  forcats_1.0.1   
## [13] stringr_1.6.0    dplyr_1.1.4      purrr_1.2.1      readr_2.1.6     
## [17] tidyr_1.3.2      tibble_3.3.1     ggplot2_4.0.1    tidyverse_2.0.0 
## 
## loaded via a namespace (and not attached):
##  [1] gtable_0.3.6         xfun_0.56            bslib_0.9.0         
##  [4] htmlwidgets_1.6.4    insight_1.4.5        lattice_0.22-7      
##  [7] tzdb_0.5.0           crosstalk_1.2.2      vctrs_0.6.5         
## [10] tools_4.5.1          generics_0.1.4       datawizard_1.3.0    
## [13] pkgconfig_2.0.3      Matrix_1.7-3         data.table_1.18.0   
## [16] RColorBrewer_1.1-3   S7_0.2.1             lifecycle_1.0.5     
## [19] compiler_4.5.1       farver_2.1.2         textshaping_1.0.4   
## [22] htmltools_0.5.9      sass_0.4.10          yaml_2.3.12         
## [25] lazyeval_0.2.2       Formula_1.2-5        pillar_1.11.1       
## [28] jquerylib_0.1.4      broom.helpers_1.22.0 cachem_1.1.0        
## [31] abind_1.4-8          nlme_3.1-168         tidyselect_1.2.1    
## [34] digest_0.6.39        stringi_1.8.7        labeling_0.4.3      
## [37] splines_4.5.1        labelled_2.16.0      fastmap_1.2.0       
## [40] grid_4.5.1           cli_3.6.5            magrittr_2.0.4      
## [43] cards_0.7.1          withr_3.0.2          scales_1.4.0        
## [46] backports_1.5.0      timechange_0.3.0     rmarkdown_2.30      
## [49] httr_1.4.7           otel_0.2.0           hms_1.1.4           
## [52] evaluate_1.0.5       viridisLite_0.4.2    mgcv_1.9-3          
## [55] rlang_1.1.6          glue_1.8.0           xml2_1.5.2          
## [58] svglite_2.2.2        rstudioapi_0.18.0    jsonlite_2.0.0      
## [61] R6_2.6.1             systemfonts_1.3.1