This study is grounded in the Elaboration Likelihood Model (ELM), which suggests that the amount of time and cognitive resources spent processing persuasive information influences the likelihood of attitude change. Specifically, longer exposure to persuasive arguments should increase the probability of adopting the advocated position.
In the context of marijuana legalization, this means that the more minutes participants spend reading persuasive information about the issue, the greater the odds they will support legalization. Using logistic regression, this project tests whether time spent reading persuasive information significantly predicts support for legalization.
The odds of supporting marijuana legalization will increase as the number of minutes spent reading persuasive information about marijuana increases.
The independent variable (IV) in this analysis is Minutes, the number of minutes spent reading persuasive information about marijuana. The dependent variable (DV) is Favor, a binary measure of whether the participant supports legalization (1 = supports, 0 = does not support). Logistic regression was selected because the DV is dichotomous and the aim is to estimate how increases in the IV affect the odds of support. The model yields odds ratios that quantify how each additional minute of exposure changes the odds of supporting legalization. To check the assumption of linearity in the logit, a Box-Tidwell test was conducted. Finally, the inflection point was calculated to identify the number of exposure minutes at which participants are equally likely (p = .50) to support or oppose legalization.
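Formally, the model is the standard binary logistic specification (written out here as a sketch; b0 and b1 denote the intercept and the coefficient on Minutes):

    logit(p) = ln(p / (1 - p)) = b0 + b1 * Minutes
    odds ratio per additional minute = exp(b1)
    inflection point (p = .50): Minutes* = -b0 / b1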
The results showed that the more time participants spent reading persuasive information, the greater the odds that they supported marijuana legalization. Each additional minute of exposure raised the likelihood of a favorable stance, and the inflection point identified the amount of exposure at which participants became more likely than not to support legalization. These findings are consistent with the ELM framework: increased elaboration leads to greater persuasion. In practice, this suggests that advocates may be most effective when they encourage sustained engagement with their content, as longer exposure appears to strengthen support.
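To make the effect size concrete (a rough back-of-the-envelope illustration based on the odds ratio reported below): an odds ratio of 1.167 per minute implies that ten additional minutes multiply the odds of support by about 1.167^10 ≈ 4.7.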
Logistic Regression Results
Odds Ratios with 95% Confidence Intervals

| term        | Odds_Ratio | CI_Lower | CI_Upper | P_Value |
|-------------|------------|----------|----------|---------|
| (Intercept) | 0.099      | 0.044    | 0.205    | 0.0000  |
| IV          | 1.167      | 1.116    | 1.227    | 0.0000  |
Linearity of the Logit Test (Box-Tidwell)
Interaction term indicates violation if significant

| term        | Estimate | Std_Error | P_Value |
|-------------|----------|-----------|---------|
| (Intercept) | −2.566   | 1.100     | 0.0196  |
| IV          | 0.262    | 0.290     | 0.3657  |
| IV_log      | −0.032   | 0.078     | 0.6869  |
Inflection Point of Logistic Curve
Value of IV where predicted probability = 0.50

| Probability | Inflection_Point |
|-------------|------------------|
| 0.5         | 14.965           |
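As a quick consistency check (an added sketch, not part of the original output), the inflection point can be recovered from the rounded odds ratios reported above:

b0 <- log(0.099)   # intercept on the log-odds scale, from the reported odds ratio
b1 <- log(1.167)   # slope on the log-odds scale, from the reported odds ratio
-b0 / b1           # about 15 minutes, consistent with the reported 14.965

The full R script used to produce these results follows.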
# ------------------------------
# Install and load required packages
# ------------------------------
if (!require("tidyverse")) install.packages("tidyverse")
if (!require("gt")) install.packages("gt")
if (!require("gtExtras")) install.packages("gtExtras")
if (!require("plotly")) install.packages("plotly")
library(ggplot2)
library(dplyr)
library(gt)
library(gtExtras)
library(plotly)
# ------------------------------
# Read the data
# ------------------------------
mydata <- read.csv("ELM.csv") # <-- EDIT filename
# ################################################
# # (Optional) Remove specific case(s) by row number
# ################################################
# # Example: remove rows 10 and 25
# rows_to_remove <- c(10, 25) # Edit and uncomment this line
# mydata <- mydata[-rows_to_remove, ] # Uncomment this line
# Specify dependent (DV) and independent (IV) variables
mydata$DV <- mydata$Favor_1 # <-- EDIT DV column
mydata$IV <- mydata$Minutes # <-- EDIT IV column
# Ensure DV is binary numeric (0/1)
mydata$DV <- as.numeric(as.character(mydata$DV))
# ------------------------------
# Logistic regression plot
# ------------------------------
logit_plot <- ggplot(mydata, aes(x = IV, y = DV)) +
geom_point(alpha = 0.5) + # scatterplot of observed data
geom_smooth(method = "glm",
method.args = list(family = "binomial"),
se = FALSE,
color = "#1f78b4") +
labs(title = "Logistic Regression Curve",
x = "Independent Variable (IV)",
y = "Dependent Variable (DV)")
logit_plotly <- ggplotly(logit_plot)
# ------------------------------
# Run logistic regression
# ------------------------------
options(scipen = 999)
log.ed <- glm(DV ~ IV, data = mydata, family = "binomial")
# Extract coefficients and odds ratios
results <- broom::tidy(log.ed, conf.int = TRUE, exponentiate = TRUE) %>%
select(term, estimate, conf.low, conf.high, p.value) %>%
rename(Odds_Ratio = estimate,
CI_Lower = conf.low,
CI_Upper = conf.high,
P_Value = p.value)
# Display results as a nice gt table
results_table <- results %>%
gt() %>%
fmt_number(columns = c(Odds_Ratio, CI_Lower, CI_Upper), decimals = 3) %>%
fmt_number(columns = P_Value, decimals = 4) %>%
tab_header(
title = "Logistic Regression Results",
subtitle = "Odds Ratios with 95% Confidence Intervals"
)
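# ------------------------------
# (Optional) Illustrative predicted probabilities
# ------------------------------
# Added sketch, not part of the original output: predicted probability of
# support at a few example exposure times; edit the minutes to suit your data.
example_minutes <- data.frame(IV = c(5, 15, 25))
predict(log.ed, newdata = example_minutes, type = "response")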
# ------------------------------
# Check linearity of the logit (Box-Tidwell test)
# ------------------------------
# (Assumes IV > 0; shift IV if needed)
mydata$IV_log <- mydata$IV * log(mydata$IV)
linearity_test <- glm(DV ~ IV + IV_log, data = mydata, family = "binomial")
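# A non-significant coefficient on IV_log suggests the linearity-of-the-logit
# assumption is reasonable for this IV.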
linearity_results <- broom::tidy(linearity_test) %>%
select(term, estimate, std.error, p.value) %>%
rename(Estimate = estimate,
Std_Error = std.error,
P_Value = p.value)
linearity_table <- linearity_results %>%
gt() %>%
fmt_number(columns = c(Estimate, Std_Error), decimals = 3) %>%
fmt_number(columns = P_Value, decimals = 4) %>%
tab_header(
title = "Linearity of the Logit Test (Box-Tidwell)",
subtitle = "Interaction term indicates violation if significant"
)
# ------------------------------
# Calculate the inflection point (p = .50)
# ------------------------------
p <- 0.50
Inflection_point <- (log(p/(1-p)) - coef(log.ed)[1]) / coef(log.ed)[2]
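# Sanity check (added): the predicted probability at the inflection point
# should be approximately 0.50.
plogis(coef(log.ed)[1] + coef(log.ed)[2] * Inflection_point)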
inflection_table <- tibble(
Probability = 0.5,
Inflection_Point = Inflection_point
) %>%
gt() %>%
fmt_number(columns = Inflection_Point, decimals = 3) %>%
tab_header(
title = "Inflection Point of Logistic Curve",
subtitle = "Value of IV where predicted probability = 0.50"
)
# ------------------------------
# Outputs
# ------------------------------
# Interactive plot
logit_plotly
# Tables
results_table
linearity_table
inflection_table