This analysis applies the first-level agenda-setting framework to examine how APNews framed its coverage of the Trump Administration’s tariffs. While tariffs are generally perceived as an economic policy tool, they also carry political and emotional weight: they can be portrayed either as protection for American industries or as a source of financial strain. The study focuses on two contrasting frames: Economic Gain, referring to coverage that presents tariffs neutrally or without negative implications, and Economic Burden, referring to coverage that associates tariffs with harm or economic setbacks.
Although this project is guided by framing theory, the first-level agenda-setting analysis code was used because it effectively measures the volume of attention devoted to each frame over time. By comparing weekly counts of Economic Gain and Economic Burden coverage, the analysis tests whether one perspective dominated APNews reporting. The paired-samples t-test provides a statistical test of that difference, helping determine whether the emphasis on negative framing was significant or simply due to random variation.
Hypothesis: The weekly volume of APNews coverage framing tariffs as an Economic Burden will be significantly greater than the weekly volume of coverage framing tariffs as an Economic Gain.
Weekly APNews.com coverage volume of the two tariff frames served as the analysis’s dependent variable, measured continuously as the number of stories published per week. The independent variable was frame, measured categorically as either Economic Gain (neutral or positive framing) or Economic Burden (negative framing) and operationalized as the presence of key words or phrases likely to be unique to each framing of the Trump Administration’s tariffs.
Key words and phrases used to identify stories emphasizing the Economic Gain frame (V1) were: “tariff,” “tariffs,” “trade policy,” “trade deal,” “trade agreement,” “U.S. trade,” “American manufacturing,” “economic growth,” “domestic production,” “jobs,” “economy,” “economic boost,” and “recovery.”
Key words and phrases used to identify stories emphasizing the Economic Burden frame (V2) were: “tariff,” “tariffs,” “cost,” “prices,” “inflation,” “trade war,” “retaliation,” “economic slowdown,” “loss,” “deficit,” “layoffs,” “farmers,” “backlash,” “higher prices,” “hurt,” “decline,” “economic pain,” “instability,” “recession,” “criticism,” “tension,” “uncertainty,” and “global market.”
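To make the coding rule concrete, here is a minimal sketch of how whole-word matching flags a story for each frame. It uses the stringr package, an invented one-sentence story (not text from the data), and abbreviated patterns that stand in for the full keyword lists above:

```r
library(stringr)

# Invented example story (illustration only; not from the AP News data)
story <- "New tariffs raised prices for farmers, prompting backlash."

# Abbreviated stand-ins for the full V1 and V2 keyword lists above
gain_pattern   <- "\\btariffs?\\b|\\bjobs\\b|\\beconomic growth\\b"
burden_pattern <- "\\btariffs?\\b|\\bprices\\b|\\bfarmers\\b|\\bbacklash\\b"

str_detect(story, regex(gain_pattern, ignore_case = TRUE))    # TRUE ("tariffs")
str_detect(story, regex(burden_pattern, ignore_case = TRUE))  # TRUE (several hits)
```

Because “tariff” and “tariffs” appear in both keyword lists, a single story can be counted under both frames, as this example shows.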
A paired-samples t-test assessed the statistical significance of variation in weekly coverage volume between the two frames, comparing Economic Burden (V2) to Economic Gain (V1).
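As a check on the arithmetic, the paired t statistic is simply the mean weekly pair difference divided by its standard error. A quick sketch using the pair-difference descriptive statistics reported below reproduces the test value:

```r
# Recompute t from the reported pair-difference descriptives
d_bar <- 45.5     # mean of the weekly (Burden - Gain) differences
s_d   <- 13.834   # standard deviation of those differences
n     <- 40       # number of weekly pairs
d_bar / (s_d / sqrt(n))  # about 20.80 on df = 39, matching the reported 20.8012
```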
The figure below summarizes APNews.com’s weekly coverage of the Trump Administration’s tariffs coded by frame. Across the period analyzed, coverage emphasizing the Economic Burden frame consistently appeared more frequently than coverage emphasizing the Economic Gain frame. This trend suggests that APNews tended to focus on the challenges and negative outcomes associated with tariffs rather than highlighting potential benefits or neutral economic perspectives.
The Shapiro-Wilk test indicated that the pair differences were approximately normally distributed (W = 0.977, p = .591), permitting use of a paired-samples t-test. The t-test revealed a statistically significant difference between the two frames, t(39) = 20.80, p < .001, with Economic Burden coverage appearing more often on average (M = 96.53, SD = 40.38) than Economic Gain coverage (M = 51.03, SD = 35.07). A Wilcoxon signed-rank test produced consistent results (p < .001), further supporting the reliability of the finding.
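The decision rule for choosing between the two tests (stated in the note beneath the Shapiro-Wilk table below) can be checked directly in R. This sketch assumes the weekly pair differences are stored in mydata$PairDifferences, as in the appendix code:

```r
d <- mydata$PairDifferences
shapiro.test(d)$p.value  # 0.591 here, so normality is not rejected
length(d)                # 40 pairs
# With p > .05 and a full 40 pairs, the paired t-test is appropriate;
# the Wilcoxon signed-rank test is retained as a robustness check.
```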
Overall, the analysis suggests that APNews framed tariffs in ways that emphasized their economic costs more than their potential gains. These results align with expectations based on framing theory, which proposes that the salience of particular aspects of an issue can influence public understanding. Future analysis will take a closer look at the timing and context of these coverage shifts to see how they align with specific political or economic events during the period.
Descriptive Statistics: Pair Differences

| count | mean | sd | min | max |
|---|---|---|---|---|
| 40.000 | 45.500 | 13.834 | 17.000 | 72.000 |

Normality Test (Shapiro-Wilk)

| statistic | p.value | method |
|---|---|---|
| 0.9773 | 0.5910 | Shapiro-Wilk normality test |

Note: If the p.value is 0.05 or less, the number of pairs is fewer than 40, and the distribution of pair differences shows obvious non-normality or outliers, consider using the Wilcoxon signed-rank test results instead of the paired-samples t-test results.

Paired-Samples t-Test

| statistic | parameter | p.value | conf.low | conf.high | method |
|---|---|---|---|---|---|
| 20.8012 | 39 | 0.0000 | 41.0756 | 49.9244 | Paired t-test |

Group Means and SDs (t-Test)

| V1_Mean | V2_Mean | V1_SD | V2_SD |
|---|---|---|---|
| 51.025 | 96.525 | 35.072 | 40.376 |

Wilcoxon Signed Rank Test

| statistic | p.value | method |
|---|---|---|
| 0.0000 | 0.0000 | Wilcoxon signed rank test with continuity correction |

Group Means and SDs (Wilcoxon)

| V1_Mean | V2_Mean | V1_SD | V2_SD |
|---|---|---|---|
| 51.025 | 96.525 | 35.072 | 40.376 |
# ============================================
# APNews text analysis (First-level agenda-setting theory version)
# ============================================
# ============================================
# --- Load required libraries ---
# ============================================
if (!require("tidyverse")) install.packages("tidyverse")
if (!require("tidytext")) install.packages("tidytext")
library(tidyverse)
library(tidytext)
# ============================================
# --- Load the APNews data ---
# ============================================
# Read the data from the web
FetchedData <- readRDS(url("https://github.com/drkblake/Data/raw/refs/heads/main/APNews.rds"))
# Save the data on your computer
saveRDS(FetchedData, file = "APNews.rds")
# remove the downloaded data from the environment
rm(FetchedData)
APNews <- readRDS("APNews.rds")
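# Optional sanity check: confirm the columns used below
# (Full.Text, Week) are present in the data
glimpse(APNews)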
# ============================================
# --- Flag Topic1-related stories ---
# ============================================
# --- Define Topic1 phrases ---
phrases <- c(
"tariff",
"tariffs",
"trade policy",
"trade deal",
"trade agreement",
"U.S. trade",
"American manufacturing",
"economic growth",
"domestic production",
"jobs",
"economy",
"economic boost",
"recovery"
)
# --- Escape regex special characters ---
escaped_phrases <- str_replace_all(
phrases,
"([\\^$.|?*+()\\[\\]{}\\\\])",
"\\\\\\1"
)
# --- Build whole-word/phrase regex pattern ---
pattern <- paste0("\\b", escaped_phrases, "\\b", collapse = "|")
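# Optional sanity check on the pattern, using a made-up sentence
# (not from the data); should return TRUE
str_detect("New tariffs on imports", regex(pattern, ignore_case = TRUE))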
# --- Apply matching to flag Topic1 stories ---
APNews <- APNews %>%
mutate(
Full.Text.clean = str_squish(Full.Text), # normalize whitespace
Topic1 = if_else(
str_detect(Full.Text.clean, regex(pattern, ignore_case = TRUE)),
"Yes",
"No"
)
)
# ============================================
# --- Flag Topic2-related stories ---
# ============================================
# --- Define Topic2 phrases ---
phrases <- c(
  "tariff",
  "tariffs",
  "cost",
  "prices",
  "inflation",
  "trade war",
  "retaliation",
  "economic slowdown",
  "loss",
  "deficit",
  "layoffs",
  "farmers",
  "backlash",
  "higher prices",
  "hurt",
  "decline",
  "economic pain",
  "instability",
  "recession",
  "criticism",
  "tension",
  "uncertainty",
  "global market"
)
# --- Escape regex special characters ---
escaped_phrases <- str_replace_all(
phrases,
"([\\^$.|?*+()\\[\\]{}\\\\])",
"\\\\\\1"
)
# --- Build whole-word/phrase regex pattern ---
pattern <- paste0("\\b", escaped_phrases, "\\b", collapse = "|")
# --- Apply matching to flag Topic2 stories ---
APNews <- APNews %>%
mutate(
Full.Text.clean = str_squish(Full.Text),
Topic2 = if_else(
str_detect(Full.Text.clean, regex(pattern, ignore_case = TRUE)),
"Yes",
"No"
)
)
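# Optional: cross-tabulate the two flags. Because "tariff" and "tariffs"
# appear in both phrase lists, many stories are flagged for both topics.
count(APNews, Topic1, Topic2)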
# ============================================
# --- Visualize weekly counts of Topic1- and Topic2-related stories ---
# ============================================
# --- Load plotly if needed ---
if (!require("plotly")) install.packages("plotly")
library(plotly)
# --- Summarize weekly counts for Topic1 = "Yes" ---
Topic1_weekly <- APNews %>%
filter(Topic1 == "Yes") %>%
group_by(Week) %>%
summarize(Count = n(), .groups = "drop") %>%
mutate(Topic = "Economic Gain") # Note custom Topic1 label
# --- Summarize weekly counts for Topic2 = "Yes" ---
Topic2_weekly <- APNews %>%
filter(Topic2 == "Yes") %>%
group_by(Week) %>%
summarize(Count = n(), .groups = "drop") %>%
mutate(Topic = "Economic Burden") # Note custom Topic2 label
# --- Combine both summaries into one data frame ---
Weekly_counts <- bind_rows(Topic2_weekly, Topic1_weekly)
# --- Fill in missing combinations with zero counts ---
Weekly_counts <- Weekly_counts %>%
tidyr::complete(
Topic,
Week = full_seq(range(Week), 1), # generate all week numbers
fill = list(Count = 0)
) %>%
arrange(Topic, Week)
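# Optional: confirm both topics now have a row for every week
count(Weekly_counts, Topic)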
# --- Create interactive plotly line chart ---
AS1 <- plot_ly(
data = Weekly_counts,
x = ~Week,
y = ~Count,
color = ~Topic,
colors = c("steelblue", "firebrick"),
type = "scatter",
mode = "lines+markers",
line = list(width = 2),
marker = list(size = 6)
) %>%
layout(
    title = "Weekly Counts of Economic Gain- and Economic Burden-Framed AP News Articles",
xaxis = list(
title = "Week Number (starting with Week 1 of 2025)",
dtick = 1
),
yaxis = list(title = "Number of Articles"),
legend = list(title = list(text = "Topic")),
hovermode = "x unified"
)
# ============================================
# --- Show the chart ---
# ============================================
AS1
# ============================================================
# Setup: Install and Load Required Packages
# ============================================================
if (!require("tidyverse")) install.packages("tidyverse")
if (!require("plotly")) install.packages("plotly")
if (!require("gt")) install.packages("gt")
if (!require("gtExtras")) install.packages("gtExtras")
if (!require("broom")) install.packages("broom")
library(tidyverse)
library(plotly)
library(gt)
library(gtExtras)
library(broom)
options(scipen = 999)
# ============================================================
# Data Preparation
# ============================================================
# Reshape to wide form
mydata <- Weekly_counts %>%
pivot_wider(names_from = Topic, values_from = Count)
names(mydata) <- make.names(names(mydata))
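# make.names() turns the labels "Economic Gain" and "Economic Burden"
# into the syntactic column names Economic.Gain and Economic.Burden
# referenced just below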
# Specify the two variables involved
mydata$V1 <- mydata$Economic.Gain # <== Customize this
mydata$V2 <- mydata$Economic.Burden # <== Customize this
# ============================================================
# Compute Pair Differences
# ============================================================
mydata$PairDifferences <- mydata$V2 - mydata$V1
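# A positive difference means Economic Burden coverage exceeded
# Economic Gain coverage in that week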
# ============================================================
# Interactive Histogram of Pair Differences
# ============================================================
hist_plot <- plot_ly(
data = mydata,
x = ~PairDifferences,
type = "histogram",
marker = list(color = "#1f78b4", line = list(color = "black", width = 1))
) %>%
layout(
title = "Distribution of Pair Differences",
xaxis = list(title = "Pair Differences"),
yaxis = list(title = "Count"),
shapes = list(
list(
type = "line",
x0 = mean(mydata$PairDifferences, na.rm = TRUE),
x1 = mean(mydata$PairDifferences, na.rm = TRUE),
y0 = 0,
y1 = max(table(mydata$PairDifferences)),
line = list(color = "red", dash = "dash")
)
)
)
# ============================================================
# Descriptive Statistics
# ============================================================
desc_stats <- mydata %>%
summarise(
count = n(),
mean = mean(PairDifferences, na.rm = TRUE),
sd = sd(PairDifferences, na.rm = TRUE),
min = min(PairDifferences, na.rm = TRUE),
max = max(PairDifferences, na.rm = TRUE)
)
desc_table <- desc_stats %>%
gt() %>%
gt_theme_538() %>%
tab_header(title = "Descriptive Statistics: Pair Differences") %>%
fmt_number(columns = where(is.numeric), decimals = 3)
# ============================================================
# Normality Test (Shapiro-Wilk)
# ============================================================
shapiro_res <- shapiro.test(mydata$PairDifferences)
shapiro_table <- tidy(shapiro_res) %>%
select(statistic, p.value, method) %>%
gt() %>%
gt_theme_538() %>%
tab_header(title = "Normality Test (Shapiro-Wilk)") %>%
fmt_number(columns = c(statistic, p.value), decimals = 4) %>%
tab_source_note(
source_note = "If the P.VALUE is 0.05 or less, the number of pairs is fewer than 40, and the distribution of pair differences shows obvious non-normality or outliers, consider using the Wilcoxon Signed Rank Test results instead of the Paired-Samples t-Test results."
)
# ============================================================
# Reshape Data for Repeated-Measures Plot
# ============================================================
df_long <- mydata %>%
pivot_longer(cols = c(V1, V2),
names_to = "Measure",
values_to = "Value")
# ============================================================
# Repeated-Measures Boxplot (Interactive, with Means)
# ============================================================
group_means <- df_long %>%
group_by(Measure) %>%
summarise(mean_value = mean(Value), .groups = "drop")
boxplot_measures <- plot_ly() %>%
add_trace(
data = df_long,
x = ~Measure, y = ~Value,
type = "box",
boxpoints = "outliers",
marker = list(color = "red", size = 4),
line = list(color = "black"),
fillcolor = "royalblue",
name = ""
) %>%
add_trace(
data = group_means,
x = ~Measure, y = ~mean_value,
type = "scatter", mode = "markers",
marker = list(
symbol = "diamond", size = 9,
color = "black", line = list(color = "white", width = 1)
),
text = ~paste0("Mean = ", round(mean_value, 2)),
hoverinfo = "text",
name = "Group Mean"
) %>%
layout(
title = "Boxplot of Repeated Measures (V1 vs V2) with Means",
xaxis = list(title = "Measure"),
yaxis = list(title = "Value"),
showlegend = FALSE
)
# ============================================================
# Parametric Test (Paired-Samples t-Test)
# ============================================================
t_res <- t.test(mydata$V2, mydata$V1, paired = TRUE)
t_table <- tidy(t_res) %>%
select(statistic, parameter, p.value, conf.low, conf.high, method) %>%
gt() %>%
gt_theme_538() %>%
tab_header(title = "Paired-Samples t-Test") %>%
fmt_number(columns = c(statistic, p.value, conf.low, conf.high), decimals = 4)
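# Optional: the hypothesis is directional (Burden > Gain), so a one-sided
# test could also be reported; with these data the conclusion is unchanged
t_res_directional <- t.test(mydata$V2, mydata$V1, paired = TRUE,
                            alternative = "greater")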
t_summary <- mydata %>%
select(V1, V2) %>%
summarise_all(list(Mean = mean, SD = sd)) %>%
gt() %>%
gt_theme_538() %>%
tab_header(title = "Group Means and SDs (t-Test)") %>%
fmt_number(columns = everything(), decimals = 3)
# ============================================================
# Nonparametric Test (Wilcoxon Signed Rank)
# ============================================================
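# Note: the argument order below (V1, V2) is reversed relative to the t-test;
# the two-sided p-value is unaffected. exact = FALSE requests the normal
# approximation (with continuity correction), avoiding the warning that
# exact p-values produce when the pair differences contain ties.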
wilcox_res <- wilcox.test(mydata$V1, mydata$V2, paired = TRUE,
exact = FALSE)
wilcox_table <- tidy(wilcox_res) %>%
select(statistic, p.value, method) %>%
gt() %>%
gt_theme_538() %>%
tab_header(title = "Wilcoxon Signed Rank Test") %>%
fmt_number(columns = c(statistic, p.value), decimals = 4)
wilcox_summary <- mydata %>%
select(V1, V2) %>%
summarise_all(list(Mean = mean, SD = sd)) %>%
gt() %>%
gt_theme_538() %>%
tab_header(title = "Group Means and SDs (Wilcoxon)") %>%
fmt_number(columns = everything(), decimals = 3)
# ============================================================
# Results Summary (in specified order)
# ============================================================
hist_plot
desc_table
shapiro_table
boxplot_measures
t_table
t_summary
wilcox_table
wilcox_summary