Analysis of Intentionality Cues in US Immigration Discourse
Introduction
Immigration has become a major political fault line in many Western societies (Card et al., 2022; Dancygier & Margalit, 2020). Why do people hold such polarized views? Research on immigration attitudes has approached this question through the lens of costs and benefits, particularly by emphasizing perceived threats (Lutz & Bitschnau, 2023). Specifically, people are more likely to reject immigration when they see it as a source of unemployment, crime, or social conflict. These explanations share a key feature: they tend to focus on the outcomes of immigration—its (perceived) tangible effects on the host society. Yet, social evaluations are not based solely on actions and consequences; they also depend on the perceived mental states behind those actions—intentions, motivations, and reasons. Humans possess a cognitive ability for mind-reading, allowing them to attribute mental states to others (Ho, Saxe & Cushman, 2022). Crucially, mind perception plays a central role in moral judgment and emotional responses to actions (Barrett & Saxe, 2021; Gray et al., 2012; Sell et al., 2017).
A key dimension of mind perception that strongly influences moral judgment and social evaluation is intentionality (Barrett et al., 2016; Barrett & Saxe, 2021; Cushman, 2015; Fincher et al., 2018; Gray et al., 2012). People judge an action more harshly when they perceive it as intentional and, conversely, more leniently when they see it as unintentional. When assessing intentionality, individuals rely on various mental concepts, including goals, motivations, attitudes, and character traits. This mechanism is evident in attitudes toward redistribution: people are more likely to support welfare policies when they believe recipients are hardworking and not responsible for their situation (Aarøe & Petersen, 2014; Petersen, 2012; van Oorschot, 2000, 2006). Similarly, the perceived intent behind an aggressive act can dramatically alter its social evaluation—acts perceived as driven by harmful intent are judged far more negatively and elicit stronger negative emotional responses (Sell et al., 2017). Accordingly, intentionality cues also shape public perceptions of immigrants: their perceived motivation to work, attitude toward the host society, and reasons for migrating all influence attitudes toward them independently of the costs and benefits that they generate for the host society (Kootstra, 2016; Naumann et al., 2024; Reeskens & van der Meer, 2019).
Unraveling the psychological underpinnings of attitudes towards immigration is essential for explaining major trends in politics. Politicians and other political actors can be seen as strategic agents who seek to mobilize voters through rhetorical strategies and policy stances, but their effectiveness depends on aligning these efforts with the psychology of their audience. To make an issue more salient, political entrepreneurs must frame it in a psychologically compelling way—one that effectively engages cognitive mechanisms to capture attention, evoke emotions, and generate support. In this sense, political rhetoric can be understood as a form of “cultural technology,” intuitively designed by self-interested actors to exploit psychological predispositions (Dubourg & Baumard, 2021; Fitouchi et al., 2021; Fitouchi & Singh, 2022; Sijilmassi et al., 2024). This is particularly evident in political discourse on immigration: while immigration was a low-salience issue in many Western countries during the 1950s and 1960s, it has become one of the most politically charged topics since the 2000s (Card et al., 2022; Dancygier & Margalit, 2020; Simonsen & Widmann, 2023). This shift in salience is, in part, the result of sustained political narratives that have framed immigration as a problem in mass media and political speeches (Dancygier & Margalit, 2020; Eberl et al., 2018).
Given the central role of intentionality in moral judgment and social evaluation, highlighting intentionality cues should be a particularly effective rhetorical strategy for increasing the salience of immigration and mobilizing voters on this issue. As immigration becomes more politically salient and polarized, we expect a growing emphasis on intentionality in political discourse. Anti-immigration parties are likely to underscore perceived negative intentions of immigrants to elicit hostility and moral outrage among their supporters, whereas pro-immigration parties will emphasize perceived positive intentions to foster empathy and support.
To test our hypotheses, we will use large language models (LLMs) to annotate extensive corpora of immigration-related texts, including parliamentary speeches and political manifestos. As a first step, we will quantify intentionality cues in a dataset of approximately 250,000 excerpts from US congressional speeches on immigration since 1880. This script tests our core hypotheses regarding the presence of intentionality cues in US immigration discourse, considering time trends, tone (positive/negative), salience, and polarization.
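For illustration, the sketch below shows how a single excerpt could be sent to an LLM and turned into a binary intentionality label. The helper name annotate_excerpt, the prompt wording, and the mapping to the gpt4_label_binary2 variable used below are assumptions, not the exact annotation pipeline.

# Minimal sketch of the annotation step (assumed helper name, prompt wording,
# and label mapping); requires an OpenAI API key in OPENAI_API_KEY.
library(httr)

annotate_excerpt <- function(text, api_key = Sys.getenv("OPENAI_API_KEY")) {
  prompt <- paste0(
    "Does the following congressional excerpt attribute intentions, motivations, ",
    "or reasons to immigrants? Answer only 1 (yes) or 0 (no).\n\n", text
  )
  resp <- POST(
    "https://api.openai.com/v1/chat/completions",
    add_headers(Authorization = paste("Bearer", api_key)),
    body = list(
      model = "gpt-4",
      temperature = 0,
      messages = list(list(role = "user", content = prompt))
    ),
    encode = "json"
  )
  # The parsed 0/1 answer would feed the binary outcome analysed below
  # (gpt4_label_binary2); error handling and batching are omitted.
  as.integer(trimws(content(resp, as = "parsed")$choices[[1]]$message$content))
}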
Main Analyses
H1: There should be an increase in intentionality cues in political discourse over time
Model:
# Main model
summary(glmer(gpt4_label_binary2 ~ year + party + chamber + (1 | state),
              data = sampled_data_annotated_arranged, family = binomial))
boundary (singular) fit: see help('isSingular')
Warning in vcov.merMod(object, use.hessian = use.hessian): variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
Warning in vcov.merMod(object, correlation = correlation, sigm = sig): variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
Generalized linear mixed model fit by maximum likelihood (Laplace
Approximation) [glmerMod]
Family: binomial ( logit )
Formula: gpt4_label_binary2 ~ year + party + chamber + (1 | state)
Data: sampled_data_annotated_arranged
AIC BIC logLik deviance df.resid
407.8 437.0 -196.9 393.8 472
Scaled residuals:
Min 1Q Median 3Q Max
-0.6351 -0.4340 -0.3809 -0.3428 3.0799
Random effects:
Groups Name Variance Std.Dev.
state (Intercept) 4e-14 2e-07
Number of obs: 479, groups: state, 53
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -9.314927 7.595201 -1.226 0.220
year 0.004093 0.003805 1.076 0.282
partyR -0.139420 0.301168 -0.463 0.643
partyUnknown 0.188366 0.580518 0.324 0.746
chamberH -0.398994 0.665278 -0.600 0.549
chamberS -0.805301 0.640040 -1.258 0.208
Correlation of Fixed Effects:
(Intr) year partyR prtyUn chmbrH
year -0.996
partyR -0.036 0.021
partyUnknwn -0.124 0.048 0.196
chamberH -0.167 0.085 -0.018 0.806
chamberS -0.120 0.039 0.010 0.780 0.901
optimizer (Nelder_Mead) convergence code: 0 (OK)
boundary (singular) fit: see help('isSingular')
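As a reading aid, the year coefficient from the main model can be translated into odds ratios (values copied from the summary above; the trend itself is not significant):

exp(0.004093)        # ~1.004: odds multiplier for intentionality cues per year
exp(0.004093 * 100)  # ~1.51: implied odds multiplier over a century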
# Alternative, post-1945
summary(glmer(gpt4_label_binary2 ~ year + party + chamber + (1 | state),
              data = sampled_data_annotated_arranged[sampled_data_annotated_arranged$year > 1945, ],
              family = binomial))
boundary (singular) fit: see help('isSingular')
Warning in vcov.merMod(object, use.hessian = use.hessian): variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
Warning in vcov.merMod(object, use.hessian = use.hessian): variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
Generalized linear mixed model fit by maximum likelihood (Laplace
Approximation) [glmerMod]
Family: binomial ( logit )
Formula: gpt4_label_binary2 ~ year + party + chamber + (1 | state)
Data:
sampled_data_annotated_arranged[sampled_data_annotated_arranged$year >
1945, ]
AIC BIC logLik deviance df.resid
332.5 359.8 -159.2 318.5 361
Scaled residuals:
Min 1Q Median 3Q Max
-0.6322 -0.4319 -0.4108 -0.3780 3.1857
Random effects:
Groups Name Variance Std.Dev.
state (Intercept) 0 0
Number of obs: 368, groups: state, 52
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -6.009881 14.224225 -0.423 0.673
year 0.002782 0.007129 0.390 0.696
partyR -0.169630 0.339625 -0.499 0.617
partyUnknown -0.493386 0.777366 -0.635 0.526
chamberH -1.142080 0.840915 -1.358 0.174
chamberS -1.314138 0.818048 -1.606 0.108
Correlation of Fixed Effects:
(Intr) year partyR prtyUn chmbrH
year -0.998
partyR 0.053 -0.062
partyUnknwn -0.145 0.091 0.151
chamberH -0.071 0.015 -0.012 0.868
chamberS -0.036 -0.020 0.007 0.858 0.925
optimizer (Nelder_Mead) convergence code: 0 (OK)
boundary (singular) fit: see help('isSingular')
Plot:
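A minimal sketch of the kind of trend plot shown here (decade binning, point-plus-smoother aesthetics, and the 0/1 coding of gpt4_label_binary2 are assumptions, not the exact plotting code):

library(dplyr)
library(ggplot2)

# Share of excerpts with intentionality cues per decade, with a linear trend
sampled_data_annotated_arranged %>%
  mutate(decade = floor(year / 10) * 10) %>%
  group_by(decade) %>%
  summarise(prop_intentionality = mean(gpt4_label_binary2 == 1, na.rm = TRUE)) %>%
  ggplot(aes(decade, prop_intentionality)) +
  geom_point() +
  geom_smooth(method = "lm") +
  labs(x = "Decade", y = "Proportion of excerpts with intentionality cues")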
H2a & H2b: Intentionality cues should become more prevalent in both positive (H2a) and negative (H2b) claims over time
Model:
# H2a
summary(glmer(gpt4_label_binary2 ~ year + party + chamber + (1 | state),
              data = sampled_data_annotated_arranged[sampled_data_annotated_arranged$tone2 == "Positive", ],
              family = binomial))
Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge with max|grad| = 0.530994 (tol = 0.002, component 1)
Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, : Model is nearly unidentifiable: very large eigenvalue
- Rescale variables?;Model is nearly unidentifiable: large eigenvalue ratio
- Rescale variables?
Generalized linear mixed model fit by maximum likelihood (Laplace
Approximation) [glmerMod]
Family: binomial ( logit )
Formula: gpt4_label_binary2 ~ year + party + chamber + (1 | state)
Data:
sampled_data_annotated_arranged[sampled_data_annotated_arranged$tone2 ==
"Positive", ]
AIC BIC logLik deviance df.resid
232.7 255.7 -109.4 218.7 189
Scaled residuals:
Min 1Q Median 3Q Max
-0.8583 -0.6199 -0.4342 1.1301 1.9561
Random effects:
Groups Name Variance Std.Dev.
state (Intercept) 0.6999 0.8366
Number of obs: 196, groups: state, 46
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -6.489307 19.882600 -0.326 0.744
year 0.002737 0.009896 0.277 0.782
partyR -0.159385 0.501186 -0.318 0.750
partyUnknown 0.316862 1.158265 0.274 0.784
chamberH -0.045019 0.837551 -0.054 0.957
chamberS -0.297689 0.804338 -0.370 0.711
Correlation of Fixed Effects:
(Intr) year partyR prtyUn chmbrH
year -0.999
partyR -0.315 0.308
partyUnknwn -0.239 0.214 0.194
chamberH -0.270 0.233 0.070 0.551
chamberS -0.233 0.197 0.061 0.532 0.862
optimizer (Nelder_Mead) convergence code: 0 (OK)
Model failed to converge with max|grad| = 0.530994 (tol = 0.002, component 1)
Model is nearly unidentifiable: very large eigenvalue
- Rescale variables?
Model is nearly unidentifiable: large eigenvalue ratio
- Rescale variables?
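One common response to the “Rescale variables?” warnings above is to centre and rescale year before refitting; a sketch (year_c is an assumed new column, and the refit is not reported here):

library(lme4)

# Centre year around 1950 and express it in decades, then refit the H2a model
sampled_data_annotated_arranged$year_c <- (sampled_data_annotated_arranged$year - 1950) / 10
summary(glmer(gpt4_label_binary2 ~ year_c + party + chamber + (1 | state),
              data = sampled_data_annotated_arranged[sampled_data_annotated_arranged$tone2 == "Positive", ],
              family = binomial))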
# H2b
summary(glmer(gpt4_label_binary2 ~ year + party + chamber + (1 | state),
              data = sampled_data_annotated_arranged[sampled_data_annotated_arranged$tone2 == "Negative", ],
              family = binomial))
Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
unable to evaluate scaled gradient
Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge: degenerate Hessian with 1 negative eigenvalues
Warning in vcov.merMod(object, use.hessian = use.hessian): variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
Warning in vcov.merMod(object, correlation = correlation, sigm = sig): variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
Generalized linear mixed model fit by maximum likelihood (Laplace
Approximation) [glmerMod]
Family: binomial ( logit )
Formula: gpt4_label_binary2 ~ year + party + chamber + (1 | state)
Data:
sampled_data_annotated_arranged[sampled_data_annotated_arranged$tone2 ==
"Negative", ]
AIC BIC logLik deviance df.resid
161.0 186.5 -73.5 147.0 276
Scaled residuals:
Min 1Q Median 3Q Max
-0.4510 -0.2889 -0.2514 -0.2115 4.5112
Random effects:
Groups Name Variance Std.Dev.
state (Intercept) 0.3411 0.5841
Number of obs: 283, groups: state, 48
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -9.543005 12.791589 -0.746 0.456
year 0.003434 0.006423 0.535 0.593
partyR 0.433093 0.515032 0.841 0.400
partyUnknown 0.259075 1.283596 0.202 0.840
chamberH 0.042139 1.554772 0.027 0.978
chamberS -0.397690 1.511957 -0.263 0.793
Correlation of Fixed Effects:
(Intr) year partyR prtyUn chmbrH
year -0.992
partyR 0.070 -0.092
partyUnknwn -0.063 -0.017 0.214
chamberH -0.167 0.051 -0.030 0.589
chamberS -0.113 -0.004 0.010 0.570 0.942
optimizer (Nelder_Mead) convergence code: 0 (OK)
unable to evaluate scaled gradient
Model failed to converge: degenerate Hessian with 1 negative eigenvalues
Plot:
H3: Intentionality cues should be more prevalent in congressional periods when immigration is more salient
NB: Salience is measured as the percentage of immigration-related tokens out of all tokens in a given congressional period.
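A minimal sketch of how such a salience score could be computed (the input table speech_tokens and its columns congress, n_immigration_tokens, and n_total_tokens are assumed names, not the exact preprocessing code):

library(dplyr)

# Salience per congressional period: immigration-related tokens as a
# percentage of all tokens spoken in that period
salience_by_congress <- speech_tokens %>%
  group_by(congress) %>%
  summarise(salience_text = 100 * sum(n_immigration_tokens) / sum(n_total_tokens))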
Model:
summary(glmer(gpt4_label_binary2 ~ salience_text + year + party + chamber + (1 | state),
              data = sampled_data_annotated_arranged, family = binomial))
boundary (singular) fit: see help('isSingular')
Warning in vcov.merMod(object, use.hessian = use.hessian): variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
Warning in vcov.merMod(object, correlation = correlation, sigm = sig): variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
Generalized linear mixed model fit by maximum likelihood (Laplace
Approximation) [glmerMod]
Family: binomial ( logit )
Formula: gpt4_label_binary2 ~ salience_text + year + party + chamber +
(1 | state)
Data: sampled_data_annotated_arranged
AIC BIC logLik deviance df.resid
409.8 443.1 -196.9 393.8 471
Scaled residuals:
Min 1Q Median 3Q Max
-0.6319 -0.4371 -0.3795 -0.3448 3.1023
Random effects:
Groups Name Variance Std.Dev.
state (Intercept) 0 0
Number of obs: 479, groups: state, 53
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -9.929307 8.310771 -1.195 0.232
salience_text -0.021327 0.114993 -0.185 0.853
year 0.004426 0.004219 1.049 0.294
partyR -0.136885 0.301522 -0.454 0.650
partyUnknown 0.185532 0.580713 0.319 0.749
chamberH -0.390215 0.666999 -0.585 0.559
chamberS -0.794067 0.642738 -1.235 0.217
Correlation of Fixed Effects:
(Intr) slnc_t year partyR prtyUn chmbrH
salienc_txt 0.397
year -0.996 -0.424
partyR -0.050 -0.045 0.037
partyUnknwn -0.104 0.026 0.033 0.194
chamberH -0.180 -0.070 0.106 -0.015 0.802
chamberS -0.146 -0.093 0.075 0.015 0.774 0.901
optimizer (Nelder_Mead) convergence code: 0 (OK)
boundary (singular) fit: see help('isSingular')
H4: Intentionality cues should be more prevalent in congressional periods when immigration is more polarized
NB: Polarization is measured as the average difference between Republicans and Democrats in the mean tone of speeches about immigration (from positive to negative) in a given congressional period.
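A minimal sketch of how such a polarization score could be computed (the input table speech_tone and its columns congress, party, and tone are assumed names; whether the gap is taken as an absolute difference is also an assumption):

library(dplyr)
library(tidyr)

# Polarization per congressional period: gap between Republicans' and
# Democrats' mean speech tone towards immigration
polarization_by_congress <- speech_tone %>%
  filter(party %in% c("R", "D")) %>%
  group_by(congress, party) %>%
  summarise(mean_tone = mean(tone, na.rm = TRUE), .groups = "drop") %>%
  pivot_wider(names_from = party, values_from = mean_tone) %>%
  mutate(polarization_score = abs(R - D))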
Model:
summary(glmer(gpt4_label_binary2 ~ polarization_score + year + party + chamber + (1 | state),
              data = sampled_data_annotated_arranged, family = binomial))
boundary (singular) fit: see help('isSingular')
Warning in vcov.merMod(object, use.hessian = use.hessian): variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
Warning in vcov.merMod(object, correlation = correlation, sigm = sig): variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
Generalized linear mixed model fit by maximum likelihood (Laplace
Approximation) [glmerMod]
Family: binomial ( logit )
Formula: gpt4_label_binary2 ~ polarization_score + year + party + chamber +
(1 | state)
Data: sampled_data_annotated_arranged
AIC BIC logLik deviance df.resid
407.6 440.9 -195.8 391.6 471
Scaled residuals:
Min 1Q Median 3Q Max
-0.6937 -0.4256 -0.3727 -0.3242 3.5238
Random effects:
Groups Name Variance Std.Dev.
state (Intercept) 0 0
Number of obs: 479, groups: state, 53
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 5.944368 12.410058 0.479 0.632
polarization_score 1.039835 0.692981 1.501 0.133
year -0.003767 0.006336 -0.594 0.552
partyR -0.177788 0.302903 -0.587 0.557
partyUnknown 0.206908 0.582475 0.355 0.722
chamberH -0.517164 0.671978 -0.770 0.442
chamberS -0.926444 0.647712 -1.430 0.153
Correlation of Fixed Effects:
(Intr) plrzt_ year partyR prtyUn chmbrH
polrztn_scr 0.815
year -0.998 -0.823
partyR -0.096 -0.088 0.087
partyUnknwn -0.047 0.026 -0.001 0.194
chamberH -0.194 -0.123 0.145 -0.004 0.797
chamberS -0.177 -0.132 0.128 0.020 0.769 0.903
optimizer (Nelder_Mead) convergence code: 0 (OK)
boundary (singular) fit: see help('isSingular')
Plot:
For reference, the evolution of the polarization score per year:
H5: Intentionality cues in political discourse about immigration should be associated with more moralization
TBD
Additional research questions
ARQ1a): Is there an asymmetry in the prevalence of intentionality cues between positive and negative discourse about immigration?
Model:
summary(glmer(gpt4_label_binary2 ~ tone2 + year + party + chamber + (1 | state),
              data = sampled_data_annotated_arranged, family = binomial))
boundary (singular) fit: see help('isSingular')
Warning in vcov.merMod(object, use.hessian = use.hessian): variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
Warning in vcov.merMod(object, correlation = correlation, sigm = sig): variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
Generalized linear mixed model fit by maximum likelihood (Laplace
Approximation) [glmerMod]
Family: binomial ( logit )
Formula: gpt4_label_binary2 ~ tone2 + year + party + chamber + (1 | state)
Data: sampled_data_annotated_arranged
AIC BIC logLik deviance df.resid
385.7 419.1 -184.9 369.7 471
Scaled residuals:
Min 1Q Median 3Q Max
-0.7067 -0.5168 -0.2882 -0.2595 3.9845
Random effects:
Groups Name Variance Std.Dev.
state (Intercept) 4.455e-14 2.111e-07
Number of obs: 479, groups: state, 53
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -7.870293 8.014611 -0.982 0.326
tone2Positive 1.373218 0.290842 4.722 2.34e-06 ***
year 0.002795 0.004010 0.697 0.486
partyR 0.085255 0.313828 0.272 0.786
partyUnknown 0.192810 0.597904 0.322 0.747
chamberH -0.092061 0.684005 -0.135 0.893
chamberS -0.387008 0.662951 -0.584 0.559
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Correlation of Fixed Effects:
(Intr) tn2Pst year partyR prtyUn chmbrH
tone2Positv 0.019
year -0.996 -0.051
partyR -0.101 0.150 0.082
partyUnknwn -0.156 0.004 0.082 0.203
chamberH -0.173 0.077 0.091 0.007 0.806
chamberS -0.143 0.107 0.061 0.029 0.778 0.902
optimizer (Nelder_Mead) convergence code: 0 (OK)
boundary (singular) fit: see help('isSingular')
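For interpretation, the significant tone coefficient corresponds to a sizeable odds ratio (value copied from the summary above):

exp(1.373218)  # ~3.95: positive-tone excerpts have roughly four times the odds of containing intentionality cues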
Plot:
Robustness check:
ARQ1b): Does intentionality rhetoric converge across tones when immigration becomes more salient?
Warning in vcov.merMod(object, use.hessian = use.hessian): variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
Warning in vcov.merMod(object, correlation = correlation, sigm = sig): variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
Generalized linear mixed model fit by maximum likelihood (Laplace
Approximation) [glmerMod]
Family: binomial ( logit )
Formula: gpt4_label_binary2 ~ tone2 + salience_text + year + party + chamber +
tone2 * salience_text + (1 | state)
Data: sampled_data_annotated_arranged
AIC BIC logLik deviance df.resid
389.6 431.4 -184.8 369.6 469
Scaled residuals:
Min 1Q Median 3Q Max
-0.7355 -0.5121 -0.2882 -0.2607 3.9609
Random effects:
Groups Name Variance Std.Dev.
state (Intercept) 0 0
Number of obs: 479, groups: state, 53
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -7.395531 8.639065 -0.856 0.3920
tone2Positive 1.244351 0.629888 1.976 0.0482 *
salience_text -0.015600 0.193194 -0.081 0.9356
year 0.002576 0.004396 0.586 0.5579
partyR 0.088955 0.315056 0.282 0.7777
partyUnknown 0.203201 0.598830 0.339 0.7344
chamberH -0.098306 0.684630 -0.144 0.8858
chamberS -0.390749 0.664589 -0.588 0.5566
tone2Positive:salience_text 0.053034 0.224209 0.237 0.8130
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Correlation of Fixed Effects:
(Intr) tn2Pst slnc_t year partyR prtyUn chmbrH chmbrS
tone2Positv 0.070
salienc_txt 0.268 0.726
year -0.995 -0.122 -0.319
partyR -0.112 0.000 -0.081 0.097
partyUnknwn -0.134 -0.038 -0.015 0.065 0.205
chamberH -0.179 0.034 -0.030 0.104 0.009 0.802
chamberS -0.160 0.023 -0.062 0.086 0.033 0.773 0.902
tn2Pstv:sl_ -0.054 -0.887 -0.802 0.097 0.076 0.046 0.000 0.027
optimizer (Nelder_Mead) convergence code: 0 (OK)
boundary (singular) fit: see help('isSingular')
Plot:
Robustness check:
ARQ1c): Does intentionality rhetoric converge across tones when immigration becomes more polarized?
Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge with max|grad| = 0.81041 (tol = 0.002, component 1)
Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, : Model is nearly unidentifiable: very large eigenvalue
- Rescale variables?;Model is nearly unidentifiable: large eigenvalue ratio
- Rescale variables?
Generalized linear mixed model fit by maximum likelihood (Laplace
Approximation) [glmerMod]
Family: binomial ( logit )
Formula: gpt4_label_binary2 ~ tone2 + polarization_score + year + party +
chamber + tone2 * polarization_score + (1 | state)
Data: sampled_data_annotated_arranged
AIC BIC logLik deviance df.resid
386.0 427.7 -183.0 366.0 469
Scaled residuals:
Min 1Q Median 3Q Max
-0.8267 -0.4684 -0.2895 -0.2278 4.8383
Random effects:
Groups Name Variance Std.Dev.
state (Intercept) 0.1094 0.3308
Number of obs: 479, groups: state, 53
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 12.430773 5.540236 2.244 0.02485 *
tone2Positive 1.449557 0.463114 3.130 0.00175 **
polarization_score 1.397779 0.684104 2.043 0.04103 *
year -0.007709 0.002683 -2.873 0.00407 **
partyR 0.058254 0.349455 0.167 0.86761
partyUnknown 0.227383 0.718506 0.316 0.75165
chamberH -0.233458 0.696868 -0.335 0.73762
chamberS -0.558062 0.669509 -0.834 0.40454
tone2Positive:polarization_score 0.021062 0.843578 0.025 0.98008
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Correlation of Fixed Effects:
(Intr) tn2Pst plrzt_ year partyR prtyUn chmbrH chmbrS
tone2Positv -0.239
polrztn_scr 0.218 0.509
year -0.989 0.173 -0.271
partyR -0.323 0.160 -0.092 0.288
partyUnknwn -0.286 0.056 -0.023 0.186 0.265
chamberH -0.256 0.105 -0.014 0.134 0.060 0.702
chamberS -0.169 0.066 -0.044 0.052 0.033 0.656 0.892
tn2Pstv:pl_ 0.024 -0.711 -0.742 0.012 0.071 0.034 -0.019 0.011
optimizer (Nelder_Mead) convergence code: 0 (OK)
Model failed to converge with max|grad| = 0.81041 (tol = 0.002, component 1)
Model is nearly unidentifiable: very large eigenvalue
- Rescale variables?
Model is nearly unidentifiable: large eigenvalue ratio
- Rescale variables?
Plot:
Robustness check:
ARQ2a): Is there an asymmetry between parties in the evolution of intentionality cues in discourse about immigration?
Model:
sampled_data_annotated_arranged_bipartisan <-
  sampled_data_annotated_arranged[sampled_data_annotated_arranged$party %in% c("R", "D"), ]
summary(glmer(gpt4_label_binary2 ~ party + year + chamber + (1 | state),
              data = sampled_data_annotated_arranged_bipartisan, family = binomial))
Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge with max|grad| = 0.377768 (tol = 0.002, component 1)
Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, : Model is nearly unidentifiable: very large eigenvalue
- Rescale variables?;Model is nearly unidentifiable: large eigenvalue ratio
- Rescale variables?
Generalized linear mixed model fit by maximum likelihood (Laplace
Approximation) [glmerMod]
Family: binomial ( logit )
Formula: gpt4_label_binary2 ~ party + year + chamber + (1 | state)
Data: sampled_data_annotated_arranged_bipartisan
AIC BIC logLik deviance df.resid
322.9 342.9 -156.4 312.9 398
Scaled residuals:
Min 1Q Median 3Q Max
-0.5875 -0.4148 -0.3548 -0.3064 3.3445
Random effects:
Groups Name Variance Std.Dev.
state (Intercept) 0.1936 0.4401
Number of obs: 403, groups: state, 52
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -13.981377 2.501854 -5.588 2.29e-08 ***
partyR -0.112481 0.309721 -0.363 0.716
year 0.006231 0.001259 4.951 7.40e-07 ***
chamberS -0.498686 0.315022 -1.583 0.113
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Correlation of Fixed Effects:
(Intr) partyR year
partyR -0.086
year -0.994 0.030
chamberS 0.023 0.053 -0.075
optimizer (Nelder_Mead) convergence code: 0 (OK)
Model failed to converge with max|grad| = 0.377768 (tol = 0.002, component 1)
Model is nearly unidentifiable: very large eigenvalue
- Rescale variables?
Model is nearly unidentifiable: large eigenvalue ratio
- Rescale variables?
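For interpretation (with the caveat that the convergence warnings above suggest rescaling year before trusting this fit), the year coefficient implies the following odds ratios (values copied from the summary):

exp(0.006231)        # ~1.006: odds multiplier per additional year
exp(0.006231 * 100)  # ~1.86: implied odds multiplier over a century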
Plot:
Robustness check:
ARQ2b): Does intentionality rhetoric converge across parties when immigration becomes more salient?
Model:
summary(glmer(gpt4_label_binary2 ~ party + salience_text + year + chamber + party * salience_text + (1 | state),
              data = sampled_data_annotated_arranged_bipartisan, family = binomial))
Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge with max|grad| = 0.317158 (tol = 0.002, component 1)
Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, : Model is nearly unidentifiable: very large eigenvalue
- Rescale variables?;Model is nearly unidentifiable: large eigenvalue ratio
- Rescale variables?
Generalized linear mixed model fit by maximum likelihood (Laplace
Approximation) [glmerMod]
Family: binomial ( logit )
Formula: gpt4_label_binary2 ~ party + salience_text + year + chamber +
party * salience_text + (1 | state)
Data: sampled_data_annotated_arranged_bipartisan
AIC BIC logLik deviance df.resid
325.0 353.0 -155.5 311.0 396
Scaled residuals:
Min 1Q Median 3Q Max
-0.6559 -0.4049 -0.3540 -0.2846 3.7194
Random effects:
Groups Name Variance Std.Dev.
state (Intercept) 0.2488 0.4988
Number of obs: 403, groups: state, 52
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -14.817169 2.279278 -6.501 7.99e-11 ***
partyR -0.948433 0.708381 -1.339 0.1806
salience_text -0.153130 0.152460 -1.004 0.3152
year 0.006841 0.001153 5.935 2.94e-09 ***
chamberS -0.534143 0.319265 -1.673 0.0943 .
partyR:salience_text 0.321931 0.241107 1.335 0.1818
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Correlation of Fixed Effects:
(Intr) partyR slnc_t year chmbrS
partyR -0.108
salienc_txt -0.059 0.521
year -0.981 0.000 -0.095
chamberS -0.006 0.104 -0.024 -0.048
prtyR:slnc_ 0.086 -0.895 -0.637 0.013 -0.089
optimizer (Nelder_Mead) convergence code: 0 (OK)
Model failed to converge with max|grad| = 0.317158 (tol = 0.002, component 1)
Model is nearly unidentifiable: very large eigenvalue
- Rescale variables?
Model is nearly unidentifiable: large eigenvalue ratio
- Rescale variables?
Plot:
Robustness check:
ARQ2c): Does intentionality rhetoric converge across parties when immigration becomes more polarized?
Model:
summary(glmer(gpt4_label_binary2 ~ party + polarization_score + year + chamber + party * polarization_score + (1 | state),
              data = sampled_data_annotated_arranged_bipartisan, family = binomial))
Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge with max|grad| = 0.316811 (tol = 0.002, component 1)
Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, : Model is nearly unidentifiable: very large eigenvalue
- Rescale variables?;Model is nearly unidentifiable: large eigenvalue ratio
- Rescale variables?
Generalized linear mixed model fit by maximum likelihood (Laplace
Approximation) [glmerMod]
Family: binomial ( logit )
Formula: gpt4_label_binary2 ~ party + polarization_score + year + chamber +
party * polarization_score + (1 | state)
Data: sampled_data_annotated_arranged_bipartisan
AIC BIC logLik deviance df.resid
323.9 351.9 -154.9 309.9 396
Scaled residuals:
Min 1Q Median 3Q Max
-0.7424 -0.4046 -0.3468 -0.2847 4.4696
Random effects:
Groups Name Variance Std.Dev.
state (Intercept) 0.2534 0.5034
Number of obs: 403, groups: state, 52
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.4077049 2.2849354 -1.054 0.292
partyR -0.6735497 0.5004234 -1.346 0.178
polarization_score 0.2743152 0.5958392 0.460 0.645
year 0.0003169 0.0011681 0.271 0.786
chamberS -0.5247942 0.3199626 -1.640 0.101
partyR:polarization_score 1.2903398 0.9313025 1.386 0.166
Correlation of Fixed Effects:
(Intr) partyR plrzt_ year chmbrS
partyR -0.081
polrztn_scr 0.086 0.420
year -0.989 0.009 -0.173
chamberS -0.025 0.069 -0.079 -0.025
prtyR:plrz_ 0.042 -0.774 -0.625 0.008 -0.049
optimizer (Nelder_Mead) convergence code: 0 (OK)
Model failed to converge with max|grad| = 0.316811 (tol = 0.002, component 1)
Model is nearly unidentifiable: very large eigenvalue
- Rescale variables?
Model is nearly unidentifiable: large eigenvalue ratio
- Rescale variables?
Plot:
Robustness check:
ARQ3: Is the evolution of intentionality cues in positive and negative sentences about immigration the same in both parties?
Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
unable to evaluate scaled gradient
Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge: degenerate Hessian with 1 negative eigenvalues
Warning in vcov.merMod(object, use.hessian = use.hessian): variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
Warning in vcov.merMod(object, correlation = correlation, sigm = sig): variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
Generalized linear mixed model fit by maximum likelihood (Laplace
Approximation) [glmerMod]
Family: binomial ( logit )
Formula: gpt4_label_binary2 ~ tone2 + year + party + chamber + tone2 *
year + tone2 * party + year * party + tone2 * year * party +
(1 | state)
Data: sampled_data_annotated_arranged_bipartisan
AIC BIC logLik deviance df.resid
311.9 351.9 -145.9 291.9 393
Scaled residuals:
Min 1Q Median 3Q Max
-0.7962 -0.4106 -0.2762 -0.2166 4.9603
Random effects:
Groups Name Variance Std.Dev.
state (Intercept) 0.3251 0.5702
Number of obs: 403, groups: state, 52
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.009e+01 1.948e+01 -0.518 0.604
tone2Positive 9.477e+00 2.477e+01 0.383 0.702
year 3.730e-03 9.886e-03 0.377 0.706
partyR -1.457e-01 2.548e+01 -0.006 0.995
chamberS -3.713e-01 3.356e-01 -1.106 0.269
tone2Positive:year -3.939e-03 1.254e-02 -0.314 0.753
tone2Positive:partyR -2.507e+01 3.647e+01 -0.687 0.492
year:partyR 3.224e-04 1.291e-02 0.025 0.980
tone2Positive:year:partyR 1.238e-02 1.845e-02 0.671 0.502
Correlation of Fixed Effects:
(Intr) tn2Pst year partyR chmbrS tn2Ps: tn2P:R yr:prR
tone2Positv -0.772
year -1.000 0.771
partyR -0.767 0.593 0.766
chamberS 0.101 -0.063 -0.110 -0.030
ton2Pstv:yr 0.773 -1.000 -0.773 -0.594 0.065
tn2Pstv:prR 0.523 -0.678 -0.523 -0.678 0.007 0.678
year:partyR 0.767 -0.593 -0.767 -1.000 0.031 0.594 0.677
tn2Pstv:y:R -0.524 0.678 0.524 0.678 -0.008 -0.678 -1.000 -0.679
optimizer (Nelder_Mead) convergence code: 0 (OK)
unable to evaluate scaled gradient
Model failed to converge: degenerate Hessian with 1 negative eigenvalues
Plot: