Luttrell et al. (2019) investigated whether the influence of moral counterattitudinal messages on attitude change depends on the extent to which the initial attitudes were moralized. They further explored the mediating role of perceptions of and thoughts about the message and the moderating role of political orientation. To test these hypotheses, they conducted two studies on attitudes toward recycling.
I chose this paper to replicate because I have been working on factor analysis and structural equation modeling and am therefore very interested in self-reported data. The paper matches my research experience and interests because it focuses on variables (e.g., attitudes) measured with scales. In addition, the paper used questionnaires to measure the variables of interest and reported their reliabilities, but it did not report their construct validity. Beyond replicating the original research, I want to further test the validity of the measurement tools used in the paper.
A potential challenge for this study is funding a sufficiently large sample. In the original paper, the authors paid each participant one dollar and preregistered a target of around 200 participants.
Original effect size, power analysis for samples to achieve 80%, 90%, 95% power to detect that effect size. Considerations of feasibility for selecting planned sample size.
The authors preregistered a target sample size of 200, which provided 80% power to detect an interaction effect as small as .04. They ultimately collected 415 participants for Study 1. However, to save money, I plan to collect 50 participants, each of whom will receive $1 for completing the 5-minute survey.
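As a rough check on these figures, the sketch below recomputes the required sample sizes for 80%, 90%, and 95% power, assuming that the reported .04 corresponds to Cohen's f² for the interaction term in a regression with four predictors (premessage attitudes, moral attitude basis, message type, and their interaction); that interpretation is my assumption, not something stated in the original paper.

```r
# Power analysis sketch (assumption: the ".04" interaction effect is Cohen's f^2).
library(pwr)

f2 <- 0.04          # assumed effect size of the interaction term
n_predictors <- 4   # total predictors in the full regression model

for (pow in c(.80, .90, .95)) {
  res <- pwr.f2.test(u = 1, f2 = f2, sig.level = .05, power = pow)
  n <- ceiling(res$v + n_predictors + 1)  # v = n - p - 1, so n = v + p + 1
  cat(sprintf("Power = %.2f requires approximately N = %d\n", pow, n))
}
```

Under this assumption, the 80% figure comes out close to the original preregistered target of 200.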
This experiment will be conducted through Qualtrics, and participants will be asked to complete a survey that I have already designed. The questionnaires on attitudes toward recycling, political orientation, and perceptions of the message, as well as the stimuli (practical or moral arguments against recycling) used in the replication study, will be the same as in Luttrell et al. (2019) (see the quotes below).
Premessage attitudes. Participants rated their attitudes toward recycling using three 9-point semantic-differential scales, ranging from −4 (bad, dislike, and negative) to 4 (good, like, and positive).
Perceived attitude bases. Participants reported the degree to which their attitudes toward recycling were based on their core moral beliefs or on practical concerns using 5-point scales (ranging from not at all to extremely). These items were modeled after items previously used to assess moral conviction (see Skitka, 2010). These items were presented within a small set of randomly ordered questions about people’s bases for their recycling attitudes so participants would not suspect that the study was specifically about the moral basis for those attitudes.
Political orientation. Participants reported their political orientation on two items measuring ideology for social and economic issues. The items were measured on 5-point scales ranging from very liberal to very conservative.
Counterattitudinal message. We created two versions of an antirecycling essay that appealed either to moral or to practical concerns. The moral appeal, entitled “Recycling: Harmful and Immoral,” framed its antirecycling position in moral terms (e.g., “Supporting recycling programs would be a grave moral transgression”) and cited particular moral reasons against recycling programs (e.g., “precious pets and animals [are] mercilessly killed by fumes produced in the recycling process”). By contrast, the practical appeal, entitled “Recycling: Costly and Unfeasible,” framed its antirecycling position in pragmatic terms (e.g., recycling is an “inefficient and unfeasible endeavor for most municipalities to adopt”) and cited particular economic and pragmatic concerns (e.g., “An increase in trucks greatly increases traffic, both on the highways and on city roads”). We chose a practical appeal as the comparison because it reflects a common type of nonmoral persuasive argument (e.g., Mucciaroni, 2011) that could be similarly substantive. Messages were similar in length (426–428 words) and in number of arguments, and they were designed to be equally cogent to the sample overall.
Thought listing. Participants next listed the thoughts they had while reading the antirecycling message. Participants were given six thought-listing boxes and told to enter one thought per box; however, they were not required to fill all six. These thoughts were subsequently rated by two independent coders for overall valence (i.e., whether each thought was promessage, antimessage, neutral to the message, or unrelated to the message), following common practice.
Note that since I will code the thoughts myself, there will not be two independent coders in the replication.
Postmessage attitudes. After reading the message, participants once again reported their attitudes toward recycling, using the same items used to measure premessage attitudes.
Message ratings. To ensure that participants perceived the messages as intended, we asked how much the message seemed to make arguments related to moral and practical concerns, each on 7-point scales ranging from not at all to very much.
At the beginning of the survey, participants will be asked to read an introduction to recycling and report their attitudes toward recycling. Next, participants will be randomly assigned to one of two groups: one group will read a message arguing against recycling with a moral appeal, and the other will read a message arguing against recycling with a practical appeal. After reading the message, participants in each group will report their attitudes toward recycling again. They will also be asked about their perceptions of the message, their political orientation, age, gender, and primary language.
First, we will exclude any cases for which the message would be pro-attitudinal (i.e., premessage attitudes toward recycling below the scale midpoint). Next, we will exclude any cases that are duplicates of a previous participant’s IP address.
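A minimal sketch of these exclusions is below; the data frame `d_raw` and the column names `pre_attitude` and `ip_address` are hypothetical placeholders for the Qualtrics export.

```r
# Exclusion sketch; variable names are assumptions about the Qualtrics export.
library(dplyr)

d_clean <- d_raw %>%
  filter(pre_attitude >= 0) %>%           # drop cases below the scale midpoint (message would be pro-attitudinal)
  distinct(ip_address, .keep_all = TRUE)  # keep only the first case per IP address
```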
Cronbach’s alpha will be calculated for each multi-item measure, and a confirmatory factor analysis will be conducted to assess the construct validity of the measures (see the sketch below).
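The sketch below illustrates both steps; the item names and the number of items per scale are assumptions and will be replaced with the actual survey variable names.

```r
# Reliability and CFA sketch; item names and item counts are assumptions.
library(psych)
library(lavaan)

# Cronbach's alpha for each multi-item measure
alpha(d_clean[, c("pre_att1", "pre_att2", "pre_att3")])    # premessage attitudes
alpha(d_clean[, c("moral1", "moral2", "moral3")])          # perceived moral basis

# Confirmatory factor analysis for construct validity
cfa_model <- '
  attitude        =~ pre_att1 + pre_att2 + pre_att3
  moral_basis     =~ moral1 + moral2 + moral3
  practical_basis =~ prac1 + prac2 + prac3
'
fit <- cfa(cfa_model, data = d_clean)
summary(fit, fit.measures = TRUE, standardized = TRUE)
```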
Data were submitted to t-test analyses to determine whether the messages were perceived as intended.
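A sketch of this manipulation check, assuming hypothetical columns `moral_rating`, `practical_rating`, and `condition` (coded "moral" vs. "practical"):

```r
# Manipulation-check sketch; variable names are assumptions.
t.test(moral_rating ~ condition, data = d_clean)      # moral content rated higher in the moral condition?
t.test(practical_rating ~ condition, data = d_clean)  # practical content rated higher in the practical condition?
```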
The data were submitted to a multiple regression analysis predicting postmessage attitudes in which premessage attitudes, moral attitude basis, and message type were entered in the first step of the model and the Moral Attitude Basis × Message Type interaction term was entered in the second step. Message type was effects coded so that −1 corresponded with the moral-argument condition and +1 corresponded with the practical-argument condition. Results for these predictors are interpreted in the first steps of the model in which they appear.
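A sketch of this hierarchical model, using hypothetical variable names (`post_att`, `pre_att`, `moral_basis`, `condition`):

```r
# Hierarchical regression sketch following the quoted analysis plan;
# variable names are assumptions.
d_clean$msg_type <- ifelse(d_clean$condition == "moral", -1, 1)  # effects coding

step1 <- lm(post_att ~ pre_att + moral_basis + msg_type, data = d_clean)
step2 <- lm(post_att ~ pre_att + moral_basis * msg_type, data = d_clean)

summary(step1)
summary(step2)
anova(step1, step2)  # test of the increment from adding the interaction term
```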
We computed the indirect effect and CIs with nonparametric bootstrapping (10,000 iterations) using the mediation package in R (Tingley, Yamamoto, Hirose, Keele, & Imai, 2014). We tested the indirect effect on postmessage attitudes, controlling for premessage attitudes, setting the Moral Attitude Basis × Message Type interaction term as the predictor; valenced thoughts as the mediator; premessage attitudes, moral attitude basis, and message type as covariates; and postmessage attitudes as the outcome variable.
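A sketch of this mediation model with the mediation package, again using hypothetical variable names (`valenced_thoughts` for the coded thought index):

```r
# Mediation sketch; variable names are assumptions.
library(mediation)

d_clean$interaction <- d_clean$moral_basis * d_clean$msg_type

med_model <- lm(valenced_thoughts ~ interaction + pre_att + moral_basis + msg_type,
                data = d_clean)
out_model <- lm(post_att ~ valenced_thoughts + interaction + pre_att + moral_basis + msg_type,
                data = d_clean)

med_fit <- mediate(med_model, out_model,
                   treat = "interaction", mediator = "valenced_thoughts",
                   boot = TRUE, sims = 10000)  # nonparametric bootstrap, 10,000 iterations
summary(med_fit)
```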
The following analyses tested the interaction of each alternative basis with message type on postmessage attitudes, controlling for premessage attitudes.
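For example, a sketch using a hypothetical `prac_basis` column (perceived practical basis); the same form applies to any other measured basis:

```r
# Alternative-basis sketch; variable name is an assumption.
alt_fit <- lm(post_att ~ pre_att + prac_basis * msg_type, data = d_clean)
summary(alt_fit)
```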
To assess whether political orientation moderated the Moral Attitude Basis × Message Type interaction, we subjected the data to a hierarchical multiple regression model that tested a three-way interaction on postmessage attitudes.
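A sketch of the three-way interaction model, assuming a hypothetical `pol_orient` column (mean of the two political-orientation items):

```r
# Moderation sketch; variable name is an assumption.
d_clean$pol_orient_c <- d_clean$pol_orient - mean(d_clean$pol_orient, na.rm = TRUE)  # mean-center

mod_fit <- lm(post_att ~ pre_att + moral_basis * msg_type * pol_orient_c, data = d_clean)
summary(mod_fit)
```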
The sample size will be smaller than in the original study.
The thought listings will be coded by me alone rather than by two independent coders as in the original study, so intercoder reliability cannot be assessed.
The original paper did not conduct a confirmatory factor analysis for construct validation; I will attempt to conduct one in the replication study.
You can comment this section out prior to final report with data collection.
Sample size, demographics, data exclusions based on rules spelled out in analysis plan
Any differences from what was described as the original plan, or “none”.
## Results
Data preparation following the analysis plan.
The analyses as specified in the analysis plan.
Side-by-side graph with original graph is ideal here
### Exploratory analyses
Any follow-up analyses desired (not required).
Open the discussion section with a paragraph summarizing the primary result from the confirmatory analysis and the assessment of whether it replicated, partially replicated, or failed to replicate the original result.
Add open-ended commentary (if any) reflecting (a) insights from follow-up exploratory analysis, (b) assessment of the meaning of the replication (or not) - e.g., for a failure to replicate, are the differences between original and present study ones that definitely, plausibly, or are unlikely to have been moderators of the result, and (c) discussion of any objections or challenges raised by the current and original authors about the replication attempt. None of these need to be long.