Replication of Study ‘How Quick Decisions Illuminate Moral Character’ by Clayton R. Critcher, Yoel Inbar and David A. Pizarro (2012, Psychological Science)
Introduction
Critcher, Inbar, and Pizarro (2012) found that the speed of decision-making plays a major role in shaping how we evaluate someone's moral character. Quick decisions often signal certainty, which leads to stronger judgments: quick moral decisions receive more positive evaluations, while quick immoral decisions draw harsher criticism. In contrast, slow decisions suggest indecision or conflict, leading to more moderate assessments. In this replication, I will test these findings again with a stronger emphasis on randomization to reduce potential biases that may have influenced the original study. This could pose challenges, especially if confounding factors emerge that were not fully accounted for before. By refining the experimental design, I aim to provide a clearer understanding of how decision speed shapes moral evaluations and to address limitations in the original methodology.

To conduct this experiment, participants will be presented with scenarios in which individuals make either moral or immoral decisions, with the critical variable being the speed of the decision (quick vs. slow). After each scenario, participants will rate the decision-maker's moral character, perceived certainty, and impulsivity. The challenge will be creating scenarios that clearly manipulate decision speed without introducing unintended biases: quick decisions should not seem impulsive by default, and slow decisions should not be perceived as overly reflective. Pretesting will be necessary to refine the stimuli and ensure participants interpret them consistently and as intended.
[Repository](https://github.com/carolineporche1/critcher2012)
[Original Paper](https://github.com/carolineporche1/critcher2012/blob/main/original_paper/How_quick_decisions_illuminate_moral_cha.pdf)
Methods
Power Analysis
Planned Sample
Planned sample size and/or termination rule, sampling frame, known demographics if any, preselection rules if any.
Materials
“Immediately following the description of Justin and Nate’s actions, we asked participants the following sets of items (all on 1–7 scales):
Quickness: “As a manipulation check, participants indicated how quickly (vs. slowly) the decision was made”
Moral character evaluation: “The three moral evaluation items had participants assess the agents’ underlying moral principles and standards… by asking whether the agent: “has entirely good (vs. entirely bad) moral principles,” “has good (vs. bad) moral standards,” and “deep down has the moral principles and knowledge to do the right thing”.”
Certainty: “Participants indicated “how conflicted she felt when making her decision” (reverse-scored), “how many reservations she had” (reverse-scored), whether the target “was quite certain in her decision” (vs. had considerable reservations), and “how far she was from choosing the alternate course of action”.”
Emotional Impulsivity: “Participants indicated to what extent the person remained “calm and emotionally contained” (reverse-scored) and “became upset and acted without thinking.””
Perceived Motives: “…participants rated their motives to: “get more money” and “protect her children.””
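As a rough illustration of how these items could later be combined into composites, the sketch below reverse-scores the relevant certainty and impulsivity items (on a 1–7 scale, reversing by subtracting from 8) and averages the moral-character items. The column names are hypothetical placeholders, not the actual variable names in our data.

```{r eval=FALSE}
library(dplyr)

# Sketch of composite scoring; the item columns below are hypothetical placeholders
score_composites <- function(d) {
  d %>%
    mutate(
      conflicted_r   = 8 - conflicted,    # reverse-score "how conflicted she felt"
      reservations_r = 8 - reservations,  # reverse-score "how many reservations"
      calm_r         = 8 - calm,          # reverse-score "calm and emotionally contained"
      moral_composite       = (moral_principles + moral_standards + moral_knowledge) / 3,
      certainty_composite   = (conflicted_r + reservations_r + certain + distance) / 4,
      impulsivity_composite = (calm_r + upset) / 2
    )
}
```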
Procedure
Participants will read about two men, Justin and Nate, who come upon separate cash-filled wallets in a grocery store parking lot. Justin “was able to decide quickly” what to do, while Nate “was only able to decide after long and careful deliberation.”
Participants will then be assigned to one of two conditions:
- Moral decision: Justin and Nate both decide not to steal the money and return the wallets.
- Immoral decision: Justin and Nate both decide to keep the money and drive off.
Participants will be asked to rate how quickly each decision was made and evaluate the moral character, certainty, and impulsivity of both Justin and Nate. Randomization will be used to control for order effects.
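As a simple sketch of the kind of randomization intended here (illustrative only; the actual assignment will be handled by the experiment software), each participant would be randomly assigned to one decision condition and would rate the two targets in a shuffled order:

```{r eval=FALSE}
# Illustrative randomization sketch (not the actual experiment script)
assign_participant <- function() {
  list(
    condition    = sample(c("moral", "immoral"), size = 1),  # between-participants decision factor
    rating_order = sample(c("Justin", "Nate"))               # shuffled to control for order effects
  )
}
assign_participant()
```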
Note:
- This procedure was followed precisely as outlined in the original article without deviations.
Analysis Plan
Primary Analysis:
I will conduct a two-way ANOVA to examine how decision speed (quick vs. slow) interacts with the decision itself (return vs. keep the wallet) in influencing participants' moral character evaluations. Based on the original study, I expect quick decisions to keep the wallet to result in more negative judgments, while quick decisions to return it will lead to more positive (though marginal) evaluations.
Additional Analyses:
I plan to explore whether these effects generalize to different types of moral dilemmas. Another potential avenue is to examine whether participant demographics (e.g., gender or age) moderate the observed effects.
Data Cleaning and Exclusion Rules:
I will follow the same data cleaning procedures outlined in the original study. Participants with incomplete responses will be excluded, and covariates such as emotional impulsivity will be accounted for.
Note:
This analysis plan closely follows the approach described in the original article, with the same data exclusions, control variables, and covariate adjustments. Any additional analyses I conduct will build on the original methodology to enhance our understanding of decision speed and moral judgment.
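A minimal sketch of the planned confirmatory model, assuming a long-format data frame `analysis_data` with one moral-character composite per participant-target combination; the object and column names are placeholders rather than variables that exist yet:

```{r eval=FALSE}
# Planned confirmatory model (sketch; object and column names are placeholders)
# speed: quick vs. slow (within participants); decision: return vs. keep the wallet (between)
fit <- aov(moral_composite ~ speed * decision + Error(subject_id / speed),
           data = analysis_data)
summary(fit)  # the speed:decision interaction is the effect of interest
```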
Design Overview
- The two factors manipulated in this study are decision type (moral vs. immoral) and decision speed (quick vs. slow).
- Five measures are taken over the course of the study, and none are repeated.
- The decision-type factor is manipulated between participants, which tests the robustness of the effect and avoids the order effects a fully within-participants manipulation could introduce; decision speed is manipulated within participants (Justin vs. Nate).
- No steps to reduce demand characteristics are mentioned in the study.
- Participants' previous exposure to situations like those in the study may pose a potential confound.
Differences from Original Study
Explicitly describe known differences in sample, setting, procedure, and analysis plan from original study. The goal, of course, is to minimize those differences, but differences will inevitably occur. Also, note whether such differences are anticipated to make a difference based on claims in the original article or subsequent published research on the conditions for obtaining the effect.
Unlike the original study, we will only conduct Experiment One. In addition, our participants will not be drawn from UC Berkeley, and our experiment will be conducted online rather than in person. We cannot control the exact environment as in the original study, but this is not predicted to make a difference. At the end of our study, we plan to add an attention check.
Methods Addendum (Post Data Collection)
You can comment this section out prior to final report with data collection.
Actual Sample
Sample size, demographics, data exclusions based on rules spelled out in analysis plan
Differences from pre-data collection methods plan
Any differences from what was described as the original plan, or “none”.
Results
Data preparation
In the initial stages of our data preparation, we will begin by loading our data set along with the necessary libraries and functions. We will then clean the data, which may include dropping NA values or special characters that are not needed for the analysis. After this, we will filter the data to the variables needed for the analysis and create the necessary rows and columns.
```{r include=FALSE}
# Data preparation

# Load required libraries
library(dplyr)
library(tidyr)
library(purrr)
library(readr)
library(stringr)
library(jsonlite)
library(magrittr)

# Define the directory path containing the raw data files
data_directory <- "/Users/carolineporche/Desktop/CSS/CSS204/critcher2013_2/data/osfstorage-archive"

# Retrieve a list of all CSV files within the directory
file_list <- list.files(path = data_directory, pattern = "\\.csv$",
                        full.names = TRUE, recursive = TRUE)

# Read and merge data from all CSV files, adding a 'subject_id' column
merged_data <- file_list %>%
  map_dfr(~ {
    temp_data <- read_csv(.x)                           # load the data from each file
    id <- tools::file_path_sans_ext(basename(.x))       # subject ID from the filename
    temp_data %>% mutate(subject_id = id, .before = 1)  # insert subject ID as the first column
  })

# Data cleaning and filtering
processed_data <- merged_data %>%
  # Keep essential columns: subject ID, response time, stimulus, and response
  select(subject_id, rt, stimulus, response) %>%
  # Add a 'condition' column based on specific stimulus content
  mutate(condition = case_when(
    str_detect(stimulus, "steal the money")    ~ "moral",
    str_detect(stimulus, "pocketed the money") ~ "immoral",
    TRUE ~ NA_character_
  )) %>%
  # Propagate 'condition' values downward to fill in any gaps
  fill(condition, .direction = "down") %>%
  # Create an attention-check column from each participant's last row
  group_by(subject_id) %>%
  mutate(attention_check = if_else(
    row_number() == n(),                                     # last row for the participant
    str_extract(response, '"Q0":\\d+') %>%                   # extract the key-value pair for "Q0"
      str_extract("\\d+$"),                                   # extract the numeric value
    NA_character_
  )) %>%
  ungroup() %>%
  # Convert 'attention_check' to numeric and spread it across all rows
  mutate(attention_check = as.numeric(attention_check)) %>%
  fill(attention_check, .direction = "downup")

# Parse the JSON 'response' column and label each row's target (Justin or Nate)
expanded_data <- processed_data %>%
  mutate(parsed_response = map(response, ~ fromJSON(.x) %>% as.data.frame())) %>%
  unnest(cols = parsed_response) %>%
  mutate(target = case_when(
    row_number() %% 2 == 1 ~ "Justin",  # Justin for odd-numbered rows
    TRUE ~ "Nate"                       # Nate for even-numbered rows
  ))

# Transform to long format and generate a combined target-question identifier
reshaped_data <- expanded_data %>%
  pivot_longer(
    cols = starts_with("Q"),        # columns like Q0, Q1, etc.
    names_to = "question",          # question identifier
    values_to = "response_value"    # response values
  ) %>%
  mutate(question_value = paste(target, question, sep = "_")) %>%
  select(-target, -question, -response, -stimulus)

# Include only participants who passed the attention check
valid_data <- reshaped_data %>% filter(attention_check == 0)

# Identify participants who met all attention-check criteria
eligible_subjects <- valid_data %>%
  filter(
    (question_value == "Justin_Q0" & response_value == 1) |
      (question_value == "Nate_Q0" & response_value == 0)
  ) %>%
  distinct(subject_id) %>%   # unique subject IDs
  pull(subject_id)
```
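The chunk above stops at identifying eligible participants. A hedged sketch of the step that would follow, using the objects and libraries loaded above, is to subset the cleaned long-format data to those participants and spread it back out to one column per target-question pair before scoring:

```{r eval=FALSE}
# Sketch of the next step (uses objects created in the chunk above)
analysis_ready <- valid_data %>%
  filter(subject_id %in% eligible_subjects) %>%                    # keep eligible participants only
  select(subject_id, condition, question_value, response_value) %>%
  pivot_wider(names_from = question_value, values_from = response_value)
```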
Confirmatory analysis
The analyses were conducted as outlined in the analysis plan. Specifically, I used a two-way ANOVA to test the interaction between decision speed (quick vs. slow) and the decision itself (return vs. keep the wallet) on moral character evaluations.
Replicating the original experiment, the two-way ANOVA on the moral character evaluation composite confirmed that the effect of decision speed depended on the decision made. As stated in the original article:
“Replicating Experiment 1, a two-way ANOVA on the moral character evaluation composite confirmed that the influence of decision speed depended on their decision, F(1, 549) = 16.08, p < .001 (see Figure 1).”
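To accompany the quoted interaction, condition means of the moral-character composite can be summarized before plotting a figure analogous to the original Figure 1 (same placeholder object and column names as the earlier sketches):

```{r eval=FALSE}
# Cell means of the moral-character composite by decision speed and decision type
# (placeholder names, consistent with the earlier sketches)
analysis_data %>%
  group_by(speed, decision) %>%
  summarise(
    mean_moral = mean(moral_composite, na.rm = TRUE),
    sd_moral   = sd(moral_composite, na.rm = TRUE),
    n          = n(),
    .groups    = "drop"
  )
```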
Exploratory analyses
Any follow-up analyses desired (not required).
Discussion
Summary of Replication Attempt
Open the discussion section with a paragraph summarizing the primary result from the confirmatory analysis and the assessment of whether it replicated, partially replicated, or failed to replicate the original result.
Commentary
Add open-ended commentary (if any) reflecting (a) insights from follow-up exploratory analysis, (b) assessment of the meaning of the replication (or not) - e.g., for a failure to replicate, are the differences between original and present study ones that definitely, plausibly, or are unlikely to have been moderators of the result, and (c) discussion of any objections or challenges raised by the current and original authors about the replication attempt. None of these need to be long.