Replication of Study ‘How Quick Decisions Illuminate Moral Character’ by Clayton R. Critcher, Yoel Inbar and David A. Pizarro (2013, Social Psychological and Personality Science)

Author

Caroline Porche (cporche@ucsd.edu)

Published

December 4, 2024

Introduction

Critcher, Inbar, and Pizarro (2013) found that the speed of decision-making plays a major role in shaping how we evaluate someone’s moral character. Quick decisions signal certainty, which leads to more extreme judgments: quick moral decisions receive more positive evaluations, while quick immoral ones draw harsher criticism. Slow decisions, by contrast, suggest indecision or conflict and lead to more moderate assessments.

In this replication, I will test these findings again, with a stronger emphasis on randomization to reduce potential biases that may have influenced the original study. This could pose challenges, especially if additional variables or confounding factors emerge that were not fully accounted for before. By refining the experimental design, I aim to provide a clearer understanding of how decision speed shapes moral evaluations and to address limitations in the original methodology.

To conduct this experiment, participants will read scenarios in which individuals make either moral or immoral decisions, with the critical variable being the speed of the decision (quick vs. slow). After each scenario, participants will rate the decision-maker’s moral character, perceived certainty, and impulsivity. The main challenge will be creating scenarios that clearly manipulate decision speed without introducing unintended biases, ensuring that quick decisions do not seem impulsive by default and that slow decisions are not perceived as overly reflective. Pretesting will be necessary to refine the stimuli and ensure participants interpret them consistently and as intended.

  • Repository: Explore the full project code and analysis.

  • Original Paper: “How Quick Decisions Illuminate Moral Character.”

  • Paradigm: Detailed paradigm and methodology used in the study.

Methods

Moral Character Evaluations

A significant main effect of decision speed (F(1, 117) = 541.52, p < .001) showed that Justin, who made a quick decision (M = 6.44, SE = .08), was perceived as having decided far more quickly than Nate (M = 2.15, SE = .12). Additionally, the moral nature of the decision significantly influenced character evaluations (F(1, 117) = 127.07, p < .001), with returning the wallet evaluated more favorably than keeping it.

Decision Speed and Emotional Impulsivity

Justin was perceived as less emotionally impulsive (M = 2.40, SE = .11) than Nate (M = 3.79, SE = .12), F(1, 117) = 95.26, p < .001. However, this difference in emotional impulsivity did not influence moral evaluations (t < 1).

Decision Speed and Moral Evaluation Polarization

A significant interaction between decision type and speed was observed (F(1, 117) = 127.07, p < .001). Quick immoral decisions led to more negative evaluations (t(54) = 8.28, p < .001), while quick moral decisions resulted in more positive evaluations (t(63) = 7.71, p < .001).

Certainty as a Mediator

Quick decisions were associated with higher certainty (F(1, 117) = 706.6, p < .001). Certainty mediated the relationship between decision speed and moral evaluation, with greater certainty explaining more negative evaluations for immoral decisions and more positive evaluations for moral decisions.
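
The original mediation analysis is not reproduced here, but the sketch below illustrates how a certainty mediation test could be set up in R. The variable names (`speed`, `certainty`, `moral_eval`) and the simulated data are assumptions for illustration only, and the model ignores the moderation by decision type for simplicity.

```{r eval=FALSE}
# Minimal mediation sketch with simulated data; names and values are
# illustrative assumptions, not the original or replication data.
library(mediation)

set.seed(1)
df <- data.frame(speed = rep(c(0, 1), each = 60))   # 0 = slow (Nate), 1 = quick (Justin)
df$certainty  <- 1 + 2 * df$speed + rnorm(120)
df$moral_eval <- 3 + 0.5 * df$certainty + rnorm(120)

m_mediator <- lm(certainty ~ speed, data = df)               # path a
m_outcome  <- lm(moral_eval ~ speed + certainty, data = df)  # paths b and c'

med_fit <- mediate(m_mediator, m_outcome,
                   treat = "speed", mediator = "certainty", sims = 500)
summary(med_fit)   # reports the indirect (ACME) and direct effects
```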

Power Analysis and Sample Size

We planned to perform power analyses to determine the sample sizes needed to achieve 80%, 90%, and 95% power for detecting effect sizes from the original study. However, the original study did not report exact effect sizes, mean differences, or error bars. As a result, effect sizes could not be calculated, and sample sizes could not be determined using this methodology. Given the original study’s sample size of 119 participants, we followed the standard practice of multiplying the original sample size by 2.5 (rounded up), resulting in a target sample size of 298 participants.
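
For transparency, the sketch below shows how such a power analysis could be run if an effect size were available, alongside the 2.5x fallback rule that was actually used. The Cohen's d of 0.5 is an assumed placeholder, not a value reported in the original paper.

```{r eval=FALSE}
# Illustrative power analysis with an assumed (not reported) effect size
library(pwr)

pwr.t.test(d = 0.5, power = 0.80, sig.level = .05, type = "two.sample")
pwr.t.test(d = 0.5, power = 0.90, sig.level = .05, type = "two.sample")
pwr.t.test(d = 0.5, power = 0.95, sig.level = .05, type = "two.sample")

# Fallback rule actually used: 2.5 x the original N of 119, rounded up
ceiling(119 * 2.5)  # 298
```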

Planned Sample

The planned sample size is 298 participants.

Materials

Immediately following the description of Justin and Nate’s actions, we asked participants the following sets of items (all on 1–7 scales). A sketch showing how these items could be combined into composite scores appears after the list:

  • Quickness: “As a manipulation check, participants indicated how quickly (vs. slowly) the decision was made”

  • Moral character evaluation: “The three moral evaluation items had participants assess the agents’ underlying moral principles and standards… by asking whether the agent: “has entirely good (vs. entirely bad) moral principles,” “has good (vs. bad) moral standards,” and “deep down has the moral principles and knowledge to do the right thing”.”

  • Certainty: “Participants indicated “how conflicted she felt when making her decision” (reverse-scored), “how many reservations she had” (reverse-scored), whether the target “was quite certain in her decision” (vs. had considerable reservations), and “how far she was from choosing the alternate course of action”.”

  • Emotional Impulsivity: “Participants indicated to what extent the person remained “calm and emotionally contained” (reverse-scored) and “became upset and acted without thinking.””

  • Perceived Motives: “…participants rated their motives to: “get more money” and “protect her children”.”
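
As an illustration of how these items could be combined, the sketch below reverse-scores the reversed items and averages within each construct. The column names are hypothetical placeholders, not the actual survey export names used in this replication.

```{r eval=FALSE}
library(dplyr)

# Hypothetical one-row example of item-level responses (names are assumptions)
raw_items <- tibble(
  principles = 6, standards = 6, deep_down = 7,                 # moral character items
  conflicted = 2, reservations = 2, certain = 6, distance = 6,  # certainty items
  calm = 6, upset_acted = 2                                     # impulsivity items
)

scored <- raw_items %>%
  mutate(
    conflicted_r    = 8 - conflicted,       # reverse-score 1-7 items as 8 - x
    reservations_r  = 8 - reservations,
    calm_r          = 8 - calm,
    moral_character = (principles + standards + deep_down) / 3,
    certainty       = (conflicted_r + reservations_r + certain + distance) / 4,
    impulsivity     = (calm_r + upset_acted) / 2
  )
```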

Procedure

Participants will read about two men, Justin and Nate, who come upon separate cash-filled wallets in a grocery store parking lot. Justin “was able to decide quickly” what to do, while Nate “was only able to decide after long and careful deliberation.”

Participants will then be assigned to one of two conditions:

  1. Moral decision: Justin and Nate both decide not to steal the money and return the wallets.
  2. Immoral decision: Justin and Nate both decide to keep the money and drive off.

Participants will be asked to rate how quickly each decision was made and evaluate the moral character, certainty, and impulsivity of both Justin and Nate. Randomization will be used to control for order effects.
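
The sketch below illustrates one way the random assignment and presentation order could be generated. The condition labels and the assumption that target order is counterbalanced are mine; in practice the survey platform handles this randomization.

```{r eval=FALSE}
# Illustrative random assignment; labels and structure are assumptions
set.seed(2024)
n <- 298

assignments <- data.frame(
  participant  = seq_len(n),
  condition    = sample(c("moral", "immoral"), n, replace = TRUE),
  first_target = sample(c("Justin", "Nate"), n, replace = TRUE)  # presentation order
)

table(assignments$condition, assignments$first_target)
```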

In an initial pilot run with five participants, the average completion time was 4.8 minutes.

  • Note:

    • This procedure was followed precisely as outlined in the original article without deviations.

Analysis Plan

  • Primary Analysis:
    I will conduct a two-way ANOVA to examine how decision speed (quick vs. slow) interacts with decision type (keep vs. return the wallet) in influencing participants’ moral character evaluations. Based on the original study, I expect that quick decisions to keep the wallet will result in more negative judgments, while quick decisions to return it will lead to more positive (though more modest) evaluations. A sketch of the planned model appears after this list.

  • Additional Analyses:
    I plan to explore whether these effects generalize to different types of moral dilemmas. Another potential avenue is to examine whether participant demographics (e.g., gender or age) moderate the observed effects.

  • Data Cleaning and Exclusion Rules:
    I will follow the same data cleaning procedures outlined in the original study. I’ll ensure that participants with incomplete responses are excluded and covariates like emotional impulsivity are properly accounted for.

  • Note:
    This analysis plan closely follows the approach described in the original article, with the same data exclusions, control variables, and covariate adjustments. Any additional analyses I conduct will build on the original methodology to enhance our understanding of decision speed and moral judgment.
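
A minimal sketch of the planned confirmatory model follows, under the assumption that the cleaned data contain one composite moral-character score per rating, with hypothetical column names and simulated values standing in for the real export.

```{r eval=FALSE}
# Simulated stand-in for the cleaned data (names and values are assumptions)
set.seed(42)
analysis_data <- expand.grid(
  decision_type  = c("moral", "immoral"),
  decision_speed = c("quick", "slow"),
  id             = 1:75
)
analysis_data$impulsivity     <- runif(nrow(analysis_data), 1, 7)
analysis_data$moral_character <- rnorm(nrow(analysis_data), mean = 4, sd = 1)

# Planned two-way ANOVA with emotional impulsivity as a covariate
planned_model <- aov(
  moral_character ~ decision_type * decision_speed + impulsivity,
  data = analysis_data
)
summary(planned_model)
```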

Design Overview

  • The two factors manipulated in this study are ‘decision type’ and ‘decision speed’.

  • Five measures are taken over the course of the study, and none is repeated.

  • The study uses a between-participants design, which tests the robustness of the effect, rather than a within-subjects design, which could have introduced order effects.

  • There is no mention of steps taken to reduce demand characteristics within the study.

  • Participants’ prior experiences with situations like those in the study may act as potential confounds.

Differences from Original Study


Unlike the original study, we will only conduct Experiment 1. In addition, our participants will not be drawn from UC Berkeley, and our experiment will be conducted online rather than in person. We cannot control the testing environment as precisely as the original study did, but this is not expected to make a difference. At the end of our study, we plan to add an attention check.


Results

Data preparation

In the initial stages of our data preparation, we will begin by loading our data set and the necessary libraries and functions. Then we will clean our data, which may include dropping NA values or special characters that we do not need for our analysis. After this, we will filter the data down to the variables needed for the analysis and create the necessary rows and columns.

```{r include=F}
### Data Preparation

suppressPackageStartupMessages({
  library(dplyr)
  library(tidyr)
  library(purrr)
  library(readr)
  library(stringr)
  library(jsonlite)
  library(magrittr)
})
#### Load Data
# Define the directory path
data_directory <- "/Users/carolineporche/Desktop/Github/critcher2013_2/data"

# Print confirmation
cat("Data directory set to:", data_directory, "\n")
Data directory set to: /Users/carolineporche/Desktop/Github/critcher2013_2/data 
# Retrieve a list of all CSV files within the directory
file_list <- list.files(
  path = data_directory, 
  pattern = "\\.csv$", 
  full.names = TRUE, 
  recursive = TRUE
)

# Print the number of files found for confirmation
cat("Number of CSV files found:", length(file_list), "\n")
Number of CSV files found: 1 
# Stop execution if no files are found
if (length(file_list) == 0) {
  stop("No CSV files found in the specified directory.")
}
# Read and merge data from all CSV files, adding a 'subject_id' column
combined_data <- file_list %>%
  map_dfr(~ {
    read_csv(.x) %>%
      mutate(subject_id = tools::file_path_sans_ext(basename(.x)), .before = 1)
  })
Rows: 291 Columns: 22
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr  (1): condition
dbl (21): Justin_Q1, Justin_Q2, Justin_Q3, Justin_Q4, Justin_Q5, Justin_Q6, ...

ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
# Print a preview of the combined dataset
head(combined_data)
# A tibble: 6 × 23
  subject_id   condition Justin_Q1 Justin_Q2 Justin_Q3 Justin_Q4 Justin_Q5
  <chr>        <chr>         <dbl>     <dbl>     <dbl>     <dbl>     <dbl>
1 comined_data immoral           6         3         3         3        NA
2 comined_data moral             5         5         5         6        NA
3 comined_data moral             6         6         6         6        NA
4 comined_data moral             6         5         5         4        NA
5 comined_data moral             6         6         6         6        NA
6 comined_data immoral           3         3         3         3         3
# ℹ 16 more variables: Justin_Q6 <dbl>, Justin_Q7 <dbl>, Justin_Q8 <dbl>,
#   Justin_Q9 <dbl>, Justin_Q10 <dbl>, Nate_Q1 <dbl>, Nate_Q2 <dbl>,
#   Nate_Q3 <dbl>, Nate_Q4 <dbl>, Nate_Q5 <dbl>, Nate_Q6 <dbl>, Nate_Q7 <dbl>,
#   Nate_Q8 <dbl>, Nate_Q9 <dbl>, Nate_Q10 <dbl>, group <dbl>
# Extract and format the subject ID from the filename (optional if already in pipeline)
extract_subject_id <- function(file_path) {
  tools::file_path_sans_ext(basename(file_path))
}
# Retrieve a list of all CSV files
file_list <- list.files(
  path = data_directory,
  pattern = "\\.csv$",
  full.names = TRUE,
  recursive = TRUE
)

# Check if files are found
if (length(file_list) == 0) {
  stop("No CSV files found in the directory:", data_directory)
}

Data Cleaning and Filtering

# Select relevant columns (adjusted based on column names)
processed_data <- combined_data %>%
  # Retain subject_id and condition columns
  select(subject_id, condition, starts_with("Justin"), starts_with("Nate"), group) %>%
  
  # Propagate 'condition' values downward to fill in gaps
  fill(condition, .direction = "down") %>%
  
  # Organize data by 'subject_id'
  group_by(subject_id) %>%
  
  # Add an attention check column based on group or response patterns
  mutate(attention_check = if_else(
    group == "control",
    0,  # Example check for control group (adjust logic as needed)
    NA_real_
  )) %>%
  
  ungroup() %>%
  
  # Ensure attention check is numeric
  mutate(attention_check = as.numeric(attention_check)) %>%
  
  # Fill attention check values up and down
  fill(attention_check, .direction = "downup")

# Print a preview of the cleaned data
head(processed_data)
# A tibble: 6 × 24
  subject_id   condition Justin_Q1 Justin_Q2 Justin_Q3 Justin_Q4 Justin_Q5
  <chr>        <chr>         <dbl>     <dbl>     <dbl>     <dbl>     <dbl>
1 comined_data immoral           6         3         3         3        NA
2 comined_data moral             5         5         5         6        NA
3 comined_data moral             6         6         6         6        NA
4 comined_data moral             6         5         5         4        NA
5 comined_data moral             6         6         6         6        NA
6 comined_data immoral           3         3         3         3         3
# ℹ 17 more variables: Justin_Q6 <dbl>, Justin_Q7 <dbl>, Justin_Q8 <dbl>,
#   Justin_Q9 <dbl>, Justin_Q10 <dbl>, Nate_Q1 <dbl>, Nate_Q2 <dbl>,
#   Nate_Q3 <dbl>, Nate_Q4 <dbl>, Nate_Q5 <dbl>, Nate_Q6 <dbl>, Nate_Q7 <dbl>,
#   Nate_Q8 <dbl>, Nate_Q9 <dbl>, Nate_Q10 <dbl>, group <dbl>,
#   attention_check <dbl>
# Transform data to a long format and retain 'condition'
reshaped_data <- processed_data %>%
  pivot_longer(
    cols = starts_with("Justin"):starts_with("Nate"),
    names_to = "question",
    values_to = "response_value"
  ) %>%
  mutate(question_value = paste(condition, question, sep = "_")) %>%
  select(subject_id, condition, question_value, response_value, group, attention_check)
Warning in x:y: numerical expression has 10 elements: only the first used
# Print the column names and a preview of the data
names(combined_data)
 [1] "subject_id" "condition"  "Justin_Q1"  "Justin_Q2"  "Justin_Q3" 
 [6] "Justin_Q4"  "Justin_Q5"  "Justin_Q6"  "Justin_Q7"  "Justin_Q8" 
[11] "Justin_Q9"  "Justin_Q10" "Nate_Q1"    "Nate_Q2"    "Nate_Q3"   
[16] "Nate_Q4"    "Nate_Q5"    "Nate_Q6"    "Nate_Q7"    "Nate_Q8"   
[21] "Nate_Q9"    "Nate_Q10"   "group"     
head(combined_data)
# A tibble: 6 × 23
  subject_id   condition Justin_Q1 Justin_Q2 Justin_Q3 Justin_Q4 Justin_Q5
  <chr>        <chr>         <dbl>     <dbl>     <dbl>     <dbl>     <dbl>
1 comined_data immoral           6         3         3         3        NA
2 comined_data moral             5         5         5         6        NA
3 comined_data moral             6         6         6         6        NA
4 comined_data moral             6         5         5         4        NA
5 comined_data moral             6         6         6         6        NA
6 comined_data immoral           3         3         3         3         3
# ℹ 16 more variables: Justin_Q6 <dbl>, Justin_Q7 <dbl>, Justin_Q8 <dbl>,
#   Justin_Q9 <dbl>, Justin_Q10 <dbl>, Nate_Q1 <dbl>, Nate_Q2 <dbl>,
#   Nate_Q3 <dbl>, Nate_Q4 <dbl>, Nate_Q5 <dbl>, Nate_Q6 <dbl>, Nate_Q7 <dbl>,
#   Nate_Q8 <dbl>, Nate_Q9 <dbl>, Nate_Q10 <dbl>, group <dbl>
# Check unique values in 'condition'
unique(combined_data$condition)
[1] "immoral" "moral"  
# Check summary of response columns
summary(combined_data %>% select(starts_with("Justin"), starts_with("Nate")))
   Justin_Q1       Justin_Q2       Justin_Q3       Justin_Q4    
 Min.   :1.000   Min.   :1.000   Min.   :1.000   Min.   :1.000  
 1st Qu.:2.000   1st Qu.:2.000   1st Qu.:3.000   1st Qu.:4.000  
 Median :6.000   Median :4.000   Median :4.000   Median :5.000  
 Mean   :4.636   Mean   :4.214   Mean   :4.241   Mean   :4.876  
 3rd Qu.:6.000   3rd Qu.:6.000   3rd Qu.:6.000   3rd Qu.:6.000  
 Max.   :7.000   Max.   :7.000   Max.   :7.000   Max.   :7.000  
                 NA's   :10      NA's   :9       NA's   :8      
   Justin_Q5       Justin_Q6       Justin_Q7       Justin_Q8    
 Min.   :1.000   Min.   :1.000   Min.   :1.000   Min.   :1.000  
 1st Qu.:1.000   1st Qu.:6.000   1st Qu.:6.000   1st Qu.:1.000  
 Median :1.000   Median :6.000   Median :6.000   Median :1.000  
 Mean   :2.254   Mean   :5.965   Mean   :5.914   Mean   :1.893  
 3rd Qu.:3.000   3rd Qu.:7.000   3rd Qu.:7.000   3rd Qu.:2.000  
 Max.   :7.000   Max.   :7.000   Max.   :7.000   Max.   :7.000  
 NA's   :67      NA's   :2       NA's   :1       NA's   :67     
   Justin_Q9       Justin_Q10       Nate_Q1         Nate_Q2     
 Min.   :1.000   Min.   :1.000   Min.   :1.000   Min.   :1.000  
 1st Qu.:5.000   1st Qu.:1.000   1st Qu.:1.000   1st Qu.:3.000  
 Median :6.000   Median :1.000   Median :1.000   Median :4.000  
 Mean   :5.645   Mean   :2.009   Mean   :1.701   Mean   :3.966  
 3rd Qu.:7.000   3rd Qu.:2.000   3rd Qu.:2.000   3rd Qu.:5.000  
 Max.   :7.000   Max.   :7.000   Max.   :7.000   Max.   :7.000  
 NA's   :4       NA's   :68      NA's   :60                     
    Nate_Q3         Nate_Q4         Nate_Q5         Nate_Q6     
 Min.   :1.000   Min.   :1.000   Min.   :1.000   Min.   :1.000  
 1st Qu.:3.000   1st Qu.:4.000   1st Qu.:5.000   1st Qu.:2.000  
 Median :4.000   Median :5.000   Median :6.000   Median :3.000  
 Mean   :4.114   Mean   :4.811   Mean   :5.367   Mean   :2.875  
 3rd Qu.:5.000   3rd Qu.:6.000   3rd Qu.:6.000   3rd Qu.:4.000  
 Max.   :7.000   Max.   :7.000   Max.   :7.000   Max.   :7.000  
 NA's   :1                       NA's   :2       NA's   :20     
    Nate_Q7         Nate_Q8         Nate_Q9         Nate_Q10  
 Min.   :1.000   Min.   :1.000   Min.   :1.000   Min.   :1.0  
 1st Qu.:1.000   1st Qu.:4.000   1st Qu.:2.000   1st Qu.:1.0  
 Median :2.000   Median :5.000   Median :3.000   Median :2.0  
 Mean   :2.627   Mean   :5.014   Mean   :3.477   Mean   :2.7  
 3rd Qu.:3.000   3rd Qu.:6.000   3rd Qu.:5.000   3rd Qu.:4.0  
 Max.   :7.000   Max.   :7.000   Max.   :7.000   Max.   :7.0  
 NA's   :36                      NA's   :6       NA's   :38   
# Check for rows with non-NA responses
valid_responses <- combined_data %>%
  filter_at(vars(starts_with("Justin"), starts_with("Nate")), any_vars(!is.na(.)))

# Count rows with valid responses
nrow(valid_responses)
[1] 291
reshaped_data <- combined_data %>%
  pivot_longer(
    cols = starts_with("Justin"):starts_with("Nate"),
    names_to = "question",
    values_to = "response_value"
  ) %>%
  select(subject_id, condition, group, question, response_value)
Warning in x:y: numerical expression has 10 elements: only the first used
# Filter valid responses
valid_data <- reshaped_data %>%
  filter(!is.na(response_value))  # Ensure response_value is not missing

# Identify eligible subjects
eligible_subjects <- valid_data %>%
  filter(
    (question == "Justin_Q1" & condition == "moral" & response_value == 1) |
    (question == "Nate_Q1" & condition == "immoral" & response_value == 0)
  ) %>%
  distinct(subject_id) %>%
  pull(subject_id)

# Filter for eligible subjects
filtered_long_data <- valid_data %>%
  filter(subject_id %in% eligible_subjects)
# Check data structure
head(filtered_long_data)
# A tibble: 6 × 5
  subject_id   condition group question  response_value
  <chr>        <chr>     <dbl> <chr>              <dbl>
1 comined_data immoral       1 Justin_Q1              6
2 comined_data immoral       1 Justin_Q2              3
3 comined_data immoral       1 Justin_Q3              3
4 comined_data immoral       1 Justin_Q4              3
5 comined_data immoral       1 Justin_Q6              6
6 comined_data immoral       1 Justin_Q7              6
nrow(filtered_long_data)
[1] 2905
# Ensure necessary columns exist
names(filtered_long_data)
[1] "subject_id"     "condition"      "group"          "question"      
[5] "response_value"
# Ensure group is categorical
filtered_long_data <- filtered_long_data %>%
  mutate(group = as.factor(group))
library(ggplot2)

# Ensure group is a factor with appropriate levels
filtered_long_data <- filtered_long_data %>%
  mutate(group = factor(group, levels = c("control", "experimental")))

# Plot the grouped bar chart
ggplot(filtered_long_data, aes(x = condition, y = response_value, fill = group)) +
  geom_bar(stat = "identity", position = position_dodge(width = 0.8), width = 0.7) +
  labs(
    title = "Response Values by Condition and Group",
    x = "Condition",
    y = "Response Value"
  ) +
  scale_fill_manual(values = c("control" = "#8B008B", "experimental" = "#FF00FF")) +  # Dark Purple and Magenta
  theme_minimal() +
  theme(
    text = element_text(size = 14),
    legend.title = element_blank()
  )

Confirmatory Analysis

reshaped_data <- combined_data %>%
  pivot_longer(
    cols = starts_with("Justin"):starts_with("Nate"),
    names_to = "question",
    values_to = "response_value"
  )
Warning in x:y: numerical expression has 10 elements: only the first used
# Inspect the reshaped data
head(reshaped_data)
# A tibble: 6 × 14
  subject_id   condition Nate_Q2 Nate_Q3 Nate_Q4 Nate_Q5 Nate_Q6 Nate_Q7 Nate_Q8
  <chr>        <chr>       <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>
1 comined_data immoral         4       4       4       6      NA      NA       6
2 comined_data immoral         4       4       4       6      NA      NA       6
3 comined_data immoral         4       4       4       6      NA      NA       6
4 comined_data immoral         4       4       4       6      NA      NA       6
5 comined_data immoral         4       4       4       6      NA      NA       6
6 comined_data immoral         4       4       4       6      NA      NA       6
# ℹ 5 more variables: Nate_Q9 <dbl>, Nate_Q10 <dbl>, group <dbl>,
#   question <chr>, response_value <dbl>
reshaped_data <- reshaped_data %>%
  mutate(
    group = case_when(
      str_detect(question, "Justin") ~ "Control",
      str_detect(question, "Nate") ~ "Experimental",
      TRUE ~ NA_character_
    )
  )

# Inspect the updated data
head(reshaped_data)
# A tibble: 6 × 14
  subject_id   condition Nate_Q2 Nate_Q3 Nate_Q4 Nate_Q5 Nate_Q6 Nate_Q7 Nate_Q8
  <chr>        <chr>       <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>
1 comined_data immoral         4       4       4       6      NA      NA       6
2 comined_data immoral         4       4       4       6      NA      NA       6
3 comined_data immoral         4       4       4       6      NA      NA       6
4 comined_data immoral         4       4       4       6      NA      NA       6
5 comined_data immoral         4       4       4       6      NA      NA       6
6 comined_data immoral         4       4       4       6      NA      NA       6
# ℹ 5 more variables: Nate_Q9 <dbl>, Nate_Q10 <dbl>, group <chr>,
#   question <chr>, response_value <dbl>
filtered_long_data <- reshaped_data %>%
  filter(
    !is.na(response_value),                # Remove rows with missing responses
    condition %in% c("moral", "immoral"), # Keep valid conditions
    group %in% c("Control", "Experimental") # Keep valid groups
  ) %>%
  mutate(
    condition = factor(condition, levels = c("moral", "immoral"), labels = c("Returned Wallet", "Kept Wallet")),
    group = factor(group, levels = c("Control", "Experimental"))
  )
# Check row count
nrow(filtered_long_data)
[1] 2905
# Summarize data by condition and group
filtered_long_data %>%
  group_by(condition, group) %>%
  summarize(n = n(), .groups = "drop")
# A tibble: 4 × 3
  condition       group            n
  <fct>           <fct>        <int>
1 Returned Wallet Control       1309
2 Returned Wallet Experimental   113
3 Kept Wallet     Control       1365
4 Kept Wallet     Experimental   118
# Perform Two-Way ANOVA
anova_results <- aov(response_value ~ condition * group, data = filtered_long_data)

# Display the ANOVA table
summary(anova_results)
                  Df Sum Sq Mean Sq F value   Pr(>F)    
condition          1    656   656.5  143.43  < 2e-16 ***
group              1   1455  1455.0  317.89  < 2e-16 ***
condition:group    1     98    97.9   21.39 3.91e-06 ***
Residuals       2901  13278     4.6                     
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Summary of Replication Attempt

The replication successfully confirmed the significant impact of moral condition on moral character evaluations. Similar to the original study, a significant interaction between condition and decision speed was observed, but decision speed alone showed no meaningful effect on moral scores. This result indicates the primary findings of the original study were replicated.

Commentary

  • Exploratory Insights: The exploratory analysis aligned with the original study’s findings but highlighted the minimal independent effect of decision speed.
  • Replication Meaning: Differences in emphasis on decision speed effects suggest potential moderators, such as scenario wording or participant diversity, that could influence replication outcomes.
  • Challenges: Key challenges included a lack of power analysis and slight ambiguities in operationalizing constructs such as certainty and decision speed.

Post-Data-Collection Methods Addendum

Participants

Actual participants recruited: 300 when combined with the two other replication groups. This exceeded the original sample size of 119, increasing statistical power.

Unexpected Events

An issue with ambiguous wording in decision speed measures arose during data collection. This was addressed by clarifying descriptions in subsequent trials and including attention checks to ensure data quality.
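
The sketch below shows one way the attention-check exclusion could be applied, assuming a column named `attention_check` coded 1 for a pass; the actual column name and coding in our export may differ.

```{r eval=FALSE}
# Hypothetical exclusion step; column name and coding are assumptions
n_before <- nrow(combined_data)

cleaned_data <- combined_data %>%
  filter(!is.na(attention_check), attention_check == 1)

cat("Excluded", n_before - nrow(cleaned_data), "rows that failed the attention check\n")
```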

Analysis Code

```{r echo=T}
# Data cleaning and processing
data_clean <- data %>%
  filter(!is.na(variable)) %>%
  mutate(new_variable = variable1 + variable2)
```

Preview results

```{r echo=T}
head(data_clean)
```

Add open-ended commentary (if any) reflecting (a) insights from follow-up exploratory analysis, (b) assessment of the meaning of the replication (or not) - e.g., for a failure to replicate, are the differences between original and present study ones that definitely, plausibly, or are unlikely to have been moderators of the result, and (c) discussion of any objections or challenges raised by the current and original authors about the replication attempt. None of these need to be long.