As pre-registered, we aimed to collect 200 analyzable responses for each between-subjects condition (i.e., 200 participants per condition who passed all attention checks, totalling 600 usable responses across the entire experiment).
In the first wave of data collection (N = 694), applying the pre-registered exclusion criteria yielded adequate samples (Ns > 200) for two of the between-subjects datasets but an inadequate sample (N = 195) for the third (the Stranger and Sibling dataset). We therefore launched a small second wave (N = 10) to reach an adequate sample for the Stranger and Sibling dataset.
Before data were loaded into R (below), the following changes were made:
Raw variable names from Qualtrics were renamed to be more descriptive.
Any cases with a response to the field “Bot_Catcher” were deleted. This field was designed as an invisible question that only bots would answer (human respondents could not see it). No such cases were detected.
Duplicate IP addresses were removed. There was only one instance of a duplicate IP address; it came from two different Prolific IDs, and both responses were removed for failing attention checks.
All other identifying information was removed (e.g., IP addresses, longitude/latitude).
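For reference, the sketch below shows one way the bot-catcher and duplicate-IP screens could be reproduced in R. It is hypothetical: the Bot_Catcher and IP address columns are not in the shared “E1_raw_data.csv” (they were removed before the file was created), so the column names are illustrative only.
# hypothetical sketch -- these identifying columns were removed before "E1_raw_data.csv" was created
# raw <- raw %>%
#   filter(is.na(Bot_Catcher) | Bot_Catcher == "") %>% # keep only cases with no bot-catcher response
#   distinct(IPAddress, .keep_all = TRUE) # keep only the first response per IP address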
Before running this chunk, please load “E1_raw_data.csv” into the R environment.
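For example, the raw data could be read in with base R (readr::read_csv() from the tidyverse would work equally well); the call below assumes the CSV sits in the current working directory.
E1_raw_data <- read.csv("E1_raw_data.csv") # assumes "E1_raw_data.csv" is in the working directory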
# packages should be loaded in the following order to avoid function conflicts
library(psych) # for describing data
library(effsize) # for mean difference effect sizes
library(sjstats) # for eta-squared effect sizes
library(correlation) # for cleaner correlation test output
library(rmcorr) # for repeated-measures correlation tests
library(cocor) # for comparing dependent correlation coefficients
library(tidyverse) # for data manipulation and plotting
Data were separated into three distinct datasets (one for each between-subjects condition). Then, a between-subjects condition variable was created within each dataset. Last, all three datasets were recombined.
# creates dataset that only includes participants who made judgments of agents who helped strangers and cousins
E1_STRnCUZ <- E1_raw_data %>%
filter(STRnCUZ_STR_m1 >= 0 | STRnCUZ_STR_m2 >= 0 | STRnCUZ_STR_m3 >= 0)
# creates dataset that only includes participants who made judgments of agents who helped strangers and siblings
E1_STRnSIB <- E1_raw_data %>%
filter(STRnSIB_STR_m1 >= 0 | STRnSIB_STR_m2 >= 0 | STRnSIB_STR_m3 >= 0)
# creates dataset that only includes participants who made judgments of agents who helped cousins and siblings
E1_CUZnSIB <- E1_raw_data %>%
filter(CUZnSIB_CUZ_m1 >= 0 | CUZnSIB_CUZ_m2 >= 0 | CUZnSIB_CUZ_m3 >= 0)
# create between-subjects condition variable
E1_STRnCUZ$BSs_cond <- rep("STR vs CUZ", nrow(E1_STRnCUZ))
E1_STRnSIB$BSs_cond <- rep("STR vs SIB", nrow(E1_STRnSIB))
E1_CUZnSIB$BSs_cond <- rep("CUZ vs SIB", nrow(E1_CUZnSIB))
# recombine between-subjects data
E1_all <- rbind(E1_STRnCUZ, E1_STRnSIB, E1_CUZnSIB)
Based on our pre-registered criteria, participants who failed any of three attention checks were to be excluded from all analyses. This led to a final analyzable N = 611 (an 87% retention rate).
The first attention check was disguised as an experimental scenario; in the scenario text, participants were instructed to respond with the left-most option on the scale for all three pre-outcome measures (obligation, likelihood, and descriptive normativity). Participants who responded above a 10 on any of the three 100-point scales were excluded.
The second and third attention checks explicitly instructed participants to respond in a particular way on two individual difference measures. The second attention check instructed participants to respond “strongly agree” on the MAC’s 100-point scale; participants who responded below 90 were excluded. The third attention check instructed participants to respond “strongly disagree” on the MFQ’s scale; participants who responded anything other than “strongly disagree” (a 1 on the 6-point scale) were excluded.
# Experimental task attention check variables
E1_all$AC_oblig <- rowSums(E1_all[, c("AC_oblig1", "AC_oblig2", "AC_oblig3")], na.rm = T)
E1_all$AC_likely <- rowSums(E1_all[, c("AC_likely1", "AC_likely2", "AC_likely3")], na.rm = T)
E1_all$AC_dnorm <- rowSums(E1_all[, c("AC_dnorm1", "AC_dnorm2", "AC_dnorm3")], na.rm = T) # if any are > 10, exclude
# Individual difference attention check variables
E1_all$MAC_AC <- E1_all$MAC_Jud_4 # if < 90, exclude
E1_all$MFQ_AC <- E1_all$MFQ_Jud_4 # if != 1, exclude
# Create dataset that filters out inattentive participants
E1_all_clean <- E1_all %>%
# excludes participants who were not paying attention during experimental task
filter(AC_oblig <= 10 & AC_likely <= 10 & AC_dnorm <= 10) %>%
# exclude participants who were not paying attention during MAC scale
filter(MAC_AC >= 90) %>%
# exclude participants who were not paying attention during MFQ scale
filter(MFQ_AC == 1)
# Main DVs
# create single column for each condition's variables that collapses across presentation order of DV
# e.g., STRnCUZ_STR_o1 = STR vs CUZ dataset, "No Choice" condition, STR obligation judgment, obligation judgment presented first
# to clarify, as noted in the Method section (and SOM), two other pre-outcome judgments were collected, equaling three different
# orders of presentation, each of which started with a different DV (these additional judgments are not included here)
E1_all_clean$NoChoice_STR_oblig <- rowSums(E1_all_clean[, c("STRnCUZ_STR_o1", "STRnCUZ_STR_o2", "STRnCUZ_STR_o3",
"STRnSIB_STR_o1", "STRnSIB_STR_o2", "STRnSIB_STR_o3")], na.rm = T)
E1_all_clean$NoChoice_STR_moral <- rowSums(E1_all_clean[, c("STRnCUZ_STR_m1", "STRnCUZ_STR_m2", "STRnCUZ_STR_m3",
"STRnSIB_STR_m1", "STRnSIB_STR_m2", "STRnSIB_STR_m3")], na.rm = T)
E1_all_clean$NoChoice_CUZ_oblig <- rowSums(E1_all_clean[, c("STRnCUZ_CUZ_o1", "STRnCUZ_CUZ_o2", "STRnCUZ_CUZ_o3",
"CUZnSIB_CUZ_o1", "CUZnSIB_CUZ_o2", "CUZnSIB_CUZ_o3")], na.rm = T)
E1_all_clean$NoChoice_CUZ_moral <- rowSums(E1_all_clean[, c("STRnCUZ_CUZ_m1", "STRnCUZ_CUZ_m2", "STRnCUZ_CUZ_m3",
"CUZnSIB_CUZ_m1", "CUZnSIB_CUZ_m2", "CUZnSIB_CUZ_m3")], na.rm = T)
E1_all_clean$NoChoice_SIB_oblig <- rowSums(E1_all_clean[, c("STRnSIB_SIB_o1", "STRnSIB_SIB_o2", "STRnSIB_SIB_o3",
"CUZnSIB_SIB_o1", "CUZnSIB_SIB_o2", "CUZnSIB_SIB_o3")], na.rm = T)
E1_all_clean$NoChoice_SIB_moral <- rowSums(E1_all_clean[, c("STRnSIB_SIB_m1", "STRnSIB_SIB_m2", "STRnSIB_SIB_m3",
"CUZnSIB_SIB_m1", "CUZnSIB_SIB_m2", "CUZnSIB_SIB_m3")], na.rm = T)
# e.g., STRoCUZ_STR_o11 = STR vs CUZ dataset, "Choice" condition, STR is eventually helped instead of CUZ,
# obligation judgments in general presented first (first 1), but obligation to STR presented before obligation to CUZ (second 1)
# to clarify, as noted in the Method section, two obligation judgments were collected in these conditions,
# one for each potential beneficiary (e.g., STR and CUZ)
E1_all_clean$STRoCUZ_STR_oblig <- rowSums(E1_all_clean[, c("STRoCUZ_STR_o11", "STRoCUZ_STR_o12", "STRoCUZ_STR_o31",
"STRoCUZ_STR_o21", "STRoCUZ_STR_o22", "STRoCUZ_STR_o32")], na.rm = T)
E1_all_clean$STRoCUZ_CUZ_oblig <- rowSums(E1_all_clean[, c("STRoCUZ_CUZ_o11", "STRoCUZ_CUZ_o12", "STRoCUZ_CUZ_o31",
"STRoCUZ_CUZ_o21", "STRoCUZ_CUZ_o22", "STRoCUZ_CUZ_o32")], na.rm = T)
E1_all_clean$STRoCUZ_STR_moral <- rowSums(E1_all_clean[, c("STRoCUZ_STR_m11", "STRoCUZ_STR_m12", "STRoCUZ_STR_m31",
"STRoCUZ_STR_m21", "STRoCUZ_STR_m22", "STRoCUZ_STR_m32")], na.rm = T)
E1_all_clean$CUZoSTR_STR_oblig <- rowSums(E1_all_clean[, c("CUZoSTR_STR_o11", "CUZoSTR_STR_o12", "CUZoSTR_STR_o31",
"CUZoSTR_STR_o21", "CUZoSTR_STR_o22", "CUZoSTR_STR_o32")], na.rm = T)
E1_all_clean$CUZoSTR_CUZ_oblig <- rowSums(E1_all_clean[, c("CUZoSTR_CUZ_o11", "CUZoSTR_CUZ_o12", "CUZoSTR_CUZ_o31",
"CUZoSTR_CUZ_o21", "CUZoSTR_CUZ_o22", "CUZoSTR_CUZ_o32")], na.rm = T)
E1_all_clean$CUZoSTR_CUZ_moral <- rowSums(E1_all_clean[, c("CUZoSTR_CUZ_m11", "CUZoSTR_CUZ_m12", "CUZoSTR_CUZ_m31",
"CUZoSTR_CUZ_m21", "CUZoSTR_CUZ_m22", "CUZoSTR_CUZ_m32")], na.rm = T)
E1_all_clean$Choice_STRnCUZ_STR_oblig <- (E1_all_clean$STRoCUZ_STR_oblig + E1_all_clean$CUZoSTR_STR_oblig)/2 # creates pre-registered obligation index for mean comparisons
E1_all_clean$Choice_STRnCUZ_STR_moral <- rowSums(E1_all_clean[, c(
"STRoCUZ_STR_m11", "STRoCUZ_STR_m12", "STRoCUZ_STR_m31",
"STRoCUZ_STR_m21", "STRoCUZ_STR_m22", "STRoCUZ_STR_m32")], na.rm = T) # should be a duplicate of "STRoCUZ_STR_moral"
E1_all_clean$Choice_STRnCUZ_CUZ_oblig <- (E1_all_clean$STRoCUZ_CUZ_oblig + E1_all_clean$CUZoSTR_CUZ_oblig)/2 # creates pre-registered obligation index for mean comparisons
E1_all_clean$Choice_STRnCUZ_CUZ_moral <- rowSums(E1_all_clean[, c(
"CUZoSTR_CUZ_m11", "CUZoSTR_CUZ_m12", "CUZoSTR_CUZ_m31",
"CUZoSTR_CUZ_m21", "CUZoSTR_CUZ_m22", "CUZoSTR_CUZ_m32")], na.rm = T) # should be a duplicate of "CUZoSTR_CUZ_moral"
E1_all_clean$STRoSIB_STR_oblig <- rowSums(E1_all_clean[, c("STRoSIB_STR_o11", "STRoSIB_STR_o12", "STRoSIB_STR_o31",
"STRoSIB_STR_o21", "STRoSIB_STR_o22", "STRoSIB_STR_o32")], na.rm = T)
E1_all_clean$STRoSIB_SIB_oblig <- rowSums(E1_all_clean[, c("STRoSIB_SIB_o11", "STRoSIB_SIB_o12", "STRoSIB_SIB_o31",
"STRoSIB_SIB_o21", "STRoSIB_SIB_o22", "STRoSIB_SIB_o32")], na.rm = T)
E1_all_clean$STRoSIB_STR_moral <- rowSums(E1_all_clean[, c("STRoSIB_STR_m11", "STRoSIB_STR_m12", "STRoSIB_STR_m31",
"STRoSIB_STR_m21", "STRoSIB_STR_m22", "STRoSIB_STR_m32")], na.rm = T)
E1_all_clean$SIBoSTR_STR_oblig <- rowSums(E1_all_clean[, c("SIBoSTR_STR_o11", "SIBoSTR_STR_o12", "SIBoSTR_STR_o31",
"SIBoSTR_STR_o21", "SIBoSTR_STR_o22", "SIBoSTR_STR_o32")], na.rm = T)
E1_all_clean$SIBoSTR_SIB_oblig <- rowSums(E1_all_clean[, c("SIBoSTR_SIB_o11", "SIBoSTR_SIB_o12", "SIBoSTR_SIB_o31",
"SIBoSTR_SIB_o21", "SIBoSTR_SIB_o22", "SIBoSTR_SIB_o32")], na.rm = T)
E1_all_clean$SIBoSTR_SIB_moral <- rowSums(E1_all_clean[, c("SIBoSTR_SIB_m11", "SIBoSTR_SIB_m12", "SIBoSTR_SIB_m31",
"SIBoSTR_SIB_m21", "SIBoSTR_SIB_m22", "SIBoSTR_SIB_m32")], na.rm = T)
E1_all_clean$Choice_STRnSIB_STR_oblig <- (E1_all_clean$STRoSIB_STR_oblig + E1_all_clean$SIBoSTR_STR_oblig)/2 # creates pre-registered obligation index for mean comparisons
E1_all_clean$Choice_STRnSIB_STR_moral <- rowSums(E1_all_clean[, c(
"STRoSIB_STR_m11", "STRoSIB_STR_m12", "STRoSIB_STR_m31",
"STRoSIB_STR_m21", "STRoSIB_STR_m22", "STRoSIB_STR_m32")], na.rm = T) # should be a duplicate of "STRoSIB_STR_moral"
E1_all_clean$Choice_STRnSIB_SIB_oblig <- (E1_all_clean$STRoSIB_SIB_oblig + E1_all_clean$SIBoSTR_SIB_oblig)/2 # creates pre-registered obligation index for mean comparisons
E1_all_clean$Choice_STRnSIB_SIB_moral <- rowSums(E1_all_clean[, c(
"SIBoSTR_SIB_m11", "SIBoSTR_SIB_m12", "SIBoSTR_SIB_m31",
"SIBoSTR_SIB_m21", "SIBoSTR_SIB_m22", "SIBoSTR_SIB_m32")], na.rm = T) # should be a duplicate of "SIBoSTR_SIB_moral"
E1_all_clean$CUZoSIB_CUZ_oblig <- rowSums(E1_all_clean[, c("CUZoSIB_CUZ_o11", "CUZoSIB_CUZ_o12", "CUZoSIB_CUZ_o31",
"CUZoSIB_CUZ_o21", "CUZoSIB_CUZ_o22", "CUZoSIB_CUZ_o32")], na.rm = T)
E1_all_clean$CUZoSIB_SIB_oblig <- rowSums(E1_all_clean[, c("CUZoSIB_SIB_o11", "CUZoSIB_SIB_o12", "CUZoSIB_SIB_o31",
"CUZoSIB_SIB_o21", "CUZoSIB_SIB_o22", "CUZoSIB_SIB_o32")], na.rm = T)
E1_all_clean$CUZoSIB_CUZ_moral <- rowSums(E1_all_clean[, c("CUZoSIB_CUZ_m11", "CUZoSIB_CUZ_m12", "CUZoSIB_CUZ_m31",
"CUZoSIB_CUZ_m21", "CUZoSIB_CUZ_m22", "CUZoSIB_CUZ_m32")], na.rm = T)
E1_all_clean$SIBoCUZ_CUZ_oblig <- rowSums(E1_all_clean[, c("SIBoCUZ_CUZ_o11", "SIBoCUZ_CUZ_o12", "SIBoCUZ_CUZ_o31",
"SIBoCUZ_CUZ_o21", "SIBoCUZ_CUZ_o22", "SIBoCUZ_CUZ_o32")], na.rm = T)
E1_all_clean$SIBoCUZ_SIB_oblig <- rowSums(E1_all_clean[, c("SIBoCUZ_SIB_o11", "SIBoCUZ_SIB_o12", "SIBoCUZ_SIB_o31",
"SIBoCUZ_SIB_o21", "SIBoCUZ_SIB_o22", "SIBoCUZ_SIB_o32")], na.rm = T)
E1_all_clean$SIBoCUZ_SIB_moral <- rowSums(E1_all_clean[, c("SIBoCUZ_SIB_m11", "SIBoCUZ_SIB_m12", "SIBoCUZ_SIB_m31",
"SIBoCUZ_SIB_m21", "SIBoCUZ_SIB_m22", "SIBoCUZ_SIB_m32")], na.rm = T)
E1_all_clean$Choice_CUZnSIB_CUZ_oblig <- (E1_all_clean$CUZoSIB_CUZ_oblig + E1_all_clean$SIBoCUZ_CUZ_oblig)/2 # creates pre-registered obligation index for mean comparisons
E1_all_clean$Choice_CUZnSIB_CUZ_moral <- rowSums(E1_all_clean[, c(
"CUZoSIB_CUZ_m11", "CUZoSIB_CUZ_m12", "CUZoSIB_CUZ_m31",
"CUZoSIB_CUZ_m21", "CUZoSIB_CUZ_m22", "CUZoSIB_CUZ_m32")], na.rm = T) # should be a duplicate of "CUZoSIB_CUZ_moral"
E1_all_clean$Choice_CUZnSIB_SIB_oblig <- (E1_all_clean$CUZoSIB_SIB_oblig + E1_all_clean$SIBoCUZ_SIB_oblig)/2 # creates pre-registered obligation index for mean comparisons
E1_all_clean$Choice_CUZnSIB_SIB_moral <- rowSums(E1_all_clean[, c(
"SIBoCUZ_SIB_m11", "SIBoCUZ_SIB_m12", "SIBoCUZ_SIB_m31",
"SIBoCUZ_SIB_m21", "SIBoCUZ_SIB_m22", "SIBoCUZ_SIB_m32")], na.rm = T) # should be a duplicate of "SIBoCUZ_SIB_moral"
# Difference Scores
# e.g., STR minus CUZ obligation within No Choice or Choice conditions (for diff score corrs and ind. diffs analyses)
# STR vs CUZ
E1_all_clean$NoChoice_STRminusCUZ_oblig <- E1_all_clean$NoChoice_STR_oblig - E1_all_clean$NoChoice_CUZ_oblig
E1_all_clean$NoChoice_STRminusCUZ_moral <- E1_all_clean$NoChoice_STR_moral - E1_all_clean$NoChoice_CUZ_moral
E1_all_clean$Choice_STRminusCUZ_oblig <- E1_all_clean$Choice_STRnCUZ_STR_oblig - E1_all_clean$Choice_STRnCUZ_CUZ_oblig
E1_all_clean$Choice_STRminusCUZ_moral <- E1_all_clean$Choice_STRnCUZ_STR_moral - E1_all_clean$Choice_STRnCUZ_CUZ_moral
# STR vs SIB
E1_all_clean$NoChoice_STRminusSIB_oblig <- E1_all_clean$NoChoice_STR_oblig - E1_all_clean$NoChoice_SIB_oblig
E1_all_clean$NoChoice_STRminusSIB_moral <- E1_all_clean$NoChoice_STR_moral - E1_all_clean$NoChoice_SIB_moral
E1_all_clean$Choice_STRminusSIB_oblig <- E1_all_clean$Choice_STRnSIB_STR_oblig - E1_all_clean$Choice_STRnSIB_SIB_oblig
E1_all_clean$Choice_STRminusSIB_moral <- E1_all_clean$Choice_STRnSIB_STR_moral - E1_all_clean$Choice_STRnSIB_SIB_moral
# CUZ vs SIB
E1_all_clean$NoChoice_CUZminusSIB_oblig <- E1_all_clean$NoChoice_CUZ_oblig - E1_all_clean$NoChoice_SIB_oblig
E1_all_clean$NoChoice_CUZminusSIB_moral <- E1_all_clean$NoChoice_CUZ_moral - E1_all_clean$NoChoice_SIB_moral
E1_all_clean$Choice_CUZminusSIB_oblig <- E1_all_clean$Choice_CUZnSIB_CUZ_oblig - E1_all_clean$Choice_CUZnSIB_SIB_oblig
E1_all_clean$Choice_CUZminusSIB_moral <- E1_all_clean$Choice_CUZnSIB_CUZ_moral - E1_all_clean$Choice_CUZnSIB_SIB_moral
# Individual Difference Measures (for ind. diffs analyses)
# MAC (Morality-as-Cooperation scale) composite
E1_all_clean$MAC_Fam_Combined <- ((E1_all_clean$MAC_Jud_1 + E1_all_clean$MAC_Jud_2 + E1_all_clean$MAC_Jud_3 +
E1_all_clean$MAC_Rel_1 + E1_all_clean$MAC_Rel_2 + E1_all_clean$MAC_Rel_3)/6)
E1_all_clean$MAC_Fam_Jud <- ((E1_all_clean$MAC_Jud_1 + E1_all_clean$MAC_Jud_2 + E1_all_clean$MAC_Jud_3)/3)
E1_all_clean$MAC_Fam_Rel <- ((E1_all_clean$MAC_Rel_1 + E1_all_clean$MAC_Rel_2 + E1_all_clean$MAC_Rel_3)/3)
# MFQ (Moral Foundations Questionnaire) composite
E1_all_clean$MFQ_Loyalty_Combined <- ((E1_all_clean$MFQ_Jud_1 + E1_all_clean$MFQ_Jud_2 + E1_all_clean$MFQ_Jud_3 +
E1_all_clean$MFQ_Rel_1 + E1_all_clean$MFQ_Rel_2 + E1_all_clean$MFQ_Rel_3)/6)
E1_all_clean$MFQ_Loyalty_Jud <- ((E1_all_clean$MFQ_Jud_1 + E1_all_clean$MFQ_Jud_2 + E1_all_clean$MFQ_Jud_3)/3)
E1_all_clean$MFQ_Loyalty_Rel <- ((E1_all_clean$MFQ_Rel_1 + E1_all_clean$MFQ_Rel_2 + E1_all_clean$MFQ_Rel_3)/3)
# OUS (Oxford Utilitarianism Scale: Impartial Beneficence subscale) composite
E1_all_clean$OUS_IB <- ((E1_all_clean$OUS_IB1 + E1_all_clean$OUS_IB2 + E1_all_clean$OUS_IB3 +
E1_all_clean$OUS_IB4 + E1_all_clean$OUS_IB5)/5)
# STR vs CUZ
E1_STRnCUZ_clean <- E1_all_clean %>%
filter(BSs_cond == 'STR vs CUZ') %>%
# select only variables that are relevant to STRnCUZ
select(
ResponseId, # selects variable
Age:Political_Overall, # selects demographic variables
MAC_Jud_1:MAC_Jud_3, MAC_Rel_1:MAC_Rel_3, MFQ_Jud_1:MFQ_Jud_3, MFQ_Rel_1:MFQ_Rel_3, OUS_IB1:OUS_IB5, # selects raw ind. diff variables (for reliability check)
MAC_Fam_Combined:OUS_IB, # selects composited ind. diff variables
BSs_cond, # selects variable for between-subjects condition
Dist_Scen:CloseODist_Scen, # selects within-subjects condition variable
NoChoice_STR_oblig:NoChoice_CUZ_moral, # selects NoChoice DVs for STRnCUZ dataset
Choice_STRnCUZ_STR_oblig:Choice_STRnCUZ_CUZ_moral, # selects Choice DVs for STRnCUZ dataset
NoChoice_STRminusCUZ_oblig:Choice_STRminusCUZ_moral # selects difference score variables for STRnCUZ dataset
)
# STR vs SIB
E1_STRnSIB_clean <- E1_all_clean %>%
filter(BSs_cond == 'STR vs SIB') %>%
# select only variables that are relevant to STRnSIB
select(
ResponseId, # selects variable
Age:Political_Overall, # selects demographic variables
MAC_Jud_1:MAC_Jud_3, MAC_Rel_1:MAC_Rel_3, MFQ_Jud_1:MFQ_Jud_3, MFQ_Rel_1:MFQ_Rel_3, OUS_IB1:OUS_IB5, # selects raw ind. diff variables (for reliability check)
MAC_Fam_Combined:OUS_IB, # selects composited ind. diff variables
BSs_cond, # selects variable for between-subjects condition
Dist_Scen:CloseODist_Scen, # selects within-subjects condition variable
NoChoice_STR_oblig:NoChoice_STR_moral, NoChoice_SIB_oblig:NoChoice_SIB_moral, # selects NoChoice DVs for STRnSIB dataset
Choice_STRnSIB_STR_oblig:Choice_STRnSIB_SIB_moral, # selects Choice DVs for STRnSIB dataset
NoChoice_STRminusSIB_oblig:Choice_STRminusSIB_moral # selects difference score variables for STRnSIB dataset
)
# CUZ vs SIB
E1_CUZnSIB_clean <- E1_all_clean %>%
filter(BSs_cond == 'CUZ vs SIB') %>%
# select only variables that are relevant to CUZnSIB
select(
ResponseId, # selects variable
Age:Political_Overall, # selects demographic variables
MAC_Jud_1:MAC_Jud_3, MAC_Rel_1:MAC_Rel_3, MFQ_Jud_1:MFQ_Jud_3, MFQ_Rel_1:MFQ_Rel_3, OUS_IB1:OUS_IB5, # selects raw ind. diff variables (for reliability check)
MAC_Fam_Combined:OUS_IB, # selects composited ind. diff variables
BSs_cond, # selects variable for between-subjects condition
Dist_Scen:CloseODist_Scen, # selects within-subjects condition variable
NoChoice_CUZ_oblig:NoChoice_SIB_moral, # selects NoChoice DVs for CUZnSIB dataset
Choice_CUZnSIB_CUZ_oblig:Choice_CUZnSIB_SIB_moral, # selects Choice DVs for CUZnSIB dataset
NoChoice_CUZminusSIB_oblig:Choice_CUZminusSIB_moral # selects difference score variables for CUZnSIB dataset
)
# Convert data from wide to long format for obligation and moral character
# STRnCUZ
E1_STRnCUZ_cond_long <- E1_STRnCUZ_clean %>%
pivot_longer(
cols = c(Dist_Scen, Close_Scen, DistOClose_Scen, CloseODist_Scen),
names_to = "WSs_cond",
values_to = "Condition"
)
E1_STRnCUZ_oblig_long <- E1_STRnCUZ_clean %>%
pivot_longer(
cols = c(NoChoice_STR_oblig, NoChoice_CUZ_oblig, Choice_STRnCUZ_STR_oblig, Choice_STRnCUZ_CUZ_oblig),
names_to = "WSs_cond",
values_to = "oblig"
)
E1_STRnCUZ_moral_long <- E1_STRnCUZ_clean %>%
pivot_longer(
cols = c(NoChoice_STR_moral, NoChoice_CUZ_moral, Choice_STRnCUZ_STR_moral, Choice_STRnCUZ_CUZ_moral),
names_to = "WSs_cond",
values_to = "moral"
)
# Combine long STRnCUZ datasets, select plotting variables, and create condition variable for each factor (Relation + Choice Context)
E1_STRnCUZ_long <- cbind(E1_STRnCUZ_cond_long, E1_STRnCUZ_oblig_long, E1_STRnCUZ_moral_long)
E1_STRnCUZ_long <- E1_STRnCUZ_long[, !duplicated(colnames(E1_STRnCUZ_long))] %>% # get rid of duplicate columns
select(ResponseId,
Age:OUS_IB,
BSs_cond,
WSs_cond,
Condition,
oblig, moral) %>%
mutate(Relation = case_when(
WSs_cond == "Dist_Scen" ~ "Distant",
WSs_cond == "Close_Scen" ~ "Close",
WSs_cond == "DistOClose_Scen" ~ "Distant",
WSs_cond == "CloseODist_Scen" ~ "Close")) %>%
mutate(`Choice Context` = case_when(
WSs_cond == "Dist_Scen" ~ "No Choice",
WSs_cond == "Close_Scen" ~ "No Choice",
WSs_cond == "DistOClose_Scen" ~ "Choice",
WSs_cond == "CloseODist_Scen" ~ "Choice"))
# Reorder/rename condition and participant factors
E1_STRnCUZ_long$Relation <- as.factor(E1_STRnCUZ_long$Relation)
E1_STRnCUZ_long$Relation <- ordered(E1_STRnCUZ_long$Relation, levels = c("Distant", "Close"))
E1_STRnCUZ_long$`Choice Context` <- as.factor(E1_STRnCUZ_long$`Choice Context`)
E1_STRnCUZ_long$`Choice Context` <- ordered(E1_STRnCUZ_long$`Choice Context`, levels = c("No Choice", "Choice"))
E1_STRnCUZ_long$ResponseId <- as.factor(E1_STRnCUZ_long$ResponseId)
# Convert data from wide to long format for obligation and moral character
# STRnSIB
E1_STRnSIB_cond_long <- E1_STRnSIB_clean %>%
pivot_longer(
cols = c(Dist_Scen, Close_Scen, DistOClose_Scen, CloseODist_Scen),
names_to = "WSs_cond",
values_to = "Condition"
)
E1_STRnSIB_oblig_long <- E1_STRnSIB_clean %>%
pivot_longer(
cols = c(NoChoice_STR_oblig, NoChoice_SIB_oblig, Choice_STRnSIB_STR_oblig, Choice_STRnSIB_SIB_oblig),
names_to = "WSs_cond",
values_to = "oblig"
)
E1_STRnSIB_moral_long <- E1_STRnSIB_clean %>%
pivot_longer(
cols = c(NoChoice_STR_moral, NoChoice_SIB_moral, Choice_STRnSIB_STR_moral, Choice_STRnSIB_SIB_moral),
names_to = "WSs_cond",
values_to = "moral"
)
# Combine long STRnSIB datasets, select plotting variables, and create condition variable for each factor (Relation + Choice Context)
E1_STRnSIB_long <- cbind(E1_STRnSIB_cond_long, E1_STRnSIB_oblig_long, E1_STRnSIB_moral_long)
E1_STRnSIB_long <- E1_STRnSIB_long[, !duplicated(colnames(E1_STRnSIB_long))] %>% # get rid of duplicate columns
select(ResponseId,
Age:OUS_IB,
BSs_cond,
WSs_cond,
Condition,
oblig, moral) %>%
mutate(Relation = case_when(
WSs_cond == "Dist_Scen" ~ "Distant",
WSs_cond == "Close_Scen" ~ "Close",
WSs_cond == "DistOClose_Scen" ~ "Distant",
WSs_cond == "CloseODist_Scen" ~ "Close")) %>%
mutate(`Choice Context` = case_when(
WSs_cond == "Dist_Scen" ~ "No Choice",
WSs_cond == "Close_Scen" ~ "No Choice",
WSs_cond == "DistOClose_Scen" ~ "Choice",
WSs_cond == "CloseODist_Scen" ~ "Choice"))
# Reorder/rename condition and participant factors
E1_STRnSIB_long$Relation <- as.factor(E1_STRnSIB_long$Relation)
E1_STRnSIB_long$Relation <- ordered(E1_STRnSIB_long$Relation, levels = c("Distant", "Close"))
E1_STRnSIB_long$`Choice Context` <- as.factor(E1_STRnSIB_long$`Choice Context`)
E1_STRnSIB_long$`Choice Context` <- ordered(E1_STRnSIB_long$`Choice Context`, levels = c("No Choice", "Choice"))
E1_STRnSIB_long$ResponseId <- as.factor(E1_STRnSIB_long$ResponseId)
# Convert data from wide to long format for obligation and moral character
# CUZnSIB
E1_CUZnSIB_cond_long <- E1_CUZnSIB_clean %>%
pivot_longer(
cols = c(Dist_Scen, Close_Scen, DistOClose_Scen, CloseODist_Scen),
names_to = "WSs_cond",
values_to = "Condition"
)
E1_CUZnSIB_oblig_long <- E1_CUZnSIB_clean %>%
pivot_longer(
cols = c(NoChoice_CUZ_oblig, NoChoice_SIB_oblig, Choice_CUZnSIB_CUZ_oblig, Choice_CUZnSIB_SIB_oblig),
names_to = "WSs_cond",
values_to = "oblig"
)
E1_CUZnSIB_moral_long <- E1_CUZnSIB_clean %>%
pivot_longer(
cols = c(NoChoice_CUZ_moral, NoChoice_SIB_moral, Choice_CUZnSIB_CUZ_moral, Choice_CUZnSIB_SIB_moral),
names_to = "WSs_cond",
values_to = "moral"
)
# Combine long CUZnSIB datasets, select plotting variables, and create condition variable for each factor (Relation + Choice Context)
E1_CUZnSIB_long <- cbind(E1_CUZnSIB_cond_long, E1_CUZnSIB_oblig_long, E1_CUZnSIB_moral_long)
E1_CUZnSIB_long <- E1_CUZnSIB_long[, !duplicated(colnames(E1_CUZnSIB_long))] %>% # get rid of duplicate columns
select(ResponseId,
Age:OUS_IB,
BSs_cond,
WSs_cond,
Condition,
oblig, moral) %>%
mutate(Relation = case_when(
WSs_cond == "Dist_Scen" ~ "Distant",
WSs_cond == "Close_Scen" ~ "Close",
WSs_cond == "DistOClose_Scen" ~ "Distant",
WSs_cond == "CloseODist_Scen" ~ "Close")) %>%
mutate(`Choice Context` = case_when(
WSs_cond == "Dist_Scen" ~ "No Choice",
WSs_cond == "Close_Scen" ~ "No Choice",
WSs_cond == "DistOClose_Scen" ~ "Choice",
WSs_cond == "CloseODist_Scen" ~ "Choice"))
# Reorder/rename condition and participant factors
E1_CUZnSIB_long$Relation <- as.factor(E1_CUZnSIB_long$Relation)
E1_CUZnSIB_long$Relation <- ordered(E1_CUZnSIB_long$Relation, levels = c("Distant", "Close"))
E1_CUZnSIB_long$`Choice Context` <- as.factor(E1_CUZnSIB_long$`Choice Context`)
E1_CUZnSIB_long$`Choice Context` <- ordered(E1_CUZnSIB_long$`Choice Context`, levels = c("No Choice", "Choice"))
E1_CUZnSIB_long$ResponseId <- as.factor(E1_CUZnSIB_long$ResponseId)
# Combine into one dataset for later exploratory analyses
E1_all_long <- rbind(E1_STRnSIB_long, E1_STRnCUZ_long, E1_CUZnSIB_long)
# Reorder All_long BSs_cond
E1_all_long$BSs_cond <- as.factor(E1_all_long$BSs_cond)
E1_all_long$BSs_cond <- ordered(E1_all_long$BSs_cond, levels = c("STR vs SIB", "STR vs CUZ", "CUZ vs SIB")) # order is due to likely differences between datasets (largest to smallest - see pre-registration's "exploratory analysis" section)
# Descriptive statistics for obligation judgments, by Relation and Choice Context
describeBy(E1_STRnSIB_long$oblig, list(E1_STRnSIB_long$Relation, E1_STRnSIB_long$`Choice Context`), mat = T)
describeBy(E1_STRnCUZ_long$oblig, list(E1_STRnCUZ_long$Relation, E1_STRnCUZ_long$`Choice Context`), mat = T)
describeBy(E1_CUZnSIB_long$oblig, list(E1_CUZnSIB_long$Relation, E1_CUZnSIB_long$`Choice Context`), mat = T)
# Descriptive statistics for moral character judgments, by Relation and Choice Context
describeBy(E1_STRnSIB_long$moral, list(E1_STRnSIB_long$Relation, E1_STRnSIB_long$`Choice Context`), mat = T)
describeBy(E1_STRnCUZ_long$moral, list(E1_STRnCUZ_long$Relation, E1_STRnCUZ_long$`Choice Context`), mat = T)
describeBy(E1_CUZnSIB_long$moral, list(E1_CUZnSIB_long$Relation, E1_CUZnSIB_long$`Choice Context`), mat = T)
# Set dodge for plotting crossed factors
dodge <- position_dodge(width = 1)
print(oblig_plot_STRnSIB <- ggplot(data = E1_STRnSIB_long, aes(x = `Choice Context`, y = oblig, fill = Relation)) +
geom_violin(aes(fill = Relation), position = dodge) +
geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
theme(legend.position = "right") +
theme_classic() +
xlab("Choice Context") +
ylab("Obligation Strength") +
theme(axis.title.x = element_text(size = 14),
axis.title.y = element_text(size = 14),
axis.text.x = element_text(color = "black", size = 12),
axis.text.y = element_text(color = "black", size = 12),
legend.position = "right",
legend.title = element_text(color = "black", size = 14),
legend.text = element_text(color = "black", size = 12)))
print(oblig_plot_STRnCUZ <- ggplot(data = E1_STRnCUZ_long, aes(x = `Choice Context`, y = oblig, fill = Relation)) +
geom_violin(aes(fill = Relation), position = dodge) +
geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
theme(legend.position = "right") +
theme_classic() +
xlab("Choice Context") +
ylab("Obligation Strength") +
theme(axis.title.x = element_text(size = 14),
axis.title.y = element_text(size = 14),
axis.text.x = element_text(color = "black", size = 12),
axis.text.y = element_text(color = "black", size = 12),
legend.position = "right",
legend.title = element_text(color = "black", size = 14),
legend.text = element_text(color = "black", size = 12)))
print(oblig_plot_CUZnSIB <- ggplot(data = E1_CUZnSIB_long, aes(x = `Choice Context`, y = oblig, fill = Relation)) +
geom_violin(aes(fill = Relation), position = dodge) +
geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
theme(legend.position = "right") +
theme_classic() +
xlab("Choice Context") +
ylab("Obligation Strength") +
theme(axis.title.x = element_text(size = 14),
axis.title.y = element_text(size = 14),
axis.text.x = element_text(color = "black", size = 12),
axis.text.y = element_text(color = "black", size = 12),
legend.position = "right",
legend.title = element_text(color = "black", size = 14),
legend.text = element_text(color = "black", size = 12)))
print(oblig_plot_combined <- ggplot(data = E1_all_long, aes(x = `Choice Context`, y = oblig, fill = Relation)) +
geom_violin(aes(fill = Relation), position = dodge) +
geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
theme(legend.position = "right") +
theme_classic() +
facet_wrap(~BSs_cond, nrow = 3) +
xlab("\nChoice Context") +
ylab("Obligation Strength\n") +
theme(axis.title.x = element_text(size = 18),
axis.title.y = element_text(size = 18),
axis.text.x = element_text(color = "black", size = 16),
axis.text.y = element_text(color = "black", size = 16),
strip.text.x = element_text(color = "black", size = 16),
legend.position = "right",
legend.title = element_text(color = "black", size = 18),
legend.text = element_text(color = "black", size = 16)))
ggsave("E1_oblig_plot.png")Saving 14 x 9 in image
print(moral_plot_STRnSIB <- ggplot(data = E1_STRnSIB_long, aes(x = `Choice Context`, y = moral, fill = Relation)) +
geom_hline(yintercept = 50, linetype = "dashed", color = "black") +
geom_violin(aes(fill = Relation), position = dodge) +
geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
theme(legend.position = "right") +
theme_classic() +
xlab("Choice Context") +
ylab("Moral Character") +
theme(axis.title.x = element_text(size = 14),
axis.title.y = element_text(size = 14),
axis.text.x = element_text(color = "black", size = 12),
axis.text.y = element_text(color = "black", size = 12),
legend.position = "right",
legend.title = element_text(color = "black", size = 14),
legend.text = element_text(color = "black", size = 12)))
print(moral_plot_STRnCUZ <- ggplot(data = E1_STRnCUZ_long, aes(x = `Choice Context`, y = moral, fill = Relation)) +
geom_hline(yintercept = 50, linetype = "dashed", color = "black") +
geom_violin(aes(fill = Relation), position = dodge) +
geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
theme(legend.position = "right") +
theme_classic() +
xlab("Choice Context") +
ylab("Moral Character") +
theme(axis.title.x = element_text(size = 14),
axis.title.y = element_text(size = 14),
axis.text.x = element_text(color = "black", size = 12),
axis.text.y = element_text(color = "black", size = 12),
legend.position = "right",
legend.title = element_text(color = "black", size = 14),
legend.text = element_text(color = "black", size = 12)))
print(moral_plot_CUZnSIB <- ggplot(data = E1_CUZnSIB_long, aes(x = `Choice Context`, y = moral, fill = Relation)) +
geom_hline(yintercept = 50, linetype = "dashed", color = "black") +
geom_violin(aes(fill = Relation), position = dodge) +
geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
theme(legend.position = "right") +
theme_classic() +
xlab("Choice Context") +
ylab("Moral Character") +
theme(axis.title.x = element_text(size = 14),
axis.title.y = element_text(size = 14),
axis.text.x = element_text(color = "black", size = 12),
axis.text.y = element_text(color = "black", size = 12),
legend.position = "right",
legend.title = element_text(color = "black", size = 14),
legend.text = element_text(color = "black", size = 12)))
print(moral_plot_combined <- ggplot(data = E1_all_long, aes(x = `Choice Context`, y = moral, fill = Relation)) +
geom_hline(yintercept = 50, linetype = "dashed", color = "black") +
geom_violin(aes(fill = Relation), position = dodge) +
geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
theme(legend.position = "right") +
theme_classic() +
facet_wrap(~BSs_cond, nrow = 3) +
xlab("\nChoice Context") +
ylab("Moral Character\n") +
theme(axis.title.x = element_text(size = 18),
axis.title.y = element_text(size = 18),
axis.text.x = element_text(color = "black", size = 16),
axis.text.y = element_text(color = "black", size = 16),
strip.text.x = element_text(color = "black", size = 16),
legend.position = "right",
legend.title = element_text(color = "black", size = 18),
legend.text = element_text(color = "black", size = 16)))
ggsave("E1_moral_plot.png")Saving 14 x 9 in image
See our pre-registration (https://osf.io/6q4kr/?view_only=8033c65c9dfc41bcbc528fe94409e4e8) for our predictions related to obligation judgments and moral character judgments.
# returns t-test results
t.test(oblig ~ Relation,
data = E1_all_long %>% filter(BSs_cond == "STR vs SIB") %>% filter(`Choice Context` == "No Choice") %>% droplevels(),
paired = T)
Paired t-test
data: oblig by Relation
t = -6.1598, df = 202, p-value = 3.873e-09
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-22.86444 -11.77596
sample estimates:
mean of the differences
-17.3202
# returns dz effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "STR vs SIB") %>% filter(`Choice Context` == "No Choice") %>% droplevels(),
paired = T,
within = F) # setting this to false ensures dz is calculated, using difference score
Cohen's d
d estimate: -0.4323351 (small)
95 percent confidence interval:
lower upper
-0.5766144 -0.2880559
# returns d-av effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "STR vs SIB") %>% filter(`Choice Context` == "No Choice") %>% droplevels(),
paired = T,
within = T) # setting this to true ensures d-av is calculated, using raw scores
Cohen's d
d estimate: -0.5435963 (medium)
95 percent confidence interval:
lower upper
-0.7294547 -0.3577378
# returns correlation between variables
cor_test(data = E1_STRnSIB_clean, "NoChoice_STR_oblig", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(201) | p
--------------------------------------------------------------------------------
NoChoice_STR_oblig | NoChoice_SIB_oblig | 0.21 | [0.07, 0.34] | 3.04 | 0.003**
Observations: 203
# returns t-test results
t.test(oblig ~ Relation,
data = E1_all_long %>% filter(BSs_cond == "STR vs SIB") %>% filter(`Choice Context` == "Choice") %>% droplevels(),
paired = T)
Paired t-test
data: oblig by Relation
t = -11.476, df = 202, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-21.11008 -14.91947
sample estimates:
mean of the differences
-18.01478
# returns dz effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "STR vs SIB") %>% filter(`Choice Context` == "Choice") %>% droplevels(),
paired = T,
within = F) # setting this to false ensures dz is calculated, using difference score
Cohen's d
d estimate: -0.8054442 (large)
95 percent confidence interval:
lower upper
-0.9642286 -0.6466598
# returns d-av effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "STR vs SIB") %>% filter(`Choice Context` == "Choice") %>% droplevels(),
paired = T,
within = T) # setting this to true ensures d-av is calculated, using raw scores
Cohen's d
d estimate: -0.6748516 (medium)
95 percent confidence interval:
lower upper
-0.8029442 -0.5467589
# returns correlation between variables
cor_test(data = E1_STRnSIB_clean, "Choice_STRnSIB_STR_oblig", "Choice_STRnSIB_SIB_oblig", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(201) | p
----------------------------------------------------------------------------------------------
Choice_STRnSIB_STR_oblig | Choice_STRnSIB_SIB_oblig | 0.65 | [0.56, 0.72] | 12.09 | < .001***
Observations: 203
# returns t-test results
t.test(oblig ~ Relation,
data = E1_all_long %>% filter(BSs_cond == "STR vs CUZ") %>% filter(`Choice Context` == "No Choice") %>% droplevels(),
paired = T)
Paired t-test
data: oblig by Relation
t = -6.28, df = 202, p-value = 2.037e-09
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-23.97522 -12.51739
sample estimates:
mean of the differences
-18.24631
# returns dz effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "STR vs CUZ") %>% filter(`Choice Context` == "No Choice") %>% droplevels(),
paired = T,
within = F) # setting this to false ensures dz is calculated, using difference score
Cohen's d
d estimate: -0.4407704 (small)
95 percent confidence interval:
lower upper
-0.5852924 -0.2962484
# returns d-av effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "STR vs CUZ") %>% filter(`Choice Context` == "No Choice") %>% droplevels(),
paired = T,
within = T) # setting this to true ensures d-av is calculated, using raw scores
Cohen's d
d estimate: -0.5701483 (medium)
95 percent confidence interval:
lower upper
-0.7625822 -0.3777145
# returns correlation between variables
cor_test(data = E1_STRnCUZ_clean, "NoChoice_STR_oblig", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(201) | p
-------------------------------------------------------------------------------
NoChoice_STR_oblig | NoChoice_CUZ_oblig | 0.16 | [0.03, 0.29] | 2.35 | 0.020*
Observations: 203
# returns t-test results
t.test(oblig ~ Relation,
data = E1_all_long %>% filter(BSs_cond == "STR vs CUZ") %>% filter(`Choice Context` == "Choice") %>% droplevels(),
paired = T)
Paired t-test
data: oblig by Relation
t = -10.245, df = 202, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-15.32575 -10.37869
sample estimates:
mean of the differences
-12.85222
# returns dz effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "STR vs CUZ") %>% filter(`Choice Context` == "Choice") %>% droplevels(),
paired = T,
within = F) # setting this to false ensures dz is calculated, using difference score
Cohen's d
d estimate: -0.7190697 (medium)
95 percent confidence interval:
lower upper
-0.8738569 -0.5642825
# returns d-av effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "STR vs CUZ") %>% filter(`Choice Context` == "Choice") %>% droplevels(),
paired = T,
within = T) # setting this to true ensures d-av is calculated, using raw scores
Cohen's d
d estimate: -0.491025 (small)
95 percent confidence interval:
lower upper
-0.590761 -0.391289
# returns correlation between variables
cor_test(data = E1_STRnCUZ_clean, "Choice_STRnCUZ_STR_oblig", "Choice_STRnCUZ_CUZ_oblig", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(201) | p
----------------------------------------------------------------------------------------------
Choice_STRnCUZ_STR_oblig | Choice_STRnCUZ_CUZ_oblig | 0.77 | [0.70, 0.82] | 16.94 | < .001***
Observations: 203
# returns t-test results
t.test(oblig ~ Relation,
data = E1_all_long %>% filter(BSs_cond == "CUZ vs SIB") %>% filter(`Choice Context` == "No Choice") %>% droplevels(),
paired = T)
Paired t-test
data: oblig by Relation
t = -2.4887, df = 204, p-value = 0.01362
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-11.688854 -1.355049
sample estimates:
mean of the differences
-6.521951
# returns dz effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "CUZ vs SIB") %>% filter(`Choice Context` == "No Choice") %>% droplevels(),
paired = T,
within = F) # setting this to false ensures dz is calculated, using difference score
Cohen's d
d estimate: -0.1738211 (negligible)
95 percent confidence interval:
lower upper
-0.31215141 -0.03549085
# returns d-av effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "CUZ vs SIB") %>% filter(`Choice Context` == "No Choice") %>% droplevels(),
paired = T,
within = T) # setting this to true ensures d-av is calculated, using raw scores
Cohen's d
d estimate: -0.1892702 (negligible)
95 percent confidence interval:
lower upper
-0.3401031 -0.0384373
# returns correlation between variables
cor_test(data = E1_CUZnSIB_clean, "NoChoice_CUZ_oblig", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(203) | p
----------------------------------------------------------------------------------
NoChoice_CUZ_oblig | NoChoice_SIB_oblig | 0.41 | [0.29, 0.52] | 6.35 | < .001***
Observations: 205
# returns t-test results
t.test(oblig ~ Relation,
data = E1_all_long %>% filter(BSs_cond == "CUZ vs SIB") %>% filter(`Choice Context` == "Choice") %>% droplevels(),
paired = T)
Paired t-test
data: oblig by Relation
t = -7.1341, df = 204, p-value = 1.666e-11
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-7.960208 -4.512963
sample estimates:
mean of the differences
-6.236585
# returns dz effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "CUZ vs SIB") %>% filter(`Choice Context` == "Choice") %>% droplevels(),
paired = T,
within = F) # setting this to false ensures dz is calculated, using difference score
Cohen's d
d estimate: -0.4982647 (small)
95 percent confidence interval:
lower upper
-0.6438342 -0.3526952
# returns d-av effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "CUZ vs SIB") %>% filter(`Choice Context` == "Choice") %>% droplevels(),
paired = T,
within = T) # setting this to true ensures d-av is calculated, using raw scores
Cohen's d
d estimate: -0.2211563 (small)
95 percent confidence interval:
lower upper
-0.2828367 -0.1594759
# returns correlation between variables
cor_test(data = E1_CUZnSIB_clean, "Choice_CUZnSIB_CUZ_oblig", "Choice_CUZnSIB_SIB_oblig", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(203) | p
----------------------------------------------------------------------------------------------
Choice_CUZnSIB_CUZ_oblig | Choice_CUZnSIB_SIB_oblig | 0.90 | [0.87, 0.92] | 29.68 | < .001***
Observations: 205
# returns 2 x 2 within-subject ANOVA results
aov_moral_STRnSIB <- aov(moral ~ Relation*`Choice Context` + Error(ResponseId/(Relation*`Choice Context`)), data = E1_STRnSIB_long)
summary(aov_moral_STRnSIB)
Error: ResponseId
Df Sum Sq Mean Sq F value Pr(>F)
Residuals 202 101536 502.7
Error: ResponseId:Relation
Df Sum Sq Mean Sq F value Pr(>F)
Relation 1 2554 2553.7 12.22 0.00058 ***
Residuals 202 42199 208.9
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Error: ResponseId:`Choice Context`
Df Sum Sq Mean Sq F value Pr(>F)
`Choice Context` 1 70237 70237 319.7 <2e-16 ***
Residuals 202 44384 220
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Error: ResponseId:Relation:`Choice Context`
Df Sum Sq Mean Sq F value Pr(>F)
Relation:`Choice Context` 1 8121 8121 44.26 2.65e-10 ***
Residuals 202 37069 184
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# returns eta-sq effect size
eta_sq(aov_moral_STRnSIB, partial = TRUE)
'eta_sq' is deprecated.
Use 'effectsize::eta_squared()' instead.
See help("Deprecated")
Group | Parameter | Eta2 (partial) | 90% CI
----------------------------------------------------------------------------------------------
ResponseId:Relation | Relation | 0.06 | [0.02, 0.12]
ResponseId:`Choice Context` | Choice Context | 0.61 | [0.55, 0.67]
ResponseId:Relation:`Choice Context` | Relation:Choice Context | 0.18 | [0.11, 0.26]
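As the warning notes, sjstats::eta_sq() is deprecated. An equivalent call using the effectsize package (an assumption: effectsize is not loaded in the package chunk above) would be:
# equivalent partial eta-squared via the effectsize package (assumes effectsize is installed)
# effectsize::eta_squared(aov_moral_STRnSIB, partial = TRUE)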
# returns 2 x 2 within-subject ANOVA results
aov_moral_STRnCUZ <- aov(moral ~ Relation*`Choice Context` + Error(ResponseId/(Relation*`Choice Context`)), data = E1_STRnCUZ_long)
summary(aov_moral_STRnCUZ)
Error: ResponseId
Df Sum Sq Mean Sq F value Pr(>F)
Residuals 202 115423 571.4
Error: ResponseId:Relation
Df Sum Sq Mean Sq F value Pr(>F)
Relation 1 99 98.63 0.445 0.505
Residuals 202 44764 221.60
Error: ResponseId:`Choice Context`
Df Sum Sq Mean Sq F value Pr(>F)
`Choice Context` 1 51060 51060 239.6 <2e-16 ***
Residuals 202 43042 213
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Error: ResponseId:Relation:`Choice Context`
Df Sum Sq Mean Sq F value Pr(>F)
Relation:`Choice Context` 1 2473 2472.8 13.14 0.000365 ***
Residuals 202 38005 188.1
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# returns eta-sq effect size
eta_sq(aov_moral_STRnCUZ, partial = TRUE)
'eta_sq' is deprecated.
Use 'effectsize::eta_squared()' instead.
See help("Deprecated")
Group | Parameter | Eta2 (partial) | 90% CI
----------------------------------------------------------------------------------------------
ResponseId:Relation | Relation | 2.20e-03 | [0.00, 0.03]
ResponseId:`Choice Context` | Choice Context | 0.54 | [0.47, 0.60]
ResponseId:Relation:`Choice Context` | Relation:Choice Context | 0.06 | [0.02, 0.12]
# returns 2 x 2 within-subject ANOVA results
aov_moral_CUZnSIB <- aov(moral ~ Relation*`Choice Context` + Error(ResponseId/(Relation*`Choice Context`)), data = E1_CUZnSIB_long)
summary(aov_moral_CUZnSIB)
Error: ResponseId
Df Sum Sq Mean Sq F value Pr(>F)
Residuals 204 114949 563.5
Error: ResponseId:Relation
Df Sum Sq Mean Sq F value Pr(>F)
Relation 1 2585 2585 14.77 0.000162 ***
Residuals 204 35699 175
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Error: ResponseId:`Choice Context`
Df Sum Sq Mean Sq F value Pr(>F)
`Choice Context` 1 60167 60167 263.4 <2e-16 ***
Residuals 204 46603 228
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Error: ResponseId:Relation:`Choice Context`
Df Sum Sq Mean Sq F value Pr(>F)
Relation:`Choice Context` 1 3114 3114.2 21.49 6.35e-06 ***
Residuals 204 29564 144.9
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# returns eta-sq effect size
eta_sq(aov_moral_CUZnSIB, partial = TRUE)
'eta_sq' is deprecated.
Use 'effectsize::eta_squared()' instead.
See help("Deprecated")
Group | Parameter | Eta2 (partial) | 90% CI
----------------------------------------------------------------------------------------------
ResponseId:Relation | Relation | 0.07 | [0.02, 0.13]
ResponseId:`Choice Context` | Choice Context | 0.56 | [0.49, 0.62]
ResponseId:Relation:`Choice Context` | Relation:Choice Context | 0.10 | [0.04, 0.16]
# returns t-test results
t.test(moral ~ Relation,
data = E1_all_long %>% filter(BSs_cond == "STR vs SIB") %>% filter(`Choice Context` == "No Choice") %>% droplevels(),
paired = T)
Paired t-test
data: moral by Relation
t = 2.6937, df = 202, p-value = 0.007661
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
0.7446009 4.8120494
sample estimates:
mean of the differences
2.778325
# returns dz effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "STR vs SIB") %>% filter(`Choice Context` == "No Choice") %>% droplevels(),
paired = T,
within = F) # setting this to false ensures dz is calculated, using difference score
Cohen's d
d estimate: 0.1890605 (negligible)
95 percent confidence interval:
lower upper
0.04985719 0.32826387
# returns d-av effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "STR vs SIB") %>% filter(`Choice Context` == "No Choice") %>% droplevels(),
paired = T,
within = T) # setting this to true ensures d-av is calculated, using raw scores
Cohen's d
d estimate: 0.185017 (negligible)
95 percent confidence interval:
lower upper
0.04884148 0.32119244
# returns correlation between variables
cor_test(data = E1_STRnSIB_clean, "NoChoice_STR_moral", "NoChoice_SIB_moral", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(201) | p
----------------------------------------------------------------------------------
NoChoice_STR_moral | NoChoice_SIB_moral | 0.52 | [0.41, 0.61] | 8.66 | < .001***
Observations: 203
# returns t-test results
t.test(moral ~ Relation,
data = E1_all_long %>% filter(BSs_cond == "STR vs SIB") %>% filter(`Choice Context` == "Choice") %>% droplevels(),
paired = T)
Paired t-test
data: moral by Relation
t = -5.8971, df = 202, p-value = 1.535e-08
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-13.172713 -6.571129
sample estimates:
mean of the differences
-9.871921
# returns dz effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "STR vs SIB") %>% filter(`Choice Context` == "Choice") %>% droplevels(),
paired = T,
within = F) # setting this to false ensures dz is calculated, using difference score
Cohen's d
d estimate: -0.4138981 (small)
95 percent confidence interval:
lower upper
-0.5576617 -0.2701344
# returns d-av effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "STR vs SIB") %>% filter(`Choice Context` == "Choice") %>% droplevels(),
paired = T,
within = T) # setting this to true ensures d-av is calculated, using raw scores
Cohen's d
d estimate: -0.5410311 (medium)
95 percent confidence interval:
lower upper
-0.7341354 -0.3479268
# returns correlation between variables
cor_test(data = E1_STRnSIB_clean, "Choice_STRnSIB_STR_moral", "Choice_STRnSIB_SIB_moral", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(201) | p
-------------------------------------------------------------------------------------------
Choice_STRnSIB_STR_moral | Choice_STRnSIB_SIB_moral | 0.15 | [0.01, 0.28] | 2.09 | 0.038*
Observations: 203
# returns t-test results
t.test(moral ~ Relation,
data = E1_all_long %>% filter(BSs_cond == "STR vs CUZ") %>% filter(`Choice Context` == "No Choice") %>% droplevels(),
paired = T)
Paired t-test
data: moral by Relation
t = 2.771, df = 202, p-value = 0.00611
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
0.8055855 4.7806214
sample estimates:
mean of the differences
2.793103
# returns dz effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "STR vs CUZ") %>% filter(`Choice Context` == "No Choice") %>% droplevels(),
paired = T,
within = F) # setting this to false ensures dz is calculated, using difference score
Cohen's d
d estimate: 0.1944849 (negligible)
95 percent confidence interval:
lower upper
0.05521042 0.33375932
# returns d-av effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "STR vs CUZ") %>% filter(`Choice Context` == "No Choice") %>% droplevels(),
paired = T,
within = T) # setting this to true ensures d-av is calculated, using raw scores
Cohen's d
d estimate: 0.1761995 (negligible)
95 percent confidence interval:
lower upper
0.05022958 0.30216940
# returns correlation between variables
cor_test(data = E1_STRnCUZ_clean, "NoChoice_STR_moral", "NoChoice_CUZ_moral", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(201) | p
----------------------------------------------------------------------------------
NoChoice_STR_moral | NoChoice_CUZ_moral | 0.59 | [0.49, 0.67] | 10.35 | < .001***
Observations: 203
# returns t-test results
t.test(moral ~ Relation,
data = E1_all_long %>% filter(BSs_cond == "STR vs CUZ") %>% filter(`Choice Context` == "Choice") %>% droplevels(),
paired = T)
Paired t-test
data: moral by Relation
t = -2.4091, df = 202, p-value = 0.01689
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-7.6142845 -0.7600997
sample estimates:
mean of the differences
-4.187192
# returns dz effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "STR vs CUZ") %>% filter(`Choice Context` == "Choice") %>% droplevels(),
paired = T,
within = F) # setting this to false ensures dz is calculated, using difference score
Cohen's d
d estimate: -0.1690857 (negligible)
95 percent confidence interval:
lower upper
-0.30804426 -0.03012719
# returns d-av effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "STR vs CUZ") %>% filter(`Choice Context` == "Choice") %>% droplevels(),
paired = T,
within = T) # setting this to true ensures d-av is calculated, using raw scores
Cohen's d
d estimate: -0.22502 (small)
95 percent confidence interval:
lower upper
-0.41094859 -0.03909133
# returns correlation between variables
cor_test(data = E1_STRnCUZ_clean, "Choice_STRnCUZ_STR_moral", "Choice_STRnCUZ_CUZ_moral", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(201) | p
-------------------------------------------------------------------------------------------
Choice_STRnCUZ_STR_moral | Choice_STRnCUZ_CUZ_moral | 0.11 | [-0.02, 0.25] | 1.63 | 0.104
Observations: 203
# returns t-test results
t.test(moral ~ Relation,
data = E1_all_long %>% filter(BSs_cond == "CUZ vs SIB") %>% filter(`Choice Context` == "No Choice") %>% droplevels(),
paired = T)
Paired t-test
data: moral by Relation
t = 0.34573, df = 204, p-value = 0.7299
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-1.628815 2.321498
sample estimates:
mean of the differences
0.3463415
# returns dz effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "CUZ vs SIB") %>% filter(`Choice Context` == "No Choice") %>% droplevels(),
paired = T,
within = F) # setting this to false ensures dz is calculated, using difference score
Cohen's d
d estimate: 0.02414673 (negligible)
95 percent confidence interval:
lower upper
-0.1131704 0.1614638
# returns d-av effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "CUZ vs SIB") %>% filter(`Choice Context` == "No Choice") %>% droplevels(),
paired = T,
within = T) # setting this to true ensures d-av is calculated, using raw scores
Cohen's d
d estimate: 0.02173942 (negligible)
95 percent confidence interval:
lower upper
-0.1018844 0.1453633
# returns correlation between variables
cor_test(data = E1_CUZnSIB_clean, "NoChoice_CUZ_moral", "NoChoice_SIB_moral", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(203) | p
----------------------------------------------------------------------------------
NoChoice_CUZ_moral | NoChoice_SIB_moral | 0.59 | [0.50, 0.68] | 10.54 | < .001***
Observations: 205
# returns t-test results
t.test(moral ~ Relation,
data = E1_all_long %>% filter(BSs_cond == "CUZ vs SIB") %>% filter(`Choice Context` == "Choice") %>% droplevels(),
paired = T)
Paired t-test
data: moral by Relation
t = -5.1188, df = 204, p-value = 7.088e-07
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-10.317915 -4.579646
sample estimates:
mean of the differences
-7.44878
# returns dz effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "CUZ vs SIB") %>% filter(`Choice Context` == "Choice") %>% droplevels(),
paired = T,
within = F) # setting this to false ensures dz is calculated, using difference score
Cohen's d
d estimate: -0.3575111 (small)
95 percent confidence interval:
lower upper
-0.4991274 -0.2158948
# returns d-av effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
data = E1_all_long %>% filter(BSs_cond == "CUZ vs SIB") %>% filter(`Choice Context` == "Choice") %>% droplevels(),
paired = T,
within = T) # setting this to true ensures d-av is calculated, using raw scores
Cohen's d
d estimate: -0.4281153 (small)
95 percent confidence interval:
lower upper
-0.5998953 -0.2563353
# returns correlation between variables
cor_test(data = E1_CUZnSIB_clean, "Choice_CUZnSIB_CUZ_moral", "Choice_CUZnSIB_SIB_moral", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(203) | p
----------------------------------------------------------------------------------------------
Choice_CUZnSIB_CUZ_moral | Choice_CUZnSIB_SIB_moral | 0.28 | [0.15, 0.40] | 4.20 | < .001***
Observations: 205
# Create difference score datasets for plotting of diff score correlations
# STRnCUZ
E1_diff_STRnCUZ_cond_long <- E1_STRnCUZ_clean %>%
pivot_longer(
cols = c(Dist_Scen, Close_Scen),
names_to = "WSs_cond",
values_to = "Condition"
)
E1_diff_STRnCUZ_oblig_long <- E1_STRnCUZ_clean %>%
pivot_longer(
cols = c(NoChoice_STRminusCUZ_oblig, Choice_STRminusCUZ_oblig),
names_to = "WSs_cond",
values_to = "oblig"
)
E1_diff_STRnCUZ_moral_long <- E1_STRnCUZ_clean %>%
pivot_longer(
cols = c(NoChoice_STRminusCUZ_moral, Choice_STRminusCUZ_moral),
names_to = "WSs_cond",
values_to = "moral"
)
# Combine long STRnCUZ datasets, select plotting variables, and create condition variable for `Choice Context` factor
E1_diff_STRnCUZ_long <- cbind(E1_diff_STRnCUZ_cond_long, E1_diff_STRnCUZ_oblig_long, E1_diff_STRnCUZ_moral_long)
E1_diff_STRnCUZ_long <- E1_diff_STRnCUZ_long[, !duplicated(colnames(E1_diff_STRnCUZ_long))] # get rid of duplicate columns
E1_diff_STRnCUZ_long <- E1_diff_STRnCUZ_long %>%
select(ResponseId,
Age:OUS_IB,
BSs_cond,
WSs_cond,
Condition,
oblig, moral) %>%
mutate(`Choice Context` = case_when(
WSs_cond == "Dist_Scen" ~ "No Choice",
WSs_cond == "Close_Scen" ~ "Choice"))
# Reorder/rename condition, and participant factors
E1_diff_STRnCUZ_long$`Choice Context` <- as.factor(E1_diff_STRnCUZ_long$`Choice Context`)
E1_diff_STRnCUZ_long$`Choice Context` <- ordered(E1_diff_STRnCUZ_long$`Choice Context`, levels = c("No Choice", "Choice"))
E1_diff_STRnCUZ_long$ResponseId <- as.factor(E1_diff_STRnCUZ_long$ResponseId)
# STRnSIB
E1_diff_STRnSIB_cond_long <- E1_STRnSIB_clean %>%
pivot_longer(
cols = c(Dist_Scen, Close_Scen),
names_to = "WSs_cond",
values_to = "Condition"
)
E1_diff_STRnSIB_oblig_long <- E1_STRnSIB_clean %>%
pivot_longer(
cols = c(NoChoice_STRminusSIB_oblig, Choice_STRminusSIB_oblig),
names_to = "WSs_cond",
values_to = "oblig"
)
E1_diff_STRnSIB_moral_long <- E1_STRnSIB_clean %>%
pivot_longer(
cols = c(NoChoice_STRminusSIB_moral, Choice_STRminusSIB_moral),
names_to = "WSs_cond",
values_to = "moral"
)
# Combine long STRnSIB datasets, select plotting variables, and create condition variable for `Choice Context` factor
E1_diff_STRnSIB_long <- cbind(E1_diff_STRnSIB_cond_long, E1_diff_STRnSIB_oblig_long, E1_diff_STRnSIB_moral_long)
E1_diff_STRnSIB_long <- E1_diff_STRnSIB_long[, !duplicated(colnames(E1_diff_STRnSIB_long))] # get rid of duplicate columns
E1_diff_STRnSIB_long <- E1_diff_STRnSIB_long %>%
select(ResponseId,
Age:OUS_IB,
BSs_cond,
WSs_cond,
Condition,
oblig, moral) %>%
mutate(`Choice Context` = case_when(
WSs_cond == "Dist_Scen" ~ "No Choice",
WSs_cond == "Close_Scen" ~ "Choice"))
# Reorder/rename condition, and participant factors
E1_diff_STRnSIB_long$`Choice Context` <- as.factor(E1_diff_STRnSIB_long$`Choice Context`)
E1_diff_STRnSIB_long$`Choice Context` <- ordered(E1_diff_STRnSIB_long$`Choice Context`, levels = c("No Choice", "Choice"))
E1_diff_STRnSIB_long$ResponseId <- as.factor(E1_diff_STRnSIB_long$ResponseId)
# CUZnSIB
E1_diff_CUZnSIB_cond_long <- E1_CUZnSIB_clean %>%
pivot_longer(
cols = c(Dist_Scen, Close_Scen),
names_to = "WSs_cond",
values_to = "Condition"
)
E1_diff_CUZnSIB_oblig_long <- E1_CUZnSIB_clean %>%
pivot_longer(
cols = c(NoChoice_CUZminusSIB_oblig, Choice_CUZminusSIB_oblig),
names_to = "WSs_cond",
values_to = "oblig"
)
E1_diff_CUZnSIB_moral_long <- E1_CUZnSIB_clean %>%
pivot_longer(
cols = c(NoChoice_CUZminusSIB_moral, Choice_CUZminusSIB_moral),
names_to = "WSs_cond",
values_to = "moral"
)
# Combine long CUZnSIB datasets, select plotting variables, and create condition variable for `Choice Context` factor
E1_diff_CUZnSIB_long <- cbind(E1_diff_CUZnSIB_cond_long, E1_diff_CUZnSIB_oblig_long, E1_diff_CUZnSIB_moral_long)
E1_diff_CUZnSIB_long <- E1_diff_CUZnSIB_long[, !duplicated(colnames(E1_diff_CUZnSIB_long))] # get rid of duplicate columns
E1_diff_CUZnSIB_long <- E1_diff_CUZnSIB_long %>%
select(ResponseId,
Age:OUS_IB,
BSs_cond,
WSs_cond,
Condition,
oblig, moral) %>%
mutate(`Choice Context` = case_when(
WSs_cond == "Dist_Scen" ~ "No Choice",
WSs_cond == "Close_Scen" ~ "Choice"))
# Reorder/rename condition, and participant factors
E1_diff_CUZnSIB_long$`Choice Context` <- as.factor(E1_diff_CUZnSIB_long$`Choice Context`)
E1_diff_CUZnSIB_long$`Choice Context` <- ordered(E1_diff_CUZnSIB_long$`Choice Context`, levels = c("No Choice", "Choice"))
E1_diff_CUZnSIB_long$ResponseId <- as.factor(E1_diff_CUZnSIB_long$ResponseId)
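As an aside, the three-pivot-plus-cbind pattern used above can be condensed into a single pivot_longer() call. The sketch below is illustrative only and is not part of the original pipeline; it is shown for the CUZnSIB difference scores, reuses the column names from the code above, and E1_diff_CUZnSIB_long_alt is a hypothetical name.
# single-pivot alternative sketch: ".value" keeps oblig and moral as separate columns,
# while the NoChoice/Choice prefix becomes the `Choice Context` variable
E1_diff_CUZnSIB_long_alt <- E1_CUZnSIB_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_oblig, Choice_CUZminusSIB_oblig,
             NoChoice_CUZminusSIB_moral, Choice_CUZminusSIB_moral),
    names_to = c("Choice Context", ".value"),
    names_pattern = "(NoChoice|Choice)_CUZminusSIB_(oblig|moral)"
  ) %>%
  mutate(`Choice Context` = if_else(`Choice Context` == "NoChoice", "No Choice", "Choice"))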
# Combine into one dataset for plotting
E1_diff_all_long <- rbind(E1_diff_STRnSIB_long, E1_diff_STRnCUZ_long, E1_diff_CUZnSIB_long)
# Reorder All_long BSs_cond
E1_diff_all_long$BSs_cond <- as.factor(E1_diff_all_long$BSs_cond)
E1_diff_all_long$BSs_cond <- ordered(E1_diff_all_long$BSs_cond, levels = c("STR vs SIB", "STR vs CUZ", "CUZ vs SIB"))
print(oblig_moral_diff_plot_STRnSIB <- ggplot(data = E1_diff_STRnSIB_long, aes(x = oblig, y = moral)) +
geom_jitter(color = "darkorchid1", alpha = 0.5) +
geom_smooth(method = 'lm', color = "darkorchid1") +
facet_wrap(BSs_cond~`Choice Context`) +
scale_x_continuous(limits = c(-101,101), breaks = c(-100,-50,0,50,100)) +
scale_y_continuous(limits = c(-101,101), breaks = c(-100,-50,0,50,100)) +
theme_classic() +
xlab("Obligation Strength Difference (Distant - Close)") +
ylab("Moral Character Difference (Distant - Close)") +
theme(axis.title.x = element_text(size = 14),
axis.title.y = element_text(size = 14),
axis.text.x = element_text(color = "black", size = 12),
axis.text.y = element_text(color = "black", size = 12)))
print(oblig_moral_diff_plot_STRnCUZ <- ggplot(data = E1_diff_STRnCUZ_long, aes(x = oblig, y = moral)) +
geom_jitter(color = "darkorchid1", alpha = 0.5) +
geom_smooth(method = 'lm', color = "darkorchid1") +
facet_wrap(BSs_cond~`Choice Context`) +
scale_x_continuous(limits = c(-101,101), breaks = c(-100,-50,0,50,100)) +
scale_y_continuous(limits = c(-101,101), breaks = c(-100,-50,0,50,100)) +
theme_classic() +
xlab("Obligation Strength Difference (Distant - Close)") +
ylab("Moral Character Difference (Distant - Close)") +
theme(axis.title.x = element_text(size = 14),
axis.title.y = element_text(size = 14),
axis.text.x = element_text(color = "black", size = 12),
axis.text.y = element_text(color = "black", size = 12)))
print(oblig_moral_diff_plot_CUZnSIB <- ggplot(data = E1_diff_CUZnSIB_long, aes(x = oblig, y = moral)) +
geom_jitter(color = "darkorchid1", alpha = 0.5) +
geom_smooth(method = 'lm', color = "darkorchid1") +
facet_wrap(BSs_cond~`Choice Context`) +
scale_x_continuous(limits = c(-101,101), breaks = c(-100,-50,0,50,100)) +
scale_y_continuous(limits = c(-101,101), breaks = c(-100,-50,0,50,100)) +
theme_classic() +
xlab("Obligation Strength Difference (Distant - Close)") +
ylab("Moral Character Difference (Distant - Close)") +
theme(axis.title.x = element_text(size = 14),
axis.title.y = element_text(size = 14),
axis.text.x = element_text(color = "black", size = 12),
axis.text.y = element_text(color = "black", size = 12)))
print(oblig_moral_diff_plot_combined <- ggplot(data = E1_diff_all_long, aes(x = oblig, y = moral)) +
geom_jitter(color = "darkorchid1", alpha = 0.5) +
geom_smooth(method = 'lm', color = "darkorchid1") +
facet_wrap(BSs_cond~`Choice Context`, nrow = 3) +
scale_x_continuous(limits = c(-101,101), breaks = c(-100,-50,0,50,100)) +
scale_y_continuous(limits = c(-101,101), breaks = c(-100,-50,0,50,100)) +
theme_classic() +
xlab("\nObligation Strength Difference (Distant - Close)") +
ylab("Moral Character Difference (Distant - Close)\n") +
theme(axis.title.x = element_text(size = 18),
axis.title.y = element_text(size = 18),
axis.text.x = element_text(color = "black", size = 16),
axis.text.y = element_text(color = "black", size = 16),
strip.text.x = element_text(color = "black", size = 16)))
ggsave("E1_moral~oblig_plot.png")Saving 14 x 9 in image
See our pre-registration (https://osf.io/6q4kr/?view_only=8033c65c9dfc41bcbc528fe94409e4e8) and manuscript for our predictions about the relationship between obligation differences and moral character differences.
# pearson's r
cor_test(E1_STRnSIB_clean, "NoChoice_STRminusSIB_oblig", "NoChoice_STRminusSIB_moral", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
-----------------------------------------------------------------------------------------------
NoChoice_STRminusSIB_oblig | NoChoice_STRminusSIB_moral | 0.07 | [-0.07, 0.21] | 1.01 | 0.316
Observations: 203
# pearson's r
cor_test(E1_STRnSIB_clean, "Choice_STRminusSIB_oblig", "Choice_STRminusSIB_moral", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
--------------------------------------------------------------------------------------------
Choice_STRminusSIB_oblig | Choice_STRminusSIB_moral | 0.18 | [0.05, 0.31] | 2.65 | 0.009**
Observations: 203
# pearson's r
cor_test(E1_STRnCUZ_clean, "NoChoice_STRminusCUZ_oblig", "NoChoice_STRminusCUZ_moral", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
-----------------------------------------------------------------------------------------------
NoChoice_STRminusCUZ_oblig | NoChoice_STRminusCUZ_moral | 0.03 | [-0.11, 0.16] | 0.38 | 0.702
Observations: 203
# pearson's r
cor_test(E1_STRnCUZ_clean, "Choice_STRminusCUZ_oblig", "Choice_STRminusCUZ_moral", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
----------------------------------------------------------------------------------------------
Choice_STRminusCUZ_oblig | Choice_STRminusCUZ_moral | 0.39 | [0.27, 0.50] | 6.00 | < .001***
Observations: 203
# pearson's r
cor_test(E1_CUZnSIB_clean, "NoChoice_CUZminusSIB_oblig", "NoChoice_CUZminusSIB_moral", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(203) | p
------------------------------------------------------------------------------------------------
NoChoice_CUZminusSIB_oblig | NoChoice_CUZminusSIB_moral | -0.03 | [-0.17, 0.10] | -0.48 | 0.635
Observations: 205
# pearson's r
cor_test(E1_CUZnSIB_clean, "Choice_CUZminusSIB_oblig", "Choice_CUZminusSIB_moral", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(203) | p
--------------------------------------------------------------------------------------------
Choice_CUZminusSIB_oblig | Choice_CUZminusSIB_moral | 0.19 | [0.06, 0.32] | 2.78 | 0.006**
Observations: 205
rmcorr_STRnSIB_NoChoice <- rmcorr(participant = ResponseId,
measure1 = oblig,
measure2 = moral,
dataset = E1_all_long %>%
filter(BSs_cond == "STR vs SIB") %>%
filter(`Choice Context` == "No Choice"))print(rmcorr_plot_STRnSIB_NoChoice <- ggplot(data = E1_all_long %>%
filter(BSs_cond == "STR vs SIB") %>%
filter(`Choice Context` == "No Choice"),
aes(x = oblig, y = moral, group = ResponseId, color = ResponseId)) +
geom_point(aes(color = ResponseId)) +
geom_line(aes(y = rmcorr_STRnSIB_NoChoice$model$fitted.values), linetype = 1) +
scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
theme_classic() +
xlab("Obligation Strength") +
ylab("Moral Character") +
theme(axis.title.x = element_text(size = 14),
axis.title.y = element_text(size = 14),
axis.text.x = element_text(color = "black", size = 12),
axis.text.y = element_text(color = "black", size = 12),
legend.position = "none"))rmcorr_STRnSIB_Choice <- rmcorr(participant = ResponseId,
measure1 = oblig,
measure2 = moral,
dataset = E1_all_long %>%
filter(BSs_cond == "STR vs SIB") %>%
filter(`Choice Context` == "Choice"))print(rmcorr_plot_STRnSIB_Choice <- ggplot(data = E1_all_long %>%
filter(BSs_cond == "STR vs SIB") %>%
filter(`Choice Context` == "Choice"),
aes(x = oblig, y = moral, group = ResponseId, color = ResponseId)) +
geom_point(aes(color = ResponseId)) +
geom_line(aes(y = rmcorr_STRnSIB_Choice$model$fitted.values), linetype = 1) +
scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
theme_classic() +
xlab("Obligation Strength") +
ylab("Moral Character") +
theme(axis.title.x = element_text(size = 14),
axis.title.y = element_text(size = 14),
axis.text.x = element_text(color = "black", size = 12),
axis.text.y = element_text(color = "black", size = 12),
legend.position = "none"))rmcorr_STRnCUZ_NoChoice <- rmcorr(participant = ResponseId,
measure1 = oblig,
measure2 = moral,
dataset = E1_all_long %>%
filter(BSs_cond == "STR vs CUZ") %>%
filter(`Choice Context` == "No Choice"))print(rmcorr_plot_STRnCUZ_NoChoice <- ggplot(data = E1_all_long %>%
filter(BSs_cond == "STR vs CUZ") %>%
filter(`Choice Context` == "No Choice"),
aes(x = oblig, y = moral, group = ResponseId, color = ResponseId)) +
geom_point(aes(color = ResponseId)) +
geom_line(aes(y = rmcorr_STRnCUZ_NoChoice$model$fitted.values), linetype = 1) +
scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
theme_classic() +
xlab("Obligation Strength") +
ylab("Moral Character") +
theme(axis.title.x = element_text(size = 14),
axis.title.y = element_text(size = 14),
axis.text.x = element_text(color = "black", size = 12),
axis.text.y = element_text(color = "black", size = 12),
legend.position = "none"))rmcorr_STRnCUZ_Choice <- rmcorr(participant = ResponseId,
measure1 = oblig,
measure2 = moral,
dataset = E1_all_long %>%
filter(BSs_cond == "STR vs CUZ") %>%
filter(`Choice Context` == "Choice"))print(rmcorr_plot_STRnCUZ_Choice <- ggplot(data = E1_all_long %>%
filter(BSs_cond == "STR vs CUZ") %>%
filter(`Choice Context` == "Choice"),
aes(x = oblig, y = moral, group = ResponseId, color = ResponseId)) +
geom_point(aes(color = ResponseId)) +
geom_line(aes(y = rmcorr_STRnCUZ_Choice$model$fitted.values), linetype = 1) +
scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
theme_classic() +
xlab("Obligation Strength") +
ylab("Moral Character") +
theme(axis.title.x = element_text(size = 14),
axis.title.y = element_text(size = 14),
axis.text.x = element_text(color = "black", size = 12),
axis.text.y = element_text(color = "black", size = 12),
legend.position = "none"))rmcorr_CUZnSIB_NoChoice <- rmcorr(participant = ResponseId,
measure1 = oblig,
measure2 = moral,
dataset = E1_all_long %>%
filter(BSs_cond == "CUZ vs SIB") %>%
filter(`Choice Context` == "No Choice"))print(rmcorr_plot_CUZnSIB_NoChoice <- ggplot(data = E1_all_long %>%
filter(BSs_cond == "CUZ vs SIB") %>%
filter(`Choice Context` == "No Choice"),
aes(x = oblig, y = moral, group = ResponseId, color = ResponseId)) +
geom_point(aes(color = ResponseId)) +
geom_line(aes(y = rmcorr_CUZnSIB_NoChoice$model$fitted.values), linetype = 1) +
scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
theme_classic() +
xlab("Obligation Strength") +
ylab("Moral Character") +
theme(axis.title.x = element_text(size = 14),
axis.title.y = element_text(size = 14),
axis.text.x = element_text(color = "black", size = 12),
axis.text.y = element_text(color = "black", size = 12),
legend.position = "none"))rmcorr_CUZnSIB_Choice <- rmcorr(participant = ResponseId,
measure1 = oblig,
measure2 = moral,
dataset = E1_all_long %>%
filter(BSs_cond == "CUZ vs SIB") %>%
filter(`Choice Context` == "Choice"))print(rmcorr_plot_CUZnSIB_Choice <- ggplot(data = E1_all_long %>%
filter(BSs_cond == "CUZ vs SIB") %>%
filter(`Choice Context` == "Choice"),
aes(x = oblig, y = moral, group = ResponseId, color = ResponseId)) +
geom_point(aes(color = ResponseId)) +
geom_line(aes(y = rmcorr_CUZnSIB_Choice$model$fitted.values), linetype = 1) +
scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
theme_classic() +
xlab("Obligation Strength") +
ylab("Moral Character") +
theme(axis.title.x = element_text(size = 14),
axis.title.y = element_text(size = 14),
axis.text.x = element_text(color = "black", size = 12),
axis.text.y = element_text(color = "black", size = 12),
legend.position = "none"))See our manuscript for our predictions about the within-individual relationship between obligation judgments and moral character judgments.
print(rmcorr_STRnSIB_NoChoice)
Repeated measures correlation
r
-0.0102866
degrees of freedom
202
p-value
0.8839031
95% confidence interval
-0.1481273 0.1279461
print(rmcorr_STRnSIB_Choice)
Repeated measures correlation
r
0.3726286
degrees of freedom
202
p-value
4.05481e-08
95% confidence interval
0.2472991 0.4856947
print(rmcorr_STRnCUZ_NoChoice)
Repeated measures correlation
r
-0.05309295
degrees of freedom
202
p-value
0.4507343
95% confidence interval
-0.1897531 0.08558536
print(rmcorr_STRnCUZ_Choice)
Repeated measures correlation
r
0.4095764
degrees of freedom
202
p-value
1.18001e-09
95% confidence interval
0.2877982 0.5183203
print(rmcorr_CUZnSIB_NoChoice)
Repeated measures correlation
r
-0.03701313
degrees of freedom
204
p-value
0.5973697
95% confidence interval
-0.1735021 0.1008709
print(rmcorr_CUZnSIB_Choice)
Repeated measures correlation
r
0.3122796
degrees of freedom
204
p-value
4.887046e-06
95% confidence interval
0.182748 0.431155
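The individual printouts above can also be collected programmatically. The sketch below is a convenience summary only (rmcorr_list is a hypothetical name); it assumes the six rmcorr objects created above are still in the workspace and that each contains the r, df, p, and CI elements returned by rmcorr().
# gather r, df, p, and the 95% CI from each rmcorr object into one table
rmcorr_list <- list(STRnSIB_NoChoice = rmcorr_STRnSIB_NoChoice,
                    STRnSIB_Choice   = rmcorr_STRnSIB_Choice,
                    STRnCUZ_NoChoice = rmcorr_STRnCUZ_NoChoice,
                    STRnCUZ_Choice   = rmcorr_STRnCUZ_Choice,
                    CUZnSIB_NoChoice = rmcorr_CUZnSIB_NoChoice,
                    CUZnSIB_Choice   = rmcorr_CUZnSIB_Choice)
purrr::map_dfr(rmcorr_list,
               ~ tibble::tibble(r = .x$r, df = .x$df, p = .x$p,
                                CI_lower = .x$CI[1], CI_upper = .x$CI[2]),
               .id = "comparison")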
# create dataset with only MAC Family Values variables
E1_STRnSIB_clean_MAConly <- E1_STRnSIB_clean %>% select(MAC_Jud_1:MAC_Rel_3)
psych::alpha(E1_STRnSIB_clean_MAConly)
Number of categories should be increased in order to count frequencies.
Reliability analysis
Call: psych::alpha(x = E1_STRnSIB_clean_MAConly)
lower alpha upper 95% confidence boundaries
0.9 0.92 0.94
Reliability if an item is dropped:
Item statistics
# create dataset with only MAC Family Values variables
E1_STRnCUZ_clean_MAConly <- E1_STRnCUZ_clean %>% select(MAC_Jud_1:MAC_Rel_3)
psych::alpha(E1_STRnCUZ_clean_MAConly)
Number of categories should be increased in order to count frequencies.
Reliability analysis
Call: psych::alpha(x = E1_STRnCUZ_clean_MAConly)
lower alpha upper 95% confidence boundaries
0.9 0.92 0.93
Reliability if an item is dropped:
Item statistics
# create dataset with only MAC Family Values variables
E1_CUZnSIB_clean_MAConly <- E1_CUZnSIB_clean %>% select(MAC_Jud_1:MAC_Rel_3)
psych::alpha(E1_CUZnSIB_clean_MAConly)
Number of categories should be increased in order to count frequencies.
Reliability analysis
Call: psych::alpha(x = E1_CUZnSIB_clean_MAConly)
lower alpha upper 95% confidence boundaries
0.9 0.92 0.94
Reliability if an item is dropped:
Item statistics
# create dataset with only MFQ Ingroup Loyalty variables
E1_STRnSIB_clean_MFQonly <- E1_STRnSIB_clean %>% select(MFQ_Jud_1:MFQ_Rel_3)
psych::alpha(E1_STRnSIB_clean_MFQonly)
Reliability analysis
Call: psych::alpha(x = E1_STRnSIB_clean_MFQonly)
lower alpha upper 95% confidence boundaries
0.76 0.8 0.84
Reliability if an item is dropped:
Item statistics
Non missing response frequency for each item
1 2 3 4 5 6 miss
MFQ_Jud_1 0.23 0.18 0.18 0.19 0.16 0.06 0
MFQ_Jud_2 0.20 0.23 0.23 0.19 0.13 0.03 0
MFQ_Jud_3 0.12 0.13 0.26 0.33 0.09 0.05 0
MFQ_Rel_1 0.33 0.30 0.17 0.15 0.05 0.01 0
MFQ_Rel_2 0.08 0.15 0.19 0.29 0.20 0.09 0
MFQ_Rel_3 0.08 0.11 0.26 0.30 0.20 0.05 0
# create dataset with only MFQ Ingroup Loyalty variables
E1_STRnCUZ_clean_MFQonly <- E1_STRnCUZ_clean %>% select(MFQ_Jud_1:MFQ_Rel_3)
psych::alpha(E1_STRnCUZ_clean_MFQonly)
Reliability analysis
Call: psych::alpha(x = E1_STRnCUZ_clean_MFQonly)
lower alpha upper 95% confidence boundaries
0.76 0.8 0.84
Reliability if an item is dropped:
Item statistics
Non missing response frequency for each item
1 2 3 4 5 6 miss
MFQ_Jud_1 0.18 0.17 0.17 0.24 0.12 0.12 0
MFQ_Jud_2 0.18 0.27 0.21 0.19 0.12 0.04 0
MFQ_Jud_3 0.07 0.21 0.31 0.24 0.15 0.03 0
MFQ_Rel_1 0.28 0.28 0.16 0.18 0.07 0.03 0
MFQ_Rel_2 0.06 0.15 0.19 0.28 0.23 0.09 0
MFQ_Rel_3 0.07 0.15 0.22 0.30 0.21 0.05 0
# create dataset with only MFQ Ingroup Loyalty variables
E1_CUZnSIB_clean_MFQonly <- E1_CUZnSIB_clean %>% select(MFQ_Jud_1:MFQ_Rel_3)
psych::alpha(E1_CUZnSIB_clean_MFQonly)
Reliability analysis
Call: psych::alpha(x = E1_CUZnSIB_clean_MFQonly)
lower alpha upper 95% confidence boundaries
0.68 0.73 0.79
Reliability if an item is dropped:
Item statistics
Non missing response frequency for each item
1 2 3 4 5 6 miss
MFQ_Jud_1 0.17 0.20 0.18 0.23 0.15 0.08 0
MFQ_Jud_2 0.17 0.16 0.31 0.19 0.15 0.02 0
MFQ_Jud_3 0.09 0.17 0.30 0.28 0.13 0.03 0
MFQ_Rel_1 0.27 0.27 0.14 0.20 0.08 0.03 0
MFQ_Rel_2 0.05 0.18 0.21 0.27 0.19 0.09 0
MFQ_Rel_3 0.06 0.16 0.18 0.32 0.19 0.09 0
# create dataset with only OUS Impartial Beneficence variables
E1_STRnSIB_clean_OUSonly <- E1_STRnSIB_clean %>% select(OUS_IB1:OUS_IB5)
psych::alpha(E1_STRnSIB_clean_OUSonly)
Reliability analysis
Call: psych::alpha(x = E1_STRnSIB_clean_OUSonly)
lower alpha upper 95% confidence boundaries
0.68 0.74 0.8
Reliability if an item is dropped:
Item statistics
Non missing response frequency for each item
1 2 3 4 5 6 7 miss
OUS_IB1 0.23 0.22 0.19 0.17 0.13 0.04 0.01 0
OUS_IB2 0.18 0.22 0.17 0.16 0.17 0.06 0.03 0
OUS_IB3 0.04 0.14 0.22 0.09 0.27 0.15 0.09 0
OUS_IB4 0.15 0.26 0.21 0.14 0.11 0.09 0.03 0
OUS_IB5 0.10 0.25 0.23 0.10 0.18 0.11 0.02 0
# create dataset with only OUS Impartial Beneficence variables
E1_STRnCUZ_clean_OUSonly <- E1_STRnCUZ_clean %>% select(OUS_IB1:OUS_IB5)
psych::alpha(E1_STRnCUZ_clean_OUSonly)
Reliability analysis
Call: psych::alpha(x = E1_STRnCUZ_clean_OUSonly)
lower alpha upper 95% confidence boundaries
0.73 0.78 0.83
Reliability if an item is dropped:
Item statistics
Non missing response frequency for each item
1 2 3 4 5 6 7 miss
OUS_IB1 0.24 0.22 0.14 0.18 0.14 0.06 0.02 0
OUS_IB2 0.18 0.18 0.17 0.15 0.19 0.11 0.02 0
OUS_IB3 0.08 0.13 0.15 0.15 0.19 0.19 0.10 0
OUS_IB4 0.15 0.24 0.21 0.12 0.18 0.06 0.05 0
OUS_IB5 0.14 0.21 0.19 0.14 0.19 0.07 0.05 0
# create dataset with only OUS Impartial Beneficence variables
E1_CUZnSIB_clean_OUSonly <- E1_CUZnSIB_clean %>% select(OUS_IB1:OUS_IB5)
psych::alpha(E1_CUZnSIB_clean_OUSonly)
Reliability analysis
Call: psych::alpha(x = E1_CUZnSIB_clean_OUSonly)
lower alpha upper 95% confidence boundaries
0.68 0.74 0.79
Reliability if an item is dropped:
Item statistics
Non missing response frequency for each item
1 2 3 4 5 6 7 miss
OUS_IB1 0.19 0.22 0.15 0.20 0.14 0.08 0.01 0
OUS_IB2 0.15 0.20 0.21 0.18 0.16 0.08 0.02 0
OUS_IB3 0.05 0.08 0.18 0.16 0.25 0.20 0.08 0
OUS_IB4 0.09 0.19 0.20 0.18 0.20 0.12 0.02 0
OUS_IB5 0.07 0.18 0.19 0.21 0.23 0.10 0.01 0
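For reference, the raw alphas reported above can be pulled out of each psych::alpha() object in one pass. The sketch below is optional (scale_sets is a hypothetical name) and assumes the item-only data frames created above are still in the workspace.
# extract Cronbach's alpha (raw) for each scale and between-subjects dataset
scale_sets <- list(STRnSIB_MAC = E1_STRnSIB_clean_MAConly, STRnCUZ_MAC = E1_STRnCUZ_clean_MAConly,
                   CUZnSIB_MAC = E1_CUZnSIB_clean_MAConly, STRnSIB_MFQ = E1_STRnSIB_clean_MFQonly,
                   STRnCUZ_MFQ = E1_STRnCUZ_clean_MFQonly, CUZnSIB_MFQ = E1_CUZnSIB_clean_MFQonly,
                   STRnSIB_OUS = E1_STRnSIB_clean_OUSonly, STRnCUZ_OUS = E1_STRnCUZ_clean_OUSonly,
                   CUZnSIB_OUS = E1_CUZnSIB_clean_OUSonly)
purrr::map_dbl(scale_sets, ~ psych::alpha(.x)$total$raw_alpha)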
print(oblig_mac_comp_plot <- ggplot(data = E1_all_long, aes(x = MAC_Fam_Combined, y = oblig, color = Relation)) +
geom_jitter(aes(color = Relation), alpha = 0.5) +
scale_color_manual(values = c("lightskyblue3", "indianred3")) +
geom_smooth(method = 'lm') +
facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
theme_classic() +
xlab("\nMAC Family Values Composite") +
ylab("Obligation Strength\n") +
theme(axis.title.x = element_text(size = 18),
axis.title.y = element_text(size = 18),
axis.text.x = element_text(color = "black", size = 16),
axis.text.y = element_text(color = "black", size = 16),
strip.text.x = element_text(color = "black", size = 16),
legend.position = "right",
legend.title = element_text(color = "black", size = 18),
legend.text = element_text(color = "black", size = 16)))
ggsave("E1_oblig~MAC_plot.png")Saving 14 x 9 in image
print(oblig_mft_comp_plot <- ggplot(data = E1_all_long, aes(x = MFQ_Loyalty_Combined, y = oblig, color = Relation)) +
geom_jitter(aes(color = Relation), alpha = 0.5) +
scale_color_manual(values = c("lightskyblue3", "indianred3")) +
geom_smooth(method = 'lm') +
facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
scale_x_continuous(limits = c(.5,6.5), breaks = c(1,2,3,4,5,6)) +
scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
theme_classic() +
xlab("\nMFT Ingroup Loyalty Composite") +
ylab("Obligation Strength\n") +
theme(axis.title.x = element_text(size = 18),
axis.title.y = element_text(size = 18),
axis.text.x = element_text(color = "black", size = 16),
axis.text.y = element_text(color = "black", size = 16),
strip.text.x = element_text(color = "black", size = 16),
legend.position = "right",
legend.title = element_text(color = "black", size = 18),
legend.text = element_text(color = "black", size = 16)))
ggsave("E1_oblig~MFT_plot.png")Saving 14 x 9 in image
print(oblig_ous_comp_plot <- ggplot(data = E1_all_long, aes(x = OUS_IB, y = oblig, color = Relation)) +
geom_jitter(aes(color = Relation), alpha = 0.5) +
scale_color_manual(values = c("lightskyblue3", "indianred3")) +
geom_smooth(method = 'lm') +
facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
scale_x_continuous(limits = c(.5,7.5), breaks = c(1,2,3,4,5,6,7)) +
scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
theme_classic() +
xlab("\nOUS Impartial Beneficence Composite") +
ylab("Obligation Strength\n") +
theme(axis.title.x = element_text(size = 18),
axis.title.y = element_text(size = 18),
axis.text.x = element_text(color = "black", size = 16),
axis.text.y = element_text(color = "black", size = 16),
strip.text.x = element_text(color = "black", size = 16),
legend.position = "right",
legend.title = element_text(color = "black", size = 18),
legend.text = element_text(color = "black", size = 16)))
ggsave("E1_oblig~OUS_plot.png")Saving 14 x 9 in image
See our pre-registration (https://osf.io/6q4kr/?view_only=8033c65c9dfc41bcbc528fe94409e4e8) and manuscript for our predictions about the relationship between individual differences and obligation judgments.
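Because the same cor_test() call is repeated for every predictor/outcome pair below, a small loop can reproduce a block of these tests at once. The sketch below is a convenience illustration only (ind_diffs and oblig_vars are hypothetical names); the individual calls that follow remain the analyses of record.
# run the same Pearson test for each predictor/outcome pair in the STR vs SIB dataset
ind_diffs  <- c("MAC_Fam_Combined", "MFQ_Loyalty_Combined", "OUS_IB")
oblig_vars <- c("NoChoice_STR_oblig", "NoChoice_SIB_oblig", "NoChoice_STRminusSIB_oblig")
purrr::map_dfr(ind_diffs, function(iv) {
  purrr::map_dfr(oblig_vars, function(dv) {
    cor_test(E1_STRnSIB_clean, iv, dv, method = "Pearson")
  })
})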
# distant pearson's r
cor_test(E1_STRnSIB_clean, "MAC_Fam_Combined", "NoChoice_STR_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
------------------------------------------------------------------------------
MAC_Fam_Combined | NoChoice_STR_oblig | 0.23 | [0.09, 0.36] | 3.33 | 0.001**
Observations: 203
# close pearson's r
cor_test(E1_STRnSIB_clean, "MAC_Fam_Combined", "NoChoice_SIB_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
--------------------------------------------------------------------------------
MAC_Fam_Combined | NoChoice_SIB_oblig | 0.35 | [0.23, 0.47] | 5.34 | < .001***
Observations: 203
# diff pearson's r
cor_test(E1_STRnSIB_clean, "MAC_Fam_Combined", "NoChoice_STRminusSIB_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
--------------------------------------------------------------------------------------
MAC_Fam_Combined | NoChoice_STRminusSIB_oblig | -0.12 | [-0.25, 0.02] | -1.74 | 0.084
Observations: 203
# distant pearson's r
cor_test(E1_STRnSIB_clean, "MAC_Fam_Combined", "Choice_STRnSIB_STR_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
--------------------------------------------------------------------------------------
MAC_Fam_Combined | Choice_STRnSIB_STR_oblig | 0.25 | [0.12, 0.38] | 3.73 | < .001***
Observations: 203
# close pearson's r
cor_test(E1_STRnSIB_clean, "MAC_Fam_Combined", "Choice_STRnSIB_SIB_oblig", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(201) | p
--------------------------------------------------------------------------------------
MAC_Fam_Combined | Choice_STRnSIB_SIB_oblig | 0.43 | [0.31, 0.54] | 6.79 | < .001***
Observations: 203
# diff pearson's r
cor_test(E1_STRnSIB_clean, "MAC_Fam_Combined", "Choice_STRminusSIB_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
-----------------------------------------------------------------------------------------
MAC_Fam_Combined | Choice_STRminusSIB_oblig | -0.30 | [-0.42, -0.17] | -4.40 | < .001***
Observations: 203
# distant pearson's r
cor_test(E1_STRnCUZ_clean, "MAC_Fam_Combined", "NoChoice_STR_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
-----------------------------------------------------------------------------
MAC_Fam_Combined | NoChoice_STR_oblig | 0.06 | [-0.07, 0.20] | 0.91 | 0.365
Observations: 203
# close pearson's r
cor_test(E1_STRnCUZ_clean, "MAC_Fam_Combined", "NoChoice_CUZ_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
------------------------------------------------------------------------------
MAC_Fam_Combined | NoChoice_CUZ_oblig | 0.22 | [0.08, 0.34] | 3.16 | 0.002**
Observations: 203
# diff pearson's r
cor_test(E1_STRnCUZ_clean, "MAC_Fam_Combined", "NoChoice_STRminusCUZ_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
--------------------------------------------------------------------------------------
MAC_Fam_Combined | NoChoice_STRminusCUZ_oblig | -0.14 | [-0.27, 0.00] | -1.95 | 0.053
Observations: 203
# distant pearson's r
cor_test(E1_STRnCUZ_clean, "MAC_Fam_Combined", "Choice_STRnCUZ_STR_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
------------------------------------------------------------------------------------
MAC_Fam_Combined | Choice_STRnCUZ_STR_oblig | 0.19 | [0.05, 0.32] | 2.68 | 0.008**
Observations: 203
# close pearson's r
cor_test(E1_STRnCUZ_clean, "MAC_Fam_Combined", "Choice_STRnCUZ_CUZ_oblig", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(201) | p
--------------------------------------------------------------------------------------
MAC_Fam_Combined | Choice_STRnCUZ_CUZ_oblig | 0.36 | [0.24, 0.48] | 5.50 | < .001***
Observations: 203
# diff pearson's r
cor_test(E1_STRnCUZ_clean, "MAC_Fam_Combined", "Choice_STRminusCUZ_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
-----------------------------------------------------------------------------------------
MAC_Fam_Combined | Choice_STRminusCUZ_oblig | -0.30 | [-0.42, -0.17] | -4.51 | < .001***
Observations: 203
# distant pearson's r
cor_test(E1_CUZnSIB_clean, "MAC_Fam_Combined", "NoChoice_CUZ_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(203) | p
--------------------------------------------------------------------------------
MAC_Fam_Combined | NoChoice_CUZ_oblig | 0.26 | [0.12, 0.38] | 3.78 | < .001***
Observations: 205
# close pearson's r
cor_test(E1_CUZnSIB_clean, "MAC_Fam_Combined", "NoChoice_SIB_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(203) | p
--------------------------------------------------------------------------------
MAC_Fam_Combined | NoChoice_SIB_oblig | 0.48 | [0.36, 0.58] | 7.75 | < .001***
Observations: 205
# diff pearson's r
cor_test(E1_CUZnSIB_clean, "MAC_Fam_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(203) | p
-----------------------------------------------------------------------------------------
MAC_Fam_Combined | NoChoice_CUZminusSIB_oblig | -0.21 | [-0.34, -0.08] | -3.07 | 0.002**
Observations: 205
# distant pearson's r
cor_test(E1_CUZnSIB_clean, "MAC_Fam_Combined", "Choice_CUZnSIB_CUZ_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(203) | p
--------------------------------------------------------------------------------------
MAC_Fam_Combined | Choice_CUZnSIB_CUZ_oblig | 0.32 | [0.19, 0.44] | 4.86 | < .001***
Observations: 205
# close pearson's r
cor_test(E1_CUZnSIB_clean, "MAC_Fam_Combined", "Choice_CUZnSIB_SIB_oblig", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(203) | p
--------------------------------------------------------------------------------------
MAC_Fam_Combined | Choice_CUZnSIB_SIB_oblig | 0.40 | [0.28, 0.51] | 6.23 | < .001***
Observations: 205
# diff pearson's r
cor_test(E1_CUZnSIB_clean, "MAC_Fam_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(203) | p
-----------------------------------------------------------------------------------------
MAC_Fam_Combined | Choice_CUZminusSIB_oblig | -0.23 | [-0.36, -0.10] | -3.41 | < .001***
Observations: 205
# distant pearson's r
cor_test(E1_STRnSIB_clean, "MFQ_Loyalty_Combined", "NoChoice_STR_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | NoChoice_STR_oblig | 0.29 | [0.16, 0.41] | 4.34 | < .001***
Observations: 203
# close pearson's r
cor_test(E1_STRnSIB_clean, "MFQ_Loyalty_Combined", "NoChoice_SIB_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | NoChoice_SIB_oblig | 0.32 | [0.19, 0.44] | 4.80 | < .001***
Observations: 203
# diff pearson's r
cor_test(E1_STRnSIB_clean, "MFQ_Loyalty_Combined", "NoChoice_STRminusSIB_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
------------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | NoChoice_STRminusSIB_oblig | -0.05 | [-0.18, 0.09] | -0.67 | 0.505
Observations: 203
# distant pearson's r
cor_test(E1_STRnSIB_clean, "MFQ_Loyalty_Combined", "Choice_STRnSIB_STR_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
----------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | Choice_STRnSIB_STR_oblig | 0.22 | [0.09, 0.35] | 3.26 | 0.001**
Observations: 203
# close pearson's r
cor_test(E1_STRnSIB_clean, "MFQ_Loyalty_Combined", "Choice_STRnSIB_SIB_oblig", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(201) | p
------------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | Choice_STRnSIB_SIB_oblig | 0.36 | [0.24, 0.48] | 5.54 | < .001***
Observations: 203
# diff pearson's r
cor_test(E1_STRnSIB_clean, "MFQ_Loyalty_Combined", "Choice_STRminusSIB_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
---------------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | Choice_STRminusSIB_oblig | -0.24 | [-0.37, -0.11] | -3.50 | < .001***
Observations: 203
# distant pearson's r
cor_test(E1_STRnCUZ_clean, "MFQ_Loyalty_Combined", "NoChoice_STR_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
---------------------------------------------------------------------------------
MFQ_Loyalty_Combined | NoChoice_STR_oblig | 0.04 | [-0.10, 0.18] | 0.59 | 0.555
Observations: 203
# close pearson's r
cor_test(E1_STRnCUZ_clean, "MFQ_Loyalty_Combined", "NoChoice_CUZ_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
---------------------------------------------------------------------------------
MFQ_Loyalty_Combined | NoChoice_CUZ_oblig | 0.16 | [0.02, 0.29] | 2.24 | 0.026*
Observations: 203
# diff pearson's r
cor_test(E1_STRnCUZ_clean, "MFQ_Loyalty_Combined", "NoChoice_STRminusCUZ_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
------------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | NoChoice_STRminusCUZ_oblig | -0.10 | [-0.24, 0.04] | -1.44 | 0.152
Observations: 203
# distant pearson's r
cor_test(E1_STRnCUZ_clean, "MFQ_Loyalty_Combined", "Choice_STRnCUZ_STR_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
----------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | Choice_STRnCUZ_STR_oblig | 0.22 | [0.08, 0.35] | 3.19 | 0.002**
Observations: 203
# close pearson's r
cor_test(E1_STRnCUZ_clean, "MFQ_Loyalty_Combined", "Choice_STRnCUZ_CUZ_oblig", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(201) | p
------------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | Choice_STRnCUZ_CUZ_oblig | 0.37 | [0.24, 0.48] | 5.60 | < .001***
Observations: 203
# diff pearson's r
cor_test(E1_STRnCUZ_clean, "MFQ_Loyalty_Combined", "Choice_STRminusCUZ_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
---------------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | Choice_STRminusCUZ_oblig | -0.27 | [-0.39, -0.13] | -3.91 | < .001***
Observations: 203
# distant pearson's r
cor_test(E1_CUZnSIB_clean, "MFQ_Loyalty_Combined", "NoChoice_CUZ_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(203) | p
---------------------------------------------------------------------------------
MFQ_Loyalty_Combined | NoChoice_CUZ_oblig | 0.17 | [0.03, 0.30] | 2.47 | 0.014*
Observations: 205
# close pearson's r
cor_test(E1_CUZnSIB_clean, "MFQ_Loyalty_Combined", "NoChoice_SIB_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(203) | p
------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | NoChoice_SIB_oblig | 0.25 | [0.11, 0.37] | 3.64 | < .001***
Observations: 205
# diff pearson's r
cor_test(E1_CUZnSIB_clean, "MFQ_Loyalty_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(203) | p
------------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | NoChoice_CUZminusSIB_oblig | -0.07 | [-0.21, 0.06] | -1.07 | 0.288
Observations: 205
# distant pearson's r
cor_test(E1_CUZnSIB_clean, "MFQ_Loyalty_Combined", "Choice_CUZnSIB_CUZ_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(203) | p
---------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | Choice_CUZnSIB_CUZ_oblig | 0.16 | [0.02, 0.29] | 2.32 | 0.021*
Observations: 205
# close pearson's r
cor_test(E1_CUZnSIB_clean, "MFQ_Loyalty_Combined", "Choice_CUZnSIB_SIB_oblig", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(203) | p
------------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | Choice_CUZnSIB_SIB_oblig | 0.24 | [0.10, 0.36] | 3.45 | < .001***
Observations: 205
# diff pearson's r
cor_test(E1_CUZnSIB_clean, "MFQ_Loyalty_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(203) | p
-------------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | Choice_CUZminusSIB_oblig | -0.20 | [-0.33, -0.06] | -2.88 | 0.004**
Observations: 205
# distant pearson's r
cor_test(E1_STRnSIB_clean, "OUS_IB", "NoChoice_STR_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
-----------------------------------------------------------------------
OUS_IB | NoChoice_STR_oblig | 0.15 | [0.02, 0.29] | 2.22 | 0.027*
Observations: 203
# close pearson's r
cor_test(E1_STRnSIB_clean, "OUS_IB", "NoChoice_SIB_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
-----------------------------------------------------------------------
OUS_IB | NoChoice_SIB_oblig | 0.08 | [-0.05, 0.22] | 1.21 | 0.228
Observations: 203
# diff pearson's r
cor_test(E1_STRnSIB_clean, "OUS_IB", "NoChoice_STRminusSIB_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
-------------------------------------------------------------------------------
OUS_IB | NoChoice_STRminusSIB_oblig | 0.05 | [-0.09, 0.18] | 0.65 | 0.515
Observations: 203
# distant pearson's r
cor_test(E1_STRnSIB_clean, "OUS_IB", "Choice_STRnSIB_STR_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
--------------------------------------------------------------------------------
OUS_IB | Choice_STRnSIB_STR_oblig | 0.31 | [0.18, 0.43] | 4.57 | < .001***
Observations: 203
# close pearson's r
cor_test(E1_STRnSIB_clean, "OUS_IB", "Choice_STRnSIB_SIB_oblig", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(201) | p
------------------------------------------------------------------------------
OUS_IB | Choice_STRnSIB_SIB_oblig | 0.21 | [0.08, 0.34] | 3.08 | 0.002**
Observations: 203
# diff pearson's r
cor_test(E1_STRnSIB_clean, "OUS_IB", "Choice_STRminusSIB_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
-----------------------------------------------------------------------------
OUS_IB | Choice_STRminusSIB_oblig | 0.04 | [-0.10, 0.18] | 0.58 | 0.560
Observations: 203
# distant pearson's r
cor_test(E1_STRnCUZ_clean, "OUS_IB", "NoChoice_STR_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
-----------------------------------------------------------------------
OUS_IB | NoChoice_STR_oblig | 0.17 | [0.03, 0.30] | 2.44 | 0.015*
Observations: 203
# close pearson's r
cor_test(E1_STRnCUZ_clean, "OUS_IB", "NoChoice_CUZ_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
--------------------------------------------------------------------------
OUS_IB | NoChoice_CUZ_oblig | 0.24 | [0.11, 0.37] | 3.52 | < .001***
Observations: 203
# diff pearson's r
cor_test(E1_STRnCUZ_clean, "OUS_IB", "NoChoice_STRminusCUZ_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
--------------------------------------------------------------------------------
OUS_IB | NoChoice_STRminusCUZ_oblig | -0.08 | [-0.22, 0.06] | -1.16 | 0.248
Observations: 203
# distant pearson's r
cor_test(E1_STRnCUZ_clean, "OUS_IB", "Choice_STRnCUZ_STR_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
--------------------------------------------------------------------------------
OUS_IB | Choice_STRnCUZ_STR_oblig | 0.42 | [0.30, 0.53] | 6.62 | < .001***
Observations: 203
# close pearson's r
cor_test(E1_STRnCUZ_clean, "OUS_IB", "Choice_STRnCUZ_CUZ_oblig", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(201) | p
--------------------------------------------------------------------------------
OUS_IB | Choice_STRnCUZ_CUZ_oblig | 0.46 | [0.34, 0.56] | 7.24 | < .001***
Observations: 203
# diff pearson's r
cor_test(E1_STRnCUZ_clean, "OUS_IB", "Choice_STRminusCUZ_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(201) | p
------------------------------------------------------------------------------
OUS_IB | Choice_STRminusCUZ_oblig | -0.12 | [-0.26, 0.01] | -1.77 | 0.078
Observations: 203
# distant pearson's r
cor_test(E1_CUZnSIB_clean, "OUS_IB", "NoChoice_CUZ_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(203) | p
--------------------------------------------------------------------------
OUS_IB | NoChoice_CUZ_oblig | 0.23 | [0.10, 0.36] | 3.38 | < .001***
Observations: 205
# close pearson's r
cor_test(E1_CUZnSIB_clean, "OUS_IB", "NoChoice_SIB_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(203) | p
--------------------------------------------------------------------------
OUS_IB | NoChoice_SIB_oblig | 0.38 | [0.25, 0.49] | 5.81 | < .001***
Observations: 205
# diff pearson's r
cor_test(E1_CUZnSIB_clean, "OUS_IB", "NoChoice_CUZminusSIB_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(203) | p
----------------------------------------------------------------------------------
OUS_IB | NoChoice_CUZminusSIB_oblig | -0.14 | [-0.27, 0.00] | -2.02 | 0.044*
Observations: 205
# distant pearson's r
cor_test(E1_CUZnSIB_clean, "OUS_IB", "Choice_CUZnSIB_CUZ_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(203) | p
--------------------------------------------------------------------------------
OUS_IB | Choice_CUZnSIB_CUZ_oblig | 0.36 | [0.24, 0.48] | 5.58 | < .001***
Observations: 205
# close pearson's r
cor_test(E1_CUZnSIB_clean, "OUS_IB", "Choice_CUZnSIB_SIB_oblig", method = "Pearson")
Parameter1 | Parameter2 | r | 95% CI | t(203) | p
--------------------------------------------------------------------------------
OUS_IB | Choice_CUZnSIB_SIB_oblig | 0.34 | [0.22, 0.46] | 5.19 | < .001***
Observations: 205
# diff pearson's r
cor_test(E1_CUZnSIB_clean, "OUS_IB", "Choice_CUZminusSIB_oblig", method = "Pearson")Parameter1 | Parameter2 | r | 95% CI | t(203) | p
----------------------------------------------------------------------------------
OUS_IB | Choice_CUZminusSIB_oblig | -9.06e-03 | [-0.15, 0.13] | -0.13 | 0.897
Observations: 205
These analyses were conducted only within the CUZnSIB dataset, because moral values' effects on obligation judgments are potentially confounded in the other datasets.
# correlation values are taken from the oblig ~ ind. diffs analyses (No Choice scenarios)
## r.jk = oblig ~ family values corr; r.jh = oblig ~ ingroup loyalty corr; r.kh = family values ~ ingroup loyalty corr
# distant
cocor.dep.groups.overlap(r.jk = .26, r.jh = .17, r.kh = .65, 205, alternative = "two.sided",
test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
Results of a comparison of two overlapping correlations based on dependent groups
Comparison between r.jk = 0.26 and r.jh = 0.17
Difference: r.jk - r.jh = 0.09
Related correlation: r.kh = 0.65
Group size: n = 205
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05
steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
z = 1.5735, p-value = 0.1156
Null hypothesis retained
# close
cocor.dep.groups.overlap(r.jk = .48, r.jh = .25, r.kh = .65, 205, alternative = "two.sided",
test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
Results of a comparison of two overlapping correlations based on dependent groups
Comparison between r.jk = 0.48 and r.jh = 0.25
Difference: r.jk - r.jh = 0.23
Related correlation: r.kh = 0.65
Group size: n = 205
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05
steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
z = 4.2898, p-value = 0.0000
Null hypothesis rejected
# difference
cocor.dep.groups.overlap(r.jk = -.21, r.jh = -.07, r.kh = .65, 205, alternative = "two.sided",
test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
Results of a comparison of two overlapping correlations based on dependent groups
Comparison between r.jk = -0.21 and r.jh = -0.07
Difference: r.jk - r.jh = -0.14
Related correlation: r.kh = 0.65
Group size: n = 205
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05
steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
z = -2.4104, p-value = 0.0159
Null hypothesis rejected
# correlation values are taken from the oblig ~ ind. diffs analyses (Choice scenarios)
## r.jk = oblig ~ family values corr; r.jh = oblig ~ ingroup loyalty corr; r.kh = family values ~ ingroup loyalty corr
# distant
cocor.dep.groups.overlap(r.jk = .32, r.jh = .16, r.kh = .65, 205, alternative = "two.sided",
test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
Results of a comparison of two overlapping correlations based on dependent groups
Comparison between r.jk = 0.32 and r.jh = 0.16
Difference: r.jk - r.jh = 0.16
Related correlation: r.kh = 0.65
Group size: n = 205
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05
steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
z = 2.8229, p-value = 0.0048
Null hypothesis rejected
# close
cocor.dep.groups.overlap(r.jk = .40, r.jh = .24, r.kh = .65, 205, alternative = "two.sided",
test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
Results of a comparison of two overlapping correlations based on dependent groups
Comparison between r.jk = 0.4 and r.jh = 0.24
Difference: r.jk - r.jh = 0.16
Related correlation: r.kh = 0.65
Group size: n = 205
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05
steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
z = 2.9080, p-value = 0.0036
Null hypothesis rejected
# difference
cocor.dep.groups.overlap(r.jk = -.23, r.jh = -.20, r.kh = .65, 205, alternative = "two.sided",
test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
Results of a comparison of two overlapping correlations based on dependent groups
Comparison between r.jk = -0.23 and r.jh = -0.2
Difference: r.jk - r.jh = -0.03
Related correlation: r.kh = 0.65
Group size: n = 205
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05
steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
z = -0.5241, p-value = 0.6002
Null hypothesis retained
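As a cross-check on the rounded correlation values entered by hand above, cocor can also be given a formula plus the raw data frame. The sketch below is illustrative only (shown for the No-Choice sibling-obligation comparison in the CUZnSIB dataset) rather than part of the pre-registered analysis.
# same Steiger (1980) comparison, computed directly from the raw CUZnSIB data
cocor::cocor(~NoChoice_SIB_oblig + MAC_Fam_Combined | NoChoice_SIB_oblig + MFQ_Loyalty_Combined,
             E1_CUZnSIB_clean,
             test = "steiger1980")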