Data Waves

As pre-registered, we sought to collect enough data to attain 330 analyzable responses per between-subjects condition (i.e., 330 participants who passed the attention check in each between-subjects condition, totaling 660 usable responses across the entire experiment).


In the first wave of data collection (N = 739), applying the pre-registered exclusion criterion yielded adequate samples (i.e., Ns > 330) for both between-subjects datasets. Therefore, we did not launch a second wave of data collection.


Data Cleaning

Before data were loaded into R (below), the following changes were made:

  1. Raw variable names from Qualtrics were renamed to be more descriptive.

  2. Any cases with a response to the field “Bot_Catcher” were deleted. This field was designed as an invisible question that only bots would answer (human respondents could not see it). However, no such cases were detected.

  3. Duplicate IP addresses were removed. There were only 4 instances of a duplicate IP address, leaving N = 735.

  4. All other identifying information was removed (e.g., IP addresses, longitude/latitude, etc.).


Loading Data/Packages

Before running this chunk, please load “E2_raw_data.csv” into the R environment.
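
For example, the raw data could be loaded with base R (a minimal sketch, assuming “E2_raw_data.csv” is in the working directory):

# load the raw data file into the R environment before running the chunk below
E2_raw_data <- read.csv("E2_raw_data.csv", stringsAsFactors = FALSE)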


# packages should be loaded in the following order to avoid function conflicts
library(psych) # for describing data
library(effsize) # for mean difference effect sizes

Attaching package: 'effsize'

The following object is masked from 'package:psych':

    cohen.d
library(sjstats) # for eta-squared effect sizes

Attaching package: 'sjstats'

The following object is masked from 'package:psych':

    phi
library(correlation) # for cleaner correlation test output
library(rmcorr) # for repeated-measures correlation tests
library(tidyverse) # for data manipulation and plotting
Registered S3 methods overwritten by 'dbplyr':
  method         from
  print.tbl_lazy     
  print.tbl_sql      
-- Attaching packages --------------------------------------- tidyverse 1.3.0 --
v ggplot2 3.3.2     v dplyr   1.0.0
v tibble  3.0.1     v stringr 1.4.0
v tidyr   1.1.0     v forcats 0.5.0
v purrr   0.3.4     
-- Conflicts ------------------------------------------ tidyverse_conflicts() --
x ggplot2::%+%()   masks psych::%+%()
x ggplot2::alpha() masks psych::alpha()
x dplyr::filter()  masks stats::filter()
x dplyr::lag()     masks stats::lag()

Data Separation/Recombining

Data were separated into two datasets (one for each between-subjects condition). A between-subjects condition variable was then created within each dataset, and the two datasets were recombined.


# creates dataset that only has participants who made judgments of agents who helped stranger-like family members
E2_SL <- E2_raw_data %>%
  filter(SL_CnS_C_m1 >= 0 | SL_CnS_C_m2 >= 0)

# creates dataset that only has participants who made judgments of agents who helped friend-like family members
E2_FL <- E2_raw_data %>%
  filter(FL_CnS_C_m1 >= 0 | FL_CnS_C_m2 >= 0)

# create between-subjects condition variable
E2_SL$BSs_cond <- rep("Stranger-Like", nrow(E2_SL))
E2_FL$BSs_cond <- rep("Friend-Like", nrow(E2_FL))

# recombine between-subjects data
E2_all <- rbind(E2_SL, E2_FL)
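
As an optional sanity check (not part of the pre-registered pipeline, and assuming ResponseId uniquely identifies participants), one could confirm that the two between-subjects datasets do not overlap and that no participant appears twice after recombining:

# optional check: no participant should appear in both between-subjects datasets
stopifnot(length(intersect(E2_SL$ResponseId, E2_FL$ResponseId)) == 0)
stopifnot(!any(duplicated(E2_all$ResponseId)))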

Implementing Attention Checks

Based on our pre-registered criterion, participants who failed a pre-manipulation attention check were to be excluded from all analyses. The attention check was disguised as an experimental scenario; in the scenario text, participants were instructed to respond with the left-most option on the scale for all seven pre-outcome measures.


Participants who responded, on average, above 10 on the pre-outcome 100-point scales were excluded. (We chose to use an average because we realized that a small group of participants answered with the left-most option for six of the seven pre-outcome measures but, for the seventh, answered with a number slightly above 10. By testing how this could have happened, we found that participants using a mouse scroll wheel could have answered the seventh pre-outcome measure correctly, but the scroll wheel could have dislodged that final answer if they did not first click off of the slider.) This led to a final analyzable N = 699 (a 95% retention rate).


# Create an attention check average variable
E2_all$AC_AVG <- ((E2_all$AC_oblig + E2_all$AC_relate + E2_all$AC_close + E2_all$AC_priorhelp + E2_all$AC_futurehelp + E2_all$AC_priorinteract + E2_all$AC_futureinteract)/7)

# Create dataset that filters out inattentive participants
E2_all_clean <- E2_all %>%
  # excludes participants who were not paying attention
    filter(AC_AVG < 10)
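
As a quick check of the retention rate reported above (not a pre-registered step), the cleaned sample can be compared against the post-deduplication sample:

# verify the analyzable N and retention rate reported above
nrow(E2_all_clean)                # expected: 699
nrow(E2_all_clean) / nrow(E2_all) # expected: ~.95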

Creating Analysis Variables

# Main DVs
# create single column for each condition's variables that collapses across presentation order of DVs

# e.g., SL_CnS_C_o1 = "Stranger-Like" family members dataset, "No Choice" condition, CUZ obligation judgment, obligation judgment presented first
# to clarify, as noted in the Method section (and SOM), six other pre-outcome judgments were collected, counterbalanced so that obligation judgments were either first or last (1 = obligation first, 2 = obligation last)

E2_all_clean$NoChoice_CUZ_oblig  <- rowSums(E2_all_clean[, c("SL_CnS_C_o1", "SL_CnS_C_o2",
                                                               "FL_CnS_C_o1", "FL_CnS_C_o2")], 
                                                na.rm = T)
E2_all_clean$NoChoice_CUZ_relate  <- rowSums(E2_all_clean[, c("SL_CnS_C_r1", "SL_CnS_C_r2",
                                                                "FL_CnS_C_r1", "FL_CnS_C_r2")], 
                                                na.rm = T)
E2_all_clean$NoChoice_CUZ_close  <- rowSums(E2_all_clean[, c("SL_CnS_C_c1", "SL_CnS_C_c2",
                                                               "FL_CnS_C_c1", "FL_CnS_C_c2")], 
                                                na.rm = T)
E2_all_clean$NoChoice_CUZ_priorhelp  <- rowSums(E2_all_clean[, c("SL_CnS_C_ph1", "SL_CnS_C_ph2",
                                                                   "FL_CnS_C_ph1", "FL_CnS_C_ph2")],
                                                na.rm = T)
E2_all_clean$NoChoice_CUZ_futurehelp  <- rowSums(E2_all_clean[, c("SL_CnS_C_fh1", "SL_CnS_C_fh2",
                                                                   "FL_CnS_C_fh1", "FL_CnS_C_fh2")], 
                                                na.rm = T)
E2_all_clean$NoChoice_CUZ_priorinteract  <- rowSums(E2_all_clean[, c("SL_CnS_C_pi1", "SL_CnS_C_pi2",
                                                                   "FL_CnS_C_pi1", "FL_CnS_C_pi2")],
                                                na.rm = T)
E2_all_clean$NoChoice_CUZ_futureinteract  <- rowSums(E2_all_clean[, c("SL_CnS_C_fi1", "SL_CnS_C_fi2",
                                                                   "FL_CnS_C_fi1", "FL_CnS_C_fi2")],
                                                na.rm = T)
E2_all_clean$NoChoice_CUZ_moral  <- rowSums(E2_all_clean[, c("SL_CnS_C_m1", "SL_CnS_C_m2",
                                                               "FL_CnS_C_m1", "FL_CnS_C_m2")], 
                                                na.rm = T)

E2_all_clean$NoChoice_SIB_oblig  <- rowSums(E2_all_clean[, c("SL_CnS_S_o1", "SL_CnS_S_o2",
                                                               "FL_CnS_S_o1", "FL_CnS_S_o2")], 
                                                na.rm = T)
E2_all_clean$NoChoice_SIB_relate  <- rowSums(E2_all_clean[, c("SL_CnS_S_r1", "SL_CnS_S_r2",
                                                                "FL_CnS_S_r1", "FL_CnS_S_r2")], 
                                                na.rm = T)
E2_all_clean$NoChoice_SIB_close  <- rowSums(E2_all_clean[, c("SL_CnS_S_c1", "SL_CnS_S_c2",
                                                               "FL_CnS_S_c1", "FL_CnS_S_c2")], 
                                                na.rm = T)
E2_all_clean$NoChoice_SIB_priorhelp  <- rowSums(E2_all_clean[, c("SL_CnS_S_ph1", "SL_CnS_S_ph2",
                                                                   "FL_CnS_S_ph1", "FL_CnS_S_ph2")],
                                                na.rm = T)
E2_all_clean$NoChoice_SIB_futurehelp  <- rowSums(E2_all_clean[, c("SL_CnS_S_fh1", "SL_CnS_S_fh2",
                                                                   "FL_CnS_S_fh1", "FL_CnS_S_fh2")], 
                                                na.rm = T)
E2_all_clean$NoChoice_SIB_priorinteract  <- rowSums(E2_all_clean[, c("SL_CnS_S_pi1", "SL_CnS_S_pi2",
                                                                   "FL_CnS_S_pi1", "FL_CnS_S_pi2")],
                                                na.rm = T)
E2_all_clean$NoChoice_SIB_futureinteract  <- rowSums(E2_all_clean[, c("SL_CnS_S_fi1", "SL_CnS_S_fi2",
                                                                   "FL_CnS_S_fi1", "FL_CnS_S_fi2")],
                                                na.rm = T)
E2_all_clean$NoChoice_SIB_moral  <- rowSums(E2_all_clean[, c("SL_CnS_S_m1", "SL_CnS_S_m2",
                                                               "FL_CnS_S_m1", "FL_CnS_S_m2")], 
                                                na.rm = T)

# e.g., SL_CnS_CoS_C_o11 = "Stranger-Like" family members dataset, "Choice" condition, CUZ obligation judgment, CUZ measures first, obligation judgment presented first
# to clarify, as noted in the Method section, two obligation (and other pre-outcome) judgments were collected in these conditions -- one for each potential beneficiary (e.g., CUZ and SIB), and they get averaged together later on in this same code chunk
E2_all_clean$CUZoSIB_CUZ_oblig <- rowSums(E2_all_clean[, c("SL_CnS_CoS_C_o11", "SL_CnS_CoS_C_o12",
                                                           "SL_CnS_CoS_C_o21", "SL_CnS_CoS_C_o22",
                                                           "FL_CnS_CoS_C_o11", "FL_CnS_CoS_C_o12",
                                                           "FL_CnS_CoS_C_o21", "FL_CnS_CoS_C_o22")],
                                                    na.rm = T) 
E2_all_clean$CUZoSIB_CUZ_relate <- rowSums(E2_all_clean[, c("SL_CnS_CoS_C_r11", "SL_CnS_CoS_C_r12",
                                                           "SL_CnS_CoS_C_r21", "SL_CnS_CoS_C_r22",
                                                           "FL_CnS_CoS_C_r11", "FL_CnS_CoS_C_r12",
                                                           "FL_CnS_CoS_C_r21", "FL_CnS_CoS_C_r22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_CUZ_close <- rowSums(E2_all_clean[, c("SL_CnS_CoS_C_c11", "SL_CnS_CoS_C_c12",
                                                           "SL_CnS_CoS_C_c21", "SL_CnS_CoS_C_c22",
                                                           "FL_CnS_CoS_C_c11", "FL_CnS_CoS_C_c12",
                                                           "FL_CnS_CoS_C_c21", "FL_CnS_CoS_C_c22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_CUZ_priorhelp <- rowSums(E2_all_clean[, c("SL_CnS_CoS_C_ph11", "SL_CnS_CoS_C_ph12",
                                                           "SL_CnS_CoS_C_ph21", "SL_CnS_CoS_C_ph22",
                                                           "FL_CnS_CoS_C_ph11", "FL_CnS_CoS_C_ph12",
                                                           "FL_CnS_CoS_C_ph21", "FL_CnS_CoS_C_ph22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_CUZ_futurehelp <- rowSums(E2_all_clean[, c("SL_CnS_CoS_C_fh11", "SL_CnS_CoS_C_fh12",
                                                           "SL_CnS_CoS_C_fh21", "SL_CnS_CoS_C_fh22",
                                                           "FL_CnS_CoS_C_fh11", "FL_CnS_CoS_C_fh12",
                                                           "FL_CnS_CoS_C_fh21", "FL_CnS_CoS_C_fh22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_CUZ_priorinteract <- rowSums(E2_all_clean[, c("SL_CnS_CoS_C_pi11", "SL_CnS_CoS_C_pi12",
                                                           "SL_CnS_CoS_C_pi21", "SL_CnS_CoS_C_pi22",
                                                           "FL_CnS_CoS_C_pi11", "FL_CnS_CoS_C_pi12",
                                                           "FL_CnS_CoS_C_pi21", "FL_CnS_CoS_C_pi22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_CUZ_futureinteract <- rowSums(E2_all_clean[, c("SL_CnS_CoS_C_fi11", "SL_CnS_CoS_C_fi12",
                                                           "SL_CnS_CoS_C_fi21", "SL_CnS_CoS_C_fi22",
                                                           "FL_CnS_CoS_C_fi11", "FL_CnS_CoS_C_fi12",
                                                           "FL_CnS_CoS_C_fi21", "FL_CnS_CoS_C_fi22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_SIB_oblig <- rowSums(E2_all_clean[, c("SL_CnS_CoS_S_o11", "SL_CnS_CoS_S_o12",
                                                           "SL_CnS_CoS_S_o21", "SL_CnS_CoS_S_o22",
                                                           "FL_CnS_CoS_S_o11", "FL_CnS_CoS_S_o12",
                                                           "FL_CnS_CoS_S_o21", "FL_CnS_CoS_S_o22")],
                                                    na.rm = T) 
E2_all_clean$CUZoSIB_SIB_relate <- rowSums(E2_all_clean[, c("SL_CnS_CoS_S_r11", "SL_CnS_CoS_S_r12",
                                                           "SL_CnS_CoS_S_r21", "SL_CnS_CoS_S_r22",
                                                           "FL_CnS_CoS_S_r11", "FL_CnS_CoS_S_r12",
                                                           "FL_CnS_CoS_S_r21", "FL_CnS_CoS_S_r22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_SIB_close <- rowSums(E2_all_clean[, c("SL_CnS_CoS_S_c11", "SL_CnS_CoS_S_c12",
                                                           "SL_CnS_CoS_S_c21", "SL_CnS_CoS_S_c22",
                                                           "FL_CnS_CoS_S_c11", "FL_CnS_CoS_S_c12",
                                                           "FL_CnS_CoS_S_c21", "FL_CnS_CoS_S_c22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_SIB_priorhelp <- rowSums(E2_all_clean[, c("SL_CnS_CoS_S_ph11", "SL_CnS_CoS_S_ph12",
                                                           "SL_CnS_CoS_S_ph21", "SL_CnS_CoS_S_ph22",
                                                           "FL_CnS_CoS_S_ph11", "FL_CnS_CoS_S_ph12",
                                                           "FL_CnS_CoS_S_ph21", "FL_CnS_CoS_S_ph22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_SIB_futurehelp <- rowSums(E2_all_clean[, c("SL_CnS_CoS_S_fh11", "SL_CnS_CoS_S_fh12",
                                                           "SL_CnS_CoS_S_fh21", "SL_CnS_CoS_S_fh22",
                                                           "FL_CnS_CoS_S_fh11", "FL_CnS_CoS_S_fh12",
                                                           "FL_CnS_CoS_S_fh21", "FL_CnS_CoS_S_fh22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_SIB_priorinteract <- rowSums(E2_all_clean[, c("SL_CnS_CoS_S_pi11", "SL_CnS_CoS_S_pi12",
                                                           "SL_CnS_CoS_S_pi21", "SL_CnS_CoS_S_pi22",
                                                           "FL_CnS_CoS_S_pi11", "FL_CnS_CoS_S_pi12",
                                                           "FL_CnS_CoS_S_pi21", "FL_CnS_CoS_S_pi22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_SIB_futureinteract <- rowSums(E2_all_clean[, c("SL_CnS_CoS_S_fi11", "SL_CnS_CoS_S_fi12",
                                                           "SL_CnS_CoS_S_fi21", "SL_CnS_CoS_S_fi22",
                                                           "FL_CnS_CoS_S_fi11", "FL_CnS_CoS_S_fi12",
                                                           "FL_CnS_CoS_S_fi21", "FL_CnS_CoS_S_fi22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_CUZ_moral <- rowSums(E2_all_clean[, c("SL_CnS_CoS_C_m11", "SL_CnS_CoS_C_m12",
                                                           "SL_CnS_CoS_C_m21", "SL_CnS_CoS_C_m22",
                                                           "FL_CnS_CoS_C_m11", "FL_CnS_CoS_C_m12",
                                                           "FL_CnS_CoS_C_m21", "FL_CnS_CoS_C_m22")],
                                                    na.rm = T)

E2_all_clean$SIBoCUZ_CUZ_oblig <- rowSums(E2_all_clean[, c("SL_CnS_SoC_C_o11", "SL_CnS_SoC_C_o12",
                                                           "SL_CnS_SoC_C_o21", "SL_CnS_SoC_C_o22",
                                                           "FL_CnS_SoC_C_o11", "FL_CnS_SoC_C_o12",
                                                           "FL_CnS_SoC_C_o21", "FL_CnS_SoC_C_o22")],
                                                    na.rm = T) 
E2_all_clean$SIBoCUZ_CUZ_relate <- rowSums(E2_all_clean[, c("SL_CnS_SoC_C_r11", "SL_CnS_SoC_C_r12",
                                                           "SL_CnS_SoC_C_r21", "SL_CnS_SoC_C_r22",
                                                           "FL_CnS_SoC_C_r11", "FL_CnS_SoC_C_r12",
                                                           "FL_CnS_SoC_C_r21", "FL_CnS_SoC_C_r22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_CUZ_close <- rowSums(E2_all_clean[, c("SL_CnS_SoC_C_c11", "SL_CnS_SoC_C_c12",
                                                           "SL_CnS_SoC_C_c21", "SL_CnS_SoC_C_c22",
                                                           "FL_CnS_SoC_C_c11", "FL_CnS_SoC_C_c12",
                                                           "FL_CnS_SoC_C_c21", "FL_CnS_SoC_C_c22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_CUZ_priorhelp <- rowSums(E2_all_clean[, c("SL_CnS_SoC_C_ph11", "SL_CnS_SoC_C_ph12",
                                                           "SL_CnS_SoC_C_ph21", "SL_CnS_SoC_C_ph22",
                                                           "FL_CnS_SoC_C_ph11", "FL_CnS_SoC_C_ph12",
                                                           "FL_CnS_SoC_C_ph21", "FL_CnS_SoC_C_ph22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_CUZ_futurehelp <- rowSums(E2_all_clean[, c("SL_CnS_SoC_C_fh11", "SL_CnS_SoC_C_fh12",
                                                           "SL_CnS_SoC_C_fh21", "SL_CnS_SoC_C_fh22",
                                                           "FL_CnS_SoC_C_fh11", "FL_CnS_SoC_C_fh12",
                                                           "FL_CnS_SoC_C_fh21", "FL_CnS_SoC_C_fh22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_CUZ_priorinteract <- rowSums(E2_all_clean[, c("SL_CnS_SoC_C_pi11", "SL_CnS_SoC_C_pi12",
                                                           "SL_CnS_SoC_C_pi21", "SL_CnS_SoC_C_pi22",
                                                           "FL_CnS_SoC_C_pi11", "FL_CnS_SoC_C_pi12",
                                                           "FL_CnS_SoC_C_pi21", "FL_CnS_SoC_C_pi22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_CUZ_futureinteract <- rowSums(E2_all_clean[, c("SL_CnS_SoC_C_fi11", "SL_CnS_SoC_C_fi12",
                                                           "SL_CnS_SoC_C_fi21", "SL_CnS_SoC_C_fi22",
                                                           "FL_CnS_SoC_C_fi11", "FL_CnS_SoC_C_fi12",
                                                           "FL_CnS_SoC_C_fi21", "FL_CnS_SoC_C_fi22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_SIB_oblig <- rowSums(E2_all_clean[, c("SL_CnS_SoC_S_o11", "SL_CnS_SoC_S_o12",
                                                           "SL_CnS_SoC_S_o21", "SL_CnS_SoC_S_o22",
                                                           "FL_CnS_SoC_S_o11", "FL_CnS_SoC_S_o12",
                                                           "FL_CnS_SoC_S_o21", "FL_CnS_SoC_S_o22")],
                                                    na.rm = T) 
E2_all_clean$SIBoCUZ_SIB_relate <- rowSums(E2_all_clean[, c("SL_CnS_SoC_S_r11", "SL_CnS_SoC_S_r12",
                                                           "SL_CnS_SoC_S_r21", "SL_CnS_SoC_S_r22",
                                                           "FL_CnS_SoC_S_r11", "FL_CnS_SoC_S_r12",
                                                           "FL_CnS_SoC_S_r21", "FL_CnS_SoC_S_r22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_SIB_close <- rowSums(E2_all_clean[, c("SL_CnS_SoC_S_c11", "SL_CnS_SoC_S_c12",
                                                           "SL_CnS_SoC_S_c21", "SL_CnS_SoC_S_c22",
                                                           "FL_CnS_SoC_S_c11", "FL_CnS_SoC_S_c12",
                                                           "FL_CnS_SoC_S_c21", "FL_CnS_SoC_S_c22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_SIB_priorhelp <- rowSums(E2_all_clean[, c("SL_CnS_SoC_S_ph11", "SL_CnS_SoC_S_ph12",
                                                           "SL_CnS_SoC_S_ph21", "SL_CnS_SoC_S_ph22",
                                                           "FL_CnS_SoC_S_ph11", "FL_CnS_SoC_S_ph12",
                                                           "FL_CnS_SoC_S_ph21", "FL_CnS_SoC_S_ph22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_SIB_futurehelp <- rowSums(E2_all_clean[, c("SL_CnS_SoC_S_fh11", "SL_CnS_SoC_S_fh12",
                                                           "SL_CnS_SoC_S_fh21", "SL_CnS_SoC_S_fh22",
                                                           "FL_CnS_SoC_S_fh11", "FL_CnS_SoC_S_fh12",
                                                           "FL_CnS_SoC_S_fh21", "FL_CnS_SoC_S_fh22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_SIB_priorinteract <- rowSums(E2_all_clean[, c("SL_CnS_SoC_S_pi11", "SL_CnS_SoC_S_pi12",
                                                           "SL_CnS_SoC_S_pi21", "SL_CnS_SoC_S_pi22",
                                                           "FL_CnS_SoC_S_pi11", "FL_CnS_SoC_S_pi12",
                                                           "FL_CnS_SoC_S_pi21", "FL_CnS_SoC_S_pi22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_SIB_futureinteract <- rowSums(E2_all_clean[, c("SL_CnS_SoC_S_fi11", "SL_CnS_SoC_S_fi12",
                                                           "SL_CnS_SoC_S_fi21", "SL_CnS_SoC_S_fi22",
                                                           "FL_CnS_SoC_S_fi11", "FL_CnS_SoC_S_fi12",
                                                           "FL_CnS_SoC_S_fi21", "FL_CnS_SoC_S_fi22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_SIB_moral <- rowSums(E2_all_clean[, c("SL_CnS_SoC_S_m11", "SL_CnS_SoC_S_m12",
                                                           "SL_CnS_SoC_S_m21", "SL_CnS_SoC_S_m22",
                                                           "FL_CnS_SoC_S_m11", "FL_CnS_SoC_S_m12",
                                                           "FL_CnS_SoC_S_m21", "FL_CnS_SoC_S_m22")],
                                                    na.rm = T)
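
# Note: the repeated rowSums() calls above all follow the same pattern --
# collapsing the counterbalanced/order versions of one measure into a single
# column. A small helper like the hypothetical one below could express that
# step more compactly; it is shown only as an illustration and is not part of
# the pre-registered pipeline.
collapse_versions <- function(data, cols) rowSums(data[, cols], na.rm = TRUE)
# e.g., E2_all_clean$NoChoice_CUZ_oblig <- collapse_versions(
#   E2_all_clean, c("SL_CnS_C_o1", "SL_CnS_C_o2", "FL_CnS_C_o1", "FL_CnS_C_o2"))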


E2_all_clean$Choice_CUZ_oblig  <- (E2_all_clean$CUZoSIB_CUZ_oblig +
                                   E2_all_clean$SIBoCUZ_CUZ_oblig)/2 # creates pre-reg'd index
E2_all_clean$Choice_CUZ_relate  <- (E2_all_clean$CUZoSIB_CUZ_relate +
                                    E2_all_clean$SIBoCUZ_CUZ_relate)/2 # creates pre-reg'd index
E2_all_clean$Choice_CUZ_close  <- (E2_all_clean$CUZoSIB_CUZ_close +
                                   E2_all_clean$SIBoCUZ_CUZ_close)/2 # creates pre-reg'd index
E2_all_clean$Choice_CUZ_priorhelp  <- (E2_all_clean$CUZoSIB_CUZ_priorhelp +
                                       E2_all_clean$SIBoCUZ_CUZ_priorhelp)/2 # creates pre-reg'd index
E2_all_clean$Choice_CUZ_futurehelp  <- (E2_all_clean$CUZoSIB_CUZ_futurehelp +
                                        E2_all_clean$SIBoCUZ_CUZ_futurehelp)/2 # creates pre-reg'd index
E2_all_clean$Choice_CUZ_priorinteract  <- (E2_all_clean$CUZoSIB_CUZ_priorinteract +
                                           E2_all_clean$SIBoCUZ_CUZ_priorinteract)/2 # creates pre-reg'd index
E2_all_clean$Choice_CUZ_futureinteract  <- (E2_all_clean$CUZoSIB_CUZ_futureinteract +
                                            E2_all_clean$SIBoCUZ_CUZ_futureinteract)/2 # creates pre-reg'd index
E2_all_clean$Choice_CUZ_moral  <- E2_all_clean$CUZoSIB_CUZ_moral # single judgment (post-outcome)

E2_all_clean$Choice_SIB_oblig  <- (E2_all_clean$CUZoSIB_SIB_oblig +
                                   E2_all_clean$SIBoCUZ_SIB_oblig)/2 # creates pre-reg'd index
E2_all_clean$Choice_SIB_relate  <- (E2_all_clean$CUZoSIB_SIB_relate +
                                    E2_all_clean$SIBoCUZ_SIB_relate)/2 # creates pre-reg'd index
E2_all_clean$Choice_SIB_close  <- (E2_all_clean$CUZoSIB_SIB_close +
                                   E2_all_clean$SIBoCUZ_SIB_close)/2 # creates pre-reg'd index
E2_all_clean$Choice_SIB_priorhelp  <- (E2_all_clean$CUZoSIB_SIB_priorhelp +
                                       E2_all_clean$SIBoCUZ_SIB_priorhelp)/2 # creates pre-reg'd index
E2_all_clean$Choice_SIB_futurehelp  <- (E2_all_clean$CUZoSIB_SIB_futurehelp +
                                        E2_all_clean$SIBoCUZ_SIB_futurehelp)/2 # creates pre-reg'd index
E2_all_clean$Choice_SIB_priorinteract  <- (E2_all_clean$CUZoSIB_SIB_priorinteract +
                                           E2_all_clean$SIBoCUZ_SIB_priorinteract)/2 # creates pre-reg'd index
E2_all_clean$Choice_SIB_futureinteract  <- (E2_all_clean$CUZoSIB_SIB_futureinteract +
                                            E2_all_clean$SIBoCUZ_SIB_futureinteract)/2 # creates pre-reg'd index
E2_all_clean$Choice_SIB_moral  <- E2_all_clean$SIBoCUZ_SIB_moral # single judgment (post-outcome)


# Difference Scores
# CUZminusSIB obligation within No Choice or Choice conditions (for diff score corrs and ind. diffs analyses)
E2_all_clean$NoChoice_CUZminusSIB_oblig <- E2_all_clean$NoChoice_CUZ_oblig - E2_all_clean$NoChoice_SIB_oblig
E2_all_clean$NoChoice_CUZminusSIB_relate <- E2_all_clean$NoChoice_CUZ_relate - E2_all_clean$NoChoice_SIB_relate
E2_all_clean$NoChoice_CUZminusSIB_close <- E2_all_clean$NoChoice_CUZ_close - E2_all_clean$NoChoice_SIB_close
E2_all_clean$NoChoice_CUZminusSIB_priorhelp <- E2_all_clean$NoChoice_CUZ_priorhelp - E2_all_clean$NoChoice_SIB_priorhelp
E2_all_clean$NoChoice_CUZminusSIB_futurehelp <- E2_all_clean$NoChoice_CUZ_futurehelp - E2_all_clean$NoChoice_SIB_futurehelp
E2_all_clean$NoChoice_CUZminusSIB_priorinteract <- E2_all_clean$NoChoice_CUZ_priorinteract - E2_all_clean$NoChoice_SIB_priorinteract
E2_all_clean$NoChoice_CUZminusSIB_futureinteract <- E2_all_clean$NoChoice_CUZ_futureinteract - E2_all_clean$NoChoice_SIB_futureinteract
E2_all_clean$NoChoice_CUZminusSIB_moral <- E2_all_clean$NoChoice_CUZ_moral - E2_all_clean$NoChoice_SIB_moral

E2_all_clean$Choice_CUZminusSIB_oblig <- E2_all_clean$Choice_CUZ_oblig - E2_all_clean$Choice_SIB_oblig
E2_all_clean$Choice_CUZminusSIB_relate <- E2_all_clean$Choice_CUZ_relate - E2_all_clean$Choice_SIB_relate
E2_all_clean$Choice_CUZminusSIB_close <- E2_all_clean$Choice_CUZ_close - E2_all_clean$Choice_SIB_close
E2_all_clean$Choice_CUZminusSIB_priorhelp <- E2_all_clean$Choice_CUZ_priorhelp - E2_all_clean$Choice_SIB_priorhelp
E2_all_clean$Choice_CUZminusSIB_futurehelp <- E2_all_clean$Choice_CUZ_futurehelp - E2_all_clean$Choice_SIB_futurehelp
E2_all_clean$Choice_CUZminusSIB_priorinteract <- E2_all_clean$Choice_CUZ_priorinteract - E2_all_clean$Choice_SIB_priorinteract
E2_all_clean$Choice_CUZminusSIB_futureinteract <- E2_all_clean$Choice_CUZ_futureinteract - E2_all_clean$Choice_SIB_futureinteract
E2_all_clean$Choice_CUZminusSIB_moral <- E2_all_clean$Choice_CUZ_moral - E2_all_clean$Choice_SIB_moral


# Individual Difference Measures (for ind. diffs analyses)

# MAC (Morality-as-Cooperation scale) composites
# first need to reverse score property judgment subscale per Curry et al. 2019
E2_all_clean$MAC_Jud_19_r <- ((102 - (E2_all_clean$MAC_Jud_19 +1)) - 1) 
E2_all_clean$MAC_Jud_20_r <- ((102 - (E2_all_clean$MAC_Jud_20 +1)) - 1)
E2_all_clean$MAC_Jud_21_r <- ((102 - (E2_all_clean$MAC_Jud_21 +1)) - 1)
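# (note: ((102 - (x + 1)) - 1) simplifies to 100 - x, i.e., responses are
# reflected around the midpoint of the 0-100 scale)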

E2_all_clean$MAC_Fam_Combined <- ((E2_all_clean$MAC_Jud_1 + E2_all_clean$MAC_Jud_2 + E2_all_clean$MAC_Jud_3 +
                                       E2_all_clean$MAC_Rel_1 + E2_all_clean$MAC_Rel_2 + E2_all_clean$MAC_Rel_3)/6)
E2_all_clean$MAC_Fam_Jud <- ((E2_all_clean$MAC_Jud_1 + E2_all_clean$MAC_Jud_2 + E2_all_clean$MAC_Jud_3)/3)
E2_all_clean$MAC_Fam_Rel <- ((E2_all_clean$MAC_Rel_1 + E2_all_clean$MAC_Rel_2 + E2_all_clean$MAC_Rel_3)/3)

E2_all_clean$MAC_Group_Combined <- ((E2_all_clean$MAC_Jud_4 + E2_all_clean$MAC_Jud_5 + E2_all_clean$MAC_Jud_6 +
                                       E2_all_clean$MAC_Rel_4 + E2_all_clean$MAC_Rel_5 + E2_all_clean$MAC_Rel_6)/6)
E2_all_clean$MAC_Group_Jud <- ((E2_all_clean$MAC_Jud_4 + E2_all_clean$MAC_Jud_5 + E2_all_clean$MAC_Jud_6)/3)
E2_all_clean$MAC_Group_Rel <- ((E2_all_clean$MAC_Rel_4 + E2_all_clean$MAC_Rel_5 + E2_all_clean$MAC_Rel_6)/3)

E2_all_clean$MAC_Rec_Combined <- ((E2_all_clean$MAC_Jud_7 + E2_all_clean$MAC_Jud_8 + E2_all_clean$MAC_Jud_9 +
                                       E2_all_clean$MAC_Rel_7 + E2_all_clean$MAC_Rel_8 + E2_all_clean$MAC_Rel_9)/6)
E2_all_clean$MAC_Rec_Jud <- ((E2_all_clean$MAC_Jud_7 + E2_all_clean$MAC_Jud_8 + E2_all_clean$MAC_Jud_9)/3)
E2_all_clean$MAC_Rec_Rel <- ((E2_all_clean$MAC_Rel_7 + E2_all_clean$MAC_Rel_8 + E2_all_clean$MAC_Rel_9)/3)

E2_all_clean$MAC_Hero_Combined <- ((E2_all_clean$MAC_Jud_10 + E2_all_clean$MAC_Jud_11 + E2_all_clean$MAC_Jud_12 +
                                       E2_all_clean$MAC_Rel_10 + E2_all_clean$MAC_Rel_11 + E2_all_clean$MAC_Rel_12)/6)
E2_all_clean$MAC_Hero_Jud <- ((E2_all_clean$MAC_Jud_10 + E2_all_clean$MAC_Jud_11 + E2_all_clean$MAC_Jud_12)/3)
E2_all_clean$MAC_Hero_Rel <- ((E2_all_clean$MAC_Rel_10 + E2_all_clean$MAC_Rel_11 + E2_all_clean$MAC_Rel_12)/3)

E2_all_clean$MAC_Def_Combined <- ((E2_all_clean$MAC_Jud_13 + E2_all_clean$MAC_Jud_14 + E2_all_clean$MAC_Jud_15 +
                                       E2_all_clean$MAC_Rel_13 + E2_all_clean$MAC_Rel_14 + E2_all_clean$MAC_Rel_15)/6)
E2_all_clean$MAC_Def_Jud <- ((E2_all_clean$MAC_Jud_13 + E2_all_clean$MAC_Jud_14 + E2_all_clean$MAC_Jud_15)/3)
E2_all_clean$MAC_Def_Rel <- ((E2_all_clean$MAC_Rel_13 + E2_all_clean$MAC_Rel_14 + E2_all_clean$MAC_Rel_15)/3)

E2_all_clean$MAC_Fair_Combined <- ((E2_all_clean$MAC_Jud_16 + E2_all_clean$MAC_Jud_17 + E2_all_clean$MAC_Jud_18 +
                                       E2_all_clean$MAC_Rel_16 + E2_all_clean$MAC_Rel_17 + E2_all_clean$MAC_Rel_18)/6)
E2_all_clean$MAC_Fair_Jud <- ((E2_all_clean$MAC_Jud_16 + E2_all_clean$MAC_Jud_17 + E2_all_clean$MAC_Jud_18)/3)
E2_all_clean$MAC_Fair_Rel <- ((E2_all_clean$MAC_Rel_16 + E2_all_clean$MAC_Rel_17 + E2_all_clean$MAC_Rel_18)/3)

E2_all_clean$MAC_Prop_Combined <- ((E2_all_clean$MAC_Jud_19_r + E2_all_clean$MAC_Jud_20_r + E2_all_clean$MAC_Jud_21_r +
                                       E2_all_clean$MAC_Rel_19 + E2_all_clean$MAC_Rel_20 + E2_all_clean$MAC_Rel_21)/6)
E2_all_clean$MAC_Prop_Jud <- ((E2_all_clean$MAC_Jud_19_r + E2_all_clean$MAC_Jud_20_r + E2_all_clean$MAC_Jud_21_r)/3)
E2_all_clean$MAC_Prop_Rel <- ((E2_all_clean$MAC_Rel_19 + E2_all_clean$MAC_Rel_20 + E2_all_clean$MAC_Rel_21)/3)


# MFQ (Moral Foundations Theory scale) composites
E2_all_clean$MFQ_Harm_Combined <- ((E2_all_clean$MFQ_Jud_1 + E2_all_clean$MFQ_Jud_2 + E2_all_clean$MFQ_Jud_3 +
                                       E2_all_clean$MFQ_Rel_1 + E2_all_clean$MFQ_Rel_2 + E2_all_clean$MFQ_Rel_3)/6)
E2_all_clean$MFQ_Harm_Jud <- ((E2_all_clean$MFQ_Jud_1 + E2_all_clean$MFQ_Jud_2 + E2_all_clean$MFQ_Jud_3)/3)
E2_all_clean$MFQ_Harm_Rel <- ((E2_all_clean$MFQ_Rel_1 + E2_all_clean$MFQ_Rel_2 + E2_all_clean$MFQ_Rel_3)/3)

E2_all_clean$MFQ_Fairness_Combined <- ((E2_all_clean$MFQ_Jud_4 + E2_all_clean$MFQ_Jud_5 + E2_all_clean$MFQ_Jud_6 +
                                       E2_all_clean$MFQ_Rel_4 + E2_all_clean$MFQ_Rel_5 + E2_all_clean$MFQ_Rel_6)/6)
E2_all_clean$MFQ_Fairness_Jud <- ((E2_all_clean$MFQ_Jud_4 + E2_all_clean$MFQ_Jud_5 + E2_all_clean$MFQ_Jud_6)/3)
E2_all_clean$MFQ_Fairness_Rel <- ((E2_all_clean$MFQ_Rel_4 + E2_all_clean$MFQ_Rel_5 + E2_all_clean$MFQ_Rel_6)/3)

E2_all_clean$MFQ_Loyalty_Combined <- ((E2_all_clean$MFQ_Jud_7 + E2_all_clean$MFQ_Jud_8 + E2_all_clean$MFQ_Jud_9 +
                                       E2_all_clean$MFQ_Rel_7 + E2_all_clean$MFQ_Rel_8 + E2_all_clean$MFQ_Rel_9)/6)
E2_all_clean$MFQ_Loyalty_Jud <- ((E2_all_clean$MFQ_Jud_7 + E2_all_clean$MFQ_Jud_8 + E2_all_clean$MFQ_Jud_9)/3)
E2_all_clean$MFQ_Loyalty_Rel <- ((E2_all_clean$MFQ_Rel_7 + E2_all_clean$MFQ_Rel_8 + E2_all_clean$MFQ_Rel_9)/3)

E2_all_clean$MFQ_Authority_Combined <- ((E2_all_clean$MFQ_Jud_10 + E2_all_clean$MFQ_Jud_11 + E2_all_clean$MFQ_Jud_12 +
                                       E2_all_clean$MFQ_Rel_10 + E2_all_clean$MFQ_Rel_11 + E2_all_clean$MFQ_Rel_12)/6)
E2_all_clean$MFQ_Authority_Jud <- ((E2_all_clean$MFQ_Jud_10 + E2_all_clean$MFQ_Jud_11 + E2_all_clean$MFQ_Jud_12)/3)
E2_all_clean$MFQ_Authority_Rel <- ((E2_all_clean$MFQ_Rel_10 + E2_all_clean$MFQ_Rel_11 + E2_all_clean$MFQ_Rel_12)/3)

E2_all_clean$MFQ_Purity_Combined <- ((E2_all_clean$MFQ_Jud_13 + E2_all_clean$MFQ_Jud_14 + E2_all_clean$MFQ_Jud_15 +
                                       E2_all_clean$MFQ_Rel_13 + E2_all_clean$MFQ_Rel_14 + E2_all_clean$MFQ_Rel_15)/6)
E2_all_clean$MFQ_Purity_Jud <- ((E2_all_clean$MFQ_Jud_13 + E2_all_clean$MFQ_Jud_14 + E2_all_clean$MFQ_Jud_15)/3)
E2_all_clean$MFQ_Purity_Rel <- ((E2_all_clean$MFQ_Rel_13 + E2_all_clean$MFQ_Rel_14 + E2_all_clean$MFQ_Rel_15)/3)

# OUS (Oxford Utilitarianism Scale) composites
E2_all_clean$OUS_IB <- ((E2_all_clean$OUS_IB1 + E2_all_clean$OUS_IB2 + E2_all_clean$OUS_IB3 +
                             E2_all_clean$OUS_IB4 + E2_all_clean$OUS_IB5)/5)
E2_all_clean$OUS_IH <- ((E2_all_clean$OUS_IH1 + E2_all_clean$OUS_IH2 + E2_all_clean$OUS_IH3 +
                             E2_all_clean$OUS_IH4)/4)
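
The raw individual-difference items are retained below for reliability checks. As one example (a sketch, not a pre-registered analysis), internal consistency for the OUS impartial-beneficence items could be computed with the psych package; alpha() is called with its namespace because ggplot2 masks psych::alpha() (see the conflicts listed above).

# example reliability check: Cronbach's alpha for the OUS impartial beneficence items
psych::alpha(E2_all_clean[, c("OUS_IB1", "OUS_IB2", "OUS_IB3", "OUS_IB4", "OUS_IB5")])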

Creating Analyzable Between-Subjects Datasets

# Stranger-Like family members
E2_SL_clean <- E2_all_clean %>%
  filter(BSs_cond == 'Stranger-Like') %>%
  # select only variables that are relevant to Stranger-Like data
  select(
    ResponseId, # selects variable
    Age:Urban_Rural, # selects demographic variables
    MAC_Jud_1:MAC_Jud_18, MAC_Jud_19_r:MAC_Jud_21_r, MAC_Rel_1:MAC_Rel_21, 
    MFQ_Jud_1:MFQ_Jud_15, MFQ_Rel_1:MFQ_Rel_15, 
    OUS_IB1:OUS_IB5, OUS_IH1:OUS_IH4, # selects raw ind. diff variables (for reliability checks)
    MAC_Fam_Combined:OUS_IH, # selects composited ind. diff variables
    BSs_cond, # selects variable for between-subjects condition
    SL_Dist_Scen:SL_CloseODist_Scen, # selects scenario-to-condition variables for SL data
    NoChoice_CUZ_oblig:NoChoice_SIB_moral, # selects NoChoice DVs for SL data
    Choice_CUZ_oblig:Choice_SIB_moral, # selects Choice DVs for SL data
    NoChoice_CUZminusSIB_oblig:Choice_CUZminusSIB_moral # selects difference score variables for SL data
    )

# Friend-like family members
E2_FL_clean <- E2_all_clean %>%
  filter(BSs_cond == 'Friend-Like') %>%
  # select only variables that are relevant to "Friend-Like" data
  select(
    ResponseId, # selects variable
    Age:Urban_Rural, # selects demographic variables
    MAC_Jud_1:MAC_Jud_18, MAC_Jud_19_r:MAC_Jud_21_r, MAC_Rel_1:MAC_Rel_21, 
    MFQ_Jud_1:MFQ_Jud_15, MFQ_Rel_1:MFQ_Rel_15, 
    OUS_IB1:OUS_IB5, OUS_IH1:OUS_IH4, # selects raw ind. diff variables (for reliability checks)
    MAC_Fam_Combined:OUS_IH, # selects composited ind. diff variables
    BSs_cond, # selects variable for between-subjects condition
    FL_Dist_Scen:FL_CloseODist_Scen, # selects scenario-to-condition variables for FL data
    NoChoice_CUZ_oblig:NoChoice_SIB_moral, # selects NoChoice DVs for FL data
    Choice_CUZ_oblig:Choice_SIB_moral, # selects Choice DVs for FL data
    NoChoice_CUZminusSIB_oblig:Choice_CUZminusSIB_moral # selects difference score variables for FL data
    )
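
Because the pre-registered target was 330 analyzable responses per between-subjects condition, a quick tabulation (not a pre-registered analysis) can confirm that both cleaned datasets meet it:

# confirm that each between-subjects condition retains more than 330 participants
table(E2_all_clean$BSs_cond)
c(SL = nrow(E2_SL_clean), FL = nrow(E2_FL_clean))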

Tidying Data

# Convert data from wide to long format
# Stranger-Like
E2_SL_cond_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(SL_Dist_Scen, SL_Close_Scen, SL_DistOClose_Scen, SL_CloseODist_Scen),
    names_to = "WSs_cond",
    values_to = "Condition"
  )

E2_SL_oblig_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_oblig, NoChoice_SIB_oblig, Choice_CUZ_oblig, Choice_SIB_oblig),
    names_to = "WSs_cond",
    values_to = "oblig"
  )

E2_SL_relate_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_relate, NoChoice_SIB_relate, Choice_CUZ_relate, Choice_SIB_relate),
    names_to = "WSs_cond",
    values_to = "relate"
  )

E2_SL_close_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_close, NoChoice_SIB_close, Choice_CUZ_close, Choice_SIB_close),
    names_to = "WSs_cond",
    values_to = "close"
  )

E2_SL_priorhelp_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_priorhelp, NoChoice_SIB_priorhelp, Choice_CUZ_priorhelp, Choice_SIB_priorhelp),
    names_to = "WSs_cond",
    values_to = "priorhelp"
  )

E2_SL_futurehelp_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_futurehelp, NoChoice_SIB_futurehelp, Choice_CUZ_futurehelp, Choice_SIB_futurehelp),
    names_to = "WSs_cond",
    values_to = "futurehelp"
  )

E2_SL_priorinteract_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_priorinteract, NoChoice_SIB_priorinteract, Choice_CUZ_priorinteract, Choice_SIB_priorinteract),
    names_to = "WSs_cond",
    values_to = "priorinteract"
  )

E2_SL_futureinteract_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_futureinteract, NoChoice_SIB_futureinteract, Choice_CUZ_futureinteract, Choice_SIB_futureinteract),
    names_to = "WSs_cond",
    values_to = "futureinteract"
  )

E2_SL_moral_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_moral, NoChoice_SIB_moral, Choice_CUZ_moral, Choice_SIB_moral),
    names_to = "WSs_cond",
    values_to = "moral"
  )


# Combine long SL datasets, select plotting variables, and create condition variable for each factor (Relation + Choice Context)
E2_SL_long <- cbind(E2_SL_cond_long, 
                    E2_SL_oblig_long, E2_SL_relate_long, E2_SL_close_long,
                    E2_SL_priorhelp_long, E2_SL_futurehelp_long,
                    E2_SL_priorinteract_long, E2_SL_futureinteract_long,
                    E2_SL_moral_long)

E2_SL_long <- E2_SL_long[, !duplicated(colnames(E2_SL_long))] %>% # get rid of duplicate columns
  select(ResponseId,
         Age:OUS_IH,
         BSs_cond,
         WSs_cond,
         Condition,
         oblig, relate, close, 
         priorhelp, futurehelp,
         priorinteract, futureinteract,
         moral) %>%
  mutate(Relation = case_when(
    WSs_cond == "SL_Dist_Scen" ~ "Distant",
    WSs_cond == "SL_Close_Scen" ~ "Close",
    WSs_cond == "SL_DistOClose_Scen" ~ "Distant",
    WSs_cond == "SL_CloseODist_Scen" ~ "Close")) %>%
  mutate(`Choice Context` = case_when(
    WSs_cond == "SL_Dist_Scen" ~ "No Choice",
    WSs_cond == "SL_Close_Scen" ~ "No Choice",
    WSs_cond == "SL_DistOClose_Scen" ~ "Choice",
    WSs_cond == "SL_CloseODist_Scen" ~ "Choice"))

# Reorder condition factor levels and convert participant IDs to factors
E2_SL_long$Relation <- as.factor(E2_SL_long$Relation)
E2_SL_long$Relation <- ordered(E2_SL_long$Relation, levels = c("Distant", "Close"))
E2_SL_long$`Choice Context` <- as.factor(E2_SL_long$`Choice Context`)
E2_SL_long$`Choice Context` <- ordered(E2_SL_long$`Choice Context`, levels = c("No Choice", "Choice"))
E2_SL_long$ResponseId <- as.factor(E2_SL_long$ResponseId)


# Friend-Like
E2_FL_cond_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(FL_Dist_Scen, FL_Close_Scen, FL_DistOClose_Scen, FL_CloseODist_Scen),
    names_to = "WSs_cond",
    values_to = "Condition"
  )

E2_FL_oblig_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_oblig, NoChoice_SIB_oblig, Choice_CUZ_oblig, Choice_SIB_oblig),
    names_to = "WSs_cond",
    values_to = "oblig"
  )

E2_FL_relate_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_relate, NoChoice_SIB_relate, Choice_CUZ_relate, Choice_SIB_relate),
    names_to = "WSs_cond",
    values_to = "relate"
  )

E2_FL_close_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_close, NoChoice_SIB_close, Choice_CUZ_close, Choice_SIB_close),
    names_to = "WSs_cond",
    values_to = "close"
  )

E2_FL_priorhelp_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_priorhelp, NoChoice_SIB_priorhelp, Choice_CUZ_priorhelp, Choice_SIB_priorhelp),
    names_to = "WSs_cond",
    values_to = "priorhelp"
  )

E2_FL_futurehelp_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_futurehelp, NoChoice_SIB_futurehelp, Choice_CUZ_futurehelp, Choice_SIB_futurehelp),
    names_to = "WSs_cond",
    values_to = "futurehelp"
  )

E2_FL_priorinteract_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_priorinteract, NoChoice_SIB_priorinteract, Choice_CUZ_priorinteract, Choice_SIB_priorinteract),
    names_to = "WSs_cond",
    values_to = "priorinteract"
  )

E2_FL_futureinteract_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_futureinteract, NoChoice_SIB_futureinteract, Choice_CUZ_futureinteract, Choice_SIB_futureinteract),
    names_to = "WSs_cond",
    values_to = "futureinteract"
  )

E2_FL_moral_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_moral, NoChoice_SIB_moral, Choice_CUZ_moral, Choice_SIB_moral),
    names_to = "WSs_cond",
    values_to = "moral"
  )


# Combine long FL datasets, select plotting variables, and create condition variable for each factor (Relation + Choice Context)
E2_FL_long <- cbind(E2_FL_cond_long, 
                    E2_FL_oblig_long, E2_FL_relate_long, E2_FL_close_long,
                    E2_FL_priorhelp_long, E2_FL_futurehelp_long,
                    E2_FL_priorinteract_long, E2_FL_futureinteract_long,
                    E2_FL_moral_long)

E2_FL_long <- E2_FL_long[, !duplicated(colnames(E2_FL_long))] %>% # get rid of duplicate columns
  select(ResponseId,
         Age:OUS_IH,
         BSs_cond,
         WSs_cond,
         Condition,
         oblig, relate, close, 
         priorhelp, futurehelp,
         priorinteract, futureinteract,
         moral) %>%
  mutate(Relation = case_when(
    WSs_cond == "FL_Dist_Scen" ~ "Distant",
    WSs_cond == "FL_Close_Scen" ~ "Close",
    WSs_cond == "FL_DistOClose_Scen" ~ "Distant",
    WSs_cond == "FL_CloseODist_Scen" ~ "Close")) %>%
  mutate(`Choice Context` = case_when(
    WSs_cond == "FL_Dist_Scen" ~ "No Choice",
    WSs_cond == "FL_Close_Scen" ~ "No Choice",
    WSs_cond == "FL_DistOClose_Scen" ~ "Choice",
    WSs_cond == "FL_CloseODist_Scen" ~ "Choice"))

# Reorder condition factor levels and convert participant IDs to factors
E2_FL_long$Relation <- as.factor(E2_FL_long$Relation)
E2_FL_long$Relation <- ordered(E2_FL_long$Relation, levels = c("Distant", "Close"))
E2_FL_long$`Choice Context` <- as.factor(E2_FL_long$`Choice Context`)
E2_FL_long$`Choice Context` <- ordered(E2_FL_long$`Choice Context`, levels = c("No Choice", "Choice"))
E2_FL_long$ResponseId <- as.factor(E2_FL_long$ResponseId)

# Combine into one dataset for later analyses
E2_all_long <- rbind(E2_SL_long, E2_FL_long)
# Reorder all_long BSs_cond
E2_all_long$BSs_cond <- as.factor(E2_all_long$BSs_cond)
E2_all_long$BSs_cond <- ordered(E2_all_long$BSs_cond, levels = c("Stranger-Like", "Friend-Like"))
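
Because the long-format datasets were assembled by column-binding several pivoted copies of the same wide data, a structural check (a sketch, not pre-registered) can confirm that every participant contributes exactly one row per Relation × Choice Context cell:

# each participant should contribute exactly four rows (one per within-subjects cell)
stopifnot(all(table(E2_all_long$ResponseId) == 4))
with(E2_all_long, table(Relation, `Choice Context`))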

Descriptive Statistics

Oblig

Stranger-Like

describeBy(E2_SL_long$oblig, list(E2_SL_long$Relation, E2_SL_long$`Choice Context`), mat = T)

Friend-Like

describeBy(E2_FL_long$oblig, list(E2_FL_long$Relation, E2_FL_long$`Choice Context`), mat = T)

Relate

Stranger-Like

describeBy(E2_SL_long$relate, list(E2_SL_long$Relation, E2_SL_long$`Choice Context`), mat = T)

Friend-Like

describeBy(E2_FL_long$relate, list(E2_FL_long$Relation, E2_FL_long$`Choice Context`), mat = T)

Close

Stranger-Like

describeBy(E2_SL_long$close, list(E2_SL_long$Relation, E2_SL_long$`Choice Context`), mat = T)

Friend-Like

describeBy(E2_FL_long$close, list(E2_FL_long$Relation, E2_FL_long$`Choice Context`), mat = T)

Prior Help

Stranger-Like

describeBy(E2_SL_long$priorhelp, list(E2_SL_long$Relation, E2_SL_long$`Choice Context`), mat = T)

Friend-Like

describeBy(E2_FL_long$priorhelp, list(E2_FL_long$Relation, E2_FL_long$`Choice Context`), mat = T)

Future Help

Stranger-Like

describeBy(E2_SL_long$futurehelp, list(E2_SL_long$Relation, E2_SL_long$`Choice Context`), mat = T)

Friend-Like

describeBy(E2_FL_long$futurehelp, list(E2_FL_long$Relation, E2_FL_long$`Choice Context`), mat = T)

Prior Interaction

Stranger-Like

describeBy(E2_SL_long$priorinteract, list(E2_SL_long$Relation, E2_SL_long$`Choice Context`), mat = T)

Friend-Like

describeBy(E2_FL_long$priorinteract, list(E2_FL_long$Relation, E2_FL_long$`Choice Context`), mat = T)

Future Interaction

Stranger-Like

describeBy(E2_SL_long$futureinteract, list(E2_SL_long$Relation, E2_SL_long$`Choice Context`), mat = T)

Friend-Like

describeBy(E2_FL_long$futureinteract, list(E2_FL_long$Relation, E2_FL_long$`Choice Context`), mat = T)

Moral

Stranger-Like

describeBy(E2_SL_long$moral, list(E2_SL_long$Relation, E2_SL_long$`Choice Context`), mat = T)

Friend-Like

describeBy(E2_FL_long$moral, list(E2_FL_long$Relation, E2_FL_long$`Choice Context`), mat = T)
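
The same cell-level summaries could also be computed in a single dplyr call (an equivalent, more compact alternative to the describeBy() calls above, shown only as a sketch for the obligation ratings):

# dplyr alternative: obligation means/SDs by sample, relation, and choice context
E2_all_long %>%
  group_by(BSs_cond, Relation, `Choice Context`) %>%
  summarise(n = n(), mean_oblig = mean(oblig), sd_oblig = sd(oblig), .groups = "drop")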

Mean Difference Plots

# Set dodge for plotting crossed factors
dodge <- position_dodge(width = 1)

Oblig

Stranger-Like

print(oblig_plot_SL <- ggplot(data = E2_SL_long, aes(x = `Choice Context`, y = oblig, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Obligation Strength") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))

Friend-Like

print(oblig_plot_FL <- ggplot(data = E2_FL_long, aes(x = `Choice Context`, y = oblig, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Obligation Strength") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))

Combined

print(oblig_plot_combined <- ggplot(data = E2_all_long, aes(x = `Choice Context`, y = oblig, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        facet_wrap(~BSs_cond, nrow = 2) +
        xlab("\nChoice Context") +
        ylab("Obligation Strength\n") +
        theme(axis.title.x = element_text(size = 18), 
              axis.title.y = element_text(size = 18),
              axis.text.x = element_text(color = "black", size = 16), 
              axis.text.y = element_text(color = "black", size = 16),
              strip.text.x = element_text(color = "black", size = 16),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 18),
              legend.text = element_text(color = "black", size = 16)))


ggsave("E2_oblig_plot.png")
Saving 14 x 9 in image
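
ggsave() without explicit dimensions saves at the current device size (14 × 9 in here, per the message above); if a reproducible size is desired, the dimensions could be passed explicitly (an optional tweak):

# optional: fix the saved size rather than relying on the device dimensions
ggsave("E2_oblig_plot.png", width = 14, height = 9)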

Relate

Stranger-Like

print(relate_plot_SL <- ggplot(data = E2_SL_long, aes(x = `Choice Context`, y = relate, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Relatedness") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))

Friend-Like

print(relate_plot_FL <- ggplot(data = E2_FL_long, aes(x = `Choice Context`, y = relate, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Relatedness") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))

Combined

print(relate_plot_combined <- ggplot(data = E2_all_long, aes(x = `Choice Context`, y = relate, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 2.5, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        facet_wrap(~BSs_cond, nrow = 2) +
        xlab("Choice Context") +
        ylab("Perceived Relatedness") + 
        theme(axis.title.x = element_text(size = 18), 
              axis.title.y = element_text(size = 18),
              axis.text.x = element_text(color = "black", size = 16), 
              axis.text.y = element_text(color = "black", size = 16),
              strip.text.x = element_text(color = "black", size = 16),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 18),
              legend.text = element_text(color = "black", size = 16)))


ggsave("E2_relate_plot.png")
Saving 14 x 9 in image

Close

Stranger-Like

print(close_plot_SL <- ggplot(data = E2_SL_long, aes(x = `Choice Context`, y = close, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Closeness") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))

Friend-Like

print(close_plot_FL <- ggplot(data = E2_FL_long, aes(x = `Choice Context`, y = close, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Closeness") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))

Combined

print(close_plot_combined <- ggplot(data = E2_all_long, aes(x = `Choice Context`, y = close, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 2.5, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        facet_wrap(~BSs_cond, nrow = 2) +
        xlab("Choice Context") +
        ylab("Perceived Closeness") + 
        theme(axis.title.x = element_text(size = 18), 
              axis.title.y = element_text(size = 18),
              axis.text.x = element_text(color = "black", size = 16), 
              axis.text.y = element_text(color = "black", size = 16),
              strip.text.x = element_text(color = "black", size = 16),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 18),
              legend.text = element_text(color = "black", size = 16)))


ggsave("E2_close_plot.png")
Saving 14 x 9 in image

Prior Help

Stranger-Like

print(priorhelp_plot_SL <- ggplot(data = E2_SL_long, aes(x = `Choice Context`, y = priorhelp, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Prior Help") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))

Friend-Like

print(priorhelp_plot_FL <- ggplot(data = E2_FL_long, aes(x = `Choice Context`, y = priorhelp, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Prior Help") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))

Combined

print(priorhelp_plot_combined <- ggplot(data = E2_all_long, aes(x = `Choice Context`, y = priorhelp, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 2.5, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        facet_wrap(~BSs_cond, nrow = 2) +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Prior Help") + 
        theme(axis.title.x = element_text(size = 18), 
              axis.title.y = element_text(size = 18),
              axis.text.x = element_text(color = "black", size = 16), 
              axis.text.y = element_text(color = "black", size = 16),
              strip.text.x = element_text(color = "black", size = 16),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 18),
              legend.text = element_text(color = "black", size = 16)))


ggsave("E2_priorhelp_plot.png")
Saving 14 x 9 in image

Future Help

Stranger-Like

print(futurehelp_plot_SL <- ggplot(data = E2_SL_long, aes(x = `Choice Context`, y = futurehelp, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Future Help") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))

Friend-Like

print(futurehelp_plot_FL <- ggplot(data = E2_FL_long, aes(x = `Choice Context`, y = futurehelp, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Future Help") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))

Combined

print(futurehelp_plot_combined <- ggplot(data = E2_all_long, aes(x = `Choice Context`, y = futurehelp, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 2.5, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        facet_wrap(~BSs_cond, nrow = 2) +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Future Help") + 
        theme(axis.title.x = element_text(size = 18), 
              axis.title.y = element_text(size = 18),
              axis.text.x = element_text(color = "black", size = 16), 
              axis.text.y = element_text(color = "black", size = 16),
              strip.text.x = element_text(color = "black", size = 16),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 18),
              legend.text = element_text(color = "black", size = 16)))


ggsave("E2_futurehelp_plot.png")
Saving 14 x 9 in image

Prior Interactions

Stranger-Like

print(priorinteract_plot_SL <- ggplot(data = E2_SL_long, aes(x = `Choice Context`, y = priorinteract, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Prior Interactions") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))

Friend-Like

print(priorinteract_plot_FL <- ggplot(data = E2_FL_long, aes(x = `Choice Context`, y = priorinteract, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Prior Interactions") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))

Combined

print(priorinteract_plot_combined <- ggplot(data = E2_all_long, aes(x = `Choice Context`, y = priorinteract, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 2.5, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        facet_wrap(~BSs_cond, nrow = 2) +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Prior Interactions") + 
        theme(axis.title.x = element_text(size = 18), 
              axis.title.y = element_text(size = 18),
              axis.text.x = element_text(color = "black", size = 16), 
              axis.text.y = element_text(color = "black", size = 16),
              strip.text.x = element_text(color = "black", size = 16),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 18),
              legend.text = element_text(color = "black", size = 16)))


ggsave("E2_priorinteract_plot.png")
Saving 14 x 9 in image

Future Interactions

Stranger-Like

print(futureinteract_plot_SL <- ggplot(data = E2_SL_long, aes(x = `Choice Context`, y = futureinteract, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Future Interactions") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))

Friend-Like

print(futureinteract_plot_FL <- ggplot(data = E2_FL_long, aes(x = `Choice Context`, y = futureinteract, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Future Interactions") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))

Combined

print(futureinteract_plot_combined <- ggplot(data = E2_all_long, aes(x = `Choice Context`, y = futureinteract, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 2.5, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        facet_wrap(~BSs_cond, nrow = 2) +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Future Interactions") + 
        theme(axis.title.x = element_text(size = 18), 
              axis.title.y = element_text(size = 18),
              axis.text.x = element_text(color = "black", size = 16), 
              axis.text.y = element_text(color = "black", size = 16),
              strip.text.x = element_text(color = "black", size = 16),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 18),
              legend.text = element_text(color = "black", size = 16)))


ggsave("E2_futureinteract_plot.png")
Saving 14 x 9 in image

Moral

Stranger-Like

print(moral_plot_SL <- ggplot(data = E2_SL_long, aes(x = `Choice Context`, y = moral, fill = Relation)) +
        geom_hline(yintercept = 50, linetype = "dashed", color = "black") +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Moral Character") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))

Friend-Like

print(moral_plot_FL <- ggplot(data = E2_FL_long, aes(x = `Choice Context`, y = moral, fill = Relation)) +
        geom_hline(yintercept = 50, linetype = "dashed", color = "black") +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Moral Character") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))

Combined

print(moral_plot_combined <- ggplot(data = E2_all_long, aes(x = `Choice Context`, y = moral, fill = Relation)) +
        geom_hline(yintercept = 50, linetype = "dashed", color = "black") +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        facet_wrap(~BSs_cond, nrow = 2) +
        xlab("\nChoice Context") +
        ylab("Moral Character\n") + 
        theme(axis.title.x = element_text(size = 18), 
              axis.title.y = element_text(size = 18),
              axis.text.x = element_text(color = "black", size = 16), 
              axis.text.y = element_text(color = "black", size = 16),
              strip.text.x = element_text(color = "black", size = 16),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 18),
              legend.text = element_text(color = "black", size = 16)))


ggsave("E2_moral_plot.png")
Saving 14 x 9 in image

Mean Difference Tests


See our pre-registration (INSERT LINK HERE) for our predictions related to obligation judgments and moral character judgments.
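For each between-subjects condition (Stranger-Like vs. Friend-Like) and choice context (No Choice vs. Choice), the chunks below report the same battery of statistics: a paired t-test comparing the two Relation levels, Cohen's dz and d-av effect sizes, the Pearson correlation between the two paired measures (the CUZ and SIB variables in the wide-format data), and a histogram of the difference scores.

The convenience wrapper below is a minimal sketch, not part of the original analysis; it simply bundles the four calls that are repeated for every cell. The function name and arguments are hypothetical, and it assumes the long-format data (E2_all_long) and the wide-format "clean" datasets (e.g., E2_SL_clean) created earlier in this document.

# hedged convenience wrapper (illustrative only); mirrors the per-cell calls reported below
paired_cell_tests <- function(dv, bs_cond, context, wide_data, cuz_var, sib_var) {
  cell_data <- E2_all_long %>%
    filter(BSs_cond == bs_cond, `Choice Context` == context) %>%
    droplevels()
  fml_t <- as.formula(paste(dv, "~ Relation"))
  fml_d <- as.formula(paste(dv, "~ Relation | Subject(ResponseId)"))
  list(
    t_test = t.test(fml_t, data = cell_data, paired = TRUE),
    d_z    = effsize::cohen.d(fml_d, data = cell_data, paired = TRUE, within = FALSE),
    d_av   = effsize::cohen.d(fml_d, data = cell_data, paired = TRUE, within = TRUE),
    r_xy   = cor_test(data = wide_data, cuz_var, sib_var, method = "Pearson")
  )
}

# example call mirroring the first cell reported below (illustrative only)
# paired_cell_tests("oblig", "Stranger-Like", "No Choice",
#                   E2_SL_clean, "NoChoice_CUZ_oblig", "NoChoice_SIB_oblig")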


Oblig

Stranger-Like

No Choice

# returns t-test results
t.test(oblig ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  oblig by Relation
t = -4.0679, df = 353, p-value = 5.855e-05
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -12.152705  -4.231476
sample estimates:
mean of the differences 
               -8.19209 
# returns dz effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.2162072 (small)
95 percent confidence interval:
     lower      upper 
-0.3217695 -0.1106450 
# returns d-av effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.2385614 (small)
95 percent confidence interval:
     lower      upper 
-0.3553268 -0.1217959 
# returns correlation between variables
cor_test(data = E2_SL_clean, "NoChoice_CUZ_oblig", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1         |         Parameter2 |    r |       95% CI | t(352) |         p
----------------------------------------------------------------------------------
NoChoice_CUZ_oblig | NoChoice_SIB_oblig | 0.39 | [0.30, 0.48] |   7.98 | < .001***

Observations: 354
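A brief note on how these three numbers fit together (illustrative, not part of the original analysis): dz standardizes the mean difference by the SD of the difference scores, so for a paired t-test dz = t / sqrt(n), whereas d-av standardizes by the average of the two raw-score SDs. The correlation between the paired measures links the two: a higher correlation shrinks the SD of the differences and inflates dz relative to d-av.

# quick sanity checks against the output above (illustrative only)
-4.0679 / sqrt(354)               # dz = t / sqrt(n), approx. -0.216, matching the dz estimate above
-0.2385614 / sqrt(2 * (1 - 0.39)) # dz is approx. d-av / sqrt(2 * (1 - r)) when the two SDs are similar; approx. -0.216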
# returns histogram of the difference-score variable
print(hist(E2_SL_clean$NoChoice_CUZminusSIB_oblig, breaks = 100))
$breaks
  [1] -100  -98  -96  -94  -92  -90  -88  -86  -84  -82  -80  -78  -76  -74  -72  -70  -68  -66  -64
 [20]  -62  -60  -58  -56  -54  -52  -50  -48  -46  -44  -42  -40  -38  -36  -34  -32  -30  -28  -26
 [39]  -24  -22  -20  -18  -16  -14  -12  -10   -8   -6   -4   -2    0    2    4    6    8   10   12
 [58]   14   16   18   20   22   24   26   28   30   32   34   36   38   40   42   44   46   48   50
 [77]   52   54   56   58   60   62   64   66   68   70   72   74   76   78   80   82   84   86   88
 [96]   90   92   94   96   98  100

$counts
  [1]  8  0  1  2  1  1  2  2  1  1  0  2  1  0  3  2  1  1  1  2  3  3  3  7  6  5  1  1  4  7  0  1
 [33]  4  6  7  6  1  5  5  7  8  5  9 12  5  4  9  3 12 49  5  6  6  8  6  3  5  5  1  6  7 12  2  5
 [65]  3  2  1  4  0  2  1  3  2  2  6  2  4  1  0  1  1  3  2  1  0  0  0  1  0  0  0  2  0  1  0  0
 [97]  1  0  0  1

$density
  [1] 0.011299435 0.000000000 0.001412429 0.002824859 0.001412429 0.001412429 0.002824859 0.002824859
  [9] 0.001412429 0.001412429 0.000000000 0.002824859 0.001412429 0.000000000 0.004237288 0.002824859
 [17] 0.001412429 0.001412429 0.001412429 0.002824859 0.004237288 0.004237288 0.004237288 0.009887006
 [25] 0.008474576 0.007062147 0.001412429 0.001412429 0.005649718 0.009887006 0.000000000 0.001412429
 [33] 0.005649718 0.008474576 0.009887006 0.008474576 0.001412429 0.007062147 0.007062147 0.009887006
 [41] 0.011299435 0.007062147 0.012711864 0.016949153 0.007062147 0.005649718 0.012711864 0.004237288
 [49] 0.016949153 0.069209040 0.007062147 0.008474576 0.008474576 0.011299435 0.008474576 0.004237288
 [57] 0.007062147 0.007062147 0.001412429 0.008474576 0.009887006 0.016949153 0.002824859 0.007062147
 [65] 0.004237288 0.002824859 0.001412429 0.005649718 0.000000000 0.002824859 0.001412429 0.004237288
 [73] 0.002824859 0.002824859 0.008474576 0.002824859 0.005649718 0.001412429 0.000000000 0.001412429
 [81] 0.001412429 0.004237288 0.002824859 0.001412429 0.000000000 0.000000000 0.000000000 0.001412429
 [89] 0.000000000 0.000000000 0.000000000 0.002824859 0.000000000 0.001412429 0.000000000 0.000000000
 [97] 0.001412429 0.000000000 0.000000000 0.001412429

$mids
  [1] -99 -97 -95 -93 -91 -89 -87 -85 -83 -81 -79 -77 -75 -73 -71 -69 -67 -65 -63 -61 -59 -57 -55 -53
 [25] -51 -49 -47 -45 -43 -41 -39 -37 -35 -33 -31 -29 -27 -25 -23 -21 -19 -17 -15 -13 -11  -9  -7  -5
 [49]  -3  -1   1   3   5   7   9  11  13  15  17  19  21  23  25  27  29  31  33  35  37  39  41  43
 [73]  45  47  49  51  53  55  57  59  61  63  65  67  69  71  73  75  77  79  81  83  85  87  89  91
 [97]  93  95  97  99

$xname
[1] "E2_SL_clean$NoChoice_CUZminusSIB_oblig"

$equidist
[1] TRUE

attr(,"class")
[1] "histogram"

Choice

# returns t-test results
t.test(oblig ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  oblig by Relation
t = -12.341, df = 353, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -9.233971 -6.695407
sample estimates:
mean of the differences 
              -7.964689 
# returns dz effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.6559167 (medium)
95 percent confidence interval:
     lower      upper 
-0.7709438 -0.5408897 
# returns d-av effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.2703635 (small)
95 percent confidence interval:
     lower      upper 
-0.3141546 -0.2265724 
# returns correlation between variables
cor_test(data = E2_SL_clean, "Choice_CUZ_oblig", "Choice_SIB_oblig", method = "Pearson")
Parameter1       |       Parameter2 |    r |       95% CI | t(352) |         p
------------------------------------------------------------------------------
Choice_CUZ_oblig | Choice_SIB_oblig | 0.92 | [0.90, 0.93] |  42.56 | < .001***

Observations: 354
# returns histogram of the difference-score variable
print(hist(E2_SL_clean$Choice_CUZminusSIB_oblig, breaks = 100))
$breaks
 [1] -65 -64 -63 -62 -61 -60 -59 -58 -57 -56 -55 -54 -53 -52 -51 -50 -49 -48 -47 -46 -45 -44 -43 -42
[25] -41 -40 -39 -38 -37 -36 -35 -34 -33 -32 -31 -30 -29 -28 -27 -26 -25 -24 -23 -22 -21 -20 -19 -18
[49] -17 -16 -15 -14 -13 -12 -11 -10  -9  -8  -7  -6  -5  -4  -3  -2  -1   0   1   2   3   4   5   6
[73]   7   8   9  10  11  12  13  14  15  16  17  18  19  20  21  22  23

$counts
 [1]  1  0  0  0  0  0  0  0  1  2  0  0  0  0  0  0  0  0  0  1  1  0  0  2  1  2  2  1  2  3  1  1  0
[34]  1  4  0  1  3  0  1  9  1  2  4  3  8  4  2  2  6  5  8  9 13  7  5 14 11 14  9  9  9 17 21 74 29
[67] 10  4  4  0  1  0  1  0  1  2  1  2  0  0  0  1  0  0  0  0  0  1

$density
 [1] 0.002824859 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000
 [9] 0.002824859 0.005649718 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000
[17] 0.000000000 0.000000000 0.000000000 0.002824859 0.002824859 0.000000000 0.000000000 0.005649718
[25] 0.002824859 0.005649718 0.005649718 0.002824859 0.005649718 0.008474576 0.002824859 0.002824859
[33] 0.000000000 0.002824859 0.011299435 0.000000000 0.002824859 0.008474576 0.000000000 0.002824859
[41] 0.025423729 0.002824859 0.005649718 0.011299435 0.008474576 0.022598870 0.011299435 0.005649718
[49] 0.005649718 0.016949153 0.014124294 0.022598870 0.025423729 0.036723164 0.019774011 0.014124294
[57] 0.039548023 0.031073446 0.039548023 0.025423729 0.025423729 0.025423729 0.048022599 0.059322034
[65] 0.209039548 0.081920904 0.028248588 0.011299435 0.011299435 0.000000000 0.002824859 0.000000000
[73] 0.002824859 0.000000000 0.002824859 0.005649718 0.002824859 0.005649718 0.000000000 0.000000000
[81] 0.000000000 0.002824859 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.002824859

$mids
 [1] -64.5 -63.5 -62.5 -61.5 -60.5 -59.5 -58.5 -57.5 -56.5 -55.5 -54.5 -53.5 -52.5 -51.5 -50.5 -49.5
[17] -48.5 -47.5 -46.5 -45.5 -44.5 -43.5 -42.5 -41.5 -40.5 -39.5 -38.5 -37.5 -36.5 -35.5 -34.5 -33.5
[33] -32.5 -31.5 -30.5 -29.5 -28.5 -27.5 -26.5 -25.5 -24.5 -23.5 -22.5 -21.5 -20.5 -19.5 -18.5 -17.5
[49] -16.5 -15.5 -14.5 -13.5 -12.5 -11.5 -10.5  -9.5  -8.5  -7.5  -6.5  -5.5  -4.5  -3.5  -2.5  -1.5
[65]  -0.5   0.5   1.5   2.5   3.5   4.5   5.5   6.5   7.5   8.5   9.5  10.5  11.5  12.5  13.5  14.5
[81]  15.5  16.5  17.5  18.5  19.5  20.5  21.5  22.5

$xname
[1] "E2_SL_clean$Choice_CUZminusSIB_oblig"

$equidist
[1] TRUE

attr(,"class")
[1] "histogram"

Friend-Like

No Choice

# returns t-test results
t.test(oblig ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  oblig by Relation
t = -2.8839, df = 344, p-value = 0.004175
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -7.825057 -1.479290
sample estimates:
mean of the differences 
              -4.652174 
# returns dz effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.1552641 (negligible)
95 percent confidence interval:
      lower       upper 
-0.26160612 -0.04892206 
# returns d-av effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.1512148 (negligible)
95 percent confidence interval:
      lower       upper 
-0.25475162 -0.04767791 
# returns correlation between variables
cor_test(data = E2_FL_clean, "NoChoice_CUZ_oblig", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1         |         Parameter2 |    r |       95% CI | t(343) |         p
----------------------------------------------------------------------------------
NoChoice_CUZ_oblig | NoChoice_SIB_oblig | 0.53 | [0.44, 0.60] |  11.45 | < .001***

Observations: 345
# returns histogram of the difference-score variable
print(hist(E2_FL_clean$NoChoice_CUZminusSIB_oblig, breaks = 100))
$breaks
  [1] -100  -98  -96  -94  -92  -90  -88  -86  -84  -82  -80  -78  -76  -74  -72  -70  -68  -66  -64
 [20]  -62  -60  -58  -56  -54  -52  -50  -48  -46  -44  -42  -40  -38  -36  -34  -32  -30  -28  -26
 [39]  -24  -22  -20  -18  -16  -14  -12  -10   -8   -6   -4   -2    0    2    4    6    8   10   12
 [58]   14   16   18   20   22   24   26   28   30   32   34   36   38   40   42   44   46   48   50
 [77]   52   54   56   58   60   62   64   66   68   70   72   74   76   78   80   82   84   86   88
 [96]   90   92   94   96   98  100

$counts
  [1]  5  1  1  0  0  0  1  0  0  1  0  1  2  0  1  0  2  0  0  1  0  2  1  3  8  5  5  1  2  0  1  2
 [33]  1  6  1  5  1  6  6  6  9  6  4  5 10  6  6  8 17 73 13  7  6  7  4  9  8  3  6  7  1  6  6  2
 [65]  8  3  2  3  1  3  2  2  1  2  3  3  2  0  0  0  0  1  0  0  0  0  0  0  0  0  0  0  0  0  0  0
 [97]  0  0  0  2

$density
  [1] 0.007246377 0.001449275 0.001449275 0.000000000 0.000000000 0.000000000 0.001449275 0.000000000
  [9] 0.000000000 0.001449275 0.000000000 0.001449275 0.002898551 0.000000000 0.001449275 0.000000000
 [17] 0.002898551 0.000000000 0.000000000 0.001449275 0.000000000 0.002898551 0.001449275 0.004347826
 [25] 0.011594203 0.007246377 0.007246377 0.001449275 0.002898551 0.000000000 0.001449275 0.002898551
 [33] 0.001449275 0.008695652 0.001449275 0.007246377 0.001449275 0.008695652 0.008695652 0.008695652
 [41] 0.013043478 0.008695652 0.005797101 0.007246377 0.014492754 0.008695652 0.008695652 0.011594203
 [49] 0.024637681 0.105797101 0.018840580 0.010144928 0.008695652 0.010144928 0.005797101 0.013043478
 [57] 0.011594203 0.004347826 0.008695652 0.010144928 0.001449275 0.008695652 0.008695652 0.002898551
 [65] 0.011594203 0.004347826 0.002898551 0.004347826 0.001449275 0.004347826 0.002898551 0.002898551
 [73] 0.001449275 0.002898551 0.004347826 0.004347826 0.002898551 0.000000000 0.000000000 0.000000000
 [81] 0.000000000 0.001449275 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000
 [89] 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000
 [97] 0.000000000 0.000000000 0.000000000 0.002898551

$mids
  [1] -99 -97 -95 -93 -91 -89 -87 -85 -83 -81 -79 -77 -75 -73 -71 -69 -67 -65 -63 -61 -59 -57 -55 -53
 [25] -51 -49 -47 -45 -43 -41 -39 -37 -35 -33 -31 -29 -27 -25 -23 -21 -19 -17 -15 -13 -11  -9  -7  -5
 [49]  -3  -1   1   3   5   7   9  11  13  15  17  19  21  23  25  27  29  31  33  35  37  39  41  43
 [73]  45  47  49  51  53  55  57  59  61  63  65  67  69  71  73  75  77  79  81  83  85  87  89  91
 [97]  93  95  97  99

$xname
[1] "E2_FL_clean$NoChoice_CUZminusSIB_oblig"

$equidist
[1] TRUE

attr(,"class")
[1] "histogram"

Choice

# returns t-test results
t.test(oblig ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  oblig by Relation
t = -10.577, df = 344, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -6.127427 -4.205907
sample estimates:
mean of the differences 
              -5.166667 
# returns dz effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.569462 (medium)
95 percent confidence interval:
     lower      upper 
-0.6834169 -0.4555071 
# returns d-av effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.187572 (negligible)
95 percent confidence interval:
     lower      upper 
-0.2226951 -0.1524488 
# returns correlation between variables
cor_test(data = E2_FL_clean, "Choice_CUZ_oblig", "Choice_SIB_oblig", method = "Pearson")
Parameter1       |       Parameter2 |    r |       95% CI | t(343) |         p
------------------------------------------------------------------------------
Choice_CUZ_oblig | Choice_SIB_oblig | 0.95 | [0.93, 0.96] |  53.91 | < .001***

Observations: 345
# returns histogram of the difference-score variable
print(hist(E2_FL_clean$Choice_CUZminusSIB_oblig, breaks = 100))
$breaks
 [1] -58 -57 -56 -55 -54 -53 -52 -51 -50 -49 -48 -47 -46 -45 -44 -43 -42 -41 -40 -39 -38 -37 -36 -35
[25] -34 -33 -32 -31 -30 -29 -28 -27 -26 -25 -24 -23 -22 -21 -20 -19 -18 -17 -16 -15 -14 -13 -12 -11
[49] -10  -9  -8  -7  -6  -5  -4  -3  -2  -1   0   1   2   3   4   5   6   7   8   9  10  11  12  13
[73]  14  15  16  17  18

$counts
 [1]  1  0  0  0  0  0  0  0  1  0  0  0  0  1  1  0  0  0  0  1  1  0  0  0  1  0  1  2  1  0  1  0  1
[34]  3  5  2  0  2  2  0  4  2  7  7  5  4 11  7  7  5 14 12 12  5 10 22 28 98 40  5  7  0  2  0  0  0
[67]  0  1  0  0  2  0  0  0  0  1

$density
 [1] 0.002898551 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000
 [9] 0.002898551 0.000000000 0.000000000 0.000000000 0.000000000 0.002898551 0.002898551 0.000000000
[17] 0.000000000 0.000000000 0.000000000 0.002898551 0.002898551 0.000000000 0.000000000 0.000000000
[25] 0.002898551 0.000000000 0.002898551 0.005797101 0.002898551 0.000000000 0.002898551 0.000000000
[33] 0.002898551 0.008695652 0.014492754 0.005797101 0.000000000 0.005797101 0.005797101 0.000000000
[41] 0.011594203 0.005797101 0.020289855 0.020289855 0.014492754 0.011594203 0.031884058 0.020289855
[49] 0.020289855 0.014492754 0.040579710 0.034782609 0.034782609 0.014492754 0.028985507 0.063768116
[57] 0.081159420 0.284057971 0.115942029 0.014492754 0.020289855 0.000000000 0.005797101 0.000000000
[65] 0.000000000 0.000000000 0.000000000 0.002898551 0.000000000 0.000000000 0.005797101 0.000000000
[73] 0.000000000 0.000000000 0.000000000 0.002898551

$mids
 [1] -57.5 -56.5 -55.5 -54.5 -53.5 -52.5 -51.5 -50.5 -49.5 -48.5 -47.5 -46.5 -45.5 -44.5 -43.5 -42.5
[17] -41.5 -40.5 -39.5 -38.5 -37.5 -36.5 -35.5 -34.5 -33.5 -32.5 -31.5 -30.5 -29.5 -28.5 -27.5 -26.5
[33] -25.5 -24.5 -23.5 -22.5 -21.5 -20.5 -19.5 -18.5 -17.5 -16.5 -15.5 -14.5 -13.5 -12.5 -11.5 -10.5
[49]  -9.5  -8.5  -7.5  -6.5  -5.5  -4.5  -3.5  -2.5  -1.5  -0.5   0.5   1.5   2.5   3.5   4.5   5.5
[65]   6.5   7.5   8.5   9.5  10.5  11.5  12.5  13.5  14.5  15.5  16.5  17.5

$xname
[1] "E2_FL_clean$Choice_CUZminusSIB_oblig"

$equidist
[1] TRUE

attr(,"class")
[1] "histogram"

Relate

Stranger-Like

No Choice

# returns t-test results
t.test(relate ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  relate by Relation
t = -34.659, df = 353, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -47.56245 -42.45450
sample estimates:
mean of the differences 
              -45.00847 
# returns dz effect size and 95% CIs
effsize::cohen.d(relate ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -1.842109 (large)
95 percent confidence interval:
    lower     upper 
-2.013468 -1.670750 
# returns d-av effect size and 95% CIs
effsize::cohen.d(relate ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -1.999068 (large)
95 percent confidence interval:
    lower     upper 
-2.195146 -1.802990 
# returns correlation between variables
cor_test(data = E2_SL_clean, "NoChoice_CUZ_relate", "NoChoice_SIB_relate", method = "Pearson")
Parameter1          |          Parameter2 |    r |       95% CI | t(352) |         p
------------------------------------------------------------------------------------
NoChoice_CUZ_relate | NoChoice_SIB_relate | 0.41 | [0.32, 0.49] |   8.46 | < .001***

Observations: 354
# returns histogram of the difference-score variable
print(hist(E2_SL_clean$NoChoice_CUZminusSIB_relate, breaks = 100))
$breaks
  [1] -94 -93 -92 -91 -90 -89 -88 -87 -86 -85 -84 -83 -82 -81 -80 -79 -78 -77 -76 -75 -74 -73 -72 -71
 [25] -70 -69 -68 -67 -66 -65 -64 -63 -62 -61 -60 -59 -58 -57 -56 -55 -54 -53 -52 -51 -50 -49 -48 -47
 [49] -46 -45 -44 -43 -42 -41 -40 -39 -38 -37 -36 -35 -34 -33 -32 -31 -30 -29 -28 -27 -26 -25 -24 -23
 [73] -22 -21 -20 -19 -18 -17 -16 -15 -14 -13 -12 -11 -10  -9  -8  -7  -6  -5  -4  -3  -2  -1   0   1
 [97]   2   3   4   5   6   7   8   9  10

$counts
  [1]  2  1  1  1  0  1  1  1  3  0  2  5  5  2  7  5  3 12 11  9  9  5  7  3  0  2  1  3  0  3  2  0
 [33]  1  1  0  3  6  3  1  0  2  1  8 16  9  5  1  4  3  2  2  2  1  4  9  6  2  6  3  2  5  3  5  9
 [65]  9 10 11 10 12 11  5  6  2  3  1  3  2  3  2  1  5  1  1  2  2  0  1  2  1  1  1  1  2  4  0  2
 [97]  2  0  0  0  0  2  0  1

$density
  [1] 0.005649718 0.002824859 0.002824859 0.002824859 0.000000000 0.002824859 0.002824859 0.002824859
  [9] 0.008474576 0.000000000 0.005649718 0.014124294 0.014124294 0.005649718 0.019774011 0.014124294
 [17] 0.008474576 0.033898305 0.031073446 0.025423729 0.025423729 0.014124294 0.019774011 0.008474576
 [25] 0.000000000 0.005649718 0.002824859 0.008474576 0.000000000 0.008474576 0.005649718 0.000000000
 [33] 0.002824859 0.002824859 0.000000000 0.008474576 0.016949153 0.008474576 0.002824859 0.000000000
 [41] 0.005649718 0.002824859 0.022598870 0.045197740 0.025423729 0.014124294 0.002824859 0.011299435
 [49] 0.008474576 0.005649718 0.005649718 0.005649718 0.002824859 0.011299435 0.025423729 0.016949153
 [57] 0.005649718 0.016949153 0.008474576 0.005649718 0.014124294 0.008474576 0.014124294 0.025423729
 [65] 0.025423729 0.028248588 0.031073446 0.028248588 0.033898305 0.031073446 0.014124294 0.016949153
 [73] 0.005649718 0.008474576 0.002824859 0.008474576 0.005649718 0.008474576 0.005649718 0.002824859
 [81] 0.014124294 0.002824859 0.002824859 0.005649718 0.005649718 0.000000000 0.002824859 0.005649718
 [89] 0.002824859 0.002824859 0.002824859 0.002824859 0.005649718 0.011299435 0.000000000 0.005649718
 [97] 0.005649718 0.000000000 0.000000000 0.000000000 0.000000000 0.005649718 0.000000000 0.002824859

$mids
  [1] -93.5 -92.5 -91.5 -90.5 -89.5 -88.5 -87.5 -86.5 -85.5 -84.5 -83.5 -82.5 -81.5 -80.5 -79.5 -78.5
 [17] -77.5 -76.5 -75.5 -74.5 -73.5 -72.5 -71.5 -70.5 -69.5 -68.5 -67.5 -66.5 -65.5 -64.5 -63.5 -62.5
 [33] -61.5 -60.5 -59.5 -58.5 -57.5 -56.5 -55.5 -54.5 -53.5 -52.5 -51.5 -50.5 -49.5 -48.5 -47.5 -46.5
 [49] -45.5 -44.5 -43.5 -42.5 -41.5 -40.5 -39.5 -38.5 -37.5 -36.5 -35.5 -34.5 -33.5 -32.5 -31.5 -30.5
 [65] -29.5 -28.5 -27.5 -26.5 -25.5 -24.5 -23.5 -22.5 -21.5 -20.5 -19.5 -18.5 -17.5 -16.5 -15.5 -14.5
 [81] -13.5 -12.5 -11.5 -10.5  -9.5  -8.5  -7.5  -6.5  -5.5  -4.5  -3.5  -2.5  -1.5  -0.5   0.5   1.5
 [97]   2.5   3.5   4.5   5.5   6.5   7.5   8.5   9.5

$xname
[1] "E2_SL_clean$NoChoice_CUZminusSIB_relate"

$equidist
[1] TRUE

attr(,"class")
[1] "histogram"

Choice

# returns t-test results
t.test(relate ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  relate by Relation
t = -35.606, df = 353, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -46.43032 -41.56968
sample estimates:
mean of the differences 
                    -44 
# returns dz effect size and 95% CIs
effsize::cohen.d(relate ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -1.892461 (large)
95 percent confidence interval:
    lower     upper 
-2.066782 -1.718140 
# returns d-av effect size and 95% CIs
effsize::cohen.d(relate ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -1.980539 (large)
95 percent confidence interval:
    lower     upper 
-2.168465 -1.792613 
# returns correlation between variables
cor_test(data = E2_SL_clean, "Choice_CUZ_relate", "Choice_SIB_relate", method = "Pearson")
Parameter1        |        Parameter2 |    r |       95% CI | t(352) |         p
--------------------------------------------------------------------------------
Choice_CUZ_relate | Choice_SIB_relate | 0.45 | [0.37, 0.53] |   9.52 | < .001***

Observations: 354
# returns histogram of the difference-score variable
print(hist(E2_SL_clean$Choice_CUZminusSIB_relate, breaks = 100))
$breaks
  [1] -95 -94 -93 -92 -91 -90 -89 -88 -87 -86 -85 -84 -83 -82 -81 -80 -79 -78 -77 -76 -75 -74 -73 -72
 [25] -71 -70 -69 -68 -67 -66 -65 -64 -63 -62 -61 -60 -59 -58 -57 -56 -55 -54 -53 -52 -51 -50 -49 -48
 [49] -47 -46 -45 -44 -43 -42 -41 -40 -39 -38 -37 -36 -35 -34 -33 -32 -31 -30 -29 -28 -27 -26 -25 -24
 [73] -23 -22 -21 -20 -19 -18 -17 -16 -15 -14 -13 -12 -11 -10  -9  -8  -7  -6  -5  -4  -3  -2  -1   0
 [97]   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20  21  22  23  24
[121]  25  26  27  28  29  30

$counts
  [1]  1  0  0  0  3  0  0  1  1  2  0  1  1  2  3  6  5  5  3 17  9  6  6  4  4  0  3  3  0  2  1  4
 [33]  5  1  1  3  0  2  4  1  4  2  3  8 23  4  4  2  1  5  3  1  2  2  6  5  4  5  4  4  7  4  5  8
 [65]  4 10 12  6  9 10 10 13  2  8  1  3  4  4  3  4  2  1  1  2  1  1  2  3  0  2  0  1  0  1  2  1
 [97]  1  0  0  1  0  0  0  0  0  0  0  0  0  1  1  0  0  0  0  0  0  0  0  0  0  0  0  0  1

$density
  [1] 0.002824859 0.000000000 0.000000000 0.000000000 0.008474576 0.000000000 0.000000000 0.002824859
  [9] 0.002824859 0.005649718 0.000000000 0.002824859 0.002824859 0.005649718 0.008474576 0.016949153
 [17] 0.014124294 0.014124294 0.008474576 0.048022599 0.025423729 0.016949153 0.016949153 0.011299435
 [25] 0.011299435 0.000000000 0.008474576 0.008474576 0.000000000 0.005649718 0.002824859 0.011299435
 [33] 0.014124294 0.002824859 0.002824859 0.008474576 0.000000000 0.005649718 0.011299435 0.002824859
 [41] 0.011299435 0.005649718 0.008474576 0.022598870 0.064971751 0.011299435 0.011299435 0.005649718
 [49] 0.002824859 0.014124294 0.008474576 0.002824859 0.005649718 0.005649718 0.016949153 0.014124294
 [57] 0.011299435 0.014124294 0.011299435 0.011299435 0.019774011 0.011299435 0.014124294 0.022598870
 [65] 0.011299435 0.028248588 0.033898305 0.016949153 0.025423729 0.028248588 0.028248588 0.036723164
 [73] 0.005649718 0.022598870 0.002824859 0.008474576 0.011299435 0.011299435 0.008474576 0.011299435
 [81] 0.005649718 0.002824859 0.002824859 0.005649718 0.002824859 0.002824859 0.005649718 0.008474576
 [89] 0.000000000 0.005649718 0.000000000 0.002824859 0.000000000 0.002824859 0.005649718 0.002824859
 [97] 0.002824859 0.000000000 0.000000000 0.002824859 0.000000000 0.000000000 0.000000000 0.000000000
[105] 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.002824859 0.002824859 0.000000000
[113] 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000
[121] 0.000000000 0.000000000 0.000000000 0.000000000 0.002824859

$mids
  [1] -94.5 -93.5 -92.5 -91.5 -90.5 -89.5 -88.5 -87.5 -86.5 -85.5 -84.5 -83.5 -82.5 -81.5 -80.5 -79.5
 [17] -78.5 -77.5 -76.5 -75.5 -74.5 -73.5 -72.5 -71.5 -70.5 -69.5 -68.5 -67.5 -66.5 -65.5 -64.5 -63.5
 [33] -62.5 -61.5 -60.5 -59.5 -58.5 -57.5 -56.5 -55.5 -54.5 -53.5 -52.5 -51.5 -50.5 -49.5 -48.5 -47.5
 [49] -46.5 -45.5 -44.5 -43.5 -42.5 -41.5 -40.5 -39.5 -38.5 -37.5 -36.5 -35.5 -34.5 -33.5 -32.5 -31.5
 [65] -30.5 -29.5 -28.5 -27.5 -26.5 -25.5 -24.5 -23.5 -22.5 -21.5 -20.5 -19.5 -18.5 -17.5 -16.5 -15.5
 [81] -14.5 -13.5 -12.5 -11.5 -10.5  -9.5  -8.5  -7.5  -6.5  -5.5  -4.5  -3.5  -2.5  -1.5  -0.5   0.5
 [97]   1.5   2.5   3.5   4.5   5.5   6.5   7.5   8.5   9.5  10.5  11.5  12.5  13.5  14.5  15.5  16.5
[113]  17.5  18.5  19.5  20.5  21.5  22.5  23.5  24.5  25.5  26.5  27.5  28.5  29.5

$xname
[1] "E2_SL_clean$Choice_CUZminusSIB_relate"

$equidist
[1] TRUE

attr(,"class")
[1] "histogram"

Friend-Like

No Choice

# returns t-test results
t.test(relate ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  relate by Relation
t = -33.579, df = 344, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -46.58346 -41.42813
sample estimates:
mean of the differences 
               -44.0058 
# returns dz effect size and 95% CIs
effsize::cohen.d(relate ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -1.807811 (large)
95 percent confidence interval:
    lower     upper 
-1.979372 -1.636250 
# returns d-av effect size and 95% CIs
effsize::cohen.d(relate ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -1.919301 (large)
95 percent confidence interval:
    lower     upper 
-2.108489 -1.730112 
# returns correlation between variables
cor_test(data = E2_FL_clean, "NoChoice_CUZ_relate", "NoChoice_SIB_relate", method = "Pearson")
Parameter1          |          Parameter2 |    r |       95% CI | t(343) |         p
------------------------------------------------------------------------------------
NoChoice_CUZ_relate | NoChoice_SIB_relate | 0.44 | [0.35, 0.52] |   8.98 | < .001***

Observations: 345
# returns histogram of the difference-score variable
print(hist(E2_FL_clean$NoChoice_CUZminusSIB_relate, breaks = 100))
$breaks
  [1] -91 -90 -89 -88 -87 -86 -85 -84 -83 -82 -81 -80 -79 -78 -77 -76 -75 -74 -73 -72 -71 -70 -69 -68
 [25] -67 -66 -65 -64 -63 -62 -61 -60 -59 -58 -57 -56 -55 -54 -53 -52 -51 -50 -49 -48 -47 -46 -45 -44
 [49] -43 -42 -41 -40 -39 -38 -37 -36 -35 -34 -33 -32 -31 -30 -29 -28 -27 -26 -25 -24 -23 -22 -21 -20
 [73] -19 -18 -17 -16 -15 -14 -13 -12 -11 -10  -9  -8  -7  -6  -5  -4  -3  -2  -1   0   1   2   3   4
 [97]   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20  21  22  23  24  25  26  27  28
[121]  29  30

$counts
  [1]  1  1  0  1  3  1  2  4  4  2  1  2  7  4 12 11 10 11  5  7  5  1  1  1  3  4  1  0  0  0  1  0
 [33]  2  2  1  0  2  2  2  8 13 15  1  5  2  6  0  1  1  1  3  4  1  3  2  4  5  1 11  6  6  8  3 10
 [65]  9 18 11 11  6  2  3  6  1  2  6  2  2  3  2  3  3  2  1  3  0  1  2  2  0  1  4  0  0  0  0  0
 [97]  0  0  0  0  0  0  0  1  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  1

$density
  [1] 0.002898551 0.002898551 0.000000000 0.002898551 0.008695652 0.002898551 0.005797101 0.011594203
  [9] 0.011594203 0.005797101 0.002898551 0.005797101 0.020289855 0.011594203 0.034782609 0.031884058
 [17] 0.028985507 0.031884058 0.014492754 0.020289855 0.014492754 0.002898551 0.002898551 0.002898551
 [25] 0.008695652 0.011594203 0.002898551 0.000000000 0.000000000 0.000000000 0.002898551 0.000000000
 [33] 0.005797101 0.005797101 0.002898551 0.000000000 0.005797101 0.005797101 0.005797101 0.023188406
 [41] 0.037681159 0.043478261 0.002898551 0.014492754 0.005797101 0.017391304 0.000000000 0.002898551
 [49] 0.002898551 0.002898551 0.008695652 0.011594203 0.002898551 0.008695652 0.005797101 0.011594203
 [57] 0.014492754 0.002898551 0.031884058 0.017391304 0.017391304 0.023188406 0.008695652 0.028985507
 [65] 0.026086957 0.052173913 0.031884058 0.031884058 0.017391304 0.005797101 0.008695652 0.017391304
 [73] 0.002898551 0.005797101 0.017391304 0.005797101 0.005797101 0.008695652 0.005797101 0.008695652
 [81] 0.008695652 0.005797101 0.002898551 0.008695652 0.000000000 0.002898551 0.005797101 0.005797101
 [89] 0.000000000 0.002898551 0.011594203 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000
 [97] 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.002898551
[105] 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000
[113] 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000
[121] 0.002898551

$mids
  [1] -90.5 -89.5 -88.5 -87.5 -86.5 -85.5 -84.5 -83.5 -82.5 -81.5 -80.5 -79.5 -78.5 -77.5 -76.5 -75.5
 [17] -74.5 -73.5 -72.5 -71.5 -70.5 -69.5 -68.5 -67.5 -66.5 -65.5 -64.5 -63.5 -62.5 -61.5 -60.5 -59.5
 [33] -58.5 -57.5 -56.5 -55.5 -54.5 -53.5 -52.5 -51.5 -50.5 -49.5 -48.5 -47.5 -46.5 -45.5 -44.5 -43.5
 [49] -42.5 -41.5 -40.5 -39.5 -38.5 -37.5 -36.5 -35.5 -34.5 -33.5 -32.5 -31.5 -30.5 -29.5 -28.5 -27.5
 [65] -26.5 -25.5 -24.5 -23.5 -22.5 -21.5 -20.5 -19.5 -18.5 -17.5 -16.5 -15.5 -14.5 -13.5 -12.5 -11.5
 [81] -10.5  -9.5  -8.5  -7.5  -6.5  -5.5  -4.5  -3.5  -2.5  -1.5  -0.5   0.5   1.5   2.5   3.5   4.5
 [97]   5.5   6.5   7.5   8.5   9.5  10.5  11.5  12.5  13.5  14.5  15.5  16.5  17.5  18.5  19.5  20.5
[113]  21.5  22.5  23.5  24.5  25.5  26.5  27.5  28.5  29.5

$xname
[1] "E2_FL_clean$NoChoice_CUZminusSIB_relate"

$equidist
[1] TRUE

attr(,"class")
[1] "histogram"

Choice

# returns t-test results
t.test(relate ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  relate by Relation
t = -33.766, df = 344, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -44.46029 -39.56580
sample estimates:
mean of the differences 
              -42.01304 
# returns dz effect size and 95% CIs
effsize::cohen.d(relate ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -1.817923 (large)
95 percent confidence interval:
    lower     upper 
-1.990080 -1.645766 
# returns d-av effect size and 95% CIs
effsize::cohen.d(relate ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -1.86697 (large)
95 percent confidence interval:
    lower     upper 
-2.046759 -1.687182 
# returns correlation between variables
cor_test(data = E2_FL_clean, "Choice_CUZ_relate", "Choice_SIB_relate", method = "Pearson")
Parameter1        |        Parameter2 |    r |       95% CI | t(343) |         p
--------------------------------------------------------------------------------
Choice_CUZ_relate | Choice_SIB_relate | 0.47 | [0.39, 0.55] |   9.93 | < .001***

Observations: 345
# returns histogram of the difference-score variable
print(hist(E2_FL_clean$Choice_CUZminusSIB_relate, breaks = 100))
$breaks
  [1] -92 -91 -90 -89 -88 -87 -86 -85 -84 -83 -82 -81 -80 -79 -78 -77 -76 -75 -74 -73 -72 -71 -70 -69
 [25] -68 -67 -66 -65 -64 -63 -62 -61 -60 -59 -58 -57 -56 -55 -54 -53 -52 -51 -50 -49 -48 -47 -46 -45
 [49] -44 -43 -42 -41 -40 -39 -38 -37 -36 -35 -34 -33 -32 -31 -30 -29 -28 -27 -26 -25 -24 -23 -22 -21
 [73] -20 -19 -18 -17 -16 -15 -14 -13 -12 -11 -10  -9  -8  -7  -6  -5  -4  -3  -2  -1   0   1   2   3
 [97]   4   5   6   7   8   9  10  11  12  13  14  15  16

$counts
  [1]  1  1  0  0  0  1  0  0  1  4  4  3  1  5  6 10 13  9  9  6  1  0  1  0  2  0  2  0  0  0  3  3
 [33]  2  1  1  0  0  1  0  2  6 22 16  4  3  3  4  3  4  3  0  3  3 10  4  5  3  6  4  3  8  2  6  5
 [65]  9  9 14 15 10  5  6  2  5  3  3  4  7  2  5  2  1  1  3  2  0  0  2  2  0  0  0  4  0  2  0  1
 [97]  0  0  0  0  1  1  0  0  0  0  0  1

$density
  [1] 0.002898551 0.002898551 0.000000000 0.000000000 0.000000000 0.002898551 0.000000000 0.000000000
  [9] 0.002898551 0.011594203 0.011594203 0.008695652 0.002898551 0.014492754 0.017391304 0.028985507
 [17] 0.037681159 0.026086957 0.026086957 0.017391304 0.002898551 0.000000000 0.002898551 0.000000000
 [25] 0.005797101 0.000000000 0.005797101 0.000000000 0.000000000 0.000000000 0.008695652 0.008695652
 [33] 0.005797101 0.002898551 0.002898551 0.000000000 0.000000000 0.002898551 0.000000000 0.005797101
 [41] 0.017391304 0.063768116 0.046376812 0.011594203 0.008695652 0.008695652 0.011594203 0.008695652
 [49] 0.011594203 0.008695652 0.000000000 0.008695652 0.008695652 0.028985507 0.011594203 0.014492754
 [57] 0.008695652 0.017391304 0.011594203 0.008695652 0.023188406 0.005797101 0.017391304 0.014492754
 [65] 0.026086957 0.026086957 0.040579710 0.043478261 0.028985507 0.014492754 0.017391304 0.005797101
 [73] 0.014492754 0.008695652 0.008695652 0.011594203 0.020289855 0.005797101 0.014492754 0.005797101
 [81] 0.002898551 0.002898551 0.008695652 0.005797101 0.000000000 0.000000000 0.005797101 0.005797101
 [89] 0.000000000 0.000000000 0.000000000 0.011594203 0.000000000 0.005797101 0.000000000 0.002898551
 [97] 0.000000000 0.000000000 0.000000000 0.000000000 0.002898551 0.002898551 0.000000000 0.000000000
[105] 0.000000000 0.000000000 0.000000000 0.002898551

$mids
  [1] -91.5 -90.5 -89.5 -88.5 -87.5 -86.5 -85.5 -84.5 -83.5 -82.5 -81.5 -80.5 -79.5 -78.5 -77.5 -76.5
 [17] -75.5 -74.5 -73.5 -72.5 -71.5 -70.5 -69.5 -68.5 -67.5 -66.5 -65.5 -64.5 -63.5 -62.5 -61.5 -60.5
 [33] -59.5 -58.5 -57.5 -56.5 -55.5 -54.5 -53.5 -52.5 -51.5 -50.5 -49.5 -48.5 -47.5 -46.5 -45.5 -44.5
 [49] -43.5 -42.5 -41.5 -40.5 -39.5 -38.5 -37.5 -36.5 -35.5 -34.5 -33.5 -32.5 -31.5 -30.5 -29.5 -28.5
 [65] -27.5 -26.5 -25.5 -24.5 -23.5 -22.5 -21.5 -20.5 -19.5 -18.5 -17.5 -16.5 -15.5 -14.5 -13.5 -12.5
 [81] -11.5 -10.5  -9.5  -8.5  -7.5  -6.5  -5.5  -4.5  -3.5  -2.5  -1.5  -0.5   0.5   1.5   2.5   3.5
 [97]   4.5   5.5   6.5   7.5   8.5   9.5  10.5  11.5  12.5  13.5  14.5  15.5

$xname
[1] "E2_FL_clean$Choice_CUZminusSIB_relate"

$equidist
[1] TRUE

attr(,"class")
[1] "histogram"

Close

Stranger-Like

No Choice

# returns t-test results
t.test(close ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  close by Relation
t = -3.8599, df = 353, p-value = 0.0001349
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -5.816367 -1.889847
sample estimates:
mean of the differences 
              -3.853107 
# returns dz effect size and 95% CIs
effsize::cohen.d(close ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.2051499 (small)
95 percent confidence interval:
      lower       upper 
-0.31059191 -0.09970788 
# returns d-av effect size and 95% CIs
effsize::cohen.d(close ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.1754968 (negligible)
95 percent confidence interval:
     lower      upper 
-0.2654483 -0.0855454 
# returns correlation between variables
cor_test(data = E2_SL_clean, "NoChoice_CUZ_close", "NoChoice_SIB_close", method = "Pearson")
Parameter1         |         Parameter2 |    r |       95% CI | t(352) |         p
----------------------------------------------------------------------------------
NoChoice_CUZ_close | NoChoice_SIB_close | 0.63 | [0.57, 0.69] |  15.39 | < .001***

Observations: 354
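The CUZ-SIB correlation matters for the effect sizes above because the standard deviation of the difference scores equals sqrt(SD1^2 + SD2^2 - 2*r*SD1*SD2); the more strongly the two ratings correlate, the smaller that denominator and the larger dz becomes relative to d-av. The same coefficient can be cross-checked with base R (a sketch):

# cross-check (sketch): the same Pearson correlation via base R
cor.test(E2_SL_clean$NoChoice_CUZ_close, E2_SL_clean$NoChoice_SIB_close)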
# returns histogram of the difference-score variable
print(hist(E2_SL_clean$NoChoice_CUZminusSIB_close, breaks = 100))
[Histogram of E2_SL_clean$NoChoice_CUZminusSIB_close; printed histogram object output omitted]

Choice

# returns t-test results
t.test(close ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  close by Relation
t = -9.9187, df = 353, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -7.550201 -5.051494
sample estimates:
mean of the differences 
              -6.300847 
# returns dz effect size and 95% CIs
effsize::cohen.d(close ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.5271712 (medium)
95 percent confidence interval:
     lower      upper 
-0.6385352 -0.4158072 
# returns d-av effect size and 95% CIs
effsize::cohen.d(close ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.2651506 (small)
95 percent confidence interval:
     lower      upper 
-0.3185498 -0.2117514 
# returns correlation between variables
cor_test(data = E2_SL_clean, "Choice_CUZ_close", "Choice_SIB_close", method = "Pearson")
Parameter1       |       Parameter2 |    r |       95% CI | t(352) |         p
------------------------------------------------------------------------------
Choice_CUZ_close | Choice_SIB_close | 0.87 | [0.85, 0.90] |  33.67 | < .001***

Observations: 354
# returns histogram of the difference-score variable
print(hist(E2_SL_clean$Choice_CUZminusSIB_close, breaks = 100))
[Histogram of E2_SL_clean$Choice_CUZminusSIB_close; printed histogram object output omitted]

Friend-Like

No Choice

# returns t-test results
t.test(close ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  close by Relation
t = -2.8987, df = 344, p-value = 0.003987
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -5.0209996 -0.9616091
sample estimates:
mean of the differences 
              -2.991304 
# returns dz effect size and 95% CIs
effsize::cohen.d(close ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.1560626 (negligible)
95 percent confidence interval:
      lower       upper 
-0.26241121 -0.04971409 
# returns d-av effect size and 95% CIs
effsize::cohen.d(close ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.1661081 (negligible)
95 percent confidence interval:
      lower       upper 
-0.27939261 -0.05282365 
# returns correlation between variables
cor_test(data = E2_FL_clean, "NoChoice_CUZ_close", "NoChoice_SIB_close", method = "Pearson")
Parameter1         |         Parameter2 |    r |       95% CI | t(343) |         p
----------------------------------------------------------------------------------
NoChoice_CUZ_close | NoChoice_SIB_close | 0.43 | [0.34, 0.52] |   8.91 | < .001***

Observations: 345
# returns histogram of the difference-score variable
print(hist(E2_FL_clean$NoChoice_CUZminusSIB_close, breaks = 100))
[Histogram of E2_FL_clean$NoChoice_CUZminusSIB_close; printed histogram object output omitted]

Choice

# returns t-test results
t.test(close ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  close by Relation
t = -9.7685, df = 344, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -6.572598 -4.369431
sample estimates:
mean of the differences 
              -5.471014 
# returns dz effect size and 95% CIs
effsize::cohen.d(close ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.5259201 (medium)
95 percent confidence interval:
     lower      upper 
-0.6386998 -0.4131404 
# returns d-av effect size and 95% CIs
effsize::cohen.d(close ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.329588 (small)
95 percent confidence interval:
     lower      upper 
-0.3976085 -0.2615675 
# returns correlation between variables
cor_test(data = E2_FL_clean, "Choice_CUZ_close", "Choice_SIB_close", method = "Pearson")
Parameter1       |       Parameter2 |    r |       95% CI | t(343) |         p
------------------------------------------------------------------------------
Choice_CUZ_close | Choice_SIB_close | 0.80 | [0.76, 0.84] |  25.01 | < .001***

Observations: 345
# returns histogram of the difference-score variable
print(hist(E2_FL_clean$Choice_CUZminusSIB_close, breaks = 100))
[Histogram of E2_FL_clean$Choice_CUZminusSIB_close; printed histogram object output omitted]

Prior Help

Stranger-Like

No Choice

# returns t-test results
t.test(priorhelp ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  priorhelp by Relation
t = -6.2413, df = 353, p-value = 1.244e-09
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -8.763683 -4.564000
sample estimates:
mean of the differences 
              -6.663842 
# returns dz effect size and 95% CIs
effsize::cohen.d(priorhelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.3317235 (small)
95 percent confidence interval:
     lower      upper 
-0.4389055 -0.2245415 
# returns d-av effect size and 95% CIs
effsize::cohen.d(priorhelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.2868032 (small)
95 percent confidence interval:
     lower      upper 
-0.3788590 -0.1947473 
# returns correlation between variables
cor_test(data = E2_SL_clean, "NoChoice_CUZ_priorhelp", "NoChoice_SIB_priorhelp", method = "Pearson")
Parameter1             |             Parameter2 |    r |       95% CI | t(352) |         p
------------------------------------------------------------------------------------------
NoChoice_CUZ_priorhelp | NoChoice_SIB_priorhelp | 0.63 | [0.56, 0.69] |  15.07 | < .001***

Observations: 354
# returns histogram of the difference-score variable
print(hist(E2_SL_clean$NoChoice_CUZminusSIB_priorhelp, breaks = 100))
[Histogram of E2_SL_clean$NoChoice_CUZminusSIB_priorhelp; printed histogram object output omitted]

Choice

# returns t-test results
t.test(priorhelp ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  priorhelp by Relation
t = -13.075, df = 353, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -11.045985  -8.157405
sample estimates:
mean of the differences 
              -9.601695 
# returns dz effect size and 95% CIs
effsize::cohen.d(priorhelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.6949149 (medium)
95 percent confidence interval:
     lower      upper 
-0.8111820 -0.5786478 
# returns d-av effect size and 95% CIs
effsize::cohen.d(priorhelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.3909815 (small)
95 percent confidence interval:
     lower      upper 
-0.4518944 -0.3300685 
# returns correlation between variables
cor_test(data = E2_SL_clean, "Choice_CUZ_priorhelp", "Choice_SIB_priorhelp", method = "Pearson")
Parameter1           |           Parameter2 |    r |       95% CI | t(352) |         p
--------------------------------------------------------------------------------------
Choice_CUZ_priorhelp | Choice_SIB_priorhelp | 0.84 | [0.81, 0.87] |  29.25 | < .001***

Observations: 354
# returns histogram of the difference-score variable
print(hist(E2_SL_clean$Choice_CUZminusSIB_priorhelp, breaks = 100))
[Histogram of E2_SL_clean$Choice_CUZminusSIB_priorhelp; printed histogram object output omitted]

Friend-Like

No Choice

# returns t-test results
t.test(priorhelp ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  priorhelp by Relation
t = -3.8733, df = 344, p-value = 0.0001285
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -5.720921 -1.867484
sample estimates:
mean of the differences 
              -3.794203 
# returns dz effect size and 95% CIs
effsize::cohen.d(priorhelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.2085314 (small)
95 percent confidence interval:
     lower      upper 
-0.3153813 -0.1016815 
# returns d-av effect size and 95% CIs
effsize::cohen.d(priorhelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.2119308 (small)
95 percent confidence interval:
     lower      upper 
-0.3205604 -0.1033011 
# returns correlation between variables
cor_test(data = E2_FL_clean, "NoChoice_CUZ_priorhelp", "NoChoice_SIB_priorhelp", method = "Pearson")
Parameter1             |             Parameter2 |    r |       95% CI | t(343) |         p
------------------------------------------------------------------------------------------
NoChoice_CUZ_priorhelp | NoChoice_SIB_priorhelp | 0.48 | [0.40, 0.56] |  10.23 | < .001***

Observations: 345
# returns histogram of the difference-score variable
print(hist(E2_FL_clean$NoChoice_CUZminusSIB_priorhelp, breaks = 100))
[Histogram of E2_FL_clean$NoChoice_CUZminusSIB_priorhelp; printed histogram object output omitted]

Choice

# returns t-test results
t.test(priorhelp ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  priorhelp by Relation
t = -9.0624, df = 344, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -6.314485 -4.062326
sample estimates:
mean of the differences 
              -5.188406 
# returns dz effect size and 95% CIs
effsize::cohen.d(priorhelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.4879037 (small)
95 percent confidence interval:
     lower      upper 
-0.5997247 -0.3760828 
# returns d-av effect size and 95% CIs
effsize::cohen.d(priorhelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.3099237 (small)
95 percent confidence interval:
     lower      upper 
-0.3786638 -0.2411837 
# returns correlation between variables
cor_test(data = E2_FL_clean, "Choice_CUZ_priorhelp", "Choice_SIB_priorhelp", method = "Pearson")
Parameter1           |           Parameter2 |    r |       95% CI | t(343) |         p
--------------------------------------------------------------------------------------
Choice_CUZ_priorhelp | Choice_SIB_priorhelp | 0.80 | [0.76, 0.83] |  24.54 | < .001***

Observations: 345
# returns histogram of the difference-score variable
print(hist(E2_FL_clean$Choice_CUZminusSIB_priorhelp, breaks = 100))
[Histogram of E2_FL_clean$Choice_CUZminusSIB_priorhelp; printed histogram object output omitted]

Future Help

Stranger-Like

No Choice

# returns t-test results
t.test(futurehelp ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  futurehelp by Relation
t = -3.8295, df = 353, p-value = 0.0001519
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -7.234338 -2.324984
sample estimates:
mean of the differences 
              -4.779661 
# returns dz effect size and 95% CIs
effsize::cohen.d(futurehelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.2035358 (small)
95 percent confidence interval:
      lower       upper 
-0.30896074 -0.09811078 
# returns d-av effect size and 95% CIs
effsize::cohen.d(futurehelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.1789109 (negligible)
95 percent confidence interval:
      lower       upper 
-0.27136706 -0.08645483 
# returns correlation between variables
cor_test(data = E2_SL_clean, "NoChoice_CUZ_futurehelp", "NoChoice_SIB_futurehelp", method = "Pearson")
Parameter1              |              Parameter2 |    r |       95% CI | t(352) |         p
--------------------------------------------------------------------------------------------
NoChoice_CUZ_futurehelp | NoChoice_SIB_futurehelp | 0.61 | [0.54, 0.67] |  14.58 | < .001***

Observations: 354
# returns histogram of the difference-score variable
print(hist(E2_SL_clean$NoChoice_CUZminusSIB_futurehelp, breaks = 100))
[Histogram of E2_SL_clean$NoChoice_CUZminusSIB_futurehelp; printed histogram object output omitted]

Choice

# returns t-test results
t.test(futurehelp ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  futurehelp by Relation
t = -12.09, df = 353, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -10.007543  -7.207147
sample estimates:
mean of the differences 
              -8.607345 
# returns dz effect size and 95% CIs
effsize::cohen.d(futurehelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.6425661 (medium)
95 percent confidence interval:
     lower      upper 
-0.7571821 -0.5279500 
# returns d-av effect size and 95% CIs
effsize::cohen.d(futurehelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.3432908 (small)
95 percent confidence interval:
     lower      upper 
-0.4006586 -0.2859229 
# returns correlation between variables
cor_test(data = E2_SL_clean, "Choice_CUZ_futurehelp", "Choice_SIB_futurehelp", method = "Pearson")
Parameter1            |            Parameter2 |    r |       95% CI | t(352) |         p
----------------------------------------------------------------------------------------
Choice_CUZ_futurehelp | Choice_SIB_futurehelp | 0.86 | [0.83, 0.88] |  31.24 | < .001***

Observations: 354
# returns histogram of the difference-score variable
print(hist(E2_SL_clean$Choice_CUZminusSIB_futurehelp, breaks = 100))
[Histogram of E2_SL_clean$Choice_CUZminusSIB_futurehelp; printed histogram object output omitted]

Friend-Like

No Choice

# returns t-test results
t.test(futurehelp ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  futurehelp by Relation
t = -3.1602, df = 344, p-value = 0.001717
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -4.829576 -1.124047
sample estimates:
mean of the differences 
              -2.976812 
# returns dz effect size and 95% CIs
effsize::cohen.d(futurehelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.1701376 (negligible)
95 percent confidence interval:
      lower       upper 
-0.27660668 -0.06366849 
# returns d-av effect size and 95% CIs
effsize::cohen.d(futurehelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.167496 (negligible)
95 percent confidence interval:
    lower     upper 
-0.272289 -0.062703 
# returns correlation between variables
cor_test(data = E2_FL_clean, "NoChoice_CUZ_futurehelp", "NoChoice_SIB_futurehelp", method = "Pearson")
Parameter1              |              Parameter2 |    r |       95% CI | t(343) |         p
--------------------------------------------------------------------------------------------
NoChoice_CUZ_futurehelp | NoChoice_SIB_futurehelp | 0.52 | [0.43, 0.59] |  11.14 | < .001***

Observations: 345
# returns histogram of the difference-score variable
print(hist(E2_FL_clean$NoChoice_CUZminusSIB_futurehelp, breaks = 100))
[Histogram of E2_FL_clean$NoChoice_CUZminusSIB_futurehelp; printed histogram object output omitted]

Choice

# returns t-test results
t.test(futurehelp ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  futurehelp by Relation
t = -9.3125, df = 344, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -6.147323 -4.003401
sample estimates:
mean of the differences 
              -5.075362 
# returns dz effect size and 95% CIs
effsize::cohen.d(futurehelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.5013687 (medium)
95 percent confidence interval:
     lower      upper 
-0.6135219 -0.3892155 
# returns d-av effect size and 95% CIs
effsize::cohen.d(futurehelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.3114277 (small)
95 percent confidence interval:
     lower      upper 
-0.3786613 -0.2441941 
# returns correlation between variables
cor_test(data = E2_FL_clean, "Choice_CUZ_futurehelp", "Choice_SIB_futurehelp", method = "Pearson")
Parameter1            |            Parameter2 |    r |       95% CI | t(343) |         p
----------------------------------------------------------------------------------------
Choice_CUZ_futurehelp | Choice_SIB_futurehelp | 0.81 | [0.77, 0.84] |  25.32 | < .001***

Observations: 345
# returns histogram of the difference-score variable
print(hist(E2_FL_clean$Choice_CUZminusSIB_futurehelp, breaks = 100))
[Histogram of E2_FL_clean$Choice_CUZminusSIB_futurehelp; printed histogram object output omitted]

Prior Interaction

Stranger-Like

No Choice

# returns t-test results
t.test(priorinteract ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  priorinteract by Relation
t = -11.696, df = 353, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -18.90171 -13.45987
sample estimates:
mean of the differences 
              -16.18079 
# returns dz effect size and 95% CIs
effsize::cohen.d(priorinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.6216164 (medium)
95 percent confidence interval:
     lower      upper 
-0.7356017 -0.5076311 
# returns d-av effect size and 95% CIs
effsize::cohen.d(priorinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.5954452 (medium)
95 percent confidence interval:
     lower      upper 
-0.7039004 -0.4869900 
# returns correlation between variables
cor_test(data = E2_SL_clean, "NoChoice_CUZ_priorinteract", "NoChoice_SIB_priorinteract", method = "Pearson")
Parameter1                 |                 Parameter2 |    r |       95% CI | t(352) |         p
--------------------------------------------------------------------------------------------------
NoChoice_CUZ_priorinteract | NoChoice_SIB_priorinteract | 0.54 | [0.46, 0.61] |  12.08 | < .001***

Observations: 354
# returns histogram of the difference-score variable
print(hist(E2_SL_clean$NoChoice_CUZminusSIB_priorinteract, breaks = 100))
[Histogram of E2_SL_clean$NoChoice_CUZminusSIB_priorinteract; printed histogram object output omitted]

Choice

# returns t-test results
t.test(priorinteract ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  priorinteract by Relation
t = -17.344, df = 353, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -21.68756 -17.27006
sample estimates:
mean of the differences 
              -19.47881 
# returns dz effect size and 95% CIs
effsize::cohen.d(priorinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.9218361 (large)
95 percent confidence interval:
    lower     upper 
-1.046397 -0.797275 
# returns d-av effect size and 95% CIs
effsize::cohen.d(priorinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.7099864 (medium)
95 percent confidence interval:
     lower      upper 
-0.7999148 -0.6200579 
# returns correlation between variables
cor_test(data = E2_SL_clean, "Choice_CUZ_priorinteract", "Choice_SIB_priorinteract", method = "Pearson")
Parameter1               |               Parameter2 |    r |       95% CI | t(352) |         p
----------------------------------------------------------------------------------------------
Choice_CUZ_priorinteract | Choice_SIB_priorinteract | 0.70 | [0.65, 0.75] |  18.57 | < .001***

Observations: 354
# returns histogram of difference score variable
print(hist(E2_SL_clean$Choice_CUZminusSIB_priorinteract, breaks = 100))
[Histogram of E2_SL_clean$Choice_CUZminusSIB_priorinteract difference scores]

Friend-Like

No Choice

# returns t-test results
t.test(priorinteract ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  priorinteract by Relation
t = -4.4011, df = 344, p-value = 1.439e-05
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -5.812775 -2.222008
sample estimates:
mean of the differences 
              -4.017391 
# returns dz effect size and 95% CIs
effsize::cohen.d(priorinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.2369497 (small)
95 percent confidence interval:
     lower      upper 
-0.3441300 -0.1297694 
# returns d-av effect size and 95% CIs
effsize::cohen.d(priorinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.2408293 (small)
95 percent confidence interval:
     lower      upper 
-0.3498136 -0.1318450 
# returns correlation between variables
cor_test(data = E2_FL_clean, "NoChoice_CUZ_priorinteract", "NoChoice_SIB_priorinteract", method = "Pearson")
Parameter1                 |                 Parameter2 |    r |       95% CI | t(343) |         p
--------------------------------------------------------------------------------------------------
NoChoice_CUZ_priorinteract | NoChoice_SIB_priorinteract | 0.48 | [0.40, 0.56] |  10.23 | < .001***

Observations: 345
# returns histogram of difference score variable
print(hist(E2_FL_clean$NoChoice_CUZminusSIB_priorinteract, breaks = 100))
[Histogram of E2_FL_clean$NoChoice_CUZminusSIB_priorinteract difference scores]

Choice

# returns t-test results
t.test(priorinteract ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  priorinteract by Relation
t = -9.6968, df = 344, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -7.237946 -4.796837
sample estimates:
mean of the differences 
              -6.017391 
# returns dz effect size and 95% CIs
effsize::cohen.d(priorinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.5220599 (medium)
95 percent confidence interval:
     lower      upper 
-0.6347394 -0.4093805 
# returns d-av effect size and 95% CIs
effsize::cohen.d(priorinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.3853272 (small)
95 percent confidence interval:
     lower      upper 
-0.4661926 -0.3044618 
# returns correlation between variables
cor_test(data = E2_FL_clean, "Choice_CUZ_priorinteract", "Choice_SIB_priorinteract", method = "Pearson")
Parameter1               |               Parameter2 |    r |       95% CI | t(343) |         p
----------------------------------------------------------------------------------------------
Choice_CUZ_priorinteract | Choice_SIB_priorinteract | 0.73 | [0.67, 0.77] |  19.64 | < .001***

Observations: 345
# returns histogram of difference score variable
print(hist(E2_FL_clean$Choice_CUZminusSIB_priorinteract, breaks = 100))
[Histogram of E2_FL_clean$Choice_CUZminusSIB_priorinteract difference scores]

Future Interax

Stranger-Like

No Choice

# returns t-test results
t.test(futureinteract ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  futureinteract by Relation
t = -5.5451, df = 353, p-value = 5.766e-08
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -10.408796  -4.958435
sample estimates:
mean of the differences 
              -7.683616 
# returns dz effect size and 95% CIs
effsize::cohen.d(futureinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.2947192 (small)
95 percent confidence interval:
     lower      upper 
-0.4013108 -0.1881275 
# returns d-av effect size and 95% CIs
effsize::cohen.d(futureinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.2886645 (small)
95 percent confidence interval:
    lower     upper 
-0.392978 -0.184351 
# returns correlation between variables
cor_test(data = E2_SL_clean, "NoChoice_CUZ_futureinteract", "NoChoice_SIB_futureinteract", method = "Pearson")
Parameter1                  |                  Parameter2 |    r |       95% CI | t(352) |         p
----------------------------------------------------------------------------------------------------
NoChoice_CUZ_futureinteract | NoChoice_SIB_futureinteract | 0.52 | [0.44, 0.59] |  11.43 | < .001***

Observations: 354
# returns histogram of difference score variable
print(hist(E2_SL_clean$NoChoice_CUZminusSIB_futureinteract, breaks = 100))
[Histogram of E2_SL_clean$NoChoice_CUZminusSIB_futureinteract difference scores]

Choice

# returns t-test results
t.test(futureinteract ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  futureinteract by Relation
t = -13.382, df = 353, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -11.140761  -8.285793
sample estimates:
mean of the differences 
              -9.713277 
# returns dz effect size and 95% CIs
effsize::cohen.d(futureinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.7112667 (medium)
95 percent confidence interval:
     lower      upper 
-0.8280709 -0.5944625 
# returns d-av effect size and 95% CIs
effsize::cohen.d(futureinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.3950859 (small)
95 percent confidence interval:
     lower      upper 
-0.4552682 -0.3349035 
# returns correlation between variables
cor_test(data = E2_SL_clean, "Choice_CUZ_futureinteract", "Choice_SIB_futureinteract", method = "Pearson")
Parameter1                |                Parameter2 |    r |       95% CI | t(352) |         p
------------------------------------------------------------------------------------------------
Choice_CUZ_futureinteract | Choice_SIB_futureinteract | 0.85 | [0.81, 0.87] |  29.74 | < .001***

Observations: 354
# returns histogram of difference score variable
print(hist(E2_SL_clean$Choice_CUZminusSIB_futureinteract, breaks = 100))
[Histogram of E2_SL_clean$Choice_CUZminusSIB_futureinteract difference scores]

Friend-Like

No Choice

# returns t-test results
t.test(futureinteract ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  futureinteract by Relation
t = -3.4364, df = 344, p-value = 0.0006617
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -4.767245 -1.296523
sample estimates:
mean of the differences 
              -3.031884 
# returns dz effect size and 95% CIs
effsize::cohen.d(futureinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.1850086 (negligible)
95 percent confidence interval:
      lower       upper 
-0.29161617 -0.07840102 
# returns d-av effect size and 95% CIs
effsize::cohen.d(futureinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.1806067 (negligible)
95 percent confidence interval:
      lower       upper 
-0.28463664 -0.07657682 
# returns correlation between variables
cor_test(data = E2_FL_clean, "NoChoice_CUZ_futureinteract", "NoChoice_SIB_futureinteract", method = "Pearson")
Parameter1                  |                  Parameter2 |    r |       95% CI | t(343) |         p
----------------------------------------------------------------------------------------------------
NoChoice_CUZ_futureinteract | NoChoice_SIB_futureinteract | 0.52 | [0.44, 0.60] |  11.38 | < .001***

Observations: 345
# returns histogram of difference score variable
print(hist(E2_FL_clean$NoChoice_CUZminusSIB_futureinteract, breaks = 100))
[Histogram of E2_FL_clean$NoChoice_CUZminusSIB_futureinteract difference scores]

Choice

# returns t-test results
t.test(futureinteract ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  futureinteract by Relation
t = -9.1648, df = 344, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -5.905830 -3.818808
sample estimates:
mean of the differences 
              -4.862319 
# returns dz effect size and 95% CIs
effsize::cohen.d(futureinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.4934186 (small)
95 percent confidence interval:
     lower      upper 
-0.6053747 -0.3814626 
# returns d-av effect size and 95% CIs
effsize::cohen.d(futureinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.3139896 (small)
95 percent confidence interval:
     lower      upper 
-0.3828948 -0.2450845 
# returns correlation between variables
cor_test(data = E2_FL_clean, "Choice_CUZ_futureinteract", "Choice_SIB_futureinteract", method = "Pearson")
Parameter1                |                Parameter2 |    r |       95% CI | t(343) |         p
------------------------------------------------------------------------------------------------
Choice_CUZ_futureinteract | Choice_SIB_futureinteract | 0.80 | [0.76, 0.83] |  24.48 | < .001***

Observations: 345
# returns histogram of difference score variable
print(hist(E2_FL_clean$Choice_CUZminusSIB_futureinteract, breaks = 100))
[Histogram of E2_FL_clean$Choice_CUZminusSIB_futureinteract difference scores]

Moral

ANOVAs

Stranger-Like

# returns 2 x 2 within-subject ANOVA results
aov_moral_SL <- aov(moral ~ Relation*`Choice Context` + Error(ResponseId/(Relation*`Choice Context`)), 
                    data = E2_all_long %>%
                      filter(BSs_cond == "Stranger-Like"))
summary(aov_moral_SL)

Error: ResponseId
           Df Sum Sq Mean Sq F value Pr(>F)
Residuals 353 240678   681.8               

Error: ResponseId:Relation
           Df Sum Sq Mean Sq F value   Pr(>F)    
Relation    1  10075   10075   59.36 1.34e-13 ***
Residuals 353  59909     170                     
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Error: ResponseId:`Choice Context`
                  Df Sum Sq Mean Sq F value Pr(>F)    
`Choice Context`   1 136782  136782   469.8 <2e-16 ***
Residuals        353 102785     291                   
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Error: ResponseId:Relation:`Choice Context`
                           Df Sum Sq Mean Sq F value  Pr(>F)    
Relation:`Choice Context`   1  11260   11260   64.77 1.3e-14 ***
Residuals                 353  61365     174                    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# returns partial eta-squared effect sizes
effectsize::eta_squared(aov_moral_SL, partial = TRUE)
Group                                |               Parameter | Eta2 (partial) |       90% CI
----------------------------------------------------------------------------------------------
ResponseId:Relation                  |                Relation |           0.14 | [0.09, 0.20]
ResponseId:`Choice Context`          |          Choice Context |           0.57 | [0.52, 0.62]
ResponseId:Relation:`Choice Context` | Relation:Choice Context |           0.16 | [0.10, 0.21]
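These partial eta-squared values follow directly from the sums of squares in the ANOVA tables above, SS_effect / (SS_effect + SS_error); a quick arithmetic check (illustrative, not part of the original pipeline):

# illustrative check of partial eta-squared from the sums of squares reported above
10075 / (10075 + 59909)     # Relation: ~.14
136782 / (136782 + 102785)  # Choice Context: ~.57
11260 / (11260 + 61365)     # Relation x Choice Context: ~.16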

Friend-Like

# returns 2 x 2 within-subject ANOVA results
aov_moral_FL <- aov(moral ~ Relation*`Choice Context` + Error(ResponseId/(Relation*`Choice Context`)), 
                    data = E2_all_long %>%
                      filter(BSs_cond == "Friend-Like"))
summary(aov_moral_FL)

Error: ResponseId
           Df Sum Sq Mean Sq F value Pr(>F)
Residuals 344 236889   688.6               

Error: ResponseId:Relation
           Df Sum Sq Mean Sq F value   Pr(>F)    
Relation    1   2827  2826.5   15.26 0.000113 ***
Residuals 344  63709   185.2                     
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Error: ResponseId:`Choice Context`
                  Df Sum Sq Mean Sq F value Pr(>F)    
`Choice Context`   1 147539  147539   412.6 <2e-16 ***
Residuals        344 123000     358                   
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Error: ResponseId:Relation:`Choice Context`
                           Df Sum Sq Mean Sq F value   Pr(>F)    
Relation:`Choice Context`   1   3744    3744   17.74 3.23e-05 ***
Residuals                 344  72585     211                     
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# returns partial eta-squared effect sizes
effectsize::eta_squared(aov_moral_FL, partial = TRUE)
Group                                |               Parameter | Eta2 (partial) |       90% CI
----------------------------------------------------------------------------------------------
ResponseId:Relation                  |                Relation |           0.04 | [0.01, 0.08]
ResponseId:`Choice Context`          |          Choice Context |           0.55 | [0.49, 0.59]
ResponseId:Relation:`Choice Context` | Relation:Choice Context |           0.05 | [0.02, 0.09]

t-tests

Stranger-Like

No Choice
# returns t-test results
t.test(moral ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  moral by Relation
t = 0.45528, df = 353, p-value = 0.6492
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -1.012816  1.622985
sample estimates:
mean of the differences 
              0.3050847 
# returns dz effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: 0.0241978 (negligible)
95 percent confidence interval:
      lower       upper 
-0.08016727  0.12856288 
# returns d-av effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: 0.01849127 (negligible)
95 percent confidence interval:
      lower       upper 
-0.06125669  0.09823923 
# returns correlation between variables
cor_test(data = E2_SL_clean, "NoChoice_CUZ_moral", "NoChoice_SIB_moral", method = "Pearson")
Parameter1         |         Parameter2 |    r |       95% CI | t(352) |         p
----------------------------------------------------------------------------------
NoChoice_CUZ_moral | NoChoice_SIB_moral | 0.71 | [0.65, 0.76] |  18.81 | < .001***

Observations: 354
# returns histogram of difference score variable
print(hist(E2_SL_clean$NoChoice_CUZminusSIB_moral, breaks = 100))
[Histogram of E2_SL_clean$NoChoice_CUZminusSIB_moral difference scores]

Choice
# returns t-test results
t.test(moral ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  moral by Relation
t = -8.9849, df = 353, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -13.376804  -8.572349
sample estimates:
mean of the differences 
              -10.97458 
# returns dz effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.4775421 (small)
95 percent confidence interval:
     lower      upper 
-0.5876805 -0.3674037 
# returns d-av effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.5583514 (medium)
95 percent confidence interval:
     lower      upper 
-0.6895242 -0.4271787 
# returns correlation between variables
cor_test(data = E2_SL_clean, "Choice_CUZ_moral", "Choice_SIB_moral", method = "Pearson")
Parameter1       |       Parameter2 |    r |       95% CI | t(352) |         p
------------------------------------------------------------------------------
Choice_CUZ_moral | Choice_SIB_moral | 0.32 | [0.22, 0.41] |   6.26 | < .001***

Observations: 354
# returns histogram of difference score variable
print(hist(E2_SL_clean$Choice_CUZminusSIB_moral, breaks = 100))
[Histogram of E2_SL_clean$Choice_CUZminusSIB_moral difference scores]

Friend-Like

No Choice
# returns t-test results
t.test(moral ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  moral by Relation
t = 0.64188, df = 344, p-value = 0.5214
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -0.8915215  1.7552896
sample estimates:
mean of the differences 
              0.4318841 
# returns dz effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: 0.03455759 (negligible)
95 percent confidence interval:
      lower       upper 
-0.07118084  0.14029602 
# returns d-av effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: 0.02713705 (negligible)
95 percent confidence interval:
     lower      upper 
-0.0558867  0.1101608 
# returns correlation between variables
cor_test(data = E2_FL_clean, "NoChoice_CUZ_moral", "NoChoice_SIB_moral", method = "Pearson")
Parameter1         |         Parameter2 |    r |       95% CI | t(343) |         p
----------------------------------------------------------------------------------
NoChoice_CUZ_moral | NoChoice_SIB_moral | 0.69 | [0.63, 0.74] |  17.74 | < .001***

Observations: 345
# returns histogram of difference score variable
print(hist(E2_FL_clean$NoChoice_CUZminusSIB_moral, breaks = 100))
[Histogram of E2_FL_clean$NoChoice_CUZminusSIB_moral difference scores]

Choice
# returns t-test results
t.test(moral ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

    Paired t-test

data:  moral by Relation
t = -4.5336, df = 344, p-value = 8.017e-06
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -8.827515 -3.485529
sample estimates:
mean of the differences 
              -6.156522 
# returns dz effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

Cohen's d

d estimate: -0.2440798 (small)
95 percent confidence interval:
     lower      upper 
-0.3513495 -0.1368101 
# returns d-av effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

Cohen's d

d estimate: -0.2843231 (small)
95 percent confidence interval:
     lower      upper 
-0.4099225 -0.1587236 
# returns correlation between variables
cor_test(data = E2_FL_clean, "Choice_CUZ_moral", "Choice_SIB_moral", method = "Pearson")
Parameter1       |       Parameter2 |    r |       95% CI | t(343) |         p
------------------------------------------------------------------------------
Choice_CUZ_moral | Choice_SIB_moral | 0.32 | [0.22, 0.41] |   6.29 | < .001***

Observations: 345
# returns histogram of difference score variable
print(hist(E2_FL_clean$Choice_CUZminusSIB_moral, breaks = 100))
[Histogram of E2_FL_clean$Choice_CUZminusSIB_moral difference scores]

Moral Diff ~ Oblig Diff Plots

# Create difference score datasets for plotting of diff score correlations

# Stranger-Like
E2_diff_SL_cond_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(SL_Dist_Scen, SL_Close_Scen),
    names_to = "WSs_cond",
    values_to = "Condition"
  )

E2_diff_SL_oblig_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_oblig, Choice_CUZminusSIB_oblig),
    names_to = "WSs_cond",
    values_to = "oblig"
  )

E2_diff_SL_relate_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_relate, Choice_CUZminusSIB_relate),
    names_to = "WSs_cond",
    values_to = "relate"
  )

E2_diff_SL_close_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_close, Choice_CUZminusSIB_close),
    names_to = "WSs_cond",
    values_to = "close"
  )

E2_diff_SL_priorhelp_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_priorhelp, Choice_CUZminusSIB_priorhelp),
    names_to = "WSs_cond",
    values_to = "priorhelp"
  )

E2_diff_SL_futurehelp_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_futurehelp, Choice_CUZminusSIB_futurehelp),
    names_to = "WSs_cond",
    values_to = "futurehelp"
  )

E2_diff_SL_priorinteract_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_priorinteract, Choice_CUZminusSIB_priorinteract),
    names_to = "WSs_cond",
    values_to = "priorinteract"
  )

E2_diff_SL_futureinteract_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_futureinteract, Choice_CUZminusSIB_futureinteract),
    names_to = "WSs_cond",
    values_to = "futureinteract"
  )

E2_diff_SL_moral_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_moral, Choice_CUZminusSIB_moral),
    names_to = "WSs_cond",
    values_to = "moral"
  )

# Combine long SL datasets, select plotting variables, and create condition variable for `Choice Context`
E2_diff_SL_long <- cbind(E2_diff_SL_cond_long, 
                         E2_diff_SL_oblig_long, 
                         E2_diff_SL_relate_long, E2_diff_SL_close_long,
                         E2_diff_SL_priorhelp_long, E2_diff_SL_futurehelp_long, 
                         E2_diff_SL_priorinteract_long, E2_diff_SL_futureinteract_long,
                         E2_diff_SL_moral_long)
E2_diff_SL_long <- E2_diff_SL_long[, !duplicated(colnames(E2_diff_SL_long))] # get rid of duplicate columns

E2_diff_SL_long <- E2_diff_SL_long %>%
  select(ResponseId,
         Age:OUS_IH,
         BSs_cond,
         WSs_cond,
         Condition,
         oblig, 
         relate, close, 
         priorhelp, futurehelp,
         priorinteract, futureinteract,
         moral) %>%
  mutate(`Choice Context` = case_when(
    WSs_cond == "SL_Dist_Scen" ~ "No Choice",
    WSs_cond == "SL_Close_Scen" ~ "Choice"))

# Convert `Choice Context` to an ordered factor and participant IDs to factors
E2_diff_SL_long$`Choice Context` <- as.factor(E2_diff_SL_long$`Choice Context`)
E2_diff_SL_long$`Choice Context` <- ordered(E2_diff_SL_long$`Choice Context`, levels = c("No Choice", "Choice"))
E2_diff_SL_long$ResponseId <- as.factor(E2_diff_SL_long$ResponseId)

# Friend-Like
E2_diff_FL_cond_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(FL_Dist_Scen, FL_Close_Scen),
    names_to = "WSs_cond",
    values_to = "Condition"
  )

E2_diff_FL_oblig_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_oblig, Choice_CUZminusSIB_oblig),
    names_to = "WSs_cond",
    values_to = "oblig"
  )

E2_diff_FL_relate_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_relate, Choice_CUZminusSIB_relate),
    names_to = "WSs_cond",
    values_to = "relate"
  )

E2_diff_FL_close_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_close, Choice_CUZminusSIB_close),
    names_to = "WSs_cond",
    values_to = "close"
  )

E2_diff_FL_priorhelp_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_priorhelp, Choice_CUZminusSIB_priorhelp),
    names_to = "WSs_cond",
    values_to = "priorhelp"
  )

E2_diff_FL_futurehelp_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_futurehelp, Choice_CUZminusSIB_futurehelp),
    names_to = "WSs_cond",
    values_to = "futurehelp"
  )

E2_diff_FL_priorinteract_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_priorinteract, Choice_CUZminusSIB_priorinteract),
    names_to = "WSs_cond",
    values_to = "priorinteract"
  )

E2_diff_FL_futureinteract_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_futureinteract, Choice_CUZminusSIB_futureinteract),
    names_to = "WSs_cond",
    values_to = "futureinteract"
  )

E2_diff_FL_moral_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_moral, Choice_CUZminusSIB_moral),
    names_to = "WSs_cond",
    values_to = "moral"
  )

# Combine long FL datasets, select plotting variables, and create condition variable for `Choice Context`
E2_diff_FL_long <- cbind(E2_diff_FL_cond_long, 
                         E2_diff_FL_oblig_long, 
                         E2_diff_FL_relate_long, E2_diff_FL_close_long,
                         E2_diff_FL_priorhelp_long, E2_diff_FL_futurehelp_long, 
                         E2_diff_FL_priorinteract_long, E2_diff_FL_futureinteract_long,
                         E2_diff_FL_moral_long)
E2_diff_FL_long <- E2_diff_FL_long[, !duplicated(colnames(E2_diff_FL_long))] # get rid of duplicate columns

E2_diff_FL_long <- E2_diff_FL_long %>%
  select(ResponseId,
         Age:OUS_IH,
         BSs_cond,
         WSs_cond,
         Condition,
         oblig, 
         relate, close, 
         priorhelp, futurehelp,
         priorinteract, futureinteract,
         moral) %>%
  mutate(`Choice Context` = case_when(
    WSs_cond == "FL_Dist_Scen" ~ "No Choice",
    WSs_cond == "FL_Close_Scen" ~ "Choice"))

# Convert `Choice Context` to an ordered factor and participant IDs to factors
E2_diff_FL_long$`Choice Context` <- as.factor(E2_diff_FL_long$`Choice Context`)
E2_diff_FL_long$`Choice Context` <- ordered(E2_diff_FL_long$`Choice Context`, levels = c("No Choice", "Choice"))
E2_diff_FL_long$ResponseId <- as.factor(E2_diff_FL_long$ResponseId)


# Combine into one dataset for plotting
E2_diff_all_long <- rbind(E2_diff_SL_long, E2_diff_FL_long)
# Reorder All_long BSs_cond
E2_diff_all_long$BSs_cond <- as.factor(E2_diff_all_long$BSs_cond)
E2_diff_all_long$BSs_cond <- ordered(E2_diff_all_long$BSs_cond, levels = c("Stranger-Like", "Friend-Like"))
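The reshaping above pivots each difference-score measure separately and binds the results, mirroring the structure of the raw-score reshaping used earlier. As a sketch only (not the approach used for the analyses), the same long format could be produced in a single call, assuming every difference-score column follows the NoChoice_/Choice_ + CUZminusSIB_ + measure naming pattern shown above; the scenario text columns (e.g., SL_Dist_Scen and SL_Close_Scen) would still need to be handled separately.

# sketch: reshape all CUZminusSIB difference scores in one pivot (Stranger-Like shown)
E2_diff_SL_long_alt <- E2_SL_clean %>%
  pivot_longer(
    cols = matches("^(NoChoice|Choice)_CUZminusSIB_"),
    names_pattern = "^(NoChoice|Choice)_CUZminusSIB_(.+)$",
    names_to = c("Choice Context", ".value")
  ) %>%
  mutate(`Choice Context` = recode(`Choice Context`, NoChoice = "No Choice"),
         `Choice Context` = ordered(`Choice Context`, levels = c("No Choice", "Choice")))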

Stranger-Like

print(oblig_moral_diff_plot_SL <- ggplot(data = E2_diff_SL_long,
                                                    aes(x = oblig, y = moral)) +
        geom_jitter(color = "darkorchid1", alpha = 0.5) +
        geom_smooth(method = 'lm', color = "darkorchid1") +
        facet_wrap(~`Choice Context`) +
        scale_x_continuous(limits = c(-101,101), breaks = c(-100,-50,0,50,100)) +
        scale_y_continuous(limits = c(-101,101), breaks = c(-100,-50,0,50,100)) +
        theme_classic() +
        xlab("Obligation Strength Difference (Distant - Close)") +
        ylab("Moral Character Difference (Distant - Close)") +
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12)))

Friend-Like

print(oblig_moral_diff_plot_FL <- ggplot(data = E2_diff_FL_long,
                                                    aes(x = oblig, y = moral)) +
        geom_jitter(color = "darkorchid1", alpha = 0.5) +
        geom_smooth(method = 'lm', color = "darkorchid1") +
        facet_wrap(~`Choice Context`) +
        scale_x_continuous(limits = c(-101,101), breaks = c(-100,-50,0,50,100)) +
        scale_y_continuous(limits = c(-101,101), breaks = c(-100,-50,0,50,100)) +
        theme_classic() +
        xlab("Obligation Strength Difference (Distant - Close)") +
        ylab("Moral Character Difference (Distant - Close)") +
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12)))

Combined

print(oblig_moral_diff_plot_combined <- ggplot(data = E2_diff_all_long,
                                                    aes(x = oblig, y = moral)) +
        geom_jitter(color = "darkorchid1", alpha = 0.5) +
        geom_smooth(method = 'lm', color = "darkorchid1") +
        facet_wrap(BSs_cond~`Choice Context`, nrow = 2) +
        scale_x_continuous(limits = c(-101,101), breaks = c(-100,-50,0,50,100)) +
        scale_y_continuous(limits = c(-101,101), breaks = c(-100,-50,0,50,100)) +
        theme_classic() +
        xlab("\nObligation Strength Difference (Distant - Close)") +
        ylab("Moral Character Difference (Distant - Close)\n") +
        theme(axis.title.x = element_text(size = 18), 
              axis.title.y = element_text(size = 18),
              axis.text.x = element_text(color = "black", size = 16), 
              axis.text.y = element_text(color = "black", size = 16),
              strip.text.x = element_text(color = "black", size = 16)))


ggsave("E2_moral~oblig_plot.png")
Saving 14 x 9 in image
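ggsave() is called without a plot or size argument here, so it writes the last plot displayed at the current device size (hence the 14 x 9 message). If the saved file needs to be reproducible independent of the device, the arguments can be made explicit; a minimal sketch:

# sketch: save the combined plot with explicit plot and size arguments
ggsave("E2_moral~oblig_plot.png",
       plot = oblig_moral_diff_plot_combined,
       width = 14, height = 9, units = "in")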

Moral Diff ~ Oblig Diff Tests


See our pre-registration (INSERT LINK) and manuscript for our predictions about the relationship between obligation differences and moral character differences.


Stranger-Like

No Choice

# pearson's r
cor_test(E2_SL_clean, "NoChoice_CUZminusSIB_oblig", "NoChoice_CUZminusSIB_moral", method = "Pearson")
Parameter1                 |                 Parameter2 |     r |        95% CI | t(352) |     p
------------------------------------------------------------------------------------------------
NoChoice_CUZminusSIB_oblig | NoChoice_CUZminusSIB_moral | -0.02 | [-0.13, 0.08] |  -0.42 | 0.674

Observations: 354

Choice

# pearson's r
cor_test(E2_SL_clean, "Choice_CUZminusSIB_oblig", "Choice_CUZminusSIB_moral", method = "Pearson")
Parameter1               |               Parameter2 |    r |       95% CI | t(352) |         p
----------------------------------------------------------------------------------------------
Choice_CUZminusSIB_oblig | Choice_CUZminusSIB_moral | 0.27 | [0.17, 0.36] |   5.20 | < .001***

Observations: 354

Friend-Like

No Choice

# pearson's r
cor_test(E2_FL_clean, "NoChoice_CUZminusSIB_oblig", "NoChoice_CUZminusSIB_moral", method = "Pearson")
Parameter1                 |                 Parameter2 |    r |        95% CI | t(343) |     p
-----------------------------------------------------------------------------------------------
NoChoice_CUZminusSIB_oblig | NoChoice_CUZminusSIB_moral | 0.06 | [-0.05, 0.16] |   1.07 | 0.284

Observations: 345

Choice

# pearson's r
cor_test(E2_FL_clean, "Choice_CUZminusSIB_oblig", "Choice_CUZminusSIB_moral", method = "Pearson")
Parameter1               |               Parameter2 |    r |       95% CI | t(343) |       p
--------------------------------------------------------------------------------------------
Choice_CUZminusSIB_oblig | Choice_CUZminusSIB_moral | 0.14 | [0.04, 0.24] |   2.67 | 0.008**

Observations: 345

Moral ~ Oblig R-M Plots

Stranger-Like

No Choice

rmcorr_SL_NoChoice <- rmcorr(participant = ResponseId, 
                                  measure1 = oblig, 
                                  measure2 = moral, 
                                  dataset = E2_all_long %>%
                                    filter(BSs_cond == "Stranger-Like") %>%
                                    filter(`Choice Context` == "No Choice"))
print(rmcorr_plot_SL_NoChoice <- ggplot(data = E2_all_long %>%
                                    filter(BSs_cond == "Stranger-Like") %>%
                                    filter(`Choice Context` == "No Choice"),
                                       aes(x = oblig, y = moral, group = ResponseId, color = ResponseId)) +
                                         geom_point(aes(color = ResponseId)) +
                                         geom_line(aes(y = rmcorr_SL_NoChoice$model$fitted.values), linetype = 1) +
                                         scale_x_continuous(limits = c(-5,105), breaks = c(0,25,50,75,100)) +
                                         scale_y_continuous(limits = c(-5,105), breaks = c(0,25,50,75,100)) +
                                         theme_classic() +
                                         xlab("Obligation Strength") +
                                         ylab("Moral Character") +
                                         theme(axis.title.x = element_text(size = 14), 
                                               axis.title.y = element_text(size = 14),
                                               axis.text.x = element_text(color = "black", size = 12), 
                                               axis.text.y = element_text(color = "black", size = 12),
                                               legend.position = "none"))

Choice

rmcorr_SL_Choice <- rmcorr(participant = ResponseId, 
                                  measure1 = oblig, 
                                  measure2 = moral, 
                                  dataset = E2_all_long %>%
                                    filter(BSs_cond == "Stranger-Like") %>%
                                    filter(`Choice Context` == "Choice"))
print(rmcorr_plot_SL_Choice <- ggplot(data = E2_all_long %>%
                                    filter(BSs_cond == "Stranger-Like") %>%
                                    filter(`Choice Context` == "Choice"),
                                       aes(x = oblig, y = moral, group = ResponseId, color = ResponseId)) +
                                         geom_point(aes(color = ResponseId)) +
                                         geom_line(aes(y = rmcorr_SL_Choice$model$fitted.values), linetype = 1) +
                                         scale_x_continuous(limits = c(-5,105), breaks = c(0,25,50,75,100)) +
                                         scale_y_continuous(limits = c(-5,105), breaks = c(0,25,50,75,100)) +
                                         theme_classic() +
                                         xlab("Obligation Strength") +
                                         ylab("Moral Character") +
                                         theme(axis.title.x = element_text(size = 14), 
                                               axis.title.y = element_text(size = 14),
                                               axis.text.x = element_text(color = "black", size = 12), 
                                               axis.text.y = element_text(color = "black", size = 12),
                                               legend.position = "none"))

Friend-Like

No Choice

rmcorr_FL_NoChoice <- rmcorr(participant = ResponseId, 
                                  measure1 = oblig, 
                                  measure2 = moral, 
                                  dataset = E2_all_long %>%
                                    filter(BSs_cond == "Friend-Like") %>%
                                    filter(`Choice Context` == "No Choice"))
print(rmcorr_plot_FL_NoChoice <- ggplot(data = E2_all_long %>%
                                    filter(BSs_cond == "Friend-Like") %>%
                                    filter(`Choice Context` == "No Choice"),
                                       aes(x = oblig, y = moral, group = ResponseId, color = ResponseId)) +
                                         geom_point(aes(color = ResponseId)) +
                                         geom_line(aes(y = rmcorr_FL_NoChoice$model$fitted.values), linetype = 1) +
                                         scale_x_continuous(limits = c(-5,105), breaks = c(0,25,50,75,100)) +
                                         scale_y_continuous(limits = c(-5,105), breaks = c(0,25,50,75,100)) +
                                         theme_classic() +
                                         xlab("Obligation Strength") +
                                         ylab("Moral Character") +
                                         theme(axis.title.x = element_text(size = 14), 
                                               axis.title.y = element_text(size = 14),
                                               axis.text.x = element_text(color = "black", size = 12), 
                                               axis.text.y = element_text(color = "black", size = 12),
                                               legend.position = "none"))

Choice

rmcorr_FL_Choice <- rmcorr(participant = ResponseId, 
                                  measure1 = oblig, 
                                  measure2 = moral, 
                                  dataset = E2_all_long %>%
                                    filter(BSs_cond == "Friend-Like") %>%
                                    filter(`Choice Context` == "Choice"))
print(rmcorr_plot_FL_Choice <- ggplot(data = E2_all_long %>%
                                    filter(BSs_cond == "Friend-Like") %>%
                                    filter(`Choice Context` == "Choice"),
                                       aes(x = oblig, y = moral, group = ResponseId, color = ResponseId)) +
                                         geom_point(aes(color = ResponseId)) +
                                         geom_line(aes(y = rmcorr_FL_Choice$model$fitted.values), linetype = 1) +
                                         scale_x_continuous(limits = c(-5,105), breaks = c(0,25,50,75,100)) +
                                         scale_y_continuous(limits = c(-5,105), breaks = c(0,25,50,75,100)) +
                                         theme_classic() +
                                         xlab("Obligation Strength") +
                                         ylab("Moral Character") +
                                         theme(axis.title.x = element_text(size = 14), 
                                               axis.title.y = element_text(size = 14),
                                               axis.text.x = element_text(color = "black", size = 12), 
                                               axis.text.y = element_text(color = "black", size = 12),
                                               legend.position = "none"))
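The repeated-measures correlation plots above are constructed manually from the fitted values stored in each rmcorr object. The rmcorr package also ships a base-graphics plot() method for these objects that draws a comparable per-participant fit; a minimal sketch (depending on the package version, the original data frame may need to be supplied as the second argument):

# sketch: rmcorr's built-in plotting for one of the fitted objects
plot(rmcorr_FL_Choice,
     E2_all_long %>%
       filter(BSs_cond == "Friend-Like") %>%
       filter(`Choice Context` == "Choice"))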

Moral ~ Oblig R-M Tests


See our pre-registration (INSERT LINK) for our predictions about the within-individual relationship between obligation judgments and moral character judgments.


Stranger-Like

No Choice

print(rmcorr_SL_NoChoice)

Repeated measures correlation

r
-0.02706391

degrees of freedom
353

p-value
0.6113006

95% confidence interval
-0.1310765 0.07753801 

Choice

print(rmcorr_SL_Choice)

Repeated measures correlation

r
0.4382157

degrees of freedom
353

p-value
4.325398e-18

95% confidence interval
0.3498351 0.5188646 

Friend-Like

No Choice

print(rmcorr_FL_NoChoice)

Repeated measures correlation

r
0.05185481

degrees of freedom
344

p-value
0.3361943

95% confidence interval
-0.05418352 0.1567366 

Choice

print(rmcorr_FL_Choice)

Repeated measures correlation

r
0.2379516

degrees of freedom
344

p-value
7.656461e-06

95% confidence interval
0.1356228 0.3352573 
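For reporting, the four repeated-measures correlations printed above can also be collected into a single summary table straight from the stored objects (rmcorr() returns the elements r, df, p, and CI); a minimal sketch:

# sketch: gather the four rmcorr fits into one summary table
rmcorr_summary <- tibble(
  BSs_cond = c("Stranger-Like", "Stranger-Like", "Friend-Like", "Friend-Like"),
  `Choice Context` = c("No Choice", "Choice", "No Choice", "Choice"),
  fit = list(rmcorr_SL_NoChoice, rmcorr_SL_Choice, rmcorr_FL_NoChoice, rmcorr_FL_Choice)
) %>%
  mutate(r        = map_dbl(fit, "r"),
         df       = map_dbl(fit, "df"),
         p        = map_dbl(fit, "p"),
         ci_lower = map_dbl(fit, ~ .x$CI[1]),
         ci_upper = map_dbl(fit, ~ .x$CI[2])) %>%
  select(-fit)

rmcorr_summary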

Ind. Diff Internal Reliability Tests

MAC

Family

Stranger-Like

# create dataset with only MAC Family Values variables
E2_SL_clean_MAC_Fam_only <- E2_SL_clean %>% select(MAC_Jud_1:MAC_Jud_3, MAC_Rel_1:MAC_Rel_3)

psych::alpha(E2_SL_clean_MAC_Fam_only)
Number of categories should be increased  in order to count frequencies. 

Reliability analysis   
Call: psych::alpha(x = E2_SL_clean_MAC_Fam_only)

 

 lower alpha upper     95% confidence boundaries
0.88 0.9 0.92 

 Reliability if an item is dropped:

 Item statistics 

Friend-Like

# create dataset with only MAC Family Values variables
E2_FL_clean_MAC_Fam_only <- E2_FL_clean %>% select(MAC_Jud_1:MAC_Jud_3, MAC_Rel_1:MAC_Rel_3)

psych::alpha(E2_FL_clean_MAC_Fam_only)
Number of categories should be increased  in order to count frequencies. 

Reliability analysis   
Call: psych::alpha(x = E2_FL_clean_MAC_Fam_only)

 

 lower alpha upper     95% confidence boundaries
0.87 0.89 0.91 

 Reliability if an item is dropped:

 Item statistics 

Group

Stranger-Like

# create dataset with only MAC Group Values variables
E2_SL_clean_MAC_Group_only <- E2_SL_clean %>% select(MAC_Jud_4:MAC_Jud_6, MAC_Rel_4:MAC_Rel_6)

psych::alpha(E2_SL_clean_MAC_Group_only)
Number of categories should be increased  in order to count frequencies. 

Reliability analysis   
Call: psych::alpha(x = E2_SL_clean_MAC_Group_only)

 

 lower alpha upper     95% confidence boundaries
0.85 0.87 0.89 

 Reliability if an item is dropped:

 Item statistics 

Friend-Like

# create dataset with only MAC Group Values variables
E2_FL_clean_MAC_Group_only <- E2_FL_clean %>% select(MAC_Jud_4:MAC_Jud_6, MAC_Rel_4:MAC_Rel_6)

psych::alpha(E2_FL_clean_MAC_Group_only)
Number of categories should be increased  in order to count frequencies. 

Reliability analysis   
Call: psych::alpha(x = E2_FL_clean_MAC_Group_only)

 

 lower alpha upper     95% confidence boundaries
0.82 0.85 0.87 

 Reliability if an item is dropped:

 Item statistics 

Reciprocity

Stranger-Like

# create dataset with only MAC Reciprocity Values variables
E2_SL_clean_MAC_Rec_only <- E2_SL_clean %>% select(MAC_Jud_7:MAC_Jud_9, MAC_Rel_7:MAC_Rel_9)

psych::alpha(E2_SL_clean_MAC_Rec_only)
Number of categories should be increased  in order to count frequencies. 

Reliability analysis   
Call: psych::alpha(x = E2_SL_clean_MAC_Rec_only)

 

 lower alpha upper     95% confidence boundaries
0.79 0.82 0.85 

 Reliability if an item is dropped:

 Item statistics 

Friend-Like

# create dataset with only MAC Reciprocity Values variables
E2_FL_clean_MAC_Rec_only <- E2_FL_clean %>% select(MAC_Jud_7:MAC_Jud_9, MAC_Rel_7:MAC_Rel_9)

psych::alpha(E2_FL_clean_MAC_Rec_only)
Number of categories should be increased  in order to count frequencies. 

Reliability analysis   
Call: psych::alpha(x = E2_FL_clean_MAC_Rec_only)

 

 lower alpha upper     95% confidence boundaries
0.8 0.83 0.85 

 Reliability if an item is dropped:

 Item statistics 

Heroism

Stranger-Like

# create dataset with only MAC Heroism Values variables
E2_SL_clean_MAC_Hero_only <- E2_SL_clean %>% select(MAC_Jud_10:MAC_Jud_12, MAC_Rel_10:MAC_Rel_12)

psych::alpha(E2_SL_clean_MAC_Hero_only)
Number of categories should be increased  in order to count frequencies. 

Reliability analysis   
Call: psych::alpha(x = E2_SL_clean_MAC_Hero_only)

 

 lower alpha upper     95% confidence boundaries
0.83 0.85 0.88 

 Reliability if an item is dropped:

 Item statistics 

Friend-Like

# create dataset with only MAC Heroism Values variables
E2_FL_clean_MAC_Hero_only <- E2_FL_clean %>% select(MAC_Jud_10:MAC_Jud_12, MAC_Rel_10:MAC_Rel_12)

psych::alpha(E2_FL_clean_MAC_Hero_only)
Number of categories should be increased  in order to count frequencies. 

Reliability analysis   
Call: psych::alpha(x = E2_FL_clean_MAC_Hero_only)

 

 lower alpha upper     95% confidence boundaries
0.8 0.83 0.86 

 Reliability if an item is dropped:

 Item statistics 

Authority

Stranger-Like

# create dataset with only MAC Authority Values variables
E2_SL_clean_MAC_Auth_only <- E2_SL_clean %>% select(MAC_Jud_13:MAC_Jud_15, MAC_Rel_13:MAC_Rel_15)

psych::alpha(E2_SL_clean_MAC_Auth_only)
Number of categories should be increased  in order to count frequencies. 

Reliability analysis   
Call: psych::alpha(x = E2_SL_clean_MAC_Auth_only)

 

 lower alpha upper     95% confidence boundaries
0.88 0.9 0.92 

 Reliability if an item is dropped:

 Item statistics 

Friend-Like

# create dataset with only MAC Authority Values variables
E2_FL_clean_MAC_Auth_only <- E2_FL_clean %>% select(MAC_Jud_13:MAC_Jud_15, MAC_Rel_13:MAC_Rel_15)

psych::alpha(E2_FL_clean_MAC_Auth_only)
Number of categories should be increased  in order to count frequencies. 

Reliability analysis   
Call: psych::alpha(x = E2_FL_clean_MAC_Auth_only)

 

 lower alpha upper     95% confidence boundaries
0.86 0.88 0.9 

 Reliability if an item is dropped:

 Item statistics 

Fairness

Stranger-Like

# create dataset with only MAC Fairness Values variables
E2_SL_clean_MAC_Fair_only <- E2_SL_clean %>% select(MAC_Jud_16:MAC_Jud_18, MAC_Rel_16:MAC_Rel_18)

psych::alpha(E2_SL_clean_MAC_Fair_only)
Number of categories should be increased  in order to count frequencies. 

Reliability analysis   
Call: psych::alpha(x = E2_SL_clean_MAC_Fair_only)

 

 lower alpha upper     95% confidence boundaries
0.66 0.7 0.75 

 Reliability if an item is dropped:

 Item statistics 

Friend-Like

# create dataset with only MAC Fairness Values variables
E2_FL_clean_MAC_Fair_only <- E2_FL_clean %>% select(MAC_Jud_16:MAC_Jud_18, MAC_Rel_16:MAC_Rel_18)

psych::alpha(E2_FL_clean_MAC_Fair_only)
Number of categories should be increased  in order to count frequencies. 

Reliability analysis   
Call: psych::alpha(x = E2_FL_clean_MAC_Fair_only)

 

 lower alpha upper     95% confidence boundaries
0.69 0.74 0.78 

 Reliability if an item is dropped:

 Item statistics 

Property

Stranger-Like

# create dataset with only MAC Property Values variables
E2_SL_clean_MAC_Prop_only <- E2_SL_clean %>% select(MAC_Jud_19_r:MAC_Jud_21_r, MAC_Rel_19:MAC_Rel_21)

psych::alpha(E2_SL_clean_MAC_Prop_only)
Number of categories should be increased  in order to count frequencies. 

Reliability analysis   
Call: psych::alpha(x = E2_SL_clean_MAC_Prop_only)

 

 lower alpha upper     95% confidence boundaries
0.59 0.64 0.7 

 Reliability if an item is dropped:

 Item statistics 

Friend-Like

# create dataset with only MAC Property Values variables
E2_FL_clean_MAC_Prop_only <- E2_FL_clean %>% select(MAC_Jud_19_r:MAC_Jud_21_r, MAC_Rel_19:MAC_Rel_21)

psych::alpha(E2_FL_clean_MAC_Prop_only)
Number of categories should be increased  in order to count frequencies. 

Reliability analysis   
Call: psych::alpha(x = E2_FL_clean_MAC_Prop_only)

 

 lower alpha upper     95% confidence boundaries
0.64 0.69 0.74 

 Reliability if an item is dropped:

 Item statistics 
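The select-then-alpha pattern above repeats once per MAC subscale (and again for the MFT and OUS scales below). As a sketch only, the raw alphas could be computed in one pass by looping over the item sets and pulling the $total$raw_alpha element from each psych::alpha() fit; the reverse-scored Property judgment item names are assumed to follow the _r suffix shown in the selections above.

# sketch: raw alphas for all MAC subscales in one pass (Stranger-Like shown)
mac_subscales <- list(
  Family      = c(paste0("MAC_Jud_", 1:3),   paste0("MAC_Rel_", 1:3)),
  Group       = c(paste0("MAC_Jud_", 4:6),   paste0("MAC_Rel_", 4:6)),
  Reciprocity = c(paste0("MAC_Jud_", 7:9),   paste0("MAC_Rel_", 7:9)),
  Heroism     = c(paste0("MAC_Jud_", 10:12), paste0("MAC_Rel_", 10:12)),
  Authority   = c(paste0("MAC_Jud_", 13:15), paste0("MAC_Rel_", 13:15)),
  Fairness    = c(paste0("MAC_Jud_", 16:18), paste0("MAC_Rel_", 16:18)),
  Property    = c(paste0("MAC_Jud_", 19:21, "_r"), paste0("MAC_Rel_", 19:21)) # item names assumed
)

map_dbl(mac_subscales,
        ~ psych::alpha(E2_SL_clean %>% select(all_of(.x)))$total$raw_alpha)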

MFT

Harm

Stranger-Like

E2_SL_clean_MFQ_Harm_only <- E2_SL_clean %>% select(MFQ_Jud_1:MFQ_Jud_3, MFQ_Rel_1:MFQ_Rel_3)
psych::alpha(E2_SL_clean_MFQ_Harm_only)

Reliability analysis   
Call: psych::alpha(x = E2_SL_clean_MFQ_Harm_only)

 

 lower alpha upper     95% confidence boundaries
0.7 0.74 0.78 

 Reliability if an item is dropped:

 Item statistics 

Non missing response frequency for each item
             1    2    3    4    5    6 miss
MFQ_Jud_1 0.02 0.01 0.05 0.22 0.30 0.40    0
MFQ_Jud_2 0.02 0.03 0.07 0.13 0.22 0.53    0
MFQ_Jud_3 0.13 0.16 0.18 0.13 0.19 0.21    0
MFQ_Rel_1 0.02 0.06 0.12 0.22 0.34 0.24    0
MFQ_Rel_2 0.02 0.03 0.07 0.22 0.33 0.33    0
MFQ_Rel_3 0.02 0.01 0.04 0.14 0.30 0.48    0

Friend-Like

E2_FL_clean_MFQ_Harm_only <- E2_FL_clean %>% select(MFQ_Jud_1:MFQ_Jud_3, MFQ_Rel_1:MFQ_Rel_3)
psych::alpha(E2_FL_clean_MFQ_Harm_only)

Reliability analysis   
Call: psych::alpha(x = E2_FL_clean_MFQ_Harm_only)

 

 lower alpha upper     95% confidence boundaries
0.73 0.77 0.81 

 Reliability if an item is dropped:

 Item statistics 

Non missing response frequency for each item
             1    2    3    4    5    6 miss
MFQ_Jud_1 0.01 0.02 0.07 0.20 0.33 0.37    0
MFQ_Jud_2 0.04 0.06 0.06 0.10 0.21 0.52    0
MFQ_Jud_3 0.14 0.14 0.18 0.16 0.21 0.17    0
MFQ_Rel_1 0.04 0.05 0.08 0.23 0.33 0.26    0
MFQ_Rel_2 0.04 0.05 0.07 0.21 0.34 0.30    0
MFQ_Rel_3 0.02 0.02 0.03 0.17 0.32 0.43    0

Fairness

Stranger-Like

E2_SL_clean_MFQ_Fair_only <- E2_SL_clean %>% select(MFQ_Jud_4:MFQ_Jud_6, MFQ_Rel_4:MFQ_Rel_6)
psych::alpha(E2_SL_clean_MFQ_Fair_only)

Reliability analysis   
Call: psych::alpha(x = E2_SL_clean_MFQ_Fair_only)

 

 lower alpha upper     95% confidence boundaries
0.65 0.7 0.75 

 Reliability if an item is dropped:

 Item statistics 

Non missing response frequency for each item
             1    2    3    4    5    6 miss
MFQ_Jud_4 0.01 0.01 0.05 0.13 0.28 0.53    0
MFQ_Jud_5 0.02 0.04 0.10 0.22 0.33 0.29    0
MFQ_Jud_6 0.25 0.16 0.14 0.18 0.12 0.16    0
MFQ_Rel_4 0.01 0.04 0.07 0.25 0.32 0.31    0
MFQ_Rel_5 0.00 0.02 0.11 0.21 0.33 0.32    0
MFQ_Rel_6 0.01 0.01 0.06 0.13 0.32 0.47    0

Friend-Like

E2_FL_clean_MFQ_Fair_only <- E2_FL_clean %>% select(MFQ_Jud_4:MFQ_Jud_6, MFQ_Rel_4:MFQ_Rel_6)
psych::alpha(E2_FL_clean_MFQ_Fair_only)

Reliability analysis   
Call: psych::alpha(x = E2_FL_clean_MFQ_Fair_only)

 

 lower alpha upper     95% confidence boundaries
0.72 0.76 0.8 

 Reliability if an item is dropped:

 Item statistics 

Non missing response frequency for each item
             1    2    3    4    5    6 miss
MFQ_Jud_4 0.02 0.04 0.04 0.15 0.27 0.48    0
MFQ_Jud_5 0.03 0.04 0.09 0.24 0.32 0.28    0
MFQ_Jud_6 0.22 0.17 0.17 0.16 0.14 0.13    0
MFQ_Rel_4 0.03 0.05 0.10 0.17 0.37 0.28    0
MFQ_Rel_5 0.03 0.03 0.09 0.21 0.35 0.29    0
MFQ_Rel_6 0.02 0.02 0.05 0.12 0.30 0.48    0

Loyalty

Stranger-Like

E2_SL_clean_MFQ_Loyalty_only <- E2_SL_clean %>% select(MFQ_Jud_7:MFQ_Jud_9, MFQ_Rel_7:MFQ_Rel_9)
psych::alpha(E2_SL_clean_MFQ_Loyalty_only)

Reliability analysis   
Call: psych::alpha(x = E2_SL_clean_MFQ_Loyalty_only)

 

 lower alpha upper     95% confidence boundaries
0.76 0.79 0.82 

 Reliability if an item is dropped:

 Item statistics 

Non missing response frequency for each item
             1    2    3    4    5    6 miss
MFQ_Jud_7 0.16 0.17 0.17 0.22 0.19 0.11    0
MFQ_Jud_8 0.17 0.19 0.22 0.22 0.14 0.06    0
MFQ_Jud_9 0.13 0.18 0.31 0.25 0.10 0.03    0
MFQ_Rel_7 0.22 0.23 0.21 0.20 0.09 0.05    0
MFQ_Rel_8 0.10 0.19 0.15 0.29 0.19 0.07    0
MFQ_Rel_9 0.10 0.19 0.18 0.30 0.16 0.06    0

Friend-Like

E2_FL_clean_MFQ_Loyalty_only <- E2_FL_clean %>% select(MFQ_Jud_7:MFQ_Jud_9, MFQ_Rel_7:MFQ_Rel_9)
psych::alpha(E2_FL_clean_MFQ_Loyalty_only)

Reliability analysis   
Call: psych::alpha(x = E2_FL_clean_MFQ_Loyalty_only)

 

 lower alpha upper     95% confidence boundaries
0.7 0.74 0.78 

 Reliability if an item is dropped:

 Item statistics 

Non missing response frequency for each item
             1    2    3    4    5    6 miss
MFQ_Jud_7 0.13 0.14 0.17 0.24 0.19 0.12    0
MFQ_Jud_8 0.14 0.17 0.22 0.24 0.17 0.06    0
MFQ_Jud_9 0.10 0.20 0.29 0.25 0.12 0.04    0
MFQ_Rel_7 0.24 0.27 0.16 0.19 0.10 0.05    0
MFQ_Rel_8 0.09 0.14 0.23 0.26 0.19 0.10    0
MFQ_Rel_9 0.10 0.18 0.19 0.25 0.18 0.10    0

Authority

Stranger-Like

E2_SL_clean_MFQ_Auth_only <- E2_SL_clean %>% select(MFQ_Jud_10:MFQ_Jud_12, MFQ_Rel_10:MFQ_Rel_12)
psych::alpha(E2_SL_clean_MFQ_Auth_only)

Reliability analysis   
Call: psych::alpha(x = E2_SL_clean_MFQ_Auth_only)

 

 lower alpha upper     95% confidence boundaries
0.77 0.8 0.84 

 Reliability if an item is dropped:

 Item statistics 

Non missing response frequency for each item
              1    2    3    4    5    6 miss
MFQ_Jud_10 0.09 0.10 0.07 0.32 0.25 0.17    0
MFQ_Jud_11 0.23 0.15 0.15 0.22 0.12 0.12    0
MFQ_Jud_12 0.11 0.10 0.18 0.29 0.19 0.13    0
MFQ_Rel_10 0.17 0.19 0.19 0.24 0.16 0.04    0
MFQ_Rel_11 0.23 0.26 0.21 0.17 0.08 0.03    0
MFQ_Rel_12 0.03 0.06 0.13 0.25 0.33 0.20    0

Friend-Like

E2_FL_clean_MFQ_Auth_only <- E2_FL_clean %>% select(MFQ_Jud_10:MFQ_Jud_12, MFQ_Rel_10:MFQ_Rel_12)
psych::alpha(E2_FL_clean_MFQ_Auth_only)

Reliability analysis   
Call: psych::alpha(x = E2_FL_clean_MFQ_Auth_only)

 

 lower alpha upper     95% confidence boundaries
0.71 0.75 0.79 

 Reliability if an item is dropped:

 Item statistics 

Non missing response frequency for each item
              1    2    3    4    5    6 miss
MFQ_Jud_10 0.06 0.10 0.10 0.31 0.23 0.20    0
MFQ_Jud_11 0.21 0.15 0.09 0.28 0.17 0.10    0
MFQ_Jud_12 0.10 0.10 0.14 0.30 0.21 0.15    0
MFQ_Rel_10 0.13 0.22 0.21 0.22 0.14 0.08    0
MFQ_Rel_11 0.23 0.29 0.20 0.16 0.08 0.04    0
MFQ_Rel_12 0.04 0.07 0.14 0.25 0.30 0.20    0

Purity

Stranger-Like

E2_SL_clean_MFQ_Purity_only <- E2_SL_clean %>% select(MFQ_Jud_13:MFQ_Jud_15, MFQ_Rel_13:MFQ_Rel_15)
psych::alpha(E2_SL_clean_MFQ_Purity_only)

Reliability analysis   
Call: psych::alpha(x = E2_SL_clean_MFQ_Purity_only)

 

 lower alpha upper     95% confidence boundaries
0.88 0.9 0.91 

 Reliability if an item is dropped:

 Item statistics 

Non missing response frequency for each item
              1    2    3    4    5    6 miss
MFQ_Jud_13 0.17 0.16 0.14 0.21 0.16 0.16    0
MFQ_Jud_14 0.19 0.18 0.14 0.22 0.12 0.14    0
MFQ_Jud_15 0.26 0.13 0.12 0.20 0.16 0.14    0
MFQ_Rel_13 0.18 0.16 0.15 0.20 0.17 0.13    0
MFQ_Rel_14 0.20 0.18 0.19 0.19 0.14 0.09    0
MFQ_Rel_15 0.41 0.09 0.09 0.14 0.10 0.17    0

Friend-Like

E2_FL_clean_MFQ_Purity_only <- E2_FL_clean %>% select(MFQ_Jud_13:MFQ_Jud_15, MFQ_Rel_13:MFQ_Rel_15)
psych::alpha(E2_FL_clean_MFQ_Purity_only)

Reliability analysis   
Call: psych::alpha(x = E2_FL_clean_MFQ_Purity_only)

 

 lower alpha upper     95% confidence boundaries
0.83 0.85 0.88 

 Reliability if an item is dropped:

 Item statistics 

Non missing response frequency for each item
              1    2    3    4    5    6 miss
MFQ_Jud_13 0.13 0.18 0.18 0.21 0.15 0.15    0
MFQ_Jud_14 0.15 0.19 0.15 0.23 0.15 0.13    0
MFQ_Jud_15 0.28 0.13 0.12 0.21 0.13 0.13    0
MFQ_Rel_13 0.12 0.24 0.17 0.18 0.17 0.12    0
MFQ_Rel_14 0.19 0.26 0.19 0.15 0.13 0.08    0
MFQ_Rel_15 0.43 0.11 0.08 0.12 0.10 0.17    0

OUS

Impartial Beneficence

Stranger-Like

E2_SL_clean_OUS_IB_only <- E2_SL_clean %>% select(OUS_IB1:OUS_IB5)
psych::alpha(E2_SL_clean_OUS_IB_only)

Reliability analysis   
Call: psych::alpha(x = E2_SL_clean_OUS_IB_only)

 

 lower alpha upper     95% confidence boundaries
0.68 0.72 0.77 

 Reliability if an item is dropped:

 Item statistics 

Non missing response frequency for each item
           1    2    3    4    5    6    7 miss
OUS_IB1 0.21 0.19 0.16 0.19 0.16 0.06 0.03    0
OUS_IB2 0.16 0.23 0.15 0.19 0.17 0.06 0.03    0
OUS_IB3 0.06 0.11 0.16 0.16 0.22 0.17 0.12    0
OUS_IB4 0.10 0.20 0.17 0.14 0.21 0.11 0.06    0
OUS_IB5 0.16 0.21 0.17 0.15 0.19 0.09 0.03    0

Friend-Like

E2_FL_clean_OUS_IB_only <- E2_FL_clean %>% select(OUS_IB1:OUS_IB5)
psych::alpha(E2_FL_clean_OUS_IB_only)

Reliability analysis   
Call: psych::alpha(x = E2_FL_clean_OUS_IB_only)

 

 lower alpha upper     95% confidence boundaries
0.74 0.78 0.82 

 Reliability if an item is dropped:

 Item statistics 

Non missing response frequency for each item
           1    2    3    4    5    6    7 miss
OUS_IB1 0.19 0.18 0.19 0.21 0.15 0.04 0.03    0
OUS_IB2 0.18 0.18 0.16 0.22 0.19 0.07 0.02    0
OUS_IB3 0.07 0.12 0.17 0.13 0.21 0.19 0.11    0
OUS_IB4 0.10 0.15 0.21 0.15 0.22 0.13 0.04    0
OUS_IB5 0.16 0.19 0.19 0.19 0.17 0.06 0.04    0

Instrumental Harm

Stranger-Like

E2_SL_clean_OUS_IH_only <- E2_SL_clean %>% select(OUS_IH1:OUS_IH4)
psych::alpha(E2_SL_clean_OUS_IH_only)

Reliability analysis   
Call: psych::alpha(x = E2_SL_clean_OUS_IH_only)

 

 lower alpha upper     95% confidence boundaries
0.72 0.76 0.8 

 Reliability if an item is dropped:

 Item statistics 

Non missing response frequency for each item
           1    2    3    4    5    6    7 miss
OUS_IH1 0.15 0.22 0.18 0.24 0.14 0.05 0.02    0
OUS_IH2 0.30 0.26 0.16 0.14 0.11 0.01 0.02    0
OUS_IH3 0.19 0.19 0.12 0.20 0.16 0.10 0.04    0
OUS_IH4 0.14 0.17 0.15 0.28 0.20 0.05 0.01    0

Friend-Like

E2_FL_clean_OUS_IH_only <- E2_FL_clean %>% select(OUS_IH1:OUS_IH4)
psych::alpha(E2_FL_clean_OUS_IH_only)

Reliability analysis   
Call: psych::alpha(x = E2_FL_clean_OUS_IH_only)

 

 lower alpha upper     95% confidence boundaries
0.75 0.79 0.82 

 Reliability if an item is dropped:

 Item statistics 

Non missing response frequency for each item
           1    2    3    4    5    6    7 miss
OUS_IH1 0.19 0.18 0.15 0.26 0.18 0.02 0.03    0
OUS_IH2 0.29 0.22 0.15 0.17 0.13 0.04 0.01    0
OUS_IH3 0.20 0.15 0.11 0.22 0.17 0.09 0.05    0
OUS_IH4 0.16 0.15 0.11 0.27 0.23 0.05 0.03    0

Oblig ~ Ind. Diff Plots

MAC

Family

print(oblig_mac_Fam_plot <- ggplot(data = E2_all_long,
                                     aes(x = MAC_Fam_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMAC Family Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))


ggsave("E2_oblig~MAC_plot.png")
Saving 14 x 9 in image

Group

print(oblig_mac_Group_plot <- ggplot(data = E2_all_long,
                                     aes(x = MAC_Group_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMAC Group Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

Reciprocity

print(oblig_mac_Rec_plot <- ggplot(data = E2_all_long,
                                     aes(x = MAC_Rec_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMAC Reciprocity Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

Heroism

print(oblig_mac_Hero_plot <- ggplot(data = E2_all_long,
                                     aes(x = MAC_Hero_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMAC Heroism Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

Authority

print(oblig_mac_Auth_plot <- ggplot(data = E2_all_long,
                                     aes(x = MAC_Def_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMAC Authority Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

Fairness

print(oblig_mac_Fair_plot <- ggplot(data = E2_all_long,
                                     aes(x = MAC_Fair_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMAC Fairness Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

Property

print(oblig_mac_Prop_plot <- ggplot(data = E2_all_long,
                                     aes(x = MAC_Prop_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMAC Property Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))
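The MAC composite plots above (and the MFT and OUS plots that follow) all repeat the same ggplot template, varying only the x variable, axis label, and x scale. Below is a sketch of a small helper that could generate any of these panels; plot_oblig_by is a hypothetical name, not part of the original analysis code, and the font-size theme tweaks are omitted for brevity.

# sketch: template function for the obligation ~ individual-difference panels
plot_oblig_by <- function(data, xvar, xlab_text,
                          x_limits = c(-1, 101), x_breaks = c(0, 25, 50, 75, 100)) {
  ggplot(data, aes(x = .data[[xvar]], y = oblig, color = Relation)) +
    geom_jitter(alpha = 0.5) +
    scale_color_manual(values = c("lightskyblue3", "indianred3")) +
    geom_smooth(method = "lm") +
    facet_wrap(BSs_cond ~ `Choice Context`, ncol = 2) +
    scale_x_continuous(limits = x_limits, breaks = x_breaks) +
    scale_y_continuous(limits = c(-1, 101), breaks = c(0, 25, 50, 75, 100)) +
    theme_classic() +
    xlab(xlab_text) +
    ylab("Obligation Strength\n")
}

# example: approximates the MAC Family panel above
plot_oblig_by(E2_all_long, "MAC_Fam_Combined", "\nMAC Family Composite")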

MFT

Harm

print(oblig_mft_Harm_plot <- ggplot(data = E2_all_long,
                                     aes(x = MFQ_Harm_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(.5,6.5), breaks = c(1,2,3,4,5,6)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMFT Harm Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

Fairness

print(oblig_mft_Fairness_plot <- ggplot(data = E2_all_long,
                                     aes(x = MFQ_Fairness_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(.5,6.5), breaks = c(1,2,3,4,5,6)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMFT Fairness Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

Loyalty

print(oblig_mft_Loyalty_plot <- ggplot(data = E2_all_long,
                                     aes(x = MFQ_Loyalty_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(.5,6.5), breaks = c(1,2,3,4,5,6)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMFT Loyalty Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))


ggsave("E2_oblig~MFT_plot.png")
Saving 14 x 9 in image

Authority

print(oblig_mft_Authority_plot <- ggplot(data = E2_all_long,
                                     aes(x = MFQ_Authority_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(.5,6.5), breaks = c(1,2,3,4,5,6)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMFT Authority Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

Purity

print(oblig_mft_Purity_plot <- ggplot(data = E2_all_long,
                                     aes(x = MFQ_Purity_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(.5,6.5), breaks = c(1,2,3,4,5,6)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMFT Purity Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

OUS

Impartial Beneficence

print(oblig_ous_ib_plot <- ggplot(data = E2_all_long,
                                     aes(x = OUS_IB, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(.5,7.5), breaks = c(1,2,3,4,5,6,7)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nOUS Impartial Beneficence Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))


ggsave("E2_oblig~OUS_plot.png")
Saving 14 x 9 in image

Instrumental Harm

print(oblig_ous_ih_plot <- ggplot(data = E2_all_long,
                                     aes(x = OUS_IH, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(.5,7.5), breaks = c(1,2,3,4,5,6,7)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nOUS Instrumental Harm Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))
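
Note: the theme() block is identical across all of the scatterplots above. As a refactoring sketch only (not part of the original analysis script), the shared settings could be stored once in a theme object and added to each plot with "+ oblig_plot_theme"; the object name oblig_plot_theme is hypothetical.

# sketch only: reusable theme object for the obligation scatterplots above
oblig_plot_theme <- theme(axis.title.x = element_text(size = 18),
                          axis.title.y = element_text(size = 18),
                          axis.text.x = element_text(color = "black", size = 16),
                          axis.text.y = element_text(color = "black", size = 16),
                          strip.text.x = element_text(color = "black", size = 16),
                          legend.position = "right",
                          legend.title = element_text(color = "black", size = 18),
                          legend.text = element_text(color = "black", size = 16))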

Oblig ~ Ind. Diff Tests


See our pre-registration (INSERT LINK) and manuscript for our predictions about the relationships between individual differences and obligation judgments.
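
In the tests that follow, variables containing CUZ index obligation toward the more distant family member and variables containing SIB index obligation toward the closer family member (per the "distant"/"close" comments), with CUZminusSIB denoting their difference. A minimal sketch of how such a difference score could be derived, assuming it was computed this way earlier in the pipeline and using the variable names from the tests below:

# sketch only: difference scores assumed to be distant (CUZ) minus close (SIB)
E2_SL_clean <- E2_SL_clean %>%
  mutate(NoChoice_CUZminusSIB_oblig = NoChoice_CUZ_oblig - NoChoice_SIB_oblig,
         Choice_CUZminusSIB_oblig   = Choice_CUZ_oblig   - Choice_SIB_oblig)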


MAC

Family

Stranger-Like

No Choice
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Fam_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1       |         Parameter2 |    r |       95% CI | t(352) |         p
--------------------------------------------------------------------------------
MAC_Fam_Combined | NoChoice_CUZ_oblig | 0.31 | [0.21, 0.40] |   6.03 | < .001***

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MAC_Fam_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1       |         Parameter2 |    r |       95% CI | t(352) |         p
--------------------------------------------------------------------------------
MAC_Fam_Combined | NoChoice_SIB_oblig | 0.33 | [0.23, 0.42] |   6.48 | < .001***

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MAC_Fam_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1       |                 Parameter2 |     r |        95% CI | t(352) |     p
--------------------------------------------------------------------------------------
MAC_Fam_Combined | NoChoice_CUZminusSIB_oblig | -0.03 | [-0.13, 0.08] |  -0.47 | 0.638

Observations: 354
Choice
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Fam_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1       |       Parameter2 |    r |       95% CI | t(352) |         p
------------------------------------------------------------------------------
MAC_Fam_Combined | Choice_CUZ_oblig | 0.37 | [0.28, 0.46] |   7.54 | < .001***

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MAC_Fam_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1       |       Parameter2 |    r |       95% CI | t(352) |         p
------------------------------------------------------------------------------
MAC_Fam_Combined | Choice_SIB_oblig | 0.43 | [0.34, 0.51] |   8.82 | < .001***

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MAC_Fam_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1       |               Parameter2 |     r |         95% CI | t(352) |         p
-----------------------------------------------------------------------------------------
MAC_Fam_Combined | Choice_CUZminusSIB_oblig | -0.20 | [-0.30, -0.09] |  -3.77 | < .001***

Observations: 354

Friend-Like

No Choice
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Fam_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1       |         Parameter2 |    r |       95% CI | t(343) |         p
--------------------------------------------------------------------------------
MAC_Fam_Combined | NoChoice_CUZ_oblig | 0.25 | [0.15, 0.35] |   4.85 | < .001***

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MAC_Fam_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1       |         Parameter2 |    r |       95% CI | t(343) |         p
--------------------------------------------------------------------------------
MAC_Fam_Combined | NoChoice_SIB_oblig | 0.32 | [0.23, 0.42] |   6.36 | < .001***

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MAC_Fam_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1       |                 Parameter2 |     r |        95% CI | t(343) |     p
--------------------------------------------------------------------------------------
MAC_Fam_Combined | NoChoice_CUZminusSIB_oblig | -0.06 | [-0.16, 0.05] |  -1.02 | 0.307

Observations: 345
Choice
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Fam_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1       |       Parameter2 |    r |       95% CI | t(343) |         p
------------------------------------------------------------------------------
MAC_Fam_Combined | Choice_CUZ_oblig | 0.29 | [0.19, 0.38] |   5.64 | < .001***

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MAC_Fam_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1       |       Parameter2 |    r |       95% CI | t(343) |         p
------------------------------------------------------------------------------
MAC_Fam_Combined | Choice_SIB_oblig | 0.34 | [0.25, 0.43] |   6.78 | < .001***

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MAC_Fam_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1       |               Parameter2 |     r |         95% CI | t(343) |       p
---------------------------------------------------------------------------------------
MAC_Fam_Combined | Choice_CUZminusSIB_oblig | -0.17 | [-0.27, -0.06] |  -3.11 | 0.002**

Observations: 345
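
The same trio of tests (distant, close, and difference) repeats below for each remaining composite, between-subjects dataset, and choice context. As a compact sketch of how these calls could be generated programmatically (purrr::map() is available via the tidyverse loaded earlier; the helper run_oblig_cors is hypothetical):

# sketch only: run the three obligation correlations for one composite at once
run_oblig_cors <- function(data, composite, context = "NoChoice") {
  outcomes <- paste0(context, c("_CUZ_oblig", "_SIB_oblig", "_CUZminusSIB_oblig"))
  map(outcomes, ~ cor_test(data, composite, .x, method = "Pearson"))
}
run_oblig_cors(E2_SL_clean, "MAC_Group_Combined", context = "NoChoice")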

Group

Stranger-Like

No Choice
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Group_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1         |         Parameter2 |    r |       95% CI | t(352) |         p
----------------------------------------------------------------------------------
MAC_Group_Combined | NoChoice_CUZ_oblig | 0.22 | [0.12, 0.32] |   4.25 | < .001***

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MAC_Group_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1         |         Parameter2 |    r |       95% CI | t(352) |         p
----------------------------------------------------------------------------------
MAC_Group_Combined | NoChoice_SIB_oblig | 0.25 | [0.15, 0.34] |   4.77 | < .001***

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MAC_Group_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1         |                 Parameter2 |     r |        95% CI | t(352) |     p
----------------------------------------------------------------------------------------
MAC_Group_Combined | NoChoice_CUZminusSIB_oblig | -0.03 | [-0.13, 0.08] |  -0.52 | 0.602

Observations: 354
Choice
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Group_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1         |       Parameter2 |    r |       95% CI | t(352) |         p
--------------------------------------------------------------------------------
MAC_Group_Combined | Choice_CUZ_oblig | 0.28 | [0.18, 0.37] |   5.45 | < .001***

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MAC_Group_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1         |       Parameter2 |    r |       95% CI | t(352) |         p
--------------------------------------------------------------------------------
MAC_Group_Combined | Choice_SIB_oblig | 0.26 | [0.16, 0.36] |   5.13 | < .001***

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MAC_Group_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1         |               Parameter2 |     r |        95% CI | t(352) |     p
--------------------------------------------------------------------------------------
MAC_Group_Combined | Choice_CUZminusSIB_oblig | -0.01 | [-0.12, 0.09] |  -0.23 | 0.821

Observations: 354

Friend-Like

No Choice
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Group_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1         |         Parameter2 |    r |       95% CI | t(343) |         p
----------------------------------------------------------------------------------
MAC_Group_Combined | NoChoice_CUZ_oblig | 0.32 | [0.22, 0.41] |   6.24 | < .001***

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MAC_Group_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1         |         Parameter2 |    r |       95% CI | t(343) |         p
----------------------------------------------------------------------------------
MAC_Group_Combined | NoChoice_SIB_oblig | 0.26 | [0.16, 0.36] |   4.98 | < .001***

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MAC_Group_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1         |                 Parameter2 |    r |        95% CI | t(343) |     p
---------------------------------------------------------------------------------------
MAC_Group_Combined | NoChoice_CUZminusSIB_oblig | 0.08 | [-0.03, 0.18] |   1.48 | 0.140

Observations: 345
Choice
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Group_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1         |       Parameter2 |    r |       95% CI | t(343) |         p
--------------------------------------------------------------------------------
MAC_Group_Combined | Choice_CUZ_oblig | 0.34 | [0.24, 0.43] |   6.68 | < .001***

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MAC_Group_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1         |       Parameter2 |    r |       95% CI | t(343) |         p
--------------------------------------------------------------------------------
MAC_Group_Combined | Choice_SIB_oblig | 0.34 | [0.24, 0.43] |   6.69 | < .001***

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MAC_Group_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1         |               Parameter2 |         r |        95% CI | t(343) |     p
------------------------------------------------------------------------------------------
MAC_Group_Combined | Choice_CUZminusSIB_oblig | -8.27e-03 | [-0.11, 0.10] |  -0.15 | 0.878

Observations: 345

Reciprocity

Stranger-Like

No Choice
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Rec_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1       |         Parameter2 |    r |       95% CI | t(352) |         p
--------------------------------------------------------------------------------
MAC_Rec_Combined | NoChoice_CUZ_oblig | 0.21 | [0.10, 0.30] |   3.94 | < .001***

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MAC_Rec_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1       |         Parameter2 |    r |       95% CI | t(352) |         p
--------------------------------------------------------------------------------
MAC_Rec_Combined | NoChoice_SIB_oblig | 0.30 | [0.20, 0.39] |   5.92 | < .001***

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MAC_Rec_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1       |                 Parameter2 |     r |        95% CI | t(352) |     p
--------------------------------------------------------------------------------------
MAC_Rec_Combined | NoChoice_CUZminusSIB_oblig | -0.09 | [-0.19, 0.01] |  -1.73 | 0.085

Observations: 354
Choice
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Rec_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1       |       Parameter2 |    r |       95% CI | t(352) |         p
------------------------------------------------------------------------------
MAC_Rec_Combined | Choice_CUZ_oblig | 0.30 | [0.21, 0.40] |   5.98 | < .001***

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MAC_Rec_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1       |       Parameter2 |    r |       95% CI | t(352) |         p
------------------------------------------------------------------------------
MAC_Rec_Combined | Choice_SIB_oblig | 0.34 | [0.25, 0.43] |   6.85 | < .001***

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MAC_Rec_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1       |               Parameter2 |     r |         95% CI | t(352) |       p
---------------------------------------------------------------------------------------
MAC_Rec_Combined | Choice_CUZminusSIB_oblig | -0.15 | [-0.25, -0.05] |  -2.87 | 0.004**

Observations: 354

Friend-Like

No Choice
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Rec_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1       |         Parameter2 |    r |       95% CI | t(343) |         p
--------------------------------------------------------------------------------
MAC_Rec_Combined | NoChoice_CUZ_oblig | 0.25 | [0.15, 0.34] |   4.73 | < .001***

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MAC_Rec_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1       |         Parameter2 |    r |       95% CI | t(343) |         p
--------------------------------------------------------------------------------
MAC_Rec_Combined | NoChoice_SIB_oblig | 0.29 | [0.19, 0.38] |   5.51 | < .001***

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MAC_Rec_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1       |                 Parameter2 |     r |        95% CI | t(343) |     p
--------------------------------------------------------------------------------------
MAC_Rec_Combined | NoChoice_CUZminusSIB_oblig | -0.02 | [-0.13, 0.08] |  -0.40 | 0.690

Observations: 345
Choice
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Rec_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1       |       Parameter2 |    r |       95% CI | t(343) |         p
------------------------------------------------------------------------------
MAC_Rec_Combined | Choice_CUZ_oblig | 0.35 | [0.26, 0.44] |   7.01 | < .001***

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MAC_Rec_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1       |       Parameter2 |    r |       95% CI | t(343) |         p
------------------------------------------------------------------------------
MAC_Rec_Combined | Choice_SIB_oblig | 0.37 | [0.27, 0.46] |   7.31 | < .001***

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MAC_Rec_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1       |               Parameter2 |     r |        95% CI | t(343) |     p
------------------------------------------------------------------------------------
MAC_Rec_Combined | Choice_CUZminusSIB_oblig | -0.05 | [-0.15, 0.06] |  -0.85 | 0.398

Observations: 345

Heroism

Stranger-Like

No Choice
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Hero_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1        |         Parameter2 |    r |       95% CI | t(352) |       p
-------------------------------------------------------------------------------
MAC_Hero_Combined | NoChoice_CUZ_oblig | 0.17 | [0.07, 0.27] |   3.28 | 0.001**

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MAC_Hero_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1        |         Parameter2 |    r |       95% CI | t(352) |         p
---------------------------------------------------------------------------------
MAC_Hero_Combined | NoChoice_SIB_oblig | 0.22 | [0.12, 0.32] |   4.22 | < .001***

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MAC_Hero_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1        |                 Parameter2 |     r |        95% CI | t(352) |     p
---------------------------------------------------------------------------------------
MAC_Hero_Combined | NoChoice_CUZminusSIB_oblig | -0.05 | [-0.15, 0.06] |  -0.88 | 0.381

Observations: 354
Choice
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Hero_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1        |       Parameter2 |    r |       95% CI | t(352) |         p
-------------------------------------------------------------------------------
MAC_Hero_Combined | Choice_CUZ_oblig | 0.21 | [0.10, 0.30] |   3.95 | < .001***

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MAC_Hero_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1        |       Parameter2 |    r |       95% CI | t(352) |         p
-------------------------------------------------------------------------------
MAC_Hero_Combined | Choice_SIB_oblig | 0.24 | [0.13, 0.33] |   4.54 | < .001***

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MAC_Hero_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1        |               Parameter2 |     r |         95% CI | t(352) |      p
---------------------------------------------------------------------------------------
MAC_Hero_Combined | Choice_CUZminusSIB_oblig | -0.11 | [-0.21,  0.00] |  -2.06 | 0.040*

Observations: 354

Friend-Like

No Choice
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Hero_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1        |         Parameter2 |    r |       95% CI | t(343) |         p
---------------------------------------------------------------------------------
MAC_Hero_Combined | NoChoice_CUZ_oblig | 0.21 | [0.10, 0.31] |   3.93 | < .001***

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MAC_Hero_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1        |         Parameter2 |    r |       95% CI | t(343) |         p
---------------------------------------------------------------------------------
MAC_Hero_Combined | NoChoice_SIB_oblig | 0.21 | [0.11, 0.31] |   4.06 | < .001***

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MAC_Hero_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1        |                 Parameter2 |        r |        95% CI | t(343) |     p
------------------------------------------------------------------------------------------
MAC_Hero_Combined | NoChoice_CUZminusSIB_oblig | 6.60e-03 | [-0.10, 0.11] |   0.12 | 0.903

Observations: 345
Choice
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Hero_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1        |       Parameter2 |    r |       95% CI | t(343) |         p
-------------------------------------------------------------------------------
MAC_Hero_Combined | Choice_CUZ_oblig | 0.23 | [0.13, 0.33] |   4.35 | < .001***

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MAC_Hero_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1        |       Parameter2 |    r |       95% CI | t(343) |         p
-------------------------------------------------------------------------------
MAC_Hero_Combined | Choice_SIB_oblig | 0.24 | [0.14, 0.33] |   4.53 | < .001***

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MAC_Hero_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1        |               Parameter2 |     r |        95% CI | t(343) |     p
-------------------------------------------------------------------------------------
MAC_Hero_Combined | Choice_CUZminusSIB_oblig | -0.03 | [-0.14, 0.08] |  -0.57 | 0.572

Observations: 345

Authority

Stranger-Like

No Choice
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Def_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1       |         Parameter2 |    r |       95% CI | t(352) |         p
--------------------------------------------------------------------------------
MAC_Def_Combined | NoChoice_CUZ_oblig | 0.20 | [0.09, 0.30] |   3.77 | < .001***

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MAC_Def_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1       |         Parameter2 |    r |       95% CI | t(352) |         p
--------------------------------------------------------------------------------
MAC_Def_Combined | NoChoice_SIB_oblig | 0.27 | [0.17, 0.36] |   5.19 | < .001***

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MAC_Def_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1       |                 Parameter2 |     r |        95% CI | t(352) |     p
--------------------------------------------------------------------------------------
MAC_Def_Combined | NoChoice_CUZminusSIB_oblig | -0.07 | [-0.17, 0.04] |  -1.28 | 0.201

Observations: 354
Choice
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Def_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1       |       Parameter2 |    r |       95% CI | t(352) |         p
------------------------------------------------------------------------------
MAC_Def_Combined | Choice_CUZ_oblig | 0.26 | [0.16, 0.36] |   5.07 | < .001***

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MAC_Def_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1       |       Parameter2 |    r |       95% CI | t(352) |         p
------------------------------------------------------------------------------
MAC_Def_Combined | Choice_SIB_oblig | 0.25 | [0.15, 0.35] |   4.87 | < .001***

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MAC_Def_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1       |               Parameter2 |     r |        95% CI | t(352) |     p
------------------------------------------------------------------------------------
MAC_Def_Combined | Choice_CUZminusSIB_oblig | -0.02 | [-0.13, 0.08] |  -0.43 | 0.667

Observations: 354

Friend-Like

No Choice
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Def_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1       |         Parameter2 |    r |       95% CI | t(343) |       p
------------------------------------------------------------------------------
MAC_Def_Combined | NoChoice_CUZ_oblig | 0.17 | [0.07, 0.27] |   3.24 | 0.001**

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MAC_Def_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1       |         Parameter2 |    r |       95% CI | t(343) |         p
--------------------------------------------------------------------------------
MAC_Def_Combined | NoChoice_SIB_oblig | 0.21 | [0.10, 0.31] |   3.92 | < .001***

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MAC_Def_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1       |                 Parameter2 |     r |        95% CI | t(343) |     p
--------------------------------------------------------------------------------------
MAC_Def_Combined | NoChoice_CUZminusSIB_oblig | -0.02 | [-0.13, 0.08] |  -0.44 | 0.663

Observations: 345
Choice
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Def_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1       |       Parameter2 |    r |       95% CI | t(343) |       p
----------------------------------------------------------------------------
MAC_Def_Combined | Choice_CUZ_oblig | 0.15 | [0.05, 0.25] |   2.86 | 0.005**

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MAC_Def_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1       |       Parameter2 |    r |       95% CI | t(343) |         p
------------------------------------------------------------------------------
MAC_Def_Combined | Choice_SIB_oblig | 0.18 | [0.08, 0.28] |   3.39 | < .001***

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MAC_Def_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1       |               Parameter2 |     r |        95% CI | t(343) |     p
------------------------------------------------------------------------------------
MAC_Def_Combined | Choice_CUZminusSIB_oblig | -0.09 | [-0.19, 0.02] |  -1.63 | 0.104

Observations: 345

Fairness

Stranger-Like

No Choice
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Fair_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1        |         Parameter2 |    r |       95% CI | t(352) |      p
------------------------------------------------------------------------------
MAC_Fair_Combined | NoChoice_CUZ_oblig | 0.10 | [0.00, 0.21] |   1.97 | 0.050*

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MAC_Fair_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1        |         Parameter2 |    r |       95% CI | t(352) |         p
---------------------------------------------------------------------------------
MAC_Fair_Combined | NoChoice_SIB_oblig | 0.18 | [0.08, 0.28] |   3.52 | < .001***

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MAC_Fair_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1        |                 Parameter2 |     r |        95% CI | t(352) |     p
---------------------------------------------------------------------------------------
MAC_Fair_Combined | NoChoice_CUZminusSIB_oblig | -0.08 | [-0.18, 0.03] |  -1.42 | 0.156

Observations: 354
Choice
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Fair_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1        |       Parameter2 |    r |       95% CI | t(352) |         p
-------------------------------------------------------------------------------
MAC_Fair_Combined | Choice_CUZ_oblig | 0.21 | [0.11, 0.31] |   4.02 | < .001***

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MAC_Fair_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1        |       Parameter2 |    r |       95% CI | t(352) |         p
-------------------------------------------------------------------------------
MAC_Fair_Combined | Choice_SIB_oblig | 0.20 | [0.10, 0.30] |   3.84 | < .001***

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MAC_Fair_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1        |               Parameter2 |     r |        95% CI | t(352) |     p
-------------------------------------------------------------------------------------
MAC_Fair_Combined | Choice_CUZminusSIB_oblig | -0.02 | [-0.12, 0.09] |  -0.29 | 0.770

Observations: 354

Friend-Like

No Choice
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Fair_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1        |         Parameter2 |    r |       95% CI | t(343) |         p
---------------------------------------------------------------------------------
MAC_Fair_Combined | NoChoice_CUZ_oblig | 0.18 | [0.08, 0.28] |   3.42 | < .001***

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MAC_Fair_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1        |         Parameter2 |    r |       95% CI | t(343) |         p
---------------------------------------------------------------------------------
MAC_Fair_Combined | NoChoice_SIB_oblig | 0.19 | [0.09, 0.29] |   3.60 | < .001***

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MAC_Fair_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1        |                 Parameter2 |        r |        95% CI | t(343) |     p
------------------------------------------------------------------------------------------
MAC_Fair_Combined | NoChoice_CUZminusSIB_oblig | 2.92e-03 | [-0.10, 0.11] |   0.05 | 0.957

Observations: 345
Choice
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Fair_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1        |       Parameter2 |    r |       95% CI | t(343) |         p
-------------------------------------------------------------------------------
MAC_Fair_Combined | Choice_CUZ_oblig | 0.25 | [0.15, 0.35] |   4.87 | < .001***

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MAC_Fair_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1        |       Parameter2 |    r |       95% CI | t(343) |         p
-------------------------------------------------------------------------------
MAC_Fair_Combined | Choice_SIB_oblig | 0.27 | [0.17, 0.36] |   5.17 | < .001***

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MAC_Fair_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1        |               Parameter2 |     r |        95% CI | t(343) |     p
-------------------------------------------------------------------------------------
MAC_Fair_Combined | Choice_CUZminusSIB_oblig | -0.05 | [-0.15, 0.06] |  -0.90 | 0.368

Observations: 345

Property

Stranger-Like

No Choice
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Prop_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1        |         Parameter2 |    r |       95% CI | t(352) |       p
-------------------------------------------------------------------------------
MAC_Prop_Combined | NoChoice_CUZ_oblig | 0.17 | [0.07, 0.27] |   3.25 | 0.001**

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MAC_Prop_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1        |         Parameter2 |    r |       95% CI | t(352) |      p
------------------------------------------------------------------------------
MAC_Prop_Combined | NoChoice_SIB_oblig | 0.12 | [0.01, 0.22] |   2.25 | 0.025*

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MAC_Prop_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1        |                 Parameter2 |    r |        95% CI | t(352) |     p
--------------------------------------------------------------------------------------
MAC_Prop_Combined | NoChoice_CUZminusSIB_oblig | 0.04 | [-0.06, 0.15] |   0.82 | 0.411

Observations: 354
Choice
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Prop_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1        |       Parameter2 |    r |       95% CI | t(352) |       p
-----------------------------------------------------------------------------
MAC_Prop_Combined | Choice_CUZ_oblig | 0.14 | [0.04, 0.24] |   2.70 | 0.007**

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MAC_Prop_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1        |       Parameter2 |    r |       95% CI | t(352) |      p
----------------------------------------------------------------------------
MAC_Prop_Combined | Choice_SIB_oblig | 0.12 | [0.02, 0.23] |   2.36 | 0.019*

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MAC_Prop_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1        |               Parameter2 |    r |        95% CI | t(352) |     p
------------------------------------------------------------------------------------
MAC_Prop_Combined | Choice_CUZminusSIB_oblig | 0.02 | [-0.09, 0.12] |   0.35 | 0.727

Observations: 354

Friend-Like

No Choice
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Prop_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1        |         Parameter2 |    r |        95% CI | t(343) |     p
------------------------------------------------------------------------------
MAC_Prop_Combined | NoChoice_CUZ_oblig | 0.06 | [-0.05, 0.16] |   1.05 | 0.293

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MAC_Prop_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1        |         Parameter2 |    r |        95% CI | t(343) |     p
------------------------------------------------------------------------------
MAC_Prop_Combined | NoChoice_SIB_oblig | 0.09 | [-0.01, 0.20] |   1.74 | 0.083

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MAC_Prop_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1        |                 Parameter2 |     r |        95% CI | t(343) |     p
---------------------------------------------------------------------------------------
MAC_Prop_Combined | NoChoice_CUZminusSIB_oblig | -0.03 | [-0.14, 0.07] |  -0.61 | 0.544

Observations: 345
Choice
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Prop_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1        |       Parameter2 |    r |       95% CI | t(343) |      p
----------------------------------------------------------------------------
MAC_Prop_Combined | Choice_CUZ_oblig | 0.12 | [0.01, 0.22] |   2.22 | 0.027*

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MAC_Prop_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1        |       Parameter2 |    r |       95% CI | t(343) |      p
----------------------------------------------------------------------------
MAC_Prop_Combined | Choice_SIB_oblig | 0.13 | [0.02, 0.23] |   2.37 | 0.018*

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MAC_Prop_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1        |               Parameter2 |     r |        95% CI | t(343) |     p
-------------------------------------------------------------------------------------
MAC_Prop_Combined | Choice_CUZminusSIB_oblig | -0.03 | [-0.13, 0.08] |  -0.50 | 0.619

Observations: 345

MFT

Harm

Stranger-Like

No Choice
# distant pearson's r
cor_test(E2_SL_clean, "MFQ_Harm_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1        |         Parameter2 |    r |       95% CI | t(352) |       p
-------------------------------------------------------------------------------
MFQ_Harm_Combined | NoChoice_CUZ_oblig | 0.14 | [0.04, 0.24] |   2.71 | 0.007**

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MFQ_Harm_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1        |         Parameter2 |    r |       95% CI | t(352) |         p
---------------------------------------------------------------------------------
MFQ_Harm_Combined | NoChoice_SIB_oblig | 0.21 | [0.11, 0.31] |   4.00 | < .001***

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MFQ_Harm_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1        |                 Parameter2 |     r |        95% CI | t(352) |     p
---------------------------------------------------------------------------------------
MFQ_Harm_Combined | NoChoice_CUZminusSIB_oblig | -0.06 | [-0.17, 0.04] |  -1.19 | 0.235

Observations: 354
Choice
# distant pearson's r
cor_test(E2_SL_clean, "MFQ_Harm_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1        |       Parameter2 |    r |       95% CI | t(352) |         p
-------------------------------------------------------------------------------
MFQ_Harm_Combined | Choice_CUZ_oblig | 0.18 | [0.07, 0.28] |   3.36 | < .001***

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MFQ_Harm_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1        |       Parameter2 |    r |       95% CI | t(352) |         p
-------------------------------------------------------------------------------
MFQ_Harm_Combined | Choice_SIB_oblig | 0.18 | [0.08, 0.28] |   3.46 | < .001***

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MFQ_Harm_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1        |               Parameter2 |     r |        95% CI | t(352) |     p
-------------------------------------------------------------------------------------
MFQ_Harm_Combined | Choice_CUZminusSIB_oblig | -0.04 | [-0.15, 0.06] |  -0.83 | 0.406

Observations: 354

Friend-Like

No Choice
# distant pearson's r
cor_test(E2_FL_clean, "MFQ_Harm_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1        |         Parameter2 |    r |       95% CI | t(343) |      p
------------------------------------------------------------------------------
MFQ_Harm_Combined | NoChoice_CUZ_oblig | 0.12 | [0.01, 0.22] |   2.19 | 0.029*

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MFQ_Harm_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1        |         Parameter2 |    r |       95% CI | t(343) |      p
------------------------------------------------------------------------------
MFQ_Harm_Combined | NoChoice_SIB_oblig | 0.14 | [0.03, 0.24] |   2.56 | 0.011*

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MFQ_Harm_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1        |                 Parameter2 |     r |        95% CI | t(343) |     p
---------------------------------------------------------------------------------------
MFQ_Harm_Combined | NoChoice_CUZminusSIB_oblig | -0.01 | [-0.12, 0.09] |  -0.22 | 0.826

Observations: 345
Choice
# distant pearson's r
cor_test(E2_FL_clean, "MFQ_Harm_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1        |       Parameter2 |    r |       95% CI | t(343) |       p
-----------------------------------------------------------------------------
MFQ_Harm_Combined | Choice_CUZ_oblig | 0.16 | [0.05, 0.26] |   2.95 | 0.003**

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MFQ_Harm_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1        |       Parameter2 |    r |       95% CI | t(343) |       p
-----------------------------------------------------------------------------
MFQ_Harm_Combined | Choice_SIB_oblig | 0.17 | [0.07, 0.28] |   3.29 | 0.001**

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MFQ_Harm_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1        |               Parameter2 |     r |        95% CI | t(343) |     p
-------------------------------------------------------------------------------------
MFQ_Harm_Combined | Choice_CUZminusSIB_oblig | -0.06 | [-0.16, 0.05] |  -1.03 | 0.303

Observations: 345

Fairness

Stranger-Like

No Choice
# distant pearson's r
cor_test(E2_SL_clean, "MFQ_Fairness_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1            |         Parameter2 |    r |        95% CI | t(352) |     p
----------------------------------------------------------------------------------
MFQ_Fairness_Combined | NoChoice_CUZ_oblig | 0.09 | [-0.01, 0.19] |   1.71 | 0.088

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MFQ_Fairness_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1            |         Parameter2 |    r |       95% CI | t(352) |       p
-----------------------------------------------------------------------------------
MFQ_Fairness_Combined | NoChoice_SIB_oblig | 0.15 | [0.05, 0.25] |   2.94 | 0.003**

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MFQ_Fairness_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1            |                 Parameter2 |     r |        95% CI | t(352) |     p
-------------------------------------------------------------------------------------------
MFQ_Fairness_Combined | NoChoice_CUZminusSIB_oblig | -0.06 | [-0.16, 0.04] |  -1.14 | 0.255

Observations: 354
Choice
# distant pearson's r
cor_test(E2_SL_clean, "MFQ_Fairness_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1            |       Parameter2 |    r |        95% CI | t(352) |     p
--------------------------------------------------------------------------------
MFQ_Fairness_Combined | Choice_CUZ_oblig | 0.10 | [ 0.00, 0.20] |   1.89 | 0.060

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MFQ_Fairness_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1            |       Parameter2 |    r |       95% CI | t(352) |      p
--------------------------------------------------------------------------------
MFQ_Fairness_Combined | Choice_SIB_oblig | 0.11 | [0.01, 0.21] |   2.11 | 0.036*

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MFQ_Fairness_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1            |               Parameter2 |     r |        95% CI | t(352) |     p
-----------------------------------------------------------------------------------------
MFQ_Fairness_Combined | Choice_CUZminusSIB_oblig | -0.05 | [-0.15, 0.06] |  -0.87 | 0.387

Observations: 354

Friend-Like

No Choice
# distant pearson's r
cor_test(E2_FL_clean, "MFQ_Fairness_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1            |         Parameter2 |    r |       95% CI | t(343) |      p
----------------------------------------------------------------------------------
MFQ_Fairness_Combined | NoChoice_CUZ_oblig | 0.11 | [0.00, 0.21] |   1.99 | 0.047*

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MFQ_Fairness_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1            |         Parameter2 |    r |       95% CI | t(343) |       p
-----------------------------------------------------------------------------------
MFQ_Fairness_Combined | NoChoice_SIB_oblig | 0.14 | [0.04, 0.24] |   2.63 | 0.009**

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MFQ_Fairness_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1            |                 Parameter2 |     r |        95% CI | t(343) |     p
-------------------------------------------------------------------------------------------
MFQ_Fairness_Combined | NoChoice_CUZminusSIB_oblig | -0.03 | [-0.13, 0.08] |  -0.49 | 0.623

Observations: 345
Choice
# distant pearson's r
cor_test(E2_FL_clean, "MFQ_Fairness_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1            |       Parameter2 |    r |       95% CI | t(343) |       p
---------------------------------------------------------------------------------
MFQ_Fairness_Combined | Choice_CUZ_oblig | 0.18 | [0.07, 0.28] |   3.31 | 0.001**

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MFQ_Fairness_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1            |       Parameter2 |    r |       95% CI | t(343) |       p
---------------------------------------------------------------------------------
MFQ_Fairness_Combined | Choice_SIB_oblig | 0.17 | [0.07, 0.27] |   3.21 | 0.001**

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MFQ_Fairness_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1            |               Parameter2 |    r |        95% CI | t(343) |     p
----------------------------------------------------------------------------------------
MFQ_Fairness_Combined | Choice_CUZminusSIB_oblig | 0.01 | [-0.09, 0.12] |   0.23 | 0.818

Observations: 345

Loyalty

Stranger-Like

No Choice
# distant pearson's r
cor_test(E2_SL_clean, "MFQ_Loyalty_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1           |         Parameter2 |    r |       95% CI | t(352) |         p
------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | NoChoice_CUZ_oblig | 0.21 | [0.10, 0.30] |   3.95 | < .001***

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MFQ_Loyalty_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1           |         Parameter2 |    r |       95% CI | t(352) |         p
------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | NoChoice_SIB_oblig | 0.18 | [0.08, 0.28] |   3.50 | < .001***

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MFQ_Loyalty_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1           |                 Parameter2 |    r |        95% CI | t(352) |     p
-----------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | NoChoice_CUZminusSIB_oblig | 0.02 | [-0.09, 0.12] |   0.31 | 0.756

Observations: 354
Choice
# distant pearson's r
cor_test(E2_SL_clean, "MFQ_Loyalty_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1           |       Parameter2 |    r |       95% CI | t(352) |         p
----------------------------------------------------------------------------------
MFQ_Loyalty_Combined | Choice_CUZ_oblig | 0.27 | [0.17, 0.36] |   5.23 | < .001***

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MFQ_Loyalty_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1           |       Parameter2 |    r |       95% CI | t(352) |         p
----------------------------------------------------------------------------------
MFQ_Loyalty_Combined | Choice_SIB_oblig | 0.28 | [0.18, 0.37] |   5.46 | < .001***

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MFQ_Loyalty_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1           |               Parameter2 |     r |        95% CI | t(352) |     p
----------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | Choice_CUZminusSIB_oblig | -0.07 | [-0.18, 0.03] |  -1.40 | 0.162

Observations: 354

Friend-Like

No Choice
# distant pearson's r
cor_test(E2_FL_clean, "MFQ_Loyalty_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1           |         Parameter2 |    r |       95% CI | t(343) |      p
---------------------------------------------------------------------------------
MFQ_Loyalty_Combined | NoChoice_CUZ_oblig | 0.14 | [0.03, 0.24] |   2.54 | 0.012*

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MFQ_Loyalty_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1           |         Parameter2 |    r |       95% CI | t(343) |         p
------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | NoChoice_SIB_oblig | 0.19 | [0.08, 0.29] |   3.50 | < .001***

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MFQ_Loyalty_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1           |                 Parameter2 |     r |        95% CI | t(343) |     p
------------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | NoChoice_CUZminusSIB_oblig | -0.04 | [-0.15, 0.06] |  -0.76 | 0.447

Observations: 345
Choice
# distant pearson's r
cor_test(E2_FL_clean, "MFQ_Loyalty_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1           |       Parameter2 |    r |       95% CI | t(343) |       p
--------------------------------------------------------------------------------
MFQ_Loyalty_Combined | Choice_CUZ_oblig | 0.15 | [0.05, 0.25] |   2.83 | 0.005**

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MFQ_Loyalty_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1           |       Parameter2 |    r |       95% CI | t(343) |       p
--------------------------------------------------------------------------------
MFQ_Loyalty_Combined | Choice_SIB_oblig | 0.17 | [0.06, 0.27] |   3.17 | 0.002**

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MFQ_Loyalty_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1           |               Parameter2 |     r |        95% CI | t(343) |     p
----------------------------------------------------------------------------------------
MFQ_Loyalty_Combined | Choice_CUZminusSIB_oblig | -0.06 | [-0.16, 0.05] |  -1.06 | 0.288

Observations: 345

Authority

Stranger-Like

No Choice
# distant pearson's r
cor_test(E2_SL_clean, "MFQ_Authority_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1             |         Parameter2 |    r |       95% CI | t(352) |      p
-----------------------------------------------------------------------------------
MFQ_Authority_Combined | NoChoice_CUZ_oblig | 0.13 | [0.02, 0.23] |   2.40 | 0.017*

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MFQ_Authority_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1             |         Parameter2 |    r |       95% CI | t(352) |       p
------------------------------------------------------------------------------------
MFQ_Authority_Combined | NoChoice_SIB_oblig | 0.17 | [0.07, 0.27] |   3.26 | 0.001**

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MFQ_Authority_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1             |                 Parameter2 |     r |        95% CI | t(352) |     p
--------------------------------------------------------------------------------------------
MFQ_Authority_Combined | NoChoice_CUZminusSIB_oblig | -0.04 | [-0.15, 0.06] |  -0.81 | 0.419

Observations: 354
Choice
# distant pearson's r
cor_test(E2_SL_clean, "MFQ_Authority_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1             |       Parameter2 |    r |       95% CI | t(352) |         p
------------------------------------------------------------------------------------
MFQ_Authority_Combined | Choice_CUZ_oblig | 0.19 | [0.09, 0.29] |   3.65 | < .001***

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MFQ_Authority_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1             |       Parameter2 |    r |       95% CI | t(352) |         p
------------------------------------------------------------------------------------
MFQ_Authority_Combined | Choice_SIB_oblig | 0.18 | [0.08, 0.28] |   3.48 | < .001***

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MFQ_Authority_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1             |               Parameter2 |     r |        95% CI | t(352) |     p
------------------------------------------------------------------------------------------
MFQ_Authority_Combined | Choice_CUZminusSIB_oblig | -0.01 | [-0.12, 0.09] |  -0.23 | 0.816

Observations: 354

Friend-Like

No Choice
# distant pearson's r
cor_test(E2_FL_clean, "MFQ_Authority_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1             |         Parameter2 |    r |        95% CI | t(343) |     p
-----------------------------------------------------------------------------------
MFQ_Authority_Combined | NoChoice_CUZ_oblig | 0.05 | [-0.05, 0.16] |   0.98 | 0.326

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MFQ_Authority_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1             |         Parameter2 |    r |       95% CI | t(343) |      p
-----------------------------------------------------------------------------------
MFQ_Authority_Combined | NoChoice_SIB_oblig | 0.13 | [0.03, 0.23] |   2.47 | 0.014*

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MFQ_Authority_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1             |                 Parameter2 |     r |        95% CI | t(343) |     p
--------------------------------------------------------------------------------------------
MFQ_Authority_Combined | NoChoice_CUZminusSIB_oblig | -0.08 | [-0.18, 0.03] |  -1.40 | 0.163

Observations: 345
Choice
# distant pearson's r
cor_test(E2_FL_clean, "MFQ_Authority_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1             |       Parameter2 |    r |        95% CI | t(343) |     p
---------------------------------------------------------------------------------
MFQ_Authority_Combined | Choice_CUZ_oblig | 0.09 | [-0.01, 0.20] |   1.73 | 0.085

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MFQ_Authority_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1             |       Parameter2 |    r |        95% CI | t(343) |     p
---------------------------------------------------------------------------------
MFQ_Authority_Combined | Choice_SIB_oblig | 0.10 | [ 0.00, 0.20] |   1.88 | 0.060

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MFQ_Authority_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1             |               Parameter2 |     r |        95% CI | t(343) |     p
------------------------------------------------------------------------------------------
MFQ_Authority_Combined | Choice_CUZminusSIB_oblig | -0.03 | [-0.13, 0.08] |  -0.51 | 0.613

Observations: 345

Purity

Stranger-Like

No Choice
# distant pearson's r
cor_test(E2_SL_clean, "MFQ_Purity_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1          |         Parameter2 |    r |       95% CI | t(352) |      p
--------------------------------------------------------------------------------
MFQ_Purity_Combined | NoChoice_CUZ_oblig | 0.11 | [0.00, 0.21] |   2.06 | 0.040*

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MFQ_Purity_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1          |         Parameter2 |    r |       95% CI | t(352) |       p
---------------------------------------------------------------------------------
MFQ_Purity_Combined | NoChoice_SIB_oblig | 0.15 | [0.04, 0.25] |   2.76 | 0.006**

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MFQ_Purity_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1          |                 Parameter2 |     r |        95% CI | t(352) |     p
-----------------------------------------------------------------------------------------
MFQ_Purity_Combined | NoChoice_CUZminusSIB_oblig | -0.04 | [-0.14, 0.07] |  -0.67 | 0.505

Observations: 354
Choice
# distant pearson's r
cor_test(E2_SL_clean, "MFQ_Purity_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1          |       Parameter2 |    r |       95% CI | t(352) |      p
------------------------------------------------------------------------------
MFQ_Purity_Combined | Choice_CUZ_oblig | 0.12 | [0.01, 0.22] |   2.25 | 0.025*

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "MFQ_Purity_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1          |       Parameter2 |    r |       95% CI | t(352) |      p
------------------------------------------------------------------------------
MFQ_Purity_Combined | Choice_SIB_oblig | 0.13 | [0.03, 0.23] |   2.46 | 0.014*

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "MFQ_Purity_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1          |               Parameter2 |     r |        95% CI | t(352) |     p
---------------------------------------------------------------------------------------
MFQ_Purity_Combined | Choice_CUZminusSIB_oblig | -0.05 | [-0.15, 0.06] |  -0.91 | 0.364

Observations: 354

Friend-Like

No Choice
# distant pearson's r
cor_test(E2_FL_clean, "MFQ_Purity_Combined", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1          |         Parameter2 |    r |        95% CI | t(343) |     p
--------------------------------------------------------------------------------
MFQ_Purity_Combined | NoChoice_CUZ_oblig | 0.02 | [-0.08, 0.13] |   0.44 | 0.657

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MFQ_Purity_Combined", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1          |         Parameter2 |    r |        95% CI | t(343) |     p
--------------------------------------------------------------------------------
MFQ_Purity_Combined | NoChoice_SIB_oblig | 0.10 | [-0.01, 0.20] |   1.77 | 0.077

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MFQ_Purity_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1          |                 Parameter2 |     r |        95% CI | t(343) |     p
-----------------------------------------------------------------------------------------
MFQ_Purity_Combined | NoChoice_CUZminusSIB_oblig | -0.07 | [-0.17, 0.04] |  -1.28 | 0.200

Observations: 345
Choice
# distant pearson's r
cor_test(E2_FL_clean, "MFQ_Purity_Combined", "Choice_CUZ_oblig", method = "Pearson")
Parameter1          |       Parameter2 |         r |        95% CI | t(343) |     p
-----------------------------------------------------------------------------------
MFQ_Purity_Combined | Choice_CUZ_oblig | -9.45e-03 | [-0.11, 0.10] |  -0.17 | 0.861

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "MFQ_Purity_Combined", "Choice_SIB_oblig", method = "Pearson")
Parameter1          |       Parameter2 |         r |        95% CI | t(343) |     p
-----------------------------------------------------------------------------------
MFQ_Purity_Combined | Choice_SIB_oblig | -4.09e-03 | [-0.11, 0.10] |  -0.08 | 0.940

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "MFQ_Purity_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1          |               Parameter2 |     r |        95% CI | t(343) |     p
---------------------------------------------------------------------------------------
MFQ_Purity_Combined | Choice_CUZminusSIB_oblig | -0.02 | [-0.12, 0.09] |  -0.30 | 0.765

Observations: 345

OUS

Impartial Beneficence

Stranger-Like

No Choice
# distant pearson's r
cor_test(E2_SL_clean, "OUS_IB", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1 |         Parameter2 |    r |       95% CI | t(352) |      p
-----------------------------------------------------------------------
OUS_IB     | NoChoice_CUZ_oblig | 0.13 | [0.02, 0.23] |   2.42 | 0.016*

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "OUS_IB", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1 |         Parameter2 |    r |       95% CI | t(352) |      p
-----------------------------------------------------------------------
OUS_IB     | NoChoice_SIB_oblig | 0.13 | [0.03, 0.23] |   2.46 | 0.014*

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "OUS_IB", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1 |                 Parameter2 |         r |        95% CI | t(352) |     p
------------------------------------------------------------------------------------
OUS_IB     | NoChoice_CUZminusSIB_oblig | -4.55e-03 | [-0.11, 0.10] |  -0.09 | 0.932

Observations: 354
Choice
# distant pearson's r
cor_test(E2_SL_clean, "OUS_IB", "Choice_CUZ_oblig", method = "Pearson")
Parameter1 |       Parameter2 |    r |       95% CI | t(352) |         p
------------------------------------------------------------------------
OUS_IB     | Choice_CUZ_oblig | 0.19 | [0.09, 0.29] |   3.59 | < .001***

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "OUS_IB", "Choice_SIB_oblig", method = "Pearson")
Parameter1 |       Parameter2 |    r |       95% CI | t(352) |       p
----------------------------------------------------------------------
OUS_IB     | Choice_SIB_oblig | 0.16 | [0.05, 0.26] |   2.95 | 0.003**

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "OUS_IB", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1 |               Parameter2 |    r |        95% CI | t(352) |     p
-----------------------------------------------------------------------------
OUS_IB     | Choice_CUZminusSIB_oblig | 0.05 | [-0.06, 0.15] |   0.88 | 0.381

Observations: 354

Friend-Like

No Choice
# distant pearson's r
cor_test(E2_FL_clean, "OUS_IB", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1 |         Parameter2 |    r |       95% CI | t(343) |      p
-----------------------------------------------------------------------
OUS_IB     | NoChoice_CUZ_oblig | 0.12 | [0.01, 0.22] |   2.18 | 0.030*

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "OUS_IB", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1 |         Parameter2 |    r |        95% CI | t(343) |     p
-----------------------------------------------------------------------
OUS_IB     | NoChoice_SIB_oblig | 0.06 | [-0.04, 0.17] |   1.17 | 0.243

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "OUS_IB", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1 |                 Parameter2 |    r |        95% CI | t(343) |     p
-------------------------------------------------------------------------------
OUS_IB     | NoChoice_CUZminusSIB_oblig | 0.06 | [-0.04, 0.17] |   1.13 | 0.258

Observations: 345
Choice
# distant pearson's r
cor_test(E2_FL_clean, "OUS_IB", "Choice_CUZ_oblig", method = "Pearson")
Parameter1 |       Parameter2 |    r |       95% CI | t(343) |      p
---------------------------------------------------------------------
OUS_IB     | Choice_CUZ_oblig | 0.12 | [0.01, 0.22] |   2.15 | 0.032*

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "OUS_IB", "Choice_SIB_oblig", method = "Pearson")
Parameter1 |       Parameter2 |    r |       95% CI | t(343) |      p
---------------------------------------------------------------------
OUS_IB     | Choice_SIB_oblig | 0.11 | [0.00, 0.21] |   2.00 | 0.046*

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "OUS_IB", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1 |               Parameter2 |    r |        95% CI | t(343) |     p
-----------------------------------------------------------------------------
OUS_IB     | Choice_CUZminusSIB_oblig | 0.02 | [-0.08, 0.13] |   0.41 | 0.683

Observations: 345

Instrumental Harm

Stranger-Like

No Choice
# distant pearson's r
cor_test(E2_SL_clean, "OUS_IH", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1 |         Parameter2 |     r |        95% CI | t(352) |     p
------------------------------------------------------------------------
OUS_IH     | NoChoice_CUZ_oblig | -0.01 | [-0.12, 0.09] |  -0.21 | 0.831

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "OUS_IH", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1 |         Parameter2 |     r |        95% CI | t(352) |     p
------------------------------------------------------------------------
OUS_IH     | NoChoice_SIB_oblig | -0.06 | [-0.16, 0.05] |  -1.04 | 0.300

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "OUS_IH", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1 |                 Parameter2 |    r |        95% CI | t(352) |     p
-------------------------------------------------------------------------------
OUS_IH     | NoChoice_CUZminusSIB_oblig | 0.04 | [-0.06, 0.14] |   0.76 | 0.449

Observations: 354
Choice
# distant pearson's r
cor_test(E2_SL_clean, "OUS_IH", "Choice_CUZ_oblig", method = "Pearson")
Parameter1 |       Parameter2 |    r |        95% CI | t(352) |     p
---------------------------------------------------------------------
OUS_IH     | Choice_CUZ_oblig | 0.02 | [-0.09, 0.12] |   0.34 | 0.736

Observations: 354
# close pearson's r
cor_test(E2_SL_clean, "OUS_IH", "Choice_SIB_oblig", method = "Pearson")
Parameter1 |       Parameter2 |        r |        95% CI | t(352) |     p
-------------------------------------------------------------------------
OUS_IH     | Choice_SIB_oblig | 1.76e-03 | [-0.10, 0.11] |   0.03 | 0.974

Observations: 354
# diff pearson's r
cor_test(E2_SL_clean, "OUS_IH", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1 |               Parameter2 |    r |        95% CI | t(352) |     p
-----------------------------------------------------------------------------
OUS_IH     | Choice_CUZminusSIB_oblig | 0.04 | [-0.07, 0.14] |   0.70 | 0.487

Observations: 354

Friend-Like

No Choice
# distant pearson's r
cor_test(E2_FL_clean, "OUS_IH", "NoChoice_CUZ_oblig", method = "Pearson")
Parameter1 |         Parameter2 |    r |        95% CI | t(343) |     p
-----------------------------------------------------------------------
OUS_IH     | NoChoice_CUZ_oblig | 0.06 | [-0.04, 0.17] |   1.16 | 0.245

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "OUS_IH", "NoChoice_SIB_oblig", method = "Pearson")
Parameter1 |         Parameter2 |        r |        95% CI | t(343) |     p
---------------------------------------------------------------------------
OUS_IH     | NoChoice_SIB_oblig | 2.63e-03 | [-0.10, 0.11] |   0.05 | 0.961

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "OUS_IH", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1 |                 Parameter2 |    r |        95% CI | t(343) |     p
-------------------------------------------------------------------------------
OUS_IH     | NoChoice_CUZminusSIB_oblig | 0.06 | [-0.04, 0.17] |   1.18 | 0.238

Observations: 345
Choice
# distant pearson's r
cor_test(E2_FL_clean, "OUS_IH", "Choice_CUZ_oblig", method = "Pearson")
Parameter1 |       Parameter2 |    r |        95% CI | t(343) |     p
---------------------------------------------------------------------
OUS_IH     | Choice_CUZ_oblig | 0.02 | [-0.08, 0.13] |   0.43 | 0.667

Observations: 345
# close pearson's r
cor_test(E2_FL_clean, "OUS_IH", "Choice_SIB_oblig", method = "Pearson")
Parameter1 |       Parameter2 |        r |        95% CI | t(343) |     p
-------------------------------------------------------------------------
OUS_IH     | Choice_SIB_oblig | 8.16e-03 | [-0.10, 0.11] |   0.15 | 0.880

Observations: 345
# diff pearson's r
cor_test(E2_FL_clean, "OUS_IH", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1 |               Parameter2 |    r |        95% CI | t(343) |     p
-----------------------------------------------------------------------------
OUS_IH     | Choice_CUZminusSIB_oblig | 0.05 | [-0.06, 0.15] |   0.84 | 0.400

Observations: 345

Oblig ~ MAC Family Values vs MFT Ingroup Loyalty Tests

Stranger-Like

No Choice

# correlation values are taken from the oblig ~ ind. diffs analyses above
## r.jk = oblig ~ family values corr; r.jh = oblig ~ ingroup loyalty corr; r.kh = family values ~ ingroup loyalty corr

# distant
cocor.dep.groups.overlap(r.jk = .31, r.jh = .21, r.kh = .62, 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.31 and r.jh = 0.21
Difference: r.jk - r.jh = 0.1
Related correlation: r.kh = 0.62
Group size: n = 354
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = 2.2437, p-value = 0.0249
  Null hypothesis rejected
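
As a side note, the three input correlations could also be pulled programmatically rather than copied in by hand. A minimal sketch for the comparison above, assuming the family-values and ingroup-loyalty composites are stored in columns named MAC_Family_Combined and MFQ_Ingroup_Combined (assumed names; the actual columns may differ):

# sketch only (column names are assumptions): recompute the inputs fed to cocor
r_jk <- cor(E2_SL_clean$MAC_Family_Combined, E2_SL_clean$NoChoice_CUZ_oblig,
            use = "pairwise.complete.obs")   # oblig ~ family values
r_jh <- cor(E2_SL_clean$MFQ_Ingroup_Combined, E2_SL_clean$NoChoice_CUZ_oblig,
            use = "pairwise.complete.obs")   # oblig ~ ingroup loyalty
r_kh <- cor(E2_SL_clean$MAC_Family_Combined, E2_SL_clean$MFQ_Ingroup_Combined,
            use = "pairwise.complete.obs")   # family values ~ ingroup loyalty
cocor.dep.groups.overlap(r.jk = r_jk, r.jh = r_jh, r.kh = r_kh, n = nrow(E2_SL_clean),
                         alternative = "two.sided", test = "steiger1980",
                         alpha = 0.05, conf.level = 0.95, null.value = 0)
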
# close
cocor.dep.groups.overlap(r.jk = .33, r.jh = .18, r.kh = .62, 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.33 and r.jh = 0.18
Difference: r.jk - r.jh = 0.15
Related correlation: r.kh = 0.62
Group size: n = 354
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = 3.3647, p-value = 0.0008
  Null hypothesis rejected
# difference
cocor.dep.groups.overlap(r.jk = -.03, r.jh = .02, r.kh = .62, 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = -0.03 and r.jh = 0.02
Difference: r.jk - r.jh = -0.05
Related correlation: r.kh = 0.62
Group size: n = 354
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -1.0748, p-value = 0.2825
  Null hypothesis retained

Choice

# correlation values are taken from the oblig ~ ind. diffs analyses
## r.jk = oblig ~ family values corr; r.jh = oblig ~ ingroup loyalty corr; r.kh = family values ~ ingroup loyalty corr

# distant
cocor.dep.groups.overlap(r.jk = .37, r.jh = .27, r.kh = .62, 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.37 and r.jh = 0.27
Difference: r.jk - r.jh = 0.1
Related correlation: r.kh = 0.62
Group size: n = 354
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = 2.2964, p-value = 0.0217
  Null hypothesis rejected
# close
cocor.dep.groups.overlap(r.jk = .43, r.jh = .28, r.kh = .62, 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.43 and r.jh = 0.28
Difference: r.jk - r.jh = 0.15
Related correlation: r.kh = 0.62
Group size: n = 354
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = 3.5083, p-value = 0.0005
  Null hypothesis rejected
# difference
cocor.dep.groups.overlap(r.jk = -.20, r.jh = -.07, r.kh = .62, 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = -0.2 and r.jh = -0.07
Difference: r.jk - r.jh = -0.13
Related correlation: r.kh = 0.62
Group size: n = 354
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -2.8289, p-value = 0.0047
  Null hypothesis rejected

Friend-Like

No Choice

# correlation values are taken from the oblig ~ ind. diffs analyses
## r.jk = oblig ~ family values corr; r.jh = oblig ~ ingroup loyalty corr; r.kh = family values ~ ingroup loyalty corr

# distant
cocor.dep.groups.overlap(r.jk = .25, r.jh = .14, r.kh = .64, 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.25 and r.jh = 0.14
Difference: r.jk - r.jh = 0.11
Related correlation: r.kh = 0.64
Group size: n = 345
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = 2.4560, p-value = 0.0140
  Null hypothesis rejected
# close
cocor.dep.groups.overlap(r.jk = .33, r.jh = .19, r.kh = .64, 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.33 and r.jh = 0.19
Difference: r.jk - r.jh = 0.14
Related correlation: r.kh = 0.64
Group size: n = 345
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = 3.1879, p-value = 0.0014
  Null hypothesis rejected
# difference
cocor.dep.groups.overlap(r.jk = -.06, r.jh = -.04, r.kh = .64, 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = -0.06 and r.jh = -0.04
Difference: r.jk - r.jh = -0.02
Related correlation: r.kh = 0.64
Group size: n = 345
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -0.4365, p-value = 0.6624
  Null hypothesis retained

Choice

# correlation values are taken from the oblig ~ ind. diffs analyses
## r.jk = oblig ~ family values corr; r.jh = oblig ~ ingroup loyalty corr; r.kh = family values ~ ingroup loyalty corr

# distant
cocor.dep.groups.overlap(r.jk = .29, r.jh = .15, r.kh = .64, 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.29 and r.jh = 0.15
Difference: r.jk - r.jh = 0.14
Related correlation: r.kh = 0.64
Group size: n = 345
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = 3.1488, p-value = 0.0016
  Null hypothesis rejected
# close
cocor.dep.groups.overlap(r.jk = .34, r.jh = .17, r.kh = .64, 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.34 and r.jh = 0.17
Difference: r.jk - r.jh = 0.17
Related correlation: r.kh = 0.64
Group size: n = 345
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = 3.8687, p-value = 0.0001
  Null hypothesis rejected
# difference
cocor.dep.groups.overlap(r.jk = -.17, r.jh = -.06, r.kh = .64, 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = -0.17 and r.jh = -0.06
Difference: r.jk - r.jh = -0.11
Related correlation: r.kh = 0.64
Group size: n = 345
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -2.4189, p-value = 0.0156
  Null hypothesis rejected

Oblig Diff ~ Other Pre-Outcome Diff Tests

Relate

Stranger-Like

No Choice

# diff pearson's r
cor_test(E2_SL_clean, "NoChoice_CUZminusSIB_relate", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                  |                 Parameter2 |    r |        95% CI | t(352) |     p
------------------------------------------------------------------------------------------------
NoChoice_CUZminusSIB_relate | NoChoice_CUZminusSIB_oblig | 0.10 | [ 0.00, 0.21] |   1.95 | 0.052

Observations: 354

Choice

# diff pearson's r
cor_test(E2_SL_clean, "Choice_CUZminusSIB_relate", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                |               Parameter2 |    r |        95% CI | t(352) |     p
--------------------------------------------------------------------------------------------
Choice_CUZminusSIB_relate | Choice_CUZminusSIB_oblig | 0.10 | [ 0.00, 0.21] |   1.95 | 0.052

Observations: 354

Friend-Like

No Choice

# diff pearson's r
cor_test(E2_FL_clean, "NoChoice_CUZminusSIB_relate", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                  |                 Parameter2 |    r |        95% CI | t(343) |     p
------------------------------------------------------------------------------------------------
NoChoice_CUZminusSIB_relate | NoChoice_CUZminusSIB_oblig | 0.02 | [-0.09, 0.12] |   0.32 | 0.747

Observations: 345

Choice

# diff pearson's r
cor_test(E2_FL_clean, "Choice_CUZminusSIB_relate", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                |               Parameter2 |    r |       95% CI | t(343) |      p
--------------------------------------------------------------------------------------------
Choice_CUZminusSIB_relate | Choice_CUZminusSIB_oblig | 0.13 | [0.02, 0.23] |   2.42 | 0.016*

Observations: 345

Close

Stranger-Like

No Choice

# diff pearson's r
cor_test(E2_SL_clean, "NoChoice_CUZminusSIB_close", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                 |                 Parameter2 |    r |       95% CI | t(352) |         p
--------------------------------------------------------------------------------------------------
NoChoice_CUZminusSIB_close | NoChoice_CUZminusSIB_oblig | 0.22 | [0.12, 0.32] |   4.21 | < .001***

Observations: 354

Choice

# diff pearson's r
cor_test(E2_SL_clean, "Choice_CUZminusSIB_close", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1               |               Parameter2 |    r |       95% CI | t(352) |         p
----------------------------------------------------------------------------------------------
Choice_CUZminusSIB_close | Choice_CUZminusSIB_oblig | 0.37 | [0.28, 0.46] |   7.48 | < .001***

Observations: 354

Friend-Like

No Choice

# diff pearson's r
cor_test(E2_FL_clean, "NoChoice_CUZminusSIB_close", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                 |                 Parameter2 |    r |       95% CI | t(343) |         p
--------------------------------------------------------------------------------------------------
NoChoice_CUZminusSIB_close | NoChoice_CUZminusSIB_oblig | 0.25 | [0.15, 0.35] |   4.79 | < .001***

Observations: 345

Choice

# diff pearson's r
cor_test(E2_FL_clean, "Choice_CUZminusSIB_close", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1               |               Parameter2 |    r |       95% CI | t(343) |         p
----------------------------------------------------------------------------------------------
Choice_CUZminusSIB_close | Choice_CUZminusSIB_oblig | 0.60 | [0.53, 0.66] |  13.92 | < .001***

Observations: 345

Prior Help

Stranger-Like

No Choice

# diff pearson's r
cor_test(E2_SL_clean, "NoChoice_CUZminusSIB_priorhelp", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                     |                 Parameter2 |    r |       95% CI | t(352) |         p
------------------------------------------------------------------------------------------------------
NoChoice_CUZminusSIB_priorhelp | NoChoice_CUZminusSIB_oblig | 0.21 | [0.10, 0.30] |   3.96 | < .001***

Observations: 354

Choice

# diff pearson's r
cor_test(E2_SL_clean, "Choice_CUZminusSIB_priorhelp", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                   |               Parameter2 |    r |       95% CI | t(352) |         p
--------------------------------------------------------------------------------------------------
Choice_CUZminusSIB_priorhelp | Choice_CUZminusSIB_oblig | 0.42 | [0.33, 0.50] |   8.72 | < .001***

Observations: 354

Friend-Like

No Choice

# diff pearson's r
cor_test(E2_FL_clean, "NoChoice_CUZminusSIB_priorhelp", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                     |                 Parameter2 |    r |       95% CI | t(343) |         p
------------------------------------------------------------------------------------------------------
NoChoice_CUZminusSIB_priorhelp | NoChoice_CUZminusSIB_oblig | 0.26 | [0.16, 0.36] |   5.00 | < .001***

Observations: 345

Choice

# diff pearson's r
cor_test(E2_FL_clean, "Choice_CUZminusSIB_priorhelp", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                   |               Parameter2 |    r |       95% CI | t(343) |         p
--------------------------------------------------------------------------------------------------
Choice_CUZminusSIB_priorhelp | Choice_CUZminusSIB_oblig | 0.56 | [0.49, 0.63] |  12.67 | < .001***

Observations: 345

Future Help

Stranger-Like

No Choice

# diff pearson's r
cor_test(E2_SL_clean, "NoChoice_CUZminusSIB_futurehelp", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                      |                 Parameter2 |    r |       95% CI | t(352) |         p
-------------------------------------------------------------------------------------------------------
NoChoice_CUZminusSIB_futurehelp | NoChoice_CUZminusSIB_oblig | 0.36 | [0.27, 0.45] |   7.32 | < .001***

Observations: 354

Choice

# diff pearson's r
cor_test(E2_SL_clean, "Choice_CUZminusSIB_futurehelp", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                    |               Parameter2 |    r |       95% CI | t(352) |         p
---------------------------------------------------------------------------------------------------
Choice_CUZminusSIB_futurehelp | Choice_CUZminusSIB_oblig | 0.51 | [0.43, 0.58] |  11.14 | < .001***

Observations: 354

Friend-Like

No Choice

# diff pearson's r
cor_test(E2_FL_clean, "NoChoice_CUZminusSIB_futurehelp", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                      |                 Parameter2 |    r |       95% CI | t(343) |         p
-------------------------------------------------------------------------------------------------------
NoChoice_CUZminusSIB_futurehelp | NoChoice_CUZminusSIB_oblig | 0.35 | [0.25, 0.44] |   6.84 | < .001***

Observations: 345

Choice

# diff pearson's r
cor_test(E2_FL_clean, "Choice_CUZminusSIB_futurehelp", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                    |               Parameter2 |    r |       95% CI | t(343) |         p
---------------------------------------------------------------------------------------------------
Choice_CUZminusSIB_futurehelp | Choice_CUZminusSIB_oblig | 0.60 | [0.53, 0.66] |  13.86 | < .001***

Observations: 345

Prior Interaction

Stranger-Like

No Choice

# diff pearson's r
cor_test(E2_SL_clean, "NoChoice_CUZminusSIB_priorinteract", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                         |                 Parameter2 |    r |       95% CI | t(352) |       p
--------------------------------------------------------------------------------------------------------
NoChoice_CUZminusSIB_priorinteract | NoChoice_CUZminusSIB_oblig | 0.16 | [0.05, 0.26] |   2.96 | 0.003**

Observations: 354

Choice

# diff pearson's r
cor_test(E2_SL_clean, "Choice_CUZminusSIB_priorinteract", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                       |               Parameter2 |    r |       95% CI | t(352) |         p
------------------------------------------------------------------------------------------------------
Choice_CUZminusSIB_priorinteract | Choice_CUZminusSIB_oblig | 0.34 | [0.24, 0.43] |   6.75 | < .001***

Observations: 354

Friend-Like

No Choice

# diff pearson's r
cor_test(E2_FL_clean, "NoChoice_CUZminusSIB_priorinteract", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                         |                 Parameter2 |    r |       95% CI | t(343) |         p
----------------------------------------------------------------------------------------------------------
NoChoice_CUZminusSIB_priorinteract | NoChoice_CUZminusSIB_oblig | 0.26 | [0.16, 0.35] |   4.92 | < .001***

Observations: 345

Choice

# diff pearson's r
cor_test(E2_FL_clean, "Choice_CUZminusSIB_priorinteract", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                       |               Parameter2 |    r |       95% CI | t(343) |         p
------------------------------------------------------------------------------------------------------
Choice_CUZminusSIB_priorinteract | Choice_CUZminusSIB_oblig | 0.45 | [0.36, 0.53] |   9.27 | < .001***

Observations: 345

Future Interaction

Stranger-Like

No Choice

# diff pearson's r
cor_test(E2_SL_clean, "NoChoice_CUZminusSIB_futureinteract", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                          |                 Parameter2 |    r |       95% CI | t(352) |         p
-----------------------------------------------------------------------------------------------------------
NoChoice_CUZminusSIB_futureinteract | NoChoice_CUZminusSIB_oblig | 0.19 | [0.09, 0.29] |   3.59 | < .001***

Observations: 354

Choice

# diff pearson's r
cor_test(E2_SL_clean, "Choice_CUZminusSIB_futureinteract", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                        |               Parameter2 |    r |       95% CI | t(352) |         p
-------------------------------------------------------------------------------------------------------
Choice_CUZminusSIB_futureinteract | Choice_CUZminusSIB_oblig | 0.45 | [0.37, 0.53] |   9.52 | < .001***

Observations: 354

Friend-Like

No Choice

# diff pearson's r
cor_test(E2_FL_clean, "NoChoice_CUZminusSIB_futureinteract", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                          |                 Parameter2 |    r |       95% CI | t(343) |         p
-----------------------------------------------------------------------------------------------------------
NoChoice_CUZminusSIB_futureinteract | NoChoice_CUZminusSIB_oblig | 0.19 | [0.09, 0.29] |   3.65 | < .001***

Observations: 345

Choice

# diff pearson's r
cor_test(E2_FL_clean, "Choice_CUZminusSIB_futureinteract", "Choice_CUZminusSIB_oblig", method = "Pearson")
Parameter1                        |               Parameter2 |    r |       95% CI | t(343) |         p
-------------------------------------------------------------------------------------------------------
Choice_CUZminusSIB_futureinteract | Choice_CUZminusSIB_oblig | 0.54 | [0.46, 0.61] |  11.78 | < .001***

Observations: 345

Oblig ~ Relate vs Social Interaction Tests

Close

Stranger-Like

No Choice

# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .10, r.jh = .22, r.kh = .07, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.1 and r.jh = 0.22
Difference: r.jk - r.jh = -0.12
Related correlation: r.kh = 0.07
Group size: n = 354
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -1.6826, p-value = 0.0925
  Null hypothesis retained

Choice

# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .10, r.jh = .37, r.kh = .15, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.1 and r.jh = 0.37
Difference: r.jk - r.jh = -0.27
Related correlation: r.kh = 0.15
Group size: n = 354
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -4.0746, p-value = 0.0000
  Null hypothesis rejected

Friend-Like

No Choice

# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .02, r.jh = .25, r.kh = .04, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.02 and r.jh = 0.25
Difference: r.jk - r.jh = -0.23
Related correlation: r.kh = 0.04
Group size: n = 345
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -3.1271, p-value = 0.0018
  Null hypothesis rejected

Choice

# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .13, r.jh = .60, r.kh = .03, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.13 and r.jh = 0.6
Difference: r.jk - r.jh = -0.47
Related correlation: r.kh = 0.03
Group size: n = 345
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -7.2267, p-value = 0.0000
  Null hypothesis rejected

Prior Help

Stranger-Like

No Choice

# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .10, r.jh = .21, r.kh = .09, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.1 and r.jh = 0.21
Difference: r.jk - r.jh = -0.11
Related correlation: r.kh = 0.09
Group size: n = 354
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -1.5568, p-value = 0.1195
  Null hypothesis retained

Choice

# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .10, r.jh = .42, r.kh = .15, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.1 and r.jh = 0.42
Difference: r.jk - r.jh = -0.32
Related correlation: r.kh = 0.15
Group size: n = 354
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -4.8956, p-value = 0.0000
  Null hypothesis rejected

Friend-Like

No Choice

# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .02, r.jh = .26, r.kh = .09, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.02 and r.jh = 0.26
Difference: r.jk - r.jh = -0.24
Related correlation: r.kh = 0.09
Group size: n = 345
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -3.3557, p-value = 0.0008
  Null hypothesis rejected

Choice

# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .13, r.jh = .56, r.kh = .08, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.13 and r.jh = 0.56
Difference: r.jk - r.jh = -0.43
Related correlation: r.kh = 0.08
Group size: n = 345
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -6.6344, p-value = 0.0000
  Null hypothesis rejected

Future Help

Stranger-Like

No Choice

# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .10, r.jh = .36, r.kh = .17, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.1 and r.jh = 0.36
Difference: r.jk - r.jh = -0.26
Related correlation: r.kh = 0.17
Group size: n = 354
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -3.9597, p-value = 0.0001
  Null hypothesis rejected

Choice

# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .10, r.jh = .51, r.kh = .13, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.1 and r.jh = 0.51
Difference: r.jk - r.jh = -0.41
Related correlation: r.kh = 0.13
Group size: n = 354
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -6.3988, p-value = 0.0000
  Null hypothesis rejected

Friend-Like

No Choice

# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .02, r.jh = .35, r.kh = .03, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.02 and r.jh = 0.35
Difference: r.jk - r.jh = -0.33
Related correlation: r.kh = 0.03
Group size: n = 345
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -4.5466, p-value = 0.0000
  Null hypothesis rejected

Choice

# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .13, r.jh = .60, r.kh = .09, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.13 and r.jh = 0.6
Difference: r.jk - r.jh = -0.47
Related correlation: r.kh = 0.09
Group size: n = 345
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -7.4426, p-value = 0.0000
  Null hypothesis rejected

Prior Interaction

Stranger-Like

No Choice

# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .10, r.jh = .16, r.kh = .14, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.1 and r.jh = 0.16
Difference: r.jk - r.jh = -0.06
Related correlation: r.kh = 0.14
Group size: n = 354
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -0.8679, p-value = 0.3854
  Null hypothesis retained

Choice

# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .10, r.jh = .34, r.kh = .15, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.1 and r.jh = 0.34
Difference: r.jk - r.jh = -0.24
Related correlation: r.kh = 0.15
Group size: n = 354
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -3.5960, p-value = 0.0003
  Null hypothesis rejected

Friend-Like

No Choice

# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .02, r.jh = .26, r.kh = .08, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.02 and r.jh = 0.26
Difference: r.jk - r.jh = -0.24
Related correlation: r.kh = 0.08
Group size: n = 345
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -3.3376, p-value = 0.0008
  Null hypothesis rejected

Choice

# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .13, r.jh = .45, r.kh = .06, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.13 and r.jh = 0.45
Difference: r.jk - r.jh = -0.32
Related correlation: r.kh = 0.06
Group size: n = 345
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -4.6708, p-value = 0.0000
  Null hypothesis rejected

Future Interaction

Stranger-Like

No Choice

# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .10, r.jh = .19, r.kh = .18, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.1 and r.jh = 0.19
Difference: r.jk - r.jh = -0.09
Related correlation: r.kh = 0.18
Group size: n = 354
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -1.3376, p-value = 0.1810
  Null hypothesis retained

Choice

# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .10, r.jh = .45, r.kh = .15, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.1 and r.jh = 0.45
Difference: r.jk - r.jh = -0.35
Related correlation: r.kh = 0.15
Group size: n = 354
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -5.4048, p-value = 0.0000
  Null hypothesis rejected

Friend-Like

No Choice

# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .02, r.jh = .19, r.kh = .03, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.02 and r.jh = 0.19
Difference: r.jk - r.jh = -0.17
Related correlation: r.kh = 0.03
Group size: n = 345
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -2.2817, p-value = 0.0225
  Null hypothesis rejected

Choice

# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .13, r.jh = .54, r.kh = .08, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

  Results of a comparison of two overlapping correlations based on dependent groups

Comparison between r.jk = 0.13 and r.jh = 0.54
Difference: r.jk - r.jh = -0.41
Related correlation: r.kh = 0.08
Group size: n = 345
Null hypothesis: r.jk is equal to r.jh
Alternative hypothesis: r.jk is not equal to r.jh (two-sided)
Alpha: 0.05

steiger1980: Steiger's (1980) modification of Dunn and Clark's z (1969) using average correlations
  z = -6.2662, p-value = 0.0000
  Null hypothesis rejected
---
title: "Experiment 2"
author: "BLINDED FOR PEER REVIEW"
date: '`r format(Sys.time(), "%B %d, %Y")`'
output: 
  html_notebook:
    code_folding: hide
    highlight: tango
    theme: darkly
    toc: yes
    toc_depth: 5
    toc_float: yes
---

<br>

# Data Waves

As pre-registered, we sought to collect data to attain 330 analyzable responses for each between-subjects condition (i.e., 330 participants who passed the attention check for each between-subjects condition, totalling 660 usable responses across the entire experiment). 

<br>

On the first wave of data collection (N = 739), applying the pre-registered exclusion criterion led to adequate samples (i.e., Ns > 330) for all between-subjects datasets. Therefore, we did not launch a second wave of data collection.

<br>

# Data Cleaning

Before data were loaded into R (below), the following changes were made:

(1) Raw variable names from Qualtrics were renamed to be more descriptive.

(2) If there were any responses for the field "Bot_Catcher," these cases were deleted. This field was designed to be an invisible question that only bots would answer (as human respondents would not see the field). However, 0 cases were detected.

(3) Duplicate IP addresses were removed. There were only 4 instances of a duplicate IP address, leading to an N = 735.

(4) All other identifying information was removed (e.g., IP addresses, longitude/latitude, etc.).

<br>

## Loading Data/Packages

Before running this chunk, please load "E2_raw_data.csv" into the R environment.

<br>
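For example (a minimal sketch, not part of the original pipeline; it assumes the file sits in the working directory, so adjust the path as needed):

```{r, eval=FALSE}
# not run: one way to load the raw data used by the chunk below
E2_raw_data <- read.csv("E2_raw_data.csv", stringsAsFactors = FALSE)
```
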
```{r}
# packages should be loaded in the following order to avoid function conflicts
library(psych) # for describing data
library(effsize) # for mean difference effect sizes
library(sjstats) # for eta-squared effect sizes
library(correlation) # for cleaner correlation test output
library(rmcorr) # for repeated-measures correlation tests
library(cocor) # for comparing dependent correlation coefficients
library(tidyverse) # for data manipulation and plotting
```

## Data Separation/Recombining

Data were separated into two distinct data sets (for each between-subjects condition). Then, a between-subjects variable was created within each between-subjects dataset. Last, both datasets were recombined.

<br>
```{r}
# creates dataset that only has participants who made judgments of agents who helped stranger-like family members
E2_SL <- E2_raw_data %>%
  filter(SL_CnS_C_m1 >= 0 | SL_CnS_C_m2 >= 0)

# creates dataset that only has participants who made judgments of agents who friend-like family members
E2_FL <- E2_raw_data %>%
  filter(FL_CnS_C_m1 >= 0 | FL_CnS_C_m2 >= 0)

# create between-subjects condition variable
E2_SL$BSs_cond <- rep("Stranger-Like", nrow(E2_SL))
E2_FL$BSs_cond <- rep("Friend-Like", nrow(E2_FL))

# recombine between-subjects data
E2_all <- rbind(E2_SL, E2_FL)
```

## Implementing Attention Checks

Based on our pre-registered criterion, participants who failed a pre-manipulation attention check were to be excluded from all analyses. The attention check was disguised as an experimental scenario; in the scenario text, participants were instructed to respond with the left-most option on the scale for all seven pre-outcome measures. 

<br>

Participants whose responses averaged above 10 across the seven 100-point pre-outcome scales were excluded. (We used an average, rather than a per-item cutoff, because a small group of participants selected the left-most option for six of the seven pre-outcome measures but answered slightly above 10 on the seventh. When testing how this could have happened, we found that participants using a scroll wheel could have answered the seventh pre-outcome measure correctly, but scrolling could then have dislodged that final answer if they did not first click off of the slider.) This led to a final analyzable N = 699 (a 95% retention rate).

<br>
```{r}
# Create an attention check average variable
E2_all$AC_AVG <- ((E2_all$AC_oblig + E2_all$AC_relate + E2_all$AC_close + E2_all$AC_priorhelp + E2_all$AC_futurehelp + E2_all$AC_priorinteract + E2_all$AC_futureinteract)/7)

# Create dataset that filters out inattentive participants
E2_all_clean <- E2_all %>%
  # excludes participants who were not paying attention
    filter(AC_AVG < 10)
```
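
As a quick sanity check (a sketch; not part of the pre-registered analyses), the analyzable N (699) and the retention rate reported above can be recomputed directly from these objects:

```{r, eval=FALSE}
# not run: confirm the analyzable N and retention rate after the attention-check exclusion
nrow(E2_all_clean)                    # expected: 699
nrow(E2_all_clean) / nrow(E2_all)     # expected: ~0.95
```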

## Creating Analysis Variables
```{r}
# Main DVs
# create single column for each condition's variables that collapses across presentation order of DVs

# e.g., SL_CnS_C_o1 = "Stranger-Like" family members dataset, "No Choice" condition, CUZ obligation judgment, obligation judgment presented first
# to clarify, as noted in the Method section (and SOM), six other pre-outcome judgments were collected, counterbalanced so that obligation judgments were either first or last (1 = obligation first, 2 = obligation last)
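# note: framing (SL vs. FL) and presentation order are counterbalanced, so each participant
# should have a response in only one of the four columns summed in each call below;
# rowSums(..., na.rm = TRUE) therefore just carries that single response forward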

E2_all_clean$NoChoice_CUZ_oblig  <- rowSums(E2_all_clean[, c("SL_CnS_C_o1", "SL_CnS_C_o2",
                                                               "FL_CnS_C_o1", "FL_CnS_C_o2")], 
                                                na.rm = T)
E2_all_clean$NoChoice_CUZ_relate  <- rowSums(E2_all_clean[, c("SL_CnS_C_r1", "SL_CnS_C_r2",
                                                                "FL_CnS_C_r1", "FL_CnS_C_r2")], 
                                                na.rm = T)
E2_all_clean$NoChoice_CUZ_close  <- rowSums(E2_all_clean[, c("SL_CnS_C_c1", "SL_CnS_C_c2",
                                                               "FL_CnS_C_c1", "FL_CnS_C_c2")], 
                                                na.rm = T)
E2_all_clean$NoChoice_CUZ_priorhelp  <- rowSums(E2_all_clean[, c("SL_CnS_C_ph1", "SL_CnS_C_ph2",
                                                                   "FL_CnS_C_ph1", "FL_CnS_C_ph2")],
                                                na.rm = T)
E2_all_clean$NoChoice_CUZ_futurehelp  <- rowSums(E2_all_clean[, c("SL_CnS_C_fh1", "SL_CnS_C_fh2",
                                                                   "FL_CnS_C_fh1", "FL_CnS_C_fh2")], 
                                                na.rm = T)
E2_all_clean$NoChoice_CUZ_priorinteract  <- rowSums(E2_all_clean[, c("SL_CnS_C_pi1", "SL_CnS_C_pi2",
                                                                   "FL_CnS_C_pi1", "FL_CnS_C_pi2")],
                                                na.rm = T)
E2_all_clean$NoChoice_CUZ_futureinteract  <- rowSums(E2_all_clean[, c("SL_CnS_C_fi1", "SL_CnS_C_fi2",
                                                                   "FL_CnS_C_fi1", "FL_CnS_C_fi2")],
                                                na.rm = T)
E2_all_clean$NoChoice_CUZ_moral  <- rowSums(E2_all_clean[, c("SL_CnS_C_m1", "SL_CnS_C_m2",
                                                               "FL_CnS_C_m1", "FL_CnS_C_m2")], 
                                                na.rm = T)

E2_all_clean$NoChoice_SIB_oblig  <- rowSums(E2_all_clean[, c("SL_CnS_S_o1", "SL_CnS_S_o2",
                                                               "FL_CnS_S_o1", "FL_CnS_S_o2")], 
                                                na.rm = T)
E2_all_clean$NoChoice_SIB_relate  <- rowSums(E2_all_clean[, c("SL_CnS_S_r1", "SL_CnS_S_r2",
                                                                "FL_CnS_S_r1", "FL_CnS_S_r2")], 
                                                na.rm = T)
E2_all_clean$NoChoice_SIB_close  <- rowSums(E2_all_clean[, c("SL_CnS_S_c1", "SL_CnS_S_c2",
                                                               "FL_CnS_S_c1", "FL_CnS_S_c2")], 
                                                na.rm = T)
E2_all_clean$NoChoice_SIB_priorhelp  <- rowSums(E2_all_clean[, c("SL_CnS_S_ph1", "SL_CnS_S_ph2",
                                                                   "FL_CnS_S_ph1", "FL_CnS_S_ph2")],
                                                na.rm = T)
E2_all_clean$NoChoice_SIB_futurehelp  <- rowSums(E2_all_clean[, c("SL_CnS_S_fh1", "SL_CnS_S_fh2",
                                                                   "FL_CnS_S_fh1", "FL_CnS_S_fh2")], 
                                                na.rm = T)
E2_all_clean$NoChoice_SIB_priorinteract  <- rowSums(E2_all_clean[, c("SL_CnS_S_pi1", "SL_CnS_S_pi2",
                                                                   "FL_CnS_S_pi1", "FL_CnS_S_pi2")],
                                                na.rm = T)
E2_all_clean$NoChoice_SIB_futureinteract  <- rowSums(E2_all_clean[, c("SL_CnS_S_fi1", "SL_CnS_S_fi2",
                                                                   "FL_CnS_S_fi1", "FL_CnS_S_fi2")],
                                                na.rm = T)
E2_all_clean$NoChoice_SIB_moral  <- rowSums(E2_all_clean[, c("SL_CnS_S_m1", "SL_CnS_S_m2",
                                                               "FL_CnS_S_m1", "FL_CnS_S_m2")], 
                                                na.rm = T)

# e.g., SL_CnS_CoS_C_o11 = "Stranger-Like" family members dataset, "Choice" condition, CUZ obligation judgment, CUZ measures first, obligation judgment presented first
# to clarify, as noted in the Method section, two obligation (and other pre-outcome) judgments were collected in these conditions -- one for each potential beneficiary (e.g., CUZ and SIB), and they get averaged together later on in this same code chunk
E2_all_clean$CUZoSIB_CUZ_oblig <- rowSums(E2_all_clean[, c("SL_CnS_CoS_C_o11", "SL_CnS_CoS_C_o12",
                                                           "SL_CnS_CoS_C_o21", "SL_CnS_CoS_C_o22",
                                                           "FL_CnS_CoS_C_o11", "FL_CnS_CoS_C_o12",
                                                           "FL_CnS_CoS_C_o21", "FL_CnS_CoS_C_o22")],
                                                    na.rm = T) 
E2_all_clean$CUZoSIB_CUZ_relate <- rowSums(E2_all_clean[, c("SL_CnS_CoS_C_r11", "SL_CnS_CoS_C_r12",
                                                           "SL_CnS_CoS_C_r21", "SL_CnS_CoS_C_r22",
                                                           "FL_CnS_CoS_C_r11", "FL_CnS_CoS_C_r12",
                                                           "FL_CnS_CoS_C_r21", "FL_CnS_CoS_C_r22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_CUZ_close <- rowSums(E2_all_clean[, c("SL_CnS_CoS_C_c11", "SL_CnS_CoS_C_c12",
                                                           "SL_CnS_CoS_C_c21", "SL_CnS_CoS_C_c22",
                                                           "FL_CnS_CoS_C_c11", "FL_CnS_CoS_C_c12",
                                                           "FL_CnS_CoS_C_c21", "FL_CnS_CoS_C_c22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_CUZ_priorhelp <- rowSums(E2_all_clean[, c("SL_CnS_CoS_C_ph11", "SL_CnS_CoS_C_ph12",
                                                           "SL_CnS_CoS_C_ph21", "SL_CnS_CoS_C_ph22",
                                                           "FL_CnS_CoS_C_ph11", "FL_CnS_CoS_C_ph12",
                                                           "FL_CnS_CoS_C_ph21", "FL_CnS_CoS_C_ph22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_CUZ_futurehelp <- rowSums(E2_all_clean[, c("SL_CnS_CoS_C_fh11", "SL_CnS_CoS_C_fh12",
                                                           "SL_CnS_CoS_C_fh21", "SL_CnS_CoS_C_fh22",
                                                           "FL_CnS_CoS_C_fh11", "FL_CnS_CoS_C_fh12",
                                                           "FL_CnS_CoS_C_fh21", "FL_CnS_CoS_C_fh22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_CUZ_priorinteract <- rowSums(E2_all_clean[, c("SL_CnS_CoS_C_pi11", "SL_CnS_CoS_C_pi12",
                                                           "SL_CnS_CoS_C_pi21", "SL_CnS_CoS_C_pi22",
                                                           "FL_CnS_CoS_C_pi11", "FL_CnS_CoS_C_pi12",
                                                           "FL_CnS_CoS_C_pi21", "FL_CnS_CoS_C_pi22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_CUZ_futureinteract <- rowSums(E2_all_clean[, c("SL_CnS_CoS_C_fi11", "SL_CnS_CoS_C_fi12",
                                                           "SL_CnS_CoS_C_fi21", "SL_CnS_CoS_C_fi22",
                                                           "FL_CnS_CoS_C_fi11", "FL_CnS_CoS_C_fi12",
                                                           "FL_CnS_CoS_C_fi21", "FL_CnS_CoS_C_fi22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_SIB_oblig <- rowSums(E2_all_clean[, c("SL_CnS_CoS_S_o11", "SL_CnS_CoS_S_o12",
                                                           "SL_CnS_CoS_S_o21", "SL_CnS_CoS_S_o22",
                                                           "FL_CnS_CoS_S_o11", "FL_CnS_CoS_S_o12",
                                                           "FL_CnS_CoS_S_o21", "FL_CnS_CoS_S_o22")],
                                                    na.rm = T) 
E2_all_clean$CUZoSIB_SIB_relate <- rowSums(E2_all_clean[, c("SL_CnS_CoS_S_r11", "SL_CnS_CoS_S_r12",
                                                           "SL_CnS_CoS_S_r21", "SL_CnS_CoS_S_r22",
                                                           "FL_CnS_CoS_S_r11", "FL_CnS_CoS_S_r12",
                                                           "FL_CnS_CoS_S_r21", "FL_CnS_CoS_S_r22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_SIB_close <- rowSums(E2_all_clean[, c("SL_CnS_CoS_S_c11", "SL_CnS_CoS_S_c12",
                                                           "SL_CnS_CoS_S_c21", "SL_CnS_CoS_S_c22",
                                                           "FL_CnS_CoS_S_c11", "FL_CnS_CoS_S_c12",
                                                           "FL_CnS_CoS_S_c21", "FL_CnS_CoS_S_c22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_SIB_priorhelp <- rowSums(E2_all_clean[, c("SL_CnS_CoS_S_ph11", "SL_CnS_CoS_S_ph12",
                                                           "SL_CnS_CoS_S_ph21", "SL_CnS_CoS_S_ph22",
                                                           "FL_CnS_CoS_S_ph11", "FL_CnS_CoS_S_ph12",
                                                           "FL_CnS_CoS_S_ph21", "FL_CnS_CoS_S_ph22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_SIB_futurehelp <- rowSums(E2_all_clean[, c("SL_CnS_CoS_S_fh11", "SL_CnS_CoS_S_fh12",
                                                           "SL_CnS_CoS_S_fh21", "SL_CnS_CoS_S_fh22",
                                                           "FL_CnS_CoS_S_fh11", "FL_CnS_CoS_S_fh12",
                                                           "FL_CnS_CoS_S_fh21", "FL_CnS_CoS_S_fh22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_SIB_priorinteract <- rowSums(E2_all_clean[, c("SL_CnS_CoS_S_pi11", "SL_CnS_CoS_S_pi12",
                                                           "SL_CnS_CoS_S_pi21", "SL_CnS_CoS_S_pi22",
                                                           "FL_CnS_CoS_S_pi11", "FL_CnS_CoS_S_pi12",
                                                           "FL_CnS_CoS_S_pi21", "FL_CnS_CoS_S_pi22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_SIB_futureinteract <- rowSums(E2_all_clean[, c("SL_CnS_CoS_S_fi11", "SL_CnS_CoS_S_fi12",
                                                           "SL_CnS_CoS_S_fi21", "SL_CnS_CoS_S_fi22",
                                                           "FL_CnS_CoS_S_fi11", "FL_CnS_CoS_S_fi12",
                                                           "FL_CnS_CoS_S_fi21", "FL_CnS_CoS_S_fi22")],
                                                    na.rm = T)
E2_all_clean$CUZoSIB_CUZ_moral <- rowSums(E2_all_clean[, c("SL_CnS_CoS_C_m11", "SL_CnS_CoS_C_m12",
                                                           "SL_CnS_CoS_C_m21", "SL_CnS_CoS_C_m22",
                                                           "FL_CnS_CoS_C_m11", "FL_CnS_CoS_C_m12",
                                                           "FL_CnS_CoS_C_m21", "FL_CnS_CoS_C_m22")],
                                                    na.rm = T)

E2_all_clean$SIBoCUZ_CUZ_oblig <- rowSums(E2_all_clean[, c("SL_CnS_SoC_C_o11", "SL_CnS_SoC_C_o12",
                                                           "SL_CnS_SoC_C_o21", "SL_CnS_SoC_C_o22",
                                                           "FL_CnS_SoC_C_o11", "FL_CnS_SoC_C_o12",
                                                           "FL_CnS_SoC_C_o21", "FL_CnS_SoC_C_o22")],
                                                    na.rm = T) 
E2_all_clean$SIBoCUZ_CUZ_relate <- rowSums(E2_all_clean[, c("SL_CnS_SoC_C_r11", "SL_CnS_SoC_C_r12",
                                                           "SL_CnS_SoC_C_r21", "SL_CnS_SoC_C_r22",
                                                           "FL_CnS_SoC_C_r11", "FL_CnS_SoC_C_r12",
                                                           "FL_CnS_SoC_C_r21", "FL_CnS_SoC_C_r22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_CUZ_close <- rowSums(E2_all_clean[, c("SL_CnS_SoC_C_c11", "SL_CnS_SoC_C_c12",
                                                           "SL_CnS_SoC_C_c21", "SL_CnS_SoC_C_c22",
                                                           "FL_CnS_SoC_C_c11", "FL_CnS_SoC_C_c12",
                                                           "FL_CnS_SoC_C_c21", "FL_CnS_SoC_C_c22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_CUZ_priorhelp <- rowSums(E2_all_clean[, c("SL_CnS_SoC_C_ph11", "SL_CnS_SoC_C_ph12",
                                                           "SL_CnS_SoC_C_ph21", "SL_CnS_SoC_C_ph22",
                                                           "FL_CnS_SoC_C_ph11", "FL_CnS_SoC_C_ph12",
                                                           "FL_CnS_SoC_C_ph21", "FL_CnS_SoC_C_ph22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_CUZ_futurehelp <- rowSums(E2_all_clean[, c("SL_CnS_SoC_C_fh11", "SL_CnS_SoC_C_fh12",
                                                           "SL_CnS_SoC_C_fh21", "SL_CnS_SoC_C_fh22",
                                                           "FL_CnS_SoC_C_fh11", "FL_CnS_SoC_C_fh12",
                                                           "FL_CnS_SoC_C_fh21", "FL_CnS_SoC_C_fh22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_CUZ_priorinteract <- rowSums(E2_all_clean[, c("SL_CnS_SoC_C_pi11", "SL_CnS_SoC_C_pi12",
                                                           "SL_CnS_SoC_C_pi21", "SL_CnS_SoC_C_pi22",
                                                           "FL_CnS_SoC_C_pi11", "FL_CnS_SoC_C_pi12",
                                                           "FL_CnS_SoC_C_pi21", "FL_CnS_SoC_C_pi22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_CUZ_futureinteract <- rowSums(E2_all_clean[, c("SL_CnS_SoC_C_fi11", "SL_CnS_SoC_C_fi12",
                                                           "SL_CnS_SoC_C_fi21", "SL_CnS_SoC_C_fi22",
                                                           "FL_CnS_SoC_C_fi11", "FL_CnS_SoC_C_fi12",
                                                           "FL_CnS_SoC_C_fi21", "FL_CnS_SoC_C_fi22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_SIB_oblig <- rowSums(E2_all_clean[, c("SL_CnS_SoC_S_o11", "SL_CnS_SoC_S_o12",
                                                           "SL_CnS_SoC_S_o21", "SL_CnS_SoC_S_o22",
                                                           "FL_CnS_SoC_S_o11", "FL_CnS_SoC_S_o12",
                                                           "FL_CnS_SoC_S_o21", "FL_CnS_SoC_S_o22")],
                                                    na.rm = T) 
E2_all_clean$SIBoCUZ_SIB_relate <- rowSums(E2_all_clean[, c("SL_CnS_SoC_S_r11", "SL_CnS_SoC_S_r12",
                                                           "SL_CnS_SoC_S_r21", "SL_CnS_SoC_S_r22",
                                                           "FL_CnS_SoC_S_r11", "FL_CnS_SoC_S_r12",
                                                           "FL_CnS_SoC_S_r21", "FL_CnS_SoC_S_r22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_SIB_close <- rowSums(E2_all_clean[, c("SL_CnS_SoC_S_c11", "SL_CnS_SoC_S_c12",
                                                           "SL_CnS_SoC_S_c21", "SL_CnS_SoC_S_c22",
                                                           "FL_CnS_SoC_S_c11", "FL_CnS_SoC_S_c12",
                                                           "FL_CnS_SoC_S_c21", "FL_CnS_SoC_S_c22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_SIB_priorhelp <- rowSums(E2_all_clean[, c("SL_CnS_SoC_S_ph11", "SL_CnS_SoC_S_ph12",
                                                           "SL_CnS_SoC_S_ph21", "SL_CnS_SoC_S_ph22",
                                                           "FL_CnS_SoC_S_ph11", "FL_CnS_SoC_S_ph12",
                                                           "FL_CnS_SoC_S_ph21", "FL_CnS_SoC_S_ph22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_SIB_futurehelp <- rowSums(E2_all_clean[, c("SL_CnS_SoC_S_fh11", "SL_CnS_SoC_S_fh12",
                                                           "SL_CnS_SoC_S_fh21", "SL_CnS_SoC_S_fh22",
                                                           "FL_CnS_SoC_S_fh11", "FL_CnS_SoC_S_fh12",
                                                           "FL_CnS_SoC_S_fh21", "FL_CnS_SoC_S_fh22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_SIB_priorinteract <- rowSums(E2_all_clean[, c("SL_CnS_SoC_S_pi11", "SL_CnS_SoC_S_pi12",
                                                           "SL_CnS_SoC_S_pi21", "SL_CnS_SoC_S_pi22",
                                                           "FL_CnS_SoC_S_pi11", "FL_CnS_SoC_S_pi12",
                                                           "FL_CnS_SoC_S_pi21", "FL_CnS_SoC_S_pi22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_SIB_futureinteract <- rowSums(E2_all_clean[, c("SL_CnS_SoC_S_fi11", "SL_CnS_SoC_S_fi12",
                                                           "SL_CnS_SoC_S_fi21", "SL_CnS_SoC_S_fi22",
                                                           "FL_CnS_SoC_S_fi11", "FL_CnS_SoC_S_fi12",
                                                           "FL_CnS_SoC_S_fi21", "FL_CnS_SoC_S_fi22")],
                                                    na.rm = T)
E2_all_clean$SIBoCUZ_SIB_moral <- rowSums(E2_all_clean[, c("SL_CnS_SoC_S_m11", "SL_CnS_SoC_S_m12",
                                                           "SL_CnS_SoC_S_m21", "SL_CnS_SoC_S_m22",
                                                           "FL_CnS_SoC_S_m11", "FL_CnS_SoC_S_m12",
                                                           "FL_CnS_SoC_S_m21", "FL_CnS_SoC_S_m22")],
                                                    na.rm = T)


E2_all_clean$Choice_CUZ_oblig  <- (E2_all_clean$CUZoSIB_CUZ_oblig +
                                   E2_all_clean$SIBoCUZ_CUZ_oblig)/2 # creates pre-reg'd index
E2_all_clean$Choice_CUZ_relate  <- (E2_all_clean$CUZoSIB_CUZ_relate +
                                    E2_all_clean$SIBoCUZ_CUZ_relate)/2 # creates pre-reg'd index
E2_all_clean$Choice_CUZ_close  <- (E2_all_clean$CUZoSIB_CUZ_close +
                                   E2_all_clean$SIBoCUZ_CUZ_close)/2 # creates pre-reg'd index
E2_all_clean$Choice_CUZ_priorhelp  <- (E2_all_clean$CUZoSIB_CUZ_priorhelp +
                                       E2_all_clean$SIBoCUZ_CUZ_priorhelp)/2 # creates pre-reg'd index
E2_all_clean$Choice_CUZ_futurehelp  <- (E2_all_clean$CUZoSIB_CUZ_futurehelp +
                                        E2_all_clean$SIBoCUZ_CUZ_futurehelp)/2 # creates pre-reg'd index
E2_all_clean$Choice_CUZ_priorinteract  <- (E2_all_clean$CUZoSIB_CUZ_priorinteract +
                                           E2_all_clean$SIBoCUZ_CUZ_priorinteract)/2 # creates pre-reg'd index
E2_all_clean$Choice_CUZ_futureinteract  <- (E2_all_clean$CUZoSIB_CUZ_futureinteract +
                                            E2_all_clean$SIBoCUZ_CUZ_futureinteract)/2 # creates pre-reg'd index
E2_all_clean$Choice_CUZ_moral  <- E2_all_clean$CUZoSIB_CUZ_moral # single judgment (post-outcome)

E2_all_clean$Choice_SIB_oblig  <- (E2_all_clean$CUZoSIB_SIB_oblig +
                                   E2_all_clean$SIBoCUZ_SIB_oblig)/2 # creates pre-reg'd index
E2_all_clean$Choice_SIB_relate  <- (E2_all_clean$CUZoSIB_SIB_relate +
                                    E2_all_clean$SIBoCUZ_SIB_relate)/2 # creates pre-reg'd index
E2_all_clean$Choice_SIB_close  <- (E2_all_clean$CUZoSIB_SIB_close +
                                   E2_all_clean$SIBoCUZ_SIB_close)/2 # creates pre-reg'd index
E2_all_clean$Choice_SIB_priorhelp  <- (E2_all_clean$CUZoSIB_SIB_priorhelp +
                                       E2_all_clean$SIBoCUZ_SIB_priorhelp)/2 # creates pre-reg'd index
E2_all_clean$Choice_SIB_futurehelp  <- (E2_all_clean$CUZoSIB_SIB_futurehelp +
                                        E2_all_clean$SIBoCUZ_SIB_futurehelp)/2 # creates pre-reg'd index
E2_all_clean$Choice_SIB_priorinteract  <- (E2_all_clean$CUZoSIB_SIB_priorinteract +
                                           E2_all_clean$SIBoCUZ_SIB_priorinteract)/2 # creates pre-reg'd index
E2_all_clean$Choice_SIB_futureinteract  <- (E2_all_clean$CUZoSIB_SIB_futureinteract +
                                            E2_all_clean$SIBoCUZ_SIB_futureinteract)/2 # creates pre-reg'd index
E2_all_clean$Choice_SIB_moral  <- E2_all_clean$SIBoCUZ_SIB_moral # single judgment (post-outcome)


# Difference Scores
# CUZminusSIB obligation within No Choice or Choice conditions (for diff score corrs and ind. diffs analyses)
E2_all_clean$NoChoice_CUZminusSIB_oblig <- E2_all_clean$NoChoice_CUZ_oblig - E2_all_clean$NoChoice_SIB_oblig
E2_all_clean$NoChoice_CUZminusSIB_relate <- E2_all_clean$NoChoice_CUZ_relate - E2_all_clean$NoChoice_SIB_relate
E2_all_clean$NoChoice_CUZminusSIB_close <- E2_all_clean$NoChoice_CUZ_close - E2_all_clean$NoChoice_SIB_close
E2_all_clean$NoChoice_CUZminusSIB_priorhelp <- E2_all_clean$NoChoice_CUZ_priorhelp - E2_all_clean$NoChoice_SIB_priorhelp
E2_all_clean$NoChoice_CUZminusSIB_futurehelp <- E2_all_clean$NoChoice_CUZ_futurehelp - E2_all_clean$NoChoice_SIB_futurehelp
E2_all_clean$NoChoice_CUZminusSIB_priorinteract <- E2_all_clean$NoChoice_CUZ_priorinteract - E2_all_clean$NoChoice_SIB_priorinteract
E2_all_clean$NoChoice_CUZminusSIB_futureinteract <- E2_all_clean$NoChoice_CUZ_futureinteract - E2_all_clean$NoChoice_SIB_futureinteract
E2_all_clean$NoChoice_CUZminusSIB_moral <- E2_all_clean$NoChoice_CUZ_moral - E2_all_clean$NoChoice_SIB_moral

E2_all_clean$Choice_CUZminusSIB_oblig <- E2_all_clean$Choice_CUZ_oblig - E2_all_clean$Choice_SIB_oblig
E2_all_clean$Choice_CUZminusSIB_relate <- E2_all_clean$Choice_CUZ_relate - E2_all_clean$Choice_SIB_relate
E2_all_clean$Choice_CUZminusSIB_close <- E2_all_clean$Choice_CUZ_close - E2_all_clean$Choice_SIB_close
E2_all_clean$Choice_CUZminusSIB_priorhelp <- E2_all_clean$Choice_CUZ_priorhelp - E2_all_clean$Choice_SIB_priorhelp
E2_all_clean$Choice_CUZminusSIB_futurehelp <- E2_all_clean$Choice_CUZ_futurehelp - E2_all_clean$Choice_SIB_futurehelp
E2_all_clean$Choice_CUZminusSIB_priorinteract <- E2_all_clean$Choice_CUZ_priorinteract - E2_all_clean$Choice_SIB_priorinteract
E2_all_clean$Choice_CUZminusSIB_futureinteract <- E2_all_clean$Choice_CUZ_futureinteract - E2_all_clean$Choice_SIB_futureinteract
E2_all_clean$Choice_CUZminusSIB_moral <- E2_all_clean$Choice_CUZ_moral - E2_all_clean$Choice_SIB_moral


# Individual Difference Measures (for ind. diffs analyses)

# MAC (Morality-as-Cooperation scale) composites
# first reverse-score the property judgment subscale per Curry et al. (2019); the expression below is equivalent to 100 - x
E2_all_clean$MAC_Jud_19_r <- ((102 - (E2_all_clean$MAC_Jud_19 +1)) - 1) 
E2_all_clean$MAC_Jud_20_r <- ((102 - (E2_all_clean$MAC_Jud_20 +1)) - 1)
E2_all_clean$MAC_Jud_21_r <- ((102 - (E2_all_clean$MAC_Jud_21 +1)) - 1)

E2_all_clean$MAC_Fam_Combined <- ((E2_all_clean$MAC_Jud_1 + E2_all_clean$MAC_Jud_2 + E2_all_clean$MAC_Jud_3 +
                                       E2_all_clean$MAC_Rel_1 + E2_all_clean$MAC_Rel_2 + E2_all_clean$MAC_Rel_3)/6)
E2_all_clean$MAC_Fam_Jud <- ((E2_all_clean$MAC_Jud_1 + E2_all_clean$MAC_Jud_2 + E2_all_clean$MAC_Jud_3)/3)
E2_all_clean$MAC_Fam_Rel <- ((E2_all_clean$MAC_Rel_1 + E2_all_clean$MAC_Rel_2 + E2_all_clean$MAC_Rel_3)/3)

E2_all_clean$MAC_Group_Combined <- ((E2_all_clean$MAC_Jud_4 + E2_all_clean$MAC_Jud_5 + E2_all_clean$MAC_Jud_6 +
                                       E2_all_clean$MAC_Rel_4 + E2_all_clean$MAC_Rel_5 + E2_all_clean$MAC_Rel_6)/6)
E2_all_clean$MAC_Group_Jud <- ((E2_all_clean$MAC_Jud_4 + E2_all_clean$MAC_Jud_5 + E2_all_clean$MAC_Jud_6)/3)
E2_all_clean$MAC_Group_Rel <- ((E2_all_clean$MAC_Rel_4 + E2_all_clean$MAC_Rel_5 + E2_all_clean$MAC_Rel_6)/3)

E2_all_clean$MAC_Rec_Combined <- ((E2_all_clean$MAC_Jud_7 + E2_all_clean$MAC_Jud_8 + E2_all_clean$MAC_Jud_9 +
                                       E2_all_clean$MAC_Rel_7 + E2_all_clean$MAC_Rel_8 + E2_all_clean$MAC_Rel_9)/6)
E2_all_clean$MAC_Rec_Jud <- ((E2_all_clean$MAC_Jud_7 + E2_all_clean$MAC_Jud_8 + E2_all_clean$MAC_Jud_9)/3)
E2_all_clean$MAC_Rec_Rel <- ((E2_all_clean$MAC_Rel_7 + E2_all_clean$MAC_Rel_8 + E2_all_clean$MAC_Rel_9)/3)

E2_all_clean$MAC_Hero_Combined <- ((E2_all_clean$MAC_Jud_10 + E2_all_clean$MAC_Jud_11 + E2_all_clean$MAC_Jud_12 +
                                       E2_all_clean$MAC_Rel_10 + E2_all_clean$MAC_Rel_11 + E2_all_clean$MAC_Rel_12)/6)
E2_all_clean$MAC_Hero_Jud <- ((E2_all_clean$MAC_Jud_10 + E2_all_clean$MAC_Jud_11 + E2_all_clean$MAC_Jud_12)/3)
E2_all_clean$MAC_Hero_Rel <- ((E2_all_clean$MAC_Rel_10 + E2_all_clean$MAC_Rel_11 + E2_all_clean$MAC_Rel_12)/3)

E2_all_clean$MAC_Def_Combined <- ((E2_all_clean$MAC_Jud_13 + E2_all_clean$MAC_Jud_14 + E2_all_clean$MAC_Jud_15 +
                                       E2_all_clean$MAC_Rel_13 + E2_all_clean$MAC_Rel_14 + E2_all_clean$MAC_Rel_15)/6)
E2_all_clean$MAC_Def_Jud <- ((E2_all_clean$MAC_Jud_13 + E2_all_clean$MAC_Jud_14 + E2_all_clean$MAC_Jud_15)/3)
E2_all_clean$MAC_Def_Rel <- ((E2_all_clean$MAC_Rel_13 + E2_all_clean$MAC_Rel_14 + E2_all_clean$MAC_Rel_15)/3)

E2_all_clean$MAC_Fair_Combined <- ((E2_all_clean$MAC_Jud_16 + E2_all_clean$MAC_Jud_17 + E2_all_clean$MAC_Jud_18 +
                                       E2_all_clean$MAC_Rel_16 + E2_all_clean$MAC_Rel_17 + E2_all_clean$MAC_Rel_18)/6)
E2_all_clean$MAC_Fair_Jud <- ((E2_all_clean$MAC_Jud_16 + E2_all_clean$MAC_Jud_17 + E2_all_clean$MAC_Jud_18)/3)
E2_all_clean$MAC_Fair_Rel <- ((E2_all_clean$MAC_Rel_16 + E2_all_clean$MAC_Rel_17 + E2_all_clean$MAC_Rel_18)/3)

E2_all_clean$MAC_Prop_Combined <- ((E2_all_clean$MAC_Jud_19_r + E2_all_clean$MAC_Jud_20_r + E2_all_clean$MAC_Jud_21_r +
                                       E2_all_clean$MAC_Rel_19 + E2_all_clean$MAC_Rel_20 + E2_all_clean$MAC_Rel_21)/6)
E2_all_clean$MAC_Prop_Jud <- ((E2_all_clean$MAC_Jud_19_r + E2_all_clean$MAC_Jud_20_r + E2_all_clean$MAC_Jud_21_r)/3)
E2_all_clean$MAC_Prop_Rel <- ((E2_all_clean$MAC_Rel_19 + E2_all_clean$MAC_Rel_20 + E2_all_clean$MAC_Rel_21)/3)


# MFQ (Moral Foundations Theory scale) composites
E2_all_clean$MFQ_Harm_Combined <- ((E2_all_clean$MFQ_Jud_1 + E2_all_clean$MFQ_Jud_2 + E2_all_clean$MFQ_Jud_3 +
                                       E2_all_clean$MFQ_Rel_1 + E2_all_clean$MFQ_Rel_2 + E2_all_clean$MFQ_Rel_3)/6)
E2_all_clean$MFQ_Harm_Jud <- ((E2_all_clean$MFQ_Jud_1 + E2_all_clean$MFQ_Jud_2 + E2_all_clean$MFQ_Jud_3)/3)
E2_all_clean$MFQ_Harm_Rel <- ((E2_all_clean$MFQ_Rel_1 + E2_all_clean$MFQ_Rel_2 + E2_all_clean$MFQ_Rel_3)/3)

E2_all_clean$MFQ_Fairness_Combined <- ((E2_all_clean$MFQ_Jud_4 + E2_all_clean$MFQ_Jud_5 + E2_all_clean$MFQ_Jud_6 +
                                       E2_all_clean$MFQ_Rel_4 + E2_all_clean$MFQ_Rel_5 + E2_all_clean$MFQ_Rel_6)/6)
E2_all_clean$MFQ_Fairness_Jud <- ((E2_all_clean$MFQ_Jud_4 + E2_all_clean$MFQ_Jud_5 + E2_all_clean$MFQ_Jud_6)/3)
E2_all_clean$MFQ_Fairness_Rel <- ((E2_all_clean$MFQ_Rel_4 + E2_all_clean$MFQ_Rel_5 + E2_all_clean$MFQ_Rel_6)/3)

E2_all_clean$MFQ_Loyalty_Combined <- ((E2_all_clean$MFQ_Jud_7 + E2_all_clean$MFQ_Jud_8 + E2_all_clean$MFQ_Jud_9 +
                                       E2_all_clean$MFQ_Rel_7 + E2_all_clean$MFQ_Rel_8 + E2_all_clean$MFQ_Rel_9)/6)
E2_all_clean$MFQ_Loyalty_Jud <- ((E2_all_clean$MFQ_Jud_7 + E2_all_clean$MFQ_Jud_8 + E2_all_clean$MFQ_Jud_9)/3)
E2_all_clean$MFQ_Loyalty_Rel <- ((E2_all_clean$MFQ_Rel_7 + E2_all_clean$MFQ_Rel_8 + E2_all_clean$MFQ_Rel_9)/3)

E2_all_clean$MFQ_Authority_Combined <- ((E2_all_clean$MFQ_Jud_10 + E2_all_clean$MFQ_Jud_11 + E2_all_clean$MFQ_Jud_12 +
                                       E2_all_clean$MFQ_Rel_10 + E2_all_clean$MFQ_Rel_11 + E2_all_clean$MFQ_Rel_12)/6)
E2_all_clean$MFQ_Authority_Jud <- ((E2_all_clean$MFQ_Jud_10 + E2_all_clean$MFQ_Jud_11 + E2_all_clean$MFQ_Jud_12)/3)
E2_all_clean$MFQ_Authority_Rel <- ((E2_all_clean$MFQ_Rel_10 + E2_all_clean$MFQ_Rel_11 + E2_all_clean$MFQ_Rel_12)/3)

E2_all_clean$MFQ_Purity_Combined <- ((E2_all_clean$MFQ_Jud_13 + E2_all_clean$MFQ_Jud_14 + E2_all_clean$MFQ_Jud_15 +
                                       E2_all_clean$MFQ_Rel_13 + E2_all_clean$MFQ_Rel_14 + E2_all_clean$MFQ_Rel_15)/6)
E2_all_clean$MFQ_Purity_Jud <- ((E2_all_clean$MFQ_Jud_13 + E2_all_clean$MFQ_Jud_14 + E2_all_clean$MFQ_Jud_15)/3)
E2_all_clean$MFQ_Purity_Rel <- ((E2_all_clean$MFQ_Rel_13 + E2_all_clean$MFQ_Rel_14 + E2_all_clean$MFQ_Rel_15)/3)

# OUS (Oxford Utilitarianism Scale) composites
E2_all_clean$OUS_IB <- ((E2_all_clean$OUS_IB1 + E2_all_clean$OUS_IB2 + E2_all_clean$OUS_IB3 +
                             E2_all_clean$OUS_IB4 + E2_all_clean$OUS_IB5)/5)
E2_all_clean$OUS_IH <- ((E2_all_clean$OUS_IH1 + E2_all_clean$OUS_IH2 + E2_all_clean$OUS_IH3 +
                             E2_all_clean$OUS_IH4)/4)
```
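
The collapsing code above is intentionally explicit so that each pre-registered variable is visible line by line. For readers adapting this script, a more compact but behaviorally equivalent pattern is sketched below; the `collapse_cols` helper and the `_check` variable are our own illustration and not part of the original pipeline.

```{r, eval = FALSE}
# Hedged sketch: a helper that collapses a set of counterbalanced columns into
# one score, mirroring the rowSums(..., na.rm = TRUE) calls above
collapse_cols <- function(data, cols) {
  rowSums(data[, cols, drop = FALSE], na.rm = TRUE)
}

# Example: rebuild the No Choice CUZ obligation variable from its four
# order-counterbalanced source columns
nochoice_cuz_oblig_cols <- c("SL_CnS_C_o1", "SL_CnS_C_o2",
                             "FL_CnS_C_o1", "FL_CnS_C_o2")
E2_all_clean$NoChoice_CUZ_oblig_check <- collapse_cols(E2_all_clean, nochoice_cuz_oblig_cols)

# Optional check that the compact version matches the explicit one above
all.equal(E2_all_clean$NoChoice_CUZ_oblig, E2_all_clean$NoChoice_CUZ_oblig_check)
```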

## Creating Analyzable Between-Subjects Datasets
```{r}
# Stranger-Like family members
E2_SL_clean <- E2_all_clean %>%
  filter(BSs_cond == 'Stranger-Like') %>%
  # select only variables that are relevant to Stranger-Like data
  select(
    ResponseId, # selects variable
    Age:Urban_Rural, # selects demographic variables
    MAC_Jud_1:MAC_Jud_18, MAC_Jud_19_r:MAC_Jud_21_r, MAC_Rel_1:MAC_Rel_21, 
    MFQ_Jud_1:MFQ_Jud_15, MFQ_Rel_1:MFQ_Rel_15, 
    OUS_IB1:OUS_IB5, OUS_IH1:OUS_IH4, # selects raw ind. diff variables (for reliability check)
    MAC_Fam_Combined:OUS_IH, # selects composited ind. diff variables
    BSs_cond, # selects variable for between-subjects condition
    SL_Dist_Scen:SL_CloseODist_Scen, # selects scenario-to-condition variables for SL data
    NoChoice_CUZ_oblig:NoChoice_SIB_moral, # selects NoChoice DVs for SL data
    Choice_CUZ_oblig:Choice_SIB_moral, # selects Choice DVs for SL data
    NoChoice_CUZminusSIB_oblig:Choice_CUZminusSIB_moral # selects difference score variables for SL data
    )

# Friend-like family members
E2_FL_clean <- E2_all_clean %>%
  filter(BSs_cond == 'Friend-Like') %>%
  # select only variables that are relevant to "Friend-Like" data
  select(
    ResponseId, # selects variable
    Age:Urban_Rural, # selects demographic variables
    MAC_Jud_1:MAC_Jud_18, MAC_Jud_19_r:MAC_Jud_21_r, MAC_Rel_1:MAC_Rel_21, 
    MFQ_Jud_1:MFQ_Jud_15, MFQ_Rel_1:MFQ_Rel_15, 
    OUS_IB1:OUS_IB5, OUS_IH1:OUS_IH4, # selects raw ind. diff variables (for reliability check)
    MAC_Fam_Combined:OUS_IH, # selects composited ind. diff variables
    BSs_cond, # selects variable for between-subjects condition
    FL_Dist_Scen:FL_CloseODist_Scen, # selects scenario-to-condition variables for FL data
    NoChoice_CUZ_oblig:NoChoice_SIB_moral, # selects NoChoice DVs for FL data
    Choice_CUZ_oblig:Choice_SIB_moral, # selects Choice DVs for FL data
    NoChoice_CUZminusSIB_oblig:Choice_CUZminusSIB_moral # selects difference score variables for FL data
    )
```
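
A short, optional check that the cleaned datasets still meet the pre-registered target of at least 330 analyzable responses per between-subjects condition. This chunk is a sketch we added for illustration and is not part of the original analysis code.

```{r, eval = FALSE}
# Hedged sketch: confirm post-exclusion Ns per between-subjects condition
# (the pre-registered target was >= 330 analyzable responses each)
E2_all_clean %>%
  count(BSs_cond)

nrow(E2_SL_clean)  # Stranger-Like N
nrow(E2_FL_clean)  # Friend-Like N
```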

## Tidying Data
```{r}
# Convert data from wide to long format
# Stranger-Like
E2_SL_cond_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(SL_Dist_Scen, SL_Close_Scen, SL_DistOClose_Scen, SL_CloseODist_Scen),
    names_to = "WSs_cond",
    values_to = "Condition"
  )

E2_SL_oblig_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_oblig, NoChoice_SIB_oblig, Choice_CUZ_oblig, Choice_SIB_oblig),
    names_to = "WSs_cond",
    values_to = "oblig"
  )

E2_SL_relate_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_relate, NoChoice_SIB_relate, Choice_CUZ_relate, Choice_SIB_relate),
    names_to = "WSs_cond",
    values_to = "relate"
  )

E2_SL_close_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_close, NoChoice_SIB_close, Choice_CUZ_close, Choice_SIB_close),
    names_to = "WSs_cond",
    values_to = "close"
  )

E2_SL_priorhelp_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_priorhelp, NoChoice_SIB_priorhelp, Choice_CUZ_priorhelp, Choice_SIB_priorhelp),
    names_to = "WSs_cond",
    values_to = "priorhelp"
  )

E2_SL_futurehelp_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_futurehelp, NoChoice_SIB_futurehelp, Choice_CUZ_futurehelp, Choice_SIB_futurehelp),
    names_to = "WSs_cond",
    values_to = "futurehelp"
  )

E2_SL_priorinteract_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_priorinteract, NoChoice_SIB_priorinteract, Choice_CUZ_priorinteract, Choice_SIB_priorinteract),
    names_to = "WSs_cond",
    values_to = "priorinteract"
  )

E2_SL_futureinteract_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_futureinteract, NoChoice_SIB_futureinteract, Choice_CUZ_futureinteract, Choice_SIB_futureinteract),
    names_to = "WSs_cond",
    values_to = "futureinteract"
  )

E2_SL_moral_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_moral, NoChoice_SIB_moral, Choice_CUZ_moral, Choice_SIB_moral),
    names_to = "WSs_cond",
    values_to = "moral"
  )


# Combine long SL datasets, select plotting variables, and create condition variable for each factor (Relation + Choice Context)
E2_SL_long <- cbind(E2_SL_cond_long, 
                    E2_SL_oblig_long, E2_SL_relate_long, E2_SL_close_long,
                    E2_SL_priorhelp_long, E2_SL_futurehelp_long,
                    E2_SL_priorinteract_long, E2_SL_futureinteract_long,
                    E2_SL_moral_long)

E2_SL_long <- E2_SL_long[, !duplicated(colnames(E2_SL_long))] %>% # get rid of duplicate columns
  select(ResponseId,
         Age:OUS_IH,
         BSs_cond,
         WSs_cond,
         Condition,
         oblig, relate, close, 
         priorhelp, futurehelp,
         priorinteract, futureinteract,
         moral) %>%
  mutate(Relation = case_when(
    WSs_cond == "SL_Dist_Scen" ~ "Distant",
    WSs_cond == "SL_Close_Scen" ~ "Close",
    WSs_cond == "SL_DistOClose_Scen" ~ "Distant",
    WSs_cond == "SL_CloseODist_Scen" ~ "Close")) %>%
  mutate(`Choice Context` = case_when(
    WSs_cond == "SL_Dist_Scen" ~ "No Choice",
    WSs_cond == "SL_Close_Scen" ~ "No Choice",
    WSs_cond == "SL_DistOClose_Scen" ~ "Choice",
    WSs_cond == "SL_CloseODist_Scen" ~ "Choice"))

# Convert condition and participant variables to factors and set their level order
E2_SL_long$Relation <- as.factor(E2_SL_long$Relation)
E2_SL_long$Relation <- ordered(E2_SL_long$Relation, levels = c("Distant", "Close"))
E2_SL_long$`Choice Context` <- as.factor(E2_SL_long$`Choice Context`)
E2_SL_long$`Choice Context` <- ordered(E2_SL_long$`Choice Context`, levels = c("No Choice", "Choice"))
E2_SL_long$ResponseId <- as.factor(E2_SL_long$ResponseId)


# Friend-Like
E2_FL_cond_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(FL_Dist_Scen, FL_Close_Scen, FL_DistOClose_Scen, FL_CloseODist_Scen),
    names_to = "WSs_cond",
    values_to = "Condition"
  )

E2_FL_oblig_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_oblig, NoChoice_SIB_oblig, Choice_CUZ_oblig, Choice_SIB_oblig),
    names_to = "WSs_cond",
    values_to = "oblig"
  )

E2_FL_relate_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_relate, NoChoice_SIB_relate, Choice_CUZ_relate, Choice_SIB_relate),
    names_to = "WSs_cond",
    values_to = "relate"
  )

E2_FL_close_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_close, NoChoice_SIB_close, Choice_CUZ_close, Choice_SIB_close),
    names_to = "WSs_cond",
    values_to = "close"
  )

E2_FL_priorhelp_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_priorhelp, NoChoice_SIB_priorhelp, Choice_CUZ_priorhelp, Choice_SIB_priorhelp),
    names_to = "WSs_cond",
    values_to = "priorhelp"
  )

E2_FL_futurehelp_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_futurehelp, NoChoice_SIB_futurehelp, Choice_CUZ_futurehelp, Choice_SIB_futurehelp),
    names_to = "WSs_cond",
    values_to = "futurehelp"
  )

E2_FL_priorinteract_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_priorinteract, NoChoice_SIB_priorinteract, Choice_CUZ_priorinteract, Choice_SIB_priorinteract),
    names_to = "WSs_cond",
    values_to = "priorinteract"
  )

E2_FL_futureinteract_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_futureinteract, NoChoice_SIB_futureinteract, Choice_CUZ_futureinteract, Choice_SIB_futureinteract),
    names_to = "WSs_cond",
    values_to = "futureinteract"
  )

E2_FL_moral_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZ_moral, NoChoice_SIB_moral, Choice_CUZ_moral, Choice_SIB_moral),
    names_to = "WSs_cond",
    values_to = "moral"
  )


# Combine long FL datasets, select plotting variables, and create condition variable for each factor (Relation + Choice Context)
E2_FL_long <- cbind(E2_FL_cond_long, 
                    E2_FL_oblig_long, E2_FL_relate_long, E2_FL_close_long,
                    E2_FL_priorhelp_long, E2_FL_futurehelp_long,
                    E2_FL_priorinteract_long, E2_FL_futureinteract_long,
                    E2_FL_moral_long)

E2_FL_long <- E2_FL_long[, !duplicated(colnames(E2_FL_long))] %>% # get rid of duplicate columns
  select(ResponseId,
         Age:OUS_IH,
         BSs_cond,
         WSs_cond,
         Condition,
         oblig, relate, close, 
         priorhelp, futurehelp,
         priorinteract, futureinteract,
         moral) %>%
  mutate(Relation = case_when(
    WSs_cond == "FL_Dist_Scen" ~ "Distant",
    WSs_cond == "FL_Close_Scen" ~ "Close",
    WSs_cond == "FL_DistOClose_Scen" ~ "Distant",
    WSs_cond == "FL_CloseODist_Scen" ~ "Close")) %>%
  mutate(`Choice Context` = case_when(
    WSs_cond == "FL_Dist_Scen" ~ "No Choice",
    WSs_cond == "FL_Close_Scen" ~ "No Choice",
    WSs_cond == "FL_DistOClose_Scen" ~ "Choice",
    WSs_cond == "FL_CloseODist_Scen" ~ "Choice"))

# Convert condition and participant variables to factors and set their level order
E2_FL_long$Relation <- as.factor(E2_FL_long$Relation)
E2_FL_long$Relation <- ordered(E2_FL_long$Relation, levels = c("Distant", "Close"))
E2_FL_long$`Choice Context` <- as.factor(E2_FL_long$`Choice Context`)
E2_FL_long$`Choice Context` <- ordered(E2_FL_long$`Choice Context`, levels = c("No Choice", "Choice"))
E2_FL_long$ResponseId <- as.factor(E2_FL_long$ResponseId)

# Combine into one dataset for later analyses
E2_all_long <- rbind(E2_SL_long, E2_FL_long)
# Reorder all_long BSs_cond
E2_all_long$BSs_cond <- as.factor(E2_all_long$BSs_cond)
E2_all_long$BSs_cond <- ordered(E2_all_long$BSs_cond, levels = c("Stranger-Like", "Friend-Like"))
```
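
Because the long data frames are combined with `cbind()`, the approach assumes they all share the same row order. That holds here because each `pivot_longer()` call expands the same cleaned dataset into four rows per participant, in the same participant order. The sketch below is an optional alignment check we added for illustration (shown for the Stranger-Like data); it is not part of the original pipeline.

```{r, eval = FALSE}
# Hedged sketch: verify that the pivoted long data frames are row-aligned
# before relying on cbind() (same participant order, four rows each)
stopifnot(
  identical(E2_SL_cond_long$ResponseId, E2_SL_oblig_long$ResponseId),
  identical(E2_SL_cond_long$ResponseId, E2_SL_moral_long$ResponseId),
  nrow(E2_SL_long) == 4 * nrow(E2_SL_clean)
)
```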


# Descriptive Statistics {.tabset}
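
The tabs below report cell-level descriptives with `psych::describeBy()`. As an optional aside (not part of the original output), the sketch here shows a tidyverse equivalent for the obligation measure, using the `E2_all_long` data created above.

```{r, eval = FALSE}
# Hedged sketch: tidyverse equivalent of the describeBy() cells for obligation
E2_all_long %>%
  group_by(BSs_cond, Relation, `Choice Context`) %>%
  summarise(
    n    = n(),
    mean = mean(oblig, na.rm = TRUE),
    sd   = sd(oblig, na.rm = TRUE),
    .groups = "drop"
  )
```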

## Oblig {.tabset}

### Stranger-Like
```{r}
describeBy(E2_SL_long$oblig, list(E2_SL_long$Relation, E2_SL_long$`Choice Context`), mat = T)
```
### Friend-Like
```{r}
describeBy(E2_FL_long$oblig, list(E2_FL_long$Relation, E2_FL_long$`Choice Context`), mat = T)
```

## Relate {.tabset}

### Stranger-Like
```{r}
describeBy(E2_SL_long$relate, list(E2_SL_long$Relation, E2_SL_long$`Choice Context`), mat = T)
```
### Friend-Like
```{r}
describeBy(E2_FL_long$relate, list(E2_FL_long$Relation, E2_FL_long$`Choice Context`), mat = T)
```

## Close {.tabset}

### Stranger-Like
```{r}
describeBy(E2_SL_long$close, list(E2_SL_long$Relation, E2_SL_long$`Choice Context`), mat = T)
```
### Friend-Like
```{r}
describeBy(E2_FL_long$close, list(E2_FL_long$Relation, E2_FL_long$`Choice Context`), mat = T)
```

## Prior Help {.tabset}

### Stranger-Like
```{r}
describeBy(E2_SL_long$priorhelp, list(E2_SL_long$Relation, E2_SL_long$`Choice Context`), mat = T)
```
### Friend-Like
```{r}
describeBy(E2_FL_long$priorhelp, list(E2_FL_long$Relation, E2_FL_long$`Choice Context`), mat = T)
```

## Future Help {.tabset}

### Stranger-Like
```{r}
describeBy(E2_SL_long$futurehelp, list(E2_SL_long$Relation, E2_SL_long$`Choice Context`), mat = T)
```
### Friend-Like
```{r}
describeBy(E2_FL_long$futurehelp, list(E2_FL_long$Relation, E2_FL_long$`Choice Context`), mat = T)
```

## Prior Interax {.tabset}

### Stranger-Like
```{r}
describeBy(E2_SL_long$priorinteract, list(E2_SL_long$Relation, E2_SL_long$`Choice Context`), mat = T)
```
### Friend-Like
```{r}
describeBy(E2_FL_long$priorinteract, list(E2_FL_long$Relation, E2_FL_long$`Choice Context`), mat = T)
```

## Future Interax {.tabset}

### Stranger-Like
```{r}
describeBy(E2_SL_long$futureinteract, list(E2_SL_long$Relation, E2_SL_long$`Choice Context`), mat = T)
```
### Friend-Like
```{r}
describeBy(E2_FL_long$futureinteract, list(E2_FL_long$Relation, E2_FL_long$`Choice Context`), mat = T)
```

## Moral {.tabset}

### Stranger-Like
```{r}
describeBy(E2_SL_long$moral, list(E2_SL_long$Relation, E2_SL_long$`Choice Context`), mat = T)
```
### Friend-Like
```{r}
describeBy(E2_FL_long$moral, list(E2_FL_long$Relation, E2_FL_long$`Choice Context`), mat = T)
```


# Mean Difference Plots {.tabset}
```{r}
# Set dodge for plotting crossed factors
dodge <- position_dodge(width = 1)
```

## Oblig {.tabset}

### Stranger-Like
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(oblig_plot_SL <- ggplot(data = E2_SL_long, aes(x = `Choice Context`, y = oblig, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Obligation Strength") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))
```
### Friend-Like
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(oblig_plot_FL <- ggplot(data = E2_FL_long, aes(x = `Choice Context`, y = oblig, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Obligation Strength") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))
```
### Combined
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(oblig_plot_combined <- ggplot(data = E2_all_long, aes(x = `Choice Context`, y = oblig, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        facet_wrap(~BSs_cond, nrow = 2) +
        xlab("\nChoice Context") +
        ylab("Obligation Strength\n") +
        theme(axis.title.x = element_text(size = 18), 
              axis.title.y = element_text(size = 18),
              axis.text.x = element_text(color = "black", size = 16), 
              axis.text.y = element_text(color = "black", size = 16),
              strip.text.x = element_text(color = "black", size = 16),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 18),
              legend.text = element_text(color = "black", size = 16)))

ggsave("E2_oblig_plot.png")
```

## Relate {.tabset}

### Stranger-Like
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(relate_plot_SL <- ggplot(data = E2_SL_long, aes(x = `Choice Context`, y = relate, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Relatedness") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))
```
### Friend-Like
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(relate_plot_FL <- ggplot(data = E2_FL_long, aes(x = `Choice Context`, y = relate, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Relatedness") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))
```
### Combined
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(relate_plot_combined <- ggplot(data = E2_all_long, aes(x = `Choice Context`, y = relate, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 2.5, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        facet_wrap(~BSs_cond, nrow = 2) +
        xlab("Choice Context") +
        ylab("Perceived Relatedness") + 
        theme(axis.title.x = element_text(size = 18), 
              axis.title.y = element_text(size = 18),
              axis.text.x = element_text(color = "black", size = 16), 
              axis.text.y = element_text(color = "black", size = 16),
              strip.text.x = element_text(color = "black", size = 16),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 18),
              legend.text = element_text(color = "black", size = 16)))

ggsave("E2_relate_plot.png")
```

## Close {.tabset}

### Stranger-Like
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(close_plot_SL <- ggplot(data = E2_SL_long, aes(x = `Choice Context`, y = close, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Closeness") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))
```
### Friend-Like
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(close_plot_FL <- ggplot(data = E2_FL_long, aes(x = `Choice Context`, y = close, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Closeness") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))
```
### Combined
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(close_plot_combined <- ggplot(data = E2_all_long, aes(x = `Choice Context`, y = close, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 2.5, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        facet_wrap(~BSs_cond, nrow = 2) +
        xlab("Choice Context") +
        ylab("Perceived Closeness") + 
        theme(axis.title.x = element_text(size = 18), 
              axis.title.y = element_text(size = 18),
              axis.text.x = element_text(color = "black", size = 16), 
              axis.text.y = element_text(color = "black", size = 16),
              strip.text.x = element_text(color = "black", size = 16),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 18),
              legend.text = element_text(color = "black", size = 16)))

ggsave("E2_close_plot.png")
```

## Prior Help {.tabset}

### Stranger-Like
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(priorhelp_plot_SL <- ggplot(data = E2_SL_long, aes(x = `Choice Context`, y = priorhelp, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Prior Help") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))
```
### Friend-Like
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(priorhelp_plot_FL <- ggplot(data = E2_FL_long, aes(x = `Choice Context`, y = priorhelp, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Prior Help") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))
```
### Combined
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(priorhelp_plot_combined <- ggplot(data = E2_all_long, aes(x = `Choice Context`, y = priorhelp, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 2.5, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        facet_wrap(~BSs_cond, nrow = 2) +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Prior Help") + 
        theme(axis.title.x = element_text(size = 18), 
              axis.title.y = element_text(size = 18),
              axis.text.x = element_text(color = "black", size = 16), 
              axis.text.y = element_text(color = "black", size = 16),
              strip.text.x = element_text(color = "black", size = 16),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 18),
              legend.text = element_text(color = "black", size = 16)))

ggsave("E2_priorhelp_plot.png")
```

## Future Help {.tabset}

### Stranger-Like
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(futurehelp_plot_SL <- ggplot(data = E2_SL_long, aes(x = `Choice Context`, y = futurehelp, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Future Help") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))
```
### Friend-Like
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(futurehelp_plot_FL <- ggplot(data = E2_FL_long, aes(x = `Choice Context`, y = futurehelp, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Future Help") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))
```
### Combined
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(futurehelp_plot_combined <- ggplot(data = E2_all_long, aes(x = `Choice Context`, y = futurehelp, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 2.5, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        facet_wrap(~BSs_cond, nrow = 2) +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Future Help") + 
        theme(axis.title.x = element_text(size = 18), 
              axis.title.y = element_text(size = 18),
              axis.text.x = element_text(color = "black", size = 16), 
              axis.text.y = element_text(color = "black", size = 16),
              strip.text.x = element_text(color = "black", size = 16),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 18),
              legend.text = element_text(color = "black", size = 16)))

ggsave("E2_futurehelp_plot.png")
```

## Prior Interax {.tabset}

### Stranger-Like
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(priorinteract_plot_SL <- ggplot(data = E2_SL_long, aes(x = `Choice Context`, y = priorinteract, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Prior Interactions") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))
```
### Friend-Like
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(priorinteract_plot_FL <- ggplot(data = E2_FL_long, aes(x = `Choice Context`, y = priorinteract, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Prior Interactions") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))
```
### Combined
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(priorinteract_plot_combined <- ggplot(data = E2_all_long, aes(x = `Choice Context`, y = priorinteract, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 2.5, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        facet_wrap(~BSs_cond, nrow = 2) +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Prior Interactions") + 
        theme(axis.title.x = element_text(size = 18), 
              axis.title.y = element_text(size = 18),
              axis.text.x = element_text(color = "black", size = 16), 
              axis.text.y = element_text(color = "black", size = 16),
              strip.text.x = element_text(color = "black", size = 16),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 18),
              legend.text = element_text(color = "black", size = 16)))

ggsave("E2_priorinteract_plot.png")
```

## Future Interax {.tabset}

### Stranger-Like
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(futureinteract_plot_SL <- ggplot(data = E2_SL_long, aes(x = `Choice Context`, y = futureinteract, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Future Interactions") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))
```
### Friend-Like
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(futureinteract_plot_FL <- ggplot(data = E2_FL_long, aes(x = `Choice Context`, y = futureinteract, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Future Interactions") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))
```
### Combined
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(futureinteract_plot_combined <- ggplot(data = E2_all_long, aes(x = `Choice Context`, y = futureinteract, fill = Relation)) +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 2.5, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        facet_wrap(~BSs_cond, nrow = 2) +
        xlab("Choice Context") +
        ylab("Perceived Frequency of Future Interactions") + 
        theme(axis.title.x = element_text(size = 18), 
              axis.title.y = element_text(size = 18),
              axis.text.x = element_text(color = "black", size = 16), 
              axis.text.y = element_text(color = "black", size = 16),
              strip.text.x = element_text(color = "black", size = 16),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 18),
              legend.text = element_text(color = "black", size = 16)))

ggsave("E2_futureinteract_plot.png")
```

## Moral {.tabset}

### Stranger-Like
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(moral_plot_SL <- ggplot(data = E2_SL_long, aes(x = `Choice Context`, y = moral, fill = Relation)) +
        geom_hline(yintercept = 50, linetype = "dashed", color = "black") +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Moral Character") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))
```
### Friend-Like
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(moral_plot_FL <- ggplot(data = E2_FL_long, aes(x = `Choice Context`, y = moral, fill = Relation)) +
        geom_hline(yintercept = 50, linetype = "dashed", color = "black") +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        xlab("Choice Context") +
        ylab("Moral Character") + 
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 14),
              legend.text = element_text(color = "black", size = 12)))
```
### Combined
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(moral_plot_combined <- ggplot(data = E2_all_long, aes(x = `Choice Context`, y = moral, fill = Relation)) +
        geom_hline(yintercept = 50, linetype = "dashed", color = "black") +
        geom_violin(aes(fill = Relation), position = dodge) +
        geom_boxplot(width = 0.1, color = "black", outlier.shape = NA, position = dodge) +
        scale_fill_manual(values = c("lightskyblue3", "indianred3")) +
        stat_summary(fun = mean, geom = "point", shape = 18, color = "white", size = 3, position = dodge) +
        theme(legend.position = "right") +
        theme_classic() +
        facet_wrap(~BSs_cond, nrow = 2) +
        xlab("\nChoice Context") +
        ylab("Moral Character\n") + 
        theme(axis.title.x = element_text(size = 18), 
              axis.title.y = element_text(size = 18),
              axis.text.x = element_text(color = "black", size = 16), 
              axis.text.y = element_text(color = "black", size = 16),
              strip.text.x = element_text(color = "black", size = 16),
              legend.position = "right",
              legend.title = element_text(color = "black", size = 18),
              legend.text = element_text(color = "black", size = 16)))

ggsave("E2_moral_plot.png")
```


# Mean Difference Tests {.tabset}

<br>

See our pre-registration (INSERT LINK HERE) for our predictions regarding obligation and moral character judgments.

<br>
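
Each cell below reports a paired t-test, two paired-samples effect sizes ($d_z$ and $d_{av}$), the Pearson correlation between the two paired measures, and a histogram of the difference score. For reference, these effect sizes correspond to the definitions typically labeled $d_z$ and $d_{av}$ (e.g., Lakens, 2013):

$$
d_z = \frac{\bar{X}_{\text{diff}}}{SD_{\text{diff}}}, \qquad
d_{av} = \frac{\bar{X}_{\text{diff}}}{(SD_1 + SD_2)/2}
$$

where $\bar{X}_{\text{diff}}$ and $SD_{\text{diff}}$ are the mean and standard deviation of the within-participant difference scores, and $SD_1$ and $SD_2$ are the standard deviations of the two raw measures. The chunk below is an approximate hand check for one cell, reusing variables already in the cleaned wide data; it is a sketch only (not evaluated when knitting), and `effsize::cohen.d()`'s internal computation may combine the raw-score SDs slightly differently.

```{r, eval = FALSE}
# Approximate hand check for the Stranger-Like / No Choice obligation cell.
# dz: mean of the difference scores divided by their SD.
dz <- mean(E2_SL_clean$NoChoice_CUZminusSIB_oblig, na.rm = TRUE) /
  sd(E2_SL_clean$NoChoice_CUZminusSIB_oblig, na.rm = TRUE)

# d-av: mean difference divided by the average of the two raw-score SDs.
d_av <- mean(E2_SL_clean$NoChoice_CUZminusSIB_oblig, na.rm = TRUE) /
  ((sd(E2_SL_clean$NoChoice_CUZ_oblig, na.rm = TRUE) +
      sd(E2_SL_clean$NoChoice_SIB_oblig, na.rm = TRUE)) / 2)
```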

## Oblig {.tabset}

### Stranger-Like {.tabset}

#### No Choice
```{r}
# returns t-test results
t.test(oblig ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_SL_clean, "NoChoice_CUZ_oblig", "NoChoice_SIB_oblig", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_SL_clean$NoChoice_CUZminusSIB_oblig, breaks = 100))
```
#### Choice
```{r}
# returns t-test results
t.test(oblig ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_SL_clean, "Choice_CUZ_oblig", "Choice_SIB_oblig", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_SL_clean$Choice_CUZminusSIB_oblig, breaks = 100))
```

### Friend-Like {.tabset}

#### No Choice
```{r}
# returns t-test results
t.test(oblig ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_FL_clean, "NoChoice_CUZ_oblig", "NoChoice_SIB_oblig", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_FL_clean$NoChoice_CUZminusSIB_oblig, breaks = 100))
```
#### Choice
```{r}
# returns t-test results
t.test(oblig ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(oblig ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_FL_clean, "Choice_CUZ_oblig", "Choice_SIB_oblig", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_FL_clean$Choice_CUZminusSIB_oblig, breaks = 100))
```


## Relate {.tabset}

### Stranger-Like {.tabset}

#### No Choice
```{r}
# returns t-test results
t.test(relate ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(relate ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(relate ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_SL_clean, "NoChoice_CUZ_relate", "NoChoice_SIB_relate", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_SL_clean$NoChoice_CUZminusSIB_relate, breaks = 100))
```
#### Choice
```{r}
# returns t-test results
t.test(relate ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(relate ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(relate ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_SL_clean, "Choice_CUZ_relate", "Choice_SIB_relate", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_SL_clean$Choice_CUZminusSIB_relate, breaks = 100))
```

### Friend-Like {.tabset}

#### No Choice
```{r}
# returns t-test results
t.test(relate ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(relate ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(relate ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_FL_clean, "NoChoice_CUZ_relate", "NoChoice_SIB_relate", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_FL_clean$NoChoice_CUZminusSIB_relate, breaks = 100))
```
#### Choice
```{r}
# returns t-test results
t.test(relate ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(relate ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(relate ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_FL_clean, "Choice_CUZ_relate", "Choice_SIB_relate", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_FL_clean$Choice_CUZminusSIB_relate, breaks = 100))
```


## Close {.tabset}

### Stranger-Like {.tabset}

#### No Choice
```{r}
# returns t-test results
t.test(close ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(close ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(close ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_SL_clean, "NoChoice_CUZ_close", "NoChoice_SIB_close", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_SL_clean$NoChoice_CUZminusSIB_close, breaks = 100))
```
#### Choice
```{r}
# returns t-test results
t.test(close ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(close ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(close ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_SL_clean, "Choice_CUZ_close", "Choice_SIB_close", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_SL_clean$Choice_CUZminusSIB_close, breaks = 100))
```

### Friend-Like {.tabset}

#### No Choice
```{r}
# returns t-test results
t.test(close ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(close ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(close ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_FL_clean, "NoChoice_CUZ_close", "NoChoice_SIB_close", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_FL_clean$NoChoice_CUZminusSIB_close, breaks = 100))
```
#### Choice
```{r}
# returns t-test results
t.test(close ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(close ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(close ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_FL_clean, "Choice_CUZ_close", "Choice_SIB_close", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_FL_clean$Choice_CUZminusSIB_close, breaks = 100))
```


## Prior Help {.tabset}

### Stranger-Like {.tabset}

#### No Choice
```{r}
# returns t-test results
t.test(priorhelp ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(priorhelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(priorhelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_SL_clean, "NoChoice_CUZ_priorhelp", "NoChoice_SIB_priorhelp", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_SL_clean$NoChoice_CUZminusSIB_priorhelp, breaks = 100))
```
#### Choice
```{r}
# returns t-test results
t.test(priorhelp ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(priorhelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(priorhelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_SL_clean, "Choice_CUZ_priorhelp", "Choice_SIB_priorhelp", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_SL_clean$Choice_CUZminusSIB_priorhelp, breaks = 100))
```

### Friend-Like {.tabset}

#### No Choice
```{r}
# returns t-test results
t.test(priorhelp ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(priorhelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(priorhelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_FL_clean, "NoChoice_CUZ_priorhelp", "NoChoice_SIB_priorhelp", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_FL_clean$NoChoice_CUZminusSIB_priorhelp, breaks = 100))
```
#### Choice
```{r}
# returns t-test results
t.test(priorhelp ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(priorhelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(priorhelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_FL_clean, "Choice_CUZ_priorhelp", "Choice_SIB_priorhelp", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_FL_clean$Choice_CUZminusSIB_priorhelp, breaks = 100))
```


## Future Help {.tabset}

### Stranger-Like {.tabset}

#### No Choice
```{r}
# returns t-test results
t.test(futurehelp ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(futurehelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(futurehelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_SL_clean, "NoChoice_CUZ_futurehelp", "NoChoice_SIB_futurehelp", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_SL_clean$NoChoice_CUZminusSIB_futurehelp, breaks = 100))
```
#### Choice
```{r}
# returns t-test results
t.test(futurehelp ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(futurehelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(futurehelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_SL_clean, "Choice_CUZ_futurehelp", "Choice_SIB_futurehelp", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_SL_clean$Choice_CUZminusSIB_futurehelp, breaks = 100))
```

### Friend-Like {.tabset}

#### No Choice
```{r}
# returns t-test results
t.test(futurehelp ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(futurehelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(futurehelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_FL_clean, "NoChoice_CUZ_futurehelp", "NoChoice_SIB_futurehelp", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_FL_clean$NoChoice_CUZminusSIB_futurehelp, breaks = 100))
```
#### Choice
```{r}
# returns t-test results
t.test(futurehelp ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(futurehelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(futurehelp ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_FL_clean, "Choice_CUZ_futurehelp", "Choice_SIB_futurehelp", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_FL_clean$Choice_CUZminusSIB_futurehelp, breaks = 100))
```


## Prior Interax {.tabset}

### Stranger-Like {.tabset}

#### No Choice
```{r}
# returns t-test results
t.test(priorinteract ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(priorinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(priorinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_SL_clean, "NoChoice_CUZ_priorinteract", "NoChoice_SIB_priorinteract", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_SL_clean$NoChoice_CUZminusSIB_priorinteract, breaks = 100))
```
#### Choice
```{r}
# returns t-test results
t.test(priorinteract ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(priorinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(priorinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_SL_clean, "Choice_CUZ_priorinteract", "Choice_SIB_priorinteract", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_SL_clean$Choice_CUZminusSIB_priorinteract, breaks = 100))
```

### Friend-Like {.tabset}

#### No Choice
```{r}
# returns t-test results
t.test(priorinteract ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(priorinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(priorinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_FL_clean, "NoChoice_CUZ_priorinteract", "NoChoice_SIB_priorinteract", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_FL_clean$NoChoice_CUZminusSIB_priorinteract, breaks = 100))
```
#### Choice
```{r}
# returns t-test results
t.test(priorinteract ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(priorinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(priorinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_FL_clean, "Choice_CUZ_priorinteract", "Choice_SIB_priorinteract", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_FL_clean$Choice_CUZminusSIB_priorinteract, breaks = 100))
```


## Future Interax {.tabset}

### Stranger-Like {.tabset}

#### No Choice
```{r}
# returns t-test results
t.test(futureinteract ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(futureinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(futureinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_SL_clean, "NoChoice_CUZ_futureinteract", "NoChoice_SIB_futureinteract", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_SL_clean$NoChoice_CUZminusSIB_futureinteract, breaks = 100))
```
#### Choice
```{r}
# returns t-test results
t.test(futureinteract ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(futureinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(futureinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_SL_clean, "Choice_CUZ_futureinteract", "Choice_SIB_futureinteract", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_SL_clean$Choice_CUZminusSIB_futureinteract, breaks = 100))
```

### Friend-Like {.tabset}

#### No Choice
```{r}
# returns t-test results
t.test(futureinteract ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(futureinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(futureinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_FL_clean, "NoChoice_CUZ_futureinteract", "NoChoice_SIB_futureinteract", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_FL_clean$NoChoice_CUZminusSIB_futureinteract, breaks = 100))
```
#### Choice
```{r}
# returns t-test results
t.test(futureinteract ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(futureinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(futureinteract ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_FL_clean, "Choice_CUZ_futureinteract", "Choice_SIB_futureinteract", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_FL_clean$Choice_CUZminusSIB_futureinteract, breaks = 100))
```


## Moral {.tabset}

### ANOVAs {.tabset}
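
For reference, the partial eta-squared values reported in the tabs below follow the standard definition, with each effect evaluated against the error term from its own within-subjects stratum:

$$
\eta^2_p = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}
$$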

#### Stranger-Like
```{r}
# returns 2 x 2 within-subject ANOVA results
aov_moral_SL <- aov(moral ~ Relation*`Choice Context` + Error(ResponseId/(Relation*`Choice Context`)), 
                    data = E2_all_long %>%
                      filter(BSs_cond == "Stranger-Like"))
summary(aov_moral_SL)

# returns eta-sq effect size
effectsize::eta_squared(aov_moral_SL, partial = TRUE)
```
#### Friend-Like
```{r}
# returns 2 x 2 within-subject ANOVA results
aov_moral_FL <- aov(moral ~ Relation*`Choice Context` + Error(ResponseId/(Relation*`Choice Context`)), 
                    data = E2_all_long %>%
                      filter(BSs_cond == "Friend-Like"))
summary(aov_moral_FL)

# returns eta-sq effect size
effectsize::eta_squared(aov_moral_FL, partial = TRUE)
```

### t-tests {.tabset}

#### Stranger-Like {.tabset}

##### No Choice
```{r}
# returns t-test results
t.test(moral ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_SL_clean, "NoChoice_CUZ_moral", "NoChoice_SIB_moral", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_SL_clean$NoChoice_CUZminusSIB_moral, breaks = 100))
```
##### Choice
```{r}
# returns t-test results
t.test(moral ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Stranger-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_SL_clean, "Choice_CUZ_moral", "Choice_SIB_moral", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of difference score variable
print(hist(E2_SL_clean$Choice_CUZminusSIB_moral, breaks = 100))
```

#### Friend-Like {.tabset}

##### No Choice
```{r}
# returns t-test results
t.test(moral ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "No Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_FL_clean, "NoChoice_CUZ_moral", "NoChoice_SIB_moral", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of the difference-score variable
print(hist(E2_FL_clean$NoChoice_CUZminusSIB_moral, breaks = 100))
```
##### Choice
```{r}
# returns t-test results
t.test(moral ~ Relation, 
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
         droplevels(), 
       paired = T)

# returns dz effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = F) # setting this to false ensures dz is calculated, using difference score

# returns d-av effect size and 95% CIs
effsize::cohen.d(moral ~ Relation | Subject(ResponseId),
       data = E2_all_long %>% 
         filter(BSs_cond == "Friend-Like") %>%
         filter(`Choice Context` == "Choice") %>%
                   droplevels(), 
                 paired = T,
                 within = T) # setting this to true ensures d-av is calculated, using raw scores

# returns correlation between variables
cor_test(data = E2_FL_clean, "Choice_CUZ_moral", "Choice_SIB_moral", method = "Pearson")
```
```{r, fig.width = 9, fig.height = 6, out.width = "75%", out.height = "75%"}
# returns histogram of the difference-score variable
print(hist(E2_FL_clean$Choice_CUZminusSIB_moral, breaks = 100))
```


# Moral Diff ~ Oblig Diff Plots {.tabset} 
```{r}
# Create difference score datasets for plotting of diff score correlations

# Stranger-Like
E2_diff_SL_cond_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(SL_Dist_Scen, SL_Close_Scen),
    names_to = "WSs_cond",
    values_to = "Condition"
  )

E2_diff_SL_oblig_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_oblig, Choice_CUZminusSIB_oblig),
    names_to = "WSs_cond",
    values_to = "oblig"
  )

E2_diff_SL_relate_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_relate, Choice_CUZminusSIB_relate),
    names_to = "WSs_cond",
    values_to = "relate"
  )

E2_diff_SL_close_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_close, Choice_CUZminusSIB_close),
    names_to = "WSs_cond",
    values_to = "close"
  )

E2_diff_SL_priorhelp_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_priorhelp, Choice_CUZminusSIB_priorhelp),
    names_to = "WSs_cond",
    values_to = "priorhelp"
  )

E2_diff_SL_futurehelp_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_futurehelp, Choice_CUZminusSIB_futurehelp),
    names_to = "WSs_cond",
    values_to = "futurehelp"
  )

E2_diff_SL_priorinteract_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_priorinteract, Choice_CUZminusSIB_priorinteract),
    names_to = "WSs_cond",
    values_to = "priorinteract"
  )

E2_diff_SL_futureinteract_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_futureinteract, Choice_CUZminusSIB_futureinteract),
    names_to = "WSs_cond",
    values_to = "futureinteract"
  )

E2_diff_SL_moral_long <- E2_SL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_moral, Choice_CUZminusSIB_moral),
    names_to = "WSs_cond",
    values_to = "moral"
  )

# Combine long SL datasets, select plotting variables, and create condition variable for `Choice Context`
E2_diff_SL_long <- cbind(E2_diff_SL_cond_long, 
                         E2_diff_SL_oblig_long, 
                         E2_diff_SL_relate_long, E2_diff_SL_close_long,
                         E2_diff_SL_priorhelp_long, E2_diff_SL_futurehelp_long, 
                         E2_diff_SL_priorinteract_long, E2_diff_SL_futureinteract_long,
                         E2_diff_SL_moral_long)
E2_diff_SL_long <- E2_diff_SL_long[, !duplicated(colnames(E2_diff_SL_long))] # get rid of duplicate columns

E2_diff_SL_long <- E2_diff_SL_long %>%
  select(ResponseId,
         Age:OUS_IH,
         BSs_cond,
         WSs_cond,
         Condition,
         oblig, 
         relate, close, 
         priorhelp, futurehelp,
         priorinteract, futureinteract,
         moral) %>%
  mutate(`Choice Context` = case_when(
    WSs_cond == "SL_Dist_Scen" ~ "No Choice",
    WSs_cond == "SL_Close_Scen" ~ "Choice"))

# Reorder condition factor levels and convert participant IDs to a factor
E2_diff_SL_long$`Choice Context` <- as.factor(E2_diff_SL_long$`Choice Context`)
E2_diff_SL_long$`Choice Context` <- ordered(E2_diff_SL_long$`Choice Context`, levels = c("No Choice", "Choice"))
E2_diff_SL_long$ResponseId <- as.factor(E2_diff_SL_long$ResponseId)

# Friend-Like
E2_diff_FL_cond_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(FL_Dist_Scen, FL_Close_Scen),
    names_to = "WSs_cond",
    values_to = "Condition"
  )

E2_diff_FL_oblig_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_oblig, Choice_CUZminusSIB_oblig),
    names_to = "WSs_cond",
    values_to = "oblig"
  )

E2_diff_FL_relate_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_relate, Choice_CUZminusSIB_relate),
    names_to = "WSs_cond",
    values_to = "relate"
  )

E2_diff_FL_close_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_close, Choice_CUZminusSIB_close),
    names_to = "WSs_cond",
    values_to = "close"
  )

E2_diff_FL_priorhelp_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_priorhelp, Choice_CUZminusSIB_priorhelp),
    names_to = "WSs_cond",
    values_to = "priorhelp"
  )

E2_diff_FL_futurehelp_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_futurehelp, Choice_CUZminusSIB_futurehelp),
    names_to = "WSs_cond",
    values_to = "futurehelp"
  )

E2_diff_FL_priorinteract_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_priorinteract, Choice_CUZminusSIB_priorinteract),
    names_to = "WSs_cond",
    values_to = "priorinteract"
  )

E2_diff_FL_futureinteract_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_futureinteract, Choice_CUZminusSIB_futureinteract),
    names_to = "WSs_cond",
    values_to = "futureinteract"
  )

E2_diff_FL_moral_long <- E2_FL_clean %>%
  pivot_longer(
    cols = c(NoChoice_CUZminusSIB_moral, Choice_CUZminusSIB_moral),
    names_to = "WSs_cond",
    values_to = "moral"
  )

# Combine long FL datasets, select plotting variables, and create condition variable for `Choice Context`
E2_diff_FL_long <- cbind(E2_diff_FL_cond_long, 
                         E2_diff_FL_oblig_long, 
                         E2_diff_FL_relate_long, E2_diff_FL_close_long,
                         E2_diff_FL_priorhelp_long, E2_diff_FL_futurehelp_long, 
                         E2_diff_FL_priorinteract_long, E2_diff_FL_futureinteract_long,
                         E2_diff_FL_moral_long)
E2_diff_FL_long <- E2_diff_FL_long[, !duplicated(colnames(E2_diff_FL_long))] # get rid of duplicate columns

E2_diff_FL_long <- E2_diff_FL_long %>%
  select(ResponseId,
         Age:OUS_IH,
         BSs_cond,
         WSs_cond,
         Condition,
         oblig, 
         relate, close, 
         priorhelp, futurehelp,
         priorinteract, futureinteract,
         moral) %>%
  mutate(`Choice Context` = case_when(
    WSs_cond == "FL_Dist_Scen" ~ "No Choice",
    WSs_cond == "FL_Close_Scen" ~ "Choice"))

# Reorder condition factor levels and convert participant IDs to a factor
E2_diff_FL_long$`Choice Context` <- as.factor(E2_diff_FL_long$`Choice Context`)
E2_diff_FL_long$`Choice Context` <- ordered(E2_diff_FL_long$`Choice Context`, levels = c("No Choice", "Choice"))
E2_diff_FL_long$ResponseId <- as.factor(E2_diff_FL_long$ResponseId)


# Combine into one dataset for plotting
E2_diff_all_long <- rbind(E2_diff_SL_long, E2_diff_FL_long)
# Reorder All_long BSs_cond
E2_diff_all_long$BSs_cond <- as.factor(E2_diff_all_long$BSs_cond)
E2_diff_all_long$BSs_cond <- ordered(E2_diff_all_long$BSs_cond, levels = c("Stranger-Like", "Friend-Like"))
```
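
The reshaping above builds one long dataset per measure and then `cbind()`s them. A more compact alternative (a hedged sketch only, not the pipeline used for the reported analyses) is a single `pivot_longer()` with a `.value` spec, assuming every difference-score column follows the `<NoChoice|Choice>_CUZminusSIB_<measure>` naming convention:
```{r, eval = FALSE}
# Hedged alternative sketch; produces one row per participant per Choice Context,
# with one column per difference-score measure.
E2_diff_SL_long_alt <- E2_SL_clean %>%
  pivot_longer(
    cols = matches("^(NoChoice|Choice)_CUZminusSIB_"),
    names_pattern = "^(NoChoice|Choice)_CUZminusSIB_(.*)$",
    names_to = c("Choice Context", ".value")
  ) %>%
  mutate(`Choice Context` = ordered(
    if_else(`Choice Context` == "NoChoice", "No Choice", "Choice"),
    levels = c("No Choice", "Choice")))
```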

## Stranger-Like
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(oblig_moral_diff_plot_SL <- ggplot(data = E2_diff_SL_long,
                                                    aes(x = oblig, y = moral)) +
        geom_jitter(color = "darkorchid1", alpha = 0.5) +
        geom_smooth(method = 'lm', color = "darkorchid1") +
        facet_wrap(~`Choice Context`) +
        scale_x_continuous(limits = c(-101,101), breaks = c(-100,-50,0,50,100)) +
        scale_y_continuous(limits = c(-101,101), breaks = c(-100,-50,0,50,100)) +
        theme_classic() +
        xlab("Obligation Strength Difference (Distant - Close)") +
        ylab("Moral Character Difference (Distant - Close)") +
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12)))
```
## Friend-Like
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(oblig_moral_diff_plot_FL <- ggplot(data = E2_diff_FL_long,
                                                    aes(x = oblig, y = moral)) +
        geom_jitter(color = "darkorchid1", alpha = 0.5) +
        geom_smooth(method = 'lm', color = "darkorchid1") +
        facet_wrap(~`Choice Context`) +
        scale_x_continuous(limits = c(-101,101), breaks = c(-100,-50,0,50,100)) +
        scale_y_continuous(limits = c(-101,101), breaks = c(-100,-50,0,50,100)) +
        theme_classic() +
        xlab("Obligation Strength Difference (Distant - Close)") +
        ylab("Moral Character Difference (Distant - Close)") +
        theme(axis.title.x = element_text(size = 14), 
              axis.title.y = element_text(size = 14),
              axis.text.x = element_text(color = "black", size = 12), 
              axis.text.y = element_text(color = "black", size = 12)))
```
## Combined
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(oblig_moral_diff_plot_combined <- ggplot(data = E2_diff_all_long,
                                                    aes(x = oblig, y = moral)) +
        geom_jitter(color = "darkorchid1", alpha = 0.5) +
        geom_smooth(method = 'lm', color = "darkorchid1") +
        facet_wrap(BSs_cond~`Choice Context`, nrow = 2) +
        scale_x_continuous(limits = c(-101,101), breaks = c(-100,-50,0,50,100)) +
        scale_y_continuous(limits = c(-101,101), breaks = c(-100,-50,0,50,100)) +
        theme_classic() +
        xlab("\nObligation Strength Difference (Distant - Close)") +
        ylab("Moral Character Difference (Distant - Close)\n") +
        theme(axis.title.x = element_text(size = 18), 
              axis.title.y = element_text(size = 18),
              axis.text.x = element_text(color = "black", size = 16), 
              axis.text.y = element_text(color = "black", size = 16),
              strip.text.x = element_text(color = "black", size = 16)))

ggsave("E2_moral~oblig_plot.png")
```


# Moral Diff ~ Oblig Diff Tests {.tabset}

<br>

See our pre-registration (INSERT LINK) and manuscript for our predictions about the relationship between obligation differences and moral character differences.

<br>

## Stranger-Like {.tabset}

### No Choice
```{r}
# pearson's r
cor_test(E2_SL_clean, "NoChoice_CUZminusSIB_oblig", "NoChoice_CUZminusSIB_moral", method = "Pearson")
```
### Choice
```{r}
# pearson's r
cor_test(E2_SL_clean, "Choice_CUZminusSIB_oblig", "Choice_CUZminusSIB_moral", method = "Pearson")
```

## Friend-Like {.tabset}

### No Choice
```{r}
# pearson's r
cor_test(E2_FL_clean, "NoChoice_CUZminusSIB_oblig", "NoChoice_CUZminusSIB_moral", method = "Pearson")
```
### Choice
```{r}
# pearson's r
cor_test(E2_FL_clean, "Choice_CUZminusSIB_oblig", "Choice_CUZminusSIB_moral", method = "Pearson")
```


# Moral ~ Oblig R-M Plots {.tabset}

## Stranger-Like {.tabset}

### No Choice
```{r}
rmcorr_SL_NoChoice <- rmcorr(participant = ResponseId, 
                                  measure1 = oblig, 
                                  measure2 = moral, 
                                  dataset = E2_all_long %>%
                                    filter(BSs_cond == "Stranger-Like") %>%
                                    filter(`Choice Context` == "No Choice"))
```
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(rmcorr_plot_SL_NoChoice <- ggplot(data = E2_all_long %>%
                                    filter(BSs_cond == "Stranger-Like") %>%
                                    filter(`Choice Context` == "No Choice"),
                                       aes(x = oblig, y = moral, group = ResponseId, color = ResponseId)) +
                                         geom_point(aes(color = ResponseId)) +
                                         geom_line(aes(y = rmcorr_SL_NoChoice$model$fitted.values), linetype = 1) +
                                         scale_x_continuous(limits = c(-5,105), breaks = c(0,25,50,75,100)) +
                                         scale_y_continuous(limits = c(-5,105), breaks = c(0,25,50,75,100)) +
                                         theme_classic() +
                                         xlab("Obligation Strength") +
                                         ylab("Moral Character") +
                                         theme(axis.title.x = element_text(size = 14), 
                                               axis.title.y = element_text(size = 14),
                                               axis.text.x = element_text(color = "black", size = 12), 
                                               axis.text.y = element_text(color = "black", size = 12),
                                               legend.position = "none"))
```
### Choice
```{r}
rmcorr_SL_Choice <- rmcorr(participant = ResponseId, 
                                  measure1 = oblig, 
                                  measure2 = moral, 
                                  dataset = E2_all_long %>%
                                    filter(BSs_cond == "Stranger-Like") %>%
                                    filter(`Choice Context` == "Choice"))
```
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(rmcorr_plot_SL_Choice <- ggplot(data = E2_all_long %>%
                                    filter(BSs_cond == "Stranger-Like") %>%
                                    filter(`Choice Context` == "Choice"),
                                       aes(x = oblig, y = moral, group = ResponseId, color = ResponseId)) +
                                         geom_point(aes(color = ResponseId)) +
                                         geom_line(aes(y = rmcorr_SL_Choice$model$fitted.values), linetype = 1) +
                                         scale_x_continuous(limits = c(-5,105), breaks = c(0,25,50,75,100)) +
                                         scale_y_continuous(limits = c(-5,105), breaks = c(0,25,50,75,100)) +
                                         theme_classic() +
                                         xlab("Obligation Strength") +
                                         ylab("Moral Character") +
                                         theme(axis.title.x = element_text(size = 14), 
                                               axis.title.y = element_text(size = 14),
                                               axis.text.x = element_text(color = "black", size = 12), 
                                               axis.text.y = element_text(color = "black", size = 12),
                                               legend.position = "none"))
```

## Friend-Like {.tabset}

### No Choice
```{r}
rmcorr_FL_NoChoice <- rmcorr(participant = ResponseId, 
                                  measure1 = oblig, 
                                  measure2 = moral, 
                                  dataset = E2_all_long %>%
                                    filter(BSs_cond == "Friend-Like") %>%
                                    filter(`Choice Context` == "No Choice"))
```
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(rmcorr_plot_FL_NoChoice <- ggplot(data = E2_all_long %>%
                                    filter(BSs_cond == "Friend-Like") %>%
                                    filter(`Choice Context` == "No Choice"),
                                       aes(x = oblig, y = moral, group = ResponseId, color = ResponseId)) +
                                         geom_point(aes(color = ResponseId)) +
                                         geom_line(aes(y = rmcorr_FL_NoChoice$model$fitted.values), linetype = 1) +
                                         scale_x_continuous(limits = c(-5,105), breaks = c(0,25,50,75,100)) +
                                         scale_y_continuous(limits = c(-5,105), breaks = c(0,25,50,75,100)) +
                                         theme_classic() +
                                         xlab("Obligation Strength") +
                                         ylab("Moral Character") +
                                         theme(axis.title.x = element_text(size = 14), 
                                               axis.title.y = element_text(size = 14),
                                               axis.text.x = element_text(color = "black", size = 12), 
                                               axis.text.y = element_text(color = "black", size = 12),
                                               legend.position = "none"))
```
### Choice
```{r}
rmcorr_FL_Choice <- rmcorr(participant = ResponseId, 
                                  measure1 = oblig, 
                                  measure2 = moral, 
                                  dataset = E2_all_long %>%
                                    filter(BSs_cond == "Friend-Like") %>%
                                    filter(`Choice Context` == "Choice"))
```
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(rmcorr_plot_FL_Choice <- ggplot(data = E2_all_long %>%
                                    filter(BSs_cond == "Friend-Like") %>%
                                    filter(`Choice Context` == "Choice"),
                                       aes(x = oblig, y = moral, group = ResponseId, color = ResponseId)) +
                                         geom_point(aes(color = ResponseId)) +
                                         geom_line(aes(y = rmcorr_FL_Choice$model$fitted.values), linetype = 1) +
                                         scale_x_continuous(limits = c(-5,105), breaks = c(0,25,50,75,100)) +
                                         scale_y_continuous(limits = c(-5,105), breaks = c(0,25,50,75,100)) +
                                         theme_classic() +
                                         xlab("Obligation Strength") +
                                         ylab("Moral Character") +
                                         theme(axis.title.x = element_text(size = 14), 
                                               axis.title.y = element_text(size = 14),
                                               axis.text.x = element_text(color = "black", size = 12), 
                                               axis.text.y = element_text(color = "black", size = 12),
                                               legend.position = "none"))
```


# Moral ~ Oblig R-M Tests {.tabset}

<br>

See our pre-registration (INSERT LINK) for our predictions about the within-individual relationship between obligation judgments and moral character judgments.

<br>
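
As background on what these tests estimate: `rmcorr` is based on an ANCOVA with participant as a factor, so the common within-person slope it reports can be recovered (hedged illustration only, using the objects and variable names defined above) from a plain `lm()` with `ResponseId` entered as a covariate:
```{r, eval = FALSE}
# Hedged illustration, not part of the pre-registered analyses: the sign and
# common within-person slope underlying rmcorr_SL_NoChoice match the oblig
# slope from an ANCOVA-style lm() with participant as a factor.
fit <- lm(moral ~ ResponseId + oblig,
          data = E2_all_long %>%
            filter(BSs_cond == "Stranger-Like") %>%
            filter(`Choice Context` == "No Choice"))
coef(fit)["oblig"]
```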

## Stranger-Like {.tabset}

### No Choice
```{r}
print(rmcorr_SL_NoChoice)
```
### Choice
```{r}
print(rmcorr_SL_Choice)
```

## Friend-Like {.tabset}

### No Choice
```{r}
print(rmcorr_FL_NoChoice)
```
### Choice
```{r}
print(rmcorr_FL_Choice)
```


# Ind. Diff Internal Reliability Tests {.tabset}

## MAC {.tabset}

### Family {.tabset}

#### Stranger-Like
```{r}
# create dataset with only MAC Family Values variables
E2_SL_clean_MAC_Fam_only <- E2_SL_clean %>% select(MAC_Jud_1:MAC_Jud_3, MAC_Rel_1:MAC_Rel_3)

psych::alpha(E2_SL_clean_MAC_Fam_only)
```
#### Friend-Like
```{r}
# create dataset with only MAC Family Values variables
E2_FL_clean_MAC_Fam_only <- E2_FL_clean %>% select(MAC_Jud_1:MAC_Jud_3, MAC_Rel_1:MAC_Rel_3)

psych::alpha(E2_FL_clean_MAC_Fam_only)
```

### Group {.tabset}

#### Stranger-Like
```{r}
# create dataset with only MAC Group Values variables
E2_SL_clean_MAC_Group_only <- E2_SL_clean %>% select(MAC_Jud_4:MAC_Jud_6, MAC_Rel_4:MAC_Rel_6)

psych::alpha(E2_SL_clean_MAC_Group_only)
```
#### Friend-Like
```{r}
# create dataset with only MAC Group Values variables
E2_FL_clean_MAC_Group_only <- E2_FL_clean %>% select(MAC_Jud_4:MAC_Jud_6, MAC_Rel_4:MAC_Rel_6)

psych::alpha(E2_FL_clean_MAC_Group_only)
```

### Reciprocity {.tabset}

#### Stranger-Like
```{r}
# create dataset with only MAC Reciprocity Values variables
E2_SL_clean_MAC_Rec_only <- E2_SL_clean %>% select(MAC_Jud_7:MAC_Jud_9, MAC_Rel_7:MAC_Rel_9)

psych::alpha(E2_SL_clean_MAC_Rec_only)
```
#### Friend-Like
```{r}
# create dataset with only MAC Reciprocity Values variables
E2_FL_clean_MAC_Rec_only <- E2_FL_clean %>% select(MAC_Jud_7:MAC_Jud_9, MAC_Rel_7:MAC_Rel_9)

psych::alpha(E2_FL_clean_MAC_Rec_only)
```

### Heroism {.tabset}

#### Stranger-Like
```{r}
# create dataset with only MAC Heroism Values variables
E2_SL_clean_MAC_Hero_only <- E2_SL_clean %>% select(MAC_Jud_10:MAC_Jud_12, MAC_Rel_10:MAC_Rel_12)

psych::alpha(E2_SL_clean_MAC_Hero_only)
```
#### Friend-Like
```{r}
# create dataset with only MAC Heroism Values variables
E2_FL_clean_MAC_Hero_only <- E2_FL_clean %>% select(MAC_Jud_10:MAC_Jud_12, MAC_Rel_10:MAC_Rel_12)

psych::alpha(E2_FL_clean_MAC_Hero_only)
```

### Authority {.tabset}

#### Stranger-Like
```{r}
# create dataset with only MAC Authority Values variables
E2_SL_clean_MAC_Auth_only <- E2_SL_clean %>% select(MAC_Jud_13:MAC_Jud_15, MAC_Rel_13:MAC_Rel_15)

psych::alpha(E2_SL_clean_MAC_Auth_only)
```
#### Friend-Like
```{r}
# create dataset with only MAC Authority Values variables
E2_FL_clean_MAC_Auth_only <- E2_FL_clean %>% select(MAC_Jud_13:MAC_Jud_15, MAC_Rel_13:MAC_Rel_15)

psych::alpha(E2_FL_clean_MAC_Auth_only)
```

### Fairness {.tabset}

#### Stranger-Like
```{r}
# create dataset with only MAC Fairness Values variables
E2_SL_clean_MAC_Fair_only <- E2_SL_clean %>% select(MAC_Jud_16:MAC_Jud_18, MAC_Rel_16:MAC_Rel_18)

psych::alpha(E2_SL_clean_MAC_Fair_only)
```
#### Friend-Like
```{r}
# create dataset with only MAC Fairness Values variables
E2_FL_clean_MAC_Fair_only <- E2_FL_clean %>% select(MAC_Jud_16:MAC_Jud_18, MAC_Rel_16:MAC_Rel_18)

psych::alpha(E2_FL_clean_MAC_Fair_only)
```

### Property {.tabset}

#### Stranger-Like
```{r}
# create dataset with only MAC Property Values variables
E2_SL_clean_MAC_Prop_only <- E2_SL_clean %>% select(MAC_Jud_19_r:MAC_Jud_21_r, MAC_Rel_19:MAC_Rel_21)

psych::alpha(E2_SL_clean_MAC_Prop_only)
```
#### Friend-Like
```{r}
# create dataset with only MAC Property Values variables
E2_FL_clean_MAC_Prop_only <- E2_FL_clean %>% select(MAC_Jud_19_r:MAC_Jud_21_r, MAC_Rel_19:MAC_Rel_21)

psych::alpha(E2_FL_clean_MAC_Prop_only)
```

## MFT {.tabset}

### Harm {.tabset}

#### Stranger-Like
```{r}
E2_SL_clean_MFQ_Harm_only <- E2_SL_clean %>% select(MFQ_Jud_1:MFQ_Jud_3, MFQ_Rel_1:MFQ_Rel_3)
psych::alpha(E2_SL_clean_MFQ_Harm_only)
```
#### Friend-Like
```{r}
E2_FL_clean_MFQ_Harm_only <- E2_FL_clean %>% select(MFQ_Jud_1:MFQ_Jud_3, MFQ_Rel_1:MFQ_Rel_3)
psych::alpha(E2_FL_clean_MFQ_Harm_only)
```

### Fairness {.tabset}

#### Stranger-Like
```{r}
E2_SL_clean_MFQ_Fair_only <- E2_SL_clean %>% select(MFQ_Jud_4:MFQ_Jud_6, MFQ_Rel_4:MFQ_Rel_6)
psych::alpha(E2_SL_clean_MFQ_Fair_only)
```
#### Friend-Like
```{r}
E2_FL_clean_MFQ_Fair_only <- E2_FL_clean %>% select(MFQ_Jud_4:MFQ_Jud_6, MFQ_Rel_4:MFQ_Rel_6)
psych::alpha(E2_FL_clean_MFQ_Fair_only)
```

### Loyalty {.tabset}

#### Stranger-Like
```{r}
E2_SL_clean_MFQ_Loyalty_only <- E2_SL_clean %>% select(MFQ_Jud_7:MFQ_Jud_9, MFQ_Rel_7:MFQ_Rel_9)
psych::alpha(E2_SL_clean_MFQ_Loyalty_only)
```
#### Friend-Like
```{r}
E2_FL_clean_MFQ_Loyalty_only <- E2_FL_clean %>% select(MFQ_Jud_7:MFQ_Jud_9, MFQ_Rel_7:MFQ_Rel_9)
psych::alpha(E2_FL_clean_MFQ_Loyalty_only)
```

### Authority {.tabset}

#### Stranger-Like
```{r}
E2_SL_clean_MFQ_Auth_only <- E2_SL_clean %>% select(MFQ_Jud_10:MFQ_Jud_12, MFQ_Rel_10:MFQ_Rel_12)
psych::alpha(E2_SL_clean_MFQ_Auth_only)
```
#### Friend-Like
```{r}
E2_FL_clean_MFQ_Auth_only <- E2_FL_clean %>% select(MFQ_Jud_10:MFQ_Jud_12, MFQ_Rel_10:MFQ_Rel_12)
psych::alpha(E2_FL_clean_MFQ_Auth_only)
```

### Purity {.tabset}

#### Stranger-Like
```{r}
E2_SL_clean_MFQ_Purity_only <- E2_SL_clean %>% select(MFQ_Jud_13:MFQ_Jud_15, MFQ_Rel_13:MFQ_Rel_15)
psych::alpha(E2_SL_clean_MFQ_Purity_only)
```
#### Friend-Like
```{r}
E2_FL_clean_MFQ_Purity_only <- E2_FL_clean %>% select(MFQ_Jud_13:MFQ_Jud_15, MFQ_Rel_13:MFQ_Rel_15)
psych::alpha(E2_FL_clean_MFQ_Purity_only)
```

## OUS {.tabset}

### Impartial Beneficence {.tabset}

#### Stranger-Like
```{r}
E2_SL_clean_OUS_IB_only <- E2_SL_clean %>% select(OUS_IB1:OUS_IB5)
psych::alpha(E2_SL_clean_OUS_IB_only)
```
#### Friend-Like
```{r}
E2_FL_clean_OUS_IB_only <- E2_FL_clean %>% select(OUS_IB1:OUS_IB5)
psych::alpha(E2_FL_clean_OUS_IB_only)
```

### Instrumental Harm {.tabset}

#### Stranger-Like
```{r}
E2_SL_clean_OUS_IH_only <- E2_SL_clean %>% select(OUS_IH1:OUS_IH4)
psych::alpha(E2_SL_clean_OUS_IH_only)
```
#### Friend-Like
```{r}
E2_FL_clean_OUS_IH_only <- E2_FL_clean %>% select(OUS_IH1:OUS_IH4)
psych::alpha(E2_FL_clean_OUS_IH_only)
```


# Oblig ~ Ind. Diff Plots {.tabset}

## MAC {.tabset}

### Family {.tabset}
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(oblig_mac_Fam_plot <- ggplot(data = E2_all_long,
                                     aes(x = MAC_Fam_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMAC Family Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

ggsave("E2_oblig~MAC_plot.png")
```
### Group {.tabset}
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(oblig_mac_Group_plot <- ggplot(data = E2_all_long,
                                     aes(x = MAC_Group_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMAC Group Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

```
### Reciprocity {.tabset}
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(oblig_mac_Rec_plot <- ggplot(data = E2_all_long,
                                     aes(x = MAC_Rec_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMAC Reciprocity Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

```
### Heroism {.tabset}
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(oblig_mac_Hero_plot <- ggplot(data = E2_all_long,
                                     aes(x = MAC_Hero_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMAC Heroism Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

```
### Authority {.tabset}
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
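# note: MAC_Def_Combined is the MAC questionnaire's "Deference" dimension, plotted here under the Authority tab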
print(oblig_mac_Auth_plot <- ggplot(data = E2_all_long,
                                     aes(x = MAC_Def_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMAC Authority Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

```
### Fairness {.tabset}
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(oblig_mac_Fair_plot <- ggplot(data = E2_all_long,
                                     aes(x = MAC_Fair_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMAC Fairness Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

```
### Property {.tabset}
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(oblig_mac_Prop_plot <- ggplot(data = E2_all_long,
                                     aes(x = MAC_Prop_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMAC Property Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

```

## MFT {.tabset}

### Harm {.tabset}
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(oblig_mft_Harm_plot <- ggplot(data = E2_all_long,
                                     aes(x = MFQ_Harm_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(.5,6.5), breaks = c(1,2,3,4,5,6)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMFT Harm Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

```
### Fairness {.tabset}
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(oblig_mft_Fairness_plot <- ggplot(data = E2_all_long,
                                     aes(x = MFQ_Fairness_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(.5,6.5), breaks = c(1,2,3,4,5,6)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMFT Fairness Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

```
### Loyalty {.tabset}
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(oblig_mft_Loyalty_plot <- ggplot(data = E2_all_long,
                                     aes(x = MFQ_Loyalty_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(.5,6.5), breaks = c(1,2,3,4,5,6)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMFT Loyalty Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

ggsave("E2_oblig~MFT_plot.png")
```
### Authority {.tabset}
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(oblig_mft_Authority_plot <- ggplot(data = E2_all_long,
                                     aes(x = MFQ_Authority_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(.5,6.5), breaks = c(1,2,3,4,5,6)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMFT Authority Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

```
### Purity {.tabset}
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(oblig_mft_Purity_plot <- ggplot(data = E2_all_long,
                                     aes(x = MFQ_Purity_Combined, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(.5,6.5), breaks = c(1,2,3,4,5,6)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nMFT Purity Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

```

## OUS {.tabset}

### Impartial Beneficence {.tabset}
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(oblig_ous_ib_plot <- ggplot(data = E2_all_long,
                                     aes(x = OUS_IB, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(.5,7.5), breaks = c(1,2,3,4,5,6,7)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nOUS Impartial Beneficence Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

ggsave("E2_oblig~OUS_plot.png")
```
### Instrumental Harm {.tabset}
```{r, fig.width = 14, fig.height = 9, out.width = "75%", out.height = "75%"}
print(oblig_ous_ih_plot <- ggplot(data = E2_all_long,
                                     aes(x = OUS_IH, y = oblig, color = Relation)) +
                                            geom_jitter(aes(color = Relation), alpha = 0.5) +
                                            scale_color_manual(values = c("lightskyblue3", "indianred3")) +
                                            geom_smooth(method = 'lm') +
                                            facet_wrap(BSs_cond~`Choice Context`, ncol = 2) +
                                            scale_x_continuous(limits = c(.5,7.5), breaks = c(1,2,3,4,5,6,7)) +
                                            scale_y_continuous(limits = c(-1,101), breaks = c(0,25,50,75,100)) +
                                            theme_classic() +
                                            xlab("\nOUS Instrumental Harm Composite") +
                                            ylab("Obligation Strength\n") +
                                                    theme(axis.title.x = element_text(size = 18), 
                                                    axis.title.y = element_text(size = 18),
                                                    axis.text.x = element_text(color = "black", size = 16), 
                                                    axis.text.y = element_text(color = "black", size = 16),
                                                    strip.text.x = element_text(color = "black", size = 16),
                                                    legend.position = "right",
                                                    legend.title = element_text(color = "black", size = 18),
                                                    legend.text = element_text(color = "black", size = 16)))

```


# Oblig ~ Ind. Diff Tests {.tabset}

<br>

See our pre-registration (INSERT LINK) and manuscript for our predictions about the relationship between individual differences and obligation judgments.

<br>
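
The tabs below report each Pearson correlation in its own chunk. Purely as a hedged convenience sketch (the variable names are copied from the MAC Family tabs below; this is not the pre-registered reporting format), the same correlations can be generated programmatically:
```{r, eval = FALSE}
# Hedged convenience sketch; extend the predictor/outcome vectors to cover the
# other composites, Choice Context conditions, and the Friend-Like sample.
predictors <- c("MAC_Fam_Combined", "MAC_Group_Combined", "MAC_Rec_Combined")
outcomes   <- c("NoChoice_CUZ_oblig", "NoChoice_SIB_oblig", "NoChoice_CUZminusSIB_oblig")
grid <- tidyr::expand_grid(x = predictors, y = outcomes)
purrr::pmap_dfr(grid, function(x, y) cor_test(E2_SL_clean, x, y, method = "Pearson"))
```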

## MAC {.tabset}

### Family {.tabset}

#### Stranger-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Fam_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MAC_Fam_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MAC_Fam_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Fam_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MAC_Fam_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MAC_Fam_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

#### Friend-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Fam_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MAC_Fam_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MAC_Fam_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Fam_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MAC_Fam_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MAC_Fam_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

### Group {.tabset}

#### Stranger-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Group_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MAC_Group_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MAC_Group_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Group_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MAC_Group_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MAC_Group_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

#### Friend-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Group_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MAC_Group_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MAC_Group_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Group_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MAC_Group_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MAC_Group_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

### Reciprocity {.tabset}

#### Stranger-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Rec_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MAC_Rec_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MAC_Rec_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Rec_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MAC_Rec_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MAC_Rec_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

#### Friend-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Rec_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MAC_Rec_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MAC_Rec_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Rec_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MAC_Rec_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MAC_Rec_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

### Heroism {.tabset}

#### Stranger-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Hero_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MAC_Hero_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MAC_Hero_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Hero_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MAC_Hero_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MAC_Hero_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

#### Friend-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Hero_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MAC_Hero_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MAC_Hero_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Hero_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MAC_Hero_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MAC_Hero_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```


### Deference {.tabset}

#### Stranger-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Def_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MAC_Def_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MAC_Def_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Def_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MAC_Def_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MAC_Def_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

#### Friend-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Def_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MAC_Def_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MAC_Def_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Def_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MAC_Def_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MAC_Def_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

### Fairness {.tabset}

#### Stranger-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Fair_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MAC_Fair_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MAC_Fair_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Fair_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MAC_Fair_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MAC_Fair_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

#### Friend-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Fair_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MAC_Fair_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MAC_Fair_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Fair_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MAC_Fair_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MAC_Fair_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

### Property {.tabset}

#### Stranger-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Prop_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MAC_Prop_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MAC_Prop_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MAC_Prop_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MAC_Prop_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MAC_Prop_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

#### Friend-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Prop_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MAC_Prop_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MAC_Prop_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MAC_Prop_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MAC_Prop_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MAC_Prop_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

## MFT {.tabset}

### Harm {.tabset}

#### Stranger-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MFQ_Harm_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MFQ_Harm_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MFQ_Harm_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MFQ_Harm_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MFQ_Harm_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MFQ_Harm_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

#### Friend-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MFQ_Harm_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MFQ_Harm_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MFQ_Harm_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MFQ_Harm_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MFQ_Harm_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MFQ_Harm_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

### Fairness {.tabset}

#### Stranger-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MFQ_Fairness_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MFQ_Fairness_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MFQ_Fairness_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MFQ_Fairness_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MFQ_Fairness_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MFQ_Fairness_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

#### Friend-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MFQ_Fairness_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MFQ_Fairness_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MFQ_Fairness_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MFQ_Fairness_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MFQ_Fairness_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MFQ_Fairness_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

### Loyalty {.tabset}

#### Stranger-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MFQ_Loyalty_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MFQ_Loyalty_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MFQ_Loyalty_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MFQ_Loyalty_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MFQ_Loyalty_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MFQ_Loyalty_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

#### Friend-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MFQ_Loyalty_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MFQ_Loyalty_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MFQ_Loyalty_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MFQ_Loyalty_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MFQ_Loyalty_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MFQ_Loyalty_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

### Authority {.tabset}

#### Stranger-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MFQ_Authority_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MFQ_Authority_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MFQ_Authority_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MFQ_Authority_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MFQ_Authority_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MFQ_Authority_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

#### Friend-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MFQ_Authority_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MFQ_Authority_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MFQ_Authority_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MFQ_Authority_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MFQ_Authority_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MFQ_Authority_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

### Purity {.tabset}

#### Stranger-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MFQ_Purity_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MFQ_Purity_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MFQ_Purity_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "MFQ_Purity_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "MFQ_Purity_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "MFQ_Purity_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

#### Friend-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MFQ_Purity_Combined", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MFQ_Purity_Combined", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MFQ_Purity_Combined", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "MFQ_Purity_Combined", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "MFQ_Purity_Combined", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "MFQ_Purity_Combined", "Choice_CUZminusSIB_oblig", method = "Pearson")
```


## OUS {.tabset}

### Impartial Beneficence {.tabset}

#### Stranger-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "OUS_IB", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "OUS_IB", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "OUS_IB", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "OUS_IB", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "OUS_IB", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "OUS_IB", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

#### Friend-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "OUS_IB", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "OUS_IB", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "OUS_IB", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "OUS_IB", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "OUS_IB", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "OUS_IB", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

### Instrumental Harm {.tabset}

#### Stranger-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "OUS_IH", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "OUS_IH", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "OUS_IH", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_SL_clean, "OUS_IH", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_SL_clean, "OUS_IH", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_SL_clean, "OUS_IH", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

#### Friend-Like {.tabset}

##### No Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "OUS_IH", "NoChoice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "OUS_IH", "NoChoice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "OUS_IH", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
##### Choice
```{r}
# distant pearson's r
cor_test(E2_FL_clean, "OUS_IH", "Choice_CUZ_oblig", method = "Pearson")

# close pearson's r
cor_test(E2_FL_clean, "OUS_IH", "Choice_SIB_oblig", method = "Pearson")

# diff pearson's r
cor_test(E2_FL_clean, "OUS_IH", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

# Oblig ~ MAC Family Values vs MFT Ingroup Loyalty Tests {.tabset}

## Stranger-Like {.tabset}

### No Choice
```{r}
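# note: cocor.dep.groups.overlap() comes from the cocor package; attach it with library(cocor) earlier in the session if it is not already loaded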
# correlation values are taken from the oblig ~ ind. diffs analyses
## r.jk = oblig ~ family values corr; r.jh = oblig ~ ingroup loyalty corr; r.kh = family values ~ ingroup loyalty corr

# distant
cocor.dep.groups.overlap(r.jk = .31, r.jh = .21, r.kh = .62, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

# close
cocor.dep.groups.overlap(r.jk = .33, r.jh = .18, r.kh = .62, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

# difference
cocor.dep.groups.overlap(r.jk = -.03, r.jh = .02, r.kh = .62, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```
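
The `r.jk`, `r.jh`, and `r.kh` values above are hand-copied from the correlation output reported earlier. As a minimal sketch (assuming the column names used elsewhere in this document), the distant (cousin), no-choice comparison could instead pull those correlations straight from the data:

```{r, eval = FALSE}
# Sketch only: derive the three correlations from the data rather than hard-coding them
# (distant/no-choice comparison, stranger-like condition).
r_jk <- cor(E2_SL_clean$NoChoice_CUZ_oblig, E2_SL_clean$MAC_Fam_Combined,     use = "complete.obs")
r_jh <- cor(E2_SL_clean$NoChoice_CUZ_oblig, E2_SL_clean$MFQ_Loyalty_Combined, use = "complete.obs")
r_kh <- cor(E2_SL_clean$MAC_Fam_Combined,   E2_SL_clean$MFQ_Loyalty_Combined, use = "complete.obs")

cocor::cocor.dep.groups.overlap(r.jk = r_jk, r.jh = r_jh, r.kh = r_kh,
                                n = nrow(E2_SL_clean), alternative = "two.sided",
                                test = "steiger1980", alpha = 0.05,
                                conf.level = 0.95, null.value = 0)
```

Note that `nrow(E2_SL_clean)` assumes complete data on all three variables; with missing responses, `n` should be the number of complete cases instead.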
### Choice
```{r}
# correlation values are taken from the oblig ~ ind. diffs analyses
## r.jk = oblig ~ family values corr; r.jh = oblig ~ ingroup loyalty corr; r.kh = family values ~ ingroup loyalty corr

# distant
cocor.dep.groups.overlap(r.jk = .37, r.jh = .27, r.kh = .62, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

# close
cocor.dep.groups.overlap(r.jk = .43, r.jh = .28, r.kh = .62, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

# difference
cocor.dep.groups.overlap(r.jk = -.20, r.jh = -.07, r.kh = .62, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```

## Friend-Like {.tabset}

### No Choice
```{r}
# correlation values are taken from the oblig ~ ind. diffs analyses
## r.jk = oblig ~ family values corr; r.jh = oblig ~ ingroup loyalty corr; r.kh = family values ~ ingroup loyalty corr

# distant
cocor.dep.groups.overlap(r.jk = .25, r.jh = .14, r.kh = .64, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

# close
cocor.dep.groups.overlap(r.jk = .33, r.jh = .19, r.kh = .64, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

# difference
cocor.dep.groups.overlap(r.jk = -.06, r.jh = -.04, r.kh = .64, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```
### Choice
```{r}
# correlation values are taken from the oblig ~ ind. diffs analyses
## r.jk = oblig ~ family values corr; r.jh = oblig ~ ingroup loyalty corr; r.kh = family values ~ ingroup loyalty corr

# distant
cocor.dep.groups.overlap(r.jk = .29, r.jh = .15, r.kh = .64, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

# close
cocor.dep.groups.overlap(r.jk = .34, r.jh = .17, r.kh = .64, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)

# difference
cocor.dep.groups.overlap(r.jk = -.17, r.jh = -.06, r.kh = .64, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```


# Oblig Diff ~ Other Pre-Outcome Diff Tests {.tabset}

## Relate {.tabset}

### Stranger-Like {.tabset}

#### No Choice
```{r}
# diff pearson's r
cor_test(E2_SL_clean, "NoChoice_CUZminusSIB_relate", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
#### Choice
```{r}
# diff pearson's r
cor_test(E2_SL_clean, "Choice_CUZminusSIB_relate", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

### Friend-Like {.tabset}

#### No Choice
```{r}
# diff pearson's r
cor_test(E2_FL_clean, "NoChoice_CUZminusSIB_relate", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
#### Choice
```{r}
# diff pearson's r
cor_test(E2_FL_clean, "Choice_CUZminusSIB_relate", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

## Close {.tabset}

### Stranger-Like {.tabset}

#### No Choice
```{r}
# diff pearson's r
cor_test(E2_SL_clean, "NoChoice_CUZminusSIB_close", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
#### Choice
```{r}
# diff pearson's r
cor_test(E2_SL_clean, "Choice_CUZminusSIB_close", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

### Friend-Like {.tabset}

#### No Choice
```{r}
# diff pearson's r
cor_test(E2_FL_clean, "NoChoice_CUZminusSIB_close", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
#### Choice
```{r}
# diff pearson's r
cor_test(E2_FL_clean, "Choice_CUZminusSIB_close", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

## Prior Help {.tabset}

### Stranger-Like {.tabset}

#### No Choice
```{r}
# diff pearson's r
cor_test(E2_SL_clean, "NoChoice_CUZminusSIB_priorhelp", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
#### Choice
```{r}
# diff pearson's r
cor_test(E2_SL_clean, "Choice_CUZminusSIB_priorhelp", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

### Friend-Like {.tabset}

#### No Choice
```{r}
# diff pearson's r
cor_test(E2_FL_clean, "NoChoice_CUZminusSIB_priorhelp", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
#### Choice
```{r}
# diff pearson's r
cor_test(E2_FL_clean, "Choice_CUZminusSIB_priorhelp", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

## Future Help {.tabset}

### Stranger-Like {.tabset}

#### No Choice
```{r}
# diff pearson's r
cor_test(E2_SL_clean, "NoChoice_CUZminusSIB_futurehelp", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
#### Choice
```{r}
# diff pearson's r
cor_test(E2_SL_clean, "Choice_CUZminusSIB_futurehelp", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

### Friend-Like {.tabset}

#### No Choice
```{r}
# diff pearson's r
cor_test(E2_FL_clean, "NoChoice_CUZminusSIB_futurehelp", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
#### Choice
```{r}
# diff pearson's r
cor_test(E2_FL_clean, "Choice_CUZminusSIB_futurehelp", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

## Prior Interax {.tabset}

### Stranger-Like {.tabset}

#### No Choice
```{r}
# diff pearson's r
cor_test(E2_SL_clean, "NoChoice_CUZminusSIB_priorinteract", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
#### Choice
```{r}
# diff pearson's r
cor_test(E2_SL_clean, "Choice_CUZminusSIB_priorinteract", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

### Friend-Like {.tabset}

#### No Choice
```{r}
# diff pearson's r
cor_test(E2_FL_clean, "NoChoice_CUZminusSIB_priorinteract", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
#### Choice
```{r}
# diff pearson's r
cor_test(E2_FL_clean, "Choice_CUZminusSIB_priorinteract", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

## Future Interax {.tabset}

### Stranger-Like {.tabset}

#### No Choice
```{r}
# diff pearson's r
cor_test(E2_SL_clean, "NoChoice_CUZminusSIB_futureinteract", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
#### Choice
```{r}
# diff pearson's r
cor_test(E2_SL_clean, "Choice_CUZminusSIB_futureinteract", "Choice_CUZminusSIB_oblig", method = "Pearson")
```

### Friend-Like {.tabset}

#### No Choice
```{r}
# diff pearson's r
cor_test(E2_FL_clean, "NoChoice_CUZminusSIB_futureinteract", "NoChoice_CUZminusSIB_oblig", method = "Pearson")
```
#### Choice
```{r}
# diff pearson's r
cor_test(E2_FL_clean, "Choice_CUZminusSIB_futureinteract", "Choice_CUZminusSIB_oblig", method = "Pearson")
```


# Oblig ~ Relate vs Social Interaction Tests {.tabset}

## Close {.tabset}

### Stranger-Like {.tabset}

#### No Choice
```{r}
# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .10, r.jh = .22, r.kh = .07, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```
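
As in the earlier comparison tests, the correlations here are hand-copied from the difference-score analyses reported above. A minimal sketch of computing them in place (difference-score column names as used in this document) follows the same pattern:

```{r, eval = FALSE}
# Sketch only: relate-diff vs. close-diff comparison for the stranger-like,
# no-choice difference scores, with correlations computed from the data.
r_jk <- cor(E2_SL_clean$NoChoice_CUZminusSIB_oblig,  E2_SL_clean$NoChoice_CUZminusSIB_relate, use = "complete.obs")
r_jh <- cor(E2_SL_clean$NoChoice_CUZminusSIB_oblig,  E2_SL_clean$NoChoice_CUZminusSIB_close,  use = "complete.obs")
r_kh <- cor(E2_SL_clean$NoChoice_CUZminusSIB_relate, E2_SL_clean$NoChoice_CUZminusSIB_close,  use = "complete.obs")

cocor::cocor.dep.groups.overlap(r.jk = r_jk, r.jh = r_jh, r.kh = r_kh,
                                n = nrow(E2_SL_clean), alternative = "two.sided",
                                test = "steiger1980", alpha = 0.05,
                                conf.level = 0.95, null.value = 0)
```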
#### Choice
```{r}
# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .10, r.jh = .37, r.kh = .15, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```

### Friend-Like {.tabset}

#### No Choice
```{r}
# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .02, r.jh = .25, r.kh = .04, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```
#### Choice
```{r}
# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .13, r.jh = .60, r.kh = .03, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```

## Prior Help {.tabset}

### Stranger-Like {.tabset}

#### No Choice
```{r}
# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .10, r.jh = .21, r.kh = .09, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```
#### Choice
```{r}
# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .10, r.jh = .42, r.kh = .15, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```

### Friend-Like {.tabset}

#### No Choice
```{r}
# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .02, r.jh = .26, r.kh = .09, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```
#### Choice
```{r}
# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .13, r.jh = .56, r.kh = .08, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```

## Future Help {.tabset}

### Stranger-Like {.tabset}

#### No Choice
```{r}
# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .10, r.jh = .36, r.kh = .17, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```
#### Choice
```{r}
# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .10, r.jh = .51, r.kh = .13, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```

### Friend-Like {.tabset}

#### No Choice
```{r}
# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .02, r.jh = .35, r.kh = .03, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```
#### Choice
```{r}
# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .13, r.jh = .60, r.kh = .09, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```

## Prior Interax {.tabset}

### Stranger-Like {.tabset}

#### No Choice
```{r}
# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .10, r.jh = .16, r.kh = .14, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```
#### Choice
```{r}
# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .10, r.jh = .34, r.kh = .15, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```

### Friend-Like {.tabset}

#### No Choice
```{r}
# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .02, r.jh = .26, r.kh = .08, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```
#### Choice
```{r}
# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .13, r.jh = .45, r.kh = .06, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```

## Future Interax {.tabset}

### Stranger-Like {.tabset}

#### No Choice
```{r}
# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .10, r.jh = .19, r.kh = .18, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```
#### Choice
```{r}
# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .10, r.jh = .45, r.kh = .15, n = 354, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```

### Friend-Like {.tabset}

#### No Choice
```{r}
# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .02, r.jh = .19, r.kh = .03, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```
#### Choice
```{r}
# correlation values are taken from the oblig diff ~ pre-outcome analyses
## r.jk = oblig diff ~ relate diff corr; r.jh = oblig diff ~ social interaction diff corr; r.kh = relate diff ~ social interaction diff corr

# difference
cocor.dep.groups.overlap(r.jk = .13, r.jh = .54, r.kh = .08, n = 345, alternative = "two.sided",
                         test = "steiger1980", alpha = 0.05, conf.level = 0.95, null.value = 0)
```
