Research Scenario

Rejection sensitivity refers to the degree to which an individual tends to interpret ambiguous cues during interpersonal interactions as signs of rejection. People high in rejection sensitivity may interpret a mildly negative experience, like not immediately receiving a reply to a text, as a stronger sign of rejection than people low in rejection sensitivity.

A researcher is interested in how being high, versus low, on rejection sensitivity affects people’s perception of how much an interaction partner likes them. The researcher is also interested in how the emotion that a partner displays during the interaction (happy versus neutral) affects how much people perceive that their interaction partner likes them. Finally, the researcher is interested in whether the effect of the interaction partner’s emotions on how much people believe their interaction partner likes them differs depending on whether people are high, or low, on rejection sensitivity.

Import Data

data <- import("reject.csv")
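
Note: import() comes from the rio package. The rest of this walkthrough also relies on dplyr (%>%, mutate, group_by, summarize), ggplot2, broom (augment), car (leveneTest), lsr (etaSquared), and emmeans (emmeans, emmip, eff_size), so a setup chunk along these lines is assumed to have been run already:

library(rio)      # import()
library(dplyr)    # %>%, mutate(), group_by(), summarize()
library(ggplot2)  # graphs
library(broom)    # augment()
library(car)      # leveneTest()
library(lsr)      # etaSquared()
library(emmeans)  # emmeans(), emmip(), eff_size()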

Data Cleaning

Both of our independent variables should be factors. The dependent variable should be integer or numeric.

  • IV1: Rejection Sensitivity (rs)

    • Low
    • High
  • IV2: Partner Emotionality (partner)

    • Neutral
    • Happy
  • DV: Perceived liking

    • The degree to which the participant agrees with the statement, “My interaction partner likes me” from 1 (strongly disagree) to 7 (strongly agree)

First, check each variable’s measure type.

str(data)
## 'data.frame':    80 obs. of  3 variables:
##  $ rs     : chr  "Low" "Low" "Low" "Low" ...
##  $ partner: chr  "Neutral" "Neutral" "Neutral" "Neutral" ...
##  $ liking : int  4 4 4 5 5 5 5 5 5 5 ...

We need to convert rs and partner into factors.

Remember: when converting a character variable into a factor, the factor and as.factor functions order the levels alphabetically. Thus, the labels we supply must be listed in that alphabetical order. (If you’re converting an integer or numeric variable into a factor instead, the levels are ordered numerically, and the labels must match that order.)

data <- data %>%
  mutate(rs = factor(rs, labels = c("High", "Low")),
         partner = factor(partner, labels = c("Happy", "Neutral")))

# Check the levels of each IV
levels(data$rs)
## [1] "High" "Low"
levels(data$partner)
## [1] "Happy"   "Neutral"

Descriptive Statistics

Overall mean & SD for each level of rejection sensitivity:

rs_means <- data %>%
  group_by(rs) %>%
  summarize(n = n(),
            mean = mean(liking, na.rm = TRUE),
            sd = sd(liking, na.rm = TRUE))

# Descriptive Statistics Table
rs_means
## # A tibble: 2 × 4
##   rs        n  mean    sd
##   <fct> <int> <dbl> <dbl>
## 1 High     40  4.12 2.45 
## 2 Low      40  5.78 0.920
# Graph of Descriptive Statistics
ggplot(rs_means, aes(x = rs, y = mean, fill = rs)) +
  geom_bar(stat = "identity") +
  ggtitle("Average Perceived Liking at Each Level of Rejection Sensitivity") +
  labs(x = "Rejection Sensitivity",
       y = "Mean Perceived Liking") +
  theme(plot.title = element_text(hjust = 0.5))
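
Bar heights alone hide the variability around each mean. Because rs_means already contains the standard deviations, a sketch like the following would add them as error bars (mean ± 1 SD):

ggplot(rs_means, aes(x = rs, y = mean, fill = rs)) +
  geom_bar(stat = "identity") +
  geom_errorbar(aes(ymin = mean - sd, ymax = mean + sd), width = 0.2) +
  ggtitle("Average Perceived Liking at Each Level of Rejection Sensitivity") +
  labs(x = "Rejection Sensitivity",
       y = "Mean Perceived Liking") +
  theme(plot.title = element_text(hjust = 0.5))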

Overall mean & SD for each level of partner emotionality:

partner_means <- data %>%
  group_by(partner) %>%
  summarize(n = n(),
            mean = mean(liking, na.rm = TRUE),
            sd = sd(liking, na.rm = TRUE))

partner_means
## # A tibble: 2 × 4
##   partner     n  mean    sd
##   <fct>   <int> <dbl> <dbl>
## 1 Happy      40  6.22 0.733
## 2 Neutral    40  3.68 2.09
ggplot(partner_means, aes(x = partner, y = mean, fill = partner)) +
  geom_bar(stat = "identity") +
  ggtitle("Average Perceived Liking at Each Level of Partner Emotionality") +
  labs(x = "Partner Emotionality",
       y = "Mean Perceived Liking") +
  theme(plot.title = element_text(hjust = 0.5))

Cell means for every combination of each level of rejection sensitivity and partner emotionality:

cell_means <- data %>%
  group_by(rs, partner) %>%
  summarize(n = n(),
            mean = mean(liking, na.rm = TRUE),
            sd = sd(liking, na.rm = TRUE))
## `summarise()` has grouped output by 'rs'. You can override using the `.groups`
## argument.
cell_means
## # A tibble: 4 × 5
## # Groups:   rs [2]
##   rs    partner     n  mean    sd
##   <fct> <fct>   <int> <dbl> <dbl>
## 1 High  Happy      20  6.45 0.686
## 2 High  Neutral    20  1.8  0.696
## 3 Low   Happy      20  6    0.725
## 4 Low   Neutral    20  5.55 1.05
ggplot(cell_means, aes(x = rs, y = mean, fill = partner)) +
  geom_bar(stat = "identity", position = "dodge") +
  ggtitle("Average Perceived Liking at Each Combination of Rejection Sensitivity & Partner Emotionality") +
  labs(x = "Rejection Sensitivity",
       y = "Mean Perceived Liking") +
  theme(plot.title = element_text(hjust = 0.5))

Fit the Model

Typically, when there are multiple IVs in one’s study, the researcher is interested in examining three effects: 1) the main effect of IV1, 2) the main effect of IV2, and 3) the interaction effect.

Contrast Coding

Rejection sensitivity has two levels (high or low). Thus, we only need 2-1 = 1 contrast code to represent it in our model. Let’s use high = +1/2 and low = -1/2 to contrast code rejection sensitivity.

# Make sure you know the order of the levels before coding them
levels(data$rs) 
## [1] "High" "Low"
rs_code1 <- c(1/2, -1/2)
contrasts(data$rs) <- rs_code1

Partner emotionality has two levels (happy or neutral). Thus, we need 2-1 = 1 contrast code to represent it in the model. Let’s use happy = 1/2 and neutral = -1/2 to contrast code partner emotionality.

levels(data$partner)
## [1] "Happy"   "Neutral"
partner_code1 <- c(1/2, -1/2)
contrasts(data$partner) <- partner_code1
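
To confirm that the codes were assigned as intended, we can print each factor's contrast matrix:

contrasts(data$rs)
contrasts(data$partner)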

Fit the Model

To predict liking scores from rejection sensitivity, partner emotionality, and the interaction between these two categorical IVs, we can use the following syntax:

model <- lm(liking ~ rs*partner, data = data)
  • rs*partner represents the interaction between rejection sensitivity and partner emotionality
    • When the * operator is placed between two categorical IVs, the main effect of each IV is automatically included in the model along with their interaction
    • Thus, the above model is equivalent to: lm(liking ~ rs + partner + rs:partner, data = data), where rs:partner is the interaction term on its own
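
One way to convince yourself of this equivalence (a quick sketch using a hypothetical second model object) is to fit the expanded formula and compare the coefficients:

model_explicit <- lm(liking ~ rs + partner + rs:partner, data = data)
all.equal(coef(model), coef(model_explicit))  # should return TRUE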

Checking whether model assumptions were met

Checking whether errors are normally distributed

augment_model <- augment(model)
augment_model
## # A tibble: 80 × 9
##    liking rs    partner .fitted .resid   .hat .sigma .cooksd .std.resid
##     <int> <fct> <fct>     <dbl>  <dbl>  <dbl>  <dbl>   <dbl>      <dbl>
##  1      4 Low   Neutral    5.55 -1.55  0.050   0.788 0.0515      -1.98 
##  2      4 Low   Neutral    5.55 -1.55  0.0500  0.788 0.0515      -1.98 
##  3      4 Low   Neutral    5.55 -1.55  0.0500  0.788 0.0515      -1.98 
##  4      5 Low   Neutral    5.55 -0.550 0.05    0.806 0.00649     -0.702
##  5      5 Low   Neutral    5.55 -0.550 0.05    0.806 0.00649     -0.702
##  6      5 Low   Neutral    5.55 -0.550 0.05    0.806 0.00649     -0.702
##  7      5 Low   Neutral    5.55 -0.550 0.05    0.806 0.00649     -0.702
##  8      5 Low   Neutral    5.55 -0.550 0.05    0.806 0.00649     -0.702
##  9      5 Low   Neutral    5.55 -0.550 0.05    0.806 0.00649     -0.702
## 10      5 Low   Neutral    5.55 -0.550 0.05    0.806 0.00649     -0.702
## # ℹ 70 more rows
ggplot(data = augment_model, aes(x = .resid)) + 
  geom_density(fill = "purple") + 
  stat_function(linetype = 2, 
                fun = dnorm, 
                args = list(mean = mean(augment_model$.resid), 
                            sd = sd(augment_model$.resid))) 

  • The residuals are somewhat bimodal, but the model is fairly robust to violations of the normality assumption, so let’s continue.
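
Another common way to check this assumption is a normal Q-Q plot of the residuals; base R's plot method for lm objects provides one:

plot(model, 2)  # Normal Q-Q plot of the standardized residuals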

Checking independence of errors

# Create a participant ID based on the row order of the data
data$ID <- 1:nrow(data)

# Add the ID variable to the augmented model data
augment_model$ID <- data$ID

ggplot(data = augment_model, aes(x = ID, y = .resid)) + 
  geom_point() +  
  geom_hline(yintercept = 0)

  • Overall, there doesn’t look to be a systematic pattern in the relationship between ID and the model’s residuals.
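
If you would like a formal test to accompany the visual check, the car package's durbinWatsonTest() is one option (a sketch; it tests for autocorrelation between adjacent residuals):

durbinWatsonTest(model)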

Checking homogeneity of variances

plot(model, 1)

  • Remember that we are looking to see that the amount of spread in the residuals is approximately the same in each condition (each combination of the IVs). If you can’t tell visually, you can also run Levene’s test.
leveneTest(liking ~ rs*partner, data = data)
## Levene's Test for Homogeneity of Variance (center = median)
##       Df F value Pr(>F)
## group  3  1.3672 0.2592
##       76
  • Good: Levene’s test for homogeneity of variances is non-significant, so the assumption appears reasonable.

Interpret model output

Let’s examine the output using anova().

# ANOVA table for the model
anova(model)
## Analysis of Variance Table
## 
## Response: liking
##            Df Sum Sq Mean Sq F value    Pr(>F)    
## rs          1  54.45  54.450  84.281 6.028e-14 ***
## partner     1 130.05 130.050 201.299 < 2.2e-16 ***
## rs:partner  1  88.20  88.200 136.521 < 2.2e-16 ***
## Residuals  76  49.10   0.646                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# Effect Sizes
etaSquared(model)
##               eta.sq eta.sq.part
## rs         0.1692045   0.5258329
## partner    0.4041330   0.7259280
## rs:partner 0.2740833   0.6423889

Effect Sizes

  • \(\eta^2 = SS_{R}/SS_{Total}\)
    • Eta-squared is synonymous with the definition we’ve been discussing for \(R^2\)
    • \(SS_{Total}\) is the total variability in our DV
  • \(\eta^2_{p} = SS_{R}/SSE(C)\)
    • Partial eta-squared is synonymous with the definition we’ve been discussing for \(PRE\) when \(PA - PC = 1\)
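
To see where the etaSquared() values come from, we can reproduce the numbers for rejection sensitivity by hand from the sums of squares in the ANOVA table above:

ss_rs      <- 54.45
ss_partner <- 130.05
ss_int     <- 88.20
ss_resid   <- 49.10
ss_total   <- ss_rs + ss_partner + ss_int + ss_resid

ss_rs / ss_total            # eta-squared for rs: 0.169
ss_rs / (ss_rs + ss_resid)  # partial eta-squared for rs: 0.526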

Question: Is there a significant main effect of rejection sensitivity?

rs_means
## # A tibble: 2 × 4
##   rs        n  mean    sd
##   <fct> <int> <dbl> <dbl>
## 1 High     40  4.12 2.45 
## 2 Low      40  5.78 0.920
anova(model)
## Analysis of Variance Table
## 
## Response: liking
##            Df Sum Sq Mean Sq F value    Pr(>F)    
## rs          1  54.45  54.450  84.281 6.028e-14 ***
## partner     1 130.05 130.050 201.299 < 2.2e-16 ***
## rs:partner  1  88.20  88.200 136.521 < 2.2e-16 ***
## Residuals  76  49.10   0.646                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
etaSquared(model)
##               eta.sq eta.sq.part
## rs         0.1692045   0.5258329
## partner    0.4041330   0.7259280
## rs:partner 0.2740833   0.6423889

Yes. On average, people high on rejection sensitivity (M = 4.12, SD = 2.45) perceived that their interaction partner liked them significantly less than people low in rejection sensitivity (M = 5.78, SD = 0.92), F(1, 76) = 84.28, p < .001, \(\eta^2 = 0.17\), \(\eta^2_{p} = 0.53\).

Question: Is there a significant main effect of partner emotionality?

partner_means
## # A tibble: 2 × 4
##   partner     n  mean    sd
##   <fct>   <int> <dbl> <dbl>
## 1 Happy      40  6.22 0.733
## 2 Neutral    40  3.68 2.09
anova(model)
## Analysis of Variance Table
## 
## Response: liking
##            Df Sum Sq Mean Sq F value    Pr(>F)    
## rs          1  54.45  54.450  84.281 6.028e-14 ***
## partner     1 130.05 130.050 201.299 < 2.2e-16 ***
## rs:partner  1  88.20  88.200 136.521 < 2.2e-16 ***
## Residuals  76  49.10   0.646                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
etaSquared(model)
##               eta.sq eta.sq.part
## rs         0.1692045   0.5258329
## partner    0.4041330   0.7259280
## rs:partner 0.2740833   0.6423889

Yes. On average, people who interacted with a happy partner (M = 6.22, SD = 0.73), perceived that their interaction partner liked them significantly more than people who interacted with a neutral partner (M = 3.68, SD = 2.09), F(1, 76) = 201.30, p < .001, \(\eta^2 = 0.40\), \(\eta^2_{p} = 0.73\).

Question: Is there a significant interaction effect?

cell_means
## # A tibble: 4 × 5
## # Groups:   rs [2]
##   rs    partner     n  mean    sd
##   <fct> <fct>   <int> <dbl> <dbl>
## 1 High  Happy      20  6.45 0.686
## 2 High  Neutral    20  1.8  0.696
## 3 Low   Happy      20  6    0.725
## 4 Low   Neutral    20  5.55 1.05
anova(model)
## Analysis of Variance Table
## 
## Response: liking
##            Df Sum Sq Mean Sq F value    Pr(>F)    
## rs          1  54.45  54.450  84.281 6.028e-14 ***
## partner     1 130.05 130.050 201.299 < 2.2e-16 ***
## rs:partner  1  88.20  88.200 136.521 < 2.2e-16 ***
## Residuals  76  49.10   0.646                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
etaSquared(model)
##               eta.sq eta.sq.part
## rs         0.1692045   0.5258329
## partner    0.4041330   0.7259280
## rs:partner 0.2740833   0.6423889

Yes. There was a significant interaction between rejection sensitivity and partner emotionality, F(1, 76) = 136.52, p < .001, \(\eta^2 = 0.27\), \(\eta^2_{p} = 0.64\). We will interpret this significant interaction effect further below!

Now, let’s examine the output using summary():

summary(model)
## 
## Call:
## lm(formula = liking ~ rs * partner, data = data)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
##  -1.55  -0.55   0.00   0.55   1.45 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   4.95000    0.08986   55.08  < 2e-16 ***
## rs1          -1.65000    0.17973   -9.18 6.03e-14 ***
## partner1      2.55000    0.17973   14.19  < 2e-16 ***
## rs1:partner1  4.20000    0.35946   11.68  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.8038 on 76 degrees of freedom
## Multiple R-squared:  0.8474, Adjusted R-squared:  0.8414 
## F-statistic: 140.7 on 3 and 76 DF,  p-value: < 2.2e-16

Question: Interpret the meaning of each of the parameter estimates.

  • b0: 4.95, the mean of the four cell means (the unweighted grand mean)
cell_means
## # A tibble: 4 × 5
## # Groups:   rs [2]
##   rs    partner     n  mean    sd
##   <fct> <fct>   <int> <dbl> <dbl>
## 1 High  Happy      20  6.45 0.686
## 2 High  Neutral    20  1.8  0.696
## 3 Low   Happy      20  6    0.725
## 4 Low   Neutral    20  5.55 1.05
(6.45+1.8+6+5.55)/4
## [1] 4.95
  • b1: -1.65, the overall mean for people high on RS minus the overall mean for people low on RS (the hand calculation below gives -1.66 only because the printed means are rounded; see the check after this list)
rs_means
## # A tibble: 2 × 4
##   rs        n  mean    sd
##   <fct> <int> <dbl> <dbl>
## 1 High     40  4.12 2.45 
## 2 Low      40  5.78 0.920
4.12-5.78
## [1] -1.66
  • b2: 2.55, the overall mean for people who interacted with a happy partner minus the overall mean for people who interacted with a neutral partner (again, 2.54 below reflects rounding of the printed means)
partner_means
## # A tibble: 2 × 4
##   partner     n  mean    sd
##   <fct>   <int> <dbl> <dbl>
## 1 Happy      40  6.22 0.733
## 2 Neutral    40  3.68 2.09
6.22-3.68
## [1] 2.54
  • b3: 4.20, the difference between the effect of rejection sensitivity when the partner displayed happy emotions and its effect when the partner displayed neutral emotions
cell_means
## # A tibble: 4 × 5
## # Groups:   rs [2]
##   rs    partner     n  mean    sd
##   <fct> <fct>   <int> <dbl> <dbl>
## 1 High  Happy      20  6.45 0.686
## 2 High  Neutral    20  1.8  0.696
## 3 Low   Happy      20  6    0.725
## 4 Low   Neutral    20  5.55 1.05
# effect of rejection sensitivity (High - Low) when the partner showed neutral emotions
effect_neutral <- 1.8 - 5.55
effect_neutral
## [1] -3.75
# effect of rejection sensitivity (High - Low) when the partner showed happy emotions
effect_happy <- 6.45 - 6
effect_happy
## [1] 0.45
effect_happy - effect_neutral
## [1] 4.2
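
The small rounding discrepancies above (-1.66 and 2.54) disappear if we work from the cell means directly. Because the cells are balanced (n = 20 each), each marginal mean is a simple average of two cell means, so b1 and b2 can be reproduced exactly:

# Marginal means computed from the cell means
mean_high <- (6.45 + 1.80) / 2   # High RS, averaged over partner conditions
mean_low  <- (6.00 + 5.55) / 2   # Low RS, averaged over partner conditions
mean_high - mean_low             # b1 = -1.65

mean_happy   <- (6.45 + 6.00) / 2
mean_neutral <- (1.80 + 5.55) / 2
mean_happy - mean_neutral        # b2 = 2.55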

Interpreting the Main Effects

Main Effect of Rejection Sensitivity

To examine the direction of the main effect of rejection sensitivity, we can look at the means for each level using the emmeans function:

rs_means <- emmeans(model, ~rs)
## NOTE: Results may be misleading due to involvement in interactions
rs_means
##  rs   emmean    SE df lower.CL upper.CL
##  High   4.12 0.127 76     3.87     4.38
##  Low    5.78 0.127 76     5.52     6.03
## 
## Results are averaged over the levels of: partner 
## Confidence level used: 0.95
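
If you also want the marginal contrast tested from the emmeans object itself, pairs() gives the High - Low difference with its t-test (a sketch; with only two levels, it reproduces the main-effect test from the ANOVA table):

pairs(rs_means)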

We can also graph the main effect using the emmip function:

# Graphing means only
emmip(model, ~rs,
      CIs = TRUE,
      CIarg = list(lwd = 2, alpha = 0.3),
      xlab = "Rejection Sensitivity",
      ylab = "Perceived Liking")
## NOTE: Results may be misleading due to involvement in interactions

To further interpret the main effect of rejection sensitivity, we can look at the effect size using Cohen’s d:

eff_size(rs_means, sigma = sigma(model), edf = df.residual(model))
##  contrast   effect.size    SE df lower.CL upper.CL
##  High - Low       -2.05 0.279 76    -2.61     -1.5
## 
## Results are averaged over the levels of: partner 
## sigma used for effect sizes: 0.8038 
## Confidence level used: 0.95

Main Effect of Partner Emotionality

To examine the direction of the main effect of partner emotionality, we can look at the means for each level using the emmeans function:

partner_means <- emmeans(model, ~partner)
## NOTE: Results may be misleading due to involvement in interactions
partner_means
##  partner emmean    SE df lower.CL upper.CL
##  Happy     6.22 0.127 76     5.97     6.48
##  Neutral   3.67 0.127 76     3.42     3.93
## 
## Results are averaged over the levels of: rs 
## Confidence level used: 0.95

We can also graph the main effect using the emmip function:

# Graphing means only
emmip(model, ~partner,
      CIs = TRUE,
      CIarg = list(lwd = 2, alpha = 0.3),
      xlab = "Rejection Sensitivity",
      ylab = "Perceived Liking") 
## NOTE: Results may be misleading due to involvement in interactions

To further interpret the main effect of partner emotionality, we can look at the effect size using Cohen’s d:

eff_size(partner_means, sigma = sigma(model), edf = df.residual(model))
##  contrast        effect.size    SE df lower.CL upper.CL
##  Happy - Neutral        3.17 0.341 76     2.49     3.85
## 
## Results are averaged over the levels of: rs 
## sigma used for effect sizes: 0.8038 
## Confidence level used: 0.95

Interaction Effect

The easiest way to examine the pattern of an interaction effect is by looking at a table of the cell means, or even better, at a graph of the interaction effect.

First, let’s look at the cell means using the emmeans function:

cell_means <- emmeans(model, ~rs*partner)
cell_means
##  rs   partner emmean   SE df lower.CL upper.CL
##  High Happy     6.45 0.18 76     6.09     6.81
##  Low  Happy     6.00 0.18 76     5.64     6.36
##  High Neutral   1.80 0.18 76     1.44     2.16
##  Low  Neutral   5.55 0.18 76     5.19     5.91
## 
## Confidence level used: 0.95

Second, let’s look at a graph of the interaction effect using the emmip function:

emmip(model, partner ~ rs,
      CIs = TRUE,
      CIarg = list(lwd = 2, alpha = 0.3),
      xlab = "Rejection Sensitivity",
      ylab = "Perceived Liking")

Question: How would you interpret the significant interaction effect?

When the interaction partner displayed neutral emotions, people high on rejection sensitivity perceived that the interaction partner liked them less than people low on rejection sensitivity. When the interaction partner displayed happy emotions, there was little difference in how much people high versus low on rejection sensitivity perceived that their interaction partner liked them.

Simple Effects Analysis

Simple effects are used to examine the effect of IV1 on the DV separately for each level of IV2.

In this case, that means we can look at the effect of rejection sensitivity on perceived liking separately for each level of partner emotionality (or vice versa).

We can get simple effects using the emmeans function together with the contrast function:

emmeans(model, ~ rs*partner) %>% 
  contrast(interaction = "consec", # compares consecutive groups
           simple = "rs", # variable which you would like the simple effect of 
           by = "partner", # variable at whose levels you would like the simple effects of the previously named variable
           adjust = "none") # adjust can be used to choose a post-hoc comparison correction 
## partner = Happy:
##  rs_consec  estimate    SE df t.ratio p.value
##  Low - High    -0.45 0.254 76  -1.770  0.0807
## 
## partner = Neutral:
##  rs_consec  estimate    SE df t.ratio p.value
##  Low - High     3.75 0.254 76  14.754  <.0001
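
If you would like confidence intervals for these simple-effect differences rather than just the t-tests, confint() can be chained onto the same call (a sketch):

emmeans(model, ~ rs*partner) %>% 
  contrast(interaction = "consec",
           simple = "rs",
           by = "partner",
           adjust = "none") %>% 
  confint()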

Interpretation of Simple Effects of Rejection Sensitivity

When interpreting the simple effects, it’s helpful to reference the table of cell means:

cell_means 
##  rs   partner emmean   SE df lower.CL upper.CL
##  High Happy     6.45 0.18 76     6.09     6.81
##  Low  Happy     6.00 0.18 76     5.64     6.36
##  High Neutral   1.80 0.18 76     1.44     2.16
##  Low  Neutral   5.55 0.18 76     5.19     5.91
## 
## Confidence level used: 0.95
  • When the interaction partner displayed happy emotions during the interaction, there was no significant difference in how much people low, versus high, on rejection sensitivity perceived that their interaction partner liked them, \(M_{Diff} = -0.45, t(76) = -1.77, p = .081\).

  • When the interaction partner displayed neutral emotions during the interaction, people low on rejection sensitivity perceived that the partner liked them significantly more than people high on rejection sensitivity, \(M_{Diff} = 3.75, t(76) = 14.75, p < .001\).

Vice versa, we could also look at the effect of partner emotionality on perceived liking separately for each level of rejection sensitivity:

emmeans(model, ~ rs*partner) %>%
  contrast(interaction = "consec",
           simple = "partner",
           by = "rs",
           adjust = "none")
## rs = High:
##  partner_consec  estimate    SE df t.ratio p.value
##  Neutral - Happy    -4.65 0.254 76 -18.294  <.0001
## 
## rs = Low:
##  partner_consec  estimate    SE df t.ratio p.value
##  Neutral - Happy    -0.45 0.254 76  -1.770  0.0807

Interpretation of Simple Effects of Partner Emotionality

Question: How would you interpret the simple effects of partner emotionality at each level of rejection sensitivity?

  • People high on rejection sensitivity perceived that their interaction partner liked them significantly less when the partner displayed neutral emotions compared to when the partner displayed happy emotions, \(M_{Diff} = -4.65, t(76) = -18.29, p < .001\).

  • For people low on rejection sensitivity, there was no significant difference in how much they perceived that their interaction partner liked them when the partner displayed neutral compared to happy emotions, \(M_{Diff} = -0.45, t(76) = -1.77, p = .081\).

Finally, we could ask whether the effect of rejection sensitivity on perceived liking was significantly different depending on whether the interaction partner was showing happy, or neutral, emotion:

emmeans(model, ~ rs*partner) %>%
  contrast(interaction = "consec")
##  rs_consec  partner_consec  estimate    SE df t.ratio p.value
##  Low - High Neutral - Happy      4.2 0.359 76  11.684  <.0001

Interpretation of Interaction Effect

  • The effect of rejection sensitivity on perceived liking when the interaction partner displayed neutral emotions (\(M_{Diff} = 3.75\)) was significantly stronger than when the interaction partner displayed happy emotions during the interaction (\(M_{Diff} = -0.45\)), \(t(76) = 11.68, p < .001\).

(Notice that this estimate of 4.2 is the same quantity that was tested by the interaction effect in the overall model!)
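
As a quick numerical check, for a 1-df effect the F statistic in the ANOVA table is the square of the corresponding t statistic:

sqrt(136.521)  # = 11.68, matching the t for the interaction contrast above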