library(psych) # for the describe() command
library(ggplot2) # to visualize our results
## Warning: package 'ggplot2' was built under R version 4.3.3
##
## Attaching package: 'ggplot2'
## The following objects are masked from 'package:psych':
##
## %+%, alpha
library(expss) # for the cross_cases() command
## Loading required package: maditr
##
## To aggregate several columns with one summary: take(mtcars, mpg, hp, fun = mean, by = am)
##
## Use 'expss_output_rnotebook()' to display tables inside R Notebooks.
## To return to the console output, use 'expss_output_default()'.
##
## Attaching package: 'expss'
## The following object is masked from 'package:ggplot2':
##
## vars
library(car) # for the leveneTest() command
## Warning: package 'car' was built under R version 4.3.3
## Loading required package: carData
## Warning: package 'carData' was built under R version 4.3.3
##
## Attaching package: 'car'
## The following object is masked from 'package:expss':
##
## recode
## The following object is masked from 'package:psych':
##
## logit
library(afex) # to run the ANOVA and plot results
## Warning: package 'afex' was built under R version 4.3.3
## Loading required package: lme4
## Warning: package 'lme4' was built under R version 4.3.3
## Loading required package: Matrix
##
## Attaching package: 'lme4'
## The following object is masked from 'package:expss':
##
## dummy
## ************
## Welcome to afex. For support visit: http://afex.singmann.science/
## - Functions for ANOVAs: aov_car(), aov_ez(), and aov_4()
## - Methods for calculating p-values with mixed(): 'S', 'KR', 'LRT', and 'PB'
## - 'afex_aov' and 'mixed' objects can be passed to emmeans() for follow-up tests
## - Get and set global package options with: afex_options()
## - Set sum-to-zero contrasts globally: set_sum_contrasts()
## - For example analyses see: browseVignettes("afex")
## ************
##
## Attaching package: 'afex'
## The following object is masked from 'package:lme4':
##
## lmer
library(emmeans) # for posthoc tests
## Warning: package 'emmeans' was built under R version 4.3.3
# import the dataset you cleaned previously
# this will be the dataset you'll use throughout the rest of the semester
# use ARC data
d <- read.csv(file="Data/arc_clean.csv", header=T)
# for the HW, you may or may not need to use the code below this comment
# check to see if you have a variable called 'X' or 'ResponseId' in your data
names(d)
## [1] "X" "gender_rc" "ethnicity_rc" "pss" "phq"
## [6] "rse"
# if you do have 'X' or 'ResponseId' (aka your ID variable) then DELETE THE CODE BELOW
# if you don't have those variables, keep the code and run it
d$row_id <- 1:nrow(d)
Note: You can choose to run either a one-way ANOVA (a single IV with three or more levels) or a two-way/factorial ANOVA (at least two IVs) for the homework. You will need to specify your hypothesis and customize your code based on the choice you make. I will run both versions of the test here for illustrative purposes.
One-Way: We predict that there will be a significant effect of gender on stress, as measured by the perceived stress scale (PSS-4).
Two-Way: We predict that there will be significant effects of gender and race/ethnicity on stress, as measured by the perceived stress scale (PSS-4). We also predict that race/ethnicity and gender will interact, such that women of color will report significantly higher stress than men of color or white men and women.
# you only need to check the variables you're using in the current analysis
# although you checked them previously, it's always a good idea to look them over again and be sure that everything is correct
str(d)
## 'data.frame': 1250 obs. of 7 variables:
## $ X : int 1 20 30 31 33 57 68 81 86 104 ...
## $ gender_rc : chr "f" "m" "f" "f" ...
## $ ethnicity_rc: chr "white" "white" "white" "white" ...
## $ pss : num 3.25 3.75 1 3.25 2 4 3.75 1.25 2.5 2.5 ...
## $ phq : num 1.33 3.33 1 2.33 1.11 ...
## $ rse : num 2.3 1.6 3.9 1.7 3.9 1.8 1.3 3.5 2.6 3 ...
## $ row_id : int 1 2 3 4 5 6 7 8 9 10 ...
# make our categorical variables factors
d$row_id <- as.factor(d$row_id) #we'll actually use our ID variable for this analysis, so make sure it's coded as a factor
d$gender_rc <- as.factor(d$gender_rc)
d$ethnicity_rc <- as.factor(d$ethnicity_rc)
# check your categorical variables
table(d$gender_rc)
##
## f m nb
## 1020 197 33
table(d$ethnicity_rc, useNA = "always")
##
## asian black mideast multiracial other prefer_not
## 139 26 12 65 11 18
## white <NA>
## 979 0
# also use histograms to examine your continuous variable
hist(d$pss)
Assumptions checked below:
If you have confirmed everything else…
# check your categorical variables and make sure they have decent cell sizes
# they should have at least 5 participants in each cell
# but larger numbers are always better
table(d$gender_rc)
##
## f m nb
## 1020 197 33
table(d$ethnicity_rc)
##
## asian black mideast multiracial other prefer_not
## 139 26 12 65 11 18
## white
## 979
cross_cases(d, gender_rc, ethnicity_rc)
| gender_rc | asian | black | mideast | multiracial | other | prefer_not | white |
|---|---|---|---|---|---|---|---|
| f | 110 | 24 | 9 | 54 | 8 | 16 | 799 |
| m | 27 | 2 | 2 | 8 | 3 | 2 | 153 |
| nb | 2 | | 1 | 3 | | | 27 |
| #Total cases | 139 | 26 | 12 | 65 | 11 | 18 | 979 |
# we're going to recode our race/ethnicity variable into two groups: poc and white
# you may or may not need to combine and/or drop groups for the HW
# pay attention to the names and levels of your variables for the HW; when you recode, you have to enter the exact label used for that level of the variable (for ex: an upper-case A would not work for asian)
table(d$ethnicity_rc)
##
## asian black mideast multiracial other prefer_not
## 139 26 12 65 11 18
## white
## 979
d$poc[d$ethnicity_rc == "asian"] <- "poc"
d$poc[d$ethnicity_rc == "black"] <- "poc"
d$poc[d$ethnicity_rc == "mideast"] <- "poc"
d$poc[d$ethnicity_rc == "multiracial"] <- "poc"
d$poc[d$ethnicity_rc == "other"] <- "poc"
d$poc[d$ethnicity_rc == "prefer_not"] <- NA
d$poc[d$ethnicity_rc == "white"] <- "white"
table(d$poc, useNA = "always")
##
## poc white <NA>
## 253 979 18
d$poc <- as.factor(d$poc)
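If you prefer, the same recode can be written more compactly. Here is an equivalent sketch using nested ifelse() calls; it assumes the same level labels shown above and produces the same poc/white/NA coding.
# equivalent compact recode (same result as the step-by-step version above)
d$poc <- factor(ifelse(d$ethnicity_rc == "prefer_not", NA_character_,
                ifelse(d$ethnicity_rc == "white", "white", "poc")))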
# our small number of nb participants is going to hurt us for the two-way ANOVA, but it should be okay for the one-way ANOVA
# so we'll create a new dataframe for the two-way analysis and call it d2
d2 <- subset(d, gender_rc == "m" | gender_rc == "f")
d2$gender_rc <- droplevels(d2$gender_rc)
# to double-check any changes we made
table(d2$gender_rc)
##
## f m
## 1020 197
cross_cases(d2, gender_rc, poc)
| gender_rc | poc | white |
|---|---|---|
| f | 205 | 799 |
| m | 42 | 153 |
| #Total cases | 247 | 952 |
# you can use the describe() command on an entire dataframe (d) or just on a single variable
describe(d$pss)
## vars n mean sd median trimmed mad min max range skew kurtosis se
## X1 1 1250 2.93 0.95 3 2.93 1.11 1 5 4 0.09 -0.74 0.03
# we'll use the describeBy() command to view skew and kurtosis across our IVs
describeBy(d$pss, group = d$gender_rc)
##
## Descriptive statistics by group
## group: f
## vars n mean sd median trimmed mad min max range skew kurtosis se
## X1 1 1020 2.97 0.95 3 2.97 1.11 1 5 4 0.04 -0.73 0.03
## ------------------------------------------------------------
## group: m
## vars n mean sd median trimmed mad min max range skew kurtosis se
## X1 1 197 2.6 0.89 2.5 2.55 0.74 1 5 4 0.5 -0.24 0.06
## ------------------------------------------------------------
## group: nb
## vars n mean sd median trimmed mad min max range skew kurtosis se
## X1 1 33 3.73 0.66 3.75 3.77 0.74 2 5 3 -0.48 -0.18 0.11
describeBy(d2$pss, group = d2$gender_rc)
##
## Descriptive statistics by group
## group: f
## vars n mean sd median trimmed mad min max range skew kurtosis se
## X1 1 1020 2.97 0.95 3 2.97 1.11 1 5 4 0.04 -0.73 0.03
## ------------------------------------------------------------
## group: m
## vars n mean sd median trimmed mad min max range skew kurtosis se
## X1 1 197 2.6 0.89 2.5 2.55 0.74 1 5 4 0.5 -0.24 0.06
describeBy(d2$pss, group = d2$poc)
##
## Descriptive statistics by group
## group: poc
## vars n mean sd median trimmed mad min max range skew kurtosis se
## X1 1 247 3.05 0.96 3 3.05 1.11 1 5 4 0.04 -0.75 0.06
## ------------------------------------------------------------
## group: white
## vars n mean sd median trimmed mad min max range skew kurtosis se
## X1 1 952 2.88 0.94 2.75 2.86 1.11 1 5 4 0.13 -0.72 0.03
# use the leveneTest() command from the car package to test homogeneity of variance
# uses the 'formula' setup: formula is y~x1*x2, where y is our DV and x1 is our first IV and x2 is our second IV
leveneTest(pss~gender_rc, data = d)
## Levene's Test for Homogeneity of Variance (center = median)
## Df F value Pr(>F)
## group 2 5.1218 0.006091 **
## 1247
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
leveneTest(pss~gender_rc*poc, data = d2)
## Levene's Test for Homogeneity of Variance (center = median)
## Df F value Pr(>F)
## group 3 1.2775 0.2807
## 1195
# use the lm() command to run the regression
# formula is y~x1*x2, where y is our DV, x1 is our first IV and x2 is our second IV
reg_model <- lm(pss~gender_rc, data = d) #for one-way
reg_model2 <- lm(pss~gender_rc*poc, data = d2) #for two-way
# Cook's distance
plot(reg_model, 4)
# Residuals vs Leverage
plot(reg_model, 5)
# Cook's distance
plot(reg_model2, 4)
# Residuals vs Leverage
plot(reg_model2, 5)
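If you want to back up the visual check with numbers, you can also list the largest Cook's distance values directly. This is a minimal sketch using base R's cooks.distance(); the 4/n cutoff is just one common rule of thumb, not a requirement for the HW.
# pull the most influential cases for the one-way model
cooks <- cooks.distance(reg_model)
head(sort(cooks, decreasing = TRUE)) # the largest Cook's distances
which(cooks > 4 / nrow(d)) # rule-of-thumb cutoff of 4/n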
Our cell sizes are very unbalanced. A small sample size for one of the levels of our variable limits our power and increases our Type II error rate.
Levene’s test is significant for our three-level gender variable (p = .006). We are ignoring this and continuing with the standard analysis anyway, but in the real world this is something we would have to correct for.
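If we did want to correct for the unequal variances flagged by Levene’s test, one option is Welch’s one-way ANOVA, which does not assume homogeneity of variance. A minimal sketch using base R’s oneway.test(), shown for reference only (it is not part of the aov_ez() workflow used below):
# Welch's ANOVA: compares group means without assuming equal variances
oneway.test(pss ~ gender_rc, data = d, var.equal = FALSE)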
We identified a single outlier.
aov_model <- aov_ez(data = d,
id = "row_id",
between = c("gender_rc"),
dv = "pss",
anova_table = list(es = "pes"))
## Contrasts set to contr.sum for the following variables: gender_rc
aov_model2 <- aov_ez(data = d2,
id = "row_id",
between = c("gender_rc","poc"),
dv = "pss",
anova_table = list(es = "pes"))
## Warning: Missing values for 18 ID(s), which were removed before analysis:
## 67, 68, 153, 159, 184, 319, 376, 470, 677, 828, ... [showing first 10 only]
## Below the first few rows (in wide format) of the removed cases with missing data.
## row_id gender_rc poc .
## # 67 67 f <NA> 2.50
## # 68 68 f <NA> 2.75
## # 151 153 f <NA> 1.25
## # 157 159 f <NA> 1.50
## # 182 184 f <NA> 2.00
## # 315 319 f <NA> 3.00
## Contrasts set to contr.sum for the following variables: gender_rc, poc
Effect size cutoffs from Cohen (1988): ηp² ≈ .01 is a small effect, ≈ .06 is a medium effect, and ≈ .14 is a large effect.
nice(aov_model)
## Anova Table (Type 3 tests)
##
## Response: pss
## Effect df MSE F pes p.value
## 1 gender_rc 2, 1247 0.87 26.20 *** .040 <.001
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '+' 0.1 ' ' 1
nice(aov_model2)
## Anova Table (Type 3 tests)
##
## Response: pss
## Effect df MSE F pes p.value
## 1 gender_rc 1, 1195 0.87 28.31 *** .023 <.001
## 2 poc 1, 1195 0.87 0.56 <.001 .453
## 3 gender_rc:poc 1, 1195 0.87 3.43 + .003 .064
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '+' 0.1 ' ' 1
afex_plot(aov_model, x = "gender_rc")
afex_plot(aov_model2, x = "gender_rc", trace = "poc")
afex_plot(aov_model2, x = "poc", trace = "gender_rc")
Only run posthocs if the test is significant! E.g., only run the posthoc tests on gender if there is a main effect for gender.
emmeans(aov_model, specs="gender_rc", adjust="tukey")
## Note: adjust = "tukey" was changed to "sidak"
## because "tukey" is only appropriate for one set of pairwise comparisons
## gender_rc emmean SE df lower.CL upper.CL
## f 2.97 0.0292 1247 2.90 3.04
## m 2.60 0.0663 1247 2.44 2.75
## nb 3.73 0.1621 1247 3.35 4.12
##
## Confidence level used: 0.95
## Conf-level adjustment: sidak method for 3 estimates
pairs(emmeans(aov_model, specs="gender_rc", adjust="tukey"))
## contrast estimate SE df t.ratio p.value
## f - m 0.379 0.0725 1247 5.231 <.0001
## f - nb -0.761 0.1647 1247 -4.618 <.0001
## m - nb -1.140 0.1751 1247 -6.507 <.0001
##
## P value adjustment: tukey method for comparing a family of 3 estimates
The same rule applies for the two-way ANOVA: only run posthocs on effects that came out significant. E.g., only run the posthoc tests on gender if there is a main effect for gender.
emmeans(aov_model2, specs="gender_rc", adjust="tukey")
## NOTE: Results may be misleading due to involvement in interactions
## Note: adjust = "tukey" was changed to "sidak"
## because "tukey" is only appropriate for one set of pairwise comparisons
## gender_rc emmean SE df lower.CL upper.CL
## f 3.04 0.0365 1195 2.96 3.12
## m 2.57 0.0813 1195 2.38 2.75
##
## Results are averaged over the levels of: poc
## Confidence level used: 0.95
## Conf-level adjustment: sidak method for 2 estimates
pairs(emmeans(aov_model2, specs="gender_rc", adjust="tukey"))
## NOTE: Results may be misleading due to involvement in interactions
## contrast estimate SE df t.ratio p.value
## f - m 0.474 0.0892 1195 5.320 <.0001
##
## Results are averaged over the levels of: poc
emmeans(aov_model2, specs="poc", adjust="tukey")
## NOTE: Results may be misleading due to involvement in interactions
## Note: adjust = "tukey" was changed to "sidak"
## because "tukey" is only appropriate for one set of pairwise comparisons
## poc emmean SE df lower.CL upper.CL
## poc 2.84 0.0791 1195 2.66 3.01
## white 2.77 0.0412 1195 2.68 2.86
##
## Results are averaged over the levels of: gender_rc
## Confidence level used: 0.95
## Conf-level adjustment: sidak method for 2 estimates
pairs(emmeans(aov_model2, specs="poc", adjust="tukey"))
## NOTE: Results may be misleading due to involvement in interactions
## contrast estimate SE df t.ratio p.value
## poc - white 0.067 0.0892 1195 0.751 0.4527
##
## Results are averaged over the levels of: gender_rc
emmeans(aov_model2, specs="gender_rc", by="poc", adjust="sidak")
## poc = poc:
## gender_rc emmean SE df lower.CL upper.CL
## f 3.16 0.0652 1195 3.01 3.30
## m 2.52 0.1441 1195 2.20 2.84
##
## poc = white:
## gender_rc emmean SE df lower.CL upper.CL
## f 2.93 0.0330 1195 2.85 3.00
## m 2.62 0.0755 1195 2.45 2.79
##
## Confidence level used: 0.95
## Conf-level adjustment: sidak method for 2 estimates
pairs(emmeans(aov_model2, specs="gender_rc", by="poc", adjust="sidak"))
## poc = poc:
## contrast estimate SE df t.ratio p.value
## f - m 0.639 0.1581 1195 4.044 0.0001
##
## poc = white:
## contrast estimate SE df t.ratio p.value
## f - m 0.309 0.0824 1195 3.753 0.0002
emmeans(aov_model2, specs="poc", by="gender_rc", adjust="sidak")
## gender_rc = f:
## poc emmean SE df lower.CL upper.CL
## poc 3.16 0.0652 1195 3.01 3.30
## white 2.93 0.0330 1195 2.85 3.00
##
## gender_rc = m:
## poc emmean SE df lower.CL upper.CL
## poc 2.52 0.1441 1195 2.20 2.84
## white 2.62 0.0755 1195 2.45 2.79
##
## Confidence level used: 0.95
## Conf-level adjustment: sidak method for 2 estimates
pairs(emmeans(aov_model2, specs="poc", by="gender_rc", adjust="sidak"))
## gender_rc = f:
## contrast estimate SE df t.ratio p.value
## poc - white 0.2321 0.0731 1195 3.175 0.0015
##
## gender_rc = m:
## contrast estimate SE df t.ratio p.value
## poc - white -0.0982 0.1626 1195 -0.604 0.5463
To test our hypothesis that there would be a significant effect of gender on stress, we used a one-way ANOVA. Our data was unbalanced, with many more women participating in our survey (n = 1020) than men (n = 197) or non-binary and other gender participants (n = 33). This reduces the power of our test and increases the chance of a Type II error. We did not identify any outliers following visual analysis of a Residuals vs Leverage plot. A significant Levene’s test (p = .006) also indicates that our data violates the assumption of homogeneity of variance, which increases the chance of a Type I error. We continued with our analysis for the purpose of this class.
We found a significant effect of gender, F(2, 1247) = 26.20, p < .001, ηp² = .040 (a small-to-medium effect size; Cohen, 1988). Posthoc tests using Tukey’s HSD revealed that all three groups differed significantly from one another: non-binary and other gender participants reported the most stress, followed by women, and then men (see Figure 1 for a comparison).
To test our hypothesis that gender and race would affect stress and would interact significantly, we used a two-way/factorial ANOVA. Non-binary and other gender participants and participants who preferred not to report their ethnicity were excluded from this analysis. Our data met most of the assumptions of the test, although it was unbalanced, with many more women (n = 1004) than men (n = 195) among the cases analyzed. We identified a single outlier following visual analysis of a Residuals vs Leverage plot.
As predicted, we found a significant main effect of gender, F(1, 1195) = 28.31, p < .001, ηp² = .023 (a small effect size; Cohen, 1988), with women reporting significantly more stress than men. Contrary to our expectations, we did not find a significant main effect of race (p = .453).
Lastly, the interaction between gender and race did not reach conventional significance, F(1, 1195) = 3.43, p = .064, ηp² = .003 (a trivial effect size; Cohen, 1988; see Figure 2). Exploring the simple effects, women of color (M = 3.16, SE = .07) reported significantly more stress than men of color (M = 2.52, SE = .14; p < .001), and white women (M = 2.93, SE = .03) reported significantly more stress than white men (M = 2.62, SE = .08; p < .001). Comparing within gender, women of color reported significantly more stress than white women (p = .002), while men of color and white men reported similar levels of stress (p = .546).
References
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. New York, NY: Routledge Academic.