1 Loading Libraries

library(psych) # for the describe() command
library(ggplot2) # to visualize our results
## 
## Attaching package: 'ggplot2'
## The following objects are masked from 'package:psych':
## 
##     %+%, alpha
library(expss) # for the cross_cases() command
## Loading required package: maditr
## 
## To aggregate several columns with one summary: take(mtcars, mpg, hp, fun = mean, by = am)
## 
## Use 'expss_output_rnotebook()' to display tables inside R Notebooks.
##  To return to the console output, use 'expss_output_default()'.
## 
## Attaching package: 'expss'
## The following object is masked from 'package:ggplot2':
## 
##     vars
library(car) # for the leveneTest() command
## Loading required package: carData
## 
## Attaching package: 'car'
## The following object is masked from 'package:expss':
## 
##     recode
## The following object is masked from 'package:psych':
## 
##     logit
library(afex) # to run the ANOVA and plot results
## Loading required package: lme4
## Loading required package: Matrix
## 
## Attaching package: 'lme4'
## The following object is masked from 'package:expss':
## 
##     dummy
## ************
## Welcome to afex. For support visit: http://afex.singmann.science/
## - Functions for ANOVAs: aov_car(), aov_ez(), and aov_4()
## - Methods for calculating p-values with mixed(): 'S', 'KR', 'LRT', and 'PB'
## - 'afex_aov' and 'mixed' objects can be passed to emmeans() for follow-up tests
## - Get and set global package options with: afex_options()
## - Set sum-to-zero contrasts globally: set_sum_contrasts()
## - For example analyses see: browseVignettes("afex")
## ************
## 
## Attaching package: 'afex'
## The following object is masked from 'package:lme4':
## 
##     lmer
library(emmeans) # for posthoc tests
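
If the masking messages above clutter your knitted file, you can load a package quietly. This is an optional sketch using base R's suppressPackageStartupMessages(); it changes nothing about how the package works.

# optional: load a package without printing its startup messages
suppressPackageStartupMessages(library(afex))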

2 Importing Data

# import the dataset you cleaned previously
# this will be the dataset you'll use throughout the rest of the semester
# use ARC data
d <- read.csv(file="data/final copy.csv", header=T)

# new code! this adds a column with a number for each row. it makes it easier when we drop outliers later
d$row_id <- 1:nrow(d)
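
The row_id column also makes it easy to pull up any case that a diagnostic plot flags later. A minimal sketch, using row 684 (one of the rows we end up dropping below):

# look at a single case by its row number before deciding whether to drop it
d[d$row_id == 684, ]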

3 State Your Hypothesis

Note: You can choose to run either a one-way ANOVA (a single IV with three or more levels) or a two-way/factorial ANOVA (at least two IVs) for the homework. You will need to specify your hypothesis and customize your code based on the choice you make. I will run both versions of the test here for illustrative purposes.

One-Way: We predict that there will be a significant effect of race on belonging, as measured by the perceived need to belong scale.

4 Check Your Variables

# you only need to check the variables you're using in the current analysis
# although you checked them previously, it's always a good idea to look them over again and be sure that everything is correct
str(d)
## 'data.frame':    3182 obs. of  7 variables:
##  $ efficacy          : num  3.4 3.4 2.2 2.8 3 2.4 2.3 3 3 3.7 ...
##  $ belong            : int  4 4 2 4 4 3 4 4 4 3 ...
##  $ marriageimportance: int  2 3 2 1 2 3 4 3 4 2 ...
##  $ race_rc           : chr  "white" "white" "white" "other" ...
##  $ politicalviews    : num  2.5 2.5 5 8 4.5 8 4 1.5 5.5 6 ...
##  $ income            : int  3 3 1 1 6 1 2 3 7 1 ...
##  $ row_id            : int  1 2 3 4 5 6 7 8 9 10 ...
# make our categorical variables factors
# we'll use our race variable to form the groups for this analysis, so make sure it's coded as a factor
d$race_rc <- as.factor(d$race_rc)
# we're going to recode our race/ethnicity variable into two groups: poc and white
table(d$race_rc)
## 
##      asian      black   hispanic nativeamer      other      white 
##        210        249        286         12         97       2026
d$poc[d$race_rc == "asian"] <- "poc"
d$poc[d$race_rc == "black"] <- "poc"
d$poc[d$race_rc == "nativeamerican"] <- "poc"
d$poc[d$race_rc == "hispanic"] <- "poc"
d$poc[d$race_rc == "other"] <- "poc"
d$poc[d$race_rc == "prefer_not"] <- NA
d$poc[d$race_rc == "white"] <- "white"
table(d$poc)
## 
##   poc white 
##   842  2026
d$poc <- as.factor(d$poc)
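
As an aside, the same two-group recode can be written in one line with ifelse(), which sends every non-white category to "poc" and keeps NAs as NA. This is just an alternative sketch (it assumes race_rc contains no "prefer_not" level, which matches the table above); the line-by-line version is easier to audit.

# one-line alternative to the recode above (commented out so it doesn't run twice)
# d$poc <- as.factor(ifelse(d$race_rc == "white", "white", "poc"))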

# you can use the describe() command on an entire dataframe (d) or just on a single variable
describe(d$belong)
##    vars    n mean sd median trimmed  mad min max range  skew kurtosis   se
## X1    1 3178 3.61  1      4    3.68 1.48   1   5     4 -0.62     0.04 0.02
# we'll use the describeBy() command to view skew and kurtosis across the levels of our IV
describeBy(d$belong, group = d$poc)
## 
##  Descriptive statistics by group 
## group: poc
##    vars   n mean   sd median trimmed  mad min max range  skew kurtosis   se
## X1    1 840 3.55 1.02      4    3.62 1.48   1   5     4 -0.55        0 0.04
## ------------------------------------------------------------ 
## group: white
##    vars    n mean   sd median trimmed  mad min max range  skew kurtosis   se
## X1    1 2025 3.65 0.98      4    3.72 1.48   1   5     4 -0.67     0.12 0.02
# also use histograms to examine your continuous variable
hist(d$belong)
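
Since ggplot2 is already loaded for visualization, you could draw the same histogram with it. A minimal sketch:

# ggplot2 version of the histogram above (na.rm suppresses the missing-values warning)
ggplot(d, aes(x = belong)) +
  geom_histogram(binwidth = 1, na.rm = TRUE)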

5 Check Your Assumptions

5.1 ANOVA Assumptions

  • The DV should be normally distributed across levels of the IV
  • All levels of the IVs should have an equal number of cases, and there should be no empty cells. Cells with low counts decrease the power of the test (increasing the chance of a Type II error)
  • Homogeneity of variance should be confirmed
  • Outliers should be identified and removed
  • If you have confirmed everything above, the sampling distribution should be normal.

5.1.1 Check levels of IVs

table(d$poc)
## 
##   poc white 
##   842  2026
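
We loaded expss above for the cross_cases() command, which matters most for the two-way version of this assignment: it crosses two IVs so you can spot empty or low-count cells before running the ANOVA. A sketch, with income standing in as a hypothetical second IV:

# for a factorial ANOVA, check the cell counts of both IVs at once
# (income is only a placeholder here; substitute your actual second IV)
cross_cases(d, poc, income)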

5.1.2 Check homogeneity of variance

# use the leveneTest() command from the car package to test homogeneity of variance
# uses the 'formula' setup: y~x1 for a one-way test or y~x1*x2 for a factorial test, where y is our DV and x1 and x2 are our IVs
leveneTest(belong~poc, data = d)
## Levene's Test for Homogeneity of Variance (center = median)
##         Df F value   Pr(>F)   
## group    1  7.3122 0.006889 **
##       2863                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

5.1.3 Check for outliers using Cook’s distance and Residuals vs Leverage plot

5.1.3.1 Run a Regression

# use this section only if you need to remove outliers

# to drop outliers, list their row_id values inside %in%:
d <- subset(d, !(row_id %in% c(684, 1799)))

# use the lm() command to run the regression
# formula is y~x1 for a one-way ANOVA or y~x1*x2 for a factorial ANOVA, where y is our DV and x1 and x2 are our IVs
reg_model <- lm(belong ~ poc, data = d) #for one-way

5.1.3.2 Check for outliers (One-Way)

# Cook's distance
plot(reg_model, 4)

# Residuals vs Leverage
plot(reg_model, 5)
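
If the plots are hard to read, you can also pull the largest Cook's distance values directly. A sketch; the 4/n cutoff below is one common rule of thumb, not a hard rule:

# list the rows with the largest Cook's distances
cooks <- cooks.distance(reg_model)
head(sort(cooks, decreasing = TRUE))
# flag cases above a conventional 4/n cutoff
which(cooks > 4 / length(cooks))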

5.2 Issues with My Data

Levene’s test is significant for our race variable. We are ignoring this and continuing with the analysis anyway, but in the real world this is something we would have to correct for.
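
In a real analysis, one common correction for a one-way design with unequal variances is Welch's ANOVA, which does not assume homogeneity. A minimal sketch using base R:

# Welch's one-way ANOVA (does not assume equal variances)
oneway.test(belong ~ poc, data = d, var.equal = FALSE)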

We identified and removed 2 outliers.

6 Run an ANOVA

aov_model <- aov_ez(data = d,
                    id = "row_id",
                    between = c("poc"),
                    dv = "belong",
                    anova_table = list(es = "pes"), na.rm = TRUE)  
## Warning: Missing values for 315 ID(s), which were removed before analysis:
## 9, 10, 71, 76, 79, 85, 87, 92, 95, 99, ... [showing first 10 only]
## Below the first few rows (in wide format) of the removed cases with missing data.
##      row_id  poc .
## # 9       9 <NA> 4
## # 10     10 <NA> 3
## # 71     71 <NA> 3
## # 76     76 <NA> 3
## # 79     79 <NA> 5
## # 85     85 <NA> 3
## Contrasts set to contr.sum for the following variables: poc

7 View Output

Effect size cutoffs from Cohen (1988):

  • η2 = 0.01 indicates a small effect
  • η2 = 0.06 indicates a medium effect
  • η2 = 0.14 indicates a large effect
nice(aov_model)
## Anova Table (Type 3 tests)
## 
## Response: belong
##   Effect      df  MSE      F  pes p.value
## 1    poc 1, 2863 0.99 5.46 * .002    .020
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '+' 0.1 ' ' 1
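
As a quick arithmetic check, partial eta squared can be recovered from the printed F statistic and its degrees of freedom: ηp2 = (F × df1) / (F × df1 + df2). A sketch using the values above:

# partial eta squared from the F statistic and its dfs
F_val <- 5.46; df1 <- 1; df2 <- 2863
F_val * df1 / (F_val * df1 + df2) # ~.002, matching the 'pes' column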

8 Visualize Results

afex_plot(aov_model, x = "poc")
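
afex_plot() returns a regular ggplot2 object, so you can restyle it with the usual ggplot2 layers. An optional sketch:

# afex_plot output can be customized like any ggplot
afex_plot(aov_model, x = "poc") +
  theme_minimal() +
  labs(x = "Group", y = "Belonging")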

9 Run Posthoc Tests (One-Way)

Only run posthocs if the test is significant! E.g., only run the posthoc tests on race if there is a main effect of race. Note that with only two groups (poc and white), the single pairwise comparison is equivalent to the omnibus F test; we run it here to illustrate the procedure.

emmeans(aov_model, specs="poc", adjust="tukey")
## Note: adjust = "tukey" was changed to "sidak"
## because "tukey" is only appropriate for one set of pairwise comparisons
##  poc   emmean     SE   df lower.CL upper.CL
##  poc     3.55 0.0343 2863     3.47     3.63
##  white   3.65 0.0221 2863     3.60     3.70
## 
## Confidence level used: 0.95 
## Conf-level adjustment: sidak method for 2 estimates
pairs(emmeans(aov_model, specs="poc", adjust="tukey"))
##  contrast    estimate     SE   df t.ratio p.value
##  poc - white  -0.0952 0.0408 2863  -2.336  0.0196
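
For the write-up, you may also want a standardized effect size for the pairwise difference. A sketch using emmeans::eff_size(), borrowing the residual SD and degrees of freedom from the regression model fit earlier; this is an approximation, so check that reg_model and aov_model were fit to the same cases:

# Cohen's-d-style effect size for the poc vs. white contrast
eff_size(emmeans(aov_model, specs = "poc"),
         sigma = sigma(reg_model), edf = df.residual(reg_model))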

10 Write Up Results

10.1 One-Way ANOVA

To test our hypothesis that there would be a significant effect of race on belonging, we used a one-way ANOVA. Our data were unbalanced, with far more participants identifying as white (n = 2026) than as poc (people of color; n = 842). This unbalanced distribution reduces the statistical power of our test and increases the chance of a Type II error. We also identified and removed two outliers following visual inspection of Cook’s distance and Residuals vs. Leverage plots.

A significant Levene’s test (p = .007) indicates that our data violate the assumption of homogeneity of variance. This violation suggests an increased chance of Type I error. Despite this violation, we proceeded with our analysis for the purposes of this class.

We found a significant effect of race, F(1, 2863) = 5.46, p = .020, ηp2 = .002, which falls below even the η2 = .01 cutoff for a small effect (Cohen, 1988). Posthoc tests on the estimated marginal means (Sidak-adjusted; emmeans substitutes the Sidak method for Tukey’s HSD when only one set of pairwise comparisons is present) revealed a significant difference in belonging between participants who identified as poc and those who identified as white, p = .020. Participants who identified as white reported slightly higher levels of belonging than participants who identified as poc (see Figure 1 for a comparison).

References

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. New York, NY: Routledge Academic.