1 Loading Libraries

#install.packages("afex")
#install.packages("emmeans")
#install.packages("ggbeeswarm")
#install.packages("expss")

library(psych) # for the describe() command
library(ggplot2) # to visualize our results
## 
## Attaching package: 'ggplot2'
## The following objects are masked from 'package:psych':
## 
##     %+%, alpha
library(expss) # for the cross_cases() command
## Loading required package: maditr
## 
## To select columns from data: columns(mtcars, mpg, vs:carb)
## 
## Use 'expss_output_rnotebook()' to display tables inside R Notebooks.
##  To return to the console output, use 'expss_output_default()'.
## 
## Attaching package: 'expss'
## The following object is masked from 'package:ggplot2':
## 
##     vars
library(car) # for the leveneTest() command
## Loading required package: carData
## 
## Attaching package: 'car'
## The following object is masked from 'package:expss':
## 
##     recode
## The following object is masked from 'package:psych':
## 
##     logit
library(afex) # to run the ANOVA 
## Loading required package: lme4
## Loading required package: Matrix
## 
## Attaching package: 'lme4'
## The following object is masked from 'package:expss':
## 
##     dummy
## ************
## Welcome to afex. For support visit: http://afex.singmann.science/
## - Functions for ANOVAs: aov_car(), aov_ez(), and aov_4()
## - Methods for calculating p-values with mixed(): 'S', 'KR', 'LRT', and 'PB'
## - 'afex_aov' and 'mixed' objects can be passed to emmeans() for follow-up tests
## - Get and set global package options with: afex_options()
## - Set sum-to-zero contrasts globally: set_sum_contrasts()
## - For example analyses see: browseVignettes("afex")
## ************
## 
## Attaching package: 'afex'
## The following object is masked from 'package:lme4':
## 
##     lmer
library(ggbeeswarm) # to run plot results
## Warning: package 'ggbeeswarm' was built under R version 4.5.2
library(emmeans) # for posthoc tests
## Welcome to emmeans.
## Caution: You lose important information if you filter this package's results.
## See '? untidy'

2 Importing Data

# For HW, import the project dataset you cleaned previously; this will be the dataset you'll use throughout the rest of the semester

d <- read.csv(file="Data/projectdata.csv", header=T)


# new code! this adds a column with a number for each row. It will make it easier if we need to drop outliers later
d$row_id <- 1:nrow(d)

3 State Your Hypothesis

Note: For your HW, you will choose to run EITHER a one-way ANOVA (a single IV with 3 or more levels) OR a two-way/factorial ANOVA (at least two IVs with 2 or 3 levels each). You will need to specify your hypothesis and customize your code based on the choice you make. We will run BOTH versions of the test in the lab for illustrative purposes.

One-Way: We predict that there will be a significant difference in people’s level of satisfaction with life based on their level of income (low, middle, and high).

4 Check Your Variables

# you only need to check the variables you're using in the current analysis

str(d)
## 'data.frame':    3143 obs. of  8 variables:
##  $ ResponseID: chr  "R_BJN3bQqi1zUMid3" "R_2TGbiBXmAtxywsD" "R_12G7bIqN2wB2N65" "R_39pldNoon8CePfP" ...
##  $ gender    : chr  "f" "m" "m" "f" ...
##  $ income    : chr  "1 low" "1 low" "rather not say" "rather not say" ...
##  $ swb       : num  4.33 4.17 1.83 5.17 3.67 ...
##  $ mindful   : num  2.4 1.8 2.2 2.2 3.2 ...
##  $ efficacy  : num  3.4 3.4 2.2 2.8 3 2.4 2.3 3 3 3.7 ...
##  $ stress    : num  3.3 3.3 4 3.2 3.1 3.5 3.3 2.4 2.9 2.7 ...
##  $ row_id    : int  1 2 3 4 5 6 7 8 9 10 ...
# make our categorical variables of interest "factors"
# we'll also use our newly created row ID variable later, so make sure it's coded as a factor, too
d$income <- as.factor(d$income) 
d$row_id <- as.factor(d$row_id)


# check that all our categorical variables of interest are now factors
str(d)
## 'data.frame':    3143 obs. of  8 variables:
##  $ ResponseID: chr  "R_BJN3bQqi1zUMid3" "R_2TGbiBXmAtxywsD" "R_12G7bIqN2wB2N65" "R_39pldNoon8CePfP" ...
##  $ gender    : chr  "f" "m" "m" "f" ...
##  $ income    : Factor w/ 4 levels "1 low","2 middle",..: 1 1 4 4 2 4 1 1 3 4 ...
##  $ swb       : num  4.33 4.17 1.83 5.17 3.67 ...
##  $ mindful   : num  2.4 1.8 2.2 2.2 3.2 ...
##  $ efficacy  : num  3.4 3.4 2.2 2.8 3 2.4 2.3 3 3 3.7 ...
##  $ stress    : num  3.3 3.3 4 3.2 3.1 3.5 3.3 2.4 2.9 2.7 ...
##  $ row_id    : Factor w/ 3143 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
# check our DV skew and kurtosis
describe(d$swb)
##    vars    n mean   sd median trimmed  mad min max range  skew kurtosis   se
## X1    1 3143 4.47 1.32   4.67    4.53 1.48   1   7     6 -0.36    -0.45 0.02
# we'll use the describeBy() command to view our DV's skew and kurtosis across our IVs' levels
describeBy(d$swb, group = d$income)
## 
##  Descriptive statistics by group 
## group: 1 low
##    vars   n mean   sd median trimmed  mad min max range  skew kurtosis   se
## X1    1 879 4.24 1.32   4.33    4.27 1.48   1   7     6 -0.23    -0.48 0.04
## ------------------------------------------------------------ 
## group: 2 middle
##    vars   n mean   sd median trimmed  mad min max range  skew kurtosis   se
## X1    1 876  4.6 1.32   4.83    4.67 1.24   1   7     6 -0.48    -0.37 0.04
## ------------------------------------------------------------ 
## group: 3 high
##    vars   n mean   sd median trimmed  mad min max range  skew kurtosis   se
## X1    1 535 4.79 1.32      5    4.88 1.24   1   7     6 -0.61    -0.08 0.06
## ------------------------------------------------------------ 
## group: rather not say
##    vars   n mean   sd median trimmed  mad min max range  skew kurtosis   se
## X1    1 853 4.39 1.27    4.5    4.43 1.48   1   7     6 -0.27    -0.53 0.04
# also use histograms to examine your continuous variable
hist(d$swb)

# drop participants who chose "rather not say" for income, then remove the now-empty factor level
d <- subset(d, income != "rather not say")
d$income <- droplevels(d$income)

table(d$income)
## 
##    1 low 2 middle   3 high 
##      879      876      535
# REMEMBER your test's level of POWER is determined by your SMALLEST subsample

5 Check Your Assumptions

5.1 ANOVA Assumptions

  • DV should be normally distributed across levels of the IV (we checked this above using the describeBy() function)
  • All levels of the IVs should have an equal number of cases and there should be no empty cells. Cells with low numbers decrease the power of the test (which increases chance of Type II error)
  • Homogeneity of variance should be confirmed (using Levene’s Test)
  • Outliers should be identified and removed – we will actually remove them this time!
  • If you have confirmed everything above, the sampling distribution should be normal.

5.1.1 Check levels of IVs

# One-Way
table(d$income)
## 
##    1 low 2 middle   3 high 
##      879      876      535
## table() works for a single IV; for a two-way design you would check the crosstab of your two IVs with cross_cases() from the expss package. If cross_cases() doesn't work for you, use xtabs() instead. Fill in the code below and remove the "#" to run.

#xtabs(~ V1 + V2, data=d)
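As an illustration only, a filled-in two-way check might use gender from this dataset as a hypothetical second IV; swap in the IVs from your own hypothesis:

# hypothetical two-way cell-count check -- remove the "#" to run
# d$gender <- as.factor(d$gender)    # the second IV must be a factor, too
# cross_cases(d, gender, income)     # crosstab from the expss package
# xtabs(~ gender + income, data=d)   # base R alternative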

5.1.2 Check homogeneity of variance

# use the leveneTest() command from the car package to test homogeneity of variance
# it uses the 'formula' setup: y~x1 for a one-way design (or y~x1*x2 for a two-way design), where y is our DV and x1/x2 are our IVs

# One-Way
leveneTest(swb~income, data = d)
## Levene's Test for Homogeneity of Variance (center = median)
##         Df F value Pr(>F)
## group    2  0.3327  0.717
##       2287
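Levene’s test is not significant here (p = .72), so we can proceed with the standard ANOVA. If it had been significant, one common fallback (not required for this lab) is Welch’s one-way ANOVA from base R, which does not assume equal variances; a minimal sketch:

# only needed if Levene's test HAD been significant -- Welch's ANOVA does not assume equal variances
# oneway.test(swb ~ income, data = d, var.equal = FALSE)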

5.1.3 Check for outliers using Cook’s distance and Residuals vs. Leverage plot

5.1.3.1 Run a Regression to get both outlier plots

# use the lm() command to run the regression
# formula is y~x1 for a one-way design (or y~x1*x2 for a two-way design), where y is our DV and x1/x2 are our IVs

# One-Way
reg_model <- lm(swb~income, data = d) 

5.1.3.2 Check for outliers (One-Way)

# Cook's distance
plot(reg_model, 4)

# Residuals VS Leverage
plot(reg_model, 5)
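Neither plot flags any influential cases in this dataset, so nothing is removed below. If specific rows had been flagged, this is where the row_id column we created earlier would come in; a hypothetical sketch (the row numbers are placeholders, not real flagged cases):

# hypothetical example only -- replace the row numbers with the cases flagged in YOUR plots
# d <- subset(d, !(row_id %in% c("1501", "2283")))
# d$row_id <- droplevels(d$row_id)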


5.2 Issues with My Data

Our cell sizes are somewhat unbalanced between the income group levels. A smaller sample size for one of the levels of our variable limits our power and increases our Type II error rate.

Levene's test was not significant for our three-level income variable (p = .72), so the homogeneity of variance assumption holds for the One-Way ANOVA.

We did not identify or remove any outliers for the One-Way ANOVA.



6 Run an ANOVA


# One-Way
aov_model <- aov_ez(data = d,
                    id = "ResponseID",
                    between = c("income"),
                    dv = "swb",
                    anova_table = list(es = "pes"))
## Contrasts set to contr.sum for the following variables: income
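For the factorial (two-way) version mentioned in Section 3, you simply list both IVs in the between argument. As a hedged sketch only (gender from this dataset is used as a hypothetical second IV, and the model is not run here):

# hypothetical two-way version -- swap in the IVs from your own hypothesis, then remove the "#" to run
# aov_model2 <- aov_ez(data = d,
#                      id = "ResponseID",
#                      between = c("income", "gender"),
#                      dv = "swb",
#                      anova_table = list(es = "pes"))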

7 View Output

# One-Way
nice(aov_model)
## Anova Table (Type 3 tests)
## 
## Response: swb
##   Effect      df  MSE         F  pes p.value
## 1 income 2, 2287 1.74 33.03 *** .028   <.001
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '+' 0.1 ' ' 1

ANOVA effect size (partial eta-squared) cutoffs from Cohen (1988):

  • ηp² < 0.01 indicates a trivial effect
  • ηp² >= 0.01 indicates a small effect
  • ηp² >= 0.06 indicates a medium effect
  • ηp² >= 0.14 indicates a large effect

8 Visualize Results

# One-Way
afex_plot(aov_model, x = "income")
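The ggbeeswarm package we loaded earlier can also be used to show every participant's score around each group mean. A minimal sketch, not run here (the alpha, size, and color values are just illustrative choices):

# raw swb scores per income group with the group means overlaid -- remove the "#" to run
# ggplot(d, aes(x = income, y = swb)) +
#   geom_quasirandom(alpha = 0.2) +
#   stat_summary(fun = mean, geom = "point", size = 3, color = "red")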

9 Run Posthoc Tests (One-Way)

ONLY run posthoc tests IF the ANOVA test is SIGNIFICANT! E.g., only run the posthoc tests on income if there is a significant main effect of income.

emmeans(aov_model, specs="income", adjust="sidak")
##  income   emmean     SE   df lower.CL upper.CL
##  1 low      4.24 0.0445 2287     4.13     4.34
##  2 middle   4.60 0.0445 2287     4.49     4.70
##  3 high     4.79 0.0570 2287     4.66     4.93
## 
## Confidence level used: 0.95 
## Conf-level adjustment: sidak method for 3 estimates
pairs(emmeans(aov_model, specs="income", adjust="sidak"))
##  contrast          estimate     SE   df t.ratio p.value
##  1 low - 2 middle    -0.359 0.0629 2287  -5.704  <.0001
##  1 low - 3 high      -0.556 0.0723 2287  -7.693  <.0001
##  2 middle - 3 high   -0.197 0.0723 2287  -2.725  0.0178
## 
## P value adjustment: tukey method for comparing a family of 3 estimates
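A small housekeeping note: the two calls above compute the estimated marginal means twice. Saving the emmeans object first avoids the repetition and makes it easy to reuse for tables or plots; an equivalent sketch:

# em_income <- emmeans(aov_model, specs = "income", adjust = "sidak")
# em_income          # estimated marginal means and CIs
# pairs(em_income)   # pairwise comparisons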


10 Write Up Results

10.1 One-Way ANOVA

To test our hypothesis that there will be a significant difference in people’s level of satisfaction with life based on their income level (low, middle, high), we used a one-way ANOVA. Our data were somewhat unbalanced, with fewer people with high income participating in our survey (n = 535) than middle (n = 876) or low (n = 879) income. This reduces the power of our test and increases the chance of a Type II error. We did not need to remove any outliers following visual inspection of the Cook’s distance and Residuals vs. Leverage plots. A non-significant Levene’s test (p = .72) also indicates that our data did not violate the assumption of homogeneity of variance, suggesting there is not an increased chance of Type I error.

We found a significant effect of income level, F(2, 2287) = 33.03, p < .001, ηp² = .028 (a small effect size; Cohen, 1988). Posthoc pairwise comparisons (Tukey-adjusted) revealed that participants with middle income (M = 4.60, SE = 0.04) reported more satisfaction with life than those with low income (M = 4.24, SE = 0.04) but less than those with high income (M = 4.79, SE = 0.06); participants with the highest income reported the highest satisfaction with life overall (see Figure 1 for a comparison).

References

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. New York, NY: Routledge Academic.