1 Loading Libraries

library(expss) # for the cross_cases() command
## Loading required package: maditr
## 
## Use magrittr pipe '%>%' to chain several operations:
##              mtcars %>%
##                  let(mpg_hp = mpg/hp) %>%
##                  take(mean(mpg_hp), by = am)
## 
library(psych) # for the describe() command
library(car) # for the leveneTest() command
## Loading required package: carData
## 
## Attaching package: 'car'
## The following object is masked from 'package:psych':
## 
##     logit
## The following object is masked from 'package:expss':
## 
##     recode
library(effsize) # for the cohen.d() command
## 
## Attaching package: 'effsize'
## The following object is masked from 'package:psych':
## 
##     cohen.d

2 Importing Data

# import the dataset you cleaned previously
# this will be the dataset you'll use throughout the rest of the semester
d <- read.csv(file="Data/arc_test.csv", header=T)

3 Chi Square: State Your Hypothesis

There will be no statistically significant difference in gender across the two participant groups (parents vs. young people); in other words, adult and adolescent participants will not differ in the distribution of their reported gender.

4 Chi Square: Check Your Variables

# you only need to check the variables you're using in the current analysis
# although you checked them previously, it's always a good idea to look them over again and be sure that everything is correct
str(d)
## 'data.frame':    2073 obs. of  7 variables:
##  $ X              : int  1 20 30 31 32 33 48 49 57 58 ...
##  $ group          : chr  "parent" "young person" "young person" "parent" ...
##  $ gender         : chr  "female" "male" "female" "female" ...
##  $ phq            : num  1.33 3.33 1 2.33 NA ...
##  $ support_parents: num  NA NA NA NA NA NA NA NA NA NA ...
##  $ pas_covid      : num  3.22 4.56 3.33 4.22 NA ...
##  $ big_5          : num  4.67 5.33 5.33 5.33 NA ...
# we can see in the str() command that our categorical variables are being read as character or string variables
# to correct this, we'll use the as.factor() command
d$group <- as.factor(d$group)
d$gender <- as.factor(d$gender)

table(d$group, useNA = "always")
## 
##       parent young person         <NA> 
##          606         1467            0
table(d$gender, useNA = "always")
## 
##             female I use another term               male  Prefer not to say 
##               1472                 47                311                 55 
##               <NA> 
##                188
cross_cases(d, group , gender)
                  gender
 group            female   I use another term   male   Prefer not to say
   parent            464                    1     70                  14
   young person     1008                   46    241                  41
   #Total cases     1472                   47    311                  55

5 Chi Square: Check Your Assumptions

5.1 Chi-square Test Assumptions

  • Data should be frequencies or counts
  • Variables and levels should be independent
  • There are two variables
  • At least 5 or more participants per cell
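
As a quick check on the last assumption, the smallest cell count in the group-by-gender table can be pulled out directly. This is a small sketch added for illustration (output not shown); it uses only the variables already checked above.

# quick check of the "at least 5 per cell" assumption:
# the smallest cell count in the group x gender table should be 5 or more
min(table(d$group, d$gender))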

5.2 Issues with My Data

While my data meet the first three assumptions, I don't have at least 5 participants in every cell: only one parent selected "I use another term," and the "Prefer not to say" cells are also fairly small.

To proceed with this analysis, I will combine the "Prefer not to say" and "I use another term" responses into a single "Prefer not to say" category. Merging participants is always a difficult choice, and it has the potential to further marginalize already minoritized groups, but it is a necessary compromise for my analysis. I will make a note to discuss this issue in my Method write-up and in my Discussion as a limitation of my study.

table(d$gender, useNA = "always")
## 
##             female I use another term               male  Prefer not to say 
##               1472                 47                311                 55 
##               <NA> 
##                188
d$gender[d$gender == "I use another term"] <- "Prefer not to say"

table(d$gender, useNA = "always")
## 
##             female I use another term               male  Prefer not to say 
##               1472                  0                311                102 
##               <NA> 
##                188
# since I made changes to my variables, I am going to re-run the cross_cases() command
cross_cases(d, group, gender)
                  gender
 group            female   I use another term   male   Prefer not to say
   parent            464                          70                  15
   young person     1008                         241                  87
   #Total cases     1472                         311                 102

6 Chi Square: Run a Chi-square Test

# we use the chisq.test() command to run our chi-square test
# the only arguments we need to specify are the variables we're using for the chi-square test
# we are saving the output from our chi-square test to the chi_output object so we can view it again later
chi_output <- chisq.test(d$group, d$gender)
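
Before interpreting the test, it can also be worth inspecting the expected counts the chi-square statistic is based on, since the "5 per cell" rule of thumb is usually stated in terms of expected rather than observed frequencies. This is an optional sketch (output not shown); the expected frequencies are stored in the saved output object.

# the expected frequencies under the null hypothesis are saved in the chi_output object
# all of them should be 5 or greater
chi_output$expected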

7 Chi Square: View Test Output

# to view the results of our chi-square test, we just have to call up the output we saved
chi_output
## 
##  Pearson's Chi-squared test
## 
## data:  d$group and d$gender
## X-squared = 20.967, df = 2, p-value = 2.8e-05

8 View Standardized Residuals

# to view the standardized residuals, we use the $ operator to access the stdres element of the chi_output file that we created
chi_output$stdres
##               d$gender
## d$group           female      male Prefer not to say
##   parent        4.324514 -2.810556         -3.295572
##   young person -4.324514  2.810556          3.295572
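
A common rule of thumb is that standardized residuals beyond roughly +/-1.96 flag cells that depart from what independence would predict. The sketch below is added for illustration (output not shown); it simply applies that cutoff to the residuals above.

# flag cells whose standardized residuals exceed the conventional +/-1.96 cutoff
abs(chi_output$stdres) > 1.96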

9 Chi Square: Write Up Results

                        group
 gender                 parent   young person
   female                  464           1008
   I use another term
   male                     70            241
   Prefer not to say        15             87

Contrary to our prediction, we found a significant difference in the gender distribution across the two participant groups, X^2(2, N = 1885) = 20.97, p < .001. The standardized residuals indicated that the parent group included proportionally more women, and proportionally fewer men and participants in the combined "Prefer not to say" category, than the young person group.
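
Note that the N reported for the chi-square test is the number of participants actually included in the test (those with non-missing values on both variables), not the full sample size. If you want to confirm that number, a small sketch like the one below works because the observed cell counts are stored in the saved output.

# N for the chi-square test = sum of the observed cell counts
# (participants with non-missing group and gender)
sum(chi_output$observed)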

10 T-test: State Your Hypothesis

We predict that women will report significantly higher levels of neuroticism than men, as measured by the neuroticism items of the Big Five scale (big_5).

11 T-test: Check Your Variables

# you only need to check the variables you're using in the current analysis
# although you checked them previously, it's always a good idea to look them over again and be sure that everything is correct
str(d)
## 'data.frame':    2073 obs. of  7 variables:
##  $ X              : int  1 20 30 31 32 33 48 49 57 58 ...
##  $ group          : Factor w/ 2 levels "parent","young person": 1 2 2 1 2 1 1 1 1 2 ...
##  $ gender         : Factor w/ 4 levels "female","I use another term",..: 1 3 1 1 NA 1 1 1 1 1 ...
##  $ phq            : num  1.33 3.33 1 2.33 NA ...
##  $ support_parents: num  NA NA NA NA NA NA NA NA NA NA ...
##  $ pas_covid      : num  3.22 4.56 3.33 4.22 NA ...
##  $ big_5          : num  4.67 5.33 5.33 5.33 NA ...
d$gender <- as.factor(d$gender)

table(d$gender, useNA = "always")
## 
##             female I use another term               male  Prefer not to say 
##               1472                  0                311                102 
##               <NA> 
##                188
# you can use the describe() command on an entire dataframe (d) or just on a single variable (d$big_5)
describe(d$big_5)
##    vars    n mean   sd median trimmed  mad min max range  skew kurtosis   se
## X1    1 1749 4.59 1.04   4.67    4.64 0.99   1   7     6 -0.45    -0.08 0.02
# also use a histogram to examine your continuous variable
hist(d$big_5)

# can use the describeBy() command to view the means and standard deviations by group
# it's very similar to the describe() command but splits the dataframe according to the 'group' variable
describeBy(d$big_5, group=d$gender)
## 
##  Descriptive statistics by group 
## group: female
##    vars    n mean   sd median trimmed  mad min max range  skew kurtosis   se
## X1    1 1382 4.65 1.01   4.67     4.7 0.99   1   7     6 -0.53      0.1 0.03
## ------------------------------------------------------------ 
## group: I use another term
## NULL
## ------------------------------------------------------------ 
## group: male
##    vars   n mean   sd median trimmed  mad min max range skew kurtosis   se
## X1    1 291 4.23 1.15   4.33    4.21 1.48   1   7     6 0.02    -0.44 0.07
## ------------------------------------------------------------ 
## group: Prefer not to say
##    vars  n mean   sd median trimmed  mad  min  max range  skew kurtosis  se
## X1    1 76 4.89 0.85      5    4.92 0.74 2.67 6.67     4 -0.44    -0.29 0.1
# last, use a boxplot to examine your continuous and categorical variables together
boxplot(d$big_5~d$gender)

12 T-test: Check Your Assumptions

12.1 T-test Assumptions

  • IV must have two levels (see the quick check after this list)
  • Data values must be independent (independent t-test only)
  • Data obtained via a random sample
  • Dependent variable must be normally distributed
  • Variances of the two groups are approximately equal
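
As a quick check of the first assumption, the sketch below (added for illustration; output not shown) counts and lists the levels of the gender variable. This is what motivates the adjustment described in "Issues with My Data" below.

# a t-test requires an IV with exactly two levels
nlevels(d$gender)
levels(d$gender)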

12.2 Testing Homogeneity of Variance with Levene’s Test

We can test whether the variances of our two groups are equal using Levene’s test. The null hypothesis is that the variance between the two groups is equal, which is the result we want. So when running Levene’s test we’re hoping for a non-significant result!

# note: "nb" is not a level of the gender variable in this dataset, so this subset only drops rows where gender is missing (NA); the "Prefer not to say" participants are still included at this point
d <- subset(d, gender != "nb")
table(d$gender, useNA = "always")
## 
##             female I use another term               male  Prefer not to say 
##               1472                  0                311                102 
##               <NA> 
##                  0
d$gender <- droplevels(d$gender)
# use the leveneTest() command from the car package to test homogeneity of variance
# uses the same 'formula' setup that we'll use for our t-test: formula is y~x, where y is our DV and x is our IV
leveneTest(big_5~gender, data = d)
## Levene's Test for Homogeneity of Variance (center = median)
##         Df F value    Pr(>F)    
## group    2  9.4996 7.882e-05 ***
##       1746                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

As you can see, Levene's test is significant, which tells us the variances of our groups are not equal (heterogeneity of variance). When running a t-test, we can account for heterogeneity of variance by using Welch's t-test, which does not make the same assumption about equal variances that Student's t-test (the classic version of the t-test) does. R defaults to using Welch's t-test, so this doesn't require any changes on our part! Even if your data has no issues with homogeneity of variance, you'll still use Welch's t-test – it handles the potential issues around variance well and there are no real downsides. We're just using Levene's test here to get into the habit of checking the homogeneity of our variances, even though we already have a solution for any potential problems.

12.3 T-test: Issues with My Data

My independent variable has more than two levels. To proceed with this analysis, I will restrict the comparison to participants who identified as female or male, dropping the "Prefer not to say" participants from my sample. I will make a note to discuss this issue in my Method write-up and in my Discussion as a limitation of my study.
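
Since the t-test itself is not shown in the output above, here is a minimal sketch of how Welch's t-test and Cohen's d could be run on this comparison, assuming the sample is restricted to the female and male categories as described above. The names d_mf, t_output, and d_output are introduced only for illustration (output not shown).

# keep only the two levels named in the hypothesis (female vs. male) and drop rows with missing big_5 scores
d_mf <- subset(d, gender %in% c("female", "male") & !is.na(big_5))
d_mf$gender <- droplevels(d_mf$gender)

# Welch's t-test is R's default (var.equal = FALSE); the formula is DV ~ IV, as with leveneTest()
t_output <- t.test(big_5 ~ gender, data = d_mf)
t_output

# effect size with the cohen.d() command from the effsize package
d_output <- cohen.d(big_5 ~ gender, data = d_mf)
d_output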


13 T-test: Write Up Results

To test our hypothesis that women in our sample would report significantly higher neuroticism than men, we used an independent-samples t-test. We tested the homogeneity of variance with Levene's test and found evidence of heterogeneity of variance (p < .001). This suggests an increased chance of Type I error. To correct for this possible issue, we used Welch's t-test, which does not assume homogeneity of variance. Our data met all other assumptions of a t-test.

As predicted, we found that women (M = 4.65) reported higher levels of neuroticism than men (M = 4.23); t(288.7) = 5.43, p < .001 (see Figure 1). The effect size, calculated using Cohen's d, was 0.40 (a small-to-medium effect; Cohen, 1988).


# another way of running code inline is to insert a code chunk and then add 'echo=FALSE' to the {r} code at the beginning of the chunk
# it will force R to display the output without showing the code

# boxplot code is the same as previous, with a new argument ("names") that lets us customize the display of the levels of our IV
# note: the "names" labels assume gender contains only the two levels being compared (female and male)
boxplot(d$big_5~d$gender, xlab = "Gender", names= c("Women","Men"), ylab = "Personality Symptoms", main = "Figure 1: Gender differences in neurotic personality symptoms")

References

Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. New York, NY: Routledge Academic.