1 Loading Libraries

# install any packages you have not previously used, then comment them back out.

#install.packages("car")
#install.packages("effsize")

library(psych) # for the describe() command
library(car) # for the leveneTest() command
## Loading required package: carData
## 
## Attaching package: 'car'
## The following object is masked from 'package:psych':
## 
##     logit
library(effsize) # for the cohen.d() command
## 
## Attaching package: 'effsize'
## The following object is masked from 'package:psych':
## 
##     cohen.d
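
The "masked" messages above are not errors: psych and effsize both define a function called cohen.d(), and an unqualified call to cohen.d() will use the version from the package loaded last (here, effsize). If you ever need a specific version, the :: operator makes the choice explicit. A minimal sketch (left commented out, since the data are not loaded until the next section):

# effsize::cohen.d(d$efficacy~d$gender) # the version used in Section 8 of this lab
# psych::cohen.d(...)                   # psych's version (different arguments), if you ever wanted it instead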

2 Importing Data

d <- read.csv(file="data/projectdata.csv", header=T)

3 State Your Hypothesis

We predict that women will report significantly higher levels of general self-efficacy than men, as measured by the General Self-Efficacy Scale.

[Remember to revise the above hypothesis in your HW assignment.]

4 Check Your Variables

# you only need to check the variables you're using in the current analysis

## Checking the Categorical variable (IV)

str(d)
## 'data.frame':    3155 obs. of  7 variables:
##  $ ResponseId: chr  "R_BJN3bQqi1zUMid3" "R_2TGbiBXmAtxywsD" "R_12G7bIqN2wB2N65" "R_39pldNoon8CePfP" ...
##  $ gender    : chr  "f" "m" "m" "f" ...
##  $ sibling   : chr  "at least one sibling" "at least one sibling" "at least one sibling" "at least one sibling" ...
##  $ idea      : num  3.75 3.88 3.75 3.75 3.5 ...
##  $ belong    : num  2.8 4.2 3.6 4 3.4 4.2 3.9 3.6 2.9 2.5 ...
##  $ efficacy  : num  3.4 3.4 2.2 2.8 3 2.4 2.3 3 3 3.7 ...
##  $ npi       : num  0.6923 0.1538 0.0769 0.0769 0.7692 ...
# if the categorical variable you're using is showing as a "chr" (character), you must change it to be a factor -- using the next line of code (as.factor)

d$gender <- as.factor(d$gender)

table(d$gender, useNA = "always")
## 
##    f    m   nb <NA> 
## 2315  787   53    0
## Checking the Continuous variable (DV)

# you can use the describe() command on an entire dataframe (d) or just on a single variable within your dataframe -- which we will do here

describe(d$efficacy)
##    vars    n mean   sd median trimmed  mad min max range  skew kurtosis   se
## X1    1 3155 3.13 0.45    3.1    3.13 0.44 1.1   4   2.9 -0.26     0.51 0.01
# also use a histogram to visualize your continuous variable

hist(d$efficacy)
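
If you want a more polished figure, hist() accepts optional labels; a quick sketch (the title, axis label, and breaks values below are just examples, not requirements):

hist(d$efficacy,
     main = "Distribution of General Self-Efficacy", # example title
     xlab = "Efficacy score",                        # example axis label
     breaks = 20)                                    # example number of bins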

# use the describeBy() command to view the means and standard deviations by group
# it's very similar to the describe() command but splits the dataframe according to the 'group' variable

describeBy(d$efficacy, group=d$gender)
## 
##  Descriptive statistics by group 
## group: f
##    vars    n mean   sd median trimmed  mad min max range  skew kurtosis   se
## X1    1 2315  3.1 0.45    3.1    3.11 0.44 1.2   4   2.8 -0.26     0.61 0.01
## ------------------------------------------------------------ 
## group: m
##    vars   n mean   sd median trimmed  mad min max range  skew kurtosis   se
## X1    1 787 3.21 0.44    3.2    3.21 0.44 1.5   4   2.5 -0.17     -0.1 0.02
## ------------------------------------------------------------ 
## group: nb
##    vars  n mean   sd median trimmed  mad min max range  skew kurtosis   se
## X1    1 53    3 0.54      3    3.03 0.44 1.1   4   2.9 -0.64     1.24 0.07
# last, use a boxplot to examine your continuous and categorical variables together

boxplot(d$efficacy~d$gender)
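
The boxplot can be labeled the same way; a sketch with example labels:

boxplot(d$efficacy~d$gender,
        xlab = "Gender",                  # example axis label
        ylab = "Efficacy score",          # example axis label
        main = "Self-Efficacy by Gender") # example title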

5 Check Your Assumptions

5.1 T-test Assumptions

  • IV must have two levels
  • Data values must be independent (independent t-test only)
  • Data obtained via a random sample
  • Dependent variable must be normally distributed (an optional visual check is sketched just after this list)
  • Variances of the two groups are approximately equal
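
The normality assumption is usually judged from the histogram and the skew and kurtosis values in Section 4. If you want one more visual check, a normal Q-Q plot is a common option; a sketch using base R (not required for the HW):

qqnorm(d$efficacy, main = "Normal Q-Q Plot of Efficacy") # example title
qqline(d$efficacy) # points falling close to this line suggest approximate normality
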
# If the IV has more than 2 levels, you must drop the additional levels so that you meet the first assumption of a t-test.

d <- subset(d, gender != "nb")

table(d$gender, useNA = "always") #verify that now there are no participants in the removed level
## 
##    f    m   nb <NA> 
## 2315  787    0    0
d$gender <- droplevels(d$gender) # use droplevels() to drop the empty factor level

table(d$gender, useNA = "always") #verify that now the entire factor level is removed 
## 
##    f    m <NA> 
## 2315  787    0

5.2 Testing Homogeneity of Variance with Levene’s Test

We can test whether the variances of our two groups are equal using Levene’s test. The null hypothesis is that the variances of the two groups are equal, which is the result we want. So when running Levene’s test we’re hoping for a non-significant result!

# use the leveneTest() command from the car package to test homogeneity of variance
# it uses the same 'formula' setup that we'll use for our t-test: formula is y~x, where y is our DV and x is our IV

leveneTest(efficacy~gender, data = d)
## Levene's Test for Homogeneity of Variance (center = median)
##         Df F value Pr(>F)
## group    1  0.6437 0.4224
##       3100
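
For your write-up you can also pull the p-value out of the Levene output instead of reading it off the printed table. A sketch (the object name lt is arbitrary; leveneTest() returns a data frame, so we can index its first row):

lt <- leveneTest(efficacy~gender, data = d)
lt[1, "Pr(>F)"] # p-value for the group row; about .42 here, i.e., non-significant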

As you can see, Levene’s test is non-significant (p = .42): the variances of the two comparison groups do not differ significantly.

[Revise the above statement for your HW assignment.]

When running a t-test, we can account for heterogeneity in our variance by using Welch’s t-test, which does not have the same assumption about variance as Student’s t-test (the general default type of t-test). R defaults to using Welch’s t-test so this doesn’t require any changes on our part! Even if your data has no issues with homogeneity of variance, you’ll still use Welch’s t-test – it handles the potential issues around variance well and there are no real downsides. We’re just using Levene’s test here to get into the habit of checking the homogeneity of our variance, even if we already have the solution for any potential problems.
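
For reference, the choice between the two tests is controlled by the var.equal argument of t.test(); a sketch (you do not need to change anything for the HW):

# t.test(d$efficacy~d$gender)                   # default: Welch's t-test (var.equal = FALSE)
# t.test(d$efficacy~d$gender, var.equal = TRUE) # Student's t-test, only appropriate when variances are equal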

5.3 Issues with My Data

My independent variable has more than two levels. To proceed with this analysis, I will drop the non-binary participants from my sample. I will make a note to discuss this issue in my Method write-up and in my Discussion as a limitation of my study.

My data did not show an issue with homogeneity of variance, as Levene’s test was non-significant. I will still use Welch’s t-test instead of Student’s t-test in my analysis, since it handles any heterogeneity of variance well.

[Revise the above statements for your HW assignment.]

6 Run a T-test

# very simple! we use the same formula of y~x, where y is our DV and x is our IV

t_output <- t.test(d$efficacy~d$gender)

7 View Test Output

t_output
## 
##  Welch Two Sample t-test
## 
## data:  d$efficacy by d$gender
## t = -5.6963, df = 1372.2, p-value = 1.496e-08
## alternative hypothesis: true difference in means between group f and group m is not equal to 0
## 95 percent confidence interval:
##  -0.13939239 -0.06797792
## sample estimates:
## mean in group f mean in group m 
##        3.102160        3.205845
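
If you would rather pull the numbers for your write-up out of the saved object than copy them from the printout, the pieces of a t.test() result are stored as named elements; a sketch:

t_output$statistic # t value
t_output$parameter # Welch-adjusted degrees of freedom
t_output$p.value   # p-value
t_output$conf.int  # 95% confidence interval for the difference in means
t_output$estimate  # group means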

8 Calculate Cohen’s d

# once again, we use the same formula, y~x, to calculate cohen's d
d_output <- cohen.d(d$efficacy~d$gender)

9 View Effect Size

d_output
## 
## Cohen's d
## 
## d estimate: -0.233655 (small)
## 95 percent confidence interval:
##      lower      upper 
## -0.3147689 -0.1525411
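
The effect size object can be inspected the same way; a sketch, assuming the standard element names of the effsize output (estimate, conf.int, magnitude):

d_output$estimate  # Cohen's d
d_output$conf.int  # 95% confidence interval
d_output$magnitude # qualitative label (e.g., "small")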

10 Write Up Results

To test our hypothesis that women in our sample would report significantly higher levels of general self-efficacy than men, we used an independent samples t-test. This required us to drop our non-binary gender participants from our sample, as we are limited to a two-group comparison when using this test. We tested the homogeneity of variance with Levene’s test and did not find signs of heterogeneity (p = .42). We then used Welch’s t-test, which does not assume homogeneity of variance. Our data met all other assumptions of a t-test.

Contrary to our prediction, we found that men (M = 3.21, SD = 0.44) reported significantly higher levels of general self-efficacy than women (M = 3.10, SD = 0.45); t(1372.2) = -5.70, p < .001 (see Figure 1). The effect size was calculated using Cohen’s d, with a value of -0.23 (small effect; Cohen, 1988).

[Revise the above statements for your HW assignment.]

References

Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. New York, NY: Routledge Academic.