# install any packages you have not previously used, then comment them back out.
#install.packages("car")
#install.packages("effsize")
library(psych) # for the describe() command
library(car) # for the leveneTest() command
## Loading required package: carData
##
## Attaching package: 'car'
## The following object is masked from 'package:psych':
##
## logit
library(effsize) # for the cohen.d() command
##
## Attaching package: 'effsize'
## The following object is masked from 'package:psych':
##
## cohen.d
d <- read.csv(file="Project Folder Data/projectdata.csv", header=T)
# For the HW, you will import the project dataset you cleaned previously
# This will be the dataset you'll use for HWs throughout the rest of the semester
We predict that women will report significantly higher levels of conscientiousness than men, as measured by the International Personality Item Pool (IPIP).
[Remember to revise the above hypothesis in your HW assignment.]
# you only need to check the variables you're using in the current analysis
## Checking the Categorical variable (IV)
str(d)
## 'data.frame': 2159 obs. of 7 variables:
## $ ResponseId: chr "R_BJN3bQqi1zUMid3" "R_2TGbiBXmAtxywsD" "R_12G7bIqN2wB2N65" "R_39pldNoon8CePfP" ...
## $ age : chr "1 between 18 and 25" "1 between 18 and 25" "1 between 18 and 25" "1 between 18 and 25" ...
## $ edu : chr "2 Currently in college" "5 Completed Bachelors Degree" "2 Currently in college" "2 Currently in college" ...
## $ socmeduse : int 47 23 34 35 37 13 37 43 37 29 ...
## $ belong : num 2.8 4.2 3.6 4 3.4 4.2 3.9 3.6 2.9 2.5 ...
## $ stress : num 3.3 3.3 4 3.2 3.1 3.5 3.3 2.4 2.9 2.7 ...
## $ swb : num 4.33 4.17 1.83 5.17 3.67 ...
# if the categorical variable you're using shows as "chr" (character), you must convert it to a factor using as.factor(), as in the next line of code
d$age <- as.factor(d$age)
table(d$age, useNA = "always")
##
## 1 between 18 and 25 2 between 26 and 35 3 between 36 and 45 4 over 45
## 1987 116 38 18
## <NA>
## 0
## Checking the Continuous variable (DV)
# you can use the describe() command on an entire dataframe (d) or just on a single variable within your dataframe -- which we will do here
describe(d$socmeduse)
## vars n mean sd median trimmed mad min max range skew kurtosis se
## X1 1 2159 34.25 8.59 35 34.52 7.41 11 55 44 -0.31 0.2 0.18
# also use a histogram to visualize your continuous variable
hist(d$socmeduse)
# use the describeBy() command to view the means and standard deviations by group
# it's very similar to the describe() command but splits the dataframe according to the 'group' variable
describeBy(d$socmeduse, group=d$age)
##
## Descriptive statistics by group
## group: 1 between 18 and 25
## vars n mean sd median trimmed mad min max range skew kurtosis se
## X1 1 1987 34.69 8.36 35 34.93 7.41 11 55 44 -0.29 0.24 0.19
## ------------------------------------------------------------
## group: 2 between 26 and 35
## vars n mean sd median trimmed mad min max range skew kurtosis se
## X1 1 116 29.28 9.63 30 29.33 8.15 11 53 42 -0.04 -0.28 0.89
## ------------------------------------------------------------
## group: 3 between 36 and 45
## vars n mean sd median trimmed mad min max range skew kurtosis se
## X1 1 38 29.18 9.12 31 29.38 8.9 11 49 38 -0.2 -0.45 1.48
## ------------------------------------------------------------
## group: 4 over 45
## vars n mean sd median trimmed mad min max range skew kurtosis se
## X1 1 18 28.06 10.29 28 27.62 5.93 11 52 41 0.38 -0.03 2.43
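If you would rather see the group descriptives stacked in a single table, describeBy() also accepts a mat argument that returns the same numbers as one matrix; this line is optional.
# optional: return the group descriptives as a single matrix instead of separate blocks
describeBy(d$socmeduse, group=d$age, mat=TRUE)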
# last, use a boxplot to examine your continuous and categorical variables together
boxplot(d$socmeduse~d$age)
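If you want clearer labels on the plot (for example, for a figure in your write-up), boxplot() accepts the standard main, xlab, and ylab arguments; the label text below is just a placeholder for this example dataset.
# optional: the same boxplot with descriptive labels (label text is only an example)
boxplot(d$socmeduse~d$age,
        main = "Social media use by age group",
        xlab = "Age group",
        ylab = "Social media use score")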
# If the IV has more than 2 levels, you must drop the additional levels so that you meet the first assumption of a t-test.
d <- subset(d, age != "2 between 26 and 35")
table(d$age, useNA = "always") # verify that there are now no participants in the removed level
##
## 1 between 18 and 25 2 between 26 and 35 3 between 36 and 45 4 over 45
## 1987 0 38 18
## <NA>
## 0
d$age <- droplevels(d$age) # use droplevels() to drop the now-empty factor level
table(d$age, useNA = "always") # verify that the entire factor level is now removed
##
## 1 between 18 and 25 3 between 36 and 45 4 over 45 <NA>
## 1987 38 18 0
d <- subset(d, age != "4 over 45")
table(d$age, useNA = "always") # verify that there are now no participants in the removed level
##
## 1 between 18 and 25 3 between 36 and 45 4 over 45 <NA>
## 1987 38 0 0
d$age <- droplevels(d$age) # use droplevels() to drop the now-empty factor level
table(d$age, useNA = "always") # verify that the entire factor level is now removed
##
## 1 between 18 and 25 3 between 36 and 45 <NA>
## 1987 38 0
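Note that the two rounds of subset() and droplevels() above can be combined into a single step. A minimal alternative sketch, using the level names from this example dataset, is shown below (left commented out since the levels have already been dropped above).
# alternative: keep only the two levels of interest in one step, then drop the empty levels
# d <- subset(d, age %in% c("1 between 18 and 25", "3 between 36 and 45"))
# d$age <- droplevels(d$age)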
We can test whether the variances of our two groups are equal using Levene’s test. The null hypothesis is that the variances of the two groups are equal, which is the result we want, so when running Levene’s test we’re hoping for a non-significant result!
# use the leveneTest() command from the car package to test homogeneity of variance
# it uses the same 'formula' setup that we'll use for our t-test: formula is y~x, where y is our DV and x is our IV
leveneTest(socmeduse~age, data = d)
## Levene's Test for Homogeneity of Variance (center = median)
## Df F value Pr(>F)
## group 1 0.8127 0.3674
## 2023
As you can see, the variances of the two comparison groups do not differ significantly.
[Revise the above statement for your HW assignment.]
When running a t-test, we can account for heterogeneity in our variance by using Welch’s t-test, which does not have the same assumption about variance as Student’s t-test (the general default type of t-test). R defaults to using Welch’s t-test so this doesn’t require any changes on our part! Even if your data has no issues with homogeneity of variance, you’ll still use Welch’s t-test – it handles the potential issues around variance well and there are no real downsides. We’re just using Levene’s test here to get into the habit of checking the homogeneity of our variance, even if we already have the solution for any potential problems.
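For reference, this default is controlled by the var.equal argument of t.test(). The lines below are left commented out (we run the actual test in the next chunk) and simply show the Welch default next to the classic Student form.
# t.test() runs Welch's t-test by default (var.equal = FALSE)
# t.test(d$socmeduse~d$age, var.equal = FALSE) # Welch's t-test (the default)
# t.test(d$socmeduse~d$age, var.equal = TRUE) # Student's t-test (assumes equal variances)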
My independent variable has more than two levels. To proceed with this analysis, I will drop the "2 between 26 and 35" and "4 over 45" participants from my sample. I will make a note to discuss this issue in my Method write-up and in my Discussion as a limitation of my study.
# very simple! we use the same formula of y~x, where y is our DV and x is our IV
t_output <- t.test(d$socmeduse~d$age)
t_output
##
## Welch Two Sample t-test
##
## data: d$socmeduse by d$age
## t = 3.6926, df = 38.198, p-value = 0.0006914
## alternative hypothesis: true difference in means between group 1 between 18 and 25 and group 3 between 36 and 45 is not equal to 0
## 95 percent confidence interval:
## 2.489235 8.528353
## sample estimates:
## mean in group 1 between 18 and 25 mean in group 3 between 36 and 45
## 34.69300 29.18421
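If you want to pull individual numbers into your write-up, t_output is a list (an object of class "htest"), so its pieces can be extracted directly; these are standard components of t.test() output.
# optional: extract individual values from the saved t-test object
t_output$statistic # the t value
t_output$p.value # the p-value
t_output$conf.int # the 95% confidence interval for the mean difference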
# once again, we use the same formula, y~x, to calculate Cohen's d
d_output <- cohen.d(d$socmeduse~d$age)
d_output
##
## Cohen's d
##
## d estimate: 0.6577248 (medium)
## 95 percent confidence interval:
## lower upper
## 0.3359197 0.9795299
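The saved d_output object can be unpacked the same way. The component names below are the ones the effsize package typically provides, so treat this as an optional sketch rather than required code.
# optional: extract individual values from the saved Cohen's d object
d_output$estimate # the d estimate
d_output$conf.int # the confidence interval for d
d_output$magnitude # the qualitative label (e.g., "medium")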
To test our hypothesis that women in our sample would report significantly higher levels of conscientiousness than men, we used an independent samples t-test. This required us to drop our non-binary gender participants from our sample, as we are limited to a two-group comparison when using this test. We tested the homogeneity of variance with Levene’s test and found no significant difference between the group variances (p = 0.3674), so unequal variances are not inflating our chance of Type I error. We nevertheless used Welch’s t-test, which does not assume homogeneity of variance. Our data met all other assumptions of a t-test.
As predicted, we found that people ages 18 to 25 (M = 34.69, SD = 8.36) reported higher levels of social media use than those ages 36 to 45 (M = 29.18, SD = 9.12); t(38.198) = 3.6926, p < .001 (see Figure 1). The effect size was calculated using Cohen’s d, with a value of 0.66 (a medium effect; Cohen, 1988).
[Revise the above statements for your HW assignment.]
References
Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. New York, NY: Routledge Academic.