library(rempsyc)
Suggested APA citation: Thériault, R. (2023). rempsyc: Convenience functions for psychology.
Journal of Open Source Software, 8(87), 5466. https://doi.org/10.21105/joss.05466
The tutorial I am using says we need to standardize our variables. We scale them, meaning we subtract each variable's mean and then divide by its standard deviation. This puts all the variables on the same scale, so a -5 value in cylinders is 5 “points” below its mean in the same way a -5 in weight is 5 “points” below its mean.
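For example, this is all scale() is doing (a quick sketch on mtcars, which is the dataset the tutorial uses):
# Standardize every variable in mtcars; as.vector() drops the matrix attributes scale() returns
mtcars_scaled <- as.data.frame(lapply(mtcars, function(x) as.vector(scale(x))))

# scale() is the same as subtracting the mean and dividing by the SD
all.equal(as.vector(scale(mtcars$wt)),
          (mtcars$wt - mean(mtcars$wt)) / sd(mtcars$wt))  # TRUE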
I guess simple slopes are often used in the context of “moderation” analyses, which is just another name for an interaction effect. But I’ve never done it this way before! We are looking at whether there is an interaction between weight and gear on mpg; equivalently, we are looking to see whether weight is a moderator of the effect of gear on mpg.
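In plain lm() terms, the moderation model is just a regression with an interaction term. A sketch of the raw model, plus what I believe is the rempsyc call for the formatted moderation table (the exact nice_mod() arguments are my assumption):
# gear * wt expands to gear + wt + gear:wt, so the interaction term is included
mod <- lm(mpg ~ gear * wt, data = mtcars_scaled)
summary(mod)

# rempsyc's moderation table version (argument names are my assumption)
nice_mod(data = mtcars_scaled, response = "mpg",
         predictor = "gear", moderator = "wt") %>%
  nice_table(highlight = TRUE)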
Note about the sr2: it’s also called delta R-squared, and it shows the proportion of variance uniquely explained by that variable (its unique contribution) in the whole model.
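Put differently, the sr2 for a term is how much R-squared drops when you take that one term out. A little sketch of the idea (my own illustration, not the tutorial’s code):
full    <- lm(mpg ~ gear * wt, data = mtcars_scaled)
reduced <- lm(mpg ~ gear + wt, data = mtcars_scaled)  # same model minus the interaction

# Unique variance explained by the interaction term
summary(full)$r.squared - summary(reduced)$r.squared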
So now we do the simple slopes. It’s going to give us three slopes: the effect of the predictor at low, medium, and high values of the moderating variable (here, weight).
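If I’m reading the rempsyc docs right, the call looks something like this, with the “low” and “high” slopes evaluated at 1 SD below and above the moderator’s mean:
nice_slopes(data = mtcars_scaled, response = "mpg",
            predictor = "gear", moderator = "wt") %>%
  nice_table(highlight = TRUE)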
So it shows that none of the individual slopes are that meaningful. Maybe that demonstrates that, in this example, the overall interaction between weight and gear tells the story better than looking at each level of weight separately, and probably also that the interaction was weak to begin with.
Simple slopes on my actual data
Now let’s try it on my data for MU and Study time.
sg_data_scaled <- sg_data
num_vars <- c("mean_MU", "mean_StudyTime", "age")
sg_data_scaled[num_vars] <- lapply(sg_data[num_vars], function(x) as.vector(scale(x)))
# This just scales the IVs and keeps it a regular vector instead of a weird matrix

# Make sure condition is a factor
sg_data_scaled$condsFile <- as.factor(sg_data_scaled$condsFile)
My original model for MU was a linear mixed effects model. Not sure if that will work with this. Let’s try it first.
library(lme4)
Loading required package: Matrix
MU_LMM <- lmer(mean_MU ~ condsFile + age + age*condsFile + (1|participant), data = sg_data_scaled)
Try nice_lm_slopes() on it…
nice_lm_slopes(MU_LMM, predictor = "mean_MU", moderator = "age") %>%
  nice_table(highlight = TRUE)
Error in nice_lm_slopes(MU_LMM, predictor = "mean_MU", moderator = "age"): Model must be of class 'lm' or be a 'list()' of lm models (using 'c()' won't work).
It does not work! OK let’s downgrade to a linear model and try it:
MU_LM <- lm(mean_MU ~ condsFile + age + age*condsFile, data = sg_data_scaled)
nice_lm_slopes(MU_LM, predictor = "age", moderator = "condsFile") %>%
  nice_table(highlight = TRUE)
Error in var(if (is.vector(x) || is.factor(x)) x else as.double(x), na.rm = na.rm): Calling var(x) on a factor x is defunct.
Use something like 'all(duplicated(x)[-1L])' to test for a constant vector.
This is not going to work because condition is a factor, and this function only works with two numeric predictors…
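Just to sanity-check that diagnosis, the same call with a numeric moderator should run fine; for example, swapping in mean_StudyTime as the moderator (purely illustrative, not a model I actually care about):
MU_LM2 <- lm(mean_MU ~ age * mean_StudyTime, data = sg_data_scaled)
nice_lm_slopes(MU_LM2, predictor = "age", moderator = "mean_StudyTime") %>%
  nice_table(highlight = TRUE)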
But it looks like something called the Johnson-Neyman technique can help us!
Johnson-Neyman technique!
From the interactions package, the Johnson-Neyman method “tells you the range of values of the moderator in which the slope of the predictor is significant vs. nonsignificant at a specified alpha level” - that’s literally exactly what I need! And apparently it is compatible with GLMs… I hope that is the case, because we will need a GLM to fit the model for Study time! Let’s try MU first:
library(interactions)
sim_slopes(MU_LMM, pred = age, modx = condsFile, jnplot = TRUE)
Warning: Johnson-Neyman intervals are not available for factor predictors or
moderators.
SIMPLE SLOPES ANALYSIS

Slope of age when condsFile = longDelay:

  Est.   S.E.   t val.      p
------ ------ -------- ------
  0.22   0.10     2.12   0.04

Slope of age when condsFile = noDelay:

  Est.   S.E.   t val.      p
------ ------ -------- ------
  0.11   0.11     1.06   0.29

Slope of age when condsFile = totalResponse:

  Est.   S.E.   t val.      p
------ ------ -------- ------
  0.55   0.10     5.29   0.00
Wow, it still does not give the Johnson-Neyman intervals for factor moderators, aggghhh! At least the simple slopes themselves came out.
Try an alternative method using jtools:
library(jtools)
johnson_neyman(MU_LMM, pred = "age", modx = "condsFile")
Error in `johnson_neyman()`:
! Could not find interaction term in the model, so Johnson-Neyman interval
could not be calculated.
# Doesn't work!
johnson_neyman(MU_LM, pred = "age", modx = "condsFile")
Error in `johnson_neyman()`:
! Could not find interaction term in the model, so Johnson-Neyman interval
could not be calculated.
# Doesn't work!
Emmeans?
OK, nothing is working. A clunky way we could maybe look at this is via pairwise comparisons between conditions at a handful of ages, using emmeans.
library(emmeans)
Welcome to emmeans.
Caution: You lose important information if you filter this package's results.
See '? untidy'
# Literally just making arbitrary separations
ages_to_probe <- seq(min(sg_data_scaled$age), max(sg_data_scaled$age), length.out = 5)

# Run the pairwise comparisons
emm <- emmeans(MU_LMM, ~ condsFile | age, at = list(age = ages_to_probe))
pairs(emm)
OK so that worked, but let’s get age back into its original scale.
# Get mean and SD from the original age
age_mean <- mean(sg_data$age)
age_sd <- sd(sg_data$age)

# Back-transform the scaled ages
ages_original <- ages_to_probe * age_sd + age_mean

# Get ages for the pairwise comparisons
emm_df <- as.data.frame(emm)
emm_df$age_original <- emm_df$age * age_sd + age_mean
emm_df
Jiminy crickets! Well, it does show that the significant effect seems to begin around age 5.75, and the difference between no-delay and long-delay was never significant at any age, so that one staying non-significant is not something we need to worry about.
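(Side note: a less clunky way to get the slope of age within each condition is emtrends() from emmeans; a sketch of what I think that call would look like, which should line up with the sim_slopes() output from earlier:)
# Slope of age within each condition, with tests and CIs
age_trends <- emtrends(MU_LMM, ~ condsFile, var = "age")
summary(age_trends, infer = TRUE)

# Pairwise differences between the condition slopes
pairs(age_trends)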