One explanation for prejudice and discrimination is that – similar to stereotype threat – the fear of being perceived as biased can actually increase bias (Goff et al., 2008; Richeson & Shelton, 2007). Values affirmation interventions (VAI; Cohen et al., 2009), commonly employed to counter stereotype threat, may thus reduce prejudice in specific contexts (Sherman & Cohen, 2002). Although findings are mixed, there have been promising results (Adams et al., 2006; Badea et al., 2017; Čehajić-Clancy et al., 2011; Lehmiller et al., 2010; Lesick & Zell, 2021; Unzueta & Lowery, 2008). Some studies indicate that the particular values being affirmed also influence outcomes (Badea et al., 2017; Lehmiller et al., 2010), suggesting that mechanisms beyond self-affirmation may be at work.
A previous study of 231 engineering students examined the impact of a VAI on students’ engineering identity, belonging, and anti-racist attitudes (measured with the Propensity to Make Attributions to Prejudice scale; Miller & Saucier, 2018). Results indicated reduced trivialization of racism among women of Color and increased vigilance against racism among men of Color, with no impact on White students (Perkins et al., 2023). The current analysis focused on White students, exploring the relationships between endorsed values, anti-racist attitudes, and belonging in engineering. We hypothesized that communal or social values would be associated with less trivialization, more vigilance, and lower belonging. Mediation analyses then probed the role of values in trivialization, vigilance, and belonging.
Participants (n = 102) completed a pre-test survey that measured several constructs including belonging and expectancy regarding engineering (Scheidt et al., 2018). Two weeks after the start of the semester, students completed the values affirmation online, in which they selected at least three very important and three somewhat important values from a list of 13 (derived from Schwartz’ theory of basic values; Schwartz, 2012). Afterwards, they picked one of their very important values and completed two short writing prompts. At the end of the semester, they completed the pre-test survey measures again as well as two subscales from the Propensity to Make Attributions to Prejudice scale (PMAPS; Miller & Saucier, 2018).
This study analyzes the White subsample’s (n = 62) responses to the values prompt, their belonging and expectancy, and their trivialization of and vigilance against racism. Factor analysis was used to identify the latent values underlying the 13 specific values used in the intervention: kindness (benevolence, universalism, and collectivism); stability (face, security, and conformity); and strength (achievement and power). Participants’ responses to the belonging and expectancy scales were also combined into a single composite variable measuring their current feelings of belonging and their anticipated place in engineering (referred to as belonging from this point forward). One outlier (detected using Mahalanobis distance) was dropped.
Our analysis consisted of three mediation analyses using the mediate() function from the psych package in R.
There was a significant total effect of kindness on belonging (b = -.44, p < .001), which was completely mediated by trivialization (direct effect b = -.12, p = .362). A bias-corrected bootstrap estimate of the indirect effect (b = -.33) was calculated with 1,000 samples, and the confidence interval was entirely below zero (-.51 to -.18; Figure 1).
There was a significant total effect of stability on belonging (b = .30, p = .017), which was also completely mediated by trivialization (direct effect b = .09, p = .410). Again, the bias-corrected bootstrap estimate of the indirect effect (b = .21) was calculated with 1,000 samples, and the confidence interval was entirely above zero (.09 to .37; Figure 2).
There was a significant total effect of strength on belonging (b = .35, p = .005) that was completely mediated by trivialization (direct effect b = .05, p = .686); the confidence interval of the bias-corrected bootstrap estimate of the indirect effect (b = .30) was entirely above zero (.17 to .45; Figure 3).
Badea, C., Tavani, J.-L., Rubin, M., & Meyer, T. (2017). Self-affirmation, political value congruence, and support for refugees. Journal of Applied Social Psychology, 47(7), 355–365. https://doi.org/10.1111/jasp.12441
Čehajić-Clancy, S., Effron, D. A., Halperin, E., Liberman, V., & Ross, L. D. (2011). Affirmation, acknowledgment of in-group responsibility, group-based guilt, and support for reparative measures. Journal of Personality and Social Psychology, 101(2), 256–270. https://doi.org/10.1037/a0023936
Cohen, G. L., Garcia, J., Purdie-Vaughns, V., Apfel, N., & Brzustoski, P. (2009). Recursive Processes in Self-Affirmation: Intervening to Close the Minority Achievement Gap. Science, 324(5925), 400–403. https://doi.org/10.1126/science.1170769
Goff, P. A., Steele, C. M., & Davies, P. G. (2008). The space between us: Stereotype threat and distance in interracial contexts. Journal of Personality and Social Psychology, 94, 91–107. https://doi.org/10.1037/0022-3514.94.1.91
Lehmiller, J. J., Law, A. T., & Tormala, T. T. (2010). The effect of self-affirmation on sexual prejudice. Journal of Experimental Social Psychology, 46(2), 276–285. https://doi.org/10.1016/j.jesp.2009.11.009
Lesick, T. L., & Zell, E. (2021). Is Affirmation the Cure? Self-Affirmation and European-Americans’ Perception of Systemic Racism. Basic and Applied Social Psychology, 43(1), 1–13. https://doi.org/10.1080/01973533.2020.1811092
Miller, S. S., & Saucier, D. A. (2018). Individual differences in the propensity to make attributions to prejudice. Group Processes & Intergroup Relations, 21(2), 280–301. https://doi.org/10.1177/1368430216674342
Perkins, H., Major, J., Chen, J., Berger, E., & Godwin, A. (2023). Using a values affirmation to increase engineering students’ awareness and acknowledgement of racism. 2023 American Psychological Association Conference.
Richeson, J. A., & Shelton, J. N. (2007). Negotiating Interracial Interactions: Costs, Consequences, and Possibilities. Current Directions in Psychological Science, 16(6), 316–320. https://doi.org/10.1111/j.1467-8721.2007.00528.x
Scheidt, M., Godwin, A., Senkpeil, R. R., Ge, J. S., Chen, J., Self, B. P., Widmann, J. M., & Berger, E. J. (2018, June 23). Validity Evidence for the SUCCESS Survey: Measuring Non-Cognitive and Affective Traits of Engineering and Computing Students. 2018 ASEE Annual Conference & Exposition. https://peer.asee.org/validity-evidence-for-the-success-survey-measuring-non-cognitive-and-affective-traits-of-engineering-and-computing-students
Schwartz, S. H. (2012). An Overview of the Schwartz Theory of Basic Values. Online Readings in Psychology and Culture, 2(1). https://doi.org/10.9707/2307-0919.1116
Sherman, D. K., & Cohen, G. L. (2002). Accepting Threatening Information: Self-Affirmation and the Reduction of Defensive Biases. Current Directions in Psychological Science, 11(4), 115–149.
Things done in this section: load libraries, import data, separate into pre, post, and follow-up dataframes, and create race_minority and gender_minority variables from participant responses.
Little’s MCAR test was used earlier to check whether the data are missing completely at random. The test is not significant for the full dataset (χ²(192, N = 258) = 203.92, p = .264) or for the dataset with unit nonresponders dropped (χ²(127, N = 204) = 118.52, p = .692). The data appear to be missing completely at random, so we can proceed with the analysis.
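The code for that earlier check does not appear in this notebook; the sketch below is a hypothetical reproduction (not the original code), assuming the naniar package, which is not loaded in the library block below, and the same data file used later in this document. The unit-nonresponder comparison would additionally require the filtering variable used earlier, which is not defined here.
library(naniar) # assumption: naniar's mcar_test() implements Little's (1988) MCAR test
library(dplyr)
mcar_raw <- read.csv(file = "data/final data 10-29-22.csv", header = TRUE)
mcar_num <- select(mcar_raw, where(is.numeric)) # keep numeric columns; the test expects numeric data
mcar_test(mcar_num) # returns Little's chi-square statistic, df, and p-value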
library(psych)
library(ggplot2)
library(rstatix)
library(expss) #for cross_cases()
library(dplyr)
library(tidyr) #for gather()
library(kableExtra)
library(nFactors)
library(corrplot)
library(sjPlot)
library(viridis)
# import file
import <- read.csv(file="data/final data 10-29-22.csv", header = T)
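# split imported columns into separate dataframes by survey prefix (pre, post, follow-up, PMAPS, activity) and demographics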
pre <- subset(import, select=c(UID, grep("pre", colnames(import))))
post <- subset(import, select=c(UID, grep("post", colnames(import))))
follow <- subset(import, select=c(UID, grep("follow", colnames(import))))
pmaps <- subset(import, select=c(UID, grep("pmaps", colnames(import))))
act <- subset(import, select=c(UID, grep("act", colnames(import))))
demo <- subset(import, select=c(UID, 129:162))
m1 <- merge(act, follow, by = "UID")
m2 <- merge(m1, pmaps, by = "UID")
df <- merge(m2, demo, by = "UID")
# m3 <- na.omit(merge(pre, post, by = "UID"))
# act$con[!is.na(act$Q5_1_act)] <- "chall"
# act$con[!is.na(act$Q15_1_act)] <- "val"
# act$con[!is.na(act$Q4_act)] <- "con"
# m4 <- merge(m3, na.omit(subset(act, select=c(UID,con))), by = "UID", all.y = T)
# subset(data.frame(table(m4$UID)), Freq > 1)
# m4 <- subset(m4, UID != "ENHLY4123X")
rm(pre, post, follow, act, pmaps, demo, import, m1, m2)
# 8 = not important, 5 = somewhat important, 1 = very important
val <- subset(df, select=c(UID, grep("Q15", colnames(df)), Q17_act, Q19_act))
val2 <- subset(val, select=c(2:14))
# Recode values in multiple columns
recode_values <- function(x) {
case_when(
x == "Very Important" ~ "1",
x == "Somewhat Important" ~ "0",
x == "Not Important" ~ "-1",
TRUE ~ x
)
}
val3 <- val2 %>%
mutate_at(vars(starts_with("Q15")), recode_values)
# Recode values in multiple columns
recode_values <- function(x) {
case_when(
x == "5" ~ "0",
x == "8" ~ "-1",
TRUE ~ x
)
}
val4 <- val3 %>%
mutate_at(vars(starts_with("Q15")), recode_values)
str(val4)
## 'data.frame': 694 obs. of 13 variables:
## $ Q15_1_act : chr NA NA NA "1" ...
## $ Q15_2_act : chr NA NA NA "1" ...
## $ Q15_3_act : chr NA NA NA "1" ...
## $ Q15_4_act : chr NA NA NA "1" ...
## $ Q15_5_act : chr NA NA NA "0" ...
## $ Q15_6_act : chr NA NA NA "0" ...
## $ Q15_7_act : chr NA NA NA "1" ...
## $ Q15_8_act : chr NA NA NA "0" ...
## $ Q15_9_act : chr NA NA NA "0" ...
## $ Q15_10_act: chr NA NA NA "1" ...
## $ Q15_11_act: chr NA NA NA "0" ...
## $ Q15_12_act: chr NA NA NA "0" ...
## $ Q15_13_act: chr NA NA NA "-1" ...
val5 <- val4 %>%
mutate_if(is.character, as.numeric)
str(val5)
## 'data.frame': 694 obs. of 13 variables:
## $ Q15_1_act : num NA NA NA 1 NA 0 NA NA 0 NA ...
## $ Q15_2_act : num NA NA NA 1 NA 0 NA NA 1 NA ...
## $ Q15_3_act : num NA NA NA 1 NA -1 NA NA 1 NA ...
## $ Q15_4_act : num NA NA NA 1 NA 1 NA NA 1 NA ...
## $ Q15_5_act : num NA NA NA 0 NA 1 NA NA 0 NA ...
## $ Q15_6_act : num NA NA NA 0 NA -1 NA NA 1 NA ...
## $ Q15_7_act : num NA NA NA 1 NA 1 NA NA 1 NA ...
## $ Q15_8_act : num NA NA NA 0 NA -1 NA NA 0 NA ...
## $ Q15_9_act : num NA NA NA 0 NA -1 NA NA -1 NA ...
## $ Q15_10_act: num NA NA NA 1 NA 1 NA NA 0 NA ...
## $ Q15_11_act: num NA NA NA 0 NA 0 NA NA 1 NA ...
## $ Q15_12_act: num NA NA NA 0 NA 0 NA NA 1 NA ...
## $ Q15_13_act: num NA NA NA -1 NA -1 NA NA -1 NA ...
df2 <- cbind.data.frame(df[1],val5[1:12],df[36:105])
rm(val,val2,val3,val4,val5)
Things done in this section: create composite variables from timepoint 3 (follow-up), check univariate normality, visualize data. Created variable belexp that combines belonging (current feelings of belonging) and expectancy (anticipated future belonging).
ggplot(gather(subset(df2, select=c(14:49))), aes(value)) +
geom_histogram(bins = 7) +
facet_wrap(~key)
## Warning: Removed 9369 rows containing non-finite values (`stat_bin()`).
attach(df2)
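# composite scores: item means for each follow-up scale and for the two PMAPS subscales (trivialization, vigilance)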
df2$bel <- (Q2_1_follow + Q2_2_follow + Q2_3_follow + Q2_4_follow)/4
df2$rec <- (Q2_5_follow + Q2_6_follow + Q2_7_follow + Q2_8_follow)/4
df2$int <- (Q2_9_follow + Q2_10_follow + Q2_11_follow)/3
df2$ftpins <- (Q3_1_follow + Q3_2_follow + Q3_3_follow)/3
df2$pof <- (Q3_4_follow + Q3_5_follow + Q3_6_follow + Q3_7_follow)/4
df2$exp <- (Q3_8_follow + Q3_9_follow + Q3_10_follow + Q3_11_follow + Q3_12_follow)/5
df2$anx <- (Q4_1_follow + Q4_2_follow + Q4_3_follow + Q4_4_follow + Q4_5_follow)/5
df2$triv <- (Q47_1_pmaps + Q47_2_pmaps + Q47_3_pmaps + Q47_4_pmaps)/4
df2$vig <- (Q47_5_pmaps + Q47_6_pmaps + Q47_7_pmaps + Q47_8_pmaps)/4
df2$belexp <- (Q2_1_follow + Q2_2_follow + Q2_3_follow + Q2_4_follow +
Q3_8_follow + Q3_9_follow + Q3_10_follow + Q3_11_follow + Q3_12_follow)/9
detach(df2)
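# standardize the composites (z-scores); bel and exp are left unscaled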
df2$rec <- as.vector(scale(df2$rec, center=T, scale=T))
df2$int <- as.vector(scale(df2$int, center=T, scale=T))
df2$ftpins <- as.vector(scale(df2$ftpins, center=T, scale=T))
df2$pof <- as.vector(scale(df2$pof, center=T, scale=T))
df2$anx <- as.vector(scale(df2$anx, center=T, scale=T))
df2$triv <- as.vector(scale(df2$triv, center=T, scale=T))
df2$vig <- as.vector(scale(df2$vig, center=T, scale=T))
df2$belexp <- as.vector(scale(df2$belexp, center=T, scale=T))
There are issues with kurtosis for many of the variables at follow-up. Belonging (bel) has acceptable skew but high kurtosis (2.67), while belexp is better (kurtosis = 2.01). Trivialization and vigilance are fine.
desc <- describe(subset(df2, select=c(bel, rec, int, ftpins, pof, exp, anx, triv, vig, belexp)))
kbl(round(desc, digits = 2)) %>%
kable_styling() %>%
row_spec(which(desc$kurtosis > 2), bold = T) %>%
row_spec(which(desc$kurtosis < -2), bold = T) %>%
row_spec(which(desc$skew > 2), italic = T) %>%
row_spec(which(desc$skew < -2), italic = T)
|   | vars | n | mean | sd | median | trimmed | mad | min | max | range | skew | kurtosis | se |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| bel | 1 | 435 | 5.86 | 1.05 | 6.00 | 6.00 | 1.11 | 1.00 | 7.00 | 6.00 | -1.28 | 2.67 | 0.05 |
| rec | 2 | 435 | 0.00 | 1.00 | 0.17 | 0.11 | 1.12 | -4.86 | 1.18 | 6.04 | -1.34 | 3.17 | 0.05 |
| int | 3 | 435 | 0.00 | 1.00 | 0.14 | 0.17 | 0.96 | -5.02 | 0.78 | 5.80 | -1.96 | 5.15 | 0.05 |
| ftpins | 4 | 434 | 0.00 | 1.00 | 0.22 | 0.14 | 1.04 | -5.41 | 0.93 | 6.34 | -1.71 | 5.22 | 0.05 |
| pof | 5 | 434 | 0.00 | 1.00 | 0.30 | 0.15 | 1.03 | -4.56 | 0.99 | 5.55 | -1.46 | 2.92 | 0.05 |
| exp | 6 | 434 | 5.76 | 0.99 | 6.00 | 5.85 | 0.89 | 1.00 | 7.00 | 6.00 | -0.92 | 1.46 | 0.05 |
| anx | 7 | 434 | 0.00 | 1.00 | -0.02 | 0.01 | 1.14 | -2.07 | 1.78 | 3.85 | -0.09 | -0.73 | 0.05 |
| triv | 8 | 431 | 0.00 | 1.00 | -0.29 | -0.11 | 1.11 | -1.19 | 2.41 | 3.59 | 0.73 | -0.25 | 0.05 |
| vig | 9 | 432 | 0.00 | 1.00 | 0.02 | 0.03 | 0.90 | -3.01 | 1.84 | 4.85 | -0.36 | 0.22 | 0.05 |
| belexp | 10 | 434 | 0.00 | 1.00 | 0.09 | 0.09 | 0.89 | -5.21 | 1.29 | 6.51 | -1.00 | 2.01 | 0.05 |
ggplot(aes(x=triv, fill=re_white), data=subset(df2, !is.na(re_white))) +
geom_density(color="#e9ecef", alpha=0.6, position = 'identity') +
scale_fill_manual(values=c("#69b3a2", "#404080")) +
labs(fill="White")
## Warning: Removed 224 rows containing non-finite values (`stat_density()`).
ggplot(aes(x=vig, fill=re_white), data=subset(df2, !is.na(re_white))) +
geom_density(color="#e9ecef", alpha=0.6, position = 'identity') +
scale_fill_manual(values=c("#69b3a2", "#404080")) +
labs(fill="White")
## Warning: Removed 223 rows containing non-finite values (`stat_density()`).
ggplot(aes(x=triv, fill=ge_w), data=subset(df2, !is.na(ge_w))) +
geom_density(color="#e9ecef", alpha=0.6, position = 'identity') +
scale_fill_manual(values=c("#69b3a2", "#404080")) +
labs(fill="Women")
## Warning: Removed 224 rows containing non-finite values (`stat_density()`).
ggplot(aes(x=vig, fill=ge_w), data=subset(df2, !is.na(ge_w))) +
geom_density(color="#e9ecef", alpha=0.6, position = 'identity') +
scale_fill_manual(values=c("#69b3a2", "#404080")) +
labs(fill="Women")
## Warning: Removed 223 rows containing non-finite values (`stat_density()`).
Things done in this section: code participants’ responses to the values affirmation activity to understand their values. Students rated the values in the list below as not important (-1), somewhat important (0), or very important (+1), selecting at least three as very important and at least three as somewhat important. Factor analysis was used to identify latent variables underlying the larger list of values (FA rather than PCA because the focus is on underlying latent factors rather than dimension reduction). Once the latent factors were identified, averages of the items loading on each factor were created. A higher score indicates greater importance; a score below zero indicates the values were rated as not important.
colnames(df2)[2:13] <- c("Self-direction",
"Stimulation",
"Hedonism",
"Achievement",
"Power",
"Face",
"Security",
"Conformity",
"Tradition",
"Benevolence",
"Universalism",
"Collectivism")
desc <- describe(subset(df2, select=c(2:13)))
kbl(round(desc, digits = 2)) %>%
kable_styling() %>%
row_spec(which(desc$kurtosis > 2), bold = T) %>%
row_spec(which(desc$kurtosis < -2), bold = T) %>%
row_spec(which(desc$skew > 2), italic = T) %>%
row_spec(which(desc$skew < -2), italic = T)
|   | vars | n | mean | sd | median | trimmed | mad | min | max | range | skew | kurtosis | se |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Self-direction | 1 | 102 | 0.53 | 0.54 | 1 | 0.56 | 0.00 | -1 | 1 | 2 | -0.49 | -1.02 | 0.05 |
| Stimulation | 2 | 102 | 0.63 | 0.53 | 1 | 0.68 | 0.00 | -1 | 1 | 2 | -0.92 | -0.35 | 0.05 |
| Hedonism | 3 | 102 | 0.62 | 0.63 | 1 | 0.74 | 0.00 | -1 | 1 | 2 | -1.38 | 0.71 | 0.06 |
| Achievement | 4 | 102 | 0.80 | 0.49 | 1 | 0.93 | 0.00 | -1 | 1 | 2 | -2.45 | 5.19 | 0.05 |
| Power | 5 | 102 | 0.41 | 0.62 | 0 | 0.48 | 1.48 | -1 | 1 | 2 | -0.53 | -0.66 | 0.06 |
| Face | 6 | 102 | 0.25 | 0.78 | 0 | 0.32 | 1.48 | -1 | 1 | 2 | -0.47 | -1.23 | 0.08 |
| Security | 7 | 102 | 0.70 | 0.52 | 1 | 0.78 | 0.00 | -1 | 1 | 2 | -1.44 | 1.12 | 0.05 |
| Conformity | 8 | 102 | -0.06 | 0.67 | 0 | -0.07 | 0.00 | -1 | 1 | 2 | 0.07 | -0.81 | 0.07 |
| Tradition | 9 | 102 | -0.25 | 0.82 | 0 | -0.32 | 1.48 | -1 | 1 | 2 | 0.49 | -1.35 | 0.08 |
| Benevolence | 10 | 102 | 0.56 | 0.62 | 1 | 0.66 | 0.00 | -1 | 1 | 2 | -1.07 | 0.05 | 0.06 |
| Universalism | 11 | 102 | 0.25 | 0.79 | 0 | 0.32 | 1.48 | -1 | 1 | 2 | -0.48 | -1.27 | 0.08 |
| Collectivism | 12 | 102 | 0.49 | 0.73 | 1 | 0.61 | 0.00 | -1 | 1 | 2 | -1.03 | -0.39 | 0.07 |
long_df <- df2 %>%
gather(variable, value, 2:13)
long_df <- long_df[, c("UID", "variable", "value")]
long_df %>%
ggplot( aes(x=value, color=variable, fill=variable)) +
geom_histogram(bins=3) +
scale_fill_viridis(discrete=TRUE) +
scale_color_viridis(discrete=TRUE) +
theme(
legend.position="none",
panel.spacing = unit(0.1, "lines"),
strip.text.x = element_text(size = 8)
) +
facet_wrap(~variable) +
xlab("Ratings (Very Important; Somewhat Important; Not Important)") +
ylab("Frequency")
## Warning: Removed 7104 rows containing non-finite values (`stat_bin()`).
d <- subset(df2, select=c(2:13))
out <- corr.test(d)
corrplot(out$r, type="upper", method = "color", tl.col = "black", tl.cex = .75,
p.mat = out$p,
sig.level = c(.001, .01, .05), pch.cex = .9,
insig = "label_sig", pch.col = "white",
order = "hclust")
ev <- eigen(cor(na.omit(d))) # get eigenvalues
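# Horn's parallel analysis (nFactors) to help decide how many factors to retain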
ap <- parallel(subject = nrow(na.omit(d)), var = ncol(na.omit(d)),
rep = 100,cent = .05)
nS <- nScree(x = ev$values, aparallel = ap$eigen$qevpea)
plotnScree(nS)
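# exploratory factor analysis with promax (oblique) rotation, allowing the factors to correlate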
fit <- factanal(na.omit(d), 3, rotation="promax")
print(fit, digits = 3, cutoff = 0.3, sort = TRUE)
##
## Call:
## factanal(x = na.omit(d), factors = 3, rotation = "promax")
##
## Uniquenesses:
## Self-direction Stimulation Hedonism Achievement Power
## 0.859 0.668 0.822 0.215 0.666
## Face Security Conformity Tradition Benevolence
## 0.590 0.606 0.298 0.432 0.438
## Universalism Collectivism
## 0.160 0.657
##
## Loadings:
## Factor1 Factor2 Factor3
## Stimulation 0.543
## Tradition 0.682 0.408
## Benevolence 0.687
## Universalism 0.923
## Collectivism 0.535
## Face 0.573
## Security 0.630
## Conformity 0.329 0.795
## Achievement 0.863
## Self-direction -0.311
## Hedonism 0.334
## Power 0.473
##
## Factor1 Factor2 Factor3
## SS loadings 2.785 1.692 1.175
## Proportion Var 0.232 0.141 0.098
## Cumulative Var 0.232 0.373 0.471
##
## Factor Correlations:
## Factor1 Factor2 Factor3
## Factor1 1.000 0.1025 0.1012
## Factor2 0.102 1.0000 -0.0999
## Factor3 0.101 -0.0999 1.0000
##
## Test of the hypothesis that 3 factors are sufficient.
## The chi square statistic is 41.25 on 33 degrees of freedom.
## The p-value is 0.153
fit <- factanal(na.omit(d), 4, rotation="promax")
print(fit, digits = 3, cutoff = 0.3, sort = TRUE)
##
## Call:
## factanal(x = na.omit(d), factors = 4, rotation = "promax")
##
## Uniquenesses:
## Self-direction Stimulation Hedonism Achievement Power
## 0.854 0.670 0.512 0.324 0.631
## Face Security Conformity Tradition Benevolence
## 0.585 0.605 0.266 0.380 0.381
## Universalism Collectivism
## 0.177 0.506
##
## Loadings:
## Factor1 Factor2 Factor3 Factor4
## Stimulation 0.509
## Benevolence 0.813
## Universalism 0.712
## Collectivism 0.825 0.303
## Security 0.526
## Conformity 0.834
## Tradition 0.351 0.569
## Achievement 0.347 0.797
## Power 0.543
## Hedonism 0.762
## Self-direction
## Face 0.422 0.315
##
## Factor1 Factor2 Factor3 Factor4
## SS loadings 2.538 1.556 1.156 0.956
## Proportion Var 0.212 0.130 0.096 0.080
## Cumulative Var 0.212 0.341 0.437 0.517
##
## Factor Correlations:
## Factor1 Factor2 Factor3 Factor4
## Factor1 1.000 -0.1937 -0.1733 0.475
## Factor2 -0.194 1.0000 -0.0371 0.242
## Factor3 -0.173 -0.0371 1.0000 -0.220
## Factor4 0.475 0.2423 -0.2203 1.000
##
## Test of the hypothesis that 4 factors are sufficient.
## The chi square statistic is 25.96 on 24 degrees of freedom.
## The p-value is 0.355
fit <- factanal(na.omit(d), 5, rotation="promax")
print(fit, digits = 3, cutoff = 0.3, sort = TRUE)
##
## Call:
## factanal(x = na.omit(d), factors = 5, rotation = "promax")
##
## Uniquenesses:
## Self-direction Stimulation Hedonism Achievement Power
## 0.645 0.652 0.494 0.439 0.005
## Face Security Conformity Tradition Benevolence
## 0.585 0.601 0.278 0.361 0.357
## Universalism Collectivism
## 0.198 0.500
##
## Loadings:
## Factor1 Factor2 Factor3 Factor4 Factor5
## Benevolence 0.827
## Universalism 0.585
## Collectivism 0.821
## Security 0.526
## Conformity 0.828
## Tradition 0.596
## Power 1.061
## Hedonism 0.799
## Self-direction 0.660
## Achievement 0.467 0.515
## Stimulation 0.460
## Face 0.428 0.329
##
## Factor1 Factor2 Factor3 Factor4 Factor5
## SS loadings 2.337 1.572 1.290 1.056 0.852
## Proportion Var 0.195 0.131 0.107 0.088 0.071
## Cumulative Var 0.195 0.326 0.433 0.521 0.592
##
## Factor Correlations:
## Factor1 Factor2 Factor3 Factor4 Factor5
## Factor1 1.0000 -0.0285 -0.0251 0.490 -0.259
## Factor2 -0.0285 1.0000 -0.2443 -0.180 0.470
## Factor3 -0.0251 -0.2443 1.0000 0.132 0.167
## Factor4 0.4895 -0.1798 0.1321 1.000 -0.293
## Factor5 -0.2593 0.4697 0.1670 -0.293 1.000
##
## Test of the hypothesis that 5 factors are sufficient.
## The chi square statistic is 13.5 on 16 degrees of freedom.
## The p-value is 0.636
The three-factor model identified three interpretable factors, but one item (tradition) cross-loaded. The four-factor model resolved the cross-loading. The five-factor model did produce a loading for self-direction, which loaded with achievement on a two-item factor. The four-factor model, however, has a weak fourth factor (only two items, one of which loads very weakly).
Consistent across all models:
Consistent across 3/4 models:
Consistent across 4/5 models:
Based on performance across all three models, we are sticking with the three-factor model but including tradition with face/security/conformity (where it currently cross-loads).
Dropping hedonism: the item may have been double- or triple-barreled, and it loads weakly and inconsistently.
Dropping self-direction: it either loads on its own factor or not at all.
d <- subset(df2, select=c(3, 5:13))
ev <- eigen(cor(na.omit(d))) # get eigenvalues
ap <- parallel(subject = nrow(na.omit(d)), var = ncol(na.omit(d)),
rep = 100,cent = .05)
nS <- nScree(x = ev$values, aparallel = ap$eigen$qevpea)
plotnScree(nS)
fit <- factanal(na.omit(d), 3, rotation="promax")
print(fit, digits = 3, cutoff = 0.3, sort = TRUE)
##
## Call:
## factanal(x = na.omit(d), factors = 3, rotation = "promax")
##
## Uniquenesses:
## Stimulation Achievement Power Face Security Conformity
## 0.659 0.374 0.590 0.631 0.639 0.260
## Tradition Benevolence Universalism Collectivism
## 0.416 0.404 0.191 0.616
##
## Loadings:
## Factor1 Factor2 Factor3
## Stimulation 0.563
## Tradition 0.647 0.445
## Benevolence 0.691
## Universalism 0.911
## Collectivism 0.544
## Face 0.530
## Security 0.597
## Conformity 0.819
## Achievement 0.766
## Power -0.352 0.553
##
## Factor1 Factor2 Factor3
## SS loadings 2.614 1.563 1.058
## Proportion Var 0.261 0.156 0.106
## Cumulative Var 0.261 0.418 0.523
##
## Factor Correlations:
## Factor1 Factor2 Factor3
## Factor1 1.0000 0.0191 0.179
## Factor2 0.0191 1.0000 -0.123
## Factor3 0.1792 -0.1230 1.000
##
## Test of the hypothesis that 3 factors are sufficient.
## The chi square statistic is 14.8 on 18 degrees of freedom.
## The p-value is 0.675
Tradition is still cross-loading and stimulation doesn’t fit the emerging factor – drop?
d <- subset(df2, select=c(5:9, 11:13))
ev <- eigen(cor(na.omit(d))) # get eigenvalues
ap <- parallel(subject = nrow(na.omit(d)), var = ncol(na.omit(d)),
rep = 100,cent = .05)
nS <- nScree(x = ev$values, aparallel = ap$eigen$qevpea)
plotnScree(nS)
fit <- factanal(na.omit(d), 3, rotation="promax")
print(fit, digits = 3, cutoff = 0.3, sort = TRUE)
##
## Call:
## factanal(x = na.omit(d), factors = 3, rotation = "promax")
##
## Uniquenesses:
## Achievement Power Face Security Conformity Benevolence
## 0.364 0.587 0.605 0.603 0.347 0.387
## Universalism Collectivism
## 0.202 0.600
##
## Loadings:
## Factor1 Factor2 Factor3
## Benevolence 0.722
## Universalism 0.906
## Collectivism 0.576
## Face 0.554
## Security 0.632
## Conformity 0.774
## Achievement 0.769
## Power -0.326 0.558
##
## Factor1 Factor2 Factor3
## SS loadings 1.946 1.352 1.049
## Proportion Var 0.243 0.169 0.131
## Cumulative Var 0.243 0.412 0.543
##
## Factor Correlations:
## Factor1 Factor2 Factor3
## Factor1 1.0000 -0.0715 -0.164
## Factor2 -0.0715 1.0000 -0.191
## Factor3 -0.1640 -0.1914 1.000
##
## Test of the hypothesis that 3 factors are sufficient.
## The chi square statistic is 6.5 on 7 degrees of freedom.
## The p-value is 0.483
colnames(df2)[2:13] <- tolower(colnames(df2)[2:13])
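# composite value scores: mean of the items loading on each retained factor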
df2$kindness <- (df2$benevolence + df2$universalism + df2$collectivism)/3
df2$stability <- (df2$face + df2$security + df2$conformity)/3
df2$strength <- (df2$achievement + df2$power)/2
df2$kindness <- as.vector(scale(df2$kindness, center=T, scale=T))
df2$stability <- as.vector(scale(df2$stability, center=T, scale=T))
df2$strength <- as.vector(scale(df2$strength, center=T, scale=T))
desc <- describe(subset(df2, select=c(kindness, stability, strength)))
kbl(round(desc, digits = 2)) %>%
kable_styling() %>%
row_spec(which(desc$kurtosis > 2), bold = T) %>%
row_spec(which(desc$kurtosis < -2), bold = T) %>%
row_spec(which(desc$skew > 2), italic = T) %>%
row_spec(which(desc$skew < -2), italic = T)
|   | vars | n | mean | sd | median | trimmed | mad | min | max | range | skew | kurtosis | se |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| kindness | 1 | 102 | 0 | 1 | 0.39 | 0.13 | 0.83 | -2.42 | 0.95 | 3.37 | -0.88 | -0.35 | 0.1 |
| stability | 2 | 102 | 0 | 1 | 0.07 | 0.05 | 0.97 | -1.89 | 1.38 | 3.27 | -0.51 | -1.02 | 0.1 |
| strength | 3 | 102 | 0 | 1 | -0.23 | 0.18 | 1.61 | -3.50 | 0.85 | 4.35 | -1.42 | 2.05 | 0.1 |
corr.test(subset(df2, select=c(kindness, stability, strength)))
## Call:corr.test(x = subset(df2, select = c(kindness, stability, strength)))
## Correlation matrix
## kindness stability strength
## kindness 1.00 0.04 0.02
## stability 0.04 1.00 0.17
## strength 0.02 0.17 1.00
## Sample Size
## [1] 102
## Probability values (Entries above the diagonal are adjusted for multiple tests.)
## kindness stability strength
## kindness 0.00 1.00 1.00
## stability 0.71 0.00 0.25
## strength 0.84 0.08 0.00
##
## To see confidence intervals of the correlations, print with the short=FALSE option
long_df <- df2 %>%
gather(variable, value, 94:96)
long_df <- long_df[, c("UID", "variable", "value")]
long_df$variable[long_df$variable == "kindness"] <- "Kindness"
long_df$variable[long_df$variable == "stability"] <- "Stability"
long_df$variable[long_df$variable == "strength"] <- "Strength"
long_df %>%
ggplot( aes(x=value, color=variable, fill=variable)) +
geom_histogram(bins=5) +
scale_fill_viridis(discrete=TRUE) +
scale_color_viridis(discrete=TRUE) +
theme(
legend.position="none",
panel.spacing = unit(0.1, "lines"),
strip.text.x = element_text(size = 8)
) +
facet_wrap(~variable) +
xlab("Standardized Ratings") + ylab("Frequency")
## Warning: Removed 1776 rows containing non-finite values (`stat_bin()`).
Outlier check using Mahalanobis distance. One outlier was dropped.
d <- na.omit(subset(df2, select=c(1,85:88,90:96)))
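# squared Mahalanobis distance of each case from the multivariate centroid of the composites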
m_dist <- mahalanobis(d[-1], colMeans(d[-1]), cov(d[-1]))
d$MD <- round(m_dist, 1)
plot(d$MD)
describe(m_dist)
## vars n mean sd median trimmed mad min max range skew kurtosis se
## X1 1 87 10.87 7.47 9.38 9.79 4.69 3.58 57.67 54.08 3.18 15.84 0.8
cut <- qchisq(.999, df=(ncol(d)-1))
abline(a=cut, b=0, col="red")
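# flag cases whose distance exceeds the chi-square cutoff as multivariate outliers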
d$outlier <- F
d$outlier[d$MD > cut] <- T
table(d$outlier)
##
## FALSE TRUE
## 86 1
outs <- subset(d, select=c(UID, outlier), outlier == T)
df3 <- subset(df2, !(UID %in% outs$UID))
Add the new value variables and re-check univariate normality and correlations for all variables. Strength kurtosis is a bit high (2.02) but acceptable, probably because of the influence of achievement, which was rated highly by most students.
desc <- describe(subset(df3, select=c(85:88,90:96)))
kbl(round(desc, digits = 2)) %>%
kable_styling() %>%
row_spec(which(desc$kurtosis > 2), bold = T) %>%
row_spec(which(desc$kurtosis < -2), bold = T) %>%
row_spec(which(desc$skew > 2), italic = T) %>%
row_spec(which(desc$skew < -2), italic = T)
|   | vars | n | mean | sd | median | trimmed | mad | min | max | range | skew | kurtosis | se |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| rec | 1 | 434 | 0.01 | 0.99 | 0.17 | 0.11 | 1.12 | -4.86 | 1.18 | 6.04 | -1.34 | 3.27 | 0.05 |
| int | 2 | 434 | 0.01 | 0.98 | 0.14 | 0.18 | 0.96 | -5.02 | 0.78 | 5.80 | -1.92 | 5.07 | 0.05 |
| ftpins | 3 | 433 | 0.01 | 0.97 | 0.22 | 0.14 | 1.04 | -5.41 | 0.93 | 6.34 | -1.53 | 4.25 | 0.05 |
| pof | 4 | 433 | 0.01 | 0.99 | 0.30 | 0.15 | 1.03 | -4.56 | 0.99 | 5.55 | -1.47 | 3.02 | 0.05 |
| anx | 5 | 433 | 0.00 | 1.00 | -0.02 | 0.01 | 1.14 | -2.07 | 1.78 | 3.85 | -0.10 | -0.73 | 0.05 |
| triv | 6 | 430 | 0.00 | 1.00 | -0.29 | -0.11 | 1.11 | -1.19 | 2.41 | 3.59 | 0.73 | -0.25 | 0.05 |
| vig | 7 | 431 | 0.00 | 1.00 | 0.02 | 0.03 | 0.90 | -3.01 | 1.84 | 4.85 | -0.36 | 0.21 | 0.05 |
| belexp | 8 | 433 | 0.01 | 0.99 | 0.09 | 0.10 | 0.89 | -5.21 | 1.29 | 6.51 | -0.98 | 2.02 | 0.05 |
| kindness | 9 | 101 | 0.01 | 1.00 | 0.39 | 0.14 | 0.83 | -2.42 | 0.95 | 3.37 | -0.90 | -0.32 | 0.10 |
| stability | 10 | 101 | 0.00 | 1.00 | 0.07 | 0.05 | 0.97 | -1.89 | 1.38 | 3.27 | -0.51 | -1.04 | 0.10 |
| strength | 11 | 101 | -0.01 | 1.00 | -0.23 | 0.17 | 1.61 | -3.50 | 0.85 | 4.35 | -1.41 | 2.02 | 0.10 |
orig <- colnames(df3)
colnames(df3)[85:96] <- c("Engineering Recognition", "Engineering Interest", "Future Time Perspective: Instrumentality", "Future Time Perspective: Perceptions of Future", "exp", "Test Anxiety", "Trivialization of Racism", "Vigilance Against Racism", "Engineering Belonging", "Value: Kindness", "Value: Stability", "Value: Strength")
out <- corr.test(subset(df3, select=c("Engineering Recognition", "Engineering Interest", "Future Time Perspective: Instrumentality", "Future Time Perspective: Perceptions of Future", "Test Anxiety", "Trivialization of Racism", "Vigilance Against Racism", "Engineering Belonging", "Value: Kindness", "Value: Stability", "Value: Strength")))
corrplot(out$r, type="upper", method = "color", tl.col = "black", tl.cex = .75,
p.mat = out$p,
sig.level = c(.001, .01, .05), pch.cex = .9,
insig = "label_sig", pch.col = "white",
order = "hclust")
colnames(df3) <- orig
Things done in this section: examine the relationships between the variables in separate regression models. Corrected (Benjamini-Hochberg) p-values were calculated and are reported in the comments in the code.
Note the small sample size! 48 white men and 14 white women.
df4 <- na.omit(subset(df3, select=c(kindness, stability, strength, triv, vig, re_white, ge_m, belexp)))
cross_cases(df4, re_white, ge_m)
| re_white | ge_m: FALSE | ge_m: TRUE |
|---|---|---|
| FALSE | 4 | 19 |
| TRUE | 14 | 48 |
| #Total cases | 18 | 67 |
# white men = 48
# white women = 14
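# restrict to the White subsample for the regressions and mediation analyses below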
df5 <- subset(df4, re_white == T)
triv <- lm(data = df5, triv ~ kindness + stability + strength)
summary(triv)
##
## Call:
## lm(formula = triv ~ kindness + stability + strength, data = df5)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.96596 -0.56427 0.06294 0.54870 1.64159
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.1822 0.1117 1.631 0.1084
## kindness -0.7312 0.1052 -6.954 3.51e-09 ***
## stability 0.2766 0.1150 2.405 0.0194 *
## strength 0.5312 0.1050 5.061 4.50e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.8468 on 58 degrees of freedom
## Multiple R-squared: 0.6219, Adjusted R-squared: 0.6024
## F-statistic: 31.8 on 3 and 58 DF, p-value: 2.755e-12
# (Intercept) kindness stability strength
# 0.135 0.000 0.029 0.000
plot_model(triv, type = "diag")
## [[1]]
##
## [[2]]
## `geom_smooth()` using formula = 'y ~ x'
##
## [[3]]
##
## [[4]]
## `geom_smooth()` using formula = 'y ~ x'
# plot_model(triv, type = "pred")
vig <- lm(data = df5, vig ~ kindness + stability + strength)
summary(vig)
##
## Call:
## lm(formula = vig ~ kindness + stability + strength, data = df5)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.7435 -0.3100 0.1038 0.3802 1.9802
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -0.03959 0.10391 -0.381 0.704591
## kindness 0.27738 0.09781 2.836 0.006284 **
## stability -0.43338 0.10695 -4.052 0.000153 ***
## strength -0.02782 0.09764 -0.285 0.776725
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.7876 on 58 degrees of freedom
## Multiple R-squared: 0.3296, Adjusted R-squared: 0.2949
## F-statistic: 9.505 on 3 and 58 DF, p-value: 3.359e-05
# (Intercept) kindness stability strength
# 0.755 0.012 0.001 0.777
plot_model(vig, type = "diag")
## [[1]]
##
## [[2]]
## `geom_smooth()` using formula = 'y ~ x'
##
## [[3]]
##
## [[4]]
## `geom_smooth()` using formula = 'y ~ x'
# plot_model(vig, type = "pred")
belxp <- lm(data = df5, belexp ~ kindness + stability + strength)
summary(belxp)
##
## Call:
## lm(formula = belexp ~ kindness + stability + strength, data = df5)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.06840 -0.32243 0.01255 0.38740 1.23516
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.36097 0.09027 3.999 0.000183 ***
## kindness -0.33339 0.08497 -3.924 0.000234 ***
## stability 0.16369 0.09291 1.762 0.083374 .
## strength 0.21298 0.08482 2.511 0.014850 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.6842 on 58 degrees of freedom
## Multiple R-squared: 0.3424, Adjusted R-squared: 0.3084
## F-statistic: 10.07 on 3 and 58 DF, p-value: 1.954e-05
# (Intercept) kindness stability strength
# 0.001 0.001 0.114 0.025
plot_model(belxp, type = "diag")
## [[1]]
##
## [[2]]
## `geom_smooth()` using formula = 'y ~ x'
##
## [[3]]
##
## [[4]]
## `geom_smooth()` using formula = 'y ~ x'
# plot_model(belxp, type = "pred")
belxp2 <- lm(data = df5, belexp ~ triv + vig)
# (Intercept) triv vig
# 0.002 0.000 0.479
summary(belxp2)
##
## Call:
## lm(formula = belexp ~ triv + vig, data = df5)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.3638 -0.3479 0.1230 0.2792 1.3405
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.30443 0.08589 3.545 0.000777 ***
## triv 0.35492 0.06968 5.093 3.87e-06 ***
## vig -0.08193 0.09976 -0.821 0.414787
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.6516 on 59 degrees of freedom
## Multiple R-squared: 0.3933, Adjusted R-squared: 0.3727
## F-statistic: 19.12 on 2 and 59 DF, p-value: 3.959e-07
pvals = c(summary(triv)$coefficients[,4],
summary(vig)$coefficients[,4],
summary(belxp)$coefficients[,4],
summary(belxp2)$coefficients[,4])
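# Benjamini-Hochberg correction applied across all coefficients from the four regressions above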
round(p.adjust(pvals, "BH"), digits=3)
## (Intercept) kindness stability strength (Intercept) kindness
## 0.135 0.000 0.029 0.000 0.755 0.012
## stability strength (Intercept) kindness stability strength
## 0.001 0.777 0.001 0.001 0.114 0.025
## (Intercept) triv vig
## 0.002 0.000 0.479
Note: mediate() from the psych package uses an ordinary least squares (regression-based) approach.
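# fix the random seed so the bootstrapped confidence intervals are reproducible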
set.seed(100)
options(scipen = 999)
bel_kind <- mediate(y = "belexp", x = "kindness", m = c("triv"), data = df5, n.iter = 1000, std = T)
bel_kind
##
## Mediation/Moderation Analysis
## Call: mediate(y = "belexp", x = "kindness", m = c("triv"), data = df5,
## n.iter = 1000, std = T)
##
## The DV (Y) was belexp . The IV (X) was kindness . The mediating variable(s) = triv .
##
## Total effect(c) of kindness on belexp = -0.44 S.E. = 0.12 t = -3.85 df= 60 with p = 0.00029
## Direct effect (c') of kindness on belexp removing triv = -0.12 S.E. = 0.13 t = -0.92 df= 59 with p = 0.36
## Indirect effect (ab) of kindness on belexp through triv = -0.33
## Mean bootstrapped indirect effect = -0.33 with standard error = 0.08 Lower CI = -0.48 Upper CI = -0.18
## R = 0.63 R2 = 0.4 F = 19.26 on 2 and 59 DF p-value: 0.00000000793
##
## To see the longer output, specify short = FALSE in the print statement or ask for the summary
summary(bel_kind)
## Call: mediate(y = "belexp", x = "kindness", m = c("triv"), data = df5,
## n.iter = 1000, std = T)
##
## Direct effect estimates (traditional regression) (c') X + M on Y
## belexp se t df Prob
## Intercept 0.00 0.10 0.00 59 1.0000000
## kindness -0.12 0.13 -0.92 59 0.3620000
## triv 0.55 0.13 4.38 59 0.0000487
##
## R = 0.63 R2 = 0.4 F = 19.26 on 2 and 59 DF p-value: 0.000000364
##
## Total effect estimates (c) (X on Y)
## belexp se t df Prob
## Intercept 0.00 0.12 0.00 60 1.000000
## kindness -0.44 0.12 -3.85 60 0.000291
##
## 'a' effect estimates (X on M)
## triv se t df Prob
## Intercept 0.0 0.1 0.00 60 1.000000000
## kindness -0.6 0.1 -5.74 60 0.000000328
##
## 'b' effect estimates (M on Y controlling for X)
## belexp se t df Prob
## triv 0.55 0.13 4.38 59 0.0000487
##
## 'ab' effect estimates (through all mediators)
## belexp boot sd lower upper
## kindness -0.33 -0.33 0.08 -0.48 -0.18
bel_stab <- mediate(y = "belexp", x = "stability", m = c("triv"), data = df5, n.iter = 1000, std = T)
bel_stab
##
## Mediation/Moderation Analysis
## Call: mediate(y = "belexp", x = "stability", m = c("triv"), data = df5,
## n.iter = 1000, std = T)
##
## The DV (Y) was belexp . The IV (X) was stability . The mediating variable(s) = triv .
##
## Total effect(c) of stability on belexp = 0.3 S.E. = 0.12 t = 2.45 df= 60 with p = 0.017
## Direct effect (c') of stability on belexp removing triv = 0.09 S.E. = 0.11 t = 0.83 df= 59 with p = 0.41
## Indirect effect (ab) of stability on belexp through triv = 0.21
## Mean bootstrapped indirect effect = 0.22 with standard error = 0.07 Lower CI = 0.09 Upper CI = 0.38
## R = 0.63 R2 = 0.39 F = 19.14 on 2 and 59 DF p-value: 0.00000000871
##
## To see the longer output, specify short = FALSE in the print statement or ask for the summary
summary(bel_stab)
## Call: mediate(y = "belexp", x = "stability", m = c("triv"), data = df5,
## n.iter = 1000, std = T)
##
## Direct effect estimates (traditional regression) (c') X + M on Y
## belexp se t df Prob
## Intercept 0.00 0.10 0.00 59 1.00000000
## stability 0.09 0.11 0.83 59 0.41000000
## triv 0.59 0.11 5.42 59 0.00000115
##
## R = 0.63 R2 = 0.39 F = 19.14 on 2 and 59 DF p-value: 0.000000393
##
## Total effect estimates (c) (X on Y)
## belexp se t df Prob
## Intercept 0.0 0.12 0.00 60 1.000
## stability 0.3 0.12 2.45 60 0.017
##
## 'a' effect estimates (X on M)
## triv se t df Prob
## Intercept 0.00 0.12 0.00 60 1.00000
## stability 0.36 0.12 2.99 60 0.00409
##
## 'b' effect estimates (M on Y controlling for X)
## belexp se t df Prob
## triv 0.59 0.11 5.42 59 0.00000115
##
## 'ab' effect estimates (through all mediators)
## belexp boot sd lower upper
## stability 0.21 0.22 0.07 0.09 0.38
bel_stre <- mediate(y = "belexp", x = "strength", m = c("triv"), data = df5, n.iter = 1000, std = T)
bel_stre
##
## Mediation/Moderation Analysis
## Call: mediate(y = "belexp", x = "strength", m = c("triv"), data = df5,
## n.iter = 1000, std = T)
##
## The DV (Y) was belexp . The IV (X) was strength . The mediating variable(s) = triv .
##
## Total effect(c) of strength on belexp = 0.35 S.E. = 0.12 t = 2.88 df= 60 with p = 0.0055
## Direct effect (c') of strength on belexp removing triv = 0.05 S.E. = 0.12 t = 0.41 df= 59 with p = 0.69
## Indirect effect (ab) of strength on belexp through triv = 0.3
## Mean bootstrapped indirect effect = 0.3 with standard error = 0.07 Lower CI = 0.17 Upper CI = 0.47
## R = 0.62 R2 = 0.39 F = 18.71 on 2 and 59 DF p-value: 0.000000012
##
## To see the longer output, specify short = FALSE in the print statement or ask for the summary
summary(bel_stre)
## Call: mediate(y = "belexp", x = "strength", m = c("triv"), data = df5,
## n.iter = 1000, std = T)
##
## Direct effect estimates (traditional regression) (c') X + M on Y
## belexp se t df Prob
## Intercept 0.00 0.10 0.00 59 1.00000000
## strength 0.05 0.12 0.41 59 0.68600000
## triv 0.60 0.12 5.07 59 0.00000423
##
## R = 0.62 R2 = 0.39 F = 18.71 on 2 and 59 DF p-value: 0.00000051
##
## Total effect estimates (c) (X on Y)
## belexp se t df Prob
## Intercept 0.00 0.12 0.00 60 1.00000
## strength 0.35 0.12 2.88 60 0.00549
##
## 'a' effect estimates (X on M)
## triv se t df Prob
## Intercept 0.0 0.11 0.00 60 1.0000000
## strength 0.5 0.11 4.51 60 0.0000306
##
## 'b' effect estimates (M on Y controlling for X)
## belexp se t df Prob
## triv 0.6 0.12 5.07 59 0.00000423
##
## 'ab' effect estimates (through all mediators)
## belexp boot sd lower upper
## strength 0.3 0.3 0.07 0.17 0.47
options(scipen = 0)
Increased valuing of kindness early in the semester is associated with decreased belonging/expectancy at the end of the semester
Kindness is negatively associated with trivialization
The relationship between kindness and belonging/expectancy is mediated by end-of-semester trivialization
Increased valuing of stability early in the semester is associated with increased belonging/expectancy at the end of the semester
Stability is positively associated with trivialization
The relationship between stability and belonging/expectancy is mediated by end-of-semester trivialization
Increased valuing of strength early in the semester is associated with increased belonging/expectancy at the end of the semester
Strength is positively associated with trivialization
The relationship between strength and belonging/expectancy is mediated by end-of-semester trivialization
Summary:
* Among White engineering students, three latent factors emerge from a collection of 12 values: kindness, stability, and strength.
* A series of multiple linear regressions suggests that students who value kindness more report lower belonging/expectancy, but also less trivialization and more vigilance. Those who value strength report more trivialization and belonging/expectancy, and those who value stability report less vigilance. Lastly, higher trivialization is associated with increased belonging/expectancy.
* A series of mediation analyses suggests that the relationships between values and belonging/expectancy are mediated by trivialization.