Primary results
Induction task
Unless otherwise specified, analyses of the induction task were mixed-effects beta regressions with a logit link, predicting prevalence (.01-.99), with random intercepts for participant and test feature. Test feature (“can snap with their toes”, etc.) is technically nested within test feature type (physical, diet, personality), but because each test feature is unique to one test feature type, a model with the nesting term is analytically equivalent to one without it, so the nesting term was omitted for simplicity of specification.
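To make that equivalence concrete, here is a minimal sketch (assuming the data_tidy columns used throughout this script, and that glmmTMB is available): a nested random-effect term (1 | test_feature_type:test_feature) defines the same grouping as (1 | test_feature), so the two fits have identical likelihoods.
# Sketch only: each test feature occurs in exactly one feature type, so the
# nested grouping test_feature_type:test_feature has the same levels as
# test_feature, and the two random-intercept specifications fit the same model.
library(glmmTMB)
glmm_nested <-
  glmmTMB(prevalence ~ condition +
            (1 | participant) + (1 | test_feature_type:test_feature),
          data = data_tidy,
          family = beta_family(link = "logit"))
glmm_unnested <-
  glmmTMB(prevalence ~ condition +
            (1 | participant) + (1 | test_feature),
          data = data_tidy,
          family = beta_family(link = "logit"))
logLik(glmm_nested)    # identical to logLik(glmm_unnested) up to numerical precision
logLik(glmm_unnested)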
By condition x test feature type
# condition x test feature type
library(glmmTMB)  # mixed-effects beta regression
library(car)      # Anova()
library(dplyr)    # %>%

glmm_condition_testfeaturetype <-
  glmmTMB(prevalence ~ condition * test_feature_type + (1 | participant),
          data = data_tidy,
          family = beta_family(link = "logit"))

glmm_condition_testfeaturetype %>%
  Anova()
There is no significant interaction between condition and test feature type on prevalence (\(\chi^2\)(4) = 4.38, p = .358), based on an analysis of deviance (Type II Wald chi-square tests) of a beta regression (logit link) with condition, test feature type, and their interaction as fixed effects, and with random intercepts per participant.
The only significant effect was a main effect of test feature type (\(\chi^2\)(2) = 230.59, p < .001).
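To see how prevalence estimates pattern across feature types, estimated marginal means can be computed from the fitted model; the sketch below assumes the emmeans package, which is not otherwise used in this script.
library(emmeans)
# Estimated marginal mean prevalence for each test feature type,
# back-transformed from the logit scale to the response (prevalence) scale
emmeans(glmm_condition_testfeaturetype, ~ test_feature_type, type = "response")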
# condition x test feature type (Bayesian)
library(brms)

brm_condition_testfeaturetype <-
  brm(prevalence ~ condition * test_feature_type + (1 | participant),
      data = data_tidy,
      family = Beta(link = "logit"),        # brms's beta family (not glmmTMB::beta_family)
      save_pars = save_pars(all = TRUE))    # required for bridge-sampling Bayes factors

# vs null (no interaction)
brm_condition_testfeaturetype_null <-
  brm(prevalence ~ condition + test_feature_type + (1 | participant),
      data = data_tidy,
      family = Beta(link = "logit"),
      save_pars = save_pars(all = TRUE))

bf_condition_testfeaturetype <-
  bayes_factor(brm_condition_testfeaturetype, brm_condition_testfeaturetype_null)
A Bayesian analysis revealed very strong evidence against an interaction between condition and test feature type on prevalence (BF = 0, favoring the model without the interaction), comparing Bayesian beta regressions (logit link) with and without the interaction term, using brms's default priors.
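For reporting, the same Bayes factor can also be expressed in the direction favoring the simpler model; a brief sketch, assuming the object returned by bayes_factor() above exposes the estimate as $bf (as bridgesampling results do):
# bayes_factor() above compares the interaction model against the
# no-interaction model; the reciprocal gives the evidence for the null
bf_condition_testfeaturetype$bf
1 / bf_condition_testfeaturetype$bf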
By condition x test feature
We can look at how prevalence judgments vary by condition and individual test features.
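A simple way to tabulate these cell means (a sketch, assuming dplyr and the data_tidy columns used above):
# Mean prevalence estimate for each condition x test feature cell
library(dplyr)
data_tidy %>%
  group_by(condition, test_feature_type, test_feature) %>%
  summarise(mean_prevalence = mean(prevalence), .groups = "drop") %>%
  arrange(condition, test_feature_type, test_feature)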
By test feature type match
Another way to look at the data is to code responses by whether the test feature type matched the feature type that received generic statements in that condition. A response was coded as a match when the test feature belonged to the feature type about which the participant heard generics (e.g., a participant in the diet-generics condition answering a diet test question), and as a mismatch when it belonged to a feature type about which they heard only specific statements (e.g., a participant in the diet-generics condition answering a personality test question).
If the chosen clusters capture some systematicity in how people generalize, matches should yield higher prevalence estimates than mismatches. That is the direction we observe, although the effect is only marginal (see below).
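The condition_test_match variable used in the model below could be derived along these lines; this is a hypothetical sketch that assumes the condition labels name the feature type that received generics and share the levels of test_feature_type, which may not reflect the actual coding.
# Hypothetical derivation of the match variable (actual coding may differ):
# match = the test feature's type is the type that received generics in that condition
library(dplyr)
data_tidy <- data_tidy %>%
  mutate(condition_test_match =
           if_else(as.character(condition) == as.character(test_feature_type),
                   "match", "mismatch"))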
# match vs. mismatch (condition_test_match)
glmm_condition_test_match <-
  glmmTMB(prevalence ~ condition_test_match + (1 | participant) + (1 | test_feature),
          data = data_tidy,
          family = beta_family(link = "logit"))

glmm_condition_test_match %>%
  Anova()
## Analysis of Deviance Table (Type II Wald chisquare tests)
##
## Response: prevalence
##                       Chisq Df Pr(>Chisq)
## condition_test_match 2.8028  1     0.0941 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
There is a marginal effect of whether participants heard generics or specifics about features of the test feature type on prevalence (\(\chi^2\)(1) = 2.80, p = .094), based on an analysis of deviance (Type II Wald chi-square test) of a beta regression (logit link) with match as the only fixed effect and with participant and test feature as random intercepts.
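To confirm the direction of this (marginal) difference, the model's estimated marginal means by match status can be inspected; again a sketch assuming the emmeans package.
library(emmeans)
# Estimated prevalence for match vs. mismatch trials (back-transformed from the
# logit scale); the pairwise comparison is reported as an odds ratio
emmeans(glmm_condition_test_match, pairwise ~ condition_test_match, type = "response")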
Group characterization
Participants were asked to describe what characterizes Zarpies as a group. TBD