Summary

This exploratory pilot for Study 8 (n=8-11/condition * 15 conditions = 146 participants) was designed to assess the potential effect size for an effect of the proportion of generics on inductive potential.

Contrary to predictions, adults’ induction of novel features of a category was affected more by the raw count of generic statements heard about that category than by the proportion of generic to specific statements or the raw count of specific statements (see Induction task).

In addition, we do not replicate the main effect of condition or the pairwise differences between conditions for the subset of conditions used in Study 6 (see Replication of study 6).

These null results are likely because this was a very small pilot sample, underpowered to detect any effects. Next, we’ll re-run the same pilot with a larger sample size (n=30/condition * 15 conditions = 450 participants).

Methods

Participants

Data were collected from 146 adults (n=8-11/condition) via Prolific on 2/28/2025. Participants were required to be located in the United States, to be fluent in English, and to have not participated in prior studies under this protocol. Participants were paid $1.63 for an estimated 5-8 minute task.

Cell sizes by condition (num_generics = number of generic statements heard; total_utt = total number of statements heard):

num_generics total_utt n
0 0 10
0 4 11
0 8 9
0 12 8
0 16 11
4 4 11
4 8 7
4 12 9
4 16 9
8 8 9
8 12 11
8 16 10
12 12 9
12 16 10
16 16 10
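
For reference, this cell-size table can be reproduced with a one-line dplyr summary (a sketch, assuming a hypothetical one-row-per-participant data frame data_participants with columns num_generics and total_utt):

# per-condition cell sizes (sketch; data_participants is hypothetical)
library(dplyr)
data_participants %>% 
  count(num_generics, total_utt)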

Exclusion criteria

We intended to recruit 150 participants. However, only 146 completed the study; the remaining participants (n = 4) failed to complete due to technical issues (at least some caused by videos freezing when entering and exiting fullscreen on iOS mobile).

Of the 146 who completed the study, 2 participants (1.4% of all participants) were excluded for meeting at least 1 of the following exclusion criteria:

  • failing the attention check (i.e., did not select 100% on the slider when asked to do so during the induction task) (n = 0 participants)

  • admitting to use of AI after being explicitly informed that its use was prohibited (n = 2 participants)

  • failing the task check (n = 1 participant)

Participants who failed the sound check were included, since a few participants reported technical difficulties with the Qualtrics survey automatically advancing past that video.
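
In code, the exclusion logic amounts to a filter like the following (a sketch only; attention_check, admitted_ai_use, and passed_task_check are hypothetical stand-ins for the actual column names):

# sketch of the exclusion logic (hypothetical column names)
data_included <- data_raw %>% 
  filter(attention_check == 100,   # selected 100% on the slider when instructed
         !admitted_ai_use,         # did not admit to AI use
         passed_task_check)        # passed the task check
# note: failing the sound check does not trigger exclusion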

Demographics

age
mean sd n
40.35 13.05 144
  • The sample skewed young in age.
gender n prop
Female 76 52.8%
Male 64 44.4%
Non-binary 2 1.4%
Agender 1 0.7%
NA 1 0.7%
  • The sample reflected the diversity of the gender identities in the US.
race n prop
White, Caucasian, or European American 100 69.4%
Black or African American 17 11.8%
Hispanic or Latino/a 8 5.6%
South or Southeast Asian 5 3.5%
East Asian 3 2.1%
White, Caucasian, or European American; East Asian 2 1.4%
Indigenous American 1 0.7%
Native American, American Indian, or Alaska Native 1 0.7%
Prefer not to specify 1 0.7%
White, Caucasian, or European American; East Asian; Native Hawaiian or other Pacific Islander 1 0.7%
White, Caucasian, or European American; Hispanic or Latino/a 1 0.7%
White, Caucasian, or European American; Hispanic or Latino/a; Native American, American Indian, or Alaska Native 1 0.7%
White, Caucasian, or European American; Native American, American Indian, or Alaska Native 1 0.7%
White, Caucasian, or European American; South or Southeast Asian 1 0.7%
NA 1 0.7%
  • The sample was also racially diverse, with White Americans slightly overrepresented and Hispanic Americans underrepresented.
education n prop
Less than high school 3 2.1%
High school/GED 25 17.4%
Some college 44 30.6%
Bachelor's (B.A., B.S.) 52 36.1%
Master's (M.A., M.S.) 16 11.1%
Doctoral (Ph.D., J.D., M.D.) 3 2.1%
NA 1 0.7%
  • The sample was about evenly split on college completion.
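
Each demographic table above is a simple count-and-proportion summary; the gender table, for example, could be produced as follows (a sketch, again assuming a hypothetical data_participants data frame):

# sketch of a demographic summary (hypothetical data_participants)
data_participants %>% 
  count(gender) %>% 
  mutate(prop = scales::percent(n / sum(n), accuracy = 0.1)) %>% 
  arrange(desc(n))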

Procedure

This study was administered as a Qualtrics survey, and approved by the NYU IRB (IRB-FY2023-6812).

After providing their consent, participants completed a captcha, a pledge not to use AI, and a sound check. Participants then completed:

  1. Training phase: participants heard some number of generic statements and specific statements, depending on condition. Which features were mentioned was randomized, as was statement order (see the sketch after this list).

  2. Test phase (induction task): participants completed an induction task where they imagined seeing a Zarpie with a novel feature, and estimated the prevalence of that feature among Zarpies using a slider from 0 to 100 (initialized at 0). All participants completed the same 16 trials, with order of trials randomized.
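
To illustrate the training-phase randomization, the sketch below assembles a training list for a hypothetical participant in the 4/16 condition (illustrative only, not the actual Qualtrics logic; the feature labels are placeholders):

# sketch: assemble one participant's training statements (e.g., 4/16 condition)
set.seed(1)
n_generics <- 4
total_utt  <- 16
training <- data.frame(
  feature = sample(paste0("feature_", 1:16), total_utt),          # which features are mentioned
  type    = sample(rep(c("generic", "specific"),
                       c(n_generics, total_utt - n_generics)))    # which statements are generic
)
training <- training[sample(nrow(training)), ]                    # randomize statement order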

Participants then answered a few task-completion questions and demographic questions, and were debriefed.

Participant feedback

The most frequently reported problem was technical issues that caused incomplete submissions; such participants are not included in the data.

A few participants mentioned answering 100% to all test questions in the induction task because the questions seemed to imply that “ALL Zarpies” engage in the activities (see data file):

  • “I started answering questions 100% because I thought it was referring to ALL the Zarpie people as a whole did those things but realized that the voice didn’t say All, so just viewed them as normal people.” (participant in 4/4 generics condition)

  • “I felt strange answering 100% to every question, but it seemed implied in the way the questions were written that ALL Zarpies engage in the activities listed.” (participant in 8/16 generics condition)

  • “Slightly confused about what we were supposed to be basing the answers off of when not enough information was given to even form a guess. All of the things seem plausible given then previous information yet nothing was ever said about the percentage of them that do any of it so do we just assume all of them do those things? Then what would be the point in asking for a percentage?” (participant in 8/8 generics condition)

When asked to guess what the study was about, many participants reported that the study was about what we ascribe to groups of people from limited information.

Primary results

Induction task

There is a marginal effect of condition (\(\chi^2\)(14)=23.10, p=.059) on the inferred prevalence of novel features.

In separate models, the raw count of generics (t(142)=3.82, p=.00020) and the proportion of generics to specifics (t(132)=3.17, p=.0019) each predicted the inferred prevalence of novel features. There was also a marginal effect of the raw count of specifics (t(142)=-1.84, p=.067).

When the raw count of generics and the proportion of generics to specifics were entered into the same model, the raw count of generics predicted prevalence judgments (t(131)=2.22, p=.028), while the proportion did not (t(131)=0.03, p=.98).

Similarly, when the raw counts of generics and specifics were entered into the same model, the raw count of generics predicted prevalence judgments (t(141)=3.30, p=.0012), while the raw count of specifics did not (t(141)=-0.077, p=.94).

# packages assumed loaded earlier: lmerTest (lmer), car (Anova), emmeans, dplyr
# main effect of condition
lmer(prevalence ~ condition + (1|participant) + (1|test_feature),
     data = data_tidy) %>% 
  Anova()
## Analysis of Deviance Table (Type II Wald chisquare tests)
## 
## Response: prevalence
##            Chisq Df Pr(>Chisq)  
## condition 23.099 14    0.05868 .
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# raw number of generics
lmer(prevalence ~ num_generics + (1|participant) + (1|test_feature),
     data = data_tidy) %>% 
  summary()
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: prevalence ~ num_generics + (1 | participant) + (1 | test_feature)
##    Data: data_tidy
## 
## REML criterion at convergence: 21387.5
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.6809 -0.6343 -0.0877  0.5767  3.4641 
## 
## Random effects:
##  Groups       Name        Variance Std.Dev.
##  participant  (Intercept) 384.03   19.597  
##  test_feature (Intercept)  79.95    8.942  
##  Residual                 527.36   22.964  
## Number of obs: 2304, groups:  participant, 144; test_feature, 16
## 
## Fixed effects:
##              Estimate Std. Error       df t value          Pr(>|t|)    
## (Intercept)   31.2266     3.3427  59.7534   9.342 0.000000000000274 ***
## num_generics   1.2912     0.3379 141.9999   3.822          0.000198 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr)
## num_genercs -0.542
# raw number of specifics
lmer(prevalence ~ num_specifics + (1|participant) + (1|test_feature),
     data = data_tidy) %>% 
  summary()
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: prevalence ~ num_specifics + (1 | participant) + (1 | test_feature)
##    Data: data_tidy
## 
## REML criterion at convergence: 21398
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.6567 -0.6341 -0.0871  0.5782  3.4644 
## 
## Random effects:
##  Groups       Name        Variance Std.Dev.
##  participant  (Intercept) 416.14   20.400  
##  test_feature (Intercept)  79.95    8.942  
##  Residual                 527.36   22.964  
## Number of obs: 2304, groups:  participant, 144; test_feature, 16
## 
## Fixed effects:
##               Estimate Std. Error       df t value            Pr(>|t|)    
## (Intercept)    41.5584     3.3949  62.6346  12.241 <0.0000000000000002 ***
## num_specifics  -0.6460     0.3499 142.0002  -1.846              0.0669 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr)
## num_specfcs -0.544
# proportion of generics
lmer(prevalence ~ prop_generics + (1|participant) + (1|test_feature),
     data = data_tidy) %>% 
  summary()
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: prevalence ~ prop_generics + (1 | participant) + (1 | test_feature)
##    Data: data_tidy
## 
## REML criterion at convergence: 19888.6
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.6643 -0.6325 -0.0764  0.5701  3.4690 
## 
## Random effects:
##  Groups       Name        Variance Std.Dev.
##  participant  (Intercept) 406.55   20.163  
##  test_feature (Intercept)  85.83    9.265  
##  Residual                 522.59   22.860  
## Number of obs: 2144, groups:  participant, 134; test_feature, 16
## 
## Fixed effects:
##               Estimate Std. Error      df t value         Pr(>|t|)    
## (Intercept)     30.936      3.729  72.956   8.297 0.00000000000391 ***
## prop_generics   14.420      4.547 132.000   3.171          0.00189 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr)
## prop_genrcs -0.615
# raw number vs proportion of generics
lmer(prevalence ~ num_generics + prop_generics + (1|participant) + (1|test_feature),
     data = data_tidy) %>% 
  summary()
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: prevalence ~ num_generics + prop_generics + (1 | participant) +  
##     (1 | test_feature)
##    Data: data_tidy
## 
## REML criterion at convergence: 19882.9
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.6840 -0.6267 -0.0777  0.5716  3.4716 
## 
## Random effects:
##  Groups       Name        Variance Std.Dev.
##  participant  (Intercept) 393.83   19.845  
##  test_feature (Intercept)  85.83    9.265  
##  Residual                 522.59   22.860  
## Number of obs: 2144, groups:  participant, 134; test_feature, 16
## 
## Fixed effects:
##               Estimate Std. Error       df t value         Pr(>|t|)    
## (Intercept)    30.1377     3.7127  72.0231   8.117 0.00000000000924 ***
## num_generics    1.3831     0.6225 130.9999   2.222            0.028 *  
## prop_generics   0.2052     7.8109 130.9999   0.026            0.979    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) nm_gnr
## num_genercs -0.097       
## prop_genrcs -0.270 -0.819
# raw number of generics vs raw number of specifics
lmer(prevalence ~ num_generics + num_specifics + (1|participant) + (1|test_feature),
     data = data_tidy) %>% 
  summary()
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: prevalence ~ num_generics + num_specifics + (1 | participant) +  
##     (1 | test_feature)
##    Data: data_tidy
## 
## REML criterion at convergence: 21387.5
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.6818 -0.6340 -0.0874  0.5767  3.4649 
## 
## Random effects:
##  Groups       Name        Variance Std.Dev.
##  participant  (Intercept) 386.97   19.672  
##  test_feature (Intercept)  79.95    8.942  
##  Residual                 527.36   22.964  
## Number of obs: 2304, groups:  participant, 144; test_feature, 16
## 
## Fixed effects:
##                Estimate Std. Error        df t value       Pr(>|t|)    
## (Intercept)    31.46229    4.52591 119.14483   6.952 0.000000000207 ***
## num_generics    1.27670    0.38734 140.99980   3.296        0.00124 ** 
## num_specifics  -0.02993    0.38652 140.99980  -0.077        0.93839    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) nm_gnr
## num_genercs -0.677       
## num_specfcs -0.673  0.484

Secondary results

Replication of study 6

Three of the conditions in this pilot (baseline, 0/16, 16/16; n = 8-11/condition) are replications of the baseline, specific, and generic conditions in Study 6 (n = 90-99/condition). In Study 6, prevalence was rated highest in the generic condition, followed by the baseline condition, then the specific condition.

After subsetting to the Study 6 conditions, we do not replicate the main effect of condition (\(\chi^2\)(2) = 1.48, p = .48), nor the pairwise differences between conditions (ps > .78).

# same analysis as study 6
model <- lmer(prevalence ~ condition + (1|participant) + (1|test_feature),
     data = data_study_6)

model %>% 
  Anova()
## Analysis of Deviance Table (Type II Wald chisquare tests)
## 
## Response: prevalence
##            Chisq Df Pr(>Chisq)
## condition 1.4766  2     0.4779
model %>% 
  emmeans("condition") %>% 
  pairs(adjust = "bonferroni") %>% 
  summary()
##  contrast           estimate   SE df t.ratio p.value
##  (16/16) - baseline     9.66 10.3 28   0.937  1.0000
##  (16/16) - (0/16)      11.53 10.1 28   1.144  0.7864
##  baseline - (0/16)      1.87 10.1 28   0.186  1.0000
## 
## Degrees-of-freedom method: kenward-roger 
## P value adjustment: bonferroni method for 3 tests

Mixed conditions only

Mixed conditions (4/8, 4/12, 4/16, 8/12, 8/16, 12/16) feature a heightened contrast between generic and specific statements, since participants hear both types of statements.

After subsetting the sample to just the mixed conditions (n = 56 total, n = 8-11 per condition), none of the raw count of generics, the raw count of specifics, or the proportion of generics to specifics significantly predicted prevalence judgments (ps > .32).

# raw number of generics
lmer(prevalence ~ num_generics + (1|participant) + (1|test_feature),
     data = data_mixed_only) %>% 
  summary()
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: prevalence ~ num_generics + (1 | participant) + (1 | test_feature)
##    Data: data_mixed_only
## 
## REML criterion at convergence: 8366.2
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.6656 -0.6067 -0.1097  0.5838  2.9711 
## 
## Random effects:
##  Groups       Name        Variance Std.Dev.
##  participant  (Intercept) 398.8    19.97   
##  test_feature (Intercept) 100.7    10.04   
##  Residual                 549.4    23.44   
## Number of obs: 896, groups:  participant, 56; test_feature, 16
## 
## Fixed effects:
##              Estimate Std. Error      df t value  Pr(>|t|)    
## (Intercept)   32.7674     7.4812 64.0895   4.380 0.0000449 ***
## num_generics   0.9287     0.9347 54.0000   0.994     0.325    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr)
## num_genercs -0.866
# raw number of specifics
lmer(prevalence ~ num_specifics + (1|participant) + (1|test_feature),
     data = data_mixed_only) %>% 
  summary()
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: prevalence ~ num_specifics + (1 | participant) + (1 | test_feature)
##    Data: data_mixed_only
## 
## REML criterion at convergence: 8367.1
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.6507 -0.6075 -0.1147  0.5851  2.9743 
## 
## Random effects:
##  Groups       Name        Variance Std.Dev.
##  participant  (Intercept) 406.4    20.16   
##  test_feature (Intercept) 100.7    10.04   
##  Residual                 549.4    23.44   
## Number of obs: 896, groups:  participant, 56; test_feature, 16
## 
## Fixed effects:
##               Estimate Std. Error      df t value  Pr(>|t|)    
## (Intercept)    38.1169     7.3452 64.3392   5.189 0.0000023 ***
## num_specifics   0.1633     0.9495 54.0001   0.172     0.864    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr)
## num_specfcs -0.859
# proportion of generics
lmer(prevalence ~ prop_generics + (1|participant) + (1|test_feature),
     data = data_mixed_only) %>% 
  summary()
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: prevalence ~ prop_generics + (1 | participant) + (1 | test_feature)
##    Data: data_mixed_only
## 
## REML criterion at convergence: 8361.3
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.6572 -0.6089 -0.1100  0.5825  2.9752 
## 
## Random effects:
##  Groups       Name        Variance Std.Dev.
##  participant  (Intercept) 405.2    20.13   
##  test_feature (Intercept) 100.7    10.04   
##  Residual                 549.4    23.44   
## Number of obs: 896, groups:  participant, 56; test_feature, 16
## 
## Fixed effects:
##               Estimate Std. Error     df t value Pr(>|t|)    
## (Intercept)     35.723      8.948 61.661   3.992 0.000177 ***
## prop_generics    6.817     15.907 54.000   0.429 0.669968    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr)
## prop_genrcs -0.907
# raw number vs proportion of generics
lmer(prevalence ~ num_generics + prop_generics + (1|participant) + (1|test_feature),
     data = data_mixed_only) %>% 
  summary()
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: prevalence ~ num_generics + prop_generics + (1 | participant) +  
##     (1 | test_feature)
##    Data: data_mixed_only
## 
## REML criterion at convergence: 8356.6
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.6682 -0.6052 -0.1070  0.5873  2.9648 
## 
## Random effects:
##  Groups       Name        Variance Std.Dev.
##  participant  (Intercept) 400.4    20.01   
##  test_feature (Intercept) 100.7    10.04   
##  Residual                 549.4    23.44   
## Number of obs: 896, groups:  participant, 56; test_feature, 16
## 
## Fixed effects:
##               Estimate Std. Error      df t value Pr(>|t|)    
## (Intercept)     37.183      8.978  60.512   4.142 0.000109 ***
## num_generics     2.428      1.922  53.000   1.263 0.212076    
## prop_generics  -28.999     32.470  53.000  -0.893 0.375838    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) nm_gnr
## num_genercs  0.129       
## prop_genrcs -0.551 -0.873
# raw number of generics vs raw number of specifics
lmer(prevalence ~ num_generics + num_specifics + (1|participant) + (1|test_feature),
     data = data_mixed_only) %>% 
  summary()
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: prevalence ~ num_generics + num_specifics + (1 | participant) +  
##     (1 | test_feature)
##    Data: data_mixed_only
## 
## REML criterion at convergence: 8363.3
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.6667 -0.6053 -0.1051  0.5846  2.9648 
## 
## Random effects:
##  Groups       Name        Variance Std.Dev.
##  participant  (Intercept) 400.7    20.02   
##  test_feature (Intercept) 100.7    10.04   
##  Residual                 549.4    23.44   
## Number of obs: 896, groups:  participant, 56; test_feature, 16
## 
## Fixed effects:
##               Estimate Std. Error      df t value Pr(>|t|)
## (Intercept)    22.4345    14.0402 56.2931   1.598    0.116
## num_generics    1.4733     1.1266 52.9999   1.308    0.197
## num_specifics   0.9874     1.1345 52.9999   0.870    0.388
## 
## Correlation of Fixed Effects:
##             (Intr) nm_gnr
## num_genercs -0.854       
## num_specfcs -0.846  0.555

Training features

Participants heard a randomly selected subset of the 16 training features, with anywhere from 0 to 16 features heard depending on condition. Did the particular training features received matter?
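
One exploratory way to probe this (a sketch we have not run; training_set is a hypothetical column encoding which training features a given participant heard) would be to add a random intercept for the particular set of training features:

# exploratory sketch: random intercept for the set of training features heard
lmer(prevalence ~ num_generics + (1|participant) + (1|test_feature) + (1|training_set),
     data = data_tidy) %>% 
  summary()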

Test features order effects

All participants saw and rated the prevalence of the same set of 16 test features, in random order. Did the order in which test features were presented affect prevalence judgments?
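
The model below crosses the raw count of generics, the proportion of generics, and test feature order (the call is reconstructed here from the formula in the output that follows):

# test feature order effects (call reconstructed from the output's formula)
lmer(prevalence ~ num_generics * prop_generics * test_feature_order +
       (1|participant) + (1|test_feature),
     data = data_tidy) %>% 
  summary()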

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: prevalence ~ num_generics * prop_generics * test_feature_order +  
##     (1 | participant) + (1 | test_feature)
##    Data: data_tidy
## 
## REML criterion at convergence: 19882.9
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.7501 -0.6205 -0.0806  0.5634  3.4532 
## 
## Random effects:
##  Groups       Name        Variance Std.Dev.
##  participant  (Intercept) 397.07   19.927  
##  test_feature (Intercept)  84.81    9.209  
##  Residual                 521.33   22.833  
## Number of obs: 2144, groups:  participant, 134; test_feature, 16
## 
## Fixed effects:
##                                                 Estimate Std. Error         df
## (Intercept)                                     29.91797    4.29954  117.98850
## num_generics                                     0.02604    1.35947  205.31243
## prop_generics                                   10.51210    9.26687  205.54911
## test_feature_order                               0.05624    0.19375 1991.79725
## num_generics:prop_generics                       0.63377    1.45304  205.31249
## num_generics:test_feature_order                  0.13730    0.07259 1992.13712
## prop_generics:test_feature_order                -1.26822    0.49540 1992.63835
## num_generics:prop_generics:test_feature_order   -0.04668    0.07759 1992.13742
##                                               t value       Pr(>|t|)    
## (Intercept)                                     6.958 0.000000000206 ***
## num_generics                                    0.019         0.9847    
## prop_generics                                   1.134         0.2580    
## test_feature_order                              0.290         0.7717    
## num_generics:prop_generics                      0.436         0.6632    
## num_generics:test_feature_order                 1.891         0.0587 .  
## prop_generics:test_feature_order               -2.560         0.0105 *  
## num_generics:prop_generics:test_feature_order  -0.602         0.5475    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) nm_gnr prp_gn tst_f_ nm_g:_ nm_:__ pr_:__
## num_genercs -0.364                                          
## prop_genrcs -0.364 -0.133                                   
## tst_ftr_rdr -0.383  0.195  0.195                            
## nm_gnrcs:p_  0.368 -0.857 -0.313 -0.197                     
## nm_gnrcs:__  0.164 -0.454  0.061 -0.429  0.389              
## prp_gnrc:__  0.165  0.061 -0.454 -0.430  0.142 -0.135       
## nm_gnr:_:__ -0.166  0.389  0.142  0.434 -0.454 -0.856 -0.313

There was no main effect of test feature order on prevalence judgments, although there was a significant interaction between order and the proportion of generics, and a marginal interaction between order and the raw count of generics.

Power analysis