Fitting psychometric functions in R

Load packages

library(tidyverse)
library(brms)
library(tidybayes)

Simulation

We will first simulate some data and then demonstrate how to fit a psychometric function to them. This is important for two reasons:

  1. Well-behaved data: We want to have control over how the data were created, and we want to know the true parameters that were used to generate the data.
  2. Parameter recovery: We want to make sure our data analysis is able to recover the true parameters.

The following function takes a set of parameters of interest and generates binary responses in a psychophysical experiment. The simulated observer produces yes responses based on the stimulus visibility (i.e., the stimulus intensity). The function also includes lower and upper asymptotes (guess and lapse rates, as defined by Wichmann and Hill).
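
Concretely, the probability of a yes response at visibility \(x\) is

\[
\psi(x) = \gamma + (1 - \gamma - \lambda)\,\Phi\!\left(\frac{x - \mu}{\sigma}\right) = \gamma + (1 - \gamma - \lambda)\,\Phi(b_0 + b_1 x),
\]

where \(\gamma\) is the guess rate, \(\lambda\) the lapse rate, \(\Phi\) the standard normal CDF, and \(b_0 = -\mu/\sigma\), \(b_1 = 1/\sigma\). This is exactly what the mutate() call below computes.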

simulate_data <- function(mu = 0.05, 
                          sigma = 0.025, 
                          guess = 0, 
                          lapse = 0, 
                          ID = 1, 
                          nreps = 5) {
    # convert threshold (mu) and slope (sigma) into intercept and slope of the
    # linear predictor: Phi(b0 + b1 * x) = Phi((x - mu) / sigma)
    b0 <- -mu / sigma
    b1 <- 1 / sigma

    df <- expand_grid(
        ID = ID,
        rep = seq(1, nreps),
        visibility = c(0, 0.037, 0.048, 0.053, 0.061, 0.073, 0.140),
        mu = mu,
        sigma = sigma,
        b0 = b0,
        b1 = b1,
        guess = guess,
        lapse = lapse
    ) |>
        mutate(
            theta = guess + (1 - guess - lapse) * pnorm(b0 + b1 * visibility),
            say_yes = rbinom(n = length(theta), size = 1, prob = theta)
        )
    df <- df |>
        arrange(ID, visibility, rep)
    return(df)
}

Generate a dataset with 1 participant, 100 repetitions per visibility level (700 trials in total), a guess rate of 0.2, and a lapse rate of 0.1. We will leave the threshold (mu) and slope (sigma) at their default values of 0.05 and 0.025, respectively.

The threshold \(\mu\) is the intensity at which the observer responds yes with probability \(0.5\) on the underlying curve.
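
Note that with non-zero asymptotes the observed probability of a yes response at the threshold is not \(0.5\) but \(\gamma + (1 - \gamma - \lambda) \cdot 0.5\). A quick check with the parameters used here:

# at visibility == mu, the linear predictor b0 + b1 * mu equals 0, so pnorm() returns 0.5;
# with guess = 0.2 and lapse = 0.1 the yes-probability at the threshold is therefore
0.2 + (1 - 0.2 - 0.1) * pnorm(0)
# 0.2 + 0.7 * 0.5 = 0.55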

set.seed(343)
dat1 <- simulate_data(guess = 0.2, lapse = 0.1, nreps = 100, ID = 1)

We can summarize the data and plot the psychometric function using the following function:

plot_psychometric_curve <- function(df, mu = 0.05) {
    df |>
        ggplot(aes(visibility, prop_yes)) +
        geom_point() +
        geom_smooth(
            method = "glm", se = FALSE,
            method.args = list(family = "binomial")
        ) +
        geom_vline(xintercept = mu, linetype = "dashed") +
        scale_color_viridis_d(begin = 0, end = 0.8) +
        facet_wrap(~ID) +
        theme_linedraw()
}
datsum1 <- dat1 |>
    group_by(ID, visibility) |>
    summarize(
        say_yes = sum(say_yes),
        N = n(),
        prop_yes = say_yes / N
    )
`summarise()` has grouped output by 'ID'. You can override using the `.groups`
argument.
datsum1 |>
    plot_psychometric_curve()

Fitting a psychometric function using brms

brms is an R package that can be used to perform Bayesian inference in a wide variety of regression models. It is based on the probabilistic programming language Stan, and it is very flexible and powerful.

Priors

We will use the following priors for the parameters:

priors <- c(
    prior(student_t(2, 0, 1), class = "b", coef = "Intercept", nlpar = "eta"),
    prior(student_t(7, 0, 10), class = "b", coef = "visibility", nlpar = "eta"),
    prior(beta(1, 20), nlpar = "lapse", lb = 0, ub = .1),
    prior(beta(1, 20), nlpar = "guess", lb = 0, ub = .1)
)
priors |>
    parse_dist(prior) |>
    # filter(.dist == "student_t") |>
    filter(nlpar == "eta") |>
    ggplot(aes(y = class, dist = .dist, args = .args)) +
    stat_dist_halfeye() +
    facet_wrap(~coef, scales = "free_y") +
    labs(
        title = "stat_dist_halfeye()",
        subtitle = "with brms::prior() and tidybayes::parse_dist() to visualize priors",
        x = NULL
    ) +
    theme_tidybayes()

priors |>
    parse_dist(prior) |>
    filter(.dist != "student_t") |>
    ggplot(aes(y = class, dist = .dist, args = .args)) +
    stat_dist_halfeye() +
    facet_wrap(~nlpar, scales = "free_y") +
    labs(
        title = "stat_dist_halfeye()",
        subtitle = "with brms::prior() and tidybayes::parse_dist() to visualize priors",
        x = NULL
    ) +
    theme_tidybayes()

Model specification

Formula for non-linear model:

f <- bf(
    say_yes ~ guess + (1 - guess - lapse) * Phi(eta),
    eta ~ 0 + Intercept + visibility,
    guess ~ 1,
    lapse ~ 1,
    family = bernoulli(link = "identity"),
    nl = TRUE
)

Priors:

priors <- c(
    prior(student_t(2, 0, 1), class = "b", coef = "Intercept", nlpar = "eta"),
    prior(student_t(7, 0, 10), class = "b", coef = "visibility", nlpar = "eta"),
    prior(beta(1, 20), nlpar = "lapse", lb = 0, ub = .1),
    prior(beta(1, 20), nlpar = "guess", lb = 0, ub = .1)
)
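
Before fitting, it can be helpful to check what brms makes of this specification. As an optional inspection step (using the formula f and the priors defined above), we can print the Stan code that brms generates:

# print the generated Stan code for the non-linear model (optional)
make_stancode(f, data = dat1, prior = priors)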

Fit model:

fit1 <- brm(
    f,
    data = dat1,
    init = 0,
    control = list(adapt_delta = 0.99),
    prior = priors,
    file = "fit1"
)
summary(fit1)
 Family: bernoulli 
  Links: mu = identity 
Formula: say_yes ~ guess + (1 - guess - lapse) * Phi(eta) 
         eta ~ 0 + Intercept + visibility
         guess ~ 1
         lapse ~ 1
   Data: dat1 (Number of observations: 700) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Population-Level Effects: 
                Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
eta_Intercept      -0.74      0.14    -1.05    -0.50 1.00     1240     1363
eta_visibility     17.46      2.76    12.74    23.40 1.00     1245     1618
guess_Intercept     0.03      0.03     0.00     0.09 1.00     1596     1579
lapse_Intercept     0.05      0.03     0.00     0.10 1.00     1617     1613

Draws were sampled using sample(hmc). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).

The parameters eta_Intercept and eta_visibility are the regression coefficients \(b_0\) and \(b_1\) in the linear predictor \(\eta = b_0 + b_1 \cdot \text{visibility}\).

They can be converted into \(\mu\) and \(\sigma\):

\[
\mu = -\frac{b_0}{b_1}, \qquad \sigma = \frac{1}{b_1}.
\]
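
The point estimates can be obtained from the posterior means of the coefficients; a quick sketch using brms::fixef() (the full posterior distributions of \(\mu\) and \(\sigma\) are derived below with extract_params()):

# convert the posterior means of b0 and b1 into mu and sigma
b <- fixef(fit1)
b0 <- b["eta_Intercept", "Estimate"]
b1 <- b["eta_visibility", "Estimate"]
c(mu = -b0 / b1, sigma = 1 / b1)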

\(\mu\) is estimated to be approximately 0.042 and \(\sigma\) to be approximately 0.057.

For now, \(\mu\) is close enough. \(\sigma\) is off by quite a bit, but there probably aren't enough data points to estimate the variance, especially in the presence of guesses and lapses.

plot(conditional_effects(fit1), points = TRUE, ask = FALSE)
`geom_line()`: Each group consists of only one observation.
ℹ Do you need to adjust the group aesthetic?

Extract and summarize parameters

extract_params <- function(fit) {
    pars <- fit |>
        spread_draws(
            b_eta_Intercept,
            b_eta_visibility,
            b_guess_Intercept,
            b_lapse_Intercept
        ) |>
        mutate(
            b0 = b_eta_Intercept,
            b1 = b_eta_visibility,
            mu = -b0 / b1,
            sigma = 1 / b1,
            guess = b_guess_Intercept,
            lapse = b_lapse_Intercept
        ) |>
        select(mu, sigma, guess, lapse) |>
        pivot_longer(cols = mu:lapse, names_to = "param", values_to = "value")
    pars
}
summarise_params <- function(params) {
    params |>
        group_by(param) |>
        mean_qi()
}

Example:

params1 <- fit1 |> 
    extract_params()
params1 |>
    summarise_params()
# A tibble: 4 × 7
  param  value  .lower .upper .width .point .interval
  <chr>  <dbl>   <dbl>  <dbl>  <dbl> <chr>  <chr>    
1 guess 0.0333 0.00104 0.0934   0.95 mean   qi       
2 lapse 0.0469 0.00255 0.0960   0.95 mean   qi       
3 mu    0.0426 0.0339  0.0520   0.95 mean   qi       
4 sigma 0.0587 0.0427  0.0785   0.95 mean   qi       

Several participants

fit_model <- function(df) {
    priors <- c(
        prior(student_t(2, 0, 1), class = "b", coef = "Intercept", nlpar = "eta"),
        prior(student_t(7, 0, 10), class = "b", coef = "visibility", nlpar = "eta"),
        prior(beta(1, 20), nlpar = "lapse", lb = 0, ub = .1),
        prior(beta(1, 20), nlpar = "guess", lb = 0, ub = .1)
    )
    nl_formula <- bf(
        say_yes ~ guess + (1 - guess - lapse) * Phi(eta),
        eta ~ 0 + Intercept + visibility,
        guess ~ 1,
        lapse ~ 1,
        family = bernoulli(link = "identity"),
        nl = TRUE
    )

    fit <- brm(
        nl_formula,
        data = df,
        init = 0,
        control = list(adapt_delta = 0.99),
        prior = priors
    )
}
set.seed(343)
dat2 <- simulate_data(guess = 0.2, lapse = 0.1, nreps = 100, ID = 1:3)
dat2 <- dat2 |>
    group_by(ID) |>
    nest()
dat2 <- dat2 |>
    mutate(fit = map(data, fit_model))
dat2 |>
    mutate(params = map(fit, extract_params)) |>
    mutate(summary = map(params, summarise_params)) |>
    unnest(summary)

Analyze IMAPC data

df <- read_csv("data/experiment-data.csv") |>
  mutate(across(
    c(
      participant_id,
      gender,
      handedness,
      response
    ),
    as_factor
  )) |>
  filter(did_participant_really_imagine == "yes")
Rows: 40824 Columns: 15
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (10): gender, handedness, response, test_part, did_participant_really_im...
dbl  (4): participant_id, age, rt, visibility
lgl  (1): correct

ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
df <- df |>
  select(
    ID = participant_id,
    imagined_direction,
    dot_direction,
    visibility,
    response, rt,
    correct, correct_response
  )


df <- df |>
  mutate(
    condition = case_when(
      imagined_direction == "nothing" ~ "Neutral",
      imagined_direction == dot_direction ~ "Congruent",
      imagined_direction != dot_direction ~ "Incongruent"
    ),
    correct = as.numeric(correct)
  )

df <- df |>
  mutate(across(c(imagined_direction, dot_direction, condition), as_factor))
df <- df |>
  mutate(
    # a trial is "signal present" if the stimulus had non-zero visibility;
    # the observer said yes if they were correct on a present trial or
    # incorrect on an absent trial
    present = if_else(visibility > 0, 1, 0),
    say_yes = if_else(correct == present, 1, 0)
  )
# relabel participant IDs with consecutive integer levels
ID_levels <- levels(df$ID)
new_levels <- 1:length(ID_levels)
names(ID_levels) <- new_levels

df <- df |>
  mutate(
    ID = fct_recode(ID, !!!ID_levels)
  )
df_sum <- df |>
  group_by(ID, condition, visibility) |>
  summarize(
    say_yes = sum(say_yes),
    N = n(),
    prop_yes = say_yes / N
  )
`summarise()` has grouped output by 'ID', 'condition'. You can override using
the `.groups` argument.
plot_psychometric_curve <- function(df) {
  df |>
    ggplot(aes(visibility, prop_yes, color = condition)) +
    geom_point() +
    geom_smooth(
      method = "glm", se = FALSE,
      method.args = list(family = "binomial")
    ) +
    scale_color_viridis_d(begin = 0, end = 0.8) +
    facet_wrap(~ID) +
    theme_tidybayes()
}
df_sum |>
  plot_psychometric_curve()

Fit brms model

fit_model <- function(df) {
    priors <- c(
        prior(student_t(2, 0, 1), class = "b",
              coef = "Intercept", nlpar = "eta"),
        prior(student_t(7, 0, 10), class = "b", 
              coef = "visibility", nlpar = "eta"),
        prior(beta(1, 20), nlpar = "lapse", lb = 0, ub = .1),
        prior(beta(1, 20), nlpar = "guess", lb = 0, ub = .1)
    )
    nl_formula <- bf(
        say_yes ~ guess + (1 - guess - lapse) * Phi(eta),
        eta ~ 0 + Intercept + visibility,
        guess ~ 1,
        lapse ~ 1,
        family = bernoulli(link = "identity"),
        nl = TRUE
    )

    fit <- brm(
        nl_formula,
        data = df,
        init = 0,
        control = list(adapt_delta = 0.99),
        prior = priors
    )
}
extract_params <- function(fit) {
    pars <- fit |>
        spread_draws(
            b_eta_Intercept,
            b_eta_visibility,
            b_guess_Intercept,
            b_lapse_Intercept
        ) |>
        mutate(
            b0 = b_eta_Intercept,
            b1 = b_eta_visibility,
            mu = -b0 / b1,
            sigma = 1 / b1,
            guess = b_guess_Intercept,
            lapse = b_lapse_Intercept
        ) |>
        select(mu, sigma, guess, lapse) |>
        pivot_longer(cols = mu:lapse, names_to = "param", values_to = "value")
    pars
}
summarise_params <- function(params) {
    params |>
        group_by(param) |>
        mean_qi()
}
by_ID_condition <- df |>
    # filter(ID %in% c(2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14)) |>
    group_by(ID, condition) |>
    nest() |>
    mutate(fit = map(data, fit_model))
by_ID_condition <- by_ID_condition |>
    mutate(params = map(fit, extract_params))

by_ID_condition <- by_ID_condition |>
    mutate(summary = map(params, summarise_params)) |>
    unnest(summary)

by_ID_condition |> write_rds("by_ID_condition.rds")
by_ID_condition <- read_rds("by_ID_condition.rds")
by_ID_condition
# A tibble: 912 × 12
# Groups:   ID, condition [228]
   ID    condition   data     fit       params   param  value   .lower .upper
   <fct> <fct>       <list>   <list>    <list>   <chr>  <dbl>    <dbl>  <dbl>
 1 81    Neutral     <tibble> <brmsfit> <tibble> guess 0.0382 0.00205  0.0905
 2 81    Neutral     <tibble> <brmsfit> <tibble> lapse 0.0191 0.000480 0.0622
 3 81    Neutral     <tibble> <brmsfit> <tibble> mu    0.0584 0.0513   0.0664
 4 81    Neutral     <tibble> <brmsfit> <tibble> sigma 0.0277 0.0153   0.0440
 5 81    Congruent   <tibble> <brmsfit> <tibble> guess 0.0300 0.00113  0.0862
 6 81    Congruent   <tibble> <brmsfit> <tibble> lapse 0.0170 0.000448 0.0611
 7 81    Congruent   <tibble> <brmsfit> <tibble> mu    0.0354 0.0257   0.0431
 8 81    Congruent   <tibble> <brmsfit> <tibble> sigma 0.0270 0.0177   0.0402
 9 81    Incongruent <tibble> <brmsfit> <tibble> guess 0.0206 0.000694 0.0731
10 81    Incongruent <tibble> <brmsfit> <tibble> lapse 0.0208 0.000406 0.0744
# ℹ 902 more rows
# ℹ 3 more variables: .width <dbl>, .point <chr>, .interval <chr>
by_ID_condition |>
  filter(ID %in% c(9, 24, 44, 48, 57, 71, 75)) |>
  ggplot(aes(x = condition, y = value, color = ID)) +
    geom_line(aes(group = ID)) +
    geom_point(alpha = 0.6) +
    scale_color_viridis_d() +
    facet_wrap(~param, scales = "free_y") +
    theme_tidybayes()

by_ID_condition_wide <- by_ID_condition |>
    select(ID, condition, param, value) |>
    pivot_wider(names_from = param, values_from = value)
by_ID_condition_wide
# A tibble: 228 × 6
# Groups:   ID, condition [228]
   ID    condition    guess  lapse     mu  sigma
   <fct> <fct>        <dbl>  <dbl>  <dbl>  <dbl>
 1 81    Neutral     0.0382 0.0191 0.0584 0.0277
 2 81    Congruent   0.0300 0.0170 0.0354 0.0270
 3 81    Incongruent 0.0206 0.0208 0.0538 0.0258
 4 80    Neutral     0.0178 0.0382 0.0983 0.0452
 5 80    Congruent   0.0302 0.0355 0.0896 0.0607
 6 80    Incongruent 0.0193 0.0316 0.0988 0.0333
 7 79    Incongruent 0.0297 0.0208 0.0695 0.0262
 8 79    Congruent   0.0181 0.0200 0.0632 0.0223
 9 79    Neutral     0.0185 0.0195 0.0626 0.0237
10 78    Incongruent 0.0312 0.0187 0.0504 0.0176
# ℹ 218 more rows
library(kableExtra)
by_ID_condition_wide |> 
  kable(booktabs = TRUE) |> 
  kable_styling(font_size = 8, 
    bootstrap_options = c("striped", "hover", "condensed", "responsive")) |> 
  scroll_box(width = "800px", height = "500px")
[Scrollable table: guess, lapse, mu, and sigma estimates for each participant (ID) and condition; the full contents of by_ID_condition_wide (228 rows).]

Aggregated thresholds by condition

by_ID_condition_sum <- by_ID_condition_wide |>
    Rmisc::summarySEwithin(measurevar = "mu",
                           withinvars = "condition",
                           idvar = "ID",
                           na.rm = FALSE,
                           conf.interval = 0.95)
by_ID_condition_sum 
    condition  N         mu         sd          se         ci
1     Neutral 76 0.05683087 0.05425611 0.006223602 0.01239805
2   Congruent 76 0.04571198 0.05467514 0.006271669 0.01249381
3 Incongruent 76 0.03459732 0.10313945 0.011830907 0.02356837
by_ID_condition_sum |>
    ggplot(aes(x = condition, y = mu, group = 1)) +
    geom_line() +
    geom_errorbar(aes(ymin = mu - se, ymax = mu + se), width = .1) +
    geom_point(shape = 21, size = 4, fill = "white") +
    theme_tidybayes()