Preliminaries.

rm(list=ls())
suppressPackageStartupMessages(library(dplyr))
suppressPackageStartupMessages(library(langcog))
suppressPackageStartupMessages(library(lme4))
library(ggplot2)
library(magrittr)
library(knitr)
opts_chunk$set(fig.width=8, fig.height=5, 
               echo=TRUE, warning=FALSE, message=FALSE, cache=TRUE)
theme_set(theme_bw())

1 Experiment 1: Experts with dual task

1.1 Data prep

Read data. Coding info from Katie.

  • Heaven/Earth: 0 = heaven; 1 = earth
  • Condition: 1 = in play; 2 = out of play; 3 = leading; 4 = trailing
  • Number requested
  • X: 1 = leftmost column, counting to the right to a max of 2 or 3
  • Y: 1 = top row (i.e., the out-of-play heavenly bead), counting downwards to 7
  • Correct on search task: trials with incorrect search responses were excluded from analysis
  • RT
  • Abacus entered: only relevant to the dual task
  • Correct on abacus: only relevant to the dual task
  • “Number of columns: occasionally you will see 0s in this column, this means the participant was in a pilot version with 4 columns and should be excluded.”
  • Subject number: assigned when extracted from .mat (instead of the alphanumeric ID) to make some things in life easier
  • <3 SD: removing outliers (2s); calculated in linear space, needs to be redone in log space
  • In/out or leading/trailing trial
  • Expertise level: ranges from 0 (none) to 2 (has used an abacus)
dual_experts_raw <- read.csv("data/Upright dual data experts.csv")
names(dual_experts_raw) <- c("bead_type", "condition", "number_requested", 
                             "X_pos","Y_pos","search_correct","RT",
                             "abacus_val","abacus_correct","n_col",
                             "subnum","outlier","trial_type")
dual_experts_raw %<>% 
  mutate(bead_type = factor(bead_type, 
                            levels = c(0,1), 
                            labels = c("heaven","earth")),
         condition = factor(condition, 
                            levels = c(1,2,3,4), 
                            labels = c("in play", "out of play", 
                                       "leading","trailing")))

Exclusions. Filter pilot participants.

pilot_subs <- dual_experts_raw %>%
  group_by(subnum) %>%
  summarise(pilot = any(n_col == 0)) %>%
  filter(pilot) 
  
dual_experts <- filter(dual_experts_raw, 
                       !subnum %in% pilot_subs$subnum)
dual_experts %<>% 
  group_by(subnum) %>%
  mutate(trial_num = 1:n())

Check to make sure we have a consistent number of trials, no training trials.

qplot(subnum, trial_num, data = dual_experts)

In this dataset we appear to be missing the end of a few participants.
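As a quick numeric check (a sketch using the same dplyr verbs, not part of the original code), we can count trials per participant and flag anyone short of the maximum:

# A sketch: count trials per participant to identify who is missing data.
dual_experts %>%
  group_by(subnum) %>%
  summarise(n_trials = n()) %>%
  filter(n_trials < max(n_trials))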

Next, RT exclusions. Note that there are a few 0 RTs. What’s the deal with these?

sum(dual_experts$RT == 0)
## [1] 37
dual_experts %<>% filter(RT > 0, 
                         !is.na(RT))

Linear space.

qplot(RT, data = dual_experts, 
      fill = RT > mean(RT) + 3*sd(RT))

mean(dual_experts$RT)
## [1] 3.860437
median(dual_experts$RT)
## [1] 3.31

Log space looks better.

qplot(log(RT), data = dual_experts, 
      fill = log(RT) > mean(log(RT)) + 3*sd(log(RT)) |
        log(RT) < mean(log(RT)) - 3*sd(log(RT)))

Clip these.

lmean <- mean(log(dual_experts$RT))
lsd <- sd(log(dual_experts$RT))
dual_experts$RT[log(dual_experts$RT) > lmean + 3*lsd |
                  log(dual_experts$RT) < lmean - 3*lsd] <- NA
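Since this same clipping recurs verbatim in Experiments 2 and 3 below, it could be factored into a small helper (a sketch; clip_log_outliers is a hypothetical name, not part of the original code):

# A sketch of a reusable helper: set to NA any RT more than `cutoff` SDs
# of log RT away from the mean of log RT.
clip_log_outliers <- function(d, cutoff = 3) {
  lmean <- mean(log(d$RT), na.rm = TRUE)
  lsd <- sd(log(d$RT), na.rm = TRUE)
  outlier <- log(d$RT) > lmean + cutoff * lsd |
    log(d$RT) < lmean - cutoff * lsd
  d$RT[!is.na(outlier) & outlier] <- NA
  d
}

Each dataset's clipping then reduces to, e.g., dual_experts <- clip_log_outliers(dual_experts).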

Replot in linear space just to check.

qplot(RT, data = dual_experts)

Looks good.

1.2 RT and accuracy analyses

Basic analyses.

ms <- dual_experts %>%
  filter(abacus_correct == 1, 
         search_correct == 1) %>%
  group_by(subnum, condition) %>%
  summarise(RT = mean(RT, na.rm=TRUE)) %>%
  group_by(condition) %>%
  multi_boot_standard(col = "RT")

ggplot(ms,aes(x = condition, y = mean, fill = condition)) + 
  geom_bar(stat = "identity") + 
  geom_linerange(aes(ymin = ci_lower, ymax = ci_upper)) 
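The heading mentions accuracy as well; as a minimal sketch (not among the original analyses), per-condition accuracy on the two tasks could be tabulated like this:

# A sketch: mean accuracy by condition on the search and abacus tasks.
dual_experts %>%
  group_by(condition) %>%
  summarise(search_acc = mean(search_correct, na.rm = TRUE),
            abacus_acc = mean(abacus_correct, na.rm = TRUE))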

Add two other variables: bead type and number of columns.

ms <- dual_experts %>%
  filter(abacus_correct == 1, 
         search_correct == 1) %>%
  group_by(subnum, condition, bead_type, n_col) %>%
  summarise(RT = mean(RT, na.rm=TRUE)) %>%
  group_by(condition, bead_type, n_col) %>%
  multi_boot_standard(col = "RT")

ggplot(ms,aes(x = condition, y = mean, fill = condition)) + 
  geom_bar(stat = "identity") + 
  geom_linerange(aes(ymin = ci_lower, ymax = ci_upper)) + 
  facet_grid(bead_type ~ n_col)

It’s clear that the effects are being driven by the two-column displays, and especially by the earthly beads. (Though there are probably fewer heavenly-bead trials, no?)

1.3 Stats

Basic LMER confirms highly significant effects of in/out of play and of leading/trailing. We could do a better parameterization (one option is sketched after the table), but the result is still very clear. Models with more random effects don’t converge.

kable(summary(lmer(log(RT) ~ trial_num + condition + 
                     (condition | subnum), 
                   data = filter(dual_experts, 
                                 search_correct == 1, 
                                 abacus_correct == 1)))$coefficients, digits = 3)
| | Estimate | Std. Error | t value |
|:---|---:|---:|---:|
| (Intercept) | 1.330 | 0.035 | 38.550 |
| trial_num | -0.003 | 0.000 | -22.050 |
| conditionout of play | 0.092 | 0.015 | 6.158 |
| conditionleading | 0.087 | 0.014 | 6.018 |
| conditiontrailing | 0.028 | 0.014 | 2.013 |
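One better parameterization (a sketch, not what was run above) would separate the pair type (in/out vs. leading/trailing) from a centered within-pair contrast, so that each comparison gets its own coefficient:

# A sketch: recode condition into a pair factor and a within-pair contrast.
dual_ok <- dual_experts %>%
  filter(search_correct == 1, abacus_correct == 1) %>%
  mutate(pair = factor(ifelse(condition %in% c("in play", "out of play"),
                              "in/out", "lead/trail")),
         contrast = ifelse(condition %in% c("out of play", "leading"),
                           0.5, -0.5))
summary(lmer(log(RT) ~ trial_num + pair * contrast + (pair | subnum),
             data = dual_ok))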

Now add number of columns. The model doesn’t converge with number of columns in the random effects (a convergence workaround is sketched after the table). The result is again strong and clear, and the interactions suggest that the condition effects essentially get canceled out in the three-column displays.

kable(summary(lmer(log(RT) ~ trial_num + condition * factor(n_col) + 
                     (condition | subnum), 
                   data = filter(dual_experts, 
                                 search_correct == 1, 
                                 abacus_correct == 1)))$coefficients, digits = 3)
| | Estimate | Std. Error | t value |
|:---|---:|---:|---:|
| (Intercept) | 1.201 | 0.035 | 34.014 |
| trial_num | -0.003 | 0.000 | -22.346 |
| conditionout of play | 0.120 | 0.019 | 6.378 |
| conditionleading | 0.154 | 0.018 | 8.408 |
| conditiontrailing | 0.064 | 0.018 | 3.569 |
| factor(n_col)3 | 0.253 | 0.017 | 15.336 |
| conditionout of play:factor(n_col)3 | -0.059 | 0.023 | -2.536 |
| conditionleading:factor(n_col)3 | -0.135 | 0.023 | -5.775 |
| conditiontrailing:factor(n_col)3 | -0.073 | 0.023 | -3.106 |
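Before dropping number of columns from the random effects, one thing worth trying (a sketch, not run for the tables above) is a different optimizer with a higher iteration cap:

# A sketch: retry the fuller random-effects structure with bobyqa.
m_cols <- lmer(log(RT) ~ trial_num + condition * factor(n_col) + 
                 (condition + factor(n_col) | subnum), 
               data = filter(dual_experts, 
                             search_correct == 1, 
                             abacus_correct == 1),
               control = lmerControl(optimizer = "bobyqa",
                                     optCtrl = list(maxfun = 1e5)))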

2 Experiment 2: Experts with no dual task

2.1 Data prep

Read data.

single_experts_raw <- read.csv("data/Upright Single Task Expert Data.csv")
names(single_experts_raw) <- c("bead_type", "condition", "number_requested", 
                             "X_pos","Y_pos","search_correct","RT",
                             "abacus_val","abacus_correct","n_col",
                             "subnum","outlier","trial_type")
single_experts_raw %<>% 
  mutate(bead_type = factor(bead_type, 
                            levels = c(0,1), 
                            labels = c("heaven","earth")),
         condition = factor(condition, 
                            levels = c(1,2,3,4), 
                            labels = c("in play", "out of play", 
                                       "leading","trailing")))

Exclusions. Filter pilot participants.

pilot_subs <- single_experts_raw %>%
  group_by(subnum) %>%
  summarise(pilot = any(n_col == 0)) %>%
  filter(pilot) 
  
single_experts <- filter(single_experts_raw, 
                       !subnum %in% pilot_subs$subnum)
single_experts %<>% 
  group_by(subnum) %>%
  mutate(trial_num = 1:n())

Check to make sure we have a consistent number of trials, no training trials.

qplot(subnum, trial_num, data = single_experts)

All participants have full data.

RT exclusions. Note that there are a few 0 RTs. What’s the deal with these?

sum(single_experts$RT == 0)
## [1] 43
single_experts %<>% filter(RT > 0, 
                         !is.na(RT))

Again clip in log space.

qplot(log(RT), data = single_experts, 
      fill = log(RT) > mean(log(RT)) + 3*sd(log(RT)) |
        log(RT) < mean(log(RT)) - 3*sd(log(RT)))

Clip these.

lmean <- mean(log(single_experts$RT))
lsd <- sd(log(single_experts$RT))
single_experts$RT[log(single_experts$RT) > lmean + 3*lsd |
                  log(single_experts$RT) < lmean - 3*lsd] <- NA
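(With the clip_log_outliers helper sketched in Experiment 1, this block would reduce to single_experts <- clip_log_outliers(single_experts); likewise for the naive data below.)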

Replot in linear space just to check.

qplot(RT, data = single_experts)

Looks good.

2.2 RT and accuracy analyses

Basic analyses.

ms <- single_experts %>%
  filter(search_correct == 1) %>%
  group_by(subnum, condition) %>%
  summarise(RT = mean(RT, na.rm=TRUE)) %>%
  group_by(condition) %>%
  multi_boot_standard(col = "RT")

ggplot(ms,aes(x = condition, y = mean, fill = condition)) + 
  geom_bar(stat = "identity") + 
  geom_linerange(aes(ymin = ci_lower, ymax = ci_upper)) 

Add two other variables: bead type and number of columns.

ms <- single_experts %>%
  filter(search_correct == 1) %>%
  group_by(subnum, condition, bead_type, n_col) %>%
  summarise(RT = mean(RT, na.rm=TRUE)) %>%
  group_by(condition, bead_type, n_col) %>%
  multi_boot_standard(col = "RT")

ggplot(ms, aes(x = condition, y = mean, fill = condition)) + 
  geom_bar(stat = "identity") + 
  geom_linerange(aes(ymin = ci_lower, ymax = ci_upper)) + 
  facet_grid(bead_type ~ n_col)

In this experiment, we’re actually seeing effects in the three-column case as well. That suggests that it was the load of reading the three-column abacus that was suppressing the attentional effects in Experiment 1, which is actually kind of nice and interesting.

2.3 Stats

Same LMER as before. This time the model didn’t converge with random condition effects.

kable(summary(lmer(log(RT) ~ trial_num + condition + 
                     (1 | subnum), 
                   data = filter(single_experts, 
                                 search_correct == 1)))$coefficients, 
      digits = 3)
| | Estimate | Std. Error | t value |
|:---|---:|---:|---:|
| (Intercept) | 0.533 | 0.033 | 16.301 |
| trial_num | -0.001 | 0.000 | -4.481 |
| conditionout of play | 0.118 | 0.014 | 8.729 |
| conditionleading | 0.085 | 0.014 | 6.276 |
| conditiontrailing | 0.013 | 0.013 | 0.928 |

Now add number of columns again. Here there are no interactions, which is clear and nice. Interestingly, now the model will converge with condition in the random effects.

kable(summary(lmer(log(RT) ~ trial_num + condition * factor(n_col) + 
                     (condition | subnum), 
                   data = filter(single_experts, 
                                 search_correct == 1)))$coefficients, digits = 3)
| | Estimate | Std. Error | t value |
|:---|---:|---:|---:|
| (Intercept) | 0.452 | 0.035 | 12.792 |
| trial_num | -0.001 | 0.000 | -4.533 |
| conditionout of play | 0.119 | 0.020 | 6.011 |
| conditionleading | 0.092 | 0.019 | 4.819 |
| conditiontrailing | 0.019 | 0.020 | 0.953 |
| factor(n_col)3 | 0.163 | 0.019 | 8.705 |
| conditionout of play:factor(n_col)3 | -0.003 | 0.027 | -0.107 |
| conditionleading:factor(n_col)3 | -0.016 | 0.027 | -0.599 |
| conditiontrailing:factor(n_col)3 | -0.014 | 0.026 | -0.535 |

3 Experiments 1 and 2 together

Bind everything together.

experts <- bind_rows(filter(single_experts,
                            search_correct ==1) %>%
                       mutate(expt = "single task", 
                              group = "experts"),
                     filter(dual_experts, 
                            search_correct == 1, 
                            abacus_correct == 1) %>%
                       mutate(expt = "dual task", 
                              group = "experts"))

3.1 Visualization

We don’t learn much more this way, but we can plot everything together.

ms <- experts %>%
  group_by(subnum, condition, expt) %>%
  summarise(RT = mean(RT, na.rm=TRUE)) %>%
  group_by(condition, expt) %>%
  multi_boot_standard(col = "RT")

ggplot(ms,aes(x = condition, y = mean, fill = condition)) + 
  geom_bar(stat = "identity") + 
  geom_linerange(aes(ymin = ci_lower, ymax = ci_upper)) + 
  facet_grid(~expt)

3.2 Stats

Model the whole thing together.

kable(summary(lmer(log(RT) ~ expt * trial_num + expt * condition + 
                     (condition | subnum), 
                   data = experts))$coefficients, 
      digits = 3)
| | Estimate | Std. Error | t value |
|:---|---:|---:|---:|
| (Intercept) | 1.330 | 0.035 | 38.151 |
| exptsingle task | -0.797 | 0.049 | -16.416 |
| trial_num | -0.003 | 0.000 | -20.279 |
| conditionout of play | 0.092 | 0.015 | 6.064 |
| conditionleading | 0.087 | 0.014 | 6.080 |
| conditiontrailing | 0.028 | 0.015 | 1.887 |
| exptsingle task:trial_num | 0.002 | 0.000 | 11.321 |
| exptsingle task:conditionout of play | 0.026 | 0.021 | 1.259 |
| exptsingle task:conditionleading | -0.002 | 0.020 | -0.087 |
| exptsingle task:conditiontrailing | -0.016 | 0.021 | -0.760 |

This is very interpretable:

  • the single task is faster than the dual task,
  • you get faster as you continue doing the tasks,
  • you don’t improve as much with repeated trials in the single task (because you’re not also getting faster at reading the abacus),
  • there are effects of in vs. out of play and leading vs. trailing, and
  • there are no interactions with task.

Awesome.

4 Experiment 3: Naive participants, single task

4.1 Data prep

Read data.

naive_raw <- read.csv("data/UprightAdultData.csv")
names(naive_raw) <- c("bead_type", "condition", "number_requested", 
                             "X_pos","Y_pos","search_correct","RT",
                             "abacus_val","abacus_correct","n_col",
                             "subnum","outlier","trial_type", "expertise")
naive_raw %<>% 
  mutate(bead_type = factor(bead_type, 
                            levels = c(0,1), 
                            labels = c("heaven","earth")),
         condition = factor(condition, 
                            levels = c(1,2,3,4), 
                            labels = c("in play", "out of play", 
                                       "leading","trailing")))

Exclusions. Filter pilot participants.

pilot_subs <- naive_raw %>%
  group_by(subnum) %>%
  summarise(pilot = any(n_col == 0)) %>%
  filter(pilot) 
  
naive <- filter(naive_raw, 
                !subnum %in% pilot_subs$subnum)
naive %<>% 
  group_by(subnum) %>%
  mutate(trial_num = 1:n())

Check to make sure we have a consistent number of trials, no training trials.

qplot(subnum, trial_num, data = naive)

All participants have full data.

RT exclusions. Again clip in log space.

qplot(log(RT), data = naive, 
      fill = log(RT) > mean(log(RT)) + 3*sd(log(RT)) |
        log(RT) < mean(log(RT)) - 3*sd(log(RT)))

Clip these.

lmean <- mean(log(naive$RT))
lsd <- sd(log(naive$RT))
naive$RT[log(naive$RT) > lmean + 3*lsd |
           log(naive$RT) < lmean - 3*lsd] <- NA

Replot in linear space just to check.

qplot(RT, data = naive)

Looks good.

4.2 RT and accuracy analyses

Basic analyses. Summary: In this experiment, the effects are smaller, but still present, at all column levels.

ms <- naive %>%
  filter(search_correct == 1) %>%
  group_by(subnum, condition) %>%
  summarise(RT = mean(RT, na.rm=TRUE)) %>%
  group_by(condition) %>%
  multi_boot_standard(col = "RT")

ggplot(ms,aes(x = condition, y = mean, fill = condition)) + 
  geom_bar(stat = "identity") + 
  geom_linerange(aes(ymin = ci_lower, ymax = ci_upper)) 

Add two other variables: bead type and number of columns.

ms <- naive %>%
  filter(search_correct == 1) %>%
  group_by(subnum, condition, bead_type, n_col) %>%
  summarise(RT = mean(RT, na.rm=TRUE)) %>%
  group_by(condition, bead_type, n_col) %>%
  multi_boot_standard(col = "RT")

ggplot(ms, aes(x = condition, y = mean, fill = condition)) + 
  geom_bar(stat = "identity") + 
  geom_linerange(aes(ymin = ci_lower, ymax = ci_upper)) + 
  facet_grid(bead_type ~ n_col)

Add abacus expertise instead.

ms <- naive %>%
  filter(search_correct == 1) %>%
  group_by(subnum, condition, expertise) %>%
  summarise(RT = mean(RT, na.rm=TRUE)) %>%
  group_by(condition, expertise) %>%
  multi_boot_standard(col = "RT")

ggplot(ms, aes(x = condition, y = mean, fill = condition)) + 
  geom_bar(stat = "identity") + 
  geom_linerange(aes(ymin = ci_lower, ymax = ci_upper)) + 
  facet_grid(. ~ expertise)

4.3 Stats

Same LMER as in the previous two experiments, with the full random-effects structure. This model shows both effects, as before.

kable(summary(lmer(log(RT) ~ trial_num + condition + 
                     (condition | subnum), 
                   data = filter(naive, 
                                 search_correct == 1)))$coefficients, 
      digits = 3)
| | Estimate | Std. Error | t value |
|:---|---:|---:|---:|
| (Intercept) | 0.118 | 0.034 | 3.459 |
| trial_num | -0.001 | 0.000 | -10.368 |
| conditionout of play | 0.081 | 0.012 | 6.520 |
| conditionleading | 0.077 | 0.014 | 5.671 |
| conditiontrailing | 0.023 | 0.013 | 1.764 |

Now add number of columns again. Here again there are no interactions, suggesting that column didn’t affect matters.

kable(summary(lmer(log(RT) ~ trial_num + condition * factor(n_col) + 
                     (condition | subnum), 
                   data = filter(naive, 
                                 search_correct == 1)))$coefficients, digits = 3)
| | Estimate | Std. Error | t value |
|:---|---:|---:|---:|
| (Intercept) | 0.083 | 0.035 | 2.367 |
| trial_num | -0.001 | 0.000 | -10.529 |
| conditionout of play | 0.067 | 0.017 | 4.048 |
| conditionleading | 0.063 | 0.018 | 3.567 |
| conditiontrailing | 0.011 | 0.017 | 0.676 |
| factor(n_col)3 | 0.071 | 0.016 | 4.542 |
| conditionout of play:factor(n_col)3 | 0.029 | 0.022 | 1.292 |
| conditionleading:factor(n_col)3 | 0.030 | 0.022 | 1.350 |
| conditiontrailing:factor(n_col)3 | 0.022 | 0.022 | 1.011 |

And check for interactions of expertise level. First with expertise as a continuous variable.

kable(summary(lmer(log(RT) ~ trial_num + condition * expertise + 
                     (condition | subnum), 
                   data = filter(naive, 
                                 search_correct == 1)))$coefficients, digits = 3)
| | Estimate | Std. Error | t value |
|:---|---:|---:|---:|
| (Intercept) | 0.136 | 0.043 | 3.138 |
| trial_num | -0.001 | 0.000 | -10.387 |
| conditionout of play | 0.075 | 0.016 | 4.712 |
| conditionleading | 0.051 | 0.017 | 3.093 |
| conditiontrailing | 0.022 | 0.016 | 1.339 |
| expertise | -0.033 | 0.049 | -0.670 |
| conditionout of play:expertise | 0.013 | 0.018 | 0.686 |
| conditionleading:expertise | 0.049 | 0.019 | 2.535 |
| conditiontrailing:expertise | 0.001 | 0.019 | 0.054 |

Now with expertise as a factor.

kable(summary(lmer(log(RT) ~ trial_num + condition * factor(expertise) + 
                     (condition | subnum), 
                   data = filter(naive, 
                                 search_correct == 1)))$coefficients, digits = 3)
| | Estimate | Std. Error | t value |
|:---|---:|---:|---:|
| (Intercept) | 0.152 | 0.045 | 3.398 |
| trial_num | -0.001 | 0.000 | -10.400 |
| conditionout of play | 0.077 | 0.017 | 4.655 |
| conditionleading | 0.056 | 0.017 | 3.206 |
| conditiontrailing | 0.023 | 0.017 | 1.323 |
| factor(expertise)1 | -0.105 | 0.073 | -1.435 |
| factor(expertise)2 | 0.003 | 0.111 | 0.024 |
| conditionout of play:factor(expertise)1 | 0.000 | 0.028 | -0.005 |
| conditionleading:factor(expertise)1 | 0.029 | 0.029 | 1.011 |
| conditiontrailing:factor(expertise)1 | -0.003 | 0.029 | -0.107 |
| conditionout of play:factor(expertise)2 | 0.037 | 0.042 | 0.895 |
| conditionleading:factor(expertise)2 | 0.116 | 0.044 | 2.651 |
| conditiontrailing:factor(expertise)2 | 0.006 | 0.043 | 0.140 |

In both cases, we see a slightly bigger leading effect (leading zeros being slower) for the participants with more abacus exposure. This is not totally unreasonable, but it’s a small effect.

5 Experiments 2 and 3

Bind all data.

d <- bind_rows(filter(naive, 
                      search_correct == 1) %>%
                 mutate(expt = "single task",
                        group = "naive"), 
               experts)

Now plot.

ms <- d %>%
  filter(expt == "single task") %>%
  group_by(subnum, condition, group) %>%
  summarise(RT = mean(RT, na.rm=TRUE)) %>%
  group_by(condition, group) %>%
  multi_boot_standard(col = "RT")

ggplot(ms, aes(x = condition, y = mean, fill = condition)) + 
  geom_bar(stat = "identity") + 
  geom_linerange(aes(ymin = ci_lower, ymax = ci_upper)) + 
  facet_grid( ~ group)

And a statistical model.

kable(summary(lmer(log(RT) ~ trial_num + condition * group + 
                     (condition | subnum), 
                   data = filter(d, 
                                 expt == "single task")))$coefficients, 
      digits = 3)
| | Estimate | Std. Error | t value |
|:---|---:|---:|---:|
| (Intercept) | 0.549 | 0.032 | 16.937 |
| trial_num | -0.001 | 0.000 | -9.702 |
| conditionout of play | 0.118 | 0.013 | 9.056 |
| conditionleading | 0.085 | 0.013 | 6.493 |
| conditiontrailing | 0.013 | 0.014 | 0.930 |
| groupnaive | -0.450 | 0.047 | -9.499 |
| conditionout of play:groupnaive | -0.037 | 0.019 | -1.950 |
| conditionleading:groupnaive | -0.009 | 0.019 | -0.444 |
| conditiontrailing:groupnaive | 0.009 | 0.020 | 0.469 |

Summary: experts are way slower than naive participants, even when they don’t have the dual task. That’s interesting, I think: they are still processing the abacus somehow prior to searching.

There’s not much in the way of interactions of the effects with group, with one exception: the naive participants don’t show as big an “out of play” effect as the experts, trending at p = .051. So there may be some small difference there.
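For reference, that p value looks like a two-tailed normal approximation from the interaction’s t statistic (a sketch; lmer itself doesn’t report p values for these models):

# A sketch: normal approximation for the out of play x group interaction.
2 * pnorm(-abs(-1.950))  # ~0.051, matching the value quoted above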

6 All data

6.1 Basic stats

Consolidate all data into a single model.

kable(summary(lmer(log(RT) ~ trial_num * group * expt + 
                     condition * group * expt + 
                     (condition | subnum), 
                   data = d))$coefficients, 
      digits = 3)
| | Estimate | Std. Error | t value |
|:---|---:|---:|---:|
| (Intercept) | 1.330 | 0.034 | 39.045 |
| trial_num | -0.003 | 0.000 | -21.355 |
| groupnaive | -0.416 | 0.049 | -8.514 |
| exptsingle task | -0.797 | 0.047 | -16.801 |
| conditionout of play | 0.092 | 0.014 | 6.555 |
| conditionleading | 0.087 | 0.014 | 6.248 |
| conditiontrailing | 0.028 | 0.014 | 1.992 |
| trial_num:groupnaive | -0.001 | 0.000 | -3.150 |
| trial_num:exptsingle task | 0.002 | 0.000 | 11.929 |
| groupnaive:conditionout of play | -0.037 | 0.020 | -1.860 |
| groupnaive:conditionleading | -0.008 | 0.020 | -0.391 |
| groupnaive:conditiontrailing | 0.010 | 0.020 | 0.499 |
| exptsingle task:conditionout of play | 0.026 | 0.019 | 1.364 |
| exptsingle task:conditionleading | -0.002 | 0.019 | -0.089 |
| exptsingle task:conditiontrailing | -0.016 | 0.020 | -0.805 |

In the full model we see all the effects holding up, with not much evidence for interactions of expertise or experiment type. That’s pretty much exactly what we thought was going on.

6.2 Post-hoc analysis: Targets closer to the beam

Does position mediate the “in play/out of play” effect?

First, the baseline model for this analysis: only earthly beads (since heavenly-bead position and “in play” status are confounded).

There’s an interesting decision to make here about the random effects. I think it’s best to go with no condition or Y_pos random effect, both for convergence reasons and because it’s theoretically hard to interpret when you have all the interactions here.

kable(summary(lmer(log(RT) ~ trial_num * group * expt + 
                     condition * group * expt + 
                     (1 | subnum), 
                   data = filter(d, 
                                 condition %in% c("in play","out of play"),
                                 bead_type == "earth")))$coefficients, 
      digits = 3)
| | Estimate | Std. Error | t value |
|:---|---:|---:|---:|
| (Intercept) | 1.300 | 0.035 | 37.549 |
| trial_num | -0.002 | 0.000 | -12.262 |
| groupnaive | -0.400 | 0.049 | -8.083 |
| exptsingle task | -0.784 | 0.048 | -16.288 |
| conditionout of play | 0.091 | 0.014 | 6.263 |
| trial_num:groupnaive | -0.001 | 0.000 | -2.063 |
| trial_num:exptsingle task | 0.002 | 0.000 | 7.002 |
| groupnaive:conditionout of play | -0.020 | 0.020 | -0.971 |
| exptsingle task:conditionout of play | -0.001 | 0.020 | -0.025 |

Next, add vertical position as a predictor.

kable(summary(lmer(log(RT) ~ trial_num * group * expt + 
                     condition * group * expt + 
                     Y_pos * group * expt + 
                     (1 | subnum), 
                   data = filter(d, 
                                 condition %in% c("in play","out of play"),
                                 bead_type == "earth")))$coefficients, 
      digits = 3)
| | Estimate | Std. Error | t value |
|:---|---:|---:|---:|
| (Intercept) | 1.177 | 0.045 | 26.232 |
| trial_num | -0.002 | 0.000 | -12.377 |
| groupnaive | -0.457 | 0.064 | -7.178 |
| exptsingle task | -0.790 | 0.062 | -12.712 |
| conditionout of play | 0.019 | 0.022 | 0.842 |
| Y_pos | 0.032 | 0.007 | 4.294 |
| trial_num:groupnaive | -0.001 | 0.000 | -2.083 |
| trial_num:exptsingle task | 0.002 | 0.000 | 6.958 |
| groupnaive:conditionout of play | -0.056 | 0.032 | -1.764 |
| exptsingle task:conditionout of play | -0.006 | 0.031 | -0.194 |
| groupnaive:Y_pos | 0.015 | 0.011 | 1.412 |
| exptsingle task:Y_pos | 0.002 | 0.010 | 0.196 |

Now we see a significant effect of Y position, with no remaining effect of “out of play.”
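To put a number on the contribution of Y_pos, the two models above could also be compared with a likelihood-ratio test (a sketch, not run here):

# A sketch: LRT for Y_pos and its interactions, fitting both models
# with ML on the same earthly-bead subset.
d_earth <- filter(d, 
                  condition %in% c("in play", "out of play"),
                  bead_type == "earth")
m0 <- lmer(log(RT) ~ trial_num * group * expt + condition * group * expt + 
             (1 | subnum), data = d_earth, REML = FALSE)
m1 <- update(m0, . ~ . + Y_pos * group * expt)
anova(m0, m1)

Next, check for interactions.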

kable(summary(lmer(log(RT) ~ trial_num * group * expt + 
                     Y_pos * group * expt * condition + 
                     (1 | subnum), 
                   data = filter(d, 
                                 condition %in% c("in play","out of play"),
                                 bead_type == "earth")))$coefficients, 
      digits = 3)
| | Estimate | Std. Error | t value |
|:---|---:|---:|---:|
| (Intercept) | 1.186 | 0.053 | 22.177 |
| trial_num | -0.002 | 0.000 | -12.389 |
| groupnaive | -0.554 | 0.076 | -7.292 |
| exptsingle task | -0.684 | 0.074 | -9.215 |
| Y_pos | 0.030 | 0.011 | 2.795 |
| conditionout of play | -0.005 | 0.077 | -0.070 |
| trial_num:groupnaive | -0.001 | 0.000 | -2.093 |
| trial_num:exptsingle task | 0.002 | 0.000 | 6.953 |
| groupnaive:Y_pos | 0.040 | 0.015 | 2.671 |
| exptsingle task:Y_pos | -0.025 | 0.015 | -1.719 |
| Y_pos:conditionout of play | 0.005 | 0.015 | 0.324 |
| groupnaive:conditionout of play | 0.188 | 0.111 | 1.696 |
| exptsingle task:conditionout of play | -0.273 | 0.108 | -2.535 |
| groupnaive:Y_pos:conditionout of play | -0.049 | 0.021 | -2.310 |
| exptsingle task:Y_pos:conditionout of play | 0.054 | 0.021 | 2.591 |

Interestingly, we are seeing some three-way interactions, but these are a bit hard to interpret. Let’s subset to Experiments 1, 2, and 3, in that order below.

kable(summary(lmer(log(RT) ~ trial_num + 
                     Y_pos * condition + 
                     (1 | subnum), 
                   data = filter(d, 
                                 group == "experts", 
                                 expt == "dual task",
                                 condition %in% c("in play","out of play"),
                                 bead_type == "earth")))$coefficients, 
      digits = 3)
| | Estimate | Std. Error | t value |
|:---|---:|---:|---:|
| (Intercept) | 1.186 | 0.053 | 22.201 |
| trial_num | -0.002 | 0.000 | -12.390 |
| Y_pos | 0.030 | 0.011 | 2.796 |
| conditionout of play | -0.005 | 0.077 | -0.070 |
| Y_pos:conditionout of play | 0.005 | 0.015 | 0.324 |

kable(summary(lmer(log(RT) ~ trial_num + 
                     Y_pos * condition + 
                     (1 | subnum), 
                   data = filter(d, 
                                 group == "experts", 
                                 expt == "single task",
                                 condition %in% c("in play","out of play"),
                                 bead_type == "earth")))$coefficients, 
      digits = 3)
| | Estimate | Std. Error | t value |
|:---|---:|---:|---:|
| (Intercept) | 0.502 | 0.057 | 8.882 |
| trial_num | -0.001 | 0.000 | -2.613 |
| Y_pos | 0.004 | 0.011 | 0.366 |
| conditionout of play | -0.279 | 0.084 | -3.336 |
| Y_pos:conditionout of play | 0.059 | 0.016 | 3.639 |

kable(summary(lmer(log(RT) ~ trial_num + 
                     Y_pos * condition + 
                     (1 | subnum), 
                   data = filter(d, 
                                 group == "naive", 
                                 expt == "single task",
                                 condition %in% c("in play","out of play"),
                                 bead_type == "earth")))$coefficients, 
      digits = 3)
| | Estimate | Std. Error | t value |
|:---|---:|---:|---:|
| (Intercept) | -0.052 | 0.049 | -1.056 |
| trial_num | -0.001 | 0.000 | -6.595 |
| Y_pos | 0.044 | 0.009 | 4.763 |
| conditionout of play | -0.090 | 0.069 | -1.308 |
| Y_pos:conditionout of play | 0.009 | 0.013 | 0.715 |

So we see the same unpredicted effect in Experiment 2 as in the previous analysis.