Eyetracking analysis

List of variables

Participants and trials

  • ID
  • group: TD vs DYS
  • class: school grade (1-5) or university

Design

  • condition: cognate vs. nonword
  • item_n: item number
  • list: list number
  • trial: trial number
  • target: target item
  • sentence

Accuracy and validation measures

  • eye_used: eye used for recording
  • offset_drift: average drift accuracy by trial
  • avg_validation: average validation accuracy by ID
  • center:right_bot: average validation accuracy for each calibration point (columns from center to right_bot)

Eye-movements

  1. Previous saccade
  • previous_sac_start_x: X coordinate of the previous saccade start point
  • previous_sac_end_x: X coordinate of the previous saccade end point
  • previous_sac_start_y: Y coordinate of the previous saccade start point
  • previous_sac_end_y: Y coordinate of the previous saccade end point
  • previous_sac_amplitude: Amplitude of the previous saccade in degrees of visual angle
  • previous_sac_avg_velocity: Average velocity of the previous saccade
  • previous_sac_peak_velocity: Peak value of gaze velocity (in visual degrees per second) of the previous saccade
  • previous_sac_direction: Direction (Left, Right, Top, Bottom), relative to the current fixation, in which the previous saccade is aiming. Note that a missing value is recorded if previous_sac_amplitude is less than 0.5 degrees
  • previous_sac_duration: Duration of the previous saccade in milliseconds
  2. Previous fixation
  • previous_fix_x: X coordinate of the previous fixation
  • previous_fix_y: Y coordinate of the previous fixation
  • previous_fix_distance: Distance of the previous fixation from the current fixation, in degrees of visual angle
  • previous_fix_direction: Direction (Left, Right, Top, Bottom), relative to the current fixation, in which the previous fixation is located.
  3. Current fixation
  • current_fix_x_warp: X coordinate of the current fixation output by the warp algorithm
  • current_fix_y_warp: Y coordinate of the current fixation output by the warp algorithm
  • current_fix_x: X coordinate of the current fixation
  • current_fix_y: Y coordinate of the current fixation
  • current_fix_start: Start time of the current fixation
  • current_fix_end: End time of the current fixation
  • current_fix_pupil: Average pupil size during the current fixation.
  • current_fix_duration: Current fixation duration
  • ia_first_run_landing_position: Number of pixels from the horizontal position of the first fixation during the first run on the current interest area to the left edge of the interest area.
  4. Next saccade
  • next_sac_start_x: X coordinate of the next saccade start point
  • next_sac_end_x: X coordinate of the next saccade end point
  • next_sac_start_y: Y coordinate of the next saccade start point
  • next_sac_end_y: Y coordinate of the next saccade end point
  • next_sac_amplitude: Amplitude of the next saccade in degrees of visual angle
  • next_sac_avg_velocity: Average velocity of the next saccade
  • next_sac_peak_velocity: Peak value of gaze velocity (in visual degrees per second) of the next saccade
  • next_sac_direction: Direction (Left, Right, Top, Bottom), relative to the current fixation, in which the next saccade is aiming. Note that a missing value is recorded if next_sac_amplitude is less than 0.5 degrees.
  • next_sac_duration: Duration of the next saccade in milliseconds.
  5. Next fixation
  • next_fix_x: X coordinate of the next fixation
  • next_fix_y: Y coordinate of the next fixation
  • next_fix_direction: Direction (Left, Right, Top, Bottom), relative to the current fixation, in which the next fixation is located.
  • next_fix_distance: Distance of the next fixation from the current fixation, in degrees of visual angle
  6. AOI pixel parameters
  • ia_id: ID interest area
  • ia_bottom: Y-coordinate of the lower-right corner of a rectangular AOI
  • ia_top: Y-coordinate of the upper-left corner of a rectangular AOI
  • ia_right: X-coordinate of the lower-right corner of a rectangular AOI
  • ia_left: X-coordinate of the upper-left corner of a rectangular AOI

Participant measures

  • wr_time_z:nwr_error_z: reading measures from standardized tests, expressed as z-scores (standard deviations)
  • pa_rt, pa_acc, pa_bis: RT, accuracy, and speed-accuracy trade-off score in the spoonerism task (phonological awareness, PA); see the sketch after this list
  • forward, backward: verbal short-term memory (forward and backward span)
  • lexita_acc, lexita_bis, lexita_rt: LexITA vocabulary knowledge total score, speed-accuracy trade-off, and RT
  • lextale_acc, lextale_bis, lextale_rt: LexTALE vocabulary knowledge total score, speed-accuracy trade-off, and RT
  • it_ok_acc: Italian orthographic knowledge total score (RT and speed-accuracy trade-off not computed, as the test had a 1-minute time limit)
  • en_ok_acc, en_ok_rt, en_ok_bis: English orthographic knowledge total score, RT, and speed-accuracy trade-off
  • va_span: visual attention span total score
  • eng_prof: self-assessed English proficiency
  • eng_read: self-assessed English reading exposure
  • it_read: self-assessed Italian reading exposure
  • eng_use: self-assessed English use
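
The speed-accuracy trade-off (bis) scores were computed upstream. As an illustration only, assuming they follow a Balanced Integration Score (BIS)-style computation (standardized accuracy minus standardized RT), a minimal sketch:

### Hypothetical sketch of a BIS-style speed-accuracy trade-off score
### (illustration only; the actual bis columns were computed before this analysis)
compute_bis <- function(accuracy, rt) {
  # standardized accuracy minus standardized RT (higher = better)
  scale(accuracy)[, 1] - scale(rt)[, 1]
}

# e.g., for the spoonerism (PA) task:
# df$pa_bis <- compute_bis(df$pa_acc, df$pa_rt)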

Item measures

  • syl1_freq_pos1: frequency of the first syllable in first position
  • syl1_tot_freq: frequency of the first syllable
  • length: target word length in letters
  • frequency_wordfreq: frequency of the target word from wordfreq 3
  • frequency_subtlex: frequency of the target word from SUBTLEX-IT and SUBTLEX-US
  • bigram_freq: bigram frequency of the target word

New fixation parameters

  • sum.fix: number of fixations for each target and ID
  • sum.fp.fix: number of first-pass fixations for each target and ID
  • tot.fix.duration: total fixation duration for each target and ID
  • tot.fp.fix.duration: total first-pass fixation duration for each target and ID
  • fflp: first fixation landing position (rightward/leftward) based on the mean first X coordinate
  • fflp.2: first fixation landing position (numeric) based on the mean first X coordinate
  • is.target.fix: whether the fixation falls within the target AOI (see the sketch after this list)
  • is.fp.fix: whether the fixation is a first-pass fixation
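
As an illustration of how the AOI flag relates to the pixel parameters listed earlier, a minimal sketch (an assumption about the upstream preprocessing; it presumes df.long holds one row per fixation, with fixation coordinates in the same screen space as the ia_* bounds and y increasing downward):

library(dplyr)

### Hypothetical sketch: flag fixations that land inside the target interest area
df.long <- df.long %>% 
  mutate(
    is.target.fix = as.integer(
      current_fix_x >= ia_left & current_fix_x <= ia_right &
        current_fix_y >= ia_top & current_fix_y <= ia_bottom
    )
  )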

Participants’ exclusion

### original sample 
df.short %>% 
  filter(!startsWith(ID, "VER")) %>% 
  summarize(
    n = n_distinct(ID), 
    .by = "group"
  ) %>% kable(caption = "Original sample") %>% kable_minimal()
Original sample
group n
TD 61
DYS 29
### Exclude participants 
# df.short %>% 
#   
#   ### calculate reading score for exclusion (==VAS)
#   rowwise() %>% 
#   mutate(reading.score = mean(c_across(wr_time_z:nwr_error_z)),
#          reading.score = round(reading.score,2)) %>% 
#   ungroup() %>% 
#   
#   
#   ### exclude ADHD and TD with other diagnoses + DYS with stuttering, since the task involves reading aloud 
#   mutate(
#     exclude = case_when(
#       ID == "VER02" ~ 1, # ADHD - university - DYS (only excluded for VA Span task)
#       ID == "VER04" ~ 1, # ADHD - university - DYS (only excluded for VA Span task)
#       ID == "MEN12" ~ 1, # discalculia - 3rd - TD
#       ID == "MEN18" ~ 1, # disgrafia, discalculia - 3rd - TD 
#       ID == "MEN26" ~ 1, # disgrafie - 5th - TD, 
#       ID == "LAE33" ~ 1, # disgrafia, disortografia, 3rd - TD 
#       ID == "MEN09" ~ 1, # balbuzia - DYS 
#       TRUE ~ 0
#     ),
#   ) %>% 
#   filter(exclude == 0) %>% 
#   ### exclude poor readers 
#   filter(group.exclusion != "PR") %>% 
#   # filter(group == "DYS" | (group == "TD" & reading.score > -2)) %>% 
#   
#   
#   ### exclude participants from VER 
#   filter(ID != "VER01" & ID != "VER03") %>% 
#   dplyr::select(-exclude, -reading.score) -> df.short

df.short %>% 
  rowwise() %>% 
   ### calculate reading score 
  mutate(reading.score = mean(c_across(wr_time_z:nwr_error_z)),
         reading.score = round(reading.score,2)) %>%
  ungroup() %>%
  filter(!startsWith(ID, "VER")) %>% 
    mutate(
    exclude = case_when(
      ID == "MEN12" ~ 1, # discalculia - 3rd - TD
      ID == "MEN18" ~ 1, # disgrafia, discalculia - 3rd - TD
      ID == "MEN26" ~ 1, # disgrafia - 5th - TD, 
      ID == "LAE33" ~ 1, # disgrafia, disortografia, 3rd - TD, 
      
      ### poor reading performance: 
      ID == "LAE38" ~ 1,
      ID == "LC19" ~ 1, 
      ID == "MEN29" ~ 1, 
      ID == "MEN02" ~ 1, 
      
      
      ID == "MEN10" ~ 1, 
      ID == "LAE23" ~ 1, 
      # ID == "MEN35" ~ 1, 
      
      ID == "MEN09" ~ 1, # balbuzia - DYS 
      
      TRUE ~ 0
    )
  ) %>%
  filter(exclude == 0) -> df.short

df.short %>% 
  summarize(
    n = n_distinct(ID), 
    .by = "group"
  ) %>% 
  kable(caption = "Analyzed sample") %>% kable_minimal()
Analyzed sample
group n
TD 51
DYS 28
### age 
df.short %>% 
  summarize(mean = mean(age), 
            sd = sd(age), 
            min = min(age),
            max = max(age)) %>% 
  mutate(across(mean:max, ~round(.x,2))) %>% 
  kable(caption = "age") %>% kable_minimal()
age
mean sd min max
17.04 1.44 14.11 20.50
### age of onset 
df.short %>% 
  summarize(mean = mean(AoO), 
            sd = sd(AoO), 
            min = min(AoO),
            max = max(AoO)) %>% 
  mutate(across(mean:max, ~round(.x,2))) %>% 
  kable(caption = "AoO") %>% kable_minimal()
AoO
mean sd min max
5.68 0.87 3 8

Reading scores calculation

### create a single score for the analysis 
df.model %>% 
  
  rowwise() %>% 
  mutate(
    reading.score = mean(c_across(wr_time_z:nwr_error_z))
  ) %>% 
  ungroup() -> df.model 

Preliminary measures

Word reading time
group mean sd range
TD -0.42 0.85 -2.58 to 1.65
DYS -3.80 1.62 -7.74 to -0.58
Word reading errors
group mean sd range
TD 0.09 0.78 -2.36 to 1.18
DYS -2.52 2.31 -8.79 to 1.04
Nonword reading time
group mean sd range
TD -0.07 0.81 -1.81 to 1.8
DYS -2.84 1.67 -6.92 to -0.44
Nonword reading errors
group mean sd range
TD -0.14 1.21 -4.15 to 1.29
DYS -2.73 2.60 -8.97 to 0.75
df.short %>% 
  distinct(ID, .keep_all = T) %>% 
  lm(formula = wr_time_z ~ group) -> m1 
tidy(m1) %>% mutate(across(estimate:p.value, ~round(.x,3))) %>% 
  kable(caption = "Word reading time") %>% kable_minimal()
Word reading time
term estimate std.error statistic p.value
(Intercept) -3.797 0.223 -17.032 0
groupTD 3.381 0.277 12.185 0
df.short %>% 
  distinct(ID, .keep_all = T) %>% 
  lm(formula = wr_error_z ~ group) -> m2
tidy(m2) %>% mutate(across(estimate:p.value, ~round(.x,3))) %>% 
  kable(caption = "Word reading errors") %>% kable_minimal()
Word reading errors
term estimate std.error statistic p.value
(Intercept) -2.524 0.285 -8.858 0
groupTD 2.618 0.355 7.381 0
df.short %>% 
  distinct(ID, .keep_all = T) %>% 
  lm(formula = nwr_time_z ~ group) -> m3
tidy(m3) %>% mutate(across(estimate:p.value, ~round(.x,3))) %>% 
  kable(caption = "Nonword reading time") %>% kable_minimal()
Nonword reading time
term estimate std.error statistic p.value
(Intercept) -2.841 0.224 -12.664 0
groupTD 2.771 0.279 9.926 0
df.short %>% 
  distinct(ID, .keep_all = T) %>% 
  lm(formula = nwr_error_z ~ group) -> m4
tidy(m4) %>% mutate(across(estimate:p.value, ~round(.x,3))) %>% 
  kable(caption = "Nonword reading errors") %>% kable_minimal()
Nonword reading errors
term estimate std.error statistic p.value
(Intercept) -2.727 0.344 -7.92 0
groupTD 2.588 0.428 6.04 0
PA
group mean sd range
TD 0.99 0.91 -2.12 to 2.13
DYS -1.30 2.01 -6.47 to 1.68
Italian Orthographic Knowledge
group mean sd range
TD 46.35 9.36 25 to 59
DYS 30.18 9.78 14 to 50
English Orthographic Knowledge
group mean sd range
TD 0.89 0.77 -0.79 to 2.47
DYS -1.09 1.54 -5.62 to 2.3
LexITA
group mean sd range
TD 57.06 2.01 48 to 60
DYS 52.61 8.18 17 to 60
LexTALE
group mean sd range
TD 13.33 6.32 0 to 34
DYS 8.89 6.27 -2 to 29
Verbal STM (F)
group mean sd range
TD 6.63 1.11 4 to 9
DYS 6.00 0.90 4 to 8
Verbal STM (B)
group mean sd range
TD 5.39 1.40 3 to 8
DYS 4.29 1.92 0 to 8
df.short %>% 
  distinct(ID, .keep_all = T) %>% 
  lm(formula = pa.bis ~ age * group) -> m5
tidy(m5) %>% mutate(across(estimate:p.value, ~round(.x,3))) %>% 
  kable(caption = "Phonological Awareness") %>% kable_minimal()
Phonological Awareness
term estimate std.error statistic p.value
(Intercept) -14.156 3.652 -3.877 0.000
age 0.730 0.207 3.529 0.001
groupTD 15.429 4.216 3.660 0.000
age:groupTD -0.747 0.242 -3.086 0.003
df.short %>% 
  distinct(ID, .keep_all = T) %>% 
  glm(formula = it.ok.acc ~ age + group, family = poisson(link = "log")) -> m6
tidy(m6) %>% mutate(across(estimate:p.value, ~round(.x,3))) %>% 
  kable(caption = "Italian Orthographic Knowledge") %>% kable_minimal()
Italian Orthographic Knowledge
term estimate std.error statistic p.value
(Intercept) 2.461 0.224 11.008 0
age 0.054 0.012 4.293 0
groupTD 0.478 0.042 11.492 0
df.short %>% 
  distinct(ID, .keep_all = T) %>% 
  lm(formula = en.ok.bis ~ age + group) -> m7
tidy(m7) %>% mutate(across(estimate:p.value, ~round(.x,3))) %>% 
  kable(caption = "English Orthographic Knowledge") %>% kable_minimal()
English Orthographic Knowledge
term estimate std.error statistic p.value
(Intercept) -4.100 1.575 -2.604 0.011
age 0.171 0.089 1.930 0.057
groupTD 2.141 0.268 7.997 0.000
df.short %>% 
  distinct(ID, .keep_all = T) %>% 
  glm(formula = lexita.acc ~ age * group, family = poisson(link = "log")) -> m8
tidy(m8) %>% mutate(across(estimate:p.value, ~round(.x,3))) %>% 
  kable(caption = "LexITA") %>% kable_minimal()
LexITA
term estimate std.error statistic p.value
(Intercept) 3.054 0.386 7.912 0.000
age 0.051 0.022 2.365 0.018
groupTD 0.986 0.441 2.239 0.025
age:groupTD -0.051 0.025 -2.035 0.042
df.short %>% 
  distinct(ID, .keep_all = T) %>% 
  rowwise() %>% mutate(lextale.acc = lextale.acc + 2) %>% ungroup() %>% # shift scores so the minimum (-2) becomes 0 for the Poisson model
  glm(formula = lextale.acc ~ age + group, family = poisson(link = "log")) -> m9
tidy(m9) %>% mutate(across(estimate:p.value, ~round(.x,3))) %>% 
  kable(caption = "LexTALE") %>% kable_minimal()
LexTALE
term estimate std.error statistic p.value
(Intercept) 0.633 0.383 1.651 0.099
age 0.099 0.021 4.653 0.000
groupTD 0.431 0.070 6.161 0.000
df.short %>% 
  distinct(ID, .keep_all = T) %>% 
  glm(formula = forward ~ age + group, family = poisson(link = "log")) -> m10
tidy(m10) %>% mutate(across(estimate:p.value, ~round(.x,3))) %>% 
  kable(caption = "Verbal STM (F)") %>% kable_minimal()
Verbal STM (F)
term estimate std.error statistic p.value
(Intercept) 1.796 0.573 3.136 0.002
age 0.000 0.032 -0.008 0.994
groupTD 0.099 0.099 1.001 0.317
df.short %>% 
  distinct(ID, .keep_all = T) %>% 
  glm(formula = backward ~ age + group, family = poisson(link = "log")) -> m11
tidy(m11) %>% mutate(across(estimate:p.value, ~round(.x,3))) %>% 
  kable(caption = "Verbal STM (B)") %>% kable_minimal()
Verbal STM (B)
term estimate std.error statistic p.value
(Intercept) 0.659 0.642 1.028 0.304
age 0.045 0.036 1.255 0.209
groupTD 0.271 0.114 2.376 0.018

Inferential analysis

First-pass fixation count

Real words

df.model %>% 
  filter(condition == "cognate") %>% 
  mutate(frequency_wordfreq = scale(frequency_wordfreq)) %>% 
  
  glmer(formula = sum.fp.fix ~ 
          
          lang * en.ok.bis * group +  

          ia_length + # (! prev.vers)
          # bigram_freq + 
          # it.ok.acc + 
          # frequency_wordfreq + # dropped for singular fit issues 
          
          (1|ID) + (1|target), ### Singular fit with slope for language 
        family = poisson(link = "log"), 
        control = glmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 100000))) -> m1.real

# TEST  ************************************************************
# drop1(m1.real, test = "Chisq") 
# plot_model(m1.real, type = "diag")

# OUTPUT ************************************************************
anova_table(m1.real) -> anova.m1.real
Estimate SE z.value p.value
(Intercept) 0.867 0.053 16.405 0.000
langita -0.290 0.074 -3.922 0.000
en.ok.bis -0.137 0.036 -3.847 0.000
groupTD -0.210 0.069 -3.061 0.002
ia_length 0.127 0.015 8.374 0.000
langita:en.ok.bis 0.047 0.051 0.934 0.350
langita:groupTD 0.132 0.096 1.372 0.170
en.ok.bis:groupTD 0.109 0.071 1.542 0.123
langita:en.ok.bis:groupTD -0.039 0.099 -0.390 0.696
Chisq Df p.value
(Intercept) 269.11 1 0.000
lang 15.38 1 0.000
en.ok.bis 14.80 1 0.000
group 9.37 1 0.002
ia_length 70.13 1 0.000
lang:en.ok.bis 0.87 1 0.350
lang:group 1.88 1 0.170
en.ok.bis:group 2.38 1 0.123
lang:en.ok.bis:group 0.15 1 0.696
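
anova_table() is a project-specific helper defined outside this section. Based on the two tables it prints here (fixed-effect estimates followed by chi-square tests), a minimal sketch of what such a helper might look like, assuming it pairs the coefficient table with Type-III Wald chi-square tests:

library(dplyr)
library(broom.mixed) # tidy() for merMod objects
library(car)         # Anova()
library(knitr)
library(kableExtra)

### Hypothetical sketch only; the actual helper is defined elsewhere in the project
anova_table <- function(model) {
  broom.mixed::tidy(model, effects = "fixed") %>% 
    mutate(across(where(is.numeric), ~round(.x, 3))) %>% 
    kable() %>% kable_minimal() %>% print()
  
  wald <- car::Anova(model, type = "III", test.statistic = "Chisq")
  wald %>% as.data.frame() %>% 
    mutate(across(where(is.numeric), ~round(.x, 3))) %>% 
    kable() %>% kable_minimal() %>% print()
  
  invisible(wald)
}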



Nonwords

df.model %>% 
  filter(condition == "nonword") %>% 
  
  glmer(formula = sum.fp.fix ~ 
          
          ### We use orthographic knowledge as the proficiency measure and drop vocabulary knowledge, since these items are nonwords 
          lang * en.ok.bis * group +

          ia_length +
          # bigram_freq +
          # it.ok.acc +
          
          (1 |ID) + (1|target), ### Singular fit with slope for lang 
        family = poisson(link = "log"), 
        control = glmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 100000))) -> m1.nw

# TEST  ************************************************************
# drop1(m1.nw, test = "Chisq")
# plot_model(m1.nw, type = "diag")

# OUTPUT ************************************************************
anova_table(m1.nw) -> anova.m1.nw
Estimate SE z.value p.value
(Intercept) 1.312 0.058 22.490 0.000
langita -0.223 0.062 -3.585 0.000
en.ok.bis -0.021 0.042 -0.493 0.622
groupTD -0.235 0.073 -3.226 0.001
ia_length 0.127 0.022 5.739 0.000
langita:en.ok.bis -0.016 0.043 -0.364 0.716
langita:groupTD 0.034 0.076 0.447 0.655
en.ok.bis:groupTD -0.086 0.077 -1.119 0.263
langita:en.ok.bis:groupTD 0.180 0.080 2.245 0.025
Chisq Df p.value
(Intercept) 505.81 1 0.000
lang 12.85 1 0.000
en.ok.bis 0.24 1 0.622
group 10.40 1 0.001
ia_length 32.94 1 0.000
lang:en.ok.bis 0.13 1 0.716
lang:group 0.20 1 0.655
en.ok.bis:group 1.25 1 0.263
lang:en.ok.bis:group 5.04 1 0.025

  1. Explore whether Language differences are stronger in DYS vs. TD depending on proficiency levels (low: -1SD, high: +1SD)
emmeans(m1.nw, pairwise ~ lang | group | en.ok.bis, at = list(en.ok.bis = c(-1,1)), adjust = "bonferroni")$contrast %>% summary() %>% as.data.frame() %>% 
  mutate(across(estimate:p.value, ~round(.x,3))) -> emmeans1.m1.nw
color.me <- which(emmeans1.m1.nw$p.value <= .05)
emmeans1.m1.nw %>% kable() %>% kable_minimal() %>% row_spec(color.me, bold = T)
contrast group en.ok.bis estimate SE df z.ratio p.value
eng - ita DYS -1 0.208 0.048 Inf 4.342 0.000
eng - ita TD -1 0.354 0.109 Inf 3.230 0.001
eng - ita DYS 1 0.239 0.096 Inf 2.500 0.012
eng - ita TD 1 0.025 0.055 Inf 0.463 0.644
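
Because the model uses a log link, these contrasts can be read as fixation-count ratios after exponentiation, e.g.:

### Contrasts are on the log scale; exponentiating gives count ratios
exp(0.354) # ~1.42: at low proficiency, TD readers make ~42% more first-pass fixations on English than on Italian nonwords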

  2. Explore whether Group differences are stronger in IT vs. EN depending on proficiency levels (low: -1SD, high: +1SD)
emmeans(m1.nw, pairwise ~ group | lang | en.ok.bis, at = list(en.ok.bis = c(-1,1)), adjust = "bonferroni")$contrast %>% summary() %>% as.data.frame() %>% 
  mutate(across(estimate:p.value, ~round(.x,3))) -> emmeans2.m1.nw
color.me <- which(emmeans2.m1.nw$p.value <= .05)
emmeans2.m1.nw %>% kable() %>% kable_minimal() %>% row_spec(color.me, bold = T)
contrast lang en.ok.bis estimate SE df z.ratio p.value
DYS - TD eng -1 0.149 0.110 Inf 1.349 0.177
DYS - TD ita -1 0.295 0.114 Inf 2.578 0.010
DYS - TD eng 1 0.321 0.101 Inf 3.165 0.002
DYS - TD ita 1 0.107 0.106 Inf 1.010 0.312

  3. Explore whether the effect of English proficiency is stronger in IT vs. EN, depending on Group
emtrends(m1.nw, pairwise ~ lang | group, var = "en.ok.bis", adjust = "bonferroni", infer = T)$emtrends %>% as.data.frame() %>% mutate(across(en.ok.bis.trend:p.value, ~round(.x,3))) -> emtrends1.m1.nw

color.me <- which(emtrends1.m1.nw$p.value <= .05)
emtrends1.m1.nw %>% kable() %>% kable_minimal() %>% row_spec(color.me, bold = T)
lang group en.ok.bis.trend SE df asymp.LCL asymp.UCL z.ratio p.value
eng DYS -0.021 0.042 Inf -0.102 0.061 -0.493 0.622
ita DYS -0.036 0.044 Inf -0.122 0.050 -0.826 0.409
eng TD -0.107 0.065 Inf -0.233 0.020 -1.650 0.099
ita TD 0.057 0.066 Inf -0.072 0.187 0.870 0.384
emtrends(m1.nw, pairwise ~ lang | group, var = "en.ok.bis", adjust = "bonferroni", infer = T)$contrasts %>% as.data.frame() %>% mutate(across(estimate:p.value, ~round(.x,3))) -> emtrends2.m1.nw

color.me <- which(emtrends2.m1.nw$p.value <= .05)
emtrends2.m1.nw %>% kable() %>% kable_minimal() %>% row_spec(color.me, bold = T)
contrast group estimate SE df asymp.LCL asymp.UCL z.ratio p.value
eng - ita DYS 0.016 0.043 Inf -0.068 0.099 0.364 0.716
eng - ita TD -0.164 0.068 Inf -0.297 -0.031 -2.425 0.015



Gaze duration

Definition: Summed duration of first-pass fixations on the target word
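
For reference, a minimal sketch of how a gaze-duration measure like tot.fp.fix.duration could be aggregated from fixation-level data (an assumption about the upstream preprocessing; the actual column was computed before this analysis), assuming df.long holds one row per fixation with the is.target.fix and is.fp.fix flags:

library(dplyr)

### Hypothetical sketch: summed duration of first-pass fixations on the target,
### per participant and target
df.long %>% 
  filter(is.target.fix == 1, is.fp.fix == 1) %>% 
  group_by(ID, target) %>% 
  summarize(tot.fp.fix.duration = sum(current_fix_duration), .groups = "drop")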

Real words

df.model %>% 
  filter(condition == "cognate") %>% 
  mutate(tot.fp.fix.duration = log(tot.fp.fix.duration),
         frequency_wordfreq = scale(frequency_wordfreq)) %>% 
  
  lmer(formula = tot.fp.fix.duration ~ 
          
         lang * en.ok.bis * group +

          ia_length + 
          # bigram_freq + 
          # it.ok.acc + 
          frequency_wordfreq + 
          
          (1|ID) + (1|target) ### Singular fit with slope for lang 
       ) -> m2.real

# TEST  ************************************************************
# drop1(m2.real, test = "Chisq")
# plot_model(m2.real, type = "diag")


# OUTPUT ************************************************************
anova_table(m2.real) -> anova.m2.real
Estimate SE df t.value p.value
(Intercept) 6.343 0.063 165.777 99.990 0.000
langita -0.314 0.072 245.222 -4.332 0.000
en.ok.bis -0.190 0.043 155.125 -4.448 0.000
groupTD -0.122 0.073 152.260 -1.663 0.098
ia_length 0.092 0.021 43.276 4.342 0.000
frequency_wordfreq -0.069 0.025 43.494 -2.783 0.008
langita:en.ok.bis 0.107 0.046 1636.195 2.307 0.021
langita:groupTD 0.059 0.079 1638.054 0.752 0.452
en.ok.bis:groupTD 0.055 0.075 144.363 0.736 0.463
langita:en.ok.bis:groupTD 0.024 0.080 1628.118 0.301 0.764
Chisq Df p.value
(Intercept) 9997.99 1 0.000
lang 18.77 1 0.000
en.ok.bis 19.78 1 0.000
group 2.77 1 0.096
ia_length 18.85 1 0.000
frequency_wordfreq 7.74 1 0.005
lang:en.ok.bis 5.32 1 0.021
lang:group 0.57 1 0.452
en.ok.bis:group 0.54 1 0.462
lang:en.ok.bis:group 0.09 1 0.764

  1. Explore whether the effect of English proficiency is stronger in IT vs. EN
emtrends(m2.real, pairwise ~ lang | group, var = "en.ok.bis", adjust = "bonferroni", infer = T)$emtrends %>% 
  as.data.frame() %>% mutate(across(en.ok.bis.trend:p.value, ~round(.x,3))) -> emtrends1.m2.real

color.me <- which(emtrends1.m2.real$p.value <= .05)
emtrends1.m2.real %>% kable() %>% kable_minimal() %>% row_spec(color.me, bold = T)
lang group en.ok.bis.trend SE df lower.CL upper.CL t.ratio p.value
eng DYS -0.190 0.043 153.641 -0.275 -0.106 -4.447 0.000
ita DYS -0.084 0.042 148.988 -0.168 0.000 -1.975 0.050
eng TD -0.135 0.061 138.139 -0.256 -0.014 -2.213 0.029
ita TD -0.005 0.062 144.114 -0.127 0.118 -0.075 0.941
  2. Explore language differences by proficiency levels (at en.ok.bis = -2 and +2)
contrast en.ok.bis group estimate SE df t.ratio p.value
eng - ita -2 DYS 0.53 0.08 327.03 6.71 0.000
eng - ita 2 DYS 0.10 0.15 1350.81 0.69 0.491
eng - ita -2 TD 0.52 0.17 1532.54 3.00 0.003
eng - ita 2 TD -0.01 0.11 899.76 -0.06 0.951



Nonwords

df.model %>% 
  filter(condition == "nonword") %>% 
  mutate(tot.fp.fix.duration = log(tot.fp.fix.duration)) %>% 
  
  lmer(formula = tot.fp.fix.duration ~ 
          
          lang * en.ok.bis * group +

          ia_length + 
          # it.ok.acc + 
          # bigram_freq + 
          
          (1 |ID) + (1|target) # singular fit with slope for language 
        ) -> m2.nw

# TEST  ************************************************************
# drop1(m2.nw, test = "Chisq")
# plot_model(m2.nw, type = "diag")

# OUTPUT ************************************************************
anova_table(m2.nw) -> anova.m2.nw
Estimate SE df t.value p.value
(Intercept) 6.992 0.085 156.968 82.052 0.000
langita -0.311 0.096 258.644 -3.248 0.001
en.ok.bis -0.009 0.058 138.228 -0.153 0.879
groupTD -0.175 0.100 141.075 -1.757 0.081
ia_length 0.105 0.037 44.470 2.801 0.008
langita:en.ok.bis -0.012 0.061 1666.278 -0.202 0.840
langita:groupTD 0.049 0.105 1669.411 0.465 0.642
en.ok.bis:groupTD -0.209 0.102 136.648 -2.042 0.043
langita:en.ok.bis:groupTD 0.234 0.107 1660.252 2.188 0.029
Chisq Df p.value
(Intercept) 6732.51 1 0.000
lang 10.55 1 0.001
en.ok.bis 0.02 1 0.879
group 3.09 1 0.079
ia_length 7.85 1 0.005
lang:en.ok.bis 0.04 1 0.840
lang:group 0.22 1 0.642
en.ok.bis:group 4.17 1 0.041
lang:en.ok.bis:group 4.79 1 0.029

  1. Explore whether Language differences are stronger in DYS vs. TD depending on proficiency levels (low: -1SD, high: +1SD)
emmeans(m2.nw, pairwise ~ lang | group | en.ok.bis, at = list(en.ok.bis = c(-1,1)), adjust = "bonferroni")$contrast %>% summary() %>% as.data.frame() %>% 
  mutate(across(estimate:p.value, ~round(.x,3))) -> emmeans1.m2.nw
color.me <- which(emmeans1.m2.nw$p.value <= .05)
emmeans1.m2.nw %>% kable() %>% kable_minimal() %>% row_spec(color.me, bold = T)
contrast group en.ok.bis estimate SE df t.ratio p.value
eng - ita DYS -1 0.299 0.079 125.295 3.795 0.000
eng - ita TD -1 0.483 0.148 942.360 3.276 0.001
eng - ita DYS 1 0.323 0.140 822.417 2.315 0.021
eng - ita TD 1 0.041 0.080 134.217 0.507 0.613

  2. Explore whether Group differences are stronger in IT vs. EN depending on proficiency levels (low: -1SD, high: +1SD)
emmeans(m2.nw, pairwise ~ group | lang | en.ok.bis, at = list(en.ok.bis = c(-1,1)), adjust = "bonferroni")$contrast %>% summary() %>% as.data.frame() %>% 
  mutate(across(estimate:p.value, ~round(.x,3))) -> emmeans2.m2.nw
color.me <- which(emmeans2.m2.nw$p.value <= .05)
emmeans2.m2.nw %>% kable() %>% kable_minimal() %>% row_spec(color.me, bold = T)
contrast lang en.ok.bis estimate SE df t.ratio p.value
DYS - TD eng -1 -0.034 0.147 137.551 -0.231 0.818
DYS - TD ita -1 0.151 0.147 136.916 1.027 0.306
DYS - TD eng 1 0.384 0.139 141.190 2.773 0.006
DYS - TD ita 1 0.102 0.140 146.814 0.727 0.469

  3. Explore whether the effect of English proficiency is stronger in IT vs. EN, depending on Group
emtrends(m2.nw, pairwise ~ lang | group, var = "en.ok.bis", adjust = "bonferroni", infer = T)$emtrends %>% 
  as.data.frame() %>% mutate(across(en.ok.bis.trend:p.value, ~round(.x,3))) -> emtrends1.m2.nw

color.me <- which(emtrends1.m2.nw$p.value <= .05)
emtrends1.m2.nw %>% kable() %>% kable_minimal() %>% row_spec(color.me, bold = T)
lang group en.ok.bis.trend SE df lower.CL upper.CL t.ratio p.value
eng DYS -0.009 0.058 138.684 -0.123 0.105 -0.153 0.879
ita DYS -0.021 0.058 142.882 -0.136 0.094 -0.362 0.718
eng TD -0.218 0.085 136.449 -0.385 -0.051 -2.576 0.011
ita TD 0.004 0.085 136.810 -0.164 0.171 0.042 0.967
contrast group estimate SE df lower.CL upper.CL t.ratio p.value
eng - ita DYS 0.012 0.061 1665.261 -0.107 0.131 0.202 0.840
eng - ita TD -0.221 0.088 1657.845 -0.394 -0.049 -2.517 0.012

## To cite R in publications use:
## 
##   R Core Team (2023). _R: A Language and Environment for Statistical
##   Computing_. R Foundation for Statistical Computing, Vienna, Austria.
##   <https://www.R-project.org/>.
## 
## A BibTeX entry for LaTeX users is
## 
##   @Manual{,
##     title = {R: A Language and Environment for Statistical Computing},
##     author = {{R Core Team}},
##     organization = {R Foundation for Statistical Computing},
##     address = {Vienna, Austria},
##     year = {2023},
##     url = {https://www.R-project.org/},
##   }
## 
## We have invested a lot of time and effort in creating R, please cite it
## when using it for data analysis. See also 'citation("pkgname")' for
## citing R packages.


Other eye-tracking measures

First fixation duration

Definition: Duration of the first fixation on the target word
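
A minimal sketch of how the first fixation on each target could be selected from fixation-level data (again an assumption about the upstream preprocessing; the modeled data frame is taken as given here):

library(dplyr)

### Hypothetical sketch: keep the earliest first-pass fixation on the target,
### per participant and target
df.long %>% 
  filter(is.target.fix == 1, is.fp.fix == 1) %>% 
  group_by(ID, target) %>% 
  slice_min(current_fix_start, n = 1, with_ties = FALSE) %>% 
  ungroup()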

Real words

df.model %>% 
  filter(condition == "cognate") %>% 
  mutate(current_fix_duration = log(current_fix_duration), 
         frequency_wordfreq = scale(frequency_wordfreq)) %>% 
  
  lmer(formula = current_fix_duration ~ 
          
         lang * en.ok.bis * group +
         ia_length + 
         # frequency_wordfreq + 
         # it.ok.acc + 
          
          (1 |ID) + (1|target) # singular fit with slope for language 
        ) -> mod.first.fix.real

# TEST  ************************************************************
# drop1(mod.first.fix.real, test = "Chisq")
# plot_model(mod.first.fix.real, type = "diag")
Chisq Df p.value
(Intercept) 14569.37 1 0.000
lang 0.12 1 0.732
en.ok.bis 2.79 1 0.095
group 3.98 1 0.046
ia_length 0.12 1 0.733
lang:en.ok.bis 1.88 1 0.171
lang:group 3.57 1 0.059
en.ok.bis:group 3.85 1 0.050
lang:en.ok.bis:group 1.10 1 0.294
Estimate SE df t.value p.value
(Intercept) 5.615 0.047 179.075 120.704 0.000
langita -0.019 0.056 328.979 -0.343 0.732
en.ok.bis -0.054 0.032 179.188 -1.671 0.096
groupTD 0.109 0.055 175.024 1.996 0.047
ia_length -0.005 0.014 42.322 -0.341 0.735
langita:en.ok.bis 0.051 0.037 1638.858 1.370 0.171
langita:groupTD -0.121 0.064 1640.988 -1.890 0.059
en.ok.bis:groupTD -0.109 0.056 165.189 -1.963 0.051
langita:en.ok.bis:groupTD 0.068 0.065 1628.794 1.048 0.295
Contrast EOK Group Estimate SE t p
eng - ita -1 DYS 0.07 0.04 1.57 0.119
eng - ita 1 DYS -0.03 0.08 -0.38 0.701
eng - ita -1 TD 0.26 0.09 2.95 0.003
eng - ita 1 TD 0.02 0.05 0.45 0.657

Nonwords

df.model %>% 
  filter(condition == "nonword") %>% 
  mutate(current_fix_duration = log(current_fix_duration), 
         frequency_wordfreq = scale(frequency_wordfreq)) %>% 
  
  lmer(formula = current_fix_duration ~ 
          
         lang * en.ok.bis * group +
         ia_length + 
         # it.ok.acc + 
         
          
          (1 |ID) + (1|target) # singular fit with slope for language 
        ) -> mod.first.fix.nw

# TEST  ************************************************************
# drop1(mod.first.fix.nw, test = "Chisq")
Chisq Df p.value
(Intercept) 8026.07 1 0.000
lang 1.24 1 0.265
en.ok.bis 0.06 1 0.802
group 0.16 1 0.689
ia_length 1.62 1 0.203
lang:en.ok.bis 0.08 1 0.779
lang:group 0.25 1 0.619
en.ok.bis:group 0.42 1 0.516
lang:en.ok.bis:group 0.07 1 0.798
Estimate SE df t.value p.value
(Intercept) 5.788 0.065 134.790 89.588 0.000
langita -0.073 0.065 512.045 -1.114 0.266
en.ok.bis -0.012 0.047 128.368 -0.250 0.803
groupTD 0.032 0.080 130.779 0.400 0.690
ia_length -0.027 0.021 44.808 -1.273 0.210
langita:en.ok.bis 0.013 0.046 1674.304 0.280 0.779
langita:groupTD 0.039 0.079 1677.057 0.497 0.619
en.ok.bis:groupTD -0.054 0.083 127.270 -0.649 0.518
langita:en.ok.bis:groupTD 0.021 0.081 1662.895 0.256 0.798

Total fixation count

Definition: Total number of fixations on the target word

Real words

df.model %>% 
  filter(condition == "cognate") %>% 
  
glmer(formula = sum.fix ~ 
          
          lang * en.ok.bis * group +

          ia_length + 
          # it.ok.acc + 
          frequency_wordfreq +
          
          (1|ID) + (1|target), ### Singular fit with slope for language 
        family = poisson(link = "log"), 
        control = glmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 100000))) -> mod.tot.fix.real

# TEST  ************************************************************
# drop1(mod.tot.fix.real, test = "Chisq")
# plot_model(mod.tot.fix.real, type = "diag")
Chisq Df p.value
(Intercept) 174.82 1 0.000
lang 11.15 1 0.001
en.ok.bis 13.56 1 0.000
group 14.28 1 0.000
ia_length 32.85 1 0.000
frequency_wordfreq 4.95 1 0.026
lang:en.ok.bis 0.22 1 0.639
lang:group 2.62 1 0.106
en.ok.bis:group 4.09 1 0.043
lang:en.ok.bis:group 0.74 1 0.391
Estimate SE z.value p.value
(Intercept) 1.345 0.102 13.222 0.000
langita -0.235 0.071 -3.339 0.001
en.ok.bis -0.149 0.040 -3.683 0.000
groupTD -0.280 0.074 -3.779 0.000
ia_length 0.113 0.020 5.732 0.000
frequency_wordfreq -0.198 0.089 -2.225 0.026
langita:en.ok.bis 0.021 0.044 0.469 0.639
langita:groupTD 0.135 0.083 1.618 0.106
en.ok.bis:groupTD 0.154 0.076 2.022 0.043
langita:en.ok.bis:groupTD -0.074 0.086 -0.858 0.391


Nonwords

df.model %>% 
  filter(condition == "nonword") %>% 
  
glmer(formula = sum.fix ~ 
          
          lang * en.ok.bis * group +

          ia_length + 
          # bigram_freq + 
          # it.ok.acc +
          
          (1|ID) + (1|target), ### Singular fit with slope for language 
        family = poisson(link = "log"), 
        control = glmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 100000))) -> mod.tot.fix.nw

# TEST  ************************************************************
# drop1(mod.tot.fix.nw, test = "Chisq")
# plot_model(mod.tot.fix.nw, type = "diag")
Chisq Df p.value
(Intercept) 661.29 1 0.000
lang 0.76 1 0.383
en.ok.bis 2.24 1 0.134
group 16.51 1 0.000
ia_length 32.63 1 0.000
lang:en.ok.bis 3.40 1 0.065
lang:group 2.99 1 0.084
en.ok.bis:group 0.23 1 0.629
lang:en.ok.bis:group 1.06 1 0.303
Estimate SE z.value p.value
(Intercept) 1.627 0.063 25.716 0.000
langita -0.047 0.053 -0.873 0.383
en.ok.bis -0.067 0.045 -1.498 0.134
groupTD -0.319 0.079 -4.063 0.000
ia_length 0.119 0.021 5.712 0.000
langita:en.ok.bis -0.061 0.033 -1.843 0.065
langita:groupTD 0.108 0.062 1.730 0.084
en.ok.bis:groupTD 0.040 0.082 0.483 0.629
langita:en.ok.bis:groupTD 0.067 0.065 1.029 0.303

Total fixation duration

Definition: Total time spent on the target word (gaze duration plus fixations from regressions back to the target IA)

Real words

df.model %>% 
  filter(condition == "cognate") %>% 
  mutate(composite.en = scale(composite.en)) %>% 
  mutate(tot.fix.duration = log(tot.fix.duration),
         frequency_wordfreq = scale(frequency_wordfreq)) %>% 
  
  lmer(formula = tot.fix.duration ~ 
          
          lang * en.ok.bis * group +

          ia_length + 
          # bigram_freq + 
          # it.ok.acc + 
          frequency_wordfreq + 
          
          (1|ID) + (1|target) ### Singular fit with slope for lang 
       ) -> mod.fix.dur.real

# TEST  ************************************************************
# drop1(mod.fix.dur.real, test = "Chisq")
# plot_model(mod.fix.dur.real, type = "diag")
Chisq Df p.value
(Intercept) 10704.28 1 0.000
lang 24.12 1 0.000
en.ok.bis 30.85 1 0.000
group 16.69 1 0.000
ia_length 20.37 1 0.000
frequency_wordfreq 5.63 1 0.018
lang:en.ok.bis 5.00 1 0.025
lang:group 5.00 1 0.025
en.ok.bis:group 2.98 1 0.085
lang:en.ok.bis:group 0.48 1 0.486
Estimate SE df t.value p.value
(Intercept) 6.734 0.065 147.072 103.462 0.000
langita -0.342 0.070 129.002 -4.911 0.000
en.ok.bis -0.229 0.041 124.122 -5.555 0.000
groupTD -0.288 0.071 122.665 -4.085 0.000
ia_length 0.109 0.024 43.600 4.513 0.000
frequency_wordfreq -0.068 0.028 43.903 -2.374 0.022
langita:en.ok.bis 0.084 0.038 1629.288 2.236 0.025
langita:groupTD 0.143 0.064 1630.743 2.235 0.026
en.ok.bis:groupTD 0.125 0.072 117.582 1.725 0.087
langita:en.ok.bis:groupTD -0.046 0.065 1624.861 -0.696 0.486
contrast group en.ok.bis estimate SE df t.ratio p.value
eng - ita DYS -1 0.43 0.06 77.75 6.96 0.000
eng - ita TD -1 0.24 0.10 433.87 2.43 0.015
eng - ita DYS 1 0.26 0.09 379.74 2.75 0.006
eng - ita TD 1 0.16 0.06 80.50 2.59 0.011
contrast lang en.ok.bis estimate SE df t.ratio p.value
DYS - TD eng -1 0.41 0.1 113.76 3.99 0.000
DYS - TD ita -1 0.22 0.1 116.11 2.15 0.033
DYS - TD eng 1 0.16 0.1 122.19 1.66 0.100
DYS - TD ita 1 0.07 0.1 119.08 0.67 0.504


Nonwords

Chisq Df p.value
(Intercept) 11599.24 1 0.000
lang 0.99 1 0.319
en.ok.bis 11.67 1 0.001
group 15.83 1 0.000
ia_length 10.39 1 0.001
lang:en.ok.bis 0.58 1 0.445
lang:group 0.47 1 0.492
en.ok.bis:group 0.36 1 0.547
lang:en.ok.bis:group 4.79 1 0.029
Estimate SE df t.value p.value
(Intercept) 7.453 0.069 132.145 107.700 0.000
langita -0.067 0.067 93.467 -0.996 0.322
en.ok.bis -0.148 0.043 96.573 -3.417 0.001
groupTD -0.297 0.075 97.677 -3.978 0.000
ia_length 0.111 0.034 44.196 3.224 0.002
langita:en.ok.bis -0.024 0.031 1656.139 -0.764 0.445
langita:groupTD 0.037 0.053 1657.541 0.688 0.492
en.ok.bis:groupTD -0.046 0.077 95.856 -0.602 0.549
langita:en.ok.bis:groupTD 0.119 0.054 1654.237 2.188 0.029

Regression path

Definition: First-pass fixation duration + duration of regressions to the previous word + duration of returns to the IA (revisits)

### Calculate regression path 

df.long %>% 
  
  
### Set `is.regression`
  # (a) the fixation is in the IA and the next fixation's direction is LEFT 
  # (b) the fixation is not in the IA, but the previous one was and was itself a regression 
  # (c) the previous fixation is a regression and the current fixation is on the target but is not a first-pass fixation 
  
  mutate(
    is.regression = if_else(
  ### Condition (a)
  is.fp.fix == 1 & next_fix_direction == "LEFT", 1, 0
  ), 
  
  is.regression = if_else(
    
  ### Condition (b)
  is.fp.fix == 0 & (lag(is.fp.fix) == 1 & lag(is.regression) == 1), 1, is.regression
  ), 
  
  ### Condition (c)
  is.regression = if_else(
    is.target.fix == 1 & is.fp.fix == 0 & lag(is.regression) == 1, 1, is.regression
  ), 
  
  ### Also flag revisits: non-first-pass fixations on the target IA, and target fixations that follow a regression
  is.regression = if_else(
    (is.target.fix == 1 & is.fp.fix == 0) | (lag(is.regression, default = 0) == 1 & is.target.fix == 1), 1, is.regression
  )
) %>% 
  
group_by(ID, ia_label) %>% 
  mutate(
    regression_path_count = sum(is.regression, na.rm = T),
    regression_path_duration = sum(current_fix_duration[is.regression == 1], na.rm = T)
  ) %>% 
  distinct(ia_label, .keep_all = T) %>% 
ungroup() -> df.reg

# summary(df.reg$regression_path_duration)

Real words

df.reg %>% 
  filter(condition == "cognate") %>% 
  ### composite en
  relocate(lextale.acc, .after = en.ok.bis) %>% 
  mutate(
    lextale.acc = scale(lextale.acc)) %>% 
  rowwise() %>% 
  mutate(
    composite.en = mean(c_across(lextale.acc:en.ok.bis)),
    ia_length = (ia_right - ia_left)) %>% 
  ungroup() %>% 

  mutate(composite.en = scale(composite.en),
         frequency_wordfreq = scale(frequency_wordfreq),
         ia_length = scale(ia_length),
         bigram_freq = scale(bigram_freq),
         it.ok.acc = scale(it.ok.acc),
         frequency_wordfreq = scale(frequency_wordfreq),
         # regression_path_duration = log(regression_path_duration)
         ) %>% 
  
  lmer(formula = regression_path_duration ~ 
          
          lang * en.ok.bis * group + 

          ia_length + 
          # bigram_freq + 
          # it.ok.acc + 
          # frequency_wordfreq + 
          
          (1|ID) + (1|target)) -> mod.reg.real
Chisq Df p.value
(Intercept) 93.76 1 0.000
lang 10.62 1 0.001
en.ok.bis 11.91 1 0.001
group 15.37 1 0.000
ia_length 6.14 1 0.013
lang:en.ok.bis 0.45 1 0.504
lang:group 5.96 1 0.015
en.ok.bis:group 0.54 1 0.464
lang:en.ok.bis:group 0.86 1 0.353
Estimate SE df t.value p.value
(Intercept) 573.036 59.180 172.281 9.683 0.000
langita -200.744 61.592 189.644 -3.259 0.001
en.ok.bis -78.168 22.650 150.552 -3.451 0.001
groupTD -259.676 66.242 152.899 -3.920 0.000
ia_length 52.641 21.251 44.395 2.477 0.017
langita:en.ok.bis -14.120 21.117 2017.473 -0.669 0.504
langita:groupTD 151.028 61.859 2013.833 2.441 0.015
en.ok.bis:groupTD 29.514 40.338 147.111 0.732 0.466
langita:en.ok.bis:groupTD 34.863 37.517 2021.186 0.929 0.353


Nonwords

df.reg %>% 
  filter(condition == "nonword") %>% 
  ### composite en
  relocate(lextale.acc, .after = en.ok.bis) %>% 
  mutate(
    lextale.acc = scale(lextale.acc)) %>% 
  rowwise() %>% 
  mutate(
    composite.en = mean(c_across(lextale.acc:en.ok.bis)),
    ia_length = (ia_right - ia_left)) %>% 
  ungroup() %>% 

  mutate(composite.en = scale(composite.en),
         frequency_wordfreq = scale(frequency_wordfreq),
         ia_length = scale(ia_length),
         bigram_freq = scale(bigram_freq),
         it.ok.acc = scale(it.ok.acc),
         frequency_wordfreq = scale(frequency_wordfreq),
         # regression_path_duration = log(regression_path_duration)
         ) %>% 
  
  lmer(formula = regression_path_duration ~ 
          
          lang * en.ok.bis * group + 

          ia_length + 
          # bigram_freq + 
          # it.ok.acc + 
          # frequency_wordfreq + 
          
          (1|ID) + (1|target)) -> mod.reg.nw

# drop1(mod.reg.nw, test = "Chisq")
Chisq Df p.value
(Intercept) 79.42 1 0.000
lang 3.55 1 0.060
en.ok.bis 15.91 1 0.000
group 10.28 1 0.001
ia_length 4.13 1 0.042
lang:en.ok.bis 0.18 1 0.669
lang:group 0.00 1 0.994
en.ok.bis:group 3.37 1 0.067
lang:en.ok.bis:group 0.86 1 0.352
Estimate SE df t.value p.value
(Intercept) 1052.389 118.089 143.401 8.912 0.000
langita 192.910 102.373 201.393 1.884 0.061
en.ok.bis -188.310 47.208 122.909 -3.989 0.000
groupTD -441.439 137.657 123.377 -3.207 0.002
ia_length 71.127 35.009 45.276 2.032 0.048
langita:en.ok.bis 15.239 35.656 2008.992 0.427 0.669
langita:groupTD 0.790 104.191 2001.848 0.008 0.994
en.ok.bis:groupTD 154.861 84.420 121.907 1.834 0.069
langita:en.ok.bis:groupTD -58.889 63.333 2012.383 -0.930 0.353


Summary

Real words

rowname Chisq Df p.value
First-pass fixation count
(Intercept) 269.11 1 < .001
Language 15.38 1 < .001
EOK 14.80 1 < .001
Group 9.37 1 0.002
IA Length 70.13 1 < .001
Language * EOK 0.87 1 0.35
Language * Group 1.88 1 0.17
EOK * Group 2.38 1 0.123
Language * EOK * Group 0.15 1 0.696
Gaze duration
(Intercept) 9997.99 1 < .001
Language 18.77 1 < .001
EOK 19.78 1 < .001
Group 2.77 1 0.096
IA Length 18.85 1 < .001
Frequency 7.74 1 0.005
Language * EOK 5.32 1 0.021
Language * Group 0.57 1 0.452
EOK * Group 0.54 1 0.462
Language * EOK * Group 0.09 1 0.764
First fixation duration
(Intercept) 14569.37 1 < .001
Language 0.12 1 0.732
EOK 2.79 1 0.095
Group 3.98 1 0.046
IA Length 0.12 1 0.733
Language * EOK 1.88 1 0.171
Language * Group 3.57 1 0.059
EOK * Group 3.85 1 0.05
Language * EOK * Group 1.10 1 0.294
Total fixation count
(Intercept) 174.82 1 < .001
Language 11.15 1 0.001
EOK 13.56 1 < .001
Group 14.28 1 < .001
IA Length 32.85 1 < .001
Frequency 4.95 1 0.026
Language * EOK 0.22 1 0.639
Language * Group 2.62 1 0.106
EOK * Group 4.09 1 0.043
Language * EOK * Group 0.74 1 0.391
Total dwell time
(Intercept) 10704.28 1 < .001
Language 24.12 1 < .001
EOK 30.85 1 < .001
Group 16.69 1 < .001
IA Length 20.37 1 < .001
Frequency 5.63 1 0.018
Language * EOK 5.00 1 0.025
Language * Group 5.00 1 0.025
EOK * Group 2.98 1 0.085
Language * EOK * Group 0.48 1 0.486
Regression path duration
(Intercept) 93.76 1 < .001
Language 10.62 1 0.001
EOK 11.91 1 0.001
Group 15.37 1 < .001
IA Length 6.14 1 0.013
Language * EOK 0.45 1 0.504
Language * Group 5.96 1 0.015
EOK * Group 0.54 1 0.464
Language * EOK * Group 0.86 1 0.353

Nonwords

rowname Chisq Df p.value
First-pass fixation count
(Intercept) 505.81 1 < .001
Language 12.85 1 < .001
EOK 0.24 1 0.622
Group 10.40 1 0.001
IA Length 32.94 1 < .001
Language * EOK 0.13 1 0.716
Language * Group 0.20 1 0.655
EOK * Group 1.25 1 0.263
Language * EOK * Group 5.04 1 0.025
Gaze duration
(Intercept) 6732.51 1 < .001
Language 10.55 1 0.001
EOK 0.02 1 0.879
Group 3.09 1 0.079
IA Length 7.85 1 0.005
Language * EOK 0.04 1 0.84
Language * Group 0.22 1 0.642
EOK * Group 4.17 1 0.041
Language * EOK * Group 4.79 1 0.029
First fixation duration
(Intercept) 8026.07 1 < .001
Language 1.24 1 0.265
EOK 0.06 1 0.802
Group 0.16 1 0.689
IA Length 1.62 1 0.203
Language * EOK 0.08 1 0.779
Language * Group 0.25 1 0.619
EOK * Group 0.42 1 0.516
Language * EOK * Group 0.07 1 0.798
Total fixation count
(Intercept) 661.29 1 < .001
Language 0.76 1 0.383
EOK 2.24 1 0.134
Group 16.51 1 < .001
IA Length 32.63 1 < .001
Language * EOK 3.40 1 0.065
Language * Group 2.99 1 0.084
EOK * Group 0.23 1 0.629
Language * EOK * Group 1.06 1 0.303
Total dwell time
(Intercept) 11599.24 1 < .001
Language 0.99 1 0.319
EOK 11.67 1 0.001
Group 15.83 1 < .001
IA Length 10.39 1 0.001
Language * EOK 0.58 1 0.445
Language * Group 0.47 1 0.492
EOK * Group 0.36 1 0.547
Language * EOK * Group 4.79 1 0.029
Regression path duration
(Intercept) 79.42 1 < .001
Language 3.55 1 0.06
EOK 15.91 1 < .001
Group 10.28 1 0.001
IA Length 4.13 1 0.042
Language * EOK 0.18 1 0.669
Language * Group 0.00 1 0.994
EOK * Group 3.37 1 0.067
Language * EOK * Group 0.86 1 0.352