This document presents an analysis of the MB1 pilot data to support procedural and analytic decision-making. It includes analysis of demographics and looking times, as well as condition differences. Condition differences (i.e., analyses of IDS preference) are included for purposes of analysis planning and should not be used to modify the procedure.
Please forgive errors, omissions, lack of clearly labeled axes, etc. This document was produced quickly to provide an initial guide for decision-making.
There are many decision-points that need to be discussed throughout. Among them:

- What, if any, modifications/clarifications to the procedure should we make based on the pilot? (Sorry if any of this was already discussed/decided. -mk)
- Trial ordering: it looks like many labs will use the "small set of randomized lists" approach; to avoid odd item/trial effects, it would be good if 12345678 were not everyone's base list.
- Trial naming: what counts as a trial? E.g., is IDS5 followed by ADS5 "trial 1, trial 2", or is it "trial 1, order IDSfirst"? This is currently inconsistent between labs; a sketch of converting between the two conventions follows this list.
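For concreteness, here is a minimal sketch (with hypothetical column names) of mapping a sequential 1-16 numbering onto the paired "trial 1, order IDSfirst" convention; the ceiling(trial/2) calls in the loading code below do essentially this.

library(tidyverse)

seq_trials <- tibble(trial_seq = 1:16,
                     trial_type = rep(c("IDS", "ADS"), 8)) # example sequence only

seq_trials %>%
  group_by(trial_pair = ceiling(trial_seq / 2)) %>% # paired trial number, 1-8
  mutate(trial_order = paste0(first(trial_type), "first")) %>%
  ungroup()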
options(dplyr.width = Inf)
knitr::opts_chunk$set(message = FALSE, warning = FALSE, cache=TRUE)
library(lme4)
library(tidyverse)
library(eyetrackingR)
library(stringr)
library(lubridate)
library(bit64) # necessary because of times from SMI > max integer size
library(langcog)
library(knitr)
library(forcats)
source("et_helper.R")
theme_set(theme_bw())

Loading process to standardize pilot data from each lab. The current desideratum is a long-form data sheet with the following columns:
- lab [string] - unique identifier
- method [string] - eye-tracking, HPP, single screen
- subid [string] - unique within-lab ID
- age_days [integer] - chronological age
- trial_type [string] - IDS, ADS, and training
- trial_num [integer] - trial number, from 1-8 (with -2 and -1 denoting training trials)
- trial_order [string] - which stim version played first (IDSfirst or ADSfirst), or label trials 1-16 for individual sounds
- stim_name [string] - specific stimulus, e.g., ADS-5
- looking_time [double] - looking time in seconds

In general, we probably want to have three different spreadsheets:
- labs - this has lab and method info
- subjects - this has demographics for each subject
- trials - this has long-form trial data

These can then be merged appropriately, as in the sketch below.
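A minimal sketch of that structure and merge (the file names here are hypothetical):

library(tidyverse)

labs     <- read_csv("info/labs.csv")             # lab, method
subjects <- read_csv("info/subjects.csv")         # lab, subid, age_days
trials   <- read_csv("processed_data/trials.csv") # lab, subid, trial_type, trial_num,
                                                  # trial_order, stim_name, looking_time

d <- trials %>%
  left_join(subjects, by = c("lab", "subid")) %>%
  left_join(labs, by = "lab")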
Questions:
Read in eye-tracker pilot data.
raw_data_path <- "pilot/frank/"
info_path <- "info/"
processed_data_path <- "processed_data/frank/"
all_data <- dir(raw_data_path, pattern="*.txt") %>%
paste0(raw_data_path, .) %>%
map_df(get_smi_header) %>%
split(.$file_name) %>%
map_df(read_smi_idf) %>%
split(.$file_name) %>%
map_df(preprocess_data)

Now extract trial numbers and match with stimuli.
Some items to be fixed:
frank_data <- all_data %>%
group_by(file_name, trial, stimulus) %>%
summarise(looking_time = max(t_stim)) %>%
mutate(trial_cat = ifelse(str_detect(stimulus, ".jpg"), "speech","other")) %>%
filter(trial_cat == "speech") %>%
group_by(file_name) %>%
filter(trial > 5) %>%
mutate(trial_num = 1:n(),
subid = str_replace(str_replace(file_name,raw_data_path,""),
".txt","")) Now merge in demographic information.
info <- read_csv("info/frank_demo.csv")
frank_data <- info %>%
select(subid, age, order) %>%
left_join(frank_data)

Now merge in orders.
orders <- read_csv("info/orders.csv") %>%
gather(marker, stimulus, 2:19) %>%
rename(order = Order) %>%
filter(!str_detect(stimulus, "Train")) %>%
group_by(order) %>%
mutate(trial_num = 1:n()) %>%
separate(stimulus, into = c("trial_type", "stim_num"), sep = -2) %>%
select(-marker, -stim_num)
frank_data <- left_join(frank_data, orders) %>%
mutate(trial_num = ceiling(trial_num / 2)) %>%
mutate(age_days = as.numeric(age),
lab = "stanford",
method = "eye-tracking") %>%
select(lab, method, subid, age_days, trial_type, trial_num, looking_time)

floccia_data <- read_csv("pilot/floccia/pilot data.csv") %>%
rename(age_days = age,
looking_time = LT) %>%
mutate(subid = as.character(id),
method = "HPP",
stimulus = str_replace(str_replace(stimulus, ".wav", ""),
"Manybabies\\\\", "")) %>%
separate(stimulus, into = c("trial_type", "stim_num"), sep = "-") %>%
mutate(trial_num = ceiling(trial/2)) %>%
select(lab, method, subid, age_days, trial_type, trial_num, looking_time)

hamlin_path <- "pilot/hamlin/"
hamlin_data <- dir(hamlin_path, pattern="*.csv") %>%
paste0(hamlin_path, .) %>%
map_df(function(x) {read_csv(x) %>% mutate(order = x)}) %>%
mutate(order = as.numeric(str_replace(str_replace(order, ".csv",""),
"pilot/hamlin/order",""))) %>%
gather(trial, looking_time,
starts_with("Train"), starts_with("IDS"), starts_with("ADS")) %>%
separate(trial, into = c("trial_type","trial_num"), sep = -2) %>%
mutate(lab = "ubc",
method = "single-screen",
trial_num = as.numeric(trial_num),
age_days = str_split(age, ";") %>%
map_dbl(function(x) as.numeric(x[1]) * 30.3 + as.numeric(x[2]))) %>%
rename(subid = subnum) %>%
select(lab, method, subid, age_days, trial_type, trial_num, looking_time)

This is what the eventual data frame looks like:
d <- bind_rows(floccia_data, hamlin_data, frank_data)
kable(head(d))

| lab | method | subid | age_days | trial_type | trial_num | looking_time |
|---|---|---|---|---|---|---|
| plymouth | HPP | 1 | 331 | IDS | 1 | 20.194 |
| plymouth | HPP | 1 | 331 | ADS | 1 | 20.199 |
| plymouth | HPP | 1 | 331 | IDS | 2 | 2.715 |
| plymouth | HPP | 1 | 331 | ADS | 2 | 16.488 |
| plymouth | HPP | 1 | 331 | ADS | 3 | 18.237 |
| plymouth | HPP | 1 | 331 | IDS | 3 | 13.777 |
d %>%
group_by(lab, age_days, subid) %>%
distinct %>%
group_by(lab) %>%
summarise(n = n(),
age_months = mean(age_days)/30.3) %>%
kable(digits = 1)

| lab | n | age_months |
|---|---|---|
| plymouth | 9 | 10.0 |
| stanford | 14 | 13.9 |
| ubc | 9 | 5.2 |
What’s our participant distribution?
subs <- d %>%
group_by(lab, subid, age_days) %>%
distinct
qplot(age_days, fill = lab, data = subs)

First, the overall distribution of looking times.
qplot(looking_time, fill = lab, facets = ~ lab, binwidth = 2, data = d)

Stanford has a large number of 2 s looking times because that's the lookaway threshold on the tracker: when a child isn't looking at all, they are recorded as 2 s. How should we deal with this?
mk thoughts: do all eye-tracking implementations have the same lookaway setting? One way to deal with this would be to code looking time only after the lookaway period; this would yield more reasonably distributed eye-tracking data, but would also throw away a significant chunk of trial data. On the other hand, it is also strange to analyze a dataset with an effective floor of 2 s on the data. Two rough sketches follow.
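Both sketched here only for discussion, assuming the 2 s threshold applies to all eye-tracking labs (it may not):

# Sketch A: treat eye-tracking looking times at the 2 s lookaway floor as missing.
d_censored <- d %>%
  mutate(looking_time = ifelse(method == "eye-tracking" & looking_time <= 2,
                               NA, looking_time))

# Sketch B: code looking only after the lookaway period (throws away 2 s per trial).
d_shifted <- d %>%
  mutate(looking_time = ifelse(method == "eye-tracking",
                               pmax(looking_time - 2, 0), looking_time))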
Next, are children making it through the experiment? Looks like essentially everyone does.
final_trial <- d %>%
group_by(lab, subid) %>%
summarize(max_trial = max(trial_num[looking_time > 2]))
qplot(max_trial, fill = lab, data = final_trial)

Now, a histogram of looking time by trial number. Looking times appear to stay fairly long.
ggplot(d, aes(x = looking_time, fill = lab)) +
geom_histogram(binwidth = 2) +
facet_wrap(~trial_num)

We can look at this by age, too.
qplot(age_days, looking_time, col = lab, facets = ~ trial_num, data = d) +
geom_smooth(aes(group = 1), method = "lm", col = "black")

Plot means.
ms <- d %>%
group_by(lab, trial_num) %>%
multi_boot_standard(col = "looking_time", na.rm=TRUE)
ggplot(ms, aes(x = trial_num, y = mean, col = lab)) +
geom_line() +
geom_linerange(aes(ymin = ci_lower, ymax = ci_upper),
position = position_dodge(width = .1))

The following analyses don't compute difference scores; they treat IDS and ADS observations as independent, and so should likely be lower-powered. (Interested to hear people's thoughts as to whether they should be included.)
mk thoughts - I would lean toward no, UNLESS doing so is currently standard in the field. Serious question: what would we learn about IDS/ADS that we don't learn from the descriptive stats + difference-score tests?
ms <- d %>%
filter(trial_type != "Train",
looking_time != 0, !is.na(looking_time)) %>%
group_by(trial_num, trial_type) %>%
multi_boot_standard(col = "looking_time", na.rm=TRUE)
ggplot(ms, aes(x = trial_num, y = mean, col = trial_type)) +
geom_line() +
geom_linerange(aes(ymin = ci_lower, ymax = ci_upper),
position = position_dodge(width= .1)) +
ylab("Looking time (s)")Split by lab (which is really age now).
ms <- d %>%
filter(trial_type != "Train",
looking_time != 0, !is.na(looking_time)) %>%
group_by(lab, trial_num, trial_type) %>%
multi_boot_standard(col = "looking_time", na.rm=TRUE) %>%
ungroup %>%
mutate(lab = fct_relevel(lab, "ubc"))
ggplot(ms, aes(x = trial_num, y = mean, col = trial_type)) +
geom_smooth(se = FALSE, span = 2) +
facet_wrap(~lab) +
geom_pointrange(aes(ymin = ci_lower, ymax = ci_upper),
position = position_dodge(width= .1))+
ylab("Looking time (s)") +
scale_color_solarized(name = "Trial Type") +
theme(legend.position = "bottom")

Take a look at this in log space as well, following Csibra et al. (2015), "Statistical Treatment of Looking-Time Data." It doesn't change much, but is likely better.
ms <- d %>%
filter(trial_type != "Train",
looking_time != 0, !is.na(looking_time)) %>%
group_by(lab, trial_num, trial_type) %>%
mutate(log10_looking_time = log10(looking_time)) %>%
multi_boot_standard(col = "log10_looking_time", na.rm=TRUE)
ggplot(ms, aes(x = trial_num, y = mean, col = trial_type)) +
geom_smooth(se = FALSE, span = 2) +
facet_wrap(~lab) +
geom_pointrange(aes(ymin = ci_lower, ymax = ci_upper),
position = position_dodge(width= .1)) +
ylab("Log looking time (s)") +
scale_color_solarized(name = "Trial Type") +
theme(legend.position = "bottom")

Now do (perhaps) the more appropriate analysis: for each pair of trials, subtract to get a difference score. Again following Csibra et al., we take a difference of logs.
diffs <- d %>%
filter(trial_type != "Train",
looking_time != 0, !is.na(looking_time)) %>%
group_by(lab, subid, age_days, trial_num) %>%
filter(n() == 2) %>% # take only pairs that are complete
summarise(idspref = log10(looking_time[trial_type=="IDS"]) -
log10(looking_time[trial_type=="ADS"]))

What's the distributional form of these data? Is the spike near 0.0 from the Stanford 2 s kids?
qplot(idspref, data = diffs)

How do they change with trials?
ms_diff <- diffs %>%
group_by(lab, trial_num) %>%
multi_boot_standard(col = "idspref", na.rm=TRUE)
ggplot(ms_diff, aes(x = trial_num, y = mean)) +
geom_smooth(se = FALSE, span = 2) +
facet_wrap(~lab) +
geom_pointrange(aes(ymin = ci_lower, ymax = ci_upper),
position = position_dodge(width= .1)) +
ylab("IDS preference (log10 s)") +
geom_hline(yintercept = 0, lty = 2)

Or with age?
qplot(age_days, idspref, col = lab, group = 1, data = diffs) +
geom_smooth(method = "lm") +
geom_hline(yintercept = 0, lty = 2) +
ylab("IDS preference (s)") By age and by trial.
qplot(age_days, idspref, col = lab, group = 1, data = diffs) +
geom_smooth(method = "lm") +
facet_wrap(~trial_num) +
geom_hline(yintercept = 0, lty = 2) +
ylab("IDS preference (s)") I would very much recommend including random effects of item, but that info is not included with the data as is.
strong agree -mk
General question: is the goal of the analyses (1) to present what we believe to be the most appropriate analysis, (2) to reflect current practices, (3) to compare the two, or (4) something else? -mk
Using log-transformed looking time.
d_t_test1 <- d %>%
filter(trial_type != "Train",
looking_time != 0, !is.na(looking_time)) %>%
mutate(log_lt = log(looking_time)) %>%
group_by(subid,trial_type) %>%
summarise(log_lt = mean(log_lt)) %>%
group_by(subid) %>%
filter(n() == 2) %>%
summarize(log_lt_diff = log_lt[trial_type == "IDS"] - log_lt[trial_type == "ADS"])
t.test(d_t_test1$log_lt_diff, mu = 0)
##
## One Sample t-test
##
## data: d_t_test1$log_lt_diff
## t = 0.58108, df = 28, p-value = 0.5658
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## -0.1186563 0.2126356
## sample estimates:
## mean of x
## 0.04698964
As above, but perhaps it is simpler to compare conditions, since our mixed-effects models will use a condition predictor. Using log-transformed looking time.
d_t_test2 <- d %>%
filter(trial_type != "Train",
looking_time != 0, !is.na(looking_time)) %>%
mutate(log_lt = log(looking_time)) %>%
group_by(subid,trial_type) %>%
summarise(log_lt = mean(log_lt))
t.test(log_lt ~ trial_type, data = d_t_test2, paired = TRUE)
##
## Paired t-test
##
## data: log_lt by trial_type
## t = -0.58108, df = 28, p-value = 0.5658
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.2126356 0.1186563
## sample estimates:
## mean of the differences
## -0.04698964
d_lmer0 <- d %>%
  filter(trial_type != "Train",
         looking_time != 0, !is.na(looking_time)) %>%
  mutate(log_lt = log(looking_time),
         AgeC = (age_days - mean(age_days)) / sd(age_days)) %>%
  group_by(AgeC, subid, lab, trial_type) %>%
  summarize(log_lt = mean(log_lt)) %>%
  filter(n() == 2) %>% # complete IDS/ADS pairs only
  # NB: mutate (rather than summarize) keeps both the IDS and ADS rows, so each
  # subject's difference score appears twice (hence 58 obs for 29 subjects below)
  mutate(log_lt_diff = log_lt[trial_type == "IDS"] - log_lt[trial_type == "ADS"])
summary(lmer(log_lt_diff ~ 1 + (1 | lab), data = d_lmer0))
## Linear mixed model fit by REML ['lmerMod']
## Formula: log_lt_diff ~ 1 + (1 | lab)
## Data: d_lmer0
##
## REML criterion at convergence: 70
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -1.9742 -0.7591 0.3471 0.6902 1.5428
##
## Random effects:
## Groups Name Variance Std.Dev.
## lab (Intercept) 0.0000 0.0000
## Residual 0.1863 0.4316
## Number of obs: 58, groups: lab, 3
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 0.04699 0.05668 0.829
Planned regression: 1 + CentredAge * TrialType + (1 + CentredAge * TrialType | lab)

I removed the by-lab interaction to aid convergence.
summary(lmer(log_lt_diff ~ 1 + AgeC + (1 + AgeC | lab), data = d_lmer0))
## Linear mixed model fit by REML ['lmerMod']
## Formula: log_lt_diff ~ 1 + AgeC + (1 + AgeC | lab)
## Data: d_lmer0
##
## REML criterion at convergence: 71
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -1.9500 -0.8020 0.4602 0.6812 1.7157
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## lab (Intercept) 0.000e+00 0.000e+00
## AgeC 2.434e-20 1.560e-10 NaN
## Residual 1.801e-01 4.244e-01
## Number of obs: 58, groups: lab, 3
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 0.05309 0.05584 0.951
## AgeC -0.09806 0.05703 -1.720
##
## Correlation of Fixed Effects:
## (Intr)
## AgeC -0.064
Planned regression: 1 + (CentredAge + CentredAge^2) * TrialType + (1 + (CentredAge + CentredAge^2) * TrialType | lab)

I removed the by-lab interaction to aid convergence.
summary(lmer(log_lt_diff ~ 1 + poly(AgeC, 2) + (1 + poly(AgeC, 2) | lab), data = d_lmer0))
## Linear mixed model fit by REML ['lmerMod']
## Formula: log_lt_diff ~ 1 + poly(AgeC, 2) + (1 + poly(AgeC, 2) | lab)
## Data: d_lmer0
##
## REML criterion at convergence: 66.5
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -1.8393 -0.6905 0.4648 0.6084 1.7571
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## lab (Intercept) 0.000e+00 0.000e+00
## poly(AgeC, 2)1 1.969e-12 1.403e-06 NaN
## poly(AgeC, 2)2 8.245e-13 9.080e-07 NaN -0.94
## Residual 1.821e-01 4.268e-01
## Number of obs: 58, groups: lab, 3
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 0.04699 0.05604 0.838
## poly(AgeC, 2)1 -0.72981 0.42678 -1.710
## poly(AgeC, 2)2 0.26304 0.42678 0.616
##
## Correlation of Fixed Effects:
## (Intr) p(AC,2)1
## ply(AgC,2)1 0.000
## ply(AgC,2)2 0.000 0.000
We will fit a linear mixed effects model predicting all individual observations, with the structure:
log(looking.time) ~ trial.num * stimulus * age + (trial.num * stimulus | subid) + (trial.num * stimulus * age | lab)
NB. This is taken from the RRR. Does stimulus here refer to condition or item? I have taken it to refer to condition.
Interactions removed to aid convergence.
mk: why not mixed-effects structures in the other models as well? I think we have three sources of random variance we want to model here: random subids nested within random labs, and random stimuli (individual items). A sketch of that structure follows.
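One way to write it, fit to the d_lmer3 data frame constructed just below and assuming a hypothetical item column identifying the individual stimulus (the current data files do not include one). lab/subid expands to by-lab plus by-subject-within-lab terms, and item is crossed with both; with only three labs and eight item pairs in the pilot, this would likely come out singular, but it shows the intended structure.

summary(lmer(log_lt ~ 1 + AgeC * trial_type * trial_numC +
               (1 + trial_type + trial_numC | lab/subid) + # subjects nested in labs
               (1 | item),                                 # crossed random item effects
             data = d_lmer3))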
d_lmer3 <- d %>%
  filter(trial_type != "Train",
         looking_time != 0, !is.na(looking_time)) %>%
  mutate(log_lt = log(looking_time),
         AgeC = (age_days - mean(age_days)) / sd(age_days),
         trial_numC = (trial_num - mean(trial_num)) / sd(trial_num))
summary(lmer(log_lt ~ 1 + AgeC * trial_type * trial_numC +
               (1 + trial_type + trial_numC | subid) +
               (1 + AgeC + trial_type + trial_numC | lab),
             data = d_lmer3))
## Linear mixed model fit by REML ['lmerMod']
## Formula: log_lt ~ 1 + AgeC * trial_type * trial_numC + (1 + trial_type +
## trial_numC | subid) + (1 + AgeC + trial_type + trial_numC | lab)
## Data: d_lmer3
##
## REML criterion at convergence: 963.4
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -4.1974 -0.6533 0.0697 0.7041 2.4590
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## subid (Intercept) 0.0987429 0.31423
## trial_typeIDS 0.0326380 0.18066 0.11
## trial_numC 0.0215017 0.14663 0.41 -0.41
## lab (Intercept) 0.0194895 0.13960
## AgeC 0.0217100 0.14734 -1.00
## trial_typeIDS 0.0006276 0.02505 -1.00 1.00
## trial_numC 0.0172508 0.13134 1.00 -1.00 -1.00
## Residual 0.4772065 0.69080
## Number of obs: 419, groups: subid, 29; lab, 3
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 1.55757 0.12455 12.505
## AgeC -0.07790 0.13017 -0.598
## trial_typeIDS 0.05592 0.07741 0.722
## trial_numC -0.13658 0.09391 -1.454
## AgeC:trial_typeIDS -0.10143 0.07705 -1.316
## AgeC:trial_numC 0.13437 0.07782 1.727
## trial_typeIDS:trial_numC -0.02411 0.06856 -0.352
## AgeC:trial_typeIDS:trial_numC -0.17779 0.06889 -2.581
##
## Correlation of Fixed Effects:
## (Intr) AgeC tr_IDS trl_nC AgC:_IDS AgC:_C t_IDS:
## AgeC -0.642
## tril_typIDS -0.331 0.125
## trial_numC 0.573 -0.518 -0.196
## AgC:trl_IDS 0.065 -0.262 -0.004 -0.014
## AgC:trl_nmC -0.311 0.366 -0.016 0.018 -0.144
## trl_tIDS:_C 0.009 -0.007 0.018 -0.341 0.074 0.002
## AgC:_IDS:_C -0.008 0.018 0.074 0.004 -0.005 -0.409 0.060
These analyses were planned to be by preference score, rather than by interactions. If we are able to include item names in the data files, I would suggest going by interactions instead, so that we can account for item effects with regard to, e.g., missing data. Can't we measure item effects on preference scores too, since the items are paired across condition? -mk (A sketch of this follows.)
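A sketch of that idea, assuming a stim_num column (the item-pair number from the stimulus names, e.g., the 5 in IDS-5/ADS-5) were retained in the merged data; it currently is not:

diffs_item <- d %>%
  filter(trial_type != "Train",
         looking_time != 0, !is.na(looking_time)) %>%
  group_by(lab, subid, stim_num) %>%
  filter(n() == 2) %>% # complete IDS/ADS pairs only
  summarise(idspref = log10(looking_time[trial_type == "IDS"]) -
              log10(looking_time[trial_type == "ADS"]))

summary(lmer(idspref ~ 1 + (1 | lab/subid) + (1 | stim_num), data = diffs_item))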
Note that this analysis is perfectly confounded with lab at the moment, so it will not converge.
ids.pref ~ method + (1 | lab)
d_moder1 <- d %>%
filter(trial_type != "Train",
looking_time != 0, !is.na(looking_time)) %>%
mutate(log_lt = log(looking_time),AgeC = (age_days - mean(age_days))/sd(age_days)) %>%
group_by(subid,trial_type,method,lab,AgeC) %>%
summarise(log_lt = mean(log_lt)) %>%
group_by(subid,method,lab,AgeC) %>%
filter(n() == 2) %>%
summarize(log_lt_diff = log_lt[trial_type == "IDS"] - log_lt[trial_type == "ADS"])
summary(lmer(log_lt_diff ~ method + (1 | lab), data = d_moder1))
## Linear mixed model fit by REML ['lmerMod']
## Formula: log_lt_diff ~ method + (1 | lab)
## Data: d_moder1
##
## REML criterion at convergence: 38.4
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -1.8138 -0.6947 0.4425 0.6890 1.6323
##
## Random effects:
## Groups Name Variance Std.Dev.
## lab (Intercept) 0.007103 0.08428
## Residual 0.197123 0.44399
## Number of obs: 29, groups: lab, 3
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) -0.01183 0.15819 -0.075
## methodHPP 0.01198 0.23244 0.052
## methodsingle-screen 0.17755 0.23244 0.764
##
## Correlation of Fixed Effects:
## (Intr) mthHPP
## methodHPP -0.681
## mthdsngl-sc -0.681 0.463
## convergence code: 0
## Hessian is numerically singular: parameters are not uniquely determined
Note that this analysis is perfectly confounded with lab at the moment, so it will not converge.
ids.pref ~ method * age + (1 +age | lab)
summary(lmer(log_lt_diff ~ method * AgeC + (1 + AgeC | lab), data = d_moder1))
## Linear mixed model fit by REML ['lmerMod']
## Formula: log_lt_diff ~ method * AgeC + (1 + AgeC | lab)
## Data: d_moder1
##
## REML criterion at convergence: 34.5
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -1.6696 -0.6983 0.2308 0.7148 1.1916
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## lab (Intercept) 0.2083 0.4564
## AgeC 0.2166 0.4654 0.20
## Residual 0.2083 0.4564
## Number of obs: 29, groups: lab, 3
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 0.4926 0.7732 0.637
## methodHPP -0.4819 0.9110 -0.529
## methodsingle-screen -1.1800 1.3345 -0.884
## AgeC -0.4641 0.7283 -0.637
## methodHPP:AgeC 0.2916 0.9698 0.301
## methodsingle-screen:AgeC -0.2532 1.1916 -0.212
##
## Correlation of Fixed Effects:
## (Intr) mthHPP mthds- AgeC mHPP:A
## methodHPP -0.849
## mthdsngl-sc -0.579 0.492
## AgeC -0.532 0.451 0.308
## mthdHPP:AgC 0.399 -0.305 -0.231 -0.751
## mthdsng-:AC 0.325 -0.276 0.341 -0.611 0.459
The general idea/approach is to specify a generative model of data inclusion (e.g., which subids to include, which trials to include, which looking-time windows to analyze), data analysis (preference vs. difference-from-chance, t-test, etc., plus more things I haven't thought of), and optional stopping. We would then run simulations fitting this model to each lab while maximizing effect size (essentially simulating the "worst case" of intentional p-hacking), along with some more limited models that, e.g., select windows of analysis after data collection, stop early, or drop participants in a less nefarious way.
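As a toy example of the optional-stopping component (all parameters hypothetical), this sketch estimates the false-positive rate when null data are re-tested after every added participant and collection stops at the first p < .05:

set.seed(1)

phack_once <- function(n_min = 10, n_max = 30) {
  prefs <- rnorm(n_max) # null data: no IDS preference
  for (n in n_min:n_max) {
    if (t.test(prefs[1:n], mu = 0)$p.value < .05) return(TRUE) # stop at first "hit"
  }
  FALSE
}

mean(replicate(1000, phack_once())) # well above the nominal .05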
If we have interesting moderator findings, it would also be nice to see, e.g., what kinds of sensible measurements you can make on standardized large datasets that you might not be able to make on heterogeneous data.
Practical recommendations:

- Standardize data submissions into the three spreadsheets described above: lab, subject, and trial data.

Conclusions: It looks like we're seeing some IDS preference in each group, albeit at a different part of the experiment for each age/lab combination.