This document reports a reanalysis of Berkowitz et al. (2015), Science. The original paper describes a randomized controlled trial of a learning app. Children were randomly assigned to math and reading app groups; their learning outcomes on standardized math and reading tests were assessed after a period of app usage. A math anxiety measure was also collected for children’s parents. The authors write that:
The intervention, short numerical story problems delivered through an iPad app, significantly increased children’s math achievement across the school year compared to a reading (control) group, especially for children whose parents are habitually anxious about math.
To state up front, I think this paper has a number of important strengths, including its randomized design, its large sample, and the public release of its full dataset.
It’s also potentially a very exciting study, because – as the authors note – there is scant evidence to date on the efficacy of educational apps. For this reason, I was eager to dig into the details of the study and make sure that I understood the degree to which the results support the interpretation given in the writeup.
In what follows, I “kick the tires” on the dataset provided by Berkowitz et al., investigating the individual measures and the relationship between them. I am deeply grateful to the authors for publishing their entire dataset and want to emphasize that this practice is a critical part of ensuring scientific progress. While I may disagree with some aspects of the interpretation of the study results, I nevertheless believe that the study was conducted appropriately and that the field is well-served by both this study being done and by the release of the data.
This document is a joint exploration and reanalysis. My intention was to apply a “naive” analytic strategy, that is, to analyze the data as though I were coming at them without having read the paper. In doing that, I hoped to ascertain whether some of the results in the paper were obtained only via particular analytic strategies (e.g., a specific way of discretizing a continuous variable) or whether they were robust to the details of these decisions.
To summarize the results of the reanalysis: In my view, the Berkowitz et al. study does not show that the intervention as a whole was successful, because there was no main effect of the intervention on performance. Instead, it shows that – in some analyses – more use of the math app was related to greater growth in math performance, a dose-response relationship that is subject to significant endogeneity issues (because parents who use math apps more are potentially different from those who don’t). In addition, there is very limited evidence for a relationship of this growth to math anxiety. In sum, this is a well-designed study that nevertheless shows only tentative support for an app-based intervention.
NOTE: Some aspects of this reanalysis will be published as a Technical Comment on Berkowitz et al. To facilitate discussion I am adding a section to the end of my report that reproduces the analyses in that published version, in order. This will lead to some duplication but hopefully also make it easier to understand the details of the comment and how they relate to this report.
To begin our analysis, let’s load the data from the Supplementary Materials (retitling conditions).
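One note on dependencies before we start: the code below appears to assume that (at least) the following packages are attached. This is my reconstruction of the setup, not part of the original report; multi_boot_standard presumably comes from the authors’ langcog helper package, and ggcorplot looks like a lab helper function rather than a CRAN package.

library(dplyr)    # data manipulation and the pipe
library(tidyr)    # gather, spread, separate
library(stringr)  # str_sub
library(ggplot2)  # qplot, ggplot
library(ggthemes) # scale_fill_solarized, scale_colour_solarized
library(broom)    # tidy
library(knitr)    # kable
library(lme4)     # lmer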
d <- read.csv("aac7427-Accessory-Data-File-S1.csv") %>%
mutate(condition = ifelse(cond.dum == 1, "Math App","Reading App"))
Next, let’s explore the dataset a little bit just so we know what the variables look like.
We begin with app usage – are there baseline differences in usage between reading app users and math app users?
ms <- d %>%
group_by(condition) %>%
multi_boot_standard(column = "avg.use", na.rm=TRUE)
# hacks to get vertical ribbons
ps <- ms %>%
gather(measure, score, ci_lower, ci_upper)
ps <- bind_rows(ps, ps %>% arrange(desc(measure))) %>%
mutate(y = c(100, 100, 100, 100, 0, 0, 0, 0))
# plot
qplot(avg.use,
fill = condition,
data = d) +
geom_polygon(data = ps, aes(x = score, y = y, fill = condition), alpha = .5) +
geom_vline(data = ms, aes(xintercept = mean), col = "black", lty = 2) +
scale_fill_solarized() +
scale_colour_solarized() +
ylim(c(0,100)) +
xlab("Rate of app usage (times/week)")
kable(tidy(with(d, t.test(avg.use[condition == "Math App"],
avg.use[condition == "Reading App"]))))
estimate | estimate1 | estimate2 | statistic | p.value | parameter | conf.low | conf.high |
---|---|---|---|---|---|---|---|
-0.3447472 | 1.177843 | 1.52259 | -3.093613 | 0.002209 | 242.5741 | -0.5642576 | -0.1252369 |
Turns out the math folks used the app significantly less than the reading folks - maybe not unexpected given that the math app is probably less fun? But this is not particularly critical.
It seems clear from the outset that the study was designed to look at math, rather than reading, since the math group is substantially oversampled (2.5x the participants). So we should really be thinking of the reading app as a control, not as a failed second intervention. This is critical since this design decision allows us to avoid looking for crossover interactions between condition, measure, and time (e.g., performance grew more on the measure for which the child received training) and licenses us to look only at condition by time interactions (e.g., growth in math performance for the math app group).
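As a quick sanity check on that 2.5x figure, we can simply tabulate assignments (a one-line sketch; exact counts will depend on which exclusions have been applied at this point).

# how many children were assigned to each condition?
table(d$condition)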
Let’s next look at the math anxiety parent questionnaire measure.
ms <- d %>%
group_by(condition) %>%
multi_boot_standard(column = "ParentMAaverage.Fa13", na.rm=TRUE)
# hacks to get vertical ribbons
ps <- ms %>%
gather(measure, score, ci_lower, ci_upper)
ps <- bind_rows(ps, ps %>% arrange(desc(measure))) %>%
mutate(y = c(100, 100, 100, 100, 0, 0, 0, 0))
# plot
qplot(ParentMAaverage.Fa13,
fill = condition,
data = d) +
geom_polygon(data = ps, aes(x = score, y = y, fill = condition), alpha = .5) +
geom_vline(data = ms, aes(xintercept = mean), col = "black", lty = 2) +
scale_fill_solarized() +
scale_colour_solarized() +
ylim(c(0,100)) +
xlab("Parent Math Anxiety")
kable(tidy(with(d, t.test(ParentMAaverage.Fa13[condition == "Math App"],
ParentMAaverage.Fa13[condition == "Reading App"]))))
estimate | estimate1 | estimate2 | statistic | p.value | parameter | conf.low | conf.high |
---|---|---|---|---|---|---|---|
0.0891061 | 2.201163 | 2.112057 | 1.140715 | 0.2549957 | 270.9891 | -0.0646818 | 0.2428939 |
No major differences between conditions here, which is good from the perspective of successful randomization.
One thing from the SI bears discussion. The authors write that:
The first session at both the beginning and end of the school year consisted of several different achievement measures. In the current study, we focus on the Woodcock-Johnson-III Applied Problems Subtest and the Woodcock-Johnson-III Letter-Word Identification Subtest, nationally normed measures of math and reading achievement, respectively (14). The second session consisted of several different academic attitude measures. (p. 3)
I found this statement potentially worrisome because it implied that there might have been other math or reading measures that did not show an effect. (On the other hand, perhaps the other measures could have been unrelated and not predicted to show effects). These weren’t in the dataset or the paper, however, so I don’t have anything more to say about this issue here.
Let’s begin examining the measures that we do have by looking at the relationship between the WJ standardized math and reading (decoding) measures to understand their reliability.
dc <- d %>%
select(WJ.Letter.Word.ID.W.Score.Fa13,WJ.LetterWord.Wscore.Sp14,
WJ.Applied.Problems.W.Score.Fa13, WJ.Applied.Problems_W.Score_Sp14)
dc %>%
filter(complete.cases(dc)) %>%
ggcorplot()
So all of these measures hang together pretty well, as you’d hope (note I haven’t done exclusions yet, so this analysis includes some administrations that were incorrect or incomplete).
In addition, reading and math are quite correlated. Interesting thing about letter word ID – there’s almost no variance left to explain from fall to spring. So that limits the detectable effects of the intervention. There’s a bit more variance left over in the math measure (.44 vs. .24), but we don’t know if that’s measurement noise or signal.
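If I am reading those numbers correctly as the proportion of spring variance left unexplained by fall scores (one minus the squared fall-to-spring correlation), here is a minimal sketch of how to recompute them from the columns selected above.

# fall-to-spring correlations and residual variance proportions (1 - r^2)
r_math <- with(dc, cor(WJ.Applied.Problems.W.Score.Fa13,
                       WJ.Applied.Problems_W.Score_Sp14,
                       use = "complete.obs"))
r_read <- with(dc, cor(WJ.Letter.Word.ID.W.Score.Fa13,
                       WJ.LetterWord.Wscore.Sp14,
                       use = "complete.obs"))
round(c(math = 1 - r_math^2, reading = 1 - r_read^2), 2)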
One other thing I was concerned about was the transformations between measures. How do these end up panning out? Let’s look at the correlations between Ws and Grade Equivalents. Take math as the key variable.
qplot(WJ.Applied.Problems.W.Score.Fa13,
WJ.Applied.Problems.GE.Fa13,
geom = "jitter",
data = d)
qplot(WJ.Applied.Problems_W.Score_Sp14,
WJ.Applied.Problem_GE_Sp14,
geom = "jitter",
data = d)
OK, so the grade level equivalence transform squishes the lower scores and expands the higher ones. (It also does different things for the two score ranges, fall and spring). That’s useful to know because it will mean that the extremal values will look different in the GE analyses compared with the raw W scores. It turns out that this makes a real difference in a lot of the analyses below.
One of the tricky things in this dataset is figuring out how to make the exclusions line up with the paper. The SI of the paper notes that:
In addition to the children excluded from the relevant math or reading analyses because they failed to hit basal or ceiling on the measures as detailed above, 32 twins were excluded since they were often split between different classrooms, and therefore would likely have been assigned to different groups and thus would have had access to both apps. An additional 21 families dropped out by calling to tell us that they were moving to another district, or that they no longer wished to be in the study: 16 in the math group (3.8%); 5 in the reading group (3.0%). Moreover, for an additional 31 children, app data suggested they never opened the app. We left these children in as 0 users. However, excluding them from the analyses does not change the significance level of any of the reported outcomes. Finally, it is important to note that 26 families consented to be in the study, but failed to pick up an iPad Mini following fall testing. Given that these families did not start the study, we did not include them in our original 587 children number. All other children were included in the original 587 tally.
So we need to exclude children flagged by the `twin` and `year1dropout` variables.

We also see:
34 children were excluded from math analyses due to experimenter error leading to failure to achieve either a basal or ceiling on this subtest (24 in the fall: 16 in the math app group, 8 in the reading app group; 10 in the spring: 9 math app group, 1 reading app group).
The place this presumably would be worked out is in the filter variables (e.g., `wjappliedfa13_filter`). These are binary variables that presumably indicate whether a child’s scores should be included, i.e., whether the administration reached both basal and ceiling items.
sum(d$wjappliedfa13_filter == 0)
## [1] 26
sum(d$WJ.applied.Sp14.filter == 0)
## [1] 31
These numbers don’t match perfectly, though: we get 26 in the fall, and 31 in the spring. Not sure how to make that match up. Perhaps it has to do with computing those numbers after dropping other observations?
sum(d$wjappliedfa13_filter[d$twin == 0 & d$year1dropout == 0] == 0)
## [1] 24
sum(d$WJ.applied.Sp14.filter[d$twin == 0 & d$year1dropout == 0] == 0)
## [1] 11
Much closer, though we do get 11 in the spring (not 10). Let’s do the same for reading:
25 children were excluded from reading analyses due to experimenter error leading to failure to achieve either a basal or ceiling on this subtest (7 in the fall: 4 math app group, 3 reading app group; 18 in the spring: 12 math app group, 6 reading app group).
sum(d$WJletterwordFa13_filter == 0)
## [1] 9
sum(d$WJ.letterword.Sp14.filter == 0)
## [1] 39
sum(d$WJletterwordFa13_filter[d$twin == 0 & d$year1dropout == 0] == 0)
## [1] 7
sum(d$WJ.letterword.Sp14.filter[d$twin == 0 & d$year1dropout == 0] == 0)
## [1] 19
OK, we’re one off, but it’s looking pretty good. Note there is a separate issue in exclusions for analyses that use math anxiety: where math anxiety questionnaire data is missing (~20% of cases), you have to exclude data. I’m assuming the authors just dropped non-responding families for these analyses.
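A quick sketch of that missingness rate, computed after the twin and dropout exclusions (this is my own check, not a number from the paper):

# proportion of retained children missing the parent math anxiety measure
with(subset(d, twin == 0 & year1dropout == 0),
     mean(is.na(ParentMAaverage.Fa13)))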
The paper calls one of their analyses the “intention-to-treat” analysis – but that isn’t quite right. The authors are still median-splitting on the math anxiety variable, which means the analysis does not measure the effects of the intervention on the full treated sample. The authors may contend that the split on math anxiety was the planned analysis (this contention would be consistent with the fact that they do not even report grand means for the groups without the math anxiety split). But I would say that this contention is an example of the “garden of forking paths”. Had the intervention resulted in a gain for all children in the math app group, independent of anxiety, I would expect that the authors would have chosen a different analytic strategy, namely reporting the unsplit data. Thus, this choice of a split variable is data dependent and subject to an inflation of Type I error. And the median-split analysis is thus not a true intention-to-treat analysis.
In addition, I had two other critiques of this analysis in the paper. First, the discretization of continuous variables – the strategy for dealing with math anxiety in the reported analysis – is not recommended. Beyond its negative consequences for statistical power, discretization adds another bit of post-hoc flexibility: often a median split (or a split into several bins) signals exploration of the data subsequent to a (failed) continuous analysis.
Second, the “intention-to-treat” analyses reported in the paper use an invalid inferential strategy. The paper reports models fit independently to two halves of the median split and then reports that while the appropriate term in the model was significant for the high anxious kids, it wasn’t significant for the low anxious kids. Whether coefficients for two different models are significant or not significant is an example of the “difference between significant and not significant is not itself significant” fallacy. A test for the interaction is required, since the two effects may not actually differ from one another.
Here I report a couple of attempts at a naive intention-to-treat analysis. I’ll start with a straight comparison of performance: no median splits, no difference scores, just a look for mean differences. First, let’s bring the data closer to tidy format.
dt.excl <- d %>%
mutate(applied.w.fa13 = ifelse(wjappliedfa13_filter,
WJ.Applied.Problems.W.Score.Fa13,
NA),
applied.w.sp14 = ifelse(WJ.applied.Sp14.filter,
WJ.Applied.Problems_W.Score_Sp14,
NA),
reading.w.fa13 = ifelse(WJletterwordFa13_filter,
WJ.Letter.Word.ID.W.Score.Fa13,
NA),
reading.w.sp14 = ifelse(WJ.letterword.Sp14.filter,
WJ.LetterWord.Wscore.Sp14,
NA)) %>%
filter(!twin, !year1dropout) %>%
gather(measure, score,
applied.w.fa13, applied.w.sp14,
reading.w.fa13, reading.w.sp14) %>%
separate(measure, c("measure","time"), sep="w\\.") %>%
mutate(year = as.factor(str_sub(time, 4, 5)),
measure = ifelse(grepl( "applied", measure), "Math", "Reading"))
Let’s detour briefly to check the means now that we are doing exclusions. Table S1 gives means, albeit using a median split on math anxiety (i.e., not grand means).
(The Table S1 panels for the low-anxious and high-anxious groups are not reproduced here.)
dt.excl %>%
filter(measure == "Math") %>%
filter(!is.na(parentMA_mediansplit)) %>%
group_by(parentMA_mediansplit, condition, year) %>%
summarise(mean = mean(score, na.rm=TRUE),
sd = sd(score, na.rm=TRUE))
## Source: local data frame [8 x 5]
## Groups: parentMA_mediansplit, condition [?]
##
## parentMA_mediansplit condition year mean sd
## (dbl) (chr) (fctr) (dbl) (dbl)
## 1 1 Math App 3 462.7468 18.71507
## 2 1 Math App 4 478.4348 21.11036
## 3 1 Reading App 3 458.8571 18.08212
## 4 1 Reading App 4 475.3134 19.25602
## 5 2 Math App 3 459.0562 15.98633
## 6 2 Math App 4 474.3836 18.67978
## 7 2 Reading App 3 457.4800 13.56352
## 8 2 Reading App 4 469.2157 18.68402
Great: we get only minor numerical differences between the data I’m using and the data reported by the authors (<= 1 W score unit). OK, back on track. Now let’s graph the intention-to-treat comparison: that is, are there differences between the randomly assigned conditions at the end of the year?
ms.excl <- dt.excl %>%
group_by(condition, measure, year) %>%
multi_boot_standard(column="score", na.rm=TRUE)
ggplot(ms.excl, aes(x=year, y=mean, col=condition,
group=condition)) +
facet_grid(.~measure) +
geom_line() +
geom_pointrange(aes(ymax=ci_upper, ymin=ci_lower),
position = position_dodge(width=.05)) +
scale_colour_solarized()
Hmm, not that much of an effect. Note that throughout I’m using 95% confidence intervals, computed by non-parametric bootstrap, so the error bars will always look more conservative than the SEM error bars used in the original paper.
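For readers unfamiliar with multi_boot_standard, the sketch below is a minimal stand-in for the kind of non-parametric percentile bootstrap it computes (my own illustration, not the package’s implementation).

# minimal percentile bootstrap 95% CI for a mean (illustrative only)
boot_ci <- function(x, n_boot = 1000) {
  x <- x[!is.na(x)]
  boot_means <- replicate(n_boot, mean(sample(x, replace = TRUE)))
  quantile(boot_means, c(.025, .975))
}
boot_ci(d$avg.use[d$condition == "Math App"])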
Let’s do the same exercise with the grade-equivalent (GE) scores.
dt.excl.ge <- d %>%
mutate(applied.w.fa13 = ifelse(wjappliedfa13_filter,
WJ.Applied.Problems.GE.Fa13,
NA),
applied.w.sp14 = ifelse(WJ.applied.Sp14.filter,
WJ.Applied.Problem_GE_Sp14,
NA),
reading.w.fa13 = ifelse(WJletterwordFa13_filter,
WJ.Letter.Word.ID.GE.Fa13,
NA),
reading.w.sp14 = ifelse(WJ.letterword.Sp14.filter,
WJ.LetterWord_GE_Sp14,
NA)) %>%
filter(!twin, !year1dropout) %>%
gather(measure, score,
applied.w.fa13, applied.w.sp14,
reading.w.fa13, reading.w.sp14) %>%
separate(measure, c("measure","time"), sep="w\\.") %>%
mutate(year = as.factor(str_sub(time, 4, 5)),
measure = ifelse(grepl( "applied", measure), "Math", "Reading"))
ms.excl.ge <- dt.excl.ge %>%
group_by(condition, measure, year) %>%
multi_boot_standard(column="score", na.rm=TRUE)
ggplot(ms.excl.ge, aes(x=year, y=mean, col=condition,
group=condition)) +
facet_grid(.~measure) +
geom_line() +
geom_pointrange(aes(ymax=ci_upper, ymin=ci_lower),
position = position_dodge(width=.05)) +
scale_colour_solarized()
Interestingly, there is nothing compelling in this analysis, even with the GE measure. So we have no evidence of a treatment effect in the most stripped-down analysis. The thing that pops out is that the math group as a whole seems to have scored a bit better on everything (i.e., an unhappy randomization).
Now let’s turn to the longitudinal (difference score) analyses. Maybe we’re seeing between-subjects variability swamping the effects here, so baseline subtraction via difference scores or longitudinal models are both warranted.
Let’s start by doing a straight longitudinal difference analysis, in other words, take the difference scores from the filtered data and see if we find greater growth for the math group with the math app.
First do this with raw W scores.
diffs.excl <- dt.excl %>%
select(ChildID, condition, year, measure, score) %>%
mutate(measure_year = interaction(measure, year)) %>%
select(-year, -measure) %>%
spread(measure_year, score) %>%
mutate(math_gain = Math.4 - Math.3,
reading_gain = Reading.4 - Reading.3) %>%
select(-Math.3, -Math.4, -Reading.3, -Reading.4) %>%
gather(measure, score, math_gain, reading_gain)
ms.diffs.excl <- diffs.excl %>%
group_by(condition, measure) %>%
multi_boot_standard(column = "score", na.rm=TRUE)
ggplot(ms.diffs.excl, aes(x = measure, y = mean,
ymin = ci_lower, ymax = ci_upper,
fill = condition)) +
geom_bar(stat="identity", position = "dodge") +
geom_linerange(position = position_dodge(width = .9)) +
scale_fill_solarized()
This analysis looks sort of numerically correct - greater gains in the math group for math, greater gains in the reading group for reading. But notice that the confidence intervals are quite large. Can we do the same thing with grade equivalents?
diffs.excl.ge <- dt.excl.ge %>%
select(ChildID, condition, year, measure, score) %>%
mutate(measure_year = interaction(measure, year)) %>%
select(-year, -measure) %>%
spread(measure_year, score) %>%
mutate(math_gain = Math.4 - Math.3,
reading_gain = Reading.4 - Reading.3) %>%
select(-Math.3, -Math.4, -Reading.3, -Reading.4) %>%
gather(measure, score, math_gain, reading_gain)
ms.diffs.excl.ge <- diffs.excl.ge %>%
group_by(condition, measure) %>%
multi_boot_standard(column = "score", na.rm=TRUE) %>%
ungroup %>%
mutate(measure = factor(measure,
levels = c("math_gain","reading_gain"),
labels = c("WJ Applied Problems\n(Math)",
"WJ Letter-Word\n(Reading)")),
Condition = condition)
# quartz()
ggplot(ms.diffs.excl.ge, aes(x = measure, y = mean,
ymin = ci_lower, ymax = ci_upper,
fill = Condition)) +
geom_bar(stat="identity", position = "dodge") +
geom_linerange(position = position_dodge(width = .9)) +
scale_fill_solarized() +
ylab("Improvement (Grade Equivalents)") +
xlab("Measure") + ylim(c(0,1))
This crossover goes away, though the math effect actually looks a little bit bigger. Let’s try a simple post-hoc t-test – we can see if it holds up in the longitudinal analysis later.
kable(tidy(with(diffs.excl.ge,
t.test(score[condition == "Math App" &
measure == "math_gain"],
score[condition == "Reading App" &
measure == "math_gain"], var.equal = TRUE))))
estimate1 | estimate2 | statistic | p.value | parameter | conf.low | conf.high |
---|---|---|---|---|---|---|
0.852973 | 0.77 | 1.061383 | 0.2890304 | 498 | -0.0706193 | 0.2365653 |
kable(tidy(with(diffs.excl.ge,
t.test(score[condition == "Math App" &
measure == "reading_gain"],
score[condition == "Reading App" &
measure == "reading_gain"], var.equal = TRUE))))
estimate1 | estimate2 | statistic | p.value | parameter | conf.low | conf.high |
---|---|---|---|---|---|---|
0.8337731 | 0.8192308 | 0.2728889 | 0.7850497 | 507 | -0.0901546 | 0.1192392 |
So no, this difference is not significant, unfortunately, even uncorrected for multiple comparisons.
Next, let’s make use of the app usage and math anxiety measures. Note that there is some missing data on math anxiety, as reported. So we will be filtering for that for those analyses and our effective N will go down.
We begin by merging the math anxiety and usage data back into the dataset; we’ll also merge back in the GEs with the W scores, since we’ll want to check both of these in several analyses.
diffs.full <- left_join(diffs.excl, d %>%
select(ChildID, ParentMAaverage.Fa13, parentMA_mediansplit,
avg.use, use.012.groups)) %>%
left_join(diffs.excl.ge %>% rename(score.ge = score))
We’ll start by looking at gains relative to app usage. Here we do see a real trend toward greater math gains for those who used the math app more (a dose-response relationship), and it is not mirrored in the reading scores.
ggplot(diffs.full, aes(x = avg.use, y = score, col = measure)) +
facet_grid(.~condition) +
geom_point() +
geom_smooth(method = "lm") +
scale_colour_solarized() +
xlab("Average app usage") +
ylab("Change in W score")
Let’s do the same thing with GE scores. The result is much more striking here! It’s surprising that the grade-equivalent transform does so much; this result must be partially driven by the extremal values.
ggplot(diffs.full, aes(x = avg.use, y = score.ge, col = measure)) +
facet_grid(.~condition) +
geom_point() +
geom_smooth(method = "lm") +
scale_colour_solarized() +
xlab("Average app usage") +
ylab("Change in GE score")
Note that this figure looks pretty similar in terms of numbers to Figure 1 from the paper, though there are several differences from the figure as published: in particular, the published version matches participants and classrooms across conditions and shows model fits rather than the raw data. None of this is necessarily bad; I’m just noting the differences (though I prefer showing the data rather than model fits alone). In this reanalysis, I’m not going to report any participant or classroom matching across conditions, as this seems to me to be a subsidiary analysis that has a large number of analytic decisions baked into it (e.g., we could demographically correct the intervention effect estimate as well, but how do we decide what the right correction is?).
Let’s remake the right side of Figure 1 (the unmatched side), just to see how it looks with all the data and the continuous analysis.
ggplot(filter(diffs.full,
measure == "math_gain"),
aes(x = avg.use, y = score.ge, col = condition)) +
geom_point() +
geom_smooth(method = "lm") +
scale_colour_solarized() +
xlab("Average app usage") +
ylab("Change in GE score")
The math gain actually looks a little less good than before, mostly just because of baseline variability being greater in the reading group (due to smaller N).
Now we look at effects of math anxiety on change scores. First, examine raw W scores.
ggplot(diffs.full, aes(x = ParentMAaverage.Fa13, y = score, col = measure)) +
facet_grid(.~condition) +
geom_point() +
geom_smooth(method = "lm") +
scale_colour_solarized() +
xlab("Parent math anxiety") +
ylab("Change in W score")
As we might expect, it seems like there are some negative relationships between math gain and anxiety, such that higher anxiety parents had kids who gained less.
Next with GE scores.
ggplot(diffs.full, aes(x = ParentMAaverage.Fa13, y = score.ge, col = measure)) +
facet_grid(.~condition) +
geom_jitter() +
geom_smooth(method = "lm") +
scale_colour_solarized() +
xlab("Parent Math Anxiety") +
ylab("Improvement (Grade Equivalents)")
Overall, I don’t see much going on here. In both cases the trend is towards less gain with more math anxiety, though, and again this is slightly clearer with GEs than with W scores.
One more plot, this time eliminating reading.
diffs.full$Condition <- diffs.full$condition
ggplot(filter(diffs.full, measure == "math_gain"),
aes(x = ParentMAaverage.Fa13, y = score.ge, col = Condition)) +
geom_jitter(width = .01, height = .01) +
geom_smooth(method = "lm") +
scale_colour_solarized() +
xlab("Parent Math Anxiety") +
ylab("Improvement (Grade Equivalents)")
The second major claim of the paper concerned the relationship between app usage, math anxiety, and outcomes. In my view, these analyses had a number of the statistical issues described above (the difference between significant and not significant, discretization, etc.). None of these analyses tests for differential outcomes in the experimental vs. control condition.
Let’s go about roughly replicating the analysis in Figure 2 of the paper. Again start with W scores. We will need a discrete version of the math anxiety measure. Let’s look at the median split.
diffs.full$parentMA_mediansplit <- factor(diffs.full$parentMA_mediansplit,
levels = c(1,2),
labels = c("Low Anxiety", "High Anxiety"))
qplot(ParentMAaverage.Fa13,
fill = factor(parentMA_mediansplit),
binwidth = .25,
data = filter(diffs.full, !is.na(parentMA_mediansplit))) +
scale_fill_solarized()
Now the plot.
ggplot(filter(diffs.full,
!is.na(parentMA_mediansplit)),
aes(x = avg.use, y = score, col = measure)) +
facet_grid(parentMA_mediansplit~condition) +
geom_point() +
geom_smooth(method = "lm") +
scale_colour_solarized() +
xlab("Average app usage") +
ylab("Change in W score")
And GEs.
ggplot(filter(diffs.full,
!is.na(parentMA_mediansplit)),
aes(x = avg.use, y = score.ge, col = measure)) +
facet_grid(parentMA_mediansplit~condition) +
geom_point() +
geom_smooth(method = "lm") +
scale_colour_solarized() +
xlab("Average app usage") +
ylab("Change in GE score")
Neither of these two figures is particularly similar to Figure 2 (though again the GEs seem to give a much cleaner picture). It looks to me like there is the same general dose-response relationship with app usage for both low- and high-anxious parents. Let’s get rid of the reading scores (which never show many effects, for reasons discussed above and in the SI) and zoom in a bit.
ggplot(filter(diffs.full,
!is.na(parentMA_mediansplit)),
aes(x = avg.use, y = score.ge, col = condition)) +
facet_grid(.~parentMA_mediansplit) +
geom_point() +
geom_smooth(method = "lm") +
scale_colour_solarized() +
xlab("Average app usage") +
ylab("Change in GE score")
OK, now we’re starting to see a relationship, but it looks to me like we are getting a dose response between usage and change in GE math scores for the low anxiety group that’s greater than for the high anxiety group. That’s the opposite of what I would have believed from the paper.
Let’s use the paper’s original app level binning to check the extent to which the picture shown by Figure 2 is related to using the discrete rather than continuous analyses.
ms.diffs.full <- diffs.full %>%
filter(!is.na(parentMA_mediansplit)) %>%
group_by(use.012.groups, condition, parentMA_mediansplit) %>%
multi_boot_standard(column = "score.ge", na.rm=TRUE)
ggplot(ms.diffs.full,
aes(x = use.012.groups, y = mean, fill = condition)) +
facet_grid(.~parentMA_mediansplit) +
geom_bar(stat = "identity", position = "dodge") +
geom_linerange(aes(ymin = ci_lower, ymax = ci_upper),
position = position_dodge(width = .9)) +
scale_fill_solarized() +
xlab("Average app usage") +
ylab("Change in GE score")
OK, now subset this down to just the math app.
ggplot(filter(ms.diffs.full,
condition == "Math App"),
aes(x = use.012.groups, y = mean, fill = parentMA_mediansplit)) +
geom_bar(stat = "identity", position = "dodge") +
geom_linerange(aes(ymin = ci_lower, ymax = ci_upper),
position = position_dodge(width = .9)) +
scale_fill_solarized() +
xlab("Average app usage") +
ylab("Change in GE score")
This plot definitely reproduces the qualitative trends of Figure 2, though note that the numbers are all a bit different. I’m not sure what to attribute these numerical differences to.
But I am not convinced that these trends are anything more than a binning artifact. Let’s go back to the raw data and make this same plot.
# diffs.full$Anxiety <- factor(diffs.full$parentMA_mediansplit,
# levels = c(1,2),
# labels = c("Low Anxiety",
# "High Anxiety"))
ggplot(filter(diffs.full,
!is.na(parentMA_mediansplit),
condition == "Math App"),
aes(x = avg.use, y = score.ge,
col = parentMA_mediansplit)) +
geom_point() +
# geom_smooth(se=FALSE, size = 2, lty = 2) +
geom_smooth(se=TRUE, method = "lm") +
scale_colour_solarized() +
xlab("Average Weekly App Usage") +
ylab("Improvement (Grade Equivalents)")
With the raw data, it looks much less like there are any justifiable non-linearities in the low-anxiety group; instead, the U-shaped trend appears to be a binning artifact. To illustrate this point, the plot above uses a linear smoother (solid), and the plot below uses a loess smoother (dashed).
ggplot(filter(diffs.full,
!is.na(parentMA_mediansplit),
condition == "Math App"),
aes(x = avg.use, y = score.ge,
col = parentMA_mediansplit)) +
geom_point() +
geom_smooth(se=TRUE, size = 2, lty = 2) +
# geom_smooth(se=TRUE, method = "lm") +
scale_colour_solarized() +
xlab("Average Weekly App Usage") +
ylab("Improvement (Grade Equivalents)")
You can see that there is a bit of curvature in the low-anxiety group, but overall the trend appears to be that the high-anxiety parents are getting less of a dose response, consistent with the plots above. In sum, I see little evidence here in the continuous data for the claim that “Children of high-math-anxious parents who used the math app about once a week (Bin 1) grew significantly more in math achievement than children of high-math-anxious parents who used the app the least.” Looking at the loess smoothers makes this difference seem like a bit of an odd blip in an otherwise fairly continuous pattern of data.
Although I think the visual analysis above tells most of the story, I’ll give some linear mixed effect models to test the reliability of these patterns. First let’s do a basic intention-to-treat longitudinal analysis.
There are several ways to do these analyses: we could test directly for differences at the end of the study, or we could do a longitudinal analysis of change over time. I’m selecting the second here, for two reasons. First, as shown above, these outcome measures are highly reliable, so baseline subtraction is useful for removing between-subjects variability and getting the best estimate of effects. Second, in the graphical approach above it seemed clear that nothing was going on when we looked only at the raw scores at the end of the study.
For simplicity and to avoid overparameterizing the analysis, I’ll do these analyses using only the math measure. That’s still a decision that needs to be made for a study like this, but I think it’s relatively justifiable because of the planned focus on math.
In all of these models I will be looking for interactions, rather than subsetting the data to look for main effects in a subgroup. In Section 4 above I noted that there was an issue in the original paper with informal comparisons between models that had coefficients that either were or were not significant; this is an instance of the “difference between significant and not significant is not itself significant” fallacy. The interaction tests that are the key coefficient in the models below are what I believe to be the correct way to analyze these claims.
The \(p\)-values I report will mostly be using the \(t=z\) approximation, which is justified with so many observations, but I’ll also give the occasional likelihood ratio test for nested model comparison using ANOVA.
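Concretely, the \(t = z\) approximation just treats the \(t\) statistics from lmer as standard normal. Here is a one-line helper for reading the tables below (my own convenience function, not part of the original analysis):

# two-tailed p-value from a t statistic under the t = z approximation
p_from_t <- function(t) 2 * pnorm(-abs(t))
p_from_t(c(1.68, 2.19))  # e.g., the dose-response interaction t values reported below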
One final note: There are only two observations per child for any measure, so the maximal random effects structure that we can have is a single intercept for each child. Thanks to correspondence with Berkowitz et al. we now include random slope and intercept for classroom (these do not modify interpretation particularly).
Reparameterize condition to make the coefficients interpretable.
dt.excl$condition <- factor(dt.excl$condition,
levels = c("Reading App","Math App"))
dt.excl.ge$condition <- factor(dt.excl.ge$condition,
levels = c("Reading App","Math App"))
And a basic model.
kable(summary(lmer(score ~ condition * time
+ (1 | ChildID)
+ (time | TeachID.Year1),
data = filter(dt.excl,
measure == "Math")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 457.396 | 2.898 | 157.854 |
conditionMath App | 1.518 | 3.356 | 0.452 |
timesp14 | 14.366 | 1.495 | 9.609 |
conditionMath App:timesp14 | 1.003 | 1.742 | 0.576 |
In this model, we see no interaction of condition x time on math scores, confirming the claim above that there is no main effect. Let’s try GE instead of W.
kable(summary(lmer(score ~ condition * time
+ (1 | ChildID)
+ (time | TeachID.Year1),
data = filter(dt.excl.ge,
measure == "Math")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 1.910 | 0.146 | 13.059 |
conditionMath App | 0.075 | 0.169 | 0.444 |
timesp14 | 0.771 | 0.084 | 9.132 |
conditionMath App:timesp14 | 0.072 | 0.098 | 0.733 |
Again no interaction. These analyses together strongly support the idea that there is no population-level treatment effect.
If there is a specific dose-response relationship between math app usage and math scores, we should see a three-way interaction between condition (math app), usage, and time in the study, reflecting differential growth in math scores per unit of usage. Let’s try this with W scores first.
kable(summary(lmer(score ~ condition * time * avg.use
+ (1 | ChildID)
+ (time | TeachID.Year1),
data = filter(dt.excl,
measure == "Math")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 455.523 | 3.331 | 136.733 |
conditionMath App | -0.964 | 3.875 | -0.249 |
timesp14 | 14.123 | 2.033 | 6.947 |
avg.use | 1.165 | 1.057 | 1.102 |
conditionMath App:timesp14 | -1.226 | 2.379 | -0.515 |
conditionMath App:avg.use | 2.381 | 1.361 | 1.749 |
timesp14:avg.use | 0.154 | 0.858 | 0.180 |
conditionMath App:timesp14:avg.use | 1.867 | 1.109 | 1.684 |
This model contains some evidence that usage is making a difference in the math app group (the three-way interaction of condition, time, and usage, \(t = 1.68\)), but because that interaction isn’t quite significant, this isn’t the strongest model result you might want. Let’s try GE scores.
kable(summary(lmer(score ~ condition * time * avg.use
+ (1 | ChildID)
+ (time | TeachID.Year1),
data = filter(dt.excl.ge,
measure == "Math")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 1.838 | 0.171 | 10.722 |
conditionMath App | -0.068 | 0.199 | -0.340 |
timesp14 | 0.738 | 0.115 | 6.432 |
avg.use | 0.045 | 0.057 | 0.803 |
conditionMath App:timesp14 | -0.088 | 0.134 | -0.658 |
conditionMath App:avg.use | 0.128 | 0.073 | 1.756 |
timesp14:avg.use | 0.020 | 0.048 | 0.423 |
conditionMath App:timesp14:avg.use | 0.137 | 0.063 | 2.190 |
In this model the key three-way interaction is significant. The dose-response result therefore holds for GE scores, though it was not significant for W scores. This analysis supports the differential dose-response relationship found above via the graphical approach, and I would call it a decently strong result (though still tentative, given that it shows up in only one dependent variable). It does also suffer from the endogeneity issue discussed briefly above; that is a problem with this type of analysis, though, not with the strength of the result.
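As a cross-check in the spirit of the likelihood-ratio tests mentioned above, the three-way term can also be tested by nested model comparison. This is a sketch of that comparison for the GE model, not an analysis from the Technical Comment.

# sketch: likelihood-ratio test for the condition x time x usage interaction (GE scores)
dose.interaction <- lmer(score ~ condition * time * avg.use +
                           (1 | ChildID) +
                           (time | TeachID.Year1),
                         data = filter(dt.excl.ge, measure == "Math"))
dose.nointeraction <- lmer(score ~ condition * time +
                             condition * avg.use +
                             time * avg.use +
                             (1 | ChildID) +
                             (time | TeachID.Year1),
                           data = filter(dt.excl.ge, measure == "Math"))
kable(anova(dose.interaction, dose.nointeraction))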
It’s less clear to me what the prediction should be in terms of math anxiety. Let’s ask whether it interacts with growth in math performance to start with.
kable(summary(lmer(score ~ time * ParentMAaverage.Fa13
+ (1 | ChildID)
+ (time | TeachID.Year1),
data = filter(dt.excl,
measure == "Math")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 463.959 | 2.653 | 174.880 |
timesp14 | 18.547 | 1.905 | 9.737 |
ParentMAaverage.Fa13 | -2.028 | 1.003 | -2.022 |
timesp14:ParentMAaverage.Fa13 | -1.627 | 0.797 | -2.043 |
kable(summary(lmer(score ~ time * ParentMAaverage.Fa13
+ (1 | ChildID)
+ (time | TeachID.Year1),
data = filter(dt.excl.ge,
measure == "Math")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 2.272 | 0.140 | 16.173 |
timesp14 | 1.071 | 0.109 | 9.854 |
ParentMAaverage.Fa13 | -0.118 | 0.054 | -2.185 |
timesp14:ParentMAaverage.Fa13 | -0.112 | 0.046 | -2.444 |
So across groups, there is less growth in math performance for kids of more anxious parents. That makes a lot of sense, and we were seeing that negative trend in the plots. The relationship is more reliable in the GE measure, but it appears in both. Now let’s see if anxiety interacts with condition.
kable(summary(lmer(score ~ time * condition * ParentMAaverage.Fa13
+ (1 | ChildID)
+ (time | TeachID.Year1),
data = filter(dt.excl,
measure == "Math")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 463.241 | 5.302 | 87.364 |
timesp14 | 21.458 | 3.774 | 5.686 |
conditionMath App | 0.948 | 6.120 | 0.155 |
ParentMAaverage.Fa13 | -2.217 | 2.054 | -1.080 |
timesp14:conditionMath App | -3.740 | 4.374 | -0.855 |
timesp14:ParentMAaverage.Fa13 | -3.453 | 1.619 | -2.133 |
conditionMath App:ParentMAaverage.Fa13 | 0.280 | 2.355 | 0.119 |
timesp14:conditionMath App:ParentMAaverage.Fa13 | 2.370 | 1.860 | 1.275 |
kable(summary(lmer(score ~ time * condition * ParentMAaverage.Fa13
+ (1 | ChildID)
+ (time | TeachID.Year1),
data = filter(dt.excl.ge,
measure == "Math")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 2.243 | 0.280 | 8.002 |
timesp14 | 1.203 | 0.215 | 5.600 |
conditionMath App | 0.038 | 0.324 | 0.117 |
ParentMAaverage.Fa13 | -0.132 | 0.111 | -1.191 |
timesp14:conditionMath App | -0.167 | 0.249 | -0.670 |
timesp14:ParentMAaverage.Fa13 | -0.204 | 0.093 | -2.195 |
conditionMath App:ParentMAaverage.Fa13 | 0.020 | 0.127 | 0.157 |
timesp14:conditionMath App:ParentMAaverage.Fa13 | 0.119 | 0.107 | 1.117 |
The answer here is no. We don’t see any three-way interaction, which – if we had seen it – would have supported the claim that math performance is less (or more) affected by being assigned to the math app if parents had high math anxiety.
Given that result, I am not sure if it’s worth looking for a four-way interaction with the (already somewhat weak) dose response, but here goes.
kable(summary(lmer(score ~ time * condition * ParentMAaverage.Fa13 * avg.use +
+ (1 | ChildID)
+ (time | TeachID.Year1),
data = filter(dt.excl,
measure == "Math")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 454.607 | 7.399 | 61.443 |
timesp14 | 22.934 | 5.678 | 4.039 |
conditionMath App | 4.837 | 8.722 | 0.555 |
ParentMAaverage.Fa13 | 1.059 | 2.950 | 0.359 |
avg.use | 6.253 | 3.782 | 1.653 |
timesp14:conditionMath App | -10.657 | 6.722 | -1.585 |
timesp14:ParentMAaverage.Fa13 | -3.990 | 2.391 | -1.669 |
conditionMath App:ParentMAaverage.Fa13 | -2.647 | 3.472 | -0.762 |
timesp14:avg.use | -1.000 | 3.067 | -0.326 |
conditionMath App:avg.use | -3.203 | 4.750 | -0.674 |
ParentMAaverage.Fa13:avg.use | -2.483 | 1.607 | -1.545 |
timesp14:conditionMath App:ParentMAaverage.Fa13 | 3.990 | 2.816 | 1.417 |
timesp14:conditionMath App:avg.use | 4.899 | 3.857 | 1.270 |
timesp14:ParentMAaverage.Fa13:avg.use | 0.378 | 1.305 | 0.290 |
conditionMath App:ParentMAaverage.Fa13:avg.use | 2.413 | 2.037 | 1.184 |
timesp14:conditionMath App:ParentMAaverage.Fa13:avg.use | -1.137 | 1.655 | -0.687 |
kable(summary(lmer(score ~ time * condition * ParentMAaverage.Fa13 * avg.use +
+ (1 | ChildID)
+ (time | TeachID.Year1),
data = filter(dt.excl.ge,
measure == "Math")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 1.896 | 0.396 | 4.792 |
timesp14 | 1.224 | 0.323 | 3.792 |
conditionMath App | 0.142 | 0.466 | 0.305 |
ParentMAaverage.Fa13 | 0.002 | 0.159 | 0.014 |
avg.use | 0.252 | 0.204 | 1.232 |
timesp14:conditionMath App | -0.629 | 0.382 | -1.645 |
timesp14:ParentMAaverage.Fa13 | -0.214 | 0.136 | -1.567 |
conditionMath App:ParentMAaverage.Fa13 | -0.091 | 0.187 | -0.488 |
timesp14:avg.use | -0.012 | 0.175 | -0.070 |
conditionMath App:avg.use | -0.096 | 0.256 | -0.374 |
ParentMAaverage.Fa13:avg.use | -0.101 | 0.087 | -1.165 |
timesp14:conditionMath App:ParentMAaverage.Fa13 | 0.226 | 0.161 | 1.410 |
timesp14:conditionMath App:avg.use | 0.333 | 0.220 | 1.513 |
timesp14:ParentMAaverage.Fa13:avg.use | 0.006 | 0.075 | 0.075 |
conditionMath App:ParentMAaverage.Fa13:avg.use | 0.093 | 0.110 | 0.848 |
timesp14:conditionMath App:ParentMAaverage.Fa13:avg.use | -0.077 | 0.095 | -0.818 |
No four-way interaction at all. One last try, preventing over-parameterization by cutting down to only the math app students.
kable(summary(lmer(score ~ time * ParentMAaverage.Fa13 * avg.use
+ (1 | ChildID)
+ (time | TeachID.Year1),
data = filter(dt.excl.ge,
measure == "Math" & condition == "Math App")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 2.029 | 0.247 | 8.211 |
timesp14 | 0.594 | 0.207 | 2.876 |
ParentMAaverage.Fa13 | -0.083 | 0.097 | -0.851 |
avg.use | 0.158 | 0.153 | 1.033 |
timesp14:ParentMAaverage.Fa13 | 0.014 | 0.086 | 0.161 |
timesp14:avg.use | 0.321 | 0.135 | 2.381 |
ParentMAaverage.Fa13:avg.use | -0.008 | 0.067 | -0.122 |
timesp14:ParentMAaverage.Fa13:avg.use | -0.071 | 0.059 | -1.217 |
And removing the usage.
kable(summary(lmer(score ~ time * ParentMAaverage.Fa13
+ (1 | ChildID)
+ (time | TeachID.Year1),
data = filter(dt.excl.ge,
measure == "Math" & condition == "Math App")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 2.277 | 0.166 | 13.678 |
timesp14 | 1.034 | 0.129 | 8.038 |
ParentMAaverage.Fa13 | -0.107 | 0.062 | -1.721 |
timesp14:ParentMAaverage.Fa13 | -0.083 | 0.053 | -1.558 |
Same results, though in this last subgroup analysis there is the beginning of a negative interaction such that children of more anxious parents see less growth. So this model essentially confirms our failure to see a reliable effect of math anxiety (i.e., the claim implied by Figure 2 and the corresponding models in the paper is not supported in this reanalysis).
Going back to the three-way interaction model above, let’s investigate further. First, the model again:
kable(summary(lmer(score ~ time * condition * ParentMAaverage.Fa13 +
(1 | ChildID) +
(time | TeachID.Year1),
data = filter(dt.excl,
measure == "Math")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 463.241 | 5.302 | 87.364 |
timesp14 | 21.458 | 3.774 | 5.686 |
conditionMath App | 0.948 | 6.120 | 0.155 |
ParentMAaverage.Fa13 | -2.217 | 2.054 | -1.080 |
timesp14:conditionMath App | -3.740 | 4.374 | -0.855 |
timesp14:ParentMAaverage.Fa13 | -3.453 | 1.619 | -2.133 |
conditionMath App:ParentMAaverage.Fa13 | 0.280 | 2.355 | 0.119 |
timesp14:conditionMath App:ParentMAaverage.Fa13 | 2.370 | 1.860 | 1.275 |
In this model we’re not seeing much of anything. So let’s try to replicate the split analysis from the paper, where we look at growth separately for the two anxiety groups. First low anxiety:
kable(summary(lmer(score ~ time * condition +
(1 | ChildID) +
(time | TeachID.Year1),
data = filter(dt.excl,
measure == "Math",
parentMA_mediansplit == 1)))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 457.697 | 3.607 | 126.899 |
timesp14 | 16.388 | 2.173 | 7.540 |
conditionMath App | 2.996 | 4.188 | 0.715 |
timesp14:conditionMath App | -0.733 | 2.545 | -0.288 |
As in the original paper, we don’t find an effect here. Now high anxiety:
itt.hianx <- lmer(score ~ time * condition +
(1 | ChildID) +
(time | TeachID.Year1),
data = filter(dt.excl,
measure == "Math",
parentMA_mediansplit == 2))
kable(summary(itt.hianx)$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 457.649 | 3.185 | 143.704 |
timesp14 | 11.313 | 2.107 | 5.369 |
conditionMath App | 1.829 | 3.712 | 0.493 |
timesp14:conditionMath App | 3.901 | 2.443 | 1.597 |
Unlike the original paper, we don’t find a significant effect in this condition either. To follow up, we can use ANOVA to get a fairly reliable \(p\)-value on this test:
itt.hianx.noint <- lmer(score ~ time + condition +
(1 | ChildID) +
(time | TeachID.Year1),
data = filter(dt.excl,
measure == "Math",
parentMA_mediansplit == 2))
kable(anova(itt.hianx,itt.hianx.noint))
Df | AIC | BIC | logLik | deviance | Chisq | Chi Df | Pr(>Chisq) | |
---|---|---|---|---|---|---|---|---|
itt.hianx.noint | 8 | 3415.737 | 3448.059 | -1699.868 | 3399.737 | NA | NA | NA |
itt.hianx | 9 | 3415.136 | 3451.498 | -1698.568 | 3397.136 | 2.600528 | 1 | 0.1068281 |
So by a likelihood-ratio test, we do find something that has the same sign and is \(p = .11\). Close, but not the same result. In response to an earlier version of this report, where I pointed out that the coefficients of these two models were not compared directly, Berkowitz et al. wrote that:
While comparing the coefficients of ITT effects for high- and low-math anxious parents (i.e., the interaction between parent math anxiety and app condition) is a low-powered test, we see a significant difference between the high- and low-math anxious parents at p=0.033, one-tailed given our a priori hypothesis (marginal using a two-tailed test p=0.066).
They also specified that the specific test here was a Wald test for the difference between two fixed effects (performed in HLM).
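For reference, the generic form of that kind of Wald test (the difference between two coefficients estimated in separate models, treated as independent) is sketched below. This is my reconstruction of the general procedure, not the HLM code Berkowitz et al. ran, and I do not rely on it in what follows.

# generic Wald test for the difference between two independently estimated coefficients
wald_diff <- function(b1, se1, b2, se2) {
  z <- (b1 - b2) / sqrt(se1^2 + se2^2)
  c(z = z, p.two.tailed = 2 * pnorm(-abs(z)))
}
# e.g., this could be applied to the time:condition coefficients
# from the two median-split models above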
Interestingly, examining these models, you can see why the coefficients might test out to be different from one another: the condition-by-time coefficient is quite different across the two individual models (-0.73 vs. 3.90). Let’s do the whole analysis with a likelihood-ratio test to see whether the interaction is significant when anxiety is median-split:
itt.median.interaction <- lmer(score ~ time * condition * parentMA_mediansplit +
(1 | ChildID) +
(time | TeachID.Year1),
data = filter(dt.excl,
measure == "Math"))
itt.median.nointeraction <- lmer(score ~ time * condition +
time * parentMA_mediansplit +
condition * parentMA_mediansplit +
(1 | ChildID) +
(time | TeachID.Year1),
data = filter(dt.excl,
measure == "Math"))
kable(summary(itt.median.interaction)$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 458.725 | 5.487 | 83.610 |
timesp14 | 21.365 | 3.930 | 5.437 |
conditionMath App | 2.916 | 6.393 | 0.456 |
parentMA_mediansplit | -0.229 | 3.169 | -0.072 |
timesp14:conditionMath App | -5.156 | 4.610 | -1.119 |
timesp14:parentMA_mediansplit | -5.031 | 2.512 | -2.003 |
conditionMath App:parentMA_mediansplit | -0.922 | 3.682 | -0.250 |
timesp14:conditionMath App:parentMA_mediansplit | 4.436 | 2.923 | 1.518 |
kable(anova(itt.median.interaction, itt.median.nointeraction))
Df | AIC | BIC | logLik | deviance | Chisq | Chi Df | Pr(>Chisq) | |
---|---|---|---|---|---|---|---|---|
itt.median.nointeraction | 12 | 7142.630 | 7199.838 | -3559.315 | 7118.630 | NA | NA | NA |
itt.median.interaction | 13 | 7142.288 | 7204.264 | -3558.144 | 7116.288 | 2.341565 | 1 | 0.125963 |
This analysis is actually getting closer (\(p = .13\)). Let’s also try it with GEs.
itt.median.interaction <- lmer(score ~ time * condition * parentMA_mediansplit +
(1 | ChildID) +
(time | TeachID.Year1),
data = filter(dt.excl.ge,
measure == "Math"))
itt.median.nointeraction <- lmer(score ~ time * condition +
time * parentMA_mediansplit +
condition * parentMA_mediansplit +
(1 | ChildID) +
(time | TeachID.Year1),
data = filter(dt.excl.ge,
measure == "Math"))
kable(anova(itt.median.interaction, itt.median.nointeraction))
Df | AIC | BIC | logLik | deviance | Chisq | Chi Df | Pr(>Chisq) | |
---|---|---|---|---|---|---|---|---|
itt.median.nointeraction | 12 | 2117.392 | 2174.601 | -1046.696 | 2093.392 | NA | NA | NA |
itt.median.interaction | 13 | 2117.894 | 2179.869 | -1045.947 | 2091.894 | 1.49878 | 1 | 0.2208592 |
Here the GEs make things worse.
To summarize this set of reanalyses, which stay closer to the analyses reported by Berkowitz et al.: there is no reliable intention-to-treat effect of condition on math growth; the dose-response (condition by time by usage) interaction is significant for GE scores but not for W scores; and the interactions involving parent math anxiety are not reliable, whether anxiety is treated continuously or via a median split.
As discussed above, this section reproduces, in order, the analyses from this report that ended up in my Technical Comment.
Figure 1.
ggplot(ms.diffs.excl.ge, aes(x = measure, y = mean,
ymin = ci_lower, ymax = ci_upper,
fill = Condition)) +
geom_bar(stat="identity", position = "dodge") +
geom_linerange(position = position_dodge(width = .9)) +
scale_fill_solarized() +
ylab("Improvement (Grade Equivalents)") +
xlab("Measure") + ylim(c(0,1))
\(t\)-tests for Figure 1.
kable(tidy(with(diffs.excl.ge,
t.test(score[condition == "Math App" &
measure == "math_gain"],
score[condition == "Reading App" &
measure == "math_gain"], var.equal = TRUE))))
estimate1 | estimate2 | statistic | p.value | parameter | conf.low | conf.high |
---|---|---|---|---|---|---|
0.852973 | 0.77 | 1.061383 | 0.2890304 | 498 | -0.0706193 | 0.2365653 |
kable(tidy(with(diffs.excl.ge,
t.test(score[condition == "Math App" &
measure == "reading_gain"],
score[condition == "Reading App" &
measure == "reading_gain"], var.equal = TRUE))))
estimate1 | estimate2 | statistic | p.value | parameter | conf.low | conf.high |
---|---|---|---|---|---|---|
0.8337731 | 0.8192308 | 0.2728889 | 0.7850497 | 507 | -0.0901546 | 0.1192392 |
Main longitudinal models (for paragraph beginning “first”). GE scores and then W scores.
kable(summary(lmer(score ~ condition * time
+ (1 | ChildID)
+ (time | TeachID.Year1),
data = filter(dt.excl.ge,
measure == "Math")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 1.910 | 0.146 | 13.059 |
conditionMath App | 0.075 | 0.169 | 0.444 |
timesp14 | 0.771 | 0.084 | 9.132 |
conditionMath App:timesp14 | 0.072 | 0.098 | 0.733 |
kable(summary(lmer(score ~ condition * time
+ (1 | ChildID)
+ (time | TeachID.Year1),
data = filter(dt.excl,
measure == "Math")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 457.396 | 2.898 | 157.854 |
conditionMath App | 1.518 | 3.356 | 0.452 |
timesp14 | 14.366 | 1.495 | 9.609 |
conditionMath App:timesp14 | 1.003 | 1.742 | 0.576 |
Three-way interaction models (for paragraph beginning “second”). GE and then W scores, as above.
kable(summary(lmer(score ~ condition * time * avg.use
+ (1 | ChildID)
+ (time | TeachID.Year1),
data = filter(dt.excl.ge,
measure == "Math")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 1.838 | 0.171 | 10.722 |
conditionMath App | -0.068 | 0.199 | -0.340 |
timesp14 | 0.738 | 0.115 | 6.432 |
avg.use | 0.045 | 0.057 | 0.803 |
conditionMath App:timesp14 | -0.088 | 0.134 | -0.658 |
conditionMath App:avg.use | 0.128 | 0.073 | 1.756 |
timesp14:avg.use | 0.020 | 0.048 | 0.423 |
conditionMath App:timesp14:avg.use | 0.137 | 0.063 | 2.190 |
kable(summary(lmer(score ~ condition * time * avg.use
+ (1 | ChildID)
+ (time | TeachID.Year1),
data = filter(dt.excl,
measure == "Math")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 455.523 | 3.331 | 136.733 |
conditionMath App | -0.964 | 3.875 | -0.249 |
timesp14 | 14.123 | 2.033 | 6.947 |
avg.use | 1.165 | 1.057 | 1.102 |
conditionMath App:timesp14 | -1.226 | 2.379 | -0.515 |
conditionMath App:avg.use | 2.381 | 1.361 | 1.749 |
timesp14:avg.use | 0.154 | 0.858 | 0.180 |
conditionMath App:timesp14:avg.use | 1.867 | 1.109 | 1.684 |
Figure 2.
ggplot(filter(diffs.full,
!is.na(parentMA_mediansplit),
condition == "Math App"),
aes(x = avg.use, y = score.ge,
col = parentMA_mediansplit)) +
geom_point() +
geom_smooth(se=TRUE, method = "lm") +
scale_colour_solarized() +
xlab("Average Weekly App Usage") +
ylab("Improvement (Grade Equivalents)")
Math anxiety three- and four-way interactions (for paragraph beginning “third”).
kable(summary(lmer(score ~ time * condition * ParentMAaverage.Fa13
+ (1 | ChildID)
+ (time | TeachID.Year1),
data = filter(dt.excl.ge,
measure == "Math")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 2.243 | 0.280 | 8.002 |
timesp14 | 1.203 | 0.215 | 5.600 |
conditionMath App | 0.038 | 0.324 | 0.117 |
ParentMAaverage.Fa13 | -0.132 | 0.111 | -1.191 |
timesp14:conditionMath App | -0.167 | 0.249 | -0.670 |
timesp14:ParentMAaverage.Fa13 | -0.204 | 0.093 | -2.195 |
conditionMath App:ParentMAaverage.Fa13 | 0.020 | 0.127 | 0.157 |
timesp14:conditionMath App:ParentMAaverage.Fa13 | 0.119 | 0.107 | 1.117 |
kable(summary(lmer(score ~ time * condition * ParentMAaverage.Fa13
+ (1 | ChildID)
+ (time | TeachID.Year1),
data = filter(dt.excl,
measure == "Math")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 463.241 | 5.302 | 87.364 |
timesp14 | 21.458 | 3.774 | 5.686 |
conditionMath App | 0.948 | 6.120 | 0.155 |
ParentMAaverage.Fa13 | -2.217 | 2.054 | -1.080 |
timesp14:conditionMath App | -3.740 | 4.374 | -0.855 |
timesp14:ParentMAaverage.Fa13 | -3.453 | 1.619 | -2.133 |
conditionMath App:ParentMAaverage.Fa13 | 0.280 | 2.355 | 0.119 |
timesp14:conditionMath App:ParentMAaverage.Fa13 | 2.370 | 1.860 | 1.275 |
kable(summary(lmer(score ~ time * condition * ParentMAaverage.Fa13 * avg.use +
+ (1 | ChildID)
+ (time | TeachID.Year1),
data = filter(dt.excl,
measure == "Math")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 454.607 | 7.399 | 61.443 |
timesp14 | 22.934 | 5.678 | 4.039 |
conditionMath App | 4.837 | 8.722 | 0.555 |
ParentMAaverage.Fa13 | 1.059 | 2.950 | 0.359 |
avg.use | 6.253 | 3.782 | 1.653 |
timesp14:conditionMath App | -10.657 | 6.722 | -1.585 |
timesp14:ParentMAaverage.Fa13 | -3.990 | 2.391 | -1.669 |
conditionMath App:ParentMAaverage.Fa13 | -2.647 | 3.472 | -0.762 |
timesp14:avg.use | -1.000 | 3.067 | -0.326 |
conditionMath App:avg.use | -3.203 | 4.750 | -0.674 |
ParentMAaverage.Fa13:avg.use | -2.483 | 1.607 | -1.545 |
timesp14:conditionMath App:ParentMAaverage.Fa13 | 3.990 | 2.816 | 1.417 |
timesp14:conditionMath App:avg.use | 4.899 | 3.857 | 1.270 |
timesp14:ParentMAaverage.Fa13:avg.use | 0.378 | 1.305 | 0.290 |
conditionMath App:ParentMAaverage.Fa13:avg.use | 2.413 | 2.037 | 1.184 |
timesp14:conditionMath App:ParentMAaverage.Fa13:avg.use | -1.137 | 1.655 | -0.687 |
kable(summary(lmer(score ~ time * condition * ParentMAaverage.Fa13 * avg.use +
+ (1 | ChildID)
+ (time | TeachID.Year1),
data = filter(dt.excl.ge,
measure == "Math")))$coefficients, digits = 3)
Estimate | Std. Error | t value | |
---|---|---|---|
(Intercept) | 1.896 | 0.396 | 4.792 |
timesp14 | 1.224 | 0.323 | 3.792 |
conditionMath App | 0.142 | 0.466 | 0.305 |
ParentMAaverage.Fa13 | 0.002 | 0.159 | 0.014 |
avg.use | 0.252 | 0.204 | 1.232 |
timesp14:conditionMath App | -0.629 | 0.382 | -1.645 |
timesp14:ParentMAaverage.Fa13 | -0.214 | 0.136 | -1.567 |
conditionMath App:ParentMAaverage.Fa13 | -0.091 | 0.187 | -0.488 |
timesp14:avg.use | -0.012 | 0.175 | -0.070 |
conditionMath App:avg.use | -0.096 | 0.256 | -0.374 |
ParentMAaverage.Fa13:avg.use | -0.101 | 0.087 | -1.165 |
timesp14:conditionMath App:ParentMAaverage.Fa13 | 0.226 | 0.161 | 1.410 |
timesp14:conditionMath App:avg.use | 0.333 | 0.220 | 1.513 |
timesp14:ParentMAaverage.Fa13:avg.use | 0.006 | 0.075 | 0.075 |
conditionMath App:ParentMAaverage.Fa13:avg.use | 0.093 | 0.110 | 0.848 |
timesp14:conditionMath App:ParentMAaverage.Fa13:avg.use | -0.077 | 0.095 | -0.818 |
So where does this reanalysis leave us? There is no overall intention-to-treat effect of the math app; the dose-response relationship between usage and math growth is suggestive but depends on the choice of dependent variable and is subject to endogeneity concerns; and the evidence that parent math anxiety moderates the intervention’s effect is weak at best.
In sum, this paper reports a well-designed and careful study, but the strength of the study is somewhat undermined in my view by the analyses that are reported. Problems of analytic flexibility appear to me to have led the authors to conclusions that are not warranted from the data. Unfortunately, the only way I know to deal with this sort of issue for a large scale study of the type reported here is to preregister the analyses ahead of time. Preregistration can be a painful process because the best-laid analysis plan can fail when it comes into contact with the data, but preregistration is nonetheless extremely useful in avoiding these sorts of analytic pitfalls.
Thanks very much to the Berkowitz et al. team for posting their raw data and providing feedback on a draft of this reanalysis. Thanks also to Johannes Haushofer as well as the members of the Language and Cognition lab at Stanford for valuable feedback.