This is problem set #2, in which we hope you will practice the visualization package ggplot2, as well as hone your knowledge of the packages tidyr and dplyr.
Sklar et al. (2012) claim evidence for unconscious arithmetic processing. We're going to do a reanalysis of their Experiment 6, which is the primary piece of evidence for that claim. The data are generously contributed by Asael Sklar.
First let’s set up a few preliminaries.
library(ggplot2)
library(tidyr)
library(plyr)
library(dplyr)
library(lme4)
sem = function(x) {sd(x, na.rm=TRUE) / sqrt(length(x))}
ci95 = function(x) {sem(x) * 1.96}
First read in two data files and subject info. A and B refer to different trial order counterbalances.
subinfo = read.csv("http://langcog.stanford.edu/sklar_expt6_subinfo_corrected.csv")
d.a = read.csv("http://langcog.stanford.edu/sklar_expt6a_corrected.csv")
d.b = read.csv("http://langcog.stanford.edu/sklar_expt6b_corrected.csv")
Gather these datasets into long form and get rid of the Xs in the headers.
d.a = d.a %>%
gather(participant, rt, X1:X21)
d.b = d.b %>%
gather(participant, rt, X22:X42)
Bind these together. Check out bind_rows.
d = bind_rows(d.a,d.b)
Merge these with subject info. You will need to look into merge and its relatives, left_ and right_join. Call this dataframe d, by convention.
d = d %>%
mutate(participant=sapply(strsplit(participant, split='X', fixed=TRUE),function(x) (x[2])))
d$participant = as.numeric(d$participant)
subinfo = subinfo %>%
dplyr::rename(participant = subid)
d = left_join(subinfo, d)
## Joining, by = "participant"
Clean up the factor structure.
d$presentation.time = factor(d$presentation.time)
levels(d$operand) = c("addition","subtraction")
d$participant = as.factor(d$participant)
We're ready to start looking at our data!
Examine the basic properties of the dataset. First, take a histogram.
hist(d$rt)
A histogram tells us that our dependent variable (reaction time) is relatively normally distributed. As this is an assumption of many analyses we may run, this is good to know. Additionally, we see that most of our reaction times fall between 500 and 1000 ms, which is about what we'd expect, suggesting that participants were paying attention and performed the task properly.
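Since this problem set emphasizes ggplot2, the same histogram can also be drawn with it (a sketch; the 25 ms bin width is an arbitrary choice):
# ggplot2 equivalent of the base-R histogram above
ggplot(d, aes(x = rt)) +
  geom_histogram(binwidth = 25) +
  xlab("Reaction time (ms)")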
Challenge question: what is the sample rate of the input device they are using to gather RTs?
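One way to probe this (a sketch, not a definitive answer): look at the spacing between adjacent distinct RT values. If the gaps cluster on multiples of roughly 16-17 ms, RTs were probably quantized to a 60 Hz display refresh; if they cluster on 1 ms, the device was sampled at about 1000 Hz.
# Spacing between adjacent distinct RT values hints at the timer/device resolution
rt_values = sort(unique(na.omit(d$rt)))
head(table(diff(rt_values)), 10)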
Sklar et al. did two manipulation checks. Subjective - asking participants whether they saw the primes - and objective - asking them to report the parity of the primes (even or odd) to find out if they could actually read the primes when they tried. Examine both the unconscious and conscious manipulation checks. What do you see? Are they related to one another?
cor(d$subjective.test, d$objective.test)
## [1] 0.5786542
cor.test(d$subjective.test, d$objective.test)
##
## Pearson's product-moment correlation
##
## data: d$subjective.test and d$objective.test
## t = 57.052, df = 6466, p-value < 2.2e-16
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.5622115 0.5946395
## sample estimates:
## cor
## 0.5786542
We find that the subjective and objective manipulation checks are significantly correlated, r = .58, t(6466) = 57.05, p < .001. That is, participants who subjectively reported seeing the primes also tended to report the parity of the primes more accurately.
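A quick plot makes the relationship visible (a sketch; it assumes subjective.test is coded 0/1):
# Objective-test accuracy split by whether participants reported seeing the primes;
# points are jittered horizontally so overlapping rows stay visible
ggplot(d, aes(x = factor(subjective.test), y = objective.test)) +
  geom_jitter(width = .1, height = 0, alpha = .3) +
  stat_summary(fun.data = mean_se, geom = "pointrange", colour = "red") +
  xlab("Subjective test (reported seeing primes)") +
  ylab("Objective test accuracy")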
OK, let's turn back to the measure and implement Sklar et al.'s exclusion criterion. You need to have said you couldn't see the primes (subjective test) and also not be significantly above chance on the objective test (< .6 correct). Call your new data frame ds.
ds = d %>%
filter(subjective.test == 0) %>%
filter(objective.test < .6)
Sklar et al. show a plot of a "facilitation effect" - the amount faster you are for prime-congruent naming compared with prime-incongruent naming. They then plot this difference score for the subtraction condition and for the two prime durations they tested. Try to reproduce this analysis.
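Before reproducing the plot, a quick check of how many participants and trials survive the exclusion may be useful (a sketch using d and ds as built above):
# Participants and rows before vs. after applying Sklar et al.'s exclusion criterion
length(unique(d$participant)); nrow(d)
length(unique(ds$participant)); nrow(ds)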
HINT: first take averages within subjects, then compute your error bars across participants, using the sem function (defined above).
ds1 = ds %>%
group_by(participant)
ds1 = ds1 %>%
dplyr::filter(operand =="subtraction") %>%
dplyr::summarise(diff = mean(rt[congruent == "yes"] - rt[congruent == "no"], na.rm=T))
ds = left_join(ds, ds1)
## Joining, by = "participant"
se = sem(ds$diff); se
## [1] 0.2731004
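Note that sem(ds$diff) treats every remaining trial row as an observation, because diff was joined back onto each row; the hint asks for error bars across participants. A sketch of the participant-level SEM, assuming one diff value per participant as computed above:
# One facilitation score per participant, then the SEM across participants
participant_diffs = ds %>% distinct(participant, diff)
sem(participant_diffs$diff)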
Now plot this summary, giving more or less the bar plot that Sklar et al. gave (though I would keep operation as a variable here). Make sure you get some error bars on there (e.g., geom_errorbar or geom_linerange).
ds = ds %>%
dplyr::filter(operand == "subtraction")
summarySE <- function(data=NULL, measurevar, groupvars=NULL, na.rm=FALSE,
                      conf.interval=.95, .drop=TRUE) {
  # New version of length which can handle NAs: if na.rm==TRUE, don't count them
  length2 <- function(x, na.rm=FALSE) {
    if (na.rm) sum(!is.na(x))
    else length(x)
  }
  # For each group's data frame, return a vector with N, mean, and sd
  datac <- ddply(data, groupvars, .drop=.drop,
                 .fun = function(xx, col) {
                   c(N    = length2(xx[[col]], na.rm=na.rm),
                     mean = mean(xx[[col]], na.rm=na.rm),
                     sd   = sd(xx[[col]], na.rm=na.rm))
                 },
                 measurevar)
  # Rename the "mean" column to the measure's name
  datac <- plyr::rename(datac, c("mean" = measurevar))
  datac$se <- datac$sd / sqrt(datac$N)  # standard error of the mean
  # Confidence interval multiplier for the standard error:
  # e.g., if conf.interval is .95, use .975 (above/below), with df = N-1
  ciMult <- qt(conf.interval/2 + .5, datac$N - 1)
  datac$ci <- datac$se * ciMult
  return(datac)
}
dss = summarySE(data = ds, measurevar = "diff", groupvars = "presentation.time", na.rm = FALSE)
dss$diff = abs(dss$diff)
ggplot(data=dss, aes(x=presentation.time, y=diff)) +
  geom_bar(stat = "identity", position = "dodge") +
  geom_errorbar(aes(ymin = diff - se, ymax = diff + se)) +
  labs(title = "Facilitation effect by presentation condition") +
  xlab("Presentation duration") +
  ylab("Facilitation (ms)")
What do you see here? How close is it to what Sklar et al. report? Do the error bars match? How do you interpret these data?
The graph looks about the same as what Sklar et al. reported in terms of means. However, the error bars that we find are much smaller. One likely reason is that our standard errors were computed over trial rows rather than over participants: diff was joined back onto every row, so the N inside summarySE is the number of trials, not the number of subjects. It is also possible that we simply computed standard error differently than Sklar et al. did - for example, we calculated ours on the cleaned and pared-down data, whereas they could have used the full set, yielding different standard errors of the means.
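For comparison, the same participant-level means and error bars can be computed with dplyr alone, without the plyr-based summarySE helper (a sketch; it assumes ds is the subtraction-only frame above, with one facilitation score per participant repeated across that participant's rows):
# Collapse to one row per participant, then summarise by presentation time
dss2 = ds %>%
  distinct(participant, presentation.time, diff) %>%
  group_by(presentation.time) %>%
  dplyr::summarise(N = sum(!is.na(diff)),
                   mean_diff = mean(diff, na.rm = TRUE),
                   sd_diff = sd(diff, na.rm = TRUE)) %>%
  dplyr::mutate(se = sd_diff / sqrt(N),
                ci = se * qt(.975, N - 1))
dss2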
Challenge problem: verify Sklar et al.’s claim about the relationship between RT and the objective manipulation check.
rs = lm(diff~scale(objective.test, scale=F), ds); rs.a = summary(rs); rs.a
##
## Call:
## lm(formula = diff ~ scale(objective.test, scale = F), data = ds)
##
## Residuals:
## Min 1Q Median 3Q Max
## -26.4282 -7.8596 0.5671 8.0930 19.5552
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -15.283 0.344 -44.43 <2e-16 ***
## scale(objective.test, scale = F) -106.789 5.384 -19.84 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 12.2 on 1256 degrees of freedom
## Multiple R-squared: 0.2385, Adjusted R-squared: 0.2379
## F-statistic: 393.5 on 1 and 1256 DF, p-value: < 2.2e-16
We do find a significant relationship between the objective manipulation check and the facilitation score, b = -106.789, t(1256) = -19.84, p < .001. As objective-test accuracy increases, the facilitation score (congruent minus incongruent RT) becomes more negative - that is, the priming effect gets larger, not smaller. This is consistent with partial visibility of the primes driving the effect, and it supports the exclusion criterion, given that Sklar et al. set it before analyzing the data: if participants who can see the primes show the largest effects, the effect could be driven by conscious perception rather than by the congruency manipulation itself.
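It can also help to visualize this relationship at the participant level rather than the trial level (a sketch; column names as above):
# Per-participant facilitation score against objective-test accuracy, with a linear fit
ggplot(ds %>% distinct(participant, objective.test, diff),
       aes(x = objective.test, y = diff)) +
  geom_point() +
  geom_smooth(method = "lm") +
  xlab("Objective test accuracy") +
  ylab("Facilitation (congruent - incongruent RT, ms)")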
Show us what you would do with these data, operating from first principles. What’s the fairest plot showing a test of Sklar et al.’s original hypothesis?
The main hypothesis, as far as I understand it, is that RTs on prime-congruent trials will be shorter than RTs on prime-incongruent trials. Sklar et al.'s original analyses focused only on the subtraction condition; here we reanalyze the data using their exclusion criteria but collapsing across the addition and subtraction conditions.
ds.a = d %>%
filter(subjective.test == 0) %>%
filter(objective.test < .6)
ds.a1 = ds.a %>%
group_by(participant)
ds.a1 = ds.a1 %>%
dplyr::summarise(diff = mean(rt[congruent == "yes"] - rt[congruent == "no"], na.rm=T))
ds.a = left_join(ds.a, ds.a1)
## Joining, by = "participant"
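Since the prompt asks for a plot, one candidate from first principles is the per-participant facilitation score, shown separately for each operation and presentation duration (a sketch; error bars are +/- 1 SEM across participants):
# One facilitation score per participant, operation, and presentation duration
facil = ds.a %>%
  group_by(participant, presentation.time, operand) %>%
  dplyr::summarise(diff = mean(rt[congruent == "yes"], na.rm = TRUE) -
                          mean(rt[congruent == "no"], na.rm = TRUE))
ggplot(facil, aes(x = presentation.time, y = diff)) +
  facet_wrap(~operand) +
  geom_jitter(width = .1, alpha = .5) +
  stat_summary(fun.data = mean_se, geom = "pointrange", colour = "red") +
  geom_hline(yintercept = 0, linetype = "dashed") +
  labs(x = "Presentation duration",
       y = "Facilitation (congruent - incongruent RT, ms)")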
rs1 = lm(rt ~ congruent, ds.a); rs1a = summary(rs1); rs1a
##
## Call:
## lm(formula = rt ~ congruent, data = ds.a)
##
## Residuals:
## Min 1Q Median 3Q Max
## -573.53 -85.03 -16.03 88.97 861.47
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 680.030 3.988 170.499 <2e-16 ***
## congruentyes -5.502 5.646 -0.974 0.33
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 142 on 2529 degrees of freedom
## (87 observations deleted due to missingness)
## Multiple R-squared: 0.0003753, Adjusted R-squared: -1.994e-05
## F-statistic: 0.9496 on 1 and 2529 DF, p-value: 0.3299
rs2 = lm(rt ~ presentation.time, ds.a); rs2a = summary(rs2); rs2a
##
## Call:
## lm(formula = rt ~ presentation.time, data = ds.a)
##
## Residuals:
## Min 1Q Median 3Q Max
## -610.31 -87.00 -12.31 89.00 824.69
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 711.308 4.008 177.48 <2e-16 ***
## presentation.time2000 -64.312 5.510 -11.67 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 138.4 on 2529 degrees of freedom
## (87 observations deleted due to missingness)
## Multiple R-squared: 0.05111, Adjusted R-squared: 0.05073
## F-statistic: 136.2 on 1 and 2529 DF, p-value: < 2.2e-16
rs3 = lm(rt ~ congruent + presentation.time, ds.a); rs3a = summary(rs3); rs3a
##
## Call:
## lm(formula = rt ~ congruent + presentation.time, data = ds.a)
##
## Residuals:
## Min 1Q Median 3Q Max
## -607.41 -88.89 -15.06 89.77 827.59
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 714.230 4.866 146.779 <2e-16 ***
## congruentyes -5.825 5.501 -1.059 0.29
## presentation.time2000 -64.341 5.510 -11.677 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 138.4 on 2528 degrees of freedom
## (87 observations deleted due to missingness)
## Multiple R-squared: 0.05153, Adjusted R-squared: 0.05078
## F-statistic: 68.67 on 2 and 2528 DF, p-value: < 2.2e-16
a = anova(rs1,rs3); a
## Analysis of Variance Table
##
## Model 1: rt ~ congruent
## Model 2: rt ~ congruent + presentation.time
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 2529 51012684
## 2 2528 48402152 1 2610531 136.35 < 2.2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
a2 = anova(rs2,rs3); a2
## Analysis of Variance Table
##
## Model 1: rt ~ presentation.time
## Model 2: rt ~ congruent + presentation.time
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 2529 48423621
## 2 2528 48402152 1 21469 1.1213 0.2897
When collapsing across operations, we find no significant relationship between priming condition and reaction time, b = -5.502, t(2529) = -0.97, p = .33; that is, congruent primes did not reliably speed responses relative to incongruent primes. We did find a significant relationship between presentation duration and reaction time, b = -64.312, t(2529) = -11.67, p < .001: responses were about 64 ms faster in the 2000 ms presentation condition than in the other presentation condition. When both priming condition and presentation duration enter an additive model, priming condition remains non-significant, b = -5.825, t(2528) = -1.06, p = .29, while presentation duration remains significant, b = -64.341, t(2528) = -11.68, p < .001. Comparing models, the additive model fits significantly better than the model with only priming condition, F(1, 2528) = 136.35, p < .001, but not significantly better than the model with only presentation duration, F(1, 2528) = 1.12, p = .29. In other words, presentation duration, not congruency, accounts for the explained variance in reaction time, suggesting that presentation duration rather than priming condition drives the effect.
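One caveat about the models above: they treat every trial as independent even though trials are nested within participants. Since lme4 is already loaded, a mixed-effects version with a random intercept per participant may be a fairer test of the same question (a sketch; this random-effects structure is one reasonable choice, not the only one):
# Random intercept per participant; lme4's summary gives t values without p-values,
# so a likelihood-ratio test against a model without congruent is used instead
rs.mixed = lmer(rt ~ congruent + presentation.time + (1 | participant), data = ds.a)
rs.mixed0 = lmer(rt ~ presentation.time + (1 | participant), data = ds.a)
anova(rs.mixed0, rs.mixed)
summary(rs.mixed)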
Challenge problem: Do you find any statistical support for Sklar et al.’s findings?
rs4 = lm(rt ~ congruent, ds); rs4a = summary(rs4); rs4a
##
## Call:
## lm(formula = rt ~ congruent, data = ds)
##
## Residuals:
## Min 1Q Median 3Q Max
## -559.37 -100.37 -11.81 91.94 543.19
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 675.811 5.828 115.95 <2e-16 ***
## congruentyes -15.443 8.256 -1.87 0.0617 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 143.8 on 1212 degrees of freedom
## (44 observations deleted due to missingness)
## Multiple R-squared: 0.002878, Adjusted R-squared: 0.002055
## F-statistic: 3.498 on 1 and 1212 DF, p-value: 0.06167
rs5 = lm(rt ~ presentation.time, ds); rs5a = summary(rs5); rs5a
##
## Call:
## lm(formula = rt ~ presentation.time, data = ds)
##
## Residuals:
## Min 1Q Median 3Q Max
## -604.93 -109.54 -6.54 98.07 553.46
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 705.926 5.840 120.873 <2e-16 ***
## presentation.time2000 -71.388 8.025 -8.896 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 139.6 on 1212 degrees of freedom
## (44 observations deleted due to missingness)
## Multiple R-squared: 0.06129, Adjusted R-squared: 0.06052
## F-statistic: 79.14 on 1 and 1212 DF, p-value: < 2.2e-16
rs6 = lm(rt ~ congruent + presentation.time, ds); rs6a = summary(rs6); rs6a
##
## Call:
## lm(formula = rt ~ congruent + presentation.time, data = ds)
##
## Residuals:
## Min 1Q Median 3Q Max
## -597.23 -102.23 -13.54 90.46 545.82
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 713.542 7.062 101.038 <2e-16 ***
## congruentyes -15.311 8.002 -1.913 0.0559 .
## presentation.time2000 -71.360 8.016 -8.902 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 139.4 on 1211 degrees of freedom
## (44 observations deleted due to missingness)
## Multiple R-squared: 0.06412, Adjusted R-squared: 0.06258
## F-statistic: 41.49 on 2 and 1211 DF, p-value: < 2.2e-16
rs7 = lm(rt ~ congruent*presentation.time, ds); rs7a = summary(rs7); rs7a
##
## Call:
## lm(formula = rt ~ congruent * presentation.time, data = ds)
##
## Residuals:
## Min 1Q Median 3Q Max
## -594.42 -100.42 -11.70 87.68 548.30
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 716.324 8.231 87.031 < 2e-16
## congruentyes -20.905 11.671 -1.791 0.0735
## presentation.time2000 -76.622 11.319 -6.769 2.01e-11
## congruentyes:presentation.time2000 10.561 16.036 0.659 0.5103
##
## (Intercept) ***
## congruentyes .
## presentation.time2000 ***
## congruentyes:presentation.time2000
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 139.4 on 1210 degrees of freedom
## (44 observations deleted due to missingness)
## Multiple R-squared: 0.06446, Adjusted R-squared: 0.06214
## F-statistic: 27.79 on 3 and 1210 DF, p-value: < 2.2e-16
a3 = anova(rs4,rs6); a3
## Analysis of Variance Table
##
## Model 1: rt ~ congruent
## Model 2: rt ~ congruent + presentation.time
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 1212 25073760
## 2 1211 23533703 1 1540057 79.249 < 2.2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
a4 = anova(rs5,rs6); a4
## Analysis of Variance Table
##
## Model 1: rt ~ presentation.time
## Model 2: rt ~ congruent + presentation.time
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 1212 23604851
## 2 1211 23533703 1 71148 3.6611 0.05593 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
a5 = anova(rs6,rs7); a5
## Analysis of Variance Table
##
## Model 1: rt ~ congruent + presentation.time
## Model 2: rt ~ congruent * presentation.time
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 1211 23533703
## 2 1210 23525269 1 8433.5 0.4338 0.5103
ds2 = d %>%
dplyr::filter(operand == "addition")
rs8 = lm(rt ~ congruent, ds2); rs8a = summary(rs8); rs8a
##
## Call:
## lm(formula = rt ~ congruent, data = ds2)
##
## Residuals:
## Min 1Q Median 3Q Max
## -568.11 -82.24 -14.24 60.89 1660.89
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 675.113 3.406 198.217 <2e-16 ***
## congruentyes 3.127 4.821 0.649 0.517
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 137.2 on 3238 degrees of freedom
## (120 observations deleted due to missingness)
## Multiple R-squared: 0.0001299, Adjusted R-squared: -0.0001789
## F-statistic: 0.4207 on 1 and 3238 DF, p-value: 0.5166
rs9 = lm(rt ~ congruent + presentation.time, ds2); rs9a = summary(rs9); rs9a
##
## Call:
## lm(formula = rt ~ congruent + presentation.time, data = ds2)
##
## Residuals:
## Min 1Q Median 3Q Max
## -596.49 -78.32 -21.69 63.51 1632.51
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 703.491 4.171 168.680 <2e-16 ***
## congruentyes 2.833 4.729 0.599 0.549
## presentation.time2000 -53.806 4.734 -11.365 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 134.6 on 3237 degrees of freedom
## (120 observations deleted due to missingness)
## Multiple R-squared: 0.0385, Adjusted R-squared: 0.0379
## F-statistic: 64.8 on 2 and 3237 DF, p-value: < 2.2e-16
a6 = anova(rs8,rs9); a6
## Analysis of Variance Table
##
## Model 1: rt ~ congruent
## Model 2: rt ~ congruent + presentation.time
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 3238 60963105
## 2 3237 58623900 1 2339205 129.16 < 2.2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
When we rerun the main analyses that Sklar et al. performed, using the subsets of data they report having run them on, we get different results than they obtained. With priming condition as the sole predictor of reaction time in the subtraction trials only, we find a marginal, non-significant trend, b = -15.443, t(1212) = -1.87, p = .062. This is quite different from the p = .001 result that Sklar et al. report. It is possible that they ran a different analysis than we did: they report only the output of an F-test, so they likely ran some form of ANOVA or regression, but there are many variants and the paper does not say exactly which. As this is our best guess at the type of analysis they ran, the discrepancy does not bode well for the reliability of their reported results.
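An analysis arguably closer to a within-subject test, and perhaps closer to what Sklar et al. actually ran (that is an assumption), is a one-sample t-test on each participant's mean facilitation score, treating participant rather than trial as the unit of analysis (a sketch using the ds built earlier, which holds the subtraction trials after exclusions):
# One facilitation score per participant, tested against zero, overall and by duration
participant_facilitation = ds %>% distinct(participant, presentation.time, diff)
t.test(participant_facilitation$diff)
by(participant_facilitation$diff, participant_facilitation$presentation.time, t.test)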
Further, with this subset, when presentation duration is the sole predictor we find a significant relationship between presentation duration and reaction time, b = -71.388, t(1212) = -8.9, p < .001: responses were faster when primes were presented for the longer duration. When both priming condition and presentation duration enter an additive model, priming condition again shows only a marginal, non-significant trend, b = -15.311, t(1211) = -1.91, p = .056, while presentation duration remains significant, b = -71.36, t(1211) = -8.9, p < .001. Comparing this model to each of the simple models, it fits significantly better than the model with only priming condition as a predictor, F(1, 1211) = 79.25, p < .001, but only marginally better than the model with only presentation duration, F(1, 1211) = 3.66, p = .056. It is therefore quite likely that presentation duration, over and above priming condition, drove most of the observed effect.
Next, Sklar et al. reported no significant interaction between priming condition and presentation duration. When we replicate their analysis, we find the same thing: there is no significant interaction between the two predictors, b = 10.561, t(1210) = 0.66, p = .51. The simple effect of presentation duration is still significant, b = -76.622, t(1210) = -6.77, p < .001, and the simple effect of priming condition is still marginal, b = -20.905, t(1210) = -1.79, p = .074. Comparing this model to the additive model, the interaction does not significantly improve the fit, F(1, 1210) = 0.434, p = .51. Although our p-value for the interaction (p = .51) differs from Sklar et al.'s (p = .16), again suggesting we may have run a slightly different analysis, the conclusion is the same: there is no interaction between priming condition and presentation duration.
Finally, we test Sklar et al.'s claim that there is no significant effect of priming condition in the addition condition. We indeed find no significant relationship between priming condition and reaction time on addition trials, b = 3.127, t(3238) = 0.65, p = .517. Even when we control for presentation duration, there is still no significant effect of priming, b = 2.833, t(3237) = 0.6, p = .549, though presentation duration does have a significant effect on reaction time, b = -53.806, t(3237) = -11.36, p < .001. Comparing the two models with an ANOVA, the additive model is significantly better than the simple model, F(1, 3237) = 129.16, p < .001, again suggesting that presentation duration, rather than priming condition, drives the observed differences. Although we reproduce Sklar et al.'s non-significant result, our p-value (p = .517) is larger than theirs (p = .33), so they may again have used a somewhat different analysis; either way, we uphold the conclusion that priming has no effect in the addition condition.
Overall, we do not find strong support for Sklar et al.'s findings. Although we found a marginal effect of priming condition on reaction time in the subtraction trials, this effect did not reach statistical significance. Moreover, model comparisons using ANOVA tests showed that adding presentation duration explained far more of the variance in reaction time than priming condition alone, and the additive model was not significantly better than the model with presentation duration as the only predictor; it is therefore likely that presentation duration, over and above priming condition, drove the effect on reaction time. We cannot claim strong support for Sklar et al.'s conclusions, though we do replicate their non-significant interaction between priming condition and presentation duration on subtraction trials and their null effect of priming on reaction time on addition trials.