Sklar et al (2012) report a series of studies that use “breaking continuous flash suppression” to suggest that participants are faster to become aware of sentences with unusual semantics.
In these experiments, we set out to replicate these effects. We also replicate work by Yang & Yeh (2011), who demonstrated that suppression times are longer for words with unusual semantics.
In Experiments 1 and 2, we replicated Sklar et al’s Experiments 1 and 2, using English versions of their materials and extending their manipulations to provide a better-controlled test. In Experiment 3, we replicated Experiments 1 and 2 using their original presentation scripts. In Experiment 4, we replicated some of the conditions of Yang and Yeh’s Experiment 1.
In this study, we compare suppression times for semantically anomalous versus neutral sentences. The original study examined only raw RTs, but, as shown below, these were highly skewed; we therefore examine both raw and log-transformed RTs.
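The log analyses that follow are on log-transformed RTs; judging by the intercepts reported below, some experiments were logged in milliseconds (intercepts near 7) and others in seconds (intercepts near 0.3). A minimal sketch of the transform, assuming an rt column in seconds (the preprocessing chunk itself is not echoed in this report):

# Sketch only: log-transform RTs after converting seconds to ms;
# e.g. log(1470) is roughly 7.29, in the range of the log intercepts below.
sense.pop.sklar$rt.log <- log(sense.pop.sklar$rt * 1000)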
There is a marginal effect of semantics on raw RTs in Experiment 1a
# excluded subjects
print(paste("n subj excluded = ",total.n-excl.n))## [1] "n subj excluded = 5"
# excluded trials
print(paste("% trials excluded= ",((n.trials - n.excl.trials)/n.trials)*100))## [1] "% trials excluded= 6.39982876712329"
print(paste("median trials excluded per participant = ",median(subj.excl.trials$final_trials), "sd= ", sd(subj.excl.trials$final_trials), "range= ", c(range(subj.excl.trials$final_trials))))## [1] "median trials excluded per participant = 5 sd= 4.45898368027883 range= 1"
## [2] "median trials excluded per participant = 5 sd= 4.45898368027883 range= 20"
print(paste("median timeouts per participant = ",median(timeouts$rt), "sd= ", sd(timeouts$rt), "range= ", range(timeouts$rt)))## [1] "median timeouts per participant = 0 sd= 9.01019419432503 range= 0"
## [2] "median timeouts per participant = 0 sd= 9.01019419432503 range= 44"
# Standardize length
sense.pop.sklar$Length <- (sense.pop.sklar$Length - mean(sense.pop.sklar$Length, na.rm = T))/sd(sense.pop.sklar$Length, na.rm = T)
# T test (Sklar style)
sense.pop.sklar.summary <- summaryBy(rt + perceptual.rating~ SubjNo + Condition, data = subset(sense.pop.sklar, Condition %in% c("Sklar_violation", "Sklar_control")), keep.names = T)
kable(summaryBy(rt*1000 + perceptual.rating~ Condition, data = sense.pop.sklar.summary, FUN = c(mean,sd)), digits = 2)

| Condition | rt * 1000.mean | perceptual.rating.mean | rt * 1000.sd | perceptual.rating.sd |
|---|---|---|---|---|
| Sklar_control | 1466.33 | 1.86 | 504.79 | 0.49 |
| Sklar_violation | 1501.53 | 1.81 | 566.91 | 0.50 |
t.test(rt ~ Condition, data = sense.pop.sklar.summary, paired = T)
##
## Paired t-test
##
## data: rt by Condition
## t = -1.7874, df = 47, p-value = 0.08033
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.074811818 0.004418131
## sample estimates:
## mean of the differences
## -0.03519684
t.test(perceptual.rating ~ Condition, data = sense.pop.sklar.summary, paired = T)
##
## Paired t-test
##
## data: perceptual.rating by Condition
## t = 2.7847, df = 47, p-value = 0.007698
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.01493408 0.09267209
## sample estimates:
## mean of the differences
## 0.05380309
es <- t.test(rt ~ Condition, data = sense.pop.sklar.summary, paired = T)
es.t.test.raw <- data.frame(t = es$statistic,n1 = es$parameter,n2 = es$parameter)
# Bayes factor -- minimum effect of 0.01, maximum of 0.06, our effect = -0.03519684 and our SE = -0.03519684/-1.7874 = 0.01969
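# A minimal sketch of that Bayes factor, Dienes-style: a uniform H1 prior on
# the effect between 0.01 and 0.06 and a normal likelihood for the observed
# difference (our reconstruction; the original calculator code is not included).
bf.uniform <- function(obs, se, lower, upper) {
  m.h1 <- integrate(function(d) dnorm(obs, mean = d, sd = se), lower, upper)$value/(upper - lower)
  m.h0 <- dnorm(obs, mean = 0, sd = se)
  m.h1/m.h0
}
# e.g. bf.uniform(obs = -0.0352, se = 0.0197, lower = 0.01, upper = 0.06);
# the sign of obs depends on the predicted direction of the effect.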
# lmer (Rabag style)
sense.pop.sklar.raw <- summary(lmer(rt ~ Condition+Length + (1+Condition|SubjNo)+ (1|prime), data = subset(sense.pop.sklar, Condition %in% c("Sklar_violation", "Sklar_control"))))
kable(data.frame(sense.pop.sklar.raw$coefficients,"p value"= 2*pnorm(-abs(coef(sense.pop.sklar.raw)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 1.47 | 0.07 | 20.07 | 0.00 |
| ConditionSklar_violation | 0.03 | 0.02 | 1.48 | 0.14 |
| Length | -0.04 | 0.01 | -4.29 | 0.00 |
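Throughout this report, p values for lmer coefficients are computed by treating the t value as a standard normal deviate. A quick worked check against the Condition row above:

# Two-sided normal-approximation p value for t = 1.48:
2*pnorm(-abs(1.48))  # 0.139, matching the rounded p of 0.14 in the table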
#print(paste("p value = ", 2*pnorm(-abs(coef(sense.pop.sklar.raw)[,3]))))There is no effect of semantics for Log RTs for Experiment 1a
# Standardize length
sense.pop.sklar$Length <- (sense.pop.sklar$Length - mean(sense.pop.sklar$Length, na.rm = T))/sd(sense.pop.sklar$Length, na.rm = T)
# T test (Sklar style)
sense.pop.sklar.summary <- summaryBy(rt~ SubjNo + Condition, data = subset(sense.pop.sklar, Condition %in% c("Sklar_violation", "Sklar_control")), keep.names = T)
kable(summaryBy(rt ~ Condition, data = sense.pop.sklar.summary, FUN = c(mean,sd)), digits = 2)

| Condition | rt.mean | rt.sd |
|---|---|---|
| Sklar_control | 7.21 | 0.36 |
| Sklar_violation | 7.23 | 0.38 |
t.test(rt ~ Condition, data = sense.pop.sklar.summary, paired = T)
##
## Paired t-test
##
## data: rt by Condition
## t = -1.2248, df = 47, p-value = 0.2267
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.039696479 0.009651483
## sample estimates:
## mean of the differences
## -0.0150225
es <- t.test(rt ~ Condition, data = sense.pop.sklar.summary, paired = T)
es.t.test.log <- data.frame(t = es$statistic,n1 = es$parameter,n2 = es$parameter)
# Bayes factor -- minimum effect of 0.01, maximum of 0.06, our effect = -0.0150225 and our SE = -0.0150225/-1.2248 = 0.012265
# lmer (Rabag style)
sense.pop.sklar.log <- summary(lmer(rt ~ Condition +Length+ (1+Condition|SubjNo)+ (1|prime), data = subset(sense.pop.sklar, Condition %in% c("Sklar_violation", "Sklar_control"))))
kable(data.frame(sense.pop.sklar.log$coefficients, "p value" = 2*pnorm(-abs(coef(sense.pop.sklar.log)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 7.21 | 0.05 | 139.79 | 0.00 |
| ConditionSklar_violation | 0.01 | 0.01 | 1.02 | 0.31 |
| Length | -0.03 | 0.01 | -4.78 | 0.00 |
#print(paste("p value = ", 2*pnorm(-abs(coef(sense.pop.sklar.log)[,3]))))
# Analysis of PAS
# sense.pop.sklar.log.pas <- summary(lmer(rt ~ Condition * perceptual.rating +Length+ (1+Condition|SubjNo)+ (1|prime), data = subset(sense.pop.sklar, Condition %in% c("Sklar_violation", "Sklar_control"))))
# print(sense.pop.sklar.log.pas)
#
# sense.pop.sklar.log.pas2 <- summary(lmer(perceptual.rating ~ Condition +Length+ (1+Condition|SubjNo)+ (1|prime), data = subset(sense.pop.sklar, Condition %in% c("Sklar_violation", "Sklar_control"))))
# print(sense.pop.sklar.log.pas2)

There is no effect of semantics on raw RTs in Experiment 1b
# excluded subjects
print(paste("n subj excluded = ",total.n-excl.n))## [1] "n subj excluded = 5"
# excluded trials
print(paste("% trials excluded = ",((n.trials - n.excl.trials)/n.trials)*100))## [1] "% trials excluded = 6.31267585311305"
print(paste("median trials excluded per participant = ",median(subj.excl.trials$final_trials), "sd= ", sd(subj.excl.trials$final_trials), "range= ", range(subj.excl.trials$final_trials)))## [1] "median trials excluded per participant = 12.5 sd= 13.6026273797089 range= 2"
## [2] "median trials excluded per participant = 12.5 sd= 13.6026273797089 range= 59"
print(paste("median timeouts per participant = ",median(timeouts$rt), "sd= ", sd(timeouts$rt), "range= ", range(timeouts$rt)))## [1] "median timeouts per participant = 1 sd= 23.5189143534677 range= 0"
## [2] "median timeouts per participant = 1 sd= 23.5189143534677 range= 119"
# Standardize length
sense.pop.new$Length <- (sense.pop.new$Length - mean(sense.pop.new$Length, na.rm = T))/sd(sense.pop.new$Length, na.rm = T)
# T test (Sklar style)
sense.pop.new.summary <- summaryBy(rt + perceptual.rating~ SubjNo + Condition, data = subset(sense.pop.new, Condition %in% c("Non-sensible","Sensible")), keep.names = T)
kable(summaryBy(rt*1000+ perceptual.rating ~ Condition, data = sense.pop.new.summary, FUN = c(mean,sd)), digits = 2)

| Condition | rt * 1000.mean | perceptual.rating.mean | rt * 1000.sd | perceptual.rating.sd |
|---|---|---|---|---|
| Non-sensible | 1463.25 | 1.84 | 549.85 | 0.5 |
| Sensible | 1463.48 | 1.84 | 535.59 | 0.5 |
t.test(rt ~ Condition, data = sense.pop.new.summary, paired = T)
##
## Paired t-test
##
## data: rt by Condition
## t = -0.02217, df = 47, p-value = 0.9824
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.02135123 0.02088578
## sample estimates:
## mean of the differences
## -0.0002327283
t.test(perceptual.rating ~ Condition, data = sense.pop.new.summary, paired = T)
##
## Paired t-test
##
## data: perceptual.rating by Condition
## t = 0.24623, df = 47, p-value = 0.8066
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.02056688 0.02630377
## sample estimates:
## mean of the differences
## 0.002868449
es <- t.test(rt ~ Condition, data = sense.pop.new.summary, paired = T)
es.t.test.raw <- rbind(es.t.test.raw, data.frame(t = es$statistic,n1 = es$parameter,n2 = es$parameter))
# Bayes factor -- minimum effect of 0.01, maximum of 0.06, our effect = -0.0002327283 and our SE = -0.0002327283/-0.02217 = 0.01049744
# lmer (rabag style)
sense.pop.new.raw <-summary(lmer(rt ~ Condition+Length + (1+Condition|SubjNo)+ (1|prime), data = subset(sense.pop.new, Condition %in% c("Non-sensible","Sensible"))))
kable(data.frame(sense.pop.new.raw$coefficients, "p value" = 2*pnorm(-abs(coef(sense.pop.new.raw)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 1.46 | 0.08 | 18.49 | 0.00 |
| ConditionSensible | 0.00 | 0.01 | 0.36 | 0.72 |
| Length | -0.02 | 0.01 | -3.43 | 0.00 |
#print(paste("p value = ", 2*pnorm(-abs(coef(sense.pop.new.raw)[,3]))))There is no effect of semantics for Log RTs for Experiment 1b
# Standardize length
sense.pop.new$Length <- (sense.pop.new$Length - mean(sense.pop.new$Length, na.rm = T))/sd(sense.pop.new$Length, na.rm = T)
# T test (Sklar style)
sense.pop.new.summary <- summaryBy(rt ~ SubjNo + Condition, data = subset(sense.pop.new, Condition %in% c("Non-sensible","Sensible")), keep.names = T)
kable(summaryBy(rt ~ Condition, data = sense.pop.new.summary, FUN = c(mean,sd)), digits = 2)

| Condition | rt.mean | rt.sd |
|---|---|---|
| Non-sensible | 7.2 | 0.37 |
| Sensible | 7.2 | 0.37 |
t.test(rt ~ Condition, data = sense.pop.new.summary, paired = T)
##
## Paired t-test
##
## data: rt by Condition
## t = -0.8643, df = 47, p-value = 0.3918
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.016742376 0.006679635
## sample estimates:
## mean of the differences
## -0.00503137
es <- t.test(rt ~ Condition, data = sense.pop.new.summary, paired = T)
es.t.test.log <- rbind(es.t.test.log, data.frame(t = es$statistic,n1 = es$parameter,n2 = es$parameter))
# Bayes factor -- minimum effect of 0.01, maximum of 0.06, our effect = -0.00503137 and our SE = -0.00503137/-0.8643 = 0.00582
# lmer (rabag style)
sense.pop.new.log <-summary(lmer(rt ~ Condition +Length+ (1+Condition|SubjNo)+ (1|prime), data = subset(sense.pop.new, Condition %in% c("Non-sensible","Sensible"))))
kable(data.frame(sense.pop.new.log$coefficients, "p value" = 2*pnorm(-abs(coef(sense.pop.new.log)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 7.20 | 0.05 | 133.18 | 0.00 |
| ConditionSensible | 0.01 | 0.01 | 0.80 | 0.42 |
| Length | -0.01 | 0.00 | -2.79 | 0.01 |
#print(paste("p value = ", 2*pnorm(-abs(coef(sense.pop.new.log)[,3]))))
# Analysis of PAS
# sense.pop.new.log.pas <- summary(lmer(rt ~ Condition * perceptual.rating +Length+ (1+Condition|SubjNo)+ (1|prime), data = subset(sense.pop.new, Condition %in% c("Non-sensible","Sensible"))))
# print(sense.pop.new.log.pas)
#
# sense.pop.new.log.pas2 <- summary(lmer(perceptual.rating ~ Condition +Length+ (1+Condition|SubjNo)+ (1|prime), data = subset(sense.pop.new, Condition %in% c("Non-sensible","Sensible"))))
# print(sense.pop.new.log.pas2)

RTs are faster for longer stimuli in the raw data
sense.pop.length.raw <- summary(lmer(rt ~ Length + (1+Length|SubjNo), data = sense.pop.length))
kable(data.frame(sense.pop.length.raw$coefficients, "p value" = 2*pnorm(-abs(coef(sense.pop.length.raw)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 1.58 | 0.10 | 15.75 | 0 |
| Length | -0.02 | 0.01 | -3.15 | 0 |
#print(paste("p value = ", 2*pnorm(-abs(coef(sense.pop.length.raw)[,3]))))RTs are faster for longer stims for log data.
sense.pop.length.log <- summary(lmer(rt ~ Length + (1+Length|SubjNo), data = sense.pop.length))
kable(data.frame(sense.pop.length.log$coefficients, "p value"=2*pnorm(-abs(coef(sense.pop.length.log)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 0.31 | 0.06 | 5.67 | 0 |
| Length | -0.02 | 0.00 | -4.58 | 0 |
#print(paste("p value = ", 2*pnorm(-abs(coef(sense.pop.length.log)[,3]))))In this experiment, we assess the emotional studies of Sklar et al (Experiment 4).
Semantics has no effect for raw RTs
# excluded subjects
print(paste("n subj excluded = ",total.n-excl.n))## [1] "n subj excluded = 3"
# excluded trials
print(paste("% trials excluded = ",((n.trials - n.excl.trials)/n.trials)*100))## [1] "% trials excluded = 3.40076223981237"
print(paste("median trials excluded per participant = ",median(subj.excl.trials$rt), "sd= ", sd(subj.excl.trials$final_trials), "range= ", range(subj.excl.trials$final_trials)))## [1] "median trials excluded per participant = 2 sd= 1.30645792053807 range= 0"
## [2] "median trials excluded per participant = 2 sd= 1.30645792053807 range= 6"
print(paste("median timeouts per participant = ",median(timeouts$rt), "sd= ", sd(timeouts$rt), "range= ", range(timeouts$rt)))## [1] "median timeouts per participant = 0 sd= 3.57172644985199 range= 0"
## [2] "median timeouts per participant = 0 sd= 3.57172644985199 range= 22"
# Standardize length
emo.pop.sklar$Length <- (emo.pop.sklar$Length - mean(emo.pop.sklar$Length, na.rm = T))/sd(emo.pop.sklar$Length, na.rm = T)
# Lin Reg (sklar style)
emo.pop.sklar.sum <- summaryBy(rt +perceptual.rating ~ prime + MeanAffectivity, data = emo.pop.sklar, keep.names = T)
emo.pop.sklar.sum.graph <- emo.pop.sklar.sum
print("Affective Valence against RT")## [1] "Affective Valence against RT"
kable(summary(lm(rt ~ MeanAffectivity, data = emo.pop.sklar.sum))$coefficients,digits=2)| Estimate | Std. Error | t value | Pr(>|t|) | |
|---|---|---|---|---|
| (Intercept) | 1.53 | 0.01 | 123.63 | 0.00 |
| MeanAffectivity | -0.02 | 0.01 | -1.51 | 0.14 |
print("Affective Valence against Rating Scale")## [1] "Affective Valence against Rating Scale"
kable(summary(lm(perceptual.rating ~ MeanAffectivity, data = emo.pop.sklar.sum))$coefficients,digits=2)| Estimate | Std. Error | t value | Pr(>|t|) | |
|---|---|---|---|---|
| (Intercept) | 1.99 | 0.01 | 144.13 | 0.00 |
| MeanAffectivity | -0.01 | 0.01 | -0.54 | 0.59 |
es <- lm(rt~MeanAffectivity, data = emo.pop.sklar.sum)
es.t.test.raw <- rbind(es.t.test.raw, data.frame(t = summary(es)$coefficients[2,3],n1 = summary(es)$df[2],n2 = summary(es)$df[2]))
# lmer (rabag style)
emo.sklar.lmer.raw <- summary(lmer(rt ~ MeanAffectivity+Length + (1+MeanAffectivity|SubjNo)+ (1|prime), data = subset(emo.pop.sklar, prime_semantics %in% c("Negative phrase","Neutral phrase"))))
print("Affective Valence against RT")## [1] "Affective Valence against RT"
kable(data.frame(emo.sklar.lmer.raw$coefficients, "p value"=2*pnorm(-abs(coef(emo.sklar.lmer.raw)[,3]))), digits =2 )| Estimate | Std..Error | t.value | p.value | |
|---|---|---|---|---|
| (Intercept) | 1.56 | 0.08 | 20.67 | 0.00 |
| MeanAffectivity | -0.02 | 0.01 | -1.90 | 0.06 |
| Length | -0.03 | 0.01 | -2.07 | 0.04 |
#print(paste("p value = ", 2*pnorm(-abs(coef(emo.sklar.lmer.raw)[,3]))))For Experiment 2a, semantics has no effect for log RTs
# Standardize length
emo.pop.sklar$Length <- (emo.pop.sklar$Length - mean(emo.pop.sklar$Length, na.rm = T))/sd(emo.pop.sklar$Length, na.rm = T)
# Lin Reg (sklar style)
emo.pop.sklar.sum <- summaryBy(rt ~ prime + MeanAffectivity, data = emo.pop.sklar, keep.names = T)
print("Valence against RT")## [1] "Valence against RT"
kable(summary(lm(rt ~ MeanAffectivity, data = emo.pop.sklar.sum))$coefficients, digits =2)| Estimate | Std. Error | t value | Pr(>|t|) | |
|---|---|---|---|---|
| (Intercept) | 0.32 | 0.01 | 47.90 | 0.00 |
| MeanAffectivity | -0.01 | 0.01 | -1.34 | 0.19 |
es <- lm(rt~MeanAffectivity, data = emo.pop.sklar.sum)
es.t.test.log <- rbind(es.t.test.log, data.frame(t = summary(es)$coefficients[2,3],n1 = summary(es)$df[2],n2 = summary(es)$df[2]))
# lmer (rabag style)
emo.sklar.lmer.log <- summary(lmer(rt ~ MeanAffectivity+Length + (1+MeanAffectivity|SubjNo)+ (1|prime), data = subset(emo.pop.sklar, prime_semantics %in% c("Negative phrase","Neutral phrase"))))
print("Valence against RT")## [1] "Valence against RT"
kable(data.frame(emo.sklar.lmer.log$coefficients, "p value"=2*pnorm(-abs(coef(emo.sklar.lmer.log)[,3]))), digits =2)| Estimate | Std..Error | t.value | p.value | |
|---|---|---|---|---|
| (Intercept) | 0.33 | 0.04 | 8.03 | 0.00 |
| MeanAffectivity | -0.01 | 0.01 | -1.85 | 0.06 |
| Length | -0.01 | 0.01 | -2.18 | 0.03 |
#print(paste("p value = ", 2*pnorm(-abs(coef(emo.sklar.lmer.log)[,3]))))
#
# # Analysis of PAS
# emo.pop.sklar.log.pas <- summary(lmer(rt ~ MeanAffectivity * perceptual.rating +Length+ (1+Condition|SubjNo)+ (1|prime), subset(emo.pop.sklar, prime_semantics %in% c("Negative phrase","Neutral phrase"))))
# print(emo.pop.sklar.log.pas)
#
# emo.pop.sklar.log.pas2 <- summary(lmer(perceptual.rating ~ MeanAffectivity +Length+ (1+Condition|SubjNo)+ (1|prime), subset(emo.pop.sklar, prime_semantics %in% c("Negative phrase","Neutral phrase"))))
# print(emo.pop.sklar.log.pas2)

For Experiment 2b, semantics has no effect on raw RTs (though this model has trouble converging)
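One conventional remedy for the convergence warnings reported below is to refit with a different optimizer; a sketch of how that could look (our suggestion, not run, and not part of the original analysis):

# Not run: refit the Experiment 2b raw-RT model with the bobyqa optimizer.
# emo.new.lmer.raw <- summary(lmer(rt ~ MeanAffectivity + Length +
#   (1 + MeanAffectivity|SubjNo) + (1 + MeanAffectivity|PairID),
#   data = subset(emo.pop.new, prime_semantics %in% c("Negative sentence", "Neutral sentence")),
#   control = lmerControl(optimizer = "bobyqa")))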
# excluded subjects
print(paste("n subj excluded = ",total.n-excl.n))## [1] "n subj excluded = 5"
# excluded trials
print(paste("% trials excluded = ",((n.trials - n.excl.trials)/n.trials)*100))## [1] "% trials excluded = 3.27214684756584"
print(paste("median trials excluded per participant = ",median(subj.excl.trials$final_trials), "sd= ", sd(subj.excl.trials$final_trials), "range= ", range(subj.excl.trials$final_trials)))## [1] "median trials excluded per participant = 2 sd= 0.981277767987009 range= 0"
## [2] "median trials excluded per participant = 2 sd= 0.981277767987009 range= 4"
print(paste("median timeouts per participant = ",median(timeouts$rt), "sd= ", sd(timeouts$rt), "range= ", range(timeouts$rt)))## [1] "median timeouts per participant = 0 sd= 1.96128821083657 range= 0"
## [2] "median timeouts per participant = 0 sd= 1.96128821083657 range= 12"
# Standardize length
emo.pop.new$Length <- (emo.pop.new$Length - mean(emo.pop.new$Length, na.rm = T))/sd(emo.pop.new$Length, na.rm = T)
emo.pop.new.sum <- summaryBy(rt + perceptual.rating~ prime + MeanAffectivity, data = emo.pop.new, keep.names = T)
emo.pop.new.sum.graph <- emo.pop.new.sum
print("Valence against RTs")## [1] "Valence against RTs"
kable(summary(lm(rt ~ MeanAffectivity, data = emo.pop.new.sum))$coefficients,digits=2)| Estimate | Std. Error | t value | Pr(>|t|) | |
|---|---|---|---|---|
| (Intercept) | 1.47 | 0.01 | 117.77 | 0.00 |
| MeanAffectivity | -0.01 | 0.01 | -0.51 | 0.62 |
print("Valence against Rating Scale")## [1] "Valence against Rating Scale"
kable(summary(lm(perceptual.rating~ MeanAffectivity, data = emo.pop.new.sum))$coefficients,digits=2)| Estimate | Std. Error | t value | Pr(>|t|) | |
|---|---|---|---|---|
| (Intercept) | 1.9 | 0.01 | 155.56 | 0.00 |
| MeanAffectivity | 0.0 | 0.01 | 0.24 | 0.81 |
es <- lm(rt~MeanAffectivity, data = emo.pop.new.sum)
es.t.test.raw <- rbind(es.t.test.raw, data.frame(t = summary(es)$coefficients[2,3],n1 = summary(es)$df[2],n2 = summary(es)$df[2]))
print("Valence against RT in mixed model")## [1] "Valence against RT in mixed model"
# lmer (rabag style)
emo.new.lmer.raw <- summary(lmer(rt ~ MeanAffectivity +Length+ (1+MeanAffectivity|SubjNo)+ (1+MeanAffectivity|PairID), data = subset(emo.pop.new, prime_semantics %in% c("Negative sentence","Neutral sentence"))))
## Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control
## $checkConv, : unable to evaluate scaled gradient
## Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control
## $checkConv, : Model failed to converge: degenerate Hessian with 1 negative
## eigenvalues
kable(data.frame(emo.new.lmer.raw$coefficients, "p value"= 2*pnorm(-abs(coef(emo.new.lmer.raw)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 1.48 | 0.07 | 20.73 | 0.00 |
| MeanAffectivity | -0.01 | 0.01 | -0.69 | 0.49 |
| Length | 0.04 | 0.01 | 3.48 | 0.00 |
#print(paste("p value = ", 2*pnorm(-abs(coef(emo.new.lmer.raw)[,3]))))For Experiment 2b, semantics has no effect on log RTs either (and this model converges more easily).
# Standardize length
emo.pop.new$Length <- (emo.pop.new$Length - mean(emo.pop.new$Length, na.rm = T))/sd(emo.pop.new$Length, na.rm = T)
emo.pop.new.sum <- summaryBy(rt ~ prime + MeanAffectivity, data = emo.pop.new, keep.names = T)
print("Valence against RT")## [1] "Valence against RT"
kable(summary(lm(rt ~ MeanAffectivity, data = emo.pop.new.sum))$coefficients, digits =2)| Estimate | Std. Error | t value | Pr(>|t|) | |
|---|---|---|---|---|
| (Intercept) | 0.29 | 0.01 | 47.05 | 0.00 |
| MeanAffectivity | 0.00 | 0.01 | -0.50 | 0.62 |
es <- lm(rt~MeanAffectivity, data = emo.pop.new.sum)
es.t.test.log <- rbind(es.t.test.log, data.frame(t = summary(es)$coefficients[2,3],n1 = summary(es)$df[2],n2 = summary(es)$df[2]))
# lmer (rabag style)
emo.new.lmer.log <- summary(lmer(rt ~ MeanAffectivity +Length+ (1+MeanAffectivity|SubjNo)+ (1+MeanAffectivity|PairID), data = subset(emo.pop.new, prime_semantics %in% c("Negative sentence","Neutral sentence"))))
print("Valence against RT in mixed model")## [1] "Valence against RT in mixed model"
kable(data.frame(emo.new.lmer.log$coefficients, "p value"=2*pnorm(-abs(coef(emo.new.lmer.log)[,3]))), digits =2)| Estimate | Std..Error | t.value | p.value | |
|---|---|---|---|---|
| (Intercept) | 0.29 | 0.04 | 6.92 | 0.00 |
| MeanAffectivity | 0.00 | 0.01 | -0.39 | 0.69 |
| Length | 0.02 | 0.01 | 2.85 | 0.00 |
#print(paste("p value = ", 2*pnorm(-abs(coef(emo.new.lmer.log)[,3]))))
#
# # Analysis of PAS
# emo.new.log.pas <- summary(lmer(rt ~ MeanAffectivity * perceptual.rating +Length+ (1+MeanAffectivity|SubjNo)+ (1+MeanAffectivity|PairID), data = subset(emo.pop.new, prime_semantics %in% c("Negative sentence","Neutral sentence"))))
# print(emo.new.log.pas)
#
# emo.new.log.pas2 <- summary(lmer(perceptual.rating ~ MeanAffectivity +Length+ (1+MeanAffectivity|SubjNo)+ (1+MeanAffectivity|PairID), data = subset(emo.pop.new, prime_semantics %in% c("Negative sentence","Neutral sentence"))))
# print(emo.new.log.pas2)

For Experiment 2 (combining 2a and 2b) there is no effect of length on raw RTs; we suspect that this is because Experiment 2b's sentences were quite long.
emo.pop.length.raw <- summary(lmer(rt ~ Length + (1+Length|SubjNo), data = emo.pop.length))
kable(data.frame(emo.pop.length.raw$coefficients, "p value" = 2*pnorm(-abs(coef(emo.pop.length.raw)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 1.52 | 0.07 | 20.46 | 0.00 |
| Length | -0.01 | 0.01 | -0.97 | 0.33 |
#print(paste("p value = ", 2*pnorm(-abs(coef(emo.pop.length.raw)[,3]))))And the same is true for log data.
emo.pop.length.log <- summary(lmer(rt ~ Length + (1+Length|SubjNo), data = emo.pop.length))
kable(data.frame(emo.pop.length.log$coefficients, "p value" = 2*pnorm(-abs(coef(emo.pop.length.log)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 0.30 | 0.04 | 7.33 | 0.00 |
| Length | -0.01 | 0.01 | -1.33 | 0.18 |
#print(paste("p value = ", 2*pnorm(-abs(coef(emo.pop.length.log)[,3]))))We also included the Hebrew sentence used by Sklar et al (displayed at two contrasts). This was designed to assess if any visual characteristics of those sentences may have caused the effect, while the contrast manipulation was a sanity check.
The affective valence of the Hebrew stimuli had no effect on raw data, but contrast does have an effect.
# excluded subjects
print(paste("n subj excluded = ",total.n-excl.n))## [1] "n subj excluded = 5"
# excluded trials
print(paste("% trials excluded = ",((n.trials - n.excl.trials)/n.trials)*100))## [1] "% trials excluded = 3.71845949535193"
print(paste("median trials excluded per participant = ",median(subj.excl.trials$final_trials), "sd= ", sd(subj.excl.trials$final_trials), "range= ", range(subj.excl.trials$final_trials)))## [1] "median trials excluded per participant = 3 sd= 1.93983245567803 range= 1"
## [2] "median trials excluded per participant = 3 sd= 1.93983245567803 range= 10"
print(paste("median timeouts per participant = ",median(timeouts$rt), "sd= ", sd(timeouts$rt), "range= ", range(timeouts$rt)))## [1] "median timeouts per participant = 0 sd= 4.75603437894415 range= 0"
## [2] "median timeouts per participant = 0 sd= 4.75603437894415 range= 21"
emo.pop.hebr$Contrast <- as.factor(emo.pop.hebr$Contrast)
contrasts(emo.pop.hebr$Contrast)[1] <- -1
print("Valence against RT")## [1] "Valence against RT"
kable(summary(lm(rt ~ MeanAffectivity*Contrast, data = emo.pop.hebr.sum))$coefficients, digits = 2)| Estimate | Std. Error | t value | Pr(>|t|) | |
|---|---|---|---|---|
| (Intercept) | 1.43 | 0.01 | 135.11 | 0.00 |
| MeanAffectivity | 0.00 | 0.01 | 0.15 | 0.88 |
| Contrast80 | -0.10 | 0.01 | -9.02 | 0.00 |
| MeanAffectivity:Contrast80 | 0.00 | 0.01 | 0.34 | 0.74 |
print("Valence against PAS")## [1] "Valence against PAS"
kable(summary(lm(perceptual.rating ~ MeanAffectivity*Contrast, data = emo.pop.hebr.sum))$coefficients, digits = 2)| Estimate | Std. Error | t value | Pr(>|t|) | |
|---|---|---|---|---|
| (Intercept) | 1.52 | 0.01 | 191.39 | 0.00 |
| MeanAffectivity | 0.00 | 0.01 | -0.55 | 0.58 |
| Contrast80 | 0.05 | 0.01 | 5.84 | 0.00 |
| MeanAffectivity:Contrast80 | 0.00 | 0.01 | 0.62 | 0.54 |
print("Valence against RT in mixed model")## [1] "Valence against RT in mixed model"
emo.contr.lmer.raw <- summary(lmer(rt ~ Contrast*MeanAffectivity + (1+Contrast*MeanAffectivity|SubjNo)+ (1+Contrast|prime), data = subset(emo.pop.hebr, prime_semantics %in% c("Hebrew"))))
kable(data.frame(emo.contr.lmer.raw$coefficients, "p value" = 2*pnorm(-abs(coef(emo.contr.lmer.raw)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 1.44 | 0.06 | 23.50 | 0.00 |
| Contrast80 | -0.10 | 0.01 | -8.55 | 0.00 |
| MeanAffectivity | 0.00 | 0.01 | 0.30 | 0.77 |
| Contrast80:MeanAffectivity | 0.00 | 0.01 | 0.28 | 0.78 |
#print(paste("p value = ", 2*pnorm(-abs(coef(emo.contr.lmer.raw)[,3]))))The affective valence of the Hebrew stimuli also has no effect on log data, and contrast again does have an effect.
emo.pop.hebr$Length <- (emo.pop.hebr$Length - mean(emo.pop.hebr$Length))/sd(emo.pop.hebr$Length)
emo.pop.hebr$Contrast <- as.factor(emo.pop.hebr$Contrast)
contrasts(emo.pop.hebr$Contrast)[1] <- -1
print("Valence against RT in linear model")## [1] "Valence against RT in linear model"
kable(summary(lm(rt ~ MeanAffectivity*Contrast, data = emo.pop.hebr.sum))$coefficients, digits =2)| Estimate | Std. Error | t value | Pr(>|t|) | |
|---|---|---|---|---|
| (Intercept) | 0.28 | 0.01 | 44.71 | 0.00 |
| MeanAffectivity | 0.00 | 0.01 | 0.49 | 0.62 |
| Contrast80 | -0.07 | 0.01 | -11.66 | 0.00 |
| MeanAffectivity:Contrast80 | 0.00 | 0.01 | 0.09 | 0.93 |
print("Valence against RT in mixed model")## [1] "Valence against RT in mixed model"
emo.contr.lmer.log <- summary(lmer(rt ~ Contrast*MeanAffectivity +Length+ (1+Contrast*MeanAffectivity|SubjNo)+ (1+Contrast|prime), data = subset(emo.pop.hebr, prime_semantics %in% c("Hebrew"))))
kable(data.frame(emo.contr.lmer.log$coefficients, "p value" = 2*pnorm(-abs(coef(emo.contr.lmer.log)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 0.28 | 0.04 | 6.92 | 0.00 |
| Contrast80 | -0.07 | 0.01 | -10.40 | 0.00 |
| MeanAffectivity | 0.00 | 0.01 | 0.28 | 0.78 |
| Length | -0.02 | 0.01 | -2.99 | 0.00 |
| Contrast80:MeanAffectivity | 0.00 | 0.01 | 0.08 | 0.93 |
#print(paste("p value = ", 2*pnorm(-abs(coef(emo.contr.lmer.log)[,3]))))
#
# # Analysis of PAS
# emo.new.log.pas <- summary(lmer(rt ~ Contrast*MeanAffectivity * perceptual.rating +Length+ (1+Contrast*MeanAffectivity|SubjNo)+ (1+Contrast|prime), data = subset(emo.pop.hebr, prime_semantics %in% c("Hebrew"))))
# print(emo.new.log.pas)
#
# emo.new.log.pas2 <- summary(lmer(perceptual.rating ~ MeanAffectivity +Length+ (1+Contrast*MeanAffectivity|SubjNo)+ (1+Contrast|prime), data = subset(emo.pop.hebr, prime_semantics %in% c("Hebrew"))))
#
# print(emo.new.log.pas2)

Finally, a quick test of whether English is perceived faster than Hebrew (following Jiang et al., 2007).
Hebrew phrases emerge more slowly in an analysis of raw data (though note that there are length confounds here, which we must regress out, and can only do so by character count)
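# Length here is the number of characters per phrase (standardized below);
# character count only approximates the visual extent of Hebrew versus
# English strings.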
emo.pop.lang$Length <- (emo.pop.lang$Length - mean(emo.pop.lang$Length))/sd(emo.pop.lang$Length)
emo.pop.lang.raw <- summary(lmer(rt ~ Lang +Length+ (1+Lang+Length|SubjNo) + (1|prime), data = subset(emo.pop.lang, Contrast == 50)))
kable(data.frame(emo.pop.lang.raw$coefficients, "p value"=2*pnorm(-abs(coef(emo.pop.lang.raw)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 1.50 | 0.07 | 21.67 | 0.00 |
| LangHebrew | 0.04 | 0.02 | 2.26 | 0.02 |
| Length | -0.01 | 0.01 | -1.44 | 0.15 |
#print(paste("p value = ", 2*pnorm(-abs(coef(emo.pop.lang.raw)[,3]))))And Hebrew phrases emerge slower in an analysis of log transformed data
emo.pop.lang$Length <- (emo.pop.lang$Length - mean(emo.pop.lang$Length))/sd(emo.pop.lang$Length)
emo.pop.lang.log <- summary(lmer(rt ~ Lang +Length+ (1+Lang+Length|SubjNo) + (1|prime), data = subset(emo.pop.lang, Contrast == 50)))
kable(data.frame(emo.pop.lang.log$coefficients, "p value"=2*pnorm(-abs(coef(emo.pop.lang.log)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 0.31 | 0.04 | 7.58 | 0.00 |
| LangHebrew | 0.03 | 0.01 | 2.82 | 0.00 |
| Length | -0.01 | 0.01 | -1.79 | 0.07 |
#print(paste("p value = ", 2*pnorm(-abs(coef(emo.pop.lang.log)[,3]))))In Experiment 3, we used Sklar’s code to replicate Experiments 1 and 2. The experiments were run in an order, such that 1a and 2a were fast (counterbalanced) and 1b and 2b were run second. This, we reasoned, would provide the most likely grounds for Sklar’s results to replicate.
Semantics does affect raw RTs
# excluded subjects
print(paste("n subj excluded = ",total.n-excl.n))## [1] "n subj excluded = 12"
# excluded trials
print(paste("% trials excluded = ",((n.trials - n.excl.trials)/n.trials)*100))## [1] "% trials excluded = 5.70841239721695"
print(paste("median trials excluded per participant = ",median(subj.excl.trials$final_trials), "sd= ", sd(subj.excl.trials$final_trials), "range= ", range(subj.excl.trials$final_trials)))## [1] "median trials excluded per participant = 4 sd= 5.93274932409672 range= 0"
## [2] "median trials excluded per participant = 4 sd= 5.93274932409672 range= 27"
print(paste("median timeouts per participant = ",median(timeouts$rt), "sd= ", sd(timeouts$rt), "range= ", range(timeouts$rt)))## [1] "median timeouts per participant = 0 sd= 9.60269615354213 range= 0"
## [2] "median timeouts per participant = 0 sd= 9.60269615354213 range= 51"
sklar.sem$Length <- nchar(as.character(sklar.sem$Stim_column))
sklar.sem$Length <- (sklar.sem$Length - mean(sklar.sem$Length,na.rm= T))/sd(sklar.sem$Length,na.rm= T)
sklar.sem.sum <- summaryBy(rt~ SubjNo + Condition, data = sklar.sem, keep.names = T)
kable(summaryBy(rt ~ Condition, data = sklar.sem.sum, FUN = c(mean,sd)), digits = 2)

| Condition | rt.mean | rt.sd |
|---|---|---|
| Control | 1.83 | 0.66 |
| Violation | 1.91 | 0.78 |
t.test(rt ~ Condition, data = sklar.sem.sum, paired = T)
##
## Paired t-test
##
## data: rt by Condition
## t = -2.4307, df = 61, p-value = 0.01802
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.16068618 -0.01563647
## sample estimates:
## mean of the differences
## -0.08816132
es <- t.test(rt ~ Condition, data = sklar.sem.sum, paired = T)
es.t.test.raw <- rbind(es.t.test.raw,data.frame(t = es$statistic,n1 = es$parameter,n2 = es$parameter))
# lmer (Rabag style)
sklar.sem.sum.raw <- summary(lmer(rt ~ Condition +Length+ (1+Condition|SubjNo)+ (1|Stim_column), data = sklar.sem))
kable(data.frame(sklar.sem.sum.raw$coefficients, "p value"=2*pnorm(-abs(coef(sklar.sem.sum.raw)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 1.83 | 0.08 | 21.64 | 0.00 |
| ConditionViolation | 0.08 | 0.04 | 2.29 | 0.02 |
| Length | -0.05 | 0.01 | -3.33 | 0.00 |
#print(paste("p value = ", 2*pnorm(-abs(coef(sklar.sem.sum.raw)[,3]))))But there is no effect of semantics for log RTs
sklar.sem$Length <- nchar(as.character(sklar.sem$Stim_column))
sklar.sem$Length <- (sklar.sem$Length - mean(sklar.sem$Length,na.rm= T))/sd(sklar.sem$Length,na.rm= T)
sklar.sem.sum <- summaryBy(rt~ SubjNo + Condition, data = sklar.sem, keep.names = T)
kable(summaryBy(rt ~ Condition, data = sklar.sem.sum, FUN = c(mean,sd)), digits = 2)

| Condition | rt.mean | rt.sd |
|---|---|---|
| Control | 7.39 | 0.36 |
| Violation | 7.41 | 0.40 |
t.test(rt ~ Condition, data = sklar.sem.sum, paired = T)
##
## Paired t-test
##
## data: rt by Condition
## t = -1.697, df = 61, p-value = 0.09479
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.054309947 0.004446126
## sample estimates:
## mean of the differences
## -0.02493191
es <- t.test(rt ~ Condition, data = sklar.sem.sum, paired = T)
es.t.test.log <- rbind(es.t.test.log,data.frame(t = es$statistic,n1 = es$parameter,n2 = es$parameter))
# lmer (Rabag style)
sklar.sem.sum.log <- summary(lmer(rt ~ Condition +Length+ (1+Condition|SubjNo)+ (1|Stim_column), data = sklar.sem))
kable(data.frame(sklar.sem.sum.log$coefficients, "p value"=2*pnorm(-abs(coef(sklar.sem.sum.log)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 7.39 | 0.05 | 161.04 | 0.00 |
| ConditionViolation | 0.02 | 0.02 | 1.37 | 0.17 |
| Length | -0.03 | 0.01 | -3.95 | 0.00 |
#print(paste("p value = ", 2*pnorm(-abs(coef(sklar.sem.sum.log)[,3]))))Semantics does not affect raw RTs
# excluded subjects
print(paste("n subj excluded = ",total.n-excl.n))## [1] "n subj excluded = 8"
# excluded trials
print(paste("% trials excluded = ",((n.trials - n.excl.trials)/ n.trials)*100))## [1] "% trials excluded = 5.47311827956989"
print(paste("median trials excluded per participant = ",median(subj.excl.trials$rt), "sd= ", sd(subj.excl.trials$final_trials), "range= ", range(subj.excl.trials$final_trials)))## [1] "median trials excluded per participant = 11 sd= 14.9989599498413 range= 2"
## [2] "median trials excluded per participant = 11 sd= 14.9989599498413 range= 82"
print(paste("median timeouts per participant = ",median(timeouts$rt), "sd= ", sd(timeouts$rt), "range= ", range(timeouts$rt)))## [1] "median timeouts per participant = 0 sd= 0.491122009440984 range= 0"
## [2] "median timeouts per participant = 0 sd= 0.491122009440984 range= 4"
edin.sem$Length <- nchar(as.character(edin.sem$Stim_column))
edin.sem$Length <- (edin.sem$Length - mean(edin.sem$Length))/sd(edin.sem$Length)
# Lin Reg (sklar style)
edin.sem.sum <- summaryBy(rt ~ SubjNo + Condition, data = edin.sem, keep.names = T)
kable(summaryBy(rt ~ Condition, data = edin.sem.sum, FUN = c(mean,sd)), digits = 2)

| Condition | rt.mean | rt.sd |
|---|---|---|
| Control | 1.32 | 0.35 |
| Violation | 1.32 | 0.35 |
t.test(rt ~ Condition, data = edin.sem.sum, paired = T)
##
## Paired t-test
##
## data: rt by Condition
## t = 0.091436, df = 61, p-value = 0.9274
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.01571699 0.01722322
## sample estimates:
## mean of the differences
## 0.000753118
es <- t.test(rt ~ Condition, data = edin.sem.sum, paired = T)
es.t.test.raw <- rbind(es.t.test.raw,data.frame(t = es$statistic,n1 = es$parameter,n2 = es$parameter))
# lmer (Rabag style)
edin.sem.sum.raw <- summary(lmer(rt ~ Condition +Length+ (1+Condition|SubjNo)+ (1|Stim_column), data = edin.sem))
kable(data.frame(edin.sem.sum.raw$coefficients, "p value"=2*pnorm(-abs(coef(edin.sem.sum.raw)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 1.32 | 0.04 | 30.16 | 0.00 |
| ConditionViolation | 0.00 | 0.01 | -0.11 | 0.91 |
| Length | -0.02 | 0.00 | -5.35 | 0.00 |
#print(paste("p value = ", 2*pnorm(-abs(coef(edin.sem.sum.raw)[,3]))))Semantics does not affect log RTs
edin.sem$Length <- nchar(as.character(edin.sem$Stim_column))
edin.sem$Length <- (edin.sem$Length - mean(edin.sem$Length))/sd(edin.sem$Length)
# Lin Reg (sklar style)
edin.sem.sum <- summaryBy(rt ~ SubjNo + Condition, data = edin.sem, keep.names = T)
kable(summaryBy(rt ~ Condition, data = edin.sem.sum, FUN = c(mean,sd)), digits = 2)

| Condition | rt.mean | rt.sd |
|---|---|---|
| Control | 7.13 | 0.28 |
| Violation | 7.13 | 0.29 |
t.test(rt ~ Condition, data = edin.sem.sum, paired = T)
##
## Paired t-test
##
## data: rt by Condition
## t = 0.4057, df = 63, p-value = 0.6863
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.009748855 0.014715665
## sample estimates:
## mean of the differences
## 0.002483405
es <- t.test(rt ~ Condition, data = edin.sem.sum, paired = T)
es.t.test.log <- rbind(es.t.test.log,data.frame(t = es$statistic,n1 = es$parameter,n2 = es$parameter))
# lmer (Rabag style)
edin.sem.sum.log <- summary(lmer(rt ~ Condition +Length+ (1+Condition|SubjNo)+ (1|Stim_column), data = edin.sem))
kable(data.frame(edin.sem.sum.log$coefficients, "p value"=2*pnorm(-abs(coef(edin.sem.sum.log)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 7.13 | 0.04 | 200.34 | 0.00 |
| ConditionViolation | 0.00 | 0.01 | -0.37 | 0.71 |
| Length | -0.01 | 0.00 | -5.43 | 0.00 |
#print(paste("p value = ", 2*pnorm(-abs(coef(edin.sem.sum.log)[,3]))))Emotional semantics does not affect raw RTs
# excluded subjects
print(paste("n subj excluded = ",total.n-excl.n))## [1] "n subj excluded = 14"
# excluded trials
print(paste("% trials excluded = ",((n.trials - n.excl.trials)/n.trials)*100))## [1] "% trials excluded = 4.43333333333333"
print(paste("median trials excluded per participant = ",median(subj.excl.trials$rt), "sd= ", sd(subj.excl.trials$final_trials), "range= ", range(subj.excl.trials$final_trials)))## [1] "median trials excluded per participant = 2 sd= 2.07561848382488 range= 0"
## [2] "median trials excluded per participant = 2 sd= 2.07561848382488 range= 11"
print(paste("median timeouts per participant = ",median(timeouts$rt), "sd= ", sd(timeouts$rt), "range= ", range(timeouts$rt)))## [1] "median timeouts per participant = 0 sd= 5.0619267630626 range= 0"
## [2] "median timeouts per participant = 0 sd= 5.0619267630626 range= 26"
sklar.emo$Length <- nchar(as.character(sklar.emo$Stim_column))
sklar.emo$Length <- (sklar.emo$Length - mean(sklar.emo$Length))/sd(sklar.emo$Length)
sklar.emo.sum <- summaryBy(rt ~ Stim_column + MeanAffectivity_column, data = sklar.emo, keep.names = T)
sklar.emo.sum.graph <- sklar.emo.sum
print("Linear model")## [1] "Linear model"
kable(summary(lm(rt ~ MeanAffectivity_column, data = sklar.emo.sum))$coefficients, digits=2)| Estimate | Std. Error | t value | Pr(>|t|) | |
|---|---|---|---|---|
| (Intercept) | 1.94 | 0.03 | 75.22 | 0.00 |
| MeanAffectivity_column | 0.02 | 0.03 | 0.88 | 0.38 |
es <- lm(rt~MeanAffectivity_column, data = sklar.emo.sum)
es.t.test.raw <- rbind(es.t.test.raw, data.frame(t = summary(es)$coefficients[2,3],n1 = summary(es)$df[2],n2 = summary(es)$df[2]))
print("Mixed model")## [1] "Mixed model"
emo.sklar.lmer.raw <- summary(lmer(rt ~ MeanAffectivity_column +Length+ (1+MeanAffectivity_column|SubjNo)+ (1|Stim_column), data = sklar.emo))
kable(data.frame(emo.sklar.lmer.raw$coefficients, "p value"=2*pnorm(-abs(coef(emo.sklar.lmer.raw)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 1.97 | 0.12 | 16.53 | 0.0 |
| MeanAffectivity_column | 0.02 | 0.02 | 0.67 | 0.5 |
| Length | -0.09 | 0.02 | -3.93 | 0.0 |
#print(paste("p value = ", 2*pnorm(-abs(coef(emo.sklar.lmer.raw)[2,3]))))Emotional semantics does not affect log RTs
sklar.emo$Length <- nchar(as.character(sklar.emo$Stim_column))
sklar.emo$Length <- (sklar.emo$Length - mean(sklar.emo$Length))/sd(sklar.emo$Length)
sklar.emo.sum <- summaryBy(rt ~ Stim_column + MeanAffectivity_column, data = sklar.emo, keep.names = T)
print("Linear model")## [1] "Linear model"
kable(summary(lm(rt ~ MeanAffectivity_column, data = sklar.emo.sum))$coefficients, digits =2)| Estimate | Std. Error | t value | Pr(>|t|) | |
|---|---|---|---|---|
| (Intercept) | 0.48 | 0.01 | 45.14 | 0.00 |
| MeanAffectivity_column | 0.01 | 0.01 | 0.83 | 0.41 |
es <- lm(rt~MeanAffectivity_column, data = sklar.emo.sum)
es.t.test.log <- rbind(es.t.test.log, data.frame(t = summary(es)$coefficients[2,3],n1 = summary(es)$df[2],n2 = summary(es)$df[2]))
print("Mixed model")## [1] "Mixed model"
emo.sklar.lmer.log <- summary(lmer(rt ~ MeanAffectivity_column+Length + (1+MeanAffectivity_column|SubjNo)+ (1|Stim_column), data = sklar.emo))
kable(data.frame(emo.sklar.lmer.log$coefficients, "p value"= 2*pnorm(-abs(coef(emo.sklar.lmer.log)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 0.49 | 0.05 | 9.67 | 0.00 |
| MeanAffectivity_column | 0.01 | 0.01 | 0.54 | 0.59 |
| Length | -0.04 | 0.01 | -4.79 | 0.00 |
#print(paste("p value = ", 2*pnorm(-abs(coef(emo.sklar.lmer.log)[2,3]))))Emotional semantics marginally affects raw RTs
# excluded subjects
print(paste("n subj excluded = ",total.n-excl.n))## [1] "n subj excluded = 8"
# excluded trials
print(paste("% trials excluded = ",((n.trials - n.excl.trials)/n.trials)*100))## [1] "% trials excluded = 4.04211956521739"
print(paste("median trials excluded per participant = ",median(subj.excl.trials$rt), "sd= ", sd(subj.excl.trials$final_trials), "range= ", range(subj.excl.trials$final_trials)))## [1] "median trials excluded per participant = 1 sd= 1.57225131780807 range= 0"
## [2] "median trials excluded per participant = 1 sd= 1.57225131780807 range= 10"
print(paste("median timeouts per participant = ",median(timeouts$rt), "sd= ", sd(timeouts$rt), "range= ", range(timeouts$rt)))## [1] "median timeouts per participant = 0 sd= 2.00581119755833 range= 0"
## [2] "median timeouts per participant = 0 sd= 2.00581119755833 range= 16"
edin.emo$Length <- nchar(as.character(edin.emo$Stim_column))
edin.emo$Length <- (edin.emo$Length - mean(edin.emo$Length, na.rm = T))/sd(edin.emo$Length)
# Lin Reg (sklar style)
edin.emo.sum.factorial <- summaryBy(rt ~ SubjNo + Type_column, data = edin.emo, keep.names = T)
#t.test(rt ~ Type_column, data = edin.emo.sum.factorial, paired = T)
#es <- t.test(rt ~ Type_column, data = edin.emo.sum.factorial, paired = T)
#es.t.test.raw <- rbind(es.t.test.raw,data.frame(t = es$statistic,n1 = es$parameter,n2 = es$parameter))
edin.emo.sum <- summaryBy(rt ~ Stim_column + MeanAffectivity_column, data = edin.emo, keep.names = T)
edin.emo.sum.graph <- edin.emo.sum
print("Linear model")## [1] "Linear model"
kable(summary(lm(rt ~ MeanAffectivity_column, data = edin.emo.sum))$coefficients, digits =2)| Estimate | Std. Error | t value | Pr(>|t|) | |
|---|---|---|---|---|
| (Intercept) | 1.43 | 0.01 | 101.33 | 0.00 |
| MeanAffectivity_column | 0.03 | 0.01 | 2.35 | 0.02 |
es <- lm(rt~MeanAffectivity_column, data = edin.emo.sum)
es.t.test.raw <- rbind(es.t.test.raw, data.frame(t = summary(es)$coefficients[2,3],n1 = summary(es)$df[2],n2 = summary(es)$df[2]))
# lmer (Rabag style)
print("Mixed model")## [1] "Mixed model"
edin.emo.sum.raw <- summary(lmer(rt ~ Type_column +Length+ (1+Type_column|SubjNo)+ (1|Stim_column), data = edin.emo))
kable(data.frame(edin.emo.sum.raw$coefficients, "p value"=2*pnorm(-abs(coef(edin.emo.sum.raw)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 1.41 | 0.06 | 24.55 | 0.00 |
| Type_columnNeutral sentence | 0.06 | 0.03 | 1.83 | 0.07 |
| Length | -0.03 | 0.01 | -1.74 | 0.08 |
#print(paste("p value = ", 2*pnorm(-abs(coef(edin.emo.sum.raw)[,3]))))Emotional semantics does not affect log RTs
edin.emo$Length <- nchar(as.character(edin.emo$Stim_column))
edin.emo$Length <- (edin.emo$Length - mean(edin.emo$Length, na.rm = T))/sd(edin.emo$Length)
# Lin Reg (sklar style)
edin.emo.sum.factorial <- summaryBy(rt ~ SubjNo + Type_column, data = edin.emo, keep.names = T)
# t.test(rt ~ Type_column, data = edin.emo.sum.factorial, paired = T)
#
#
# es <- t.test(rt ~ Type_column, data = edin.emo.sum.factorial, paired = T)
# es.t.test.log <- rbind(es.t.test.log,data.frame(t = es$statistic,n1 = es$parameter,n2 = es$parameter))
edin.emo.sum <- summaryBy(rt ~ Stim_column + MeanAffectivity_column, data = edin.emo, keep.names = T)
print("Linear model")## [1] "Linear model"
kable(summary(lm(rt ~ MeanAffectivity_column, data = edin.emo.sum))$coefficients, digits=2)| Estimate | Std. Error | t value | Pr(>|t|) | |
|---|---|---|---|---|
| (Intercept) | 0.24 | 0.01 | 29.58 | 0.0 |
| MeanAffectivity_column | 0.01 | 0.01 | 0.86 | 0.4 |
es <- lm(rt~MeanAffectivity_column, data = edin.emo.sum)
es.t.test.log <- rbind(es.t.test.log, data.frame(t = summary(es)$coefficients[2,3],n1 = summary(es)$df[2],n2 = summary(es)$df[2]))
print("Mixed model")## [1] "Mixed model"
# lmer (Rabag style)
edin.emo.sum.log <- summary(lmer(rt ~ Type_column +Length+ (1+Type_column|SubjNo)+ (1|Stim_column), data = edin.emo))
kable(data.frame(edin.emo.sum.log$coefficients, "p value"=2*pnorm(-abs(coef(edin.emo.sum.log)[,3]))), digits = 2)

|  | Estimate | Std..Error | t.value | p.value |
|---|---|---|---|---|
| (Intercept) | 0.24 | 0.04 | 6.39 | 0.00 |
| Type_columnNeutral sentence | 0.01 | 0.02 | 0.61 | 0.54 |
| Length | -0.02 | 0.01 | -2.98 | 0.00 |
#print(paste("p value = ", 2*pnorm(-abs(coef(edin.emo.sum.log)[,3]))))In this experiment, we attempt to replicate Yang & Yeh (2011)’s finding that emotional/neutral words break suppression at different times. Such an effect here, might suggest that participants couldn’t combine word meanings because they could not visually resolve them. In our replication, participants saw 300 words, in a 2*2 design that crossed Emotional Content (Negative vs Neutral words) and Length (Short vs Long words). Emotional ratings were taken from the Florida Affective Norms for English Words. We have a smaller number of subjects than our other experiments (28 total, 26 after exclusions), because participants saw many more critical trials than most comparable experiments.
No effect of semantics on raw RTs (but effect of length)
## [1] "n subj excluded = 2"
## [1] "% trials excluded = 0.92"
## [1] "median trials excluded per participant = 3.5 sd= 1.74223552236333 range= 0"
## [2] "median trials excluded per participant = 3.5 sd= 1.74223552236333 range= 6"
## [1] "median timeouts per participant = 0 sd= 30.0534796689442 range= 0"
## [2] "median timeouts per participant = 0 sd= 30.0534796689442 range= 137"
## [1] "RTs against length"
| length | rt * 1000.mean | rt * 1000.sd |
|---|---|---|
| Long | 1307.37 | 584.26 |
| Short | 1476.10 | 614.66 |
## [1] "RTs against semantics"
| prime_semantics | rt * 1000.mean | rt * 1000.sd |
|---|---|---|
| negative | 1371.90 | 588.70 |
| neutral | 1411.57 | 621.54 |
## [1] "RTs against length by semantics"
| prime_semantics | rt * 1000.mean | rt * 1000.sd |
|---|---|---|
| negative | 1371.90 | 588.70 |
| neutral | 1411.57 | 621.54 |
|  | Estimate | Std..Error | t.value | p.values |
|---|---|---|---|---|
| (Intercept) | 1.28 | 0.11 | 11.66 | 0.00 |
| prime_semanticsneutral | 0.04 | 0.04 | 1.22 | 0.22 |
| lengthShort | 0.18 | 0.04 | 4.00 | 0.00 |
| prime_semanticsneutral:lengthShort | -0.01 | 0.05 | -0.20 | 0.84 |
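The chunk that produced this table is not echoed. A minimal sketch of a mixed model consistent with the fixed-effect names above (our reconstruction; the data frame name yang.yeh and the random-effects structure are assumptions):

# Sketch only -- reconstructed from the coefficient names in the table above.
yy.raw <- summary(lmer(rt ~ prime_semantics * length + (1 + prime_semantics * length | SubjNo) + (1 | prime), data = yang.yeh))
kable(data.frame(yy.raw$coefficients, "p value" = 2*pnorm(-abs(coef(yy.raw)[,3]))), digits = 2)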
No effect of semantics on Log RTs (but effect of length)
## [1] "RTs against length"
| length | rt * 1000.mean | rt * 1000.sd |
|---|---|---|
| Long | 1307.37 | 584.26 |
| Short | 1476.10 | 614.66 |
## [1] "RTs against semantics"
| prime_semantics | rt * 1000.mean | rt * 1000.sd |
|---|---|---|
| negative | 1371.90 | 588.70 |
| neutral | 1411.57 | 621.54 |
## [1] "RTs against length by semantics"
| prime_semantics | rt * 1000.mean | rt * 1000.sd |
|---|---|---|
| negative | 1371.90 | 588.70 |
| neutral | 1411.57 | 621.54 |
|  | Estimate | Std..Error | t.value | p.values |
|---|---|---|---|---|
| (Intercept) | 7.00 | 0.07 | 102.62 | 0.00 |
| prime_semanticsneutral | 0.02 | 0.02 | 0.98 | 0.33 |
| lengthShort | 0.11 | 0.02 | 5.80 | 0.00 |
| prime_semanticsneutral:lengthShort | 0.00 | 0.02 | -0.01 | 1.00 |
print("Raw effect sizes")## [1] "Raw effect sizes"
# Add t.tests from our Experiment 4 (n=2), Sklar Experiments (n =4), yang yeh experiments (n=2)
es.t.test.raw <- rbind(es.t.test.raw, data.frame(t = c(1.48,0.95,2.63,2.92,2.523,2.229,-4.48,-4.28), n1 = c(25,25,31,21,45,45,11,11), n2 = c(25,25,31,21,45,45,11,11)))
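# tes() (from compute.es) converts each t to Cohen's d and its sampling
# variance; for group sizes n1 and n2 this is d = t*sqrt(1/n1 + 1/n2)
# (the standard between-groups conversion, which we assume tes() applies here).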
raw.es <- tes(es.t.test.raw[,1],es.t.test.raw[,2],es.t.test.raw[,3], verbose = FALSE)
raw.es$exp <- c("Experiment 1a", "Experiment 1b","Experiment 2a", "Experiment 2b","Experiment 3a","Experiment 3b","Experiment 3c","Experiment 3d","Experiment 4a","Experiment 4b", "Sklar et al. Expt. 1","Sklar et al. Expt. 2", "Sklar et al. Expt. 4a","Sklar et al. Expt. 4b", "Yang & Yeh Expt. 1","Yang & Yeh Expt. 2")
raw.es$lab <- c(rep("Ed",10),rep("Israel",4),rep("Taiwan",2))
raw.es$session <- c("1","1","2","2","3","3","3","3","4","4","5","6","7","8","9","10")
raw.es$procedure <- c("1","2","3","4","1","3","2","4","5","5","1","1","3","3","5","5")
#kable(raw.es)
ran.ef.raw <- rma(d,var.d,random = ~ 1|lab,data = raw.es)
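# Note: as we read the metafor API, rma() fits a standard random-effects
# model and the random = ~ 1|lab term is only interpreted by rma.mv(), so
# lab is not actually modelled here.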
ran.ef.raw
##
## Random-Effects Model (k = 16; tau^2 estimator: REML)
##
## tau^2 (estimated amount of total heterogeneity): 0.3267 (SE = 0.1436)
## tau (square root of estimated tau^2 value): 0.5716
## I^2 (total heterogeneity / total variability): 86.14%
## H^2 (total variability / sampling variability): 7.21
##
## Test for Heterogeneity:
## Q(df = 15) = 72.1664, p-val < .0001
##
## Model Results:
##
## estimate se zval pval ci.lb ci.ub
## 0.0076 0.1573 0.0481 0.9617 -0.3008 0.3159
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
forest(ran.ef.raw,
slab = raw.es$exp,
mlab = "All Experiments",
xlab = "Cohen's d", psize = 1.5)
par(font = 2)
text(-6, 17.55, "Experiments (raw data)")
text(5, 17.55, "Cohen's d [95% CI]")
par(font = 1)
funnel(ran.ef.raw)

print("log effect sizes")
## [1] "log effect sizes"
es.t.test.log <- rbind(es.t.test.log, data.frame(t = c(1.23,1.21), n1 = c(25,25), n2 = c(25,25)))
es.t.test.log
## t n1 n2
## t -1.2248274 47 47
## t1 -0.8642991 47 47
## 1 -1.3412310 48 48
## 11 -0.4967937 54 54
## t2 -1.6969969 61 61
## t3 0.4057050 63 63
## 12 0.8262499 48 48
## 13 0.8583923 44 44
## 14 1.2300000 25 25
## 2 1.2100000 25 25
log.es <- tes(es.t.test.log[,1],es.t.test.log[,2],es.t.test.log[,3], verbose = FALSE)
log.es$exp <- c("Experiment 1a", "Experiment 1b","Experiment 2a", "Experiment 2b","Experiment 3a","Experiment 3b","Experiment 3c","Experiment 3d","Experiment 4a","Experiment 4b")
log.es$lab <- c(rep("Ed",10))#,rep("Israel",4))
log.es$procedure <- c("1","2","3","4","1","3","2","4","5","5")
#kable(log.es)
ran.ef.log <- rma(d,var.d,random = ~ 1|procedure, data = log.es)
ran.ef.log
##
## Random-Effects Model (k = 10; tau^2 estimator: REML)
##
## tau^2 (estimated amount of total heterogeneity): 0.0099 (SE = 0.0244)
## tau (square root of estimated tau^2 value): 0.0993
## I^2 (total heterogeneity / total variability): 18.77%
## H^2 (total variability / sampling variability): 1.23
##
## Test for Heterogeneity:
## Q(df = 9) = 11.5972, p-val = 0.2370
##
## Model Results:
##
## estimate se zval pval ci.lb ci.ub
## -0.0421 0.0727 -0.5793 0.5624 -0.1845 0.1003
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
forest(ran.ef.log,
slab = log.es$exp,
mlab = "All Experiments",
xlab = "Cohen's d", psize = 1.5)
par(font = 2)
text(-1.4, 11.6, "Experiments (log transformed data)")
text(2.1, 11.6, "Cohen's d [95% CI]")
funnel(ran.ef.log)

Over these experiments, when we analyzed raw RTs, we found suggestive evidence (p < .1) for an effect of semantics in 4 of the 9 experiments, although only one of these comparisons was reliable.
When we analyzed log-transformed RTs, we found hints of evidence (p < .1) for an effect of semantics in 2 of the 9 experiments, but none of these comparisons was reliable.