This is an R Markdown document recording my first look at the data from Pejovic, Nielsen, & Kovic.
The data for these experiments can be found in the GitHub repository here: www.github.com/SOMETHING
In the two studies presented here, participants completed a categorisation task in which they had to learn the category label for two sets of images: rounded visual stimuli and jagged visual stimuli.
In Experiment 1, the classic trisyllabic labels Takete and Maluma were used, while in Experiment 2 the labels were modified to be single syllables: Mal vs. Tik.
Both experiments were 2x2 designs. Participants learned a category structure that was either congruent with sound symbolism (jagged = Takete/Tik, curvy = Maluma/Mal) or incongruent. As a second (and novel) manipulation, participants varied with respect to which modality was presented first: participants in the label-first condition heard the label before seeing the images, while those in the image-first condition were presented with the label after seeing the images (these conditions appear in the data as Audio-Visual and Visual-Audio, respectively).
Experiment 1: 33 undergraduate students between 19 and 24 years old (1 excluded)
Experiment 2: 29 undergraduate students between 19 and 24 years old (2 excluded)
First we need to read in and sanitise the data for analysis.
Exp1 <- read.csv("F:/Experiments/Collaborations/Pejovic et al/data/Exp1.csv")
Exp2 <- read.csv("F:/Experiments/Collaborations/Pejovic et al/data/Exp2.csv")
Next, let's take a general look at the data frames and how they are structured; we can then see if there is any further messy data that needs to be cleaned up.
The str() command is great for this: for every column in our data frame, it tells us how the vector is stored.
str(Exp1)
## 'data.frame': 1188 obs. of 10 variables:
## $ Participant: int 1 1 1 1 1 1 1 1 1 1 ...
## $ Order : Factor w/ 3 levels "0","Audio-Visual",..: 3 3 3 3 3 3 3 3 3 3 ...
## $ Congruency : Factor w/ 3 levels "0","Congruent",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ Balance : int 1 1 1 1 1 1 1 1 1 1 ...
## $ Block : Factor w/ 4 levels "?","1","2","3": 2 2 2 2 2 2 2 2 2 2 ...
## $ stimulus : Factor w/ 13 levels "?","e31 ",..: 2 9 4 8 12 10 13 7 6 5 ...
## $ stim.num : int 4 11 6 10 14 12 15 9 8 7 ...
## $ Resp : Factor w/ 4 levels "- ","?",..: 3 3 4 3 3 3 3 3 3 3 ...
## $ RespCorr : Factor w/ 4 levels "?","0","1","NR ": 3 3 2 3 3 3 3 3 3 3 ...
## $ RT : int 8328 3656 1828 2718 3110 2797 3344 2750 1453 797 ...
str(Exp2)
## 'data.frame': 1044 obs. of 10 variables:
## $ Participant: int 1 1 1 1 1 1 1 1 1 1 ...
## $ Order : Factor w/ 2 levels "Audio-Visual",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ Congruency : Factor w/ 2 levels "Congruent","Incongruent": 1 1 1 1 1 1 1 1 1 1 ...
## $ Balance : int 1 1 1 1 1 1 1 1 1 1 ...
## $ Block : int 1 1 1 1 1 1 1 1 1 1 ...
## $ stimulus : Factor w/ 12 levels "e31 ","e32 ",..: 8 7 10 12 11 1 5 2 6 9 ...
## $ stim.num : int 11 10 13 15 14 4 8 5 9 12 ...
## $ Resp : Factor w/ 3 levels "- ","L ",..: 2 2 2 2 3 3 2 2 2 2 ...
## $ RespCorr : Factor w/ 3 levels "0","1","NR ": 2 2 2 2 1 1 2 2 2 2 ...
## $ RT : int 5844 2953 907 3453 1985 3906 250 3953 3234 656 ...
So we can see that the following things need to change:
1- Participant needs to be a factor, not an integer
2- For Experiment 1, both the Order and Congruency columns have some screwy data: there are values of "0", where the only values should be Audio-Visual or Visual-Audio (for Order) and Congruent or Incongruent (for Congruency)
3- Balance needs to be a factor, not an integer
4- Block is also screwy: "?" is somehow one of the levels
5- Resp is screwy: the only possible values should be L, R, or - (for NR trials), yet "?" appears as a level
6- RespCorr has the same problem: there should only be 0, 1, and NR, with no "?" values
7- For Experiment 2, Block is an integer but should be a factor
Let's take care of the basic stuff first:
Setting columns as factors:
Exp1$Participant <- as.factor(Exp1$Participant)
Exp2$Participant <- as.factor(Exp2$Participant)
Exp1$Balance <- as.factor(Exp1$Balance)
Exp2$Balance <- as.factor(Exp2$Balance)
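As an aside, if there were many more columns to convert, the same conversions can be done in one pass; a minimal sketch, equivalent to the four lines above:
cols <- c("Participant", "Balance")  # columns that should be factors
Exp1[cols] <- lapply(Exp1[cols], as.factor)
Exp2[cols] <- lapply(Exp2[cols], as.factor)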
str(Exp1)
## 'data.frame': 1188 obs. of 10 variables:
## $ Participant: Factor w/ 33 levels "0","1","2","3",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ Order : Factor w/ 3 levels "0","Audio-Visual",..: 3 3 3 3 3 3 3 3 3 3 ...
## $ Congruency : Factor w/ 3 levels "0","Congruent",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ Balance : Factor w/ 3 levels "0","1","2": 2 2 2 2 2 2 2 2 2 2 ...
## $ Block : Factor w/ 4 levels "?","1","2","3": 2 2 2 2 2 2 2 2 2 2 ...
## $ stimulus : Factor w/ 13 levels "?","e31 ",..: 2 9 4 8 12 10 13 7 6 5 ...
## $ stim.num : int 4 11 6 10 14 12 15 9 8 7 ...
## $ Resp : Factor w/ 4 levels "- ","?",..: 3 3 4 3 3 3 3 3 3 3 ...
## $ RespCorr : Factor w/ 4 levels "?","0","1","NR ": 3 3 2 3 3 3 3 3 3 3 ...
## $ RT : int 8328 3656 1828 2718 3110 2797 3344 2750 1453 797 ...
str(Exp2)
## 'data.frame': 1044 obs. of 10 variables:
## $ Participant: Factor w/ 29 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ Order : Factor w/ 2 levels "Audio-Visual",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ Congruency : Factor w/ 2 levels "Congruent","Incongruent": 1 1 1 1 1 1 1 1 1 1 ...
## $ Balance : Factor w/ 2 levels "1","2": 1 1 1 1 1 1 1 1 1 1 ...
## $ Block : int 1 1 1 1 1 1 1 1 1 1 ...
## $ stimulus : Factor w/ 12 levels "e31 ","e32 ",..: 8 7 10 12 11 1 5 2 6 9 ...
## $ stim.num : int 11 10 13 15 14 4 8 5 9 12 ...
## $ Resp : Factor w/ 3 levels "- ","L ",..: 2 2 2 2 3 3 2 2 2 2 ...
## $ RespCorr : Factor w/ 3 levels "0","1","NR ": 2 2 2 2 1 1 2 2 2 2 ...
## $ RT : int 5844 2953 907 3453 1985 3906 250 3953 3234 656 ...
Now we can hopefully take a look at what is going on with the other weird stuff in the Exp1 data.
We could just subset the weird responses out and hope for the best, but let's take a look at them first to make sure they are a data artefact and not something terrible.
OrderTrouble <- subset(Exp1, Order == 0)
CongruencyTrouble <- subset(Exp1, Congruency == 0)
BlockTrouble <- subset(Exp1, Block == "?")
RespTrouble <- subset(Exp1, Resp == "?")
OrderTrouble
## Participant Order Congruency Balance Block stimulus stim.num Resp
## 881 0 0 0 0 ? ? 0 ?
## 882 0 0 0 0 ? ? 0 ?
## 883 0 0 0 0 ? ? 0 ?
## RespCorr RT
## 881 ? 0
## 882 ? 0
## 883 ? 0
CongruencyTrouble
## Participant Order Congruency Balance Block stimulus stim.num Resp
## 881 0 0 0 0 ? ? 0 ?
## 882 0 0 0 0 ? ? 0 ?
## 883 0 0 0 0 ? ? 0 ?
## RespCorr RT
## 881 ? 0
## 882 ? 0
## 883 ? 0
BlockTrouble
## Participant Order Congruency Balance Block stimulus stim.num Resp
## 881 0 0 0 0 ? ? 0 ?
## 882 0 0 0 0 ? ? 0 ?
## 883 0 0 0 0 ? ? 0 ?
## RespCorr RT
## 881 ? 0
## 882 ? 0
## 883 ? 0
RespTrouble
## Participant Order Congruency Balance Block stimulus stim.num Resp
## 881 0 0 0 0 ? ? 0 ?
## 882 0 0 0 0 ? ? 0 ?
## 883 0 0 0 0 ? ? 0 ?
## RespCorr RT
## 881 ? 0
## 882 ? 0
## 883 ? 0
Fortunately, looking at these shows us that they are all the same 3 rows of the data frame, so clearly something got messed up when the data were copied.
This is good though: it means we can kick out this weird data with one command.
Exp1 <- subset(Exp1, Block != "?")
str(Exp1)
## 'data.frame': 1185 obs. of 10 variables:
## $ Participant: Factor w/ 33 levels "0","1","2","3",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ Order : Factor w/ 3 levels "0","Audio-Visual",..: 3 3 3 3 3 3 3 3 3 3 ...
## $ Congruency : Factor w/ 3 levels "0","Congruent",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ Balance : Factor w/ 3 levels "0","1","2": 2 2 2 2 2 2 2 2 2 2 ...
## $ Block : Factor w/ 4 levels "?","1","2","3": 2 2 2 2 2 2 2 2 2 2 ...
## $ stimulus : Factor w/ 13 levels "?","e31 ",..: 2 9 4 8 12 10 13 7 6 5 ...
## $ stim.num : int 4 11 6 10 14 12 15 9 8 7 ...
## $ Resp : Factor w/ 4 levels "- ","?",..: 3 3 4 3 3 3 3 3 3 3 ...
## $ RespCorr : Factor w/ 4 levels "?","0","1","NR ": 3 3 2 3 3 3 3 3 3 3 ...
## $ RT : int 8328 3656 1828 2718 3110 2797 3344 2750 1453 797 ...
unique(Exp1$Block)
## [1] 1 2 3
## Levels: ? 1 2 3
Unfortunately, you will note that the bad values are still contained in the levels of the factors (but not in the data itself); to get rid of these we need to drop the unused levels.
Fortunately, we can do this to the whole data frame with droplevels().
Exp1 <- droplevels(Exp1)
str(Exp1)
## 'data.frame': 1185 obs. of 10 variables:
## $ Participant: Factor w/ 33 levels "0","1","2","3",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ Order : Factor w/ 2 levels "Audio-Visual",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ Congruency : Factor w/ 2 levels "Congruent","Incongruent": 1 1 1 1 1 1 1 1 1 1 ...
## $ Balance : Factor w/ 2 levels "1","2": 1 1 1 1 1 1 1 1 1 1 ...
## $ Block : Factor w/ 3 levels "1","2","3": 1 1 1 1 1 1 1 1 1 1 ...
## $ stimulus : Factor w/ 12 levels "e31 ","e32 ",..: 1 8 3 7 11 9 12 6 5 4 ...
## $ stim.num : int 4 11 6 10 14 12 15 9 8 7 ...
## $ Resp : Factor w/ 3 levels "- ","L ",..: 2 2 3 2 2 2 2 2 2 2 ...
## $ RespCorr : Factor w/ 3 levels "0","1","NR ": 2 2 1 2 2 2 2 2 2 2 ...
## $ RT : int 8328 3656 1828 2718 3110 2797 3344 2750 1453 797 ...
Exp2 <- droplevels(Exp2)
Exp2$Block <- as.factor(Exp2$Block)
str(Exp2)
## 'data.frame': 1044 obs. of 10 variables:
## $ Participant: Factor w/ 29 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ Order : Factor w/ 2 levels "Audio-Visual",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ Congruency : Factor w/ 2 levels "Congruent","Incongruent": 1 1 1 1 1 1 1 1 1 1 ...
## $ Balance : Factor w/ 2 levels "1","2": 1 1 1 1 1 1 1 1 1 1 ...
## $ Block : Factor w/ 3 levels "1","2","3": 1 1 1 1 1 1 1 1 1 1 ...
## $ stimulus : Factor w/ 12 levels "e31 ","e32 ",..: 8 7 10 12 11 1 5 2 6 9 ...
## $ stim.num : int 11 10 13 15 14 4 8 5 9 12 ...
## $ Resp : Factor w/ 3 levels "- ","L ",..: 2 2 2 2 3 3 2 2 2 2 ...
## $ RespCorr : Factor w/ 3 levels "0","1","NR ": 2 2 2 2 1 1 2 2 2 2 ...
## $ RT : int 5844 2953 907 3453 1985 3906 250 3953 3234 656 ...
Now our data structures look the same for both experiments, so we're off to a good start.
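If you want a programmatic check rather than eyeballing the str() output, a quick one-line sketch:
identical(sapply(Exp1, class), sapply(Exp2, class)) # TRUE if every column has the same name and class in both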
The last thing we want to do before looking at the data is to drop the NR (no response) trials. Again this is just a single line of R code per experiment; note that in this file NR is read in as "NR " (i.e. with a run of trailing spaces), so it's a bit awkward to match.
Exp1 <- droplevels(subset(Exp1, RespCorr != "NR "))
Exp2 <- droplevels(subset(Exp2, RespCorr != "NR "))
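If you'd rather not match the exact run of trailing spaces, a more robust alternative (a sketch, not run here) is to trim the whitespace out of the factor levels first:
# Strip stray whitespace from the levels themselves, then match the clean label
levels(Exp1$RespCorr) <- trimws(levels(Exp1$RespCorr))
Exp1 <- droplevels(subset(Exp1, RespCorr != "NR"))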
First we'll take a look at the distributions of response times, along with some tests of the normality of those distributions.
Exp1D <- density(Exp1$RT)
plot(Exp1D, main="Kernel Density of Response Times")
polygon(Exp1D, col= 'red', border= 'black')
library(moments)
## Warning: package 'moments' was built under R version 3.4.1
library(nortest)
## Warning: package 'nortest' was built under R version 3.4.1
library(e1071)
## Warning: package 'e1071' was built under R version 3.4.1
##
## Attaching package: 'e1071'
## The following objects are masked from 'package:moments':
##
## kurtosis, moment, skewness
shapiro.test(Exp1$RT) # Shapiro-Wilk normality test
##
## Shapiro-Wilk normality test
##
## data: Exp1$RT
## W = 0.61264, p-value < 2.2e-16
ad.test(Exp1$RT) #Anderson-Darling normality test
##
## Anderson-Darling normality test
##
## data: Exp1$RT
## A = 89.789, p-value < 2.2e-16
cvm.test(Exp1$RT) # Cramer-von Mises normality test
## Warning in cvm.test(Exp1$RT): p-value is smaller than 7.37e-10, cannot be
## computed more accurately
##
## Cramer-von Mises normality test
##
## data: Exp1$RT
## W = 16.295, p-value = 7.37e-10
pearson.test(Exp1$RT) # Pearson chi-square normality test
##
## Pearson chi-square normality test
##
## data: Exp1$RT
## P = 928.01, p-value < 2.2e-16
skewness(Exp1$RT)
## [1] 5.048066
kurtosis(Exp1$RT)
## [1] 41.83859
qqnorm(Exp1$RT)
qqline(Exp1$RT)
In short, all of these point towards the RT data not being normally distributed, which is not ideal for statistical analyses. If you're not familiar with these metrics, the easier ones to look at are the density plot (where you would expect to see a normal curve) and the Q-Q plot (where the dots should all fall on a straight line).
Exp2D <- density(Exp2$RT)
plot(Exp2D, main="Kernel Density of Response Times")
polygon(Exp2D, col= 'red', border= 'black')
shapiro.test(Exp2$RT) # Shapiro-Wilk normality test
##
## Shapiro-Wilk normality test
##
## data: Exp2$RT
## W = 0.58726, p-value < 2.2e-16
ad.test(Exp2$RT) #Anderson-Darling normality test
##
## Anderson-Darling normality test
##
## data: Exp2$RT
## A = 96.909, p-value < 2.2e-16
cvm.test(Exp2$RT) # Cramer-von Mises normality test
## Warning in cvm.test(Exp2$RT): p-value is smaller than 7.37e-10, cannot be
## computed more accurately
##
## Cramer-von Mises normality test
##
## data: Exp2$RT
## W = 17.853, p-value = 7.37e-10
pearson.test(Exp2$RT) # Pearson chi-square normality test
##
## Pearson chi-square normality test
##
## data: Exp2$RT
## P = 991.09, p-value < 2.2e-16
skewness(Exp2$RT)
## [1] 4.668809
kurtosis(Exp2$RT)
## [1] 33.57566
qqnorm(Exp2$RT)
qqline(Exp2$RT)
So, none of the RT data is normally distributed. How do we clean it up?
First, we need to aggregate for each experimental participant, rather than looking at the raw trial-level RT data.
library(doBy)
Exp1Agg <- summaryBy(RespCorr + RT ~ Participant, data= Exp1, Fun = c(mean, sd)) # NB: the argument is FUN, not Fun, so this call silently falls back to the default (mean only)
head(Exp1Agg)
## Participant RespCorr.mean RT.mean
## 1 0 1.972973 785.9189
## 2 1 1.972222 1184.0833
## 3 2 1.941176 650.2059
## 4 3 1.944444 1200.5833
## 5 4 1.944444 1098.8333
## 6 5 1.942857 991.4857
Here we can see our first error pop up: means of RespCorr that are above 1, i.e. impossible values. This is because RespCorr is currently a factor, and it needs to be numeric for us to take aggregate values.
(Note that you have to convert the levels of a factor into characters before they can be converted to numeric; otherwise you get the underlying level codes.)
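A quick toy demonstration of that pitfall (the values here are made up):
f <- factor(c("0", "1", "1"))
as.numeric(f)               # returns the internal level codes: 1 2 2
as.numeric(as.character(f)) # returns the actual values: 0 1 1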
Exp1$RespCorr2 <- as.numeric(as.character(Exp1$RespCorr))
Exp1Agg <- summaryBy(RespCorr2 + RT ~ Participant + Experiment + Training + Balance + Block, data= Exp1, FUN = c(mean, sd))
head(Exp1Agg)
## Participant Balance Block RespCorr2.mean RT.mean RespCorr2.sd
## 1 0 1 1 0.9166667 738.3333 0.2886751
## 2 0 1 2 1.0000000 428.4167 0.0000000
## 3 0 1 3 1.0000000 309.8333 0.0000000
## 4 0 2 1 1.0000000 11360.0000 NA
## 5 1 1 1 0.9166667 2865.9167 0.2886751
## 6 1 1 2 1.0000000 389.3333 0.0000000
## RT.sd
## 1 378.81330
## 2 110.99669
## 3 49.31132
## 4 NA
## 5 1919.02032
## 6 221.07355
Exp2$RespCorr2 <- as.numeric(as.character(Exp2$RespCorr))
Exp2Agg <- summaryBy(RespCorr2 + RT ~ Participant + Experiment + Training + Balance + Block, data= Exp2, FUN = c(mean, sd))
head(Exp2Agg)
## Participant Balance Block RespCorr2.mean RT.mean RespCorr2.sd
## 1 1 1 1 0.8333333 2352.9167 0.3892495
## 2 1 1 2 1.0000000 316.0833 0.0000000
## 3 1 1 3 1.0000000 218.7273 0.0000000
## 4 2 1 1 0.7272727 899.2727 0.4670994
## 5 2 1 2 1.0000000 468.6000 0.0000000
## 6 2 1 3 1.0000000 226.7000 0.0000000
## RT.sd
## 1 1800.3283
## 2 282.3449
## 3 230.0079
## 4 1738.5511
## 5 760.1956
## 6 250.0876
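For reference, the same cell means can be computed in base R without doBy; a sketch (means only, no sd columns):
aggregate(cbind(RespCorr2, RT) ~ Participant + Balance + Block, data = Exp1, FUN = mean)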
Now let's take another quick look at the RTs, just to make sure they aren't now magically normally distributed (we'll leave out the tests and just look at plots).
Exp1D <- density(Exp1Agg$RT.mean)
plot(Exp1D, main="Kernel Density of Response Times")
polygon(Exp1D, col= 'red', border= 'black')
qqnorm(Exp1Agg$RT.mean)
qqline(Exp1Agg$RT.mean)
Exp2D <- density(Exp2Agg$RT.mean)
plot(Exp2D, main="Kernel Density of Response Times")
polygon(Exp2D, col= 'red', border= 'black')
qqnorm(Exp2Agg$RT.mean)
qqline(Exp2Agg$RT.mean)
So, still not normally distributed; it looks a little closer, but not there.
How do we get our RTs normally distributed (or close enough not to worry)? One way is simply log transforming the response times (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4528092/).
Exp1Agg$LogRT <- log(Exp1Agg$RT.mean)
Exp2Agg$LogRT <- log(Exp2Agg$RT.mean)
Exp1D <- density(Exp1Agg$LogRT)
plot(Exp1D, main="Kernel Density of Response Times")
polygon(Exp1D, col= 'red', border= 'black')
Exp2D <- density(Exp2Agg$LogRT)
plot(Exp2D, main="Kernel Density of Response Times")
polygon(Exp2D, col= 'red', border= 'black')
qqnorm(Exp1Agg$LogRT)
qqline(Exp1Agg$LogRT)
qqnorm(Exp2Agg$LogRT)
qqline(Exp2Agg$LogRT)
So with those transformations we're a lot closer to looking normally distributed.
But actually the best way to do this is to log transform the ORIGINAL values, aggregate them, and then back-transform them. Let's take a look at doing that.
Exp1$LogRT <- log(Exp1$RT)
Exp2$LogRT <- log(Exp2$RT)
##qqnorm(Exp1$LogRT)
##qqline(Exp1$LogRT)
##qqnorm(Exp2$LogRT)
##qqline(Exp2$LogRT)
Trying to plot these doesn't work (so I've commented them out). Why is that? We get back an error, so let's take a quick look at the data again to figure out what is going on.
str(Exp1)
## 'data.frame': 1135 obs. of 12 variables:
## $ Participant: Factor w/ 33 levels "0","1","2","3",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ Order : Factor w/ 2 levels "Audio-Visual",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ Congruency : Factor w/ 2 levels "Congruent","Incongruent": 1 1 1 1 1 1 1 1 1 1 ...
## $ Balance : Factor w/ 2 levels "1","2": 1 1 1 1 1 1 1 1 1 1 ...
## $ Block : Factor w/ 3 levels "1","2","3": 1 1 1 1 1 1 1 1 1 1 ...
## $ stimulus : Factor w/ 12 levels "e31 ","e32 ",..: 1 8 3 7 11 9 12 6 5 4 ...
## $ stim.num : int 4 11 6 10 14 12 15 9 8 7 ...
## $ Resp : Factor w/ 2 levels "L ","R ": 1 1 2 1 1 1 1 1 1 1 ...
## $ RespCorr : Factor w/ 2 levels "0","1": 2 2 1 2 2 2 2 2 2 2 ...
## $ RT : int 8328 3656 1828 2718 3110 2797 3344 2750 1453 797 ...
## $ RespCorr2 : num 1 1 0 1 1 1 1 1 1 1 ...
## $ LogRT : num 9.03 8.2 7.51 7.91 8.04 ...
unique(Exp1$LogRT)
## [1] 9.027379 8.204125 7.510978 7.907652 8.042378 7.936303 8.114923
## [8] 7.919356 7.281386 6.680855 7.624131 7.354362 6.639876 6.533789
## [15] 5.926926 6.411818 6.115892 5.147494 4.700480 6.045005 6.006353
## [22] 5.968708 4.543295 5.231109 4.948760 5.793014 5.389072 6.437752
## [29] 5.840642 5.579730 6.214608 5.236442 7.836370 6.809039 7.995980
## [36] 6.555357 4.691348 5.313206 6.276643 5.743003 2.772589 4.127134
## [43] 4.828314 5.883322 6.304449 6.598509 5.521461 3.433987 6.079933
## [50] 6.148468 3.850148 8.302018 8.501470 5.583496 6.923629 5.886104
## [57] 5.049856 5.384495 7.039660 6.939254 5.966147 6.967909 6.510258
## [64] 5.693732 4.356709 7.237059 3.465736 4.532599 6.244167 6.463029
## [71] 6.150603 7.431300 6.274762 6.082219 5.459586 6.386879 5.455321
## [78] 6.556778 6.182085 5.638355 -Inf 8.404920 7.569412 7.866722
## [85] 7.412160 7.585281 6.577861 7.374002 7.155396 6.774224 5.690359
## [92] 6.876265 7.052721 7.179308 6.641182 9.368881 8.294050 8.001355
## [99] 4.143135 8.027150 6.042633 6.891626 7.142827 4.941642 7.344073
## [106] 5.318120 7.458763 7.785721 7.739359 7.226209 6.982863 6.599870
## [113] 6.825460 7.104965 7.118016 7.631917 6.576470 6.756932 6.331502
## [120] 7.896553 7.616284 7.383368 7.393263 6.860664 7.593374 7.191429
## [127] 7.323171 8.305731 6.997596 7.924796 6.842683 7.884953 8.016978
## [134] 7.248504 7.167809 7.592870 7.092574 7.130899 7.025538 7.616776
## [141] 7.552762 6.620073 7.259820 7.654443 7.038784 7.313220 7.985144
## [148] 6.953684 6.184149 6.413459 6.843750 7.402452 7.458186 6.907755
## [155] 8.229511 7.333676 7.167038 5.056246 8.137980 6.826545 6.302619
## [162] 7.302496 7.811568 7.732808 6.792344 7.011214 6.246107 7.811163
## [169] 6.859615 8.001020 7.990915 5.837730 6.875232 7.214504 7.779467
## [176] 7.969012 6.938284 8.110127 8.401333 6.719013 6.660575 7.118826
## [183] 7.544861 7.608871 8.142645 6.996681 7.536364 7.818028 7.143618
## [190] 7.080026 5.796058 7.421776 7.066467 7.941651 7.704812 7.519150
## [197] 7.292337 7.779049 7.752765 7.476472 5.746203 4.369448 9.307013
## [204] 8.195610 8.328693 6.968850 8.598220 3.828641 8.722743 7.430707
## [211] 7.553287 6.791221 6.738152 6.892642 7.012115 6.359574 7.105786
## [218] 8.411833 7.249215 7.890583 7.237778 6.679599 6.755769 6.333280
## [225] 7.668561 7.561122 7.323831 7.363914 6.532334 6.361302 9.337854
## [232] 6.661855 7.980024 7.646831 7.872836 6.487684 6.486161 7.215240
## [239] 6.118097 8.415382 8.241967 7.772753 8.545003 7.711549 7.745868
## [246] 7.202661 6.983790 6.736967 5.641907 6.008813 9.546813 8.290042
## [253] 7.697575 7.440147 6.699500 8.081475 7.639161 7.990577 7.661527
## [260] 7.646354 7.824046 7.878534 6.720220 6.700731 7.467371 7.528332
## [267] 7.093405 7.544332 7.690286 7.270313 8.615408 8.994917 6.810142
## [274] 7.765993 8.072155 7.676474 8.100768 6.461468 7.817625 7.303170
## [281] 7.180070 7.792349 7.449498 7.718685 8.062118 7.653969 7.952967
## [288] 7.725330
So LogRT is numeric, but if you look at entry 81 you'll see that there is a value of -Inf, i.e. negative infinity. The only way to get this value from a log transform is if you attempt to log transform a value of 0.
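You can confirm both halves of that claim directly (the count will depend on the data):
log(0)            # -Inf: R maps the log of zero to negative infinity
sum(Exp1$RT == 0) # how many zero-RT trials are lurking in the data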
So we know there are RT values of 0. Let's get rid of those; they don't really make sense (although I understand this is likely a feature of how RT is measured in E-Prime or whatever software this experiment was run in).
Exp1 <- subset(Exp1, RT > 0)
Exp2 <- subset(Exp2, RT > 0)
Exp1$LogRT <- log(Exp1$RT)
Exp2$LogRT <- log(Exp2$RT)
qqnorm(Exp1$LogRT)
qqline(Exp1$LogRT)
qqnorm(Exp2$LogRT)
qqline(Exp2$LogRT)
So, non-aggregated RTs, even once log transformed, are not very normal. What about when we aggregate them?
Exp1Agg2 <- summaryBy(RespCorr2 + LogRT ~ Participant + Experiment + Training + Balance + Block, data= Exp1, FUN = c(mean, sd))
head(Exp1Agg2)
## Participant Balance Block RespCorr2.mean LogRT.mean RespCorr2.sd
## 1 0 1 1 0.9166667 6.494487 0.2886751
## 2 0 1 2 1.0000000 6.034076 0.0000000
## 3 0 1 3 1.0000000 5.724835 0.0000000
## 4 0 2 1 1.0000000 9.337854 NA
## 5 1 1 1 0.9166667 7.800319 0.2886751
## 6 1 1 2 1.0000000 5.772562 0.0000000
## LogRT.sd
## 1 0.4786589
## 2 0.2289095
## 3 0.1550586
## 4 NA
## 5 0.5804868
## 6 0.7002433
Exp2Agg2 <- summaryBy(RespCorr2 + LogRT ~ Participant + Experiment + Training + Balance + Block, data= Exp2, FUN = c(mean, sd))
head(Exp2Agg2)
## Participant Balance Block RespCorr2.mean LogRT.mean RespCorr2.sd
## 1 1 1 1 0.8333333 7.357505 0.3892495
## 2 1 1 2 1.0000000 5.513985 0.0000000
## 3 1 1 3 1.0000000 5.084982 0.0000000
## 4 2 1 1 0.7272727 5.890844 0.4670994
## 5 2 1 2 1.0000000 5.424755 0.0000000
## 6 2 1 3 1.0000000 4.820842 0.0000000
## LogRT.sd
## 1 1.0623188
## 2 0.8912462
## 3 1.0328392
## 4 1.4099304
## 5 1.1773525
## 6 1.4569663
qqnorm(Exp1Agg2$LogRT.mean)
qqline(Exp1Agg2$LogRT.mean)
qqnorm(Exp2Agg2$LogRT.mean)
qqline(Exp2Agg2$LogRT.mean)
Looks pretty good now. What happens when we back-transform the LogRTs into real RT values? Happily, this can just be done with the exp() command.
Exp1Agg2$RT.mean <- exp(Exp1Agg2$LogRT.mean)
Exp2Agg2$RT.mean <- exp(Exp2Agg2$LogRT.mean)
qqnorm(Exp1Agg2$RT.mean)
qqline(Exp1Agg2$RT.mean)
qqnorm(Exp2Agg2$RT.mean)
qqline(Exp2Agg2$RT.mean)
This actually makes things less normal, rather than more, so for now we will proceed with doing the analysis on the aggregated LogRTs.
So let's do some statistics.
To start with, we actually need to re-aggregate the data so that it includes the experimental conditions (Order and Congruency).
library(tidyr)
library(lme4)
Exp1Agg3 <- summaryBy(RespCorr2 + LogRT ~ Participant + Order + Congruency + Balance + Block, data= Exp1, FUN = c(mean, sd))
Exp2Agg3 <- summaryBy(RespCorr2 + LogRT ~ Participant + Order + Congruency + Balance + Block, data= Exp2, FUN = c(mean, sd))
Exp1Agg3$conditions <- interaction(Exp1Agg3$Order, Exp1Agg3$Congruency)
library(ggplot2)
ggplot(data=Exp1Agg3, aes(x = Block, y = LogRT.mean, colour = conditions, group= conditions)) +
geom_smooth(aes(colour = Order, linetype= Congruency),size = 1,se = F)+
geom_point(aes(col = Order)) +
ggtitle("Experiment 1 Response Times") +
labs(x="Block", y="Log of Response Time") +
theme(axis.title.y = element_text(size=12, color="#666666")) +
theme(axis.text = element_text(size=8)) +
theme(plot.title = element_text(size=16, face="bold", hjust=0, color="#666666"))
So that is what our response time data looks like plotted. Let's have a bash at the basic stats.
First, for reporting you'll want some descriptives; we can output those with tapply().
tapply(Exp1Agg3$LogRT.mean, Exp1Agg3$Block, mean)
## 1 2 3
## 6.838918 6.215521 6.119808
tapply(Exp1Agg3$LogRT.mean, Exp1Agg3$Block, sd)
## 1 2 3
## 0.6396959 0.6846121 0.6201896
tapply(Exp1Agg3$LogRT.mean, Exp1Agg3$Order, mean)
## Audio-Visual Visual-Audio
## 6.721755 6.082806
tapply(Exp1Agg3$LogRT.mean, Exp1Agg3$Order, sd)
## Audio-Visual Visual-Audio
## 0.6052118 0.6824787
tapply(Exp1Agg3$LogRT.mean, Exp1Agg3$Congruency, mean)
## Congruent Incongruent
## 6.404573 6.386484
tapply(Exp1Agg3$LogRT.mean, Exp1Agg3$Congruency, sd)
## Congruent Incongruent
## 0.7284164 0.7146551
## Also back-transform so we can report actual RTs for ease of reading
Exp1Agg3$RT <- exp(Exp1Agg3$LogRT.mean)
tapply(Exp1Agg3$RT, Exp1Agg3$Block, mean)
## 1 2 3
## 1266.4703 615.8452 551.0486
tapply(Exp1Agg3$RT, Exp1Agg3$Order, mean)
## Audio-Visual Visual-Audio
## 1088.983 553.085
tapply(Exp1Agg3$RT, Exp1Agg3$Congruency, mean)
## Congruent Incongruent
## 887.2649 738.1191
library(lmerTest)
Exp1Full <- lmer(LogRT.mean ~ Order * Congruency * Block + (1 + Block|Participant), data=Exp1Agg3, REML= FALSE)
#summary(Exp1Full)
anova(Exp1Full)
## Analysis of Variance Table of type III with Satterthwaite
## approximation for degrees of freedom
## Sum Sq Mean Sq NumDF DenDF F.value Pr(>F)
## Order 3.7795 3.7795 1 32.789 21.4914 5.439e-05 ***
## Congruency 0.0019 0.0019 1 32.789 0.0107 0.9183
## Block 7.6774 3.8387 2 51.944 21.8282 1.316e-07 ***
## Order:Congruency 0.3924 0.3924 1 32.789 2.2312 0.1448
## Order:Block 0.3243 0.1621 2 51.944 0.9220 0.4041
## Congruency:Block 0.2433 0.1217 2 51.944 0.6918 0.5052
## Order:Congruency:Block 0.3781 0.1890 2 51.944 1.0749 0.3488
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
So what do we find here? Basically, there are main effects of Block and of Order, but no effect of Congruency, and no significant interactions.
In the paper we currently report a significant difference as a post-hoc test based on some t-tests, but as I understand it those tests are currently done on non-aggregated data. As it stands, the lack of significant interactions doesn't really license a post-hoc t-test, but we can go ahead and look at them anyway.
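If we did want a more principled follow-up than hand-rolled t-tests, one option would be estimated marginal means contrasts; a sketch, assuming the emmeans package is available:
library(emmeans)
# Contrast Congruency within each level of Order (averaging over Block)
emmeans(Exp1Full, pairwise ~ Congruency | Order)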
First let's produce some plots that don't include Block
ggplot(data=Exp1Agg3, aes(x = Order, y = LogRT.mean, group= conditions)) +
geom_boxplot(aes(fill = Congruency),size = 1) +
ggtitle("Experiment 1 Response Times") +
labs(x="Order", y="Log of Response Time") +
theme(axis.title.y = element_text(size=12, color="#666666")) +
theme(axis.text = element_text(size=8)) +
theme(plot.title = element_text(size=16, face="bold", hjust=0, color="#666666"))
And the stats?
Exp1.AV <- subset(Exp1Agg3, Order == "Audio-Visual")
Exp1.VA <- subset(Exp1Agg3, Order == "Visual-Audio")
Exp1.AV.C <- subset(Exp1.AV, Congruency == "Congruent")
Exp1.AV.I <- subset(Exp1.AV, Congruency == "Incongruent")
Exp1.VA.C <- subset(Exp1.VA, Congruency == "Congruent")
Exp1.VA.I <- subset(Exp1.VA, Congruency == "Incongruent")
mean(Exp1.AV.C$LogRT.mean)
## [1] 6.641009
mean(Exp1.AV.I$LogRT.mean)
## [1] 6.805864
mean(Exp1.VA.C$LogRT.mean)
## [1] 6.185651
mean(Exp1.VA.I$LogRT.mean)
## [1] 5.967104
sd(Exp1.AV.C$LogRT.mean)
## [1] 0.7155402
sd(Exp1.AV.I$LogRT.mean)
## [1] 0.46442
sd(Exp1.VA.C$LogRT.mean)
## [1] 0.6816437
sd(Exp1.VA.I$LogRT.mean)
## [1] 0.6789186
mean(Exp1.AV.C$RT)
## [1] 1181.528
mean(Exp1.AV.I$RT)
## [1] 992.5818
mean(Exp1.VA.C$RT)
## [1] 614.7994
mean(Exp1.VA.I$RT)
## [1] 483.6563
t.test(Exp1.AV.C$LogRT.mean, Exp1.AV.I$LogRT.mean)
##
## Welch Two Sample t-test
##
## data: Exp1.AV.C$LogRT.mean and Exp1.AV.I$LogRT.mean
## t = -0.96036, df = 41.372, p-value = 0.3425
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.5114330 0.1817227
## sample estimates:
## mean of x mean of y
## 6.641009 6.805864
t.test(Exp1.VA.C$LogRT.mean, Exp1.VA.I$LogRT.mean)
##
## Welch Two Sample t-test
##
## data: Exp1.VA.C$LogRT.mean and Exp1.VA.I$LogRT.mean
## t = 1.1453, df = 48.346, p-value = 0.2577
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.1650617 0.6021559
## sample estimates:
## mean of x mean of y
## 6.185651 5.967104
So unfortunately, on these log transformed RT values, we don't get significant differences on these tests either.
What about the correctness data for Experiment 1?
ggplot(data=Exp1Agg3, aes(x = Block, y = RespCorr2.mean, colour = conditions, group= conditions)) +
geom_smooth(aes(colour = Order, linetype= Congruency),size = 1,se = F)+
geom_jitter(aes(col = Order, shape = Congruency)) +
ggtitle("Experiment 1 Correctness") +
labs(x="Block", y="Proportion Correct") +
theme(axis.title.y = element_text(size=12, color="#666666")) +
theme(axis.text = element_text(size=8)) +
theme(plot.title = element_text(size=16, face="bold", hjust=0, color="#666666"))
There is at least a bit of weirdness in this data, especially the horseshoe-shaped correctness curve for Incongruent Audio-Visual participants, where suddenly a good chunk are quite bad at the task in block 3. It's probably just because of low n, but it definitely stands out as bizarre; it might be somewhere to look for relatively junky data (i.e. this might go away if we further filtered out response times that are too long or too short, as sketched below).
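As a sketch of that filtering idea (the cutoffs here are placeholders, not recommendations):
# Hypothetical trimming: drop implausibly fast or slow trials before aggregating
Exp1Trim <- subset(Exp1, RT > 200 & RT < 10000)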
And the stats:
tapply(Exp1Agg3$RespCorr2.mean, Exp1Agg3$Block, mean)
## 1 2 3
## 0.8225936 0.9222528 0.9065197
tapply(Exp1Agg3$RespCorr2.mean, Exp1Agg3$Block, sd)
## 1 2 3
## 0.1381761 0.1684835 0.1747442
tapply(Exp1Agg3$RespCorr2.mean, Exp1Agg3$Order, mean)
## Audio-Visual Visual-Audio
## 0.8955370 0.8713012
tapply(Exp1Agg3$RespCorr2.mean, Exp1Agg3$Order, sd)
## Audio-Visual Visual-Audio
## 0.1448225 0.1837020
tapply(Exp1Agg3$RespCorr2.mean, Exp1Agg3$Congruency, mean)
## Congruent Incongruent
## 0.8975719 0.8675821
tapply(Exp1Agg3$RespCorr2.mean, Exp1Agg3$Congruency, sd)
## Congruent Incongruent
## 0.1584912 0.1729092
Exp1FullCorr <- lmer(RespCorr2.mean ~ Order * Congruency * Block + (1 + Block|Participant), data=Exp1Agg3, REML= FALSE)
#summary(Exp1FullCorr)
anova(Exp1FullCorr)
## Analysis of Variance Table of type III with Satterthwaite
## approximation for degrees of freedom
## Sum Sq Mean Sq NumDF DenDF F.value Pr(>F)
## Order 0.000983 0.000983 1 32.980 0.2585 0.6145457
## Congruency 0.001388 0.001388 1 32.980 0.3650 0.5498785
## Block 0.080893 0.040447 2 32.779 10.6353 0.0002756
## Order:Congruency 0.000513 0.000513 1 32.980 0.1350 0.7156322
## Order:Block 0.015192 0.007596 2 32.779 1.9974 0.1518698
## Congruency:Block 0.004228 0.002114 2 32.779 0.5559 0.5788663
## Order:Congruency:Block 0.008106 0.004053 2 32.779 1.0658 0.3560956
##
## Order
## Congruency
## Block ***
## Order:Congruency
## Order:Block
## Congruency:Block
## Order:Congruency:Block
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
For the correctness data, we only get a main effect of Block: not surprisingly, participants get better over the course of the blocks.
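If we wanted to unpack that Block effect, a quick follow-up (a sketch, not run here) would be pairwise comparisons across blocks:
# Holm-corrected pairwise t-tests between blocks on the aggregated accuracies
pairwise.t.test(Exp1Agg3$RespCorr2.mean, Exp1Agg3$Block, p.adjust.method = "holm")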
The last thing we might want to consider looking at is the basic correlation between correctness and response times.
ggplot(data=Exp1Agg3, aes(x = RespCorr2.mean , y = LogRT.mean, group= conditions)) +
geom_smooth(aes(colour = Order, linetype= Congruency),size = 1,se = F, method = lm)+
geom_point(aes(col = Order, shape = Congruency)) +
ggtitle("Experiment 1- Scatterplot of Correctness x LogRT") +
labs(x="Proportion Correct", y="LogRT") +
theme(axis.title.y = element_text(size=12, color="#666666")) +
theme(axis.text = element_text(size=8)) +
theme(plot.title = element_text(size=16, face="bold", hjust=0, color="#666666"))
So, generally folks get faster as they get better, except those who were in the Audio-Visual Congruent condition, who strangely get slightly slower as they get better (probably just a few outliers; see the quick check below).
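For reporting, the correlation itself, plus a check on that one odd-looking cell, can be computed directly; a sketch:
# Overall accuracy x speed correlation across the aggregated cells
cor.test(Exp1Agg3$RespCorr2.mean, Exp1Agg3$LogRT.mean)
# And within just the Audio-Visual / Congruent cells that buck the trend
with(subset(Exp1Agg3, conditions == "Audio-Visual.Congruent"),
     cor.test(RespCorr2.mean, LogRT.mean))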
Now all the same analyses, but for Experiment 2
Exp2Agg3$conditions <- interaction(Exp2Agg3$Order, Exp2Agg3$Congruency)
library(ggplot2)
ggplot(data=Exp2Agg3, aes(x = Block, y = LogRT.mean, colour = conditions, group= conditions)) +
geom_smooth(aes(colour = Order, linetype= Congruency),size = 1,se = F)+
geom_point(aes(col = Order)) +
ggtitle("Experiment 2 Response Times") +
labs(x="Block", y="Log of Response Time") +
theme(axis.title.y = element_text(size=12, color="#666666")) +
theme(axis.text = element_text(size=8)) +
theme(plot.title = element_text(size=16, face="bold", hjust=0, color="#666666"))
tapply(Exp2Agg3$LogRT.mean, Exp2Agg3$Block, mean)
## 1 2 3
## 6.493579 5.975540 5.827347
tapply(Exp2Agg3$LogRT.mean, Exp2Agg3$Block, sd)
## 1 2 3
## 0.5649686 0.7339003 0.6862552
tapply(Exp2Agg3$LogRT.mean, Exp2Agg3$Order, mean)
## Audio-Visual Visual-Audio
## 6.556460 5.726991
tapply(Exp2Agg3$LogRT.mean, Exp2Agg3$Order, sd)
## Audio-Visual Visual-Audio
## 0.5370541 0.6280257
tapply(Exp2Agg3$LogRT.mean, Exp2Agg3$Congruency, mean)
## Congruent Incongruent
## 6.031875 6.170550
tapply(Exp2Agg3$LogRT.mean, Exp2Agg3$Congruency, sd)
## Congruent Incongruent
## 0.7384512 0.6965179
Exp2Agg3$RT <- exp(Exp2Agg3$LogRT.mean)
tapply(Exp2Agg3$RT, Exp2Agg3$Block, mean)
## 1 2 3
## 764.5301 532.4249 452.4598
tapply(Exp2Agg3$RT, Exp2Agg3$Order, mean)
## Audio-Visual Visual-Audio
## 835.8213 377.8333
tapply(Exp2Agg3$RT, Exp2Agg3$Congruency, mean)
## Congruent Incongruent
## 538.3861 631.0869
Exp2Full <- lmer(LogRT.mean ~ Order * Congruency * Block + (1|Participant), data=Exp2Agg3, REML= FALSE)
#summary(Exp2Full)
anova(Exp2Full)
## Analysis of Variance Table of type III with Satterthwaite
## approximation for degrees of freedom
## Sum Sq Mean Sq NumDF DenDF F.value Pr(>F)
## Order 5.1425 5.1425 1 29 38.323 9.455e-07 ***
## Congruency 0.2187 0.2187 1 29 1.630 0.211886
## Block 6.1793 3.0897 2 58 23.025 4.359e-08 ***
## Order:Congruency 0.0027 0.0027 1 29 0.020 0.888063
## Order:Block 1.5346 0.7673 2 58 5.718 0.005412 **
## Congruency:Block 1.1604 0.5802 2 58 4.324 0.017771 *
## Order:Congruency:Block 0.0232 0.0116 2 58 0.086 0.917359
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
ggplot(data=Exp2Agg3, aes(x = Order, y = LogRT.mean, group= conditions)) +
geom_boxplot(aes(fill = Congruency),size = 1) +
ggtitle("Experiment 2 Response Times") +
labs(x="Order", y="Log of Response Time") +
theme(axis.title.y = element_text(size=12, color="#666666")) +
theme(axis.text = element_text(size=8)) +
theme(plot.title = element_text(size=16, face="bold", hjust=0, color="#666666"))
Exp2.AV <- subset(Exp2Agg3, Order == "Audio-Visual")
Exp2.VA <- subset(Exp2Agg3, Order == "Visual-Audio")
Exp2.AV.C <- subset(Exp2.AV, Congruency == "Congruent")
Exp2.AV.I <- subset(Exp2.AV, Congruency == "Incongruent")
Exp2.VA.C <- subset(Exp2.VA, Congruency == "Congruent")
Exp2.VA.I <- subset(Exp2.VA, Congruency == "Incongruent")
mean(Exp2.AV.C$LogRT.mean)
## [1] 6.467955
mean(Exp2.AV.I$LogRT.mean)
## [1] 6.659716
mean(Exp2.VA.C$LogRT.mean)
## [1] 5.650306
mean(Exp2.VA.I$LogRT.mean)
## [1] 5.803676
sd(Exp2.AV.C$LogRT.mean)
## [1] 0.4629013
sd(Exp2.AV.I$LogRT.mean)
## [1] 0.6096416
sd(Exp2.VA.C$LogRT.mean)
## [1] 0.7289166
sd(Exp2.VA.I$LogRT.mean)
## [1] 0.5122394
mean(Exp2.AV.C$RT)
## [1] 720.0625
mean(Exp2.AV.I$RT)
## [1] 970.8732
mean(Exp2.VA.C$RT)
## [1] 379.4193
mean(Exp2.VA.I$RT)
## [1] 376.2472
t.test(Exp2.AV.C$LogRT.mean, Exp2.AV.I$LogRT.mean)
##
## Welch Two Sample t-test
##
## data: Exp2.AV.C$LogRT.mean and Exp2.AV.I$LogRT.mean
## t = -1.0917, df = 31.429, p-value = 0.2832
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.5497957 0.1662738
## sample estimates:
## mean of x mean of y
## 6.467955 6.659716
t.test(Exp2.VA.C$LogRT.mean, Exp2.VA.I$LogRT.mean)
##
## Welch Two Sample t-test
##
## data: Exp2.VA.C$LogRT.mean and Exp2.VA.I$LogRT.mean
## t = -0.84337, df = 41.263, p-value = 0.4039
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.5205625 0.2138221
## sample estimates:
## mean of x mean of y
## 5.650306 5.803676
ggplot(data=Exp2Agg3, aes(x = Block, y = RespCorr2.mean, colour = conditions, group= conditions)) +
geom_smooth(aes(colour = Order, linetype= Congruency),size = 1,se = F)+
geom_jitter(aes(col = Order, shape = Congruency)) +
ggtitle("Experiment 2 Correctness") +
labs(x="Block", y="Proportion Correct") +
theme(axis.title.y = element_text(size=12, color="#666666")) +
theme(axis.text = element_text(size=8)) +
theme(plot.title = element_text(size=16, face="bold", hjust=0, color="#666666"))
Exp2FullCorr <- lmer(RespCorr2.mean ~ Order * Congruency * Block + (1|Participant), data=Exp2Agg3, REML= FALSE)
#summary(Exp2FullCorr)
anova(Exp2FullCorr)
## Analysis of Variance Table of type III with Satterthwaite
## approximation for degrees of freedom
## Sum Sq Mean Sq NumDF DenDF F.value Pr(>F)
## Order 0.00110 0.00110 1 29 0.319 0.57645
## Congruency 0.01105 0.01105 1 29 3.219 0.08324 .
## Block 0.67973 0.33987 2 58 99.006 < 2e-16 ***
## Order:Congruency 0.01772 0.01772 1 29 5.161 0.03070 *
## Order:Block 0.00590 0.00295 2 58 0.859 0.42891
## Congruency:Block 0.02188 0.01094 2 58 3.186 0.04865 *
## Order:Congruency:Block 0.02745 0.01372 2 58 3.998 0.02363 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
tapply(Exp2Agg3$RespCorr2.mean, Exp2Agg3$Block, mean)
## 1 2 3
## 0.7729885 0.9446883 0.9776385
tapply(Exp2Agg3$RespCorr2.mean, Exp2Agg3$Block, sd)
## 1 2 3
## 0.09619045 0.07861606 0.05221333
tapply(Exp2Agg3$RespCorr2.mean, Exp2Agg3$Order, mean)
## Audio-Visual Visual-Audio
## 0.8927998 0.9030198
tapply(Exp2Agg3$RespCorr2.mean, Exp2Agg3$Order, sd)
## Audio-Visual Visual-Audio
## 0.1294324 0.1101199
tapply(Exp2Agg3$RespCorr2.mean, Exp2Agg3$Congruency, mean)
## Congruent Incongruent
## 0.9154433 0.8802189
tapply(Exp2Agg3$RespCorr2.mean, Exp2Agg3$Congruency, sd)
## Congruent Incongruent
## 0.1148595 0.1211086
ggplot(data=Exp2Agg3, aes(x = RespCorr2.mean , y = LogRT.mean, group= conditions)) +
geom_smooth(aes(colour = Order, linetype= Congruency),size = 1,se = F, method = lm)+
geom_point(aes(col = Order, shape = Congruency)) +
ggtitle("Experiment 2- Scatterplot of Correctness x LogRT") +
labs(x="Proportion Correct", y="LogRT") +
theme(axis.title.y = element_text(size=12, color="#666666")) +
theme(axis.text = element_text(size=8)) +
theme(plot.title = element_text(size=16, face="bold", hjust=0, color="#666666"))