Study Motivation

The ability to observe and organize environmental stimuli into meaningful knowledge is an important survival behaviour in animals. It helps an organism identify dangerous landscapes and poisonous foods, learn the parameters of environmental features, and generalize that knowledge to similar stimuli. This ability continues to shape higher-order cognition in humans. Among taxonomists, for instance, the ability to recognize the characteristics of species and group them by their similarities is a necessary professional skill: many Galapagos finches look alike, and a taxonomist who could not tell them apart would not be much of a taxonomist.

In the COVIS model of category learning, categorization is characterized by two systems that compete simultaneously to detect and process incoming information (Ashby, Alfonso-Reese, Turken, & Waldron, 1998). Categorization is assumed to be mediated by one of two independent systems. The first is a verbal system that processes incoming information using learned, easily verbalizable rules and hypothesis testing. Its dependence on hypothesis testing and executive functioning indicates that this process is heavily influenced by working memory. In particular, it is thought to draw on the verbal components of working memory, as described by Baddeley and Hitch in their 1974 paper and updated by Baddeley in 1986, 2000, and 2013, and to be specifically mediated by a supportive system of the central executive referred to as the phonological loop. The phonological loop is a rehearsal mechanism responsible for maintaining chunks of temporary verbalizable information. For instance, if asked to remember a string of numbers, you might do so by repeating the numbers to yourself over and over again. In contrast, the second system uses a similarity-based information integration (II) process, in which multiple dimensions of an object are integrated at a procedural learning-based level. Because of its reliance on procedural memory, the II account makes predictions about motor performance during categorization tasks; specifically, II performance should diminish in the presence of an unrelated motor task (Maddox, Bohil, & Ing, 2004). Because these processes rely on seemingly independent systems and make different predictions, they should be distinguishable from one another.

A fundamental claim of multiple-systems accounts of categorization is that different systems are supported by different cognitive structures. Demonstrating this requires instances in which disrupting a structure impairs one system but not the other. Although rule-described (RD) learning is thought to be influenced by working memory, the evidence supporting this is mixed, and there is reason to think that RD learning may also be affected by executive functioning and mood. Rule-described categorization appears to be influenced and mediated by a multitude of factors whose interaction is not well understood.

The Current Study

The current study addresses these concerns by having participants complete a Gaussian blur categorization task in which they learn either a rule-described (RD) category set or a non-rule-described (information integration, II) category set. At the same time, participants complete one of two concurrent tasks. The first concurrent task requires participants to speak a list of letters aloud while completing the categorization task. The second requires participants to tap on the desk with their non-dominant hand whenever a colon appears on the screen during the categorization task. It is hypothesized that participants assigned to the concurrent verbal task will show performance deficits in rule-described category learning; the concurrent verbal task is not expected to affect II category learning. Additionally, because information integration category learning is thought to depend on procedural memory and motor function, participants assigned to the concurrent tapping task are expected to show performance deficits in information integration learning but not in rule-described learning. The study's tasks were developed in PsychoPy2, and participants are assigned to one of six groups: (1) rule-described concurrent tapping, (2) rule-described concurrent articulation, (3) information integration concurrent tapping, (4) information integration concurrent articulation, (5) rule-described no concurrent task, and (6) information integration no concurrent task.

This document is an attempt to practice modelling these data according to a number of different categorization models.

Libraries

The first step in any R analysis is to prepare the libraries that R will use. This is broken down into two parts. First are the packages used for data wrangling. The data come from PsychoPy2, which (as a result of how the experiment was set up) produced inconsistent data files: responses are saved in different columns depending on which condition a participant was run in. For example, a participant run in condition 1 will have their responses saved in column Q, while a participant run in condition 2 will have their data saved in column U, and so on. Each participant's conditions and responses need to be consolidated into a single readable file before we can run an analysis. While it is possible to do this manually outside of R, it is easier in the long run to build the script to account for these discrepancies.

Packages for data wrangling:

#install.packages("data.table")
#install.packages("plyr")
#install.packages("knitr")
#install.packages("ggplot2")
#install.packages("ggrepel")
#install.packages("dplyr")
#install.packages("Hmisc")
#install.packages("readr")
#install.packages("Rmisc")
#install.packages("kableExtra")
#install.packages("reshape")
#install.packages("dplyr")
#install.packages("ez")

library(data.table)
library(plyr)
library(knitr)
library(ggplot2)
library(ggrepel)
library(dplyr)
library(Hmisc)
library(readr)
library(Rmisc)
library(kableExtra)
library(reshape)
library(ez)

Set Working Directory

knitr::opts_knit$set(root.dir = normalizePath("C:\\Users\\Josh\\Dropbox\\COVIS Interference\\csvfolder"))
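This path is specific to the machine the data were collected on. If the csv files sit in a folder next to this document, a relative path works just as well (a sketch; "csvfolder" is assumed here to be a directory beside the .Rmd):

knitr::opts_knit$set(root.dir = normalizePath("csvfolder"))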

Data Wrangling

Combine csv Files

#First, list all the csv files in the working directory

csv <- list.files(full.names = TRUE, pattern = '\\.csv$')

#Print the file list to confirm everything was found; participants who need to be removed are subset out of the data frame further below

csv
##   [1] "./P1_Universal RBII Concurrent Task_2019_Jan_22_1405.csv"  
##   [2] "./P10_Universal RBII Concurrent Task_2019_Mar_05_1607.csv" 
##   [3] "./P100_Universal RBII Concurrent Task_2019_Jul_11_1205.csv"
##   [4] "./P101_Universal RBII Concurrent Task_2019_Jul_11_1351.csv"
##   [5] "./P102_Universal RBII Concurrent Task_2019_Jul_12_0909.csv"
##   [6] "./P103_Universal RBII Concurrent Task_2019_Jul_12_1107.csv"
##   [7] "./P104_Universal RBII Concurrent Task_2019_Jul_12_1306.csv"
##   [8] "./P105_Universal RBII Concurrent Task_2019_Jul_12_1413.csv"
##   [9] "./P106_Universal RBII Concurrent Task_2019_Jul_12_1748.csv"
##  [10] "./P107_Universal RBII Concurrent Task_2019_Jul_13_0907.csv"
##  [11] "./P108_Universal RBII Concurrent Task_2019_Jul_13_1006.csv"
##  [12] "./P109_Universal RBII Concurrent Task_2019_Jul_13_1103.csv"
##  [13] "./P11_Universal RBII Concurrent Task_2019_Mar_06_1707.csv" 
##  [14] "./P110_Universal RBII Concurrent Task_2019_Jul_13_1212.csv"
##  [15] "./P111_Universal RBII Concurrent Task_2019_Jul_13_1310.csv"
##  [16] "./P112_Universal RBII Concurrent Task_2019_Jul_15_1116.csv"
##  [17] "./P113_Universal RBII Concurrent Task_2019_Jul_15_1153.csv"
##  [18] "./P114_Universal RBII Concurrent Task_2019_Jul_15_1300.csv"
##  [19] "./P115_Universal RBII Concurrent Task_2019_Jul_22_1109.csv"
##  [20] "./P116_Universal RBII Concurrent Task_2019_Jul_23_1311.csv"
##  [21] "./P117_Universal RBII Concurrent Task_2019_Jul_23_1407.csv"
##  [22] "./P118_Universal RBII Concurrent Task_2019_Jul_24_1837.csv"
##  [23] "./P119_Universal RBII Concurrent Task_2019_Jul_28_0911.csv"
##  [24] "./P12_Universal RBII Concurrent Task_2019_Mar_06_1710.csv" 
##  [25] "./P120_Universal RBII Concurrent Task_2019_Aug_01_1431.csv"
##  [26] "./P121_Universal RBII Concurrent Task_2019_Aug_06_1209.csv"
##  [27] "./P122_Universal RBII Concurrent Task_2019_Aug_16_1738.csv"
##  [28] "./P123_Universal RBII Concurrent Task_2019_Sep_05_1009.csv"
##  [29] "./P124_Universal RBII Concurrent Task_2019_Sep_11_1308.csv"
##  [30] "./P125_Universal RBII Concurrent Task_2019_Sep_13_1236.csv"
##  [31] "./P126_Universal RBII Concurrent Task_2019_Sep_13_1310.csv"
##  [32] "./P127_Universal RBII Concurrent Task_2019_Sep_23_1006.csv"
##  [33] "./P128_Universal RBII Concurrent Task_2019_Sep_25_1733.csv"
##  [34] "./P129_Universal RBII Concurrent Task_2019_Nov_11_1023.csv"
##  [35] "./P13_Universal RBII Concurrent Task_2019_Mar_07_1119.csv" 
##  [36] "./P130_Universal RBII Concurrent Task_2019_Nov_12_1053.csv"
##  [37] "./P131_Universal RBII Concurrent Task_2019_Nov_12_1137.csv"
##  [38] "./P132_Universal RBII Concurrent Task_2019_Nov_12_1222.csv"
##  [39] "./P133_Universal RBII Concurrent Task_2019_Nov_13_1251.csv"
##  [40] "./P134_Universal RBII Concurrent Task_2019_Nov_13_1424.csv"
##  [41] "./P135_Universal RBII Concurrent Task_2019_Nov_16_1011.csv"
##  [42] "./P135_Universal RBII Concurrent Task_2019_Nov_16_1015.csv"
##  [43] "./P136_Universal RBII Concurrent Task_2019_Nov_16_1055.csv"
##  [44] "./P137_Universal RBII Concurrent Task_2019_Nov_17_1003.csv"
##  [45] "./P138_Universal RBII Concurrent Task_2019_Nov_22_1054.csv"
##  [46] "./P139_Universal RBII Concurrent Task_2019_Nov_22_1303.csv"
##  [47] "./P14_Universal RBII Concurrent Task_2019_Mar_07_1304.csv" 
##  [48] "./P140_Universal RBII Concurrent Task_2019_Nov_22_1354.csv"
##  [49] "./P141_Universal RBII Concurrent Task_2019_Nov_24_1004.csv"
##  [50] "./P142_Universal RBII Concurrent Task_2019_Nov_24_1031.csv"
##  [51] "./P143_Universal RBII Concurrent Task_2019_Nov_26_1444.csv"
##  [52] "./P144_Universal RBII Concurrent Task_2019_Nov_27_1207.csv"
##  [53] "./P145_Universal RBII Concurrent Task_2019_Dec_13_1443.csv"
##  [54] "./P146_Universal RBII Concurrent Task_2020_Jan_17_1237.csv"
##  [55] "./P147_Universal RBII Concurrent Task_2020_Jan_17_1315.csv"
##  [56] "./P148_Universal RBII Concurrent Task_2020_Jan_17_1405.csv"
##  [57] "./P149_Universal RBII Concurrent Task_2020_Jan_17_1458.csv"
##  [58] "./P15_Universal RBII Concurrent Task_2019_Mar_10_1204.csv" 
##  [59] "./P150_Universal RBII Concurrent Task_2020_Jan_20_1052.csv"
##  [60] "./P151_Universal RBII Concurrent Task_2020_Jan_20_1137.csv"
##  [61] "./P152_Universal RBII Concurrent Task_2020_Jan_20_1222.csv"
##  [62] "./P153_Universal RBII Concurrent Task_2020_Jan_20_1307.csv"
##  [63] "./P154_Universal RBII Concurrent Task_2020_Jan_20_1437.csv"
##  [64] "./P155_Universal RBII Concurrent Task_2020_Jan_20_1522.csv"
##  [65] "./P156_Universal RBII Concurrent Task_2020_Jan_21_1309.csv"
##  [66] "./P157_Universal RBII Concurrent Task_2020_Jan_21_1348.csv"
##  [67] "./P158_Universal RBII Concurrent Task_2020_Jan_22_0943.csv"
##  [68] "./P159_Universal RBII Concurrent Task_2020_Jan_22_1021.csv"
##  [69] "./P160_Universal RBII Concurrent Task_2020_Jan_22_1104.csv"
##  [70] "./P161_Universal RBII Concurrent Task_2020_Jan_22_1153.csv"
##  [71] "./P162_Universal RBII Concurrent Task_2020_Jan_22_1238.csv"
##  [72] "./P163_Universal RBII Concurrent Task_2020_Jan_22_1321.csv"
##  [73] "./P164_Universal RBII Concurrent Task_2020_Jan_22_1405.csv"
##  [74] "./P165_Universal RBII Concurrent Task_2020_Jan_22_1452.csv"
##  [75] "./P166_Universal RBII Concurrent Task_2020_Jan_23_0939.csv"
##  [76] "./P167_Universal RBII Concurrent Task_2020_Jan_23_1008.csv"
##  [77] "./P168_Universal RBII Concurrent Task_2020_Jan_23_1053.csv"
##  [78] "./P169_Universal RBII Concurrent Task_2020_Jan_23_1224.csv"
##  [79] "./P16R_Universal RBII Concurrent Task_2019_Mar_11_1821.csv"
##  [80] "./P170_Universal RBII Concurrent Task_2020_Jan_23_1351.csv"
##  [81] "./P171_Universal RBII Concurrent Task_2020_Jan_23_1623.csv"
##  [82] "./P172_Universal RBII Concurrent Task_2020_Jan_23_1646.csv"
##  [83] "./P173_Universal RBII Concurrent Task_2020_Jan_24_1238.csv"
##  [84] "./P174_Universal RBII Concurrent Task_2020_Jan_24_1323.csv"
##  [85] "./P175_Universal RBII Concurrent Task_2020_Jan_24_1411.csv"
##  [86] "./P176_Universal RBII Concurrent Task_2020_Jan_24_1449.csv"
##  [87] "./P177_Universal RBII Concurrent Task_2020_Jan_24_1511.csv"
##  [88] "./P178_Universal RBII Concurrent Task_2020_Jan_27_0936.csv"
##  [89] "./P179_Universal RBII Concurrent Task_2020_Jan_27_1105.csv"
##  [90] "./P17R_Universal RBII Concurrent Task_2019_Mar_11_1818.csv"
##  [91] "./P18_Universal RBII Concurrent Task_2019_Mar_13_1604.csv" 
##  [92] "./P180_Universal RBII Concurrent Task_2020_Jan_27_1151.csv"
##  [93] "./P181_Universal RBII Concurrent Task_2020_Jan_27_1407.csv"
##  [94] "./P182_Universal RBII Concurrent Task_2020_Jan_27_1451.csv"
##  [95] "./P183_Universal RBII Concurrent Task_2020_Jan_28_1021.csv"
##  [96] "./P184_Universal RBII Concurrent Task_2020_Jan_28_1329.csv"
##  [97] "./P185_Universal RBII Concurrent Task_2020_Jan_28_1407.csv"
##  [98] "./P186_Universal RBII Concurrent Task_2020_Jan_28_1535.csv"
##  [99] "./P187_Universal RBII Concurrent Task_2020_Jan_29_0937.csv"
## [100] "./P188_Universal RBII Concurrent Task_2020_Jan_29_1022.csv"
## [101] "./P189_Universal RBII Concurrent Task_2020_Jan_29_1157.csv"
## [102] "./P19_Universal RBII Concurrent Task_2019_Mar_14_1104.csv" 
## [103] "./P190_Universal RBII Concurrent Task_2020_Jan_29_1420.csv"
## [104] "./P191_Universal RBII Concurrent Task_2020_Jan_30_0907.csv"
## [105] "./P192_Universal RBII Concurrent Task_2020_Jan_30_0952.csv"
## [106] "./P193_Universal RBII Concurrent Task_2020_Jan_30_1038.csv"
## [107] "./P194_Universal RBII Concurrent Task_2020_Jan_30_1207.csv"
## [108] "./P195_Universal RBII Concurrent Task_2020_Jan_30_1235.csv"
## [109] "./P196_Universal RBII Concurrent Task_2020_Jan_30_1321.csv"
## [110] "./P197_Universal RBII Concurrent Task_2020_Jan_30_1535.csv"
## [111] "./P198_Universal RBII Concurrent Task_2020_Jan_30_1621.csv"
## [112] "./P199_Universal RBII Concurrent Task_2020_Jan_31_0906.csv"
## [113] "./P2_Universal RBII Concurrent Task_2019_Jan_22_1501.csv"  
## [114] "./P200_Universal RBII Concurrent Task_2020_Jan_31_0952.csv"
## [115] "./P201_Universal RBII Concurrent Task_2020_Jan_31_1036.csv"
## [116] "./P202_Universal RBII Concurrent Task_2020_Jan_31_1205.csv"
## [117] "./P203_Universal RBII Concurrent Task_2020_Jan_30_1404.csv"
## [118] "./P204_Universal RBII Concurrent Task_2020_Jan_31_1256.csv"
## [119] "./P205_Universal RBII Concurrent Task_2020_Jan_31_1356.csv"
## [120] "./P206_Universal RBII Concurrent Task_2020_Jan_31_1600.csv"
## [121] "./P207_Universal RBII Concurrent Task_2020_Feb_01_1204.csv"
## [122] "./P208_Universal RBII Concurrent Task_2020_Feb_01_1252.csv"
## [123] "./P209_Universal RBII Concurrent Task_2020_Feb_01_1337.csv"
## [124] "./P20R_Universal RBII Concurrent Task_2019_Mar_14_1122.csv"
## [125] "./P210_Universal RBII Concurrent Task_2020_Feb_02_0933.csv"
## [126] "./P211_Universal RBII Concurrent Task_2020_Feb_02_1016.csv"
## [127] "./P212_Universal RBII Concurrent Task_2020_Feb_02_1107.csv"
## [128] "./P213_Universal RBII Concurrent Task_2020_Feb_03_1029.csv"
## [129] "./P214_Universal RBII Concurrent Task_2020_Feb_03_1108.csv"
## [130] "./P215_Universal RBII Concurrent Task_2020_Feb_03_1151.csv"
## [131] "./P216_Universal RBII Concurrent Task_2020_Feb_03_1236.csv"
## [132] "./P217_Universal RBII Concurrent Task_2020_Feb_03_1408.csv"
## [133] "./P218_Universal RBII Concurrent Task_2020_Feb_04_0907.csv"
## [134] "./P219_Universal RBII Concurrent Task_2020_Feb_04_0952.csv"
## [135] "./P21R_Universal RBII Concurrent Task_2019_Mar_14_1146.csv"
## [136] "./P220_Universal RBII Concurrent Task_2020_Feb_04_1103.csv"
## [137] "./P221_Universal RBII Concurrent Task_2020_Feb_04_1127.csv"
## [138] "./P222_Universal RBII Concurrent Task_2020_Feb_05_0937.csv"
## [139] "./P223_Universal RBII Concurrent Task_2020_Feb_05_1021.csv"
## [140] "./P224_Universal RBII Concurrent Task_2020_Feb_05_1116.csv"
## [141] "./P225_Universal RBII Concurrent Task_2020_Feb_05_1150.csv"
## [142] "./P226_Universal RBII Concurrent Task_2020_Feb_05_1623.csv"
## [143] "./P227_Universal RBII Concurrent Task_2020_Feb_06_1323.csv"
## [144] "./P228_Universal RBII Concurrent Task_2020_Feb_07_1313.csv"
## [145] "./P229_Universal RBII Concurrent Task_2020_Feb_10_1007.csv"
## [146] "./P22R_Universal RBII Concurrent Task_2019_Mar_18_1358.csv"
## [147] "./P23_Universal RBII Concurrent Task_2019_Mar_18_1637.csv" 
## [148] "./P230_Universal RBII Concurrent Task_2020_Feb_10_1108.csv"
## [149] "./P231_Universal RBII Concurrent Task_2020_Feb_10_1306.csv"
## [150] "./P232_Universal RBII Concurrent Task_2020_Feb_10_1355.csv"
## [151] "./P233_Universal RBII Concurrent Task_2020_Feb_10_1521.csv"
## [152] "./P234_Universal RBII Concurrent Task_2020_Feb_11_1142.csv"
## [153] "./P235_Universal RBII Concurrent Task_2020_Feb_11_1305.csv"
## [154] "./P236_Universal RBII Concurrent Task_2020_Feb_11_1354.csv"
## [155] "./P237_Universal RBII Concurrent Task_2020_Feb_12_1020.csv"
## [156] "./P238_Universal RBII Concurrent Task_2020_Feb_12_1104.csv"
## [157] "./P239_Universal RBII Concurrent Task_2020_Feb_12_1543.csv"
## [158] "./P24_Universal RBII Concurrent Task_2019_Mar_19_1600.csv" 
## [159] "./P240_Universal RBII Concurrent Task_2020_Feb_13_0904.csv"
## [160] "./P241_Universal RBII Concurrent Task_2020_Feb_13_0951.csv"
## [161] "./P242_Universal RBII Concurrent Task_2020_Feb_13_1035.csv"
## [162] "./P243_Universal RBII Concurrent Task_2020_Feb_13_1122.csv"
## [163] "./P244_Universal RBII Concurrent Task_2020_Feb_13_1207.csv"
## [164] "./P245_Universal RBII Concurrent Task_2020_Feb_13_1248.csv"
## [165] "./P246_Universal RBII Concurrent Task_2020_Feb_13_1339.csv"
## [166] "./P247_Universal RBII Concurrent Task_2020_Feb_14_0908.csv"
## [167] "./P248_Universal RBII Concurrent Task_2020_Feb_14_0952.csv"
## [168] "./P249_Universal RBII Concurrent Task_2020_Feb_14_1117.csv"
## [169] "./P250_Universal RBII Concurrent Task_2020_Feb_17_1008.csv"
## [170] "./P251_Universal RBII Concurrent Task_2020_Feb_17_1217.csv"
## [171] "./P252_Universal RBII Concurrent Task_2020_Feb_17_1449.csv"
## [172] "./P253_Universal RBII Concurrent Task_2020_Feb_18_1353.csv"
## [173] "./P254_Universal RBII Concurrent Task_2020_Feb_20_1018.csv"
## [174] "./P255_Universal RBII Concurrent Task_2020_Feb_24_1212.csv"
## [175] "./P256_Universal RBII Concurrent Task_2020_Feb_24_1253.csv"
## [176] "./P257_Universal RBII Concurrent Task_2020_Feb_24_1413.csv"
## [177] "./P258_Universal RBII Concurrent Task_2020_Feb_25_1037.csv"
## [178] "./P259_Universal RBII Concurrent Task_2020_Feb_27_0910.csv"
## [179] "./P25R_Universal RBII Concurrent Task_2019_Mar_18_1749.csv"
## [180] "./P26_Universal RBII Concurrent Task_2019_Mar_19_1722.csv" 
## [181] "./P260_Universal RBII Concurrent Task_2020_Feb_27_1037.csv"
## [182] "./P261_Universal RBII Concurrent Task_2020_Feb_27_1121.csv"
## [183] "./P262_Universal RBII Concurrent Task_2020_Feb_27_1202.csv"
## [184] "./P263_Universal RBII Concurrent Task_2020_Feb_27_1622.csv"
## [185] "./P264_Universal RBII Concurrent Task_2020_Feb_28_0952.csv"
## [186] "./P265_Universal RBII Concurrent Task_2020_Feb_28_1044.csv"
## [187] "./P266_Universal RBII Concurrent Task_2020_Feb_28_1205.csv"
## [188] "./P267_Universal RBII Concurrent Task_2020_Feb_28_1426.csv"
## [189] "./P268_Universal RBII Concurrent Task_2020_Feb_28_1506.csv"
## [190] "./P269_Universal RBII Concurrent Task_2020_Mar_02_1054.csv"
## [191] "./P27_Universal RBII Concurrent Task_2019_Mar_20_1300.csv" 
## [192] "./P270_Universal RBII Concurrent Task_2020_Mar_02_1135.csv"
## [193] "./P271_Universal RBII Concurrent Task_2020_Mar_02_1208.csv"
## [194] "./P272_Universal RBII Concurrent Task_2020_Mar_02_1253.csv"
## [195] "./P273_Universal RBII Concurrent Task_2020_Mar_02_1353.csv"
## [196] "./P274_Universal RBII Concurrent Task_2020_Mar_03_1305.csv"
## [197] "./P275_Universal RBII Concurrent Task_2020_Mar_04_1122.csv"
## [198] "./P276_Universal RBII Concurrent Task_2020_Mar_04_0954.csv"
## [199] "./P277_Universal RBII Concurrent Task_2020_Mar_05_1205.csv"
## [200] "./P278_Universal RBII Concurrent Task_2020_Mar_06_1616.csv"
## [201] "./P279_Universal RBII Concurrent Task_2020_Mar_06_1643.csv"
## [202] "./P28_Universal RBII Concurrent Task_2019_Mar_20_1335.csv" 
## [203] "./P280_Universal RBII Concurrent Task_2020_Mar_09_1018.csv"
## [204] "./P281_Universal RBII Concurrent Task_2020_Mar_09_1054.csv"
## [205] "./P282_Universal RBII Concurrent Task_2020_Mar_09_1143.csv"
## [206] "./P283_Universal RBII Concurrent Task_2020_Mar_09_1223.csv"
## [207] "./P284_Universal RBII Concurrent Task_2020_Mar_09_1355.csv"
## [208] "./P285_Universal RBII Concurrent Task_2020_Mar_09_1525.csv"
## [209] "./P286_Universal RBII Concurrent Task_2020_Mar_10_1359.csv"
## [210] "./P287_Universal RBII Concurrent Task_2020_Mar_11_0952.csv"
## [211] "./P288_Universal RBII Concurrent Task_2020_Mar_11_1041.csv"
## [212] "./P289_Universal RBII Concurrent Task_2020_Mar_11_1122.csv"
## [213] "./P29_Universal RBII Concurrent Task_2019_Mar_20_1401.csv" 
## [214] "./P290_Universal RBII Concurrent Task_2020_Mar_11_1207.csv"
## [215] "./P291_Universal RBII Concurrent Task_2020_Mar_11_1338.csv"
## [216] "./P292_Universal RBII Concurrent Task_2020_Mar_11_1422.csv"
## [217] "./P293_Universal RBII Concurrent Task_2020_Mar_11_1454.csv"
## [218] "./P294_Universal RBII Concurrent Task_2020_Mar_11_1553.csv"
## [219] "./P295_Universal RBII Concurrent Task_2020_Mar_12_0956.csv"
## [220] "./P296_Universal RBII Concurrent Task_2020_Mar_12_1040.csv"
## [221] "./P297_Universal RBII Concurrent Task_2020_Mar_12_1130.csv"
## [222] "./P298_Universal RBII Concurrent Task_2020_Mar_12_1205.csv"
## [223] "./P299_Universal RBII Concurrent Task_2020_Mar_12_1251.csv"
## [224] "./P3_Universal RBII Concurrent Task_2019_Feb_05_1528.csv"  
## [225] "./P30_Universal RBII Concurrent Task_2019_Mar_20_1401.csv" 
## [226] "./P300_Universal RBII Concurrent Task_2020_Mar_12_1340.csv"
## [227] "./P301_Universal RBII Concurrent Task_2020_Mar_12_1543.csv"
## [228] "./P302_Universal RBII Concurrent Task_2020_Mar_12_1623.csv"
## [229] "./P303_Universal RBII Concurrent Task_2020_Mar_12_1716.csv"
## [230] "./P31_Universal RBII Concurrent Task_2019_Mar_20_1502.csv" 
## [231] "./P32_Universal RBII Concurrent Task_2019_Mar_20_1806.csv" 
## [232] "./P33_Universal RBII Concurrent Task_2019_Mar_21_1537.csv" 
## [233] "./P34_Universal RBII Concurrent Task_2019_Mar_21_1634.csv" 
## [234] "./P35_Universal RBII Concurrent Task_2019_Mar_25_1603.csv" 
## [235] "./P36_Universal RBII Concurrent Task_2019_Mar_26_1704.csv" 
## [236] "./P37_Universal RBII Concurrent Task_2019_Mar_27_1410.csv" 
## [237] "./P38_Universal RBII Concurrent Task_2019_Mar_27_1411.csv" 
## [238] "./P39_Universal RBII Concurrent Task_2019_Mar_27_1508.csv" 
## [239] "./P4_Universal RBII Concurrent Task_2019_Feb_11_1634.csv"  
## [240] "./P40_Universal RBII Concurrent Task_2019_Mar_27_1508.csv" 
## [241] "./P41_Universal RBII Concurrent Task_2019_Mar_28_1535.csv" 
## [242] "./P42_Universal RBII Concurrent Task_2019_Mar_28_1536.csv" 
## [243] "./P43_Universal RBII Concurrent Task_2019_Mar_28_1634.csv" 
## [244] "./P44_Universal RBII Concurrent Task_2019_Mar_28_1636.csv" 
## [245] "./P45_Universal RBII Concurrent Task_2019_Mar_29_1313.csv" 
## [246] "./P47_Universal RBII Concurrent Task_2019_Mar_29_1506.csv" 
## [247] "./P48_Universal RBII Concurrent Task_2019_Apr_01_1426.csv" 
## [248] "./P49_Universal RBII Concurrent Task_2019_Apr_03_1313.csv" 
## [249] "./P5_Universal RBII Concurrent Task_2019_Feb_11_1734.csv"  
## [250] "./P50_Universal RBII Concurrent Task_2019_Apr_03_1408.csv" 
## [251] "./P51_Universal RBII Concurrent Task_2019_Apr_03_1605.csv" 
## [252] "./P52_Universal RBII Concurrent Task_2019_Apr_04_1606.csv" 
## [253] "./P53_Universal RBII Concurrent Task_2019_Apr_04_1509.csv" 
## [254] "./P54_Universal RBII Concurrent Task_2019_Apr_04_1509.csv" 
## [255] "./P55_Universal RBII Concurrent Task_2019_Apr_04_1808.csv" 
## [256] "./P56_Universal RBII Concurrent Task_2019_Apr_05_1311.csv" 
## [257] "./P57_Universal RBII Concurrent Task_2019_Apr_05_1317.csv" 
## [258] "./P58_Universal RBII Concurrent Task_2019_Apr_05_1406.csv" 
## [259] "./P59_Universal RBII Concurrent Task_2019_Apr_05_1407.csv" 
## [260] "./P6_Universal RBII Concurrent Task_2019_Feb_13_1604.csv"  
## [261] "./P60_Universal RBII Concurrent Task_2019_Apr_05_1606.csv" 
## [262] "./P61_Universal RBII Concurrent Task_2019_May_27_0858.csv" 
## [263] "./P62_Universal RBII Concurrent Task_2019_Jun_03_1522.csv" 
## [264] "./P63_Universal RBII Concurrent Task_2019_Jun_04_1206.csv" 
## [265] "./P64_Universal RBII Concurrent Task_2019_Jun_04_1511.csv" 
## [266] "./P65_Universal RBII Concurrent Task_2019_Jun_05_1036.csv" 
## [267] "./P66_Universal RBII Concurrent Task_2019_Jun_05_1106.csv" 
## [268] "./P67_Universal RBII Concurrent Task_2019_Jun_05_1513.csv" 
## [269] "./P68_Universal RBII Concurrent Task_2019_Jun_05_1540.csv" 
## [270] "./P69_Universal RBII Concurrent Task_2019_Jun_06_1208.csv" 
## [271] "./P7_Universal RBII Concurrent Task_2019_Mar_01_1403.csv"  
## [272] "./P70_Universal RBII Concurrent Task_2019_Jun_06_1349.csv" 
## [273] "./P71_Universal RBII Concurrent Task_2019_Jun_07_1006.csv" 
## [274] "./P72_Universal RBII Concurrent Task_2019_Jun_07_1135.csv" 
## [275] "./P73_Universal RBII Concurrent Task_2019_Jun_10_1211.csv" 
## [276] "./P74_Universal RBII Concurrent Task_2019_Jun_10_1337.csv" 
## [277] "./P75_Universal RBII Concurrent Task_2019_Jun_17_0906.csv" 
## [278] "./P76_Universal RBII Concurrent Task_2019_Jun_17_1225.csv" 
## [279] "./P77_Universal RBII Concurrent Task_2019_Jun_17_1309.csv" 
## [280] "./P78_Universal RBII Concurrent Task_2019_Jun_17_1407.csv" 
## [281] "./P79_Universal RBII Concurrent Task_2019_Jun_24_1507.csv" 
## [282] "./P8_Universal RBII Concurrent Task_2019_Mar_02_1430.csv"  
## [283] "./P80_Universal RBII Concurrent Task_2019_Jun_26_1108.csv" 
## [284] "./P81_Universal RBII Concurrent Task_2019_Jul_03_1457.csv" 
## [285] "./P82_Universal RBII Concurrent Task_2019_Jul_04_1308.csv" 
## [286] "./P83 _Universal RBII Concurrent Task_2019_Jul_04_1419.csv"
## [287] "./P84_Universal RBII Concurrent Task_2019_Jul_08_0906.csv" 
## [288] "./P85_Universal RBII Concurrent Task_2019_Jul_08_1004.csv" 
## [289] "./P86_Universal RBII Concurrent Task_2019_Jul_08_1104.csv" 
## [290] "./P87_Universal RBII Concurrent Task_2019_Jul_08_1212.csv" 
## [291] "./P88_Universal RBII Concurrent Task_2019_Jul_08_1306.csv" 
## [292] "./P89_Universal RBII Concurrent Task_2019_Jul_08_1439.csv" 
## [293] "./P9_Universal RBII Concurrent Task_2019_Mar_04_1705.csv"  
## [294] "./P90_Universal RBII Concurrent Task_2019_Jul_09_0905.csv" 
## [295] "./P91_Universal RBII Concurrent Task_2019_Jul_09_1007.csv" 
## [296] "./P92_Universal RBII Concurrent Task_2019_Jul_09_1106.csv" 
## [297] "./P93_Universal RBII Concurrent Task_2019_Jul_09_1304.csv" 
## [298] "./P94_Universal RBII Concurrent Task_2019_Jul_09_1444.csv" 
## [299] "./P95_Universal RBII Concurrent Task_2019_Jul_10_1304.csv" 
## [300] "./P96_Universal RBII Concurrent Task_2019_Jul_10_1504.csv" 
## [301] "./P97_Universal RBII Concurrent Task_2019_Jul_10_1742.csv" 
## [302] "./P98_Universal RBII Concurrent Task_2019_Jul_11_0936.csv" 
## [303] "./P99_Universal RBII Concurrent Task_2019_Jul_11_1107.csv"
#Now we'll create the dataframe
csv_data = ldply(csv, read_csv)

df <- csv_data[,c('Participant',"selectCondition", "stimfile", 'Gender', 'Age','Field of Study', "Letters_RB_Trials.thisRepN", "Letters_II_Trials.thisRepN", "Tapping_RB_Trials.thisRepN", "Tapping_II_Trials.thisRepN", "Image_Response_B1.rt", "Image_Response_B1.corr", "Image_Response_B2.rt", "Image_Response_B2.corr", "Image_Response_B3.rt", "Image_Response_B3.corr", "Image_Response_B4.rt","Image_Response_B4.corr")]

#Collapse Accuracy and Reaction Time across conditions and combine them into single corr and rt columns
df$corr<-ifelse(df$selectCondition=='1', df$Image_Response_B1.corr, NA)
df$rt<-ifelse(df$selectCondition=='1', df$Image_Response_B1.rt, NA)
df$corr<-ifelse(df$selectCondition=='2', df$Image_Response_B2.corr, df$corr)
df$rt<-ifelse(df$selectCondition=='2', df$Image_Response_B2.rt, df$rt)
df$corr<-ifelse(df$selectCondition=='3', df$Image_Response_B3.corr, df$corr)
df$rt<-ifelse(df$selectCondition=='3', df$Image_Response_B3.rt, df$rt)
df$corr<-ifelse(df$selectCondition=='4', df$Image_Response_B4.corr, df$corr)
df$rt<-ifelse(df$selectCondition=='4', df$Image_Response_B4.rt, df$rt)
df$corr<-ifelse(df$selectCondition=='5', df$Image_Response_B1.corr, df$corr)
df$rt<-ifelse(df$selectCondition=='5', df$Image_Response_B1.rt, df$rt)
df$corr<-ifelse(df$selectCondition=='6', df$Image_Response_B2.corr, df$corr)
df$rt<-ifelse(df$selectCondition=='6', df$Image_Response_B2.rt, df$rt)
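#As an aside, the same mapping can be written more compactly with dplyr::case_when.
#The sketch below rebuilds the accuracy column under a throwaway name (corr_chk) and
#checks that it matches the ifelse version above; the same idea applies to rt and Block.
corr_chk <- with(df, case_when(
  selectCondition %in% c('1', '5') ~ Image_Response_B1.corr,
  selectCondition %in% c('2', '6') ~ Image_Response_B2.corr,
  selectCondition == '3' ~ Image_Response_B3.corr,
  selectCondition == '4' ~ Image_Response_B4.corr))
all.equal(corr_chk, df$corr)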

#Now create the new dataframe

df <- df[,c('Participant',"selectCondition", "Letters_RB_Trials.thisRepN", "Letters_II_Trials.thisRepN", "Tapping_RB_Trials.thisRepN", "Tapping_II_Trials.thisRepN", "stimfile", 'Gender', 'Age','Field of Study', "corr", "rt")]

#View(df)
#Set the block number for each trial, using the trial counter that matches each condition.

df$Block <-ifelse(df$selectCondition=='1', df$Letters_RB_Trials.thisRepN, NA)
df$Block <-ifelse(df$selectCondition=='2', df$Letters_II_Trials.thisRepN, df$Block)
df$Block <-ifelse(df$selectCondition=='3', df$Tapping_RB_Trials.thisRepN, df$Block)
df$Block <-ifelse(df$selectCondition=='4', df$Tapping_II_Trials.thisRepN, df$Block)
df$Block <-ifelse(df$selectCondition=='5', df$Letters_RB_Trials.thisRepN, df$Block)
df$Block <-ifelse(df$selectCondition=='6', df$Letters_II_Trials.thisRepN, df$Block)

df <- df[,c('Participant',"selectCondition", "Block", "stimfile", 'Gender', 'Age','Field of Study', "corr", "rt")]
#Change the name of selectionCondition to Condition
colnames(df)[colnames(df)=="selectCondition"] <- "Condition"

#Change the name of stimfile to Stimulus
colnames(df)[colnames(df)=="stimfile"] <- "Stimulus"

#Change the name of corr to Correct
colnames(df)[colnames(df)=="corr"] <- "Correct"

#Change the name of rt to ResponseTime
colnames(df)[colnames(df)=="rt"] <- "ResponseTime"
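#The four renames above could also be done in one step with dplyr::rename (shown
#commented out as an alternative sketch; new name on the left, old name on the right):
#df <- dplyr::rename(df,
#                    Condition = selectCondition,
#                    Stimulus = stimfile,
#                    Correct = corr,
#                    ResponseTime = rt)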

#Subset the data to drop participants who were removed. DFC = Data Frame Cleaned
dfc = subset(df, df$Participant != "P1") 

dfc = subset(dfc, dfc$Participant != "P39")

dfc = subset(dfc, dfc$Participant != "P46")

dfc = subset(dfc, dfc$Participant != "P51")

dfc = subset(dfc, dfc$Participant != "P62")

dfc = subset(dfc, dfc$Participant != "P101")

dfc = subset(dfc, dfc$Participant != "P102")

dfc = subset(dfc, dfc$Participant != "P135")

#View(dfc)
#Because we want to run a factorial analysis, we need to specify the IVs in our data frame. These are Category Set and Concurrent Task.

#We'll start by creating two columns within the existing data frame, one for each variable.
#Category Set:
dfc$Category = ifelse(dfc$Condition == '1', "RD", NA)
dfc$Category = ifelse(dfc$Condition == '2', "II", dfc$Category)
dfc$Category = ifelse(dfc$Condition == '3', "RD", dfc$Category)
dfc$Category = ifelse(dfc$Condition == '4', "II", dfc$Category)
dfc$Category = ifelse(dfc$Condition == '5', "RD", dfc$Category)
dfc$Category = ifelse(dfc$Condition == '6', "II", dfc$Category)
                  

#Concurrent Task:
dfc$Concurrent = ifelse(dfc$Condition == '1', "Letters", NA)
dfc$Concurrent = ifelse(dfc$Condition == '2', "Letters", dfc$Concurrent)
dfc$Concurrent = ifelse(dfc$Condition == '3', "Tapping", dfc$Concurrent)
dfc$Concurrent = ifelse(dfc$Condition == '4', "Tapping", dfc$Concurrent)
dfc$Concurrent = ifelse(dfc$Condition == '5', "Control", dfc$Concurrent)
dfc$Concurrent = ifelse(dfc$Condition == '6', "Control", dfc$Concurrent)
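#The same mapping can be restated more compactly: odd-numbered conditions are RD and
#even-numbered conditions are II, while conditions 1-2, 3-4, and 5-6 correspond to the
#Letters, Tapping, and Control concurrent tasks. The two statements below are an
#equivalent restatement of the ifelse chains above.
dfc$Category <- ifelse(dfc$Condition %in% c('1', '3', '5'), "RD",
                ifelse(dfc$Condition %in% c('2', '4', '6'), "II", NA))
dfc$Concurrent <- ifelse(dfc$Condition %in% c('1', '2'), "Letters",
                  ifelse(dfc$Condition %in% c('3', '4'), "Tapping",
                  ifelse(dfc$Condition %in% c('5', '6'), "Control", NA)))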


dfc <- dfc[,c('Participant', "Category", "Concurrent", "Block", "Stimulus", 'Gender', 'Age', 'Field of Study', "Correct", "ResponseTime")]
View(dfc)
save(dfc, file = "coginterference.RData")

Data Analysis

So now what? We’ve got a good-looking data set here, and that’s something to admire. But it isn’t much good if we don’t know what the data mean, so let’s run some analyses. The most obvious one is a 2 x 3 analysis of variance, so let’s start there. Much of the following code was adapted from Emily G. Nielsen (https://rpubs.com/egnielsen/challenge_category_learning).

The first set of analyses involves calculating descriptive statistics and producing some basic figures. For these purposes, we will focus primarily on the following variables:

1. Accuracy - dependent variable: the percentage of images a participant categorized correctly.
2. Response Time - dependent variable: how quickly participants responded to images within each block.
3. Category - independent variable: which category set participants learned, either an RD or an II set.
4. Concurrent - independent variable: which concurrent task participants were instructed to complete - concurrent letters, concurrent tapping, or a no-task control.
5. Block - independent variable: which block of the task is being completed. Block 1 comprised images 1-80, Block 2 images 81-160, Block 3 images 161-240, and Block 4 images 241-320.

Basic Descriptive Statistics

Before we can begin, we will need to make a new data frame in which each participant has one mean accuracy score and one mean response time for each block. As it stands, every individual trial response is recorded, so we need to aggregate to participant-by-block means.

setDT(dfc)[, Accuracy := mean(Correct), by = "Participant,Block"]
setDT(dfc)[, RT := mean(ResponseTime), by = "Participant,Block"]
#x = dfc[dfc$Stimulus == 'ASD38.jpg' & dfc$Block == 0,]
df = data.frame(dfc$Participant, dfc$Category, dfc$Concurrent, dfc$Block, dfc$Accuracy, dfc$RT)
colnames(df) = c("Participant", "Category", "Concurrent", "Block", "Accuracy", "RT")

df = df %>% distinct(Participant, Block, .keep_all = TRUE)
View(df)

This new data frame (df) contains one row per participant per block, holding the mean of that participant’s accuracy and response time at that block. Note that blocks are coded from zero: Block 1 = 0, Block 2 = 1, Block 3 = 2, and Block 4 = 3.
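For readers more comfortable with dplyr than data.table, the same participant-by-block aggregation can be sketched as follows. This is only a cross-check of the approach above; df_check is a throwaway name, and the data.table version remains the working one.

df_check <- dfc %>%
  group_by(Participant, Category, Concurrent, Block) %>%
  summarise(Accuracy = mean(Correct), RT = mean(ResponseTime)) %>%
  ungroup()

nrow(df_check) == nrow(df)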

We’ll start by calculating some basic descriptive statistics (ns, Ms, SDs, SEs, and 95% CIs) for the DV across the levels of each IV.

Category Set

#Calculate summary statistics:

CatDesc = summarySE(data = df, measurevar = "Accuracy", groupvars = "Category", conf.interval = 0.95)

# Create a table to display the results:
kable(CatDesc, digits = 4,
      caption = "Table 1. Descriptives by category set.",
      col.names = c("Category Set", "n", "M", "SD", "SE", "CI"),
      align = 'c') %>%
  kable_styling(bootstrap_options = 
                  c("hover", "responsive", "striped"),
                full_width = F, position = "center")
Table 1. Descriptives by category set.
Category Set n M SD SE CI
II 580 0.6872 0.1152 0.0048 0.0094
RD 596 0.6096 0.1392 0.0057 0.0112

Concurrent Task

#Calculate summary statistics:

CatDesc = summarySE(data = df, measurevar = "Accuracy", groupvars = "Concurrent", conf.interval = 0.95)

# Create a table to display the results:
kable(CatDesc, digits = 4,
      caption = "Table 2. Descriptives by concurrent task.",
      col.names = c("Concurrent Task", "n", "M", "SD", "SE", "CI"),
      align = 'c') %>%
  kable_styling(bootstrap_options = 
                  c("hover", "responsive", "striped"),
                full_width = F, position = "center")
Table 2. Descriptives by concurrent task.
Concurrent Task n M SD SE CI
Control 384 0.6756 0.1330 0.0068 0.0133
Letters 392 0.6295 0.1331 0.0067 0.0132
Tapping 400 0.6393 0.1307 0.0065 0.0128

Complex Descriptive Statistics

Next we’ll calculate descriptive statistics for the DV across levels of the Concurrent Task and Block crossed with the Category set variable.

Category Set by Block and Concurrent Task

CCDescsAcc = summarySE(data = df, measurevar = "Accuracy",
                    groupvars = c("Category", "Concurrent", "Block"), conf.interval = .95)

# Create a table to display the results:

kable(CCDescsAcc, digits = 4,
      caption = "Table 3. Descriptives by category set and concurrent task.",
      col.names = c("Category Set", "Concurrent Task", "Block","n", "M","SD", "SE", "CI"),
      align = 'c') %>%
  kable_styling(bootstrap_options =
                  c("hover", "responsive", "striped"),
                full_width = F, position = "center")
Table 3. Descriptives by category set and concurrent task.
Category Set Concurrent Task Block n M SD SE CI
II Control 0 47 0.6670 0.0901 0.0131 0.0265
II Control 1 47 0.7271 0.0964 0.0141 0.0283
II Control 2 47 0.7247 0.1231 0.0180 0.0361
II Control 3 47 0.7569 0.0995 0.0145 0.0292
II Letters 0 49 0.6362 0.0959 0.0137 0.0275
II Letters 1 49 0.6556 0.1223 0.0175 0.0351
II Letters 2 49 0.6936 0.1134 0.0162 0.0326
II Letters 3 49 0.7092 0.1232 0.0176 0.0354
II Tapping 0 49 0.6161 0.0916 0.0131 0.0263
II Tapping 1 49 0.6770 0.0951 0.0136 0.0273
II Tapping 2 49 0.6895 0.1290 0.0184 0.0371
II Tapping 3 49 0.6990 0.1235 0.0176 0.0355
RD Control 0 49 0.5755 0.1044 0.0149 0.0300
RD Control 1 49 0.6408 0.1396 0.0199 0.0401
RD Control 2 49 0.6579 0.1547 0.0221 0.0444
RD Control 3 49 0.6617 0.1506 0.0215 0.0433
RD Letters 0 49 0.5250 0.0735 0.0105 0.0211
RD Letters 1 49 0.5824 0.1347 0.0192 0.0387
RD Letters 2 49 0.6041 0.1356 0.0194 0.0389
RD Letters 3 49 0.6301 0.1570 0.0224 0.0451
RD Tapping 0 51 0.5436 0.0901 0.0126 0.0253
RD Tapping 1 51 0.6143 0.1486 0.0208 0.0418
RD Tapping 2 51 0.6235 0.1358 0.0190 0.0382
RD Tapping 3 51 0.6564 0.1477 0.0207 0.0415

Plot

Let’s plot this to see what the pattern looks like.

library(directlabels)
acgraph = ggplot(CCDescsAcc, aes(x = Block, y = Accuracy,
                                 group = interaction(Category, Concurrent),
                                 colour = Category)) +
  geom_line(aes(colour = Concurrent, group = interaction(Category, Concurrent)),
            position = position_dodge(width = .4)) +
  geom_point(aes(colour = Concurrent), position = position_dodge(width = .4)) +
  geom_errorbar(aes(ymin = Accuracy - se, ymax = Accuracy + se),
                width = 0.1, position = position_dodge(width = .4)) +
  ggtitle("Accuracy by Block") +
  scale_fill_discrete(name = "Category") +
  scale_fill_brewer(palette = "Paired") +
  geom_dl(aes(label = Concurrent), method = list("last.points", cex = 0.6, hjust = -0.5)) +
  theme_classic()
  
  #Now we'll tell R to break the graphs down by Category Type, which we specified earlier.

acgraph = acgraph + facet_wrap(~ Category) +
  theme(legend.position = "none") +
  expand_limits(x = 5.5)
  
  #breaks = c("II.Control", "II.Tapping", "II.Letters", "RB.Control", "RB.Tapping", "RB.Letters"),
  #labels = c("Information Integration Verbal Concurrent", "Information Integration Motor Concurrent", "Information Integration Control", "Rule-Described Verbal Concurrent","Rule-Described Motor Concurrent", "Rule-Described Control")

print(acgraph)

ggsave("acgraph.pdf")
## Saving 7 x 5 in image

Analysis

Analysis Prep

The following libraries will be used for this analysis:

# For running Levene's Test:

#install.packages("car")
library(car)
## Loading required package: carData
## 
## Attaching package: 'car'
## The following object is masked from 'package:dplyr':
## 
##     recode
#For performing ANOVAs:

#install.packages("ez")
library(ez)

# For conducting Games-Howell post-hocs:

#install.packages("userfriendlyscience")
library(userfriendlyscience)
## Warning: package 'userfriendlyscience' was built under R version 3.5.3
## 
## Attaching package: 'userfriendlyscience'
## The following object is masked from 'package:Hmisc':
## 
##     escapeRegex
## The following object is masked from 'package:lattice':
## 
##     oneway

Options

ezANOVA prints output using scientific notation. To make our ANOVA outputs easier to read, we’ll turn scientific notation off.

options(scipen = 999)

We’ll also create a function to assess and print p values in the comments of our script. If p >= .005, the function will display “p =” and the value rounded to two decimal places. If .0005 <= p < .005, the function will display “p =” and the value rounded to three decimal places. If p < .0005, the function will display “p < .001.”

p_round <- function(x){
  if(x > .005)
    {x1 = (paste("= ", round(x, digits = 2), sep = ''))
  }  
  else if(x == .005){x1 = (paste("= .01"))
  }
  else if(x > .0005 & x < .005)
    {x1 = (paste("= ", round(x, digits = 3), sep = ''))
  }  
  else if(x == .0005){x1 = (paste("= .001"))
  }
  else{x1 = (paste("< .001"))
  } 
  (x1)
}
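A quick check of the function on a few made-up p values:

p_round(0.2337)   # returns "= 0.23"
p_round(0.0012)   # returns "= 0.001"
p_round(0.00004)  # returns "< .001"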

In some cases, we will have to use adjusted dfs and/or perform White-adjusted ANOVAs. In these cases, we will have to calculate adjusted effect sizes. Partial eta squared can be calculated as DFn*F / (DFn*F + DFd), which the following function implements:

peta <- function(dfn, dfd, f) {
  return(dfn * f / ((dfn * f) + dfd))
}
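As a quick illustration with made-up numbers, an effect of F(2, 100) = 3.5 would have a partial eta squared of about .065:

peta(dfn = 2, dfd = 100, f = 3.5)  # ~0.065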

We’ll also create a function to help round some of our post-hoc results. (Neither the kable rounding function nor the standard “round” function will work for some of our post-hoc results tables.) The code below was taken from Akhmed (2015).

round_df <- function(df, digits) {
  nums <- vapply(df, is.numeric, FUN.VALUE = logical(1))

  df[,nums] <- round(df[,nums], digits = digits)

  (df)
}

Assumptions

1. Independent Random Sampling

This assumption was met during testing.

2. Normality

This assumption can be tested using a Shapiro-Wilk test.

CCB_Shap = shapiro.test(CCDescsAcc$Accuracy)
CCB_Shap
## 
##  Shapiro-Wilk normality test
## 
## data:  CCDescsAcc$Accuracy
## W = 0.98662, p-value = 0.9811

Based on an alpha level of 0.05, the assumption of normality was met; W = 0.98662, p = 0.9811.

3. Homogeneity of Variance

This assumption can be tested using a Levene’s test taken from the ANOVA output. However, it cannot currently be computed with the code as presented, because the group ns are not equal.

# Run the ANOVA

#CCB_ANOVA = ezANOVA(data = df, dv = .(Accuracy),
#                    wid = .(Participant), 
#                    between = .(Category, Concurrent),
#                    within = .(Block),
#                    detailed = TRUE, type = "III",
#                    white.adjust = TRUE, return_aov = TRUE)

# Extract the Levene's test from the ANOVA output:
#CCB_Lev = CCB_ANOVA$`Levene's Test for Homogeneity of Variance`

# Create a table to display the results:

#kable(CCB_Lev, digits = 4,
#      caption = "Table 4. Category set by Concurrent Task Levene's test.",
#      col.names = c("DFn", "DFd", "SSn", "SSd","F", "p", "sig"),
#      align = 'c') %>%
#  kable_styling(bootstrap_options =
#                  c("hover", "responsive", "striped"),
#                full_width = F, position = "center")
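If a standalone check is wanted in the meantime, car::leveneTest does not require equal group sizes. A minimal sketch, treating each participant-by-block mean in df as one observation and crossing the two between-subjects factors:

# Levene's test of Accuracy across the six Category x Concurrent cells
leveneTest(Accuracy ~ Category * Concurrent, data = df)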

Rule-Described Analysis of Variance

dfRD = subset(df, Category == "RD")

dfRD
CBRD_ANOVA = ezANOVA(data = dfRD, 
                    dv = .(Accuracy),
                    wid = .(Participant), 
                    between = .(Concurrent),
                    within = .(Block),
                    detailed = TRUE, type = "III", 
                    return_aov = TRUE)
## Warning: You have removed one or more Ss from the analysis. Refactoring
## "Participant" for ANOVA.
## Warning: Data is unbalanced (unequal N per group). Make sure you specified
## a well-considered value for the type argument to ezANOVA().
kable(CBRD_ANOVA$ANOVA, digits = 4,
      caption = "Table 4. Rule-Described by Concurrent Task ANOVA.",
      col.names = c("Effect", "DFn", "DFd", "SSn", "SSd", "F",
                    "p", "sig", "Effect Size"),
      align = 'c') %>%
  kable_styling(bootstrap_options =
                  c("hover", "responsive", "striped"),
                full_width = F, position = "center")
Table 4. Rule-Described by Concurrent Task ANOVA.
Effect DFn DFd SSn SSd F p sig Effect Size
(Intercept) 1 146 221.4153 7.3359 4406.6543 0.0000 * 0.9551
Concurrent 2 146 0.2314 7.3359 2.3032 0.1036 0.0217
Block 3 438 0.8548 3.0801 40.5202 0.0000 * 0.0758
Concurrent:Block 6 438 0.0174 3.0801 0.4130 0.8704 0.0017

Information Integration Analysis of Variance

dfII = subset(df, Category == "II")

dfII
CBII_ANOVA = ezANOVA(data = dfII, 
                    dv = .(Accuracy),
                    wid = .(Participant), 
                    between = .(Concurrent),
                    within = .(Block),
                    detailed = TRUE, type = "III", 
                    return_aov = TRUE)
## Warning: You have removed one or more Ss from the analysis. Refactoring
## "Participant" for ANOVA.
## Warning: Data is unbalanced (unequal N per group). Make sure you specified
## a well-considered value for the type argument to ezANOVA().
kable(CBII_ANOVA$ANOVA, digits = 4,
      caption = "Table 5. Information Integration by Concurrent Task ANOVA.",
      col.names = c("Effect", "DFn", "DFd", "SSn", "SSd", "F",
                    "p", "sig", "Effect Size"),
      align = 'c') %>%
  kable_styling(bootstrap_options =
                  c("hover", "responsive", "striped"),
                full_width = F, position = "center")
Table 5. Information Integration by Concurrent Task ANOVA.
Effect DFn DFd SSn SSd F p sig Effect Size
(Intercept) 1 142 274.1726 5.0532 7704.5801 0.0000 * 0.9757
Concurrent 2 142 0.2807 5.0532 3.9440 0.0215 * 0.0395
Block 3 426 0.5329 1.7736 42.6666 0.0000 * 0.0724
Concurrent:Block 6 426 0.0375 1.7736 1.5026 0.1756 0.0055

Post-Hoc Tests

We appear to have a main effect of Category (accuracy is higher overall for the II set than the RD set) and a main effect of Block in both analyses. The Concurrent task effect reaches significance for the II set and is approaching significance for the RD set; this may develop further as data collection continues. I do not expect any other effects to emerge.

Rule-Described Concurrent Task

RDConPairwise = pairwise.t.test(dfRD$Accuracy, dfRD$Concurrent, p.adj = "bonferroni")

RD_Con_Pair_Table = data.frame(round(RDConPairwise$p.value,4))

RD_Con_Pair_TableB = RD_Con_Pair_Table %>%
  mutate(
    Control = text_spec(Control, bold = (ifelse(Control < .05, "TRUE", "FALSE"))),
    Letters = text_spec(Letters, bold = (ifelse(Letters < .05, "TRUE", "FALSE"))),
  )

RD_Con_Pair_TableL = data.frame("Comparison" = c("Letters", "Tapping"))

RD_Con_Pair_TableB = cbind(RD_Con_Pair_TableL, RD_Con_Pair_TableB)

kable(RD_Con_Pair_TableB, digits = 4,
      caption = "Table 6. Post-hoc Test of Rule-Described Accuracy Across Concurrent Task.",

      align = 'c', escape = FALSE) %>%
  kable_styling(bootstrap_options =
                  c("hover", "responsive", "striped"),
                full_width = F, position = "center")
Table 6. Post-hoc Test of Rule-Described Accuracy Across Concurrent Task.
Comparison Control Letters
Letters 0.0016 NA
Tapping 0.2282 0.2449

Information Integration Concurrent Task

IIConPairwise = pairwise.t.test(dfII$Accuracy, dfII$Concurrent, p.adj = "bonferroni")

II_Con_Pair_Table = data.frame(round(IIConPairwise$p.value,4))

II_Con_Pair_TableB = II_Con_Pair_Table %>%
  mutate(
    Control = text_spec(Control, bold = (ifelse(Control < .05, "TRUE", "FALSE"))),
    Letters = text_spec(Letters, bold = (ifelse(Letters < .05, "TRUE", "FALSE"))),
  )
    
II_Con_Pair_TableL = data.frame("Comparison" = c("Letters", "Tapping"))

II_Con_Pair_TableB = cbind(II_Con_Pair_TableL, II_Con_Pair_TableB)

kable(II_Con_Pair_TableB, digits = 4,
      caption = "Table 7. Post-hoc Test of Information Integration Accuracy Across Concurrent Task.",

      align = 'c', escape = FALSE) %>%
  kable_styling(bootstrap_options =
                  c("hover", "responsive", "striped"),
                full_width = F, position = "center")
Table 7. Post-hoc Test of Information Integration Accuracy Across Concurrent Task.
Comparison Control Letters
Letters 0.0003 NA
Tapping 0.0001 1
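Finally, the userfriendlyscience package was loaded above for Games-Howell post-hocs but has not been used yet. A sketch of how it could be applied to the concurrent-task effect in the II data; Games-Howell does not assume equal variances or equal group sizes:

# Games-Howell comparisons of II accuracy across the three concurrent-task groups
posthocTGH(y = dfII$Accuracy, x = factor(dfII$Concurrent), method = "games-howell")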