articleID <- "EXT_28-06-2015" # insert the article ID code here e.g., "10-3-2015_PS"
reportType <- 'pilot' # specify whether this is the 'pilot' report or 'final' report
pilotNames <- "Mark Miller" # insert the pilot's name here e.g., "Tom Hardwicke". If there are multiple cpilots enter both names in a character string e.g., "Tom Hardwicke, Bob Dylan"
copilotNames <- NA # # insert the co-pilot's name here e.g., "Michael Frank". If there are multiple co-pilots enter both names in a character string e.g., "Tom Hardwicke, Bob Dylan"
pilotTTC <- 500 # insert the pilot's estimated time to complete (in minutes, fine to approximate) e.g., 120
copilotTTC <- NA # insert the co-pilot's estimated time to complete (in minutes, fine to approximate) e.g., 120
pilotStartDate <- as.Date("10/01/18", format = "%m/%d/%y") # insert the pilot's start date in US format e.g., as.Date("01/25/18", format = "%m/%d/%y")
copilotStartDate <- NA # insert the co-pilot's start date in US format e.g., as.Date("01/25/18", format = "%m/%d/%y")
completionDate <- NA # copilot insert the date of final report completion (after any necessary rounds of author assistance) in US format e.g., as.Date("01/25/18", format = "%m/%d/%y")
In this paper, Zerr et al. investigate individual differences in adult learning. In Study 1, N = 281 participants recruited via Mechanical Turk learned Lithuanian-English word pairs. The learning task was completed twice, on two separate days. Each day consisted of up to 16 test trials, and each trial included only the words the participant had not yet answered correctly (i.e., once a word was answered correctly, it dropped from the list). Once the participant had answered all 45 words correctly, two distractor tasks were given: math problems and Tetris. At the end of each day's session, the participant was tested on all 45 words, and the number of words correct was recorded.
For this article you should focus on the findings reported in the results section for Study 1 “Performance across days”.
Specifically, you should attempt to reproduce all descriptive and inferential analyses reported in the text below and associated tables/figures:
Performance across days. The learning curves for participants on the first day can be found in Figure 2 (top panel). Here one can see considerable variability in performance in all three measures (Test 1, tests to criterion, and final test) across participants, as well as across quartiles when binned by overall task performance (Fig. 2, bottom panel).
For Test 1, after studying the items once, participants on Day 1 recalled an average of 9.4 English words (SD = 6.6) and on Day 2 an average of 11.1 words (SD = 8.2). To reach criterion, participants took an average of 8.3 tests (SD = 2.9) on Day 1 and 7.6 tests (SD = 2.8) on Day 2. The average cued-recall score on the final test was 33.4 words (SD = 7.9) on Day 1 and 33.2 words (SD = 8.7) on Day 2. For Day 1, the entire task (including informed consent, directions, and the 5-min delay) took an average of 50.3 min to complete (SD = 13.2, range = 28.8–119.0), whereas Day 2 took an average of 45.7 min (SD = 14.0, range = 26.6–115.3). Additional descriptive statistics are in Table 1.
Participants who performed better on the initial test reached criterion more quickly (i.e., required fewer tests to criterion) on Day 1, r = −.60, p < .001, 95% confidence interval (CI) = [−.67, −.52], and Day 2, r = −.63, p < .001, 95% CI = [−.69, −.55]. Participants who reached criterion quickly also had better retention of the word pairs after a delay (i.e., better final-test scores) on Day 1, r = −.57, p < .001, 95% CI = [−.64, −.49], and Day 2, r = −.48, p < .001, 95% CI = [−.56, −.38]. People who performed better on the initial test also remembered more on the final test on Day 1, r = .26, p < .001, 95% CI = [.15, .37], and Day 2, r = .18, p = .002, 95% CI = [.07, .29]. As a result of the strong intercorrelations among the dependent measures (initial test, speed of learning, and long-term retention), we refer to the task as the learning-efficiency task from here forward.
Performance on the learning-efficiency task significantly correlated across days for participants (Table 2), including scores on Test 1, r = .56, p < .001, 95% CI = [.47, .63], tests to criterion, r = .68, p < .001, 95% CI = [.61, .73], and final test, r = .68, p < .001, 95% CI = [.61, .74]. When the three individual measures were converted to z scores and combined into a single metric (learning-efficiency score, which is a composite of initial test, learning speed, and final retention), the correlation across days was also high, r = .68, p < .001, 95% CI = [.61, .74].
# load packages
library(tidyverse) # for data munging
library(knitr) # for kable table formatting
library(haven) # import and export 'SPSS', 'Stata' and 'SAS' Files
library(readxl) # import excel files
library(CARPSreports) # custom report functions
library(readr)
# Prepare report object. This will be updated automatically by the reproCheck function each time values are compared
reportObject <- data.frame(dummyRow = TRUE, reportedValue = NA, obtainedValue = NA, valueType = NA, percentageError = NA, comparisonOutcome = NA, eyeballCheck = NA)
study1_raw_df <- read_csv("data/Study1_LET_N=281_Dataset_osf.csv")
The data are best broken down into three data frames: one for the learning-efficiency scores, one for the experiment durations, and one for the scores over time.
lescore_names <- c("INITIALTESTSCORE","TESTSTOCRITERION","FINALTESTSCORE")
study1_lescores <- study1_raw_df %>%
# Manually list the relevant columns.
select(
Subject,
INITIALTESTSCORE_S1, TESTSTOCRITERION_S1, FINALTESTSCORE_S1, LE_Score_S1,
INITIALTESTSCORE_S2, TESTSTOCRITERION_S2, FINALTESTSCORE_S2, LE_Score_S2
) %>%
# One observation per row...
gather(key="RawColumn", value="Score", -c("Subject")) %>%
# One variable per column...
extract(RawColumn, c("Measure", "Day"), regex="(.*)_(S1|S2)") %>%
# Relevel Measure so the rows appear in the same order as Table 1 (initial test, tests to criterion, final test)
mutate(Measure=fct_relevel(Measure, lescore_names))
study1_experiment_duration <- study1_raw_df %>%
# List the relevant columns
select(Subject, starts_with("Experiment_Length")) %>%
# One observation per row
gather(key="RawColumn", value="Duration", -c("Subject")) %>%
# Extract the session label (A or B) from the column name; A corresponds to Day 1 and B to Day 2
mutate(Day=str_extract(RawColumn, "A|B")) %>%
select(-c("RawColumn"))
study1_words_over_time <- study1_raw_df %>%
# List the relevant columns. Here, it's all Day 1's tests,
# and then the Learning Efficiency score.
select(Subject, INITIALTESTSCORE_S1, matches("TEST\\d+_S1"),
FINALTESTSCORE_S1, LE_Score_S1) %>%
# Rename columns to match the TEST<n>_S1 pattern used by the regex below.
# The initial test becomes test 1 and the final test becomes test 17, so the final test plots to the right of all learning tests.
rename(TEST1_S1 = INITIALTESTSCORE_S1, TEST17_S1 = FINALTESTSCORE_S1) %>%
# Convert the Learning Efficiency score to a quartile for grouping on the plot.
mutate(Quartile = ntile(LE_Score_S1, 4)) %>%
select(-c("LE_Score_S1")) %>%
# One observation per row...
gather(key="RawColumn", value="Score", -c("Subject", "Quartile")) %>%
# Extract test index from the old column name
extract("RawColumn", c("Test"), regex="TEST(\\d+)_S1") %>%
mutate(Test=as.numeric(Test)) %>%
# If a score is missing, assume the participant had already reached criterion (all 45 correct)
replace_na(list(Score=45))
For Test 1, after studying the items once, participants on Day 1 recalled an average of 9.4 English words (SD = 6.6) and on Day 2 an average of 11.1 words (SD = 8.2).
To reach criterion, participants took an average of 8.3 tests (SD = 2.9) on Day 1 and 7.6 tests (SD = 2.8) on Day 2.
The average cued-recall score on the final test was 33.4 words (SD = 7.9) on Day 1 and 33.2 words (SD = 8.7) on Day 2.
…
Additional descriptive statistics are in Table 1.
table1_calculated <- study1_lescores %>%
filter(Measure!="LE_Score") %>%
group_by(Measure, Day) %>%
summarize(
M=mean(Score),
SD=sd(Score),
Minimum=quantile(Score)[1],
"Lower 25%"=quantile(Score)[2],
Mdn=quantile(Score)[3],
"Upper 25%"=quantile(Score)[4],
Maximum=quantile(Score)[5]
) %>%
ungroup()
kable(table1_calculated, digits=1)
| Measure | Day | M | SD | Minimum | Lower 25% | Mdn | Upper 25% | Maximum |
|---|---|---|---|---|---|---|---|---|
| INITIALTESTSCORE | S1 | 9.4 | 6.6 | 0 | 5 | 8 | 12 | 42 |
| INITIALTESTSCORE | S2 | 11.1 | 8.2 | 0 | 5 | 9 | 15 | 42 |
| TESTSTOCRITERION | S1 | 8.3 | 2.8 | 2 | 6 | 8 | 10 | 16 |
| TESTSTOCRITERION | S2 | 7.6 | 2.8 | 2 | 6 | 7 | 9 | 16 |
| FINALTESTSCORE | S1 | 33.3 | 7.9 | 4 | 29 | 34 | 40 | 45 |
| FINALTESTSCORE | S2 | 33.2 | 8.7 | 2 | 28 | 35 | 40 | 45 |
Reprinted for comparison, Table 1 is below. Bold numbers indicate difference.
table1_reported <- read_delim("data/table1.csv", "|",
escape_double = FALSE, trim_ws = TRUE, col_types=cols(.default = "c"))
kable(table1_reported)
| Measure | Day | M | SD | Minimum | Lower 25% | Mdn | Upper 25% | Maximum |
|---|---|---|---|---|---|---|---|---|
| Test 1 Score | Day 1 | 9.4 | 6.6 | 0 | 5 | 8 | 12 | 42 |
| Test 1 Score | Day 2 | 11.1 | 8.2 | 0 | 5 | 9 | 15 | 42 |
| Tests-to-criterion score | Day 1 | 8.3 | 2.9 | 2 | 6 | 8 | 10 | 16 |
| Tests-to-criterion score | Day 2 | 7.6 | 2.8 | 2 | 6 | 7 | 9 | 16 |
| Final-test score | Day 1 | 33.4 | 7.9 | 4 | 29 | 34 | 40 | 45 |
| Final-test score | Day 2 | 33.2 | 8.7 | 2 | 28 | 35 | 40 | 45 |
reported_measure_names <- c("Test 1 Score", "Tests-to-criterion score", "Final-test score")
table1_calculated_long <- table1_calculated %>%
mutate(
Measure = plyr::mapvalues(Measure, from = lescore_names, to = reported_measure_names),
Day = plyr::mapvalues(Day, from=c("S1", "S2"), to=c("Day 1", "Day 2"))
) %>%
gather(Statistic, Value, -c(Measure, Day))
table1_reported_long <- table1_reported %>%
gather(Statistic, Value, -c(Measure, Day))
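# Join the calculated and reported tables and run reproCheck on every cell.
# Cells whose reported value is 0 are skipped here (the percentage-error calculation
# would presumably divide by zero) and are re-added as MATCH rows via replace_na() below.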
full_join(table1_calculated_long, table1_reported_long, c("Measure", "Day", "Statistic")) %>%
mutate(ValueType = plyr::mapvalues(
Statistic,
from=c("M", "SD", "Minimum", "Lower 25%", "Mdn", "Upper 25%", "Maximum"),
to=c("mean", "sd", "other", "other", "median", "other", "other")
)) %>%
rowwise() %>%
do(
Measure = .$Measure,
Day = .$Day,
Statistic = .$Statistic,
rc = ifelse(.$Value.y!=0,
reproCheck(reportedValue=.$Value.y, obtainedValue=.$Value.x, valueType=.$ValueType) %>%
filter(!dummyRow) %>%
select(-c(dummyRow)) %>%
list(),
NA)
) %>%
unnest(Measure, Day, Statistic, rc) %>%
rowwise() %>%
do(
Measure = .$Measure,
Day = .$Day,
Statistic = .$Statistic,
reportedValue = .$rc$reportedValue,
obtainedValue = .$rc$obtainedValue,
valueType = .$rc$valueType,
percentageError = .$rc$percentageError,
comparisonOutcome = .$rc$comparisonOutcome,
eyeballCheck = .$rc$eyeballCheck
) %>%
replace_na(
list(
reportedValue = list(0),
obtainedValue = list(0),
valueType = list('other'),
percentageError=list(0),
comparisonOutcome = list('MATCH'),
eyeballCheck = list(NA)
)
) %>%
unnest() %>%
kable()
## [1] "MATCH for mean. The reported value (9.4) and the obtained value (9.4) differed by 0%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## [1] "MATCH for mean. The reported value (11.1) and the obtained value (11.1) differed by 0%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## [1] "MATCH for mean. The reported value (8.3) and the obtained value (8.3) differed by 0%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## [1] "MATCH for mean. The reported value (7.6) and the obtained value (7.6) differed by 0%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## [1] "MINOR_ERROR for mean. The reported value (33.4) and the obtained value (33.3) differed by 0.3%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## [1] "MATCH for mean. The reported value (33.2) and the obtained value (33.2) differed by 0%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## [1] "MATCH for sd. The reported value (6.6) and the obtained value (6.6) differed by 0%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## [1] "MATCH for sd. The reported value (8.2) and the obtained value (8.2) differed by 0%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## [1] "MINOR_ERROR for sd. The reported value (2.9) and the obtained value (2.8) differed by 3.45%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## [1] "MATCH for sd. The reported value (2.8) and the obtained value (2.8) differed by 0%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## [1] "MATCH for sd. The reported value (7.9) and the obtained value (7.9) differed by 0%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## [1] "MATCH for sd. The reported value (8.7) and the obtained value (8.7) differed by 0%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (2) and the obtained value (2) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (2) and the obtained value (2) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (4) and the obtained value (4) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (2) and the obtained value (2) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (5) and the obtained value (5) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (5) and the obtained value (5) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (6) and the obtained value (6) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (6) and the obtained value (6) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (29) and the obtained value (29) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (28) and the obtained value (28) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for median. The reported value (8) and the obtained value (8) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for median. The reported value (9) and the obtained value (9) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for median. The reported value (8) and the obtained value (8) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for median. The reported value (7) and the obtained value (7) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for median. The reported value (34) and the obtained value (34) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for median. The reported value (35) and the obtained value (35) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (12) and the obtained value (12) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (15) and the obtained value (15) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (10) and the obtained value (10) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (9) and the obtained value (9) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (40) and the obtained value (40) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (40) and the obtained value (40) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (42) and the obtained value (42) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (42) and the obtained value (42) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (16) and the obtained value (16) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (16) and the obtained value (16) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (45) and the obtained value (45) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for other. The reported value (45) and the obtained value (45) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
| Measure | Day | Statistic | reportedValue | obtainedValue | valueType | percentageError | comparisonOutcome | eyeballCheck |
|---|---|---|---|---|---|---|---|---|
| Test 1 Score | Day 1 | M | 9.4 | 9.4 | mean | 0.0000000 | MATCH | NA |
| Test 1 Score | Day 2 | M | 11.1 | 11.1 | mean | 0.0000000 | MATCH | NA |
| Tests-to-criterion score | Day 1 | M | 8.3 | 8.3 | mean | 0.0000000 | MATCH | NA |
| Tests-to-criterion score | Day 2 | M | 7.6 | 7.6 | mean | 0.0000000 | MATCH | NA |
| Final-test score | Day 1 | M | 33.4 | 33.3 | mean | 0.2994012 | MINOR_ERROR | NA |
| Final-test score | Day 2 | M | 33.2 | 33.2 | mean | 0.0000000 | MATCH | NA |
| Test 1 Score | Day 1 | SD | 6.6 | 6.6 | sd | 0.0000000 | MATCH | NA |
| Test 1 Score | Day 2 | SD | 8.2 | 8.2 | sd | 0.0000000 | MATCH | NA |
| Tests-to-criterion score | Day 1 | SD | 2.9 | 2.8 | sd | 3.4482759 | MINOR_ERROR | NA |
| Tests-to-criterion score | Day 2 | SD | 2.8 | 2.8 | sd | 0.0000000 | MATCH | NA |
| Final-test score | Day 1 | SD | 7.9 | 7.9 | sd | 0.0000000 | MATCH | NA |
| Final-test score | Day 2 | SD | 8.7 | 8.7 | sd | 0.0000000 | MATCH | NA |
| Test 1 Score | Day 1 | Minimum | 0.0 | 0.0 | other | 0.0000000 | MATCH | NA |
| Test 1 Score | Day 2 | Minimum | 0.0 | 0.0 | other | 0.0000000 | MATCH | NA |
| Tests-to-criterion score | Day 1 | Minimum | 2.0 | 2.0 | other | 0.0000000 | MATCH | NA |
| Tests-to-criterion score | Day 2 | Minimum | 2.0 | 2.0 | other | 0.0000000 | MATCH | NA |
| Final-test score | Day 1 | Minimum | 4.0 | 4.0 | other | 0.0000000 | MATCH | NA |
| Final-test score | Day 2 | Minimum | 2.0 | 2.0 | other | 0.0000000 | MATCH | NA |
| Test 1 Score | Day 1 | Lower 25% | 5.0 | 5.0 | other | 0.0000000 | MATCH | NA |
| Test 1 Score | Day 2 | Lower 25% | 5.0 | 5.0 | other | 0.0000000 | MATCH | NA |
| Tests-to-criterion score | Day 1 | Lower 25% | 6.0 | 6.0 | other | 0.0000000 | MATCH | NA |
| Tests-to-criterion score | Day 2 | Lower 25% | 6.0 | 6.0 | other | 0.0000000 | MATCH | NA |
| Final-test score | Day 1 | Lower 25% | 29.0 | 29.0 | other | 0.0000000 | MATCH | NA |
| Final-test score | Day 2 | Lower 25% | 28.0 | 28.0 | other | 0.0000000 | MATCH | NA |
| Test 1 Score | Day 1 | Mdn | 8.0 | 8.0 | median | 0.0000000 | MATCH | NA |
| Test 1 Score | Day 2 | Mdn | 9.0 | 9.0 | median | 0.0000000 | MATCH | NA |
| Tests-to-criterion score | Day 1 | Mdn | 8.0 | 8.0 | median | 0.0000000 | MATCH | NA |
| Tests-to-criterion score | Day 2 | Mdn | 7.0 | 7.0 | median | 0.0000000 | MATCH | NA |
| Final-test score | Day 1 | Mdn | 34.0 | 34.0 | median | 0.0000000 | MATCH | NA |
| Final-test score | Day 2 | Mdn | 35.0 | 35.0 | median | 0.0000000 | MATCH | NA |
| Test 1 Score | Day 1 | Upper 25% | 12.0 | 12.0 | other | 0.0000000 | MATCH | NA |
| Test 1 Score | Day 2 | Upper 25% | 15.0 | 15.0 | other | 0.0000000 | MATCH | NA |
| Tests-to-criterion score | Day 1 | Upper 25% | 10.0 | 10.0 | other | 0.0000000 | MATCH | NA |
| Tests-to-criterion score | Day 2 | Upper 25% | 9.0 | 9.0 | other | 0.0000000 | MATCH | NA |
| Final-test score | Day 1 | Upper 25% | 40.0 | 40.0 | other | 0.0000000 | MATCH | NA |
| Final-test score | Day 2 | Upper 25% | 40.0 | 40.0 | other | 0.0000000 | MATCH | NA |
| Test 1 Score | Day 1 | Maximum | 42.0 | 42.0 | other | 0.0000000 | MATCH | NA |
| Test 1 Score | Day 2 | Maximum | 42.0 | 42.0 | other | 0.0000000 | MATCH | NA |
| Tests-to-criterion score | Day 1 | Maximum | 16.0 | 16.0 | other | 0.0000000 | MATCH | NA |
| Tests-to-criterion score | Day 2 | Maximum | 16.0 | 16.0 | other | 0.0000000 | MATCH | NA |
| Final-test score | Day 1 | Maximum | 45.0 | 45.0 | other | 0.0000000 | MATCH | NA |
| Final-test score | Day 2 | Maximum | 45.0 | 45.0 | other | 0.0000000 | MATCH | NA |
For Day 1, the entire task (including informed consent, directions, and the 5-min delay) took an average of 50.3 min to complete (SD = 13.2, range = 28.8–119.0), whereas Day 2 took an average of 45.7 min (SD = 14.0, range = 26.6–115.3).
duration_reports <- study1_experiment_duration %>%
filter(!is.na(Duration)) %>% # TODO: not sure why these are NAs
group_by(Day) %>%
summarize(
M=mean(Duration),
SD=sd(Duration),
Minimum=min(Duration),
Maximum=max(Duration)
)
kable(duration_reports, digits=1)
| Day | M | SD | Minimum | Maximum |
|---|---|---|---|---|
| A | 50.3 | 13.2 | 28.8 | 119.0 |
| B | 45.7 | 14.0 | 26.6 | 115.3 |
reproCheck(reportedValue = '50.3', obtainedValue = duration_reports$M[1], valueType='mean')
## [1] "MATCH for mean. The reported value (50.3) and the obtained value (50.3) differed by 0%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 50.3 50.3 mean 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '13.2', obtainedValue = duration_reports$SD[1], valueType='sd')
## [1] "MATCH for sd. The reported value (13.2) and the obtained value (13.2) differed by 0%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 13.2 13.2 sd 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '28.8', obtainedValue = duration_reports$Minimum[1], valueType='other')
## [1] "MATCH for other. The reported value (28.8) and the obtained value (28.8) differed by 0%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 28.8 28.8 other 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '119.0', obtainedValue = duration_reports$Maximum[1], valueType='other')
## [1] "MATCH for other. The reported value (119) and the obtained value (119) differed by 0%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 119 119 other 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '45.7', obtainedValue = duration_reports$M[2], valueType='mean')
## [1] "MATCH for mean. The reported value (45.7) and the obtained value (45.7) differed by 0%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 45.7 45.7 mean 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '14.0', obtainedValue = duration_reports$SD[2], valueType='sd')
## [1] "MATCH for sd. The reported value (14) and the obtained value (14) differed by 0%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 14 14 sd 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '26.6', obtainedValue = duration_reports$Minimum[2], valueType='other')
## [1] "MATCH for other. The reported value (26.6) and the obtained value (26.6) differed by 0%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 26.6 26.6 other 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '115.3', obtainedValue = duration_reports$Maximum[2], valueType='other')
## [1] "MATCH for other. The reported value (115.3) and the obtained value (115.3) differed by 0%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 115.3 115.3 other 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
Participants who performed better on the initial test reached criterion more quickly (i.e., required fewer tests to criterion) on Day 1, r = −.60, p < .001, 95% confidence interval (CI) = [−.67, −.52], and Day 2, r = −.63, p < .001, 95% CI = [−.69, −.55].
initial_to_completion <- study1_lescores %>%
spread(Measure, Score) %>%
group_by(Day) %>%
do(cor = cor.test(.$INITIALTESTSCORE, .$TESTSTOCRITERION)) %>%
mutate(
r = cor$estimate,
p = cor$p.value,
"CI Low" = cor$conf.int[1],
"CI High" = cor$conf.int[2]
) %>%
select(-c(cor)) %>%
mutate(p = format(p, format='e', digits=3))
initial_to_completion %>%
kable(digits=2)
| Day | r | p | CI Low | CI High |
|---|---|---|---|---|
| S1 | -0.60 | 4.08e-29 | -0.67 | -0.52 |
| S2 | -0.63 | 5.98e-32 | -0.69 | -0.55 |
reproCheck(reportedValue = '-.60', obtainedValue = initial_to_completion$r[1], valueType='r')
## [1] "MATCH for r. The reported value (-0.6) and the obtained value (-0.6) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE -0.6 -0.6 r 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '<.001', obtainedValue = initial_to_completion$p[1], valueType='p', eyeballCheck = TRUE)
## [1] "MATCH for p. Eyeball comparison only."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE <NA> <NA> <NA> NA
## 2 FALSE <.001 4.08e-29 p NA
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH TRUE
reproCheck(reportedValue = '-.67', obtainedValue = initial_to_completion$`CI Low`[1], valueType='ci')
## [1] "MATCH for ci. The reported value (-0.67) and the obtained value (-0.67) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE -0.67 -0.67 ci 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '-.52', obtainedValue = initial_to_completion$`CI High`[1], valueType='ci')
## [1] "MATCH for ci. The reported value (-0.52) and the obtained value (-0.52) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE -0.52 -0.52 ci 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '-.63', obtainedValue = initial_to_completion$r[2], valueType='r')
## [1] "MATCH for r. The reported value (-0.63) and the obtained value (-0.63) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE -0.63 -0.63 r 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '<.001', obtainedValue = initial_to_completion$p[2], valueType='p', eyeballCheck = TRUE)
## [1] "MATCH for p. Eyeball comparison only."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE <NA> <NA> <NA> NA
## 2 FALSE <.001 5.98e-32 p NA
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH TRUE
reproCheck(reportedValue = '-.69', obtainedValue = initial_to_completion$`CI Low`[2], valueType='ci')
## [1] "MATCH for ci. The reported value (-0.69) and the obtained value (-0.69) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE -0.69 -0.69 ci 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '-.55', obtainedValue = initial_to_completion$`CI High`[2], valueType='ci')
## [1] "MATCH for ci. The reported value (-0.55) and the obtained value (-0.55) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE -0.55 -0.55 ci 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
Participants who reached criterion quickly also had better retention of the word pairs after a delay (i.e., better final-test scores) on Day 1, r = −.57, p < .001, 95% CI = [−.64, −.49], and Day 2, r = −.48, p < .001, 95% CI = [−.56, −.38].
completion_to_retention <- study1_lescores %>%
spread(Measure, Score) %>%
group_by(Day) %>%
do(cor = cor.test(.$TESTSTOCRITERION, .$FINALTESTSCORE)) %>%
mutate(
r = cor$estimate,
p = cor$p.value,
"CI Low" = cor$conf.int[1],
"CI High" = cor$conf.int[2]
) %>%
select(-c(cor)) %>%
mutate(p = format(p, format='e', digits=3))
completion_to_retention %>%
kable(digits=2)
| Day | r | p | CI Low | CI High |
|---|---|---|---|---|
| S1 | -0.57 | 1.15e-25 | -0.64 | -0.49 |
| S2 | -0.48 | 2.52e-17 | -0.56 | -0.38 |
reproCheck(reportedValue = '-.57', obtainedValue = completion_to_retention$r[1], valueType='r')
## [1] "MATCH for r. The reported value (-0.57) and the obtained value (-0.57) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE -0.57 -0.57 r 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '<.001', obtainedValue = completion_to_retention$p[1], valueType='p', eyeballCheck = TRUE)
## [1] "MATCH for p. Eyeball comparison only."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE <NA> <NA> <NA> NA
## 2 FALSE <.001 1.15e-25 p NA
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH TRUE
reproCheck(reportedValue = '-.64', obtainedValue = completion_to_retention$`CI Low`[1], valueType='ci')
## [1] "MATCH for ci. The reported value (-0.64) and the obtained value (-0.64) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE -0.64 -0.64 ci 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '-.49', obtainedValue = completion_to_retention$`CI High`[1], valueType='ci')
## [1] "MATCH for ci. The reported value (-0.49) and the obtained value (-0.49) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE -0.49 -0.49 ci 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '-.48', obtainedValue = completion_to_retention$r[2], valueType='r')
## [1] "MATCH for r. The reported value (-0.48) and the obtained value (-0.48) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE -0.48 -0.48 r 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '<.001', obtainedValue = completion_to_retention$p[2], valueType='p', eyeballCheck = TRUE)
## [1] "MATCH for p. Eyeball comparison only."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE <NA> <NA> <NA> NA
## 2 FALSE <.001 2.52e-17 p NA
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH TRUE
reproCheck(reportedValue = '-.56', obtainedValue = completion_to_retention$`CI Low`[2], valueType='ci')
## [1] "MATCH for ci. The reported value (-0.56) and the obtained value (-0.56) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE -0.56 -0.56 ci 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '-.38', obtainedValue = completion_to_retention$`CI High`[2], valueType='ci')
## [1] "MATCH for ci. The reported value (-0.38) and the obtained value (-0.38) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE -0.38 -0.38 ci 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
People who performed better on the initial test also remembered more on the final test on Day 1, r = .26, p < .001, 95% CI = [.15, .37], and Day 2, r = .18, p = .002, 95% CI = [.07, .29].
initial_to_retention <- study1_lescores %>%
spread(Measure, Score) %>%
group_by(Day) %>%
do(cor = cor.test(.$INITIALTESTSCORE, .$FINALTESTSCORE)) %>%
mutate(
r = cor$estimate,
p = cor$p.value,
"CI Low" = cor$conf.int[1],
"CI High" = cor$conf.int[2]
) %>%
select(-c(cor)) %>%
mutate(
p = ifelse(p < 0.001,
format(p, format='e', digits=3),
format(round(p, 3), nsmall=3)
))
initial_to_retention %>%
kable(digits=2)
| Day | r | p | CI Low | CI High |
|---|---|---|---|---|
| S1 | 0.26 | 7.59e-06 | 0.15 | 0.37 |
| S2 | 0.18 | 0.002 | 0.07 | 0.29 |
reproCheck(reportedValue = '.26', obtainedValue = initial_to_retention$r[1], valueType='r')
## [1] "MATCH for r. The reported value (0.26) and the obtained value (0.26) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 0.26 0.26 r 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '<.001', obtainedValue = initial_to_retention$p[1], valueType='p', eyeballCheck = TRUE)
## [1] "MATCH for p. Eyeball comparison only."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE <NA> <NA> <NA> NA
## 2 FALSE <.001 7.59e-06 p NA
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH TRUE
reproCheck(reportedValue = '.15', obtainedValue = initial_to_retention$`CI Low`[1], valueType='ci')
## [1] "MATCH for ci. The reported value (0.15) and the obtained value (0.15) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 0.15 0.15 ci 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '.37', obtainedValue = initial_to_retention$`CI High`[1], valueType='ci')
## [1] "MATCH for ci. The reported value (0.37) and the obtained value (0.37) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 0.37 0.37 ci 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '.18', obtainedValue = initial_to_retention$r[2], valueType='r')
## [1] "MATCH for r. The reported value (0.18) and the obtained value (0.18) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 0.18 0.18 r 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '.002', obtainedValue = initial_to_retention$p[2], valueType='p')
## [1] "MATCH for p. The reported value (0.002) and the obtained value (0.002) differed by 0%. Note that the obtained value was rounded to 3 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 0.002 0.002 p 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '.07', obtainedValue = initial_to_retention$`CI Low`[2], valueType='ci')
## [1] "MATCH for ci. The reported value (0.07) and the obtained value (0.07) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 0.07 0.07 ci 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '.29', obtainedValue = initial_to_retention$`CI High`[2], valueType='ci')
## [1] "MATCH for ci. The reported value (0.29) and the obtained value (0.29) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 0.29 0.29 ci 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
Performance on the learning-efficiency task significantly correlated across days for participants (Table 2), including scores on Test 1, r = .56, p < .001, 95% CI = [.47, .63], tests to criterion, r = .68, p < .001, 95% CI = [.61, .73], and final test, r = .68, p < .001, 95% CI = [.61, .74].
daywise_correlation <- study1_lescores %>%
filter(Measure!="LE_Score") %>%
spread(Day, Score) %>%
group_by(Measure) %>%
do(cor = cor.test(.$S1, .$S2)) %>%
mutate(
r = cor$estimate,
p = cor$p.value,
"CI Low" = cor$conf.int[1],
"CI High" = cor$conf.int[2]
) %>%
select(-c(cor))
# format pvalues specially
daywise_correlation$p <- format(daywise_correlation$p, format='e', digits=3)
kable(daywise_correlation, digits=2)
| Measure | r | p | CI Low | CI High |
|---|---|---|---|---|
| INITIALTESTSCORE | 0.56 | 2.78e-24 | 0.47 | 0.63 |
| TESTSTOCRITERION | 0.67 | 1.15e-38 | 0.61 | 0.73 |
| FINALTESTSCORE | 0.68 | 1.18e-39 | 0.61 | 0.74 |
reproCheck(reportedValue = '.56', obtainedValue = daywise_correlation$r[1], valueType='r')
## [1] "MATCH for r. The reported value (0.56) and the obtained value (0.56) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 0.56 0.56 r 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '<.001', obtainedValue = daywise_correlation$p[1], valueType='p', eyeballCheck = TRUE)
## [1] "MATCH for p. Eyeball comparison only."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE <NA> <NA> <NA> NA
## 2 FALSE <.001 2.78e-24 p NA
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH TRUE
reproCheck(reportedValue = '.47', obtainedValue = daywise_correlation$`CI Low`[1], valueType='ci')
## [1] "MATCH for ci. The reported value (0.47) and the obtained value (0.47) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 0.47 0.47 ci 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '.63', obtainedValue = daywise_correlation$`CI High`[1], valueType='ci')
## [1] "MATCH for ci. The reported value (0.63) and the obtained value (0.63) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 0.63 0.63 ci 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '.68', obtainedValue = daywise_correlation$r[2], valueType='r')
## [1] "MINOR_ERROR for r. The reported value (0.68) and the obtained value (0.67) differed by 1.47%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 0.68 0.67 r 1.470588
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MINOR_ERROR NA
reproCheck(reportedValue = '<.001', obtainedValue = daywise_correlation$p[2], valueType='p', eyeballCheck = TRUE)
## [1] "MATCH for p. Eyeball comparison only."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE <NA> <NA> <NA> NA
## 2 FALSE <.001 1.15e-38 p NA
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH TRUE
reproCheck(reportedValue = '.61', obtainedValue = daywise_correlation$`CI Low`[2], valueType='ci')
## [1] "MATCH for ci. The reported value (0.61) and the obtained value (0.61) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 0.61 0.61 ci 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '.73', obtainedValue = daywise_correlation$`CI High`[2], valueType='ci')
## [1] "MATCH for ci. The reported value (0.73) and the obtained value (0.73) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 0.73 0.73 ci 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '.68', obtainedValue = daywise_correlation$r[3], valueType='r')
## [1] "MATCH for r. The reported value (0.68) and the obtained value (0.68) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 0.68 0.68 r 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '<.001', obtainedValue = daywise_correlation$p[3], valueType='p', eyeballCheck = TRUE)
## [1] "MATCH for p. Eyeball comparison only."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE <NA> <NA> <NA> NA
## 2 FALSE <.001 1.18e-39 p NA
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH TRUE
reproCheck(reportedValue = '.61', obtainedValue = daywise_correlation$`CI Low`[3], valueType='ci')
## [1] "MATCH for ci. The reported value (0.61) and the obtained value (0.61) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 0.61 0.61 ci 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '.74', obtainedValue = daywise_correlation$`CI High`[3], valueType='ci')
## [1] "MATCH for ci. The reported value (0.74) and the obtained value (0.74) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 0.74 0.74 ci 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
First, consider how learning efficiency is computed.
daywise_le <- study1_lescores %>%
# It is ambiguous in the paper whether it groups by measure, or by measure and day.
# The numbers match with measure and day, so that is what I assume is used.
group_by(Measure, Day) %>%
mutate(zscore = scale(Score)) %>%
ungroup() %>%
mutate(Score = ifelse(Measure == 'LE_Score', Score, zscore)) %>%
select(-c("zscore")) %>%
spread(Measure, Score) %>%
# It is also unclear whether the z-scores are averaged or summed.
# The numbers match when they are averaged, so that is what I assume is used.
mutate("LearningEfficiency" = (INITIALTESTSCORE - TESTSTOCRITERION + FINALTESTSCORE)/3)
# Is this the correct way to compute learning efficiency?
# If so, the error table below should contain only very small values.
daywise_le %>%
group_by(Day) %>%
mutate("Error" = LearningEfficiency - LE_Score) %>%
summarize(
"Mean Error" = mean(Error),
"Mean Absolute Error" = mean(abs(Error)),
"Maximum Absolute Error" = max(abs(Error))
) %>%
kable()
| Day | Mean Error | Mean Absolute Error | Maximum Absolute Error |
|---|---|---|---|
| S1 | -1e-07 | 9.0e-07 | 1.7e-06 |
| S2 | 2e-07 | 2.3e-06 | 5.0e-06 |
The table is indeed full of small values, indicating that the authors' method for computing learning efficiency was very likely reproduced.
When the three individual measures were converted to z scores and combined into a single metric (learning-efficiency score, which is a composite of initial test, learning speed, and final retention), the correlation across days was also high, r = .68, p < .001, 95% CI = [.61, .74].
daywise_correlation_le <- daywise_le %>%
select(c(Subject, Day, LearningEfficiency)) %>%
spread(Day, LearningEfficiency) %>%
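  # group_by(1) creates a single dummy group so that do() returns one row;
  # the helper column is dropped again in the select() below.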
group_by(1) %>%
do(cor = cor.test(.$S1, .$S2)) %>%
mutate(
r = cor$estimate,
p = cor$p.value,
"CI Low" = cor$conf.int[1],
"CI High" = cor$conf.int[2]
) %>%
select(-c(cor, 1)) %>%
mutate(p=format(p, format='e', digits=3))
daywise_correlation_le %>%
kable(digits=2)
| r | p | CI Low | CI High |
|---|---|---|---|
| 0.68 | 7.35e-40 | 0.61 | 0.74 |
reproCheck(reportedValue = '.68', obtainedValue = daywise_correlation_le$r[1], valueType='r')
## [1] "MATCH for r. The reported value (0.68) and the obtained value (0.68) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 0.68 0.68 r 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '<.001', obtainedValue = daywise_correlation_le$p[1], valueType='p', eyeballCheck = TRUE)
## [1] "MATCH for p. Eyeball comparison only."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE <NA> <NA> <NA> NA
## 2 FALSE <.001 7.35e-40 p NA
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH TRUE
reproCheck(reportedValue = '.61', obtainedValue = daywise_correlation_le$`CI Low`[1], valueType='ci')
## [1] "MATCH for ci. The reported value (0.61) and the obtained value (0.61) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 0.61 0.61 ci 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
reproCheck(reportedValue = '.74', obtainedValue = daywise_correlation_le$`CI High`[1], valueType='ci')
## [1] "MATCH for ci. The reported value (0.74) and the obtained value (0.74) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## dummyRow reportedValue obtainedValue valueType percentageError
## 1 TRUE NA NA <NA> NA
## 2 FALSE 0.74 0.74 ci 0
## comparisonOutcome eyeballCheck
## 1 <NA> NA
## 2 MATCH NA
Performance across days. The learning curves for participants on the first day can be found in Figure 2 (top panel). Here one can see considerable variability in performance in all three measures (Test 1, tests to criterion, and final test) across participants…
study1_words_over_time %>%
ggplot(aes(x=Test, y=Score, group = Subject)) +
geom_line(aes(color = Subject), show.legend = FALSE) +
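  # The dotted line marks the end of the 16 learning tests; the final test is plotted as test 17.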
geom_vline(xintercept=16, linetype="dotted")
Here one can see considerable variability in performance … across quartiles when binned by overall task performance (Fig. 2, bottom panel).
The procedure as described does not make clear what “overall task performance” means, but the caption of the figure is more explicit:
The bottom panel shows learning curves after participants were placed into quartiles on the basis of their learning-efficiency scores. The graph represents each quartile’s mean correct recall for each test block.
quartile_means <- study1_words_over_time %>%
mutate(Quartile=as.factor(Quartile)) %>%
group_by(Quartile, Test) %>%
summarize(mean=mean(Score))
quartile_means %>%
ggplot(aes(x=Test, y=mean, group=Quartile, color=Quartile)) +
geom_line(size=1.5 ) +
geom_point(size=2) +
scale_color_manual(values=c("#FF0000", "#FFAAAA", "#AAAAFF", "#0000FF"))
Almost all values were reproducible with little or no trouble. Three values had minor numerical errors, and the computation of the aggregate learning-efficiency score was underspecified.
The cause of the minor numerical errors was likely repeated rounding. To illustrate this problem, suppose a value is calculated to be 6.1475. If it is rounded to two decimal places, its value is 6.15. If that rounded value is rounded again, this time to one decimal place, it becomes 6.2. This value is different from rounding to one decimal place straightaway, which is 6.1.
This sort of double rounding is likely to happen if values are pasted into a manuscript at one precision (e.g., two decimal places) and then edited in a later revision to show only one decimal place.
When this rounding process was applied to all of the calculated decimal values in Table 1 of the original paper, discrepancies appeared only for the Day 1 final-test-score mean and the Day 1 tests-to-criterion standard deviation, exactly the two values that have minor numerical errors in Table 1.
# Note that R's built-in 'round' function can't be used for this example, as it rounds
# values ending in 5 to the nearest even digit (banker's rounding).
# The authors likely used the round-half-up rule instead.
# Sourced from https://stackoverflow.com/questions/12688717/round-up-from-5
round_half_up = function(x, n) {
return(sign(x)*trunc(abs(x)*10^n + 0.5)/10^n)
}
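# As a quick check of the 6.1475 example described above (illustrative value, not from the data):
round_half_up(6.1475, 1)                      # rounded once: 6.1
round_half_up(round_half_up(6.1475, 2), 1)    # rounded twice: 6.2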
table1_calculated %>%
select(Measure, Day, M, SD) %>%
gather(Statistic, "Calculated", c(M, SD)) %>%
mutate(
"Rounded Twice" = Calculated %>% round_half_up(2) %>% round_half_up(1),
"Rounded Once" = Calculated %>% round_half_up(1),
"Error" = abs(`Rounded Twice` - `Rounded Once`)
) %>%
mutate(Calculated=round(Calculated, digits=5)) %>%
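  # Wrap every cell of rows where double rounding changes the result in ** so they render bold in the table.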
mutate_all({function (x) ifelse(.$Error > 0.0, paste("**", x, "**", sep=""), as.character(x))}) %>%
select(-c("Error")) %>%
kable()
| Measure | Day | Statistic | Calculated | Rounded Twice | Rounded Once |
|---|---|---|---|---|---|
| INITIALTESTSCORE | S1 | M | 9.37367 | 9.4 | 9.4 |
| INITIALTESTSCORE | S2 | M | 11.05694 | 11.1 | 11.1 |
| TESTSTOCRITERION | S1 | M | 8.29181 | 8.3 | 8.3 |
| TESTSTOCRITERION | S2 | M | 7.59075 | 7.6 | 7.6 |
| FINALTESTSCORE | S1 | M | 33.34875 | 33.4 | 33.3 |
| FINALTESTSCORE | S2 | M | 33.20996 | 33.2 | 33.2 |
| INITIALTESTSCORE | S1 | SD | 6.63803 | 6.6 | 6.6 |
| INITIALTESTSCORE | S2 | SD | 8.20999 | 8.2 | 8.2 |
| TESTSTOCRITERION | S1 | SD | 2.84986 | 2.9 | 2.8 |
| TESTSTOCRITERION | S2 | SD | 2.80047 | 2.8 | 2.8 |
| FINALTESTSCORE | S1 | SD | 7.88258 | 7.9 | 7.9 |
| FINALTESTSCORE | S2 | SD | 8.65997 | 8.7 | 8.7 |
This process was also applied to all of the correlation values; the third minor numerical error appears in the r value for tests to criterion across days. There is also a fourth entry that is bolded (i.e., it differs when rounded twice versus rounded once), but for that value the paper’s reported figure is correct.
all_correlative_values <- initial_to_completion %>%
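  # Stack all of the correlation tables computed above into one long table
  # (via nested full_join calls) so the same rounding check can be applied to every r and CI value.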
mutate("Correlation Group" = "Initial to Completion") %>%
rename(Subgroup=Day) %>%
full_join(
completion_to_retention %>%
mutate("Correlation Group" = "Completion to Retention") %>%
rename(Subgroup=Day) %>%
full_join(
initial_to_retention %>%
mutate("Correlation Group" = "Initial to Retention") %>%
rename(Subgroup=Day) %>%
full_join(
daywise_correlation %>%
mutate("Correlation Group" = "Daywise Correlation") %>%
rename(Subgroup=Measure) %>%
full_join(
daywise_correlation_le %>%
mutate(
"Correlation Group" = "Learning Efficiency Correlation",
"Subgroup" = "--"
)
)
)
)
)
all_correlative_values %>%
select(c("Correlation Group", "Subgroup", "r", "CI High", "CI Low")) %>%
gather(Statistic, Value, c("r", "CI High", "CI Low")) %>%
mutate(
"Rounded Twice" = Value %>% round_half_up(3) %>% round_half_up(2),
"Rounded Once" = Value %>% round_half_up(2),
"Error" = abs(`Rounded Twice` - `Rounded Once`)
) %>%
mutate(Value=round(Value, digits=5)) %>%
arrange(`Correlation Group`, `Subgroup`) %>%
mutate_all({function (x) ifelse(.$Error > 0.0, paste("**", x, "**", sep=""), as.character(x))}) %>%
select(-c("Error")) %>%
kable()
| Correlation Group | Subgroup | Statistic | Value | Rounded Twice | Rounded Once |
|---|---|---|---|---|---|
| Completion to Retention | S1 | r | -0.57053 | -0.57 | -0.57 |
| Completion to Retention | S1 | CI High | -0.48596 | -0.49 | -0.49 |
| Completion to Retention | S1 | CI Low | -0.64451 | -0.65 | -0.64 |
| Completion to Retention | S2 | r | -0.47638 | -0.48 | -0.48 |
| Completion to Retention | S2 | CI High | -0.38058 | -0.38 | -0.38 |
| Completion to Retention | S2 | CI Low | -0.56206 | -0.56 | -0.56 |
| Daywise Correlation | FINALTESTSCORE | r | 0.68111 | 0.68 | 0.68 |
| Daywise Correlation | FINALTESTSCORE | CI High | 0.73921 | 0.74 | 0.74 |
| Daywise Correlation | FINALTESTSCORE | CI Low | 0.61295 | 0.61 | 0.61 |
| Daywise Correlation | INITIALTESTSCORE | r | 0.55684 | 0.56 | 0.56 |
| Daywise Correlation | INITIALTESTSCORE | CI High | 0.63263 | 0.63 | 0.63 |
| Daywise Correlation | INITIALTESTSCORE | CI Low | 0.47048 | 0.47 | 0.47 |
| Daywise Correlation | TESTSTOCRITERION | r | 0.67462 | 0.68 | 0.67 |
| Daywise Correlation | TESTSTOCRITERION | CI High | 0.73372 | 0.73 | 0.73 |
| Daywise Correlation | TESTSTOCRITERION | CI Low | 0.6054 | 0.61 | 0.61 |
| Initial to Completion | S1 | r | -0.60217 | -0.6 | -0.6 |
| Initial to Completion | S1 | CI High | -0.52194 | -0.52 | -0.52 |
| Initial to Completion | S1 | CI Low | -0.67185 | -0.67 | -0.67 |
| Initial to Completion | S2 | r | -0.62576 | -0.63 | -0.63 |
| Initial to Completion | S2 | CI High | -0.54894 | -0.55 | -0.55 |
| Initial to Completion | S2 | CI Low | -0.69209 | -0.69 | -0.69 |
| Initial to Retention | S1 | r | 0.26349 | 0.26 | 0.26 |
| Initial to Retention | S1 | CI High | 0.36912 | 0.37 | 0.37 |
| Initial to Retention | S1 | CI Low | 0.15114 | 0.15 | 0.15 |
| Initial to Retention | S2 | r | 0.18268 | 0.18 | 0.18 |
| Initial to Retention | S2 | CI High | 0.29342 | 0.29 | 0.29 |
| Initial to Retention | S2 | CI Low | 0.0671 | 0.07 | 0.07 |
| Learning Efficiency Correlation | – | r | 0.68244 | 0.68 | 0.68 |
| Learning Efficiency Correlation | – | CI High | 0.74033 | 0.74 | 0.74 |
| Learning Efficiency Correlation | – | CI Low | 0.6145 | 0.61 | 0.61 |
The second minor issue was determining exactly how the learning-efficiency score was calculated. The paper explains that z scores were taken, but it does not specify over what group of data. It is obvious that two different kinds of measures would not be standardized together (e.g., tests to criterion and final-test score), but it is less clear whether Day 1’s and Day 2’s scores for a given measure (e.g., the final-test scores) should be standardized together. Ultimately this was decided by matching the calculated scores against the LE_Score values in the authors’ data table.
The way the z-scores are combined is also ambiguous: the learning-efficiency score could plausibly be either the sum or the average of the z-scores of the three measures. Because the two differ only by a constant factor, most results, including all of the correlations performed here, are unaffected by the choice. Again, matching against the authors' data table indicated that the learning-efficiency score is an average.
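For concreteness, here is a minimal sketch (not the authors' code) of the two ambiguities, using the measure names from the table above. The long-format data frame `d` and its `Day` column are hypothetical placeholders, and any reverse-scoring of tests to criterion before combining is left out.
library(dplyr)

# Ambiguity 1: the scope of the z-scores.
# Option A: standardize each measure within each day separately.
z_within_day <- d %>%
  group_by(Day) %>%
  mutate(
    z_initial   = as.numeric(scale(INITIALTESTSCORE)),
    z_criterion = as.numeric(scale(TESTSTOCRITERION)),
    z_final     = as.numeric(scale(FINALTESTSCORE))
  ) %>%
  ungroup()

# Option B: standardize each measure across both days pooled.
z_pooled <- d %>%
  mutate(
    z_initial   = as.numeric(scale(INITIALTESTSCORE)),
    z_criterion = as.numeric(scale(TESTSTOCRITERION)),
    z_final     = as.numeric(scale(FINALTESTSCORE))
  )

# Ambiguity 2: sum versus average of the three z-scores. They differ only by a
# constant factor of 3, so correlations computed from either are identical;
# the average is what matched the authors' data table.
z_pooled <- z_pooled %>%
  mutate(
    le_sum  = z_initial + z_criterion + z_final,
    le_mean = le_sum / 3
  )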
Author_Assistance <- FALSE # was author assistance provided? (if so, enter TRUE)
Insufficient_Information_Errors <- 0 # how many discrete insufficient information issues did you encounter?
# Assess the causal locus (discrete reproducibility issues) of any reproducibility errors. Note that there doesn't necessarily have to be a one-to-one correspondence between discrete reproducibility issues and reproducibility errors. For example, it could be that the original article neglects to mention that a Greenhouse-Geisser correction was applied to ANOVA outcomes. This might result in multiple reproducibility errors, but there is a single causal locus (discrete reproducibility issue).
locus_typo <- 3 # how many discrete issues did you encounter that related to typographical errors?
locus_specification <- 0 # how many discrete issues did you encounter that related to incomplete, incorrect, or unclear specification of the original analyses?
locus_analysis <- 0 # how many discrete issues did you encounter that related to errors in the authors' original analyses?
locus_data <- 0 # how many discrete issues did you encounter that related to errors in the data files shared by the authors?
locus_unidentified <- 0 # how many discrete issues were there for which you could not identify the cause
# How many of the above issues were resolved through author assistance?
locus_typo_resolved <- 0 # how many of the typographical-error issues were resolved through author assistance?
locus_specification_resolved <- 0 # how many of the specification issues were resolved through author assistance?
locus_analysis_resolved <- 0 # how many of the original-analysis issues were resolved through author assistance?
locus_data_resolved <- 0 # how many of the data-file issues were resolved through author assistance?
locus_unidentified_resolved <- 0 # how many of the unidentified-cause issues were resolved through author assistance?
Affects_Conclusion <- FALSE # Do any reproducibility issues encountered appear to affect the conclusions made in the original article? TRUE, FALSE, or NA. This is a subjective judgement, but you should take into account multiple factors, such as the presence/absence of decision errors, the number of target outcomes that could not be reproduced, the type of outcomes that could or could not be reproduced, the difference in magnitude of effect sizes, and the predictions of the specific hypothesis under scrutiny.
reportObject <- reportObject %>%
  filter(dummyRow == FALSE) %>% # remove the dummy row
  select(-dummyRow) %>% # remove dummy row designation
  mutate(articleID = articleID) %>% # add variables to report
  select(articleID, everything()) # make articleID first column
# decide on final outcome
if(any(reportObject$comparisonOutcome %in% c("MAJOR_ERROR", "DECISION_ERROR")) | Insufficient_Information_Errors > 0){
  finalOutcome <- "Failure without author assistance"
  if(Author_Assistance == T){
    finalOutcome <- "Failure despite author assistance"
  }
}else{
  finalOutcome <- "Success without author assistance"
  if(Author_Assistance == T){
    finalOutcome <- "Success with author assistance"
  }
}
# collate report extra details
reportExtras <- data.frame(articleID, pilotNames, copilotNames, pilotTTC, copilotTTC, pilotStartDate, copilotStartDate, completionDate, Author_Assistance, finalOutcome, Insufficient_Information_Errors, locus_typo, locus_specification, locus_analysis, locus_data, locus_unidentified, locus_typo_resolved, locus_specification_resolved, locus_analysis_resolved, locus_data_resolved, locus_unidentified_resolved)
# save report objects
if(reportType == "pilot"){
write_csv(reportObject, "pilotReportDetailed.csv")
write_csv(reportExtras, "pilotReportExtras.csv")
}
if(reportType == "final"){
write_csv(reportObject, "finalReportDetailed.csv")
write_csv(reportExtras, "finalReportExtras.csv")
}
[This function will output information about the package versions used in this report:]
devtools::session_info()
## ─ Session info ──────────────────────────────────────────────────────────
## setting value
## version R version 3.5.1 (2018-07-02)
## os Ubuntu 16.04.2 LTS
## system x86_64, linux-gnu
## ui X11
## language en_US
## collate en_US.UTF-8
## ctype en_US.UTF-8
## tz America/Los_Angeles
## date 2018-11-04
##
## ─ Packages ──────────────────────────────────────────────────────────────
## package * version date lib
## assertthat 0.2.0 2017-04-11 [1]
## backports 1.1.2 2017-12-13 [1]
## base64enc 0.1-3 2015-07-28 [1]
## bindr 0.1.1 2018-03-13 [1]
## bindrcpp * 0.2.2 2018-03-29 [1]
## broom 0.5.0 2018-07-17 [1]
## callr 3.0.0 2018-08-24 [1]
## CARPSreports * 0.1 2018-11-02 [1]
## cellranger 1.1.0 2016-07-27 [1]
## cli 1.0.1 2018-09-25 [1]
## colorspace 1.3-2 2016-12-14 [1]
## crayon 1.3.4 2017-09-16 [1]
## desc 1.2.0 2018-05-01 [1]
## devtools 2.0.1 2018-10-26 [1]
## digest 0.6.18 2018-10-10 [1]
## dplyr * 0.7.6 2018-06-29 [1]
## evaluate 0.11 2018-07-17 [1]
## forcats * 0.3.0 2018-02-19 [1]
## fs 1.2.6 2018-08-23 [1]
## ggplot2 * 3.0.0 2018-07-03 [1]
## glue 1.3.0 2018-07-17 [1]
## gtable 0.2.0 2016-02-26 [1]
## haven * 1.1.2 2018-06-27 [1]
## highr 0.7 2018-06-09 [1]
## hms 0.4.2 2018-03-10 [1]
## htmltools 0.3.6 2017-04-28 [1]
## httr 1.3.1 2017-08-20 [1]
## jsonlite 1.5 2017-06-01 [1]
## knitr * 1.20 2018-02-20 [1]
## labeling 0.3 2014-08-23 [1]
## lattice 0.20-35 2017-03-25 [4]
## lazyeval 0.2.1 2017-10-29 [1]
## lubridate 1.7.4 2018-04-11 [1]
## magrittr 1.5 2014-11-22 [1]
## memoise 1.1.0 2017-04-21 [1]
## modelr 0.1.2 2018-05-11 [1]
## munsell 0.5.0 2018-06-12 [1]
## nlme 3.1-137 2018-04-07 [4]
## pillar 1.3.0 2018-07-14 [1]
## pkgbuild 1.0.2 2018-10-16 [1]
## pkgconfig 2.0.2 2018-08-16 [1]
## pkgload 1.0.2 2018-10-29 [1]
## plyr 1.8.4 2016-06-08 [1]
## prettyunits 1.0.2 2015-07-13 [1]
## processx 3.2.0 2018-08-16 [1]
## ps 1.1.0 2018-08-10 [1]
## purrr * 0.2.5 2018-05-29 [1]
## R6 2.3.0 2018-10-04 [1]
## Rcpp 0.12.19 2018-10-01 [1]
## readr * 1.1.1 2017-05-16 [1]
## readxl * 1.1.0 2018-04-20 [1]
## remotes 2.0.2 2018-10-30 [1]
## rlang 0.3.0.1 2018-10-25 [1]
## rmarkdown 1.10 2018-06-11 [1]
## rprojroot 1.3-2 2018-01-03 [1]
## rstudioapi 0.8 2018-10-02 [1]
## rvest 0.3.2 2016-06-17 [1]
## scales 1.0.0 2018-08-09 [1]
## sessioninfo 1.1.0 2018-09-25 [1]
## stringi 1.2.4 2018-07-20 [1]
## stringr * 1.3.1 2018-05-10 [1]
## tibble * 1.4.2 2018-01-22 [1]
## tidyr * 0.8.1 2018-05-18 [1]
## tidyselect 0.2.4 2018-02-26 [1]
## tidyverse * 1.2.1 2017-11-14 [1]
## usethis 1.4.0 2018-08-14 [1]
## withr 2.1.2 2018-03-15 [1]
## xml2 1.2.0 2018-01-24 [1]
## yaml 2.2.0 2018-07-25 [1]
## source
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## Github (METRICS-CARPS/CARPSreports@89db4a9)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.0)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.0)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
## CRAN (R 3.5.1)
##
## [1] /home/mark/R/x86_64-pc-linux-gnu-library/3.5
## [2] /usr/local/lib/R/site-library
## [3] /usr/lib/R/site-library
## [4] /usr/lib/R/library