articleID <- "7-3-2015_PS" # insert the article ID code here e.g., "10-3-2015_PS"
reportType <- 'final'
pilotNames <- "Danielle Boles, Michael Ko" # insert the pilot's name here e.g., "Tom Hardwicke". If there are multiple pilots enter both names in a character string e.g., "Tom Hardwicke, Bob Dylan"
copilotNames <- "Ben Peloquin" # insert the co-pilot's name here e.g., "Michael Frank". If there are multiple co-pilots enter both names in a character string e.g., "Tom Hardwicke, Bob Dylan"
pilotTTC <- 150 # insert the pilot's estimated time to complete (in minutes, fine to approximate) e.g., 120
copilotTTC <- 200 # insert the co-pilot's estimated time to complete (in minutes, fine to approximate) e.g., 120
pilotStartDate <- as.Date("10/27/17", format = "%m/%d/%y") # insert the pilot's start date in US format e.g., as.Date("01/25/18", format = "%m/%d/%y")
copilotStartDate <- as.Date("06/13/18", format = "%m/%d/%y") # insert the co-pilot's start date in US format e.g., as.Date("01/25/18", format = "%m/%d/%y")
completionDate <- as.Date("04/20/19", format = "%m/%d/%y") # copilot insert the date of final report completion (after any necessary rounds of author assistance) in US format e.g., as.Date("01/25/18", format = "%m/%d/%y")
Participants (N=21) completed a series of trials in which they either switched between two tasks or repeated the same task. One task was to choose the larger of two values when the values were surrounded by a green box. The other task was to choose the value printed in the larger font when the values were surrounded by a blue box. Subliminal cues followed by a mask were presented before each trial. Cues included “O” (nonpredictive cue), “M” (switch predictive cue), and “T” (repeat predictive cue). Reaction times and accuracy were measured.
Performance on switch trials, relative to repeat trials, incurred a switch cost that was evident in longer RTs (836 vs. 689 ms) and lower accuracy rates (79% vs. 92%). If participants were able to learn the predictive value of the cue that preceded only switch trials and could instantiate relevant anticipatory control in response to it, the performance on switch trials preceded by this cue would be better than on switch trials preceded by the nonpredictive cue. This was indeed the case (mean RT-predictive cue: 819 ms; nonpredictive cue: 871 ms; mean difference = 52 ms, 95% confidence interval, or CI = [19.5, 84.4]), two-tailed paired t(20) = 3.34, p < .01. However, error rates did not differ across these two groups of switch trials (predictive cue: 78.9%; nonpredictive cue: 78.8%), p = .8.
library(tidyverse) # for data munging
library(knitr) # for kable table formatting
library(haven) # import and export 'SPSS', 'Stata' and 'SAS' Files
library(readxl) # import excel files
library(ReproReports) # custom report functions
library(broom) # for tidying statistical test output
# Prepare report object. This will be updated automatically by the reproCheck function each time values are compared.
reportObject <- data.frame(dummyRow = TRUE, reportedValue = NA, obtainedValue = NA, valueType = NA, percentageError = NA, comparisonOutcome = NA, eyeballCheck = NA)
# This reads in each participant's data (each is in a separate xls file) and combines them into one dataframe
# Each xls has 250 rows of trial data; the remaining rows hold the participant's own Excel calculations, which we don't want in the data
files <- dir('data/Experiment 1')
data <- data.frame()
id <- 1
for (file in files){
if(file != 'Codebook.xls'){
temp_data <- read_xls(file.path('data/Experiment 1', file))
temp_data$id <- id
id <- id + 1
temp_data <- temp_data[1:250, ]
data <- rbind(data, temp_data)
}
}
Do we have data from 21 participants as expected?
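The code for this check is not shown above; a minimal sketch (assuming each participant file was assigned a unique id in the loop above) is:
length(unique(data$id)) == 21 # one id per participant file; expect 21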
## [1] TRUE
Each row is an observation. The data is already in tidy format.
The codebook for Experiment 1 listed O, T, and M as the only primes used. However, we found that some participants had primes coded as 2, 4, and 8 instead.
We inferred that 2 is the nonpredictive cue, 4 is the repeat predictive cue, and 8 is the switch predictive cue, based on how the other columns were named. We therefore proceed with the analysis under this assumption, recoding the primes accordingly.
data$originalPrime <- data$Prime
data$Prime <- recode(data$Prime, '2' = "O", '4' = "T", '8' = "M")
#recode variables to make referencing easier
data$originalTrialType <- data$TrialType
data$Prime <- recode(data$Prime, 'O' = "Nonpredictive Cue", 'M' = "Switch Predictive Cue", 'T' = "Repeat Predictive Cue")
data$TrialType <- recode(data$TrialType, '0' = "Neither", '1' = "Repeat Trials", '2' = "Switch Trials")
Note: TrialType=0 is not listed in codebook.
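As a quick sanity check on this unexpected level, the trial types can be tallied (a sketch, not part of the original analysis):
data %>% count(TrialType) # how many trials fall into each recoded TrialType, including "Neither"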
We will first try to reproduce median reaction time of switch trials and repeat trials.
Performance on switch trials, relative to repeat trials, incurred a switch cost that was evident in longer RTs (836 vs. 689 ms)
We used the median because the authors stated that medians were used for reaction times unless otherwise noted:
unless stated otherwise, the statistical tests were performed on the more stable median values rather than mean values.
med_RT <- data %>%
group_by(TrialType) %>%
summarise(median_RT = median(RT),
mean_RT=mean(RT))
kable(med_RT[-1, ])
| TrialType | median_RT | mean_RT |
|---|---|---|
| Repeat Trials | 665.0625 | 730.9531 |
| Switch Trials | 812.9375 | 895.6084 |
reportObject <- reproCheck(reportedValue = "836", obtained = filter(med_RT, TrialType == "Switch Trials")$median_RT, valueType = 'median')
## [1] "MINOR_ERROR for median. The reported value (836) and the obtained value (813) differed by 2.75%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
reportObject <- reproCheck(reportedValue = "689", obtained = filter(med_RT, TrialType == "Repeat Trials")$median_RT, valueType = 'median')
## [1] "MINOR_ERROR for median. The reported value (689) and the obtained value (665) differed by 3.48%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
Note that the means would not have matched either.
Now we check whether including only participants with ‘letter’ primes enables us to reproduce the target values.
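The code for this check is not shown; a minimal sketch (assuming ‘letter’ participants are those whose original prime codes were O, T, or M) is:
# participants whose primes were coded as letters rather than numbers
letter_ids <- data %>%
  filter(originalPrime %in% c("O", "T", "M")) %>%
  pull(id) %>%
  unique()
# median RT by trial type for letter-prime participants only
data %>%
  filter(id %in% letter_ids) %>%
  group_by(TrialType) %>%
  summarise(median_RT = median(RT))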
These values would not have matched the reported values either.
Next we will try to reproduce the accuracy of switch trials and repeat trials.
Performance on switch trials, relative to repeat trials, incurred a switch cost that was evident in […] lower accuracy rates (79% vs. 92%)
mean_RespCorr <- data %>%
group_by(TrialType) %>%
summarise(accuracy = mean(RespCorr))
kable(mean_RespCorr[-1, ])
| TrialType | accuracy |
|---|---|
| Repeat Trials | 0.9137196 |
| Switch Trials | 0.7919738 |
reportObject <- reproCheck(reportedValue = "0.79", obtained = filter(mean_RespCorr, TrialType == "Switch Trials")$accuracy, valueType = 'mean')
## [1] "MATCH for mean. The reported value (0.79) and the obtained value (0.79) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
reportObject <- reproCheck(reportedValue = "0.92", obtained = filter(mean_RespCorr, TrialType == "Repeat Trials")$accuracy, valueType = 'mean')
## [1] "MINOR_ERROR for mean. The reported value (0.92) and the obtained value (0.91) differed by 1.09%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
Minor errors: these values are extremely close, although the rounded mean accuracy for repeat trials is slightly different.
Now we will analyze predictive switch cues vs. nonpredictive switch cues, starting with reaction time.
This was indeed the case (mean RT-predictive cue: 819 ms; nonpredictive cue: 871 ms; … )
Later, the authors report a t test with 20 degrees of freedom, so we assume these mean values come from individual participants' mean RTs.
mean_Prime_RT_Ind <- data %>%
filter(TrialType == "Switch Trials") %>%
group_by(id, Prime) %>%
summarise(meanRT = mean(RT),
medianRT = median(RT)) #Individual Means
mean_Prime_RT <- mean_Prime_RT_Ind %>% group_by(Prime) %>%
summarise(grandmeanRT = mean(meanRT),
grandMedianRT = median(medianRT)) #Grand Means
kable(mean_Prime_RT)
| Prime | grandmeanRT | grandMedianRT |
|---|---|---|
| Nonpredictive Cue | 907.7555 | 783.1562 |
| Switch Predictive Cue | 883.3979 | 799.5000 |
There are minor errors here:
npc_mean <- mean_Prime_RT %>% filter(Prime == "Nonpredictive Cue") %>% pull(grandmeanRT)
reportObject <- reproCheck(reportedValue = "871", obtained = npc_mean, valueType = 'mean')
## [1] "MINOR_ERROR for mean. The reported value (871) and the obtained value (908) differed by 4.25%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
spc_mean <- mean_Prime_RT %>% filter(Prime == "Switch Predictive Cue") %>% pull(grandmeanRT)
reportObject <- reproCheck(reportedValue = "819", obtained = spc_mean, valueType = 'mean')
## [1] "MINOR_ERROR for mean. The reported value (819) and the obtained value (883) differed by 7.81%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
The grand medians would not have matched the reported values either.
Next we will try to reproduce the error rates on switch trials for predictive vs. nonpredictive cues.
However, error rates did not differ across these two groups of switch trials (predictive cue: 78.9%; nonpredictive cue: 78.8%)
mean_Prime_RespCorr_Ind <- data %>% filter(TrialType == "Switch Trials") %>% group_by(id, Prime) %>% summarise(meanCorr = mean(RespCorr)) #Individual Means
mean_Prime_RespCorr <- mean_Prime_RespCorr_Ind %>% group_by(Prime) %>% summarise(grandmeanCorr = mean(meanCorr)) #Grand Means
kable(mean_Prime_RespCorr)
| Prime | grandmeanCorr |
|---|---|
| Nonpredictive Cue | 0.7802613 |
| Switch Predictive Cue | 0.7994066 |
These numbers are fairly close to the reported numbers.
reportObject <- reproCheck(reportedValue = ".789", obtained = filter(mean_Prime_RespCorr, Prime == "Switch Predictive Cue")$grandmeanCorr, valueType = 'mean')
## [1] "MINOR_ERROR for mean. The reported value (0.789) and the obtained value (0.799) differed by 1.27%. Note that the obtained value was rounded to 3 decimal places to match the reported value."
reportObject <- reproCheck(reportedValue = ".788", obtained = filter(mean_Prime_RespCorr, Prime == "Nonpredictive Cue")$grandmeanCorr, valueType = 'mean')
## [1] "MINOR_ERROR for mean. The reported value (0.788) and the obtained value (0.78) differed by 1.02%. Note that the obtained value was rounded to 3 decimal places to match the reported value."
The first claim is that, on switch trials, predictive cues lead to significantly faster reaction times than nonpredictive cues.
… the performance on switch trials preceded by this cue would be better than on switch trials preceded by the nonpredictive cue. This was indeed the case (mean RT-predictive cue: 819 ms; nonpredictive cue: 871 ms; mean difference = 52 ms, 95% confidence interval, or CI = [19.5, 84.4]), two-tailed paired t(20) = 3.34, p < .01.
mean_Prime_RT_Ind <- mean_Prime_RT_Ind %>% select(id, Prime, meanRT) %>%
spread(Prime, meanRT) #spreading so that the cues are easier to compare
test <- t.test(mean_Prime_RT_Ind[['Nonpredictive Cue']], mean_Prime_RT_Ind[['Switch Predictive Cue']], paired = TRUE)
kable(tidy(test))
| estimate | statistic | p.value | parameter | conf.low | conf.high | method | alternative |
|---|---|---|---|---|---|---|---|
| 24.35754 | 0.8887943 | 0.3846791 | 20 | -32.80859 | 81.52367 | Paired t-test | two.sided |
reportObject <- reproCheck(reportedValue = "<.01", obtained = test$p.value, valueType = 'p', eyeballCheck = FALSE)
## [1] "EYEBALL CHECK ERROR for p. Eyeball comparison only."
reportObject <- reproCheck(reportedValue = "19.5", obtained = test$conf.int[1], valueType = 'ci')
## [1] "MAJOR_ERROR for ci. The reported value (19.5) and the obtained value (-32.8) differed by 268.21%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
reportObject <- reproCheck(reportedValue = "84.4", obtained = test$conf.int[2], valueType = 'ci')
## [1] "MINOR_ERROR for ci. The reported value (84.4) and the obtained value (81.5) differed by 3.44%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
reportObject <- reproCheck(reportedValue = "20", obtained = test$parameter, valueType = 'df')
## [1] "MATCH for df. The reported value (20) and the obtained value (20) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
reportObject <- reproCheck(reportedValue = "3.34", obtained = test$statistic, valueType = 't')
## [1] "MAJOR_ERROR for t. The reported value (3.34) and the obtained value (0.89) differed by 73.35%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
We do not find the same p value as the original paper. There is ambiguity in how to calculate this statistic which is detailed below.
[INSUFFICIENT INFORMATION ERROR]
As a check, note that differences in reaction time could be due to differences in Prime coding. This supports the notion that we need to better understand the coding differences before trying to reproduce all the analyses.
# Differences in reaction time t.test by prime coding
df_rt_ttest <- data %>%
filter(TrialType == "Switch Trials") %>%
group_by(id, originalPrime) %>%
summarise(meanRT = mean(RT)) %>%
spread(originalPrime, meanRT)
test1 <- t.test(df_rt_ttest$M, df_rt_ttest$O, paired=TRUE)
test2 <- t.test(df_rt_ttest$`2`, df_rt_ttest$`8`, paired=TRUE)
kable(tidy(test1))
| estimate | statistic | p.value | parameter | conf.low | conf.high | method | alternative |
|---|---|---|---|---|---|---|---|
| -4.820973 | -0.1295861 | 0.8988774 | 13 | -85.19284 | 75.55089 | Paired t-test | two.sided |
kable(tidy(test2))
| estimate | statistic | p.value | parameter | conf.low | conf.high | method | alternative |
|---|---|---|---|---|---|---|---|
| 63.43066 | 1.88596 | 0.1082568 | 6 | -18.86654 | 145.7279 | Paired t-test | two.sided |
But none of these ways of doing the analysis produce outcomes matching the reported outcomes.
Next we will test the second claim.
However, error rates did not differ across these two groups of switch trials (predictive cue: 78.9%; nonpredictive cue: 78.8%), p = .8.
mean_Prime_RespCorr_Ind <-
mean_Prime_RespCorr_Ind %>%
spread(Prime, meanCorr) #spreading so that the cues are easier to compare
test <-
t.test(mean_Prime_RespCorr_Ind[['Nonpredictive Cue']],
mean_Prime_RespCorr_Ind[['Switch Predictive Cue']], paired = TRUE)
thisP <- test$p.value
kable(tidy(test))
| estimate | statistic | p.value | parameter | conf.low | conf.high | method | alternative |
|---|---|---|---|---|---|---|---|
| -0.0191453 | -1.07662 | 0.2944658 | 20 | -0.0562394 | 0.0179489 | Paired t-test | two.sided |
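The chunk that produced the summary below is not shown; a sketch that would reproduce it from the spread individual means (assuming the two columns are grand means) is:
mean_Prime_RespCorr_Ind %>%
  ungroup() %>% # drop the per-participant grouping before taking grand means
  summarise(nonPredMean = mean(`Nonpredictive Cue`),
            predMean = mean(`Switch Predictive Cue`))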
## # A tibble: 1 x 2
## nonPredMean predMean
## <dbl> <dbl>
## 1 0.780 0.799
Although still not significant, the p value is very different from what was reported.
reportObject <- reproCheck(reportedValue = ".8", obtained = thisP, valueType = 'p')
## [1] "MAJOR_ERROR for p. The reported value (0.8) and the obtained value (0.3) differed by 62.5%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
In our initial attempts we were not able to reproduce several target outcomes. Generally, all the reaction time statistics (whether we tried means or medians) differed from what was reported, although these were classified as ‘minor numerical errors’. Ultimately, though, there were major errors for the inferential test statistics, which could stem from a problem earlier in the analysis pipeline.
There are a number of aspects of the original analysis / data files we are unclear about. We have e-mailed the original authors several times but have not received a response to our questions (we have received replies saying they will get back to us, but this has not happened more than 9 months after the last message). We have thus classified these issues as 4 ‘insufficient information errors’. The issues we are unclear about are as follows:
Unclear labels for variables
In the data file, there is a variable “CorrResp” (1 or 0) and another variable “RespCorr” (TRUE or FALSE). We used “RespCorr” because it was the only variable of the two that was included in the codebook where TRUE=Accurate response and FALSE=Error. But we still don’t know what “CorrResp” is and whether or not it was used in the original analyses.
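One way to explore this (a sketch for illustration, not part of the original analysis) is to cross-tabulate the two variables:
data %>% count(CorrResp, RespCorr) # do the 1/0 and TRUE/FALSE codings agree?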
Unclear recoding of variables
In the data file, there is one Excel file per participant with all of their reaction times for the 250 trials. For some participants, the Prime was coded as the actual prime shown, “O”, “T”, or “M”. For other participants, the Prime was coded as “2”, “4”, and “8”. However, we had to infer which number corresponded to each letter by looking at the variable names assigned to trial type and the cue that followed (“stay_2”, “stay_4”, “swt_2”, “swt_8”).
We coded 2=O, 4=T, 8=M, but we are still unsure whether this is consistent with how the authors coded the prime variable. We should note that some differences do appear exaggerated when subsetting by these two coding schemes: participants whose Prime was coded in c(2, 4, 8) appeared to show larger RT and accuracy differences than those whose Prime was coded in c("O", "T", "M").
Ambiguity between using means or medians
The article states that, unless otherwise noted, statistical tests were performed on median values rather than mean values, and we followed this protocol. However, we were not able to reproduce the following outcomes using either means or medians:
“performance on switch trials, relative to repeat trials, incurred a switch cost that was evident in longer RTs (836 vs. 689 ms)”
“mean RT - predictive cue: 819 ms; 95% confidence interval, or CI = [19.5, 84.4], two-tailed paired t(20) = 3.34, p < .01”
Unclear whether descriptives are means/medians of individual means/medians, or means/medians across all trials
When we tried to reproduce the reaction time medians, we realized that each value could have been obtained by calculating a value (mean or median) for each individual and then summarizing those values into a single value (mean or median), OR it could have been obtained as a single value (mean or median) across ALL trials. We tried several combinations of means or medians, across individuals or across trials, and still could not reproduce the descriptive reaction times. A sketch of these combinations is shown below.
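For concreteness, here is a sketch (not the original code) of the four descriptive combinations for switch-trial RTs:
switch_rt <- data %>% filter(TrialType == "Switch Trials")
# per-participant mean and median RT
by_participant <- switch_rt %>%
  group_by(id) %>%
  summarise(meanRT = mean(RT), medianRT = median(RT))
# the four candidate descriptives
tibble(
  mean_of_participant_means = mean(by_participant$meanRT),
  median_of_participant_medians = median(by_participant$medianRT),
  mean_across_all_trials = mean(switch_rt$RT),
  median_across_all_trials = median(switch_rt$RT)
)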
Author_Assistance = FALSE # was author assistance provided? (if so, enter TRUE)
Insufficient_Information_Errors <- 4 # how many discrete insufficient information issues did you encounter?
# Assess the causal locus (discrete reproducibility issues) of any reproducibility errors. Note that there doesn't necessarily have to be a one-to-one correspondence between discrete reproducibility issues and reproducibility errors. For example, it could be that the original article neglects to mention that a Greenhouse-Geisser correction was applied to ANOVA outcomes. This might result in multiple reproducibility errors, but there is a single causal locus (discrete reproducibility issue).
locus_typo <- 0 # how many discrete issues did you encounter that related to typographical errors?
locus_specification <- 1 # how many discrete issues did you encounter that related to incomplete, incorrect, or unclear specification of the original analyses?
locus_analysis <- 0 # how many discrete issues did you encounter that related to errors in the authors' original analyses?
locus_data <- 0 # how many discrete issues did you encounter that related to errors in the data files shared by the authors?
locus_unidentified <- 3 # how many discrete issues were there for which you could not identify the cause
# How many of the above issues were resolved through author assistance?
locus_typo_resolved <- 0 # how many discrete issues did you encounter that related to typographical errors?
locus_specification_resolved <- 0 # how many discrete issues did you encounter that related to incomplete, incorrect, or unclear specification of the original analyses?
locus_analysis_resolved <- 0 # how many discrete issues did you encounter that related to errors in the authors' original analyses?
locus_data_resolved <- 0 # how many discrete issues did you encounter that related to errors in the data files shared by the authors?
locus_unidentified_resolved <- 0 # how many discrete issues were there for which you could not identify the cause
Affects_Conclusion <- TRUE # Do any reproducibility issues encountered appear to affect the conclusions made in the original article? This is a subjective judgement, but you should take into account multiple factors, such as the presence/absence of decision errors, the number of target outcomes that could not be reproduced, the type of outcomes that could or could not be reproduced, the difference in magnitude of effect sizes, and the predictions of the specific hypothesis under scrutiny.
reportObject <- reportObject %>%
filter(dummyRow == FALSE) %>% # remove the dummy row
select(-dummyRow) %>% # remove dummy row designation
mutate(articleID = articleID) %>% # add the articleID
select(articleID, everything()) # make articleID first column
# decide on final outcome
if(any(!(reportObject$comparisonOutcome %in% c("MATCH", "MINOR_ERROR"))) | Insufficient_Information_Errors > 0){
finalOutcome <- "Failure without author assistance"
if(Author_Assistance == T){
finalOutcome <- "Failure despite author assistance"
}
}else{
finalOutcome <- "Success without author assistance"
if(Author_Assistance == T){
finalOutcome <- "Success with author assistance"
}
}
# collate report extra details
reportExtras <- data.frame(articleID, pilotNames, copilotNames, pilotTTC, copilotTTC, pilotStartDate, copilotStartDate, completionDate, Author_Assistance, finalOutcome, Insufficient_Information_Errors, locus_typo, locus_specification, locus_analysis, locus_data, locus_unidentified, locus_typo_resolved, locus_specification_resolved, locus_analysis_resolved, locus_data_resolved, locus_unidentified_resolved)
# save report objects
if(reportType == "pilot"){
write_csv(reportObject, "pilotReportDetailed.csv")
write_csv(reportExtras, "pilotReportExtras.csv")
}
if(reportType == "final"){
write_csv(reportObject, "finalReportDetailed.csv")
write_csv(reportExtras, "finalReportExtras.csv")
}
devtools::session_info()
## ─ Session info ───────────────────────────────────────────────────────────────
## setting value
## version R version 4.0.0 (2020-04-24)
## os macOS Catalina 10.15.4
## system x86_64, darwin17.0
## ui X11
## language (EN)
## collate en_US.UTF-8
## ctype en_US.UTF-8
## tz Europe/London
## date 2020-05-13
##
## ─ Packages ───────────────────────────────────────────────────────────────────
## package * version date lib
## assertthat 0.2.1 2019-03-21 [1]
## backports 1.1.6 2020-04-05 [1]
## broom * 0.5.6 2020-04-20 [1]
## callr 3.4.3 2020-03-28 [1]
## cellranger 1.1.0 2016-07-27 [1]
## cli 2.0.2 2020-02-28 [1]
## colorspace 1.4-1 2019-03-18 [1]
## crayon 1.3.4 2017-09-16 [1]
## DBI 1.1.0 2019-12-15 [1]
## dbplyr 1.4.3 2020-04-19 [1]
## desc 1.2.0 2018-05-01 [1]
## devtools 2.3.0 2020-04-10 [1]
## digest 0.6.25 2020-02-23 [1]
## dplyr * 0.8.5 2020-03-07 [1]
## ellipsis 0.3.0 2019-09-20 [1]
## evaluate 0.14 2019-05-28 [1]
## fansi 0.4.1 2020-01-08 [1]
## forcats * 0.5.0 2020-03-01 [1]
## fs 1.4.1 2020-04-04 [1]
## generics 0.0.2 2018-11-29 [1]
## ggplot2 * 3.3.0 2020-03-05 [1]
## glue 1.4.0 2020-04-03 [1]
## gtable 0.3.0 2019-03-25 [1]
## haven * 2.2.0 2019-11-08 [1]
## highr 0.8 2019-03-20 [1]
## hms 0.5.3 2020-01-08 [1]
## htmltools 0.4.0 2019-10-04 [1]
## httr 1.4.1 2019-08-05 [1]
## jsonlite 1.6.1 2020-02-02 [1]
## knitr * 1.28 2020-02-06 [1]
## lattice 0.20-41 2020-04-02 [1]
## lifecycle 0.2.0 2020-03-06 [1]
## lubridate 1.7.8 2020-04-06 [1]
## magrittr 1.5 2014-11-22 [1]
## memoise 1.1.0 2017-04-21 [1]
## modelr 0.1.7 2020-04-30 [1]
## munsell 0.5.0 2018-06-12 [1]
## nlme 3.1-147 2020-04-13 [1]
## pillar 1.4.4 2020-05-05 [1]
## pkgbuild 1.0.7 2020-04-25 [1]
## pkgconfig 2.0.3 2019-09-22 [1]
## pkgload 1.0.2 2018-10-29 [1]
## prettyunits 1.1.1 2020-01-24 [1]
## processx 3.4.2 2020-02-09 [1]
## ps 1.3.2 2020-02-13 [1]
## purrr * 0.3.4 2020-04-17 [1]
## R6 2.4.1 2019-11-12 [1]
## Rcpp 1.0.4.6 2020-04-09 [1]
## readr * 1.3.1 2018-12-21 [1]
## readxl * 1.3.1 2019-03-13 [1]
## remotes 2.1.1 2020-02-15 [1]
## reprex 0.3.0 2019-05-16 [1]
## ReproReports * 0.1 2020-05-06 [1]
## rlang 0.4.6 2020-05-02 [1]
## rmarkdown 2.1 2020-01-20 [1]
## rprojroot 1.3-2 2018-01-03 [1]
## rstudioapi 0.11 2020-02-07 [1]
## rvest 0.3.5 2019-11-08 [1]
## scales 1.1.0 2019-11-18 [1]
## sessioninfo 1.1.1 2018-11-05 [1]
## stringi 1.4.6 2020-02-17 [1]
## stringr * 1.4.0 2019-02-10 [1]
## testthat 2.3.2 2020-03-02 [1]
## tibble * 3.0.1 2020-04-20 [1]
## tidyr * 1.0.2 2020-01-24 [1]
## tidyselect 1.0.0 2020-01-27 [1]
## tidyverse * 1.3.0 2019-11-21 [1]
## usethis 1.6.1 2020-04-29 [1]
## utf8 1.1.4 2018-05-24 [1]
## vctrs 0.2.4 2020-03-10 [1]
## withr 2.2.0 2020-04-20 [1]
## xfun 0.13 2020-04-13 [1]
## xml2 1.3.2 2020-04-23 [1]
## yaml 2.2.1 2020-02-01 [1]
## source
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## Github (METRICS-CARPS/CARPSreports@3277f85)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
## CRAN (R 4.0.0)
##
## [1] /Library/Frameworks/R.framework/Versions/4.0/Resources/library