Report Details

articleID <- "1-6-2014_PS" # insert the article ID code here e.g., "10-3-2015_PS"
reportType <- "final" # specify whether this is the 'pilot' report or 'final' report
pilotNames <- "Tysen Dauer, Erik Santoro, Jaclyn Schwartz" # insert the pilot's name here e.g., "Tom Hardwicke". If there are multiple pilots enter both names in a character string e.g., "Tom Hardwicke, Bob Dylan"
copilotNames <- "Michèle Nuijten" # insert the co-pilot's name here e.g., "Michael Frank". If there are multiple co-pilots enter both names in a character string e.g., "Tom Hardwicke, Bob Dylan"
pilotTTC <- 525 # insert the pilot's estimated time to complete (in minutes, fine to approximate) e.g., 120
copilotTTC <- 1280 # insert the co-pilot's estimated time to complete (in minutes, fine to approximate) e.g., 120
pilotStartDate <- as.Date("10/26/17", format = "%m/%d/%y") # insert the pilot's start date in US format e.g., as.Date("01/25/18", format = "%m/%d/%y")
copilotStartDate <- as.Date("07/03/18", format = "%m/%d/%y") # insert the co-pilot's start date in US format e.g., as.Date("01/25/18", format = "%m/%d/%y")
completionDate <- as.Date("05/02/18", format = "%m/%d/%y") # copilot insert the date of final report completion (after any necessary rounds of author assistance) in US format e.g., as.Date("01/25/18", format = "%m/%d/%y")

Methods summary:

Mueller and Oppenheimer (2014) aimed to find out whether note-taking medium (longhand or laptop) had an effect on participants’ ability to correctly answer factual and conceptual questions. In Study 1, 67 participants watched one of five different lectures, performed distractor tasks, and then answered questions about the lecture they had viewed. The authors found that note-taking medium had a significant effect on answers to conceptual questions but not to factual questions.

The authors report converting the raw data to z scores before listing ANOVA results for four tests: (1) factual-recall questions by condition (laptop or longhand), (2) conceptual-application questions by condition, (3) conceptual-application questions by lecture (labelled “whichtalk” in the data), and (4) the interaction between lecture and note-taking medium (labelled “condition” in the data). According to the article, all of these ANOVAs were mixed fixed- and random-effects ANOVAs, with note-taking medium (laptop vs. longhand) as a fixed effect and lecture (which talk was viewed) as a random effect.

In 2018, a corrigendum for this paper was published. In it, the authors explain that they now calculate the z-scores for each of the studies using an index-score approach. New data files with the corrected z-scores have been uploaded to OSF; these are the data files used in this reproducibility analysis.

The corrigendum also lists corrected degrees of freedom.
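
The corrigendum does not spell out the index-score computation in full; a minimal sketch of one plausible reading, assuming the index is the points earned divided by the points available for a given lecture and that standardization is done across participants, would be:

# hypothetical helper, not the authors' code: convert raw scores to an index
# (proportion of available points) and standardize across participants
z_index <- function(raw_score, points_available) {
  index <- raw_score / points_available
  as.numeric(scale(index)) # (x - mean(x)) / sd(x)
}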


Target outcomes:

For this article you should focus on the findings reported for Study 1 in the section “Laptop versus longhand performance”. You should use the values reported in the corrigendum, not the main paper.

Specifically, you should attempt to reproduce all descriptive and inferential analyses reported in the text below and associated tables/figures:

Laptop versus longhand performance. Mixed fixed and random-effects analyses of variance were used to test differences, with note-taking medium (laptop vs. longhand) as a fixed effect and lecture (which talk was viewed) as a random effect. We converted the raw data to z scores because the lecture assessments varied in difficulty and number of points available; however, results did not differ when raw scores were analyzed. (from Mueller & Oppenheimer, 2014, p. 1161)

On factual-recall questions, participants performed equally well across conditions (laptop: M = −0.006, SD = 1.00; longhand: M = 0.05, SD = 1.01), F(1, 4.01) = 0.046, p = .841. However, on conceptual-application questions, laptop participants performed significantly worse (M = −0.178, SD = 0.900) than longhand participants (M = 0.162, SD = 1.07), F(1, 4.09) = 8.05, p = .046, ηp2 = .66 (see Fig. 1). Which lecture participants saw also affected performance on conceptual-application questions, F(4, 4) = 7.11, p = .042, ηp2 = .88; however, there was no significant interaction between lecture and note-taking medium, F(4, 55) = 0.259, p = .90. (from Mueller & Oppenheimer, corrigendum, 2018, p. 1-2)


Step 1: Load packages and prepare report object

# load packages
library(tidyverse) # for data munging
library(knitr) # for kable table formatting
library(haven) # import and export 'SPSS', 'Stata' and 'SAS' Files
library(readxl) # import excel files
library(ReproReports) # custom report functions

# Libraries needed for ANOVA tests.
library(lme4)
# Prepare report object. This will be updated automatically by the reproCheck function each time values are compared
reportObject <- data.frame(dummyRow = TRUE, reportedValue = NA, obtainedValue = NA, valueType = NA, percentageError = NA, comparisonOutcome = NA, eyeballCheck = NA)

Step 2: Load data

For this reproducibility analysis, we downloaded the new data file with corrected z-scores as referenced in the corrigendum.

data <- read.csv("data/Study 1 abbreviated data.csv")

Step 3: Tidy data

Select only the relevant variables.

data_tidy <- data %>%
  # rename("participant" = "ï..participant") %>% # only needed when the CSV's byte-order mark mangles the first column name
  select(participant, LapLong, whichtalk, ZFindexA:ZCrawW) %>%
  as_tibble()
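
As an aside, the commented-out rename above hints that the first column name can import as “ï..participant” when the CSV carries a UTF-8 byte-order mark. If that occurs, declaring the encoding at import time (untested here) makes the rename unnecessary:

# strip the UTF-8 byte-order mark from the header while reading
data <- read.csv("data/Study 1 abbreviated data.csv", fileEncoding = "UTF-8-BOM")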

Step 4: Run analysis

Preprocessing

One participant (participant 63) was removed from the data analysis.

data_tidy <- data_tidy %>%
  filter(participant!=63) %>%
  select(-participant)
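
As a quick sanity check (a sketch, assuming participant 63 appears exactly once in the file), we can confirm that exactly one row was dropped:

# stops with an error if more or fewer than one row was removed
stopifnot(nrow(data) - nrow(data_tidy) == 1)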

Descriptive statistics

From the corrigendum (emphasis added):

On factual-recall questions, participants performed equally well across conditions (laptop: M = −0.006, SD = 1.00; longhand: M = 0.05, SD = 1.01), F(1, 4.01) = 0.046, p = .841. However, on conceptual-application questions, laptop participants performed significantly worse (M = −0.178, SD = 0.900) than longhand participants (M = 0.162, SD = 1.07), F(1, 4.09) = 8.05, p = .046, ηp2 = .66 (see Fig. 1).

descriptives <- data_tidy %>%
  # gather all reported Z scores into one column
  gather(Z_type, score, -LapLong, -whichtalk) %>%
  # group summary statistics by condition and type of z statistic
  group_by(LapLong, Z_type) %>%
  # calculate mean and standard deviation, remove missing values
  summarize(mean = mean(score, na.rm = TRUE),
            sd = sd(score, na.rm = TRUE))

descriptives
## # A tibble: 16 x 4
## # Groups:   LapLong [2]
##    LapLong Z_type       mean    sd
##      <int> <chr>       <dbl> <dbl>
##  1       0 ZCindexA -0.178   0.900
##  2       0 ZCindexW -0.251   0.981
##  3       0 ZCrawA   -0.208   0.889
##  4       0 ZCrawW   -0.260   0.937
##  5       0 ZFindexA -0.00593 1.00 
##  6       0 ZFindexW -0.0565  0.939
##  7       0 ZFrawA   -0.175   0.864
##  8       0 ZFrawW   -0.119   0.983
##  9       1 ZCindexA  0.162   1.07 
## 10       1 ZCindexW  0.229   0.911
## 11       1 ZCrawA    0.189   1.07 
## 12       1 ZCrawW    0.237   0.948
## 13       1 ZFindexA  0.00541 1.01 
## 14       1 ZFindexW  0.0515  1.01 
## 15       1 ZFrawA    0.159   1.10 
## 16       1 ZFrawW    0.108   0.956

According to the explanation in the corrigendum, all reported results are based on the z-scores that used the index-scoring approach. Furthermore, the descriptives reported in this paragraph all pertain to the z-scores computed across the full sample (the variables with the “A” suffix) rather than within each lecture (“W” suffix). This means we should only look at the following scores:

descriptives_indexA <- 
  descriptives %>%
  filter(grepl("indexA", Z_type))
# to facilitate reproducibility checks, gather data
final_descriptives <- descriptives_indexA  %>%
  gather("statistic", "value", mean, sd) %>%
  # order descriptives according to order in the paragraph
  arrange(desc(Z_type))
  
# add reported values to the descriptives data frame
final_descriptives$reportedValue <-  c(-0.006, 0.05,
                                 1.00, 1.01,
                                 -0.178, 0.162,
                                 0.900, 1.07)

It seems that the mean in the longhand condition on factual-recall scores is off by a factor of ten: we obtain 0.005, whereas 0.05 is reported. Looking at the full list of means and standard deviations of the z-scores, it is conceivable that the authors accidentally copied the corresponding within-lecture z-score, ZFindexW (M = 0.05 in the longhand condition). This does not seem to affect the substantive conclusion, though.

# formal reproducibility analysis
for(i in 1:nrow(final_descriptives)){
  reportObject <- reproCheck(
    reportedValue = as.character(final_descriptives$reportedValue[i]),
    obtainedValue = final_descriptives$value[i],
    valueType = as.character(final_descriptives$statistic[i]))
}
## [1] "MATCH for mean. The reported value (-0.006) and the obtained value (-0.006) differed by 0%. Note that the obtained value was rounded to 3 decimal places to match the reported value."
## [1] "MAJOR_ERROR for mean. The reported value (0.05) and the obtained value (0.01) differed by 80%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## [1] "MATCH for sd. The reported value (1) and the obtained value (1) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
## [1] "MATCH for sd. The reported value (1.01) and the obtained value (1.01) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
## [1] "MATCH for mean. The reported value (-0.178) and the obtained value (-0.178) differed by 0%. Note that the obtained value was rounded to 3 decimal places to match the reported value."
## [1] "MATCH for mean. The reported value (0.162) and the obtained value (0.162) differed by 0%. Note that the obtained value was rounded to 3 decimal places to match the reported value."
## [1] "MATCH for sd. The reported value (0.9) and the obtained value (0.9) differed by 0%. Note that the obtained value was rounded to 1 decimal places to match the reported value."
## [1] "MATCH for sd. The reported value (1.07) and the obtained value (1.07) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."

Inferential statistics

1. Factual-recall questions by condition (laptop or longhand)

The first inferential test is the following (from the corrigendum):

On factual-recall questions, participants performed equally well across conditions (laptop: M = −0.006, SD = 1.00; longhand: M = 0.05, SD = 1.01), F(1, 4.01) = 0.046, p = .841.

The authors fit an ANOVA with a fixed and a random effect using the following SPSS syntax (all SPSS syntax was available on OSF: https://osf.io/28h7u/):

UNIANOVA ZFindexA BY LapLong whichtalk
  /RANDOM=whichtalk
  /METHOD=SSTYPE(3)
  /INTERCEPT=INCLUDE
  /PRINT ETASQ DESCRIPTIVE
  /CRITERIA=ALPHA(0.05)
  /DESIGN=LapLong whichtalk LapLong*whichtalk.

In R, this SPSS syntax would seem to translate to the following analysis:

model <- lmer(ZFindexA ~ LapLong * whichtalk + (1 | whichtalk),
              data = data_tidy)
# note: lme4's anova() method reports no denominator df or p-values, and the
# type argument is only honored by lmerTest's anova() method
result1 <- anova(model, type = "III")

result1
## Analysis of Variance Table
##                   npar  Sum Sq Mean Sq F value
## LapLong              1 0.03618 0.03618  0.0399
## whichtalk            1 0.39843 0.39843  0.4391
## LapLong:whichtalk    1 1.91106 1.91106  2.1061

However, this gives totally different results: because whichtalk is stored as an integer, it enters the fixed part as a continuous covariate (1 df rather than 4), and it is simultaneously used as the random-effects grouping factor. Since I am not sure how to specify the SPSS model exactly in R, I’ll just run the SPSS syntax the authors provided and perform eyeball checks to see if the numbers match.
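
One avenue that might come closer to the SPSS output, sketched here but not verified against it, is to treat lecture as a five-level factor, model it and the lecture-by-medium cells as random effects, and let lmerTest approximate the denominator degrees of freedom:

# unverified sketch: random intercepts for lecture and for the lecture x medium
# cells; lmerTest's Satterthwaite-approximated df are conceptually similar to
# the synthesized error terms SPSS uses for mixed fixed/random-effects ANOVAs
library(lmerTest) # masks lme4::lmer so that anova() reports df and p-values

model_alt <- lmer(ZFindexA ~ LapLong +
                    (1 | factor(whichtalk)) +
                    (1 | factor(LapLong):factor(whichtalk)),
                  data = data_tidy)
anova(model_alt, ddf = "Satterthwaite")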

The screenshot below shows the results of running the SPSS syntax the authors provided (see above):

Based on these results, all reported values seem to match.

# Check values for ANOVA
reportObject <- reproCheck(
    reportedValue = "1",
    obtainedValue = 1,
    valueType = "df")
## [1] "MATCH for df. The reported value (1) and the obtained value (1) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
reportObject <- reproCheck(
    reportedValue = "4.01",
    obtainedValue = 4.014,
    valueType = "df")
## [1] "MATCH for df. The reported value (4.01) and the obtained value (4.01) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
reportObject <- reproCheck(
    reportedValue = "0.046",
    obtainedValue = 0.046,
    valueType = "F")
## [1] "MATCH for F. The reported value (0.046) and the obtained value (0.046) differed by 0%. Note that the obtained value was rounded to 3 decimal places to match the reported value."
reportObject <- reproCheck(
    reportedValue = ".841",
    obtainedValue = .841,
    valueType = "p")
## [1] "MATCH for p. The reported value (0.841) and the obtained value (0.841) differed by 0%. Note that the obtained value was rounded to 3 decimal places to match the reported value."

2. Conceptual-application questions by condition

However, on conceptual-application questions, laptop participants performed significantly worse (M = −0.178, SD = 0.900) than longhand participants (M = 0.162, SD = 1.07), F(1, 4.09) = 8.05, p = .046, ηp2 = .66 (see Fig. 1).

Again, to check the reproducibility of these numbers, I ran the SPSS syntax the authors provided for this specific analysis:

UNIANOVA ZCindexA BY LapLong whichtalk
  /RANDOM=whichtalk
  /METHOD=SSTYPE(3)
  /INTERCEPT=INCLUDE
  /PRINT ETASQ DESCRIPTIVE
  /CRITERIA=ALPHA(0.05)
  /DESIGN=LapLong whichtalk LapLong*whichtalk.

The screenshot below shows the results:

Based on these results, all reported values seem to match.

# Check values for ANOVA
reportObject <- reproCheck(
    reportedValue = "1",
    obtainedValue = 1,
    valueType = "df")
## [1] "MATCH for df. The reported value (1) and the obtained value (1) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
reportObject <- reproCheck(
    reportedValue = "4.09",
    obtainedValue = 4.089,
    valueType = "df")
## [1] "MATCH for df. The reported value (4.09) and the obtained value (4.09) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
reportObject <- reproCheck(
    reportedValue = "8.05",
    obtainedValue = 8.047,
    valueType = "F")
## [1] "MATCH for F. The reported value (8.05) and the obtained value (8.05) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
reportObject <- reproCheck(
    reportedValue = ".046",
    obtainedValue = .046,
    valueType = "p")
## [1] "MATCH for p. The reported value (0.046) and the obtained value (0.046) differed by 0%. Note that the obtained value was rounded to 3 decimal places to match the reported value."
reportObject <- reproCheck(
    reportedValue = ".66",
    obtainedValue = .663,
    valueType = "other")
## [1] "MATCH for other. The reported value (0.66) and the obtained value (0.66) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."

3. Conceptual-application affected by lecture

Which lecture participants saw also affected performance on conceptual-application questions, F(4, 4) = 7.11, p = .042, ηp2 = .88;

The SPSS syntax the authors provided for the analysis and its output were the same as in analysis 2. All results seem to match.

# Check values for ANOVA
reportObject <- reproCheck(
    reportedValue = "4",
    obtainedValue = 4,
    valueType = "df")
## [1] "MATCH for df. The reported value (4) and the obtained value (4) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
reportObject <- reproCheck(
    reportedValue = "4",
    obtainedValue = 4,
    valueType = "df")
## [1] "MATCH for df. The reported value (4) and the obtained value (4) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
reportObject <- reproCheck(
    reportedValue = "7.11",
    obtainedValue = 7.113,
    valueType = "F")
## [1] "MATCH for F. The reported value (7.11) and the obtained value (7.11) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."
reportObject <- reproCheck(
    reportedValue = ".042",
    obtainedValue = .042,
    valueType = "p")
## [1] "MATCH for p. The reported value (0.042) and the obtained value (0.042) differed by 0%. Note that the obtained value was rounded to 3 decimal places to match the reported value."
reportObject <- reproCheck(
    reportedValue = ".88",
    obtainedValue = .877,
    valueType = "other")
## [1] "MATCH for other. The reported value (0.88) and the obtained value (0.88) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."

4. Interaction between lecture and note-taking medium

however, there was no significant interaction between lecture and note-taking medium, F(4, 55) = 0.259, p = .90.

Again, the same SPSS syntax and accompanying results are relevant for this analysis. All numbers seem to match the output.

# Check values for ANOVA
reportObject <- reproCheck(
    reportedValue = "4",
    obtainedValue = 4,
    valueType = "df")
## [1] "MATCH for df. The reported value (4) and the obtained value (4) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
reportObject <- reproCheck(
    reportedValue = "55",
    obtainedValue = 55,
    valueType = "df")
## [1] "MATCH for df. The reported value (55) and the obtained value (55) differed by 0%. Note that the obtained value was rounded to 0 decimal places to match the reported value."
reportObject <- reproCheck(
    reportedValue = "0.259",
    obtainedValue = .259,
    valueType = "F")
## [1] "MATCH for F. The reported value (0.259) and the obtained value (0.259) differed by 0%. Note that the obtained value was rounded to 3 decimal places to match the reported value."
reportObject <- reproCheck(
    reportedValue = ".90",
    obtainedValue = .903,
    valueType = "p")
## [1] "MATCH for p. The reported value (0.9) and the obtained value (0.9) differed by 0%. Note that the obtained value was rounded to 2 decimal places to match the reported value."

Step 5: Conclusion

In this reproducibility analysis, I was not able to reproduce the original analyses in R. However, the authors provided all SPSS syntax for the analyses, so I ran all relevant analyses in SPSS. The descriptives were calculated in R.

All inferential statistics seem to reproduce in SPSS. Only in the descriptives did I find a mismatch: a reported mean of .05, which according to our analysis should be .005. The value also appears to be 0.005 in Figure 1 of the corrigendum. I consider it likely that the authors either made a typo or copied the wrong number from a table with all descriptives (another mean was in fact .05). This error does not seem to affect the conclusions.

Update: we contacted one of the authors and they confirmed that it is a typo. 0.005 is the correct value.

Author_Assistance <- TRUE # was author assistance provided? (if so, enter TRUE)

Insufficient_Information_Errors <- 0 # how many discrete insufficient information issues did you encounter?

# Assess the causal locus (discrete reproducibility issues) of any reproducibility errors. Note that there doesn't necessarily have to be a one-to-one correspondence between discrete reproducibility issues and reproducibility errors. For example, it could be that the original article neglects to mention that a Greenhouse-Geisser correction was applied to ANOVA outcomes. This might result in multiple reproducibility errors, but there is a single causal locus (discrete reproducibility issue).

locus_typo <- 1 # how many discrete issues did you encounter that related to typographical errors?
locus_specification <- 0 # how many discrete issues did you encounter that related to incomplete, incorrect, or unclear specification of the original analyses?
locus_analysis <- 0 # how many discrete issues did you encounter that related to errors in the authors' original analyses?
locus_data <- 0 # how many discrete issues did you encounter that related to errors in the data files shared by the authors?
locus_unidentified <- 0 # how many discrete issues were there for which you could not identify the cause

# How many of the above issues were resolved through author assistance?
locus_typo_resolved <- 0 # how many discrete issues did you encounter that related to typographical errors?
locus_specification_resolved <- 0 # how many discrete issues did you encounter that related to incomplete, incorrect, or unclear specification of the original analyses?
locus_analysis_resolved <- 0 # how many discrete issues did you encounter that related to errors in the authors' original analyses?
locus_data_resolved <- 0 # how many discrete issues did you encounter that related to errors in the data files shared by the authors?
locus_unidentified_resolved <- 0 # how many discrete issues were there for which you could not identify the cause

Affects_Conclusion <- FALSE # Do any of the reproducibility issues encountered appear to affect the conclusions made in the original article? TRUE, FALSE, or NA. This is a subjective judgement, but you should take into account multiple factors, such as the presence/absence of decision errors, the number of target outcomes that could not be reproduced, the type of outcomes that could or could not be reproduced, the difference in magnitude of effect sizes, and the predictions of the specific hypothesis under scrutiny.
reportObject <- reportObject %>%
  filter(dummyRow == FALSE) %>% # remove the dummy row
  select(-dummyRow) %>% # remove dummy row designation
  mutate(articleID = articleID) %>% # add variables to report 
  select(articleID, everything()) # make articleID first column

# decide on final outcome
if(any(!(reportObject$comparisonOutcome %in% c("MATCH", "MINOR_ERROR"))) | Insufficient_Information_Errors > 0){
  finalOutcome <- "Failure without author assistance"
  if(Author_Assistance == T){
    finalOutcome <- "Failure despite author assistance"
  }
}else{
  finalOutcome <- "Success without author assistance"
  if(Author_Assistance == T){
    finalOutcome <- "Success with author assistance"
  }
}

# collate report extra details
reportExtras <- data.frame(articleID, pilotNames, copilotNames, pilotTTC, copilotTTC, pilotStartDate, copilotStartDate, completionDate, Author_Assistance, finalOutcome, Insufficient_Information_Errors, locus_typo, locus_specification, locus_analysis, locus_data, locus_unidentified, locus_typo_resolved, locus_specification_resolved, locus_analysis_resolved, locus_data_resolved, locus_unidentified_resolved)

# save report objects
if(reportType == "pilot"){
  write_csv(reportObject, "pilotReportDetailed.csv")
  write_csv(reportExtras, "pilotReportExtras.csv")
}

if(reportType == "final"){
  write_csv(reportObject, "finalReportDetailed.csv")
  write_csv(reportExtras, "finalReportExtras.csv")
}

Session information

devtools::session_info()
## ─ Session info ───────────────────────────────────────────────────────────────
##  setting  value                       
##  version  R version 4.0.0 (2020-04-24)
##  os       macOS Catalina 10.15.4      
##  system   x86_64, darwin17.0          
##  ui       X11                         
##  language (EN)                        
##  collate  en_US.UTF-8                 
##  ctype    en_US.UTF-8                 
##  tz       Europe/London               
##  date     2020-05-11                  
## 
## ─ Packages ───────────────────────────────────────────────────────────────────
##  package      * version  date       lib source
##  assertthat     0.2.1    2019-03-21 [1] CRAN (R 4.0.0)
##  backports      1.1.6    2020-04-05 [1] CRAN (R 4.0.0)
##  boot           1.3-24   2019-12-20 [1] CRAN (R 4.0.0)
##  broom          0.5.6    2020-04-20 [1] CRAN (R 4.0.0)
##  callr          3.4.3    2020-03-28 [1] CRAN (R 4.0.0)
##  cellranger     1.1.0    2016-07-27 [1] CRAN (R 4.0.0)
##  cli            2.0.2    2020-02-28 [1] CRAN (R 4.0.0)
##  colorspace     1.4-1    2019-03-18 [1] CRAN (R 4.0.0)
##  crayon         1.3.4    2017-09-16 [1] CRAN (R 4.0.0)
##  DBI            1.1.0    2019-12-15 [1] CRAN (R 4.0.0)
##  dbplyr         1.4.3    2020-04-19 [1] CRAN (R 4.0.0)
##  desc           1.2.0    2018-05-01 [1] CRAN (R 4.0.0)
##  devtools       2.3.0    2020-04-10 [1] CRAN (R 4.0.0)
##  digest         0.6.25   2020-02-23 [1] CRAN (R 4.0.0)
##  dplyr        * 0.8.5    2020-03-07 [1] CRAN (R 4.0.0)
##  ellipsis       0.3.0    2019-09-20 [1] CRAN (R 4.0.0)
##  evaluate       0.14     2019-05-28 [1] CRAN (R 4.0.0)
##  fansi          0.4.1    2020-01-08 [1] CRAN (R 4.0.0)
##  forcats      * 0.5.0    2020-03-01 [1] CRAN (R 4.0.0)
##  fs             1.4.1    2020-04-04 [1] CRAN (R 4.0.0)
##  generics       0.0.2    2018-11-29 [1] CRAN (R 4.0.0)
##  ggplot2      * 3.3.0    2020-03-05 [1] CRAN (R 4.0.0)
##  glue           1.4.0    2020-04-03 [1] CRAN (R 4.0.0)
##  gtable         0.3.0    2019-03-25 [1] CRAN (R 4.0.0)
##  haven        * 2.2.0    2019-11-08 [1] CRAN (R 4.0.0)
##  hms            0.5.3    2020-01-08 [1] CRAN (R 4.0.0)
##  htmltools      0.4.0    2019-10-04 [1] CRAN (R 4.0.0)
##  httr           1.4.1    2019-08-05 [1] CRAN (R 4.0.0)
##  jsonlite       1.6.1    2020-02-02 [1] CRAN (R 4.0.0)
##  knitr        * 1.28     2020-02-06 [1] CRAN (R 4.0.0)
##  lattice        0.20-41  2020-04-02 [1] CRAN (R 4.0.0)
##  lifecycle      0.2.0    2020-03-06 [1] CRAN (R 4.0.0)
##  lme4         * 1.1-23   2020-04-07 [1] CRAN (R 4.0.0)
##  lubridate      1.7.8    2020-04-06 [1] CRAN (R 4.0.0)
##  magrittr       1.5      2014-11-22 [1] CRAN (R 4.0.0)
##  MASS           7.3-51.5 2019-12-20 [1] CRAN (R 4.0.0)
##  Matrix       * 1.2-18   2019-11-27 [1] CRAN (R 4.0.0)
##  memoise        1.1.0    2017-04-21 [1] CRAN (R 4.0.0)
##  minqa          1.2.4    2014-10-09 [1] CRAN (R 4.0.0)
##  modelr         0.1.7    2020-04-30 [1] CRAN (R 4.0.0)
##  munsell        0.5.0    2018-06-12 [1] CRAN (R 4.0.0)
##  nlme           3.1-147  2020-04-13 [1] CRAN (R 4.0.0)
##  nloptr         1.2.2.1  2020-03-11 [1] CRAN (R 4.0.0)
##  pillar         1.4.4    2020-05-05 [1] CRAN (R 4.0.0)
##  pkgbuild       1.0.7    2020-04-25 [1] CRAN (R 4.0.0)
##  pkgconfig      2.0.3    2019-09-22 [1] CRAN (R 4.0.0)
##  pkgload        1.0.2    2018-10-29 [1] CRAN (R 4.0.0)
##  prettyunits    1.1.1    2020-01-24 [1] CRAN (R 4.0.0)
##  processx       3.4.2    2020-02-09 [1] CRAN (R 4.0.0)
##  ps             1.3.2    2020-02-13 [1] CRAN (R 4.0.0)
##  purrr        * 0.3.4    2020-04-17 [1] CRAN (R 4.0.0)
##  R6             2.4.1    2019-11-12 [1] CRAN (R 4.0.0)
##  Rcpp           1.0.4.6  2020-04-09 [1] CRAN (R 4.0.0)
##  readr        * 1.3.1    2018-12-21 [1] CRAN (R 4.0.0)
##  readxl       * 1.3.1    2019-03-13 [1] CRAN (R 4.0.0)
##  remotes        2.1.1    2020-02-15 [1] CRAN (R 4.0.0)
##  reprex         0.3.0    2019-05-16 [1] CRAN (R 4.0.0)
##  ReproReports * 0.1      2020-05-06 [1] Github (METRICS-CARPS/CARPSreports@3277f85)
##  rlang          0.4.6    2020-05-02 [1] CRAN (R 4.0.0)
##  rmarkdown      2.1      2020-01-20 [1] CRAN (R 4.0.0)
##  rprojroot      1.3-2    2018-01-03 [1] CRAN (R 4.0.0)
##  rstudioapi     0.11     2020-02-07 [1] CRAN (R 4.0.0)
##  rvest          0.3.5    2019-11-08 [1] CRAN (R 4.0.0)
##  scales         1.1.0    2019-11-18 [1] CRAN (R 4.0.0)
##  sessioninfo    1.1.1    2018-11-05 [1] CRAN (R 4.0.0)
##  statmod        1.4.34   2020-02-17 [1] CRAN (R 4.0.0)
##  stringi        1.4.6    2020-02-17 [1] CRAN (R 4.0.0)
##  stringr      * 1.4.0    2019-02-10 [1] CRAN (R 4.0.0)
##  testthat       2.3.2    2020-03-02 [1] CRAN (R 4.0.0)
##  tibble       * 3.0.1    2020-04-20 [1] CRAN (R 4.0.0)
##  tidyr        * 1.0.2    2020-01-24 [1] CRAN (R 4.0.0)
##  tidyselect     1.0.0    2020-01-27 [1] CRAN (R 4.0.0)
##  tidyverse    * 1.3.0    2019-11-21 [1] CRAN (R 4.0.0)
##  usethis        1.6.1    2020-04-29 [1] CRAN (R 4.0.0)
##  utf8           1.1.4    2018-05-24 [1] CRAN (R 4.0.0)
##  vctrs          0.2.4    2020-03-10 [1] CRAN (R 4.0.0)
##  withr          2.2.0    2020-04-20 [1] CRAN (R 4.0.0)
##  xfun           0.13     2020-04-13 [1] CRAN (R 4.0.0)
##  xml2           1.3.2    2020-04-23 [1] CRAN (R 4.0.0)
##  yaml           2.2.1    2020-02-01 [1] CRAN (R 4.0.0)
## 
## [1] /Library/Frameworks/R.framework/Versions/4.0/Resources/library