This script examines whether the vocabulary and grammar sections of the CDI can be distinguished as separate dimensions. It uses a subset of the data in which no original ID appears more than once (duplicate administrations are resolved below by keeping only the oldest one). For the analysis reported in the manuscript, see here.

I first run the analysis once in detail to show the various steps, and then write a function that repeats the analysis 100 times. Results of the latter are reported in the paper.

1 Prep Data

Load the libraries.

options(max.print=500)
library(wordbankr) # WB data
library(GGally) # pairs plots (ggpairs)
library(patchwork) # combining plots
library(tidyverse) # tidy
library(mirt) # IRT models
library(psych) # some psychometric stuff (tests of dimensionality)
library(Gifi)# some more psychometric stuff (tests of dimensionality)
library(knitr) # some formatting, tables, etc
library(lordif) # differential item functioning.
library(sirt) # additional IRT functions

Open the data (saved locally because Wordbank will be updated sometime soon).

Inst <- read_rds("Wordbank_Data/Inst_WS_Eng.rds")
Admin <- read_rds("Wordbank_Data/Admin_WS_Eng.rds")
N_total = nrow(Admin) # making sure things add up later
N_long = nrow(filter(Admin, longitudinal==TRUE)) # making sure things add up later
Item <- read_rds("Wordbank_Data/Item_WS_Eng.rds")

Some participants were included in multiple data sets but were not tagged as longitudinal. We can see this below by counting the number of observations with the same participant ID.

Quick_Test_Admin <- Admin %>%
  filter(longitudinal==FALSE) %>% 
  group_by(original_id) %>% # group by original id
  arrange(age) %>%
  mutate(
    D = 1,
    total_N = sum(D) # Total number of trials per participant
  ) %>%
  ungroup()

ggplot(Quick_Test_Admin, aes(x=total_N)) + geom_histogram()
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
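To quantify this, the histogram can be supplemented with a quick tabulation of how many participants have one, two, or more administrations (an optional check using the Quick_Test_Admin tibble defined above):

# Count participants by their number of administrations (one row per participant)
Quick_Test_Admin %>%
  distinct(original_id, total_N) %>%
  count(total_N, name = "n_participants")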

Make a new admin dataset that contains only the oldest observation from each participant.

Admin2 <- Admin %>%
  filter(longitudinal==FALSE) %>%
  group_by(original_id) %>%
  arrange(age) %>%
   mutate(
    D = 1,
    trial_num = row_number(), # session number for a given participant
    total_N = sum(D), 
    max_trial = ifelse(total_N == trial_num, yes=1, no=0) # indicate whether a given session is the last one for a participant
    ) %>%
  ungroup(original_id) %>%
  filter(max_trial==1) # select only observations from last trial

N_dropped <- Admin %>% # get number of dropped observations for sanity checks below. 
  filter(longitudinal==FALSE) %>%
  group_by(original_id) %>%
  arrange(age) %>%
   mutate(
    D = 1,
    trial_num = row_number(), 
    total_N = sum(D), 
    max_trial = ifelse(total_N == trial_num, yes=1, no=0)
    ) %>%
  ungroup(original_id) %>%
  filter(max_trial==0) %>%
  nrow()
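As an extra sanity check, the retained, dropped, and longitudinal observations should account for every administration:

# Sanity check: last-trial observations + dropped earlier trials + longitudinal observations = total
nrow(Admin2) + N_dropped + N_long == N_total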

Make one data set of just complexity items for processing.

Complex <- Admin2 %>%
  full_join(.,Inst, by="data_id") %>% # join admin, inst and item
  full_join(., Item, by="num_item_id") %>%  # join admin, inst and item
  filter(longitudinal==FALSE) %>% # remove longitudinal observations
  filter(type == "combine" | # select relevant observations (complexity, plus combine [which is used to drop non-combiners])
           type == "complexity" 
         ) %>%
  mutate(
    out = ifelse(value=="complex" | value=="sometimes" | value=="produces", yes=1, 
                 no = ifelse(value=="often", yes=2, no =0)) # recode as numeric
  ) 

N_complexity_items = nrow(filter(Item, type == "combine" | 
           type == "complexity"))


# Sanity Check
nrow(Complex) == (N_total - N_long - N_dropped)*N_complexity_items # check that everything adds up
## [1] TRUE

The complexity category variable, coded by WB, is either morphology or syntax for each item on the complexity scale. This will be useful for labeling items later (e.g., a label like morphology761 instead of the full definition). I also fill in “combine” for the combine item so that it shows up in the complexity category variable as well.

Complex$complexity_category <- ifelse(Complex$complexity_category == "", yes=Complex$type, no=Complex$complexity_category)
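A quick way to confirm the recoding is to list the distinct values of the category variable, which should now be combine, morphology, and syntax:

# The category variable should now contain combine, morphology, and syntax
unique(Complex$complexity_category)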

IRT models require wide-format data, so I reshape the data here.

Complex_short_with_ids_all <- Complex %>% 
  dplyr::select(data_id, value, out, complexity_category, num_item_id) %>% # select items
  mutate(
    label = str_c(complexity_category, num_item_id) # make column names for each item
  ) %>%
  pivot_wider(id_cols=data_id, names_from = "label", values_from="out") %>% # pivot wider
  dplyr::select(starts_with(c("data_id", "combine", "morphology", "syntax")))  # select relevant items

Complex_short_with_ids <- Complex_short_with_ids_all %>%  # drop participants missing complexity
drop_na()

Complex_short_grammatical <- Complex_short_with_ids %>%
  filter(combine760 > 0) %>%
  dplyr::select(starts_with(c("morphology", "syntax"))) # need to get rid of participant names for IRT models. 



# Sanity Check
N_NA <- nrow(Complex_short_with_ids_all) - nrow(Complex_short_with_ids %>% # get total number for NAs for 
  dplyr::select(starts_with(c("data_id", "combine", "morphology", "syntax"))) %>%
  drop_na())
N_nog <- nrow(filter(Complex_short_with_ids, combine760 == 0)) # get total number of non-grammatical
nrow(Complex_short_grammatical) == N_total - N_long - N_NA - N_nog - N_dropped # do rows add up?
## [1] TRUE
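The WS complexity section contains 37 items (12 morphology, 25 syntax), so the grammatical data frame used for the IRT models should have exactly 37 columns; a quick check:

# 12 morphology + 25 syntax = 37 complexity items
ncol(Complex_short_grammatical) == 37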

2 Example Analysis

In order to illustrate the analysis, both for myself and for readers, I’ve run it once below. In this section, I create a subset of vocabulary items (constrained so that they are sampled proportionally to the number of items in each subsection of the CDI), combine this with the grammar items, and run the analysis on that dataset. In the following section, this analysis is repeated 100 times.

2.1 Make Vocab Dataset

Make full vocabulary data set.

Vocab <- Admin2 %>%
  full_join(.,Inst, by="data_id") %>% # combine admin, item and intsr
  full_join(., Item, by="num_item_id") %>% # combine admin, item and intsr
  filter(longitudinal==FALSE) %>% # remove longitudinal data set
  filter(type == "word"
         ) %>%
  mutate(
    out = ifelse(value=="produces", yes=1, no =0) # recode
    )

# Sanity check
N_vocab = nrow(filter(Item, type == "word")) 
nrow(Vocab) == (N_total - N_long - N_dropped)*N_vocab # everything add up?
## [1] TRUE

Sample 37 items from the vocabulary data set.

set.seed(245)

f1 <- 43/nrow(filter(Item, lexical_category == "nouns" | lexical_category == "predicates")) # proportion sampled from each category (43/478; per-category rounding yields 37 items)

# Make list
DF <- Item %>%
  filter(lexical_category == "nouns" | lexical_category == "predicates") %>% # Select open-class items
  group_by(category) %>% # group by category
  slice_sample(prop= f1, replace=FALSE) %>% # select fixed percentage from each category. 
  ungroup() # 37 vocab items

Check that the sampled item list is similar to the full item set in terms of category composition.

Item %>%
  filter(lexical_category == "nouns" | lexical_category == "predicates") %>%
  group_by(category) %>%
  count() %>%
  ungroup() %>%
  mutate(
    Prop = n/478
  )
DF %>%
  group_by(category) %>%
  count() %>%
  ungroup() %>%
  mutate(
    Prop = n/37
  )
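To make this comparison easier to read, the two proportion tables can also be joined on category; a convenience sketch (the column names Prop_full and Prop_sample are my own):

# Side-by-side category proportions: full open-class item set vs. the 37-item sample
full_props <- Item %>%
  filter(lexical_category == "nouns" | lexical_category == "predicates") %>%
  count(category) %>%
  mutate(Prop_full = n / sum(n))
sample_props <- DF %>%
  count(category) %>%
  mutate(Prop_sample = n / sum(n))
full_join(full_props, sample_props, by = "category", suffix = c("_full", "_sample"))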

Select the subset of vocabulary items defined in the list above, and check the correlation between the total score on the selected items and the total score on the non-selected items.

total_37 <- Vocab %>%
  filter(item_id %in% DF$item_id) %>%
  group_by(data_id) %>%
  mutate(
    total_37=sum(out)
  ) %>%
  slice(1) %>%
  ungroup() %>%
  dplyr::select("data_id", "total_37")

total_n37 <- Vocab %>%
  filter(lexical_category == "predicates" | lexical_category=="nouns") %>%
  filter(!item_id %in% DF$item_id) %>%
  group_by(data_id) %>%
  mutate(
    total_others=sum(out)
  ) %>%
  slice(1) %>%
  ungroup() %>%
  dplyr::select("data_id", "total_others")

corr = full_join(total_37, total_n37, by="data_id") %>%
  drop_na()

cor(corr$total_37, corr$total_others) # .99
## [1] 0.9859563
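The same relation can be inspected visually with a scatterplot of the two totals (optional):

# Selected-item total vs. non-selected-item total for each participant
ggplot(corr, aes(x = total_37, y = total_others)) +
  geom_point(alpha = 0.3) +
  labs(x = "Total (37 sampled items)", y = "Total (remaining open-class items)") +
  theme_minimal()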

Reshape the data to wide format for IRT.

Vocab_short_with_ids_37_full <- Vocab %>% 
  filter(lexical_category == "nouns" | lexical_category == "predicates") %>% # select open-class items
  filter(item_id %in% DF$item_id) %>% # select items in subset
  dplyr::select(data_id, value, out, definition) %>% # select relevant variables
  pivot_wider(id_cols=data_id, names_from = "definition", values_from="out") 


Vocab_short_with_ids_37 <- Vocab %>% 
  filter(lexical_category == "nouns" | lexical_category == "predicates") %>%
  filter(item_id %in% DF$item_id) %>%
  dplyr::select(data_id, value, out, definition) %>%
  pivot_wider(id_cols=data_id, names_from = "definition", values_from="out") %>%
  drop_na()
Vocab_short <- Vocab_short_with_ids_37 %>%
  dplyr::select(-"data_id") # dataset for IRT can't have IDs



# Sanity Check
N_NA_Vocab = nrow(Vocab_short_with_ids_37_full) - nrow(Vocab_short_with_ids_37) # 26 participants with missing data 
nrow(Vocab_short_with_ids_37) == N_total - N_long - N_NA_Vocab - N_dropped # Looks good
## [1] TRUE

2.2 Combine

Combine the vocabulary and complexity data sets.

full <- full_join(
  Complex_short_with_ids, Vocab_short_with_ids_37, by="data_id"
) %>% # combine datasets
  filter(combine760 > 0) %>% # drop non-combiners (mostly vocab)
  dplyr::select(-c("data_id", "combine760")) %>% 
  drop_na() # one participant with missing vocab, but full grammar. 
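A quick check on the dimensions (the combined data frame should contain the 37 grammar items plus the 37 sampled vocabulary items):

# 37 grammar items + 37 vocabulary items
ncol(full) == 74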

Table of the number of participants producing each item (i.e., scoring above 0), for both the grammar and the sampled vocabulary items.

full %>%
  pivot_longer(everything(), names_to="item", values_to="out")  %>%
  mutate(
    type = ifelse(str_detect(item, "morphology"), yes="morphology", no = 
                   ifelse(str_detect(item, "syntax"), yes="syntax", no = "word"))
  ) %>%
  group_by(type, item) %>%
  summarise(
    sum = sum(out)
  ) %>%
  ungroup(item) %>%
  kable()
## `summarise()` has grouped output by 'type'. You can override using the `.groups`
## argument.
type item sum
morphology morphology761 1100
morphology morphology762 1053
morphology morphology763 1106
morphology morphology764 1062
morphology morphology765 614
morphology morphology766 474
morphology morphology767 941
morphology morphology768 1125
morphology morphology769 965
morphology morphology770 490
morphology morphology771 460
morphology morphology772 344
syntax syntax773 894
syntax syntax774 714
syntax syntax775 423
syntax syntax776 275
syntax syntax777 430
syntax syntax778 553
syntax syntax779 530
syntax syntax780 816
syntax syntax781 685
syntax syntax782 472
syntax syntax783 500
syntax syntax784 650
syntax syntax785 466
syntax syntax786 474
syntax syntax787 264
syntax syntax788 302
syntax syntax789 528
syntax syntax790 629
syntax syntax791 678
syntax syntax792 660
syntax syntax793 621
syntax syntax794 292
syntax syntax795 416
syntax syntax796 391
syntax syntax797 423
word airplane 1563
word applesauce 942
word asleep 1027
word bench 352
word black 777
word bug 1397
word bump 910
word buttocks/bottom* 1358
word chocolate 953
word drink (action) 1394
word fork 1357
word garden 547
word hamburger 1126
word heavy 1000
word hit 1089
word hurt 1059
word ladder 717
word make 765
word mop 572
word nuts 732
word paper 1314
word pen 1198
word play 1267
word pull 848
word refrigerator 982
word sauce 622
word spaghetti 1123
word stay 802
word tickle 1167
word tights 402
word tongue 1137
word turkey 827
word vacuum 1063
word wait 797
word white 678
word zebra 899
word zipper 1026

Fit the mirt models with three latent-density specifications: 2PL (Gaussian), spline, and empirical histogram (EH). These take a little while to run and produce a lot of output, so I have saved the model outputs and load them here. Take a look at the original R Markdown files to run the models.

#m1 <- mirt(full, 1, "2PL")
#m2 <- mirt(full, 1, "spline", technical=list(NCYCLES=1000)) # doesn't improve at 2000  iterations and it's close. 
#m3 <- mirt(full, 1, "2PL", dentype = "EH")
#saveRDS(m2, "IRT_output/m2r.rds")
#saveRDS(m3, "IRT_output/m3r.rds")
#saveRDS(m1, "IRT_output/m1r.rds")
m1 <-  readRDS("IRT_output/m1r.rds")
m2 <-  readRDS("IRT_output/m2r.rds")
m3 <-  readRDS("IRT_output/m3r.rds")
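To compare the three latent-density specifications on fit, the information criteria stored in the fitted objects can be pulled out with mirt's extract.mirt(); this is just a convenience sketch and is not reported in the paper:

# Information criteria for the three fitted models
tibble(
  model = c("2PL (Gaussian)", "Spline", "Empirical histogram"),
  AIC = c(extract.mirt(m1, "AIC"), extract.mirt(m2, "AIC"), extract.mirt(m3, "AIC")),
  BIC = c(extract.mirt(m1, "BIC"), extract.mirt(m2, "BIC"), extract.mirt(m3, "BIC"))
)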

Make factor scores for the three models. These will be used in DETECT below.

fscores <- fscores(m1, use_dentype_estimate=TRUE)[,1]
fscores_spline <- fscores(m2, use_dentype_estimate=TRUE)[,1]
fscores_EH <- fscores(m3, use_dentype_estimate=TRUE)[,1]
tibble(TwoPL = fscores, Spline = fscores_spline, EH = fscores_EH) %>%
  ggpairs()

Make data frame with total vocab and complexity scores to compare raw scores to latent variables.

vocab_and_grammar <- Complex %>% # data frame defined above
  filter(type=="complexity") %>% # select complexity items only
  group_by(data_id) %>% # sort by participant
  mutate(
    total = sum(out) # make total score for complexity
  ) %>%
  slice(1) %>% # select 1 row per participant
  ungroup() %>% 
  dplyr::select("data_id", "production", "total") # this already contains production score from Admin dataset

Plot the latent variables against the total vocabulary and grammar scores.

vglv <- Complex_short_with_ids %>%
  filter(combine760 > 0) %>% 
  bind_cols(., TwoPL = fscores, Spline = fscores_spline, EH = fscores_EH) %>% # combine raw data with latent variables
  dplyr::select("data_id", "TwoPL", "Spline", "EH") %>% # select 
  left_join(vocab_and_grammar, by="data_id") %>% # add total scores.
  pivot_longer(-c("data_id", "production", "total"), names_to="model", values_to="LV") #

vocab<-ggplot(vglv, aes(x=production, y=LV)) + geom_point() + stat_smooth(method="gam") + labs(x = "Vocabulary") + facet_wrap(~factor(model, levels=c('TwoPL','EH','Spline'))) + theme_minimal()
grammar<-ggplot(vglv, aes(x=total, y=LV)) + geom_point() + stat_smooth(method="gam")  + labs(x = "Grammar") + facet_wrap(~factor(model, levels=c('TwoPL','EH','Spline'))) + theme_minimal() 

vocab/grammar

ggsave(filename="plots/wordbank/LV_plot_xsec.png", last_plot(), dpi=500)  

2.3 Confirmatory DETECT

full2 <- data.frame(full) # Make data.frame
dtct <- c(rep(1, 37), rep(2, 37)) # Specify clusters: first 37 columns are grammar items, next 37 are vocabulary items

conf <- conf.detect(full2, fscores, dtct)
## -----------------------------------------------------------
## Confirmatory DETECT Analysis 
## Conditioning on 1 Score
## Bandwidth Scale: 1.1 
## Pairwise Estimation of Conditional Covariances
## ...........................................................
## Nonparametric ICC estimation 
##  5% 10% 15% 20% 25% 30% 35% 40% 45% 50% 
## 55% 60% 65% 70% 75% 80% 85% 90% 95% 
## ...........................................................
## Nonparametric Estimation of conditional covariances 
##  5% 10% 15% 20% 25% 30% 35% 40% 45% 50% 
## 55% 60% 65% 70% 75% 80% 85% 90% 95% 
## -----------------------------------------------------------
##           unweighted weighted
## DETECT         0.569    0.569
## ASSI           0.759    0.759
## RATIO          0.917    0.917
## MADCOV100      0.621    0.621
## MCOV100       -0.027   -0.027
conf_spline <- conf.detect(full2, fscores_spline, dtct)
## -----------------------------------------------------------
## Confirmatory DETECT Analysis 
## Conditioning on 1 Score
## Bandwidth Scale: 1.1 
## Pairwise Estimation of Conditional Covariances
## ...........................................................
## Nonparametric ICC estimation 
##  5% 10% 15% 20% 25% 30% 35% 40% 45% 50% 
## 55% 60% 65% 70% 75% 80% 85% 90% 95% 
## ...........................................................
## Nonparametric Estimation of conditional covariances 
##  5% 10% 15% 20% 25% 30% 35% 40% 45% 50% 
## 55% 60% 65% 70% 75% 80% 85% 90% 95% 
## -----------------------------------------------------------
##           unweighted weighted
## DETECT         0.512    0.512
## ASSI           0.709    0.709
## RATIO          0.893    0.893
## MADCOV100      0.573    0.573
## MCOV100       -0.018   -0.018
conf_EH <- conf.detect(full2, fscores_EH, dtct)
## -----------------------------------------------------------
## Confirmatory DETECT Analysis 
## Conditioning on 1 Score
## Bandwidth Scale: 1.1 
## Pairwise Estimation of Conditional Covariances
## ...........................................................
## Nonparametric ICC estimation 
##  5% 10% 15% 20% 25% 30% 35% 40% 45% 50% 
## 55% 60% 65% 70% 75% 80% 85% 90% 95% 
## ...........................................................
## Nonparametric Estimation of conditional covariances 
##  5% 10% 15% 20% 25% 30% 35% 40% 45% 50% 
## 55% 60% 65% 70% 75% 80% 85% 90% 95% 
## -----------------------------------------------------------
##           unweighted weighted
## DETECT         0.568    0.568
## ASSI           0.759    0.759
## RATIO          0.917    0.917
## MADCOV100      0.619    0.619
## MCOV100       -0.020   -0.020
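For easier comparison, the headline statistics from the three DETECT runs can be collected into one table, using the same detect.summary indexing as in the simulation function below (a convenience sketch):

# DETECT, ASSI, and RATIO (unweighted) for each conditioning score
tibble(
  model  = c("2PL", "Spline", "EH"),
  DETECT = c(conf$detect.summary[1, 1], conf_spline$detect.summary[1, 1], conf_EH$detect.summary[1, 1]),
  ASSI   = c(conf$detect.summary[2, 1], conf_spline$detect.summary[2, 1], conf_EH$detect.summary[2, 1]),
  RATIO  = c(conf$detect.summary[3, 1], conf_spline$detect.summary[3, 1], conf_EH$detect.summary[3, 1])
)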

3 Analysis of 100 data sets

Let’s randomly draw 100 vocabulary item sets, build a data set from each (combined with the grammar items), and calculate DETECT on each of those.

First make the function.

dim_sim <- function(items, iter){
     # Make tibble for output of loop
out = tibble(detect=numeric(), assi=numeric(), ratio=numeric(), 
             detect_s=numeric(), assi_s=numeric(), ratio_s=numeric(), 
             detect_EH=numeric(), assi_EH=numeric(), ratio_EH=numeric(), 
             corr=numeric()) # corr holds the correlation between sampled and non-sampled vocabulary totals
  
for(i in 1:iter){
  
  # Make list of XX items
  book <- Item %>% # choose items (only restriction: open-class nouns/predicates; unlike the example above, no proportional sampling by category)
  filter(lexical_category == "nouns" | lexical_category == "predicates") %>%
  slice_sample(n= items, replace=FALSE) # select random sample of X items
 
  # Select relevant observations for these items
  Vocab2<- Vocab %>%
     filter( source_name != "Byers" & data_id != "135056")%>%# missing item-level data
     filter(item_id %in% book$item_id) %>%
     dplyr::select(data_id, value, out, definition) %>%
     pivot_wider(id_cols=data_id, names_from = "definition", values_from="out") %>%
     drop_na() # Make vocabulary dataframe
  
  # Make data set of non-selected items (for calculating correlation coefficients)
  Vocab3 <- Vocab %>%
     filter( source_name != "Byers" & data_id != "135056")%>% # missing item-level data
     filter(!(item_id %in% book$item_id)) %>%
     filter(lexical_category == "nouns" | lexical_category == "predicates") %>%
     dplyr::select(data_id, value, out, definition) %>%
     pivot_wider(id_cols=data_id, names_from = "definition", values_from="out") %>%
    drop_na()# Make vocabulary dataframe
  
  # calculate correlation between included and excluded items
  r = cor(rowSums(Vocab2[, 2:(items + 1)]), rowSums(Vocab3[, 2:(478 - items + 1)])) # columns 2 onward contain the item responses
  
  # Combine selected vocabulary items with grammar items
  full <- left_join(Complex_short_with_ids, Vocab2, by="data_id") %>% # combine with complexity data
  filter(combine760 > 0) %>%
  dplyr::select(-c("data_id", "combine760"))
  
  # run IRT models
  m <- mirt(full, 1, "2PL", verbose=FALSE)
  m_s <- mirt(full, 1, "spline", verbose=FALSE) # likely few iterations, but this takes so much time
  m_EH <- mirt(full, 1, "2PL", dentype = "EH")
  
  # save factor scores
  fscores <- fscores(m, use_dentype_estimate=TRUE)[,1]
  fscores_s <- fscores(m_s, use_dentype_estimate=TRUE)[,1]
  fscores_EH  <- fscores(m_EH, use_dentype_estimate=TRUE)[,1]
  
  # run DETECT
  fulldf <- data.frame(full)
  dtct <- c(rep(1, 37), rep(2, items)) # clusters: first 37 columns are grammar items, remainder are the sampled vocabulary items
  conf <- conf.detect(fulldf, fscores, dtct, progress=FALSE)
  conf_s <- conf.detect(fulldf, fscores_s, dtct, progress=FALSE)
  conf_EH <- conf.detect(fulldf, fscores_EH, dtct, progress=FALSE)

  #combine results with previous iterations
  df <- tibble(detect=conf$detect.summary[1, 1], assi=conf$detect.summary[2, 1], ratio=conf$detect.summary[3, 1],
                 detect_s =conf_s$detect.summary[1, 1], assi_s =conf_s$detect.summary[2, 1], ratio_s=conf_s$detect.summary[3, 1],
                 detect_EH =conf_EH$detect.summary[1, 1], assi_EH =conf_EH$detect.summary[2, 1], ratio_EH=conf_EH$detect.summary[3, 1], 
                 corr = r)
  out = bind_rows(out, df)
  
  
}

return(out)
}

Run this 100 times and save the output. The call is commented out here because otherwise the output from each run shows up and blows up the R Markdown document; the saved results are loaded below instead.

#sim_XSecr <- dim_sim(37, 100)

#write_rds(sim_XSecr, "Dim_Sim_Output/sim_XSecr.rds")

sim_XSecr <- read_rds("Dim_Sim_Output/sim_XSecr.rds")
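A quick look at the structure of the saved output (one row per simulated item set, with the columns defined in dim_sim() above):

# DETECT/ASSI/RATIO per IRT model, plus the correlation between sampled and non-sampled vocabulary totals
glimpse(sim_XSecr)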

Get the range of correlation coefficients between the sampled and non-sampled vocabulary totals.

sim_XSecr %>%
  dplyr::select("corr") %>%
  summarise(
    min = min(corr), 
    max = max(corr), 
    mean = mean(corr)
  )

Plot distributions of DETECT indices

library(ggpirate)
sim_XSecr %>%
  dplyr::rename(detect_b = detect, assi_b = assi, ratio_b = ratio) %>%
  mutate(
    run = row_number()
  ) %>%
  pivot_longer(!c(run, corr), 
               names_to=c("measure", "LV"), 
               names_sep = "_", 
               values_to="value") %>%
  filter(measure=="detect") %>%
  ggplot(aes(x=LV, y=value)) + 
  geom_pirate(aes(color=LV)) + 
  ylim(.2, 1) + 
  theme_minimal() + 
  scale_x_discrete("IRT Model", labels = c("b" = "2PL", "EH" = "Empirical Histogram", "s" = "Spline")) + 
  geom_hline(yintercept=c(.2, .4, 1), linetype='dotted') +
  annotate("text", x = "b", y = .8, label = "Moderate Multidimensionality", vjust = -0.5) + 
  annotate("text", x = "b", y = .3, label = "Mild Multidimensionality", vjust = -0.5)
## Warning: Removed 3 rows containing missing values (geom_col_pirate).

ggsave(filename="plots/wordbank/detect_plot_full_r.png", last_plot(), dpi=500)  
## Saving 7 x 5 in image
## Warning: Removed 3 rows containing missing values (geom_col_pirate).

How many iterations are below .4?

sim_XSecr %>%
  dplyr::rename(detect_b = detect, assi_b = assi, ratio_b = ratio) %>%
  mutate(
    run = row_number()
  ) %>%
  pivot_longer(!c(run, corr), 
               names_to=c("measure", "LV"), 
               names_sep = "_", 
               values_to="value") %>%
  filter(measure=="detect") %>%
  group_by(LV) %>%
  summarise(
    Less_40 = sum(value < .40),
    mean = mean(value)
  )

This is all very similar to the analyses on the full dataset.

sessionInfo()
## R version 4.1.0 (2021-05-18)
## Platform: x86_64-w64-mingw32/x64 (64-bit)
## Running under: Windows 10 x64 (build 19045)
## 
## Matrix products: default
## 
## locale:
## [1] LC_COLLATE=Dutch_Netherlands.1252  LC_CTYPE=Dutch_Netherlands.1252   
## [3] LC_MONETARY=Dutch_Netherlands.1252 LC_NUMERIC=C                      
## [5] LC_TIME=Dutch_Netherlands.1252    
## 
## attached base packages:
## [1] stats4    stats     graphics  grDevices datasets  utils     methods  
## [8] base     
## 
## other attached packages:
##  [1] ggpirate_0.1.2  sirt_3.11-21    lordif_0.3-3    rms_6.2-0      
##  [5] SparseM_1.81    Hmisc_4.6-0     Formula_1.2-4   survival_3.3-1 
##  [9] knitr_1.37      Gifi_0.3-9      psych_2.2.3     mirt_1.36.1    
## [13] lattice_0.20-45 forcats_0.5.1   stringr_1.4.0   dplyr_1.0.7    
## [17] purrr_0.3.4     readr_2.1.2     tidyr_1.2.0     tibble_3.1.6   
## [21] tidyverse_1.3.1 patchwork_1.1.2 GGally_2.1.2    ggplot2_3.3.5  
## [25] wordbankr_0.3.1
## 
## loaded via a namespace (and not attached):
##   [1] TAM_3.7-16            TH.data_1.1-0         colorspace_2.0-3     
##   [4] ellipsis_0.3.2        htmlTable_2.4.0       base64enc_0.1-3      
##   [7] fs_1.5.2              rstudioapi_0.13       farver_2.1.0         
##  [10] Deriv_4.1.3           MatrixModels_0.5-0    mvtnorm_1.1-3        
##  [13] fansi_1.0.2           lubridate_1.8.0       xml2_1.3.3           
##  [16] codetools_0.2-18      splines_4.1.0         mnormt_2.0.2         
##  [19] jsonlite_1.8.0        broom_0.7.12          cluster_2.1.2        
##  [22] dbplyr_2.1.1          png_0.1-7             compiler_4.1.0       
##  [25] httr_1.4.2            backports_1.4.1       assertthat_0.2.1     
##  [28] Matrix_1.4-0          fastmap_1.1.0         cli_3.0.1            
##  [31] admisc_0.26           htmltools_0.5.2       quantreg_5.88        
##  [34] tools_4.1.0           gtable_0.3.0          glue_1.4.2           
##  [37] Rcpp_1.0.8.3          cellranger_1.1.0      jquerylib_0.1.4      
##  [40] vctrs_0.3.8           nlme_3.1-155          xfun_0.30            
##  [43] rvest_1.0.2           lifecycle_1.0.1       renv_0.15.4          
##  [46] dcurver_0.9.2         polspline_1.1.19      MASS_7.3-55          
##  [49] zoo_1.8-9             scales_1.2.0          hms_1.1.1            
##  [52] parallel_4.1.0        sandwich_3.0-1        RColorBrewer_1.1-3   
##  [55] yaml_2.3.5            pbapply_1.5-0         gridExtra_2.3        
##  [58] sass_0.4.0            rpart_4.1.16          reshape_0.8.8        
##  [61] latticeExtra_0.6-29   stringi_1.7.6         highr_0.9            
##  [64] checkmate_2.0.0       permute_0.9-7         polycor_0.8-1        
##  [67] rlang_1.0.2           pkgconfig_2.0.3       evaluate_0.15        
##  [70] labeling_0.4.2        htmlwidgets_1.5.4     quantregGrowth_1.4-0 
##  [73] tidyselect_1.1.2      plyr_1.8.6            magrittr_2.0.1       
##  [76] R6_2.5.1              generics_0.1.3        multcomp_1.4-18      
##  [79] DBI_1.1.2             pillar_1.8.0          haven_2.4.3          
##  [82] foreign_0.8-82        withr_2.5.0           mgcv_1.8-39          
##  [85] nnet_7.3-17           modelr_0.1.8          crayon_1.5.0         
##  [88] utf8_1.2.2            tmvnsim_1.0-2         tzdb_0.2.0           
##  [91] rmarkdown_2.13        jpeg_0.1-9            grid_4.1.0           
##  [94] readxl_1.3.1          CDM_7.5-15            data.table_1.14.2    
##  [97] vegan_2.5-7           reprex_2.0.1          digest_0.6.29        
## [100] GPArotation_2014.11-1 munsell_0.5.0         bslib_0.3.1