This script examines whether the vocabulary and grammar sections of the CDI can be distinguished as separate dimensions.^[This analysis considers all observations for which longitudinal==FALSE. There are still many observations with the same value of original_id. Most of these appear to be participants who were tested at multiple time points as part of a data set that wasn't exclusively longitudinal (though the Wordbank help file indicates that this variable is not always reliable). We have therefore also run this analysis on a subset of these data without any duplicate original_ids; a sketch of one way to construct such a subset appears after the first sanity check below.]

I first run one analysis in detail to show the various steps, and then write a function that repeats the analysis 100 times. Results of the latter are reported in the paper.

1 Prep Data

Open the libraries.

options(max.print=500)
library(wordbankr) # WB data
library(GGally) # pairs plots (ggpairs)
library(patchwork) # combining plots
library(tidyverse) # tidy
library(mirt) # IRT models
library(psych) # some psychometric stuff (tests of dimensionality)
library(Gifi) # some more psychometric stuff (tests of dimensionality)
library(knitr) # some formatting, tables, etc
library(lordif) # differential item functioning.
library(sirt) # additional IRT functions

Open the data: the files are saved locally because Wordbank will be updated soon.

Inst <- read_rds("Wordbank_Data/Inst_WS_Eng.rds")
Admin <- read_rds("Wordbank_Data/Admin_WS_Eng.rds")
N_total = nrow(Admin) # making sure things add up later
N_long = nrow(filter(Admin, longitudinal==TRUE)) # making sure things add up later
Item <- read_rds("Wordbank_Data/Item_WS_Eng.rds")
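For reference, a minimal sketch of how these files could be regenerated directly from Wordbank with wordbankr; the exact calls and arguments originally used to create the saved files are an assumption, and the results may change once the database is updated.

# Sketch (assumed, not the original export code): pull the American English WS data and save it locally
# Admin <- get_administration_data(language = "English (American)", form = "WS")
# Item  <- get_item_data(language = "English (American)", form = "WS")
# Inst  <- get_instrument_data(language = "English (American)", form = "WS")
# write_rds(Admin, "Wordbank_Data/Admin_WS_Eng.rds")
# write_rds(Item,  "Wordbank_Data/Item_WS_Eng.rds")
# write_rds(Inst,  "Wordbank_Data/Inst_WS_Eng.rds")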

Make one data set of just complexity items for processing.

Complex <- Admin %>%
  full_join(.,Inst, by="data_id") %>% # join admin, inst and item
  full_join(., Item, by="num_item_id") %>%  # join admin, inst and item
  filter(longitudinal==FALSE) %>% # remove longitudinal observations
  filter(type == "combine" | # select relevant observations (complexity, plus combine [which is used to drop non-combiners])
           type == "complexity" 
         ) %>%
  mutate(
    out = ifelse(value=="complex" | value=="sometimes" | value=="produces", yes=1, 
                 no = ifelse(value=="often", yes=2, no =0)) # recode as numeric
  ) 

N_complexity_items = nrow(filter(Item, type == "combine" | 
           type == "complexity"))


# Sanity Check
nrow(Complex) == (N_total - N_long)*N_complexity_items # check that everything adds up
## [1] TRUE
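As noted in the footnote above, a robustness check uses a subset without duplicate original_ids. A minimal sketch of one way such a subset could be constructed from the Admin data (an assumption on my part, not necessarily the code used for that check):

Admin_dedup <- Admin %>% # sketch only: not the code used for the reported robustness check
  filter(longitudinal == FALSE) %>%
  arrange(original_id, desc(age)) %>% # within each original_id, put the oldest administration first
  filter(is.na(original_id) | !duplicated(original_id)) # keep one row per original_id; keep all rows with missing IDs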

The complexity_category variable, coded by Wordbank, is either morphology or syntax for each item on the complexity scale. This will be useful for labeling items later (so we can use morphology761 as a label instead of the full definition). I'll also fill in "combine" for the combine item so it shows up in the complexity_category variable as well.

Complex$complexity_category <- ifelse(Complex$complexity_category == "", yes=Complex$type, no=Complex$complexity_category)

IRT models require wide-format data, so here I reshape the data.

Complex_short_with_ids_all <- Complex %>% 
  dplyr::select(data_id, value, out, complexity_category, num_item_id) %>% # select items
  mutate(
    label = str_c(complexity_category, num_item_id) # make column names for each item
  ) %>%
  pivot_wider(id_cols=data_id, names_from = "label", values_from="out") %>% # pivot wider
  dplyr::select(starts_with(c("data_id", "combine", "morphology", "syntax")))  # select relevant items

Complex_short_with_ids <- Complex_short_with_ids_all %>% # drop participants missing complexity items
  drop_na()

Complex_short_grammatical <- Complex_short_with_ids %>%
  filter(combine760 > 0) %>%
  dplyr::select(starts_with(c("morphology", "syntax"))) # need to get rid of participant names for IRT models. 



# Sanity Check
N_NA <- nrow(Complex_short_with_ids_all) - nrow(Complex_short_with_ids %>% # get total number of participants with NAs
  dplyr::select(starts_with(c("data_id", "combine", "morphology", "syntax"))) %>%
  drop_na())
N_nog <- nrow(filter(Complex_short_with_ids, combine760 == 0)) # get total number of non-grammatical
nrow(Complex_short_grammatical) == N_total - N_long - N_NA - N_nog # do rows add up?
## [1] TRUE

2 Example Analysis

To illustrate the analysis, both for myself and for readers, I run it once below. In this section, I create a subset of vocabulary items (constrained so that they are sampled proportionally to the number of items in each subsection of the CDI), combine this subset with the grammar items, and run the analysis on that data set. In the following section, I repeat this analysis 100 times.

2.1 Make Vocab Dataset

Make full vocabulary data set.

Vocab <- Admin %>%
  full_join(.,Inst, by="data_id") %>% # combine admin, item and intsr
  full_join(., Item, by="num_item_id") %>% # combine admin, item and intsr
  filter(longitudinal==FALSE) %>% # remove longitudinal data set
  filter(type == "word"
         ) %>%
  mutate(
    out = ifelse(value=="produces", yes=1, no =0) # recode
    )

# Sanity check
N_vocab = nrow(filter(Item, type == "word")) 
nrow(Vocab) == (N_total - N_long)*N_vocab # everything add up?
## [1] TRUE

Sample 37 items from the vocabulary data set.

set.seed(245)

f1 <- 43/nrow(filter(Item, lexical_category == "nouns" | lexical_category == "predicates")) # sampling proportion (43 of the 478 open-class items); rounding within categories yields 37 items

# Make list
DF <- Item %>%
  filter(lexical_category == "nouns" | lexical_category == "predicates") %>% # Select open-class items
  group_by(category) %>% # group by category
  slice_sample(prop= f1, replace=FALSE) %>% # select fixed percentage from each category. 
  ungroup() # 37 vocab items
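A quick check, in the same spirit as the sanity checks elsewhere, that the proportional sampling landed on the intended number of items (a sketch; the target of 37 is taken from the comment above):

# Sanity check (sketch): the per-category sampling should yield 37 items
nrow(DF) == 37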

Check that the sampled list is similar to the original data set in terms of content.

Item %>%
  filter(lexical_category == "nouns" | lexical_category == "predicates") %>%
  group_by(category) %>%
  count() %>%
  ungroup() %>%
  mutate(
    Prop = n/478
  )
DF %>%
  group_by(category) %>%
  count() %>%
  ungroup() %>%
  mutate(
    Prop = n/37
  )

Select the subset of vocabulary items defined in the list above, and check the correlation between the total score on the selected items and the total score on the non-selected items.

total_37 <- Vocab %>%
  filter(item_id %in% DF$item_id) %>%
  group_by(data_id) %>%
  mutate(
    total_37=sum(out)
  ) %>%
  slice(1) %>%
  ungroup() %>%
  dplyr::select("data_id", "total_37")

total_n37 <- Vocab %>%
  filter(lexical_category == "predicates" | lexical_category=="nouns") %>%
  filter(!item_id %in% DF$item_id) %>%
  group_by(data_id) %>%
  mutate(
    total_others=sum(out)
  ) %>%
  slice(1) %>%
  ungroup() %>%
  dplyr::select("data_id", "total_others")

corr = full_join(total_37, total_n37, by="data_id") %>%
  drop_na()

cor(corr$total_37, corr$total_others) # .99
## [1] 0.9857975

Reshape the data to wide format for the IRT models.

Vocab_short_with_ids_37_full <- Vocab %>% 
  filter(lexical_category == "nouns" | lexical_category == "predicates") %>% # select open-class items
  filter(item_id %in% DF$item_id) %>% # select items in subset
  dplyr::select(data_id, value, out, definition) %>% # select relevant variables
  pivot_wider(id_cols=data_id, names_from = "definition", values_from="out") 


Vocab_short_with_ids_37 <- Vocab %>% 
  filter(lexical_category == "nouns" | lexical_category == "predicates") %>%
  filter(item_id %in% DF$item_id) %>%
  dplyr::select(data_id, value, out, definition) %>%
  pivot_wider(id_cols=data_id, names_from = "definition", values_from="out") %>%
  drop_na()
Vocab_short <- Vocab_short_with_ids_37 %>%
  dplyr::select(-"data_id") # dataset for IRT can't have IDs



# Sanity Check
N_NA_Vocab = nrow(Vocab_short_with_ids_37_full) - nrow(Vocab_short_with_ids_37) # 26 participants with missing data 
nrow(Vocab_short_with_ids_37) == N_total - N_long - N_NA_Vocab # Looks good
## [1] TRUE

2.2 Combine

Combine the vocabulary and complexity data sets.

full <- full_join(
  Complex_short_with_ids, Vocab_short_with_ids_37, by="data_id"
) %>% # combine datasets
  filter(combine760 > 0) %>% # drop non-combiners (mostly vocab)
  dplyr::select(-c("data_id", "combine760")) %>% 
  drop_na() # one participant with missing vocab, but full grammar. 
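A quick structural check in the same spirit as the sanity checks above (a sketch; the expected 74 columns are the 37 grammar items plus the 37 sampled vocabulary items):

# Sanity check (sketch): 37 grammar items + 37 sampled vocabulary items
ncol(full) == 74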

Table of the number of participants scoring above 0 on each grammar and vocabulary item.

full %>%
  pivot_longer(everything(), names_to="item", values_to="out")  %>%
  mutate(
    type = ifelse(str_detect(item, "morphology"), yes="morphology", no = 
                   ifelse(str_detect(item, "syntax"), yes="syntax", no = "word"))
  ) %>%
  group_by(type, item) %>%
  summarise(
    sum = sum(out)
  ) %>%
  ungroup(item) %>%
  kable()
## `summarise()` has grouped output by 'type'. You can override using the `.groups`
## argument.
|type       |item             |  sum|
|:----------|:----------------|----:|
|morphology |morphology761    | 1229|
|morphology |morphology762    | 1173|
|morphology |morphology763    | 1237|
|morphology |morphology764    | 1167|
|morphology |morphology765    |  658|
|morphology |morphology766    |  503|
|morphology |morphology767    | 1047|
|morphology |morphology768    | 1261|
|morphology |morphology769    | 1077|
|morphology |morphology770    |  524|
|morphology |morphology771    |  486|
|morphology |morphology772    |  357|
|syntax     |syntax773        |  976|
|syntax     |syntax774        |  784|
|syntax     |syntax775        |  453|
|syntax     |syntax776        |  286|
|syntax     |syntax777        |  456|
|syntax     |syntax778        |  589|
|syntax     |syntax779        |  572|
|syntax     |syntax780        |  892|
|syntax     |syntax781        |  744|
|syntax     |syntax782        |  507|
|syntax     |syntax783        |  538|
|syntax     |syntax784        |  691|
|syntax     |syntax785        |  495|
|syntax     |syntax786        |  507|
|syntax     |syntax787        |  268|
|syntax     |syntax788        |  316|
|syntax     |syntax789        |  569|
|syntax     |syntax790        |  671|
|syntax     |syntax791        |  733|
|syntax     |syntax792        |  722|
|syntax     |syntax793        |  670|
|syntax     |syntax794        |  304|
|syntax     |syntax795        |  446|
|syntax     |syntax796        |  404|
|syntax     |syntax797        |  446|
|word       |airplane         | 1800|
|word       |applesauce       | 1053|
|word       |asleep           | 1157|
|word       |bench            |  380|
|word       |black            |  866|
|word       |bug              | 1606|
|word       |bump             | 1010|
|word       |buttocks/bottom* | 1533|
|word       |chocolate        | 1043|
|word       |drink (action)   | 1597|
|word       |fork             | 1534|
|word       |garden           |  585|
|word       |hamburger        | 1241|
|word       |heavy            | 1109|
|word       |hit              | 1214|
|word       |hurt             | 1183|
|word       |ladder           |  789|
|word       |make             |  820|
|word       |mop              |  629|
|word       |nuts             |  800|
|word       |paper            | 1491|
|word       |pen              | 1347|
|word       |play             | 1428|
|word       |pull             |  924|
|word       |refrigerator     | 1082|
|word       |sauce            |  686|
|word       |spaghetti        | 1258|
|word       |stay             |  878|
|word       |tickle           | 1331|
|word       |tights           |  436|
|word       |tongue           | 1271|
|word       |turkey           |  921|
|word       |vacuum           | 1193|
|word       |wait             |  878|
|word       |white            |  754|
|word       |zebra            | 1026|
|word       |zipper           | 1150|

Fit the mirt models: three specifications, a 2PL, a spline model, and a 2PL with an empirical-histogram (EH) density. These take a while to run and produce a lot of output, so I have saved the model outputs and load them here. See the original R Markdown files to run the models.

#m1 <- mirt(full, 1, "2PL")
#m2 <- mirt(full, 1, "spline", technical=list(NCYCLES=1000)) # doesn't improve at 2000  iterations and it's close. 
#m3 <- mirt(full, 1, "2PL", dentype = "EH")
#saveRDS(m2, "IRT_output/m2.rds")
#saveRDS(m3, "IRT_output/m3.rds")
#saveRDS(m1, "IRT_output/m1.rds")
m1 <-  readRDS("IRT_output/m1.rds")
m2 <-  readRDS("IRT_output/m2.rds")
m3 <-  readRDS("IRT_output/m3.rds")
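Given the note above that the spline model stops short of full convergence, a quick check of the convergence flags could be added here; this is a sketch using mirt's extract.mirt() accessor and is not part of the original analysis.

# Sketch: check whether each fitted model reports convergence
sapply(list(TwoPL = m1, Spline = m2, EH = m3), extract.mirt, what = "converged")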

Make factor scores for the three models. These will be used in DETECT below.

fscores <- fscores(m1, use_dentype_estimate=TRUE)[,1]
fscores_spline <- fscores(m2, use_dentype_estimate=TRUE)[,1]
fscores_EH <- fscores(m3, use_dentype_estimate=TRUE)[,1]
tibble(TwoPL = fscores, Spline = fscores_spline, EH = fscores_EH) %>%
  ggpairs()

Make a data frame with total vocabulary and complexity scores so that raw scores can be compared to the latent variables.

vocab_and_grammar <- Complex %>% # data frame defined above
  filter(type=="complexity") %>% # select complexity items only
  group_by(data_id) %>% # sort by participant
  mutate(
    total = sum(out) # make total score for complexity
  ) %>%
  slice(1) %>% # select 1 row per participant
  ungroup() %>% 
  dplyr::select("data_id", "production", "total") # this already contains production score from Admin dataset

Plot the latent variables against the total vocabulary and grammar scores.

vglv <- Complex_short_with_ids %>%
  filter(combine760 > 0) %>% 
  bind_cols(., TwoPL = fscores, Spline = fscores_spline, EH = fscores_EH) %>% # combine raw data with latent variables
  dplyr::select("data_id", "TwoPL", "Spline", "EH") %>% # select 
  left_join(vocab_and_grammar, by="data_id") %>% # add total scores.
  pivot_longer(-c("data_id", "production", "total"), names_to="model", values_to="LV") #

vocab<-ggplot(vglv, aes(x=production, y=LV)) + geom_point() + stat_smooth(method="gam") + labs(x = "Vocabulary") + facet_wrap(~factor(model, levels=c('TwoPL','EH','Spline'))) + theme_minimal()
grammar<-ggplot(vglv, aes(x=total, y=LV)) + geom_point() + stat_smooth(method="gam")  + labs(x = "Grammar") + facet_wrap(~factor(model, levels=c('TwoPL','EH','Spline'))) + theme_minimal() 

vocab/grammar

ggsave(filename="plots/wordbank/LV_plot_xsec.png", last_plot(), dpi=500)  

2.3 Confirmatory DETECT

full2 <- data.frame(full) # convert tibble to data.frame for conf.detect()
dtct <- c(rep(1, 37), rep(2, 37)) # specify clusters: 37 grammar items (cluster 1), then 37 vocabulary items (cluster 2)

conf <- conf.detect(full2, fscores, dtct)
## -----------------------------------------------------------
## Confirmatory DETECT Analysis 
## Conditioning on 1 Score
## Bandwidth Scale: 1.1 
## Pairwise Estimation of Conditional Covariances
## ...........................................................
## Nonparametric ICC estimation 
##  5% 10% 15% 20% 25% 30% 35% 40% 45% 50% 
## 55% 60% 65% 70% 75% 80% 85% 90% 95% 
## ...........................................................
## Nonparametric Estimation of conditional covariances 
##  5% 10% 15% 20% 25% 30% 35% 40% 45% 50% 
## 55% 60% 65% 70% 75% 80% 85% 90% 95% 
## -----------------------------------------------------------
##           unweighted weighted
## DETECT         0.547    0.547
## ASSI           0.760    0.760
## RATIO          0.918    0.918
## MADCOV100      0.596    0.596
## MCOV100       -0.033   -0.033
conf_spline <- conf.detect(full2, fscores_spline, dtct)
## -----------------------------------------------------------
## Confirmatory DETECT Analysis 
## Conditioning on 1 Score
## Bandwidth Scale: 1.1 
## Pairwise Estimation of Conditional Covariances
## ...........................................................
## Nonparametric ICC estimation 
##  5% 10% 15% 20% 25% 30% 35% 40% 45% 50% 
## 55% 60% 65% 70% 75% 80% 85% 90% 95% 
## ...........................................................
## Nonparametric Estimation of conditional covariances 
##  5% 10% 15% 20% 25% 30% 35% 40% 45% 50% 
## 55% 60% 65% 70% 75% 80% 85% 90% 95% 
## -----------------------------------------------------------
##           unweighted weighted
## DETECT         0.497    0.497
## ASSI           0.719    0.719
## RATIO          0.896    0.896
## MADCOV100      0.555    0.555
## MCOV100       -0.025   -0.025
conf_EH <- conf.detect(full2, fscores_EH, dtct)
## -----------------------------------------------------------
## Confirmatory DETECT Analysis 
## Conditioning on 1 Score
## Bandwidth Scale: 1.1 
## Pairwise Estimation of Conditional Covariances
## ...........................................................
## Nonparametric ICC estimation 
##  5% 10% 15% 20% 25% 30% 35% 40% 45% 50% 
## 55% 60% 65% 70% 75% 80% 85% 90% 95% 
## ...........................................................
## Nonparametric Estimation of conditional covariances 
##  5% 10% 15% 20% 25% 30% 35% 40% 45% 50% 
## 55% 60% 65% 70% 75% 80% 85% 90% 95% 
## -----------------------------------------------------------
##           unweighted weighted
## DETECT         0.546    0.546
## ASSI           0.762    0.762
## RATIO          0.919    0.919
## MADCOV100      0.595    0.595
## MCOV100       -0.024   -0.024
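The numeric indices can also be pulled directly out of the returned objects rather than read from the printed output; the simulation function in the next section relies on the same detect.summary element, so this sketch simply reuses that accessor.

# Sketch: extract the unweighted DETECT, ASSI and RATIO values
# (the same detect.summary element is indexed in the simulation function below)
conf$detect.summary[1:3, 1]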

3 Analysis of 100 data sets

Let's randomly draw 100 subsets of vocabulary items and calculate DETECT on each of them.

First make the function.

dim_sim <- function(items, iter){
  # Make tibble for output of loop
  out = tibble(detect=numeric(), assi=numeric(), ratio=numeric(), detect_s=numeric(), assi_s=numeric(), ratio_s=numeric(), detect_EH=numeric(), assi_EH=numeric(), ratio_EH=numeric(), corr=numeric())
  
for(i in 1:iter){
  
  # Make list of XX items
  book <- Item %>% # choose items (no restriction other than noun vs predicate)
  filter(lexical_category == "nouns" | lexical_category == "predicates") %>%
  slice_sample(n= items, replace=FALSE) # select random sample of X items
 
  # Select relevant observations for these items
  Vocab2<- Vocab %>%
     filter( source_name != "Byers" & data_id != "135056")%>%# missing item-level data
     filter(item_id %in% book$item_id) %>%
     dplyr::select(data_id, value, out, definition) %>%
     pivot_wider(id_cols=data_id, names_from = "definition", values_from="out") %>%
     drop_na() # Make vocabulary dataframe
  
  # Make data set of non-selected items (for calculating correlation coefficients)
  Vocab3 <- Vocab %>%
     filter( source_name != "Byers" & data_id != "135056")%>% # missing item-level data
     filter(!(item_id %in% book$item_id)) %>%
     filter(lexical_category == "nouns" | lexical_category == "predicates") %>%
     dplyr::select(data_id, value, out, definition) %>%
     pivot_wider(id_cols=data_id, names_from = "definition", values_from="out") %>%
    drop_na() # Make data frame of non-selected vocabulary items
  
  # calculate correlation between included and excluded items
  r = cor(rowSums(Vocab2[, 2:(items + 1)]), rowSums(Vocab3[, 2:(478 - items + 1)]))
  
  # Combine selected vocabulary items with grammar items
  full <- left_join(Complex_short_with_ids, Vocab2, by="data_id") %>% # combine with complexity data
  filter(combine760 > 0) %>%
  dplyr::select(-c("data_id", "combine760"))
  
  # run IRT models
  m <- mirt(full, 1, "2PL", verbose=FALSE)
  m_s <- mirt(full, 1, "spline", verbose=FALSE) # likely few iterations, but this takes so much time
  m_EH <- mirt(full, 1, "2PL", dentype = "EH")
  
  # save factor scores
  fscores <- fscores(m, use_dentype_estimate=TRUE)[,1]
  fscores_s <- fscores(m_s, use_dentype_estimate=TRUE)[,1]
  fscores_EH  <- fscores(m_EH, use_dentype_estimate=TRUE)[,1]
  
  # run DETECT
  fulldf <- data.frame(full)
  dtct <- c(rep(1, 37), rep(2, items)) # 37 grammar items (cluster 1), sampled vocabulary items (cluster 2)
  conf <- conf.detect(fulldf, fscores, dtct, progress=FALSE)
  conf_s <- conf.detect(fulldf, fscores_s, dtct, progress=FALSE)
  conf_EH <- conf.detect(fulldf, fscores_EH, dtct, progress=FALSE)

  #combine results with previous iterations
  df <- tibble(detect=conf$detect.summary[1, 1], assi=conf$detect.summary[2, 1], ratio=conf$detect.summary[3, 1],
                 detect_s =conf_s$detect.summary[1, 1], assi_s =conf_s$detect.summary[2, 1], ratio_s=conf_s$detect.summary[3, 1],
                 detect_EH =conf_EH$detect.summary[1, 1], assi_EH =conf_EH$detect.summary[2, 1], ratio_EH=conf_EH$detect.summary[3, 1], 
                 corr = r)
  out = bind_rows(out, df)
  
  
}

return(out)
}

Run this 100 times and save the output. The call is commented out here because otherwise the output of every run is printed and blows up the R Markdown file; the saved results are loaded below instead.

#sim_XSec <- dim_sim(37, 100)

#write_rds(sim_XSec, "Dim_Sim_Output/sim_XSec.rds")

sim_XSec <- read_rds("Dim_Sim_Output/sim_XSec.rds")

Get the range of the correlation coefficients between the sampled and non-sampled vocabulary totals.

sim_XSec %>%
  dplyr::select("corr") %>%
  summarise(
    min = min(corr), 
    max = max(corr), 
    mean = mean(corr)
  )

Plot the distributions of the DETECT indices.

library(ggpirate)
sim_XSec %>%
  dplyr::rename(detect_b = detect, assi_b = assi, ratio_b = ratio) %>%
  mutate(
    run = row_number()
  ) %>%
  pivot_longer(!c(run, corr), 
               names_to=c("measure", "LV"), 
               names_sep = "_", 
               values_to="value") %>%
  filter(measure=="detect") %>%
  ggplot(aes(x=LV, y= value)) + geom_pirate(aes(color=LV)) + ylim(.2, 1) + theme_minimal() + scale_x_discrete("IRT Model", labels = c("b" = "2PL","EH" = "Empirical Histogram", "s" = "Spline")) + geom_hline(yintercept=c(.2, .4, 1), linetype='dotted') +
  annotate("text", x = "b", y = .8, label = "Moderate Multidimensionality", vjust = -0.5) + 
  annotate("text", x = "b", y = .3, label = "Mild Multidimensionality", vjust = -0.5)
## Warning: Removed 3 rows containing missing values (geom_col_pirate).

ggsave(filename="plots/wordbank/detect_plot_full.png", last_plot(), dpi=500)  
## Saving 7 x 5 in image
## Warning: Removed 3 rows containing missing values (geom_col_pirate).

How many iterations are below .4?

sim_XSec %>%
  dplyr::rename(detect_b = detect, assi_b = assi, ratio_b = ratio) %>%
  mutate(
    run = row_number()
  ) %>%
  pivot_longer(!c(run, corr), 
               names_to=c("measure", "LV"), 
               names_sep = "_", 
               values_to="value") %>%
  filter(measure=="detect") %>%
  group_by(LV) %>%
  summarise(
    Less_40 = sum(value < .40),
    mean = mean(value)
  )
sessionInfo()
## R version 4.1.0 (2021-05-18)
## Platform: x86_64-w64-mingw32/x64 (64-bit)
## Running under: Windows 10 x64 (build 19045)
## 
## Matrix products: default
## 
## locale:
## [1] LC_COLLATE=Dutch_Netherlands.1252  LC_CTYPE=Dutch_Netherlands.1252   
## [3] LC_MONETARY=Dutch_Netherlands.1252 LC_NUMERIC=C                      
## [5] LC_TIME=Dutch_Netherlands.1252    
## 
## attached base packages:
## [1] stats4    stats     graphics  grDevices datasets  utils     methods  
## [8] base     
## 
## other attached packages:
##  [1] ggpirate_0.1.2  sirt_3.11-21    lordif_0.3-3    rms_6.2-0      
##  [5] SparseM_1.81    Hmisc_4.6-0     Formula_1.2-4   survival_3.3-1 
##  [9] knitr_1.37      Gifi_0.3-9      psych_2.2.3     mirt_1.36.1    
## [13] lattice_0.20-45 forcats_0.5.1   stringr_1.4.0   dplyr_1.0.7    
## [17] purrr_0.3.4     readr_2.1.2     tidyr_1.2.0     tibble_3.1.6   
## [21] tidyverse_1.3.1 patchwork_1.1.2 GGally_2.1.2    ggplot2_3.3.5  
## [25] wordbankr_0.3.1
## 
## loaded via a namespace (and not attached):
##   [1] TAM_3.7-16            TH.data_1.1-0         colorspace_2.0-3     
##   [4] ellipsis_0.3.2        htmlTable_2.4.0       base64enc_0.1-3      
##   [7] fs_1.5.2              rstudioapi_0.13       farver_2.1.0         
##  [10] Deriv_4.1.3           MatrixModels_0.5-0    mvtnorm_1.1-3        
##  [13] fansi_1.0.2           lubridate_1.8.0       xml2_1.3.3           
##  [16] codetools_0.2-18      splines_4.1.0         mnormt_2.0.2         
##  [19] jsonlite_1.8.0        broom_0.7.12          cluster_2.1.2        
##  [22] dbplyr_2.1.1          png_0.1-7             compiler_4.1.0       
##  [25] httr_1.4.2            backports_1.4.1       assertthat_0.2.1     
##  [28] Matrix_1.4-0          fastmap_1.1.0         cli_3.0.1            
##  [31] admisc_0.26           htmltools_0.5.2       quantreg_5.88        
##  [34] tools_4.1.0           gtable_0.3.0          glue_1.4.2           
##  [37] Rcpp_1.0.8.3          cellranger_1.1.0      jquerylib_0.1.4      
##  [40] vctrs_0.3.8           nlme_3.1-155          xfun_0.30            
##  [43] rvest_1.0.2           lifecycle_1.0.1       renv_0.15.4          
##  [46] dcurver_0.9.2         polspline_1.1.19      MASS_7.3-55          
##  [49] zoo_1.8-9             scales_1.2.0          hms_1.1.1            
##  [52] parallel_4.1.0        sandwich_3.0-1        RColorBrewer_1.1-3   
##  [55] yaml_2.3.5            pbapply_1.5-0         gridExtra_2.3        
##  [58] sass_0.4.0            rpart_4.1.16          reshape_0.8.8        
##  [61] latticeExtra_0.6-29   stringi_1.7.6         highr_0.9            
##  [64] checkmate_2.0.0       permute_0.9-7         polycor_0.8-1        
##  [67] rlang_1.0.2           pkgconfig_2.0.3       evaluate_0.15        
##  [70] labeling_0.4.2        htmlwidgets_1.5.4     quantregGrowth_1.4-0 
##  [73] tidyselect_1.1.2      plyr_1.8.6            magrittr_2.0.1       
##  [76] R6_2.5.1              generics_0.1.3        multcomp_1.4-18      
##  [79] DBI_1.1.2             pillar_1.8.0          haven_2.4.3          
##  [82] foreign_0.8-82        withr_2.5.0           mgcv_1.8-39          
##  [85] nnet_7.3-17           modelr_0.1.8          crayon_1.5.0         
##  [88] utf8_1.2.2            tmvnsim_1.0-2         tzdb_0.2.0           
##  [91] rmarkdown_2.13        jpeg_0.1-9            grid_4.1.0           
##  [94] readxl_1.3.1          CDM_7.5-15            data.table_1.14.2    
##  [97] vegan_2.5-7           reprex_2.0.1          digest_0.6.29        
## [100] GPArotation_2014.11-1 munsell_0.5.0         bslib_0.3.1