Much preprocessing happens before any of this.

How many games

## # A tibble: 1 x 3
## # Groups:   numPlayers [1]
##   numPlayers complete partial
##        <dbl>    <int>   <int>
## 1          6       15       3
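
A sketch of the kind of tally that could produce this table (the `games` frame and its `status` coding are guesses at what the preprocessing produces):

```r
library(dplyr)

# Count complete vs. partial games per game size
games %>%
  group_by(numPlayers) %>%
  summarize(complete = sum(status == "complete"),
            partial  = sum(status == "partial"))
```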

How long full games took

## # A tibble: 1 x 7
##   numPlayers games min_time `25th_time` median_time `75th_time` max_time
##        <dbl> <int>    <dbl>       <dbl>       <dbl>       <dbl>    <dbl>
## 1          6    15       33          49          66          74      106
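
Roughly how that summary could be computed (again, `games` and a `duration_min` column are assumptions; times are in minutes):

```r
library(dplyr)

# Quantiles of game length among complete games only
games %>%
  filter(status == "complete") %>%
  group_by(numPlayers) %>%
  summarize(games       = n(),
            min_time    = min(duration_min),
            `25th_time` = quantile(duration_min, .25),
            median_time = median(duration_min),
            `75th_time` = quantile(duration_min, .75),
            max_time    = max(duration_min))
```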

Many graphs

Everything here has bootstrapped 95% CIs.
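
One plausible way to get those CIs, using `tidyboot` (a guess at the method; the grouping and outcome columns are taken from the models below):

```r
library(dplyr)
library(tidyboot)

# Bootstrapped 95% CI around mean accuracy per condition and block
acc_input %>%
  group_by(rotate, block) %>%
  tidyboot_mean(correct.num)
```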

We should find better-fitting functional forms, but for now we use quadratics to allow for some curvature.
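
The quadratics are just `ggplot2` smoothers along these lines (variable names taken from the time model below):

```r
library(ggplot2)

# Per-block means with a quadratic trend per condition
ggplot(time_input, aes(x = block, y = time, color = rotate)) +
  stat_summary(fun = mean, geom = "point") +
  geom_smooth(method = "lm", formula = y ~ poly(x, 2))
```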

Models

Warning: many of these models use less-than-maximal random-effects structures.
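
The summaries below are consistent with `brms` calls along these lines (reconstructed from the output; priors and any control arguments are not shown and may differ):

```r
library(brms)

# Accuracy: logistic regression, no random effects
acc_model <- brm(correct.num ~ block * rotate,
                 family = bernoulli(), data = acc_input)

# Response time: gaussian, no random effects
time_model <- brm(time ~ block * rotate, data = time_input)

# Speaker word count: gaussian with crossed random effects
speaker_model <- brm(words ~ block * rotate + (block | tangram) +
                       (1 | playerId) + (1 | tangram_group) + (block | gameId),
                     data = speaker_input)
```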

##  Family: bernoulli 
##   Links: mu = logit 
## Formula: correct.num ~ block * rotate 
##    Data: acc_input (Number of observations: 10850) 
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
##          total post-warmup samples = 4000
## 
## Population-Level Effects: 
##                    Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept              1.65      0.07     1.52     1.78 1.00     2335     2663
## block                  0.39      0.03     0.33     0.46 1.00     1686     2367
## rotatesingle          -0.03      0.10    -0.22     0.16 1.00     1726     2258
## block:rotatesingle     0.04      0.05    -0.05     0.13 1.00     1602     2021
## 
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

##  Family: gaussian 
##   Links: mu = identity; sigma = identity 
## Formula: time ~ block * rotate 
##    Data: time_input (Number of observations: 9779) 
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
##          total post-warmup samples = 4000
## 
## Population-Level Effects: 
##                    Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept             58.12      0.71    56.70    59.49 1.00     2056     2605
## block                 -7.63      0.25    -8.11    -7.14 1.00     2017     2271
## rotatesingle           2.56      0.97     0.67     4.46 1.00     1800     2310
## block:rotatesingle    -3.22      0.33    -3.88    -2.58 1.00     1717     2145
## 
## Family Specific Parameters: 
##       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma    27.87      0.20    27.48    28.26 1.00     3088     2272
## 
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

##  Family: gaussian 
##   Links: mu = identity; sigma = identity 
## Formula: words ~ block * rotate + (block | tangram) + (1 | playerId) + (1 | tangram_group) + (block | gameId) 
##    Data: speaker_input (Number of observations: 2170) 
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
##          total post-warmup samples = 4000
## 
## Group-Level Effects: 
## ~gameId (Number of levels: 36) 
##                      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)            5.21      1.29     2.61     7.74 1.01      578      314
## sd(block)                1.32      0.27     0.87     1.91 1.00     1189     2092
## cor(Intercept,block)    -0.77      0.16    -0.98    -0.35 1.01      510      986
## 
## ~playerId (Number of levels: 100) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     5.01      0.61     3.94     6.32 1.01      717     1273
## 
## ~tangram (Number of levels: 12) 
##                      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)            5.85      1.43     3.81     9.30 1.00      845     1414
## sd(block)                0.87      0.24     0.50     1.44 1.00     1023     1560
## cor(Intercept,block)    -0.93      0.08    -1.00    -0.72 1.00     1525     1843
## 
## ~tangram_group (Number of levels: 432) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     3.95      0.32     3.33     4.58 1.00     1394     1915
## 
## Population-Level Effects: 
##                    Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept             24.82      2.34    20.23    29.48 1.00      743     1362
## block                 -3.52      0.56    -4.64    -2.46 1.00      964     1812
## rotatesingle           4.98      2.39     0.29     9.59 1.00     1119     1741
## block:rotatesingle    -1.85      0.62    -3.05    -0.62 1.00     1175     2006
## 
## Family Specific Parameters: 
##       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma     8.89      0.15     8.60     9.19 1.00     2807     2463
## 
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

Do people reduce more when listeners do better?

These look pretty different from Robert’s. The word count measure is fairly coarse (we eliminate blatant chitchat, but didn’t subdivide lines that contain some referential language). More is said when people get things wrong than when they get things right, but it’s unclear whether that reflects reduction per se, how hard the tangram was, or where the pair was starting from.
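
One rough way to probe this, assuming `speaker_input` also carries a per-round correctness column like `correct.num` (that column and the lag logic are assumptions):

```r
library(dplyr)

# For each tangram within a game, record whether the previous round was correct
reduction_input <- speaker_input %>%
  group_by(gameId, tangram) %>%
  arrange(block, .by_group = TRUE) %>%
  mutate(prev_correct = lag(correct.num)) %>%
  ungroup()

# Does word count drop faster after a correct round than an incorrect one?
summary(lm(words ~ block * prev_correct, data = reduction_input))
```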

For bonus fun, look at all the rotate data.

NLP prep

Content analyses

Of the words the speaker says in the last round, in which earlier rounds did the speaker say them for the same tangram?
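
A sketch of the matching logic, assuming an `utterances` frame with one row per speaker utterance (a `text` column, plus the `gameId`/`tangram`/`block` columns from the models):

```r
library(dplyr)
library(tidytext)

tokens <- utterances %>%
  unnest_tokens(word, text)

# Earliest round in which the speaker used each word for each tangram
first_use <- tokens %>%
  group_by(gameId, tangram, word) %>%
  summarize(first_round = min(block), .groups = "drop")

# Words used in the final round, matched back to their first use
tokens %>%
  group_by(gameId, tangram) %>%
  filter(block == max(block)) %>%
  ungroup() %>%
  distinct(gameId, tangram, word) %>%
  left_join(first_use, by = c("gameId", "tangram", "word")) %>%
  count(first_round)
```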

(TODO vector analysis)