In Exp 1, the director (D) talked with listener M1 for 4 rounds; in some conditions D then talked to M2 for another 4 rounds (so 4 or 8 rounds total). In Exp 2, same as above, except D sometimes then talked to M3 for a further 4 rounds (so 4, 8, or 12 rounds total). In Exp 3, D always talked with M1, M2, and M3 together for 5 rounds.
Roles are maintained within a block. The exception is Exp 3, where each group basically does the experiment twice (with different stims), with a different person as director each time.
Each block of rounds used 16 images. The Exp 1 stims are a subset of the Exp 2 stims, which are in turn a subset of the Exp 3 stims.
Exp 1 had 28 groups of 4 people, Exp 2 had 35 groups of 5 people, and Exp 3 had 14 groups of 7 people (run twice per group = 28 directors, so subId goes up to 28).
It’s a bit unclear what the right way to split this up is.
So, rather than splitting by experiment, we split by how many rounds happen, since this affects what the maximum is: 4 and 8 rounds span Exps 1 & 2, 5 rounds is just Exp 3, and 12 rounds is a subset of Exp 2.
The other comparison always uses round 4 as the endpoint (because it’s the maximum round shared across experiments). Exps 1 and 2 should be interchangeable, so the comparison is Exps 1 & 2 versus Exp 3. More people slows things down.
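For concreteness, here is a minimal sketch of that round-count grouping, assuming a trial-level data frame `d` with hypothetical `gameId` and `repNum` columns (the actual column names may differ):

```r
library(dplyr)

# Derive each game's total round count and attach it to the trial-level data.
# 4 & 8 rounds span Exps 1 & 2, 5 rounds is Exp 3, 12 rounds is the Exp 2 subset.
round_counts <- d %>%
  group_by(gameId) %>%
  summarise(n_rounds = max(repNum), .groups = "drop")

d <- d %>% left_join(round_counts, by = "gameId")
```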
Most conventions start in round 1, although we also see bumps at rounds 5 and 9, which makes sense given that a new listener comes in at those rounds.
Within a group, what predicts similarity between rounds n and n+1?
To avoid complications from switching listeners and the like, we cut things off after round 4. Note that there are still problems with targets being oddly distributed.
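The first model regresses adjacent-round similarity on round (`rep1`) and on whether the director is talking to multiple listeners (`multi`). The call below is reconstructed from the summary that follows (the formula and data name are taken directly from it); the construction of `within_adj` itself is not shown here.

```r
library(lme4)

# Adjacent-round similarity as a function of round (rep1) and multi-listener
# condition (multi), with a random intercept per target image.
m_within <- lmer(sim ~ rep1 * multi + (1 | target), data = within_adj)
summary(m_within)
```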
## Linear mixed model fit by REML ['lmerMod']
## Formula: sim ~ rep1 * multi + (1 | target)
## Data: within_adj
##
## REML criterion at convergence: -19846.2
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -6.9063 -0.4120 0.2971 0.6488 1.6427
##
## Random effects:
## Groups Name Variance Std.Dev.
## target (Intercept) 0.000464 0.02154
## Residual 0.020043 0.14157
## Number of obs: 18710, groups: target, 128
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 0.740069 0.003903 189.603
## rep1 0.066933 0.001498 44.689
## multi 0.001141 0.006161 0.185
## rep1:multi -0.008741 0.002811 -3.110
##
## Correlation of Fixed Effects:
## (Intr) rep1 multi
## rep1 -0.767
## multi -0.483 0.486
## rep1:multi 0.409 -0.533 -0.911
Descriptions become more similar in later rounds (i.e., rounds 3–4 versus earlier). This increase is slightly reduced when talking to 3 listeners at once versus one-on-one. (We could try cleverer modelling to incorporate group-level variation and rounds > 4; a sketch follows below.)
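A hedged sketch of that cleverer modelling: add a group-level random intercept plus a by-group slope for round. The `group` column name is an assumption, and extending to rounds > 4 would also mean relaxing the round-4 cutoff when building `within_adj`.

```r
# Sketch only: group-level variation via a random intercept and a by-group
# slope for round. `group` is a hypothetical grouping-identifier column.
m_within_grp <- lmer(sim ~ rep1 * multi + (rep1 | group) + (1 | target),
                     data = within_adj)
summary(m_within_grp)
```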
## Linear mixed model fit by REML ['lmerMod']
## Formula: sim ~ multi * repNum_1 + (1 | target)
## Data: across_data
##
## REML criterion at convergence: -129934.7
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -4.1533 -0.7233 0.0346 0.7171 3.0762
##
## Random effects:
## Groups Name Variance Std.Dev.
## target (Intercept) 0.006209 0.0788
## Residual 0.046313 0.2152
## Number of obs: 557500, groups: target, 128
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 0.7689591 0.0070469 109.121
## multi 0.0304038 0.0026768 11.358
## repNum_1 -0.0675171 0.0002690 -250.998
## multi:repNum_1 -0.0067623 0.0009368 -7.219
##
## Correlation of Fixed Effects:
## (Intr) multi rpNm_1
## multi -0.061
## repNum_1 -0.095 0.251
## mult:rpNm_1 0.027 -0.872 -0.287
Across groups, descriptions are less similar over rounds (again, up to round 4); more listeners means more similar descriptions and slightly slower divergence.
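For reference, the call implied by the summary above (the formula and data name are taken directly from the output):

```r
# Across-group similarity as a function of round (repNum_1) and multi-listener
# condition (multi), with a random intercept per target image.
m_across <- lmer(sim ~ multi * repNum_1 + (1 | target), data = across_data)
summary(m_across)
```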