To generate the Stan code for modeling experiment and plate effects, I used the following model.

Y ~ (1|experiment/plate)
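
In brms, the Stan code for this formula can be generated with make_stancode. A minimal sketch, where d stands in for a hypothetical data frame with columns Y, experiment, and plate:

# d is a hypothetical data frame with columns Y, experiment, and plate
make_stancode(Y ~ (1 | experiment/plate), data = d)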

I made this model under the assumption that the relationship between plates and experiments replicates the relationship between “cask” and “batch” in the Pastes dataset in lme4.
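
The packages used below:

library(lme4, quietly = T)
library(brms, quietly = T)
library(tidyverse, quietly = T)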

See the following “Pastes” data from lme4. This was the only example I found in lme4 that had only nested categorical variables, like experiment and plate, and no continuous regressors.

“Strength of a chemical paste product; its quality depending on the delivery batch, and the cask within the delivery. 3 casks selected at random from each delivery were sampled and the samples were kept for reference. Ten of the delivery batches were sampled at random and two analytical tests carried out on each of the 30 samples.”
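
Here strength is the response; batch is the delivery batch from which the sample was taken, a factor with 10 levels (‘A’ to ‘J’); cask is a cask within the delivery batch, a factor with 3 levels (‘a’ to ‘c’); and sample is the sample of paste whose strength was assayed, two assays per sample, a factor with 30 levels (‘A:a’ to ‘J:c’).

data(Pastes)
.data <- as_tibble(Pastes)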

I used this example to reason through how we model experiments and plates: batches correspond to experiments and casks correspond to plates.

Note that casks are encoded as factors, with levels ‘a’, ‘b’, and ‘c’ nested completely in batches.

levels(.data$cask)
[1] "a" "b" "c"

In contrast, the sample column gives a unique id for each cask, in this case created by concatenating the batch and cask columns.

levels(.data$sample)
 [1] "A:a" "A:b" "A:c" "B:a" "B:b" "B:c" "C:a" "C:b" "C:c" "D:a" "D:b" "D:c" "E:a" "E:b" "E:c" "F:a" "F:b" "F:c" "G:a"
[20] "G:b" "G:c" "H:a" "H:b" "H:c" "I:a" "I:b" "I:c" "J:a" "J:b" "J:c"

In other words, both ways of encoding the nested variable are present: levels nested within the parent variable (cask), and unique labels for each instance of the nested variable (sample).

The latter ‘unique’ case matches the labeling of our plates with unique ZIDs. The former ‘within-parent-levels’ case matches the labeling of row and column indices as integers between 1 and 12.

This example in lme4 comes with four equivalent formulas:
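
# unique levels for cask
fm1 <- strength ~ (1|batch) + (1|sample)
fm2 <- strength ~ (1|batch/sample)
# nested levels for cask
fm3 <- strength ~ (1|batch) + (1|batch:cask)
fm4 <- strength ~ (1|batch/cask)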

You’ll note that brms generates the same code for each of them.

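code1 <- make_stancode(fm1, .data)
code2 <- make_stancode(fm2, .data)
code3 <- make_stancode(fm3, .data)
code4 <- make_stancode(fm4, .data)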
code1 == code2; code2 == code3; code3 == code4
[1] TRUE
[1] TRUE
[1] TRUE

The generated code follows; I model experiments and plates using this code.

code2
// generated with brms 1.9.0
functions { 
} 
data { 
  int<lower=1> N;  // total number of observations 
  vector[N] Y;  // response variable 
  // data for group-level effects of ID 1 
  int<lower=1> J_1[N]; 
  int<lower=1> N_1; 
  int<lower=1> M_1; 
  vector[N] Z_1_1; 
  // data for group-level effects of ID 2 
  int<lower=1> J_2[N]; 
  int<lower=1> N_2; 
  int<lower=1> M_2; 
  vector[N] Z_2_1; 
  int prior_only;  // should the likelihood be ignored? 
} 
transformed data { 
} 
parameters { 
  real temp_Intercept;  // temporary intercept 
  real<lower=0> sigma;  // residual SD 
  vector<lower=0>[M_1] sd_1;  // group-level standard deviations 
  vector[N_1] z_1[M_1];  // unscaled group-level effects 
  vector<lower=0>[M_2] sd_2;  // group-level standard deviations 
  vector[N_2] z_2[M_2];  // unscaled group-level effects 
} 
transformed parameters { 
  // group-level effects 
  vector[N_1] r_1_1 = sd_1[1] * (z_1[1]); 
  // group-level effects 
  vector[N_2] r_2_1 = sd_2[1] * (z_2[1]); 
} 
model { 
  vector[N] mu = rep_vector(0, N) + temp_Intercept; 
  for (n in 1:N) { 
    mu[n] = mu[n] + (r_1_1[J_1[n]]) * Z_1_1[n] + (r_2_1[J_2[n]]) * Z_2_1[n]; 
  } 
  // priors including all constants 
  target += student_t_lpdf(sigma | 3, 0, 10)
    - 1 * student_t_lccdf(0 | 3, 0, 10); 
  target += student_t_lpdf(sd_1 | 3, 0, 10)
    - 1 * student_t_lccdf(0 | 3, 0, 10); 
  target += normal_lpdf(z_1[1] | 0, 1); 
  target += student_t_lpdf(sd_2 | 3, 0, 10)
    - 1 * student_t_lccdf(0 | 3, 0, 10); 
  target += normal_lpdf(z_2[1] | 0, 1); 
  // likelihood including all constants 
  if (!prior_only) { 
    target += normal_lpdf(Y | mu, sigma); 
  } 
} 
generated quantities { 
  // actual population-level intercept 
  real b_Intercept = temp_Intercept; 
} 

J_1 is just the integer index for batch, which can be checked by plotting it against the integer-coded batch factor:
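plot(as.integer(.data$batch), make_standata(fm2, .data)$J_1)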

And J_2 is just the integer index for the unique cask id (sample), checked the same way:
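plot(as.integer(.data$sample), make_standata(fm2, .data)$J_2)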
