PREVIOUS: https://rpubs.com/sherloconan/1117715

THIS: https://rpubs.com/sherloconan/1151407

MORE: https://github.com/zhengxiaoUVic/Bayesian

1. One-Way Repeated-Measures Designs

 

1.1 Maximal Set of Random Effects (MRE) Model

\[\begin{equation} \tag{1} \mathcal{M}_{\mathrm{full}}:\ Y_{ikr}=\mu+s_k+\alpha_i+(\alpha s)_{ik}+\epsilon_{ikr},\quad \epsilon_{ikr}\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,\ \sigma_\epsilon^2) \end{equation}\]

for subject \(k=1,\dotsb,N\), condition \(i=1,\dotsb,I\), and trial \(r=1,\dotsb,n\).

The number of trials \(n\) refers to the balanced count of observations within each cell of the experimental design, where a “cell” represents a combination of treatment levels across all factors in multiway designs.

  • When \(n=1\) or when the average response in each cell is calculated, the data become aggregated.

The process of aggregation confounds the random slope variance with residual variance (van Doorn et al., 2021, p. 7; separability, Allen et al., 2023, p. 1785; Shang & Zhou, 2022, p. 12).
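As an illustration, aggregation is simply averaging over trials. A minimal sketch (the long-format data frame dat and its columns DV, ID, and A are hypothetical):

dat_agg <- aggregate(DV ~ ID + A, data=dat, FUN=mean) #one mean per subject-by-condition cell, so n becomes 1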

In Eq. 2, the highest-order interaction between the fixed and random effects, i.e., the random slope \((\alpha s)_{ik}\), cannot be separated from the error term \(\epsilon_{ik}\) (e.g., Loftus & Masson, 1994, p. 488; van den Bergh et al., 2023, p. 7).

\[\begin{equation} \tag{2} \mathcal{M}_{\mathrm{full}}^*:\ Y_{ik}=\mu+s_k+\alpha_i+\epsilon_{ik},\quad \epsilon_{ik}\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,\ \sigma_\epsilon^2) \end{equation}\]
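Equivalently, assuming independent normal random slopes \((\alpha s)_{ik}\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,\ \sigma_{\alpha s}^2)\), the two terms collapse into a single residual whose variance mixes both components:

\[(\alpha s)_{ik}+\epsilon_{ik}=\epsilon_{ik}^*,\qquad \epsilon_{ik}^*\sim\mathcal{N}\!\left(0,\ \sigma_{\alpha s}^2+\sigma_\epsilon^2\right)\]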

 

1.2 One-Way Repeated-Measures Analysis of Variance (RM-ANOVA)

str(sleep) #one-way repeated-measures data with two levels
## 'data.frame':    20 obs. of  3 variables:
##  $ extra: num  0.7 -1.6 -0.2 -1.2 -0.1 3.4 3.7 0.8 0 2 ...
##  $ group: Factor w/ 2 levels "1","2": 1 1 1 1 1 1 1 1 1 1 ...
##  $ ID   : Factor w/ 10 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
summary(aov(extra ~ group + Error(ID/group), sleep)) #Eq. 2
## 
## Error: ID
##           Df Sum Sq Mean Sq F value Pr(>F)
## Residuals  9  58.08   6.453               
## 
## Error: ID:group
##           Df Sum Sq Mean Sq F value  Pr(>F)   
## group      1 12.482  12.482    16.5 0.00283 **
## Residuals  9  6.808   0.756                   
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

 

1.3 Linear Mixed-Effects Regression (LMER) using the “lme4” and “nlme” R Packages

RM-ANOVA and LMER return the same p-value of 0.00283.
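This agreement is expected here: with a single two-level within-subject factor and balanced data, the RM-ANOVA F statistic (1 numerator df) equals the square of the LMER t statistic for the condition effect,

\[t^2=4.062^2\approx 16.5=F.\]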

summary(lme4::lmer(extra ~ group + (1 | ID), sleep)) #Eq. 2, random intercept for each subject
## Linear mixed model fit by REML ['lmerMod']
## Formula: extra ~ group + (1 | ID)
##    Data: sleep
## 
## REML criterion at convergence: 70
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -1.63372 -0.34157  0.03346  0.31511  1.83859 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  ID       (Intercept) 2.8483   1.6877  
##  Residual             0.7564   0.8697  
## Number of obs: 20, groups:  ID, 10
## 
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)   0.7500     0.6004   1.249
## group2        1.5800     0.3890   4.062
## 
## Correlation of Fixed Effects:
##        (Intr)
## group2 -0.324
summary(nlme::lme(fixed=extra ~ group, random=~1 | ID, sleep)) #Eq. 2
## Linear mixed-effects model fit by REML
##   Data: sleep 
##        AIC      BIC    logLik
##   77.95588 81.51737 -34.97794
## 
## Random effects:
##  Formula: ~1 | ID
##         (Intercept)  Residual
## StdDev:      1.6877 0.8697384
## 
## Fixed effects:  extra ~ group 
##             Value Std.Error DF  t-value p-value
## (Intercept)  0.75 0.6003979  9 1.249172  0.2431
## group2       1.58 0.3889588  9 4.062127  0.0028
##  Correlation: 
##        (Intr)
## group2 -0.324
## 
## Standardized Within-Group Residuals:
##         Min          Q1         Med          Q3         Max 
## -1.63372282 -0.34157076  0.03346151  0.31510644  1.83858572 
## 
## Number of Observations: 20
## Number of Groups: 10
summary(lme4::lmer(extra ~ group + (group | ID), sleep)) #Eq. 1, MRE (error)
## Error in h(simpleError(msg, call)): error in evaluating the argument 'object' in selecting a method for function 'summary': number of observations (=20) <= number of random effects (=20) for term (group | ID); the random-effects parameters and the residual variance (or scale parameter) are probably unidentifiable
summary(nlme::lme(fixed=extra ~ group, random=~group | ID, sleep)) #Eq. 1 (no error?)
## Linear mixed-effects model fit by REML
##   Data: sleep 
##        AIC     BIC    logLik
##   81.64947 86.9917 -34.82473
## 
## Random effects:
##  Formula: ~group | ID
##  Structure: General positive-definite, Log-Cholesky parametrization
##             StdDev    Corr  
## (Intercept) 1.6931968 (Intr)
## group2      0.9195700 -0.012
## Residual    0.5776158       
## 
## Fixed effects:  extra ~ group 
##             Value Std.Error DF  t-value p-value
## (Intercept)  0.75 0.5657345  9 1.325710  0.2176
## group2       1.58 0.3889587  9 4.062128  0.0028
##  Correlation: 
##        (Intr)
## group2 -0.16 
## 
## Standardized Within-Group Residuals:
##         Min          Q1         Med          Q3         Max 
## -1.15969006 -0.18303792 -0.02699051  0.25006831  1.15100310 
## 
## Number of Observations: 20
## Number of Groups: 10

The “maximal” (overparameterized) model Eq. 1, investigated by Bates et al. (2015) and Matuschek et al. (2017), contains three parts: (1) random intercepts, (2) random slopes, and (3) the full variance-covariance structure of random effects.
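Part (3) means that, for example, a subject’s random intercept and random slope are treated as correlated draws from a bivariate normal distribution. A sketch of that structure, writing \((\alpha s)_k\) for a single contrast-coded slope per subject and \(\rho\) for the intercept-slope correlation:

\[\begin{pmatrix} s_k \\ (\alpha s)_k \end{pmatrix}\overset{\text{i.i.d.}}{\sim}\mathcal{N}\!\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix},\ \begin{pmatrix} \sigma_s^2 & \rho\,\sigma_s\sigma_{\alpha s} \\ \rho\,\sigma_s\sigma_{\alpha s} & \sigma_{\alpha s}^2 \end{pmatrix}\right)\]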

 

1.4 Bayesian RM-ANOVA

set.seed(277)
(fit2 <- lmBF(extra ~ group + ID,            data=sleep, whichRandom="ID", progress=F)) #Eq. 2
## Bayes factor analysis
## --------------
## [1] group + ID : 29.28089 ±0.68%
## 
## Against denominator:
##   Intercept only 
## ---
## Bayes factor type: BFlinearModel, JZS
set.seed(277)
(fit1 <- lmBF(extra ~ group + ID + group:ID, data=sleep, whichRandom="ID", progress=F)) #Eq. 1
## Bayes factor analysis
## --------------
## [1] group + ID + group:ID : 11.26643 ±2.25%
## 
## Against denominator:
##   Intercept only 
## ---
## Bayes factor type: BFlinearModel, JZS
#the Bayes factor comparing the models without (Eq. 2) and with (Eq. 1) the highest-order repeated-measures interaction
fit2 / fit1
## Bayes factor analysis
## --------------
## [1] group + ID : 2.598951 ±2.35%
## 
## Against denominator:
##   extra ~ group + ID + group:ID 
## ---
## Bayes factor type: BFlinearModel, JZS
set.seed(277)
draws <- posterior(fit1, iterations=1e5, progress=F) #sample from the posterior
grp1_ID1 <- draws[,"group:ID-1.&.1"] #MCMC draws for the random slope of subject 1 under level 1
hist(grp1_ID1, prob=T, col="white", yaxt="n", ylab="", ylim=c(0,1.12),
     xlab="Subject 1, Level 1 (100,000)", main="Estimates of the Random Slope in Eq. 1")
lines(density(grp1_ID1), col="red", lwd=2) #plot the distribution of A:ID

Technically, in the Bayesian framework, random slopes can be included even for aggregated data; in that case, their estimates are informed entirely by the prior distribution (van Doorn et al., 2021, p. 5).
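For instance, the prior scales that inform these estimates can be stated explicitly in lmBF; a sketch using the package defaults ("medium" for fixed effects, "nuisance" for random effects), which reproduces fit1 above:

set.seed(277)
lmBF(extra ~ group + ID + group:ID, data=sleep, whichRandom="ID",
     rscaleFixed="medium", rscaleRandom="nuisance", progress=F) #default prior scales made explicit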

   

Effect   \(p\)-value   \(\textit{BF}_{10}\) based on Eq. 2   \(\textit{BF}_{10}\) based on Eq. 1 including A:ID
A        0.00283       \(11.47\pm 0.68\%\)                   \(8.06\pm 2.84\%\)

 

1.4.1 Testing Factor A

The model in Eq. 2 versus the model excluding A.

A: group

set.seed(277)
anovaBF(extra ~ group + ID, data=sleep, whichRandom="ID", progress=F)
## Bayes factor analysis
## --------------
## [1] group + ID : 11.4676 ±0.68%
## 
## Against denominator:
##   extra ~ ID 
## ---
## Bayes factor type: BFlinearModel, JZS

 

The full model in Eq. 1 versus the model that excludes only A.

set.seed(277)
fit1.omit <- lmBF(extra ~ ID + group:ID, data=sleep, whichRandom="ID", progress=F)
fit1 / fit1.omit
## Bayes factor analysis
## --------------
## [1] group + ID + group:ID : 8.056601 ±2.84%
## 
## Against denominator:
##   extra ~ ID + group:ID 
## ---
## Bayes factor type: BFlinearModel, JZS

   

2. Two-Way Repeated-Measures Designs

 

2.1 MRE Model

\[\begin{equation} \tag{3} \mathcal{M}_{\mathrm{full}}:\ Y_{ijkr}=\mu+s_k+\alpha_i+\beta_j+(\alpha\beta)_{ij}+(\alpha s)_{ik}+(\beta s)_{jk}+(\alpha\beta s)_{ijk}+\epsilon_{ijkr},\quad \epsilon_{ijkr}\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,\ \sigma_\epsilon^2) \end{equation}\]

for subject \(k=1,\dotsb,N\), level of the first factor \(i=1,\dotsb,I\), level of the second factor \(j=1,\dotsb,J\), and trial \(r=1,\dotsb,n\).

\[\begin{equation} \tag{4} \mathcal{M}_{\mathrm{full}}^*:\ Y_{ijk}=\mu+s_k+\alpha_i+\beta_j+(\alpha\beta)_{ij}+(\alpha s)_{ik}+(\beta s)_{jk}+\epsilon_{ijk},\quad \epsilon_{ijk}\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,\ \sigma_\epsilon^2) \end{equation}\]

when \(n=1\) or the data are aggregated to cell means.

 

2.2 Two-Way RM-ANOVA using the “ez” R Package

data(puzzles) #2×2 repeated-measures data
str(puzzles)
## 'data.frame':    48 obs. of  4 variables:
##  $ RT   : num  49 47 46 47 48 47 41 46 43 47 ...
##  $ ID   : Factor w/ 12 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
##  $ shape: Factor w/ 2 levels "round","square": 1 1 1 1 1 1 1 1 1 1 ...
##  $ color: Factor w/ 2 levels "color","monochromatic": 2 2 2 2 2 2 2 2 2 2 ...
ezANOVA(puzzles, dv=RT, wid=ID, within=.(shape, color),
        type=2, detailed=T, return_aov=T)
## $ANOVA
##        Effect DFn DFd          SSn   SSd            F            p p<.05
## 1 (Intercept)   1  11 9.720000e+04 226.5 4.720530e+03 7.708423e-16     *
## 2       shape   1  11 1.200000e+01  17.5 7.542857e+00 1.901175e-02     *
## 3       color   1  11 1.200000e+01   9.5 1.389474e+01 3.337568e-03     *
## 4 shape:color   1  11 2.423381e-27  30.5 8.740062e-28 1.000000e+00      
##            ges
## 1 9.970867e-01
## 2 4.054054e-02
## 3 4.054054e-02
## 4 8.533031e-30
## 
## $aov
## 
## Call:
## aov(formula = formula(aov_formula), data = data)
## 
## Grand Mean: 45
## 
## Stratum 1: ID
## 
## Terms:
##                 Residuals
## Sum of Squares      226.5
## Deg. of Freedom        11
## 
## Residual standard error: 4.537721
## 
## Stratum 2: ID:shape
## 
## Terms:
##                 shape Residuals
## Sum of Squares   12.0      17.5
## Deg. of Freedom     1        11
## 
## Residual standard error: 1.261312
## 1 out of 2 effects not estimable
## Estimated effects are balanced
## 
## Stratum 3: ID:color
## 
## Terms:
##                 color Residuals
## Sum of Squares   12.0       9.5
## Deg. of Freedom     1        11
## 
## Residual standard error: 0.9293204
## 1 out of 2 effects not estimable
## Estimated effects are balanced
## 
## Stratum 4: ID:shape:color
## 
## Terms:
##                 shape:color Residuals
## Sum of Squares          0.0      30.5
## Deg. of Freedom           1        11
## 
## Residual standard error: 1.665151
## Estimated effects are balanced

 

2.3 LMER using the “lmerTest” R Package

summary(lmer(RT ~ shape * color + (shape + color || ID), puzzles)) #Eq. 4
## boundary (singular) fit: see help('isSingular')
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: RT ~ shape * color + (shape + color || ID)
##    Data: puzzles
## 
## REML criterion at convergence: 185.1
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -1.6307 -0.6114  0.1701  0.5202  1.6773 
## 
## Random effects:
##  Groups   Name               Variance  Std.Dev.  Corr
##  ID       (Intercept)        2.914e-07 5.398e-04     
##  ID.1     shaperound         0.000e+00 0.000e+00     
##           shapesquare        5.769e-09 7.595e-05  NaN
##  ID.2     colorcolor         5.777e+00 2.404e+00     
##           colormonochromatic 3.793e+00 1.947e+00 1.00
##  Residual                    1.669e+00 1.292e+00     
## Number of obs: 48, groups:  ID, 12
## 
## Fixed effects:
##                                  Estimate Std. Error         df t value
## (Intercept)                     4.500e+01  7.878e-01  1.391e+01  57.124
## shapesquare                    -1.000e+00  5.275e-01  3.300e+01  -1.896
## colormonochromatic              1.000e+00  5.437e-01  3.120e+01   1.839
## shapesquare:colormonochromatic -3.494e-14  7.460e-01  3.300e+01   0.000
##                                Pr(>|t|)    
## (Intercept)                      <2e-16 ***
## shapesquare                      0.0668 .  
## colormonochromatic               0.0754 .  
## shapesquare:colormonochromatic   1.0000    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) shpsqr clrmnc
## shapesquare -0.335              
## clrmnchrmtc -0.538  0.485       
## shpsqr:clrm  0.237 -0.707 -0.686
## optimizer (nloptwrap) convergence code: 0 (OK)
## boundary (singular) fit: see help('isSingular')
summary(lmer(RT ~ shape * color + (shape * color || ID), puzzles)) #Eq. 3 (error)
## Error in h(simpleError(msg, call)): error in evaluating the argument 'object' in selecting a method for function 'summary': number of observations (=48) <= number of random effects (=48) for term (0 + shape:color | ID); the random-effects parameters and the residual variance (or scale parameter) are probably unidentifiable

Note: The “lme4” R package does not provide p-values. Try “lmerTest”.

Single | leaves correlations free; double || sets correlations to 0.
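For comparison, a single-bar version of the Eq. 4 specification estimates those correlations; a sketch (on these aggregated data it may likewise produce a singular fit):

summary(lmer(RT ~ shape * color + (shape + color | ID), puzzles)) #correlations among the random effects are estimated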

The “BayesFactor” R package does not explicitly model correlations between random intercepts and random slopes. The “brms” and “BayesRS” packages do.
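A minimal sketch with “brms” (assuming the package is installed; by default it places an LKJ prior on the correlation matrix of the by-subject effects):

library(brms)
fit_brm <- brm(RT ~ shape * color + (shape * color | ID), data=puzzles,
               chains=4, cores=4, seed=277) #maximal by-subject structure with correlations modeled explicitly
summary(fit_brm)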

The boundary (singular) fit warning indicates that the random-effects variance-covariance matrix was estimated at the boundary of its parameter space: one or more variances are (nearly) zero and/or a correlation is at ±1, so the matrix is singular rather than positive definite.
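The offending components can be inspected directly; a sketch that refits the singular model from above:

fit.sing <- lmer(RT ~ shape * color + (shape + color || ID), puzzles)
isSingular(fit.sing) #TRUE when a variance is estimated at ~0 or a correlation at ±1
VarCorr(fit.sing)    #shows which components collapsed to the boundary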

 

2.4 Bayesian RM-ANOVA

set.seed(277)
(fit4 <- lmBF(RT ~ shape * color + ID + shape:ID + color:ID,
              data=puzzles, whichRandom="ID", progress=F)) #Eq. 4
## Bayes factor analysis
## --------------
## [1] shape * color + ID + shape:ID + color:ID : 3530.147 ±2.88%
## 
## Against denominator:
##   Intercept only 
## ---
## Bayes factor type: BFlinearModel, JZS
set.seed(277)
(fit3 <- lmBF(RT ~ shape * color + ID + shape:ID + color:ID + shape:color:ID,
              data=puzzles, whichRandom="ID", progress=F)) #Eq. 3
## Bayes factor analysis
## --------------
## [1] shape * color + ID + shape:ID + color:ID + shape:color:ID : 101115 ±8.13%
## 
## Against denominator:
##   Intercept only 
## ---
## Bayes factor type: BFlinearModel, JZS
#the Bayes factor comparing the models with (Eq. 3) and without (Eq. 4) the highest-order repeated-measures interaction
fit3 / fit4
## Bayes factor analysis
## --------------
## [1] shape * color + ID + shape:ID + color:ID + shape:color:ID : 28.64329 ±8.63%
## 
## Against denominator:
##   RT ~ shape * color + ID + shape:ID + color:ID 
## ---
## Bayes factor type: BFlinearModel, JZS

   

Effect   \(p\)-value   \(\textit{BF}_{10}\) based on Eq. 4   \(\textit{BF}_{10}\) based on Eq. 3 including A:B:ID
AB       1             \(0.38\pm 3.24\%\)                    \(0.55\pm 12.87\%\)
A        0.0668        \(1.83\pm 1.76\%\)                    \(4.09\pm 11.15\%\)
B        0.0754        \(2.17\pm 1.75\%\)                    \(8.40\pm 10.48\%\)

 

2.4.1 Testing Interaction AB

The model in Eq. 4 versus the model that excludes only AB.

set.seed(277)
fit4.omitAB <- lmBF(RT ~ shape + color + ID + shape:ID + color:ID,
                    data=puzzles, whichRandom="ID", progress=F)
fit4 / fit4.omitAB
## Bayes factor analysis
## --------------
## [1] shape * color + ID + shape:ID + color:ID : 0.3752346 ±3.24%
## 
## Against denominator:
##   RT ~ shape + color + ID + shape:ID + color:ID 
## ---
## Bayes factor type: BFlinearModel, JZS

 

The full model in Eq. 3 versus the model that excludes only AB.

set.seed(277)
fit3.omitAB <- lmBF(RT ~ shape + color + ID + shape:ID + color:ID + shape:color:ID,
                    data=puzzles, whichRandom="ID", progress=F)
fit3 / fit3.omitAB
## Bayes factor analysis
## --------------
## [1] shape * color + ID + shape:ID + color:ID + shape:color:ID : 0.5486214 ±12.87%
## 
## Against denominator:
##   RT ~ shape + color + ID + shape:ID + color:ID + shape:color:ID 
## ---
## Bayes factor type: BFlinearModel, JZS

   

2.4.2 Testing Factor A

Based on the Eq. 4 structure: the model that drops AB versus the model that drops both A and AB.

A: shape

set.seed(277)
fit4.omitA <- lmBF(RT ~ color + ID + shape:ID + color:ID,
                   data=puzzles, whichRandom="ID", progress=F)
fit4.omitAB / fit4.omitA
## Bayes factor analysis
## --------------
## [1] shape + color + ID + shape:ID + color:ID : 1.834849 ±1.76%
## 
## Against denominator:
##   RT ~ color + ID + shape:ID + color:ID 
## ---
## Bayes factor type: BFlinearModel, JZS

 

Based on the Eq. 3 structure: the model that drops AB versus the model that drops both A and AB.

set.seed(277)
fit3.omitA <- lmBF(RT ~ color + ID + shape:ID + color:ID + shape:color:ID,
                   data=puzzles, whichRandom="ID", progress=F)
fit3.omitAB / fit3.omitA
## Bayes factor analysis
## --------------
## [1] shape + color + ID + shape:ID + color:ID + shape:color:ID : 4.086652 ±11.15%
## 
## Against denominator:
##   RT ~ color + ID + shape:ID + color:ID + shape:color:ID 
## ---
## Bayes factor type: BFlinearModel, JZS

   

2.4.3 Testing Factor B

Based on the Eq. 4 structure: the model that drops AB versus the model that drops both B and AB.

B: color

set.seed(277)
fit4.omitB <- lmBF(RT ~ shape + ID + shape:ID + color:ID,
                   data=puzzles, whichRandom="ID", progress=F)
fit4.omitAB / fit4.omitB
## Bayes factor analysis
## --------------
## [1] shape + color + ID + shape:ID + color:ID : 2.174268 ±1.75%
## 
## Against denominator:
##   RT ~ shape + ID + shape:ID + color:ID 
## ---
## Bayes factor type: BFlinearModel, JZS

 

Based on the Eq. 3 structure: the model that drops AB versus the model that drops both B and AB.

set.seed(277)
fit3.omitB <- lmBF(RT ~ shape + ID + shape:ID + color:ID + shape:color:ID,
                   data=puzzles, whichRandom="ID", progress=F)
fit3.omitAB / fit3.omitB
## Bayes factor analysis
## --------------
## [1] shape + color + ID + shape:ID + color:ID + shape:color:ID : 8.395337 ±10.48%
## 
## Against denominator:
##   RT ~ shape + ID + shape:ID + color:ID + shape:color:ID 
## ---
## Bayes factor type: BFlinearModel, JZS

   

3. Simulations

 

3.1 Aggregated 2×2 Repeated-Measures Data

sim <- function(N, n, B1, B2=0.5, sd=0.5, seed=277) {
  #' Input -
  #' N:    number of subjects
  #' n:    number of trials
  #' B1:   effect size of interest
  #' B2:   fixed effect of the second factor
  #' sd:   between-subjects standard deviation
  #' seed: random seed
  #' 
  #' Output - 2×2 repeated-measures data in the long format
  
  C1 <- C2 <- c(-.5, .5) #contrast-coded independent variables and no interaction
  set.seed(seed)
  s <- rnorm(N, 0, sd) #random intercepts
  b1 <- rnorm(N, B1, sd) #random slopes of factor A
  b2 <- rnorm(N, B2, sd) #random slopes of factor B
  M <- s + cbind(b1, b2) %*% rbind(rep(C1, 2), rep(C2, each=2)) #cell means
  data.frame("DV"=rnorm(N*4*n, c(M), 1), #a1b1, a2b1, a1b2, a2b2; error sd = 1
             "ID"=rep(paste0("s",1:N), 4*n),
             "A"=rep(rep(c("a1","a2"), each=N), 2*n),
             "B"=rep(rep(c("b1","b2"), each=N*2), n),
             "Trial"=rep(paste0("r",1:n), each=N*4),
             stringsAsFactors=T)
}


calBF <- function(data, iter=1e5, seed=277, ...) {
  #' Input -
  #' data:    2×2 repeated-measures data in the long format
  #' iter:    number of Monte Carlo simulation draws
  #' seed:    random seed
  #' 
  #' Output - a table of Bayes factors: the model without vs with the
  #'          highest-order repeated-measures interaction (row ABID), and
  #'          BF10 for AB, A, and B under the Eq. 4 and Eq. 3 structures
  set.seed(seed)
  fit6 <- lmBF(DV ~ A * B + ID + A:ID + B:ID,
               data=data, whichRandom="ID", iterations=iter, progress=F, ...) #Eq. 4

  set.seed(seed)
  fit5 <- lmBF(DV ~ A * B + ID + A:ID + B:ID + A:B:ID,
               data=data, whichRandom="ID", iterations=iter, progress=F, ...) #Eq. 3

  BF_hi <- fit6 / fit5 #Bayes factor favoring the model without A:B:ID (Eq. 4) over the model with it (Eq. 3)
  
  set.seed(seed)
  fit6.omitAB <- lmBF(DV ~ A + B + ID + A:ID + B:ID,
                      data=data, whichRandom="ID", iterations=iter, progress=F, ...)

  set.seed(seed)
  fit5.omitAB <- lmBF(DV ~ A + B + ID + A:ID + B:ID + A:B:ID,
                      data=data, whichRandom="ID", iterations=iter, progress=F, ...)
  
  BF_AB5 <- fit5 / fit5.omitAB
  BF_AB6 <- fit6 / fit6.omitAB #BF10 for testing A:B
  
  set.seed(seed)
  fit6.omitA <- lmBF(DV ~ B + ID + A:ID + B:ID,
                     data=data, whichRandom="ID", iterations=iter, progress=F, ...)

  set.seed(seed)
  fit5.omitA <- lmBF(DV ~ B + ID + A:ID + B:ID + A:B:ID,
                     data=data, whichRandom="ID", iterations=iter, progress=F, ...)
  
  BF_A5 <- fit5.omitAB / fit5.omitA
  BF_A6 <- fit6.omitAB / fit6.omitA #BF10 for testing A
  
  set.seed(seed)
  fit6.omitB <- lmBF(DV ~ A + ID + A:ID + B:ID,
                     data=data, whichRandom="ID", iterations=iter, progress=F, ...)

  set.seed(seed)
  fit5.omitB <- lmBF(DV ~ A + ID + A:ID + B:ID + A:B:ID,
                     data=data, whichRandom="ID", iterations=iter, progress=F, ...)
  
  BF_B5 <- fit5.omitAB / fit5.omitB
  BF_B6 <- fit6.omitAB / fit6.omitB #BF10 for testing B
  
  report <- data.frame("BF excl vs incl"=c(extractBF(BF_hi)$bf,NA,NA,NA),
                       "BF10 Eq 4"=c(NA,extractBF(BF_AB6)$bf,
                                     extractBF(BF_A6)$bf,
                                     extractBF(BF_B6)$bf),
                       "BF10 Eq 3"=c(NA,extractBF(BF_AB5)$bf,
                                     extractBF(BF_A5)$bf,
                                     extractBF(BF_B5)$bf),
                       "BF err"=c(extractBF(BF_hi)$error,NA,NA,NA),
                       "BF10 err Eq 4"=c(NA,extractBF(BF_AB6)$error,
                                         extractBF(BF_A6)$error,
                                         extractBF(BF_B6)$error),
                       "BF10 err Eq 3"=c(NA,extractBF(BF_AB5)$error,
                                         extractBF(BF_A5)$error,
                                         extractBF(BF_B5)$error))
  rownames(report) <- c("ABID","AB", "A", "B")
  round(report,3)
}

 

No interaction effect and medium main effects in the true model.

nS <- 24 #set the number of subjects
calBF(data=sim(N=nS, n=1, B1=0.5)) #first call (24 subjects)
##      BF.excl.vs.incl BF10.Eq.4 BF10.Eq.3 BF.err BF10.err.Eq.4 BF10.err.Eq.3
## ABID            0.09        NA        NA  0.038            NA            NA
## AB                NA     0.293     0.451     NA         0.031         0.066
## A                 NA     2.570     5.403     NA         0.009         0.084
## B                 NA     1.450     1.893     NA         0.009         0.118

 

calBF(data=sim(N=nS, n=1, B1=0.5, seed=577)) #second call (24 subjects)
##      BF.excl.vs.incl BF10.Eq.4 BF10.Eq.3 BF.err BF10.err.Eq.4 BF10.err.Eq.3
## ABID           0.176        NA        NA  0.058            NA            NA
## AB                NA     0.350     0.395     NA         0.023         0.115
## A                 NA     9.756    12.757     NA         0.008         0.117
## B                 NA    20.071    73.075     NA         0.008         0.105

 

calBF(data=sim(N=nS, n=1, B1=0.5, seed=777)) #third call (24 subjects)
##      BF.excl.vs.incl BF10.Eq.4 BF10.Eq.3 BF.err BF10.err.Eq.4 BF10.err.Eq.3
## ABID           0.204        NA        NA  0.068            NA            NA
## AB                NA     7.834     2.803     NA         0.031         0.106
## A                 NA     0.622     0.902     NA         0.008         0.133
## B                 NA     3.043     4.959     NA         0.008         0.155

 

calBF(data=sim(N=nS, n=1, B1=0.5, seed=233)) #fourth call (24 subjects)
##      BF.excl.vs.incl BF10.Eq.4 BF10.Eq.3 BF.err BF10.err.Eq.4 BF10.err.Eq.3
## ABID           0.744        NA        NA   0.06            NA            NA
## AB                NA     0.310     0.368     NA         0.031         0.085
## A                 NA   303.287   498.357     NA         0.011         0.077
## B                 NA     2.622     3.092     NA         0.011         0.073

 

calBF(data=sim(N=nS, n=1, B1=0.5, seed=123)) #fifth call (24 subjects)
##      BF.excl.vs.incl BF10.Eq.4 BF10.Eq.3 BF.err BF10.err.Eq.4 BF10.err.Eq.3
## ABID           0.043        NA        NA   0.08            NA            NA
## AB                NA     0.766     0.658     NA         0.028         0.078
## A                 NA     0.974     1.560     NA         0.008         0.110
## B                 NA     1.357     2.173     NA         0.008         0.122

   

calBF(data=sim(N=100, n=1, B1=0.5, seed=233)) #sixth call (100 subjects)
##      BF.excl.vs.incl   BF10.Eq.4  BF10.Eq.3  BF.err BF10.err.Eq.4 BF10.err.Eq.3
## ABID           0.208          NA         NA   0.073            NA            NA
## AB                NA       0.317      0.296      NA         0.018         0.203
## A                 NA     496.871    893.507      NA         0.011         0.213
## B                 NA      90.948    190.745      NA         0.011         0.212

 

calBF(data=sim(N=100, n=1, B1=0.5, seed=123)) #seventh call (100 subjects)
##      BF.excl.vs.incl   BF10.Eq.4  BF10.Eq.3  BF.err BF10.err.Eq.4 BF10.err.Eq.3
## ABID           0.022          NA         NA   0.075            NA            NA
## AB                NA       0.353      0.324      NA         0.018         0.215
## A                 NA     334.848   1192.434      NA         0.010         0.219
## B                 NA      75.048    169.113      NA         0.010         0.216

   

4. R Scripts

 

library(BayesFactor) #v0.9.12-4.7
data(puzzles) #2×2 repeated-measures data
str(puzzles)
# 'data.frame': 48 obs. of  4 variables:
# $ RT   : num  49 47 46 47 48 47 41 46 43 47 ...
# $ ID   : Factor w/ 12 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
# $ shape: Factor w/ 2 levels "round","square": 1 1 1 1 1 1 1 1 1 1 ...
# $ color: Factor w/ 2 levels "color","monochromatic": 2 2 2 2 2 2 2 2 2 2 ...

set.seed(277)
(fit4 <- lmBF(RT ~ shape*color + ID + shape:ID + color:ID,                  data=puzzles, whichRandom="ID", progress=F))

set.seed(277)
(fit3 <- lmBF(RT ~ shape*color + ID + shape:ID + color:ID + shape:color:ID, data=puzzles, whichRandom="ID", progress=F))

# Bayes factor analysis
# --------------
# [Line 11] shape * color + ID + shape:ID + color:ID :                  3530.147 ±2.88%
# [Line 14] shape * color + ID + shape:ID + color:ID + shape:color:ID : 101115 ±8.13%
# 
# Against denominator:
#   Intercept only 
# ---
# Bayes factor type: BFlinearModel, JZS
fit3 / fit4


str(sleep) #one-way repeated-measures data (two levels)
# 'data.frame': 20 obs. of  3 variables:
# $ extra: num  0.7 -1.6 -0.2 -1.2 -0.1 3.4 3.7 0.8 0 2 ...
# $ group: Factor w/ 2 levels "1","2": 1 1 1 1 1 1 1 1 1 1 ...
# $ ID   : Factor w/ 10 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...

set.seed(277)
(fit2 <- lmBF(extra ~ group + ID,            data=sleep, whichRandom="ID", progress=F))

set.seed(277)
(fit1 <- lmBF(extra ~ group + ID + group:ID, data=sleep, whichRandom="ID", progress=F))

# Bayes factor analysis
# --------------
# [Line 36] group + ID : 29.28089 ±0.68%
# [Line 39] group + ID + group:ID : 11.26643 ±2.25%
# 
# Against denominator:
#   Intercept only 
# ---
# Bayes factor type: BFlinearModel, JZS
fit2 / fit1