Multiplicative Treatment Effects in Randomized Experimental Design

https://rpubs.com/qliu6/VQF20Liu

Qimin Liu

Fall 2020

Overview

How Are Multiplicative vs. Additive Effects Different?

Additive Treatment Effects

  • Jerry:
    • Pretreatment Cortisol Level: .45 \(\mu\)g/dL
    • Additive Treatment Effect: -.09
    • Posttreatment Cortisol Level: .36 \(\mu\)g/dL
  • Tom:
    • Pretreatment Cortisol Level: .20 \(\mu\)g/dL
    • Additive Treatment Effect: -.09
    • Posttreatment Cortisol Level: .11 \(\mu\)g/dL

Multiplicative Treatment Effects

  • Jerry:
    • Pretreatment Cortisol Level: .45 \(\mu\)g/dL
    • Multiplicative Treatment Effect: .8
    • Posttreatment Cortisol Level: .36 \(\mu\)g/dL
  • Tom:
    • Pretreatment Cortisol Level: .20 \(\mu\)g/dL
    • Multiplicative Treatment Effect: .8
    • Posttreatment Cortisol Level: .16 \(\mu\)g/dL
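The same arithmetic, as a quick R illustration (values taken from the example above):

pre <- c(Jerry = 0.45, Tom = 0.20)  # pretreatment cortisol levels (µg/dL)
pre - 0.09                          # additive effect: 0.36 and 0.11
pre * 0.8                           # multiplicative effect: 0.36 and 0.16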

Why Should We Care about Multiplicative Effects?

  • They are intuitive and interpretable!
  • Lognormality vs. Normality
    • Gibrat’s Law
      • Response strength = drive \(\times\) habit
      • Motivation to work = expectancy \(\times\) valence
      • Performance = ability \(\times\) motivation
      • Attitude = belief \(\times\) evaluation
    • Examples of lognormally distributed variables
      • reaction time
      • weight
      • physiological measures

Randomized Pretest-Posttest Experimental Designs

LANOVA: Log-transformed Analysis of Variance

  • Full Model: \(Y_{ij}=\mu^*\times\theta_j \times \epsilon_{ij}^*\)
  • Restricted Model: \(Y_{ij}=\mu^*\times\epsilon_{ij}^*\)

After log-transformation, we have

  • Full Model: \(log(Y_{ij})=log(\mu^*)+log(\theta_j) +log(\epsilon_{ij}^*)\)
  • Restricted Model: \(log(Y_{ij})=log(\mu^*)+log(\epsilon_{ij}^*)\)

These can be rewritten as

  • Full Model: \(Y_{ij}^l=\mu^l+\alpha_j^l +\epsilon_{ij}^l\)
  • Restricted Model: \(Y_{ij}^l=\mu^l+\epsilon_{ij}^l\)
  • \(Y_{ij}\): Posttest score of individual \(i\) within group \(j\); \(Y_{ij}^l=log(Y_{ij})\)
  • \(\mu^*=(\prod Y)^{\frac{1}{N}}\): The grand mean is the geometric mean
  • \(X_{ij}\): Pretest score (covariate) of individual \(i\) within group \(j\); \(X_{ij}^l=log(X_{ij})\)
  • \(\theta_j\): Multiplicative effects of group \(j\); \(\alpha_j^l=log(\theta_j)\)
  • \(\epsilon_{ij}^*\sim \mathbb{LN}(0,1)\)
  • Null hypothesis: \(\theta_j=1 \ \forall j\), i.e., \(\alpha_j^l=log(\theta_j)=0 \ \forall j\) (The multiplicative effect in terms of ratio is \(1\) for all groups)
  • In the full model, \(\prod{\theta_j}=1\)

To implement the method:

library(lamme)   # implements LANOVA, LANCOVA, and related tools
data("schoene")  # example pretest-posttest dataset bundled with lamme
lanova(y=schoene$post_HRT,g=schoene$group,plot=F)  # LANOVA on the posttest scores
## 
## Call:
## lm(formula = (log(y) ~ factor(g)))
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## 0.7498 0.8936 0.9903 1.0931 1.5362 
## 
## Coefficients:
##                       Estimate (log-scale) s.e.     t value    Pr(>|t|)
## (Intercept)          2.471e+02        2.386e-02   2.309e+02  1.481e-113
## factor(g)treatment   9.489e-01        3.439e-02  -1.526e+00   1.309e-01
##                          2.5 %  97.5 %
## (Intercept)          2.356e+02 259.122
## factor(g)treatment   8.861e-01   1.016
## 
## Residual standard error: 0.1546 on 79 degrees of freedom
## Multiple R-squared:  0.02864,    Adjusted R-squared:  0.01635 
## F-statistic:  2.33 on 1 and 79 DF,  p-value: 0.1309
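Because the model is linear on the log scale, the same analysis can be reproduced with base R by fitting lm() to the logged posttest and exponentiating the estimates; a minimal sketch, assuming the schoene data loaded above:

fit <- lm(log(post_HRT) ~ factor(group), data = schoene)  # LANOVA full model on the log scale
exp(coef(fit))     # intercept: control-group geometric mean; group term: multiplicative effect (ratio)
exp(confint(fit))  # confidence interval on the ratio scale
anova(fit)         # F test of the multiplicative group effect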

LANCOVA: Log-transformed Analysis of Covariance

  • Full Model: \(Y_{ij}=\mu^*\times\theta_j \times X^\beta_{ij}\times \epsilon_{ij}^*\)
  • Restricted Model: \(Y_{ij}=\mu^*\times X^\beta_{ij}\times\epsilon_{ij}^*\)

After log-transformation, we have

  • Full Model: \(log(Y_{ij})=log(\mu^*)+log(\theta_j) + \beta log(X_{ij})+log(\epsilon_{ij}^*)\)
  • Restricted Model: \(log(Y_{ij})=log(\mu^*)+\beta log(X_{ij})+log(\epsilon_{ij}^*)\)

These can be rewritten as

  • Full Model: \(Y_{ij}^l=\mu^l+\alpha_j^l +\beta X_{ij}^l+\epsilon_{ij}^l\)
  • Restricted Model: \(Y_{ij}^l=\mu^l+\beta X_{ij}^l+\epsilon_{ij}^l\)

To implement the method (a base-R sketch follows the steps):

  • Step 1: Log-transform X and Y.
  • Step 2: Run ANCOVA with the log-transformed X and Y.
  • Step 3:
    • Obtain the ANCOVA estimates and exponentiate the treatment effect estimate.
    • Interpret the covariate effect estimate as a power (exponent) of the pretest.
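A minimal base-R sketch of these steps, using the schoene data loaded above:

fit <- lm(log(post_HRT) ~ log(pre_HRT) + factor(group), data = schoene)  # Steps 1-2
exp(coef(fit)["factor(group)treatment"])  # Step 3: multiplicative treatment effect (ratio)
coef(fit)["log(pre_HRT)"]                 # beta: the posttest scales with pretest^beta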

ANCOHET: Analysis of Covariance with Heterogeneous Regression Slopes

  • Full Model: \(Y_{ij}=\mu+\alpha_j +\beta_j X_{ij}+\epsilon_{ij}\)

    • Alternatively, \(Y_{ij}=\mu+\alpha_j +\beta X_{ij}+I_j X_{ij}+\epsilon_{ij}\)
  • Restricted Model: \(Y_{ij}=\mu+\beta_j X_{ij}+\epsilon_{ij}\)

  • Treatment Effects: \(\alpha_j+I_j X_{ij}\)

  • \(\beta\): Effect (slope) parameter of the covariate

  • \(\beta_j\): Effect (slope) parameter of the covariate for group \(j\)

  • \(I_j=\beta_j-\beta\): Difference in the covariate effect (slope) due to group \(j\) membership
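In base R, ANCOHET amounts to adding a pretest-by-group interaction; a minimal sketch on the schoene data (raw scale, matching the model above):

fit_het <- lm(post_HRT ~ pre_HRT * group, data = schoene)  # heterogeneous slopes via interaction
summary(fit_het)  # the pre_HRT:group interaction estimates I_j, the group-specific slope difference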

lancova(y=schoene$post_HRT,g=schoene$group,x=schoene$pre_HRT,plot=F)  # LANCOVA with the logged pretest as covariate
## 
## Call:
## lm(formula = (log(y) ~ log(x) + factor(g)))
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## 0.7474 0.9404 0.9858 1.0541 1.3438 
## 
## Coefficients:
##                      Estimate (log-scale) s.e.    t value   Pr(>|t|)      2.5 %
## (Intercept)         4.056e+00        3.631e-01  3.856e+00  2.353e-04  1.969e+00
## x                   7.400e-01        6.533e-02  1.133e+01  3.811e-18  6.099e-01
## factor(g)treatment  9.563e-01        2.129e-02 -2.098e+00  3.919e-02  9.166e-01
##                    97.5 %
## (Intercept)         8.358
## x                   0.870
## factor(g)treatment  0.998
## 
## Residual standard error: 0.09569 on 78 degrees of freedom
## Multiple R-squared:  0.6327, Adjusted R-squared:  0.6233 
## F-statistic: 67.19 on 2 and 78 DF,  p-value: < 2.2e-16

SPC: Symmetrized Percent Change Analysis

Full Model: \(\frac{Y_{ij}-X_{ij}}{Y_{ij}+X_{ij}}=\mu+\alpha_j+\epsilon_{ij}\)

Restricted Model: \(\frac{Y_{ij}-X_{ij}}{Y_{ij}+X_{ij}}=\mu+\epsilon_{ij}\)
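A minimal sketch of the SPC analysis on the schoene data, taking pre_HRT and post_HRT as pretest and posttest:

spc <- with(schoene, (post_HRT - pre_HRT) / (post_HRT + pre_HRT))  # symmetrized percent change
summary(lm(spc ~ factor(group), data = schoene))                   # one-way test of the group effect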

Simulation Design

Data-generating schemes:

  • Multiplicative: \(Y_{ij}=\mu^*\times\theta_j \times X^\beta_{ij}\times \epsilon_{ij}^*\)
  • Additive: \(Y_{ij}=\mu+\alpha_j +\beta X_{ij}+\epsilon_{ij}\)

Conditions:

  • Design: three-group pretest-posttest randomized design
  • Sample size conditions: \(n\in \{20,40,100,200,350 \}\)
  • Effect size conditions: Cohen’s \(f^2 \in \{0,.01,.0625,.16 \}\)
  • Pretest-posttest Relations: \(\beta \in \{.3,.5,.7, .9 \}\)

Models:

  • ANOVA: \(Y_{ij}=\mu+\alpha_j +\epsilon_{ij}\)
  • ANCOVA: \(Y_{ij}=\mu+\alpha_j +\beta X_{ij}+\epsilon_{ij}\)
  • Gain score analysis (GSA): \(Y_{ij}-X_{ij}=\mu+\alpha_j +\epsilon_{ij}\)
  • ANCOVA with log-transformed \(Y\) (ANCOVA-l): \(log(Y_{ij})=\mu+\alpha_j +\beta X_{ij}+\epsilon_{ij}\)
  • SPC analysis: \(\frac{Y_{ij}-X_{ij}}{Y_{ij}+X_{ij}}=\mu+\alpha_j+\epsilon_{ij}\)
  • LANOVA: \(Y_{ij}=\mu^*\times\theta_j \times \epsilon_{ij}^*\)
  • LANCOVA: \(Y_{ij}=\mu^*\times\theta_j \times X^\beta_{ij}\times \epsilon_{ij}^*\)

Rejection rates (i.e., power under nonnull effect sizes and Type I error rates under null effects) were recorded over 1,000 replications.
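For concreteness, one replication under the multiplicative scheme could be generated as follows (parameter values here are illustrative, not the exact simulation settings):

set.seed(1)
n <- 40; J <- 3
g     <- rep(1:J, each = n)                       # three-group randomized design
theta <- c(1.25, 1, 0.8)                          # multiplicative effects, prod(theta) = 1
x     <- rlnorm(n * J, meanlog = 2, sdlog = 0.3)  # lognormal pretest scores
eps   <- rlnorm(n * J, meanlog = 0, sdlog = 0.2)  # multiplicative lognormal error
y     <- 10 * theta[g] * x^0.5 * eps              # mu* = 10, beta = .5
summary(lm(log(y) ~ log(x) + factor(g)))          # log-scale fit: group terms estimate ratios of theta_j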

Simulation Results: Type I Error Rates at \(n=40\)

Simulation Results: Power

Effect Size Measure

For two groups \(a\) and \(b\), \(\zeta=\frac{\theta_a}{\theta_b}=\frac{\mu^*_a}{\mu^*_b}\): the ratio of multiplicative effects equals the ratio of the groups’ geometric means.

CI: \(\text{exp}(\hat{\mu}^l_a-\hat{\mu}^l_b \pm t_{1-\frac{\alpha}{2},2n-2}s^l_p\sqrt{\frac{2}{n}})\)

\(\zeta\) compared with Cohen’s \(d\)

Overall effect size: \(\frac{R^2}{1-R^2}\) where \(R^2\) is the proportion of explained variance due to the treatment
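A minimal sketch of the analytic interval for the two-group case, assuming equal group sizes and generic posttest vectors y_a and y_b (the boot.es() call below reports bootstrap BCa and percentile intervals instead):

ci_zeta <- function(y_a, y_b, alpha = 0.05) {
  la <- log(y_a); lb <- log(y_b)
  n  <- length(la)                                                   # equal n per group assumed
  sp <- sqrt(((n - 1) * var(la) + (n - 1) * var(lb)) / (2 * n - 2))  # pooled log-scale SD
  est  <- mean(la) - mean(lb)
  half <- qt(1 - alpha / 2, df = 2 * n - 2) * sp * sqrt(2 / n)
  exp(c(estimate = est, lower = est - half, upper = est + half))     # back to the ratio scale
}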

boot.es(y=schoene$post_HRT,g=schoene$group,x=schoene$pre_HRT,nrep=1000,alpha=.05)  # bootstrap (BCa and percentile) confidence intervals
##                   BCa LL    BCa UL exp perc LL exp perc UL
## control        1.8914575 8.5616014   1.8708522   8.6103104
## treatment      0.9193911 0.9964502   0.9194517   0.9977868
## x              0.6039432 0.8786945   0.6040483   0.8815842
## sig2nois ratio 0.8514882 2.7960616   0.9888214   3.2603955

Power and Sample Size Planning

  • Noncentrality parameter:
    • \(J\)-group LANOVA: \(\lambda=N\frac{R^{2^l}}{1-R^{2^l}}\)
    • \(J\)-group LANCOVA: \(\lambda=N\frac{R^{2^l}}{(1-R^{2^l})(1-\rho^2)}\)
  • Power: \(P(F(J-1,N-J,\lambda)\geq F_{1-\alpha}(J-1,N-J))\)

Here we have

  • \(R^{2^l}\) is the proportion of explained arithmetic variance on the logged scale
  • \(\rho\) is the pretest-posttest correlation
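These formulas translate directly into base R; a sketch with illustrative inputs (the packaged pwr.lanova() and pwr.lancova() calls below are the recommended interface and may use slightly different internal conventions):

pwr_formula <- function(J, n, R2l, rho = 0, alpha = 0.05) {  # rho = 0 gives the LANOVA case
  N      <- J * n
  lambda <- N * R2l / ((1 - R2l) * (1 - rho^2))              # noncentrality parameter
  1 - pf(qf(1 - alpha, J - 1, N - J), J - 1, N - J, ncp = lambda)
}
pwr_formula(J = 2, n = 50, R2l = 0.08)                       # hypothetical two-group example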
pwr.lanova(k=3,n=40,r_sqrd=.05,alpha=.05)
## $`the number of group`
## [1] 3
## 
## $`the per-group sample size`
## [1] 40
## 
## $`r squared`
## [1] 0.05
## 
## $`significance level`
## [1] 0.05
## 
## $power
## [1] 0.5493885
pwr.lancova(k=3,n=40,r_sqrd=.05,rho_sqrd=.3,alpha=.05)
## $`the number of group`
## [1] 3
## 
## $`the per-group sample size`
## [1] 40
## 
## $`pretest-posttest correlation`
## [1] 0.3
## 
## $`r squared`
## [1] 0.05
## 
## $`significance level`
## [1] 0.05
## 
## $power
## [1] 0.7137778
ss.lanova(k=3,r_sqrd=.05,power=.8,alpha=.05)
## $`the number of group`
## [1] 3
## 
## $`proportion of variance to be explained`
## [1] 0.05
## 
## $`significance level`
## [1] 0.05
## 
## $`desired power`
## [1] 0.8
## 
## $`the per-group sample size requirement`
## [1] 68.62034
ss.lancova(k=3,r_sqrd=.05,power=.8,rho_sqrd=.3,alpha=.05)
## $`the number of group`
## [1] 3
## 
## $`pretest-posttest correlation`
## [1] 0.3
## 
## $`proportion of variance to be explained`
## [1] 0.05
## 
## $`significance level`
## [1] 0.05
## 
## $`desired power`
## [1] 0.8
## 
## $`the per-group sample size requirement`
## [1] 48.34812

Visualization and Model Selection Strategy

Visualization

  • Q-Q plot
  • Scatterplot
  • HOV (homogeneity of variance) plot

ABC: AIC comparison with Box-Cox transformation

  1. Rescale \(Y\) by its geometric mean \(\bar{Y}^*\):
    • LANOVA: \(\bar{Y}^*\, ln(Y_{ij})=\mu+\alpha_j+\epsilon_{ij}\)
    • LANCOVA: \(\bar{Y}^*\, ln(Y_{ij})=\mu+\alpha_j+\beta\, ln(X_{ij})+\epsilon_{ij}\)
  2. Compute the AIC of each rescaled model
  3. Compare these AICs with those from the corresponding additive models
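A conceptual sketch of the no-pretest comparison, assuming the schoene data loaded earlier (the abc() call below is the packaged implementation):

gm <- exp(mean(log(schoene$post_HRT)))                       # geometric mean of Y
AIC(lm(post_HRT ~ factor(group), data = schoene))            # additive ANOVA
AIC(lm(gm * log(post_HRT) ~ factor(group), data = schoene))  # LANOVA after rescaling log(Y)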

Evaluation via Monte Carlo Simulation and Further Discussion

abc(y=schoene$post_HRT,g=schoene$group,x=schoene$pre_HRT) # AIC comparison given the pretest
## $ancova
## [1] 759.9898
## 
## $ancohet
## [1] 761.6763
## 
## $`ancova-l`
## [1] 746.6123
## 
## $lancova
## [1] 743.1558
abc(y=schoene$post_HRT,g=schoene$group) # AIC comparison without pretest scores
## $anova
## [1] 830.5177
## 
## $lanova
## [1] 819.9362

Empirical Data Example

Interactive cognitive-motor step training on cognitive risk factors for falling in older adults

Web Application Demonstration

LAMME Web App @ https://qmliu.shinyapps.io/LAMME/

Download Sample Data here

Discussion