When comparing results between old (nlme::lme) and new (lme4::lmer) mixed-modeling software in R, one sometimes sees that lme reports a very small variance while lmer reports one that is exactly zero. In general, it is likely that lmer is giving the right answer, while lme is giving an answer that is not different in any way that is practically or statistically significant. It is worth keeping in mind that what lmer returns is the right answer to the technical question posed: the REML estimate of the among-group variance really is zero.
From Stephanie Porter:
GCD2 <- read.csv("GCD2.csv")   ## read in the example data
Latest development lme4:
library("lme4")
packageVersion("lme4")
## [1] '0.99999911.2'
m1 <- lmer(Wxseed ~ (1|Sm), data=GCD2,
           na.action=na.omit, REML=TRUE)
VarCorr(m1)
##  Groups   Name        Variance Std.Dev.
##  Sm       (Intercept) 0.000    0.000
##  Residual             0.795    0.892
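As a quick check (a sketch, using the fit above): the fitted covariance parameter (theta, which for this model is the among-group standard deviation scaled by the residual standard deviation) can be extracted with getME(); it should be exactly zero, i.e. the estimate sits on the boundary of the parameter space.
getME(m1, "theta")   ## scaled standard deviation; exactly 0 here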
Stable lme4 (lme4.0 from R-forge; should be equivalent to stable lme4 on CRAN):
library("lme4.0")
m2 <- lmer(Wxseed ~ (1|Sm), data=GCD2,
           na.action=na.omit, REML=TRUE)
print(lme4.0:::formatVC(VarCorr(m2)), quote=FALSE)
##  Groups   Name        Variance Std.Dev.
##  Sm       (Intercept) 0.000    0.000
##  Residual             0.795    0.892
detach("package:lme4.0",unload=TRUE)
Now try with lme:
library(nlme)
m3 <- lme(Wxseed ~ 1,
          random= ~1|Sm,
          data=GCD2, na.action=na.omit, method="REML")
## extract variance-covariance of random effects:
(m3V <- VarCorr(m3))
## Sm = pdLogChol(1)
##             Variance  StdDev
## (Intercept) 2.751e-08 0.0001659
## Residual    7.950e-01 0.8916492
## convert from character to numeric:
suppressWarnings(storage.mode(m3V) <- "numeric")
## all standard devs:
m3sd <- m3V[,"StdDev"]
## scaled standard dev:
(m3val <- m3sd["(Intercept)"]/m3sd["Residual"])
## (Intercept)
## 0.0001861
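To connect the two parameterizations (a small consistency check using the objects just computed): squaring the scaled standard deviation gives the ratio of the among-group variance to the residual variance reported by VarCorr() above; the two numbers should agree.
## variance ratio implied by the scaled standard deviation
c(scaled_sd_sq   = unname(m3val^2),
  variance_ratio = unname(m3V["(Intercept)","Variance"]/m3V["Residual","Variance"]))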
As promised, \( \sigma^2_{\text{Sm}} \ll 1 \) but is not exactly zero … let's go back to lme4 and explore the deviance function near zero (we could do this with profile too, but for this simple case it's easier to use devFunOnly=TRUE to get a deviance function that we can plug values of the variance parameter into [equivalent to the scaled standard deviation in this case]):
ff <- update(m1, devFunOnly=TRUE)       ## deviance function for m1
d0 <- -2*c(logLik(m1))                  ## REML criterion at the fitted (zero) estimate
svec <- 10^seq(-10, -1, length=51)      ## grid of scaled standard deviations
dvec <- sapply(svec, ff) - d0           ## change in deviance along the grid
par(las=1, bty="l")
plot(svec, dvec, log="xy", type="b",
     xlab="Scaled standard deviation", ylab="Change in deviance")
abline(v=m3val, col=2)                  ## lme's scaled-SD estimate
ddiff <- c(-2*(logLik(m3)-logLik(m1)))  ## deviance difference between the two fits
abline(h=ddiff, col=4)
Miraculously enough, lme and lmer compute their log-likelihoods in a comparable way, so that the deviance difference between the two models (blue line) intersects the deviance curve right at the scaled among-group standard deviation that lme estimates (red line).
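We can check this numerically as well (a quick sketch using the objects defined above): evaluate the deviance function at lme's scaled-standard-deviation estimate and compare the change in deviance with the deviance difference between the two fits; the two numbers should agree up to small numerical differences.
## change in deviance at lme's estimate vs. deviance difference between fits
c(at_lme_estimate = unname(ff(m3val) - d0),
  fit_difference  = ddiff)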
We shouldn't need to do any hypothesis testing in this case: the estimate of the standard deviation is constrained to be \( \ge 0 \), so when the estimate is exactly zero the likelihood ratio statistic is zero and the \( p \)-value should be 1.0. This seems to work with exactRLRT …
library("RLRsim")
exactRLRT(m1)
##
## simulated finite sample distribution of RLRT. (p-value based on
## 10000 simulated values)
##
## data:
## RLRT = 0, p-value = 1