One of the major issues in recent releases of lme4 (from 1.1-6 onward) has been the increased rate of model fits that are reported not to converge. Possibly contrary to popular belief, this increased rate of reported problems is not due to greater instability or other problems in the fitting algorithm itself; the algorithms and their default settings have changed only slightly in recent releases (see the lme4 NEWS file). Rather, we introduced new convergence checks, so many models that seemed to fit fine with previous versions now report convergence issues (but are in fact fitting just as well, or as poorly, as before).
Along with the convergence tests, however, have come some false-positive warnings. These were particularly bad in 1.1-6; version 1.1-7 improved things considerably by testing the scaled gradient (i.e., the solution \(\boldsymbol{s}\) of \(\boldsymbol{H}\boldsymbol{s} = \boldsymbol{g}\), where \(\boldsymbol{H}\) is the estimated Hessian at the MLE, \(\boldsymbol{g}\) is the estimated gradient, and \(\boldsymbol{s}\) is the scaled gradient). To a first approximation, one can think of the scaled gradient as measuring the expected change in the log-likelihood over a scale of one standard error of parameter change, rather than on the scale of the original parameter. A further change (22 October 2014, in versions from 1.1-8 onward) modified the test so that, for each parameter, the smaller of the absolute gradient and the absolute scaled gradient is used, and the maximum of these values is compared against the tolerance; this handles situations where the estimated likelihood surface is very flat (poorly determined parameters/large standard errors), so that the scaled gradient alone would be very large. However, the problem persists to some extent.
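As a concrete illustration, here is a minimal sketch of the quantity being tested, following the definition above; it assumes a model fitted with default settings, so that the finite-difference gradient and Hessian are stored in the optinfo slot (lme4's internal check may differ in numerical detail):

library("lme4")
fit <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
derivs <- fit@optinfo$derivs                       ## finite-difference gradient and Hessian
scgrad <- with(derivs, solve(Hessian, gradient))   ## scaled gradient: solve H s = g
max(pmin(abs(scgrad), abs(derivs$gradient)))       ## value compared against the tolerance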
The examples below explore how the scaled gradient and the computed Hessian behave, for relatively well-behaved data sets, as a function of (simulated) data set size. This does not tell us exactly where we should set the threshold for convergence warnings, but it does indicate that:

- the nlminb optimizer is problematic (its estimates appear unreliable for large problems);
- for large problems, an overall scaled tolerance of 0.01 might be appropriate;
- … the numDeriv package could be used for Hessian calculation at large sample sizes, or Hessian calculation could be turned off entirely for large problems.

library("lme4")
library("ggplot2")
## preferred (cosmetic) settings
theme_set(theme_bw()+theme(panel.margin=grid::unit(0,"lines")))
library("gridExtra")
library("scales") ## for squish()
## with apologies for Hadleyverse 2 ...
library("tidyr")
library("dplyr")
Get data (previously computed):
load("penicillin_conv.RData")
id_vars <- c("data","optimizer","log10size","rep")
data_vars <- setdiff(names(res),id_vars)
The nloptr_bobyqa results use a slightly different implementation of the same algorithm as the built-in (minqa-packaged) BOBYQA, but with looser tolerance settings, so they perform worse (in terms of gradients) at small sample sizes but compare reasonably well to the other methods. The worst fits from the nlminb optimizer (i.e. the cases where the absolute gradient goes up to about 1) probably do represent problematic fits.

Proportion of bad fits under current rules:
Behaviour of estimated minimum eigenvalues (top row: unmodified; bottom row: clamped to (0.5, 5)):
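For a single fitted model fit, the minimum eigenvalue of the estimated (finite-difference) Hessian can be computed as follows (a minimal sketch):

H <- fit@optinfo$derivs$Hessian           ## finite-difference Hessian stored with the fit
min(eigen(H, only.values = TRUE)$values)  ## smallest eigenvalue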
Timing:
Unfortunately, for larger data sets (where it’s most needed), a reliable Hessian calculation takes an appreciable fraction of the full fitting time (e.g. for the largest data sets, about 50 seconds for the full fit vs. 17 seconds for the Hessian calculation) …
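As a rough illustration of the two costs (a sketch using the built-in sleepstudy data rather than the simulated data sets above; this is not the benchmark reported here):

library("lme4")
library("numDeriv")
fit <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
## profiled deviance as a function of the variance-covariance parameters (theta)
devfun <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy, devFunOnly = TRUE)
system.time(update(fit))                                     ## full (re)fit
system.time(numDeriv::hessian(devfun, getME(fit, "theta")))  ## careful (Richardson) Hessian only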
Actual parameter estimates:
This shows that nlminb becomes unreliable at large sample sizes (it is not clear how these incorrect estimates line up with the convergence tests).
What if we try to figure out which estimates actually are bad (as judged by their difference from a consensus estimate), rather than just seeing which ones fail the convergence tests? Compute the proportion of fit/parameter combinations where the estimates from all but one optimizer agree to within 1% (relative standard deviation <1%) and the remaining estimate differs by >1%:
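A minimal sketch of that computation, assuming par_vars is the (hypothetical) subset of data_vars that holds the parameter estimates (the remaining columns hold gradients, eigenvalues, timings, etc.), and using the per-group median as the consensus value rather than the exact criterion above:

## par_vars: hypothetical vector naming the parameter-estimate columns of `res`
res_long <- tidyr::gather_(res, "par", "value", par_vars)   ## long format: one row per fit x parameter
disagree <- res_long %>%
    group_by(data, log10size, rep, par) %>%
    summarise(n_off = sum(abs(value - median(value)) > 0.01 * abs(median(value)))) %>%
    ungroup()
mean(disagree$n_off == 1)  ## proportion with exactly one disagreeing optimizer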
What if we compare the max gradient with the size of the error (= mean difference of parameters from consensus)?
Unfortunately there is no threshold on the max-gradient (i.e., on the horizontal axis) that cleanly separates the “bad” results (say, error >1%) from the rest.
If we scale the gradient by the data-set size (i.e. use \(\log(\textrm{grad})+\log(\textrm{nobs})\), equivalent to scaling the cutoff by the number of observations), we do a little better, but still can’t draw a clean cutoff:
For example, if we required \(\log_{10}(\max(|\textrm{grad}|))+\log_{10}(\textrm{nobs})<2.5\), we would get most (but not all) of the cases with error >1%, but we would still catch a variety of cases (with large data sets) with error <0.1% …
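Applied to a single fitted merMod object fit (the name is just for illustration), that candidate rule would look something like this; the 2.5 cutoff is only the illustrative value from above:

g <- fit@optinfo$derivs$gradient              ## finite-difference gradient from the fit
log10(max(abs(g))) + log10(nobs(fit)) < 2.5   ## TRUE = passes this candidate check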
sessionInfo()
## R version 3.2.3 (2015-12-10)
## Platform: x86_64-apple-darwin13.4.0 (64-bit)
## Running under: OS X 10.9.5 (Mavericks)
##
## locale:
## [1] en_CA.UTF-8/en_CA.UTF-8/en_CA.UTF-8/C/en_CA.UTF-8/en_CA.UTF-8
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] dplyr_0.4.3 tidyr_0.3.1 scales_0.3.0 gridExtra_2.0.0
## [5] ggplot2_2.0.0 lme4_1.1-11 Matrix_1.2-3
##
## loaded via a namespace (and not attached):
## [1] Rcpp_0.12.2 knitr_1.11 magrittr_1.5
## [4] splines_3.2.3 MASS_7.3-45 munsell_0.4.2
## [7] colorspace_1.2-6 lattice_0.20-33 R6_2.1.1
## [10] minqa_1.2.4 stringr_1.0.0 plyr_1.8.3
## [13] tools_3.2.3 parallel_3.2.3 grid_3.2.3
## [16] nlme_3.1-122 gtable_0.1.2 mgcv_1.8-9
## [19] DBI_0.3.1 htmltools_0.3 lazyeval_0.1.10
## [22] assertthat_0.1 yaml_2.1.13 digest_0.6.8
## [25] RColorBrewer_1.1-2 nloptr_1.0.4 formatR_1.2.1
## [28] evaluate_0.8 rmarkdown_0.9 labeling_0.3
## [31] stringi_1.0-1
To do: glmer examples?