Using variance-covariance structures in mixed linear models is helpful and important in many statistical analyses. However, their implementation varies widely between the different R packages. As I personally haven't found an overview of the different possibilities in R, I started to collect what I have found in tutorials and blogs over the last years.
All of this is still work in progress. I am thus very thankful for feedback.
The structure is to first give a general overview, then to show the syntax for implementing the respective covariance structures and extracting the estimates, and finally to give some examples.
These data serve merely to show the syntax.
Example dataset: 16 varieties tested at three different locations under organic management in Switzerland in 2013. At each location an RCBD trial was conducted with three replicate blocks. Besides yield, height and protein content were measured at plot level. The dataset thus consists of 16 V * 3 L * 3 R = 144 observations / rows.
plotdat <- fread("Data/wheat_3trials_2013.csv")
head(plotdat)
## var loc rep yield hlw heading height gluten_nit prot
## 1: 681 LIN R1 59.8 77.9 166 95 22.2 9.9
## 2: 681 LIN R3 80.1 78.7 168 100 24.0 10.6
## 3: 681 LIN R2 87.0 79.2 168 100 28.1 11.9
## 4: a7t LIN R1 35.0 81.1 168 100 23.4 10.5
## 5: a7t LIN R2 53.9 82.8 168 115 29.3 12.2
## 6: a7t LIN R3 69.1 83.6 168 120 29.8 12.3
nrow(plotdat)
## [1] 144
setnames(plotdat,"var","treat")
setnames(plotdat,"loc","env")
plotdat[,dummy:=factor(1)]
Means
means <- plotdat[,.(yield=mean(yield)),.(treat,env)]
means[,dummy:=factor(1)]
Special covariance structures can be specified for the random effects and for the residuals (both are actually random). These are often termed the G matrix (random effects) and the R matrix (residuals).
The following table shows which arguments are used to specify the random effects and the residuals in some commonly used packages.
Package | Random | Residuals |
---|---|---|
nlme | random | cor/weights |
asreml, MCMCglmm, sommer | random | rcov |
SAS | RANDOM | REPEATED |
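As a minimal sketch of where these arguments go (my own illustration using the means data defined above; it assumes nlme and sommer are loaded, and the individual structures are explained in detail below):
m_nlme <- lme(yield~1,
              random=~1|env,                     # G side
              weights=varIdent(form=~1|env),     # R side (heteroscedastic residuals)
              means)
m_sommer <- mmer(yield~1,
                 random=~treat+env,              # G side
                 rcov=~units,                    # R side (homogeneous residuals, the default)
                 data=means,verbose=F)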
Structure | nlme |
---|---|
Constant variance | pdIdent |
Diagonal | pdDiag |
Correlation structures
Model part | Classes | Specification | Slot in model$modelStruct |
---|---|---|---|
random effects | pdClasses | pdMat | reStruct |
correlation | corClasses | corMatrix | corStruct |
weights | varClasses | varWeights | varFunc |
- pdClasses: standard classes of positive-definite matrix (pdMat) structures
- corClasses: standard classes of correlation structures (corStruct)
- varClasses: standard classes of variance function structures (varFunc), used to specify the within-group variance
While the latter two have direct constructor functions (e.g. varIdent or corCompSymm), random-effects structures are built from pdMat classes.
These three parts are all combined in lmeStruct(): lmeStruct(reStruct, corStruct, varStruct)
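A small illustration of how the three components map onto the lme arguments and onto the slots of the fitted model (assuming the means data from above; the individual structures are explained later):
model <- lme(yield~1,
             random=~1|env,                    # -> reStruct
             cor=corCompSymm(form=~1|env),     # -> corStruct
             weights=varIdent(form=~1|env),    # -> varStruct
             means)
model$modelStruct$reStruct   # random-effects (pdMat) structure
model$modelStruct$corStruct  # correlation structure
model$modelStruct$varStruct  # variance function structure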
The following sections give an overview of the syntax for some of the most commonly used mixed-model packages. As the focus is on the syntax, the examples do not always make sense statistically.
For each of the packages, the syntax is shown together with special notes where necessary.
one random effect
model <- lme(yield~1,#treat,
random=~1|env,
# alternative syntax, same results
#random = list(env = pdIdent(~ 1)), # no diff, if pdIdent,pdDiag,pdSymm
#random=list(dummy=pdIdent(~env-1)),# here it must be pdIdent
means)
VarCorr(model)
## env = pdLogChol(1)
## Variance StdDev
## (Intercept) 48.55188 6.967918
## Residual 48.67023 6.976405
data.frame(VarCorr(model)[]) # to extract
## Variance StdDev
## (Intercept) 48.55188 6.967918
## Residual 48.67023 6.976405
Pinheiro and Bates (p. 163) use a combination of pdBlocked and pdIdent.
However, the pdBlocked is not necessary.
means$dummy <- factor(1)
model <- lme(yield~1,
random=list(dummy=pdIdent(~env-1),
dummy=pdIdent(~treat-1)),
# alternatively with an additional pdBlocked, but not necessary
# random=list(dummy=pdBlocked(list(dummy=pdIdent(~env-1),
# dummy=pdIdent(~treat-1)))),
# the second dummy is not necessary, as it is already specified
# as the grouping factor for the pdBlocked
# random=list(dummy=pdBlocked(list(pdIdent(~env-1),
# pdIdent(~treat-1)))),
means)
(sigma(model) * c(1,coef(model$modelStruct$reStruct,unconstrained = F)))^2
## dummy.sd(treat - 1) dummy.sd(env - 1)
## 31.55368 17.11652 49.62163
In the same way, a constant and a diagonal structure could be combined, as sketched below.
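A sketch of such a combination (my own example on plotdat): one common variance for treat (pdIdent) together with a separate variance per env level (pdDiag), again via the dummy grouping factor:
model <- lme(yield~1,
             random=list(dummy=pdIdent(~treat-1), # constant variance over treat levels
                         dummy=pdDiag(~env-1)),   # separate variance per env level
             plotdat)
(sigma(model) * c(1,coef(model$modelStruct$reStruct,unconstrained = F)))^2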
separate variance per factor level
model <- lme(yield~1,
#random=list(dummy=pdDiag(~treat-1)),
random=list(treat=pdDiag(~env-1)), # grouping is specified in the pdDiag
# must be in list, can't specify directly with random=pdDiag()
means)
data.frame(VarCorr(model)[])
## Variance StdDev
## envBUE 15.825445 3.978121
## envDIC 28.594193 5.347354
## envLIN 212.444637 14.575481
## Residual 7.483466 2.735592
The values make more sense when env is given as the parameter; however, I am not quite sure about the interpretation.
model <- lme(yield~1,
random=list(treat=pdSymm(~env-1)),
plotdat)
VarCorr(model)
## treat = pdSymm(env - 1)
## Variance StdDev Corr
## envBUE 15.45647 3.931472 envBUE envDIC
## envDIC 24.61304 4.961153 0.970
## envLIN 188.43641 13.727214 -0.126 0.119
## Residual 48.01813 6.929512
same variance for all levels, with a common correlation
model <- lme(yield~1,
random=list(treat=pdCompSymm(~env-1)),
plotdat)
VarCorr(model)
## treat = pdCompSymm(env - 1)
## Variance StdDev Corr
## envBUE 64.30027 8.018745
## envDIC 64.30027 8.018745 0.009
## envLIN 64.30027 8.018745 0.009 0.009
## Residual 52.35332 7.235559
model <- lmer(yield~(1|env)+(1|treat),means)
data.frame(VarCorr(model))
## grp var1 var2 vcov sdcor
## 1 treat (Intercept) <NA> 17.11894 4.137504
## 2 env (Intercept) <NA> 49.63971 7.045546
## 3 Residual <NA> <NA> 31.55191 5.617108
Use the part before the "|" to specify the levels that will form the rows and columns of the covariance matrix. Add -1 to remove a common (intercept) variance estimate.
By default, variances and covariances will be estimated (as unstructured, UN, in SAS). Use lmer_alt from the afex package with double bars to estimate only the variances.
library(afex)
model <- lmer_alt(yield~1+(env-1||treat),means)
## Warning in max(effect.order): no non-missing arguments to max; returning -Inf
## Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
## unable to evaluate scaled gradient
## Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
## Model failed to converge: degenerate Hessian with 1 negative eigenvalues
## Warning: Model failed to converge with 1 negative eigenvalue: -1.0e-03
data.frame(VarCorr(model))
## grp var1 var2 vcov sdcor
## 1 treat re1.envBUE <NA> 0.5642061 0.7511365
## 2 treat.1 re1.envDIC <NA> 13.3337770 3.6515445
## 3 treat.2 re1.envLIN <NA> 197.2083187 14.0430879
## 4 Residual <NA> <NA> 22.7428528 4.7689467
lmer will not work on means, as nenvs * ntreats = nobs
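A quick check of this (my own illustration): with 3 env levels times 16 treat levels there are 48 random effects for 48 observations, so lme4's identifiability check is expected to stop with an error; try() keeps the script running.
try(lmer(yield~1+(env-1|treat),means))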
Convergence problems; rather sensitive to the choice of optimizer.
library(optimx)
model <- lmer(yield~1+(env-1|treat),plotdat,
control=lmerControl(optimizer="Nelder_Mead"))
VarCorr(model) # sd and correlations
## Groups Name Std.Dev. Corr
## treat envBUE 3.9315
## envDIC 4.9612 0.970
## envLIN 13.7272 -0.126 0.119
## Residual 6.9295
data.frame(VarCorr(model)) # covariances
## grp var1 var2 vcov sdcor
## 1 treat envBUE <NA> 15.456486 3.9314737
## 2 treat envDIC <NA> 24.613111 4.9611602
## 3 treat envLIN <NA> 188.435822 13.7271928
## 4 treat envBUE envDIC 18.921553 0.9701037
## 5 treat envBUE envLIN -6.797938 -0.1259621
## 6 treat envDIC envLIN 8.074375 0.1185616
## 7 Residual <NA> <NA> 48.018145 6.9295126
model <- mmer(yield~1,
random=~treat+env,
data=means,verbose=F)
summary(model)$varcomp
## VarComp VarCompSE Zratio Constraint
## treat.yield-yield 17.11496 10.445481 1.6385039 Positive
## env.yield-yield 49.61463 51.526523 0.9628949 Positive
## units.yield-yield 31.55467 8.149726 3.8718685 Positive
model <- mmer(yield~1,
random=~vs(env,treat),
# same as vs(ds(env),treat)
data=means,verbose=F)
summary(model)$varcomp
## VarComp VarCompSE Zratio Constraint
## BUE:treat.yield-yield 0.00000 20.76186 0.0000000 Positive
## DIC:treat.yield-yield 12.76186 21.90226 0.5826731 Positive
## LIN:treat.yield-yield 196.51171 58.55054 3.3562748 Positive
## units.yield-yield 23.31816 19.84059 1.1752751 Positive
model <- mmer(yield~1,
random=~vs(us(env),treat),
data=plotdat,verbose=F)
summary(model)$varcomp
## VarComp VarCompSE Zratio Constraint
## BUE:treat.yield-yield 7.156227 9.352010 0.7652073 Positive
## DIC:BUE:treat.yield-yield 22.065178 9.632847 2.2906185 Unconstr
## DIC:treat.yield-yield 19.252261 13.571168 1.4186149 Positive
## LIN:BUE:treat.yield-yield -3.324167 18.607857 -0.1786432 Unconstr
## LIN:DIC:treat.yield-yield 12.692839 22.791150 0.5569196 Unconstr
## LIN:treat.yield-yield 196.197918 75.865100 2.5861420 Positive
## units.yield-yield 52.353311 7.556521 6.9282297 Positive
Apparently CS is possible in sommer, but I still haven't found out how.
See https://quantdev.ssri.psu.edu/sites/qdev/files/ILD_Ch05_2017_DyadicMLM.html and https://quantdev.ssri.psu.edu/sites/qdev/files/ILD_Ch05_2017_MultivariateMLM.html
Correlation structures allow for correlated within-group errors (residuals).
From Pinheiro and Bates, p. 239:
Historically, correlation structures have been developed for two main classes of data: time-series data and spatial data. The former is generally associated with observations indexed by an integer-valued time variable, while the latter refers primarily to observations indexed by a two-dimensional spatial location vector.
Time series (serial correlation structures): repeated-measures designs. The grouping variable is then the person or the plot (e.g. in a long-term experiment, LTE). Within a plot (the group), the errors/residuals can be correlated.
Examples: AR1, general (unstructured), ARMA, ...
Spatial data (spatial correlation structures):
Examples: exponential, Gaussian, linear, ...
To visualize the correlation patterns, ACF(model) and Variogram(model) can be used.
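For example, on a simple random-intercept fit to the BodyWeight data used further below (a sketch; resType and form may need adjusting for other models):
fm <- lme(weight ~ Time * Diet, BodyWeight, random = ~1|Rat)
plot(ACF(fm, resType="normalized"))   # serial correlation of within-group residuals
plot(Variogram(fm, form=~Time|Rat))   # semivariogram of residuals against time distance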
Terminology in nlme and SAS
Name | nlme | SAS |
---|---|---|
Compound symmetry | corCompSymm | CS |
AR1 | corAR1 | AR(1) |
CAR1 | corCAR1 | SP(POW) |
Unstructured | corSymm | UN |
Use the correlation= (cor=) argument.
model<-gls(yield~1,
#cor=corCompSymm(form=~1|env),
cor=corSymm(form=~1|treat),
means) # slightly meaningless for different environments
summary(model)
## Generalized least squares fit by REML
## Model: yield ~ 1
## Data: means
## AIC BIC logLik
## 331.1111 340.3618 -160.5555
##
## Correlation Structure: General
## Formula: ~1 | treat
## Parameter estimate(s):
## Correlation:
## 1 2
## 2 -0.280
## 3 -0.196 0.908
##
## Coefficients:
## Value Std.Error t-value p-value
## (Intercept) 55.97384 1.452709 38.53066 0
##
## Standardized residuals:
## Min Q1 Med Q3 Max
## -1.5044477 -0.8919539 -0.2811805 0.1214138 2.1154597
##
## Residual standard error: 9.687172
## Degrees of freedom: 48 total; 47 residual
# rho
model$modelStruct$corStruct
## Correlation structure of class corSymm representing
## Correlation:
## 1 2
## 2 -0.280
## 3 -0.196 0.908
# for extraction
coefficients(model$modelStruct$corStruct,unconstrained = F)
## [1] -0.2801754 -0.1964128 0.9082124
# correlation matrix for each level of treat (will be the same for each)
mats <- corMatrix(model$modelStruct$corStruct)
length(mats)
## [1] 16
dim(mats[[1]]) # dimension matches the number of obs per level
## [1] 3 3
What needs to be specified in form? When does the data order affect the model?
from Pinheiro and Bates (p233): > Because the compound symmetry correlation model does not depend on the position of the observation, but just on the group to which it belongs, the form argument is used only to specify the grouping structure. The variable on the left hand side of the | operator in form is ignored.
model <- mmer(yield~1,
#random=~treat,
rcov=~vs(AR1(env),units),
data=means,verbose=F)
summary(model)$varcomp
## VarComp VarCompSE Zratio Constraint
## BUE:units.yield-yield 66.08413 31.38477 2.105612 Positive
## DIC:units.yield-yield 33.80448 31.38590 1.077060 Positive
## LIN:units.yield-yield 82.03062 36.91722 2.222015 Positive
In the BodyWeight dataset, rats have been measured several times.
See also help(corCompSymm), where this dataset is also used; they use Time as the random factor.
model <- lme(weight ~ Time * Diet, BodyWeight,
random = ~1|Rat,
corr=corCompSymm(form=~1|Rat))
# variance estimates
VarCorr(model)
## Rat = pdLogChol(1)
## Variance StdDev
## (Intercept) 1340.22706 36.609112
## Residual 41.01548 6.404333
# rho
model$modelStruct$corStruct
## Correlation structure of class corCompSymm representing
## Rho
## 1.261617e-17
model <- lme(weight ~ Time * Diet, BodyWeight,
random = ~1|Rat,
corr=corCompSymm())
# variance estimates
VarCorr(model)
## Rat = pdLogChol(1)
## Variance StdDev
## (Intercept) 1340.22706 36.609112
## Residual 41.01548 6.404333
# rho
model$modelStruct$corStruct
## Correlation structure of class corCompSymm representing
## Rho
## 1.261617e-17
Variance functions specify within-group variance models, i.e. they model the variance structure of the within-group errors.
Pinheiro and Bates (p. 208): The two main arguments for most of the varFunc constructors are value and form. The first specifies the values of the variance parameters δ and the second is a one-sided formula specifying the variance covariate v and, optionally, a stratification variable for the variance parameters—different parameters are used for each level of the stratification variable.
Specified by the weights= argument. By default, weights = NULL, corresponding to a homoscedastic variance model for the within-group errors.
With only a formula (without any varFunc object), this formula is passed to varFixed.
Fix the variance weights to a variable in the dataset (e.g. age, or some measure of sampling variance/sd).
No grouping variable.
No (weights) parameters are estimated.
# varFixed(value)
# the variance weights will be fixed to 1/sqrt(var), where var is a numeric column in the dataset
v <- varFixed(~prot) # must be a numeric covariate
v <- Initialize(v, data = plotdat)
head(varWeights(v))
## [1] 0.3178209 0.3071476 0.2898855 0.3086067 0.2862992 0.2851330
# the weights match 1/sqrt(var)
plot(1/sqrt(plotdat$prot),varWeights(v))
# model
# model <- gls(yield~1,weights=v,plotdat)
# or just as formula (passed directly to varFixed)
model <- gls(yield~1,weights=~prot,plotdat)
head(varWeights(model$modelStruct$varStruct)) # same as from initialization
## [1] 0.3178209 0.3071476 0.2898855 0.3086067 0.2862992 0.2851330
Different variances for each level of a stratification variable (group).
value: initial values; fixed: values to be fixed during optimization
# varIdent(value, form, fixed)
# only form
v <- varIdent(form=~1|env) # must be 1|grouping factor
# with initial values, length must be n levels -1
# values are given on the constrained scale, i.e. as returned by coef(..., unconstrained = FALSE)
v <- varIdent(value=list(BUE=2,LIN=4),form=~1|env)
# fix, length must be n levels -1
# when both non-reference levels are fixed, the reference level stays at 1
# with only one fixed, it is relative to the reference level (set to 1), and the other is still estimated
v <- varIdent(form=~1|env,fixed=list(BUE=0.8))
# coef will not work when fixed, probably because the parameters are not estimated
# note that sigma will be higher when a coefficient is fixed
# Initialize (not necessary for model fit, but good to see varWeights)
v<-Initialize(v,means)
head(varWeights(v)) # inverse to the specified values
## LIN LIN LIN LIN LIN LIN
## 1 1 1 1 1 1
coef(v,unconstrained=F)
## DIC
## 1
model <- gls(yield~1,
# weights=v,
weights=varIdent(form=~1|env),
means)
# extract; the first value is sigma^2, i.e. the variance for the reference level of env (here LIN)
# relative sd parameters are estimated for all but the reference level
(sigma(model) * c(1,coef(model$modelStruct$varStruct,unconstrained = F)))^2
## BUE DIC
## 219.92812 23.30891 36.07766
sigma(model)
## [1] 14.82997
head(varWeights(model$modelStruct$varStruct))
## LIN LIN LIN LIN LIN LIN
## 1 1 1 1 1 1
# the varWeights of the non-reference levels match 1/(coefficients on the natural scale)
1/coef(model$modelStruct$varStruct,unconstrained = F)
## BUE DIC
## 3.071704 2.469001
coef(model$modelStruct$varStruct,unconstrained = F)
## BUE DIC
## 0.3255522 0.4050222
From http://bbolker.github.io/mixedmodels-misc/glmmFAQ.html: In nlme these so-called R-side (R for "residual") structures are accessible via
- the weights/VarStruct (heteroscedasticity) and
- the correlation/corStruct (spatial or temporal correlation)
arguments and data structures.
It is important to distinguish between 1) factor-specific residual variances and 2) factor-specific random-effects variances; see https://rpubs.com/bbolker/factorvar.
The first can be done in gls (and lme) with weights = varIdent(form = ~ 1 | time). The second is done by adding the variable before the | in the random statement; this apparently also works with lmer.
See also this good summary: https://quantdev.ssri.psu.edu/sites/qdev/files/ILD_Ch06_2017_MLMwithHeterogeneousVariance.html
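A minimal sketch contrasting the two on plotdat (my own example; assumes nlme and lme4 are loaded):
# 1) factor-specific residual variances (R side)
mod_res <- gls(yield~treat, weights=varIdent(form=~1|env), plotdat)
# 2) factor-specific random-effects variances (G side), as in the lmer example above
mod_ran <- lmer(yield~1+(env-1|treat), plotdat,
                control=lmerControl(optimizer="Nelder_Mead"))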
model <- mmer(yield~1,
#random=~treat,
rcov=~vs(env,units),
data=means,verbose=F)
summary(model)$varcomp
## VarComp VarCompSE Zratio Constraint
## BUE:units.yield-yield 23.31536 8.470886 2.752411 Positive
## DIC:units.yield-yield 36.07935 13.006220 2.774007 Positive
## LIN:units.yield-yield 219.85926 77.932189 2.821161 Positive
Some old examples, probably the old syntax:
The same effect needs to be included as fixed (to estimate a mean) and in the residual specification:
model <- mmer(yield~env,
rcov=~vs(env,units),
data=means,verbose=F)
summary(model)$varcomp
## VarComp VarCompSE Zratio Constraint
## BUE:units.yield-yield 22.41718 8.19392 2.735831 Positive
## DIC:units.yield-yield 37.59212 13.72588 2.738777 Positive
## LIN:units.yield-yield 85.99454 31.37078 2.741231 Positive
means[,var(yield),env]
## env V1
## 1: LIN 86.00366
## 2: BUE 22.41465
## 3: DIC 37.59237
RCBD analysis (treat + rep) within each env and a separate residual variance for each env:
model <- gls(yield~treat*env+env:rep,
weights=varIdent(form=~1|env),
plotdat)
(sigma(model) * c(1,coef(model$modelStruct$varStruct,unconstrained = F)))^2
## BUE DIC
## 73.419603 4.649653 5.748014
# compare to sigma^2 from a RCBD model in each env
plotdat[,sigma(lm(yield~treat+rep))^2,env]
## env V1
## 1: LIN 73.419611
## 2: BUE 4.649653
## 3: DIC 5.748014
# compare SE means for single env
plotdat[,mean(data.frame(emmeans(lm(yield~treat+rep),"treat"))$SE),env]
## env V1
## 1: LIN 4.947040
## 2: BUE 1.244943
## 3: DIC 1.384198
# from overall model
meantab <- data.table(data.frame(emmeans(model,"treat","env")))
meantab[,.(SEmean=mean(SE)),env]
## env SEmean
## 1: BUE 1.244944
## 2: DIC 1.384198
## 3: LIN 4.947040
# compare to model with common residual variance
redmodel <- gls(yield~treat*env+env:rep,
plotdat)
# LRT and AIC
anova(redmodel,model)
## Model df AIC BIC logLik Test L.Ratio p-value
## redmodel 1 55 731.1844 868.6740 -310.5922
## model 2 57 662.9372 805.4263 -274.4686 1 vs 2 72.24728 <.0001
# se of means
mean(data.frame(emmeans(redmodel,"treat"))$SE)
## [1] 1.761915
mean(data.frame(emmeans(model,"treat"))$SE)
## [1] 1.761915
# average sed
mean(data.frame(pairs(emmeans(model,"treat")))$SE)
## [1] 2.491724
mean(data.frame(pairs(emmeans(redmodel,"treat")))$SE)
## [1] 2.491724
The MSE from the fixed model in each env matches the corresponding residual variance from the combined model. There is substantial variation in residual variance between the locations, which is not fully due to differences in the locations' mean yield.
on rat dataset
As it is not always straightforward to extract the estimated variances, the following section shows some extraction methods.
extract.lme.cov from the mgcv package returns the full estimated covariance matrix of the data; the model must be an lme object (not gls).
library(mgcv)
# reorder data
# means <- means[sample(1:nrow(means)),]
model <- lme(yield~1,
random=~1|env,
cor=corCompSymm(form=~1|env),
weights=varIdent(form=~1|env),
means)
# extract; the first value is sigma^2, i.e. the residual variance for the reference level of env
(sigma(model) * c(1,coef(model$modelStruct$varStruct,unconstrained = F)))^2
## DIC LIN
## 37.56539 61.25553 154.80650
# matrix with dimension nobs x nobs; the diagonal contains the variances of the individual observations
mat <- extract.lme.cov(model)
dim(mat)
## [1] 48 48
nrow(means)
## [1] 48
# cov2: big matrix is split up by groups
mats <- extract.lme.cov2(model)
mats$ind
## [1] 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41
## [26] 42 43 44 45 46 47 48 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
length(mats$V) # as many matrices as levels of the grouping factor, each with dimension = obs per group
## [1] 3
#mats$V # the off-diagonals match the variance of the random effect (here env), the diagonals match the diagonal of extract.lme.cov (above)
A function to extract variance estimates and calculate their SE based on the delta method (https://stats.idre.ucla.edu/r/faq/how-can-i-calculate-standard-errors-for-variance-components-from-mixed-models/).
# variances of random effects of lme model
extvar <- function(model){
# random effect variances
vs <- data.frame((sigma(model) * c(coef(model$modelStruct$reStruct,unconstrained = F)))^2)
names(vs) <- "Variance"
#rownames(vs)[1] <- "Residual"
rownames(vs) <- gsub("dummy.sd(","",rownames(vs),fixed=T)
rownames(vs) <- gsub(".sd","",rownames(vs),fixed=T)
rownames(vs) <- gsub(" - 1)","",rownames(vs),fixed=T)
# residual variance(s)
vs_res <- data.frame((sigma(model) * c(1,coef(model$modelStruct$varStruct,unconstrained = F)))^2)
names(vs_res) <- "Variance"
#rownames(vs_res)[1] <- "Residual"
if(nrow(vs_res)>1){
rownames(vs_res) <- paste("Residuals_",attr(model$modelStruct$varStruct,"groupNames"),sep="")
} else {
rownames(vs_res) <- "Residuals"
}
# SE of variance components (using the delta method)
# with separate residual variances, the estimates from the covariance matrix (apVar) are multipliers,
# thus use them only for the random effects, and for the residual only if a single residual variance is estimated
library(msm)
# class(model)
# str(model)
var <- model$apVar
par<-attr(var, "Pars")
sevec <- c()
for (i in 1:length(par)){
form <- formula(paste("~exp(x",i,")^2",sep=""))
# deltamethod (~ exp(x1)^2, par, var)
sevec <- c(sevec,deltamethod (form, par, var))
}
sevec
vs$SE <- sevec[1:nrow(vs)]
# for residual only with a common variance
if (nrow(vs_res)==1){
vs_res$SE <- sevec[length(sevec)]
} else {vs_res$SE=NA}
vs <- rbind(vs,vs_res)
# z-test
vs$Z <- vs$Variance/vs$SE
vs$P <- 2*pnorm(-abs(vs$Z))
return(vs)
}
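Hypothetical usage (not run here): the function assumes an lme model with dummy-coded pdIdent/pdDiag random effects and, optionally, a varIdent residual structure, e.g. as in the earlier examples.
# model <- lme(yield~1,
#              random=list(dummy=pdIdent(~env-1),
#                          dummy=pdIdent(~treat-1)),
#              weights=varIdent(form=~1|env),
#              plotdat)
# extvar(model)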
From the nlme package: a repeated-measures analysis. Weight was measured over 64 days on 16 rats under 3 different diets.
head(BodyWeight)
## Grouped Data: weight ~ Time | Rat
## weight Time Rat Diet
## 1 240 1 1 1
## 2 250 8 1 1
## 3 255 15 1 1
## 4 260 22 1 1
## 5 262 29 1 1
## 6 258 36 1 1
orth <- data.table(Orthodont)
ggplot(orth,aes(age,distance,colour=Sex,group=Subject))+
geom_line()
Mixed models with appropriate covariance structures can be used to estimate certain stability parameters (Piepho 1999).
The most prominent example is the estimation of genetic correlations.
The syntax of this package is strongly oriented towards asreml-R. Specifying unstructured (us) covariance matrices will estimate variances for each trait as well as their covariances, for the two factors var and rep and for the residuals.
Variance and fixed effect estimates (intercept) from the univariate model match the estimates from the multivariate model.
Calculate genetic correlation, based on variance and covariance estimates.
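A hedged sketch of such a bivariate model in sommer (my own illustration; the exact vs()/Gtc syntax may differ between sommer versions, and the original analysis used the factors var and rep, here replaced by treat and rep from plotdat):
mv <- mmer(cbind(yield,prot)~1,
           random=~vs(treat,Gtc=unsm(2))+vs(rep,Gtc=unsm(2)), # unstructured trait (co)variances
           rcov=~vs(units,Gtc=unsm(2)),
           data=plotdat,verbose=F)
summary(mv)$varcomp
# genetic correlation for treat: cov(yield,prot) / sqrt(var(yield) * var(prot)) from the varcomp table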
Piepho, Hans-Peter. 1999. “Stability Analysis Using the SAS System.” Agronomy Journal 91 (1): 154. https://doi.org/10.2134/agronj1999.00021962009100010024x.