Introduction

This is a very quick tutorial on running models of plasticity in R (particularly among-individual differences in plasticity), created for Suzanne Alonso and Nick Royle’s ISBE symposium on behavioural plasticity. I assume some experience with regression models, but include brief pointers on specifying models, running diagnostics, and checking whether the model is a good fit. If you want to read more about the intricacies of random-intercept and -slope models, the University of Bristol’s Centre for Multilevel Modelling has some great material:

http://www.bristol.ac.uk/cmm/learning/videos/

If you find this useful, or have any comments / suggestions, you can email me at houslay@gmail.com.

Load libraries

There are a bunch of libraries to load (and install, if necessary) - not all of them are for actually running the models, but they’ll make life easier!

require(knitr)
require(tidyr)
require(dplyr)
require(ggplot2)
require(broom)
require(gridExtra)

require(lme4)

# lme4 and tidyr both have 'expand' functions - 
#  we want to make sure it's tidyr's that we use when we call it!
expand <- tidyr::expand

# Function for standard error
std_err <- function(x) sd(x)/sqrt(length(x))
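
A quick sanity check of the helper on some made-up numbers:

std_err(c(2, 4, 6, 8))
## [1] 1.290994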

Reaction norms approach

The most common approach to modelling variation in individual plasticity is the reaction norm approach - using random regression models to test for among-individual variation in slopes. This approach has been covered fairly extensively in the literature, e.g. Dingemanse et al 2010.

Figure taken from Dingemanse et al (2010)

Example overview

In this example, we have observed individual behaviour (‘Activity’) at 5 different temperatures, and want to test whether (i) there is population-level plasticity, such that average activity is significantly affected by temperature, and (ii) there are individual differences in plasticity, such that the variance in individual slope deviations is significantly greater than zero.

Load and check simulated data

Next, we’ll load up some simulated data. This has 3 columns:

  • ID (individual identifier)
  • Temp (predictor variable: the temperature, in C, at which measurement was taken)
  • Activity (the response variable: activity level, on some arbitrary scale)

df_activity <- data_frame(ID = factor(rep(LETTERS[1:20], each=5)),
                          Temp = rep(-2:2, times=20),
                          Activity = c(5.509483369,
                                       5.086400559,
                                       7.291669795,
                                       7.7255394,
                                       12.48731752,
                                       -0.15231211,
                                       3.459667228,
                                       4.676662498,
                                       3.646438077,
                                       3.819056263,
                                       1.845028096,
                                       3.191492183,
                                       9.580782097,
                                       11.4420279,
                                       12.37367193,
                                       4.782295151,
                                       4.754364267,
                                       6.493168431,
                                       5.175259749,
                                       8.687935843,
                                       -0.014547574,
                                       6.828039886,
                                       9.172530306,
                                       15.28585964,
                                       15.33542768,
                                       3.83089209,
                                       4.281952234,
                                       5.08286246,
                                       9.947459161,
                                       11.47400038,
                                       2.68590715,
                                       3.814660944,
                                       10.56006869,
                                       9.671785115,
                                       14.28151694,
                                       5.465825545,
                                       4.22941294,
                                       9.21819825,
                                       8.490583698,
                                       11.12413906,
                                       3.5050106,
                                       4.576850562,
                                       3.337903469,
                                       3.312161935,
                                       6.430644692,
                                       4.350695078,
                                       3.080603398,
                                       5.956493065,
                                       9.342901914,
                                       10.26237114,
                                       2.468397656,
                                       6.546708098,
                                       6.090473345,
                                       9.052316868,
                                       10.40528841,
                                       0.420737959,
                                       2.899078212,
                                       8.042977408,
                                       7.552856261,
                                       10.20289061,
                                       2.287388649,
                                       7.251450611,
                                       7.597332849,
                                       5.844943899,
                                       5.217627539,
                                       2.246044457,
                                       5.111583578,
                                       10.64544808,
                                       16.64489096,
                                       17.8635928,
                                       3.014025209,
                                       2.979938777,
                                       1.872008574,
                                       5.843502353,
                                       6.902012099,
                                       1.537929061,
                                       2.020490233,
                                       6.711361941,
                                       8.444715807,
                                       9.106907328,
                                       1.478295133,
                                       4.370893504,
                                       6.481001168,
                                       15.18832065,
                                       13.85280053,
                                       3.711452311,
                                       4.231790222,
                                       11.51008315,
                                       14.75226205,
                                       18.87686314,
                                       1.946090462,
                                       1.457254528,
                                       3.893480056,
                                       9.035952938,
                                       6.940550594,
                                       1.752620681,
                                       3.227335752,
                                       6.047545853,
                                       10.88963854,
                                       10.80254743))

Take a quick look at the structure of the data:

str(df_activity)
## Classes 'tbl_df', 'tbl' and 'data.frame':    100 obs. of  3 variables:
##  $ ID      : Factor w/ 20 levels "A","B","C","D",..: 1 1 1 1 1 2 2 2 2 2 ...
##  $ Temp    : int  -2 -1 0 1 2 -2 -1 0 1 2 ...
##  $ Activity: num  5.51 5.09 7.29 7.73 12.49 ...
head(df_activity)
## # A tibble: 6 x 3
##       ID  Temp   Activity
##   <fctr> <int>      <dbl>
## 1      A    -2  5.5094834
## 2      A    -1  5.0864006
## 3      A     0  7.2916698
## 4      A     1  7.7255394
## 5      A     2 12.4873175
## 6      B    -2 -0.1523121

We can see that our data structure is ‘tidy’; it maps how we want to think about and use the data (each row is a single observation of a single individual). We can use ggplot to plot the data with individual lines:

ggplot(df_activity, aes(x = Temp, y = Activity, group = ID)) +
  geom_line(alpha = 0.5)
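
Before fitting any models, one optional quick check (a sketch, not part of the formal analysis) is to overlay per-individual OLS fits on the raw points, to eyeball whether slopes might differ:

# Per-individual least-squares fits; 'group = ID' gives one line per individual
ggplot(df_activity, aes(x = Temp, y = Activity, group = ID)) +
  geom_point(alpha = 0.4) +
  geom_smooth(method = "lm", se = FALSE, colour = "black", size = 0.3) +
  theme_bw()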

Random intercept model

We will first model our data using a mixed model with ‘random intercepts’ only - we want to include a fixed effect of ‘Temp’ (population-level plasticity), and allow individuals to have distinct intercepts:

lmer_m1_int <- lmer(Activity ~ Temp + (1|ID),
                    data = df_activity)

plot(lmer_m1_int)

hist(residuals(lmer_m1_int))

qqnorm(residuals(lmer_m1_int))

summary(lmer_m1_int)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Activity ~ Temp + (1 | ID)
##    Data: df_activity
## 
## REML criterion at convergence: 477.5
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -2.18398 -0.61167 -0.05158  0.71892  2.22801 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  ID       (Intercept) 3.031    1.741   
##  Residual             5.322    2.307   
## Number of obs: 100, groups:  ID, 20
## 
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)   6.8007     0.4525   15.03
## Temp          2.1572     0.1631   13.22
## 
## Correlation of Fixed Effects:
##      (Intr)
## Temp 0.000

We can use the predict function to look at population-level plasticity:

##
# Use predict.merMod to get predictions for the population-level response
# - don't include random effects (although this can be done, by repeating the
#    sequences for 'temp' for every level of the random effect and setting 're.form = NULL'...)
# - no option for SEs because of random effects; can bootstrap / MCMC them
#    but not the main point of interest here
df_pred_int_pop <- data.frame(Temp = seq(from = min(df_activity$Temp),
                                         to = max(df_activity$Temp),
                                         length.out = 50))
df_pred_int_pop$fit <- predict(lmer_m1_int, 
                               newdata = df_pred_int_pop, re.form = NA)

##
# Plot prediction for population-level response,
#  with individual-level raw data in background
##
ggplot(df_pred_int_pop, aes(x = Temp, y = fit)) +
  geom_line(size = 2) +
  geom_line(data = df_activity,
            aes(y = Activity, group = ID),
            alpha = 0.4) +
  ylab("Activity") +
  theme_bw()
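
As noted in the code comments above, lme4 doesn’t provide standard errors for these predictions directly, but we can bootstrap them. A minimal sketch using lme4’s bootMer (nsim kept small for speed - increase it for real use; ‘pred_fun’ and the lwr/upr columns are just my names):

# Bootstrap the population-level predictions
pred_fun <- function(fit) predict(fit, newdata = df_pred_int_pop, re.form = NA)
boot_pred <- bootMer(lmer_m1_int, pred_fun, nsim = 100)

# 2.5% and 97.5% quantiles of the bootstrap replicates as a rough 95% CI
df_pred_int_pop$lwr <- apply(boot_pred$t, 2, quantile, probs = 0.025)
df_pred_int_pop$upr <- apply(boot_pred$t, 2, quantile, probs = 0.975)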

So, this fits fairly well as a general idea of behavioural plasticity at the population level. We can also use lmerTest to check for significance of the random effect:

# Another option is to load the package lmerTest (but it takes over all lmer models)
library(lmerTest)
rand(lmer_m1_int) 
## Analysis of Random effects Table:
##    Chi.sq Chi.DF p.value    
## ID   17.5      1   3e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
detach(package:lmerTest) # detach it so that future lmer models will give standard lme4 output

# Note that we get basically the same p-value as when
#  we do a chi-square likelihood ratio test of our lmer model
#  and a simple linear regression without the random effect:
lm_m1 <- lm(Activity ~ Temp, data = df_activity)
anova(lmer_m1_int, lm_m1)
## Data: df_activity
## Models:
## lm_m1: Activity ~ Temp
## lmer_m1_int: Activity ~ Temp + (1 | ID)
##             Df    AIC    BIC  logLik deviance  Chisq Chi Df Pr(>Chisq)    
## lm_m1        3 498.91 506.73 -246.46   492.91                             
## lmer_m1_int  4 483.89 494.31 -237.94   475.89 17.025      1   3.69e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Given that we just have random intercepts, we can also calculate the conditional repeatability easily - in terms of how much of the total phenotypic variance (after accounting for fixed effects) is explained by differences between individuals. This information is given in the ‘random effects’ part of the model summary, but we can also use a handy function from the ‘broom’ package to get at it a little more easily:

# Asking for random effects of our model, at variance/covariance scales
tidy(lmer_m1_int, effects = "ran_pars", scales = "vcov")
##                       term    group estimate
## 1       var_(Intercept).ID       ID 3.031157
## 2 var_Observation.Residual Residual 5.321828
# We can spread our data and create a new column of repeatability
tidy(lmer_m1_int, effects = "ran_pars", scales = "vcov") %>% 
  select(group, estimate) %>% 
  spread(group, estimate) %>% 
  mutate(Repeatability = ID/(ID + Residual))
##         ID Residual Repeatability
## 1 3.031157 5.321828      0.362883
# A more old school method, if you prefer:
df_m1_tidy <- tidy(lmer_m1_int, effects = "ran_pars", scales = "vcov")
df_m1_tidy[df_m1_tidy$group == "ID",]$estimate/(sum(df_m1_tidy$estimate))
## [1] 0.362883

So, we have significant population-level plasticity, a significant effect of differences among individuals, and a repeatability of 0.36. Sounds good! But we’ve missed an important step: does this model really fit the data that well?

We can plot our individual-level predictions against the real data, using predictions that the R package broom handily creates:

##
# For visualising individual-level model fits, use broom to get predictions for every observation
#  in our original data frame:
#
# For each row in original data frame, we get predictions for:
# - .fixed = fixed effects only
# - .fitted = fixed and random effects
##
augment(lmer_m1_int) %>% head(10)  # Quick view
##      Activity Temp ID    .fitted     .resid    .fixed        .mu .offset
## 1   5.5094834   -2  A  3.0927239  2.4167595  2.486286  3.0927239       0
## 2   5.0864006   -1  A  5.2499302 -0.1635296  4.643492  5.2499302       0
## 3   7.2916698    0  A  7.4071365 -0.1154667  6.800699  7.4071365       0
## 4   7.7255394    1  A  9.5643427 -1.8388033  8.957905  9.5643427       0
## 5  12.4873175    2  A 11.7215490  0.7657685 11.115111 11.7215490       0
## 6  -0.1523121   -2  B -0.2601291  0.1078170  2.486286 -0.2601291       0
## 7   3.4596672   -1  B  1.8970772  1.5625901  4.643492  1.8970772       0
## 8   4.6766625    0  B  4.0542834  0.6223791  6.800699  4.0542834       0
## 9   3.6464381    1  B  6.2114897 -2.5650516  8.957905  6.2114897       0
## 10  3.8190563    2  B  8.3686960 -4.5496397 11.115111  8.3686960       0
##    .sqrtXwt .sqrtrwt .weights     .wtres
## 1         1        1        1  2.4167595
## 2         1        1        1 -0.1635296
## 3         1        1        1 -0.1154667
## 4         1        1        1 -1.8388033
## 5         1        1        1  0.7657685
## 6         1        1        1  0.1078170
## 7         1        1        1  1.5625901
## 8         1        1        1  0.6223791
## 9         1        1        1 -2.5650516
## 10        1        1        1 -4.5496397
# Note that if we wanted to average over or exclude fixed effects, we
#  would have to use 'predict' or use the 'newdata' parameter here

##
# Can plot data and fitted points over the top of each other:
##
augment(lmer_m1_int) %>%
  ggplot(., aes(x = Temp, y = Activity, group = ID)) +
  geom_line(alpha = 0.3) +
  geom_line(aes(y = .fitted),
            colour = 'red',
            alpha = 0.6) +
  theme_bw()

##
# Another way is to pull out original data columns and the fitted values;
#  Could make 2 separate plots but a quick reshape enables
#   faceting in ggplot2:
##
augment(lmer_m1_int) %>%
  select(Activity, Temp, ID, fit = `.fitted`) %>% 
  gather(type, yval, Activity, fit) %>% 
  ggplot(., aes(x = Temp, y = yval, group = ID)) +
  geom_line(alpha = 0.6) +
  facet_grid(. ~ type) +
  theme_bw()

Either way, it’s clear that our predictions don’t do a particularly great job of matching the data. Now we move on to random slope models…

Random slope model

We can now let individuals vary in their behavioural response to the predictor - so, in addition to fitting the fixed effect of temperature (the population-average change in activity with increasing temperature), we allow individuals to vary not only in their intercepts but also in their slopes.

lmer_m1_rs <- lmer(Activity ~ Temp + (Temp|ID),
                    data = df_activity)

plot(lmer_m1_rs)

hist(residuals(lmer_m1_rs))

qqnorm(residuals(lmer_m1_rs))

summary(lmer_m1_rs)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Activity ~ Temp + (Temp | ID)
##    Data: df_activity
## 
## REML criterion at convergence: 429.1
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -1.86086 -0.63760 -0.09427  0.56815  2.20462 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev. Corr
##  ID       (Intercept) 3.585    1.893        
##           Temp        1.129    1.063    1.00
##  Residual             2.596    1.611        
## Number of obs: 100, groups:  ID, 20
## 
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)   6.8007     0.4530  15.013
## Temp          2.1572     0.2635   8.187
## 
## Correlation of Fixed Effects:
##      (Intr)
## Temp 0.843

Our diagnostics look good again, and we can see that our population-level intercept and slope are pretty much the same as in the random-intercept model (visible from the summary, but also from a quick prediction):

We can use the predict function to look at population-level plasticity:

##
# Use predict.merMod to get predictions for the population-level response
# - don't include random effects (although this can be done, by repeating the
#    sequences for 'temp' for every level of the random effect and setting 're.form = NULL'...)
# - no option for SEs because of random effects; can bootstrap / MCMC them
#    but not the main point of interest here
df_pred_rs_pop <- data.frame(Temp = seq(from = min(df_activity$Temp),
                                         to = max(df_activity$Temp),
                                         length.out = 50))
df_pred_rs_pop$fit <- predict(lmer_m1_rs, 
                              newdata = df_pred_rs_pop, re.form = NA)

##
# Plot prediction for population-level response,
#  with individual-level raw data in background
##
ggplot(df_pred_rs_pop, aes(x = Temp, y = fit)) +
  geom_line(size = 2) +
  geom_line(data = df_activity,
            aes(y = Activity, group = ID),
            alpha = 0.4) +
  ylab("Activity") +
  theme_bw()

We can check significance of the random slope using lmerTest, or again by using a chi-square LRT of the nested models (random slope and intercept vs random intercept only):

# Another option is to load the package lmerTest (but it takes over all lmer models)
library(lmerTest)
rand(lmer_m1_rs) 
## Analysis of Random effects Table:
##         Chi.sq Chi.DF p.value    
## Temp:ID   48.3      2   3e-11 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
detach(package:lmerTest) # detach it so that future lmer models will give standard lme4 output


anova(lmer_m1_rs, lmer_m1_int)
## Data: df_activity
## Models:
## lmer_m1_int: Activity ~ Temp + (1 | ID)
## lmer_m1_rs: Activity ~ Temp + (Temp | ID)
##             Df    AIC    BIC  logLik deviance  Chisq Chi Df Pr(>Chisq)    
## lmer_m1_int  4 483.89 494.31 -237.94   475.89                             
## lmer_m1_rs   6 439.28 454.91 -213.64   427.28 48.604      2  2.791e-11 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Notice that there are 2 degrees of freedom in the test above; this is because our model fits not only the variance for the random slope, but also the covariance between intercepts and slopes. We can have a closer look at the random effects again using broom’s ‘tidy’ function:

# Asking for random effects of our model, at variance/covariance scales
tidy(lmer_m1_rs, effects = "ran_pars", scales = "vcov")
##                       term    group estimate
## 1       var_(Intercept).ID       ID 3.584682
## 2              var_Temp.ID       ID 1.128935
## 3  cov_(Intercept).Temp.ID       ID 2.011684
## 4 var_Observation.Residual Residual 2.596496

Note that here we are not going to calculate the repeatability. It gets a little hairier for random slopes - if we think back to our random intercepts model, we calculated repeatability as the proportion of phenotypic variation explained by differences among individuals. But random slopes allow the differences between individuals to change along our x-axis! There is an equation for working it out, and a paper in Methods Ecol Evol for applying it, but proceed with caution…

Johnson 2014

…which extends Nakagawa & Schielzeth 2010

Also see this nice answer to a question on StackOverflow.

Also, don’t be sucked in by thinking that you can compare the variance explained by individual intercepts with that of individual slopes - these are calculated on different scales.

We can plot our individual-level predictions from the random slopes model against the real data, using predictions that the R package broom handily creates:

##
# Can use broom to get individual-level predictions:
#
# For each row in original data frame, we get predictions for:
# - .fixed = fixed effects only
# - .fitted = fixed and random effects
##
augment(lmer_m1_rs) %>% head(10)
##      Activity Temp ID   .fitted     .resid    .fixed       .mu .offset
## 1   5.5094834   -2  A  2.468306  3.0411774  2.486286  2.468306       0
## 2   5.0864006   -1  A  4.707963  0.3784372  4.643492  4.707963       0
## 3   7.2916698    0  A  6.947621  0.3440491  6.800699  6.947621       0
## 4   7.7255394    1  A  9.187278 -1.4617386  8.957905  9.187278       0
## 5  12.4873175    2  A 11.426935  1.0603822 11.115111 11.426935       0
## 6  -0.1523121   -2  B  2.846204 -2.9985166  2.486286  2.846204       0
## 7   3.4596672   -1  B  3.352931  0.1067358  4.643492  3.352931       0
## 8   4.6766625    0  B  3.859658  0.8170040  6.800699  3.859658       0
## 9   3.6464381    1  B  4.366385 -0.7199474  8.957905  4.366385       0
## 10  3.8190563    2  B  4.873112 -1.0540562 11.115111  4.873112       0
##    .sqrtXwt .sqrtrwt .weights     .wtres
## 1         1        1        1  3.0411774
## 2         1        1        1  0.3784372
## 3         1        1        1  0.3440491
## 4         1        1        1 -1.4617386
## 5         1        1        1  1.0603822
## 6         1        1        1 -2.9985166
## 7         1        1        1  0.1067358
## 8         1        1        1  0.8170040
## 9         1        1        1 -0.7199474
## 10        1        1        1 -1.0540562
##
# Can plot data and fitted over the top of each other:
##
augment(lmer_m1_rs) %>%
  ggplot(., aes(x = Temp, y = Activity, group = ID)) +
  geom_line(alpha = 0.3) +
  geom_line(aes(y = .fitted),
            colour = 'red',
            alpha = 0.6) +
  theme_bw()

##
# Another way is to pull out original data columns and the fitted values;
#  Could make 2 separate plots but a quick reshape enables
#   faceting in ggplot2:
##
augment(lmer_m1_rs) %>%
  select(Activity, Temp, ID, fit = `.fitted`) %>% 
  gather(type, yval, Activity, fit) %>% 
  ggplot(., aes(x = Temp, y = yval, group = ID)) +
  geom_line(alpha = 0.6) +
  facet_grid(. ~ type) +
  theme_bw()

We can see from these figures that our random slopes model does a much better job of fitting the data - not surprising that the addition of random slopes (and the covariance between intercept and slope) was so significant!

These figures also reinforce the point made earlier - individual differences explain a lot of phenotypic variation where Temp == 0 (and the correlation between slopes and intercepts is 1), but at other points (e.g. where Temp == -2) they explain very little. Because our individual-level slopes come from an intercept-slope covariance structure, we place constraints on how we model the variance - linear reaction norms always produce among-individual variance that is a quadratic function of the environment. If this sounds too much like mathematical jargon, just have a look at the plot we made above: variation (basically, the range spanned by the individual slopes) starts small, drops towards zero, then increases.

This is another pointer that random slopes place a possibly unrealistic constraint on how we model variation - do we really expect there to be some magical temperature at which there is absolutely zero variation among individuals? It’s okay to model data in this way, as long as we’re aware of these traps when we come to interpret the models.
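
To make that quadratic variance function concrete, here is a small sketch (my own, using the variance components estimated above) that computes the among-individual variance implied by the random-slope model, V(x) = V_int + 2*x*Cov(int, slope) + x^2*V_slope, across the temperature range:

# Pull the (co)variance components out of the random-slope model
vc_rs <- as.data.frame(VarCorr(lmer_m1_rs))
v_int   <- vc_rs$vcov[which(vc_rs$grp == "ID" & vc_rs$var1 == "(Intercept)" & is.na(vc_rs$var2))]
v_slope <- vc_rs$vcov[which(vc_rs$grp == "ID" & vc_rs$var1 == "Temp" & is.na(vc_rs$var2))]
cov_is  <- vc_rs$vcov[which(vc_rs$grp == "ID" & vc_rs$var1 == "(Intercept)" & vc_rs$var2 == "Temp")]

# Among-individual variance as a function of temperature
df_var_ind <- data.frame(Temp = seq(-2, 2, length.out = 50)) %>% 
  mutate(V_ind = v_int + 2 * Temp * cov_is + Temp^2 * v_slope)

# With the intercept-slope correlation at 1, the minimum drops to ~0 near Temp = -1.8
ggplot(df_var_ind, aes(x = Temp, y = V_ind)) +
  geom_line() +
  ylab("Among-individual variance") +
  theme_bw()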

The importance of scaling

Before moving on, we should also take into account another issue (which was touched on above): how the scale of our predictor affects the model.

# Duplicate original data set, and convert temperature scale
df_activity_f <- df_activity %>% 
  mutate(Temp_f = 32 + (Temp * 1.8))

# Plot the raw (simulated) data, grouped by individual ID
ggplot(df_activity_f, aes(x = Temp_f, y = Activity, group = ID)) +
  geom_line(alpha = 0.7) +
  theme_bw()

We can quickly check out the random intercepts model:

lmer_f_int <- lmer(Activity ~ Temp_f + (1|ID),
                   data = df_activity_f)


summary(lmer_f_int)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Activity ~ Temp_f + (1 | ID)
##    Data: df_activity_f
## 
## REML criterion at convergence: 478.6
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -2.18398 -0.61167 -0.05158  0.71892  2.22801 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  ID       (Intercept) 3.031    1.741   
##  Residual             5.322    2.307   
## Number of obs: 100, groups:  ID, 20
## 
## Fixed effects:
##              Estimate Std. Error t value
## (Intercept) -31.54964    2.93506  -10.75
## Temp_f        1.19845    0.09062   13.22
## 
## Correlation of Fixed Effects:
##        (Intr)
## Temp_f -0.988
augment(lmer_f_int) %>%
  select(Activity, Temp_f, ID, fit = `.fitted`) %>% 
  gather(type, yval, Activity, fit) %>% 
  ggplot(., aes(x = Temp_f, y = yval, group = ID)) +
  geom_line(alpha = 0.6) +
  facet_grid(. ~ type) +
  xlab("Temp (F)") +
  theme_bw()

We get very similar results to before, with a repeatability again of 0.36.

lmer_f_rs <- lmer(Activity ~ Temp_f + (Temp_f|ID),
                  data = df_activity_f)
summary(lmer_f_rs)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Activity ~ Temp_f + (Temp_f | ID)
##    Data: df_activity_f
## 
## REML criterion at convergence: 430.3
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -1.86085 -0.63760 -0.09427  0.56815  2.20462 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev. Corr 
##  ID       (Intercept) 288.8563 16.9958       
##           Temp_f        0.3484  0.5903  -1.00
##  Residual               2.5965  1.6114       
## Number of obs: 100, groups:  ID, 20
## 
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept) -31.5496     4.3095  -7.321
## Temp_f        1.1984     0.1464   8.187
## 
## Correlation of Fixed Effects:
##        (Intr)
## Temp_f -0.998

Our individual intercept variance has gone through the roof! Also, our correlation between intercepts and slopes is now -1, as opposed to 1 for the same data (but on a differently scaled x-axis) in the previous model.

The model looks fine when looking just within the range of the data:

augment(lmer_f_rs) %>%
  select(Activity, Temp_f, ID, fit = `.fitted`) %>% 
  gather(type, yval, Activity, fit) %>% 
  ggplot(., aes(x = Temp_f, y = yval, group = ID)) +
  geom_line(alpha = 0.6) +
  facet_grid(. ~ type) +
  theme_bw()

…But our Celsius range was centred on zero, which is where the intercept is set automatically. We haven’t told our model anything about rescaling the predictor, so of course the intercept for this model sits at 0 Fahrenheit. Let’s produce individual-level predictions that start from the model’s intercept…

df_pred_f_rs <- data.frame(ID = factor(rep(levels(df_activity_f$ID), each = 100)),
                           Temp_f = rep(seq(0, max(df_activity_f$Temp_f), 
                                            length.out = 100),
                                        times = 20))

df_pred_f_rs$fit <- predict(lmer_f_rs, 
                           newdata = df_pred_f_rs, re.form = NULL)

##
# Plot prediction for population-level response,
#  with individual-level raw data in background
##
ggplot(df_pred_f_rs, aes(x = Temp_f, y = fit, group = ID)) +
  geom_line(alpha = 0.6,
            colour = 'red') +
  geom_line(data = df_activity_f,
            aes(y = Activity, group = ID),
            alpha = 0.4) +
  ylab("Activity") +
  theme_bw()

Now we can see the problem! We can use the scale function to centre the predictor automatically. Note that I have set ‘scale = FALSE’ within the scale function; this centres our predictor on its mean, but doesn’t rescale it to standard-deviation units, because I still want the model coefficients to give me the average change in activity per 1-degree-Fahrenheit change in temperature:

lmer_f_rs_scale <- lmer(Activity ~ scale(Temp_f, scale = FALSE) + 
                          (scale(Temp_f, scale = FALSE)|ID),
                  data = df_activity_f)
summary(lmer_f_rs_scale)
## Linear mixed model fit by REML ['lmerMod']
## Formula: 
## Activity ~ scale(Temp_f, scale = FALSE) + (scale(Temp_f, scale = FALSE) |  
##     ID)
##    Data: df_activity_f
## 
## REML criterion at convergence: 430.3
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -1.86086 -0.63760 -0.09427  0.56815  2.20462 
## 
## Random effects:
##  Groups   Name                         Variance Std.Dev. Corr
##  ID       (Intercept)                  3.5847   1.8933       
##           scale(Temp_f, scale = FALSE) 0.3484   0.5903   1.00
##  Residual                              2.5965   1.6114       
## Number of obs: 100, groups:  ID, 20
## 
## Fixed effects:
##                              Estimate Std. Error t value
## (Intercept)                    6.8007     0.4530  15.013
## scale(Temp_f, scale = FALSE)   1.1984     0.1464   8.187
## 
## Correlation of Fixed Effects:
##             (Intr)
## s(T_,s=FALS 0.843
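
If you find the scale() terms cluttering the model output, an equivalent approach (a sketch; the column name ‘Temp_fc’ and model name ‘lmer_f_rs_c’ are just mine) is to centre the predictor by hand first:

# Centre the Fahrenheit predictor on its mean by hand
df_activity_f <- df_activity_f %>% 
  mutate(Temp_fc = Temp_f - mean(Temp_f))

# Refit the random-slope model on the hand-centred predictor;
#  the estimates should match lmer_f_rs_scale above
lmer_f_rs_c <- lmer(Activity ~ Temp_fc + (Temp_fc|ID),
                    data = df_activity_f)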

Additional covariates

You get a call from your collaborator: “hey, we measured behaviour on different sexes - do you think this matters?”

Let’s take a look!

df_sex <- data_frame(ID = LETTERS[1:20]) %>% 
  mutate(SexM = ifelse(ID %in% c("A","C","E","G","H","L","M","N","P","R"),1,0))


df_activity <- left_join(df_activity,
                         df_sex)

str(df_activity)
## Classes 'tbl_df', 'tbl' and 'data.frame':    100 obs. of  4 variables:
##  $ ID      : chr  "A" "A" "A" "A" ...
##  $ Temp    : int  -2 -1 0 1 2 -2 -1 0 1 2 ...
##  $ Activity: num  5.51 5.09 7.29 7.73 12.49 ...
##  $ SexM    : num  1 1 1 1 1 0 0 0 0 0 ...
ggplot(df_activity, aes(x = Temp, y = Activity, colour = factor(SexM), group = ID)) +
  geom_line(alpha = 0.7) +
  theme_bw()

It certainly looks as though there are sex differences in plasticity; let’s test this first with a random intercepts model…

lmer_m2_ri <- lmer(Activity ~ Temp*SexM + (1|ID),
                    data = df_activity)

plot(lmer_m2_ri)

hist(residuals(lmer_m2_ri))

qqnorm(residuals(lmer_m2_ri))

lmer_m2_ri_testsex <- lmer(Activity ~ Temp+SexM + (1|ID),
                    data = df_activity)

anova(lmer_m2_ri, lmer_m2_ri_testsex)
## Data: df_activity
## Models:
## lmer_m2_ri_testsex: Activity ~ Temp + SexM + (1 | ID)
## lmer_m2_ri: Activity ~ Temp * SexM + (1 | ID)
##                    Df    AIC    BIC  logLik deviance  Chisq Chi Df
## lmer_m2_ri_testsex  5 478.95 491.98 -234.48   468.95              
## lmer_m2_ri          6 471.74 487.37 -229.87   459.74 9.2133      1
##                    Pr(>Chisq)   
## lmer_m2_ri_testsex              
## lmer_m2_ri           0.002403 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

…but we need to check whether we also have significant differences among individuals in slopes:

lmer_m2_rs <- lmer(Activity ~ Temp*SexM + (Temp|ID),
                    data = df_activity)

anova(lmer_m2_rs, lmer_m2_ri)
## Data: df_activity
## Models:
## lmer_m2_ri: Activity ~ Temp * SexM + (1 | ID)
## lmer_m2_rs: Activity ~ Temp * SexM + (Temp | ID)
##            Df    AIC    BIC  logLik deviance  Chisq Chi Df Pr(>Chisq)    
## lmer_m2_ri  6 471.74 487.37 -229.87   459.74                             
## lmer_m2_rs  8 436.28 457.12 -210.14   420.28 39.464      2  2.695e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

We do, so we have to test the sex differences again:

lmer_m2_rs_testsex <- lmer(Activity ~ Temp+SexM + (Temp|ID),
                    data = df_activity)

anova(lmer_m2_rs, lmer_m2_rs_testsex)
## Data: df_activity
## Models:
## lmer_m2_rs_testsex: Activity ~ Temp + SexM + (Temp | ID)
## lmer_m2_rs: Activity ~ Temp * SexM + (Temp | ID)
##                    Df    AIC    BIC  logLik deviance Chisq Chi Df
## lmer_m2_rs_testsex  7 438.06 456.29 -212.03   424.06             
## lmer_m2_rs          8 436.28 457.12 -210.14   420.28 3.779      1
##                    Pr(>Chisq)  
## lmer_m2_rs_testsex             
## lmer_m2_rs             0.0519 .
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The sex x temperature interaction is now not statistically significant, so what to do next comes down to your philosophy. I would argue that keeping the interaction in the model is better; otherwise you might overinterpret individual differences in plasticity (as some proportion of them can be explained by sex differences):

df_pred_rs_sex <- data.frame(Temp = rep(seq(from = min(df_activity$Temp),
                                         to = max(df_activity$Temp),
                                         length.out = 50),
                                        times = 2),
                             SexM = rep(c(0,1),each = 50))

df_pred_rs_sex$fit <- predict(lmer_m2_rs, 
                              newdata = df_pred_rs_sex, re.form = NA)

##
# Plot prediction for population-level response,
#  with individual-level raw data in background
##
ggplot(df_pred_rs_sex, aes(x = Temp, y = fit, colour = factor(SexM))) +
  geom_line(size = 2) +
  geom_line(data = df_activity,
            aes(y = Activity, group = ID),
            alpha = 0.4) +
  ylab("Activity") +
  theme_bw()

We can now plot our model fit for random slopes with sex x temp interaction:

##
# Can use broom to get individual-level predictions:
#
# For each row in original data frame, we get predictions for:
# - .fixed = fixed effects only
# - .fitted = fixed and random effects
##
augment(lmer_m2_rs) %>% head(10)
##      Activity Temp SexM ID   .fitted     .resid    .fixed       .mu
## 1   5.5094834   -2    1  A  2.761699  2.7477842  2.597501  2.761699
## 2   5.0864006   -1    1  A  4.945686  0.1407145  5.232906  4.945686
## 3   7.2916698    0    1  A  7.129673  0.1619969  7.868310  7.129673
## 4   7.7255394    1    1  A  9.313660 -1.5881203 10.503715  9.313660
## 5  12.4873175    2    1  A 11.497647  0.9896709 13.139120 11.497647
## 6  -0.1523121   -2    0  B  2.796849 -2.9491610  2.375071  2.796849
## 7   3.4596672   -1    0  B  3.316295  0.1433720  4.054079  3.316295
## 8   4.6766625    0    0  B  3.835742  0.8409209  5.733087  3.835742
## 9   3.6464381    1    0  B  4.355188 -0.7087499  7.412095  4.355188
## 10  3.8190563    2    0  B  4.874634 -1.0555781  9.091102  4.874634
##    .offset .sqrtXwt .sqrtrwt .weights     .wtres
## 1        0        1        1        1  2.7477842
## 2        0        1        1        1  0.1407145
## 3        0        1        1        1  0.1619969
## 4        0        1        1        1 -1.5881203
## 5        0        1        1        1  0.9896709
## 6        0        1        1        1 -2.9491610
## 7        0        1        1        1  0.1433720
## 8        0        1        1        1  0.8409209
## 9        0        1        1        1 -0.7087499
## 10       0        1        1        1 -1.0555781
##
# Another way is to pull out original data columns and the fitted values;
#  Could make 2 separate plots but a quick reshape enables
#   faceting in ggplot2:
##
augment(lmer_m2_rs) %>%
  select(Activity, Temp, ID, SexM, fit = `.fitted`) %>% 
  gather(type, yval, Activity, fit) %>% 
  ggplot(., aes(x = Temp, y = yval, colour = factor(SexM), group = ID)) +
  geom_line(alpha = 0.7) +
  facet_grid(. ~ type) +
  theme_bw()

Note that we can also pull out the predicted individual deviations (the conditional modes, often called BLUPs) from our model, which is useful for a few things. First, we can plot the intercept deviations against the slope deviations - in our model the correlation is 1, and this again shows how strongly linked those terms are:

ranef(lmer_m2_rs)$ID %>% 
  ggplot(., aes(x = `(Intercept)`, y = Temp)) + 
  geom_point() +
  labs(x = "Intercept deviation",
       y = "Slope deviation")

We can also plot the individual deviations across the range of our data, having excluded the main effects (i.e., the overall intercept, the intercept deviation for sex, the temperature slope, and the slope deviation for sex); remember that an individual’s intercept and slope terms are deviations from the main effects specific to the group(s) it belongs to:

df_indivdev <- data_frame(ID = rep(LETTERS[1:20], each=5),
                          Temp = rep(-2:2, times=20))

df_indivranef <- data_frame(ID = LETTERS[1:20],
                            intercept_dev = ranef(lmer_m2_rs)$ID$`(Intercept)`,
                            slope_dev = ranef(lmer_m2_rs)$ID$Temp)

df_indivdev <- left_join(df_indivdev,
                         df_indivranef)

df_indivdev <- df_indivdev %>% 
  mutate(activity_dev = intercept_dev + (Temp * slope_dev))


ggplot(df_indivdev, aes(x = Temp, y = activity_dev, group = ID)) + 
  geom_line(alpha = 0.4) +
  theme_bw()

Again, this gives us a clear idea of how individuals diverge from each other in activity levels as temperature increases.

Multivariate ‘character state’ models

As Alastair outlined in his talk, an alternative to the reaction norms approach (one that places fewer constraints on the variation) is to use multivariate models. If we measure the same individual (or genotype) in different environments, we can treat the observations in each environment as distinct traits (see the literature on genotype-by-environment interactions for more details).

I am going to demonstrate the use of multivariate models with ASreml (via its R interface, the package ASreml-R) - please note that this is proprietary software and requires the purchase of a licence. It is also possible to run these models in a Bayesian framework using the (free) R package MCMCglmm, but that comes with a variety of pitfalls (specifying priors, very different model diagnostics from those of a frequentist framework, etc.) that should not be taken lightly.

Bivariate models

To start, we’ll focus on bivariate models, using data collected at two different points on some x-axis. In a similar ‘experiment’ to that used above, we have observations of activity at different temperatures, but this time we have multiple measurements of each individual at 2 temperatures (the ‘rpt’ variable enables us to include an effect of replicate in our model as well):

df_biv <- data_frame(ID = rep(LETTERS[1:20], each=2*3),
                     Temp = rep(c(-2,2),
                                times = 20*3),
                     rpt = rep(1:3, each = 2, times = 20),
                     Activity = c(7.027966305,
                                  0.798126263,
                                  7.188122866,
                                  3.314630048,
                                  1.953624695,
                                  2.428305728,
                                  3.26266759,
                                  7.477803884,
                                  4.408374603,
                                  10.78330786,
                                  2.131766328,
                                  9.306716391,
                                  0.320621527,
                                  11.96383873,
                                  1.945242057,
                                  12.3357745,
                                  3.994167942,
                                  10.92089477,
                                  4.096705569,
                                  13.25335094,
                                  1.869045518,
                                  18.11121753,
                                  5.610605874,
                                  14.85565745,
                                  3.575332803,
                                  7.485154893,
                                  3.959560622,
                                  10.18421144,
                                  2.986188455,
                                  10.15058083,
                                  2.017471784,
                                  14.2233726,
                                  6.902194513,
                                  11.17766854,
                                  1.708159474,
                                  16.17012624,
                                  2.099032171,
                                  5.658883204,
                                  3.351784669,
                                  9.525300254,
                                  6.165440765,
                                  8.200602436,
                                  -0.89518258,
                                  14.24557331,
                                  2.393642374,
                                  15.91121084,
                                  2.176713271,
                                  12.68116737,
                                  0.556026155,
                                  9.604167453,
                                  0.623671819,
                                  9.653574248,
                                  2.175372711,
                                  10.36929689,
                                  6.188299595,
                                  14.70089732,
                                  4.264782696,
                                  13.60906128,
                                  6.535707847,
                                  16.57685261,
                                  3.251935306,
                                  21.14599397,
                                  1.817391592,
                                  20.2145243,
                                  2.846493758,
                                  17.11529936,
                                  3.243723089,
                                  5.666852401,
                                  -0.501208143,
                                  4.459493857,
                                  4.300845998,
                                  6.819073964,
                                  1.55592126,
                                  13.35554155,
                                  0.896798541,
                                  18.71441819,
                                  0.554233407,
                                  11.21835098,
                                  2.711892076,
                                  15.86828939,
                                  4.815074716,
                                  15.61733602,
                                  3.195117234,
                                  15.76449214,
                                  4.420384815,
                                  9.330148279,
                                  5.394919991,
                                  4.193600935,
                                  3.522952469,
                                  7.647965111,
                                  5.512626817,
                                  6.967841056,
                                  0.702693012,
                                  9.313780442,
                                  3.16738194,
                                  6.713379769,
                                  1.550554389,
                                  13.13522144,
                                  3.182210346,
                                  12.03184941,
                                  -0.737417783,
                                  14.78251914,
                                  0.967028497,
                                  10.92756853,
                                  3.437105304,
                                  14.02874438,
                                  -0.750792406,
                                  16.54984428,
                                  2.328782114,
                                  15.19720215,
                                  1.239347672,
                                  16.16797467,
                                  0.392021867,
                                  15.60226804,
                                  3.358546612,
                                  11.10637054,
                                  0.665981764,
                                  11.57263601,
                                  4.114685373,
                                  10.76310319))

We can plot the data as reaction norms, using the mean of each individual in each environment to draw individual slopes:

ggplot(df_biv, aes(x = Temp, y = Activity, group = ID)) +
  stat_summary(fun.y = "mean", alpha = 0.7, geom = "line") +
  theme_bw()

Just out of interest, we first fit a random regression model as a quick check before we move to the bivariate approach:

lmer_m3_rs <- lmer(Activity ~ Temp + scale(rpt) + (Temp|ID),
                    data = df_biv)

summary(lmer_m3_rs)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Activity ~ Temp + scale(rpt) + (Temp | ID)
##    Data: df_biv
## 
## REML criterion at convergence: 549.9
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -1.97464 -0.59156  0.03384  0.56096  2.38560 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev. Corr
##  ID       (Intercept) 3.391    1.842        
##           Temp        1.356    1.164    0.92
##  Residual             3.323    1.823        
## Number of obs: 120, groups:  ID, 20
## 
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)   7.2452     0.4441  16.313
## Temp          2.1913     0.2733   8.017
## scale(rpt)    0.1170     0.1671   0.700
## 
## Correlation of Fixed Effects:
##            (Intr) Temp 
## Temp       0.816       
## scale(rpt) 0.000  0.000
df_pred_biv1 <- data_frame(ID = rep(LETTERS[1:20], each=2),
                           Temp = rep(c(-2,2),
                                      times = 20),
                           rpt = rep(0, times = 20*2))

df_pred_biv1$fit <- predict(lmer_m3_rs, newdata = df_pred_biv1,
                            re.form = NULL)

gg_m3_means <- ggplot(df_biv, aes(x = Temp, y = Activity, group = ID)) +
  stat_summary(fun.y = "mean", alpha = 0.7, geom = "line") +
  theme_bw() +
  ggtitle("Individual means")

gg_m3_fits <- ggplot(df_pred_biv1, aes(x = Temp, y = fit, group = ID)) +
  geom_line(alpha = 0.7) +
  theme_bw() +
  ggtitle("Individual fits")

grid.arrange(gg_m3_means, gg_m3_fits, ncol = 2)

We can fit the same type of model in asreml, with similar diagnostics, and pull out things like the log-likelihood of the model, the fixed effects terms, and the variance components:

require(asreml)
require(nadiv)

asr_rr_mod1 <- asreml(Activity ~ Temp + scale(rpt),
                           random =~ str(~ ID + ID:Temp, 
                                         ~us(2, init = c(0.1, 0.1, 0.1)):id(20)),
                           rcov =~ units,
                           data = df_biv,
                           maxiter = 20)
## ASReml: Sun Aug  7 15:26:39 2016
## 
##      LogLik         S2      DF      wall     cpu
##    -177.7520      4.5263   117  15:26:39     0.0
##    -172.9031      4.1805   117  15:26:39     0.0
##    -169.3505      3.7896   117  15:26:39     0.0
##    -167.6278      3.4505   117  15:26:39     0.0
##    -167.4527      3.3392   117  15:26:39     0.0
##    -167.4492      3.3233   117  15:26:39     0.0
##    -167.4492      3.3229   117  15:26:39     0.0
##    -167.4492      3.3229   117  15:26:39     0.0
## 
## Finished on: Sun Aug  7 15:26:39 2016
##  
## LogLikelihood Converged
plot(asr_rr_mod1)

hist(resid(asr_rr_mod1))

qqnorm(resid(asr_rr_mod1), main="Q-Q plot for residuals")

asr_rr_mod1$loglik
## [1] -167.4492
summary(asr_rr_mod1, all = T)$coef.fixed
##              solution std error    z ratio
## scale(rpt)  0.1170152 0.1671033  0.7002567
## Temp        2.1913278 0.2733431  8.0167667
## (Intercept) 7.2451613 0.4441303 16.3131440
summary(asr_rr_mod1)$varcomp
##                          gamma component std.error  z.ratio constraint
## ID+ID:Temp!us(2).1:1 1.0205600  3.391218 1.2829678 2.643260   Positive
## ID+ID:Temp!us(2).2:1 0.5963646  1.981659 0.7189961 2.756147   Positive
## ID+ID:Temp!us(2).2:2 0.4080397  1.355875 0.4853245 2.793749   Positive
## R!variance           1.0000000  3.322899 0.5287113 6.284903   Positive
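
As a quick sanity check (a sketch; the numbers should agree up to rounding), we can set the lme4 and asreml variance components side by side:

# lme4's estimates of the same (co)variance components...
VarCorr(lmer_m3_rs)
# ...against asreml's 'component' column above: ID intercept variance ~3.39,
#  intercept-slope covariance ~1.98, slope variance ~1.36, residual ~3.32
summary(asr_rr_mod1)$varcomp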

We can fit a similar model, but one that also partitions residual variance into two components (one for each environment), enabling us to see whether residual variation is different across environments:

(I can’t get this particular specification working just now, so the code below is shown without output.)

df_biv_fac <- df_biv %>% 
  mutate(charTemp = ifelse(Temp == 2, "Hot", "Cold")) %>% 
   spread(charTemp, Activity) %>% 
   gather(charTemp, Activity, Cold:Hot)

asr_rr_mod2 <- asreml(Activity ~ Temp + scale(rpt),
                           random =~ str(~ ID + ID:Temp, 
                                         ~us(2, init = c(0.1, 0.1, 0.1)):id(20)),
                           rcov =~ units:idh(charTemp, init = c(0.1,0.1)),
                           data = df_biv_fac,
                           maxiter = 20)

As the quick comparison above showed, these two methods give us the same model. ASreml also enables us to fit more complex models, including bivariate ones. We first need to reshape our data so there are 2 columns of observations - one for each temperature. Each row will then hold an individual’s pair of observations within a single repeat.

df_biv_wide <- df_biv %>% 
  spread(., Temp, Activity) %>% 
  rename(Cold = `-2`,
         Hot = `2`)

head(df_biv_wide)
## # A tibble: 6 x 4
##      ID   rpt     Cold        Hot
##   <chr> <int>    <dbl>      <dbl>
## 1     A     1 7.027966  0.7981263
## 2     A     2 7.188123  3.3146300
## 3     A     3 1.953625  2.4283057
## 4     B     1 3.262668  7.4778039
## 5     B     2 4.408375 10.7833079
## 6     B     3 2.131766  9.3067164

We can fit a succession of bivariate models to test our hypotheses:

No random effect, heterogeneous residual variances

asr_biv_m1 <- asreml(cbind(Cold, Hot) ~ trait + scale(rpt),
                     #random =~ ID,
                     rcov =~ units:diag(trait),
                     data = df_biv_wide,
                     maxiter = 20)
## ASReml: Sun Aug  7 15:26:41 2016
## 
##      LogLik         S2      DF      wall     cpu
##    -480.5002      1.0000   117  15:26:41     0.0
##    -396.8695      1.0000   117  15:26:41     0.0
##    -305.1014      1.0000   117  15:26:41     0.0
##    -231.2726      1.0000   117  15:26:41     0.0
##    -203.4826      1.0000   117  15:26:41     0.0
##    -195.2218      1.0000   117  15:26:41     0.0
##    -193.7973      1.0000   117  15:26:41     0.0
##    -193.7230      1.0000   117  15:26:41     0.0
##    -193.7227      1.0000   117  15:26:41     0.0
##    -193.7227      1.0000   117  15:26:41     0.0
## 
## Finished on: Sun Aug  7 15:26:41 2016
##  
## LogLikelihood Converged
summary(asr_biv_m1, all = T)$coef.fixed
##               solution std error    z ratio
## scale(rpt)  0.02610451 0.2406847  0.1084593
## trait_Cold  2.86250569 0.2626478 10.8986475
## trait_Hot  11.62781682 0.5717639 20.3367443
summary(asr_biv_m1)$varcomp
##                      gamma component std.error  z.ratio constraint
## R!variance        1.000000  1.000000        NA       NA      Fixed
## R!trait.Cold.var  4.139032  4.139032 0.7676327 5.391943   Positive
## R!trait.Hot.var  19.614839 19.614839 3.6175951 5.422066   Positive
asr_biv_m1$loglik
## [1] -193.7227

Individual identity random effect, homogeneous variance

We include a random effect of individual identity, but constrain the among-individual variance to be the same in both environments:

asr_biv_m2 <- asreml(cbind(Cold, Hot) ~ trait + scale(rpt),
                     random =~ ID:idv(trait),
                     rcov =~ units:diag(trait),
                     data = df_biv_wide,
                     maxiter = 20)
## ASReml: Sun Aug  7 15:26:41 2016
## 
##      LogLik         S2      DF      wall     cpu
##    -425.0107      1.0000   117  15:26:41     0.0
##    -353.9497      1.0000   117  15:26:41     0.0
##    -275.7224      1.0000   117  15:26:41     0.0
##    -212.1954      1.0000   117  15:26:41     0.0
##    -187.4707      1.0000   117  15:26:41     0.0
##    -179.0881      1.0000   117  15:26:41     0.0
##    -177.0242      1.0000   117  15:26:41     0.0
##    -176.8019      1.0000   117  15:26:41     0.0
##    -176.7963      1.0000   117  15:26:41     0.0
##    -176.7962      1.0000   117  15:26:41     0.0
##    -176.7962      1.0000   117  15:26:41     0.0
## 
## Finished on: Sun Aug  7 15:26:41 2016
##  
## LogLikelihood Converged
summary(asr_biv_m2, all = T)$coef.fixed
##              solution std error    z ratio
## scale(rpt)  0.1078807 0.1679187  0.6424578
## trait_Cold  2.8625057 0.6972540  4.1053988
## trait_Hot  11.6278168 0.7027170 16.5469410
summary(asr_biv_m2)$varcomp
##                       gamma component std.error  z.ratio constraint
## ID:trait!trait.var 8.685521  8.685521 2.2685511 3.828664   Positive
## R!variance         1.000000  1.000000        NA       NA      Fixed
## R!trait.Cold.var   3.113528  3.113528 0.6881303 4.524620   Positive
## R!trait.Hot.var    3.572354  3.572354 0.8243115 4.333743   Positive
asr_biv_m2$loglik
## [1] -176.7962
0.5*pchisq(2*(asr_biv_m2$loglik - asr_biv_m1$loglik), 
       1, lower.tail = FALSE)
## [1] 2.972048e-09

This test shows statistically significant among-individual variation. Note that because the likelihood ratio test here concerns a single variance component, whose null value (zero) sits on the boundary of the parameter space, we can use the method of Visscher (2006): the test statistic follows a 50:50 mixture of chi-square distributions with 0 and 1 df - essentially cutting the p-value in half.
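
If you’re running several of these tests, a tiny helper function (a sketch; the name ‘lrt_var1’ is mine) saves repeating the arithmetic:

# Boundary-corrected LRT p-value for a single variance component
#  (50:50 mixture of chi-square(0) and chi-square(1); Visscher 2006)
lrt_var1 <- function(ll_full, ll_reduced) {
  0.5 * pchisq(2 * (ll_full - ll_reduced), df = 1, lower.tail = FALSE)
}

lrt_var1(asr_biv_m2$loglik, asr_biv_m1$loglik)
## [1] 2.972048e-09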

Allow heterogeneous individual variances across environments

asr_biv_m3 <- asreml(cbind(Cold, Hot) ~ trait + scale(rpt),
                     random =~ ID:idh(trait),
                     rcov =~ units:diag(trait),
                     data = df_biv_wide,
                     maxiter = 20)
## ASReml: Sun Aug  7 15:26:41 2016
## 
##      LogLik         S2      DF      wall     cpu
##    -425.0107      1.0000   117  15:26:41     0.0
##    -353.7575      1.0000   117  15:26:41     0.0
##    -275.0431      1.0000   117  15:26:41     0.0
##    -210.0707      1.0000   117  15:26:41     0.0
##    -182.5338      1.0000   117  15:26:41     0.0
##    -171.5663      1.0000   117  15:26:41     0.0
##    -167.9978      1.0000   117  15:26:41     0.0
##    -167.2720      1.0000   117  15:26:41     0.0
##    -167.2180      1.0000   117  15:26:41     0.0
##    -167.2176      1.0000   117  15:26:41     0.0
##    -167.2176      1.0000   117  15:26:41     0.0
## 
## Finished on: Sun Aug  7 15:26:41 2016
##  
## LogLikelihood Converged
summary(asr_biv_m3, all = T)$coef.fixed
##             solution std error    z ratio
## scale(rpt)  0.114067 0.1677592  0.6799452
## trait_Cold  2.862506 0.3158887  9.0617550
## trait_Hot  11.627817 0.9446954 12.3085360
summary(asr_biv_m3)$varcomp
##                          gamma  component std.error  z.ratio constraint
## ID:trait!trait.Cold  0.9152642  0.9152642 0.6918374 1.322947   Positive
## ID:trait!trait.Hot  16.7141744 16.7141744 5.7966263 2.883431   Positive
## R!variance           1.0000000  1.0000000        NA       NA      Fixed
## R!trait.Cold.var     3.2413464  3.2413464 0.7310989 4.433526   Positive
## R!trait.Hot.var      3.4044360  3.4044360 0.7676449 4.434910   Positive
asr_biv_m3$loglik
## [1] -167.2176
0.5*pchisq(2*(asr_biv_m3$loglik - asr_biv_m2$loglik), 
       1, lower.tail = FALSE)
## [1] 6.01867e-06

We have statistically significant differences in individual variance across environments - this tells us that we must have variance in slopes, otherwise known as individual differences in plasticity / individual x environment interactions (IxE).
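
As a sketch of how the two framings connect, these character-state variances should map back onto the random-regression components from lmer_m3_rs via V(x) = V_int + 2*x*Cov(int, slope) + x^2*V_slope, evaluated at Temp = -2 and 2 (expect agreement only up to rounding):

# Pull the (co)variance components from the earlier random regression
vc3 <- as.data.frame(VarCorr(lmer_m3_rs))
v_i  <- vc3$vcov[which(vc3$grp == "ID" & vc3$var1 == "(Intercept)" & is.na(vc3$var2))]
v_s  <- vc3$vcov[which(vc3$grp == "ID" & vc3$var1 == "Temp" & is.na(vc3$var2))]
c_is <- vc3$vcov[which(vc3$grp == "ID" & vc3$var1 == "(Intercept)" & vc3$var2 == "Temp")]

v_i + 2 * (-2) * c_is + (-2)^2 * v_s  # ~0.9: the 'Cold' variance in asr_biv_m3
v_i + 2 * ( 2) * c_is + ( 2)^2 * v_s  # ~16.7: the 'Hot' variance in asr_biv_m3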

Allow individual covariances across environments

asr_biv_m4 <- asreml(cbind(Cold, Hot) ~ trait + scale(rpt),
                     random =~ ID:us(trait, init = c(1,
                                                     0.1,1)),
                     rcov =~ units:diag(trait),
                     data = df_biv_wide,
                     maxiter = 20)
## ASReml: Sun Aug  7 15:26:41 2016
## 
##      LogLik         S2      DF      wall     cpu
##    -263.4972      1.0000   117  15:26:41     0.0
##    -233.8367      1.0000   117  15:26:41     0.0 (1 restrained)
##    -201.9446      1.0000   117  15:26:41     0.0
##    -177.5178      1.0000   117  15:26:41     0.0
##    -168.9568      1.0000   117  15:26:41     0.0
##    -166.4789      1.0000   117  15:26:41     0.0
##    -166.0669      1.0000   117  15:26:41     0.0
##    -166.0469      1.0000   117  15:26:41     0.0
##    -166.0468      1.0000   117  15:26:41     0.0
##    -166.0468      1.0000   117  15:26:41     0.0
## 
## Finished on: Sun Aug  7 15:26:41 2016
##  
## LogLikelihood Converged
summary(asr_biv_m4, all = T)$coef.fixed
##             solution std error    z ratio
## scale(rpt)  0.114067 0.1677592  0.6799452
## trait_Cold  2.862506 0.3158887  9.0617550
## trait_Hot  11.627817 0.9446954 12.3085360
summary(asr_biv_m4)$varcomp
##                               gamma  component std.error   z.ratio
## ID:trait!trait.Cold:Cold  0.9152642  0.9152642 0.6918374  1.322947
## ID:trait!trait.Hot:Cold  -2.0322814 -2.0322814 1.4464407 -1.405022
## ID:trait!trait.Hot:Hot   16.7141744 16.7141744 5.7966264  2.883431
## R!variance                1.0000000  1.0000000        NA        NA
## R!trait.Cold.var          3.2413464  3.2413464 0.7310989  4.433526
## R!trait.Hot.var           3.4044360  3.4044360 0.7676449  4.434910
##                          constraint
## ID:trait!trait.Cold:Cold   Positive
## ID:trait!trait.Hot:Cold    Positive
## ID:trait!trait.Hot:Hot     Positive
## R!variance                    Fixed
## R!trait.Cold.var           Positive
## R!trait.Hot.var            Positive
asr_biv_m4$loglik
## [1] -166.0468
0.5*pchisq(2*(asr_biv_m4$loglik - asr_biv_m3$loglik), 
       1, lower.tail = FALSE)
## [1] 0.06298382

Allowing a covariance across environments does not significantly improve the model fit. We can take a look at the among-individual cross-environment correlation here:

nadiv:::pin(asr_biv_m4, I_E_cor ~ V2/sqrt(V1*V3))
##           Estimate        SE
## I_E_cor -0.5195988 0.3081622

…and also look at the variance components using this model:

# Individual variance for cold
nadiv:::pin(asr_biv_m4, v_cold ~ V1)
##         Estimate        SE
## v_cold 0.9152642 0.6918374
# Individual variance for hot
nadiv:::pin(asr_biv_m4, v_hot ~ V3)
##       Estimate       SE
## v_hot 16.71417 5.796626
# Individual covariance across environments
nadiv:::pin(asr_biv_m4, cov_cold_hot ~ V2)
##               Estimate       SE
## cov_cold_hot -2.032281 1.446441
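
These components also imply an among-individual variance in slopes: with measurements at temperatures -2 and 2, an individual's slope is (Hot - Cold)/4, so its variance is (V_Cold + V_Hot - 2*cov)/16. As a sketch (output not shown), we can estimate this - with a delta-method standard error - directly from the model:

# Implied among-individual variance in slopes:
#  slope = (Hot - Cold)/4, so
#  V_slope = (V_Cold + V_Hot - 2*cov_cold_hot)/16
nadiv:::pin(asr_biv_m4, v_slope ~ (V1 + V3 - 2*V2)/16)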

We can also get Wald F-tests for our fixed effects, showing significant differences across environments ('trait'), but no effect of repeat number ('rpt'):

wald.asreml(asr_biv_m4, denDF = "numeric", ssType = "conditional")$Wald
## ASReml: Sun Aug  7 20:15:08 2016
## 
##      LogLik         S2      DF      wall     cpu
##    -166.0468      1.0000   117  20:15:08     0.0
##    -166.0468      1.0000   117  20:15:08     0.0
##    -166.0468      1.0000   117  20:15:08     0.0
##    -166.0468      1.0000   117  20:15:08     0.0
## 
## Finished on: Sun Aug  7 20:15:08 2016
##  
## LogLikelihood Converged
##            Df denDF    F.inc    F.con Margin           Pr
## trait       2    18 165.9000 165.9000      A 2.533300e-12
## scale(rpt)  1    79   0.4623   0.4623      A 4.985279e-01

Plot the bivariate model
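
To visualise the bivariate character-state model, we can combine each individual's predicted deviation with the relevant environment mean, and connect the two environments with a line. The following is a sketch using the same approach as the trivariate plotting code later in this tutorial (df_biv_fix and df_biv_fit are names introduced here for illustration):

# Get the main effects (intercepts for the two environments)
df_biv_fix <- data_frame(Name = rownames(coef(asr_biv_m4)$fixed),
                         MainEffect = as.numeric(coef(asr_biv_m4)$fixed)) %>% 
  filter(Name != "scale(rpt)")

# Get individual deviations for each environment,
#  add them to the environment means, and convert
#  the discrete environments to a continuous axis
df_biv_fit <- data_frame(Name = rownames(coef(asr_biv_m4)$random),
                         Value = as.numeric(coef(asr_biv_m4)$random)) %>% 
  separate(Name, c("ID","Name"), sep = ":") %>% 
  left_join(df_biv_fix) %>% 
  mutate(fitVal = Value + MainEffect,
         Temp = ifelse(Name == "trait_Cold", -2, 2))

# Plot fitted values, one line per individual
ggplot(df_biv_fit, aes(x = Temp, y = fitVal, group = ID)) +
  geom_line(alpha = 0.6) +
  theme_bw()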

Trivariate models

We can, of course, extend the multivariate models to any number of environments modelled as separate traits (as long as we have enough data)…

df_tri <- data_frame(ID = rep(LETTERS[1:20], each=3*3),
                     Temp = rep(c(-2,0,2),
                                times = 20*3),
                     rpt = rep(1:3, each = 3, times = 20),
                     Activity = c(1.734408407,
                                  10.8518351,
                                  19.01776552,
                                  -0.779727874,
                                  13.33302343,
                                  18.90546812,
                                  -2.096238589,
                                  11.51624262,
                                  18.64733043,
                                  3.994156952,
                                  11.20284201,
                                  17.40811314,
                                  3.905630926,
                                  9.823659049,
                                  20.73347271,
                                  3.407771876,
                                  10.70812113,
                                  18.33596706,
                                  2.716463471,
                                  4.741070036,
                                  5.543294978,
                                  5.381839838,
                                  5.119849361,
                                  0.671899897,
                                  4.831810559,
                                  -0.313438371,
                                  0.52163211,
                                  2.749687999,
                                  9.901597118,
                                  13.17635767,
                                  3.843085196,
                                  8.999046773,
                                  14.86806844,
                                  5.242673033,
                                  11.66184964,
                                  15.07068855,
                                  6.796607439,
                                  8.882205994,
                                  12.07438667,
                                  4.303687291,
                                  9.171711362,
                                  11.35734837,
                                  7.167887226,
                                  11.38908371,
                                  11.86301916,
                                  6.217811325,
                                  5.991432483,
                                  5.747987304,
                                  6.3017589,
                                  4.827915853,
                                  9.501141911,
                                  6.018287983,
                                  6.659050911,
                                  3.919451184,
                                  1.144663174,
                                  7.19361025,
                                  14.83107866,
                                  4.402629066,
                                  5.54766926,
                                  12.67036933,
                                  1.720621387,
                                  8.615409421,
                                  12.28293717,
                                  5.826726379,
                                  4.302339372,
                                  -0.140776494,
                                  6.354485134,
                                  1.47136968,
                                  3.72227158,
                                  3.671226182,
                                  2.883954055,
                                  1.486686856,
                                  7.848441185,
                                  5.800266309,
                                  3.901756545,
                                  6.197707246,
                                  5.44195192,
                                  3.03663444,
                                  5.931099644,
                                  8.503845719,
                                  4.714831943,
                                  -1.010232245,
                                  8.5313772,
                                  16.82430147,
                                  2.707460021,
                                  9.731994924,
                                  9.317043423,
                                  1.739917733,
                                  8.494064748,
                                  13.20823743,
                                  1.269367593,
                                  4.215954035,
                                  14.31236474,
                                  0.183629661,
                                  8.238479263,
                                  11.52353435,
                                  -1.804231448,
                                  8.01217673,
                                  16.02104352,
                                  1.70669616,
                                  6.07901528,
                                  10.31621216,
                                  2.564136075,
                                  6.353100594,
                                  10.54512055,
                                  2.156652723,
                                  8.137800988,
                                  13.01008405,
                                  2.11395889,
                                  5.416020573,
                                  10.78597345,
                                  1.821471832,
                                  4.58013858,
                                  7.233833757,
                                  -1.61086634,
                                  4.916411284,
                                  10.4099255,
                                  1.249850695,
                                  7.111601587,
                                  12.94749999,
                                  2.980164763,
                                  9.187818247,
                                  13.35888183,
                                  5.053021737,
                                  8.625213579,
                                  15.22720206,
                                  4.706096254,
                                  7.230418387,
                                  9.802586769,
                                  3.943343009,
                                  5.849571717,
                                  7.085007121,
                                  2.49150376,
                                  2.395988881,
                                  7.660671758,
                                  2.625794541,
                                  6.426118729,
                                  9.085534107,
                                  4.990916789,
                                  8.525446735,
                                  11.99362704,
                                  1.948093429,
                                  6.681448371,
                                  12.65878585,
                                  4.442896346,
                                  7.035676352,
                                  9.558942558,
                                  0.675438052,
                                  5.803297854,
                                  11.33221045,
                                  6.254799673,
                                  6.031645038,
                                  7.606488927,
                                  4.649854752,
                                  3.451479118,
                                  5.665786028,
                                  4.283572187,
                                  3.749418039,
                                  6.67064692,
                                  3.02567717,
                                  2.547549008,
                                  5.848982616,
                                  1.461116297,
                                  2.837819294,
                                  11.14989349,
                                  2.358338332,
                                  2.557854525,
                                  8.294501637,
                                  -1.278274936,
                                  5.024046668,
                                  10.06311346,
                                  3.059150852,
                                  9.620010676,
                                  13.82552187,
                                  3.031516446,
                                  9.661464895,
                                  15.01527742,
                                  1.958200605,
                                  6.130036109,
                                  13.29662658))
ggplot(df_tri, aes(x = Temp, y = Activity, group = ID)) +
  stat_summary(fun.y = "mean", alpha = 0.7, geom = "line") +
  theme_bw()

Of course, you would want to go through testing nested models as above, but for the purposes of this tutorial we’ll skip straight to the model where we allow all variances to differ, and estimate all covariances between them:

Allow individual covariances across environments

df_tri_wide <- df_tri %>% 
  spread(., Temp, Activity) %>% 
  rename(Cold = `-2`,
         Medium = `0`,
         Hot = `2`)

head(df_tri_wide)
## # A tibble: 6 x 5
##      ID   rpt       Cold    Medium      Hot
##   <chr> <int>      <dbl>     <dbl>    <dbl>
## 1     A     1  1.7344084 10.851835 19.01777
## 2     A     2 -0.7797279 13.333023 18.90547
## 3     A     3 -2.0962386 11.516243 18.64733
## 4     B     1  3.9941570 11.202842 17.40811
## 5     B     2  3.9056309  9.823659 20.73347
## 6     B     3  3.4077719 10.708121 18.33597
asr_tri_m4 <- asreml(cbind(Cold, Medium, Hot) ~ trait + scale(rpt),
                     random =~ ID:us(trait, init = c(1,
                                                     0.1,1,
                                                     0.1,0.1,1)),
                     rcov =~ units:diag(trait),
                     data = df_tri_wide,
                     maxiter = 20)
## ASReml: Mon Aug  8 00:25:31 2016
## 
##      LogLik         S2      DF      wall     cpu
##    -434.5261      1.0000   176  00:25:31     0.0 (1 restrained)
##  US matrix updates modified 1 times to remain positive definite.
##    -370.1786      1.0000   176  00:25:31     0.0 (6 restrained)
## Logliklihood decreased to    -943.70 - trying again with reduced updates
##    -309.7338      1.0000   176  00:25:31     0.0
##    -299.1378      1.0000   176  00:25:31     0.0
##    -274.0843      1.0000   176  00:25:31     0.0
##  US matrix updates modified 1 times to remain positive definite.
##    -248.6517      1.0000   176  00:25:31     0.0 (6 restrained)
##  US matrix updates modified 1 times to remain positive definite.
##    -227.2540      1.0000   176  00:25:31     0.0 (6 restrained)
##  US matrix updates modified 1 times to remain positive definite.
##    -226.0394      1.0000   176  00:25:31     0.0 (6 restrained)
##  US matrix updates modified 1 times to remain positive definite.
##    -225.9972      1.0000   176  00:25:31     0.0 (6 restrained)
##  US matrix updates modified 1 times to remain positive definite.
##    -225.9959      1.0000   176  00:25:31     0.0 (6 restrained)
##  US matrix updates modified 1 times to remain positive definite.
##    -225.9949      1.0000   176  00:25:31     0.0 (6 restrained)
##  US matrix updates modified 1 times to remain positive definite.
##    -225.9941      1.0000   176  00:25:31     0.0 (6 restrained)
## US variance structures were modified in 8 instances to make them positive definite
## 
## Finished on: Mon Aug  8 00:25:31 2016
##  
## LogLikelihood Converged
summary(asr_tri_m4, all = T)$coef.fixed
##                 solution std error    z ratio
## scale(rpt)   -0.07855768 0.1179770 -0.6658729
## trait_Cold    3.17640388 0.4589100  6.9216271
## trait_Medium  6.89029954 0.5891755 11.6948174
## trait_Hot    10.59207744 1.0779970  9.8257019
summary(asr_tri_m4)$varcomp
##                                  gamma component std.error    z.ratio
## ID:trait!trait.Cold:Cold      3.463594  3.463594 1.3937341  2.4851182
## ID:trait!trait.Medium:Cold   -1.052695 -1.052695 1.2798952 -0.8224854
## ID:trait!trait.Medium:Medium  6.214811  6.214811 2.2783452  2.7277739
## ID:trait!trait.Hot:Cold      -5.426782 -5.426782 2.5977441 -2.0890364
## ID:trait!trait.Hot:Medium    10.592671 10.592671 3.7996882  2.7877737
## ID:trait!trait.Hot:Hot       22.182388 22.182388 7.5514323  2.9375073
## R!variance                    1.000000  1.000000        NA         NA
## R!trait.Cold.var              2.245247  2.245247 0.4959697  4.5269848
## R!trait.Medium.var            2.183085  2.183085 0.4769948  4.5767471
## R!trait.Hot.var               3.181894  3.181894 0.7005189  4.5421959
##                              constraint
## ID:trait!trait.Cold:Cold              ?
## ID:trait!trait.Medium:Cold            ?
## ID:trait!trait.Medium:Medium          ?
## ID:trait!trait.Hot:Cold               ?
## ID:trait!trait.Hot:Medium             ?
## ID:trait!trait.Hot:Hot                ?
## R!variance                        Fixed
## R!trait.Cold.var               Positive
## R!trait.Medium.var             Positive
## R!trait.Hot.var                Positive

We can assemble our variance components into a variance-covariance matrix (shown here with variances on the diagonal, correlations above the diagonal, and covariances below it):

vcv_tri <- matrix(data = c(round(as.numeric(nadiv:::pin(asr_tri_m4, v_c ~ V1)$Estimate),2),
                           round(as.numeric(nadiv:::pin(asr_tri_m4, cov_cm ~ V2)$Estimate),2),
                           round(as.numeric(nadiv:::pin(asr_tri_m4, cov_ch ~ V4)$Estimate),2),
                           round(as.numeric(nadiv:::pin(asr_tri_m4, cor_cm ~ V2/(sqrt(V1*V3)))$Estimate),2),
                           round(as.numeric(nadiv:::pin(asr_tri_m4, v_m ~ V3)$Estimate),2),
                           round(as.numeric(nadiv:::pin(asr_tri_m4, cov_mh ~ V5)$Estimate),2),
                           round(as.numeric(nadiv:::pin(asr_tri_m4, cor_ch ~ V4/(sqrt(V1*V6)))$Estimate),2),
                           round(as.numeric(nadiv:::pin(asr_tri_m4, cor_mh ~ V5/(sqrt(V3*V6)))$Estimate),2),
                           round(as.numeric(nadiv:::pin(asr_tri_m4, v_h ~ V6)$Estimate),2)),
                  nrow = 3,
                  dimnames = list(c("Cold","Medium","Hot"),
                                  c("Cold","Medium","Hot")))

kable(vcv_tri)
         Cold   Medium     Hot
Cold     3.46    -0.23   -0.62
Medium  -1.05     6.21    0.90
Hot     -5.43    10.59   22.18

We can also test specific hypotheses about components of our matrix by fitting separate models that constrain the correlations (for example), then testing these against our unstructured model. The medium-hot correlation looks strongly positive, so let's first test whether it is significantly greater than zero:

# Remind ourselves of correlation and SE
nadiv:::pin(asr_tri_m4, cor_mh ~ V5/(sqrt(V3*V6)))
##         Estimate         SE
## cor_mh 0.9021679 0.06748508
##
# Restrict among-individual correlation between medium / hot to 0.
#  corgh takes the correlations first (in the order Medium:Cold,
#  Hot:Cold, Hot:Medium), followed by the variances; the constraint
#  codes are U = unconstrained, F = fixed, P = positive
##
init_tc0 <- c(0.1,0.1,0,
              1,1,1)
names(init_tc0) <- c("U","U","F",
                     "P","P","P")

asr_tri_tc_0 <- asreml(cbind(Cold, Medium, Hot) ~ trait + scale(rpt),
                     random =~ ID:corgh(trait,
                                        init = init_tc0),
                     rcov =~ units:diag(trait),
                     data = df_tri_wide,
                     maxiter = 50)
## ASReml: Mon Aug  8 00:25:32 2016
## 
##      LogLik         S2      DF      wall     cpu
##    -445.1211      1.0000   176  00:25:32     0.0
##    -383.7975      1.0000   176  00:25:32     0.0
##    -317.4090      1.0000   176  00:25:32     0.0 (1 restrained)
##    -272.6029      1.0000   176  00:25:32     0.0
##    -249.4413      1.0000   176  00:25:32     0.0
##    -241.1078      1.0000   176  00:25:32     0.0
##    -238.5914      1.0000   176  00:25:32     0.0
##    -237.9368      1.0000   176  00:25:32     0.0
##    -237.6918      1.0000   176  00:25:32     0.0
##    -237.5587      1.0000   176  00:25:32     0.0
## 
## Finished on: Mon Aug  8 00:25:32 2016
## 
summary(asr_tri_tc_0)$varcomp
##                                            gamma  component  std.error
## ID:trait!trait.Medium:!trait.Cold.cor  0.5936993  0.5936993 0.10264117
## ID:trait!trait.Hot:!trait.Cold.cor    -0.8101260 -0.8101260 0.07369825
## ID:trait!trait.Hot:!trait.Medium.cor   0.0000000  0.0000000         NA
## ID:trait!trait.Cold                   12.3807714 12.3807714 3.63827077
## ID:trait!trait.Medium                  6.0565242  6.0565242 2.21806990
## ID:trait!trait.Hot                    22.0741634 22.0741634 7.50847842
## R!variance                             1.0000000  1.0000000         NA
## R!trait.Cold.var                       2.3288948  2.3288948 0.52400245
## R!trait.Medium.var                     2.3157785  2.3157785 0.52108949
## R!trait.Hot.var                        3.3039264  3.3039264 0.74119396
##                                          z.ratio    constraint
## ID:trait!trait.Medium:!trait.Cold.cor   5.784222 Unconstrained
## ID:trait!trait.Hot:!trait.Cold.cor    -10.992472 Unconstrained
## ID:trait!trait.Hot:!trait.Medium.cor          NA         Fixed
## ID:trait!trait.Cold                     3.402927      Positive
## ID:trait!trait.Medium                   2.730538      Positive
## ID:trait!trait.Hot                      2.939898      Positive
## R!variance                                    NA         Fixed
## R!trait.Cold.var                        4.444435      Positive
## R!trait.Medium.var                      4.444109      Positive
## R!trait.Hot.var                         4.457573      Positive
0.5*pchisq(2*(asr_tri_m4$loglik - asr_tri_tc_0$loglik), 
       1, lower.tail = FALSE)
## [1] 8.173232e-07

So, the among-individual correlation between the medium and hot environments is significantly greater than 0. We can also test whether it is significantly different from 1 - if it is, then we can say that there is still statistically significant variation in how individuals respond across those environments:

##
# Restrict among-individual correlation between medium / hot to 1
# - Fixed at 0.999 rather than exactly 1, since a correlation of
#   exactly 1 would make the among-individual matrix singular
##
init_tc1 <- c(0.1,0.1,0.999,
              1,1,1)
names(init_tc1) <- c("U","U","F",
                     "P","P","P")

asr_tri_tc_1 <- asreml(cbind(Cold, Medium, Hot) ~ trait + scale(rpt),
                     random =~ ID:corgh(trait,
                                        init = init_tc1),
                     rcov =~ units:idh(trait),
                     data = df_tri_wide,
                     maxiter = 50)
## ASReml: Mon Aug  8 00:25:32 2016
## 
##      LogLik         S2      DF      wall     cpu
##    -440.9567      1.0000   176  00:25:32     0.0
## 
## Finished on: Mon Aug  8 00:25:32 2016
## 
summary(asr_tri_tc_1)$varcomp
##                                            gamma  component std.error
## ID:trait!trait.Medium:!trait.Cold.cor 0.38703761 0.38703761 0.2106100
## ID:trait!trait.Hot:!trait.Cold.cor    0.01813928 0.01813928 0.1401287
## ID:trait!trait.Hot:!trait.Medium.cor  0.99900000 0.99900000        NA
## ID:trait!trait.Cold                   1.07165773 1.07165773 0.3772048
## ID:trait!trait.Medium                 1.57739352 1.57739352 0.2706511
## ID:trait!trait.Hot                    1.99171588 1.99171588 0.1832653
## R!variance                            1.00000000 1.00000000        NA
## R!trait.Cold                          1.60046848 1.60046848 0.2126033
## R!trait.Medium                        1.61050683 1.61050683 0.1896546
## R!trait.Hot                           1.78036334 1.78036334 0.1746606
##                                          z.ratio    constraint
## ID:trait!trait.Medium:!trait.Cold.cor  1.8376981 Unconstrained
## ID:trait!trait.Hot:!trait.Cold.cor     0.1294473 Unconstrained
## ID:trait!trait.Hot:!trait.Medium.cor          NA         Fixed
## ID:trait!trait.Cold                    2.8410501      Positive
## ID:trait!trait.Medium                  5.8281436      Positive
## ID:trait!trait.Hot                    10.8679356      Positive
## R!variance                                    NA         Fixed
## R!trait.Cold                           7.5279578      Positive
## R!trait.Medium                         8.4917886      Positive
## R!trait.Hot                           10.1932746      Positive
0.5*pchisq(2*(asr_tri_m4$loglik - asr_tri_tc_1$loglik), 
       1, lower.tail = FALSE)
## [1] 4.997723e-68

This likelihood ratio test shows that the unstructured model is significantly better: while there is a strong positive correlation across the medium and hot environments, there are still statistically significant differences in individual plasticity across them. We could of course run similar models to test the other 2 correlations (although both are negative, so we would test against 0 and -1 models), following the same pattern - see the sketch below.
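
For example, a test of whether the cold-hot correlation differs from zero would simply move the fixed element of the init vector (a sketch only; init_ch0 and asr_tri_ch_0 are names introduced here, and the model is not run in this tutorial):

# Restrict among-individual correlation between cold / hot to 0
#  (the second element of corgh's correlation block)
init_ch0 <- c(0.1,0,0.1,
              1,1,1)
names(init_ch0) <- c("U","F","U",
                     "P","P","P")

asr_tri_ch_0 <- asreml(cbind(Cold, Medium, Hot) ~ trait + scale(rpt),
                       random =~ ID:corgh(trait,
                                          init = init_ch0),
                       rcov =~ units:diag(trait),
                       data = df_tri_wide,
                       maxiter = 50)

# Halved-df likelihood ratio test against the unstructured model
0.5*pchisq(2*(asr_tri_m4$loglik - asr_tri_ch_0$loglik), 
       1, lower.tail = FALSE)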

Plot the multivariate model

# Get the main effects (intercepts for different environments)
df_tri_fix <- data_frame(Name = rownames(coef(asr_tri_m4)$fixed),
                         MainEffect = as.numeric(coef(asr_tri_m4)$fixed)) %>% 
  filter(Name != "scale(rpt)")

# Get individual deviations from each environment
df_tri_fit <- data_frame(Name = rownames(coef(asr_tri_m4)$random),
                         Value = as.numeric(coef(asr_tri_m4)$random))

# separate ID from trait name 
df_tri_fit <- df_tri_fit %>% 
  separate(Name, c("ID","Name"),
           sep = ":")

# join main effects to individual deviations
df_tri_fit <- left_join(df_tri_fit,
                        df_tri_fix)
  
# Get fitted value, and also convert discrete x axis to continuous
df_tri_fit <- df_tri_fit %>% 
  mutate(fitVal = Value + MainEffect) %>% 
  mutate(Temp = ifelse(Name == "trait_Cold",-2,
                       ifelse(Name == "trait_Medium",0,2)))

# Plot
ggplot(df_tri_fit, aes(x = Temp, y = fitVal, group = ID)) +
  geom_line(alpha = 0.6) +
  theme_bw()

Compare to reaction norm fit

lmer_tri <- lmer(Activity ~ Temp + scale(rpt) +
                   (Temp|ID),
                 data = df_tri)

summary(lmer_tri)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Activity ~ Temp + scale(rpt) + (Temp | ID)
##    Data: df_tri
## 
## REML criterion at convergence: 783.5
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -2.66932 -0.52778  0.01837  0.60665  1.99700 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev. Corr
##  ID       (Intercept) 4.443    2.108        
##           Temp        2.287    1.512    0.80
##  Residual             2.599    1.612        
## Number of obs: 180, groups:  ID, 20
## 
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)  6.88626    0.48641  14.157
## Temp         1.85392    0.34608   5.357
## scale(rpt)  -0.07954    0.12050  -0.660
## 
## Correlation of Fixed Effects:
##            (Intr) Temp 
## Temp       0.760       
## scale(rpt) 0.000  0.000
# Create prediction frame with all
#  combinations of ID and temp,
#  but with rpt always 0
df_tri_lmerpred <- expand(df_tri, nesting(ID, Temp)) %>% 
  mutate(rpt = 0)

df_tri_lmerpred$fit <- predict(lmer_tri,
                               newdata = df_tri_lmerpred,
                               re.form = NULL)


ggplot(df_tri_lmerpred, aes(x = Temp, y = fit, group = ID)) +
  geom_line(alpha = 0.6) +
  theme_bw()

In this case, the reaction norm model is a reasonable fit, but we can easily imagine situations in which linear slopes do not provide enough flexibility. While the intercept and slope (co)variances can be understood when you know what you’re looking for, they are certainly not as easily interpretable as a covariance matrix with entries for each environment.

Code for the simple matrix algebra that converts between the reaction norm and 'character state' multivariate formulations is given in the Roff & Wilson chapter of Hunt & Hosken's book on Genotype-by-Environment Interactions and Sexual Selection:

Figure taken from Roff & Wilson
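
As a rough illustration of that algebra (a sketch, not the chapter's own code): the character-state covariance matrix implied by a linear random regression is Phi %*% P_rn %*% t(Phi), where P_rn is the among-individual intercept-slope covariance matrix and each row of Phi contains (1, environment value).

# Among-individual intercept-slope covariance matrix
#  from the random regression model
P_rn <- VarCorr(lmer_tri)$ID[1:2, 1:2]

# Each row of Phi is (1, environment value) for Cold, Medium, Hot
Phi <- cbind(1, c(-2, 0, 2))

# Implied character-state covariance matrix
P_cs <- Phi %*% P_rn %*% t(Phi)
dimnames(P_cs) <- list(c("Cold","Medium","Hot"),
                       c("Cold","Medium","Hot"))
round(P_cs, 2)

Note that this implied matrix has rank 2 (it is built from an intercept and a slope only), which is exactly why the character-state model - with three free variances and three free covariances - can capture patterns that a linear reaction norm cannot.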