These notes build on the R Code and Instructions for the prior chapters, which discuss some operational aspects of R, detail the use of many functions, and explain how to install packages. Here, we skip details that have already been covered there.
Load the AMCP package to use the data from Table 1, Chapter 16.
library(AMCP)
data(chapter_16_table_1)
chapter_16_table_1
## Severity Trainee Gender
## 1 49 1 1
## 2 40 1 1
## 3 31 1 1
## 4 40 1 1
## 5 42 2 1
## 6 48 2 1
## 7 52 2 1
## 8 58 2 1
## 9 42 3 1
## 10 46 3 1
## 11 50 3 1
## 12 54 3 1
## 13 53 4 2
## 14 59 4 2
## 15 63 4 2
## 16 69 4 2
## 17 44 5 2
## 18 54 5 2
## 19 54 5 2
## 20 64 5 2
## 21 58 6 2
## 22 63 6 2
## 23 67 6 2
## 24 72 6 2
Get the group descriptive statistics (this is one approach; see the Chapter 4 R Accompaniment for other approaches).
library(dplyr)
##
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
Summary.16.1 <- chapter_16_table_1 %>%
group_by(Gender) %>%
summarize(
mean=mean(Severity),
sd=sd(Severity),
s2=var(Severity),
n=n()
)
Summary.16.1
## # A tibble: 2 × 5
## Gender mean sd s2 n
## <int> <dbl> <dbl> <dbl> <int>
## 1 1 46 7.397788 54.72727 12
## 2 2 60 7.920055 62.72727 12
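For reference, a minimal base-R sketch of the same descriptives (one of the other approaches alluded to above) uses tapply(); this is an aside and is not required for anything that follows.
# Base-R group descriptives via tapply().
with(chapter_16_table_1, tapply(Severity, Gender, mean))
with(chapter_16_table_1, tapply(Severity, Gender, sd))
with(chapter_16_table_1, tapply(Severity, Gender, var))
table(chapter_16_table_1$Gender)  # group sizes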
It may be helpful to extend the above so that descriptives are provided for the trainees within the groups. We call the object created here Summary.16.1b to distinguish it from the above summary.
Summary.16.1b <- chapter_16_table_1 %>%
group_by(Gender,Trainee) %>%
summarize(
mean=mean(Severity),
sd=sd(Severity),
s2=var(Severity),
n=n()
)
Summary.16.1b
## Source: local data frame [6 x 6]
## Groups: Gender [?]
##
## Gender Trainee mean sd s2 n
## <int> <int> <dbl> <dbl> <dbl> <int>
## 1 1 1 40 7.348469 54.00000 4
## 2 1 2 50 6.733003 45.33333 4
## 3 1 3 48 5.163978 26.66667 4
## 4 2 4 61 6.733003 45.33333 4
## 5 2 5 54 8.164966 66.66667 4
## 6 2 6 65 5.944185 35.33333 4
Although not necessary, to be consistent with the book we modify the Trainee numbers. Trainee numbers 4, 5, and 6 in the data here correspond to the female Trainees 1, 2, and 3 in the book. The code below keeps the Trainee numbers for Gender=1 as they are, but for Gender=2 it subtracts 3, so that numbering within the female group starts at 1. Note that this overwrites the Trainee variable in the copy of the data loaded into the workspace; the data set in the AMCP package itself is unchanged.
chapter_16_table_1$Trainee <- c(chapter_16_table_1$Trainee[chapter_16_table_1$Gender==1], chapter_16_table_1$Trainee[chapter_16_table_1$Gender==2]-3)
chapter_16_table_1
## Severity Trainee Gender
## 1 49 1 1
## 2 40 1 1
## 3 31 1 1
## 4 40 1 1
## 5 42 2 1
## 6 48 2 1
## 7 52 2 1
## 8 58 2 1
## 9 42 3 1
## 10 46 3 1
## 11 50 3 1
## 12 54 3 1
## 13 53 1 2
## 14 59 1 2
## 15 63 1 2
## 16 69 1 2
## 17 44 2 2
## 18 54 2 2
## 19 54 2 2
## 20 64 2 2
## 21 58 3 2
## 22 63 3 2
## 23 67 3 2
## 24 72 3 2
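For reference, the same renumbering could be done in a single vectorized step with ifelse(). It is shown commented out because Trainee has already been renumbered above, and running it again would subtract 3 a second time.
# chapter_16_table_1$Trainee <- ifelse(chapter_16_table_1$Gender == 2,
#                                      chapter_16_table_1$Trainee - 3,
#                                      chapter_16_table_1$Trainee)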
For purposes of having a unique identification value for each trainee, we create an ID variable by pasting the Gender and Trainee numbers together. We also prepend G= and T= to the Gender and Trainee numbers, respectively, to make the identifier easier to read.
chapter_16_table_1$ID <- paste("G=", chapter_16_table_1$Gender, "_", "T=",
chapter_16_table_1$Trainee, sep="")
chapter_16_table_1
## Severity Trainee Gender ID
## 1 49 1 1 G=1_T=1
## 2 40 1 1 G=1_T=1
## 3 31 1 1 G=1_T=1
## 4 40 1 1 G=1_T=1
## 5 42 2 1 G=1_T=2
## 6 48 2 1 G=1_T=2
## 7 52 2 1 G=1_T=2
## 8 58 2 1 G=1_T=2
## 9 42 3 1 G=1_T=3
## 10 46 3 1 G=1_T=3
## 11 50 3 1 G=1_T=3
## 12 54 3 1 G=1_T=3
## 13 53 1 2 G=2_T=1
## 14 59 1 2 G=2_T=1
## 15 63 1 2 G=2_T=1
## 16 69 1 2 G=2_T=1
## 17 44 2 2 G=2_T=2
## 18 54 2 2 G=2_T=2
## 19 54 2 2 G=2_T=2
## 20 64 2 2 G=2_T=2
## 21 58 3 2 G=2_T=3
## 22 63 3 2 G=2_T=3
## 23 67 3 2 G=2_T=3
## 24 72 3 2 G=2_T=3
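An alternative sketch uses interaction(), which builds the identifier as a factor directly; the object name ID2 is ours and is used only so that the ID variable created above is not overwritten.
# A factor with six levels, one per Gender-by-Trainee combination.
ID2 <- with(chapter_16_table_1, interaction(Gender, Trainee, sep="_"))
table(ID2)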
There are multiple ways of producing plots in the context of nested data structures. One convenient way is the xyplot() function from the lattice package, which provides a great deal of flexibility.
library(lattice)
# (each Gender has its own panel)
xyplot(Severity ~ as.factor(Trainee) | factor(Gender, labels=c("Male", "Female")),
data=chapter_16_table_1, main="Severity Ratings by Trainee", ylab="Severity", xlab="Trainee")
# Add jitter to the Severity values to help differentiate overlapping points.
xyplot(jitter(Severity) ~ as.factor(Trainee) | factor(Gender, labels=c("Male", "Female")),
data=chapter_16_table_1, main="Severity Ratings by Trainee", ylab="Severity", xlab="Trainee")
# Here we increase the amount of jittering.
xyplot(jitter(Severity, factor=2) ~ as.factor(Trainee) | factor(Gender, labels=c("Male", "Female")),
data=chapter_16_table_1, main="Severity Ratings by Trainee", ylab="Severity", xlab="Trainee")
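For readers who prefer ggplot2, a roughly equivalent plot can be sketched as follows; this assumes the ggplot2 package is installed, as it is not used elsewhere in these notes.
library(ggplot2)
ggplot(chapter_16_table_1, aes(x=factor(Trainee), y=Severity)) +
  geom_jitter(width=0.1, height=0) +
  facet_wrap(~ factor(Gender, labels=c("Male", "Female"))) +
  labs(x="Trainee", y="Severity")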
As discussed for Chapter 15, the lme4 package is a widely used R package for mixed-effects models, which are often termed multilevel models or hierarchical linear models. Notice, below, that the fixed effects are written in the same way as in the lm() function for linear models, whereas the random effects are included in parentheses, with a pipe symbol (i.e., "|") used to denote the nesting structure. The "(1|ID)" part means, in words, that "the intercept varies across the levels of ID," that is, there is a random intercept for each cluster. The model below estimates the overall mean value (the fixed intercept) and the variation around that mean (the variance of the random intercepts). Note that the intercept is included by default in the fixed part of the model specification.
library(lme4)
## Loading required package: Matrix
Model.0 <- lmer(Severity ~ 1 + (1|ID), data=chapter_16_table_1)
summary(Model.0)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Severity ~ 1 + (1 | ID)
## Data: chapter_16_table_1
##
## REML criterion at convergence: 166.2
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -1.59965 -0.49688 -0.01505 0.60318 1.50207
##
## Random effects:
## Groups Name Variance Std.Dev.
## ID (Intercept) 71.01 8.427
## Residual 45.56 6.749
## Number of obs: 24, groups: ID, 6
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 53.000 3.706 14.3
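From the variance components reported above, the intraclass correlation (the proportion of Severity variance attributable to trainees) can be computed by hand or extracted programmatically; a brief sketch using the printed values, which works out to roughly 0.61:
# ICC = between-trainee variance / total variance.
71.01 / (71.01 + 45.56)
VC <- as.data.frame(VarCorr(Model.0))
VC$vcov[1] / sum(VC$vcov)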
Here we include Gender as a predictor, which, because it is represented by a number, should have as.factor() applied (or be specified as a factor in the data frame). Notice that the SAS output given in the book uses Gender=2 as the reference group, whereas R uses Gender=1. Thus, the intercept of 60 in the book is the model-implied value for Gender=2, whereas the intercept given by the code below is 46, the model-implied value for Gender=1. However, because the "effect" of Gender is 14 units in magnitude (-14 when Gender=2 is the reference and 14 when Gender=1 is the reference), the models are equivalent. Compare this with the output of Table 16.3.
Model.1 <- lmer(Severity ~ 1 + as.factor(Gender) + (1|ID), data=chapter_16_table_1)
summary(Model.1)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Severity ~ 1 + as.factor(Gender) + (1 | ID)
## Data: chapter_16_table_1
##
## REML criterion at convergence: 155.2
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -1.8248 -0.4452 -0.1247 0.6843 1.4141
##
## Random effects:
## Groups Name Variance Std.Dev.
## ID (Intercept) 18.11 4.256
## Residual 45.56 6.749
## Number of obs: 24, groups: ID, 6
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 46.000 3.136 14.669
## as.factor(Gender)2 14.000 4.435 3.157
##
## Correlation of Fixed Effects:
## (Intr)
## as.fctr(G)2 -0.707
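If one wished to reproduce the book's parameterization exactly (Gender=2 as the reference group), the factor can be releveled before fitting; a brief sketch, where the variable and object names Gender.F and Model.1.Book are ours:
chapter_16_table_1$Gender.F <- relevel(as.factor(chapter_16_table_1$Gender), ref="2")
Model.1.Book <- lmer(Severity ~ 1 + Gender.F + (1|ID), data=chapter_16_table_1)
fixef(Model.1.Book)  # intercept of 60 and a Gender effect of -14, as in the book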
Load the data into the workspace.
data(chapter_16_table_4)
chapter_16_table_4
## Observation Room Condition Cognition Skill Inductive
## 1 1 1 0 46 4 21
## 2 2 1 0 52 4 26
## 3 3 1 0 60 4 33
## 4 4 1 0 44 4 22
## 5 5 2 0 46 6 18
## 6 6 2 0 48 6 25
## 7 7 2 0 50 6 26
## 8 8 2 0 54 6 24
## 9 9 2 0 50 6 21
## 10 10 2 0 48 6 25
## 11 11 3 0 52 9 35
## 12 12 3 0 50 9 28
## 13 13 3 0 46 9 32
## 14 14 3 0 50 9 36
## 15 15 3 0 58 9 38
## 16 16 1 1 42 5 26
## 17 17 1 1 46 5 34
## 18 18 1 1 50 5 27
## 19 19 2 1 52 8 38
## 20 20 2 1 54 8 44
## 21 21 2 1 46 8 34
## 22 22 2 1 56 8 45
## 23 23 2 1 48 8 38
## 24 24 3 1 42 7 31
## 25 25 3 1 46 7 41
## 26 26 3 1 44 7 34
## 27 27 3 1 52 7 35
## 28 28 3 1 56 7 38
## 29 29 3 1 54 7 46
Descriptives, overall, ignoring groupings. This is arguably only moderately useful information.
summary(chapter_16_table_4)
## Observation Room Condition Cognition
## Min. : 1 Min. :1.000 Min. :0.0000 Min. :42.00
## 1st Qu.: 8 1st Qu.:2.000 1st Qu.:0.0000 1st Qu.:46.00
## Median :15 Median :2.000 Median :0.0000 Median :50.00
## Mean :15 Mean :2.138 Mean :0.4828 Mean :49.72
## 3rd Qu.:22 3rd Qu.:3.000 3rd Qu.:1.0000 3rd Qu.:52.00
## Max. :29 Max. :3.000 Max. :1.0000 Max. :60.00
## Skill Inductive
## Min. :4.00 Min. :18.00
## 1st Qu.:6.00 1st Qu.:26.00
## Median :7.00 Median :33.00
## Mean :6.69 Mean :31.76
## 3rd Qu.:8.00 3rd Qu.:38.00
## Max. :9.00 Max. :46.00
Descriptives by Condition and Room. Compare this output with Table 16.5 of MDK.
Summary.16.4 <- chapter_16_table_4 %>%
group_by(Condition,Room) %>%
summarize(
mean.cog=mean(Cognition),
sd.cog=sd(Cognition),
mean.Skill=mean(Skill),
sd.Skill=sd(Skill),
mean.Inductive=mean(Inductive),
sd.Inductive=sd(Inductive),
n=n()
)
Summary.16.4
## Source: local data frame [6 x 9]
## Groups: Condition [?]
##
## Condition Room mean.cog sd.cog mean.Skill sd.Skill mean.Inductive
## <int> <int> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0 1 50.50000 7.187953 4 0 25.50000
## 2 0 2 49.33333 2.732520 6 0 23.16667
## 3 0 3 51.20000 4.381780 9 0 33.80000
## 4 1 1 46.00000 4.000000 5 0 29.00000
## 5 1 2 51.20000 4.147288 8 0 39.80000
## 6 1 3 49.00000 5.761944 7 0 37.50000
## # ... with 2 more variables: sd.Inductive <dbl>, n <int>
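The tibble print method truncates wide output (hence the "... with 2 more variables" line above). To see sd.Inductive and n as well, the summary can be converted to a plain data frame:
as.data.frame(Summary.16.4)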
Here we plot the Inductive reasoning scores by Condition.
xyplot(jitter(Inductive) ~ as.factor(Condition), data=chapter_16_table_4,
ylab="Inductive Reasoning")
Here different colors are used for the Rooms by specifying groups=Room.
xyplot(jitter(Inductive) ~ as.factor(Condition), groups=Room, data=chapter_16_table_4,
ylab="Inductive Reasoning")
Here each Room is plotted separately, by Condition.
xyplot(jitter(Inductive) ~ as.factor(Condition) | factor(Room), data=chapter_16_table_4,
ylab="Inductive Reasoning", xlab="Condition")
Here each Condition is plotted separately, by Room.
xyplot(jitter(Inductive) ~ as.factor(Room) | factor(Condition), data=chapter_16_table_4,
ylab="Inductive Reasoning", xlab="Room")
Here we plot inductive reasoning by cognitive ability for the first classroom in the control condition (see Figure 16.1 in MDK).
plot(Inductive~Cognition, filter(chapter_16_table_4, Condition==0, Room==1), ylim=c(0, 50), xlim=c(0, 70),
ylab="Inductive Reasoning", xlab="Cognitive Ability")
abline(lm(Inductive~Cognition, filter(chapter_16_table_4, Condition==0, Room==1)))
Here we fit a basic mixed-effects model with only an intercept.
Model.IR.0 <- lmer(Inductive ~ 1 + (1|Room), data=chapter_16_table_4)
summary(Model.IR.0)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Inductive ~ 1 + (1 | Room)
## Data: chapter_16_table_4
##
## REML criterion at convergence: 194.5
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -1.8367 -0.8387 -0.1033 0.6452 2.0127
##
## Random effects:
## Groups Name Variance Std.Dev.
## Room (Intercept) 13.76 3.709
## Residual 49.20 7.014
## Number of obs: 29, groups: Room, 3
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 31.361 2.517 12.46
Now we add Condition as a predictor. Importantly, we need to include Condition in the random-effect grouping, because the Room labels (1, 2, and 3) are reused in both conditions even though they refer to different classrooms; this coding necessity is easy to overlook. Notice that, above, in the analysis of Table 16.1 this was unnecessary, as each Trainee appeared within only one Gender. Here, however, specifying only (1|Room), which we sketch after the output below, treats Room 1 in the control condition and Room 1 in the treatment condition as the same cluster. In such a situation the fixed effects may be the same, but the variance components and standard errors will generally differ. As shown below, the random effect is specified as (1|Room:Condition), which means, in words, that "the intercept varies among the unique Room-by-Condition combinations," that is, among the six distinct classrooms (Rooms nested within Conditions). The ":" symbol on the right-hand side of the pipe denotes the crossing (interaction) of the two grouping variables. Compare this output with Table 16.6 in MDK.
Model.IR.1A <- lmer(Inductive ~ 1 + Condition + (1|Room:Condition), data=chapter_16_table_4)
summary(Model.IR.1A)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Inductive ~ 1 + Condition + (1 | Room:Condition)
## Data: chapter_16_table_4
##
## REML criterion at convergence: 171
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -1.39742 -0.74219 0.04089 0.67433 1.93550
##
## Random effects:
## Groups Name Variance Std.Dev.
## Room:Condition (Intercept) 26.63 5.160
## Residual 20.26 4.501
## Number of obs: 29, groups: Room:Condition, 6
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 27.478 3.203 8.579
## Condition 8.148 4.548 1.792
##
## Correlation of Fixed Effects:
## (Intr)
## Condition -0.704
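As a sketch of the misspecification mentioned above (this model is not reported in the book, and the object name Model.IR.1.Wrong is ours), one could place the random intercept on Room alone. Doing so treats the reused Room labels as only three clusters; the fixed-effect estimate may be similar, but the variance components and standard errors will generally differ.
# Sketch only: ignores that Room labels are reused across the two conditions.
Model.IR.1.Wrong <- lmer(Inductive ~ 1 + Condition + (1|Room), data=chapter_16_table_4)
summary(Model.IR.1.Wrong)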
Here we use Condition as a factor. With R's default treatment contrasts, a two-level factor coded 0/1 gives the same parameterization as the numeric 0/1 variable, as the identical estimates below show. With other contrast codings the parameterization (though not the fit of the model) would differ, so one should be careful when interpreting the coefficients, which may not always be expressed in the most intuitive way.
chapter_16_table_4$Cond.Factor <- factor(chapter_16_table_4$Condition)
Model.IR.1B <- lmer(Inductive ~ 1 + Cond.Factor + (1|Room:Cond.Factor), data=chapter_16_table_4)
summary(Model.IR.1B)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Inductive ~ 1 + Cond.Factor + (1 | Room:Cond.Factor)
## Data: chapter_16_table_4
##
## REML criterion at convergence: 171
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -1.39742 -0.74219 0.04089 0.67433 1.93550
##
## Random effects:
## Groups Name Variance Std.Dev.
## Room:Cond.Factor (Intercept) 26.63 5.160
## Residual 20.26 4.501
## Number of obs: 29, groups: Room:Cond.Factor, 6
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 27.478 3.203 8.579
## Cond.Factor1 8.148 4.548 1.792
##
## Correlation of Fixed Effects:
## (Intr)
## Cond.Factr1 -0.704
Here we create a new variable called Group that recodes the 0/1 numeric Condition values as Control and Treatment and treats the result as a factor.
chapter_16_table_4$Group <- chapter_16_table_4$Condition
chapter_16_table_4$Group[chapter_16_table_4$Group==0] <- "Control"
chapter_16_table_4$Group[chapter_16_table_4$Group==1] <- "Treatment"
chapter_16_table_4$Group <- as.factor(chapter_16_table_4$Group)
# Examine the structure of the data
str(chapter_16_table_4)
## 'data.frame': 29 obs. of 8 variables:
## $ Observation: int 1 2 3 4 5 6 7 8 9 10 ...
## $ Room : int 1 1 1 1 2 2 2 2 2 2 ...
## $ Condition : int 0 0 0 0 0 0 0 0 0 0 ...
## $ Cognition : int 46 52 60 44 46 48 50 54 50 48 ...
## $ Skill : int 4 4 4 4 6 6 6 6 6 6 ...
## $ Inductive : int 21 26 33 22 18 25 26 24 21 25 ...
## $ Cond.Factor: Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
## $ Group : Factor w/ 2 levels "Control","Treatment": 1 1 1 1 1 1 1 1 1 1 ...
Here we refit the model using Group instead of Condition. The results are equivalent to those above, where the 0/1 Condition was used as a number rather than a factor. Notice that the intercept and the effect of Condition (or Group) can differ from the values reported in the book because the reference group is specified differently; the models are equivalent, but, as always, the results depend on how the predictors enter the model.
Model.IR.1B <- lmer(Inductive ~ 1 + Group + (1|Room:Group), data=chapter_16_table_4)
summary(Model.IR.1B)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Inductive ~ 1 + Group + (1 | Room:Group)
## Data: chapter_16_table_4
##
## REML criterion at convergence: 171
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -1.39742 -0.74219 0.04089 0.67433 1.93550
##
## Random effects:
## Groups Name Variance Std.Dev.
## Room:Group (Intercept) 26.63 5.160
## Residual 20.26 4.501
## Number of obs: 29, groups: Room:Group, 6
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 27.478 3.203 8.579
## GroupTreatment 8.148 4.548 1.792
##
## Correlation of Fixed Effects:
## (Intr)
## GroupTrtmnt -0.704
The relevel() function can be used to change the reference category.
chapter_16_table_4$Group <- relevel(chapter_16_table_4$Group, ref="Treatment")
# Verify the properties of the variables.
str(chapter_16_table_4)
## 'data.frame': 29 obs. of 8 variables:
## $ Observation: int 1 2 3 4 5 6 7 8 9 10 ...
## $ Room : int 1 1 1 1 2 2 2 2 2 2 ...
## $ Condition : int 0 0 0 0 0 0 0 0 0 0 ...
## $ Cognition : int 46 52 60 44 46 48 50 54 50 48 ...
## $ Skill : int 4 4 4 4 6 6 6 6 6 6 ...
## $ Inductive : int 21 26 33 22 18 25 26 24 21 25 ...
## $ Cond.Factor: Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
## $ Group : Factor w/ 2 levels "Treatment","Control": 2 2 2 2 2 2 2 2 2 2 ...
Now, with Group represented as a factor and with the levels reordered so that Treatment is the reference category, we run the same model as before. Notice that the GroupControl coefficient is the negative of the GroupTreatment coefficient obtained previously.
Model.IR.1 <- lmer(Inductive ~ 1 + Group + (1|Room:Group), data=chapter_16_table_4)
summary(Model.IR.1)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Inductive ~ 1 + Group + (1 | Room:Group)
## Data: chapter_16_table_4
##
## REML criterion at convergence: 171
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -1.39742 -0.74219 0.04089 0.67433 1.93550
##
## Random effects:
## Groups Name Variance Std.Dev.
## Room:Group (Intercept) 26.63 5.160
## Residual 20.26 4.501
## Number of obs: 29, groups: Room:Group, 6
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 35.626 3.229 11.032
## GroupControl -8.148 4.548 -1.792
##
## Correlation of Fixed Effects:
## (Intr)
## GroupContrl -0.710
To evaluate the effect of Condition in terms of an ANOVA table, the anova() function can be used.
anova(Model.IR.1)
## Analysis of Variance Table
## Df Sum Sq Mean Sq F value
## Group 1 65.009 65.009 3.2095
To make things clearer, consider models in which the group membership is specified directly. That is, if Treatment=1 it is clear the participant is in the Treatment group; similarly, if Control=1 it is clear the participant is in the Control group. This is unnecessary, but we demonstrate it for clarity.
# Create a Control group identifier.
chapter_16_table_4$Control <- chapter_16_table_4$Condition
chapter_16_table_4$Control[chapter_16_table_4$Condition==0] <- 1
chapter_16_table_4$Control[chapter_16_table_4$Condition==1] <- 0
# Create a Treatment group identifier.
chapter_16_table_4$Treatment <- chapter_16_table_4$Condition
# Fit using Control. This matches the top part of Table 16.6. Here, notice that the intercept represents the Treatment group mean.
Model.IR.C <- lmer(Inductive ~ 1 + Control + (1|Room:Control), data=chapter_16_table_4)
summary(Model.IR.C)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Inductive ~ 1 + Control + (1 | Room:Control)
## Data: chapter_16_table_4
##
## REML criterion at convergence: 171
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -1.39742 -0.74219 0.04089 0.67433 1.93550
##
## Random effects:
## Groups Name Variance Std.Dev.
## Room:Control (Intercept) 26.63 5.160
## Residual 20.26 4.501
## Number of obs: 29, groups: Room:Control, 6
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 35.626 3.229 11.032
## Control -8.148 4.548 -1.792
##
## Correlation of Fixed Effects:
## (Intr)
## Control -0.710
# Fit using Treatment. Here, notice that the intercept now represents the Control group mean.
Model.IR.T <- lmer(Inductive ~ 1 + Treatment + (1|Room:Treatment), data=chapter_16_table_4)
summary(Model.IR.T)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Inductive ~ 1 + Treatment + (1 | Room:Treatment)
## Data: chapter_16_table_4
##
## REML criterion at convergence: 171
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -1.39742 -0.74219 0.04089 0.67433 1.93550
##
## Random effects:
## Groups Name Variance Std.Dev.
## Room:Treatment (Intercept) 26.63 5.160
## Residual 20.26 4.501
## Number of obs: 29, groups: Room:Treatment, 6
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 27.478 3.203 8.579
## Treatment 8.148 4.548 1.792
##
## Correlation of Fixed Effects:
## (Intr)
## Treatment -0.704
We use the model that matches the output from Table 16.6. Again, we can use anova() to test the fixed effects. The approach R takes to denominator degrees of freedom differs from SAS, and, as we will see, some results will differ; note also that lmer() does not report p-values by default. Much has been written on these (very technical) issues, but we set them aside here and simply show how to implement the models in R using the lmer() function from the lme4 package; a sketch using the lmerTest package to obtain approximate degrees of freedom follows the ANOVA table below.
summary(Model.IR.C)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Inductive ~ 1 + Control + (1 | Room:Control)
## Data: chapter_16_table_4
##
## REML criterion at convergence: 171
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -1.39742 -0.74219 0.04089 0.67433 1.93550
##
## Random effects:
## Groups Name Variance Std.Dev.
## Room:Control (Intercept) 26.63 5.160
## Residual 20.26 4.501
## Number of obs: 29, groups: Room:Control, 6
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 35.626 3.229 11.032
## Control -8.148 4.548 -1.792
##
## Correlation of Fixed Effects:
## (Intr)
## Control -0.710
anova(Model.IR.C)
## Analysis of Variance Table
## Df Sum Sq Mean Sq F value
## Control 1 65.009 65.009 3.2095
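If approximate denominator degrees of freedom and p-values are desired, the lmerTest package (not used in the book) can refit the model and apply Satterthwaite's method; a sketch, assuming lmerTest is installed, where the object name Model.IR.C.t is ours:
# Calling lmerTest::lmer() via the namespace avoids masking lme4::lmer() for the rest of these notes.
Model.IR.C.t <- lmerTest::lmer(Inductive ~ 1 + Control + (1|Room:Control), data=chapter_16_table_4)
anova(Model.IR.C.t)  # now includes denominator df and a p-value (Satterthwaite approximation)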
Additionally, confidence intervals can be obtained using the confint() function.
confint(Model.IR.C)
## Computing profile confidence intervals ...
## 2.5 % 97.5 %
## .sig01 1.593129 8.8827687
## .sigma 3.458986 6.2170149
## (Intercept) 29.485315 41.6898990
## Control -16.706763 0.4932392
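Profile likelihood intervals are the default; if a quicker approximation is acceptable, Wald intervals are also available via the method argument (a brief sketch; note that the rows for the variance components are returned as NA with this method).
confint(Model.IR.C, method="Wald")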
In what follows we continue to use the Control indicator created above, whereas the book uses Condition; the results are equivalent.
Now, using the same model specification as before, we add a covariate. The covariate, Skill, takes a single value for all students within a given Room. Thus, it is what can be considered a "level 2" variable (a term commonly used in the multilevel-model presentation of these models). In the mixed-effects modeling framework this variable enters only as a fixed effect; it does not vary within Rooms. The results correspond to Table 16.7 in MDK.
Model.IR.2 <- lmer(Inductive ~ 1 + Control + Skill + (1|Room:Control), data=chapter_16_table_4)
summary(Model.IR.2)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Inductive ~ 1 + Control + Skill + (1 | Room:Control)
## Data: chapter_16_table_4
##
## REML criterion at convergence: 164.5
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -1.40177 -0.69562 -0.06924 0.51685 2.01182
##
## Random effects:
## Groups Name Variance Std.Dev.
## Room:Control (Intercept) 6.572 2.564
## Residual 20.274 4.503
## Number of obs: 29, groups: Room:Control, 6
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 20.2521 5.8201 3.480
## Control -7.5686 2.7163 -2.786
## Skill 2.3092 0.8086 2.856
##
## Correlation of Fixed Effects:
## (Intr) Contrl
## Control -0.347
## Skill -0.944 0.119
anova(Model.IR.2)
## Analysis of Variance Table
## Df Sum Sq Mean Sq F value
## Control 1 200.86 200.86 9.9071
## Skill 1 165.37 165.37 8.1566
confint(Model.IR.2)
## Computing profile confidence intervals ...
## 2.5 % 97.5 %
## .sig01 0.0000000 4.201046
## .sigma 3.4631692 6.099172
## (Intercept) 10.7577094 29.785691
## Control -12.0414144 -3.143293
## Skill 0.9893832 3.629000
Here we include a "level 1" variable (in the multilevel-modeling presentation of the model), that is, a variable that varies across the units (here, students). Even within a classroom, the cognitive scores vary across students (unlike Skill, which is fixed within each classroom). Note that this model does not include Skill. However, it does include the interaction of Condition and Cognition (i.e., the Control*Cognition specification). When an interaction is specified with the "*" operator, the main effects need not be written out, as they are included by default; we write them here for completeness. Also, notice that we specify both a random intercept and a random slope in the random-effects part of the model (i.e., within the parentheses).
Model.IR.3A <- lmer(Inductive ~ 1 + Control + Cognition + Control*Cognition + (1+Cognition|Room:Control), data=chapter_16_table_4)
summary(Model.IR.3A)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Inductive ~ 1 + Control + Cognition + Control * Cognition + (1 +
## Cognition | Room:Control)
## Data: chapter_16_table_4
##
## REML criterion at convergence: 157.5
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -1.50204 -0.61776 0.02134 0.67690 1.67423
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## Room:Control (Intercept) 0.755314 0.86909
## Cognition 0.004772 0.06908 1.00
## Residual 11.738106 3.42609
## Number of obs: 29, groups: Room:Control, 6
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 4.17849 10.19900 0.410
## Control -10.32002 14.69648 -0.702
## Cognition 0.64201 0.21207 3.027
## Control:Cognition 0.02521 0.30052 0.084
##
## Correlation of Fixed Effects:
## (Intr) Contrl Cogntn
## Control -0.694
## Cognition -0.967 0.671
## Cntrl:Cgntn 0.683 -0.968 -0.706
anova(Model.IR.3A)
## Analysis of Variance Table
## Df Sum Sq Mean Sq F value
## Control 1 56.356 56.356 4.8011
## Cognition 1 222.748 222.748 18.9765
## Control:Cognition 1 0.083 0.083 0.0070
The above does not use centered scores for Cognition. As discussed in the book, not centering (i.e., as above) yields estimates that are not exactly what may be of interest. In particular, "the intercept tells us the mean value of inductive reasoning when cognitive ability equals zero." However, cognitive ability values of zero are "off the scale," and making the comparison at that level is not necessarily of interest. This issue can be addressed by centering the level 1 predictor, that is, by centering Cognition. Below we first add a new variable, the centered Cognition score, to the data set.
chapter_16_table_4$Cognition_Centered <- chapter_16_table_4$Cognition - mean(chapter_16_table_4$Cognition)
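As an aside, the same grand-mean centering can be obtained with scale(); as.numeric() strips the matrix attributes that scale() attaches. This reproduces the variable just created and is shown commented out as an equivalent sketch.
# chapter_16_table_4$Cognition_Centered <- as.numeric(scale(chapter_16_table_4$Cognition, center=TRUE, scale=FALSE))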
Now the same model is fitted with the centered Cognition scores as the predictor. There are some small numerical differences between lmer() and SAS PROC MIXED because of implementation details, but this model maps onto the model of Table 16.10. (Note that the random slope in the code below is still specified on the uncentered Cognition, which is why the random-effect estimates match those of the previous model.)
Model.IR.3_Centered <- lmer(Inductive ~ 1 + Control + Cognition_Centered + Control*Cognition_Centered + (1+Cognition|Room:Control), data=chapter_16_table_4)
summary(Model.IR.3_Centered)
## Linear mixed model fit by REML ['lmerMod']
## Formula:
## Inductive ~ 1 + Control + Cognition_Centered + Control * Cognition_Centered +
## (1 + Cognition | Room:Control)
## Data: chapter_16_table_4
##
## REML criterion at convergence: 157.5
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -1.50206 -0.61776 0.02134 0.67690 1.67423
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## Room:Control (Intercept) 0.75644 0.86974
## Cognition 0.00477 0.06907 1.00
## Residual 11.73811 3.42609
## Number of obs: 29, groups: Room:Control, 6
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 36.1021 2.6715 13.514
## Control -9.0666 3.7589 -2.412
## Cognition_Centered 0.6420 0.2121 3.027
## Control:Cognition_Centered 0.0252 0.3005 0.084
##
## Correlation of Fixed Effects:
## (Intr) Contrl Cgnt_C
## Control -0.711
## Cgntn_Cntrd 0.254 -0.181
## Cntrl:Cgn_C -0.179 0.191 -0.706
anova(Model.IR.3_Centered)
## Analysis of Variance Table
## Df Sum Sq Mean Sq F value
## Control 1 56.355 56.355 4.8011
## Cognition_Centered 1 222.753 222.753 18.9769
## Control:Cognition_Centered 1 0.083 0.083 0.0070
Going back to Model.IR.3A, various pieces of information can be extracted from the fitted object. Use the attributes() function to see what is available (the output is lengthy).
attributes(Model.IR.3A)
## $resp
## Reference class object of class "lmerResp"
## Field "Ptr":
## <pointer: 0x7f82be437670>
## Field "mu":
## [1] 22.94960 26.78894 31.90807 21.66982 21.45224 22.68089 23.90955
## [8] 26.36687 23.90955 22.68089 33.73467 32.23978 29.25002 32.23978
## [15] 38.21931 27.53836 29.84225 32.14613 40.11652 41.47962 36.02722
## [22] 42.84272 37.39032 32.58983 35.26392 33.92688 39.27504 41.94912
## [29] 40.61208
## Field "offset":
## [1] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## Field "sqrtXwt":
## [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## Field "sqrtrwt":
## [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## Field "weights":
## [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## Field "wtres":
## [1] -1.94960050 -0.78894381 1.09193178 0.33018060 -3.45223502
## [6] 2.31910524 2.09044549 -2.36687400 -2.90955451 2.31910524
## [11] 1.26533478 -4.23978243 2.74998315 3.76021757 -0.21931359
## [16] -1.53836399 4.15775466 -5.14612669 -2.11651864 2.52038042
## [21] -2.02721582 2.15727948 0.60968324 -1.58983439 5.73608310
## [26] 0.07312435 -4.27504066 -3.94912316 5.38791809
## Field "y":
## [1] 21 26 33 22 18 25 26 24 21 25 35 28 32 36 38 26 34 27 38 44 34 45 38
## [24] 31 41 34 35 38 46
## Field "REML":
## [1] 4
##
## $Gp
## [1] 0 12
##
## $call
## lmer(formula = Inductive ~ 1 + Control + Cognition + Control *
## Cognition + (1 + Cognition | Room:Control), data = chapter_16_table_4)
##
## $frame
## Inductive Control Cognition Room
## 1 21 1 46 1
## 2 26 1 52 1
## 3 33 1 60 1
## 4 22 1 44 1
## 5 18 1 46 2
## 6 25 1 48 2
## 7 26 1 50 2
## 8 24 1 54 2
## 9 21 1 50 2
## 10 25 1 48 2
## 11 35 1 52 3
## 12 28 1 50 3
## 13 32 1 46 3
## 14 36 1 50 3
## 15 38 1 58 3
## 16 26 0 42 1
## 17 34 0 46 1
## 18 27 0 50 1
## 19 38 0 52 2
## 20 44 0 54 2
## 21 34 0 46 2
## 22 45 0 56 2
## 23 38 0 48 2
## 24 31 0 42 3
## 25 41 0 46 3
## 26 34 0 44 3
## 27 35 0 52 3
## 28 38 0 56 3
## 29 46 0 54 3
##
## $flist
## $flist$`Room:Control`
## [1] 1:1 1:1 1:1 1:1 2:1 2:1 2:1 2:1 2:1 2:1 3:1 3:1 3:1 3:1 3:1 1:0 1:0
## [18] 1:0 2:0 2:0 2:0 2:0 2:0 3:0 3:0 3:0 3:0 3:0 3:0
## Levels: 1:0 1:1 2:0 2:1 3:0 3:1
##
## attr(,"assign")
## [1] 1
##
## $cnms
## $cnms$`Room:Control`
## [1] "(Intercept)" "Cognition"
##
##
## $lower
## [1] 0 -Inf 0
##
## $theta
## [1] 0.25366753 0.02016304 0.00000000
##
## $beta
## [1] 4.17849068 -10.32002110 0.64201381 0.02520679
##
## $u
## [1] -3.275472 0.000000 -1.355453 0.000000 1.960848 0.000000 -2.623153
## [8] 0.000000 1.314624 0.000000 3.978606 0.000000
##
## $devcomp
## $devcomp$cmp
## ldL2 ldRX2 wrss ussq pwrss drsum
## 12.799933 12.219585 252.603291 40.849366 293.452657 NA
## REML dev sigmaML sigmaREML
## 157.537456 NA 3.181047 3.426092
##
## $devcomp$dims
## N n p nmp q nth useSc reTrms spFe REML
## 29 29 4 25 12 3 1 1 0 4
## GLMM NLMM
## 0 0
##
##
## $pp
## Reference class object of class "merPredD"
## Field "Lambdat":
## 12 x 12 sparse Matrix of class "dgCMatrix"
##
## [1,] 0.2536675 0.02016304 . . . .
## [2,] . 0.00000000 . . . .
## [3,] . . 0.2536675 0.02016304 . .
## [4,] . . . 0.00000000 . .
## [5,] . . . . 0.2536675 0.02016304
## [6,] . . . . . 0.00000000
## [7,] . . . . . .
## [8,] . . . . . .
## [9,] . . . . . .
## [10,] . . . . . .
## [11,] . . . . . .
## [12,] . . . . . .
##
## [1,] . . . . . .
## [2,] . . . . . .
## [3,] . . . . . .
## [4,] . . . . . .
## [5,] . . . . . .
## [6,] . . . . . .
## [7,] 0.2536675 0.02016304 . . . .
## [8,] . 0.00000000 . . . .
## [9,] . . 0.2536675 0.02016304 . .
## [10,] . . . 0.00000000 . .
## [11,] . . . . 0.2536675 0.02016304
## [12,] . . . . . 0.00000000
## Field "LamtUt":
## 12 x 29 sparse Matrix of class "dgCMatrix"
## [[ suppressing 29 column names '1', '2', '3' ... ]]
##
## [1,] . . . . . . .
## [2,] . . . . . . .
## [3,] 1.181167 1.302146 1.46345 1.140841 . . .
## [4,] 0.000000 0.000000 0.00000 0.000000 . . .
## [5,] . . . . . . .
## [6,] . . . . . . .
## [7,] . . . . 1.181167 1.221493 1.26182
## [8,] . . . . 0.000000 0.000000 0.00000
## [9,] . . . . . . .
## [10,] . . . . . . .
## [11,] . . . . . . .
## [12,] . . . . . . .
##
## [1,] . . . . . . . .
## [2,] . . . . . . . .
## [3,] . . . . . . . .
## [4,] . . . . . . . .
## [5,] . . . . . . . .
## [6,] . . . . . . . .
## [7,] 1.342472 1.26182 1.221493 . . . . .
## [8,] 0.000000 0.00000 0.000000 . . . . .
## [9,] . . . . . . . .
## [10,] . . . . . . . .
## [11,] . . . 1.302146 1.26182 1.181167 1.26182 1.423124
## [12,] . . . 0.000000 0.00000 0.000000 0.00000 0.000000
##
## [1,] 1.100515 1.181167 1.26182 . . . .
## [2,] 0.000000 0.000000 0.00000 . . . .
## [3,] . . . . . . .
## [4,] . . . . . . .
## [5,] . . . 1.302146 1.342472 1.181167 1.382798
## [6,] . . . 0.000000 0.000000 0.000000 0.000000
## [7,] . . . . . . .
## [8,] . . . . . . .
## [9,] . . . . . . .
## [10,] . . . . . . .
## [11,] . . . . . . .
## [12,] . . . . . . .
##
## [1,] . . . . . . .
## [2,] . . . . . . .
## [3,] . . . . . . .
## [4,] . . . . . . .
## [5,] 1.221493 . . . . . .
## [6,] 0.000000 . . . . . .
## [7,] . . . . . . .
## [8,] . . . . . . .
## [9,] . 1.100515 1.181167 1.140841 1.302146 1.382798 1.342472
## [10,] . 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
## [11,] . . . . . . .
## [12,] . . . . . . .
## Field "Lind":
## [1] 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3
## Field "Ptr":
## <pointer: 0x7f82c26de800>
## Field "RZX":
## [,1] [,2] [,3] [,4]
## [1,] 1.554156 0.000000 71.77416 0.00000
## [2,] 0.000000 0.000000 0.00000 0.00000
## [3,] 1.853541 1.853541 94.74242 94.74242
## [4,] 0.000000 0.000000 0.00000 0.00000
## [5,] 2.108829 0.000000 108.42698 0.00000
## [6,] 0.000000 0.000000 0.00000 0.00000
## [7,] 2.326455 2.326455 115.00557 115.00557
## [8,] 0.000000 0.000000 0.00000 0.00000
## [9,] 2.319317 0.000000 114.68856 0.00000
## [10,] 0.000000 0.000000 0.00000 0.00000
## [11,] 2.108460 2.108460 108.46091 108.46091
## [12,] 0.000000 0.000000 0.00000 0.00000
## Field "Ut":
## 12 x 29 sparse Matrix of class "dgCMatrix"
## [[ suppressing 29 column names '1', '2', '3' ... ]]
##
## 1:0 . . . . . . . . . . . . . . . 1 1 1 . . . . .
## 1:0 . . . . . . . . . . . . . . . 42 46 50 . . . . .
## 1:1 1 1 1 1 . . . . . . . . . . . . . . . . . . .
## 1:1 46 52 60 44 . . . . . . . . . . . . . . . . . . .
## 2:0 . . . . . . . . . . . . . . . . . . 1 1 1 1 1
## 2:0 . . . . . . . . . . . . . . . . . . 52 54 46 56 48
## 2:1 . . . . 1 1 1 1 1 1 . . . . . . . . . . . . .
## 2:1 . . . . 46 48 50 54 50 48 . . . . . . . . . . . . .
## 3:0 . . . . . . . . . . . . . . . . . . . . . . .
## 3:0 . . . . . . . . . . . . . . . . . . . . . . .
## 3:1 . . . . . . . . . . 1 1 1 1 1 . . . . . . . .
## 3:1 . . . . . . . . . . 52 50 46 50 58 . . . . . . . .
##
## 1:0 . . . . . .
## 1:0 . . . . . .
## 1:1 . . . . . .
## 1:1 . . . . . .
## 2:0 . . . . . .
## 2:0 . . . . . .
## 2:1 . . . . . .
## 2:1 . . . . . .
## 3:0 1 1 1 1 1 1
## 3:0 42 46 44 52 56 54
## 3:1 . . . . . .
## 3:1 . . . . . .
## Field "Utr":
## [1] 102.8422 0.0000 132.0527 0.0000 257.3526 0.0000 173.8605
## [8] 0.0000 281.2075 0.0000 218.2076 0.0000
## Field "V":
## [,1] [,2] [,3] [,4]
## [1,] 1 1 46 46
## [2,] 1 1 52 52
## [3,] 1 1 60 60
## [4,] 1 1 44 44
## [5,] 1 1 46 46
## [6,] 1 1 48 48
## [7,] 1 1 50 50
## [8,] 1 1 54 54
## [9,] 1 1 50 50
## [10,] 1 1 48 48
## [11,] 1 1 52 52
## [12,] 1 1 50 50
## [13,] 1 1 46 46
## [14,] 1 1 50 50
## [15,] 1 1 58 58
## [16,] 1 0 42 0
## [17,] 1 0 46 0
## [18,] 1 0 50 0
## [19,] 1 0 52 0
## [20,] 1 0 54 0
## [21,] 1 0 46 0
## [22,] 1 0 56 0
## [23,] 1 0 48 0
## [24,] 1 0 42 0
## [25,] 1 0 46 0
## [26,] 1 0 44 0
## [27,] 1 0 52 0
## [28,] 1 0 56 0
## [29,] 1 0 54 0
## Field "VtV":
## [,1] [,2] [,3] [,4]
## [1,] 29 15 1442 754
## [2,] 0 15 754 754
## [3,] 0 0 72308 38180
## [4,] 0 0 0 38180
## Field "Vtr":
## [1] 921 410 46218 20836
## Field "X":
## (Intercept) Control Cognition Control:Cognition
## 1 1 1 46 46
## 2 1 1 52 52
## 3 1 1 60 60
## 4 1 1 44 44
## 5 1 1 46 46
## 6 1 1 48 48
## 7 1 1 50 50
## 8 1 1 54 54
## 9 1 1 50 50
## 10 1 1 48 48
## 11 1 1 52 52
## 12 1 1 50 50
## 13 1 1 46 46
## 14 1 1 50 50
## 15 1 1 58 58
## 16 1 0 42 0
## 17 1 0 46 0
## 18 1 0 50 0
## 19 1 0 52 0
## 20 1 0 54 0
## 21 1 0 46 0
## 22 1 0 56 0
## 23 1 0 48 0
## 24 1 0 42 0
## 25 1 0 46 0
## 26 1 0 44 0
## 27 1 0 52 0
## 28 1 0 56 0
## 29 1 0 54 0
## attr(,"assign")
## [1] 0 1 2 3
## attr(,"msgScaleX")
## character(0)
## Field "Xwts":
## [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## Field "Zt":
## 12 x 29 sparse Matrix of class "dgCMatrix"
## [[ suppressing 29 column names '1', '2', '3' ... ]]
##
## 1:0 . . . . . . . . . . . . . . . 1 1 1 . . . . .
## 1:0 . . . . . . . . . . . . . . . 42 46 50 . . . . .
## 1:1 1 1 1 1 . . . . . . . . . . . . . . . . . . .
## 1:1 46 52 60 44 . . . . . . . . . . . . . . . . . . .
## 2:0 . . . . . . . . . . . . . . . . . . 1 1 1 1 1
## 2:0 . . . . . . . . . . . . . . . . . . 52 54 46 56 48
## 2:1 . . . . 1 1 1 1 1 1 . . . . . . . . . . . . .
## 2:1 . . . . 46 48 50 54 50 48 . . . . . . . . . . . . .
## 3:0 . . . . . . . . . . . . . . . . . . . . . . .
## 3:0 . . . . . . . . . . . . . . . . . . . . . . .
## 3:1 . . . . . . . . . . 1 1 1 1 1 . . . . . . . .
## 3:1 . . . . . . . . . . 52 50 46 50 58 . . . . . . . .
##
## 1:0 . . . . . .
## 1:0 . . . . . .
## 1:1 . . . . . .
## 1:1 . . . . . .
## 2:0 . . . . . .
## 2:0 . . . . . .
## 2:1 . . . . . .
## 2:1 . . . . . .
## 3:0 1 1 1 1 1 1
## 3:0 42 46 44 52 56 54
## 3:1 . . . . . .
## 3:1 . . . . . .
## Field "beta0":
## [1] 0 0 0 0
## Field "delb":
## [1] 4.17849068 -10.32002110 0.64201381 0.02520679
## Field "delu":
## [1] -3.275472 0.000000 -1.355453 0.000000 1.960848 0.000000 -2.623153
## [8] 0.000000 1.314624 0.000000 3.978606 0.000000
## Field "theta":
## [1] 0.25366753 0.02016304 0.00000000
## Field "u0":
## [1] 0 0 0 0 0 0 0 0 0 0 0 0
##
## $optinfo
## $optinfo$optimizer
## [1] "bobyqa"
##
## $optinfo$control
## $optinfo$control$iprint
## [1] 0
##
##
## $optinfo$derivs
## $optinfo$derivs$gradient
## [1] -1.433875e-07 -3.398808e-03 0.000000e+00
##
## $optinfo$derivs$Hessian
## [,1] [,2] [,3]
## [1,] 6.507206 312.3572 0.00000
## [2,] 312.357164 15409.9391 0.00000
## [3,] 0.000000 0.0000 14.36235
##
##
## $optinfo$conv
## $optinfo$conv$opt
## [1] 0
##
## $optinfo$conv$lme4
## list()
##
##
## $optinfo$feval
## [1] 1274
##
## $optinfo$warnings
## list()
##
## $optinfo$val
## [1] 0.25366753 0.02016304 0.00000000
##
##
## $class
## [1] "lmerMod"
## attr(,"package")
## [1] "lme4"
Estimated coefficients can be extracted using coef(). This is useful because it shows the intercept and the Cognition slope for each combination of Room and group (the purely fixed terms are simply repeated across rows).
coef(Model.IR.3A)
## $`Room:Control`
## (Intercept) Control Cognition Control:Cognition
## 1:0 3.347610 -10.32002 0.5759703 0.02520679
## 1:1 3.834656 -10.32002 0.6146838 0.02520679
## 2:0 4.675894 -10.32002 0.6815505 0.02520679
## 2:1 3.513082 -10.32002 0.5891231 0.02520679
## 3:0 4.511968 -10.32002 0.6685206 0.02520679
## 3:1 5.187734 -10.32002 0.7222346 0.02520679
##
## attr(,"class")
## [1] "coef.mer"
Separately, the random effects can be extracted with ranef(); recall these are deviations from the corresponding fixed effects, and here both the intercept and the Cognition slope vary randomly.
ranef(Model.IR.3A)
## $`Room:Control`
## (Intercept) Cognition
## 1:0 -0.8308809 -0.06604347
## 1:1 -0.3438344 -0.02733005
## 2:0 0.4974035 0.03953666
## 2:1 -0.6654087 -0.05289073
## 3:0 0.3334774 0.02650681
## 3:1 1.0092431 0.08022079
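The fixed-effect estimates and the estimated variance components can likewise be pulled out directly; a short sketch:
fixef(Model.IR.3A)    # fixed-effect estimates
VarCorr(Model.IR.3A)  # random-effect standard deviations and correlation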
Predicted values can be extracted using predict(), and the original data can be extracted with model.frame(); here we combine both into a single data frame.
Data.and.Predicted <- cbind(predict(Model.IR.3A), model.frame(Model.IR.3A))
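A quick look at the first few rows of the combined object; the first column holds the model-predicted values and the remaining columns are the variables used in the model.
head(Data.and.Predicted)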
Suppose we wished to plot only Room 1 from the Control group (as is the case for Figure 16.1).
Control.Room1 <- chapter_16_table_4[chapter_16_table_4$Control==1 & chapter_16_table_4$Room==1,]
plot(Inductive ~ Cognition, data=Control.Room1, ylim=c(0, 50), xlim=c(0, 70))
abline(lm(Inductive ~ Cognition, data=Control.Room1))
Now consider Figure 16.2, in which the predicted regression lines are shown for each group/Room. There are multiple ways to do this; we show a simple but effective approach. Note that here we do not extrapolate beyond the observed range of Cognitive Ability, and thus the figure produced here differs from the one in the book. We use only base R functions; other functions and/or loops would produce more compact code.
# Control Group
Control <- chapter_16_table_4[chapter_16_table_4$Control==1, ]
# Separation by Room (Control.Room1 was created above).
Control.Room2 <- Control[Control$Room==2, ]
Control.Room3 <- Control[Control$Room==3, ]
# Treatment Group
Treatment <- chapter_16_table_4[chapter_16_table_4$Control==0, ]
# Separation by Room.
Treatment.Room1 <- Treatment[Treatment$Room==1, ]
Treatment.Room2 <- Treatment[Treatment$Room==2, ]
Treatment.Room3 <- Treatment[Treatment$Room==3, ]
# Null plotting region.
plot(0~0, ylim=c(min(chapter_16_table_4$Inductive), max(chapter_16_table_4$Inductive)),
xlim=c(min(chapter_16_table_4$Cognition), max(chapter_16_table_4$Cognition)),
ylab="Predicted Inductive Reasoning", xlab="Cognitive Ability", type="n")
## Warning in plot.formula(0 ~ 0, ylim =
## c(min(chapter_16_table_4$Inductive), : the formula '0 ~ 0' is treated as '0
## ~ 1'
# Control group
lines(Control.Room1$Cognition, predict(lm(Inductive ~ Cognition, data=Control.Room1)), lty=2)
lines(Control.Room2$Cognition, predict(lm(Inductive ~ Cognition, data=Control.Room2)), lty=2)
lines(Control.Room3$Cognition, predict(lm(Inductive ~ Cognition, data=Control.Room3)), lty=2)
# Treatment group
lines(Treatment.Room1$Cognition, predict(lm(Inductive ~ Cognition, data=Treatment.Room1)))
lines(Treatment.Room2$Cognition, predict(lm(Inductive ~ Cognition, data=Treatment.Room2)))
lines(Treatment.Room3$Cognition, predict(lm(Inductive ~ Cognition, data=Treatment.Room3)))
legend(x=45, y=45, legend=c("Control Group", "Treatment Group"), lty=c(2, 1), bty="n")