8E1. For each of the causal relationships below, name a hypothetical third variable that would lead to an interaction effect:

  1. Bread dough rises because of yeast.
  2. Education leads to higher income.
  3. Gasoline makes a car go.
# 1. Baking yeast is inactive below about 40°F and dies above about 130°F, so temperature would interact with yeast when predicting how much the bread dough rises.

# 2. Different industries have different salary ranges, so industry would likely interact with years of education when predicting income.

# 3. The quality of a car's engine affects how efficiently gasoline is converted into motion, so engine quality would likely interact with gasoline in making a car go.

8E2. Which of the following explanations invokes an interaction?

  1. Caramelizing onions requires cooking over low heat and making sure the onions do not dry out.
  2. A car will go faster when it has more cylinders or when it has a better fuel injector.
  3. Most people acquire their political beliefs from their parents, unless they get them instead from their friends.
  4. Intelligent animal species tend to be either highly social or have manipulative appendages (hands, tentacles, etc.).
# 1. This explanation invokes an interaction between heat and dryness in predicting onion caramelization. Moreover, it predicts that caramelization occurs only when the heat is low and the onions stay moist (dryness is low).

# 2. This explanation invokes main effects of the number of cylinders and of fuel-injector quality on car speed, but not necessarily an interaction. The "or" suggests the two factors are independent routes to a faster car, rather than factors whose effects depend on each other.

# 3. This explanation does invoke an interaction, insofar as people who get their political beliefs from their friends do not get them from their parents: the effect of parents depends on the influence of friends.

# 4. This explanation invokes an interaction between sociality and the possession of manipulative appendages (hands, tentacles, etc.) in predicting a species' intelligence.

# Therefore, explanations 1, 3, and 4 invoke interactions.

8E3. For each of the explanations in 8E2, write a linear model that expresses the stated relationship.

1. Onion caramelization (heat H, dryness D): μi = α + βHHi + βDDi + βHDHiDi
2. Car speed (cylinders C, fuel-injector quality Q, main effects only): μi = α + βCCi + βQQi
3. Political beliefs (parents P, friends F): μi = α + βPPi + βFFi + βPFPiFi
4. Intelligence (sociality S, manipulative appendages A): μi = α + βSSi + βAAi + βSASiAi
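
As a rough sketch of how the last of these might be written in rethinking's model syntax (the outcome and predictor names here are hypothetical, not from the book):

# Hypothetical formula for model 4: intelligence as outcome,
# sociality (S) and appendages (A) as 0/1 predictors, plus their interaction
f <- alist(
  intelligence ~ dnorm(mu, sigma),
  mu <- a + bS*S + bA*A + bSA*S*A,
  a ~ dnorm(0, 1),
  bS ~ dnorm(0, 1),
  bA ~ dnorm(0, 1),
  bSA ~ dnorm(0, 1),
  sigma ~ dexp(1)
)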

8M1. Recall the tulips example from the chapter. Suppose another set of treatments adjusted the temperature in the greenhouse over two levels: cold and hot. The data in the chapter were collected at the cold temperature. You find none of the plants grown under the hot temperature developed any blooms at all, regardless of the water and shade levels. Can you explain this result in terms of interactions between water, shade, and temperature?

An interaction allows the relationship between a predictor and the outcome to depend on the value of another predictor. In this case, the relationship between blooms and water and the relationship between blooms and shade both depend on the value of temperature: under the hot temperature, no combination of water and shade produces any blooms.

With three predictor variables (water, shade, and temperature), the model needs a single three-way interaction as well as the three two-way interactions (WST, WS, WT, and ST).

8M2. Can you invent a regression equation that would make the bloom size zero, whenever the temperature is hot?

The regression equation would be: μi = α + βWWi + βSSi + βTTi + βWSWiSi + βWTWiTi + βSTSiTi + βWSTWiSiTi

Now I need parameter values that force the bloom size to zero whenever the temperature is hot. To see what is required, suppose all predictors are equal to 1; every term then reduces to its coefficient.

μi|Wi=1, Si=1, Ti=1 = α + βW + βS + βT + βWS + βWT + βST + βWST

I want the hot-temperature terms to counteract the effects of α, βW, βS, and βWS whenever Ti=1, making μi = 0. To do this, I make βT counteract α, βWT counteract βW, βST counteract βS, and βWST counteract βWS. I will first reorder the equation above to put each T term directly after whatever it counteracts.

μi|Wi=1, Si=1, Ti=1 = (α + βT) + (βW + βWT) + (βS + βST) + (βWS + βWST)

Then I set βT = −α, βWT = −βW, βST = −βS, and βWST = −βWS.

μi|Wi=1, Si=1, Ti=1 = (α − α) + (βW − βW) + (βS − βS) + (βWS − βWS) = 0

Because every term involving T drops out when Ti=0, the model behaves exactly as it did before temperature was introduced; and whenever Ti=1, the terms cancel pairwise so that μi = 0. Substituting the constraints into the full equation gives:

μi = α + βWWi + βSSi + βWSWiSi − αTi − βWWiTi − βSSiTi − βWSWiSiTi

Using the estimates from the chapter's interaction model (textbook page 232), I can now put exact numbers in:

μi = 129.01 + (74.96)Wi + (−41.14)Si + (−51.87)WiSi + (−129.01)Ti + (−74.96)WiTi + (41.14)SiTi + (51.87)WiSiTi
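
As a quick sanity check (a minimal sketch using the coefficient values above), we can evaluate this equation in R and confirm that μi is zero for every water/shade combination whenever Ti = 1:

# Coefficients from the chapter's model; the temperature terms are
# constructed to counteract them whenever T = 1
a <- 129.01; bW <- 74.96; bS <- -41.14; bWS <- -51.87
mu <- function(W, S, T) {
  a + bW*W + bS*S + bWS*W*S +
    (-a)*T + (-bW)*W*T + (-bS)*S*T + (-bWS)*W*S*T
}
grid <- expand.grid(W = 0:1, S = 0:1, T = 0:1)
cbind(grid, mu = mu(grid$W, grid$S, grid$T))
# every row with T = 1 has mu = 0; rows with T = 0 match the original model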

8M3. In parts of North America, ravens depend upon wolves for their food. This is because ravens are carnivorous but cannot usually kill or open carcasses of prey. Wolves however can and do kill and tear open animals, and they tolerate ravens co-feeding at their kills. This species relationship is generally described as a “species interaction.” Can you invent a hypothetical set of data on raven population size in which this relationship would manifest as a statistical interaction? Do you think the biological interaction could be linear? Why or why not?

Treating raven population size as the outcome, with prey population and wolf population as predictors, the linear model with a prey-by-wolf interaction is:

μi = α + βPPi + βWWi + βPWPiWi

N <- 300 # simulation size
rPW <- 0.6 # correlation between prey and wolf
bP <- 0.3 # regression coefficient for prey
bW <- 0.1 # regression coefficient for wolf
bPW <- 0.5 # regression coefficient for prey-by-wolf interaction
# Simulate data
prey <- rnorm(
  n = N, 
  mean = 0, 
  sd = 1
)
wolf <- rnorm(
  n = N, 
  mean = rPW * prey, 
  sd = sqrt(1 - rPW^2)
)
raven <- rnorm(
  n = N, 
  mean = bP*prey + bW*wolf + bPW*prey*wolf, 
  sd = 1
)
d <- data.frame(raven, prey, wolf)
str(d)
## 'data.frame':    300 obs. of  3 variables:
##  $ raven: num  1.386 0.215 -0.842 0.782 0.285 ...
##  $ prey : num  0.2696 -0.63 0.8687 1.7272 0.0242 ...
##  $ wolf : num  0.495 -0.41 -0.505 1.613 -0.659 ...

To verify the simulation, we can fit the linear model and check whether the estimates are similar to the values we put into the simulation.

library(rethinking)
m <- quap(
  alist(
    raven ~ dnorm(mu, sigma),
    mu <- a + bP*prey + bW*wolf + bPW*prey*wolf,
    a ~ dnorm(0, 1),
    bW ~ dnorm(0, 1),
    bP ~ dnorm(0, 1),
    bPW ~ dnorm(0, 1),
    sigma ~ dunif(0, 2)
  ),
  data = d,
  start = list(a = 0, bP = 0, bW = 0, bPW = 0, sigma = 1)
)
precis(m)
##               mean         sd        5.5%     94.5%
## a     -0.007431273 0.06776259 -0.11572899 0.1008664
## bP     0.194926035 0.07740959  0.07121055 0.3186415
## bW     0.265488354 0.07787497  0.14102911 0.3899476
## bPW    0.482661482 0.05462357  0.39536247 0.5699605
## sigma  1.033540221 0.04219466  0.96610501 1.1009754

The estimates are indeed close to the values used in the simulation, including a sizable interaction coefficient (bPW ≈ 0.48 versus the true 0.5). As for the second question: the biological interaction is unlikely to be truly linear. Ravens gain nothing from additional wolves when there is no prey to kill, and real population dynamics saturate, so the benefit of more wolves should level off rather than grow without bound.

8M4. Repeat the tulips analysis, but this time use priors that constrain the effect of water to be positive and the effect of shade to be negative. Use prior predictive simulation. What do these prior assumptions mean for the interaction prior, if anything?

data(tulips)
d <- tulips
str(d)
## 'data.frame':    27 obs. of  4 variables:
##  $ bed   : Factor w/ 3 levels "a","b","c": 1 1 1 1 1 1 1 1 1 2 ...
##  $ water : int  1 1 1 2 2 2 3 3 3 1 ...
##  $ shade : int  1 2 3 1 2 3 1 2 3 1 ...
##  $ blooms: num  0 0 111 183.5 59.2 ...
d$blooms_std <- d$blooms / max(d$blooms)
d$water_cent <- d$water - mean(d$water)
d$shade_cent <- d$shade - mean(d$shade)
Following the chapter, I first check how much prior probability mass for the intercept falls outside the valid range of standardized blooms (0 to 1), with a wide prior and then a tighter one:

a <- rnorm( 1e4 , 0.5 , 1 ); sum( a < 0 | a > 1 ) / length( a )
## [1] 0.6092
a <- rnorm( 1e4 , 0.5 , 0.25 ); sum( a < 0 | a > 1 ) / length( a )
## [1] 0.0451
m8.4 <- quap(
    alist(
        blooms_std ~ dnorm( mu , sigma ) ,
        mu <- a + bw*water_cent - bs*shade_cent ,
        a ~ dnorm( 0.5 , 0.25 ) ,
        bw ~ dnorm( 0 , 0.25 ) ,
        bs ~ dnorm( 0 , 0.25 ) ,
        sigma ~ dexp( 1 )
) , data=d )

  
m8.5 <- quap(
    alist(
        blooms_std ~ dnorm( mu , sigma ) ,
        mu <- a + bw*water_cent - bs*shade_cent + bws*water_cent*shade_cent ,
        a ~ dnorm( 0.5 , 0.25 ) ,
        bw ~ dnorm( 0 , 0.25 ) ,
        bs ~ dnorm( 0 , 0.25 ) ,
        bws ~ dnorm( 0 , 0.25 ) ,
        sigma ~ dexp( 1 )
) , data=d )
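
Note that the normal priors on bw and bs do not by themselves constrain the signs; the minus sign in front of bs only flips the interpretation. One way to genuinely enforce the constraints is log-normal priors, which are strictly positive. A sketch (the prior parameters here are my assumption, not from the chapter):

m8.5b <- quap(
    alist(
        blooms_std ~ dnorm( mu , sigma ) ,
        mu <- a + bw*water_cent - bs*shade_cent + bws*water_cent*shade_cent ,
        a ~ dnorm( 0.5 , 0.25 ) ,
        bw ~ dlnorm( -2 , 0.5 ) ,  # strictly positive effect of water
        bs ~ dlnorm( -2 , 0.5 ) ,  # strictly positive, subtracted, so shade's effect is negative
        bws ~ dnorm( 0 , 0.25 ) ,
        sigma ~ dexp( 1 )
) , data=d )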


Finally, plot the data with posterior predictions from m8.4 at each shade level:

par(mfrow=c(1,3)) # 3 plots in 1 row
for ( s in -1:1 ) {
    idx <- which( d$shade_cent==s )
    plot( d$water_cent[idx] , d$blooms_std[idx] , xlim=c(-1,1) , ylim=c(0,1) ,
        xlab="water" , ylab="blooms" , pch=16 , col=rangi2 )
    mu <- link( m8.4 , data=data.frame( shade_cent=s , water_cent=-1:1 ) )
    for ( i in 1:20 ) lines( -1:1 , mu[i,] , col=col.alpha("black",0.3) )
}
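
The loop above shows posterior lines. To do the prior predictive simulation the question asks for, we can instead draw lines implied by the priors alone, following the chapter's extract.prior() approach (a sketch for m8.5):

set.seed(7)
prior <- extract.prior(m8.5)
par(mfrow=c(1,3))
for ( s in -1:1 ) {
    plot( NULL , xlim=c(-1,1) , ylim=c(-0.5,1.5) ,
        xlab="water" , ylab="blooms" , main=paste("shade =",s) )
    abline( h=0 , lty=2 )  # valid outcomes lie between the dashed lines
    abline( h=1 , lty=2 )
    mu <- link( m8.5 , post=prior , data=data.frame( shade_cent=s , water_cent=-1:1 ) )
    for ( i in 1:20 ) lines( -1:1 , mu[i,] , col=col.alpha("black",0.3) )
}

As for the interaction prior: requiring water's effect to stay positive at every shade level (and shade's to stay negative at every water level) implies |bws| cannot exceed bw or bs, so the sign constraints justify keeping the interaction prior relatively tight around zero.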

8H1. Return to the data(tulips) example in the chapter. Now include the bed variable as a predictor in the interaction model. Don’t interact bed with the other predictors; just include it as a main effect. Note that bed is categorical. So to use it properly, you will need to either construct dummy variables or rather an index variable, as explained in Chapter 5.

The interaction model without the bed variable is described on textbook page 230.

I can simply extend it by adding either dummy variables (as textbook page 153 describes) or an index variable (as textbook page 158 describes).

d <- tulips
d$shade.c <- d$shade - mean(d$shade)
d$water.c <- d$water - mean(d$water)
# Dummy variables
d$bedb <- d$bed == "b"
d$bedc <- d$bed == "c"
# Index variable
d$bedx <- coerce_index(d$bed)

First, the dummy-variable version:

m_dummy <- map(
  alist(
    blooms ~ dnorm(mu, sigma),
    mu <- a + bW*water.c + bS*shade.c + bWS*water.c*shade.c + bBb*bedb + bBc*bedc,
    a ~ dnorm(130, 100),
    bW ~ dnorm(0, 100),
    bS ~ dnorm(0, 100),
    bWS ~ dnorm(0, 100),
    bBb ~ dnorm(0, 100),
    bBc ~ dnorm(0, 100),
    sigma ~ dunif(0, 100)
  ),
  data = d,
  start = list(a = mean(d$blooms), bW = 0, bS = 0, bWS = 0, bBb = 0, bBc = 0, sigma = sd(d$blooms))
)
precis(m_dummy)
##            mean        sd      5.5%     94.5%
## a      99.36131 12.757521  78.97233 119.75029
## bW     75.12433  9.199747  60.42136  89.82730
## bS    -41.23103  9.198481 -55.93198 -26.53008
## bWS   -52.15060 11.242951 -70.11901 -34.18219
## bBb    42.41139 18.039255  13.58118  71.24160
## bBc    47.03141 18.040136  18.19979  75.86303
## sigma  39.18964  5.337920  30.65862  47.72067

Next, the index-variable version; note the depth = 2 argument to precis(), needed to display the vector of bed-specific intercepts.

m_index <- map(
  alist(
    blooms ~ dnorm(mu, sigma),
    mu <- a[bedx] + bW*water.c + bS*shade.c + bWS*water.c*shade.c,
    a[bedx] ~ dnorm(130, 100),
    bW ~ dnorm(0, 100),
    bS ~ dnorm(0, 100),
    bWS ~ dnorm(0, 100),
    sigma ~ dunif(0, 200)
  ),
  data = d
)
## Caution, model may not have converged.
## Code 1: Maximum iterations reached.
precis(m_index, depth = 2)
## Warning in sqrt(diag(vcov(model))): NaNs produced

## Warning in sqrt(diag(vcov(model))): NaNs produced

## Warning in sqrt(diag(vcov(model))): NaNs produced
##            mean       sd       5.5%      94.5%
## a[1]  109.00659 37.63855   48.85293 169.160255
## a[2]  138.71412 37.87146   78.18820 199.240031
## a[3]  148.87486 37.89426   88.31252 209.437206
## bW    121.19806 20.98925   87.65319 154.742936
## bS    -67.14260 25.85098 -108.45746 -25.827731
## bWS   -59.87407 33.31626 -113.11989  -6.628261
## sigma 122.86229      NaN        NaN        NaN

The two models give broadly similar pictures, with the bed effect appearing as bed-specific intercepts in the index model. However, note the convergence warnings and the NaN standard deviation for sigma: the index model did not converge cleanly, which is why its slope estimates drift away from the dummy-variable model's.
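
A sketch of a re-fit with explicit start values, which should help the optimizer converge (output not shown; the start list mirrors the one used for the dummy-variable model):

m_index2 <- map(
  alist(
    blooms ~ dnorm(mu, sigma),
    mu <- a[bedx] + bW*water.c + bS*shade.c + bWS*water.c*shade.c,
    a[bedx] ~ dnorm(130, 100),
    bW ~ dnorm(0, 100),
    bS ~ dnorm(0, 100),
    bWS ~ dnorm(0, 100),
    sigma ~ dunif(0, 200)
  ),
  data = d,
  # start each bed intercept at the grand mean of blooms
  start = list(a = rep(mean(d$blooms), 3), bW = 0, bS = 0, bWS = 0, sigma = sd(d$blooms))
)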

coeftab(m_dummy, m_index)
## Caution, model may not have converged.
## Code 1: Maximum iterations reached.
## Caution, model may not have converged.
## Code 1: Maximum iterations reached.
## Warning in sqrt(diag(vcov(model))): NaNs produced
##       m_dummy m_index
## a       99.36      NA
## bW      75.12  121.20
## bS     -41.23  -67.14
## bWS    -52.15  -59.87
## bBb     42.41      NA
## bBc     47.03      NA
## sigma   39.19  122.86
## a[1]       NA  109.01
## a[2]       NA  138.71
## a[3]       NA  148.87
## nobs       27      27