Chapter 8 - Conditional Manatees

This chapter introduced interactions, which allow for the association between a predictor and an outcome to depend upon the value of another predictor. While you can’t see them in a DAG, interactions can be important for making accurate inferences. Interactions can be difficult to interpret, and so the chapter also introduced triptych plots that help in visualizing the effect of an interaction. No new coding skills were introduced, but the statistical models considered were among the most complicated so far in the book.

Place each answer inside the code chunk (grey box). Each code chunk should contain a text response or code that completes or answers the question or activity requested. Make sure to include plots if the question requests them. Problems are labeled Easy (E), Medium (M), and Hard (H).

Finally, upon completion, name your final output .html file as YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.

Questions

8E1. For each of the causal relationships below, name a hypothetical third variable that would lead to an interaction effect:

  1. Bread dough rises because of yeast.
  2. Education leads to higher income.
  3. Gasoline makes a car go.
#1. Bread dough rises because of yeast - temperature: the same amount of yeast raises the dough quickly when it is warm and hardly at all when it is cold, so the effect of yeast depends on temperature.
#2. Education leads to higher income - but the effect can be substantially altered by emotional intelligence, so the return on education depends on it.
#3. Gasoline makes a car go - only if there is a driver: with no one to drive the car, adding gasoline does not make it go.

8E2. Which of the following explanations invokes an interaction?

  1. Caramelizing onions requires cooking over low heat and making sure the onions do not dry out.
  2. A car will go faster when it has more cylinders or when it has a better fuel injector.
  3. Most people acquire their political beliefs from their parents, unless they get them instead from their friends.
  4. Intelligent animal species tend to be either highly social or have manipulative appendages (hands, tentacles, etc.).
# Statements 1 and 4 invoke interactions.
# 1. Caramelizing onions requires BOTH (1) low heat AND (2) making sure the
#    onions do not dry out: the effect of heat on caramelization depends on
#    the onions' moisture.
# 4. Intelligent animal species tend to be either (1) highly social or (2) have
#    manipulative appendages (hands, tentacles, etc.): the association between
#    sociality and intelligence depends on whether the species also has
#    manipulative appendages.

8E3. For each of the explanations in 8E2, write a linear model that expresses the stated relationship.

# 1. μ_i = α + β_H*H_i + β_D*D_i + β_HD*H_i*D_i   (H = heat, D = dryness)
# 2. μ_i = α + β_C*C_i + β_F*F_i                   (C = cylinders, F = fuel injector)
# 3. μ_i = α + β_P*P_i + β_F*F_i                   (P = parents, F = friends)
# 4. μ_i = α + β_S*S_i + β_M*M_i + β_SM*S_i*M_i    (S = social, M = manipulative appendages)
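# To make the notation concrete, here is model 1 written in the book's quap()
# syntax and fit to simulated data. This is only an illustrative sketch: the
# data frame, variable names (H, D, rise), and coefficient values are made up.
library(rethinking)
set.seed(8)
dough <- data.frame(H = rnorm(60), D = rnorm(60))  # hypothetical heat and dryness scores
dough$rise <- rnorm(60, 1 + 0.5*dough$H - 0.3*dough$D - 0.4*dough$H*dough$D, 0.2)
m_dough <- quap(
  alist(
    rise ~ dnorm(mu, sigma),
    mu <- a + bH*H + bD*D + bHD*H*D,   # interaction term from model 1 above
    a ~ dnorm(0, 1),
    bH ~ dnorm(0, 1),
    bD ~ dnorm(0, 1),
    bHD ~ dnorm(0, 1),
    sigma ~ dexp(1)
  ), data = dough)
precis(m_dough)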

8M1. Recall the tulips example from the chapter. Suppose another set of treatments adjusted the temperature in the greenhouse over two levels: cold and hot. The data in the chapter were collected at the cold temperature. You find none of the plants grown under the hot temperature developed any blooms at all, regardless of the water and shade levels. Can you explain this result in terms of interactions between water, shade, and temperature?

# Temperature interacts with both water and shade: the effects of water and of
# shade on blooms each depend on temperature. Since no plant bloomed under the
# hot treatment regardless of water and shade, hot temperature effectively
# switches those effects off, so the model needs the two-way interactions with
# temperature and the three-way interaction among water, shade, and temperature.

8M2. Can you invent a regression equation that would make the bloom size zero, whenever the temperature is hot?

# Start from the model with all main effects and interactions:

# μ_i = α + β_W*W_i + β_S*S_i + β_T*T_i + β_WT*W_i*T_i + β_ST*S_i*T_i + β_WS*W_i*S_i + β_WST*W_i*S_i*T_i

# The first three terms are the main effects of water, shade, and temperature;
# the next three are the two-way interactions (water × temperature,
# shade × temperature, water × shade); the last term is the three-way
# interaction among water, shade, and temperature.

# For bloom size to be zero (μ_i = 0) whenever the temperature is hot (T_i = 1),
# the temperature terms must exactly cancel the cold-temperature terms:
# β_T = -α, β_WT = -β_W, β_ST = -β_S, β_WST = -β_WS. Substituting these in:

# μ_i = α + β_W*W_i + β_S*S_i + β_WS*W_i*S_i - α*T_i - β_W*W_i*T_i - β_S*S_i*T_i - β_WS*W_i*S_i*T_i
#     = (α + β_W*W_i + β_S*S_i + β_WS*W_i*S_i) * (1 - T_i)

# so the usual model applies when it is cold (T_i = 0) and μ_i = 0 when it is hot.
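# A minimal numerical check of the equation above (the coefficient values and
# the grid of predictor levels are arbitrary, chosen only for illustration):
a <- 0.3; bW <- 0.2; bS <- -0.1; bWS <- -0.15
grid <- expand.grid(W = -1:1, S = -1:1, Temp = 0:1)
mu <- (a + bW*grid$W + bS*grid$S + bWS*grid$W*grid$S) * (1 - grid$Temp)
all(mu[grid$Temp == 1] == 0)   # TRUE: bloom size is zero whenever it is hot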

8M4. Repeat the tulips analysis, but this time use priors that constrain the effect of water to be positive and the effect of shade to be negative. Use prior predictive simulation. What do these prior assumptions mean for the interaction prior, if anything? Visualize the prior simulation.

library(rethinking)
data(tulips)

df <- tulips

df$blooms_std = df$blooms / max(df$blooms)
df$shade_cent = df$shade - mean(df$shade)
df$water_cent = df$water - mean(df$water)

m1 <- quap(
  alist(
    blooms_std ~ dnorm(mu, sigma),
    mu <- a + bs*shade_cent + bw*water_cent + bsw*shade_cent*water_cent,
    a ~ dnorm(0.5, 0.25),
    bs ~ dnorm(0, 0.25),
    bw ~ dnorm(0.5, 0.25),
    bsw ~ dnorm(0, 0.25),
    sigma ~ dexp(1)
  ), data = df)

precis(m1)
##             mean         sd        5.5%       94.5%
## a      0.3579837 0.02392178  0.31975204  0.39621528
## bs    -0.1134620 0.02923110 -0.16017892 -0.06674504
## bw     0.2135621 0.02924738  0.16681917  0.26030510
## bsw   -0.1431621 0.03568400 -0.20019199 -0.08613214
## sigma  0.1248605 0.01694698  0.09777591  0.15194502
# The interaction coefficient bsw has a clearly negative posterior mean: the
# positive effect of water on blooms shrinks as shade increases, and the
# negative effect of shade weakens as water increases. Note that the Normal
# priors above only favor a positive water effect and leave the shade prior
# centered at zero; they do not strictly constrain the signs (that would
# require, for example, log-Normal priors).
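# The question also asks to visualize a prior predictive simulation. One way to
# do this for the model above is a triptych of prior regression lines (the seed,
# number of lines, and axis limits here are arbitrary choices):
set.seed(8)
prior <- extract.prior(m1)
par(mfrow = c(1, 3))
for (s in -1:1) {
  plot(NULL, xlim = c(-1, 1), ylim = c(-0.5, 1.5),
       xlab = "water (centered)", ylab = "blooms (std)",
       main = paste("shade =", s))
  mu <- link(m1, post = prior,
             data = data.frame(water_cent = -1:1, shade_cent = s))
  for (i in 1:20) lines(-1:1, mu[i, ], col = col.alpha("black", 0.3))
}
# Many prior lines still slope the "wrong" way (blooms falling with water or
# rising with shade), which confirms that these priors only nudge, rather than
# force, the water effect to be positive and the shade effect to be negative.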

8H1. Return to the data(tulips) example in the chapter. Now include the bed variable as a predictor in the interaction model. Don’t interact bed with the other predictors; just include it as a main effect. Note that bed is categorical. So to use it properly, you will need to either construct dummy variables or rather an index variable, as explained in Chapter 5.

data(tulips)
d <- tulips

d$shade.c <- d$shade - mean(d$shade)
d$water.c <- d$water - mean(d$water)

# Dummy variables
d$bedb <- d$bed == "b"
d$bedc <- d$bed == "c"

# Index variable
d$bedx <- coerce_index(d$bed)

#with dummy variables
m_dummy <- quap(
  alist(
    blooms ~ dnorm(mu, sigma),
    mu <- a + bW*water.c + bS*shade.c + bWS*water.c*shade.c + bBb*bedb + bBc*bedc,
    a ~ dnorm(0.5, 0.25),
    bW ~ dnorm(0.5, 0.25),
    bS ~ dnorm(0, 0.25),
    bWS ~ dnorm(0, 0.25),
    bBb ~ dnorm(0, 0.25),
    bBc ~ dnorm(0, 0.25),
    sigma ~ dexp(1)
  ),
  data = d
)

precis(m_dummy)
##               mean        sd       5.5%      94.5%
## a      0.534507540 0.2499991  0.1349607  0.9340543
## bW     0.513459292 0.2499826  0.1139388  0.9129798
## bS    -0.007438469 0.2499791 -0.4069534  0.3920765
## bWS   -0.006307945 0.2499862 -0.4058341  0.3932182
## bBb    0.012703425 0.2499932 -0.3868341  0.4122409
## bBc    0.013128775 0.2499935 -0.3864092  0.4126667
## sigma 79.254262015 4.6407902 71.8373830 86.6711411
#with index variable
m_index <- quap(
  alist(
    blooms ~ dnorm(mu, sigma),
    mu <- a[bedx] + bW*water.c + bS*shade.c + bWS*water.c*shade.c,
    a[bedx] ~ dnorm(130, 100),
    bW ~ dnorm(0, 100),
    bS ~ dnorm(0, 100),
    bWS ~ dnorm(0, 100),
    sigma ~ dunif(0, 200)
  ),
  data = d
)

precis(m_index, depth = 2)
## Warning in sqrt(diag(vcov(model))): NaNs produced

## Warning in sqrt(diag(vcov(model))): NaNs produced

## Warning in sqrt(diag(vcov(model))): NaNs produced
##            mean       sd         5.5%      94.5%
## a[1]  197.37412 28.65009  151.5857439 243.162498
## a[2]  167.06910 43.58426   97.4130398 236.725163
## a[3]  149.84798 44.35363   78.9623162 220.733638
## bW     50.82412 31.68282    0.1888645 101.459381
## bS    -49.69154 32.89714 -102.2675196   2.884436
## bWS   -55.25910 39.39143 -118.2142107   7.696009
## sigma 148.49960      NaN          NaN        NaN
coeftab(m_dummy, m_index)
## Warning in sqrt(diag(vcov(model))): NaNs produced
##       m_dummy m_index
## a        0.53      NA
## bW       0.51   50.82
## bS      -0.01  -49.69
## bWS     -0.01  -55.26
## bBb      0.01      NA
## bBc      0.01      NA
## sigma   79.25  148.50
## a[1]       NA  197.37
## a[2]       NA  167.07
## a[3]       NA  149.85
## nobs       27      27
# The two parameterizations tell the same qualitative story, but the numbers
# are not directly comparable: the dummy-variable model was fit to raw blooms
# (roughly 0-360) with narrow priors meant for a standardized outcome, so its
# coefficients simply echo the priors, while the index-variable model uses wide
# priors and returns estimates on the data scale. (The NaN warnings indicate
# that quap had trouble computing a standard deviation for sigma in the index
# model.)
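# A cleaner comparison is possible if the bed model is also fit on the
# standardized blooms scale. A sketch, reusing the chapter's standardized-scale
# priors (this refit and the contrast names below are my own additions):
d$blooms_std <- d$blooms / max(d$blooms)
m_index_std <- quap(
  alist(
    blooms_std ~ dnorm(mu, sigma),
    mu <- a[bedx] + bW*water.c + bS*shade.c + bWS*water.c*shade.c,
    a[bedx] ~ dnorm(0.5, 0.25),
    bW ~ dnorm(0, 0.25),
    bS ~ dnorm(0, 0.25),
    bWS ~ dnorm(0, 0.25),
    sigma ~ dexp(1)
  ), data = d)
precis(m_index_std, depth = 2)
# Posterior contrasts between the bed intercepts (indices follow the coding
# produced by coerce_index above):
post <- extract.samples(m_index_std)
precis(data.frame(bed1_minus_bed2 = post$a[, 1] - post$a[, 2],
                  bed1_minus_bed3 = post$a[, 1] - post$a[, 3],
                  bed2_minus_bed3 = post$a[, 2] - post$a[, 3]))
# Positive contrast values mean the first bed in each pair blooms more on
# average, after accounting for water, shade, and their interaction.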

8H5. Consider the data(Wines2012) data table. These data are expert ratings of 20 different French and American wines by 9 different French and American judges. Your goal is to model score, the subjective rating assigned by each judge to each wine. I recommend standardizing it. In this problem, consider only variation among judges and wines. Construct index variables of judge and wine and then use these index variables to construct a linear regression model. Justify your priors. You should end up with 9 judge parameters and 20 wine parameters. Plot the parameter estimates. How do you interpret the variation among individual judges and individual wines? Do you notice any patterns, just by plotting the differences? Which judges gave the highest/lowest ratings? Which wines were rated worst/best on average?

data(Wines2012)
d1 = Wines2012
d1list <- data.frame(
  s = standardize(d1$score),
  wine = as.integer(d1$wine),
  judge = as.integer(d1$judge)
)
m <- ulam(
  alist(
    s ~ dnorm(mu, sigma),
    mu <- j[judge] + w[wine],
    w[wine] ~ dnorm(0, 0.5),
    j[judge] ~ dnorm(0, 0.5),
    sigma ~ dexp(1)
  ),
  data = d1list,
  chains = 4,
  cores = 4)

precis(m, 2)
##               mean         sd        5.5%       94.5%    n_eff     Rhat4
## w[1]   0.123037099 0.24970876 -0.28636200  0.50365309 2747.151 0.9990077
## w[2]   0.093567791 0.25118334 -0.30243606  0.49827627 2841.415 1.0005340
## w[3]   0.234158470 0.25844922 -0.18545819  0.66029242 2720.944 1.0000660
## w[4]   0.476656601 0.24834348  0.09178806  0.88047098 3102.181 0.9991041
## w[5]  -0.094161806 0.26795229 -0.50384959  0.33129015 2727.152 0.9996681
## w[6]  -0.303820358 0.26025421 -0.71377446  0.10933899 3225.046 1.0001212
## w[7]   0.251074925 0.25879893 -0.15443277  0.66359548 3706.381 0.9986737
## w[8]   0.234032214 0.26308362 -0.17489390  0.64526823 2638.769 0.9998189
## w[9]   0.076979279 0.26984210 -0.35340800  0.50659536 3431.932 0.9995824
## w[10]  0.108590864 0.24846178 -0.28804306  0.50450870 3136.925 0.9998466
## w[11] -0.004585952 0.25239802 -0.41426084  0.39406371 2635.361 1.0014490
## w[12] -0.020902744 0.26415911 -0.45618565  0.39266148 2736.306 0.9986780
## w[13] -0.083417024 0.25545015 -0.49035684  0.31440249 3447.112 0.9992003
## w[14]  0.018091138 0.24907662 -0.38242986  0.41104131 2334.299 0.9993029
## w[15] -0.176102445 0.25904750 -0.59005840  0.23711779 3171.045 1.0006566
## w[16] -0.158677530 0.26065012 -0.57367262  0.24372810 3466.267 0.9989872
## w[17] -0.104244934 0.24814487 -0.50152049  0.28366765 2815.401 1.0002537
## w[18] -0.714206430 0.25752630 -1.13059350 -0.30288370 2947.672 0.9997100
## w[19] -0.132279289 0.25378763 -0.53617668  0.27297176 3508.261 0.9990844
## w[20]  0.330717444 0.26052944 -0.08754159  0.75056193 3370.739 0.9985608
## j[1]  -0.288385075 0.19092601 -0.59157210  0.01852876 1813.294 1.0026209
## j[2]   0.202416313 0.19312538 -0.09635571  0.51331126 1985.398 1.0005460
## j[3]   0.195817736 0.19855211 -0.12221092  0.52048206 2052.192 1.0014109
## j[4]  -0.551184310 0.19629814 -0.85757717 -0.23985149 2383.731 0.9998109
## j[5]   0.787045394 0.19676257  0.47712588  1.09194182 1776.810 1.0010580
## j[6]   0.473778988 0.19116301  0.16521940  0.78926917 2187.584 0.9999561
## j[7]   0.118599516 0.20356267 -0.22242280  0.43548522 1880.926 0.9998388
## j[8]  -0.665599497 0.19618785 -0.99012254 -0.36029190 2628.893 0.9987474
## j[9]  -0.357888936 0.19948682 -0.67726212 -0.03276297 2389.800 0.9987770
## sigma  0.847728022 0.05019249  0.77186264  0.93135468 3381.606 0.9999994
traceplot(m)
## [1] 1000
## [1] 1
## [1] 1000
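# The question asks to plot the parameter estimates themselves; besides the
# traceplot above, the default precis plot gives a quick picture of every wine
# and judge parameter with its 89% interval:
plot(precis(m, depth = 2))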

# The j parameters are the judges. Each is that judge's average deviation in
# standardized score, so judges with lower values were harsher on average and
# judges with higher values liked the wines more on average. There is noticeable
# variation here; it is fairly easy to tell the judges apart.

# The w parameters are the wines. Each is a wine's average standardized score
# across all judges. Except for wine 18 (a New Jersey red, I think), there is
# not much variation among them. These are good wines, after all. Overall, there
# is more variation among judges than among wines.
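# A quick way to pull the extremes out of the posterior (this just confirms
# what the precis table above shows: judge 8 gave the lowest and judge 5 the
# highest ratings on average, while wine 18 was rated worst and wine 4 best):
post <- extract.samples(m)
j_mean <- colMeans(post$j)
w_mean <- colMeans(post$w)
c(lowest_judge = which.min(j_mean), highest_judge = which.max(j_mean))
c(lowest_wine = which.min(w_mean), highest_wine = which.max(w_mean))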