Chapter 8 - Conditional Manatees

This chapter introduced interactions, which allow for the association between a predictor and an outcome to depend upon the value of another predictor. While you can’t see them in a DAG, interactions can be important for making accurate inferences. Interactions can be difficult to interpret, and so the chapter also introduced triptych plots that help in visualizing the effect of an interaction. No new coding skills were introduced, but the statistical models considered were among the most complicated so far in the book.

Place each answer inside the code chunk (grey box). Each code chunk should contain a text response or code that completes or answers the question or activity. Make sure to include plots if the question requests them. Problems are labeled Easy (E), Medium (M), and Hard (H).

Finally, upon completion, name your final output .html file YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.

Questions

8E1. For each of the causal relationships below, name a hypothetical third variable that would lead to an interaction effect:

  1. Bread dough rises because of yeast.
  2. Education leads to higher income.
  3. Gasoline makes a car go.
# 1. Temperature. Yeast is active only within a certain temperature range, so how much the dough rises depends on both yeast and temperature.
# 2. Age. The income return on education can depend on age: the same level of education tends to yield higher income later in a career.
# 3. Engine condition. With a well-tuned, fuel-efficient engine, the same amount of gasoline makes the car go farther.

8E2. Which of the following explanations invokes an interaction?

  1. Caramelizing onions requires cooking over low heat and making sure the onions do not dry out.
  2. A car will go faster when it has more cylinders or when it has a better fuel injector.
  3. Most people acquire their political beliefs from their parents, unless they get them instead from their friends.
  4. Intelligent animal species tend to be either highly social or have manipulative appendages (hands, tentacles, etc.).
# I believe 1, 2, and 4 invoke an interaction.
# For 1, low heat and moisture interact: caramelizing requires both at the same time.
# For 2, the number of cylinders and the quality of the fuel injector interact to determine the car's speed.
# For 3, parents and friends do not interact; it is an either/or relationship.
# For 4, sociality and manipulative appendages interact to determine an animal's level of intelligence.

8E3. For each of the explanations in 8E2, write a linear model that expresses the stated relationship.

# 1. μ_i = α + β_W*W_i + β_T*T_i + β_WT*W_i*T_i   (W = moisture, T = low heat)
# 2. μ_i = α + β_C*C_i + β_F*F_i + β_CF*C_i*F_i   (C = cylinders, F = fuel injector)
# 3. μ_i = α + β_P*P_i + β_F*F_i                  (P = parents, F = friends; no interaction)
# 4. μ_i = α + β_S*S_i + β_A*A_i + β_SA*S_i*A_i   (S = sociality, A = appendages)
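# To make the formula-to-code mapping concrete, here is a minimal sketch of
# how model 1 could be fit with quap, using simulated, hypothetical data
# (the variable names yeast, temp, and rise are made up for illustration):
library(rethinking)
set.seed(8)
N <- 100
yeast <- rnorm(N)
temp <- rnorm(N)
rise <- rnorm(N, 0.5 + 0.3*yeast + 0.2*temp + 0.4*yeast*temp, 0.2)
d_sim <- data.frame(rise, yeast, temp)
m_8E3_1 <- quap(
  alist(
    rise ~ dnorm(mu, sigma),
    mu <- a + bY*yeast + bT*temp + bYT*yeast*temp,  # bYT carries the interaction
    a ~ dnorm(0, 1),
    bY ~ dnorm(0, 1),
    bT ~ dnorm(0, 1),
    bYT ~ dnorm(0, 1),
    sigma ~ dexp(1)
  ), data = d_sim
)
precis(m_8E3_1)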

8M1. Recall the tulips example from the chapter. Suppose another set of treatments adjusted the temperature in the greenhouse over two levels: cold and hot. The data in the chapter were collected at the cold temperature. You find none of the plants grown under the hot temperature developed any blooms at all, regardless of the water and shade levels. Can you explain this result in terms of interactions between water, shade, and temperature?

# The result tells us that the relationships between blooms and water, and between blooms and shade, depend on temperature: under the hot treatment, no combination of water and shade produces any blooms. Capturing this requires a three-way interaction (water × shade × temperature) together with the three two-way interactions (water × shade, water × temperature, shade × temperature).

8M2. Can you invent a regression equation that would make the bloom size zero, whenever the temperature is hot?

# The regression equation would be:
# μ_i = α + β_W*W_i + β_S*S_i + β_T*T_i + β_WS*W_i*S_i + β_WT*W_i*T_i + β_ST*S_i*T_i + β_WST*W_i*S_i*T_i

# Let T_i = 1 indicate the hot treatment. Setting T_i = 1 and grouping terms by coefficient:
# μ_i | T=1 = (α + β_T) + (β_W + β_WT)*W_i + (β_S + β_ST)*S_i + (β_WS + β_WST)*W_i*S_i

# To make μ_i = 0 whenever it is hot, each grouped coefficient must cancel:
# β_T = -α, β_WT = -β_W, β_ST = -β_S, β_WST = -β_WS. Then:
# μ_i | T=1 = (α - α) + (β_W - β_W)*W_i + (β_S - β_S)*S_i + (β_WS - β_WS)*W_i*S_i

# So regardless of the values of W_i and S_i, when T_i = 1 we get μ_i = 0.
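# A quick numeric check of the constraints above, using arbitrary
# hypothetical coefficient values:
a <- 0.5; bW <- 0.3; bS <- -0.2; bWS <- -0.1
bT <- -a; bWT <- -bW; bST <- -bS; bWST <- -bWS
mu <- function(W, S, T)
  a + bW*W + bS*S + bT*T + bWS*W*S + bWT*W*T + bST*S*T + bWST*W*S*T
grid <- expand.grid(W = -1:1, S = -1:1)
mu(grid$W, grid$S, T = 1)  # all zeros: hot temperature zeroes the blooms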

8M4. Repeat the tulips analysis, but this time use priors that constrain the effect of water to be positive and the effect of shade to be negative. Use prior predictive simulation. What do these prior assumptions mean for the interaction prior, if anything? Visualize the prior simulation.

library(rethinking)
data(tulips)
d <- tulips

d$blooms_std <- d$blooms/max(d$blooms)
d$water_cent <- d$water - mean(d$water)
d$shade_cent <- d$shade - mean(d$shade)

m_8M4 <- quap(
  alist(
    blooms_std ~ dnorm(mu, sigma),
    # the minus sign on bS builds in the assumed negative direction for shade
    mu <- a + bW*water_cent - bS*shade_cent,
    a ~ dnorm(0.5, 0.25),
    bW ~ dnorm(0, 0.25),
    bS ~ dnorm(0, 0.25),
    sigma ~ dexp(1)
  ), data = d
)
precis(m_8M4)
##          mean         sd       5.5%     94.5%
## a     0.3587570 0.03021863 0.31046176 0.4070522
## bW    0.2050402 0.03688922 0.14608413 0.2639963
## bS    0.1125327 0.03687540 0.05359867 0.1714667
## sigma 0.1581531 0.02144332 0.12388255 0.1924237
# From the table, water has a positive effect on blooms (bW ≈ 0.21) and shade, through the minus sign in the model, a negative one (-bS ≈ -0.11). Because this model contains no interaction term, it implies the effect of water is the same at every shade level. Let's visualize it.
par(mfrow = c(1, 3))
for (i in -1:1) {
  idx <- which(d$shade_cent == i)
  plot(d$water_cent[idx], d$blooms_std[idx], xlim = c(-1, 1), ylim = c(0, 1),
       xlab = "water", ylab = "blooms", pch = 1)
  mu <- link(m_8M4, data = data.frame(shade_cent = i, water_cent = -1:1))
  for (j in 1:10) lines(-1:1, mu[j, ], col = col.alpha("blue", 0.3))
}
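# The question also asks for a prior predictive simulation. Below is a
# minimal sketch using rethinking's extract.prior(): sample regression
# lines implied by the priors alone, before the model sees the data.
set.seed(8)
par(mfrow = c(1, 1))
prior <- extract.prior(m_8M4, n = 20)
mu_prior <- link(m_8M4, post = prior,
                 data = data.frame(water_cent = -1:1, shade_cent = 0))
plot(NULL, xlim = c(-1, 1), ylim = c(-0.5, 1.5),
     xlab = "water", ylab = "blooms")
for (j in 1:20) lines(-1:1, mu_prior[j, ], col = col.alpha("black", 0.3))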

8H1. Return to the data(tulips) example in the chapter. Now include the bed variable as a predictor in the interaction model. Don’t interact bed with the other predictors; just include it as a main effect. Note that bed is categorical. So to use it properly, you will need to either construct dummy variables or rather an index variable, as explained in Chapter 5.

# To handle bed, we can use coerce_index() to convert it into an integer index variable.
d$bed_idx <- coerce_index(d$bed)

# Now let's fit the model
m_8H1 <- quap(
  alist(
    blooms ~ dnorm(mu, sigma),
    mu <- a[bed_idx] + bW*water_cent + bS*shade_cent + bWS*water_cent*shade_cent,
    a[bed_idx] ~ dnorm(130, 100),
    bW ~ dnorm(0, 100),
    bS ~ dnorm(0, 100),
    bWS ~ dnorm(0, 100),
    sigma ~ dunif(0, 100)
  ), data = d
)
precis(m_8H1, depth = 2)
##            mean        sd      5.5%     94.5%
## a[1]   97.77061 12.941687  77.08730 118.45393
## a[2]  142.30962 12.940727 121.62784 162.99140
## a[3]  146.98171 12.940768 126.29986 167.66356
## bW     75.12989  9.190669  60.44142  89.81835
## bS    -41.28540  9.189282 -55.97165 -26.59915
## bWS   -52.19180 11.231751 -70.14230 -34.24129
## sigma  39.15077  5.323386  30.64297  47.65857
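# Comparing the intercepts, bed a (a[1]) averaged noticeably fewer blooms than beds b and c (a[2] and a[3]), so the bed main effect captures real between-bed variation.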

8H5. Consider the data(Wines2012) data table. These data are expert ratings of 20 different French and American wines by 9 different French and American judges. Your goal is to model score, the subjective rating assigned by each judge to each wine. I recommend standardizing it. In this problem, consider only variation among judges and wines. Construct index variables of judge and wine and then use these index variables to construct a linear regression model. Justify your priors. You should end up with 9 judge parameters and 20 wine parameters. Plot the parameter estimates. How do you interpret the variation among individual judges and individual wines? Do you notice any patterns, just by plotting the differences? Which judges gave the highest/lowest ratings? Which wines were rated worst/best on average?

data(Wines2012)
d <- Wines2012

# Now let's standardize it
d2 <- data.frame(
  s = standardize(d$score),
  j = as.integer(d$judge),
  w = as.integer(d$wine)
)

# Now we use ulam to construct the model. Since the score is standardized,
# Normal(0, 0.5) priors keep most judge and wine effects within about one
# standard deviation of the mean score, a mildly regularizing assumption.
m_8H5 <- ulam(
  alist(
    s ~ dnorm(mu, sigma),
    mu <- a[j] + b[w],
    a[j] ~ dnorm(0, 0.5),  # judge effects
    b[w] ~ dnorm(0, 0.5),  # wine effects
    sigma ~ dexp(1)
  ), data = d2, chains = 4, cores = 4
)

precis(m_8H5, depth = 2)
##               mean         sd        5.5%       94.5%    n_eff     Rhat4
## a[1]  -0.271820118 0.19052192 -0.57202241  0.03876149 2116.508 0.9998608
## a[2]   0.216660811 0.19415264 -0.08506901  0.53186486 2044.219 1.0019590
## a[3]   0.213477247 0.19306079 -0.08254058  0.53322940 1869.343 0.9995725
## a[4]  -0.537144863 0.19735161 -0.85016568 -0.21788069 1815.639 1.0000450
## a[5]   0.801933884 0.19470721  0.49306506  1.11289452 2185.435 0.9982964
## a[6]   0.481509285 0.19914271  0.16856130  0.81244674 1933.240 0.9991550
## a[7]   0.137586448 0.19609297 -0.18333735  0.46523099 2145.482 0.9995201
## a[8]  -0.651781867 0.19595105 -0.97198809 -0.33958032 2030.163 0.9990111
## a[9]  -0.344179013 0.18911771 -0.64951937 -0.03734118 2298.559 0.9989452
## b[1]   0.110390978 0.24930022 -0.30320971  0.51284726 2725.928 0.9992420
## b[2]   0.081913603 0.25744939 -0.32182726  0.49469946 2573.527 0.9996519
## b[3]   0.222631731 0.25703537 -0.18644450  0.63313700 2435.599 1.0002326
## b[4]   0.456835959 0.26012057  0.03062460  0.85117975 2720.163 0.9990376
## b[5]  -0.107001055 0.26848269 -0.53015317  0.32398956 3588.629 0.9997020
## b[6]  -0.314904190 0.25298166 -0.70641220  0.09362675 2510.403 0.9994947
## b[7]   0.238365675 0.25883644 -0.16278802  0.66059417 3147.104 1.0003738
## b[8]   0.227054389 0.26362425 -0.19034704  0.64502894 2938.200 1.0001296
## b[9]   0.061544329 0.25724525 -0.34492707  0.47431679 2550.598 0.9995535
## b[10]  0.100749744 0.26650426 -0.32951183  0.51765029 2705.145 0.9994127
## b[11] -0.017597377 0.26905445 -0.45413469  0.39424550 3338.556 0.9991586
## b[12] -0.026801072 0.25806272 -0.44634113  0.37376187 3351.566 1.0001947
## b[13] -0.092699644 0.26556527 -0.52062792  0.33203363 3408.625 0.9988939
## b[14] -0.001432656 0.26030854 -0.40825985  0.41102682 3314.332 0.9990967
## b[15] -0.194761390 0.25951730 -0.60448047  0.23638157 3010.236 0.9992362
## b[16] -0.168898488 0.25962336 -0.57850196  0.25513668 2995.312 0.9991575
## b[17] -0.123320346 0.25101997 -0.52058642  0.27354679 2878.833 0.9991023
## b[18] -0.729482740 0.24777156 -1.11939981 -0.33615673 2809.426 0.9986210
## b[19] -0.139466860 0.25591042 -0.54968008  0.28708881 3799.059 0.9983789
## b[20]  0.308731596 0.26485818 -0.13592705  0.71153194 3092.313 0.9989049
## sigma  0.846180329 0.04687895  0.77555394  0.92462092 2977.464 0.9987526
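# The question asks to plot the parameter estimates. One minimal way is to
# call plot() on the precis object, which draws a point-and-interval chart
# of all 29 parameters:
plot(precis(m_8H5, depth = 2))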
# We can also use traceplot to check the sampling behavior of the chains
traceplot(m_8H5)

# In the plots, a[1]-a[9] are the 9 judges and b[1]-b[20] are the 20 wines.

# Each parameter is an average deviation from the overall mean standardized score. Judges with higher a values were more generous on average (they gave higher scores), while judges with lower values were harsher. Likewise, wines with higher b values were scored higher on average across all judges, and wines with lower values scored worse.

# According to the estimates, judge a[5] gave the highest ratings and judge a[8] the lowest. Wine b[4] was rated best on average, while wine b[18] was rated worst.
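# To map the index parameters back to labels (a sketch; this assumes judge
# and wine are factors, so the factor levels line up with the integer
# indices constructed above):
levels(d$judge)[c(5, 8)]  # judges with the highest and lowest average scores
levels(d$wine)[c(4, 18)]  # wines rated best and worst on average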