Chapter 6 - The Haunted DAG & The Causal Terror

Multiple regression is no oracle, but only a golem. It is logical, but the relationships it describes are conditional associations, not causal influences. Therefore additional information, from outside the model, is needed to make sense of it. This chapter presented introductory examples of some common frustrations: multicollinearity, post-treatment bias, and collider bias. Solutions to these frustrations can be organized under a coherent framework in which hypothetical causal relations among variables are analyzed to cope with confounding. In all cases, causal models exist outside the statistical model and can be difficult to test. However, it is possible to reach valid causal inferences in the absence of experiments. This is good news, because we often cannot perform experiments, both for practical and ethical reasons.

Place each answer inside the code chunk (grey box). Each code chunk should contain a text response or code that completes or answers the question or activity requested. Problems are labeled Easy (E), Medium (M), and Hard (H).

Finally, upon completion, name your final output .html file as YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.

Questions

6E1. List three mechanisms by which multiple regression can produce false inferences about causal effects.

# (1) Multicollinearity
# (2) Post-treatment bias
# (3) Collider bias

6E2. For one of the mechanisms in the previous problem, provide an example of your choice, perhaps from your own research.

# Multicollinearity: When modeling housing prices, many predictors are worth considering, such as location, area, year built, and the number of bedrooms and bathrooms. Area and the number of bedrooms/bathrooms are strongly correlated, because more bedrooms and bathrooms generally mean a larger area. Including both in the same regression therefore produces multicollinearity: the coefficient estimates become very imprecise, even though either predictor alone predicts price well. So it is not a good choice to include both variables in the regression; a small simulation sketch follows.
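# A minimal simulation sketch of the idea (the variable names, sample size, and coefficients below
# are hypothetical, not real housing data): area is generated mostly from the number of bedrooms,
# so the two predictors are nearly redundant and including both inflates the standard errors.
set.seed(505)
n <- 1000
bedrooms <- rpois(n, lambda = 2) + 1                # hypothetical bedroom counts
area <- 40 * bedrooms + rnorm(n, mean = 0, sd = 5)  # area driven mostly by bedrooms
price <- 2 * area + rnorm(n, mean = 0, sd = 20)     # price depends on area
round(summary(lm(price ~ area))$coefficients, 3)             # tight estimate for area alone
round(summary(lm(price ~ area + bedrooms))$coefficients, 3)  # both predictors: much larger standard errors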

6E3. List the four elemental confounds. Can you explain the conditional dependencies of each?

# (1) The Fork (X <- Z -> Y): Z is a common cause of X and Y. X and Y are associated, but become independent once we condition on Z (X ⊥ Y | Z).
# (2) The Pipe (X -> Z -> Y): X influences Y only through Z. Conditioning on Z blocks the path, so X and Y are independent given Z (X ⊥ Y | Z).
# (3) The Collider (X -> Z <- Y): X and Y are independent; there is no association between them unless we condition on Z, which opens the path and induces an association.
# (4) The Descendant (X -> Z <- Y, Z -> D): conditioning on D, a descendant of Z, is like conditioning on Z itself, only weaker; here Z is a collider, so conditioning on D partially opens the path between X and Y.
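# A quick check with dagitty (a sketch; the DAGs are the fork, pipe, and collider listed above):
library(dagitty)
fork     <- dagitty("dag{ X <- Z -> Y }")
pipe     <- dagitty("dag{ X -> Z -> Y }")
collider <- dagitty("dag{ X -> Z <- Y }")
impliedConditionalIndependencies(fork)      # X _||_ Y | Z
impliedConditionalIndependencies(pipe)      # X _||_ Y | Z
impliedConditionalIndependencies(collider)  # X _||_ Y (marginally; conditioning on Z opens the path)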

6E4. How is a biased sample like conditioning on a collider? Think of the example at the open of the chapter.

# In the chapter-opening example, each grant proposal receives a newsworthiness score and a trustworthiness score, and only the top 10% by total score are funded. Funding is therefore a collider of newsworthiness and trustworthiness. A biased sample (only the funded proposals) is like conditioning on that collider: among funded proposals, one with low newsworthiness must have high trustworthiness (and vice versa) to make the cut, so the selection induces a negative association between two scores that are not associated in the full population. A simulation sketch is below.
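# A small simulation in the spirit of the chapter's example (a sketch: the 200 proposals,
# independent standard-normal scores, and the 10% funding rate are illustrative choices):
set.seed(2021)
N <- 200
nw <- rnorm(N)                 # newsworthiness
tw <- rnorm(N)                 # trustworthiness, simulated independently of nw
total <- nw + tw               # proposals are funded on total score
funded <- total >= quantile(total, 0.9)
cor(nw, tw)                    # near zero in the full population
cor(nw[funded], tw[funded])    # clearly negative among funded proposals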

6M1. Modify the DAG on page 186 to include the variable V, an unobserved cause of C and Y: C ← V → Y. Reanalyze the DAG. How many paths connect X to Y? Which must be closed? Which variables should you condition on now?

# 5 paths connect X to Y:
# (1) X -> Y
# (2) X <- U <- A -> C -> Y
# (3) X <- U <- A -> C <- V -> Y
# (4) X <- U -> B <- C -> Y
# (5) X <- U -> B <- C <- V -> Y
# Only path (2) must be closed: it is a backdoor path with no collider, so it is open. Path (3) contains the collider C (A -> C <- V), and paths (4) and (5) contain the collider B, so all three are already closed. In the original DAG we could condition on either A or C, but conditioning on C would now open the paths through V (C is a collider on them), so we should condition on A only. Conditioning on A shuts the backdoor, as confirmed below:
library(dagitty)
dag_6m1 <- dagitty("dag{
U [unobserved]
V [unobserved]
X -> Y
X <- U -> B <- C -> Y
U <- A -> C
C <- V -> Y
}")
adjustmentSets(dag_6m1, exposure="X", outcome="Y")
## { A }
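# The five paths, and which of them are open, can also be listed directly (a sketch using
# dagitty's paths() function on the same DAG):
paths(dag_6m1, from = "X", to = "Y")           # $open: only X -> Y and X <- U <- A -> C -> Y are open
paths(dag_6m1, from = "X", to = "Y", Z = "A")  # conditioning on A leaves only the direct path open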

6M2. Sometimes, in order to avoid multicollinearity, people inspect pairwise correlations among predictors before including them in a model. This is a bad procedure, because what matters is the conditional association, not the association before the variables are included in the model. To highlight this, consider the DAG X → Z → Y. Simulate data from this DAG so that the correlation between X and Z is very large. Then include both in a model predicting Y. Do you observe any multicollinearity? Why or why not? What is different from the legs example in the chapter?

# Simulate data for X → Z → Y:
N <- 5000
X <- rnorm(n = N, mean = 0, sd = 1)
Z <- rnorm(n = N, mean = X, sd = 0.2)
Y <- rnorm(n = N, mean = Z, sd = 1)
d <- data.frame(X, Z, Y)
# Correlation between X and Z is very large:
cor(X, Z)
## [1] 0.9804156
# Include X and Z in a model predicting Y (quap, precis, and extract.samples come from the rethinking package):
library(rethinking)
model_6m2 <- quap(alist(
    Y ~ dnorm(mu, sigma),
    mu <- a + bX*X + bZ*Z,
    a ~ dnorm(0, 1),
    bX ~ dnorm(0, 1),
    bZ ~ dnorm(0, 1),
    sigma ~ dexp(1)), data = d)
precis(model_6m2)
##              mean          sd        5.5%        94.5%
## a     -0.01693815 0.013966359 -0.03925908  0.005382793
## bX    -0.13765585 0.070974022 -0.25108604 -0.024225653
## bZ     1.12360226 0.069538110  1.01246693  1.234737592
## sigma  0.98757075 0.009874216  0.97178985  1.003351655
# Plot the joint posterior of bX and bZ; they are strongly negatively correlated:
post <- extract.samples(model_6m2)
plot(bX ~ bZ, post, col = col.alpha(rangi2, 0.1), pch = 16)

# Yes, some multicollinearity is observed: because X and Z are highly correlated, the posteriors for bX and bZ are much wider than they would be with independent predictors, and the plot shows they are strongly negatively correlated in the posterior. Unlike the legs example, though, the model still gives a sensible answer: bZ is estimated near 1, while bX is small and uncertain. The reason is that this DAG is a pipe (X → Z → Y): Z carries all of the information X has about Y, so once Z is in the model, X has nothing left to add. In the legs example the two predictors are children of a common cause (a fork: left_leg <- height -> right_leg), each carries essentially the same information about the outcome, and the model can only estimate the sum of their coefficients.
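# A quick comparison sketch (the model names model_X and model_Z are just illustrative): fitting Y
# on each predictor alone shows that each coefficient is estimated precisely by itself; only the
# joint model above shows the inflated, negatively correlated posteriors.
model_X <- quap(alist(
    Y ~ dnorm(mu, sigma),
    mu <- a + bX*X,
    a ~ dnorm(0, 1),
    bX ~ dnorm(0, 1),
    sigma ~ dexp(1)), data = d)
model_Z <- quap(alist(
    Y ~ dnorm(mu, sigma),
    mu <- a + bZ*Z,
    a ~ dnorm(0, 1),
    bZ ~ dnorm(0, 1),
    sigma ~ dexp(1)), data = d)
precis(model_X)  # bX alone is near 1 (the total effect of X through Z)
precis(model_Z)  # bZ alone is near 1, with a much smaller sd than in the joint model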