Multiple regression is no oracle, but only a golem. It is logical, but the relationships it describes are conditional associations, not causal influences. Therefore additional information, from outside the model, is needed to make sense of it. This chapter presented introductory examples of some common frustrations: multicollinearity, post-treatment bias, and collider bias. Solutions to these frustrations can be organized under a coherent framework in which hypothetical causal relations among variables are analyzed to cope with confounding. In all cases, causal models exist outside the statistical model and can be difficult to test. However, it is possible to reach valid causal inferences in the absence of experiments. This is good news, because we often cannot perform experiments, for both practical and ethical reasons.
Place each answer inside the code chunk (grey box). Each code chunk should contain a text response or code that completes or answers the question or activity requested. Make sure to include plots when a question asks for them. Problems are labeled Easy (E), Medium (M), and Hard (H).
Finally, upon completion, name your final output .html file as YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link in Canvas. Each question is worth 5 points.
6E1. List three mechanisms by which multiple regression can produce false inferences about causal effects.
#Multicollinearity, post-treatment bias, and collider bias.
6E2. For one of the mechanisms in the previous problem, provide an example of your choice, perhaps from your own research.
#Multicollinearity occurs when two or more predictor variables are strongly correlated with one another.
#For example, a person's height and weight are highly correlated, so including both as predictors in the same model makes it hard to estimate either one's association with the outcome.
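# A rough illustration of that correlation (a sketch with made-up numbers, not real data;
# the seed and coefficients below are assumptions):
set.seed(11)
n <- 100
height <- rnorm( n , 170 , 10 )                   # heights in cm
weight <- -35 + 0.6*height + rnorm( n , 0 , 3 )   # weight largely driven by height
cor( height , weight )                            # pairwise correlation should be roughly 0.9
# Including both height and weight as predictors of some outcome would then give
# inflated posterior standard deviations, much like the legs example in the chapter.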
6E3. List the four elemental confounds. Can you explain the conditional dependencies of each?
#The Fork, The Pipe, The Collider, The Descendant
#In a fork (X <- Z -> Y), some variable Z is a common cause of X and Y, generating a correlation between them. If we condition on Z, then learning X tells us nothing about Y: X and Y are independent, conditional on Z.
#e.g., staying up late (Z) harms both health (X) and work efficiency (Y), inducing a correlation between the two.
#The second type of relation is a pipe (X -> Z -> Y). We saw this in the plant growth example and post-treatment bias: the treatment X influences the fungus Z, which influences growth Y. If we condition on Z, we also block the path from X to Y. So in both a fork and a pipe, conditioning on the middle variable blocks the path.
#e.g., unhealthy lifestyle -> weight gain -> high cholesterol.
#The third type of relation is a collider (X -> Z <- Y). Unlike the other two relations, in a collider there is no association between X and Y unless you condition on Z. Conditioning on Z, the collider variable, opens the path; once the path is open, information flows between X and Y.
#The fourth is the descendant: a variable influenced by another variable. Conditioning on a descendant is like conditioning on its parent variable, but weaker.
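# These conditional (in)dependencies can be checked mechanically with dagitty's
# impliedConditionalIndependencies(); the tiny DAGs below are just a sketch, not part of the assigned problems.
library(dagitty)
fork <- dagitty( "dag{ X <- Z -> Y }" )
impliedConditionalIndependencies( fork )   # X _||_ Y | Z
pipe <- dagitty( "dag{ X -> Z -> Y }" )
impliedConditionalIndependencies( pipe )   # X _||_ Y | Z
coll <- dagitty( "dag{ X -> Z <- Y }" )
impliedConditionalIndependencies( coll )   # X _||_ Y (only while Z is NOT conditioned on)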
6E4. How is a biased sample like conditioning on a collider? Think of the example at the open of the chapter.
# Conditioning on a collider creates statistical, but not necessarily causal, associations among its causes. A biased sample does the same thing implicitly: by restricting attention to the selected cases, we condition on the collider S (selection). In the chapter-opening example, once you learn that a proposal was selected (S), learning its trustworthiness (T) also provides information about its newsworthiness (N): if a selected proposal has low trustworthiness, it must have high newsworthiness, otherwise it would not have been funded. The same is true in reverse: if a selected proposal has low newsworthiness, we would infer that it has higher than average trustworthiness, otherwise it would not have been selected for funding.
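# A small simulation along the lines of the chapter's grant-selection example
# (the seed, sample size, and 10% cutoff here are assumptions):
set.seed(1914)
N <- 200
nw <- rnorm(N)                        # newsworthiness
tw <- rnorm(N)                        # trustworthiness, simulated independently of nw
s <- nw + tw                          # selection score
selected <- s >= quantile( s , 0.9 )  # fund the top 10%
cor( nw , tw )                        # roughly zero in the full sample
cor( nw[selected] , tw[selected] )    # negative among the funded proposals only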
6M1. Modify the DAG on page 186 to include the variable V, an unobserved cause of C and Y: C ← V → Y. Reanalyze the DAG. Draw the DAG. How many paths connect X to Y? Which must be closed? Which variables should you condition on now?
library(dagitty)   # for dagitty() and adjustmentSets()
dag_ <- dagitty( "dag {
U [unobserved]
V [unobserved]
X -> Y
X <- U <- A -> C -> Y
U -> B <- C
C <- V -> Y
}")
adjustmentSets( dag_ , exposure="X" , outcome="Y" )
## { A }
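# Five paths connect X to Y once V is added: the direct path X -> Y, the open backdoor
# X <- U <- A -> C -> Y, and three more that are already closed by the colliders at B and C.
# Conditioning on A closes the one open backdoor, consistent with the { A } adjustment set above.
# As a quick check (not required by the problem), dagitty can enumerate the paths and draw the DAG:
paths( dag_ , from="X" , to="Y" )          # list each path and whether it is open
paths( dag_ , from="X" , to="Y" , Z="A" )  # with A in the conditioning set, only X -> Y stays open
plot( graphLayout( dag_ ) )                # draw the DAG with automatic coordinates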
6M2. Sometimes, in order to avoid multicollinearity, people inspect pairwise correlations among predictors before including them in a model. This is a bad procedure, because what matters is the conditional association, not the association before the variables are included in the model. To highlight this, consider the DAG X → Z → Y. Simulate data from this DAG so that the correlation between X and Z is very large. Then include both in a model predicting Y. Do you observe any multicollinearity? Why or why not? What is different from the legs example in the chapter?
N <- 100                              # number of observations
set.seed(909)
x <- rnorm( N , 10 , 2 )              # simulate X
b_z <- runif( N , 0.4 , 0.5 )         # proportion of X passed to Z
b_y <- runif( N , 0.4 , 0.6 )         # proportion of X passed to Y
z <- b_z*x + rnorm( N , 0 , 0.02 )    # simulate Z from X with small error
y <- b_y*x + rnorm( N , 0 , 0.02 )    # simulate Y with small error
# combine into data frame
d <- data.frame( x , z , y )
cor(d$x,d$z)
## [1] 0.9557877
library(rethinking)   # for quap() and precis()
m <- quap(
alist(
y ~ dnorm( mu , sigma ) ,
mu <- a + bx*x + bz*z ,
a ~ dnorm( 10 , 100 ) ,
bx ~ dnorm( 2 , 10 ) ,
bz ~ dnorm( 2 , 10 ) ,
sigma ~ dexp( 1 )
) ,
data=d )
precis(m)
##             mean         sd        5.5%     94.5%
## a     0.08914878 0.25900923 -0.32479799 0.5030956
## bx    0.37782994 0.08599496  0.24039338 0.5152665
## bz    0.23248899 0.17980045 -0.05486685 0.5198448
## sigma 0.53324131 0.03755552  0.47322033 0.5932623
plot(precis(m))
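# For comparison, a stricter simulation of the pipe X -> Z -> Y, in which Y depends on X
# only through Z (the seed, noise values, and priors below are assumptions):
set.seed(909)
x2 <- rnorm( 100 )                    # X
z2 <- rnorm( 100 , x2 , 0.1 )         # Z is nearly a copy of X, so cor(x2, z2) is very high
y2 <- rnorm( 100 , z2 , 0.2 )         # Y is generated from Z alone (the pipe)
d2 <- data.frame( x=x2 , z=z2 , y=y2 )
cor( d2$x , d2$z )
m2 <- quap(
  alist(
    y ~ dnorm( mu , sigma ) ,
    mu <- a + bx*x + bz*z ,
    a ~ dnorm( 0 , 1 ) ,
    bx ~ dnorm( 0 , 1 ) ,
    bz ~ dnorm( 0 , 1 ) ,
    sigma ~ dexp( 1 )
  ) , data=d2 )
precis( m2 )
# Because conditioning on Z blocks the pipe, bz should come out near 1 and bx near 0, though
# both with somewhat inflated uncertainty from the high correlation. That is the difference
# from the legs example, where the two legs carry almost the same information and only the
# sum of their coefficients is well identified.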