Multiple regression is no oracle, but only a golem. It is logical, but the relationships it describes are conditional associations, not causal influences. Therefore additional information, from outside the model, is needed to make sense of it. This chapter presented introductory examples of some common frustrations: multicollinearity, post-treatment bias, and collider bias. Solutions to these frustrations can be organized under a coherent framework in which hypothetical causal relations among variables are analyzed to cope with confounding. In all cases, causal models exist outside the statistical model and can be difficult to test. However, it is possible to reach valid causal inferences in the absence of experiments. This is good news, because we often cannot perform experiments, both for practical and ethical reasons.
Place each answer inside the code chunk (grey box). The code chunks should contain a text response or code that completes or answers the question or activity requested. Make sure to include plots if the question requests them. Problems are labeled Easy (E), Medium (M), and Hard (H).
Finally, upon completion, name your final output .html file as YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.
##r chunk
library(dagitty)
library(rethinking)
6E1. List three mechanisms by which multiple regression can produce false inferences about causal effects.
# The three mechanisms are multicollinearity, post-treatment bias, and collider bias.
# Multicollinearity: when two predictor variables are strongly associated, including both in a model makes it difficult to determine which predictor is the genuine contributor, and the individual coefficient estimates become very uncertain.
# Post-treatment Bias: a form of included-variable bias that arises when we condition on a variable that is itself a consequence of the treatment, which can mask the treatment's true effect.
# Collider Bias: this occurs when two variables have no causal relationship with one another but both influence a third variable. Conditioning on that third variable generates a spurious statistical association between the first two variables in the model.
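# Below is a minimal simulated sketch (my own, not from the chapter) of the first mechanism:
# when a model includes two nearly redundant predictors, the individual slopes become very
# uncertain even though only one predictor truly influences the outcome.
set.seed(11)
N  <- 100
x1 <- rnorm(N)
x2 <- rnorm(N, mean = x1, sd = 0.05)              # x2 is almost a copy of x1
y_sim <- rnorm(N, mean = x1)                      # only x1 truly influences the outcome
round(summary(lm(y_sim ~ x1))$coefficients, 2)        # precise estimate for x1
round(summary(lm(y_sim ~ x1 + x2))$coefficients, 2)   # both slopes now have large std. errors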
6E2. For one of the mechanisms in the previous problem, provide an example of your choice, perhaps from your own research.
# Post-treatment bias emerges whenever a regression model includes, as a control variable, something that is itself an outcome of the treatment, regardless of how strongly that consequence-of-treatment variable is associated with the treatment. In general, though, the bias grows with the strength of that association.
# For instance, suppose we'd like to study the effect of race on pay. Imagine that race affects job position, job position affects compensation, and the entire influence of race on salary flows through how race affects people's job positions. That is, aside from how it shapes employment status, race has no direct bearing on compensation. If we regressed salary on race while controlling for job position, we would (correctly, in a purely mathematical sense) find no association between race and salary conditional on job position, even though race does affect salary through job position. See the simulated sketch below.
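# A minimal simulated version of the example above (my own made-up effect sizes; "race" is just
# a binary indicator here). Controlling for the post-treatment variable job masks the effect:
set.seed(14)
N      <- 1000
race   <- rbinom(N, size = 1, prob = 0.5)   # group indicator
job    <- rnorm(N, mean = 1 * race)         # race affects job position
salary <- rnorm(N, mean = 2 * job)          # job position affects salary
round(coef(lm(salary ~ race)), 2)           # total effect of race appears (about 2)
round(coef(lm(salary ~ race + job)), 2)     # conditioning on job drives it toward 0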
6E3. List the four elemental confounds. Can you explain the conditional dependencies of each?
# From page 185
# The four elemental confounds are the fork, the pipe, the collider, and the descendant.
# The first type of relationship is the fork: X <- Z -> Y. This is the classic confounder. In a fork, some variable Z is a common cause of X and Y, which induces a correlation between them. Once we condition on Z, learning X tells us nothing more about Y: X and Y are independent conditional on Z.
# e.g., staying up late (Z) is a common cause that harms both one's health (X) and one's productivity (Y).
# The second type of relationship is the pipe: X -> Z -> Y. The variable X affects Z, which in turn affects Y. If we condition on Z, we block the path from X to Y. In both a fork and a pipe, conditioning on the middle variable blocks the path.
# e.g., an unhealthy lifestyle (X) affects weight gain (Z), which in turn affects cholesterol (Y).
# A collider is the third type of relationship: X -> Z <- Y. Unlike the other two types of relations, there is no link between X and Y in a collider until you condition on Z. The path is opened by conditioning on Z, the collider variable. Information moves between X and Y once the path is open. Neither X nor Y, on the other hand, has any causal effect on the other.
# The descendant is the fourth relationship. A descendant is a variable whose value is influenced by another variable. Conditioning on a descendant partially conditions on its parent, so it has a weaker version of the same effect as conditioning on the parent itself. Descendants are common because we often cannot measure a variable directly and must rely on a proxy for it.
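# A small dagitty check of these four structures (my own sketch; variable names are arbitrary):
fork     <- dagitty("dag{ Z -> X ; Z -> Y }")
pipe     <- dagitty("dag{ X -> Z ; Z -> Y }")
collider <- dagitty("dag{ X -> Z ; Y -> Z }")
descend  <- dagitty("dag{ X -> Z ; Y -> Z ; Z -> D }")
impliedConditionalIndependencies(fork)      # X _||_ Y | Z
impliedConditionalIndependencies(pipe)      # X _||_ Y | Z
impliedConditionalIndependencies(collider)  # X _||_ Y (only without conditioning on Z)
impliedConditionalIndependencies(descend)   # X _||_ Y; D is independent of X and Y given Z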
6E4. How is a biased sample like conditioning on a collider? Think of the example at the open of the chapter.
# Conditioning on a collider creates a spurious association between the two variables that cause it. A biased sample is like conditioning on a collider because selection into the sample acts exactly like conditioning on that (often unmeasured) collider variable.
# Ex: newsworthiness -> acceptance <- trustworthiness
# In the example at the open of the chapter, trustworthiness and newsworthiness are uncorrelated in the full pool of proposals. But a proposal is accepted if it scores highly on newsworthiness, trustworthiness, or both, so among the accepted proposals the two traits are negatively correlated on average. When we condition on a collider (here, by only observing the selected sub-sample), the association within that sub-sample differs from the association in the full population, which leads to incorrect conclusions about the whole sample. The sketch below re-creates this selection effect.
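# A rough re-creation of that selection process (my own sketch, not the book's exact code):
set.seed(27)
N  <- 200
nw <- rnorm(N)                               # newsworthiness
tw <- rnorm(N)                               # trustworthiness, independent of nw
total    <- nw + tw
selected <- total >= quantile(total, 0.9)    # only the top 10% of proposals are accepted
round(cor(nw, tw), 2)                        # roughly zero in the full pool
round(cor(nw[selected], tw[selected]), 2)    # clearly negative among accepted proposals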
6M1. Modify the DAG on page 186 to include the variable V, an unobserved cause of C and Y: C ← V → Y. Reanalyze the DAG. Draw the DAG. How many paths connect X to Y? Which must be closed? Which variables should you condition on now?
dag_6.1 <- dagitty("dag { U [unobserved]
V [unobserved]
X -> Y
X <- U <- A -> C -> Y
U -> B <- C
C <- V -> Y }")
adjustmentSets( dag_6.1 , exposure="X" , outcome="Y" )
## { A }
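# The question also asks to draw the DAG; here is a sketch using rethinking::drawdag
# (the coordinates below are just my own layout choice):
coordinates(dag_6.1) <- list(
  x = c(X = 0, U = 0, A = 1, B = 1, C = 2, Y = 2, V = 2.5),
  y = c(X = 2, U = 1, A = 0, B = 1.5, C = 1, Y = 2, V = 1.5))
drawdag(dag_6.1)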
# Besides the direct path X -> Y, there are 4 backdoor paths from X to Y:
# X <- U <- A -> C -> Y
# X <- U -> B <- C -> Y
# X <- U <- A -> C <- V -> Y
# X <- U -> B <- C <- V -> Y
# Paths 2 and 4 are already closed, because B is a collider on them, and path 3 is also closed, because C is a collider on that path (A -> C <- V). Only path 1 is open, since A is a fork, and it must be closed. In the original DAG we could condition on either A or C, but in the new DAG conditioning on C would open path 3 through the unobserved V. So we should condition on A alone to infer the causal effect of X on Y, which matches the adjustmentSets() result above.
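# We can verify the path list with dagitty::paths(), which also lists the direct path
# X -> Y and flags which paths are open:
paths(dag_6.1, from = "X", to = "Y")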
6M2. Sometimes, in order to avoid multicollinearity, people inspect pairwise correlations among predictors before including them in a model. This is a bad procedure, because what matters is the conditional association, not the association before the variables are included in the model. To highlight this, consider the DAG X → Z → Y. Simulate data from this DAG so that the correlation between X and Z is very large. Then include both in a model predicting Y. Do you observe any multicollinearity? Why or why not? What is different from the legs example in the chapter?
# Simulate data from the pipe X -> Z -> Y with a strong association between X and Z
n <- 100
b_xz <- 0.9   # effect of X on Z
b_zy <- 0.7   # effect of Z on Y
set.seed(909)
x <- rnorm(n)
z <- rnorm(n, x * b_xz)
y <- rnorm(n, z * b_zy)
d <- data.frame(x, y, z)
cor(d)
## x y z
## x 1.0000000 0.5811009 0.8077636
## y 0.5811009 1.0000000 0.7388561
## z 0.8077636 0.7388561 1.0000000
# X and Z are strongly correlated in the raw data, so on the basis of the correlation alone we might worry about multicollinearity; the model below shows whether the posterior actually suffers from it.
m6.1 <- quap(
  alist(
    y ~ dnorm( mu , sigma ),
    mu <- a + b_xz*x + b_zy*z,
    a ~ dnorm( 10 , 100 ),
    c(b_xz,b_zy) ~ dnorm( 2 , 10 ),
    sigma ~ dexp( 1 )
  ),
  data = d )
precis(m6.1)
## mean sd 5.5% 94.5%
## a -0.10915474 0.0902031 -0.2533167 0.03500725
## b_xz -0.05747396 0.1450075 -0.2892239 0.17427598
## b_zy 0.73251426 0.1074185 0.5608387 0.90418979
## sigma 0.90125831 0.0633088 0.8000786 1.00243799
plot(precis(m6.1))
post <- extract.samples(m6.1)
plot( b_xz ~ b_zy , post , col=col.alpha(rangi2,0.1) , pch=16 )
sum_b_xzb_zy <- post$b_xz + post$b_zy
dens( sum_b_xzb_zy , col=rangi2 , lwd=2 , xlab="sum of b_xz and b_zy" )
# No, we do not see the same kind of multicollinearity as in the legs example. Here the posterior for b_zy is precise and b_xz simply shrinks toward zero, because in the pipe X -> Z -> Y the variable Z carries all of the information X has about Y: once we know Z, X adds nothing. In the legs example, by contrast, the two leg lengths are both proxies for the same underlying quantity, so neither provides information beyond the other and the model cannot separate their individual contributions, which inflates the uncertainty in both coefficients. The causal structure, not the raw correlation, is what matters.
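# For contrast (my own addition, not required by the question): regressing y on x alone
# recovers a clear association, so x is informative about y until conditioning on z blocks the pipe.
m6.1_x_only <- quap(
  alist(
    y ~ dnorm(mu, sigma),
    mu <- a + bx * x,
    a ~ dnorm(0, 1),
    bx ~ dnorm(0, 1),
    sigma ~ dexp(1)
  ),
  data = d )
precis(m6.1_x_only)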