library(ggdag)
## Warning: package 'ggdag' was built under R version 4.0.4
theme_set(theme_dag())

Chapter 6 - The Haunted DAG & The Causal Terror

Multiple regression is no oracle, but only a golem. It is logical, but the relationships it describes are conditional associations, not causal influences. Therefore additional information, from outside the model, is needed to make sense of it. This chapter presented introductory examples of some common frustrations: multicollinearity, post-treatment bias, and collider bias. Solutions to these frustrations can be organized under a coherent framework in which hypothetical causal relations among variables are analyzed to cope with confounding. In all cases, causal models exist outside the statistical model and can be difficult to test. However, it is possible to reach valid causal inferences in the absence of experiments. This is good news, because we often cannot perform experiments, both for practical and ethical reasons.

Place each answer inside the code chunk (grey box). The code chunks should contain a text response or code that completes/answers the question or activity requested. Make sure to include plots if the question requests them. Problems are labeled Easy (E), Medium (M), and Hard (H).

Finally, upon completion, name your final output .html file as: YourName_ANLY505-Year-Semester.html and publish the assignment to your R Pubs account and submit the link to Canvas. Each question is worth 5 points.

Questions

6E1. List three mechanisms by which multiple regression can produce false inferences about causal effects.

# Post-treatment bias, collider bias, and multicollinearity

6E2. For one of the mechanisms in the previous problem, provide an example of your choice, perhaps from your own research.

# Multicollinearity

# Suppose we want to build a model for wealth. The predictors might include education level, marital status, age, active income, passive income, estate value, and so on. Education and active income are likely to be multicollinear, since higher education and skills tend to lead to higher income, so once one of them is in the model the other adds little independent information.
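
A tiny illustration of that point. All of the variable names, numbers, and the seed below are made up for this sketch: when income is almost determined by education, a regression of wealth on both gives unstable, wide estimates for each coefficient, even though either predictor alone would be precise.

# hypothetical data: income is almost determined by education
set.seed(505)
n <- 200
education <- rnorm(n, 16, 2)                    # years of schooling
income    <- 5 * education + rnorm(n, 0, 0.5)   # strongly collinear with education
wealth    <- 2 * income + rnorm(n, 0, 10)

# with both predictors in the model, the individual slopes become hard to pin down
summary(lm(wealth ~ education + income))$coefficients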

6E3. List the four elemental confounds. Can you explain the conditional dependencies of each?

# Fork: X <- Z -> Y. Z is a common cause of X and Y, so X and Y are associated, but they are independent conditional on Z. This is the classic confounder: once we know Z, learning X tells us nothing more about Y.

# Pipe: X -> Z -> Y. Z mediates the influence of X on Y, so X and Y are associated, but they are independent conditional on Z: once we know Z, learning X tells us nothing more about Y.

# Collider: X -> Z <- Y. X and Y are independent, but conditioning on Z opens the path and creates a spurious association: within levels of Z, learning X does tell us something about Y.

# Descendant: D <- Z, where Z sits on a path between X and Y. Conditioning on the descendant D is like conditioning on Z itself, only weaker: it partially opens the path if Z is a collider, and partially closes it if Z is part of a fork or pipe. (A small DAG for this case is sketched after the collider DAG below.)

Fork

dagify(y ~ z, x ~ z) %>%
    ggdag()

Pipe

dagify(y ~ z, z ~ x) %>%
    ggdag()

Collider

dagify(z ~ y, z ~ x) %>%
    ggdag()
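
Descendant (added as a sketch, since the answer above describes this confound but no DAG was drawn; the node d is just an illustrative descendant of the collider z)

dagify(z ~ y, z ~ x, d ~ z) %>%
    ggdag()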

6E4. How is a biased sample like conditioning on a collider? Think of the example at the open of the chapter.

#“It seems like the most newsworthy scientific studies are the least trustworthy. The more likely it is to kill you, if true, the less likely it is to be true. The more boring the topic, the more rigorous the results.”

# In that example, selection is on the total score of a proposal, which is a collider: newsworthiness -> score <- trustworthiness. A biased sample is like conditioning on that collider, because only proposals above some score threshold are ever observed. Among the selected proposals a negative association between newsworthiness and trustworthiness appears, even though the two are independent overall, since a proposal weak on one dimension had to be strong on the other to make the cut.
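
A small simulation along the lines of the chapter's opening example (the sample size, threshold, and seed here are illustrative choices, not necessarily the book's exact values):

# newsworthiness and trustworthiness are generated independently
set.seed(1914)
N  <- 500
nw <- rnorm(N)
tw <- rnorm(N)
score <- nw + tw                               # proposals are judged on the total score
selected <- score >= quantile(score, 0.9)      # keep only the top 10%
cor(nw[selected], tw[selected])                # negative among the selected proposals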

6M1. Modify the DAG on page 186 to include the variable V, an unobserved cause of C and Y: C ← V → Y. Reanalyze the DAG. Draw the DAG. How many paths connect X to Y? Which must be closed? Which variables should you condition on now?

# Counting the direct path X -> Y, five paths now connect X to Y. The four backdoor paths are:

#1. X <- U -> B <- C -> Y
#2. X <- U <- A -> C -> Y
#3. X <- U <- A -> C <- V -> Y
#4. X <- U -> B <- C <- V -> Y

# Paths 1 and 4 are already closed because B is a collider on them, and path 3 is closed because C is a collider on it. Only path 2 is open and must be closed. U is unobserved, and conditioning on C would open path 3 through the unobserved V, so we should condition on A only. The modified DAG is sketched below.
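
A sketch of the modified DAG with dagify (the node layout is left to ggdag's defaults; U and V are marked as latent since they are unobserved):

dagify(Y ~ X + C + V,
       X ~ U,
       B ~ U + C,
       U ~ A,
       C ~ A + V,
       exposure = "X",
       outcome = "Y",
       latent = c("U", "V")) %>%
    ggdag()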

6M2. Sometimes, in order to avoid multicollinearity, people inspect pairwise correlations among predictors before including them in a model. This is a bad procedure, because what matters is the conditional association, not the association before the variables are included in the model. To highlight this, consider the DAG X → Z → Y. Simulate data from this DAG so that the correlation between X and Z is very large. Then include both in a model predicting Y. Do you observe any multicollinearity? Why or why not? What is different from the legs example in the chapter?

library(rethinking)
# What matters is the conditional association, not the pairwise correlation before fitting. Even though x and z end up highly correlated in the simulation below, the fit does not show the legs-style multicollinearity: one slope collapses toward zero with a narrow interval while the other is well identified. In the legs example the two predictors carried essentially the same information about the outcome, so the posterior could not decide how to split the association between them and both slopes became wildly uncertain; in a pipe X -> Z -> Y the influence on Y flows through a single path, so once the right predictor is in the model the other adds nothing and its coefficient simply goes to zero.

set.seed(666)
N <- 1000   # number of simulated observations

x <- rnorm(N, 10, 2)          # the cause X
leg_X <- runif(N, 0.4, 0.5)   # slope used to generate z from x (names kept in the legs-example style)
leg_Z <- runif(N, 0.4, 0.6)   # slope used to generate y

z <- leg_X*x + rnorm(N, 0, 0.01)   # z is nearly a deterministic function of x, so cor(x, z) is very high
y <- leg_Z*x + rnorm(N, 0, 0.01)

# combine into a data frame
df <- data.frame(x, z, y)
cor(df$x,df$y)
## [1] 0.8463076
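
The question asks specifically about the correlation between X and Z; since z was generated as a nearly noise-free multiple of x, this should be very high (the exact value depends on the seed and is not reproduced here):

cor(df$x, df$z)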
m6.2 <- quap(
    alist(
        y ~ dnorm(mu, sigma),
        mu <- a + bx*x + bz*z,
        a ~ dnorm(10, 1000),
        bx ~ dnorm(2, 10),
        bz ~ dnorm(2, 10),
        sigma ~ dexp(2)
    ),
    data = df
)

precis(m6.2)
##              mean         sd        5.5%      94.5%
## a      0.25029454 0.09595014  0.09694768 0.40364140
## bx     0.48287649 0.03020163  0.43460845 0.53114452
## bz    -0.01733778 0.06368731 -0.11912240 0.08444685
## sigma  0.58794683 0.01313519  0.56695426 0.60893940
plot(precis(m6.2))

post <- extract.samples(m6.2)
# posterior samples of the two slopes, analogous to the bl ~ br plot in the legs example
plot(bz ~ bx, post, col = col.alpha(rangi2, 0.1), pch = 16)
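
To quantify how strongly the two slopes trade off against each other in the posterior, the correlation between their samples can be computed directly (run after fitting; the value is not reproduced here):

round(cor(post$bx, post$bz), 2)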