---
title: "CemSoylu_ANLY505-2022-Summer"
author: "Cem Soylu"
date: "2022-07-18"
output:
  pdf_document: default
  html_document: default
---

Week 4 - Categories and Curves

This chapter introduced multiple regression, a way of constructing descriptive models for how the mean of a measurement is associated with more than one predictor variable. The defining question of multiple regression is: What is the value of knowing each predictor, once we already know the other predictors? The answer to this question does not by itself provide any causal information. Causal inference requires additional assumptions. Simple directed acyclic graph (DAG) models of causation are one way to represent those assumptions.

Place each answer inside the code chunk (grey box). Each code chunk should contain a text response or code that completes or answers the question or activity requested. Make sure to include plots if the question requests them.

Finally, upon completion, name your final output .html file as YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.

Questions

5-1. Which of the linear models below are multiple linear regressions? \[\begin{align} \mu_i &= \alpha + \beta x_i \tag{1}\\ \mu_i &= \beta_x x_i + \beta_z z_i \tag{2}\\ \mu_i &= \alpha + \beta(x_i - z_i) \tag{3}\\ \mu_i &= \alpha + \beta_x x_i + \beta_z z_i \tag{4} \end{align}\]

# Equations (2) and (4) are multiple linear regressions: each includes two predictor variables, x and z, and is linear in its parameters.
# Equations (1) and (3) are not multiple linear regressions. Equation (1) has a single predictor x, and equation (3) uses the single composite predictor (x_i − z_i), so both are simple (bivariate) linear regressions.
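
To make the point about equation (3) explicit, defining the single composite predictor $u_i = x_i - z_i$ shows it is a simple regression in disguise:

\[
\mu_i = \alpha + \beta(x_i - z_i) = \alpha + \beta u_i ,
\]

which has exactly one intercept and one slope, just like equation (1).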

5-2. Write down a multiple regression to evaluate the claim: Neither amount of funding nor size of laboratory is by itself a good predictor of time to PhD degree; but together these variables are both positively associated with time to degree. Write down the model definition and indicate which side of zero each slope parameter should be on.

# Time_to_PhD_degree_i ~ LogNormal(µ_i, σ)
# µ_i = α + β_f * Amount_of_funding_i + β_l * Size_of_laboratory_i
# The claim implies both slope parameters should be on the positive side of zero: β_f > 0 and β_l > 0.
# Time to a PhD cannot be negative, so a LogNormal likelihood is a reasonable choice.
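
As a minimal sketch of how this model could be fit, assuming the rethinking package and a hypothetical data frame `d`; the column names, priors, and prior values below are illustrative, not part of the assignment data:

```{r eval=FALSE}
library(rethinking)

# Hypothetical data frame `d` with columns:
#   time_phd - years from enrollment to PhD (positive)
#   funding  - standardized amount of funding
#   lab_size - standardized size of laboratory
m_phd <- quap(
  alist(
    time_phd ~ dlnorm(mu, sigma),              # LogNormal keeps time-to-degree positive
    mu <- a + b_f * funding + b_l * lab_size,  # linear model on the log scale
    a ~ dnorm(1.5, 0.5),                       # illustrative prior: exp(1.5) is about 4.5 years
    b_f ~ dnorm(0, 0.5),
    b_l ~ dnorm(0, 0.5),
    sigma ~ dexp(1)
  ),
  data = d
)

precis(m_phd)  # the claim implies both b_f and b_l should sit above zero
```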

5-3. It is sometimes observed that the best predictor of fire risk is the presence of firefighters— States and localities with many firefighters also have more fires. Presumably firefighters do not cause fires. Nevertheless, this is not a spurious correlation. Instead fires cause firefighters. Consider the same reversal of causal inference in the context of the divorce and marriage data. How might a high divorce rate cause a higher marriage rate? Can you think of a way to evaluate this relationship, using multiple regression?

# A high divorce rate implies a potentially larger pool of people who are able to marry again, which raises the marriage rate.

# One way to evaluate this relationship with multiple regression is to use the re-marriage rate as the dependent variable and regress it on the divorce rate along with other control variables; a positive association between divorce rate and re-marriage rate would support this mechanism.
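
A compact sketch of that regression, using ordinary least squares as a stand-in and assuming a hypothetical state-level data frame `d`; all column names are illustrative:

```{r eval=FALSE}
# Hypothetical state-level data frame `d` with standardized columns:
#   remarriage - re-marriage rate (outcome)
#   divorce    - divorce rate
#   med_age    - median age at marriage (control)
m_remarry <- lm(remarriage ~ divorce + med_age, data = d)

summary(m_remarry)  # a positive coefficient on divorce supports the divorce -> re-marriage mechanism
```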

5-4. Suppose you have a single categorical predictor with 4 levels (unique values), labeled A, B, C and D. Let $A_i$ be an indicator variable that is 1 where case $i$ is in category A. Also suppose $B_i$, $C_i$, and $D_i$ for the other categories. Now which of the following linear models are inferentially equivalent ways to include the categorical variable in a regression? Models are inferentially equivalent when it’s possible to compute one posterior distribution from the posterior distribution of another model. \[\begin{align} \mu_i &= \alpha + \beta_A A_i + \beta_B B_i + \beta_D D_i \tag{1}\\ \mu_i &= \alpha + \beta_A A_i + \beta_B B_i + \beta_C C_i + \beta_D D_i \tag{2}\\ \mu_i &= \alpha + \beta_B B_i + \beta_C C_i + \beta_D D_i \tag{3}\\ \mu_i &= \alpha_A A_i + \alpha_B B_i + \alpha_C C_i + \alpha_D D_i \tag{4}\\ \mu_i &= \alpha_A (1 - B_i - C_i - D_i) + \alpha_B B_i + \alpha_C C_i + \alpha_D D_i \tag{5} \end{align}\]

# Equations (1) and (3) are inferentially equivalent ways to include the categorical variable: each leaves one indicator out, so that category's mean is absorbed into the intercept α.

# μ_i = α + β_A A_i + β_B B_i + β_D D_i   (category C absorbed into α)
# μ_i = α + β_B B_i + β_C C_i + β_D D_i   (category A absorbed into α)

# Equations (4) and (5) are also inferentially equivalent, because every case falls in exactly one category, so
# 1 − B_i − C_i − D_i = A_i
# and (5) is simply (4) with A_i written out.

# μ_i = α_A A_i + α_B B_i + α_C C_i + α_D D_i
# μ_i = α_A (1 − B_i − C_i − D_i) + α_B B_i + α_C C_i + α_D D_i

# Equations (1) and (4) can also be derived from each other: (1) omits one indicator and uses an intercept, while (4) drops the intercept and gives every category its own parameter. The posteriors map onto each other because the omitted category's mean equals α and every other category's mean equals α plus its slope (for (1), α_C = α and α_A = α + β_A, and so on).

# Thus only equation (2), which includes both an intercept and indicators for all four categories, has inferences that cannot be computed from the other models.
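
As an illustration of the equivalence (not part of the original answer), the following simulation fits the dummy-coded model (3) and the no-intercept model (4) with `lm()` and recovers one set of estimates from the other; the same mapping applies to posterior distributions:

```{r}
set.seed(505)

# Simulated categorical predictor with 4 levels and a continuous outcome
cat_lab <- sample(c("A", "B", "C", "D"), size = 200, replace = TRUE)
y <- c(A = 1, B = 2, C = 3, D = 4)[cat_lab] + rnorm(200, 0, 0.5)

# Model (3): intercept plus indicators for B, C, D (category A absorbed into the intercept)
m3 <- lm(y ~ factor(cat_lab))

# Model (4): no intercept, one mean parameter per category
m4 <- lm(y ~ 0 + factor(cat_lab))

coef(m3)  # alpha (mean of A) and differences from A
coef(m4)  # alpha_A, ..., alpha_D (the four category means)

# Equivalence: alpha_A = alpha, alpha_B = alpha + beta_B, and so on
coef(m3)[1] + c(0, coef(m3)[-1])  # reproduces coef(m4)
```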

5-5. One way to reason through multiple causation hypotheses is to imagine detailed mechanisms through which predictor variables may influence outcomes. For example, it is sometimes argued that the price of gasoline (predictor variable) is positively associated with lower obesity rates (outcome variable). However, there are at least two important mechanisms by which the price of gas could reduce obesity. First, it could lead to less driving and therefore more exercise. Second, it could lead to less driving, which leads to less eating out, which leads to less consumption of huge restaurant meals. Can you outline one or more multiple regressions that address these two mechanisms? Assume you can have any predictor data you need.

# Mechanism 1: Price of gasoline -> Driving -> Exercise -> Obesity

# Mechanism 2: Price of gasoline -> Driving -> Eating out -> Obesity

# For the first mechanism, where a higher price of gasoline leads to less driving and therefore more exercise, we can add a measure of exercise, such as average walking distance, as a predictor variable alongside gas price in a multiple linear regression of obesity rate.

# For the second mechanism, where a higher price of gasoline leads to less driving, less eating out, and therefore less consumption of huge restaurant meals, a good variable to include is the average rate of eating out as an additional predictor in the same regression. If conditioning on these mediating variables removes the association between gas price and obesity, that supports the corresponding mechanism.
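
A sketch of one such regression, assuming the rethinking package and a hypothetical regional data frame `d` with the mediators measured; all column names and priors are illustrative:

```{r eval=FALSE}
library(rethinking)

# Hypothetical regional data frame `d`, all variables standardized:
#   obesity   - obesity rate (outcome)
#   gas_price - price of gasoline
#   exercise  - exercise measure, e.g. average walking distance (mechanism 1)
#   eat_out   - average rate of eating out (mechanism 2)
m_obesity <- quap(
  alist(
    obesity ~ dnorm(mu, sigma),
    mu <- a + b_g * gas_price + b_e * exercise + b_r * eat_out,
    a ~ dnorm(0, 0.2),
    b_g ~ dnorm(0, 0.5),
    b_e ~ dnorm(0, 0.5),
    b_r ~ dnorm(0, 0.5),
    sigma ~ dexp(1)
  ),
  data = d
)

precis(m_obesity)  # if b_g moves toward zero once exercise and eat_out are
                   # included, the corresponding mediation mechanisms are supported
```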