This chapter introduced the simple linear regression model, a framework for estimating the association between a predictor variable and an outcome variable. The Gaussian distribution comprises the likelihood in such models, because it counts up the relative numbers of ways different combinations of means and standard deviations can produce an observation. To fit these models to data, the chapter introduced quadratic approximation of the posterior distribution and the tool quap. It also introduced new procedures for visualizing prior and posterior distributions.
Place each answer inside the code chunk (grey box). Each code chunk should contain a text response or code that completes or answers the question or activity requested. Problems are labeled Easy (E), Medium (M), and Hard (H).
Finally, upon completion, name your final output .html file YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.
4E1. In the model definition below, which line is the likelihood? \[\begin{align} y_i &\sim \text{Normal}(\mu, \sigma) \\ \mu &\sim \text{Normal}(0, 10) \\ \sigma &\sim \text{Exponential}(1) \end{align}\]
# The first line, y_i ∼ Normal(μ, σ), is the likelihood; the second and third lines are the priors for μ and σ.
4E2. In the model definition just above, how many parameters are in the posterior distribution?
# Two parameters: μ and σ
4E3. Using the model definition above, write down the appropriate form of Bayes’ theorem that includes the proper likelihood and priors.
# Pr(μ, σ | y) = [Π_i Normal(y_i | μ, σ) × Normal(μ | 0, 10) × Exponential(σ | 1)] / ∫∫ Π_i Normal(y_i | μ, σ) Normal(μ | 0, 10) Exponential(σ | 1) dμ dσ
4E4. In the model definition below, which line is the linear model? \[\begin{align} y_i &\sim \text{Normal}(\mu, \sigma) \\ \mu_i &= \alpha + \beta x_i \\ \alpha &\sim \text{Normal}(0, 10) \\ \beta &\sim \text{Normal}(0, 1) \\ \sigma &\sim \text{Exponential}(2) \end{align}\]
# The linear model is the second line: μ_i = α + βx_i.
4E5. In the model definition just above, how many parameters are in the posterior distribution?
# Three parameters: α, β, σ
4M1. For the model definition below, simulate observed y values from the prior (not the posterior). \[\begin{align} y_i &\sim \text{Normal}(\mu, \sigma) \\ \mu &\sim \text{Normal}(0, 10) \\ \sigma &\sim \text{Exponential}(1) \end{align}\]
library(rethinking)
mu <- rnorm(10000, 0, 10)     # sample mu from its Normal(0, 10) prior
sigma <- rexp(10000, 1)       # sample sigma from its Exponential(1) prior
y <- rnorm(10000, mu, sigma)  # simulate observations given the sampled parameters
dens(y)
4M2. Translate the model just above into a quap formula.
m1 <- alist(
  y ~ dnorm(mu, sigma),
  mu ~ dnorm(0, 10),
  sigma ~ dexp(1)
)
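As a quick check (a minimal sketch, not required by the question), this formula can be fit with quap once data are supplied, e.g. the y values simulated in 4M1:
m1_fit <- quap(m1, data = list(y = y))
precis(m1_fit)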
4M3. Translate the quap model formula below into a mathematical model definition:
y ~ dnorm( mu , sigma ),
mu <- a + b*x,
a ~ dnorm( 0 , 10 ),
b ~ dunif( 0 , 1 ),
sigma ~ dexp( 1 )
# y_i ∼ Normal(μ_i, σ)
# μ_i = α + βx_i
# α ∼ Normal(0, 10)
# β ∼ Uniform(0, 1)
# σ ∼ Exponential(1)
4M4. A sample of students is measured for height each year for 3 years. After the third year, you want to fit a linear regression predicting height using year as a predictor. Write down the mathematical model definition for this regression, using any variable names and priors you choose. Be prepared to defend your choice of priors.
# h_i ∼ Normal(μ_i, σ)
# μ_i = α + βx_i
# α ∼ Normal(150, 20)
# β ∼ Normal(0, 10)
# σ ∼ Uniform(0, 50)
# α is the expected height (in cm) when the year predictor equals zero; Normal(150, 20) covers plausible average student heights from primary school through college. β is the expected change in height per year in the linear model. σ ∼ Uniform(0, 50) is a weakly informative prior on the standard deviation of heights around the mean; 50 cm is a generous upper bound.
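One way to defend these priors is a prior predictive simulation. Below is a minimal sketch; it assumes year is coded 1, 2, 3:
# simulate regression lines implied by the priors
N <- 100
a <- rnorm(N, 150, 20)
b <- rnorm(N, 0, 10)
plot(NULL, xlim = c(1, 3), ylim = c(50, 250), xlab = "year", ylab = "height (cm)")
for (i in 1:N) abline(a = a[i], b = b[i], col = col.alpha("black", 0.2))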
4M5. Now suppose I remind you that every student got taller each year. Does this information lead you to change your choice of priors? How?
# Yes. If every student got taller each year, then β must be positive, but Normal(0, 10) places half its prior mass on negative growth. I would replace it with a prior constrained to positive values, such as β ∼ Log-Normal(0, 1).
4M6. Now suppose I tell you that the variance among heights for students of the same age is never more than 64cm. How does this lead you to revise your priors?
# A variance of 64 cm² corresponds to a standard deviation of 8 cm, so σ cannot exceed 8. As a result, I would revise the prior to σ ∼ Uniform(0, 8).
4M7. Refit model m4.3 from the chapter, but omit the mean weight xbar this time. Compare the new model’s posterior to that of the original model. In particular, look at the covariance among the parameters. What is different? Then compare the posterior predictions of both models.
Answer: In the new (uncentered) model, a and b have a strong negative posterior correlation, whereas the parameters of the centered model are essentially uncorrelated. The posterior mean of a is also much lower (about 114.5 versus 154.6), because a is now the expected height at weight = 0 rather than at the average weight. Despite these differences, the posterior predictions of the two models are essentially identical.
data("Howell1")
d <- Howell1
d2 <- d[d$age >= 18, ]
xbar <- mean(d2$weight)
m4.3 <- quap(
  alist(
    height ~ dnorm(mu, sigma),
    mu <- a + b * (weight - xbar),
    a ~ dnorm(178, 20),
    b ~ dlnorm(0, 1),
    sigma ~ dunif(0, 50)
  ),
  data = d2
)
precis(m4.3)
## mean sd 5.5% 94.5%
## a 154.6008103 0.27031329 154.1687974 155.0328231
## b 0.9032917 0.04192451 0.8362882 0.9702951
## sigma 5.0719875 0.19116481 4.7664692 5.3775058
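For the covariance comparison, it helps to also inspect the centered model's variance-covariance matrix (output omitted here); its off-diagonal entries are near zero:
vcov(m4.3)
round(cov2cor(vcov(m4.3)), 2)  # correlations are easier to read than covariances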
m4.33 <- quap(
alist(
height ~ dnorm(mu, sigma),
mu <- a + b * weight,
a ~ dnorm(178, 20),
b ~ dlnorm(0, 1),
sigma ~ dunif(0, 50)
) ,
data = d2
)
precis(m4.33)
## mean sd 5.5% 94.5%
## a 114.5343106 1.89774714 111.5013441 117.5672770
## b 0.8907301 0.04175799 0.8239928 0.9574674
## sigma 5.0727184 0.19124889 4.7670657 5.3783710
vcov(m4.33)
## a b sigma
## a 3.601444189 -0.0784378285 0.0093570442
## b -0.078437829 0.0017437293 -0.0002042991
## sigma 0.009357044 -0.0002042991 0.0365761389
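To compare the posterior predictions, a minimal sketch: compute the posterior mean height over a grid of weights under both models and overlay the two lines; they coincide almost exactly.
weight.seq <- seq(from = min(d2$weight), to = max(d2$weight), length.out = 50)
mu.c <- link(m4.3, data = data.frame(weight = weight.seq))   # centered model
mu.u <- link(m4.33, data = data.frame(weight = weight.seq))  # uncentered model
plot(height ~ weight, d2, col = col.alpha(rangi2, 0.3))
lines(weight.seq, apply(mu.c, 2, mean), lwd = 2)
lines(weight.seq, apply(mu.u, 2, mean), lty = 2)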
4M8. In the chapter, we used 15 knots with the cherry blossom spline. Increase the number of knots and observe what happens to the resulting spline. Then adjust also the width of the prior on the weights—change the standard deviation of the prior and watch what happens. What do you think the combination of knot number and the prior on the weights controls?
Answer: More knots make the spline more flexible, so it can track more local wiggles in the data; with fewer knots it captures only the general trend. Narrowing the prior on the weights (a smaller standard deviation) shrinks the basis weights toward zero and smooths the curve. Together, the number of knots and the width of the weight prior control how wiggly the fitted spline is allowed to be.
data(cherry_blossoms)
d <- cherry_blossoms
precis(d)
## mean sd 5.5% 94.5% histogram
## year 1408.000000 350.8845964 867.77000 1948.23000 ▇▇▇▇▇▇▇▇▇▇▇▇▁
## doy 104.540508 6.4070362 94.43000 115.00000 ▁▂▅▇▇▃▁▁
## temp 6.141886 0.6636479 5.15000 7.29470 ▁▃▅▇▃▂▁▁
## temp_upper 7.185151 0.9929206 5.89765 8.90235 ▁▂▅▇▇▅▂▂▁▁▁▁▁▁▁
## temp_lower 5.098941 0.8503496 3.78765 6.37000 ▁▁▁▁▁▁▁▃▅▇▃▂▁▁▁
d2 <- d[ complete.cases(d$doy), ]
num_knots <- 15
knot_list <- quantile( d2$year , probs=seq(0,1,length.out=num_knots) )
library(splines)
B <- bs(d2$year,knots=knot_list[-c(1,num_knots)] , degree=3 , intercept=TRUE )
plot( NULL , xlim=range(d2$year) , ylim=c(0,1) , xlab="year" , ylab="basis" )
for ( i in 1:ncol(B) ) lines( d2$year , B[,i] )
num_knots2 <- 30
knot_list <- quantile( d2$year , probs=seq(0,1,length.out=num_knots2) )
B <- bs(d2$year,knots=knot_list[-c(1,num_knots2)] , degree=3 , intercept=TRUE )
plot( NULL , xlim=range(d2$year) , ylim=c(0,1) , xlab="year" , ylab="basis" )
for ( i in 1:ncol(B) ) lines( d2$year , B[,i] )
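To see the effect of the weight prior, the chapter's spline model (m4.7) can be refit with the 30-knot basis and a narrower prior; the following sketch follows the chapter's code, with the prior standard deviation on w reduced from 10 to 1:
m4.7b <- quap(
  alist(
    D ~ dnorm(mu, sigma),
    mu <- a + B %*% w,
    a ~ dnorm(100, 10),
    w ~ dnorm(0, 1),  # chapter used dnorm(0, 10); a smaller sd smooths the spline
    sigma ~ dexp(1)
  ),
  data = list(D = d2$doy, B = B),
  start = list(w = rep(0, ncol(B)))
)
mu <- link(m4.7b)
mu_PI <- apply(mu, 2, PI, 0.97)
plot(d2$year, d2$doy, col = col.alpha(rangi2, 0.3), pch = 16)
shade(mu_PI, d2$year, col = col.alpha("black", 0.5))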
4H2. Select out all the rows in the Howell1 data with ages below 18 years of age. If you do it right, you should end up with a new data frame with 192 rows in it.
Answer: The posterior mean of b is about 2.72, meaning that for every 10 units of weight the model predicts about a 27.2 cm increase in height.
Plot the raw data, with height on the vertical axis and weight on the horizontal axis. Superimpose the MAP regression line and 89% interval for the mean. Also superimpose the 89% interval for predicted heights.
What aspects of the model fit concern you? Describe the kinds of assumptions you would change, if any, to improve the model. You don’t have to write any new code. Just explain what the model appears to be doing a bad job of, and what you hypothesize would be a better model.
Answer: The main concern is the assumption that the relationship between μ and weight is linear; the raw data bend away from the fitted line at the low and high ends of weight. A model that allows curvature, such as a quadratic (polynomial) regression or a spline, would likely do a better job.
d3 <- Howell1[Howell1$age < 18, ]
nrow(d3)
## [1] 192
# a. fit a linear regression model
model3 <- quap(
alist(
height ~ dnorm(mu, sigma),
mu <- a + b * weight,
a ~ dnorm(110, 30),
b ~ dnorm(0, 10),
sigma ~ dunif(0, 60)
),
data = d3
)
precis(model3)
## mean sd 5.5% 94.5%
## a 58.333419 1.39499612 56.103945 60.562892
## b 2.715709 0.06819766 2.606716 2.824702
## sigma 8.432801 0.43000857 7.745564 9.120037
# b. plot
plot( height ~ weight , d3 , col=col.alpha(rangi2,0.5) )
weight.seq <- seq(from = min(d3$weight), to = max(d3$weight), by = 1)
mu <- link(model3, data = data.frame(weight = weight.seq))
mu.mean <- apply(mu, 2, mean)
mu.HPDI <- apply(mu, 2, HPDI, prob = 0.89)
lines(weight.seq , mu.mean)
shade( mu.HPDI , weight.seq )
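The question also asks for the 89% interval for predicted heights, which incorporates the observation noise σ as well; a minimal sketch using sim from the rethinking package:
sim.height <- sim(model3, data = list(weight = weight.seq))
height.HPDI <- apply(sim.height, 2, HPDI, prob = 0.89)
shade(height.HPDI, weight.seq)  # wider band than mu.HPDI: includes sigma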