This chapter described some of the most common generalized linear models: those used to model counts. It is important never to convert counts to proportions before analysis, because doing so destroys information about sample size. A fundamental difficulty with these models is that their parameters are on a different scale, typically log-odds (binomial) or log-rate (Poisson), than the outcome variable they describe. Therefore computing implied predictions is even more important than before.
Place each answer inside the code chunk (grey box). The code chunks should contain a text response or code that completes or answers the question or activity requested. Problems are labeled Easy (E), Medium (M), and Hard (H).
Finally, upon completion, name your final output .html file as YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.
11E1. If an event has probability 0.35, what are the log-odds of this event?
log(0.35/(1 - 0.35))   # log-odds = log(p/(1 - p))
## [1] -0.6190392
11E2. If an event has log-odds 3.2, what is the probability of this event?
odds <- exp(3.2)       # exp(log-odds) recovers the odds
p <- odds/(1 + odds)   # convert odds back to a probability
p
## [1] 0.9608343
11E3. Suppose that a coefficient in a logistic regression has value 1.7. What does this imply about the proportional change in odds of the outcome?
# A unit increase in the predictor adds 1.7 to the log-odds of the outcome. On the odds scale this is a multiplicative change of exp(1.7), about 5.47: the odds become roughly 5.47 times larger.
coeff <- exp(1.7)
coeff
## [1] 5.473947
11E4. Why do Poisson regressions sometimes require the use of an offset? Provide an example.
# Poisson regression predicts a count outcome given one or more predictors. An offset is needed when the exposure (the window of time, area, or opportunity over which counts were recorded) varies across observations: adding log(exposure) to the linear model puts all observations on the same rate scale. Example: the number of failures for each machine, where different machines are observed for different lengths of time.
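# A minimal sketch of how an offset enters a rethinking model; the data frame,
# variable names, and exposure values below are hypothetical, for illustration only.
d_ex <- data.frame(
  failures = c(12, 3, 30, 7),       # hypothetical failure counts per machine
  hours    = c(100, 20, 300, 50)    # exposure (observation time) differs across machines
)
d_ex$log_hours <- log(d_ex$hours)   # the offset enters on the log scale

m_offset <- quap(
  alist(
    failures ~ dpois(lambda),
    log(lambda) <- log_hours + a,   # log(exposure) added with its coefficient fixed at 1
    a ~ dnorm(0, 1)
  ),
  data = d_ex
)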
11M1. As explained in the chapter, binomial data can be organized in aggregated and disaggregated forms, without any impact on inference. But the likelihood of the data does change when the data is converted between the two formats. Can you explain why?
# Aggregated and disaggregated forms support the same inferences, but the aggregated likelihood includes the multiplicity term choose(n, k), which counts all the orderings in which k successes can occur across n trials. On the log scale this term is an additive constant that does not involve the parameters, so the likelihood value changes between the two formats while the posterior is unaffected.
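# A quick numeric check with hypothetical values (n = 9 trials, k = 6 successes, p = 0.5):
# the two log-likelihoods differ by exactly the log binomial coefficient, lchoose(n, k).
n <- 9; k <- 6; p <- 0.5
agg   <- dbinom(k, size = n, prob = p, log = TRUE)    # aggregated: one binomial count
disag <- sum(dbinom(c(rep(1, k), rep(0, n - k)),
                    size = 1, prob = p, log = TRUE))  # disaggregated: n Bernoulli trials
agg - disag    # a constant offset that does not involve p
lchoose(n, k)  # same value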
11M2. If a coefficient in a Poisson regression has value 1.7, what does this imply about the change in the outcome?
# A unit increase in the predictor adds 1.7 to the log of the expected count, so the expected count lambda is multiplied by exp(1.7), about 5.47.
exp(1.7)
## [1] 5.473947
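# A one-line check with hypothetical intercept, slope, and predictor values: the ratio of
# Poisson means across a one-unit change in the predictor is exp(1.7) regardless of x.
a <- 0.5; b <- 1.7; x <- 2
exp(a + b * (x + 1)) / exp(a + b * x)   # equals exp(b), about 5.4739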
11M3. Explain why the logit link is appropriate for a binomial generalized linear model.
# The logit link is appropriate for a binomial generalized linear model because the binomial parameter p is a probability mass, so its value must lie between 0 and 1, while the linear model can take any real value. The logit link maps probabilities onto the whole real line, and its inverse maps the linear model back into (0, 1), so predicted probabilities can never leave their valid range.
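# A small demonstration: the inverse-logit squashes any real-valued linear model output
# into the (0, 1) interval, so predicted probabilities always stay valid.
inv_logit <- function(x) 1 / (1 + exp(-x))
inv_logit(c(-10, -1.7, 0, 1.7, 10))   # all values lie strictly between 0 and 1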
11M4. Explain why the log link is appropriate for a Poisson generalized linear model.
# The log link constrains the predicted values to positive real numbers. This fits the Poisson distribution, whose mean (the rate lambda) must be positive: exponentiating the linear model guarantees a positive rate no matter what values the predictors take.
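# Likewise, the inverse of the log link is exp(), which maps any real number to a
# strictly positive value, so the Poisson rate lambda can never go negative.
exp(c(-10, -1, 0, 1, 10))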
11M5. What would it imply to use a logit link for the mean of a Poisson generalized linear model? Can you think of a real research problem for which this would make sense?
# A logit link for the mean of a Poisson model would constrain lambda to lie between 0 and 1 (or between 0 and some maximum M, if scaled by M). That implies the expected count has a known ceiling. This could make sense when counts saturate, for example the number of daily visits drawn from a small, fixed pool of customers: the rate can approach but never exceed the pool size.
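# A sketch of the implied mean under such a link; the ceiling M and parameter values are
# hypothetical. Instead of growing without bound, lambda saturates at M.
a <- 0; b <- 1.2                        # hypothetical intercept and slope
x <- seq(-3, 3, by = 1)                 # hypothetical predictor values
M <- 20                                 # assumed known maximum rate
lambda <- M / (1 + exp(-(a + b * x)))   # scaled inverse-logit: 0 < lambda < M for any x
lambda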
11M6. State the constraints for which the binomial and Poisson distributions have maximum entropy. Are the constraints different at all for binomial and Poisson? Why or why not?
# Both distributions have maximum entropy under the same two constraints: only two unordered outcomes per trial, and a constant expected value. The constraints do not differ, because the Poisson is a special case of the binomial (many trials with a small probability of success per trial), so it maximizes entropy under exactly the same conditions.
11M7. Use quap to construct a quadratic approximate posterior distribution for the chimpanzee model that includes a unique intercept for each actor, m11.4 (page 330). Compare the quadratic approximation to the posterior distribution produced instead from MCMC. Can you explain both the differences and the similarities between the approximate and the MCMC distributions? Relax the prior on the actor intercepts to Normal(0,10). Re-estimate the posterior using both ulam and quap. Do the differences increase or decrease? Why?
data("chimpanzees")
df <- chimpanzees
df$recipient <- NULL
#Re-estimating using quap
m <- quap(
alist(
pulled_left ~ dbinom(1,p),
logit(p) <- a[actor] + (xp + xpc * condition)*prosoc_left,
a[actor] ~ dnorm(0,10),
xp ~ dnorm(0,10),
xpc ~ dnorm(0,10)
),
data = df
)
pairs(m)
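# The question also asks for the MCMC counterpart. A sketch of the same model refit with
# ulam so the two posteriors can be compared via precis(); the chain settings are illustrative.
dat <- list(
  pulled_left = df$pulled_left,
  actor       = df$actor,
  prosoc_left = df$prosoc_left,
  condition   = df$condition
)
m_mcmc <- ulam(
  alist(
    pulled_left ~ dbinom(1, p),
    logit(p) <- a[actor] + (xp + xpc * condition) * prosoc_left,
    a[actor] ~ dnorm(0, 10),
    xp ~ dnorm(0, 10),
    xpc ~ dnorm(0, 10)
  ),
  data = dat, chains = 4, cores = 4
)
precis(m, depth = 2)       # quadratic approximation
precis(m_mcmc, depth = 2)  # MCMC posterior
# Most intercepts should agree closely, but actor 2 always pulled left, so with the flat
# Normal(0, 10) prior its posterior is strongly skewed toward large values. quap's Gaussian
# approximation cannot capture that skew, so relaxing the prior increases the differences.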
11M8. Revisit the data(Kline) islands example. This time drop Hawaii from the sample and refit the models. What changes do you observe?
data("Kline")
d <- Kline
d$P <- scale(log(d$population))
d$contact_id <- ifelse(d$contact == "high", 2,1)
d
## culture population contact total_tools mean_TU P contact_id
## 1 Malekula 1100 low 13 3.2 -1.291473310 1
## 2 Tikopia 1500 low 22 4.7 -1.088550750 1
## 3 Santa Cruz 3600 low 24 4.0 -0.515764892 1
## 4 Yap 4791 high 43 5.0 -0.328773359 2
## 5 Lau Fiji 7400 high 33 5.0 -0.044338980 2
## 6 Trobriand 8000 high 19 4.0 0.006668287 2
## 7 Chuuk 9200 high 40 3.8 0.098109204 2
## 8 Manus 13000 low 28 6.6 0.324317564 1
## 9 Tonga 17500 high 55 5.4 0.518797917 2
## 10 Hawaii 275000 low 71 6.6 2.321008320 1
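# The question asks to actually drop Hawaii and refit. A sketch with quap, using the
# chapter-style interaction model and its priors; expect the slope estimates, especially
# for low-contact societies, to shrink once the highly influential Hawaii point is removed.
d2 <- d[d$culture != "Hawaii", ]
d2$P <- as.numeric(scale(log(d2$population)))   # re-standardize without Hawaii
dat2 <- list(T = d2$total_tools, P = d2$P, cid = d2$contact_id)
m_noHI <- quap(
  alist(
    T ~ dpois(lambda),
    log(lambda) <- a[cid] + b[cid] * P,
    a[cid] ~ dnorm(3, 0.5),
    b[cid] ~ dnorm(0, 0.2)
  ),
  data = dat2
)
precis(m_noHI, depth = 2)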
11H1. Use WAIC or PSIS to compare the chimpanzee model that includes a unique intercept for each actor, m11.4 (page 330), to the simpler models fit in the same section. Interpret the results.
# Refit the comparison set with quap (map is the deprecated alias for the same function)
m2 <- quap(
  alist(
    pulled_left ~ dbinom(1, p),
    logit(p) <- x,
    x ~ dnorm(0, 10)
  ),
  data = df
)
m3 <- quap(
  alist(
    pulled_left ~ dbinom(1, p),
    logit(p) <- x + xp * prosoc_left,
    x ~ dnorm(0, 10),
    xp ~ dnorm(0, 10)
  ),
  data = df
)
m4 <- quap(
  alist(
    pulled_left ~ dbinom(1, p),
    logit(p) <- x + (xp + xpc * condition) * prosoc_left,
    x ~ dnorm(0, 10),
    xp ~ dnorm(0, 10),
    xpc ~ dnorm(0, 10)
  ),
  data = df
)
m5 <- quap(
  alist(
    pulled_left ~ dbinom(1, p),
    logit(p) <- x[actor] + (xp + xpc * condition) * prosoc_left,
    x[actor] ~ dnorm(0, 10),
    xp ~ dnorm(0, 10),
    xpc ~ dnorm(0, 10)
  ),
  data = df
)
# Compare the models by WAIC using the compare() function
compare(m2, m3, m4, m5)
## WAIC SE dWAIC dSE pWAIC weight
## m5 549.4375 18.598806 0.0000 NA 15.041258 1.000000e+00
## m3 680.1908 9.299718 130.7533 18.10706 1.847183 4.048322e-29
## m4 682.2604 9.359484 132.8229 18.03792 2.957025 1.438361e-29
## m2 688.0756 7.075164 138.6381 18.89217 1.067437 7.854382e-31
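# m5, the model with a unique intercept for each actor, carries essentially all of the
# Akaike weight and sits about 130 WAIC units below the next best model, far more than the
# standard error of that difference. Its larger pWAIC (about 15) reflects the extra actor
# parameters, but the gain in fit dwarfs that penalty: handedness varies strongly across
# individual chimpanzees, so conditioning on actor matters far more than the prosocial
# treatment terms added in m3 and m4.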