Baby weights, Part I. (9.1, p. 350) The Child Health and Development Studies investigate a range of topics. One study considered all pregnancies between 1960 and 1967 among women in the Kaiser Foundation Health Plan in the San Francisco East Bay area. Here, we study the relationship between smoking and weight of the baby. The variable smoke is coded 1 if the mother is a smoker, and 0 if not. The summary table below shows the results of a linear regression model for predicting the average birth weight of babies, measured in ounces, based on the smoking status of the mother.
The variability within the smokers and non-smokers is about equal, and the distributions are symmetric. With these conditions satisfied, it is reasonable to apply the model. (Note that we don’t need to check linearity since the predictor has only two levels.)
\(Baby \ Weight= 123.05 -8.94*smoke\)
The slope tells us that babies born to mothers who smoke weigh, on average, 8.94 ounces less than babies born to non-smoking mothers; the intercept, 123.05 ounces, is the predicted average birth weight for babies of non-smoking mothers.
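As a quick check of the fitted line, we can evaluate the equation at both levels of smoke; a small sketch (predict_weight is just an illustrative helper, not part of the summary table):

# Predicted average birth weight (in ounces) from the fitted equation above
predict_weight <- function(smoke) 123.05 - 8.94 * smoke
predict_weight(0)                      # non-smoking mothers: 123.05 ounces
predict_weight(1)                      # smoking mothers: 114.11 ounces
predict_weight(0) - predict_weight(1)  # difference: 8.94 ounces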
Absenteeism, Part I. (9.4, p. 352) Researchers interested in the relationship between absenteeism from school and certain demographic characteristics of children collected data from 146 randomly sampled students in rural New South Wales, Australia, in a particular school year. Below are three observations from this data set.
The summary table below shows the results of a linear regression model for predicting the average number of days absent based on ethnic background (eth: 0 - aboriginal, 1 - not aboriginal), sex (sex: 0 - female, 1 - male), and learner status (lrn: 0 - average learner, 1 - slow learner).
\(Absenteeism = 18.93 - 9.11*eth + 3.10*sex + 2.15*lrn\)
All else held constant, if the student is male, the predicted number of days absent increases by 3.10.
All else held constant, if the student is a slow learner, the predicted number of days absent increases by 2.15.
\(Absenteeism = 18.93 - 9.11*0 + 3.10*1 + 2.15*1\)
The predicted absenteeism is 24.18 days.
Residual = observed - predicted = 2 - 24.18 = -22.18 days.
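The same prediction and residual can be computed in R using the coefficients quoted above; a short sketch:

# First observation: eth = 0, sex = 1, lrn = 1, with 2 observed days absent
predicted <- 18.93 - 9.11 * 0 + 3.10 * 1 + 2.15 * 1  # 24.18 days
residual <- 2 - predicted                             # -22.18 days
residual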
# Calculating R^2 and adjusted R^2 from the residual variance (240.57) and
# the outcome variance (264.17); n = 146 observations, k = 3 predictors
r2 <- 1 - (240.57/264.17)
r2_a <- 1 - (240.57/264.17 * 145/142)
r2
## [1] 0.08933641
r2_a
## [1] 0.07009704
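The same calculation can be wrapped in a small helper so the formula is explicit; a sketch (adjusted_r2 is an illustrative function name, not from the textbook):

# Adjusted R^2 from residual variance, outcome variance, sample size n, and k predictors
adjusted_r2 <- function(var_res, var_out, n, k) 1 - (var_res / var_out) * (n - 1) / (n - k - 1)
adjusted_r2(240.57, 264.17, n = 146, k = 3)  # about 0.0701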
Absenteeism, Part II. (9.8, p. 357) Exercise above considers a model that predicts the number of days absent using three predictors: ethnic background (eth), gender (sex), and learner status (lrn). The table below shows the adjusted R-squared for the model as well as adjusted R-squared values for all models we evaluate in the first step of the backwards elimination process.
Which, if any, variable should be removed from the model first?
We start with the full model, drop one variable at a time, and record \(R^2_{adj}\) for each smaller model.
We then pick the smaller model with the highest \(R^2_{adj}\).
In this case, \(R^2_{adj}\) increases when we remove learner status, so the lrn variable should be removed first.
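To make the comparison concrete, here is a small sketch of this elimination step with placeholder adjusted \(R^2\) values (hypothetical numbers, not the actual table from the exercise):

# Hypothetical adjusted R^2 for the full model and for each model with one variable dropped
adj_r2 <- c(full = 0.070, drop_eth = 0.005, drop_sex = 0.065, drop_lrn = 0.072)
names(which.max(adj_r2[-1]))          # "drop_lrn": dropping lrn gives the highest adjusted R^2
adj_r2["drop_lrn"] > adj_r2["full"]   # TRUE, so lrn is removed in the first step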
Challenger disaster, Part I. (9.16, p. 380) On January 28, 1986, a routine launch was anticipated for the Challenger space shuttle. Seventy-three seconds into the flight, disaster happened: the shuttle broke apart, killing all seven crew members on board. An investigation into the cause of the disaster focused on a critical seal called an O-ring, and it is believed that damage to these O-rings during a shuttle launch may be related to the ambient temperature during the launch. The table below summarizes observational data on O-rings for 23 shuttle missions, where the mission order is based on the temperature at the time of the launch. Temp gives the temperature in Fahrenheit, Damaged represents the number of damaged O-rings, and Undamaged represents the number of O-rings that were not damaged.
temp <- c(53,57,58,63,66,67,67,67,68,69,70,70,70,70,72,73,75,75,76,76,78,79,81)
damaged <- c(5,1,1,1,0,0,0,0,0,0,1,0,1,0,0,0,0,1,0,0,0,0,0)
undamaged <- c(1,5,5,5,6,6,6,6,6,6,5,6,5,6,6,6,6,5,6,6,6,6,6)
df <- data.frame(temp, damaged, undamaged)
df
library(ggplot2)
ggplot(df, aes(temp, damaged)) +
  geom_point() +
  geom_smooth(method = "glm", se = FALSE)  # default gaussian family, so this draws a straight trend line
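For reference, the slope and intercept used below can be reproduced by fitting the logistic regression directly, treating each mission's O-rings as binomial counts of damaged vs. undamaged; a minimal sketch with base R's glm() (this chunk and the fit object name are not part of the original output):

# Logistic regression of O-ring damage on launch temperature
fit <- glm(cbind(damaged, undamaged) ~ temp, family = binomial, data = df)
summary(fit)  # the intercept and temp coefficient should be close to 11.6630 and -0.2162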
Based on the scatter plot above, we can see that when the temperature is lower the number of damaged O-rings is higher, and when the temperature is higher there are fewer damaged O-rings.
The model gives us a slope and an intercept. The temperature slope tells us that for every additional degree Fahrenheit, the log odds of an O-ring being damaged decrease by 0.2162, so the probability of damage decreases as temperature rises.
The intercept tells us that at a temperature of 0 degrees Fahrenheit the log odds of damage would be 11.6630; this is an extrapolation well outside the observed temperatures.
\(\log_{e}\left(\frac{p_{i}}{1-p_{i}}\right) = 11.6630 - 0.2162*Temperature\)
Yes, the lower the temperature the higher the probability of O-rings being damaged.
Challenger disaster, Part II. (9.18, p. 381) Exercise above introduced us to O-rings that were identified as a plausible explanation for the breakup of the Challenger space shuttle 73 seconds into takeoff in 1986. The investigation found that the ambient temperature at the time of the shuttle launch was closely related to the damage of O-rings, which are a critical component of the shuttle. See this earlier exercise if you would like to browse the original data.
\[\log_{e}\left(\frac{\hat{p}}{1-\hat{p}}\right) = 11.6630 - 0.2162*Temperature\]
where \(\hat{p}\) is the model-estimated probability that an O-ring will become damaged. Use the model to calculate the probability that an O-ring will become damaged at each of the following ambient temperatures: 51, 53, and 55 degrees Fahrenheit. The model-estimated probabilities for several additional ambient temperatures are provided below, where subscripts indicate the temperature:
\[\begin{align*} &\hat{p}_{57} = 0.341 && \hat{p}_{59} = 0.251 && \hat{p}_{61} = 0.179 && \hat{p}_{63} = 0.124 \\ &\hat{p}_{65} = 0.084 && \hat{p}_{67} = 0.056 && \hat{p}_{69} = 0.037 && \hat{p}_{71} = 0.024 \end{align*}\]
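As a sanity check, the quoted probabilities can be reproduced from the same logistic equation (a short verification sketch); the requested values for 51, 53, and 55 degrees follow in the next chunk.

# Reproduce the quoted probabilities for 57 to 71 degrees Fahrenheit
temps <- seq(57, 71, by = 2)
round(exp(11.6630 - 0.2162 * temps) / (1 + exp(11.6630 - 0.2162 * temps)), 3)
# should give 0.341 0.251 0.179 0.124 0.084 0.056 0.037 0.024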
# Model-estimated probability of damage at 51, 53, and 55 degrees Fahrenheit
p51 <- exp(11.6630-(0.2162*51))/(1 + exp(11.6630-(0.2162*51)))
p53 <- exp(11.6630-(0.2162*53))/(1 + exp(11.6630-(0.2162*53)))
p55 <- exp(11.6630-(0.2162*55))/(1 + exp(11.6630-(0.2162*55)))
p51
## [1] 0.6540297
p53
## [1] 0.5509228
p55
## [1] 0.4432456
ggplot(df, aes(temp, damaged)) +
  geom_point() +
  stat_smooth(method = "glm")
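stat_smooth(method = "glm") above uses a gaussian family by default, so it draws a straight line rather than the logistic curve. A small sketch (not part of the original output) that plots the model-estimated damage probabilities from the fitted equation, assuming the coefficients 11.6630 and -0.2162 quoted earlier:

# Model-estimated probability of O-ring damage over a grid of temperatures
curve_df <- data.frame(temp = seq(51, 81, by = 1))
curve_df$p_hat <- exp(11.6630 - 0.2162 * curve_df$temp) /
  (1 + exp(11.6630 - 0.2162 * curve_df$temp))

ggplot(curve_df, aes(temp, p_hat)) +
  geom_line() +
  labs(x = "Temperature (Fahrenheit)", y = "Model-estimated probability of damage")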
We assume a binomial distribution produced the outcome variable and therefore want to model p, the probability of success for a given set of predictors. The conditions for logistic regression are:
Each predictor x is linearly related to \(logit(p_{i})\) when all other predictors are held constant.
Each outcome y is independent of the other outcomes.