Grando 8 Homework

# set the working directory based on operating system
if (Sys.info()["sysname"] == "Windows") {
    setwd("~/Masters/DATA606/Week8/Homework")
} else {
    setwd("~/Documents/Masters/DATA606/Week8/Homework")
}
require(ggplot2)
## Loading required package: ggplot2

8.2 Baby weights, Part II. Exercise 8.1 introduces a data set on birth weight of babies. Another variable we consider is parity, which is 0 if the child is the first born, and 1 otherwise. The summary table below shows the results of a linear regression model for predicting the average birth weight of babies, measured in ounces, from parity.

(a) Write the equation of the regression line.

Answer:

\[\widehat{\text{birth weight}} = 123.05 - 1.93 \times \text{parity}\]

(b) Interpret the slope in this context, and calculate the predicted birth weight of first borns and others.

Answer:

The model predicts that a non-first-born child weighs 1.93 ounces less than a first-born child. The expected birth weight of first-born children is 123.05 ounces, and the expected birth weight of non-first-born children is 123.05 - 1.93 = 121.12 ounces.
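
A quick numeric check of those two predictions (a minimal sketch using the coefficients from the equation above):

b0 <- 123.05
b1 <- -1.93
# first borns (parity = 0) and non-first borns (parity = 1)
(weight_first <- b0 + b1 * 0)
## [1] 123.05
(weight_other <- b0 + b1 * 1)
## [1] 121.12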

(c) Is there a statistically significant relationship between the average birth weight and parity?

Answer:

\[H_0: \beta_1 = 0 \qquad\qquad H_A: \beta_1 \neq 0\]

Since a significance level has not been provided, we will use \(\alpha = 0.05\). The p-value of 0.1052 is greater than the significance level; therefore, we fail to reject the null hypothesis. There is not sufficient evidence of an association between parity and birth weight.

8.4 Absenteeism. Researchers interested in the relationship between absenteeism from school and certain demographic characteristics of children collected data from 146 randomly sampled students in rural New South Wales, Australia, in a particular school year. Below are three observations from this data set.

The summary table below shows the results of a linear regression model for predicting the average number of days absent based on ethnic background (eth: 0 - aboriginal, 1 - not aboriginal), sex (sex: 0 - female, 1 - male), and learner status (lrn: 0 - average learner, 1 - slow learner).

(a) Write the equation of the regression line.

Answer:

\[\widehat{\text{days absent}} = 18.93 - 9.11 \times \text{eth} + 3.10 \times \text{sex} + 2.15 \times \text{lrn}\]

(b) Interpret each one of the slopes in this context.

Answer:

Ethnic background: the model predicts that a non-aboriginal child is absent 9.11 fewer days than an aboriginal child, all else held constant.

Sex: the model predicts that a male child is absent 3.10 more days than a female child, all else held constant.

Learner status: the model predicts that a slow learner is absent 2.15 more days than an average learner, all else held constant.
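
For reference, a model of this form could be fit directly in R. The exercise data appear to correspond to the quine data set in the MASS package (146 students from rural New South Wales), so a hedged sketch, assuming that correspondence holds, is:

require(MASS)
# days absent modeled on ethnicity, sex, and learner status;
# R's factor coding (EthN, SexM, LrnSL) matches the 0/1 coding in the exercise
quine_model <- lm(Days ~ Eth + Sex + Lrn, data = quine)
summary(quine_model)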

(c) Calculate the residual for the first observation in the data set: a student who is aboriginal, male, a slow learner, and missed 2 days of school.

Answer:

# predicted days absent for the first observation:
# aboriginal (eth = 0), male (sex = 1), slow learner (lrn = 1)
y_predict <- 18.93 + 0 * -9.11 + 1 * 3.1 + 1 * 2.15
y_observed <- 2
# residual = observed - predicted
y_observed - y_predict
## [1] -22.18

(d) The variance of the residuals is 240.57, and the variance of the number of absent days for all students in the data set is 264.17. Calculate the \(R^2\) and the adjusted \(R^2\). Note that there are 146 observations in the data set.

Answer:
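
For reference, the calculations below use the variance-based definitions (standard formulas, stated here to make the code self-explanatory):

\[R^2 = 1 - \frac{Var(e_i)}{Var(y_i)} \qquad\qquad R^2_{adj} = 1 - \frac{Var(e_i)}{Var(y_i)} \times \frac{n-1}{n-k-1}\]

where \(n\) is the number of observations and \(k\) is the number of predictors.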

R-squared value:

var_resid <- 240.57
var_outcome <- 264.17
1 - var_resid/var_outcome
## [1] 0.08933641

Adjusted R-squared value:

n_val <- 146
k_val <- 3
1 - (var_resid/var_outcome) * ((n_val - 1)/(n_val - k_val - 1))
## [1] 0.07009704

8.8 Absenteeism, Part II. Exercise 8.4 considers a model that predicts the number of days absent using three predictors: ethnic background (eth), gender (sex), and learner status (lrn). The table below shows the adjusted R-squared for the model as well as adjusted R-squared values for all models we evaluate in the first step of the backwards elimination process.

Which, if any, variable should be removed from the model first?

Answer:

Learner status (lrn) should be removed first, since the model without it has the highest adjusted R-squared.
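
This first elimination step could be checked roughly as follows (a sketch under the same assumption as above, that the exercise data correspond to MASS::quine):

require(MASS)
# adjusted R-squared for the full model and for each model that drops one predictor
summary(lm(Days ~ Eth + Sex + Lrn, data = quine))$adj.r.squared
summary(lm(Days ~ Sex + Lrn, data = quine))$adj.r.squared  # drop eth
summary(lm(Days ~ Eth + Lrn, data = quine))$adj.r.squared  # drop sex
summary(lm(Days ~ Eth + Sex, data = quine))$adj.r.squared  # drop lrn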

8.16 Challenger disaster, Part I.

(a) Each column of the table above represents a different shuttle mission. Examine these data and describe what you observe with respect to the relationship between temperatures and damaged O-rings.

Answer:

First, I will take the data that is available from openintro.org

# O-ring data downloaded from openintro.org
ch_data <- read.table("orings.txt", header = TRUE)
names(ch_data) <- c("temperature", "damaged")
ch_data$mission <- 1:nrow(ch_data)
# undamaged O-rings per mission
ch_data$undamaged <- 7 - ch_data$damaged

Plots for temperature vs orings:

ggplot(ch_data, aes(y = damaged, x = temperature)) + geom_point()

It appears that lower temperatures may be associated with a higher probability of O-ring damage.
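
As a rough numeric check (not part of the exercise), the sample correlation between temperature and the damage count should come out negative:

cor(ch_data$temperature, ch_data$damaged)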

(b) Failures have been coded as 1 for a damaged O-ring and 0 for an undamaged O-ring, and a logistic regression model was fit to these data. A summary of this model is given below. Describe the key components of this summary table in words.

Answer:

The temperature coefficient has a p-value less than 0.05; therefore, temperature appears to be a statistically significant predictor of O-ring damage. A 95% confidence interval for the log odds ratio (the slope) is:

# 95% CI: point estimate +/- 1.96 * SE, using values from the model summary table
(ci_logoddratio_lower <- -0.2162 - 1.96 * 0.0532)
## [1] -0.320472
(ci_logoddratio_higher <- -0.2162 + 1.96 * 0.0532)
## [1] -0.111928

The corresponding 95% confidence interval for the odds ratio is:

(ci_oddsratio_lower <- exp(ci_logoddratio_lower))
## [1] 0.7258064
(ci_oddsratio_higher <- exp(ci_logoddratio_higher))
## [1] 0.8941086

For every one degree Fahrenheit increase in temperature, the log odds of failure decreases by 0.2162. At a temperature of zero degrees, the intercept puts the log odds of failure at 11.6630, although this is an extrapolation well outside the observed temperatures.
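
A model like the one summarized in the table can also be refit from the counts assembled earlier (a sketch; the cbind(damaged, undamaged) response is one way glm() accepts binomial count data, and the estimates may differ slightly from the table depending on how the outcomes were originally coded):

ch_glm <- glm(cbind(damaged, undamaged) ~ temperature, family = binomial, 
    data = ch_data)
summary(ch_glm)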

(c) Write out the logistic model using the point estimates of the model parameters.

Answer:

\[\log\left(\frac{\hat{p}}{1-\hat{p}}\right) = 11.6630 - 0.2162 \times \text{Temperature}\]

(d) Based on the model, do you think concerns regarding O-rings are justified? Explain.

Answer:

# predicted probability of damage at the minimum observed temperature (53F)
(p53 <- exp(11.663 + 53 * -0.2162)/(1 + exp(11.663 + 53 * -0.2162)))
## [1] 0.5509228
# predicted probability of damage at the maximum observed temperature (81F)
(p81 <- exp(11.663 + 81 * -0.2162)/(1 + exp(11.663 + 81 * -0.2162)))
## [1] 0.002873921

Yes. The p-value shows that temperature is a significant predictor of O-ring failure. Additionally, the predicted probability of damage at the minimum temperature in the data set (53°F) is about 0.55, while at the maximum temperature (81°F) it is only about 0.003, which is a practically significant difference.

8.18 Challenger disaster, Part II.

(a) The data provided in the previous exercise are shown in the plot. The logistic model fit to these data may be written as

\[\log\left(\frac{\hat{p}}{1-\hat{p}}\right) = 11.6630 - 0.2162 \times \text{Temperature}\]

where \(\hat{p}\) is the model-estimated probability that an O-ring will become damaged. Use the model to calculate the probability that an O-ring will become damaged at each of the following ambient temperatures: 51, 53, and 55 degrees Fahrenheit. The model-estimated probabilities for several additional ambient temperatures are provided below, where subscripts indicate the temperature:

Answer:

The probability that there will be an o-ring failure is summarized for the following temperatures:

# 51F:
(p51 <- exp(11.663 + 51 * -0.2162)/(1 + exp(11.663 + 51 * -0.2162)))
## [1] 0.6540297
# 53F
(p53 <- exp(11.663 + 53 * -0.2162)/(1 + exp(11.663 + 53 * -0.2162)))
## [1] 0.5509228
# 55F
(p55 <- exp(11.663 + 55 * -0.2162)/(1 + exp(11.663 + 55 * -0.2162)))
## [1] 0.4432456
# 57F
(p57 <- exp(11.663 + 57 * -0.2162)/(1 + exp(11.663 + 57 * -0.2162)))
## [1] 0.3406498
# 59F
(p59 <- exp(11.663 + 59 * -0.2162)/(1 + exp(11.663 + 59 * -0.2162)))
## [1] 0.2510914
# 61F
(p61 <- exp(11.663 + 61 * -0.2162)/(1 + exp(11.663 + 61 * -0.2162)))
## [1] 0.1786971
# 63F
(p63 <- exp(11.663 + 63 * -0.2162)/(1 + exp(11.663 + 63 * -0.2162)))
## [1] 0.123727
# 65F
(p65 <- exp(11.663 + 65 * -0.2162)/(1 + exp(11.663 + 65 * -0.2162)))
## [1] 0.08393843
# 67F
(p67 <- exp(11.663 + 67 * -0.2162)/(1 + exp(11.663 + 67 * -0.2162)))
## [1] 0.05612566
# 69F
(p69 <- exp(11.663 + 69 * -0.2162)/(1 + exp(11.663 + 69 * -0.2162)))
## [1] 0.03715479
# 71F
(p71 <- exp(11.663 + 71 * -0.2162)/(1 + exp(11.663 + 71 * -0.2162)))
## [1] 0.02443024

(b) Add the model-estimated probabilities from part (a) on the plot, then connect these dots using a smooth curve to represent the model-estimated probabilities.

Answer:

# model-estimated probability of O-ring damage at a given temperature
ch_fit <- function(temp) {
    exp(11.663 + temp * -0.2162)/(1 + exp(11.663 + temp * -0.2162))
}

# predicted probabilities at the temperatures from part (a)
ch_df <- data.frame(temp = seq(51, 71, by = 2))
ch_df$predicteddamage <- ch_fit(ch_df$temp)
# plot the predicted probabilities with a smooth curve; the warning below is
# expected because stat_smooth refits a binomial glm to fitted probabilities
# rather than to raw 0/1 outcomes
ggplot(ch_df, aes(y = predicteddamage, x = temp)) + geom_point() + 
    stat_smooth(method = "glm", method.args = list(family = "binomial"), 
        se = FALSE)
## Warning: non-integer #successes in a binomial glm!
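
Since the points being plotted are already model-estimated probabilities, the warning can be avoided by simply connecting them with geom_line() instead of refitting a glm (an alternative sketch using the same data frame):

ggplot(ch_df, aes(y = predicteddamage, x = temp)) + geom_point() + geom_line()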

(c) Describe any concerns you may have regarding applying logistic regression in this application, and note any assumptions that are required to accept the model’s validity.

Answer:

My main concern is that each outcome may not be independent of the other outcomes (assumption 2 below): with so few launches, it is likely that upgrades and/or modifications were made to most of the launch equipment between flights, so we cannot rule out other time-dependent factors affecting the results. Additionally, we should check that the predictor is linearly related to the logit of the damage probability (assumption 1 below; see the sketch after the list).

The assumptions that must be met are the following:

  1. Each predictor \(x_i\) is linearly related to \(logit(p_i)\) if all other predictors are held constant.

  2. Each outcome \(Y_i\) is independent of the other outcomes.
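
One rough way to eyeball assumption 1 with these data is to plot adjusted empirical logits of the per-mission damage proportions against temperature (a sketch only; the 0.5 adjustment is a common device to avoid log(0) on missions with no damaged O-rings):

# adjusted empirical logit for each mission
ch_data$emp_logit <- log((ch_data$damaged + 0.5)/(ch_data$undamaged + 0.5))
ggplot(ch_data, aes(x = temperature, y = emp_logit)) + geom_point() + 
    stat_smooth(method = "lm", se = FALSE)

A roughly linear trend in this plot would be consistent with assumption 1.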