Nutrition at Starbucks, Part I. (8.22, p. 326) The scatterplot below shows the relationship between the number of calories and amount of carbohydrates (in grams) Starbucks food menu items contain. Since Starbucks only lists the number of calories on the display items, we are interested in predicting the amount of carbs a menu item has based on its calorie content.
(a):
There is a positive, linear relationship between calories and carbohydrates. The relationship is moderate: neither strong nor weak.
(b):
The response variable is Carbs (grams); the explanatory variable is Calories.
(c):
We may want to fit a regression line to these data if our goal is to predict the amount of carbohydrates (the response) from the number of calories (the explanatory variable), and if the residuals are well behaved and the relationship between the two variables is linear.
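A minimal sketch of fitting such a line in R, assuming the starbucks data frame from the openintro package with calories and carb as the column names (both names are assumptions):

library(openintro)
# fit the least squares line: carbs as a function of calories
# (column names carb / calories are assumptions about the starbucks data)
m_carb <- lm(carb ~ calories, data = starbucks)
summary(m_carb)
# scatterplot with the fitted line overlaid
plot(carb ~ calories, data = starbucks,
     xlab = "Calories", ylab = "Carbs (grams)", pch = 19)
abline(m_carb, lwd = 2)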
(d):
Conditions for the least squares line are:
1- Linearity: The relationship between the explanatory and the response variable should be linear.
2- Nearly Normal Residuals: The residuals should be nearly normal. This condition may not be satisfied when there are unusual observations that don't follow the trend of the rest of the data. We can check normality with a histogram of the residuals.
3- Constant Variability: The variability of points around the least squares line should be roughly constant. This implies that the variability of the residuals around the 0 line should be roughly constant as well. We can check this with a residual plot. (A sketch of these checks follows this list.)
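A sketch of checking these conditions in R, assuming the same starbucks data and the m_carb model fit above:

# 1. linearity and constant variability: residuals vs. fitted values
plot(m_carb$fitted.values, m_carb$residuals,
     xlab = "Fitted values", ylab = "Residuals", pch = 19)
abline(h = 0, lty = 2)
# 2. nearly normal residuals: histogram and normal Q-Q plot
hist(m_carb$residuals, main = "Histogram of residuals", xlab = "Residuals")
qqnorm(m_carb$residuals)
qqline(m_carb$residuals)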
Body measurements, Part I. (8.13, p. 316) Researchers studying anthropometry collected body girth measurements and skeletal diameter measurements, as well as age, weight, height and gender for 507 physically active individuals. The scatterplot below shows the relationship between height and shoulder girth (over deltoid muscles), both measured in centimeters.
(a):
The response variable is height (cm); the explanatory variable is shoulder girth (cm). There is a moderately strong, positive, linear relationship between these two variables.
(b):
sho_gi_in <- bdims$sho.gi * 0.393701 # convert shoulder girth from cm to inches (keep the original column in cm)
plot(bdims$hgt ~ sho_gi_in,
xlab = "Shoulder girth (inch)", ylab = "Height (cm)",
pch = 19, col = COL[1,2])
The relationship stays the same; changing the units only rescales the x-axis. There is still a moderately strong, positive, linear relationship between shoulder girth and height.
Body measurements, Part III. (8.24, p. 326) Exercise above introduces data on shoulder girth and height of a group of individuals. The mean shoulder girth is 107.20 cm with a standard deviation of 10.37 cm. The mean height is 171.14 cm with a standard deviation of 9.41 cm. The correlation between height and shoulder girth is 0.67.
(a):
Slope Calculation:
\(b_{1}= (s_{y}/s_{x})R\)
Intercept:
\(b_{0}=\overline{y}-b_{1}\overline{x}\)
# calculating slope and intercept
# shoulder girth is explanatory and height is response variable
mean_shoulder <- 107.20
mean_height <- 171.14
sd_shoulder <- 10.37
sd_height <- 9.41
r <- 0.67
slope <- (sd_height/sd_shoulder)*r
intercept <- mean_height- (slope*mean_shoulder)
slope
## [1] 0.6079749
intercept
## [1] 105.9651
predicted height = 105.9651 + (0.6079749 * shoulder girth)
(b):
slope = 0.61
For each additional centimeter of shoulder girth, we would expect height to increase on average by about 0.61 cm.
intercept = 105.97
The intercept is where the regression line intersects the y-axis. An individual with a shoulder girth of 0 cm (which does not make physical sense, but is the literal interpretation of the intercept) would be expected to have a height of about 105.97 cm.
(c):
\(R^2\) measures the strength of the fit of a linear model; it is calculated as the square of the correlation coefficient.
rsquared <- 0.67^2
rsquared
## [1] 0.4489
About 44.89% of the variability in height is explained by shoulder girth.
(d):
predicted_height <- intercept + (slope * 100) # shoulder girth of 100 cm
predicted_height
## [1] 166.7626
(e):
residual <- 160 - predicted_height # observed height (160 cm) minus predicted height
residual
## [1] -6.762581
The residual is -6.76 cm: the model overestimated this person's height by about 6.76 cm.
(f):
A shoulder girth of 56 cm lies well outside the range of the data used to fit the model, so it would not be appropriate to use this linear model to predict the height of this child; doing so would be extrapolation.
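As a quick sanity check, one could compare 56 cm to the shoulder girths actually observed in the data (a sketch, assuming the bdims data frame and the sho.gi column in centimeters, as above):

range(bdims$sho.gi) # observed shoulder girths (cm); 56 cm lies far below this range, so predicting here is extrapolation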
Cats, Part I. (8.26, p. 327) The following regression output is for predicting the heart weight (in g) of cats from their body weight (in kg). The coefficients are estimated using a dataset of 144 domestic cats.
(a):
predicted heart weight (g) = intercept + (slope * body weight (kg))
predicted heart weight (g) = -0.357 + (4.034 * body weight (kg))
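A small sketch turning this equation into an R prediction function; the function name and the 3 kg example value are hypothetical:

# prediction function built from the regression output above
predict_heart_weight <- function(body_weight_kg) {
  -0.357 + 4.034 * body_weight_kg
}
predict_heart_weight(3) # predicted heart weight (g) for a hypothetical 3 kg cat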
(b):
A cat with a body weight of 0 kg (which does not make physical sense, but is the literal interpretation of the intercept) would be expected to have a heart weight of -0.357 g.
(c):
For each additional kilogram of body weight, we would expect heart weight to increase on average by 4.034 g.
(d):
64.66% of the variability in heart weight is explained by body weight.
(e):
corr <- sqrt(0.6466) # r = sqrt(R^2); the sign is positive because the slope is positive
corr
## [1] 0.8041144
Rate my professor. (8.44, p. 340) Many college courses conclude by giving students the opportunity to evaluate the course and the instructor anonymously. However, the use of these student evaluations as an indicator of course quality and teaching effectiveness is often criticized because these measures may reflect the influence of non-teaching related characteristics, such as the physical appearance of the instructor. Researchers at University of Texas, Austin collected data on teaching evaluation score (higher score means better) and standardized beauty score (a score of 0 means average, negative score means below average, and a positive score means above average) for a sample of 463 professors. The scatterplot below shows the relationship between these variables, and also provided is a regression output for predicting teaching evaluation score from beauty score.
(a):
Since the least squares line always passes through the point of averages \((\overline{x}, \overline{y})\), the slope can be recovered from the reported intercept: \(b_{1}=(\overline{y}-b_{0})/\overline{x}\).
\(\overline{x}=-0.0883\)
\(\overline{y}=3.9983\)
x <- -0.0883          # average beauty score
y <- 3.9983           # average teaching evaluation score
intercept_2 <- 4.010  # intercept from the regression output
slope_2 <- (y - intercept_2)/x
slope_2
## [1] 0.1325028
(b):
The slope is positive, so the relationship between teaching evaluation score and beauty score is positive; the scatterplot shows the same (weakly) positive trend.
(c):
1- Linearity: The relationship between the explanatory and the response variable should be linear. This condition is met: the relationship is weak, but it appears linear.
2- Nearly Normal Residuals: The residuals should be nearly normal. This condition may not be satisfied when there are unusual observations that don't follow the trend of the rest of the data. This condition is met: the histogram of the residuals shows an approximately normal distribution.
3- Constant Variability: The variability of points around the least squares line should be roughly constant, which implies that the variability of the residuals around the 0 line should be roughly constant as well. This condition is met: the residual plot and the normal probability (Q-Q) plot show only a few mild outliers at either end, none of them extreme. (A sketch of these diagnostic plots follows this list.)
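A sketch of these diagnostics in R, assuming the evals data frame from the openintro package with columns score (evaluation score) and bty_avg (average beauty score); the column names are assumptions:

# (column names score / bty_avg are assumptions about the evals data)
m_eval <- lm(score ~ bty_avg, data = evals)
# residuals vs. fitted values: linearity and constant variability
plot(m_eval$fitted.values, m_eval$residuals,
     xlab = "Fitted values", ylab = "Residuals", pch = 19)
abline(h = 0, lty = 2)
# nearly normal residuals: histogram and normal Q-Q plot
hist(m_eval$residuals, main = "Histogram of residuals", xlab = "Residuals")
qqnorm(m_eval$residuals)
qqline(m_eval$residuals)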