7.24 The scatterplot below shows the relationship between the number of calories and amount of carbohydrates (in grams) Starbucks food menu items contain. Since Starbucks only lists the number of calories on the display items, we are interested in predicting the amount of carbs a menu item has based on its calorie content.
There is a weak to moderate positive relationship between the number of calories and the amount of carbohydrates. Both the fitted regression line and the residual plot show that the data points move farther from the line as calories increase, so the model's prediction error grows for higher-calorie items.
Carbohydrates is the response variable (on the y-axis), and calories is the explanatory variable (on the x-axis).
The regression line helps predict the amount of carbohydrates (in grams) a Starbucks menu food item contains from the number of calories in the item.
(d) Do these data meet the conditions for fitting the least squares line?
Linearity: The data should show a linear trend. The scatterplot shows a weak to moderate linear relationship.
Nearly normal residuals: The histogram of the residuals shows that they are slightly left skewed.
Constant variability: The residual plot suggests the residuals do not have constant variability. The data fit the linear model much better for lower-calorie items than for higher-calorie items, where the residuals are much larger (see the R sketch below).
Independent observations: Each observation is a different menu item, so the observations can reasonably be assumed to be independent.
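Below is a minimal sketch of how these diagnostic plots could be produced in R. It assumes a data frame named starbucks with columns calories and carb; the exercise does not give the dataset or column names, so these names are hypothetical.

# Fit the least squares line predicting carbs from calories (hypothetical data frame)
m <- lm(carb ~ calories, data = starbucks)
# Scatterplot with the fitted line (check linearity)
plot(carb ~ calories, data = starbucks)
abline(m)
# Residuals vs. fitted values (check constant variability)
plot(m$fitted.values, m$residuals)
abline(h = 0, lty = 2)
# Histogram of residuals (check nearly normal residuals)
hist(m$residuals)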
7.26 Body measurements, Part III. Exercise 7.15 introduces data on shoulder girth and height of a group of individuals. The mean shoulder girth is 107.20 cm with a standard deviation of 10.37 cm. The mean height is 171.14 cm with a standard deviation of 9.41 cm. The correlation between height and shoulder girth is 0.67.
The general equation for the regression line is y = B0 + B1 * x, where B0 is the y-intercept and B1 is the slope.
# Shoulder girth is the explanatory variable
shoulder.mean <- 107.20 # in cm
shoulder.SD <- 10.37 # in cm
# Height is the response
height.mean <- 171.14 # in cm
height.SD <- 9.41 # in cm
# R for correlation
R <- 0.67
# Calculate the slope (or otherwise known as B1)
B1 <- R * (height.SD/shoulder.SD)
# To calculate B0, use the point (x, y) = (107.20, 171.14). These are the means of x and y, and the point of means lies on the regression line.
# Now to rearrange the equation to solve for B0. B0 = y - B1*x
B0 <- 171.14 - B1 * 107.20
B1;B0
## [1] 0.6079749
## [1] 105.9651
height = 105.9651 + 0.6079749 * shoulder_girth
The intercept (105.9651) is the predicted height, in centimeters, for a shoulder girth of 0 cm. The slope (0.6079749) is the increase in predicted height, in centimeters, for each 1 cm increase in shoulder girth. Not all values make sense when plugged into this linear regression equation: for example, a person with a shoulder girth of 0 cm (which cannot happen) would have a predicted height of 105.965 cm.
R.squared <- R^2
paste("R squared: ", round(R.squared,3))
## [1] "R squared: 0.449"
This means that only 44.9% of the variation in height is explained by the linear model using shoulder girth. R-squared ranges from 0 (no fit) to 1 (perfect fit), so this model explains less than half of the variability and the fit is only moderate.
Yhat = B0 + B1 * x
st.shoulder <- 100 # in cm
st.height <- B0 + B1 * st.shoulder
paste("Estimated height of a student with a shoulder girth of 100 cm is: ", round(st.height,3), "cm.")
## [1] "Estimated height of a student with a shoulder girth of 100 cm is: 166.763 cm."
actual_value <- 160
predicted <- st.height
residual <- actual_value - predicted
paste("The residual is: ", round(residual,3))
## [1] "The residual is: -6.763"
The residual is the difference between the actual value and the predicted value.
For x = 1, find Yhat, i.e. the predicted height:
# Yhat = B0 + B1 X
X <- 1
yhat1 <- B0 + B1 * X
yhat1
## [1] 106.5731
yhat56 <- 56
x56 <- (yhat56 - B0)/B1
x56
## [1] -82.18281
residual <- yhat56 - yhat1
residual
## [1] -50.57306
In the original data set the explanatory variable (shoulder girth) took values roughly between 80 and 140 cm. A measurement of 56 cm is outside the sample range, so using the model here would require extrapolation and would not be appropriate. We can also see the residual is negative; a negative residual means the actual value was less than the predicted value.
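A minimal sketch of this extrapolation check in R, using the coefficients B0 and B1 fitted above and assuming the sample's shoulder girths ranged roughly from 80 cm to 140 cm (the approximate range noted above):

girth_range <- c(80, 140)   # approximate observed range of shoulder girth, in cm (assumption)
new_girth <- 56             # shoulder girth of the one-year-old, in cm
# Is the new value inside the observed range of the explanatory variable?
new_girth >= girth_range[1] & new_girth <= girth_range[2]   # FALSE: predicting here is extrapolation
# The prediction itself, for illustration only
B0 + B1 * new_girth         # about 140 cm, clearly unrealistic for a one-year-old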
7.30 Cats, Part I. The following regression output is for predicting the heart weight (in g) of cats from their body weight (in kg). The coefficients are estimated using a dataset of 144 domestic cats.
yhat = B0 + B1 * x, where B0 is the y-intercept and B1 is the slope. The fitted line is: Heart Weight (g) = -0.357 + 4.034 * Body Weight (kg)
The negative intercept of -0.357 means that a cat with a body weight of 0 kg would have a predicted heart weight of -0.357 g, which is not meaningful. For a body weight of 1 kg, the predicted heart weight is -0.357 + 4.034 = 3.677 g.
For an increment of 1 kg in body weight in a cat, the heart weight would increase by 4.034 g.
R2 = 64.66%, meaning the linear model explains about 65% of the variability in the heart weight (in g) of the cats.
The correlation coefficient is R. Since we know R2 = 0.6466, R = sqrt(0.6466) = 0.8041, and it is positive because the slope is positive.
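These numbers can be checked with a few lines of R, using only the estimates reported in the regression output:

b0 <- -0.357    # intercept from the output
b1 <- 4.034     # slope from the output
b0 + b1 * 0     # predicted heart weight (g) at 0 kg body weight: -0.357 (not meaningful)
b0 + b1 * 1     # predicted heart weight (g) for a 1 kg cat: 3.677
R2 <- 0.6466    # R-squared reported as 64.66%
sqrt(R2)        # correlation R: about 0.8041 (positive because the slope is positive)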
7.40 Rate my professor. Many college courses conclude by giving students the opportunity to evaluate the course and the instructor anonymously. However, the use of these student evaluations as an indicator of course quality and teaching effectiveness is often criticized because these measures may reflect the influence of non-teaching related characteristics, such as the physical appearance of the instructor. Researchers at University of Texas, Austin collected data on teaching evaluation score (higher score means better) and standardized beauty score (a score of 0 means average, negative score means below average, and a positive score means above average) for a sample of 463 professors. The scatterplot below shows the relationship between these variables, and also provided is a regression output for predicting teaching evaluation score from beauty score.
From the table we know the y-intercept B0 = 4.010. We also know the mean of the response variable (average teaching evaluation score), Yhat = 3.9983, and the mean of the explanatory variable (average standardized beauty score), x = -0.0883. Since the point of averages lies on the regression line:
3.9983 = 4.010 + B1 * (-0.0883)
B1 <- (3.9983 - 4.010)/(-0.0883)
paste("B1 or the slope: ", round(B1, 3))
## [1] "B1 or the slope: 0.133"
The scatterplot alone does not show a clear correlation between beauty and teaching evaluation scores. However, the p-value for the slope in the regression output is approximately 0, which indicates that the slope is statistically significantly different from zero, i.e., the beauty score does appear to be related to the evaluation score.
Linearity: From the scatterplot of the data, there may be a weak positive linear relationship. The normal Q-Q plot is roughly linear.
Nearly normal residuals: The residuals do appear to be nearly normal, judging by their distribution in the histogram.
Constant variability: Looking at the residual plot, there appears to be constant variability throughout the plot.
Independent observations : We do not have much information on how the data sample was collected beyond the fact that it was collected for 463 professors. We might assume independence of observations.