Chapter 7 - Introduction to Linear Regression

Graded Questions: 7.24, 7.26, 7.30, 7.40

7.24 Nutrition at Starbucks, Part I.

The scatterplot below shows the relationship between the number of calories and amount of carbohydrates (in grams) Starbucks food menu items contain. Since Starbucks only lists the number of calories on the display items, we are interested in predicting the amount of carbs a menu item has based on its calorie content.

library(ggplot2)

# stringsAsFactors must be passed to read.csv itself; wrapping the result in
# data.frame() afterwards does not undo the factor conversion
starbucks <- read.csv("DATA606/Chapters/Ch7/starbucks.csv", stringsAsFactors = FALSE)
head(starbucks,10)
##                           item calories fat carb fiber protein   type
## 1                 8-Grain Roll      350   8   67     5      10 bakery
## 2            Apple Bran Muffin      350   9   64     7       6 bakery
## 3                Apple Fritter      420  20   59     0       5 bakery
## 4              Banana Nut Loaf      490  19   75     4       7 bakery
## 5  Birthday Cake Mini Doughnut      130   6   17     0       0 bakery
## 6            Blueberry Oat Bar      370  14   47     5       6 bakery
## 7              Blueberry Scone      460  22   61     2       7 bakery
## 8   Bountiful Blueberry Muffin      370  14   55     0       6 bakery
## 9            Butter Croissant       310  18   32     0       5 bakery
## 10               Cheese Danish      420  25   39     0       7 bakery
  (a) Describe the relationship between number of calories and amount of carbohydrates (in grams) that Starbucks food menu items contain.
sp <- ggplot(starbucks, aes(x=calories, y=carb))
sp + geom_point() + stat_smooth(method="lm", se=TRUE)

cor(starbucks$calories, starbucks$carb)
## [1] 0.674999

There is a positive, moderately strong linear association between the number of calories and the amount of carbohydrates.

m1 <- lm(carb ~ calories, data = starbucks)
summary(m1)
## 
## Call:
## lm(formula = carb ~ calories, data = starbucks)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -31.477  -7.476  -1.029  10.127  28.644 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  8.94356    4.74600   1.884   0.0634 .  
## calories     0.10603    0.01338   7.923 1.67e-11 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 12.29 on 75 degrees of freedom
## Multiple R-squared:  0.4556, Adjusted R-squared:  0.4484 
## F-statistic: 62.77 on 1 and 75 DF,  p-value: 1.673e-11

Based on the summary, the regression line formula is:

\[ \hat{y} = 8.94356 + 0.10603 \times \text{calories} \]
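
As a quick cross-check, the fitted intercept and slope can be read directly off the model object; they should match the Estimate column in the summary above.

# Intercept (about 8.944) and slope (about 0.106) from the fitted model
coef(m1)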

  (b) In this scenario, what are the explanatory and response variables?

Explanatory: number of calories. Response: amount of carbohydrates (in grams).

  (c) Why might we want to fit a regression line to these data?

A regression line lets us predict the amount of carbohydrates in a menu item from its calorie count, which is the only quantity Starbucks displays. This may be useful for people following a particular diet.
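
For instance, under the fitted model above, the prediction for a hypothetical 400-calorie item (an arbitrary example value) would be roughly 51 g of carbohydrates:

# Predicted carbs for a hypothetical 400-calorie item:
# about 8.94 + 0.106 * 400 = 51.4 g
predict(m1, newdata = data.frame(calories = 400))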

  (d) Do these data meet the conditions required for fitting a least squares line?
# Residuals taken directly from the fitted model
starbucks$residuals <- resid(m1)

sp <- ggplot(starbucks, aes(x=calories, y=residuals))
sp + geom_point() + geom_hline(yintercept=0, linetype="dashed")

# Histogram of residuals, to check the nearly normal residuals condition
hist(starbucks$residuals)

No; the constant variability condition appears to be violated. The residual plot shows less scatter around the line for items under 200 calories than for higher-calorie items. Linearity (the data show a linear trend) and nearly normal residuals (as seen in the histogram of residuals above) do appear to be met.
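
Base R's built-in lm diagnostics give a quick second look at these conditions; a minimal sketch (plot.lm draws residuals-vs-fitted and normal Q-Q panels, among others):

# Standard diagnostic panels for the fitted model
par(mfrow = c(2, 2))
plot(m1)
par(mfrow = c(1, 1))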

7.26 Body measurements, Part III.

Exercise 7.15 introduces data on shoulder girth and height of a group of individuals. The mean shoulder girth is 107.20 cm with a standard deviation of 10.37 cm. The mean height is 171.14 cm with a standard deviation of 9.41 cm. The correlation between height and shoulder girth is 0.67.
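
These summary statistics alone determine the least squares line: the slope is the correlation times the ratio of the standard deviations, and the line passes through the point of averages:

\[ b_1 = r \frac{s_y}{s_x}, \qquad b_0 = \bar{y} - b_1 \bar{x} \]

A sketch of that computation with the rounded values quoted above (the results differ slightly from the lm fit below only because of rounding):

# Summary statistics as given in the exercise
r     <- 0.67
x_bar <- 107.20; s_x <- 10.37   # shoulder girth (cm)
y_bar <- 171.14; s_y <- 9.41    # height (cm)

b1 <- r * s_y / s_x        # slope: about 0.608
b0 <- y_bar - b1 * x_bar   # intercept: about 105.96
c(b0, b1)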

# As above, stringsAsFactors belongs in the read call itself
bdims <- read.table("DATA606/Chapters/Ch7/bdims.txt", header = TRUE, stringsAsFactors = FALSE)
bdims <- subset(bdims, select = c(hgt, sho.gi))
head(bdims,10)
##      hgt sho.gi
## 1  174.0  106.2
## 2  175.3  110.5
## 3  193.5  115.1
## 4  186.5  104.5
## 5  187.2  107.5
## 6  181.5  119.8
## 7  184.0  123.5
## 8  184.5  120.4
## 9  175.0  111.0
## 10 184.0  119.5
cor(bdims$hgt, bdims$sho.gi)
## [1] 0.6657353
sp <- ggplot(bdims, aes(x=sho.gi, y=hgt))
sp + geom_point() + stat_smooth(method="lm", se=TRUE)

  (a) Write the equation of the regression line for predicting height.
m2 <- lm(hgt ~ sho.gi, data = bdims)
summary(m2)
## 
## Call:
## lm(formula = hgt ~ sho.gi, data = bdims)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -19.2297  -4.7976  -0.1142   4.7885  21.0979 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 105.83246    3.27245   32.34   <2e-16 ***
## sho.gi        0.60364    0.03011   20.05   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 7.026 on 505 degrees of freedom
## Multiple R-squared:  0.4432, Adjusted R-squared:  0.4421 
## F-statistic:   402 on 1 and 505 DF,  p-value: < 2.2e-16

Based on the summary, the regression line formula is:

\[ \hat{y} = 105.83246 + 0.60364 \times \text{sho.gi} \]

  (b) Interpret the slope and the intercept in this context.

b1: For each additional cm in shoulder girth, the model predicts an additional 0.60364 cm in height.

b0: When the shoulder girth is 0 cm, the height is expected to be 105.83246 cm. A shoulder girth of 0 cm does not make sense in this context; here the y-intercept serves only to adjust the height of the line and is meaningless by itself.

  (c) Calculate \(R^2\) of the regression line for predicting height from shoulder girth, and interpret it in the context of the application.

Multiple R-squared is 0.4432: about 44% of the variability in height is accounted for by shoulder girth under this model.
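
Since this is a simple regression with a single predictor, \(R^2\) is just the square of the correlation coefficient, which provides a quick sanity check:

# R-squared equals the squared correlation in simple linear regression
cor(bdims$hgt, bdims$sho.gi)^2   # about 0.4432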

  (d) A randomly selected student from your class has a shoulder girth of 100 cm. Predict the height of this student using the model.
s.sho.gi <- 100
pred.s.hgt <- 105.83246 + 0.60364 * s.sho.gi
pred.s.hgt
## [1] 166.1965
  (e) The student from part (d) is 160 cm tall. Calculate the residual, and explain what this residual means.
actual.s.hgt <- 160
s.residual <- actual.s.hgt - pred.s.hgt
s.residual
## [1] -6.19646

It means the student is about 6.2 cm shorter than the model predicts; a negative residual means the model overestimates this student's height.

  (f) A one year old has a shoulder girth of 56 cm. Would it be appropriate to use this linear model to predict the height of this child?
min(bdims$sho.gi)
## [1] 85.9

No; this would require extrapolation, since a 56 cm shoulder girth is well below the minimum (85.9 cm) observed in the data.

7.30 Cats, Part I.

The following regression output is for predicting the heart weight (in g) of cats from their body weight (in kg). The coefficients are estimated using a dataset of 144 domestic cats.

cats <- read.csv("DATA606/Chapters/Ch7/cats.csv")
head(cats,10)
##    Sex Bwt Hwt
## 1    F 2.0 7.0
## 2    F 2.0 7.4
## 3    F 2.0 9.5
## 4    F 2.1 7.2
## 5    F 2.1 7.3
## 6    F 2.1 7.6
## 7    F 2.1 8.1
## 8    F 2.1 8.2
## 9    F 2.1 8.3
## 10   F 2.1 8.5
sp <- ggplot(cats, aes(x=Bwt, y=Hwt))
sp + geom_point() + stat_smooth(method="lm", se=TRUE)

m3 <- lm(Hwt ~ Bwt, data = cats)
summary(m3)
## 
## Call:
## lm(formula = Hwt ~ Bwt, data = cats)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.5694 -0.9634 -0.0921  1.0426  5.1238 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  -0.3567     0.6923  -0.515    0.607    
## Bwt           4.0341     0.2503  16.119   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.452 on 142 degrees of freedom
## Multiple R-squared:  0.6466, Adjusted R-squared:  0.6441 
## F-statistic: 259.8 on 1 and 142 DF,  p-value: < 2.2e-16
  (a) Write out the linear model.

Based on the summary, the regression line formula is:

\[ \hat{y} = -0.3567 + 4.0341 \times \text{Bwt} \]

  (b) Interpret the intercept.

b0: When the cat's body weight is 0 kg, the heart weight is expected to be -0.3567 g.

It makes sense neither to have a cat with 0 kg body weight nor a heart of negative weight. Here, the y-intercept serves only to adjust the height of the line and is meaningless by itself.

  (c) Interpret the slope.

b1: For each additional kg of body weight, the model predicts an additional 4.0341 g of heart weight.

  (d) Interpret \(R^2\).

Multiple R-squared is 0.6466: about 65% of the variability in heart weight is accounted for by body weight under this model.

  (e) Calculate the correlation coefficient.
cor(cats$Bwt, cats$Hwt)
## [1] 0.8041274
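
The correlation can also be recovered from the regression output alone: in simple regression \(R = \sqrt{R^2}\), with the sign taken from the slope (positive here).

# Correlation recovered from the output's R-squared; matches cor() above
sqrt(0.6466)   # about 0.8041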

7.40 Rate my professor.

Many college courses conclude by giving students the opportunity to evaluate the course and the instructor anonymously. However, the use of these student evaluations as an indicator of course quality and teaching effectiveness is often criticized because these measures may reflect the influence of non-teaching related characteristics, such as the physical appearance of the instructor. Researchers at University of Texas, Austin collected data on teaching evaluation score (higher score means better) and standardized beauty score (a score of 0 means average, negative score means below average, and a positive score means above average) for a sample of 463 professors. The scatterplot below shows the relationship between these variables, and also provided is a regression output for predicting teaching evaluation score from beauty score.

prof_evals <- data.frame(read.csv("DATA606/Chapters/Ch7/prof_evals.csv"))
prof_evals <- subset(prof_evals, select = c(courseevaluation, btystdave))
head(prof_evals,10)
##    courseevaluation  btystdave
## 1               4.3  0.2015666
## 2               4.5 -0.8260813
## 3               3.7 -0.6603327
## 4               4.3 -0.7663125
## 5               4.4  1.4214450
## 6               4.2  0.5002196
## 7               4.0 -0.2143501
## 8               3.4 -0.3465390
## 9               4.5  0.0613435
## 10              3.9  0.4525679
sp <- ggplot(prof_evals, aes(x=btystdave, y=courseevaluation))
sp + geom_point() + stat_smooth(method="lm", se=TRUE)

mean(prof_evals$btystdave)
## [1] -0.08834903
mean(prof_evals$courseevaluation)
## [1] 3.998272
  (a) Given that the average standardized beauty score is -0.0883 and average teaching evaluation score is 3.9983, calculate the slope.
m4 <- lm(courseevaluation ~ btystdave, data = prof_evals)
summary(m4)
## 
## Call:
## lm(formula = courseevaluation ~ btystdave, data = prof_evals)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.80015 -0.36304  0.07254  0.40207  1.10373 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  4.01002    0.02551 157.205  < 2e-16 ***
## btystdave    0.13300    0.03218   4.133 4.25e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.5455 on 461 degrees of freedom
## Multiple R-squared:  0.03574,    Adjusted R-squared:  0.03364 
## F-statistic: 17.08 on 1 and 461 DF,  p-value: 4.247e-05

The least squares line always passes through the point of averages \((\bar{x}, \bar{y})\), so given the intercept from the regression output (b0 = 4.01002), the slope is:

\[ b_1 = \frac{\bar{y} - b_0}{\bar{x}} = \frac{3.9983 - 4.01002}{-0.0883} \approx 0.133 \]

This matches the btystdave coefficient in the summary (0.13300), so the regression line is:

\[ \hat{y} = 4.01002 + 0.13300 \times \text{beauty score} \]
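
A sketch of that arithmetic in R, using the rounded averages quoted in the exercise:

# Slope recovered from the intercept and the point of averages
b0    <- 4.01002
x_bar <- -0.0883   # average standardized beauty score
y_bar <- 3.9983    # average evaluation score
(y_bar - b0) / x_bar   # about 0.133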

  (b) Do these data provide convincing evidence that the slope of the relationship between teaching evaluation and beauty is positive? Explain your reasoning.

Yes. The t-test on the btystdave coefficient gives a p-value of 4.25e-05, far below any conventional significance level, so the data provide convincing evidence that the slope is positive. The relationship is nonetheless weak: the Multiple R-squared of 0.03574 means beauty accounts for only about 3.6% of the variability in evaluation scores.
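
Equivalently, a confidence interval for the slope that excludes 0 leads to the same conclusion; the interval in the comment below is approximated from the estimate and standard error in the output:

# 95% CI for the slope: roughly 0.133 +/- 1.97 * 0.032, i.e. about (0.070, 0.196)
confint(m4, "btystdave", level = 0.95)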

  (c) List the conditions required for linear regression and check if each one is satisfied for this model based on the following diagnostic plots.

When fitting a least squares line, we generally require linearity, nearly normal residuals, constant variability, and independent observations.

Linearity. The correlation between the variables is weak, so there is no strong relationship, but the trend that does exist looks linear rather than curved in the scatterplot.

cor(prof_evals$courseevaluation, prof_evals$btystdave)
## [1] 0.1890391

Nearly normal residuals. The residuals should be nearly normal; the histogram below shows a skew to the left, so this condition is questionable.

# Residuals taken directly from the fitted model
prof_evals$residuals <- resid(m4)
hist(prof_evals$residuals)

Constant variability. The variability of points around the least squares line should remain roughly constant; the residual plot below checks this.

sp <- ggplot(prof_evals, aes(x=btystdave, y=residuals))
sp + geom_point() + geom_hline(yintercept=0, linetype="dashed")

The variability does not appear constant: the negative residuals reach more extreme values than the positive ones. Independent observations seem plausible, since each row is a different professor. Overall, although the slope is statistically significant, the left-skewed residuals and non-constant variability suggest the linear model fits these data poorly, and beauty explains very little of the variation in course evaluations.