download.file("http://www.openintro.org/stat/data/evals.RData", destfile = "evals.RData")
load("evals.RData")Multiple Regression Lab
Multiple Regression Lab
Load the dataset and librairies
Exercise 1
Is this an observational study or an experiment? The original research question posed in the paper is whether beauty leads directly to the differences in course evaluations. Given the study design, is it possible to answer this question as it is phrased? If not, rephrase the question.
Answer: This is an observational study. It is not possible to answer the question as phrased, because an observational study can establish association but not causation. The question should be rephrased as: is there an association between beauty and course evaluation scores?
Exercise 2
Describe the distribution of score. Is the distribution skewed? What does that tell you about how students rate courses? Is this what you expected to see? Why, or why not?
hist(evals$score)
summary(evals$score)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  2.300   3.800   4.300   4.175   4.600   5.000
Answer: The distribution is left-skewed. The median rating is 4.3, so most students rate their courses highly, but a few low ratings pull the mean down to 4.175. Yes, this is what one might expect: most students are satisfied with the courses they take, while a few are dissatisfied for one reason or another.
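The skew can also be checked numerically (a quick sketch; for left-skewed data the mean is pulled below the median):
# Left skew: mean falls below the median
mean(evals$score) < median(evals$score)  # TRUE: 4.175 < 4.3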
Exercise 3
Excluding score, select two other variables and describe their relationship using an appropriate visualization (scatterplot, side-by-side boxplots, or mosaic plot).
plot(evals$rank, evals$age)
Answer: The average age of professors varies by rank: the middle 50% of tenure track professors are in their mid-thirties to mid-forties, while teaching and tenured professors tend to be closer to 50 years of age.
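Since rank is a factor, the call above already draws side-by-side boxplots; the same display can be requested explicitly (a sketch):
# Equivalent explicit side-by-side boxplots of age by rank
boxplot(age ~ rank, data = evals)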
Simple Linear Regression
plot(evals$score ~ evals$bty_avg)
Answer: The data frame has 463 observations, but the plot appears to show fewer points.
Exercise 4
Replot the scatterplot, but this time use the function jitter() on the y- or the x-coordinate. (Use ?jitter to learn more.) What was misleading about the initial scatterplot?
plot(evals$score ~ jitter(evals$bty_avg))
Answer: The initial scatterplot was misleading because identical (bty_avg, score) pairs are drawn on top of one another, making the data look less numerous than they are and hiding how many observations overlap.
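The amount of overplotting can also be quantified directly (a sketch):
# Count observations whose (bty_avg, score) pair duplicates an earlier row
sum(duplicated(evals[, c("bty_avg", "score")]))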
Exercise 5
Let’s see if the apparent trend in the plot is something more than natural variation. Fit a linear model called m_bty to predict average professor score by average beauty rating and add the line to your plot using abline(m_bty). Write out the equation for the linear model and interpret the slope. Is average beauty score a statistically significant predictor? Does it appear to be a practically significant predictor?
m_bty <- lm(evals$score ~ evals$bty_avg)
summary(m_bty)
Call:
lm(formula = evals$score ~ evals$bty_avg)
Residuals:
Min 1Q Median 3Q Max
-1.9246 -0.3690 0.1420 0.3977 0.9309
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.88034 0.07614 50.96 < 2e-16 ***
evals$bty_avg 0.06664 0.01629 4.09 5.08e-05 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.5348 on 461 degrees of freedom
Multiple R-squared: 0.03502, Adjusted R-squared: 0.03293
F-statistic: 16.73 on 1 and 461 DF, p-value: 5.083e-05
plot(evals$score ~ evals$bty_avg)
abline(m_bty)
Answer: score_hat = 3.88034 + 0.06664 x bty_avg. The slope of 0.067 indicates that for each one-point increase in bty_avg, the predicted score increases by 0.067. The predictor is statistically significant (p = 5.08e-05), but with such a small slope and an R-squared of only 0.035 it is not practically significant.
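As a quick sanity check of the fitted equation, the prediction for a hypothetical professor with a beauty rating of 5 can be computed from the coefficients (a sketch):
# Predicted score at a hypothetical bty_avg of 5
coef(m_bty)[1] + coef(m_bty)[2] * 5  # about 4.21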
Exercise 6
Use residual plots to evaluate whether the conditions of least squares regression are reasonable. Provide plots and comments for each one (see the Simple Regression Lab for a reminder of how to make these).
plot(m_bty$residuals ~ evals$bty_avg)
abline(h = 0, lty = 3)
hist(m_bty$residuals)
qqnorm(m_bty$residuals)
qqline(m_bty$residuals)
Answer: The residuals are scattered around the zero line, between -2 and 1, with no obvious pattern, so the linearity condition appears reasonable. The histogram and normal probability plot show the residuals are nearly normal. The constant variability condition also appears to be met, since the spread of the residuals is roughly even across values of bty_avg.
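R's built-in regression diagnostics cover the same checks in one call (a sketch):
# Standard diagnostics: residuals vs fitted, Q-Q, scale-location, leverage
par(mfrow = c(2, 2))
plot(m_bty)
par(mfrow = c(1, 1))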
Multiple Regression
Look at the relationship between one of the individual beauty scores and the average beauty score.
plot(evals$bty_avg ~ evals$bty_f1lower)
cor(evals$bty_avg, evals$bty_f1lower)
[1] 0.8439112
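The same check extends to every pair of beauty ratings at once (a sketch, using the same columns 13 through 19 as the scatterplot matrix below):
# Pairwise correlations among all beauty ratings
round(cor(evals[, 13:19]), 2)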
Look at the relationships between all of the beauty variables.
plot(evals[, 13:19])
Add a gender term to the model to see if beauty is still a significant predictor of score.
m_bty_gen <- lm(score ~ bty_avg + gender, data = evals)
summary(m_bty_gen)
Call:
lm(formula = score ~ bty_avg + gender, data = evals)
Residuals:
Min 1Q Median 3Q Max
-1.8305 -0.3625 0.1055 0.4213 0.9314
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.74734 0.08466 44.266 < 2e-16 ***
bty_avg 0.07416 0.01625 4.563 6.48e-06 ***
gendermale 0.17239 0.05022 3.433 0.000652 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.5287 on 460 degrees of freedom
Multiple R-squared: 0.05912, Adjusted R-squared: 0.05503
F-statistic: 14.45 on 2 and 460 DF, p-value: 8.177e-07
Exercise 7
P-values and parameter estimates should only be trusted if the conditions for the regression are reasonable. Verify that the conditions for this model are reasonable using diagnostic plots.
plot(m_bty_gen$residuals ~ evals$bty_avg)
abline(h = 0, lty = 3)
qqnorm(m_bty_gen$residuals)
qqline(m_bty_gen$residuals)
Answer: The residuals are scattered evenly around zero with no clear pattern, and the normal probability plot is close to the diagonal, so the conditions for the model are reasonable.
Exercise 8
Is bty_avg still a significant predictor of score? Has the addition of gender to the model changed the parameter estimate for bty_avg?
Answer: bty_avg is still a significant predictor. Adding gender changed the parameter estimate for bty_avg slightly, from 0.0666 to 0.0742, and improved the model (adjusted R-squared rose from 0.033 to 0.055).
Plot the female and male lines.
multiLines(m_bty_gen)
Exercise 9
What is the equation of the line corresponding to males? (Hint: For males, the parameter estimate is multiplied by 1.) For two professors who received the same beauty rating, which gender tends to have the higher course evaluation score?
Answer: The equation becomes:
score_hat = 3.74734 + 0.07416 x bty_avg + 0.17239 x 1 = 3.91973 + 0.07416 x bty_avg
For two professors of any given beauty rating, males tend to have a higher course evaluation score.
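The male line's intercept and slope can also be pulled straight from the fitted coefficients (a sketch):
# Male line: baseline intercept plus the gendermale offset
b <- coef(m_bty_gen)
c(intercept = unname(b["(Intercept)"] + b["gendermale"]),
  slope = unname(b["bty_avg"]))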
Exercise 10
Create a new model called m_bty_rank with gender removed and rank added in. How does R appear to handle categorical variables that have more than two levels? Note that the rank variable has three levels: teaching, tenure track, tenured.
m_bty_rank <- lm(score ~ bty_avg + rank, data = evals)
summary(m_bty_rank)
Call:
lm(formula = score ~ bty_avg + rank, data = evals)
Residuals:
Min 1Q Median 3Q Max
-1.8713 -0.3642 0.1489 0.4103 0.9525
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.98155 0.09078 43.860 < 2e-16 ***
bty_avg 0.06783 0.01655 4.098 4.92e-05 ***
ranktenure track -0.16070 0.07395 -2.173 0.0303 *
ranktenured -0.12623 0.06266 -2.014 0.0445 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.5328 on 459 degrees of freedom
Multiple R-squared: 0.04652, Adjusted R-squared: 0.04029
F-statistic: 7.465 on 3 and 459 DF, p-value: 6.88e-05
Answer: There are now two indicator variables, ranktenure track and ranktenured. R handles a categorical variable with more than two levels by creating one indicator for each level other than the baseline; the baseline level (teaching) is absorbed into the intercept.
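The dummy coding R uses for the three-level factor can be inspected directly (a sketch, assuming rank is stored as a factor, as the model output indicates):
# Rows are levels; the all-zero row (teaching) is the baseline
contrasts(evals$rank)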
Exercise 11
Which variable would you expect to have the highest p-value in this model? Why? Hint: Think about which variable would you expect to not have any association with the professor score.
Answer: I would expect cls_profs (whether the course was taught by a single professor or by multiple professors) to have the highest p-value, as the number of professors teaching a course probably has no association with an individual professor's score.
Run the full model
m_full <- lm(score ~ rank + ethnicity + gender + language + age + cls_perc_eval
+ cls_students + cls_level + cls_profs + cls_credits + bty_avg
+ pic_outfit + pic_color, data = evals)
summary(m_full)
Call:
lm(formula = score ~ rank + ethnicity + gender + language + age +
cls_perc_eval + cls_students + cls_level + cls_profs + cls_credits +
bty_avg + pic_outfit + pic_color, data = evals)
Residuals:
Min 1Q Median 3Q Max
-1.77397 -0.32432 0.09067 0.35183 0.95036
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 4.0952141 0.2905277 14.096 < 2e-16 ***
ranktenure track -0.1475932 0.0820671 -1.798 0.07278 .
ranktenured -0.0973378 0.0663296 -1.467 0.14295
ethnicitynot minority 0.1234929 0.0786273 1.571 0.11698
gendermale 0.2109481 0.0518230 4.071 5.54e-05 ***
languagenon-english -0.2298112 0.1113754 -2.063 0.03965 *
age -0.0090072 0.0031359 -2.872 0.00427 **
cls_perc_eval 0.0053272 0.0015393 3.461 0.00059 ***
cls_students 0.0004546 0.0003774 1.205 0.22896
cls_levelupper 0.0605140 0.0575617 1.051 0.29369
cls_profssingle -0.0146619 0.0519885 -0.282 0.77806
cls_creditsone credit 0.5020432 0.1159388 4.330 1.84e-05 ***
bty_avg 0.0400333 0.0175064 2.287 0.02267 *
pic_outfitnot formal -0.1126817 0.0738800 -1.525 0.12792
pic_colorcolor -0.2172630 0.0715021 -3.039 0.00252 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.498 on 448 degrees of freedom
Multiple R-squared: 0.1871, Adjusted R-squared: 0.1617
F-statistic: 7.366 on 14 and 448 DF, p-value: 6.552e-14
plot(m_full)
Exercise 12
Check your suspicions from the previous exercise. Include the model output in your response.
Answer: Of all the p-values, cls_profssingle indeed has the highest at 0.77806.
Exercise 13
Interpret the coefficient associated with the ethnicity variable.
Answer: The ethnicitynot minority coefficient of 0.1235 indicates that, all else being equal, professors who are not a minority score on average 0.123 points higher than minority professors.
Exercise 14
Drop the variable with the highest p-value and re-fit the model. Did the coefficients and significance of the other explanatory variables change? (One of the things that makes multiple regression interesting is that coefficient estimates depend on the other variables that are included in the model.) If not, what does this say about whether or not the dropped variable was collinear with the other explanatory variables?
m_full2 <- lm(score ~ rank + ethnicity + gender + language + age + cls_perc_eval
+ cls_students + cls_level + cls_credits + bty_avg
+ pic_outfit + pic_color, data = evals)
summary(m_full2)
Call:
lm(formula = score ~ rank + ethnicity + gender + language + age +
cls_perc_eval + cls_students + cls_level + cls_credits +
bty_avg + pic_outfit + pic_color, data = evals)
Residuals:
Min 1Q Median 3Q Max
-1.7836 -0.3257 0.0859 0.3513 0.9551
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 4.0872523 0.2888562 14.150 < 2e-16 ***
ranktenure track -0.1476746 0.0819824 -1.801 0.072327 .
ranktenured -0.0973829 0.0662614 -1.470 0.142349
ethnicitynot minority 0.1274458 0.0772887 1.649 0.099856 .
gendermale 0.2101231 0.0516873 4.065 5.66e-05 ***
languagenon-english -0.2282894 0.1111305 -2.054 0.040530 *
age -0.0089992 0.0031326 -2.873 0.004262 **
cls_perc_eval 0.0052888 0.0015317 3.453 0.000607 ***
cls_students 0.0004687 0.0003737 1.254 0.210384
cls_levelupper 0.0606374 0.0575010 1.055 0.292200
cls_creditsone credit 0.5061196 0.1149163 4.404 1.33e-05 ***
bty_avg 0.0398629 0.0174780 2.281 0.023032 *
pic_outfitnot formal -0.1083227 0.0721711 -1.501 0.134080
pic_colorcolor -0.2190527 0.0711469 -3.079 0.002205 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.4974 on 449 degrees of freedom
Multiple R-squared: 0.187, Adjusted R-squared: 0.1634
F-statistic: 7.943 on 13 and 449 DF, p-value: 2.336e-14
#plot(m_full2)
Answer: The coefficients do change a little, as do the p-values. If there had been no change, it would mean that the dropped variable was not collinear with any of the other explanatory variables.
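Collinearity among the remaining predictors can also be checked with variance inflation factors (a sketch, assuming the car package is installed; this is not part of the original lab):
# Variance inflation factors; values near 1 indicate little collinearity
library(car)
vif(m_full2)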
Exercise 15
Using backward-selection and p-value as the selection criterion, determine the best model. You do not need to show all steps in your answer, just the output for the final model. Also, write out the linear model for predicting score based on the final model you settle on.
m_full3 <- lm(score ~ rank + ethnicity + gender + language + age + cls_perc_eval
+ cls_students + cls_credits + bty_avg
+ pic_outfit + pic_color, data = evals)
#summary(m_full3)
#plot(m_full3)
m_full4 <- lm(score ~ rank + ethnicity + gender + language + age + cls_perc_eval
              + cls_credits + bty_avg
              + pic_outfit + pic_color, data = evals)
#summary(m_full4)
#plot(m_full4)
m_full5 <- lm(score ~ ethnicity + gender + language + age + cls_perc_eval
              + cls_credits + bty_avg
              + pic_outfit + pic_color, data = evals)
#summary(m_full5)
#plot(m_full5)
m_full6 <- lm(score ~ ethnicity + gender + language + age + cls_perc_eval
              + cls_credits + bty_avg
              + pic_color, data = evals)
summary(m_full6)
Call:
lm(formula = score ~ ethnicity + gender + language + age + cls_perc_eval +
cls_credits + bty_avg + pic_color, data = evals)
Residuals:
Min 1Q Median 3Q Max
-1.85320 -0.32394 0.09984 0.37930 0.93610
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.771922 0.232053 16.255 < 2e-16 ***
ethnicitynot minority 0.167872 0.075275 2.230 0.02623 *
gendermale 0.207112 0.050135 4.131 4.30e-05 ***
languagenon-english -0.206178 0.103639 -1.989 0.04726 *
age -0.006046 0.002612 -2.315 0.02108 *
cls_perc_eval 0.004656 0.001435 3.244 0.00127 **
cls_creditsone credit 0.505306 0.104119 4.853 1.67e-06 ***
bty_avg 0.051069 0.016934 3.016 0.00271 **
pic_colorcolor -0.190579 0.067351 -2.830 0.00487 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.4992 on 454 degrees of freedom
Multiple R-squared: 0.1722, Adjusted R-squared: 0.1576
F-statistic: 11.8 on 8 and 454 DF, p-value: 2.58e-15
#plot(m_full6)
m_full7 <- lm(score ~ ethnicity + gender + age + cls_perc_eval
              + cls_credits + bty_avg
              + pic_color, data = evals)
#summary(m_full7)
#plot(m_full7)
m_full8 <- lm(score ~ ethnicity + gender + cls_perc_eval
              + cls_credits + bty_avg
              + pic_color, data = evals)
#summary(m_full8)
#plot(m_full8)
m_full9 <- lm(score ~ ethnicity + gender + cls_perc_eval
              + cls_credits + bty_avg, data = evals)
#summary(m_full9)
#plot(m_full9)
m_full10 <- lm(score ~ ethnicity + cls_perc_eval
               + cls_credits + bty_avg, data = evals)
#summary(m_full10)
#plot(m_full10)
m_full11 <- lm(score ~ ethnicity
               + cls_credits + bty_avg, data = evals)
#summary(m_full11)
#plot(m_full11)
m_full12 <- lm(score ~ cls_credits + bty_avg, data = evals)
#summary(m_full12)
#plot(m_full12)
m_full13 <- lm(score ~ cls_credits, data = evals)
#summary(m_full13)
#plot(m_full13)
Answer: The best model found by backward selection is m_full6, whose output is shown above. Its equation is:
score_hat = 3.772 + 0.168 x ethnicitynot minority + 0.207 x gendermale - 0.206 x languagenon-english - 0.006 x age + 0.005 x cls_perc_eval + 0.505 x cls_creditsone credit + 0.051 x bty_avg - 0.191 x pic_colorcolor
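As a cross-check on the hand-run selection, R's built-in step() performs backward selection using AIC rather than p-values, so it may retain a slightly different set of predictors (a sketch):
# Backward selection by AIC starting from the full model
m_step <- step(m_full, direction = "backward", trace = 0)
summary(m_step)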
Exercise 16
Verify that the conditions for this model are reasonable using diagnostic plots.
qqnorm(m_full6$residuals)
qqline(m_full6$residuals)
Answer: The normal probability plot shows the residuals are nearly normal, so the conditions for the model appear reasonable.
Exercise 17
The original paper describes how these data were gathered by taking a sample of professors from the University of Texas at Austin and including all courses that they have taught. Considering that each row represents a course, could this new information have an impact on any of the conditions of linear regression?
Answer: Yes. Each row represents a course, and the same professor appears in multiple rows, so the observations are not independent; this violates the independence condition of linear regression.
Exercise 18
Based on your final model, describe the characteristics of a professor and course at University of Texas at Austin that would be associated with a high evaluation score.
Answer: Based on the final model, the highest evaluation scores would be associated with a professor who is not a minority, male, educated at an English-speaking institution, younger, with a higher beauty rating, teaching a one-credit course, and pictured in a black-and-white photo.
Exercise 19
Would you be comfortable generalizing your conclusions to apply to professors generally (at any university)? Why or why not?
Answer: No. The sample includes only professors at the University of Texas at Austin, so the conclusions generalize only to that population; professors and students at other universities may differ.