Grading the professor

Many college courses conclude by giving students the opportunity to evaluate the course and the instructor anonymously. However, the use of these student evaluations as an indicator of course quality and teaching effectiveness is often criticized because these measures may reflect the influence of non-teaching related characteristics, such as the physical appearance of the instructor. The article “Beauty in the classroom: instructors’ pulchritude and putative pedagogical productivity” by Hamermesh and Parker found that instructors who are viewed to be better looking receive higher instructional ratings.

Here, you will analyze the data from this study in order to learn what goes into a positive professor evaluation.

Getting Started

Load packages

In this lab, you will explore and visualize the data using the tidyverse suite of packages. You will also use the GGally package for visualisation of many variables at once and the broom package to tidy regression output. The data can be found in the companion package for OpenIntro resources, openintro.

Let’s load the packages.

library(tidyverse)
library(openintro)
library(GGally)
library(broom)

This is the first time we’re using the GGally package. You will be using the ggpairs() function from this package later in the lab.

Creating a reproducible lab report

To create your new lab report, in RStudio, go to New File -> R Markdown… Then, choose From Template and then choose Lab Report for OpenIntro Statistics Labs from the list of templates.

The data

The data were gathered from end of semester student evaluations for a large sample of professors from the University of Texas at Austin. In addition, six students rated the professors’ physical appearance. The result is a data frame where each row contains a different course and columns represent variables about the courses and professors. It’s called evals.

glimpse(evals)
## Rows: 463
## Columns: 23
## $ course_id     <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 1…
## $ prof_id       <int> 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 5, 5,…
## $ score         <dbl> 4.7, 4.1, 3.9, 4.8, 4.6, 4.3, 2.8, 4.1, 3.4, 4.5, 3.8, 4…
## $ rank          <fct> tenure track, tenure track, tenure track, tenure track, …
## $ ethnicity     <fct> minority, minority, minority, minority, not minority, no…
## $ gender        <fct> female, female, female, female, male, male, male, male, …
## $ language      <fct> english, english, english, english, english, english, en…
## $ age           <int> 36, 36, 36, 36, 59, 59, 59, 51, 51, 40, 40, 40, 40, 40, …
## $ cls_perc_eval <dbl> 55.81395, 68.80000, 60.80000, 62.60163, 85.00000, 87.500…
## $ cls_did_eval  <int> 24, 86, 76, 77, 17, 35, 39, 55, 111, 40, 24, 24, 17, 14,…
## $ cls_students  <int> 43, 125, 125, 123, 20, 40, 44, 55, 195, 46, 27, 25, 20, …
## $ cls_level     <fct> upper, upper, upper, upper, upper, upper, upper, upper, …
## $ cls_profs     <fct> single, single, single, single, multiple, multiple, mult…
## $ cls_credits   <fct> multi credit, multi credit, multi credit, multi credit, …
## $ bty_f1lower   <int> 5, 5, 5, 5, 4, 4, 4, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 7, 7,…
## $ bty_f1upper   <int> 7, 7, 7, 7, 4, 4, 4, 2, 2, 5, 5, 5, 5, 5, 5, 5, 5, 9, 9,…
## $ bty_f2upper   <int> 6, 6, 6, 6, 2, 2, 2, 5, 5, 4, 4, 4, 4, 4, 4, 4, 4, 9, 9,…
## $ bty_m1lower   <int> 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 7, 7,…
## $ bty_m1upper   <int> 4, 4, 4, 4, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 6, 6,…
## $ bty_m2upper   <int> 6, 6, 6, 6, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 6, 6,…
## $ bty_avg       <dbl> 5.000, 5.000, 5.000, 5.000, 3.000, 3.000, 3.000, 3.333, …
## $ pic_outfit    <fct> not formal, not formal, not formal, not formal, not form…
## $ pic_color     <fct> color, color, color, color, color, color, color, color, …

We have observations on 23 different variables, some categorical and some numerical. The meaning of each variable can be found by bringing up the help file:

?evals

Exploring the data

Exercise 1. Is this an observational study or an experiment? The original research question posed in the paper is whether beauty leads directly to the differences in course evaluations. Given the study design, is it possible to answer this question as it is phrased? If not, rephrase the question.

**This is an observational study: there were no treatment and control groups, and the data were gathered after the course evaluations had already been completed. We cannot derive a causal relationship from an observational study, so the question cannot be answered as phrased. A better phrasing: are higher beauty ratings associated with higher course evaluation scores?**

Exercise 2. Describe the distribution of score. Is the distribution skewed? What does that tell you about how students rate courses? Is this what you expected to see? Why, or why not?

The histogram below shows that the distribution of scores is left-skewed, indicating that the majority of scores fall at the higher end of the scale.

This suggests that students are more likely to submit a favorable rating when they feel positively about the professor. The distribution is not surprising: what happens to course evaluations after submission is often a mystery to students, and a student who is unsure how the evaluation will be used has little incentive to write something critical.

Average score may be too reductive a metric for evaluating a professor.

Additionally, the more concentrated a score distribution is, the less useful it is for identifying differences between the professors for meaningful improvement.

evals %>% ggplot(aes(x = score)) +
  geom_histogram(color = "black", fill = "white") +
  ggtitle("Histogram of Scores")
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
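
To quantify the skew, we can compare the mean and median of score (a quick supplementary check, not part of the original lab): with a left-skewed distribution, the mean should fall below the median.

evals %>%
  summarise(mean_score = mean(score),     # pulled down by the left tail
            median_score = median(score),
            sd_score = sd(score))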

Exercise 3. Excluding score, select two other variables and describe their relationship with each other using an appropriate visualization.

On average, female professors received higher beauty scores than their male counterparts.

evals %>% ggplot(aes(x = gender, y = bty_avg, fill = gender)) +
  geom_boxplot()

Across both genders, beauty scores trended down (slowly) as age increased.

evals %>% group_by(age, gender) %>% 
  summarize(bty_avg_by_age = mean(bty_avg)) %>% 
  ungroup() %>%
  ggplot(aes(x = age, y = bty_avg_by_age, group = gender, color = gender)) +
  geom_line()
## `summarise()` has grouped output by 'age'. You can override using the `.groups` argument.

Simple linear regression

The fundamental phenomenon suggested by the study is that better looking teachers are evaluated more favourably. Let’s create a scatterplot to see if this appears to be the case:

ggplot(data = evals, aes(x = bty_avg, y = score)) +
  geom_point()

Before you draw conclusions about the trend, compare the number of observations in the data frame with the approximate number of points on the scatterplot. Is anything awry?

dim(evals)
## [1] 463  23
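
One way to confirm the suspicion before re-plotting (a quick supplementary check): count the distinct (bty_avg, score) combinations and compare with the 463 rows. If the count is much smaller, many observations must be plotted on top of one another.

evals %>%
  distinct(bty_avg, score) %>%  # unique (beauty, score) combinations
  nrow()
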
Exercise 4. Re-plot the scatterplot, but this time use geom_jitter as your layer. What was misleading about the initial scatterplot?

The points were overlapping: many courses share identical bty_avg and score values, so the original scatterplot displays far fewer points than there are observations.

ggplot(data = evals, aes(x = bty_avg, y = score)) +
  geom_jitter()

Exercise 5. Let’s see if the apparent trend in the plot is something more than natural variation. Fit a linear model called m_bty to predict average professor score by average beauty rating. Write out the equation for the linear model and interpret the slope. Is average beauty score a statistically significant predictor? Does it appear to be a practically significant predictor?

According to the linear model summary, there is a significant positive relationship between average beauty rating and average professor score. The fitted model is

\[ \widehat{score} = 3.8803 + 0.0666 \times bty\_avg \]

The slope means that each one-point increase in average beauty rating is associated with a 0.0666-point increase in average professor score.

A p-value of less than .001 indicates that this relationship is very unlikely to arise by chance, so bty_avg is a statistically significant predictor.

Practically, however, the adjusted R-squared is low: bty_avg explains only about 3% of the variation in professor score, so it is not a practically significant predictor on its own. Given the low p-value, bty_avg may still be a meaningful part of a larger model that better explains the variation in professor score.

m_bty <- lm(score ~ bty_avg, data = evals)
summary(m_bty)
## 
## Call:
## lm(formula = score ~ bty_avg, data = evals)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.9246 -0.3690  0.1420  0.3977  0.9309 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  3.88034    0.07614   50.96  < 2e-16 ***
## bty_avg      0.06664    0.01629    4.09 5.08e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.5348 on 461 degrees of freedom
## Multiple R-squared:  0.03502,    Adjusted R-squared:  0.03293 
## F-statistic: 16.73 on 1 and 461 DF,  p-value: 5.083e-05
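
As a sanity check on the slope interpretation, we can compare the model's predictions for two beauty ratings one point apart (a quick sketch; the ratings 4 and 5 are arbitrary choices):

# Predicted scores at beauty ratings of 4 and 5; the difference
# between the two predictions equals the slope, about 0.067 points
predict(m_bty, newdata = data.frame(bty_avg = c(4, 5)))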

Add the line of the best fit model to your plot using the following:

ggplot(data = evals, aes(x = bty_avg, y = score)) +
  geom_jitter() +
  geom_smooth(method = "lm")
## `geom_smooth()` using formula 'y ~ x'

The blue line is the model. The shaded gray area around the line tells you about the variability you might expect in your predictions. To turn that off, use se = FALSE.

ggplot(data = evals, aes(x = bty_avg, y = score)) +
  geom_jitter() +
  geom_smooth(method = "lm", se = FALSE)
## `geom_smooth()` using formula 'y ~ x'

Exercise 6. Use residual plots to evaluate whether the conditions of least squares regression are reasonable. Provide plots and comments for each one (see the Simple Regression Lab for a reminder of how to make these).
plot(m_bty$residuals ~ evals$bty_avg)  # residuals against the explanatory variable
abline(h = 0, lty = 3)                 # adds a horizontal dashed line at y = 0

The plot of residuals against bty_avg does not show any pattern that would violate the linearity or constant variability conditions of the regression model.

hist(m_bty$residuals, breaks = 35)

qqnorm(m_bty$residuals)
qqline(m_bty$residuals) 

The q-q plot and histogram of the model residuals show that the residuals are left-skewed, so these data violate the nearly normal residuals condition of the regression model.

Multiple linear regression

The data set contains several variables on the beauty score of the professor: individual ratings from each of the six students who were asked to score the physical appearance of the professors and the average of these six scores. Let’s take a look at the relationship between one of these scores and the average beauty score.

ggplot(data = evals, aes(x = bty_f1lower, y = bty_avg)) +
  geom_point()

evals %>% 
  summarise(cor(bty_avg, bty_f1lower))
## # A tibble: 1 x 1
##   `cor(bty_avg, bty_f1lower)`
##                         <dbl>
## 1                       0.844

As expected, the relationship is quite strong—after all, the average score is calculated using the individual scores. You can actually look at the relationships between all beauty variables (columns 13 through 19) using the following command:

evals %>%
  select(contains("bty")) %>%
  ggpairs()

These variables are collinear (correlated), and adding more than one of these variables to the model would not add much value to the model. In this application and with these highly-correlated predictors, it is reasonable to use the average beauty score as the single representative of these variables.
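
To put numbers on that collinearity, we can also compute the pairwise correlations directly (a quick supplement to the ggpairs() plot):

evals %>%
  select(contains("bty")) %>%
  cor() %>%    # pairwise correlations among the beauty variables
  round(2)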

In order to see if beauty is still a significant predictor of professor score after you’ve accounted for the professor’s gender, you can add the gender term into the model.

m_bty_gen <- lm(score ~ bty_avg + gender, data = evals)
tidy(m_bty_gen)
## # A tibble: 3 x 5
##   term        estimate std.error statistic   p.value
##   <chr>          <dbl>     <dbl>     <dbl>     <dbl>
## 1 (Intercept)   3.75      0.0847     44.3  6.23e-168
## 2 bty_avg       0.0742    0.0163      4.56 6.48e-  6
## 3 gendermale    0.172     0.0502      3.43 6.52e-  4
Exercise 7. p-values and parameter estimates should only be trusted if the conditions for the regression are reasonable. Verify that the conditions for this model are reasonable using diagnostic plots.

plot(m_bty_gen$residuals ~ evals$bty_avg)  # residuals against the explanatory variable
abline(h = 0, lty = 3)                     # adds a horizontal dashed line at y = 0

hist(m_bty_gen$residuals, breaks = 35)

qqnorm(m_bty_gen$residuals)
qqline(m_bty_gen$residuals)

As with the simple model, the residual plot shows no concerning pattern, while the histogram and q-q plot show some left skew in the residuals; otherwise the conditions for this model appear reasonable.

summary(m_bty_gen)
## 
## Call:
## lm(formula = score ~ bty_avg + gender, data = evals)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.8305 -0.3625  0.1055  0.4213  0.9314 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  3.74734    0.08466  44.266  < 2e-16 ***
## bty_avg      0.07416    0.01625   4.563 6.48e-06 ***
## gendermale   0.17239    0.05022   3.433 0.000652 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.5287 on 460 degrees of freedom
## Multiple R-squared:  0.05912,    Adjusted R-squared:  0.05503 
## F-statistic: 14.45 on 2 and 460 DF,  p-value: 8.177e-07
Exercise 8. Is bty_avg still a significant predictor of score? Has the addition of gender to the model changed the parameter estimate for bty_avg?

Yes, bty_avg is still a significant predictor of score. Adding gender to the model did change the parameter estimate for bty_avg, increasing it slightly from 0.0666 to 0.0742.

Note that the estimate for gender is now called gendermale. You’ll see this name change whenever you introduce a categorical variable. The reason is that R recodes gender from having the values of male and female to being an indicator variable called gendermale that takes a value of \(0\) for female professors and a value of \(1\) for male professors. (Such variables are often referred to as “dummy” variables.)

As a result, for female professors, the parameter estimate is multiplied by zero, leaving the intercept and slope form familiar from simple regression.

\[ \begin{aligned} \widehat{score} &= \hat{\beta}_0 + \hat{\beta}_1 \times bty\_avg + \hat{\beta}_2 \times (0) \\ &= \hat{\beta}_0 + \hat{\beta}_1 \times bty\_avg\end{aligned} \]
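
You can see this indicator coding directly by inspecting the model's design matrix (a quick illustration, not part of the original lab):

# First rows of the design matrix: the gendermale column is 0 for
# female professors and 1 for male professors
head(model.matrix(m_bty_gen))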

Exercise 9. What is the equation of the line corresponding to male professors? (Hint: For male professors, the parameter estimate is multiplied by 1.) For two professors who received the same beauty rating, which gender tends to have the higher course evaluation score?

\[ \begin{aligned} \widehat{score} &= 3.74734 + 0.07416 \times bty\_avg + 0.17239 \times (1) \\ &= 3.91973 + 0.07416 \times bty\_avg \end{aligned} \]

For two professors that received the same beauty rating, males tended to have higher course evaluation scores.

The decision to call the indicator variable gendermale instead of genderfemale has no deeper meaning. R simply codes the category that comes first alphabetically as a \(0\). (You can change the reference level of a categorical variable, which is the level that is coded as a 0, using the relevel() function. Use ?relevel to learn more.)
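
For example, to make male the reference level instead (a sketch; the names evals_releveled and m_bty_gen2 are my own), the refit model reports a genderfemale coefficient:

# Refit the model with "male" as the reference level of gender
evals_releveled <- evals %>%
  mutate(gender = relevel(gender, ref = "male"))
m_bty_gen2 <- lm(score ~ bty_avg + gender, data = evals_releveled)
tidy(m_bty_gen2)  # now shows a genderfemale term instead of gendermale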

Exercise 10. Create a new model called m_bty_rank with gender removed and rank added in. How does R appear to handle categorical variables that have more than two levels? Note that the rank variable has three levels: teaching, tenure track, tenured.
m_bty_rank <- lm(score ~ bty_avg + rank, data = evals)
tidy(m_bty_rank)
## # A tibble: 4 x 5
##   term             estimate std.error statistic   p.value
##   <chr>               <dbl>     <dbl>     <dbl>     <dbl>
## 1 (Intercept)        3.98      0.0908     43.9  2.92e-166
## 2 bty_avg            0.0678    0.0165      4.10 4.92e-  5
## 3 ranktenure track  -0.161     0.0740     -2.17 3.03e-  2
## 4 ranktenured       -0.126     0.0627     -2.01 4.45e-  2
summary(m_bty_rank)
## 
## Call:
## lm(formula = score ~ bty_avg + rank, data = evals)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.8713 -0.3642  0.1489  0.4103  0.9525 
## 
## Coefficients:
##                  Estimate Std. Error t value Pr(>|t|)    
## (Intercept)       3.98155    0.09078  43.860  < 2e-16 ***
## bty_avg           0.06783    0.01655   4.098 4.92e-05 ***
## ranktenure track -0.16070    0.07395  -2.173   0.0303 *  
## ranktenured      -0.12623    0.06266  -2.014   0.0445 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.5328 on 459 degrees of freedom
## Multiple R-squared:  0.04652,    Adjusted R-squared:  0.04029 
## F-statistic: 7.465 on 3 and 459 DF,  p-value: 6.88e-05

R appears to handle a categorical variable with more than two levels by creating one indicator variable for each level beyond the reference level: rank (teaching, tenure track, tenured) becomes the two dummy variables ranktenure track and ranktenured, each compared against the reference level teaching.

The interpretation of the coefficients in multiple regression is slightly different from that of simple regression. The estimate for bty_avg reflects how much higher a group of professors is expected to score if they have a beauty rating that is one point higher while holding all other variables constant. In this case, that translates into considering only professors of the same rank with bty_avg scores that are one point apart.

The search for the best model

We will start with a full model that predicts professor score based on rank, gender, ethnicity, language of the university where they got their degree, age, proportion of students that filled out evaluations, class size, course level, number of professors, number of credits, and average beauty rating.

Exercise 11. Which variable would you expect to have the highest p-value in this model? Why? Hint: Think about which variable you would expect to have no association with the professor score.

I would expect the class level (upper or lower) to be unrelated to the professor score.

Let’s run the model…

m_full <- lm(score ~ rank + gender + ethnicity + language + age + cls_perc_eval 
             + cls_students + cls_level + cls_profs + cls_credits + bty_avg, data = evals)
tidy(m_full)
## # A tibble: 13 x 5
##    term                   estimate std.error statistic  p.value
##    <chr>                     <dbl>     <dbl>     <dbl>    <dbl>
##  1 (Intercept)            3.53      0.241       14.7   4.65e-40
##  2 ranktenure track      -0.107     0.0820      -1.30  1.93e- 1
##  3 ranktenured           -0.0450    0.0652      -0.691 4.90e- 1
##  4 gendermale             0.179     0.0515       3.47  5.79e- 4
##  5 ethnicitynot minority  0.187     0.0775       2.41  1.63e- 2
##  6 languagenon-english   -0.127     0.108       -1.17  2.41e- 1
##  7 age                   -0.00665   0.00308     -2.16  3.15e- 2
##  8 cls_perc_eval          0.00570   0.00155      3.67  2.68e- 4
##  9 cls_students           0.000445  0.000358     1.24  2.15e- 1
## 10 cls_levelupper         0.0187    0.0556       0.337 7.37e- 1
## 11 cls_profssingle       -0.00858   0.0514      -0.167 8.67e- 1
## 12 cls_creditsone credit  0.509     0.117        4.35  1.70e- 5
## 13 bty_avg                0.0613    0.0167       3.67  2.68e- 4
Exercise 12. Check your suspicions from the previous exercise. Include the model output in your response.

My suspicion was close but not exact: cls_level does have a high p-value (0.737), but the number of professors (cls_profs) is the variable with the least association with the predicted score (p = 0.867).

summary(m_full)
## 
## Call:
## lm(formula = score ~ rank + gender + ethnicity + language + age + 
##     cls_perc_eval + cls_students + cls_level + cls_profs + cls_credits + 
##     bty_avg, data = evals)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.84482 -0.31367  0.08559  0.35732  1.10105 
## 
## Coefficients:
##                         Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            3.5305036  0.2408200  14.660  < 2e-16 ***
## ranktenure track      -0.1070121  0.0820250  -1.305 0.192687    
## ranktenured           -0.0450371  0.0652185  -0.691 0.490199    
## gendermale             0.1786166  0.0515346   3.466 0.000579 ***
## ethnicitynot minority  0.1869649  0.0775329   2.411 0.016290 *  
## languagenon-english   -0.1268254  0.1080358  -1.174 0.241048    
## age                   -0.0066498  0.0030830  -2.157 0.031542 *  
## cls_perc_eval          0.0056996  0.0015514   3.674 0.000268 ***
## cls_students           0.0004455  0.0003585   1.243 0.214596    
## cls_levelupper         0.0187105  0.0555833   0.337 0.736560    
## cls_profssingle       -0.0085751  0.0513527  -0.167 0.867458    
## cls_creditsone credit  0.5087427  0.1170130   4.348  1.7e-05 ***
## bty_avg                0.0612651  0.0166755   3.674 0.000268 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.504 on 450 degrees of freedom
## Multiple R-squared:  0.1635, Adjusted R-squared:  0.1412 
## F-statistic: 7.331 on 12 and 450 DF,  p-value: 2.406e-12
Exercise 13. Interpret the coefficient associated with the ethnicity variable.

ethnicity is a binary variable here: minority or not minority. The estimate indicates that, holding everything else constant, a professor who is not a minority is expected to receive an evaluation score about 0.19 points higher than a minority professor.

Exercise 14. Drop one variable at a time and peek at the adjusted \(R^2\). Removing which variable increases adjusted \(R^2\) the most? Drop the variable with the highest p-value and re-fit the model. Did the coefficients and significance of the other explanatory variables change with this variable removed? (One of the things that makes multiple regression interesting is that coefficient estimates depend on the other variables that are included in the model.) If not, what does this say about whether or not the dropped variable was collinear with the other explanatory variables?
fit_bty_full_minus_cls_profs <- lm(score ~ rank + gender + ethnicity + language + age + cls_perc_eval 
             + cls_students + cls_level + cls_credits + bty_avg, data = evals)

summary(fit_bty_full_minus_cls_profs)
## 
## Call:
## lm(formula = score ~ rank + gender + ethnicity + language + age + 
##     cls_perc_eval + cls_students + cls_level + cls_credits + 
##     bty_avg, data = evals)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.85048 -0.31394  0.08052  0.35956  1.10356 
## 
## Coefficients:
##                         Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            3.5286297  0.2402990  14.684  < 2e-16 ***
## ranktenure track      -0.1073638  0.0819096  -1.311 0.190606    
## ranktenured           -0.0453744  0.0651169  -0.697 0.486278    
## gendermale             0.1780270  0.0513581   3.466 0.000578 ***
## ethnicitynot minority  0.1893718  0.0760992   2.488 0.013189 *  
## languagenon-english   -0.1265737  0.1079088  -1.173 0.241427    
## age                   -0.0066619  0.0030788  -2.164 0.031006 *  
## cls_perc_eval          0.0056790  0.0015448   3.676 0.000265 ***
## cls_students           0.0004493  0.0003573   1.257 0.209319    
## cls_levelupper         0.0183743  0.0554870   0.331 0.740687    
## cls_creditsone credit  0.5109162  0.1161614   4.398 1.36e-05 ***
## bty_avg                0.0611497  0.0166432   3.674 0.000267 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.5035 on 451 degrees of freedom
## Multiple R-squared:  0.1635, Adjusted R-squared:  0.1431 
## F-statistic: 8.012 on 11 and 451 DF,  p-value: 8.303e-13

The adjusted \(R^2\) increased, but only marginally (from 0.1412 to 0.1431). The coefficients and significance of the other explanatory variables barely changed, which tells us that cls_profs was not collinear with the other explanatory variables. We would need to remove additional variables to meaningfully improve the model.
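
One way to automate the "drop one variable at a time" step (a sketch using update(); the helper vector predictors is my own):

# Refit m_full without each predictor in turn and record the adjusted
# R-squared; the largest value flags the variable whose removal helps most
predictors <- c("rank", "gender", "ethnicity", "language", "age",
                "cls_perc_eval", "cls_students", "cls_level",
                "cls_profs", "cls_credits", "bty_avg")
sapply(predictors, function(v) {
  fit <- update(m_full, as.formula(paste(". ~ . -", v)))
  summary(fit)$adj.r.squared
})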

Exercise 15. Using backward-selection and adjusted \(R^2\) as the selection criterion, determine the best model. You do not need to show all steps in your answer, just the output for the final model. Also, write out the linear model for predicting score based on the final model you settle on.
fit_best_model <- lm(score ~  gender + ethnicity + language + age + 
    cls_perc_eval + cls_credits + bty_avg + pic_color, data = evals)

summary(fit_best_model)
## 
## Call:
## lm(formula = score ~ gender + ethnicity + language + age + cls_perc_eval + 
##     cls_credits + bty_avg + pic_color, data = evals)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.85320 -0.32394  0.09984  0.37930  0.93610 
## 
## Coefficients:
##                        Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            3.771922   0.232053  16.255  < 2e-16 ***
## gendermale             0.207112   0.050135   4.131 4.30e-05 ***
## ethnicitynot minority  0.167872   0.075275   2.230  0.02623 *  
## languagenon-english   -0.206178   0.103639  -1.989  0.04726 *  
## age                   -0.006046   0.002612  -2.315  0.02108 *  
## cls_perc_eval          0.004656   0.001435   3.244  0.00127 ** 
## cls_creditsone credit  0.505306   0.104119   4.853 1.67e-06 ***
## bty_avg                0.051069   0.016934   3.016  0.00271 ** 
## pic_colorcolor        -0.190579   0.067351  -2.830  0.00487 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4992 on 454 degrees of freedom
## Multiple R-squared:  0.1722, Adjusted R-squared:  0.1576 
## F-statistic:  11.8 on 8 and 454 DF,  p-value: 2.58e-15
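
Written out, the final model (coefficients taken from the summary above, rounded) is:

\[ \begin{aligned} \widehat{score} = 3.7719 &+ 0.2071 \times gender\_male + 0.1679 \times ethnicity\_not\_minority \\ &- 0.2062 \times language\_non\_english - 0.0060 \times age + 0.0047 \times cls\_perc\_eval \\ &+ 0.5053 \times cls\_credits\_one\_credit + 0.0511 \times bty\_avg - 0.1906 \times pic\_color\_color \end{aligned} \]
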
Exercise 16. Verify that the conditions for this model are reasonable using diagnostic plots.
plot(fit_best_model, which = 1:3)  # residuals vs fitted, normal q-q, scale-location

Exercise 17. The original paper describes how these data were gathered by taking a sample of professors from the University of Texas at Austin and including all courses that they have taught. Considering that each row represents a course, could this new information have an impact on any of the conditions of linear regression?

The diagnostic plots show no glaring departures from the conditions for linear regression; the tails of the residual q-q plot deviate somewhat from normality, but with this many observations the residuals are nearly normal enough for the model to still be useful. The new information does matter, however: because each row represents a course and the same professor appears in multiple rows, the observations are not independent, and evaluations of the same professor are likely to be correlated.
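
To see the repeated-professor structure directly (a quick check using the prof_id column):

# Number of courses contributed by each professor; counts greater than 1
# mean the same professor appears in multiple rows
evals %>%
  count(prof_id, sort = TRUE) %>%
  head()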

Exercise 18. Based on your final model, describe the characteristics of a professor and course at University of Texas at Austin that would be associated with a high evaluation score.

- gender: male
- ethnicity: not minority
- language: English
- age: younger
- cls_perc_eval: higher
- cls_credits: one credit
- bty_avg: higher
- pic_color: black and white

Exercise 19. Would you be comfortable generalizing your conclusions to apply to professors generally (at any university)? Why or why not?

**No. This is observational data, and the sample includes only professors from the University of Texas at Austin; it is not a random sample of professors from universities in general, so the conclusions should not be generalized beyond UT Austin.**