Grading the professor

Many college courses conclude by giving students the opportunity to evaluate the course and the instructor anonymously. However, the use of these student evaluations as an indicator of course quality and teaching effectiveness is often criticized because these measures may reflect the influence of non-teaching related characteristics, such as the physical appearance of the instructor. The article titled, “Beauty in the classroom: instructors’ pulchritude and putative pedagogical productivity” (Hamermesh and Parker, 2005) found that instructors who are viewed to be better looking receive higher instructional ratings. (Daniel S. Hamermesh, Amy Parker, Beauty in the classroom: instructors pulchritude and putative pedagogical productivity, Economics of Education Review, Volume 24, Issue 4, August 2005, Pages 369-376, ISSN 0272-7757, 10.1016/j.econedurev.2004.07.013. http://www.sciencedirect.com/science/article/pii/S0272775704001165.)

In this lab we will analyze the data from this study in order to learn what goes into a positive professor evaluation.

The data

The data were gathered from end-of-semester student evaluations for a large sample of professors from the University of Texas at Austin. In addition, six students rated the professors’ physical appearance. (This is a slightly modified version of the original data set that was released as part of the replication data for Data Analysis Using Regression and Multilevel/Hierarchical Models (Gelman and Hill, 2007).) The result is a data frame where each row contains a different course and columns represent variables about the courses and professors.

load("more/evals.RData")
variable description
score average professor evaluation score: (1) very unsatisfactory - (5) excellent.
rank rank of professor: teaching, tenure track, tenured.
ethnicity ethnicity of professor: not minority, minority.
gender gender of professor: female, male.
language language of school where professor received education: english or non-english.
age age of professor.
cls_perc_eval percent of students in class who completed evaluation.
cls_did_eval number of students in class who completed evaluation.
cls_students total number of students in class.
cls_level class level: lower, upper.
cls_profs number of professors teaching sections in course in sample: single, multiple.
cls_credits number of credits of class: one credit (lab, PE, etc.), multi credit.
bty_f1lower beauty rating of professor from lower level female: (1) lowest - (10) highest.
bty_f1upper beauty rating of professor from upper level female: (1) lowest - (10) highest.
bty_f2upper beauty rating of professor from second upper level female: (1) lowest - (10) highest.
bty_m1lower beauty rating of professor from lower level male: (1) lowest - (10) highest.
bty_m1upper beauty rating of professor from upper level male: (1) lowest - (10) highest.
bty_m2upper beauty rating of professor from second upper level male: (1) lowest - (10) highest.
bty_avg average beauty rating of professor.
pic_outfit outfit of professor in picture: not formal, formal.
pic_color color of professor’s picture: color, black & white.

Exploring the data

  1. Is this an observational study or an experiment? The original research question posed in the paper is whether beauty leads directly to the differences in course evaluations. Given the study design, is it possible to answer this question as it is phrased? If not, rephrase the question.

    It is an observational study, so the causal question cannot be answered as phrased: without random assignment we cannot conclude that beauty leads directly to differences in evaluations. A better phrasing would be: “Is instructor beauty associated with differences in course evaluations?”

  2. Describe the distribution of score. Is the distribution skewed? What does that tell you about how students rate courses? Is this what you expected to see? Why, or why not?

hist(evals$score)

Yes, the distribution is skewed to the left: students give high ratings (above 4) to most professors, with a tail of lower scores. This is roughly what I expected, since the University of Texas at Austin is a well-regarded school and should have capable professors.
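
A quick numerical summary backs this up (a minimal sketch; the exact values come from the loaded evals data):

summary(evals$score)   # five-number summary of the evaluation scores
mean(evals$score > 4)  # proportion of courses with an average score above 4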

  3. Excluding score, select two other variables and describe their relationship using an appropriate visualization (scatterplot, side-by-side boxplots, or mosaic plot).
plot(evals$cls_students, evals$age)

There does not appear to be a relationship between the age of the professor and the number of students in the class.
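
A correlation coefficient gives a quick numerical check of this (a minimal sketch):

cor(evals$cls_students, evals$age)  # a value near 0 indicates little linear association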

Simple linear regression

The fundamental phenomenon suggested by the study is that better looking teachers are evaluated more favorably. Let’s create a scatterplot to see if this appears to be the case:

plot(evals$score ~ evals$bty_avg)

Before we draw conclusions about the trend, compare the number of observations in the data frame with the approximate number of points on the scatterplot. Is anything awry?

str(evals)
## 'data.frame':    463 obs. of  21 variables:
##  $ score        : num  4.7 4.1 3.9 4.8 4.6 4.3 2.8 4.1 3.4 4.5 ...
##  $ rank         : Factor w/ 3 levels "teaching","tenure track",..: 2 2 2 2 3 3 3 3 3 3 ...
##  $ ethnicity    : Factor w/ 2 levels "minority","not minority": 1 1 1 1 2 2 2 2 2 2 ...
##  $ gender       : Factor w/ 2 levels "female","male": 1 1 1 1 2 2 2 2 2 1 ...
##  $ language     : Factor w/ 2 levels "english","non-english": 1 1 1 1 1 1 1 1 1 1 ...
##  $ age          : int  36 36 36 36 59 59 59 51 51 40 ...
##  $ cls_perc_eval: num  55.8 68.8 60.8 62.6 85 ...
##  $ cls_did_eval : int  24 86 76 77 17 35 39 55 111 40 ...
##  $ cls_students : int  43 125 125 123 20 40 44 55 195 46 ...
##  $ cls_level    : Factor w/ 2 levels "lower","upper": 2 2 2 2 2 2 2 2 2 2 ...
##  $ cls_profs    : Factor w/ 2 levels "multiple","single": 2 2 2 2 1 1 1 2 2 2 ...
##  $ cls_credits  : Factor w/ 2 levels "multi credit",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ bty_f1lower  : int  5 5 5 5 4 4 4 5 5 2 ...
##  $ bty_f1upper  : int  7 7 7 7 4 4 4 2 2 5 ...
##  $ bty_f2upper  : int  6 6 6 6 2 2 2 5 5 4 ...
##  $ bty_m1lower  : int  2 2 2 2 2 2 2 2 2 3 ...
##  $ bty_m1upper  : int  4 4 4 4 3 3 3 3 3 3 ...
##  $ bty_m2upper  : int  6 6 6 6 3 3 3 3 3 2 ...
##  $ bty_avg      : num  5 5 5 5 3 ...
##  $ pic_outfit   : Factor w/ 2 levels "formal","not formal": 2 2 2 2 2 2 2 2 2 2 ...
##  $ pic_color    : Factor w/ 2 levels "black&white",..: 2 2 2 2 2 2 2 2 2 2 ...

Yes, something is awry: the data frame has 463 observations, but the scatterplot appears to show far fewer points (well under 200), so many points must be plotted on top of one another.

  4. Replot the scatterplot, but this time use the function jitter() on the \(y\)- or the \(x\)-coordinate. (Use ?jitter to learn more.) What was misleading about the initial scatterplot?
plot(evals$score ~ jitter(evals$bty_avg))

Many observations share identical (bty_avg, score) pairs, so their points overlapped exactly on the original scatterplot, making it look like there were far fewer data points than there really are. Jittering separates the overlapping points.
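
Counting the distinct (bty_avg, score) combinations (a small sketch, assuming the evals data frame is loaded) shows how much overplotting there was:

# number of distinct points actually visible on the un-jittered scatterplot
nrow(unique(evals[, c("bty_avg", "score")]))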

  5. Let’s see if the apparent trend in the plot is something more than natural variation. Fit a linear model called m_bty to predict average professor score by average beauty rating and add the line to your plot using abline(m_bty). Write out the equation for the linear model and interpret the slope. Is average beauty score a statistically significant predictor? Does it appear to be a practically significant predictor?
m_bty <- lm(evals$score ~ evals$bty_avg)
plot(evals$score ~ evals$bty_avg)
abline(m_bty)

summary(m_bty)
## 
## Call:
## lm(formula = evals$score ~ evals$bty_avg)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.9246 -0.3690  0.1420  0.3977  0.9309 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)    3.88034    0.07614   50.96  < 2e-16 ***
## evals$bty_avg  0.06664    0.01629    4.09 5.08e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.5348 on 461 degrees of freedom
## Multiple R-squared:  0.03502,    Adjusted R-squared:  0.03293 
## F-statistic: 16.73 on 1 and 461 DF,  p-value: 5.083e-05

Equation: \[\widehat{score} = 3.88034 + 0.06664 \times bty\_avg\]

The slope means that for each additional point of average beauty rating (bty_avg), the model predicts an increase of 0.06664 points in the professor’s evaluation score.
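
As a quick check of this interpretation, here is a small arithmetic sketch using the rounded coefficients from the output above: plugging in beauty ratings one point apart gives predictions that differ by exactly the slope.

# predicted evaluation scores for beauty ratings of 4 and 5
3.88034 + 0.06664 * c(4, 5)
# the two predictions differ by the slope, 0.06664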

From the regression output, bty_avg is a statistically significant predictor (p = 5.08e-05), but it is not practically significant: the slope is only about 0.07 points per beauty point, and the model explains only about 3.5% of the variability in scores (R-squared = 0.035).

  6. Use residual plots to evaluate whether the conditions of least squares regression are reasonable. Provide plots and comments for each one (see the Simple Regression Lab for a reminder of how to make these).
hist(m_bty$residuals)

The histogram of residuals is skewed to the left, so the nearly normal residuals condition is questionable.

plot(m_bty$residuals ~ evals$bty_avg)
abline(h = 0, lty = 3)  # adds a horizontal dashed line at y = 0

The residuals are scattered around the zero line with no obvious pattern, although a few points fall well below it, reflecting the left skew noted above.
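
A normal probability plot (an optional extra check, sketched here) makes it easier to judge how far the residuals depart from normality:

qqnorm(m_bty$residuals)  # points should lie close to the reference line if residuals are nearly normal
qqline(m_bty$residuals)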

Multiple linear regression

The data set contains several variables on the beauty score of the professor: individual ratings from each of the six students who were asked to score the physical appearance of the professors and the average of these six scores. Let’s take a look at the relationship between one of these scores and the average beauty score.

plot(evals$bty_avg ~ evals$bty_f1lower)
cor(evals$bty_avg, evals$bty_f1lower)

As expected the relationship is quite strong - after all, the average score is calculated using the individual scores. We can actually take a look at the relationships between all beauty variables (columns 13 through 19) using the following command:

plot(evals[,13:19])

These variables are collinear (correlated), and adding more than one of these variables to the model would not add much value to the model. In this application and with these highly-correlated predictors, it is reasonable to use the average beauty score as the single representative of these variables.
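
To quantify how strongly these variables are related, one could also compute their correlation matrix (a quick sketch; columns 13 through 19 are the beauty ratings):

round(cor(evals[, 13:19]), 2)  # pairwise correlations among the six individual ratings and their average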

In order to see if beauty is still a significant predictor of professor score after we’ve accounted for the gender of the professor, we can add the gender term into the model.

m_bty_gen <- lm(score ~ bty_avg + gender, data = evals)
summary(m_bty_gen)

  7. P-values and parameter estimates should only be trusted if the conditions for the regression are reasonable. Verify that the conditions for this model are reasonable using diagnostic plots.
m_bty_gen <- lm(score ~ bty_avg + gender, data = evals)
hist(m_bty_gen$residuals)

The nearly normal residuals condition is questionable: the histogram of residuals again shows a left skew.

Constant variability: “The variability of points around the least squares line remains roughly constant.” Based on the residual plot (sketched below), the spread of the points around the fitted line stays roughly constant, so the constant variability condition appears to be met.
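
A minimal sketch of that residual plot, using the residuals and fitted values from m_bty_gen:

plot(m_bty_gen$residuals ~ m_bty_gen$fitted.values)
abline(h = 0, lty = 3)  # adds a horizontal dashed line at y = 0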

  8. Is bty_avg still a significant predictor of score? Has the addition of gender to the model changed the parameter estimate for bty_avg?

    Yes, bty_avg is still a significant predictor of score (p = 6.48e-06 in the new model). Adding gender changed the parameter estimate for bty_avg slightly, from 0.06664 to 0.07416, and increased the adjusted R-squared from about 0.033 to 0.055.

Note that the estimate for gender is now called gendermale. You’ll see this name change whenever you introduce a categorical variable. The reason is that R recodes gender from having the values of female and male to being an indicator variable called gendermale that takes a value of \(0\) for females and a value of \(1\) for males. (Such variables are often referred to as “dummy” variables.)

As a result, for females, the parameter estimate is multiplied by zero, leaving the intercept and slope form familiar from simple regression.

\[ \begin{aligned} \widehat{score} &= \hat{\beta}_0 + \hat{\beta}_1 \times bty\_avg + \hat{\beta}_2 \times (0) \\ &= \hat{\beta}_0 + \hat{\beta}_1 \times bty\_avg\end{aligned} \]
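
One way to see this 0/1 coding directly is to inspect the design matrix that R builds for the model (a brief sketch):

head(model.matrix(m_bty_gen))  # the gendermale column is 0 for female professors and 1 for male professors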

We can plot this line and the line corresponding to males with the following custom function.

multiLines(m_bty_gen)

  9. What is the equation of the line corresponding to males? (Hint: For males, the parameter estimate is multiplied by 1.) For two professors who received the same beauty rating, which gender tends to have the higher course evaluation score?
summary(m_bty_gen)
## 
## Call:
## lm(formula = score ~ bty_avg + gender, data = evals)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.8305 -0.3625  0.1055  0.4213  0.9314 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  3.74734    0.08466  44.266  < 2e-16 ***
## bty_avg      0.07416    0.01625   4.563 6.48e-06 ***
## gendermale   0.17239    0.05022   3.433 0.000652 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.5287 on 460 degrees of freedom
## Multiple R-squared:  0.05912,    Adjusted R-squared:  0.05503 
## F-statistic: 14.45 on 2 and 460 DF,  p-value: 8.177e-07

\[\widehat{score} = 3.74734 + 0.07416 \times bty\_avg + 0.17239 \times 1\] \[\widehat{score} = 3.91973 + 0.07416 \times bty\_avg \]

For two professors with the same beauty rating, this model predicts that the male professor receives a course evaluation score that is 0.17239 points higher.

The decision to call the indicator variable gendermale instead of genderfemale has no deeper meaning. R simply codes the category that comes first alphabetically as a \(0\). (You can change the reference level of a categorical variable, which is the level that is coded as a 0, using the relevel function. Use ?relevel to learn more.)
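
For example, here is a sketch of refitting the model with “male” as the reference level (the gender coefficient would then be reported as genderfemale); evals2 is just a copy so the original data frame is left untouched:

evals2 <- evals
evals2$gender <- relevel(evals2$gender, ref = "male")  # make "male" the level coded as 0
m_bty_gen2 <- lm(score ~ bty_avg + gender, data = evals2)
summary(m_bty_gen2)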

  10. Create a new model called m_bty_rank with gender removed and rank added in. How does R appear to handle categorical variables that have more than two levels? Note that the rank variable has three levels: teaching, tenure track, tenured.
m_bty_rank <- lm(score ~ bty_avg + rank, data = evals)
summary(m_bty_rank)
## 
## Call:
## lm(formula = score ~ bty_avg + rank, data = evals)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.8713 -0.3642  0.1489  0.4103  0.9525 
## 
## Coefficients:
##                  Estimate Std. Error t value Pr(>|t|)    
## (Intercept)       3.98155    0.09078  43.860  < 2e-16 ***
## bty_avg           0.06783    0.01655   4.098 4.92e-05 ***
## ranktenure track -0.16070    0.07395  -2.173   0.0303 *  
## ranktenured      -0.12623    0.06266  -2.014   0.0445 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.5328 on 459 degrees of freedom
## Multiple R-squared:  0.04652,    Adjusted R-squared:  0.04029 
## F-statistic: 7.465 on 3 and 459 DF,  p-value: 6.88e-05

R creates an indicator (dummy) variable for each level of the factor except the reference level, so the three-level rank variable adds two coefficients to the model: ranktenure track and ranktenured, with teaching serving as the reference level.
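
The reference level is simply the first level of the factor, which can be checked with:

levels(evals$rank)  # the first level, "teaching", is the reference category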

The interpretation of the coefficients in multiple regression is slightly different from that of simple regression. The estimate for bty_avg reflects how much higher a group of professors is expected to score if they have a beauty rating that is one point higher while holding all other variables constant. In this case, that translates into considering only professors of the same rank with bty_avg scores that are one point apart.

The search for the best model

We will start with a full model that predicts professor score based on rank, ethnicity, gender, language of the university where they got their degree, age, proportion of students that filled out evaluations, class size, course level, number of professors, number of credits, average beauty rating, outfit, and picture color.

  11. Which variable would you expect to have the highest p-value in this model? Why? Hint: Think about which variable would you expect to not have any association with the professor score.

I expect cls_profs, the number of professors teaching sections of the course, to have the highest p-value, because it should have no association with a professor’s evaluation score.

Let’s run the model…

m_full <- lm(score ~ rank + ethnicity + gender + language + age + cls_perc_eval 
             + cls_students + cls_level + cls_profs + cls_credits + bty_avg 
             + pic_outfit + pic_color, data = evals)
summary(m_full)

From the summary output, the highest p-value belongs to cls_profssingle (p = 0.77806), which is definitely not significant.

  12. Check your suspicions from the previous exercise. Include the model output in your response.

The suspicion is confirmed: in the summary(m_full) output, cls_profssingle has the highest p-value, 0.77806.

  13. Interpret the coefficient associated with the ethnicity variable.

In the full model, holding all other variables constant, a professor who is not a minority is predicted to score about 0.1235 points higher than a minority professor (the coefficient on ethnicitynot minority is 0.1234929).

  14. Drop the variable with the highest p-value and re-fit the model. Did the coefficients and significance of the other explanatory variables change? (One of the things that makes multiple regression interesting is that coefficient estimates depend on the other variables that are included in the model.) If not, what does this say about whether or not the dropped variable was collinear with the other explanatory variables?

In this case we drop cls_profs, the number of professors.

m_almost_full <- lm(score ~ rank + ethnicity + gender + language + age + cls_perc_eval 
             + cls_students + cls_level + cls_credits + bty_avg 
             + pic_outfit + pic_color, data = evals)
summary(m_almost_full)
## 
## Call:
## lm(formula = score ~ rank + ethnicity + gender + language + age + 
##     cls_perc_eval + cls_students + cls_level + cls_credits + 
##     bty_avg + pic_outfit + pic_color, data = evals)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.7836 -0.3257  0.0859  0.3513  0.9551 
## 
## Coefficients:
##                         Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            4.0872523  0.2888562  14.150  < 2e-16 ***
## ranktenure track      -0.1476746  0.0819824  -1.801 0.072327 .  
## ranktenured           -0.0973829  0.0662614  -1.470 0.142349    
## ethnicitynot minority  0.1274458  0.0772887   1.649 0.099856 .  
## gendermale             0.2101231  0.0516873   4.065 5.66e-05 ***
## languagenon-english   -0.2282894  0.1111305  -2.054 0.040530 *  
## age                   -0.0089992  0.0031326  -2.873 0.004262 ** 
## cls_perc_eval          0.0052888  0.0015317   3.453 0.000607 ***
## cls_students           0.0004687  0.0003737   1.254 0.210384    
## cls_levelupper         0.0606374  0.0575010   1.055 0.292200    
## cls_creditsone credit  0.5061196  0.1149163   4.404 1.33e-05 ***
## bty_avg                0.0398629  0.0174780   2.281 0.023032 *  
## pic_outfitnot formal  -0.1083227  0.0721711  -1.501 0.134080    
## pic_colorcolor        -0.2190527  0.0711469  -3.079 0.002205 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4974 on 449 degrees of freedom
## Multiple R-squared:  0.187,  Adjusted R-squared:  0.1634 
## F-statistic: 7.943 on 13 and 449 DF,  p-value: 2.336e-14

Yes, the coefficients and p-values of the other explanatory variables changed slightly after dropping cls_profs, which suggests the dropped variable was somewhat collinear with the other explanatory variables.

  15. Using backward-selection and p-value as the selection criterion, determine the best model. You do not need to show all steps in your answer, just the output for the final model. Also, write out the linear model for predicting score based on the final model you settle on.

Continuing the backward selection, we next drop cls_levelupper, which has the highest p-value (0.2922) in the previous model, and keep removing the variable with the highest p-value one at a time. After also dropping rank, cls_students, and pic_outfit, all remaining predictors are significant and we arrive at the final model:

m_almost_full_1 <- lm(score ~ ethnicity + gender + language + age + cls_perc_eval 
              + cls_credits + bty_avg 
              + pic_color, data = evals)
summary(m_almost_full_1)
## 
## Call:
## lm(formula = score ~ ethnicity + gender + language + age + cls_perc_eval + 
##     cls_credits + bty_avg + pic_color, data = evals)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.85320 -0.32394  0.09984  0.37930  0.93610 
## 
## Coefficients:
##                        Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            3.771922   0.232053  16.255  < 2e-16 ***
## ethnicitynot minority  0.167872   0.075275   2.230  0.02623 *  
## gendermale             0.207112   0.050135   4.131 4.30e-05 ***
## languagenon-english   -0.206178   0.103639  -1.989  0.04726 *  
## age                   -0.006046   0.002612  -2.315  0.02108 *  
## cls_perc_eval          0.004656   0.001435   3.244  0.00127 ** 
## cls_creditsone credit  0.505306   0.104119   4.853 1.67e-06 ***
## bty_avg                0.051069   0.016934   3.016  0.00271 ** 
## pic_colorcolor        -0.190579   0.067351  -2.830  0.00487 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4992 on 454 degrees of freedom
## Multiple R-squared:  0.1722, Adjusted R-squared:  0.1576 
## F-statistic:  11.8 on 8 and 454 DF,  p-value: 2.58e-15

All of the remaining p-values are significant. Written out, the final model is:

\[ \begin{aligned} \widehat{score} = &\ 3.771922 + 0.167872 \times ethnicity\_not\_minority + 0.207112 \times gender\_male \\ &- 0.206178 \times language\_non\_english - 0.006046 \times age + 0.004656 \times cls\_perc\_eval \\ &+ 0.505306 \times cls\_credits\_one\_credit + 0.051069 \times bty\_avg - 0.190579 \times pic\_color\_color \end{aligned} \]

The same predictions can be computed in R by converting each factor to a 0/1 indicator rather than multiplying the factor directly:

score_hat <- with(evals,
    3.771922
    + 0.167872 * (ethnicity == "not minority")
    + 0.207112 * (gender == "male")
    - 0.206178 * (language == "non-english")
    - 0.006046 * age
    + 0.004656 * cls_perc_eval
    + 0.505306 * (cls_credits == "one credit")
    + 0.051069 * bty_avg
    - 0.190579 * (pic_color == "color"))

  16. Verify that the conditions for this model are reasonable using diagnostic plots.

    Let’s look at the histogram of residuals to check the nearly normal residuals condition.

hist(m_almost_full_1$residuals) 

The histogram suggests the nearly normal residuals condition is reasonably satisfied, although the residuals still show some left skew.

Constant variability of residuals

plot(m_almost_full_1$residuals ~ evals$bty_avg)
abline(h = 0, lty = 3)  # adds a horizontal dashed line at y = 0

The variability of the residuals appears roughly constant across bty_avg, so the constant variability condition is met.

Linearity: plot the residuals against a numerical explanatory variable, such as age.

plot(m_almost_full_1$residuals ~ evals$age)
abline(h = 0, lty = 3)  # adds a horizontal dashed line at y = 0

The residuals show no obvious pattern against age, so the linearity condition appears reasonable.
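
Independence can also be inspected by plotting the residuals in the order the courses appear in the data set (a sketch; a visible pattern would suggest dependence between adjacent rows):

plot(m_almost_full_1$residuals)  # residuals in order of data collection
abline(h = 0, lty = 3)  # adds a horizontal dashed line at y = 0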

  17. The original paper describes how these data were gathered by taking a sample of professors from the University of Texas at Austin and including all courses that they have taught. Considering that each row represents a course, could this new information have an impact on any of the conditions of linear regression?

Yes. Because each row represents a course and the sample includes all courses taught by each sampled professor, the same professor appears in multiple rows. Scores for courses taught by the same instructor are likely to be related, so the independence condition of linear regression is no longer satisfied.

  18. Based on your final model, describe the characteristics of a professor and course at University of Texas at Austin that would be associated with a high evaluation score.

    Based on the final model, the characteristics associated with a high evaluation score are: received their degree from an English-speaking institution, has a high beauty rating, is male, is not a minority, is younger, teaches a one-credit course, has a high percentage of students completing the evaluation, and has a black-and-white rather than a color photo.

  19. Would you be comfortable generalizing your conclusions to apply to professors generally (at any university)? Why or why not?

I would not be comfortable generalizing. The sample comes from a single university (UT Austin), so it may not be representative of professors at other institutions, and the final model explains only a small share of the variability in scores (adjusted R-squared of roughly 0.16).

This is a product of OpenIntro that is released under a Creative Commons Attribution-ShareAlike 3.0 Unported license. This lab was written by Mine Çetinkaya-Rundel and Andrew Bray.