Many college courses conclude by giving students the opportunity to evaluate the course and the instructor anonymously. However, the use of these student evaluations as an indicator of course quality and teaching effectiveness is often criticized because these measures may reflect the influence of non-teaching-related characteristics, such as the physical appearance of the instructor. The article "Beauty in the classroom: instructors' pulchritude and putative pedagogical productivity" (Hamermesh and Parker, 2005) found that instructors who are viewed to be better looking receive higher instructional ratings. (Daniel S. Hamermesh, Amy Parker, Beauty in the classroom: instructors' pulchritude and putative pedagogical productivity, Economics of Education Review, Volume 24, Issue 4, August 2005, Pages 369-376, ISSN 0272-7757, 10.1016/j.econedurev.2004.07.013. http://www.sciencedirect.com/science/article/pii/S0272775704001165.)
In this lab we will analyze the data from this study in order to learn what goes into a positive professor evaluation.
The data were gathered from end-of-semester student evaluations for a large sample of professors from the University of Texas at Austin. In addition, six students rated the professors' physical appearance. (This is a slightly modified version of the original data set that was released as part of the replication data for Data Analysis Using Regression and Multilevel/Hierarchical Models (Gelman and Hill, 2007).) The result is a data frame where each row contains a different course and columns represent variables about the courses and professors.
load("more/evals.RData")
library(ggplot2)
## Warning: package 'ggplot2' was built under R version 3.3.2
library('IS606')
##
## Welcome to CUNY IS606 Statistics and Probability for Data Analytics
## This package is designed to support this course. The text book used
## is OpenIntro Statistics, 3rd Edition. You can read this by typing
## vignette('os3') or visit www.OpenIntro.org.
##
## The getLabs() function will return a list of the labs available.
##
## The demo(package='IS606') will list the demos that are available.
##
## Attaching package: 'IS606'
## The following object is masked from 'package:utils':
##
## demo
| variable | description |
|---|---|
| score | average professor evaluation score: (1) very unsatisfactory - (5) excellent. |
| rank | rank of professor: teaching, tenure track, tenured. |
| ethnicity | ethnicity of professor: not minority, minority. |
| gender | gender of professor: female, male. |
| language | language of school where professor received education: english or non-english. |
| age | age of professor. |
| cls_perc_eval | percent of students in class who completed evaluation. |
| cls_did_eval | number of students in class who completed evaluation. |
| cls_students | total number of students in class. |
| cls_level | class level: lower, upper. |
| cls_profs | number of professors teaching sections in course in sample: single, multiple. |
| cls_credits | number of credits of class: one credit (lab, PE, etc.), multi credit. |
| bty_f1lower | beauty rating of professor from lower level female: (1) lowest - (10) highest. |
| bty_f1upper | beauty rating of professor from upper level female: (1) lowest - (10) highest. |
| bty_f2upper | beauty rating of professor from second upper level female: (1) lowest - (10) highest. |
| bty_m1lower | beauty rating of professor from lower level male: (1) lowest - (10) highest. |
| bty_m1upper | beauty rating of professor from upper level male: (1) lowest - (10) highest. |
| bty_m2upper | beauty rating of professor from second upper level male: (1) lowest - (10) highest. |
| bty_avg | average beauty rating of professor. |
| pic_outfit | outfit of professor in picture: not formal, formal. |
| pic_color | color of professor's picture: color, black & white. |
It is an observational study.
As there are no control and experimental groups, we cannot establish causation between the explanatory and response variables; we can only look for a correlation. So the question cannot be answered the way it is phrased. We can rephrase it as "Does the instructor's beauty have a positive (or negative) correlation with the student course evaluation?"
Describe the distribution of score. Is the distribution skewed? What does that tell you about how students rate courses? Is this what you expected to see? Why, or why not?

hist(evals$score)

The histogram of evaluation scores is left skewed: students give far more positive than negative evaluations of their teachers. This is not what we expected. We expected a roughly normal distribution in which most teachers are rated as average and only a few receive an extreme rating, either excellent or unsatisfactory.
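One quick numeric check of this skew is to compare the mean and median: for a left-skewed distribution the mean sits below the median. This is a small optional sketch, not part of the original lab.

# Five-number summary plus mean of the evaluation scores;
# a mean below the median is consistent with left skew.
summary(evals$score)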
Excluding score, select two other variables and describe their relationship using an appropriate visualization (scatterplot, side-by-side boxplots, or mosaic plot).

boxplot(evals$bty_avg ~ evals$ethnicity)

The side-by-side boxplots suggest that the average beauty rating is about the same irrespective of the ethnicity of the teacher.
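To back up that visual impression with numbers, the group means can be compared directly (an optional sketch, not part of the original lab):

# Mean beauty rating for each ethnicity group; similar values support
# the impression that bty_avg does not differ much by ethnicity.
tapply(evals$bty_avg, evals$ethnicity, mean)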
The fundamental phenomenon suggested by the study is that better looking teachers are evaluated more favorably. Let’s create a scatterplot to see if this appears to be the case:
plot(evals$score ~ evals$bty_avg)

Before we draw conclusions about the trend, compare the number of observations in the data frame with the approximate number of points on the scatterplot. Is anything awry?
nrow(evals)
## [1] 463
There seem to be more observations than the approximate number of points on the scatterplot.
Replot the scatterplot, but this time use jitter() on the \(y\)- or the \(x\)-coordinate. (Use ?jitter to learn more.) What was misleading about the initial scatterplot?

library(ggplot2)
ggplot(evals, aes(bty_avg, score)) + geom_point(position = position_jitter(w = 0.3, h = 0.3)) + ylab("score") + xlab("beauty average")

The initial scatterplot was misleading because many points overlapped exactly, so far fewer points were visible than there are observations. By adding a small amount of noise we can now distinguish the individual points. Even in the jittered scatterplot, no clear relationship between beauty average and score is evident.
Fit a linear model called m_bty to predict average professor score by average beauty rating and add the line to your plot using abline(m_bty). Write out the equation for the linear model and interpret the slope. Is average beauty score a statistically significant predictor? Does it appear to be a practically significant predictor?

m_bty <- lm(evals$score ~ evals$bty_avg)
cor(evals$score, evals$bty_avg)
## [1] 0.1871424
summary(m_bty)
##
## Call:
## lm(formula = evals$score ~ evals$bty_avg)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.9246 -0.3690 0.1420 0.3977 0.9309
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 3.88034 0.07614 50.96 < 2e-16 ***
## evals$bty_avg 0.06664 0.01629 4.09 5.08e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.5348 on 461 degrees of freedom
## Multiple R-squared: 0.03502, Adjusted R-squared: 0.03293
## F-statistic: 16.73 on 1 and 461 DF, p-value: 5.083e-05
We find

score = 3.88034 + 0.06664 × bty_avg
plot(jitter(evals$score, factor = 1.2) ~ jitter(evals$bty_avg, factor = 1.2))
abline(m_bty)

The adjusted R² is about 3.3%, so only about 3.3% of the variability in evaluation scores is explained by the average beauty rating. The model shows a statistically significant relationship, but not a practically significant one: the points are scattered widely around the line, and the slope of 0.06664 means a one-point change in the beauty rating shifts the predicted score by less than 0.07 points.
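A confidence interval for the slope makes the same point about practical significance (an optional sketch; the interval itself is not reproduced here): even the upper end of the plausible range for the slope is small.

# 95% confidence intervals for the intercept and slope of m_bty
confint(m_bty, level = 0.95)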
plot_ss(x = evals$bty_avg, y = evals$score, showSquares = TRUE)
## Click two points to make a line.
## Call:
## lm(formula = y ~ x, data = pts)
##
## Coefficients:
## (Intercept) x
## 3.88034 0.06664
##
## Sum of Squares: 131.868
The residuals include many outliers and their distribution does not look normal, so the conditions for least squares regression are not well satisfied.
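A minimal set of diagnostics to support this judgment (an optional sketch using base R plotting; the plots are not reproduced here):

# Residual diagnostics for the simple model m_bty
hist(m_bty$residuals)                    # symmetry of residuals
qqnorm(m_bty$residuals)                  # normal probability plot
qqline(m_bty$residuals)
plot(m_bty$residuals ~ evals$bty_avg)    # constant variability check
abline(h = 0, lty = 3)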
The data set contains several variables on the beauty score of the professor: individual ratings from each of the six students who were asked to score the physical appearance of the professors and the average of these six scores. Let’s take a look at the relationship between one of these scores and the average beauty score.
plot(evals$bty_avg ~ evals$bty_f1lower)
cor(evals$bty_avg, evals$bty_f1lower)
## [1] 0.8439112
As expected the relationship is quite strong - after all, the average score is calculated using the individual scores. We can actually take a look at the relationships between all beauty variables (columns 13 through 19) using the following command:
plot(evals[, 13:19])

These variables are collinear (correlated), and adding more than one of them to the model would not add much value to the model. In this application and with these highly correlated predictors, it is reasonable to use the average beauty score as the single representative of these variables.
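A numeric companion to the scatterplot matrix (an optional sketch, assuming the beauty variables sit in columns 13 through 19 as stated above):

# Pairwise correlations among the individual beauty ratings and bty_avg
round(cor(evals[, 13:19]), 2)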
In order to see if beauty is still a significant predictor of professor score after we’ve accounted for the gender of the professor, we can add the gender term into the model.
m_bty_gen <- lm(score ~ bty_avg + gender, data = evals)
summary(m_bty_gen)
##
## Call:
## lm(formula = score ~ bty_avg + gender, data = evals)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.8305 -0.3625 0.1055 0.4213 0.9314
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 3.74734 0.08466 44.266 < 2e-16 ***
## bty_avg 0.07416 0.01625 4.563 6.48e-06 ***
## gendermale 0.17239 0.05022 3.433 0.000652 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.5287 on 460 degrees of freedom
## Multiple R-squared: 0.05912, Adjusted R-squared: 0.05503
## F-statistic: 14.45 on 2 and 460 DF, p-value: 8.177e-07
Let's verify whether the residuals are normally distributed.
# Normal probability plot of the residuals
qqnorm(m_bty_gen$residuals)
qqline(m_bty_gen$residuals)

# Residual plot against each predictor variable
plot(m_bty_gen$residuals ~ evals$bty_avg)
abline(h = 0, lty = 4)  # adds a horizontal dashed line at y = 0
plot(m_bty_gen$residuals ~ evals$gender)
abline(h = 0, lty = 4)  # adds a horizontal dashed line at y = 0

# Four diagnostic plots: Residuals vs Fitted, Normal Q-Q, Scale-Location, Residuals vs Leverage
plot(m_bty_gen)

# Histogram of residuals
hist(m_bty_gen$residuals)

# Checking for linearity
plot(jitter(evals$score) ~ evals$bty_avg)
plot(evals$score ~ evals$gender)

From the histogram of residuals we can see that the distribution of the residuals is slightly left skewed. In the normal probability plot, the residuals do not follow the line in the upper quantiles. Finally, the Residuals vs Fitted plot suggests roughly constant variability of the residuals.
The conditions are reasonably met, although the residuals show possible outliers and the normal probability plot shows a breakdown, especially in the upper quantiles.
We will assume independence; we have no information on how the sample was taken, and we cannot use the collection order of the residuals to check their independence.
Is bty_avg still a significant predictor of score? Has the addition of gender to the model changed the parameter estimate for bty_avg?

Adjusted R² = 0.055 and the parameter estimate for bty_avg is 0.07416. Yes, bty_avg is still a significant predictor; adding gender made beauty average even more significant, since its p-value is smaller than in the model where beauty average was the sole variable.
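One way to see both points at once (an optional sketch; compare the bty_avg rows of the two tables) is to print the coefficient tables of the simple and the extended model:

# bty_avg in the simple model vs. the model that also includes gender
coef(summary(m_bty))
coef(summary(m_bty_gen))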
Note that the estimate for gender is now called gendermale. You’ll see this name change whenever you introduce a categorical variable. The reason is that R recodes gender from having the values of female and male to being an indicator variable called gendermale that takes a value of 0 for females and a value of 1 for males. (Such variables are often referred to as “dummy” variables.)
As a result, for females, the parameter estimate is multiplied by zero, leaving the intercept and slope form familiar from simple regression.
\[ \begin{aligned} \widehat{score} &= \hat{\beta}_0 + \hat{\beta}_1 \times bty\_avg + \hat{\beta}_2 \times (0) \\ &= \hat{\beta}_0 + \hat{\beta}_1 \times bty\_avg\end{aligned} \]
We can plot this line and the line corresponding to males with the following custom function.
multiLines(m_bty_gen)

score = 3.74734 + 0.07416 × beauty_avg + 0.17239 × gender_male
For male professors we evaluate the equation with gender_male = 1; for female professors we substitute 0.

All other things being equal, a male professor is therefore predicted to have an evaluation score 0.17239 points higher than a female professor.
The decision to call the indicator variable gendermale instead of genderfemale has no deeper meaning. R simply codes the category that comes first alphabetically as a \(0\). (You can change the reference level of a categorical variable, which is the level that is coded as a 0, using the relevel function. Use ?relevel to learn more.)
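For example (an optional sketch, assuming gender is stored as a factor, as it is in this data set), making male the reference level turns the indicator into genderfemale:

# Work on a copy so later models in this lab are unaffected
evals2 <- evals
evals2$gender <- relevel(evals2$gender, ref = "male")
coef(lm(score ~ bty_avg + gender, data = evals2))  # coefficient is now genderfemale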
Create a new model called m_bty_rank with gender removed and rank added in. How does R appear to handle categorical variables that have more than two levels? Note that the rank variable has three levels: teaching, tenure track, tenured.

m_bty_rank <- lm(score ~ bty_avg + rank, data = evals)
summary(m_bty_rank)
##
## Call:
## lm(formula = score ~ bty_avg + rank, data = evals)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.8713 -0.3642 0.1489 0.4103 0.9525
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 3.98155 0.09078 43.860 < 2e-16 ***
## bty_avg 0.06783 0.01655 4.098 4.92e-05 ***
## ranktenure track -0.16070 0.07395 -2.173 0.0303 *
## ranktenured -0.12623 0.06266 -2.014 0.0445 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.5328 on 459 degrees of freedom
## Multiple R-squared: 0.04652, Adjusted R-squared: 0.04029
## F-statistic: 7.465 on 3 and 459 DF, p-value: 6.88e-05
multiLines(m_bty_rank)

For a categorical variable with more than two levels, R creates a separate indicator (dummy) variable for each level other than the reference level; here rank yields ranktenure track and ranktenured, with teaching as the reference level.
The interpretation of the coefficients in multiple regression is slightly different from that of simple regression. The estimate for bty_avg reflects how much higher a group of professors is expected to score if they have a beauty rating that is one point higher while holding all other variables constant. In this case, that translates into considering only professors of the same rank with bty_avg scores that are one point apart.
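To see this encoding directly, one can inspect the design matrix R builds for rank (an optional sketch; the reference level is the first level of the factor, here teaching):

# First few rows of the design matrix for rank: two indicator columns,
# with the reference level (teaching) represented by both columns being 0.
head(model.matrix(~ rank, data = evals))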
We will start with a full model that predicts professor score based on rank, ethnicity, gender, language of the university where they got their degree, age, proportion of students that filled out evaluations, class size, course level, number of professors, number of credits, average beauty rating, outfit, and picture color.
Possibly cls_profs, the number of professors teaching sections of the course in the sample (single or multiple), since the evaluations are done within a class/section: whether the course is also taught by other professors should not have an impact on a given evaluation score.
Let’s run the model…
m_full <- lm(score ~ rank + ethnicity + gender + language + age + cls_perc_eval
+ cls_students + cls_level + cls_profs + cls_credits + bty_avg
+ pic_outfit + pic_color, data = evals)
summary(m_full)
##
## Call:
## lm(formula = score ~ rank + ethnicity + gender + language + age +
## cls_perc_eval + cls_students + cls_level + cls_profs + cls_credits +
## bty_avg + pic_outfit + pic_color, data = evals)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.77397 -0.32432 0.09067 0.35183 0.95036
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.0952141 0.2905277 14.096 < 2e-16 ***
## ranktenure track -0.1475932 0.0820671 -1.798 0.07278 .
## ranktenured -0.0973378 0.0663296 -1.467 0.14295
## ethnicitynot minority 0.1234929 0.0786273 1.571 0.11698
## gendermale 0.2109481 0.0518230 4.071 5.54e-05 ***
## languagenon-english -0.2298112 0.1113754 -2.063 0.03965 *
## age -0.0090072 0.0031359 -2.872 0.00427 **
## cls_perc_eval 0.0053272 0.0015393 3.461 0.00059 ***
## cls_students 0.0004546 0.0003774 1.205 0.22896
## cls_levelupper 0.0605140 0.0575617 1.051 0.29369
## cls_profssingle -0.0146619 0.0519885 -0.282 0.77806
## cls_creditsone credit 0.5020432 0.1159388 4.330 1.84e-05 ***
## bty_avg 0.0400333 0.0175064 2.287 0.02267 *
## pic_outfitnot formal -0.1126817 0.0738800 -1.525 0.12792
## pic_colorcolor -0.2172630 0.0715021 -3.039 0.00252 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.498 on 448 degrees of freedom
## Multiple R-squared: 0.1871, Adjusted R-squared: 0.1617
## F-statistic: 7.366 on 14 and 448 DF, p-value: 6.552e-14
The p-value for this variable is 0.77806, the highest in the model. Hence the number of professors (cls_profs) has the weakest association with score.
plot(evals$score ~ evals$cls_profs)

All else held constant, a professor who is not a minority is expected to score about 0.12 points higher than a minority professor; however, the ethnicity p-value of about 0.117 indicates only a weak relationship with score.
m_full_1 <- lm(score ~ rank + ethnicity + gender + language + age + cls_perc_eval
+ cls_students + cls_level + cls_credits + bty_avg
+ pic_outfit + pic_color, data = evals)
summary(m_full_1)
##
## Call:
## lm(formula = score ~ rank + ethnicity + gender + language + age +
## cls_perc_eval + cls_students + cls_level + cls_credits +
## bty_avg + pic_outfit + pic_color, data = evals)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.7836 -0.3257 0.0859 0.3513 0.9551
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.0872523 0.2888562 14.150 < 2e-16 ***
## ranktenure track -0.1476746 0.0819824 -1.801 0.072327 .
## ranktenured -0.0973829 0.0662614 -1.470 0.142349
## ethnicitynot minority 0.1274458 0.0772887 1.649 0.099856 .
## gendermale 0.2101231 0.0516873 4.065 5.66e-05 ***
## languagenon-english -0.2282894 0.1111305 -2.054 0.040530 *
## age -0.0089992 0.0031326 -2.873 0.004262 **
## cls_perc_eval 0.0052888 0.0015317 3.453 0.000607 ***
## cls_students 0.0004687 0.0003737 1.254 0.210384
## cls_levelupper 0.0606374 0.0575010 1.055 0.292200
## cls_creditsone credit 0.5061196 0.1149163 4.404 1.33e-05 ***
## bty_avg 0.0398629 0.0174780 2.281 0.023032 *
## pic_outfitnot formal -0.1083227 0.0721711 -1.501 0.134080
## pic_colorcolor -0.2190527 0.0711469 -3.079 0.002205 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.4974 on 449 degrees of freedom
## Multiple R-squared: 0.187, Adjusted R-squared: 0.1634
## F-statistic: 7.943 on 13 and 449 DF, p-value: 2.336e-14
m_full$coefficients - m_full_1$coefficients
## Warning in m_full$coefficients - m_full_1$coefficients: longer object
## length is not a multiple of shorter object length
## (Intercept) ranktenure track ranktenured
## 7.961761e-03 8.133220e-05 4.512112e-05
## ethnicitynot minority gendermale languagenon-english
## -3.952838e-03 8.249882e-04 -1.521743e-03
## age cls_perc_eval cls_students
## -8.003969e-06 3.847644e-05 -1.408227e-05
## cls_levelupper cls_profssingle cls_creditsone credit
## -1.234699e-04 -5.207815e-01 4.621803e-01
## bty_avg pic_outfitnot formal pic_colorcolor
## 1.483560e-01 1.063710e-01 -4.304515e+00
The coefficients and their significance changed only slightly when cls_profs was dropped, and the other explanatory variables are slightly more significant now. Note that the direct subtraction above is not a reliable comparison: as the warning indicates, the two coefficient vectors have different lengths, so R recycles values and the large apparent differences at the end of the printout are artifacts of misalignment.
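A safer comparison (an optional sketch) aligns the coefficients by name before subtracting, so only terms present in both models are compared:

# Compare only the coefficients shared by both models, matched by name
common <- intersect(names(coef(m_full)), names(coef(m_full_1)))
round(coef(m_full)[common] - coef(m_full_1)[common], 5)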
m_full2 <- lm(score ~ ethnicity + gender + language + age + cls_perc_eval
+ cls_credits + bty_avg
+ pic_color, data = evals)
summary(m_full2)
##
## Call:
## lm(formula = score ~ ethnicity + gender + language + age + cls_perc_eval +
## cls_credits + bty_avg + pic_color, data = evals)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.85320 -0.32394 0.09984 0.37930 0.93610
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 3.771922 0.232053 16.255 < 2e-16 ***
## ethnicitynot minority 0.167872 0.075275 2.230 0.02623 *
## gendermale 0.207112 0.050135 4.131 4.30e-05 ***
## languagenon-english -0.206178 0.103639 -1.989 0.04726 *
## age -0.006046 0.002612 -2.315 0.02108 *
## cls_perc_eval 0.004656 0.001435 3.244 0.00127 **
## cls_creditsone credit 0.505306 0.104119 4.853 1.67e-06 ***
## bty_avg 0.051069 0.016934 3.016 0.00271 **
## pic_colorcolor -0.190579 0.067351 -2.830 0.00487 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.4992 on 454 degrees of freedom
## Multiple R-squared: 0.1722, Adjusted R-squared: 0.1576
## F-statistic: 11.8 on 8 and 454 DF, p-value: 2.58e-15
The final model is

score = 3.771922 + 0.167872 × ethnicitynot minority + 0.207112 × gendermale − 0.206178 × languagenon-english − 0.006046 × age + 0.004656 × cls_perc_eval + 0.505306 × cls_creditsone credit + 0.051069 × bty_avg − 0.190579 × pic_colorcolor

Verifying conditions:
Condition 1: the residuals of the model are nearly normal

# Normal probability plot
qqnorm(m_full2$residuals)
qqline(m_full2$residuals)

# Histogram of residuals
hist(m_full2$residuals)

# Default lm diagnostic plots
plot(m_full2)

Conclusion: the residuals of the model are not quite normal, since the residual values in the upper and lower quantiles fall below what a normal distribution would predict.
Condition 2: the variability of the residuals is nearly constant

plot(abs(m_full2$residuals) ~ m_full2$fitted.values)

The absolute residuals show a roughly constant spread across the fitted values, apart from a few outliers.
Condition 3: the residuals are independent

plot(m_full2$residuals ~ c(1:nrow(evals)))

The residuals show no pattern against collection order, consistent with independent, randomly gathered observations.
Condition 4: each variable is linearly associated with the outcome

# Checking linearity
plot(jitter(evals$score) ~ evals$bty_avg)
plot(jitter(evals$score) ~ evals$gender)
plot(evals$score ~ evals$ethnicity)
plot(evals$score ~ evals$language)
plot(evals$score ~ evals$age)
plot(evals$score ~ evals$cls_perc_eval)
plot(evals$score ~ evals$cls_credits)
plot(evals$score ~ evals$pic_color)

The variables above are more or less linearly related to the score.
The courses are independent of each other, so the evaluation scores from one course are treated as independent of those from another, even when the courses are taught by the same professor. Hence this new information has no impact on the conditions.
The highest-rated professor would be a young male teaching a one-credit class; he would not belong to a minority group and would have received his degree from a university where English is the primary language. His picture would be black and white, and he would have a high average beauty rating from his student raters.
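As an illustration (an optional sketch only; the factor level labels below are assumptions that should be checked with levels() on each variable before running), the final model can be asked for the predicted score of such a professor:

# Hypothetical high-scoring professor built from the m_full2 coefficients;
# the level names here are assumed to match the data frame.
new_prof <- data.frame(ethnicity = "not minority", gender = "male",
                       language = "english", age = 30, cls_perc_eval = 90,
                       cls_credits = "one credit", bty_avg = 8,
                       pic_color = "black&white")
predict(m_full2, newdata = new_prof)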
No. This was not conducted as an experiment, and the beauty ratings come from only six students at a single university, which is too small a panel to generalize from. As cultural values change, these results may differ at other universities or in a different time frame, and the same caution applies to the other predictor variables; for example, picture preferences may be culturally biased.
This is a product of OpenIntro that is released under a Creative Commons Attribution-ShareAlike 3.0 Unported license. This lab was written by Mine Çetinkaya-Rundel and Andrew Bray.