load("more/evals.RData")This is an observational study. The question should be rephrased as ’is the beauty of a professor a predictor of score in course evaluations?
Describe the distribution of score. Is the distribution skewed? What does that tell you about how students rate courses? Is this what you expected to see? Why, or why not?
library(tidyverse)
ggplot(evals, aes(x = score)) +
  geom_histogram(aes(fill = ..count..), alpha = 0.2) +
  scale_fill_gradient("Count", low = "blue", high = "red") +
  theme_classic() +
  theme(legend.position = "none")
The distribution is left-skewed: positive evaluations are far more frequent than negative ones. This was not expected; a roughly symmetric (normal) distribution was anticipated, with fewer strongly positive evaluations.
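As a quick numeric check of this skew (a minimal sketch using only base R; the printed values are not reproduced here), the mean of a left-skewed variable lies below its median:
summary(evals$score)
# For a left-skewed distribution the mean falls below the median
mean(evals$score) < median(evals$score)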
Excluding score, select two other variables and describe their relationship using an appropriate visualization (scatterplot, side-by-side boxplots, or mosaic plot).
evals$agecat[evals$age <= 40] <- 1
evals$agecat[evals$age > 40 & evals$age <= 50] <- 2
evals$agecat[evals$age > 50 & evals$age <= 60] <- 3
evals$agecat[evals$age > 60] <- 4
evals$agecat <- as.numeric(evals$agecat)
ggplot(evals, aes(y = bty_avg, x = as.factor(agecat))) +
  geom_boxplot(color = "black", fill = "#80bfff", alpha = 0.2) +
  theme_classic() +
  theme(legend.position = "none") +
  geom_text(aes(label = ..count.., y = ..prop..), stat = "count", vjust = -.5) +
  scale_x_discrete(labels = c("1" = "40 or younger",
                              "2" = "41 - 50",
                              "3" = "51 - 60",
                              "4" = "61 or older"))
When professors are grouped by age range, the median beauty rating declines as age increases.
plot(evals$score ~ evals$bty_avg)
Before we draw conclusions about the trend, compare the number of observations in the data frame with the approximate number of points on the scatterplot. Is anything awry?
Many plotted points are superimposed on one another, so the scatterplot appears to show far fewer points than there are observations in the data frame.
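A quick check (a sketch using dplyr, which is loaded with the tidyverse) compares the number of rows with the number of distinct (bty_avg, score) pairs that can actually appear on the plot:
# Rows in the data versus distinct (bty_avg, score) combinations visible on the plot
nrow(evals)
nrow(distinct(evals, bty_avg, score))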
Replot the scatterplot, but this time use jitter() on the \(y\)- or the \(x\)-coordinate. (Use ?jitter to learn more.) What was misleading about the initial scatterplot?
ggplot(evals, aes(bty_avg, score)) +
  geom_point(position = position_jitter(w = 0.1, h = 0.1)) +
  theme_classic() +
  ylab("score") +
  xlab("beauty")
The initial scatterplot was misleading because overplotting hid how many observations share the same coordinates, understating the amount of data behind the trend.
Fit a linear model called m_bty to predict average professor score by average beauty rating and add the line to your plot using abline(m_bty). Write out the equation for the linear model and interpret the slope. Is average beauty score a statistically significant predictor? Does it appear to be a practically significant predictor?
m_bty <- lm(evals$score ~ evals$bty_avg)
ggplot(evals, aes(bty_avg, score)) +
  geom_point(position = position_jitter(w = 0.1, h = 0.1)) +
  geom_abline(intercept = coef(m_bty)["(Intercept)"],
              slope = coef(m_bty)["evals$bty_avg"]) +
  theme_classic() +
  ylab("score") +
  xlab("beauty")
The equation is \(\hat{y}\) = 3.880338 + 0.066637(bty_avg).
summary(m_bty)
##
## Call:
## lm(formula = evals$score ~ evals$bty_avg)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.9246 -0.3690 0.1420 0.3977 0.9309
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 3.88034 0.07614 50.96 < 2e-16 ***
## evals$bty_avg 0.06664 0.01629 4.09 5.08e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.5348 on 461 degrees of freedom
## Multiple R-squared: 0.03502, Adjusted R-squared: 0.03293
## F-statistic: 16.73 on 1 and 461 DF, p-value: 5.083e-05
bty_avg is indeed a statistically significant predictor of evaluation score (p < 0.001); however, the slope indicates that each one-point increase in average beauty rating is associated with only a 0.06664-point increase in evaluation score, so it is of limited practical significance.
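To put the practical effect in perspective, the sketch below (an illustration, not part of the original lab output) estimates the predicted change in score across the full observed range of bty_avg:
# Predicted change in score moving from the lowest to the highest observed beauty rating
coef(m_bty)["evals$bty_avg"] * diff(range(evals$bty_avg))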
ggplot(evals, aes(y = m_bty$residuals, x = evals$bty_avg)) +
  geom_point() +
  geom_abline(intercept = 0, slope = 0) +
  theme_classic()
ggplot(evals, aes(x = m_bty$residuals)) +
  geom_histogram(aes(fill = ..count..), alpha = 0.2) +
  theme_classic() +
  theme(legend.position = "none")
ggplot(evals, aes(sample = m_bty$residuals)) +
  stat_qq() +
  stat_qq_line() +
  theme_classic()
While the residual plot is consistent with constant variability, the histogram and the normal probability plot both suggest that the residuals deviate from a normal distribution (they are left-skewed).
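Base R offers a compact alternative for the same checks; this sketch draws the residuals-versus-fitted and normal Q-Q diagnostics directly from the fitted model object:
# Built-in lm diagnostics: residuals vs fitted (which = 1) and normal Q-Q (which = 2)
par(mfrow = c(1, 2))
plot(m_bty, which = 1:2)
par(mfrow = c(1, 1))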
m_bty_gen <- lm(score ~ bty_avg + gender, data = evals)
summary(m_bty_gen)
##
## Call:
## lm(formula = score ~ bty_avg + gender, data = evals)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.8305 -0.3625 0.1055 0.4213 0.9314
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 3.74734 0.08466 44.266 < 2e-16 ***
## bty_avg 0.07416 0.01625 4.563 6.48e-06 ***
## gendermale 0.17239 0.05022 3.433 0.000652 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.5287 on 460 degrees of freedom
## Multiple R-squared: 0.05912, Adjusted R-squared: 0.05503
## F-statistic: 14.45 on 2 and 460 DF, p-value: 8.177e-07
ggplot(evals, aes(y = m_bty_gen$residuals, x = m_bty_gen$fitted.values)) +
  geom_point() +
  geom_abline(intercept = 0, slope = 0) +
  theme_classic()
The plot of residuals against fitted values indicates roughly constant variance.
ggplot(evals, aes(y = m_bty_gen$residuals, x = evals$bty_avg)) +
  geom_point() +
  geom_abline(intercept = 0, slope = 0) +
  theme_classic()
Plotted against bty_avg, the residual variance also appears constant.
ggplot(evals, aes(sample = m_bty_gen$residuals)) +
  stat_qq() +
  stat_qq_line() +
  theme_classic()
The distribution of the residuals is approximately normal, so the conditions for this regression are reasonable.
Is bty_avg still a significant predictor of score? Has the addition of gender to the model changed the parameter estimate for bty_avg?
The inclusion of gender improved the model; bty_avg is still a significant predictor of score, and its parameter estimate increased slightly (from 0.06664 to 0.07416).
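A quick side-by-side of the two fits (a sketch) makes the change in the bty_avg estimate explicit:
# Compare the beauty-rating estimate before and after adding gender
coef(m_bty)      # simple model: coefficient named "evals$bty_avg"
coef(m_bty_gen)  # model with gender: coefficients "bty_avg" and "gendermale"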
multiLines(m_bty_gen)
\[ \begin{aligned} \widehat{score} &= \hat{\beta}_0 + \hat{\beta}_1 \times bty\_avg + \hat{\beta}_2 \times (1) \\ &= \hat{\beta}_0 + \hat{\beta}_1 \times bty\_avg + \hat{\beta}_2 \end{aligned} \]
The additional term \(\hat{\beta}_2\) implies that, holding beauty rating constant, male professors are predicted to receive higher course evaluation scores than female professors.
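As an illustration (a sketch; the beauty value of 5 is arbitrary), the predicted scores for a female and a male professor with the same average beauty rating differ by exactly the gendermale coefficient:
# Predicted scores at the same beauty rating, differing only by gender
new_prof <- data.frame(bty_avg = c(5, 5), gender = c("female", "male"))
predict(m_bty_gen, newdata = new_prof)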
Create a new model called m_bty_rank with gender removed and rank added in. How does R appear to handle categorical variables that have more than two levels? Note that the rank variable has three levels: teaching, tenure track, tenured.
m_bty_rank <- lm(score ~ bty_avg + rank, data = evals)
summary(m_bty_rank)
##
## Call:
## lm(formula = score ~ bty_avg + rank, data = evals)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.8713 -0.3642 0.1489 0.4103 0.9525
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 3.98155 0.09078 43.860 < 2e-16 ***
## bty_avg 0.06783 0.01655 4.098 4.92e-05 ***
## ranktenure track -0.16070 0.07395 -2.173 0.0303 *
## ranktenured -0.12623 0.06266 -2.014 0.0445 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.5328 on 459 degrees of freedom
## Multiple R-squared: 0.04652, Adjusted R-squared: 0.04029
## F-statistic: 7.465 on 3 and 459 DF, p-value: 6.88e-05
multiLines(m_bty_rank)
R creates two indicator (dummy) variables, ranktenure track and ranktenured, and treats the remaining level, teaching, as the reference level.
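The dummy coding can be inspected directly; a short sketch (assuming rank is stored as a factor, as the coefficient names suggest) shows the indicator columns R builds for rank, with teaching as the baseline:
# Indicator columns created for the three-level rank variable (teaching is the baseline)
levels(evals$rank)
head(model.matrix(~ rank, data = evals))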
I would expect cls_profs to have the least association with evaluation score.
m_full <- lm(score ~ rank + ethnicity + gender + language + age + cls_perc_eval
+ cls_students + cls_level + cls_profs + cls_credits + bty_avg
+ pic_outfit + pic_color, data = evals)
summary(m_full)
cls_profs has the greatest p-value; its coefficient, cls_profssingle, has a p-value of 0.77806.
ggplot(evals, aes(y = score, x = as.factor(cls_profs))) +
  geom_boxplot(color = "black", fill = "#80bfff", alpha = 0.2) +
  theme_classic() +
  theme(legend.position = "none") +
  geom_text(aes(label = ..count.., y = ..prop..), stat = "count", vjust = -.5)
The coefficient for ethnicity indicates that professors not identified as a minority receive higher evaluation scores, but its p-value in the full model suggests the association is of negligible significance.
m_full2 <- lm(score ~ rank + ethnicity + gender + language + age + cls_perc_eval
+ cls_students + cls_level + cls_credits + bty_avg
+ pic_outfit + pic_color, data = evals)
summary(m_full2)
##
## Call:
## lm(formula = score ~ rank + ethnicity + gender + language + age +
## cls_perc_eval + cls_students + cls_level + cls_credits +
## bty_avg + pic_outfit + pic_color, data = evals)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.7836 -0.3257 0.0859 0.3513 0.9551
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.0872523 0.2888562 14.150 < 2e-16 ***
## ranktenure track -0.1476746 0.0819824 -1.801 0.072327 .
## ranktenured -0.0973829 0.0662614 -1.470 0.142349
## ethnicitynot minority 0.1274458 0.0772887 1.649 0.099856 .
## gendermale 0.2101231 0.0516873 4.065 5.66e-05 ***
## languagenon-english -0.2282894 0.1111305 -2.054 0.040530 *
## age -0.0089992 0.0031326 -2.873 0.004262 **
## cls_perc_eval 0.0052888 0.0015317 3.453 0.000607 ***
## cls_students 0.0004687 0.0003737 1.254 0.210384
## cls_levelupper 0.0606374 0.0575010 1.055 0.292200
## cls_creditsone credit 0.5061196 0.1149163 4.404 1.33e-05 ***
## bty_avg 0.0398629 0.0174780 2.281 0.023032 *
## pic_outfitnot formal -0.1083227 0.0721711 -1.501 0.134080
## pic_colorcolor -0.2190527 0.0711469 -3.079 0.002205 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.4974 on 449 degrees of freedom
## Multiple R-squared: 0.187, Adjusted R-squared: 0.1634
## F-statistic: 7.943 on 13 and 449 DF, p-value: 2.336e-14
After dropping cls_profs, the coefficients and significance levels of the remaining variables changed slightly, with most variables becoming somewhat more significant.
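One way to see the effect of dropping cls_profs (a sketch) is to compare the adjusted R-squared of the two fits:
# Adjusted R-squared before and after dropping cls_profs
summary(m_full)$adj.r.squared
summary(m_full2)$adj.r.squared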
m_full_final <- lm(score ~ ethnicity + gender + language + age + cls_perc_eval
+ cls_credits + bty_avg + pic_color, data = evals)
summary(m_full_final)
##
## Call:
## lm(formula = score ~ ethnicity + gender + language + age + cls_perc_eval +
## cls_credits + bty_avg + pic_color, data = evals)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.85320 -0.32394 0.09984 0.37930 0.93610
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 3.771922 0.232053 16.255 < 2e-16 ***
## ethnicitynot minority 0.167872 0.075275 2.230 0.02623 *
## gendermale 0.207112 0.050135 4.131 4.30e-05 ***
## languagenon-english -0.206178 0.103639 -1.989 0.04726 *
## age -0.006046 0.002612 -2.315 0.02108 *
## cls_perc_eval 0.004656 0.001435 3.244 0.00127 **
## cls_creditsone credit 0.505306 0.104119 4.853 1.67e-06 ***
## bty_avg 0.051069 0.016934 3.016 0.00271 **
## pic_colorcolor -0.190579 0.067351 -2.830 0.00487 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.4992 on 454 degrees of freedom
## Multiple R-squared: 0.1722, Adjusted R-squared: 0.1576
## F-statistic: 11.8 on 8 and 454 DF, p-value: 2.58e-15
Equation: \(\widehat{score}\) = 3.7719215 + 0.1678723(ethnicitynot minority) + 0.2071121(gendermale) - 0.2061781(languagenon-english) - 0.0060459(age) + 0.0046559(cls_perc_eval) + 0.5053062(cls_creditsone credit) + 0.0510693(bty_avg) - 0.1905788(pic_colorcolor)
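As a usage illustration, the sketch below predicts a score for a hypothetical professor; the predictor values are made up, and the factor levels are assumed to match those shown in the summary output above (e.g. "not minority", "multi credit", "color"):
# Predicted score for a hypothetical professor (illustrative values only)
example_prof <- data.frame(
  ethnicity     = "not minority",
  gender        = "female",
  language      = "english",
  age           = 45,
  cls_perc_eval = 70,
  cls_credits   = "multi credit",
  bty_avg       = 5,
  pic_color     = "color"
)
predict(m_full_final, newdata = example_prof)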
ggplot(evals, aes(sample = m_full_final$residuals)) +
  stat_qq() +
  stat_qq_line() +
  theme_classic()
ggplot(evals, aes(y = m_full_final$residuals, x = m_full_final$fitted.values)) +
  geom_point() +
  theme_classic()
ggplot(evals, aes(y = score, x = as.factor(ethnicity))) +
  geom_boxplot(color = "black", fill = "#80bfff", alpha = 0.2) +
  theme_classic() +
  theme(legend.position = "none") +
  geom_text(aes(label = ..count.., y = ..prop..), stat = "count", vjust = -.5)
ggplot(evals, aes(y = score, x = as.factor(gender))) +
  geom_boxplot(color = "black", fill = "#80baaf", alpha = 0.2) +
  theme_classic() +
  theme(legend.position = "none") +
  geom_text(aes(label = ..count.., y = ..prop..), stat = "count", vjust = -.5)
ggplot(evals, aes(y = score, x = as.factor(language))) +
  geom_boxplot(color = "black", fill = "#60eaaf", alpha = 0.2) +
  theme_classic() +
  theme(legend.position = "none") +
  geom_text(aes(label = ..count.., y = ..prop..), stat = "count", vjust = -.5)
ggplot(evals, aes(y = score, x = age)) +
  geom_point(shape = 19, color = "blue", fill = "#efccff") +
  theme_classic() +
  theme(legend.position = "none") +
  scale_x_continuous(name = "age")
ggplot(evals, aes(y = score, x = cls_perc_eval)) +
  geom_point(shape = 19, color = "red", fill = "#efccff") +
  theme_classic() +
  theme(legend.position = "none")
ggplot(evals, aes(y = score, x = as.factor(cls_credits))) +
  geom_boxplot(color = "black", fill = "#60eaaf", alpha = 0.2) +
  theme_classic() +
  theme(legend.position = "none") +
  geom_text(aes(label = ..count.., y = ..prop..), stat = "count", vjust = -.5)
ggplot(evals, aes(y = score, x = bty_avg)) +
  geom_point(shape = 19, color = "gray", fill = "#efccff") +
  theme_classic() +
  theme(legend.position = "none")
ggplot(evals, aes(y = score, x = as.factor(pic_color))) +
  geom_boxplot(color = "black", fill = "#60ffff", alpha = 0.2) +
  theme_classic() +
  theme(legend.position = "none") +
  geom_text(aes(label = ..count.., y = ..prop..), stat = "count", vjust = -.5)
These diagnostic and exploratory plots do not suggest any violation of the conditions for linear regression.
Based on the final model, the highest scores are associated with younger, attractive (high average beauty rating) male professors who are not identified as a minority and who teach one-credit courses.
These conclusions would not necessarily apply to all universities and professors; differences in university demographics and student populations could substantially change the model.