download.file("http://www.openintro.org/stat/data/evals.RData", destfile = "evals.RData")
load("evals.RData")

Exploring the Data

Exercise 1. Is this an observational study or an experiment? The original research question posed in the paper is whether beauty leads directly to the differences in course evaluations. Given the study design, is it possible to answer this question as it is phrased? If not, rephrase the question. This is an observational study: the researchers recorded data on situations they did not control, so there are no control and experimental groups. As a result, the question cannot be answered as phrased, because an observational study can establish association but not that beauty leads directly to differences in evaluations. A better phrasing would be: is beauty associated with differences in course evaluations, after accounting for other factors such as age or ethnicity?

Exercise 2. Describe the distribution of score. Is the distribution skewed? What does that tell you about how students rate courses? Is this what you expected to see? Why, or why not? The distribution is left skewed and unimodal. This suggests that students tend to inflate evaluation scores, which is expected because people are generally reluctant to give negative feedback; a rating below “good,” such as neutral, is effectively treated as bad feedback.

# Histogram for score distribution
hist(evals$score, main= "Professor Evaluation Overall Scores Distribution", xlab= "scores")
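
As a quick numeric complement to the histogram (a sketch, assuming evals is already loaded), the five-number summary shows the skew: with a left-skewed distribution the mean falls below the median.

# Five-number summary plus mean for the evaluation scores
summary(evals$score)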

Exercise 3. Excluding score, select two other variables and describe their relationship using an appropriate visualization (scatterplot, side-by-side boxplots, or mosaic plot). Based on the plot, tenure-track professors have, on average, a higher percentage of students completing evaluations than professors of teaching rank, and the gap is even larger compared to tenured professors.

# Side-by-side boxplots: rank is a factor, so plot() produces boxplots
plot(evals$rank, evals$cls_perc_eval, ylab= "% of Students", xlab= "Rank", main= "Percent of Students in Class Who Completed Evaluation vs. Rank")

Simple Linear Regression

# Plot addressing the central study question: does beauty influence evaluation scores?
plot(evals$score ~ evals$bty_avg, xlab= "Beauty Average", ylab= "Score", main= "Does Beauty Affect Professor's Evaluations?")

Exercise 4. Replot the scatterplot, but this time use the function jitter() on the y- or the x-coordinate. (Use ?jitter to learn more.) What was misleading about the initial scatterplot? According to its documentation, jitter() adds a small amount of noise to a numeric vector. The initial scatterplot was misleading because many points overlap exactly; jittering reveals the repeated values that were hidden by overplotting.

# Same plot with jitter() applied to bty_avg to reveal overplotted points
plot(evals$score ~ jitter(evals$bty_avg), xlab= "Beauty Average", ylab= "Score", main= "Does Beauty Affect Professor's Evaluations?")

Exercise 5. Let’s see if the apparent trend in the plot is something more than natural variation. Fit a linear model called m_bty to predict average professor score by average beauty rating and add the line to your plot using abline(m_bty). Write out the equation for the linear model and interpret the slope. Is average beauty score a statistically significant predictor? Does it appear to be a practically significant predictor? The equation for the linear model is \(\hat{y} = 3.88034 + 0.06664 \times bty\_avg\). The slope means that each one-point increase in average beauty rating is associated with a predicted increase of 0.06664 points in evaluation score. With a p-value of 5.08e-05, average beauty score is a statistically significant predictor; however, the slope is so small that it does not appear to be a practically significant one, which the nearly flat line in the scatterplot confirms.

#Plot
plot(evals$score ~ jitter(evals$bty_avg), xlab= "Beauty Average", ylab= "Score", main= "Does Beauty Affect Professor's Evaluations?")
# Linear model
m_bty <- lm(score ~ bty_avg, data = evals)
#Linear model in scatterplot
abline(m_bty)

#To get equation we need summary for the linear model
summary(m_bty)
## 
## Call:
## lm(formula = score ~ bty_avg, data = evals)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.9246 -0.3690  0.1420  0.3977  0.9309 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  3.88034    0.07614   50.96  < 2e-16 ***
## bty_avg      0.06664    0.01629    4.09 5.08e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.5348 on 461 degrees of freedom
## Multiple R-squared:  0.03502,    Adjusted R-squared:  0.03293 
## F-statistic: 16.73 on 1 and 461 DF,  p-value: 5.083e-05

Exercise 6. Use residual plots to evaluate whether the conditions of least squares regression are reasonable. Provide plots and comments for each one (see the Simple Regression Lab for a reminder of how to make these). Based on the plots, the conditions for least squares regression are reasonable.

Residuals vs. Fitted (linearity): The points are fairly evenly spread around the horizontal line with no clear pattern, so the linearity condition is met.
Normal Q-Q (normality): The residuals follow the line; even though they deviate from it at the tails, the normality condition is still reasonably met.
Scale-Location (constant variance): The residuals are spread fairly evenly around the horizontal line, and the line has no significant slope, so the equal variance condition is met.
Residuals vs. Leverage (influential outliers): The plot labels three observations (#227, #335, #337) as potential outliers, but all points fall within the Cook’s distance thresholds, which do not even appear on the plot, so none are influential. This condition is met as well.
# Arrange the four diagnostic plots in a 2x2 grid
par(mfrow=c(2,2))
plot(m_bty)

Multiple Linear Regression

# Scatterplot of the average beauty score against one of the individual ratings (bty_f1lower)
plot(evals$bty_avg ~ evals$bty_f1lower, xlab= "Lower-Level Female Student Beauty Rating", ylab= "Beauty Average", main= "Individual Beauty Rating vs. Beauty Average Score")

cor(evals$bty_avg, evals$bty_f1lower)
## [1] 0.8439112
# Scatterplot matrix of the relationships among all the beauty variables
plot(evals[,13:19])

# Linear model accounting for gender and beauty scores
m_bty_gen <- lm(score ~ bty_avg + gender, data = evals)
summary(m_bty_gen)
## 
## Call:
## lm(formula = score ~ bty_avg + gender, data = evals)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.8305 -0.3625  0.1055  0.4213  0.9314 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  3.74734    0.08466  44.266  < 2e-16 ***
## bty_avg      0.07416    0.01625   4.563 6.48e-06 ***
## gendermale   0.17239    0.05022   3.433 0.000652 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.5287 on 460 degrees of freedom
## Multiple R-squared:  0.05912,    Adjusted R-squared:  0.05503 
## F-statistic: 14.45 on 2 and 460 DF,  p-value: 8.177e-07

Exercise 7. P-values and parameter estimates should only be trusted if the conditions for the regression are reasonable. Verify that the conditions for this model are reasonable using diagnostic plots.

Residuals vs. Fitted (linearity): The points are fairly evenly spread around the horizontal line with no clear pattern, so the linearity condition is met.
Normal Q-Q (normality): The residuals follow the line; even though they deviate from it at the tails, the normality condition is still reasonably met.
Scale-Location (constant variance): The residuals are spread fairly evenly around the horizontal line, and the line has no significant slope, so the equal variance condition is met.
Residuals vs. Leverage (influential outliers): The plot identifies no influential outliers; all points fall within the Cook’s distance thresholds, which do not even appear on the plot. This condition is met as well.
# Arrange the four diagnostic plots in a 2x2 grid
par(mfrow=c(2,2))
plot(m_bty_gen)

Exercise 8. Is bty_avg still a significant predictor of score? Has the addition of gender to the model changed the parameter estimate for bty_avg? Beauty is still a statistically significant predictor of score. Adding gender changed the parameter estimate for bty_avg only slightly, from 0.06664 to 0.07416.
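
A quick side-by-side comparison of the bty_avg estimate from the two fitted models above:

# Compare the bty_avg coefficient before and after adding gender
c(without_gender = coef(m_bty)["bty_avg"], with_gender = coef(m_bty_gen)["bty_avg"])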

Exercise 9. What is the equation of the line corresponding to males? (Hint: For males, the parameter estimate is multiplied by 1.) For two professors who received the same beauty rating, which gender tends to have the higher course evaluation score? The full equation is \(\hat{score} = 3.74734 + (0.07416 \times bty\_avg) + (0.17239 \times gendermale)\). For males the variable gendermale equals one, so the equation becomes \(\hat{score} = 3.91973 + (0.07416 \times bty\_avg)\). Based on this equation, for two professors with the same beauty rating, the male tends to have the higher course evaluation score.
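
The male intercept can be checked directly from the fitted coefficients (using the m_bty_gen model above):

# Intercept of the male line: baseline intercept plus the gendermale offset
coef(m_bty_gen)["(Intercept)"] + coef(m_bty_gen)["gendermale"]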

# Plot of the gender-specific regression lines; multiLines() comes from the openintro package
library(openintro)
multiLines(m_bty_gen, xlab= "Beauty Average", main= "Beauty Average vs. Score by Gender")

summary(m_bty_gen)
## 
## Call:
## lm(formula = score ~ bty_avg + gender, data = evals)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.8305 -0.3625  0.1055  0.4213  0.9314 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  3.74734    0.08466  44.266  < 2e-16 ***
## bty_avg      0.07416    0.01625   4.563 6.48e-06 ***
## gendermale   0.17239    0.05022   3.433 0.000652 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.5287 on 460 degrees of freedom
## Multiple R-squared:  0.05912,    Adjusted R-squared:  0.05503 
## F-statistic: 14.45 on 2 and 460 DF,  p-value: 8.177e-07

Exercise 10. Create a new model called m_bty_rank with gender removed and rank added in. How does R appear to handle categorical variables that have more than two levels? Note that the rank variable has three levels: teaching, tenure track, tenured. R creates an indicator (dummy) variable for each level except one, which serves as the reference level. Here teaching is the reference, so the summary shows coefficients only for “tenure track” and “tenured,” in addition to bty_avg.
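
The dummy coding can be inspected directly (assuming evals$rank is stored as a factor):

# Levels of rank and the treatment (dummy) coding R builds for it;
# the first level, teaching, is the reference and gets no column
levels(evals$rank)
contrasts(evals$rank)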

# Linear model accounting for rank and beauty scores
m_bty_rank <- lm(score ~ bty_avg + rank, data = evals)
summary(m_bty_rank)
## 
## Call:
## lm(formula = score ~ bty_avg + rank, data = evals)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.8713 -0.3642  0.1489  0.4103  0.9525 
## 
## Coefficients:
##                  Estimate Std. Error t value Pr(>|t|)    
## (Intercept)       3.98155    0.09078  43.860  < 2e-16 ***
## bty_avg           0.06783    0.01655   4.098 4.92e-05 ***
## ranktenure track -0.16070    0.07395  -2.173   0.0303 *  
## ranktenured      -0.12623    0.06266  -2.014   0.0445 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.5328 on 459 degrees of freedom
## Multiple R-squared:  0.04652,    Adjusted R-squared:  0.04029 
## F-statistic: 7.465 on 3 and 459 DF,  p-value: 6.88e-05

The Search for The Best Model

Exercise 11. Which variable would you expect to have the highest p-value in this model? Why? Hint: Think about which variable would you expect to not have any association with the professor score. I would expect “pic_color” to have the highest p-value, because whether the professor’s photo is in color or black and white should have no real association with the professor score.

Exercise 12. Check your suspicions from the previous exercise. Include the model output in your response. According to the model, the variable with the highest p-value is actually “cls_profs” (its coefficient cls_profssingle has p = 0.778), showing no meaningful association with the professor score.

m_full <- lm(score ~ rank + ethnicity + gender + language + age + cls_perc_eval 
             + cls_students + cls_level + cls_profs + cls_credits + bty_avg 
             + pic_outfit + pic_color, data = evals)
summary(m_full)
## 
## Call:
## lm(formula = score ~ rank + ethnicity + gender + language + age + 
##     cls_perc_eval + cls_students + cls_level + cls_profs + cls_credits + 
##     bty_avg + pic_outfit + pic_color, data = evals)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.77397 -0.32432  0.09067  0.35183  0.95036 
## 
## Coefficients:
##                         Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            4.0952141  0.2905277  14.096  < 2e-16 ***
## ranktenure track      -0.1475932  0.0820671  -1.798  0.07278 .  
## ranktenured           -0.0973378  0.0663296  -1.467  0.14295    
## ethnicitynot minority  0.1234929  0.0786273   1.571  0.11698    
## gendermale             0.2109481  0.0518230   4.071 5.54e-05 ***
## languagenon-english   -0.2298112  0.1113754  -2.063  0.03965 *  
## age                   -0.0090072  0.0031359  -2.872  0.00427 ** 
## cls_perc_eval          0.0053272  0.0015393   3.461  0.00059 ***
## cls_students           0.0004546  0.0003774   1.205  0.22896    
## cls_levelupper         0.0605140  0.0575617   1.051  0.29369    
## cls_profssingle       -0.0146619  0.0519885  -0.282  0.77806    
## cls_creditsone credit  0.5020432  0.1159388   4.330 1.84e-05 ***
## bty_avg                0.0400333  0.0175064   2.287  0.02267 *  
## pic_outfitnot formal  -0.1126817  0.0738800  -1.525  0.12792    
## pic_colorcolor        -0.2172630  0.0715021  -3.039  0.00252 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.498 on 448 degrees of freedom
## Multiple R-squared:  0.1871, Adjusted R-squared:  0.1617 
## F-statistic: 7.366 on 14 and 448 DF,  p-value: 6.552e-14
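
The largest p-value can also be located programmatically from the fitted m_full:

# Coefficient with the largest p-value in the full model (intercept excluded)
p_vals <- summary(m_full)$coefficients[-1, "Pr(>|t|)"]
p_vals[which.max(p_vals)]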

Exercise 13. Interpret the coefficient associated with the ethnicity variable. The model encodes ethnicity as a dummy variable, ethnicitynot minority, which equals 1 if the professor is not a minority and 0 otherwise. Its coefficient (0.1235) means that, holding all other variables constant, professors who are not minorities are predicted to score about 0.12 points higher than minority professors.
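
The estimated gap can be read straight from the fitted model:

# Predicted score difference for non-minority vs. minority professors,
# holding all other predictors fixed
coef(m_full)["ethnicitynot minority"]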

Exercise 14. Drop the variable with the highest p-value and re-fit the model. Did the coefficients and significance of the other explanatory variables change? (One of the things that makes multiple regression interesting is that coefficient estimates depend on the other variables that are included in the model.) If not, what does this say about whether or not the dropped variable was collinear with the other explanatory variables? After dropping “cls_profs” out of the model, the coefficients and significance of the other variables changed only slightly; some estimates increased in value and others decreased. Such small changes suggest that the dropped variable was not collinear with the other explanatory variables; if it had been, removing it would have shifted the remaining estimates noticeably.

m_full2 <- lm(score ~ rank + ethnicity + gender + language + age + cls_perc_eval 
             + cls_students + cls_level + cls_credits + bty_avg 
             + pic_outfit + pic_color, data = evals)
summary(m_full2)
## 
## Call:
## lm(formula = score ~ rank + ethnicity + gender + language + age + 
##     cls_perc_eval + cls_students + cls_level + cls_credits + 
##     bty_avg + pic_outfit + pic_color, data = evals)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.7836 -0.3257  0.0859  0.3513  0.9551 
## 
## Coefficients:
##                         Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            4.0872523  0.2888562  14.150  < 2e-16 ***
## ranktenure track      -0.1476746  0.0819824  -1.801 0.072327 .  
## ranktenured           -0.0973829  0.0662614  -1.470 0.142349    
## ethnicitynot minority  0.1274458  0.0772887   1.649 0.099856 .  
## gendermale             0.2101231  0.0516873   4.065 5.66e-05 ***
## languagenon-english   -0.2282894  0.1111305  -2.054 0.040530 *  
## age                   -0.0089992  0.0031326  -2.873 0.004262 ** 
## cls_perc_eval          0.0052888  0.0015317   3.453 0.000607 ***
## cls_students           0.0004687  0.0003737   1.254 0.210384    
## cls_levelupper         0.0606374  0.0575010   1.055 0.292200    
## cls_creditsone credit  0.5061196  0.1149163   4.404 1.33e-05 ***
## bty_avg                0.0398629  0.0174780   2.281 0.023032 *  
## pic_outfitnot formal  -0.1083227  0.0721711  -1.501 0.134080    
## pic_colorcolor        -0.2190527  0.0711469  -3.079 0.002205 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4974 on 449 degrees of freedom
## Multiple R-squared:  0.187,  Adjusted R-squared:  0.1634 
## F-statistic: 7.943 on 13 and 449 DF,  p-value: 2.336e-14
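
One way to see how little the estimates moved is to difference the coefficients the two fitted models share:

# Change in each shared coefficient after dropping cls_profs
shared <- intersect(names(coef(m_full)), names(coef(m_full2)))
round(coef(m_full)[shared] - coef(m_full2)[shared], 4)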

Exercise 15. Using backward-selection and p-value as the selection criterion, determine the best model. You do not need to show all steps in your answer, just the output for the final model. Also, write out the linear model for predicting score based on the final model you settle on. The final model consists of six variables, as shown in the equation below. Variables were removed one at a time based on their p-values, along with their reasonable relationship with the purpose of the model; after each removal, the model was refit to check whether dropping the variable influenced the rest of the model.

\[\hat{score} = 3.409832 + 0.240897* ethnicity\_not\_minority + 0.182259* gender\_male - 0.005087* age\]

\[+ 0.005107 * cls\_perc\_eval + 0.532266 * cls\_credits\_one\_credit + 0.064891 * bty\_avg\]

m_full3 <- lm(score ~ ethnicity + gender + age + cls_perc_eval + cls_credits + bty_avg, data = evals)
summary(m_full3)
## 
## Call:
## lm(formula = score ~ ethnicity + gender + age + cls_perc_eval + 
##     cls_credits + bty_avg, data = evals)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.90305 -0.32025  0.08687  0.37799  1.06885 
## 
## Coefficients:
##                        Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            3.409832   0.202120  16.870  < 2e-16 ***
## ethnicitynot minority  0.240897   0.071151   3.386 0.000771 ***
## gendermale             0.182259   0.049942   3.649 0.000293 ***
## age                   -0.005087   0.002610  -1.949 0.051873 .  
## cls_perc_eval          0.005107   0.001440   3.547 0.000430 ***
## cls_creditsone credit  0.532266   0.104448   5.096 5.09e-07 ***
## bty_avg                0.064891   0.016353   3.968 8.41e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.5038 on 456 degrees of freedom
## Multiple R-squared:  0.153,  Adjusted R-squared:  0.1419 
## F-statistic: 13.73 on 6 and 456 DF,  p-value: 2.322e-14
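
For reference, the elimination can also be automated. The sketch below is an illustration, not the exact manual process used here: it repeatedly drops the least significant term using drop1() F-tests, which treat a multi-level factor such as rank as a single term. A purely p-value-driven procedure may keep or drop different variables than a manual selection that also weighs substantive relevance.

# Backward elimination by p-value: refit until every remaining term is significant
backward_p <- function(fit, alpha = 0.05) {
  repeat {
    tests <- drop1(fit, test = "F")        # F-test for dropping each term
    p <- tests[["Pr(>F)"]][-1]             # first row is <none>, skip it
    terms <- rownames(tests)[-1]
    if (all(p < alpha)) break              # stop when every term is significant
    worst <- terms[which.max(p)]           # least significant term
    fit <- update(fit, as.formula(paste(". ~ . -", worst)))
  }
  fit
}
m_back <- backward_p(m_full)
summary(m_back)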

Exercise 16. Verify that the conditions for this model are reasonable using diagnostic plots.

Residuals vs. Fitted (linearity): The points are fairly evenly spread around the horizontal line with no clear pattern; the line dips slightly toward the right end, but no trend emerges, so the linearity condition is met.
Normal Q-Q (normality): The residuals follow the line; even though they deviate from it at the tails, the normality condition is still reasonably met.
Scale-Location (constant variance): The residuals are spread fairly evenly around the horizontal line; the line trends slightly downward toward the end but remains roughly horizontal, so the equal variance condition is met.
Residuals vs. Leverage (influential outliers): The plot labels observations #159 and #125 as potential outliers, but both fall within the Cook’s distance thresholds, which do not even appear on the plot, so neither is influential and removing them is not necessary. This condition is met as well.
# Arrange the four diagnostic plots in a 2x2 grid
par(mfrow=c(2,2))
plot(m_full3)

Exercise 17. The original paper describes how these data were gathered by taking a sample of professors from the University of Texas at Austin and including all courses that they have taught. Considering that each row represents a course, could this new information have an impact on any of the conditions of linear regression? Yes. Because the same professor appears in multiple rows, the observations are not independent, which violates the independence condition of linear regression. In addition, courses differ: some classes are easier to teach than others, and some core classes might get a lower score compared to classes in a student’s major.
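
A rough check of how many rows share a professor (evals has no professor ID column, so this assumes rows with identical age, gender, and average beauty rating belong to the same professor):

# Approximate professor key, then tabulate courses per (approximate) professor
prof_key <- paste(evals$age, evals$gender, evals$bty_avg)
table(table(prof_key))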

Exercise 18. Based on your final model, describe the characteristics of a professor and course at University of Texas at Austin that would be associated with a high evaluation score. Based on the final model, a professor would tend to have a high score if he or she is not a minority, is male, is young (age has a negative coefficient), teaches a one-credit class, has a high percentage of students completing the evaluations, and has a high average beauty rating.
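
As an illustration, predict() returns the fitted score for such a professor (the profile values below are hypothetical, chosen only to match the high-score characteristics):

# Hypothetical professor and course matching the high-score profile
new_prof <- data.frame(ethnicity = "not minority", gender = "male", age = 35,
                       cls_perc_eval = 95, cls_credits = "one credit", bty_avg = 7)
predict(m_full3, newdata = new_prof)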

Exercise 19. Would you be comfortable generalizing your conclusions to apply to professors generally (at any university)? Why or why not? Overall yes, but with caution: the sample comes from a single institution, so the model would need to be revised and validated against data from other universities, whose student populations and evaluation practices may differ.


Documentation Statement:

  • On 05/13/2020 I used the files uploaded on Blackboard for the milestone.
  • On 14 May 2020, Capt Forbes answered questions regarding the diagnostic plots; he advised me to use the article cited in the references to better understand the diagnostic plots, and he showed me the code for producing them.

References: Kim, Bommae. “Understanding Diagnostic Plots for Linear Regression Analysis.” University of Virginia Library, 21 Sept. 2015, https://data.library.virginia.edu/diagnostic-plots/. Accessed 14 May 2020.