download.file("http://www.openintro.org/stat/data/evals.RData", destfile = "evals.RData")
load("evals.RData")

#### Exercise 2. Describe the distribution of score. Is the distribution skewed? What does that tell you about how students rate courses? Is this what you expected to see? Why, or why not?

hist(evals$score)
##### The histogram of score is left skewed. This is not what I expected; I was expecting a roughly normal distribution. The left skew suggests that students tend to rate courses highly, with relatively few very low ratings.

#### Exercise 3. Excluding score, select two other variables and describe their relationship using an appropriate visualization (scatterplot, side-by-side boxplots, or mosaic plot).

boxplot(evals$age ~ evals$bty_f2upper)
##### Based on the side-by-side boxplots of age by bty_f2upper, there does not appear to be any clear relationship between the two variables.

### Simple linear regression

plot(evals$score ~ evals$bty_avg)

#### Exercise 4. Replot the scatterplot, but this time use the function jitter() on the y- or the x-coordinate. (Use ?jitter to learn more.) What was misleading about the initial scatterplot?

?jitter
## starting httpd help server ... done

plot(jitter(evals$score) ~ jitter(evals$bty_avg))
##### The initial scatterplot was misleading because of overplotting: many observations share the same coordinates and are drawn on top of one another, so the jittered plot reveals many more points than the original appeared to have.

#### Exercise 5. Let's see if the apparent trend in the plot is something more than natural variation. Fit a linear model called m_bty to predict average professor score by average beauty rating and add the line to your plot using abline(m_bty). Write out the equation for the linear model and interpret the slope. Is average beauty score a statistically significant predictor? Does it appear to be a practically significant predictor?

m_bty <- lm(evals$score ~ evals$bty_avg)
plot(jitter(evals$score) ~ jitter(evals$bty_avg))
abline(m_bty)
summary(m_bty)
##
## Call:
## lm(formula = evals$score ~ evals$bty_avg)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -1.9246 -0.3690  0.1420  0.3977  0.9309
##
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)
## (Intercept)    3.88034    0.07614   50.96  < 2e-16 ***
## evals$bty_avg  0.06664    0.01629    4.09 5.08e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.5348 on 461 degrees of freedom
## Multiple R-squared:  0.03502,    Adjusted R-squared:  0.03293
## F-statistic: 16.73 on 1 and 461 DF,  p-value: 5.083e-05

##### The fitted equation is predicted score = 3.8803 + 0.0666 * bty_avg. The slope means a one-point increase in average beauty rating is associated with a 0.0666-point increase in average evaluation score. With a p-value of about 5e-05, bty_avg is statistically significant; but with an R-squared of only about 0.035, it does not appear to be a practically significant predictor.
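##### A small sketch (not part of the original lab) of using the fitted equation directly: pull the coefficients with coef() and predict the score for a hypothetical professor with bty_avg = 5.
b <- coef(m_bty)  # b[1] is the intercept, b[2] is the slope
b[1] + b[2] * 5   # predicted score at bty_avg = 5, about 4.21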

#### Exercise 6. Use residual plots to evaluate whether the conditions of least squares regression are reasonable. Provide plots and comments for each one (see the Simple Regression Lab for a reminder of how to make these).

plot(m_bty$residuals ~ evals$bty_avg)
abline(h = 0)
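##### As an aside (not required by the exercise), R's built-in plot method for lm objects produces a standard set of diagnostic plots; a sketch:
plot(m_bty, which = 1)  # residuals vs. fitted values
plot(m_bty, which = 2)  # normal Q-Q plot of the residuals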

hist(m_bty$residuals)

qqnorm(m_bty$residuals)
qqline(m_bty$residuals)
##### The residual plot shows no obvious pattern, but the histogram shows that the residuals are left skewed, and the normal Q-Q plot supports this. The model is usable, but the nearly-normal-residuals condition is only weakly met because of the skew.

### Multiple linear regression

plot(evals$bty_avg ~ evals$bty_f1lower)
cor(evals$bty_avg, evals$bty_f1lower)
## [1] 0.8439112
plot(evals[, 13:19])

m_bty_gen <- lm(score ~ bty_avg + gender, data = evals)
summary(m_bty_gen)
##
## Call:
## lm(formula = score ~ bty_avg + gender, data = evals)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -1.8305 -0.3625  0.1055  0.4213  0.9314
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)  3.74734    0.08466  44.266  < 2e-16 ***
## bty_avg      0.07416    0.01625   4.563 6.48e-06 ***
## gendermale   0.17239    0.05022   3.433 0.000652 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.5287 on 460 degrees of freedom
## Multiple R-squared:  0.05912,   Adjusted R-squared:  0.05503
## F-statistic: 14.45 on 2 and 460 DF,  p-value: 8.177e-07

#### Exercise 7. P-values and parameter estimates should only be trusted if the conditions for the regression are reasonable. Verify that the conditions for this model are reasonable using diagnostic plots.

m_bty_gen <- lm(evals$score ~ evals$bty_avg + evals$gender)
qqnorm(m_bty_gen$residuals)
qqline(m_bty_gen$residuals)

plot(m_bty_gen$residuals ~ evals$bty_avg)
abline(h=0)

plot(evals$score ~ evals$gender)
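##### A numeric companion to the boxplots above (a sketch, not required by the exercise): the constant-variability condition implies the residual spread should be similar across the two gender groups.
tapply(m_bty_gen$residuals, evals$gender, sd)  # standard deviation of residuals by group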

#### Exercise 8. Is bty_avg still a significant predictor of score? Has the addition of gender to the model changed the parameter estimate for bty_avg?

summary(m_bty_gen)
##
## Call:
## lm(formula = evals$score ~ evals$bty_avg + evals$gender)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -1.8305 -0.3625  0.1055  0.4213  0.9314
##
## Coefficients:
##                  Estimate Std. Error t value Pr(>|t|)
## (Intercept)       3.74734    0.08466  44.266  < 2e-16 ***
## evals$bty_avg     0.07416    0.01625   4.563 6.48e-06 ***
## evals$gendermale  0.17239    0.05022   3.433 0.000652 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.5287 on 460 degrees of freedom
## Multiple R-squared:  0.05912,   Adjusted R-squared:  0.05503
## F-statistic: 14.45 on 2 and 460 DF,  p-value: 8.177e-07

##### bty_avg is still a significant predictor of score; its p-value (6.48e-06) is even smaller than in the model without gender. Adding gender did change the parameter estimate for bty_avg: the slope increased from 0.0666 to 0.0742, while the intercept decreased.

library(openintro)  # multiLines() comes from the openintro package
multiLines(m_bty_gen)

#### Exercise 9. What is the equation of the line corresponding to males? (Hint: For males, the parameter estimate is multiplied by 1.) For two professors who received the same beauty rating, which gender tends to have the higher course evaluation score?

##### For males the gendermale indicator equals 1, so the equation is predicted score = (3.74734 + 0.17239) + 0.07416 * bty_avg = 3.91973 + 0.07416 * bty_avg. For the same beauty rating, male professors tend to receive the higher course evaluation score.

#### Exercise 10. Create a new model called m_bty_rank with gender removed and rank added in. How does R appear to handle categorical variables that have more than two levels? Note that the rank variable has three levels: teaching, tenure track, tenured.

m_bty_rank <- lm(evals$score ~ evals$bty_avg + evals$rank)
summary(m_bty_rank)
##
## Call:
## lm(formula = evals$score ~ evals$bty_avg + evals$rank)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -1.8713 -0.3642  0.1489  0.4103  0.9525
##
## Coefficients:
##                        Estimate Std. Error t value Pr(>|t|)
## (Intercept)             3.98155    0.09078  43.860  < 2e-16 ***
## evals$bty_avg           0.06783    0.01655   4.098 4.92e-05 ***
## evals$ranktenure track -0.16070    0.07395  -2.173   0.0303 *
## evals$ranktenured      -0.12623    0.06266  -2.014   0.0445 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.5328 on 459 degrees of freedom
## Multiple R-squared:  0.04652,    Adjusted R-squared:  0.04029
## F-statistic: 7.465 on 3 and 459 DF,  p-value: 6.88e-05

##### R handles a categorical variable with more than two levels by creating an indicator (dummy) variable for each level other than the first; the first level (teaching) is the baseline absorbed into the intercept, and the output shows separate coefficients for tenure track and tenured.
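##### To see this dummy coding explicitly (a sketch, not in the original lab):
levels(evals$rank)     # "teaching" comes first, so it is the baseline level
contrasts(evals$rank)  # one indicator column each for "tenure track" and "tenured"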

### The search for the best model

#### Exercise 11. Which variable would you expect to have the highest p-value in this model? Why? Hint: Think about which variable you would expect to have no association with the professor score.

##### I would expect the number of credits (cls_credits) to have the highest p-value, because whether a course is one credit or multi credit seems unlikely to be associated with the professor's evaluation score.
m_full <- lm(evals$score ~ evals$rank + evals$ethnicity + evals$gender + evals$language + evals$age + evals$cls_perc_eval + evals$cls_students + evals$cls_level + evals$cls_profs + evals$cls_credits + evals$bty_avg
+ evals$pic_outfit + evals$pic_color)
summary(m_full)
##
## Call:
## lm(formula = evals$score ~ evals$rank + evals$ethnicity + evals$gender +
##     evals$language + evals$age + evals$cls_perc_eval + evals$cls_students +
##     evals$cls_level + evals$cls_profs + evals$cls_credits + evals$bty_avg +
##     evals$pic_outfit + evals$pic_color)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -1.77397 -0.32432  0.09067  0.35183  0.95036
##
## Coefficients:
##                               Estimate Std. Error t value Pr(>|t|)
## (Intercept)                  4.0952141  0.2905277  14.096  < 2e-16 ***
## evals$ranktenure track      -0.1475932  0.0820671  -1.798  0.07278 .
## evals$ranktenured           -0.0973378  0.0663296  -1.467  0.14295
## evals$ethnicitynot minority  0.1234929  0.0786273   1.571  0.11698
## evals$gendermale             0.2109481  0.0518230   4.071 5.54e-05 ***
## evals$languagenon-english   -0.2298112  0.1113754  -2.063  0.03965 *
## evals$age                   -0.0090072  0.0031359  -2.872  0.00427 **
## evals$cls_perc_eval          0.0053272  0.0015393   3.461  0.00059 ***
## evals$cls_students           0.0004546  0.0003774   1.205  0.22896
## evals$cls_levelupper         0.0605140  0.0575617   1.051  0.29369
## evals$cls_profssingle       -0.0146619  0.0519885  -0.282  0.77806
## evals$cls_creditsone credit  0.5020432  0.1159388   4.330 1.84e-05 ***
## evals$bty_avg                0.0400333  0.0175064   2.287  0.02267 *
## evals$pic_outfitnot formal  -0.1126817  0.0738800  -1.525  0.12792
## evals$pic_colorcolor        -0.2172630  0.0715021  -3.039  0.00252 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.498 on 448 degrees of freedom
## Multiple R-squared:  0.1871, Adjusted R-squared:  0.1617
## F-statistic: 7.366 on 14 and 448 DF,  p-value: 6.552e-14
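##### The variable with the highest p-value can be read off the table or extracted programmatically, and update() re-fits the model with a term dropped; a sketch (not part of the original lab):
pvals <- summary(m_full)$coefficients[, "Pr(>|t|)"]
pvals[which.max(pvals)]                            # cls_profssingle, p of about 0.778
m_step <- update(m_full, . ~ . - evals$cls_profs)  # one backward-elimination step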

#### Exercise 14. Drop the variable with the highest p-value and re-fit the model. Did the coefficients and significance of the other explanatory variables change? (One of the things that makes multiple regression interesting is that coefficient estimates depend on the other variables that are included in the model.) If not, what does this say about whether or not the dropped variable was collinear with the other explanatory variables?

m_full <- lm(evals$score ~ evals$rank + evals$gender + evals$language + evals$age + evals$cls_perc_eval
+ evals$cls_students + evals$cls_level + evals$cls_profs + evals$cls_credits + evals$bty_avg + evals$pic_outfit + evals$pic_color)
summary(m_full)
##
## Call:
## lm(formula = evals$score ~ evals$rank + evals$gender + evals$language +
##     evals$age + evals$cls_perc_eval + evals$cls_students + evals$cls_level +
##     evals$cls_profs + evals$cls_credits + evals$bty_avg + evals$pic_outfit +
##     evals$pic_color)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -1.73681 -0.32734  0.08283  0.35834  0.98639
##
## Coefficients:
##                               Estimate Std. Error t value Pr(>|t|)
## (Intercept)                  4.2676351  0.2694274  15.840  < 2e-16 ***
## evals$ranktenure track      -0.1660677  0.0813523  -2.041 0.041801 *
## evals$ranktenured           -0.1127978  0.0657022  -1.717 0.086705 .
## evals$gendermale             0.2241744  0.0512176   4.377 1.50e-05 ***
## evals$languagenon-english   -0.2862448  0.1055924  -2.711 0.006968 **
## evals$age                   -0.0092040  0.0031385  -2.933 0.003534 **
## evals$cls_perc_eval          0.0051119  0.0015357   3.329 0.000944 ***
## evals$cls_students           0.0004785  0.0003777   1.267 0.205899
## evals$cls_levelupper         0.0767503  0.0567182   1.353 0.176677
## evals$cls_profssingle       -0.0292174  0.0512393  -0.570 0.568817
## evals$cls_creditsone credit  0.4589918  0.1128358   4.068 5.61e-05 ***
## evals$bty_avg                0.0375980  0.0174661   2.153 0.031880 *
## evals$pic_outfitnot formal  -0.1208610  0.0738165  -1.637 0.102265
## evals$pic_colorcolor        -0.2400696  0.0701264  -3.423 0.000675 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.4988 on 449 degrees of freedom
## Multiple R-squared:  0.1826, Adjusted R-squared:  0.159
## F-statistic: 7.717 on 13 and 449 DF,  p-value: 6.792e-14

##### Dropping ethnicity did change the coefficients and significance of several remaining explanatory variables (for example, the rank terms became significant at the 0.05 level), which suggests the dropped variable was collinear with some of the other predictors.

#### Exercise 15. Using backward-selection and p-value as the selection criterion, determine the best model. You do not need to show all steps in your answer, just the output for the final model. Also, write out the linear model for predicting score based on the final model you settle on.

m_full <- lm(evals$score ~ evals$gender + evals$language + evals$age + evals$cls_perc_eval
+ evals$cls_credits + evals$bty_avg
+ evals$pic_color)
summary(m_full)
##
## Call:
## lm(formula = evals$score ~ evals$gender + evals$language + evals$age +
##     evals$cls_perc_eval + evals$cls_credits + evals$bty_avg +
##     evals$pic_color)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -1.81919 -0.32035  0.09272  0.38526  0.88213
##
## Coefficients:
##                              Estimate Std. Error t value Pr(>|t|)
## (Intercept)                  3.967255   0.215824  18.382  < 2e-16 ***
## evals$gendermale             0.221457   0.049937   4.435 1.16e-05 ***
## evals$languagenon-english   -0.281933   0.098341  -2.867  0.00434 **
## evals$age                   -0.005877   0.002622  -2.241  0.02551 *
## evals$cls_perc_eval          0.004295   0.001432   2.999  0.00286 **
## evals$cls_creditsone credit  0.444392   0.100910   4.404 1.33e-05 ***
## evals$bty_avg                0.048679   0.016974   2.868  0.00432 **
## evals$pic_colorcolor        -0.216556   0.066625  -3.250  0.00124 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.5014 on 455 degrees of freedom
## Multiple R-squared:  0.1631, Adjusted R-squared:  0.1502
## F-statistic: 12.67 on 7 and 455 DF,  p-value: 6.996e-15

##### The final model: predicted score = 3.967255 + 0.221457 * gendermale - 0.281933 * languagenon-english - 0.005877 * age + 0.004295 * cls_perc_eval + 0.444392 * cls_creditsone credit + 0.048679 * bty_avg - 0.216556 * pic_colorcolor.
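##### A quick programmatic check (a sketch, not in the original lab) that every remaining predictor is significant at the 0.05 level, the backward-selection stopping rule used here:
max(summary(m_full)$coefficients[-1, "Pr(>|t|)"])  # largest non-intercept p-value, about 0.026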

#### Exercise 16. Verify that the conditions for this model are reasonable using diagnostic plots.

m_full_final <- lm(evals$score ~ evals$gender + evals$language + evals$age + evals$cls_perc_eval + evals$cls_credits + evals$bty_avg + evals$pic_color)
summary(m_full_final)
##
## Call:
## lm(formula = evals$score ~ evals$gender + evals$language + evals$age +
##     evals$cls_perc_eval + evals$cls_credits + evals$bty_avg +
##     evals$pic_color)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -1.81919 -0.32035  0.09272  0.38526  0.88213
##
## Coefficients:
##                              Estimate Std. Error t value Pr(>|t|)
## (Intercept)                  3.967255   0.215824  18.382  < 2e-16 ***
## evals$gendermale             0.221457   0.049937   4.435 1.16e-05 ***
## evals$languagenon-english   -0.281933   0.098341  -2.867  0.00434 **
## evals$age                   -0.005877   0.002622  -2.241  0.02551 *
## evals$cls_perc_eval          0.004295   0.001432   2.999  0.00286 **
## evals$cls_creditsone credit  0.444392   0.100910   4.404 1.33e-05 ***
## evals$bty_avg                0.048679   0.016974   2.868  0.00432 **
## evals$pic_colorcolor        -0.216556   0.066625  -3.250  0.00124 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.5014 on 455 degrees of freedom
## Multiple R-squared:  0.1631, Adjusted R-squared:  0.1502
## F-statistic: 12.67 on 7 and 455 DF,  p-value: 6.996e-15

qqnorm(m_full_final$residuals)
qqline(m_full_final$residuals)
plot(m_full_final$residuals ~ evals$bty_avg)
abline(h=0)
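##### Two further condition checks for Exercise 16 (a sketch, not in the original write-up): nearly normal residuals and constant variability around the fitted values.
hist(m_full_final$residuals)                               # the left skew seen earlier persists
plot(m_full_final$residuals ~ m_full_final$fitted.values)  # look for fan shapes or curvature
abline(h = 0)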