Link: https://www.datacamp.com/courses/data-analysis-and-statistical-inference_mine-cetinkaya-rundel-by-datacamp/lab-7-multiple-linear-regression-9

Grading the professor

Many college courses conclude by giving students the opportunity to evaluate the course and the instructor anonymously. However, the use of these student evaluations as an indicator of course quality and teaching effectiveness is often criticized because these measures may reflect the influence of non-teaching related characteristics, such as the physical appearance of the instructor.

The article titled “Beauty in the classroom: instructors’ pulchritude and putative pedagogical productivity” (Hamermesh and Parker, 2005) found that instructors who are viewed to be better looking receive higher instructional ratings. In this lab we will analyze the data from this study in order to learn what goes into a positive professor evaluation.

The data

The data were gathered from end of semester student evaluations for a large sample of professors from the University of Texas at Austin. In addition, six students rated the professors’ physical appearance.

The result is a data frame where each row contains a different course and columns represent variables about the courses and professors. You can look up the different variables in the data frame, and their meaning, in the codebook linked from the lab page. As you’ll be needing this list for reference throughout the lab, it’s best to keep it open in a tab.

This is a slightly modified version of the original data set that was released as part of the replication data for Data Analysis Using Regression and Multilevel/Hierarchical Models (Gelman and Hill, 2007).

load(url('http://s3.amazonaws.com/assets.datacamp.com/course/dasi/evals.RData'))
names(evals)
##  [1] "score"         "rank"          "ethnicity"     "gender"       
##  [5] "language"      "age"           "cls_perc_eval" "cls_did_eval" 
##  [9] "cls_students"  "cls_level"     "cls_profs"     "cls_credits"  
## [13] "bty_f1lower"   "bty_f1upper"   "bty_f2upper"   "bty_m1lower"  
## [17] "bty_m1upper"   "bty_m2upper"   "bty_avg"       "pic_outfit"   
## [21] "pic_color"

Question 1

These data come from an experiment.

  • TRUE
  • FALSE

FALSE. This is an observational study: the beauty ratings and evaluation scores were simply recorded, with no treatment imposed on the professors.

Question 2

The original research question posed in the paper is whether beauty leads directly to the differences in course evaluations. Given the study design, should the question be rephrased? If so, how?

Yes. Since this is an observational study, the question should be rephrased as: “Is there an association between beauty and course evaluations?”

Question 3

Which of the following statements is false about the distribution of evals$score?

  • The median of the distribution is 4.3.
  • 25% of the students gave their professors a score of over 4.6.
  • 11 of the students gave a professor a score below 3.
  • The left skewness of the data suggests that the students are less likely to rate the professors highly.
summary(evals$score)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##    2.30    3.80    4.30    4.17    4.60    5.00
hist(evals$score)

[plot: histogram of evals$score]

sum(evals$score < 3)
## [1] 11

The left skewness of the data suggests that the students are less likely to rate the professors highly. (This is the false statement: a left-skewed distribution means most scores are high, so students actually tend to rate professors favorably.)
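
A quick numeric check of the skew: in a left-skewed distribution the mean is pulled below the median, which matches the summary output above.

# TRUE here: the mean (about 4.17) sits below the median (4.30)
mean(evals$score) < median(evals$score)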

Visualizing relationships

Next, you will explore the provided variables further with the help of three visualizations: a scatterplot, a side-by-side boxplot, and a mosaic plot.

# Create a scatterplot for 'age' vs 'bty_avg':
plot(evals$age, evals$bty_avg)

[plot: scatterplot of bty_avg vs. age]

# Create a boxplot for 'age' and 'gender':
boxplot(evals$age ~ evals$gender)

[plot: side-by-side boxplots of age by gender]

# Create a mosaic plot for 'rank' and 'gender':
mosaicplot(evals$rank ~ evals$gender)

[plot: mosaic plot of rank by gender]

Simple Linear Regression

The fundamental phenomenon suggested by the study is that better looking teachers are evaluated more favorably.

Let’s create a scatterplot to see if this appears to be the case.

plot(evals$score ~ evals$bty_avg)

[plot: scatterplot of score vs. bty_avg]

Well done. Notice, though, that your scatterplot suffers from overplotting: many observations are drawn on top of one another. In the next exercise you will solve this.

The jitter function

Replot the scatterplot, but this time use the jitter function. What was misleading about the initial scatterplot?

plot(evals$score ~ jitter(evals$bty_avg))

[plot: jittered scatterplot of score vs. bty_avg]
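
The initial plot was misleading because score and bty_avg take on only a limited set of values, so many observations landed on identical coordinates and were drawn as a single point. jitter() breaks these ties by adding a small amount of uniform noise; a quick illustration:

set.seed(42)          # make the random noise reproducible
jitter(rep(4.3, 3))   # three identical values become three slightly different ones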

More than natural variation?

Let’s see if the apparent trend in the plot is something more than natural variation. Fit a linear model called m_bty to predict average professor score by average beauty rating and add the line to your plot using abline(m_bty).

plot(evals$score ~ jitter(evals$bty_avg))
m_bty = lm(evals$score ~ evals$bty_avg)
abline(m_bty)

[plot: jittered scatterplot of score vs. bty_avg with the fitted regression line]

Question 4

Average beauty score seems to be a statistically and practically significant predictor.

summary(m_bty)
## 
## Call:
## lm(formula = evals$score ~ evals$bty_avg)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -1.925 -0.369  0.142  0.398  0.931 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)     3.8803     0.0761   50.96  < 2e-16 ***
## evals$bty_avg   0.0666     0.0163    4.09  5.1e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.535 on 461 degrees of freedom
## Multiple R-squared:  0.035,  Adjusted R-squared:  0.0329 
## F-statistic: 16.7 on 1 and 461 DF,  p-value: 5.08e-05

FALSE

OK! It’s a statistically significant predictor, but may not be practically significant since the slope is very small.
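
One way to gauge practical significance is to translate the slope into the predicted score difference across the full observed beauty range; a minimal sketch:

# Width of the observed bty_avg range times the slope of m_bty:
diff(range(evals$bty_avg)) * coef(m_bty)[2]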

Question 5

Use residual plots to evaluate whether the conditions of least squares regression are reasonable. Which of the following statements is an incorrect analysis of the residual plots and conditions?

  • Linear association: The residuals plot shows a random scatter.
  • Constant variance of residuals: No fan shape in residuals plot.
  • Nearly normal residuals: Residuals are right skewed, but the sample size is large, so this may not be an important violation of conditions.
  • Independent observations: Classes sampled randomly, no order effect.
# Residuals vs. the explanatory variable:
plot(m_bty$residuals ~ evals$bty_avg)
abline(h = 0, lty = 3)

[plot: residuals of m_bty vs. bty_avg]

Nearly normal residuals: Residuals are right skewed, but the sample size is large, so this may not be an important violation of conditions. (This is the incorrect statement: the residuals are left skewed, not right skewed.)
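
For completeness, a standard set of residual diagnostics for m_bty would look something like this:

hist(m_bty$residuals)      # near-normality check (left skewed here)
qqnorm(m_bty$residuals)    # normal probability plot of the residuals
qqline(m_bty$residuals)
plot(m_bty$residuals ~ jitter(evals$bty_avg))   # linearity / constant variance
abline(h = 0, lty = 3)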

Multiple linear regression

The data set contains several variables on the beauty score of the professor: individual ratings from each of the six students who were asked to score the physical appearance of the professors and the average of these six scores. Let’s take a look at the relationship between one of these scores and the average beauty score.

plot(evals$bty_f1lower ~ evals$bty_avg)

[plot: scatterplot of bty_f1lower vs. bty_avg]

cor(evals$bty_f1lower, evals$bty_avg)
## [1] 0.8439

Indeed. As expected, the relationship is quite strong.

The relationship between all beauty variables.

As seen in the previous exercise, the relationship is quite strong; after all, the average score is calculated using the individual scores.

# Scatterplot matrix of the six individual ratings and their average (columns 13-19):
plot(evals[, 13:19])

[plot: scatterplot matrix of the seven beauty variables]

These variables are collinear (correlated), and adding more than one of them to the model would not add much new information. In this application, with these highly correlated predictors, it is reasonable to use the average beauty score as the single representative of them all.
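
The pairwise correlations back up the scatterplot matrix numerically; rounding keeps the output readable:

round(cor(evals[, 13:19]), 2)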

Taking into account gender

In order to see if beauty is still a significant predictor of professor score after we’ve accounted for the gender of the professor, we can add the gender term into the model.

m_bty_gen = lm(evals$score ~ evals$bty_avg + evals$gender)
summary(m_bty_gen)
## 
## Call:
## lm(formula = evals$score ~ evals$bty_avg + evals$gender)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -1.831 -0.362  0.105  0.421  0.931 
## 
## Coefficients:
##                  Estimate Std. Error t value Pr(>|t|)    
## (Intercept)        3.7473     0.0847   44.27  < 2e-16 ***
## evals$bty_avg      0.0742     0.0163    4.56  6.5e-06 ***
## evals$gendermale   0.1724     0.0502    3.43  0.00065 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.529 on 460 degrees of freedom
## Multiple R-squared:  0.0591, Adjusted R-squared:  0.055 
## F-statistic: 14.5 on 2 and 460 DF,  p-value: 8.18e-07

Is bty_avg still a significant predictor of score?

Yes: the p-value for bty_avg (6.5e-06) is still well below 0.05.

Has the addition of gender to the model changed the parameter estimate for bty_avg?

summary(lm(evals$score ~ evals$bty_avg))
## 
## Call:
## lm(formula = evals$score ~ evals$bty_avg)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -1.925 -0.369  0.142  0.398  0.931 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)     3.8803     0.0761   50.96  < 2e-16 ***
## evals$bty_avg   0.0666     0.0163    4.09  5.1e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.535 on 461 degrees of freedom
## Multiple R-squared:  0.035,  Adjusted R-squared:  0.0329 
## F-statistic: 16.7 on 1 and 461 DF,  p-value: 5.08e-05

Not by much: the estimate increases slightly, from 0.0666 to 0.0742.

Question 6

\(P\)-values and parameter estimates should only be trusted if the conditions for the regression are reasonable. Using diagnostic plots, we can conclude that the conditions for this model are reasonable.

qqnorm(m_bty_gen$residuals)
qqline(m_bty_gen$residuals)

[plot: normal Q-Q plot of the m_bty_gen residuals]

TRUE

Note that the estimate for gender is called gendermale in your summary output. You’ll see this name change whenever you introduce a categorical variable. The reason is that R recodes gender from having the values of female and male to being an indicator variable called gendermale that takes a value of 0 for females and a value of 1 for males. (Such variables are often referred to as ‘dummy’ variables.)

As a result, for females the indicator term drops out (the estimate is multiplied by zero), leaving the familiar intercept-and-slope form from simple regression.
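
Concretely, the two fitted lines implied by m_bty_gen can be read straight off its coefficients:

coef(m_bty_gen)
# Females (gendermale = 0): predicted score = 3.747 + 0.074 * bty_avg
# Males   (gendermale = 1): predicted score = (3.747 + 0.172) + 0.074 * bty_avg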

# multiLines() is a helper provided by the course environment; it draws a
# separate regression line for each level of the categorical predictor:
multiLines(m_bty_gen)

[plot: score vs. bty_avg with separate regression lines for male and female professors]

Question 7

For two professors (one male and one female) who received the same beauty rating, the male professor is predicted to have a higher course evaluation score than the female professor.

TRUE

Whoop whoop! For your information, the decision to call the indicator variable gendermale instead of genderfemale has no deeper meaning. R simply codes the category that comes first alphabetically as a 0.
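
For illustration, base R’s relevel() lets you pick a different reference level; a non-destructive sketch:

levels(evals$gender)                            # "female" comes first alphabetically
gender2 = relevel(evals$gender, ref = "male")   # hypothetical re-coded copy
levels(gender2)                                 # now "male" is the reference level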

Switching rank and gender

Create a new model called m_bty_rank with gender removed and rank added in.

m_bty_rank = lm(evals$score ~ evals$bty_avg + evals$rank)
summary(m_bty_rank)
## 
## Call:
## lm(formula = evals$score ~ evals$bty_avg + evals$rank)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -1.871 -0.364  0.149  0.410  0.953 
## 
## Coefficients:
##                        Estimate Std. Error t value Pr(>|t|)    
## (Intercept)              3.9815     0.0908   43.86  < 2e-16 ***
## evals$bty_avg            0.0678     0.0165    4.10  4.9e-05 ***
## evals$ranktenure track  -0.1607     0.0740   -2.17    0.030 *  
## evals$ranktenured       -0.1262     0.0627   -2.01    0.045 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.533 on 459 degrees of freedom
## Multiple R-squared:  0.0465, Adjusted R-squared:  0.0403 
## F-statistic: 7.46 on 3 and 459 DF,  p-value: 6.88e-05

Examine the regression output. Since rank has three levels (teaching, tenure track, tenured), two indicator variables are created: one for tenure track and one for tenured. Teaching is the reference level, hence it doesn’t show up in the regression output.
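
You can inspect both the level ordering and the indicator columns R builds behind the scenes:

levels(evals$rank)                         # "teaching" is listed first, so it is the reference
head(model.matrix(~ rank, data = evals))   # intercept plus the two indicator columns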

Question 8

Which of the following is the correct order of the three levels of rank if we were to order them from lowest predicted course evaluation score to highest predicted course evaluation score?

Tenure Track, Tenured, Teaching

Well done! In general, the interpretation of the coefficients in multiple regression is slightly different from that of simple regression. The estimate for bty_avg reflects how much higher a group of professors is expected to score if they have a beauty rating that is one point higher while holding all other variables constant. In this case, that translates into considering only professors of the same rank with bty_avg scores that are one point apart.
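
As a worked example with the m_bty_rank coefficients above: the predicted score for a tenured professor with a beauty rating of 5 is the intercept, plus the tenured coefficient, plus the slope times 5.

3.9815 + (-0.1262) + 0.0678 * 5
## [1] 4.1943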

The search for the best model

You will start with a full model that predicts professor score based on rank, ethnicity, gender, language of the university where they got their degree, age, proportion of students that filled out evaluations, class size, course level, number of professors, number of credits, and average beauty rating.

Note that you do not include the pic_outfit or pic_color variables in the full model, because the original study used them in a different analysis: one evaluating whether they relate to how the six students scored the professors’ beauty, not to how students evaluate their professors in class.

m_full = lm(score ~ rank + ethnicity + gender + language + age + cls_perc_eval + 
    cls_students + cls_level + cls_profs + cls_credits + bty_avg, data = evals)

summary(m_full)
## 
## Call:
## lm(formula = score ~ rank + ethnicity + gender + language + age + 
##     cls_perc_eval + cls_students + cls_level + cls_profs + cls_credits + 
##     bty_avg, data = evals)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.8448 -0.3137  0.0856  0.3573  1.1010 
## 
## Coefficients:
##                        Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            3.530504   0.240820   14.66  < 2e-16 ***
## ranktenure track      -0.107012   0.082025   -1.30  0.19269    
## ranktenured           -0.045037   0.065218   -0.69  0.49020    
## ethnicitynot minority  0.186965   0.077533    2.41  0.01629 *  
## gendermale             0.178617   0.051535    3.47  0.00058 ***
## languagenon-english   -0.126825   0.108036   -1.17  0.24105    
## age                   -0.006650   0.003083   -2.16  0.03154 *  
## cls_perc_eval          0.005700   0.001551    3.67  0.00027 ***
## cls_students           0.000445   0.000358    1.24  0.21460    
## cls_levelupper         0.018710   0.055583    0.34  0.73656    
## cls_profssingle       -0.008575   0.051353   -0.17  0.86746    
## cls_creditsone credit  0.508743   0.117013    4.35  1.7e-05 ***
## bty_avg                0.061265   0.016675    3.67  0.00027 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.504 on 450 degrees of freedom
## Multiple R-squared:  0.164,  Adjusted R-squared:  0.141 
## F-statistic: 7.33 on 12 and 450 DF,  p-value: 2.41e-12

Question 9

Which of the following is the correct interpretation of the coefficient associated with the ethnicity variable?

Non-minority professors are expected on average to score…

  • 0.19 points lower than minority professors, all else held constant.
  • 0.19 points higher than minority professors, all else held constant.
  • 0.02 points lower than minority professors, all else held constant.
  • 0.02 points higher than minority professors, all else held constant.

0.19 points higher than minority professors, all else held constant.

Eliminating variables from the model - \(p\)-value selection

Now you will create a new model, where you will drop the variable with the highest \(p\)-value in the m_full model.

# m_full is already fit above; find the coefficient with the largest p-value:
max_p = max(summary(m_full)$coefficients[, 4])
which(summary(m_full)$coefficients[, 4] == max_p)
## cls_profssingle 
##              11
# Your new model:
m_new = lm(score ~ rank + ethnicity + gender + language + age + cls_perc_eval + cls_students + cls_level + cls_credits + bty_avg, data = evals)

# View the regression output:
summary(m_new)
## 
## Call:
## lm(formula = score ~ rank + ethnicity + gender + language + age + 
##     cls_perc_eval + cls_students + cls_level + cls_credits + 
##     bty_avg, data = evals)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.8505 -0.3139  0.0805  0.3596  1.1036 
## 
## Coefficients:
##                        Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            3.528630   0.240299   14.68  < 2e-16 ***
## ranktenure track      -0.107364   0.081910   -1.31  0.19061    
## ranktenured           -0.045374   0.065117   -0.70  0.48628    
## ethnicitynot minority  0.189372   0.076099    2.49  0.01319 *  
## gendermale             0.178027   0.051358    3.47  0.00058 ***
## languagenon-english   -0.126574   0.107909   -1.17  0.24143    
## age                   -0.006662   0.003079   -2.16  0.03101 *  
## cls_perc_eval          0.005679   0.001545    3.68  0.00027 ***
## cls_students           0.000449   0.000357    1.26  0.20932    
## cls_levelupper         0.018374   0.055487    0.33  0.74069    
## cls_creditsone credit  0.510916   0.116161    4.40  1.4e-05 ***
## bty_avg                0.061150   0.016643    3.67  0.00027 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.503 on 451 degrees of freedom
## Multiple R-squared:  0.163,  Adjusted R-squared:  0.143 
## F-statistic: 8.01 on 11 and 451 DF,  p-value: 8.3e-13

If you look at the summary, you see that the coefficients and \(p\)-values of some variables changed. One of the things that makes multiple regression interesting is that coefficient estimates depend on the other variables that are included in the model.
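
You can see one such shift directly, for example in the ethnicity coefficient (the values match the two summaries above):

coef(m_full)["ethnicitynot minority"]   # 0.187 with cls_profs in the model
coef(m_new)["ethnicitynot minority"]    # 0.189 after dropping cls_profs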

Eliminating variables from the model - adjusted \(R^2\) selection

Now you will create a new model, where you will drop the variable that when dropped yields the highest improvement in the adjusted \(R^2\).

# Named vector to collect the adjusted R-squared of each candidate model:
r.squared = c()

# The full model:
m_full = lm(score ~ rank + ethnicity + gender + language + age + cls_perc_eval + 
              cls_students + cls_level + cls_profs + cls_credits + bty_avg, data = evals)
r.squared['full'] = summary(m_full)$adj.r.squared

# Remove rank:
m1 = lm(score ~ ethnicity + gender + language + age + cls_perc_eval + cls_students + 
          cls_level + cls_profs + cls_credits + bty_avg, data = evals)
r.squared['rank'] = summary(m1)$adj.r.squared

# Remove ethnicity:
m2 = lm(score ~ rank + gender + language + age + cls_perc_eval + 
          cls_students + cls_level + cls_profs + cls_credits + bty_avg, data = evals)
r.squared['ethnicity'] = summary(m2)$adj.r.squared


# Remove gender:
m3 = lm(score ~ rank + ethnicity + language + age + cls_perc_eval + 
          cls_students + cls_level + cls_profs + cls_credits + bty_avg, data = evals)
r.squared['gender'] = summary(m3)$adj.r.squared

# Remove language:
m4 = lm(score ~ rank + ethnicity + gender + age + cls_perc_eval + 
              cls_students + cls_level + cls_profs + cls_credits + bty_avg, data = evals)
r.squared['language'] = summary(m4)$adj.r.squared

# Remove age:
m5 = lm(score ~ rank + ethnicity + gender + language + cls_perc_eval + 
              cls_students + cls_level + cls_profs + cls_credits + bty_avg, data = evals)
r.squared['age'] = summary(m5)$adj.r.squared

# Remove cls_perc_eval:
m6 = lm(score ~ rank + ethnicity + gender + language + age + 
              cls_students + cls_level + cls_profs + cls_credits + bty_avg, data = evals)
r.squared['cls_perc_eval'] = summary(m6)$adj.r.squared

# Remove cls_students:
m7 = lm(score ~ rank + ethnicity + gender + language + age + cls_perc_eval + 
          cls_level + cls_profs + cls_credits + bty_avg, data = evals)
r.squared['cls_students'] = summary(m7)$adj.r.squared

# Remove cls_level:
m8 = lm(score ~ rank + ethnicity + gender + language + age + cls_perc_eval + 
              cls_students + cls_profs + cls_credits + bty_avg, data = evals)
r.squared['cls_level'] = summary(m8)$adj.r.squared

# Remove cls_profs:
m9 = lm(score ~ rank + ethnicity + gender + language + age + cls_perc_eval + 
              cls_students + cls_level + cls_credits + bty_avg, data = evals)
r.squared['cls_profs'] = summary(m9)$adj.r.squared

# Remove cls_credits:
m10 = lm(score ~ rank + ethnicity + gender + language + age + cls_perc_eval + 
              cls_students + cls_level + cls_profs + bty_avg, data = evals)
r.squared['cls_credits'] = summary(m10)$adj.r.squared

# Remove bty_avg:
m11 = lm(score ~ rank + ethnicity + gender + language + age + cls_perc_eval + 
              cls_students + cls_level + cls_profs + cls_credits, data = evals)
r.squared['bty_avg'] = summary(m11)$adj.r.squared


r.squared
##          full          rank     ethnicity       gender       language 
##        0.1412        0.1418        0.1320        0.1202        0.1405 
##           age cls_perc_eval  cls_students     cls_level     cls_profs 
##        0.1343        0.1174        0.1402        0.1429        0.1431 
##   cls_credits       bty_avg 
##        0.1071        0.1174
which(r.squared == max(r.squared))
## cls_profs 
##        10
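
The brute-force approach above works, but it is verbose. A more compact sketch of the same drop-one comparison, using reformulate() and sapply() (the predictor names are those of the full model; drop_one_adj_r2 is an illustrative name):

predictors = c("rank", "ethnicity", "gender", "language", "age", "cls_perc_eval",
               "cls_students", "cls_level", "cls_profs", "cls_credits", "bty_avg")
drop_one_adj_r2 = sapply(predictors, function(p) {
  f = reformulate(setdiff(predictors, p), response = "score")
  summary(lm(f, data = evals))$adj.r.squared
})
sort(drop_one_adj_r2, decreasing = TRUE)   # cls_profs comes out on top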

Take some time to reflect on this final model. Based on your final model, what are the characteristics of a professor and course at University of Texas at Austin that would be associated with a high evaluation score? Would you be comfortable generalizing your conclusions to apply to professors generally (at any university)? Why or why not?

Question 10

Elimination of which variable from the full model yielded the highest adjusted \(R^2\)?

  • bty_avg
  • cls_profs
  • cls_students
  • rank

cls_profs

To complete the model selection, we would continue removing variables one at a time until the removal of another variable did not increase adjusted \(R^2\).
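
To automate that procedure, here is a minimal sketch of the full backward-elimination loop under the adjusted \(R^2\) criterion (it reuses the predictors vector from the sketch above):

current = predictors
best = summary(lm(reformulate(current, response = "score"), data = evals))$adj.r.squared
repeat {
  candidates = sapply(current, function(p) {
    f = reformulate(setdiff(current, p), response = "score")
    summary(lm(f, data = evals))$adj.r.squared
  })
  if (max(candidates) <= best) break   # stop: no removal improves adjusted R-squared
  best = max(candidates)
  current = setdiff(current, names(which.max(candidates)))
}
current   # the predictors retained in the final model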