Multiple Linear Regression HW

Author

Marley Burchenson

Loading Libraries

library(psych) # for the describe() command
library(car) # for the vif() command
library(sjPlot) # to visualize our results

Importing Data

d <- read.csv(file="Data/mydata.csv", header=T)

# since we're focusing on our continuous variables, we're going to drop our categorical variables. This keeps the correlation matrix and regression steps below simpler.
d <- subset(d, select=-c(sexual_orientation, mhealth))

Examine Your Variables

# use the scale() command to standardize all of your variables (IVs and DVs)
d$support <- scale(d$support, center=T, scale=T)
d$swemws <- scale(d$swemws, center=T, scale=T)
d$rse <- scale(d$rse, center=T, scale=T)
d$iou <- scale(d$iou, center=T, scale=T)
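# optional sketch: psych was loaded for describe(), so we can also get a
# numeric summary (including skew and kurtosis) of the standardized variables
describe(d)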

# also use histograms to examine your continuous variables
hist(d$support)

hist(d$swemws)

hist(d$rse)

hist(d$iou)

# use scatterplots to examine your continuous variables in pairs
# can copy paste from correlation hw
plot(d$rse, d$support)

plot(d$rse, d$swemws)

plot(d$rse, d$iou)

plot(d$support, d$swemws)

plot(d$support, d$iou)

plot(d$iou, d$swemws)
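# optional sketch: pairs() from base R shows all of the pairwise scatterplots
# above in a single grid, which can make comparisons easier
pairs(d)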

# create a correlation matrix to examine the relationships between your variables
# choose one dependent (outcome) variable that you're interested in understanding; the other three variables will serve as predictors. How do they correlate with and predict it?
corr_output_m <- corr.test(d)
corr_output_m
Call:corr.test(x = d)
Correlation matrix 
        support swemws   rse   iou
support    1.00   0.60  0.52 -0.43
swemws     0.60   1.00  0.78 -0.65
rse        0.52   0.78  1.00 -0.66
iou       -0.43  -0.65 -0.66  1.00
Sample Size 
[1] 1140
Probability values (Entries above the diagonal are adjusted for multiple tests.) 
        support swemws rse iou
support       0      0   0   0
swemws        0      0   0   0
rse           0      0   0   0
iou           0      0   0   0

 To see confidence intervals of the correlations, print with the short=FALSE option
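As the output notes, the confidence intervals are hidden by default; a minimal sketch of how to display them:

# print the full correlation output, including confidence intervals
print(corr_output_m, short=FALSE)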

State Your Hypothesis - PART OF YOUR WRITEUP

H1: We predict that self-esteem will be positively related to mental well-being.

H2: We predict that intolerance of uncertainty will be negatively related to mental well-being.

H3: We predict that social support will be positively related to mental well-being.

H4: When we examine all three independent variables in conjunction with each other, self-esteem will be the strongest predictor of mental well-being. The effect of intolerance of uncertainty will disappear when accounting for the effects of the other variables.

State your hypotheses. Remember, you will have three IVs and one DV, and all variables will be continuous. You should describe how each of your IVs will relate to the DV (e.g., ‘higher scores on X will be predictive of higher scores on Y’).

Check Your Assumptions

Multiple Linear Regression Assumptions

  • Observations should be independent (confirmed by data report)
  • Number of cases should be adequate (N ≥ 80 + 8m, where m is the number of IVs). If you don't have enough cases, the model will be underpowered and the estimates unreliable. (will check this below)
  • Independent variables should not be too correlated (aka multicollinearity). (will check this below)
  • Relationship between the variables should be linear. (will check this below)
  • Outliers should be identified and removed. (will check this below)
  • Residuals should be normally distributed and have constant variance. (will check this below)

Check Number of Cases

For your homework, if you don’t have the required number of cases you’ll need to drop one of your independent variables. Reach out to me if this happens and we can figure out the best way to proceed!

needed <- 80 + 8*3
nrow(d) >= needed
[1] TRUE

Run a Multiple Linear Regression

To check the following assumptions, we run our regression and then check some output and diagnostic plots BEFORE looking at our results.

# # use this commented-out section only if you need to remove outliers
# # to drop a single outlier, remove the # at the beginning of the line and use this code:
# # d <- subset(d, row_id != 1108)
# 
# # to drop multiple outliers, remove the # at the beginning of the line and use this code:
# d <- subset(d, !(row_id %in% c(1108, 602)))

# use the lm() command to run the regression
# dependent/outcome variable on the left, independent/predictor variables on the right
reg_model <- lm(swemws ~ support + rse + iou, data = d)

Check multicollinearity

  • Higher values indicate more multicollinearity; values above the cutoff usually require dropping a variable. For your homework, you will need to discuss multicollinearity and any high values, but you don’t have to drop any variables.
  • Cutoff is usually 5
vif(reg_model)
 support      rse      iou 
1.400652 2.005149 1.794576 
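
All three values are below the cutoff. If you want an explicit flag rather than reading the output by eye, a small optional sketch:

# flag any predictors whose VIF exceeds the usual cutoff of 5 (none here)
vif_values <- vif(reg_model)
vif_values[vif_values > 5]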

Check linearity with Residuals vs Fitted plot

READ THIS TEXT

This plot (below) shows the residuals for each case and the fitted line. The red line is the average residual at each fitted value of the dependent variable. If the assumption of linearity is met, the red line should be horizontal, indicating that the residuals average to around zero. You can see that for this lab, the plot shows some non-linearity because there are more datapoints below the regression line than there are above it. Thus, there are some negative residuals that don’t have positive residuals to cancel them out. However, a bit of deviation is okay – just like with skewness and kurtosis, there’s a range that we can work in before non-normality or non-linearity becomes a critical issue. For some examples of good Residuals vs Fitted plots and ones that show serious errors, check out this page.

For your homework, you’ll simply need to generate this plot and talk about how your plot compares to the good and problematic plots linked to above. Is it closer to the ‘good’ plots or one of the ‘bad’ plots? This is going to be a judgement call, and that’s okay! In practice, you’ll always be making these judgement calls as part of a team, so this assignment is just about getting experience with it, not making the perfect call.

plot(reg_model, 1)

Check for outliers using Cook’s distance and a Residuals vs Leverage plot

READ THIS TEXT

The plots below both address leverage, or how much each data point is able to influence the regression line. Outliers are points that have undue influence on the regression line, the way that Bill Gates entering the room has an undue influence on the mean income.

The first plot, Cook’s distance, is a visualization of a score called (you guessed it) Cook’s distance, calculated for each case (aka row or participant) in the dataframe. Cook’s distance tells us how much the regression would change if the point was removed. The second plot also includes the residuals in the examination of leverage. The standardized residuals are on the y-axis and leverage is on the x-axis; this shows us which points have high residuals (are far from the regression line) and high leverage. Points that have large residuals and high leverage are especially worrisome, because they are far from the regression line but are also exerting a large influence on it.

For your homework, you’ll simply need to generate these plots, assess Cook’s distance in your dataset, and then identify any potential cases that are prominent outliers. Since we have some cutoffs, this process is a bit less subjective than some of the other assessments we’ve done here, which is a nice change!

# Cook's distance - cutoff is .5
plot(reg_model, 4)

# Residuals vs Leverage 
plot(reg_model, 5)
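
If you’d like a numeric check to go with the plots, a minimal sketch using base R’s cooks.distance() and the .5 cutoff mentioned above:

# list any cases whose Cook's distance exceeds the .5 cutoff
cooks_d <- cooks.distance(reg_model)
which(cooks_d > .5)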

Check homogeneity of variance in a Scale-Location plot

READ THIS TEXT

This plot is similar to the ones we’ve seen, but it shows us the standardized residuals across the range of the regression line. Because the residuals are standardized, large residuals (whether positive or negative) are at the top of the plot, while small residuals (whether positive or negative) are at the bottom of the plot. If the assumption of homogeneity of variance (also called homoscedasticity) is met, the red line should be mostly horizontal. If it deviates from a horizontal line, that means that the variance is smaller or larger at that point of the regression line. Once again, you can check out this page for some other examples of this type of plot. (Notice that the Scale-Location plot is the third in the grids.)

For your homework, you’ll simply need to generate this plot and talk about how your plot compares to the ones pictured. Is it closer to the ‘good’ plots or one of the ‘bad’ plots? Again, this is a judgement call! It’s okay if you feel uncertain, and you won’t be penalized for that.

plot(reg_model, 3)
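
Since the car package is already loaded, an optional numeric companion to this plot is the non-constant variance test; this is a sketch of an extra check, not part of the assignment:

# formal test of constant variance; a significant p-value suggests heteroscedasticity
ncvTest(reg_model)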

Check normality of residuals with a Q-Q plot

READ THIS TEXT

This plot is a bit new. It’s called a Q-Q plot and shows the standardized residuals plotted against a normal distribution. If our residuals are perfectly normal, the points will fit on the dashed line perfectly. This page shows how different types of non-normality appear on a Q-Q plot. It’s normal for Q-Q plots to show a bit of deviation at the ends.

This page also shows some examples that help us put our Q-Q plot into context. Although it isn’t perfect, we don’t have any serious issues and are okay to proceed.

For your homework, you’ll simply need to generate this plot and talk about how your plot compares to the ones pictured. Does it seem like any skew or kurtosis is indicated by your plot? Is it closer to the ‘good’/‘bad’ plots from the second link?

plot(reg_model, 2)
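
If you want numbers to go along with the visual judgement call, a minimal sketch using describe() on the residuals, which reports skew and kurtosis directly:

# skew and kurtosis of the residuals as a numeric companion to the Q-Q plot
describe(residuals(reg_model))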

Issues with My Data - PART OF YOUR WRITEUP

We found that our observations are independent, which was confirmed by our data report. We also confirmed that we had a sufficient number of cases. We checked all of the diagnostic plots and found no issues with multicollinearity, linearity, outliers, homogeneity of variance, or normality of residuals.

View Test Output

Effect size cutoffs from Cohen (1988):

  • Trivial: < .1
  • Small: between .1 and .3
  • Medium: between .3 and .5
  • Large: > .5

summary(reg_model)

Call:
lm(formula = swemws ~ support + rse + iou, data = d)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.13210 -0.33919 -0.00409  0.34900  1.88151 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept)  2.186e-16  1.662e-02   0.000        1    
support      2.378e-01  1.968e-02  12.085   <2e-16 ***
rse          5.194e-01  2.355e-02  22.061   <2e-16 ***
iou         -2.100e-01  2.228e-02  -9.429   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.5612 on 1136 degrees of freedom
Multiple R-squared:  0.6859,    Adjusted R-squared:  0.6851 
F-statistic: 826.9 on 3 and 1136 DF,  p-value: < 2.2e-16

Write Up Results

We predicted that self-esteem would be positively associated with mental well-being, while intolerance of uncertainty would be negatively related to mental well-being. Additionally, social support was expected to have a positive relationship with mental well-being. When considering all three predictors together, we predicted self-esteem would be the strongest predictor, and the effect of intolerance of uncertainty would no longer be significant after accounting for the other variables. We found no issues with our data when checking multicollinearity, linearity, outliers, homogeneity of variance, and normality of residuals. The overall model was significant, F(3, 1136) = 826.9, p < .001, Adj. R^2 = .685, which suggests a large effect size.

Self-esteem and social support were positively associated with mental well-being. Additionally, intolerance of uncertainty was negatively associated with mental well-being. Among the three predictors, self-esteem had the strongest effect, which supports part of our hypothesis. However, contrary to the hypothesis, the effect of intolerance of uncertainty remained significant even when accounting for social support and self-esteem. (See Table 1)

Table 1: Regression output for the relationships between mental well-being, social support, self-esteem, and intolerance of uncertainty

                                     Mental Well-Being
Predictors                    Estimates   SE      CI               p
Intercept                      0.00       0.02    -0.03 – 0.03     1.000
Social Support                 0.24       0.02     0.20 – 0.28     <0.001
Self-Esteem (RSE)              0.52       0.02     0.47 – 0.57     <0.001
Intolerance of Uncertainty    -0.21       0.02    -0.25 – -0.17    <0.001
Observations                  1140
R² / R² adjusted              0.686 / 0.685
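
Table 1 above appears to come from sjPlot’s tab_model() (the package was loaded to visualize results); a minimal sketch of the call, with the predictor labels assumed to match the table:

# regression table with labelled predictors (labels assumed from Table 1)
tab_model(reg_model,
          pred.labels = c("Intercept", "Social Support", "Self-Esteem (RSE)",
                          "Intolerance of Uncertainty"),
          dv.labels = "Mental Well-Being")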


 

References

Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. New York, NY: Routledge Academic.