Multiple Linear Regression Homework

Author

Lindsey Kreil

Loading Libraries

library(psych) # for the describe() and corr.test() commands
library(car) # for the vif() command
library(sjPlot) # to visualize our results

Importing Data

d <- read.csv(file="Data/mydata.csv", header=T)

# since we're focusing on our continuous variables, we drop our categorical variables here. this keeps later steps, like the correlation matrix, simpler, since they require numeric variables.
d <- subset(d, select=-c(income, sibling))

Examine Your Variables

# use the scale() command to standardize all of your variables (IVs and DVs)
d$swb <- scale(d$swb, center=T, scale=T)
d$mindful <- scale(d$mindful, center=T, scale=T)
d$belong <- scale(d$belong, center=T, scale=T)
d$stress <- scale(d$stress, center=T, scale=T)

# also use histograms to examine your continuous variables
hist(d$swb)

hist(d$mindful)

hist(d$belong)

hist(d$stress)

# use scatterplots to examine your continuous variables in pairs
plot(d$swb, d$mindful)

plot(d$swb, d$belong)

plot(d$swb, d$stress)

plot(d$mindful, d$belong)

plot(d$mindful, d$stress)

plot(d$belong, d$stress)

# create a correlation matrix to examine the relationships between your variables
corr_output_m <- corr.test(d)
corr_output_m
Call:corr.test(x = d)
Correlation matrix 
          swb mindful belong stress
swb      1.00    0.29  -0.15   -0.5
mindful  0.29    1.00  -0.21   -0.4
belong  -0.15   -0.21   1.00    0.3
stress  -0.50   -0.40   0.30    1.0
Sample Size 
[1] 3142
Probability values (Entries above the diagonal are adjusted for multiple tests.) 
        swb mindful belong stress
swb       0       0      0      0
mindful   0       0      0      0
belong    0       0      0      0
stress    0       0      0      0

 To see confidence intervals of the correlations, print with the short=FALSE option
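As the output notes, you can re-print the saved object with short=FALSE to see confidence intervals for each correlation:

# re-print the correlation output with confidence intervals included
print(corr_output_m, short=FALSE)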

State Your Hypothesis - PART OF YOUR WRITEUP

H1: We predict that scores on the Mindful Attention Awareness Scale will be positively related to scores on the Satisfaction with Life Scale.

H2: We predict that scores on the Need to Belong Scale will be positively related to scores on the Satisfaction with Life Scale.

H3: We predict that perceived stress will be positively related to scores on the Satisfaction with Life Scale.

H4: When we examine all three independent variables in conjunction with one another, perceived stress will be the strongest predictor of satisfaction with life. The effect of mindfulness will disappear when accounting for the effects of the other variables.

Check Your Assumptions

Multiple Linear Regression Assumptions

  • Observations should be independent (confirmed by data report)
  • Number of cases should be adequate (N ≥ 80 + 8m, where m is the number of IVs). If you don’t have enough, your estimates will be unstable and underpowered. (will check this below)
  • Independent variables should not be too correlated (aka multicollinearity). (will check this below)
  • Relationship between the variables should be linear. (will check this below)
  • Outliers should be identified and removed. (will check this below)
  • Residuals should be normally distributed and have constant variance. (will check this below)

Check Number of Cases

For your homework, if you don’t have the required number of cases you’ll need to drop one of your independent variables. Reach out to me if this happens and we can figure out the best way to proceed!

needed <- 80 + 8*3
nrow(d) >= needed
[1] TRUE

Run a Multiple Linear Regression

To check the following assumptions, we run our regression and then check some output and diagnostic plots BEFORE looking at our results.

# use this commented-out section only if you need to remove outliers

# to drop a single outlier, remove the # at the beginning of the line below:
# d <- subset(d, row_id != 1108)

# to drop multiple outliers, remove the # at the beginning of the line below:
# d <- subset(d, !(row_id %in% c(1108, 602)))
# 
# use the lm() command to run the regression
# dependent/outcome variable on the left, independent/predictor variables on the right
reg_model <- lm(swb ~ mindful + belong + stress, data = d)

Check multicollinearity

  • Higher values indicate more multicollinearity. This usually requires dropping a variable. For your homework, you will need to discuss multicollinearity and any high values, but you don’t have to drop any variables.
  • Cutoff is usually 5 (see the quick check after the output below)
vif(reg_model)
 mindful   belong   stress 
1.204646 1.112631 1.267380 
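As a quick programmatic version of the cutoff check mentioned above, you can compare the VIF values to 5 directly; this one-liner is just a convenience, not part of the required output:

# flag any predictors whose VIF exceeds the usual cutoff of 5
vif(reg_model) > 5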

Check linearity with Residuals vs Fitted plot

READ THIS TEXT

This plot (below) shows the residuals for each case and the fitted line. The red line is the average residual for the specified point of the dependent variable. If the assumption of linearity is met, the red line should be horizontal, indicating that the residuals average to around zero. You can see that for this lab, the plot shows some non-linearity because there are more datapoints below the regression line than there are above it. Thus, there are some negative residuals that don’t have positive residuals to cancel them out. However, a bit of deviation is okay – just like with skewness and kurtosis, there’s a range that we can work in before non-normality or non-linearity becomes a critical issue. For some examples of good Residuals vs Fitted plots and ones that show serious errors, check out this page.

For your homework, you’ll simply need to generate this plot and talk about how your plot compares to the good and problematic plots linked to above. Is it closer to the ‘good’ plots or one of the ‘bad’ plots? This is going to be a judgement call, and that’s okay! In practice, you’ll always be making these judgement calls as part of a team, so this assignment is just about getting experience with it, not making the perfect call.

plot(reg_model, 1)

Check for outliers using Cook’s distance and a Residuals vs Leverage plot

READ THIS TEXT

The plots below both address leverage, or how much each data point is able to influence the regression line. Outliers are points that have undue influence on the regression line, the way that Bill Gates entering the room has an undue influence on the mean income.

The first plot, Cook’s distance, is a visualization of a score called (you guessed it) Cook’s distance, calculated for each case (aka row or participant) in the dataframe. Cook’s distance tells us how much the regression would change if the point was removed. The second plot also includes the residuals in the examination of leverage. The standardized residuals are on the y-axis and leverage is on the x-axis; this shows us which points have high residuals (are far from the regression line) and high leverage. Points that have large residuals and high leverage are especially worrisome, because they are far from the regression line but are also exerting a large influence on it.

For your homework, you’ll simply need to generate these plots, assess Cook’s distance in your dataset, and then identify any potential cases that are prominent outliers. Since we have some cutoffs, this process is a bit less subjective than some of the other assessments we’ve done here, which is a nice change!

# Cook's distance
plot(reg_model, 4)

# Residuals vs Leverage
plot(reg_model, 5)
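If you want a numeric check to pair with the plots, you can pull the Cook’s distance values directly from the model. This is a minimal sketch using the common 4/n rule of thumb as a cutoff; other cutoffs (such as 1) are also used, so treat any flagged cases as candidates to inspect, not automatic deletions:

# calculate Cook's distance for every case in the model
cooks_d <- cooks.distance(reg_model)

# flag cases above the common 4/n rule-of-thumb cutoff
which(cooks_d > 4/nrow(d))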

Check homogeneity of variance in a Scale-Location plot

READ THIS TEXT

This plot is similar to the ones we’ve seen, but it shows us the standardized residuals across the range of the regression line. Because the residuals are standardized, large residuals (whether positive or negative) are at the top of the plot, while small residuals (whether positive or negative) are at the bottom of the plot. If the assumption of homogeneity of variance (also called homoscedasticity) is met, the red line should be mostly horizontal. If it deviates from the mean line, that means that the variance is smaller or larger at that point of the regression line. Once again, you can check out this page for some other examples of this type of plot. (Notice that the Scale-Location plot is the third in the grids.)

For your homework, you’ll simply need to generate this plot and talk about how your plot compares to the ones pictured. Is it closer to the ‘good’ plots or one of the ‘bad’ plots? Again, this is a judgement call! It’s okay if you feel uncertain, and you won’t be penalized for that.

plot(reg_model, 3)
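If you’d like a formal test to complement the plot, the car package (already loaded above) provides ncvTest(), a non-constant variance score test; a significant result suggests heteroscedasticity. With a sample this large, even small deviations will reach significance, so weigh it alongside the plot rather than relying on it alone:

# non-constant variance score test for the regression model
ncvTest(reg_model)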

Check normality of residuals with a Q-Q plot

READ THIS TEXT

This plot is a bit new. It’s called a Q-Q plot and shows the standardized residuals plotted against a normal distribution. If our variables are perfectly normal, the points will fit on the dashed line perfectly. This page shows how different types of non-normality appear on a Q-Q plot. It’s normal for Q-Q plots to show a bit of deviation at the ends.

This page also shows some examples that help us put our Q-Q plot into context. Although it isn’t perfect, we don’t have any serious issues and are okay to proceed.

For your homework, you’ll simply need to generate this plot and talk about how your plot compares to the ones pictured. Does it seem like any skew or kurtosis is indicated by your plot? Is it closer to the ‘good’/‘bad’ plots from the second link?

plot(reg_model, 2)
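For a numeric complement to the Q-Q plot, you can run a Shapiro-Wilk test on the residuals (shapiro.test() accepts at most 5,000 cases, which our sample satisfies). Note that with large samples it will flag even trivial deviations from normality, so use it as a supplement to the plot, not a replacement:

# Shapiro-Wilk test of the model residuals; a significant result indicates non-normality
shapiro.test(residuals(reg_model))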

Issues with My Data - PART OF YOUR WRITEUP

In the Residuals vs Fitted plot, there is some non-linearity: there are more datapoints below the regression line than there are above it, so some negative residuals don’t have positive residuals to cancel them out.

In the Scale-Location plot, the red line deviates slightly from horizontal, suggesting a small amount of heteroscedasticity.

View Test Output

Effect size cutoffs from Cohen (1988):

  • Trivial: < .1
  • Small: between .1 and .3
  • Medium: between .3 and .5
  • Large: > .5
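If you want to apply these cutoffs programmatically, here is a small hypothetical helper (not part of the assignment) that labels the absolute value of a standardized estimate using Cohen’s categories:

# label a standardized estimate using Cohen's (1988) effect size cutoffs
effect_size_label <- function(est) {
  cut(abs(est), breaks=c(0, .1, .3, .5, Inf), include.lowest=TRUE,
      labels=c("trivial", "small", "medium", "large"))
}

effect_size_label(-0.47) # returns "medium"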

summary(reg_model)

Call:
lm(formula = swb ~ mindful + belong + stress, data = d)

Residuals:
    Min      1Q  Median      3Q     Max 
-3.2331 -0.6037  0.0507  0.6204  2.6668 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept)  1.824e-16  1.533e-02   0.000    1.000    
mindful      1.059e-01  1.683e-02   6.292 3.56e-10 ***
belong       1.714e-02  1.618e-02   1.060    0.289    
stress      -4.651e-01  1.727e-02 -26.939  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.8596 on 3138 degrees of freedom
Multiple R-squared:  0.2618,    Adjusted R-squared:  0.2611 
F-statistic:   371 on 3 and 3138 DF,  p-value: < 2.2e-16
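The summary() output doesn’t include confidence intervals; if you want the CIs reported in Table 1 below, the base confint() command will produce them:

# 95% confidence intervals for each coefficient in the model
confint(reg_model)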

Write Up Results

We predict that scores on the Mindful Attention Awareness Scale will be positively related to scores on the Satisfaction with Life Scale. We predict that scores on the Need to Belong Scale will be positively related to scores on the Satisfaction with Life Scale. We predict that perceived stress will be positively related to scores on the Satisfaction with Life Scale. When we examine all three independent variables in conjunction with one another, perceived stress will be the strongest predictor of satisfaction with life. The effect of mindfulness will disappear when accounting for the effects of the other variables.

In the Residuals vs Fitted plot, there is some non-linearity: there are more datapoints below the regression line than there are above it, so some negative residuals don’t have positive residuals to cancel them out.

In the Scale-Location plot, the red line deviates slightly from horizontal, suggesting a small amount of heteroscedasticity.

The overall model was significant, F(3, 3138) = 371, p < .001, Adj. R^2 = .26. Mindfulness was a significant positive predictor of satisfaction with life (β = 0.11, p < .001) and perceived stress was a significant negative predictor (β = -0.47, p < .001); perceived stress was the strongest predictor in the model. Need to belong was not a significant predictor (β = 0.02, p = .289).

Table 1: Regression output for the relationships between satisfaction with life, mindfulness, need to belong, and perceived stress

                                     Satisfaction with Life Scale (swb)
Predictors                           Estimates   SE     CI              p
Intercept                             0.00       0.02   -0.03 – 0.03    1.000
Mindful Attention Awareness Scale     0.11       0.02    0.07 – 0.14    <0.001
Need to Belong Scale                  0.02       0.02   -0.01 – 0.05    0.289
Perceived Stress                     -0.47       0.02   -0.50 – -0.43   <0.001
Observations                         3142
R2 / R2 adjusted                     0.262 / 0.261
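Table 1 has the same layout as the tables produced by the tab_model() command from sjPlot (which we loaded at the top to visualize our results). Here is a sketch of how such a table could be generated; the label arguments are assumptions chosen to match the table above:

# create a formatted regression table; the labels are our own, chosen to match Table 1
tab_model(reg_model,
          pred.labels = c("Intercept", "Mindful Attention Awareness Scale",
                          "Need to Belong Scale", "Perceived Stress"),
          dv.labels = "Satisfaction with Life Scale (swb)")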



References

Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. New York, NY: Routledge Academic.