1 Loading Libraries

#install.packages("sjPlot")

library(psych) # for the describe() command
library(car) # for the vif() command
## Loading required package: carData
## 
## Attaching package: 'car'
## The following object is masked from 'package:psych':
## 
##     logit
library(sjPlot) # to visualize our results

2 Importing Data

# For HW, import the dataset you cleaned previously; this will be the dataset you'll use throughout the rest of the semester

d <- read.csv(file="Data/projectdata.csv", header=T)

3 State Your Hypothesis

We hypothesize that Narcissism and Independence will significantly predict Life Satisfaction. The relationship between Narcissism and Life Satisfaction will be negative, and the relationship between Independence and Life Satisfaction will be positive.

4 Check Your Variables

# you only need to check the variables you're using in the current analysis
# although you checked them previously, it's always a good idea to look them over again and be sure that everything is correct

str(d)
## 'data.frame':    3078 obs. of  7 variables:
##  $ ResponseID      : chr  "R_BJN3bQqi1zUMid3" "R_2TGbiBXmAtxywsD" "R_12G7bIqN2wB2N65" "R_39pldNoon8CePfP" ...
##  $ npi             : num  0.6923 0.1538 0.0769 0.0769 0.7692 ...
##  $ efficacy        : num  3.4 3.4 2.2 2.8 3 2.4 2.3 3 3 3.7 ...
##  $ swb             : num  4.33 4.17 1.83 5.17 3.67 ...
##  $ moa_independence: num  3.67 3.67 3.5 3 3.83 ...
##  $ usdream         : chr  "american dream is important and achievable for me" "american dream is important and achievable for me" "american dream is not important and maybe not achievable for me" "american dream is not important and maybe not achievable for me" ...
##  $ edu             : chr  "2 Currently in college" "5 Completed Bachelors Degree" "2 Currently in college" "2 Currently in college" ...
# Place only continuous variables of interest in new dataframe, and name it "cont"
cont <- na.omit(subset(d, select=c(npi, moa_independence, swb )))
cont$row_id <- 1:nrow(cont)

# Standardize all IVs
cont$npi <- scale(cont$npi, center=T, scale=T)
cont$moa_independence <- scale(cont$moa_independence, center=T, scale=T)


# you can use the describe() command on an entire dataframe (d) or just on a single variable
describe(cont)
##                  vars    n    mean     sd  median trimmed     mad   min     max
## npi                 1 3078    0.00   1.00   -0.41   -0.13    0.74 -0.91    2.34
## moa_independence    2 3078    0.00   1.00    0.28    0.15    1.06 -5.45    0.99
## swb                 3 3078    4.47   1.32    4.67    4.53    1.48  1.00    7.00
## row_id              4 3078 1539.50 888.69 1539.50 1539.50 1140.86  1.00 3078.00
##                    range  skew kurtosis    se
## npi                 3.24  0.94    -0.70  0.02
## moa_independence    6.45 -1.44     2.53  0.02
## swb                 6.00 -0.37    -0.46  0.02
## row_id           3077.00  0.00    -1.20 16.02
# also use histograms to examine your continuous variables (all IVs and DV)
hist(cont$npi)

hist(cont$moa_independence)

hist(cont$swb)

# last, use scatterplots to examine each pairing of your continuous variables together
plot(cont$npi, cont$swb)  # PUT YOUR DV 2ND (Y-AXIS)

plot(cont$moa_independence, cont$swb)  # PUT YOUR DV 2ND (Y-AXIS)

plot(cont$npi, cont$moa_independence)  # Check relationship between IVs, order does not matter

5 View Your Correlations

corr_output_m <- corr.test(cont)
corr_output_m
## Call:corr.test(x = cont)
## Correlation matrix 
##                    npi moa_independence   swb row_id
## npi               1.00            -0.03  0.04   0.00
## moa_independence -0.03             1.00  0.10   0.01
## swb               0.04             0.10  1.00  -0.01
## row_id            0.00             0.01 -0.01   1.00
## Sample Size 
## [1] 3078
## Probability values (Entries above the diagonal are adjusted for multiple tests.) 
##                   npi moa_independence  swb row_id
## npi              0.00             0.52 0.21      1
## moa_independence 0.13             0.00 0.00      1
## swb              0.04             0.00 0.00      1
## row_id           0.84             0.74 0.47      0
## 
##  To see confidence intervals of the correlations, print with the short=FALSE option
# CHECK FOR ANY CORRELATIONS AMONG YOUR IVs ABOVE .70 --> BAD (aka multicollinearity)
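
If you'd rather have R flag high correlations than scan the matrix by eye, here is a small optional sketch (not part of the lab) that pulls the IV-IV correlation out of the corr_output_m object created above:

# Optional: flag any IV-IV correlations above the .70 cutoff
r_ivs <- corr_output_m$r[c("npi", "moa_independence"), c("npi", "moa_independence")]
any(abs(r_ivs[upper.tri(r_ivs)]) > .70)  # TRUE would signal multicollinearity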

6 Run a Multiple Linear Regression

# ONLY use the commented-out section below IF you need to remove outliers AFTER examining the Cook's distance and Residuals vs Leverage plots in your HW -- remember we practiced this in the ANOVA lab

#cont <- subset(cont, row_id != 1970)  # to remove multiple cases, use: subset(cont, !(row_id %in% c(1970, 1971)))


# use the lm() command to run the regression. Put DV on the left,  IVs on the right separated by "+"
reg_model <- lm( swb ~ npi + moa_independence , data = cont )

7 Check Your Assumptions

7.1 Multiple Linear Regression Assumptions

Assumptions we’ve discussed previously:

  • Observations should be independent
  • Variables should be continuous and normally distributed
  • Outliers should be identified and removed
  • Relationship between the variables should be linear
  • Homogeneity of variance [NOTE: We are skipping this here]
  • Residuals should be normal and have constant variance

New assumptions:

  • Number of cases should be adequate (N ≥ 80 + 8*m, where m is the number of IVs)
  • Independent variables should not be too correlated (aka multicollinearity)

7.2 Count Number of Cases

needed <- 80 + 8*2
nrow(cont) >= needed
## [1] TRUE

NOTE: For your homework, if you don’t have the required number of cases, reach out to me and we can figure out the best way to proceed!

7.3 Check for multicollinearity

  • Higher values indicate more multicollinearity
  • Cutoff is usually VIF > 5
# Variance Inflation Factor = VIF
vif(reg_model)
##              npi moa_independence 
##         1.000744         1.000744

NOTE: For your homework, you will need to discuss multicollinearity and any high values in “Issues with My Data”, but you don’t have to drop any variables.

7.4 Check linearity with Residuals vs Fitted plot

The plot below shows the residuals for each case plotted against the model’s fitted values. The red line shows the average residual at each point along the fitted values. If the assumption of linearity is met, the red line should be roughly horizontal, indicating that the residuals average to around zero. However, a bit of deviation is okay – just like with skewness and kurtosis, there’s a range we can work in before non-linearity becomes a critical issue. For some examples of good Residuals vs Fitted plots and ones that show serious errors, check out this page.

plot(reg_model, 1)

NOTE: For your homework, you’ll simply need to generate this plot and talk about whether the assumption was met in “Issues with My Data”. This is going to be a judgement call, and that’s okay!
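
If you'd like a numeric companion to the visual check, one option (not required for the homework) is the residualPlots() function from the car package, which repeats these plots and prints a curvature test for each predictor; a significant test hints at non-linearity. A minimal sketch:

# Optional: curvature tests to accompany the visual linearity check
# (car is already loaded above for vif())
residualPlots(reg_model)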

7.5 Check for outliers using Cook’s distance and a Residuals vs Leverage plot

The plots below both address leverage, or how much each data point is able to influence the regression line. Outliers are points that have undue influence on the regression line, the way that Bill Gates entering the room has an undue influence on the mean income.

The first plot, Cook’s distance, is a visualization of a score called (you guessed it) Cook’s distance, calculated for each case (aka row or participant) in the dataframe. Cook’s distance tells us how much the regression would change if the point was removed. The second plot also includes the residuals in the examination of leverage. The standardized residuals are on the y-axis and leverage is on the x-axis; this shows us which points have high residuals (are far from the regression line) and high leverage. Points that have large residuals and high leverage are especially worrisome, because they are far from the regression line but are also exerting a large influence on it.

# Cook's distance
plot(reg_model, 4)

# Residuals vs Leverage
plot(reg_model, 5)

NOTE: For your homework, you’ll simply need to generate these plots, assess Cook’s distance in your dataset, and then identify and remove any potential cases that are prominent outliers (like we did in the ANOVA lab). You will make a note of this in the “Issues with My Data” write-up.
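
If the plots are hard to read visually, one optional way to identify the flagged cases numerically is base R's cooks.distance() function; a minimal sketch that matches the largest values back to row_id for the subset() step in section 6:

# Optional: numerically identify the most influential cases
cd <- cooks.distance(reg_model)
top5 <- head(sort(cd, decreasing = TRUE), 5)  # five largest Cook's distances
cont[names(top5), "row_id"]  # row_id values to use in the subset() removal step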

7.6 Check normality of residuals with a Q-Q plot

This plot is a bit new. It’s called a Q-Q plot and shows the standardized residuals plotted against a normal distribution. If our residuals are perfectly normal, the points will fall on the dashed line perfectly. This page shows how different types of non-normality appear on a Q-Q plot.

It’s normal for Q-Q plots to show a bit of deviation at the ends. This page shows some examples that help us put our Q-Q plot into context.

plot(reg_model, 2)

NOTE: For your homework, you’ll simply need to generate this plot and think about how your plot compares to the normal/non-normal plots pictured in the links above. Does it seem like the points lie mostly along the straight diagonal line with either no or some minor deviations along each of the tails? If so, your residuals are likely normal enough to meet the assumption. You will talk about this in the write-up below.
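
As an optional supplement to the visual check, you could also run a formal normality test on the residuals. Keep in mind that with roughly 3,000 cases such tests flag even trivial deviations, so the Q-Q plot is usually the better guide:

# Optional: Shapiro-Wilk test on the residuals (accepts samples up to n = 5000)
# with N this large, expect significance even for minor deviations from normality
shapiro.test(resid(reg_model))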

7.7 Issues with My Data

Before interpreting our results, we assessed our variables to see if they met the assumptions for a multiple linear regression. We did encounter kurtosis outside of the acceptable range for our variable Independence (kurtosis = 2.53). We detected slight issues of linearity in the Residuals vs Fitted plot. We did not detect any outliers (by visually analyzing the Cook’s distance and Residuals vs Leverage plots) or any serious issues with the normality of our residuals (by visually analyzing a Q-Q plot), nor were there any issues of multicollinearity between our two independent variables.

8 View Test Output

summary(reg_model)
## 
## Call:
## lm(formula = swb ~ npi + moa_independence, data = cont)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.6928 -0.9091  0.1518  0.9910  3.1634 
## 
## Coefficients:
##                  Estimate Std. Error t value Pr(>|t|)    
## (Intercept)       4.47336    0.02373 188.508  < 2e-16 ***
## npi               0.05214    0.02374   2.196   0.0282 *  
## moa_independence  0.13755    0.02374   5.793 7.61e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.317 on 3075 degrees of freedom
## Multiple R-squared:  0.01212,    Adjusted R-squared:  0.01147 
## F-statistic: 18.86 on 2 and 3075 DF,  p-value: 7.236e-09
# Note for section below: to type lowercase Beta below (ß) you need to hold down Alt key and type 225 on numeric keypad. If that doesn't work you should be able to copy/paste it from somewhere else

Effect size, based on the Regression ß (Beta Estimate) values in our output:

  • Trivial: Less than 0.10 (ß < 0.10)
  • Small: 0.10–0.29 (0.10 < ß < 0.29)
  • Medium: 0.30–0.49 (0.30 < ß < 0.49)
  • Large: 0.50 or greater (ß > 0.50)
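
Because both IVs were standardized in section 4, the estimates in the summary output can be read against these cutoffs directly. If you'd like R to do the labeling, here is a small optional sketch whose breaks mirror the list above:

# Optional: label each coefficient's effect size using the cutoffs above
b <- coef(reg_model)[-1]  # drop the intercept
size <- cut(abs(b), breaks = c(0, 0.10, 0.30, 0.50, Inf),
            labels = c("trivial", "small", "medium", "large"),
            right = FALSE)  # intervals are [0, .10), [.10, .30), etc.
data.frame(estimate = round(b, 2), effect_size = size)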

9 Write Up Results

To test our hypothesis that Narcissism and Independence would significantly predict Life Satisfaction, with Narcissism relating negatively and Independence relating positively to Life Satisfaction, we conducted a multiple linear regression. We confirmed that our data met the assumptions of a linear regression, aside from slight issues with linearity.

Our hypothesis was partially supported. The model was statistically significant, Adj. R2 = .01, F(2, 3075) = 18.86, p < .001. Our results indicate that Narcissism positively predicted Life Satisfaction and had a trivial effect size (ß < .10; per Cohen, 1988), while Independence positively predicted Life Satisfaction and had a small effect size (0.10 < ß < 0.29). Full output from the regression model is reported in Table 1. Because both predictors were standardized, this means that people’s Life Satisfaction increases by .05 units for every one standard deviation increase in their Narcissism, and by .14 units for every one standard deviation increase in their Independence.
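
Table 1 below is the style of table that sjPlot (loaded in section 1 to visualize our results) produces. A minimal sketch of a tab_model() call that could generate it; the pred.labels and dv.labels values are assumptions chosen to match the table, not the original code:

# Generate an APA-style regression table with sjPlot's tab_model()
# (label arguments below are assumed to match Table 1)
tab_model(reg_model, show.se = TRUE,
          pred.labels = c("Intercept", "Narcissism", "Independence"),
          dv.labels = "Life Satisfaction")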

Table 1: Multiple Regression Model Predicting Life Satisfaction

                     Life Satisfaction
Predictors           Estimates   SE     CI            p
Intercept            4.47        0.02   4.43 – 4.52   <0.001
Narcissism           0.05        0.02   0.01 – 0.10   0.028
Independence         0.14        0.02   0.09 – 0.18   <0.001
Observations         3078
R2 / R2 adjusted     0.012 / 0.011


References

Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. New York, NY: Routledge Academic.