#install.packages("sjPlot")
library(psych) # for the describe() command
## Warning: package 'psych' was built under R version 4.4.3
library(car) # for the vif() command
## Warning: package 'car' was built under R version 4.4.3
## Loading required package: carData
## Warning: package 'carData' was built under R version 4.4.3
##
## Attaching package: 'car'
## The following object is masked from 'package:psych':
##
## logit
library(sjPlot) # to visualize our results
## Warning: package 'sjPlot' was built under R version 4.4.3
# For HW, import the dataset you cleaned previously; this will be the dataset you'll use throughout the rest of the semester
d <- read.csv(file="Data/projectdata.csv", header=T)
We hypothesize that levels of depression, levels of anxiety, and levels of anxiety about the COVID-19 pandemic will significantly predict negative COVID-19 experiences.
# you only need to check the variables you're using in the current analysis
# although you checked them previously, it's always a good idea to look them over again and be sure that everything is correct
str(d)
## 'data.frame': 979 obs. of 7 variables:
## $ X : int 321 401 520 1390 1422 1849 2247 2526 2609 2689 ...
## $ age : chr "1 under 18" "4 between 36 and 45" "1 under 18" "5 over 45" ...
## $ education: chr "1 equivalent to not completing high school" "6 graduate degree or higher" "1 equivalent to not completing high school" "5 undergraduate degree" ...
## $ phq : num 1.89 2.44 1.56 1.22 4 ...
## $ gad : num 1 2.14 1.14 1 1.57 ...
## $ covid_neg: int 0 0 0 0 0 0 0 0 0 0 ...
## $ pas_covid: num 2.33 4 3 2.89 2.67 ...
# Place only the continuous variables of interest in a new dataframe, and name it "cont"
cont <- na.omit(subset(d, select=c(phq, gad, pas_covid, covid_neg)))
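# add a row ID so that specific cases can be identified (and, if needed, removed) when we check for outliers later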
cont$row_id <- 1:nrow(cont)
# Standardize all IVs
cont$phq <- scale(cont$phq, center=T, scale=T)
cont$gad <- scale(cont$gad, center=T, scale=T)
cont$pas_covid <- scale(cont$pas_covid, center=T, scale=T)
# you can use the describe() command on an entire dataframe (like cont, here) or just on a single variable
describe(cont)
## vars n mean sd median trimmed mad min max range
## phq 1 979 0.00 1.00 -0.18 -0.10 1.15 -1.22 2.27 3.49
## gad 2 979 0.00 1.00 -0.31 -0.11 0.94 -1.10 2.23 3.33
## pas_covid 3 979 0.00 1.00 -0.01 0.02 0.96 -3.24 2.57 5.81
## covid_neg 4 979 1.04 1.76 0.00 0.68 0.00 0.00 8.00 8.00
## row_id 5 979 490.00 282.76 490.00 490.00 363.24 1.00 979.00 978.00
## skew kurtosis se
## phq 0.68 -0.58 0.03
## gad 0.75 -0.62 0.03
## pas_covid -0.20 0.11 0.03
## covid_neg 1.50 1.11 0.06
## row_id 0.00 -1.20 9.04
# also use histograms to examine your continuous variables
hist(cont$phq)
hist(cont$gad)
hist(cont$pas_covid)
hist(cont$covid_neg)
# last, use scatterplots to examine each pairing of your continuous variables together
plot(cont$phq, cont$covid_neg) # PUT YOUR DV 2ND (Y-AXIS)
plot(cont$gad, cont$covid_neg) # PUT YOUR DV 2ND (Y-AXIS)
plot(cont$pas_covid, cont$covid_neg) # PUT YOUR DV 2ND (Y-AXIS)
plot(cont$phq, cont$gad)
plot(cont$phq, cont$pas_covid)
plot(cont$gad, cont$pas_covid)
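# note: row_id is still in cont, so it will show up in the correlation output below; you can ignore its row/column there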
corr_output_m <- corr.test(cont)
corr_output_m
## Call:corr.test(x = cont)
## Correlation matrix
## phq gad pas_covid covid_neg row_id
## phq 1.00 0.84 0.32 0.39 0.03
## gad 0.84 1.00 0.38 0.39 0.07
## pas_covid 0.32 0.38 1.00 0.19 0.07
## covid_neg 0.39 0.39 0.19 1.00 0.16
## row_id 0.03 0.07 0.07 0.16 1.00
## Sample Size
## [1] 979
## Probability values (Entries above the diagonal are adjusted for multiple tests.)
## phq gad pas_covid covid_neg row_id
## phq 0.00 0.00 0.00 0 0.38
## gad 0.00 0.00 0.00 0 0.08
## pas_covid 0.00 0.00 0.00 0 0.08
## covid_neg 0.00 0.00 0.00 0 0.00
## row_id 0.38 0.03 0.03 0 0.00
##
## To see confidence intervals of the correlations, print with the short=FALSE option
# CHECK FOR ANY CORRELATIONS AMONG YOUR IVs ABOVE .70 --> BAD (aka multicollinearity)
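If you’d rather have R flag high correlations than scan the matrix by eye, here is a minimal sketch (my addition, not part of the original lab); in this dataset it flags phq and gad (r = .84):

# flag any pairs of IVs correlated above .70 (the cutoff used above)
iv_corrs <- cor(cont[, c("phq", "gad", "pas_covid")])
high <- which(abs(iv_corrs) > .70 & row(iv_corrs) < col(iv_corrs), arr.ind = TRUE)
data.frame(var1 = rownames(iv_corrs)[high[, 1]],
           var2 = colnames(iv_corrs)[high[, 2]],
           r = round(iv_corrs[high], 2))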
# ONLY use the commented-out section below IF you need to remove outliers AFTER examining the Cook's distance and Residuals vs Leverage plots in your HW -- remember we practiced this in the ANOVA lab
#cont <- subset(cont, row_id!=c(1970)) # example only -- replace 1970 with the row_id(s) of the outliers you need to drop
# use the lm() command to run the regression. Put DV on the left, IVs on the right separated by "+"
reg_model <- lm(covid_neg ~ phq + gad + pas_covid, data = cont)
Assumptions we’ve discussed previously: normality and the absence of outliers (for regression, we check these on the model’s residuals rather than on the raw variables).
New assumptions: an adequate sample size, the absence of multicollinearity among the IVs, and linearity. Each of these is checked below.
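# rule of thumb: you need at least 80 + 8k cases, where k = the number of IVs (here, k = 3)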
needed <- 80 + 8*3
nrow(cont) >= needed
## [1] TRUE
NOTE: For your homework, if you don’t have the required number of cases you’ll need to drop one of your independent variables. Reach out to me and we can figure out the best way to proceed!
# Variance Inflation Factor = VIF; values above 5 (or 10, by a more lenient rule) are usually taken to indicate problematic multicollinearity
vif(reg_model)
## phq gad pas_covid
## 3.451299 3.612416 1.169230
NOTE: For your homework, you will need to discuss multicollinearity and any high values, but you don’t have to drop any variables.
The plot below shows the residuals for each case plotted against the model’s fitted values. The red line is the average residual at each point along the fitted values. If the assumption of linearity is met, the red line should be roughly horizontal, indicating that the residuals average to around zero across the range of fitted values. However, a bit of deviation is okay – just like with skewness and kurtosis, there’s a range that we can work in before non-linearity becomes a critical issue. For some examples of a good Residuals vs Fitted plot and ones that show serious problems, check out this page.
plot(reg_model, 1)
NOTE: For your homework, you’ll simply need to generate this plot and talk about whether your assumptions are met. This is going to be a judgement call, and that’s okay!
The plots below both address leverage, or how much each data point is able to influence the regression line. Outliers are points that have undue influence on the regression line, the way that Bill Gates entering the room has an undue influence on the mean income.
The first plot, Cook’s distance, is a visualization of a score called (you guessed it) Cook’s distance, calculated for each case (aka row or participant) in the dataframe. Cook’s distance tells us how much the regression would change if the point was removed. The second plot also includes the residuals in the examination of leverage. The standardized residuals are on the y-axis and leverage is on the x-axis; this shows us which points have high residuals (are far from the regression line) and high leverage. Points that have large residuals and high leverage are especially worrisome, because they are far from the regression line but are also exerting a large influence on it.
# Cook's distance
plot(reg_model, 4)
# Residuals vs Leverage
plot(reg_model, 5)
NOTE: For your homework, you’ll simply need to generate these plots, assess Cook’s distance in your dataset, and then identify and remove any potential cases that are prominent outliers.
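There’s no single agreed-upon cutoff for Cook’s distance, but if you want a numeric starting point to go with the plots, here is a minimal sketch using the common 4/n rule of thumb (an assumption on my part, not a requirement of the homework):

# flag cases whose Cook's distance exceeds 4/n, a common rule of thumb
cd <- cooks.distance(reg_model)
flagged <- which(cd > 4 / nrow(cont))
cont$row_id[flagged] # these row_ids could go in the commented-out subset() line shown earlier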
This plot is a bit new. It’s called a Q-Q plot and shows the standardized residuals plotted against a normal distribution. If our residuals are perfectly normal, the points will fall on the dashed line perfectly. This page shows how different types of non-normality appear on a Q-Q plot.
It’s normal for Q-Q plots to show a bit of deviation at the ends. This page shows some examples that help us put our Q-Q plot into context.
plot(reg_model, 2)
NOTE: For your homework, you’ll simply need to generate this plot and talk about how your plot compares to the ones pictured. Does it seem like the points lie mostly along the straight diagonal line with either no or some minor deviations along each of the tails?
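If you’d like a numeric companion to the visual check (this goes beyond what the homework asks for), base R’s shapiro.test() can be run on the model residuals – just note that with large samples it will flag even trivial deviations from normality:

# Shapiro-Wilk test of the residuals (supplementary; the Q-Q plot is the required check)
shapiro.test(resid(reg_model))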
Before interpreting our results, we assessed our variables to see if they met the assumptions for a multiple linear regression. We detected slight issues with linearity in a Residuals vs Fitted plot. However, we did not detect any outliers (by visually analyzing Cook’s Distance and Residuals vs Leverage plots) or any serious issues with the normality of our residuals (by visually analyzing a Q-Q plot), nor were there any issues of multicollinearity among our three independent variables.
summary(reg_model)
##
## Call:
## lm(formula = covid_neg ~ phq + gad + pas_covid, data = cont)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.8180 -0.9174 -0.4262 0.5279 6.4697
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 1.04086 0.05123 20.317 < 2e-16 ***
## phq 0.35624 0.09522 3.741 0.000194 ***
## gad 0.35780 0.09742 3.673 0.000253 ***
## pas_covid 0.08930 0.05542 1.611 0.107477
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.603 on 975 degrees of freedom
## Multiple R-squared: 0.1695, Adjusted R-squared: 0.1669
## F-statistic: 66.31 on 3 and 975 DF, p-value: < 2.2e-16
# Note for section below: to type a lowercase beta (ß) hold down the Alt key and type 225 on the numeric keypad (strictly speaking this produces the German eszett, which is commonly used as a stand-in for the Greek β). If that doesn't work you should be able to copy/paste it from somewhere else
Effect size, based on the regression beta (Estimate) value:

Trivial: less than 0.10
Small: 0.10–0.29
Medium: 0.30–0.49
Large: 0.50 or greater
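As a small optional helper (my own sketch, not part of the lab), you can map a standardized beta onto these categories in R:

# hypothetical helper: label a beta using the Cohen (1988) cutoffs listed above
beta_label <- function(b) {
  cut(abs(b), breaks = c(0, 0.10, 0.30, 0.50, Inf),
      labels = c("trivial", "small", "medium", "large"), right = FALSE)
}
beta_label(c(0.36, 0.36, 0.09)) # betas from the model above: "medium" "medium" "trivial"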
To test our hypothesis that levels of depression, levels of anxiety, and levels of pandemic-related anxiety would significantly predict negative experiences from the pandemic, we used a multiple linear regression to model the associations between these variables. We confirmed that our data met the assumptions of a linear regression, and although there were slight issues with linearity we continued on with the analysis.
Our model was statistically significant, Adj. R2 = .17, F(3, 975) = 66.31, p < .001. Depression (PHQ) and anxiety (GAD) were both significant positive predictors of negative COVID-19 experiences, with small-to-medium effect sizes (per Cohen, 1988), while the relationship for pandemic-specific anxiety was positive but non-significant. Full output from the regression model is reported in Table 1.
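sjPlot was loaded at the top of the script to visualize our results; the call that generated Table 1 isn’t shown in this extract, but it was presumably something along these lines (the show.se flag and labels are my guesses based on the table):

# regression table in sjPlot; labels here are inferred from Table 1 below
tab_model(reg_model,
          show.se = TRUE,
          dv.labels = "Negative Experiences from COVID-19",
          pred.labels = c("Intercept", "Levels of Depression",
                          "Levels of Anxiety",
                          "Levels of Pandemic-Specific Anxiety"))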
Table 1. Negative Experiences from COVID-19

Predictors | Estimates | SE | CI | p
---|---|---|---|---
Intercept | 1.04 | 0.05 | 0.94 – 1.14 | <0.001
Levels of Depression | 0.36 | 0.10 | 0.17 – 0.54 | <0.001
Levels of Anxiety | 0.36 | 0.10 | 0.17 – 0.55 | <0.001
Levels of Pandemic-Specific Anxiety | 0.09 | 0.06 | -0.02 – 0.20 | 0.107
Observations | 979 | | |
R2 / R2 adjusted | 0.169 / 0.167 | | |
References
Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. New York, NY: Routledge Academic.