1 Loading Libraries

library(psych) # for the describe() command and the corr.test() command
library(apaTables) # to create our correlation table
library(kableExtra) # to create our correlation table
library(naniar) # to check our data for missing values

2 Importing Data

# import the dataset you cleaned previously
# this will be the dataset you'll use throughout the rest of the semester
# use ARC data downloaded previously for lab
d <- read.csv(file="Data/jenna_data_FINAL.csv", header=T)
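We loaded naniar above; one optional way to put it to use here (a quick sketch, not required for the lab) is to confirm that the cleaned data has no remaining missing values:

# naniar's miss_var_summary() reports the count and percent of NAs per variable
miss_var_summary(d)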

3 State Your Hypothesis

We predict that worry (measured by the PSWQ), anxiety symptoms (measured by the GAD), extraversion (measured by the Big 5 Extraversion), and COVID positivity (measured by survey) will all be correlated with each other. Furthermore, we predict that worry will be higher in participants who are higher in anxiety symptoms or who report lower extraversion scores.

4 Check Your Variables

# you only need to check the variables you're using in the current analysis
# although you checked them previously, it's always a good idea to look them over again and be sure that everything is correct
str(d)
## 'data.frame':    326 obs. of  7 variables:
##  $ X           : int  1 31 49 57 86 113 133 179 190 208 ...
##  $ income      : chr  "3 high" "2 middle" "2 middle" "2 middle" ...
##  $ exercise_cat: chr  "1 less than 1 hour" "2 1-2 hours" "2 1-2 hours" "2 1-2 hours" ...
##  $ covid_pos   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ big5_ext    : num  2 5 5.67 4 5.33 ...
##  $ pswq        : num  4.94 3.94 2.94 2.81 3.5 ...
##  $ gad         : num  1.86 2 1.57 2.86 1.57 ...
# we're going to create a fake variable for this lab, so that the dataset has four variables and mirrors the homework assignment. SKIP THIS STEP FOR THE HOMEWORK

# since we're focusing on our continuous variables, we're going to subset them into their own dataframe. this will make some stuff we're doing later easier.
cont <- subset(d, select=c(covid_pos, pswq, gad, big5_ext))

# you can use the describe() command on an entire dataframe (d) or just on a single variable (d$pswq)
describe(cont)
##           vars   n mean   sd median trimmed  mad  min   max range  skew
## covid_pos    1 326 0.23 1.14   0.00    0.00 0.00 0.00 12.00 12.00  6.57
## pswq         2 326 2.87 0.87   2.81    2.85 1.02 1.12  4.94  3.81  0.20
## gad          3 326 1.53 0.61   1.43    1.42 0.42 1.00  4.00  3.00  1.73
## big5_ext     4 326 4.64 1.33   4.67    4.72 1.48 1.33  7.00  5.67 -0.43
##           kurtosis   se
## covid_pos    50.61 0.06
## pswq         -0.79 0.05
## gad           3.22 0.03
## big5_ext     -0.58 0.07
# our fake variable has high kurtosis, which I'll ignore. you don't need to discuss univariate normality in the results write-ups for the labs/homework, but you will need to discuss it in your final manuscript

# also use histograms to examine your continuous variables
hist(d$covid_pos)

hist(d$pswq)

hist(d$gad)

hist(d$big5_ext)

# last, use scatterplots to examine your continuous variables together
plot(d$covid_pos, d$pswq)

plot(d$covid_pos, d$gad)

plot(d$covid_pos, d$big5_ext)

plot(d$pswq, d$gad)

plot(d$pswq, d$big5_ext)

plot(d$gad, d$big5_ext)
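If you want to see all of these pairwise relationships at once, a base-R scatterplot matrix is an optional shortcut (a sketch using the subsetted dataframe):

# pairs() draws every pairwise scatterplot in a single matrix
pairs(cont)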

5 Check Your Assumptions

5.1 Pearson’s Correlation Coefficient Assumptions

  • Should have two measurements for each participant
  • Variables should be continuous and normally distributed
  • Outliers should be identified and removed
  • Relationship between the variables should be linear

5.1.1 Checking for Outliers

Note: You are not required to screen out outliers or take any action based on what you see here. This is something you will check and then discuss in your write-up.

d$pswq_std <- scale(d$pswq, center=T, scale=T)
hist(d$pswq_std)

sum(d$pswq_std < -3 | d$pswq_std > 3)
## [1] 0
d$gad_std <- scale(d$gad, center=T, scale=T)
hist(d$gad_std)

sum(d$gad_std < -3 | d$gad_std > 3)
## [1] 6
d$big5_ext_std <- scale(d$big5_ext, center=T, scale=T)
hist(d$big5_ext_std)

sum(d$big5_ext_std < -3 | d$big5_ext_std > 3)
## [1] 0
d$covid_pos_std <- scale(d$covid_pos, center=T, scale=T)
hist(d$covid_pos_std)

sum(d$covid_pos_std < -3 | d$covid_pos_std > 3)
## [1] 0
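Since the same standardize-and-count pattern repeats for each variable, here is an optional one-liner (a sketch) that checks all four columns of cont at once:

# count cases more than 3 SDs from the mean, in either direction, per variable
sapply(cont, function(x) sum(abs(scale(x)) > 3))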

5.2 Issues with My Data

All but one of my variables meet the assumptions of Pearson’s correlation coefficient. One variable, covid_pos, had high kurtosis (50.61), although it had 0 outliers. Outliers can distort the relationship between two variables and sway the correlation in their direction. This variable also appears to have non-linear relationships with the other variables. Pearson’s r may underestimate the strength of a non-linear relationship and distort its direction. Any correlations with my fake COVID positivity variable should therefore be evaluated carefully.

6 Create a Correlation Matrix

corr_output_m <- corr.test(cont)
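We loaded apaTables at the top for this step; here is a sketch of how it can export an APA-style correlation table like Table 1 below (the filename is just an example — use whatever name you like):

# apa.cor.table() writes a table of means, SDs, and correlations with CIs
apa.cor.table(cont, filename="Table1.doc", table.number=1)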

7 View Test Output

corr_output_m
## Call:corr.test(x = cont)
## Correlation matrix 
##           covid_pos  pswq   gad big5_ext
## covid_pos      1.00  0.07 -0.01    -0.01
## pswq           0.07  1.00  0.59    -0.13
## gad           -0.01  0.59  1.00    -0.11
## big5_ext      -0.01 -0.13 -0.11     1.00
## Sample Size 
## [1] 326
## Probability values (Entries above the diagonal are adjusted for multiple tests.) 
##           covid_pos pswq  gad big5_ext
## covid_pos      0.00 0.57 1.00     1.00
## pswq           0.19 0.00 0.00     0.07
## gad            0.91 0.00 0.00     0.16
## big5_ext       0.87 0.01 0.04     0.00
## 
##  To see confidence intervals of the correlations, print with the short=FALSE option
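As that last line of output suggests, the confidence intervals used in Table 1 below can be displayed by reprinting the test object:

# short=FALSE adds the 95% confidence intervals for each correlation
print(corr_output_m, short=FALSE)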

8 Write Up Results

To test our hypothesis that worry (measured by the PSWQ), anxiety symptoms (measured by the GAD), extraversion (measured by the Big 5 Extraversion), and COVID positivity (measured by survey) would all be correlated with each other, we calculated a series of Pearson’s correlation coefficients. Most of our data met the assumptions of the test, with the variables meeting the standards of normality and showing no outliers. One variable, GAD, did have 6 outliers and non-linear relationships with the other variables, so any significant results involving that variable should be evaluated carefully.

Contrary to our prediction, we found that only two variables (GAD, PSWQ) were significantly correlated (p < .001). The effect size of this correlation was large (r > .5; Cohen, 1988). This test also supported one part of our second hypothesis, that worry would be higher in participants who are higher in anxiety, as can be seen in the correlation coefficients reported in Table 1.

The relationship between worry (PSWQ) and anxiety symptoms (GAD) was positive, r(324) = .59, p < .001.

Table 1: Means, standard deviations, and correlations with confidence intervals

Variable                 M     SD    1             2             3
COVID Positivity         0.23  1.14
Worry symptoms (PSWQ)    2.87  0.87  .07
                                     [-.04, .18]
Anxiety symptoms (GAD)   1.53  0.61  -.01          .59**
                                     [-.11, .10]   [.51, .65]
Extraversion (Big 5)     4.64  1.33  -.01          -.13*         -.11*
                                     [-.12, .10]   [-.24, -.03]  [-.22, -.01]

Note: M and SD are used to represent mean and standard deviation, respectively. Values in square brackets indicate the 95% confidence interval. The confidence interval is a plausible range of population correlations that could have caused the sample correlation.
* indicates p < .05.
** indicates p < .01.

References

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). New York, NY: Routledge Academic.

9 Loading Libraries

library(psych) # for the describe() command
library(broom) # for the augment() command
library(ggplot2) # to visualize our results
## 
## Attaching package: 'ggplot2'
## The following objects are masked from 'package:psych':
## 
##     %+%, alpha

10 Importing Data

# import the dataset you cleaned previously
# this will be the dataset you'll use throughout the rest of the semester
# use EAMMi2 data
d <- read.csv(file="Data/jenna_data_FINAL.csv", header=T)

11 State Your Hypothesis

We hypothesize that worry (measured by the PSWQ) will significantly predict anxiety symptoms (measured by the GAD), and that the relationship will be positive.

12 Check Your Variables

# you only need to check the variables you're using in the current analysis
# although you checked them previously, it's always a good idea to look them over again and be sure that everything is correct
str(d)
## 'data.frame':    326 obs. of  7 variables:
##  $ X           : int  1 31 49 57 86 113 133 179 190 208 ...
##  $ income      : chr  "3 high" "2 middle" "2 middle" "2 middle" ...
##  $ exercise_cat: chr  "1 less than 1 hour" "2 1-2 hours" "2 1-2 hours" "2 1-2 hours" ...
##  $ covid_pos   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ big5_ext    : num  2 5 5.67 4 5.33 ...
##  $ pswq        : num  4.94 3.94 2.94 2.81 3.5 ...
##  $ gad         : num  1.86 2 1.57 2.86 1.57 ...
# you can use the describe() command on an entire dataframe (d) or just on a single variable
describe(d)
##               vars   n    mean      sd  median trimmed     mad  min     max
## X                1 326 3417.49 2141.80 3259.50 3294.35 2356.59 1.00 8803.00
## income*          2 326    2.41    0.86    2.00    2.39    1.48 1.00    4.00
## exercise_cat*    3 326    2.28    0.94    2.00    2.17    0.00 1.00    5.00
## covid_pos        4 326    0.23    1.14    0.00    0.00    0.00 0.00   12.00
## big5_ext         5 326    4.64    1.33    4.67    4.72    1.48 1.33    7.00
## pswq             6 326    2.87    0.87    2.81    2.85    1.02 1.12    4.94
## gad              7 326    1.53    0.61    1.43    1.42    0.42 1.00    4.00
##                 range  skew kurtosis     se
## X             8802.00  0.44    -0.35 118.62
## income*          3.00  0.35    -0.56   0.05
## exercise_cat*    4.00  1.14     1.51   0.05
## covid_pos       12.00  6.57    50.61   0.06
## big5_ext         5.67 -0.43    -0.58   0.07
## pswq             3.81  0.20    -0.79   0.05
## gad              3.00  1.73     3.22   0.03
# also use histograms to examine your continuous variables
hist(d$pswq)

hist(d$gad)

# last, use scatterplots to examine your continuous variables together
plot(d$pswq, d$gad)
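As an optional extra (a sketch), overlaying the least-squares line on the scatterplot makes the positive, roughly linear trend easier to see:

# abline() adds the fitted regression line on top of the existing plot
plot(d$pswq, d$gad)
abline(lm(gad ~ pswq, data=d))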

13 Run a Simple Regression

# to calculate standardized coefficients, we have to standardize our IV
d$pswq_std <- scale(d$pswq, center=T, scale=T)
hist(d$pswq_std)

# use the lm() command to run the regression
# dependent/outcome variable on the left, independent/predictor variable on the right
reg_model <- lm(gad ~ pswq_std, data = d)

14 Check Your Assumptions

14.1 Simple Regression Assumptions

  • Should have two measurements for each participant
  • Variables should be continuous and normally distributed
  • Outliers should be identified and removed
  • Relationship between the variables should be linear
  • Residuals should be normal and have constant variance

Note: we will not be evaluating whether our data meet these assumptions in this lab/homework; we’ll come back to them next week when we talk about multiple linear regression.

14.2 Create plots and view residuals

model.diag.metrics <- augment(reg_model)
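A quick optional look at what augment() produced (a sketch): each row now carries the model’s fitted value and residual for that case.

# .fitted is the predicted gad score; .resid is the distance from the line
head(model.diag.metrics)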

ggplot(model.diag.metrics, aes(x = pswq_std, y = gad)) +
  geom_point() +
  stat_smooth(method = lm, se = FALSE) +
  geom_segment(aes(xend = pswq_std, yend = .fitted), color = "red", linewidth = 0.3)
## `geom_smooth()` using formula = 'y ~ x'

14.3 Check linearity with Residuals vs Fitted plot

Our plot suggests that there is some minor non-linearity. In comparison to the good and problematic plots, our plot is closer to the ‘good’ plots. It is almost entirely horizontal, aside from some upward deviation on the left. The red line sticks pretty closely to the zero line compared to cases with extreme deviation. However, a bit of deviation is okay; just like with skewness and kurtosis, there’s a range we can work in before non-normality or non-linearity becomes a critical issue.

plot(reg_model, 1)

14.4 Check for outliers

The plots below both address leverage, or how much each data point is able to influence the regression line. Outliers are points that have undue influence on the regression line, the way that Bill Gates entering the room has an undue influence on the mean income.

The first plot, Cook’s distance, is a visualization of a score called (you guessed it) Cook’s distance, calculated for each case (aka row or participant) in the dataframe. Cook’s distance tells us how much the regression would change if the point was removed. Ideally, we want all points to have the same influence on the regression line, although we accept that there will be some variability. The cutoff for a high Cook’s distance score is .5 (not .05, which is our cutoff for statistical significance). For our data, some points do exert more influence than others but they’re generally equal, and none of them are close to the cutoff.

The second plot also includes the residuals in the examination of leverage. The standardized residuals are on the y-axis and leverage is on the x-axis; this shows us which points have high residuals (are far from the regression line) and high leverage. Points that have large residuals and high leverage are especially worrisome, because they are far from the regression line but are also exerting a large influence on it. The red line indicates the average residual across points with the same amount of leverage. As usual, we want this line to stay as close to the mean line (or the zero line) as possible.

As seen in the plots, there are no severe outliers in the Cook’s distance plot, and leverage in the Residuals vs Leverage plot is low. No points fall past the cutoff lines. The deviations from the mean line are not severe, and there are no serious outliers that need to be removed.

# Cook's distance
plot(reg_model, 4)

# Residuals vs Leverage
plot(reg_model, 5)
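Beyond the plots, the Cook’s distance cutoff discussed above can also be checked numerically (an optional sketch using base R):

# cooks.distance() returns one influence score per case
cd <- cooks.distance(reg_model)
max(cd) # the single most influential case
sum(cd > 0.5) # how many cases exceed the .5 cutoff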

14.5 Issues with My Data

Before interpreting our results, we assessed our variables to see if they met the assumptions for a simple linear regression. Analysis of a Residuals vs Fitted plot suggested that there is some minor non-linearity, but not enough to violate the assumption of linearity. We also checked Cook’s distance and a Residuals vs Leverage plot to detect outliers. One case had a large residual and above-average leverage but was below the recommended cutoff for Cook’s distance.

15 View Test Output

summary(reg_model)
## 
## Call:
## lm(formula = gad ~ pswq_std, data = d)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.11056 -0.29314 -0.07815  0.14154  2.31000 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  1.53374    0.02741   55.96   <2e-16 ***
## pswq_std     0.35700    0.02745   13.01   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4949 on 324 degrees of freedom
## Multiple R-squared:  0.343,  Adjusted R-squared:  0.341 
## F-statistic: 169.1 on 1 and 324 DF,  p-value: < 2.2e-16
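An optional addition before writing up (a sketch): confint() reports 95% confidence intervals for the intercept and slope, which can be useful alongside the point estimates above.

# 95% confidence intervals for the regression coefficients
confint(reg_model)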
# note for section below: to type lowercase Beta below (ß) you need to hold down Alt key and type 225 on numeric keypad. If that doesn't work you should be able to copy/paste it from somewhere else

16 Write Up Results

To test our hypothesis that worry (measured by the PSWQ) will significantly predict anxiety symptoms (measured by the GAD), and that the relationship will be positive, we used a simple linear regression to model the relationship between the variables. We confirmed that our data met the assumptions of a linear regression, checking the linearity of the relationship using a Residuals vs Fitted plot and checking for outliers using Cook’s distance and a Residuals vs Leverage plot. Note: we are skipping the assumptions of normality and homogeneity of variance for this assignment.

As predicted, we found that worry significantly predicted anxiety symptoms, Adj. R² = .34, F(1, 324) = 169.1, p < .001. The relationship between worry and anxiety symptoms was positive, ß = .36, t(324) = 13.01, p < .001 (refer to Figure 1). According to Cohen (1988), this constitutes a large effect size (r > .50).

References

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). New York, NY: Routledge Academic.