1 Loading Libraries

#install.packages("apaTables")
#install.packages("kableExtra")

library(psych) # for the describe() command and the corr.test() command
library(apaTables) # to create our correlation table
library(kableExtra) # to create our correlation table

2 Importing Data

d <- read.csv(file="Data/projectdata.csv", header=T)

# For the HW, import the project dataset you cleaned previously; this will be the dataset you'll use throughout the rest of the semester

3 State Your Hypothesis

We predict that anxiety, health, loneliness, and social support will all be correlated with each other. Additionally, we predict that social support will be negatively correlated with isolation, such that participants who report higher levels of social support will report lower isolation.

4 Check Your Variables

# you only need to check the variables you're using in the current analysis
# it's always a good idea to look at them to be sure that everything is correct
str(d)
## 'data.frame':    356 obs. of  7 variables:
##  $ X          : int  1 31 33 57 86 113 133 143 179 190 ...
##  $ urban_rural: chr  "city" "town" "city" "village" ...
##  $ mhealth    : chr  "none or NA" "none or NA" "none or NA" "anxiety disorder" ...
##  $ pas_covid  : num  3.22 4.22 3.22 4.56 3.67 ...
##  $ phq        : num  1.33 2.33 1.11 2.33 1.56 ...
##  $ isolation_a: num  2.25 2.5 1.75 1.25 3 1 1 1.5 2.75 2.75 ...
##  $ support    : num  2.5 2.5 3.67 4.17 1.33 ...
# Since we're focusing only on our continuous variables, we're going to subset them into their own dataframe. This will make some of the later steps easier.

d2 <- subset(d, select=c(pas_covid, phq, isolation_a, support))
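
# If you prefer base R indexing, the same dataframe can be built by selecting the
# columns by name; the commented-out line below is just an equivalent alternative
# to subset():

# d2 <- d[, c("pas_covid", "phq", "isolation_a", "support")]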

# You can use the describe() command on an entire dataframe (d) or just on a single variable (d$support)

describe(d2)
##             vars   n mean   sd median trimmed  mad  min  max range  skew
## pas_covid      1 356 3.19 0.69   3.22    3.19 0.66 1.33 5.00  3.67  0.02
## phq            2 356 1.52 0.54   1.33    1.43 0.33 1.00 3.78  2.78  1.65
## isolation_a    3 356 1.69 0.71   1.50    1.60 0.74 1.00 3.50  2.50  0.86
## support        4 356 3.86 0.88   4.00    3.95 0.74 1.00 5.00  4.00 -0.81
##             kurtosis   se
## pas_covid      -0.06 0.04
## phq             2.97 0.03
## isolation_a    -0.36 0.04
## support         0.12 0.05
# NOTE: Our phq variable has high kurtosis, which we'll ignore for the lab. You don't need to discuss univariate normality in the results write-ups for the labs/homework, but you will need to discuss it in your final project manuscript.

# also use histograms to examine your continuous variables

hist(d$pas_covid)

hist(d$phq)

hist(d$isolation_a)

hist(d$support)
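
# optional: to see all four histograms at once, par() can arrange them in a 2 x 2 panel
# (remember to reset the layout afterwards)

par(mfrow=c(2,2))
hist(d$pas_covid)
hist(d$phq)
hist(d$isolation_a)
hist(d$support)
par(mfrow=c(1,1)) # reset the plotting layout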

# last, use scatterplots to examine each pairing of your continuous variables

plot(d$pas_covid, d$phq)

plot(d$pas_covid, d$isolation_a)

plot(d$pas_covid, d$support)

plot(d$phq, d$isolation_a)

plot(d$phq, d$support)

plot(d$isolation_a, d$support)
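
# optional shortcut: pairs() draws every pairwise scatterplot at once from the subsetted
# dataframe, and pairs.panels() from the psych package adds histograms and correlations
# to the same figure

pairs(d2)

pairs.panels(d2)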

5 Check Your Assumptions

5.1 Pearson’s Correlation Coefficient Assumptions

  • Should have two measurements for each participant
  • Variables should be continuous and normally distributed
  • Outliers should be identified and removed
  • Relationship between the variables should be linear

5.1.1 Checking for Outliers

Note: You are not required to screen out outliers or take any action based on what you see here. This is something you will check and then discuss in your write-up.

# We are going to standardize (z-score) all of our variables and check each one for outliers (values more than 3 SD from the mean).

d2$pas_covid <- scale(d2$pas_covid, center=T, scale=T)
hist(d2$pas_covid)

sum(d2$pas_covid < -3 | d2$pas_covid > 3)
## [1] 0
d2$phq <- scale(d2$phq, center=T, scale=T)
hist(d2$phq)

sum(d2$phq < -3 | d2$phq > 3)
## [1] 10
d2$isolation_a <- scale(d2$isolation_a, center=T, scale=T)
hist(d2$isolation_a)

sum(d2$isolation_a < -3 | d2$isolation_a > 3)
## [1] 0
d2$support <- scale(d2$support, center=T, scale=T)
hist(d2$support)

sum(d2$support < -3 | d2$support > 3)
## [1] 2
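
# The four outlier counts above can also be computed in one pass; sapply() applies
# the same check to every column of the standardized dataframe

sapply(d2, function(x) sum(x < -3 | x > 3))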

5.2 Issues with My Data

Two of my variables meet all of the assumptions of Pearson’s correlation coefficient. One variable, phq (our health measure), had high kurtosis (2.97) and 10 outliers. Another variable, support (our social support measure), had 2 outliers. Outliers can distort the relationship between two variables and sway the correlation in their direction. These two variables, phq and support, also appear to have non-linear relationships with the other variables. Pearson’s r may underestimate the strength of a non-linear relationship and distort its direction. Any correlations involving my phq and support measures of health and social support should therefore be evaluated carefully.

[Make sure to revise the above paragraph for your HW.]
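
Not required for this lab, but if the outliers or non-linear relationships above are a concern, one option is to also run a rank-based (Spearman) correlation as a robustness check and compare it to Pearson’s r; corr.test() accepts a method argument for this:

# optional robustness check: Spearman (rank-based) correlations
corr.test(d2, method="spearman")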

6 Run a Single Correlation

corr_output <- corr.test(d2$isolation_a, d2$support)

7 View Single Correlation

corr_output
## Call:corr.test(x = d2$isolation_a, y = d2$support)
## Correlation matrix 
##       [,1]
## [1,] -0.61
## Sample Size 
## [1] 356
## These are the unadjusted probability values.
##   The probability values  adjusted for multiple tests are in the p.adj object. 
##      [,1]
## [1,]    0
## 
##  To see confidence intervals of the correlations, print with the short=FALSE option
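
# As the output notes, printing with short=FALSE shows the confidence interval for the correlation

print(corr_output, short=FALSE)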

8 Create a Correlation Matrix

  • Strong: Between |0.50| and |1|
  • Moderate: Between |0.30| and |0.49|
  • Weak: Between |0.10| and |0.29|
  • Trivial: Less than |0.10|

Remember, Pearson’s r is also an effect size!
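
If it helps to keep the benchmarks straight, here is a small helper (my own illustration, not part of the lab) that labels the size of an r value using the cutoffs above:

# hypothetical helper: label a correlation's effect size using Cohen's (1988) benchmarks
effect_size_label <- function(r) {
  r <- abs(r)
  if (r >= 0.50) "strong"
  else if (r >= 0.30) "moderate"
  else if (r >= 0.10) "weak"
  else "trivial"
}

effect_size_label(-0.61) # "strong"
effect_size_label(0.27)  # "weak"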

corr_output_m <- corr.test(d2)

9 View Test Output

corr_output_m
## Call:corr.test(x = d2)
## Correlation matrix 
##             pas_covid   phq isolation_a support
## pas_covid        1.00  0.39        0.27   -0.21
## phq              0.39  1.00        0.49   -0.39
## isolation_a      0.27  0.49        1.00   -0.61
## support         -0.21 -0.39       -0.61    1.00
## Sample Size 
## [1] 356
## Probability values (Entries above the diagonal are adjusted for multiple tests.) 
##             pas_covid phq isolation_a support
## pas_covid           0   0           0       0
## phq                 0   0           0       0
## isolation_a         0   0           0       0
## support             0   0           0       0
## 
##  To see confidence intervals of the correlations, print with the short=FALSE option
# Remember to report the p-values from the matrix that are ABOVE the diagonal
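
The apaTables package loaded at the top can also write an APA-style version of this matrix (like Table 1 below) straight to a Word file. A minimal sketch, using the unstandardized variables from d so the means and standard deviations stay in the original metric (the filename here is just an example):

# create an APA-style correlation table and save it as a Word document
apa.cor.table(subset(d, select=c(pas_covid, phq, isolation_a, support)),
              filename="Table1_correlations.doc")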

10 Write Up Results

To test our hypothesis that anxiety, health, loneliness, and social support would be correlated with one another, we calculated a series of Pearson’s correlation coefficients. Two of the variables (anxiety and loneliness) met the required assumptions of the test, with both meeting the standards of normality and containing no outliers. The other two variables, health and social support, had 10 and 2 outliers respectively, as well as non-linear relationships with the other variables, so any significant results involving health and social support should be interpreted carefully.

As predicted, we found that all four variables were significantly correlated (all ps < .001). The effect size of one correlation (support and loneliness) was strong (r = -.61; Cohen, 1988), three correlations were moderate (health and anxiety, r = .39; health and loneliness, r = .49; support and health, r = -.39), and two correlations were weak (loneliness and anxiety, r = .27; support and anxiety, r = -.21). Our second hypothesis, that isolation would be lower in participants who reported higher levels of social support, was also supported, as can be seen from the negative correlation between support and loneliness reported in Table 1.

[In your HW, revise the above two paragraphs to fit your results. Make sure to discuss ALL predicted correlations and whether each was significant.]

Table 1: Means, standard deviations, and correlations with confidence intervals

Variable        M     SD    1              2              3
1. anxiety      3.19  0.69
2. health       1.52  0.54  .39**
                            [.30, .48]
3. loneliness   1.69  0.71  .27**          .49**
                            [.18, .37]     [.41, .57]
4. support      3.86  0.88  -.21**         -.39**         -.61**
                            [-.31, -.11]   [-.47, -.30]   [-.67, -.54]

Note: M and SD are used to represent mean and standard deviation, respectively. Values in square brackets indicate the 95% confidence interval. The confidence interval is a plausible range of population correlations that could have caused the sample correlation.
* indicates p < .05. ** indicates p < .01.

References

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. New York, NY: Routledge Academic.