Correlation Lab
Loading Libraries
library(psych) # for the describe() command and the corr.test() command
library(apaTables) # to create our correlation table
library(kableExtra) # to create our correlation table
Importing Data
d <- read.csv(file="Data/labdata.csv", header=T)
#
# since we're focusing on our continuous variables, we're going to drop our categorical variables. this will make some stuff we're doing later easier.
d <- subset(d, select=-c(pet, mhealth))
State Your Hypotheses - PART OF YOUR WRITEUP
We predict that intolerance of uncertainty, depression scores, and perceived stress will be positively correlated with one another, and that all three mental health variables will be negatively correlated with self-esteem.
Check Your Assumptions
Pearson’s Correlation Coefficient Assumptions
- Should have two measurements for each participant for each variable (confirmed by earlier procedures – we dropped any participants with missing data)
- Variables should be continuous and normally distributed, or assessments of the relationship may be inaccurate (confirmed above – if issues, make a note and continue; an optional extra check is sketched after this list)
- Outliers should be identified and removed, or results will be inaccurate (will do below)
- Relationship between the variables should be linear, or they will not be detected (will do below)
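If you want one more visual check on normality beyond the histograms later in this lab, here is a minimal optional sketch using base R's Q-Q plot functions (only iou is shown; the same two lines work for any of the variables):
# optional extra normality check for a single variable (repeat for rse, phq, and pss)
qqnorm(d$iou) # points should fall close to the reference line if the variable is roughly normal
qqline(d$iou)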
Checking for Outliers
Outliers can mask potential effects and cause Type II error (concluding there is no relationship when there really is one, i.e., a false negative).
Note: You are not required to screen out outliers or take any action based on what you see here. This is something you will check and then discuss in your write-up.
# using the scale() command to standardize our variable, viewing a histogram, and then counting statistical outliers
d$iou <- scale(d$iou, center=T, scale=T)
hist(d$iou)
sum(d$iou < -3 | d$iou > 3)
[1] 0
d$rse <- scale(d$rse, center=T, scale=T)
hist(d$rse)
sum(d$rse < -3 | d$rse > 3)
[1] 0
d$phq <- scale(d$phq, center=T, scale=T)
hist(d$phq)
sum(d$phq < -3 | d$phq > 3)
[1] 0
d$pss <- scale(d$pss, center=T, scale=T)
hist(d$pss)
sum(d$pss < -3 | d$pss > 3)
[1] 0
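No scores fell beyond ±3 standard deviations, so nothing needs to be removed here. If any of the counts above had been non-zero, a minimal sketch of one way to drop those rows (a hypothetical cleanup step, not required for this lab):
# keep only rows whose standardized scores all fall within +/- 3 SD
keep <- abs(d$iou) <= 3 & abs(d$rse) <= 3 & abs(d$phq) <= 3 & abs(d$pss) <= 3
d <- d[as.vector(keep), ]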
Checking for Linear Relationships
Non-linear relationships cannot be detected by Pearson’s correlation (the type of correlation we’re doing here). This means that you may underestimate the relationship between a pair of variables if they have a non-linear relationship, and thus your understanding of what’s happening in your data will be inaccurate.
Visually check that relationships are linear and write a brief description of any potential nonlinearity. You will have to use your judgement. There are no penalties for answering ‘wrong’, so try not to stress out about it too much – just do your best.
# use scatterplots to examine your continuous variables together
plot(d$iou, d$rse)
plot(d$iou, d$phq)
plot(d$iou, d$pss)
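The three plots above pair iou with each of the other variables. To make sure every combination gets a look, here is a sketch of the remaining pairs (pairs(d) from base R is a one-line alternative that draws the full scatterplot grid):
# remaining pairwise scatterplots
plot(d$rse, d$phq)
plot(d$rse, d$pss)
plot(d$phq, d$pss)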
Check Your Variables
describe(d)
vars n mean sd median trimmed mad min max range skew kurtosis se
iou 1 1201 0 1 -0.18 -0.06 1.10 -1.70 2.70 4.41 0.51 -0.57 0.03
rse 2 1201 0 1 0.10 0.02 1.03 -2.26 1.91 4.17 -0.21 -0.72 0.03
phq 3 1201 0 1 -0.23 -0.09 1.14 -1.26 2.21 3.47 0.61 -0.74 0.03
pss 4 1201 0 1 0.05 -0.01 1.17 -2.04 2.15 4.19 0.07 -0.77 0.03
#
# also use histograms to examine your continuous variables
hist(d$iou)
hist(d$rse)
hist(d$phq)
hist(d$pss)
Issues with My Data - PART OF YOUR WRITEUP
Point out any non-linear relationships between variables, as well as skew in the individual variables.
Run Pearson’s Correlation
There are two ways to run Pearson’s correlation in R. You can calculate each correlation one-at-a-time using multiple commands, or you can calculate them all at once and report the scores in a matrix. The matrix output can be confusing at first, but it’s more efficient. We’ll do it both ways.
Run a Single Correlation
corr_output <- corr.test(d$iou, d$rse)
View Single Correlation
- Strong effect: Between |0.50| and |1|
- Moderate effect: Between |0.30| and |0.49|
- Weak effect: Between |0.10| and |0.29|
- Trivial effect: Less than |0.09|
corr_output
Call:corr.test(x = d$iou, y = d$rse)
Correlation matrix
[,1]
[1,] -0.66
Sample Size
[1] 1201
These are the unadjusted probability values.
The probability values adjusted for multiple tests are in the p.adj object.
[,1]
[1,] 0
To see confidence intervals of the correlations, print with the short=FALSE option
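As the last line of the output suggests, you can re-print the same object with the confidence interval included:
# optional: print the correlation again with its 95% confidence interval
print(corr_output, short=FALSE)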
Create a Correlation Matrix
# corr_output_m <- corr.test(DATAFRAME)
View Test Output
- Strong effect: Between |0.50| and |1|
- Moderate effect: Between |0.30| and |0.49|
- Weak effect: Between |0.10| and |0.29|
- Trivial effect: Less than |0.09|
# corr_output_m
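A minimal filled-in sketch of the commented template above, assuming you pass the whole data frame d; the adjust argument controls the multiple-comparison correction (psych's default is "holm"):
# run all pairwise correlations at once and print them with confidence intervals
corr_output_m <- corr.test(d, adjust="holm")
print(corr_output_m, short=FALSE)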
Write Up Results
Write up your results. Again, make sure to maintain an appropriate tone, and follow APA guidelines for reporting statistical results. I recommend following the below outline:
- Briefly restate your hypothesis
- Describe any issues with your data (you can copy/paste from above, just make sure everything flows).
- Report your results. Since we are reporting our results in a table, you do NOT have to list out each individual r-value. Instead, I recommend focusing only on your significant results, and including your p-value and interpretation of effect size (trivial, small, medium, or large; don’t forget the citation).
- Make sure to include a reference to Table 1 (created using the code below)
table_out <- apa.cor.table(d, filename = "table1.doc", table.number = 1)
#
table_out2 <- as.data.frame(table_out$table.body)
#
# use the code below to preview your table so you can make sure that you're putting the right variable names in the right places. after you have done so, put the hashtag back so it isn't showing up in your output
#
# table_out2
#
# you need to update this section with better descriptions of your variables. you can use the full name of the scale, or brief descriptions along with the scale abbreviation as I have.
# DO NOT change anything other than the variable descriptions! don't delete the extra quotes or comments. R needs them.
# note: the descriptions must be listed in the same order as the columns of d (iou, rse, phq, pss)
table_out2$Variable <- c("Intolerance of uncertainty (IOU)", "",
                         "Self-esteem (RSE-10)", "", "",
                         "Depression symptoms (PHQ-9)", "", "",
                         "Perceived stress (PSS)", "", "")
#
# you don't need to make any changes to the remainder of the code
as.data.frame(table_out2) %>%
  kbl(row.names = F,
      align = c("l", "c", "c", "c", "c", "c"),
      caption = paste("Table ", table_out$table.number, ": ", table_out$table.title, sep=""),
      format = "html", table.attr = "style='width: 75%;'") %>%
  kable_classic() %>%
  footnote(general = "M and SD are used to represent mean and standard deviation, respectively. Values in square brackets indicate the 95% confidence interval. The confidence interval is a plausible range of population correlations that could have caused the sample correlation.")
References
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. New York, NY: Routledge Academic.