Correlation HW

Author

Quinlyn Bry

Loading Libraries

library(psych) # for the describe() command and the corr.test() command
library(apaTables) # to create our correlation table
library(kableExtra) # to create our correlation table

Importing Data

d <- read.csv(file="Data/mydata.csv", header=T)

# since we're focusing on our continuous variables, we're going to drop our categorical variables. this will make some stuff we're doing later easier.
d <- subset(d, select=-c(gender, relationship_status))

State Your Hypotheses - PART OF YOUR WRITEUP

I predict that the positive variables (conscientiousness and the social support measure) will be positively correlated with each other, that the negative variables (the UCLA Loneliness Scale (Adult) and neuroticism) will be positively correlated with each other, and that the positive variables will be negatively correlated with the negative variables.

State your hypotheses. Remember, you are looking at the correlations between all four of your continuous/quantitative variables. Depending on what you predict, you might need to write several sentences to describe all of the relationships. Make sure all four variables are included.

Check Your Assumptions

Pearson’s Correlation Coefficient Assumptions

  • Should have two measurements for each participant for each variable (confirmed by earlier procedures – we dropped any participants with missing data)
  • Variables should be continuous and normally distributed, or assessments of the relationship may be inaccurate (will do below)
  • Outliers should be identified and removed, or results will be inaccurate (will do below)
  • Relationship between the variables should be linear, or they will not be detected (will do below)

Checking for Outliers

Outliers can mask potential effects and cause Type II error (you assume there is no relationship when there really is one, i.e., a false negative).

Note: You are not required to screen out outliers or take any action based on what you see here. This is something you will check and then discuss in your write-up.

# using the scale() command to standardize our variable, viewing a histogram, and then counting statistical outliers
d$big5_con <- scale(d$big5_con, center=T, scale=T)
hist(d$big5_con)

sum(d$big5_con < -3 | d$big5_con > 3)
[1] 0
d$big5_neu <- scale(d$big5_neu, center=T, scale=T)
hist(d$big5_neu)

sum(d$big5_neu < -3 | d$big5_neu > 3)
[1] 0
d$isolation <- scale(d$isolation, center=T, scale=T)
hist(d$isolation)

sum(d$isolation < -3 | d$isolation > 3)
[1] 0
d$support <- scale(d$support, center=T, scale=T)
hist(d$support)

sum(d$support < -3 | d$support > 3)
[1] 0
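
No statistical outliers (standardized scores beyond ±3) appeared in any of the four variables. If any had, and we chose to screen them out, a minimal sketch of one approach (not required for this assignment, and assuming the standardized variables created above) could look like this:

# hypothetical example: keep only cases within +/- 3 SD on all four standardized variables
# (not run here, since no statistical outliers were detected above)
d <- subset(d, abs(big5_con) <= 3 & abs(big5_neu) <= 3 &
               abs(isolation) <= 3 & abs(support) <= 3)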

Checking for Linear Relationships

Non-linear relationships cannot be detected by Pearson’s correlation (the type of correlation we’re doing here). This means that you may underestimate the relationship between a pair of variables if they have a non-linear relationship, and thus your understanding of what’s happening in your data will be inaccurate.

Visually check that relationships are linear and write a brief description of any potential nonlinearity. You will have to use your judgement. There are no penalties for answering ‘wrong’, so try not to stress out about it too much – just do your best.

# use scatterplots to examine your continuous variables together
plot(d$big5_con, d$big5_neu)

plot(d$big5_con, d$isolation)

plot(d$big5_con, d$support)

plot(d$big5_neu, d$isolation)

plot(d$big5_neu, d$support)

plot(d$isolation, d$support)
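
As an optional alternative to plotting each pair separately, base R's pairs() command should display all of the pairwise scatterplots in a single panel:

# optional: view all pairwise scatterplots at once
pairs(d)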

Check Your Variables

describe(d)
          vars    n mean sd median trimmed  mad   min  max range  skew kurtosis
big5_con     1 1262    0  1   0.14    0.03 1.24 -3.21 1.81  5.02 -0.27    -0.30
big5_neu     2 1262    0  1   0.19    0.03 0.98 -2.23 1.73  3.96 -0.30    -0.76
isolation    3 1262    0  1  -0.18   -0.03 1.32 -1.37 1.61  2.98  0.16    -1.29
support      4 1262    0  1   0.10    0.06 1.04 -2.70 1.50  4.20 -0.43    -0.56
            se
big5_con  0.03
big5_neu  0.03
isolation 0.03
support   0.03
# also use histograms to examine your continuous variables
hist(d$big5_con)

hist(d$big5_neu)

hist(d$isolation)

hist(d$support)

Issues with My Data - PART OF YOUR WRITEUP

Make a note right here if you have any outliers. Make sure you describe what variable has the outliers, and how many there are.

Make a note here if you have any indications of non-linearity in your variable pairings.

Make a note here if you have any skew/kurtosis and describe which variables it occurs in.

Briefly describe any issues with your data and how they might impact the interpretation of your results. As usual, this should be written in an appropriate scientific tone.

Run Pearson’s Correlation

There are two ways to run Pearson's correlation in R. You can calculate each correlation one at a time using multiple commands, or you can calculate them all at once and report the scores in a matrix. The matrix output can be confusing at first, but it's more efficient. We'll do it both ways.

Run a Single Correlation

corr_output <- corr.test(d$big5_con, d$big5_neu)

View Single Correlation

Strong effect: Between |0.50| and |1|

Moderate effect: Between |0.30| and |0.49|

Weak effect: Between |0.10| and |0.29|

Trivial effect: Less than |0.09|
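
As an optional illustration of these benchmarks, a small helper function (a convenience sketch, not part of the psych package) can translate an r value into one of the labels above:

# hypothetical helper: label an r value using the benchmarks above (Cohen, 1988)
effect_size_label <- function(r) {
  r <- abs(r)
  if (r >= .50) "strong" else if (r >= .30) "moderate" else if (r >= .10) "weak" else "trivial"
}
effect_size_label(-.22)  # returns "weak"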

corr_output
Call:corr.test(x = d$big5_con, y = d$big5_neu)
Correlation matrix 
      [,1]
[1,] -0.22
Sample Size 
[1] 1262
These are the unadjusted probability values.
  The probability values  adjusted for multiple tests are in the p.adj object. 
     [,1]
[1,]    0

 To see confidence intervals of the correlations, print with the short=FALSE option
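
To view the confidence intervals mentioned at the end of the output, the corr.test object can be printed with the short=FALSE option:

# print the full corr.test output, including 95% confidence intervals
print(corr_output, short=FALSE)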

Create a Correlation Matrix

corr_output_m <- corr.test(d)

View Test Output

Strong effect: Between |0.50| and |1|

Moderate effect: Between |0.30| and |0.49|

Weak effect: Between |0.10| and |0.29|

Trivial effect: Less than |0.09|

corr_output_m
Call:corr.test(x = d)
Correlation matrix 
          big5_con big5_neu isolation support
big5_con      1.00    -0.22     -0.29    0.25
big5_neu     -0.22     1.00      0.48   -0.33
isolation    -0.29     0.48      1.00   -0.63
support       0.25    -0.33     -0.63    1.00
Sample Size 
[1] 1262
Probability values (Entries above the diagonal are adjusted for multiple tests.) 
          big5_con big5_neu isolation support
big5_con         0        0         0       0
big5_neu         0        0         0       0
isolation        0        0         0       0
support          0        0         0       0

 To see confidence intervals of the correlations, print with the short=FALSE option

Write Up Results

Write up your results. Again, make sure to maintain an appropriate tone, and follow APA guidelines for reporting statistical results. I recommend following the below outline:

  • Briefly restate your hypothesis
  • Describe any issues with your data (you can copy/paste from above, just make sure everything flows).
  • Report your results. Since we are reporting our results in a table, you do NOT have to list out each individual r-value. Instead, I recommend focusing only on your significant results, and including your p-value and interpretation of effect size (trivial, small, medium, or large; don’t forget the citation).
  • Make sure to include a reference to Table 1 (created using the code below)

We tested the hypothesis that the positive variables (conscientiousness and social support) would be positively correlated with each other, that the negative variables (neuroticism and loneliness/isolation) would be positively correlated with each other, and that the positive variables would be negatively correlated with the negative variables. We examined these relationships using Pearson's correlation, both as a single correlation test and as a correlation matrix. Our data met all of the assumptions of the test, so we proceeded with the analysis, and our hypotheses were supported (see Table 1). All correlations were significant at p < .01, with effect sizes ranging from weak (r = -.22) to strong (r = -.63; Cohen, 1988).
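
A minimal sketch of how Table 1 can be produced with the apaTables package loaded above (the filename is just an example; the row labels will match the column names in d unless they are renamed first):

# create an APA-style table of means, SDs, and correlations with 95% CIs
# (the .doc filename below is an example; omit it to print the table to the console)
apa.cor.table(d, filename = "Table1_Correlations.doc", table.number = 1)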

Table 1: Means, standard deviations, and correlations with confidence intervals
Variable                                          M  SD    1             2             3
1. Conscientiousness                          -0.00  1.00
2. Neuroticism                                -0.00  1.00  -.22**
                                                           [-.27, -.17]
3. Isolation (UCLA Loneliness Scale (Adult))  -0.00  1.00  -.29**        .48**
                                                           [-.34, -.24]  [.44, .52]
4. Support (Social Support Measure)            0.00  1.00  .25**         -.33**        -.63**
                                                           [.20, .30]    [-.38, -.28]  [-.66, -.59]
Note:
M and SD are used to represent mean and standard deviation, respectively. Values in square brackets indicate the 95% confidence interval. The confidence interval is a plausible range of population correlations that could have caused the sample correlation.
* indicates p < .05
** indicates p < .01.

References

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. New York, NY: Routledge Academic.