1 Loading Libraries

#install.packages("apaTables")
#install.packages("kableExtra")

library(psych) # for the describe() command and the corr.test() command
library(apaTables) # to create our correlation table
library(kableExtra) # to create our correlation table

2 Importing Data

d <- read.csv(file="Data/projectdata.csv", header=T)

# For HW, import your project dataset that you cleaned previously; this will be the dataset you'll use throughout the rest of the semester

3 State Your Hypothesis

I predict there will be a significant positive correlation between participants' narcissism and their ratings of how satisfied they are with life.

4 Check Your Variables

# you only need to check the variables you're using in the current analysis
# it's always a good idea to look at them to be sure that everything is correct
str(d)
## 'data.frame':    3137 obs. of  7 variables:
##  $ ResponseID: chr  "R_BJN3bQqi1zUMid3" "R_2TGbiBXmAtxywsD" "R_12G7bIqN2wB2N65" "R_39pldNoon8CePfP" ...
##  $ edu       : chr  "2 Currently in college" "5 Completed Bachelors Degree" "2 Currently in college" "2 Currently in college" ...
##  $ party_rc  : chr  "democrat" "independent" "apolitical" "apolitical" ...
##  $ swb       : num  4.33 4.17 1.83 5.17 3.67 ...
##  $ belong    : num  2.8 4.2 3.6 4 3.4 4.2 3.9 3.6 2.9 2.5 ...
##  $ usdream   : chr  "american dream is important and achievable for me" "american dream is important and achievable for me" "american dream is not important and maybe not achievable for me" "american dream is not important and maybe not achievable for me" ...
##  $ npi       : num  0.6923 0.1538 0.0769 0.0769 0.7692 ...
# Since we're focusing only on our continuous variables, we're going to subset them into their own dataframe. This will make some of the later steps easier.

d2 <- subset(d, select=c(npi, swb ))

# You can use the describe() command on an entire dataframe (d2) or just on a single variable (d2$swb)

describe(d2)
##     vars    n mean   sd median trimmed  mad min max range  skew kurtosis   se
## npi    1 3137 0.28 0.31   0.15    0.24 0.23   0   1     1  0.94    -0.68 0.01
## swb    2 3137 4.48 1.32   4.67    4.53 1.48   1   7     6 -0.36    -0.47 0.02
# NOTE: Our npi variable shows some positive skew, which we'll ignore for the lab. You don't need to discuss univariate normality in the results write-ups for the labs/homework, but you will need to discuss it in your final project manuscript.
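If you want to pull out just the skew and kurtosis values to discuss in your manuscript, here is an optional sketch; the column names come straight from the describe() output above:

# optional: grab only the skew and kurtosis columns from describe()
describe(d2)[, c("skew", "kurtosis")]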

# also use histograms to examine your continuous variables

# because we are looking at 2 variables, we will have 2 histograms. You may have a different number for your HW. Make as many as you need to reflect your hypothesis.

hist(d2$npi)

hist(d2$swb)

# last, use scatterplots to examine your continuous variables together, for each pairing

# because we are looking at 2 variables, we will have 1 pairing/plot. You may have more for your HW. Make as many as you need to reflect your hypothesis.

plot(d2$npi, d2$swb)
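If you want a quick visual check of the linearity assumption, you can optionally overlay a least-squares line on the scatterplot. This is just a sketch using base R's lm() and abline() with the d2 variables from above:

# optional: overlay a fit line to help eyeball whether the relationship looks linear
plot(d2$npi, d2$swb)
abline(lm(swb ~ npi, data=d2))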

5 Check Your Assumptions

5.1 Pearson’s Correlation Coefficient Assumptions

  • Should have 2 measurements for each participant.
  • Variables should be continuous and normally distributed.
  • Outliers should be identified and removed.
  • Relationship between the variables should be linear.

5.1.1 Checking for Outliers

Note: You are NOT REQUIRED to screen out outliers or take any action based on what you see here. This is something you will always check and then discuss in your write-up.

# We are going to standardize (z-score) both of our variables and check them for outliers.

d2$npi <- scale(d2$npi, center=T, scale=T)
hist(d2$npi)

sum(d2$npi < -3 | d2$npi > 3)
## [1] 0
d2$swb <- scale(d2$swb, center=T, scale=T)
hist(d2$swb)

sum(d2$swb < -3 | d2$swb > 3)
## [1] 0
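If you did decide to screen out outliers (again, not required here), one option is to keep only the cases whose z-scores fall within +/- 3 on both variables. This is just a sketch of that approach; the dataframe name d2_noout is a placeholder:

# optional: keep only rows with |z| <= 3 on both standardized variables
keep <- abs(d2$npi) <= 3 & abs(d2$swb) <= 3
d2_noout <- d2[which(keep), ]
nrow(d2_noout)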

5.2 Issues with My Data

Both of my variables meet all of the assumptions of Pearson’s correlation coefficient and have no outliers.

[Make sure to revise the above paragraph for your HW.]

6 Run a Single Correlation

corr_output <- corr.test(d2$npi, d2$swb)

7 View Single Correlation

corr_output
## Call:corr.test(x = d2$npi, y = d2$swb)
## Correlation matrix 
##      [,1]
## [1,] 0.04
## Sample Size 
## [1] 3137
## These are the unadjusted probability values.
##   The probability values  adjusted for multiple tests are in the p.adj object. 
##      [,1]
## [1,] 0.05
## 
##  To see confidence intervals of the correlations, print with the short=FALSE option
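As the output notes, you can also print the confidence intervals for the correlation. A minimal example, reusing the corr_output object created above:

# print the same corr.test() output, this time including confidence intervals
print(corr_output, short=FALSE)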

8 Create a Correlation Matrix

  • Strong: Between |0.50| and |1|
  • Moderate: Between |0.30| and |0.49|
  • Weak: Between |0.10| and |0.29|
  • Trivial: Less than |0.09|

Remember, Pearson’s r is also an effect size!

corr_output_m <- corr.test(d2)

9 View Test Output

corr_output_m
## Call:corr.test(x = d2)
## Correlation matrix 
##      npi  swb
## npi 1.00 0.04
## swb 0.04 1.00
## Sample Size 
## [1] 3137
## Probability values (Entries above the diagonal are adjusted for multiple tests.) 
##      npi  swb
## npi 0.00 0.05
## swb 0.05 0.00
## 
##  To see confidence intervals of the correlations, print with the short=FALSE option
# Remember to report the p-values from the matrix that are ABOVE the diagonal
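The apaTables package we loaded at the top can generate an APA-style correlation table like Table 1 in the write-up below. Here is one possible sketch; note that it uses a fresh, unstandardized subset (d2 was overwritten with z-scores above, so its means are now 0), and the dataframe name d3 and the filename are placeholders:

# rebuild the unstandardized subset so the table shows the raw means and SDs
d3 <- subset(d, select=c(npi, swb))
# write an APA-style correlation table (with confidence intervals) to a Word file
apa.cor.table(d3, filename="Table1_corr.doc", table.number=1)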

10 Write Up Results

To test the hypothesis that narcissism would be positively correlated with satisfaction with life, we calculated a Pearson’s correlation coefficient. Both variables met the required assumptions of the test: both were approximately normally distributed and contained no outliers. We found that the variables were not significantly correlated (p = .05). The effect size was trivial (r < .09; Cohen, 1988). The correlation coefficient and descriptive statistics are reported in Table 1.

Table 1: Means, standard deviations, and correlations with confidence intervals

Variable                   M     SD    1
1. Narcissism              0.28  0.31
2. Satisfaction with Life  4.48  1.32  .04*
                                       [.00, .07]

Note: M and SD are used to represent mean and standard deviation, respectively. Values in square brackets indicate the 95% confidence interval. The confidence interval is a plausible range of population correlations that could have caused the sample correlation.
* indicates p < .05. ** indicates p < .01.

References

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. New York, NY: Routledge Academic.