1 Loading Libraries

library(psych) # for the describe() command and the corr.test() command
library(apaTables) # to create our correlation table
library(kableExtra) # to create our correlation table
library(broom) # for the augment() command
library(ggplot2) # to visualize our results
## 
## Attaching package: 'ggplot2'
## The following objects are masked from 'package:psych':
## 
##     %+%, alpha

2 Importing Data

# import the dataset you cleaned previously
# this will be the dataset you'll use throughout the rest of the semester
# use the ARC data downloaded previously for lab
d <- read.csv(file="data/EAMMi2_final.csv", header=T)

3 Correlation: State Your Hypothesis

I predict that mindfulness, subjective well-being, efficacy, and perceived stress will all be correlated with each other. Furthermore, I predict that perceived stress will be lower in participants who are higher in mindfulness, higher in subjective well-being, and higher in efficacy.

4 Correlation: Check Your Variables

# you only need to check the variables you're using in the current analysis
# although you checked them previously, it's always a good idea to look them over again and be sure that everything is correct
str(d)
## 'data.frame':    3182 obs. of  6 variables:
##  $ race_rc : chr  "white" "white" "white" "other" ...
##  $ gender  : chr  "f" "m" "m" "f" ...
##  $ stress  : num  3.3 3.6 3.3 3.2 3.5 2.9 3.2 3 2.9 3.2 ...
##  $ swb     : num  4.33 4.17 1.83 5.17 3.67 ...
##  $ efficacy: num  3.4 3.4 2.2 2.8 3 2.4 2.3 3 3 3.7 ...
##  $ mindful : num  6.6 7.2 6.8 6.8 5.8 ...
# since we're focusing on our continuous variables, we're going to subset them into their own dataframe. this will make some stuff we're doing later easier.
cont <- subset(d, select=c(stress, swb, efficacy, mindful))

# you can use the describe() command on an entire dataframe (d) or just on a single variable (d$stress)
describe(cont)
##          vars    n mean   sd median trimmed  mad min  max range  skew kurtosis
## stress      1 3175 3.27 0.41   3.30    3.26 0.44   1 5.00  4.00 -0.16     2.67
## swb         2 3178 4.47 1.32   4.67    4.53 1.48   1 7.00  6.00 -0.36    -0.46
## efficacy    3 3176 3.13 0.45   3.10    3.13 0.44   1 4.00  3.00 -0.29     0.63
## mindful     4 3173 5.29 0.84   5.27    5.29 0.79   3 7.87  4.87  0.06    -0.13
##            se
## stress   0.01
## swb      0.02
## efficacy 0.01
## mindful  0.01
# our stress variable has high kurtosis (2.67), which I'll ignore. you don't need to discuss univariate normality in the results write-ups for the labs/homework, but you will need to discuss it in your final manuscript

# also use histograms to examine your continuous variables
hist(d$stress)

hist(d$swb)

hist(d$efficacy)

hist(d$mindful)

# last, use scatterplots to examine your continuous variables together
plot(d$stress, d$swb)

plot(d$stress, d$efficacy)

plot(d$stress, d$mindful)

plot(d$swb, d$efficacy)

plot(d$swb, d$mindful)

plot(d$efficacy, d$mindful)
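
If you'd like to see all of these pairwise scatterplots at once, base R's pairs() command will draw them in a single grid. This is an optional extra, using the cont dataframe we created above:

# optional: draw every pairwise scatterplot in one grid
pairs(cont)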

5 Correlation: Check Your Assumptions

5.1 Pearson’s Correlation Coefficient Assumptions

  • Should have two measurements for each participant
  • Variables should be continuous and normally distributed
  • Outliers should be identified and removed
  • Relationship between the variables should be linear

5.1.1 Checking for Outliers

Note: You are not required to screen out outliers or take any action based on what you see here. This is something you will check and then discuss in your write-up.

d$stress_std <- scale(d$stress, center=T, scale=T)
hist(d$stress_std)

sum(d$stress_std < -3 | d$stress_std > 3)
## [1] NA
d$swb_std <- scale(d$swb, center=T, scale=T)
hist(d$swb_std)

sum(d$swb_std < -3 | d$swb_std > 3)
## [1] NA
d$efficacy_std <- scale(d$efficacy, center=T, scale=T)
hist(d$efficacy_std)

sum(d$efficacy_std < -3 | d$efficacy_std > 3)
## [1] NA
d$mindful_std <- scale(d$mindful, center=T, scale=T)
hist(d$mindful_std)

sum(d$mindful_std < -3 | d$mindful_std > 3)
## [1] NA
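
Note: sum() returns NA whenever the values being added include an NA, and our standardized variables inherit the missing values from the original variables. Adding na.rm=TRUE tells sum() to skip the missing values and return the actual outlier count. A quick sketch of the fix (output omitted here):

# count outliers while skipping missing values
sum(d$stress_std < -3 | d$stress_std > 3, na.rm=TRUE)
sum(d$swb_std < -3 | d$swb_std > 3, na.rm=TRUE)
sum(d$efficacy_std < -3 | d$efficacy_std > 3, na.rm=TRUE)
sum(d$mindful_std < -3 | d$mindful_std > 3, na.rm=TRUE)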

5.2 Correlation: Issues with My Data

The stress variable has high kurtosis (2.67), but the outlier check in the code did not flag any outliers. Instead, each sum() command returned [1] NA rather than [1] and a number; as explained above, this is because the variables contain missing values, which propagate through sum() unless na.rm=TRUE is added. When assessing the histograms for each of my variables, it appears that there are outliers present, so that is something to consider when looking at my data and plots. Some of my correlation scatterplots also show strange relationships. For example, the relationship between efficacy and stress does not look clearly linear; the data are piled in the middle of the plot without a clear start and end point, which makes the pattern hard to interpret. The plot between stress and subjective well-being displays a similar pattern.

6 Correlation: Create a Correlation Matrix

corr_output_m <- corr.test(cont)

7 Correlation: View Test Output

corr_output_m
## Call:corr.test(x = cont)
## Correlation matrix 
##          stress   swb efficacy mindful
## stress     1.00 -0.12    -0.04    0.25
## swb       -0.12  1.00     0.40   -0.29
## efficacy  -0.04  0.40     1.00   -0.25
## mindful    0.25 -0.29    -0.25    1.00
## Sample Size 
##          stress  swb efficacy mindful
## stress     3175 3174     3171    3168
## swb        3174 3178     3174    3172
## efficacy   3171 3174     3176    3168
## mindful    3168 3172     3168    3173
## Probability values (Entries above the diagonal are adjusted for multiple tests.) 
##          stress swb efficacy mindful
## stress     0.00   0     0.04       0
## swb        0.00   0     0.00       0
## efficacy   0.04   0     0.00       0
## mindful    0.00   0     0.00       0
## 
##  To see confidence intervals of the correlations, print with the short=FALSE option
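
As the output notes, you can re-print the test object with the short=FALSE option to see the confidence intervals for each correlation (output omitted here):

# print the full corr.test() output, including confidence intervals
print(corr_output_m, short=FALSE)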

8 Correlation: Write Up Results

To test my hypothesis that stress, subjective well-being, efficacy, and mindfulness would be correlated with one another, I calculated a series of Pearson's correlation coefficients. Some of my variables had issues meeting the assumptions. The stress variable had high kurtosis (2.67), and the histograms suggest that there are outliers present; however, the outlier-count code returned NA for each variable because of missing values in the data. When looking at the scatterplots between stress and subjective well-being and between stress and efficacy, it is unclear whether the relationships are linear because the data cluster towards the middle of the plot.

All four variables were significantly correlated with one another. All of the correlations had p values < .001 except for the correlation between stress and efficacy, which had a p value of .04. Because this p value is close to the cutoff of p < .05, its significance should be evaluated carefully. Most of the effect sizes for the correlations were small (rs < .3; Cohen, 1988), but there was a medium effect size for the correlation between efficacy and subjective well-being (r = .40; Cohen, 1988) and a trivial effect size between stress and efficacy (r < .1; Cohen, 1988). This test partially supported my second hypothesis that stress would be lower in participants who are higher in mindfulness, subjective well-being, and efficacy. Perceived stress was weakly negatively correlated with subjective well-being and with efficacy, but weakly positively correlated with mindfulness. Perhaps this suggests that mindfulness is used more often when stress is present, rather than mindfulness decreasing stress.

Table 1: Means, standard deviations, and correlations with confidence intervals

Variable                  M     SD    1             2             3
1. Perceived stress       3.27  0.41
2. Subjective well-being  4.47  1.32  -.12**
                                      [-.15, -.09]
3. Efficacy               3.13  0.45  -.04*         .40**
                                      [-.07, -.00]  [.37, .43]
4. Mindfulness            5.29  0.84  .25**         -.29**        -.25**
                                      [.22, .29]    [-.32, -.25]  [-.28, -.21]

Note: M and SD are used to represent mean and standard deviation, respectively. Values in square brackets indicate the 95% confidence interval. The confidence interval is a plausible range of population correlations that could have caused the sample correlation. * indicates p < .05. ** indicates p < .01.
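
For reference, Table 1 follows the format produced by the apaTables package we loaded at the start. A minimal sketch of how a table like this can be generated from our cont dataframe (the filename is just an example):

# build an APA-style correlation table and save it as a Word document
apa.cor.table(cont, filename="table1.doc")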

Correlation: References

Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. New York, NY: Routledge Academic.

9 Simple Regression: State Your Hypothesis

I hypothesize that there will be a significant negative correlation between perceived stress and efficacy.

10 Simple Regression: Check Your Variables

# you only need to check the variables you're using in the current analysis
# although you checked them previously, it's always a good idea to look them over again and be sure that everything is correct
str(d)
## 'data.frame':    3182 obs. of  10 variables:
##  $ race_rc     : chr  "white" "white" "white" "other" ...
##  $ gender      : chr  "f" "m" "m" "f" ...
##  $ stress      : num  3.3 3.6 3.3 3.2 3.5 2.9 3.2 3 2.9 3.2 ...
##  $ swb         : num  4.33 4.17 1.83 5.17 3.67 ...
##  $ efficacy    : num  3.4 3.4 2.2 2.8 3 2.4 2.3 3 3 3.7 ...
##  $ mindful     : num  6.6 7.2 6.8 6.8 5.8 ...
##  $ stress_std  : num [1:3182, 1] 0.0826 0.8178 0.0826 -0.1625 0.5728 ...
##   ..- attr(*, "scaled:center")= num 3.27
##   ..- attr(*, "scaled:scale")= num 0.408
##  $ swb_std     : num [1:3182, 1] -0.104 -0.23 -1.994 0.526 -0.608 ...
##   ..- attr(*, "scaled:center")= num 4.47
##   ..- attr(*, "scaled:scale")= num 1.32
##  $ efficacy_std: num [1:3182, 1] 0.611 0.611 -2.057 -0.723 -0.278 ...
##   ..- attr(*, "scaled:center")= num 3.13
##   ..- attr(*, "scaled:scale")= num 0.45
##  $ mindful_std : num [1:3182, 1] 1.553 2.265 1.79 1.79 0.604 ...
##   ..- attr(*, "scaled:center")= num 5.29
##   ..- attr(*, "scaled:scale")= num 0.843
# you can use the describe() command on an entire dataframe (d) or just on a single variable
describe(d)
##              vars    n mean   sd median trimmed  mad   min  max range  skew
## race_rc*        1 3173 5.53 2.13   7.00    5.88 0.00  1.00 7.00  6.00 -0.98
## gender*         2 3178 1.28 0.49   1.00    1.21 0.00  1.00 3.00  2.00  1.40
## stress          3 3175 3.27 0.41   3.30    3.26 0.44  1.00 5.00  4.00 -0.16
## swb             4 3178 4.47 1.32   4.67    4.53 1.48  1.00 7.00  6.00 -0.36
## efficacy        5 3176 3.13 0.45   3.10    3.13 0.44  1.00 4.00  3.00 -0.29
## mindful         6 3173 5.29 0.84   5.27    5.29 0.79  3.00 7.87  4.87  0.06
## stress_std      7 3175 0.00 1.00   0.08   -0.01 1.09 -5.55 4.25  9.80 -0.16
## swb_std         8 3178 0.00 1.00   0.15    0.04 1.12 -2.62 1.91  4.54 -0.36
## efficacy_std    9 3176 0.00 1.00  -0.06    0.01 0.99 -4.72 1.95  6.67 -0.29
## mindful_std    10 3173 0.00 1.00  -0.03   -0.01 0.94 -2.72 3.06  5.77  0.06
##              kurtosis   se
## race_rc*        -0.68 0.04
## gender*          0.88 0.01
## stress           2.67 0.01
## swb             -0.46 0.02
## efficacy         0.63 0.01
## mindful         -0.13 0.01
## stress_std       2.67 0.02
## swb_std         -0.46 0.02
## efficacy_std     0.63 0.02
## mindful_std     -0.13 0.02
# also use histograms to examine your continuous variables
hist(d$stress)

hist(d$efficacy)

# last, use scatterplots to examine your continuous variables together
plot(d$stress, d$efficacy)

11 Simple Regression: Run a Simple Regression

# to calculate standardized coefficients, we have to standardize our IV
d$stress_std <- scale(d$stress, center=T, scale=T)

# use the lm() command to run the regression
# dependent/outcome variable on the left, independent/predictor variable on the right
reg_model <- lm(efficacy ~ stress_std, data = d)
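
One thing to keep in mind: standardizing only the predictor gives a slope that is still in the raw units of the outcome. If you want a fully standardized coefficient, one option is to standardize the outcome as well. A quick optional sketch (the model name is just illustrative, and this is not a required step for the lab):

# standardize the outcome too, so the slope is in standard-deviation
# units of both variables (illustrative, not required for this lab)
reg_model_std <- lm(scale(efficacy) ~ stress_std, data = d)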

12 Simple Regression: Check Your Assumptions

12.1 Simple Regression Assumptions

  • Should have two measurements for each participant
  • Variables should be continuous and normally distributed
  • Outliers should be identified and removed
  • Relationship between the variables should be linear
  • Residuals should be normal and have constant variance

Note: we will not be evaluating whether our data meets these assumptions in this lab/homework – we’ll come back to them next week when we talk about multiple linear regression

12.2 Simple Regression: Create plots and view residuals

model.diag.metrics <- augment(reg_model)

# draw the raw data, the regression line, and each residual as a red segment
ggplot(model.diag.metrics, aes(x = stress_std, y = efficacy)) +
  geom_point() +
  stat_smooth(method = lm, se = FALSE) +
  geom_segment(aes(xend = stress_std, yend = .fitted), color = "red", linewidth = 0.3)
## `geom_smooth()` using formula = 'y ~ x'
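
To actually view the residuals, you can peek at the dataframe that augment() created. Each row holds the original values plus diagnostic columns such as .fitted and .resid (output omitted here):

# view the first few rows, including fitted values (.fitted) and residuals (.resid)
head(model.diag.metrics)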

12.3 Simple Regression: Check linearity with Residuals vs Fitted plot

This plot (below) shows the residuals for each case and the fitted line. The red line is the average residual at each point along the fitted values. If the assumption of linearity is met, the red line should be horizontal. This indicates that the residuals average to around zero. You can see that for this lab, the plot shows some non-linearity because there are more datapoints below the regression line than there are above it. Thus, there are some negative residuals that don’t have positive residuals to cancel them out. However, a bit of deviation is okay – just like with skewness and kurtosis, there’s a range that we can work in before non-normality or non-linearity becomes a critical issue.

For some examples of a good Residuals vs Fitted plot and ones that show serious errors, check out this page. I’ve included the images in our video and talk about them more in-depth there. But to summarize quickly, you can see the first case has a plot in which the red line sticks pretty closely to the zero line, while the other cases show some serious deviation. Ours is much closer to the ‘good’ plot than it is to the ‘serious issues’ plots. So we’ll consider our data okay and proceed with our analysis. Obviously, this is quite subjective. I’ll talk a bit about why this is in the video, but the key takeaway is that these evaluations are closely tied to the context of our sample, our data, and what we’re studying. It’s almost always a judgement call.

You’ll notice in the bottom right corner, there are some points with numbers included: these are cases or participants (indicated by row number) who have the most influence on the regression line (and so they might be outliers). We’ll talk more about outliers in the next section.

To summarize: our plot suggests there is some minor non-linearity. For your homework, you’ll simply need to generate this plot and talk about how your plot compares to the good and problematic plots linked to above. Is it closer to the ‘good’ plots or one of the ‘bad’ plots? This is going to be a judgement call, and that’s okay! In practice, you’ll always be making these judgement calls as part of a team, so this assignment is just about getting experience with it, not making the perfect call.

plot(reg_model, 1)

12.4 Simple Regression: Check for outliers

The plots below both address leverage, or how much each data point is able to influence the regression line. Outliers are points that have undue influence on the regression line, the way that Bill Gates entering the room has an undue influence on the mean income.

The first plot, Cook’s distance, is a visualization of a score called (you guessed it) Cook’s distance, calculated for each case (aka row or participant) in the dataframe. Cook’s distance tells us how much the regression would change if the point were removed. Ideally, we want all points to have the same influence on the regression line, although we accept that there will be some variability. The cutoff for a high Cook’s distance score is .5 (not .05, which is our cutoff for statistical significance). For our data, some points do exert more influence than others, but the influence is fairly even overall, and none of the points are close to the cutoff.

The second plot also includes the residuals in the examination of leverage. The standardized residuals are on the y-axis and leverage is on the x-axis; this shows us which points have high residuals (are far from the regression line) and high leverage. Points that have large residuals and high leverage are especially worrisome, because they are far from the regression line but are also exerting a large influence on it. The red line indicates the average residual across points with the same amount of leverage. As usual, we want this line to stay as close to the mean line (or the zero line) as possible.

Because the leverage in our plot is low, part of it is actually cut off! If you check the first set of plots on this page (note that Residuals vs Leverage is the fourth in the grid) you can see there are curved red lines in the corners of the Residuals vs Leverage plots. This is the .5 cutoff for Cook’s distance, and so any point appearing past these lines is a serious outlier that needs to be removed. On this page you can also see Residuals vs Leverage plots with severe deviations from the mean line, which makes our deviations appear much less serious.

Our data doesn’t have any severe outliers. For your homework, you’ll simply need to generate these plots, assess Cook’s distance in your dataset, and then identify any potential cases that are prominent outliers. Since we have some cutoffs, this process is a bit less subjective than some of the other assessments we’ve done here, which is a nice change!
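
Since we have a numeric cutoff, you can also check it directly instead of eyeballing the plot. Here’s a quick optional sketch using base R’s cooks.distance() command (the object name is just an example, and output is omitted):

# calculate Cook's distance for every case in the model
cooks <- cooks.distance(reg_model)

# count how many cases exceed the .5 cutoff discussed above
sum(cooks > .5, na.rm=TRUE)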

# Cook's distance
plot(reg_model, 4)

# Residuals vs Leverage
plot(reg_model, 5)

12.5 Simple Regression: Issues with My Data

Before analyzing the results, I assessed whether my variables fit the assumptions for a simple linear regression. The Residuals vs Fitted plot showed signs of slight non-linearity; however, the deviation is not enough to violate the assumption of linearity. When checking for outliers using Cook’s distance and a Residuals vs Leverage plot, I found two data points that showed large residuals and above-average leverage, but both are below the recommended cutoff for Cook’s distance. There was also one data point with low residuals and above-average leverage, but it also remains below the recommended cutoff for Cook’s distance.

13 Simple Regression: View Test Output

summary(reg_model)
## 
## Call:
## lm(formula = efficacy ~ stress_std, data = d)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -2.21771 -0.24372 -0.03139  0.28916  0.94670 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  3.124558   0.007980 391.547   <2e-16 ***
## stress_std  -0.016771   0.007978  -2.102   0.0356 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4494 on 3169 degrees of freedom
##   (11 observations deleted due to missingness)
## Multiple R-squared:  0.001392,   Adjusted R-squared:  0.001077 
## F-statistic: 4.419 on 1 and 3169 DF,  p-value: 0.03562
# note for section below: the symbol for a standardized coefficient is the lowercase Greek beta (β). To type the similar-looking ß, hold down the Alt key and type 225 on the numeric keypad (strictly speaking this produces the German eszett, which is often used as a stand-in for beta). If that doesn't work you should be able to copy/paste the symbol from somewhere else
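
If you want the exact R-squared value for your write-up rather than reading it off the printed summary, you can pull it straight out of the summary object (a quick optional sketch, output omitted):

# extract the exact R-squared value from the model summary
summary(reg_model)$r.squared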

14 Simple Regression: Write Up Results

To test my hypothesis that there would be a significant negative correlation between perceived stress and efficacy, I performed a simple linear regression to analyze the relationship between the two variables. After checking linearity using a Residuals vs Fitted plot and checking for outliers using Cook’s distance and a Residuals vs Leverage plot, I determined that my data met the assumptions of linear regression.

As I hypothesized, I found a significant relationship between perceived stress and efficacy, R2 = .001, F(1, 3169) = 4.419, p = .036. The relationship between perceived stress and efficacy was negative, ß = -0.02, t(3169) = -2.102, p = .036. According to Cohen (1988), this constitutes a trivial effect size (< .1).

Simple Regression: References

Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. New York, NY: Routledge Academic.