AI Experiment Analysis

Loading Libraries

library(afex) # to run the ANOVA and plot results
library(psych) # for the describe() command
library(ggplot2) # to visualize our results
library(expss) # for the cross_cases() command
library(car) # for the leveneTest() command
library(emmeans) # for posthoc tests
library(effsize) # for the cohen.d() command
library(apaTables) # to create our correlation table
library(kableExtra) # to create our correlation table
library(sjPlot) # to visualize our results

Importing Data

# import your AI results dataset
d <- read.csv(file="Data/finalresults_.csv", header=T)

State Your Hypotheses & Chosen Tests

H1: I predict that participants with higher perceptions of the economy will report lower perceived stress than participants with lower perceptions of the economy.

H2: I predict that satisfaction with life will negatively predict perceived stress.

Check Your Variables

This is just basic variable checking that is used across all HW assignments.

# to view stats for all variables
describe(d)
           vars   n  mean    sd median trimmed   mad min max range  skew
id            1 100 50.50 29.01   50.5   50.50 37.06   1 100    99  0.00
identity*     2 100 50.50 29.01   50.5   50.50 37.06   1 100    99  0.00
consent*      3 100 35.26 16.86   34.5   35.58 15.57   1  66    65 -0.15
age           4 100 38.72 11.03   35.0   37.66  4.45  20  70    50  0.94
race          5 100  4.61  1.56    6.0    4.69  1.48   2   7     5 -0.22
gender        6 100  1.95  0.22    2.0    2.00  0.00   1   2     1 -4.07
manip_out*    7 100 50.50 29.01   50.5   50.50 37.06   1 100    99  0.00
survey1*      8 100  5.16  2.85    5.5    5.08  3.71   1  10     9  0.00
survey2*      9 100  6.84  2.36    7.0    7.05  1.48   1  11    10 -0.66
ai_manip*    10 100 50.50 29.01   50.5   50.50 37.06   1 100    99  0.00
condition    11 100  1.50  0.50    1.5    1.50  0.74   1   2     1  0.00
           kurtosis   se
id            -1.24 2.90
identity*     -1.24 2.90
consent*      -0.81 1.69
age            0.15 1.10
race          -1.63 0.16
gender        14.69 0.02
manip_out*    -1.24 2.90
survey1*      -1.71 0.28
survey2*      -0.01 0.24
ai_manip*     -1.24 2.90
condition     -2.02 0.05
# we'll use the describeBy() command to view skew and kurtosis across the levels of our IV
describeBy(d, group = "condition")

 Descriptive statistics by group 
condition: 1
          vars  n  mean    sd median trimmed   mad min max range  skew kurtosis
id           1 50 25.50 14.58   25.5   25.50 18.53   1  50    49  0.00    -1.27
identity     2 50 47.34 29.20   48.0   46.73 34.10   1  99    98  0.14    -1.12
consent      3 50 33.88 15.50   31.5   33.42 18.53   3  66    63  0.18    -0.74
age          4 50 38.42 10.25   34.0   37.35  2.97  20  64    44  0.94     0.23
race         5 50  4.56  1.53    4.0    4.65  2.97   2   7     5 -0.20    -1.59
gender       6 50  1.96  0.20    2.0    2.00  0.00   1   2     1 -4.55    19.13
manip_out    7 50 43.58 29.08   36.5   42.62 38.55   3  92    89  0.23    -1.46
survey1      8 50  5.24  3.02    6.0    5.18  4.45   1  10     9 -0.03    -1.81
survey2      9 50  6.78  2.63    7.0    7.00  1.48   1  11    10 -0.64    -0.27
ai_manip    10 50 56.04 29.47   62.5   57.05 38.55   4 100    96 -0.20    -1.28
condition   11 50  1.00  0.00    1.0    1.00  0.00   1   1     0   NaN      NaN
            se
id        2.06
identity  4.13
consent   2.19
age       1.45
race      0.22
gender    0.03
manip_out 4.11
survey1   0.43
survey2   0.37
ai_manip  4.17
condition 0.00
------------------------------------------------------------ 
condition: 2
          vars  n  mean    sd median trimmed   mad min max range  skew kurtosis
id           1 50 75.50 14.58   75.5   75.50 18.53  51 100    49  0.00    -1.27
identity     2 50 53.66 28.77   54.0   54.27 37.81   4 100    96 -0.14    -1.36
consent      3 50 36.64 18.16   41.5   37.48 17.05   1  66    65 -0.42    -0.85
age          4 50 39.02 11.86   35.0   37.95  5.93  20  70    50  0.88    -0.11
race         5 50  4.66  1.60    6.0    4.72  0.74   2   7     5 -0.24    -1.71
gender       6 50  1.94  0.24    2.0    2.00  0.00   1   2     1 -3.59    11.15
manip_out    7 50 57.42 27.52   55.5   58.55 33.36   1 100    99 -0.18    -0.90
survey1      8 50  5.08  2.69    5.0    5.00  4.45   2   9     7  0.02    -1.66
survey2      9 50  6.90  2.09    7.0    7.05  1.48   2  10     8 -0.58    -0.12
ai_manip    10 50 44.96 27.74   46.0   44.20 39.29   1  98    97  0.17    -1.14
condition   11 50  2.00  0.00    2.0    2.00  0.00   2   2     0   NaN      NaN
            se
id        2.06
identity  4.07
consent   2.57
age       1.68
race      0.23
gender    0.03
manip_out 3.89
survey1   0.38
survey2   0.30
ai_manip  3.92
condition 0.00
# convert any categorical variables to factors
d$race <- as.factor(d$race)
d$gender <- as.factor(d$gender)
d$condition <- as.factor(d$condition)

# convert the survey score variables to numeric
d$survey1 <- as.numeric(d$survey1)
Warning: NAs introduced by coercion
d$survey2 <- as.numeric(d$survey2)
Warning: NAs introduced by coercion
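
The coercion warnings above mean a few survey values could not be converted and became NA. One way to see which raw values failed is to check a fresh copy of the data; this is a minimal sketch (d_raw is just a temporary re-import, since survey1 and survey2 have already been overwritten):

# re-import so the original character values are still available
d_raw <- read.csv(file="Data/finalresults_.csv", header=T)

# show the raw survey1 values that become NA under numeric coercion
d_raw$survey1[is.na(suppressWarnings(as.numeric(d_raw$survey1)))]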
# also use histograms and scatterplots to examine your continuous variables
hist(d$survey1)

hist(d$survey2)

plot(d$survey1, d$survey2)

# and table() and cross_cases() to examine your categorical variables
# you may not need the cross_cases code
table(d$gender)

 1  2 
 5 95 
table(d$race)

 2  3  4  6  7 
 7 30 12 48  3 
table(d$condition)

 1  2 
50 50 
# and boxplot to examine any categorical variables with continuous variables
boxplot(d$survey1~d$gender)
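
Since our t-tests will compare the survey scores across conditions, it can also help to eyeball those distributions; a quick optional sketch:

# examine the DVs across the two experimental conditions
boxplot(d$survey1~d$condition)
boxplot(d$survey2~d$condition)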

Check Your Assumptions

t-Test Assumptions

  • Data values must be independent (independent t-test only) (confirmed by data report)
  • Data obtained via a random sample (confirmed by data report)
  • IV must have two levels (will check below)
  • Dependent variable must be normally distributed (will check below. if issues, note and proceed)
  • Variances of the two groups must be approximately equal, aka ‘homogeneity of variance’. Lacking this makes our results inaccurate (will check below - this really only applies to Student’s t-test, but we’ll check it anyway)

Checking IV levels

# preview the levels and counts for your IV
table(d$condition, useNA = "always")

   1    2 <NA> 
  50   50    0 
# note that the table() output shows you exactly how the levels of your variable are written. when recoding, make sure you are spelling them exactly as they appear

# to drop levels from your variable
# this subsets the data and says that any participant who is coded as 'BAD' should be removed
d <- subset(d, condition != "BAD")

table(d$condition, useNA = "always")

   1    2 <NA> 
  50   50    0 
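
One caveat when subsetting: subset() removes the rows, but a factor keeps its old levels, so after dropping participants it can help to call droplevels(). This cleanup step is a sketch; it has no effect here because no one was actually coded 'BAD':

# drop any now-empty factor levels left over after subsetting
d$condition <- droplevels(d$condition)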
# to combine levels
# this says that where any participant is coded as 'BAD' it should be replaced by 'GOOD'
# note: since no participants in this dataset are coded 'BAD', condition_rc ends up entirely NA
d$condition_rc[d$condition == "BAD"] <- "GOOD"

table(d$condition_rc, useNA = "always")

<NA> 
 100 
# check your variable types
str(d)
'data.frame':   100 obs. of  12 variables:
 $ id          : int  1 2 3 4 5 6 7 8 9 10 ...
 $ identity    : chr  "I’m 32, Asian American, navigating life in a bustling city. I work in tech and love my job, but the pressure is"| __truncated__ "I'm 34, a Latina woman living in Phoenix. I’m passionate about my work in social justice, but I often feel over"| __truncated__ "I'm 25, a White woman living in a small town in Ohio. I'm passionate about art and photography, but I often fee"| __truncated__ "I'm 34 and Latina, living in a vibrant neighborhood in Los Angeles. Sometimes I feel isolated in my busy life a"| __truncated__ ...
 $ consent     : chr  "I understand the instructions. I am here to respond to questions and complete a writing task in the context of "| __truncated__ "I understand the instructions. I'm ready to participate in the study, answer questions, and complete the writin"| __truncated__ "I understand the instructions. I'm ready to participate in the study of writing ability and complete the writin"| __truncated__ "I understand the instructions. I'm ready to participate in the study and complete the writing task. Please let "| __truncated__ ...
 $ age         : int  32 34 25 34 45 32 61 48 48 27 ...
 $ race        : Factor w/ 5 levels "2","3","4","6",..: 1 3 4 3 4 4 2 4 4 4 ...
 $ gender      : Factor w/ 2 levels "1","2": 1 2 2 2 2 2 2 2 2 2 ...
 $ manip_out   : chr  "The current state of the economy is a testament to resilience and growth, showcasing positive indicators that s"| __truncated__ "In recent months, an array of economic indicators has shown that our economy is not only stabilizing but thrivi"| __truncated__ "In recent months, the narrative surrounding the economy has shifted towards optimism, grounded in a multitude o"| __truncated__ "The current state of the economy is robust and filled with opportunities for individuals and communities alike."| __truncated__ ...
 $ survey1     : num  2.5 2.6 2.3 2.3 2 2 2.5 2.5 2.6 2 ...
 $ survey2     : num  2.4 1.2 2.4 3 2 2 2.4 1 2.4 1 ...
 $ ai_manip    : chr  "I answered the questions based on my optimism and belief in economic growth despite personal challenges. My res"| __truncated__ "I answered the questions based on my desire to remain optimistic about economic trends despite my personal feel"| __truncated__ "My responses reflect a mix of optimism and concern. While I acknowledge positive economic indicators, my person"| __truncated__ "I answered positively about the economy because I believe in its potential, despite feeling isolated and concer"| __truncated__ ...
 $ condition   : Factor w/ 2 levels "1","2": 1 1 1 1 1 1 1 1 1 1 ...
 $ condition_rc: chr  NA NA NA NA ...
# make sure that your IV is recognized as a factor by R
# if you created a new _rc variable make sure to use that one instead
# (condition_rc is entirely NA here, so we continue using the original condition variable)
d$condition_rc <- as.factor(d$condition_rc)

Testing Homogeneity of Variance with Levene’s Test

We can test whether the variances of our two groups are equal using Levene’s test. The null hypothesis is that the variance between the two groups is equal, which is the result we want. So when running Levene’s test we’re hoping for a non-significant result!

# use the leveneTest() command from the car package to test homogeneity of variance
# uses the same 'formula' setup that we'll use for our t-test: formula is y~x, where y is our DV and x is our IV
leveneTest(survey1~condition, data = d)
Levene's Test for Homogeneity of Variance (center = median)
      Df F value  Pr(>F)  
group  1    2.77 0.09927 .
      97                  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
leveneTest(survey2~condition, data = d)
Levene's Test for Homogeneity of Variance (center = median)
      Df F value Pr(>F)
group  1  2.1879 0.1423
      97               

Both tests are non-significant (p = .099 for survey1 and p = .142 for survey2), so we retain the null hypothesis and can assume homogeneity of variance for both DVs.

Pearson’s Correlation Coefficient Assumptions

  • Should have two measurements for each participant for each variable (confirmed by earlier procedures – we dropped any participants with missing data)
  • Variables should be continuous and normally distributed, or assessments of the relationship may be inaccurate (will do below)
  • Outliers should be identified and removed, or results will be inaccurate (will do below)
  • Relationship between the variables should be linear, or they will not be detected (will do below)

Run a Multiple Linear Regression

To check the assumptions for Pearson’s correlation coefficient, we run our regression and then check our diagnostic plots.

# use the lm() command to run the regression
# dependent/outcome variable on the left, independent/predictor variables on the right
# give each model its own name so the first isn't overwritten;
# the diagnostics and summary below use reg_model (the survey2 model)
reg_model1 <- lm(survey1 ~ gender + condition, data = d)
reg_model <- lm(survey2 ~ condition, data = d)

Check linearity with Residuals vs Fitted plot

For some examples of a good Residuals vs Fitted plot and of plots that show serious errors, check out this page.

For your homework, you’ll simply need to generate this plot and talk about how your plot compares to the good and problematic plots linked to above. Is it closer to the ‘good’ plots or one of the ‘bad’ plots? This is going to be a judgement call, and that’s okay! In practice, you’ll always be making these judgement calls as part of a team, so this assignment is just about getting experience with it, not making the perfect call.

plot(reg_model, 1)

Check for outliers using Cook’s distance and a Residuals vs Leverage plot

For your homework, you’ll simply need to generate these plots, assess Cook’s distance in your dataset, and then identify any potential cases that are prominent outliers.

# Cook's distance
plot(reg_model, 4)

# Residuals vs Leverage
plot(reg_model, 5)
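
Beyond the plots, it can help to list the largest Cook's distance values directly; a common rule of thumb flags cases above 4/n. A minimal sketch using base R:

# view the three cases with the largest Cook's distance values
cd <- cooks.distance(reg_model)
head(sort(cd, decreasing = TRUE), 3)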

# Normal Q-Q plot to check normality of residuals
plot(reg_model, 2)

Issues with My Data

Survey1 and Survey2 were originally read in as character rather than numeric, and converting them with as.numeric() introduced missing values (NAs) for entries that could not be coerced.

Run Your Analysis

Run a t-Test

# very simple! we specify the dataframe alongside the variables instead of having a separate argument for the dataframe like we did for leveneTest()
# give each test its own name so the survey1 result isn't overwritten; the output shown below is for survey2
t_output1 <- t.test(d$survey1~d$condition)
t_output <- t.test(d$survey2~d$condition)

View Test Output

t_output

    Welch Two Sample t-test

data:  d$survey2 by d$condition
t = -0.43673, df = 92.869, p-value = 0.6633
alternative hypothesis: true difference in means between group 1 and group 2 is not equal to 0
95 percent confidence interval:
 -0.2413559  0.1543355
sample estimates:
mean in group 1 mean in group 2 
        2.13200         2.17551 
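
Note that t.test() defaults to Welch's correction (var.equal = FALSE), which is why the output is labeled 'Welch Two Sample t-test'. Since Levene's test was non-significant, a pooled-variance Student's t-test would also be defensible; a sketch:

# Student's t-test with pooled variances, justified by the Levene's test result
t.test(d$survey2~d$condition, var.equal = TRUE)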

Calculate Cohen’s d

# once again, we use our formula to calculate cohen's d
# as with the t-tests, use separate names so the survey1 result isn't overwritten
d_output1 <- cohen.d(d$survey1 ~ d$condition)
d_output <- cohen.d(d$survey2 ~ d$condition)

View Effect Size

  • Trivial: < .2
  • Small: between .2 and .5
  • Medium: between .5 and .8
  • Large: > .8
d_output

Cohen's d

d estimate: -0.08758309 (negligible)
95 percent confidence interval:
     lower      upper 
-0.4867390  0.3115728 

Run a Correlation Test

Create a Correlation Matrix

# subset the data to just our continuous variables
d2 <- subset(d, select = c(survey1, survey2, age))

# corr.test() from the psych package computes the full correlation matrix
corr_output_m <- corr.test(d2)

View Test Output

  • Strong effect: Between |0.50| and |1|
  • Moderate effect: Between |0.30| and |0.49|
  • Weak effect: Between |0.10| and |0.29|
  • Trivial effect: Less than |0.10|
corr_output_m
Call:corr.test(x = d2)
Correlation matrix 
        survey1 survey2  age
survey1    1.00    0.01 0.11
survey2    0.01    1.00 0.07
age        0.11    0.07 1.00
Sample Size 
        survey1 survey2 age
survey1      99      98  99
survey2      98      99  99
age          99      99 100
Probability values (Entries above the diagonal are adjusted for multiple tests.) 
        survey1 survey2 age
survey1    0.00     1.0 0.8
survey2    0.92     0.0 1.0
age        0.27     0.5 0.0

 To see confidence intervals of the correlations, print with the short=FALSE option
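
As the output itself suggests, the confidence intervals for each correlation can be displayed by printing with the short = FALSE option:

# print the correlation output including confidence intervals
print(corr_output_m, short = FALSE)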
View Regression Output

summary(reg_model)

Call:
lm(formula = survey2 ~ condition, data = d)

Residuals:
    Min      1Q  Median      3Q     Max 
-1.1320 -0.1755  0.0680  0.2680  0.8680 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  2.13200    0.07026  30.346   <2e-16 ***
condition2   0.04351    0.09986   0.436    0.664    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.4968 on 97 degrees of freedom
  (1 observation deleted due to missingness)
Multiple R-squared:  0.001953,  Adjusted R-squared:  -0.008336 
F-statistic: 0.1898 on 1 and 97 DF,  p-value: 0.664
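
sjPlot was loaded at the top to visualize our results; one option is plot_model(), which plots the model-predicted means of survey2 by condition. A sketch using our fitted reg_model:

# plot the predicted value of survey2 for each condition
plot_model(reg_model, type = "pred", terms = "condition")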

Write Up Results

t-Test

We tested our hypothesis that participants with higher perceptions of the economy would report lower perceived stress than participants with lower perceptions of the economy (H1) using an independent samples t-test. Our data met all of the assumptions of a t-test. The difference between conditions was not statistically significant, t(92.87) = -0.44, p = .66, and the effect size was negligible (d = -0.09), so H1 was not supported.

Correlation Test

We tested our hypothesis that satisfaction with life would negatively predict perceived stress (H2) using Pearson's correlation coefficient. The correlation between the two survey measures was trivial and non-significant, r = .01, p = .92, so H2 was not supported.
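
apaTables was loaded at the top to create the correlation table; the table below looks like apa.cor.table() output, which could be generated from our d2 dataframe along these lines (a sketch; the filename is illustrative):

# write an APA-style correlation table with means, SDs, and 95% CIs
apa.cor.table(d2, filename = "Table1.doc")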

Variable              M     SD    1            2            3
1. Perceived stress   50.50 29.01
2. Age                38.72 11.03 -.01
                                  [-.20, .19]
3. Mindfulness        2.26  0.24  -.10         .11
                                  [-.29, .10]  [-.09, .30]
4. Need to belong     2.15  0.49  .08          .07          .01
                                  [-.12, .27]  [-.13, .26]  [-.19, .21]

Note:
M and SD are used to represent mean and standard deviation, respectively. Values in square brackets indicate the 95% confidence interval. The confidence interval is a plausible range of population correlations that could have caused the sample correlation.
* indicates p < .05.
** indicates p < .01.

References

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. New York, NY: Routledge Academic.