Note: This document was prepared using versions 3.4.3 and 1.1.414 of R and RStudio, respectively.
Project Components
The study described in this document is part of a larger project examining the relationship between mindfulness and cognition. In addition to this script, the following project components are publicly available:
Abstract
This study investigated the relationship between trait mindfulness and the use of various cognitive heuristics and biases. Participants completed a measure of trait mindfulness, followed by a series of tasks designed to measure the sunk cost bias, availability heuristic, representativeness heuristic and base rate neglect, anchoring heuristic, and belief bias.
Introduction
This document outlines the analyses from the Trait Mindfulness and Cognition Study. The purpose of this study was to investigate whether high levels of trait mindfulness would be associated with lower rates of cognitive biases and heuristic use.
Participants were recruited from Western’s psychology research participant pool (i.e. SONA) and via Amazon’s Mechanical Turk (i.e. MTurk). At the beginning of the study, all participants were asked to complete the Five Facet Mindfulness Questionnaire-18 (FFMQ; Baer et al., 2008; Medvedev et al., 2018). Participants were then randomly assigned to complete one of two sets of cognitive tasks:
Set 1:
- Letter Availability Task
- Base Rates Representativeness Task
- Anchoring Task
Set 2:
- Famous Names Task
- Resistance to Sunk Costs Task
- Belief Bias Task
If there is a negative relationship between trait mindfulness and cognitive heuristic use, high scores on the FFMQ should be predictive of reduced bias and heuristic use on the tasks described above.
Analysis Prep
Import Data File
AllData <- read.csv("TraitData2.csv", header = TRUE, quote = "")
This file contains all of the data collected in this study.
Some notes regarding the names of variables:
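As a quick optional check (not part of the original script), the relevant column names can be listed with a simple pattern match; for example, to see all of the FFMQ item columns:
# List the FFMQ item columns (optional check):
grep("^FFMQ", names(AllData), value = TRUE)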
Calculate Outcome Scores
Before we begin our analyses, we’ll need to convert the raw data into usable outcome scores:
Five Facet Mindfulness Questionnaire
Reverse score items:
# Identify FFMQ items to be reverse scored:
FFMQRs <- c("FFMQ_2", "FFMQ_4", "FFMQ_5", "FFMQ_7", "FFMQ_8", "FFMQ_10", "FFMQ_11", "FFMQ_15", "FFMQ_17")
# Reverse score FFMQ items:
FFMQRev <- 6 - AllData[ , FFMQRs]
- FFMQ Total Scores:
AllData$FFMQTotal <- rowSums(cbind(AllData$FFMQ_1, FFMQRev$FFMQ_2, AllData$FFMQ_3, FFMQRev$FFMQ_4, FFMQRev$FFMQ_5, AllData$FFMQ_6, FFMQRev$FFMQ_7, FFMQRev$FFMQ_8, AllData$FFMQ_9, FFMQRev$FFMQ_10, FFMQRev$FFMQ_11, AllData$FFMQ_12, AllData$FFMQ_13, AllData$FFMQ_14, FFMQRev$FFMQ_15, AllData$FFMQ_16, FFMQRev$FFMQ_17, AllData$FFMQ_18), na.rm = TRUE)
- FFMQ-Observing Scores:
AllData$FFMQOB <- rowSums(cbind(AllData$FFMQ_6, AllData$FFMQ_13, AllData$FFMQ_18), na.rm = TRUE)
- FFMQ-Awareness Scores:
AllData$FFMQAA <- rowSums(cbind(FFMQRev$FFMQ_8, FFMQRev$FFMQ_11, FFMQRev$FFMQ_15), na.rm = TRUE)
- FFMQ-Non-Judging Scores:
AllData$FFMQNJ <- rowSums(cbind(FFMQRev$FFMQ_4, FFMQRev$FFMQ_7, FFMQRev$FFMQ_17), na.rm = TRUE)
- FFMQ-Describing Scores:
AllData$FFMQDS <- rowSums(cbind(AllData$FFMQ_1, FFMQRev$FFMQ_2, FFMQRev$FFMQ_5, FFMQRev$FFMQ_10, AllData$FFMQ_14), na.rm = TRUE)
- FFMQ-Non-Reactivity Scores:
AllData$FFMQNR <- rowSums(cbind(AllData$FFMQ_3, AllData$FFMQ_9, AllData$FFMQ_12, AllData$FFMQ_16), na.rm = TRUE)
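Because the reverse scoring above uses 6 - x, each item is answered on a 1-5 scale, so participants with complete data should have facet totals within fixed ranges (e.g., 3-15 for the three-item Observing, Awareness, and Non-Judging facets, and 18-90 for the total). A brief optional range check, not part of the original script (note that na.rm = TRUE means participants with missing items can fall below these minima):
# Optional range check on the computed FFMQ scores:
summary(AllData$FFMQTotal)   # expect 18-90 for complete responders
summary(AllData$FFMQOB)      # expect 3-15 for complete responders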
Letter Availability Task
- “First Position” Responses:
# Re-score "third position" choices as 0:
AllData$R_Position[AllData$R_Position == 2] <- 0
AllData$N_Position[AllData$N_Position == 2] <- 0
AllData$V_Position[AllData$V_Position == 2] <- 0
AllData$K_Position[AllData$K_Position == 2] <- 0
AllData$L_Position[AllData$L_Position == 2] <- 0
# Calculate the mean number of first position responses:
AllData$LA1Mean <- rowMeans(cbind(AllData$R_Position, AllData$N_Position, AllData$V_Position, AllData$K_Position, AllData$L_Position), na.rm = TRUE)
- First Position Proportions:
AllData$LA1Prop <- rowMeans(cbind(AllData$R_First_1, AllData$N_First_1, AllData$V_First_1, AllData$K_First_1, AllData$L_First_1), na.rm=TRUE)
- Third Position Proportions:
# (AllData$L_Third_1 is assumed here, following the R/N/V/K/L naming pattern used above)
AllData$LA3Prop <- rowMeans(cbind(AllData$R_Third_1, AllData$N_Third_1, AllData$V_Third_1, AllData$K_Third_1, AllData$L_Third_1), na.rm=TRUE)
Base Rates Representativeness Task
- Pre-Base Rates Lure Endorsement:
AllData$BRPreLure <- rowMeans(cbind(AllData$BR_Mar_Lure_1, AllData$BR_Uni_Lure_2, AllData$BR_Car_Lure_2, AllData$BR_Eng_Lure_1), na.rm=TRUE)
- Pre-Base Rates True Endorsement:
AllData$BRPreTrue <- rowMeans(cbind(AllData$BR_Mar_Lure_2, AllData$BR_Uni_Lure_1, AllData$BR_Car_Lure_1, AllData$BR_Eng_Lure_2), na.rm=TRUE)
- Post-Base Rates Lure Endorsement:
AllData$BRPostLure <- rowMeans(cbind(AllData$BR_Mar_BR_1, AllData$BR_Uni_BR_2, AllData$BR_Car_BR_2, AllData$BR_Eng_BR_1), na.rm=TRUE)
- Post-Base Rates True Endorsement:
AllData$BRPostTrue <- rowMeans(cbind(AllData$BR_Mar_BR_2, AllData$BR_Uni_BR_1, AllData$BR_Car_BR_1, AllData$BR_Eng_BR_2), na.rm=TRUE)
- Change in Lure Endorsement:
AllData$BRLureChange <- (AllData$BRPostLure - AllData$BRPreLure)
- Change in True Endorsement:
AllData$BRTrueChange <- (AllData$BRPostTrue - AllData$BRPreTrue)
Anchoring Task
# Calculate anchoring scores for List 1:
AllData$ANScore_List1 <- rowMeans(cbind(abs(70 - AllData$AN1_1Est), abs(2000 - AllData$AN1_2Est), abs(1500 - AllData$AN1_3Est), abs(65 - AllData$AN1_4Est), abs(14 - AllData$AN1_5Est), abs(1920 - AllData$AN1_6Est), abs(50000 - AllData$AN1_7Est), abs(30 - AllData$AN1_8Est), abs(100 - AllData$AN1_9Est), abs(17 - AllData$AN1_10Est)), na.rm = TRUE)
# Calculate anchoring scores for List 2:
AllData$ANScore_List2 <- rowMeans(cbind(abs(2000 - AllData$AN2_1Est), abs(45500 - AllData$AN2_2Est), abs(6000 - AllData$AN2_3Est), abs(550 - AllData$AN2_4Est), abs(127 - AllData$AN2_5Est), abs(1850 - AllData$AN2_6Est), abs(100 - AllData$AN2_7Est), abs(7 - AllData$AN2_8Est), abs(20 - AllData$AN2_9Est), abs(7 - AllData$AN2_10Est)), na.rm = TRUE)
# Create a variable to indicate which list was presented:
AllData$ANList <- NA
# Code the variable as "0" for the first list:
AllData$ANList[complete.cases(AllData$ANScore_List1)] <- 0
# Code the variable as "1" for the second list:
AllData$ANList[complete.cases(AllData$ANScore_List2)] <- 1
# Create a variable to represent overall anchoring scores:
AllData$ANScore <- rowMeans(cbind(AllData$ANScore_List1, AllData$ANScore_List2), na.rm = TRUE)
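In other words, a participant's anchoring score is the mean absolute deviation of their ten estimates from the true values for whichever list they completed:

\[ \text{ANScore} = \frac{1}{10}\sum_{i=1}^{10} \left| \text{true}_i - \text{estimate}_i \right| \]

Larger scores indicate estimates that fall further from the true values.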
Famous Names Task
# Create a variable to indicate which list was presented:
AllData$FNList <- NA
# Code the variable as "0" for the female list:
AllData$FNList[AllData$Q158_First.Click == 0] <- 0
# Code the variable as "1" for the male list:
AllData$FNList[AllData$Q153_First.Click == 0] <- 1
# Create a variable to represent availability-based proportion guesses:
AllData$FNScore <- as.numeric(ifelse(AllData$FNList == 0, AllData$FN_Female_1, ifelse(AllData$FNList == 1, AllData$FN_Male_1, NA)))
Resistance to Sunk Costs Task
AllData$RSCScore <- rowMeans(cbind(AllData$RSC_1_1, AllData$RSC_2_1, AllData$RSC_3_1, AllData$RSC_4_1, AllData$RSC_5_1, AllData$RSC_6_1, AllData$RSC_7_1, AllData$RSC_8_1, AllData$RSC_9_1, AllData$RSC_10_1), na.rm = TRUE)
Belief Bias Task
# Calculate proportion correct for consistent items on List 1 (item suffixes appear to code
# validity and believability: V/I = valid/invalid, B/U = believable/unbelievable).
# Note that na.rm is passed to rowMeans() itself; placing it inside cbind() would add a spurious column of 1s:
AllData$BB1_ConPerf <- rowMeans(cbind((ifelse(AllData$BB1_1VB==1, 1, 0)), (ifelse(AllData$BB1_4VB==1, 1, 0)), (ifelse(AllData$BB1_10VB==1, 1, 0)), (ifelse(AllData$BB1_16VB==1, 1, 0)), (ifelse(AllData$BB1_19VB==1, 1, 0)), (ifelse(AllData$BB1_26VB==1, 1, 0)), (ifelse(AllData$BB1_27VB==1, 1, 0)), (ifelse(AllData$BB1_29VB==1, 1, 0)), (ifelse(AllData$BB1_5IU==1, 0, 1)), (ifelse(AllData$BB1_6IU==1, 0, 1)), (ifelse(AllData$BB1_8IU==1, 0, 1)), (ifelse(AllData$BB1_12IU==1, 0, 1)), (ifelse(AllData$BB1_17IU==1, 0, 1)), (ifelse(AllData$BB1_23IU==1, 0, 1)), (ifelse(AllData$BB1_24IU==1, 0, 1)), (ifelse(AllData$BB1_31IU==1, 0, 1))), na.rm = TRUE)
# Calculate proportion correct for inconsistent items on List 1:
AllData$BB1_IncPerf <- rowMeans(cbind((ifelse(AllData$BB1_2VU==1, 1, 0)), (ifelse(AllData$BB1_9VU==1, 1, 0)), (ifelse(AllData$BB1_11VU==1, 1, 0)), (ifelse(AllData$BB1_15VU==1, 1, 0)), (ifelse(AllData$BB1_18VU==1, 1, 0)), (ifelse(AllData$BB1_22VU==1, 1, 0)), (ifelse(AllData$BB1_25VU==1, 1, 0)), (ifelse(AllData$BB1_32VU==1, 1, 0)), (ifelse(AllData$BB1_3IB==1, 0, 1)), (ifelse(AllData$BB1_7IB==1, 0, 1)), (ifelse(AllData$BB1_13IB==1, 0, 1)), (ifelse(AllData$BB1_14IB==1, 0, 1)), (ifelse(AllData$BB1_20IB==1, 0, 1)), (ifelse(AllData$BB1_21IB==1, 0, 1)), (ifelse(AllData$BB1_28IB==1, 0, 1)), (ifelse(AllData$BB1_30IB==1, 0, 1))), na.rm = TRUE)
# Calculate proportion correct for consistent items on List 2:
AllData$BB2_ConPerf <- rowMeans(cbind((ifelse(AllData$BB2_1VB==1, 1, 0)), (ifelse(AllData$BB2_5VB==1, 1, 0)), (ifelse(AllData$BB2_11VB==1, 1, 0)), (ifelse(AllData$BB2_15VB==1, 1, 0)), (ifelse(AllData$BB2_22VB==1, 1, 0)), (ifelse(AllData$BB2_25VB==1, 1, 0)), (ifelse(AllData$BB2_30VB==1, 1, 0)), (ifelse(AllData$BB2_32VB==1, 1, 0)), (ifelse(AllData$BB2_2IU==1, 0, 1)), (ifelse(AllData$BB2_9IU==1, 0, 1)), (ifelse(AllData$BB2_13IU==1, 0, 1)), (ifelse(AllData$BB2_16IU==1, 0, 1)), (ifelse(AllData$BB2_18IU==1, 0, 1)), (ifelse(AllData$BB2_19IU==1, 0, 1)), (ifelse(AllData$BB2_29IU==1, 0, 1)), (ifelse(AllData$BB2_31IU==1, 0, 1))), na.rm = TRUE)
# Calculate proportion correct for inconsistent items on List 2:
AllData$BB2_IncPerf <- rowMeans(cbind((ifelse(AllData$BB2_3VU==1, 1, 0)), (ifelse(AllData$BB2_6VU==1, 1, 0)), (ifelse(AllData$BB2_12VU==1, 1, 0)), (ifelse(AllData$BB2_21VU==1, 1, 0)), (ifelse(AllData$BB2_24VU==1, 1, 0)), (ifelse(AllData$BB2_14VU==1, 1, 0)), (ifelse(AllData$BB2_27VU==1, 1, 0)), (ifelse(AllData$BB2_28VU==1, 1, 0)), (ifelse(AllData$BB2_4IB==1, 0, 1)), (ifelse(AllData$BB2_7IB==1, 0, 1)), (ifelse(AllData$BB2_8IB==1, 0, 1)), (ifelse(AllData$BB2_10IB==1, 0, 1)), (ifelse(AllData$BB2_17IB==1, 0, 1)), (ifelse(AllData$BB2_20IB==1, 0, 1)), (ifelse(AllData$BB2_23IB==1, 0, 1)), (ifelse(AllData$BB2_26IB==1, 0, 1))), na.rm = TRUE)
# Calculate belief bias scores for List 1:
AllData$BB1_Score = abs(AllData$BB1_ConPerf - AllData$BB1_IncPerf)
# Calculate belief bias scores for List 2:
AllData$BB2_Score = abs(AllData$BB2_ConPerf - AllData$BB2_IncPerf)
# Create a variable to indicate which list was presented:
AllData$BBList <- NA
# Code the variable as "0" for the first list:
AllData$BBList[complete.cases(AllData$BB1_Score)] <- 0
# Code the variable as "1" for the second list:
AllData$BBList[complete.cases(AllData$BB2_Score)] <- 1
# Create a variable to represent overall belief bias scores:
AllData$BBScore <- rowMeans(cbind(AllData$BB1_Score, AllData$BB2_Score), na.rm = TRUE)
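In other words, a participant's belief bias score is the absolute difference between their proportion correct on belief-consistent items and on belief-inconsistent items for whichever list they completed:

\[ \text{BBScore} = \left| P(\text{correct} \mid \text{consistent}) - P(\text{correct} \mid \text{inconsistent}) \right| \]

A score of 0 indicates that accuracy was unaffected by the believability of the conclusions, while larger scores indicate a larger accuracy gap.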
Load Libraries
The following libraries will be needed for this analysis:
# 1. For calculating descriptive statistics:
# install.packages("Rmisc")
library(Rmisc)
# 2. For formatting tables:
# a.
# install.packages("knitr")
library(knitr)
# b.
# install.packages("kableExtra")
library(kableExtra)
# 3. For plotting with ggplot2:
# install.packages("tidyverse")
library(tidyverse)
# 4. For formatting plots:
# a.
# install.packages ("grid")
library(grid)
# b.
# install.packages("gridExtra")
library(gridExtra)
# 5. For performing correlations:
# install.packages("Hmisc")
library(Hmisc)
Adjust Display Options
To make it easier to read our statistical outputs, we’ll turn the scientific notation option off.
options(scipen = 999)
p-Value Rounding Function
We’ll also create a function to assess and print p-values in the comments of our script. If p >= .005, the function will display “p =” and the value rounded to two decimal places. If .0005 <= p < .005, the function will display “p =” and the value rounded to three decimal places. If p < .0005, the function will display “p < .001.”
p_round <- function(x){
  if(x > .005){
    x1 = paste("= ", round(x, digits = 2), sep = '')
  } else if(x == .005){
    x1 = paste("= .01")
  } else if(x > .0005 & x < .005){
    x1 = paste("= ", round(x, digits = 3), sep = '')
  } else if(x == .0005){
    x1 = paste("= .001")
  } else {
    x1 = paste("< .001")
  }
  (x1)
}
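As a quick illustration (not part of the original analyses), here is how the function formats a few hypothetical p-values:
# Illustrative calls to p_round (hypothetical values):
p_round(0.0372)    # returns "= 0.04"
p_round(0.00274)   # returns "= 0.003"
p_round(0.00012)   # returns "< .001"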
1. Regressions - All Participants
In our first analysis, we will perform regression analyses to determine whether any of the FFMQ facets act as significant predictors of the outcome measures. We will also include “Source” as a predictor to account for possible differences between the two populations we sampled from (SONA and MTurk). Additionally, for any measure for which there were multiple lists, we will include “List” as a predictor.
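Each model therefore takes the same general form; for a given outcome \(Y\) (with the List term included only where multiple lists were used):

\[ Y = \beta_0 + \beta_1\text{NR} + \beta_2\text{DS} + \beta_3\text{NJ} + \beta_4\text{AA} + \beta_5\text{OB} + \beta_6\text{Source} \,(+\, \beta_7\text{List}) + \varepsilon \]

where NR, DS, NJ, AA, and OB are the mean-centered FFMQ facet scores.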
Subsetting Data
Before we begin, we will create a dataset of the variables of interest. Continuous predictor variables (i.e. FFMQ facet scores) will be mean-centered.
Data1 = data.frame("Subject" = AllData$ID, "Source" = AllData$Source, "MC_FFMQOB" = c(scale(AllData$FFMQOB, center = TRUE, scale = FALSE)), "MC_FFMQNR" = c(scale(AllData$FFMQNR, center = TRUE, scale = FALSE)), "MC_FFMQNJ" = c(scale(AllData$FFMQNJ, center = TRUE, scale = FALSE)), "MC_FFMQDS" = c(scale(AllData$FFMQDS, center = TRUE, scale = FALSE)), "MC_FFMQAA" = c(scale(AllData$FFMQAA, center = TRUE, scale = FALSE)), "LA1Mean" = AllData$LA1Mean, "LA1Prop" = AllData$LA1Prop, "BRPreLure" = AllData$BRPreLure, "BRLureChange" = AllData$BRLureChange, "ANScore" = AllData$ANScore, "ANList" = AllData$ANList, "FNScore" = AllData$FNScore, "FNList" = AllData$FNList, "RSCScore" = AllData$RSCScore, "BBScore" = AllData$BBScore, "BBList" = AllData$BBList)
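As an optional sanity check (not part of the original script), the mean-centered facet columns should each average approximately zero, since scale() centers on the column mean while ignoring missing values:
# Means of the centered predictors should be ~0 (optional check):
round(colMeans(Data1[ , grep("^MC_", names(Data1))], na.rm = TRUE), 10)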
# Calculate summary statistics for "First Position" responses:
LA1MeanDesc1 = summarySE(data = Data1, measurevar = "LA1Mean", groupvars = "Source", conf.interval = .95, na.rm = TRUE)
# Calculate summary statistics for first position proportions:
LA1PropDesc1 = summarySE(data = Data1, measurevar = "LA1Prop", groupvars = "Source", conf.interval = .95, na.rm = TRUE)
# Change the name of the Means columns so they're consistent across summary tables:
colnames(LA1MeanDesc1)[colnames(LA1MeanDesc1) == "LA1Mean"] = "M"
colnames(LA1PropDesc1)[colnames(LA1PropDesc1) == "LA1Prop"] = "M"
# Combine into a single data frame:
LADesc1 = rbind(LA1MeanDesc1, LA1PropDesc1)
# Create a table to display the results:
kable(LADesc1, digits = 4,
caption = "Table 1. Scores on the Letter Availability task.",
col.names = c("Source", "n", "M","SD", "SE", "CI"),
align = 'c') %>%
kable_styling(bootstrap_options =
c("hover", "responsive", "striped"),
full_width = F, position = "center") %>%
group_rows("First Position Responses", 1,2) %>%
group_rows("First Position Proportion", 3,4)| Source | n | M | SD | SE | CI |
|---|---|---|---|---|---|
| First Position Responses | |||||
| 0 | 112 | 0.4893 | 0.2762 | 0.0261 | 0.0517 |
| 1 | 25 | 0.4560 | 0.2485 | 0.0497 | 0.1026 |
| First Position Proportion | |||||
| 0 | 112 | 25.7839 | 22.6791 | 2.1430 | 4.2464 |
| 1 | 25 | 38.6640 | 22.0921 | 4.4184 | 9.1192 |
“First Position” Responses:
LA1MeanR1 = summary(lm(data = Data1, LA1Mean ~ MC_FFMQNR + MC_FFMQDS + MC_FFMQNJ + MC_FFMQAA + MC_FFMQOB + Source))
LA1MeanR1
##
## Call:
## lm(formula = LA1Mean ~ MC_FFMQNR + MC_FFMQDS + MC_FFMQNJ + MC_FFMQAA +
## MC_FFMQOB + Source, data = Data1)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.48412 -0.20592 -0.05019 0.15264 0.60444
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.4866297 0.0260576 18.675 <0.0000000000000002 ***
## MC_FFMQNR -0.0004139 0.0077309 -0.054 0.957
## MC_FFMQDS 0.0089902 0.0090058 0.998 0.320
## MC_FFMQNJ 0.0038277 0.0099456 0.385 0.701
## MC_FFMQAA 0.0042573 0.0114419 0.372 0.710
## MC_FFMQOB -0.0032225 0.0099814 -0.323 0.747
## Source -0.0083008 0.0635326 -0.131 0.896
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.2742 on 130 degrees of freedom
## (130 observations deleted due to missingness)
## Multiple R-squared: 0.02004, Adjusted R-squared: -0.02519
## F-statistic: 0.443 on 6 and 130 DF, p-value: 0.8488
The linear combination of all predictors accounts for essentially none of the variation in “First Position” responses; adjusted \(R^2\) = -0.03, F(6, 130) = 0.44, p = 0.85.
There were no significant differences on this measure between SONA and MTurk participants; \(\beta\) = -0.01, t = -0.13, p = 0.9.
None of the FFMQ facets were significant predictors of “First Position” responses.
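As an aside (an optional illustration, not part of the original script), the values quoted in these summaries can be pulled from the stored model summary and formatted with the p_round function defined earlier; analogous calls work for any of the models reported below:
# Extract and format the p-value for the Source predictor:
p_round(LA1MeanR1$coefficients["Source", "Pr(>|t|)"])   # "= 0.9"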
First Position Proportions:
LA1PropR1 = summary(lm(data = Data1, LA1Prop ~ MC_FFMQNR + MC_FFMQDS + MC_FFMQNJ + MC_FFMQAA + MC_FFMQOB + Source))
LA1PropR1
##
## Call:
## lm(formula = LA1Prop ~ MC_FFMQNR + MC_FFMQDS + MC_FFMQNJ + MC_FFMQAA +
## MC_FFMQOB + Source, data = Data1)
##
## Residuals:
## Min 1Q Median 3Q Max
## -39.389 -18.880 -4.006 14.401 59.014
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 25.37556 2.14052 11.855 < 0.0000000000000002 ***
## MC_FFMQNR -0.58054 0.63506 -0.914 0.36233
## MC_FFMQDS 1.14061 0.73978 1.542 0.12555
## MC_FFMQNJ 0.78934 0.81699 0.966 0.33576
## MC_FFMQAA 0.08785 0.93991 0.093 0.92567
## MC_FFMQOB 0.56131 0.81993 0.685 0.49483
## Source 15.48743 5.21893 2.968 0.00357 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 22.52 on 130 degrees of freedom
## (130 observations deleted due to missingness)
## Multiple R-squared: 0.08668, Adjusted R-squared: 0.04452
## F-statistic: 2.056 on 6 and 130 DF, p-value: 0.06274
The linear combination of all predictors accounts for 4.45% of the variation in first position proportion estimates; adjusted \(R^2\) = 0.04, F(6, 130) = 2.06, p = 0.06.
There were significant differences on this measure between SONA and MTurk participants; \(\beta\) = 15.49, t = 2.97, p = 0.004. In particular, SONA participants (\(M_{SONA}\) = 38.66, \(SD_{SONA}\) = 22.09) provided higher estimates of first position proportions than MTurk participants (\(M_{MTurk}\) = 25.78, \(SD_{MTurk}\) = 22.68).
None of the FFMQ facets were significant predictors of first position proportions.
Base Rates Representativeness Task
# Calculate summary statistics for "First Position" responses:
BRPreLureDesc1 = summarySE(data = Data1, measurevar = "BRPreLure", groupvars = "Source", conf.interval = .95, na.rm = TRUE)
# Calculate summary statistics for change in lure endorsement:
BRLureChangeDesc1 = summarySE(data = Data1, measurevar = "BRLureChange", groupvars = "Source", conf.interval = .95, na.rm = TRUE)
# Change the name of the Means columns so they're consistent across summary tables:
colnames(BRPreLureDesc1)[colnames(BRPreLureDesc1) == "BRPreLure"] = "M"
colnames(BRLureChangeDesc1)[colnames(BRLureChangeDesc1) == "BRLureChange"] = "M"
# Combine into a single data frame:
BRDesc1 = rbind(BRPreLureDesc1, BRLureChangeDesc1)
# Create a table to display the results:
kable(BRDesc1, digits = 4,
caption = "Table 2. Scores on the Base Rates Representativeness task.",
col.names = c("Source", "n", "M","SD", "SE", "CI"),
align = 'c') %>%
kable_styling(bootstrap_options =
c("hover", "responsive", "striped"),
full_width = F, position = "center") %>%
group_rows("Pre-Base Rate Lure Endorsement", 1,2) %>%
group_rows("Lure Change", 3,4)| Source | n | M | SD | SE | CI |
|---|---|---|---|---|---|
| Pre-Base Rate Lure Endorsement | |||||
| 0 | 112 | 7.6674 | 1.3094 | 0.1237 | 0.2452 |
| 1 | 25 | 7.4400 | 1.3468 | 0.2694 | 0.5559 |
| Lure Change | |||||
| 0 | 112 | -2.1384 | 1.1517 | 0.1088 | 0.2156 |
| 1 | 25 | -2.1200 | 1.1482 | 0.2296 | 0.4739 |
Pre-Base Rates Lure Endorsement:
BRPreLureR1 = summary(lm(data = Data1, BRPreLure ~ MC_FFMQNR + MC_FFMQDS + MC_FFMQNJ + MC_FFMQAA + MC_FFMQOB + Source))
BRPreLureR1
##
## Call:
## lm(formula = BRPreLure ~ MC_FFMQNR + MC_FFMQDS + MC_FFMQNJ +
## MC_FFMQAA + MC_FFMQOB + Source, data = Data1)
##
## Residuals:
## Min 1Q Median 3Q Max
## -4.9118 -0.6709 -0.0526 0.6978 2.9828
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 7.66058 0.11842 64.690 < 0.0000000000000002 ***
## MC_FFMQNR 0.03218 0.03513 0.916 0.36141
## MC_FFMQDS 0.04440 0.04093 1.085 0.27998
## MC_FFMQNJ -0.09291 0.04520 -2.056 0.04182 *
## MC_FFMQAA 0.04100 0.05200 0.788 0.43184
## MC_FFMQOB 0.12709 0.04536 2.802 0.00586 **
## Source -0.11507 0.28873 -0.399 0.69089
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.246 on 130 degrees of freedom
## (130 observations deleted due to missingness)
## Multiple R-squared: 0.1408, Adjusted R-squared: 0.1012
## F-statistic: 3.551 on 6 and 130 DF, p-value: 0.002736
The linear combination of all predictors accounts for 10.12% of the variation in pre-base rate lure endorsement; adjusted \(R^2\) = 0.10, F(6, 130) = 3.55, p = 0.003.
There were no significant differences on this measure between SONA and MTurk participants; \(\beta\) = -0.12, t = -0.4, p = 0.69.
Both FFMQ-Observing and FFMQ-Non-Judging were significant predictors of pre-base rate lure endorsement; \(\beta\) = 0.13, t = 2.8, p = 0.01 and \(\beta\) = -0.09, t = -2.06, p = 0.04, respectively. In particular, higher scores on the Observing facet were associated with higher pre-base rate lure ratings, while higher scores on the Non-Judging facet were associated with lower pre-base rate lure ratings. These relationships are plotted below.
- Create the observing plot:
BRPreLureOBFig1 = (Data1 %>%
ggplot(aes(MC_FFMQOB, BRPreLure)) +
geom_point(shape = 16) +
# Add linear regression line:
geom_smooth(method = lm) +
# Add labels:
labs(x = "FFMQ-Observing Score", y = "Pre-Base Rate Lure Endorsment", title = "a") +
# Define theme:
theme_light() + theme(plot.title = element_text(hjust = 0, size = 10)))- Create the non-judging plot.
BRPreLureNJFig1 = (Data1 %>%
ggplot(aes(MC_FFMQNJ, BRPreLure)) +
geom_point(shape = 16) +
# Add linear regression line:
geom_smooth(method = lm) +
# Add labels:
labs(x = "FFMQ-Non-Judging Score", y = "Pre-Base Rate Lure Endorsment", title = "b") +
# Define theme:
theme_light()) + theme(plot.title = element_text(hjust = 0, size = 10))- Create a caption.
BRPreLureCap1 = "Figure 1. Linear relationships between pre-base rate lure endorsement and scores on the observing (a) and non-judging (b) subscales of the Five Facet Mindfulness Questionnaire. Shaded areas represent 95% confidence regions."
BRPreLureCap1 = paste0(strwrap(BRPreLureCap1, width = 101), collapse = "\n")
- Display the figure:
BRPreLureFig1 = grid.arrange(BRPreLureOBFig1, BRPreLureNJFig1, nrow = 1, widths = c(1, 1), bottom = textGrob(BRPreLureCap1, hjust = 0, x = .05, gp = gpar(fontsize = 10)))
Change in Lure Endorsement:
BRLureChangeR1 = summary(lm(data = Data1, BRLureChange ~ MC_FFMQNR + MC_FFMQDS + MC_FFMQNJ + MC_FFMQAA + MC_FFMQOB + Source))
BRLureChangeR1
##
## Call:
## lm(formula = BRLureChange ~ MC_FFMQNR + MC_FFMQDS + MC_FFMQNJ +
## MC_FFMQAA + MC_FFMQOB + Source, data = Data1)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.74106 -0.68895 0.07046 0.79395 2.55987
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -2.12330 0.10727 -19.794 <0.0000000000000002 ***
## MC_FFMQNR 0.01500 0.03183 0.471 0.6382
## MC_FFMQDS -0.08831 0.03707 -2.382 0.0187 *
## MC_FFMQNJ 0.01623 0.04094 0.396 0.6924
## MC_FFMQAA 0.04992 0.04710 1.060 0.2912
## MC_FFMQOB -0.07481 0.04109 -1.821 0.0710 .
## Source -0.02596 0.26155 -0.099 0.9211
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.129 on 130 degrees of freedom
## (130 observations deleted due to missingness)
## Multiple R-squared: 0.07421, Adjusted R-squared: 0.03148
## F-statistic: 1.737 on 6 and 130 DF, p-value: 0.1174
The linear combination of all predictors accounts for 3.15% of the variation in lure-rating change; adjusted \(R^2\) = 0.03, F(6, 130) = 1.74, p = 0.12.
There were no significant differences on this measure between SONA and MTurk participants; \(\beta\) = -0.03, t = -0.1, p = 0.92.
The FFMQ-Describing facet was a significant predictor of lure-rating change; \(\beta\) = -0.09, t = -2.38, p = 0.02. In particular, higher scores on the Describing facet were associated with larger negative changes in lure ratings. This relationship is plotted below.
# Create a figure caption:
BRLureChangeCap1 = "Figure 2. Linear relationship between lure-rating changes and scores on the describing subscale of the Five Facet Mindfulness Questionnaire. Shaded area represents a 95% confidence region."
BRLureChangeCap1 = paste0(strwrap(BRLureChangeCap1, width = 50), collapse = "\n")
# Create the plot:
BRLureChangeDSFig1 = (Data1 %>%
ggplot(aes(MC_FFMQDS, BRLureChange)) +
geom_point(shape = 16) +
# Add linear regression line:
geom_smooth(method = lm) +
# Add labels:
labs(x = "FFMQ-Describing Score", y = "Lure-Rating Changes", caption = BRLureChangeCap1) +
theme_light() + theme(plot.caption = element_text(hjust = 0, size = 10)))
BRLureChangeDSFig1
Anchoring Task
# Calculate summary statistics for anchoring scores:
ANDesc1 = summarySE(data = Data1, measurevar = "ANScore", groupvars = "Source", conf.interval = .95, na.rm = TRUE)
# Create a table to display the results:
kable(ANDesc1, digits = 4,
caption = "Table 3. Scores on the Anchoring task.",
col.names = c("Source", "n", "M","SD", "SE", "CI"),
align = 'c') %>%
kable_styling(bootstrap_options =
c("hover", "responsive", "striped"),
full_width = F, position = "center")
| Source | n | M | SD | SE | CI |
|---|---|---|---|---|---|
| 0 | 111 | 11887.34 | 55373.01 | 5255.774 | 10415.71 |
| 1 | 25 | 18942.80 | 62077.02 | 12415.405 | 25624.14 |
ANScoreR1 = summary(lm(data = Data1, ANScore ~ MC_FFMQNR + MC_FFMQDS + MC_FFMQNJ + MC_FFMQAA + MC_FFMQOB + Source + ANList))
ANScoreR1
##
## Call:
## lm(formula = ANScore ~ MC_FFMQNR + MC_FFMQDS + MC_FFMQNJ + MC_FFMQAA +
## MC_FFMQOB + Source + ANList, data = Data1)
##
## Residuals:
## Min 1Q Median 3Q Max
## -44229 -18279 -8937 482 458527
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 10693.2 7233.0 1.478 0.1418
## MC_FFMQNR 195.4 1591.2 0.123 0.9025
## MC_FFMQDS 746.6 1847.4 0.404 0.6868
## MC_FFMQNJ 5243.7 2040.2 2.570 0.0113 *
## MC_FFMQAA -4164.5 2365.6 -1.760 0.0807 .
## MC_FFMQOB 2513.6 2076.7 1.210 0.2284
## Source 11696.9 13027.7 0.898 0.3710
## ANList -1082.2 9688.4 -0.112 0.9112
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 56120 on 128 degrees of freedom
## (131 observations deleted due to missingness)
## Multiple R-squared: 0.0643, Adjusted R-squared: 0.01313
## F-statistic: 1.257 on 7 and 128 DF, p-value: 0.2771
The linear combination of all predictors accounts for 1.31% of the variation in anchoring scores; adjusted \(R^2\) = 0.01, F(7, 128) = 1.26, p = 0.28.
There were no significant differences on this measure between SONA and MTurk participants or between List 1 and List 2; \(\beta\) = 11696.86, t = 0.9, p = 0.37 and \(\beta\) = -1082.23, t = -0.11, p = 0.91, respectively.
The FFMQ-Non-Judging facet was a significant predictor of anchoring scores; \(\beta\) = 5243.73, t = 2.57, p = 0.01. In particular, higher scores on the Non-Judging facet were associated with greater anchoring scores. This relationship is plotted below.
# Create a figure caption:
ANScoreCap1 = "Figure 3. Linear relationship between anchoring scores and scores on the non-judging subscale of the Five Facet Mindfulness Questionnaire. Shaded area represents a 95% confidence region."
ANScoreCap1 = paste0(strwrap(ANScoreCap1, width = 45), collapse = "\n")
# Create the plot:
ANScoreNJFig1 = (Data1 %>%
ggplot(aes(MC_FFMQNJ, ANScore)) +
geom_point(shape = 16) +
# Add linear regression line:
geom_smooth(method = lm) +
# Add labels:
labs(x = "FFMQ-Non-Judging Score", y = "Anchoring Scores", caption = ANScoreCap1) +
theme_light() + theme(plot.caption = element_text(hjust = 0, size = 10)))
ANScoreNJFig1
Famous Names Task
# Calculate summary statistics for Famous Names scores:
FNDesc1 = summarySE(data = Data1, measurevar = "FNScore", groupvars = "Source", conf.interval = .95, na.rm = TRUE)
# Create a table to display the results:
kable(FNDesc1, digits = 4,
caption = "Table 4. Scores on the Famous Names task.",
col.names = c("Source", "n", "M","SD", "SE", "CI"),
align = 'c') %>%
kable_styling(bootstrap_options =
c("hover", "responsive", "striped"),
full_width = F, position = "center")
| Source | n | M | SD | SE | CI |
|---|---|---|---|---|---|
| 0 | 95 | 59.2947 | 11.6371 | 1.1939 | 2.3706 |
| 1 | 26 | 61.9615 | 8.0423 | 1.5772 | 3.2484 |
FNScoreR1 = summary(lm(data = Data1, FNScore ~ MC_FFMQNR + MC_FFMQDS + MC_FFMQNJ + MC_FFMQAA + MC_FFMQOB + Source + FNList))
FNScoreR1
##
## Call:
## lm(formula = FNScore ~ MC_FFMQNR + MC_FFMQDS + MC_FFMQNJ + MC_FFMQAA +
## MC_FFMQOB + Source + FNList, data = Data1)
##
## Residuals:
## Min 1Q Median 3Q Max
## -30.235 -4.233 0.411 5.617 33.608
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 60.60385 1.56703 38.674 <0.0000000000000002 ***
## MC_FFMQNR 0.04963 0.32798 0.151 0.8800
## MC_FFMQDS -0.95354 0.44270 -2.154 0.0334 *
## MC_FFMQNJ 0.06146 0.42156 0.146 0.8843
## MC_FFMQAA 0.01505 0.54224 0.028 0.9779
## MC_FFMQOB 0.17489 0.49287 0.355 0.7234
## Source 1.59433 2.64369 0.603 0.5477
## FNList -1.97416 2.03611 -0.970 0.3343
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 10.96 on 113 degrees of freedom
## (146 observations deleted due to missingness)
## Multiple R-squared: 0.06287, Adjusted R-squared: 0.004818
## F-statistic: 1.083 on 7 and 113 DF, p-value: 0.3789
The linear combination of all predictors accounts for just 0.48% of the variation in availability-based proportion guesses; adjusted \(R^2\) = 0.005, F(7, 113) = 1.08, p = 0.38.
There were no significant differences on this measure between SONA and MTurk participants or between the female and male name lists; \(\beta\) = 1.59, t = 0.6, p = 0.55 and \(\beta\) = -1.97, t = -0.97, p = 0.33, respectively.
The FFMQ-Describing facet was a significant predictor of availability-based proportion guesses; \(\beta\) = -0.95, t = -2.15, p = 0.03. In particular, higher scores on the Describing facet were associated with lower availability-based proportion guesses. This relationship is plotted below.
# Create a figure caption:
FNScoreCap1 = "Figure 4. Linear relationship between famous names scores and scores on the describing subscale of the Five Facet Mindfulness Questionnaire. Shaded area represents a 95% confidence region."
FNScoreCap1 = paste0(strwrap(FNScoreCap1, width = 45), collapse = "\n")
# Create the plot:
FNScoreDSFig1 = (Data1 %>%
ggplot(aes(MC_FFMQDS, FNScore)) +
geom_point(shape = 16) +
# Add linear regression line:
geom_smooth(method = lm) +
# Add labels:
labs(x = "FFMQ-Describing Score", y = "Famous Names Score", caption = FNScoreCap1) +
theme_light() + theme(plot.caption = element_text(hjust = 0, size = 10)))
FNScoreDSFig1
Resistance to Sunk Costs Task
# Calculate summary statistics for resistance to sunk cost scores:
RSCDesc1 = summarySE(data = Data1, measurevar = "RSCScore", groupvars = "Source", conf.interval = .95, na.rm = TRUE)
# Create a table to display the results:
kable(RSCDesc1, digits = 4,
caption = "Table 5. Scores on the Resistance to Sunk Costs task.",
col.names = c("Source", "n", "M","SD", "SE", "CI"),
align = 'c') %>%
kable_styling(bootstrap_options =
c("hover", "responsive", "striped"),
full_width = F, position = "center")
| Source | n | M | SD | SE | CI |
|---|---|---|---|---|---|
| 0 | 102 | 4.3892 | 0.7346 | 0.0727 | 0.1443 |
| 1 | 27 | 3.9074 | 0.6889 | 0.1326 | 0.2725 |
RSCScoreR1 = summary(lm(data = Data1, RSCScore ~ MC_FFMQNR + MC_FFMQDS + MC_FFMQNJ + MC_FFMQAA + MC_FFMQOB + Source))
RSCScoreR1
##
## Call:
## lm(formula = RSCScore ~ MC_FFMQNR + MC_FFMQDS + MC_FFMQNJ + MC_FFMQAA +
## MC_FFMQOB + Source, data = Data1)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.52430 -0.51091 0.03729 0.50928 1.70215
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.38943 0.07409 59.248 < 0.0000000000000002 ***
## MC_FFMQNR -0.01359 0.02131 -0.638 0.52496
## MC_FFMQDS 0.01877 0.02866 0.655 0.51369
## MC_FFMQNJ 0.02604 0.02671 0.975 0.33144
## MC_FFMQAA -0.03071 0.03422 -0.897 0.37127
## MC_FFMQOB 0.05941 0.03149 1.886 0.06163 .
## Source -0.44642 0.16946 -2.634 0.00952 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.7243 on 122 degrees of freedom
## (138 observations deleted due to missingness)
## Multiple R-squared: 0.1084, Adjusted R-squared: 0.06459
## F-statistic: 2.473 on 6 and 122 DF, p-value: 0.02718
The linear combination of all predictors accounts for 6.46% of the variation in resistance to sunk cost scores; adjusted \(R^2\) = 0.06, F(6, 122) = 2.47, p = 0.03.
There were significant differences on this measure between SONA and MTurk participants; \(\beta\) = -0.45, t = -2.63, p = 0.01. In particular, SONA participants (\(M_{SONA}\) = 3.91, \(SD_{SONA}\) = 0.69) demonstrated significantly less resistance to the sunk cost bias than MTurk participants (\(M_{MTurk}\) = 4.39, \(SD_{MTurk}\) = 0.73).
None of the FFMQ facets were found to be significant predictors of resistance to sunk cost scores.
Belief Bias Task
# Calculate summary statistics for belief bias scores:
BBDesc1 = summarySE(data = Data1, measurevar = "BBScore", groupvars = "Source", conf.interval = .95, na.rm = TRUE)
# Create a table to display the results:
kable(BBDesc1, digits = 4,
caption = "Table 6. Scores on the Belief Bias task.",
col.names = c("Source", "n", "M","SD", "SE", "CI"),
align = 'c') %>%
kable_styling(bootstrap_options =
c("hover", "responsive", "striped"),
full_width = F, position = "center")
| Source | n | M | SD | SE | CI |
|---|---|---|---|---|---|
| 0 | 100 | 0.1394 | 0.2164 | 0.0216 | 0.0429 |
| 1 | 27 | 0.1198 | 0.2234 | 0.0430 | 0.0884 |
BBScoreR1 = summary(lm(data = Data1, BBScore ~ MC_FFMQNR + MC_FFMQDS + MC_FFMQNJ + MC_FFMQAA + MC_FFMQOB + Source + BBList))
BBScoreR1
##
## Call:
## lm(formula = BBScore ~ MC_FFMQNR + MC_FFMQDS + MC_FFMQNJ + MC_FFMQAA +
## MC_FFMQOB + Source + BBList, data = Data1)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.24220 -0.11873 -0.05999 0.03886 0.75776
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.166249 0.028540 5.825 0.0000000496 ***
## MC_FFMQNR 0.013214 0.006223 2.123 0.0358 *
## MC_FFMQDS 0.002484 0.008447 0.294 0.7692
## MC_FFMQNJ -0.020242 0.007768 -2.606 0.0103 *
## MC_FFMQAA -0.012611 0.009893 -1.275 0.2049
## MC_FFMQOB 0.006079 0.009322 0.652 0.5156
## Source -0.071221 0.048965 -1.455 0.1484
## BBList -0.030671 0.038256 -0.802 0.4243
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.2092 on 119 degrees of freedom
## (140 observations deleted due to missingness)
## Multiple R-squared: 0.1228, Adjusted R-squared: 0.07124
## F-statistic: 2.381 on 7 and 119 DF, p-value: 0.02587
The linear combination of all predictors accounts for 7.12% of the variation in belief bias scores; adjusted \(R^2\) = 0.07, F(7, 119) = 2.38, p = 0.03.
There were no significant differences on this measure between SONA and MTurk participants or between List 1 and List 2; \(\beta\) = -0.07, t = -1.45, p = 0.15 and \(\beta\) = -0.03, t = -0.8, p = 0.42, respectively.
Both FFMQ-Non-Judging and FFMQ-Non-Reactivity were significant predictors of belief bias scores; \(\beta\) = -0.02, t = -2.61, p = 0.01 and \(\beta\) = 0.01, t = 2.12, p = 0.04, respectively. In particular, higher scores on the Non-Judging facet were associated with lower belief bias scores, while higher scores on the Non-Reactivity facet were associated with higher belief bias scores. These relationships are plotted below.
- Create the non-judging plot:
BBScoreNJFig1 = (Data1 %>%
ggplot(aes(MC_FFMQNJ, BBScore)) +
geom_point(shape = 16) +
# Add linear regression line:
geom_smooth(method = lm) +
# Add labels:
labs(x = "FFMQ-Non-Judging Score", y = "Belief Bias Scores", title = "a") +
# Define theme:
theme_light() + theme(plot.title = element_text(hjust = 0, size = 10)))
- Create the non-reactivity plot:
BBScoreNRFig1 = (Data1 %>%
ggplot(aes(MC_FFMQNR, BBScore)) +
geom_point(shape = 16) +
# Add linear regression line:
geom_smooth(method = lm) +
# Add labels:
labs(x = "FFMQ-Non-Reactivity Score", y = "Belief Bias Scores", title = "b") +
# Define theme:
theme_light() + theme(plot.title = element_text(hjust = 0, size = 10)))
- Create a caption:
BBScoreCap1 = "Figure 5. Linear relationships between belief bias scores and scores on the non-judging (a) and non-reactivity (b) subscales of the Five Facet Mindfulness Questionnaire. Shaded areas represent 95% confidence regions."
BBScoreCap1 = paste0(strwrap(BBScoreCap1, width = 101), collapse = "\n")
- Display the figure:
BBScoreFig1 = grid.arrange(BBScoreNJFig1, BBScoreNRFig1, nrow = 1, widths = c(1, 1), bottom = textGrob(BBScoreCap1, hjust = 0, x = .05, gp = gpar(fontsize = 10)))