Introduction

Prospect theory (Kahneman & Tversky, 1979; Tversky & Kahneman, 1986, 1992) posits that people make choices based on the subjective values of options, not on their objective values. In other words, in a gambling task, when presented with a sure $5 and a risky 50-50 option of gaining either $10 or $0, prospect theory expects people to decide based on how much they value $5 relative to how much they value $10. But is the subjective value in prospect theory a proxy for feeling?
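For illustration, here is a minimal numerical sketch in R of how subjective values can favor the sure option. It assumes a simple power value function; the exponent 0.88 is the median estimate reported by Tversky and Kahneman (1992) and is used here purely for illustration, and v() is hypothetical shorthand.

# subjective value under a power value function v(x) = x^alpha
alpha <- 0.88                          # illustrative exponent (Tversky & Kahneman, 1992)
v <- function(x) x ^ alpha

v_sure   <- v(5)                       # subjective value of the sure $5 (~4.12)
v_gamble <- 0.5 * v(10) + 0.5 * v(0)   # expected subjective value of the gamble (~3.79)

v_sure > v_gamble                      # TRUE: the sure option is subjectively preferred,
                                       # even though both options have the same expected value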

Charpentier et al. (2016) investigate this question using computational models. The authors ask whether a model based on feelings can predict choices in a monetary gambling task better than a model based on objective values. They asked participants to rate how negatively or positively they would feel, or expected to feel, if they won or lost an objective amount. Those self-reported feelings were used to construct a feeling function that relates objective values to subjective feelings (expected or experienced). In a separate task, the authors asked participants to choose between a sure option and a risky 50-50 option in a monetary gambling task. The feeling-based choice model used the constructed feeling function to predict the feelings associated with each objective value, and then used those predicted feelings to predict choice. The value-based choice model used only objective values to predict choice.

Methods

Power Analysis

As part of their supplementary analysis, Charpentier et al. used a paired t test to compare the weights of the losing coefficients to the weights of the winning coefficients, testing whether one was significantly larger than the other in the choice models.

The original paper reports:

“We determined that a sample size of 59 participants would achieve 85% power to detect an effect size of 0.401 with an alpha of .05.”

Using the effect size found in the original paper (d = 0.392) with an alpha of .05, 53 participants would achieve 80% power. Power calculations were run in G*Power v3.1.
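As a cross-check, the same calculation can be run in R with the pwr package (an assumption: pwr is not used elsewhere in this project); it should yield approximately the same sample size.

library(pwr)

# n required for a paired t test with d = 0.392, alpha = .05, power = .80
pwr.t.test(d = 0.392, sig.level = .05, power = .80, type = "paired")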

Planned Sample

For the sake of feasibility in collecting data, I plan to use 30 participants with a shortened version of their experiment. Although this substantially reduces the sample size, their main analysis does not rely on comparing the weights of losing and winning to each other, but rather on the predictive power of the models themselves, which does not depend on sample size as much as t tests do. There are no specific recruitment constraints on demographics, since the original paper did not specify any.

Materials

Basic geometric shapes were used as the stimuli in each trial. Each trial used a different geometric shape to ensure that associations between monetary outcomes and shapes did not carry over across trials.

INCLUDE PICTURE HERE

jsPsych was used to create the experimental paradigm. Shapes were created in basic presentation software using built-in shape tools and color palettes, and saved as images.

Procedure

Directly quoted from the original paper, the two main tasks were as follows:

Feeling Task

In the feelings task, participants completed four blocks of 40 to 48 trials each, in which they reported either expected (Fig. 1a) or experienced (Fig. 1b) feelings associated with a range of wins and losses (between £0.2 and £12) or with no change in monetary amount (£0). At the beginning of each trial, participants were told how much was at stake and whether it was a win trial (e.g., “If you choose the GOOD picture, you will: WIN £10”) or a loss trial (e.g., “If you choose the BAD picture, you will: LOSE £10”). On each trial, their task was to make a simple arbitrary choice between two different geometrical shapes. Participants were told that one stimulus was randomly associated with a gain or loss (between £0.2 and £12) and the other stimulus with no gain and no loss (£0). Each stimulus was presented only once across the entire task so there was no way for participants to learn which stimulus was associated with a better outcome. The probability of sampling each amount was controlled to ensure that each gain and each loss from the range was sampled twice in each block: In one instance, the outcome was the amount at stake (win/loss), and in the other, the outcome was £0 (no win/no loss).

In two of the four blocks (counterbalanced order), participants reported their expected feelings prior to choosing between the two stimuli (Fig. 1a), and in the other two blocks, they reported their experienced feelings after choosing between the two stimuli (Fig. 1b). Participants reported their expected feelings by answering one of four questions asking how they would feel if they “win,” “lose,” “don’t win,” or “don’t lose” (the order of win/lose and don’t-win/don’t-lose questions was counterbalanced across trials). In experienced-feelings blocks, participants answered the question “How do you feel now?” All feelings were rated using a subjective rating scale ranging from extremely unhappy to extremely happy. Expected and experienced feelings were collected in different blocks to ensure participants did not simply remember and repeat the same rating. The choice between the two geometrical shapes was arbitrary and implemented simply in order to have participants actively involved with the outcomes.

Gambling Task

Participants also completed a probabilistic-choice task (Fig. 1c) in which they made 288 to 322 choices between a risky 50-50 gamble and a sure option. Importantly, all the amounts used in the gambling task were the same as those used in the feelings task (between £0.2 and £12), so feelings associated with these outcomes could be combined to predict gambling choice. There were three gamble types: mixed (participants had to choose between a gamble with a 50% chance of a gain and 50% chance of a loss, or a sure option of £0), gain only (participants had to choose between a gamble with a 50% chance of a high gain and a 50% chance of £0, or a sure, smaller gain), and loss only (participants had to choose between a gamble with 50% chance of a high loss and 50% chance of £0, or a sure, smaller loss). According to prospect theory, these three types of choices are essential to estimate loss aversion, risk preference for gains, and risk preference for losses, respectively.

Link to Replication Experiment

Short Experiment Version: [https://web.stanford.edu/~mlko53/expt/experiment_short.html]

Analysis Plan

The only exclusion criterion is removing data from participants who show no variation in their responses.
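A minimal sketch of this exclusion, assuming the long-format data frame d with id and response columns constructed in the Data Preparation section below:

# find participants whose ratings never vary, then drop their data
no_variation <- d %>%
        group_by(id) %>%
        summarise(v = var(response, na.rm = TRUE)) %>%
        filter(is.na(v) | v == 0) %>%
        pull(id)

d <- d %>% filter(!(id %in% no_variation))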

Feeling Function Models

Subjective feelings are modeled relative to three different baselines: the difference from the midpoint of the rating scale, the difference from the rating reported on the previous trial (for experienced feelings only), and the difference from the rating of the corresponding zero outcome.
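A sketch of the three baseline transformations, assuming a hypothetical long-format data frame feelings with one row per trial (the zero_response column, the rating of the matched £0 outcome, is also hypothetical):

feelings <- feelings %>%
        group_by(id) %>%
        mutate(from_midpoint = response,                  # midpoint baseline: ratings already re-centered at 0
               from_previous = response - lag(response),  # previous-trial baseline (experienced feelings only)
               from_zero     = response - zero_response   # baseline: rating of the corresponding zero outcome
        ) %>%
        ungroup()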

Ten different models were fit for each feeling block, expected and experienced. Each model was fit to every participant (see Charpentier et al., 2016, p. 4 for the model formulas). The best model in each feeling block was determined using summed BIC as the model selection criterion, together with a 100-iteration split-half analysis.
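For reference, BIC = k * ln(n) - 2 * ln(L), where k is the number of free parameters, n is the number of observations, and L is the maximized likelihood; lower BIC indicates a better trade-off between fit and model complexity. R's built-in BIC() computes this from a fitted model's log-likelihood, which is how the summed BIC below is obtained.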

Gambling Choice Models

Each choice model used logistic regression with three predictors, a win value, a loss value, and a sure value, to predict gambling choice (coded 1 for selecting the risky 50-50 gamble and 0 for selecting the sure option). Seven different choice models were fitted for every participant. Choice Models 1 and 2 are feeling models that use the best-fitting feeling function to transform objective values into subjective values. Choice Models 3-7 are value-based models that use either raw objective values or a transformation of them, such as a logarithm (see Charpentier et al., 2016, p. 5 and the Supplemental Material, p. 3 for the model formulas). The best model was determined using BIC as the model selection criterion, together with a 100-iteration split-half analysis.
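Concretely, each of these models has the form P(gamble) = 1 / (1 + exp(-(b_win * win + b_loss * loss + b_sure * sure))), with no intercept; the three predictors are either raw or transformed objective values (value-based models) or the feeling-function transforms of those values (feeling-based models). This is the logistic regression implemented by the glm() calls in the Results code below.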

The weights associated with wins and those associated with losses are also compared: the weights are extracted from the key choice model of interest, standardized, and then compared using a paired-sample t test and a repeated measures ANOVA.
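A sketch of the paired t test part of this comparison, assuming the per-participant fits are stored in a list such as choice_mod1.fit (defined in the Results code below) and that predictors were standardized before fitting so the weights are on a common scale:

# extract the win and loss weights from each participant's key choice model
win_w  <- sapply(choice_mod1.fit, function(m) coef(m)[1])
loss_w <- sapply(choice_mod1.fit, function(m) coef(m)[2])

# test whether loss weights are larger in magnitude than win weights
t.test(abs(loss_w), abs(win_w), paired = TRUE)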

Key Analysis

The key analysis tests whether the feeling-based choice models (Choice Models 1 and 2) predict choice better than the value-based choice models (Choice Models 3-7).

Differences from Original Study

I am using a smaller sample size in order to fit the constraints of the class budget. Furthermore, I am using fewer trials in each feeling block: instead of two blocks of 48 trials each for the expected and experienced conditions, I am using only one block of 48 trials per condition. The number of gambling trials remains the same.

The authors used a maximum likelihood estimation algorithm in MATLAB to fit their models. I will use nonlinear least squares (the nls() function) in R with the Gauss-Newton algorithm; under the assumption of Gaussian residuals, least-squares estimates coincide with maximum-likelihood estimates, so the two approaches should yield comparable fits.

Methods Addendum (Post Data Collection)

Graphing the mean predictive accuracy of each model should give readers an intuitive sense of the predictive power of each model.
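A sketch of how that accuracy could be computed for the logistic-regression choice models (assuming the fits produced by choice_fit() below; the 0.5 cutoff turns predicted probabilities into predicted choices):

# proportion of trials where the model's predicted choice matches the actual choice
model_accuracy <- function(fit){
        predicted <- predict(fit, type = "response") > 0.5
        actual <- fit$model[, 1]  # first column of a glm model frame is the response
        mean(predicted == actual, na.rm = TRUE)
}

sapply(choice_mod3.fit, model_accuracy)  # e.g., per-participant accuracy of Choice Model 3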

Actual Sample

TO FILL OUT AFTER DATA COLLECTION

Differences from pre-data collection methods plan

TO FILL OUT AFTER DATA COLLECTION

Results

Data preparation

Data from two pilot participants (pilot1.json and pilot2.json) were read in and prepared as follows.


Load Relevant Libraries and Functions
set.seed(456) # so that cross val is the same for reproducibility
library(ggplot2)
library(tidyr)
library(dplyr)
library(jsonlite)

Import data

# init experiment parameters
nparticipants <- 2  # pilot participants
n.iter <- 10        # split-half iterations (10 for the pilot; the plan specifies 100)

#read pilot1 data
d <- fromJSON('../expt/pilot_data/pilot1.json')
d$stimulus <- NULL
d$id <- 1

#read pilot 2 data
temp_d <- fromJSON('../expt/pilot_data/pilot2.json')
temp_d$stimulus <- NULL
temp_d$id <- 2

#combine
d <- rbind(d, temp_d)

#filter irrelevant data
d <- d %>% filter(!is.na(feel_trial_type) | !is.na(gamble_trial_type))


Prepare data for analysis - create columns etc.

# transform the 1-9 Likert subjective response to -4 to 4 so that neutral is 0
d$response <- as.integer(d$response)
d$response <- d$response - 5


Data exclusion / filtering

# split the data into the blocks so that later analysis is easier
feel_expect <- d %>% 
        filter(feel_trial_type != "now" & feel_trial_type != "") %>%
        select(feel_trial_type, value, response, id)

feel_experience <- d %>% 
        filter(feel_trial_type == "now") %>%
        select(feel_trial_type, value, response, id)

gamble_block <- d %>% 
        filter(!is.na(gamble_trial_type)) %>%
        select(gamble_trial_type, win, lose, sure, key_press, gamble_side, id)

# code each trial as a binary gamble variable: TRUE if the participant chose
# the gamble side, FALSE if the sure side, NA if no side was recorded
# (jsPsych key codes: 80 = 'p', the right-hand option; 81 = 'q', the left-hand option)
gamble_block$gamble <- with(gamble_block,
        ifelse(gamble_side == "", NA,
               (gamble_side == "R" & key_press == 80) |
               (gamble_side == "L" & key_press == 81)))

Helper Functions

These functions help me calculate BIC with split-half bootstrap analysis.

# splits dataframe in half randomly
split_half <- function(d){
        val_length <- round(nrow(d) / 2)
        val_indices <- sample(seq_len(nrow(d)), size = val_length)
        return(d[val_indices, ])
}

# calculate total BIC (summed across participants) for the 20 feeling models
feel_BIC <- function(num){
        vector <- double(20)
        model_names <- c(paste0("feel_expect_mod", 1:10, ".fit"),
                         paste0("feel_experience_mod", 1:10, ".fit"))

        for(i in 1:num){
                for(j in 1:20){
                        # build e.g. "feel_expect_mod3.fit[[2]]" and evaluate it
                        model <- paste0(model_names[j], "[[", i, "]]")
                        vector[j] <- vector[j] + BIC(eval(parse(text = model)))
                }
        }
        return(vector)
}

# value transformation function; returns a list of predicted values; win == 1, lose == 2, sure == 3
transform_predict <- function(win, lose, sure, model){
        twin <- predict(model, data.frame(value = win))
        tlose <- predict(model, data.frame(value = lose))
        tsure <- predict(model, data.frame(value = sure))
        return(list(twin, tlose, tsure))
}

gamble_log_fit <- function(d, model){
        params <- transform_predict(d$win, d$lose, d$sure, model)
        return(glm(d$gamble ~ params[[1]] + params[[2]] + params[[3]] + 0, family = binomial(link = "logit")))
}

# calculate total BIC (summed across participants) for the 7 choice models
choice_BIC <- function(num){
        vector <- double(7)
        model_names <- paste0("choice_mod", 1:7, ".fit")

        for(i in 1:num){
                for(j in 1:7){
                        model <- eval(parse(text = paste0(model_names[j], "[[", i, "]]")))
                        if(length(model) == 1 && is.na(model)){
                                vector[j] <- NA # fitting failed for this participant; flag the model as unusable
                        } else{
                                vector[j] <- vector[j] + BIC(model)
                        }
                }
        }
        return(vector)
}

Feeling Helper Functions

These functions are the model formulas of the feeling functions.

# formulas from the paper
mod1 <- response ~ b * value
mod2 <- response ~ ifelse(value > 0, bgain * value, bloss * value)
mod3 <- response ~ ifelse(value > 0, b * abs(value) ^ p , -b * abs(value) ^ p)
mod4 <- response ~ ifelse(value > 0, bgain * abs(value) ^ p, -bloss * abs(value) ^ p)
mod5 <- response ~ ifelse(value > 0, b * abs(value) ^ pgain, -b * abs(value) ^ ploss)                            ## 5 has problems converging               
mod6 <- response ~ ifelse(value > 0, bgain * abs(value) ^ pgain, -bloss * abs(value) ^ ploss)                    ## 6 has problems converging
mod7 <- response ~ ifelse(value > 0, b * value + e, b * value - e)                                      
mod8 <- response ~ ifelse(value > 0, bgain * value + e, bloss * value - e)                              
mod9 <- response ~ ifelse(value > 0, b * value + egain, b * value - eloss)
mod10 <- response ~ ifelse(value > 0, bgain * value + egain, bloss * value - eloss)

# function that fits all 10 feeling models to each participant, for both the
# expected and experienced blocks, and assigns the resulting lists of fits
# (feel_expect_mod1.fit, ..., feel_experience_mod10.fit) to the global environment
feel_fit <- function(split = TRUE, num){

        formulas <- list(mod1, mod2, mod3, mod4, mod5, mod6, mod7, mod8, mod9, mod10)

        # nls starting values, per block and model
        starts <- list(
                expect = list(c(b = .5),
                              c(bgain = .5, bloss = .5),
                              c(b = .5, p = .5),
                              c(bgain = .5, bloss = .5, p = .5),
                              c(b = .5, pgain = .5, ploss = .5),
                              c(bgain = .5, bloss = .5, pgain = .5, ploss = .5),
                              c(b = .5, e = .1),
                              c(bgain = .5, bloss = .5, e = .1),
                              c(b = 1, egain = .1, eloss = .1),
                              c(bgain = 1, bloss = 1, egain = .1, eloss = .1)),
                experience = list(c(b = 1),
                                  c(bgain = 1, bloss = 1),
                                  c(b = 1, p = 1),
                                  c(bgain = .5, bloss = .5, p = .5),
                                  c(b = 1, pgain = 1, ploss = 1),
                                  c(bgain = 1, bloss = 1, pgain = 1, ploss = 1),
                                  c(b = 1, e = .1),
                                  c(bgain = 1, bloss = 1, e = .1),
                                  c(b = 1, egain = .1, eloss = .1),
                                  c(bgain = 1, bloss = 1, egain = .1, eloss = .1)))

        # init empty lists of fits: fits$expect[[k]] collects model k across participants
        fits <- list(expect = rep(list(list()), 10),
                     experience = rep(list(list()), 10))

        ## fit each model to each participant
        for(i in 1:num){

                temp <- list(expect = feel_expect %>% filter(id == i),
                             experience = feel_experience %>% filter(id == i))

                # only split the data in half when bootstrapping
                if(split == TRUE){
                        temp <- lapply(temp, split_half)
                }

                for(block in c("expect", "experience")){
                        for(k in 1:10){
                                fits[[block]][[k]] <- c(fits[[block]][[k]],
                                        list(nls(formulas[[k]], data = temp[[block]],
                                                 start = starts[[block]][[k]],
                                                 control = list(maxiter = 50000, minFactor = 1/2000))))
                        }
                }
        }

        # make the fitted-model lists global variables (e.g., feel_expect_mod3.fit)
        for(block in c("expect", "experience")){
                for(k in 1:10){
                        assign(paste0("feel_", block, "_mod", k, ".fit"),
                               fits[[block]][[k]], envir = .GlobalEnv)
                }
        }
}


Choice Helper Functions

These functions are the model formulas of the gambling choice models.

# signed log transformation for choice model 4: log1p of the magnitude,
# with the sign of the original amount preserved (log0(0) == 0)
log0 <- function(x){
        sign(x) * log1p(abs(x))
}
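For example, log0(c(-3, 0, 3)) returns approximately (-1.386, 0, 1.386): the magnitudes are log-compressed while the sign of each amount is preserved.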

# function that fits all the choice models
choice_fit <- function(split = TRUE, num){
        
        choice_mod1.fit <- list()
        choice_mod2.fit <- list()
        choice_mod3.fit <- list()
        choice_mod4.fit <- list()
        choice_mod5.fit <- list()
        choice_mod6.fit <- list()
        choice_mod7.fit <- list()
        
        for(i in 1:num){
                temp_gamble_block <- gamble_block %>% filter(id == i)
                if(split == TRUE){
                        temp_gamble_block <- split_half(temp_gamble_block)
                }
                
                # models 1-2: feeling-based, using the best-fitting feeling functions;
                # fit on the same (possibly split) per-participant data as models 3-7
                choice_mod1.fit <- c(choice_mod1.fit, list(gamble_log_fit(temp_gamble_block, feel_expect_mod3.fit[[i]])))
                choice_mod2.fit <- c(choice_mod2.fit, list(gamble_log_fit(temp_gamble_block, feel_experience_mod3.fit[[i]])))
                # models 3-4: value-based, raw and signed-log-transformed objective values
                choice_mod3.fit <- c(choice_mod3.fit, list(glm(data = temp_gamble_block, gamble ~ 0 + win + lose + sure, family = binomial)))
                choice_mod4.fit <- c(choice_mod4.fit, list(glm(data = temp_gamble_block, gamble ~ 0 + log0(win) + log0(lose) + log0(sure), family = binomial)))
                # models 5-7: prospect-theory-style nonlinear models; NA if fitting fails
                choice_mod5.fit <- c(choice_mod5.fit, list(tryCatch(nls(data = temp_gamble_block, gamble ~ 1 / (1 + exp(-(u * (0.5 * win + 0.5 * lamda * lose)))), 
                                                               start = c(u = 1, lamda = 1),
                                                               control = list(maxiter = 50000, minFactor=1/2000)),
                                                               error = function(err) NA)))
                choice_mod6.fit <- c(choice_mod6.fit, list(tryCatch(nls(data = temp_gamble_block, gamble ~ 1 / (1 + exp(-(u * (0.5 * (abs(win) ^ gamma) + 0.5 * (abs(lose) ^ gamma))))),
                                                               start = c(u = 1, gamma = 1),
                                                               control = list(maxiter = 50000, minFactor=1/2000)),
                                                               error = function(err) NA)))
                choice_mod7.fit <- c(choice_mod7.fit, list(tryCatch(nls(data = temp_gamble_block, gamble ~ 1 / (1 + exp(-(u * (0.5 * (abs(win) ^ gamma) + 0.5 * lamda * (abs(lose) ^ gamma))))),
                                                               start = c(u = 1, gamma = 1, lamda = 1),
                                                               control = list(maxiter = 50000, minFactor=1/2000)),
                                                               error = function(err) NA)))
                
        }
        
        # make these models global variables
        assign("choice_mod1.fit", choice_mod1.fit, envir = .GlobalEnv)
        assign("choice_mod2.fit", choice_mod2.fit, envir = .GlobalEnv)
        assign("choice_mod3.fit", choice_mod3.fit, envir = .GlobalEnv)
        assign("choice_mod4.fit", choice_mod4.fit, envir = .GlobalEnv)
        assign("choice_mod5.fit", choice_mod5.fit, envir = .GlobalEnv)
        assign("choice_mod6.fit", choice_mod6.fit, envir = .GlobalEnv)
        assign("choice_mod7.fit", choice_mod7.fit, envir = .GlobalEnv)
}

Analysis Helper Function

## Gives count, mean, standard deviation, standard error of the mean, and confidence interval (default 95%).
##   data: a data frame.
##   measurevar: the name of a column that contains the variable to be summarized
##   groupvars: a vector containing names of columns that contain grouping variables
##   na.rm: a boolean that indicates whether to ignore NA's
##   conf.interval: the percent range of the confidence interval (default is 95%)
summarySE <- function(data=NULL, measurevar, groupvars=NULL, na.rm=FALSE,
                      conf.interval=.95, .drop=TRUE) {
    library(plyr)

    # New version of length which can handle NA's: if na.rm==T, don't count them
    length2 <- function (x, na.rm=FALSE) {
        if (na.rm) sum(!is.na(x))
        else       length(x)
    }

    # This does the summary. For each group's data frame, return a vector with
    # N, mean, and sd
    datac <- ddply(data, groupvars, .drop=.drop,
      .fun = function(xx, col) {
        c(N    = length2(xx[[col]], na.rm=na.rm),
          mean = mean   (xx[[col]], na.rm=na.rm),
          sd   = sd     (xx[[col]], na.rm=na.rm)
        )
      },
      measurevar
    )

    # Rename the "mean" column    
    datac <- rename(datac, c("mean" = measurevar))

    datac$se <- datac$sd / sqrt(datac$N)  # Calculate standard error of the mean

    # Confidence interval multiplier for standard error
    # Calculate t-statistic for confidence interval: 
    # e.g., if conf.interval is .95, use .975 (above/below), and use df=N-1
    ciMult <- qt(conf.interval/2 + .5, datac$N-1)
    datac$ci <- datac$se * ciMult

    return(datac)
}

Confirmatory analysis

Data Visualization

The graph below shows the average reported expected and experienced feelings at each objective value, with standard error bars.

feel_expect_sum <- summarySE(feel_expect, measurevar = "response", groupvars = c("value"))
feel_expect_sum$type <- "Expected Feelings"
feel_experience_sum <- summarySE(feel_experience, measurevar = "response", groupvars = c("value"))
feel_experience_sum$type <- "Experienced Feelings"

feel_data_sum <- rbind(feel_expect_sum, feel_experience_sum)

ggplot(data = feel_data_sum,
                        aes(x = value, y = response, col = type)) +
        geom_errorbar(aes(ymin=response-se, ymax=response+se), colour="black", width=.1, position=position_dodge(0.1)) +
        geom_point(position=position_dodge(0.1), size=3, shape=21, fill="white") +
        ggtitle("Feeling Function")

Comparing Feeling Models

First, we fit Feeling Models 1-10 for both expected and experienced feelings, and select the best model as the one with the lowest BIC summed across all participants. The same fitting procedure is repeated across iterations of the split-half bootstrap analysis (the plan calls for 100 iterations; n.iter is set to 10 for this pilot).

## init feeling functions dataframe - this will store summed up BIC for every model at every iteration
feeling_models <- data.frame(matrix(ncol = 20, nrow = n.iter))

## init names of the columns
model_names <- c(paste0("feel_expect_mod", 1:10),
                 paste0("feel_experience_mod", 1:10))

colnames(feeling_models) <- model_names

## run the split-half bootstrap iterations
for(i in 1:n.iter){
        feel_fit(split = TRUE, nparticipants)
        feeling_models[i, ] <- feel_BIC(nparticipants) # store the summed BIC of every model for this iteration
}

## tally which model attains the lowest BIC in each bootstrap iteration
feeling_expect_models <- feeling_models[, 1:10]
feeling_experience_models <- feeling_models[, 11:20]

feeling_expect_models <- table(colnames(feeling_expect_models)[apply(feeling_expect_models,1,which.min)])
feeling_experience_models <- table(colnames(feeling_experience_models)[apply(feeling_experience_models,1,which.min)])

## no bootstrapping: fit once on the full data
feeling_models_nosplit <- data.frame(matrix(ncol = 20, nrow = 1))

## init names of the columns
model_names <- c(paste0("feel_expect_mod", 1:10),
                 paste0("feel_experience_mod", 1:10))

colnames(feeling_models_nosplit) <- model_names

feel_fit(split = FALSE, nparticipants)
feeling_models_nosplit[1, ] <- feel_BIC(nparticipants)

Expected Feeling Model

Original:

[Figure from the original paper: Expected Feeling Model]

# plot the average summed BIC of the expected feeling models
ggplot(feeling_models %>% 
               select(1:10) %>% 
               gather("model", "BIC", 1:10) %>%
               separate(model, sep="_", into= c("first", "second", "model")), aes(x = model, y = BIC)) + 
        stat_summary(fun.y = "mean", geom="bar") +
        coord_cartesian(ylim = c(300, 400))


Experienced Feeling Model

Original:

[Figure from the original paper: Experienced Feeling Model]

# plot the average summed BIC of the experienced feeling models
ggplot(feeling_models %>% 
               select(11:20) %>% 
               gather("model", "BIC", 1:10) %>%
               separate(model, sep="_", into= c("first", "second", "model")), aes(x = model, y = BIC)) + 
        stat_summary(fun.y = "mean", geom="bar") 

Choice Models

Next, we fit Choice Models 1-7 and select the best model as the one with the lowest BIC summed across all participants. The same fitting procedure is repeated across iterations of the split-half bootstrap analysis.

Key analysis: whether the BIC of the feeling-based models (Choice Models 1 and 2) is lower than the BIC of the value-based models (Choice Models 3-7).

#init choice model array
choice_models <- data.frame(matrix(ncol = 7, nrow = n.iter))

model_names <- paste0("choice_mod", 1:7, ".fit")

colnames(choice_models) <- model_names

#start fitting
feel_fit(split = FALSE, num = nparticipants) # fit feel model with all the data

for(i in 1:n.iter){
        
        choice_fit(split = TRUE, nparticipants) # fit choice models split half
        choice_models[i, ] <- choice_BIC(nparticipants)
}

choice_models <- table(colnames(choice_models)[apply(choice_models,1,which.min)])

Original:

[Figure from the original paper: Choice Models]

Exploratory analyses

FILL IN LATER

Discussion

Summary of Replication Attempt

Open the discussion section with a paragraph summarizing the primary result from the confirmatory analysis and the assessment of whether it replicated, partially replicated, or failed to replicate the original result.

Commentary

Add open-ended commentary (if any) reflecting (a) insights from follow-up exploratory analysis, (b) assessment of the meaning of the replication (or not) - e.g., for a failure to replicate, are the differences between original and present study ones that definitely, plausibly, or are unlikely to have been moderators of the result, and (c) discussion of any objections or challenges raised by the current and original authors about the replication attempt. None of these need to be long.