The Table of Contents shows the organization of this document; please use it for reference and navigation. In the HTML version of this document, the Table of Contents is dynamic and clickable. In the Word/PDF version, please consult the page numbers.

We strongly recommend viewing this document in HTML format, which dynamically displays the R code and output tables.

1 Procedure and Design

The two studies assessed motivations to post and not to post online reviews.

Sequence of tasks for each participant:

  • Consent

  • Recall a product they purchased online with which they were highly satisfied.

  • Indicate whether they read other reviews when buying that product and whether they posted an online review of that product.

  • Write three reasons why they either did or did not post a review, depending on their answer.

  • Repeat the above process twice with two other products they recall purchasing: one with which they were very dissatisfied, and one with which they were neither satisfied nor dissatisfied.

  • For each product recalled, rate the listed motivations to post and not to post a review (rating scale from 1 = Not at all to 5 = Extremely).

  • Following the phase of recalled transactions, participants read nine hypothetical but realistic online transactions, presented in random order: three satisfaction levels (high satisfaction, medium satisfaction, low satisfaction) × three product types (handmade by an individual, affiliated team-logo product, generic factory-made product).

  • For each scenario, rate the list of motivations to post and not to post a review (rating scale from 1 = Not at all to 5 = Extremely).

  • Report gender and age (with a “prefer not to state” option).

2 Study 1: Assessment of motivations to post and not to post

# Establish some Bayesian scripts
source("OrderedProbitModel2023.R") # N.B. This in turn sources DBDA2E-utilities.R
## Loading required package: coda
## Linked to JAGS 4.3.1
## Loaded modules: basemod,bugs
openGraph = function(...){} # redefine as null for Rmarkdown
saveGraph = function(...){} # redefine as null for Rmarkdown
# Set MCMC steps:
doFinalMCMC = TRUE # set to FALSE for a quick test run
if ( doFinalMCMC ) { # full-length MCMC, takes time, ~20 minutes
  globalAdaptSteps=500  # 500
  globalBurnInSteps=1000  # 1000
  globalNumSavedSteps=30000  # 30000
  globalThinSteps=6 # 6
  globalNChains=4 # 4
} else { # shorter MCMC for quicker iteration while editing the document, ~4 minutes
  globalAdaptSteps=500  # 500
  globalBurnInSteps=500  # 1000
  globalNumSavedSteps=1000  # 30000
  globalThinSteps=1 # 6
  globalNChains=4 # 4
}
# Function for creating a table of MCMC diagnostics:
diagSummary = function( mcmcCoda ) { 
  require("coda")
  if ( !inherits( mcmcCoda , "mcmc.list" ) ) { # robust check of coda class
    stop( "In diagSummary() input must be mcmc.list (i.e., coda object)." )
  }
  mcmcMat = as.matrix(mcmcCoda,chains=TRUE)
  parameterNames = varnames(mcmcCoda)
  chainSummary = NULL
  parIdx = 0
  for ( parName in parameterNames ) {
    parIdx = parIdx+1
    thisPSRF = as.vector(gelman.diag(mcmcCoda[,parName])[[1]])
    names( thisPSRF) = c("psrfPt","psrfUpCI")
    thisESS = as.vector(effectiveSize(mcmcCoda[,parName]))
    names( thisESS ) = "ESS"
    thisMedian = quantile(mcmcMat[,parName],probs=c(0.5))
    thisETI = quantile(mcmcMat[,parName],probs=c(0.025,0.975))
    thisDensity = density(mcmcMat[,parName])
    thisMode = thisDensity$x[which.max(thisDensity$y)]
    names( thisMode ) = "Mode"
    thisHDI = HDIofMCMC( mcmcMat[,parName] , credMass=0.95 )
    names( thisHDI ) = c("HDIlow","HDIhigh")
    chainSummary = rbind( chainSummary ,
                          c( thisPSRF , thisESS , 
                             thisMedian , thisETI , thisMode , thisHDI ) )
    rownames(chainSummary)[parIdx] = parName 
  }
  return( chainSummary )
}

# Function for plotting latent ratings after MCMC is run:
plotLatentRatings = function( scenario=c("recall","V_artisan")[1] , # just some examples
                              satisLevel=c("HS","MS","LS")[1] ,
                              nToPost=8 , nNotToPost=c(6,8)[1] ,
                              savePlot=FALSE ) {
  # This function assumes that ordered-probit analysis for Study 1 has already been
  # run, and that its output files are stored in the current working directory.
  
  # Read in the data file written by the ordered-probit analysis:
  dataMat = read.csv( file=paste0( tempFolder,"/", 
                                   scenario , "_" , satisLevel , ".csv" ) )
  # Get motivation labels from dataMat:
  motiveLabels = as.character(dataMat[,"Motivations"])
  
  # Check that length of motiveLabels matches number of motivations:
  if ( length(motiveLabels) != nToPost + nNotToPost ) {
    warning( "Number of motive labels does not equal number of to-post plus not-post." )
  }
  
  # # Should correspond to the following, not used:
  # motiveNames = c( "Warn Consumers" , "Help Consumers" , "Punish Producer" , "Reward Producer" ,
  #                  "Belong:Reviewers" , "Belong:Consumers" , "Guilt" , "Reciprocity" ,
  #                  "Effort" , "Bogus" , "No Impact" , "Redundant" , 
  #                  "Not Criticize" , "Not Hype" )
  # # Check that motiveLabels and motiveNames correspond correctly:
  # cbind( motiveLabels , motiveNames )
  
  # Read in parameter summary:
  paramMat = read.csv( file=paste0( tempFolder,"/", 
                                    "OrderedProbitModel-",scenario,"_",satisLevel,
                                    "-OrdModel-ParameterSummary.csv" ) )
  # Pull out key info about mu values:
  muInfo = paramMat[ grep( "mu\\[" , paramMat[,"X"] ) , c("Mode","HDIlow","HDIhigh") ]
  
  # Now plot the info about mu:
  openGraph(height=8,width=6.0)
  xLim = c(-2,5) # range(muInfo)
  motiveIdxVec = 1:length(motiveLabels)
  motiveIdxRev = length(motiveLabels):1
  par( mar=c(4.0,9.0,3.0,0.5) , mgp=c(2.0,0.7,0) )
  mainText = switch( satisLevel , 
                     "HS" = "High Satisfaction" ,
                     "LS" = "Low Satisfaction" ,
                     "MS" = "Medium Satisfaction" )
  plot( -1,-1, 
        main=mainText , cex.main=1.75 ,
        xlab="Rating (latent mean)" , xlim=xLim , 
        ylab="" , ylim=c(0.95,length(motiveLabels)+0.05) , yaxt="n" , 
        cex.lab=1.75 )
  title( ylab="Motive" , line=7 , cex.lab=1.75 )
  axis( side=2 , at=1:length(motiveLabels) , 
        labels=motiveLabels[motiveIdxRev] , 
        # labels=motiveNames[motiveIdxRev] , 
        las=1 )
  rect( xleft=xLim[1]-0.2 , ybottom=1-0.5 , xright=0.5 , ytop=length(motiveLabels)+0.5 ,
        col="lightgray" , border=NA )
  rect( xleft=3-0.1 , ybottom=1-0.5 , xright=3+0.1 , ytop=length(motiveLabels)+0.5 ,
        col="lightgray" , border=NA )
  points( muInfo[motiveIdxVec,"Mode"] , motiveIdxRev , 
          col=c(rep("darkgreen",nToPost),rep("red",nNotToPost)) , 
          pch=19 , cex=2 , cex.lab=1.5 )
  # Plot the HDI's:
  for ( motiveIdx in motiveIdxVec ) {
    # plot dotted line from tick mark to HDI:
    segments( x0=xLim[1]-1 , y0=motiveIdxRev[motiveIdx] ,
              x1=muInfo[motiveIdx,"HDIlow"] , y1=motiveIdxRev[motiveIdx] ,
              col="black" , lty="dotted" )
    # plot HDIs:
    segments( x0=muInfo[motiveIdx,"HDIlow"] , y0=motiveIdxRev[motiveIdx] ,
              x1=muInfo[motiveIdx,"HDIhigh"] , y1=motiveIdxRev[motiveIdx] ,
              col=c(rep("darkgreen",nToPost),rep("red",nNotToPost))[motiveIdx] , 
              lwd=5 )
  }
  # Put in a separator between motives TO post and motives NOT to post:
  abline( h=(nNotToPost+0.5) , lty="solid" )
  if ( savePlot ) {
    saveGraph( file=paste0("MotivationSummaryPlot-",satisLevel) , type="pdf" )
  }
}
# Load the data:
DataSummary = read.csv("Data_Summary_Study1.csv")
DataSummary_catch = subset(DataSummary, catchTrialCorrect == "5") # keep only participants who passed the catch trial

2.1 Free-Rider Problem in Online Reviewing

The values computed below are the proportions of responses; they are subsequently displayed in a bar graph.

library("plyr")

nSubjectS1 = nrow(DataSummary_catch) # number of subjects in Study 1

# HS Product Q1: what proportion of respondents read reviews when deciding to purchase?
HS_read_count = count(DataSummary_catch, 'recallHighSat_readReview')
print("High Satisfaction, Read Reviews:")
## [1] "High Satisfaction, Read Reviews:"
HS_read_count$freq[2] / nSubjectS1 # % who read the reviews (HS)
## [1] 0.8798283
# HS product Q2: what proportion of respondents posted reviews after purchase?
HS_post_count = count(DataSummary_catch, 'recallHighSat_postReview')
print("High Satisfaction, Posted Reviews:")
## [1] "High Satisfaction, Posted Reviews:"
HS_post_count$freq[2] / nSubjectS1 # % who posted the reviews (HS)
## [1] 0.07296137
# MS product Q1: what proportion of respondents read reviews when deciding to purchase?
MS_read_count = count(DataSummary_catch, 'recallMedSat_readReview')
print("Medium Satisfaction, Read Reviews:")
## [1] "Medium Satisfaction, Read Reviews:"
MS_read_count$freq[2] / nSubjectS1 # % who read the reviews (MS)
## [1] 0.6652361
# MS product Q2: what proportion of respondents posted reviews after purchase?
MS_post_count = count(DataSummary_catch, 'recallMedSat_postReview')
print("Medium Satisfaction, Posted Reviews:")
## [1] "Medium Satisfaction, Posted Reviews:"
MS_post_count$freq[2] / nSubjectS1 # % who posted the reviews (MS)
## [1] 0.06866953
# LS product Q1: what proportion of respondents read reviews when deciding to purchase?
LS_read_count = count(DataSummary_catch, 'recallLowSat_readReview')
print("Low Satisfaction, Read Reviews:")
## [1] "Low Satisfaction, Read Reviews:"
LS_read_count$freq[2] / nSubjectS1 # % who read the reviews (LS)
## [1] 0.5879828
# LS product Q2: what proportion of respondents posted reviews after purchase?
LS_post_count = count(DataSummary_catch, 'recallLowSat_postReview')
print("Low Satisfaction, Posted Reviews:")
## [1] "Low Satisfaction, Posted Reviews:"
LS_post_count$freq[2] / nSubjectS1 # % who posted the reviews (LS)
## [1] 0.1587983

Figure: Proportions of participants who read other reviews before buying, and proportions of participants who posted their own reviews, for each of the three recalled products. The percentages displayed here should match the numerical outputs in the preceding text.
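
As an aside, the indexing freq[2] in the code above assumes that the affirmative response is the second row of each count() table. A sturdier alternative is sketched below; it assumes the read/post columns code affirmative responses as "Yes", which may differ from the actual coding in the data file:

# Proportion of affirmative responses (a sketch; the response coding is assumed):
propYes = function( x ) { mean( x == "Yes" , na.rm=TRUE ) }
propYes( DataSummary_catch$recallHighSat_readReview ) # reproduces 0.8798 above if the coding assumption holds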

2.2 List of Motivations for Study 1

Study 1 included the following motivation statements for posting or not posting, each rated by the participants.

8 motivations to post:

  • I wanted to warn other consumers about a bad product

  • I wanted to help other consumers find a good product

  • I wanted to punish the producer for their bad product

  • I wanted to reward the producer for their good product

  • I wanted to feel social belonging with the community of reviewers

  • I wanted to feel social belonging with other consumers of that product

  • I would have felt bad not contributing a review after using other reviews to make my decision

  • I wanted to reciprocate after using other reviews to make my decision

6 motivations NOT to post:

  • I felt that posting a rating or review was too effortful

  • I felt the online rating system was bogus

  • I felt that ratings have no impact

  • I felt my rating would have been redundant with others already posted

  • I wanted to avoid criticizing the producer despite a bad product

  • I wanted to avoid hyping the producer despite a good product

2.3 The Ordered-Probit Model

The ratings data are ordinal values, which we choose to describe with an ordered-probit model. It is not appropriate to treat the data as if they were metric, a.k.a. interval, values. (In fact, treating the data as normally-distributed metric values can lead to incorrect conclusions; see Liddell & Kruschke (2018).) There is no claim that an ordered-probit model is the correct model of the data or even the best model of the data. Rather, the ordered-probit model is better than treating the data as if they were normally distributed metric values. Moreover, the ordered-probit model fits the rating distributions reasonably well (as shown later by posterior predictive checks).

The ordered-probit model is explained in more detail below. For extensive background information about ordered-probit models and their analysis in Bayesian software, see Liddell & Kruschke (2018) and its supplemental material at https://osf.io/53ce9/.

Analysis goals and Bayesian estimation. Our primary goal for statistical analysis is describing the distribution of ratings for each motivation. Specifically, we are most interested in estimating the underlying (i.e., latent scale) central tendency of the ratings, along with the uncertainty of that estimate. The model also estimates a distinct standard deviation for every motivation, unlike the conventional assumption of homogeneous variances.

We are not specifically interested in tests of significant differences between motivations, because we do not have specific hypotheses about which motivations should be stronger or weaker, and because our main goal is estimation of the magnitude of the motivations.

The Bayesian approach is especially useful for this application because of its flexibility for specifying exactly the desired model structure (e.g., with distinct variances for every motivation). Moreover, the Bayesian approach directly yields credible intervals for every parameter and derived variable.

Likelihood function. In an ordered-probit model, there is a latent continuous variable underlying the ordinal response. The population is assumed to be normally distributed on the latent variable, with mean \(\mu_i\) (for case \(i\)) and standard deviation \(\sigma_i\). The latent variable is cut at thresholds, \(\theta_1\) to \(\theta_{K-1}\) (for \(K\) response levels), such that latent values between \(\theta_{k-1}\) and \(\theta_{k}\) produce ordinal response \(k\). The figure below illustrates the ordered-probit model:

Figure: Ordinal data in upper panel are produced from thresholded cumulative-normal in lower panel. (Diagram is from Figure 1 of Liddell & Kruschke (2018), p. 329.)

Mathematically, the probability of response level \(k\) is \[ p\big( k \,|\, \mu_i , \sigma_i , \{\theta_j\} \big) = \Phi\big( (\theta_{k} - \mu_i )/\sigma_i \big) - \Phi\big( (\theta_{k-1} - \mu_i )/\sigma_i \big) \] where \(\Phi()\) is the standardized cumulative-normal function. For the highest response level \(k=K\) the threshold \(\theta_{K}\) is effectively \(+\infty\), and for the lowest response level \(k=1\) the threshold \(\theta_{0}\) is effectively \(-\infty\). The thresholds are assumed to be determined by the response process and are therefore the same across all cases \(i\) (because all cases are measured by the same response process).

Parameters: The parameters consist of

  • \(\mu_i\) and \(\sigma_i\) for each case \(i\)
  • the thresholds \(\theta_{1}\) to \(\theta_{K-1}\).

However, the “stretch” and position of the latent scale are arbitrary (by analogy, the latent scale could be Fahrenheit or Celsius), and therefore two parameter values are fixed at arbitrary constants. In the traditional parameterization for an ordered-probit model, \(\mu_1 \equiv 0.0\) and \(\sigma_1 \equiv 1.0\) and all other parameters are specified relative to those constants. Kruschke (2015) and Liddell & Kruschke (2018) preferred instead to fix \(\theta_1 \equiv 1.5\) and \(\theta_{K-1} \equiv K-0.5\) (where \(K\) is the highest ordinal level), which makes the values of the parameters correspond roughly to the response scale of \(1\) through \(K\). Thus, in the present applications with Q questions and K=5 response levels, there are a total of \(2 \times Q + (K-3)\) estimated parameters: \(\mu_1\), \(\sigma_1\), …, \(\mu_Q\), \(\sigma_Q\), \(\theta_2\), \(\theta_3\), with \(\theta_1 \equiv 1.5\) and \(\theta_4 \equiv 4.5\).

The means (\(\mu_q\)) describe the central tendency of ratings on the latent scale, and the standard deviations (\(\sigma_q\)) describe the variability of ratings across people on the latent scale. Primary interest is in the magnitudes of the means (and their uncertainties).

In the threshold-pinned parameterization that has \(\theta_1 \equiv 1.5\) and \(\theta_4 \equiv 4.5\), a mean of 1.5 indicates that 50% of the responses will be level “1” and 50% of the responses will be levels \(>\)“1”. A mean of 4.5 indicates that 50% of the responses will be level “5”. A mean of 3.0 suggests the underlying (latent) rating is near a “3” on the response scale, subject to exact placement of the thresholds. A difference between means of 1.0 suggests the underlying (latent) ratings have central tendencies roughly 1 response level apart.
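
To make the formula and the threshold-pinned parameterization concrete, the following sketch computes the response probabilities in R. The thresholds here are fixed at equally spaced reference values for illustration only; in the actual model \(\theta_2\) and \(\theta_3\) are estimated:

# Response probabilities of the ordered-probit model (illustrative sketch):
ordProbitProbs = function( mu , sigma , thresh=c(1.5,2.5,3.5,4.5) ) {
  cumProb = c( pnorm( ( thresh - mu ) / sigma ) , 1 ) # Phi at each threshold; theta_K = +Inf
  diff( c( 0 , cumProb ) ) # p(k) for k = 1..5 ; theta_0 = -Inf
}
round( ordProbitProbs( mu=1.5 , sigma=1.0 ) , 3 ) # 50% of responses at level "1"
round( ordProbitProbs( mu=3.0 , sigma=1.0 ) , 3 ) # symmetric around level "3"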

The prior distribution. For basic parameter estimation from a broad prior, as is the case here, the prior can simply use diffuse univariate distributions on each parameter, as was provided by Liddell & Kruschke (2018) in their software at https://osf.io/53ce9/, which is used here. A much more elaborate prior specification was provided in the supplementary material of Kruschke (2021) at https://osf.io/w7cph/, but that is only needed for applications with informed priors or Bayes factors.

In detail, the broad prior specified:

  • \(\mu_j \sim \mbox{normal}( (1\!+\!K)/2 , \sigma\!=\!K )\), where \(K\) is the number of response levels. This sets the prior mean of each item \(j\) to the midpoint of the latent response scale, with a very wide standard deviation relative to the latent response scale.

  • \(\sigma_j \sim \mbox{gamma}(\mbox{mode}\!=\!3 , \mbox{sd}\!=\!3 )\), which is the same as \(\sigma_j \sim \mbox{gamma}( \mbox{shape}\!=\!2.6180 , \mbox{rate}\!=\!0.5393 )\) (see the conversion sketch after this list). This prior allows the latent standard deviation of each item \(j\) to be very narrow or very broad relative to the latent response scale.

  • \(\theta_k \sim \mbox{normal}( k\!+\!0.5 , \sigma\!=\!2 )\) for \(k \in \{2,3\}\). This sets the prior mean of each threshold \(k\) at \(k+0.5\) with a very wide standard deviation relative to the latent response scale. Inverted thresholds (e.g., \(\theta_3 < \theta_2\)) are allowed by the prior but rejected during MCMC sampling.
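
The mode/sd-to-shape/rate conversion for the gamma prior can be verified in a few lines of R. (DBDA2E-utilities.R, sourced above, provides a similar conversion function; the following is a self-contained sketch.)

# Convert a gamma distribution's mode and sd to its shape and rate (a sketch):
gammaShRaFromModeSD = function( mode , sd ) {
  # Solve mode = (shape-1)/rate and sd = sqrt(shape)/rate , for mode > 0:
  rate = ( mode + sqrt( mode^2 + 4*sd^2 ) ) / ( 2 * sd^2 )
  shape = 1 + mode * rate
  c( shape=shape , rate=rate )
}
gammaShRaFromModeSD( mode=3 , sd=3 ) # shape = 2.6180 , rate = 0.5393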

We forgo a prior predictive check in this case because the prior is designed to be broadly symmetric, allowing extreme data distributions in either the high or low direction. That is, the prior is not designed to mimic any particular pattern of real data.

2.4 Ordered-probit analysis of Recalled product ratings

2.4.1 Recalled High-Satisfaction product ratings

########## Recalled transactions ########## 

### To-Post & Not-To-Post ratings of recalled HS product: 
## To post recalled HS
HS_mtp = DataSummary_catch[, c(6, 10:17)] # extract these columns
HS_mtp_rating = HS_mtp[, c(2:9)] 
HS_mtp_rating = na.omit(HS_mtp_rating) # omit NAs, if any

v1 = count(HS_mtp_rating, "recallHighSat_mTPWarn")
HS_warn = v1[, -1]

v2 = count(HS_mtp_rating, "recallHighSat_mTPFind")
HS_find = v2[, -1]

v3 = count(HS_mtp_rating, "recallHighSat_mTPPunish")
HS_punish = v3[, -1]

v4 = count(HS_mtp_rating, "recallHighSat_mTPReward")
HS_reward = v4[, -1]

v5 = count(HS_mtp_rating, "recallHighSat_mTPCommun")
HS_reviewers = v5[, -1]

v6 = count(HS_mtp_rating, "recallHighSat_mTPConsum")
HS_consumers = v6[, -1]

v7 = count(HS_mtp_rating, "recallHighSat_mTPGuilt")
HS_guilt = v7[, -1]

v8 = count(HS_mtp_rating, "recallHighSat_mTPRecip")
HS_recip = v8[, -1]

## Not-To-Post recalled HS
HS_mnp = DataSummary_catch[, c(6, 18:23)] # extract these columns
HS_mnp_rating = HS_mnp[, c(2:7)] 
HS_mnp_rating = na.omit(HS_mnp_rating) # omit NAs, if any

v1 = count(HS_mnp_rating, "recallHighSat_mNPEffort")
HS_effort = v1[, -1]

v2 = count(HS_mnp_rating, "recallHighSat_mNPBogus")
HS_bogus = v2[, -1]

v3 = count(HS_mnp_rating, "recallHighSat_mNPImpact")
HS_impact = v3[, -1]

v4 = count(HS_mnp_rating, "recallHighSat_mNPRedund")
HS_redund = v4[, -1]

v5 = count(HS_mnp_rating, "recallHighSat_mNPCritic")
HS_no_crit = v5[, -1]

v6 = count(HS_mnp_rating, "recallHighSat_mNPHype")
HS_no_hype = v6[, -1]

HS_motivation_table = rbind(HS_warn, HS_find, HS_punish, HS_reward, HS_reviewers,
                        HS_consumers, HS_guilt, HS_recip, 
                        HS_effort, HS_bogus, HS_impact, HS_redund, HS_no_crit,
                        HS_no_hype)

HS_motivation_table = cbind(rownames(HS_motivation_table), HS_motivation_table)
colnames(HS_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(HS_motivation_table, file = paste0( tempFolder,"/", "recall_HS.csv" ) )
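# Aside -- a sketch, not the code used for these analyses: the count()/rbind
# steps above can be written more compactly. Using table() with explicit
# factor levels fills in zero counts for absent rating levels automatically
# (which also avoids the manual missing-row fixes needed for some scenarios below):
ratingFreqs = function( ratingDF ) {
  t( sapply( ratingDF , function(x) table( factor( x , levels=1:5 ) ) ) )
}
# rbind( ratingFreqs(HS_mtp_rating) , ratingFreqs(HS_mnp_rating) ) reproduces
# the n1..n5 columns of HS_motivation_table.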
# Run the ordered-probit analysis of Recalled High-Satisfaction ratings:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "recall_HS.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
  )

2.4.1.1 Explanation of the plots

In the plots shown above, the panels show bar graphs of the raw data with the fit of the ordered-probit model superimposed. Importantly, these graphs provide a posterior-predictive check as recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021). Each bar shows the raw frequency of the rating, and superimposed on the bar is a dot showing the median posterior probability of that rating and a vertical segment showing the 95% HDI (highest density interval) of the estimated probability. The fit of the model tends to be very good, indicating that its parameter estimates can be interpreted meaningfully.

The 14 panels of bar graphs show the frequency of each ordinal rating for the 14 questions. The header of each panel indicates the level of satisfaction (HS, MS, LS), the question being asked (warn, find, punish, reward, etc.), and the total number of ratings for that question (N). The subtitle of each panel indicates the median posterior estimate of the mean (mu) and standard deviation (sigma) of the ordered-probit model. Additional graphs displayed later show these results more succinctly.
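
To make the posterior-predictive overlay concrete, here is a sketch (not the plotting code actually used) of how the dot and segment for each rating could be computed from the saved MCMC samples. It assumes the coda object names its parameters mu[i], sigma[i], and thresh[k], consistent with the diagnostics code below, and it summarizes with equal-tailed quantiles rather than the HDIs shown in the actual plots:

postPredProbs = function( mcmcCoda , i ) {
  mcmcMat = as.matrix( mcmcCoda , chains=TRUE )
  threshCols = paste0( "thresh[" , 1:4 , "]" )
  # At each saved step, convert mu[i], sigma[i], and the thresholds into the
  # five response probabilities via the cumulative normal:
  probSamp = apply( mcmcMat , 1 , function(s) {
    cumProb = c( pnorm( ( s[threshCols] - s[paste0("mu[",i,"]")] )
                        / s[paste0("sigma[",i,"]")] ) , 1 )
    diff( c( 0 , cumProb ) ) } )
  # Median (the plotted dot) and 95% interval (the plotted segment) per rating:
  apply( probSamp , 1 , quantile , probs=c(0.025,0.5,0.975) )
}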

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model. The columns indicate MCMC diagnostics and estimated values. Column headers are as follows:

  • psrfPt is the point estimate of the potential scale reduction factor (PSRF, the Gelman-Rubin convergence diagnostic)
  • psrfUpCI is the upper limit of the PSRF's confidence interval
  • ESS is the effective sample size of the MCMC chain
  • 50% is the median of the estimate
  • 2.5% and 97.5% indicate the limits of the 95% equal-tailed interval (ETI)
  • Mode is the value with highest density computed by a kernel-density estimator
  • HDIlow and HDIhigh indicate the limits of the 95% highest-density interval (HDI).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.000541, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 16161.9, indicating stable estimates of the limits of the credible intervals, meeting the minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

The estimated latent ratings with their 95% HDIs are plotted below. The gray regions are merely for visual reference, with the gray zone on the left indicating latent means that yield mostly ‘1’ ratings, and the narrow gray zone in the middle marking the midpoint of the latent response scale.

plotLatentRatings( scenario="recall" , satisLevel=c("HS","MS","LS")[1] )

For explanation of the Motive names, please see the List of Motivations for Study 1.

2.4.2 Recalled Low-Satisfaction product ratings

### To-Post & Not-To-Post ratings of recalled LS product: 
## To post recalled LS
LS_mtp = DataSummary_catch[, c(6, 30:37)] # extract these columns
LS_mtp_rating = LS_mtp[, c(2:9)] 
LS_mtp_rating = na.omit(LS_mtp_rating) 

v1 = count(LS_mtp_rating, "recallLowSat_mTPWarn")
LS_warn = v1[, -1]

v2 = count(LS_mtp_rating, "recallLowSat_mTPFind")
LS_find = v2[, -1]

v3 = count(LS_mtp_rating, "recallLowSat_mTPPunish")
LS_punish = v3[, -1]

v4 = count(LS_mtp_rating, "recallLowSat_mTPReward")
LS_reward = v4[, -1]

v5 = count(LS_mtp_rating, "recallLowSat_mTPCommun")
LS_reviewers = v5[, -1]

v6 = count(LS_mtp_rating, "recallLowSat_mTPConsum")
LS_consumers = v6[, -1]

v7 = count(LS_mtp_rating, "recallLowSat_mTPGuilt")
LS_guilt = v7[, -1]

v8 = count(LS_mtp_rating, "recallLowSat_mTPRecip")
LS_recip = v8[, -1]

## Not-To-Post recalled LS:
LS_mnp = DataSummary_catch[, c(6, 38:43)] # extract these columns
LS_mnp_rating = LS_mnp[, c(2:7)] 
LS_mnp_rating = na.omit(LS_mnp_rating)

v1 = count(LS_mnp_rating, "recallLowSat_mNPEffort")
LS_effort = v1[, -1]

v2 = count(LS_mnp_rating, "recallLowSat_mNPBogus")
LS_bogus = v2[, -1]

v3 = count(LS_mnp_rating, "recallLowSat_mNPImpact")
v3 = v3[-6,] # Delete a row that contains frequencies of NA
LS_impact = v3[, -1]

v4 = count(LS_mnp_rating, "recallLowSat_mNPRedund")
LS_redund = v4[, -1]

v5 = count(LS_mnp_rating, "recallLowSat_mNPCritic")
v5 = v5[-6,] # Delete a row that contains frequencies of NA
LS_no_crit = v5[, -1]

v6 = count(LS_mnp_rating, "recallLowSat_mNPHype")
LS_no_hype = v6[, -1]


LS_motivation_table = rbind(LS_warn, LS_find, LS_punish, LS_reward, LS_reviewers,
                            LS_consumers, LS_guilt, LS_recip,
                            LS_effort, LS_bogus, LS_impact, LS_redund, LS_no_crit,
                            LS_no_hype)

LS_motivation_table = cbind(rownames(LS_motivation_table), LS_motivation_table)
colnames(LS_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(LS_motivation_table, file =paste0( tempFolder,"/",  "recall_LS.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "recall_LS.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For an explanation of the plots, please refer to Section 2.4.1.1 (Explanation of the plots) above.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model; the columns indicate MCMC diagnostics and estimated values, with column headers as described in the first MCMC diagnostics table (Section 2.4.1.1).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.000979, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 14420.12, indicating stable estimates of the limits of the credible intervals, meeting the minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

The estimated latent ratings with their 95% HDIs are plotted below. The gray regions are merely for visual reference, with the gray zone on the left indicating latent means that yield mostly ‘1’ ratings, and the narrow gray zone in the middle marking the midpoint of the latent response scale.

plotLatentRatings( scenario="recall" , satisLevel=c("HS","MS","LS")[3] )

For explanation of the Motive names, please see the List of Motivations for Study 1.

2.4.3 Recalled Medium-Satisfaction product ratings

### To-Post & Not-To-Post ratings of recalled MS product: 
## To post recalled MS
MS_mtp = DataSummary_catch[, c(6, 50:57)] # extract these columns
MS_mtp_rating = MS_mtp[, c(2:9)] 
MS_mtp_rating = na.omit(MS_mtp_rating)

v1 = count(MS_mtp_rating, "recallMedSat_mTPWarn")
MS_warn = v1[, -1]

v2 = count(MS_mtp_rating, "recallMedSat_mTPFind")
MS_find = v2[, -1]

v3 = count(MS_mtp_rating, "recallMedSat_mTPPunish")
MS_punish = v3[, -1]

v4 = count(MS_mtp_rating, "recallMedSat_mTPReward")
MS_reward = v4[, -1]

v5 = count(MS_mtp_rating, "recallMedSat_mTPCommun")
MS_reviewers = v5[, -1]

v6 = count(MS_mtp_rating, "recallMedSat_mTPConsum")
MS_consumers = v6[, -1]

v7 = count(MS_mtp_rating, "recallMedSat_mTPGuilt")
MS_guilt = v7[, -1]

v8 = count(MS_mtp_rating, "recallMedSat_mTPRecip")
MS_recip = v8[, -1]

## Not-To-Post ratings of recalled MS
MS_mnp = DataSummary_catch[, c(6, 58:63)] # extract these columns
MS_mnp_rating = MS_mnp[, c(2:7)] 
MS_mnp_rating = na.omit(MS_mnp_rating)

v1 = count(MS_mnp_rating, "recallMedSat_mNPEffort")
MS_effort = v1[, -1]

v2 = count(MS_mnp_rating, "recallMedSat_mNPBogus")
MS_bogus = v2[, -1]

v3 = count(MS_mnp_rating, "recallMedSat_mNPImpact")
MS_impact = v3[, -1]

v4 = count(MS_mnp_rating, "recallMedSat_mNPRedund")
MS_redund = v4[, -1]

v5 = count(MS_mnp_rating, "recallMedSat_mNPCritic")
MS_no_crit = v5[, -1]

v6 = count(MS_mnp_rating, "recallMedSat_mNPHype")
MS_no_hype = v6[, -1]

MS_motivation_table = rbind(MS_warn, MS_find, MS_punish, MS_reward, MS_reviewers,
                     MS_consumers, MS_guilt, MS_recip,
                     MS_effort, MS_bogus, MS_impact, MS_redund, MS_no_crit,
                     MS_no_hype)

MS_motivation_table = cbind(rownames(MS_motivation_table), MS_motivation_table)
colnames(MS_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(MS_motivation_table, file = paste0( tempFolder,"/", "recall_MS.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "recall_MS.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For an explanation of the plots, please refer to Section 2.4.1.1 (Explanation of the plots) above.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model; the columns indicate MCMC diagnostics and estimated values, with column headers as described in the first MCMC diagnostics table (Section 2.4.1.1).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.001067, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 14473.04, indicating stable estimates of the limits of the credible intervals, meeting the minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

The estimated latent ratings with their 95% HDIs are plotted below. The gray regions are merely for visual reference, with the gray zone on the left indicating latent means that yield mostly ‘1’ ratings, and the narrow gray zone in the middle marking the midpoint of the latent response scale.

plotLatentRatings( scenario="recall" , satisLevel=c("HS","MS","LS")[2] )

For explanation of the Motive names, please see the List of Motivations for Study 1.

2.5 Ordered-probit analysis of Hypothetical product ratings

2.5.1 Ordered-probit analysis of Hypothetical High-Satisfaction product ratings

2.5.1.1 Hypothetical High-Satisfaction Team product ratings

########## Hypothetical transactions ########## 
### 1.High-satisfaction Team products

## To Post:
HS_sports_mtp = DataSummary_catch[, c(101:108)]
HS_sports_mtp = na.omit(HS_sports_mtp)

v1 = count(HS_sports_mtp, "V3_HighSat_mTPWarn")
HSspo_warn = v1[, -1]

v2 = count(HS_sports_mtp, "V3_HighSat_mTPFind")
HSspo_find = v2[, -1]

v3 = count(HS_sports_mtp, "V3_HighSat_mTPPunish")
HSspo_punish = v3[, -1]

v4 = count(HS_sports_mtp, "V3_HighSat_mTPReward")
HSspo_reward = v4[, -1]

v5 = count(HS_sports_mtp, "V3_HighSat_mTPCommun")
HSspo_reviewers = v5[, -1]

v6 = count(HS_sports_mtp, "V3_HighSat_mTPConsum")
HSspo_consumers = v6[, -1]

v7 = count(HS_sports_mtp, "V3_HighSat_mTPGuilt")
HSspo_guilt = v7[, -1]

v8 = count(HS_sports_mtp, "V3_HighSat_mTPRecip")
HSspo_recip = v8[, -1]

## NOT To Post:
HS_sports_mnp = DataSummary_catch[, c(109:114)]
HS_sports_mnp = na.omit(HS_sports_mnp)

v1 = count(HS_sports_mnp, "V3_HighSat_mNPEffort")
effort = v1[, -1]

v2 = count(HS_sports_mnp, "V3_HighSat_mNPBogus")
bogus = v2[, -1]

v3 = count(HS_sports_mnp, "V3_HighSat_mNPImpact")
impact = v3[, -1]

v4 = count(HS_sports_mnp, "V3_HighSat_mNPRedund")
redund = v4[, -1]

v5 = count(HS_sports_mnp, "V3_HighSat_mNPCritic")
no_crit = v5[, -1]

v6 = count(HS_sports_mnp, "V3_HighSat_mNPHype")
no_hype = v6[, -1]


HS_sports_motivation_table = rbind(HSspo_warn, HSspo_find, HSspo_punish, HSspo_reward, HSspo_reviewers,
                            HSspo_consumers, HSspo_guilt, HSspo_recip,
                            effort, bogus, impact, redund, no_crit, no_hype)


HS_sports_motivation_table = cbind(rownames(HS_sports_motivation_table), HS_sports_motivation_table)
colnames(HS_sports_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(HS_sports_motivation_table, file = paste0( tempFolder,"/", "V1_HS_sports.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "V1_HS_sports.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For an explanation of the plots, please refer to Section 2.4.1.1 (Explanation of the plots) above.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model; the columns indicate MCMC diagnostics and estimated values, with column headers as described in the first MCMC diagnostics table (Section 2.4.1.1).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.000992, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 10822.11, indicating stable estimates of the limits of the credible intervals, meeting the minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

2.5.1.2 Hypothetical High-Satisfaction Generic product ratings

### 2. High-satisfaction Generic (electronic) products

## To Post:
HS_electronics_mtp = DataSummary_catch[, c(118:125)]
HS_electronics_mtp = na.omit(HS_electronics_mtp)

v1 = count(HS_electronics_mtp, "V4_HighSat_mTPWarn")
HSele_warn = v1[, -1]

v2 = count(HS_electronics_mtp, "V4_HighSat_mTPFind")
find = v2[, -1]

v3 = count(HS_electronics_mtp, "V4_HighSat_mTPPunish")
punish = v3[, -1]

v4 = count(HS_electronics_mtp, "V4_HighSat_mTPReward")
reward = v4[, -1]

v5 = count(HS_electronics_mtp, "V4_HighSat_mTPCommun")
reviewers = v5[, -1]

v6 = count(HS_electronics_mtp, "V4_HighSat_mTPConsum")
consumers = v6[, -1]

v7 = count(HS_electronics_mtp, "V4_HighSat_mTPGuilt")
guilt = v7[, -1]

v8 = count(HS_electronics_mtp, "V4_HighSat_mTPRecip")
recip = v8[, -1]

## NOT To Post:
HS_electronics_mnp = DataSummary_catch[, c(126:131)]
HS_electronics_mnp = na.omit(HS_electronics_mnp)

v1 = count(HS_electronics_mnp, "V4_HighSat_mNPEffort")
effort = v1[, -1]

v2 = count(HS_electronics_mnp, "V4_HighSat_mNPBogus")
bogus = v2[, -1]

v3 = count(HS_electronics_mnp, "V4_HighSat_mNPImpact")
impact = v3[, -1]

v4 = count(HS_electronics_mnp, "V4_HighSat_mNPRedund")
redund = v4[, -1]

v5 = count(HS_electronics_mnp, "V4_HighSat_mNPCritic")
no_critic = v5[, -1]

v6 = count(HS_electronics_mnp, "V4_HighSat_mNPHype")
no_hype = v6[, -1]

HS_electronics_motivation_table = rbind(HSele_warn, find, punish, reward, reviewers,
                            consumers, guilt, recip,
                            effort, bogus, impact, redund, no_critic, no_hype)

HS_electronics_motivation_table = cbind(rownames(HS_electronics_motivation_table), HS_electronics_motivation_table)
colnames(HS_electronics_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(HS_electronics_motivation_table, 
          file = paste0( tempFolder,"/", "V2_HS_electronics.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "V2_HS_electronics.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For an explanation of the plots, please refer to Section 2.4.1.1 (Explanation of the plots) above.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model; the columns indicate MCMC diagnostics and estimated values, with column headers as described in the first MCMC diagnostics table (Section 2.4.1.1).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.000981, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 10866.79, indicating stable estimates of the limits of the credible intervals, meeting the minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

2.5.1.3 Hypothetical High-Satisfaction Handmade product ratings

### 3. High-satisfaction Handmade products

## To Post:
HS_handmade_mtp = DataSummary_catch[, c(186:193)]
HS_handmade_mtp = na.omit(HS_handmade_mtp)

v1 = count(HS_handmade_mtp, "V8_HighSat_mTPWarn")
HShan_warn = v1[, -1]

v2 = count(HS_handmade_mtp, "V8_HighSat_mTPFind")
find = v2[, -1]

v3 = count(HS_handmade_mtp, "V8_HighSat_mTPPunish")
punish = v3[, -1]

v4 = count(HS_handmade_mtp, "V8_HighSat_mTPReward")
reward = v4[, -1]

v5 = count(HS_handmade_mtp, "V8_HighSat_mTPCommun")
reviewers = v5[, -1]

v6 = count(HS_handmade_mtp, "V8_HighSat_mTPConsum")
consumers = v6[, -1]

v7 = count(HS_handmade_mtp, "V8_HighSat_mTPGuilt")
guilt = v7[, -1]

v8 = count(HS_handmade_mtp, "V8_HighSat_mTPRecip")
recip = v8[, -1]

## NOT To Post:
HS_handmade_mnp = DataSummary_catch[, c(194:199)]
HS_handmade_mnp = na.omit(HS_handmade_mnp) 

v1 = count(HS_handmade_mnp, "V8_HighSat_mNPEffort")
effort = v1[, -1]

v2 = count(HS_handmade_mnp, "V8_HighSat_mNPBogus")
bogus = v2[, -1]

v3 = count(HS_handmade_mnp, "V8_HighSat_mNPImpact")
impact = v3[, -1]

v4 = count(HS_handmade_mnp, "V8_HighSat_mNPRedund")
redund = v4[, -1]

v5 = count(HS_handmade_mnp, "V8_HighSat_mNPCritic")
no_crit = v5[, -1]

v6 = count(HS_handmade_mnp, "V8_HighSat_mNPHype")
no_hype = v6[, -1]

HS_handmade_motivation_table = rbind(HShan_warn, find, punish, reward, reviewers,
                                 consumers, guilt, recip,
                                 effort, bogus, impact, redund, no_crit, no_hype)

HS_handmade_motivation_table = cbind(rownames(HS_handmade_motivation_table), HS_handmade_motivation_table)
colnames(HS_handmade_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(HS_handmade_motivation_table, 
          file = paste0( tempFolder,"/", "V3_HS_handmade.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "V3_HS_handmade.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For an explanation of the plots, please refer to Section 2.4.1.1 (Explanation of the plots) above.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model; the columns indicate MCMC diagnostics and estimated values, with column headers as described in the first MCMC diagnostics table (Section 2.4.1.1).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.000726, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 10982.57, indicating stable estimates of the limits of the credible intervals, meeting the minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

2.5.2 Ordered-probit analysis of Hypothetical Low-Satisfaction product ratings

2.5.2.1 Hypothetical Low-Satisfaction Team product ratings

### 4. Low-satisfaction Team products

## To Post:
LS_sports_mtp = DataSummary_catch[, c(169:176)]
LS_sports_mtp = na.omit(LS_sports_mtp)

v1 = count(LS_sports_mtp, "V7_LowSat_mTPWarn")
LSspo_warn = v1[, -1]

v2 = count(LS_sports_mtp, "V7_LowSat_mTPFind")
find = v2[, -1]

v3 = count(LS_sports_mtp, "V7_LowSat_mTPPunish")
punish = v3[, -1]

v4 = count(LS_sports_mtp, "V7_LowSat_mTPReward")
reward = v4[, -1]

v5 = count(LS_sports_mtp, "V7_LowSat_mTPCommun")
v5 = rbind(v5, c(5,0)) # add a missing row: rating level 5 with frequency 0
reviewers = v5[, -1]

v6 = count(LS_sports_mtp, "V7_LowSat_mTPConsum")
v6 = rbind(v6, c(5,0)) # add a missing row: rating level 5 with frequency 0
consumers = v6[, -1]

v7 = count(LS_sports_mtp, "V7_LowSat_mTPGuilt")
guilt = v7[, -1]

v8 = count(LS_sports_mtp, "V7_LowSat_mTPRecip")
recip = v8[, -1]

## NOT To Post:
LS_sports_mnp = DataSummary_catch[, c(177:182)]
LS_sports_mnp = na.omit(LS_sports_mnp)

v1 = count(LS_sports_mnp, "V7_LowSat_mNPEffort")
effort = v1[, -1]

v2 = count(LS_sports_mnp, "V7_LowSat_mNPBogus")
bogus = v2[, -1]

v3 = count(LS_sports_mnp, "V7_LowSat_mNPImpact")
impact = v3[, -1]

v4 = count(LS_sports_mnp, "V7_LowSat_mNPRedund")
redund = v4[, -1]

v5 = count(LS_sports_mnp, "V7_LowSat_mNPCritic")
no_crit = v5[, -1]

v6 = count(LS_sports_mnp, "V7_LowSat_mNPHype")
no_hype = v6[, -1]


LS_sports_motivation_table = rbind(LSspo_warn, find, punish, reward, reviewers,
                            consumers, guilt, recip,
                            effort, bogus, impact, redund, no_crit, no_hype)

LS_sports_motivation_table = cbind(rownames(LS_sports_motivation_table), LS_sports_motivation_table)
colnames(LS_sports_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(LS_sports_motivation_table, 
          file = paste0( tempFolder,"/", "V4_LS_sports.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "V4_LS_sports.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For an explanation of the plots, please refer to Section 2.4.1.1 (Explanation of the plots) above.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model; the columns indicate MCMC diagnostics and estimated values, with column headers as described in the first MCMC diagnostics table (Section 2.4.1.1).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.001103, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 13682.88, indicating stable estimates of the limits of the credible intervals, meeting the minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

2.5.2.2 Hypothetical Low-Satisfaction Generic product ratings

### 5. Low-satisfaction Generic (electronic) products

## To Post:
LS_electronics_mtp = DataSummary_catch[, c(152:159)]
LS_electronics_mtp = na.omit(LS_electronics_mtp)

v1 = count(LS_electronics_mtp, "V6_LowSat_mTPWarn")
LSele_warn = v1[, -1]

v2 = count(LS_electronics_mtp, "V6_LowSat_mTPFind")
find = v2[, -1]

v3 = count(LS_electronics_mtp, "V6_LowSat_mTPPunish")
punish = v3[, -1]

v4 = count(LS_electronics_mtp, "V6_LowSat_mTPReward")
reward = v4[, -1]

v5 = count(LS_electronics_mtp, "V6_LowSat_mTPCommun")
reviewers = v5[, -1]

v6 = count(LS_electronics_mtp, "V6_LowSat_mTPConsum")
v6 = rbind(v6, c(5,0)) # add a missing row: rating level 5 with frequency 0
consumers = v6[, -1]

v7 = count(LS_electronics_mtp, "V6_LowSat_mTPGuilt")
guilt = v7[, -1]

v8 = count(LS_electronics_mtp, "V6_LowSat_mTPRecip")
recip = v8[, -1]

## Not To Post:
LS_electronics_mnp = DataSummary_catch[, c(160:165)]
LS_electronics_mnp = na.omit(LS_electronics_mnp)

v1 = count(LS_electronics_mnp, "V6_LowSat_mNPEffort")
effort = v1[, -1]

v2 = count(LS_electronics_mnp, "V6_LowSat_mNPBogus")
bogus = v2[, -1]

v3 = count(LS_electronics_mnp, "V6_LowSat_mNPImpact")
impact = v3[, -1]

v4 = count(LS_electronics_mnp, "V6_LowSat_mNPRedund")
redund = v4[, -1]

v5 = count(LS_electronics_mnp, "V6_LowSat_mNPCritic")
no_crit = v5[, -1]

v6 = count(LS_electronics_mnp, "V6_LowSat_mNPHype")
no_hype = v6[, -1]


LS_electronics_motivation_table = rbind(LSele_warn, find, punish, reward, reviewers,
                                 consumers, guilt, recip,
                                 effort, bogus, impact, redund, no_crit, no_hype)

LS_electronics_motivation_table = cbind(rownames(LS_electronics_motivation_table), LS_electronics_motivation_table)
colnames(LS_electronics_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(LS_electronics_motivation_table, 
          file = paste0( tempFolder,"/", "V5_LS_electronics.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "V5_LS_electronics.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For an explanation of the plots, please refer to Section 2.4.1.1 (Explanation of the plots) above.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model; the columns indicate MCMC diagnostics and estimated values, with column headers as described in the first MCMC diagnostics table (Section 2.4.1.1).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.000391, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 13440.09, indicating stable estimates of the limits of the credible intervals, meeting the minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

2.5.2.3 Hypothetical Low-Satisfaction Handmade product ratings

### 6. Low-satisfaction Handmade products

## To Post:
LS_handmade_mtp = DataSummary_catch[, c(84:91)]
LS_handmade_mtp = na.omit(LS_handmade_mtp)

v1 = count(LS_handmade_mtp, "V2_LowSat_mTPWarn")
LShan_warn = v1[, -1]

v2 = count(LS_handmade_mtp, "V2_LowSat_mTPFind")
find = v2[, -1]

v3 = count(LS_handmade_mtp, "V2_LowSat_mTPPunish")
punish = v3[, -1]

v4 = count(LS_handmade_mtp, "V2_LowSat_mTPReward")
reward = v4[, -1]

v5 = count(LS_handmade_mtp, "V2_LowSat_mTPCommun")
reviewers = v5[, -1]

v6 = count(LS_handmade_mtp, "V2_LowSat_mTPConsum")
consumers = v6[, -1]

v7 = count(LS_handmade_mtp, "V2_LowSat_mTPGuilt")
guilt = v7[, -1]

v8 = count(LS_handmade_mtp, "V2_LowSat_mTPRecip")
recip = v8[, -1]

## Not To Post:
LS_handmade_mnp = DataSummary_catch[, c(92:97)]
LS_handmade_mnp = na.omit(LS_handmade_mnp)

v1 = count(LS_handmade_mnp, "V2_LowSat_mNPEffort")
effort = v1[, -1]

v2 = count(LS_handmade_mnp, "V2_LowSat_mNPBogus")
bogus = v2[, -1]

v3 = count(LS_handmade_mnp, "V2_LowSat_mNPImpact")
impact = v3[, -1]

v4 = count(LS_handmade_mnp, "V2_LowSat_mNPRedund")
redund = v4[, -1]

v5 = count(LS_handmade_mnp, "V2_LowSat_mNPCritic")
no_crit = v5[, -1]

v6 = count(LS_handmade_mnp, "V2_LowSat_mNPHype")
no_hype = v6[, -1]

LS_handmade_motivation_table = rbind(LShan_warn, find, punish, reward, 
                              reviewers, consumers, guilt, recip,
                              effort, bogus, impact, redund, no_crit, no_hype)

LS_handmade_motivation_table = cbind(rownames(LS_handmade_motivation_table), LS_handmade_motivation_table)
colnames(LS_handmade_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(LS_handmade_motivation_table, 
          file = paste0( tempFolder,"/", "V6_LS_handmade.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "V6_LS_handmade.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For an explanation of the plots, please refer to Section 2.4.1.1 (Explanation of the plots) above.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model; the columns indicate MCMC diagnostics and estimated values, with column headers as described in the first MCMC diagnostics table (Section 2.4.1.1).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.000463, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 8912.723, falling short of the minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021), so the limits of that parameter’s credible interval may be somewhat unstable.

2.5.3 Ordered-probit analysis of Hypothetical Medium-Satisfaction product ratings

2.5.3.1 Hypothetical Medium-Satisfaction Team product ratings

### 7. Medium-satisfaction Team products

## To Post:
MS_sports_mtp = DataSummary_catch[, c(67:74)]
MS_sports_mtp = na.omit(MS_sports_mtp)

v1 = count(MS_sports_mtp, "V1_MedSat_mTPWarn")
MSspo_warn = v1[, -1]

v2 = count(MS_sports_mtp, "V1_MedSat_mTPFind")
find = v2[, -1]

v3 = count(MS_sports_mtp, "V1_MedSat_mTPPunish")
punish = v3[, -1]

v4 = count(MS_sports_mtp, "V1_MedSat_mTPReward")
reward = v4[, -1]

v5 = count(MS_sports_mtp, "V1_MedSat_mTPCommun")
reviewers = v5[, -1]

v6 = count(MS_sports_mtp, "V1_MedSat_mTPConsum")
consumers = v6[, -1]

v7 = count(MS_sports_mtp, "V1_MedSat_mTPGuilt")
guilt = v7[, -1]

v8 = count(MS_sports_mtp, "V1_MedSat_mTPRecip")
recip = v8[, -1]

## Not To Post:
MS_sports_mnp = DataSummary_catch[, c(75:80)]
MS_sports_mnp = na.omit(MS_sports_mnp)

v1 = count(MS_sports_mnp, "V1_MedSat_mNPEffort")
effort = v1[, -1]

v2 = count(MS_sports_mnp, "V1_MedSat_mNPBogus")
bogus = v2[, -1]

v3 = count(MS_sports_mnp, "V1_MedSat_mNPImpact")
impact = v3[, -1]

v4 = count(MS_sports_mnp, "V1_MedSat_mNPRedund")
redund = v4[, -1]

v5 = count(MS_sports_mnp, "V1_MedSat_mNPCritic")
no_crit = v5[, -1]

v6 = count(MS_sports_mnp, "V1_MedSat_mNPHype")
no_hype = v6[, -1]

MS_sports_motivation_table = rbind(MSspo_warn, find, punish, reward, reviewers,
                            consumers, guilt, recip,
                            effort, bogus, impact, redund, no_crit, no_hype)

MS_sports_motivation_table = cbind(rownames(MS_sports_motivation_table), MS_sports_motivation_table)
colnames(MS_sports_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(MS_sports_motivation_table, 
          file = paste0( tempFolder,"/", "V7_MS_sports.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "V7_MS_sports.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For explanation of the plots, please refer to this section.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model. The columns indicate MCMC diagnostics and estimated values. Column headers are as follows:

  • psrfPt is the point value of the psrf
  • psrfUpCI is an upper bound on the psrf
  • ESS is the effective sample size of the MCMC chain
  • 50% is the median of the estimate
  • 2.5% and 97.5% indicate the limits of the 95% equal-tailed interval (ETI)
  • Mode is the value with highest density computed by a kernel-density estimator
  • HDIlow and HDIhigh indicate the limits of the 95% highest-density interval (HDI).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.000593, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 13420.17, indicating stable estimates of the limits of credible intervals, meeting the desired minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

2.5.3.2 Hypothetical Medium-Satisfaction Generic product ratings

### 8. Medium-satisfaction Generic (electronic) products 

## To Post:
MS_electronics_mtp = DataSummary_catch[, c(203:210)]
MS_electronics_mtp = na.omit(MS_electronics_mtp)

v1 = count(MS_electronics_mtp, "V9_MedSat_mTPWarn")
MSele_warn = v1[, -1]

v2 = count(MS_electronics_mtp, "V9_MedSat_mTPFind")
find = v2[, -1]

v3 = count(MS_electronics_mtp, "V9_MedSat_mTPPunish")
punish = v3[, -1]

v4 = count(MS_electronics_mtp, "V9_MedSat_mTPReward")
reward = v4[, -1]

v5 = count(MS_electronics_mtp, "V9_MedSat_mTPCommun")
v5 = rbind(v5, c(5,0)) # add a missing row 
reviewers = v5[, -1]

v6 = count(MS_electronics_mtp, "V9_MedSat_mTPConsum")
v6 = rbind(v6, c(5,0)) # add a missing row 
consumers = v6[, -1]

v7 = count(MS_electronics_mtp, "V9_MedSat_mTPGuilt")
guilt = v7[, -1]

v8 = count(MS_electronics_mtp, "V9_MedSat_mTPRecip")
recip = v8[, -1]

## Not To Post:
MS_electronics_mnp = DataSummary_catch[, c(211:216)]
MS_electronics_mnp = na.omit(MS_electronics_mnp)

v1 = count(MS_electronics_mnp, "V9_MedSat_mNPEffort")
effort = v1[, -1]

v2 = count(MS_electronics_mnp, "V9_MedSat_mNPBogus")
bogus = v2[, -1]

v3 = count(MS_electronics_mnp, "V9_MedSat_mNPImpact")
impact = v3[, -1]

v4 = count(MS_electronics_mnp, "V9_MedSat_mNPRedund")
redund = v4[, -1]

v5 = count(MS_electronics_mnp, "V9_MedSat_mNPCritic")
no_crit = v5[, -1]

v6 = count(MS_electronics_mnp, "V9_MedSat_mNPHype")
no_hype = v6[, -1]


MS_electronics_motivation_table = rbind(MSele_warn, find, punish, reward, reviewers,
                                 consumers, guilt, recip,
                                 effort, bogus, impact, redund, no_crit, no_hype)

MS_electronics_motivation_table = cbind(rownames(MS_electronics_motivation_table), MS_electronics_motivation_table)
colnames(MS_electronics_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(MS_electronics_motivation_table, 
          file = paste0( tempFolder,"/", "V8_MS_electronics.csv"))
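In the chunk above, lines like v5 = rbind(v5, c(5,0)) patch in rating levels that no respondent used, because plyr::count() omits zero-frequency levels from its output. A hypothetical helper, completeCounts() (a sketch only, not used in the analysis), shows a more general way to guarantee all five levels:

# Hypothetical helper (sketch): frequencies for all rating levels, with zeros
# filled in for levels absent from the data.
completeCounts = function( dataFrame , colName , levels=1:5 ) {
  freqTable = plyr::count( dataFrame , colName )
  freq = freqTable$freq[ match( levels , freqTable[[colName]] ) ]
  freq[ is.na(freq) ] = 0 # unused rating levels get zero counts
  freq
}
# Example use: consumers = completeCounts( MS_electronics_mtp , "V9_MedSat_mTPConsum" )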
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "V8_MS_electronics.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For explanation of the plots, please refer to this section.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model. The columns indicate MCMC diagnostics and estimated values. Column headers are as follows:

  • psrfPt is the point value of the psrf
  • psrfUpCI is an upper bound on the psrf
  • ESS is the effective sample size of the MCMC chain
  • 50% is the median of the estimate
  • 2.5% and 97.5% indicate the limits of the 95% equal-tailed interval (ETI)
  • Mode is the value with highest density computed by a kernel-density estimator
  • HDIlow and HDIhigh indicate the limits of the 95% highest-density interval (HDI).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.000366, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 13004.26, indicating stable estimates of the limits of credible intervals, meeting the desired minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

2.5.3.3 Hypothetical Medium-Satisfaction Handmade product ratings

### 9. Medium-satisfaction Handmade products

## To Post:
MS_handmade_mtp = DataSummary_catch[, c(135:142)]
MS_handmade_mtp = na.omit(MS_handmade_mtp)

v1 = count(MS_handmade_mtp, "V5_MedSat_mTPWarn")
MShan_warn = v1[, -1]

v2 = count(MS_handmade_mtp, "V5_MedSat_mTPFind")
find = v2[, -1]

v3 = count(MS_handmade_mtp, "V5_MedSat_mTPPunish")
punish = v3[, -1]

v4 = count(MS_handmade_mtp, "V5_MedSat_mTPReward")
reward = v4[, -1]

v5 = count(MS_handmade_mtp, "V5_MedSat_mTPCommun")
v5 = rbind(v5, c(5,0)) # add a missing row 
reviewers = v5[, -1]

v6 = count(MS_handmade_mtp, "V5_MedSat_mTPConsum")
consumers = v6[, -1]

v7 = count(MS_handmade_mtp, "V5_MedSat_mTPGuilt")
guilt = v7[, -1]

v8 = count(MS_handmade_mtp, "V5_MedSat_mTPRecip")
recip = v8[, -1]

## Not to Post:
MS_handmade_mnp = DataSummary_catch[, c(143:148)]
MS_handmade_mnp = na.omit(MS_handmade_mnp)

v1 = count(MS_handmade_mnp, "V5_MedSat_mNPEffort")
effort = v1[, -1]

v2 = count(MS_handmade_mnp, "V5_MedSat_mNPBogus")
bogus = v2[, -1]

v3 = count(MS_handmade_mnp, "V5_MedSat_mNPImpact")
impact = v3[, -1]

v4 = count(MS_handmade_mnp, "V5_MedSat_mNPRedund")
redund = v4[, -1]

v5 = count(MS_handmade_mnp, "V5_MedSat_mNPCritic")
no_crit = v5[, -1]

v6 = count(MS_handmade_mnp, "V5_MedSat_mNPHype")
no_hype = v6[, -1]

MS_handmade_motivation_table = rbind(MShan_warn, find, punish, reward, 
                              reviewers, consumers, guilt, recip,
                              effort, bogus, impact, redund, no_crit, no_hype)

MS_handmade_motivation_table = cbind(rownames(MS_handmade_motivation_table), MS_handmade_motivation_table)
colnames(MS_handmade_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(MS_handmade_motivation_table, 
          file = paste0( tempFolder,"/", "V9_MS_handmade.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "V9_MS_handmade.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For explanation of the plots, please refer to this section.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model. The columns indicate MCMC diagnostics and estimated values. Column headers are as follows:

  • psrfPt is the point value of the psrf
  • psrfUpCI is an upper bound on the psrf
  • ESS is the effective sample size of the MCMC chain
  • 50% is the median of the estimate
  • 2.5% and 97.5% indicate the limits of the 95% equal-tailed interval (ETI)
  • Mode is the value with highest density computed by a kernel-density estimator
  • HDIlow and HDIhigh indicate the limits of the 95% highest-density interval (HDI).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.000823, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 10836.45, indicating stable estimates of the limits of credible intervals, meeting the desired minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

3 Study 2: Replication and Generalization of Study 1

# Load the data:
DataSummary2 = read.csv( "Data_Summary_Study2.csv" )
DataSummary_catch2 = subset(DataSummary2, catchTrialCorrect == "5") # keep only respondents who passed the catch trials
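As a quick sanity check (illustrative only), the size of the catch-trial exclusion can be reported directly:

# Illustration: count respondents removed by the catch-trial filter.
nExcluded = nrow(DataSummary2) - nrow(DataSummary_catch2)
cat( "Excluded" , nExcluded , "of" , nrow(DataSummary2) , "respondents.\n" )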

3.1 Free-Rider Problem in Online Reviewing

Confirm the free-rider problem across levels of satisfaction:

library("plyr")

nSubjectS2 = nrow(DataSummary_catch2) # number of subjects in Study 2

# HS Product Q1: what proportion of respondents read reviews when deciding to purchase?
HS_read_count = count(DataSummary_catch2, 'recallHighSat_readReview')
print("High Satisfaction, Read Reviews:")
## [1] "High Satisfaction, Read Reviews:"
HS_read_count$freq[2] / nSubjectS2 # proportion who read reviews (HS)
## [1] 0.8920863
# HS product Q2: what proportion of respondents posted reviews after purchase?
HS_post_count = count(DataSummary_catch2, 'recallHighSat_postReview')
print("High Satisfaction, Posted Reviews:")
## [1] "High Satisfaction, Posted Reviews:"
HS_post_count$freq[2] / nSubjectS2 # proportion who posted reviews (HS)
## [1] 0.04316547
# LS product Q1: what proportion of respondents read reviews when deciding to purchase?
LS_read_count = count(DataSummary_catch2, 'recallLowSat_readReview')
print("Low Satisfaction, Read Reviews:")
## [1] "Low Satisfaction, Read Reviews:"
LS_read_count$freq[2] / nSubjectS2 # proportion who read reviews (LS)
## [1] 0.6115108
# LS product Q2: what proportion of respondents posted reviews after purchase?
LS_post_count = count(DataSummary_catch2, 'recallLowSat_postReview')
print("Low Satisfaction, Posted Reviews:")
## [1] "Low Satisfaction, Posted Reviews:"
LS_post_count$freq[2] / nSubjectS2 # proportion who posted reviews (LS)
## [1] 0.1654676
# MS product Q1: what proportion of respondents read reviews when deciding to purchase?
MS_read_count = count(DataSummary_catch2, 'recallMedSat_readReview')
print("Medium Satisfaction, Read Reviews:")
## [1] "Medium Satisfaction, Read Reviews:"
MS_read_count$freq[2] / nSubjectS2 # proportion who read reviews (MS)
## [1] 0.5035971
# MS product Q2: what proportion of respondents posted reviews after purchase?
MS_post_count = count(DataSummary_catch2, 'recallMedSat_postReview')
print("Medium Satisfaction, Posted Reviews:")
## [1] "Medium Satisfaction, Posted Reviews:"
MS_post_count$freq[2] / nSubjectS2 # proportion who posted reviews (MS)
## [1] 0.03597122

Proportions of participants who read other reviews before buying, and who posted their own review, for each of the three recalled products.
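The proportions computed above can also be gathered into a single summary table; this sketch assumes only the objects already created in this section:

# Sketch: one-table recap of the free-rider pattern in Study 2.
freeRider2 = data.frame(
  Satisfaction = c("High","Medium","Low") ,
  propRead     = c( HS_read_count$freq[2] , MS_read_count$freq[2] ,
                    LS_read_count$freq[2] ) / nSubjectS2 ,
  propPosted   = c( HS_post_count$freq[2] , MS_post_count$freq[2] ,
                    LS_post_count$freq[2] ) / nSubjectS2 )
print( freeRider2 , digits=3 ) # reading far exceeds posting at every level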

3.2 List of Motivations in Study 2

Study 2 used motivation statements that were somewhat different from, and differently worded than, those in Study 1, in order to replicate and generalize its results.

8 motivations to post:

  • I want to caution other consumers about a bad product

  • I want to help other consumers find a good product

  • I want to penalize the producer of the bad product

  • I want to honor the producer of the good product

  • I want to feel a sense of belonging to a group of consumers who share similar interests

  • I want to return the favor of posting reviews because I use others' reviews to make my decisions

  • I think many people post reviews, so posting is the normal thing to do

  • I feel collective power with other consumers over producers by posting a review

8 motivations NOT to post:

  • I feel that posting a review takes too much time and effort

  • I feel reviews in general are fake, so I do not post reviews

  • I feel reviews in general have no impact, so I do not post reviews

  • I feel posting a review is redundant with others already posted

  • I feel personal empathy with the producer and I am reluctant to criticize despite a bad product

  • I feel personal ambivalence toward the producer and I am reluctant to praise despite a good product

  • I think few people post reviews, so not posting is the normal thing to do

  • I feel posting a review does not lead to any collective power with other consumers over producers

3.3 Ordered-probit analysis of Recalled product ratings

The ordered-probit model was the same as used for Study 1; see the previous detailed description of the model.
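As a brief refresher (a sketch, not the model code itself): in the ordered-probit model, the probability of an observed rating k is the normal probability mass between the two thresholds flanking k. The snippet below assumes the usual convention of fixing the outer thresholds at 1.5 and 4.5, consistent with thresh[1] and thresh[4] being excluded from the diagnostics tables:

# Sketch of the ordered-probit likelihood for a 1-5 rating scale.
ordProbitProb = function( k , mu , sigma , thresh ) {
  thLo = c( -Inf , thresh )[ k ] # threshold just below rating k
  thHi = c( thresh , Inf )[ k ]  # threshold just above rating k
  pnorm( (thHi - mu)/sigma ) - pnorm( (thLo - mu)/sigma )
}
round( sapply( 1:5 , ordProbitProb , mu=3 , sigma=1 ,
               thresh=c(1.5,2.5,3.5,4.5) ) , 3 ) # probabilities of ratings 1..5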

3.3.1 Recalled High-Satisfaction product ratings

########## Recalled transactions ########## 

### To-Post & Not-To-Post ratings of recalled HS product: 
## To post recalled HS
HS_mtp2 = DataSummary_catch2[, c(6, 10:17)] # extract these columns
HS_mtp_rating2 = HS_mtp2[, c(2:9)] 
HS_mtp_rating2 = na.omit(HS_mtp_rating2) # omit NAs, if any

v1 = count(HS_mtp_rating2, "recallHighSat_mTPWarn")
HS_warn = v1[, -1]

v2 = count(HS_mtp_rating2, "recallHighSat_mTPHelp")
HS_help = v2[, -1]

v3 = count(HS_mtp_rating2, "recallHighSat_mTPPunish")
HS_punish = v3[, -1]

v4 = count(HS_mtp_rating2, "recallHighSat_mTPReward")
HS_reward = v4[, -1]

v5 = count(HS_mtp_rating2, "recallHighSat_mTPBelong")
HS_belong = v5[, -1]

v6 = count(HS_mtp_rating2, "recallHighSat_mTPRecip")
HS_recip = v6[, -1]

v7 = count(HS_mtp_rating2, "recallHighSat_mTPNorm")
HS_norm = v7[, -1]

v8 = count(HS_mtp_rating2, "recallHighSat_mTPPower")
HS_power = v8[, -1]

## Not-To-Post recalled HS
HS_mnp2 = DataSummary_catch2[, c(6, 18:25)] # extract these columns
HS_mnp_rating2 = HS_mnp2[, c(2:9)] 
HS_mnp_rating2 = na.omit(HS_mnp_rating2) # omit NAs, if any

v1 = count(HS_mnp_rating2, "recallHighSat_mNPEffort")
HS_effort = v1[, -1]

v2 = count(HS_mnp_rating2, "recallHighSat_mNPBogus")
HS_bogus = v2[, -1]

v3 = count(HS_mnp_rating2, "recallHighSat_mNPImpact")
HS_impact = v3[, -1]

v4 = count(HS_mnp_rating2, "recallHighSat_mNPRedund")
HS_redund = v4[, -1]

v5 = count(HS_mnp_rating2, "recallHighSat_mNPCritic")
HS_no_crit = v5[, -1]

v6 = count(HS_mnp_rating2, "recallHighSat_mNPHype")
v6 = rbind(v6, c(5,0)) # add a missing row 
HS_no_hype = v6[, -1]

v7 = count(HS_mnp_rating2, "ecallHighSat_mNPNorm")
HS_no_norm = v7[, -1]

v8 = count(HS_mnp_rating2, "ecallHighSat_mNPPower")
HS_no_power = v8[, -1]

HS_motivation_table2 = rbind(HS_warn, HS_help, HS_punish, HS_reward, HS_belong,
                        HS_recip, HS_norm, HS_power, 
                        HS_effort, HS_bogus, HS_impact, HS_redund, HS_no_crit,
                        HS_no_hype, HS_no_norm, HS_no_power)

HS_motivation_table2 = cbind(rownames(HS_motivation_table2), HS_motivation_table2)
colnames(HS_motivation_table2) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(HS_motivation_table2, file = paste0( tempFolder,"/", "recall2_HS.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "recall2_HS.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For explanation of the plots, please refer to this section.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model. The columns indicate MCMC diagnostics and estimated values. Column headers are as follows:

  • psrfPt is the point value of the psrf
  • psrfUpCI is an upper bound on the psrf
  • ESS is the effective sample size of the MCMC chain
  • 50% is the median of the estimate
  • 2.5% and 97.5% indicate the limits of the 95% equal-tailed interval (ETI)
  • Mode is the value with highest density computed by a kernel-density estimator
  • HDIlow and HDIhigh indicate the limits of the 95% highest-density interval (HDI).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.000966, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 23858.06, indicating stable estimates of the limits of credible intervals, meeting the desired minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

The estimated latent ratings with their 95% HDIs are plotted below. The gray regions are merely for visual reference, with the gray zone on the left indicating latent means that yield mostly ‘1’ ratings, and the narrow gray zone in the middle marking the midpoint of the latent response scale.

plotLatentRatings( scenario="recall2" , nNotToPost=8 , satisLevel=c("HS","MS","LS")[1] )

For explanation of the Motive names, please see the List of Motivations for Study 2.
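For instance, using the ordProbitProb() sketch above, a latent mean inside the left gray zone produces predominantly ‘1’ ratings (illustrative thresholds):

# Illustration: a latent mean of 1.0 yields mostly '1' ratings.
round( sapply( 1:5 , ordProbitProb , mu=1.0 , sigma=1 ,
               thresh=c(1.5,2.5,3.5,4.5) ) , 3 )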

3.3.2 Recalled Low-Satisfaction product ratings

### To-Post & Not-To-Post ratings of recalled LS product: 
## To post recalled LS
LS_mtp2 = DataSummary_catch2[, c(6, 32:39)] # extract these columns
LS_mtp_rating2 = LS_mtp2[, c(2:9)] 
LS_mtp_rating2 = na.omit(LS_mtp_rating2) 

v1 = count(LS_mtp_rating2, "recallLowSat_mTPWarn")
LS_warn = v1[, -1]

v2 = count(LS_mtp_rating2, "recallLowSat_mTPFind")
LS_help = v2[, -1]

v3 = count(LS_mtp_rating2, "recallLowSat_mTPPunish")
LS_punish = v3[, -1]

v4 = count(LS_mtp_rating2, "recallLowSat_mTPReward")
LS_reward = v4[, -1]

v5 = count(LS_mtp_rating2, "recallLowSat_mTPBelong")
LS_belong = v5[, -1]

v6 = count(LS_mtp_rating2, "recallLowSat_mTPRecip")
LS_recip = v6[, -1]

v7 = count(LS_mtp_rating2, "recallLowSat_mTPNorm")
LS_norm = v7[, -1]

v8 = count(LS_mtp_rating2, "recallLowSat_mTPPower")
LS_power = v8[, -1]

## Not-To-Post recalled LS:
LS_mnp2 = DataSummary_catch2[, c(6, 40:47)] # extract these columns
LS_mnp_rating2 = LS_mnp2[, c(2:9)] 
LS_mnp_rating2 = na.omit(LS_mnp_rating2)

v1 = count(LS_mnp_rating2, "recallLowSat_mNPEffort")
LS_effort = v1[, -1]

v2 = count(LS_mnp_rating2, "recallLowSat_mNPBogus")
LS_bogus = v2[, -1]

v3 = count(LS_mnp_rating2, "recallLowSat_mNPImpact")
LS_impact = v3[, -1]

v4 = count(LS_mnp_rating2, "recallLowSat_mNPRedund")
LS_redund = v4[, -1]

v5 = count(LS_mnp_rating2, "recallLowSat_mNPCritic")
LS_no_crit = v5[, -1]

v6 = count(LS_mnp_rating2, "recallLowSat_mNPHype")
v6 = rbind(v6, c(5,0)) # add a missing row 
LS_no_hype = v6[, -1]

v7 = count(LS_mnp_rating2, "recallLowSat_mNPNorm")
LS_no_norm = v7[, -1]

v8 = count(LS_mnp_rating2, "recallLowSat_mNPPower")
LS_no_power = v8[, -1]


LS_motivation_table2 = rbind(LS_warn, LS_help, LS_punish, LS_reward, LS_belong,
                            LS_recip, LS_norm, LS_power,
                            LS_effort, LS_bogus, LS_impact, LS_redund, LS_no_crit,
                            LS_no_hype, LS_no_norm, LS_no_power)

LS_motivation_table2 = cbind(rownames(LS_motivation_table2), LS_motivation_table2)
colnames(LS_motivation_table2) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(LS_motivation_table2, file = paste0( tempFolder,"/", "recall2_LS.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "recall2_LS.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For explanation of the plots, please refer to this section.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model. The columns indicate MCMC diagnostics and estimated values. Column headers are as follows:

  • psrfPt is the point value of the psrf
  • psrfUpCI is an upper bound on the psrf
  • ESS is the effective sample size of the MCMC chain
  • 50% is the median of the estimate
  • 2.5% and 97.5% indicate the limits of the 95% equal-tailed interval (ETI)
  • Mode is the value with highest density computed by a kernel-density estimator
  • HDIlow and HDIhigh indicate the limits of the 95% highest-density interval (HDI).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.000576, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 24261.28, indicating stable estimates of the limits of credible intervals, meeting the desired minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

The estimated latent ratings with their 95% HDIs are plotted below. The gray regions are merely for visual reference, with the gray zone on the left indicating latent means that yield mostly ‘1’ ratings, and the narrow gray zone in the middle marking the midpoint of the latent response scale.

plotLatentRatings( scenario="recall2" , nNotToPost=8 , satisLevel=c("HS","MS","LS")[3] )

For explanation of the Motive names, please see the List of Motivations for Study 2.

3.3.3 Recalled Medium-Satisfaction product ratings

### To-Post & Not-To-Post ratings of recalled MS product: 
## To post recalled MS
MS_mtp2 = DataSummary_catch2[, c(6, 54:61)] # extract these columns
MS_mtp_rating2 = MS_mtp2[, c(2:9)] 
MS_mtp_rating2 = na.omit(MS_mtp_rating2)

v1 = count(MS_mtp_rating2, "recallMedSat_mTPWarn")
MS_warn = v1[, -1]

v2 = count(MS_mtp_rating2, "recallMedSat_mTPFind")
MS_help = v2[, -1]

v3 = count(MS_mtp_rating2, "recallMedSat_mTPPunish")
MS_punish = v3[, -1]

v4 = count(MS_mtp_rating2, "recallMedSat_mTPReward")
MS_reward = v4[, -1]

v5 = count(MS_mtp_rating2, "recallMedSat_mTPBelong")
MS_belong = v5[, -1]

v6 = count(MS_mtp_rating2, "recallMedSat_mTPRecip")
MS_recip = v6[, -1]

v7 = count(MS_mtp_rating2, "recallMedSat_mTPNorm")
MS_norm = v7[, -1]

v8 = count(MS_mtp_rating2, "recallMedSat_mTPPower")
MS_power = v8[, -1]

## Not-To-Post ratings of recalled MS
MS_mnp2 = DataSummary_catch2[, c(6, 62:69)] # extract these columns
MS_mnp_rating2 = MS_mnp2[, c(2:9)] 
MS_mnp_rating2 = na.omit(MS_mnp_rating2)

v1 = count(MS_mnp_rating2, "recallMedSat_mNPEffort")
MS_effort = v1[, -1]

v2 = count(MS_mnp_rating2, "recallMedSat_mNPBogus")
MS_bogus = v2[, -1]

v3 = count(MS_mnp_rating2, "recallMedSat_mNPImpact")
MS_impact = v3[, -1]

v4 = count(MS_mnp_rating2, "recallMedSat_mNPRedund")
MS_redund = v4[, -1]

v5 = count(MS_mnp_rating2, "recallMedSat_mNPCritic")
MS_no_crit = v5[, -1]

v6 = count(MS_mnp_rating2, "recallMedSat_mNPHype")
v6 = rbind(v6, c(5,0)) # add a missing row 
MS_no_hype = v6[, -1]

v7 = count(MS_mnp_rating2, "recallMedSat_mNPNorm")
MS_no_norm = v7[, -1]

v8 = count(MS_mnp_rating2, "recallMedSat_mNPPower")
MS_no_power = v8[, -1]

MS_motivation_table2 = rbind(MS_warn, MS_help, MS_punish, MS_reward, MS_belong,
                     MS_recip, MS_norm, MS_power,
                     MS_effort, MS_bogus, MS_impact, MS_redund, MS_no_crit,
                     MS_no_hype, MS_no_norm, MS_no_power)

MS_motivation_table2 = cbind(rownames(MS_motivation_table2), MS_motivation_table2)
colnames(MS_motivation_table2) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(MS_motivation_table2, file = paste0( tempFolder,"/", "recall2_MS.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "recall2_MS.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For explanation of the plots, please refer to this section.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model. The columns indicate MCMC diagnostics and estimated values. Column headers are as follows:

  • psrfPt is the point value of the psrf
  • psrfUpCI is an upper bound on the psrf
  • ESS is the effective sample size of the MCMC chain
  • 50% is the median of the estimate
  • 2.5% and 97.5% indicate the limits of the 95% equal-tailed interval (ETI)
  • Mode is the value with highest density computed by a kernel-density estimator
  • HDIlow and HDIhigh indicate the limits of the 95% highest-density interval (HDI).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.000736, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 25089.8, indicating stable estimates of the limits of credible intervals, meeting the desired minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

The estimated latent ratings with their 95% HDIs are plotted below. The gray regions are merely for visual reference, with the gray zone on the left indicating latent means that yield mostly ‘1’ ratings, and the narrow gray zone in the middle marking the midpoint of the latent response scale.

plotLatentRatings( scenario="recall2" , nNotToPost=8 , satisLevel=c("HS","MS","LS")[2] )

For explanation of the Motive names, please see the List of Motivations for Study 2.

3.4 Ordered-probit analysis of Hypothetical product ratings

3.4.1 Hypothetical High-Satisfaction ratings

3.4.1.1 Hypothetical High-Satisfaction Generic product ratings

### First rearrange data frame so that they are organized by producer types & by satisfaction level
V1_regular = DataSummary_catch2[grep("regular <", DataSummary_catch2$V1_Type), ]
colnames(V1_regular) = gsub("V1_", "v1_", colnames(V1_regular))
colnames(V1_regular) = gsub("V2_", "v2_", colnames(V1_regular))
colnames(V1_regular) = gsub("V3_", "v3_", colnames(V1_regular))


V2_regular = DataSummary_catch2[grep("regular <", DataSummary_catch2$V2_Type), ]
V2_regular = V2_regular[, c(1:69, 90:109, 70:89, 110:131)]
colnames(V2_regular) = gsub("V2_", "v1_", colnames(V2_regular))
colnames(V2_regular) = gsub("V1_", "v2_", colnames(V2_regular))
colnames(V2_regular) = gsub("V3_", "v3_", colnames(V2_regular))


V3_regular = DataSummary_catch2[grep("regular <", DataSummary_catch2$V3_Type), ]
V3_regular = V3_regular[, c(1:69, 110:129, 70:109, 130:131)]
colnames(V3_regular) = gsub("V3_", "v1_", colnames(V3_regular))
colnames(V3_regular) = gsub("V1_", "v2_", colnames(V3_regular))
colnames(V3_regular) = gsub("V2_", "v3_", colnames(V3_regular))

regular = rbind(V1_regular, V2_regular, V3_regular) # v1 columns now hold the generic (regular) product in every row


V2_team = regular[grep("Crimson red", regular$v2_Type), ]

V3_team = regular[grep("Crimson red", regular$v3_Type), ]
V3_team = V3_team[, c(1:89, 110:129, 90:109, 130:131)]
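# Swap the v2_/v3_ prefixes via a temporary V2_ placeholder: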
colnames(V3_team) = gsub("v3_", "V2_", colnames(V3_team))
colnames(V3_team) = gsub("v2_", "v3_", colnames(V3_team))
colnames(V3_team) = gsub("V2_", "v2_", colnames(V3_team))


by_producers = rbind(V2_team, V3_team) # v1: generic, v2: team, v3: artisan
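The prefix swaps above rely on lowercase replacement targets (v1_, v2_, v3_) so that a later gsub() pass cannot re-rename columns produced by an earlier pass. A minimal demonstration of the collision this avoids (illustration only):

# Demonstration: naive prefix swapping collides; the lowercase-target trick does not.
nm = c("V1_x","V2_x")
bad = gsub("V2_","V1_", gsub("V1_","V2_", nm)) # both become "V1_x" -- wrong
ok  = gsub("V1_","v2_", gsub("V2_","v1_", nm)) # "v2_x" "v1_x" -- correct swap
print(bad); print(ok)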


########## Hypothetical transactions ########## 

# HS vignette -- generic product
generic_HS = subset(by_producers, v1_Satisfaction == "highly")

# HS vignette -- team product
team_HS = subset(by_producers, v2_Satisfaction == "highly")

# HS vignette -- artisan product
artisan_HS = subset(by_producers, v3_Satisfaction == "highly")


# LS vignette -- generic product
generic_LS = subset(by_producers, v1_Satisfaction == "very")

# LS vignette -- team product
team_LS = subset(by_producers, v2_Satisfaction == "very")

# LS vignette -- artisan product
artisan_LS = subset(by_producers, v3_Satisfaction == "very")


# MS vignette -- generic product
generic_MS = subset(by_producers, v1_Satisfaction == "neither")


# MS vignette -- team product
team_MS = subset(by_producers, v2_Satisfaction == "neither")


# MS vignette -- artisan product
artisan_MS = subset(by_producers, v3_Satisfaction == "neither")

### 1. High-satisfaction Generic products

## To Post:
mtp = generic_HS[, c(74:81)] # extract these columns
mtp = na.omit(mtp)

v1 = count(mtp, "v1_mTPWarn")
warn = v1[, -1]

v2 = count(mtp, "v1_mTPFind")
help = v2[, -1]

v3 = count(mtp, "v1_mTPPunish")
punish = v3[, -1]

v4 = count(mtp, "v1_mTPReward")
reward = v4[, -1]

v5 = count(mtp, "v1_mTPBelong")
v5 = rbind(v5, c(4,0)) # add a missing row 
v5 = rbind(v5, c(5,0)) # add a missing row 
belong = v5[, -1]

v6 = count(mtp, "v1_mTPRecip")
recip = v6[, -1]

v7 = count(mtp, "v1_mTPNorm")
v7 = rbind(v7, c(4,0)) # add a missing row 
v7 = rbind(v7, c(5,0)) # add a missing row 
norm = v7[, -1]

v8 = count(mtp, "v1_mTPPower")
v8 = rbind(v8, c(5,0)) # add a missing row 
power = v8[, -1]

## Not-To-Post 
mnp = generic_HS[, c(82:89)] # extract these columns
mnp = na.omit(mnp)

v1 = count(mnp, "v1_mNPEffort")
v1 = rbind(c(1,0), v1) # add a missing row 
effort = v1[, -1]

v2 = count(mnp, "v1_mNPBogus")
bogus = v2[, -1]

v3 = count(mnp, "v1_mNPImpact")
impact = v3[, -1]

v4 = count(mnp, "v1_mNPRedund")
redund = v4[, -1]

v5 = count(mnp, "v1_mNPCritic")
no_crit = v5[, -1]

v6 = count(mnp, "v1_mNPHype")
v6 = rbind(v6, c(5,0)) # add a missing row 
no_hype = v6[, -1]

v7 = count(mnp, "v1_mNPNorm")
no_norm = v7[, -1]

v8 = count(mnp, "v1_mNPPower")
no_power = v8[, -1]

genericHS_motivation_table = rbind(warn, help, punish, reward, belong,
                            recip, norm, power,
                            effort, bogus, impact, redund, no_crit,
                            no_hype, no_norm, no_power)

genericHS_motivation_table = cbind(rownames(genericHS_motivation_table), genericHS_motivation_table)
colnames(genericHS_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(genericHS_motivation_table, file = paste0( tempFolder,"/", "V_genericHS.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "V_genericHS.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For explanation of the plots, please refer to this section.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model. The columns indicate MCMC diagnostics and estimated values. Column headers are as follows:

  • psrfPt is the point value of the psrf
  • psrfUpCI is an upper bound on the psrf
  • ESS is the effective sample size of the MCMC chain
  • 50% is the median of the estimate
  • 2.5% and 97.5% indicate the limits of the 95% equal-tailed interval (ETI)
  • Mode is the value with highest density computed by a kernel-density estimator
  • HDIlow and HDIhigh indicate the limits of the 95% highest-density interval (HDI).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.001118, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 16798.65, indicating stable estimates of the limits of credible intervals, meeting the desired minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

3.4.1.2 Hypothetical High-Satisfaction Team product ratings

### 4. High-satisfaction Team products

## To Post:
mtp = team_HS[, c(94:101)] # extract these columns
mtp = na.omit(mtp)

v1 = count(mtp, "v2_mTPWarn")
warn = v1[, -1]

v2 = count(mtp, "v2_mTPFind")
help = v2[, -1]

v3 = count(mtp, "v2_mTPPunish")
punish = v3[, -1]

v4 = count(mtp, "v2_mTPReward")
reward = v4[, -1]

v5 = count(mtp, "v2_mTPBelong")
belong = v5[, -1]

v6 = count(mtp, "v2_mTPRecip")
recip = v6[, -1]

v7 = count(mtp, "v2_mTPNorm")
v7 = rbind(v7, c(5,0)) # add a missing row
norm = v7[, -1]

v8 = count(mtp, "v2_mTPPower")
v8 = rbind(v8, c(5,0)) # add a missing row
power = v8[, -1]

## Not-To-Post
mnp = team_HS[, c(102:109)] # extract these columns
mnp = na.omit(mnp)

v1 = count(mnp, "v2_mNPEffort")
effort = v1[, -1]

v2 = count(mnp, "v2_mNPBogus")
bogus = v2[, -1]

v3 = count(mnp, "v2_mNPImpact")
impact = v3[, -1]

v4 = count(mnp, "v2_mNPRedund")
redund = v4[, -1]

v5 = count(mnp, "v2_mNPCritic")
no_crit = v5[, -1]

v6 = count(mnp, "v2_mNPHype")
v6 = rbind(v6, c(4,0)) # add a missing row
v6 = rbind(v6, c(5,0)) # add a missing row
no_hype = v6[, -1]

v7 = count(mnp, "v2_mNPNorm")
no_norm = v7[, -1]

v8 = count(mnp, "v2_mNPPower")
no_power = v8[, -1]

teamHS_motivation_table = rbind(warn, help, punish, reward, belong,
                                    recip, norm, power,
                                    effort, bogus, impact, redund, no_crit,
                                    no_hype, no_norm, no_power)

teamHS_motivation_table = cbind(rownames(teamHS_motivation_table), teamHS_motivation_table)
colnames(teamHS_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(teamHS_motivation_table, file = paste0( tempFolder,"/", "V_teamHS.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
   dataFileName = "V_teamHS.csv" ,
   yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
   caseIDcolName = "Motivations" ,
   hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
 )

For explanation of the plots, please refer to this section.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model. The columns indicate MCMC diagnostics and estimated values. Column headers are as follows:

  • psrfPt is the point value of the psrf
  • psrfUpCI is an upper bound on the psrf
  • ESS is the effective sample size of the MCMC chain
  • 50% is the median of the estimate
  • 2.5% and 97.5% indicate the limits of the 95% equal-tailed interval (ETI)
  • Mode is the value with highest density computed by a kernel-density estimator
  • HDIlow and HDIhigh indicate the limits of the 95% highest-density interval (HDI).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.000861, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 17632.36, indicating stable estimates of the limits of credible intervals, meeting the desired minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

3.4.1.3 Hypothetical High-Satisfaction Handmade product ratings

### 7. High-satisfaction Artisan products

## To Post:
mtp = artisan_HS[, c(114:121)] # extract these columns
mtp = na.omit(mtp)

v1 = count(mtp, "v3_mTPWarn")
warn = v1[, -1]

v2 = count(mtp, "v3_mTPFind")
help = v2[, -1]

v3 = count(mtp, "v3_mTPPunish")
punish = v3[, -1]

v4 = count(mtp, "v3_mTPReward")
v4 = rbind(c(1,0), v4) # add a missing row
reward = v4[, -1]

v5 = count(mtp, "v3_mTPBelong")
belong = v5[, -1]

v6 = count(mtp, "v3_mTPRecip")
recip = v6[, -1]

v7 = count(mtp, "v3_mTPNorm")
norm = v7[, -1]

v8 = count(mtp, "v3_mTPPower")
power = v8[, -1]

## Not-To-Post ratings 
mnp = artisan_HS[, c(122:129)] # extract these columns
mnp = na.omit(mnp)

v1 = count(mnp, "v3_mNPEffort")
effort = v1[, -1]

v2 = count(mnp, "v3_mNPBogus")
bogus = v2[, -1]

v3 = count(mnp, "v3_mNPImpact")
v3 = rbind(v3, c(5,0)) # add a missing row
impact = v3[, -1]

v4 = count(mnp, "v3_mNPRedund")
redund = v4[, -1]

v5 = count(mnp, "v3_mNPCritic")
v5 = rbind(v5, c(4,0)) # add a missing row
v5 = rbind(v5, c(5,0)) # add a missing row
no_crit = v5[, -1]

v6 = count(mnp, "v3_mNPHype")
v6 = rbind(v6, c(5,0)) # add a missing row
no_hype = v6[, -1]

v7 = count(mnp, "v3_mNPNorm")
no_norm = v7[, -1]

v8 = count(mnp, "v3_mNPPower")
no_power = v8[, -1]

artisan_HS_motivation_table = rbind(warn, help, punish, reward, belong,
                                recip, norm, power,
                                effort, bogus, impact, redund, no_crit,
                                no_hype, no_norm, no_power)

artisan_HS_motivation_table = cbind(rownames(artisan_HS_motivation_table), artisan_HS_motivation_table)
colnames(artisan_HS_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(artisan_HS_motivation_table, file = paste0( tempFolder,"/", "V_artisan_HS.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "V_artisan_HS.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For explanation of the plots, please refer to this section.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model. The columns indicate MCMC diagnostics and estimated values. Column headers are as follows:

  • psrfPt is the point value of the psrf
  • psrfUpCI is an upper bound on the psrf
  • ESS is the effective sample size of the MCMC chain
  • 50% is the median of the estimate
  • 2.5% and 97.5% indicate the limits of the 95% equal-tailed interval (ETI)
  • Mode is the value with highest density computed by a kernel-density estimator
  • HDIlow and HDIhigh indicate the limits of the 95% highest-density interval (HDI).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.000581, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 20140.64, indicating stable estimates of the limits of credible intervals, meeting the desired minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

3.4.2 Hypothetical Low-Satisfaction ratings

3.4.2.1 Hypothetical Low-Satisfaction Generic product ratings

### 2. Low-satisfaction Generic products

## To post 
mtp = generic_LS[, c(74:81)] # extract these columns
mtp = na.omit(mtp)

v1 = count(mtp, "v1_mTPWarn")
warn = v1[, -1]

v2 = count(mtp, "v1_mTPFind")
help = v2[, -1]

v3 = count(mtp, "v1_mTPPunish")
punish = v3[, -1]

v4 = count(mtp, "v1_mTPReward")
reward = v4[, -1]

v5 = count(mtp, "v1_mTPBelong")
v5 = rbind(v5, c(5,0)) # add a missing row 
belong = v5[, -1]

v6 = count(mtp, "v1_mTPRecip")
recip = v6[, -1]

v7 = count(mtp, "v1_mTPNorm")
v7 = rbind(v7, c(5,0)) # add a missing row 
norm = v7[, -1]

v8 = count(mtp, "v1_mTPPower")
power = v8[, -1]


## Not-To-Post 
mnp = generic_LS[, c(82:89)] # extract these columns
mnp = na.omit(mnp)

v1 = count(mnp, "v1_mNPEffort")
effort = v1[, -1]

v2 = count(mnp, "v1_mNPBogus")
bogus = v2[, -1]

v3 = count(mnp, "v1_mNPImpact")
impact = v3[, -1]

v4 = count(mnp, "v1_mNPRedund")
redund = v4[, -1]

v5 = count(mnp, "v1_mNPCritic")
no_crit = v5[, -1]

v6 = count(mnp, "v1_mNPHype")
no_hype = v6[, -1]

v7 = count(mnp, "v1_mNPNorm")
no_norm = v7[, -1]

v8 = count(mnp, "v1_mNPPower")
v8 = rbind(v8, c(5,0)) # add a missing row 
no_power = v8[, -1]

genericLS_motivation_table = rbind(warn, help, punish, reward, belong,
                                   recip, norm, power,
                                   effort, bogus, impact, redund, no_crit,
                                   no_hype, no_norm, no_power)

genericLS_motivation_table = cbind(rownames(genericLS_motivation_table), genericLS_motivation_table)
colnames(genericLS_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(genericLS_motivation_table, file = paste0( tempFolder,"/", "V_genericLS.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "V_genericLS.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For explanation of the plots, please refer to this section.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model. The columns indicate MCMC diagnostics and estimated values. Column headers are as follows:

  • psrfPt is the point value of the psrf
  • psrfUpCI is an upper bound on the psrf
  • ESS is the effective sample size of the MCMC chain
  • 50% is the median of the estimate
  • 2.5% and 97.5% indicate the limits of the 95% equal-tailed interval (ETI)
  • Mode is the value with highest density computed by a kernel-density estimator
  • HDIlow and HDIhigh indicate the limits of the 95% highest-density interval (HDI).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.000818, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 19123.07, indicating stable estimates of the limits of credible intervals, meeting the desired minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

3.4.2.2 Hypothetical Low-Satisfaction Team product ratings

### 5. Low-satisfaction Team products

## To Post: 
mtp = team_LS[, c(94:101)] # extract these columns
mtp = na.omit(mtp)

v1 = count(mtp, "v2_mTPWarn")
v1 = rbind(c(1,0), v1) # add a missing row
warn = v1[, -1]

v2 = count(mtp, "v2_mTPFind")
help = v2[, -1]

v3 = count(mtp, "v2_mTPPunish")
punish = v3[, -1]

v4 = count(mtp, "v2_mTPReward")
reward = v4[, -1]

v5 = count(mtp, "v2_mTPBelong")
belong = v5[, -1]

v6 = count(mtp, "v2_mTPRecip")
recip = v6[, -1]

v7 = count(mtp, "v2_mTPNorm")
norm = v7[, -1]

v8 = count(mtp, "v2_mTPPower")
power = v8[, -1]

## Not-To-Post ratings 
mnp = team_LS[, c(102:109)] # extract these columns
mnp = na.omit(mnp)

v1 = count(mnp, "v2_mNPEffort")
effort = v1[, -1]

v2 = count(mnp, "v2_mNPBogus")
bogus = v2[, -1]

v3 = count(mnp, "v2_mNPImpact")
impact = v3[, -1]

v4 = count(mnp, "v2_mNPRedund")
redund = v4[, -1]

v5 = count(mnp, "v2_mNPCritic")
v5 = rbind(v5, c(5,0)) # add a missing row
no_crit = v5[, -1]

v6 = count(mnp, "v2_mNPHype")
v6 = rbind(v6, c(4,0)) # add a missing row
v6 = rbind(v6, c(5,0)) # add a missing row
no_hype = v6[, -1]

v7 = count(mnp, "v2_mNPNorm")
no_norm = v7[, -1]

v8 = count(mnp, "v2_mNPPower")
no_power = v8[, -1]

teamLS_motivation_table = rbind(warn, help, punish, reward, belong,
                                recip, norm, power,
                                effort, bogus, impact, redund, no_crit,
                                no_hype, no_norm, no_power)

teamLS_motivation_table = cbind(rownames(teamLS_motivation_table), teamLS_motivation_table)
colnames(teamLS_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(teamLS_motivation_table, file = paste0( tempFolder,"/", "V_teamLS.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "V_teamLS.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
) 

For explanation of the plots, please refer to this section.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model. The columns indicate MCMC diagnostics and estimated values. Column headers are as follows:

  • psrfPt is the point value of the psrf
  • psrfUpCI is an upper bound on the psrf
  • ESS is the effective sample size of the MCMC chain
  • 50% is the median of the estimate
  • 2.5% and 97.5% indicate the limits of the 95% equal-tailed interval (ETI)
  • Mode is the value with highest density computed by a kernel-density estimator
  • HDIlow and HDIhigh indicate the limits of the 95% highest-density interval (HDI).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.001276, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 19666.8, indicating stable estimates of the limits of credible intervals, meeting the desired minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

3.4.2.3 Hypothetical Low-Satisfaction Handmade product ratings

### 8. Low-satisfaction Artisan products 

## To Post:
mtp = artisan_LS[, c(114:121)] # extract these columns
mtp = na.omit(mtp)

v1 = count(mtp, "v3_mTPWarn")
warn = v1[, -1]

v2 = count(mtp, "v3_mTPFind")
help = v2[, -1]

v3 = count(mtp, "v3_mTPPunish")
punish = v3[, -1]

v4 = count(mtp, "v3_mTPReward")
reward = v4[, -1]

v5 = count(mtp, "v3_mTPBelong")
v5 = rbind(v5, c(5,0)) # add a missing row
belong = v5[, -1]

v6 = count(mtp, "v3_mTPRecip")
recip = v6[, -1]

v7 = count(mtp, "v3_mTPNorm")
v7 = rbind(v7, c(5,0)) # add a missing row
norm = v7[, -1]

v8 = count(mtp, "v3_mTPPower")
v8 = rbind(v8, c(5,0)) # add a missing row
power = v8[, -1]

## Not-To-Post ratings 
mnp = artisan_LS[, c(122:129)] # extract these columns
mnp = na.omit(mnp)

v1 = count(mnp, "v3_mNPEffort")
effort = v1[, -1]

v2 = count(mnp, "v3_mNPBogus")
bogus = v2[, -1]

v3 = count(mnp, "v3_mNPImpact")
impact = v3[, -1]

v4 = count(mnp, "v3_mNPRedund")
redund = v4[, -1]

v5 = count(mnp, "v3_mNPCritic")
no_crit = v5[, -1]

v6 = count(mnp, "v3_mNPHype")
v6 = rbind(v6, c(5,0)) # add a missing row
no_hype = v6[, -1]

v7 = count(mnp, "v3_mNPNorm")
no_norm = v7[, -1]

v8 = count(mnp, "v3_mNPPower")
no_power = v8[, -1]

artisan_LS_motivation_table = rbind(warn, help, punish, reward, belong,
                                    recip, norm, power,
                                    effort, bogus, impact, redund, no_crit,
                                    no_hype, no_norm, no_power)

artisan_LS_motivation_table = cbind(rownames(artisan_LS_motivation_table), artisan_LS_motivation_table)
colnames(artisan_LS_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(artisan_LS_motivation_table, file = paste0( tempFolder,"/", "V_artisan_LS.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "V_artisan_LS.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For explanation of the plots, please refer to this section.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model. The columns indicate MCMC diagnostics and estimated values. Column headers are as follows:

  • psrfPt is the point value of the psrf
  • psrfUpCI is an upper bound on the psrf
  • ESS is the effective sample size of the MCMC chain
  • 50% is the median of the estimate
  • 2.5% and 97.5% indicate the limits of the 95% equal-tailed interval (ETI)
  • Mode is the value with highest density computed by a kernel-density estimator
  • HDIlow and HDIhigh indicate the limits of the 95% highest-density interval (HDI).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe

  • PSRF: The maximum PSRF of any parameter is 1.000837, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 15712.84, indicating stable estimates of the limits of credible intervals, meeting the desired minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

3.4.3 Hypothetical Medium-Satisfaction ratings

3.4.3.1 Hypothetical Medium-Satisfaction Generic product ratings

### 3. Medium-satisfaction Generic products

## To Post:
mtp = generic_MS[, c(74:81)] # extract these columns
mtp = na.omit(mtp)

v1 = count(mtp, "v1_mTPWarn")
warn = v1[, -1]

v2 = count(mtp, "v1_mTPFind")
help = v2[, -1]

v3 = count(mtp, "v1_mTPPunish")
punish = v3[, -1]

v4 = count(mtp, "v1_mTPReward")
reward = v4[, -1]

v5 = count(mtp, "v1_mTPBelong")
belong = v5[, -1]

v6 = count(mtp, "v1_mTPRecip")
recip = v6[, -1]

v7 = count(mtp, "v1_mTPNorm")
norm = v7[, -1]

v8 = count(mtp, "v1_mTPPower")
power = v8[, -1]


## Not-To-Post 
mnp = generic_MS[, c(82:89)] # extract these columns
mnp = na.omit(mnp)

v1 = count(mnp, "v1_mNPEffort")
effort = v1[, -1]

v2 = count(mnp, "v1_mNPBogus")
bogus = v2[, -1]

v3 = count(mnp, "v1_mNPImpact")
v3 = rbind(v3, c(5,0)) # add a missing row 
impact = v3[, -1]

v4 = count(mnp, "v1_mNPRedund")
redund = v4[, -1]

v5 = count(mnp, "v1_mNPCritic")
v5 = rbind(v5, c(5,0)) # add a missing row 
no_crit = v5[, -1]

v6 = count(mnp, "v1_mNPHype")
v6 = rbind(v6, c(5,0)) # add a missing row 
no_hype = v6[, -1]

v7 = count(mnp, "v1_mNPNorm")
no_norm = v7[, -1]

v8 = count(mnp, "v1_mNPPower")
v8 = rbind(v8, c(5,0)) # add a missing row 
no_power = v8[, -1]

genericMS_motivation_table = rbind(warn, help, punish, reward, belong,
                                   recip, norm, power,
                                   effort, bogus, impact, redund, no_crit,
                                   no_hype, no_norm, no_power)

genericMS_motivation_table = cbind(rownames(genericMS_motivation_table), genericMS_motivation_table)
colnames(genericMS_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(genericMS_motivation_table, file = paste0( tempFolder,"/", "V_genericMS.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "V_genericMS.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For explanation of the plots, please refer to this section.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model. The columns indicate MCMC diagnostics and estimated values. Column headers are as follows:

  • psrfPt is the point value of the psrf
  • psrfUpCI is an upper bound on the psrf
  • ESS is the effective sample size of the MCMC chain
  • 50% is the median of the estimate
  • 2.5% and 97.5% indicate the limits of the 95% equal-tailed interval (ETI)
  • Mode is the value with highest density computed by a kernel-density estimator
  • HDIlow and HDIhigh indicate the limits of the 95% highest-density interval (HDI).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe:

  • PSRF: The maximum PSRF of any parameter is 1.00085, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 20834.23, indicating stable estimates of the limits of the credible intervals, in accord with the minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

3.4.3.2 Hypothetical Medium-Satisfaction Team product ratings

### 6. Medium-satisfaction Team products

## To Post: 
mtp = team_MS[, c(94:101)] # extract these columns
mtp = na.omit(mtp)

v1 = count(mtp, "v2_mTPWarn")
warn = v1[, -1]

v2 = count(mtp, "v2_mTPFind")
help = v2[, -1]

v3 = count(mtp, "v2_mTPPunish")
punish = v3[, -1]

v4 = count(mtp, "v2_mTPReward")
reward = v4[, -1]

v5 = count(mtp, "v2_mTPBelong")
v5 = rbind(v5, c(4,0)) # add a missing row
v5 = rbind(v5, c(5,0)) # add a missing row
belong = v5[, -1]

v6 = count(mtp, "v2_mTPRecip")
recip = v6[, -1]

v7 = count(mtp, "v2_mTPNorm")
v7 = rbind(v7, c(5,0)) # add a missing row
norm = v7[, -1]

v8 = count(mtp, "v2_mTPPower")
power = v8[, -1]

## Not-To-Post:
mnp = team_MS[, c(102:109)] # extract these columns
mnp = na.omit(mnp)

v1 = count(mnp, "v2_mNPEffort")
v1 = rbind(c(1,0), v1) # add a missing row (rating level 1)
effort = v1[, -1]

v2 = count(mnp, "v2_mNPBogus")
bogus = v2[, -1]

v3 = count(mnp, "v2_mNPImpact")
impact = v3[, -1]

v4 = count(mnp, "v2_mNPRedund")
redund = v4[, -1]

v5 = count(mnp, "v2_mNPCritic")
v5 = rbind(v5, c(5,0)) # add a missing row
no_crit = v5[, -1]

v6 = count(mnp, "v2_mNPHype")
v6 = rbind(v6, c(5,0)) # add a missing row
no_hype = v6[, -1]

v7 = count(mnp, "v2_mNPNorm")
no_norm = v7[, -1]

v8 = count(mnp, "v2_mNPPower")
no_power = v8[, -1]

teamMS_motivation_table = rbind(warn, help, punish, reward, belong,
                                recip, norm, power,
                                effort, bogus, impact, redund, no_crit,
                                no_hype, no_norm, no_power)

teamMS_motivation_table = cbind(rownames(teamMS_motivation_table), teamMS_motivation_table)
colnames(teamMS_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(teamMS_motivation_table, file = paste0( tempFolder,"/", "V_teamMS.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "V_teamMS.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For an explanation of the plots, please refer to the plot-explanation section earlier in this document.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model. The columns indicate MCMC diagnostics and estimated values. Column headers are as follows:

  • psrfPt is the point value of the psrf
  • psrfUpCI is an upper bound on the psrf
  • ESS is the effective sample size of the MCMC chain
  • 50% is the median of the estimate
  • 2.5% and 97.5% indicate the limits of the 95% equal-tailed interval (ETI)
  • Mode is the value with highest density computed by a kernel-density estimator
  • HDIlow and HDIhigh indicate the limits of the 95% highest-density interval (HDI).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe:

  • PSRF: The maximum PSRF of any parameter is 1.000586, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 21160.5, indicating stable estimates of the limits of the credible intervals, in accord with the minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

3.4.3.3 Hypothetical Medium-Satisfaction Handmade product ratings

### 9. Medium-satisfaction Artisan products

## To Post:
mtp = artisan_MS[, c(114:121)] # extract these columns
mtp = na.omit(mtp)

v1 = count(mtp, "v3_mTPWarn")
warn = v1[, -1]

v2 = count(mtp, "v3_mTPFind")
help = v2[, -1]

v3 = count(mtp, "v3_mTPPunish")
punish = v3[, -1]

v4 = count(mtp, "v3_mTPReward")
reward = v4[, -1]

v5 = count(mtp, "v3_mTPBelong")
v5 = rbind(v5, c(5,0)) # add a missing row
belong = v5[, -1]

v6 = count(mtp, "v3_mTPRecip")
recip = v6[, -1]

v7 = count(mtp, "v3_mTPNorm")
norm = v7[, -1]

v8 = count(mtp, "v3_mTPPower")
v8 = rbind(v8, c(4,0)) # add a missing row
v8 = rbind(v8, c(5,0)) # add a missing row
power = v8[, -1]

## Not-To-Post:
mnp = artisan_MS[, c(122:129)] # extract these columns
mnp = na.omit(mnp)

v1 = count(mnp, "v3_mNPEffort")
effort = v1[, -1]

v2 = count(mnp, "v3_mNPBogus")
bogus = v2[, -1]

v3 = count(mnp, "v3_mNPImpact")
impact = v3[, -1]

v4 = count(mnp, "v3_mNPRedund")
redund = v4[, -1]

v5 = count(mnp, "v3_mNPCritic")
no_crit = v5[, -1]

v6 = count(mnp, "v3_mNPHype")
v6 = rbind(v6, c(5,0)) # add a missing row
no_hype = v6[, -1]

v7 = count(mnp, "v3_mNPNorm")
no_norm = v7[, -1]

v8 = count(mnp, "v3_mNPPower")
no_power = v8[, -1]

artisan_MS_motivation_table = rbind(warn, help, punish, reward, belong,
                                    recip, norm, power,
                                    effort, bogus, impact, redund, no_crit,
                                    no_hype, no_norm, no_power)

artisan_MS_motivation_table = cbind(rownames(artisan_MS_motivation_table), artisan_MS_motivation_table)
colnames(artisan_MS_motivation_table) = c("Motivations", "n1", "n2", "n3", "n4", "n5")

write.csv(artisan_MS_motivation_table, file = paste0( tempFolder,"/", "V_artisan_MS.csv"))
# Run the ordered-probit model analysis:
OrdModelResults = ordinalAndMetricAnalysis(
  folderName = tempFolder ,
  dataFileName = "V_artisan_MS.csv" ,
  yColNames = c("n1","n2","n3","n4","n5") , # column names in data file
  caseIDcolName = "Motivations" ,
  hierarchSD = FALSE ,
  doMetricModel=FALSE ,
  adaptSteps=globalAdaptSteps , 
  burnInSteps=globalBurnInSteps , 
  numSavedSteps=globalNumSavedSteps , 
  thinSteps=globalThinSteps , 
  nChains=globalNChains 
)

For an explanation of the plots, please refer to the plot-explanation section earlier in this document.

MCMC Diagnostics:

The table below has a row for every estimated parameter in the model. The columns indicate MCMC diagnostics and estimated values. Column headers are as follows:

  • psrfPt is the point value of the psrf
  • psrfUpCI is an upper bound on the psrf
  • ESS is the effective sample size of the MCMC chain
  • 50% is the median of the estimate
  • 2.5% and 97.5% indicate the limits of the 95% equal-tailed interval (ETI)
  • Mode is the value with highest density computed by a kernel-density estimator
  • HDIlow and HDIhigh indicate the limits of the 95% highest-density interval (HDI).
# Tabular summary of MCMC diagnostics:
diagSum = diagSummary( OrdModelResults$OrdCodaSamples )
displayTable( round(diagSum,3) , options=list(pageLength=10))
# Extract max PSRF and min ESS:
estimatedRows = ( rownames(diagSum) != "thresh[1]"
                  & rownames(diagSum) != "thresh[4]" )
maxPSRF = max(diagSum[estimatedRows,c("psrfPt")])
minESS = min(diagSum[estimatedRows,c("ESS")])

From the table above, we observe:

  • PSRF: The maximum PSRF of any parameter is 1.000674, indicating good MCMC convergence.
  • ESS: The minimum ESS of any parameter is 18184.44, indicating stable estimates of the limits of the credible intervals, in accord with the minimum ESS of 10,000 recommended by the Bayesian analysis reporting guidelines (Kruschke, 2021).

4 Free-Responses for Recalled transactions, Studies 1 and 2

Note that the relative frequencies of the response categories, combined across both studies, are presented below.

library(tidyr)
## 
## Attaching package: 'tidyr'
## The following object is masked from 'package:runjags':
## 
##     extract
library(gtools)
## 
## Attaching package: 'gtools'
## The following object is masked from 'package:runjags':
## 
##     ask
### HS Reasons:
HS_post_reason = DataSummary_catch[, c("recallHighSat_postReview","recallHighSat_reason1", "recallHighSat_reason2", "recallHighSat_reason3")]
HS_post_reason = cbind(subjectID = rownames(HS_post_reason), HS_post_reason)

HS_post = gather(HS_post_reason,
                   key = "reasons",
                   value = "free responses",
                   recallHighSat_reason1, recallHighSat_reason2, recallHighSat_reason3)

names(HS_post)[2] = "postReview"

HS_reasons = HS_post[order(HS_post$subjectID), ]
write.csv(HS_reasons, paste0( tempFolder,"/", "1_HS_reasons.csv"))

### LS Reasons:
LS_post_reason = DataSummary_catch[, c("recallLowSat_postReview","recallLowSat_reason1", "recallLowSat_reason2", "recallLowSat_reason3")]
LS_post_reason = cbind(subjectID = rownames(LS_post_reason), LS_post_reason)

LS_post = gather(LS_post_reason,
                 key = "reasons",
                 value = "free responses",
                 recallLowSat_reason1, recallLowSat_reason2, recallLowSat_reason3)

names(LS_post)[2] = "postReview"

LS_reasons = LS_post[order(LS_post$subjectID), ]
write.csv(LS_reasons, paste0( tempFolder,"/", "2_LS_reasons.csv"))

### MS Reasons:
MS_post_reason = DataSummary_catch[, c("recallMedSat_postReview","recallMedSat_reason1", "recallMedSat_reason2", "recallMedSat_reason3")]
MS_post_reason = cbind(subjectID = rownames(MS_post_reason), MS_post_reason)

MS_post = gather(MS_post_reason,
                 key = "reasons",
                 value = "free responses",
                 recallMedSat_reason1, recallMedSat_reason2, recallMedSat_reason3)

names(MS_post)[2] = "postReview"

MS_reasons = MS_post[order(MS_post$subjectID), ]
write.csv(MS_reasons, paste0( tempFolder,"/", "3_MS_reasons.csv"))

data1 = read.csv(paste0( tempFolder,"/", "1_HS_reasons.csv"))
data2 = read.csv(paste0( tempFolder,"/", "2_LS_reasons.csv"))
data3 = read.csv(paste0( tempFolder,"/", "3_MS_reasons.csv"))

study1_reasons = smartbind(data1, data2, data3)
write.csv(study1_reasons, paste0( tempFolder,"/", "study1_reasons.csv"))


data = read.csv(paste0( tempFolder,"/", "study1_reasons.csv"))
data = data[ , -c(1:2)]

write.csv(data, paste0( tempFolder,"/", "study1_reasons.csv"))
library(janitor)
## 
## Attaching package: 'janitor'
## The following objects are masked from 'package:stats':
## 
##     chisq.test, fisher.test
library(epiDisplay)
## Loading required package: foreign
## Loading required package: survival
## Loading required package: MASS
## Loading required package: nnet
### HS Reasons:
HS_post_reason = DataSummary_catch2[, c("recallHighSat_postReview","recallHighSat_reason1", "recallHighSat_reason2", "recallHighSat_reason3")]
HS_post_reason = cbind(subjectID = rownames(HS_post_reason), HS_post_reason)

HS_post = gather(HS_post_reason,
                 key = "reasons",
                 value = "free responses",
                 recallHighSat_reason1, recallHighSat_reason2, recallHighSat_reason3)

names(HS_post)[2] = "postReview"

HS_reasons = HS_post[order(HS_post$subjectID), ]
write.csv(HS_reasons, paste0( tempFolder,"/", "1_HS_reasons2.csv"))


### LS Reasons:
LS_post_reason = DataSummary_catch2[, c("recallLowSat_postReview","recallLowSat_reason1", "recallLowSat_reason2", "recallLowSat_reason3")]
LS_post_reason = cbind(subjectID = rownames(LS_post_reason), LS_post_reason)

LS_post = gather(LS_post_reason,
                 key = "reasons",
                 value = "free responses",
                 recallLowSat_reason1, recallLowSat_reason2, recallLowSat_reason3)

names(LS_post)[2] = "postReview"

LS_reasons = LS_post[order(LS_post$subjectID), ]
write.csv(LS_reasons, paste0( tempFolder,"/", "2_LS_reasons2.csv"))

### MS Reasons:
MS_post_reason = DataSummary_catch2[, c("recallMedSat_postReview","recallMedSat_reason1", "recallMedSat_reason2", "recallMedSat_reason3")]
MS_post_reason = cbind(subjectID = rownames(MS_post_reason), MS_post_reason)

MS_post = gather(MS_post_reason,
                 key = "reasons",
                 value = "free responses",
                 recallMedSat_reason1, recallMedSat_reason2, recallMedSat_reason3)

names(MS_post)[2] = "postReview"

MS_reasons = MS_post[order(MS_post$subjectID), ]
write.csv(MS_reasons, paste0( tempFolder,"/", "3_MS_reasons2.csv"))

data1 = read.csv(paste0( tempFolder,"/", "1_HS_reasons2.csv"))
data2 = read.csv(paste0( tempFolder,"/", "2_LS_reasons2.csv"))
data3 = read.csv(paste0( tempFolder,"/", "3_MS_reasons2.csv"))

study2_reasons = smartbind(data1, data2, data3)
write.csv(study2_reasons, paste0( tempFolder,"/", "study2_reasons.csv"))


data = read.csv(paste0( tempFolder,"/", "study2_reasons.csv"))
data = data[ , -c(1:2)]

write.csv(data, paste0( tempFolder,"/", "study2_reasons.csv"))
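The reshaping above uses tidyr's gather(), which is superseded in current versions of tidyr. For readers updating the code, here is a sketch of the equivalent call with pivot_longer(), shown for the high-satisfaction block; the other blocks are analogous.

# Sketch: equivalent of the gather() call above, using pivot_longer().
HS_post = pivot_longer( HS_post_reason ,
                        cols = c( recallHighSat_reason1 ,
                                  recallHighSat_reason2 ,
                                  recallHighSat_reason3 ) ,
                        names_to = "reasons" ,
                        values_to = "free responses" )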

We used conventional qualitative content analysis, coding categories derived directly and inductively from the raw text data. Our text-analysis process followed the steps recommended by Zhang & Wildemuth (2017): preparing the data, defining the unit of analysis, and developing categories and a coding scheme through an iterative process of review and discussion until we reached consensus on the classification of the reasons. We combined the free-response data from Studies 1 and 2 because both studies used the same procedure at the beginning of each experiment. The following script generates bar graphs of the proportions of the categories derived from the qualitative content analysis.

4.1 Proportions of reasons to post reviews:

data = read.csv("Post.csv")
tab1(data$Categories, bar.values = "percent", horiz = TRUE, sort.group = "decreasing",
     cum.percent = FALSE, col = c("olivedrab4", "darkolivegreen4", "darkolivegreen", "olivedrab", "forestgreen","darkseagreen4"),
     cex.names = 0.8, main = NULL)

## data$Categories : 
##                             Frequency Percent
## Warn/help consumers               132    45.4
## Emotion - positive/negative        61    21.0
## Punish/reward producers            32    11.0
## Waste of money                     12     4.1
## To get money back                  12     4.1
## To get reward/incentives            8     2.7
## Disagree with the ratings           8     2.7
## Inaccurate description              7     2.4
## Felt responsibility                 7     2.4
## Reciprocity                         4     1.4
## Was reminded                        3     1.0
## Easy to post                        3     1.0
## Accurate description                2     0.7
##   Total                           291   100.0

Descriptions of category labels summarized in the above figure (reasons to post).

4.2 Proportions of reasons NOT to post reviews – categories corresponding to the explicitly-asked motivations:

data2 = read.csv("Notpost_explicit.csv")
tab1(data2$Categories, bar.values = "percent", horiz = TRUE, sort.group = "increasing",
    cum.percent = FALSE, col = c("indianred3","firebrick1", "firebrick2", "firebrick" ,"darkred"),
    cex.names = 0.8, main = NULL)

## data2$Categories : 
##                                 Frequency Percent
## Bogus                                   7     0.8
## Reluctant to criticize producer        13     1.6
## No impact                              87    10.5
## Redundancy                            270    32.5
## Effortful/time-consuming              455    54.7
##   Total                               832   100.0

Descriptions of category labels summarized in the above figure (reasons not to post that correspond to the explicitly-asked motivations).

4.3 Proportions of reasons NOT to post reviews – categories other than the explicitly-asked motivations:

data3 = read.csv("Notpost_others.csv")
tab1(data3$Categories, bar.values = "percent", horiz = TRUE, sort.group = "decreasing",
     cum.percent = FALSE,
     cex.names = 0.8, main = NULL)

## data3$Categories : 
##                                        Frequency Percent
## Lack of concern: ignorance                   437    21.3
## Product met the expectation                  251    12.2
## Rarely or never post                         205    10.0
## Lack of concern: no need                     192     9.3
## Too difficult: technical                     174     8.5
## Laziness                                     135     6.6
## Reluctant to express opinion                 116     5.6
## Forgot                                        81     3.9
## Could be my own experience                    56     2.7
## No incentives                                 46     2.2
## Reluctant to post negative reviews            42     2.0
## Returned the product                          36     1.8
## Reputation/privacy at stake                   36     1.8
## Not an important product                      30     1.5
## Busy                                          30     1.5
## Lack of expertise                             26     1.3
## My own mistake                                25     1.2
## Lack of concern: cheap                        22     1.1
## Don't read reviews                            22     1.1
## Restatement of dissatisfaction                20     1.0
## Not wanting to think about the product        17     0.8
## Telling friends while not posting             16     0.8
## Well-known product/retailer                   14     0.7
## Few post reviews: norm                        13     0.6
## Contact company directly                      13     0.6
##   Total                                     2055   100.0

Descriptions of category labels summarized in the above figure (reasons not to post other than explicitly-asked motivations).
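As an aside, the janitor package attached above offers an alternative tabulation of the same frequencies and percentages; a sketch (no bar graph is produced):

# Sketch: frequency/percent table via janitor::tabyl(), as an
# alternative to the epiDisplay::tab1() calls used above.
tabyl( data3$Categories ) # output columns: value, n, percent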

5 How to reproduce this analysis and appendix

5.1 Software requirements

The analysis uses the R language, and these notes are written in R Markdown; both are accessed through the RStudio editor. The user must install R and RStudio, both of which are free, by following the installation instructions in the links for R and RStudio.

The Bayesian analysis uses software called JAGS, which also must be installed; it is free.

The R code requires the packages runjags, rjags, plyr, tidyr, gtools, janitor, and epiDisplay, among others listed in the session information at the end of this document; these must be installed before knitting. When you attempt to run the scripts, RStudio will probably prompt you to install any needed but missing packages.

It is good practice to use the latest versions of R, RStudio, and all packages. Sometimes when new packages are installed, they require the latest version of R.
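For convenience, here is a minimal sketch that installs any required packages that are missing; the package list is assumed from the session information at the end of this document.

# Sketch: install required-but-missing packages before knitting.
pkgs = c( "runjags" , "rjags" , "coda" , "plyr" , "tidyr" , "gtools" ,
          "janitor" , "epiDisplay" , "tictoc" , "DT" )
missingPkgs = pkgs[ !( pkgs %in% rownames( installed.packages() ) ) ]
if ( length(missingPkgs) > 0 ) { install.packages( missingPkgs ) }
# N.B. rjags also requires the JAGS system software to be installed
# separately (see the JAGS link above).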

5.2 Files needed

The R Markdown source file for this document is called

  • Web_Appendix.Rmd

To reproduce its output, knit Web_Appendix.Rmd in RStudio with all of the following files located in the same folder, and with that folder set as R’s working directory.

A list of all data files loaded by the script:

  • Data_Summary_Study1.csv
  • Data_Summary_Study2.csv
  • Post.csv
  • Notpost_explicit.csv
  • Notpost_others.csv

A list of R scripts sourced by the script:

  • OrderedProbitModel2023.R This is a modified version of the R script that accompanies Liddell & Kruschke (2018).
  • DBDA2E-utilities.R A Bayesian utilities R script from Kruschke (2015) which is freely available at that book’s web site.

A list of auxiliary files:

  • apa.csl Formatting of references.
  • referencesOnlineReview.bib References database.
  • LiddellKruschke2018Fig1.png An image file depicting the ordered-probit model.
  • Free-Riding.png, Free-Riding2.png, Free-Response1.png, Free-Response2.png, Free-Response3.png Image files depicting results.

5.3 To reproduce this document

To re-run the analysis and reproduce this document, be sure all of the files listed above are together in the current working directory. Then open Web_Appendix.Rmd in RStudio, and knit it to HTML.
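Equivalently, the document can be rendered from the R console; a one-line sketch:

# Sketch: render the appendix without using the RStudio Knit button.
rmarkdown::render( "Web_Appendix.Rmd" )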

tocOut = toc(quiet=TRUE) # toc() corresponding to tic("beginDocument")
elapsedMinutes = round((tocOut$toc-tocOut$tic)/60,2)

The elapsed time needed to run the analyses and produce this document, on a modest desktop computer, was slightly more than 9.9 minutes.

5.4 Statistical reproducibility

The Bayesian analyses reported here adhere to the recommendations of the Bayesian analysis reporting guidelines (Kruschke, 2021).

6 R Session Info

# Full R session information:
sessionInfo()
## R version 4.2.1 (2022-06-23)
## Platform: x86_64-apple-darwin17.0 (64-bit)
## Running under: macOS Big Sur ... 10.16
## 
## Matrix products: default
## BLAS:   /Library/Frameworks/R.framework/Versions/4.2/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/4.2/Resources/lib/libRlapack.dylib
## 
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
## 
## attached base packages:
## [1] parallel  stats     graphics  grDevices utils     datasets  methods  
## [8] base     
## 
## other attached packages:
##  [1] epiDisplay_3.5.0.2 nnet_7.3-17        MASS_7.3-57        survival_3.3-1    
##  [5] foreign_0.8-82     janitor_2.1.0      gtools_3.9.3       tidyr_1.2.1       
##  [9] plyr_1.8.7         runjags_2.2.1-7    rjags_4-13         coda_0.19-4       
## [13] tictoc_1.1        
## 
## loaded via a namespace (and not attached):
##  [1] tidyselect_1.1.2  xfun_0.33         bslib_0.4.0       purrr_0.3.4      
##  [5] splines_4.2.1     lattice_0.20-45   snakecase_0.11.0  vctrs_0.4.1      
##  [9] generics_0.1.3    htmltools_0.5.3   yaml_2.3.5        utf8_1.2.2       
## [13] rlang_1.0.6       jquerylib_0.1.4   pillar_1.8.1      glue_1.6.2       
## [17] DBI_1.1.3         lifecycle_1.0.2   stringr_1.4.1     htmlwidgets_1.5.4
## [21] evaluate_0.16     knitr_1.40        fastmap_1.1.0     crosstalk_1.2.0  
## [25] fansi_1.0.3       highr_0.9         Rcpp_1.0.9        DT_0.27          
## [29] cachem_1.0.6      jsonlite_1.8.0    digest_0.6.29     stringi_1.7.8    
## [33] dplyr_1.0.10      grid_4.2.1        cli_3.4.1         tools_4.2.1      
## [37] magrittr_2.0.3    sass_0.4.2        tibble_3.1.8      pkgconfig_2.0.3  
## [41] Matrix_1.5-1      ellipsis_0.3.2    lubridate_1.9.0   timechange_0.1.1 
## [45] assertthat_0.2.1  rmarkdown_2.16    rstudioapi_0.14   R6_2.5.1         
## [49] compiler_4.2.1

7 References

Kruschke, J. K. (2015). Doing Bayesian data analysis, Second Edition: A tutorial with R, JAGS, and Stan. Academic Press. https://www.sciencedirect.com/book/9780124058880/doing-bayesian-data-analysis
Kruschke, J. K. (2021). Bayesian analysis reporting guidelines. Nature Human Behaviour, 5, 1282–1291. https://doi.org/10.1038/s41562-021-01177-7
Liddell, T. M., & Kruschke, J. K. (2018). Analyzing ordinal data with metric models: What could possibly go wrong? Journal of Experimental Social Psychology, 79, 328–348. https://doi.org/10.1016/j.jesp.2018.08.009
Zhang, Y., & Wildemuth, B. M. (2017). Qualitative analysis of content. In B. M. Wildemuth (Ed.), Applications of social research methods to questions in information and library science (Second Edition, pp. 318–329). ABC-CLIO.