TouchTest correlations

d1 <- with(exdat, data.frame(SSTouchTest = SSTouchTest, OSTouchTest = OSTouchTest, CloseOthersTouchTest = CloseOthersTouchTest))
round(cor(d1, use = "pairwise.complete.obs"), 2)
##                      SSTouchTest OSTouchTest CloseOthersTouchTest
## SSTouchTest                 1.00        0.44                 0.54
## OSTouchTest                 0.44        1.00                 0.26
## CloseOthersTouchTest        0.54        0.26                 1.00

The TouchTest scales seem to be sufficiently distinct and definitely not interchangeable: shared variance ranges from only 6.76% up to 29.16%. So whether to average the three scales (creating an overall comfort-with-touch construct) is rather a theoretical question. Let me know if you want to try it out.
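For reference, a minimal sketch of how such a composite could be formed (the variable name OverallTouch is mine, and the optional internal-consistency check assumes the psych package is available):

# Sketch only: overall comfort-with-touch composite as the mean of the three scales
exdat$OverallTouch <- rowMeans(
  exdat[, c("SSTouchTest", "OSTouchTest", "CloseOthersTouchTest")],
  na.rm = TRUE)
# Optional: internal consistency of the three-scale composite
# psych::alpha(exdat[, c("SSTouchTest", "OSTouchTest", "CloseOthersTouchTest")])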

Path models

Again, a very rough guide to model respecification:

  1. Look at the “Model test” table first. If the p-value of the chi-square test is significant, the model departs from the data.

  2. To see how much the data are incompatible with the model, look at the other fit indices. For an approximately fitting model, CFI and TLI should be > .95 and RMSEA < .06.

  3. If the model does not fit the data, we cannot rely on the model parameters. For a mildly misspecified model they may be only slightly off (say, in hundredths), but with severe misspecification they can be way off (sometimes even in tenths). The next step is then to look at the modification indices table. The column “mi” stands for the modification index: it represents the expected drop in the chi-square statistic if you free the given parameter. “~” stands for a regression path (read as “predicted by”), “~~” denotes a correlation. “sepc.all” is the value of the correlation or standardized regression path that the model missed.

Exploratory model 1

This model also includes a direct path from CS -> Reflection; that is, CS has a direct effect on reflection that is not mediated by PriBC. If you want, I can define that path explicitly (a sketch follows the model below).

path.model1 <- 'PriBC ~ a*ZCommunalStrength
reflection ~ b*PriBC + c*TargetSharingSplit + d*CloseOthersTouchTest + e*ZCommunalStrength
stress ~ f*reflection
health ~ g*stress
CloseOthersTouchTest ~~ 0*TargetSharingSplit
TargetSharingSplit ~~ 0*ZCommunalStrength
'

fit.path.model1 <- sem(path.model1, data = exdat, se = "bootstrap", bootstrap = nboot, mimic = "Mplus", missing = "ML")
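If the indirect and total effects of communal strength on reflection are of interest, they could be added as defined parameters; a minimal sketch using lavaan's “:=” operator (the .fx object names are mine, and the appended lines simply reuse the labels a, b, and e from the model above; not fitted here):

# Sketch: indirect effect (via PriBC) and total effect of ZCommunalStrength on reflection
path.model1.fx <- paste(path.model1,
                        'indirect := a*b
                         total := a*b + e', sep = '\n')
# fit.path.model1.fx <- sem(path.model1.fx, data = exdat, se = "bootstrap",
#                           bootstrap = nboot, mimic = "Mplus", missing = "ML")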

Model test and approximate fit indices

fitmeasures(fit.path.model1, c("chisq", "df", "pvalue", "cfi", "tli", "rmsea", "rmsea.ci.lower", "rmsea.ci.upper", "srmr_mplus", "pnfi", "bic"))
##          chisq             df         pvalue            cfi            tli 
##         31.569         14.000          0.005          0.912          0.868 
##          rmsea rmsea.ci.lower rmsea.ci.upper           pnfi            bic 
##          0.054          0.029          0.079          0.571       7531.759

As you can see, the model fit is not too bad; it fits the data approximately. If we allowed the correlation between TargetSharingSplit and CloseOthersTouchTest (reasonable?), the model would fit the data exactly (in the statistical sense, within the bounds of alpha, of course), and the path estimates would then be fully interpretable, without many reservations.
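For completeness, a sketch of that respecification (the only change is dropping the zero constraint on that covariance; the name path.model1b is mine and the model is not fitted here):

path.model1b <- 'PriBC ~ a*ZCommunalStrength
reflection ~ b*PriBC + c*TargetSharingSplit + d*CloseOthersTouchTest + e*ZCommunalStrength
stress ~ f*reflection
health ~ g*stress
CloseOthersTouchTest ~~ TargetSharingSplit     # now freely estimated
TargetSharingSplit ~~ 0*ZCommunalStrength
'
# fit.path.model1b <- sem(path.model1b, data = exdat, se = "bootstrap",
#                         bootstrap = nboot, mimic = "Mplus", missing = "ML")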

Modification indices

modificationindices(fit.path.model1, alpha = .05, sort. = TRUE, maximum.number = 5)
##                     lhs op                  rhs     mi   epc sepc.lv
## 73 CloseOthersTouchTest  ~           reflection 16.044 0.761   0.761
## 77 CloseOthersTouchTest  ~   TargetSharingSplit  9.830 0.182   0.182
## 71   TargetSharingSplit  ~ CloseOthersTouchTest  9.830 0.125   0.125
## 8    TargetSharingSplit ~~ CloseOthersTouchTest  9.830 0.125   0.125
## 65    ZCommunalStrength  ~ CloseOthersTouchTest  6.703 0.140   0.140
##    sepc.all sepc.nox
## 73    0.803    0.803
## 77    0.151    0.151
## 71    0.151    0.151
## 8     0.151    0.151
## 65    0.140    0.140

Parameter estimates, standard errors, p-values, CI’s, \(R^2\)

parameterEstimates(fit.path.model1, rsquare = TRUE)[1:7,]
##          lhs op                  rhs label    est    se      z pvalue
## 1      PriBC  ~    ZCommunalStrength     a  0.065 0.037  1.763  0.078
## 2 reflection  ~                PriBC     b  0.222 0.081  2.755  0.006
## 3 reflection  ~   TargetSharingSplit     c  0.168 0.068  2.476  0.013
## 4 reflection  ~ CloseOthersTouchTest     d  0.151 0.050  3.031  0.002
## 5 reflection  ~    ZCommunalStrength     e  0.159 0.064  2.492  0.013
## 6     stress  ~           reflection     f -0.271 0.035 -7.836  0.000
## 7     health  ~               stress     g  0.547 0.065  8.465  0.000
##   ci.lower ci.upper
## 1   -0.016    0.136
## 2    0.040    0.379
## 3    0.023    0.295
## 4    0.042    0.235
## 5    0.059    0.335
## 6   -0.335   -0.200
## 7    0.425    0.679

Bivariate correlations

lavCor(fit.path.model1)
##                      PriBC  rflctn stress health ZCmmnS TrgtSS ClsOTT
## PriBC                 1.000                                          
## reflection            0.165  1.000                                   
## stress               -0.019 -0.387  1.000                            
## health               -0.083 -0.230  0.397  1.000                     
## ZCommunalStrength     0.108  0.178 -0.071 -0.054  1.000              
## TargetSharingSplit    0.043  0.158 -0.056 -0.016  0.000  1.000       
## CloseOthersTouchTest  0.072  0.188  0.026  0.028  0.135  0.151  1.000

Power analysis

For approximate fit (based on the RMSEA distribution): the statistical power to detect a likely misspecified model (RMSEA > .08).

df <- fit.path.model1@test[[1]]$df
alpha <- .05
n <- nrow(exdat)
rmsea0 <- .05           # RMSEA given H0
rmseaa <- .08           # RMSEA given Ha

ncp0 <- (n-1)*df*rmsea0^2
ncpa <- (n-1)*df*rmseaa^2
if(rmsea0 < rmseaa) {
  cval <- qchisq(1-alpha,df=df,ncp=ncp0)
  pwr.rmsea <- 1 - pchisq(cval,df=df,ncp=ncpa)
} else {
  cval <- qchisq(alpha,df=df,ncp=ncp0)
  pwr.rmsea <- pchisq(cval,df=df,ncp=ncpa)
}
rm(ncp0, ncpa, cval)
print(round(pwr.rmsea,10))
## [1] 0.6713776

The power to detect mild model misspecification is rather low (big model-data discrepancies would almost certainly be detected). That is the price we pay for sample splitting.
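Since the same computation is repeated below for exploratory model 2, it could also be wrapped in a small helper (just a sketch packaging the code above; the function name rmsea_power is mine):

# RMSEA-based power for the test of close fit, as computed above
rmsea_power <- function(df, n, rmsea0 = .05, rmseaa = .08, alpha = .05) {
  ncp0 <- (n - 1) * df * rmsea0^2   # noncentrality under H0
  ncpa <- (n - 1) * df * rmseaa^2   # noncentrality under Ha
  if (rmsea0 < rmseaa) {
    cval <- qchisq(1 - alpha, df = df, ncp = ncp0)
    1 - pchisq(cval, df = df, ncp = ncpa)
  } else {
    cval <- qchisq(alpha, df = df, ncp = ncp0)
    pchisq(cval, df = df, ncp = ncpa)
  }
}
# e.g. rmsea_power(df = fit.path.model1@test[[1]]$df, n = nrow(exdat))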

Path model

semPaths(fit.path.model1, layout = "tree2", rotation = 2, nDigits = 2, "Standardized", intercepts = FALSE, residuals = FALSE,
         fade = FALSE, nCharNodes = 0, sizeMan = 9, edge.label.cex = .9, edge.label.position = .4)

Bayes factor in favor of the alternative hypothesis (BF10) and posterior probability for model parameters (given 1:1 prior odds for H0:Ha)

These are Bayes factors based on the model selection / information criteria approach proposed by Wagenmakers (2007). Each BF represents the relative evidence in the data favoring the alternative hypothesis (the parameter freely estimated) over the null (the given parameter fixed to 0). Bayes factors based on the BIC approximation implicitly assume a unit information prior, which makes them rather conservative with regard to the alternative hypothesis.
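Concretely, writing \(\mathrm{BIC}_1\) for the model with the parameter free and \(\mathrm{BIC}_0\) for the model with that parameter fixed to 0, the approximation used in the code below is

\[
\mathrm{BF}_{10} \approx \exp\!\left(\frac{\mathrm{BIC}_0 - \mathrm{BIC}_1}{2}\right), \qquad
P(H_a \mid \mathrm{data}) = \frac{\mathrm{BF}_{10}}{1 + \mathrm{BF}_{10}}.
\]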

# Unconstrained model (each labeled parameter freely estimated)
m1 <- 'PriBC ~ a*ZCommunalStrength
reflection ~ b*PriBC + c*TargetSharingSplit + d*CloseOthersTouchTest + e*ZCommunalStrength
stress ~ f*reflection
health ~ g*stress
CloseOthersTouchTest ~~ 0*TargetSharingSplit
TargetSharingSplit ~~ 0*ZCommunalStrength
'
f1 <- sem(m1, data = exdat, mimic = "Mplus", missing = "ML")

bf1 <- NULL
posterior1 <- NULL
for(i in letters[1:7]){
  # Refit with the given parameter fixed to 0 and compare BICs (Wagenmakers, 2007)
  f1.null <- sem(m1, data = exdat, mimic = "Mplus", missing = "ML",
                 constraints = paste(i, '==', '0', sep = ""))
  bf1[i] <- 1/exp((BIC(f1) - BIC(f1.null))/2)
  posterior1[i] <- bf1[i]/(1 + bf1[i])
}
data.frame(BF10 = bf1, Posterior = round(posterior1, 3))
##           BF10 Posterior
## a 2.980744e-01     0.230
## b 3.121931e+00     0.757
## c 2.285883e+00     0.696
## d 4.538370e+00     0.819
## e 2.185709e+00     0.686
## f 9.072473e+13     1.000
## g 5.810263e+14     1.000

Exploratory model 2

This model predicts rumination from communal strength (ZCommunalStrength), comfort with touch from close others (CloseOthersTouchTest), and their interaction (int_COT_CS); rumination then predicts stress, and stress predicts health.
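The interaction term int_COT_CS is taken from exdat; I am assuming it is the product of CloseOthersTouchTest and ZCommunalStrength. If it ever needs to be rebuilt, a minimal sketch (the centering choice is my assumption, not taken from the original preprocessing):

# Sketch only: product term for the CloseOthersTouchTest x ZCommunalStrength interaction.
# Centering CloseOthersTouchTest is an assumption made to reduce collinearity with the
# main effects; adjust to match the original coding if it differs.
exdat$int_COT_CS <- as.numeric(scale(exdat$CloseOthersTouchTest, scale = FALSE)) *
  exdat$ZCommunalStrength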

path.model2 <- 'rumination ~ a*ZCommunalStrength + b*CloseOthersTouchTest + c*int_COT_CS
stress ~ d*rumination
health ~ e*stress'

fit.path.model2 <- sem(path.model2, data = exdat, se = "bootstrap", bootstrap = nboot, mimic = "Mplus", missing = "ML")

Model test and approximate fit indices

fitmeasures(fit.path.model2, c("chisq", "df", "pvalue", "cfi", "tli", "rmsea", "rmsea.ci.lower", "rmsea.ci.upper", "srmr_mplus", "pnfi", "bic"))
##          chisq             df         pvalue            cfi            tli 
##          4.893          7.000          0.673          1.000          1.015 
##          rmsea rmsea.ci.lower rmsea.ci.upper           pnfi            bic 
##          0.000          0.000          0.047          0.572       6553.600

The model fits the data exactly (in the statistical sense, within the bounds of alpha, of course), so the path estimates are fully interpretable.

Modification indices

modificationindices(fit.path.model2, alpha = .05, sort. = TRUE, maximum.number = 5)
##           lhs op                  rhs    mi    epc sepc.lv sepc.all
## 29     stress  ~           int_COT_CS 1.916  0.051   0.051    0.063
## 28     stress  ~ CloseOthersTouchTest 1.185  0.032   0.032    0.043
## 33     health  ~           int_COT_CS 1.154 -0.060  -0.060   -0.054
## 21 rumination ~~               stress 0.983  0.206   0.206    0.212
## 24 rumination  ~               stress 0.983  0.548   0.548    0.310
##    sepc.nox
## 29    0.069
## 28    0.043
## 33   -0.059
## 21    0.212
## 24    0.310

Parameter estimates, standard errors, p-values, CI’s, \(R^2\)

parameterEstimates(fit.path.model2, rsquare = TRUE)[1:5,]
##          lhs op                  rhs label    est    se      z pvalue
## 1 rumination  ~    ZCommunalStrength     a -0.172 0.073 -2.370  0.018
## 2 rumination  ~ CloseOthersTouchTest     b  0.022 0.068  0.326  0.745
## 3 rumination  ~           int_COT_CS     c -0.244 0.075 -3.268  0.001
## 4     stress  ~           rumination     d  0.319 0.022 14.346  0.000
## 5     health  ~               stress     e  0.547 0.061  8.913  0.000
##   ci.lower ci.upper
## 1   -0.334   -0.018
## 2   -0.106    0.158
## 3   -0.374   -0.074
## 4    0.275    0.366
## 5    0.420    0.663

Bivariate correlations

lavCor(fit.path.model2)
##                      rumntn stress health ZCmmnS ClsOTT i_COT_
## rumination            1.000                                   
## stress                0.564  1.000                            
## health                0.239  0.397  1.000                     
## ZCommunalStrength    -0.112 -0.071 -0.054  1.000              
## CloseOthersTouchTest -0.030  0.026  0.028  0.135  1.000       
## int_COT_CS           -0.150 -0.028 -0.068 -0.051  0.164  1.000

Power analysis

For approximate fit (based on the RMSEA distribution): the statistical power to detect a likely misspecified model (RMSEA > .08). Models 1SST and 1OST have the same power.

df <- fit.path.model2@test[[1]]$df
alpha <- .05
n <- nrow(exdat)
rmsea0 <- .05           # RMSEA given H0
rmseaa <- .08           # RMSEA given Ha

ncp0 <- (n-1)*df*rmsea0^2
ncpa <- (n-1)*df*rmseaa^2
if(rmsea0 < rmseaa) {
  cval <- qchisq(1-alpha,df=df,ncp=ncp0)
  pwr.rmsea <- 1 - pchisq(cval,df=df,ncp=ncpa)
} else {
  cval <- qchisq(alpha,df=df,ncp=ncp0)
  pwr.rmsea <- pchisq(cval,df=df,ncp=ncpa)
}
rm(ncp0, ncpa, cval)
print(round(pwr.rmsea,10))
## [1] 0.4407915

The power to detect a badly fitting model is even lower here because the model is relatively simple and has fewer degrees of freedom (i.e., fewer dimensions along which it could be rejected). However, the fit indices are so good that I wouldn't worry much about it. At least we can be fairly confident that there is no severe causal misspecification.

Path model

semPaths(fit.path.model2, layout = "tree2", rotation = 2, nDigits = 2, "Standardized", intercepts = FALSE, residuals = FALSE,
         fade = FALSE, nCharNodes = 0, sizeMan = 9, edge.label.cex = .9, edge.label.position = .4)

Bayes factor in favor of the alternative hypothesis (BF10) and posterior probability for model parameters (given 1:1 prior odds for H0:Ha)

These are Bayes factors based on the model selection / information criteria approach proposed by Wagenmakers (2007). Each BF represents the relative evidence in the data favoring the alternative hypothesis (the parameter freely estimated) over the null (the given parameter fixed to 0). Bayes factors based on the BIC approximation implicitly assume a unit information prior, which makes them rather conservative with regard to the alternative hypothesis.

# Unconstrained model (each labeled parameter freely estimated)
m2 <- 'rumination ~ a*ZCommunalStrength + b*CloseOthersTouchTest + c*int_COT_CS
stress ~ d*rumination
health ~ e*stress'
f2 <- sem(m2, data = exdat, mimic = "Mplus", missing = "ML")

bf2 <- NULL
posterior2 <- NULL
for(i in letters[1:5]){
  # Refit with the given parameter fixed to 0 and compare BICs (Wagenmakers, 2007)
  f2.null <- sem(m2, data = exdat, mimic = "Mplus", missing = "ML",
                 constraints = paste(i, '==', '0', sep = ""))
  bf2[i] <- 1/exp((BIC(f2) - BIC(f2.null))/2)
  posterior2[i] <- bf2[i]/(1 + bf2[i])
}
data.frame(BF10 = bf2, Posterior = round(posterior2, 3))
##           BF10 Posterior
## a 8.021716e-01     0.445
## b 5.097607e-02     0.049
## c 4.881427e+00     0.830
## d 4.420838e+34     1.000
## e 5.810263e+14     1.000