if(!require(devtools)) install.packages("devtools")
devtools::install_github("cardiomoon/processR")
if(!require(pacman)) install.packages("pacman"); library(pacman); p_load(psych, DT, car, dplyr, sensemakr, simpleboot, processR, lavaan, interactions, ggplot2, MASS, robustbase, sfsmisc, sandwich, lmtest)
There is a lot of commentary on Felig et al.’s (2021) paper on Twitter. Most of it is positive and much of it is unjustified. When I heard of the paper, I attempted to reproduce the results. As far as I am aware, I am the only person who has publicly uploaded an attempt to reproduce their results; others have claimed to reproduce them without proof, or with proof provided only in private. Below, I will go over some of the issues with the paper and attempt to reproduce the results in the ways others have. Anyone who does not supply their code and output but says they reproduced the results of the paper (or any paper, for that matter) should be regarded with suspicion: if the data are available, there is no reason you cannot also verify your output. A final thing to note before going on is that parametric tests were not appropriate for any of this analysis in the first place, since the data were largely not quantitative.
Before going on to any analysis, it needs to be noted that the study’s lead author has misrepresented the openness of this piece of research. Though she claimed (https://archive.ph/jm2EP) to have “provided rebuttals with supporting data for every comment” and “[a]ll the syntax is on OSF”, I cannot find any support for actual rebuttals prior to this time (or after, though this does not preclude her doing something in the future), and the whole syntax is not on OSF (https://archive.ph/EQasI): only a single one of the PROCESS models is there, and they clearly ran much more than that (as described in the paper and elsewhere). It is thus obviously untrue to say all the syntax is on OSF. Furthermore, we know exploratory analyses were run but not reported, since she has posted them on Twitter, and there are a number of variables in the provided data that are never used in the analyses reported in the paper, in addition to transformations of those variables (i.e., shoe_height vs Zshoe_height) and variables that would have been used to filter the data differently than it ultimately was (i.e., scores_differ_but_one vs score_differs). The lead author even posted output from an unspecified and unreported model on her Twitter (https://archive.ph/CdyaT) and made it clear - “we ended up not including this in the revised versions” - that it was attempted during the process of getting the paper published. Remarkably, she apparently did not see how doing a large number of exploratory analyses can “be [i.e., yield] p-hacking”.
The filtering used by the authors has negatively affected the generalizability of the research and increased the likelihood that it is uninterpretable. This is because they excluded participants “who did not complete the full questionnaire or completed the items with no variability”. The first decision can be justified if, for example, failure to complete was NMAR with respect to relevant variables or if relevant variables were not filled out. These justifications were not proffered, and in the latter case, did not apply in full. More importantly, filtering to only those individuals for whom there was variability in their responses is a way to increase the magnitude of variability in the data, whilst also lowering the reliability of the measure (in the mean case) and potentially yielding longitudinal non-invariance. More variability means more random error to exploit to garner significant effects. This analytic decision is simply not justified, and without it, the results are almost entirely non-significant. One may say that individuals who responded the same twice are satisficing or whatever, but individuals who change their responses may also be in error; there is no a priori means of distinguishing these, so presenting both configurations is the best outcome if the decision to filter based on variability is, for some reason, taken seriously. When that is done, it is apparent that the results are no longer significant, because of a mere six additional people. If we use everyone, the results are even less significant. I repeat: their significant results are only significant because they filtered out six people who did not change their responses. It is filtering for variability that explains everything that happened in this paper, and that decision is one that implies greater error and suggests a different interpretation between timepoints. I return to this below.
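To make the intuition concrete, here is a minimal simulated sketch of my own (not from the authors’ materials): a 1-6 rating taken twice from the same true score plus noise, comparing test-retest reliability for everyone versus only those whose two responses differ.
set.seed(10)
true_score <- rnorm(5000, mean = 3.5, sd = 1)
rate <- function(t) pmin(pmax(round(t + rnorm(length(t), sd = 0.6)), 1), 6) #Observed 1-6 rating = true score + noise
t1 <- rate(true_score); t2 <- rate(true_score)
changed <- t1 != t2
cor(t1, t2) #Test-retest reliability using everyone
cor(t1[changed], t2[changed]) #Lower reliability among "changers" only, since selecting on a changed response selects on error
mean(changed) #Share of the simulated sample that would be retained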
Though the authors did not say so in their paper, they bootstrapped their PROCESS model (this is normal and needs to be done; the model is too liberal otherwise), and we do not know whether they bootstrapped the regressions they ran for the Johnson-Neyman plots or elsewhere. In any case, thanks to a friend’s interaction with Nick Brown, I found that the bootstrap seed could sometimes explain the significance of their PROCESS result. Using a different seed occasionally failed to replicate it, though I do replicate it below for a non-robust result. This is a common problem with marginal results that are resampled: you can generate positive results under some seeds even if the results would not be significant initially or, in any case, without bootstrapping. Why 10,000 bootstrap samples were taken in the first place, I do not know, but there was no justification - and there obviously wasn’t, because it wasn’t mentioned. My guess is that they just picked defaults, since using default options replicates their PROCESS result in SPSS 26. Because the result is dependent on the seed, it should not be taken very seriously, though over enough iterations results should converge. With that said, I will not be bootstrapping these results because - as should be abundantly clear - we are dealing with marginal effects, and bootstrapping makes things dubious, not better. The exception is the main models used by the authors; I will bootstrap those to reproduce their results exactly. Because I delve into robust regressions, I want to note that I will output PROCESS model results with robust estimation, but, again, they are not trustworthy. It may seem inconsistent to call the bootstrapping both unjustified and necessary, but it is not: all analytic decisions must be specified and justified.
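As a quick illustration of the seed problem (a simulated sketch of my own, not the authors’ data or model), the percentile bootstrap confidence interval for a deliberately marginal interaction can come out differently under different seeds:
set.seed(2023)
SimDat <- data.frame(x = rnorm(185), w = rnorm(185)) #Same order of sample size as the study
SimDat$y <- 0.14 * SimDat$x * SimDat$w + rnorm(185) #A weak, roughly "marginal" interaction
BootInt <- function(seed, reps = 2000) {
set.seed(seed)
est <- replicate(reps, {
i <- sample.int(nrow(SimDat), replace = TRUE)
coef(lm(y ~ x * w, data = SimDat[i, ]))["x:w"]})
quantile(est, c(.025, .975))} #Percentile CI for the interaction coefficient
BootInt(1); BootInt(2) #Whether the interval excludes zero can depend on the seed when the effect is marginal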
round(cor(data[c("Self_Obj", "Clothes_RF", "Cold", "Temp_True", "Intoxicated", "Drinks", "Age", "BMI")], use = "pairwise.complete"), 2)
## Self_Obj Clothes_RF Cold Temp_True Intoxicated Drinks Age BMI
## Self_Obj 1.00 0.31 0.07 -0.13 0.12 0.03 -0.30 -0.10
## Clothes_RF 0.31 1.00 0.13 -0.12 0.12 0.06 -0.38 -0.21
## Cold 0.07 0.13 1.00 -0.16 -0.10 -0.14 -0.12 -0.09
## Temp_True -0.13 -0.12 -0.16 1.00 0.03 0.11 0.25 0.15
## Intoxicated 0.12 0.12 -0.10 0.03 1.00 0.67 -0.12 -0.04
## Drinks 0.03 0.06 -0.14 0.11 0.67 1.00 0.01 -0.01
## Age -0.30 -0.38 -0.12 0.25 -0.12 0.01 1.00 0.23
## BMI -0.10 -0.21 -0.09 0.15 -0.04 -0.01 0.23 1.00
describe(data[c("Self_Obj", "Clothes_RF", "Cold", "Temp_True", "Intoxicated", "Drinks", "Age", "BMI")])
| | vars | n | mean | sd | median | trimmed | mad | min |
|---|---|---|---|---|---|---|---|---|
| Self_Obj | 1 | 187 | 3.746180 | 0.9329271 | 3.87500 | 3.760407 | 0.926625 | 1.50000 |
| Clothes_RF | 2 | 187 | 3.058824 | 1.9514992 | 3.00000 | 3.006623 | 1.482600 | 0.00000 |
| Cold | 3 | 186 | 3.387097 | 1.3638757 | 3.00000 | 3.366667 | 1.482600 | 1.00000 |
| Temp_True | 4 | 187 | 52.508021 | 3.6120601 | 55.00000 | 52.596026 | 2.965200 | 46.00000 |
| Intoxicated | 5 | 186 | 2.295699 | 1.4117969 | 2.00000 | 2.086667 | 1.482600 | 1.00000 |
| Drinks | 6 | 187 | 2.491979 | 2.3357184 | 2.00000 | 2.225166 | 2.965200 | 0.00000 |
| Age | 7 | 183 | 22.267760 | 5.7616961 | 21.00000 | 21.047619 | 2.965200 | 18.00000 |
| BMI | 8 | 187 | 24.349640 | 5.0849838 | 23.04278 | 23.810009 | 4.120260 | 16.82518 |
Below, “Mod1” corresponds to the first model presented in the paper while “Mod2” corresponds to the second. “ModJ” is a model without covariates. Regarding this covariate-free model, the authors remarked that “the interaction remained significant, demonstrating a pattern consistent with reported analyses.” Furthermore, they fit another model with “race and sexual orientation as additional covariates (in addition to true temperature, BMI, age, intoxication, and drinks consumed), and the interaction remained significant, demonstrating a pattern consistent with reported analyses. Race and sexual orientation were not significant covariates.” This is ModD.
Mod1 <- lm(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True, data)
Mod2 <- lm(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True + Intoxicated + Drinks + Age + BMI, data)
ModJ <- lm(Cold ~ Clothes_RF * Self_Obj, data)
ModD <- lm(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True + Intoxicated + Drinks + Age + BMI + Race + Sex_Orient, data)
summary(Mod1); sum(partial_f2(Mod1)[-1]); partial_f2(Mod1)
##
## Call:
## lm(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True,
## data = data)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.93764 -0.74787 -0.04511 0.76645 2.81914
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.50114 1.69073 2.662 0.00846 **
## Clothes_RF 0.49921 0.20794 2.401 0.01738 *
## Self_Obj 0.33919 0.18622 1.821 0.07018 .
## Temp_True -0.04859 0.02764 -1.758 0.08043 .
## Clothes_RF:Self_Obj -0.11332 0.05368 -2.111 0.03616 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.337 on 181 degrees of freedom
## (1 observation deleted due to missingness)
## Multiple R-squared: 0.0604, Adjusted R-squared: 0.03964
## F-statistic: 2.909 on 4 and 181 DF, p-value: 0.02301
## [1] 0.09186391
## (Intercept) Clothes_RF Self_Obj Temp_True
## 0.03915783 0.03184139 0.01833066 0.01707553
## Clothes_RF:Self_Obj
## 0.02461632
summary(Mod2); sum(partial_f2(Mod2)[-1]); partial_f2(Mod2)
##
## Call:
## lm(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True +
## Intoxicated + Drinks + Age + BMI, data = data)
##
## Residuals:
## Min 1Q Median 3Q Max
## -3.13043 -0.90137 -0.01713 0.89194 2.87030
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.465e+00 1.827e+00 2.444 0.0155 *
## Clothes_RF 5.618e-01 2.251e-01 2.496 0.0135 *
## Self_Obj 3.954e-01 1.974e-01 2.004 0.0467 *
## Temp_True -4.215e-02 2.928e-02 -1.440 0.1518
## Intoxicated -1.725e-02 9.848e-02 -0.175 0.8612
## Drinks -8.505e-02 5.939e-02 -1.432 0.1539
## Age 1.457e-05 2.055e-02 0.001 0.9994
## BMI -1.018e-02 2.055e-02 -0.495 0.6211
## Clothes_RF:Self_Obj -1.300e-01 5.724e-02 -2.271 0.0244 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.346 on 172 degrees of freedom
## (6 observations deleted due to missingness)
## Multiple R-squared: 0.08679, Adjusted R-squared: 0.04432
## F-statistic: 2.043 on 8 and 172 DF, p-value: 0.04405
## [1] 0.1151079
## (Intercept) Clothes_RF Self_Obj Temp_True
## 3.473896e-02 3.621553e-02 2.333808e-02 1.205052e-02
## Intoxicated Drinks Age BMI
## 1.782982e-04 1.192571e-02 2.922610e-09 1.425871e-03
## Clothes_RF:Self_Obj
## 2.997382e-02
summary(ModJ); sum(partial_f2(ModJ)[-1]); partial_f2(ModJ)
##
## Call:
## lm(formula = Cold ~ Clothes_RF * Self_Obj, data = data)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.8545 -0.6967 -0.1242 0.6485 2.8307
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 1.76574 0.66530 2.654 0.00866 **
## Clothes_RF 0.53921 0.20788 2.594 0.01026 *
## Self_Obj 0.38438 0.18549 2.072 0.03965 *
## Clothes_RF:Self_Obj -0.12221 0.05375 -2.274 0.02415 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.344 on 182 degrees of freedom
## (1 observation deleted due to missingness)
## Multiple R-squared: 0.04436, Adjusted R-squared: 0.02861
## F-statistic: 2.816 on 3 and 182 DF, p-value: 0.04056
## [1] 0.08896727
## (Intercept) Clothes_RF Self_Obj Clothes_RF:Self_Obj
## 0.03870297 0.03696823 0.02359518 0.02840386
summary(ModD); sum(partial_f2(ModD)[-1]); partial_f2(ModD)
##
## Call:
## lm(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True +
## Intoxicated + Drinks + Age + BMI + Race + Sex_Orient, data = data)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.7614 -0.8636 -0.0710 0.8879 2.7344
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 5.0633453 1.8356502 2.758 0.00651 **
## Clothes_RF 0.5121891 0.2350777 2.179 0.03087 *
## Self_Obj 0.3598592 0.2033048 1.770 0.07870 .
## Temp_True -0.0434157 0.0315272 -1.377 0.17048
## Intoxicated -0.0716531 0.1025044 -0.699 0.48559
## Drinks -0.0591670 0.0620181 -0.954 0.34156
## Age 0.0007799 0.0213258 0.037 0.97088
## BMI -0.0139031 0.0210670 -0.660 0.51027
## Race -0.0894108 0.0611067 -1.463 0.14545
## Sex_Orient -0.0300816 0.1436138 -0.209 0.83436
## Clothes_RF:Self_Obj -0.1217193 0.0591071 -2.059 0.04115 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.317 on 154 degrees of freedom
## (22 observations deleted due to missingness)
## Multiple R-squared: 0.1048, Adjusted R-squared: 0.04665
## F-statistic: 1.803 on 10 and 154 DF, p-value: 0.06435
## [1] 0.1171288
## (Intercept) Clothes_RF Self_Obj Temp_True
## 4.940541e-02 3.082599e-02 2.034459e-02 1.231408e-02
## Intoxicated Drinks Age BMI
## 3.172957e-03 5.910190e-03 8.683798e-06 2.828111e-03
## Race Sex_Orient Clothes_RF:Self_Obj
## 1.390212e-02 2.848975e-04 2.753713e-02
The authors claimed that, at least when mean centered, the VIFs for all predictors as well as the interaction term were close to 1 and “all under 1.15, which suggests that multicollinearity is not an issue in the interpretation of the interaction.”
vif(Mod1)
## Clothes_RF Self_Obj Temp_True Clothes_RF:Self_Obj
## 16.917980 3.139688 1.032530 22.237854
vif(Mod2)
## Clothes_RF Self_Obj Temp_True Intoxicated
## 18.955019 3.381406 1.103050 1.865539
## Drinks Age BMI Clothes_RF:Self_Obj
## 1.909265 1.356048 1.089789 23.821438
vif(ModJ)
## Clothes_RF Self_Obj Clothes_RF:Self_Obj
## 16.715383 3.079854 22.040348
vif(ModD)
## Clothes_RF Self_Obj Temp_True Intoxicated
## 19.415493 3.457538 1.254847 1.829397
## Drinks Age BMI Race
## 1.990961 1.332999 1.102762 1.201612
## Sex_Orient Clothes_RF:Self_Obj
## 1.145896 24.617480
This claim is demonstrably untrue. What made me think it would be untrue was the r = 0.67 correlation between intoxication and drinks, which is clear, expected, and obviously theoretically explicable. Despite that, those variables did not produce the largest VIFs, although theirs were still greater than the 1.15 limit the authors claimed to have observed. The values arrived at are closer to what they reported with mean centering, but that is a necessary result of the transformation and thus uninteresting. Multicollinearity effects are obviously not removed by mean centering despite the VIFs being pushed downwards. To think as much is a confusion and a potential abuse of statistics. This should be extremely obvious, but the determinant of the correlation matrix (i.e., what we are actually worried about when it comes to multicollinearity) is identical for centered and uncentered variables. Interpreting VIFs from mean-centered variables as evidence of a lack of multicollinearity is foolish at best. I do not recommend mean centering naively, since all it is liable to do in situations like this is make results more significant than they ought to be. In other words, it is a potentially useful tool in a p-hacker’s arsenal, though, luckily for the credibility of these authors, not so much for interactions. Interested readers should see Iacobucci et al. (2016), which is a more informative paper than the Iacobucci et al. piece they cited.
det(round(cor(data[c("Self_Obj", "Clothes_RF", "Cold", "Temp_True", "Intoxicated", "Drinks", "Age", "BMI")], use = "pairwise.complete"), 2))
## [1] 0.3113784
det(round(cor(centered[c("Self_Obj", "Clothes_RF", "Cold", "Temp_True", "Intoxicated", "Drinks", "Age", "BMI")], use = "pairwise.complete"), 2))
## [1] 0.3113784
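To drive the point home, here is a quick check of my own (not part of the authors’ syntax) showing that centering the predictors by hand leaves the interaction estimate, its p value, and the overall fit untouched even as the VIFs shrink:
CentDat <- within(data, {
Clothes_RF.c <- Clothes_RF - mean(Clothes_RF, na.rm = TRUE)
Self_Obj.c <- Self_Obj - mean(Self_Obj, na.rm = TRUE)})
Mod1C <- lm(Cold ~ Clothes_RF.c * Self_Obj.c + Temp_True, CentDat)
summary(Mod1C)$coefficients["Clothes_RF.c:Self_Obj.c", ] #Same estimate, SE, and p as Mod1's interaction
vif(Mod1C) #VIFs drop toward 1, but nothing about the model's fit or the interaction has changed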
It should be noted that this is far too many bootstrap iterations to be taken seriously. They also never said they bootstrapped in their paper, but that is how their code looks on OSF; the archive link proving that is here: https://archive.ph/EQasI. The results are asymptotically identical.
set.seed(1)
BMod1 <- lm.boot(Mod1, 10000)
BMod2 <- lm.boot(Mod2, 10000)
BModJ <- lm.boot(ModJ, 10000)
BModD <- lm.boot(ModD, 10000)
summary(BMod1)
## BOOTSTRAP OF LINEAR MODEL (method = rows)
##
## Original Model Fit
## ------------------
## Call:
## lm(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True,
## data = data)
##
## Coefficients:
## (Intercept) Clothes_RF Self_Obj
## 4.50114 0.49921 0.33919
## Temp_True Clothes_RF:Self_Obj
## -0.04859 -0.11332
##
## Bootstrap SD's:
## (Intercept) Clothes_RF Self_Obj
## 1.50433553 0.20472472 0.17706288
## Temp_True Clothes_RF:Self_Obj
## 0.02623627 0.05437761
summary(BMod2)
## BOOTSTRAP OF LINEAR MODEL (method = rows)
##
## Original Model Fit
## ------------------
## Call:
## lm(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True +
## Intoxicated + Drinks + Age + BMI, data = data)
##
## Coefficients:
## (Intercept) Clothes_RF Self_Obj
## 4.465e+00 5.618e-01 3.954e-01
## Temp_True Intoxicated Drinks
## -4.215e-02 -1.725e-02 -8.505e-02
## Age BMI Clothes_RF:Self_Obj
## 1.457e-05 -1.018e-02 -1.300e-01
##
## Bootstrap SD's:
## (Intercept) Clothes_RF Self_Obj
## 1.72724710 0.24430600 0.20017905
## Temp_True Intoxicated Drinks
## 0.02790503 0.09733085 0.05588937
## Age BMI Clothes_RF:Self_Obj
## 0.01806523 0.02250312 0.06269119
summary(BModJ)
## BOOTSTRAP OF LINEAR MODEL (method = rows)
##
## Original Model Fit
## ------------------
## Call:
## lm(formula = Cold ~ Clothes_RF * Self_Obj, data = data)
##
## Coefficients:
## (Intercept) Clothes_RF Self_Obj
## 1.7657 0.5392 0.3844
## Clothes_RF:Self_Obj
## -0.1222
##
## Bootstrap SD's:
## (Intercept) Clothes_RF Self_Obj
## 0.62008793 0.21275637 0.18382538
## Clothes_RF:Self_Obj
## 0.05737778
summary(BModD)
## BOOTSTRAP OF LINEAR MODEL (method = rows)
##
## Original Model Fit
## ------------------
## Call:
## lm(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True +
## Intoxicated + Drinks + Age + BMI + Race + Sex_Orient, data = data)
##
## Coefficients:
## (Intercept) Clothes_RF Self_Obj
## 5.0633453 0.5121891 0.3598592
## Temp_True Intoxicated Drinks
## -0.0434157 -0.0716531 -0.0591670
## Age BMI Race
## 0.0007799 -0.0139031 -0.0894108
## Sex_Orient Clothes_RF:Self_Obj
## -0.0300816 -0.1217193
##
## Bootstrap SD's:
## (Intercept) Clothes_RF Self_Obj
## 1.78732565 0.25087099 0.20789944
## Temp_True Intoxicated Drinks
## 0.03200148 0.09418458 0.05588053
## Age BMI Race
## 0.01848237 0.02388038 0.06929331
## Sex_Orient Clothes_RF:Self_Obj
## 0.12289369 0.06427470
The only PROCESS model actually provided in their syntax (linked above) was from their first model. For some reason, they omitted their second model’s syntax. Their provided model syntax was
* run PROCESS model 1 with clothes_RF as x, cold as y, self_obj as moderator, and temp_true as covariate; 10,000 bootsraps, mean centered.
with
MISSING=PAIRWISE.
The model looks like this:
Mod1Labs = list(X = "Clothes_RF", Y = "Cold", W = "Self_Obj", covar = "Temp_True")
pmacroModel(1, labels = Mod1Labs)
Specified using lavaan, the result is as follows:
set.seed(1)
LavaanMod1 = '
Cold ~ c1*Clothes_RF+c2*Self_Obj+c3*Clothes_RF:Self_Obj + g1*Temp_True
Self_Obj ~ Self_Obj.mean*1
Self_Obj ~~ Self_Obj.var*Self_Obj'
LMod1Fit = sem(LavaanMod1, data, se = "bootstrap", bootstrap = 10000, missing = "pairwise", fixed.x = F)
summary(LMod1Fit, fit = F, stand = T, rsquare = T)
## lavaan 0.6-9 ended normally after 31 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 17
##
## Number of observations 187
## Number of missing patterns 2
##
## Model Test User Model:
##
## Test statistic 214.111
## Degrees of freedom 3
## P-value (Chi-square) 0.000
##
## Parameter Estimates:
##
## Standard errors Bootstrap
## Number of requested bootstrap draws 10000
## Number of successful bootstrap draws 10000
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Cold ~
## Cloths_RF (c1) 0.499 0.209 2.392 0.017 0.499 0.688
## Self_Obj (c2) 0.341 0.177 1.922 0.055 0.341 0.225
## Cl_RF:S_O (c3) -0.114 0.055 -2.050 0.040 -0.114 -0.695
## Temp_True (g1) -0.048 0.026 -1.864 0.062 -0.048 -0.123
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Clothes_RF ~~
## Clths_RF:Slf_O 15.714 1.415 11.103 0.000 15.714 0.935
## Temp_True -0.816 0.497 -1.641 0.101 -0.816 -0.116
## Clothes_RF:Self_Obj ~~
## Temp_True -3.678 2.175 -1.691 0.091 -3.678 -0.118
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Slf_Obj (S_O.) 3.746 0.068 55.068 0.000 3.746 4.026
## .Cold 4.487 1.488 3.015 0.003 4.487 3.179
## Clth_RF 3.059 0.143 21.377 0.000 3.059 1.572
## C_RF:S_ 12.026 0.637 18.882 0.000 12.026 1.393
## Temp_Tr 52.508 0.263 199.969 0.000 52.508 14.576
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Slf_Obj (S_O.) 0.866 0.081 10.678 0.000 0.866 1.000
## .Cold 1.739 0.160 10.838 0.000 1.739 0.873
## Clth_RF 3.788 0.332 11.424 0.000 3.788 1.000
## C_RF:S_ 74.489 7.242 10.286 0.000 74.489 1.000
## Temp_Tr 12.977 0.570 22.763 0.000 12.977 1.000
##
## R-Square:
## Estimate
## Cold 0.127
The second model, I presume - but do not know since the syntax was omitted for some reason - was also a PROCESS 1 model.
set.seed(1)
LavaanMod2 = '
Cold ~ c1*Clothes_RF+c2*Self_Obj+c3*Clothes_RF:Self_Obj + g1*Temp_True + g2*Intoxicated + g3*Age + g4*BMI
Self_Obj ~ Self_Obj.mean*1
Self_Obj ~~ Self_Obj.var*Self_Obj'
LMod2Fit = sem(LavaanMod2, data, se = "bootstrap", bootstrap = 10000, missing = "pairwise", fixed.x = F)
LavaanModJ = '
Cold ~ c1*Clothes_RF+c2*Self_Obj+c3*Clothes_RF:Self_Obj
Self_Obj ~ Self_Obj.mean*1
Self_Obj ~~ Self_Obj.var*Self_Obj'
LModJFit = sem(LavaanModJ, data, se = "bootstrap", bootstrap = 10000, missing = "pairwise", fixed.x = F)
LavaanModD = '
Cold ~ c1*Clothes_RF+c2*Self_Obj+c3*Clothes_RF:Self_Obj + g1*Temp_True + g2*Intoxicated + g3*Age + g4*BMI + g5*Race + g6*Sex_Orient
Self_Obj ~ Self_Obj.mean*1
Self_Obj ~~ Self_Obj.var*Self_Obj'
LModDFit = sem(LavaanModD, data, se = "bootstrap", bootstrap = 10000, missing = "pairwise", fixed.x = F)
summary(LMod2Fit, fit = F, stand = T, rsquare = T)
## lavaan 0.6-9 ended normally after 34 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 38
##
## Number of observations 187
## Number of missing patterns 4
##
## Model Test User Model:
##
## Test statistic 228.305
## Degrees of freedom 6
## P-value (Chi-square) 0.000
##
## Parameter Estimates:
##
## Standard errors Bootstrap
## Number of requested bootstrap draws 10000
## Number of successful bootstrap draws 10000
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Cold ~
## Cloths_RF (c1) 0.500 0.228 2.193 0.028 0.500 0.687
## Self_Obj (c2) 0.352 0.190 1.846 0.065 0.352 0.231
## Cl_RF:S_O (c3) -0.114 0.059 -1.947 0.052 -0.114 -0.695
## Temp_True (g1) -0.043 0.027 -1.599 0.110 -0.043 -0.109
## Intoxictd (g2) -0.116 0.075 -1.540 0.124 -0.116 -0.116
## Age (g3) -0.004 0.018 -0.234 0.815 -0.004 -0.017
## BMI (g4) -0.010 0.021 -0.476 0.634 -0.010 -0.036
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Clothes_RF ~~
## Clths_RF:Slf_O 15.714 1.415 11.103 0.000 15.714 0.935
## Temp_True -0.816 0.497 -1.641 0.101 -0.816 -0.116
## Intoxicated 0.333 0.194 1.713 0.087 0.333 0.121
## Age -4.207 0.967 -4.350 0.000 -4.207 -0.376
## BMI -2.039 0.687 -2.968 0.003 -2.039 -0.207
## Clothes_RF:Self_Obj ~~
## Temp_True -3.678 2.175 -1.691 0.091 -3.678 -0.118
## Intoxicated 1.558 0.870 1.791 0.073 1.558 0.128
## Age -18.181 3.971 -4.578 0.000 -18.181 -0.367
## BMI -8.281 3.076 -2.692 0.007 -8.281 -0.189
## Temp_True ~~
## Intoxicated 0.155 0.368 0.420 0.674 0.155 0.031
## Age 5.188 1.538 3.372 0.001 5.188 0.251
## BMI 2.687 1.200 2.239 0.025 2.687 0.147
## Intoxicated ~~
## Age -0.903 0.527 -1.713 0.087 -0.903 -0.112
## BMI -0.282 0.505 -0.558 0.577 -0.282 -0.040
## Age ~~
## BMI 6.841 2.077 3.293 0.001 6.841 0.235
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Slf_Obj (S_O.) 3.746 0.068 55.068 0.000 3.746 4.026
## .Cold 4.765 1.616 2.949 0.003 4.765 3.367
## Clth_RF 3.059 0.143 21.377 0.000 3.059 1.572
## C_RF:S_ 12.026 0.637 18.882 0.000 12.026 1.393
## Temp_Tr 52.508 0.263 199.969 0.000 52.508 14.576
## Intxctd 2.296 0.103 22.313 0.000 2.296 1.630
## Age 22.268 0.424 52.483 0.000 22.268 3.875
## BMI 24.350 0.368 66.198 0.000 24.350 4.801
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Slf_Obj (S_O.) 0.866 0.081 10.678 0.000 0.866 1.000
## .Cold 1.710 0.156 10.978 0.000 1.710 0.854
## Clth_RF 3.788 0.332 11.424 0.000 3.788 1.000
## C_RF:S_ 74.489 7.242 10.286 0.000 74.489 1.000
## Temp_Tr 12.977 0.570 22.763 0.000 12.977 1.000
## Intxctd 1.983 0.207 9.582 0.000 1.983 1.000
## Age 33.020 7.679 4.300 0.000 33.020 1.000
## BMI 25.719 3.372 7.626 0.000 25.719 1.000
##
## R-Square:
## Estimate
## Cold 0.146
summary(LModJFit, fit = F, stand = T, rsquare = T)
## lavaan 0.6-9 ended normally after 25 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 12
##
## Number of observations 187
## Number of missing patterns 2
##
## Model Test User Model:
##
## Test statistic 210.497
## Degrees of freedom 2
## P-value (Chi-square) 0.000
##
## Parameter Estimates:
##
## Standard errors Bootstrap
## Number of requested bootstrap draws 10000
## Number of successful bootstrap draws 10000
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Cold ~
## Cloths_RF (c1) 0.540 0.209 2.584 0.010 0.540 0.736
## Self_Obj (c2) 0.386 0.183 2.104 0.035 0.386 0.251
## Cl_RF:S_O (c3) -0.123 0.057 -2.169 0.030 -0.123 -0.741
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Clothes_RF ~~
## Clths_RF:Slf_O 15.714 1.422 11.052 0.000 15.714 0.935
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Slf_Obj (S_O.) 3.746 0.068 55.423 0.000 3.746 4.026
## .Cold 1.765 0.617 2.861 0.004 1.765 1.235
## Clth_RF 3.059 0.143 21.391 0.000 3.059 1.572
## C_RF:S_ 12.026 0.632 19.014 0.000 12.026 1.393
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Slf_Obj (S_O.) 0.866 0.082 10.547 0.000 0.866 1.000
## .Cold 1.768 0.160 11.032 0.000 1.768 0.866
## Clth_RF 3.788 0.334 11.327 0.000 3.788 1.000
## C_RF:S_ 74.489 7.247 10.279 0.000 74.489 1.000
##
## R-Square:
## Estimate
## Cold 0.134
summary(LModDFit, fit = F, stand = T, rsquare = T)
## lavaan 0.6-9 ended normally after 36 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 57
##
## Number of observations 187
## Number of missing patterns 6
##
## Model Test User Model:
##
## Test statistic 237.459
## Degrees of freedom 8
## P-value (Chi-square) 0.000
##
## Parameter Estimates:
##
## Standard errors Bootstrap
## Number of requested bootstrap draws 10000
## Number of successful bootstrap draws 10000
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Cold ~
## Cloths_RF (c1) 0.445 0.232 1.917 0.055 0.445 0.621
## Self_Obj (c2) 0.281 0.196 1.436 0.151 0.281 0.188
## Cl_RF:S_O (c3) -0.101 0.060 -1.685 0.092 -0.101 -0.624
## Temp_True (g1) -0.030 0.030 -1.000 0.317 -0.030 -0.078
## Intoxictd (g2) -0.130 0.075 -1.747 0.081 -0.130 -0.132
## Age (g3) -0.009 0.018 -0.490 0.624 -0.009 -0.037
## BMI (g4) -0.006 0.022 -0.278 0.781 -0.006 -0.022
## Race (g5) -0.089 0.066 -1.347 0.178 -0.089 -0.114
## Sex_Orint (g6) -0.091 0.116 -0.781 0.435 -0.091 -0.050
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Clothes_RF ~~
## Clths_RF:Slf_O 15.714 1.429 10.995 0.000 15.714 0.935
## Temp_True -0.816 0.496 -1.646 0.100 -0.816 -0.116
## Intoxicated 0.333 0.191 1.741 0.082 0.333 0.121
## Age -4.207 0.964 -4.363 0.000 -4.207 -0.376
## BMI -2.039 0.680 -2.998 0.003 -2.039 -0.207
## Race -0.392 0.284 -1.382 0.167 -0.392 -0.112
## Sex_Orient -0.099 0.106 -0.934 0.350 -0.099 -0.066
## Clothes_RF:Self_Obj ~~
## Temp_True -3.678 2.161 -1.702 0.089 -3.678 -0.118
## Intoxicated 1.558 0.851 1.831 0.067 1.558 0.128
## Age -18.181 3.959 -4.592 0.000 -18.181 -0.367
## BMI -8.281 3.050 -2.715 0.007 -8.281 -0.189
## Race -1.659 1.204 -1.378 0.168 -1.659 -0.107
## Sex_Orient -0.713 0.468 -1.523 0.128 -0.713 -0.107
## Temp_True ~~
## Intoxicated 0.155 0.360 0.431 0.667 0.155 0.031
## Age 5.188 1.554 3.338 0.001 5.188 0.251
## BMI 2.687 1.197 2.245 0.025 2.687 0.147
## Race 1.097 0.460 2.385 0.017 1.097 0.170
## Sex_Orient 0.895 0.195 4.587 0.000 0.895 0.323
## Intoxicated ~~
## Age -0.903 0.529 -1.708 0.088 -0.903 -0.112
## BMI -0.282 0.510 -0.553 0.580 -0.282 -0.040
## Race -0.407 0.171 -2.382 0.017 -0.407 -0.161
## Sex_Orient 0.055 0.082 0.665 0.506 0.055 0.050
## Age ~~
## BMI 6.841 2.063 3.317 0.001 6.841 0.235
## Race 0.629 0.928 0.677 0.498 0.629 0.061
## Sex_Orient -0.014 0.269 -0.052 0.959 -0.014 -0.003
## BMI ~~
## Race 1.358 0.714 1.901 0.057 1.358 0.149
## Sex_Orient 0.293 0.374 0.784 0.433 0.293 0.075
## Race ~~
## Sex_Orient 0.102 0.115 0.894 0.371 0.102 0.074
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Slf_Obj (S_O.) 3.746 0.068 54.871 0.000 3.746 4.026
## .Cold 4.725 1.639 2.882 0.004 4.725 3.394
## Clth_RF 3.059 0.142 21.564 0.000 3.059 1.572
## C_RF:S_ 12.026 0.631 19.059 0.000 12.026 1.393
## Temp_Tr 52.508 0.267 196.423 0.000 52.508 14.576
## Intxctd 2.296 0.104 22.169 0.000 2.296 1.630
## Age 22.268 0.425 52.346 0.000 22.268 3.875
## BMI 24.350 0.375 64.884 0.000 24.350 4.801
## Race 2.164 0.133 16.278 0.000 2.164 1.206
## Sx_Ornt 1.434 0.059 24.240 0.000 1.434 1.862
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Slf_Obj (S_O.) 0.866 0.082 10.524 0.000 0.866 1.000
## .Cold 1.684 0.152 11.056 0.000 1.684 0.869
## Clth_RF 3.788 0.332 11.393 0.000 3.788 1.000
## C_RF:S_ 74.489 7.348 10.137 0.000 74.489 1.000
## Temp_Tr 12.977 0.573 22.647 0.000 12.977 1.000
## Intxctd 1.983 0.206 9.629 0.000 1.983 1.000
## Age 33.020 7.591 4.350 0.000 33.020 1.000
## BMI 25.719 3.373 7.624 0.000 25.719 1.000
## Race 3.219 0.391 8.241 0.000 3.219 1.000
## Sx_Ornt 0.593 0.087 6.775 0.000 0.593 1.000
##
## R-Square:
## Estimate
## Cold 0.131
The authors’ second model and their fully-controlled model did not replicate in terms of p < 0.05 (and you do not round p values down, that is a QRP).
These plots were provided in the paper but not in the code. I took the title and axis labels from Figure 3 in the paper, so the inconsistency in capitalizing the “O” in “self-objectification” is not my fault. They were probably made in R, or in something mimicking the interactions package’s ggplot2 output, with the johnson_neyman() or sim_slopes() functions (the latter is what I used), since they look essentially identical to that output. This suggests they used an lm() model, but they did not say so, so it would be nice to know how they made this plot.
Note that if the whole sample is used rather than the filtered sample, the interaction is no longer significant. The biggest problem here is that we have no details about the specification of the model used to make their Johnson-Neyman plot, and no reason given for why it was run without mean centering, unlike the other analyses. The model I used is Mod1 from above.
JNE <- sim_slopes(Mod1, pred = Clothes_RF, modx = Self_Obj, johnson_neyman = T, jnplot = F)
JNE
## JOHNSON-NEYMAN INTERVAL
##
## When Self_Obj is OUTSIDE the interval [3.43, 14.54], the slope of
## Clothes_RF is p < .05.
##
## Note: The range of observed values of Self_Obj is [1.50, 6.00]
##
## SIMPLE SLOPES ANALYSIS
##
## Slope of Clothes_RF when Self_Obj = 2.813131 (- 1 SD):
##
## Est. S.E. t val. p
## ------ ------ -------- ------
## 0.18 0.07 2.47 0.01
##
## Slope of Clothes_RF when Self_Obj = 3.748176 (Mean):
##
## Est. S.E. t val. p
## ------ ------ -------- ------
## 0.07 0.05 1.40 0.16
##
## Slope of Clothes_RF when Self_Obj = 4.683221 (+ 1 SD):
##
## Est. S.E. t val. p
## ------- ------ -------- ------
## -0.03 0.07 -0.43 0.67
Importantly, this plot is not meaningful if there is no evidence of an interaction in the first place, as, without an interaction to speak of, it is just divvying up the data and wading through p values until p < 0.05 shows up. The p values for the slopes of the -1, 0, and +1 SD groups in this study, given above, were only significantly different from one another in one case.
ZREG <- function(B1, B2, SEB1, SEB2) { #z statistic for the difference between two coefficients given their standard errors
Z = (B1-B2)/sqrt((SEB1)^2 + (SEB2)^2)
return(Z)}
1-pnorm(ZREG(0.18, 0.07, 0.07, 0.05)) #+1 vs Mean; p = 0.10
## [1] 0.1004971
1-pnorm(ZREG(0.18, -0.03, 0.07, 0.07)) #+1 vs -1; p = 0.02
## [1] 0.01694743
1-pnorm(ZREG(0.07, -0.03, 0.05, 0.07)) #Mean vs -1; p = 0.12
## [1] 0.122521
At least two individuals tacitly claimed some expertise when it came to p values within the intervals in Johnson-Neyman plots. In the paper, the authors said
Examining the Johnson-Neyman significance regions, the positive relationship between skin exposure on perception of coldness is significant only for women who scored a 3.43 or lower on the measure of self-objectification, which corresponds to .32 units below the average level of self-objectification, b = .11, t(181) = 1.97, p = .05, 95% CI [0.00, 0.22] (Figure 3).
The (at least) two individuals who noticed this p value of 0.05 being picked up for criticism claimed that p = .05 “literally by definition” (https://archive.ph/n5jPU) and, much the same, that it “by definition is 0.05” (https://archive.ph/1zYsD). Where does this belief come from? Who knows, but it is so obviously false that no citation would ever be needed to disprove it: it is just plainly, gobsmackingly false. Because it is evidently not so obviously false that everyone sees it, a simple proof follows in the form of a simulation: I simulated an interaction below, then plotted the Johnson-Neyman region and output the results for it.
set.seed(1)
VX = rnorm(n = 1000, mean = -1, sd = 1)
Group = rnorm(n = 1000, mean = 1, sd = 0.3)
VY = 1 + 0.5 * Group + -0.5 * Group * VX + rnorm(n = 1000, mean = 0, sd = 1)
JNSim <- lm(VY ~ Group + VX + Group * VX)
summary(JNSim)
##
## Call:
## lm(formula = VY ~ Group + VX + Group * VX)
##
## Residuals:
## Min 1Q Median 3Q Max
## -3.6034 -0.6516 -0.0175 0.6935 2.8117
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 1.04249 0.14997 6.951 6.53e-12 ***
## Group 0.52328 0.14246 3.673 0.000252 ***
## VX 0.09990 0.10313 0.969 0.332917
## Group:VX -0.55048 0.09748 -5.647 2.13e-08 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.031 on 996 degrees of freedom
## Multiple R-squared: 0.256, Adjusted R-squared: 0.2537
## F-statistic: 114.2 on 3 and 996 DF, p-value: < 2.2e-16
JNI <- sim_slopes(JNSim, pred = VX, modx = Group, johnson_neyman = T, jnplot = F)
JNI
## JOHNSON-NEYMAN INTERVAL
##
## When Group is OUTSIDE the interval [-0.28, 0.42], the slope of VX is p <
## .05.
##
## Note: The range of observed values of Group is [0.02, 2.09]
##
## SIMPLE SLOPES ANALYSIS
##
## Slope of VX when Group = 0.6831270 (- 1 SD):
##
## Est. S.E. t val. p
## ------- ------ -------- ------
## -0.28 0.04 -6.19 0.00
##
## Slope of VX when Group = 0.9951214 (Mean):
##
## Est. S.E. t val. p
## ------- ------ -------- ------
## -0.45 0.03 -14.20 0.00
##
## Slope of VX when Group = 1.3071159 (+ 1 SD):
##
## Est. S.E. t val. p
## ------- ------ -------- ------
## -0.62 0.04 -14.42 0.00
The slope has a p value of <0.01 in each region, as would be expected with such an interaction. Obviously - and I repeat, obviously - the Johnson-Neyman plot does not produce p values for the slopes within regions of interaction and non-interaction that automatically equal 0.05. How people arrived at this belief, I really do not know. The first person who made this mistake (see link above) said that individuals who see a “moderate” p value and assume p-hacking should be mocked publicly. This is a strong belief and certainly not justified, but I regularly receive death threats and deal with malicious impersonation, so people can obviously justify anything if they put their hearts to it. Realistically, marginal effects are quite likely to not be real, and this is doubly true because, at least in this field, they typically come from underpowered studies, and this one was no exception. Because of low power, effect sizes need to be inflated to be significant, and the f2 values here are minuscule (around 1/8 to 1/5 of the f2 value of 0.15 used in the power analysis). In this case, the marginal effects also came with more problems, which I will get to. With all p values below 0.01, it should now make no sense to claim that the p values for the slopes of the +1/0/-1 SD groups must all be exactly 0.05. In point of fact, those slopes can be plotted individually. For the main study, this looks like
IP <- interact_plot(Mod1, pred = Clothes_RF, modx = Self_Obj, interval = T, plot.points = T, jitter = 0.05) + labs(x = "Skin Exposure", y = "Feeling Cold"); IP
In other words, it looks like a regression through noise (and there is no significant relationship aggregately) unless people are grouped together based on a marginally significant interaction.
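For what it’s worth, the aggregate association can be checked directly (my check, not anything the authors reported); with the r of roughly 0.13 in the correlation matrix above, it is not significant at conventional levels:
cor.test(data$Cold, data$Clothes_RF) #Zero-order association between feeling cold and skin exposure
summary(lm(Cold ~ Clothes_RF, data)) #Simple regression with no moderator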
There is evidence that the authors were looking for significance in the form of another filtering variable in addition to “score_differs”: “scores_differ_but_one”. It is a minor methodological choice, because it removes three participants, but it changes results nonetheless.
SemiFilter <- filter(Unfdata, scores_differ_but_one == 1 & BMI > 0) #Filters based on "but one" variable
FullFilter <- filter(Unfdata, score_differs == 1 & BMI > 0) #Filters based on the variable used to make the filter_$ variable
NoNAs <- Unfdata[!is.na(Unfdata$`filter_$`),] #Does not filter except on BMI > 0
length(SemiFilter$ID)
## [1] 184
length(FullFilter$ID)
## [1] 187
length(NoNAs$ID)
## [1] 193
length(Unfdata$ID)
## [1] 224
ModS <- lm(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True, SemiFilter); summary(ModS); partial_f2(ModS)
##
## Call:
## lm(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True,
## data = SemiFilter)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.96653 -0.74165 -0.01927 0.76742 2.77434
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.57061 1.72003 2.657 0.00859 **
## Clothes_RF 0.49178 0.20895 2.354 0.01968 *
## Self_Obj 0.33875 0.18740 1.808 0.07234 .
## Temp_True -0.05000 0.02808 -1.781 0.07669 .
## Clothes_RF:Self_Obj -0.11029 0.05398 -2.043 0.04250 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.341 on 178 degrees of freedom
## (1 observation deleted due to missingness)
## Multiple R-squared: 0.06266, Adjusted R-squared: 0.0416
## F-statistic: 2.975 on 4 and 178 DF, p-value: 0.02074
## (Intercept) Clothes_RF Self_Obj Temp_True
## 0.03966968 0.03112110 0.01835806 0.01781083
## Clothes_RF:Self_Obj
## 0.02345465
ModF <- lm(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True, FullFilter); summary(ModF); partial_f2(ModF)
##
## Call:
## lm(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True,
## data = FullFilter)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.93764 -0.74787 -0.04511 0.76645 2.81914
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.50114 1.69073 2.662 0.00846 **
## Clothes_RF 0.49921 0.20794 2.401 0.01738 *
## Self_Obj 0.33919 0.18622 1.821 0.07018 .
## Temp_True -0.04859 0.02764 -1.758 0.08043 .
## Clothes_RF:Self_Obj -0.11332 0.05368 -2.111 0.03616 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.337 on 181 degrees of freedom
## (1 observation deleted due to missingness)
## Multiple R-squared: 0.0604, Adjusted R-squared: 0.03964
## F-statistic: 2.909 on 4 and 181 DF, p-value: 0.02301
## (Intercept) Clothes_RF Self_Obj Temp_True
## 0.03915783 0.03184139 0.01833066 0.01707553
## Clothes_RF:Self_Obj
## 0.02461632
ModN <- lm(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True, NoNAs); summary(ModN); partial_f2(ModN)
##
## Call:
## lm(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True,
## data = NoNAs)
##
## Residuals:
## Min 1Q Median 3Q Max
## -3.04397 -0.75327 -0.06848 0.75331 2.72926
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.56452 1.67245 2.729 0.00695 **
## Clothes_RF 0.46315 0.20839 2.223 0.02745 *
## Self_Obj 0.29226 0.18604 1.571 0.11788
## Temp_True -0.04768 0.02720 -1.753 0.08121 .
## Clothes_RF:Self_Obj -0.09792 0.05356 -1.828 0.06910 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.345 on 187 degrees of freedom
## (1 observation deleted due to missingness)
## Multiple R-squared: 0.05983, Adjusted R-squared: 0.03972
## F-statistic: 2.975 on 4 and 187 DF, p-value: 0.02061
## (Intercept) Clothes_RF Self_Obj Temp_True
## 0.03983316 0.02641517 0.01319752 0.01643623
## Clothes_RF:Self_Obj
## 0.01787562
ModU <- lm(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True, Unfdata); summary(ModU); partial_f2(ModU)
##
## Call:
## lm(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True,
## data = Unfdata)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.7666 -0.6918 -0.2462 0.7320 2.8993
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.64611 1.61191 2.882 0.00434 **
## Clothes_RF 0.24063 0.20848 1.154 0.24969
## Self_Obj 0.09801 0.18147 0.540 0.58970
## Temp_True -0.03263 0.02640 -1.236 0.21783
## Clothes_RF:Self_Obj -0.05135 0.05367 -0.957 0.33968
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.409 on 218 degrees of freedom
## (1 observation deleted due to missingness)
## Multiple R-squared: 0.01651, Adjusted R-squared: -0.001537
## F-statistic: 0.9148 on 4 and 218 DF, p-value: 0.4561
## (Intercept) Clothes_RF Self_Obj Temp_True
## 0.038110009 0.006110809 0.001337953 0.007006521
## Clothes_RF:Self_Obj
## 0.004200140
Removing three participants brought the result closer to p = 0.05; adding nine made the interaction non-significant, and keeping 37 more left it nonexistent. The interaction went from being 17.08% to 15.63% to 11.92% to 2.80% of the effect specified in the power analysis. Without this undesirable form of filtering, there is nothing to this paper.
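Presumably (the paper does not say) these percentages are just the interaction’s partial f2 divided by the f2 of 0.15 assumed in the power analysis. A sketch of that calculation, using the models above:
sapply(list(Filtered = ModF, SemiFiltered = ModS, NoNAs = ModN, Everyone = ModU),
function(m) 100 * unname(partial_f2(m)["Clothes_RF:Self_Obj"]) / 0.15) #Interaction partial f2 as a percentage of the planned f2 = 0.15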
Sean Mackinnon substantially reproduced Felig et al.’s results with robust regression (https://archive.ph/qWDkW). I asked a friend to see if they could get his code and he provided it (his link, here: https://drive.google.com/drive/folders/1DpJjzA6Wq8-SYMEWZRkPwd57do1pC-Ka). With this, we can see how the robust regressions were reached. It is common among R users to just use the rlm() function from the MASS package, though it is not advisable to use any robust regression procedure without knowing what estimator you’re using (justify your analytic decisions!). Sean used lmrob() from the robustbase package, which uses an MM estimator. In at least one way, this function beats rlm(): it provides p values without requiring f.robftest(). Sean mean-centered his data by subtracting the mean, so I have recomputed the data with mean centering done in this way rather than with scale(), which also divides by the standard deviation. Functionally, the difference is Cold.c = Cold - mean(Cold, na.rm = T) versus Cold.c = scale(Cold). Because it is unknown what standard errors are used by rlm(), and the performance of MM estimators under non-normality/heteroskedasticity/etc. is presently dubious, I will use HC3 standard errors, which are appropriate and understood without having to peer into the code used by either function. These are strongly recommended when N < 250 and they are known to outperform HC2 and HC1. HC4 has been suggested as generally better, but some think it may push the error rate below the nominal one, obviously entailing an excess of false negatives; some people also just advocate bootstrapping HC4. Below, I provide the results from rlm(), lmrob(), and both HC3 and HC4. Additionally, for the sake of completeness, I include HC5, which performs better than HC4 in scenarios with high leverage. As a final note for this section, I did reuse the code from the Channeling Fisher reanalysis by Uri Simonsohn to test MM against HC3, HC4, and HC5: it lost to all three and performed only marginally worse than Fisher randomization tests. Little of this is especially relevant anyway, since the data are largely but not wholly ordinal (Cold = 1-6, Clothes_RF = 0-8, Self_Obj = average of eight 1-6 variables, Intoxicated = 1-6, etc.).
All of these models are run because they were described in the piece as also containing significant interactions. As was seen the first time I ran them and can be seen now, that is just not true.
First, rlm() results:
Mod1 <- rlm(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True, data)
Mod2 <- rlm(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True + Intoxicated + Drinks + Age + BMI, data)
ModJ <- rlm(Cold ~ Clothes_RF * Self_Obj, data)
ModD <- rlm(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True + Intoxicated + Drinks + Age + BMI + Race + Sex_Orient, data)
summary(Mod1); f.robftest(Mod1, var = "Clothes_RF:Self_Obj")
##
## Call: rlm(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True,
## data = data)
## Residuals:
## Min 1Q Median 3Q Max
## -2.78848 -0.75976 -0.01148 0.79567 3.03601
##
## Coefficients:
## Value Std. Error t value
## (Intercept) 4.6744 1.7747 2.6340
## Clothes_RF 0.5476 0.2183 2.5090
## Self_Obj 0.3864 0.1955 1.9768
## Temp_True -0.0540 0.0290 -1.8612
## Clothes_RF:Self_Obj -0.1339 0.0563 -2.3757
##
## Residual standard error: 1.154 on 181 degrees of freedom
## (1 observation deleted due to missingness)
##
## robust F-test (as if non-random weights)
##
## data: from rlm(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True, from data = data)
## F = 5.7449, p-value = 0.01756
## alternative hypothesis: true Clothes_RF:Self_Obj is not equal to 0
summary(Mod2); f.robftest(Mod2, var = "Clothes_RF:Self_Obj")
##
## Call: rlm(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True +
## Intoxicated + Drinks + Age + BMI, data = data)
## Residuals:
## Min 1Q Median 3Q Max
## -3.107311 -0.930262 -0.008723 0.947200 3.017234
##
## Coefficients:
## Value Std. Error t value
## (Intercept) 4.6720 1.9159 2.4386
## Clothes_RF 0.6166 0.2361 2.6121
## Self_Obj 0.4392 0.2070 2.1217
## Temp_True -0.0503 0.0307 -1.6376
## Intoxicated -0.0445 0.1033 -0.4305
## Drinks -0.0755 0.0623 -1.2122
## Age -0.0013 0.0215 -0.0582
## BMI -0.0033 0.0216 -0.1540
## Clothes_RF:Self_Obj -0.1489 0.0600 -2.4796
##
## Residual standard error: 1.384 on 172 degrees of freedom
## (6 observations deleted due to missingness)
##
## robust F-test (as if non-random weights)
##
## data: from rlm(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True + from Intoxicated + Drinks + Age + BMI, data = data)
## F = 6.1573, p-value = 0.01405
## alternative hypothesis: true Clothes_RF:Self_Obj is not equal to 0
summary(ModJ); f.robftest(ModJ, var = "Clothes_RF:Self_Obj")
##
## Call: rlm(formula = Cold ~ Clothes_RF * Self_Obj, data = data)
## Residuals:
## Min 1Q Median 3Q Max
## -2.7440 -0.6534 -0.1280 0.6975 3.0107
##
## Coefficients:
## Value Std. Error t value
## (Intercept) 1.8059 0.7799 2.3154
## Clothes_RF 0.5609 0.2437 2.3016
## Self_Obj 0.3768 0.2174 1.7327
## Clothes_RF:Self_Obj -0.1321 0.0630 -2.0957
##
## Residual standard error: 1.029 on 182 degrees of freedom
## (1 observation deleted due to missingness)
##
## robust F-test (as if non-random weights)
##
## data: from rlm(formula = Cold ~ Clothes_RF * Self_Obj, data = data)
## F = 4.382, p-value = 0.03771
## alternative hypothesis: true Clothes_RF:Self_Obj is not equal to 0
summary(ModD); f.robftest(ModD, var = "Clothes_RF:Self_Obj")
##
## Call: rlm(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True +
## Intoxicated + Drinks + Age + BMI + Race + Sex_Orient, data = data)
## Residuals:
## Min 1Q Median 3Q Max
## -2.83436 -0.90321 -0.06051 0.88754 2.94350
##
## Coefficients:
## Value Std. Error t value
## (Intercept) 5.1515 1.9459 2.6474
## Clothes_RF 0.5863 0.2492 2.3529
## Self_Obj 0.4142 0.2155 1.9220
## Temp_True -0.0518 0.0334 -1.5511
## Intoxicated -0.0930 0.1087 -0.8558
## Drinks -0.0433 0.0657 -0.6589
## Age -0.0004 0.0226 -0.0195
## BMI -0.0051 0.0223 -0.2301
## Race -0.0777 0.0648 -1.1992
## Sex_Orient -0.0359 0.1522 -0.2357
## Clothes_RF:Self_Obj -0.1460 0.0627 -2.3302
##
## Residual standard error: 1.318 on 154 degrees of freedom
## (22 observations deleted due to missingness)
##
## robust F-test (as if non-random weights)
##
## data: from rlm(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True + from Intoxicated + Drinks + Age + BMI + Race + Sex_Orient, data = data)
## F = 5.4324, p-value = 0.02106
## alternative hypothesis: true Clothes_RF:Self_Obj is not equal to 0
Where the authors claimed all of these specifications yielded significant interactions, only 2/4 here do, and they are - as everyone finds - marginal.
Second, lmrob() results:
Mod1 <- lmrob(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True, data)
Mod2 <- lmrob(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True + Intoxicated + Drinks + Age + BMI, data)
ModJ <- lmrob(Cold ~ Clothes_RF * Self_Obj, data)
ModD <- lmrob(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True + Intoxicated + Drinks + Age + BMI + Race + Sex_Orient, data)
## Warning in lmrob.S(x, y, control = control): S refinements did not converge (to
## refine.tol=1e-07) in 200 (= k.max) steps
summary(Mod1)
##
## Call:
## lmrob(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True, data = data)
## \--> method = "MM"
## Residuals:
## Min 1Q Median 3Q Max
## -2.85689 -0.75801 -0.02146 0.79555 2.94698
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.63411 1.50081 3.088 0.00233 **
## Clothes_RF 0.52888 0.20828 2.539 0.01195 *
## Self_Obj 0.36435 0.17735 2.054 0.04137 *
## Temp_True -0.05218 0.02780 -1.877 0.06217 .
## Clothes_RF:Self_Obj -0.12552 0.05535 -2.268 0.02452 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Robust residual standard error: 1.289
## (1 observation deleted due to missingness)
## Multiple R-squared: 0.06239, Adjusted R-squared: 0.04167
## Convergence in 12 IRWLS iterations
##
## Robustness weights:
## 15 weights are ~= 1. The remaining 171 ones are summarized as
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.5804 0.8588 0.9575 0.9027 0.9912 0.9990
## Algorithmic parameters:
## tuning.chi bb tuning.psi refine.tol
## 1.548e+00 5.000e-01 4.685e+00 1.000e-07
## rel.tol scale.tol solve.tol eps.outlier
## 1.000e-07 1.000e-10 1.000e-07 5.376e-04
## eps.x warn.limit.reject warn.limit.meanrw
## 1.055e-10 5.000e-01 5.000e-01
## nResample max.it best.r.s k.fast.s k.max
## 500 50 2 1 200
## maxit.scale trace.lev mts compute.rd fast.s.large.n
## 200 0 1000 0 2000
## psi subsampling cov
## "bisquare" "nonsingular" ".vcov.avar1"
## compute.outlier.stats
## "SM"
## seed : int(0)
summary(Mod2)
##
## Call:
## lmrob(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True + Intoxicated +
## Drinks + Age + BMI, data = data)
## \--> method = "MM"
## Residuals:
## Min 1Q Median 3Q Max
## -3.11741 -0.91733 -0.01239 0.92172 2.95193
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.5678323 1.7636337 2.590 0.0104 *
## Clothes_RF 0.5949111 0.2475534 2.403 0.0173 *
## Self_Obj 0.4278656 0.2024620 2.113 0.0360 *
## Temp_True -0.0487461 0.0302440 -1.612 0.1088
## Intoxicated -0.0338364 0.1001731 -0.338 0.7359
## Drinks -0.0747862 0.0571576 -1.308 0.1925
## Age -0.0009696 0.0168231 -0.058 0.9541
## BMI -0.0020814 0.0249324 -0.083 0.9336
## Clothes_RF:Self_Obj -0.1421233 0.0638085 -2.227 0.0272 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Robust residual standard error: 1.318
## (6 observations deleted due to missingness)
## Multiple R-squared: 0.08516, Adjusted R-squared: 0.04261
## Convergence in 12 IRWLS iterations
##
## Robustness weights:
## 18 weights are ~= 1. The remaining 163 ones are summarized as
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.5554 0.8614 0.9481 0.9053 0.9840 0.9989
## Algorithmic parameters:
## tuning.chi bb tuning.psi refine.tol
## 1.548e+00 5.000e-01 4.685e+00 1.000e-07
## rel.tol scale.tol solve.tol eps.outlier
## 1.000e-07 1.000e-10 1.000e-07 5.525e-04
## eps.x warn.limit.reject warn.limit.meanrw
## 1.055e-10 5.000e-01 5.000e-01
## nResample max.it best.r.s k.fast.s k.max
## 500 50 2 1 200
## maxit.scale trace.lev mts compute.rd fast.s.large.n
## 200 0 1000 0 2000
## psi subsampling cov
## "bisquare" "nonsingular" ".vcov.avar1"
## compute.outlier.stats
## "SM"
## seed : int(0)
summary(ModJ)
##
## Call:
## lmrob(formula = Cold ~ Clothes_RF * Self_Obj, data = data)
## \--> method = "MM"
## Residuals:
## Min 1Q Median 3Q Max
## -2.8003 -0.6802 -0.1214 0.6796 2.9437
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 1.76580 0.63863 2.765 0.00628 **
## Clothes_RF 0.56337 0.21703 2.596 0.01021 *
## Self_Obj 0.38584 0.18890 2.043 0.04254 *
## Clothes_RF:Self_Obj -0.13065 0.05865 -2.228 0.02713 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Robust residual standard error: 1.288
## (1 observation deleted due to missingness)
## Multiple R-squared: 0.04365, Adjusted R-squared: 0.02789
## Convergence in 11 IRWLS iterations
##
## Robustness weights:
## 14 weights are ~= 1. The remaining 172 ones are summarized as
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.5807 0.8658 0.9632 0.9015 0.9888 0.9986
## Algorithmic parameters:
## tuning.chi bb tuning.psi refine.tol
## 1.548e+00 5.000e-01 4.685e+00 1.000e-07
## rel.tol scale.tol solve.tol eps.outlier
## 1.000e-07 1.000e-10 1.000e-07 5.376e-04
## eps.x warn.limit.reject warn.limit.meanrw
## 6.412e-11 5.000e-01 5.000e-01
## nResample max.it best.r.s k.fast.s k.max
## 500 50 2 1 200
## maxit.scale trace.lev mts compute.rd fast.s.large.n
## 200 0 1000 0 2000
## psi subsampling cov
## "bisquare" "nonsingular" ".vcov.avar1"
## compute.outlier.stats
## "SM"
## seed : int(0)
summary(ModD)
##
## Call:
## lmrob(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True + Intoxicated +
## Drinks + Age + BMI + Race + Sex_Orient, data = data)
## \--> method = "S"
## Residuals:
## Min 1Q Median 3Q Max
## -3.382964 -0.492231 0.002482 0.852215 3.075690
##
## Algorithm did not converge
##
## Coefficients of the *initial* S-estimator:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 2.50052 NA NA NA
## Clothes_RF 0.44967 NA NA NA
## Self_Obj 0.38853 NA NA NA
## Temp_True -0.05486 NA NA NA
## Intoxicated 0.14416 NA NA NA
## Drinks -0.03557 NA NA NA
## Age 0.01922 NA NA NA
## BMI 0.04754 NA NA NA
## Race 0.08182 NA NA NA
## Sex_Orient 0.03169 NA NA NA
## Clothes_RF:Self_Obj -0.10230 NA NA NA
##
## Robustness weights:
## 34 observations c(1,2,5,6,7,13,14,16,19,21,27,31,32,33,37,38,42,50,53,71,73,74,77,92,99,100,114,123,130,140,143,144,161,163)
## are outliers with |weight| = 0 ( < 0.00061);
## 8 weights are ~= 1. The remaining 123 ones are summarized as
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.0009116 0.5600000 0.8607000 0.7092000 0.9582000 0.9988000
## Algorithmic parameters:
## tuning.chi bb tuning.psi refine.tol
## 1.548e+00 5.000e-01 4.685e+00 1.000e-07
## rel.tol scale.tol solve.tol eps.outlier
## 1.000e-07 1.000e-10 1.000e-07 6.061e-04
## eps.x warn.limit.reject warn.limit.meanrw
## 1.055e-10 5.000e-01 5.000e-01
## nResample max.it best.r.s k.fast.s k.max
## 500 50 2 1 200
## maxit.scale trace.lev mts compute.rd fast.s.large.n
## 200 0 1000 0 2000
## psi subsampling cov
## "bisquare" "nonsingular" ".vcov.avar1"
## compute.outlier.stats
## "SM"
## seed : int(0)
Three of the lmrob() results are significant and one did not converge.
Third, the HC3 SEs:
Mod1P <- lm(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True, data)
Mod2P <- lm(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True + Intoxicated + Drinks + Age + BMI, data)
ModJP <- lm(Cold ~ Clothes_RF * Self_Obj, data)
ModDP <- lm(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True + Intoxicated + Drinks + Age + BMI + Race + Sex_Orient, data)
VC1 <- vcovHC(Mod1P, type = "HC3")
VC2 <- vcovHC(Mod2P, type = "HC3")
VCJ <- vcovHC(ModJP, type = "HC3")
VCD <- vcovHC(ModDP, type = "HC3")
RM1 <- coeftest(Mod1P, vcov = VC1)
RM2 <- coeftest(Mod2P, vcov = VC2)
RMJ <- coeftest(ModJP, vcov = VCJ)
RMD <- coeftest(ModDP, vcov = VCD)
RM1
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.501140 1.497128 3.0065 0.003018 **
## Clothes_RF 0.499206 0.210091 2.3761 0.018539 *
## Self_Obj 0.339192 0.174309 1.9459 0.053213 .
## Temp_True -0.048591 0.026503 -1.8334 0.068379 .
## Clothes_RF:Self_Obj -0.113318 0.055473 -2.0428 0.042525 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
RM2
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.4653e+00 1.7584e+00 2.5394 0.01199 *
## Clothes_RF 5.6178e-01 2.4723e-01 2.2723 0.02431 *
## Self_Obj 3.9544e-01 2.0163e-01 1.9612 0.05147 .
## Temp_True -4.2149e-02 2.8755e-02 -1.4658 0.14453
## Intoxicated -1.7246e-02 1.0281e-01 -0.1677 0.86698
## Drinks -8.5052e-02 5.8075e-02 -1.4645 0.14488
## Age 1.4567e-05 1.7962e-02 0.0008 0.99935
## BMI -1.0176e-02 2.4343e-02 -0.4180 0.67644
## Clothes_RF:Self_Obj -1.2997e-01 6.3229e-02 -2.0555 0.04134 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
RMJ
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 1.76574 0.62154 2.8409 0.005011 **
## Clothes_RF 0.53921 0.21440 2.5149 0.012772 *
## Self_Obj 0.38438 0.18293 2.1013 0.036993 *
## Clothes_RF:Self_Obj -0.12221 0.05756 -2.1232 0.035088 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
RMD
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 5.06334526 1.80486675 2.8054 0.005674 **
## Clothes_RF 0.51218912 0.25275004 2.0265 0.044443 *
## Self_Obj 0.35985915 0.20997664 1.7138 0.088576 .
## Temp_True -0.04341569 0.03269832 -1.3278 0.186220
## Intoxicated -0.07165310 0.09776033 -0.7329 0.464705
## Drinks -0.05916703 0.05697050 -1.0386 0.300639
## Age 0.00077987 0.01869360 0.0417 0.966777
## BMI -0.01390307 0.02598168 -0.5351 0.593345
## Race -0.08941077 0.07220307 -1.2383 0.217480
## Sex_Orient -0.03008160 0.12354547 -0.2435 0.807953
## Clothes_RF:Self_Obj -0.12171934 0.06427190 -1.8938 0.060124 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Three out of four results were reproduced when HC3 was used.
VC1 <- vcovHC(Mod1P, type = "HC4")
VC2 <- vcovHC(Mod2P, type = "HC4")
VCJ <- vcovHC(ModJP, type = "HC4")
VCD <- vcovHC(ModDP, type = "HC4")
RM1 <- coeftest(Mod1P, vcov = VC1)
RM2 <- coeftest(Mod2P, vcov = VC2)
RMJ <- coeftest(ModJP, vcov = VCJ)
RMD <- coeftest(ModDP, vcov = VCD)
RM1
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.501140 1.488531 3.0239 0.002858 **
## Clothes_RF 0.499206 0.215168 2.3201 0.021453 *
## Self_Obj 0.339192 0.175349 1.9344 0.054626 .
## Temp_True -0.048591 0.026324 -1.8459 0.066539 .
## Clothes_RF:Self_Obj -0.113318 0.056714 -1.9981 0.047207 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
RM2
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.4653e+00 1.7588e+00 2.5389 0.01201 *
## Clothes_RF 5.6178e-01 2.4717e-01 2.2728 0.02427 *
## Self_Obj 3.9544e-01 2.0417e-01 1.9368 0.05441 .
## Temp_True -4.2149e-02 2.8348e-02 -1.4869 0.13888
## Intoxicated -1.7246e-02 1.0764e-01 -0.1602 0.87290
## Drinks -8.5052e-02 6.0119e-02 -1.4147 0.15896
## Age 1.4567e-05 1.8273e-02 0.0008 0.99936
## BMI -1.0176e-02 2.5296e-02 -0.4023 0.68797
## Clothes_RF:Self_Obj -1.2997e-01 6.3310e-02 -2.0529 0.04160 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
RMJ
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 1.765743 0.634708 2.7820 0.005972 **
## Clothes_RF 0.539211 0.222135 2.4274 0.016182 *
## Self_Obj 0.384385 0.185673 2.0702 0.039843 *
## Clothes_RF:Self_Obj -0.122213 0.059542 -2.0526 0.041547 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
RMD
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 5.06334526 1.78872507 2.8307 0.005265 **
## Clothes_RF 0.51218912 0.24963616 2.0517 0.041888 *
## Self_Obj 0.35985915 0.20956218 1.7172 0.087954 .
## Temp_True -0.04341569 0.03189606 -1.3612 0.175451
## Intoxicated -0.07165310 0.10016648 -0.7153 0.475482
## Drinks -0.05916703 0.05780676 -1.0235 0.307662
## Age 0.00077987 0.01913216 0.0408 0.967538
## BMI -0.01390307 0.02670926 -0.5205 0.603439
## Race -0.08941077 0.07109374 -1.2576 0.210423
## Sex_Orient -0.03008160 0.12261442 -0.2453 0.806524
## Clothes_RF:Self_Obj -0.12171934 0.06361190 -1.9135 0.057543 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Three out of four results were reproduced when HC4 was used.
VC1 <- vcovHC(Mod1P, type = "HC5")
VC2 <- vcovHC(Mod2P, type = "HC5")
VCJ <- vcovHC(ModJP, type = "HC5")
VCD <- vcovHC(ModDP, type = "HC5")
RM1 <- coeftest(Mod1P, vcov = VC1)
RM2 <- coeftest(Mod2P, vcov = VC2)
RMJ <- coeftest(ModJP, vcov = VCJ)
RMD <- coeftest(ModDP, vcov = VCD)
RM1
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.501140 1.469425 3.0632 0.002524 **
## Clothes_RF 0.499206 0.206005 2.4233 0.016366 *
## Self_Obj 0.339192 0.170746 1.9865 0.048483 *
## Temp_True -0.048591 0.026035 -1.8664 0.063605 .
## Clothes_RF:Self_Obj -0.113318 0.054383 -2.0837 0.038590 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
RM2
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.4653e+00 1.6982e+00 2.6294 0.00933 **
## Clothes_RF 5.6178e-01 2.3655e-01 2.3749 0.01866 *
## Self_Obj 3.9544e-01 1.9389e-01 2.0395 0.04293 *
## Temp_True -4.2149e-02 2.7825e-02 -1.5148 0.13166
## Intoxicated -1.7246e-02 9.9476e-02 -0.1734 0.86257
## Drinks -8.5052e-02 5.6171e-02 -1.5142 0.13182
## Age 1.4567e-05 1.7260e-02 0.0008 0.99933
## BMI -1.0176e-02 2.3430e-02 -0.4343 0.66460
## Clothes_RF:Self_Obj -1.2997e-01 6.0548e-02 -2.1466 0.03323 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
RMJ
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 1.765743 0.613262 2.8793 0.004464 **
## Clothes_RF 0.539211 0.212166 2.5415 0.011874 *
## Self_Obj 0.384385 0.180378 2.1310 0.034432 *
## Clothes_RF:Self_Obj -0.122213 0.056933 -2.1466 0.033149 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
RMD
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 5.06334526 1.71964155 2.9444 0.003737 **
## Clothes_RF 0.51218912 0.23833842 2.1490 0.033197 *
## Self_Obj 0.35985915 0.19882384 1.8099 0.072255 .
## Temp_True -0.04341569 0.03123443 -1.3900 0.166536
## Intoxicated -0.07165310 0.09275893 -0.7725 0.441022
## Drinks -0.05916703 0.05418361 -1.0920 0.276550
## Age 0.00077987 0.01772204 0.0440 0.964957
## BMI -0.01390307 0.02449017 -0.5677 0.571065
## Race -0.08941077 0.06845263 -1.3062 0.193442
## Sex_Orient -0.03008160 0.11667505 -0.2578 0.796887
## Clothes_RF:Self_Obj -0.12171934 0.06067268 -2.0062 0.046590 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
All results were reproduced with HC5.
The PROCESS models needed to be refitted, though they should not be over-interpreted, since the robust estimator is not very good. I will not be bootstrapping this time, and it should be noted that PROCESS is not accurate without bootstrapping.
LavaanMod1 = '
Cold ~ c1*Clothes_RF+c2*Self_Obj+c3*Clothes_RF:Self_Obj + g1*Temp_True
Self_Obj ~ Self_Obj.mean*1
Self_Obj ~~ Self_Obj.var*Self_Obj'
LMod1Fit = sem(LavaanMod1, data, se = "robust", missing = "pairwise", fixed.x = F)
LavaanMod2 = '
Cold ~ c1*Clothes_RF+c2*Self_Obj+c3*Clothes_RF:Self_Obj + g1*Temp_True + g2*Intoxicated + g3*Age + g4*BMI
Self_Obj ~ Self_Obj.mean*1
Self_Obj ~~ Self_Obj.var*Self_Obj'
LMod2Fit = sem(LavaanMod2, data, se = "robust", missing = "pairwise", fixed.x = F)
LavaanModJ = '
Cold ~ c1*Clothes_RF+c2*Self_Obj+c3*Clothes_RF:Self_Obj
Self_Obj ~ Self_Obj.mean*1
Self_Obj ~~ Self_Obj.var*Self_Obj'
LModJFit = sem(LavaanModJ, data, se = "robust", missing = "pairwise", fixed.x = F)
LavaanModD = '
Cold ~ c1*Clothes_RF+c2*Self_Obj+c3*Clothes_RF:Self_Obj + g1*Temp_True + g2*Intoxicated + g3*Age + g4*BMI + g5*Race + g6*Sex_Orient
Self_Obj ~ Self_Obj.mean*1
Self_Obj ~~ Self_Obj.var*Self_Obj'
LModDFit = sem(LavaanModD, data, se = "robust", missing = "pairwise", fixed.x = F)
summary(LMod1Fit, fit = F, stand = T, rsquare = T)
## lavaan 0.6-9 ended normally after 31 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 17
##
## Number of observations 187
## Number of missing patterns 2
##
## Model Test User Model:
##
## Test statistic 214.111
## Degrees of freedom 3
## P-value (Chi-square) 0.000
##
## Parameter Estimates:
##
## Standard errors Robust.sem
## Information Expected
## Information saturated (h1) model Structured
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Cold ~
## Cloths_RF (c1) 0.499 0.147 3.392 0.001 0.499 0.688
## Self_Obj (c2) 0.341 0.105 3.256 0.001 0.341 0.225
## Cl_RF:S_O (c3) -0.114 0.034 -3.349 0.001 -0.114 -0.695
## Temp_True (g1) -0.048 0.026 -1.834 0.067 -0.048 -0.123
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Clothes_RF ~~
## Clths_RF:Slf_O 15.714 1.417 11.092 0.000 15.714 0.935
## Temp_True -0.816 0.501 -1.627 0.104 -0.816 -0.116
## Clothes_RF:Self_Obj ~~
## Temp_True -3.678 2.188 -1.681 0.093 -3.678 -0.118
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Slf_Obj (S_O.) 3.746 0.068 55.059 0.000 3.746 4.026
## .Cold 4.487 1.427 3.145 0.002 4.487 3.179
## Clth_RF 3.059 0.142 21.492 0.000 3.059 1.572
## C_RF:S_ 12.026 0.631 19.054 0.000 12.026 1.393
## Temp_Tr 52.508 0.263 199.322 0.000 52.508 14.576
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Slf_Obj (S_O.) 0.866 0.082 10.522 0.000 0.866 1.000
## .Cold 1.739 0.160 10.847 0.000 1.739 0.873
## Clth_RF 3.788 0.332 11.401 0.000 3.788 1.000
## C_RF:S_ 74.489 7.257 10.264 0.000 74.489 1.000
## Temp_Tr 12.977 0.561 23.122 0.000 12.977 1.000
##
## R-Square:
## Estimate
## Cold 0.127
summary(LMod2Fit, fit = F, stand = T, rsquare = T)
## lavaan 0.6-9 ended normally after 34 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 38
##
## Number of observations 187
## Number of missing patterns 4
##
## Model Test User Model:
##
## Test statistic 228.305
## Degrees of freedom 6
## P-value (Chi-square) 0.000
##
## Parameter Estimates:
##
## Standard errors Robust.sem
## Information Expected
## Information saturated (h1) model Structured
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Cold ~
## Cloths_RF (c1) 0.500 0.151 3.311 0.001 0.500 0.687
## Self_Obj (c2) 0.352 0.105 3.363 0.001 0.352 0.231
## Cl_RF:S_O (c3) -0.114 0.034 -3.338 0.001 -0.114 -0.695
## Temp_True (g1) -0.043 0.027 -1.599 0.110 -0.043 -0.109
## Intoxictd (g2) -0.116 0.072 -1.622 0.105 -0.116 -0.116
## Age (g3) -0.004 0.015 -0.274 0.784 -0.004 -0.017
## BMI (g4) -0.010 0.020 -0.495 0.620 -0.010 -0.036
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Clothes_RF ~~
## Clths_RF:Slf_O 15.714 1.417 11.092 0.000 15.714 0.935
## Temp_True -0.816 0.501 -1.627 0.104 -0.816 -0.116
## Intoxicated 0.333 0.194 1.717 0.086 0.333 0.121
## Age -4.207 0.952 -4.421 0.000 -4.207 -0.376
## BMI -2.039 0.682 -2.989 0.003 -2.039 -0.207
## Clothes_RF:Self_Obj ~~
## Temp_True -3.678 2.188 -1.681 0.093 -3.678 -0.118
## Intoxicated 1.558 0.865 1.801 0.072 1.558 0.128
## Age -18.181 3.911 -4.649 0.000 -18.181 -0.367
## BMI -8.281 3.082 -2.687 0.007 -8.281 -0.189
## Temp_True ~~
## Intoxicated 0.155 0.363 0.426 0.670 0.155 0.031
## Age 5.188 1.506 3.445 0.001 5.188 0.251
## BMI 2.687 1.197 2.245 0.025 2.687 0.147
## Intoxicated ~~
## Age -0.903 0.517 -1.746 0.081 -0.903 -0.112
## BMI -0.282 0.507 -0.556 0.578 -0.282 -0.040
## Age ~~
## BMI 6.841 2.039 3.356 0.001 6.841 0.235
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Slf_Obj (S_O.) 3.746 0.068 55.059 0.000 3.746 4.026
## .Cold 4.765 1.480 3.219 0.001 4.765 3.367
## Clth_RF 3.059 0.142 21.492 0.000 3.059 1.572
## C_RF:S_ 12.026 0.631 19.054 0.000 12.026 1.393
## Temp_Tr 52.508 0.263 199.322 0.000 52.508 14.576
## Intxctd 2.296 0.103 22.356 0.000 2.296 1.630
## Age 22.268 0.416 53.571 0.000 22.268 3.875
## BMI 24.350 0.371 65.658 0.000 24.350 4.801
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Slf_Obj (S_O.) 0.866 0.082 10.522 0.000 0.866 1.000
## .Cold 1.710 0.155 11.004 0.000 1.710 0.854
## Clth_RF 3.788 0.332 11.401 0.000 3.788 1.000
## C_RF:S_ 74.489 7.257 10.264 0.000 74.489 1.000
## Temp_Tr 12.977 0.561 23.122 0.000 12.977 1.000
## Intxctd 1.983 0.207 9.591 0.000 1.983 1.000
## Age 33.020 7.506 4.399 0.000 33.020 1.000
## BMI 25.719 3.383 7.603 0.000 25.719 1.000
##
## R-Square:
## Estimate
## Cold 0.146
summary(LModJFit, fit = F, stand = T, rsquare = T)
## lavaan 0.6-9 ended normally after 25 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 12
##
## Number of observations 187
## Number of missing patterns 2
##
## Model Test User Model:
##
## Test statistic 210.497
## Degrees of freedom 2
## P-value (Chi-square) 0.000
##
## Parameter Estimates:
##
## Standard errors Robust.sem
## Information Expected
## Information saturated (h1) model Structured
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Cold ~
## Cloths_RF (c1) 0.540 0.145 3.711 0.000 0.540 0.736
## Self_Obj (c2) 0.386 0.106 3.627 0.000 0.386 0.251
## Cl_RF:S_O (c3) -0.123 0.034 -3.572 0.000 -0.123 -0.741
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Clothes_RF ~~
## Clths_RF:Slf_O 15.714 1.417 11.092 0.000 15.714 0.935
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Slf_Obj (S_O.) 3.746 0.068 55.059 0.000 3.746 4.026
## .Cold 1.765 0.435 4.053 0.000 1.765 1.235
## Clth_RF 3.059 0.142 21.492 0.000 3.059 1.572
## C_RF:S_ 12.026 0.631 19.054 0.000 12.026 1.393
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Slf_Obj (S_O.) 0.866 0.082 10.522 0.000 0.866 1.000
## .Cold 1.768 0.161 10.977 0.000 1.768 0.866
## Clth_RF 3.788 0.332 11.401 0.000 3.788 1.000
## C_RF:S_ 74.489 7.257 10.264 0.000 74.489 1.000
##
## R-Square:
## Estimate
## Cold 0.134
summary(LModDFit, fit = F, stand = T, rsquare = T)
## lavaan 0.6-9 ended normally after 36 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 57
##
## Number of observations 187
## Number of missing patterns 6
##
## Model Test User Model:
##
## Test statistic 237.459
## Degrees of freedom 8
## P-value (Chi-square) 0.000
##
## Parameter Estimates:
##
## Standard errors Robust.sem
## Information Expected
## Information saturated (h1) model Structured
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Cold ~
## Cloths_RF (c1) 0.445 0.150 2.963 0.003 0.445 0.621
## Self_Obj (c2) 0.281 0.104 2.716 0.007 0.281 0.188
## Cl_RF:S_O (c3) -0.101 0.034 -2.937 0.003 -0.101 -0.624
## Temp_True (g1) -0.030 0.029 -1.028 0.304 -0.030 -0.078
## Intoxictd (g2) -0.130 0.071 -1.829 0.067 -0.130 -0.132
## Age (g3) -0.009 0.015 -0.586 0.558 -0.009 -0.037
## BMI (g4) -0.006 0.021 -0.290 0.772 -0.006 -0.022
## Race (g5) -0.089 0.058 -1.522 0.128 -0.089 -0.114
## Sex_Orint (g6) -0.091 0.100 -0.903 0.367 -0.091 -0.050
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Clothes_RF ~~
## Clths_RF:Slf_O 15.714 1.417 11.092 0.000 15.714 0.935
## Temp_True -0.816 0.501 -1.627 0.104 -0.816 -0.116
## Intoxicated 0.333 0.194 1.717 0.086 0.333 0.121
## Age -4.207 0.952 -4.421 0.000 -4.207 -0.376
## BMI -2.039 0.682 -2.989 0.003 -2.039 -0.207
## Race -0.392 0.275 -1.426 0.154 -0.392 -0.112
## Sex_Orient -0.099 0.100 -0.995 0.319 -0.099 -0.066
## Clothes_RF:Self_Obj ~~
## Temp_True -3.678 2.188 -1.681 0.093 -3.678 -0.118
## Intoxicated 1.558 0.865 1.801 0.072 1.558 0.128
## Age -18.181 3.911 -4.649 0.000 -18.181 -0.367
## BMI -8.281 3.082 -2.687 0.007 -8.281 -0.189
## Race -1.659 1.173 -1.415 0.157 -1.659 -0.107
## Sex_Orient -0.713 0.439 -1.623 0.105 -0.713 -0.107
## Temp_True ~~
## Intoxicated 0.155 0.363 0.426 0.670 0.155 0.031
## Age 5.188 1.506 3.445 0.001 5.188 0.251
## BMI 2.687 1.197 2.245 0.025 2.687 0.147
## Race 1.097 0.450 2.439 0.015 1.097 0.170
## Sex_Orient 0.895 0.179 4.989 0.000 0.895 0.323
## Intoxicated ~~
## Age -0.903 0.517 -1.746 0.081 -0.903 -0.112
## BMI -0.282 0.507 -0.556 0.578 -0.282 -0.040
## Race -0.407 0.166 -2.451 0.014 -0.407 -0.161
## Sex_Orient 0.055 0.077 0.709 0.478 0.055 0.050
## Age ~~
## BMI 6.841 2.039 3.356 0.001 6.841 0.235
## Race 0.629 0.890 0.706 0.480 0.629 0.061
## Sex_Orient -0.014 0.244 -0.057 0.955 -0.014 -0.003
## BMI ~~
## Race 1.358 0.698 1.946 0.052 1.358 0.149
## Sex_Orient 0.293 0.342 0.857 0.391 0.293 0.075
## Race ~~
## Sex_Orient 0.102 0.104 0.981 0.326 0.102 0.074
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Slf_Obj (S_O.) 3.746 0.068 55.059 0.000 3.746 4.026
## .Cold 4.725 1.542 3.064 0.002 4.725 3.394
## Clth_RF 3.059 0.142 21.492 0.000 3.059 1.572
## C_RF:S_ 12.026 0.631 19.054 0.000 12.026 1.393
## Temp_Tr 52.508 0.263 199.322 0.000 52.508 14.576
## Intxctd 2.296 0.103 22.356 0.000 2.296 1.630
## Age 22.268 0.416 53.571 0.000 22.268 3.875
## BMI 24.350 0.371 65.658 0.000 24.350 4.801
## Race 2.164 0.130 16.672 0.000 2.164 1.206
## Sx_Ornt 1.434 0.054 26.480 0.000 1.434 1.862
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Slf_Obj (S_O.) 0.866 0.082 10.522 0.000 0.866 1.000
## .Cold 1.684 0.151 11.133 0.000 1.684 0.869
## Clth_RF 3.788 0.332 11.401 0.000 3.788 1.000
## C_RF:S_ 74.489 7.257 10.264 0.000 74.489 1.000
## Temp_Tr 12.977 0.561 23.122 0.000 12.977 1.000
## Intxctd 1.983 0.207 9.591 0.000 1.983 1.000
## Age 33.020 7.506 4.399 0.000 33.020 1.000
## BMI 25.719 3.383 7.603 0.000 25.719 1.000
## Race 3.219 0.381 8.440 0.000 3.219 1.000
## Sx_Ornt 0.593 0.081 7.334 0.000 0.593 1.000
##
## R-Square:
## Estimate
## Cold 0.131
Because much of the data are ordinal, it is also worth fitting ordered logistic regressions. The first set of estimates uses conventional standard errors; the second uses HC1 standard errors (there is no projection matrix for polr(), so HC2 and above cannot be derived).
Mod1 <- polr(as.factor(Cold) ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True, data)
Mod2 <- polr(as.factor(Cold) ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True + Intoxicated + Drinks + Age + BMI, data)
ModJ <- polr(as.factor(Cold) ~ Clothes_RF * Self_Obj, data)
ModD <- polr(as.factor(Cold) ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True + Intoxicated + Drinks + Age + BMI + Race + Sex_Orient, data)
summary(Mod1); coeftest(Mod1)
##
## Re-fitting to get Hessian
## Call:
## polr(formula = as.factor(Cold) ~ Clothes_RF + Self_Obj * Clothes_RF +
## Temp_True, data = data)
##
## Coefficients:
## Value Std. Error t value
## Clothes_RF 0.71373 0.28409 2.512
## Self_Obj 0.48606 0.24914 1.951
## Temp_True -0.06826 0.03694 -1.848
## Clothes_RF:Self_Obj -0.16947 0.07359 -2.303
##
## Intercepts:
## Value Std. Error t value
## 1|2 -3.8714 2.2076 -1.7537
## 2|3 -2.9031 2.1987 -1.3203
## 3|4 -1.3239 2.1902 -0.6044
## 4|5 -0.1296 2.1870 -0.0593
## 5|6 0.7559 2.1916 0.3449
##
## Residual Deviance: 602.7198
## AIC: 620.7198
## (1 observation deleted due to missingness)
##
## Re-fitting to get Hessian
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## Clothes_RF 0.713730 0.284093 2.5123 0.01289 *
## Self_Obj 0.486060 0.249135 1.9510 0.05264 .
## Temp_True -0.068263 0.036943 -1.8478 0.06630 .
## Clothes_RF:Self_Obj -0.169473 0.073592 -2.3029 0.02245 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
summary(Mod2); coeftest(Mod2)
##
## Re-fitting to get Hessian
## Call:
## polr(formula = as.factor(Cold) ~ Clothes_RF + Self_Obj * Clothes_RF +
## Temp_True + Intoxicated + Drinks + Age + BMI, data = data)
##
## Coefficients:
## Value Std. Error t value
## Clothes_RF 0.788397 0.31185 2.52810
## Self_Obj 0.559430 0.26570 2.10549
## Temp_True -0.064680 0.03921 -1.64942
## Intoxicated -0.035782 0.13127 -0.27259
## Drinks -0.090753 0.07791 -1.16491
## Age -0.002321 0.02623 -0.08848
## BMI 0.001010 0.02819 0.03581
## Clothes_RF:Self_Obj -0.188858 0.07947 -2.37634
##
## Intercepts:
## Value Std. Error t value
## 1|2 -3.7346 2.4047 -1.5530
## 2|3 -2.7874 2.3951 -1.1638
## 3|4 -1.2315 2.3888 -0.5155
## 4|5 -0.0317 2.3864 -0.0133
## 5|6 0.8740 2.3913 0.3655
##
## Residual Deviance: 585.9713
## AIC: 611.9713
## (6 observations deleted due to missingness)
##
## Re-fitting to get Hessian
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## Clothes_RF 0.7883970 0.3118531 2.5281 0.01239 *
## Self_Obj 0.5594297 0.2657006 2.1055 0.03674 *
## Temp_True -0.0646799 0.0392137 -1.6494 0.10093
## Intoxicated -0.0357823 0.1312672 -0.2726 0.78550
## Drinks -0.0907534 0.0779058 -1.1649 0.24571
## Age -0.0023212 0.0262329 -0.0885 0.92960
## BMI 0.0010096 0.0281894 0.0358 0.97147
## Clothes_RF:Self_Obj -0.1888584 0.0794747 -2.3763 0.01861 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
summary(ModJ); coeftest(ModJ)
##
## Re-fitting to get Hessian
## Call:
## polr(formula = as.factor(Cold) ~ Clothes_RF * Self_Obj, data = data)
##
## Coefficients:
## Value Std. Error t value
## Clothes_RF 0.7394 0.28406 2.603
## Self_Obj 0.5029 0.25003 2.012
## Clothes_RF:Self_Obj -0.1712 0.07383 -2.319
##
## Intercepts:
## Value Std. Error t value
## 1|2 -0.1372 0.8952 -0.1533
## 2|3 0.8220 0.8863 0.9274
## 3|4 2.3816 0.8996 2.6474
## 4|5 3.5581 0.9190 3.8718
## 5|6 4.4372 0.9394 4.7234
##
## Residual Deviance: 606.1477
## AIC: 622.1477
## (1 observation deleted due to missingness)
##
## Re-fitting to get Hessian
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## Clothes_RF 0.739374 0.284064 2.6028 0.01003 *
## Self_Obj 0.502945 0.250025 2.0116 0.04577 *
## Clothes_RF:Self_Obj -0.171185 0.073831 -2.3186 0.02155 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
summary(ModD); coeftest(ModD)
##
## Re-fitting to get Hessian
## Call:
## polr(formula = as.factor(Cold) ~ Clothes_RF + Self_Obj * Clothes_RF +
## Temp_True + Intoxicated + Drinks + Age + BMI + Race + Sex_Orient,
## data = data)
##
## Coefficients:
## Value Std. Error t value
## Clothes_RF 0.786627 0.32851 2.3945
## Self_Obj 0.542229 0.28000 1.9365
## Temp_True -0.069676 0.04382 -1.5900
## Intoxicated -0.108346 0.13819 -0.7840
## Drinks -0.056150 0.08239 -0.6815
## Age -0.002867 0.02787 -0.1029
## BMI -0.003549 0.03001 -0.1183
## Race -0.123349 0.08962 -1.3764
## Sex_Orient -0.031082 0.18660 -0.1666
## Clothes_RF:Self_Obj -0.193764 0.08307 -2.3325
##
## Intercepts:
## Value Std. Error t value
## 1|2 -4.6963 2.4964 -1.8812
## 2|3 -3.6845 2.4838 -1.4834
## 3|4 -2.0990 2.4742 -0.8483
## 4|5 -0.7956 2.4690 -0.3222
## 5|6 0.1107 2.4733 0.0447
##
## Residual Deviance: 525.6153
## AIC: 555.6153
## (22 observations deleted due to missingness)
##
## Re-fitting to get Hessian
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## Clothes_RF 0.7866268 0.3285090 2.3945 0.01788 *
## Self_Obj 0.5422287 0.2799999 1.9365 0.05468 .
## Temp_True -0.0696758 0.0438209 -1.5900 0.11394
## Intoxicated -0.1083457 0.1381927 -0.7840 0.43427
## Drinks -0.0561498 0.0823891 -0.6815 0.49659
## Age -0.0028674 0.0278734 -0.1029 0.91820
## BMI -0.0035492 0.0300089 -0.1183 0.90601
## Race -0.1233490 0.0896195 -1.3764 0.17076
## Sex_Orient -0.0310823 0.1865985 -0.1666 0.86793
## Clothes_RF:Self_Obj -0.1937635 0.0830715 -2.3325 0.02100 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
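For interpretation (my addition, not part of the original analysis), the proportional-odds coefficients above are on the log-odds scale and can be exponentiated into odds ratios with profile-likelihood confidence intervals; a minimal sketch using the focal model:
# My addition: odds ratios and profile CIs for the focal polr model (Mod1).
exp(cbind(OR = coef(Mod1), confint(Mod1)))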
coeftest(Mod1, vcovCL(Mod1, type = "HC1"))
##
## Re-fitting to get Hessian
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## Clothes_RF 0.713730 0.293388 2.4327 0.01598 *
## Self_Obj 0.486060 0.250636 1.9393 0.05405 .
## Temp_True -0.068263 0.037595 -1.8157 0.07110 .
## Clothes_RF:Self_Obj -0.169473 0.077894 -2.1757 0.03090 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
coeftest(Mod2, vcovCL(Mod2, type = "HC1"))
##
## Re-fitting to get Hessian
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## Clothes_RF 0.7883970 0.3502760 2.2508 0.02570 *
## Self_Obj 0.5594297 0.2846196 1.9655 0.05100 .
## Temp_True -0.0646799 0.0423076 -1.5288 0.12819
## Intoxicated -0.0357823 0.1401274 -0.2554 0.79876
## Drinks -0.0907534 0.0770258 -1.1782 0.24037
## Age -0.0023212 0.0237427 -0.0978 0.92224
## BMI 0.0010096 0.0328710 0.0307 0.97554
## Clothes_RF:Self_Obj -0.1888584 0.0908600 -2.0786 0.03918 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
coeftest(ModJ, vcovCL(ModJ, type = "HC1"))
##
## Re-fitting to get Hessian
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## Clothes_RF 0.739374 0.296746 2.4916 0.01363 *
## Self_Obj 0.502945 0.257779 1.9511 0.05262 .
## Clothes_RF:Self_Obj -0.171185 0.079265 -2.1597 0.03214 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
coeftest(ModD, vcovCL(ModD, type = "HC1"))
##
## Re-fitting to get Hessian
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## Clothes_RF 0.7866268 0.3557644 2.2111 0.02854 *
## Self_Obj 0.5422287 0.3035273 1.7864 0.07605 .
## Temp_True -0.0696758 0.0493050 -1.4132 0.15968
## Intoxicated -0.1083457 0.1407174 -0.7700 0.44254
## Drinks -0.0561498 0.0795020 -0.7063 0.48112
## Age -0.0028674 0.0256614 -0.1117 0.91118
## BMI -0.0035492 0.0366541 -0.0968 0.92299
## Race -0.1233490 0.1148278 -1.0742 0.28445
## Sex_Orient -0.0310823 0.1651994 -0.1882 0.85101
## Clothes_RF:Self_Obj -0.1937635 0.0923089 -2.0991 0.03749 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Results will tend to be reproducible with the same data and virtually identical specifications. That holds even for results attributable to noise and error, and when there is no heteroskedasticity, non-normality, or outliers to worry about, robust methods will not change the picture. What matters is whether the data are sane in the first place. To that end, negative controls are often used, and in this dataset the negative controls do not make sense. Consider the prediction of subjective cold from the PROCESS model, the various regression models, or the ordinal regressions corresponding to the focal model: they tended to show a larger coefficient for the self-objectification interaction than for the actual temperature. Does it seem plausible that the actual temperature is less related to feeling cold than self-objectification's interaction with skin exposure? That explanation is novel and unparsimonious on its face, and the evidence in its favor is a set of analyses in which the Bayes factors are anecdotal.
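To make the plausibility point concrete and to illustrate the Bayes-factor claim (these checks are my addition, not the authors' analyses), one can inspect the raw bivariate associations and compute an approximate Bayes factor for the interaction via the standard BIC approximation; a minimal sketch, using the data and the Mod1P fit from above:
# My addition: pairwise correlations among feeling cold, true temperature,
# self-objectification, and skin exposure.
round(cor(data[, c("Cold", "Temp_True", "Self_Obj", "Clothes_RF")], use = "pairwise.complete.obs"), 3)
# My addition: approximate BF10 for the interaction in the focal model via
# BF10 ~ exp((BIC_null - BIC_alt)/2); Mod1P_noint is my own helper fit that drops
# only the interaction term. BF10 between 1/3 and 3 is conventionally "anecdotal".
Mod1P_noint <- lm(Cold ~ Clothes_RF + Self_Obj + Temp_True, data)
exp((BIC(Mod1P_noint) - BIC(Mod1P)) / 2)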
Another negative control comes from an unused variable asking what temperature participants thought it actually was, Temp_Think. Comparing models with and without it, the temperature people thought it was inflated the test statistic for the interaction while not itself being a significant main-effect predictor of feeling cold. This suggests serious issues with the specification of the model, and these will not necessarily show up in the VIFs. Moreover, if the perceived actual temperature is used as the DV, the focal result is not significant. This is perhaps not odd, since the perceived actual temperature seems to have few bad responses and thus evades most of the concerns Felig voiced about those responses here: https://archive.ph/QSL17. More importantly, the result makes clean theoretical sense when the model is run with perceived temperature as the DV. For one, the real temperature is finally predictive and the scaled perceived temperature ('Cold') is not. The write-in format for the real temperature appears to be much better than the pick-a-number format, and why shouldn't it be? It should suffer less from scaling issues, have more room to change granularly, and tether respondents' answers to a phenomenon they can actually base in reality rather than non-ergodicity. This model is also far more predictive than the original one, its result is very different, and it does not require invoking novel phenomena; there is a strong case for preferring it, and a strong onus on the authors of the original paper to explain why it was excluded when it changes the results so drastically and is more theoretically acceptable than the other DV. Any conclusions about physiology are unwarranted until the results for the perceived actual temperature are explained and, in a replication, shown to be consistent with the original conclusions.
ModS <- lm(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True, data); summary(ModS); vif(ModS)
##
## Call:
## lm(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True,
## data = data)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.93764 -0.74787 -0.04511 0.76645 2.81914
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.50114 1.69073 2.662 0.00846 **
## Clothes_RF 0.49921 0.20794 2.401 0.01738 *
## Self_Obj 0.33919 0.18622 1.821 0.07018 .
## Temp_True -0.04859 0.02764 -1.758 0.08043 .
## Clothes_RF:Self_Obj -0.11332 0.05368 -2.111 0.03616 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.337 on 181 degrees of freedom
## (1 observation deleted due to missingness)
## Multiple R-squared: 0.0604, Adjusted R-squared: 0.03964
## F-statistic: 2.909 on 4 and 181 DF, p-value: 0.02301
## Clothes_RF Self_Obj Temp_True Clothes_RF:Self_Obj
## 16.917980 3.139688 1.032530 22.237854
ModT <- lm(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True + Temp_Think, data); summary(ModT); vif(ModT)
##
## Call:
## lm(formula = Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True +
## Temp_Think, data = data)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.9177 -0.7814 0.0022 0.8376 2.9055
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 5.59111 1.64833 3.392 0.000869 ***
## Clothes_RF 0.59311 0.21049 2.818 0.005426 **
## Self_Obj 0.40161 0.18236 2.202 0.029034 *
## Temp_True -0.04202 0.02998 -1.402 0.162926
## Temp_Think -0.02829 0.01568 -1.804 0.073063 .
## Clothes_RF:Self_Obj -0.14319 0.05388 -2.658 0.008640 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.278 on 165 degrees of freedom
## (16 observations deleted due to missingness)
## Multiple R-squared: 0.1007, Adjusted R-squared: 0.0734
## F-statistic: 3.693 on 5 and 165 DF, p-value: 0.003401
## Clothes_RF Self_Obj Temp_True Temp_Think
## 17.455445 2.971367 1.248022 1.243087
## Clothes_RF:Self_Obj
## 22.598327
ModC <- lm(Temp_Think ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True, data); summary(ModC); vif(ModC)
##
## Call:
## lm(formula = Temp_Think ~ Clothes_RF + Self_Obj * Clothes_RF +
## Temp_True, data = data)
##
## Residuals:
## Min 1Q Median 3Q Max
## -31.707 -3.491 0.124 3.800 22.104
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.30202 8.11081 1.393 0.165
## Clothes_RF 0.03544 1.04176 0.034 0.973
## Self_Obj 0.44167 0.90192 0.490 0.625
## Temp_True 0.81613 0.13419 6.082 7.92e-09 ***
## Clothes_RF:Self_Obj 0.09755 0.26654 0.366 0.715
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 6.327 on 166 degrees of freedom
## (16 observations deleted due to missingness)
## Multiple R-squared: 0.1956, Adjusted R-squared: 0.1762
## F-statistic: 10.09 on 4 and 166 DF, p-value: 2.47e-07
## Clothes_RF Self_Obj Temp_True Clothes_RF:Self_Obj
## 17.455324 2.967081 1.020607 22.580107
ModCo <- lm(Temp_Think ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True + Cold, data); summary(ModCo); vif(ModCo)
##
## Call:
## lm(formula = Temp_Think ~ Clothes_RF + Self_Obj * Clothes_RF +
## Temp_True + Cold, data = data)
##
## Residuals:
## Min 1Q Median 3Q Max
## -31.674 -3.023 0.316 3.671 21.832
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 14.906004 8.300310 1.796 0.0744 .
## Clothes_RF 0.440256 1.058811 0.416 0.6781
## Self_Obj 0.707697 0.907911 0.779 0.4368
## Temp_True 0.771611 0.135554 5.692 5.6e-08 ***
## Cold -0.683690 0.378996 -1.804 0.0731 .
## Clothes_RF:Self_Obj -0.002233 0.270468 -0.008 0.9934
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 6.284 on 165 degrees of freedom
## (16 observations deleted due to missingness)
## Multiple R-squared: 0.2111, Adjusted R-squared: 0.1872
## F-statistic: 8.831 on 5 and 165 DF, p-value: 1.907e-07
## Clothes_RF Self_Obj Temp_True Cold
## 18.276291 3.047484 1.055588 1.090416
## Clothes_RF:Self_Obj
## 23.565744
I attempted to reproduce the results contained in Felig et al.'s (2021) paper. This was possible, but the results do not seem meaningful. To a degree, this is to be expected: their data were altogether too limited for discovering interactions, and the idea that a suppressor effect existed in a scenario apparently dominated by weak effects (if any), with little real theoretical reason to suspect anything was there, made it probable that nothing would work out. It is unfortunate when this happens, but realistically, in social psychology it is probably the norm. Given that the interaction is predicated entirely on an inadmissible filtering procedure, the result should be considered extremely tenuous at best. Likewise, because the model requires us to posit that the true temperature was a worse predictor of being cold than self-objectification (despite a higher bivariate association), the theory should be regarded as implausible. The most likely reason for these inconsistencies is that the sample was too small and the partialing fallacy is at play. Under several standard priors, the Bayes factors are anecdotal anyway.
At present, there are few reasons to think of the results of Felig et al. (2021) as meaningful, let alone reliable. Given their limited statistical power, lack of informative theory, strange inconsistencies, and inappropriately calculated power analysis, the implausibility of their results is severe. It is good to remember that detecting interaction effects, suppressor or otherwise, requires very large samples, not trifling samples that lack the power needed to detect more typical small main effects, let alone to statistically justify the inclusion of any covariates. The churlish attitude of the lead author with regard to criticism of this study may also reflect an increased likelihood of a QRP explanation, but this is purely intuitive, as I know of no work establishing surliness, a tendency to bulverize strangers, or partaking in conspiracist ideation as predictors of research fraud. As a final note, if a power analysis is not actually presented in a paper, its power claims should not be trusted: there are various assumptions undergirding power analyses (such as reliability, which was altogether neglected here or generalized from before a filtering procedure that affects it, and assumed effect sizes that are unrealistically large, like f2 = 0.15) whose proper handling may drastically increase the sample size required to achieve a given level of power. These authors somehow calculated that they had 80% power to detect a moderate effect (approximately one and a half times the size of their total observed effects before mean centering and three times as large after, or five to eight times the size of the interaction alone) and even claimed to have oversampled to achieve it.
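For reference, and only as a sketch of what a conventional calculation implies rather than the authors' actual procedure, the pwr package gives the total N a standard power analysis would require for f2 = 0.15 with the focal model's four predictors:
# My addition: implied N for 80% power at f2 = 0.15 with four predictors
# (install.packages("pwr") if it is not already installed).
library(pwr)
pwr_out <- pwr.f2.test(u = 4, f2 = 0.15, sig.level = 0.05, power = 0.80)
ceiling(pwr_out$v + 4 + 1)  # approximate total N = denominator df + predictors + 1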
This is done in the same order as above. I did not include all of the analyses down here because even a reduced set took too much time to run, so I stopped adding more. Mean centering was done with the scale() function and with subtraction. I opted not to run these in the rendered document because the results were the same as the ones above and they added a lot of compute time; if you want to run them, go for it. The specification curve analysis (run with the specr package) was also not meaningful and took too long to compute.
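As a minimal sketch of the centering step (which variables were centered is an assumption on my part; the actual 'centered' data frame is constructed elsewhere), the two routes are equivalent:
# My sketch: both centering routes applied to Self_Obj for illustration.
centered_sketch <- data
centered_sketch$Self_Obj_c1 <- as.numeric(scale(data$Self_Obj, scale = FALSE))   # via scale()
centered_sketch$Self_Obj_c2 <- data$Self_Obj - mean(data$Self_Obj, na.rm = TRUE) # via subtraction
all.equal(centered_sketch$Self_Obj_c1, centered_sketch$Self_Obj_c2)              # should match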
Mod1 <- lm(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True, centered)
Mod2 <- lm(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True + Intoxicated + Drinks + Age + BMI, centered)
ModJ <- lm(Cold ~ Clothes_RF * Self_Obj, centered)
ModD <- lm(Cold ~ Clothes_RF + Self_Obj * Clothes_RF + Temp_True + Intoxicated + Drinks + Age + BMI + Race + Sex_Orient, centered)
summary(Mod1); sum(partial_f2(Mod1)[-1]); partial_f2(Mod1)
summary(Mod2); sum(partial_f2(Mod2)[-1]); partial_f2(Mod2)
summary(ModJ); sum(partial_f2(ModJ)[-1]); partial_f2(ModJ)
summary(ModD); sum(partial_f2(ModD)[-1]); partial_f2(ModD)
vif(Mod1)
vif(Mod2)
vif(ModJ)
vif(ModD)
set.seed(1)
BMod1 <- lm.boot(Mod1, 10000)
BMod2 <- lm.boot(Mod2, 10000)
BModJ <- lm.boot(ModJ, 10000)
BModD <- lm.boot(ModD, 10000)
summary(BMod1)
summary(BMod2)
summary(BModJ)
summary(BModD)
set.seed(1)
LavaanMod1 = '
Cold ~ c1*Clothes_RF+c2*Self_Obj+c3*Clothes_RF:Self_Obj + g1*Temp_True
Self_Obj ~ Self_Obj.mean*1
Self_Obj ~~ Self_Obj.var*Self_Obj'
LMod1Fit = sem(LavaanMod1, centered, se = "bootstrap", bootstrap = 10000, missing = "pairwise", fixed.x = F)
LavaanMod2 = '
Cold ~ c1*Clothes_RF+c2*Self_Obj+c3*Clothes_RF:Self_Obj + g1*Temp_True + g2*Intoxicated + g3*Age + g4*BMI
Self_Obj ~ Self_Obj.mean*1
Self_Obj ~~ Self_Obj.var*Self_Obj'
LMod2Fit = sem(LavaanMod2, centered, se = "bootstrap", bootstrap = 10000, missing = "pairwise", fixed.x = F)
LavaanModJ = '
Cold ~ c1*Clothes_RF+c2*Self_Obj+c3*Clothes_RF:Self_Obj
Self_Obj ~ Self_Obj.mean*1
Self_Obj ~~ Self_Obj.var*Self_Obj'
LModJFit = sem(LavaanModJ, centered, se = "bootstrap", bootstrap = 10000, missing = "pairwise", fixed.x = F)
LavaanModD = '
Cold ~ c1*Clothes_RF+c2*Self_Obj+c3*Clothes_RF:Self_Obj + g1*Temp_True + g2*Intoxicated + g3*Age + g4*BMI + g5*Race + g6*Sex_Orient
Self_Obj ~ Self_Obj.mean*1
Self_Obj ~~ Self_Obj.var*Self_Obj'
LModDFit = sem(LavaanModD, centered, se = "bootstrap", bootstrap = 10000, missing = "pairwise", fixed.x = F)
summary(LMod1Fit, fit = F, stand = T, rsquare = T)
summary(LMod2Fit, fit = F, stand = T, rsquare = T)
summary(LModJFit, fit = F, stand = T, rsquare = T)
summary(LModDFit, fit = F, stand = T, rsquare = T)
JN1 <- johnson_neyman(Mod1, pred = Clothes_RF, modx = Self_Obj, alpha=0.05, plot = T)
JP <- JN1$plot
JPT = JP + xlab("Self-objectification") + ylab("Skin Exposure") +
ggtitle("Conditional Effects of Skin Exposure at Levels of Self-Objectification") +
theme(legend.position = c(0.75, 0.75), legend.background = element_blank(), plot.title = element_text(hjust = 0.5))
JPT
Felig, R. N., Jordan, J. A., Shepard, S. L., Courtney, E. P., Goldenberg, J. L., & Roberts, T.-A. (2021). When looking ‘hot’ means not feeling cold: Evidence that self-objectification inhibits feelings of being cold. British Journal of Social Psychology. https://doi.org/10.1111/bjso.12489
Iacobucci, D., Schneider, M. J., Popovich, D. L., & Bakamitsos, G. A. (2016). Mean centering helps alleviate “micro” but not “macro” multicollinearity. Behavior Research Methods, 48(4), 1308–1317. https://doi.org/10.3758/s13428-015-0624-x
The data are as they were available via OSF on November 10th, 2021 at 18:35, GMT-6. In case the authors update their data, I have provided this original data below. “Unfdata” is the data prior to filtering on the variable filter_$, whereas “data” has been filtered.
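The relationship between the two objects can be checked with a filter like the one below; this is a sketch on my part, assuming filter_$ is the usual 0/1 retention indicator SPSS exports, which is not guaranteed.
# My assumption: filter_$ flags retained rows with 1; backticks are needed because
# of the $ in the column name.
data_refiltered <- subset(Unfdata, `filter_$` == 1)
nrow(Unfdata); nrow(data_refiltered); nrow(data)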
datatable(Unfdata, extensions = c("Buttons", "FixedColumns"), options = list(dom = 'Bfrtip', buttons = c('copy', 'csv', 'print'), scrollX = T, fixedColumns = list(leftColumns = 2)))
datatable(data, extensions = c("Buttons", "FixedColumns"), options = list(dom = 'Bfrtip', buttons = c('copy', 'csv', 'print'), scrollX = T, fixedColumns = list(leftColumns = 2)))