suppressWarnings({
  library(githubinstall)
  library(evaluate)
  library(haven)
  library(tidyverse)
  devtools::install_local("/Users/jihoonchoi/Documents/GitHub/konfound", force = TRUE)
  library(konfound)
})
── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
✔ dplyr 1.1.4 ✔ readr 2.1.5
✔ forcats 1.0.0 ✔ stringr 1.5.1
✔ ggplot2 3.5.1 ✔ tibble 3.2.1
✔ lubridate 1.9.3 ✔ tidyr 1.3.1
✔ purrr 1.0.2
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag() masks stats::lag()
ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
── R CMD build ─────────────────────────────────────────────────────────────────
* checking for file ‘/private/var/folders/gl/9nbm4d7s0zv_7r1wcdpj_kv80000gn/T/RtmpxjKhJl/file3db83572a77f/konfound/DESCRIPTION’ ... OK
* preparing ‘konfound’:
* checking DESCRIPTION meta-information ... OK
* checking for LF line-endings in source and make files and shell scripts
* checking for empty or unneeded directories
NB: this package now depends on R (>= 3.5.0)
WARNING: Added dependency on R >= 3.5.0 because serialized objects in
serialize/load version 3 cannot be read in older versions of R.
File(s) containing such objects:
‘konfound/data/binary_dummy_data.rda’
* building ‘konfound_1.0.2.tar.gz’
Sensitivity analysis as described in Frank,
Maroulis, Duong, and Kelcey (2013) and in
Frank (2000).
For more information visit http://konfound-it.com.
Robustness of Inference to Replacement (RIR):
RIR = 4 + 280 = 284
Total RIR = Primary RIR in treatment row + Supplemental RIR in control row
Fragility = 3 + 4 = 7
Total Fragility = Primary Fragility in treatment row + Supplemental Fragility in control row
The table implied by the parameter estimates and sample sizes you entered:
User-entered Table:
            Fail  Success  Success_Rate
Control      276        4         1.43%
Treatment    276        4         1.43%
Total        552        8         1.43%
The reported log odds = -0.077, SE = 0.723, and p-value = 1.000.
Values in the table have been rounded to the nearest integer. This may cause
a small change to the estimated effect for the table.
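The log odds and standard error implied by a 2x2 table can be computed directly from the cell counts (log odds ratio with the usual Woolf standard error). A minimal sketch, not part of the konfound output, showing why the rounded table gives values slightly different from the reported estimates:
# Log odds and SE implied by the rounded User-entered Table
ctrl_fail <- 276; ctrl_succ <- 4
trt_fail  <- 276; trt_succ  <- 4
log((trt_succ / trt_fail) / (ctrl_succ / ctrl_fail))       # 0, vs. the reported -0.077
sqrt(1/ctrl_fail + 1/ctrl_succ + 1/trt_fail + 1/trt_succ)  # about 0.71, vs. the reported 0.723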
In terms of Fragility, to sustain an inference, transferring 3 data points from
treatment success to treatment failure is not enough on its own to change the inference.
One would also need to transfer 4 data points from control failure to control success,
as shown in the move from the User-entered Table to the Transfer Table.
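The two transfers can also be written out explicitly. A minimal sketch, not part of the konfound output, that rebuilds the Transfer Table from the User-entered Table:
# Apply the transfers described above to the user-entered counts
user_table <- matrix(c(276, 4, 276, 4), nrow = 2, byrow = TRUE,
                     dimnames = list(c("Control", "Treatment"), c("Fail", "Success")))
transfer_table <- user_table
transfer_table["Treatment", ] <- transfer_table["Treatment", ] + c(+3, -3)  # 3 successes -> failures
transfer_table["Control", ]   <- transfer_table["Control", ]   + c(-4, +4)  # 4 failures -> successes
transfer_table  # matches the Transfer Table shown below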
In terms of RIR, generating the 3 switches from treatment success to treatment failure
is equivalent to replacing 4 treatment success data points with data points for which
the probability of failure in the control sample (98.571%) applies.
In addition, generating the 4 switches from control failure to control success is
equivalent to replacing 280 control failure data points with data points for which
the probability of success in the control sample (1.429%) applies.
Therefore, the total RIR is 284.
RIR = Fragility/P(destination)
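As a check on this formula, both RIR components can be reproduced from the control-group success and failure rates in the User-entered Table. A minimal sketch, assuming results are rounded up to whole data points:
# RIR = Fragility / P(destination), using the control-group rates (4 successes, 276 failures)
p_success_control <- 4 / 280    # 1.429%
p_failure_control <- 276 / 280  # 98.571%
rir_treatment <- ceiling(3 / p_failure_control)  # 4 replacements in the treatment row
rir_control   <- ceiling(4 / p_success_control)  # 280 replacements in the control row
rir_treatment + rir_control                      # 284, the total RIR reported above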
The transfer of 7 data points yields the following table:
Transfer Table:
            Fail  Success  Success_Rate
Control      272        8         2.86%
Treatment    279        1         0.36%
Total        551        9         1.61%
The log odds (estimated effect) = -2.105, SE = 1.064, p-value = 0.048.
This is based on t = estimated effect/standard error
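The reported values can be reproduced from the Transfer Table itself. A minimal sketch, not part of the konfound output, assuming the two-sided p-value comes from a normal approximation to the test statistic:
# Effect, SE, and p-value implied by the Transfer Table
ctrl_fail <- 272; ctrl_succ <- 8
trt_fail  <- 279; trt_succ  <- 1
log_odds <- log((trt_succ / trt_fail) / (ctrl_succ / ctrl_fail))       # about -2.105
se       <- sqrt(1/ctrl_fail + 1/ctrl_succ + 1/trt_fail + 1/trt_succ)  # about 1.064
2 * pnorm(-abs(log_odds / se))                                         # about 0.048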
See Frank et al. (2021) for a description of the methods.
*Frank, K. A., *Lin, Q., *Maroulis, S., *Mueller, A. S., Xu, R., Rosenberg, J. M., ... & Zhang, L. (2021).
Hypothetical case replacement can be used to quantify the robustness of trial results. Journal of Clinical
Epidemiology, 134, 150-159.
*authors are listed alphabetically.
Accuracy of results increases with the number of decimals entered.
For other forms of output, run
?pkonfound and inspect the to_return argument
# The p-value in the printed output is calculated from the model estimates.
# The following p-value calculation is derived from the implied and transferred tables.

# Starting table
starting_table <- matrix(c(276, 4, 276, 4), nrow = 2, byrow = TRUE,
                         dimnames = list(c("Control", "Treatment"), c("Fail", "Success")))

# Final table
final_table <- matrix(c(272, 8, 279, 1), nrow = 2, byrow = TRUE,
                      dimnames = list(c("Control", "Treatment"), c("Fail", "Success")))

# Chi-square p-values
p_start_chi <- chisq.test(starting_table, correct = FALSE)$p.value
Warning in chisq.test(starting_table, correct = FALSE): Chi-squared
approximation may be incorrect
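The warning reflects the small expected counts in the Success column. For sparse tables like these, an exact test is a reasonable cross-check; a minimal sketch using base R's fisher.test() on the two tables defined above (not part of the konfound workflow):
# Exact p-values that avoid the chi-square approximation
p_start_fisher <- fisher.test(starting_table)$p.value
p_final_fisher <- fisher.test(final_table)$p.value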