Small Cell Counts in Logistic Regression

# Load required packages and install the local development version of konfound
suppressWarnings({
  library(githubinstall)
  library(evaluate)
  library(haven)
  library(tidyverse)
  devtools::install_local("/Users/jihoonchoi/Documents/GitHub/konfound", force = TRUE)
  library(konfound) })
── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
✔ dplyr     1.1.4     ✔ readr     2.1.5
✔ forcats   1.0.0     ✔ stringr   1.5.1
✔ ggplot2   3.5.1     ✔ tibble    3.2.1
✔ lubridate 1.9.3     ✔ tidyr     1.3.1
✔ purrr     1.0.2     
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag()    masks stats::lag()
ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors

── R CMD build ─────────────────────────────────────────────────────────────────
* checking for file ‘/private/var/folders/gl/9nbm4d7s0zv_7r1wcdpj_kv80000gn/T/RtmpxjKhJl/file3db83572a77f/konfound/DESCRIPTION’ ... OK
* preparing ‘konfound’:
* checking DESCRIPTION meta-information ... OK
* checking for LF line-endings in source and make files and shell scripts
* checking for empty or unneeded directories
  NB: this package now depends on R (>= 3.5.0)
  WARNING: Added dependency on R >= 3.5.0 because serialized objects in
  serialize/load version 3 cannot be read in older versions of R.
  File(s) containing such objects:
    ‘konfound/data/binary_dummy_data.rda’
* building ‘konfound_1.0.2.tar.gz’
Sensitivity analysis as described in Frank, 
Maroulis, Duong, and Kelcey (2013) and in 
Frank (2000).
For more information visit http://konfound-it.com.

Printed Output

pkonfound(-0.0768537, 0.7232168, 560, 2, 
          n_treat = 280, model_type = 'logistic')
Robustness of Inference to Replacement (RIR):
RIR = 4 + 280 = 284
Total RIR = Primary RIR in treatment row + Supplemental RIR in control row

Fragility = 3 + 4 = 7
Total Fragility = Primary Fragility in treatment row + Supplemental Fragility in control row

The table implied by the parameter estimates and sample sizes you entered:
User-entered Table:
          Fail Success Success_Rate
Control    276       4        1.43%
Treatment  276       4        1.43%
Total      552       8        1.43%

The reported log odds = -0.077, SE = 0.723, and p-value = 1.000. 
Values in the table have been rounded to the nearest integer. This may cause 
a small change to the estimated effect for the table.

In terms of Fragility, to sustain an inference, transferring 3 data points from
treatment success to treatment failure is not enough to change the inference.
One would also need to transfer 4 data points from control failure to control success
as shown, from the User-entered Table to the Transfer Table.

In terms of RIR, generating the 3 switches from treatment success to treatment failure
is equivalent to replacing 4 treatment success data points with data points for which
the probability of failure in the control sample (98.571%) applies.

In addition, generating the 4 switches from control failure to control success is
equivalent to replacing 280 control failure data points with data points for which
the probability of success in the control sample (1.429%) applies.

Therefore, the total RIR is 284.

RIR = Fragility/P(destination)

The transfer of 7 data points yields the following table:
Transfer Table:
          Fail Success Success_Rate
Control    272       8        2.86%
Treatment  279       1        0.36%
Total      551       9        1.61%

The log odds (estimated effect) = -2.105, SE = 1.064, p-value = 0.048.
This is based on t = estimated effect/standard error

See Frank et al. (2021) for a description of the methods.

*Frank, K. A., *Lin, Q., *Maroulis, S., *Mueller, A. S., Xu, R., Rosenberg, J. M., ... & Zhang, L. (2021).
Hypothetical case replacement can be used to quantify the robustness of trial results. Journal of Clinical
Epidemiology, 134, 150-159.
*authors are listed alphabetically.

Accuracy of results increases with the number of decimals entered.
For other forms of output, run
          ?pkonfound and inspect the to_return argument
For models fit in R, consider use of konfound().
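
The printed output reports RIR = Fragility/P(destination) for each row. As a check on that arithmetic, the sketch below reproduces the primary, supplemental, and total RIR from the fragility counts and the control-row rates in the User-entered Table. Rounding up with ceiling() is an assumption about how the package arrives at whole data points, not the package's own code.

# Sketch: RIR = Fragility / P(destination), using control-row rates from the User-entered Table
p_fail_control    <- 276 / 280   # probability of failure in the control sample (98.571%)
p_success_control <- 4 / 280     # probability of success in the control sample (1.429%)

fragility_primary      <- 3      # switches needed in the treatment row
fragility_supplemental <- 4      # switches needed in the control row

rir_primary      <- ceiling(fragility_primary / p_fail_control)          # 4
rir_supplemental <- ceiling(fragility_supplemental / p_success_control)  # 280
rir_primary + rir_supplemental                                           # total RIR = 284
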
pkonfound(-0.0768537, 0.7232168, 560, 2, 
          n_treat = 280, model_type = 'logistic', to_return = "raw_output")
For interpretation, check out to_return = 'print'.
$RIR_primary
[1] 4

$RIR_supplemental
[1] 280

$fragility_primary
[1] 3

$fragility_supplemental
[1] 4

$starting_table
          Fail Success
Control    276       4
Treatment  276       4

$final_table
          Fail Success
Control    272       8
Treatment  279       1

$user_SE
[1] 0.7232168

$analysis_SE
[1] 0.7232168

$needtworows
[1] TRUE
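
The effect reported for the Transfer Table (log odds = -2.105, SE = 1.064, p-value = 0.048) can be recovered directly from $final_table with the standard 2x2 log odds ratio and its large-sample standard error. This is a sketch of that arithmetic, not the package's internal code; the two-sided p-value here uses a normal approximation to the reported t-ratio.

# Sketch: recompute the Transfer Table effect from the 2x2 cell counts in $final_table
ctrl_fail  <- 272; ctrl_succ  <- 8
treat_fail <- 279; treat_succ <- 1

log_odds <- log((treat_succ / treat_fail) / (ctrl_succ / ctrl_fail))        # -2.105
se       <- sqrt(1/ctrl_fail + 1/ctrl_succ + 1/treat_fail + 1/treat_succ)   #  1.064
2 * pnorm(-abs(log_odds / se))                                              #  0.048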

Additional p-value Calculation

# The p-value in the printed output is calculated from the model estimates.
# The p-values below are derived instead from the implied (starting) and transferred (final) tables.

# Starting table
starting_table <- matrix(c(276, 4, 276, 4), nrow = 2, byrow = TRUE,
                         dimnames = list(c("Control", "Treatment"), c("Fail", "Success")))

# Final table
final_table <- matrix(c(272, 8, 279, 1), nrow = 2, byrow = TRUE,
                      dimnames = list(c("Control", "Treatment"), c("Fail", "Success")))

# Chi-square p-values
p_start_chi <- chisq.test(starting_table, correct = FALSE)$p.value
Warning in chisq.test(starting_table, correct = FALSE): Chi-squared
approximation may be incorrect
p_final_chi <- chisq.test(final_table, correct = FALSE)$p.value
Warning in chisq.test(final_table, correct = FALSE): Chi-squared approximation
may be incorrect
# Fisher's exact test p-values
p_start_fisher <- fisher.test(starting_table)$p.value
p_final_fisher <- fisher.test(final_table)$p.value

# Print the results
cat("Chi-square p-values:\n")
Chi-square p-values:
cat(p_start_chi, p_final_chi)
1 0.0186571
cat("Fisher p-values:\n")
Fisher p-values:
cat(p_start_fisher, p_final_fisher)
1 0.03756265
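
The chi-squared warnings above are expected with cell counts this small: under independence the expected number of successes is only 4 per row for the starting table (280 × 8 / 560) and 4.5 per row for the final table (280 × 9 / 560), well below the usual rule-of-thumb of 5, which is why Fisher's exact test is reported alongside. A quick check, using the $expected component returned by chisq.test():

# Expected counts under independence; small Success-column values explain the warnings
suppressWarnings(chisq.test(starting_table, correct = FALSE)$expected)
suppressWarnings(chisq.test(final_table, correct = FALSE)$expected)

Both the exact and approximate p-values for the final (Transfer) table fall below 0.05, consistent with the model-based p-value of 0.048 in the printed output.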