```r
suppressWarnings({
  library(githubinstall)
  library(evaluate)
  library(tidyverse)
  gh_install_packages("konfound", ref = "newitcv_2by2_update", force = TRUE)
  library(konfound)
})
```
# Case 2: Non-linear

## Case 2-1: changeSE = TRUE, Invalidate

```r
pkonfound(-0.3, 0.01, 5000, n_covariates = 0, alpha = .05, tails = 2,
          nu = 0, n_treat = 2500, switch_trm = TRUE,
          model_type = "logistic", to_return = "print")
```
Robustness of Inference to Replacement (RIR):
RIR = 220
Fragility = 118
The table implied by the parameter estimates and sample sizes you entered:
Fail Success Success_Rate
Control 1157 1343 53.72%
Treatment 1344 1156 46.24%
Total 2501 2499 49.98%
The reported log odds = -0.300, SE = 0.010, and p-value = 0.000.
The SE has been adjusted to 0.057 to generate real numbers in the
implied table for which the p-value would be 0.000. Numbers in
the table cells have been rounded to integers, which may slightly
alter the estimated effect from the value originally entered.
To invalidate the inference that the effect is different from 0
(alpha = 0.050) one would need to replace 220 (16.369%) treatment failure
data points with data points for which the probability of failure in the control
group (46.280%) applies (RIR = 220). This is equivalent to transferring
118 data points from treatment failure to treatment success (Fragility = 118).
Note that RIR = Fragility/[1-P(failure in the control group)]
The transfer of 118 data points yields the following table:
Fail Success Success_Rate
Control 1157 1343 53.72%
Treatment 1226 1274 50.96%
Total 2383 2617 52.34%
The log odds = -0.111, SE = 0.057, p-value = 0.051.
This is based on t = estimated effect/standard error
See Frank et al. (2013) for a description of the method.
Citation: Frank, K.A., Maroulis, S., Duong, M., and Kelcey, B. (2013).
What would it take to change an inference?
Using Rubin's causal model to interpret the robustness of causal inferences.
Educational Evaluation and Policy Analysis, 35, 437-460.
Accuracy of results increases with the number of decimals entered.
For other forms of output, run
?pkonfound and inspect the to_return argument
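As a quick check on the identity printed above, RIR = Fragility / [1 - P(failure in the control group)], the sketch below reproduces RIR = 220 from Fragility = 118 and recomputes the post-transfer log odds and standard error straight from the table. The helper function is our own illustration, not part of the konfound API.

```r
# Identity from the output: RIR = Fragility / [1 - P(failure | control)].
# rir_from_fragility() is our own helper, not a konfound function.
rir_from_fragility <- function(fragility, p_fail_control) {
  ceiling(fragility / (1 - p_fail_control))
}
rir_from_fragility(118, 1157 / 2500)   # 220, matching the printed RIR

# Post-transfer table: Control 1157/1343, Treatment 1226/1274.
tab <- matrix(c(1157, 1343, 1226, 1274), nrow = 2, byrow = TRUE,
              dimnames = list(c("Control", "Treatment"),
                              c("Fail", "Success")))
log_odds <- log((tab["Treatment", "Success"] / tab["Treatment", "Fail"]) /
                (tab["Control", "Success"] / tab["Control", "Fail"]))
se <- sqrt(sum(1 / tab))                   # sqrt of summed reciprocal cells
round(c(log_odds = log_odds, se = se), 3)  # -0.111 and 0.057, as printed
round(2 * pnorm(-abs(log_odds / se)), 3)   # ~0.051 (normal approximation;
                                           # konfound may use a t reference)
```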
Robustness of Inference to Replacement (RIR):
RIR = 207
Fragility = 21
The table implied by the parameter estimates and sample sizes you entered:
Fail Success Success_Rate
Control 2246 254 10.16%
Treatment 2307 193 7.72%
Total 4553 447 8.94%
The reported log odds = -0.300, SE = 0.100, and p-value = 0.003.
Values have been rounded to the nearest integer. This may cause
a small change to the estimated effect for the table.
To invalidate the inference that the effect is different from 0
(alpha = 0.050) one would need to replace 207 (8.973%) treatment failure
data points with data points for which the probability of failure in the control
group (89.840%) applies (RIR = 207). This is equivalent to transferring
21 data points from treatment failure to treatment success (Fragility = 21).
Note that RIR = Fragility/[1-P(failure in the control group)]
The transfer of 21 data points yields the following table:
Fail Success Success_Rate
Control 2246 254 10.16%
Treatment 2286 214 8.56%
Total 4532 468 9.36%
The log odds = -0.189, SE = 0.097, p-value = 0.052.
This is based on t = estimated effect/standard error
See Frank et al. (2013) for a description of the method.
Citation: Frank, K.A., Maroulis, S., Duong, M., and Kelcey, B. (2013).
What would it take to change an inference?
Using Rubin's causal model to interpret the robustness of causal inferences.
Educational Evaluation and Policy Analysis, 35, 437-460.
Accuracy of results increases with the number of decimals entered.
For other forms of output, run
?pkonfound and inspect the to_return argument
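The same identity covers this block, where the entered SE of 0.100 is already feasible, so no adjustment was needed. A one-line check using the control failure probability from the implied table (2246/2500):

```r
ceiling(21 / (1 - 2246 / 2500))  # RIR = 207, matching the printed value
round(207 / 2307 * 100, 3)       # 8.973% of the 2307 treatment failures
```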
Robustness of Inference to Replacement (RIR):
RIR = 21
Fragility = 10
The table implied by the parameter estimates and sample sizes you entered:
Fail Success Success_Rate
Control 12 13 52.00%
Treatment 13 12 48.00%
Total 25 25 50.00%
The reported log odds = -0.030, SE = 0.200, and p-value = 0.779.
The SE has been adjusted to 0.566 to generate real numbers in the
implied table for which the p-value would be 0.779. Numbers in
the table cells have been rounded to integers, which may slightly
alter the estimated effect from the value originally entered.
To reach the threshold that would sustain an inference that the
effect is different from 0 (alpha = 0.005) one would need to replace 21
(175.000%) treatment success data points with data points for which the probability of
failure in the control group (48.000%) applies (RIR = 21). This is equivalent
to transferring 10 data points from treatment success to treatment failure
(Fragility = 10).
Note that RIR = Fragility/[1-P(success in the control group)]
Note the RIR exceeds 100%. Generating the transfer of 10 data points would
require replacing more data points than are in the treatment success condition.
The transfer of 10 data points yields the following table:
Fail Success Success_Rate
Control 12 13 52.00%
Treatment 23 2 8.00%
Total 35 15 30.00%
The log odds = -2.522, SE = 0.839, p-value = 0.004.
This is based on t = estimated effect/standard error
See Frank et al. (2013) for a description of the method.
Citation: Frank, K.A., Maroulis, S., Duong, M., and Kelcey, B. (2013).
What would it take to change an inference?
Using Rubin's causal model to interpret the robustness of causal inferences.
Educational Evaluation and Policy Analysis, 35, 437-460.
Accuracy of results increases with the number of decimals entered.
For other forms of output, run
?pkonfound and inspect the to_return argument
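In the sustain direction the output switches to the success-rate complement: RIR = Fragility / [1 - P(success in the control group)]. With only 12 treatment successes in the implied table, RIR = 21 exceeds 100% of them, which is exactly what the warning above flags. A minimal check:

```r
p_success_control <- 13 / 25           # 52% from the implied table
ceiling(10 / (1 - p_success_control))  # RIR = 21
round(21 / 12 * 100, 3)                # 175% of the 12 treatment successes
```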
Robustness of Inference to Replacement (RIR):
RIR = 24
Fragility = 17
The table implied by the parameter estimates and sample sizes you entered:
Fail Success Success_Rate
Control 180 70 28.00%
Treatment 182 68 27.20%
Total 362 138 27.60%
The reported log odds = -0.030, SE = 0.200, and p-value = 0.841.
Values have been rounded to the nearest integer. This may cause
a small change to the estimated effect for the table.
To reach the threshold that would sustain an inference that the
effect is different from 0 (alpha = 0.050) one would need to replace 24
(35.294%) treatment success data points with data points for which the probability of
failure in the control group (72.000%) applies (RIR = 24). This is equivalent
to transferring 17 data points from treatment success to treatment failure
(Fragility = 17).
Note that RIR = Fragility/[1-P(success in the control group)]
The transfer of 17 data points yields the following table:
Fail Success Success_Rate
Control 180 70 28.00%
Treatment 199 51 20.40%
Total 379 121 24.20%
The log odds = -0.417, SE = 0.211, p-value = 0.049.
This is based on t = estimated effect/standard error
See Frank et al. (2013) for a description of the method.
Citation: Frank, K.A., Maroulis, S., Duong, M., and Kelcey, B. (2013).
What would it take to change an inference?
Using Rubin's causal model to interpret the robustness of causal inferences.
Educational Evaluation and Policy Analysis, 35, 437-460.
Accuracy of results increases with the number of decimals entered.
For other forms of output, run
?pkonfound and inspect the to_return argument
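The same success-rate form of the identity reproduces this block's figures, with Fragility = 17 and a control success rate of 70/250:

```r
ceiling(17 / (1 - 70 / 250))  # RIR = 24
round(24 / 68 * 100, 3)       # 35.294% of the 68 treatment successes
```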
Robustness of Inference to Replacement (RIR):
RIR = 24
Fragility = 17
The table implied by the parameter estimates and sample sizes you entered:
Fail Success Success_Rate
Control 68 182 72.80%
Treatment 70 180 72.00%
Total 138 362 72.40%
The reported log odds = -0.030, SE = 0.200, and p-value = 0.841.
Values have been rounded to the nearest integer. This may cause
a small change to the estimated effect for the table.
To reach the threshold that would sustain an inference that the
effect is different from 0 (alpha = 0.050) one would need to replace 24
(35.294%) control failure data points with data points for which the probability of
failure in the entire sample (13.600%) applies (RIR = 24). This is equivalent
to transferring 17 data points from control failure to control success
(Fragility = 17).
Note that RIR = Fragility/[1-P(failure in the entire sample)]
The transfer of 17 data points yields the following table:
Fail Success Success_Rate
Control 51 199 79.60%
Treatment 70 180 72.00%
Total 121 379 75.80%
The log odds = -0.417, SE = 0.211, p-value = 0.049.
This is based on t = estimated effect/standard error
See Frank et al. (2013) for a description of the method.
Citation: Frank, K.A., Maroulis, S., Duong, M., and Kelcey, B. (2013).
What would it take to change an inference?
Using Rubin's causal model to interpret the robustness of causal inferences.
Educational Evaluation and Policy Analysis, 35, 437-460.
Accuracy of results increases with the number of decimals entered.
For other forms of output, run
?pkonfound and inspect the to_return argument
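When the switches occur in the control row, the note above conditions on the whole sample instead: RIR = Fragility / [1 - P(failure in the entire sample)], with 138 failures among all 500 cases. The sketch below checks that and recomputes the post-transfer statistics from the table (Control 51/199, Treatment 70/180):

```r
ceiling(17 / (1 - 138 / 500))              # RIR = 24
log((180 / 70) / (199 / 51))               # log odds ~ -0.417
sqrt(1 / 51 + 1 / 199 + 1 / 70 + 1 / 180)  # SE ~ 0.211
```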
Robustness of Inference to Replacement (RIR):
RIR = 20 + 15 = 35
Total RIR = Primary RIR in treatment row + Supplemental RIR in control row
Fragility = 9 + 8 = 17
Total Fragility = Primary Fragility in treatment row + Supplemental Fragility in control row
The table implied by the parameter estimates and sample sizes you entered:
Fail Success Success_Rate
Control 27 23 46.00%
Treatment 10 10 50.00%
Total 37 33 47.14%
The reported log odds = 0.250, SE = 0.500, and p-value = 0.763.
The SE has been adjusted to 0.530 to generate real numbers in the
implied table for which the p-value would be 0.763. Numbers in
the table cells have been rounded to integers, which may slightly
alter the estimated effect from the value originally entered.
The inference cannot be sustained merely by switching 9 data points in
the treatment row. Therefore, 8 additional data points have been
switched from control success to control failure.
The final Fragility (= 17) and RIR (= 35) reflect both sets of changes.
Please compare the after transfer table with the implied table.
Fail Success Success_Rate
Control 35 15 30.00%
Treatment 1 19 95.00%
Total 36 34 48.57%
The log odds = 3.792, SE = 1.071, p-value = 0.001.
This is based on t = estimated effect/standard error
See Frank et al. (2013) for a description of the method.
Citation: Frank, K.A., Maroulis, S., Duong, M., and Kelcey, B. (2013).
What would it take to change an inference?
Using Rubin's causal model to interpret the robustness of causal inferences.
Educational Evaluation and Policy Analysis, 35, 437-460.
Accuracy of results increases with the number of decimals entered.
For other forms of output, run
?pkonfound and inspect the to_return argument
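In this final block the treatment row alone cannot sustain the inference, so konfound reports a primary and a supplemental component. One plausible reading, our reconstruction rather than a documented formula, is that each component applies the same identity within its own row; the final-table statistics can be verified directly either way:

```r
# Our reading of the two components; row probabilities come from the
# implied table (Control: 27 fail / 23 success out of 50).
ceiling(9 / (1 - 27 / 50))  # primary RIR in the treatment row: 20
ceiling(8 / (1 - 23 / 50))  # supplemental RIR in the control row: 15

# Final table: Control 35/15, Treatment 1/19.
log((19 / 1) / (15 / 35))               # log odds ~ 3.792
sqrt(1 / 35 + 1 / 15 + 1 / 1 + 1 / 19)  # SE ~ 1.071
```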