Let’s look at the UCB data set:

UCB
## # A tibble: 12 × 6
##    gender dept  admitted rejected applied accept_rate
##    <fct>  <fct>    <dbl>    <dbl>   <dbl>       <dbl>
##  1 Male   A          512      313     825      0.621 
##  2 Female A           89       19     108      0.824 
##  3 Male   B          353      207     560      0.630 
##  4 Female B           17        8      25      0.68  
##  5 Male   C          120      205     325      0.369 
##  6 Female C          202      391     593      0.341 
##  7 Male   D          138      279     417      0.331 
##  8 Female D          131      244     375      0.349 
##  9 Male   E           53      138     191      0.277 
## 10 Female E           94      299     393      0.239 
## 11 Male   F           22      351     373      0.0590
## 12 Female F           24      317     341      0.0704

So we want to see how gender (X) and department (Z) change the probability that someone is accepted (Y).

We have two main logistic models we can use that include all three:

- Homogeneous association (XY, YZ, XZ): \[\log \left( \frac{\pi_{ik}}{1-\pi_{ik}} \right) = \alpha + \beta^X_i + \beta^Z_k\]
    - Also called the additive model
- Saturated model (XYZ): \[\log \left( \frac{\pi_{ik}}{1-\pi_{ik}} \right) = \alpha + \beta^X_i + \beta^Z_k + \beta^{XZ}_{ik}\]
    - Also called the interaction model
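As a quick sketch of what those parameterizations mean in R's dummy coding (nothing here beyond base R's model.matrix()), we can count the columns of each design matrix:

# Columns of the design matrix = number of model parameters
# Additive: 1 + (2 - 1) + (6 - 1) = 7; interaction: 2 * 6 = 12
model.matrix(~ gender + dept, data = UCB) |> ncol()
model.matrix(~ gender * dept, data = UCB) |> ncol()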

Comparing the additive vs interaction model

Let’s start by fitting the two models:

# Additive model
berk_add <- 
  glm(
    formula = accept_rate ~ gender + dept,
    weights = applied,
    family = binomial,
    data = UCB
  )

tidy(berk_add,
     exponentiate = TRUE)
## # A tibble: 7 × 5
##   term         estimate std.error statistic  p.value
##   <chr>           <dbl>     <dbl>     <dbl>    <dbl>
## 1 (Intercept)    1.79      0.0690     8.44  3.27e-17
## 2 genderFemale   1.11      0.0808     1.24  2.17e- 1
## 3 deptB          0.958     0.110     -0.395 6.93e- 1
## 4 deptC          0.283     0.107    -11.8   2.41e-32
## 5 deptD          0.274     0.106    -12.2   2.05e-34
## 6 deptE          0.176     0.126    -13.8   2.86e-43
## 7 deptF          0.0366    0.170    -19.5   2.80e-84
# Interaction model
berk_int <- 
  glm(
    formula = accept_rate ~ gender * dept,
    weights = applied,
    family = binomial,
    data = UCB
  )

tidy(berk_int,
     exponentiate = TRUE)
## # A tibble: 12 × 5
##    term               estimate std.error statistic  p.value
##    <chr>                 <dbl>     <dbl>     <dbl>    <dbl>
##  1 (Intercept)          1.64      0.0717     6.86  6.94e-12
##  2 genderFemale         2.86      0.263      4.00  6.21e- 5
##  3 deptB                1.04      0.113      0.368 7.13e- 1
##  4 deptC                0.358     0.135     -7.58  3.34e-14
##  5 deptD                0.302     0.126     -9.46  3.02e-21
##  6 deptE                0.235     0.177     -8.20  2.49e-16
##  7 deptF                0.0383    0.231    -14.1   3.36e-45
##  8 genderFemale:deptB   0.435     0.510     -1.63  1.03e- 1
##  9 genderFemale:deptC   0.308     0.300     -3.93  8.53e- 5
## 10 genderFemale:deptD   0.379     0.303     -3.21  1.35e- 3
## 11 genderFemale:deptE   0.286     0.330     -3.79  1.50e- 4
## 12 genderFemale:deptF   0.422     0.403     -2.14  3.21e- 2
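A quick side check (sketched with broom's augment(); the select() call is just for display): since the interaction model spends 12 parameters on 12 rows, it is saturated, so its fitted probabilities should reproduce the observed acceptance rates exactly.

# Fitted probabilities from the saturated model equal the observed rates
augment(berk_int, type.predict = "response") |> 
  select(accept_rate, .fitted)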

If there is only 1 interaction term (i.e., both X and Z are binary), we can just conduct a hypothesis test of whether that term is 0 using tidy().
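For illustration, here's how we could isolate the interaction rows of that output (a sketch using dplyr and stringr from the tidyverse; in our example there are five such rows, which is exactly the problem noted next):

# Keep only the gender:dept interaction rows of the coefficient table
tidy(berk_int, exponentiate = TRUE) |> 
  filter(str_detect(term, ":"))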

However, if at least one of X or Z is not binary, we’ll have more than 1 interaction term, so we can’t conduct a simple z-test :(

What we can do is compare the deviances of the two models:

# Collect the fit statistics for both models in one tibble
fit_stats <- 
  bind_rows(
    "add" = glance(berk_add),
    "int" = glance(berk_int),
    .id = "model"
  )

fit_stats
## # A tibble: 2 × 9
##   model null.deviance df.null logLik   AIC   BIC deviance df.residual  nobs
##   <chr>         <dbl>   <int>  <dbl> <dbl> <dbl>    <dbl>       <int> <int>
## 1 add            877.      11  -44.6 103.  107.  2.02e+ 1           5    12
## 2 int            877.      11  -34.5  92.9  98.8 2.56e-13           0    12

The difference in the deviances will follow a \(\chi^2\) distribution with degrees of freedom equal to the difference in the number of model parameters

The difference in parameters between the additive and interaction models is the number of interaction terms that we have: df = \((I - 1)(K - 1)\), where \(I\) and \(K\) are the number of levels of X and Z

For our example, \(I = 2\) and \(K = 6\), so df = 5, which is also the difference in the df.residual column from glance()
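The same df can be pulled straight from the fitted models with base R's df.residual() extractor:

# Difference in residual df between the additive and interaction models
df.residual(berk_add) - df.residual(berk_int)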

fit_stats |> 
  summarize(
    comparison = paste(model, collapse = ' vs '),
    G = deviance |> diff() |> abs(),
    df = df.residual |> diff() |> abs()
  ) |> 
  mutate(
    p_val = pchisq(q = G, df = df, lower.tail = FALSE)
  )
## # A tibble: 1 × 4
##   comparison     G    df   p_val
##   <chr>      <dbl> <int>   <dbl>
## 1 add vs int  20.2     5 0.00114
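The same statistic can also be computed directly from the fitted models (a sketch using base R's deviance() extractor):

# Likelihood ratio statistic G and its chi-squared p-value on 5 df
G <- deviance(berk_add) - deviance(berk_int)
pchisq(G, df = 5, lower.tail = FALSE)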

So we have strong evidence that the interaction is needed :(

Instead of finding the test statistic and p-value by hand, we can use Anova() in the car package with type = 3.

car::Anova(mod = berk_int, type = 3)
## Analysis of Deviance Table (Type III tests)
## 
## Response: accept_rate
##             LR Chisq Df Pr(>Chisq)    
## gender         19.05  1  1.271e-05 ***
## dept          514.76  5  < 2.2e-16 ***
## gender:dept    20.20  5   0.001144 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The interaction row of the results matches exactly what we have!
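As an aside (assuming broom's tidier for anova-class tables, which is what car::Anova() returns), the same table can be turned into a tibble:

# Tidy the Type III analysis-of-deviance table
car::Anova(mod = berk_int, type = 3) |> 
  tidy()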

If we have two or more nested models that we want to compare directly, we can use the anova() function in base R.

Let’s fit two more models, one with just gender and one with just department:

# Gender-only model
berk_gen <- 
  glm(
    formula = accept_rate ~ gender,
    weights = applied,
    family = binomial,
    data = UCB
  )

# Department-only model
berk_dept <- 
  glm(
    formula = accept_rate ~ dept,
    weights = applied,
    family = binomial,
    data = UCB
  )

# Now we can use anova() just by giving it multiple models, from simplest to most complex:
anova(berk_gen, berk_add, berk_int, 
      test = "LRT")
## Analysis of Deviance Table
## 
## Model 1: accept_rate ~ gender
## Model 2: accept_rate ~ gender + dept
## Model 3: accept_rate ~ gender * dept
##   Resid. Df Resid. Dev Df Deviance  Pr(>Chi)    
## 1        10     783.61                          
## 2         5      20.20  5    763.4 < 2.2e-16 ***
## 3         0       0.00  5     20.2  0.001144 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(berk_gen, berk_int, 
      test = "LRT")
## Analysis of Deviance Table
## 
## Model 1: accept_rate ~ gender
## Model 2: accept_rate ~ gender * dept
##   Resid. Df Resid. Dev Df Deviance  Pr(>Chi)    
## 1        10     783.61                          
## 2         0       0.00 10   783.61 < 2.2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(berk_dept, berk_int, 
      test = "LRT")
## Analysis of Deviance Table
## 
## Model 1: accept_rate ~ dept
## Model 2: accept_rate ~ gender * dept
##   Resid. Df Resid. Dev Df Deviance Pr(>Chi)   
## 1         6     21.735                        
## 2         0      0.000  6   21.735 0.001352 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

anova() compares the model on each row with the model in the previous row

So the test results in row 2 compare the gender-only model vs the additive model, and row 3 compares the additive model vs the interaction model
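If we want just the additive vs interaction comparison on its own, we can hand anova() exactly those two models; this should reproduce our hand-computed G = 20.2 on 5 df:

# Additive vs interaction model only
anova(berk_add, berk_int, test = "LRT")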
