Taking support for abortion as the outcome variable, produce fourfold displays showing the association with sex, stratified by status.

knitr::opts_chunk$set(echo = TRUE)
library(vcdExtra)
## Loading required package: vcd
## Loading required package: grid
## Loading required package: gnm
data("Abortion")
str(Abortion)
##  'table' num [1:2, 1:2, 1:2] 171 152 138 167 79 148 112 133
##  - attr(*, "dimnames")=List of 3
##   ..$ Sex             : chr [1:2] "Female" "Male"
##   ..$ Status          : chr [1:2] "Lo" "Hi"
##   ..$ Support_Abortion: chr [1:2] "Yes" "No"
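
Before plotting, it can help to flatten the 2 x 2 x 2 table for a quick look at the counts (a sketch; output not shown):

ftable(Abortion, row.vars = c("Status", "Sex"))
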
# fourfold() treats the third dimension as the stratifying variable,
# so permute the table so that Status comes last
fourfold(aperm(Abortion, c(1, 3, 2)))

Do the same for the association of support for abortion with status, stratified by sex.

# Status x Support_Abortion, one panel per level of Sex
fourfold(aperm(Abortion, c(2, 3, 1)))

For each of the problems above, use oddsratio() to calculate the numerical values of the odds ratios, stratified as in the question.

oddsratio(Abortion, log = F)
##  odds ratios for Sex and Status by Support_Abortion 
## 
##       Yes        No 
## 1.3614130 0.6338682
confint(oddsratio(Abortion, log = F))
##         2.5 %    97.5 %
## Yes 0.9945685 1.8635675
## No  0.4373246 0.9187431
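
Note that by default oddsratio() computes odds ratios for the first two dimensions within each level of the third, so the call above gives Sex x Status odds ratios by Support_Abortion (as its output header shows). To stratify exactly as the questions ask, permute the dimensions first (a sketch; output not shown):

oddsratio(aperm(Abortion, c(1, 3, 2)), log = FALSE)  # Sex x Support, by Status
oddsratio(aperm(Abortion, c(2, 3, 1)), log = FALSE)  # Status x Support, by Sex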

Write a brief summary of how support for abortion depends on sex and status. From the fourfold plots and odds ratios above: among low-status respondents, women support abortion noticeably more often than men, while among high-status respondents the sex difference all but disappears (the two sexes support abortion at roughly equal rates). In other words, the association between sex and support for abortion depends on status.

The two-way table Mammograms in vcdExtra gives ratings on the severity of diagnosis of 110 mammograms by two raters. Assess the strength of agreement between the raters using Cohen’s κ, both unweighted and weighted.

data(Mammograms)
Kappa(Mammograms)
##             value     ASE      z  Pr(>|z|)
## Unweighted 0.3713 0.06033  6.154 7.560e-10
## Weighted   0.5964 0.04923 12.114 8.901e-34
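
Confidence intervals for both κ statistics are available through the confint() method for "Kappa" objects, analogous to the odds-ratio intervals above (a sketch; output not shown):

confint(Kappa(Mammograms))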

Use agreementplot() for a graphical display of agreement here.

agreementplot(Mammograms, main = "Agreement plot")

Compare the Kappa measures with the results from assocstats(). What is a reasonable interpretation of each of these measures?

kappa(Mammograms)  # caution: this is base R's kappa(), not vcd's Kappa(); see the note below
## [1] 45.29604
assocstats(Mammograms) 
##                     X^2 df   P(> X^2)
## Likelihood Ratio 92.619  9 4.4409e-16
## Pearson          83.516  9 3.2307e-14
## 
## Phi-Coefficient   : NA 
## Contingency Coeff.: 0.657 
## Cramer's V        : 0.503

Note that base R's kappa() computes (an estimate of) the 2-norm condition number of a matrix, which is why the call above returns 45.3 rather than an agreement measure; Cohen's κ is given by vcd's Kappa(), used earlier. assocstats() computes the Pearson chi-squared test, the likelihood-ratio chi-squared test, the phi coefficient, the contingency coefficient, and Cramer's V for (possibly stratified) contingency tables. The two kinds of measures answer different questions: κ (unweighted 0.37, weighted 0.60) measures how much the raters agree beyond what chance would produce, with the weighted version giving partial credit to near-agreement, whereas the assocstats() measures (e.g., Cramer's V = 0.503) quantify association of any pattern. Two raters can be strongly associated while systematically disagreeing, so high association does not by itself imply high agreement.
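
As a check, unweighted κ can be computed directly from its definition κ = (Po − Pe) / (1 − Pe), where Po is the observed agreement and Pe the agreement expected by chance (a sketch; this should reproduce the unweighted value from Kappa() above):

p  <- prop.table(Mammograms)
Po <- sum(diag(p))                  # observed proportion of exact agreement
Pe <- sum(rowSums(p) * colSums(p))  # agreement expected under independence
(Po - Pe) / (1 - Pe)                # unweighted kappa, about 0.37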

Agresti and Winner (1997) gave the data below on the ratings of 160 movies by the reviewers Gene Siskel and Roger Ebert for the period from April 1995 through September 1996. The rating categories were Con (“thumbs down”), Mixed, and Pro (“thumbs up”).

Assess the strength of agreement between the raters using Cohen’s κ, both unweighted and weighted.

Mdata <- matrix(c(24, 8, 13, 45, 8, 13, 11, 32, 10, 9, 64, 83, 42, 30, 88, 160),
                ncol = 4, byrow = TRUE)
colnames(Mdata) <- c("Con", "Mixed", "Pro", "Total")
rownames(Mdata) <- c("Con", "Mixed", "Pro", "Total")
Mdata
##       Con Mixed Pro Total
## Con    24     8  13    45
## Mixed   8    13  11    32
## Pro    10     9  64    83
## Total  42    30  88   160
Kappa(Mdata[1:3, 1:3])  # exclude the Total row and column

κ must be computed on the 3 x 3 table of ratings only. Passing the full matrix (Kappa(Mdata)) treats the marginal totals as a fourth rating category and drags κ toward zero (about 0.09 unweighted); on the 3 x 3 table the unweighted κ is about 0.39, modest agreement beyond chance.

Use agreementplot () for a graphical display of agreement here.

agreementplot(Mdata[1:3, 1:3], main = "Siskel and Ebert ratings")  # again exclude the totals
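
agreementplot() also returns, invisibly, the Bangdiwala agreement strength statistics that the shaded squares display; they can be captured for a numerical comparison with κ (a sketch; output not shown, and the component names assume the current vcd return value):

B <- agreementplot(Mdata[1:3, 1:3])
unlist(B[c("Bangdiwala", "Bangdiwala_Weighted")])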