In August of 2012, news outlets ranging from the Washington Post to the Huffington Post ran a story about the rise of atheism in America. The source for the story was a poll that asked people, “Irrespective of whether you attend a place of worship or not, would you say you are a religious person, not a religious person or a convinced atheist?” This type of question, which asks people to classify themselves in one way or another, is common in polling and generates categorical data. In this lab we take a look at the atheism survey and explore what’s at play when making inference about population proportions using categorical data.
To access the press release for the poll, conducted by WIN-Gallup International, click on the following link:
Take a moment to review the report then address the following questions.
These are sample statistics, since the percentages are calculated from samples of each country's population rather than from the entire population.
To generalize the results to the global population, we must assume that the observations are independent and that each sample makes up less than 10 percent of its population. We also need at least 10 respondents in each response category (the success-failure condition) for the methods in this lab to apply.
Turn your attention to Table 6 (pages 15 and 16), which reports the sample size and response percentages for all 57 countries. While this is a useful format to summarize the data, we will base our analysis on the original data set of individual responses to the survey. Load this data set into R with the following command.
load("more/atheism.RData")
atheism
Each row of Table 6 corresponds to a single country and its summary percentages, while each row of the atheism data frame corresponds to an individual respondent, recording their nationality, response, and the survey year.
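To see concretely what one of these rows looks like, the first few records can be printed; a quick check in base R:
head(atheism)   # first six rows: one respondent per row
str(atheism)    # number of rows and the variables recorded for each respondent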
To investigate the link between these two ways of organizing this data, take a look at the estimated proportion of atheists in the United States. Towards the bottom of Table 6, we see that this is 5%. We should be able to come to the same number using the atheism data.
Create a new data set called us12 that contains only the rows in atheism associated with respondents to the 2012 survey from the United States. Next, calculate the proportion of atheist responses. Does it agree with the percentage in Table 6? If not, why?
us12 <- subset(atheism, nationality == "United States" & year == "2012")
sum(us12$response == 'atheist')/nrow(us12)
## [1] 0.0499002
Yes, the calculated proportion of about 5% agrees with the percentage reported in Table 6.
As was hinted at in Exercise 1, Table 6 provides statistics, that is, calculations made from the sample of 51,927 people. What we’d like, though, is insight into the population parameters. You answer the question, “What proportion of people in your sample reported being atheists?” with a statistic; while the question “What proportion of people on earth would report being atheists” is answered with an estimate of the parameter.
The inferential tools for estimating population proportion are analogous to those used for means in the last chapter: the confidence interval and the hypothesis test.
inference(us12$response, est= "proportion", type ="ci", success = "atheist", method ="theoretical")
## Single proportion -- success: atheist
## Summary statistics:
## p_hat = 0.0499 ; n = 1002
## Check conditions: number of successes = 50 ; number of failures = 952
## Standard error = 0.0069
## 95 % Confidence interval = ( 0.0364 , 0.0634 )
The sample is random and its 1,002 respondents are far fewer than 10% of the US population, so independence is reasonable. Both the number of successes (50) and the number of failures (952) are at least 10, so the success-failure condition is also met and the normal approximation can be used.
If the conditions for inference are reasonable, we can either calculate the standard error and construct the interval by hand, or allow the inference function to do it for us.
inference(us12$response, est = "proportion", type = "ci", method = "theoretical",
success = "atheist")
## Single proportion -- success: atheist
## Summary statistics:
## p_hat = 0.0499 ; n = 1002
## Check conditions: number of successes = 50 ; number of failures = 952
## Standard error = 0.0069
## 95 % Confidence interval = ( 0.0364 , 0.0634 )
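If we prefer, the same interval can also be constructed by hand from the summary counts above. Below is a minimal sketch, assuming the us12 data frame created earlier is still loaded; it should reproduce the inference output up to rounding.
p_hat <- sum(us12$response == "atheist") / nrow(us12)   # sample proportion of atheists
n_us <- nrow(us12)                                       # sample size
se <- sqrt(p_hat * (1 - p_hat) / n_us)                   # standard error of p_hat
p_hat + c(-1, 1) * 1.96 * se                             # 95% confidence interval by hand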
Note that since the goal is to construct an interval estimate for a proportion, it’s necessary to specify what constitutes a “success”, which here is a response of "atheist".
Although formal confidence intervals and hypothesis tests don’t show up in the report, suggestions of inference appear at the bottom of page 7: “In general, the error margin for surveys of this kind is \(\pm\) 3-5% at 95% confidence”.
The margin of error for the US estimate is half the width of the 95% confidence interval:
me <- (0.0634-0.0364)/2
me
## [1] 0.0135
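Equivalently, the margin of error can be computed straight from the formula \(ME = 1.96 \times SE\) rather than from the interval endpoints; a quick sketch using the US summary values reported above:
p_hat <- 0.0499                            # sample proportion of atheists, US 2012
1.96 * sqrt(p_hat * (1 - p_hat) / 1002)    # margin of error, roughly 0.0135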
Using the inference function, calculate confidence intervals for the proportion of atheists in 2012 in two other countries of your choice, and report the associated margins of error. Be sure to note whether the conditions for inference are met. It may be helpful to create new data sets for each of the two countries first, and then use these data sets in the inference function to construct the confidence intervals.
I have chosen Austria and Canada. Both samples are random, make up less than 10 percent of each country's population, and contain more than 10 successes and failures, so the conditions for inference are met. The margins of error are computed after the two intervals below.
austria12 <- subset(atheism, nationality =="Austria" & year == "2012")
canada12 <- subset(atheism, nationality =="Canada" & year =="2012")
inference(austria12$response, est="proportion", type="ci", success="atheist", method="theoretical")
## Single proportion -- success: atheist
## Summary statistics:
## p_hat = 0.0998 ; n = 1002
## Check conditions: number of successes = 100 ; number of failures = 902
## Standard error = 0.0095
## 95 % Confidence interval = ( 0.0812 , 0.1184 )
inference(canada12$response, est="proportion", type="ci", success="atheist", method="theoretical")
## Single proportion -- success: atheist
## Summary statistics:
## p_hat = 0.0898 ; n = 1002
## Check conditions: number of successes = 90 ; number of failures = 912
## Standard error = 0.009
## 95 % Confidence interval = ( 0.0721 , 0.1075 )
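The associated margins of error are half the width of each confidence interval. A quick calculation from the two intervals reported above:
(0.1184 - 0.0812) / 2   # margin of error for Austria, about 0.0186
(0.1075 - 0.0721) / 2   # margin of error for Canada, about 0.0177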
Imagine you’ve set out to survey 1000 people on two questions: are you female? and are you left-handed? Since both of these sample proportions were calculated from the same sample size, they should have the same margin of error, right? Wrong! While the margin of error does change with sample size, it is also affected by the proportion.
Think back to the formula for the standard error: \(SE = \sqrt{p(1-p)/n}\). This is then used in the formula for the margin of error for a 95% confidence interval: \(ME = 1.96\times SE = 1.96\times\sqrt{p(1-p)/n}\). Since the population proportion \(p\) is in this \(ME\) formula, it should make sense that the margin of error is in some way dependent on the population proportion. We can visualize this relationship by creating a plot of \(ME\) vs. \(p\).
The first step is to make a vector p that is a sequence from 0 to 1 with each number separated by 0.01. We can then create a vector of the margin of error (me) associated with each of these values of p using the familiar approximate formula (\(ME = 2 \times SE\)). Lastly, we plot the two vectors against each other to reveal their relationship.
n <- 1000
p <- seq(0, 1, 0.01)
me <- 2 * sqrt(p * (1 - p)/n)
plot(me ~ p, ylab = "Margin of Error", xlab = "Population Proportion")
Describe the relationship between p and me. The margin of error is largest when \(p = 0.5\) and decreases symmetrically toward zero as \(p\) approaches 0 or 1; the quick check below confirms where the maximum occurs.
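A quick numeric check of where the curve peaks, using the p and me vectors just created:
p[which.max(me)]   # the margin of error is largest at p = 0.5
max(me)            # about 0.032 when n = 1000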
The textbook emphasizes that you must always check conditions before making inference. For inference on proportions, the sample proportion can be assumed to be nearly normal if it is based upon a random sample of independent observations and if both \(np \geq 10\) and \(n(1 - p) \geq 10\). This rule of thumb is easy enough to follow, but it makes one wonder: what’s so special about the number 10?
The short answer is: nothing. You could argue that we would be fine with 9 or that we really should be using 11. The “best” value for such a rule of thumb is, at least to some degree, arbitrary. However, once \(np\) and \(n(1-p)\) reach 10, the sampling distribution is sufficiently normal to use confidence intervals and hypothesis tests that are based on that approximation.
We can investigate the interplay between \(n\) and \(p\) and the shape of the sampling distribution by using simulations. To start off, we simulate the process of drawing 5000 samples of size 1040 from a population with a true atheist proportion of 0.1. For each of the 5000 samples we compute \(\hat{p}\) and then plot a histogram to visualize their distribution.
p <- 0.1
n <- 1040
p_hats <- rep(0, 5000)
for(i in 1:5000){
samp <- sample(c("atheist", "non_atheist"), n, replace = TRUE, prob = c(p, 1-p))
p_hats[i] <- sum(samp == "atheist")/n
}
hist(p_hats, main = "p = 0.1, n = 1040", xlim = c(0, 0.18))
These commands build up the sampling distribution of \(\hat{p}\) using the familiar for
loop. You can read the sampling procedure for the first line of code inside the for
loop as, “take a sample of size \(n\) with replacement from the choices of atheist and non-atheist with probabilities \(p\) and \(1 - p\), respectively.” The second line in the loop says, “calculate the proportion of atheists in this sample and record this value.” The loop allows us to repeat this process 5,000 times to build a good representation of the sampling distribution.
Describe the sampling distribution of sample proportions at \(n = 1040\) and \(p = 0.1\), noting its center, spread, and shape. Remember that R has functions such as mean and sd to calculate summary statistics.
summary(p_hats)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.07019 0.09327 0.09904 0.09969 0.10577 0.12981
sd(p_hats)
## [1] 0.009287382
hist(p_hats)
The distribution is unimodal and nearly normal, centered at a mean of about 0.0997 (close to the true proportion of 0.1), with a standard deviation of about 0.0093.
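As a sanity check, the spread of the simulated proportions can be compared with the theoretical standard error \(\sqrt{p(1-p)/n}\); a one-line calculation:
sqrt(0.1 * (1 - 0.1) / 1040)   # theoretical SE, about 0.0093, close to sd(p_hats)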
To display all four histograms on the same screen, run the par(mfrow = c(2, 2)) command before creating the histograms. You may need to expand the plot window to accommodate the larger two-by-two plot. Describe the three new sampling distributions. Based on these limited plots, how does \(n\) appear to affect the distribution of \(\hat{p}\)? How does \(p\) affect the sampling distribution?
par(mfrow = c(2, 2))
hist(p_hats, main = "p = 0.1, n = 1040", xlim = c(0, 0.18))
p <- 0.1
n <- 400
p_hatsa <- rep(0, 5000)
for(i in 1:5000){
samp <- sample(c("atheist", "non_atheist"), n, replace = TRUE, prob = c(p, 1-p))
p_hatsa[i] <- sum(samp == "atheist")/n
}
hist(p_hatsa, main = "p = 0.1, n = 400", xlim = c(0, 0.18))
p <- 0.2
n <- 1040
p_hatsb <- rep(0, 5000)
for(i in 1:5000){
samp <- sample(c("atheist", "non_atheist"), n, replace = TRUE, prob = c(p, 1-p))
p_hatsb[i] <- sum(samp == "atheist")/n
}
hist(p_hatsb, main = "p = 0.2, n = 1040", xlim = c(0.14, 0.30))
p <- 0.2
n <- 400
p_hatsc <- rep(0, 5000)
for(i in 1:5000){
samp <- sample(c("atheist", "non_atheist"), n, replace = TRUE, prob = c(p, 1-p))
p_hatsc[i] <- sum(samp == "atheist")/n
}
hist(p_hatsc, main = "p = 0.2, n = 400", xlim = c(0.14, 0.30))
Increasing the sample size makes the sampling distribution narrower and more nearly normal, while a smaller sample gives a wider, less regular distribution. Changing \(p\) shifts the center of the distribution to the new value of \(p\) and, because 0.2 is closer to 0.5 than 0.1, also increases the spread. The quick comparison below makes this concrete.
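To quantify these effects, the spreads of the four simulated sampling distributions can be compared directly, assuming the objects created above are still in the workspace:
sd(p_hats)    # p = 0.1, n = 1040: the narrowest of the four
sd(p_hatsa)   # p = 0.1, n = 400: wider, because n is smaller
sd(p_hatsb)   # p = 0.2, n = 1040: wider than p = 0.1 at the same n
sd(p_hatsc)   # p = 0.2, n = 400: the widest of the four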
Once you’re done, you can reset the layout of the plotting window with the par(mfrow = c(1, 1)) command or by clicking on “Clear All” above the plotting window (if using RStudio). Note that the latter will get rid of all your previous plots.
par(mfrow= c(1,1))
Because the expected number of atheists in that sample is only 8, which falls below the threshold of 10, the success-failure condition is not met and the normal approximation should not be relied on for inference. Australia, on the other hand, has well over 10 expected successes and failures, so we can proceed with inference and report a margin of error.
The question of atheism was asked by WIN-Gallup International in a similar survey that was conducted in 2005. (We assume here that sample sizes have remained the same.) Table 4 on page 13 of the report summarizes survey results from 2005 and 2012 for 39 countries.
Answer the following two questions using the inference function. As always, write out the hypotheses for any tests you conduct and outline the status of the conditions for inference.
a. Is there convincing evidence that Spain has seen a change in its atheism index between 2005 and 2012?
Hint: Create a new data set for respondents from Spain. Form confidence intervals for the true proportion of atheists in both years, and determine whether they overlap.
H0: Spain’s atheism index did not change between 2005 and 2012. HA: Spain’s atheism index did change between 2005 and 2012. In both years the sample is random, is far smaller than 10% of Spain’s population, and contains well over 10 successes and failures, so the conditions for inference are met.
spain12 <- subset(atheism, nationality =="Spain" & year == "2012")
spain05 <- subset(atheism, nationality =="Spain" & year == "2005")
inference(spain12$response, est= "proportion", type= "ci", method= "theoretical", success= "atheist")
## Single proportion -- success: atheist
## Summary statistics:
## p_hat = 0.09 ; n = 1145
## Check conditions: number of successes = 103 ; number of failures = 1042
## Standard error = 0.0085
## 95 % Confidence interval = ( 0.0734 , 0.1065 )
inference(spain05$response, est="proportion", type="ci", method="theoretical", success="atheist")
## Single proportion -- success: atheist
## Summary statistics:
## p_hat = 0.1003 ; n = 1146
## Check conditions: number of successes = 115 ; number of failures = 1031
## Standard error = 0.0089
## 95 % Confidence interval = ( 0.083 , 0.1177 )
The two confidence intervals overlap, so there is no convincing evidence that Spain’s atheism index changed between 2005 and 2012; we fail to reject the null hypothesis.
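Comparing intervals is a useful heuristic, but a formal two-proportion test can also be worked out by hand from the counts in the output above. This is a rough sketch in base R rather than a call to the inference function; it pools the two samples under the null hypothesis:
p1 <- 103 / 1145                                  # Spain 2012 sample proportion
p2 <- 115 / 1146                                  # Spain 2005 sample proportion
p_pool <- (103 + 115) / (1145 + 1146)             # pooled proportion under H0
se_diff <- sqrt(p_pool * (1 - p_pool) * (1/1145 + 1/1146))
z <- (p1 - p2) / se_diff                          # test statistic
2 * pnorm(-abs(z))                                # two-sided p-value, well above 0.05
The large p-value is consistent with the overlapping intervals.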
b. Is there convincing evidence that the United States has seen a change in its atheism index between 2005 and 2012?
H0: The US atheism index did not change between 2005 and 2012. HA: The US atheism index did change between 2005 and 2012. The conditions for inference are met in both years, although the 2005 sample contains exactly 10 atheist responses, which only just satisfies the success-failure condition.
us12 <- subset(atheism, nationality== "United States" & year== "2012")
us05 <- subset(atheism, nationality== "United States" & year== "2005")
inference(us12$response, est= "proportion", type="ci", success="atheist", method="theoretical")
## Single proportion -- success: atheist
## Summary statistics:
## p_hat = 0.0499 ; n = 1002
## Check conditions: number of successes = 50 ; number of failures = 952
## Standard error = 0.0069
## 95 % Confidence interval = ( 0.0364 , 0.0634 )
inference(us05$response, est= "proportion", type="ci", success="atheist", method="theoretical")
## Single proportion -- success: atheist
## Summary statistics:
## p_hat = 0.01 ; n = 1002
## Check conditions: number of successes = 10 ; number of failures = 992
## Standard error = 0.0031
## 95 % Confidence interval = ( 0.0038 , 0.0161 )
Because the 2005 and 2012 confidence intervals do not overlap, we reject the null hypothesis in favor of the alternative: there is convincing evidence that the US atheism index changed between 2005 and 2012.
If none of the countries’ atheism indices actually changed, then at a 0.05 significance level we would still expect about 39 × 0.05 = 1.95, or roughly 2, of the 39 countries to show a statistically significant change simply by chance (that is, as Type I errors).
The margin of error at 95% confidence is
\[ ME = 1.96 \times SE = 1.96\sqrt{p(1-p)/n}. \]
Solving for \(n\) gives
\[ n = \frac{1.96^2 \, p(1-p)}{ME^2}. \]
Using the worst-case proportion \(p = 0.5\) and a target margin of error of 1%:
p <- 0.5      # worst-case proportion, maximizes p * (1 - p)
z <- 1.96     # critical value for 95% confidence
me2 <- 0.01   # target margin of error of 1%
n <- (p * (1 - p) * z^2) / me2^2
n
## [1] 9604
We would need to sample at least 9,604 respondents to achieve a margin of error no greater than 1% at 95% confidence.
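As a quick verification, plugging \(n = 9604\) back into the margin-of-error formula returns the 1% target:
1.96 * sqrt(0.5 * 0.5 / 9604)   # margin of error equals 0.01 when p = 0.5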