Jeff Littlejohn
Data 606 - Fall 2018
Lab 5

In August of 2012, news outlets ranging from the Washington Post to the Huffington Post ran a story about the rise of atheism in America. The source for the story was a poll that asked people, “Irrespective of whether you attend a place of worship or not, would you say you are a religious person, not a religious person or a convinced atheist?” This type of question, which asks people to classify themselves in one way or another, is common in polling and generates categorical data. In this lab we take a look at the atheism survey and explore what’s at play when making inference about population proportions using categorical data.

The survey

To access the press release for the poll, conducted by WIN-Gallup International, click on the following link:

https://github.com/jbryer/DATA606/blob/master/inst/labs/Lab6/more/Global_INDEX_of_Religiosity_and_Atheism_PR__6.pdf

Take a moment to review the report then address the following questions.

  1. In the first paragraph, several key findings are reported. Do these percentages appear to be sample statistics (derived from the data sample) or population parameters?

These are sample statistics: they were calculated from the responses of the people surveyed by WIN-Gallup International’s worldwide network of pollsters, not measured on the entire population.

  2. The title of the report is “Global Index of Religiosity and Atheism”. To generalize the report’s findings to the global human population, what must we assume about the sampling method? Does that seem like a reasonable assumption?

The survey used a worldwide network of pollsters who are presumably competent in ensuring that their samples are representative of each country’s population. To generalize to the global population, we must assume that each country’s sample is effectively a random sample of that country, that responses are independent, and that the sample sizes (200+, with most much larger) are large enough for reasonably precise estimates. Only 57 countries were polled, not every country in the world, but the regional coverage appears broad, so generalization seems reasonably safe, provided the within-country sampling was unbiased.

The data

Turn your attention to Table 6 (pages 15 and 16), which reports the sample size and response percentages for all 57 countries. While this is a useful format to summarize the data, we will base our analysis on the original data set of individual responses to the survey. Load this data set into R with the following command.

load("C:\\Users\\littl\\Documents\\Lab6\\more\\atheism.RData")
  3. What does each row of Table 6 correspond to? What does each row of atheism correspond to?

Each row of Table 6 corresponds to a single country, summarizing its sample size and response percentages. Each row of atheism corresponds to a single individual’s survey response.

To investigate the link between these two ways of organizing this data, take a look at the estimated proportion of atheists in the United States. Towards the bottom of Table 6, we see that this is 5%. We should be able to come to the same number using the atheism data.

  4. Using the command below, create a new dataframe called us12 that contains only the rows in atheism associated with respondents to the 2012 survey from the United States. Next, calculate the proportion of atheist responses. Does it agree with the percentage in Table 6? If not, why?

Yes, 0.0499 (below) rounds up to 5%.

library(plyr)
us12 <- subset(atheism, nationality == "United States" & year == "2012")
nrow(subset(us12,response == "atheist"))/nrow(us12)
## [1] 0.0499002
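
The same proportion can also be read off a relative frequency table, which avoids counting rows by hand; a one-line equivalent in base R (the plyr load above isn’t actually required for this step):

# relative frequencies of each response in the 2012 US sample
prop.table(table(us12$response))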

Inference on proportions

As was hinted at in Exercise 1, Table 6 provides statistics, that is, calculations made from the sample of 51,927 people. What we’d like, though, is insight into the population parameters. You answer the question, “What proportion of people in your sample reported being atheists?” with a statistic, while the question “What proportion of people on earth would report being atheists?” is answered with an estimate of the parameter.

The inferential tools for estimating population proportion are analogous to those used for means in the last chapter: the confidence interval and the hypothesis test.

  5. Write out the conditions for inference to construct a 95% confidence interval for the proportion of atheists in the United States in 2012. Are you confident all conditions are met?

The conditions for inference are: (1) the sample observations are independent, and (2) the sample contains at least 10 successes and at least 10 failures.

Both conditions appear to be met here: the sample includes well over 10 atheist and 10 non-atheist responses, and the respondents were plausibly sampled independently of one another.
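
A quick way to see those counts directly from the data (the inference output below reports the same numbers):

# tabulate the 2012 US responses; both categories should be at least 10
table(us12$response)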

If the conditions for inference are reasonable, we can either calculate the standard error and construct the interval by hand, or allow the inference function to do it for us.

inference(us12$response, est = "proportion", type = "ci", method = "theoretical", 
          success = "atheist")
## Single proportion -- success: atheist 
## Summary statistics:

## p_hat = 0.0499 ;  n = 1002 
## Check conditions: number of successes = 50 ; number of failures = 952 
## Standard error = 0.0069 
## 95 % Confidence interval = ( 0.0364 , 0.0634 )

Note that since the goal is to construct an interval estimate for a proportion, it’s necessary to specify what constitutes a “success”, which here is a response of "atheist".

Although formal confidence intervals and hypothesis tests don’t show up in the report, suggestions of inference appear at the bottom of page 7: “In general, the error margin for surveys of this kind is \(\pm\) 3-5% at 95% confidence”.

  6. Based on the R output, what is the margin of error for the estimate of the proportion of atheists in the US in 2012?
SE_6 = 0.0069
#two-tailed z score - use .975
z_6 = qnorm(.975)
ME_6 = z_6 * SE_6
ME_6
## [1] 0.01352375
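
Equivalently, the margin of error is half the width of the confidence interval reported by inference; the small difference from the value above is just rounding in the printed SE and CI.

# half-width of the reported 95% CI (0.0364, 0.0634)
(0.0634 - 0.0364) / 2
## [1] 0.0135
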
  7. Using the inference function, calculate confidence intervals for the proportion of atheists in 2012 in two other countries of your choice, and report the associated margins of error. Be sure to note whether the conditions for inference are met. It may be helpful to create new data sets for each of the two countries first, and then use these data sets in the inference function to construct the confidence intervals.

Let’s start with Lebanon.

leb12 <- subset(atheism, nationality == "Lebanon" & year == "2012")
inference(leb12$response, est = "proportion", type = "ci", method = "theoretical", 
          success = "atheist")
## Single proportion -- success: atheist 
## Summary statistics:

## p_hat = 0.0198 ;  n = 505 
## Check conditions: number of successes = 10 ; number of failures = 495 
## Standard error = 0.0062 
## 95 % Confidence interval = ( 0.0077 , 0.032 )

This barely meets the success-failure condition for inference, as there are exactly 10 “successes.” The 95% confidence interval for the proportion of atheists in Lebanon is .0077 to .032, giving a margin of error of about 1.96 × .0062 ≈ .012. These low rates might be attributed to the centrality of religious affiliation to Lebanese identity and political life.

Now, let’s check out India.

india12 <- subset(atheism, nationality == "India" & year == "2012")
inference(india12$response, est = "proportion", type = "ci", method = "theoretical", 
          success = "atheist")
## Single proportion -- success: atheist 
## Summary statistics:

## p_hat = 0.0302 ;  n = 1092 
## Check conditions: number of successes = 33 ; number of failures = 1059 
## Standard error = 0.0052 
## 95 % Confidence interval = ( 0.0201 , 0.0404 )

This meets the success-failure and independence conditions for inference. India’s 95% confidence interval for the proportion of its population that is atheist is .0201 to .0404, giving a margin of error of about 1.96 × .0052 ≈ .010. This assumes the survey methodology was truly random and that each citizen had an equal chance of being selected for an interview.

How does the proportion affect the margin of error?

Imagine you’ve set out to survey 1000 people on two questions: are you female? and are you left-handed? Since both of these sample proportions were calculated from the same sample size, they should have the same margin of error, right? Wrong! While the margin of error does change with sample size, it is also affected by the proportion.

Think back to the formula for the standard error: \(SE = \sqrt{p(1-p)/n}\). This is then used in the formula for the margin of error for a 95% confidence interval: \(ME = 1.96\times SE = 1.96\times\sqrt{p(1-p)/n}\). Since the population proportion \(p\) is in this \(ME\) formula, it should make sense that the margin of error is in some way dependent on the population proportion. We can visualize this relationship by creating a plot of \(ME\) vs. \(p\).

The first step is to make a vector p that is a sequence from 0 to 1 with each number separated by 0.01. We can then create a vector of the margin of error (me) associated with each of these values of p using the familiar approximate formula (\(ME = 2 \times SE\)). Lastly, we plot the two vectors against each other to reveal their relationship.

n <- 1000
p <- seq(0, 1, 0.01)
me <- 2 * sqrt(p * (1 - p)/n)
plot(me ~ p, ylab = "Margin of Error", xlab = "Population Proportion")

  8. Describe the relationship between p and me.

The relationship between p and me resembles an inverted parabola, with a maximum at p = 0.5 and minimums at p = 0 and p = 1. In other words, the margin of error is largest for even-odds proportions, like a coin flip, and smallest for proportions near 0 or 1, where responses are nearly certain.
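
A quick numerical check, assuming the p and me vectors from the chunk above are still in memory:

# the margin of error peaks at p = 0.5
p[which.max(me)]
## [1] 0.5
# and the largest margin of error for n = 1000 is about 3.2 percentage points
max(me)
## [1] 0.03162278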

Success-failure condition

The textbook emphasizes that you must always check conditions before making inference. For inference on proportions, the sample proportion can be assumed to be nearly normal if it is based upon a random sample of independent observations and if both \(np \geq 10\) and \(n(1 - p) \geq 10\). This rule of thumb is easy enough to follow, but it makes one wonder: what’s so special about the number 10?

The short answer is: nothing. You could argue that we would be fine with 9 or that we really should be using 11. The “best” value for such a rule of thumb is, at least to some degree, arbitrary. However, once \(np\) and \(n(1-p)\) reach 10, the sampling distribution is sufficiently normal to use confidence intervals and hypothesis tests that are based on that approximation.

We can investigate the interplay between \(n\) and \(p\) and the shape of the sampling distribution by using simulations. To start off, we simulate the process of drawing 5000 samples of size 1040 from a population with a true atheist proportion of 0.1. For each of the 5000 samples we compute \(\hat{p}\) and then plot a histogram to visualize their distribution.

p <- 0.1
n <- 1040
p_hats <- rep(0, 5000)

for(i in 1:5000){
  samp <- sample(c("atheist", "non_atheist"), n, replace = TRUE, prob = c(p, 1-p))
  p_hats[i] <- sum(samp == "atheist")/n
}

hist(p_hats, main = "p = 0.1, n = 1040", xlim = c(0, 0.18))

These commands build up the sampling distribution of \(\hat{p}\) using the familiar for loop. You can read the sampling procedure for the first line of code inside the for loop as, “take a sample of size \(n\) with replacement from the choices of atheist and non-atheist with probabilities \(p\) and \(1 - p\), respectively.” The second line in the loop says, “calculate the proportion of atheists in this sample and record this value.” The loop allows us to repeat this process 5,000 times to build a good representation of the sampling distribution.
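
As an aside, the same sampling distribution can be built without an explicit loop, since each sample’s count of atheists is a binomial random variable; a minimal sketch (the p_hats_binom name is mine, not part of the lab):

# draw 5000 binomial counts of size n with success probability p, then convert to proportions
p_hats_binom <- rbinom(5000, size = n, prob = p) / n
hist(p_hats_binom, main = "p = 0.1, n = 1040 (rbinom)", xlim = c(0, 0.18))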

  9. Describe the sampling distribution of sample proportions at \(n = 1040\) and \(p = 0.1\). Be sure to note the center, spread, and shape.
    Hint: Remember that R has functions such as mean to calculate summary statistics.
summary(p_hats)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
## 0.07019 0.09327 0.09904 0.09969 0.10580 0.12980
IQR <- .10580 - .09327
IQR
## [1] 0.01253

Both the mean (0.0997) and the median (0.0990) sit just below the true atheist population proportion of 0.1, the IQR is about .0125, and the distribution appears unimodal, symmetric, and approximately normal.
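
The spread can also be checked against the theoretical standard error \(\sqrt{p(1-p)/n}\); the simulated value (which varies slightly from run to run) should land close to it:

# spread of the simulated sampling distribution
sd(p_hats)
# theoretical standard error with p = 0.1 and n = 1040
sqrt(0.1 * 0.9 / 1040)
## [1] 0.009302605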

  10. Repeat the above simulation three more times but with modified sample sizes and proportions: for \(n = 400\) and \(p = 0.1\), \(n = 1040\) and \(p = 0.02\), and \(n = 400\) and \(p = 0.02\). Plot all four histograms together by running the par(mfrow = c(2, 2)) command before creating the histograms. You may need to expand the plot window to accommodate the larger two-by-two plot. Describe the three new sampling distributions. Based on these limited plots, how does \(n\) appear to affect the distribution of \(\hat{p}\)? How does \(p\) affect the sampling distribution?
#create 2x2 space for the 4 histograms
par(mfrow = c(2, 2))

#1st simulation, already run from above
hist(p_hats, main = "p = 0.1, n = 1040", xlim = c(0, 0.18))


#2nd simulation - n = 400 and p = 0.1
p <- 0.1
n <- 400
p_hats_2 <- rep(0, 5000)

for(i in 1:5000){
  samp <- sample(c("atheist", "non_atheist"), n, replace = TRUE, prob = c(p, 1-p))
  p_hats_2[i] <- sum(samp == "atheist")/n
}

hist(p_hats_2, main = "p = 0.1, n = 400", xlim = c(0, 0.18))


#3rd simulation - n = 1040 and p = 0.02
p <- 0.02
n <- 1040
p_hats_3 <- rep(0, 5000)

for(i in 1:5000){
  samp <- sample(c("atheist", "non_atheist"), n, replace = TRUE, prob = c(p, 1-p))
  p_hats_3[i] <- sum(samp == "atheist")/n
}

hist(p_hats_3, main = "p = 0.02, n = 1040", xlim = c(0, 0.18))


#4th simulation - n = 400 and p = 0.02
p <- 0.02
n <- 400
p_hats_4 <- rep(0, 5000)

for(i in 1:5000){
  samp <- sample(c("atheist", "non_atheist"), n, replace = TRUE, prob = c(p, 1-p))
  p_hats_4[i] <- sum(samp == "atheist")/n
}

hist(p_hats_4, main = "p = 0.02, n = 400", xlim = c(0, 0.18))

The first three distributions appear roughly normal and are centered at their true proportions, but the \(p = 0.02\), \(n = 400\) distribution is noticeably right-skewed and coarse, reflecting its failed success-failure condition (\(np = 8\)). Holding \(p\) fixed, a larger sample size \(n\) tightens the distribution around the true proportion. Holding \(n\) fixed, the spread depends on \(p\): since the standard error \(\sqrt{p(1-p)/n}\) grows as \(p\) moves toward 0.5, the \(p = 0.1\) simulations show more spread than the \(p = 0.02\) simulations of the same size.
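
A quick numerical comparison of the spreads supports this (assuming all four p_hats vectors are still in the workspace; exact values vary by run):

# standard deviation of each simulated sampling distribution of p-hat
sapply(list(p_hats, p_hats_2, p_hats_3, p_hats_4), sd)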

Once you’re done, you can reset the layout of the plotting window by running the par(mfrow = c(1, 1)) command or by clicking on “Clear All” above the plotting window (if using RStudio). Note that the latter will get rid of all your previous plots.

  11. If you refer to Table 6, you’ll find that Australia has a sample proportion of 0.1 on a sample size of 1040, and that Ecuador has a sample proportion of 0.02 on 400 subjects. Let’s suppose for this exercise that these point estimates are actually the truth. Then given the shape of their respective sampling distributions, do you think it is sensible to proceed with inference and report margins of error, as the report does?
#Ecuador
.02 * 400
## [1] 8
#Australia
.1 * 1040
## [1] 104

Australia meets the conditions for inference, but Ecuador fails the success-failure condition (only 8 expected successes). It is sensible to report a margin of error for Australia, but for Ecuador a larger sample would be needed before reporting one.
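
For completeness, both parts of the success-failure condition can be checked at once; a small helper function (sf_check is my own name, not from the lab):

# expected numbers of successes and failures; both should be at least 10
sf_check <- function(n, p) c(successes = n * p, failures = n * (1 - p))
sf_check(1040, 0.1)   # Australia: 104 and 936, both fine
sf_check(400, 0.02)   # Ecuador: 8 and 392, the success count falls short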


On your own

The question of atheism was asked by WIN-Gallup International in a similar survey that was conducted in 2005. (We assume here that sample sizes have remained the same.) Table 4 on page 13 of the report summarizes survey results from 2005 and 2012 for 39 countries.

**a.** Is there convincing evidence that Spain has seen a change in its atheism index between 2005 and 2012?

spain05 <- subset(atheism, nationality == "Spain" & year == "2005")
inference(spain05$response, est = "proportion", type = "ci", method = "theoretical", 
          success = "atheist")
## Single proportion -- success: atheist 
## Summary statistics:

## p_hat = 0.1003 ;  n = 1146 
## Check conditions: number of successes = 115 ; number of failures = 1031 
## Standard error = 0.0089 
## 95 % Confidence interval = ( 0.083 , 0.1177 )

2005 Spain proportion: .1003; CI: .083 - .1177

spain12 <- subset(atheism, nationality == "Spain" & year == "2012")
inference(spain12$response, est = "proportion", type = "ci", method = "theoretical", 
          success = "atheist")
## Single proportion -- success: atheist 
## Summary statistics:

## p_hat = 0.09 ;  n = 1145 
## Check conditions: number of successes = 103 ; number of failures = 1042 
## Standard error = 0.0085 
## 95 % Confidence interval = ( 0.0734 , 0.1065 )

2012 Spain proportion: .09; CI: .0734 - .1065. We see a slight decline in the atheist sample proportion from 2005 to 2012, but the two confidence intervals overlap substantially, so we cannot conclude that a change has occurred.
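
To put a formal test behind that comparison, one option (outside the lab’s inference function) is base R’s two-sample test of equal proportions, using the success counts reported above; a sketch:

# 115 of 1146 atheist responses in 2005 vs. 103 of 1145 in 2012 (counts from the output above)
prop.test(x = c(115, 103), n = c(1146, 1145))

With a difference this small relative to its standard error, the resulting p-value lands comfortably above 0.05, in line with the overlapping intervals.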

**b.** Is there convincing evidence that the United States has seen a
change in its atheism index between 2005 and 2012?

Let’s create the 2005 US subset and run the inference function on it.

us05 <- subset(atheism, nationality == "United States" & year == "2005")
inference(us05$response, est = "proportion", type = "ci", method = "theoretical", 
          success = "atheist")
## Single proportion -- success: atheist 
## Summary statistics:

## p_hat = 0.01 ;  n = 1002 
## Check conditions: number of successes = 10 ; number of failures = 992 
## Standard error = 0.0031 
## 95 % Confidence interval = ( 0.0038 , 0.0161 )

2005 US proportion: .01; CI: .0038 - .0161

inference(us12$response, est = "proportion", type = "ci", method = "theoretical", 
          success = "atheist")
## Single proportion -- success: atheist 
## Summary statistics:

## p_hat = 0.0499 ;  n = 1002 
## Check conditions: number of successes = 50 ; number of failures = 952 
## Standard error = 0.0069 
## 95 % Confidence interval = ( 0.0364 , 0.0634 )

2012 US proportion: .0499; CI: .0364 - .0634. This is convincing evidence of an increase in the proportion of atheists from 2005 to 2012: the lower end of the 2012 95% CI exceeds the upper end of the 2005 CI.
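
The same conclusion falls out of a rough two-proportion z statistic built from the standard errors reported above (a back-of-the-envelope check, not the lab’s prescribed method; the variable names are mine):

# difference in the 2005 and 2012 sample proportions, using the SEs reported above
diff_p <- 0.0499 - 0.01
se_diff <- sqrt(0.0069^2 + 0.0031^2)   # unpooled SE of the difference
z_stat <- diff_p / se_diff             # comes out a bit above 5
2 * pnorm(-abs(z_stat))                # two-sided p-value, far below 0.05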

**c.** If in fact there has been no change in any country's atheism index, in how many of the 39 countries in Table 4 would we expect to detect a change simply by chance at the .05 significance level?

Detecting a change where none exists would be a Type I error: rejecting a true null hypothesis. At the .05 significance level, the true population proportion lies outside the 95% confidence interval about 1 time in 20.

39 * .05
## [1] 1.95

So we would expect to detect a spurious change in about 2 of the 39 countries in Table 4 (1.95 on average).

**d.** Solve for the sample size needed to achieve a margin of error of at most 1% at the 95% confidence level when p is unknown. We assume p = .5, which gives the worst-case (largest) standard error.

# ME = z * sqrt(p(1-p)/n)
Z_x = qnorm(.975)
# worst-case scenario for the standard error: p = .5
p_x = .5
p_x_num = p_x * (1 - p_x)
# target margin of error of at most .01
me_x = .01

# solve ME = z * sqrt(p(1-p)/n) for n:  n = z^2 * p(1-p) / ME^2
((Z_x)^2 * (p_x_num))/me_x^2
## [1] 9603.647

Round up to 9604 to get the sample size needed for the guidelines.
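
Plugging n = 9604 back into the margin-of-error formula confirms that the guideline is just met:

# margin of error at the worst-case p = .5 with n = 9604
qnorm(.975) * sqrt(.5 * .5 / 9604)
## [1] 0.009999816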

This is a product of OpenIntro that is released under a Creative Commons Attribution-ShareAlike 3.0 Unported. This lab was written for OpenIntro by Andrew Bray and Mine Çetinkaya-Rundel.