If you have access to data on an entire population, say the opinion of every adult in the United States on whether or not they think climate change is affecting their local community, it’s straightforward to answer questions like, “What percent of US adults think climate change is affecting their local community?” Similarly, if you had demographic information on the population, you could examine how, if at all, this opinion varies between young and old adults and among adults with different political leanings. If you have access to only a sample of the population, as is often the case, the task becomes more complicated. What is your best guess for this proportion if you only have data from a small sample of adults? This type of situation requires that you use your sample to make inferences about what your population looks like.
Setting a seed: You will take random samples and build sampling distributions in this lab, which means you should set a seed at the top of your lab. If this concept is new to you, review the lab on probability.
In this lab, we will explore and visualize the data using the tidyverse suite of packages, and perform statistical inference using infer.
Let’s load the packages.
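```r
library(tidyverse)
library(infer)
```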
A 2019 Pew Research report states the following:

> Roughly six-in-ten U.S. adults (62%) say climate change is currently affecting their local community either a great deal or some, according to a new Pew Research Center survey.

Source: Most Americans say climate change impacts their community, but effects vary by region

In this lab, you will assume this 62% is a true population proportion and learn about how sample proportions can vary from sample to sample by taking smaller samples from the population. We will first create our population. To keep our computation simple, we will assume a total population size of 100,000 (even though that’s smaller than the population size of all US adults). This means 62,000 (62%) of the adult population think climate change impacts their community, and the remaining 38,000 do not.
The name of the data frame is `us_adults` and the name of the variable that contains responses to the question “Do you think climate change is affecting your local community?” is `climate_change_affects`.
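One way this population could have been constructed, sketched here with `tibble()` from the tidyverse (the construction code is not shown in this lab, so treat this as an assumption):

```r
# Build the assumed population: 62,000 "Yes" and 38,000 "No" responses
us_adults <- tibble(
  climate_change_affects = c(rep("Yes", 62000), rep("No", 38000))
)
```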
We can quickly visualize the distribution of these responses using a bar plot.
ggplot(us_adults, aes(x = climate_change_affects)) +
  geom_bar() +
  labs(
    x = "", y = "",
    title = "Do you think climate change is affecting your local community?"
  ) +
  coord_flip()
We can also obtain summary statistics to confirm we constructed the data frame correctly.
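One way to compute them, sketched with `count()` and `mutate()` from dplyr:

```r
# Count responses and convert counts to proportions
us_adults %>%
  count(climate_change_affects) %>%
  mutate(p = n / sum(n))
```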
## # A tibble: 2 × 3
##   climate_change_affects      n     p
##   <chr>                   <int> <dbl>
## 1 No                      38000  0.38
## 2 Yes                     62000  0.62
In this lab, you’ll start with a simple random sample of size 60 from the population.
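One way such a sample might be drawn, assuming `slice_sample()` from dplyr (the seed value here is an arbitrary choice; your sample, and hence your sample proportion, will vary):

```r
set.seed(1234)          # arbitrary seed for reproducibility
samp <- us_adults %>%
  slice_sample(n = 60)  # simple random sample of 60 adults
```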
# Calculate the proportion of "Yes" responses in the sample
percent_affects <- samp %>%
  count(climate_change_affects) %>%
  mutate(p = n / sum(n) * 100) %>%
  filter(climate_change_affects == "Yes")

percent_affects

## # A tibble: 1 × 3
##   climate_change_affects     n     p
##   <chr>                  <int> <dbl>
## 1 Yes                       39    65
Another student’s sample proportion would not be expected to match mine exactly, due to random sampling variability. Even though both samples come from the same 100,000 adults, different individuals may be chosen, leading to different sample proportions. However, samples of similar size from the same population should yield somewhat similar proportions, since larger samples tend to approximate the true population proportion (62%); small fluctuations will still occur due to randomness.
Return for a moment to the question that first motivated this lab: based on this sample, what can you infer about the population? With just one sample, the best estimate of the proportion of US adults who think climate change affects their local community would be the sample proportion, usually denoted as \(\hat{p}\) (here we are calling it `p_hat`). That serves as a good point estimate, but it would be useful to also communicate how uncertain you are of that estimate. This uncertainty can be quantified using a confidence interval.
One way of calculating a confidence interval for a population proportion is based on the Central Limit Theorem: \(\hat{p} \pm z^\star SE_{\hat{p}}\), or more precisely, \[ \hat{p} \pm z^\star \sqrt{ \frac{\hat{p} (1-\hat{p})}{n} } \]
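For concreteness, here is a minimal sketch of this formula applied to our sample (39 “Yes” responses out of 60), using `qnorm()` for the critical value:

```r
# CLT-based 95% confidence interval for the sample proportion
p_hat  <- 39 / 60                        # sample proportion, 0.65
n      <- 60                             # sample size
z_star <- qnorm(0.975)                   # z* for a 95% confidence level
se     <- sqrt(p_hat * (1 - p_hat) / n)  # standard error of p_hat
c(lower = p_hat - z_star * se, upper = p_hat + z_star * se)
```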
Another way is using simulation, or to be more specific, bootstrapping. The term bootstrapping comes from the phrase “pulling oneself up by one’s bootstraps”, a metaphor for accomplishing an impossible task without any outside help. In this case the impossible task is estimating a population parameter (the unknown population proportion), and we’ll accomplish it using data from only the given sample. Note that this notion of saying something about a population parameter using only information from an observed sample is the crux of statistical inference; it is not limited to bootstrapping.
In essence, bootstrapping assumes that there are more observations in the population like the ones in the observed sample, so we “reconstruct” the population by resampling from our sample, with replacement. The bootstrapping scheme is as follows:

1. Take a bootstrap sample: a random sample taken with replacement from the original sample, of the same size as the original sample.
2. Calculate the bootstrap statistic: a statistic such as a mean, median, or proportion computed on each of the bootstrap samples.
3. Repeat steps (1) and (2) many times to create a bootstrap distribution: a distribution of bootstrap statistics.
4. Calculate the bounds of the XX% confidence interval as the middle XX% of the bootstrap distribution.

A minimal sketch of these steps is shown below.
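For intuition only, here is a base-R sketch of these steps applied to `samp`; this is an illustration, not how we will compute intervals in practice:

```r
set.seed(1234)  # arbitrary seed for reproducibility
# Steps 1-3: resample with replacement, compute the proportion, repeat
boot_props <- replicate(1000, {
  resample <- sample(samp$climate_change_affects,
                     size = nrow(samp), replace = TRUE)
  mean(resample == "Yes")  # bootstrap statistic: proportion of "Yes"
})
# Step 4: the middle 95% of the bootstrap distribution gives the bounds
quantile(boot_props, probs = c(0.025, 0.975))
```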
Instead of coding up each of these steps, we will construct confidence intervals using the infer package.
Below is an overview of the functions we will use to construct this confidence interval:
Function | Purpose
---|---
`specify` | Identify your variable of interest
`generate` | The number of samples you want to generate
`calculate` | The sample statistic you want to do inference with, or you can also think of this as the population parameter you want to do inference for
`get_ci` | Find the confidence interval
This code will find the 95 percent confidence interval for the proportion of US adults who think climate change affects their local community.
samp %>%
  specify(response = climate_change_affects, success = "Yes") %>%
  generate(reps = 1000, type = "bootstrap") %>%
  calculate(stat = "prop") %>%
  get_ci(level = 0.95)

## # A tibble: 1 × 2
##   lower_ci upper_ci
##      <dbl>    <dbl>
## 1    0.517    0.767
- In `specify` we specify the response variable and the level of that variable we are calling a success.
- In `generate` we provide the number of resamples we want from the population in the `reps` argument (this should be a reasonably large number) as well as the type of resampling we want to do, which is `"bootstrap"` in the case of constructing a confidence interval.
- Then, we `calculate` the sample statistic of interest for each of these resamples, which is proportion (`stat = "prop"`).

Feel free to test out the rest of the arguments for these functions, since these commands will be used together to calculate confidence intervals and solve inference problems for the rest of the semester. But we will also walk you through more examples in future chapters.
To recap: even though we don’t know what the full population looks like, we’re 95% confident that the true proportion of US adults who think climate change affects their local community is between the two bounds reported as the result of this pipeline.
The phrase “95% confident” means that if numerous random samples were taken and a confidence interval created for each, around 95% of those intervals would contain the true population proportion. This confidence level describes the reliability of the method, not the accuracy of the specific interval computed from our sample. Essentially, over many repeated samples, about 95% of the intervals should capture the true population parameter.
In this case, you have the rare luxury of knowing the true population proportion (62%) since you have data on the entire population.
Knowing the true population proportion is 62%, we can check whether the confidence interval from our sample includes this value. If it does, the interval has successfully captured the true proportion. When my classmates calculate their own intervals, some will capture 62% while others will not, due to natural variability in random sampling. Even though each interval is constructed at 95% confidence, roughly 5% of them should miss the true population proportion. So, some neighbors’ intervals may include 62%, while others may not.
With 95% confidence intervals, we’d expect about 95% of the intervals generated by students to capture the true population proportion. This is because a 95% confidence level means that if the sampling and interval calculations were repeated many times, roughly 95% of the intervals would include the true population parameter, while about 5% would miss it due to random sampling variation. Essentially, 95 out of 100 intervals should capture the true proportion, so most students’ intervals will include the true population proportion.
In the next part of the lab, you will collect many samples to learn more about how sample proportions and confidence intervals constructed based on those samples vary from one sample to another.
Doing this would require learning programming concepts like iteration so that you can automate running the code you’ve developed so far many times to obtain many (50) confidence intervals. To keep the programming simpler, we are providing the interactive app below, which essentially does this for you and creates a plot similar to Figure 5.6 in OpenIntro Statistics, 4th Edition (page 182).
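For the curious, a hedged sketch of the iteration the app performs, using `map_dfr()` from purrr (loaded with the tidyverse); the column name `sample_id` and the overall structure are illustrative choices, not the app’s actual code:

```r
# Draw 50 samples of size 60 and compute a 95% bootstrap interval for each
many_cis <- map_dfr(1:50, function(i) {
  us_adults %>%
    slice_sample(n = 60) %>%
    specify(response = climate_change_affects, success = "Yes") %>%
    generate(reps = 1000, type = "bootstrap") %>%
    calculate(stat = "prop") %>%
    get_ci(level = 0.95) %>%
    mutate(sample_id = i)
})

# Proportion of intervals that capture the true proportion (0.62)
many_cis %>%
  summarize(capture_rate = mean(lower_ci <= 0.62 & upper_ci >= 0.62))
```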
The proportion of confidence intervals that capture the true population proportion should be close to the confidence level (95%), but not exactly equal to it, due to random sampling variability. A 95% confidence level means that, over many samples and intervals, about 95% should include the true population proportion, with around 5% missing it. In any specific experiment, this proportion may vary slightly due to the randomness of the samples, converging to 95% over many repetitions. When the app runs, it shows 50 confidence intervals, indicating which ones capture the true proportion (62%) through color coding and a vertical line at the true value, illustrating sampling variability and the meaning of confidence intervals.
If a 90% confidence level is assumed, the confidence interval becomes narrower, accepting a higher risk of not capturing the true population proportion. Higher confidence levels result in wider intervals, and lower levels lead to narrower intervals. This trade-off balances precision and certainty: narrower intervals provide more precise estimates with less confidence, while wider intervals offer more confidence but less precision.
Using code from the infer package and data from the one sample you have (`samp`), find a confidence interval for the proportion of US adults who think climate change is affecting their local community with a confidence level of your choosing (other than 95%) and interpret it.

Choosing an 80% confidence level results in a narrower confidence interval compared to 95%, indicating 80% certainty that the true proportion of US adults who think climate change affects their community lies within the interval, but there’s a 20% chance it doesn’t. Lower confidence levels mean narrower intervals with less certainty, while higher levels like 99% produce wider intervals for more certainty.
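The 80% interval referenced above can be computed with the same pipeline as before, changing only the `level` argument:

```r
# 80% bootstrap confidence interval from the same sample
samp %>%
  specify(response = climate_change_affects, success = "Yes") %>%
  generate(reps = 1000, type = "bootstrap") %>%
  calculate(stat = "prop") %>%
  get_ci(level = 0.80)
```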
The plot displays 50 confidence intervals at an 80% confidence level using 1000 bootstraps. Blue intervals indicate capturing the true population proportion, while red ones do not. If around 80% of intervals (40 out of 50) are blue, it confirms the method’s reliability.
Choose a different confidence level, find a confidence interval using data from `samp`, and interpret it. Finally, use the app to generate many intervals and calculate the proportion of intervals that capture the true population proportion.

I used 87% because it falls between 80% and 95%. The plot of confidence intervals shows how often these intervals capture the true population proportion: with a confidence level of 87%, about 87% of the intervals should include the true proportion. Each interval is based on 60 observations and generated using 1,000 bootstrap samples to estimate variability. Out of 100 intervals plotted, blue lines indicate those that capture the true proportion, while red lines do not, illustrating the reliability of confidence intervals in statistical analysis.
Changing the sample size shows that larger samples produce narrower, more precise confidence intervals, increasing certainty about the estimate. Conversely, smaller sample sizes result in wider intervals and greater uncertainty. This aligns with the statistical principle that larger samples provide more reliable estimates, while smaller samples lead to higher variability and less precise estimates.
Increasing the number of bootstrap samples while keeping the sample size fixed doesn’t significantly change the width of the confidence intervals. More bootstrap samples simply provide a better approximation of the sampling distribution of the statistic; they do not reduce the inherent variability in the original sample. The standard error, and hence the interval width, is determined primarily by the sample size, not by the number of bootstrap replications. While more bootstrap samples produce a smoother bootstrap distribution, it is the sample size and confidence level that determine the interval width.
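One quick way to check this claim empirically, reusing the pipeline from earlier (the helper function `ci_width()` is an illustrative convenience, not part of infer):

```r
# Compare interval widths at 1,000 vs. 10,000 bootstrap resamples;
# they should be very close since the sample size (60) is unchanged
ci_width <- function(reps) {
  ci <- samp %>%
    specify(response = climate_change_affects, success = "Yes") %>%
    generate(reps = reps, type = "bootstrap") %>%
    calculate(stat = "prop") %>%
    get_ci(level = 0.95)
  ci$upper_ci - ci$lower_ci
}

ci_width(1000)
ci_width(10000)
```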