If you have access to data on an entire population, say the opinion of every adult in the United States on whether or not they think climate change is affecting their local community, it’s straightforward to answer questions like, “What percent of US adults think climate change is affecting their local community?”. Similarly, if you had demographic information on the population, you could examine how, if at all, this opinion varies between young and old adults or among adults with different leanings. If you have access to only a sample of the population, as is often the case, the task becomes more complicated. What is your best guess for this proportion if you only have data from a small sample of adults? This type of situation requires that you use your sample to make inferences about what your population looks like.
Setting a seed: You will take random samples and build sampling distributions in this lab, which means you should set a seed at the top of your lab. If this concept is new to you, review the lab on probability.
In this lab, we will explore and visualize the data using the tidyverse suite of packages, and perform statistical inference using infer.
Let’s load the packages.
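A minimal setup chunk along these lines works (the seed value is arbitrary; 1234 matches the simulation chunks later in this lab):
library(tidyverse)
library(infer)
set.seed(1234)  # any fixed value works; it just makes results reproducible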
A 2019 Pew Research report states the following:
Roughly six-in-ten U.S. adults (62%) say climate change is currently affecting their local community either a great deal or some, according to a new Pew Research Center survey.
Source: Most Americans say climate change impacts their community, but effects vary by region
To keep our computation simple, we will assume a total population size of 100,000 (even though that’s smaller than the population size of all US adults).
In this lab, you will assume this 62% is a true population proportion and learn about how sample proportions can vary from sample to sample by taking smaller samples from the population. We will first create our population assuming a population size of 100,000. This means 62,000 (62%) of the adult population think climate change impacts their community, and the remaining 38,000 do not.
The name of the data frame is us_adults and the name of
the variable that contains responses to the question “Do you think
climate change is affecting your local community?” is
climate_change_affects.
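The construction code is not shown here; a sketch consistent with the counts described above:
# Build the population: 62,000 "Yes" responses and 38,000 "No" responses
us_adults <- tibble(
  climate_change_affects = c(rep("Yes", 62000), rep("No", 38000))
)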
We can quickly visualize the distribution of these responses using a bar plot.
ggplot(us_adults, aes(x = climate_change_affects)) +
geom_bar() +
labs(
x = "", y = "",
title = "Do you think climate change is affecting your local community?"
) +
coord_flip()
We can also obtain summary statistics to confirm we constructed the data frame correctly.
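A pipeline along these lines produces the summary shown below (a sketch; the column names match the printed output):
us_adults %>%
  count(climate_change_affects) %>%  # counts per response
  mutate(p = n / sum(n))             # proportion of each response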
## # A tibble: 2 × 3
## climate_change_affects n p
## <chr> <int> <dbl>
## 1 No 38000 0.38
## 2 Yes 62000 0.62
In this lab, you’ll start with a simple random sample of size 60 from the population.
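The sampling code is not shown; a sketch that matches the summary printed below:
samp <- us_adults %>%
  sample_n(size = 60)                # simple random sample of 60 adults
samp %>%
  count(climate_change_affects) %>%
  mutate(p_hat = n / sum(n))         # sample proportion for each response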
## # A tibble: 2 × 3
## climate_change_affects n p_hat
## <chr> <int> <dbl>
## 1 No 23 0.383
## 2 Yes 37 0.617
**In my sample of 60 adults, about `r round(100 * (samp %>% count(climate_change_affects) %>% mutate(p_hat = n / sum(n)) %>% filter(climate_change_affects == "Yes") %>% pull(p_hat)), 1)`% think climate change affects their community. This is close to the true population value of 62%, though it may vary slightly because of sampling randomness.**
samp_summary <- samp |> count(climate_change_affects, name = "n") |> mutate(p_hat = n / sum(n))
p_hat_yes <- samp_summary |> filter(climate_change_affects == "Yes") |> pull(p_hat)
n_samp <- nrow(samp)  # sample size; `n` alone is not defined outside the pipeline
se_hat <- sqrt(p_hat_yes * (1 - p_hat_yes) / n_samp)
tibble(p_hat_yes = p_hat_yes, se_hat = se_hat)
## # A tibble: 1 × 2
## p_hat_yes se_hat
## <dbl> <dbl>
## 1 0.617 0.0628
They’re drawing a different simple random sample of 60 people, so their \(\hat{p}\) will vary due to sampling variability. I’d expect it to be similar (because both samples come from the same population with \(p = 0.62\)), but not identical. The typical size of the difference is on the order of the standard error, \(SE = \sqrt{\hat{p}(1 - \hat{p})/n}\).
Return for a moment to the question that first motivated this lab:
based on this sample, what can you infer about the population? With just
one sample, the best estimate of the proportion of US adults who think
climate change affects their local community would be the sample
proportion, usually denoted as \(\hat{p}\) (here we are calling it
p_hat). That serves as a good point
estimate, but it would be useful to also communicate how
uncertain you are of that estimate. This uncertainty can be quantified
using a confidence interval.
One way of calculating a confidence interval for a population proportion is based on the Central Limit Theorem, as \(\hat{p} \pm z^\star SE_{\hat{p}}\), or more precisely, as \[ \hat{p} \pm z^\star \sqrt{ \frac{\hat{p} (1-\hat{p})}{n} } \]
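As a quick illustration (a sketch reusing p_hat_yes and se_hat from the earlier chunk; not part of the original lab code):
z_star <- qnorm(0.975)                  # critical value for 95% confidence
c(lower = p_hat_yes - z_star * se_hat,  # CLT-based interval bounds
  upper = p_hat_yes + z_star * se_hat)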
Another way is using simulation, or to be more specific, bootstrapping. The term bootstrapping comes from the phrase “pulling oneself up by one’s bootstraps”, a metaphor for accomplishing an impossible task without any outside help. In this case the impossible task is estimating a population parameter (the unknown population proportion), and we’ll accomplish it using data from only the given sample. Note that this notion of saying something about a population parameter using only information from an observed sample is the crux of statistical inference; it is not limited to bootstrapping.
In essence, bootstrapping assumes that the population contains more observations like the ones in the observed sample. So we “reconstruct” the population by resampling from our sample, with replacement. The bootstrapping scheme is as follows:
- Take a bootstrap sample: a random sample taken with replacement from the original sample, of the same size as the original sample.
- Calculate the bootstrap statistic: the statistic of interest (here, a proportion) computed on the bootstrap sample.
- Repeat the first two steps many times to create a bootstrap distribution, a distribution of bootstrap statistics.
- Calculate the bounds of the XX% confidence interval as the middle XX% of the bootstrap distribution.
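For intuition, the scheme can be coded by hand roughly like this (a base-R sketch assuming samp from above; the lab itself uses infer, as described next):
# Resample with replacement and recompute the proportion, 1000 times
boot_props <- replicate(1000, {
  resample <- sample(samp$climate_change_affects, size = nrow(samp), replace = TRUE)
  mean(resample == "Yes")
})
# Middle 95% of the bootstrap distribution
quantile(boot_props, probs = c(0.025, 0.975))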
Instead of coding up each of these steps, we will construct confidence intervals using the infer package.
Below is an overview of the functions we will use to construct this confidence interval:
| Function | Purpose |
|---|---|
| `specify` | Identify your variable of interest |
| `generate` | The number of samples you want to generate |
| `calculate` | The sample statistic you want to do inference with, or you can also think of this as the population parameter you want to do inference for |
| `get_ci` | Find the confidence interval |
This code will find the 95 percent confidence interval for the proportion of US adults who think climate change affects their local community.
ci_95 <- samp %>%
specify(response = climate_change_affects, success = "Yes") %>%
generate(reps = 1000, type = "bootstrap") %>%
calculate(stat = "prop") %>%
get_ci(level = 0.95)
ci_95
## # A tibble: 1 × 2
## lower_ci upper_ci
## <dbl> <dbl>
## 1 0.483 0.75
Interpretation: I’m 95% confident the true proportion of US adults who say “Yes” lies between the lower and upper bounds printed above. “95% confident” means that if I repeated this entire procedure many times (new sample → bootstrap → CI), about 95% of those intervals would contain the true p.
- In `specify` we specify the response variable and the level of that variable we are calling a success.
- In `generate` we provide the number of resamples we want from the population in the `reps` argument (this should be a reasonably large number), as well as the type of resampling we want to do, which is `"bootstrap"` in the case of constructing a confidence interval.
- In `calculate` we calculate the sample statistic of interest for each of these resamples, which is the proportion.
Feel free to test out the rest of the arguments for these functions, since these commands will be used together to calculate confidence intervals and solve inference problems for the rest of the semester. We will also walk you through more examples in future chapters.
To recap: even though we don’t know what the full population looks like, we’re 95% confident that the true proportion of US adults who think climate change affects their local community is between the two bounds reported as a result of this pipeline.
It does not mean there’s a 95% chance this particular interval contains p. It means that if I repeat the process (sample, bootstrap, CI) many times, then about 95% of those intervals would cover the true population proportion.
In this case, you have the rare luxury of knowing the true population proportion (62%) since you have data on the entire population.
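The check that produced the output below is not shown; one possibility, assuming the ci_95 object from the earlier chunk:
# Is the true proportion inside the 95% bootstrap CI?
between(0.62, ci_95$lower_ci, ci_95$upper_ci)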
## [1] TRUE
My 95% confidence interval does (or does not) include the
true population proportion of 62%.
Because each random sample produces slightly different results, about
95% of all 95% confidence intervals would capture the true
value.
**Each student’s confidence interval will be slightly different because everyone’s random sample is different. Since we used a 95% confidence level, I would expect about 95% of all students’ intervals to capture the true population proportion of 62%. This means that if we all repeated the process many times, roughly 95 out of every 100 intervals would contain the true proportion, while the remaining 5 out of 100 would miss it purely due to random sampling variation.**
In the next part of the lab, you will collect many samples to learn more about how sample proportions and confidence intervals constructed based on those samples vary from one sample to another.
Doing this would require learning programming concepts like iteration so that you can automate running the code you’ve developed so far many times to obtain many (50) confidence intervals. To keep the programming simpler, we are providing the simulation code below, which does this for you and creates a plot similar to Figure 5.6 in OpenIntro Statistics, 4th Edition (page 182).
set.seed(1234)
true_p <- 0.62
n_ci <- 50
n_samp <- 60
n_boot <- 1000
level <- 0.95
# Build 50 bootstrap CIs
df_ci <- purrr::map_dfr(1:n_ci, function(i) {
ci <- us_adults %>%
sample_n(size = n_samp) %>%
specify(response = climate_change_affects, success = "Yes") %>%
generate(reps = n_boot, type = "bootstrap") %>%
calculate(stat = "prop") %>%
get_ci(level = level)
tibble(lower = ci$lower_ci, upper = ci$upper_ci, id = i)
}) %>%
mutate(capture = ifelse(lower < true_p & upper > true_p, "Yes", "No"))
# Proportion that capture the true p
prop_capture <- mean(df_ci$capture == "Yes")
prop_capture
## [1] 0.94
ggplot(df_ci, aes(y = id)) +
geom_segment(aes(x = lower, xend = upper, yend = id, color = capture)) +
geom_point(aes(x = lower, color = capture), size = 1.1) +
geom_point(aes(x = upper, color = capture), size = 1.1) +
geom_vline(xintercept = true_p, linetype = "dashed") +
labs(
title = "Fifty 95% bootstrap CIs for p (Yes)",
subtitle = paste0("n = ", n_samp, ", boot reps = ", n_boot, ", true p = ", true_p),
x = "Confidence interval bounds", y = "",
color = "Captures true p?"
) +
theme_minimal()
**With n = 60, 1000 bootstrap samples per interval, and 50 intervals total, the observed coverage in my run is `r round(prop_capture, 2)` (i.e., about `r round(prop_capture * 100)`%). This is close to but not exactly the 95% confidence level. It differs slightly because we only drew a finite number (50) of intervals and each interval is based on a random sample; due to randomness, the empirical coverage will vary around 95%. If we increased the number of intervals (e.g., 500 or 1000), the observed proportion would tend to get closer to the nominal 95% level.**
If I choose a higher confidence level (e.g., 99% instead of 95%), the interval will be wider. That’s because the critical value \(z^\star\) for a proportion is larger, so I need a bigger margin of error to be confident I’m covering the true \(p\) more often. If I choose a lower confidence level (e.g., 90%), the interval will be narrower. The \(z^\star\) is smaller, so the margin of error shrinks, and I’m accepting a higher chance that the interval misses the true \(p\).
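A small sketch of this relationship (reusing se_hat from earlier; not part of the original lab code):
tibble(level = c(0.90, 0.95, 0.99)) %>%
  mutate(
    z_star = qnorm(1 - (1 - level) / 2),  # two-sided critical value
    margin = z_star * se_hat              # margin of error at each level
  )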
Using data from the one sample you have (samp), find a confidence interval for the proportion of US adults who think climate change is affecting their local community with a confidence level of your choosing (other than 95%) and interpret it.
# ---- ci-custom ----
# Pick a different confidence level
my_level <- 0.90  # try other levels too, e.g., 0.80 or 0.99
ci_custom <- samp %>%
specify(response = climate_change_affects, success = "Yes") %>%
generate(reps = 2000, type = "bootstrap") %>%
calculate(stat = "prop") %>%
get_ci(level = my_level)
ci_custom
## # A tibble: 1 × 2
## lower_ci upper_ci
## <dbl> <dbl>
## 1 0.517 0.717
**Using a 90% confidence level, my interval is narrower than the 95% interval because I’m allowing a slightly higher chance of missing the true proportion. Based on my bootstrap results, I’m 90% confident that the true proportion of U.S. adults who think climate change affects their local community lies between the two bounds printed above. If I increased the confidence level (for example to 99%), the interval would become wider, reflecting greater certainty but a larger margin of error.**
# ---- ci-50-custom ----
set.seed(1234)
true_p <- 0.62
n_ci <- 50
n_samp <- 60
n_boot <- 1000
level <- 0.90 # same as your chosen confidence level above
# Build 50 bootstrap confidence intervals
df_ci_custom <- purrr::map_dfr(1:n_ci, function(i) {
ci <- us_adults %>%
sample_n(size = n_samp) %>%
specify(response = climate_change_affects, success = "Yes") %>%
generate(reps = n_boot, type = "bootstrap") %>%
calculate(stat = "prop") %>%
get_ci(level = level)
tibble(lower = ci$lower_ci, upper = ci$upper_ci, id = i)
}) %>%
mutate(capture = ifelse(lower < true_p & upper > true_p, "Yes", "No"))
# Proportion that captured the true p
prop_capture_custom <- mean(df_ci_custom$capture == "Yes")
prop_capture_custom
## [1] 0.9
ggplot(df_ci_custom, aes(y = id)) +
geom_segment(aes(x = lower, xend = upper, yend = id, color = capture)) +
geom_point(aes(x = lower, color = capture), size = 1.1) +
geom_point(aes(x = upper, color = capture), size = 1.1) +
geom_vline(xintercept = true_p, linetype = "dashed") +
labs(
title = paste0("Fifty ", level*100, "% Bootstrap Confidence Intervals for p (Yes)"),
subtitle = paste0("n = ", n_samp, ", bootstrap reps = ", n_boot, ", true p = ", true_p),
x = "Confidence interval bounds",
y = "",
color = "Captures true p?"
) +
theme_minimal()
**At the 90% confidence level, the proportion of intervals that captured the true population proportion (0.62) in my simulation was about 0.9, or roughly 90%. This is close to, but not exactly, the chosen 90% confidence level, which makes sense since we only generated 50 intervals. With more intervals (e.g., 500 or 1000), the observed coverage would get closer to the nominal 90%. Also, compared to 95% intervals, these 90% intervals are narrower, since we’re accepting a slightly higher risk of missing the true value.**
# ---- ci-compare ----
# For comparison, build fifty 95% intervals using the same settings
level_95 <- 0.95
df_ci_95 <- purrr::map_dfr(1:50, function(i) {
ci <- us_adults %>%
sample_n(size = n_samp) %>%
specify(response = climate_change_affects, success = "Yes") %>%
generate(reps = n_boot, type = "bootstrap") %>%
calculate(stat = "prop") %>%
get_ci(level = level_95)
tibble(lower = ci$lower_ci, upper = ci$upper_ci, id = i, level = "95% CI")
})
df_ci_90 <- df_ci_custom %>% mutate(level = "90% CI")
# Combine them for plotting
df_compare <- bind_rows(df_ci_90, df_ci_95) %>%
mutate(capture = ifelse(lower < true_p & upper > true_p, "Yes", "No"))
ggplot(df_compare, aes(y = id, color = capture)) +
geom_segment(aes(x = lower, xend = upper, yend = id)) +
geom_point(aes(x = lower), size = 1.1) +
geom_point(aes(x = upper), size = 1.1) +
geom_vline(xintercept = true_p, linetype = "dashed", color = "gray30") +
facet_wrap(~level) +
labs(
title = "Comparison of 90% vs 95% Confidence Intervals (50 samples each)",
subtitle = paste0("True p = ", true_p, ", sample size = ", n_samp),
x = "Confidence interval bounds",
y = "",
color = "Captures true p?"
) +
theme_minimal()
**This side-by-side comparison shows how the 90% intervals (left panel) are narrower than the 95% intervals (right panel). Because the 90% intervals use a smaller critical value, they cover less of the sampling distribution, which is why you’ll notice a few more intervals missing the true value (0.62) on the left. Overall, this plot makes it clear that higher confidence means wider intervals, while lower confidence means narrower intervals.**
Using the same sample (samp), find a 99% confidence interval and interpret it. Finally, use the simulation code to generate many intervals and calculate the proportion of intervals that capture the true population proportion.
# ---- ci-99 ----
# Try a higher confidence level (99%)
my_level_high <- 0.99
ci_99 <- samp %>%
specify(response = climate_change_affects, success = "Yes") %>%
generate(reps = 2000, type = "bootstrap") %>%
calculate(stat = "prop") %>%
get_ci(level = my_level_high)
ci_99
## # A tibble: 1 × 2
## lower_ci upper_ci
## <dbl> <dbl>
## 1 0.45 0.767
**Since I increased the confidence level from 90% and 95% to 99%, I expected the new interval to be wider than the previous ones. The printed results confirm this: the lower bound is smaller and the upper bound is larger, giving me a wider range. This makes sense: to be 99% confident, I need to “cast a wider net,” so the interval covers more possible sample variation. Based on the results, I am 99% confident that the true proportion of U.S. adults who think climate change affects their local community lies between the bounds shown above.**
**When I increase the sample size, the confidence intervals become narrower. That’s because a larger sample provides more information about the population, which reduces the standard error (SE), the variability of the sample statistic. When I decrease the sample size, the intervals become wider, since smaller samples are less reliable and more affected by random variation. In short:
- Larger n → smaller SE → narrower CI (more precise).
- Smaller n → larger SE → wider CI (less precise).**
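A quick check of this rule of thumb using the SE formula at the true p = 0.62 (a sketch, not part of the original lab code); quadrupling n halves the SE and therefore roughly halves the interval width:
tibble(n = c(30, 60, 240)) %>%
  mutate(
    se = sqrt(0.62 * 0.38 / n),        # standard error shrinks as n grows
    width_95 = 2 * qnorm(0.975) * se   # approximate CLT-based 95% CI width
  )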
**When the sample size stays the same (for example, n = 60), increasing the number of bootstrap samples (like going from 1,000 to 10,000) does not change the width of the confidence interval in any meaningful way. The reason is that the standard error, which determines the interval’s width, depends on the sample size and variability in the data, not on how many times we resample. Adding more bootstrap samples just gives a smoother and more stable estimate of the same standard error, making the confidence interval slightly more stable computationally, but the expected width stays the same.**
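This can be checked directly (a sketch assuming samp from above; widths will match only approximately because bootstrapping is random):
# Compare 95% CI widths at 1,000 vs 10,000 bootstrap reps
ci_width <- function(reps) {
  ci <- samp %>%
    specify(response = climate_change_affects, success = "Yes") %>%
    generate(reps = reps, type = "bootstrap") %>%
    calculate(stat = "prop") %>%
    get_ci(level = 0.95)
  ci$upper_ci - ci$lower_ci
}
tibble(reps = c(1000, 10000)) %>%
  mutate(width = map_dbl(reps, ci_width))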
set.seed(1234)
true_p <- 0.62
n_ci <- 50
n_samp <- 60
n_boot <- 1000
# 90% intervals
level_90 <- 0.90
df_ci_90 <- purrr::map_dfr(1:n_ci, function(i) {
ci <- us_adults %>%
sample_n(size = n_samp) %>%
specify(response = climate_change_affects, success = "Yes") %>%
generate(reps = n_boot, type = "bootstrap") %>%
calculate(stat = "prop") %>%
get_ci(level = level_90)
tibble(lower = ci$lower_ci, upper = ci$upper_ci, id = i, level = "90% CI")
}) %>%
mutate(capture = ifelse(lower < true_p & upper > true_p, "Yes", "No"))
# 95% intervals
level_95 <- 0.95
df_ci_95 <- purrr::map_dfr(1:n_ci, function(i) {
ci <- us_adults %>%
sample_n(size = n_samp) %>%
specify(response = climate_change_affects, success = "Yes") %>%
generate(reps = n_boot, type = "bootstrap") %>%
calculate(stat = "prop") %>%
get_ci(level = level_95)
tibble(lower = ci$lower_ci, upper = ci$upper_ci, id = i, level = "95% CI")
}) %>%
mutate(capture = ifelse(lower < true_p & upper > true_p, "Yes", "No"))
# combine + coverage
df_compare <- dplyr::bind_rows(df_ci_90, df_ci_95)
coverage_90 <- mean(df_ci_90$capture == "Yes")
coverage_95 <- mean(df_ci_95$capture == "Yes")
tibble(
level = c("90%", "95%"),
observed_coverage = c(coverage_90, coverage_95)
)
## # A tibble: 2 × 2
## level observed_coverage
## <chr> <dbl>
## 1 90% 0.9
## 2 95% 0.96
ggplot(df_compare, aes(y = id, color = capture)) +
geom_segment(aes(x = lower, xend = upper, yend = id)) +
geom_point(aes(x = lower), size = 1) +
geom_point(aes(x = upper), size = 1) +
geom_vline(xintercept = true_p, linetype = "dashed", color = "gray30") +
facet_wrap(~ level) +
labs(
title = "Side-by-side: 90% vs 95% Bootstrap CIs for p (Yes)",
subtitle = paste0("n = ", n_samp, ", bootstrap reps = ", n_boot, ", true p = ", true_p),
x = "Confidence interval bounds",
y = "",
color = "Captures true p?"
) +
theme_minimal()
Conclusion: This side-by-side view shows that higher confidence yields wider intervals (95% > 90%), which increases the chance that an interval contains the true proportion. In my runs, the observed coverage was about `r round(coverage_90 * 100)`% for 90% CIs and `r round(coverage_95 * 100)`% for 95% CIs. Small deviations from the nominal levels are expected with only 50 intervals and random resampling. Overall, sample size controls precision (width) and confidence level controls coverage, and the bootstrap CIs align with these theoretical expectations.