If you have access to data on an entire population, say the opinion of every adult in the United States on whether or not they think climate change is affecting their local community, it’s straightforward to answer questions like, “What percent of US adults think climate change is affecting their local community?”. Similarly, if you had demographic information on the population you could examine how, if at all, this opinion varies among young and old adults and among adults with different political leanings. If you have access to only a sample of the population, as is often the case, the task becomes more complicated. What is your best guess for this proportion if you only have data from a small sample of adults? This type of situation requires that you use your sample to make inferences about what your population looks like.
Setting a seed: You will take random samples and build sampling distributions in this lab, which means you should set a seed at the top of your lab. If this concept is new to you, review the lab on probability.
In this lab, we will explore and visualize the data using the tidyverse suite of packages, and perform statistical inference using infer.
Let’s load the packages.
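Assuming both packages are already installed, loading them looks like this:

library(tidyverse)
library(infer)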
A 2019 Pew Research report states the following:

Roughly six-in-ten U.S. adults (62%) say climate change is currently affecting their local community either a great deal or some, according to a new Pew Research Center survey.

Source: Most Americans say climate change impacts their community, but effects vary by region

In this lab, you will assume this 62% is a true population proportion and learn about how sample proportions can vary from sample to sample by taking smaller samples from the population. We will first create our population. To keep our computation simple, we will assume a total population size of 100,000 (even though that’s smaller than the population size of all US adults). This means 62,000 (62%) of the adult population think climate change impacts their community, and the remaining 38,000 do not.
The name of the data frame is `us_adults` and the name of the variable that contains responses to the question “Do you think climate change is affecting your local community?” is `climate_change_affects`.
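One way this population data frame could be constructed is sketched below; the lab’s own construction may differ slightly, but it must contain 62,000 “Yes” and 38,000 “No” responses:

us_adults <- tibble(
  climate_change_affects = c(rep("Yes", 62000), rep("No", 38000))
)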
We can quickly visualize the distribution of these responses using a bar plot.
ggplot(us_adults, aes(x = climate_change_affects)) +
  geom_bar() +
  labs(
    x = "", y = "",
    title = "Do you think climate change is affecting your local community?"
  ) +
  coord_flip()
We can also obtain summary statistics to confirm we constructed the data frame correctly.
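A count/mutate pipeline along the following lines produces the summary shown below:

us_adults %>%
  count(climate_change_affects) %>%
  mutate(p = n / sum(n))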
## # A tibble: 2 × 3
## climate_change_affects n p
## <chr> <int> <dbl>
## 1 No 38000 0.38
## 2 Yes 62000 0.62
In this lab, you’ll start with a simple random sample of size 60 from the population.
Answer:
About 57% (34 out of 60) of my sample believes climate change impacts their community.
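For reference, a sample like this could be drawn and summarized with something along these lines (the exact proportions depend on the seed set at the top of the lab):

samp <- us_adults %>%
  sample_n(size = 60)   # simple random sample of 60 adults

samp %>%
  count(climate_change_affects) %>%
  mutate(p = n / sum(n))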
## # A tibble: 2 × 3
## climate_change_affects n p
## <chr> <int> <dbl>
## 1 No 26 0.433
## 2 Yes 34 0.567
Answer: Similar to my answer in Lab 5a, I would expect the sample proportions to differ somewhat from student to student, with only a small chance of two students getting exactly the same proportion, and an even smaller chance that the underlying sample members are exactly the same. Across enough student samples, though, I would expect at least one matching pair of proportions.
Return for a moment to the question that first motivated this lab: based on this sample, what can you infer about the population? With just one sample, the best estimate of the proportion of US adults who think climate change affects their local community would be the sample proportion, usually denoted as \(\hat{p}\) (here we are calling it `p_hat`). That serves as a good point estimate, but it would be useful to also communicate how uncertain you are of that estimate. This uncertainty can be quantified using a confidence interval.
One way of calculating a confidence interval for a population proportion is based on the Central Limit Theorem, as \(\hat{p} \pm z^\star SE_{\hat{p}}\), or more precisely, as \[ \hat{p} \pm z^\star \sqrt{ \frac{\hat{p} (1-\hat{p})}{n} } \]
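As a sketch of how this formula could be applied to the sample from above (recomputing `p_hat` here from `samp`; `qnorm()` supplies the critical value \(z^\star\) for a 95% interval):

p_hat <- samp %>%
  count(climate_change_affects) %>%
  mutate(p = n / sum(n)) %>%
  filter(climate_change_affects == "Yes") %>%
  pull(p)

n <- nrow(samp)                              # sample size
z_star <- qnorm(0.975)                       # critical value for 95% confidence
se <- sqrt(p_hat * (1 - p_hat) / n)          # standard error of p_hat
c(p_hat - z_star * se, p_hat + z_star * se)  # CLT-based 95% confidence interval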
Another way is using simulation, or to be more specific, using bootstrapping. The term bootstrapping comes from the phrase “pulling oneself up by one’s bootstraps”, which is a metaphor for accomplishing an impossible task without any outside help. In this case the impossible task is estimating a population parameter (the unknown population proportion), and we’ll accomplish it using data from only the given sample. Note that this notion of saying something about a population parameter using only information from an observed sample is the crux of statistical inference; it is not limited to bootstrapping.
In essence, bootstrapping assumes that there are more observations in the population like the ones in the observed sample. So we “reconstruct” the population by resampling from our sample, with replacement. The bootstrapping scheme is as follows:

1. Take a bootstrap sample: a random sample taken with replacement from the original sample, of the same size as the original sample.
2. Calculate the bootstrap statistic: the statistic of interest (here, a proportion) computed on the bootstrap sample.
3. Repeat steps (1) and (2) many times to create a bootstrap distribution, a distribution of bootstrap statistics.
4. Calculate the bounds of the XX% confidence interval as the middle XX% of the bootstrap distribution.
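For intuition only, a hand-coded version of this scheme might look like the sketch below (assuming `samp` from earlier); as the next paragraph notes, the infer package will do this work for us:

boot_props <- replicate(1000, {
  boot_samp <- samp %>% sample_n(size = nrow(samp), replace = TRUE)  # step 1: resample with replacement
  mean(boot_samp$climate_change_affects == "Yes")                    # step 2: bootstrap proportion of "Yes"
})
quantile(boot_props, c(0.025, 0.975))  # steps 3-4: middle 95% of the bootstrap distribution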
Instead of coding up each of these steps, we will construct confidence intervals using the infer package.
Below is an overview of the functions we will use to construct this confidence interval:
Function | Purpose
---|---
`specify` | Identify your variable of interest
`generate` | The number of samples you want to generate
`calculate` | The sample statistic you want to do inference with, or you can also think of this as the population parameter you want to do inference for
`get_ci` | Find the confidence interval
This code will find the 95 percent confidence interval for the proportion of US adults who think climate change affects their local community.
samp %>%
  specify(response = climate_change_affects, success = "Yes") %>%
  generate(reps = 1000, type = "bootstrap") %>%
  calculate(stat = "prop") %>%
  get_ci(level = 0.95)
## # A tibble: 1 × 2
## lower_ci upper_ci
## <dbl> <dbl>
## 1 0.45 0.683
- In `specify` we specify the response variable and the level of that variable we are calling a `success`.
- In `generate` we provide the number of resamples we want from the population in the `reps` argument (this should be a reasonably large number) as well as the type of resampling we want to do, which is `"bootstrap"` in the case of constructing a confidence interval.
- In `calculate` we compute the sample statistic of interest for each of these resamples, which is the `prop`ortion.

Feel free to test out the rest of the arguments for these functions, since these commands will be used together to calculate confidence intervals and solve inference problems for the rest of the semester. But we will also walk you through more examples in future chapters.
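If you would like to see the bootstrap distribution behind such an interval, one optional approach (a sketch, not part of the pipeline above) is to save the resampled statistics and plot them with infer’s visualize():

boot_dist <- samp %>%
  specify(response = climate_change_affects, success = "Yes") %>%
  generate(reps = 1000, type = "bootstrap") %>%
  calculate(stat = "prop")

ci <- boot_dist %>% get_ci(level = 0.95)

boot_dist %>%
  visualize() +                              # histogram of the bootstrap proportions
  shade_confidence_interval(endpoints = ci)  # shade the 95% interval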
To recap: even though we don’t know what the full population looks like, we’re 95% confident that the true proportion of US adults who think climate change affects their local community is between the two bounds reported as a result of this pipeline.
Answer: Being 95% confident means that, if the process were repeated enough times, 95% of the calculated intervals would capture the real, true proportion of the population that believes climate change is affecting their local community. The flip side is that, 5% of the time, the interval would not capture it and could potentially be wildly wrong. The 95% confidence is about overall reliability across a larger set of samples: confidence intervals give a range of plausible values, and the 95% describes how reliably the procedure that produces those intervals captures the true value.
In this case, you have the rare luxury of knowing the true population proportion (62%) since you have data on the entire population.
Answer: I am answering this question alone and will only address the first question. My 95% confidence interval is 0.467 to 0.717. Technically, 0.62 is within this range, but the range is also 25 percentage points wide. If I showed you a presidential poll that said Kamala Harris has 48%, with a 95% confidence range of 35% to 60%, would you treat it as a valuable piece of data to use in decision making? In the real world, these confidence intervals need to be significantly narrower, closer to ±3%, to be useful.
Answer: I would expect, assuming enough student samples were taken, 95% of the intervals to capture the actual value of 62% and 5% of them not to.
In the next part of the lab, you will collect many samples to learn more about how sample proportions and confidence intervals constructed based on those samples vary from one sample to another.
Doing this would require learning programming concepts like iteration so that you can automate re-running the code you’ve developed so far many times to obtain many (50) confidence intervals. To keep the programming simpler, we are providing the interactive app below, which essentially does this for you and creates a plot similar to Figure 5.6 in OpenIntro Statistics, 4th Edition (page 182).
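For reference, a rough sketch of what the app automates might look like the following (object names such as `many_cis` are illustrative, not taken from the app’s code):

many_cis <- purrr::map_dfr(1:50, function(i) {
  us_adults %>%
    sample_n(size = 60) %>%                                   # a fresh sample of 60 adults
    specify(response = climate_change_affects, success = "Yes") %>%
    generate(reps = 1000, type = "bootstrap") %>%
    calculate(stat = "prop") %>%
    get_ci(level = 0.95) %>%
    mutate(sample_id = i)
})

# proportion of the 50 intervals that capture the true proportion, 0.62
mean(many_cis$lower_ci <= 0.62 & many_cis$upper_ci >= 0.62)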
Answer:
48 out of 50, or 96%, produced intervals that captured the true value; 2 out of 50 did not. This capture rate is one percentage point higher than the 95% confidence level.
Answer: I chose a 90% confidence level and expected more intervals to miss the true value. When I ran it, I got exactly what I expected: the capture rate went from 96% at a 95% confidence level to 94% at a 90% confidence level. I expected this because a lower confidence level produces a narrower confidence interval, leaving a narrower range in which the true value can fall and still be captured.
Using code from the infer package and data from the one sample you have (`samp`), find a confidence interval for the proportion of US adults who think climate change is affecting their local community with a confidence level of your choosing (other than 95%) and interpret it.

Answer:
I chose a 90% confidence level and got a confidence interval of 0.483 to 0.7. The interval did narrow a bit, to a width of about 21.7 percentage points, versus closer to 25 at 95%.
samp %>%
  specify(response = climate_change_affects, success = "Yes") %>%
  generate(reps = 1000, type = "bootstrap") %>%
  calculate(stat = "prop") %>%
  get_ci(level = 0.90)
## # A tibble: 1 × 2
## lower_ci upper_ci
## <dbl> <dbl>
## 1 0.45 0.667
Answer: Of the 50 intervals, each based on 1,000 bootstrap resamples at a 90% confidence level, 5 out of 50, or 10%, did not capture the population value. This is an exact match for the 90% confidence level: over a large enough number of samples from the population, I would expect 10% of the intervals not to capture the true population value.
Try one more (different) confidence level: calculate the bounds of the interval using the infer package and data from `samp` and interpret it. Finally, use the app to generate many intervals and calculate the proportion of intervals that capture the true population proportion.

Answer:
I chose a 75% confidence level because I expected a much narrower interval, one that would not capture the population value consistently enough to produce reliable statistics. The confidence interval I got was 0.517 to 0.683, which does capture the population value of 0.62.
samp %>%
  specify(response = climate_change_affects, success = "Yes") %>%
  generate(reps = 1000, type = "bootstrap") %>%
  calculate(stat = "prop") %>%
  get_ci(level = 0.75)
## # A tibble: 1 × 2
## lower_ci upper_ci
## <dbl> <dbl>
## 1 0.483 0.633
When I ran the simulation app using the same settings as the previous runs except for the confidence level, 16 out of 50, or 32%, of the confidence intervals did not capture the population value. At a 75% confidence level I would expect this value to be 25%, so this is 7 percentage points higher.
Answer: A sample size of 400 at 95% confidence produced 2 out of 50 intervals not capturing the population value, which is the same as with a sample size of 60. The confidence intervals got tighter, with most clustering between 0.55 and 0.7; at a sample size of 60, they stretched out to cover roughly 0.4 to 0.8.

Increasing the sample size to 1,000 while reducing the confidence level to 90% led to confidence intervals clustered between 0.6 and 0.66.

Overall, the sweet spot for this dataset is a sample of around 800 to 1,000 and a 95% confidence level. For surveys where you already have all the answers and are just picking the optimal sample from data you already have, reaching for the most usable statistics is a good approach. However, if you’re trying to figure out how many people to survey in a population for your results to matter, I would not use this approach.
Answer: I modified the app code for this one so I could compute the SEs and display their average in the UI. The law of diminishing returns started to take over as I increased the number of bootstrap samples while keeping the sample size at 60 and the confidence level at 95%:
- Bootstrap samples: 800, average SE: 0.0628
- Bootstrap samples: 1,000, average SE: 0.0627
- Bootstrap samples: 1,400, average SE: 0.0621
- Bootstrap samples: 1,800, average SE: 0.0620
Increasing the sample size to 200, with the other parameters unchanged, led to the biggest change, with the average standard error dropping to 0.0345.
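For reference, the kind of average-SE calculation described above could be sketched outside the app roughly like this (a hypothetical helper, not the app’s actual code):

avg_boot_se <- function(sample_size, reps, n_samples = 50) {
  ses <- replicate(n_samples, {
    one_samp <- us_adults %>% sample_n(size = sample_size)  # one sample from the population
    boot <- one_samp %>%
      specify(response = climate_change_affects, success = "Yes") %>%
      generate(reps = reps, type = "bootstrap") %>%
      calculate(stat = "prop")
    sd(boot$stat)  # bootstrap SE for this sample
  })
  mean(ses)        # average SE across the samples
}

avg_boot_se(sample_size = 60, reps = 1000)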