In this lab, you will investigate the ways in which the statistics from a random sample of data can serve as point estimates for population parameters. We’re interested in formulating a sampling distribution of our estimate in order to learn about the properties of the estimate, such as its distribution.

Setting a seed: We will take some random samples and build sampling distributions in this lab, which means you should set a seed at the start of your lab. If this concept is new to you, review the lab on probability.
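
For example, a seed can be set once near the top of the lab; the specific value below is arbitrary.

# Set a seed so the random samples drawn later are reproducible
# (any integer works; 1234 is arbitrary)
set.seed(1234)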

Getting Started

Load packages

In this lab, we will explore and visualize the data using the tidyverse suite of packages, and we will use the infer package for resampling. The openintro and shiny packages are loaded as well; shiny supports the interactive app used later in the lab.

Let’s load the packages.

library(tidyverse)
library(openintro)
library(infer)
library(shiny)

The data

A 2019 Gallup report states the following:

The premise that scientific progress benefits people has been embodied in discoveries throughout the ages – from the development of vaccinations to the explosion of technology in the past few decades, resulting in billions of supercomputers now resting in the hands and pockets of people worldwide. Still, not everyone around the world feels science benefits them personally.

Source: World Science Day: Is Knowledge Power?

The Wellcome Global Monitor finds that 20% of people globally do not believe that the work scientists do benefits people like them. In this lab, you will assume this 20% is a true population proportion and learn about how sample proportions can vary from sample to sample by taking smaller samples from the population. We will first create our population assuming a population size of 100,000. This means 20,000 (20%) of the population think the work scientists do does not benefit them personally and the remaining 80,000 think it does.

global_monitor <- tibble(
  scientist_work = c(rep("Benefits", 80000), rep("Doesn't benefit", 20000))
)

The name of the data frame is global_monitor and the name of the variable that contains responses to the question “Do you believe that the work scientists do benefit people like you?” is scientist_work.

We can quickly visualize the distribution of these responses using a bar plot.

ggplot(global_monitor, aes(x = scientist_work)) +
  geom_bar() +
  labs(
    x = "", y = "",
    title = "Do you believe that the work scientists do benefit people like you?"
  ) +
  coord_flip() 

We can also obtain summary statistics to confirm we constructed the data frame correctly.

global_monitor %>%
  count(scientist_work) %>%
  mutate(p = n /sum(n))
## # A tibble: 2 × 3
##   scientist_work      n     p
##   <chr>           <int> <dbl>
## 1 Benefits        80000   0.8
## 2 Doesn't benefit 20000   0.2

The unknown sampling distribution

In this lab, you have access to the entire population, but this is rarely the case in real life. Gathering information on an entire population is often extremely costly or impossible. Because of this, we often take a sample of the population and use that to understand the properties of the population.

If you are interested in estimating the proportion of people who don’t think the work scientists do benefits them, you can use the sample_n command to survey the population.

samp1 <- global_monitor %>%
  sample_n(50)

This command collects a simple random sample of size 50 from the global_monitor dataset, and assigns the result to samp1. This is similar to randomly drawing names from a hat that contains the names of all in the population. Working with these 50 names is considerably simpler than working with all 100,000 people in the population.
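
As an aside, in current versions of dplyr the sample_n() verb has been superseded by slice_sample(); assuming a recent dplyr installation, the following sketch should produce an equivalent random sample.

# Equivalent simple random sample of 50 rows using the newer dplyr verb
samp1_alt <- global_monitor %>%
  slice_sample(n = 50)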

  1. Describe the distribution of responses in this sample. How does it compare to the distribution of responses in the population? Hint: Although the sample_n function takes a random sample of observations (i.e. rows) from the dataset, you can still refer to the variables in the dataset with the same names. Code you presented earlier for visualizing and summarizing the population data will still be useful for the sample; however, be careful not to label your proportion p since you're now calculating a sample statistic, not a population parameter. You can customize the label of the statistic to indicate that it comes from the sample.
samp1 %>%
  count(scientist_work) %>%
  mutate(p_hat = n /sum(n))
## # A tibble: 2 × 3
##   scientist_work      n p_hat
##   <chr>           <int> <dbl>
## 1 Benefits           37  0.74
## 2 Doesn't benefit    13  0.26
ggplot(samp1, aes(x = scientist_work)) +
  geom_bar() +
  labs(
    x = "", y = "",
    title = "Do you believe that the work scientists do benefit people like you?"
  ) +
  coord_flip() 

In this particular sample, more than 20% did not feel the work scientists do benefits them (26% = 13/50), and fewer than 80% felt that it does (74% = 37/50).

If you're interested in estimating the proportion of all people who do not believe that the work scientists do benefits them, but you do not have access to the population data, your best single guess is the sample proportion.

samp1 %>%
  count(scientist_work) %>%
  mutate(p_hat = n /sum(n))
## # A tibble: 2 × 3
##   scientist_work      n p_hat
##   <chr>           <int> <dbl>
## 1 Benefits           37  0.74
## 2 Doesn't benefit    13  0.26

Depending on which 50 people you selected, your estimate could be a bit above or a bit below the true population proportion of 0.20; here it came out at 0.26. In general, though, the sample proportion turns out to be a pretty good estimate of the true population proportion, and you were able to get it by sampling less than 1% of the population.

  2. Would you expect the sample proportion to match the sample proportion of another student's sample? Why, or why not? If the answer is no, would you expect the proportions to be somewhat different or very different? Ask a student team to confirm your answer.

Not exactly. Because each student draws a different random sample, sample proportions vary from sample to sample, so another student's estimate would almost certainly differ from mine. The proportions should still be somewhat similar, though, since every sample comes from the same population with a true proportion of 0.20, and with many students taking samples most estimates should land fairly close to one another.
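
To see this sample-to-sample variability directly, you can draw another sample of 50 and compare its proportion with the one from samp1; a minimal sketch (the name another_samp is just illustrative):

# Draw an independent sample of size 50 and compute its proportions;
# they will typically differ somewhat from samp1's
another_samp <- global_monitor %>%
  sample_n(50)
another_samp %>%
  count(scientist_work) %>%
  mutate(p_hat = n / sum(n))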

  3. Take a second sample, also of size 50, and call it samp2. How does the sample proportion of samp2 compare with that of samp1? Suppose we took two more samples, one of size 100 and one of size 1000. Which would you think would provide a more accurate estimate of the population proportion?
samp2 <- global_monitor %>%
  sample_n(50)
samp3 <- global_monitor %>%
  sample_n(100)
samp4 <- global_monitor %>%
  sample_n(1000)
samp2 %>%
  count(scientist_work) %>%
  mutate(p_hat = n /sum(n))
## # A tibble: 2 × 3
##   scientist_work      n p_hat
##   <chr>           <int> <dbl>
## 1 Benefits           42  0.84
## 2 Doesn't benefit     8  0.16
samp3 %>%
  count(scientist_work) %>%
  mutate(p_hat = n /sum(n))
## # A tibble: 2 × 3
##   scientist_work      n p_hat
##   <chr>           <int> <dbl>
## 1 Benefits           81  0.81
## 2 Doesn't benefit    19  0.19
samp4 %>%
  count(scientist_work) %>%
  mutate(p_hat = n /sum(n))
## # A tibble: 2 × 3
##   scientist_work      n p_hat
##   <chr>           <int> <dbl>
## 1 Benefits          798 0.798
## 2 Doesn't benefit   202 0.202

The second sample ended up with a greater percentage of people who believe they benefit from science (84%) and a smaller percentage who believe they do not (16%). The larger the sample size, the closer the results get to the population proportion: the sample of 1000 came out at 79.8% / 20.2%.

Not surprisingly, every time you take another random sample, you might get a different sample proportion. It’s useful to get a sense of just how much variability you should expect when estimating the population proportion this way. The distribution of sample proportions, called the sampling distribution (of the proportion), can help you understand this variability. In this lab, because you have access to the population, you can build up the sampling distribution for the sample proportion by repeating the above steps many times. Here, we use R to take 15,000 different samples of size 50 from the population, calculate the proportion of responses in each sample, filter for only the Doesn’t benefit responses, and store each result in a data frame called sample_props50. Note that we specify that replace = TRUE since sampling distributions are constructed by sampling with replacement.

sample_props50 <- global_monitor %>%
                    rep_sample_n(size = 50, reps = 15000, replace = TRUE) %>%
                    count(scientist_work) %>%
                    mutate(p_hat = n /sum(n)) %>%
                    filter(scientist_work == "Doesn't benefit")

And we can visualize the distribution of these proportions with a histogram.

ggplot(data = sample_props50, aes(x = p_hat)) +
  geom_histogram(binwidth = 0.02) +
  labs(
    x = "p_hat (Doesn't benefit)",
    title = "Sampling distribution of p_hat",
    subtitle = "Sample size = 50, Number of samples = 15000"
  )

Next, you will review how this set of code works.

  4. How many elements are there in sample_props50? Describe the sampling distribution, and be sure to specifically note its center. Make sure to include a plot of the distribution in your answer.

There are 15,000 elements in sample_props50, one sample proportion for each sample of size 50, and the histogram above plots these p_hat values. The proportions range from a low of about 0.05 to a little over 0.40, but the distribution is unimodal, roughly symmetric, and centered near the population proportion of 0.20.
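
A quick numerical check of this description (assuming sample_props50 from the chunk above):

# Number of stored sample proportions and a summary of their values
nrow(sample_props50)
summary(sample_props50$p_hat)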

Interlude: Sampling distributions

The idea behind the rep_sample_n function is repetition. Earlier, you took a single sample of size n (50) from the population. With this new function, you can repeat this sampling procedure reps times in order to build a distribution of a series of sample statistics, which is called the sampling distribution.

Note that in practice one rarely gets to build true sampling distributions, because one rarely has access to data from the entire population.

Without the rep_sample_n function, this would be painful. We would have to manually run the following code 15,000 times

global_monitor %>%
  sample_n(size = 50, replace = TRUE) %>%
  count(scientist_work) %>%
  mutate(p_hat = n /sum(n)) %>%
  filter(scientist_work == "Doesn't benefit")
## # A tibble: 1 × 3
##   scientist_work      n p_hat
##   <chr>           <int> <dbl>
## 1 Doesn't benefit    11  0.22

as well as store the resulting sample proportions each time in a separate vector.

Note that for each of the 15,000 times we computed a proportion, we did so from a different sample!
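
A rough sketch of what that manual approach might look like with a for loop (much slower than rep_sample_n; the object names here are just illustrative):

# Build the sampling distribution by hand: repeat the sampling step
# 15,000 times and store each sample proportion in a vector
p_hats <- numeric(15000)
for (i in 1:15000) {
  p_hats[i] <- global_monitor %>%
    sample_n(size = 50, replace = TRUE) %>%
    summarise(p_hat = mean(scientist_work == "Doesn't benefit")) %>%
    pull(p_hat)
}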

  5. To make sure you understand how sampling distributions are built, and exactly what the rep_sample_n function does, try modifying the code to create a sampling distribution of 25 sample proportions from samples of size 10, and put them in a data frame named sample_props_small. Print the output. How many observations are there in this object called sample_props_small? What does each observation represent?
sample_props_small <- global_monitor %>%
                    rep_sample_n(size = 10, reps = 25, replace = TRUE) %>%
                    count(scientist_work) %>%
                    mutate(p_hat = n /sum(n)) %>%
                    filter(scientist_work == "Doesn't benefit")
print(sample_props_small)
## # A tibble: 22 × 4
## # Groups:   replicate [22]
##    replicate scientist_work      n p_hat
##        <int> <chr>           <int> <dbl>
##  1         1 Doesn't benefit     2   0.2
##  2         2 Doesn't benefit     4   0.4
##  3         3 Doesn't benefit     5   0.5
##  4         5 Doesn't benefit     2   0.2
##  5         6 Doesn't benefit     1   0.1
##  6         7 Doesn't benefit     5   0.5
##  7         9 Doesn't benefit     1   0.1
##  8        10 Doesn't benefit     1   0.1
##  9        11 Doesn't benefit     2   0.2
## 10        12 Doesn't benefit     4   0.4
## # ℹ 12 more rows

Although 25 samples of size 10 were drawn, sample_props_small contains only 22 observations: samples with zero “Doesn’t benefit” responses (for example, replicates 4 and 8) are removed by the filter. Each remaining observation is the proportion of “Doesn’t benefit” responses in one sample, that is, the count n divided by the sample size of 10.
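
A quick check of the row count, and of which replicates were dropped by the filter:

# Replicates with zero "Doesn't benefit" responses do not appear
nrow(sample_props_small)
setdiff(1:25, sample_props_small$replicate)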

Sample size and the sampling distribution

Mechanics aside, let’s return to the reason we used the rep_sample_n function: to compute a sampling distribution, specifically, the sampling distribution of the proportions from samples of 50 people.

ggplot(data = sample_props50, aes(x = p_hat)) +
  geom_histogram(binwidth = 0.02)

The sampling distribution that you computed tells you much about estimating the true proportion of people who think that the work scientists do doesn’t benefit them. Because the sample proportion is an unbiased estimator, the sampling distribution is centered at the true population proportion, and the spread of the distribution indicates how much variability is incurred by sampling only 50 people at a time from the population.
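
Both claims can be checked against the simulation: the mean of the 15,000 sample proportions should sit near the true proportion of 0.20, and their standard deviation should be close to the theoretical standard error sqrt(p(1 - p)/n). A quick sketch, assuming sample_props50 is still in memory:

# Center and spread of the simulated sampling distribution
sample_props50 %>%
  ungroup() %>%
  summarise(
    center = mean(p_hat),   # should be close to the true proportion 0.20
    spread = sd(p_hat)      # should be close to the theoretical standard error
  )

# Theoretical standard error for p = 0.2 and n = 50
sqrt(0.2 * 0.8 / 50)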

In the remainder of this section, you will work on getting a sense of the effect that sample size has on your sampling distribution.

  6. Use the app below to create sampling distributions of proportions of Doesn’t benefit from samples of size 10, 50, and 100. Use 5,000 simulations. What does each observation in the sampling distribution represent? How do the mean, standard error, and shape of the sampling distribution change as the sample size increases? How (if at all) do these values change if you increase the number of simulations? (You do not need to include plots in your answer.)

Each observation is the proportion of people in one simulated sample who believe they “don’t benefit” from the work scientists do, out of the sample size (n = 10, 50, or 100); 5,000 such samples are drawn from the data set for each sample size. As the sample size increases, the mean of the sampling distribution stays near the population proportion of 0.20, the standard error shrinks, and the histogram becomes more symmetric and bell shaped. A small sample size (n = 10) is coarse because a single response moves the sample proportion up or down by 0.10; at n = 50 or n = 100, one response moves it by only 0.02 or 0.01. Increasing the number of simulations does not systematically change the mean, standard error, or shape; it only gives a smoother, more stable picture of the sampling distribution. Beyond sample sizes of roughly 200, the shape changes very little, but the distribution keeps narrowing.
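
Although the exercise uses the interactive app, the same comparison can be sketched in code with rep_sample_n(); the helper sim_for_n below is just illustrative, and it keeps samples with zero “Doesn’t benefit” responses (which the count-and-filter approach would drop).

# Simulate 5,000 samples for each sample size and summarize the center and
# spread of the resulting sampling distribution of p_hat
sim_for_n <- function(samp_size) {
  global_monitor %>%
    rep_sample_n(size = samp_size, reps = 5000, replace = TRUE) %>%
    # one p_hat per replicate; samples with zero such responses keep p_hat = 0
    summarise(p_hat = mean(scientist_work == "Doesn't benefit")) %>%
    summarise(
      sample_size = samp_size,
      mean_p_hat  = mean(p_hat),
      se_p_hat    = sd(p_hat)
    )
}

bind_rows(lapply(c(10, 50, 100), sim_for_n))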


More Practice

So far, you have only focused on estimating the proportion of those who think the work scientists do doesn’t benefit them. Now, you’ll try to estimate the proportion of those who think it does.

Note that while you might be able to answer some of these questions using the app, you are expected to write the required code and produce the necessary plots and summary statistics. You are welcome to use the app for exploration.

  7. Take a sample of size 15 from the population and calculate the proportion of people in this sample who think the work scientists do enhances their lives. Using this sample, what is your best point estimate of the population proportion of people who think the work scientists do enhances their lives?
set.seed(2077)
samp_15 <- global_monitor %>%
  sample_n(size = 15, replace = TRUE) %>%
  count(scientist_work) %>%
  mutate(p_hat = n /sum(n)) %>%
  filter(scientist_work == "Benefits")
samp_15
## # A tibble: 1 × 3
##   scientist_work     n p_hat
##   <chr>          <int> <dbl>
## 1 Benefits          11 0.733

In this particular sample of size 15, the sample proportion of people who think the work scientists do benefits them is 0.733, so that is our best point estimate of the population proportion. (Before setting the seed to freeze the result, one draw happened to contain all 15 “Benefits” responses, which would have given an estimate of 1.)

  8. Since you have access to the population, simulate the sampling distribution of the proportion of those who think the work scientists do enhances their lives for samples of size 15 by taking 2000 samples from the population of size 15 and computing 2000 sample proportions. Store these proportions in an object called sample_props15. Plot the data, then describe the shape of this sampling distribution. Based on this sampling distribution, what would you guess the true proportion of those who think the work scientists do enhances their lives to be? Finally, calculate and report the population proportion.
set.seed(2076)
sample_props15 <- global_monitor %>%
                    rep_sample_n(size = 15, reps = 2000, replace = TRUE) %>%
                    count(scientist_work) %>%
                    mutate(p_hat = n /sum(n)) %>%
                    filter(scientist_work == "Benefits")

ggplot(data = sample_props15, aes(x = p_hat)) +
  geom_histogram(binwidth = 0.05)

mean_proportion <- mean(sample_props15$p_hat)
std_dev_proportion <- sd(sample_props15$p_hat)
print(mean_proportion)
## [1] 0.8004333
print(std_dev_proportion)
## [1] 0.1013599
confidence_interval <- t.test(sample_props15$p_hat)$conf.int
cat("95% Confidence Interval Around the Mean:", confidence_interval)
## 95% Confidence Interval Around the Mean: 0.7959884 0.8048782

The plot is roughly normal. There appear to be gaps near 0.5, 0.7, and 0.9; these are an artifact of the discreteness of p_hat (with n = 15 it can only take values that are multiples of 1/15) interacting with the 0.05 bin width, not missing data. Judging from the plot, the true proportion looks to be about 0.8. Calculating directly from the simulation, the mean of the sample proportions is 0.8004333 and their standard deviation (the standard error of p_hat) is 0.1013599. The t-based interval reported above, 0.7959884 to 0.8048782, is a 95% confidence interval for the mean of these 2,000 simulated proportions, which in turn estimates the population proportion of 0.80.
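
The gaps can be seen directly by listing the values p_hat can actually take when n = 15 (multiples of 1/15):

# Attainable values of p_hat for a sample of size 15; some 0.05-wide bins
# contain none of these values, which creates the gaps in the histogram
round((0:15) / 15, 3)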

  9. Change your sample size from 15 to 150, then compute the sampling distribution using the same method as above, and store these proportions in a new object called sample_props150. Describe the shape of this sampling distribution and compare it to the sampling distribution for a sample size of 15. Based on this sampling distribution, what would you guess to be the true proportion of those who think the work scientists do enhances their lives?
set.seed(2075)
sample_props150 <- global_monitor %>%
                    rep_sample_n(size = 150, reps = 2000, replace = TRUE) %>%
                    count(scientist_work) %>%
                    mutate(p_hat = n /sum(n)) %>%
                    filter(scientist_work == "Benefits")

ggplot(data = sample_props150, aes(x = p_hat)) +
  geom_histogram(binwidth = 0.02)

mean_proportion <- mean(sample_props150$p_hat)
std_dev_proportion <- sd(sample_props150$p_hat)
print(mean_proportion)
## [1] 0.7991167
print(std_dev_proportion)
## [1] 0.03233004
confidence_interval <- t.test(sample_props150$p_hat)$conf.int
cat("95% Confidence Interval Around the Mean:", confidence_interval)
## 95% Confidence Interval Around the Mean: 0.7976989 0.8005344

The plot is much closer to normal than the sample size 15 distribution, and from it you could guess fairly confidently that the true proportion is about 0.8. Calculating from the simulation, the mean of the sample proportions is 0.7991167 and the standard error is 0.03233004, noticeably smaller than the 0.1013599 obtained with samples of size 15. The corresponding 95% confidence interval for the mean of the simulated proportions, 0.7976989 to 0.8005344, is likewise narrower than the one from the size 15 simulation.
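
The simulated spreads line up with the theoretical standard errors sqrt(p(1 - p)/n) for the two sample sizes:

# Theoretical standard errors for p = 0.8; compare with the simulated sd() values
sqrt(0.8 * 0.2 / 15)    # approximately 0.103
sqrt(0.8 * 0.2 / 150)   # approximately 0.033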

  10. Of the sampling distributions from the previous two exercises (samples of size 15 and 150), which has a smaller spread? If you’re concerned with making estimates that are more often close to the true value, would you prefer a sampling distribution with a large or small spread?

The sampling distribution based on samples of size 150 has the smaller spread. With a small sample, changing a single “Benefits” response moves the sample proportion by 1/15 ≈ 0.0667 (two responses by 0.1333, and so on), which widens the range of proportions observed across samples. The larger sample is far less sensitive to any single observation, consistent with the Law of Large Numbers. If the goal is estimates that are usually close to the true value, you want a sampling distribution with a small spread, which means taking as large a sample as is practical.