Sampling from Ames, Iowa

If you have access to data on an entire population, say the size of every house in Ames, Iowa, it’s straightforward to answer questions like, “How big is the typical house in Ames?” and “How much variation is there in the sizes of houses?” If you have access to only a sample of the population, as is often the case, the task becomes more complicated. What is your best guess for the typical size if you only know the sizes of several dozen houses? This sort of situation requires that you use your sample to make inferences about what your population looks like.

The data

In the previous lab, “Sampling Distributions”, we looked at the population data of houses from Ames, Iowa. Let’s start by loading that data set.

load("more/ames.RData")

In this lab we’ll start with a simple random sample of size 60 from the population. Note that the data set has information on many housing variables, but for the first portion of the lab we’ll focus on the size of the house, represented by the variable Gr.Liv.Area.

population <- ames$Gr.Liv.Area
samp <- sample(population, 60)
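
Note that sample() draws a new random sample every time this document is knit, so the numbers reported below will change from run to run. If reproducible results are desired, a seed can be set before sampling; here is a minimal sketch (the seed value 4242 is arbitrary and not part of the original lab code):

set.seed(4242)                  # any fixed integer makes the random draw reproducible
samp <- sample(population, 60)  # same call as above, now repeatable across knits
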
  1. Describe the distribution of your sample. What would you say is the “typical” size within your sample? Also state precisely what you interpreted “typical” to mean.
hist(samp, xlab = "House Area Sq. Ft.")

summary(samp)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##     747    1153    1490    1535    1816    3078

The distribution has a mode in the 1500 sq. ft. bin and is slightly right skewed, with a small secondary peak near 2500 sq. ft. The typical size of a house is between about 1153 and 1816 sq. ft., the first and third quartiles shown in the summary above. I used the IQR to come up with this range because it encompasses the middle 50% of house sizes in the sample; this range also covers the two largest bins in the histogram. (Because the sample is drawn at random each time the document is knit, these values will vary from run to run.)
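
The quartiles used above can be pulled from the sample directly; a quick sketch in R:

quantile(samp, c(0.25, 0.75))  # first and third quartiles of the sample
IQR(samp)                      # width of the interquartile range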

  2. Would you expect another student’s distribution to be identical to yours? Would you expect it to be similar? Why or why not?

Since the houses are chosen at random, I would expect another student’s distribution to be similar, but not identical, to mine. Random selection means our samples will contain different houses, but a sample of 60 houses is large enough that a few outliers in either sample are unlikely to change the descriptive statistics very much.
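
One way to see this is to draw a second sample and compare its summary to the first; a brief sketch (samp2 is just an illustrative name, not part of the original lab):

samp2 <- sample(population, 60)  # an independent random sample of the same size
summary(samp2)                   # compare these statistics to summary(samp)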

Confidence intervals

One of the most common ways to describe the typical or central value of a distribution is to use the mean. In this case we can calculate the mean of the sample using,

sample_mean <- mean(samp)

Return for a moment to the question that first motivated this lab: based on this sample, what can we infer about the population? Based only on this single sample, the best estimate of the average living area of houses sold in Ames would be the sample mean, usually denoted as \(\bar{x}\) (here we’re calling it sample_mean). That serves as a good point estimate but it would be useful to also communicate how uncertain we are of that estimate. This can be captured by using a confidence interval.

We can calculate a 95% confidence interval for a sample mean by adding and subtracting 1.96 standard errors to the point estimate (See Section 4.2.3 if you are unfamiliar with this formula).

se <- sd(samp) / sqrt(60)
lower <- sample_mean - 1.96 * se
upper <- sample_mean + 1.96 * se
c(lower, upper)
## [1] 1408.921 1661.179
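
As an aside, the multiplier 1.96 is the 0.975 quantile of the standard normal distribution, which R can return directly:

qnorm(0.975)  # approximately 1.96, the multiplier for a 95% confidence interval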

This is an important inference that we’ve just made: even though we don’t know what the full population looks like, we’re 95% confident that the true average size of houses in Ames lies between the values lower and upper. There are a few conditions that must be met for this interval to be valid.

  3. For the confidence interval to be valid, the sample mean must be normally distributed and have standard error \(s / \sqrt{n}\). What conditions must be met for this to be true?

For the sample mean to be nearly normally distributed, the conditions of the Central Limit Theorem must hold: the observations must be randomly selected (so they are independent) and the sample size must be reasonably large, with 30 observations being a common rule of thumb. Our sample of 60 randomly selected houses meets both conditions.
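
These conditions can be checked informally in R; a short sketch (the 30-observation rule of thumb comes from the answer above):

length(samp) >= 30                 # sample size large enough for the usual CLT rule of thumb
length(samp) / length(population)  # sampling fraction; a small fraction supports independence
hist(samp)                         # look for strong skew or extreme outliers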

Confidence levels

  4. What does “95% confidence” mean? If you’re not sure, see Section 4.2.2.

“95% confidence” refers to the method, not to any single interval: if we were to draw many samples and build a confidence interval from each one, we would expect about 95% of those intervals to capture the true population mean. Out of 20 such intervals, we would expect roughly one to miss the population mean.

In this case we have the luxury of knowing the true population mean since we have data on the entire population. This value can be calculated using the following command:

mean(population)
## [1] 1499.69

  5. Does your confidence interval capture the true average size of houses in Ames? If you are working on this lab in a classroom, does your neighbor’s interval capture this value?

Yes. The interval computed above is (1408.9, 1661.2), and the population mean of 1499.69 sq. ft. falls within it. (As before, the exact endpoints change each time the document is knit because the sample is random.)
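
This check can also be done in code rather than by eye; a one-line sketch using the lower and upper values computed earlier:

mean(population) >= lower & mean(population) <= upper  # TRUE if the interval captures the true mean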

  6. Each student in your class should have gotten a slightly different confidence interval. What proportion of those intervals would you expect to capture the true population mean? Why? If you are working on this lab in a classroom, collect data on the intervals created by other students in the class and calculate the proportion of intervals that capture the true population mean.

I would expect about 95% of the intervals, roughly 19 out of 20, to capture the population mean. Each interval was constructed with a method that, over repeated sampling, contains the true mean 95% of the time, so across the class about that proportion of intervals should include it.

Using R, we’re going to recreate many samples to learn more about how sample means and confidence intervals vary from one sample to another. Loops come in handy here (If you are unfamiliar with loops, review the Sampling Distribution Lab).

Here is the rough outline:

  1. Obtain a random sample of size 60 from the population.
  2. Calculate the sample’s mean and standard deviation.
  3. Use these statistics to construct a confidence interval.
  4. Repeat steps (1)–(3) 50 times.

But before we do all of this, we need to first create empty vectors where we can save the means and standard deviations that will be calculated from each sample. And while we’re at it, let’s also store the desired sample size as n.

samp_mean <- rep(NA, 50)
samp_sd <- rep(NA, 50)
n <- 60

Now we’re ready for the loop where we calculate the means and standard deviations of 50 random samples.

for(i in 1:50){
  samp <- sample(population, n) # obtain a sample of size n = 60 from the population
  samp_mean[i] <- mean(samp)    # save sample mean in ith element of samp_mean
  samp_sd[i] <- sd(samp)        # save sample sd in ith element of samp_sd
}
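
Before constructing the intervals, it can be reassuring to glance at the 50 sample means themselves; a quick optional sketch, not part of the original lab:

summary(samp_mean)  # the 50 sample means should cluster around the population mean
hist(samp_mean)     # their distribution should look roughly normal, per the CLT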

Lastly, we construct the confidence intervals.

lower_vector <- samp_mean - 1.96 * samp_sd / sqrt(n) 
upper_vector <- samp_mean + 1.96 * samp_sd / sqrt(n)

Lower bounds of these 50 confidence intervals are stored in lower_vector, and the upper bounds are in upper_vector. Let’s view the first interval.

c(lower_vector[1], upper_vector[1])
## [1] 1340.332 1562.434
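
Rather than inspecting the 50 intervals one at a time, we can count how many of them capture the true population mean; a short sketch (captured is just an illustrative name):

captured <- lower_vector < mean(population) & upper_vector > mean(population)
sum(captured)   # number of the 50 intervals that contain the true mean
mean(captured)  # proportion captured; should be close to 0.95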

On your own

c(lower_vector[8], samp_mean[8], upper_vector[8])  # the 8th interval along with its sample mean
## [1] 1320.675 1438.117 1555.559
samp_sd[8]/sqrt(n)                                 # standard error of the 8th sample
## [1] 59.91941
c(lower_vector[8], upper_vector[8])                # endpoints of the 8th interval
## [1] 1320.675 1555.559

Because no random seed is set, the output shown above comes from a different random draw than the one described here. In my knit, the interval I chose was (1270.325, 1481.309), with a sample mean of 1375.817 and a standard error of 53.82238, which does not capture the population mean of 1499.69 sq. ft. To find the critical value at which this interval would just capture the population mean:

\[ 1499.69 = 1375.817 + x \cdot 53.82238 \]

\[ x = \frac{1499.69 - 1375.817}{53.82238} = 2.301515 \]

A critical value of 2.301515 corresponds to a cumulative normal probability of about 0.989317, i.e. a two-tailed tail area of about 0.021366, so an interval built with this multiplier is roughly a 97.9% confidence interval.
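
These probabilities can be reproduced in R; a brief sketch using the sample mean (1375.817) and standard error (53.82238) from my knit, with z as an illustrative name:

z <- (mean(population) - 1375.817) / 53.82238  # critical value at which the interval just captures the mean
pnorm(z)                                       # about 0.9893, the cumulative normal probability
2 * (1 - pnorm(z))                             # about 0.0214, the corresponding two-tailed tail area

The 50 intervals can then be recalculated with this larger multiplier and plotted: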

lower_vector <- samp_mean - 2.301515 * samp_sd / sqrt(n) 
upper_vector <- samp_mean + 2.301515 * samp_sd / sqrt(n)
plot_ci(lower_vector, upper_vector, mean(population))

We went from three intervals that did not capture the population mean to two intervals that (barely) did not capture it. 2/50 = 0.04 of the intervals missed the mean, which is fairly consistent with a tail probability of 0.0214: that is only one more miss than the expected count of about one interval out of 50. Since the samples were selected randomly, fluctuations like this are expected; most of the time about one interval will miss the population mean, sometimes zero or two, occasionally three, and so on.

This is a product of OpenIntro that is released under a Creative Commons Attribution-ShareAlike 3.0 Unported license. This lab was written for OpenIntro by Andrew Bray and Mine Çetinkaya-Rundel.