---
title: "The normal distribution"
author: "Mark Schmalfeld"
date: "Fall 2021, 606 Lab 4"
output:
  pdf_document: default
  html_document:
    includes:
      in_header: header.html
    css: ./lab.css
    highlight: pygments
    theme: cerulean
    toc: true
    toc_float: true
editor_options:
  chunk_output_type: console
---
In this lab, you’ll investigate the probability distribution that is most central to statistics: the normal distribution. If you are confident that your data are nearly normal, that opens the door to many powerful statistical methods. Here we’ll use the graphical tools of R to assess the normality of our data and also learn how to generate random numbers from a normal distribution.
In this lab, we will explore and visualize the data using the tidyverse suite of packages as well as the openintro package.
Let’s load the packages.
library(tidyverse)
library(openintro)
This week you’ll be working with fast food data. This data set contains data on 515 menu items from some of the most popular fast food restaurants worldwide. Let’s take a quick peek at the first few rows of the data.
You can use either glimpse(), as before, or head() to do this.
library(tidyverse)
library(openintro)
data("fastfood", package='openintro')
head(fastfood)
## # A tibble: 6 × 17
## restaurant item calories cal_fat total_fat sat_fat trans_fat cholesterol
## <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Mcdonalds Artisan G… 380 60 7 2 0 95
## 2 Mcdonalds Single Ba… 840 410 45 17 1.5 130
## 3 Mcdonalds Double Ba… 1130 600 67 27 3 220
## 4 Mcdonalds Grilled B… 750 280 31 10 0.5 155
## 5 Mcdonalds Crispy Ba… 920 410 45 12 0.5 120
## 6 Mcdonalds Big Mac 540 250 28 10 1 80
## # … with 9 more variables: sodium <dbl>, total_carb <dbl>, fiber <dbl>,
## # sugar <dbl>, protein <dbl>, vit_a <dbl>, vit_c <dbl>, calcium <dbl>,
## # salad <chr>
tail(fastfood)
## # A tibble: 6 × 17
## restaurant item calories cal_fat total_fat sat_fat trans_fat cholesterol
## <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Taco Bell Original … 700 270 30 9 0.5 45
## 2 Taco Bell Spicy Tri… 780 340 38 10 0.5 50
## 3 Taco Bell Express T… 580 260 29 9 1 60
## 4 Taco Bell Fiesta Ta… 780 380 42 10 1 60
## 5 Taco Bell Fiesta Ta… 720 320 35 7 0 70
## 6 Taco Bell Fiesta Ta… 720 320 36 8 1 55
## # … with 9 more variables: sodium <dbl>, total_carb <dbl>, fiber <dbl>,
## # sugar <dbl>, protein <dbl>, vit_a <dbl>, vit_c <dbl>, calcium <dbl>,
## # salad <chr>
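As mentioned above, glimpse() is an alternative to head() that lists every column along with its type in a compact view (shown here as an optional extra; its output is omitted):
glimpse(fastfood)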
You’ll see that for every observation there are 17 measurements, many of which are nutritional facts.
You’ll be focusing on just three columns to get started: restaurant, calories, calories from fat.
Let’s first focus on just products from McDonalds and Dairy Queen.
mcdonalds <- fastfood %>%
filter(restaurant == "Mcdonalds")
dairy_queen <- fastfood %>%
filter(restaurant == "Dairy Queen")
# Subset the remaining restaurants into their own data frames for later use
chickfila <- fastfood %>%
filter(restaurant == "Chick Fil-A")
sonic <- fastfood %>%
filter(restaurant == "Sonic")
arbys <- fastfood %>%
filter(restaurant == "Arbys")
burgerking <- fastfood %>%
filter(restaurant == "Burger King")
subway <- fastfood %>%
filter(restaurant == "Subway")
taco_bell <- fastfood %>%
filter(restaurant == "Taco Bell")
McDonald's: calories seem to fall primarily in the 550-650 range, with more high-calorie items than Dairy Queen. The shape appears unimodal with a longer upper tail.
Dairy Queen: calories seem to fall primarily in the 400-500 range, with another grouping in the 600-700 range. The shape also appears unimodal with a longer upper tail, but it does not reach as high or carry as much weight in the upper end as McDonald's.
ggplot(mcdonalds, aes(x=item, y=calories))+geom_point(size=1.5)
ggplot(dairy_queen, aes(x=item,y=calories))+geom_point(size=2)
ggplot(mcdonalds, aes(x=calories))+geom_histogram(binwidth=10,fill="white", color="black")
ggplot(dairy_queen, aes(x=calories))+geom_histogram(binwidth=10,fill="white",color="blue")
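As a quick numeric complement to the histograms above (an optional sketch, not part of the original lab), the five-number summaries for each restaurant's calories can be printed with summary():
summary(mcdonalds$calories)
summary(dairy_queen$calories)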
dqcalmean <- mean(dairy_queen$calories)
dqcalmean
## [1] 520.2381
dqcalsd <- sd(dairy_queen$calories)
dqcalsd
## [1] 259.3377
ggplot(data = dairy_queen, aes(x = calories)) +
geom_blank() +
geom_histogram(aes(y = ..density..)) +
stat_function(fun = dnorm, args = c(mean = dqcalmean, sd = dqcalsd), col = "tomato")
# Evaluate mcdonalds
mccalmean <- mean(mcdonalds$calories)
mccalmean
## [1] 640.3509
mccalsd <- sd(mcdonalds$calories)
mccalsd
## [1] 410.6961
ggplot(data = mcdonalds, aes(x = calories)) +
geom_blank() +
geom_histogram(aes(y = ..density..)) +
stat_function(fun = dnorm, args = c(mean = mccalmean, sd = mccalsd), col = "purple")
In your description of the distributions, did you use words like bell-shaped or normal? It’s tempting to say so when faced with a unimodal symmetric distribution.
To see how accurate that description is, you can plot a normal distribution curve on top of a histogram to see how closely the data follow a normal distribution. This normal curve should have the same mean and standard deviation as the data. You’ll be focusing on calories from fat from Dairy Queen products, so let’s store them as a separate object and then calculate some statistics that will be referenced later.
dqmean <- mean(dairy_queen$cal_fat)
dqmean
## [1] 260.4762
dqsd <- sd(dairy_queen$cal_fat)
dqsd
## [1] 156.4851
mcmean <- mean(mcdonalds$cal_fat)
mcmean
## [1] 285.614
mcsd <- sd(mcdonalds$cal_fat)
mcsd
## [1] 220.8993
Next, you make a density histogram to use as the backdrop and use the lines function to overlay a normal probability curve. The difference between a frequency histogram and a density histogram is that while in a frequency histogram the heights of the bars add up to the total number of observations, in a density histogram the areas of the bars add up to 1. The area of each bar can be calculated as simply the height times the width of the bar. Using a density histogram allows us to properly overlay a normal distribution curve over the histogram since the curve is a normal probability density function that also has area under the curve of 1. Frequency and density histograms both display the same exact shape; they only differ in their y-axis. You can verify this by comparing the frequency histogram you constructed earlier and the density histogram created by the commands below.
ggplot(data = dairy_queen, aes(x = cal_fat)) +
geom_blank() +
geom_histogram(aes(y = ..density..)) +
stat_function(fun = dnorm, args = c(mean = dqmean, sd = dqsd), col = "tomato")
ggplot(data = mcdonalds, aes(x = cal_fat)) +
geom_blank() +
geom_histogram(aes(y = ..density..)) +
stat_function(fun = dnorm, args = c(mean = mcmean, sd = mcsd), col = "green")
After initializing a blank plot with geom_blank(), the ggplot2 package (within the tidyverse) allows us to add additional layers. The first layer is a density histogram. The second layer is a statistical function – the density of the normal curve, dnorm. We specify that we want the curve to have the same mean and standard deviation as the column of fat calories. The argument col simply sets the color for the line to be drawn. If we left it out, the line would be drawn in black.
Comparing the Dairy Queen fat-calorie density histogram with the overlaid normal curve suggests the data are roughly (nearly) normal. There are some apparently non-normal features: gaps in the data and a few bins that spike well above the curve with empty bins beside them.
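As a side check on the earlier claim that density-histogram bar areas sum to 1, here is a minimal sketch using base R's hist() (not part of the original lab):
# Bar area = bin height (density) times bin width; the areas should sum to 1
h <- hist(dairy_queen$cal_fat, plot = FALSE)
sum(h$density * diff(h$breaks))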
Eyeballing the shape of the histogram is one way to determine if the data appear to be nearly normally distributed, but it can be frustrating to decide just how close the histogram is to the curve. An alternative approach involves constructing a normal probability plot, also called a normal Q-Q plot for “quantile-quantile”.
ggplot(data = dairy_queen, aes(sample = cal_fat)) +
geom_line(stat = "qq")
ggplot(data = mcdonalds, aes(sample = cal_fat)) +
geom_line(stat = "qq")
ggplot(data = dairy_queen, aes(sample = calories)) +
geom_line(stat = "qq")
ggplot(data = mcdonalds, aes(sample = calories)) +
geom_line(stat = "qq")
This time, you can use the geom_line() layer, while specifying that you will be creating a Q-Q plot with the stat argument. It’s important to note that here, instead of using x inside aes(), you need to use sample.
The x-axis values correspond to the quantiles of a theoretically normal curve with mean 0 and standard deviation 1 (i.e., the standard normal distribution). The y-axis values correspond to the quantiles of the original unstandardized sample data. However, even if we were to standardize the sample data values, the Q-Q plot would look identical. A data set that is nearly normal will result in a probability plot where the points closely follow a diagonal line. Any deviations from normality leads to deviations of these points from that line.
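To make the quantile-quantile idea concrete, here is a small sketch (not part of the original lab) that builds the plot's coordinates by hand: the theoretical quantiles come from qnorm() and the sample quantiles are simply the sorted data.
n <- nrow(dairy_queen)
probs <- ppoints(n)                        # evenly spaced probabilities in (0, 1)
qq_by_hand <- data.frame(
  theoretical = qnorm(probs),              # quantiles of the standard normal
  observed = sort(dairy_queen$cal_fat)     # sample quantiles
)
head(qq_by_hand)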
The plot for Dairy Queen’s calories from fat shows points that tend to follow the line but with some errant points towards the upper tail. You’re left with the same problem that we encountered with the histogram above: how close is close enough?
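One optional visual aid for judging how close is close enough (a sketch, not part of the original lab) is ggplot2's geom_qq() paired with geom_qq_line(), which overlays a straight reference line on the same Q-Q points:
ggplot(dairy_queen, aes(sample = cal_fat)) +
  geom_qq() +
  geom_qq_line(col = "tomato")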
A useful way to address this question is to rephrase it as: what do probability plots look like for data that I know came from a normal distribution? We can answer this by simulating data from a normal distribution using rnorm.
sim_norm <- rnorm(n = nrow(dairy_queen), mean = dqmean, sd = dqsd)
sim_norm2 <- rnorm(n = nrow(mcdonalds), mean = mcmean, sd = mcsd)
# Calculate the mean and sd for the sodium values
dq2mean <- mean(dairy_queen$sodium)
dq2sd <- sd(dairy_queen$sodium)
mc2mean <- mean(mcdonalds$sodium)
mc2sd <- sd(mcdonalds$sodium)
cfmean <- mean(chickfila$sodium)
cfsd <- sd(chickfila$sodium)
sonicmean <- mean(sonic$sodium)
sonicsd <- sd(sonic$sodium)
armean <- mean(arbys$sodium)
arsd <- sd(arbys$sodium)
bkmean <- mean(burgerking$sodium)
bksd <- sd(burgerking$sodium)
swmean <- mean(subway$sodium)
swsd <- sd(subway$sodium)
tbmean <- mean(taco_bell$sodium)
tbsd <- sd(taco_bell$sodium)
sim_norm_na1 <- rnorm(n = nrow(mcdonalds), mean = mc2mean, sd = mc2sd)
sim_norm_na2 <- rnorm(n = nrow(chickfila), mean = cfmean, sd = cfsd)
sim_norm_na3 <- rnorm(n = nrow(sonic), mean = sonicmean, sd = sonicsd)
sim_norm_na4 <- rnorm(n = nrow(arbys), mean = armean, sd = arsd)
sim_norm_na5 <- rnorm(n = nrow(burgerking), mean = bkmean, sd = bksd)
sim_norm_na6 <- rnorm(n = nrow(dairy_queen), mean = dq2mean, sd = dq2sd)
sim_norm_na7 <- rnorm(n = nrow(subway), mean = swmean, sd = swsd)
sim_norm_na8 <- rnorm(n = nrow(taco_bell), mean = tbmean, sd = tbsd)
The first argument indicates how many numbers you’d like to generate, which we specify to be the same number of menu items in the dairy_queen data set using the nrow() function. The last two arguments determine the mean and standard deviation of the normal distribution from which the simulated sample will be generated. You can take a look at the shape of our simulated data set, sim_norm, as well as its normal probability plot.
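One caveat worth noting (not part of the original lab): rnorm() draws are random, so the simulated values change on every knit unless a seed is set before the calls to rnorm(), for example:
set.seed(606)                            # arbitrary seed chosen for illustration
rnorm(n = 5, mean = dqmean, sd = dqsd)   # returns the same five values after this seed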
Make a normal probability plot of sim_norm. Do all of the points fall on the line? How does this plot compare to the probability plot for the real data? (Since sim_norm is not a data frame, it can be put directly into the sample argument and the data argument can be dropped.)
# Dairy Queen fat calories
ggplot(data=dairy_queen, aes(sample = sim_norm)) +
geom_line(stat = "qq")
#Mcdonalds fat cal
ggplot(data=mcdonalds, aes(sample = sim_norm2)) +
geom_line(stat = "qq")
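Since sim_norm is a plain vector rather than a data frame, the same plot can also be written without a data argument, as the parenthetical above notes (a sketch):
ggplot(mapping = aes(sample = sim_norm)) +
  geom_line(stat = "qq")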
Even better than comparing the original plot to a single plot generated from a normal distribution is to compare it to many more plots using the following function. It shows the Q-Q plot corresponding to the original data in the top left corner, and the Q-Q plots of eight different simulated normal data sets. It may be helpful to click the zoom button in the plot window.
qqnormsim(sample = cal_fat, data = dairy_queen)
qqnormsim(sample = cal_fat, data = mcdonalds)
Does the normal probability plot for the calories from fat look similar to the plots created for the simulated data? That is, do the plots provide evidence that the calories are nearly normal?
The data seem to indicate that the Dairy Queen fat calories are nearly normal, with the distribution drifting a bit off the line at the upper end.
Using the same technique, determine whether or not the calories from McDonald’s menu appear to come from a normal distribution.
The fat-calorie distribution for McDonald's would only appear normal if we allowed a significant share of negative calories. Since real menu items cannot have negative calories, the actual distribution is not normal and shows a distinct skew.
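A quick way to back up that point (a sketch, not part of the original lab) is to check what share of the values simulated from this normal fit fall below zero fat calories:
mean(sim_norm2 < 0)   # proportion of simulated fat-calorie values below zero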
Okay, so now you have a slew of tools to judge whether or not a variable is normally distributed. Why should you care?
It turns out that statisticians know a lot about the normal distribution. Once you decide that a random variable is approximately normal, you can answer all sorts of questions about that variable related to probability. Take, for example, the question of, “What is the probability that a randomly chosen Dairy Queen product has more than 600 calories from fat?”
If we assume that the calories from fat from Dairy Queen’s menu are normally distributed (a very close approximation is also okay), we can find this probability by calculating a Z score and consulting a Z table (also called a normal probability table). In R, this is done in one step with the function pnorm().
1 - pnorm(q = 600, mean = dqmean, sd = dqsd)
## [1] 0.01501523
1 - pnorm(q = 600, mean = mcmean, sd = mcsd)
## [1] 0.07733771
Note that the function pnorm() gives the area under the normal curve below a given value, q, with a given mean and standard deviation. Since we’re interested in the probability that a Dairy Queen item has more than 600 calories from fat, we have to take one minus that probability.
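For reference, the same probability can be obtained by the Z-score route mentioned above: standardize 600 and then use the standard normal (a sketch, not part of the original lab).
z <- (600 - dqmean) / dqsd   # Z score for 600 fat calories at Dairy Queen
1 - pnorm(z)                 # matches 1 - pnorm(q = 600, mean = dqmean, sd = dqsd)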
Assuming a normal distribution has allowed us to calculate a theoretical probability. If we want to calculate the probability empirically, we simply need to determine how many observations fall above 600 then divide this number by the total sample size.
dairy_queen %>%
filter(cal_fat > 600) %>%
summarise(percent = n() / nrow(dairy_queen))
## # A tibble: 1 × 1
## percent
## <dbl>
## 1 0.0476
mcdonalds %>%
filter(cal_fat > 600) %>%
summarise(percent = n() / nrow(mcdonalds))
## # A tibble: 1 × 1
## percent
## <dbl>
## 1 0.0702
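The same empirical proportions can also be computed with one-liners, since the mean of a logical vector is the proportion of TRUE values (a sketch, not part of the original lab):
mean(dairy_queen$cal_fat > 600)
mean(mcdonalds$cal_fat > 600)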
Although the probabilities are not exactly the same, they are reasonably close. The closer that your distribution is to being normal, the more accurate the theoretical probabilities will be.
# Evaluate the normal curve against density histograms for McDonald's calories and fat calories
ggplot(data = mcdonalds, aes(x = calories)) +
geom_blank() +
geom_histogram(aes(y = ..density..)) +
stat_function(fun = dnorm, args = c(mean = mccalmean, sd = mccalsd), col = "purple")
ggplot(data = mcdonalds, aes(x = cal_fat)) +
geom_blank() +
geom_histogram(aes(y = ..density..)) +
stat_function(fun = dnorm, args = c(mean = mcmean, sd = mcsd), col = "green")
# Create a simulated normal distribution using the McDonald's fat-calorie mean and sd, and compare Q-Q plots
sim_norm2 <- rnorm(n = nrow(mcdonalds), mean = mcmean, sd = mcsd)
# Normal Q-Q plot of the actual McDonald's calories
ggplot(data = mcdonalds, aes(sample = calories)) +
geom_line(stat = "qq")
# QQ plot with a simulated normal distribution
ggplot(data=mcdonalds, aes(sample = sim_norm2)) +
geom_line(stat = "qq")
# Compare two methods of estimating the probability of exceeding various fat-calorie thresholds: theoretical normal vs. empirical proportion
1 - pnorm(q = 200, mean = mcmean, sd = mcsd)
## [1] 0.650833
1 - pnorm(q = 400, mean = mcmean, sd = mcsd)
## [1] 0.3022921
1 - pnorm(q = 600, mean = mcmean, sd = mcsd)
## [1] 0.07733771
1 - pnorm(q = 800, mean = mcmean, sd = mcsd)
## [1] 0.009940144
mcdonalds %>%
filter(cal_fat > 200) %>%
summarise(percent = n() / nrow(mcdonalds))
## # A tibble: 1 × 1
## percent
## <dbl>
## 1 0.561
mcdonalds %>%
filter(cal_fat > 400) %>%
summarise(percent = n() / nrow(mcdonalds))
## # A tibble: 1 × 1
## percent
## <dbl>
## 1 0.158
mcdonalds %>%
filter(cal_fat > 600) %>%
summarise(percent = n() / nrow(mcdonalds))
## # A tibble: 1 × 1
## percent
## <dbl>
## 1 0.0702
mcdonalds %>%
filter(cal_fat > 800) %>%
summarise(percent = n() / nrow(mcdonalds))
## # A tibble: 1 × 1
## percent
## <dbl>
## 1 0.0351
Looking at the Q-Q plots, Arby's appears to have the distribution closest to normal; the others deviate more from a straight line or show more discontinuous jumps away from it.
#Mcdonalds sodium
ggplot(data=mcdonalds, aes(sample = sim_norm_na1)) +
geom_line(stat = "qq")
#chick fila sodium
ggplot(data=chickfila, aes(sample = sim_norm_na2)) +
geom_line(stat = "qq")
# Sonic sodium
ggplot(data=sonic, aes(sample = sim_norm_na3)) +
geom_line(stat = "qq")
#arbys sodium
ggplot(data=arbys, aes(sample = sim_norm_na4)) +
geom_line(stat = "qq")
# Burger King sodium
ggplot(data=burgerking, aes(sample = sim_norm_na5)) +
geom_line(stat = "qq")
#dairy queen sodium
ggplot(data=dairy_queen, aes(sample = sim_norm_na6)) +
geom_line(stat = "qq")
#subway sodium
ggplot(data=subway, aes(sample = sim_norm_na7)) +
geom_line(stat = "qq")
# Taco Bell sodium
ggplot(data=taco_bell, aes(sample = sim_norm_na8)) +
geom_line(stat = "qq")
The incremental steps in sodium are likely related to how standard menus are built. For example, a basic burger can be extended by adding bacon or cheese, then fries, then a larger order of fries or a double burger, with each addition producing a step up in sodium. The same applies at a taco restaurant: ordering one, two, or three items adds a step in sodium for each menu item (a different item could also be lower in sodium, but it is still a discrete step rather than a continuous value).
Looking at the histogram of total carbs at McDonald's, the distribution is right skewed, with a long tail extending to the right.
mc3mean <- mean(mcdonalds$total_carb)
mc3sd <- sd(mcdonalds$total_carb)
ggplot(data = mcdonalds, aes(x = total_carb)) +
geom_blank() +
geom_histogram(aes(y = ..density..)) +
stat_function(fun = dnorm, args = c(mean = mc3mean, sd = mc3sd), col = "tomato")
sim_norm_carb <- rnorm(n = nrow(mcdonalds), mean = mc3mean, sd = mc3sd)
ggplot(data=mcdonalds, aes(sample = sim_norm_carb)) +
geom_line(stat = "qq")
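As a small numeric check on the right-skew observation (a sketch, not part of the original lab), the mean of a right-skewed variable typically sits above its median:
mean(mcdonalds$total_carb)
median(mcdonalds$total_carb)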