library(tidyverse)
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.1 ──
## ✓ ggplot2 3.3.5 ✓ purrr 0.3.4
## ✓ tibble 3.1.4 ✓ dplyr 1.0.7
## ✓ tidyr 1.1.3 ✓ stringr 1.4.0
## ✓ readr 2.0.1 ✓ forcats 0.5.1
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## x dplyr::filter() masks stats::filter()
## x dplyr::lag() masks stats::lag()
library(openintro)
## Loading required package: airports
## Loading required package: cherryblossom
## Loading required package: usdata
head(fastfood)
## # A tibble: 6 × 17
##   restaurant item       calories cal_fat total_fat sat_fat trans_fat cholesterol
##   <chr>      <chr>         <dbl>   <dbl>     <dbl>   <dbl>     <dbl>       <dbl>
## 1 Mcdonalds  Artisan G…      380      60         7       2       0            95
## 2 Mcdonalds  Single Ba…      840     410        45      17       1.5         130
## 3 Mcdonalds  Double Ba…     1130     600        67      27       3           220
## 4 Mcdonalds  Grilled B…      750     280        31      10       0.5         155
## 5 Mcdonalds  Crispy Ba…      920     410        45      12       0.5         120
## 6 Mcdonalds  Big Mac         540     250        28      10       1            80
## # … with 9 more variables: sodium <dbl>, total_carb <dbl>, fiber <dbl>,
## #   sugar <dbl>, protein <dbl>, vit_a <dbl>, vit_c <dbl>, calcium <dbl>,
## #   salad <chr>
You’ll first be focusing on just three columns to get started: restaurant, calories, and cal_fat (calories from fat). Let’s begin with just the products from McDonalds and Dairy Queen.
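If you want to narrow the data down to just those three columns, a quick select() call does it (shown here only as an aside; the rest of the lab works with the full tibble):
fastfood %>%
  select(restaurant, calories, cal_fat)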
mcdonalds <- fastfood %>%
  filter(restaurant == "Mcdonalds")
dairy_queen <- fastfood %>%
  filter(restaurant == "Dairy Queen")
# Mcdonalds plot
mcdonalds %>%
  ggplot() +
  geom_histogram(aes(x = cal_fat), bins = 5) +
  ggtitle("McDonalds Calories from Fat Distribution") +
  xlab("Calories from fat") +
  ylab("Frequency")
# Dairy Queen plot
dairy_queen %>%
  ggplot() +
  geom_histogram(aes(x = cal_fat), bins = 6) +
  ggtitle("Dairy Queen Calories from Fat Distribution") +
  xlab("Calories from fat") +
  ylab("Frequency")
The distributions for both McDonalds and Dairy Queen are right skewed. They have similar centers and shapes, with the majority of values appearing to fall around the 200–500 mark. McDonalds has a higher maximum calories-from-fat value than Dairy Queen does.
To see how accurate that description is, you can plot a normal distribution curve on top of a histogram to see how closely the data follow a normal distribution. This normal curve should have the same mean and standard deviation as the data. You’ll be focusing on calories from fat from Dairy Queen products, so let’s calculate the summary statistics that will be referenced later.
dqmean <- mean(dairy_queen$cal_fat)  # sample mean of calories from fat
dqsd <- sd(dairy_queen$cal_fat)      # sample standard deviation
Next, you make a density histogram to use as the backdrop and use stat_function() to overlay a normal probability curve. The difference between a frequency histogram and a density histogram is that while in a frequency histogram the heights of the bars add up to the total number of observations, in a density histogram the areas of the bars add up to 1. The area of each bar is simply the height times the width of the bar. Using a density histogram allows us to properly overlay a normal distribution curve, since the curve is a normal probability density function that also has an area under the curve of 1. Frequency and density histograms both display exactly the same shape; they differ only in their y-axis. You can verify this by comparing the frequency histogram you constructed earlier with the density histogram created by the commands below.
ggplot(data = dairy_queen, aes(x = cal_fat)) +
  geom_blank() +
  geom_histogram(aes(y = ..density..)) +
  stat_function(fun = dnorm, args = c(mean = dqmean, sd = dqsd), col = "tomato")
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
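As a quick sanity check (a small aside using base R’s hist(), not part of the lab code), you can confirm that the bar areas of a density histogram really do sum to 1:
h <- hist(dairy_queen$cal_fat, plot = FALSE)  # compute the bins without drawing
sum(h$density * diff(h$breaks))               # total area of all the bars
## [1] 1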
The data do seem to roughly follow a normal distribution, though not everywhere. The peak at the center matches up with the histogram fairly well, but there are high outliers to the right, which fits my description above of the histogram being slightly skewed to the right.
Eyeballing the shape of the histogram is one way to determine if the data appear to be nearly normally distributed, but it can be frustrating to decide just how close the histogram is to the curve. An alternative approach involves constructing a normal probability plot, also called a normal Q-Q plot for “quantile-quantile”.
ggplot(data = dairy_queen, aes(sample = cal_fat)) +
  geom_line(stat = "qq")
This time, you use the geom_line() layer while specifying that you are creating a Q-Q plot with the stat argument. It’s important to note that here, instead of mapping x inside aes(), you need to map sample.
The x-axis values correspond to the quantiles of a theoretically normal curve with mean 0 and standard deviation 1 (i.e., the standard normal distribution). The y-axis values correspond to the quantiles of the original unstandardized sample data. However, even if we were to standardize the sample data values, the Q-Q plot would look identical. A data set that is nearly normal will result in a probability plot where the points closely follow a diagonal line. Any deviations from normality lead to deviations of these points from that line.
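To see the standardization point concretely, here is a minimal sketch (z_cal_fat is a name introduced just for this illustration): the Q-Q plot of the standardized values has exactly the same shape; only the y-axis scale changes.
z_cal_fat <- (dairy_queen$cal_fat - dqmean) / dqsd  # standardize the sample
ggplot(data = NULL, aes(sample = z_cal_fat)) +
  geom_line(stat = "qq")  # same shape as the unstandardized plot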
The plot for Dairy Queen’s calories from fat shows points that tend to follow the line but with some errant points towards the upper tail. You’re left with the same problem that we encountered with the histogram above: how close is close enough?
A useful way to address this question is to rephrase it as: what do probability plots look like for data that I know came from a normal distribution? We can answer this by simulating data from a normal distribution using rnorm().
# Results vary from run to run; call set.seed() first for reproducible draws
sim_norm <- rnorm(n = nrow(dairy_queen), mean = dqmean, sd = dqsd)
ggplot(data = NULL, aes(sample = sim_norm)) +
  geom_line(stat = "qq")
The simulated points fall on a line at the beginning, but they drift away from the line in the middle and at the top.
Even better than comparing the original plot to a single plot generated from a normal distribution is to compare it to many more plots using the following function. It shows the Q-Q plot corresponding to the original data in the top left corner, and Q-Q plots of 8 different simulated normal data sets. It may be helpful to click the zoom button in the plot window.
qqnormsim(sample = cal_fat, data = dairy_queen)
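To see roughly what qqnormsim() is doing, here is a hand-rolled sketch of the same idea (the real function’s layout details may differ; this simply binds the observed sample to eight simulated ones and facets):
sims <- map_dfr(1:8, ~ tibble(
  panel = paste("sim", .x),
  value = rnorm(nrow(dairy_queen), mean = dqmean, sd = dqsd)
))
bind_rows(tibble(panel = "data", value = dairy_queen$cal_fat), sims) %>%
  ggplot(aes(sample = value)) +
  geom_line(stat = "qq") +
  facet_wrap(~panel)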
The normal probability plot of the original data does look fairly similar to the simulated plots, sim 7 in particular, which suggests that the calories from fat are nearly normal.
Okay, so now you have a slew of tools to judge whether or not a variable is normally distributed. Why should you care?
It turns out that statisticians know a lot about the normal distribution. Once you decide that a random variable is approximately normal, you can answer all sorts of questions about that variable related to probability. Take, for example, the question: “What is the probability that a randomly chosen Dairy Queen product has more than 600 calories from fat?”
If we assume that the calories from fat from Dairy Queen’s menu are normally distributed (a very close approximation is also okay), we can find this probability by calculating a Z score and consulting a Z table (also called a normal probability table). In R, this is done in one step with the function pnorm().
1 - pnorm(q = 600, mean = dqmean, sd = dqsd)
## [1] 0.01501523
Note that the function pnorm() gives the area under the normal curve below a given value, q, with a given mean and standard deviation. Since we’re interested in the probability that a Dairy Queen item has more than 600 calories from fat, we have to take one minus that probability.
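For the record, the same answer can be reached via an explicit Z score, or by asking pnorm() for the upper tail directly with its lower.tail argument (a small aside, not part of the lab):
# Explicit Z score against the standard normal
z <- (600 - dqmean) / dqsd
1 - pnorm(z)
## [1] 0.01501523
# Upper tail directly
pnorm(q = 600, mean = dqmean, sd = dqsd, lower.tail = FALSE)
## [1] 0.01501523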
Assuming a normal distribution has allowed us to calculate a theoretical probability. If we want to calculate the probability empirically, we simply need to determine how many observations fall above 600 and then divide this number by the total sample size.
dairy_queen %>%
  filter(cal_fat > 600) %>%
  summarise(percent = n() / nrow(dairy_queen))
## # A tibble: 1 × 1
##   percent
##     <dbl>
## 1  0.0476
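A compact equivalent worth knowing (just an alternative phrasing of the same computation): averaging a logical comparison with mean() returns the same 0.0476 proportion.
mean(dairy_queen$cal_fat > 600)  # proportion of TRUEs = empirical probability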
Although the probabilities are not exactly the same, they are reasonably close. The closer that your distribution is to being normal, the more accurate the theoretical probabilities will be.
Probability that a McDonald’s menu item has cholesterol higher than 300 mg:
# Theoretical
mcdChol_mean <- mean(mcdonalds$cholesterol)
mcdChol_sd <- sd(mcdonalds$cholesterol)
1 - pnorm(q = 300, mean = mcdChol_mean, sd = mcdChol_sd)
## [1] 0.008372092
# Empirical
mcdonalds %>%
  filter(cholesterol > 300) %>%
  summarise(percent = n() / nrow(mcdonalds))
## # A tibble: 1 × 1
##   percent
##     <dbl>
## 1  0.0175
Probability that an item on the Dairy Queen menu will have more than 1,000 calories:
# Theoretical
dqCalories_mean <- mean(dairy_queen$calories)
dqCalories_sd <- sd(dairy_queen$calories)
1 - pnorm(q = 1000, mean = dqCalories_mean, sd = dqCalories_sd)
## [1] 0.03216034
# Empirical
dairy_queen %>%
  filter(calories > 1000) %>%
  summarise(percent = n() / nrow(dairy_queen))
## # A tibble: 1 × 1
##   percent
##     <dbl>
## 1  0.0714
The theoretical and empirical probabilities agree more closely for the McDonald’s cholesterol question (0.0084 vs. 0.0175) than for the Dairy Queen calories question (0.0322 vs. 0.0714).
# facet_wrap() splits the plot by restaurant, so no group_by() is needed
fastfood %>%
  ggplot(aes(sample = sodium)) +
  geom_line(stat = "qq") +
  facet_wrap(~restaurant)
It looks like Burger King’s sodium distribution is the closest to normal.
The stepwise appearance of these distributions could arise because many of the foods a restaurant serves have similar sodium levels within groups of items. For example, all of a restaurant’s burgers could have similar sodium content, and its salads or fries could be grouped in their sodium content as well.
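One exploratory way to probe that explanation (a sketch, not part of the lab): compare the number of menu items to the number of distinct sodium values per restaurant; heavy repetition of values would support the grouping idea.
fastfood %>%
  group_by(restaurant) %>%
  summarise(n_items = n(), n_distinct_sodium = n_distinct(sodium))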
chickfilA <- fastfood %>%
  filter(restaurant == "Chick Fil-A")
# Probability plot
ggplot(data = chickfilA, aes(sample = total_carb)) +
  geom_line(stat = "qq") +
  ggtitle("Chick Fil-A Total Carbohydrates")
# Histogram
chickfilaHist <- chickfilA %>%
  ggplot() +
  geom_histogram(aes(x = total_carb), bins = 20) +
  xlab("Total Carbs") +
  ylab("Frequency") +
  ggtitle("Total Carbs Chick Fil-A Histogram")
chickfilaHist
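If judging straightness by eye is hard, ggplot2 also ships geom_qq_line(), which draws a reference line for you; adding it to the Chick Fil-A probability plot is a one-layer change (shown as an optional extension, not part of the lab’s required code):
ggplot(data = chickfilA, aes(sample = total_carb)) +
  geom_qq() +                     # points instead of a connected line
  geom_qq_line(col = "tomato") +  # reference line through the quartiles
  ggtitle("Chick Fil-A Total Carbohydrates")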