Introduction to data

Author

Amanda Rose Knudsen

Some define statistics as the field that focuses on turning information into knowledge. The first step in that process is to summarize and describe the raw information – the data. In this lab we explore flights, specifically a random sample of domestic flights that departed from the three major New York City airports in 2013. We will generate simple graphical and numerical summaries of data on these flights and explore delay times. Since this is a large data set, along the way you’ll also learn the indispensable skills of data processing and subsetting.

Getting started

Load packages

In this lab, we will explore and visualize the data using the tidyverse suite of packages. The data can be found in the companion package for OpenIntro labs, openintro.

Let’s load the packages.

library(tidyverse)
library(openintro)

The data

The Bureau of Transportation Statistics (BTS) is a statistical agency that is a part of the Research and Innovative Technology Administration (RITA). As its name implies, BTS collects and makes transportation data available, such as the flights data we will be working with in this lab.

First, we’ll view the nycflights data frame. Type the following in your console to load the data:

data(nycflights)

The data set nycflights that shows up in your workspace is a data matrix, with each row representing an observation and each column representing a variable. R calls this data format a data frame, which is a term that will be used throughout the labs. For this data set, each observation is a single flight.
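
If you want a quick sense of the size of this data frame, you can check its dimensions; this is a small aside, and the counts in the comment come from the glimpse output shown later in this lab.

dim(nycflights)
# 32735 rows (observations, i.e. flights) and 16 columns (variables)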

To view the names of the variables, type the command

names(nycflights)
 [1] "year"      "month"     "day"       "dep_time"  "dep_delay" "arr_time" 
 [7] "arr_delay" "carrier"   "tailnum"   "flight"    "origin"    "dest"     
[13] "air_time"  "distance"  "hour"      "minute"   

This returns the names of the variables in this data frame. The codebook (description of the variables) can be accessed by pulling up the help file:

?nycflights

One of the variables refers to the carrier (i.e. airline) of the flight, which is coded according to the following system.

  • carrier: Two letter carrier abbreviation.
    • 9E: Endeavor Air Inc.
    • AA: American Airlines Inc.
    • AS: Alaska Airlines Inc.
    • B6: JetBlue Airways
    • DL: Delta Air Lines Inc.
    • EV: ExpressJet Airlines Inc.
    • F9: Frontier Airlines Inc.
    • FL: AirTran Airways Corporation
    • HA: Hawaiian Airlines Inc.
    • MQ: Envoy Air
    • OO: SkyWest Airlines Inc.
    • UA: United Air Lines Inc.
    • US: US Airways Inc.
    • VX: Virgin America
    • WN: Southwest Airlines Co.
    • YV: Mesa Airlines Inc.

Remember that you can use glimpse to take a quick peek at your data to understand its contents better.

glimpse(nycflights)
Rows: 32,735
Columns: 16
$ year      <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, …
$ month     <int> 6, 5, 12, 5, 7, 1, 12, 8, 9, 4, 6, 11, 4, 3, 10, 1, 2, 8, 10…
$ day       <int> 30, 7, 8, 14, 21, 1, 9, 13, 26, 30, 17, 22, 26, 25, 21, 23, …
$ dep_time  <int> 940, 1657, 859, 1841, 1102, 1817, 1259, 1920, 725, 1323, 940…
$ dep_delay <dbl> 15, -3, -1, -4, -3, -3, 14, 85, -10, 62, 5, 5, -2, 115, -4, …
$ arr_time  <int> 1216, 2104, 1238, 2122, 1230, 2008, 1617, 2032, 1027, 1549, …
$ arr_delay <dbl> -4, 10, 11, -34, -8, 3, 22, 71, -8, 60, -4, -2, 22, 91, -6, …
$ carrier   <chr> "VX", "DL", "DL", "DL", "9E", "AA", "WN", "B6", "AA", "EV", …
$ tailnum   <chr> "N626VA", "N3760C", "N712TW", "N914DL", "N823AY", "N3AXAA", …
$ flight    <int> 407, 329, 422, 2391, 3652, 353, 1428, 1407, 2279, 4162, 20, …
$ origin    <chr> "JFK", "JFK", "JFK", "JFK", "LGA", "LGA", "EWR", "JFK", "LGA…
$ dest      <chr> "LAX", "SJU", "LAX", "TPA", "ORF", "ORD", "HOU", "IAD", "MIA…
$ air_time  <dbl> 313, 216, 376, 135, 50, 138, 240, 48, 148, 110, 50, 161, 87,…
$ distance  <dbl> 2475, 1598, 2475, 1005, 296, 733, 1411, 228, 1096, 820, 264,…
$ hour      <dbl> 9, 16, 8, 18, 11, 18, 12, 19, 7, 13, 9, 13, 8, 20, 12, 20, 6…
$ minute    <dbl> 40, 57, 59, 41, 2, 17, 59, 20, 25, 23, 40, 20, 9, 54, 17, 24…

The nycflights data frame is a massive trove of information. Let’s think about some questions we might want to answer with these data:

  • How delayed were flights that were headed to Los Angeles?
  • How do departure delays vary by month?
  • Which of the three major NYC airports has the best on time percentage for departing flights?

Analysis

Departure delays

Let’s start by examining the distribution of departure delays of all flights with a histogram.

ggplot(data = nycflights, aes(x = dep_delay)) +
  geom_histogram()

This function says to plot the dep_delay variable from the nycflights data frame on the x-axis. It also defines a geom (short for geometric object), which describes the type of plot you will produce.

Histograms are generally a very good way to see the shape of a single distribution of numerical data, but that shape can change depending on how the data is split between the different bins. You can easily define the binwidth you want to use:

ggplot(data = nycflights, aes(x = dep_delay)) +
  geom_histogram(binwidth = 15)

ggplot(data = nycflights, aes(x = dep_delay)) +
  geom_histogram(binwidth = 150)

  1. Look carefully at these three histograms. How do they compare? Are features revealed in one that are obscured in another?

Each histogram portrays a different representation of the distribution of the data. In all three, the data density is concentrated on the left, with lower density trailing off to the right: the shape is right skewed. All three histograms also show a unimodal distribution (one prominent peak). However, in the first histogram and the third histogram (which has a binwidth of 150), the left-most bin is the most prominent, while in the second histogram (binwidth of 15) the second bin from the left is the most prominent. This indicates that narrower binwidths reveal more detail about where the peak of the distribution occurs. This feature, which is observable in the second histogram, is obscured in both the first and the third histogram.

If you want to visualize only the delays of flights headed to Los Angeles, you need to first filter the data for flights with that destination (dest == "LAX") and then make a histogram of the departure delays of only those flights.

lax_flights <- nycflights %>%
  filter(dest == "LAX")
ggplot(data = lax_flights, aes(x = dep_delay)) +
  geom_histogram()

Let’s decipher these two commands (OK, so it might look like four lines, but the first two physical lines of code are actually part of the same command. It’s common to add a break to a new line after %>% to help readability).

  • Command 1: Take the nycflights data frame, filter for flights headed to LAX, and save the result as a new data frame called lax_flights.
    • == means “if it’s equal to”.
    • LAX is in quotation marks since it is a character string.
  • Command 2: Basically the same ggplot call from earlier for making a histogram, except that it uses the smaller data frame for flights headed to LAX instead of all flights.
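
As an aside, the same filtering step can also be written without the pipe; this sketch is equivalent to Command 1 above:

lax_flights <- filter(nycflights, dest == "LAX")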

Logical operators: Filtering for certain observations (e.g. flights from a particular airport) is often of interest in data frames where we might want to examine observations with certain characteristics separately from the rest of the data. To do so, you can use the filter function and a series of logical operators. The most commonly used logical operators for data analysis are as follows:

  • == means “equal to”
  • != means “not equal to”
  • > or < means “greater than” or “less than”
  • >= or <= means “greater than or equal to” or “less than or equal to”
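
For instance, here is a minimal sketch combining two of these operators in a single filter call (the object name long_evening_flights and the particular thresholds are just illustrative choices, not part of the lab):

long_evening_flights <- nycflights %>%
  filter(distance >= 1000, dep_time > 1800)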

You can also obtain numerical summaries for these flights:

lax_flights %>%
  summarise(mean_dd   = mean(dep_delay), 
            median_dd = median(dep_delay), 
            n         = n())
# A tibble: 1 × 3
  mean_dd median_dd     n
    <dbl>     <dbl> <int>
1    9.78        -1  1583

Note that in the summarise function you created a list of three different numerical summaries that you were interested in. The names of these elements are user defined, like mean_dd, median_dd, n, and you can customize these names as you like (just don’t use spaces in your names). Calculating these summary statistics also requires that you know the function calls. Note that n() reports the sample size.

Summary statistics: Some useful function calls for summary statistics for a single numerical variable are as follows:

  • mean
  • median
  • sd
  • var
  • IQR
  • min
  • max

Note that each of these functions takes a single vector as an argument and returns a single value.
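
For instance, a short sketch applying a few of these functions to the departure delays of the LAX flights (the column names such as sd_dd are just illustrative):

lax_flights %>%
  summarise(sd_dd  = sd(dep_delay),
            iqr_dd = IQR(dep_delay),
            min_dd = min(dep_delay),
            max_dd = max(dep_delay))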

You can also filter based on multiple criteria. Suppose you are interested in flights headed to San Francisco (SFO) in February:

sfo_feb_flights <- nycflights %>%
  filter(dest == "SFO", month == 2)

Note that you can separate the conditions using commas if you want flights that are both headed to SFO and in February. If you are interested in either flights headed to SFO or in February, you can use the | instead of the comma.
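
For example, a minimal sketch of the "or" version, which keeps flights that are headed to SFO, departed in February, or both (the object name sfo_or_feb_flights is just an illustrative choice):

sfo_or_feb_flights <- nycflights %>%
  filter(dest == "SFO" | month == 2)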

  1. Create a new data frame that includes flights headed to SFO in February, and save this data frame as sfo_feb_flights. How many flights meet these criteria?
sfo_feb_flights <- nycflights |> 
  filter(dest == "SFO", month == 2) 

sfo_feb_flights
# A tibble: 68 × 16
    year month   day dep_time dep_delay arr_time arr_delay carrier tailnum
   <int> <int> <int>    <int>     <dbl>    <int>     <dbl> <chr>   <chr>  
 1  2013     2    18     1527        57     1903        48 DL      N711ZX 
 2  2013     2     3      613        14     1008        38 UA      N502UA 
 3  2013     2    15      955        -5     1313       -28 DL      N717TW 
 4  2013     2    18     1928        15     2239        -6 UA      N24212 
 5  2013     2    24     1340         2     1644       -21 UA      N76269 
 6  2013     2    25     1415       -10     1737       -13 UA      N532UA 
 7  2013     2     7     1032         1     1352       -10 B6      N627JB 
 8  2013     2    15     1805        20     2122         2 AA      N335AA 
 9  2013     2    13     1056        -4     1412       -13 UA      N532UA 
10  2013     2     8      656        -4     1039        -6 DL      N710TW 
# ℹ 58 more rows
# ℹ 7 more variables: flight <int>, origin <chr>, dest <chr>, air_time <dbl>,
#   distance <dbl>, hour <dbl>, minute <dbl>

We can see that there are 68 observations in the tibble sfo_feb_flights: there are 68 flights that are headed to SFO in February.

  1. Describe the distribution of the arrival delays of these flights using a histogram and appropriate summary statistics. Hint: The summary statistics you use should depend on the shape of the distribution.
ggplot(sfo_feb_flights, aes(x = arr_delay)) +
  geom_histogram(binwidth = 10)

sfo_feb_flights |> 
  summarise(median_arrdelay = median(arr_delay, na.rm = TRUE), 
            iqr_arrdelay    = IQR(arr_delay, na.rm = TRUE),
            stdev_arrdelay  = sd(arr_delay, na.rm = TRUE),
            mean_arrdelay   = mean(arr_delay, na.rm = TRUE),
            min_arrdelay    = min(arr_delay, na.rm = TRUE),
            max_arrdelay    = max(arr_delay, na.rm = TRUE),
            n         = n()
  )
# A tibble: 1 × 7
  median_arrdelay iqr_arrdelay stdev_arrdelay mean_arrdelay min_arrdelay
            <dbl>        <dbl>          <dbl>         <dbl>        <dbl>
1             -11         23.2           36.3          -4.5          -66
# ℹ 2 more variables: max_arrdelay <dbl>, n <int>

The distribution of arrival delays for flights headed to SFO in February is unimodal and right skewed (positively skewed, with a longer tail toward larger values on the x-axis). The median is -11 minutes and the IQR is 23.2 minutes. The mean is -4.5 minutes; in a positively skewed distribution the median typically sits to the left of (is smaller than) the mean, as it does here. Because the mean and standard deviation are "heavily influenced" by outliers (quoting our OpenIntro Statistics textbook, page 51), the median and IQR are the more appropriate, robust summary statistics for this distribution. The minimum arrival delay for these flights is -66 minutes, the maximum is 196 minutes, and, as noted above, there are 68 flights in this subset.

Another useful technique is quickly calculating summary statistics for various groups in your data frame. For example, we can modify the above command using the group_by function to get the same summary stats for each origin airport:

sfo_feb_flights %>%
  group_by(origin) %>%
  summarise(
    median_dd = median(dep_delay), 
    iqr_dd = IQR(dep_delay), 
    n_flights = n()
    )
# A tibble: 2 × 4
  origin median_dd iqr_dd n_flights
  <chr>      <dbl>  <dbl>     <int>
1 EWR          0.5   5.75         8
2 JFK         -2.5  15.2         60

Here, we first grouped the data by origin and then calculated the summary statistics.

  1. Calculate the median and interquartile range for arr_delay of flights in the sfo_feb_flights data frame, grouped by carrier. Which carrier has the most variable arrival delays?
sfo_feb_flights |> 
  group_by(carrier) |> 
  summarise(
    median_arrdelay = median(arr_delay, na.rm = TRUE),
    iqr_arrdelay = IQR(arr_delay, na.rm = TRUE)
  ) 
# A tibble: 5 × 3
  carrier median_arrdelay iqr_arrdelay
  <chr>             <dbl>        <dbl>
1 AA                  5           17.5
2 B6                -10.5         12.2
3 DL                -15           22  
4 UA                -10           22  
5 VX                -22.5         21.2
ggplot(
  sfo_feb_flights, aes(x = carrier, y = arr_delay)
) +
  geom_boxplot()

The median is the midpoint of the data and is a robust measure of center, while the IQR is a robust measure of variability: a larger IQR indicates greater variability and a smaller IQR indicates values that sit closer to the median. Judging by the IQR in the table above, Delta (DL) and United (UA) are tied for the most variable arrival delays, each with an IQR of 22; Virgin America (VX) has the lowest median arrival delay (-22.5) but a slightly smaller IQR (21.2).

Departure delays by month

Which month would you expect to have the highest average delay departing from an NYC airport?

Let’s think about how you could answer this question:

  • First, calculate monthly averages for departure delays. With the new language you are learning, you could
    • group_by months, then
    • summarise mean departure delays.
  • Then, you could arrange these average delays in descending order
nycflights %>%
  group_by(month) %>%
  summarise(mean_dd = mean(dep_delay)) %>%
  arrange(desc(mean_dd))
# A tibble: 12 × 2
   month mean_dd
   <int>   <dbl>
 1     7   20.8 
 2     6   20.4 
 3    12   17.4 
 4     4   14.6 
 5     3   13.5 
 6     5   13.3 
 7     8   12.6 
 8     2   10.7 
 9     1   10.2 
10     9    6.87
11    11    6.10
12    10    5.88
  1. Suppose you really dislike departure delays and you want to schedule your travel in a month that minimizes your potential departure delay leaving NYC. One option is to choose the month with the lowest mean departure delay. Another option is to choose the month with the lowest median departure delay. What are the pros and cons of these two choices?
nycflights  |> 
  group_by(month) |> 
  summarise(median_dd = median(dep_delay, na.rm = TRUE))  |> 
  arrange(desc(median_dd))
# A tibble: 12 × 2
   month median_dd
   <int>     <dbl>
 1    12         1
 2     6         0
 3     7         0
 4     3        -1
 5     5        -1
 6     8        -1
 7     1        -2
 8     2        -2
 9     4        -2
10    11        -2
11     9        -3
12    10        -3

The mean is very susceptible to the influence of outliers (extreme observations), so choosing the month with the lowest mean departure delay (October) accounts for every flight's delay, including the extreme ones; however, a few very long delays can pull the mean well away from what a typical traveler experiences. If we were instead to look solely at the median, we would get a sense of the 'typical' flight's delay in each month, but because the median is a robust measure of center we would lose any sense of how severe the extreme delays are. The good news is that October has the lowest mean departure delay and ties (with September) for the lowest median, so we can select October with greater confidence if we really want to avoid departure delays.
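
If you want to compare the two statistics side by side, one small sketch (using the same nycflights data frame) is to compute both in a single summarise call:

nycflights %>%
  group_by(month) %>%
  summarise(mean_dd   = mean(dep_delay),
            median_dd = median(dep_delay)) %>%
  arrange(mean_dd, median_dd)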

On time departure rate for NYC airports

Suppose you will be flying out of NYC and want to know which of the three major NYC airports has the best on time departure rate of departing flights. Also suppose that for you, a flight that is delayed for less than 5 minutes is basically “on time.” You consider any flight delayed for 5 minutes or more to be “delayed”.

In order to determine which airport has the best on time departure rate, you can

  • first classify each flight as “on time” or “delayed”,
  • then group flights by origin airport,
  • then calculate on time departure rates for each origin airport,
  • and finally arrange the airports in descending order for on time departure percentage.

Let’s start with classifying each flight as “on time” or “delayed” by creating a new variable with the mutate function.

nycflights <- nycflights %>%
  mutate(dep_type = ifelse(dep_delay < 5, "on time", "delayed"))

The first argument in the mutate function is the name of the new variable we want to create, in this case dep_type. Then if dep_delay < 5, we classify the flight as "on time" and "delayed" if not, i.e. if the flight is delayed for 5 or more minutes.

Note that we are also overwriting the nycflights data frame with the new version of this data frame that includes the new dep_type variable.
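
A quick way to sanity check the new variable (a small aside, not required by the lab) is to count how many flights fall into each category:

nycflights %>%
  count(dep_type)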

We can handle all of the remaining steps in one code chunk:

nycflights %>%
  group_by(origin) %>%
  summarise(ot_dep_rate = sum(dep_type == "on time") / n()) %>%
  arrange(desc(ot_dep_rate))
# A tibble: 3 × 2
  origin ot_dep_rate
  <chr>        <dbl>
1 LGA          0.728
2 JFK          0.694
3 EWR          0.637
  1. If you were selecting an airport simply based on on time departure percentage, which NYC airport would you choose to fly out of?

You can also visualize the distribution of on time departure rate across the three airports using a segmented bar plot.

ggplot(data = nycflights, aes(x = origin, fill = dep_type)) +
  geom_bar()

If we were selecting an airport simply based on on time departure percentage, we would choose to fly out of LGA (LaGuardia Airport). LGA has the highest on time departure percentage at 72.8%, followed by JFK at 69.4%, and then EWR (Newark Airport) at 63.7%. The segmented bar plot shows that Newark (EWR) has the largest number of departures overall, followed by JFK, while LGA has the fewest. That said, based on the on time departure percentages calculated above, we would still choose LGA (LaGuardia Airport).
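
If you would rather see the on time proportions directly instead of raw counts, one variant of the plot above (a sketch using ggplot2's position = "fill" argument) is:

ggplot(data = nycflights, aes(x = origin, fill = dep_type)) +
  geom_bar(position = "fill")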


More Practice

  1. Mutate the data frame so that it includes a new variable that contains the average speed, avg_speed traveled by the plane for each flight (in mph). Hint: Average speed can be calculated as distance divided by number of hours of travel, and note that air_time is given in minutes.
nycflights <- nycflights |> 
  mutate(
    avg_speed = distance / (air_time / 60)
  ) |> 
  relocate(avg_speed)

In the code chunk above, I created avg_speed by dividing distance by the number of hours of travel (air_time is divided by 60 to convert it from minutes to hours before it is used as the divisor).

  1. Make a scatterplot of avg_speed vs. distance. Describe the relationship between average speed and distance. Hint: Use geom_point().
ggplot(
  nycflights, aes(x = distance, y = avg_speed)
  ) + 
  geom_point()

We can observe that there is a positive relationship (positive association) between average speed and distance (a larger distance is related to a higher average speed, and a smaller distance is related to a lower average speed).

  1. Replicate the following plot. Hint: The data frame plotted only contains flights from American Airlines, Delta Airlines, and United Airlines, and the points are colored by carrier. Once you replicate the plot, determine (roughly) what the cutoff point is for departure delays where you can still expect to get to your destination on time.

exercise9plotdata <- nycflights |> 
  filter(
    carrier %in% c("DL", "UA", "AA")
  )

ggplot(exercise9plotdata, aes(x = dep_delay, y = arr_delay, color = carrier)) +
  geom_point()

Using the scatterplot, we can determine (roughly) that the cutoff point for departure delays where you can still expect to get to your destination on time is around 60 minutes. We can observe a green point in the scatterplot (indicating a Delta flight) with an arrival delay (arr_delay) of about 0 whose departure delay (dep_delay) sits at approximately 60 minutes.
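
One rough way to check this cutoff numerically (an assumption about approach, not something the exercise asks for) is to find the largest departure delay among the plotted flights that still arrived on time or early:

exercise9plotdata |> 
  filter(arr_delay <= 0) |> 
  summarise(max_dep_delay = max(dep_delay, na.rm = TRUE))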