Some define statistics as the field that focuses on turning information into knowledge. The first step in that process is to summarize and describe the raw information: the data. In this homework we will explore flights, specifically a random sample of domestic flights that departed from the three major New York City airports in 2013. We will generate simple graphical and numerical summaries of data on these flights and explore delay times. As this is a large data set, along the way you’ll also learn the indispensable skills of data processing and subsetting.
The tidyverse “umbrella” package, which houses a suite of many different R packages for data wrangling and data visualization.
The openintro R package, which provides the data and custom functions that accompany the OpenIntro resources.
library(tidyverse)
library(openintro)
The nycflights13 dataset is a collection of data pertaining to different airlines flying from different airports in New York City (NYC), also capturing flight, plane, and weather-specific details during the year 2013. The data was collected into these five different branches. This method of collecting data helps us to work on individual aspects of the whole large dataset and also we can combine together multiple aspects to do some complex data analysis. There are also 3-4 database versions of nycflights dataset which cache the data from nycflights database in a local database, helping in joining tables on natural keys efficiently. The source of the flights dataset is BTS Bureau of transportation statistics.
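For instance (a minimal sketch; it assumes the full nycflights13 package is installed, which this homework itself does not require), two of the five branches can be combined on their shared natural key, carrier:
library(nycflights13)
flights %>%
  left_join(airlines, by = "carrier") %>%  # attach each airline's full name to its flights
  select(carrier, name, flight, origin, dest) %>%
  head(3)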
To get started, let’s take a peek at the data.
nycflights
Eyeballing the output, we see that the data set (a data frame, in R parlance) contains 16 columns and 32,735 rows, reported in the first line of the output as 32,735 x 16.
nycflights
## # A tibble: 32,735 x 16
## year month day dep_time dep_delay arr_time arr_delay carrier tailnum
## <int> <int> <int> <int> <dbl> <int> <dbl> <chr> <chr>
## 1 2013 6 30 940 15 1216 -4 VX N626VA
## 2 2013 5 7 1657 -3 2104 10 DL N3760C
## 3 2013 12 8 859 -1 1238 11 DL N712TW
## 4 2013 5 14 1841 -4 2122 -34 DL N914DL
## 5 2013 7 21 1102 -3 1230 -8 9E N823AY
## 6 2013 1 1 1817 -3 2008 3 AA N3AXAA
## 7 2013 12 9 1259 14 1617 22 WN N218WN
## 8 2013 8 13 1920 85 2032 71 B6 N284JB
## 9 2013 9 26 725 -10 1027 -8 AA N3FSAA
## 10 2013 4 30 1323 62 1549 60 EV N12163
## # ... with 32,725 more rows, and 7 more variables: flight <int>, origin <chr>,
## # dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>
To view the names of the variables, type the command
names(nycflights)
## [1] "year" "month" "day" "dep_time" "dep_delay" "arr_time"
## [7] "arr_delay" "carrier" "tailnum" "flight" "origin" "dest"
## [13] "air_time" "distance" "hour" "minute"
This returns the names of the variables in this data frame. The codebook (description of the variables) can be accessed by pulling up the help file:
?nycflights
One of the variables refers to the carrier (i.e. airline) of the flight, which is coded according to the following system.
carrier: Two letter carrier abbreviation. See the airlines table in the nycflights13 package for more information, or google the airline code.
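For example, assuming the nycflights13 package is installed, the code of the first carrier in the output above can be decoded directly (VX is Virgin America):
nycflights13::airlines %>%
  filter(carrier == "VX")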
We can also check the dimensions of this data frame, as well as the names of the variables, the type of each variable, and the first few observations, by inserting the name of the data set into the glimpse() function, as seen below:
glimpse(nycflights)
## Rows: 32,735
## Columns: 16
## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, ~
## $ month <int> 6, 5, 12, 5, 7, 1, 12, 8, 9, 4, 6, 11, 4, 3, 10, 1, 2, 8, 10~
## $ day <int> 30, 7, 8, 14, 21, 1, 9, 13, 26, 30, 17, 22, 26, 25, 21, 23, ~
## $ dep_time <int> 940, 1657, 859, 1841, 1102, 1817, 1259, 1920, 725, 1323, 940~
## $ dep_delay <dbl> 15, -3, -1, -4, -3, -3, 14, 85, -10, 62, 5, 5, -2, 115, -4, ~
## $ arr_time <int> 1216, 2104, 1238, 2122, 1230, 2008, 1617, 2032, 1027, 1549, ~
## $ arr_delay <dbl> -4, 10, 11, -34, -8, 3, 22, 71, -8, 60, -4, -2, 22, 91, -6, ~
## $ carrier <chr> "VX", "DL", "DL", "DL", "9E", "AA", "WN", "B6", "AA", "EV", ~
## $ tailnum <chr> "N626VA", "N3760C", "N712TW", "N914DL", "N823AY", "N3AXAA", ~
## $ flight <int> 407, 329, 422, 2391, 3652, 353, 1428, 1407, 2279, 4162, 20, ~
## $ origin <chr> "JFK", "JFK", "JFK", "JFK", "LGA", "LGA", "EWR", "JFK", "LGA~
## $ dest <chr> "LAX", "SJU", "LAX", "TPA", "ORF", "ORD", "HOU", "IAD", "MIA~
## $ air_time <dbl> 313, 216, 376, 135, 50, 138, 240, 48, 148, 110, 50, 161, 87,~
## $ distance <dbl> 2475, 1598, 2475, 1005, 296, 733, 1411, 228, 1096, 820, 264,~
## $ hour <dbl> 9, 16, 8, 18, 11, 18, 12, 19, 7, 13, 9, 13, 8, 20, 12, 20, 6~
## $ minute <dbl> 40, 57, 59, 41, 2, 17, 59, 20, 25, 23, 40, 20, 9, 54, 17, 24~
Let’s start by examining the distribution of departure delays of all flights with a histogram.
ggplot(data = nycflights, aes(x = dep_delay)) +
geom_histogram()
This function says to plot the dep_delay variable from the nycflights data frame on the x-axis. It also defines a geom (short for geometric object), which describes the type of plot you will produce.
Histograms are generally a very good way to see the shape of a single distribution of numerical data, but that shape can change depending on how the data is split between the different bins.
We can also set whatever binwidth we want to use:
ggplot(data = nycflights, aes(x = dep_delay)) +
geom_histogram(binwidth = 30)
Let’s try a larger binwidth:
ggplot(data = nycflights, aes(x = dep_delay)) +
geom_histogram(binwidth = 150)
Look carefully at these three histograms. How do they compare? Are features revealed in one that are obscured in another?
Answer-1:
Smaller binwidths provide more detail, while the largest binwidth obscures most of the useful information. The second histogram (binwidth = 30) has the smallest binwidth, so it displays the data in the finest detail. The third histogram (binwidth = 150) has the largest binwidth and clumps much of the data together, hiding a lot of detail. The first histogram has a binwidth in between the other two (by default ggplot2 uses bins = 30, which works out to a wider binwidth for this variable) and displays a corresponding level of detail. Although the second histogram shows the most detail, the binwidth of the first one seems just right and makes the data most pleasing to visualize.
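As a rough sanity check of that default (an approximation, not the exact ggplot2 binning rule), dividing the range of dep_delay by 30 gives the implied default binwidth:
diff(range(nycflights$dep_delay)) / 30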
If you want to visualize only the delays of flights headed to Los Angeles, you need to first filter the data for flights with that destination (dest == "LAX") and then make a histogram of the departure delays of only those flights.
lax_flights <- nycflights %>%
filter(dest == "LAX")
ggplot(data = lax_flights, aes(x = dep_delay)) +
geom_histogram()
Let’s decipher these two commands (OK, so it might look like four lines, but the first two physical lines of code are actually part of the same command. It’s common to add a break to a new line after %>% to help readability).
Command 1: Take the nycflights data frame, filter for flights headed to LAX, and save the result as a new data frame called lax_flights.
== means “if it’s equal to”.
LAX is in quotation marks since it is a character string.
Command 2: Basically the same ggplot call from earlier for making a histogram, except that it uses the smaller data frame of flights headed to LAX instead of all flights.
Logical operators: Filtering for certain observations (e.g. flights from a particular airport) is often of interest in data frames where we might want to examine observations with certain characteristics separately from the rest of the data. To do so, you can use the filter function and a series of logical operators. The most commonly used logical operators for data analysis are as follows:
== means “equal to”
!= means “not equal to”
> or < means “greater than” or “less than”
>= or <= means “greater than or equal to” or “less than or equal to”
We can also obtain numerical summaries for these flights:
lax_flights %>%
summarise(mean_dd = mean(dep_delay),
median_dd = median(dep_delay),
n = n())
## # A tibble: 1 x 3
## mean_dd median_dd n
## <dbl> <dbl> <int>
## 1 9.78 -1 1583
Note that in the summarise function we created a list of three different numerical summaries that we were interested in. The names of these elements are user defined, like mean_dd, median_dd, and n, and we can customize these names as we like (just don’t use spaces in your names). Calculating these summary statistics also requires that we know the relevant function calls. Note that n() reports the sample size.
Summary statistics: Some useful function calls for summary statistics for a single numerical variable are as follows: mean(), median(), sd(), var(), IQR(), min(), and max().
Note that each of these functions takes a single vector as an argument and returns a single value.
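For instance, a few of these applied to the LAX departure delays (any of the functions above can be swapped in):
lax_flights %>%
  summarise(sd_dd = sd(dep_delay),
            iqr_dd = IQR(dep_delay),
            min_dd = min(dep_delay),
            max_dd = max(dep_delay))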
We can also filter based on multiple criteria. Suppose you are interested in flights headed to San Francisco (SFO) in February:
sfo_feb_flights <- nycflights %>%
filter(dest == "SFO", month == 2)
Note that you can separate the conditions using commas if you want flights that are both headed to SFO and in February. If you are interested in either flights headed to SFO or flights in February, you can use the | operator instead of the comma.
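For illustration only (this data frame is not used later in the homework), the “or” version would look like this and keep a much larger set of flights:
sfo_or_feb_flights <- nycflights %>%
  filter(dest == "SFO" | month == 2)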
Create a new data frame that includes flights headed to SFO in February, and save this data frame as sfo_feb_flights. How many flights meet these criteria?
Answer-2:
sfo_feb_flights <- nycflights %>%
filter(dest == "SFO", month == 2)
glimpse(sfo_feb_flights)
## Rows: 68
## Columns: 16
## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, ~
## $ month <int> 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, ~
## $ day <int> 18, 3, 15, 18, 24, 25, 7, 15, 13, 8, 11, 13, 25, 20, 12, 27,~
## $ dep_time <int> 1527, 613, 955, 1928, 1340, 1415, 1032, 1805, 1056, 656, 191~
## $ dep_delay <dbl> 57, 14, -5, 15, 2, -10, 1, 20, -4, -4, 40, -2, -1, -6, -7, 2~
## $ arr_time <int> 1903, 1008, 1313, 2239, 1644, 1737, 1352, 2122, 1412, 1039, ~
## $ arr_delay <dbl> 48, 38, -28, -6, -21, -13, -10, 2, -13, -6, 2, -5, -30, -22,~
## $ carrier <chr> "DL", "UA", "DL", "UA", "UA", "UA", "B6", "AA", "UA", "DL", ~
## $ tailnum <chr> "N711ZX", "N502UA", "N717TW", "N24212", "N76269", "N532UA", ~
## $ flight <int> 1322, 691, 1765, 1214, 1111, 394, 641, 177, 642, 1865, 272, ~
## $ origin <chr> "JFK", "JFK", "JFK", "EWR", "EWR", "JFK", "JFK", "JFK", "JFK~
## $ dest <chr> "SFO", "SFO", "SFO", "SFO", "SFO", "SFO", "SFO", "SFO", "SFO~
## $ air_time <dbl> 358, 367, 338, 353, 341, 355, 359, 338, 347, 361, 332, 351, ~
## $ distance <dbl> 2586, 2586, 2586, 2565, 2565, 2586, 2586, 2586, 2586, 2586, ~
## $ hour <dbl> 15, 6, 9, 19, 13, 14, 10, 18, 10, 6, 19, 8, 10, 18, 7, 17, 1~
## $ minute <dbl> 27, 13, 55, 28, 40, 15, 32, 5, 56, 56, 10, 33, 48, 49, 23, 2~
In total, 68 flights headed to SFO in February meet these criteria.
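A more direct way to get just the count (it matches the Rows: 68 line reported by glimpse()):
nrow(sfo_feb_flights)
## [1] 68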
Describe the distribution of the arrival delays of these flights using a histogram and appropriate summary statistics. Hint: The summary statistics you use should depend on the shape of the distribution.
Answer-3:
ggplot(data = sfo_feb_flights, aes(x = arr_delay)) +
geom_histogram(binwidth=10)
The histogram is right-skewed, so the standard deviation would not accurately represent the spread of the data. The IQR, on the other hand, describes how the middle 50% of the data is distributed about the median, so the median and IQR are the more appropriate summaries here. Both can be found below.
sfo_feb_flights %>%
summarise(mean_ad = mean(arr_delay), median_ad = median(arr_delay), iqr_ad = IQR(arr_delay), n_flights = n())
## # A tibble: 1 x 4
## mean_ad median_ad iqr_ad n_flights
## <dbl> <dbl> <dbl> <int>
## 1 -4.5 -11 23.2 68
Another useful technique is quickly calculating summary statistics for various groups in our data frame. For example, we can modify the above command using the group_by function to get the same summary stats for each origin airport:
sfo_feb_flights %>%
group_by(origin) %>%
summarise(median_dd = median(dep_delay), iqr_dd = IQR(dep_delay), n_flights = n())
## # A tibble: 2 x 4
## origin median_dd iqr_dd n_flights
## <chr> <dbl> <dbl> <int>
## 1 EWR 0.5 5.75 8
## 2 JFK -2.5 15.2 60
Here, we first grouped the data by origin and then calculated the summary statistics.
Calculate the median and interquartile range for arr_delay of flights in the sfo_feb_flights data frame, grouped by carrier. Which carrier has the most variable arrival delays?
Answer-4:
sfo_feb_flights %>%
group_by(carrier) %>%
summarise(median_ad = median(arr_delay), iqr_ad = IQR(arr_delay), n_flights = n())
## # A tibble: 5 x 4
## carrier median_ad iqr_ad n_flights
## <chr> <dbl> <dbl> <int>
## 1 AA 5 17.5 10
## 2 B6 -10.5 12.2 6
## 3 DL -15 22 19
## 4 UA -10 22 21
## 5 VX -22.5 21.2 12
The carriers DL and UA are tied for having the most variable arrival delays because their interquartile ranges are tied for the highest at 22.00. This suggests that they exhibit the greatest variation in arrival delays for the middle 50% of their data.
Which month would we expect to have the highest average delay departing from an NYC airport?
Let’s think about how we could answer this question:
First, calculate monthly averages for departure delays. With the new language we are learning, we can group_by month and then summarise the mean departure delay.
Then, we can arrange these average delays in descending order:
nycflights %>%
group_by(month) %>%
summarise(mean_dd = mean(dep_delay)) %>%
arrange(desc(mean_dd))
## # A tibble: 12 x 2
## month mean_dd
## <int> <dbl>
## 1 7 20.8
## 2 6 20.4
## 3 12 17.4
## 4 4 14.6
## 5 3 13.5
## 6 5 13.3
## 7 8 12.6
## 8 2 10.7
## 9 1 10.2
## 10 9 6.87
## 11 11 6.10
## 12 10 5.88
Suppose you really dislike departure delays and you want to schedule your travel in a month that minimizes your potential departure delay leaving NYC. One option is to choose the month with the lowest mean departure delay. Another option is to choose the month with the lowest median departure delay. What are the pros and cons of these two choices?
Answer-5:
Mean Pro: This represents the overall average departure delay, taking into account the effect of each delay and giving an idea as to how the data is distributed. Con: This can be skewed by outliers.
Median Pro: It takes the middle value of the entire data set, so outliers do not skew the median. Con: It fails to represent how the data is distributed.
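To weigh the two options side by side, one could compute both statistics for each month and sort by the median instead (a sketch that reuses the grouping approach from above):
nycflights %>%
  group_by(month) %>%
  summarise(mean_dd = mean(dep_delay),
            median_dd = median(dep_delay)) %>%
  arrange(median_dd)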
Suppose we will be flying out of NYC and want to know which of the three major NYC airports has the best on time departure rate for departing flights. Also suppose that, for our purposes, a flight that is delayed for less than 5 minutes is basically “on time”, and any flight delayed for 5 minutes or more is considered “delayed”.
In order to determine which airport has the best on time departure rate, we can classify each flight as “on time” or “delayed”, group the flights by origin airport, calculate the on time departure rate for each origin, and arrange the airports in descending order of that rate.
Let’s start with classifying each flight as “on time” or “delayed” by creating a new variable with the mutate function.
nycflights <- nycflights %>%
mutate(dep_type = ifelse(dep_delay < 5, "on time", "delayed"))
The first argument in the mutate function is the name of the new variable we want to create, in this case dep_type. Then, if dep_delay < 5, we classify the flight as "on time", and as "delayed" otherwise, i.e. if the flight is delayed for 5 or more minutes.
Note that we are also overwriting the nycflights data frame with the new version of this data frame that includes the new dep_type variable.
We can handle all of the remaining steps in one code chunk:
nycflights %>%
group_by(origin) %>%
summarise(ot_dep_rate = sum(dep_type == "on time") / n()) %>%
arrange(desc(ot_dep_rate))
## # A tibble: 3 x 2
## origin ot_dep_rate
## <chr> <dbl>
## 1 LGA 0.728
## 2 JFK 0.694
## 3 EWR 0.637
If you were selecting an airport simply based on on time departure percentage, which NYC airport would you choose to fly out of?
Answer-6:
ggplot(data = nycflights, aes(x = origin, fill = dep_type)) +
geom_bar()
LGA has the best on time departure percentage, at 72.8%. The segmented bar plot above also shows that LGA has the highest proportion of flights leaving on time. Thus, I would choose LGA.
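A variant worth knowing (optional, not required for the answer): using position = "fill" in geom_bar() rescales each bar to proportions, which makes the on time shares of the three airports directly comparable regardless of how many flights each one handles:
ggplot(data = nycflights, aes(x = origin, fill = dep_type)) +
  geom_bar(position = "fill")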
Mutate the data frame so that it includes a new variable, avg_speed, that contains the average speed traveled by the plane for each flight (in mph). Hint: Average speed can be calculated as distance divided by number of hours of travel, and note that air_time is given in minutes.
Answer-7:
nycflights <- nycflights %>%
mutate(avg_speed = 60*(distance / air_time))
glimpse(nycflights)
## Rows: 32,735
## Columns: 18
## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, ~
## $ month <int> 6, 5, 12, 5, 7, 1, 12, 8, 9, 4, 6, 11, 4, 3, 10, 1, 2, 8, 10~
## $ day <int> 30, 7, 8, 14, 21, 1, 9, 13, 26, 30, 17, 22, 26, 25, 21, 23, ~
## $ dep_time <int> 940, 1657, 859, 1841, 1102, 1817, 1259, 1920, 725, 1323, 940~
## $ dep_delay <dbl> 15, -3, -1, -4, -3, -3, 14, 85, -10, 62, 5, 5, -2, 115, -4, ~
## $ arr_time <int> 1216, 2104, 1238, 2122, 1230, 2008, 1617, 2032, 1027, 1549, ~
## $ arr_delay <dbl> -4, 10, 11, -34, -8, 3, 22, 71, -8, 60, -4, -2, 22, 91, -6, ~
## $ carrier <chr> "VX", "DL", "DL", "DL", "9E", "AA", "WN", "B6", "AA", "EV", ~
## $ tailnum <chr> "N626VA", "N3760C", "N712TW", "N914DL", "N823AY", "N3AXAA", ~
## $ flight <int> 407, 329, 422, 2391, 3652, 353, 1428, 1407, 2279, 4162, 20, ~
## $ origin <chr> "JFK", "JFK", "JFK", "JFK", "LGA", "LGA", "EWR", "JFK", "LGA~
## $ dest <chr> "LAX", "SJU", "LAX", "TPA", "ORF", "ORD", "HOU", "IAD", "MIA~
## $ air_time <dbl> 313, 216, 376, 135, 50, 138, 240, 48, 148, 110, 50, 161, 87,~
## $ distance <dbl> 2475, 1598, 2475, 1005, 296, 733, 1411, 228, 1096, 820, 264,~
## $ hour <dbl> 9, 16, 8, 18, 11, 18, 12, 19, 7, 13, 9, 13, 8, 20, 12, 20, 6~
## $ minute <dbl> 40, 57, 59, 41, 2, 17, 59, 20, 25, 23, 40, 20, 9, 54, 17, 24~
## $ dep_type <chr> "delayed", "on time", "on time", "on time", "on time", "on t~
## $ avg_speed <dbl> 474.4409, 443.8889, 394.9468, 446.6667, 355.2000, 318.6957, ~
Make a scatterplot of avg_speed vs. distance. Describe the relationship between average speed and distance. Hint: Use geom_point().
Answer-8:
ggplot(data = nycflights, aes(x = distance, y = avg_speed)) + geom_point()
The range of speeds grows slightly with distance. The reason could be that on longer flights the takeoff and landing time does not weigh as heavily as it does on short flights. There is one exceptionally fast flight from LaGuardia to Atlanta. The longest flights (the points at roughly 5,000 miles) go from NYC to Honolulu (HNL), while the shortest go to Philadelphia (PHL).
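As a quick optional check of those extremes, one can sort the flights by avg_speed and inspect the top rows (the same idea works with arrange(desc(distance)) for the longest routes):
nycflights %>%
  select(origin, dest, distance, avg_speed) %>%
  arrange(desc(avg_speed)) %>%
  head(3)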
Replicate the following plot. Hint: The data frame plotted only contains flights from American Airlines, Delta Airlines, and United Airlines, and the points are colored by carrier. Once you replicate the plot, determine (roughly) what the cutoff point is for departure delays where you can still expect to get to your destination on time.
Answer-9:
nycflights_3carriers <- nycflights %>%
filter(carrier == "AA" | carrier == "DL" | carrier == "UA")
ggplot(data = nycflights_3carriers, aes(x = dep_delay, y = arr_delay, color= carrier)) + geom_point()
Based on the scatter plot above, the cutoff point for departure delays where you can still reasonably expect to arrive at your destination on time is slightly ahead of schedule, roughly five minutes before the scheduled departure time. With a departure 60 minutes late it has still been possible to arrive on time, but this is extremely rare, and on-time arrivals are also uncommon for flights departing between 0 and 60 minutes late. Even the majority of flights departing on time arrive late for these three carriers. Only by leaving early can you expect to arrive on time, and a reasonable cutoff for this seems to be around five minutes before scheduled takeoff.
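One optional way to make that cutoff easier to read off the plot is to add a dashed reference line at arr_delay = 0, where flights switch from arriving early to arriving late:
ggplot(data = nycflights_3carriers, aes(x = dep_delay, y = arr_delay, color = carrier)) +
  geom_point() +
  geom_hline(yintercept = 0, linetype = "dashed")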
This homework deals with airline flight data for the three New York City (NYC) airports. We examined several histograms of departure delays with different binwidths, noting that the smaller the binwidth, the finer the detail. We then looked at the distribution of arrival delays and computed summary statistics for the different carriers: DL and UA are tied for the most variable arrival delays, with interquartile ranges of 22 minutes each. Finally, we visualized the relationship between the airlines’ average speed and distance and found that the range of speeds grows slightly with distance.