Some define statistics as the field that focuses on turning information into knowledge. The first step in that process is to summarize and describe the raw information – the data. In this lab we explore flights, specifically a random sample of domestic flights that departed from the three major New York City airports in 2013. We will generate simple graphical and numerical summaries of data on these flights and explore delay times. Since this is a large data set, along the way you’ll also learn the indispensable skills of data processing and subsetting.
In this lab, we will explore and visualize the data using the tidyverse suite of packages. The data can be found in the companion package for OpenIntro labs, openintro.
Let’s load the packages.
library(tidyverse)
library(openintro)
The Bureau of Transportation Statistics (BTS) is a statistical agency that is a part of the Research and Innovative Technology Administration (RITA). As its name implies, BTS collects and makes transportation data available, such as the flights data we will be working with in this lab.
First, we’ll view the nycflights data frame. Type the
following in your console to load the data:
data(nycflights)
The data set nycflights that shows up in your workspace
is a data matrix, with each row representing an
observation and each column representing a variable. R
calls this data format a data frame, which is a term
that will be used throughout the labs. For this data set, each
observation is a single flight.
To view the names of the variables, type the command
names(nycflights)
## [1] "year" "month" "day" "dep_time" "dep_delay" "arr_time"
## [7] "arr_delay" "carrier" "tailnum" "flight" "origin" "dest"
## [13] "air_time" "distance" "hour" "minute"
This returns the names of the variables in this data frame. The codebook (description of the variables) can be accessed by pulling up the help file:
?nycflights
One of the variables refers to the carrier (i.e. airline) of the flight, which is coded according to the following system.
carrier: Two letter carrier abbreviation.
9E: Endeavor Air Inc.
AA: American Airlines Inc.
AS: Alaska Airlines Inc.
B6: JetBlue Airways
DL: Delta Air Lines Inc.
EV: ExpressJet Airlines Inc.
F9: Frontier Airlines Inc.
FL: AirTran Airways Corporation
HA: Hawaiian Airlines Inc.
MQ: Envoy Air
OO: SkyWest Airlines Inc.
UA: United Air Lines Inc.
US: US Airways Inc.
VX: Virgin America
WN: Southwest Airlines Co.
YV: Mesa Airlines Inc.
Remember that you can use glimpse to take a quick peek at your data to understand its contents better.
glimpse(nycflights)
## Rows: 32,735
## Columns: 16
## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, …
## $ month <int> 6, 5, 12, 5, 7, 1, 12, 8, 9, 4, 6, 11, 4, 3, 10, 1, 2, 8, 10…
## $ day <int> 30, 7, 8, 14, 21, 1, 9, 13, 26, 30, 17, 22, 26, 25, 21, 23, …
## $ dep_time <int> 940, 1657, 859, 1841, 1102, 1817, 1259, 1920, 725, 1323, 940…
## $ dep_delay <dbl> 15, -3, -1, -4, -3, -3, 14, 85, -10, 62, 5, 5, -2, 115, -4, …
## $ arr_time <int> 1216, 2104, 1238, 2122, 1230, 2008, 1617, 2032, 1027, 1549, …
## $ arr_delay <dbl> -4, 10, 11, -34, -8, 3, 22, 71, -8, 60, -4, -2, 22, 91, -6, …
## $ carrier <chr> "VX", "DL", "DL", "DL", "9E", "AA", "WN", "B6", "AA", "EV", …
## $ tailnum <chr> "N626VA", "N3760C", "N712TW", "N914DL", "N823AY", "N3AXAA", …
## $ flight <int> 407, 329, 422, 2391, 3652, 353, 1428, 1407, 2279, 4162, 20, …
## $ origin <chr> "JFK", "JFK", "JFK", "JFK", "LGA", "LGA", "EWR", "JFK", "LGA…
## $ dest <chr> "LAX", "SJU", "LAX", "TPA", "ORF", "ORD", "HOU", "IAD", "MIA…
## $ air_time <dbl> 313, 216, 376, 135, 50, 138, 240, 48, 148, 110, 50, 161, 87,…
## $ distance <dbl> 2475, 1598, 2475, 1005, 296, 733, 1411, 228, 1096, 820, 264,…
## $ hour <dbl> 9, 16, 8, 18, 11, 18, 12, 19, 7, 13, 9, 13, 8, 20, 12, 20, 6…
## $ minute <dbl> 40, 57, 59, 41, 2, 17, 59, 20, 25, 23, 40, 20, 9, 54, 17, 24…
The nycflights data frame is a massive trove of information. Let's think about some questions we might want to answer with these data:
How delayed were flights that were headed to Los Angeles?
How do departure delays vary by month?
Which of the three major NYC airports has the best on-time percentage for departing flights?
Let's start by examining the distribution of departure delays of all flights with a histogram.
ggplot(data = nycflights, aes(x = dep_delay)) +
  geom_histogram()
This function says to plot the dep_delay variable from
the nycflights data frame on the x-axis. It also defines a
geom (short for geometric object), which describes the type
of plot you will produce.
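For instance, keeping the same aesthetic mapping but swapping in a different geom changes the type of plot you get. A minimal sketch of the same variable drawn as a density curve instead of a histogram:
ggplot(data = nycflights, aes(x = dep_delay)) +
  geom_density()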
Histograms are generally a very good way to see the shape of a single distribution of numerical data, but that shape can change depending on how the data is split between the different bins. You can easily define the binwidth you want to use:
ggplot(data = nycflights, aes(x = dep_delay)) +
  geom_histogram(binwidth = 15)
ggplot(data = nycflights, aes(x = dep_delay)) +
  geom_histogram(binwidth = 150)
Answer: The histogram with the smaller binwidth gives a much clearer picture of where the peaks (frequencies of values) in departure delay times are. The histogram with the larger binwidth lumps most flights into a single bar and obscures that information.
If you want to visualize only the delays of flights headed to Los Angeles, you need to first filter the data for flights with that destination (dest == "LAX") and then make a histogram of the departure delays of only those flights.
lax_flights <- nycflights %>%
filter(dest == "LAX")
ggplot(data = lax_flights, aes(x = dep_delay)) +
  geom_histogram()
Let's decipher these two commands (OK, so it might look like four lines, but the first two physical lines of code are actually part of the same command. It's common to add a break to a new line after %>% to help readability).
Command 1: Take the nycflights data frame, filter for flights headed to LAX, and save the result as a new data frame called lax_flights.
== means "if it's equal to".
LAX is in quotation marks since it is a character string.
Command 2: Basically the same ggplot call from earlier for making a histogram, except that it uses the smaller data frame for flights headed to LAX instead of all flights.
Logical operators: Filtering for certain
observations (e.g. flights from a particular airport) is often of
interest in data frames where we might want to examine observations with
certain characteristics separately from the rest of the data. To do so,
you can use the filter function and a series of
logical operators. The most commonly used logical
operators for data analysis are as follows:
== means "equal to"
!= means "not equal to"
> or < means "greater than" or "less than"
>= or <= means "greater than or equal to" or "less than or equal to"
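For example, a quick sketch combining two of these operators (the specific cutoff of 60 minutes is just for illustration) to keep flights that were not headed to LAX and were delayed by an hour or more:
nycflights %>%
  filter(dest != "LAX", dep_delay >= 60)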
You can also obtain numerical summaries for these flights:
lax_flights %>%
summarise(mean_dd = mean(dep_delay),
median_dd = median(dep_delay),
          n = n())
## # A tibble: 1 × 3
## mean_dd median_dd n
## <dbl> <dbl> <int>
## 1 9.78 -1 1583
Note that in the summarise function you created a list
of three different numerical summaries that you were interested in. The
names of these elements are user defined, like mean_dd,
median_dd, n, and you can customize these
names as you like (just don’t use spaces in your names). Calculating
these summary statistics also requires that you know the function calls.
Note that n() reports the sample size.
Summary statistics: Some useful function calls for summary statistics for a single numerical variable are as follows:
mean
median
sd
var
IQR
min
max
Note that each of these functions takes a single vector as an argument and returns a single value.
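As a sketch, a few of these functions applied to the LAX departure delays from earlier (the result names such as sd_dd are arbitrary choices):
lax_flights %>%
  summarise(sd_dd = sd(dep_delay),
            min_dd = min(dep_delay),
            max_dd = max(dep_delay))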
You can also filter based on multiple criteria. Suppose you are interested in flights headed to San Francisco (SFO) in February:
sfo_feb_flights <- nycflights %>%
  filter(dest == "SFO", month == 2)
Note that you can separate the conditions using commas if you want
flights that are both headed to SFO and in February. If
you are interested in either flights headed to SFO or
in February, you can use the | instead of the comma.
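A sketch of the or version, which keeps flights that are headed to SFO, depart in February, or both:
nycflights %>%
  filter(dest == "SFO" | month == 2)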
Create a new data frame that includes flights headed to SFO in February, and save this data frame as sfo_feb_flights. How many flights meet these criteria?
Answer:
sfo_feb_flights <- nycflights %>%
  filter(dest == "SFO", month == 2)
head(sfo_feb_flights)
## # A tibble: 6 × 16
## year month day dep_time dep_delay arr_time arr_delay carrier tailnum flight
## <int> <int> <int> <int> <dbl> <int> <dbl> <chr> <chr> <int>
## 1 2013 2 18 1527 57 1903 48 DL N711ZX 1322
## 2 2013 2 3 613 14 1008 38 UA N502UA 691
## 3 2013 2 15 955 -5 1313 -28 DL N717TW 1765
## 4 2013 2 18 1928 15 2239 -6 UA N24212 1214
## 5 2013 2 24 1340 2 1644 -21 UA N76269 1111
## 6 2013 2 25 1415 -10 1737 -13 UA N532UA 394
## # … with 6 more variables: origin <chr>, dest <chr>, air_time <dbl>,
## # distance <dbl>, hour <dbl>, minute <dbl>
sfo_feb_flights %>%
  nrow()
## [1] 68
# OR
sfo_feb_flights %>%
  summarise(flights_to_SFO = n())
## # A tibble: 1 × 1
##   flights_to_SFO
## <int>
## 1 68
There are 68 flights in total that satisfy these criteria.
Describe the distribution of the arrival delays of these flights using a histogram and appropriate summary statistics.
Answer:
ggplot(data = sfo_feb_flights, aes(x = arr_delay)) +
  geom_histogram()
The distribution is clearly right skewed, with some outliers to the right. The summary statistics should therefore include the median, which resists the skewness and better captures the center of the distribution, along with the IQR as the measure of spread; the standard deviation is reported as well for reference.
sfo_feb_flights %>%
  summarise(median_arr = median(arr_delay), iqr_arr = IQR(arr_delay), arr_std = sd(arr_delay))
## # A tibble: 1 × 3
## median_arr iqr_arr arr_std
## <dbl> <dbl> <dbl>
## 1 -11 23.2 36.3
Another useful technique is quickly calculating summary statistics
for various groups in your data frame. For example, we can modify the
above command using the group_by function to get the same
summary stats for each origin airport:
sfo_feb_flights %>%
group_by(origin) %>%
  summarise(median_dd = median(dep_delay), iqr_dd = IQR(dep_delay), n_flights = n())
## # A tibble: 2 × 4
## origin median_dd iqr_dd n_flights
## <chr> <dbl> <dbl> <int>
## 1 EWR 0.5 5.75 8
## 2 JFK -2.5 15.2 60
Here, we first grouped the data by origin and then
calculated the summary statistics.
Calculate the median and interquartile range for arr_delays of flights in the sfo_feb_flights data frame, grouped by carrier. Which carrier has the most variable arrival delays?
Answer:
sfo_feb_flights |>
  group_by(carrier) |>
  summarise(aa_median = median(arr_delay), aa_iqr = IQR(arr_delay)) |>
  arrange(desc(aa_iqr))
## # A tibble: 5 × 3
## carrier aa_median aa_iqr
## <chr> <dbl> <dbl>
## 1 DL -15 22
## 2 UA -10 22
## 3 VX -22.5 21.2
## 4 AA 5 17.5
## 5 B6 -10.5 12.2
The IQR measures variability around the center of the data, i.e. the spread of the middle half of the values, so a higher IQR means more variable data. Here Delta and United Airlines (tied at an IQR of 22) have the most variable arrival delays.
Which month would you expect to have the highest average delay departing from an NYC airport?
Let’s think about how you could answer this question:
group_by months, then
summarise mean departure delays, then
arrange these average delays in descending order
nycflights %>%
group_by(month) %>%
summarise(mean_dd = mean(dep_delay)) %>%
  arrange(desc(mean_dd))
## # A tibble: 12 × 2
## month mean_dd
## <int> <dbl>
## 1 7 20.8
## 2 6 20.4
## 3 12 17.4
## 4 4 14.6
## 5 3 13.5
## 6 5 13.3
## 7 8 12.6
## 8 2 10.7
## 9 1 10.2
## 10 9 6.87
## 11 11 6.10
## 12 10 5.88
Suppose you really dislike departure delays and you want to schedule your travel in a month that minimizes your potential departure delay leaving NYC. One option is to choose the month with the lowest mean departure delay. Another option is to choose the month with the lowest median departure delay. What are the pros and cons of these two choices?
Answer:
nycflights |>
group_by(month) |>
  summarise(dd_median = median(dep_delay), dd_mean = mean(dep_delay))
## # A tibble: 12 × 3
## month dd_median dd_mean
## <int> <dbl> <dbl>
## 1 1 -2 10.2
## 2 2 -2 10.7
## 3 3 -1 13.5
## 4 4 -2 14.6
## 5 5 -1 13.3
## 6 6 0 20.4
## 7 7 0 20.8
## 8 8 -1 12.6
## 9 9 -3 6.87
## 10 10 -3 5.88
## 11 11 -2 6.10
## 12 12 1 17.4
The pros and cons of using the mean versus the median come down to their sensitivity to extreme values in the data set. The mean uses every observation, so it reflects the full magnitude of the delays and is a natural measure for deciding which month has the lowest departure delays; the con is that a handful of extreme delays can pull it up substantially, since it is sensitive to those extremes. The median, on the other hand, is not sensitive to extreme values, which is a plus, but the con is that it ignores how severe the long delays actually are and so may understate the typical delay in a month with many extreme values.
Suppose you will be flying out of NYC and want to know which of the three major NYC airports has the best on time departure rate of departing flights. Also suppose that for you, a flight that is delayed for less than 5 minutes is basically "on time." You consider any flight delayed for 5 minutes or more to be "delayed".
In order to determine which airport has the best on time departure rate, you can group flights by origin airport, calculate the on time departure rate for each origin airport, and arrange the airports in descending order of on time departure rate.
Let’s start with classifying each flight as “on time” or “delayed” by
creating a new variable with the mutate function.
nycflights <- nycflights %>%
  mutate(dep_type = ifelse(dep_delay < 5, "on time", "delayed"))
The first argument in the mutate function is the name of
the new variable we want to create, in this case dep_type.
Then, if dep_delay < 5, we classify the flight as
"on time"; otherwise, i.e. if the flight is delayed for 5 or
more minutes, we classify it as "delayed".
Note that we are also overwriting the nycflights data
frame with the new version of this data frame that includes the new
dep_type variable.
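A quick way to check the new variable is to tally the number of flights of each type; a minimal sketch using dplyr's count():
nycflights %>%
  count(dep_type)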
We can handle all of the remaining steps in one code chunk:
nycflights %>%
group_by(origin) %>%
summarise(ot_dep_rate = sum(dep_type == "on time") / n()) %>%
  arrange(desc(ot_dep_rate))
## # A tibble: 3 × 2
## origin ot_dep_rate
## <chr> <dbl>
## 1 LGA 0.728
## 2 JFK 0.694
## 3 EWR 0.637
If you were selecting an airport simply based on on time departure percentage, which NYC airport would you choose to fly out of?
Answer:
I would choose to fly out of LGA, since it has the highest percentage of on time departures.
You can also visualize the distribution of on time departures across the three airports using a segmented bar plot.
ggplot(data = nycflights, aes(x = origin, fill = dep_type)) +
  geom_bar()
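Since geom_bar() stacks raw counts by default, one way to compare the on time rates themselves is to normalize each bar to proportions; a sketch using ggplot2's position = "fill":
ggplot(data = nycflights, aes(x = origin, fill = dep_type)) +
  geom_bar(position = "fill")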
Mutate the data frame so that it includes a new variable, avg_speed, the average speed traveled by the plane for each flight (in mph). Hint: Average speed can be calculated as distance divided by number of hours of travel, and note that air_time is given in minutes.
Answer:
nycflights <- nycflights |>
  mutate(avg_speed = distance / (air_time / 60))
head(nycflights)
## # A tibble: 6 × 18
## year month day dep_time dep_delay arr_time arr_delay carrier tailnum flight
## <int> <int> <int> <int> <dbl> <int> <dbl> <chr> <chr> <int>
## 1 2013 6 30 940 15 1216 -4 VX N626VA 407
## 2 2013 5 7 1657 -3 2104 10 DL N3760C 329
## 3 2013 12 8 859 -1 1238 11 DL N712TW 422
## 4 2013 5 14 1841 -4 2122 -34 DL N914DL 2391
## 5 2013 7 21 1102 -3 1230 -8 9E N823AY 3652
## 6 2013 1 1 1817 -3 2008 3 AA N3AXAA 353
## # … with 8 more variables: origin <chr>, dest <chr>, air_time <dbl>,
## # distance <dbl>, hour <dbl>, minute <dbl>, dep_type <chr>, avg_speed <dbl>
Make a scatterplot of avg_speed vs. distance. Describe the relationship between average speed and distance. Hint: Use geom_point().
Answer:
ggplot(data = nycflights, aes(x = distance, y = avg_speed)) +
  geom_point() + theme_classic()
From the graph we can conclude that there is a positive but non-linear relationship between average speed and distance. The pattern roughly follows a curve of the form y = k * sqrt(x): as distance increases, average speed rises sharply at first and then levels off, remaining roughly constant once the distance passes about 2000 miles.
Replicate a scatterplot of arr_delay vs. dep_delay for Delta, American, and United flights, with the points colored by carrier. Once you replicate the plot, determine (roughly) what the cutoff point is for departure delays where you can still expect to get to your destination on time.
Answer:
Let's focus on the portion of the graph where the arrival delay is zero and check the corresponding departure delay values to find the cut-off point.
I will apply limits to the axes of the graph to get a closer view of the region where arr_delay (the y axis) is still around zero.
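The dl_aa_ua data frame used below is not constructed in the code shown in this write-up; a minimal sketch of how it could be built, assuming it should contain only American (AA), Delta (DL), and United (UA) flights:
dl_aa_ua <- nycflights %>%
  filter(carrier %in% c("AA", "DL", "UA"))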
ggplot(data = dl_aa_ua, aes(x = dep_delay, y = arr_delay, color = carrier)) +
  xlim(-10, 60) +
  ylim(-30, 200) +
  geom_point()
Looking at the graph, the cut-off point for departure delays where we can still expect to get to our destination on time appears to fall somewhere between the low 20s and the mid 30s (around 35-36 minutes). Another rough estimate can be obtained by using geom_smooth instead of geom_point to get an overall idea.
ggplot(data = dl_aa_ua, aes(x = dep_delay, y = arr_delay, color = carrier)) +
  xlim(-10, 60) +
  ylim(-30, 200) +
  geom_smooth()
According to this graph, the cutoff point is around 15 minutes for these carriers, but since geom_smooth fits a smoothed curve rather than plotting the individual points, this estimate is only approximate.