dplyr basics
During ANLY 512 we will be studying the theory and practice of data visualization. We will be using R and the packages within R to assemble data and construct many different types of visualizations. Before we begin studying data visualization, we need to develop some data wrangling skills. We will use these skills to wrangle our data into a form we can use for visualizations.
The objective of this assignment is to introduce you to RStudio, R Markdown, the tidyverse, and more specifically the dplyr package.
Each question is worth 5 points.
To submit this homework, create the document in RStudio, knit it with the knitr package (using the Knit button in RStudio), and publish the document to your RPubs account. Once it is uploaded, submit the link to that document on Canvas. Please make sure that the link is hyperlinked and that I can see both the visualization and the code required to create it.
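If you prefer to knit from the R console instead of the Knit button, a minimal sketch is shown below; the file name hw1.Rmd is a hypothetical placeholder for your own document.
#Knit the R Markdown file to an HTML document, which can then be
#published to RPubs with the Publish button in RStudio
library(rmarkdown)
render("hw1.Rmd", output_format = "html_document")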
Question #1
Use the nycflights13 package and the flights data frame to answer the following questions: a. What month had the highest proportion of cancelled flights? b. What month had the lowest?
#Load the required libraries
library(nycflights13)
library(dplyr)
#Check the dataset
head(flights)
## # A tibble: 6 × 19
## year month day dep_time sched_dep…¹ dep_d…² arr_t…³ sched…⁴ arr_d…⁵ carrier
## <int> <int> <int> <int> <int> <dbl> <int> <int> <dbl> <chr>
## 1 2013 1 1 517 515 2 830 819 11 UA
## 2 2013 1 1 533 529 4 850 830 20 UA
## 3 2013 1 1 542 540 2 923 850 33 AA
## 4 2013 1 1 544 545 -1 1004 1022 -18 B6
## 5 2013 1 1 554 600 -6 812 837 -25 DL
## 6 2013 1 1 554 558 -4 740 728 12 UA
## # … with 9 more variables: flight <int>, tailnum <chr>, origin <chr>,
## # dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>,
## # time_hour <dttm>, and abbreviated variable names ¹sched_dep_time,
## # ²dep_delay, ³arr_time, ⁴sched_arr_time, ⁵arr_delay
#Code: for each month, count cancelled flights (treated here as flights
#with a missing arrival delay) and compute the cancellation proportion
flights_data = flights %>%
  group_by(month) %>%
  summarise(flights_cancelled = sum(is.na(arr_delay)),
            total_flights = n(),
            cancelled_flights_prop = flights_cancelled/total_flights) %>%
  arrange(desc(cancelled_flights_prop))
#Output results
flights_data
## # A tibble: 12 × 4
## month flights_cancelled total_flights cancelled_flights_prop
## <int> <int> <int> <dbl>
## 1 2 1340 24951 0.0537
## 2 6 1168 28243 0.0414
## 3 12 1115 28135 0.0396
## 4 7 1132 29425 0.0385
## 5 3 932 28834 0.0323
## 6 4 766 28330 0.0270
## 7 5 668 28796 0.0232
## 8 1 606 27004 0.0224
## 9 9 564 27574 0.0205
## 10 8 571 29327 0.0195
## 11 11 297 27268 0.0109
## 12 10 271 28889 0.00938
#Part a. What month had the highest proportion of cancelled flights?
#With a 'cancelled_flights_prop' value of 0.053705262, February (month 2) had the highest proportion of cancelled flights.
#Part b. What month had the lowest?
#With a 'cancelled_flights_prop' value of 0.009380733, October (month 10) had the lowest proportion of cancelled flights.
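Because flights_data is already sorted by cancelled_flights_prop, the answers are simply its first and last rows. As an alternative, a small sketch using dplyr's slice_max() and slice_min() on the flights_data object created above pulls them out directly:
#Month with the highest cancellation proportion
slice_max(flights_data, cancelled_flights_prop, n = 1)
#Month with the lowest cancellation proportion
slice_min(flights_data, cancelled_flights_prop, n = 1)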
Question #2
Consider the following pipeline:
library(tidyverse)
#mtcars %>%
# group_by(cyl) %>%
# summarize(avg_mpg = mean(mpg)) %>%
# filter(am == 1)
What is the problem with this pipeline?
The problem with this pipeline is the placement of filter(am == 1). Because group_by() and summarize() have already been applied, the only columns left in the result are cyl and avg_mpg, so filtering on am produces an error: the am column no longer exists after the summarize step.
The filter should be applied before grouping and summarizing: mtcars %>% filter(am == 1) %>% group_by(cyl) %>% summarize(avg_mpg = mean(mpg)), as shown in the sketch below.
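A minimal runnable version of the corrected pipeline, assuming dplyr (or the tidyverse) is loaded:
#Keep manual-transmission cars first, then average mpg by cylinder count
mtcars %>%
  filter(am == 1) %>%
  group_by(cyl) %>%
  summarize(avg_mpg = mean(mpg))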
Question #3
Define two new variables in the Teams data frame in the Lahman package:
batting average (BA). Batting average is the ratio of hits (H) to at-bats (AB).
slugging percentage (SLG). Slugging percentage is total bases divided by at-bats (AB). To compute total bases, you get 1 for a single, 2 for a double, 3 for a triple, and 4 for a home run.
#Load the required library
library(Lahman)
#Copy the Teams data frame into a working variable
team_data = Teams
#Check the dataset
head(team_data)
## yearID lgID teamID franchID divID Rank G Ghome W L DivWin WCWin LgWin
## 1 1871 NA BS1 BNA <NA> 3 31 NA 20 10 <NA> <NA> N
## 2 1871 NA CH1 CNA <NA> 2 28 NA 19 9 <NA> <NA> N
## 3 1871 NA CL1 CFC <NA> 8 29 NA 10 19 <NA> <NA> N
## 4 1871 NA FW1 KEK <NA> 7 19 NA 7 12 <NA> <NA> N
## 5 1871 NA NY2 NNA <NA> 5 33 NA 16 17 <NA> <NA> N
## 6 1871 NA PH1 PNA <NA> 1 28 NA 21 7 <NA> <NA> Y
## WSWin R AB H X2B X3B HR BB SO SB CS HBP SF RA ER ERA CG SHO SV
## 1 <NA> 401 1372 426 70 37 3 60 19 73 16 NA NA 303 109 3.55 22 1 3
## 2 <NA> 302 1196 323 52 21 10 60 22 69 21 NA NA 241 77 2.76 25 0 1
## 3 <NA> 249 1186 328 35 40 7 26 25 18 8 NA NA 341 116 4.11 23 0 0
## 4 <NA> 137 746 178 19 8 2 33 9 16 4 NA NA 243 97 5.17 19 1 0
## 5 <NA> 302 1404 403 43 21 1 33 15 46 15 NA NA 313 121 3.72 32 1 0
## 6 <NA> 376 1281 410 66 27 9 46 23 56 12 NA NA 266 137 4.95 27 0 0
## IPouts HA HRA BBA SOA E DP FP name
## 1 828 367 2 42 23 243 24 0.834 Boston Red Stockings
## 2 753 308 6 28 22 229 16 0.829 Chicago White Stockings
## 3 762 346 13 53 34 234 15 0.818 Cleveland Forest Citys
## 4 507 261 5 21 17 163 8 0.803 Fort Wayne Kekiongas
## 5 879 373 7 42 22 235 14 0.840 New York Mutuals
## 6 747 329 3 53 16 194 13 0.845 Philadelphia Athletics
## park attendance BPF PPF teamIDBR teamIDlahman45
## 1 South End Grounds I NA 103 98 BOS BS1
## 2 Union Base-Ball Grounds NA 104 102 CHI CH1
## 3 National Association Grounds NA 96 100 CLE CL1
## 4 Hamilton Field NA 101 107 KEK FW1
## 5 Union Grounds (Brooklyn) NA 90 88 NYU NY2
## 6 Jefferson Street Grounds NA 102 98 ATH PH1
## teamIDretro
## 1 BS1
## 2 CH1
## 3 CL1
## 4 FW1
## 5 NY2
## 6 PH1
#Variable a. Batting Average (BA)
team_data = mutate(team_data, BA = H/AB)
#Variable b. Slugging Percentage (SLG)
#H already counts every hit once, so total bases = H + X2B + 2*X3B + 3*HR
team_data = mutate(team_data, SLG = (H + X2B + 2*X3B + 3*HR)/AB)
#Output additional columns
head(team_data)
## yearID lgID teamID franchID divID Rank G Ghome W L DivWin WCWin LgWin
## 1 1871 NA BS1 BNA <NA> 3 31 NA 20 10 <NA> <NA> N
## 2 1871 NA CH1 CNA <NA> 2 28 NA 19 9 <NA> <NA> N
## 3 1871 NA CL1 CFC <NA> 8 29 NA 10 19 <NA> <NA> N
## 4 1871 NA FW1 KEK <NA> 7 19 NA 7 12 <NA> <NA> N
## 5 1871 NA NY2 NNA <NA> 5 33 NA 16 17 <NA> <NA> N
## 6 1871 NA PH1 PNA <NA> 1 28 NA 21 7 <NA> <NA> Y
## WSWin R AB H X2B X3B HR BB SO SB CS HBP SF RA ER ERA CG SHO SV
## 1 <NA> 401 1372 426 70 37 3 60 19 73 16 NA NA 303 109 3.55 22 1 3
## 2 <NA> 302 1196 323 52 21 10 60 22 69 21 NA NA 241 77 2.76 25 0 1
## 3 <NA> 249 1186 328 35 40 7 26 25 18 8 NA NA 341 116 4.11 23 0 0
## 4 <NA> 137 746 178 19 8 2 33 9 16 4 NA NA 243 97 5.17 19 1 0
## 5 <NA> 302 1404 403 43 21 1 33 15 46 15 NA NA 313 121 3.72 32 1 0
## 6 <NA> 376 1281 410 66 27 9 46 23 56 12 NA NA 266 137 4.95 27 0 0
## IPouts HA HRA BBA SOA E DP FP name
## 1 828 367 2 42 23 243 24 0.834 Boston Red Stockings
## 2 753 308 6 28 22 229 16 0.829 Chicago White Stockings
## 3 762 346 13 53 34 234 15 0.818 Cleveland Forest Citys
## 4 507 261 5 21 17 163 8 0.803 Fort Wayne Kekiongas
## 5 879 373 7 42 22 235 14 0.840 New York Mutuals
## 6 747 329 3 53 16 194 13 0.845 Philadelphia Athletics
## park attendance BPF PPF teamIDBR teamIDlahman45
## 1 South End Grounds I NA 103 98 BOS BS1
## 2 Union Base-Ball Grounds NA 104 102 CHI CH1
## 3 National Association Grounds NA 96 100 CLE CL1
## 4 Hamilton Field NA 101 107 KEK FW1
## 5 Union Grounds (Brooklyn) NA 90 88 NYU NY2
## 6 Jefferson Street Grounds NA 102 98 ATH PH1
## teamIDretro BA SLG
## 1 BS1 0.3104956 0.5021866
## 2 CH1 0.2700669 0.4431438
## 3 CL1 0.2765599 0.4603710
## 4 FW1 0.2386059 0.3324397
## 5 NY2 0.2870370 0.3960114
## 6 PH1 0.3200625 0.5144418
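Because Teams has many columns, the two new variables are easier to inspect with a short select(); a small follow-up sketch:
#Show just the identifying columns and the two new statistics
team_data %>%
  select(yearID, teamID, BA, SLG) %>%
  head()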
Question #4
Using the Teams data frame in the Lahman package, display the top-5 teams ranked in terms of slugging percentage (SLG) in Major League Baseball history. Repeat this using teams since 1969. Slugging percentage is total bases divided by at-bats. To compute total bases, you get 1 for a single, 2 for a double, 3 for a triple, and 4 for a home run.
#Load the required library
library(Lahman)
#Part 1: Top 5 ranked teams
top_5 = arrange(team_data, desc(SLG)) %>%
select(yearID, teamID, SLG)
#Output top 5 teams
head(top_5, 5)
## yearID teamID SLG
## 1 2019 HOU 0.6092998
## 2 2019 MIN 0.6071179
## 3 2003 BOS 0.6033975
## 4 2019 NYA 0.5996776
## 5 2020 ATL 0.5964320
#Part 2: Top 5 ranked teams since 1969
top_5_conditional = arrange(team_data, desc(SLG)) %>%
select(yearID, teamID, SLG) %>%
filter(yearID >= 1969)
#Output top 5 teams since 1969
head(top_5_conditional, 5)
## yearID teamID SLG
## 1 2019 HOU 0.6092998
## 2 2019 MIN 0.6071179
## 3 2003 BOS 0.6033975
## 4 2019 NYA 0.5996776
## 5 2020 ATL 0.5964320
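As an alternative to arrange() followed by head(), dplyr's slice_max() returns the top rows directly; a minimal sketch using the team_data object from Question #3:
#Top 5 teams by slugging percentage in MLB history
team_data %>%
  select(yearID, teamID, SLG) %>%
  slice_max(SLG, n = 5, with_ties = FALSE)
#Top 5 teams by slugging percentage since 1969
team_data %>%
  filter(yearID >= 1969) %>%
  select(yearID, teamID, SLG) %>%
  slice_max(SLG, n = 5, with_ties = FALSE)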
Question #5
Use the Batting, Pitching, and People tables in the Lahman package to answer the following questions.
Name every player in baseball history who has accumulated at least 300 home runs (HR) and at least 300 stolen bases (SB). You can find the first and last name of each player in the People data frame (formerly called Master). Join this to your result along with the total home runs and total stolen bases for each of these elite players.
Similarly, name every pitcher in baseball history who has accumulated at least 300 wins (W) and at least 3,000 strikeouts (SO).
Identify the name and year of every player who has hit at least 50 home runs in a single season. Which player had the lowest batting average in that season?
#Load the required library
library(Lahman)
##Part a
b_data = Batting %>%
group_by(playerID) %>%
summarise(total_HR = sum(HR),
total_SB = sum(SB)) %>%
filter(total_HR >= 300 &
total_SB >= 300) %>%
inner_join(People, by = c('playerID' = 'playerID')) %>%
select(nameFirst, nameLast, total_HR, total_SB)
#Output results
b_data
## # A tibble: 8 × 4
## nameFirst nameLast total_HR total_SB
## <chr> <chr> <int> <int>
## 1 Carlos Beltran 435 312
## 2 Barry Bonds 762 514
## 3 Bobby Bonds 332 461
## 4 Andre Dawson 438 314
## 5 Steve Finley 304 320
## 6 Willie Mays 660 338
## 7 Alex Rodriguez 696 329
## 8 Reggie Sanders 305 304
##Part b
p_data = Pitching %>%
group_by(playerID) %>%
summarise(total_W = sum(W),
total_SO = sum(SO)) %>%
filter(total_W >= 300 &
total_SO >= 3000) %>%
inner_join(People, by = c('playerID' = 'playerID')) %>%
select(nameFirst, nameLast, total_W, total_SO)
#Output results
p_data
## # A tibble: 10 × 4
## nameFirst nameLast total_W total_SO
## <chr> <chr> <int> <int>
## 1 Steve Carlton 329 4136
## 2 Roger Clemens 354 4672
## 3 Randy Johnson 303 4875
## 4 Walter Johnson 417 3509
## 5 Greg Maddux 355 3371
## 6 Phil Niekro 318 3342
## 7 Gaylord Perry 314 3534
## 8 Nolan Ryan 324 5714
## 9 Tom Seaver 311 3640
## 10 Don Sutton 324 3574
##Part c
ba_data = Batting %>%
group_by(playerID, yearID) %>%
summarise(total_HR = sum(HR),
BA = sum(H)/sum(AB)) %>%
filter(total_HR >= 50) %>%
inner_join(People, by = c('playerID' = 'playerID')) %>%
select(nameFirst, nameLast, yearID, total_HR, BA) %>%
ungroup() %>%
arrange(BA)
#Output results
ba_data
## # A tibble: 46 × 6
## playerID nameFirst nameLast yearID total_HR BA
## <chr> <chr> <chr> <int> <int> <dbl>
## 1 alonspe01 Pete Alonso 2019 53 0.260
## 2 bautijo02 Jose Bautista 2010 54 0.260
## 3 jonesan01 Andruw Jones 2005 51 0.263
## 4 marisro01 Roger Maris 1961 61 0.269
## 5 vaughgr01 Greg Vaughn 1998 50 0.272
## 6 mcgwima01 Mark McGwire 1997 58 0.274
## 7 fieldce01 Cecil Fielder 1990 51 0.277
## 8 mcgwima01 Mark McGwire 1999 65 0.278
## 9 stantmi03 Giancarlo Stanton 2017 59 0.281
## 10 judgeaa01 Aaron Judge 2017 52 0.284
## # … with 36 more rows
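Based on the sorted output above, Pete Alonso had the lowest batting average in a 50-home-run season, batting about .260 while hitting 53 home runs in 2019, narrowly below Jose Bautista's 2010 season. The same row can be pulled out directly with slice_min(); a small sketch using the ba_data object above:
#Player with the lowest batting average among 50-HR seasons
ba_data %>%
  slice_min(BA, n = 1)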