---
title: "Problem Set #1 - dplyr basics"
subtitle: Data Wrangling with dplyr
author: "your name"
date: "2023-01-24"
output:
  html_document: default
---

Directions

During ANLY 512 we will be studying the theory and practice of data visualization. We will be using R and its packages to assemble data and construct many different types of visualizations. Before we begin studying data visualization, we need to develop some data wrangling skills, which we will use to get our data into a form suitable for visualization.

The objective of this assignment is to introduce you to RStudio, R Markdown, the tidyverse, and more specifically the dplyr package.

Each question is worth 5 points.

To submit this homework you will create the document in RStudio, knit it with the knitr package (the Knit button in RStudio), and then publish the document to your RPubs account. Once uploaded, submit the link to that document on Canvas. Please make sure that the link is hyperlinked and that I can see the visualization and the code required to create it.

Question #1

Use the nycflights13 package and the flights data frame to answer the following questions:

a. What month had the highest proportion of cancelled flights? February.
b. What month had the lowest? October.

library(nycflights13)
# cancelled flights are the rows with a missing departure time
flights[is.na(flights$dep_time), ]
## # A tibble: 8,255 × 19
##     year month   day dep_time sched_de…¹ dep_d…² arr_t…³ sched…⁴ arr_d…⁵ carrier
##    <int> <int> <int>    <int>      <int>   <dbl>   <int>   <int>   <dbl> <chr>  
##  1  2013     1     1       NA       1630      NA      NA    1815      NA EV     
##  2  2013     1     1       NA       1935      NA      NA    2240      NA AA     
##  3  2013     1     1       NA       1500      NA      NA    1825      NA AA     
##  4  2013     1     1       NA        600      NA      NA     901      NA B6     
##  5  2013     1     2       NA       1540      NA      NA    1747      NA EV     
##  6  2013     1     2       NA       1620      NA      NA    1746      NA EV     
##  7  2013     1     2       NA       1355      NA      NA    1459      NA EV     
##  8  2013     1     2       NA       1420      NA      NA    1644      NA EV     
##  9  2013     1     2       NA       1321      NA      NA    1536      NA EV     
## 10  2013     1     2       NA       1545      NA      NA    1910      NA AA     
## # … with 8,245 more rows, 9 more variables: flight <int>, tailnum <chr>,
## #   origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>,
## #   minute <dbl>, time_hour <dttm>, and abbreviated variable names
## #   ¹​sched_dep_time, ²​dep_delay, ³​arr_time, ⁴​sched_arr_time, ⁵​arr_delay

library(dplyr)

flight_cancelations <- flights %>%
  group_by(month) %>%
  summarize(cancelled = sum(is.na(dep_time)),
            cancelled_proportion = cancelled / n() * 100) %>%
  arrange(cancelled_proportion)

flight_cancelations
## # A tibble: 12 × 3
##    month cancelled cancelled_proportion
##    <int>     <int>                <dbl>
##  1    10       236                0.817
##  2    11       233                0.854
##  3     9       452                1.64
##  4     8       486                1.66
##  5     1       521                1.93
##  6     5       563                1.96
##  7     4       668                2.36
##  8     3       861                2.99
##  9     7       940                3.19
## 10     6      1009                3.57
## 11    12      1025                3.64
## 12     2      1261                5.05
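The answer can also be pulled out directly; a minimal sketch using slice_max() and slice_min() (assuming dplyr 1.0 or later), which confirms February as the highest month and October as the lowest:

library(nycflights13)
library(dplyr)

cancel_by_month <- flights %>%
  group_by(month) %>%
  summarize(cancelled_proportion = mean(is.na(dep_time)) * 100)

cancel_by_month %>% slice_max(cancelled_proportion, n = 1)  # highest: month 2 (February)
cancel_by_month %>% slice_min(cancelled_proportion, n = 1)  # lowest: month 10 (October)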

Question #2

Consider the following pipeline:

library(tidyverse)
mtcars %>%
  group_by(cyl) %>%
  summarize(avg_mpg = mean(mpg)) %>%
  filter(am == 1)

What is the problem with this pipeline? summarize() collapses the grouped data to one row per value of cyl and keeps only cyl and avg_mpg, so the column am no longer exists when filter(am == 1) runs, and the pipeline errors. To fix it, apply filter() before summarize().

library(tidyverse)

mtcars %>%
  group_by(cyl) %>%
  filter(am == 1) %>%
  summarize(avg_mpg = mean(mpg))
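An equivalent fix is to keep am as a grouping variable so that it is still available after the summarize step; a minimal sketch (assuming dplyr 1.0 or later for the .groups argument):

library(tidyverse)

mtcars %>%
  group_by(cyl, am) %>%                              # keep am in the summarized result
  summarize(avg_mpg = mean(mpg), .groups = "drop") %>%
  filter(am == 1)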

Question #3

Define two new variables in the Teams data frame in the Lahman package.

  1. batting average (BA). Batting average is the ratio of hits (H) to at-bats (AB)

  2. slugging percentage (SLG). Slugging percentage is total bases divided by at-bats (AB). To compute total bases, you get 1 for a single, 2 for a double, 3 for a triple, and 4 for a home run.

library(Lahman)

Teams1 <- Teams %>%
  mutate(B = H - X2B - X3B - HR,                      # singles
         BA = H / AB,                                  # batting average
         SLG = (B + 2 * X2B + 3 * X3B + 4 * HR) / AB)  # slugging percentage: total bases / at-bats
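A quick check of the new columns (a sketch; output not shown):

Teams1 %>%
  select(yearID, teamID, BA, SLG) %>%
  head()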

Question #4

Using the Teams data frame in the Lahman package, display the top-5 teams ranked in terms of slugging percentage (SLG) in Major League Baseball history. Repeat this using only teams since 1969. Slugging percentage is total bases divided by at-bats. To compute total bases, you get 1 for a single, 2 for a double, 3 for a triple, and 4 for a home run.
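One way to answer this, reusing the Teams1 data frame with the SLG column defined in Question #3 (a sketch; output not shown):

library(Lahman)
library(dplyr)

# top-5 team seasons by slugging percentage, all of MLB history
Teams1 %>%
  arrange(desc(SLG)) %>%
  select(yearID, teamID, SLG) %>%
  head(5)

# top-5 team seasons by slugging percentage since 1969
Teams1 %>%
  filter(yearID >= 1969) %>%
  arrange(desc(SLG)) %>%
  select(yearID, teamID, SLG) %>%
  head(5)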

Question #5

Use the Batting, Pitching, and People tables in the Lahman package to answer the following questions.

a. Name every player in baseball history who has accumulated at least 300 home runs (HR) and at least 300 stolen bases (SB). You can find the first and last name of each player in the People data frame (formerly called Master). Join this to your result along with the total home runs and total stolen bases for each of these elite players.

b. Similarly, name every pitcher in baseball history who has accumulated at least 300 wins (W) and at least 3,000 strikeouts (SO).

c. Identify the name and year of every player who has hit at least 50 home runs in a single season. Which player had the lowest batting average in that season?

library(Lahman)

# career home run and stolen base totals; keep the 300 HR / 300 SB club
Batting %>%
  group_by(playerID) %>%
  summarise(totalHR = sum(HR), totalSB = sum(SB)) %>%
  filter(totalHR >= 300 & totalSB >= 300) %>%
  inner_join(People, by = "playerID") %>%
  select(nameFirst, nameLast, totalHR, totalSB)
## # A tibble: 8 × 4
##   nameFirst nameLast  totalHR totalSB
##   <chr>     <chr>       <int>   <int>
## 1 Carlos    Beltran       435     312
## 2 Barry     Bonds         762     514
## 3 Bobby     Bonds         332     461
## 4 Andre     Dawson        438     314
## 5 Steve     Finley        304     320
## 6 Willie    Mays          660     338
## 7 Alex      Rodriguez     696     329
## 8 Reggie    Sanders       305     304
library(Lahman)

# career win and strikeout totals; keep the 300 W / 3,000 SO club
Pitching %>%
  group_by(playerID) %>%
  summarise(totalW = sum(W), totalSO = sum(SO)) %>%
  filter(totalW >= 300 & totalSO >= 3000) %>%
  inner_join(People, by = "playerID") %>%
  select(nameFirst, nameLast, totalW, totalSO)
## # A tibble: 10 × 4
##    nameFirst nameLast totalW totalSO
##    <chr>     <chr>     <int>   <int>
##  1 Steve     Carlton     329    4136
##  2 Roger     Clemens     354    4672
##  3 Randy     Johnson     303    4875
##  4 Walter    Johnson     417    3509
##  5 Greg      Maddux      355    3371
##  6 Phil      Niekro      318    3342
##  7 Gaylord   Perry       314    3534
##  8 Nolan     Ryan        324    5714
##  9 Tom       Seaver      311    3640
## 10 Don       Sutton      324    3574
library(Lahman)

# single-season home run totals and batting average; keep 50+ HR seasons,
# arranged so the lowest batting average appears first
Batting %>%
  group_by(playerID, yearID) %>%
  summarize(homeRuns = sum(HR), battingAverage = sum(H) / sum(AB)) %>%
  filter(homeRuns >= 50) %>%
  inner_join(People, by = "playerID") %>%
  select(yearID, playerID, nameFirst, nameLast, nameGiven, homeRuns, battingAverage) %>%
  arrange(battingAverage)
## # A tibble: 46 × 7
## # Groups:   playerID [30]
##    yearID playerID  nameFirst nameLast nameGiven              homeRuns batting…¹
##     <int> <chr>     <chr>     <chr>    <chr>                     <int>     <dbl>
##  1   2019 alonspe01 Pete      Alonso   Peter Morgan                 53     0.260
##  2   2010 bautijo02 Jose      Bautista Jose Antonio                 54     0.260
##  3   2005 jonesan01 Andruw    Jones    Andruw Rudolf                51     0.263
##  4   1961 marisro01 Roger     Maris    Roger Eugene                 61     0.269
##  5   1998 vaughgr01 Greg      Vaughn   Gregory Lamont               50     0.272
##  6   1997 mcgwima01 Mark      McGwire  Mark David                   58     0.274
##  7   1990 fieldce01 Cecil     Fielder  Cecil Grant                  51     0.277
##  8   1999 mcgwima01 Mark      McGwire  Mark David                   65     0.278
##  9   2017 stantmi03 Giancarlo Stanton  Giancarlo Cruz-Michael       59     0.281
## 10   2017 judgeaa01 Aaron     Judge    Aaron James                  52     0.284
## # … with 36 more rows, and abbreviated variable name ¹​battingAverage
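Because the table above is arranged by battingAverage in ascending order, the first row answers the last part: Pete Alonso (2019) had the lowest batting average among 50-home-run seasons. A direct way to extract just that row (a sketch, assuming dplyr 1.0 or later for slice_min(); output not shown):

library(Lahman)
library(dplyr)

# lowest batting average among 50+ home run seasons
Batting %>%
  group_by(playerID, yearID) %>%
  summarize(homeRuns = sum(HR), battingAverage = sum(H) / sum(AB), .groups = "drop") %>%
  filter(homeRuns >= 50) %>%
  inner_join(People, by = "playerID") %>%
  slice_min(battingAverage, n = 1) %>%
  select(yearID, nameFirst, nameLast, homeRuns, battingAverage)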