Week 5 - Arrow package_student

For larger-than-memory datasets

Author

DSA_406_001_SP25_wk5_unityid

Today’s Lesson: Efficient Big Data Analysis with Arrow

Objectives

By the end of this lesson, you will be able to:

  • Understand why Arrow is crucial for big data analysis
  • Practice with real-world data
  • Apply data science communication principles to your analysis

Why Arrow?

Arrow is designed for handling very large datasets (100GB+) with:

  • Zero-copy reads
  • Columnar format efficiency
  • Cross-language compatibility
  • Native parallel processing

πŸ€” Data Scientist Thinking: When working with big data, we need to consider both computational efficiency and memory management. How might these features impact our daily workflow?

Resource: Apache Arrow Cookbook

#Load Library
library(tidyverse)
Warning: package 'tidyverse' was built under R version 4.3.3
Warning: package 'ggplot2' was built under R version 4.3.3
Warning: package 'tibble' was built under R version 4.3.3
Warning: package 'tidyr' was built under R version 4.3.3
Warning: package 'readr' was built under R version 4.3.3
Warning: package 'purrr' was built under R version 4.3.3
Warning: package 'dplyr' was built under R version 4.3.3
Warning: package 'stringr' was built under R version 4.3.3
Warning: package 'forcats' was built under R version 4.3.3
Warning: package 'lubridate' was built under R version 4.3.3
── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
βœ” dplyr     1.1.4     βœ” readr     2.1.5
βœ” forcats   1.0.0     βœ” stringr   1.5.1
βœ” ggplot2   3.5.1     βœ” tibble    3.2.1
βœ” lubridate 1.9.3     βœ” tidyr     1.3.1
βœ” purrr     1.0.2     
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
βœ– dplyr::filter() masks stats::filter()
βœ– dplyr::lag()    masks stats::lag()
β„Ή Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
library(arrow)
Warning: package 'arrow' was built under R version 4.3.3

Attaching package: 'arrow'

The following object is masked from 'package:lubridate':

    duration

The following object is masked from 'package:utils':

    timestamp

Downloading the Data

A dataset of item checkouts from Seattle public libraries, available online at data.seattle.gov/Community/Checkouts-by-Title/tmmm-ytt6.

Step 1: Create a Directory

First, let’s create a special folder to store our data:

# Create a "data" directory if it doesn't exist already
# Using showWarnings = FALSE to suppress warning if directory already exists
dir.create("data", showWarnings = FALSE)

Step 2: Download the Dataset

Now for the fun part! We’ll download the Seattle Library dataset (9GB).

⚠️ Important: This is a 9GB file, so:

  • Make sure you have enough disk space
# Download Seattle library checkout dataset:
# 1. Fetch data from AWS S3 bucket URL
# 2. Save to local data directory
# 3. Use resume = TRUE to allow continuing interrupted downloads
curl::multi_download(
  "https://r4ds.s3.us-west-2.amazonaws.com/seattle-library-checkouts.csv",
  "data/seattle-library-checkouts.csv",
  resume = TRUE
)

Why use curl::multi_download()?

  • Shows a progress bar (great for tracking large downloads)
  • Can resume if interrupted (super helpful for big files!)
  • More reliable than base R download methods

Resource: CURL
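
If you want to confirm the download worked, curl::multi_download() returns a data frame describing each file. Here is a minimal, hedged sketch: the call is the same one shown above, just assigned to a name, and the column names follow the curl package documentation.

# Same download call as above, assigned so we can inspect the result
dl <- curl::multi_download(
  "https://r4ds.s3.us-west-2.amazonaws.com/seattle-library-checkouts.csv",
  "data/seattle-library-checkouts.csv",
  resume = TRUE
)
dl$success       # TRUE once the file has fully downloaded
dl$status_code   # typically 200 for a fresh download, 206 for a resumed one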

πŸ€” While the file downloads, let’s think about:

  1. Why do we need special tools for such large datasets?

  2. What challenges might we face with traditional R methods?

  3. How might a library use this kind of data?

Step 3: Verify the Download

After the download completes, let’s make sure everything worked:

# Check if the Seattle library dataset file exists and print its size:
# 1. Verify file exists at specified path
# 2. Calculate file size in gigabytes by dividing bytes by 1024^3

file.exists("data/seattle-library-checkouts.csv")
[1] TRUE
file.size("data/seattle-library-checkouts.csv") / 1024^3  # Size in GB
[1] 8.579315

Loading Our Data with Arrow 🚀

Step 1: Opening the Dataset

# Load Seattle library checkout data into Arrow dataset:
# 1. Specify the CSV file path
# 2. Set ISBN column to be read as string type to preserve leading zeros
# 3. Define CSV as the file format
seattle_csv <- open_dataset(
  sources = "data/seattle-library-checkouts.csv", 
  col_types = schema(ISBN = string()),
  format = "csv"
)

What’s Actually Happening? 🤔

The Magic of Lazy Loading

When you run this code with open_dataset(), Arrow does something clever:

  1. It peeks at the first few thousand rows

  2. Figures out what kind of data is in each column

  3. Creates a roadmap of the data

  4. Then… it stops!

That’s right - it doesn’t load the whole 9GB file. Imagine Arrow as a really efficient librarian who:

  • Creates an index of where everything is

  • Only gets books (data) when you specifically ask for them

  • Keeps track of what’s where without moving everything

Why Specify ISBN as String?

  • The first 80,000 rows have blank ISBNs
  • Without our help, Arrow might get confused about what type of data this is
  • By telling Arrow “this is definitely text”, we prevent any confusion
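
If you want to double-check what Arrow decided, the dataset object exposes the column types it settled on through its schema field. A quick sketch:

# Peek at the schema Arrow is using; ISBN should appear as string
seattle_csv$schema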

Run seattle_csv

You’ll see something interesting:

  • Information about where the data is stored

  • The structure Arrow discovered

  • Column names and types

  • But NO actual data yet!

#Inspect the data
seattle_csv
FileSystemDataset with 1 csv file
12 columns
UsageClass: string
CheckoutType: string
MaterialType: string
CheckoutYear: int64
CheckoutMonth: int64
Checkouts: int64
Title: string
ISBN: string
Creator: string
Subjects: string
Publisher: string
PublicationYear: string
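
One rough way to convince yourself that nothing has been loaded yet is to compare the size of the R object (just a handle to the file) with the size of the CSV on disk. A small sketch:

# The dataset object is only a roadmap, so it is tiny...
format(object.size(seattle_csv), units = "Kb")
# ...while the file it points at is roughly 9 GB
file.size("data/seattle-library-checkouts.csv") / 1024^3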

🎯 Key Takeaways

  1. Arrow is lazy (in a good way!)

    • Only does work when necessary

    • Saves memory and time

  2. Arrow is smart

    • Figures out data types automatically

    • But accepts our help when needed

  3. Arrow is efficient

    • Keeps track of data without loading it

    • Ready to fetch exactly what we need

dplyr functions

The arrow package provides a dplyr backend, allowing you to analyze larger-than-memory datasets using familiar dplyr syntax.

Using collect(), we can force Arrow to perform the computation and return the results to R as a tibble.

# Collecting the full dataset would take a very long time, so don't run this in class
#dplyr::collect(seattle_csv)
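
To watch the laziness in action without pulling 41 million rows into R, build a small query first and only collect() at the end. A hedged sketch (the filter values are just examples):

# Build the query: nothing is computed yet, this is only a plan
book_query <- seattle_csv %>%
  filter(CheckoutYear == 2022, MaterialType == "BOOK") %>%
  summarise(total_checkouts = sum(Checkouts))

book_query            # prints a description of the query, not the data
collect(book_query)   # now Arrow runs the computation and returns a tibble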

πŸ” Investigation

πŸ‘‰ Now You Try!** (3 min)

Explore the dataset to understand its structure.

#Create below:

# Clue 1: Basic structure
str(seattle_csv)
Classes 'FileSystemDataset', 'Dataset', 'ArrowObject', 'R6' <FileSystemDataset>
  Inherits from: <Dataset>
  Public:
    .:xp:.: externalptr
    .class_title: function () 
    .unsafe_delete: function () 
    class_title: function () 
    clone: function (deep = FALSE) 
    files: active binding
    filesystem: active binding
    format: active binding
    initialize: function (xp) 
    metadata: active binding
    NewScan: function () 
    num_cols: active binding
    num_rows: active binding
    pointer: function () 
    print: function (...) 
    schema: active binding
    set_pointer: function (xp) 
    ToString: function () 
    type: active binding
    WithSchema: function (schema)  
# Clue 2: Dataset dimensions

dim(seattle_csv)
[1] 41389465       12
# Clue 3: Column names 
#ADD CODE BELOW

names(seattle_csv)
 [1] "UsageClass"      "CheckoutType"    "MaterialType"    "CheckoutYear"   
 [5] "CheckoutMonth"   "Checkouts"       "Title"           "ISBN"           
 [9] "Creator"         "Subjects"        "Publisher"       "PublicationYear"
#Clue 4: Structure and types

glimpse(seattle_csv)
FileSystemDataset with 1 csv file
41,389,465 rows x 12 columns
$ UsageClass      <string> "Physical", "Physical", "Digital", "Physical", "Physi…
$ CheckoutType    <string> "Horizon", "Horizon", "OverDrive", "Horizon", "Horizo…
$ MaterialType    <string> "BOOK", "BOOK", "EBOOK", "BOOK", "SOUNDDISC", "BOOK",…
$ CheckoutYear     <int64> 2016, 2016, 2016, 2016, 2016, 2016, 2016, 2016, 2016,…
$ CheckoutMonth    <int64> 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,…
$ Checkouts        <int64> 1, 1, 1, 1, 1, 1, 1, 1, 4, 1, 1, 2, 3, 2, 1, 3, 2, 3,…
$ Title           <string> "Super rich : a guide to having it all / Russell Simm…
$ ISBN            <string> "", "", "", "", "", "", "", "", "", "", "", "", "", "…
$ Creator         <string> "Simmons, Russell", "Barclay, James, 1965-", "Tim Par…
$ Subjects        <string> "Self realization, Conduct of life, Attitude Psycholo…
$ Publisher       <string> "Gotham Books,", "Pyr,", "Random House, Inc.", "Dial …
$ PublicationYear <string> "c2011.", "2010.", "2015", "2005.", "c2004.", "c2005.…

πŸ‘‰ Now You Try! Create a write up:

This dataset contains 41,389,465 rows, each recording how many times a title was checked out in a given month, from April 2005 to October 2022.

Part 2: Practical Arrow Usage

Working with summarize() in Arrow

  1. The summarise() function is like creating a summary report of your data. With Arrow, it’s especially powerful because it can create these summaries without loading all the data into memory.

    Common summary functions (a quick base-R check follows this list):

  • sum():

    • Adds up all values

    • Best for: Totals, like total checkouts

    • Example: sum(c(5, 3, 2)) = 10

  • mean():

    • Calculates the average

    • Best for: Typical or central values

    • Example: mean(c(5, 3, 2)) = (5 + 3 + 2) / 3 ≈ 3.33

  • median():

    • Finds the middle value

    • Best for: Central tendency when you have outliers

    • Example: median(c(5, 3, 2)) = 3

  • max():

    • Finds the highest value

    • Best for: Peak values

    • Example: max(c(5, 3, 2)) = 5

  • min():

    • Finds the lowest value

    • Best for: Minimum values

    • Example: min(c(5, 3, 2)) = 2
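
Here is that quick base-R check of the toy examples; note the values go into a single vector with c() before being passed to each function.

# Toy vector to verify the summary functions above
x <- c(5, 3, 2)
sum(x)     # 10
mean(x)    # 3.333333
median(x)  # 3
max(x)     # 5
min(x)     # 2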

  2. Now let’s start with just sum()

I wonder how many checkouts there are in a year?

# Calculate total checkouts per year by:
# 1. Grouping the data by checkout year
# 2. Summing all checkouts within each year
# 3. Collecting results from the lazy query into memory
yearly_checkouts <- seattle_csv %>%
  group_by(CheckoutYear) %>%
  summarise(
    total_checkouts = sum(Checkouts)
  ) %>%
  collect()

yearly_checkouts
# A tibble: 18 × 2
   CheckoutYear total_checkouts
          <int>           <int>
 1         2016         9021051
 2         2022         7001989
 3         2017         9231648
 4         2018         9149176
 5         2019         9199083
 6         2020         6053717
 7         2021         7361031
 8         2005         3798685
 9         2006         6599318
10         2007         7126627
11         2008         8438486
12         2009         9135167
13         2010         8608966
14         2011         8321732
15         2012         8163046
16         2013         9057096
17         2014         9136081
18         2015         9084179

πŸ’­ Analysis Questions:

  • What trends do you notice in the yearly checkouts?

  • Can you spot any unusual years? What might explain them?

I wonder what different material types there are?

# Get unique material types in the dataset by:
# 1. Selecting only the MaterialType column
# 2. Removing duplicate values
# 3. Collecting results from the lazy query into memory

seattle_csv %>%
  select(MaterialType) %>%
  distinct() %>%
  collect()
# A tibble: 71 × 1
   MaterialType
   <chr>       
 1 BOOK        
 2 EBOOK       
 3 SOUNDDISC   
 4 AUDIOBOOK   
 5 VIDEODISC   
 6 SONG        
 7 MUSIC       
 8 SOUNDREC    
 9 MOVIE       
10 TELEVISION  
# ℹ 61 more rows

πŸ’­ After the Results:

  • Were there any material types that surprised you?

  • How do these categories reflect changes in technology and media consumption?

  • What questions could we investigate about these different material types?

I wonder what the checkout totals were for each material type in 2020?

# Calculate total checkouts per material type in 2020 by:
# 1. Filtering for checkouts made in 2020
# 2. Grouping the data by material type (e.g., books, DVDs)
# 3. Summing checkouts within each group
# 4. Collecting results from the lazy query into memory
seattle_csv %>%
  filter(CheckoutYear == 2020) %>%
  group_by(MaterialType) %>%
  summarise(total_checkouts = sum(Checkouts)) %>%
  collect()
# A tibble: 43 × 2
   MaterialType  total_checkouts
   <chr>                   <int>
 1 BOOK                  1241999
 2 EBOOK                 2793961
 3 AUDIOBOOK             1513625
 4 SOUNDDISC              116221
 5 CR                        789
 6 VIDEO                    2430
 7 VIDEODISC              361587
 8 ER, SOUNDDISC             197
 9 MUSIC                    2404
10 LARGEPRINT                697
# ℹ 33 more rows

Critical Thinking:

  • Why might we be particularly interested in 2020 data?

  • What hypotheses can we form about different material types?

πŸ‘‰ Now You Try! (3 minutes)

Modify the code above to find the total checkouts for a different year.

#ADD COMMENTS AND CODE BELOW

If time allows work in a group and come up with a wondering and investigate it.

#ADD COMMENTS AND CODE BELOW

🎯 Key Takeaways

  1. Arrow is lazy (in a good way!)

    • Only does work when necessary

    • Saves memory and time

  2. Arrow is smart

    • Figures out data types automatically

    • But accepts our help when needed

  3. Arrow is efficient

    • Keeps track of data without loading it

    • Ready to fetch exactly what we need

Homework Assignment 👉

  1. What is the average number of checkouts per year?
seattle_csv %>%
  group_by(CheckoutYear) %>%
  summarise(
    average_checkouts = mean(Checkouts, na.rm = TRUE)
  ) %>%
  collect()
# A tibble: 18 × 2
   CheckoutYear average_checkouts
          <int>             <dbl>
 1         2016              3.24
 2         2022              3.31
 3         2017              3.27
 4         2018              3.43
 5         2019              3.55
 6         2020              3.52
 7         2021              3.28
 8         2005              2.85
 9         2006              3.40
10         2007              3.49
11         2008              3.95
12         2009              4.10
13         2010              3.93
14         2011              3.56
15         2012              3.40
16         2013              3.60
17         2014              3.50
18         2015              3.33
  2. What is the average number of checkouts for each material type in 2021?
seattle_csv %>%
  filter(CheckoutYear == 2021) %>%
  group_by(MaterialType) %>%
  summarise(avgmaterial = mean(Checkouts, na.rm = TRUE)) %>%
  collect()
# A tibble: 46 × 2
   MaterialType         avgmaterial
   <chr>                      <dbl>
 1 AUDIOBOOK                   4.69
 2 EBOOK                       3.55
 3 BOOK                        2.69
 4 REGPRINT                    3.93
 5 SOUNDDISC                   1.56
 6 VIDEODISC                   3.19
 7 SOUNDDISC, VIDEODISC        1.45
 8 ATLAS                       1.96
 9 MUSIC                       1.12
10 VIDEO                       1.18
# ℹ 36 more rows

Write a brief analysis (200-300 words) answering these questions:

  • What trends do you notice in the yearly averages?
  • Which material types had the highest and lowest average checkouts in 2021?
  • What might explain these differences in average checkouts?

For this assignment I analyzed a Seattle Public Library dataset containing monthly checkout counts for each title. Looking at the average number of checkouts per year, the values range from about 2.9 in 2005 to about 4.1 in 2009, and I noticed that 2019 and 2020 had some of the highest averages of the past decade (around 3.5). I found this interesting because early 2020 marked the start of the Covid era, so I expected the pandemic to pull the average down, but it did not. I also looked at the average checkouts by material type for 2021, and there is a striking difference between Mixed material, which had an average of about 163, and VideoCass, which had an average of about 1. This makes me curious about what makes Mixed material so much more heavily checked out than the other material types, whose averages look almost negligible by comparison. My guess is that because each individual material type has its own pros and cons, combining several types in one item offsets the cons and makes Mixed items especially appealing.