Week 5 - Arrow package_student

For larger-than-life datasets

Author

DSA_406_001_SP25_wk5_wckiser

Today’s Lesson: Efficient Big Data Analysis with Arrow

Objectives

By the end of this lesson, you will be able to:

  • Understand why Arrow is crucial for big data analysis
  • Practice with real-world data
  • Apply data science communication principles to your analysis

Why Arrow?

Arrow is designed for handling very large datasets (100GB+) with:

  • Zero-copy reads
  • Columnar format efficiency
  • Cross-language compatibility
  • Native parallel processing

🤔 Data Scientist Thinking: When working with big data, we need to consider both computational efficiency and memory management. How might these features impact our daily workflow?

Resource: Apache Arrow Cookbook
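
If you don’t already have these packages, a one-time install from CRAN looks like this:

# One-time setup (run once): install the packages used in this lesson
install.packages(c("tidyverse", "arrow"))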

# Load libraries
library(tidyverse)
── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
✔ dplyr     1.1.4     ✔ readr     2.1.5
✔ forcats   1.0.0     ✔ stringr   1.5.1
✔ ggplot2   3.5.1     ✔ tibble    3.2.1
✔ lubridate 1.9.4     ✔ tidyr     1.3.1
✔ purrr     1.0.2     
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag()    masks stats::lag()
ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
library(arrow)

Attaching package: 'arrow'

The following object is masked from 'package:lubridate':

    duration

The following object is masked from 'package:utils':

    timestamp

Downloading the Data

We’ll use a dataset of item checkouts from the Seattle Public Library, available online at data.seattle.gov/Community/Checkouts-by-Title/tmmm-ytt6.

Step 1: Create a Directory

First, let’s create a special folder to store our data:

# Create a "data" directory if it doesn't exist already
# Using showWarnings = FALSE to suppress warning if directory already exists
dir.create("data", showWarnings = FALSE)

Step 2: Download the Dataset

Now for the fun part! We’ll download the Seattle Library dataset (9GB).

⚠️ Important: This is a 9GB file, so:

  • Make sure you have enough disk space
  • Expect the download to take a while, depending on your connection

# Download Seattle library checkout dataset:
# 1. Fetch data from AWS S3 bucket URL
# 2. Save to local data directory
# 3. Use resume = TRUE to allow continuing interrupted downloads
curl::multi_download(
  "https://r4ds.s3.us-west-2.amazonaws.com/seattle-library-checkouts.csv",
  "data/seattle-library-checkouts.csv",
  resume = TRUE
)

Why use curl::multi_download()?

  • Shows a progress bar (great for tracking large downloads)
  • Can resume if interrupted (super helpful for big files!)
  • More reliable than base R download methods
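
For comparison, the base R equivalent is a single download.file() call, which cannot resume an interrupted download:

# Base R alternative (no resume support)
download.file(
  "https://r4ds.s3.us-west-2.amazonaws.com/seattle-library-checkouts.csv",
  "data/seattle-library-checkouts.csv",
  mode = "wb"
)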

Resource: curl

🤔 While the file downloads, let’s think about:

  1. Why do we need special tools for such large datasets?

  2. What challenges might we face with traditional R methods?

  3. How might a library use this kind of data?

Step 3: Verify the Download

After the download completes, let’s make sure everything worked:

# Check if the Seattle library dataset file exists and print its size:
# 1. Verify file exists at specified path
# 2. Calculate file size in gigabytes by dividing bytes by 1024^3

file.exists("data/seattle-library-checkouts.csv")
[1] TRUE
file.size("data/seattle-library-checkouts.csv") / 1024^3  # Size in GB
[1] 8.579315

Loading Our Data with Arrow 🚀

Step 1: Opening the Dataset

# Load Seattle library checkout data into Arrow dataset:
# 1. Specify the CSV file path
# 2. Set ISBN column to be read as string type to preserve leading zeros
# 3. Define CSV as the file format
seattle_csv <- open_dataset(
  sources = "data/seattle-library-checkouts.csv", 
  col_types = schema(ISBN = string()),
  format = "csv"
)

What’s Actually Happening? 🤔

The Magic of Lazy Loading

When you run this code with open_dataset(), Arrow does something clever:

  1. It peeks at the first few thousand rows

  2. Figures out what kind of data is in each column

  3. Creates a roadmap of the data

  4. Then… it stops!

That’s right - it doesn’t load the whole 9GB file. Imagine Arrow as a really efficient librarian who:

  • Creates an index of where everything is

  • Only gets books (data) when you specifically ask for them

  • Keeps track of what’s where without moving everything
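
Here is a small sketch of that laziness: building a dplyr query on the dataset does no real work until you ask for results.

# Build a query - nothing is read or computed yet
query <- seattle_csv %>%
  filter(CheckoutYear == 2019) %>%
  group_by(MaterialType) %>%
  summarise(total = sum(Checkouts))

class(query)  # an Arrow dplyr query object, not a data frame
# Only collect(query) would actually scan the file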

Why Specify ISBN as String?

  • The first 80,000 rows have blank ISBNs
  • Arrow only inspects the start of the file when guessing column types, so it would misread this column
  • By telling Arrow “this is definitely text”, we prevent any confusion
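
You can check the schema Arrow settled on directly; the dataset object exposes it:

# Inspect the inferred schema (column names and types)
seattle_csv$schema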

Run seattle_csv

You’ll see something interesting:

  • Information about where the data is stored

  • The structure Arrow discovered

  • Column names and types

  • But NO actual data yet!

# Inspect the dataset object (printing it shows the schema, not the data)
seattle_csv

🎯 Key Takeaways

  1. Arrow is lazy (in a good way!)

    • Only does work when necessary

    • Saves memory and time

  2. Arrow is smart

    • Figures out data types automatically

    • But accepts our help when needed

  3. Arrow is efficient

    • Keeps track of data without loading it

    • Ready to fetch exactly what we need

dplyr functions

The arrow package provides a dplyr backend, allowing you to analyze larger-than-memory datasets using familiar dplyr syntax.

Using collect(), we can force Arrow to perform the computation and return the results as an in-memory tibble.

# Collecting the whole dataset would take a long time, so don't run this in class:
# dplyr::collect(seattle_csv)
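
A cheap way to peek instead is to pull just a handful of rows before collecting:

# Bring only the first 10 rows into memory
seattle_csv %>%
  head(10) %>%
  collect()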

🔍 Investigation

👉 Now You Try! (3 min)

Check out the dataset to understand its data structure.

Basic Activities:

# Create below:

# Clue 1: Basic structure
#ADD CODE BELOW
str(seattle_csv)
Classes 'FileSystemDataset', 'Dataset', 'ArrowObject', 'R6' <FileSystemDataset>
  Inherits from: <Dataset>
  Public:
    .:xp:.: externalptr
    .class_title: function () 
    .unsafe_delete: function () 
    class_title: function () 
    clone: function (deep = FALSE) 
    files: active binding
    filesystem: active binding
    format: active binding
    initialize: function (xp) 
    metadata: active binding
    NewScan: function () 
    num_cols: active binding
    num_rows: active binding
    pointer: function () 
    print: function (...) 
    schema: active binding
    set_pointer: function (xp) 
    ToString: function () 
    type: active binding
    WithSchema: function (schema)  
# Clue 2: Dataset dimensions
#ADD CODE BELOW
dim(seattle_csv)
[1] 41389465       12
# Clue 3: Column names 
#ADD CODE BELOW
names(seattle_csv)
 [1] "UsageClass"      "CheckoutType"    "MaterialType"    "CheckoutYear"   
 [5] "CheckoutMonth"   "Checkouts"       "Title"           "ISBN"           
 [9] "Creator"         "Subjects"        "Publisher"       "PublicationYear"
# Clue 4: Structure and types
#ADD CODE BELOW
glimpse(seattle_csv)
FileSystemDataset with 1 csv file
41,389,465 rows x 12 columns
$ UsageClass      <string> "Physical", "Physical", "Digital", "Physical", "Physi…
$ CheckoutType    <string> "Horizon", "Horizon", "OverDrive", "Horizon", "Horizo…
$ MaterialType    <string> "BOOK", "BOOK", "EBOOK", "BOOK", "SOUNDDISC", "BOOK",…
$ CheckoutYear     <int64> 2016, 2016, 2016, 2016, 2016, 2016, 2016, 2016, 2016,…
$ CheckoutMonth    <int64> 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,…
$ Checkouts        <int64> 1, 1, 1, 1, 1, 1, 1, 1, 4, 1, 1, 2, 3, 2, 1, 3, 2, 3,…
$ Title           <string> "Super rich : a guide to having it all / Russell Simm…
$ ISBN            <string> "", "", "", "", "", "", "", "", "", "", "", "", "", "…
$ Creator         <string> "Simmons, Russell", "Barclay, James, 1965-", "Tim Par…
$ Subjects        <string> "Self realization, Conduct of life, Attitude Psycholo…
$ Publisher       <string> "Gotham Books,", "Pyr,", "Random House, Inc.", "Dial …
$ PublicationYear <string> "c2011.", "2010.", "2015", "2005.", "c2004.", "c2005.…

👉 Now You Try! Create a write-up:

This dataset contains 41,389,465 rows, each recording how many times a given item was checked out in a given month, from April 2005 to October 2022.

Part 2: Practical Arrow Usage

Working with summarise() in Arrow

  1. The summarise() function is like creating a summary report of your data. With Arrow, it’s especially powerful because it can create these summaries without loading all the data into memory.

    Common summary functions (a quick demo follows this list):

  • sum():

    • Adds up all values

    • Best for: Totals, like total checkouts

    • Example: sum(c(5, 3, 2)) = 10

  • mean():

    • Calculates the average

    • Best for: Typical or central values

    • Example: mean(c(5, 3, 2)) = (5+3+2)/3 = 3.33

  • median():

    • Finds the middle value

    • Best for: Central tendency when you have outliers

    • Example: median(c(5, 3, 2)) = 3

  • max():

    • Finds the highest value

    • Best for: Peak values

    • Example: max(c(5, 3, 2)) = 5

  • min():

    • Finds the lowest value

    • Best for: Minimum values

    • Example: min(c(5, 3, 2)) = 2
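
Here is a quick demo of these functions on a toy vector (plain R, no Arrow needed):

# Summary functions on a small vector
x <- c(5, 3, 2)
sum(x)     # 10
mean(x)    # 3.33
median(x)  # 3
max(x)     # 5
min(x)     # 2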

  2. Let’s start with just sum()

I wonder how many checkouts there are each year?

# Calculate total checkouts per year by:
# 1. Grouping the data by checkout year
# 2. Summing all checkouts within each year
# 3. Collecting results from the lazy query into memory
yearly_checkouts <- seattle_csv %>%
  group_by(CheckoutYear) %>%
  summarise(
    total_checkouts = sum(Checkouts)
  ) %>%
  collect()
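
Because collect() returns an ordinary tibble, printing the result shows one row per year:

# Print the collected summary (one row per CheckoutYear)
yearly_checkouts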

💭 Analysis Questions:

  • What trends do you notice in the yearly checkouts?

  • Can you spot any unusual years? What might explain them?

I wonder what different material types there are?

# Get unique material types in the dataset by:
# 1. Selecting only the MaterialType column
# 2. Removing duplicate values
# 3. Collecting results from the lazy query into memory

seattle_csv %>%
  select(MaterialType) %>%
  distinct() %>%
  collect()
# A tibble: 71 × 1
   MaterialType
   <chr>       
 1 BOOK        
 2 EBOOK       
 3 SOUNDDISC   
 4 AUDIOBOOK   
 5 VIDEODISC   
 6 SONG        
 7 MUSIC       
 8 SOUNDREC    
 9 MOVIE       
10 TELEVISION  
# ℹ 61 more rows
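
A related sketch using the same verbs: how many rows does each material type account for? Both n() and arrange() are supported by Arrow’s dplyr backend.

# Count rows per material type, most common first
seattle_csv %>%
  group_by(MaterialType) %>%
  summarise(n = n()) %>%
  arrange(desc(n)) %>%
  collect()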

💭 After the Results:

  • Were there any material types that surprised you?

  • How do these categories reflect changes in technology and media consumption?

  • What questions could we investigate about these different material types?

I wonder what the totals were for each material type in 2020?

# Calculate total checkouts per material type in 2020 by:
# 1. Filtering for checkouts made in 2020
# 2. Grouping the data by material type (e.g., books, DVDs)
# 3. Summing checkouts within each group
# 4. Collecting results from the lazy query into memory
seattle_csv %>%
  filter(CheckoutYear == 2020) %>%
  group_by(MaterialType) %>%
  summarise(total_checkouts = sum(Checkouts)) %>%
  collect()
# A tibble: 43 × 2
   MaterialType  total_checkouts
   <chr>                   <int>
 1 BOOK                  1241999
 2 EBOOK                 2793961
 3 AUDIOBOOK             1513625
 4 SOUNDDISC              116221
 5 CR                        789
 6 VIDEO                    2430
 7 VIDEODISC              361587
 8 ER, SOUNDDISC             197
 9 MUSIC                    2404
10 LARGEPRINT                697
# ℹ 33 more rows
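
To rank these totals, add arrange() to the pipeline before collecting (a sketch):

# Same query, sorted so the most-checked-out material types come first
seattle_csv %>%
  filter(CheckoutYear == 2020) %>%
  group_by(MaterialType) %>%
  summarise(total_checkouts = sum(Checkouts)) %>%
  arrange(desc(total_checkouts)) %>%
  collect()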

Critical Thinking:

  • Why might we be particularly interested in 2020 data?

  • What hypotheses can we form about different material types?

👉 Now You Try! (3 minutes)

Modify the code above to find the total checkouts for a different year.

#ADD COMMENTS AND CODE BELOW
# Compare total checkouts in 2012 vs. all other years
seattle_csv %>%
  group_by(CheckoutYear == 2012) %>%
  summarise(
    total_checkouts = sum(Checkouts)
  ) %>%
  collect()
# A tibble: 2 × 2
  `CheckoutYear == 2012` total_checkouts
  <lgl>                            <int>
1 FALSE                        136324032
2 TRUE                           8163046
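
Grouping by a logical condition works, but it returns both the TRUE and FALSE groups. A cleaner sketch filters to the year first:

# Total checkouts in 2012 only
seattle_csv %>%
  filter(CheckoutYear == 2012) %>%
  summarise(total_checkouts = sum(Checkouts)) %>%
  collect()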

If time allows, work in a group, come up with a wondering, and investigate it.

#ADD COMMENTS AND CODE BELOW

🎯 Key Takeaways

  1. Arrow is lazy (in a good way!)

    • Only does work when necessary

    • Saves memory and time

  2. Arrow is smart

    • Figures out data types automatically

    • But accepts our help when needed

  3. Arrow is efficient

    • Keeps track of data without loading it

    • Ready to fetch exactly what we need

Homework Assignment 👉

  1. What is the average number of checkouts per year?
# Calculate the average number of checkouts per year
avg_checkouts_per_year <- seattle_csv %>%
  group_by(CheckoutYear) %>%
  summarise(
    total_checkouts = sum(Checkouts, na.rm = TRUE), # Sum of checkouts
    total_rows = n() # Count of rows
  ) %>%
  collect() %>% # Bring the data into R
  mutate(avg_checkouts = total_checkouts / total_rows) %>% # Calculate mean in R
  arrange(CheckoutYear) # Ensure data is sorted by year

# Print the result
print(avg_checkouts_per_year)
# A tibble: 18 × 4
   CheckoutYear total_checkouts total_rows avg_checkouts
          <int>           <int>      <int>         <dbl>
 1         2005         3798685    1331652          2.85
 2         2006         6599318    1938925          3.40
 3         2007         7126627    2040106          3.49
 4         2008         8438486    2136504          3.95
 5         2009         9135167    2226650          4.10
 6         2010         8608966    2190185          3.93
 7         2011         8321732    2335873          3.56
 8         2012         8163046    2397811          3.40
 9         2013         9057096    2513758          3.60
10         2014         9136081    2609700          3.50
11         2015         9084179    2729531          3.33
12         2016         9021051    2787319          3.24
13         2017         9231648    2820433          3.27
14         2018         9149176    2665098          3.43
15         2019         9199083    2589001          3.55
16         2020         6053717    1721376          3.52
17         2021         7361031    2242272          3.28
18         2022         7001989    2113271          3.31

  2. What is the average number of checkouts for each material type in 2021?
# Calculate the average number of checkouts for each material type in 2021
avg_checkouts_2021 <- seattle_csv %>%
  filter(CheckoutYear == 2021) %>%
  group_by(MaterialType) %>%
  summarise(
    avg_checkouts = mean(Checkouts, na.rm = TRUE)
  ) %>%
  collect()

# Print the result
print(avg_checkouts_2021)
# A tibble: 46 × 2
   MaterialType         avg_checkouts
   <chr>                        <dbl>
 1 AUDIOBOOK                     4.69
 2 EBOOK                         3.55
 3 BOOK                          2.69
 4 REGPRINT                      3.93
 5 SOUNDDISC                     1.56
 6 VIDEODISC                     3.19
 7 SOUNDDISC, VIDEODISC          1.45
 8 ATLAS                         1.96
 9 MUSIC                         1.12
10 VIDEO                         1.18
# ℹ 36 more rows
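
To identify the extremes for the write-up below, you can sort the collected tibble (plain dplyr, since avg_checkouts_2021 is already in memory):

# Highest and lowest average checkouts among 2021 material types
avg_checkouts_2021 %>% arrange(desc(avg_checkouts)) %>% head(5)
avg_checkouts_2021 %>% arrange(avg_checkouts) %>% head(5)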

Write a brief analysis (200-300 words) answering these questions:

  • What trends do you notice in the yearly averages?

The average number of checkouts per row rises steadily from 2005 to a peak around 2009, then levels off, fluctuating between roughly 3.2 and 3.6 through 2022. Total checkouts tell a different story: they drop sharply in 2020, likely due to the COVID-19 pandemic, which caused temporary closures and reduced physical checkouts. The partial recovery in 2021 and 2022 reflects the library’s adaptation to digital platforms and increased remote access to resources.

  • Which material types had the highest and lowest average checkouts in 2021?

In 2021, the material types with the highest average checkouts were digital formats: audiobooks (4.69) and eBooks (3.55). This aligns with the global shift toward digital consumption, especially during the pandemic, when physical interactions were limited. Traditional formats such as physical books (2.69) and sound discs (1.56) had lower averages, though they remained significant. This suggests that while digital media is on the rise, there is still demand for physical materials, possibly due to personal preference or the tactile experience of reading a physical book.

  • What might explain these differences in average checkouts?

The differences in average checkouts can be explained by several factors. Digital materials offer convenience and accessibility, making them more appealing to a broader audience. Additionally, the library’s efforts to expand its digital collection and promote online resources likely contributed to the higher averages for digital formats. Conversely, the lower averages for physical materials may reflect changing consumer habits and the increasing availability of digital alternatives. However, the continued use of physical materials indicates that they still hold value for certain users, highlighting the importance of maintaining a diverse collection to cater to different preferences.