Questions

The goal of this assignment is to give you practice in preparing different datasets for downstream analysis work. Your task is to:

  1. Choose any three of the “wide” datasets identified in the Week 6 Discussion items. (You may use your own dataset; please don’t use my Sample Post dataset, since that was used in your Week 6 assignment!) For each of the three chosen datasets:

 Create a .CSV file (or optionally, a MySQL database!) that includes all of the information included in the dataset. You’re encouraged to use a “wide” structure similar to how the information appears in the discussion item, so that you can practice tidying and transformations as described below.

 Read the information from your .CSV file into R, and use tidyr and dplyr as needed to tidy and transform your data. [Most of your grade will be based on this step!]

 Perform the analysis requested in the discussion item.

 Your code should be in an R Markdown file, posted to rpubs.com, and should include narrative descriptions of your data cleanup work, analysis, and conclusions.

  1. Please include in your homework submission, for each of the three chosen datasets:

 The URL to the .Rmd file in your GitHub repository, and

 The URL for your rpubs.com web page.

# Load library
library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr     1.1.4     ✔ readr     2.1.5
## ✔ forcats   1.0.0     ✔ stringr   1.5.1
## ✔ ggplot2   3.5.1     ✔ tibble    3.2.1
## ✔ lubridate 1.9.4     ✔ tidyr     1.3.1
## ✔ purrr     1.0.2     
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
library(dplyr)
library(readr)
library(tidyr)
library(stringr)
library(data.table)
## 
## Attaching package: 'data.table'
## 
## The following objects are masked from 'package:lubridate':
## 
##     hour, isoweek, mday, minute, month, quarter, second, wday, week,
##     yday, year
## 
## The following objects are masked from 'package:dplyr':
## 
##     between, first, last
## 
## The following object is masked from 'package:purrr':
## 
##     transpose
library(ggplot2)

DATASET 1

I will be using my dataset firstly, The dataset shows sales figures for different products across multiple months for several regions. The data is messy because it’s in wide format, with many columns representing individual months, and each row represents a different product across the regions. Analysis that might be performed on this data include:

Year-over-Year Trends: Analyze the sales growth for each product across different months and regions. Identify products with consistent growth or seasonal variations.

Product Sales Distribution: Assess which product is performing best in terms of overall sales across all regions. This analysis could be extended to compare performance across months.

Regional Performance Comparison: Compare sales performance between the North, South, and East regions. This could reveal whether there are regional preferences for certain products, or if the same products perform differently in various regions.

Sales Forecasting: Using the available monthly sales data, you could build a forecasting model to predict sales for the upcoming months, based on historical trends.

# Create the data as a data frame
dataset1 <- data.frame(
  Product_Name = c('Product A', 'Product A', 'Product A', 'Product B', 'Product B', 'Product B', 'Product C', 'Product C', 'Product C'),
  Region = c('North', 'South', 'East', 'North', 'South', 'East', 'North', 'South', 'East'),
  Jan_Sales = c(100, 200, 300, 150, 250, 350, 50, 100, 150),
  Feb_Sales = c(110, 210, 310, 160, 260, 360, 55, 105, 155),
  Mar_Sales = c(120, 220, 320, 170, 270, 370, 60, 110, 160),
  Apr_Sales = c(130, 230, 330, 180, 280, 380, 65, 115, 165),
  May_Sales = c(140, 240, 340, 190, 290, 390, 70, 120, 170),
  Jun_Sales = c(150, 250, 350, 200, 300, 400, 75, 125, 175)
)

# Write the data frame to a CSV file
write.csv(dataset1, "dataset1.csv", row.names = FALSE)

cat("CSV file 'dataset1.csv' has been created.")
## CSV file 'dataset1.csv' has been created.
# Read the CSV file into a data frame
sales_dataset1 <- read.csv("dataset1.csv")

# View the first few rows of the data
head(sales_dataset1)
##   Product_Name Region Jan_Sales Feb_Sales Mar_Sales Apr_Sales May_Sales
## 1    Product A  North       100       110       120       130       140
## 2    Product A  South       200       210       220       230       240
## 3    Product A   East       300       310       320       330       340
## 4    Product B  North       150       160       170       180       190
## 5    Product B  South       250       260       270       280       290
## 6    Product B   East       350       360       370       380       390
##   Jun_Sales
## 1       150
## 2       250
## 3       350
## 4       200
## 5       300
## 6       400
# Check column names
colnames(sales_dataset1)
## [1] "Product_Name" "Region"       "Jan_Sales"    "Feb_Sales"    "Mar_Sales"   
## [6] "Apr_Sales"    "May_Sales"    "Jun_Sales"

Using tidyr and dplyr for DATASET 1, The dataset will be Pivot from wide format to long format, where each month (Jan, Feb, Mar, etc.) will become a row under a “Month” column. And then extract the month sales values into a “Sales” column.

# Tidy the data: Pivot the sales columns into a long format
tidy_sales_dataset1 <- sales_dataset1 %>%
  pivot_longer(
    cols = starts_with("Jan"):starts_with("Jun"),  # Select the month columns
    names_to = "Month",  # The new column for month names
    values_to = "Sales"  # The new column for sales values
  ) %>%
  # Convert 'Month' to a factor with proper order
  mutate(Month = factor(Month, levels = c("Jan_Sales", "Feb_Sales", "Mar_Sales", "Apr_Sales", "May_Sales", "Jun_Sales"))) %>%
  # Clean up the month names by removing the "Sales" part
  mutate(Month = sub("_Sales", "", Month))

# Filter data to show only "Product A" and the first few regions
filtered_dataset1 <- tidy_sales_dataset1 %>%
  filter(Product_Name == "Product A" & Region %in% c("North", "South")) %>%
  head(6)  # Limit the output to the first 6 rows

# View the filtered data
print(filtered_dataset1)
## # A tibble: 6 × 4
##   Product_Name Region Month Sales
##   <chr>        <chr>  <chr> <int>
## 1 Product A    North  Jan     100
## 2 Product A    North  Feb     110
## 3 Product A    North  Mar     120
## 4 Product A    North  Apr     130
## 5 Product A    North  May     140
## 6 Product A    North  Jun     150

dplyr is then use to group and summarize the datA from DATASET 1

# Group the data by Product_Name, Region, and Month, then summarize the total Sales
summary_sales <- tidy_sales_dataset1 %>%
  group_by(Product_Name, Region, Month) %>%
  summarize(Total_Sales = sum(Sales), .groups = "drop")

# View the summarized data
print(summary_sales)
## # A tibble: 54 × 4
##    Product_Name Region Month Total_Sales
##    <chr>        <chr>  <chr>       <int>
##  1 Product A    East   Apr           330
##  2 Product A    East   Feb           310
##  3 Product A    East   Jan           300
##  4 Product A    East   Jun           350
##  5 Product A    East   Mar           320
##  6 Product A    East   May           340
##  7 Product A    North  Apr           130
##  8 Product A    North  Feb           110
##  9 Product A    North  Jan           100
## 10 Product A    North  Jun           150
## # ℹ 44 more rows

Plotting the Data from DATASET 1

We will create a line plot where the x-axis represents the Month, the y-axis represents Total_Sales, and the lines represent each combination of Product_Name and Region.

# Create the plot
ggplot(summary_sales, aes(x = Month, y = Total_Sales, color = interaction(Product_Name, Region), group = interaction(Product_Name, Region))) +
  geom_line(linewidth = 1) +  # Line plot
  geom_point(size = 3) +  # Points at each month
  labs(
    title = "Total Sales by Product, Region, and Month",
    x = "Month",
    y = "Total Sales",
    color = "Product and Region"
  ) +
  theme_minimal() +
  theme(
    axis.text.x = element_text(angle = 45, hjust = 1)  # Rotate x-axis labels
  )

# Save the tidied data to a new CSV file
write.csv(filtered_dataset1, "filtered_dataset1.csv", row.names = FALSE)

Conclusion DATASET 1

The sales data reveals key trends:

  1. Sales Growth: Sales consistently increase from January to June across all products and regions.

  2. Regional Differences: The East region shows the highest sales for both products, while the North region has the lowest.

  3. Product Comparison: Product A outperforms Product B in all regions, suggesting higher demand or popularity.

  4. Monthly Trends: Sales grow steadily month-over-month, indicating a positive seasonal trend.

Business Implications:

-Prepare for higher sales in summer months based on the growth trend.


DATASET 2

For Dataset 2, I will be using Daniel Hanasab posted dataset. This dataset contains monthly average temperatures and humidity levels for three cities: New York, Los Angeles, and Chicago.

It is untidy because:

Months are represented as separate columns instead of being in a single “Month” column.

Temperature and humidity are stored in separate columns for each month, rather than having a single “Measure” column with corresponding values.

Each row represents a city, but instead of a single observation per row, there are multiple values in different columns.

# Create the data frame with the temperature and humidity data
dataset2 <- tibble(
  City = c("New York", "Los Angeles", "Chicago"),
  Temp_Jan = c(32, 60, 30),
  Temp_Feb = c(35, 60, 30),
  Temp_Mar = c(42, 65, 40),
  Humid_Jan = c(75, 65, 80),
  Humid_Feb = c(72, 63, 78),
  Humid_Mar = c(68, 60, 75)
)

# Write the data frame to a CSV file
write.csv(dataset2, "city_weather_dataset2.csv", row.names = FALSE)
# Read the CSV file into a data frame
city_weather_dataset2 <- read.csv("city_weather_dataset2.csv")

# View the data
print(city_weather_dataset2)
##          City Temp_Jan Temp_Feb Temp_Mar Humid_Jan Humid_Feb Humid_Mar
## 1    New York       32       35       42        75        72        68
## 2 Los Angeles       60       60       65        65        63        60
## 3     Chicago       30       30       40        80        78        75

Tidy the Data (DataSet2)

The original Data is in a “wide” format, which contains multiple columns for each month’s temperature and humidity. Pivot this data to a “long” format to make it easier to analyze.

# Tidy the data: Pivot the temperature and humidity columns to long format
dataset2_tidy <- city_weather_dataset2 %>%
  pivot_longer(
    cols = c(Temp_Jan, Temp_Feb, Temp_Mar, Humid_Jan, Humid_Feb, Humid_Mar),  # Explicitly select the columns
    names_to = c("Variable", "Month"),  # New column names for variable type and month
    names_sep = "_",  # Use underscore to separate variable type and month
    values_to = "Value"  # Store values in the "Value" column
  ) %>%
  # Convert Month to a factor with proper order
  mutate(Month = factor(Month, levels = c("Jan", "Feb", "Mar")),
         Variable = factor(Variable, levels = c("Temp", "Humid")))

# View the tidy data
print(dataset2_tidy)
## # A tibble: 18 × 4
##    City        Variable Month Value
##    <chr>       <fct>    <fct> <int>
##  1 New York    Temp     Jan      32
##  2 New York    Temp     Feb      35
##  3 New York    Temp     Mar      42
##  4 New York    Humid    Jan      75
##  5 New York    Humid    Feb      72
##  6 New York    Humid    Mar      68
##  7 Los Angeles Temp     Jan      60
##  8 Los Angeles Temp     Feb      60
##  9 Los Angeles Temp     Mar      65
## 10 Los Angeles Humid    Jan      65
## 11 Los Angeles Humid    Feb      63
## 12 Los Angeles Humid    Mar      60
## 13 Chicago     Temp     Jan      30
## 14 Chicago     Temp     Feb      30
## 15 Chicago     Temp     Mar      40
## 16 Chicago     Humid    Jan      80
## 17 Chicago     Humid    Feb      78
## 18 Chicago     Humid    Mar      75

Transform the Data (DataSet2)

Transforming the data to calculate average temperatures and humidity for each city across all months.

# Summarize the data: Calculate average temperature and humidity by city and month

summary_dataset2 <- dataset2_tidy %>%
  group_by(City, Month, Variable) %>%
  summarize(Average_Value = mean(Value), .groups = "drop")

# View the summary statistics
print(summary_dataset2)
## # A tibble: 18 × 4
##    City        Month Variable Average_Value
##    <chr>       <fct> <fct>            <dbl>
##  1 Chicago     Jan   Temp                30
##  2 Chicago     Jan   Humid               80
##  3 Chicago     Feb   Temp                30
##  4 Chicago     Feb   Humid               78
##  5 Chicago     Mar   Temp                40
##  6 Chicago     Mar   Humid               75
##  7 Los Angeles Jan   Temp                60
##  8 Los Angeles Jan   Humid               65
##  9 Los Angeles Feb   Temp                60
## 10 Los Angeles Feb   Humid               63
## 11 Los Angeles Mar   Temp                65
## 12 Los Angeles Mar   Humid               60
## 13 New York    Jan   Temp                32
## 14 New York    Jan   Humid               75
## 15 New York    Feb   Temp                35
## 16 New York    Feb   Humid               72
## 17 New York    Mar   Temp                42
## 18 New York    Mar   Humid               68

Analyze the Data (DataSet2)

Visualizing the Temperature Data

# Plot the data: Create a line plot to visualize the trends of Temp and Humid across months for each city
ggplot(summary_dataset2, aes(x = Month, y = Average_Value, color = interaction(City, Variable), group = interaction(City, Variable))) +
  geom_line(size = 1) +  # Line plot
  geom_point(size = 3) +  # Points at each month
  labs(
    title = "Average Temperature and Humidity by City and Month",
    x = "Month",
    y = "Average Value",
    color = "City and Variable"
  ) +
  theme_minimal() +
  theme(
    axis.text.x = element_text(angle = 45, hjust = 1)  # Rotate x-axis labels
  )
## Warning: Using `size` aesthetic for lines was deprecated in ggplot2 3.4.0.
## ℹ Please use `linewidth` instead.
## This warning is displayed once every 8 hours.
## Call `lifecycle::last_lifecycle_warnings()` to see where this warning was
## generated.

# Save the tidied data to a new CSV file
write.csv(dataset2_tidy, "dataset2_tidy.csv", row.names = FALSE)

Conclusion for DataSet 2

By transforming the data into a tidy format, summarizing it to calculate average values, and visualizing it, we can clearly see the trends in temperature and humidity across different cities and months. The plot helps to compare the fluctuations in temperature and humidity for each city.


DATASET 3

For Dataset 3, I will be using Musrat Jahan dataset posted on week 4 discussion. This data is about New York City Leading causes of Death. It is untidy because some columns can be deleted or separated into 2 columns. Leading cause of death values can be separated into two columns: the cause and the codes. Also, race and ethnicity should be separated into 2 columns. We don’t need 2 columns for death rates; just one is fine, either age adjusted or normal.

The data is untidy because:

# Read the CSV file from github
dataset3 <- read_csv("https://raw.githubusercontent.com/Badigun/Data-607-Assignments/refs/heads/main/New_York_City_Leading_Causes_of_Death.csv")
## Rows: 1094 Columns: 7
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ","
## chr (6): Leading Cause, Sex, Race Ethnicity, Deaths, Death Rate, Age Adjuste...
## dbl (1): Year
## 
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
# View the first few rows of the dataset
head(dataset3)
## # A tibble: 6 × 7
##    Year `Leading Cause`               Sex   `Race Ethnicity` Deaths `Death Rate`
##   <dbl> <chr>                         <chr> <chr>            <chr>  <chr>       
## 1  2007 Diabetes Mellitus (E10-E14)   M     Other Race/ Eth… 11     .           
## 2  2010 Diseases of Heart (I00-I09, … F     Not Stated/Unkn… 70     .           
## 3  2007 Cerebrovascular Disease (Str… M     Black Non-Hispa… 213    25          
## 4  2007 Atherosclerosis (I70)         F     Other Race/ Eth… .      .           
## 5  2014 Malignant Neoplasms (Cancer:… F     Black Non-Hispa… 1852   176.5       
## 6  2010 Chronic Lower Respiratory Di… F     White Non-Hispa… 501    35          
## # ℹ 1 more variable: `Age Adjusted Death Rate` <chr>

Tidy the Data for Dataset 3

# Read the CSV file
data <- fread("https://raw.githubusercontent.com/Badigun/Data-607-Assignments/refs/heads/main/New_York_City_Leading_Causes_of_Death.csv")

# Replace "." with NA and convert columns to numeric
dataset3_clean <- data %>% 
  mutate(across(where(is.character), ~na_if(., "."))) %>% 
  mutate(across(where(is.character), as.numeric))
## Warning: There were 3 warnings in `mutate()`.
## The first warning was:
## ℹ In argument: `across(where(is.character), as.numeric)`.
## Caused by warning:
## ! NAs introduced by coercion
## ℹ Run `dplyr::last_dplyr_warnings()` to see the 2 remaining warnings.
# View the cleaned data
head(dataset3_clean)
##     Year Leading Cause   Sex Race Ethnicity Deaths Death Rate
##    <int>         <num> <num>          <num>  <num>      <num>
## 1:  2007            NA    NA             NA     11         NA
## 2:  2010            NA    NA             NA     70         NA
## 3:  2007            NA    NA             NA    213       25.0
## 4:  2007            NA    NA             NA     NA         NA
## 5:  2014            NA    NA             NA   1852      176.5
## 6:  2010            NA    NA             NA    501       35.0
##    Age Adjusted Death Rate
##                      <num>
## 1:                      NA
## 2:                      NA
## 3:                    33.0
## 4:                      NA
## 5:                   148.4
## 6:                    20.7
# View the column names to inspect the dataset
colnames(dataset3)
## [1] "Year"                    "Leading Cause"          
## [3] "Sex"                     "Race Ethnicity"         
## [5] "Deaths"                  "Death Rate"             
## [7] "Age Adjusted Death Rate"
# Rename columns 
dataset3_tidy <- dataset3 %>%
  rename(
    Cause_of_Death = `Leading Cause`,  
    Year = `Year`
  )

# Check for missing values
summary(dataset3_tidy)
##       Year      Cause_of_Death         Sex            Race Ethnicity    
##  Min.   :2007   Length:1094        Length:1094        Length:1094       
##  1st Qu.:2008   Class :character   Class :character   Class :character  
##  Median :2010   Mode  :character   Mode  :character   Mode  :character  
##  Mean   :2010                                                           
##  3rd Qu.:2012                                                           
##  Max.   :2014                                                           
##     Deaths           Death Rate        Age Adjusted Death Rate
##  Length:1094        Length:1094        Length:1094            
##  Class :character   Class :character   Class :character       
##  Mode  :character   Mode  :character   Mode  :character       
##                                                               
##                                                               
## 
# Remove or impute missing values
dataset3_tidy <- dataset3 %>%
  drop_na()  # Drops rows with missing values

print(dataset3_tidy)
## # A tibble: 1,094 × 7
##     Year `Leading Cause`              Sex   `Race Ethnicity` Deaths `Death Rate`
##    <dbl> <chr>                        <chr> <chr>            <chr>  <chr>       
##  1  2007 Diabetes Mellitus (E10-E14)  M     Other Race/ Eth… 11     .           
##  2  2010 Diseases of Heart (I00-I09,… F     Not Stated/Unkn… 70     .           
##  3  2007 Cerebrovascular Disease (St… M     Black Non-Hispa… 213    25          
##  4  2007 Atherosclerosis (I70)        F     Other Race/ Eth… .      .           
##  5  2014 Malignant Neoplasms (Cancer… F     Black Non-Hispa… 1852   176.5       
##  6  2010 Chronic Lower Respiratory D… F     White Non-Hispa… 501    35          
##  7  2007 Intentional Self-Harm (Suic… M     Asian and Pacif… 36     7.4         
##  8  2012 All Other Causes             F     Not Stated/Unkn… 53     .           
##  9  2009 Diseases of Heart (I00-I09,… F     Hispanic         1349   112.7       
## 10  2010 Septicemia (A40-A41)         F     White Non-Hispa… 107    7.5         
## # ℹ 1,084 more rows
## # ℹ 1 more variable: `Age Adjusted Death Rate` <chr>

Step 3: Transform the Data for Dataset 3

colnames(dataset3_clean)
## [1] "Year"                    "Leading Cause"          
## [3] "Sex"                     "Race Ethnicity"         
## [5] "Deaths"                  "Death Rate"             
## [7] "Age Adjusted Death Rate"
# Summarize deaths by cause and year
summary_dataset3 <- dataset3_clean %>%
  group_by(Year) %>%
  summarise(across(starts_with("Deaths"), sum, na.rm = TRUE))
## Warning: There was 1 warning in `summarise()`.
## ℹ In argument: `across(starts_with("Deaths"), sum, na.rm = TRUE)`.
## ℹ In group 1: `Year = 2007`.
## Caused by warning:
## ! The `...` argument of `across()` is deprecated as of dplyr 1.1.0.
## Supply arguments directly to `.fns` through an anonymous function instead.
## 
##   # Previously
##   across(a:b, mean, na.rm = TRUE)
## 
##   # Now
##   across(a:b, \(x) mean(x, na.rm = TRUE))
# View summary
print(summary_dataset3)
## # A tibble: 8 × 2
##    Year Deaths
##   <int>  <dbl>
## 1  2007  53996
## 2  2008  54138
## 3  2009  52820
## 4  2010  52505
## 5  2011  52726
## 6  2012  52420
## 7  2013  53387
## 8  2014  53006

Analyze and Visualize the Data for Dataset 3

plot_dataset3 <- summary_dataset3 %>%
  pivot_longer(cols = starts_with("Deaths"), names_to = "Cause", values_to = "Deaths")

print(plot_dataset3)
## # A tibble: 8 × 3
##    Year Cause  Deaths
##   <int> <chr>   <dbl>
## 1  2007 Deaths  53996
## 2  2008 Deaths  54138
## 3  2009 Deaths  52820
## 4  2010 Deaths  52505
## 5  2011 Deaths  52726
## 6  2012 Deaths  52420
## 7  2013 Deaths  53387
## 8  2014 Deaths  53006
# Create a bar chart of total deaths by cause
ggplot(plot_dataset3, aes(x = reorder(Cause, -Deaths), y = Deaths)) +
  geom_bar(stat = "identity", fill = "skyblue") +
  coord_flip() +
  labs(
    title = "Total Deaths by Cause in NYC",
    x = "Cause of Death",
    y = "Total Deaths"
  ) +
  theme_minimal()

# Save the tidied data to a new CSV file
write.csv(dataset3_clean, "dataset3_clean.csv", row.names = FALSE)

Conclusion for Dataset 3

Based on the bar chart and analysis, we can conclude the following: