Synopsis

We consider data about severe weather events, which can cause major problems for society in terms of both public health and the economy. Here, we would like to identify which types of events cause the greatest harm.

The key idea is to divide the harm into two categories: fatalities/injuries and economic damage. We then extract the top event types that cause the most severe consequences.

In the end, the analysis concludes that tornadoes are responsible for the highest numbers of fatalities and injuries, while floods are responsible for the greatest economic damage.

Preliminaries

Here we load the libraries that are used throughout this report.

library(lubridate)
## Warning: package 'lubridate' was built under R version 3.6.3
## 
## Attaching package: 'lubridate'
## The following objects are masked from 'package:base':
## 
##     date, intersect, setdiff, union
library(dplyr)
## Warning: package 'dplyr' was built under R version 3.6.3
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
library(tidyverse)
## Warning: package 'tidyverse' was built under R version 3.6.3
## -- Attaching packages --------------------------------------- tidyverse 1.3.0 --
## v ggplot2 3.3.0     v purrr   0.3.3
## v tibble  3.0.4     v stringr 1.4.0
## v tidyr   1.1.2     v forcats 0.5.0
## v readr   1.3.1
## Warning: package 'ggplot2' was built under R version 3.6.3
## Warning: package 'tibble' was built under R version 3.6.3
## Warning: package 'tidyr' was built under R version 3.6.3
## Warning: package 'readr' was built under R version 3.6.3
## Warning: package 'purrr' was built under R version 3.6.3
## Warning: package 'stringr' was built under R version 3.6.3
## Warning: package 'forcats' was built under R version 3.6.3
## -- Conflicts ------------------------------------------ tidyverse_conflicts() --
## x lubridate::as.difftime() masks base::as.difftime()
## x lubridate::date()        masks base::date()
## x dplyr::filter()          masks stats::filter()
## x lubridate::intersect()   masks base::intersect()
## x dplyr::lag()             masks stats::lag()
## x lubridate::setdiff()     masks base::setdiff()
## x lubridate::union()       masks base::union()

Data Processing

The dataset is downloaded directly from the web in compressed (bz2) format.

temp <- tempfile()
download.file("http://d396qusza40orc.cloudfront.net/repdata%2Fdata%2FStormData.csv.bz2",temp)
data <- read.csv(temp)
unlink(temp)
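
Downloading the file every time the document is knitted can be slow. As an optional alternative (a sketch, not part of the original analysis), the file can be cached locally; read.csv reads bz2-compressed files directly, so no explicit decompression is needed.

# Optional caching sketch (not in the original analysis): download once,
# then reuse the local copy on subsequent runs
if (!file.exists("StormData.csv.bz2")) {
    download.file("http://d396qusza40orc.cloudfront.net/repdata%2Fdata%2FStormData.csv.bz2",
                  "StormData.csv.bz2")
}
data <- read.csv("StormData.csv.bz2")  # read.csv handles .bz2 files directly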

Let’s look at the structure of the data and the first few observations.

str(data)
## 'data.frame':    902297 obs. of  37 variables:
##  $ STATE__   : num  1 1 1 1 1 1 1 1 1 1 ...
##  $ BGN_DATE  : Factor w/ 16335 levels "1/1/1966 0:00:00",..: 6523 6523 4242 11116 2224 2224 2260 383 3980 3980 ...
##  $ BGN_TIME  : Factor w/ 3608 levels "00:00:00 AM",..: 272 287 2705 1683 2584 3186 242 1683 3186 3186 ...
##  $ TIME_ZONE : Factor w/ 22 levels "ADT","AKS","AST",..: 7 7 7 7 7 7 7 7 7 7 ...
##  $ COUNTY    : num  97 3 57 89 43 77 9 123 125 57 ...
##  $ COUNTYNAME: Factor w/ 29601 levels "","5NM E OF MACKINAC BRIDGE TO PRESQUE ISLE LT MI",..: 13513 1873 4598 10592 4372 10094 1973 23873 24418 4598 ...
##  $ STATE     : Factor w/ 72 levels "AK","AL","AM",..: 2 2 2 2 2 2 2 2 2 2 ...
##  $ EVTYPE    : Factor w/ 985 levels "   HIGH SURF ADVISORY",..: 834 834 834 834 834 834 834 834 834 834 ...
##  $ BGN_RANGE : num  0 0 0 0 0 0 0 0 0 0 ...
##  $ BGN_AZI   : Factor w/ 35 levels "","  N"," NW",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ BGN_LOCATI: Factor w/ 54429 levels "","- 1 N Albion",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ END_DATE  : Factor w/ 6663 levels "","1/1/1993 0:00:00",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ END_TIME  : Factor w/ 3647 levels ""," 0900CST",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ COUNTY_END: num  0 0 0 0 0 0 0 0 0 0 ...
##  $ COUNTYENDN: logi  NA NA NA NA NA NA ...
##  $ END_RANGE : num  0 0 0 0 0 0 0 0 0 0 ...
##  $ END_AZI   : Factor w/ 24 levels "","E","ENE","ESE",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ END_LOCATI: Factor w/ 34506 levels "","- .5 NNW",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ LENGTH    : num  14 2 0.1 0 0 1.5 1.5 0 3.3 2.3 ...
##  $ WIDTH     : num  100 150 123 100 150 177 33 33 100 100 ...
##  $ F         : int  3 2 2 2 2 2 2 1 3 3 ...
##  $ MAG       : num  0 0 0 0 0 0 0 0 0 0 ...
##  $ FATALITIES: num  0 0 0 0 0 0 0 0 1 0 ...
##  $ INJURIES  : num  15 0 2 2 2 6 1 0 14 0 ...
##  $ PROPDMG   : num  25 2.5 25 2.5 2.5 2.5 2.5 2.5 25 25 ...
##  $ PROPDMGEXP: Factor w/ 19 levels "","-","?","+",..: 17 17 17 17 17 17 17 17 17 17 ...
##  $ CROPDMG   : num  0 0 0 0 0 0 0 0 0 0 ...
##  $ CROPDMGEXP: Factor w/ 9 levels "","?","0","2",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ WFO       : Factor w/ 542 levels ""," CI","$AC",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ STATEOFFIC: Factor w/ 250 levels "","ALABAMA, Central",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ ZONENAMES : Factor w/ 25112 levels "","                                                                                                               "| __truncated__,..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ LATITUDE  : num  3040 3042 3340 3458 3412 ...
##  $ LONGITUDE : num  8812 8755 8742 8626 8642 ...
##  $ LATITUDE_E: num  3051 0 0 0 0 ...
##  $ LONGITUDE_: num  8806 0 0 0 0 ...
##  $ REMARKS   : Factor w/ 436781 levels "","-2 at Deer Park\n",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ REFNUM    : num  1 2 3 4 5 6 7 8 9 10 ...
head(data)
##   STATE__           BGN_DATE BGN_TIME TIME_ZONE COUNTY COUNTYNAME STATE  EVTYPE
## 1       1  4/18/1950 0:00:00     0130       CST     97     MOBILE    AL TORNADO
## 2       1  4/18/1950 0:00:00     0145       CST      3    BALDWIN    AL TORNADO
## 3       1  2/20/1951 0:00:00     1600       CST     57    FAYETTE    AL TORNADO
## 4       1   6/8/1951 0:00:00     0900       CST     89    MADISON    AL TORNADO
## 5       1 11/15/1951 0:00:00     1500       CST     43    CULLMAN    AL TORNADO
## 6       1 11/15/1951 0:00:00     2000       CST     77 LAUDERDALE    AL TORNADO
##   BGN_RANGE BGN_AZI BGN_LOCATI END_DATE END_TIME COUNTY_END COUNTYENDN
## 1         0                                               0         NA
## 2         0                                               0         NA
## 3         0                                               0         NA
## 4         0                                               0         NA
## 5         0                                               0         NA
## 6         0                                               0         NA
##   END_RANGE END_AZI END_LOCATI LENGTH WIDTH F MAG FATALITIES INJURIES PROPDMG
## 1         0                      14.0   100 3   0          0       15    25.0
## 2         0                       2.0   150 2   0          0        0     2.5
## 3         0                       0.1   123 2   0          0        2    25.0
## 4         0                       0.0   100 2   0          0        2     2.5
## 5         0                       0.0   150 2   0          0        2     2.5
## 6         0                       1.5   177 2   0          0        6     2.5
##   PROPDMGEXP CROPDMG CROPDMGEXP WFO STATEOFFIC ZONENAMES LATITUDE LONGITUDE
## 1          K       0                                         3040      8812
## 2          K       0                                         3042      8755
## 3          K       0                                         3340      8742
## 4          K       0                                         3458      8626
## 5          K       0                                         3412      8642
## 6          K       0                                         3450      8748
##   LATITUDE_E LONGITUDE_ REMARKS REFNUM
## 1       3051       8806              1
## 2          0          0              2
## 3          0          0              3
## 4          0          0              4
## 5          0          0              5
## 6          0          0              6

We are now ready to process the data in order to make the dataset more manageable. This is done column by column. First, observe that BGN_DATE, BGN_TIME and TIME_ZONE could be merged into a single BGN column.

One can observe that the part after the date in BGN_DATE can safely be deleted, since it carries no information. Similarly, we can drop the seconds from BGN_TIME where they appear; the exact second of an observation is not important. Also, a few times are invalid (such as 19:90), so we simply drop those rows.

Finally, we combine BGN_DATE and BGN_TIME into a new column BGN and parse it into a date-time object (the time zone column is not used in the parsing). This column represents the start time of an extreme weather event.

data$BGN_DATE <- gsub("\\ .*","",data$BGN_DATE)
data$BGN_TIME <- gsub('^([0-9]{2})([0-9]+)$', '\\1:\\2', data$BGN_TIME)
data$BGN_TIME <- sub('^([^:]+:[^:]+).*', '\\1', data$BGN_TIME)
data$BGN <- paste(data$BGN_DATE, data$BGN_TIME, " ")
data$BGN <- parse_date_time(data$BGN, "mdy HM")
## Warning: 5 failed to parse.
head(data$BGN)
## [1] "1950-04-18 01:30:00 UTC" "1950-04-18 01:45:00 UTC"
## [3] "1951-02-20 16:00:00 UTC" "1951-06-08 09:00:00 UTC"
## [5] "1951-11-15 15:00:00 UTC" "1951-11-15 20:00:00 UTC"
data <- data[!is.na(data$BGN), ]
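
To make the string cleanup concrete, here is a small illustration (not part of the original pipeline) of what the two regular expressions do to a few sample BGN_TIME values; the sample strings are hypothetical but follow the formats seen in the data.

# Illustration only: apply the same two regexes to sample time strings
times <- c("0130", "1600", "00:00:00 AM", "01:30:00 PM")
times <- gsub('^([0-9]{2})([0-9]+)$', '\\1:\\2', times)  # "0130" -> "01:30"
times <- sub('^([^:]+:[^:]+).*', '\\1', times)           # keep only HH:MM
times
# expected: "01:30" "16:00" "00:00" "01:30"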

We could do the same with the end dates and times, but many of them are missing. The exact reason is unknown; presumably it is often unclear how to define the end of a weather event. Therefore, we skip this step for the end dates.
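
A quick way to quantify this claim (illustration only, not part of the original analysis) is to look at the share of records whose END_DATE field is empty.

# Rough check (illustration only): fraction of records with an empty END_DATE
mean(data$END_DATE == "")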

Nevertheless, some columns are essential for our analysis: EVTYPE, FATALITIES, INJURIES, PROPDMG, CROPDMG, PROPDMGEXP and CROPDMGEXP. The levels of the last two are:

levels(data$PROPDMGEXP)
##  [1] ""  "-" "?" "+" "0" "1" "2" "3" "4" "5" "6" "7" "8" "B" "h" "H" "K" "m" "M"
levels(data$CROPDMGEXP)
## [1] ""  "?" "0" "2" "B" "k" "K" "m" "M"

Almost all of these variables are self-explanatory except the last two, PROPDMGEXP and CROPDMGEXP. They encode the magnitude of the corresponding damage value as a symbol such as K, M or B, so we have to standardize them. We transform each symbol into a non-negative integer exponent, so that the actual damage equals the recorded value times \(10^{\mathrm{exp}}\). For example, M represents a million, so we transform it into 6, since one million is \(10^6\).

The idea is to convert all letters to uppercase and then map each symbol to its exponent.

data$PROPDMGEXP <- toupper(data$PROPDMGEXP)
data$PROPDMGEXP[data$PROPDMGEXP %in% c("", "+", "-", "?")] <- "0"
data$PROPDMGEXP[data$PROPDMGEXP %in% c("B")] <- "9"
data$PROPDMGEXP[data$PROPDMGEXP %in% c("M")] <- "6"
data$PROPDMGEXP[data$PROPDMGEXP %in% c("K")] <- "3"
data$PROPDMGEXP[data$PROPDMGEXP %in% c("H")] <- "2"

data$CROPDMGEXP <- toupper(data$CROPDMGEXP)
data$CROPDMGEXP[data$CROPDMGEXP %in% c("", "+", "-", "?")] <- "0"
data$CROPDMGEXP[data$CROPDMGEXP %in% c("B")] <- "9"
data$CROPDMGEXP[data$CROPDMGEXP %in% c("M")] <- "6"
data$CROPDMGEXP[data$CROPDMGEXP %in% c("K")] <- "3"
data$CROPDMGEXP[data$CROPDMGEXP %in% c("H")] <- "2"

Finally, we convert these exponents into multipliers and apply them to the recorded damage values, creating new variables that hold the total damage.

data$PROPDMGTOTAL <- data$PROPDMG * (10 ^ as.numeric(data$PROPDMGEXP))
data$CROPDMGTOTAL <- data$CROPDMG * (10 ^ as.numeric(data$CROPDMGEXP))
data$DMGTOTAL <- data$PROPDMGTOTAL + data$CROPDMGTOTAL
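
As a quick sanity check (illustration only, not part of the original analysis): the first record shown in head(data) above has PROPDMG = 25 with exponent K, so its property damage should equal \(25 \times 10^3 = 25000\).

# Sanity check (illustration only): the first record had PROPDMG = 25 and
# PROPDMGEXP = "K", so PROPDMGTOTAL should be 25 * 10^3 = 25000
head(data$PROPDMGTOTAL, 1)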

Data Analysis

The harm to society can be divided into two main categories: harm to population health (fatalities and injuries) and economic damage (property and crop damage).

The idea is to group the data by event type and compute the total harm for each of these measures.

data_grouped <- data %>%
    group_by(EVTYPE) %>%
    summarize(SUM_FATALITIES = sum(FATALITIES),
              SUM_INJURIES = sum(INJURIES),
              SUM_PROPDMG = sum(PROPDMGTOTAL),
              SUM_CROPDMG = sum(CROPDMGTOTAL),
              TOTAL_DMG = sum(DMGTOTAL))
## `summarise()` ungrouping output (override with `.groups` argument)
head(data_grouped)
## # A tibble: 6 x 6
##   EVTYPE           SUM_FATALITIES SUM_INJURIES SUM_PROPDMG SUM_CROPDMG TOTAL_DMG
##   <fct>                     <dbl>        <dbl>       <dbl>       <dbl>     <dbl>
## 1 "   HIGH SURF A~              0            0      200000           0    200000
## 2 " COASTAL FLOOD"              0            0           0           0         0
## 3 " FLASH FLOOD"                0            0       50000           0     50000
## 4 " LIGHTNING"                  0            0           0           0         0
## 5 " TSTM WIND"                  0            0     8100000           0   8100000
## 6 " TSTM WIND (G4~              0            0        8000           0      8000

Results

Across the United States, which types of events are most harmful with respect to population health?

To answer this question, we look for the event types that produced the highest numbers of fatalities (and/or injuries). Since there are far too many event types to consider them all, we restrict attention to the top 7.

top_fatalities <- head(arrange(data_grouped, desc(SUM_FATALITIES)), 7)

First, let’s make a bar plot of the total number of fatalities. We can see that tornadoes cause the highest number of fatalities, and the gap to the second leading cause (excessive heat) is substantial.

ggplot(top_fatalities, aes(EVTYPE, SUM_FATALITIES, label = SUM_FATALITIES)) +
    geom_bar(stat = "identity") +
    geom_text(nudge_y = 150) +
    xlab("Event Type") +
    ylab("Total Fatalities") +
    ggtitle("Top 7 Fatal Events") 

Let’s repeat the same procedure for the total number of injuries. Once again, tornadoes yield the highest number of injuries, and the gap to the second leading cause is even larger.

top_injuries <- head(arrange(data_grouped, desc(SUM_INJURIES)), 7)

ggplot(top_injuries, aes(EVTYPE, SUM_INJURIES, label = SUM_INJURIES)) +
    geom_bar(stat = "identity") +
    geom_text(nudge_y = 2500) +
    xlab("Event Type") +
    ylab("Total Number of Injuries") +
    ggtitle("Top 7 Events with the highest number of injuries")

Across the United States, which types of events have the greatest economic consequences?

Now we focus on economic damage. As described earlier, there are two types of economic harm: property damage and crop damage. To compare both, we reshape the data into long format and plot the event types with the largest total damage (property and crop combined) as a stacked bar chart.

top_damage <- data_grouped %>% arrange(desc(TOTAL_DMG)) %>% head(6) %>% select(EVTYPE, SUM_PROPDMG, SUM_CROPDMG, TOTAL_DMG)

#Gather the data by the type of the economic damage
top_damage$EVTYPE <- with(top_damage, reorder(EVTYPE, -TOTAL_DMG))
top_damage_gathered <- top_damage %>%
    gather(key = "Type", value = "TOTAL_DMG", c("SUM_PROPDMG", "SUM_CROPDMG")) %>%
    select(EVTYPE, Type, TOTAL_DMG) %>%
    arrange("TOTAL_DMG")
top_damage_gathered$Type[top_damage_gathered$Type %in% c("SUM_PROPDMG")] <- "Property damage"
top_damage_gathered$Type[top_damage_gathered$Type %in% c("SUM_CROPDMG")] <- "Crop damage"
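
The gather() call above still works, but it is superseded in newer versions of tidyr. For reference, an equivalent reshape with pivot_longer() (a sketch, not used in the original analysis) would be:

# Equivalent reshape using tidyr::pivot_longer() (illustration only)
top_damage_gathered <- top_damage %>%
    pivot_longer(c(SUM_PROPDMG, SUM_CROPDMG),
                 names_to = "Type", values_to = "TOTAL_DMG") %>%
    mutate(Type = recode(Type,
                         SUM_PROPDMG = "Property damage",
                         SUM_CROPDMG = "Crop damage")) %>%
    select(EVTYPE, Type, TOTAL_DMG)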

# Plot the stacked damage
ggplot(top_damage_gathered, aes(x = EVTYPE, y = TOTAL_DMG, fill = Type)) +
    geom_bar(stat = "identity", position = "stack") +
    xlab("Type of the event") +
    ylab("Property + Crop Damage") +
    ggtitle("Economic Damage") +
    theme(plot.title = element_text(hjust = 0.5), legend.position = "top")

One can see that floods cause the highest combined property and crop damage, followed by hurricanes/typhoons and tornadoes.

Conclusion

Based on our analysis, tornadoes are responsible for the highest numbers of fatalities and injuries, while floods are responsible for the greatest economic damage.