Updated on Thu Aug 17 00:10:42 2017.

library(tidyverse)
## Loading tidyverse: ggplot2
## Loading tidyverse: tibble
## Loading tidyverse: tidyr
## Loading tidyverse: readr
## Loading tidyverse: purrr
## Loading tidyverse: dplyr
## Conflicts with tidy packages ----------------------------------------------
## filter(): dplyr, stats
## lag():    dplyr, stats
library(ggplot2)
library(ggthemes)
library(ggvis)
## 
## Attaching package: 'ggvis'
## The following object is masked from 'package:ggplot2':
## 
##     resolution
library(reshape2)
## 
## Attaching package: 'reshape2'
## The following object is masked from 'package:tidyr':
## 
##     smiths
library(knitr)
library(shiny)
library(scales)
## 
## Attaching package: 'scales'
## The following objects are masked from 'package:ggvis':
## 
##     fullseq, zero_range
## The following object is masked from 'package:purrr':
## 
##     discard
## The following object is masked from 'package:readr':
## 
##     col_factor

DATA BASICS

This section will guide you through the process of decoding your data into information and, ultimately, intelligible insights. In doing so, we will explore the use of tidyverse and base R packages.


When working with a new data set, what initial questions do you have?


Consider the following questions to guide your understanding.


Once you have this basic understanding of your data, you can dig deeper. You can then use visualization techniques to explore your data and derive some basic understanding of the phenomena you are studying, such as the largest and smallest values for each variable. In addition, calculating summary statistics translates data into information by revealing the shape of the data: the mean, median, minimum value, maximum value, and variability, all with simple visualizations.
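
For example, here is a minimal sketch of the base R functions that produce these summaries (using a simulated numeric vector purely for illustration, since no data has been imported yet):

x <- rnorm(100, mean = 50, sd = 10)  # simulated stand-in for a real variable
min(x); max(x)   # smallest and largest values
mean(x)          # mean
median(x)        # median
sd(x)            # variability (standard deviation)
summary(x)       # minimum, quartiles, median, mean, and maximum in one call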


For any data science project there are a few simple steps to follow.


A. Exercise: Importing your data

Using the World internet usage data, we will compare read.csv() with read_csv() for importing data.


The base utils package, using read.csv()

internet_utils <- read.csv("world_internet_usage.csv")
head(internet_utils)
##                country X2000 X2001 X2002 X2003 X2004 X2005 X2006 X2007
## 1                China  1.78  2.64  4.60  6.20  7.30  8.52 10.52 16.00
## 2               Mexico  5.08  7.04 11.90 12.90 14.10 17.21 19.52 20.81
## 3               Panama  6.55  7.27  8.52  9.99 11.14 11.48 17.35 22.29
## 4              Senegal  0.40  0.98  1.01  2.10  4.39  4.79  5.61  7.70
## 5            Singapore 36.00 41.67 47.00 53.84 62.00 61.00 59.00 69.90
## 6 United Arab Emirates 23.63 26.27 28.32 29.48 30.13 40.00 52.00 61.00
##   X2008 X2009 X2010 X2011 X2012
## 1 22.60 28.90 34.30 38.30 42.30
## 2 21.71 26.34 31.05 34.96 38.42
## 3 33.82 39.08 40.10 42.70 45.20
## 4 10.60 14.50 16.00 17.50 19.20
## 5 69.00 69.00 71.00 71.00 74.18
## 6 63.00 64.00 68.00 78.00 85.00

The readr package, using read_csv()

library(readr)
internet_readr <- read_csv("world_internet_usage.csv")
## Parsed with column specification:
## cols(
##   country = col_character(),
##   `2000` = col_double(),
##   `2001` = col_double(),
##   `2002` = col_double(),
##   `2003` = col_double(),
##   `2004` = col_double(),
##   `2005` = col_double(),
##   `2006` = col_double(),
##   `2007` = col_double(),
##   `2008` = col_double(),
##   `2009` = col_double(),
##   `2010` = col_double(),
##   `2011` = col_double(),
##   `2012` = col_double()
## )
head(internet_readr)
## # A tibble: 6 x 14
##                country `2000` `2001` `2002` `2003` `2004` `2005` `2006`
##                  <chr>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>
## 1                China   1.78   2.64   4.60   6.20   7.30   8.52  10.52
## 2               Mexico   5.08   7.04  11.90  12.90  14.10  17.21  19.52
## 3               Panama   6.55   7.27   8.52   9.99  11.14  11.48  17.35
## 4              Senegal   0.40   0.98   1.01   2.10   4.39   4.79   5.61
## 5            Singapore  36.00  41.67  47.00  53.84  62.00  61.00  59.00
## 6 United Arab Emirates  23.63  26.27  28.32  29.48  30.13  40.00  52.00
## # ... with 6 more variables: `2007` <dbl>, `2008` <dbl>, `2009` <dbl>,
## #   `2010` <dbl>, `2011` <dbl>, `2012` <dbl>

Accessing specific rows and columns

#extract by position
internet_readr[[2,1]]
## [1] "Mexico"
internet_utils[2,1] # double [[ ]] works too
## [1] Mexico
## 7 Levels: China Mexico Panama Senegal Singapore ... United States
#extract by name
internet_readr$country
## [1] "China"                "Mexico"               "Panama"              
## [4] "Senegal"              "Singapore"            "United Arab Emirates"
## [7] "United States"
internet_utils$country
## [1] China                Mexico               Panama              
## [4] Senegal              Singapore            United Arab Emirates
## [7] United States       
## 7 Levels: China Mexico Panama Senegal Singapore ... United States
# to use $ with the pipe (infix) operator, add the . placeholder
internet_readr %>% .$country 
## [1] "China"                "Mexico"               "Panama"              
## [4] "Senegal"              "Singapore"            "United Arab Emirates"
## [7] "United States"
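
As an aside, the same selections can be written with dplyr verbs (loaded above with the tidyverse); this is just an alternative sketch, not required for the exercise:

internet_readr %>% select(country)              # a column by name
internet_readr %>% slice(2)                     # a row by position
internet_readr %>% filter(country == "Mexico")  # rows by condition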

B. Exercise: Tidy data - reshaping

You first need to rename the columns to remove the X that read.csv() added in front of each year.

names(internet_utils) <-c("country", "2000", "2001", "2002", "2003", "2004", "2005", "2006", "2007", "2008", "2009", "2010", "2011", "2012")
names(internet_utils)
##  [1] "country" "2000"    "2001"    "2002"    "2003"    "2004"    "2005"   
##  [8] "2006"    "2007"    "2008"    "2009"    "2010"    "2011"    "2012"

Reshape a data frame

library(reshape2)
internet_utils_reshaped <- melt(internet_utils,id.vars="country", variable.name="year", value.name="usage")

Reshape a tibble

internet_readr_reshaped <- melt(internet_readr,id.vars="country", variable.name="year", value.name="usage")
internet_readr_reshaped
##                 country year usage
## 1                 China 2000  1.78
## 2                Mexico 2000  5.08
## 3                Panama 2000  6.55
## 4               Senegal 2000  0.40
## 5             Singapore 2000 36.00
## 6  United Arab Emirates 2000 23.63
## 7         United States 2000 43.08
## 8                 China 2001  2.64
## 9                Mexico 2001  7.04
## 10               Panama 2001  7.27
## 11              Senegal 2001  0.98
## 12            Singapore 2001 41.67
## 13 United Arab Emirates 2001 26.27
## 14        United States 2001 49.08
## 15                China 2002  4.60
## 16               Mexico 2002 11.90
## 17               Panama 2002  8.52
## 18              Senegal 2002  1.01
## 19            Singapore 2002 47.00
## 20 United Arab Emirates 2002 28.32
## 21        United States 2002 58.79
## 22                China 2003  6.20
## 23               Mexico 2003 12.90
## 24               Panama 2003  9.99
## 25              Senegal 2003  2.10
## 26            Singapore 2003 53.84
## 27 United Arab Emirates 2003 29.48
## 28        United States 2003 61.70
## 29                China 2004  7.30
## 30               Mexico 2004 14.10
## 31               Panama 2004 11.14
## 32              Senegal 2004  4.39
## 33            Singapore 2004 62.00
## 34 United Arab Emirates 2004 30.13
## 35        United States 2004 64.76
## 36                China 2005  8.52
## 37               Mexico 2005 17.21
## 38               Panama 2005 11.48
## 39              Senegal 2005  4.79
## 40            Singapore 2005 61.00
## 41 United Arab Emirates 2005 40.00
## 42        United States 2005 67.97
## 43                China 2006 10.52
## 44               Mexico 2006 19.52
## 45               Panama 2006 17.35
## 46              Senegal 2006  5.61
## 47            Singapore 2006 59.00
## 48 United Arab Emirates 2006 52.00
## 49        United States 2006 68.93
## 50                China 2007 16.00
## 51               Mexico 2007 20.81
## 52               Panama 2007 22.29
## 53              Senegal 2007  7.70
## 54            Singapore 2007 69.90
## 55 United Arab Emirates 2007 61.00
## 56        United States 2007 75.00
## 57                China 2008 22.60
## 58               Mexico 2008 21.71
## 59               Panama 2008 33.82
## 60              Senegal 2008 10.60
## 61            Singapore 2008 69.00
## 62 United Arab Emirates 2008 63.00
## 63        United States 2008 74.00
## 64                China 2009 28.90
## 65               Mexico 2009 26.34
## 66               Panama 2009 39.08
## 67              Senegal 2009 14.50
## 68            Singapore 2009 69.00
## 69 United Arab Emirates 2009 64.00
## 70        United States 2009 71.00
## 71                China 2010 34.30
## 72               Mexico 2010 31.05
## 73               Panama 2010 40.10
## 74              Senegal 2010 16.00
## 75            Singapore 2010 71.00
## 76 United Arab Emirates 2010 68.00
## 77        United States 2010 74.00
## 78                China 2011 38.30
## 79               Mexico 2011 34.96
## 80               Panama 2011 42.70
## 81              Senegal 2011 17.50
## 82            Singapore 2011 71.00
## 83 United Arab Emirates 2011 78.00
## 84        United States 2011 77.86
## 85                China 2012 42.30
## 86               Mexico 2012 38.42
## 87               Panama 2012 45.20
## 88              Senegal 2012 19.20
## 89            Singapore 2012 74.18
## 90 United Arab Emirates 2012 85.00
## 91        United States 2012 81.03
class(internet_readr_reshaped) # turns into a data.frame!
## [1] "data.frame"

Use the gather function to reshape

tidy_internet_readr <- 
internet_readr %>%
gather(`2000`,`2001`,`2002`,`2003`,`2004`,`2005`,`2006`,`2007`,`2008`,`2009`,`2010`,`2011`,`2012`, key="year", value="usage")

tidy_internet_readr
## # A tibble: 91 x 3
##                 country  year usage
##                   <chr> <chr> <dbl>
##  1                China  2000  1.78
##  2               Mexico  2000  5.08
##  3               Panama  2000  6.55
##  4              Senegal  2000  0.40
##  5            Singapore  2000 36.00
##  6 United Arab Emirates  2000 23.63
##  7        United States  2000 43.08
##  8                China  2001  2.64
##  9               Mexico  2001  7.04
## 10               Panama  2001  7.27
## # ... with 81 more rows

C. Exercise: Understand - Visualize

Create a few statistical visualizations to understand the makeup of your data.


Single boxplot

boxplot(internet_readr$`2000`, main="Range of internet users in 2000", sub="Median of 6.55 users per 100 people")

boxplot(internet_readr$`2001`, main="Range of internet users in 2001", sub="Median of 7.27 users per 100 people")


Single histogram

hist(internet_readr$`2000`, main="Frequency of internet users in 2000 per 100 people", xlab="2000")

hist(internet_readr$`2001`, main="Frequency of internet users in 2001 per 100 people", xlab="2001")


Percentage histogram

library(lattice)
histogram(internet_readr$`2000`, main="Frequency of internet users in 2000 per 100 people", xlab="2000")

library(lattice)
histogram(internet_readr$`2000`, main="Frequency of internet users in 2001 per 100 people", xlab="2001")

Histogram matrix

Version 1

histogram(~ usage | year, data=tidy_internet_readr, layout=c(4,4))

Version 2

h <-histogram(~tidy_internet_readr$usage|tidy_internet_readr$year,col=("lightgreen"),breaks=5,layout=c(3,5))
update (h, index.cond=list(c(10:12, 7:9, 4:6, 1:3)))

# alternative ordering that includes the 13th panel: 13, 10:12, 7:9, 4:6, 1:3

Version 3

tidy_internet_readr$year<-as.character(tidy_internet_readr$year)
h <-histogram(~tidy_internet_readr$usage|tidy_internet_readr$year,col=("lightgreen"),
              xlab="Usage", breaks=5,layout=c(4,4), ylab ="Year")

update (h, index.cond=list(c(10:13, 6:9, 2:5, 1)))


Multiple box plots

boxplot(internet_readr[,2:14], main="Range of internet users per 100 people")


Simple point plot

plot(tidy_internet_readr$year, tidy_internet_readr$usage,main="Internet usage per 100 people",xlab="Year",ylab="Usage", type="p")


D. Exercise: Communicate

Create charts and reports.

Create a presentation ready chart using ggplot and apply a ggtheme.

library(ggthemes)
library(ggplot2)
#line chart
ggplot(tidy_internet_readr,aes(x=year,y=usage,colour=country,group=country)) + geom_line() + labs(title = "Internet Usage per 100 people", subtitle = "Since 2011, the UAE has surpassed Singapore and the US in internet users", caption = "Source: World Bank, 2013",x = "Year",y ="Usage") + theme_excel()


Create a markdown document and publish it

Markdown is a simple formatting syntax for authoring HTML, PDF, and MS Word documents.

See the sample markdown here: http://rpubs.com/sosulski/277649

For more details on using R Markdown see http://rmarkdown.rstudio.com.
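
As a rough sketch, the source of a minimal R Markdown document for this section might look like the following (the title and chunk contents are illustrative):

---
title: "World Internet Usage"
output: html_document
---

Internet usage per 100 people, 2000 to 2012.

```{r, message=FALSE}
library(tidyverse)
internet <- read_csv("world_internet_usage.csv")
summary(internet)
```

Knitting this file in RStudio produces an HTML document that you can then publish, for example to RPubs.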


APPLICATION: Capital Bikeshare

Understand your data

The type of data you have will dictate the types of questions you use to guide your analysis. To begin, import the bike sharing data from the Capital Bikeshare system.


E. Exercise: Import the bike sharing data

This data spans the District of Columbia, Arlington County, Alexandria, Montgomery County, and Fairfax County. The Capital Bikeshare system is owned by the participating jurisdictions and is operated by Motivate, a Brooklyn, NY-based company that operates several other bike-sharing systems, including Citibike in New York City, Hubway in Boston, and Divvy Bikes in Chicago.


library(readr)
bikeshare <- read_csv("bikesharedailydata.csv")
## Parsed with column specification:
## cols(
##   instant = col_integer(),
##   dteday = col_character(),
##   season = col_integer(),
##   yr = col_integer(),
##   mnth = col_integer(),
##   holiday = col_integer(),
##   weekday = col_integer(),
##   workingday = col_integer(),
##   weathersit = col_integer(),
##   temp = col_double(),
##   atemp = col_double(),
##   hum = col_double(),
##   windspeed = col_double(),
##   casual = col_integer(),
##   registered = col_integer(),
##   cnt = col_integer()
## )

F. Exercise: Take a look at the data.

Preview the data

You can preview the data using the head function to show the first few observations.

head(bikeshare)
## # A tibble: 6 x 16
##   instant dteday season    yr  mnth holiday weekday workingday weathersit
##     <int>  <chr>  <int> <int> <int>   <int>   <int>      <int>      <int>
## 1       1 1/1/11      1     0     1       0       6          0          2
## 2       2 1/2/11      1     0     1       0       0          0          2
## 3       3 1/3/11      1     0     1       0       1          1          1
## 4       4 1/4/11      1     0     1       0       2          1          1
## 5       5 1/5/11      1     0     1       0       3          1          1
## 6       6 1/6/11      1     0     1       0       4          1          1
## # ... with 7 more variables: temp <dbl>, atemp <dbl>, hum <dbl>,
## #   windspeed <dbl>, casual <int>, registered <int>, cnt <int>

Next, you can view the variables and types by using the str function.

str(bikeshare)

One of the first things you may notice is the data's dimensions: the number of rows and columns. Specifically, there are 731 rows (observations) and 16 columns (variables or attributes).
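
You can confirm these dimensions directly with base R; a quick check:

dim(bikeshare)    # number of rows and columns
nrow(bikeshare)   # observations
ncol(bikeshare)   # variables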

Rows are commonly referred to as observations or records and columns are described as attributes or variables.

However, the variable names listed in the first row of each column are not very descriptive.


G. Exercise: Understanding the variables

Take a look at the column named season. What is the meaning of season? What are the possible values for this variable?

bikeshare$season
##   [1]  1  1  1  1  1  1 NA  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
##  [24]  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
##  [47]  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
##  [70]  1  1  1  1  1  1  1  1  1  1  2  2  2  2  2  2  2  2  2  2  2  2  2
##  [93]  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2
## [116]  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2
## [139]  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2
## [162]  2  2  2  2  2  2  2  2  2  2  3  3  3  3  3  3  3  3  3  3  3  3  3
## [185]  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3
## [208]  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3
## [231]  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3
## [254]  3  3  3  3  3  3  3  3  3  3  3  3  4  4  4  4  4  4  4  4  4  4  4
## [277]  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4
## [300]  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4
## [323]  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4
## [346]  4  4  4  4  4  4  4  4  4  1  1  1  1  1  1  1  1  1  1  1  1  1  1
## [369]  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
## [392]  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
## [415]  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
## [438]  1  1  1  1  1  1  1  1  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2
## [461]  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2
## [484]  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2
## [507]  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2
## [530]  2  2  2  2  2  2  2  2  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3
## [553]  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3
## [576]  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3
## [599]  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3
## [622]  3  3  3  3  3  3  3  3  3  3  4  4  4  4  4  4  4  4  4  4  4  4  4
## [645]  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4
## [668]  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4
## [691]  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4
## [714]  4  4  4  4  4  4  4  1  1  1  1  1  1  1  1  1  1  1

What type of variable is it?

It is an integer. You’ll notice that in the column season the values are integers that range between 1 and 4.
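
One quick way to confirm the type and the distinct values (a small check; the NA comes from a missing entry that we deal with in the missing values exercise below):

class(bikeshare$season)   # integer
unique(bikeshare$season)  # 1 through 4, plus NA for the missing entry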


What do the numbers represent?

If we really think about it, it’s unlikely that the numbers represent quantities. Instead, they probably represent the seasons of the year, because we know there are four seasons. The numbers (1 through 4) are probably a code for each of the four seasons of the year. Without additional information, such as a data dictionary or readme file, it would be impossible for a user of the data to know which season each of the values 1 through 4 corresponds to in the categorical variable named season.

This leads us to the next step, reviewing the data dictionary along with the data set to better understand the meaning behind the values.


Review the data dictionary

A data dictionary defines the characteristics of each of the data attributes. If your data comes from a reputable source, odds are that it is accompanied by a data dictionary or metadata. To know which season is represented by each number in the variable season, we can review the data dictionary.


Field Definition
instant record index
dteday date
season season (1:spring, 2:summer, 3:fall, 4:winter)
yr year (0: 2011, 1:2012)
mnth month ( 1 to 12)
hr hour (0 to 23)
holiday whether the day is a holiday or not
weekday day of the week
workingday if day is neither weekend nor holiday is 1, otherwise is 0.
weathersit 1, 2, 3, 4
– 1 Clear, Few clouds, Partly cloudy, Partly cloudy
– 2 Mist + Cloudy, Mist + Broken clouds, Mist + Few clouds, Mist
– 3 Light Snow, Light Rain + Thunderstorm + Scattered clouds, Light Rain + Scattered clouds
– 4 Heavy Rain + Ice Pellets + Thunderstorm + Mist, Snow + Fog
temp Normalized temperature in Celsius. The values are divided by 41 (max)
atemp Normalized feeling temperature in Celsius. The values are divided by 50 (max)
hum Normalized humidity. The values are divided by 100 (max)
windspeed Normalized wind speed. The values are divided by 67 (max)
casual count of casual users
registered count of registered users
cnt count of total rental bikes including both casual and registered

For example, season is a categorical variable defined by one of four values, each representing a season (1: spring, 2: summer, 3: fall, 4: winter).


You’ll notice that the variable year is coded with the value of 0 for 2011 and 1 for 2012, rather than actual year value of 2011 or 2012.


The variable weathersit is encoded with four possible values, 1 through 4. The values represent the daily weather situation as defined below.

  1. Clear, Few clouds, Partly cloudy, Partly cloudy
  2. Mist + Cloudy, Mist + Broken clouds, Mist + Few clouds, Mist
  3. Light Snow, Light Rain + Thunderstorm + Scattered clouds, Light Rain + Scattered clouds
  4. Heavy Rain + Ice Pellets + Thunderstorm + Mist, Snow + Fog
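
As a small illustration of how the dictionary makes these codes usable, you could map the coded values to labels and tabulate them; a sketch, with labels taken from the dictionary above (note that the year column is still named yr at this point):

table(factor(bikeshare$season, levels = 1:4,
             labels = c("spring", "summer", "fall", "winter")))
table(factor(bikeshare$yr, levels = c(0, 1), labels = c("2011", "2012")))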

It is essential to go through this process of understanding because it helps you formulate questions for exploration and further analysis. Visualizing data without understanding the meaning of the variables will make it difficult for you to interpret the results. By approaching a data visualization task informed about the data and its attributes, you can better formulate questions for visual exploration. The next step is to prepare the data for analytical and visualization tasks.


At this point, you may want to rename the columns in your data set to make the data more usable when you begin the analysis. Renaming columns is a manual process that involves changing each column name. It is best practice to use lowercase lettering and avoid spaces or hyphenation.


Preparing your data

H. Exercise: Renaming columns

There are many ways to rename columns. Two approaches are presented below.

Renaming columns with the rename function from the dplyr library.

library(dplyr)
bikeshare <- rename(bikeshare, humidity = hum)
names(bikeshare)
##  [1] "instant"    "dteday"     "season"     "yr"         "mnth"      
##  [6] "holiday"    "weekday"    "workingday" "weathersit" "temp"      
## [11] "atemp"      "humidity"   "windspeed"  "casual"     "registered"
## [16] "cnt"

Renaming columns with R base functions.

# Rename column where names is "yr"
names(bikeshare)[names(bikeshare) == "yr"] <- "year"
names(bikeshare)
##  [1] "instant"    "dteday"     "season"     "year"       "mnth"      
##  [6] "holiday"    "weekday"    "workingday" "weathersit" "temp"      
## [11] "atemp"      "humidity"   "windspeed"  "casual"     "registered"
## [16] "cnt"

I. Exercise: Dealing with missing values

Even before you define the questions you seek to answer from the data, it needs to be formatted appropriately. The rows should correspond to observations and the columns to the observed variables. This makes it easier to map the data to visual properties such as position, color, size, or shape. A preprocessing step is necessary to verify the dataset for correctness and consistency. Incomplete information has a high potential for producing incorrect results.


Tactics

There are several ways to tackle working with incomplete data; each has its pros and cons. A few of these tactics are sketched in code after the list below.

  1. Ignore any record with missing values
  2. Replace empty fields with a pre-defined value
  3. Replace empty fields with the most frequently occurring value
  4. Use the mean value
  5. Manual approach
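
Here is a brief sketch of a few of these tactics, applied to a copy of the data so the original is left untouched (the object names bs and bs_complete are illustrative):

bs <- bikeshare                                  # work on a copy
sum(is.na(bs$season))                            # how many values are missing?
bs_complete <- na.omit(bs)                       # tactic 1: drop records with missing values
bs$season[is.na(bs$season)] <- 1                 # tactic 2: replace with a pre-defined value
bs$mnth[is.na(bs$mnth)] <- round(mean(bs$mnth, na.rm = TRUE))  # tactic 4: use the (rounded) mean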

Problem

  • Row 7, column 3: The season variable has no value
  • Row 10, column 5: The month has no value.

Solution

In these two cases it’s easy to replace the missing value with a known value. We wouldn’t want to discard the records because the values can easily be determined.


Updating the records

bikeshare$season[7]
## [1] NA
bikeshare$season[7] <- 1
bikeshare$season[7]
## [1] 1
bikeshare$mnth[10]
## [1] NA
bikeshare$mnth[10] <- 1
bikeshare$mnth[10]
## [1] 1

J. Exercise: Understand - Calculate basic summary statistics

It is helpful to calculate some summary statistics to learn more about your data: the distribution, the median, the minimum and maximum values, the variance and standard deviation, and the number of observations and attributes.


summary(bikeshare)
##     instant         dteday              season           year       
##  Min.   :  1.0   Length:731         Min.   :1.000   Min.   :0.0000  
##  1st Qu.:183.5   Class :character   1st Qu.:2.000   1st Qu.:0.0000  
##  Median :366.0   Mode  :character   Median :3.000   Median :1.0000  
##  Mean   :366.0                      Mean   :2.497   Mean   :0.5007  
##  3rd Qu.:548.5                      3rd Qu.:3.000   3rd Qu.:1.0000  
##  Max.   :731.0                      Max.   :4.000   Max.   :1.0000  
##       mnth          holiday           weekday        workingday   
##  Min.   : 1.00   Min.   :0.00000   Min.   :0.000   Min.   :0.000  
##  1st Qu.: 4.00   1st Qu.:0.00000   1st Qu.:1.000   1st Qu.:0.000  
##  Median : 7.00   Median :0.00000   Median :3.000   Median :1.000  
##  Mean   : 6.52   Mean   :0.02873   Mean   :2.997   Mean   :0.684  
##  3rd Qu.:10.00   3rd Qu.:0.00000   3rd Qu.:5.000   3rd Qu.:1.000  
##  Max.   :12.00   Max.   :1.00000   Max.   :6.000   Max.   :1.000  
##    weathersit         temp             atemp            humidity     
##  Min.   :1.000   Min.   :0.05913   Min.   :0.07907   Min.   :0.0000  
##  1st Qu.:1.000   1st Qu.:0.33708   1st Qu.:0.33784   1st Qu.:0.5200  
##  Median :1.000   Median :0.49833   Median :0.48673   Median :0.6267  
##  Mean   :1.395   Mean   :0.49538   Mean   :0.47435   Mean   :0.6279  
##  3rd Qu.:2.000   3rd Qu.:0.65542   3rd Qu.:0.60860   3rd Qu.:0.7302  
##  Max.   :3.000   Max.   :0.86167   Max.   :0.84090   Max.   :0.9725  
##    windspeed           casual         registered        cnt      
##  Min.   :0.02239   Min.   :   2.0   Min.   :  20   Min.   :  22  
##  1st Qu.:0.13495   1st Qu.: 315.5   1st Qu.:2497   1st Qu.:3152  
##  Median :0.18097   Median : 713.0   Median :3662   Median :4548  
##  Mean   :0.19049   Mean   : 848.2   Mean   :3656   Mean   :4504  
##  3rd Qu.:0.23321   3rd Qu.:1096.0   3rd Qu.:4776   3rd Qu.:5956  
##  Max.   :0.50746   Max.   :3410.0   Max.   :6946   Max.   :8714

The summary function shows the mean, median, minimum, and maximum values for each variable in the data set. This is particularly useful for continuous variables such as temp, cnt, casual, and registered. For example, you can easily see the average number of customers (casual and registered) per day.
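
For instance, you can pull those averages out directly:

mean(bikeshare$casual)      # average casual riders per day
mean(bikeshare$registered)  # average registered riders per day
mean(bikeshare$cnt)         # average total riders per day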


K. Exercise: Understand - Visualize

Explore the data visually. As a first step, consider scatterplots to show relationships between variables, histograms for frequencies, density plots to show distributions, and box plots to show the range of values.

Kernel density plot

Let’s say you wanted to know the distribution of ridership.

Kernel density plots are an effective way to view the distribution of a variable. Create the plot using plot(density(x)), where x is a numeric vector.


A density plot that shows the shape of the data for the number of riders per day.

density_riders = density(bikeshare$cnt)
plot(density_riders, main = "Number of riders per day", sub = paste("Mean =", round(mean(bikeshare$cnt), 2)), frame=FALSE)
polygon(density_riders, col="gray", border="gray")

How would we interpret the density plot?


What if we wanted to show just a year of data?

bikeshare_2011 <- subset(bikeshare, year==0)
bikeshare_2011
## # A tibble: 365 x 16
##    instant  dteday season  year  mnth holiday weekday workingday
##      <int>   <chr>  <dbl> <int> <dbl>   <int>   <int>      <int>
##  1       1  1/1/11      1     0     1       0       6          0
##  2       2  1/2/11      1     0     1       0       0          0
##  3       3  1/3/11      1     0     1       0       1          1
##  4       4  1/4/11      1     0     1       0       2          1
##  5       5  1/5/11      1     0     1       0       3          1
##  6       6  1/6/11      1     0     1       0       4          1
##  7       7  1/7/11      1     0     1       0       5          1
##  8       8  1/8/11      1     0     1       0       6          0
##  9       9  1/9/11      1     0     1       0       0          0
## 10      10 1/10/11      1     0     1       0       1          1
## # ... with 355 more rows, and 8 more variables: weathersit <int>,
## #   temp <dbl>, atemp <dbl>, humidity <dbl>, windspeed <dbl>,
## #   casual <int>, registered <int>, cnt <int>
bikeshare_2012 <- subset(bikeshare, year==1)
bikeshare_2012
## # A tibble: 366 x 16
##    instant  dteday season  year  mnth holiday weekday workingday
##      <int>   <chr>  <dbl> <int> <dbl>   <int>   <int>      <int>
##  1     366  1/1/12      1     1     1       0       0          0
##  2     367  1/2/12      1     1     1       1       1          0
##  3     368  1/3/12      1     1     1       0       2          1
##  4     369  1/4/12      1     1     1       0       3          1
##  5     370  1/5/12      1     1     1       0       4          1
##  6     371  1/6/12      1     1     1       0       5          1
##  7     372  1/7/12      1     1     1       0       6          0
##  8     373  1/8/12      1     1     1       0       0          0
##  9     374  1/9/12      1     1     1       0       1          1
## 10     375 1/10/12      1     1     1       0       2          1
## # ... with 356 more rows, and 8 more variables: weathersit <int>,
## #   temp <dbl>, atemp <dbl>, humidity <dbl>, windspeed <dbl>,
## #   casual <int>, registered <int>, cnt <int>
density_riders = density(bikeshare_2012$cnt)
plot(density_riders, main = "Number of riders per day", sub = paste("Mean =", round(mean(bikeshare_2012$cnt), 2)), frame=FALSE)
polygon(density_riders, col="gray", border="gray")

density_riders = density(bikeshare_2011$cnt)
plot(density_riders, main = "Number of riders per day", sub = paste("Mean =", round(mean(bikeshare_2011$cnt), 2)), frame=FALSE)
polygon(density_riders, col="gray", border="gray")

Histogram

A histogram that shows the frequency of the weather situation by day.

hist(bikeshare$weathersit, col="gray",border="gray", xlab="Weather", main="Frequency of weather situations")

Value Meaning
1 Clear, Few clouds, Partly cloudy, Partly cloudy
2 Mist + Cloudy, Mist + Broken clouds, Mist + Few clouds, Mist
3 Light Snow, Light Rain + Thunderstorm + Scattered clouds, Light Rain + Scattered clouds
4 Heavy Rain + Ice Pellets + Thunderstorm + Mist, Snow + Fog

How would we interpret the histogram?


You can check that your histogram is correct by reviewing the count of each value of weathersit.

table(bikeshare$weathersit)
## 
##   1   2   3 
## 463 247  21

L. Exercise: Scatter plots

To see relationships, scatter plots are useful. In this case, we are looking for positive or negative correlations.
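
You can also quantify the strength and direction of a relationship with a correlation coefficient before plotting it; a quick check:

cor(bikeshare$cnt, bikeshare$atemp)  # values near 1 indicate a strong positive relationship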


Scatter plot

A simple scatter plot that shows the relationship between the rentals and temperature

plot(bikeshare$cnt, bikeshare$atemp, main= "Relationship between bike rentals and average daily temperature", frame=FALSE, xlab="Number of rentals per day", ylab="Average daily temperature in degrees fahrenheit")


Scatter plot with fit lines

To aid interpretation, it is helpful to add a linear regression line if there is a linear relationship, or a lowess line. A lowess line will fit the data more closely.

plot(bikeshare$cnt, bikeshare$atemp, main= "Relationship between bike rentals and average daily temperature", frame=FALSE, xlab="Number of rentals per day", ylab="Average daily temperature in degrees fahrenheit")

# Add fit lines
abline(lm(bikeshare$atemp~bikeshare$cnt), col="blue", lwd=5) # regression line (y~x) 
lines(lowess(bikeshare$cnt, bikeshare$atemp), col="orange", lwd=5) # lowess line (x,y)


How would we interpret this scatter plot? Use this to inform the title of your plot.


Scatter plot with grouped categorical data (season)

Consider using color to group categorical data. In this example, we are grouping the points by season. We’re using the ggvis package.

#static chart
library(ggvis)
bikeshare %>% 
  ggvis(x=~cnt, y=~atemp) %>% 

layer_points(fill = ~as.factor(season))   %>% 

  add_axis("x", title = "Number of rentals per day") %>%
  add_axis("y", title = "Average daily temperature in degrees fahrenheit")

Scatter plot with grouped categorical data (year)

We can even look at the data by year.

#static chart
library(ggvis)
 bikeshare %>% 
  ggvis(x=~cnt, y=~atemp) %>% 

layer_points(fill = ~as.factor(year))   %>% 
  add_axis("x", title = "Number of rentals per day") %>%
  add_axis("y", title = "Average daily temperature in degrees fahrenheit")

M. Exercise: Interactive chart - Use ggvis to filter

Then we can build on the example above and add a filter to hide and reveal different seasons.

bikeshare$season<-ordered(factor(bikeshare$season, levels =c(1,2,3,4),
labels = c("Spring", "Summer", "Fall", "Winter")))

library(ggvis)
bikeshare %>% 
  ggvis(x=~cnt, y=~atemp, fill = ~factor(season)) %>% 
  filter(bikeshare$season %in% eval(input_checkboxgroup(choices=unique(bikeshare$season), 
    selected = "Spring")))%>% 
layer_points()   %>% 
  add_legend("fill", 
  title = "Season", 
  orient = "left")%>%
  add_axis("x", title = "Number of rentals per day") %>%
  add_axis("y", title = "Average daily temperature in degrees fahrenheit") 
## Warning: Can't output dynamic/interactive ggvis plots in a knitr document.
## Generating a static (non-dynamic, non-interactive) version of the plot.

N. Homework: Communicate - Create an R Markdown document

Complete on your own.

Create an R Markdown document named Bike_Sharing.Rmd. Show the data by year for all the visualizations of the bike sharing data.


Exercise N - Solution

2011 data using plot

plot(bikeshare_2011$cnt, bikeshare_2011$atemp, main= "Relationship between bike rentals and average daily temperature in 2011", frame=FALSE, xlab="Number of rentals per day", ylab="Average daily temperature in degrees fahrenheit")

2012 data using plot

plot(bikeshare_2012$cnt, bikeshare_2012$atemp, main= "Relationship between bike rentals and average daily temperature in 2012", frame=FALSE, xlab="Number of rentals per day", ylab="Average daily temperature in degrees fahrenheit")

# Add fit lines
#abline(lm(bikeshare$atemp~bikeshare$cnt), col="blue") # regression line (y~x) 
#lines(lowess(bikeshare$cnt, bikeshare$atemp), col="orange") # lowess line (x,y)

Line chart

#line chart
#reference: http://www.cookbook-r.com/Graphs/Legends_(ggplot2)/
lineplot <- ggplot(bikeshare,aes(x=dteday,y=cnt,color=factor(year),group=factor(year))) + geom_line() + labs(title = "Rentals by day", subtitle = "Insight here", caption = "Source: Capital Bikeshare",x = "day",y ="Rentals") 

lineplot + scale_color_manual(values=c("#999999", "#56B4E9"), 
                       name="Year",
                       breaks=c("0", "1"),
                       labels=c("2011", "2012"))

Stacked area

areaplot <- ggplot(bikeshare,aes(x=dteday,y=cnt,color=factor(year),group=factor(year))) + geom_area() + labs(title = "Rentals by day", subtitle = "Insight here", caption = "Source: Capital Bikeshare",x = "day",y ="Rentals") + theme_fivethirtyeight()

areaplot + scale_color_manual(values=c("#999999", "#56B4E9"), 
                       name="Year",
                       breaks=c("0", "1"),
                       labels=c("2011", "2012"))

#source https://chrisalbon.com/r-stats/stacked-area-graph.html

Histogram matrix

histogram(~ cnt | as.factor(mnth), data=bikeshare_2012, layout=c(4,3))


bikeshare_2012$mnth<-ordered(factor(bikeshare_2012$mnth, levels =c(1,2,3,4,5,6,7,8,9,10,11,12),
labels = c("Jan", "Feb", "March", "April", "May", "June", "July", "Aug.", "Sept", "Oct", "Nov.","Dec.")))

#reference: http://www.statmethods.net/RiA/lattice.pdf
require (lattice)
bikesharebymonth <-histogram(~bikeshare_2012$cnt|(bikeshare_2012$mnth),type=c("count"),col=("lightgreen"),strip =strip.custom(bg="lightgrey",
par.strip.text=list(col="black", cex=1, font=1)),main="The frequency of bicycle rentals by month in 2012\n",
              xlab="Rentals", breaks=5,layout=c(4,3), ylab ="Month", sub=("\n                  Kristen Sosulski | Captial Bikeshare, 2012"))

update (bikesharebymonth, index.cond=list(c(9:12, 5:8, 1:4)))

Basic bar

options(scipen=10000)
ggplot(bikeshare,aes(x=season,y=cnt)) + geom_bar(stat="identity") + labs(title = "Rentals by day", subtitle = "Insight here", caption = "Source: Capital Bikeshare",x = "Season",y ="Rentals") + scale_y_continuous(limits=c(0,1000000),oob = rescale_none)  

#stacked bar
bar<- ggplot(bikeshare,aes(x=season,y=cnt,color=factor(year),group=factor(year))) + geom_bar(stat="identity") + labs(title = "Rentals by day", subtitle = "Insight here", caption = "Source: Capital Bikeshare",x = "Season",y ="Rentals")  + scale_y_continuous(limits=c(0,1000000),oob = rescale_none) 

bar + scale_color_manual(values=c("#999999", "#56B4E9"), 
                       name="Year",
                       breaks=c("0", "1"),
                       labels=c("2011", "2012"))

Parallel coordinates

library(MASS)
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
## 
##     select
parcoord(bikeshare[, c(16, 10,12,13)], col="#4cbea3", lty=7, var.label=TRUE, lwd = .4)

#http://www.buildingwidgets.com/blog/2015/1/30/week-04-interactive-parallel-coordinates-1

HTML Widget

library(devtools)
devtools::install_github("timelyportfolio/parcoords")
## Skipping install of 'parcoords' from a github remote, the SHA1 (324d00b8) has not changed since last install.
##   Use `force = TRUE` to force installation
library(parcoords)
parcoords(bikeshare)

Brush On and Reorder

library(plotly)
## 
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
## 
##     select
## The following objects are masked from 'package:ggvis':
## 
##     add_data, hide_legend
## The following object is masked from 'package:ggplot2':
## 
##     last_plot
## The following object is masked from 'package:stats':
## 
##     filter
## The following object is masked from 'package:graphics':
## 
##     layout
p <- ggplot(data = bikeshare, aes(x = as.factor(weathersit), fill =season)) + geom_bar(position = "dodge")
ggplotly(p)
## We recommend that you use the dev version of ggplot2 with `ggplotly()`
## Install it with: `devtools::install_github('hadley/ggplot2')`

Scatter

plot_ly(bikeshare, x = bikeshare$humidity, y = bikeshare$cnt, 
        text = paste("Weather situation: ", as.factor(bikeshare$weathersit)),
        mode = "markers", color = as.factor(bikeshare$weathersit)) %>% # size = bikeshare$weathersit
  layout(xaxis = list(title = "Humidity"), yaxis = list(title = "Rentals per day"))
## No trace type specified:
##   Based on info supplied, a 'scatter' trace seems appropriate.
##   Read more about this trace type -> https://plot.ly/r/reference/#scatter

O. Homework: Devise the problem, challenge, and/or questions

At this point in the process, you should have gained enough insight to frame a question to guide the rest of your analysis. Sometimes you don’t know what to ask of the data, and other times the questions you have cannot be answered by the data that you have. In most visual analytical explorations there will be a back and forth between defining the questions and identifying the data sources that contain the information you need to extract.

Often your question will fall into one of three categories: past, present, or future.

Some questions that can guide an historical analysis of past events are:

  • Do weather conditions affect rental behaviors?
  • Does the precipitation, day of week, season, hour of the day, etc. affect rental behavior?
  • Which weather conditions affect behavior the most? Do they differ by season?

These questions serve the purpose of guiding reports, where the analyst is reporting on past events.

A question based on the present is:

How many bikes were rented in the past hour or today?

This type of question is suited to describing the current state of an event.


Can we answer this question?

The data we are using cannot answer this question since it is historical data from 2011 and 2012.
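
A quick check of the time span confirms this (dteday was imported as a character column, so we parse it first; the preview shows dates in month/day/two-digit-year format):

range(as.Date(bikeshare$dteday, format = "%m/%d/%y"))  # earliest and latest dates in the data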


A question about the future could be framed as the following:

Will bike rentals be higher in the summer rather than the winter due to weather?

Questions about the future involve analysis that requires prediction or forecasting methods. The analyst in this case is trying to predict the future from past data.

To complete on your own: try to answer the following questions and show your work as a data visualization.

  • Do weather conditions affect rental behaviors?
  • Does the precipitation, day of week, season, hour of the day, etc. affect rental behavior?
  • Which weather conditions affect behavior the most? Do they differ by season?

SOLUTION