This is problem set #1, in which we hope you will practice using the tidyr and dplyr packages. There are some great cheat sheets from RStudio.
This data set comes from a replication of Janiszewski and Uy (2008), who investigated whether the precision of the anchor for a price influences the amount of adjustment.
In the data frame, the Input.condition variable represents the experimental condition (under the rounded anchor, at the rounded anchor, or over the rounded anchor). Input.price1, Input.price2, and Input.price3 are the anchors for the Answer.plasma_cost, Answer.dog_cost, and Answer.sushi_cost items, respectively.
I pretty much always clear the workspace and load the same basic helper functions before starting an analysis.
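For concreteness, that setup might look something like this (the helper file path is hypothetical, not part of this repository):
rm(list = ls())            # clear the workspace
source("helper/useful.R")  # load basic helper functions via a relative path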
library(tidyverse)
## Loading tidyverse: ggplot2
## Loading tidyverse: tibble
## Loading tidyverse: tidyr
## Loading tidyverse: readr
## Loading tidyverse: purrr
## Loading tidyverse: dplyr
## Conflicts with tidy packages ----------------------------------------------
## filter(): dplyr, stats
## lag(): dplyr, stats
library(stringr)
library(ggplot2)
Note that I’m using a “relative” path (the “helper” directory) rather than an absolute path (e.g. “/Users/mcfrank/code/projects/etc…”). The relative path means that someone else can run your code simply by changing to the right directory, while an absolute path forces them to make trivial changes every time they want to run it.
The first part of this exercise actually just consists of getting the data in a format usable for analysis. This is not trivial. Let’s try it:
d <- read_csv("data/janiszewski_rep_exercise.csv")
## Parsed with column specification:
## cols(
## .default = col_character(),
## Reward = col_double(),
## MaxAssignments = col_integer(),
## AssignmentDurationInSeconds = col_integer(),
## AutoApprovalDelayInSeconds = col_integer(),
## NumberOfSimilarHITs = col_integer(),
## WorkTimeInSeconds = col_integer(),
## Input.price1 = col_number(),
## Input.price2 = col_number(),
## Input.price3 = col_double(),
## Answer.plasma_cost = col_number()
## )
## See spec(...) for full column specifications.
Fine, right? Why can’t we go forward with the analysis?
HINT: try computing some summary statistics for the different items. Also, are there any participants that did the task more than once?
# summary stats
summary(d$Input.condition)
## Length Class Mode
## 90 character character
summary(d$Input.price1)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 4988 4988 5000 5000 5012 5012
summary(d$Input.price2)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 2492 2492 2500 2500 2508 2508
summary(d$Input.price3)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 8.64 8.64 9.00 9.00 9.36 9.36
summary(d$Answer.dog_cost)
## Length Class Mode
## 90 character character
summary(d$Answer.plasma_cost)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.45 3500.00 4500.00 4077.00 4700.00 5000.00
summary(d$Answer.sushi_cost)
## Length Class Mode
## 90 character character
Answer.dog_cost and Answer.sushi_cost are stored as character, not numeric. They also contain values in the wrong format, such as “five hundred” and “2,000”.
# search for duplicate participants
anyDuplicated(d$WorkerId)
## [1] 48
which(duplicated(d$WorkerId))
## [1] 48 58 62
It looks like three participants did the task more than once.
Fix the data file programmatically, i.e., write code that transforms the unclean data frame into a clean data frame.
# remove duplicate participants, keeping each worker's first row
d <- d[!duplicated(d$WorkerId), ]
# strip thousands separators, e.g. "2,000" -> "2000"
d$Answer.dog_cost <- gsub(",", "", d$Answer.dog_cost)
d$Answer.sushi_cost <- gsub(",", "", d$Answer.sushi_cost)
# fix spelled-out and misspelled numbers
d$Answer.dog_cost <- gsub("five hundred", "500", d$Answer.dog_cost)
d$Answer.sushi_cost <- gsub("ehight", "8", d$Answer.sushi_cost)
d$Answer.dog_cost <- as.numeric(d$Answer.dog_cost)
d$Answer.sushi_cost <- as.numeric(d$Answer.sushi_cost)
Now let’s start with the cleaned data, so that we are all beginning from the same place.
d <- read_csv("data/janiszewski_rep_cleaned.csv")
## Parsed with column specification:
## cols(
## .default = col_character(),
## Reward = col_double(),
## MaxAssignments = col_integer(),
## AssignmentDurationInSeconds = col_integer(),
## AutoApprovalDelayInSeconds = col_integer(),
## NumberOfSimilarHITs = col_integer(),
## WorkTimeInSeconds = col_integer(),
## Input.price1 = col_integer(),
## Input.price2 = col_integer(),
## Input.price3 = col_double(),
## Answer.dog_cost = col_double(),
## Answer.plasma_cost = col_double(),
## Answer.sushi_cost = col_double()
## )
## See spec(...) for full column specifications.
This data frame is in wide format - that means that each row is a participant and there are multiple observations per participant. This data is not tidy.
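To make the wide/tidy distinction concrete, here is a toy illustration (the tibble and its values are invented for this example):
d.toy <- tibble(subject = c("s1", "s2"),
                dog = c(2000, 1500),
                sushi = c(9, 8))
d.toy                                  # wide: one row per participant
gather(d.toy, item, cost, dog, sushi)  # tidy: one row per observation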
To make this data tidy, we’ll do some cleanup. First, remove the columns you don’t need, using the verb select.
HINT: ?select and the examples of helper functions will help you be efficient.
d.tidy <- select(d, WorkerId, starts_with("Input"), starts_with("Answer"))
Try renaming some variables using rename. A good naming scheme is consistent in case, consistent in its separator (underscores are usually preferred over dots), and concise while still comprehensible to others.
Try using the %>% operator as well. So you will be “piping” d %>% rename(...).
d.tidy <- d.tidy %>% rename(workerid = WorkerId,
                            condition = Input.condition,
                            anchor_plasma = Input.price1,
                            anchor_dog = Input.price2,
                            anchor_sushi = Input.price3)
d.tidy <- d.tidy %>% rename(dog = Answer.dog_cost,
                            plasma = Answer.plasma_cost,
                            sushi = Answer.sushi_cost)
OK, now for the tricky part. Use the verb gather to turn this into a tidy data frame.
HINT: look for online examples!
d.tidy <- d.tidy %>% gather(item, cost, dog:sushi)
Bonus problem: spread these data back into a wide format data frame.
d.wide <- d.tidy %>% spread(item, cost)
NOTE: If you generally use the plyr package, note that plyr and dplyr do not play nicely together, so functions like rename won’t work unless you load dplyr after plyr.
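A minimal sketch of the safe load order (only if you actually need both packages):
library(plyr)   # load plyr first...
library(dplyr)  # ...so dplyr's verbs mask plyr's, not the other way around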
As we said in class, a good thing to do is always to check histograms of the response variable. Do that now, using either regular base graphics or ggplot. What can you conclude?
g <- ggplot(d.tidy, aes(cost)) + geom_histogram()
g + facet_grid(. ~ item, scales = "free_x")
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## Warning: Removed 1 rows containing non-finite values (stat_bin).
Try also using the dplyr distinct function to remove the duplicate participants (discovered in part 1) from the raw csv file.
d.raw <- read.csv("data/janiszewski_rep_exercise.csv")
d.unique.subs <- distinct(d.raw,WorkerId,.keep_all=TRUE)
OK, now we turn to the actual data analysis. We’ll be using dplyr verbs to filter, group, mutate, and summarise the data.
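As a quick illustration of filter (the one verb we won’t otherwise use below), it keeps only the rows matching a condition:
d.tidy %>% filter(item == "dog") %>% head()  # only the dog-item observations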
Start by using summarise to compute the grand mean cost. (Note that this is the same as taking the grand mean directly - the value of doing it this way will come later. Right now we’re just learning the syntax of that verb.)
grandmean <- summarise(d.tidy, grandmean = mean(cost, na.rm = TRUE))
print(grandmean)
## # A tibble: 1 × 1
## grandmean
## <dbl>
## 1 2022.557
This is a great time to get comfortable with the %>% operator. In brief, %>% allows you to pipe data from one function to another. So if you would have written:
d <- function(d, other_stuff)
you can now write:
d <- d %>% function(other_stuff)
That doesn’t seem like much, but it’s cool when you can replace:
d <- function1(d, other_stuff)
d <- function2(d, lots_of_other_stuff, more_stuff)
d <- function3(d, yet_more_stuff)
with
d <- d %>%
  function1(other_stuff) %>%
  function2(lots_of_other_stuff, more_stuff) %>%
  function3(yet_more_stuff)
In other words, you get to make a clean list of the things you want to do and chain them together without a lot of intermediate assignments.
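For example (variable names invented for illustration), these two expressions compute the same per-item means:
# nested call, read inside-out
means_nested <- summarise(group_by(d.tidy, item), mean_cost = mean(cost, na.rm = TRUE))
# piped version, read top-to-bottom
means_piped <- d.tidy %>%
  group_by(item) %>%
  summarise(mean_cost = mean(cost, na.rm = TRUE))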
Let’s use that capacity to combine summarise with group_by, which allows us to break up our summary into groups. Try grouping by item and condition and taking means using summarise, chaining these two verbs with %>%.
groupmean <- d.tidy %>% group_by(item,condition) %>%
summarise(mean=mean(cost,na.rm = TRUE))
print(groupmean)
## Source: local data frame [9 x 3]
## Groups: item [?]
##
## item condition mean
## <chr> <chr> <dbl>
## 1 dog over 1898.300000
## 2 dog rounded 1884.482414
## 3 dog under 1906.964286
## 4 plasma over 4300.333000
## 5 plasma rounded 4091.655172
## 6 plasma under 4018.357143
## 7 sushi over 8.322414
## 8 sushi rounded 7.955517
## 9 sushi under 7.742500
OK, it’s looking like there are maybe some differences between conditions, but how are we going to plot these? They are fundamentally different magnitudes from one another.
Really we need the size of the deviation from the anchor, which means we need the anchor value. Let’s go back to the data and add that in.
Take a look at this complex expression. You don’t have to modify it, but see what is being done here with gather, separate, and spread. Run each part (e.g. the first verb, the first two verbs, etc.) and after each, look at head(d.tidy) to see what it does (a sketch of this incremental approach follows the block).
d.tidy <- d %>%
select(WorkerId, Input.condition,
starts_with("Answer"),
starts_with("Input")) %>%
rename(workerid = WorkerId,
condition = Input.condition,
plasma_anchor = Input.price1,
dog_anchor = Input.price2,
sushi_anchor = Input.price3,
dog_cost = Answer.dog_cost,
plasma_cost = Answer.plasma_cost,
sushi_cost = Answer.sushi_cost) %>%
gather(name, cost,
dog_anchor, plasma_anchor, sushi_anchor,
dog_cost, plasma_cost, sushi_cost) %>%
separate(name, c("item", "type"), "_") %>%
spread(type, cost)
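For instance, a truncated version of the pipeline (just the first two verbs, with the rename abbreviated to two columns) lets you peek at the intermediate result:
d %>%
  select(WorkerId, Input.condition, starts_with("Answer"), starts_with("Input")) %>%
  rename(workerid = WorkerId, condition = Input.condition) %>%
  head()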
Now we can do the same thing as before, but looking at the relative difference between anchor and estimate. Let’s do this two ways: first as a percent change relative to the anchor, and second as a z-score within each item.
To do the first, use the mutate verb to add a percent change column, then compute the same summary as before.
pcts <- d.tidy %>% mutate(pct_change = (cost - anchor)/anchor*100) %>%
group_by(item,condition) %>%
summarise(pct=mean(pct_change,na.rm = TRUE))
print(pcts)
## Source: local data frame [9 x 3]
## Groups: item [?]
##
## item condition pct
## <chr> <chr> <dbl>
## 1 dog over -24.31021
## 2 dog rounded -24.62070
## 3 dog under -23.47655
## 4 plasma over -14.19926
## 5 plasma rounded -18.16690
## 6 plasma under -19.43951
## 7 sushi over -11.08532
## 8 sushi rounded -11.60536
## 9 sushi under -10.38773
To do the second, you will need to group once by item, then ungroup and do the same thing as before. NOTE: you can also use group_by(…, add = FALSE) to set new grouping levels.
HINT: scale(x) returns a complicated data structure that doesn’t play nicely with dplyr. try scale(x)[,1] to get what you need.
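To see the difference directly, compare the two on a toy vector:
x <- c(1, 2, 3, 4)
scale(x)       # a 4x1 matrix with centering/scaling attributes attached
scale(x)[, 1]  # a plain numeric vector that mutate() handles cleanly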
z.scores <- d.tidy %>%
group_by(item) %>%
mutate(z = scale(cost-anchor)[,1]) %>%
group_by(item,condition,add=FALSE) %>%
summarize(z = mean(z, na.rm=TRUE))
print(z.scores)
## Source: local data frame [9 x 3]
## Groups: item [?]
##
## item condition z
## <chr> <chr> <dbl>
## 1 dog over -0.01393253
## 2 dog rounded -0.02744433
## 3 dog under 0.04335220
## 4 plasma over 0.17990127
## 5 plasma rounded -0.05822745
## 6 plasma under -0.13244436
## 7 sushi over -0.08033004
## 8 sushi rounded -0.09312927
## 9 sushi under 0.17965428
OK, now here comes the end: we’re going to plot the differences and see if anything happened. First the percent change:
pcts %>%
ggplot(aes(item,pct,fill=condition)) +
geom_bar(stat='identity',position='dodge')
and the z-scores:
z.scores %>%
ggplot(aes(item,z,fill=condition)) +
geom_bar(stat='identity',position='dodge')
Oh well. This replication didn’t seem to work out straightforwardly.