library(tidyverse)
## -- Attaching packages -------------------------------------------------------------------------------------------------------- tidyverse 1.3.0 --
## v ggplot2 3.3.2     v purrr   0.3.4
## v tibble  3.0.3     v dplyr   1.0.2
## v tidyr   1.1.2     v stringr 1.4.0
## v readr   1.3.1     v forcats 0.5.0
## -- Conflicts ----------------------------------------------------------------------------------------------------------- tidyverse_conflicts() --
## x dplyr::filter() masks stats::filter()
## x dplyr::lag()    masks stats::lag()
library(ggplot2) # already attached by tidyverse above; reloading is harmless

library(rvest)
## Loading required package: xml2
## 
## Attaching package: 'rvest'
## The following object is masked from 'package:purrr':
## 
##     pluck
## The following object is masked from 'package:readr':
## 
##     guess_encoding
library(httr)

library(readr) # already attached by tidyverse above; reloading is harmless

1

[Here is a package with data on Old Faithful (MASS)](https://cran.r-project.org/web/packages/MASS/index.html)

[Here is a package for reading Google's API (googleAuthR)](https://cran.r-project.org/web/packages/googleAuthR/readme/README.html)

[Here is data from GitHub](https://github.com/rudeboybert/resampledata)

[Here is a data set from ICPSR on adolescent and adult health](https://www.icpsr.umich.edu/web/ICPSR/studies/21600/summary)

2

lab4dataA <- read.csv("https://raw.githubusercontent.com/prlitics/Election-Data-Science-Fall-2020/master/Data/wk4_challenge2.a.txt") 

lab4dataB <- read.csv("https://raw.githubusercontent.com/prlitics/Election-Data-Science-Fall-2020/master/Data/wk4_challenge2.b.txt") 

lab4dataC <- read.csv("https://raw.githubusercontent.com/prlitics/Election-Data-Science-Fall-2020/master/Data/wk4_challenge2.c.txt")
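If any of these comes back as a single mashed-together column, the file is probably not comma-delimited. One quick check, a minimal sketch assuming the files are plain text at these URLs, is to peek at the first raw line of each and then pick read.csv(), read.delim(), or readr::read_delim() to match whatever separator you see.

# Peek at the first line of each raw file to confirm its delimiter before parsing.
urls <- c(
  a = "https://raw.githubusercontent.com/prlitics/Election-Data-Science-Fall-2020/master/Data/wk4_challenge2.a.txt",
  b = "https://raw.githubusercontent.com/prlitics/Election-Data-Science-Fall-2020/master/Data/wk4_challenge2.b.txt",
  c = "https://raw.githubusercontent.com/prlitics/Election-Data-Science-Fall-2020/master/Data/wk4_challenge2.c.txt"
)
lapply(urls, function(u) readLines(u, n = 1))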

3

firstset <- read.csv("https://raw.githubusercontent.com/prlitics/Election-Data-Science-Fall-2020/master/Data/wk4_challenge2.a.txt", nrows = 1000)

# Convert the receipt date from character to Date (default ISO year-month-day parsing).
firstset$contribution_receipt_date <- as.Date(firstset$contribution_receipt_date)
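As a quick sanity check (assuming the contribution_receipt_date values are in an ISO-style year-month-day format that as.Date() parses by default), you can confirm the column is now a Date and that its range looks plausible; a run of NAs would mean the format string has to be supplied explicitly.

# Verify the conversion: the column should now be of class "Date" with plausible dates.
class(firstset$contribution_receipt_date)
range(firstset$contribution_receipt_date, na.rm = TRUE)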

4

wear_a_mask <- read.csv("https://raw.githubusercontent.com/nytimes/covid-19-data/master/mask-use/mask-use-by-county.csv")

This data could be useful for looking at state-level mask compliance if you know which counties correspond to which states. I'm pretty sure this could be done without too much difficulty: once you know which county codes go with which states, you could group_by those codes (say, all the codes belonging to California), then take the average of each column within that group to get California's state-level figures. Do this for each state and then chart them all. That sounds like a lot of work, though. What would be a more efficient way? See the sketch below.
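One more efficient route, sketched here under the assumption that the county identifier column in the NYT file is named COUNTYFP and holds five-digit county FIPS codes (whose first two digits are the state FIPS code), is to derive the state code from the FIPS code and group_by that instead of hand-mapping counties to states.

# A sketch, assuming COUNTYFP holds five-digit county FIPS codes: restore the
# leading zero that read.csv drops, take the first two digits as the state
# FIPS code, and average the mask-use columns within each state.
state_mask_use <- wear_a_mask %>%
  mutate(state_fips = str_sub(str_pad(COUNTYFP, width = 5, pad = "0"), 1, 2)) %>%
  select(-COUNTYFP) %>%
  group_by(state_fips) %>%
  summarise(across(everything(), mean), .groups = "drop")

state_mask_use

To attach state names you would still need a FIPS-to-state crosswalk (for example, the fips_codes table in the tidycensus package), and note that these are unweighted averages of county shares rather than population-weighted ones.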

5

url <- "https://en.wikipedia.org/wiki/Cats_(musical)"

wiki <- read_html(url) %>% 
  html_node(xpath = "/html/body/div[3]/div[3]/div[5]/div[1]/table[2]") %>%
  html_table()
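
The absolute XPath above is brittle; it breaks whenever Wikipedia shuffles its page layout. A sturdier sketch, assuming the table of interest is one of the page's standard wikitable tables, is to pull every wikitable, convert them all, and pick the one you need by inspecting the list.

# Grab every wikitable on the page, convert each to a data frame, then choose
# the one of interest by index after inspecting the list.
cats_tables <- read_html(url) %>%
  html_nodes("table.wikitable") %>%
  html_table(fill = TRUE)

length(cats_tables)      # how many wikitables the page has
head(cats_tables[[1]])   # inspect one; adjust the index to the table you need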