Healthy Cities GIS Assignment

Author

Your Name

Load the libraries and set the working directory

library(tidyverse)     # loads dplyr, tidyr, stringr, ggplot2, and readr
library(RColorBrewer)
setwd("C:/Users/emmam/OneDrive/Documents/Data 110")
cities500 <- read_csv("500CitiesLocalHealthIndicators.cdc.csv")
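
The setwd() call ties the document to one machine. A relative path built with here::here() is a common, more portable alternative (a sketch, assuming the here package is installed and the CSV sits at the project root):

library(here) # hypothetical alternative setup; not used in the original
cities500 <- read_csv(here::here("500CitiesLocalHealthIndicators.cdc.csv"))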

The GeoLocation variable stores coordinates in (lat, long) format

Split GeoLocation (lat, long) into two columns: lat and long

latlong <- cities500|>
  mutate(GeoLocation = str_replace_all(GeoLocation, "[()]", ""))|>
  separate(GeoLocation, into = c("lat", "long"), sep = ",", convert = TRUE)
head(latlong)
# A tibble: 6 × 25
   Year StateAbbr StateDesc  CityName  GeographicLevel DataSource Category      
  <dbl> <chr>     <chr>      <chr>     <chr>           <chr>      <chr>         
1  2017 CA        California Hawthorne Census Tract    BRFSS      Health Outcom…
2  2017 CA        California Hawthorne City            BRFSS      Unhealthy Beh…
3  2017 CA        California Hayward   City            BRFSS      Health Outcom…
4  2017 CA        California Hayward   City            BRFSS      Unhealthy Beh…
5  2017 CA        California Hemet     City            BRFSS      Prevention    
6  2017 CA        California Indio     Census Tract    BRFSS      Health Outcom…
# ℹ 18 more variables: UniqueID <chr>, Measure <chr>, Data_Value_Unit <chr>,
#   DataValueTypeID <chr>, Data_Value_Type <chr>, Data_Value <dbl>,
#   Low_Confidence_Limit <dbl>, High_Confidence_Limit <dbl>,
#   Data_Value_Footnote_Symbol <chr>, Data_Value_Footnote <chr>,
#   PopulationCount <dbl>, lat <dbl>, long <dbl>, CategoryID <chr>,
#   MeasureId <chr>, CityFIPS <dbl>, TractFIPS <dbl>, Short_Question_Text <chr>
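
A quick sanity check (not part of the original output) can confirm that convert = TRUE produced numeric coordinates and count any rows whose GeoLocation failed to split:

# Hypothetical check: lat/long should be numeric, with NAs only where GeoLocation was missing
latlong |>
  summarise(
    lat_is_numeric  = is.numeric(lat),
    long_is_numeric = is.numeric(long),
    missing_lat     = sum(is.na(lat)),
    missing_long    = sum(is.na(long))
  )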

Filter the dataset

Remove the rows where StateDesc is "United States", keep only crude prevalence values, keep only 2017, restrict to Connecticut (StateAbbr == "CT"), and keep the Unhealthy Behaviors category.

latlong_clean <- latlong |>
  filter(StateDesc != "United States") |>
  filter(Data_Value_Type == "Crude prevalence") |>
  filter(Year == 2017) |>
  filter(StateAbbr == "CT") |>
  filter(Category == "Unhealthy Behaviors")
head(latlong_clean)
# A tibble: 6 × 25
   Year StateAbbr StateDesc   CityName   GeographicLevel DataSource Category    
  <dbl> <chr>     <chr>       <chr>      <chr>           <chr>      <chr>       
1  2017 CT        Connecticut Bridgeport Census Tract    BRFSS      Unhealthy B…
2  2017 CT        Connecticut Danbury    City            BRFSS      Unhealthy B…
3  2017 CT        Connecticut Norwalk    Census Tract    BRFSS      Unhealthy B…
4  2017 CT        Connecticut Bridgeport Census Tract    BRFSS      Unhealthy B…
5  2017 CT        Connecticut Hartford   Census Tract    BRFSS      Unhealthy B…
6  2017 CT        Connecticut Waterbury  Census Tract    BRFSS      Unhealthy B…
# ℹ 18 more variables: UniqueID <chr>, Measure <chr>, Data_Value_Unit <chr>,
#   DataValueTypeID <chr>, Data_Value_Type <chr>, Data_Value <dbl>,
#   Low_Confidence_Limit <dbl>, High_Confidence_Limit <dbl>,
#   Data_Value_Footnote_Symbol <chr>, Data_Value_Footnote <chr>,
#   PopulationCount <dbl>, lat <dbl>, long <dbl>, CategoryID <chr>,
#   MeasureId <chr>, CityFIPS <dbl>, TractFIPS <dbl>, Short_Question_Text <chr>
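
A quick tabulation (not shown in the original output) can confirm that only 2017 Connecticut crude-prevalence rows in the Unhealthy Behaviors category remain:

# Hypothetical check: every remaining row should share the same filter values
latlong_clean |>
  count(Year, StateAbbr, Category, Data_Value_Type)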

What variables are included? (can any of them be removed?)

names(latlong_clean)
 [1] "Year"                       "StateAbbr"                 
 [3] "StateDesc"                  "CityName"                  
 [5] "GeographicLevel"            "DataSource"                
 [7] "Category"                   "UniqueID"                  
 [9] "Measure"                    "Data_Value_Unit"           
[11] "DataValueTypeID"            "Data_Value_Type"           
[13] "Data_Value"                 "Low_Confidence_Limit"      
[15] "High_Confidence_Limit"      "Data_Value_Footnote_Symbol"
[17] "Data_Value_Footnote"        "PopulationCount"           
[19] "lat"                        "long"                      
[21] "CategoryID"                 "MeasureId"                 
[23] "CityFIPS"                   "TractFIPS"                 
[25] "Short_Question_Text"       

Remove the variables that will not be used in the assignment

latlong_clean2 <- latlong_clean |>
  select(-DataSource,-Data_Value_Unit, -DataValueTypeID, -Low_Confidence_Limit, -High_Confidence_Limit, -Data_Value_Footnote_Symbol, -Data_Value_Footnote)
head(latlong_clean2)
# A tibble: 6 × 18
   Year StateAbbr StateDesc   CityName GeographicLevel Category UniqueID Measure
  <dbl> <chr>     <chr>       <chr>    <chr>           <chr>    <chr>    <chr>  
1  2017 CT        Connecticut Bridgep… Census Tract    Unhealt… 0908000… Obesit…
2  2017 CT        Connecticut Danbury  City            Unhealt… 918430   Obesit…
3  2017 CT        Connecticut Norwalk  Census Tract    Unhealt… 0955990… Obesit…
4  2017 CT        Connecticut Bridgep… Census Tract    Unhealt… 0908000… Curren…
5  2017 CT        Connecticut Hartford Census Tract    Unhealt… 0937000… Obesit…
6  2017 CT        Connecticut Waterbu… Census Tract    Unhealt… 0980000… Obesit…
# ℹ 10 more variables: Data_Value_Type <chr>, Data_Value <dbl>,
#   PopulationCount <dbl>, lat <dbl>, long <dbl>, CategoryID <chr>,
#   MeasureId <chr>, CityFIPS <dbl>, TractFIPS <dbl>, Short_Question_Text <chr>

The new dataset “latlong_clean2” is now a manageable size.

For your assignment, work with a dataset you have cleaned and filtered yourself.

1. Once you have run the code above, perform your own investigation: filter the dataset however you choose, using your own inclusion/exclusion criteria, so that you end up with a subset of no more than 900 observations.

Filter chunk here (you may need multiple chunks)

richandpoor <- c("DC", "MA", "CT", "AL", "WV", "LA") # the three highest- and three lowest-income states, from a quick web search
cleancity <- latlong |>
  filter(MeasureId == "TEETHLOST") |>
  filter(StateAbbr %in% richandpoor) |> # %in% matches against the whole vector; == would recycle values and trigger a length warning
  select(-DataSource, -Data_Value_Unit, -DataValueTypeID, -Low_Confidence_Limit, -High_Confidence_Limit, -Data_Value_Footnote_Symbol, -Data_Value_Footnote)
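
A quick row count (not part of the original output) confirms whether the subset stays under the assignment's 900-observation limit:

nrow(cleancity) # should be no more than 900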

2. Based on the GIS tutorial (Japan earthquakes), create one plot about something in your subsetted dataset.

First plot chunk here

p2 <- cleancity |>
  ggplot(aes(x = Data_Value, y = StateDesc, fill = StateDesc)) +
  geom_boxplot(show.legend = FALSE) +
  labs(x = "Tooth loss prevalence (%)", y = "State",
       title = "Distribution of Tooth Loss in the Three Richest and Three Poorest States")
p2
Warning: Removed 2 rows containing non-finite outside the scale range
(`stat_boxplot()`).
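
The warning above most likely reflects rows with a missing Data_Value; a quick check (a sketch, not in the original output) shows which rows the boxplot dropped:

# Hypothetical check: rows the boxplot silently excluded
cleancity |>
  filter(is.na(Data_Value)) |>
  select(StateAbbr, CityName, GeographicLevel, Data_Value)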

3. Now create a map of your subsetted dataset.

First map chunk here

library(leaflet)
Warning: package 'leaflet' was built under R version 4.5.2
leaflet(data = cleancity) |>
  setView(lng = -80, lat = 36, zoom = 4.5) |>
  addProviderTiles("Esri.WorldStreetMap") |>
  addCircleMarkers(
    ~long, ~lat,
    radius = ~Data_Value / 3,
    stroke = FALSE,
    fillOpacity = 0.2,
    fillColor = "purple"
  )
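
RColorBrewer is loaded at the top but never used; one way it could come into play is a continuous palette keyed to Data_Value instead of a fixed purple. This is a sketch, not part of the original map:

# Hypothetical refinement: colour markers by tooth-loss prevalence
pal <- colorNumeric(palette = "PuRd", domain = cleancity$Data_Value) # "PuRd" is an RColorBrewer palette

leaflet(data = cleancity) |>
  setView(lng = -80, lat = 36, zoom = 4.5) |>
  addProviderTiles("Esri.WorldStreetMap") |>
  addCircleMarkers(
    ~long, ~lat,
    radius = ~Data_Value / 3,
    stroke = FALSE,
    fillOpacity = 0.5,
    fillColor = ~pal(Data_Value)
  ) |>
  addLegend(pal = pal, values = ~Data_Value, title = "Tooth loss (%)")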

4. Refine your map to include a mouse-click tooltip

Refined map chunk here

library(leaflet)

leaflet(data = cleancity) |>
  addProviderTiles("Esri.WorldStreetMap") |>
  addCircleMarkers(
    ~long, ~lat,
    radius = ~Data_Value / 3,
    stroke = FALSE,
    fillOpacity = 0.2,
    fillColor = "purple",
    popup = ~paste0(
      "<b>State: </b>", StateAbbr, "<br>",
      "<b>City: </b>", CityName, "<br>",
      "<b>Population: </b>", PopulationCount, "<br>",
      "<b>Tooth loss prevalence (%): </b>", Data_Value, "<br>",
      "<b>Unique ID: </b>", UniqueID
    )
  ) |>
  setView(lng = -80, lat = 36, zoom = 4.5)

5. Write a paragraph

In a paragraph, describe the plots you created and the insights they show.

I made these plots to explore whether higher poverty rates are associated with tooth loss, presumably through a lack of dental care. I first made a boxplot of tooth loss prevalence in the three highest- and three lowest-income states. The lower-income states appear to have higher tooth loss prevalence than the higher-income states. I then made a map showing where the measurements were taken, which reveals a problem with the data: there are significantly fewer data points for the lower-income states, which makes the comparison less reliable. I do not think the correlation between poverty and tooth loss is very strong, since good dental care can itself mean teeth being removed as a preventive measure. A better measure would probably be something like “% of people with tooth decay” rather than simply losing teeth.
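
A possible follow-up on that last point would be to list the oral-health measures actually available in the dataset (a sketch, not part of the original analysis; the exact Measure wording depends on the CDC file):

# Hypothetical follow-up: find dental-related measures in the full dataset
latlong |>
  filter(str_detect(Measure, regex("teeth|dental|dentist", ignore_case = TRUE))) |>
  distinct(MeasureId, Measure)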