Introduction:

The CDC’s “PLACES: Local Data for Better Health” provides access to detailed local health data. It helps users explore health indicators by county, city, and census tract to improve public health efforts. The platform offers interactive maps, data tools, and resources for understanding local health measures based on CDC and U.S. Census data. For more details, visit the PLACES website.

This dataset is adapted from 500 Cities: Local Data for Better Health, 2017 release.

Project Prompt:

For this project, you will work with a cleaned dataset and conduct an analysis using GIS techniques.

  1. Start by filtering the dataset further to create a subset containing no more than 900 observations. Choose a specific subset based on a meaningful criterion related to your analysis.

  2. Create a plot that visualizes an aspect of your subsetted dataset. This could be a histogram, scatter plot, or line chart, depending on the nature of your data.

  3. Generate a basic GIS map that represents the geographic distribution of your subsetted data points. Ensure that the map clearly conveys relevant spatial patterns.

  4. Refine your GIS map by adding interactive elements, such as a tooltip that displays information when users click on a data point.

  5. Write a paragraph summarizing your visualizations. Explain what your plot and map reveal about your subsetted dataset. Discuss any trends, patterns, or insights gained from your analysis.

This project will help you practice data filtering, visualization, and GIS mapping techniques, reinforcing concepts from the Japan earthquakes tutorial.

Let’s start:

Load the libraries and read in the data:

library(tidyverse)  # loads dplyr, tidyr, ggplot2, readr, and stringr, among others
library(leaflet)
library(sf)

cities500 <- read_csv("500CitiesLocalHealthIndicators.cdc.csv")

Cleaning the dataset:

1. The GeoLocation variable stores coordinates in “(lat, long)” format. We need to split it into two separate columns: lat and long.

To do so, we will remove the parentheses from a column, then split it into separate latitude and longitude columns.

latlong <- cities500 |>
  mutate(GeoLocation = str_replace_all(GeoLocation, "[()]", "")) |>
  separate(GeoLocation,
           into = c("lat", "long"),
           sep = ",",
           convert = TRUE)
head(latlong)
## # A tibble: 6 × 25
##    Year StateAbbr StateDesc  CityName  GeographicLevel DataSource Category      
##   <dbl> <chr>     <chr>      <chr>     <chr>           <chr>      <chr>         
## 1  2017 CA        California Hawthorne Census Tract    BRFSS      Health Outcom…
## 2  2017 CA        California Hawthorne City            BRFSS      Unhealthy Beh…
## 3  2017 CA        California Hayward   City            BRFSS      Health Outcom…
## 4  2017 CA        California Hayward   City            BRFSS      Unhealthy Beh…
## 5  2017 CA        California Hemet     City            BRFSS      Prevention    
## 6  2017 CA        California Indio     Census Tract    BRFSS      Health Outcom…
## # ℹ 18 more variables: UniqueID <chr>, Measure <chr>, Data_Value_Unit <chr>,
## #   DataValueTypeID <chr>, Data_Value_Type <chr>, Data_Value <dbl>,
## #   Low_Confidence_Limit <dbl>, High_Confidence_Limit <dbl>,
## #   Data_Value_Footnote_Symbol <chr>, Data_Value_Footnote <chr>,
## #   PopulationCount <dbl>, lat <dbl>, long <dbl>, CategoryID <chr>,
## #   MeasureId <chr>, CityFIPS <dbl>, TractFIPS <dbl>, Short_Question_Text <chr>

str_replace_all(GeoLocation, "[()]", "") removes any parentheses from the GeoLocation column.

separate(GeoLocation, into = c("lat", "long"), sep = ",", convert = TRUE)

The separate() function splits the GeoLocation column into two new columns:

“lat” (latitude) and “long” (longitude).

sep = "," specifies that the values are separated by a comma.

convert = TRUE automatically converts the new columns into appropriate data types (numeric in this case).
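
As a minimal sketch of the same pattern on a toy tibble (the coordinates here are made up):

toy <- tibble(GeoLocation = c("(41.6, -93.6)", "(42.0, -91.7)"))

toy |>
  mutate(GeoLocation = str_replace_all(GeoLocation, "[()]", "")) |>
  separate(GeoLocation, into = c("lat", "long"), sep = ",", convert = TRUE)
# Both lat and long come back as <dbl> because of convert = TRUE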

2. Filter the dataset: remove rows where StateDesc is “United States”, keep Prevention as the category of interest, keep only crude-prevalence measures, and keep only the year 2017.

latlong_clean <- latlong |>
  filter(StateDesc != "United States") |>
  filter(Category == "Prevention") |>
  filter(Data_Value_Type == "Crude prevalence") |>
  filter(Year == 2017)
head(latlong_clean)
## # A tibble: 6 × 25
##    Year StateAbbr StateDesc  CityName   GeographicLevel DataSource Category  
##   <dbl> <chr>     <chr>      <chr>      <chr>           <chr>      <chr>     
## 1  2017 AL        Alabama    Montgomery City            BRFSS      Prevention
## 2  2017 CA        California Concord    City            BRFSS      Prevention
## 3  2017 CA        California Concord    City            BRFSS      Prevention
## 4  2017 CA        California Fontana    City            BRFSS      Prevention
## 5  2017 CA        California Richmond   Census Tract    BRFSS      Prevention
## 6  2017 FL        Florida    Davie      Census Tract    BRFSS      Prevention
## # ℹ 18 more variables: UniqueID <chr>, Measure <chr>, Data_Value_Unit <chr>,
## #   DataValueTypeID <chr>, Data_Value_Type <chr>, Data_Value <dbl>,
## #   Low_Confidence_Limit <dbl>, High_Confidence_Limit <dbl>,
## #   Data_Value_Footnote_Symbol <chr>, Data_Value_Footnote <chr>,
## #   PopulationCount <dbl>, lat <dbl>, long <dbl>, CategoryID <chr>,
## #   MeasureId <chr>, CityFIPS <dbl>, TractFIPS <dbl>, Short_Question_Text <chr>
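
Equivalently, the four filter() calls can be collapsed into one; comma-separated conditions inside filter() are combined with AND:

latlong_clean <- latlong |>
  filter(StateDesc != "United States",
         Category == "Prevention",
         Data_Value_Type == "Crude prevalence",
         Year == 2017)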

What variables are included? (Can any of them be removed?)

names(latlong_clean)
##  [1] "Year"                       "StateAbbr"                 
##  [3] "StateDesc"                  "CityName"                  
##  [5] "GeographicLevel"            "DataSource"                
##  [7] "Category"                   "UniqueID"                  
##  [9] "Measure"                    "Data_Value_Unit"           
## [11] "DataValueTypeID"            "Data_Value_Type"           
## [13] "Data_Value"                 "Low_Confidence_Limit"      
## [15] "High_Confidence_Limit"      "Data_Value_Footnote_Symbol"
## [17] "Data_Value_Footnote"        "PopulationCount"           
## [19] "lat"                        "long"                      
## [21] "CategoryID"                 "MeasureId"                 
## [23] "CityFIPS"                   "TractFIPS"                 
## [25] "Short_Question_Text"

Remove the variables that will not be used in the assignment.

prevention <- latlong_clean |>
  select(-DataSource, -Data_Value_Unit, -DataValueTypeID,
         -Low_Confidence_Limit, -High_Confidence_Limit,
         -Data_Value_Footnote_Symbol, -Data_Value_Footnote)
head(prevention)
## # A tibble: 6 × 18
##    Year StateAbbr StateDesc  CityName  GeographicLevel Category UniqueID Measure
##   <dbl> <chr>     <chr>      <chr>     <chr>           <chr>    <chr>    <chr>  
## 1  2017 AL        Alabama    Montgome… City            Prevent… 151000   Choles…
## 2  2017 CA        California Concord   City            Prevent… 616000   Visits…
## 3  2017 CA        California Concord   City            Prevent… 616000   Choles…
## 4  2017 CA        California Fontana   City            Prevent… 624680   Visits…
## 5  2017 CA        California Richmond  Census Tract    Prevent… 0660620… Choles…
## 6  2017 FL        Florida    Davie     Census Tract    Prevent… 1216475… Choles…
## # ℹ 10 more variables: Data_Value_Type <chr>, Data_Value <dbl>,
## #   PopulationCount <dbl>, lat <dbl>, long <dbl>, CategoryID <chr>,
## #   MeasureId <chr>, CityFIPS <dbl>, TractFIPS <dbl>, Short_Question_Text <chr>
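
As an aside, the same column drop can be written with -c() to avoid repeating the minus sign:

prevention <- latlong_clean |>
  select(-c(DataSource, Data_Value_Unit, DataValueTypeID,
            Low_Confidence_Limit, High_Confidence_Limit,
            Data_Value_Footnote_Symbol, Data_Value_Footnote))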
# Quick check: how many rows would a Maryland subset have?
md <- prevention |>
  filter(StateAbbr == "MD")
dim(md)
## [1] 804  18
# Count the number of observations per state abbreviation
state_counts <- latlong_clean |>
  count(StateAbbr) |>
  arrange(StateAbbr)  # Alphabetical by state abbreviation, matching the output below

# View the results
head(state_counts)
## # A tibble: 6 × 2
##   StateAbbr     n
##   <chr>     <int>
## 1 AK          224
## 2 AL         1494
## 3 AR          507
## 4 AZ         4202
## 5 CA        21947
## 6 CO         2888
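
Because select() only removes columns, these row counts apply to prevention as well, so the table can be used to shortlist states that fit under the 900-observation cap:

# Candidate states with at most 900 observations, largest first
state_counts |>
  filter(n <= 900) |>
  arrange(desc(n))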

Your Work

  1. MD has 804 observations. For this project, start by filtering the dataset further to create a subset containing no more than 900 observations. Choose a specific subset based on a meaningful criterion related to your analysis.
# Subset: Iowa (820 observations, under the 900 cap)
ia_filtered <- prevention |>
  filter(StateAbbr == "IA")

dim(ia_filtered)
## [1] 820  18
head(ia_filtered)
## # A tibble: 6 × 18
##    Year StateAbbr StateDesc CityName   GeographicLevel Category UniqueID Measure
##   <dbl> <chr>     <chr>     <chr>      <chr>           <chr>    <chr>    <chr>  
## 1  2017 IA        Iowa      Des Moines Census Tract    Prevent… 1921000… "Chole…
## 2  2017 IA        Iowa      Davenport  Census Tract    Prevent… 1919000… "Curre…
## 3  2017 IA        Iowa      Des Moines Census Tract    Prevent… 1921000… "Takin…
## 4  2017 IA        Iowa      Waterloo   Census Tract    Prevent… 1982425… "Curre…
## 5  2017 IA        Iowa      Waterloo   Census Tract    Prevent… 1982425… "Visit…
## 6  2017 IA        Iowa      Sioux City Census Tract    Prevent… 1973335… "Takin…
## # ℹ 10 more variables: Data_Value_Type <chr>, Data_Value <dbl>,
## #   PopulationCount <dbl>, lat <dbl>, long <dbl>, CategoryID <chr>,
## #   MeasureId <chr>, CityFIPS <dbl>, TractFIPS <dbl>, Short_Question_Text <chr>
  2. Create a plot that visualizes an aspect of your subsetted dataset. This could be a histogram, scatter plot, or line chart, depending on the nature of your data.
# Bar graph (bad): a fixed fill inside geom_bar() overrides any aes() fill
# mapping, so only the x aesthetic is mapped here
ggplot(ia_filtered, aes(x = MeasureId)) +
  geom_bar(fill = "steelblue")

ia_filtered_n <- ia_filtered |> 
  count(CityName, MeasureId)  # Create a count column named 'n'

# Heatmap (also bad)
ggplot(ia_filtered_n, aes(x = CityName, y = MeasureId, fill = n)) +
  geom_tile() +
  scale_fill_gradient(low = "palevioletred", high = "violetred4") +
  theme(axis.text.x = element_text(angle = 90, hjust = 1))
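
Since neither plot is great, here is one possible cleaner alternative (a sketch): a histogram of the crude prevalence values, faceted by measure. The binwidth is a guess and may need tuning, and this assumes Data_Value is a percentage (consistent with the Data_Value_Type == "Crude prevalence" filter above).

# Distribution of crude prevalence, one panel per preventive measure
ggplot(ia_filtered, aes(x = Data_Value)) +
  geom_histogram(binwidth = 2, fill = "steelblue", color = "white") +
  facet_wrap(~ Short_Question_Text) +
  labs(x = "Crude prevalence (%)", y = "Number of locations")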

  3. Generate a basic GIS map that represents the geographic distribution of your subsetted data points. Ensure that the map clearly conveys relevant spatial patterns.
# Base map (addCircles inherits the data passed to leaflet())
leaflet(data = ia_filtered) |>
  setView(lng = -93.54918, lat = 41.57905, zoom = 7) |>
  addProviderTiles("Esri.WorldPhysical") |>
  addCircles(
    color = "#BF6559",
    fillOpacity = 0.1,
    radius = ~PopulationCount / 5)
## Assuming "long" and "lat" are longitude and latitude, respectively
  4. Refine your GIS map by adding interactive elements, such as a tooltip that displays information when users click on a data point.
# Detail + refine (label = shows a tooltip on hover; see the popup note below)
leaflet(data = ia_filtered) |>
  setView(lng = -93.54918, lat = 41.57905, zoom = 7) |>
  addProviderTiles("Esri.WorldPhysical") |>
  addCircles(
    color = "#64195F",
    fillColor = "#B157AB",
    fillOpacity = 0.1,
    label = ~paste("City:", CityName, "/",
                   "Procedure:", Short_Question_Text, "/",
                   "Pop Count:", PopulationCount, "/",
                   "Geo. Level:", GeographicLevel),
    radius = ~PopulationCount / 40,
    weight = 0.5)
## Assuming "long" and "lat" are longitude and latitude, respectively
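
A note on interactivity: label = displays its text on hover. Since the prompt asks for information on click, leaflet's popup = argument (same mapping syntax, and it accepts HTML) would do that instead. A minimal sketch:

# Same map, but the text opens in a popup when a circle is clicked
leaflet(data = ia_filtered) |>
  setView(lng = -93.54918, lat = 41.57905, zoom = 7) |>
  addProviderTiles("Esri.WorldPhysical") |>
  addCircles(
    color = "#64195F", fillColor = "#B157AB", fillOpacity = 0.1,
    popup = ~paste("City:", CityName, "<br>",
                   "Procedure:", Short_Question_Text, "<br>",
                   "Pop Count:", PopulationCount),
    radius = ~PopulationCount / 40, weight = 0.5)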
  5. Write a paragraph summarizing your visualizations. Explain what your plot and map reveal about your subsetted dataset. Discuss any trends, patterns, or insights gained from your analysis.

Iowa was, completely honestly, the worst state possible for this, and now I absolutely despise it. Haha, joking (not really)! On a serious note:

There appear to be, in this dataset at least, multiple small clusters of locations, where each cluster has one location with a massive population spike, wildly disproportionate to all the other smaller locations. This forced me to make each point very small; if I had not, the larger population circles would overlap far too much and you would not be able to access the information for the smaller location circles. Even now it is still not a very good representation, since there is still a good amount of overlap; this was just the best option for keeping at least a majority of the points accessible. In general, Iowa is a pretty consistent state outside of population, with the same few related procedures and measures appearing everywhere, and with the majority of observations at the census-tract geographic level. The data really only represent six cities in Iowa (as shown by the six clusters), which may raise questions about how well any generalization from it holds, but if other cities follow the same consistency patterns these displayed, it may be reasonable. Overall, Iowa was just a bad state to pick for the mapping portion, and I got unlucky with my choice, but data-wise it is conveniently very consistent and provides good insight into the general health concerns and preventive measures taken throughout the state.
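
One possible mitigation for the overlap problem (a sketch, not what was used above): scale the circle radius by the square root of population, which compresses the gap between the huge and tiny locations. The multiplier of 30 is a guess and would need tuning by eye.

# Radius proportional to sqrt(population), so large tracts dominate less
leaflet(data = ia_filtered) |>
  setView(lng = -93.54918, lat = 41.57905, zoom = 7) |>
  addProviderTiles("Esri.WorldPhysical") |>
  addCircles(
    color = "#64195F", fillColor = "#B157AB", fillOpacity = 0.1,
    radius = ~sqrt(PopulationCount) * 30, weight = 0.5)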