The CDC’s “PLACES: Local Data for Better Health” provides access to detailed local health data. It helps users explore health indicators by county, city, and census areas to improve public health efforts. The platform offers interactive maps, data tools, and resources for understanding local health measures based on CDC and U.S. Census data. For more details, visit the PLACES website.
This dataset is adapted from 500 Cities: Local Data for Better Health, 2017 release.
Project Prompt:
For this project, you will work with a cleaned dataset and conduct an analysis using GIS techniques.
Start by filtering the dataset further to create a subset containing no more than 900 observations. Choose a specific subset based on a meaningful criterion related to your analysis.
Create a plot that visualizes an aspect of your subsetted dataset. This could be a histogram, scatter plot, or line chart, depending on the nature of your data.
Generate a basic GIS map that represents the geographic distribution of your subsetted data points. Ensure that the map clearly conveys relevant spatial patterns.
Refine your GIS map by adding interactive elements, such as a tooltip that displays information when users click on a data point.
Write a paragraph summarizing your visualizations. Explain what your plot and map reveal about your subsetted dataset. Discuss any trends, patterns, or insights gained from your analysis.
This project will help you practice data filtering, visualization, and GIS mapping techniques, reinforcing concepts from the Japan earthquakes tutorial.
Load the libraries and set the working directory
library(tidyverse)
library(tidyr)
library(leaflet)
library(sf)
library(knitr)
setwd("~/data 110")
cities500 <- read_csv("500CitiesLocalHealthIndicators.cdc.csv")
1. The GeoLocation variable stores coordinates in (lat, long) format, so we need to split it into two columns: lat and long.
To do so, we first remove the parentheses from the column, then split it into separate latitude and longitude columns.
latlong <- cities500 |>
  mutate(GeoLocation = str_replace_all(GeoLocation, "[()]", "")) |>
  separate(GeoLocation,
           into = c("lat", "long"),
           sep = ",",
           convert = TRUE)
head(latlong)
## # A tibble: 6 × 25
## Year StateAbbr StateDesc CityName GeographicLevel DataSource Category
## <dbl> <chr> <chr> <chr> <chr> <chr> <chr>
## 1 2017 CA California Hawthorne Census Tract BRFSS Health Outcom…
## 2 2017 CA California Hawthorne City BRFSS Unhealthy Beh…
## 3 2017 CA California Hayward City BRFSS Health Outcom…
## 4 2017 CA California Hayward City BRFSS Unhealthy Beh…
## 5 2017 CA California Hemet City BRFSS Prevention
## 6 2017 CA California Indio Census Tract BRFSS Health Outcom…
## # ℹ 18 more variables: UniqueID <chr>, Measure <chr>, Data_Value_Unit <chr>,
## # DataValueTypeID <chr>, Data_Value_Type <chr>, Data_Value <dbl>,
## # Low_Confidence_Limit <dbl>, High_Confidence_Limit <dbl>,
## # Data_Value_Footnote_Symbol <chr>, Data_Value_Footnote <chr>,
## # PopulationCount <dbl>, lat <dbl>, long <dbl>, CategoryID <chr>,
## # MeasureId <chr>, CityFIPS <dbl>, TractFIPS <dbl>, Short_Question_Text <chr>
str_replace_all(GeoLocation, "[()]", "") removes any parentheses from the GeoLocation column.
separate(GeoLocation, into = c("lat", "long"), sep = ",", convert = TRUE) splits the GeoLocation column into two new columns: "lat" (latitude) and "long" (longitude).
sep = "," specifies that the values are separated by a comma.
convert = TRUE automatically converts the new columns to the appropriate data types (numeric in this case).
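To see what these two steps do in isolation, here is a minimal sketch on a one-row toy tibble (the GeoLocation value below is made up for illustration, not taken from the data):

# Toy example: the same clean-and-split steps on a single fake GeoLocation value
toy <- tibble(GeoLocation = "(34.9727, -105.0324)")

toy |>
  mutate(GeoLocation = str_replace_all(GeoLocation, "[()]", "")) |>  # drop the parentheses
  separate(GeoLocation, into = c("lat", "long"), sep = ",", convert = TRUE)
# Result: lat = 34.9727 and long = -105.0324, both numeric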
2. Filter the dataset: remove rows where StateDesc is "United States", keep only the Prevention category (our category of interest), keep only crude prevalence measurements, and keep only 2017.
latlong_clean <- latlong |>
filter(StateDesc != "United States") |>
filter(Category == "Prevention") |>
filter(Data_Value_Type == "Crude prevalence") |>
filter(Year == 2017)
head(latlong_clean)
## # A tibble: 6 × 25
## Year StateAbbr StateDesc CityName GeographicLevel DataSource Category
## <dbl> <chr> <chr> <chr> <chr> <chr> <chr>
## 1 2017 AL Alabama Montgomery City BRFSS Prevention
## 2 2017 CA California Concord City BRFSS Prevention
## 3 2017 CA California Concord City BRFSS Prevention
## 4 2017 CA California Fontana City BRFSS Prevention
## 5 2017 CA California Richmond Census Tract BRFSS Prevention
## 6 2017 FL Florida Davie Census Tract BRFSS Prevention
## # ℹ 18 more variables: UniqueID <chr>, Measure <chr>, Data_Value_Unit <chr>,
## # DataValueTypeID <chr>, Data_Value_Type <chr>, Data_Value <dbl>,
## # Low_Confidence_Limit <dbl>, High_Confidence_Limit <dbl>,
## # Data_Value_Footnote_Symbol <chr>, Data_Value_Footnote <chr>,
## # PopulationCount <dbl>, lat <dbl>, long <dbl>, CategoryID <chr>,
## # MeasureId <chr>, CityFIPS <dbl>, TractFIPS <dbl>, Short_Question_Text <chr>
What variables are included? (can any of them be removed?)
names(latlong_clean)
## [1] "Year" "StateAbbr"
## [3] "StateDesc" "CityName"
## [5] "GeographicLevel" "DataSource"
## [7] "Category" "UniqueID"
## [9] "Measure" "Data_Value_Unit"
## [11] "DataValueTypeID" "Data_Value_Type"
## [13] "Data_Value" "Low_Confidence_Limit"
## [15] "High_Confidence_Limit" "Data_Value_Footnote_Symbol"
## [17] "Data_Value_Footnote" "PopulationCount"
## [19] "lat" "long"
## [21] "CategoryID" "MeasureId"
## [23] "CityFIPS" "TractFIPS"
## [25] "Short_Question_Text"
Remove the variables that will not be used in the assignment
prevention <- latlong_clean |>
  select(-DataSource, -Data_Value_Unit, -DataValueTypeID, -Low_Confidence_Limit,
         -High_Confidence_Limit, -Data_Value_Footnote_Symbol, -Data_Value_Footnote)
head(prevention)
## # A tibble: 6 × 18
## Year StateAbbr StateDesc CityName GeographicLevel Category UniqueID Measure
## <dbl> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
## 1 2017 AL Alabama Montgome… City Prevent… 151000 Choles…
## 2 2017 CA California Concord City Prevent… 616000 Visits…
## 3 2017 CA California Concord City Prevent… 616000 Choles…
## 4 2017 CA California Fontana City Prevent… 624680 Visits…
## 5 2017 CA California Richmond Census Tract Prevent… 0660620… Choles…
## 6 2017 FL Florida Davie Census Tract Prevent… 1216475… Choles…
## # ℹ 10 more variables: Data_Value_Type <chr>, Data_Value <dbl>,
## # PopulationCount <dbl>, lat <dbl>, long <dbl>, CategoryID <chr>,
## # MeasureId <chr>, CityFIPS <dbl>, TractFIPS <dbl>, Short_Question_Text <chr>
md <- prevention |>
  filter(StateAbbr == "MD")
dim(md)
## [1] 804 18
# same approach as the Maryland filter above, but for the four most populous states instead
most_populated <- prevention |>
filter(StateAbbr %in% c("CA", "FL", "TX", "NY"))
dim(most_populated)
## [1] 50041 18
samp <- sample_n(most_populated, 900, replace = FALSE)  # sample without replacement so the 900 rows are distinct observations
head(samp)
## # A tibble: 6 × 18
## Year StateAbbr StateDesc CityName GeographicLevel Category UniqueID Measure
## <dbl> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
## 1 2017 CA California South Ga… Census Tract Prevent… 0673080… "Visit…
## 2 2017 CA California Carlsbad Census Tract Prevent… 0611194… "Curre…
## 3 2017 CA California Corona Census Tract Prevent… 0616350… "Curre…
## 4 2017 CA California Los Ange… Census Tract Prevent… 0644000… "Chole…
## 5 2017 CA California Los Ange… Census Tract Prevent… 0644000… "Curre…
## 6 2017 CA California Santa Cl… Census Tract Prevent… 0669084… "Curre…
## # ℹ 10 more variables: Data_Value_Type <chr>, Data_Value <dbl>,
## # PopulationCount <dbl>, lat <dbl>, long <dbl>, CategoryID <chr>,
## # MeasureId <chr>, CityFIPS <dbl>, TractFIPS <dbl>, Short_Question_Text <chr>
I initially misunderstood the instructions and drew a random sample before doing it correctly by filtering for just New Mexico, but I kept the code because I liked the graphs it produced and I believe I was told to keep them (though I'm not certain).
nm <- prevention |>
  filter(StateAbbr == "NM")
dim(nm)
## [1] 884 18
unique(nm$Measure)
## [1] "Cholesterol screening among adults aged >=18 Years"
## [2] "Current lack of health insurance among adults aged 18\x9664 Years"
## [3] "Visits to doctor for routine checkup within the past Year among adults aged >=18 Years"
## [4] "Taking medicine for high blood pressure control among adults aged >=18 Years with high blood pressure"
options(scipen = 999)
plot1 <- ggplot(samp, aes(MeasureId, fill = MeasureId)) +
  geom_bar() +
  scale_fill_manual(values = c("#a21e1a", "#a2681a", "#1aa21c", "#1a94a2"),
                    labels = c("Lack Health Insurance", "Taking blood pressure meds", "Check Ups", "Cholesterol Screenings"),
                    name = "Measure") +
  facet_wrap(~StateAbbr) +
  theme(axis.text.x = element_blank()) +
  labs(x = "Measure")
plot1
plot2 <- ggplot(nm, aes(MeasureId, fill = MeasureId)) +
  geom_bar() +
  scale_fill_manual(values = c("#a21e1a", "#a2681a", "#1aa21c", "#1a94a2"),
                    labels = c("Lack Health Insurance", "Taking blood pressure meds", "Check Ups", "Cholesterol Screenings"),
                    name = "Measure") +
  labs(x = "Measure") +
  theme(axis.text.x = element_blank())
plot2
library(ggridges)
## Warning: package 'ggridges' was built under R version 4.4.2
plot3 <- ggplot(nm, aes(MeasureId, CityName, fill = MeasureId)) +
  geom_density_ridges(alpha = 0.5) +
  scale_fill_manual(values = c("#a21e1a", "#a2681a", "#1aa21c", "#1a94a2"),
                    labels = c("Lack Health Insurance", "Taking blood pressure meds", "Check Ups", "Cholesterol Screenings"),
                    name = "Measure") +
  theme(axis.text.x = element_blank()) +
  labs(x = "Measure")
plot3
## Picking joint bandwidth of 1.09
count(nm, MeasureId)
## # A tibble: 4 × 2
## MeasureId n
## <chr> <int>
## 1 ACCESS2 221
## 2 BPMED 221
## 3 CHECKUP 221
## 4 CHOLSCREEN 221
As demonstrated by the bar graph (not the faceted one) and the density ridges, in New Mexico (and in every state) the count for each option under Measure/MeasureId/Short_Question_Text is exactly the same. The faceted bar graphs aren't strictly relevant; I just liked them.
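A quick way to check that claim across every state in the prevention table is to count rows per state and measure and see how many distinct counts each state has; this is just a sketch of one possible check, using only columns already in the data:

# If every measure appears equally often within a state, distinct_counts should be 1 for all states
prevention |>
  count(StateAbbr, MeasureId) |>
  group_by(StateAbbr) |>
  summarize(distinct_counts = n_distinct(n)) |>
  arrange(desc(distinct_counts))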
library(leaflet)
library(sf)
library(tidyverse)
library(knitr)
nm_lat <- 34.9727
nm_long <- -105.0324  # longitude is negative in the western hemisphere
leaflet() |>
  setView(lng = nm_long, lat = nm_lat, zoom = 6) |>
  addProviderTiles("Esri.NatGeoWorldMap") |>
  addCircles(data = nm)
## Assuming "long" and "lat" are longitude and latitude, respectively
popupNm <- paste0(
  "<b>Measure: </b>", nm$Short_Question_Text, "<br>",
  "<b>Population Count: </b>", nm$PopulationCount, "<br>",
  "<b>Year: </b>", nm$Year, "<br>"
)
leaflet() |>
  setView(lng = nm_long, lat = nm_lat, zoom = 6) |>
  addProviderTiles("Esri.NatGeoWorldMap") |>
  addCircles(
    data = nm,
    radius = sqrt(nm$PopulationCount) * 3,  # scale radius by the square root of population so circle area grows roughly with population
    color = "#144f11",
    fillColor = "white",
    fillOpacity = 0.25,
    popup = popupNm
  )
## Assuming "long" and "lat" are longitude and latitude, respectively
As a side note, for some reason when I tried to use the variable Measure for the popup I would get the error "Error in gsub("</", "\u003c/", payload, fixed = TRUE): input string 1 is invalid UTF-8", but MeasureId and Short_Question_Text work. This is likely because some Measure values contain the mis-encoded byte visible in the output above ("18\x9664"), which is not valid UTF-8.
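One possible workaround is to re-declare the encoding of Measure before building the popup. This is only a sketch, assuming the stray byte is a Windows-1252 en dash; nm_utf8 and popupMeasure are illustrative names, not part of the assignment:

# Convert Measure to UTF-8 so leaflet's popup rendering accepts it
# (the Windows-1252 source encoding is a guess based on the \x96 byte)
nm_utf8 <- nm |>
  mutate(Measure = iconv(Measure, from = "WINDOWS-1252", to = "UTF-8"))

popupMeasure <- paste0("<b>Measure: </b>", nm_utf8$Measure, "<br>")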
My visualizations show that the count for each element under Measure/MeasureId/Short_Question_Text (they all carry the same information with different wording) is equal in every state. That doesn't necessarily reflect real life; it's just how the data was entered. Unfortunately this also means it will be harder to find patterns in what the measures describe (I don't fully understand what the variable represents, but that's my guess), since the counts are all equal.
My maps show that the data from New Mexico is clustered around Albuquerque, Las Cruces, Rio Rancho, and Santa Fe. Generally the population is similar for each data point, but occasionally there is a significantly bigger point, probably due to differences in how densely populated different areas are. The map doesn't show any correlation between the Measure and the Population; it seems mostly random.
Overall my findings are quite boring. Whoever collected the data took it from some of the most populated cities in New Mexico: Albuquerque, Santa Fe, Rio Rancho, and Las Cruces, though a large majority of the data came from Albuquerque. The data they collected has equal amounts of each element under Measure, meaning there are 221 counts each for cholesterol screening, blood pressure medication, annual check-ups, and so on. Finally, looking at the map, there doesn't appear to be any correlation between the population and the measure.
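A quick tally of rows per city shows how the New Mexico observations are distributed; this is just a supporting check using the CityName column already in nm:

# Rows per city in the New Mexico subset, largest first
nm |>
  count(CityName, sort = TRUE)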
count(nm, Measure)
## # A tibble: 4 × 2
## Measure n
## <chr> <int>
## 1 "Cholesterol screening among adults aged >=18 Years" 221
## 2 "Current lack of health insurance among adults aged 18\x9664 Years" 221
## 3 "Taking medicine for high blood pressure control among adults aged >=18… 221
## 4 "Visits to doctor for routine checkup within the past Year among adults… 221