Healthy Cities GIS Assignment

Load the libraries and set the working directory

library(tidyverse)
library(tidyr)
library(leaflet)
library(sf)
library(tigris)
setwd("C:/Users/eyong/OneDrive - montgomerycollege.edu/Desktop/data110/DATASETS")
cities500 <- read_csv("500CitiesLocalHealthIndicators.cdc.csv")

Split GeoLocation (lat, long) into two columns: lat and long

The GeoLocation variable has (lat, long) format.

latlong <- cities500 |>
  mutate(GeoLocation = str_replace_all(GeoLocation, "[()]", "")) |>
  separate(GeoLocation, into = c("lat", "long"), sep = ",", convert = TRUE)
head(latlong)
# A tibble: 6 × 25
Year StateAbbr StateDesc CityName GeographicLevel DataSource Category
<dbl> <chr> <chr> <chr> <chr> <chr> <chr>
1 2017 CA California Hawthorne Census Tract BRFSS Health Outcom…
2 2017 CA California Hawthorne City BRFSS Unhealthy Beh…
3 2017 CA California Hayward City BRFSS Health Outcom…
4 2017 CA California Hayward City BRFSS Unhealthy Beh…
5 2017 CA California Hemet City BRFSS Prevention
6 2017 CA California Indio Census Tract BRFSS Health Outcom…
# ℹ 18 more variables: UniqueID <chr>, Measure <chr>, Data_Value_Unit <chr>,
# DataValueTypeID <chr>, Data_Value_Type <chr>, Data_Value <dbl>,
# Low_Confidence_Limit <dbl>, High_Confidence_Limit <dbl>,
# Data_Value_Footnote_Symbol <chr>, Data_Value_Footnote <chr>,
# PopulationCount <dbl>, lat <dbl>, long <dbl>, CategoryID <chr>,
# MeasureId <chr>, CityFIPS <dbl>, TractFIPS <dbl>, Short_Question_Text <chr>
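As an aside, newer tidyr versions (>= 1.3) offer separate_wider_delim() for the same split; a minimal sketch, assuming the delimiter is a comma followed by a space (latlong_alt is a hypothetical name):

# Alternative split using separate_wider_delim() (a sketch)
latlong_alt <- cities500 |>
  mutate(GeoLocation = str_replace_all(GeoLocation, "[()]", "")) |>
  separate_wider_delim(GeoLocation, delim = ", ", names = c("lat", "long")) |>
  mutate(across(c(lat, long), as.numeric))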
Filter the dataset
Remove rows where StateDesc is "United States", keep Prevention as the category of interest, filter for crude prevalence measures only, and keep only the year 2017.
latlong_clean <- latlong |>
  filter(StateDesc != "United States") |>
  filter(Category == "Prevention") |>
  filter(Data_Value_Type == "Crude prevalence") |>
  filter(Year == 2017)
head(latlong_clean)
# A tibble: 6 × 25
Year StateAbbr StateDesc CityName GeographicLevel DataSource Category
<dbl> <chr> <chr> <chr> <chr> <chr> <chr>
1 2017 AL Alabama Montgomery City BRFSS Prevention
2 2017 CA California Concord City BRFSS Prevention
3 2017 CA California Concord City BRFSS Prevention
4 2017 CA California Fontana City BRFSS Prevention
5 2017 CA California Richmond Census Tract BRFSS Prevention
6 2017 FL Florida Davie Census Tract BRFSS Prevention
# ℹ 18 more variables: UniqueID <chr>, Measure <chr>, Data_Value_Unit <chr>,
# DataValueTypeID <chr>, Data_Value_Type <chr>, Data_Value <dbl>,
# Low_Confidence_Limit <dbl>, High_Confidence_Limit <dbl>,
# Data_Value_Footnote_Symbol <chr>, Data_Value_Footnote <chr>,
# PopulationCount <dbl>, lat <dbl>, long <dbl>, CategoryID <chr>,
# MeasureId <chr>, CityFIPS <dbl>, TractFIPS <dbl>, Short_Question_Text <chr>
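To verify the filters behaved as intended, a quick tabulation (a sketch) should show a single combination of year, category, and value type:

# Sanity check: every remaining row should be 2017 / Prevention / Crude prevalence
latlong_clean |>
  count(Year, Category, Data_Value_Type)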
What variables are included? (can any of them be removed?)
names(latlong_clean)
[1] "Year" "StateAbbr"
[3] "StateDesc" "CityName"
[5] "GeographicLevel" "DataSource"
[7] "Category" "UniqueID"
[9] "Measure" "Data_Value_Unit"
[11] "DataValueTypeID" "Data_Value_Type"
[13] "Data_Value" "Low_Confidence_Limit"
[15] "High_Confidence_Limit" "Data_Value_Footnote_Symbol"
[17] "Data_Value_Footnote" "PopulationCount"
[19] "lat" "long"
[21] "CategoryID" "MeasureId"
[23] "CityFIPS" "TractFIPS"
[25] "Short_Question_Text"
Remove the variables that will not be used in the assignment
prevention <- latlong_clean |>
  select(-DataSource, -Data_Value_Unit, -DataValueTypeID, -Low_Confidence_Limit,
         -High_Confidence_Limit, -Data_Value_Footnote_Symbol, -Data_Value_Footnote)
head(prevention)
# A tibble: 6 × 18
Year StateAbbr StateDesc CityName GeographicLevel Category UniqueID Measure
<dbl> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
1 2017 AL Alabama Montgome… City Prevent… 151000 Choles…
2 2017 CA California Concord City Prevent… 616000 Visits…
3 2017 CA California Concord City Prevent… 616000 Choles…
4 2017 CA California Fontana City Prevent… 624680 Visits…
5 2017 CA California Richmond Census Tract Prevent… 0660620… Choles…
6 2017 FL Florida Davie Census Tract Prevent… 1216475… Choles…
# ℹ 10 more variables: Data_Value_Type <chr>, Data_Value <dbl>,
# PopulationCount <dbl>, lat <dbl>, long <dbl>, CategoryID <chr>,
# MeasureId <chr>, CityFIPS <dbl>, TractFIPS <dbl>, Short_Question_Text <chr>
md <- prevention |>
  filter(StateAbbr == "MD")
head(md)
# A tibble: 6 × 18
Year StateAbbr StateDesc CityName GeographicLevel Category UniqueID Measure
<dbl> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
1 2017 MD Maryland Baltimore Census Tract Preventi… 2404000… "Chole…
2 2017 MD Maryland Baltimore Census Tract Preventi… 2404000… "Visit…
3 2017 MD Maryland Baltimore Census Tract Preventi… 2404000… "Visit…
4 2017 MD Maryland Baltimore Census Tract Preventi… 2404000… "Curre…
5 2017 MD Maryland Baltimore Census Tract Preventi… 2404000… "Curre…
6 2017 MD Maryland Baltimore Census Tract Preventi… 2404000… "Visit…
# ℹ 10 more variables: Data_Value_Type <chr>, Data_Value <dbl>,
# PopulationCount <dbl>, lat <dbl>, long <dbl>, CategoryID <chr>,
# MeasureId <chr>, CityFIPS <dbl>, TractFIPS <dbl>, Short_Question_Text <chr>
unique(md$CityName)
[1] "Baltimore"
The new dataset prevention is now a manageable size.
For your assignment, work with this cleaned dataset.
1. Once you run the above code, filter this dataset one more time for any particular subset with no more than 900 observations.
Filter chunk here
Examining Ohio
oh <- prevention |>
  filter(StateAbbr == "OH")
head(oh)
# A tibble: 6 × 18
Year StateAbbr StateDesc CityName GeographicLevel Category UniqueID Measure
<dbl> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
1 2017 OH Ohio Canton City Preventi… 3912000 "Chole…
2 2017 OH Ohio Cleveland Census Tract Preventi… 3916000… "Curre…
3 2017 OH Ohio Cleveland Census Tract Preventi… 3916000… "Chole…
4 2017 OH Ohio Columbus Census Tract Preventi… 3918000… "Takin…
5 2017 OH Ohio Dayton Census Tract Preventi… 3921000… "Visit…
6 2017 OH Ohio Dayton Census Tract Preventi… 3921000… "Chole…
# ℹ 10 more variables: Data_Value_Type <chr>, Data_Value <dbl>,
# PopulationCount <dbl>, lat <dbl>, long <dbl>, CategoryID <chr>,
# MeasureId <chr>, CityFIPS <dbl>, TractFIPS <dbl>, Short_Question_Text <chr>
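The assignment caps the subset at 900 observations; a quick check (a sketch):

# Confirm the Ohio subset stays within the assignment's 900-row limit
nrow(oh)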
Medicine use for HBP
In this step, I filtered the Ohio data to the measure for adults aged 18 and older taking medicine for high blood pressure (HBP) control. I then added two columns: data_value_decimal, which converts Data_Value from a percentage to a decimal, and estimated_users, which multiplies that decimal by PopulationCount to estimate the number of medication users, as you will see below.
MHBP <- oh %>%
  filter(Measure == "Taking medicine for high blood pressure control among adults aged >=18 Years with high blood pressure") %>%
  mutate(
    data_value_decimal = Data_Value / 100,
    estimated_users = round(data_value_decimal * PopulationCount)
  ) %>%
  drop_na()
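A quick look at the derived columns (a sketch):

# Inspect the new decimal and estimated-user columns alongside the raw value
MHBP |>
  select(CityName, Data_Value, data_value_decimal, estimated_users) |>
  head()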
oh_tracts <- tracts(state = "OH", cb = TRUE) %>%
  st_transform(4326)
Retrieving data for the year 2022
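Note that oh_tracts is not used by the point maps below. If a tract-level choropleth were wanted instead, one hedged sketch of the join (it assumes the polygons' 11-digit GEOID matches TractFIPS once the latter is zero-padded to character; mhbp_tracts is a hypothetical name):

# Hypothetical tract-level join (a sketch; not used in the maps below)
mhbp_tracts <- oh_tracts %>%
  inner_join(
    MHBP %>% mutate(GEOID = sprintf("%011.0f", TractFIPS)),
    by = "GEOID"
  )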
2. Based on the GIS tutorial (Japan earthquakes), create one plot about something in your subsetted dataset.
First plot chunk here
Creating a bar graph of estimated HBP medication users by city
options(scipen = 999)
bar2 <- MHBP |>
  ggplot(aes(CityName, estimated_users)) +
  geom_col()
bar2
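With every city on the x-axis, the labels can overlap badly; a more readable variant (a sketch) totals the estimates per city, reorders the bars, and flips the axes:

# A tidier bar graph (a sketch): aggregate, reorder, and flip
MHBP |>
  group_by(CityName) |>
  summarise(estimated_users = sum(estimated_users)) |>
  ggplot(aes(x = reorder(CityName, estimated_users), y = estimated_users)) +
  geom_col() +
  coord_flip() +
  labs(x = "City", y = "Estimated HBP medication users")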
3. Now create a map of your subsetted dataset.
First map chunk here

Creating a map of HBP medication usage
leaflet(MHBP) %>%
  addProviderTiles(providers$CartoDB.Positron) %>%
  addCircleMarkers(
    ~long, ~lat,
    radius = sqrt(MHBP$PopulationCount / 100),
    color = "red",
    fillOpacity = 0.3
  )
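Because city-level rows have much larger populations than tract-level rows, the circle sizes vary widely; a variant (a sketch) that caps the radius with base R's pmin():

# Same map with marker radii capped at 15 pixels (a sketch)
leaflet(MHBP) %>%
  addProviderTiles(providers$CartoDB.Positron) %>%
  addCircleMarkers(
    ~long, ~lat,
    radius = ~pmin(sqrt(PopulationCount / 100), 15),
    color = "red",
    fillOpacity = 0.3
  )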
4. Refine your map to include a mouse-click tooltip
Refined map chunk here
pal <- colorNumeric(
  palette = "viridis",
  domain = MHBP$Data_Value
)
leaflet(MHBP) %>%
  addProviderTiles(providers$Esri.WorldStreetMap) %>%
  addCircleMarkers(
    ~long, ~lat,
    radius = sqrt(MHBP$PopulationCount / 100), # alternatively: ~sqrt(PopulationCount) / 100
    color = ~pal(Data_Value),
    popup = ~paste0(
      "<strong>", CityName, "</strong><br>",
      "City FIPS: ", CityFIPS, "<br>",
      "Population: ", scales::comma(PopulationCount), "<br>",
      "HBP Medicine Usage: ", round(Data_Value, 1), "%<br>",
      "Estimated Users: ", scales::comma(estimated_users)
    ),
    fillOpacity = 0.7
  ) %>%
  addLegend(
    position = "bottomright",
    pal = pal,
    values = ~Data_Value,
    title = "HBP Medicine Usage (%)",
    opacity = 1
  )
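leaflet also supports hover tooltips via the label argument; a minimal sketch, if hover text were preferred over click popups:

# Hover labels instead of click popups (a sketch)
leaflet(MHBP) %>%
  addProviderTiles(providers$Esri.WorldStreetMap) %>%
  addCircleMarkers(
    ~long, ~lat,
    color = ~pal(Data_Value),
    label = ~paste0(CityName, ": ", round(Data_Value, 1), "%")
  )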
5. Write a paragraph
In a paragraph, describe the plots you created and what they show.
Analysis
The aim of my research was to investigate the prevalence of high blood pressure (HBP) medication use in Ohio. Specifically, I focused on identifying the population aged 18 and older who are using HBP medication.
During my analysis, I found that the city of Columbus has the highest number of individuals using this medication. Since Columbus is also the most populous city in the subset, the raw count partly reflects population size, but it still suggests that a substantial portion of Columbus residents may be managing high blood pressure.
As part of the analysis, I created an interactive map that allows users to explore the data more deeply. By clicking on the map, you can view population data for specific cities in Ohio. For each city, you can see the total population, the number of people using HBP medication, and the percentage of the population that uses it.
Focusing on Columbus, the data shows that a large percentage of its residents use HBP medication. The same pattern appears across many Ohio cities, indicating that high blood pressure is a widespread health issue in the state.
Worries
My main challenge was working with the tract FIPS codes. While these codes are useful for representing geographic areas on the map, they appear only as numbers, which can be confusing. Locals might know what the numbers represent, but they didn't mean much to me. I wish the FIPS codes were paired with actual place names, as this would have made the analysis clearer and more intuitive.
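One possible workaround (a sketch, assuming 11-digit tract FIPS codes whose first five digits are the state and county codes): join county names from the fips_codes table that ships with tigris, so each tract gets a readable label. oh_counties, county_fips, and MHBP_named are hypothetical names introduced here.

# Attach county names to tract FIPS codes (a sketch)
oh_counties <- fips_codes %>%
  filter(state == "OH") %>%
  mutate(county_fips = paste0(state_code, county_code)) %>%
  select(county_fips, county)

MHBP_named <- MHBP %>%
  mutate(county_fips = substr(sprintf("%011.0f", TractFIPS), 1, 5)) %>%
  left_join(oh_counties, by = "county_fips")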