Predictive policing is a multi-dimensional optimization problem in which law enforcement agencies try to efficiently allocate a scarce resource to minimize instances of crime over time and across geographies. But how do we optimize? This is precisely what we address in this assignment using real, publicly available crime data from Chicago. Using statistics and computer-aided techniques, we try to devise a solution to this optimization problem. In addressing a real-world problem, our primary focus is on the fundamentals of the data rather than on acrobatics with techniques.
Crime analysis involves looking at the data from two different dimensions: spatial and temporal. The spatial dimension involves observing the characteristics of a particular region along with its neighbors. The temporal dimension involves observing the characteristics of a particular region over time. The question then is how far away from the epicenter we look for a similar pattern, and how far back in time from the date of the event we go to capture the trend. Ideally we would like to have as much data as possible. Often, in reality, we don't, and that makes data science a creative process: a process bound by mathematical logic and centered on statistical validity.
Crime data are not easy to deal with. With both spatial and temporal attributes, processing them can be a challenging task. The challenge is not limited to handling spatial and temporal data but extends to deriving information from them at these levels. Any predictive model for crime will have to have these two dimensions attached to it, and to make an effort toward effective predictive policing strategies, this inherent structure of the data needs to be leveraged.
For this exercise, we use crime data for the city of Chicago, which are available from 2001 onwards on the city's open data portal at https://data.cityofchicago.org/. To keep the analysis manageable, we use only the past one year of data from the current date.
R can read files and data tables directly from the web: we simply pass a URL instead of a file name to the read_csv() function. We can access the Chicago crime data from the following URL: https://data.cityofchicago.org/api/views/x2n5-8w5q/rows.csv?accessType=DOWNLOAD
#install.packages("tidyverse")
library(tidyverse)
# Download one year of crime data from the open data portal of city of Chicago
# NOTE: This may take a while depending on the strength of your internet connection
# First I ran read_csv() to find the default col_types() then I updated them to this:
type=cols( `CASE#` = col_character(),
`DATE OF OCCURRENCE` = col_datetime(format="%m/%d/%Y %I:%M:%S %p"),
BLOCK = col_factor(),
IUCR = col_factor(),
`PRIMARY DESCRIPTION` = col_factor(),
`SECONDARY DESCRIPTION` = col_factor(),
`LOCATION DESCRIPTION` = col_factor(),
ARREST = col_factor(),
DOMESTIC = col_factor(),
BEAT = col_factor(),
WARD = col_factor(),
`FBI CD` = col_factor(),
`X COORDINATE` = col_double(),
`Y COORDINATE` = col_double(),
LATITUDE = col_double(),
LONGITUDE = col_double(),
LOCATION = col_character()
)
# Specify download url
url.data <- "https://data.cityofchicago.org/api/views/x2n5-8w5q/rows.csv?accessType=DOWNLOAD"
# Read in data
crime_raw <- read_csv(url.data, na='',col_types = type)
# Fix column names
names(crime_raw)<-str_to_lower(names(crime_raw)) %>%
str_replace_all(" ","_") %>%
str_replace_all("__","_") %>%
str_replace_all("#","_num")
Before we start playing with the data, it is important to understand how they are organized, what fields are present in the table, and how they are stored. Since the data are stored as a tibble, we can investigate their internal structure easily.
# Print out the tibble
crime_raw
# Understanding the data fields
str(crime_raw)
## spec_tbl_df [211,377 × 17] (S3: spec_tbl_df/tbl_df/tbl/data.frame)
## $ case_num : chr [1:211377] "JF156487" "JE362576" "JE364188" "JE364232" ...
## $ date_of_occurrence : POSIXct[1:211377], format: "2021-12-01 00:01:00" "2021-09-05 14:47:00" ...
## $ block : Factor w/ 26976 levels "056XX N SPAULDING AVE",..: 1 2 3 4 5 6 7 8 9 10 ...
## $ iucr : Factor w/ 301 levels "2820","1320",..: 1 2 3 4 5 6 4 7 8 9 ...
## $ primary_description : Factor w/ 31 levels "OTHER OFFENSE",..: 1 2 3 2 4 5 2 6 5 7 ...
## $ secondary_description: Factor w/ 280 levels "TELEPHONE THREAT",..: 1 2 3 4 5 3 4 6 7 8 ...
## $ location_description : Factor w/ 130 levels "APARTMENT","STREET",..: 1 2 3 4 5 3 1 6 2 3 ...
## $ arrest : Factor w/ 2 levels "N","Y": 1 1 2 1 1 1 1 1 1 2 ...
## $ domestic : Factor w/ 2 levels "N","Y": 1 2 1 1 1 1 1 1 1 1 ...
## $ beat : Factor w/ 274 levels "1711","511","1123",..: 1 2 3 4 5 6 7 8 9 10 ...
## $ ward : Factor w/ 51 levels "39","9","28",..: 1 2 3 4 5 6 7 1 8 9 ...
## $ fbi_cd : Factor w/ 26 levels "08A","14","06",..: 1 2 1 2 3 4 2 5 6 7 ...
## $ x_coordinate : num [1:211377] NA 1181052 1154061 1172637 1100317 ...
## $ y_coordinate : num [1:211377] NA 1837198 1900783 1863365 1935229 ...
## $ latitude : num [1:211377] NA 41.7 41.9 41.8 42 ...
## $ longitude : num [1:211377] NA -87.6 -87.7 -87.6 -87.9 ...
## $ location : chr [1:211377] NA "(41.708514886, -87.61258026)" "(41.883578698, -87.709734846)" "(41.780509717, -87.642627358)" ...
## - attr(*, "spec")=
## .. cols(
## .. `CASE#` = col_character(),
## .. `DATE OF OCCURRENCE` = col_datetime(format = "%m/%d/%Y %I:%M:%S %p"),
## .. BLOCK = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
## .. IUCR = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
## .. `PRIMARY DESCRIPTION` = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
## .. `SECONDARY DESCRIPTION` = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
## .. `LOCATION DESCRIPTION` = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
## .. ARREST = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
## .. DOMESTIC = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
## .. BEAT = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
## .. WARD = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
## .. `FBI CD` = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
## .. `X COORDINATE` = col_double(),
## .. `Y COORDINATE` = col_double(),
## .. LATITUDE = col_double(),
## .. LONGITUDE = col_double(),
## .. LOCATION = col_character()
## .. )
## - attr(*, "problems")=<externalptr>
# Summarize the data
summary(crime_raw)
## case_num date_of_occurrence
## Length:211377 Min. :2021-05-03 05:09:00
## Class :character 1st Qu.:2021-07-30 12:30:00
## Mode :character Median :2021-10-23 16:00:00
## Mean :2021-10-28 17:08:26
## 3rd Qu.:2022-01-28 07:00:00
## Max. :2022-05-02 23:49:00
##
## block iucr
## 0000X W TERMINAL ST : 530 0486 : 19316
## 001XX N STATE ST : 448 0820 : 16568
## 003XX E RANDOLPH ST : 316 0810 : 14849
## 100XX W OHARE ST : 245 0560 : 12928
## 0000X N STATE ST : 223 0460 : 12650
## 064XX S DR MARTIN LUTHER KING JR DR: 222 1310 : 12554
## (Other) :209393 (Other):122512
## primary_description secondary_description
## THEFT :44812 SIMPLE : 25826
## BATTERY :41206 DOMESTIC BATTERY SIMPLE: 19316
## CRIMINAL DAMAGE :25198 $500 AND UNDER : 16568
## ASSAULT :20469 OVER $500 : 14849
## OTHER OFFENSE :14212 TO PROPERTY : 12554
## DECEPTIVE PRACTICE:14120 TO VEHICLE : 12237
## (Other) :51360 (Other) :110027
## location_description arrest domestic
## STREET :53563 N:187261 N:167280
## APARTMENT :44257 Y: 24116 Y: 44097
## RESIDENCE :29444
## SIDEWALK :11773
## PARKING LOT / GARAGE (NON RESIDENTIAL): 7075
## SMALL RETAIL STORE : 6279
## (Other) :58986
## beat ward fbi_cd x_coordinate
## 1834 : 2501 42 : 10592 06 :44812 Min. :1091242
## 421 : 1901 27 : 9709 08B :33485 1st Qu.:1153730
## 624 : 1728 28 : 9481 14 :25198 Median :1167213
## 1831 : 1634 6 : 7950 08A :17410 Mean :1165318
## 123 : 1577 24 : 7579 26 :14445 3rd Qu.:1176822
## 511 : 1567 8 : 7001 11 :12802 Max. :1205119
## (Other):200469 (Other):159065 (Other):63225 NA's :3826
## y_coordinate latitude longitude location
## Min. :1813909 Min. :41.65 Min. :-87.94 Length:211377
## 1st Qu.:1858662 1st Qu.:41.77 1st Qu.:-87.71 Class :character
## Median :1892058 Median :41.86 Median :-87.66 Mode :character
## Mean :1886231 Mean :41.84 Mean :-87.67
## 3rd Qu.:1909299 3rd Qu.:41.91 3rd Qu.:-87.63
## Max. :1951499 Max. :42.02 Max. :-87.53
## NA's :3826 NA's :3826 NA's :3826
# Identify duplicate identifiers
crime_raw %>%
group_by(case_num) %>%
mutate(count=n()) %>%
filter(count>1)
The data are stored at a crime incident level; that is, there is one observation for each crime incident in the data table. Each incident has a unique identifier associated with it, which is stored in the case_num variable. By definition, then, case_num should have all unique values; however, we see that some instances are duplicated, i.e., there are two or more rows which have the same case_num value. For example, there are multiple rows in the data with a case_num value equal to "JC438604".
# View the duplicated records side by side, one variable per row
dups <- t(filter(crime_raw, case_num == 'JC438604')) # transpose so variables become rows
colnames(dups) <- str_c("Row", seq_len(ncol(dups)))
# Add variable names and reorganize the duplicates for display
as_tibble(dups, rownames = "Variable") %>%
  select(Variable, Row1, Row2)
These duplicated rows need to be removed. Since the differences exist in only one variable and the difference is minor, these duplications are likely a recording error. We can exclude the duplicated case_num's with the distinct() function from dplyr, using .keep_all = TRUE to retain all columns.
# Remove duplicates, keeping the first row for each case_num
crime_no_dup <- distinct(crime_raw, case_num, .keep_all = TRUE)
# Check to make sure
crime_no_dup %>%
group_by(case_num) %>%
summarize(count=n()) %>%
filter(count>1)
The date_of_occurrence variable gives an approximate date and time stamp for when the crime incident might have happened. On a first pass this variable was read in as a character, but I used the col_datetime() specification with the correct format so that R recognizes it as a date-time.
crime_no_dup %>%
select(date_of_occurrence) %>%
head()
crime_no_dup %>%
select(date_of_occurrence) %>%
tail()
The timezone for date_of_occurrence should be America/Chicago, even though the timestamps are stored in R as Coordinated Universal Time (UTC). The time zone is actually unnecessary here, since all of the crimes occurred in the same time zone, so I'm going to leave it as UTC for this analysis. Moreover, forcing the time zone to America/Chicago coerces four date_of_occurrence values to NA, because clock times between 2:00 and 3:00 AM on March 13, 2022 do not exist (thank you, daylight saving time). As crimes were definitely committed then, those records should remain in the data set.
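As a standalone illustration of that DST behavior (not run on the data): forcing a clock time from the skipped hour into America/Chicago returns NA by default.
library(lubridate)
# 2:00-3:00 AM did not exist in Chicago on March 13, 2022
force_tz(ymd_hms("2022-03-13 02:30:00"), tzone = "America/Chicago")
## [1] NA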
R now understands that the data stored in the date_of_occurrence column are a date and time stamp. Processing the data a bit further, we can separate the time stamps from the date part using functions from the lubridate library.
The frequency of crimes is probably not consistent throughout the day. There could be certain time intervals of the day where criminal activity is more prevalent than in others. To check this, we can bucket the timestamps into a few categories and then look at the distribution across the buckets. As an example, we create four 6-hour time windows beginning at midnight. The four time intervals are midnight to 6 AM, 6 AM to noon, noon to 6 PM, and 6 PM to midnight.
For bucketing, we first create bins using the four time intervals mentioned above. Once the bins are created, the next step is to match each timestamp in the data to one of these time intervals; this can be done using the cut() function.
# Separate the date and the time of day from the datetime
library(lubridate)
crime_clean <- crime_no_dup %>%
  mutate(time = hms::as_hms(hour(date_of_occurrence)*3600 + minute(date_of_occurrence)*60), # Time of day in seconds past midnight
         date = date(date_of_occurrence), # Date part of the datetime
         time_group = cut(as.numeric(time),
                          breaks = c(0, 6, 12, 18, 24)*3600, # 6-hour buckets, in seconds
                          labels = c("00-06","06-12","12-18","18-00"),
                          include.lowest = TRUE))
crime_clean %>% select(case_num, date_of_occurrence, date, time, time_group)
crime_clean %>% group_by(time_group) %>% summarize(count=n())
The distribution of crime incidents across the day suggests that crimes are more frequent during the latter half of the day.
One of the core aspects of data mining is deriving increasingly more information from the limited data that we have. We will see a few examples of what we mean by this as we go along. Let's start with something simple and intuitive.
We can use the date of incidence to determine which day of the week and which month of the year the crime occurred. It is possible that there is a pattern in the way crimes occur (or are committed) depending on the day of the week and month.
crime_clean <- crime_clean %>%
mutate(
day=wday(date,label=TRUE,abbr=TRUE),
month=month(date,label=TRUE,abbr=TRUE)
)
crime_clean %>% select(case_num, date_of_occurrence, day, month)
There are two fields in the data which describe the crime incident. The first, the primary description, provides a broad category of the crime type; the second provides more detailed information about the first. We use the primary description to categorize different crime types.
# Specific crime types
(crime_counts <- crime_clean %>%
  group_by(primary_description) %>%
  summarize(count = n()) %>%
  arrange(desc(count)))
The data contain 31 crime types, not all of which are mutually exclusive. We can combine two or more similar categories into one to reduce this number and make the analysis a bit more manageable.
# Some categories can be combined to reduce this number
crime_clean<-crime_clean %>%
mutate(
crime=fct_recode(primary_description,
"DAMAGE"="CRIMINAL DAMAGE",
"DRUG"="NARCOTICS",
"DRUG"="OTHER NARCOTIC VIOLATION",
"FRAUD"="DECEPTIVE PRACTICE",
"MVT"="MOTOR VEHICLE THEFT",
"NONVIOLENT"="LIQUOR LAW VIOLATION",
"NONVIOLENT"="CONCEALED CARRY LICENSE VIOLATION",
"NONVIOLENT"="STALKING",
"NONVIOLENT"="INTIMIDATION",
"NONVIOLENT"="GAMBLING",
"NONVIOLENT"="OBSCENITY",
"NONVIOLENT"="PUBLIC INDECENCY",
"NONVIOLENT"="INTERFERENCE WITH PUBLIC OFFICER",
"NONVIOLENT"="PUBLIC PEACE VIOLATION",
"NONVIOLENT"="NON-CRIMINAL",
"OTHER"="OTHER OFFENSE",
"SEX"="HUMAN TRAFFICKING",
"SEX"="CRIMINAL SEXUAL ASSAULT",
"SEX"="SEX OFFENSE",
"SEX"="CRIM SEXUAL ASSAULT",
"SEX"="PROSTITUTION",
"TRESSPASS"="CRIMINAL TRESPASS",
"VIOLENT"="KIDNAPPING",
"VIOLENT"="WEAPONS VIOLATION",
"VIOLENT"="OFFENSE INVOLVING CHILDREN"
),
crime_type=fct_recode(crime,
"VIOLENT"="SEX",
"VIOLENT"="ARSON",
"VIOLENT"="ASSAULT",
"VIOLENT"="HOMICIDE",
"VIOLENT"="VIOLENT",
"VIOLENT"="BATTERY",
"NONVIOLENT"="BURGLARY",
"NONVIOLENT"="DAMAGE",
"NONVIOLENT"="DRUG",
"NONVIOLENT"="FRAUD",
"NONVIOLENT"="MVT",
"NONVIOLENT"="NONVIOLENT",
"NONVIOLENT"="ROBBERY",
"NONVIOLENT"="THEFT",
"NONVIOLENT"="TRESSPASS",
"NONVIOLENT"="OTHER"
) # Further combination into violent and non-violent crime types
)
crime_clean %>%
group_by(crime) %>%
summarize(count=n()) %>%
arrange(desc(count))
crime_clean %>%
group_by(crime_type) %>%
summarize(count=n()) %>%
arrange(count)
With a couple of basic variables in place, we can start with a few visualizations to see how, when, and where the crime incidents are occurring.
Visualizing data is a powerful way to derive high-level insights
about the underlying patterns in the data. Visualizations provide
helpful clues as to where we need to investigate further. To see a few
examples, we start with some simple plots of variables we processed in
the previous section using the powerful ggplot2
library.
# Frequency of crime
library(scales)
crime_clean %>%
group_by(crime) %>%
summarise(count=n()) %>%
ggplot(aes(x = reorder(crime,count), y = count)) +
geom_bar(stat = "identity", fill = "#756bb1") +
labs(x ="Crimes", y = "Number of crimes", title = "Crimes in Chicago") +
scale_y_continuous(label = comma) +
coord_flip()
The prevalence of different crimes is far from evenly distributed in Chicago, with theft and battery being much more frequent than the rest. It would be interesting to look at how crimes are distributed with respect to time of day, day of week, and month.
# Time of day
crime_clean %>%
ggplot(aes(x = time_group)) +
geom_bar(fill = "#756bb1") +
labs(x = "Time of day", y= "Number of crimes", title = "Crimes by time of day")
# Day of week
crime_clean %>%
ggplot(aes(x = day)) +
geom_bar(fill = "#756bb1") +
labs(x = "Day of week", y = "Number of crimes", title = "Crimes by day of week")
# Month
crime_clean %>%
ggplot(aes(x = month)) +
geom_bar(fill = "#756bb1") +
labs(x = "Month", y = "Number of crimes", title = "Crimes by month")
There does seem to be a pattern in the occurrence of crime with respect to time. The latter part of the day, Fridays, and the summer months witness more crime incidents, on average, than other corresponding time periods.
These plots show the combined distribution of all crime with respect
to different intervals of time. We can demonstrate the same plots with
additional information by splitting out the different crime types. For
example, we can see how different crimes vary by different times of the
day. To get the number of different crimes by time of day, we need to
aggregate the data at a crime – time group level. That is, four rows for
each crime type – one for each time interval of the day. An easy way to
aggregate data is to use the summarize function.
library(viridis)
library(scales)
crime_clean %>%
group_by(crime,time_group) %>%
summarise(count=n()) %>%
ggplot(aes(x=crime, y=time_group)) +
geom_tile(aes(fill=count)) +
labs(x="Crime", y = "Time of day", title="Theft occurs most often between noon and 6pm") +
scale_fill_viridis_c("Number of Crimes",label=comma) +
coord_flip()
A quick look at the heat map shows that most of the theft incidents
occur in the afternoon whereas drug related crimes are more prevalent in
the evening.
We can perform a similar analysis by day of week and month as well.
# Crimes by day of the week
crime_clean %>%
group_by(crime,day) %>%
summarise(count=n()) %>%
ggplot(aes(x=crime, y=day)) +
geom_tile(aes(fill=count)) +
labs(x="Crime", y = "Day of week", title="Battery is more prevelant on Sundays") +
scale_fill_viridis_c("Number of Crimes",label=comma) +
coord_flip()
# Crimes by month
crime_clean %>%
group_by(crime,month) %>%
summarise(count=n()) %>%
ggplot(aes(x=crime, y=month)) +
geom_tile(aes(fill=count)) +
labs(x="Crime", y = "Month of year", title="Summer is popular for crimes") +
scale_fill_viridis_c("Number of Crimes",label=comma) +
coord_flip()
So far we have only looked at the temporal distribution of crimes. But there is also a spatial element attached to them. Crimes vary considerably across geographies. Typically, within an area like a zip code, city, or county, there will be pockets or zones which observe higher criminal activity compared to the others. These zones are labeled as crime hot spots and are often the focus areas for effective predictive policing. We have the location of each crime incident in our data, which can be used to look for these spatial patterns in the city of Chicago. For this purpose, we will utilize the shapefiles for the Chicago Police Department's beats by processing them in R using the maptools library. The shapefiles for CPD beats can be downloaded from https://data.cityofchicago.org/Public-Safety/BoundariesPolice-Beats/kd6k-pxkv.
# Police beat shape files
library(maptools)
beat_shp<-readShapePoly("/Users/nicole/Desktop/PoliceBeats/PoliceBeat.shp")
plot(beat_shp)
If we plot the shapefile, we get the city of Chicago cut up into beats. We can plot our crime incidents by mapping them to the coordinates of the shapefile.
To add more value to our analysis, we can also plot the locations of the police stations in Chicago and see if there is any relation between crime hot spots and proximity to police stations. Shapefiles for CPD police stations can be downloaded from https://data.cityofchicago.org/Public-Safety/PoliceStations-Shapefiles/tc9m-x6u6.
# rgdal has been retired; sf is now the maintained way to read shapefiles.
# We coerce to an sp object so the @data/@polygons code below keeps working.
station_shp <- as(sf::st_read("/Users/nicole/Desktop/PoliceBeats/police_stations.shp"), "Spatial")
Unfortunately, we don't have the exact locations of the police stations in a ready format. They are embedded in the shapefile and need to be extracted. We have written a function to do just that: it takes a shapefile as input and returns the location (longitude and latitude) of each police station in Chicago as the output.
# Get police station information
getPoliceData <- function(shp_file){
  # labpt is each polygon's label point (its centroid): x = longitude, y = latitude
  tibble(
    description = shp_file@data$DESCRIPTIO,
    long = map_dbl(shp_file@polygons, ~ .x@labpt[1]),
    lat  = map_dbl(shp_file@polygons, ~ .x@labpt[2])
  )
}
police_data <- getPoliceData(station_shp)
Before we can plot the data on the map, we need to process them a bit further and get crime counts for each location. group_by() and summarise() from the dplyr library are convenient for this: group_by() splits the data frame into groups, summarise() applies a function to each group, and the result comes back as a data frame. This works like the aggregate() function in base R but is more flexible, versatile, and powerful.
# Aggregate data for plotting it on maps
(crime_agg <- crime_clean %>%
  group_by(crime, arrest, beat, date, x_coordinate, y_coordinate, time_group, day, month) %>%
  summarise(count = n()) %>%
  arrange(desc(count)))
We are going to utilize ggplot2’s extensive plotting capabilities for
this exercise as well. To plot the shape file using
ggplot(), we need to convert the data in the shape file
into a data frame using the fortify.SpatialPolygons()
function. The function fortify.SpatialPolygons() has been
written by Hadley Wickham and is
available on his GitHub page https://github.com/hadley/ggplot2/blob/master/R/fortify-spatial.r.
library(bigreadr)
#source("https://raw.githubusercontent.com/tidyverse/ggplot2/master/R/fortify-spatial.r")
fortify.SpatialPolygons <- function(model, data, ...) {
  # One fix to Hadley's source: rbind_dfs was replaced with bigreadr::rbind_df
  rbind_df(lapply(model@polygons, fortify))
}
beat_shp_df <- fortify.SpatialPolygons(beat_shp)
(crime_plot <- ggplot(beat_shp_df, aes(x = long, y = lat)) +
  geom_path(aes(group = group), col = "#969696") +
  theme_bw() +
  geom_point(data = police_data, aes(x = long, y = lat), col = "#de2d26", shape = 15, size = 2))
To see how the crimes in Chicago are spread out spatially on a given day, we take the first date in our data set and subset the data to observations from that date only. The coordinates were already read in as numeric with col_double(), so they can be plotted on the map directly.
# The cool ggplot way
# Crimes for one day: pull the first date in the data set
today <- crime_agg %>%
  ungroup() %>%
  slice(1) %>%
  pull(date)
(crime_plot <- crime_plot +
  geom_point(data = filter(crime_agg, date == today),
             aes(x = x_coordinate, y = y_coordinate, color = crime), alpha = 0.15) +
  theme_bw() +
  ggtitle(str_c("Crime in Chicago on ",
                wday(today, label = TRUE, abbr = FALSE), ", ",
                month(today, label = TRUE, abbr = FALSE), " ",
                day(today), ", ", year(today))) +
  theme(panel.grid.major = element_blank(),
        panel.grid.minor = element_blank(),
        panel.border = element_blank(),
        axis.title.x = element_blank(),
        axis.title.y = element_blank(),
        axis.text.x = element_blank(),
        axis.text.y = element_blank(),
        axis.ticks = element_blank()))
The red squares on the plot are the police stations, whereas the other dots represent different crimes. A quick look at the plot indicates that quite a few police stations are located in and around high-crime areas, except for the ones in the south-east corner, where crime incidents appear much less frequent. This plot, however, just shows the crime instances for one particular day. A better way to understand the pattern would be to look at these crimes over a larger time period, for example, a month or a year. However, plotting the data for the entire year on one plot would not be visually conclusive, since too many data points would overcrowd the limited space.
An effective alternative is to plot the crimes for each day and see how the pattern changes. We, however, don't want to run the code above for 365 days and then look at 365 different files. This is where R's powerful plotting capabilities come out in full glory. We can use the animation library to create a rolling-window plot that updates for each date, plots crimes for the entire year, and produces only one file as the output. We plot the crimes by taking the subset of the data for each date in the data set, plotting it, saving it, and repeating the process. We then use the saveGIF() function (formerly saveMovie()) in the animation package to convert the temporarily saved png files into a gif image. A visually more appealing plot along these lines has been made by Drew Conway and can be found on his blog.
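Below is a minimal sketch of that loop, assuming the beat_shp_df and crime_agg objects created above; the output file name is illustrative, and saveGIF() needs ImageMagick (or a similar converter) installed to assemble the frames.
# Sketch: one frame per date; every print()ed plot becomes a frame of the gif
library(animation)
saveGIF({
  for (d in sort(unique(crime_agg$date))) {
    d <- as.Date(d, origin = "1970-01-01") # for-loops drop the Date class
    p <- ggplot(beat_shp_df, aes(x = long, y = lat)) +
      geom_path(aes(group = group), col = "#969696") +
      geom_point(data = filter(crime_agg, date == d),
                 aes(x = x_coordinate, y = y_coordinate, color = crime),
                 alpha = 0.15) +
      ggtitle(format(d, "Crime in Chicago on %B %d, %Y")) +
      theme_bw()
    print(p)
  }
}, movie.name = "chicago_crime.gif", interval = 0.3)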
We have seen some interesting visualizations that helped us understand the data better and provided clues to potential predictor variables for the model. Visualizations like these, and many others, are a great resource for data explorers, as they guide us toward information hidden deep within large piles of data.
Before we get to our model, we need to process the data a bit more. Specifically, we need to determine at what level of the data we are going to build the model and what kinds of independent variables we can derive from the existing data set.
The data we have are at a crime incident level; that is, for each recorded crime incident in the city of Chicago over the past 12 months, we have the location, date, type, beat, and ward. The location is available at a very granular level, with exact geographical coordinates alongside the larger territorial identity of police beats. Such information is vital, but it needs to be understood that not all information can be directly transformed into an implementable solution. We cannot predict exactly when and where the next crime is going to happen. Data scientists don't make prophecies; they derive actionable insights in the form of statistical aggregates. Therefore, understanding the nature of the problem is critical for designing a workable solution within a given set of constraints.
Keeping the ultimate goal of predictive policing in mind, our modeling engine needs to be constructed for a reasonably sized predefined geographical area over a reasonably long predefined time interval. For example, the geographical area can be a block, a couple of blocks taken together, a zip code, a beat, etc. Similarly, the time interval can be an hour, a couple of hours, or a day.
For our purpose, we will build a multivariate regression model with a negative binomial distribution for errors at the beat-day level for all crimes taken together. There are many approaches that can be taken to develop crime prediction models. We chose a negative binomial model because of its simplicity, general acceptance and understanding, and ease of implementation. It is debatable whether building the model in this fashion is the right approach. We acknowledge that, in terms of accuracy, a better solution would be to deal with different crime types separately and generate predictions for smaller time periods. However, this is an indicative exercise which has been simplified for understanding and, at times, brevity. Along these lines, we try to explain how a solution can be designed through a well-suited approach without worrying too much about the exact details and metrics.
To create the modeling data set, we first create our base data set which has all possible unique combinations of beats and dates. The number of rows for the base data will be the product of the total number of unique beats and total number of unique dates.
crime_agg %>%
ungroup() %>%
summarize(count_beats=n_distinct(beat),
count_date=n_distinct(date))
(beats <- crime_agg %>%
ungroup() %>%
distinct(beat) %>%
arrange(beat))
(dates <- crime_agg %>%
ungroup() %>%
distinct(date) %>%
arrange(date))
The function expand_grid() will create the desired data frame with all unique combinations of beat and date. For convenience, we can sort the data frame by beat.
(temp <- expand_grid(beats, dates) %>%
arrange(beat))
The total number of crimes in a beat on a given day can be found by aggregating the data by beats and dates.
(model_data <- crime_agg %>%
  ungroup() %>%
  group_by(beat, date) %>%
  summarize(arrest = sum((arrest == "Y") * count, na.rm = TRUE), # count arrests explicitly rather than relying on factor level order
            count = sum(count, na.rm = TRUE)))
We then join the aggregated crime data to the temp data set we created to get our final modeling data set.
(model_data <- temp %>%
  left_join(model_data, by = c("beat", "date"))) # keep = TRUE would duplicate the keys as beat.x/beat.y and break the steps below
Our modeling data set now covers every beat-day of the past 12 months. During this period, there were beats in which no crime occurred (or none was recorded) on certain days. For these rows, we see NAs in the count and arrest fields. We can replace the NAs with zero to indicate that there were no crimes, and therefore no arrests, in these beats on these days.
(model_data <- model_data %>%
mutate(arrest=replace_na(arrest,0),
count=replace_na(count,0)))
We saw in the visualization exercise that crime incidents vary with respect to day of the week and month, suggesting that these can be used as predictor variables in the model. These variables can be created using the wday() and month() functions, respectively.
(model_data <- model_data %>%
  mutate(day = wday(date, abbr = TRUE, label = TRUE),
         month = month(date, abbr = TRUE, label = TRUE),
         day = as_factor(day),
         month = as_factor(month)))
A potentially important indicator of criminal activity in a particular area is the history of criminal activity in the past. We can calculate the crime history of each beat over different time periods. To calculate the crime histories, we will use the rolling sum function roll_sum() from the RcppRoll library. Its n argument lets us look back in time any number of days by changing a single value.
To calculate the number of crimes in the past seven days for each beat, we pass our data into mutate() to create a past_crime7 variable (along with a one-day lag, past_crime1), ensuring that the data are arranged by beat and date and grouped by beat. The roll_sum() function sums a window of rows whose width equals the argument supplied in n.
library(RcppRoll)
(model_data <- model_data %>%
  ungroup() %>%
  arrange(beat, date) %>%
  group_by(beat) %>%
  mutate(
    past_crime1 = lag(count), # yesterday's count
    # lag() shifts the rolling window back one day so the current day's
    # count does not leak into its own predictor
    past_crime7 = lag(roll_sum(count, n = 7, align = 'right', fill = NA))))
We can calculate the crimes for the past 30 days as well.
(model_data <- model_data %>%
  mutate(past_crime30 = lag(roll_sum(count, n = 30, align = 'right', fill = NA))))
If we look at the data, we can see that the past crime information will not be available for certain dates. For example, the first observation in our data is from May 3, 2021, which implies that the first value of past_crime30 can only be calculated 30 days later, from June 2, 2021 onwards. Since we have 274 beats in our data, we will have missing values in the first 30 rows of each beat, roughly 8,000 rows in all, or about 8% of our total observations. Simply removing these rows can lead to a loss of valuable information. To circumvent this barrier, we impute these missing observations with logical estimates from the data. One way to approach this is to take the mean of each of these variables and replace the missing cells with that value.
(model_data <- model_data %>%
mutate(
past_crime1 = if_else(is.na(past_crime1),mean(past_crime1,na.rm = TRUE),past_crime1),
past_crime7 = if_else(is.na(past_crime7),mean(past_crime7,na.rm = TRUE),past_crime7),
past_crime30 = if_else(is.na(past_crime30), mean(past_crime30,na.rm = TRUE),past_crime30))
)
We are not done with past crimes yet. There is more valuable information that we can extract from them and, typically, such will be the case in most data mining exercises we do. With a few base variables, we can tweak, twist, and turn them to look at the response from different perspectives.
We have information on the number of arrests made in each beat on each day. We can calculate similar past variables for arrests and see the effect of past police actions on future crimes. For example, we can calculate the number of arrests in the past 30 days for each beat.
(model_data <- model_data %>%
  mutate(
    past_arrest30 = lag(roll_sum(arrest, n = 30, align = 'right', fill = NA)), # Past 30 days, window ending yesterday
    past_arrest30 = if_else(is.na(past_arrest30), mean(past_arrest30, na.rm = TRUE), past_arrest30)))
Simply using the past arrests variable will not bring out the effect we are looking for, since it is highly correlated with the past crimes variable: beats that see more crime mechanically record more arrests.
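As a quick check, we can compute the correlation between the two 30-day histories directly:
# Correlation between the 30-day crime and arrest histories
with(model_data, cor(past_crime30, past_arrest30, use = "complete.obs"))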
To bring out the real effect of policing, we can normalize past arrests by past crimes and see if more arrests per crime in the past have deterred criminals from committing crimes in the future.
(model_data <- model_data %>%
mutate(policing=if_else(past_crime30 == 0, 0, past_arrest30/past_crime30)))
Another useful extension of the past crimes variables is the ratio of crimes in the past 7 days to crimes in the past 30 days, which tells us whether crime has been increasing or decreasing in the recent past, and lets us see what effect this has on future crimes.
(model_data <- model_data %>%
mutate(crime_trend = if_else(past_crime30 == 0, 0, past_crime7/past_crime30)))
Our visualization exercise showed that there is considerable variation in crime incidents with respect to the time of year. But we have only one year of data for both building the model and validating it. To use the month variable effectively, we can combine similar months to form a new variable that reflects how crime incidents change with the seasons.
(model_data <- model_data %>%
mutate(season=fct_recode(month,
"spring"="Mar",
"spring"="Apr",
"spring"="May",
"summer"="Jun",
"summer"="Jul",
"summer"="Aug",
"fall"="Sep",
"fall"="Oct",
"fall"="Nov",
"winter"="Dec",
"winter"="Jan",
"winter"="Feb"
),
season=as_factor(season)
)
)
We have created a number of predictor variables, including
interactions. The motivation behind deriving these was to get a set of
variables that explains a good amount of variation in the dependent
variable. One possible way to check this is the correlation between the
dependent variable and the independent variables. The
corrplot library has a convenient function that will allow
us to do just that.
model_data %>%
ungroup() %>%
select(count, past_crime1, past_crime7, past_crime30, policing, crime_trend, day, season)
library(corrplot)
M <- model_data %>%
ungroup() %>%
select(count, past_crime1, past_crime7, past_crime30, policing, crime_trend) %>%
cor()
corrplot.mixed(M, order = "hclust", number.cex = 0.9)
Though data mining is a creative process, it does have some rules. One of the most stringent is that the performance of any predictive model must be tested on a separate, out-of-sample dataset. Analysts and scientists who ignore this are bound to face the wrath of over-fitting, leading to poor results when the model is deployed.
In our case, to measure the performance of our model, we will use an out-of-time validation sample. We will divide our data into two portions: a 90% development sample on which we train, or develop, our model, and a 10% validation sample on which we test its performance. Because the split is out-of-time, we simply sort the data by date and take the first 90% of rows for training and the remaining 10% for testing, rather than partitioning at random with something like caret's createDataPartition().
library(modelr)
(model_dat <- model_data %>%
ungroup() %>%
mutate(season_summer=if_else(season=='summer',1,0),
season_winter=if_else(season=='winter',1,0),
season_spring=if_else(season=='spring',1,0),
season_fall=if_else(season=='fall',1,0)) %>%
arrange(date))
preds_x <- model_matrix(model_dat,count ~ past_crime1 + past_crime7 + past_crime30 + policing + crime_trend + day + season_spring + season_summer + season_winter + season_fall + I(past_crime30^2) - 1)
ids_mod<-model_dat %>%
select(beat,date,count)
(modeling_data<-bind_cols(ids_mod,preds_x) %>%
arrange(date) %>%
rename(past_crime_sq=`I(past_crime30^2)`)
)
inTrain<-c(1:floor(nrow(modeling_data)*0.90))
(training<-modeling_data[inTrain,])
(test<-modeling_data[-inTrain,])
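With the development and validation samples in place, we can sketch the model fit itself. The sketch below is illustrative rather than final: instead of the dummy-coded matrix, it hands the factor columns of model_data directly to glm.nb() from the MASS library, which fits a negative binomial regression, using the same 90/10 out-of-time split.
# A minimal sketch of the negative binomial model described earlier,
# fit on the first 90% of beat-days and scored on the remaining 10%
library(MASS) # note: MASS masks dplyr::select(); call dplyr::select() explicitly afterwards
md <- model_data %>% ungroup() %>% arrange(date)
train_rows <- 1:floor(nrow(md) * 0.90)
nb_fit <- glm.nb(count ~ past_crime1 + past_crime7 + past_crime30 + policing +
                   crime_trend + I(past_crime30^2) + day + season,
                 data = md[train_rows, ])
summary(nb_fit)
# Out-of-time predictions on the validation rows
nb_pred <- predict(nb_fit, newdata = md[-train_rows, ], type = "response")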