Problem definition

Predictive policing is a multi-dimensional optimization problem in which law enforcement agencies try to efficiently utilize a scarce resource to minimize instances of crime over time and across geographies. But how do we optimize? This is precisely what we answer in this assignment using real, publicly available crime data for Chicago. Using statistics and computer-aided technologies, we try to devise a solution to this optimization problem. In an attempt to address a real-world problem, our primary focus here is on the fundamentals of data rather than on acrobatics with techniques.

Crime analysis involves looking at the data from two different dimensions: spatial and temporal. The spatial dimension involves observing the characteristics of a particular region along with those of its neighbors. The temporal dimension involves observing the characteristics of a particular region over time. The question then is how far away from the epicenter we look for a similar pattern, and how far back in time from the date of the event we go to capture the trend. Ideally, we would like to have as much data as possible. In reality, we often don't, and that makes data science a creative process: a process bound by mathematical logic and centered on statistical validity.

Crime data are not easy to deal with. With both spatial and temporal attributes, processing them can be a challenging task. The challenge is not limited to handling spatial and temporal data but extends to deriving information from them at these levels. Any predictive model for crime will have to have these two dimensions attached to it, and to make an effort toward effective predictive policing strategies, this inherent structure of the data needs to be leveraged.

Data extraction

For this exercise, we use crime data for the city of Chicago, which are available from 2001 onwards on the city's open data portal at https://data.cityofchicago.org/. To keep the analysis manageable, we use only the past one year of data from the current date.

R can read files and data tables directly from the Web: we simply pass a URL instead of a file name to the read_csv() function. We can access the one-year extract of the Chicago crime data at the following url: https://data.cityofchicago.org/api/views/x2n5-8w5q/rows.csv?accessType=DOWNLOAD

library(tidyverse)
# Download one year of crime data from the open data portal of city of Chicago
# NOTE: This may take a while depending on the strength of your internet connection
# First I ran read_csv() to find the default column types, then I updated them to this:
type=cols( `CASE#` = col_character(),
           `DATE  OF OCCURRENCE` = col_datetime(format="%m/%d/%Y %I:%M:%S %p"),
           BLOCK = col_factor(),
           IUCR = col_factor(),
           `PRIMARY DESCRIPTION` = col_factor(),
           `SECONDARY DESCRIPTION` = col_factor(),
           `LOCATION DESCRIPTION` = col_factor(),
           ARREST = col_factor(),
           DOMESTIC = col_factor(),
           BEAT = col_factor(),
           WARD = col_factor(),
           `FBI CD` = col_factor(),
           `X COORDINATE` = col_double(),
           `Y COORDINATE` = col_double(),
           LATITUDE = col_double(),
           LONGITUDE = col_double(),
           LOCATION = col_character()
)

# Specify download url
url.data <- "https://data.cityofchicago.org/api/views/x2n5-8w5q/rows.csv?accessType=DOWNLOAD"

# Read in data
crime_raw <- read_csv(url.data, na='',col_types = type)

# Fix column names
names(crime_raw)<-str_to_lower(names(crime_raw)) %>%
  str_replace_all(" ","_") %>%
  str_replace_all("__","_") %>%
  str_replace_all("#","_num")

Data exploration and pre-processing

Before we start playing with the data, it is important to understand how the data are organized, what fields are present in the table, and how they are stored. Since the data are stored as a tibble, we can investigate their internal structure easily.

# Print out the tibble
crime_raw
# Understanding the data fields
str(crime_raw)
## spec_tbl_df [200,131 x 17] (S3: spec_tbl_df/tbl_df/tbl/data.frame)
##  $ case_num             : chr [1:200131] "JE145602" "JE173033" "JD220983" "JD390623" ...
##  $ date_of_occurrence   : POSIXct[1:200131], format: "2020-12-22 20:59:00" "2021-02-01 21:30:00" ...
##  $ block                : Factor w/ 26996 levels "004XX W 58TH ST",..: 1 2 3 4 5 6 7 8 9 10 ...
##  $ iucr                 : Factor w/ 307 levels "1153","0890",..: 1 2 3 4 4 5 6 7 2 8 ...
##  $ primary_description  : Factor w/ 32 levels "DECEPTIVE PRACTICE",..: 1 2 3 2 2 4 5 6 2 7 ...
##  $ secondary_description: Factor w/ 283 levels "FINANCIAL IDENTITY THEFT OVER $ 300",..: 1 2 3 4 4 5 6 7 2 8 ...
##  $ location_description : Factor w/ 131 levels "RESIDENCE","GROCERY FOOD STORE",..: 1 1 1 1 2 3 4 5 5 1 ...
##  $ arrest               : Factor w/ 2 levels "N","Y": 1 1 1 1 2 1 1 1 1 1 ...
##  $ domestic             : Factor w/ 2 levels "N","Y": 1 1 1 1 1 1 1 2 1 2 ...
##  $ beat                 : Factor w/ 274 levels "711","1833","822",..: 1 2 3 4 5 6 7 8 9 10 ...
##  $ ward                 : Factor w/ 50 levels "20","2","14",..: 1 2 3 4 5 6 6 7 8 9 ...
##  $ fbi_cd               : Factor w/ 26 levels "11","06","05",..: 1 2 3 2 2 4 5 6 2 7 ...
##  $ x_coordinate         : num [1:200131] NA NA 1154351 1173303 1169505 ...
##  $ y_coordinate         : num [1:200131] NA NA 1868931 1912547 1910252 ...
##  $ latitude             : num [1:200131] NA NA 41.8 41.9 41.9 ...
##  $ longitude            : num [1:200131] NA NA -87.7 -87.6 -87.7 ...
##  $ location             : chr [1:200131] NA NA "(41.796167139, -87.70951943)" "(41.915454333, -87.638727274)" ...
##  - attr(*, "spec")=
##   .. cols(
##   ..   `CASE#` = col_character(),
##   ..   `DATE  OF OCCURRENCE` = col_datetime(format = "%m/%d/%Y %I:%M:%S %p"),
##   ..   BLOCK = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
##   ..   IUCR = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
##   ..   `PRIMARY DESCRIPTION` = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
##   ..   `SECONDARY DESCRIPTION` = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
##   ..   `LOCATION DESCRIPTION` = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
##   ..   ARREST = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
##   ..   DOMESTIC = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
##   ..   BEAT = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
##   ..   WARD = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
##   ..   `FBI CD` = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
##   ..   `X COORDINATE` = col_double(),
##   ..   `Y COORDINATE` = col_double(),
##   ..   LATITUDE = col_double(),
##   ..   LONGITUDE = col_double(),
##   ..   LOCATION = col_character()
##   .. )
# Summarize the data
summary(crime_raw)
##    case_num         date_of_occurrence           
##  Length:200131      Min.   :2020-04-14 05:05:00  
##  Class :character   1st Qu.:2020-07-13 11:09:30  
##  Mode  :character   Median :2020-10-02 12:15:00  
##                     Mean   :2020-10-07 17:33:28  
##                     3rd Qu.:2020-12-31 17:16:30  
##                     Max.   :2021-04-13 23:50:00  
##                                                  
##                                  block             iucr       
##  001XX N STATE ST                   :   347   0486   : 19892  
##  0000X W TERMINAL ST                :   275   0820   : 16114  
##  064XX S DR MARTIN LUTHER KING JR DR:   215   1310   : 12312  
##  033XX W FILLMORE ST                :   176   0560   : 11312  
##  008XX N MICHIGAN AVE               :   171   1320   : 10894  
##  065XX S DR MARTIN LUTHER KING JR DR:   145   0460   : 10135  
##  (Other)                            :198802   (Other):119472  
##          primary_description             secondary_description
##  BATTERY           :39272    SIMPLE                 : 21601   
##  THEFT             :36121    DOMESTIC BATTERY SIMPLE: 19892   
##  CRIMINAL DAMAGE   :24460    $500 AND UNDER         : 16114   
##  DECEPTIVE PRACTICE:18639    TO PROPERTY            : 12312   
##  ASSAULT           :18091    TO VEHICLE             : 11432   
##  OTHER OFFENSE     :11895    OVER $500              :  9983   
##  (Other)           :51653    (Other)                :108797   
##                              location_description arrest     domestic  
##  STREET                                :49783     N:172153   N:159958  
##  APARTMENT                             :37534     Y: 27978   Y: 40173  
##  RESIDENCE                             :37247                          
##  SIDEWALK                              :11938                          
##  PARKING LOT / GARAGE (NON RESIDENTIAL): 5388                          
##  (Other)                               :57154                          
##  NA's                                  : 1087                          
##       beat             ward            fbi_cd       x_coordinate    
##  421    :  1812   28     : 10015   06     :36121   Min.   :1092706  
##  511    :  1707   27     :  8487   08B    :31362   1st Qu.:1152636  
##  624    :  1687   24     :  8158   14     :24460   Median :1166298  
##  1834   :  1603   6      :  8095   11     :17755   Mean   :1164711  
##  1112   :  1557   8      :  6953   08A    :14884   3rd Qu.:1176743  
##  631    :  1516   (Other):158413   26     :12689   Max.   :1205119  
##  (Other):190249   NA's   :    10   (Other):62860   NA's   :2090     
##   y_coordinate        latitude       longitude        location        
##  Min.   :1813897   Min.   :41.64   Min.   :-87.93   Length:200131     
##  1st Qu.:1857618   1st Qu.:41.76   1st Qu.:-87.72   Class :character  
##  Median :1889766   Median :41.85   Median :-87.67   Mode  :character  
##  Mean   :1884966   Mean   :41.84   Mean   :-87.67                     
##  3rd Qu.:1908717   3rd Qu.:41.91   3rd Qu.:-87.63                     
##  Max.   :1951527   Max.   :42.02   Max.   :-87.52                     
##  NA's   :2090      NA's   :2090    NA's   :2090
# Identify duplicate identifiers
crime_raw %>%
  group_by(case_num) %>%
  mutate(count=n()) %>%
  filter(count>1)

The data are stored at the crime incident level, that is, there is one observation for each crime incident in the data table. Each incident has a unique identifier associated with it, which is stored in the case_num variable. By definition, then, case_num should contain only unique values; however, we see that some instances are duplicated, i.e., there are two or more rows that share the same case_num value. For example, there are two rows in the data that have a case_num equal to "JC438604".

# Transpose the example duplicate so the two rows can be compared field by field
JC438604 <- crime_raw %>%
  filter(case_num == "JC438604") %>%
  t()
JC438604

These duplicated rows need to be removed. Since the two rows differ in only one variable, and the difference is minor, these duplications are likely a recording error. We can exclude the duplicated case_num values with the dplyr::distinct() function.

# Remove duplicates
crime_no_dup <- distinct(crime_raw, case_num, .keep_all = TRUE)

# Check to make sure no duplicates remain
crime_no_dup %>%
  group_by(case_num) %>%
  summarize(count=n()) %>%
  filter(count>1)

The date_of_occurrence variable gives an approximate date and time stamp for when the crime incident might have happened. This variable would have been read in as a character by default, but I used the col_datetime() designation with the correct format to make R recognize that it is in fact a date-time.

crime_no_dup %>% 
  select(date_of_occurrence) %>% 
  head()
crime_no_dup %>% 
  select(date_of_occurrence) %>% 
  tail()

The timezone for date_of_occurrence should be America/Chicago, even though R stores it as Coordinated Universal Time (UTC). The time zone is actually unnecessary here since all of the crimes occurred in the same time zone. As such, I'm going to leave the timezone as UTC for this analysis. Moreover, setting the time zone to America/Chicago coerces four date_of_occurrence values to NA, because clock times falling in the hour skipped at the spring transition don't exist in that time zone (thank you, daylight saving time). As crimes were certainly committed at those recorded times, those records should remain in the data set.
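To see this directly, here is a quick check, a minimal sketch using lubridate::force_tz() (the lubridate library is loaded in the next section):

# Identify clock times that do not exist on the America/Chicago clock
crime_no_dup %>%
  mutate(chicago_time = lubridate::force_tz(date_of_occurrence, tzone = "America/Chicago")) %>%
  filter(is.na(chicago_time)) %>%
  select(case_num, date_of_occurrence)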

R now understands that the data stored in the date_of_occurrence column is a date and time stamp. Processing the data a bit further, we can separate the time stamp from the date part using functions from the lubridate library.

The frequency of crimes is probably not consistent throughout the day; there could be certain time intervals where criminal activity is more prevalent than in others. To check this, we can bucket the timestamps into a few categories and then look at the distribution across the buckets. As an example, we create four 6-hour time windows beginning at midnight to bucket the time stamps. The four time intervals we get are midnight to 6 AM, 6 AM to noon, noon to 6 PM, and 6 PM to midnight.

For bucketing, we first create bins using the four time intervals mentioned above. Once the bins are created, the next step is to match each timestamp in the data to one of these time intervals; this can be done using the cut() function.

# Separate the time stamp and the date part from the datetime
library(lubridate)
crime_clean <- crime_no_dup %>%
  mutate(time = hms::as_hms(hour(date_of_occurrence) * 3600 + minute(date_of_occurrence) * 60), # time of day as hh:mm:ss
         date = date(date_of_occurrence), # date part of the datetime
         time_group = cut(as.numeric(time) / 60, # minutes since midnight
                          breaks = c(0, 6 * 60, 12 * 60, 18 * 60, 24 * 60),
                          labels = c("00-06", "06-12", "12-18", "18-00"),
                          include.lowest = TRUE)) # bucket into four 6-hour windows

crime_clean %>% select(case_num, date_of_occurrence, date, time, time_group)
crime_clean %>% group_by(time_group) %>% summarize(count=n())

The distribution of crime incidents across the day suggests that crimes are more frequent during the latter half of the day.

One of the core aspects of data mining is deriving increasingly more information from the limited data that we have. We will see a few examples of what we mean by this as we go along. Let's start with something simple and intuitive.

We can use the date of incidence to determine which day of the week and which month of the year the crime occurred. It is possible that there is a pattern in the way crimes occur (or are committed) depending on the day of the week and month.

crime_clean <- crime_clean %>%
         mutate(
           day=wday(date,label=TRUE,abbr=TRUE),
           month=month(date,label=TRUE,abbr=TRUE)
         )

crime_clean %>% select(case_num, date_of_occurrence, day, month)

There are two fields in the data that describe the crime incident. The first, primary_description, provides a broad category of the crime type; the second, secondary_description, provides more detailed information about it. We use the primary description to categorize the different crime types.

# Specific crime types (named crime_counts rather than `t`, which would mask base::t())
(crime_counts <- crime_clean %>% 
  group_by(primary_description) %>%
  summarize(count=n()) %>%
  arrange(desc(count)))

The data contain 32 crime types, not all of which are mutually exclusive. We can combine two or more similar categories into one to reduce this number and make the analysis a bit more manageable.

# Some categories can be combined to reduce this number
crime_clean<-crime_clean %>%
  mutate(
    crime=fct_recode(primary_description,
                     "DAMAGE"="CRIMINAL DAMAGE",
                     "DRUG"="NARCOTICS",
                     "DRUG"="OTHER NARCOTIC VIOLATION",
                     "FRAUD"="DECEPTIVE PRACTICE",
                     "MVT"="MOTOR VEHICLE THEFT",
                     "NONVIOLENT"="LIQUOR LAW VIOLATION",
                     "NONVIOLENT"="CONCEALED CARRY LICENSE VIOLATION",
                     "NONVIOLENT"="STALKING",
                     "NONVIOLENT"="INTIMIDATION",
                     "NONVIOLENT"="GAMBLING",
                     "NONVIOLENT"="OBSCENITY",
                     "NONVIOLENT"="PUBLIC INDECENCY",
                     "NONVIOLENT"="INTERFERENCE WITH PUBLIC OFFICER",
                     "NONVIOLENT"="PUBLIC PEACE VIOLATION",
                     "NONVIOLENT"="NON-CRIMINAL",
                     "OTHER"="OTHER OFFENSE",
                     "SEX"="HUMAN TRAFFICKING",
                     "SEX"="CRIMINAL SEXUAL ASSAULT",
                     "SEX"="SEX OFFENSE",
                     "SEX"="CRIM SEXUAL ASSAULT",
                     "SEX"="PROSTITUTION",
                     "TRESSPASS"="CRIMINAL TRESPASS",
                     "VIOLENT"="KIDNAPPING",
                     "VIOLENT"="WEAPONS VIOLATION",
                     "VIOLENT"="OFFENSE INVOLVING CHILDREN"
                     ),
    crime_type=fct_recode(crime,
                          "VIOLENT"="SEX",
                          "VIOLENT"="ARSON",
                          "VIOLENT"="ASSAULT",
                          "VIOLENT"="HOMICIDE",
                          "VIOLENT"="BATTERY",
                          "NONVIOLENT"="BURGLARY",
                          "NONVIOLENT"="DAMAGE",
                          "NONVIOLENT"="DRUG",
                          "NONVIOLENT"="FRAUD",
                          "NONVIOLENT"="MVT",
                          "NONVIOLENT"="ROBBERY",
                          "NONVIOLENT"="THEFT",
                          "NONVIOLENT"="TRESPASS",
                          "NONVIOLENT"="OTHER"
                          ) # Further combination into violent and non-violent types (unmentioned levels are kept as-is)
  )
crime_clean %>%
  group_by(crime) %>%
  summarize(count=n()) %>%
  arrange(desc(count))
crime_clean %>%
  group_by(crime_type) %>%
  summarize(count=n()) %>%
  arrange(count)

With a couple of basic variables in place, we can start with a few visualizations to see how, when, and where crime incidents are occurring.

Visualizations

Visualizing data is a powerful way to derive high-level insights about the underlying patterns in the data. Visualizations provide helpful clues as to where we need to investigate further. To see a few examples, we start with some simple plots of variables we processed in the previous section using the powerful ggplot2 library.

# Frequency of crime
library(scales)
crime_clean %>% 
  group_by(crime) %>%
  summarise(count=n()) %>%
  ggplot(aes(x = reorder(crime,count), y = count)) +
  geom_bar(stat = "identity", fill = "#756bb1") +
  labs(x ="Crimes", y = "Number of crimes", title = "Crimes in Chicago") + 
  scale_y_continuous(label = comma) +
  coord_flip()

The prevalence of different crimes is unevenly distributed across Chicago, with theft and battery being much more frequent than the other categories. It would be interesting to look at how crimes are distributed with respect to time of day, day of week, and month.

# Time of day
crime_clean %>%
  ggplot(aes(x = time_group)) +
  geom_bar(fill = "#756bb1") +
  labs(x = "Time of day", y= "Number of crimes", title = "Crimes by time of day")

# Day of week
crime_clean %>%
  ggplot(aes(x = day)) +
  geom_bar(fill = "#756bb1") +
  labs(x = "Day of week", y = "Number of crimes", title = "Crimes by day of week")

# Month
crime_clean %>%
  ggplot(aes(x = month)) +
  geom_bar(fill = "#756bb1") +
  labs(x = "Month", y = "Number of crimes", title = "Crimes by month")

There does seem to be a pattern in the occurrence of crime with respect to time: the latter part of the day, Fridays, and the summer months witness more crime incidents, on average, than the corresponding other time periods.

These plots show the combined distribution of all crimes with respect to different intervals of time. We can display the same plots with additional information by splitting out the different crime types. For example, we can see how different crimes vary by time of day. To get the number of different crimes by time of day, we need to aggregate the data at the crime by time-group level, that is, four rows for each crime type, one for each time interval of the day. An easy way to aggregate the data is with the summarize() function.

library(viridis)
library(scales)
crime_clean %>%
  group_by(crime,time_group) %>%
  summarise(count=n()) %>%
  ggplot(aes(x=crime, y=time_group)) +
  geom_tile(aes(fill=count)) +
  labs(x="Crime", y = "Time of day", title="Theft occurs most often between noon and 6pm") +
  scale_fill_viridis_c("Number of Crimes",label=comma) +
  coord_flip()

A quick look at the heat map shows that most of the theft incidents occur in the afternoon, whereas drug-related crimes are more prevalent in the evening.

We can perform a similar analysis by day of week and month as well.

# Crimes by day of the week
crime_clean %>%
  group_by(crime,day) %>%
  summarise(count=n()) %>%
  ggplot(aes(x=crime, y=day)) +
  geom_tile(aes(fill=count)) +
  labs(x="Crime", y = "Day of week", title="Battery is more prevelant on Sundays") +
  scale_fill_viridis_c("Number of Crimes",label=comma) +
  coord_flip()

# Crimes by month
crime_clean %>%
  group_by(crime,month) %>%
  summarise(count=n()) %>%
  ggplot(aes(x=crime, y=month)) +
  geom_tile(aes(fill=count)) +
  labs(x="Crime", y = "Month of year", title="Summer is popular for crimes") +
  scale_fill_viridis_c("Number of Crimes",label=comma) +
  coord_flip()

So far we have only looked at the temporal distribution of crimes, but there is also a spatial element attached to them. Crimes vary considerably across geographies. Typically, within an area like a zip code, city, or county, there will be pockets or zones that observe higher criminal activity than the others. These zones are labeled crime hot-spots and are often the focus areas for effective predictive policing. We have the location of each crime incident in our data, which can be used to look for these spatial patterns in the city of Chicago. For this purpose, we will utilize the shape files for the Chicago Police Department's beats, processing them in R using the maptools library. The shape files for CPD beats can be downloaded from https://data.cityofchicago.org/Public-Safety/BoundariesPolice-Beats/kd6k-pxkv.

# Police beat shape files
library(maptools)

If we plot the shape file, we get the city of Chicago cut up into beats. We can then plot our crime incidents by mapping them onto the coordinates of the shape file.
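As a minimal sketch, assuming the beats shapefile has been downloaded and unzipped into a local data/ directory (the path and file name below are assumptions, not the portal's actual export name):

# Read the beat boundaries and draw the base map
beat_shp <- readShapePoly("data/PoliceBeats/geo_export.shp")
plot(beat_shp, main = "Chicago police beats")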

To add more value to our analysis, we can also plot the locations of the police stations in Chicago and see if there is any relation between hot-spots of crime and proximity of police stations. Shape files for CPD police stations can be downloaded from https://data.cityofchicago.org/Public-Safety/PoliceStations-Shapefiles/tc9m-x6u6.

Unfortunately, we don't have the exact locations of the police stations in a ready format; they are embedded in the shape file and need to be extracted. We have written a function to do just that. It takes a shape file as input and returns the location (latitude and longitude) of each police station in Chicago as the output.

# Get police station information from the stations shape file
getPoliceData <- function(shp_file) {
  # labpt holds the label point (centroid) of each station polygon,
  # stored as (x, y), i.e., (longitude, latitude)
  tibble(
    description = shp_file@data$DESCRIPTIO,
    long = sapply(shp_file@polygons, function(p) p@labpt[1]),
    lat  = sapply(shp_file@polygons, function(p) p@labpt[2])
  )
}
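For instance, assuming the stations shapefile has been unzipped alongside the beats file (again, the path is an assumption), we can extract the station locations:

# Read the police station polygons and pull out their label points
station_shp <- readShapePoly("data/PoliceStations/geo_export.shp")
police_data <- getPoliceData(station_shp)
police_data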

Before we can plot the data on the map, we need to process it a bit further and get crime counts for each location. dplyr's group_by() and summarise() are convenient for this: they split the data frame by the grouping variables, apply an aggregation to each group, and return the results combined back into a data frame.

# Aggregate data for plotting on maps
(crime_agg <- crime_clean %>%
   group_by(crime, arrest, beat, date, x_coordinate, y_coordinate, time_group, day, month) %>%
   summarise(count = n()) %>%
   arrange(desc(count)))

We are going to utilize ggplot2’s extensive plotting capabilities for this exercise as well. To plot the shape file using ggplot(), we need to convert the data in the shape file into a data frame using the fortify.SpatialPolygons() function. The function fortify.SpatialPolygons() has been written by Hadley Wickham and is available on his GitHub page https://github.com/hadley/ggplot2/blob/master/R/fortify-spatial.r.

library(bigreadr) # provides rbind_df()
# Adapted from https://raw.githubusercontent.com/tidyverse/ggplot2/master/R/fortify-spatial.r;
# ggplot2's internal rbind_dfs() is replaced here with bigreadr::rbind_df()
fortify.SpatialPolygons <- function(model, data, ...) {
  rbind_df(lapply(model@polygons, fortify))
}

To see how the crimes in Chicago are spread out spatially on a given day, we take the first date in our data set and subset the data to observations from that date only. The coordinates are already numeric thanks to our col_types specification, so they can be plotted on the map directly.

# The cool ggplot way: crimes for one day
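# A sketch, assuming beat_shp, station_shp, and getPoliceData() from above,
# and that both shapefiles use WGS84 longitude/latitude coordinates
beat_map <- fortify.SpatialPolygons(beat_shp)
police_data <- getPoliceData(station_shp)
one_day <- crime_clean %>%
  filter(date == min(date), !is.na(latitude))

ggplot() +
  geom_polygon(data = beat_map, aes(x = long, y = lat, group = group),
               fill = NA, colour = "grey60") +
  geom_point(data = one_day, aes(x = longitude, y = latitude, colour = crime),
             size = 1, alpha = 0.7) +
  geom_point(data = police_data, aes(x = long, y = lat),
             colour = "red", shape = 15, size = 2) +
  coord_quickmap() +
  labs(title = paste("Crimes in Chicago on", min(one_day$date)),
       x = "Longitude", y = "Latitude")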

The red squares on the plot are the police stations, whereas the other dots represent different crimes. A quick look at the plot indicates that quite a few police stations are located in and around high-crime areas, except for the ones in the south-east corner, where crime incidents appear much sparser. This plot, however, shows the crime instances for just one particular day. A better way to understand the pattern would be to look at these crimes over a larger time period, for example, a month or a year. However, plotting the data for the entire year on one plot would not be visually conclusive, since too many data points would overcrowd the limited space.

An effective alternative is to plot the crimes for each day and watch how the pattern changes. We, however, don't want to run the code above 365 times and then look at 365 different files. This is where R's powerful plotting capabilities come out in full glory. We can use the animation library to create a rolling-window plot that updates for each date, plots crimes for the entire year, and produces only one file as output. We plot the crimes by taking the subset of the data for each date in the data set, plotting it, saving it, and repeating the process. We then use the saveMovie() function in the animation package to convert the temporarily saved png files into a gif image. A visually more appealing plot along these lines has been made by Drew Conway and can be found on his blog.
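A minimal sketch of that loop, assuming the one-day map above is wrapped in a helper; note that saveMovie() is the older name for what current versions of the animation package call saveGIF(), and that an ImageMagick or GraphicsMagick installation is required:

# Hypothetical helper: draw the map for a single date, as in the previous chunk
plot_crimes_on <- function(d) {
  one_day <- filter(crime_clean, date == d, !is.na(latitude))
  print(
    ggplot() +
      geom_polygon(data = beat_map, aes(x = long, y = lat, group = group),
                   fill = NA, colour = "grey60") +
      geom_point(data = one_day, aes(x = longitude, y = latitude),
                 colour = "#756bb1", size = 1, alpha = 0.7) +
      coord_quickmap() +
      labs(title = paste("Crimes in Chicago on", format(d)))
  )
}

library(animation)
saveGIF({
  walk(sort(unique(crime_clean$date)), plot_crimes_on) # one frame per date
}, movie.name = "chicago_crimes.gif", interval = 0.1)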

We have seen some interesting visualizations that helped us gauge the data better and provided clues to potential predictor variables for the model. Visualizations like these are a great resource for data explorers, as they guide us toward information hidden deep within these large piles of data.

Modeling

Before we get to our model, we need to process the data a bit more. Specifically, we need to determine at what level of the data we are going to build the model and what kind of independent variables we can derive from the existing data set.

The data we have are at the crime incident level, that is, for each recorded crime incident in the city of Chicago over the past 12 months, we have the location, date, type, beat, and ward. The location is available at a very granular level, with the exact geographical coordinates present alongside the larger territorial identity of police beats. Such information is vital, but it needs to be understood that not all information can be directly transformed into an implementable solution. We cannot predict exactly when and where the next crime is going to happen. Data scientists don't make prophecies; they derive actionable insights in the form of statistical aggregates. Therefore, understanding the nature of the problem is critical for designing a workable solution within a given set of constraints.

Keeping the ultimate goal of predictive policing in mind, our modeling engine needs to be constructed for a reasonably sized predefined geographical area and a reasonably long predefined time interval. For example, the geographical area can be a block, a couple of blocks taken together, a zip code, or a beat. Similarly, the time interval can be an hour, a couple of hours, or a day.

For our purpose, we will build a multivariate regression model with a negative binomial distribution for the errors, at the beat-day level, for all crimes taken together. There are many approaches that can be taken to develop crime prediction models; we chose a negative binomial model for its simplicity, general acceptance and understanding, and ease of implementation. It is debatable whether building the model in this fashion is the right approach. We agree that, in terms of accuracy, a better solution would be to deal with different crime types separately and generate predictions for smaller time periods. However, this is an indicative exercise that has been simplified for understanding and, at times, brevity. Along these lines, we try to explain how a solution can be designed through a well-suited approach without worrying too much about the exact details and metrics.

To create the modeling data set, we first create our base data set which has all possible unique combinations of beats and dates. The number of rows for the base data will be the product of the total number of unique beats and total number of unique dates.

# Create a base data set that contains all unique combinations of beats and dates
# The number of rows will be the product of the number of unique beats and the number of unique dates
crime_agg %>%
  ungroup() %>%
  summarize(count_beats=n_distinct(beat),
            count_date=n_distinct(date))
# Unique beats from crime data
(beats <- crime_agg %>%
    ungroup() %>%
    distinct(beat) %>%
    arrange(beat))
# Unique dates from crime data
(dates <- crime_agg %>%
    ungroup() %>%
    distinct(date) %>%
    arrange(date))

The function expand_grid() creates the desired data frame with every unique combination of beat and date. For convenience, we sort the data frame by beat.

# Create base data
(temp <- expand_grid(beats, dates) %>%
   arrange(beat)) # order base data

The total number of crimes in a beat on a given day can be found by aggregating the data by beats and dates.

# Aggregate crime_agg data to get one row per beat-date combination
(model_data <- crime_agg %>%
  ungroup() %>%
  group_by(beat,date) %>%
  summarize(count=sum(count,na.rm=TRUE), # total crimes in the beat-day
            arrest=sum(as.numeric(arrest)-1,na.rm=TRUE))) # arrest factor levels "N","Y" map to 0,1
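
To complete the modeling data set, the base data can be joined with these aggregates so that beat-days with no recorded crime get a count of zero. The model itself can then eventually be fit, for example, with MASS::glm.nb(), which matches the negative binomial specification described above. A minimal sketch, with illustrative calendar predictors rather than a final feature set:

# Zero-fill unobserved beat-days and attach simple calendar predictors
model_base <- temp %>%
  left_join(model_data, by = c("beat", "date")) %>%
  mutate(count = replace_na(count, 0L),
         arrest = replace_na(arrest, 0),
         day = wday(date, label = TRUE),
         month = month(date, label = TRUE))

# Negative binomial regression of daily crime counts at the beat level
nb_fit <- MASS::glm.nb(count ~ beat + day + month, data = model_base)
summary(nb_fit)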