For the purpose of illustration, I’m using data that was generated for a statistics course at UCLA (see http://stats.idre.ucla.edu/r/dae/poisson-regression/). The data comes from the following situation:

The number of awards earned by students at one high school. Predictors of the number of awards earned include the type of program in which the student was enrolled (e.g., vocational, general or academic) and the score on their final exam in math.

They also provide a description of the data:

In this example, num_awards is the outcome variable and indicates the number of awards earned by students at a high school in a year, math is a continuous predictor variable and represents students’ scores on their math final exam, and prog is a categorical predictor variable with three levels indicating the type of program in which the students were enrolled. It is coded as 1 = “General”, 2 = “Academic” and 3 = “Vocational”.

We can import the data directly from their website:

dataset <- read.csv("http://www.ats.ucla.edu/stat/data/poisson_sim.csv")
# Change prog to a factor
dataset$prog <- factor(dataset$prog, levels = 1:3, 
                       labels = c("General", "Academic", "Vocational"))
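
A quick look at the first few rows and at the counts per program confirms the structure described above:

head(dataset)
table(dataset$prog)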

Poisson regression

We’ll start with basic Poisson regression. We assume that the relationship between math and the (log) expected value of the number of awards is linear.

model1 <- glm(num_awards ~ prog + math, 
              family = "poisson", data = dataset)
summary(model1)

Call:
glm(formula = num_awards ~ prog + math, family = "poisson", data = dataset)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-2.2043  -0.8436  -0.5106   0.2558   2.6796  

Coefficients:
               Estimate Std. Error z value Pr(>|z|)    
(Intercept)    -5.24712    0.65845  -7.969 1.60e-15 ***
progAcademic    1.08386    0.35825   3.025  0.00248 ** 
progVocational  0.36981    0.44107   0.838  0.40179    
math            0.07015    0.01060   6.619 3.63e-11 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 287.67  on 199  degrees of freedom
Residual deviance: 189.45  on 196  degrees of freedom
AIC: 373.5

Number of Fisher Scoring iterations: 6
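
Since the model is linear on the log scale, exponentiating the coefficients turns them into rate ratios, i.e., multiplicative effects on the expected number of awards. A quick sketch of that interpretation:

exp(coef(model1))    # rate ratios
exp(confint(model1)) # profile-likelihood confidence intervals, on the same scale

For instance, the math coefficient exponentiates to exp(0.07) ≈ 1.07, roughly a 7% increase in the expected number of awards per additional point on the math exam.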

From the output of the fitting routine, we can plot the relationship between the expected outcome and the predictors:

library(ggplot2)
ggplot(data = dataset, aes(x = math, y = predict(model1, type = "response"),
                           colour = prog)) + 
  geom_point() + xlab("Math score") + ylab("Expected number of awards")

Using splines in Poisson regression

Depending on the data analysis, we may not be satisfied with the hypothesis that the relationship between the math score and the expected number of awards is linear. One way to relax that assumption is to use B-splines. As a reminder, splines are piecewise polynomials constrained to match at certain points called “knots”. The number of interior knots plus the degree of the polynomials gives what we call the “degrees of freedom” of the spline, and it is chosen by the user. By default, the R function splines::bs fits a cubic polynomial with no interior knots, i.e., three degrees of freedom. Let’s repeat the above analysis using B-splines:

library(splines)
model2 <- glm(num_awards ~ prog + bs(math), 
              family = "poisson", data = dataset)
summary(model2)

Call:
glm(formula = num_awards ~ prog + bs(math), family = "poisson", 
    data = dataset)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-2.0875  -0.8093  -0.5170   0.2339   2.5463  

Coefficients:
               Estimate Std. Error z value Pr(>|z|)   
(Intercept)     -2.3783     1.1759  -2.023  0.04312 * 
progAcademic     1.0774     0.3584   3.006  0.00265 **
progVocational   0.4316     0.4510   0.957  0.33868   
bs(math)1       -0.6956     2.2602  -0.308  0.75825   
bs(math)2        2.3647     0.9385   2.520  0.01175 * 
bs(math)3        2.0641     1.2986   1.590  0.11194   
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 287.67  on 199  degrees of freedom
Residual deviance: 188.10  on 194  degrees of freedom
AIC: 376.15

Number of Fisher Scoring iterations: 6
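
Each bs(math) row in the coefficient table corresponds to one column of the spline basis matrix, which we can build directly to inspect (a quick sketch using the same defaults as in the model):

basis <- bs(dataset$math)
dim(basis)   # one row per observation, one column per degree of freedom (3 by default)
head(basis)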

We see that we get one regression coefficient for each degree of freedom. Note that the p-values of the individual spline coefficients are difficult to interpret, and we should not assume that they measure the statistical significance of the spline as a whole. We can create the same graph as above with our new model:

ggplot(data = dataset, aes(x = math, y = predict(model2, type = "response"),
                           colour = prog)) + 
  geom_point() + xlab("Math score") + ylab("Expected number of awards")

The important thing to note is that we no longer get the log-linear relationship we saw above.
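
Since the individual p-values tell us little about the spline as a whole, a likelihood-ratio test comparing the linear and spline fits is more informative. The default cubic basis spans linear functions of math, so model1 is nested in model2 and the comparison below is valid:

anova(model1, model2, test = "LRT")

A large p-value here would suggest that the extra flexibility of the spline is not supported by the data.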

To increase the degrees of freedom of the spline, we simply change the value of the df argument:

model3 <- glm(num_awards ~ prog + bs(math, df = 5), 
              family = "poisson", data = dataset)
summary(model3)

Call:
glm(formula = num_awards ~ prog + bs(math, df = 5), family = "poisson", 
    data = dataset)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-2.0718  -0.8276  -0.5439   0.2257   2.5726  

Coefficients:
                  Estimate Std. Error z value Pr(>|z|)   
(Intercept)        -4.1482     3.7372  -1.110   0.2670   
progAcademic        1.1006     0.3626   3.035   0.0024 **
progVocational      0.4508     0.4524   0.996   0.3190   
bs(math, df = 5)1   2.3720     4.7879   0.495   0.6203   
bs(math, df = 5)2   1.4836     3.4900   0.425   0.6708   
bs(math, df = 5)3   3.4696     3.9150   0.886   0.3755   
bs(math, df = 5)4   3.7625     3.6639   1.027   0.3045   
bs(math, df = 5)5   3.8769     3.7686   1.029   0.3036   
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 287.67  on 199  degrees of freedom
Residual deviance: 187.59  on 192  degrees of freedom
AIC: 379.64

Number of Fisher Scoring iterations: 6

Once again, let’s look at the same graph as above but with our new model fit:

ggplot(data = dataset, aes(x = math, y = predict(model3, type = "response"),
                           colour = prog)) + 
  geom_point() + xlab("Math score") + ylab("Expected number of awards")
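
Finally, we can put the three models side by side. A quick comparison by AIC (lower is better; the values match those printed in the summaries above):

AIC(model1, model2, model3)

For this dataset, the log-linear model has the smallest AIC, so the added flexibility of the splines does not pay for its extra parameters.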
