library(tidyverse); library(knitr); library(kableExtra); library(broom); library(cowplot)
library(rstan); library(tidybayes); library(scales); library(plotly)
library(png); library(grid); library(gridExtra)



theme_set(theme_classic(base_size = 12) + 
              background_grid(color.major = "grey90", 
                              color.minor = "grey95", 
                              minor = "none", major = "none") +
              theme(legend.position = "none"))

source("Make_Stan_Data.R")
d <- Make_Stan_Data(min_case_rate = 0.02, min_days = 12, upper_mult = 3)

References

I’m using figures from these excellent books: Ben Lambert’s A Student’s Guide to Bayesian Statistics and Richard McElreath’s Statistical Rethinking.

sc <- 1
img1 <- rasterGrob(as.raster(readPNG("Myndir/SGBS.png")), interpolate = FALSE, 
                   height = unit(10 * sc, "cm"), width = unit(6.6 * sc, "cm"))
img2 <- rasterGrob(as.raster(readPNG("Myndir/SR.png")), interpolate = FALSE, 
                   height = unit(10 * sc, "cm"), width = unit(6.6 * sc, "cm"))
grid.arrange(img1, img2, ncol = 2)

Hierarchical Logistic Growth Curves

Let \(E_i\) be the population of country \(i\) and \(I_{i, t}\) the number of infected in country \(i\) at time \(t\). The percent of infected can then be calculated as

\[ P_{i, t} = \frac{I_{i, t}}{E_i}. \]

In the usual logistic regression GLM we could model the percent of infected as a function of time (in days) with

\[ \log\left(\frac{P_{i, t}}{1 - P_{i, t}}\right) = \alpha_i + \beta_i \cdot t, \]

where \(\alpha_i\) is a measure of how many have been infected in country \(i\) at time \(t = 0\) and \(\beta_i\) is a measure of growth. In the case of COVID-19 infections we don’t know the maximum percent of each population that will be infected, so we have another unknown parameter, \(S_i\), the saturation percent at which a country will reach its maximum number of infected. Thus our model looks like

\[ \log\left(\frac{P_{i, t}}{S_i - P_{i, t}}\right) = \alpha_i + \beta_i \cdot t. \]

These parameters are hard to estimate when data from only one country are used. However, if we were to pool information about them between countries, as in a hierarchical Bayesian model, estimation might be possible. Let

\[ z_{i, t} = \alpha_i + \beta_i \cdot t, \]

where, as before, \(\alpha_i\) is a measure of how many have been infected in country \(i\) at time \(t = 0\) and \(\beta_i\) is a measure of growth. The logistic growth equation then tells us that

\[ P_{i, t} = \frac{S_i}{1 + \exp(-z_{i, t})} \qquad \text{(Logistic)}. \]

In the case of a generalized logistic growth curve, the equation above becomes

\[ P_{i, t} = \frac{S_i}{\left(1 + \exp(-z_{i, t})\right)^{\frac{1}{\nu_i}}}, \qquad \text{(Generalized Logistic)} \]

where \(\nu_i\) is a country-specific parameter that enables non-symmetric growth. If \(\nu_i = 1\) we have a regular logistic growth model.
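
To get a feel for these two curves, here is a small R sketch that is separate from the Stan model: it evaluates the logistic and generalized logistic forms for hypothetical values of \(\alpha\), \(\beta\), \(S\) and \(\nu\) chosen purely for illustration.

# Generalized logistic growth curve; nu = 1 gives the ordinary logistic curve.
# alpha, beta, S and nu below are made-up values for illustration only.
generalized_logistic <- function(t, alpha, beta, S, nu = 1) {
    z <- alpha + beta * t
    S / (1 + exp(-z))^(1 / nu)
}

curve_dat <- tibble(t = 0:60) %>% 
    mutate(Logistic = generalized_logistic(t, alpha = -4, beta = 0.15, S = 0.01),
           `Generalized Logistic` = generalized_logistic(t, alpha = -4, beta = 0.15, S = 0.01, nu = 0.4))

curve_dat %>% 
    pivot_longer(-t, names_to = "Model", values_to = "P") %>% 
    ggplot(aes(t, P, col = Model)) +
    geom_line() +
    labs(x = "Days", y = "Percent infected") +
    theme(legend.position = "top")

The value of \(\nu\) only changes how the curve approaches its saturation point \(S\); both curves share the same \(\alpha\), \(\beta\) and \(S\).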

knitr::include_app("https://bgautijonsson.shinyapps.io/Logistic_Growth_App/", height = "600px")

If we utilize a Poisson likelihood for the observed cumulative cases, then

\[ I_{i, t} \sim \mathrm{Pois}(P_{i, t} \cdot E_i). \]

However, the Poisson model can be restrictive since the predicted variance must always equal the mean. Thus we fit Negative Binomial versions of the models

\[ I_{i, t} \sim \mathrm{NegBin}(P_{i, t} \cdot E_i, \phi_i), \]

where \(\phi_i\) is a country-level effect specifying the amount of overdispersion in that country.

Modeling Daily Counts

There is implicit correlation in the values of \(I_{i, t}\) for different values of \(t\). Thus, a better parametrisation would be to model the daily number of cases. Let

\[ z_{i, t} = \alpha_i + \beta_i \cdot t, \]

so that the percent of infected, \(P_{i, t}\), is

\[ P_{i, t} = \frac{S_i}{1 + \exp(-z_{i,t})}. \]

If we furthermore write

\[ z^*_{i, t - 1} = \alpha_i + \beta_i \cdot (t - 1), \]

and

\[ P^*_{i, t - 1} = \frac{S_i}{1 + \exp(-z^*_{i, t-1})}, \]

the change in rates between days is

\[ C_{i, t} = P_{i, t} - P^*_{i, t - 1}. \]

Since \(C_{i, t}\) is a first difference that approximates the derivative of \(P_{i, t}\) with respect to \(t\), we can skip the differencing step and model the derivative directly,

\[ C_{i, t} = \frac{d}{dt}P_{i, t} = \beta_i \cdot P_{i, t} \left(1 - \frac{P_{i, t}}{S_i}\right), \]

in the case of a logistic growth model, and

\[ C_{i, t} = \frac{d}{dt}P_{i, t} = \frac{\beta_i}{\nu_i} \cdot P_{i, t} \left(1 - \left(\frac{P_{i, t}}{S_i}\right)^{\nu_i}\right), \]

for a generalized logistic growth model. Then, conditional on a Negative Binomial likelihood and population size \(E_i\), the daily number of observed cases, \(D_{i, t}\), can be written as

\[ D_{i, t} \sim \mathrm{NegBin}(C_{i, t} \cdot E_i, \phi_i). \]
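
To make this data-generating process concrete, here is a small simulation sketch for one hypothetical country, reusing the generalized_logistic() helper from the sketch above. It is not the Stan model: the population size, growth parameters and overdispersion are made-up values, \(C\) is taken as the day-to-day difference in \(P\), and size = 1/phi is used because rnbinom() parametrises the variance as mu + mu^2/size.

set.seed(1)
E <- 350000                      # hypothetical population size
t <- 1:60
P <- generalized_logistic(t, alpha = -4, beta = 0.15, S = 0.01)
P_lag <- generalized_logistic(t - 1, alpha = -4, beta = 0.15, S = 0.01)
C <- P - P_lag                   # daily change in the percent infected
phi <- 0.05                      # overdispersion, so Var = mu + phi * mu^2
D <- rnbinom(length(t), mu = C * E, size = 1 / phi)

tibble(t = t, D = D) %>% 
    ggplot(aes(t, D)) +
    geom_col() +
    labs(x = "Days", y = "Simulated daily cases")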

Stan

Bayesian Inference

The goal of statistical analysis is to summarise knowledge gained about parameters \(\theta = (\theta_1, \dots, \theta_d)\) by looking at data \(y = (y_1, \dots, y_n)\). Consider the joint density of data and parameters

\[ p(y, \theta) = p(y|\theta)p(\theta). \]

The posterior distribution can be obtained via

\[ p(y, \theta) = p(y|\theta)p(\theta) = p(y) p(\theta|y), \]

which can be rearranged to get

\[ p(\theta|y) = \frac{p(y|\theta)p(\theta)}{p(y)}, \]

where the normalizing constant, the marginal likelihood or evidence, is

\[ p(y) = \int p(y|\theta)p(\theta) \, d\theta. \]

  • In moderately complex problems we can’t calculate \(p(y)\) analytically.
  • Don’t know the truth, can’t handle the truth.

Metropolis-Hastings

If we have two values, \(\theta^{(t)}\) and \(\theta_{cand}\), we can calculate the ratio of their posterior densities,

\[ \frac{p(\theta^{(t)}|y)}{p(\theta_{cand}|y)} = \frac{p(y|\theta^{(t)})p(\theta^{(t)})/p(y)}{p(y|\theta_{cand})p(\theta_{cand})/p(y)} = \frac{p(y|\theta^{(t)})p(\theta^{(t)})}{p(y|\theta_{cand})p(\theta_{cand})}, \]

in which the intractable \(p(y)\) cancels.

The Metropolis-Hastings algorithm is a clever way to construct a Markov chain, \(\theta^{(t)}\), with \(p(\theta|y)\) as its stationary distribution, without ever having to perform the integration needed to obtain \(p(y)\). In the Metropolis-Hastings algorithm the chain moves from \(\theta^{(t)}\) to \(\theta_{cand}\) with probability

\[ \alpha = \min\left\{1, \frac{p(\theta_{cand}|y)\,q(\theta^{(t)}|\theta_{cand})}{p(\theta^{(t)}|y)\,q(\theta_{cand}|\theta^{(t)})}\right\}, \]

where \(q\) is a proposal distribution and may be non-symmetric, so that \(q(\theta_{cand}|\theta^{(t)})\) is not necessarily equal to \(q(\theta^{(t)}|\theta_{cand})\).

  • When \(q\) is symmetric the \(q\) terms cancel and the algorithm reduces to the Metropolis algorithm (sketched below).
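
As a concrete illustration, and entirely separate from the COVID model, here is a minimal Metropolis sampler in R for a toy target known only up to a constant. The target (a standard normal) and the proposal standard deviation are arbitrary choices; because the random-walk proposal is symmetric, only the ratio of unnormalized posteriors appears, exactly the cancellation shown above.

log_target <- function(theta) -theta^2 / 2   # toy unnormalized log posterior

n_iter <- 5000
theta <- numeric(n_iter)
for (t in 2:n_iter) {
    cand <- rnorm(1, mean = theta[t - 1], sd = 1)               # symmetric proposal q
    log_alpha <- log_target(cand) - log_target(theta[t - 1])    # log acceptance ratio
    theta[t] <- if (log(runif(1)) < log_alpha) cand else theta[t - 1]
}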

The Good King Markov

knitr::include_graphics("Myndir/KingMarkov.png")

For the moment, forget about posterior densities and MCMC. Consider instead the tale of Good King Markov. King Markov was a benevolent autocrat of an island kingdom, a circular archipelago, with 10 islands. Each island was neighbored by two others, and the entire archipelago formed a ring. The islands were of different sizes, and so had different sized populations living on them. The second island was about twice as populous as the first, the third about three times as populous as the first, and so on, up to the largest island, which was 10 times as populous as the smallest.

The Good King was an autocrat, but he did have a number of obligations to His people. Among these obligations, King Markov agreed to visit each island in His kingdom from time to time. Since the people love their king, each island would prefer that he visit them more often. And so everyone agreed that the king should visit each island in proportion to its population size, visiting the largest island 10 times as often as the smallest, for example.

The Good King Markov, however, wasn’t one for schedules or bookkeeping, and so he wanted a way to fulfill his obligation without planning his travels months ahead of time. Also, since the archipelago was a ring, the King insisted that he only move among adjacent islands, to minimize time spent on the water—like many citizens of his kingdom, the king believes there are sea monsters in the middle of the archipelago.

The king’s advisor, a Mr Metropolis, engineered a clever solution to these demands. We’ll call this solution the Metropolis algorithm. Here’s how it works.

  1. Wherever the King is, each week he decides between staying put for another week or moving to one of the two adjacent islands. To decide his next move, he flips a coin.

  2. If the coin turns up heads, the King considers moving to the adjacent island clockwise around the archipelago. If the coin turns up tails, he considers instead moving counterclockwise. Call the island the coin nominates the proposal island.

  3. Now, to see whether or not he moves to the proposal island, King Markov counts out a number of seashells equal to the relative population size of the proposal island. So for example, if the proposal island is number 9, then he counts out 9 seashells. Then he also counts out a number of stones equal to the relative population of the current island. So for example, if the current island is number 10, then King Markov ends up holding 10 stones, in addition to the 9 seashells.

  4. When there are more seashells than stones, King Markov always moves to the proposal island. But if there are fewer shells than stones, he discards a number of stones equal to the number of shells. So for example, if there are 4 shells and 6 stones, he ends up with 4 shells and 6 − 4 = 2 stones. Then he places the shells and the remaining stones in a bag. He reaches in and randomly pulls out one object. If it is a shell, he moves to the proposal island. Otherwise, he stays put another week. As a result, the probability that he moves is equal to the number of shells divided by the original number of stones.

-Richard McElreath
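
A short simulation of the King’s itinerary, written in the spirit of McElreath’s example (the number of weeks is an arbitrary choice), confirms that the long-run visit frequencies are proportional to the island populations.

set.seed(2020)
n_weeks <- 1e5
positions <- numeric(n_weeks)
current <- 10
for (week in 1:n_weeks) {
    positions[week] <- current
    # Coin flip nominates the clockwise or counterclockwise neighbour
    proposal <- current + sample(c(-1, 1), size = 1)
    if (proposal < 1) proposal <- 10
    if (proposal > 10) proposal <- 1
    # Move with probability (shells / stones) = proposal population / current population
    if (runif(1) < proposal / current) current <- proposal
}
round(table(positions) / n_weeks, 3)   # approaches 1/55, 2/55, ..., 10/55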

Hamiltonian Monte Carlo

It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought.

—E. T. Jaynes

Stan, named after Stanislaw Ulam, is a general-purpose software package written in C++ for the estimation of models using Hamiltonian Monte Carlo. What is Hamiltonian Monte Carlo? I will let Richard McElreath explain.

Suppose King Markov’s cousin Monty is King on the mainland. Monty’s kingdom is not a discrete set of islands. Instead, it is a continuous territory stretched out along a narrow valley, running north-south. But the King has a similar obligation: to visit his citizens in proportion to their local population density. Within the valley, people distribute themselves inversely proportional to elevation—most people live in the middle of the valley, fewer up the mountainside. How can King Monty fulfill his royal obligation?

Like Markov, Monty doesn’t wish to bother with schedules and calculations. Also like Markov, Monty has a highly educated and mathematically gifted advisor, named Hamilton. Hamilton designed an odd, but highly efficient, method. And this method solves one of Metropolis’ flaws—the king hardly ever stays in the same place, but keeps moving on to visit new locations.

Here’s how it works. The king’s vehicle picks a random direction, either north or south, and drives off at a random momentum. As the vehicle goes uphill, it slows down and turns around when its declining momentum forces it to. Then it picks up speed again on the way down. After a fixed period of time, they stop the vehicle, get out, and start shaking hands and kissing babies. Then they get back in the vehicle and begin again. Amazingly, Hamilton can prove mathematically that this procedure guarantees that, in the long run, the locations visited will be inversely proportional to their relative elevations, which are also inversely proportional to the population densities. Not only does this keep the king moving, but it also spaces the locations apart better—unlike the other king, Monty does not only visit neighboring locations.

-Richard McElreath

knitr::include_graphics("Myndir/NLP.png")

Hamiltonian Monte Carlo is a hybrid of Hamiltonian mechanics and the well-known Metropolis algorithm. The Hamiltonian dynamics are defined on the negative log-posterior surface, \(-\log p(\theta|y)\).

  1. Select a random starting location, \(\theta_0\).
  2. For \(t = 1, 2, \dots, n_{\text{iter}}\) (see the sketch below):
  • Generate a random momentum from a proposal distribution, say \(m \sim \mathcal N(0, \Sigma)\).
  • Simulate the path of the particle through negative log-posterior space using \(L\) leapfrog steps:
  • Update the momentum with a half step, \(m \leftarrow m + \frac12\epsilon \frac{d\log p(\theta|y)}{d\theta}\).
  • Use the momentum to update the position, \(\theta \leftarrow \theta + \epsilon \Sigma^{-1}m\), then take the remaining half step in the momentum.
  • After \(L\) such steps you end up at \(\theta^*\). This is your proposal, and you process it just like in a normal Metropolis-Hastings algorithm.
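
Below is a bare-bones HMC sketch in R for a one-dimensional standard normal target with an identity mass matrix. The step size \(\epsilon\) and number of leapfrog steps \(L\) are arbitrary illustrative choices; real implementations such as Stan add step-size and mass-matrix adaptation (and the no-U-turn criterion described next) on top of this.

log_post <- function(theta) -theta^2 / 2          # toy log posterior (up to a constant)
grad_log_post <- function(theta) -theta

hmc_step <- function(theta, epsilon = 0.1, L = 20) {
    m <- rnorm(1)                                                # draw a random momentum
    theta_new <- theta
    m_new <- m + 0.5 * epsilon * grad_log_post(theta_new)        # half step for momentum
    for (l in 1:L) {
        theta_new <- theta_new + epsilon * m_new                 # full step for position
        if (l < L) m_new <- m_new + epsilon * grad_log_post(theta_new)
    }
    m_new <- m_new + 0.5 * epsilon * grad_log_post(theta_new)    # final half step
    # Metropolis accept/reject on the total energy (Hamiltonian)
    log_accept <- (log_post(theta_new) - 0.5 * m_new^2) - (log_post(theta) - 0.5 * m^2)
    if (log(runif(1)) < log_accept) theta_new else theta
}

draws <- numeric(2000)
for (i in 2:2000) draws[i] <- hmc_step(draws[i - 1])

With \(\Sigma = I\), the position update in hmc_step() is exactly the \(\theta \leftarrow \theta + \epsilon \Sigma^{-1}m\) step listed above.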

Stan is NUTS!

knitr::include_graphics("Myndir/NUTS.png")

In HMC there’s a danger that you could push the particle one way, only for it to roll back down the log posterior and land back where it started. Stan implements the No-U-Turn Sampler (NUTS): if at any time the particle would make a U-turn, the last point before the turn becomes the proposal. The actual procedure is more complicated than this, in order to preserve detailed balance, but we won’t go into that here.

knitr::include_graphics("Myndir/MCMCCompare.png")

Figure from Ben Lambert’s A Student’s Guide to Bayesian Statistics (2019)

Divergent Transitions and Max Treedepth

  • Divergent transition: the sampler has surfed so hard it fell off the posterior.
  • Max treedepth: the sampler surfed for the maximum allowed number of steps but hasn’t made a U-turn yet (tuning options are sketched below).
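
When Stan reports these warnings, a common first step is to tune the sampler through rstan’s control argument. The model file name and data list below are placeholders, not the actual files used here.

# Hypothetical call: raise adapt_delta to reduce divergences, and raise
# max_treedepth if the sampler keeps hitting the tree-depth limit.
fit <- stan(
    file = "logistic_growth.stan",    # placeholder model file
    data = stan_data,                 # placeholder data list
    chains = 4, iter = 2000,
    control = list(adapt_delta = 0.99, max_treedepth = 15)
)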

Modeling

Priors and Parameters

Logistic Model

The parameters \(\alpha_i\) and \(\beta_i\) are treated as in a generalized linear model, except that we model \(\beta_i\) on the \(\log\) scale to impose the constraint \(\beta_i > 0\). We put hierarchical priors on \(\alpha_i\) and \(\beta_i\) so that for each country, \(i\),

\[ \begin{aligned} \beta_i &\sim \mathrm{LogNormal}(\mu_\beta, \sigma^2_\beta) \\ \alpha_i &\sim \mathrm{Normal}(\mu_\alpha, \sigma^2_\alpha) \end{aligned} \]

The \(\mu\) parameters are given weakly informative prior distributions

\[ \begin{aligned} \mu_\alpha &\sim \mathrm{Normal}(-1.5 , 3^2) \\ \mu_\beta &\sim \mathrm{Normal}(-2, 1^2) \end{aligned} \]

We chose to put priors on the standard deviations, so that \(\sigma_\beta\) and \(\sigma_\alpha\) are given \(\mathrm{Exponential}\) prior distributions

\[ \begin{aligned} \sigma_\alpha &\sim \mathrm{Exponential}(0.2) \\ \sigma_\beta &\sim \mathrm{Exponential}(0.5). \end{aligned} \]

The \(S_i\) parameters take on values in \((0, 1)\), so we thought it proper to model them with Beta distributions. By putting hierarchical priors on them we could also share information between countries on the estimated saturation points. Thus the saturation parameter for country \(i\) is sampled as

\[ S_i \sim \mathrm{Beta}(a_S, b_S). \]

We parametrise this Beta distribution in terms of its mean, \(\mu_S\), and prior sample size, \(\kappa_S\), with priors

\[ \begin{aligned} \mu_S &\sim \mathrm{Beta}(1, 99) \\ \kappa_S &\sim \mathrm{Exponential}(0.001). \end{aligned} \]

The prior for \(\mu_S\) is deliberately pessimistic, with a prior mean saturation of \(1\%\). We can then back-transform to \(a_S\) and \(b_S\) as

\[ \begin{aligned} a_S &= \mu_S \cdot \kappa_S \\ b_S &= (1 - \mu_S) \cdot \kappa_S \end{aligned} \]
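
A quick prior-predictive simulation in R, separate from the model code itself, shows what this hierarchical prior implies for the saturation parameters.

set.seed(1)
n_sims <- 1e4
mu_S <- rbeta(n_sims, 1, 99)              # prior mean saturation, centered near 1%
kappa_S <- rexp(n_sims, rate = 0.001)     # prior sample size
S <- rbeta(n_sims, mu_S * kappa_S, (1 - mu_S) * kappa_S)
quantile(S, probs = c(0.025, 0.5, 0.975))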

We parametrise the negative binomial likelihood in the form of mean and overdispersion. If we write

\[ D_{i, t} \sim \mathrm{NegBin}(\mu_{i, t}, \phi_i), \]

where

\[ \mu_{i, t} = C_{i, t} \cdot E_i, \]

we can write

\[ E[D_{i, t}] = \mu_{i, t} \qquad Var[D_{i, t}] = \mu_{i, t} + \phi_i \cdot \mu_{i, t}^2. \]

We put a hierarchical exponential prior on the \(\phi_i\) parameters as follows:

\[ \begin{aligned} \sqrt{\phi_i} &\sim \mathrm{Exponential}(\sigma_\phi) \\ \sigma_\phi &\sim \mathrm{Exponential}(0.5) \end{aligned} \]
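
In base R this mean-overdispersion parameterization can be mimicked with rnbinom() by setting size = 1/\(\phi\), since rnbinom() uses \(\mathrm{Var} = \mu + \mu^2/\mathrm{size}\). The values of \(\mu\) and \(\phi\) below are arbitrary.

# Check the mean-overdispersion parameterization: Var = mu + phi * mu^2
mu <- 50
phi <- 0.1
x <- rnbinom(1e5, mu = mu, size = 1 / phi)
c(mean = mean(x), variance = var(x), theoretical_variance = mu + phi * mu^2)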

Generalized Logistic Model

The parameters and accompanying priors for the generalized logistic model are mostly the same with some exceptions. The \(\mu_\alpha\) parameter is now given prior distribution

\[ \begin{aligned} \mu_\alpha &\sim \mathrm{Normal}(0 , 1^2). \end{aligned} \]

We model the \(\nu_i\) parameters on the \(\log\) scale to impose the constraint \(\nu_i > 0\). Additionally, we give them hierarchical priors according to

\[ \nu_i \sim \mathrm{LogNormal}(\mu_\nu, \sigma_\nu^2), \]

with hyperpriors

\[ \begin{aligned} \mu_\nu &\sim \mathrm{Normal}(0, 2) \\ \sigma_\nu &\sim \mathrm{Exponential}(0.5) \end{aligned} \]

Predicting Hospital and ICU Bed Use

Let \(I_{x}\) be the number of diagnosed cases in age-group \(x\) and \(I\) be the total number of diagnosed cases. Then we can define the age distribution, \(\Lambda\), of diagnosed cases as

\[ \Lambda_x = \frac{I_x}{I}, \]

where \(x\) can take on the values \(0 - 9\), \(10 - 19\), \(\dots\), \(80+\). Given predictions, \(\tilde D_{t}\), for the daily number of diagnosed cases, we can then split them up into age-groups using a multinomial distribution with \(N = \tilde D_{t}\) and probabilities \(\Lambda\). We name these age-grouped cases \(A\) and write them as

\[ A_{t} \sim \mathrm{Multinomial}(\tilde D_{t}, \Lambda). \]

Given this posterior distribution of diagnosed cases in each age-group, we can use estimates of age-group-specific hospital and ICU admittance rates, \(\eta_x\) and \(\pi_x\), from a study by Ferguson et al. to predict counts of hospitalization in age-group \(x\), \(H_{t, x}\), and ICU admittance, \(ICU_{t, x}\), as

\[ \begin{aligned} H_{t, x} &\sim \mathrm{Binomial}(A_{t, x}, \eta_x) \\ ICU_{t, x} &\sim \mathrm{Binomial}(H_{t, x}, \pi_x). \end{aligned} \]

Given some estimates of length-of-stay and the time from diagnosis to admittance, we can then estimate the number of hospital and ICU beds in use at each time point. For our application in Iceland we used the following assumptions, based on a literature review, which are applied in the R sketch below:

  • Lag from infection to hospitalization: \(7\) days
  • Lag from hospitalization to ICU admittance: \(3\) days
  • Days in hospital: \(14\) days
  • Days in ICU: \(10\) days

# Proportion of diagnosed cases in each age group
age_dist <- N_cases_agegrouped / sum(N_cases_agegrouped)

for (t in 1:N_pred_days) {
    # Split each day's predicted cases into age groups
    age_cases[t, ] <- rmultinom(1, N_cases[t], age_dist)
    # Hospital admissions lag cases by 7 days, ICU admissions a further 3 days
    hospital[t + 7, ] <- rbinom(N_age_groups, age_cases[t, ], hospital_dist)
    icu[t + 7 + 3, ] <- rbinom(N_age_groups, hospital[t + 7, ], icu_dist)
}

# Cumulative counts, summing admissions over age groups
total_cases <- cumsum(N_cases)
total_hospital <- cumsum(rowSums(hospital))
total_icu <- cumsum(rowSums(icu))

# Counts currently in the system: cases assumed active for 21 days,
# hospital stays of 14 days and ICU stays of 10 days
current_cases <- total_cases - lag(total_cases, 21, default = 0)
current_hospital <- total_hospital - lag(total_hospital, 14, default = 0)
current_icu <- total_icu - lag(total_icu, 10, default = 0)

Data

We use Icelandic data from the Directorate of Health and supplement them with data on other countries from the European Centre for Disease Prevention and Control, obtained daily. Data used for modeling were filtered according to the following criteria, sketched in code after the list:

  • Only use data for countries with a population \(\geq\) 100,000.
  • Only keep data for which cases per 1000 inhabitants \(\geq 0.02\).
  • Only keep data for countries with more than 12 days of follow-up after the above steps.
  • Countries excluded for other reasons:
    • China
    • South Korea
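
The filtering itself happens inside Make_Stan_Data(); a rough dplyr sketch of the same criteria, with an assumed input data frame and assumed column names, would look like this.

# Sketch only: raw_data, pop, case_rate and country are assumed names
filtered <- raw_data %>% 
    filter(pop >= 1e5) %>%                                  # population at least 100,000
    filter(case_rate >= 0.02) %>%                           # at least 0.02 cases per 1000
    group_by(country) %>% 
    filter(n() > 12) %>%                                    # more than 12 days of follow-up
    ungroup() %>% 
    filter(!country %in% c("China", "South Korea"))         # excluded for other reasons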

As of May \(16^{th}\) this gives us 7211 observations from 136 countries.

d %>% 
    summarise(Countries = length(unique(country)),
              Days = n())
## # A tibble: 1 x 2
##   Countries  Days
##       <int> <int>
## 1       136  7211
summary_tbl <- d %>% 
    group_by(country) %>% 
    summarise(First = min(date),
              Days_In_Data = n(),
              Start_Rate = min(case_rate),
              End_Rate = max(case_rate)) %>% 
    arrange(Days_In_Data)

# Find Iceland's row in the displayed (arranged) table so row_spec() bolds the right line
which_iceland <- which(summary_tbl$country == "Iceland")

summary_tbl %>% 
    set_names(c("Country", "Entry", "Days in data", "Start", "End")) %>% 
    kable(caption = "Table 1. Summary information about countries used in modeling",
          align = c("l", rep("c", 4)),
          digits = 3) %>% 
    kable_styling(bootstrap_options = c("striped", "hover")) %>% 
    row_spec(which_iceland, bold = TRUE) %>% 
    add_header_above(c(" " = 3, "Rate per 1000" = 2)) %>%
    scroll_box(height = "500px")
Table 1. Summary information about countries used in modeling
Country Entry Days in data Start (per 1000) End (per 1000)
Sudan 2020-05-07 14 0.020 0.062
Central African Republic 2020-05-05 16 0.020 0.088
Tajikistan 2020-05-05 16 0.025 0.190
Sierra Leone 2020-05-03 18 0.020 0.070
Rwanda 2020-05-02 19 0.020 0.025
India 2020-04-27 24 0.021 0.079
Mali 2020-04-27 24 0.020 0.047
Sri Lanka 2020-04-26 25 0.021 0.047
Guatemala 2020-04-24 27 0.022 0.124
Somalia 2020-04-24 27 0.022 0.100
Eswatini 2020-04-23 28 0.023 0.152
Sao Tome And Principe 2020-04-23 28 0.033 1.189
Bangladesh 2020-04-22 29 0.021 0.156
Fiji 2020-04-21 30 0.020 0.020
Liberia 2020-04-21 30 0.021 0.048
Indonesia 2020-04-17 34 0.021 0.069
Senegal 2020-04-17 34 0.021 0.165
Afghanistan 2020-04-16 35 0.021 0.206
Cape Verde 2020-04-15 36 0.020 0.616
Congo 2020-04-16 36 0.021 0.080
Equatorial Guinea 2020-04-15 36 0.031 0.680
Ghana 2020-04-15 36 0.021 0.205
El Salvador 2020-04-14 37 0.023 0.233
Egypt 2020-04-13 38 0.021 0.137
Paraguay 2020-04-13 38 0.021 0.119
Cote Divoire 2020-04-12 39 0.021 0.086
Guinea 2020-04-12 39 0.020 0.231
Guinea Bissau 2020-04-12 39 0.020 0.554
Niger 2020-04-12 39 0.022 0.041
Uzbekistan 2020-04-12 39 0.024 0.087
Gabon 2020-04-10 41 0.021 0.709
Belize 2020-04-09 42 0.021 0.047
Bolivia 2020-04-09 42 0.023 0.395
Burkina Faso 2020-04-09 42 0.021 0.041
Pakistan 2020-04-09 42 0.020 0.216
Mexico 2020-04-08 43 0.022 0.431
Jamaica 2020-04-07 44 0.020 0.177
Morocco 2020-04-05 46 0.026 0.195
Cameroon 2020-04-04 47 0.020 0.140
Kyrgyzstan 2020-04-04 47 0.023 0.201
Ukraine 2020-04-04 47 0.022 0.423
Belarus 2020-04-03 48 0.027 3.322
Cuba 2020-04-03 48 0.021 0.166
Iraq 2020-04-03 48 0.020 0.094
Japan 2020-04-03 48 0.021 0.129
Kazakhstan 2020-04-03 48 0.021 0.381
Russia 2020-04-03 48 0.025 2.076
Algeria 2020-04-02 49 0.020 0.175
Colombia 2020-04-02 49 0.021 0.341
Guyana 2020-04-02 49 0.024 0.160
Honduras 2020-04-02 49 0.023 0.308
Philippines 2020-04-02 49 0.022 0.121
Argentina 2020-03-31 51 0.022 0.198
Djibouti 2020-03-31 51 0.027 1.583
Thailand 2020-03-31 51 0.024 0.044
Azerbaijan 2020-03-30 52 0.021 0.354
Brazil 2020-03-30 52 0.020 1.297
Palestine 2020-03-29 52 0.021 0.125
Tunisia 2020-03-30 52 0.024 0.090
Peru 2020-03-29 53 0.021 3.110
Saint Lucia 2020-03-29 53 0.022 0.099
Jordan 2020-03-28 54 0.021 0.065
Puerto Rico 2020-03-28 54 0.020 0.878
South Africa 2020-03-28 54 0.020 0.298
Bahamas 2020-03-27 55 0.023 0.249
Georgia 2020-03-27 55 0.021 0.189
Curaçao 2020-03-24 56 0.038 0.100
Oman 2020-03-26 56 0.020 1.174
Hungary 2020-03-25 57 0.023 0.368
Poland 2020-03-25 57 0.024 0.507
Saudi Arabia 2020-03-25 57 0.023 1.776
Turkey 2020-03-25 57 0.023 1.842
Aruba 2020-03-20 58 0.038 0.954
Dominican Republic 2020-03-24 58 0.023 1.244
Kosovo 2020-03-24 58 0.033 0.536
New Caledonia 2020-03-24 58 0.025 0.063
United Arab Emirates 2020-03-24 58 0.021 2.602
Mauritius 2020-03-23 59 0.022 0.262
New Zealand 2020-03-23 59 0.021 0.236
Romania 2020-03-23 59 0.022 0.883
Bosnia And Herzegovina 2020-03-22 60 0.028 0.698
Bulgaria 2020-03-22 60 0.023 0.326
Moldova 2020-03-22 60 0.023 1.788
Serbia 2020-03-22 60 0.021 1.537
Trinidad And Tobago 2020-03-22 60 0.035 0.083
Barbados 2020-03-21 61 0.021 0.314
Chile 2020-03-21 61 0.023 2.647
Costa Rica 2020-03-21 61 0.023 0.176
Ecuador 2020-03-21 61 0.025 1.999
French Polynesia 2020-03-21 61 0.054 0.216
Lithuania 2020-03-21 61 0.025 0.560
Canada 2020-03-20 62 0.023 2.134
Croatia 2020-03-20 62 0.025 0.546
Jersey 2020-03-20 62 0.047 2.837
Lebanon 2020-03-20 62 0.022 0.139
Montenegro 2020-03-20 62 0.021 0.521
Slovakia 2020-03-20 62 0.023 0.274
Albania 2020-03-19 63 0.021 0.331
Australia 2020-03-19 63 0.023 0.283
North Macedonia 2020-03-19 63 0.020 0.883
United States 2020-03-19 63 0.029 4.672
Uruguay 2020-03-19 63 0.023 0.214
Armenia 2020-03-18 64 0.026 1.708
Latvia 2020-03-18 64 0.032 0.525
Malaysia 2020-03-18 64 0.021 0.221
Panama 2020-03-18 64 0.021 2.362
Cyprus 2020-03-16 66 0.025 0.772
Maldives 2020-03-16 66 0.025 2.216
Portugal 2020-03-16 66 0.024 2.863
United Kingdom 2020-03-16 66 0.021 3.742
Czechia 2020-03-15 67 0.020 0.814
Greece 2020-03-15 67 0.021 0.265
Ireland 2020-03-15 67 0.027 4.997
Israel 2020-03-15 67 0.020 1.874
Kuwait 2020-03-14 68 0.024 4.052
Luxembourg 2020-03-13 68 0.043 6.513
Malta 2020-03-14 68 0.037 1.158
Estonia 2020-03-13 69 0.020 1.356
Finland 2020-03-13 69 0.028 1.160
Germany 2020-03-13 69 0.029 2.122
Brunei Darussalam 2020-03-12 70 0.026 0.329
Slovenia 2020-03-12 70 0.028 0.710
Austria 2020-03-11 71 0.021 1.838
Denmark 2020-03-11 71 0.046 1.905
Netherlands 2020-03-11 71 0.022 2.568
Belgium 2020-03-10 72 0.021 4.884
France 2020-03-10 72 0.021 2.141
Sweden 2020-03-10 72 0.024 3.024
Spain 2020-03-08 73 0.023 4.966
Norway 2020-03-07 75 0.021 1.554
Switzerland 2020-03-07 75 0.025 3.585
Singapore 2020-03-06 76 0.021 5.107
Iceland 2020-03-02 78 0.027 4.949
Iran 2020-03-04 78 0.029 1.523
Italy 2020-03-02 80 0.028 3.751
Bahrain 2020-02-27 83 0.021 4.799
p <- d %>% 
    ggplot(aes(days, case_rate, group = country, col = country == "Iceland")) +
    geom_line() +
    scale_y_log10() +
    scale_colour_manual(values = c("grey", "blue")) +
    labs(x = "Days since rate reached 0.02 per 1000",
         y = "Cases per 1000",
         title = "Observed trends for countries used in data",
         subtitle = "Shown as days since a country entered the modeling data")
ggplotly(p)

Results

Posterior Predictions

knitr::include_graphics("calibration_newcases.png")

Model Comparison

read_csv("loo_compare.csv") %>% 
    set_names(c("Moodel", "elpd", "p", "diff", "weight")) %>% 
    kable(format = "markdown", align = c("l", rep("c", ncol(.) - 1)), digits = 4)
Model elpd p diff weight
Generalized Logistic -33903 (se=207) 678 (se=41) 0 (se=0) 0.9493
Logistic -34121 (se=206) 634 (se=35) -218 (se=24) 0.0507