The `epidist` package enables users to estimate delay distributions while accounting for common reporting biases. This vignette first introduces the background (Park et al. 2024) required to understand the models implemented in `epidist`. It then goes on to explain each model in turn.
Estimating a delay distribution may appear to be simple: one could imagine fitting a probability distribution to a set of observed delays. However, during an ongoing outbreak, observed delays are often biased. We begin by presenting a formalism for characterizing delay distributions, as well as the two main biases (truncation and censoring) affecting them. We then present statistical models for correcting for these biases.
Any epidemiological delay requires a primary (starting) and a secondary (ending) event. For example, the incubation period measures the time between infection (primary event) and symptom onset (secondary event). Here, we use p to denote the time of the primary event, and s to denote the time of the secondary event.
We can measure any delay distribution in two different ways. First, we can measure the forward distribution $f_p(\tau)$, starting from a cohort of individuals who experienced the primary event at the same time $p$ and looking at when they experienced their secondary event. Second, we can measure the backward distribution $b_s(\tau)$, starting from a cohort of individuals who experienced the secondary event at the same time $s$ and looking at when they experienced their primary event. While the length of each individual delay $\tau = s - p$ remains constant whether we look at it forwards or backwards, the shape of the distribution is affected by the difference in perspective due to the difference in cohort composition.
To illustrate their differences, let’s assume that primary and secondary events occur at rates, or equivalently incidences, $\mathcal{P}(p)$ and $\mathcal{S}(s)$, respectively. Then, the total density $\mathcal{T}(p, s)$ of individuals with a primary event at time $p$ and secondary event at time $s$ can be expressed equivalently in terms of both forward and backward distributions: $$ \mathcal{T}(p, s) = \mathcal{P}(p) f_p(s - p) = \mathcal{S}(s) b_s(s - p). (\#eq:forwards-backwards) $$ Rearranging Equation @ref(eq:forwards-backwards) gives $$ b_s(\tau) = \frac{\mathcal{P}(s - \tau) f_{s - \tau}(\tau)}{\mathcal{S}(s)}. (\#eq:forwards-backwards2) $$ The denominator of Equation @ref(eq:forwards-backwards2), which corresponds to the incidence of secondary events, may be expressed as the integral over all possible delays $$ \mathcal{S}(s) = \int_{-\infty}^\infty \mathcal{P}(s - \tau) f_{s - \tau}(\tau) \text{d} \tau, $$ such that $$ b_s(\tau) = \frac{\mathcal{P}(s - \tau) f_{s - \tau}(\tau)}{\int_{-\infty}^\infty \mathcal{P}(s - \tau) f_{s - \tau}(\tau) \text{d} \tau}. $$ Here, we see that $b_s(\tau)$ depends not only on $f_{s - \tau}(\tau)$ but also on $\mathcal{P}(s - \tau)$, meaning that past changes in the incidence pattern will affect the shape of the distribution. In particular, when an epidemic is growing, we are more likely to observe shorter delays, causing an underestimation of the mean delay. Therefore, we always want to characterize epidemiological delays from the forward perspective and estimate the forward distribution. For this reason, our current methodology focuses on biases that affect the estimation of the forward distribution.
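To see this effect in action, the following minimal simulation (a Python sketch with arbitrary illustrative parameters, not part of `epidist`) draws primary events from an exponentially growing incidence curve, attaches lognormal delays, and compares the mean delay in a forward cohort with that in a backward cohort:

```python
import math
import random

random.seed(1)

# Assumed illustrative values: growth rate, observation window, and a
# lognormal forward delay distribution (true mean ~5.1 days).
r = 0.15
T0 = 60.0
meanlog, sdlog = 1.5, 0.5
n = 200_000

def sample_primary():
    # Inverse transform sampling for incidence proportional to exp(r * p)
    # on [0, T0].
    u = random.random()
    return math.log(1 + u * (math.exp(r * T0) - 1)) / r

primary = [sample_primary() for _ in range(n)]
delays = [random.lognormvariate(meanlog, sdlog) for _ in range(n)]
secondary = [p + d for p, d in zip(primary, delays)]

# Forward cohort: primary event in a narrow window -> unbiased delays.
forward = [d for p, d in zip(primary, delays) if 40 <= p <= 42]
# Backward cohort: secondary event in a narrow window -> shorter delays
# are over-represented because incidence is growing.
backward = [d for s, d in zip(secondary, delays) if 50 <= s <= 52]

forward_mean = sum(forward) / len(forward)
backward_mean = sum(backward) / len(backward)
print(f"forward mean:  {forward_mean:.2f}")
print(f"backward mean: {backward_mean:.2f}")
```

Because incidence is growing, the backward cohort over-represents short delays, so its mean falls below the forward (true) mean.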
One key bias that affects the forward distribution is right truncation. Right truncation refers to the bias arising from the inability to observe future events and occurs when we observe data based on the secondary events. For example, assume the data are right truncated and we don’t observe secondary events past time $T$. Then, we will only observe delays whose secondary events occurred before time $T$, causing us to underestimate the mean of the distribution, as these delays will on average be shorter.
Bias from right truncation is greater when events are more likely to have occurred recently. A common example of severely right-truncated data is data collected during an outbreak whose incidence is growing exponentially (so a randomly chosen event is much more likely to be recent). On the other hand, if data collection continues until the end of an outbreak, then far fewer events will be recent and there will be little right truncation in general.
Mathematically, right truncation can be described as follows. Let $P$ and $S$ be random variables representing the times of the primary and secondary events, and let $F_p$ be the forward cumulative distribution function. Then, the probability of observing a delay of length $\tau$, given that the primary event occurred at time $p$ and that observation is truncated at time $T$, can be written as: $$ \begin{aligned} \mathbb{P}(S = P + \tau \, | \, P = p, S < T) &= \frac{\mathbb{P}(S = P + \tau, P = p, S < T)}{\mathbb{P}(P = p, S < T)} \\ &= \frac{\mathbb{P}(S = P + \tau < T \, | \, P = p)\mathbb{P}(P = p)}{\mathbb{P}(S < T \, | \, P = p)\mathbb{P}(P = p)} \\ &= \frac{\mathbb{P}(S = P + \tau < T \, | \, P = p)}{\mathbb{P}(S < T \, | \, P = p)} \\ &= \frac{f_p(\tau)}{\int_0^{T - p} f_p(x) \text{d}x} = \frac{f_p(\tau)}{F_p(T - p)}, \quad p + \tau < T. (\#eq:right-truncation) \end{aligned} $$
Examining Equation @ref(eq:right-truncation) shows that (right) truncation renormalises the density over the values that remain observable. For example, if the distribution $x \sim \text{Unif}(0, 1)$ were right truncated at $T = 0.5$, then the observed distribution would be $x \sim \text{Unif}(0, 0.5)$.
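This renormalisation can be checked with a quick simulation (a Python sketch; the values are arbitrary):

```python
import random

random.seed(2)

# Keep only draws from Unif(0, 1) whose value falls below T = 0.5: by
# Equation @ref(eq:right-truncation) the conditional density is
# f(x) / F(T) = 1 / 0.5 = 2 on (0, 0.5), i.e. exactly Unif(0, 0.5).
T = 0.5
truncated = [x for x in (random.random() for _ in range(100_000)) if x < T]

mean = sum(truncated) / len(truncated)
print(f"mean of truncated sample: {mean:.3f}")  # close to 0.25, the mean of Unif(0, 0.5)
```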
The exact timing of epidemiological events is often unknown. Instead, we may only know that the event happened within a certain interval. We refer to this as interval censoring. A very common example of interval censoring in epidemiology is date censoring, where we only know, or are using, data to the day of an event rather than the precise time. Other forms of interval censoring, like weekly or monthly interval censoring, are also common. When both primary and secondary events are interval censored, this is referred to as double censoring.
Mathematically, single interval censoring is defined as follows. Assume the secondary event $S$ is censored, so we don’t know exactly when the event happened; we only know that it happened between $S_L$ and $S_R$. Then, $$ \begin{aligned} \mathbb{P}(S_L < S < S_R \, | \, P = p) &= \int_{S_L}^{S_R} f_p(y-p) dy\\ &= F_p(S_R-p) - F_p(S_L-p) \end{aligned} $$
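This probability reduces to a difference of CDF values, which the following sketch (in Python, with a hypothetical lognormal forward distribution) computes and cross-checks by simulation:

```python
import math
import random

random.seed(3)

# Hypothetical lognormal forward delay distribution.
meanlog, sdlog = 1.5, 0.5

def lognorm_cdf(x):
    # CDF of the lognormal, written via the error function.
    if x <= 0:
        return 0.0
    return 0.5 * (1 + math.erf((math.log(x) - meanlog) / (sdlog * math.sqrt(2))))

p = 10.0               # known primary event time
s_l, s_r = 14.0, 15.0  # secondary event censored to a one-day interval

# Censored observation probability: F_p(S_R - p) - F_p(S_L - p).
prob = lognorm_cdf(s_r - p) - lognorm_cdf(s_l - p)

# Monte Carlo check: simulate delays and count how many land in the interval.
n = 200_000
hits = sum(s_l < p + random.lognormvariate(meanlog, sdlog) < s_r for _ in range(n))
print(f"analytic:    {prob:.4f}")
print(f"monte carlo: {hits / n:.4f}")
```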
Similarly, double interval censoring is defined as follows. Now, assume that both the primary $P$ and secondary $S$ events are censored. We only know that the primary event happened between $P_L$ and $P_R$ and the secondary event happened between $S_L$ and $S_R$. We now write $g_P$ to denote the unconditional distribution of primary events. Then, $$ \begin{aligned} \mathbb{P}(S_L < S < S_R \, | \, P_L < P < P_R) &= \mathbb{P}(P_L < P < P_R, S_L < S < S_R \, | \, P_L < P < P_R)\\ &= \frac{\mathbb{P}(P_L < P < P_R, S_L < S < S_R)}{\mathbb{P}(P_L < P < P_R)}\\ &= \frac{\int_{P_L}^{P_R} \int_{S_L}^{S_R} g_P(x) f_x(y-x) \,dy\, dx}{\int_{P_L}^{P_R} g_P(z)\, dz }\\ &= \int_{P_L}^{P_R} \int_{S_L}^{S_R} g_P(x\,|\,P_L,P_R) f_x(y-x) \,dy\, dx \end{aligned} $$ where $g_P(x\,|\,P_L,P_R)$ represents the conditional distribution of the primary event given lower $P_L$ and upper $P_R$ bounds.
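The double integral can be evaluated numerically. The sketch below (Python; it assumes a uniform conditional distribution $g_P$ within the window and a hypothetical lognormal delay) reduces the inner integral over $y$ to a CDF difference and cross-checks the result by simulation:

```python
import math
import random

random.seed(4)

# Hypothetical lognormal forward delay distribution.
meanlog, sdlog = 1.5, 0.5

def lognorm_cdf(x):
    if x <= 0:
        return 0.0
    return 0.5 * (1 + math.erf((math.log(x) - meanlog) / (sdlog * math.sqrt(2))))

p_l, p_r = 10.0, 11.0  # primary event censored to a one-day interval
s_l, s_r = 14.0, 15.0  # secondary event censored to a one-day interval

# Assume g_P(x | P_L, P_R) is uniform on [P_L, P_R] (constant incidence
# within the window). The inner integral over y is F(S_R - x) - F(S_L - x),
# so the double integral reduces to one integral, done by the midpoint rule.
m = 1000
width = (p_r - p_l) / m
prob = sum(
    lognorm_cdf(s_r - x) - lognorm_cdf(s_l - x)
    for x in (p_l + (i + 0.5) * width for i in range(m))
) / m

# Monte Carlo check under the same assumptions.
n = 200_000
hits = 0
for _ in range(n):
    x = random.uniform(p_l, p_r)
    if s_l < x + random.lognormvariate(meanlog, sdlog) < s_r:
        hits += 1
print(f"numerical:   {prob:.4f}")
print(f"monte carlo: {hits / n:.4f}")
```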
The simplest approach to modelling epidemiological delay distributions is to ignore truncation and censoring biases and simply treat the delays as continuous, fully observed data. Then, the likelihood of observing a delay $Y_i$ given parameters $\theta$ is straightforward: $$ \mathcal{L}(Y_i \, | \, \theta) = f(y_i - x_i), $$ where $x_i$ and $y_i$ are the observed primary and secondary event times.
As shown in Park et al. (2024), when the data are double censored, this modelling approach biases estimates of both the mean and the standard deviation, with the mean biased by approximately a day. Where right truncation is also present, biases can be more severe, with plausible simulated scenarios leading to estimated means that were >30% shorter than those of the true distributions.
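The size of this bias is easy to reproduce by simulation. The following Python sketch (with arbitrary illustrative parameters, not the analysis of Park et al. 2024) generates right-truncated delays during exponential growth and fits a lognormal naively:

```python
import math
import random

random.seed(5)

# Assumed illustrative values: fast growth, truncation at T, and a
# lognormal delay distribution with true mean ~5.1 days.
r = 0.2
T = 30.0
meanlog, sdlog = 1.5, 0.5
true_mean = math.exp(meanlog + sdlog**2 / 2)

def sample_primary():
    # Incidence proportional to exp(r * p) on [0, T], via inverse transform.
    u = random.random()
    return math.log(1 + u * (math.exp(r * T) - 1)) / r

n = 200_000
observed = []
for _ in range(n):
    p = sample_primary()
    d = random.lognormvariate(meanlog, sdlog)
    if p + d < T:  # right truncation: only pairs with secondary event < T
        observed.append(d)

# Naive lognormal fit: the MLE is just the mean / sd of the log delays.
logs = [math.log(d) for d in observed]
mu_hat = sum(logs) / len(logs)
sigma_hat = math.sqrt(sum((lv - mu_hat) ** 2 for lv in logs) / len(logs))
naive_mean = math.exp(mu_hat + sigma_hat**2 / 2)

print(f"true mean delay: {true_mean:.2f}")
print(f"naive estimate:  {naive_mean:.2f}")  # substantially shorter
```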
This approach aims to account for right truncation and double censoring using a generative modelling approach. For each event, a latent variable is used to represent the exact time of the event. This then allows modelling of the continuous delay distribution, adjusted for right truncation. Whilst this is an approximation, Park et al. (2024) showed good recovery of simulated distributions in a range of settings. However, the use of two latent variables per observed delay means that this approach may scale poorly to larger datasets. That being said, this approach has been used successfully in multiple real-world outbreak settings ((ward2022transmission?)).
Mathematically, this model is described as follows. We look at the conditional probability that the secondary event $S$ falls between $S_L$ and $S_R$, given that the primary event $P$ falls between $P_L$ and $P_R$ and that the secondary event $S$ occurs before the truncation time $T$: $$ \begin{aligned} \mathbb{P}(S_L < S < S_R \, | \, P_L < P < P_R, S<T) &= \mathbb{P}(P_L < P < P_R, S_L < S < S_R, S<T \, | \, P_L < P < P_R, S<T)\\ &= \frac{\mathbb{P}(P_L < P < P_R, S_L < S < S_R, S<T)}{\mathbb{P}(P_L < P < P_R, S<T)}\\ &= \frac{\int_{P_L}^{P_R} \int_{S_L}^{S_R} g_P(x) f_x(y-x) \,\text{d}y\, \text{d}x}{\int_{P_L}^{P_R} \int_{z}^{T} g_P(z) f_z(w-z) \,\text{d}w\, \text{d}z}\\ &= \frac{\int_{P_L}^{P_R} \int_{S_L}^{S_R} g_P(x) f_x(y-x) \,\text{d}y\, \text{d}x}{\int_{P_L}^{P_R} g_P(z) F_z(T-z) \,\text{d}z}\\ &= \frac{\int_{P_L}^{P_R} \int_{S_L}^{S_R} g_P(x\,|\,P_L, P_R) f_x(y-x) \,\text{d}y\, \text{d}x}{\int_{P_L}^{P_R} g_P(z\,|\,P_L, P_R) F_z(T-z) \,\text{d}z}\\ \end{aligned} $$ Using latent variables, we can now rewrite the observation likelihood as: $$ \begin{aligned} x_i &\sim g_P(x_i \, | \, p_{L, i}, p_{R, i}) \\ y_i &\sim \text{Unif}(s_{L, i}, s_{R, i}) \\ \mathcal{L}(\mathbf{Y} \, | \, \boldsymbol{\theta}) &= \prod_i \left[ \frac{f_{x_i}(y_i - x_i)}{\int_{P_{L, i}}^{P_{R, i}} g_P(z \, | \, p_{L, i}, p_{R, i}) F_z(T - z) \text{d}z} \right]. \end{aligned} $$ As before, $g_P(z \, | \, p_{L, i}, p_{R, i})$ represents the conditional distribution of the primary event given lower $p_{L, i}$ and upper $p_{R, i}$ bounds; this is equivalent to modelling the incidence of primary events.
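As an illustration only (not the package’s implementation), the likelihood contribution of a single observation can be sketched numerically, assuming a uniform $g_P$ within the primary window and a hypothetical lognormal delay distribution:

```python
import math
import random

random.seed(6)

# Hypothetical lognormal delay distribution and truncation time.
meanlog, sdlog = 1.5, 0.5
T = 25.0

def lognorm_pdf(x):
    if x <= 0:
        return 0.0
    z = (math.log(x) - meanlog) / sdlog
    return math.exp(-0.5 * z * z) / (x * sdlog * math.sqrt(2 * math.pi))

def lognorm_cdf(x):
    if x <= 0:
        return 0.0
    return 0.5 * (1 + math.erf((math.log(x) - meanlog) / (sdlog * math.sqrt(2))))

# One observation: day-censored primary and secondary event windows.
p_l, p_r = 10.0, 11.0
s_l, s_r = 14.0, 15.0

# Latent event times, drawn uniformly within their censoring windows
# (taking g_P as uniform within the window, a common simplification).
x = random.uniform(p_l, p_r)
y = random.uniform(s_l, s_r)

# Denominator: integral of g_P(z | P_L, P_R) * F(T - z) over the primary
# window, evaluated by the midpoint rule.
m = 1000
width = (p_r - p_l) / m
denom = sum(lognorm_cdf(T - (p_l + (i + 0.5) * width)) for i in range(m)) / m

likelihood = lognorm_pdf(y - x) / denom
print(f"latent delay: {y - x:.2f}, likelihood contribution: {likelihood:.4f}")
```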