Notes on stochastic counting processes with independent, non-stationary increments

Problem Statement

Let \mathbb{P}(X_{ij}>t)=e^{-\lambda_i t}, i.e. X_{ij} \sim \mathrm{Exponential}(\lambda_i). From these we construct the arrival times T_{jn} = \sum_{i=1}^n X_{ij}; random variables of this form are called hypoexponential. We then define a counting process by N_j(t) = \max\{n \mid T_{jn} < t\} (with T_{j0}=0). This allows us to model a certain relation between events, for example a shorter expected waiting time for the second event once the first event has been observed.
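To make the setup concrete, here is a minimal simulation sketch in Python/NumPy; the helper names (simulate_T, N_of_t) and the rate values are chosen for illustration, not taken from any library.

import numpy as np

rng = np.random.default_rng(0)

def simulate_T(rates, n_paths):
    # X[i, j] ~ Exponential(rates[i]); NumPy parametrises by the scale 1/lambda.
    X = rng.exponential(1.0 / np.asarray(rates)[:, None], size=(len(rates), n_paths))
    # Row i holds T_{j,i+1} = X_{1j} + ... + X_{i+1,j} (0-based rows).
    return np.cumsum(X, axis=0)

def N_of_t(T, t):
    # N_j(t) = max{n | T_{jn} < t} = number of arrival times strictly below t.
    return (T < t).sum(axis=0)

T = simulate_T([2.0, 5.0, 1.0], 10_000)   # illustrative rates: the second stage is the fastest
print(N_of_t(T, 1.0).mean())              # Monte Carlo estimate of E(N_j(1))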

Statement 1: N_j(t) is not a Poisson process.
To see this, fix a realisation with 0<T_1<t_1<T_2<T_3<t_2 and t_2-t_1<T_3 (suppressing the index j). The increment N(t_2)-N(t_1) then counts arrivals whose waiting times are governed by X_2 and X_3, while N(t_2-t_1) counts arrivals governed by X_1 and X_2. The density of X_i + X_{i+1} is \frac{\lambda_i}{\lambda_i- \lambda_{i+1}} \lambda_{i+1} e^{-\lambda_{i+1} t} + \frac{\lambda_{i+1}}{\lambda_{i+1} - \lambda_i} \lambda_i e^{-\lambda_i t}, so for \lambda_1 \neq \lambda_3 the two increments have different distributions; the process is therefore non-stationary and cannot be a Poisson process.
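A quick numerical illustration of the non-stationarity, reusing NumPy from the sketch above (hypo2_pdf is a name chosen here; the evaluation point 0.7 is arbitrary):

def hypo2_pdf(t, lam_a, lam_b):
    # Density of X_a + X_b for independent exponentials with distinct rates.
    return (lam_a / (lam_a - lam_b) * lam_b * np.exp(-lam_b * t)
            + lam_b / (lam_b - lam_a) * lam_a * np.exp(-lam_a * t))

print(hypo2_pdf(0.7, 2.0, 5.0))   # law governing a window starting at 0   (X_1 + X_2)
print(hypo2_pdf(0.7, 5.0, 1.0))   # law governing the window (t_1, t_2]    (X_2 + X_3)

The two values differ whenever \lambda_1 \neq \lambda_3, which is exactly the point of Statement 1.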

Statement 2: The (hypoexponential) density of T_{jn} is given by f(t)= \sum_{i=1}^n C_{i,n} \lambda_i e^{-\lambda_i t} for t>0 and 0 otherwise, where C_{i,n} = \prod_{j \neq i} \frac{\lambda_j}{\lambda_j - \lambda_i} (for pairwise distinct rates), and thus \mathbb{E}(T_{jn})= \int_{0}^\infty \sum_{i=1}^n t\, C_{i,n} \lambda_i e^{-\lambda_i t}\, dt = \sum_{i=1}^n \frac{C_{i,n}}{\lambda_i}.
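The coefficients C_{i,n} are easy to compute and check numerically; a small sketch (hypo_coeffs is an illustrative name, and the identity \sum_i C_{i,n}/\lambda_i = \sum_i 1/\lambda_i follows from linearity of the expectation of T_{jn}):

def hypo_coeffs(rates):
    # C_{i,n} = prod_{j != i} lambda_j / (lambda_j - lambda_i); rates must be pairwise distinct.
    lam = np.asarray(rates, dtype=float)
    C = np.empty_like(lam)
    for i in range(len(lam)):
        others = np.delete(lam, i)
        C[i] = np.prod(others / (others - lam[i]))
    return C

rates = [2.0, 5.0, 1.0]
C = hypo_coeffs(rates)
print(np.sum(C / rates), sum(1.0 / r for r in rates))   # both equal E(T_{jn}) = 1.7 here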

(1)   \begin{eqnarray*}\mathbb{E}(N_j(t))=\sum_m m\, \mathbb{P}(N_j(t)=m)= \sum_m m\, \mathbb{P}\left(T_{jm}<t \leq T_{j,m+1}\right) \\= \sum_m m \left(\sum_{k=1}^{m+1} C_{k,m+1} e^{-\lambda_k t} - \sum_{k=1}^{m} C_{k,m} e^{-\lambda_k t}\right),\end{eqnarray*}

since the events are nested: \mathbb{P}\left(T_{jm}<t \leq T_{j,m+1}\right) = \mathbb{P}(T_{jm}<t) - \mathbb{P}(T_{j,m+1}<t).
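Equation (1) can be checked numerically via the equivalent tail-sum form \mathbb{E}(N_j(t)) = \sum_{m=1}^n \mathbb{P}(T_{jm}<t). A sketch reusing the helpers above (hypo_cdf is another illustrative name):

def hypo_cdf(t, rates):
    # P(T_{jm} < t) = 1 - sum_k C_{k,m} exp(-lambda_k t), using the first m rates.
    lam = np.asarray(rates, dtype=float)
    return 1.0 - np.sum(hypo_coeffs(lam) * np.exp(-lam * t))

rates, t = [2.0, 5.0, 1.0], 1.0
EN = sum(hypo_cdf(t, rates[:m]) for m in range(1, len(rates) + 1))   # tail-sum form of (1)
print(EN, N_of_t(simulate_T(rates, 200_000), t).mean())              # closed form vs. Monte Carlo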

Estimation

To estimate \lambda_i we construct a different process. For fixed t, let N_{(j)}(t) denote the N_j(t) sorted in descending order, and set S_i(t)=\max\{j \mid N_{(j)}(t) \geq i\}; this is the number of processes in which the i-th event has occurred by time t. Let E_i(t) = \sum_{j=1}^{S_i(t)} X_{i(j)}, where (j) follows the ordering of N_{(j)}(t); this quantity is often referred to as the exposure. \lambda_i is then estimated by \hat{\lambda}_i=\frac{S_i(t)}{E_i(t)}.
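A sketch of this estimator on simulated data, reusing rng from above (lambda_hat is a hypothetical helper; with fully simulated paths the explicit sorting is unnecessary, since we can test T_{ji} < t directly):

def lambda_hat(rates, t, i, n_paths=100_000):
    # 1-based stage index i: estimate rates[i-1] from paths where the i-th event occurred before t.
    lam = np.asarray(rates, dtype=float)
    X = rng.exponential(1.0 / lam[:, None], size=(len(lam), n_paths))
    T = np.cumsum(X, axis=0)
    reached = T[i - 1] < t                           # S_i(t) counts these paths
    S, E = reached.sum(), X[i - 1, reached].sum()    # exposure E_i(t)
    return S / E if S > 0 else float("nan")

print(lambda_hat([2.0, 5.0, 1.0], t=1.0, i=2))   # estimates lambda_2 = 5; biased at finite t (analysed below)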

By [1], the sum of the exponential random variables attached to the top order statistics satisfies, conditional on S_i(t)=m, E_i(t) \sim \Gamma(\alpha= m, \beta= \lambda_i); the marginal density of E_i(t) is therefore the mixture

(2)   \begin{eqnarray*}f_{E_i(t)}(x) = \sum_{m=1}^{n} \frac{\lambda_i^m x^{m-1}}{\Gamma(m)}\, e^{-\lambda_i x}\, \mathbb{P}\left(S_i(t)=m\right).\end{eqnarray*}

Note that X_{ij} and S_i(t) are not independent; in particular

(3)   \begin{eqnarray*}\mathbb{E}(S_i(t)) = \sum_{j=1}^n \mathbb{P}\left(\sum_{k=1}^i X_{kj} \leq t\right) = n \int_0^t \sum_{k=1}^i C_{k,i} \lambda_k e^{-\lambda_k x}\, dx = n - n \sum_{k=1}^i C_{k,i} e^{-\lambda_k t}, \\ \mathbb{P}\left(S_i(t)=m\right) = \binom{n}{m} \left(1 - \sum_{k=1}^i C_{k,i} e^{-\lambda_k t}\right)^m \left(\sum_{k=1}^i C_{k,i} e^{-\lambda_k t} \right)^{n-m}\end{eqnarray*}

(see Statement 2).
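The formula for \mathbb{E}(S_i(t)) is easy to validate with the helpers above (values illustrative; a single simulated draw of S_i(t) is compared against its expectation):

rates, t, n, i = [2.0, 5.0, 1.0], 1.0, 10_000, 2
p = hypo_cdf(t, rates[:i])                    # P(T_{ji} <= t), from Statement 2
S = (simulate_T(rates, n)[i - 1] < t).sum()   # one realisation of S_i(t)
print(n * p, S)                               # E(S_i(t)) = n p vs. simulation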


If the realisation of X_{ij} is relatively small, then for a given t the probability that the corresponding event has been observed is higher. For simplicity we first consider the special case t^*_i = \max_j(T_{ji}); then S_i(t^*_i)=n, and the sum of the exponential random variables satisfies y = E_i(t^*_i) \sim \Gamma(\alpha= n, \beta= \lambda_i), thus

(4)   \begin{eqnarray*}\mathbb{E}\left(\hat{\lambda}_i\right) &=& \int_0^\infty \frac{n}{y}\,\frac{\lambda_i^n}{\Gamma(n)}\,y^{n-1}e^{-\lambda_i y}\,dy = \frac{n\, \lambda_i^{n}\, \Gamma(n-1)}{\Gamma(n)\, \lambda_i^{n-1}} = \frac{n}{n-1}\, \lambda_i, \\ \mathrm{Var}\left(\hat{\lambda}_i\right) &=& \int_0^\infty \left(\frac{n}{y}\right)^2 \frac{\lambda_i^n}{\Gamma(n)}\,y^{n-1}e^{-\lambda_i y}\,dy - \mathbb{E}\left(\hat{\lambda}_i\right)^2 = \frac{n^2\,\lambda_i^2}{(n-1)(n-2)} - \frac{n^2\,\lambda_i^2}{(n-1)^2} = \frac{n^2}{(n-1)^2(n-2)}\, \lambda_i^2.\end{eqnarray*}
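Both moments in (4) are straightforward to verify by Monte Carlo, reusing rng from above (n and lam are arbitrary illustrative values):

n, lam = 8, 3.0
y = rng.gamma(shape=n, scale=1.0 / lam, size=200_000)   # E_i(t*) ~ Gamma(n, lambda_i)
lam_hat_star = n / y
print(lam_hat_star.mean(), n / (n - 1) * lam)                        # ~ 3.429: the n/(n-1) bias
print(lam_hat_star.var(), n**2 / ((n - 1)**2 * (n - 2)) * lam**2)    # ~ 1.959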


Therefore \frac{n-1}{n} \hat{\lambda}_i is unbiased, \mathbb{E}\left(\frac{n-1}{n} \hat{\lambda}_i\right) = \lambda_i, while the MSE of \hat{\lambda}_i is \mathbb{E}(\hat{\lambda}_i - \lambda_i)^2 = \frac{\lambda_i^2 (n+2)}{(n-1)(n-2)}. This reflects the well-known trade-off between unbiasedness and minimal MSE: among estimators of the form c/E_i(t^*_i), the unbiased choice c=n-1 has MSE \frac{\lambda_i^2}{n-2}, while the biased choice c=n-2 achieves the smaller MSE \frac{\lambda_i^2}{n-1}.
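The trade-off can be seen numerically, reusing lam_hat_star, n, and lam from the previous sketch:

unbiased = (n - 1) / n * lam_hat_star          # c = n-1: unbiased
shrunk = (n - 2) / n * lam_hat_star            # c = n-2: biased but smaller MSE
print(unbiased.mean(), lam)                                    # unbiasedness check
print(((unbiased - lam)**2).mean(), lam**2 / (n - 2))          # MSE ~ 1.5 for n=8, lam=3
print(((shrunk - lam)**2).mean(), lam**2 / (n - 1))            # MSE ~ 1.286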

For the general case

(5)   \begin{eqnarray*}\mathbb{E}\left(\frac{S_i(t)-1}{S_i(t)}\, \hat{\lambda}_i\, \mathbf{1}_{\{S_i(t)>0\}} \right) &=& \sum_{m=1}^n \int_0^\infty \frac{m-1}{m}\, \frac{m}{y}\, \frac{\lambda_i^m y^{m-1}}{\Gamma(m)}\, e^{-\lambda_i y}\, \mathbb{P}\left(S_i(t)=m\right) dy = \lambda_i \sum_{m=2}^n \mathbb{P}\left(S_i(t)=m\right) \\ &=& \lambda_i \left(1 - \left(\sum_{k=1}^i C_{k,i} e^{-\lambda_k t}\right)^{n} - n\left(1-\sum_{k=1}^i C_{k,i} e^{-\lambda_k t}\right)\left(\sum_{k=1}^i C_{k,i} e^{-\lambda_k t}\right)^{n-1}\right),\end{eqnarray*}


thus the corrected estimator still carries a slight downward bias: the terms for m=0 and m=1 are missing (the latter because the correction factor \frac{m-1}{m} vanishes at m=1), so the bias stems from realisations in which fewer than two processes have reached the i-th event, and it disappears as n grows.
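To get a feel for the size of this bias, the multiplicative factor from (5) can be evaluated with the helpers above (illustrative values; q denotes \mathbb{P}(T_{ji} > t), and n is deliberately small so that the bias is visible):

rates, t, n, i = [2.0, 5.0, 1.0], 1.0, 5, 2
q = 1.0 - hypo_cdf(t, rates[:i])               # P(T_{ji} > t)
print(1 - q**n - n * (1 - q) * q**(n - 1))     # ~ 0.99 here; tends to 1 as n grows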

[1] H. N. Nagaraja, "Order statistics from independent exponential random variables and the sum of the top order statistics," in Advances in Distribution Theory, Order Statistics, and Inference, N. Balakrishnan, J. M. Sarabia, and E. Castillo, Eds., Boston, MA: Birkhäuser Boston, 2006, pp. 173–185.
