The problem that we will study comes from business economics. We consider a business which performs a service for its customers. We assume that customers arrive and wait in line until it is their turn for service. When their turn for service comes, the service takes a certain length of time, after which they depart. An example would be the stamp window at a post office. In this problem we are interested in the evolution of the line and in the length of time during which the server is kept busy.
We wish to make a simple mathematical model which will describe the way that customers arrive and are serviced. We assume that there is only one server. We shall measure time in some conveniently small unit. We assume that in each unit of time there is a fixed probability \(a\) that a customer arrives, and hence a probability \(1-a\) that no customer arrives, and that the time unit is small enough so that at most one customer arrives in any one time unit.
We assume that for each customer there is a probability \(p_j\) that his service time is \(j\) units long. We consider in detail two different choices for these service time probabilities. The first is called the geometric service time. This is obtained by assuming that there is a probability \(s\) that a customer finishes his service in any one time unit of service. Then \(p_j=(1-s)^{j-1}s\) for \(j=1,2,\ldots\). The second type of service that we shall consider is the constant service time. For this we assume that each customer requires a fixed number \(c\) of time units for his service.
We make the convention that the customer arrives at the beginning of a time unit and is served immediately if no other person is being served. If someone else is being served when he arrives, he joins the line waiting to be served. We shall define a busy period to be the period of time between the arrival of the first customer and the first time after this that the server finishes his task (i.e., he finishes serving a customer, and there are none in line). We are interested first in finding conditions under which the busy periods will end (i.e., the line will not grow indefinitely), second in finding the distribution of the number of customers served in a busy period, and third in finding the mean total length of a busy period.
Note. The suggestion of using the customer branching process is due to D.G. Kendall.
We shall solve this problem by forming a branching process which we shall call the customer branching process. For any customer we define the “offspring” of this customer to be all the customers that arrive during his service time. For each customer the number of offspring is thus a sum of a random number of independent 0-1 random variables: one variable for each time unit of his service, indicating whether a customer arrives in that unit, with the number of summands equal to his random service time. Denote:
\(h(t)\) - PGF of the number of arrivals (0 or 1) in one time unit;
\(k(t)\) - PGF of the number of time units of service.
Then the number of offspring has PGF (see Lecture 6) \[ f(t):=k(h(t)). \] In our example this will be the same for all customers in a busy period except for the first customer. No one can arrive during the first customer’s first unit of service, since he himself arrives during this time unit; hence, the distribution of his offspring will differ from that of the others. Therefore, we shall start the branching process after the first person has been served, and the initial state will consist of the number of people who arrive while the first person is being served.
Putting all of this together we see that the total number of customers during a busy period can be considered to be a branching process with a distribution of offspring determined by the above generating function \(f(t) = k(h(t))\), but with a random initial state, i.e., the number of customers that arrive while the first person is being served.
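As an aside not in the original text, the composition \(f(t)=k(h(t))\) can be checked symbolically. The following sympy sketch (variable names are ours) does this for the constant service time \(c=2\) worked out below and recovers the PGF \(f(t)=(1-a+at)^2\) used there.

```python
import sympy as sp

# Illustrative check: compose the two PGFs for the constant service time c = 2
# (the case worked out below); the result is the f(t) = (1 - a + a t)^2 used there.
a, t = sp.symbols('a t')
h = 1 - a + a * t                 # PGF of the number of arrivals in one time unit
k = t ** 2                        # PGF of a constant service time of c = 2 units
f = sp.expand(k.subs(t, h))       # PGF of the number of offspring of a customer
# coefficients of t^2, t, 1 are a^2, 2a(1-a), (1-a)^2 (sympy prints them expanded),
# i.e. the offspring count is binomial over the c = 2 units of service
print(sp.Poly(f, t).all_coeffs())
```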
Let us now use the results obtained previously for branching processes to study the total number of customers served during a busy period.
We know that the branching process will die out (i.e., the busy period will end) with probability 1 if and only if the mean \(\mu\) of the offspring distribution is \(\le 1\). But this mean is \[ \mu = f'(1) = k'(h(1))\cdot h'(1)= k'(1)\cdot h'(1), \] since \(h(1)=1\). Thus, the mean number of offspring of a customer is the product of the mean number of arrivals in a unit of time and the mean service time, and the busy period is sure to end if this product is at most 1.
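As a concrete check (our own illustration, not part of the original argument), with the parameter values used later in this section the product is \(0.3 \times 2 = 0.6 \le 1\) for both service models, so the busy periods are sure to end. A few lines of Python make the point:

```python
# A minimal check (our own illustration): mu = k'(1) * h'(1) for the two service models,
# with the parameter values used later in this section.
a = 0.3          # probability of an arrival in one time unit, so h'(1) = a
s = 0.5          # geometric service time: mean service time k'(1) = 1/s
c = 2            # constant service time: mean service time k'(1) = c

mu_geometric = a * (1 / s)   # 0.6
mu_constant = a * c          # 0.6

# both products are <= 1, so in either model the busy period ends with probability 1
print(mu_geometric, mu_constant)
```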
The generating function for the number of arrivals in a unit time interval is always \[ h(t) = 1 - a + at, \] since with probability \(1-a\) no one arrives and otherwise one person arrives.
We introduce some additional notation. Let \(s_n\) be the probability that the total progeny of the process equals \(n\); that is, the total number of customers served in a busy period has distribution \[ s_n=P(\ \mbox{total number of customers}\ =n). \] Let \(q(u)\) denote the corresponding generating function \[ q(u)=\sum_{n=1}^\infty s_n\, u^n. \]
We illustrate the calculation of the distribution of the number of customers in a busy period for the case of constant service time \(c=2\), i.e., when each customer is served in exactly 2 time units. In this case the generating function for the service time is \[ k(t) = t^2. \] Hence the generating function for the number of offspring of any customer after the first is \[ f(t) = k(h(t)) = (1-a+at)^2. \]
It can be proved (we omit this proof) that the total number of customers served during a busy period has generating function \[ q(u)= \frac{1}{2a}\left(1-\sqrt{1-4ua(1-a)}\right). \] If we expand this in a Taylor series, we find that the coefficients are \[ s_n=\frac{[a(1-a)]^n}{an}{2n-2 \choose n-1}. \]
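As a numerical aside (not part of the original derivation), the closed form for \(s_n\) is easy to evaluate; a minimal Python sketch for \(a=0.3\) reproduces, up to rounding, the constant-service column of Table 11.1 below.

```python
from math import comb

# Sketch: evaluate s_n = [a(1-a)]^n / (a n) * C(2n-2, n-1) for a = 0.3.
a = 0.3

def s_n(n: int) -> float:
    return (a * (1 - a)) ** n / (a * n) * comb(2 * n - 2, n - 1)

for n in range(1, 6):
    print(n, round(s_n(n), 3))
# prints approximately 0.700, 0.147, 0.062, 0.032, 0.019, agreeing (up to rounding)
# with the constant-service column of Table 11.1 below
```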
A similar procedure may be carried out for geometric service time, but the formulas are more complicated. When this is done, the generating function for the total number of customers served in a busy period is found to be \[ q(u)=\frac{s}{2}u+\frac{B}{2a}\left(1-\sqrt{1-Cu+Du^2}\right), \] where \(B=1-(1-a)(1-s)\), \(C=2as[1+(1-s)(1-a)]/B^2\), and \(D=(as/B)^2\).
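Again as an illustrative check rather than part of the original text, one may expand this \(q(u)\) symbolically, e.g. with sympy, and read off the \(s_n\); for \(a=0.3\), \(s=1/2\) the coefficients agree with the geometric-service column of Table 11.1.

```python
import sympy as sp

# Sketch: expand the geometric-service q(u) for a = 0.3, s = 1/2 and read off s_1, s_2, ...
a, s = sp.Rational(3, 10), sp.Rational(1, 2)
u = sp.symbols('u')

B = 1 - (1 - a) * (1 - s)
C = 2 * a * s * (1 + (1 - s) * (1 - a)) / B**2
D = (a * s / B) ** 2

q = s / 2 * u + B / (2 * a) * (1 - sp.sqrt(1 - C * u + D * u**2))
print(sp.series(q, u, 0, 5).removeO().evalf(3))
# coefficients are approximately 0.770, 0.096, 0.046, 0.026,
# matching the geometric-service column of Table 11.1
```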
It is interesting to compare the geometric service time with the constant service time when the arrival probabilities are the same and the mean service times are the same. The mean time for a constant service time \(c\) is, of course, \(c\). For a geometric service time the mean is \[
\sum_{j=1}^\infty j(1-s)^{j-1}s=\frac{1}{s}.
\] Hence we choose \(s=1/c\).
We do this for the case \(a=0.3\), comparing a geometric distribution with \(s=1/2\) and a constant service time \(c=2\). The values of \(s_n\) for these two cases are given in the table below.
Table 11.1 Table of \(s_n\)
| \(n\) | Geometric Service Time \(s=1/2\), \(a=0.3\) | Constant Service Time \(c=2\), \(a=0.3\) |
|---|---|---|
| 1 | 0.770 | 0.700 |
| 2 | 0.096 | 0.146 |
| 3 | 0.046 | 0.062 |
| 4 | 0.026 | 0.032 |
| 5 | 0.016 | 0.019 |
| \(>5\) | 0.046 | 0.040 |
As would be expected, we see that extreme cases are more probable in the geometric distribution than in the constant service distribution: For the geometric case it is more likely that only one person is served or that more than five are served in a busy period.
This example was carried out by having a computer generate the customers and the service times. The computer carried out a thousand time units and obtained 174 busy periods for the geometric distribution and 170 busy periods for the constant service time. The number of observed periods with a given number of customers and the number predicted from the above distributions is shown in Table 11.2. A simple statistical test shows that in each case observations and predictions are in good agreement.
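The original program is not given; the following Python sketch (with helper names, parameter defaults, and a seed chosen here purely for illustration) shows one way such a simulation could be organized under the conventions stated above.

```python
import random
from collections import Counter

def simulate(draw_service, a=0.3, time_units=1000, seed=1):
    """Run the single-server queue for `time_units` time units and return the
    number of customers served in each completed busy period."""
    rng = random.Random(seed)
    remaining = 0    # service units left for the customer at the counter
    waiting = 0      # customers standing in line
    served = 0       # customers served so far in the current busy period
    periods = []
    for _ in range(time_units):
        # at most one arrival per unit, with probability a, at the start of the unit
        if rng.random() < a:
            if remaining == 0:
                remaining = draw_service(rng)   # server idle: a busy period begins
            else:
                waiting += 1                    # someone is being served: join the line
        if remaining > 0:
            remaining -= 1
            if remaining == 0:                  # this customer's service is finished
                served += 1
                if waiting > 0:
                    waiting -= 1
                    remaining = draw_service(rng)
                else:                           # no one in line: the busy period ends
                    periods.append(served)
                    served = 0
    return periods

def geometric_service(rng, s=0.5):
    t = 1                       # each unit of service is the last with probability s
    while rng.random() >= s:
        t += 1
    return t

print(Counter(simulate(geometric_service)))   # customers per busy period, geometric service
print(Counter(simulate(lambda rng: 2)))       # customers per busy period, constant service c = 2
```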
Table 11.2 Number of busy periods in which exactly \(n\) customers are served (Results of 1000 time units of computer generated service.) Geometric service time: \(s=1/2\), \(a=0.3\), 174 busy periods in total. Constant service time: \(c=2\), \(a=0.3\), 170 busy periods in total.
| \(n\) | Geometric, observed | Geometric, predicted | Constant, observed | Constant, predicted |
|---|---|---|---|---|
| 1 | 131 | 134.0 | 122 | 119.0 |
| 2 | 17 | 16.7 | 26 | 24.8 |
| 3 | 11 | 8.0 | 8 | 10.5 |
| 4 | 7 | 4.5 | 6 | 5.4 |
| 5 | 2 | 2.8 | 3 | 3.2 |
| \(>5\) | 6 | 8.0 | 5 | 6.8 |
Note. The suggestion of using the time branching process is due to I.J. Good.
We show next that a similar technique enables us to find the distribution of the total number of time units in a busy period. For this we define a new branching process which we call the time branching process. We define the “offspring” of a time unit to be the time units of service for the customers which arrive in this time unit. The number of offspring is the sum of a random number of independent random variables, where the random variables are the service times and the number of summands is the number of arrivals. Hence the basic generating function for this branching process is \(h(k(t))\), where \(h\) and \(k\) are, as before, the generating functions for the number of arrivals in a unit of time and the service time of a customer. As in the customer branching process, we must consider the first unit of time separately. We take the first customer’s service time as the random initial state for the branching process.
For the constant service time the total time in a busy period is simply a multiple of the number of customers served, and hence it is of no new interest. Thus we shall consider only the geometric case.
The generating function for the service time is \[
k(t)=\sum_{j=1}^\infty (1-s)^{j-1}s t^j=\frac{st}{1-(1-s)t}.
\] The generating function for the number of arrivals in a unit
of time is
\[
h(t)=1-a+at.
\] Hence the generating function for the number of offspring in
our time branching process is \[
\bar{f}(t)=h(k(t))=1-a +\frac{ast}{1-(1-s)t}.
\] Finding the total time generating function in the general case
is somewhat complicated, so we will consider only a special case. Let us
select the case \(a=s=1/2\). This is
interesting, because the mean number of offspring is 1, so that the mean
duration of the busy period will be infinite. Let \(\bar{s}_n\) be the probability of exactly
\(n\) time units in the busy period. It
can be proved (we omit this proof) that \[
\bar{q}(u)=\sum_{n=1}^\infty \bar{s}_n\cdot u^n=1-\sqrt{1-u}.
\] If we expand this in a Taylor series, we find that the
coefficients are \[
\bar{s}_n=(-1)^{n+1}{\frac{1}{2} \choose n}=\frac{1}{n}{2n-2 \choose
n-1} 2^{-2n+1}.
\] In Table 11.3 we give the results of a computer experiment
comparing these with the expected values. Notice that, although there
were very large busy periods, it is still true that a period of length 1
is the most probable single value.
Table 11.3 Number of busy periods of length \(n\), \(a=s=1/2\) (Results of 1000 time units of computer generated service.)
| \(n\) | Observed (96 busy periods total) | Expected, \(96\,\bar{s}_n\) |
|---|---|---|
| 1 | 46 | 48 |
| 2 | 11 | 12 |
| 3 | 7 | 6 |
| 4-6 | 10 | 8.34 |
| 7-10 | 3 | 4.74 |
| 11-20 | 5 | 4.87 |
| 21-50 | 9 | 4.38 |
| \(>50\) | 5 | 7.67 |
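As a check on the expected column of Table 11.3 (our own aside, not in the original), the formula for \(\bar{s}_n\) given above can be evaluated directly and multiplied by the 96 observed busy periods:

```python
from math import comb

# Sketch: evaluate s_bar_n = (1/n) * C(2n-2, n-1) * 2^(1-2n) and scale by the 96
# observed busy periods to reproduce the expected column of Table 11.3.
def s_bar(n: int) -> float:
    return comb(2 * n - 2, n - 1) / n * 2 ** (1 - 2 * n)

print(96 * s_bar(1), 96 * s_bar(2), 96 * s_bar(3))      # 48.0, 12.0, 6.0
print(96 * sum(s_bar(n) for n in range(4, 7)))           # about 8.3 for n = 4, ..., 6
print(96 * (1 - sum(s_bar(n) for n in range(1, 51))))    # about 7.7 for n > 50
```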
Finally, let us compute \(L\), the mean length of a busy period. For the case of constant service time 2, the mean number of customers per busy period is \[ q'(1)=\frac{1-a}{1-2a}, \] and hence \[ L=2\,\frac{1-a}{1-2a}. \] For the geometric case one can obtain \[ L=\frac{1-a}{s-a}. \] For \(s=1/2\) this agrees with the value of \(L\) obtained for the constant case above. For example, if \(a=0.3\), then \(L=7/2\) for either the geometric case with \(s=1/2\) or the constant case with \(c=2\).
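As a small symbolic check (an aside, not from the original text), differentiating the constant-service \(q(u)\) at \(u=1\) for \(a=0.3\) gives \(7/4\) customers per busy period and hence \(L=7/2\):

```python
import sympy as sp

# Sketch: q'(1) for the constant service time c = 2 with a = 0.3; twice this is L.
a = sp.Rational(3, 10)
u = sp.symbols('u')
q = (1 - sp.sqrt(1 - 4 * u * a * (1 - a))) / (2 * a)
mean_customers = sp.diff(q, u).subs(u, 1)
print(mean_customers, 2 * mean_customers)   # 7/4 customers per busy period and L = 7/2
```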
It is interesting to see that the process divides into slack periods and busy periods, which alternate. The mean length of a slack period is \((1-a)/a\). Let us call a slack period followed by a busy period a cycle. The mean length of a cycle in the geometric case is \[ \frac{1-a}{a}+\frac{1-a}{s-a}=\frac{s(1-a)}{a(s-a)}. \] If \(a=0.3\) and \(s=1/2\), this is \(35/6\). The mean length of a cycle in the constant case \(c=2\) is the same. Thus in 1000 time units we can expect about \(1000 \times 6/35 \approx 171.4\) cycles, and the same number of busy periods. We observed 174 in one example and 170 in the other, in excellent agreement.