Which of the following is not generally considered to be a measure of system performance in queuing analysis?

Queues

C. Armero, M.J. Bayarri, in International Encyclopedia of the Social & Behavioral Sciences, 2001

Queueing systems are simplified mathematical models of congestion. Broadly speaking, a queueing system arises any time ‘customers’ demand ‘service’ from some facility; usually both the arrivals of the customers and the service times are assumed to be random. If all of the ‘servers’ are busy when new customers arrive, these customers generally wait in line for the next available server. Simple queueing systems are defined by specifying: (a) the arrival pattern, (b) the service mechanism, and (c) the queue discipline. From the probabilistic point of view, properties of queues are usually derived from the properties of the stochastic processes associated with them. However, in all but the simplest queues, determination of the state probabilities is extremely difficult. Often, however, it is possible to determine their large-time limit, the so-called equilibrium or steady-state distribution. This distribution does not depend on the initial conditions of the system, and it is stationary. The ergodic conditions give the restrictions on the parameters under which the system will eventually reach equilibrium. For the most part, queueing theory deals with computation of the steady-state probabilities and their use in computing other (steady-state) measures of performance of the queue. When only expected values are required, an extremely useful formula for systems in equilibrium is Little's law.

Most of the vast effort in queueing theory has been devoted to the probabilistic development of queueing models and to the study of their mathematical properties; that is, the parameters governing the models are, for the most part, assumed given. Statistical analyses, in which uncertainty about these parameters is introduced, are comparatively scarce. Inference in queueing systems is not easy: development of the necessary sampling distributions can be very involved, and often the analysis is restricted to asymptotic results. The statistical analysis is simpler if approached from the Bayesian perspective. Since Bayesian analyses are insensitive to (noninformative) stopping rules, all that is required from the data is a likelihood function, which, combined with the prior distribution on the parameters, produces the posterior distribution from which inferences are derived. This is an important simplification in the analysis of queues, where there are a variety of possible ways of observing the system, many providing proportional likelihood functions but very different sampling distributions. The prior distribution quantifies whatever is known about the system before the data are collected. Usually, there is plenty of information a priori about the queue, especially if it is assumed to be in equilibrium. However, it is also possible to keep the parallelism with a likelihood analysis, and to avoid the incorporation of further subjective inputs, by carrying out a Bayesian analysis usually called ‘objective’ because the prior distribution used is of the ‘non-informative’ or ‘objective’ type. From the posterior distribution, computation of estimates and standard errors is immediate. Also, probabilities of direct interest (such as the probability that the ergodic condition holds) can be computed. Most importantly, restrictions in the parameter space imposed by the assumption of equilibrium are readily incorporated into the analysis.
Prediction of measures of congestion of the system (number of customers waiting, time spent queuing, number of busy servers, and so on) is carried out from the corresponding predictive distributions, which are also very useful for design of and intervention in the system.
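To illustrate Little's law numerically (the figures here are illustrative, not taken from the article): with an arrival rate of λ = 10 customers per hour and a mean time in the system of W = 0.3 hours, the expected number of customers in the system is

$L = \lambda W = 10 \times 0.3 = 3.$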


URL: https://www.sciencedirect.com/science/article/pii/B0080430767004927

Birth-and-Death Queueing Systems: Exponential Models

J. MEDHI, in Stochastic Models in Queueing Theory (Second Edition), 2003

Remarks:

The queueing system with ordered entry has received considerable attention because of its importance in application, mainly in conveyor theory. (See Muth and White (1979) for a survey.) For further work in this area, reference may be made, for example, to Elsayed (1983), Elsayed and Elayat (1976), Elsayed and Proctor (1977), Gregory and Litton (1975), Lin and Elsayed (1978), Matsui and Fukuta (1977), Nawijn (1983, 1984), Newell (1984), Pourbabai (1987), Pourbabai and Sonderman (1986), Pritsker (1966), Proctor et al. (1977), Sonderman (1982), Yao (1987), and Shanthikumar and Yao (1987).

Apart from conveyor systems, the ordered entry model also applies to communication networks, such as System Network Architecture (SNA) (see, for example, Gray and Mcneill (1979)). A third area of application is database systems—see, for example, Cooper and Solomon (1984).


URL: https://www.sciencedirect.com/science/article/pii/B9780124874626500035

Queueing Theory

H.M. Srivastava, in Encyclopedia of Physical Science and Technology (Third Edition), 2003

I.A Queueing Systems

A queueing system can be described as a system having a service facility at which units of some kind (generically called “customers”) arrive for service; whenever there are more units in the system than the service facility can handle simultaneously, a queue (or waiting line) develops. The waiting units take their turn for service according to a preassigned rule, and after service they leave the system. Thus, the input to the system consists of the customers demanding service, and the output is the serviced customers. A queueing system is usually characterized by the following terms:

1.

The input process. Let the customers arrive at the instants t0, t1, t2,…; then the interarrival times are

(1) $u_r = t_r - t_{r-1}, \quad r = 1, 2, 3, \ldots$

The random variables ur are, in general, assumed to be statistically independent, and their probability distribution A(u) is called the interarrival time distribution or, simply, the arrival distribution or input distribution. The customers may come from an infinite source, as in the case of telephone calls and as is assumed in many queueing studies, or from a finite source. They may arrive singly or in groups of fixed or various sizes. The queueing system may have an upper limit on the number that can be admitted into the system, as in the case of finite waiting space.

2.

The queue discipline. This can be described as the rule determining the formation of a queue or queues and the manner in which a customer or customers are selected for service from those waiting. The most common queue discipline is “first come, first served,” according to which the units enter service in order of their arrival. Other possibilities are random selection for service, a priority rule, or even the “last come, first served” rule. In the case of priority, there may be two classes, namely, the priority class and the nonpriority class, or there may be several priority classes representing different levels of priority. Furthermore, there may be a preemptive priority discipline according to which a lower-priority unit is taken out of service whenever a higher-priority unit arrives, the service on the preempted unit resuming only when there are no higher-priority units in the system. Contrary to this is the nonpreemptive, or the head-of-the-line, priority rule, in which priorities are taken into account only at the commencement of service, and once started, the service is continued until completion. It is customary to include under this heading queueing phenomena, such as balking and reneging, depicting the behavior of the waiting customers. Customers are said to balk when, looking at the size of the queue and then estimating the time they may have to wait before service, they do not join the queue. After joining a queue, customers are said to renege if they become impatient with waiting and leave the queue before service starts.

3.

The service mechanism. The time that elapses while a unit is being served is called its service time. The service times v1, v2, v3,… of the successive units are assumed to be independent of one another and of the input distribution, and their probability distribution B(v) is called the service-time distribution or, briefly, the service distribution. The specification of service mechanism includes the number of servers. Thus, there are single-server and multiserver systems (sometimes called single-channel and multichannel from telephone parlance). Again, in a bulk service system, service may be provided in batches of fixed or various sizes.

As is common in probability theory, if we can write

(2) $dA(u) = a(u)\,du, \quad dB(v) = b(v)\,dv,$

then a(u) and b(v) are, respectively, the interarrival time and service-time density functions.
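To make the input-process specification concrete, here is a minimal Python sketch (not part of the original chapter; the rates are arbitrary illustrative values) that draws a Poisson arrival stream and exponential service times, the special case in which both A(u) and B(v) are exponential.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, mu, n = 2.0, 3.0, 10_000   # illustrative arrival and service rates

# For a Poisson arrival stream the interarrival times u_r = t_r - t_{r-1}
# are IID exponential with mean 1/lam; service times are drawn from B(v).
u = rng.exponential(1 / lam, size=n)     # interarrival times
t = np.cumsum(u)                         # arrival instants t_1, t_2, ...
v = rng.exponential(1 / mu, size=n)      # service times

print("mean interarrival time:", u.mean())   # ~ 1/lam = 0.5
print("mean service time:     ", v.mean())   # ~ 1/mu  = 0.333
```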


URL: https://www.sciencedirect.com/science/article/pii/B0122274105006323

Non-Markovian Queueing Systems

J. MEDHI, in Stochastic Models in Queueing Theory (Second Edition), 2003

6.2 Embedded-Markov-Chain Technique for the System with Poisson Input

We are concerned at any instant t with a pair of RVs: N(t), the number in the system at time t, and X(t), the service time already received by the customer in service, if any. While {N(t), t ≥ 0} is non-Markovian, the vector {N(t), X(t), t > 0} is a Markov process. Whereas in the case of an M/M/1 system (because of the memoryless property of the service-time distribution) attention can be confined to N(t) alone, for the system M/G/1 we have to consider X(t) also along with N(t). Now by observing the number in the system at a select set of points rather than at all points of time t, it is possible to simplify matters to a great extent. These special sets of points or instants should be such that, by considering the number in the system at any such point and other inputs, it should be possible to calculate the number in the system at the next such point or instant. There are several such sets of points. A very suitable set of points is the set of departure instants (from the service channel) at which successive customers leave the system on completion of service. Let the departure instants of the customers C1, C2,…, Cn,… be t1, t2,…, tn,…, respectively. At such a point of time—say, the departure instant tn of Cn—the time spent in service by the next customer Cn+1 is zero and, thus, given N(tn) at any departure instant (that is, the number of customers left behind by the departing customer Cn) and given the additional input to the system (arrivals during the time of service of the next customer Cn+1), it is possible to calculate N(tn+1), the number left behind by the next departing customer Cn+1. Thus, we get N(tn+1) given N(tn) and the number of arrivals during the service time of customer Cn+1. So {N(tn), n ≥ 1} defines a Markov chain, the instants t1, t2,…, tn being embedded Markovian points. Thus, we can get N(tn) and its distribution—that is, the distribution of the number in the system at departure epochs tn, n ≥ 1.

For a queueing system (in steady state) with Poisson arrivals, we have the following properties.

(1)

The probability an of the number n found by an arriving customer is equal to the probability dn of the number n left behind by a departing customer. Again, Poisson arrivals see time averages. When equilibrium is reached in a queueing system with Poisson arrivals, we have an = pn, where pn is the probability that the number in the system at any time (in steady state) is n. Thus,

$a_n = d_n = p_n.$

Thus, the probability distribution of the number in the system at the embedded Markov points is the same as the probability distribution of the number in the system at all points of time. Thus, it suffices to consider the process {N(tn), n ≥ 0} at the departure instants or, to be more specific, the process {N(tn + 0), n ≥ 0}, where N(tn + 0) is the number immediately following the nth departure. (See also Remark, Section 6.3.3.)

(2)

The transitions of the process occur at the departure instants tn. The numbers in the system immediately following these instants form a Markov chain such that the transitions occur at the departure instants. The interval between two transitions (that is, between two departures) is equal to the service time when the departure leaves at least one in the system, and is equal to the convolution of the interarrival time (which is exponential) and the service time when the departure leaves the system empty. If Y(t) denotes the number of customers left behind by the most recent departing customer—that is, Y(t) = N(tn), tn ≤ t ≤ tn+1—then Y(t) will be a semi-Markov process having {N(tn + 0), n = 0, 1, 2,…} for its embedded Markov chain. The sequence of intervals (tn+1 − tn), n = 0, 1, 2,…, being the interdeparture times of successive units, forms a renewal process.

Assume that the input process is Poisson with rate λ, and the service times are IID RVs having a general distribution with DF B(t) and mean 1/μ. Let $B^*(s) = \int_0^\infty e^{-st}\,dB(t)$ be its LST; then $-B^{*(1)}(0) = 1/\mu$.

Let A be the number of arrivals during the service time of a unit. Conditioning on the duration of the service time of a unit, we get

(6.2.1) $k_r = \Pr\{A = r\} = \int_0^\infty e^{-\lambda t}\frac{(\lambda t)^r}{r!}\,dB(t), \quad r = 0, 1, 2, \ldots$

The PGF K(s) of {kr} is given by

(6.2.2a) $K(s) = \sum_{r=0}^{\infty} k_r s^r = \sum_{r=0}^{\infty} s^r \int_0^\infty e^{-\lambda t}\frac{(\lambda t)^r}{r!}\,dB(t) = \int_0^\infty e^{-\lambda t}\left\{\sum_{r=0}^{\infty}\frac{(\lambda t s)^r}{r!}\right\} dB(t) = B^*(\lambda - \lambda s).$

We have

(6.2.2b) $E\{A\} = K'(1) = -\lambda B^{*(1)}(0) = \frac{\lambda}{\mu} = \rho.$
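A small numerical check of (6.2.1)–(6.2.2b), assuming for illustration a deterministic service time of length 1/μ (so that dB puts all its mass at t = 1/μ); this sketch is added here and is not code from the book.

```python
import math

lam, mu = 3.0, 4.0          # illustrative rates; rho = lam/mu = 0.75
D = 1.0 / mu                # deterministic service time

# k_r = Pr{A = r} from (6.2.1): with B degenerate at D the integral collapses
# to a single Poisson term e^{-lam*D} (lam*D)^r / r!
k = [math.exp(-lam * D) * (lam * D) ** r / math.factorial(r) for r in range(60)]

print(sum(k))                                   # ~ 1.0
print(sum(r * kr for r, kr in enumerate(k)))    # ~ lam/mu = rho, as in (6.2.2b)

# K(s) = B*(lam - lam*s) from (6.2.2a); for deterministic service B*(s) = e^{-s*D}
s = 0.6
K_direct = sum(kr * s ** r for r, kr in enumerate(k))
K_transform = math.exp(-(lam - lam * s) * D)
print(K_direct, K_transform)                    # the two values should agree
```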

Suppose that the arrivals occur in bulk with bulk-size distribution aj = Pr(X = j), the arrival instants being in accordance with a Poisson process with rate λ. Then the probability distribution of the total number of arrivals in an interval of time t is given by

(6.2.3) $\Pr\{j \text{ arrivals in } (0, t]\} = \sum_{k=0}^{j} e^{-\lambda t}\frac{(\lambda t)^k}{k!}\, a_j^{(k)*}, \quad j = 0, 1, 2, 3, \ldots$

where aj(k)* is the k-fold convolution of aj with itself. The distribution of the total number of arrivals A during the service period of a unit is given by

(6.2.4) $\Pr\{A = j\} = \int_0^\infty \sum_{k=0}^{j} e^{-\lambda t}\frac{(\lambda t)^k}{k!}\, a_j^{(k)*}\,dB(t).$

Let $A(s) = \sum_j a_j s^j$ be the PGF of the bulk size X; then $\sum_j a_j^{(k)*} s^j = [A(s)]^k$.

The PGF of A is then given by

$K(s) = \sum_{j=0}^{\infty}\Pr\{A = j\}\, s^j = \sum_{j=0}^{\infty} s^j \left\{\sum_{k=0}^{j}\int_0^\infty e^{-\lambda t}\frac{(\lambda t)^k}{k!}\, a_j^{(k)*}\,dB(t)\right\} = \int_0^\infty \sum_{k=0}^{\infty} e^{-\lambda t}\frac{(\lambda t)^k}{k!}\,[A(s)]^k\,dB(t) = \int_0^\infty e^{-[\lambda t - \lambda A(s) t]}\,dB(t).$

Thus, for a bulk arrival system

(6.2.5a) $K(s) = B^*[\lambda - \lambda A(s)]$, and

(6.2.5b) $\rho = \frac{\lambda E(X)}{\mu}.$


URL: https://www.sciencedirect.com/science/article/pii/B9780124874626500060

Queueing Systems: General Concepts

J. MEDHI, in Stochastic Models in Queueing Theory (Second Edition), 2003

2.2 Queueing Processes

The analysis of a queueing system with fixed (deterministic) interarrival and service times does not present much difficulty. We shall be concerned with models or systems where one or both (interarrival and service times) are stochastic. Their analyses will involve a stochastic description of the system and related performance measures, as discussed below.

(1)

Distribution of the number N(t) in the system at time t (the number in the queue and the one being served, if any). N(t) is also called the queue length of the system at time t. By the number in the system (queue), we will always mean the number of customers in the system (queue).

(2)

Distribution of the waiting time in the queue (in the system), the time that an arrival has to wait in the queue (remain in the system). If Wn denotes the waiting time of the nth arrival, then of interest is the distribution of Wn.

(3)

Distribution of the virtual waiting time W(t)—the length of time an arrival has to wait had he arrived at time t.

(4)

Distribution of the busy period, the length (or duration) of time during which the server remains busy. The busy period is the interval from the moment of arrival of a unit at an empty system to the moment that the channel becomes free for the first time. The busy period is a random variable.

From a complete description of the above distributions, various performance measures of interest are obtained.
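The classical M/M/1 formulas tie these distributions to simple summary measures; the following sketch (with illustrative parameter values, not taken from the book) computes the usual steady-state quantities from λ and μ and checks Little's law Lq = λWq.

```python
def mm1_measures(lam: float, mu: float) -> dict:
    """Standard steady-state measures of the M/M/1 queue (requires lam < mu)."""
    if lam >= mu:
        raise ValueError("steady state requires lam < mu")
    rho = lam / mu                      # server utilization
    L = rho / (1 - rho)                 # mean number in system
    Lq = rho ** 2 / (1 - rho)           # mean number in queue
    W = 1 / (mu - lam)                  # mean time in system
    Wq = rho / (mu - lam)               # mean waiting time in queue
    return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

m = mm1_measures(lam=4.0, mu=5.0)
print(m)
print(abs(m["Lq"] - 4.0 * m["Wq"]) < 1e-12)   # Little's law: Lq = lam * Wq
```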

The problems studied in queueing theory may be grouped as:

(i)

Stochastic behavior of various random variables, or stochastic processes that arise, and evaluation of the related performance measures;

(ii)

Method of solution—exact, transform, algorithmic, asymptotic, numerical, approximations, etc.;

(iii)

Nature of solution—time dependent, limiting form, etc.;

(iv)

Control and design of queues—comparison of behavior and performances under various situations, as well as queue disciplines, service rules, strategies, etc.; and

(v)

Optimization of specific objective functions involving performance measures, associated cost functions, etc.

Analysts and operations researchers generally will be involved with these types of problems. But in order to study such problems, one will have to study first the types of problems enumerated under (i)–(iii).


URL: https://www.sciencedirect.com/science/article/pii/B9780124874626500023

Queues with General Arrival Time and Service-Time Distributions

J. MEDHI, in Stochastic Models in Queueing Theory (Second Edition), 2003

7.2.3 Multiserver queues: approximation of mean waiting time

We discuss here one of the main performance measures of multiserver queueing systems in steady state: the mean waiting (queueing) time. While exact and closed-form expressions are known for the simpler systems M/M/c and M/G/1, no such result is available for more general systems such as M/G/c and G/G/c. Various approximations have been suggested for obtaining mean waiting times for such systems. These approximations are in closed forms that combine analytical solutions of simpler systems, and so they are known as system interpolations.

One thing that facilitates the use of such approximations is that tables for the building-block systems (M/M/c, M/D/c, D/M/c) for specific values of the parameters are available (e.g., Page, 1972, 1982; Seelen et al., 1985). We indicate below some of these approximations. Denote the mean waiting (queueing) time of an A/B/c system in steady state by EW(A/B/c). Let $c_a^2$, $c_v^2$ be the squared coefficients of variation of the interarrival-time and service-time distributions, respectively. We recall that, by virtue of the Pollaczek-Khinchin formula,

(7.2.12) $EW(M/G/1) = \frac{1 + c_v^2}{2}\, EW(M/M/1).$

Based on this, the following heuristic approximation for the c-server system has been suggested:

(7.2.13) $EW(M/G/c) \simeq \frac{1 + c_v^2}{2}\, EW(M/M/c).$

(Lee and Longton, 1957)

The mean waiting time for both M/M/c and M/G/1 systems can be written as

$EW(M/M/c) = \frac{C(c, a)}{c\mu(1 - \rho)} = \frac{\rho\, C(c, a)}{\lambda(1 - \rho)}, \qquad EW(M/G/1) = \frac{1 + c_v^2}{2}\cdot\frac{\rho\, C(1, a)}{\lambda(1 - \rho)}.$

See (3.6.21) and (6.3.12c); note that C(c, a) = ρ when c = 1 (see (3.6.4)).

Based on this, an approximation that has been suggested (Hokstad, 1978; Stoyan, 1976; Nozaki and Ross, 1978; Maaloe, 1973) is

$EW(M/G/c) \simeq \frac{1 + c_v^2}{2}\cdot\frac{\rho\, C(c, a)}{\lambda(1 - \rho)}.$

The approximation appears quite satisfactory for large ρ. Harel and Zipkin (1987) show that f(ρ) = 1/{E(W) + E(v)} is strictly concave for M/G/c, c ≥ 2 for sufficiently light traffic.

From (6.3.12), we also have the closed-form expression

(7.2.14) $EW(M/G/1) = c_v^2\, EW(M/M/1) + (1 - c_v^2)\, EW(M/D/1).$

Based on this, Björklund and Elldin (1964) suggested the following approximation for the c-server case:

(7.2.15) $EW(M/G/c) \simeq c_v^2\, EW(M/M/c) + (1 - c_v^2)\, EW(M/D/c)$

in terms of those of M/M/c and M/D/c. Kimura (1986) suggested the approximation

(7.2.16) $EW(M/G/c) \simeq \frac{1 + c_v^2}{\dfrac{2 c_v^2}{EW(M/M/c)} + \dfrac{1 - c_v^2}{EW(M/D/c)}}.$

For EW(M/D/c), approximations given by Cosmetatos (1975, 1976) (or refinements thereof: Kimura, 1991) could be used.

Approximations have also been provided by Boxma et al. (1979), Tijms (1987), Tijms (1994) and Tijms et al. (1981).

Approximations have also been suggested for queue-length and waiting-time distributions—for example, Kimura (1986, 1991, 1993), Tijms et al. (1981), and Van Hoorn and Tijms (1982).

For general interarrival and service times, one has the Kingman (1962) approximation for heavy traffic (ρ → 1 −0; see Section 8.1)

(7.2.17) $EW(G/G/1) \simeq \frac{c_a^2 + c_v^2}{2}\, EW(M/M/1).$

Krämer and Langenbach-Belz (1976) suggested

(7.2.18) $EW(G/G/1) \simeq \frac{c_a^2 + c_v^2}{2}\, k\, EW(M/M/1),$

where

(7.2.19) $k \equiv k(\rho, c_a^2, c_v^2) = \begin{cases} \exp\left\{-\dfrac{2(1-\rho)}{3\rho}\cdot\dfrac{(1 - c_a^2)^2}{c_a^2 + c_v^2}\right\}, & c_a^2 \le 1, \\ \exp\left\{-(1-\rho)\dfrac{c_a^2 - 1}{c_a^2 + c_v^2}\right\}, & c_a^2 > 1. \end{cases}$

This approximation (7.2.18) can be taken as a refinement of Kingman's heavy-traffic approximation (7.2.17). As a natural extension of Kingman's result (7.2.17), an approximation suggested by Kimura for the G/G/c queue is

(7.2.20) $EW(G/G/c) \simeq \frac{c_a^2 + c_v^2}{2}\, EW(M/M/c).$

Page (1982) suggested the approximation

(7.2.21) $EW(G/G/c) \simeq c_a^2 c_v^2\, EW(M/M/c) + c_a^2(1 - c_v^2)\, EW(M/D/c) + (1 - c_a^2)c_v^2\, EW(D/M/c).$

This approximation coincides with (7.2.15) for the case M/G/c. Kimura (1986) also suggested the approximation,

(7.2.22) $EW(G/G/c) \simeq \frac{k\,(c_a^2 + c_v^2)}{\dfrac{2(c_a^2 + c_v^2 - 1)}{EW(M/M/c)} + \dfrac{1 - c_v^2}{EW(M/D/c)} + \dfrac{k_{01}(1 - c_a^2)}{EW(D/M/c)}}, \quad c_a^2 \le 1,$

(7.2.23) $EW(G/G/c) \simeq \frac{k\,(c_a^2 + c_v^2)}{\dfrac{2(c_a^2 + c_v^2 - 1)}{EW(M/M/c)} + \dfrac{1 - c_v^2}{EW(M/D/c)} + \dfrac{1 - c_a^2}{k_{01}\, EW(D/M/c)}}, \quad c_a^2 > 1,$

where k is given by (7.2.19) and

(7.2.24) $k_{01} \equiv k(\rho, 0, 1) = \exp\{-2(1-\rho)/3\rho\}.$

Approximations for G/G/c have also been provided by Shore (1988a,b). For a discussion (with some comparisons with numerical results), refer to the survey paper by Kimura (1994).
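The simpler of these interpolations are easy to evaluate directly. The sketch below (added here for illustration, not from the book) implements the Lee–Longton heuristic (7.2.13) and the Krämer–Langenbach-Belz approximation (7.2.18) with the correction factor (7.2.19) as reconstructed above; Erlang's C formula is used for the exact EW(M/M/c), and all numerical parameter values are arbitrary.

```python
import math

def erlang_c(c: int, a: float) -> float:
    """Erlang C: probability of waiting in M/M/c with offered load a = lam/mu (a < c)."""
    rho = a / c
    tail = a ** c / (math.factorial(c) * (1 - rho))
    head = sum(a ** k / math.factorial(k) for k in range(c))
    return tail / (head + tail)

def ew_mmc(lam: float, mu: float, c: int) -> float:
    """Exact mean queueing time EW(M/M/c)."""
    a = lam / mu
    return erlang_c(c, a) / (c * mu - lam)

def ew_mgc_lee_longton(lam: float, mu: float, c: int, cv2: float) -> float:
    """Heuristic (7.2.13): EW(M/G/c) ~ (1 + cv^2)/2 * EW(M/M/c)."""
    return 0.5 * (1 + cv2) * ew_mmc(lam, mu, c)

def klb_factor(rho: float, ca2: float, cv2: float) -> float:
    """Kraemer-Langenbach-Belz correction k(rho, ca^2, cv^2), per (7.2.19) as reconstructed."""
    if ca2 <= 1:
        return math.exp(-2 * (1 - rho) * (1 - ca2) ** 2 / (3 * rho * (ca2 + cv2)))
    return math.exp(-(1 - rho) * (ca2 - 1) / (ca2 + cv2))

def ew_gg1_klb(lam: float, mu: float, ca2: float, cv2: float) -> float:
    """(7.2.18): EW(G/G/1) ~ (ca^2 + cv^2)/2 * k * EW(M/M/1)."""
    rho = lam / mu
    ew_mm1 = rho / (mu - lam)
    return 0.5 * (ca2 + cv2) * klb_factor(rho, ca2, cv2) * ew_mm1

print(ew_mgc_lee_longton(lam=8.0, mu=1.0, c=10, cv2=0.5))
print(ew_gg1_klb(lam=0.8, mu=1.0, ca2=0.7, cv2=1.5))
```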


URL: https://www.sciencedirect.com/science/article/pii/B9780124874626500072

Miscellaneous Topics

J. MEDHI, in Stochastic Models in Queueing Theory (Second Edition), 2003

8.2.2 Asymptotic queue-length distribution

Let A(t), D(t), and N(t) denote, respectively, the number of arrivals, the number of departures, and the number in the system at time t. Here {A(t), t ≥ 0}, {D(t), t ≥ 0}, and {N(t), t ≥ 0} are stochastic processes. We assume that the system is under heavy traffic and that

$N(t) = N(0) + A(t) - D(t).$

The assumption that N(t) does not become zero is basic in this approach. The departure process {D(t), t ≥ 0}, which is otherwise dependent upon the arrival process {A(t), t ≥ 0}, then becomes approximately independent of the arrival process. The number of departures increases by unity each time a service is completed, and the interdeparture times will have the same distribution as the service times when the system remains continually busy. Let the IID random variables ti, i = 1, 2,…, denote the interarrival times and let

$T_n = t_1 + \cdots + t_n.$

The nth arriving customer arrives at the epoch Tn. We have the important equivalence relation

(8.2.11) $\Pr\{A(t) \ge n\} = \Pr\{T_n \le t\}.$

The preceding relation enables us to find the distribution of A(t) from that of Tn. Since the ti are IID random variables, the Central Limit Theorem can be applied to find the asymptotic distribution of Tn. We have

$E\{T_n\} \simeq \frac{n}{\lambda}, \qquad \operatorname{var}\{T_n\} \simeq n\sigma_u^2,$

where 1/λ and σu are the mean and SD of the interarrival times, respectively. From the Central Limit Theorem, we have

(8.2.12) $\Pr\left\{\frac{T_n - n/\lambda}{\sigma_u\sqrt{n}} \le x\right\} = \Phi(x).$

To find the RHS of (8.2.8), we have to relate n with t. Define

$t = x\sigma_u\sqrt{n} + \frac{n}{\lambda}.$

For large n, the dominant term being t ≃ n/λ, we can express n in terms of t as follows:

$n \simeq \lambda t - x\lambda\sigma_u\sqrt{\lambda t}.$

From (8.2.12) we have

$\Pr\{T_n \le t\} = \Phi(x)$

so that

(8.2.13) $\Pr\{A(t) \ge n\} = \Phi(x)$, or $\Pr\{A(t) \ge \lambda t - x\lambda\sigma_u\sqrt{\lambda t}\} = \Phi(x)$, or $\Pr\left\{\frac{A(t) - \lambda t}{\lambda\sigma_u\sqrt{\lambda t}} \ge -x\right\} = \Phi(x)$, or $\Pr\left\{\frac{A(t) - \lambda t}{\lambda\sigma_u\sqrt{\lambda t}} \le x\right\} = 1 - \Phi(-x) = \Phi(x).$

Thus, the asymptotic distribution of A(t) is Gaussian, with

(8.2.14) $E\{A(t)\} \simeq \lambda t, \qquad \operatorname{var}\{A(t)\} \simeq \lambda^3\sigma_u^2\, t.$

Denoting the mean and SD of the service-time distribution by 1/μ and σv, we can show that D(t) is also asymptotically normal with

(8.2.15) $E\{D(t)\} \simeq \mu t \quad \text{and} \quad \operatorname{var}\{D(t)\} \simeq \mu^3\sigma_v^2\, t.$

That is,

$D(t) \sim N(\mu t,\; \mu^3\sigma_v^2\, t).$

The result can be put as follows.

Theorem 8.3

For large t and for moderately to heavily loaded queueing systems, it can be said that

$N_1(t) = N(t) - N(0) = A(t) - D(t)$

is a Gaussian process with

(8.2.16) $E\{N_1(t)\} \simeq \lambda t - \mu t = \mu(\rho - 1)t$, and

(8.2.17) $\operatorname{var}\{N_1(t)\} \simeq (\lambda^3\sigma_u^2 + \mu^3\sigma_v^2)\, t.$

Remarks:

(1)

It is suggested that the process {N(t),t ≥ 0}, where

N(t)=N(0)+A(t)−D(t)

can be approximated by a diffusion process having infinitesimal mean m and variance D2 given by

(8.2.18) $m = \lim_{\Delta t \to 0}\frac{E\{N(t+\Delta t) - N(t)\}}{\Delta t} = \lambda - \mu,$

(8.2.19) $D^2 = \lim_{\Delta t \to 0}\frac{\operatorname{var}\{N(t+\Delta t) - N(t)\}}{\Delta t} = \lambda^3\sigma_u^2 + \mu^3\sigma_v^2.$

Equation (8.2.19) follows from the fact that {N(t), t ≥ 0} has independent increments and that cov{N(t), N(s)} = var{N[min(t, s)]} holds for such a process.

(2)

The expressions (8.2.16) and (8.2.18) give a reasonable approximation of the mean when ρ > 1; however, for ρ < 1, (8.2.16) becomes negative; this defect is taken care of by considering a reflecting barrier for N(t) at the origin.

(3)

That (8.2.14) and (8.2.15) hold for large t also follows from a result of renewal theory (Cox 1962). For a renewal process {R(t), t ≥ 0} where the interrenewal times have mean 1/v and variance σ², we have, for large t,

(8.2.20) $E\{R(t)\} \simeq vt \quad \text{and} \quad \operatorname{var}\{R(t)\} \simeq \sigma^2 v^3 t.$

Here {A(t), t ≥ 0} is a renewal process. For ρ close to 1, {D(t), t ≥ 0} can also be approximated as a renewal process.

(4)

The results, though based on a Central Limit Theorem approach, are renewal theoretic results.
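A quick Monte Carlo check of the asymptotic mean and variance of A(t) in (8.2.14); the gamma-distributed interarrival times and all numerical values below are arbitrary illustrations, added here and not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(1)
shape, scale = 2.0, 0.25           # gamma interarrival times (illustrative)
mean_u = shape * scale             # 1/lam
lam = 1 / mean_u                   # arrival rate
var_u = shape * scale ** 2         # sigma_u^2
t_horizon, n_runs = 200.0, 2000

counts = []
for _ in range(n_runs):
    # generate more than enough interarrival times, count arrivals by t_horizon
    u = rng.gamma(shape, scale, size=int(3 * lam * t_horizon))
    counts.append(np.searchsorted(np.cumsum(u), t_horizon))
counts = np.array(counts)

print(counts.mean(), lam * t_horizon)              # E{A(t)} ~ lam * t
print(counts.var(), lam ** 3 * var_u * t_horizon)  # var{A(t)} ~ lam^3 sigma_u^2 t
```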


URL: https://www.sciencedirect.com/science/article/pii/B9780124874626500084

Stochastic Processes

J. MEDHI, in Stochastic Models in Queueing Theory (Second Edition), 2003

1.8.1 Application in queueing theory

The results (1.8.1) and (1.8.2) have an important application in queueing theory. Consider a single-server queueing system such that an arriving customer is immediately taken for service if the server is free, but joins a waiting line if the server is busy. The system can be considered to be in two states (idle or busy) according to whether the server is idle or busy. The idle and busy states alternate and together constitute a cycle of an alternating renewal process. A busy period starts as soon as a customer arrives at an idle server and ends at the instant when the server becomes free for the first time. The epochs of commencement of busy periods are regeneration points. Let In and Bn denote the lengths of the nth idle and busy periods, respectively, and let

(1.8.3) $E\{I_n\} = E\{I\} \quad \text{and} \quad E\{B_n\} = E\{B\}.$

Then the long-run proportion of time that the server is idle equals

(1.8.4) $p_0 = \frac{E\{I\}}{E\{I\} + E\{B\}},$

and the long-run proportion of time that the server is busy equals

(1.8.5) $p_1 = \frac{E\{B\}}{E\{I\} + E\{B\}}.$

In particular, if the arrival process is Poisson with mean λt, then it follows (from its lack of memory property) that an idle period is exponentially distributed with mean 1/λ—that is, E(I) = 1/λ. Then when p0 or p1 is known, E(B) can be found.
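As an illustrative calculation (the numbers are hypothetical): suppose arrivals are Poisson with rate λ = 4 per hour, so that E{I} = 1/λ = 0.25 hour, and the server is observed to be idle a fraction p0 = 0.2 of the time. Solving (1.8.4) for E{B} gives

$E\{B\} = E\{I\}\,\frac{1 - p_0}{p_0} = 0.25 \times \frac{0.8}{0.2} = 1 \text{ hour}.$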

The case of the alternating renewal process can be generalized to cover cyclical movement among more than two states. Suppose that the state space of the process {X(t), t ≥ 0} is S = {0, 1,…, m} and its movement from the initial state 0 is cyclic as 0 → 1 → 2 → … → m → 0 → …, and that τk is the duration of the sojourn in state k, having mean μk = E{τk}, k = 0, 1,…, m. Then

(1.8.6) $p_k = \lim_{t\to\infty}\Pr\{X(t) = k\} = \frac{\mu_k}{\sum_{i=0}^{m}\mu_i}, \quad k = 0, 1, \ldots, m.$


URL: https://www.sciencedirect.com/science/article/pii/B9780124874626500011

Stream Sessions: Stochastic Analysis

Anurag Kumar, ... Joy Kuri, in Communication Networking, 2004

5.13 Notes on the Literature

Although much of the modeling and analysis presented in this chapter are concerned with the analysis of queueing systems, we have deliberately avoided most of the traditional queueing theory. Among many others, the books by Kleinrock ([173, 174]), and by Wolff ([300]) provide excellent coverage of this vast topic, with the latter also covering the underlying stochastic processes theory. Instead, in this chapter, we provide more recent material on approximate analysis via bounds and asymptotics. What we call “marginal buffering” has been referred to in the literature as “bufferless” multiplexing, and what we call “arbitrary buffering” is called “burst scale” buffering in the literature.

The seminal paper [13] by Anick, Mitra, and Sondhi, dating back to 1982, provided the asymptotics of the tail distribution of a buffer fed by a superposition of on–off Markov fluid sources. The approach was via the differential equation characterizing the stationary distribution of buffer occupancy. The advent of ATM technology resulted in renewed interest in such problems, leading to the development of the notion of effective bandwidth and related admission control techniques during the early 1990s. Some of the significant papers were [164, 118, 91, 92, 79, 169, 52]. Our derivation of the effective bandwidth result for a buffered multiplexer follows the heuristic treatment by Kesidis, Walrand, and Chang [169]. A rigorous proof was provided by Glynn and Whitt in [120]. We have presented the approach in which the number of sources being multiplexed is kept fixed, the buffer is assumed to be infinite, and large-buffer asymptotics are characterized. It is also possible to develop an analysis in which the buffer is finite and the number of sources is scaled; see, for example, Likhanov and Mazumdar [197]. That the use of additive effective bandwidths can lead to very conservative designs was observed by many researchers; two reports on this topic were [64] and [246]. The technique of combining the burst scale effective bandwidth along with marginal buffer analysis was developed in [130]. A related technique, which also uses the marginal buffer analysis to correct for the conservatism in additive effective bandwidths, was developed by Elwalid et al. in [92]. A comprehensive recent survey of the approaches for single-link analysis was provided by Knightly and Shroff in [176]. That survey also reports a Gaussian approximation from the paper [63] by Choe and Shroff.

The idea of using worst case traffic compatible with leaky bucket shaped sources was developed by Doshi ([86]). For buffered multiplexers receiving leaky bucket shaped traffic, the approach of using extremal on–off sources was introduced in [93] by Elwalid, Mitra, and Wentworth, and then extended by LoPresti et al. in [200]. Of the limited literature on the analysis of multihop networks, the paper [247] reported the work on bufferless multiplexers at each hop, whereas the papers [40, 196] developed the idea of effective envelopes.

The observation that traffic in packet networks can have long-range dependence was made in the landmark paper [195], by Leland, Taqqu, Willinger, and Wilson. These measurement results were further analyzed statistically by the same research group in [299]. The impact of such traffic on the performance of multiplexers has been studied by several authors; some important references are [287, 137, 94, 226]. Buffer analysis with a Gaussian LRD process was performed by Norros in [226]. In [137], Heath, Resnick, and Samorodnitsky analyze an LRD on–off process; we will come across this model in Section 7.6.8. We have pointed out that for an LRD input the tail of the buffer does not decay as fast as exponential; the approach based on the Gärtner–Ellis theorem has been extended to handle the LRD case by Duffield and O’Connell [89]. Cox’s paper [68] and the book [30] by Beran are the main references for the general theory of LRD processes.


URL: https://www.sciencedirect.com/science/article/pii/B9780124287518500057

Markov Processes

Scott L. Miller, Donald Childers, in Probability and Random Processes, 2004

9.4 Continuous Time Markov Processes

In this section, we investigate Markov processes where the time variable is continuous. In particular, most of our attention will be devoted to the so-called birth-death processes, which are a generalization of the Poisson counting process studied in the previous chapter. To start with, consider a random process X(t) whose state space is either finite or countably infinite, so that we can represent the states of the process by the set of integers, X(t) ∈ {…, −3, −2, −1, 0, 1, 2, 3, …}. Any process of this sort that is a Markov process has the interesting property that the time between any change of states is an exponential random variable. To see this, define Ti to be the time between the ith and the (i + 1)th change of state and let hi(t) be the complement of its CDF, $h_i(t) = \Pr(T_i > t)$. Then, for t > 0, s > 0,

(9.31) $h_i(t+s) = \Pr(T_i > t+s) = \Pr(T_i > t+s,\, T_i > s) = \Pr(T_i > t+s \mid T_i > s)\Pr(T_i > s).$

Due to the Markovian nature of the process, $\Pr(T_i > t+s \mid T_i > s) = \Pr(T_i > t)$, and hence the previous equation simplifies to

(9.32) $h_i(t+s) = h_i(t)\, h_i(s).$

The only function which satisfies this type of relationship for arbitrary t and s is an exponential function of the form $h_i(t) = e^{-\rho_i t}$ for some constant ρi.

Furthermore, for this function to be a valid probability, the constant ρi must not be negative. From this, the PDF of the time between change of states is easily found to be $f_{T_i}(t) = \rho_i e^{-\rho_i t}\, u(t)$.

As with discrete time Markov chains, the continuous time Markov process can be described by its transition probabilities.

DEFINITION 9.11: Define $p_{i,j}(t) = \Pr(X(t_0 + t) = j \mid X(t_0) = i)$ to be the transition probability for a continuous time Markov process. If this probability does not depend on t0, then the process is said to be a homogeneous Markov process.

Unless otherwise stated, we assume for the rest of this chapter that all continuous time Markov processes are homogeneous. The transition probabilities, pi, j(t), are somewhat analogous to the n-step transition probabilities used in the study of discrete time processes and as a result, these probabilities satisfy a continuous time version of the Chapman-Kolmogorov equations:

(9.33) $p_{i,j}(t+s) = \sum_k p_{i,k}(t)\, p_{k,j}(s), \quad \text{for } t, s > 0.$

One of the most commonly studied classes of continuous time Markov processes is the birth-death process. These processes get their name from applications in the study of biological systems, but they are also commonly used in the study of queueing theory and many other applications. The birth-death process is similar to the discrete time random walk studied in the previous section in that when the process changes states, it either increases by 1 or decreases by 1. As with the Poisson counting process, the general class of birth-death processes can be described by the transition probabilities over an infinitesimal period of time, Δt. For a birth-death process,

(9.34) $p_{i,j}(\Delta t) = \begin{cases} \lambda_i\,\Delta t + o(\Delta t) & \text{if } j = i+1, \\ \mu_i\,\Delta t + o(\Delta t) & \text{if } j = i-1, \\ 1 - (\lambda_i + \mu_i)\Delta t + o(\Delta t) & \text{if } j = i, \\ o(\Delta t) & \text{if } j \ne i-1, i, i+1. \end{cases}$

The parameter λi is called the birth rate, while μi is the death rate when the process is in state i. In the context of queueing theory, λi and μi are referred to as the arrival and departure rates, respectively.

Similar to what was done with the Poisson counting process, by letting s = Δt in Equation 9.33 and then applying the infinitesimal transition probabilities, a set of differential equations can be developed that will allow us to solve for the general transition probabilities. From Equation 9.33,

(9.35) $p_{i,j}(t+\Delta t) = \sum_k p_{i,k}(t)\, p_{k,j}(\Delta t) = (\lambda_{j-1}\Delta t)\, p_{i,j-1}(t) + \bigl(1 - (\lambda_j + \mu_j)\Delta t\bigr)\, p_{i,j}(t) + (\mu_{j+1}\Delta t)\, p_{i,j+1}(t) + o(\Delta t).$

Rearranging terms and dividing by Δt produces

(9.36) $\frac{p_{i,j}(t+\Delta t) - p_{i,j}(t)}{\Delta t} = \lambda_{j-1}\, p_{i,j-1}(t) - (\lambda_j + \mu_j)\, p_{i,j}(t) + \mu_{j+1}\, p_{i,j+1}(t) + \frac{o(\Delta t)}{\Delta t}.$

Finally, passing to the limit as Δt → 0 results in

(9.37) $\frac{d}{dt} p_{i,j}(t) = \lambda_{j-1}\, p_{i,j-1}(t) - (\lambda_j + \mu_j)\, p_{i,j}(t) + \mu_{j+1}\, p_{i,j+1}(t).$

This set of equations is referred to as the forward Kolmogorov equations. One can follow a similar procedure (see Exercise 9.24) to develop a slightly different set of equations known as the backward Kolmogorov equations:

(9.38) $\frac{d}{dt} p_{i,j}(t) = \lambda_i\, p_{i+1,j}(t) - (\lambda_i + \mu_i)\, p_{i,j}(t) + \mu_i\, p_{i-1,j}(t).$

For all but the simplest examples, it is very difficult to find a closed form solution for this system of equations. However, the Kolmogorov equations can lend some insight into the behavior of the system. For example, consider the steady state distribution of the Markov process. If a steady state exists, we would expect that as t→ ∞, pi, j(t) → πj independent of i and also that dpi, j(t)/(dt) → 0. Plugging these simplifications into the forward Kolmogorov equations leads to

(9.39) $\lambda_{j-1}\pi_{j-1} - (\lambda_j + \mu_j)\pi_j + \mu_{j+1}\pi_{j+1} = 0.$

These equations are known as the global balance equations. From them, the steady state distribution can be found (if it exists). The solution to the balance equations is surprisingly easy to obtain. First, we rewrite the difference equation in the more symmetric form

(9.40) $\lambda_j\pi_j - \mu_{j+1}\pi_{j+1} = \lambda_{j-1}\pi_{j-1} - \mu_j\pi_j.$

Next, assume that the Markov process is defined on the states j = 0,1,2, …. Then the previous equation must be adjusted for the end point j = 0 (assuming μ0 = 0 which merely states that there can be no deaths when the population's size is zero) according to

(9.41) $\lambda_0\pi_0 - \mu_1\pi_1 = 0.$

Combining Equations 9.40 and 9.41 results in

(9.42) $\lambda_j\pi_j - \mu_{j+1}\pi_{j+1} = 0, \quad j = 0, 1, 2, \ldots,$

which leads to the simple recursion

(9.43) $\pi_{j+1} = \frac{\lambda_j}{\mu_{j+1}}\,\pi_j, \quad j = 0, 1, 2, \ldots,$

whose solution is given by

(9.44) $\pi_j = \pi_0\prod_{i=1}^{j}\frac{\lambda_{i-1}}{\mu_i}, \quad j = 0, 1, 2, \ldots.$

This gives the πj in terms of π0. In order to determine π0, the constraint that the πj must form a distribution is imposed.

(9.45) $\sum_{j=0}^{\infty}\pi_j = 1 \;\Rightarrow\; \pi_0 = \frac{1}{1 + \sum_{j=1}^{\infty}\prod_{i=1}^{j}\frac{\lambda_{i-1}}{\mu_i}}.$

This completes the proof of the following theorem.

THEOREM 9.4: For a Markov birth-death process with birth rate λn, n = 0,1,2, …, and death rate μn, n = 1,2,3, …, the steady state distribution is given by

(9.46) $\pi_k = \lim_{t\to\infty} p_{i,k}(t) = \frac{\prod_{i=1}^{k}\frac{\lambda_{i-1}}{\mu_i}}{1 + \sum_{j=1}^{\infty}\prod_{i=1}^{j}\frac{\lambda_{i-1}}{\mu_i}}.$

If the series in the denominator diverges, then πk = 0 for any finite k. This indicates that a steady state distribution does not exist. Likewise, if the series converges, the πk will be nonzero, resulting in a well-behaved steady state distribution.
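The steady-state distribution in Theorem 9.4 is easy to evaluate numerically by truncating the products and sums. The sketch below is an illustration added here (the truncation level n_max and the rate values are assumptions); it is then applied to the M/M/1 rates of Example 9.12.

```python
def bd_steady_state(birth, death, n_max=200):
    """Approximate steady-state probabilities pi_0..pi_{n_max} of a birth-death
    process per Theorem 9.4, truncating the state space at n_max.
    birth(n): lambda_n for n >= 0; death(n): mu_n for n >= 1."""
    weights = [1.0]                        # product term for j = 0 is 1
    for j in range(1, n_max + 1):
        weights.append(weights[-1] * birth(j - 1) / death(j))
    total = sum(weights)                   # 1 + sum of the products
    return [w / total for w in weights]

# M/M/1 rates of Example 9.12 (lam < mu, so a steady state exists)
lam, mu = 2.0, 3.0
pi = bd_steady_state(lambda n: lam, lambda n: mu)
print(pi[:4])                              # ~ (1 - lam/mu) * (lam/mu)^k
print([(1 - lam / mu) * (lam / mu) ** k for k in range(4)])
```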

EXAMPLE 9.12: (The M/M/1 Queue) In this example, we consider the birth-death process with constant birth rate and constant death rate. In particular, we take

$\lambda_n = \lambda,\; n = 0, 1, 2, \ldots \quad \text{and} \quad \mu_0 = 0,\; \mu_n = \mu,\; n = 1, 2, 3, \ldots.$

This model is commonly used in the study of queueing systems and, in that context, is referred to as the M/M/1 queue. In this nomenclature, the first “M” refers to the arrival process as being Markovian, the second “M” refers to the departure process as being Markovian, and the “1” is the number of servers. So this is a single server queue, where the interarrival time of new customers is an exponential random variable with mean 1/λ and the service time for each customer is exponential with mean 1/μ. For the M/M/1 queueing system, λi–1/μi = λ/μ for all i so that

$1 + \sum_{j=1}^{\infty}\prod_{i=1}^{j}\frac{\lambda_{i-1}}{\mu_i} = \sum_{j=0}^{\infty}\left(\frac{\lambda}{\mu}\right)^j = \frac{1}{1 - \lambda/\mu} \quad \text{for } \lambda < \mu.$

The resulting steady state distribution of the queue size is then

$\pi_k = \frac{(\lambda/\mu)^k}{1/(1 - \lambda/\mu)} = (1 - \lambda/\mu)(\lambda/\mu)^k, \quad k = 0, 1, 2, \ldots, \quad \text{for } \lambda < \mu.$

Hence, if the arrival rate is less than the departure rate, the queue size will have a steady state. It makes sense that if the arrival rate is greater than the departure rate, then the queue size will tend to grow without bound.

EXAMPLE 9.13: (The M/M/ ∞ Queue) Next suppose the last example is modified so that there are an infinite number of servers available to simultaneously provide service to all customers in the system. In that case, there are no customers ever waiting in line, and the process X(t) now counts the number of customers in the system (receiving service) at time t. As before, we take the arrival rate to be constant λn = λ, but now the departure rate needs to be proportional to the number of customers in service, μn = nμ. In this case, λi-1/μi = λ/(iμ) and

$1 + \sum_{j=1}^{\infty}\prod_{i=1}^{j}\frac{\lambda_{i-1}}{\mu_i} = 1 + \sum_{j=1}^{\infty}\prod_{i=1}^{j}\frac{\lambda}{i\mu} = \sum_{j=0}^{\infty}\frac{(\lambda/\mu)^j}{j!} = e^{\lambda/\mu}.$

Note that the series converges for any λ and μ, and hence the M/M/∞ queue will always have a steady state distribution given by

$\pi_k = \frac{(\lambda/\mu)^k}{k!}\, e^{-\lambda/\mu}.$


EXAMPLE 9.14: This example demonstrates one way to simulate the M/M/1 queueing system of Example 9.12. One realization of this process as produced by the code that follows is illustrated in Figure 9.4. In generating the figure, we use an average arrival rate of λ = 20 customers per hour and an average service time of 1/μ = 2 minutes. This leads to the condition λ < μ and the M/M/1 queue exhibits stable behavior. The reader is encouraged to run the program for the case when λ > μ to observe the unstable behavior (the queue size will tend to grow continuously over time).
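The MATLAB listing referred to here did not survive extraction; the following Python sketch is one way to simulate the same birth-death sample path (an added illustration, not the authors' code). It uses the fact that, in state n, the time to the next event is exponential with total rate λ + μ·1{n > 0}, and that the event is an arrival with probability λ divided by that total rate.

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 20.0       # arrivals per hour
mu = 30.0        # service rate per hour (mean service time 2 minutes)
t_end = 8.0      # hours to simulate

t, n = 0.0, 0
times, sizes = [0.0], [0]
while t < t_end:
    rate = lam + (mu if n > 0 else 0.0)      # total event rate in state n
    t += rng.exponential(1 / rate)           # time to next birth or death
    if rng.random() < lam / rate:            # next event is an arrival
        n += 1
    else:                                    # next event is a departure
        n -= 1
    times.append(t)
    sizes.append(n)

print("final queue size:", sizes[-1])
print("time-average queue size:",
      np.sum(np.diff(times) * np.array(sizes[:-1])) / times[-1])
```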


Figure 9.4. Simulated realization of the birth-death process for M/M/1 queueing system of Example 9.12.


If the birth-death process is truly modeling the size of a population of some organism, then it would be reasonable to consider the case when λ0 = 0. That is, when the population size reaches zero, no further births can occur. In that case, the species is extinct and the state X(t) = 0 is an absorbing state. A fundamental question would then be, Is extinction a certain event, and if not what is the probability of the process being absorbed into the state of extinction? Naturally the answer to this question would depend on the starting population size. Let qi be the probability that the process eventually enters the absorbing state, given that it is initially in state i. Note that if the process is currently in state i, after the next transition, the birth-death process must be either in state i – 1 or state i + 1. The time to the next birth, Bi, is a random variable with an exponential distribution with a mean of 1/λi, while the time to the next death is an exponential random variable, Di, with a mean of 1/μi. Hence, the process will transition to state i + 1 if Bi < Di, otherwise it will transition to state i – 1. The reader can easily verify that Pr(Bi < Di) = λi/(λi + μi). The absorption probability can then be written as

(9.47) $q_i = \Pr(\text{absorption} \mid \text{in state } i)$
$= \Pr(\text{absorption, next state is } i+1 \mid \text{in state } i) + \Pr(\text{absorption, next state is } i-1 \mid \text{in state } i)$
$= \Pr(\text{absorption} \mid \text{in state } i+1)\Pr(\text{next state is } i+1 \mid \text{in state } i) + \Pr(\text{absorption} \mid \text{in state } i-1)\Pr(\text{next state is } i-1 \mid \text{in state } i)$
$= q_{i+1}\frac{\lambda_i}{\lambda_i + \mu_i} + q_{i-1}\frac{\mu_i}{\lambda_i + \mu_i}, \quad i = 1, 2, 3, \ldots.$

This provides a recursive set of equations that can be solved to find the absorption probabilities. To solve this set of equations, we rewrite them as

(9.48) $q_{i+1} - q_i = \frac{\mu_i}{\lambda_i}(q_i - q_{i-1}), \quad i = 1, 2, 3, \ldots.$

After applying this recursion repeatedly and using the fact that q0 = 1,

(9.49) $q_{i+1} - q_i = (q_1 - 1)\prod_{j=1}^{i}\frac{\mu_j}{\lambda_j}.$

Summing this equation from i = 1,2, …, n results in

(9.50) $q_{n+1} - q_1 = (q_1 - 1)\sum_{i=1}^{n}\prod_{j=1}^{i}\frac{\mu_j}{\lambda_j}.$

Next, suppose that the series on the right-hand side of the previous equation diverges as n → ∞. Since the qi are probabilities, the left-hand side of the equation must be bounded, which implies that q1 = 1. Then from Equation 9.49, it is determined that qn must be equal to one for all n. That is, if

(9.51) $\sum_{i=1}^{\infty}\prod_{j=1}^{i}\frac{\mu_j}{\lambda_j} = \infty,$

then absorption will eventually occur with probability 1 regardless of the starting state. If q1 < 1 (absorption is not certain), then the preceding series must converge to a finite number. It is expected in that case that as n → ∞, qn → 0. Passing to the limit as n → ∞ in Equation 9.50 then allows a solution for q1 of the form

(9.52) $q_1 = \frac{\sum_{i=1}^{\infty}\prod_{j=1}^{i}\mu_j/\lambda_j}{1 + \sum_{i=1}^{\infty}\prod_{j=1}^{i}\mu_j/\lambda_j}.$

Furthermore, the general solution for the absorption probability is

(9.53) $q_n = \frac{\sum_{i=n}^{\infty}\prod_{j=1}^{i}\mu_j/\lambda_j}{1 + \sum_{i=1}^{\infty}\prod_{j=1}^{i}\mu_j/\lambda_j}.$

EXAMPLE 9.15: Consider a population model where both the birth and death rates are proportional to the population, λn = nλ, μn = nμ. For this model,

$\sum_{i=1}^{\infty}\prod_{j=1}^{i}\frac{\mu_j}{\lambda_j} = \sum_{i=1}^{\infty}\prod_{j=1}^{i}\frac{\mu}{\lambda} = \sum_{i=1}^{\infty}\left(\frac{\mu}{\lambda}\right)^i = \frac{\mu/\lambda}{1 - \mu/\lambda} = \frac{\mu}{\lambda - \mu} \quad \text{for } \lambda > \mu.$

Hence, if λ < μ, the series diverges and the species will eventually reach extinction with probability 1. If λ > μ,

$\sum_{i=n}^{\infty}\prod_{j=1}^{i}\frac{\mu_j}{\lambda_j} = \sum_{i=n}^{\infty}\left(\frac{\mu}{\lambda}\right)^i = \frac{(\mu/\lambda)^n}{1 - \mu/\lambda},$

and the absorption (extinction) probabilities are

$q_n = \left(\frac{\mu}{\lambda}\right)^n, \quad n = 1, 2, 3, \ldots.$
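For a quick numerical illustration (arbitrary values): if the birth rate is twice the death rate, λ = 2μ, then starting from a single individual the extinction probability is q1 = μ/λ = 1/2, while starting from three individuals it is

$q_3 = \left(\frac{\mu}{\lambda}\right)^3 = \left(\frac{1}{2}\right)^3 = \frac{1}{8},$

so a larger initial population makes eventual extinction correspondingly less likely.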

Continuous time Markov processes do not necessarily need to have a discrete amplitude as in the previous examples. In the following, we discuss a class of continuous time, continuous amplitude Markov processes. To start with, it is noted that for any time instants t0 < t1 < t2, the conditional PDF of a Markov process must satisfy the Chapman-Kolmogorov equation

(9.54) $f(x_2, t_2 \mid x_0, t_0) = \int_{-\infty}^{\infty} f(x_2, t_2 \mid x_1, t_1)\, f(x_1, t_1 \mid x_0, t_0)\, dx_1.$

This is just the continuous amplitude version of Equation 9.33. Here we use the notation f(x2, t2|x1, t1) to represent the conditional probability density of the process X(t2) at the point x2 conditioned on X(t1) = x1. Next, suppose we interpret these time instants as t0 = 0, t1 = t, and t2 = t + Δt. In this case, we interpret x2 − x1 = Δx as the infinitesimal change in the process that occurs during the infinitesimal time instant Δt, and f(x2, t2|x1, t1) is the PDF of that increment.

Define ΦΔx(ω) to be the characteristic function of Δx = x2 – x1:

(9.55) $\Phi_{\Delta x}(\omega) = E\bigl[e^{j\omega\Delta x}\bigr] = \int_{-\infty}^{\infty} e^{j\omega(x_2 - x_1)}\, f(x_2, t+\Delta t \mid x_1, t)\, dx_2.$

We note that the characteristic function can be expressed in a Taylor series as

(9.56) $\Phi_{\Delta x}(\omega) = \sum_{k=0}^{\infty}\frac{M_k(x_1, t)}{k!}(j\omega)^k,$

where $M_k(x_1, t) = E[(x_2 - x_1)^k \mid x_1, t]$ is the kth moment of the increment Δx. Taking inverse transforms of this expression, the conditional PDF can be expressed as

(9.57) $f(x_2, t+\Delta t \mid x_1, t) = \sum_{k=0}^{\infty}\frac{M_k(x_1, t)}{k!}(-1)^k\frac{\partial^k}{\partial x_2^k}\bigl(\delta(x_2 - x_1)\bigr).$

Inserting this result into the Chapman-Kolmogorov equation, Equation 9.54, results in

(9.58) $f(x_2, t+\Delta t \mid x_0, t_0) = \sum_{k=0}^{\infty}\frac{(-1)^k}{k!}\int_{-\infty}^{\infty} M_k(x_1, t)\frac{\partial^k}{\partial x_2^k}\delta(x_2 - x_1)\, f(x_1, t \mid x_0, t_0)\, dx_1 = \sum_{k=0}^{\infty}\frac{(-1)^k}{k!}\frac{\partial^k}{\partial x_2^k}\bigl[M_k(x_2, t)\, f(x_2, t \mid x_0, t_0)\bigr] = f(x_2, t \mid x_0, t_0) + \sum_{k=1}^{\infty}\frac{(-1)^k}{k!}\frac{\partial^k}{\partial x_2^k}\bigl[M_k(x_2, t)\, f(x_2, t \mid x_0, t_0)\bigr].$

Subtracting f (x2, t|x0, t0) from both sides of this equation and dividing by Δt results in

(9.59) $\frac{f(x_2, t+\Delta t \mid x_0, t_0) - f(x_2, t \mid x_0, t_0)}{\Delta t} = \sum_{k=1}^{\infty}\frac{(-1)^k}{k!}\frac{\partial^k}{\partial x_2^k}\left[\frac{M_k(x_2, t)}{\Delta t}\, f(x_2, t \mid x_0, t_0)\right].$

Finally, passing to the limit as Δt → 0 results in the partial differential equation

(9.60) $\frac{\partial}{\partial t} f(x, t \mid x_0, t_0) = \sum_{k=1}^{\infty}\frac{(-1)^k}{k!}\frac{\partial^k}{\partial x^k}\bigl[K_k(x, t)\, f(x, t \mid x_0, t_0)\bigr],$

where the function Kk(x, t) is defined as

(9.61) $K_k(x, t) = \lim_{\Delta t \to 0}\frac{E\bigl[(X(t+\Delta t) - X(t))^k \mid X(t)\bigr]}{\Delta t}.$

For many processes of interest, the PDF of an infinitesimal increment can be accurately approximated from its first few moments, and hence we take Kk(x, t) = 0 for k > 2. For such processes, the PDF must satisfy

(9.62) $\frac{\partial}{\partial t} f(x, t \mid x_0, t_0) = -\frac{\partial}{\partial x}\bigl(K_1(x, t)\, f(x, t \mid x_0, t_0)\bigr) + \frac{1}{2}\frac{\partial^2}{\partial x^2}\bigl(K_2(x, t)\, f(x, t \mid x_0, t_0)\bigr).$

This is known as the (one-dimensional) Fokker-Planck equation and is used extensively in diffusion theory to model the dispersion of fumes, smoke, and similar phenomena.

In general, the Fokker-Planck equation is notoriously difficult to solve and doing so is well beyond the scope of this text. Instead, we consider a simple special case where the functions K1(x, t) and K2(x, t) are constants, in which case the Fokker-Planck equation reduces to

(9.63) $\frac{\partial}{\partial t} f(x, t \mid x_0, t_0) = -2c\,\frac{\partial}{\partial x} f(x, t \mid x_0, t_0) + D\,\frac{\partial^2}{\partial x^2} f(x, t \mid x_0, t_0),$

where in diffusion theory, D is known as the coefficient of diffusion and c is the drift. This equation is used in models that involve the diffusion of smoke or other pollutants in the atmosphere, the diffusion of electrons in a conductive medium, the diffusion of liquid pollutants in water and soil, and the diffusion of plasmas. This equation can be solved in several ways. Perhaps one of the easiest methods is to use Fourier transforms. This is explored further in the exercises where the reader is asked to show that (taking x0 = 0 and t0 = 0) the solution to this diffusion equation is

(9.64) $f(x, t \mid x_0 = 0, t_0 = 0) = \frac{1}{\sqrt{4\pi D t}}\exp\left(-\frac{(x - 2ct)^2}{4Dt}\right).$

That is, the PDF is Gaussian with a mean and variance that changes with time. The behavior of this process is explored in the next example.


EXAMPLE 9.16: In this example, we model the diffusion of smoke from a forest fire that starts in a National Park at time t = 0 and location x = 0. The smoke from the fire drifts in the positive x direction due to wind blowing at 10 miles per hour, and the diffusion coefficient is 1 square mile per hour. The probability density function is given in Equation 9.64. We provide a three-dimensional rendition of this function in Figure 9.5 using the following MATLAB program.
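The MATLAB program mentioned here was not reproduced in this extract. A minimal Python sketch that evaluates Equation 9.64 at a few time instants is given below; the drift parameter is set so that the mean position 2ct advances at the stated 10 miles per hour (i.e., c = 5), and this choice, along with the spatial grid, is an assumption rather than something recoverable from the original listing.

```python
import numpy as np

def smoke_pdf(x, t, c=5.0, D=1.0):
    """Equation 9.64: Gaussian PDF with mean 2*c*t and variance 2*D*t."""
    return np.exp(-(x - 2 * c * t) ** 2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

x = np.linspace(0.0, 120.0, 601)        # miles downwind of the fire
for t in (1.0, 3.0, 6.0, 10.0):         # hours after ignition
    f = smoke_pdf(x, t)
    print(f"t = {t:4.1f} h: peak at x = {x[np.argmax(f)]:6.1f} mi, "
          f"peak density = {f.max():.4f}")
```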


Figure 9.5. Observations of the PDF at different time instants showing the drift and dispersion of smoke for Example 9.16.



URL: https://www.sciencedirect.com/science/article/pii/B9780121726515500099

Which of the following is not generally considered to be a measure of system performance in a queuing analysis?

The average service time is not generally considered a measure of performance in queueing analysis, as it depends on many other factors and lies outside the scope of the queueing system itself.

Which of the following is not a measure of queue performance?

Answer and Explanation: The average inter-arrival time is not generally considered as a measure of performance in queuing analysis.

What are the most common measures of system performance in a queuing analysis?

The most common measures of system performance in queuing analysis are the number of customers waiting in line, the average time customers wait, system utilization, and so on.

Which one of the following measures of system performance is a key measure with respect to customer satisfaction?

Average number of customers waiting in line.