
Record No.

UNINA9910840905003321

Author

Sennott, Linn I., 1943-

Title

Stochastic dynamic programming and the control of queueing systems [electronic resource] / Linn I. Sennott

Publication/distribution

New York : John Wiley & Sons, c1999

ISBN

1-282-30800-9

9786612308000

0-470-31703-5

0-470-31787-6

Physical description

1 online resource (354 p.)

Series

Wiley series in probability and statistics. Applied probability and statistics section

Classification (Dewey)

519.703

519.82

Subjects

Stochastic programming

Dynamic programming

Queuing theory

Language of publication

English

Format

Printed material

Bibliographic level

Monograph

General notes

"A Wiley-Interscience publication."

Bibliography note

Includes bibliographical references (p. 316-323) and index.

Contents note

Stochastic Dynamic Programming and the Control of Queueing Systems; Contents; Preface; 1. Introduction; 1.1. Examples; 1.2. Aspects of Control; 1.3. Goals and Summary of Chapters; Bibliographic Notes; Problems; 2. Optimization Criteria; 2.1. Basic Notation; 2.2. Policies; 2.3. Conditional Cost Distributions; 2.4. Optimization Criteria; 2.5. Approximating Sequence Method; Bibliographic Notes; Problems; 3. Finite Horizon Optimization; 3.1. Finite Horizon Optimality Equation; 3.2. ASM for the Finite Horizon; 3.3. When Does FH(α, n) Hold?; 3.4. A Queueing Example; Bibliographic Notes; Problems

4. Infinite Horizon Discounted Cost Optimization; 4.1. Infinite Horizon Discounted Cost Optimality Equation; 4.2. Solutions to the Optimality Equation; 4.3. Convergence of Finite Horizon Value Functions; 4.4. Characterization of Optimal Policies; 4.5. Analytic Properties of the Value Function; 4.6. ASM for the Infinite Horizon Discounted Case; 4.7. When Does DC(α) Hold?; Bibliographic Notes; Problems; 5. An Inventory Model; 5.1. Formulation of the MDC; 5.2. Optimality Equations; 5.3. An Approximating Sequence; 5.4. Numerical Results; Bibliographic Notes; Problems

6. Average Cost Optimization for Finite State Spaces; 6.1. A Fundamental Relationship for S Countable; 6.2. An Optimal Stationary Policy Exists; 6.3. An Average Cost Optimality Equation; 6.4. ACOE for Constant Minimum Average Cost; 6.5. Solutions to the ACOE; 6.6. Method of Calculation; 6.7. An Example; Bibliographic Notes; Problems; 7. Average Cost Optimization Theory for Countable State Spaces; 7.1. Counterexamples; 7.2. The (SEN) Assumptions; 7.3. An Example; 7.4. Average Cost Optimality Inequality; 7.5. Sufficient Conditions for the (SEN) Assumptions; 7.6. Examples

7.7. Weakening the (SEN) Assumptions; Bibliographic Notes; Problems; 8. Computation of Average Cost Optimal Policies for Infinite State Spaces; 8.1. The (AC) Assumptions; 8.2. Verification of the Assumptions; 8.3. Examples; *8.4. Another Example; 8.5. Service Rate Control Queue; 8.6. Routing to Parallel Queues; 8.7. Weakening the (AC) Assumptions; Bibliographic Notes; Problems; 9. Optimization Under Actions at Selected Epochs; 9.1. Single- and Multiple-Sample Models; 9.2. Properties of an MS Distribution; 9.3. Service Control of the Single-Server Queue

9.4. Arrival Control of the Single-Server Queue; 9.5. Average Cost Optimization of Example 9.3.1; 9.6. Average Cost Optimization of Example 9.3.2; 9.7. Computation Under Deterministic Service Times; 9.8. Computation Under Geometric Service Times; Bibliographic Notes; Problems; 10. Average Cost Optimization of Continuous Time Processes; 10.1. Exponential Distributions and the Poisson Process; 10.2. Continuous Time Markov Decision Chains; 10.3. Average Cost Optimization of a CTMDC; 10.4. Service Rate Control of the M/M/1 Queue; 10.5. M/M/K Queue with Dynamic Service Pool

10.6. Control of a Polling System

Summary/abstract

A path-breaking account of Markov decision processes: theory and computation. This book's clear presentation of theory, numerous chapter-end problems, and development of a unified method for the computation of optimal policies in both discrete and continuous time make it an excellent course text for graduate students and advanced undergraduates. Its comprehensive coverage of important recent advances in stochastic dynamic programming makes it a valuable working resource for operations research professionals, management scientists, engineers, and others.