
Record no.

UNINA9910139593803321

Author

Powell, Warren B. <1955->

Title

Approximate dynamic programming [electronic resource] : solving the curses of dimensionality / Warren B. Powell

Publication/distribution

Hoboken, N.J. : J. Wiley & Sons, c2011

ISBN

1-283-27370-5

9786613273703

1-118-02916-X

1-118-02917-8

1-118-02915-1

Edition

[2nd ed.]

Physical description

1 online resource (658 p.)

Series

Wiley series in probability and statistics

Classification

519.7/03

519.703

Subjects

Dynamic programming

Programming (Mathematics)

Language of publication

English

Format

Printed material

Bibliographic level

Monograph

General notes

Description based upon print version of record.

Bibliography note

Includes bibliographical references and index.

Contents note

Approximate Dynamic Programming; Contents; Preface to the Second Edition; Preface to the First Edition; Acknowledgments

1 The Challenges of Dynamic Programming; 1.1 A Dynamic Programming Example: A Shortest Path Problem; 1.2 The Three Curses of Dimensionality; 1.3 Some Real Applications; 1.4 Problem Classes; 1.5 The Many Dialects of Dynamic Programming; 1.6 What Is New in This Book?; 1.7 Pedagogy; 1.8 Bibliographic Notes

2 Some Illustrative Models; 2.1 Deterministic Problems; 2.2 Stochastic Problems; 2.3 Information Acquisition Problems; 2.4 A Simple Modeling Framework for Dynamic Programs; 2.5 Bibliographic Notes; Problems

3 Introduction to Markov Decision Processes; 3.1 The Optimality Equations; 3.2 Finite Horizon Problems; 3.3 Infinite Horizon Problems; 3.4 Value Iteration; 3.5 Policy Iteration; 3.6 Hybrid Value-Policy Iteration; 3.7 Average Reward Dynamic Programming; 3.8 The Linear Programming Method for Dynamic Programs; 3.9 Monotone Policies*; 3.10 Why Does It Work?**; 3.11 Bibliographic Notes; Problems

4 Introduction to Approximate Dynamic Programming; 4.1 The Three Curses of Dimensionality (Revisited); 4.2 The Basic Idea; 4.3 Q-Learning and SARSA; 4.4 Real-Time Dynamic Programming; 4.5 Approximate Value Iteration; 4.6 The Post-Decision State Variable; 4.7 Low-Dimensional Representations of Value Functions; 4.8 So Just What Is Approximate Dynamic Programming?; 4.9 Experimental Issues; 4.10 But Does It Work?; 4.11 Bibliographic Notes; Problems

5 Modeling Dynamic Programs; 5.1 Notational Style; 5.2 Modeling Time; 5.3 Modeling Resources; 5.4 The States of Our System; 5.5 Modeling Decisions; 5.6 The Exogenous Information Process; 5.7 The Transition Function; 5.8 The Objective Function; 5.9 A Measure-Theoretic View of Information**; 5.10 Bibliographic Notes; Problems

6 Policies; 6.1 Myopic Policies; 6.2 Lookahead Policies; 6.3 Policy Function Approximations; 6.4 Value Function Approximations; 6.5 Hybrid Strategies; 6.6 Randomized Policies; 6.7 How to Choose a Policy?; 6.8 Bibliographic Notes; Problems

7 Policy Search; 7.1 Background; 7.2 Gradient Search; 7.3 Direct Policy Search for Finite Alternatives; 7.4 The Knowledge Gradient Algorithm for Discrete Alternatives; 7.5 Simulation Optimization; 7.6 Why Does It Work?**; 7.7 Bibliographic Notes; Problems

8 Approximating Value Functions; 8.1 Lookup Tables and Aggregation; 8.2 Parametric Models; 8.3 Regression Variations; 8.4 Nonparametric Models; 8.5 Approximations and the Curse of Dimensionality; 8.6 Why Does It Work?**; 8.7 Bibliographic Notes; Problems

9 Learning Value Function Approximations; 9.1 Sampling the Value of a Policy; 9.2 Stochastic Approximation Methods; 9.3 Recursive Least Squares for Linear Models; 9.4 Temporal Difference Learning with a Linear Model; 9.5 Bellman's Equation Using a Linear Model; 9.6 Analysis of TD(0), LSTD, and LSPE Using a Single State; 9.7 Gradient-Based Methods for Approximate Value Iteration*; 9.8 Least Squares Temporal Differencing with Kernel Regression*

Summary/abstract

Praise for the First Edition: "Finally, a book devoted to dynamic programming and written using the language of operations research (OR)! This beautiful book fills a gap in the libraries of OR specialists and practitioners." - Computing Reviews. This new edition showcases a focus on modeling and computation for complex classes of approximate dynamic programming problems. Understanding approximate dynamic programming (ADP) is vital in order to develop practical and high-quality solutions to complex industrial problems, particularly when those problems i