Handbook of learning and approximate dynamic programming / [edited by] Jennie Si ... [et al.]

Published: Hoboken, New Jersey : IEEE Press, c2004.
Online version: [Piscataway, New Jersey] : IEEE Xplore, [2004]
Description: 1 PDF (xxi, 644 pages) : illustrations
Series: IEEE Press series on computational intelligence ; 2
DOI: 10.1109/9780470544785
Print version ISBN: 9780471660545
OCLC number: 798698949
Bibliographic level / mode of issuance: Monograph
Note: Includes bibliographical references and index.

Contents:
Foreword.
1. ADP: goals, opportunities and principles.
Part I: Overview.
2. Reinforcement learning and its relationship to supervised learning.
3. Model-based adaptive critic designs.
4. Guidance in the use of adaptive critics for control.
5. Direct neural dynamic programming.
6. The linear programming approach to approximate dynamic programming.
7. Reinforcement learning in large, high-dimensional state spaces.
8. Hierarchical decision making.
Part II: Technical advances.
9. Improved temporal difference methods with linear function approximation.
10. Approximate dynamic programming for high-dimensional resource allocation problems.
11. Hierarchical approaches to concurrency, multiagency, and partial observability.
12. Learning and optimization - from a system theoretic perspective.
13. Robust reinforcement learning using integral-quadratic constraints.
14. Supervised actor-critic reinforcement learning.
15. BPTT and DAC - a common framework for comparison.
Part III: Applications.
16. Near-optimal control via reinforcement learning.
17. Multiobjective control problems by reinforcement learning.
18. Adaptive critic based neural network for control-constrained agile missile.
19. Applications of approximate dynamic programming in power systems control.
20. Robust reinforcement learning for heating, ventilation, and air conditioning control of buildings.
21. Helicopter flight control using direct neural dynamic programming.
22. Toward dynamic stochastic optimal power flow.
23. Control, optimization, security, and self-healing of benchmark power systems.

Summary: A complete resource on approximate dynamic programming (ADP), including online simulation code. Provides a tutorial that readers can use to start implementing the learning algorithms presented in the book. Includes ideas, directions, and recent results on current research issues, and addresses applications where ADP has been successfully implemented. The contributors are leading researchers in the field.

Subjects: Dynamic programming; Automatic programming (Computer science); Machine learning; Control theory; Systems engineering; Engineering & Applied Sciences (HILCC); Civil & Environmental Engineering (HILCC); Computer Science (HILCC); Operations Research (HILCC).
Dewey class number: 519.7/03
Added entry: Si, Jennie.