Title: Efficient Reinforcement Learning using Gaussian Processes
Author: Deisenroth, Marc Peter
Publisher: KIT Scientific Publishing, 2010
Description: 1 online resource (IX, 205 p.)
Series: Karlsruhe Series on Intelligent Sensor-Actuator-Systems / Karlsruher Institut für Technologie, Intelligent Sensor-Actuator-Systems Laboratory
Language: English
Resource type: Book (electronic)
ISBN: 3-86644-569-5; 1-000-01979-9
Identifiers: (CKB)4920000000101410; (oapen)doab45907; (EXLCZ)994920000000101410
DOAB handle: https://directory.doabooks.org/handle/20.500.12854/45907
Record IDs: 9910346911203321 (UNINA); 3023440

Abstract: This book examines Gaussian processes in both model-based reinforcement learning (RL) and inference in nonlinear dynamic systems. First, we introduce PILCO, a fully Bayesian approach for efficient RL in continuous-valued state and action spaces when no expert knowledge is available. PILCO takes model uncertainties consistently into account during long-term planning to reduce model bias. Second, we propose principled algorithms for robust filtering and smoothing in GP dynamic systems.

Keywords: autonomous learning; Bayesian inference; control; Gaussian processes; machine learning