LEADER 00499nam 2200193zu 450
001 9910918090003321
005 20241203191033.0
035 $a(CKB)36718693300041
035 $a(EXLCZ)9936718693300041
100 $a20241203|2023uuuu || |
101 0 $aeng
135 $aur|||||||||||
200 10$aCivil Society Elites
210 $cNIAS Press$d2023
700 $aNorén-Nilsson$b Astrid$01742188
906 $aBOOK
912 $a9910918090003321
996 $aCivil Society Elites$94305641
997 $aUNINA

LEADER 01731nam 2200373z- 450
001 9910346911203321
005 20210211
010 $a1-000-01979-9
035 $a(CKB)4920000000101410
035 $a(oapen)https://directory.doabooks.org/handle/20.500.12854/45907
035 $a(oapen)doab45907
035 $a(EXLCZ)994920000000101410
100 $a20202102d2010 |y 0
101 0 $aeng
135 $aurmn|---annan
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 00$aEfficient Reinforcement Learning using Gaussian Processes
210 $cKIT Scientific Publishing$d2010
215 $a1 online resource (IX, 205 p.)
225 1 $aKarlsruhe Series on Intelligent Sensor-Actuator-Systems / Karlsruher Institut für Technologie, Intelligent Sensor-Actuator-Systems Laboratory
311 08$a3-86644-569-5
330 $aThis book examines Gaussian processes in both model-based reinforcement learning (RL) and inference in nonlinear dynamic systems. First, we introduce PILCO, a fully Bayesian approach for efficient RL in continuous-valued state and action spaces when no expert knowledge is available. PILCO takes model uncertainties consistently into account during long-term planning to reduce model bias. Second, we propose principled algorithms for robust filtering and smoothing in GP dynamic systems.
610 $aautonomous learning
610 $aBayesian inference
610 $acontrol
610 $aGaussian processes
610 $amachine learning
700 $aDeisenroth$b Marc Peter$4auth$01295433
906 $aBOOK
912 $a9910346911203321
996 $aEfficient Reinforcement Learning using Gaussian Processes$93023440
997 $aUNINA
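
Each line in the records above is a UNIMARC field: a three-digit tag, optional one- or two-character indicators, and $-prefixed subfields, with control fields such as 001 and 005 carrying bare data. The following is a minimal sketch of how one line of this particular dump might be split apart; the parse_field helper and its assumed layout (space after the tag, single-letter subfield codes) are my own illustration, not a standard cataloging API, and established MARC tooling should be preferred for real records.

import re

# Sketch: split one line of the dump above, e.g.
#   "700 $aDeisenroth$b Marc Peter$4auth$01742188"
# into its tag, indicators, and $-prefixed subfields.
TAG = re.compile(r"^(\d{3})\s?(.*)$")

def parse_field(line: str):
    """Return (tag, indicators, subfields) for one dump line, or None."""
    m = TAG.match(line)
    if m is None:
        return None                        # e.g. the LEADER line has no tag
    tag, rest = m.groups()
    head, sep, body = rest.partition("$")
    if not sep:
        # Control fields (001, 005, ...) carry bare data, no subfields.
        return tag, "", [("", head.strip())]
    # Each $-delimited chunk starts with its one-character subfield code.
    subfields = [(chunk[0], chunk[1:]) for chunk in body.split("$") if chunk]
    return tag, head.strip(), subfields    # head holds indicators like "10"

if __name__ == "__main__":
    print(parse_field("210 $cNIAS Press$d2023"))
    # ('210', '', [('c', 'NIAS Press'), ('d', '2023')])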