LEADER 01359nam 2200457 450
001 9910789357103321
005 20230725054654.0
010 $a2-84516-491-2
035 $a(CKB)3710000000104035
035 $a(EBL)3239309
035 $a(SSID)ssj0001212370
035 $a(PQKBManifestationID)11699914
035 $a(PQKBTitleCode)TC0001212370
035 $a(PQKBWorkID)11207627
035 $a(PQKB)11629637
035 $a(MiAaPQ)EBC3239309
035 $a(EXLCZ)993710000000104035
100 $a20140425d2011 uy| 0
101 0 $afre
135 $aur|n|---|||||
181 $ctxt
182 $cc
183 $acr
200 13$aLa mer lumière /$fPedro Salinas ; Édition, introduction, traduction et notes de Bernadette Hidalgo Bachs
210 1$aClermont-Ferrand :$cPresses Universitaires Blaise Pascal,$d2011.
215 $a1 online resource (126 p.)
225 0 $aCelis Textes
300 $aDescription based upon print version of record.
311 $a2-84516-490-4
410 0$aCRLMC/Textes
606 $aSpanish poetry$y20th century
615 0$aSpanish poetry
700 $aSalinas$b Pedro$0152724
702 $aBachs$b Bernadette Hidalgo
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910789357103321
996 $aLa mer lumière$93727158
997 $aUNINA

LEADER 01731nam 2200373z- 450
001 9910346911203321
005 20210211
010 $a1-000-01979-9
035 $a(CKB)4920000000101410
035 $a(oapen)https://directory.doabooks.org/handle/20.500.12854/45907
035 $a(oapen)doab45907
035 $a(EXLCZ)994920000000101410
100 $a20202102d2010 |y 0
101 0 $aeng
135 $aurmn|---annan
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 00$aEfficient Reinforcement Learning using Gaussian Processes
210 $cKIT Scientific Publishing$d2010
215 $a1 online resource (IX, 205 p.)
225 1 $aKarlsruhe Series on Intelligent Sensor-Actuator-Systems / Karlsruher Institut für Technologie, Intelligent Sensor-Actuator-Systems Laboratory
311 08$a3-86644-569-5
330 $aThis book examines Gaussian processes in both model-based reinforcement learning (RL) and inference in nonlinear dynamic systems. First, we introduce PILCO, a fully Bayesian approach for efficient RL in continuous-valued state and action spaces when no expert knowledge is available. PILCO takes model uncertainties consistently into account during long-term planning to reduce model bias. Second, we propose principled algorithms for robust filtering and smoothing in GP dynamic systems.
610 $aautonomous learning
610 $aBayesian inference
610 $acontrol
610 $aGaussian processes
610 $amachine learning
700 $aDeisenroth$b Marc Peter$4auth$01295433
906 $aBOOK
912 $a9910346911203321
996 $aEfficient Reinforcement Learning using Gaussian Processes$93023440
997 $aUNINA