| Author: | Deisenroth, Marc Peter |
| Title: | Efficient Reinforcement Learning using Gaussian Processes |
| Publication: | KIT Scientific Publishing, 2010 |
| Physical description: | 1 online resource (IX, 205 p.) |
| Uncontrolled subject: | autonomous learning; Bayesian inference; control; Gaussian processes; machine learning |
| Summary/abstract: | This book examines Gaussian processes in both model-based reinforcement learning (RL) and inference in nonlinear dynamic systems. First, we introduce PILCO, a fully Bayesian approach for efficient RL in continuous-valued state and action spaces when no expert knowledge is available. PILCO takes model uncertainties consistently into account during long-term planning to reduce model bias. Second, we propose principled algorithms for robust filtering and smoothing in GP dynamic systems. |
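The abstract's central idea is that a Gaussian process model reports not just a predicted next state but a predictive variance, and a PILCO-style planner uses that uncertainty to avoid model bias. A minimal sketch of GP regression with an RBF kernel illustrates the predictive uncertainty involved; the hyperparameters and toy data below are illustrative assumptions, not values from the book.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, signal_var=1.0):
    # Squared-exponential kernel k(a, b) = s^2 * exp(-(a - b)^2 / (2 l^2))
    d = a[:, None] - b[None, :]
    return signal_var * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_predict(X, y, X_star, noise_var=1e-2):
    # Standard GP posterior mean and variance for 1-D inputs.
    K = rbf(X, X) + noise_var * np.eye(len(X))
    K_s = rbf(X, X_star)
    K_ss = rbf(X_star, X_star)
    alpha = np.linalg.solve(K, y)
    mean = K_s.T @ alpha
    v = np.linalg.solve(K, K_s)
    var = np.diag(K_ss - K_s.T @ v) + noise_var
    return mean, var

# Toy "dynamics" data: observations along a trajectory (hypothetical).
X = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.sin(X)
mean, var = gp_predict(X, y, np.array([0.5, 5.0]))
# Far from the training data (x = 5.0) the predictive variance is much
# larger than near it (x = 0.5); a planner that propagates this
# uncertainty, as the abstract describes, discounts such predictions.
```

The variance term is what distinguishes this from a point-estimate dynamics model: uncertain regions of the state space contribute uncertain long-term predictions rather than overconfident ones.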
| Authorized title: | Efficient Reinforcement Learning using Gaussian Processes |
| ISBN: | 1-000-01979-9 |
| Format: | Printed material |
| Bibliographic level: | Monograph |
| Publication language: | English |
| Record no.: | 9910346911203321 |
| Find it at: | Univ. Federico II |
| OPAC: | Check availability here |