LEADER 02913nam 2200685 a 450
001 9910830115603321
005 20210715085735.0
010 $a1-118-62010-0
010 $a1-118-55742-5
010 $a1-299-31547-X
010 $a1-118-61987-0
035 $a(CKB)2560000000100628
035 $a(EBL)1143625
035 $a(SSID)ssj0000833610
035 $a(PQKBManifestationID)11529302
035 $a(PQKBTitleCode)TC0000833610
035 $a(PQKBWorkID)10936259
035 $a(PQKB)10364510
035 $a(MiAaPQ)EBC1143625
035 $a(OCoLC)830161640
035 $a(CaSebORM)9781118620106
035 $a(EXLCZ)992560000000100628
100 $a20091207d2010 uy 0
101 0 $aeng
135 $aur|n|---|||||
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 00$aMarkov decision processes in artificial intelligence $eMDPs, beyond MDPs and applications /$fedited by Olivier Sigaud, Olivier Buffet
210 $aLondon $cISTE$d[2010]
215 $a1 online resource (457 pages)
300 $aFirst published 2008 in France by Hermes Science/Lavoisier in two volumes entitled: Processus décisionnels de Markov en intelligence artificielle.
311 $a1-84821-167-8
320 $aIncludes bibliographical references and index.
327 $apt. 1. MDPs : models and methods -- pt. 2. Beyond MDPs -- pt. 3. Applications.
330 $aMarkov Decision Processes (MDPs) are a mathematical framework for modeling sequential decision problems under uncertainty as well as Reinforcement Learning problems. Written by experts in the field, this book provides a global view of current research using MDPs in Artificial Intelligence. It starts with an introductory presentation of the fundamental aspects of MDPs (planning in MDPs, Reinforcement Learning, Partially Observable MDPs, Markov games and the use of non-classical criteria). Then it presents more advanced research trends in the domain and gives some concrete examples using illustrative applications.
410 0$aSafari tech books online.
410 0$aWiley UBCM ebooks.
410 0$aISTE
606 $aArtificial intelligence$xMathematics
606 $aArtificial intelligence$xStatistical methods
606 $aMarkov processes
606 $aStatistical decision
615 0$aArtificial intelligence$xMathematics.
615 0$aArtificial intelligence$xStatistical methods.
615 0$aMarkov processes.
615 0$aStatistical decision.
676 $a006.301/509233
676 $a006.301509233
676 $a006.33
700 $aSigaud$b Olivier$0564768
701 $aBuffet$b Olivier$0953653
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910830115603321
996 $aMarkov decision processes in artificial intelligence$92156329
997 $aUNINA