LEADER 04335nam 22008535 450
001 9910144617903321
005 20230810202148.0
010 $a3-540-44507-2
024 7 $a10.1007/b99352
035 $a(CKB)1000000000231434
035 $a(SSID)ssj0000326895
035 $a(PQKBManifestationID)11258164
035 $a(PQKBTitleCode)TC0000326895
035 $a(PQKBWorkID)10298187
035 $a(PQKB)11487771
035 $a(DE-He213)978-3-540-44507-4
035 $a(MiAaPQ)EBC6302410
035 $a(MiAaPQ)EBC5591844
035 $a(Au-PeEL)EBL5591844
035 $a(OCoLC)1066195452
035 $a(PPN)155176072
035 $a(EXLCZ)991000000000231434
100 $a20121227d2004 u| 0
101 0 $aeng
135 $aurnn|008mamaa
181 $ctxt
182 $cc
183 $acr
200 10$aStatistical Learning Theory and Stochastic Optimization $eEcole d'Eté de Probabilités de Saint-Flour XXXI - 2001 /$fby Olivier Catoni ; edited by Jean Picard
205 $a1st ed. 2004.
210 1$aBerlin, Heidelberg :$cSpringer Berlin Heidelberg :$cImprint: Springer,$d2004.
215 $a1 online resource (VIII, 284 p.)
225 1 $aÉcole d'Été de Probabilités de Saint-Flour ;$v1851
300 $aBibliographic Level Mode of Issuance: Monograph
311 $a3-540-22572-2
320 $aIncludes bibliographical references and index.
327 $aUniversal Lossless Data Compression -- Links Between Data Compression and Statistical Estimation -- Non Cumulated Mean Risk -- Gibbs Estimators -- Randomized Estimators and Empirical Complexity -- Deviation Inequalities -- Markov Chains with Exponential Transitions -- References -- Index.
330 $aStatistical learning theory is aimed at analyzing complex data with necessarily approximate models. This book is intended for an audience with a graduate background in probability theory and statistics. It will be useful to any reader wondering why it may be a good idea, as is often done in practice, to use a notoriously "wrong" (i.e. over-simplified) model to predict, estimate or classify. This point of view takes its roots in three fields: information theory, statistical mechanics, and PAC-Bayesian theorems. Results on the large deviations of trajectories of Markov chains with rare transitions are also included.
They are meant to provide a better understanding of the stochastic optimization algorithms commonly used in computing estimators. The author focuses on non-asymptotic bounds of the statistical risk, allowing one to choose adaptively between rich and structured families of models and corresponding estimators. Two mathematical objects pervade the book: entropy and Gibbs measures. The goal is to show how to turn them into versatile and efficient technical tools that will stimulate further studies and results.
410 0$aÉcole d'Été de Probabilités de Saint-Flour ;$v1851
606 $aProbabilities
606 $aStatistics
606 $aMathematical optimization
606 $aArtificial intelligence
606 $aComputer science$xMathematics
606 $aNumerical analysis
606 $aProbability Theory
606 $aStatistical Theory and Methods
606 $aOptimization
606 $aArtificial Intelligence
606 $aMathematical Applications in Computer Science
606 $aNumerical Analysis
615 0$aProbabilities.
615 0$aStatistics.
615 0$aMathematical optimization.
615 0$aArtificial intelligence.
615 0$aComputer science$xMathematics.
615 0$aNumerical analysis.
615 14$aProbability Theory.
615 24$aStatistical Theory and Methods.
615 24$aOptimization.
615 24$aArtificial Intelligence.
615 24$aMathematical Applications in Computer Science.
615 24$aNumerical Analysis.
676 $a519.5
700 $aCatoni$b Olivier$4aut$4http://id.loc.gov/vocabulary/relators/aut$0478894
702 $aPicard$b Jean$4edt$4http://id.loc.gov/vocabulary/relators/edt
712 12$aÉcole d'été de probabilités de Saint-Flour$d(31st :$f2001)
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910144617903321
996 $aStatistical learning theory and stochastic optimization$9262229
997 $aUNINA