LEADER 04238nam 2200625 a 450
001 9910451546003321
005 20200520144314.0
010 $a1-281-49137-3
010 $a9786611491376
010 $a0-387-77501-3
024 7 $a10.1007/978-0-387-77501-2
035 $a(CKB)1000000000440899
035 $a(EBL)372798
035 $a(OCoLC)272306847
035 $a(SSID)ssj0000251060
035 $a(PQKBManifestationID)11174281
035 $a(PQKBTitleCode)TC0000251060
035 $a(PQKBWorkID)10247134
035 $a(PQKB)11574741
035 $a(DE-He213)978-0-387-77501-2
035 $a(MiAaPQ)EBC372798
035 $a(PPN)127050396
035 $a(Au-PeEL)EBL372798
035 $a(CaPaEBR)ebr10239481
035 $a(CaONFJC)MIL149137
035 $a(EXLCZ)991000000000440899
100 $a20081211d2008 uy 0
101 0 $aeng
135 $aur|n|---|||||
181 $ctxt
182 $cc
183 $acr
200 10$aStatistical learning from a regression perspective$b[electronic resource] /$fRichard A. Berk
205 $a1st ed. 2008.
210 $aNew York $cSpringer$d2008
215 $a1 online resource (377 p.)
225 1 $aSpringer series in statistics
300 $aDescription based upon print version of record.
311 $a0-387-77500-5
320 $aIncludes bibliographical references and index.
327 $aStatistical Learning as a Regression Problem -- Regression Splines and Regression Smoothers -- Classification and Regression Trees (CART) -- Bagging -- Random Forests -- Boosting -- Support Vector Machines -- Broader Implications and a Bit of Craft Lore.
330 $aStatistical Learning from a Regression Perspective considers statistical learning applications when interest centers on the conditional distribution of the response variable, given a set of predictors, and when it is important to characterize how the predictors are related to the response. As a first approximation, this can be seen as an extension of nonparametric regression. Among the statistical learning procedures examined are bagging, random forests, boosting, and support vector machines. Response variables may be quantitative or categorical. Real applications are emphasized, especially those with practical implications. One important theme is the need to explicitly take into account asymmetric costs in the fitting process. For example, in some situations false positives may be far less costly than false negatives. Another important theme is to not automatically cede modeling decisions to a fitting algorithm. In many settings, subject-matter knowledge should trump formal fitting criteria. Yet another important theme is to appreciate the limitations of one's data and not apply statistical learning procedures that require more than the data can provide. The material is written for graduate students in the social and life sciences and for researchers who want to apply statistical learning procedures to scientific and policy problems. Intuitive explanations and visual representations are prominent. All of the analyses included are done in R. Richard Berk is Distinguished Professor of Statistics Emeritus from the Department of Statistics at UCLA and currently a Professor at the University of Pennsylvania in the Department of Statistics and in the Department of Criminology. He is an elected fellow of the American Statistical Association and the American Association for the Advancement of Science and has served in a professional capacity with a number of organizations such as the Committee on Applied and Theoretical Statistics for the National Research Council and the Board of Directors of the Social Science Research Council. His research has ranged across a variety of applications in the social and natural sciences.
410 0$aSpringer series in statistics.
606 $aRegression analysis
608 $aElectronic books.
615 0$aRegression analysis.
676 $a519.5/36
700 $aBerk$b Richard A$0558720
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910451546003321
996 $aStatistical learning from a regression perspective$91523646
997 $aUNINA