LEADER 05672nam 2200745Ia 450
001 9910818427003321
005 20200520144314.0
010 $a1-283-09868-7
010 $a9786613098689
010 $a1-118-02346-3
010 $a1-118-02347-1
010 $a1-118-02343-9
035 $a(CKB)2550000000032266
035 $a(EBL)697570
035 $a(OCoLC)729724626
035 $a(SSID)ssj0000476962
035 $a(PQKBManifestationID)11295636
035 $a(PQKBTitleCode)TC0000476962
035 $a(PQKBWorkID)10501657
035 $a(PQKB)10879663
035 $a(MiAaPQ)EBC697570
035 $a(MiAaPQ)EBC4030500
035 $a(Au-PeEL)EBL4030500
035 $a(CaPaEBR)ebr11107015
035 $a(CaONFJC)MIL309868
035 $a(OCoLC)927501377
035 $a(PPN)185056555
035 $a(EXLCZ)992550000000032266
100 $a20101123d2011 uy 0
101 0 $aeng
135 $aur|n|---|||||
181 $ctxt
182 $cc
183 $acr
200 13$aAn elementary introduction to statistical learning theory /$fSanjeev Kulkarni, Gilbert Harman
205 $a1st ed.
210 $aHoboken, N.J. $cWiley$dc2011
215 $a1 online resource (235 p.)
225 1 $aWiley series in probability and statistics
300 $aDescription based upon print version of record.
311 $a0-470-64183-5
320 $aIncludes bibliographical references and index.
327 $aAn Elementary Introduction to Statistical Learning Theory; Contents; Preface; 1 Introduction: Classification, Learning, Features, and Applications; 1.1 Scope; 1.2 Why Machine Learning?; 1.3 Some Applications; 1.3.1 Image Recognition; 1.3.2 Speech Recognition; 1.3.3 Medical Diagnosis; 1.3.4 Statistical Arbitrage; 1.4 Measurements, Features, and Feature Vectors; 1.5 The Need for Probability; 1.6 Supervised Learning; 1.7 Summary; 1.8 Appendix: Induction; 1.9 Questions; 1.10 References; 2 Probability; 2.1 Probability of Some Basic Events; 2.2 Probabilities of Compound Events
327 $a2.3 Conditional Probability; 2.4 Drawing Without Replacement; 2.5 A Classic Birthday Problem; 2.6 Random Variables; 2.7 Expected Value; 2.8 Variance; 2.9 Summary; 2.10 Appendix: Interpretations of Probability; 2.11 Questions; 2.12 References; 3 Probability Densities; 3.1 An Example in Two Dimensions; 3.2 Random Numbers in [0,1]; 3.3 Density Functions; 3.4 Probability Densities in Higher Dimensions; 3.5 Joint and Conditional Densities; 3.6 Expected Value and Variance; 3.7 Laws of Large Numbers; 3.8 Summary; 3.9 Appendix: Measurability; 3.10 Questions; 3.11 References
327 $a4 The Pattern Recognition Problem; 4.1 A Simple Example; 4.2 Decision Rules; 4.3 Success Criterion; 4.4 The Best Classifier: Bayes Decision Rule; 4.5 Continuous Features and Densities; 4.6 Summary; 4.7 Appendix: Uncountably Many; 4.8 Questions; 4.9 References; 5 The Optimal Bayes Decision Rule; 5.1 Bayes Theorem; 5.2 Bayes Decision Rule; 5.3 Optimality and Some Comments; 5.4 An Example; 5.5 Bayes Theorem and Decision Rule with Densities; 5.6 Summary; 5.7 Appendix: Defining Conditional Probability; 5.8 Questions; 5.9 References; 6 Learning from Examples; 6.1 Lack of Knowledge of Distributions
327 $a6.2 Training Data; 6.3 Assumptions on the Training Data; 6.4 A Brute Force Approach to Learning; 6.5 Curse of Dimensionality, Inductive Bias, and No Free Lunch; 6.6 Summary; 6.7 Appendix: What Sort of Learning?; 6.8 Questions; 6.9 References; 7 The Nearest Neighbor Rule; 7.1 The Nearest Neighbor Rule; 7.2 Performance of the Nearest Neighbor Rule; 7.3 Intuition and Proof Sketch of Performance; 7.4 Using More Neighbors; 7.5 Summary; 7.6 Appendix: When People Use Nearest Neighbor Reasoning; 7.6.1 Who Is a Bachelor?; 7.6.2 Legal Reasoning; 7.6.3 Moral Reasoning; 7.7 Questions; 7.8 References
327 $a8 Kernel Rules; 8.1 Motivation; 8.2 A Variation on Nearest Neighbor Rules; 8.3 Kernel Rules; 8.4 Universal Consistency of Kernel Rules; 8.5 Potential Functions; 8.6 More General Kernels; 8.7 Summary; 8.8 Appendix: Kernels, Similarity, and Features; 8.9 Questions; 8.10 References; 9 Neural Networks: Perceptrons; 9.1 Multilayer Feedforward Networks; 9.2 Neural Networks for Learning and Classification; 9.3 Perceptrons; 9.3.1 Threshold; 9.4 Learning Rule for Perceptrons; 9.5 Representational Capabilities of Perceptrons; 9.6 Summary; 9.7 Appendix: Models of Mind; 9.8 Questions; 9.9 References
327 $a10 Multilayer Networks
330 $aA thought-provoking look at statistical learning theory and its role in understanding human learning and inductive reasoning. A joint endeavor from leading researchers in the fields of philosophy and electrical engineering, An Elementary Introduction to Statistical Learning Theory is a comprehensive and accessible primer on the rapidly evolving fields of statistical pattern recognition and statistical learning theory. Explaining these areas at a level and in a way that is not often found in other books on the topic, the authors present the basic theory behind contemporary ma
410 0$aWiley series in probability and statistics.
606 $aMachine learning$xStatistical methods
606 $aPattern recognition systems
615 0$aMachine learning$xStatistical methods.
615 0$aPattern recognition systems.
676 $a006.3/1
686 $aST 300$2rvk
700 $aKulkarni$b Sanjeev$0502753
701 $aHarman$b Gilbert$0160614
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910818427003321
996 $aElementary introduction to statistical learning theory$91734732
997 $aUNINA