Record no.
UNINA9910139455203321

Author
Kulkarni, Sanjeev

Title
An elementary introduction to statistical learning theory [electronic resource] / Sanjeev Kulkarni, Gilbert Harman

Publication/distribution
Hoboken, N.J. : Wiley, c2011

ISBN
1-283-09868-7
9786613098689
1-118-02346-3
1-118-02347-1
1-118-02343-9

Edition
[1st ed.]

Physical description
1 online resource (235 p.)

Series
Wiley series in probability and statistics

Other authors (persons)
Harman, Gilbert

Subjects
Machine learning - Statistical methods
Pattern recognition systems

Language of publication
English

Format
Electronic resource

Bibliographic level
Monograph

General notes
Description based upon print version of record.

Bibliography note
Includes bibliographical references and index.

Contents note
An Elementary Introduction to Statistical Learning Theory; Contents; Preface; 1 Introduction: Classification, Learning, Features, and Applications; 1.1 Scope; 1.2 Why Machine Learning?; 1.3 Some Applications; 1.3.1 Image Recognition; 1.3.2 Speech Recognition; 1.3.3 Medical Diagnosis; 1.3.4 Statistical Arbitrage; 1.4 Measurements, Features, and Feature Vectors; 1.5 The Need for Probability; 1.6 Supervised Learning; 1.7 Summary; 1.8 Appendix: Induction; 1.9 Questions; 1.10 References; 2 Probability; 2.1 Probability of Some Basic Events; 2.2 Probabilities of Compound Events; 2.3 Conditional Probability; 2.4 Drawing Without Replacement; 2.5 A Classic Birthday Problem; 2.6 Random Variables; 2.7 Expected Value; 2.8 Variance; 2.9 Summary; 2.10 Appendix: Interpretations of Probability; 2.11 Questions; 2.12 References; 3 Probability Densities; 3.1 An Example in Two Dimensions; 3.2 Random Numbers in [0,1]; 3.3 Density Functions; 3.4 Probability Densities in Higher Dimensions; 3.5 Joint and Conditional Densities; 3.6 Expected Value and Variance; 3.7 Laws of Large Numbers; 3.8 Summary; 3.9 Appendix: Measurability; 3.10 Questions; 3.11 References; 4 The Pattern Recognition Problem; 4.1 A Simple Example; 4.2 Decision Rules; 4.3 Success Criterion; 4.4 The Best Classifier: Bayes Decision Rule; 4.5 Continuous Features and Densities; 4.6 Summary; 4.7 Appendix: Uncountably Many; 4.8 Questions; 4.9 References; 5 The Optimal Bayes Decision Rule; 5.1 Bayes Theorem; 5.2 Bayes Decision Rule; 5.3 Optimality and Some Comments; 5.4 An Example; 5.5 Bayes Theorem and Decision Rule with Densities; 5.6 Summary; 5.7 Appendix: Defining Conditional Probability; 5.8 Questions; 5.9 References; 6 Learning from Examples; 6.1 Lack of Knowledge of Distributions; 6.2 Training Data; 6.3 Assumptions on the Training Data; 6.4 A Brute Force Approach to Learning; 6.5 Curse of Dimensionality, Inductive Bias, and No Free Lunch; 6.6 Summary; 6.7 Appendix: What Sort of Learning?; 6.8 Questions; 6.9 References; 7 The Nearest Neighbor Rule; 7.1 The Nearest Neighbor Rule; 7.2 Performance of the Nearest Neighbor Rule; 7.3 Intuition and Proof Sketch of Performance; 7.4 Using More Neighbors; 7.5 Summary; 7.6 Appendix: When People Use Nearest Neighbor Reasoning; 7.6.1 Who Is a Bachelor?; 7.6.2 Legal Reasoning; 7.6.3 Moral Reasoning; 7.7 Questions; 7.8 References; 8 Kernel Rules; 8.1 Motivation; 8.2 A Variation on Nearest Neighbor Rules; 8.3 Kernel Rules; 8.4 Universal Consistency of Kernel Rules; 8.5 Potential Functions; 8.6 More General Kernels; 8.7 Summary; 8.8 Appendix: Kernels, Similarity, and Features; 8.9 Questions; 8.10 References; 9 Neural Networks: Perceptrons; 9.1 Multilayer Feedforward Networks; 9.2 Neural Networks for Learning and Classification; 9.3 Perceptrons; 9.3.1 Threshold; 9.4 Learning Rule for Perceptrons; 9.5 Representational Capabilities of Perceptrons; 9.6 Summary; 9.7 Appendix: Models of Mind; 9.8 Questions; 9.9 References; 10 Multilayer Networks

Summary/abstract
A thought-provoking look at statistical learning theory and its role in understanding human learning and inductive reasoning. A joint endeavor from leading researchers in the fields of philosophy and electrical engineering, An Elementary Introduction to Statistical Learning Theory is a comprehensive and accessible primer on the rapidly evolving fields of statistical pattern recognition and statistical learning theory. Explaining these areas at a level and in a way that is not often found in other books on the topic, the authors present the basic theory behind contemporary machine learning.