LEADER 05344nam 2200625Ia 450
001 9910830653803321
005 20230124182208.0
010    $a1-280-93517-0
010    $a9786610935178
010    $a0-470-14815-2
010    $a0-470-14814-4
035    $a(CKB)1000000000355009
035    $a(EBL)309748
035    $a(OCoLC)476091973
035    $a(SSID)ssj0000251017
035    $a(PQKBManifestationID)11188681
035    $a(PQKBTitleCode)TC0000251017
035    $a(PQKBWorkID)10247187
035    $a(PQKB)10527494
035    $a(MiAaPQ)EBC309748
035    $a(EXLCZ)991000000000355009
100    $a20070830d2007 uy 0
101 0  $aeng
135    $aur|n|---|||||
181    $ctxt
182    $cc
183    $acr
200 12 $aA statistical approach to neural networks for pattern recognition$b[electronic resource] /$fRobert A. Dunne
210    $aHoboken, N.J. ;$aChichester :$cWiley$dc2007
215    $a1 online resource (289 p.)
225 1  $aWiley series in computational statistics
300    $aDescription based upon print version of record.
311    $a0-471-74108-6
320    $aIncludes bibliographical references and index.
327    $aA Statistical Approach to Neural Networks for Pattern Recognition; Contents; Notation and Code Examples; Preface; Acknowledgments; 1 Introduction; 1.1 The perceptron; 2 The Multi-Layer Perceptron Model; 2.1 The multi-layer perceptron (MLP); 2.2 The first and second derivatives; 2.3 Additional hidden layers; 2.4 Classifiers; 2.5 Complements and exercises; 3 Linear Discriminant Analysis; 3.1 An alternative method; 3.2 Example; 3.3 Flexible and penalized LDA; 3.4 Relationship of MLP models to LDA; 3.5 Linear classifiers; 3.6 Complements and exercises; 4 Activation and Penalty Functions
327    $a4.1 Introduction; 4.2 Interpreting outputs as probabilities; 4.3 The "universal approximator" and consistency; 4.4 Variance and bias; 4.5 Binary variables and logistic regression; 4.6 MLP models and cross-entropy; 4.7 A derivation of the softmax activation function; 4.8 The "natural" pairing and A; 4.9 A comparison of least squares and cross-entropy; 4.10 Conclusion; 4.11 Complements and exercises; 5 Model Fitting and Evaluation; 5.1 Introduction; 5.2 Error rate estimation; 5.3 Model selection for MLP models; 5.4 Penalized training; 5.5 Complements and exercises; 6 The Task-based MLP
327    $a6.1 Introduction; 6.2 The task-based MLP; 6.3 Pruning algorithms; 6.4 Interpreting and evaluating task-based MLP models; 6.5 Evaluating the models; 6.6 Conclusion; 6.7 Complements and exercises; 7 Incorporating Spatial Information into an MLP Classifier; 7.1 Allocation and neighbor information; 7.2 Markov random fields; 7.3 Hopfield networks; 7.4 MLP neighbor models; 7.5 Sequential updating; 7.6 Example - Martin's farm; 7.7 Conclusion; 7.8 Complements and exercises; 8 Influence Curves for the Multi-layer Perceptron Classifier; 8.1 Introduction; 8.2 Estimators; 8.3 Influence curves
327    $a8.4 M-estimators; 8.5 The MLP; 8.6 Influence curves for pc; 8.7 Summary and Conclusion; 9 The Sensitivity Curves of the MLP Classifier; 9.1 Introduction; 9.2 The sensitivity curve; 9.3 Some experiments; 9.4 Discussion; 9.5 Conclusion; 10 A Robust Fitting Procedure for MLP Models; 10.1 Introduction; 10.2 The effect of a hidden layer; 10.3 Comparison of MLP with robust logistic regression; 10.4 A robust MLP model; 10.5 Diagnostics; 10.6 Conclusion; 10.7 Complements and exercises; 11 Smoothed Weights; 11.1 Introduction; 11.2 MLP models; 11.3 Examples; 11.4 Conclusion
327    $a11.5 Complements and exercises; 12 Translation Invariance; 12.1 Introduction; 12.2 Example 1; 12.3 Example 2; 12.4 Example 3; 12.5 Conclusion; 13 Fixed-slope Training; 13.1 Introduction; 13.2 Strategies; 13.3 Fixing ? or O; 13.4 Example 1; 13.5 Example 2; 13.6 Discussion; Bibliography; Appendix A: Function Minimization; A.1 Introduction; A.2 Back-propagation; A.3 Newton-Raphson; A.4 The method of scoring; A.5 Quasi-Newton; A.6 Conjugate gradients; A.7 Scaled conjugate gradients; A.8 Variants on vanilla "back-propagation"; A.9 Line search; A.10 The simplex algorithm; A.11 Implementation
327    $aA.12 Examples
330    $aAn accessible and up-to-date treatment featuring the connection between neural networks and statistics. A Statistical Approach to Neural Networks for Pattern Recognition presents a statistical treatment of the multilayer perceptron (MLP), the most widely used of the neural network models. This book aims to answer questions that arise when statisticians are first confronted with this type of model, such as: How robust is the model to outliers? Could the model be made more robust? Which points will have high leverage? What are good starting values for the fitting algorithm?