LEADER 04931nam 2200661Ia 450
001 9910777501403321
005 20230607221456.0
010 $a981-277-665-6
035 $a(CKB)1000000000412332
035 $a(EBL)1681619
035 $a(OCoLC)879025543
035 $a(SSID)ssj0000190761
035 $a(PQKBManifestationID)11215653
035 $a(PQKBTitleCode)TC0000190761
035 $a(PQKBWorkID)10180237
035 $a(PQKB)11166554
035 $a(MiAaPQ)EBC1681619
035 $a(WSP)00005089
035 $a(Au-PeEL)EBL1681619
035 $a(CaPaEBR)ebr10201197
035 $a(CaONFJC)MIL505384
035 $a(EXLCZ)991000000000412332
100 $a20020813d2002 uy 0
101 0 $aeng
135 $aur|n|---|||||
181 $ctxt
182 $cc
183 $acr
200 00$aLeast squares support vector machines$b[electronic resource] /$fJohan A.K. Suykens ... [et al.]
210 $aRiver Edge, NJ $cWorld Scientific$d2002
215 $a1 online resource (308 p.)
300 $aDescription based upon print version of record.
311 $a981-238-151-1
320 $aIncludes bibliographical references and index.
327 $aContents ; Preface ; Chapter 1 Introduction ; 1.1 Multilayer perceptron neural networks ; 1.2 Regression and classification ; 1.3 Learning and generalization ; 1.3.1 Weight decay and effective number of parameters ; 1.3.2 Ridge regression ; 1.3.3 Bayesian learning
327 $a1.4 Principles of pattern recognition ; 1.4.1 Bayes rule and optimal classifier under Gaussian assumptions ; 1.4.2 Receiver operating characteristic ; 1.5 Dimensionality reduction methods ; 1.6 Parametric versus non-parametric approaches and RBF networks
327 $a1.7 Feedforward versus recurrent network models ; Chapter 2 Support Vector Machines ; 2.1 Maximal margin classification and linear SVMs ; 2.1.1 Margin ; 2.1.2 Linear SVM classifier: separable case ; 2.1.3 Linear SVM classifier: non-separable case ; 2.2 Kernel trick and Mercer condition
327 $a2.3 Nonlinear SVM classifiers ; 2.4 VC theory and structural risk minimization ; 2.4.1 Empirical risk versus generalization error ; 2.4.2 Structural risk minimization ; 2.5 SVMs for function estimation ; 2.5.1 SVM for linear function estimation
327 $a2.5.2 SVM for nonlinear function estimation ; 2.5.3 VC bound on generalization error ; 2.6 Modifications and extensions ; 2.6.1 Kernels ; 2.6.2 Extension to other convex cost functions ; 2.6.3 Algorithms ; 2.6.4 Parametric versus non-parametric approaches
327 $aChapter 3 Basic Methods of Least Squares Support Vector Machines
330 $aThis book focuses on Least Squares Support Vector Machines (LS-SVMs), which are reformulations of standard SVMs. LS-SVMs are closely related to regularization networks and Gaussian processes but additionally emphasize and exploit primal-dual interpretations from optimization theory. The authors explain the natural links between LS-SVM classifiers and kernel Fisher discriminant analysis. Bayesian inference of LS-SVM models is discussed, together with methods for imposing sparseness and employing robust statistics. The framework is further extended towards unsupervised learning by considering P
606 $aMachine learning
606 $aAlgorithms
606 $aKernel functions
606 $aLeast squares
615 0$aMachine learning.
615 0$aAlgorithms.
615 0$aKernel functions.
615 0$aLeast squares.
676 $a006.3/1
701 $aSuykens$bJohan A. K$022315
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910777501403321
996 $aLeast squares support vector machines$9711830
997 $aUNINA