Record ID: 996426336703316 (UNISA); record last updated 2020-05-20.

Title: Machine learning : a Bayesian and optimization perspective / Sergios Theodoridis.
Edition: First edition.
Published: Amsterdam, [Netherlands] : Academic Press, 2015. ©2015.
Physical description: 1 online resource (1075 p.).
Series: NET Developers Series.
Language: English.
Author: Theodoridis, Sergios, 1951-
Material type: BOOK (electronic resource).
Cataloging source: MiAaPQ.
Dewey classification: 006.31.

ISBN: 0-12-801722-8 (electronic); 0-12-801522-5 (print).
Other identifiers: (CKB)2670000000607933; (EBL)2007481; (SSID)ssj0001564305; (PQKBManifestationID)16212387; (PQKBTitleCode)TC0001564305; (PQKBWorkID)14835460; (PQKB)11120752; (Au-PeEL)EBL2007481; (CaPaEBR)ebr11040166; (CaONFJC)MIL762943; (OCoLC)908071759; (CaSebORM)9780128015223; (MiAaPQ)EBC2007481; (PPN)197826369; (EXLCZ)992670000000607933.

Notes: Description based upon print version of record. Includes bibliographical references and index.

Contents: Front Cover; Machine Learning: A Bayesian and Optimization Perspective; Copyright; Contents; Preface; Acknowledgments; Notation; Dedication; Chapter 1: Introduction; 1.1 What Machine Learning is About; 1.1.1 Classification; 1.1.2 Regression; 1.2 Structure and a Road Map of the Book; References; Chapter 2: Probability and Stochastic Processes; 2.1 Introduction; 2.2 Probability and Random Variables; 2.2.1 Probability; Relative frequency definition; Axiomatic definition; 2.2.2 Discrete Random Variables; Joint and conditional probabilities; Bayes theorem; 2.2.3 Continuous Random Variables; 2.2.4 Mean and Variance; Complex random variables; 2.2.5 Transformation of Random Variables; 2.3 Examples of Distributions; 2.3.1 Discrete Variables; The Bernoulli distribution; The Binomial distribution; The Multinomial distribution; 2.3.2 Continuous Variables; The uniform distribution; The Gaussian distribution; The central limit theorem; The exponential distribution; The beta distribution; The gamma distribution; The Dirichlet distribution; 2.4 Stochastic Processes; 2.4.1 First and Second Order Statistics; 2.4.2 Stationarity and Ergodicity; 2.4.3 Power Spectral Density; Properties of the autocorrelation sequence; Power spectral density; Transmission through a linear system; Physical interpretation of the PSD; 2.4.4 Autoregressive Models; 2.5 Information Theory; 2.5.1 Discrete Random Variables; Information; Mutual and conditional information; Entropy and average mutual information; 2.5.2 Continuous Random Variables; Average mutual information and conditional information; Relative entropy or Kullback-Leibler divergence; 2.6 Stochastic Convergence; Convergence everywhere; Convergence almost everywhere; Convergence in the mean-square sense; Convergence in probability; Convergence in distribution; Problems; References; Chapter 3: Learning in Parametric Modeling: Basic Concepts and Directions; 3.1 Introduction; 3.2 Parameter Estimation: The Deterministic Point of View; 3.3 Linear Regression; 3.4 Classification; Generative versus discriminative learning; Supervised, semisupervised, and unsupervised learning; 3.5 Biased Versus Unbiased Estimation; 3.5.1 Biased or Unbiased Estimation?; 3.6 The Cramér-Rao Lower Bound; 3.7 Sufficient Statistic; 3.8 Regularization; Inverse problems: Ill-conditioning and overfitting; 3.9 The Bias-Variance Dilemma; 3.9.1 Mean-Square Error Estimation; 3.9.2 Bias-Variance Tradeoff; 3.10 Maximum Likelihood Method; 3.10.1 Linear Regression: The Nonwhite Gaussian Noise Case; 3.11 Bayesian Inference; 3.11.1 The Maximum A Posteriori Probability Estimation Method; 3.12 Curse of Dimensionality; 3.13 Validation; Cross-validation; 3.14 Expected and Empirical Loss Functions; 3.15 Nonparametric Modeling and Estimation; Problems; References; Chapter 4: Mean-Square Error Linear Estimation; 4.1 Introduction; 4.2 Mean-Square Error Linear Estimation: The Normal Equations; 4.2.1 The Cost Function Surface.

Summary: This tutorial text gives a unifying perspective on machine learning by covering both probabilistic and deterministic approaches (which are based on optimization techniques) together with the Bayesian inference approach, whose essence lies in the use of a hierarchy of probabilistic models. The book presents the major machine learning methods as they have been developed in different disciplines, such as statistics, statistical and adaptive signal processing, and computer science. Focusing on the physical reasoning behind the mathematics, all the various methods and techniques are explained in depth and supported by examples and problems, making the book an invaluable resource for students and researchers seeking to understand and apply machine learning concepts. The book builds carefully from the basic classical methods to the most recent trends, with chapters written to be as self-contained as possible, making the text suitable for different courses: pattern recognition, statistical/adaptive signal processing, and statistical/Bayesian learning, as well as short courses on sparse modeling, deep learning, and probabilistic graphical models. All major classical techniques are covered: mean/least-squares regression and filtering, Kalman filtering, stochastic approximation and online learning, Bayesian classification, decision trees, logistic regression, and boosting methods. The latest trends are also presented: sparsity, convex analysis and optimization, online distributed algorithms, learning in RKH spaces, Bayesian inference, graphical and hidden Markov models, particle filtering, deep learning, dictionary learning, and latent variable modeling. Case studies (protein folding prediction, optical character recognition, text authorship identification, fMRI data analysis, change point detection, hyperspectral image unmixing, target localization, channel equalization, and echo cancellation) show how the theory can be applied. MATLAB code for all the main algorithms is available on an accompanying website, enabling the reader to experiment with the code.

Subjects: Machine learning; Mathematical optimization; Bayesian statistical decision theory.