Machine learning : a Bayesian and optimization perspective / Sergios Theodoridis
Author: Theodoridis, Sergios <1951->
Edition: [First edition.]
Publication: Amsterdam, [Netherlands] : Academic Press, 2015
Physical description: 1 online resource (1075 p.)
Dewey classification: 006.31
Series: NET Developers Series
Subjects: Machine learning; Mathematical optimization; Bayesian statistical decision theory
ISBN: 0-12-801722-8
Format: Printed material
Bibliographic level: Monograph
Language of publication: eng
Contents note: Front Cover; Machine Learning: A Bayesian and Optimization Perspective; Copyright; Contents; Preface; Acknowledgments; Notation; Dedication; Chapter 1: Introduction; 1.1 What Machine Learning is About; 1.1.1 Classification; 1.1.2 Regression; 1.2 Structure and a Road Map of the Book; References; Chapter 2: Probability and Stochastic Processes; 2.1 Introduction; 2.2 Probability and Random Variables; 2.2.1 Probability; Relative frequency definition; Axiomatic definition; 2.2.2 Discrete Random Variables; Joint and conditional probabilities; Bayes theorem; 2.2.3 Continuous Random Variables; 2.2.4 Mean and Variance; Complex random variables; 2.2.5 Transformation of Random Variables; 2.3 Examples of Distributions; 2.3.1 Discrete Variables; The Bernoulli distribution; The Binomial distribution; The Multinomial distribution; 2.3.2 Continuous Variables; The uniform distribution; The Gaussian distribution; The central limit theorem; The exponential distribution; The beta distribution; The gamma distribution; The Dirichlet distribution; 2.4 Stochastic Processes; 2.4.1 First and Second Order Statistics; 2.4.2 Stationarity and Ergodicity; 2.4.3 Power Spectral Density; Properties of the autocorrelation sequence; Power spectral density; Transmission through a linear system; Physical interpretation of the PSD; 2.4.4 Autoregressive Models; 2.5 Information Theory; 2.5.1 Discrete Random Variables; Information; Mutual and conditional information; Entropy and average mutual information; 2.5.2 Continuous Random Variables; Average mutual information and conditional information; Relative entropy or Kullback-Leibler divergence; 2.6 Stochastic Convergence; Convergence everywhere; Convergence almost everywhere; Convergence in the mean-square sense; Convergence in probability; Convergence in distribution; Problems; References; Chapter 3: Learning in Parametric Modeling: Basic Concepts and Directions; 3.1 Introduction; 3.2 Parameter Estimation: The Deterministic Point of View; 3.3 Linear Regression; 3.4 Classification; Generative versus discriminative learning; Supervised, semisupervised, and unsupervised learning; 3.5 Biased Versus Unbiased Estimation; 3.5.1 Biased or Unbiased Estimation?; 3.6 The Cramér-Rao Lower Bound; 3.7 Sufficient Statistic; 3.8 Regularization; Inverse problems: Ill-conditioning and overfitting; 3.9 The Bias-Variance Dilemma; 3.9.1 Mean-Square Error Estimation; 3.9.2 Bias-Variance Tradeoff; 3.10 Maximum Likelihood Method; 3.10.1 Linear Regression: The Nonwhite Gaussian Noise Case; 3.11 Bayesian Inference; 3.11.1 The Maximum A Posteriori Probability Estimation Method; 3.12 Curse of Dimensionality; 3.13 Validation; Cross-validation; 3.14 Expected and Empirical Loss Functions; 3.15 Nonparametric Modeling and Estimation; Problems; References; Chapter 4: Mean-Square Error Linear Estimation; 4.1 Introduction; 4.2 Mean-Square Error Linear Estimation: The Normal Equations; 4.2.1 The Cost Function Surface
Record no.: UNISA-996426336703316
Available at: Univ. di Salerno
Machine learning : a Bayesian and optimization perspective / Sergios Theodoridis
Author: Theodoridis, Sergios <1951->
Edition: [First edition.]
Publication: Amsterdam, [Netherlands] : Academic Press, 2015
Physical description: 1 online resource (1075 p.)
Dewey classification: 006.31
Series: NET Developers Series
Subjects: Machine learning; Mathematical optimization; Bayesian statistical decision theory
ISBN: 0-12-801722-8
Format: Printed material
Bibliographic level: Monograph
Language of publication: eng
Contents note: Front Cover; Machine Learning: A Bayesian and Optimization Perspective; Copyright; Contents; Preface; Acknowledgments; Notation; Dedication; Chapter 1: Introduction; 1.1 What Machine Learning is About; 1.1.1 Classification; 1.1.2 Regression; 1.2 Structure and a Road Map of the Book; References; Chapter 2: Probability and Stochastic Processes; 2.1 Introduction; 2.2 Probability and Random Variables; 2.2.1 Probability; Relative frequency definition; Axiomatic definition; 2.2.2 Discrete Random Variables; Joint and conditional probabilities; Bayes theorem; 2.2.3 Continuous Random Variables; 2.2.4 Mean and Variance; Complex random variables; 2.2.5 Transformation of Random Variables; 2.3 Examples of Distributions; 2.3.1 Discrete Variables; The Bernoulli distribution; The Binomial distribution; The Multinomial distribution; 2.3.2 Continuous Variables; The uniform distribution; The Gaussian distribution; The central limit theorem; The exponential distribution; The beta distribution; The gamma distribution; The Dirichlet distribution; 2.4 Stochastic Processes; 2.4.1 First and Second Order Statistics; 2.4.2 Stationarity and Ergodicity; 2.4.3 Power Spectral Density; Properties of the autocorrelation sequence; Power spectral density; Transmission through a linear system; Physical interpretation of the PSD; 2.4.4 Autoregressive Models; 2.5 Information Theory; 2.5.1 Discrete Random Variables; Information; Mutual and conditional information; Entropy and average mutual information; 2.5.2 Continuous Random Variables; Average mutual information and conditional information; Relative entropy or Kullback-Leibler divergence; 2.6 Stochastic Convergence; Convergence everywhere; Convergence almost everywhere; Convergence in the mean-square sense; Convergence in probability; Convergence in distribution; Problems; References; Chapter 3: Learning in Parametric Modeling: Basic Concepts and Directions; 3.1 Introduction; 3.2 Parameter Estimation: The Deterministic Point of View; 3.3 Linear Regression; 3.4 Classification; Generative versus discriminative learning; Supervised, semisupervised, and unsupervised learning; 3.5 Biased Versus Unbiased Estimation; 3.5.1 Biased or Unbiased Estimation?; 3.6 The Cramér-Rao Lower Bound; 3.7 Sufficient Statistic; 3.8 Regularization; Inverse problems: Ill-conditioning and overfitting; 3.9 The Bias-Variance Dilemma; 3.9.1 Mean-Square Error Estimation; 3.9.2 Bias-Variance Tradeoff; 3.10 Maximum Likelihood Method; 3.10.1 Linear Regression: The Nonwhite Gaussian Noise Case; 3.11 Bayesian Inference; 3.11.1 The Maximum A Posteriori Probability Estimation Method; 3.12 Curse of Dimensionality; 3.13 Validation; Cross-validation; 3.14 Expected and Empirical Loss Functions; 3.15 Nonparametric Modeling and Estimation; Problems; References; Chapter 4: Mean-Square Error Linear Estimation; 4.1 Introduction; 4.2 Mean-Square Error Linear Estimation: The Normal Equations; 4.2.1 The Cost Function Surface
Record no.: UNINA-9910788020503321
Available at: Univ. Federico II
Machine learning : a Bayesian and optimization perspective / Sergios Theodoridis
Author: Theodoridis, Sergios <1951->
Edition: [First edition.]
Publication: Amsterdam, [Netherlands] : Academic Press, 2015
Physical description: 1 online resource (1075 p.)
Dewey classification: 006.31
Series: NET Developers Series
Subjects: Machine learning; Mathematical optimization; Bayesian statistical decision theory
ISBN: 0-12-801722-8
Format: Printed material
Bibliographic level: Monograph
Language of publication: eng
Contents note: Front Cover; Machine Learning: A Bayesian and Optimization Perspective; Copyright; Contents; Preface; Acknowledgments; Notation; Dedication; Chapter 1: Introduction; 1.1 What Machine Learning is About; 1.1.1 Classification; 1.1.2 Regression; 1.2 Structure and a Road Map of the Book; References; Chapter 2: Probability and Stochastic Processes; 2.1 Introduction; 2.2 Probability and Random Variables; 2.2.1 Probability; Relative frequency definition; Axiomatic definition; 2.2.2 Discrete Random Variables; Joint and conditional probabilities; Bayes theorem; 2.2.3 Continuous Random Variables; 2.2.4 Mean and Variance; Complex random variables; 2.2.5 Transformation of Random Variables; 2.3 Examples of Distributions; 2.3.1 Discrete Variables; The Bernoulli distribution; The Binomial distribution; The Multinomial distribution; 2.3.2 Continuous Variables; The uniform distribution; The Gaussian distribution; The central limit theorem; The exponential distribution; The beta distribution; The gamma distribution; The Dirichlet distribution; 2.4 Stochastic Processes; 2.4.1 First and Second Order Statistics; 2.4.2 Stationarity and Ergodicity; 2.4.3 Power Spectral Density; Properties of the autocorrelation sequence; Power spectral density; Transmission through a linear system; Physical interpretation of the PSD; 2.4.4 Autoregressive Models; 2.5 Information Theory; 2.5.1 Discrete Random Variables; Information; Mutual and conditional information; Entropy and average mutual information; 2.5.2 Continuous Random Variables; Average mutual information and conditional information; Relative entropy or Kullback-Leibler divergence; 2.6 Stochastic Convergence; Convergence everywhere; Convergence almost everywhere; Convergence in the mean-square sense; Convergence in probability; Convergence in distribution; Problems; References; Chapter 3: Learning in Parametric Modeling: Basic Concepts and Directions; 3.1 Introduction; 3.2 Parameter Estimation: The Deterministic Point of View; 3.3 Linear Regression; 3.4 Classification; Generative versus discriminative learning; Supervised, semisupervised, and unsupervised learning; 3.5 Biased Versus Unbiased Estimation; 3.5.1 Biased or Unbiased Estimation?; 3.6 The Cramér-Rao Lower Bound; 3.7 Sufficient Statistic; 3.8 Regularization; Inverse problems: Ill-conditioning and overfitting; 3.9 The Bias-Variance Dilemma; 3.9.1 Mean-Square Error Estimation; 3.9.2 Bias-Variance Tradeoff; 3.10 Maximum Likelihood Method; 3.10.1 Linear Regression: The Nonwhite Gaussian Noise Case; 3.11 Bayesian Inference; 3.11.1 The Maximum A Posteriori Probability Estimation Method; 3.12 Curse of Dimensionality; 3.13 Validation; Cross-validation; 3.14 Expected and Empirical Loss Functions; 3.15 Nonparametric Modeling and Estimation; Problems; References; Chapter 4: Mean-Square Error Linear Estimation; 4.1 Introduction; 4.2 Mean-Square Error Linear Estimation: The Normal Equations; 4.2.1 The Cost Function Surface
Record no.: UNINA-9910828993803321
Available at: Univ. Federico II
Pattern recognition [electronic resource] / Sergios Theodoridis, Konstantinos Koutroumbas
Author: Theodoridis, Sergios <1951->
Edition: [4th ed.]
Publication: Amsterdam ; London : Elsevier/Academic Press, c2009
Physical description: 1 online resource (981 p.)
Dewey classification: 006.4
Other authors (persons): Koutroumbas, Konstantinos <1967->
Subjects: Pattern recognition systems; Pattern perception
Genre/form: Electronic books
ISBN: 1-282-54115-3; 9786612541155; 0-08-094912-6
Format: Printed material
Bibliographic level: Monograph
Language of publication: eng
Contents note: Front Cover; Pattern Recognition; Copyright Page; Contents; Preface; Chapter 1 Introduction; 1.1 Is Pattern Recognition Important?; 1.2 Features, Feature Vectors, and Classifiers; 1.3 Supervised, Unsupervised, and Semi-Supervised Learning; 1.4 MATLAB Programs; 1.5 Outline of The Book; Chapter 2 Classifiers Based on Bayes Decision Theory; 2.1 Introduction; 2.2 Bayes Decision Theory; 2.3 Discriminant Functions and Decision Surfaces; 2.4 Bayesian Classification for Normal Distributions; 2.5 Estimation of Unknown Probability Density Functions; 2.6 The Nearest Neighbor Rule; 2.7 Bayesian Networks; 2.8 Problems; References; Chapter 3 Linear Classifiers; 3.1 Introduction; 3.2 Linear Discriminant Functions and Decision Hyperplanes; 3.3 The Perceptron Algorithm; 3.4 Least Squares Methods; 3.5 Mean Square Estimation Revisited; 3.6 Logistic Discrimination; 3.7 Support Vector Machines; 3.8 Problems; References; Chapter 4 Nonlinear Classifiers; 4.1 Introduction; 4.2 The XOR Problem; 4.3 The Two-Layer Perceptron; 4.4 Three-Layer Perceptrons; 4.5 Algorithms Based on Exact Classification of the Training Set; 4.6 The Backpropagation Algorithm; 4.7 Variations on the Backpropagation Theme; 4.8 The Cost Function Choice; 4.9 Choice of the Network Size; 4.10 A Simulation Example; 4.11 Networks with Weight Sharing; 4.12 Generalized Linear Classifiers; 4.13 Capacity of the l-Dimensional Space in Linear Dichotomies; 4.14 Polynomial Classifiers; 4.15 Radial Basis Function Networks; 4.16 Universal Approximators; 4.17 Probabilistic Neural Networks; 4.18 Support Vector Machines: The Nonlinear Case; 4.19 Beyond the SVM Paradigm; 4.20 Decision Trees; 4.21 Combining Classifiers; 4.22 The Boosting Approach to Combine Classifiers; 4.23 The Class Imbalance Problem; 4.24 Discussion; 4.25 Problems; References; Chapter 5 Feature Selection; 5.1 Introduction; 5.2 Preprocessing; 5.3 The Peaking Phenomenon; 5.4 Feature Selection Based on Statistical Hypothesis Testing; 5.5 The Receiver Operating Characteristics (ROC) Curve; 5.6 Class Separability Measures; 5.7 Feature Subset Selection; 5.8 Optimal Feature Generation; 5.9 Neural Networks and Feature Generation/Selection; 5.10 A Hint On Generalization Theory; 5.11 The Bayesian Information Criterion; 5.12 Problems; References; Chapter 6 Feature Generation I: Data Transformation and Dimensionality Reduction; 6.1 Introduction; 6.2 Basis Vectors and Images; 6.3 The Karhunen-Loève Transform; 6.4 The Singular Value Decomposition; 6.5 Independent Component Analysis; 6.6 Non-negative Matrix Factorization; 6.7 Nonlinear Dimensionality Reduction; 6.8 The Discrete Fourier Transform (DFT); 6.9 The Discrete Cosine and Sine Transforms; 6.10 The Hadamard Transform; 6.11 The Haar Transform; 6.12 The Haar Expansion Revisited; 6.13 Discrete Time Wavelet Transform (DTWT); 6.14 The Multi-resolution Interpretation; 6.15 Wavelet Packets; 6.16 A Look at Two-Dimensional Generalizations; 6.17 Applications; 6.18 Problems; References; Chapter 7
Record no.: UNINA-9910458352203321
Available at: Univ. Federico II
Pattern recognition [electronic resource] / Sergios Theodoridis, Konstantinos Koutroumbas
Author: Theodoridis, Sergios <1951->
Edition: [4th ed.]
Publication: Amsterdam ; London : Elsevier/Academic Press, c2009
Physical description: 1 online resource (981 p.)
Dewey classification: 006.4
Other authors (persons): Koutroumbas, Konstantinos <1967->
Subjects: Pattern recognition systems; Pattern perception
ISBN: 1-282-54115-3; 9786612541155; 0-08-094912-6
Format: Printed material
Bibliographic level: Monograph
Language of publication: eng
Contents note: Front Cover; Pattern Recognition; Copyright Page; Contents; Preface; Chapter 1 Introduction; 1.1 Is Pattern Recognition Important?; 1.2 Features, Feature Vectors, and Classifiers; 1.3 Supervised, Unsupervised, and Semi-Supervised Learning; 1.4 MATLAB Programs; 1.5 Outline of The Book; Chapter 2 Classifiers Based on Bayes Decision Theory; 2.1 Introduction; 2.2 Bayes Decision Theory; 2.3 Discriminant Functions and Decision Surfaces; 2.4 Bayesian Classification for Normal Distributions; 2.5 Estimation of Unknown Probability Density Functions; 2.6 The Nearest Neighbor Rule; 2.7 Bayesian Networks; 2.8 Problems; References; Chapter 3 Linear Classifiers; 3.1 Introduction; 3.2 Linear Discriminant Functions and Decision Hyperplanes; 3.3 The Perceptron Algorithm; 3.4 Least Squares Methods; 3.5 Mean Square Estimation Revisited; 3.6 Logistic Discrimination; 3.7 Support Vector Machines; 3.8 Problems; References; Chapter 4 Nonlinear Classifiers; 4.1 Introduction; 4.2 The XOR Problem; 4.3 The Two-Layer Perceptron; 4.4 Three-Layer Perceptrons; 4.5 Algorithms Based on Exact Classification of the Training Set; 4.6 The Backpropagation Algorithm; 4.7 Variations on the Backpropagation Theme; 4.8 The Cost Function Choice; 4.9 Choice of the Network Size; 4.10 A Simulation Example; 4.11 Networks with Weight Sharing; 4.12 Generalized Linear Classifiers; 4.13 Capacity of the l-Dimensional Space in Linear Dichotomies; 4.14 Polynomial Classifiers; 4.15 Radial Basis Function Networks; 4.16 Universal Approximators; 4.17 Probabilistic Neural Networks; 4.18 Support Vector Machines: The Nonlinear Case; 4.19 Beyond the SVM Paradigm; 4.20 Decision Trees; 4.21 Combining Classifiers; 4.22 The Boosting Approach to Combine Classifiers; 4.23 The Class Imbalance Problem; 4.24 Discussion; 4.25 Problems; References; Chapter 5 Feature Selection; 5.1 Introduction; 5.2 Preprocessing; 5.3 The Peaking Phenomenon; 5.4 Feature Selection Based on Statistical Hypothesis Testing; 5.5 The Receiver Operating Characteristics (ROC) Curve; 5.6 Class Separability Measures; 5.7 Feature Subset Selection; 5.8 Optimal Feature Generation; 5.9 Neural Networks and Feature Generation/Selection; 5.10 A Hint On Generalization Theory; 5.11 The Bayesian Information Criterion; 5.12 Problems; References; Chapter 6 Feature Generation I: Data Transformation and Dimensionality Reduction; 6.1 Introduction; 6.2 Basis Vectors and Images; 6.3 The Karhunen-Loève Transform; 6.4 The Singular Value Decomposition; 6.5 Independent Component Analysis; 6.6 Non-negative Matrix Factorization; 6.7 Nonlinear Dimensionality Reduction; 6.8 The Discrete Fourier Transform (DFT); 6.9 The Discrete Cosine and Sine Transforms; 6.10 The Hadamard Transform; 6.11 The Haar Transform; 6.12 The Haar Expansion Revisited; 6.13 Discrete Time Wavelet Transform (DTWT); 6.14 The Multi-resolution Interpretation; 6.15 Wavelet Packets; 6.16 A Look at Two-Dimensional Generalizations; 6.17 Applications; 6.18 Problems; References; Chapter 7
Record no.: UNINA-9910791293803321
Available at: Univ. Federico II
Pattern recognition / Sergios Theodoridis, Konstantinos Koutroumbas
Author: Theodoridis, Sergios <1951->
Edition: [4th ed.]
Publication: Amsterdam ; London : Elsevier/Academic Press, c2009
Physical description: 1 online resource (981 p.)
Dewey classification: 006.4
Other authors (persons): Koutroumbas, Konstantinos <1967->
Subjects: Pattern recognition systems; Pattern perception
ISBN: 1-282-54115-3; 9786612541155; 0-08-094912-6
Format: Printed material
Bibliographic level: Monograph
Language of publication: eng
Contents note: Front Cover; Pattern Recognition; Copyright Page; Contents; Preface; Chapter 1 Introduction; 1.1 Is Pattern Recognition Important?; 1.2 Features, Feature Vectors, and Classifiers; 1.3 Supervised, Unsupervised, and Semi-Supervised Learning; 1.4 MATLAB Programs; 1.5 Outline of The Book; Chapter 2 Classifiers Based on Bayes Decision Theory; 2.1 Introduction; 2.2 Bayes Decision Theory; 2.3 Discriminant Functions and Decision Surfaces; 2.4 Bayesian Classification for Normal Distributions; 2.5 Estimation of Unknown Probability Density Functions; 2.6 The Nearest Neighbor Rule; 2.7 Bayesian Networks; 2.8 Problems; References; Chapter 3 Linear Classifiers; 3.1 Introduction; 3.2 Linear Discriminant Functions and Decision Hyperplanes; 3.3 The Perceptron Algorithm; 3.4 Least Squares Methods; 3.5 Mean Square Estimation Revisited; 3.6 Logistic Discrimination; 3.7 Support Vector Machines; 3.8 Problems; References; Chapter 4 Nonlinear Classifiers; 4.1 Introduction; 4.2 The XOR Problem; 4.3 The Two-Layer Perceptron; 4.4 Three-Layer Perceptrons; 4.5 Algorithms Based on Exact Classification of the Training Set; 4.6 The Backpropagation Algorithm; 4.7 Variations on the Backpropagation Theme; 4.8 The Cost Function Choice; 4.9 Choice of the Network Size; 4.10 A Simulation Example; 4.11 Networks with Weight Sharing; 4.12 Generalized Linear Classifiers; 4.13 Capacity of the l-Dimensional Space in Linear Dichotomies; 4.14 Polynomial Classifiers; 4.15 Radial Basis Function Networks; 4.16 Universal Approximators; 4.17 Probabilistic Neural Networks; 4.18 Support Vector Machines: The Nonlinear Case; 4.19 Beyond the SVM Paradigm; 4.20 Decision Trees; 4.21 Combining Classifiers; 4.22 The Boosting Approach to Combine Classifiers; 4.23 The Class Imbalance Problem; 4.24 Discussion; 4.25 Problems; References; Chapter 5 Feature Selection; 5.1 Introduction; 5.2 Preprocessing; 5.3 The Peaking Phenomenon; 5.4 Feature Selection Based on Statistical Hypothesis Testing; 5.5 The Receiver Operating Characteristics (ROC) Curve; 5.6 Class Separability Measures; 5.7 Feature Subset Selection; 5.8 Optimal Feature Generation; 5.9 Neural Networks and Feature Generation/Selection; 5.10 A Hint On Generalization Theory; 5.11 The Bayesian Information Criterion; 5.12 Problems; References; Chapter 6 Feature Generation I: Data Transformation and Dimensionality Reduction; 6.1 Introduction; 6.2 Basis Vectors and Images; 6.3 The Karhunen-Loève Transform; 6.4 The Singular Value Decomposition; 6.5 Independent Component Analysis; 6.6 Non-negative Matrix Factorization; 6.7 Nonlinear Dimensionality Reduction; 6.8 The Discrete Fourier Transform (DFT); 6.9 The Discrete Cosine and Sine Transforms; 6.10 The Hadamard Transform; 6.11 The Haar Transform; 6.12 The Haar Expansion Revisited; 6.13 Discrete Time Wavelet Transform (DTWT); 6.14 The Multi-resolution Interpretation; 6.15 Wavelet Packets; 6.16 A Look at Two-Dimensional Generalizations; 6.17 Applications; 6.18 Problems; References; Chapter 7
Record no.: UNINA-9910810039903321
Available at: Univ. Federico II
Pattern recognition [electronic resource] / Sergios Theodoridis, Konstantinos Koutroumbas
Author: Theodoridis, Sergios <1951->
Edition: [3rd ed.]
Publication: San Diego, CA : Academic Press, c2006
Physical description: 1 online resource (854 p.)
Dewey classification: 006.3; 006.4
Other authors (persons): Koutroumbas, Konstantinos <1967->
Subjects: Pattern recognition systems
Genre/form: Electronic books
ISBN: 1-281-31146-4; 9786611311469; 0-08-051361-1
Format: Printed material
Bibliographic level: Monograph
Language of publication: eng
Contents note: Front cover; Title page; Copyright page; Table of contents; PREFACE; 1 INTRODUCTION; 1.1 IS PATTERN RECOGNITION IMPORTANT?; 1.2 FEATURES, FEATURE VECTORS, AND CLASSIFIERS; 1.3 SUPERVISED VERSUS UNSUPERVISED PATTERN RECOGNITION; 1.4 OUTLINE OF THE BOOK; 2 CLASSIFIERS BASED ON BAYES DECISION THEORY; 2.1 INTRODUCTION; 2.2 BAYES DECISION THEORY; 2.3 DISCRIMINANT FUNCTIONS AND DECISION SURFACES; 2.4 BAYESIAN CLASSIFICATION FOR NORMAL DISTRIBUTIONS; 2.5 ESTIMATION OF UNKNOWN PROBABILITY DENSITY FUNCTIONS; 2.6 THE NEAREST NEIGHBOR RULE; 2.7 BAYESIAN NETWORKS; 3 LINEAR CLASSIFIERS; 3.1 INTRODUCTION; 3.2 LINEAR DISCRIMINANT FUNCTIONS AND DECISION HYPERPLANES; 3.3 THE PERCEPTRON ALGORITHM; 3.4 LEAST SQUARES METHODS; 3.5 MEAN SQUARE ESTIMATION REVISITED; 3.6 LOGISTIC DISCRIMINATION; 3.7 SUPPORT VECTOR MACHINES; 4 NONLINEAR CLASSIFIERS; 4.1 INTRODUCTION; 4.2 THE XOR PROBLEM; 4.3 THE TWO-LAYER PERCEPTRON; 4.4 THREE-LAYER PERCEPTRONS; 4.5 ALGORITHMS BASED ON EXACT CLASSIFICATION OF THE TRAINING SET; 4.6 THE BACKPROPAGATION ALGORITHM; 4.7 VARIATIONS ON THE BACKPROPAGATION THEME; 4.8 THE COST FUNCTION CHOICE; 4.9 CHOICE OF THE NETWORK SIZE; 4.10 A SIMULATION EXAMPLE; 4.11 NETWORKS WITH WEIGHT SHARING; 4.12 GENERALIZED LINEAR CLASSIFIERS; 4.13 CAPACITY OF THE l-DIMENSIONAL SPACE IN LINEAR DICHOTOMIES; 4.14 POLYNOMIAL CLASSIFIERS; 4.15 RADIAL BASIS FUNCTION NETWORKS; 4.16 UNIVERSAL APPROXIMATORS; 4.17 SUPPORT VECTOR MACHINES: THE NONLINEAR CASE; 4.18 DECISION TREES; 4.19 COMBINING CLASSIFIERS; 4.20 THE BOOSTING APPROACH TO COMBINE CLASSIFIERS; 4.21 DISCUSSION; 5 FEATURE SELECTION; 5.1 INTRODUCTION; 5.2 PREPROCESSING; 5.3 FEATURE SELECTION BASED ON STATISTICAL HYPOTHESIS TESTING; 5.4 THE RECEIVER OPERATING CHARACTERISTICS (ROC) CURVE; 5.5 CLASS SEPARABILITY MEASURES; 5.6 FEATURE SUBSET SELECTION; 5.7 OPTIMAL FEATURE GENERATION; 5.8 NEURAL NETWORKS AND FEATURE GENERATION/SELECTION; 5.9 A HINT ON GENERALIZATION THEORY; 5.10 THE BAYESIAN INFORMATION CRITERION; 6 FEATURE GENERATION I: LINEAR TRANSFORMS; 6.1 INTRODUCTION; 6.2 BASIS VECTORS AND IMAGES; 6.3 THE KARHUNEN-LOÈVE TRANSFORM; 6.4 THE SINGULAR VALUE DECOMPOSITION; 6.5 INDEPENDENT COMPONENT ANALYSIS; 6.6 THE DISCRETE FOURIER TRANSFORM (DFT); 6.7 THE DISCRETE COSINE AND SINE TRANSFORMS; 6.8 THE HADAMARD TRANSFORM; 6.9 THE HAAR TRANSFORM; 6.10 THE HAAR EXPANSION REVISITED; 6.11 DISCRETE TIME WAVELET TRANSFORM (DTWT); 6.12 THE MULTIRESOLUTION INTERPRETATION; 6.13 WAVELET PACKETS; 6.14 A LOOK AT TWO-DIMENSIONAL GENERALIZATIONS; 6.15 APPLICATIONS; 7 FEATURE GENERATION II; 7.1 INTRODUCTION; 7.2 REGIONAL FEATURES; 7.3 FEATURES FOR SHAPE AND SIZE CHARACTERIZATION; 7.4 A GLIMPSE AT FRACTALS; 7.5 TYPICAL FEATURES FOR SPEECH AND AUDIO CLASSIFICATION; 8 TEMPLATE MATCHING; 8.1 INTRODUCTION; 8.2 MEASURES BASED ON OPTIMAL PATH SEARCHING TECHNIQUES; 8.3 MEASURES BASED ON CORRELATIONS; 8.4 DEFORMABLE TEMPLATE MODELS; 9 CONTEXT-DEPENDENT CLASSIFICATION; 9.1 INTRODUCTION; 9.2 THE BAYES CLASSIFIER
Record no.: UNINA-9910457973203321
Available at: Univ. Federico II
Pattern recognition [electronic resource] / Sergios Theodoridis, Konstantinos Koutroumbas
Author: Theodoridis, Sergios <1951->
Edition: [3rd ed.]
Publication: San Diego, CA : Academic Press, c2006
Physical description: 1 online resource (854 p.)
Dewey classification: 006.3; 006.4
Other authors (persons): Koutroumbas, Konstantinos <1967->
Subjects: Pattern recognition systems
ISBN: 1-281-31146-4; 9786611311469; 0-08-051361-1
Format: Printed material
Bibliographic level: Monograph
Language of publication: eng
Contents note: Front cover; Title page; Copyright page; Table of contents; PREFACE; 1 INTRODUCTION; 1.1 IS PATTERN RECOGNITION IMPORTANT?; 1.2 FEATURES, FEATURE VECTORS, AND CLASSIFIERS; 1.3 SUPERVISED VERSUS UNSUPERVISED PATTERN RECOGNITION; 1.4 OUTLINE OF THE BOOK; 2 CLASSIFIERS BASED ON BAYES DECISION THEORY; 2.1 INTRODUCTION; 2.2 BAYES DECISION THEORY; 2.3 DISCRIMINANT FUNCTIONS AND DECISION SURFACES; 2.4 BAYESIAN CLASSIFICATION FOR NORMAL DISTRIBUTIONS; 2.5 ESTIMATION OF UNKNOWN PROBABILITY DENSITY FUNCTIONS; 2.6 THE NEAREST NEIGHBOR RULE; 2.7 BAYESIAN NETWORKS; 3 LINEAR CLASSIFIERS; 3.1 INTRODUCTION; 3.2 LINEAR DISCRIMINANT FUNCTIONS AND DECISION HYPERPLANES; 3.3 THE PERCEPTRON ALGORITHM; 3.4 LEAST SQUARES METHODS; 3.5 MEAN SQUARE ESTIMATION REVISITED; 3.6 LOGISTIC DISCRIMINATION; 3.7 SUPPORT VECTOR MACHINES; 4 NONLINEAR CLASSIFIERS; 4.1 INTRODUCTION; 4.2 THE XOR PROBLEM; 4.3 THE TWO-LAYER PERCEPTRON; 4.4 THREE-LAYER PERCEPTRONS; 4.5 ALGORITHMS BASED ON EXACT CLASSIFICATION OF THE TRAINING SET; 4.6 THE BACKPROPAGATION ALGORITHM; 4.7 VARIATIONS ON THE BACKPROPAGATION THEME; 4.8 THE COST FUNCTION CHOICE; 4.9 CHOICE OF THE NETWORK SIZE; 4.10 A SIMULATION EXAMPLE; 4.11 NETWORKS WITH WEIGHT SHARING; 4.12 GENERALIZED LINEAR CLASSIFIERS; 4.13 CAPACITY OF THE l-DIMENSIONAL SPACE IN LINEAR DICHOTOMIES; 4.14 POLYNOMIAL CLASSIFIERS; 4.15 RADIAL BASIS FUNCTION NETWORKS; 4.16 UNIVERSAL APPROXIMATORS; 4.17 SUPPORT VECTOR MACHINES: THE NONLINEAR CASE; 4.18 DECISION TREES; 4.19 COMBINING CLASSIFIERS; 4.20 THE BOOSTING APPROACH TO COMBINE CLASSIFIERS; 4.21 DISCUSSION; 5 FEATURE SELECTION; 5.1 INTRODUCTION; 5.2 PREPROCESSING; 5.3 FEATURE SELECTION BASED ON STATISTICAL HYPOTHESIS TESTING; 5.4 THE RECEIVER OPERATING CHARACTERISTICS (ROC) CURVE; 5.5 CLASS SEPARABILITY MEASURES; 5.6 FEATURE SUBSET SELECTION; 5.7 OPTIMAL FEATURE GENERATION; 5.8 NEURAL NETWORKS AND FEATURE GENERATION/SELECTION; 5.9 A HINT ON GENERALIZATION THEORY; 5.10 THE BAYESIAN INFORMATION CRITERION; 6 FEATURE GENERATION I: LINEAR TRANSFORMS; 6.1 INTRODUCTION; 6.2 BASIS VECTORS AND IMAGES; 6.3 THE KARHUNEN-LOÈVE TRANSFORM; 6.4 THE SINGULAR VALUE DECOMPOSITION; 6.5 INDEPENDENT COMPONENT ANALYSIS; 6.6 THE DISCRETE FOURIER TRANSFORM (DFT); 6.7 THE DISCRETE COSINE AND SINE TRANSFORMS; 6.8 THE HADAMARD TRANSFORM; 6.9 THE HAAR TRANSFORM; 6.10 THE HAAR EXPANSION REVISITED; 6.11 DISCRETE TIME WAVELET TRANSFORM (DTWT); 6.12 THE MULTIRESOLUTION INTERPRETATION; 6.13 WAVELET PACKETS; 6.14 A LOOK AT TWO-DIMENSIONAL GENERALIZATIONS; 6.15 APPLICATIONS; 7 FEATURE GENERATION II; 7.1 INTRODUCTION; 7.2 REGIONAL FEATURES; 7.3 FEATURES FOR SHAPE AND SIZE CHARACTERIZATION; 7.4 A GLIMPSE AT FRACTALS; 7.5 TYPICAL FEATURES FOR SPEECH AND AUDIO CLASSIFICATION; 8 TEMPLATE MATCHING; 8.1 INTRODUCTION; 8.2 MEASURES BASED ON OPTIMAL PATH SEARCHING TECHNIQUES; 8.3 MEASURES BASED ON CORRELATIONS; 8.4 DEFORMABLE TEMPLATE MODELS; 9 CONTEXT-DEPENDENT CLASSIFICATION; 9.1 INTRODUCTION; 9.2 THE BAYES CLASSIFIER
Record no.: UNINA-9910784636303321
Available at: Univ. Federico II
Pattern recognition [electronic resource] / Sergios Theodoridis, Konstantinos Koutroumbas
Author: Theodoridis, Sergios <1951->
Edition: [3rd ed.]
Publication: San Diego, CA : Academic Press, c2006
Physical description: 1 online resource (854 p.)
Dewey classification: 006.3; 006.4
Other authors (persons): Koutroumbas, Konstantinos <1967->
Subjects: Pattern recognition systems
ISBN: 1-281-31146-4; 9786611311469; 0-08-051361-1
Format: Printed material
Bibliographic level: Monograph
Language of publication: eng
Contents note: Front cover; Title page; Copyright page; Table of contents; PREFACE; 1 INTRODUCTION; 1.1 IS PATTERN RECOGNITION IMPORTANT?; 1.2 FEATURES, FEATURE VECTORS, AND CLASSIFIERS; 1.3 SUPERVISED VERSUS UNSUPERVISED PATTERN RECOGNITION; 1.4 OUTLINE OF THE BOOK; 2 CLASSIFIERS BASED ON BAYES DECISION THEORY; 2.1 INTRODUCTION; 2.2 BAYES DECISION THEORY; 2.3 DISCRIMINANT FUNCTIONS AND DECISION SURFACES; 2.4 BAYESIAN CLASSIFICATION FOR NORMAL DISTRIBUTIONS; 2.5 ESTIMATION OF UNKNOWN PROBABILITY DENSITY FUNCTIONS; 2.6 THE NEAREST NEIGHBOR RULE; 2.7 BAYESIAN NETWORKS; 3 LINEAR CLASSIFIERS; 3.1 INTRODUCTION; 3.2 LINEAR DISCRIMINANT FUNCTIONS AND DECISION HYPERPLANES; 3.3 THE PERCEPTRON ALGORITHM; 3.4 LEAST SQUARES METHODS; 3.5 MEAN SQUARE ESTIMATION REVISITED; 3.6 LOGISTIC DISCRIMINATION; 3.7 SUPPORT VECTOR MACHINES; 4 NONLINEAR CLASSIFIERS; 4.1 INTRODUCTION; 4.2 THE XOR PROBLEM; 4.3 THE TWO-LAYER PERCEPTRON; 4.4 THREE-LAYER PERCEPTRONS; 4.5 ALGORITHMS BASED ON EXACT CLASSIFICATION OF THE TRAINING SET; 4.6 THE BACKPROPAGATION ALGORITHM; 4.7 VARIATIONS ON THE BACKPROPAGATION THEME; 4.8 THE COST FUNCTION CHOICE; 4.9 CHOICE OF THE NETWORK SIZE; 4.10 A SIMULATION EXAMPLE; 4.11 NETWORKS WITH WEIGHT SHARING; 4.12 GENERALIZED LINEAR CLASSIFIERS; 4.13 CAPACITY OF THE l-DIMENSIONAL SPACE IN LINEAR DICHOTOMIES; 4.14 POLYNOMIAL CLASSIFIERS; 4.15 RADIAL BASIS FUNCTION NETWORKS; 4.16 UNIVERSAL APPROXIMATORS; 4.17 SUPPORT VECTOR MACHINES: THE NONLINEAR CASE; 4.18 DECISION TREES; 4.19 COMBINING CLASSIFIERS; 4.20 THE BOOSTING APPROACH TO COMBINE CLASSIFIERS; 4.21 DISCUSSION; 5 FEATURE SELECTION; 5.1 INTRODUCTION; 5.2 PREPROCESSING; 5.3 FEATURE SELECTION BASED ON STATISTICAL HYPOTHESIS TESTING; 5.4 THE RECEIVER OPERATING CHARACTERISTICS (ROC) CURVE; 5.5 CLASS SEPARABILITY MEASURES; 5.6 FEATURE SUBSET SELECTION; 5.7 OPTIMAL FEATURE GENERATION; 5.8 NEURAL NETWORKS AND FEATURE GENERATION/SELECTION; 5.9 A HINT ON GENERALIZATION THEORY; 5.10 THE BAYESIAN INFORMATION CRITERION; 6 FEATURE GENERATION I: LINEAR TRANSFORMS; 6.1 INTRODUCTION; 6.2 BASIS VECTORS AND IMAGES; 6.3 THE KARHUNEN-LOÈVE TRANSFORM; 6.4 THE SINGULAR VALUE DECOMPOSITION; 6.5 INDEPENDENT COMPONENT ANALYSIS; 6.6 THE DISCRETE FOURIER TRANSFORM (DFT); 6.7 THE DISCRETE COSINE AND SINE TRANSFORMS; 6.8 THE HADAMARD TRANSFORM; 6.9 THE HAAR TRANSFORM; 6.10 THE HAAR EXPANSION REVISITED; 6.11 DISCRETE TIME WAVELET TRANSFORM (DTWT); 6.12 THE MULTIRESOLUTION INTERPRETATION; 6.13 WAVELET PACKETS; 6.14 A LOOK AT TWO-DIMENSIONAL GENERALIZATIONS; 6.15 APPLICATIONS; 7 FEATURE GENERATION II; 7.1 INTRODUCTION; 7.2 REGIONAL FEATURES; 7.3 FEATURES FOR SHAPE AND SIZE CHARACTERIZATION; 7.4 A GLIMPSE AT FRACTALS; 7.5 TYPICAL FEATURES FOR SPEECH AND AUDIO CLASSIFICATION; 8 TEMPLATE MATCHING; 8.1 INTRODUCTION; 8.2 MEASURES BASED ON OPTIMAL PATH SEARCHING TECHNIQUES; 8.3 MEASURES BASED ON CORRELATIONS; 8.4 DEFORMABLE TEMPLATE MODELS; 9 CONTEXT-DEPENDENT CLASSIFICATION; 9.1 INTRODUCTION; 9.2 THE BAYES CLASSIFIER
Record no.: UNINA-9910814844903321
Available at: Univ. Federico II