Advances in genetic programming. Vol. 2 / edited by Peter J. Angeline and Kenneth E. Kinnear, Jr
Publication/distribution Cambridge, Mass. ; London : MIT Press, ©1996
Physical description 1 online resource (xv, 538 p.)
Discipline 006.3
Other authors (Persons) Kinnear, Kenneth E
Angeline, Peter J
Series Complex adaptive systems
A Bradford book
Topical subject Genetic programming (Computer science)
Computer programming
Uncontrolled subject COMPUTER SCIENCE/Artificial Intelligence
COMPUTER SCIENCE/Machine Learning & Neural Networks
Format Printed material
Bibliographic level Monograph
Publication language eng
Record no. UNINA-9910260611103321
Available at: Univ. Federico II
Opac: Check availability here
Dataset shift in machine learning / [edited by] Joaquin Quiñonero-Candela [and others]
Publication/distribution Cambridge, Mass. : MIT Press, ©2009
Physical description 1 online resource (246 p.)
Discipline 006.3/1
Other authors (Persons) Quiñonero-Candela, Joaquin
Series Neural information processing series
Topical subject Machine learning
Uncontrolled subject COMPUTER SCIENCE/Machine Learning & Neural Networks
ISBN 0-262-29253-X
1-282-24038-2
0-262-25510-3
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Contents; Series Foreword; Preface; I - Introduction to Dataset Shift; 1 - When Training and Test Sets Are Different: Characterizing Learning Transfer; 2 - Projection and Projectability; II - Theoretical Views on Dataset and Covariate Shift; 3 - Binary Classification under Sample Selection Bias; 4 - On Bayesian Transduction: Implications for the Covariate Shift Problem; 5 - On the Training/Test Distributions Gap: A Data Representation Learning Framework; III - Algorithms for Covariate Shift; 6 - Geometry of Covariate Shift with Applications to Active Learning;
7 - A Conditional Expectation Approach to Model Selection and Active Learning under Covariate Shift; 8 - Covariate Shift by Kernel Mean Matching; 9 - Discriminative Learning under Covariate Shift with a Single Optimization Problem; 10 - An Adversarial View of Covariate Shift and a Minimax Approach; IV - Discussion; 11 - Author Comments; References; Notation and Symbols; Contributors; Index
Record no. UNINA-9910782857003321
Available at: Univ. Federico II
Opac: Check availability here
Learning machine translation / [edited by] Cyril Goutte [and others]
Publication/distribution Cambridge, Mass. : MIT Press, ©2009
Physical description 1 online resource (329 p.)
Discipline 418/.020285
Other authors (Persons) Goutte, Cyril
Series Neural information processing series
Topical subject Machine translating - Statistical methods
Uncontrolled subject COMPUTER SCIENCE/Machine Learning & Neural Networks
ISBN 0-262-29270-X
1-282-24020-X
0-262-25509-X
9786612240201
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Contents; Series Foreword; Preface; 1 A Statistical Machine Translation Primer; I Enabling Technologies; 2 Mining Patents for Parallel Corpora; 3 Automatic Construction of Multilingual Name Dictionaries; 4 Named Entity Transliteration and Discovery in Multilingual Corpora; 5 Combination of Statistical Word Alignments Based on Multiple Preprocessing Schemes; 6 Linguistically Enriched Word-Sequence Kernels for Discriminative Language Modeling; II Machine Translation; 7 Toward Purely Discriminative Training for Tree-Structured Translation Models;
8 Reranking for Large-Scale Statistical Machine Translation; 9 Kernel-Based Machine Translation; 10 Statistical Machine Translation through Global Lexical Selection and Sentence Reconstruction; 11 Discriminative Phrase Selection for SMT; 12 Semisupervised Learning for Machine Translation; 13 Learning to Combine Machine Translation Systems; References; Contributors; Index
Record no. UNINA-9910782858503321
Available at: Univ. Federico II
Opac: Check availability here
Learning with kernels : support vector machines, regularization, optimization, and beyond / Bernhard Schölkopf, Alexander J. Smola
Author Schölkopf, Bernhard
Publication/distribution Cambridge, Mass. : MIT Press, ©2002
Physical description 1 online resource (645 p.)
Discipline 006.3/1
Other authors (Persons) Smola, Alexander J
Series Adaptive computation and machine learning
Topical subject Machine learning
Algorithms
Kernel functions
Uncontrolled subject COMPUTER SCIENCE/Machine Learning & Neural Networks
ISBN 0-262-25693-2
0-585-47759-0
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Contents; Series Foreword; Preface; 1 - A Tutorial Introduction; I - Concepts and Tools; 2 - Kernels; 3 - Risk and Loss Functions; 4 - Regularization; 5 - Elements of Statistical Learning Theory; 6 - Optimization; II - Support Vector Machines; 7 - Pattern Recognition; 8 - Single-Class Problems: Quantile Estimation and Novelty Detection; 9 - Regression Estimation; 10 - Implementation; 11 - Incorporating Invariances; 12 - Learning Theory Revisited; III - Kernel Methods; 13 - Designing Kernels; 14 - Kernel Feature Extraction; 15 - Kernel Fisher Discriminant; 16 - Bayesian Kernel Methods;
17 - Regularized Principal Manifolds; 18 - Pre-Images and Reduced Set Methods; A - Addenda; B - Mathematical Prerequisites; References; Index; Notation and Symbols
Record no. UNINA-9910780260003321
Available at: Univ. Federico II
Opac: Check availability here
Machine learning in non-stationary environments : introduction to covariate shift adaptation / Masashi Sugiyama and Motoaki Kawanabe
Author Sugiyama, Masashi <1974->
Publication/distribution Cambridge, Mass. : MIT Press, ©2012
Physical description 1 online resource (279 p.)
Discipline 006.3/1
Other authors (Persons) Kawanabe, Motoaki
Series Adaptive computation and machine learning
Topical subject Machine learning
Uncontrolled subject COMPUTER SCIENCE/Machine Learning & Neural Networks
COMPUTER SCIENCE/General
COMPUTER SCIENCE/Artificial Intelligence
ISBN 0-262-30043-5
1-280-49922-2
9786613594457
0-262-30122-9
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Contents; Foreword; Preface; I INTRODUCTION; 1 Introduction and Problem Formulation; 1.1 Machine Learning under Covariate Shift; 1.2 Quick Tour of Covariate Shift Adaptation; 1.3 Problem Formulation; 1.4 Structure of This Book; II LEARNING UNDER COVARIATE SHIFT; 2 Function Approximation; 2.1 Importance-Weighting Techniques for Covariate Shift Adaptation; 2.2 Examples of Importance-Weighted Regression Methods; 2.3 Examples of Importance-Weighted Classification Methods; 2.4 Numerical Examples; 2.5 Summary and Discussion; 3 Model Selection; 3.1 Importance-Weighted Akaike Information Criterion;
3.2 Importance-Weighted Subspace Information Criterion; 3.3 Importance-Weighted Cross-Validation; 3.4 Numerical Examples; 3.5 Summary and Discussion; 4 Importance Estimation; 4.1 Kernel Density Estimation; 4.2 Kernel Mean Matching; 4.3 Logistic Regression; 4.4 Kullback-Leibler Importance Estimation Procedure; 4.5 Least-Squares Importance Fitting; 4.6 Unconstrained Least-Squares Importance Fitting; 4.7 Numerical Examples; 4.8 Experimental Comparison; 4.9 Summary; 5 Direct Density-Ratio Estimation with Dimensionality Reduction; 5.1 Density Difference in Hetero-Distributional Subspace;
5.2 Characterization of Hetero-Distributional Subspace; 5.3 Identifying Hetero-Distributional Subspace by Supervised Dimensionality Reduction; 5.4 Using LFDA for Finding Hetero-Distributional Subspace; 5.5 Density-Ratio Estimation in the Hetero-Distributional Subspace; 5.6 Numerical Examples; 5.7 Summary; 6 Relation to Sample Selection Bias; 6.1 Heckman's Sample Selection Model; 6.2 Distributional Change and Sample Selection Bias; 6.3 The Two-Step Algorithm; 6.4 Relation to Covariate Shift Approach; 7 Applications of Covariate Shift Adaptation; 7.1 Brain-Computer Interface;
7.2 Speaker Identification; 7.3 Natural Language Processing; 7.4 Perceived Age Prediction from Face Images; 7.5 Human Activity Recognition from Accelerometric Data; 7.6 Sample Reuse in Reinforcement Learning; III LEARNING CAUSING COVARIATE SHIFT; 8 Active Learning; 8.1 Preliminaries; 8.2 Population-Based Active Learning Methods; 8.3 Numerical Examples of Population-Based Active Learning Methods; 8.4 Pool-Based Active Learning Methods; 8.5 Numerical Examples of Pool-Based Active Learning Methods; 8.6 Summary and Discussion; 9 Active Learning with Model Selection;
9.1 Direct Approach and the Active Learning/Model Selection Dilemma; 9.2 Sequential Approach; 9.3 Batch Approach; 9.4 Ensemble Active Learning; 9.5 Numerical Examples; 9.6 Summary and Discussion; 10 Applications of Active Learning; 10.1 Design of Efficient Exploration Strategies in Reinforcement Learning; 10.2 Wafer Alignment in Semiconductor Exposure Apparatus; IV CONCLUSIONS; 11 Conclusions and Future Prospects; 11.1 Conclusions; 11.2 Future Prospects; Appendix: List of Symbols and Abbreviations; Bibliography; Index
Record no. UNINA-9910789928103321
Available at: Univ. Federico II
Opac: Check availability here
The minimum description length principle / Peter D. Grünwald
Author Grünwald, Peter D
Publication/distribution Cambridge, Mass. : MIT Press, ©2007
Physical description 1 online resource (736 p.)
Discipline 003/.54
Series Adaptive computation and machine learning
Topical subject Minimum description length (Information theory)
Uncontrolled subject COMPUTER SCIENCE/Machine Learning & Neural Networks
ISBN 1-282-09635-4
9786612096358
0-262-25629-0
1-4294-6560-3
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Contents; List of Figures; Series Foreword; Foreword; Preface; PART I - Introductory Material; 1 - Learning, Regularity, and Compression; 2 - Probabilistic and Statistical Preliminaries; 3 - Information-Theoretic Preliminaries; 4 - Information-Theoretic Properties of Statistical Models; 5 - Crude Two-Part Code MDL; PART II - Universal Coding; 6 - Universal Coding with Countable Models; 7 - Parametric Models: Normalized Maximum Likelihood; 8 - Parametric Models: Bayes; 9 - Parametric Models: Prequential Plug-in; 10 - Parametric Models: Two-Part; 11 - NML with Infinite Complexity;
12 - Linear Regression; PART III - Refined MDL; 14 - MDL Model Selection; 15 - MDL Prediction and Estimation; 16 - MDL Consistency and Convergence; 17 - MDL in Context; PART IV - Additional Background; 18 - The Exponential or "Maximum Entropy" Families; 19 - Information-Theoretic Properties of Exponential Families; References; List of Symbols; Subject Index
Record no. UNINA-9910777833803321
Available at: Univ. Federico II
Opac: Check availability here
Nearest-neighbor methods in learning and vision : theory and practice / edited by Gregory Shakhnarovich, Trevor Darrell, Piotr Indyk
Publication/distribution Cambridge, Mass. : MIT Press, ©2005
Physical description 1 online resource (280 p.)
Discipline 006.3/1
Other authors (Persons) Shakhnarovich, Gregory
Darrell, Trevor
Indyk, Piotr
Series Neural information processing series
Topical subject Nearest neighbor analysis (Statistics)
Machine learning
Algorithms
Geometry - Data processing
Uncontrolled subject COMPUTER SCIENCE/Machine Learning & Neural Networks
ISBN 1-282-09675-3
9786612096754
0-262-25695-9
1-4237-7253-9
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Contents; Series Foreword; Preface; 1 Introduction; I THEORY; 2 Nearest-Neighbor Searching and Metric Space Dimensions; 3 Locality-Sensitive Hashing Using Stable Distributions; II APPLICATIONS: LEARNING; 4 New Algorithms for Efficient High-Dimensional Nonparametric Classification; 5 Approximate Nearest Neighbor Regression in Very High Dimensions; 6 Learning Embeddings for Fast Approximate Nearest Neighbor Retrieval; III APPLICATIONS: VISION; 7 Parameter-Sensitive Hashing for Fast Pose Estimation; 8 Contour Matching Using Approximate Earth Mover's Distance;
9 Adaptive Mean Shift Based Clustering in High Dimensions; 10 Object Recognition using Locality Sensitive Hashing of Shape Contexts; Contributors; Index
Record no. UNINA-9910777516603321
Available at: Univ. Federico II
Opac: Check availability here
New directions in statistical signal processing : from systems to brain / edited by Simon Haykin [and others]
Publication/distribution Cambridge, Mass. : MIT Press, ©2007
Physical description 1 online resource (544 p.)
Discipline 612.8/2
Other authors (Persons) Haykin, Simon S. <1931->
Series Neural information processing series
Topical subject Neural networks (Neurobiology)
Neural networks (Computer science)
Signal processing - Statistical methods
Neural computers
Uncontrolled subject COMPUTER SCIENCE/Machine Learning & Neural Networks
NEUROSCIENCE/General
ISBN 0-262-29279-3
9786612096372
1-282-09637-0
0-262-25631-2
1-4294-1873-7
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Modeling the mind : from circuits to systems / Suzanna Becker -- Empirical statistics and stochastic models for visual signals / David Mumford -- The machine cocktail party problem / Simon Haykin, Zhe Chen -- Sensor adaptive signal processing of biological nanotubes (ion channels) at macroscopic and nano scales / Vikram Krishnamurthy -- Spin diffusion : a new perspective in magnetic resonance imaging / Timothy R. Field -- What makes a dynamical system computationally powerful? / Robert Legenstein, Wolfgang Maass -- A variational principle for graphical models / Martin J. Wainwright, Michael I. Jordan -- Modeling large dynamical systems with dynamical consistent neural networks / Hans-Georg Zimmermann ... [et al.] -- Diversity in communication : from source coding to wireless networks / Suhas N. Diggavi -- Designing patterns for easy recognition : information transmission with low-density parity-check codes / Frank R. Kschischang, Masoud Ardakani -- Turbo processing / Claude Berrou, Charlotte Langlais, Fabrice Seguin -- Blind signal processing based on data geometric properties / Konstantinos Diamantaras -- Game-theoretic learning / Geoffrey J. Gordon -- Learning observable operator models via the efficient sharpening algorithm / Herbert Jaeger ... [et al.].
Record no. UNINA-9910777796103321
Available at: Univ. Federico II
Opac: Check availability here
Optimization for machine learning / edited by Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright
Publication/distribution Cambridge, Mass. : MIT Press, [2012]
Physical description 1 online resource (509 p.)
Discipline 006.3/1
Other authors (Persons) Sra, Suvrit <1976->
Nowozin, Sebastian <1980->
Wright, Stephen J. <1960->
Series Neural information processing series
Topical subject Machine learning - Mathematical models
Mathematical optimization
Uncontrolled subject COMPUTER SCIENCE/Machine Learning & Neural Networks
COMPUTER SCIENCE/Artificial Intelligence
ISBN 0-262-29789-2
1-283-30284-5
9786613302847
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Contents; Series Foreword; Preface; Chapter 1. Introduction: Optimization and Machine Learning; 1.1 Support Vector Machines; 1.2 Regularized Optimization; 1.3 Summary of the Chapters; 1.4 References; Chapter 2. Convex Optimization with Sparsity-Inducing Norms; 2.1 Introduction; 2.2 Generic Methods; 2.3 Proximal Methods; 2.4 (Block) Coordinate Descent Algorithms; 2.5 Reweighted-ℓ2 Algorithms; 2.6 Working-Set Methods; 2.7 Quantitative Evaluation; 2.8 Extensions; 2.9 Conclusion; 2.10 References; Chapter 3. Interior-Point Methods for Large-Scale Cone Programming; 3.1 Introduction;
3.2 Primal-Dual Interior-Point Methods; 3.3 Linear and Quadratic Programming; 3.4 Second-Order Cone Programming; 3.5 Semidefinite Programming; 3.6 Conclusion; 3.7 References; Chapter 4. Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey; 4.1 Introduction; 4.2 Incremental Subgradient-Proximal Methods; 4.3 Convergence for Methods with Cyclic Order; 4.4 Convergence for Methods with Randomized Order; 4.5 Some Applications; 4.6 Conclusions; 4.7 References; Chapter 5. First-Order Methods for Nonsmooth Convex Large-Scale Optimization, I: General Purpose Methods;
5.1 Introduction; 5.2 Mirror Descent Algorithm: Minimizing over a Simple Set; 5.3 Problems with Functional Constraints; 5.4 Minimizing Strongly Convex Functions; 5.5 Mirror Descent Stochastic Approximation; 5.6 Mirror Descent for Convex-Concave Saddle-Point Problems; 5.7 Setting up a Mirror Descent Method; 5.8 Notes and Remarks; 5.9 References; Chapter 6. First-Order Methods for Nonsmooth Convex Large-Scale Optimization, II: Utilizing Problem's Structure; 6.1 Introduction; 6.2 Saddle-Point Reformulations of Convex Minimization Problems; 6.3 Mirror-Prox Algorithm;
6.4 Accelerating the Mirror-Prox Algorithm; 6.5 Accelerating First-Order Methods by Randomization; 6.6 Notes and Remarks; 6.7 References; Chapter 7. Cutting-Plane Methods in Machine Learning; 7.1 Introduction to Cutting-Plane Methods; 7.2 Regularized Risk Minimization; 7.3 Multiple Kernel Learning; 7.4 MAP Inference in Graphical Models; 7.5 References; Chapter 8. Introduction to Dual Decomposition for Inference; 8.1 Introduction; 8.2 Motivating Applications; 8.3 Dual Decomposition and Lagrangian Relaxation; 8.4 Subgradient Algorithms; 8.5 Block Coordinate Descent Algorithms; 8.6 Relations to Linear Programming Relaxations;
8.7 Decoding: Finding the MAP Assignment; 8.8 Discussion; Appendix: Technical Details; 8.10 References; Chapter 9. Augmented Lagrangian Methods for Learning, Selecting, and Combining Features; 9.1 Introduction; 9.2 Background; 9.3 Proximal Minimization Algorithm; 9.4 Dual Augmented Lagrangian (DAL) Algorithm; 9.5 Connections; 9.6 Application; 9.7 Summary; Acknowledgment; Appendix: Mathematical Details; 9.9 References; Chapter 10. The Convex Optimization Approach to Regret Minimization; 10.1 Introduction; 10.2 The RFTL Algorithm and Its Analysis;
10.3 The "Primal-Dual" Approach
Record no. UNINA-9910788563103321
Available at: Univ. Federico II
Opac: Check availability here
Semi-supervised learning / [edited by] Olivier Chapelle, Bernhard Schölkopf, Alexander Zien
Publication/distribution Cambridge, Mass. : MIT Press, ©2006
Physical description 1 online resource (528 p.)
Discipline 006.3/1
Other authors (Persons) Chapelle, Olivier
Schölkopf, Bernhard
Zien, Alexander
Series Adaptive computation and machine learning
Topical subject Supervised learning (Machine learning)
Uncontrolled subject COMPUTER SCIENCE/Machine Learning & Neural Networks
ISBN 1-282-09618-4
0-262-25589-8
1-4294-1408-1
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Contents; Series Foreword; Preface; 1 - Introduction to Semi-Supervised Learning; 2 - A Taxonomy for Semi-Supervised Learning Methods; 3 - Semi-Supervised Text Classification Using EM; 4 - Risks of Semi-Supervised Learning: How Unlabeled Data Can Degrade Performance of Generative Classifiers; 5 - Probabilistic Semi-Supervised Clustering with Constraints; 6 - Transductive Support Vector Machines; 7 - Semi-Supervised Learning Using Semi-Definite Programming; 8 - Gaussian Processes and the Null-Category Noise Model; 9 - Entropy Regularization; 10 - Data-Dependent Regularization;
11 - Label Propagation and Quadratic Criterion; 12 - The Geometric Basis of Semi-Supervised Learning; 13 - Discrete Regularization; 14 - Semi-Supervised Learning with Conditional Harmonic Mixing; 15 - Graph Kernels by Spectral Transforms; 16 - Spectral Methods for Dimensionality Reduction; 17 - Modifying Distances; 18 - Large-Scale Algorithms; 19 - Semi-Supervised Protein Classification Using Cluster Kernels; 20 - Prediction of Protein Function from Networks; 21 - Analysis of Benchmarks; 22 - An Augmented PAC Model for Semi-Supervised Learning;
23 - Metric-Based Approaches for Semi-Supervised Regression and Classification; 24 - Transductive Inference and Semi-Supervised Learning; 25 - A Discussion of Semi-Supervised Learning and Transduction; References; Notation and Symbols; Contributors; Index
Record no. UNINA-9910777620503321
Available at: Univ. Federico II
Opac: Check availability here