
Record No.

UNINA9910788563103321

Title

Optimization for machine learning / edited by Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright

Publication/distribution/printing

Cambridge, Mass. : MIT Press, [2012]

©2012

ISBN

0-262-29789-2

1-283-30284-5

9786613302847

Physical description

1 online resource (509 p.)

Series

Neural information processing series

Other authors (persons)

Sra, Suvrit <1976->

Nowozin, Sebastian <1980->

Wright, Stephen J. <1960->

Discipline

006.3/1

Subjects

Machine learning - Mathematical models

Mathematical optimization

Language of publication

English

Format

Printed material

Bibliographic level

Monograph

General notes

Description based upon print version of record.

Bibliography note

Includes bibliographical references.

Contents note

Contents; Series Foreword; Preface
Chapter 1. Introduction: Optimization and Machine Learning; 1.1 Support Vector Machines; 1.2 Regularized Optimization; 1.3 Summary of the Chapters; 1.4 References
Chapter 2. Convex Optimization with Sparsity-Inducing Norms; 2.1 Introduction; 2.2 Generic Methods; 2.3 Proximal Methods; 2.4 (Block) Coordinate Descent Algorithms; 2.5 Reweighted-ℓ2 Algorithms; 2.6 Working-Set Methods; 2.7 Quantitative Evaluation; 2.8 Extensions; 2.9 Conclusion; 2.10 References
Chapter 3. Interior-Point Methods for Large-Scale Cone Programming; 3.1 Introduction; 3.2 Primal-Dual Interior-Point Methods; 3.3 Linear and Quadratic Programming; 3.4 Second-Order Cone Programming; 3.5 Semidefinite Programming; 3.6 Conclusion; 3.7 References
Chapter 4. Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey; 4.1 Introduction; 4.2 Incremental Subgradient-Proximal Methods; 4.3 Convergence for Methods with Cyclic Order; 4.4 Convergence for Methods with Randomized Order; 4.5 Some Applications; 4.6 Conclusions; 4.7 References
Chapter 5. First-Order Methods for Nonsmooth Convex Large-Scale Optimization, I: General Purpose Methods; 5.1 Introduction; 5.2 Mirror Descent Algorithm: Minimizing over a Simple Set; 5.3 Problems with Functional Constraints; 5.4 Minimizing Strongly Convex Functions; 5.5 Mirror Descent Stochastic Approximation; 5.6 Mirror Descent for Convex-Concave Saddle-Point Problems; 5.7 Setting up a Mirror Descent Method; 5.8 Notes and Remarks; 5.9 References
Chapter 6. First-Order Methods for Nonsmooth Convex Large-Scale Optimization, II: Utilizing Problem's Structure; 6.1 Introduction; 6.2 Saddle-Point Reformulations of Convex Minimization Problems; 6.3 Mirror-Prox Algorithm; 6.4 Accelerating the Mirror-Prox Algorithm; 6.5 Accelerating First-Order Methods by Randomization; 6.6 Notes and Remarks; 6.7 References
Chapter 7. Cutting-Plane Methods in Machine Learning; 7.1 Introduction to Cutting-plane Methods; 7.2 Regularized Risk Minimization; 7.3 Multiple Kernel Learning; 7.4 MAP Inference in Graphical Models; 7.5 References
Chapter 8. Introduction to Dual Decomposition for Inference; 8.1 Introduction; 8.2 Motivating Applications; 8.3 Dual Decomposition and Lagrangian Relaxation; 8.4 Subgradient Algorithms; 8.5 Block Coordinate Descent Algorithms; 8.6 Relations to Linear Programming Relaxations; 8.7 Decoding: Finding the MAP Assignment; 8.8 Discussion; Appendix: Technical Details; 8.10 References
Chapter 9. Augmented Lagrangian Methods for Learning, Selecting, and Combining Features; 9.1 Introduction; 9.2 Background; 9.3 Proximal Minimization Algorithm; 9.4 Dual Augmented Lagrangian (DAL) Algorithm; 9.5 Connections; 9.6 Application; 9.7 Summary; Acknowledgment; Appendix: Mathematical Details; 9.9 References
Chapter 10. The Convex Optimization Approach to Regret Minimization; 10.1 Introduction; 10.2 The RFTL Algorithm and Its Analysis; 10.3 The "Primal-Dual" Approach

Summary/abstract

An up-to-date account of the interplay between optimization and machine learning, accessible to students and researchers in both communities.

The interplay between optimization and machine learning is one of the most important developments in modern computational science. Optimization formulations and methods are proving to be vital in designing algorithms to extract essential knowledge from huge volumes of data. Machine learning, however, is not simply a consumer of optimization technology but a rapidly evolving field that is itself generating new optimization ideas. This book captures the state of the art of the interaction between optimization and machine learning in a way that is accessible to researchers in both fields.

Optimization approaches have enjoyed prominence in machine learning because of their wide applicability and attractive theoretical properties. The increasing complexity, size, and variety of today's machine learning models call for the reassessment of existing assumptions. This book starts the process of reassessment. It describes the resurgence in novel contexts of established frameworks such as first-order methods, stochastic approximations, convex relaxations, interior-point methods, and proximal methods. It also devotes attention to newer themes such as regularized optimization, robust optimization, gradient and subgradient methods, splitting techniques, and second-order methods. Many of these techniques draw inspiration from other fields, including operations research, theoretical computer science, and subfields of optimization. The book will enrich the ongoing cross-fertilization between the machine learning community and these other fields, and within the broader optimization community.