Optimization for machine learning [electronic resource] / edited by Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright
Publication/distribution Cambridge, Mass. : MIT Press, c2012
Physical description 1 online resource (509 p.)
Discipline 006.3/1
Other authors (persons) Sra, Suvrit <1976->
Nowozin, Sebastian <1980->
Wright, Stephen J. <1960->
Series Neural information processing series
Topical subject Machine learning - Mathematical models
Mathematical optimization
Genre/form subject Electronic books.
ISBN 0-262-29789-2
1-283-30284-5
9786613302847
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Contents; Series Foreword; Preface
Chapter 1. Introduction: Optimization and Machine Learning; 1.1 Support Vector Machines; 1.2 Regularized Optimization; 1.3 Summary of the Chapters; 1.4 References
Chapter 2. Convex Optimization with Sparsity-Inducing Norms; 2.1 Introduction; 2.2 Generic Methods; 2.3 Proximal Methods; 2.4 (Block) Coordinate Descent Algorithms; 2.5 Reweighted-ℓ2 Algorithms; 2.6 Working-Set Methods; 2.7 Quantitative Evaluation; 2.8 Extensions; 2.9 Conclusion; 2.10 References
Chapter 3. Interior-Point Methods for Large-Scale Cone Programming; 3.1 Introduction; 3.2 Primal-Dual Interior-Point Methods; 3.3 Linear and Quadratic Programming; 3.4 Second-Order Cone Programming; 3.5 Semidefinite Programming; 3.6 Conclusion; 3.7 References
Chapter 4. Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey; 4.1 Introduction; 4.2 Incremental Subgradient-Proximal Methods; 4.3 Convergence for Methods with Cyclic Order; 4.4 Convergence for Methods with Randomized Order; 4.5 Some Applications; 4.6 Conclusions; 4.7 References
Chapter 5. First-Order Methods for Nonsmooth Convex Large-Scale Optimization, I: General Purpose Methods; 5.1 Introduction; 5.2 Mirror Descent Algorithm: Minimizing over a Simple Set; 5.3 Problems with Functional Constraints; 5.4 Minimizing Strongly Convex Functions; 5.5 Mirror Descent Stochastic Approximation; 5.6 Mirror Descent for Convex-Concave Saddle-Point Problems; 5.7 Setting up a Mirror Descent Method; 5.8 Notes and Remarks; 5.9 References
Chapter 6. First-Order Methods for Nonsmooth Convex Large-Scale Optimization, II: Utilizing Problem's Structure; 6.1 Introduction; 6.2 Saddle-Point Reformulations of Convex Minimization Problems; 6.3 Mirror-Prox Algorithm; 6.4 Accelerating the Mirror-Prox Algorithm; 6.5 Accelerating First-Order Methods by Randomization; 6.6 Notes and Remarks; 6.7 References
Chapter 7. Cutting-Plane Methods in Machine Learning; 7.1 Introduction to Cutting-plane Methods; 7.2 Regularized Risk Minimization; 7.3 Multiple Kernel Learning; 7.4 MAP Inference in Graphical Models; 7.5 References
Chapter 8. Introduction to Dual Decomposition for Inference; 8.1 Introduction; 8.2 Motivating Applications; 8.3 Dual Decomposition and Lagrangian Relaxation; 8.4 Subgradient Algorithms; 8.5 Block Coordinate Descent Algorithms; 8.6 Relations to Linear Programming Relaxations; 8.7 Decoding: Finding the MAP Assignment; 8.8 Discussion; Appendix: Technical Details; 8.10 References
Chapter 9. Augmented Lagrangian Methods for Learning, Selecting, and Combining Features; 9.1 Introduction; 9.2 Background; 9.3 Proximal Minimization Algorithm; 9.4 Dual Augmented Lagrangian (DAL) Algorithm; 9.5 Connections; 9.6 Application; 9.7 Summary; Acknowledgment; Appendix: Mathematical Details; 9.9 References
Chapter 10. The Convex Optimization Approach to Regret Minimization; 10.1 Introduction; 10.2 The RFTL Algorithm and Its Analysis; 10.3 The "Primal-Dual" Approach
Record no. UNINA-9910463833103321
Cambridge, Mass. : MIT Press, c2012
Printed material
Available at: Univ. Federico II
Opac: Check availability here
Optimization for machine learning / edited by Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright
Publication/distribution Cambridge, Mass. : MIT Press, [2012]
Physical description 1 online resource (509 p.)
Discipline 006.3/1
Other authors (persons) Sra, Suvrit <1976->
Nowozin, Sebastian <1980->
Wright, Stephen J. <1960->
Series Neural information processing series
Topical subject Machine learning - Mathematical models
Mathematical optimization
Uncontrolled subject COMPUTER SCIENCE/Machine Learning & Neural Networks
COMPUTER SCIENCE/Artificial Intelligence
ISBN 0-262-29789-2
1-283-30284-5
9786613302847
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Contents; Series Foreword; Preface
Chapter 1. Introduction: Optimization and Machine Learning; 1.1 Support Vector Machines; 1.2 Regularized Optimization; 1.3 Summary of the Chapters; 1.4 References
Chapter 2. Convex Optimization with Sparsity-Inducing Norms; 2.1 Introduction; 2.2 Generic Methods; 2.3 Proximal Methods; 2.4 (Block) Coordinate Descent Algorithms; 2.5 Reweighted-ℓ2 Algorithms; 2.6 Working-Set Methods; 2.7 Quantitative Evaluation; 2.8 Extensions; 2.9 Conclusion; 2.10 References
Chapter 3. Interior-Point Methods for Large-Scale Cone Programming; 3.1 Introduction; 3.2 Primal-Dual Interior-Point Methods; 3.3 Linear and Quadratic Programming; 3.4 Second-Order Cone Programming; 3.5 Semidefinite Programming; 3.6 Conclusion; 3.7 References
Chapter 4. Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey; 4.1 Introduction; 4.2 Incremental Subgradient-Proximal Methods; 4.3 Convergence for Methods with Cyclic Order; 4.4 Convergence for Methods with Randomized Order; 4.5 Some Applications; 4.6 Conclusions; 4.7 References
Chapter 5. First-Order Methods for Nonsmooth Convex Large-Scale Optimization, I: General Purpose Methods; 5.1 Introduction; 5.2 Mirror Descent Algorithm: Minimizing over a Simple Set; 5.3 Problems with Functional Constraints; 5.4 Minimizing Strongly Convex Functions; 5.5 Mirror Descent Stochastic Approximation; 5.6 Mirror Descent for Convex-Concave Saddle-Point Problems; 5.7 Setting up a Mirror Descent Method; 5.8 Notes and Remarks; 5.9 References
Chapter 6. First-Order Methods for Nonsmooth Convex Large-Scale Optimization, II: Utilizing Problem's Structure; 6.1 Introduction; 6.2 Saddle-Point Reformulations of Convex Minimization Problems; 6.3 Mirror-Prox Algorithm; 6.4 Accelerating the Mirror-Prox Algorithm; 6.5 Accelerating First-Order Methods by Randomization; 6.6 Notes and Remarks; 6.7 References
Chapter 7. Cutting-Plane Methods in Machine Learning; 7.1 Introduction to Cutting-plane Methods; 7.2 Regularized Risk Minimization; 7.3 Multiple Kernel Learning; 7.4 MAP Inference in Graphical Models; 7.5 References
Chapter 8. Introduction to Dual Decomposition for Inference; 8.1 Introduction; 8.2 Motivating Applications; 8.3 Dual Decomposition and Lagrangian Relaxation; 8.4 Subgradient Algorithms; 8.5 Block Coordinate Descent Algorithms; 8.6 Relations to Linear Programming Relaxations; 8.7 Decoding: Finding the MAP Assignment; 8.8 Discussion; Appendix: Technical Details; 8.10 References
Chapter 9. Augmented Lagrangian Methods for Learning, Selecting, and Combining Features; 9.1 Introduction; 9.2 Background; 9.3 Proximal Minimization Algorithm; 9.4 Dual Augmented Lagrangian (DAL) Algorithm; 9.5 Connections; 9.6 Application; 9.7 Summary; Acknowledgment; Appendix: Mathematical Details; 9.9 References
Chapter 10. The Convex Optimization Approach to Regret Minimization; 10.1 Introduction; 10.2 The RFTL Algorithm and Its Analysis; 10.3 The "Primal-Dual" Approach
Record no. UNINA-9910788563103321
Cambridge, Mass. : MIT Press, [2012]
Printed material
Available at: Univ. Federico II
Opac: Check availability here
Optimization for machine learning / edited by Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright
Edition [1st ed.]
Publication/distribution Cambridge, Mass. : MIT Press, c2012
Physical description 1 online resource (509 p.)
Discipline 006.3/1
Other authors (persons) Sra, Suvrit <1976->
Nowozin, Sebastian <1980->
Wright, Stephen J. <1960->
Series Neural information processing series
Topical subject Machine learning - Mathematical models
Mathematical optimization
ISBN 0-262-29789-2
1-283-30284-5
9786613302847
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Contents; Series Foreword; Preface
Chapter 1. Introduction: Optimization and Machine Learning; 1.1 Support Vector Machines; 1.2 Regularized Optimization; 1.3 Summary of the Chapters; 1.4 References
Chapter 2. Convex Optimization with Sparsity-Inducing Norms; 2.1 Introduction; 2.2 Generic Methods; 2.3 Proximal Methods; 2.4 (Block) Coordinate Descent Algorithms; 2.5 Reweighted-ℓ2 Algorithms; 2.6 Working-Set Methods; 2.7 Quantitative Evaluation; 2.8 Extensions; 2.9 Conclusion; 2.10 References
Chapter 3. Interior-Point Methods for Large-Scale Cone Programming; 3.1 Introduction; 3.2 Primal-Dual Interior-Point Methods; 3.3 Linear and Quadratic Programming; 3.4 Second-Order Cone Programming; 3.5 Semidefinite Programming; 3.6 Conclusion; 3.7 References
Chapter 4. Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey; 4.1 Introduction; 4.2 Incremental Subgradient-Proximal Methods; 4.3 Convergence for Methods with Cyclic Order; 4.4 Convergence for Methods with Randomized Order; 4.5 Some Applications; 4.6 Conclusions; 4.7 References
Chapter 5. First-Order Methods for Nonsmooth Convex Large-Scale Optimization, I: General Purpose Methods; 5.1 Introduction; 5.2 Mirror Descent Algorithm: Minimizing over a Simple Set; 5.3 Problems with Functional Constraints; 5.4 Minimizing Strongly Convex Functions; 5.5 Mirror Descent Stochastic Approximation; 5.6 Mirror Descent for Convex-Concave Saddle-Point Problems; 5.7 Setting up a Mirror Descent Method; 5.8 Notes and Remarks; 5.9 References
Chapter 6. First-Order Methods for Nonsmooth Convex Large-Scale Optimization, II: Utilizing Problem's Structure; 6.1 Introduction; 6.2 Saddle-Point Reformulations of Convex Minimization Problems; 6.3 Mirror-Prox Algorithm; 6.4 Accelerating the Mirror-Prox Algorithm; 6.5 Accelerating First-Order Methods by Randomization; 6.6 Notes and Remarks; 6.7 References
Chapter 7. Cutting-Plane Methods in Machine Learning; 7.1 Introduction to Cutting-plane Methods; 7.2 Regularized Risk Minimization; 7.3 Multiple Kernel Learning; 7.4 MAP Inference in Graphical Models; 7.5 References
Chapter 8. Introduction to Dual Decomposition for Inference; 8.1 Introduction; 8.2 Motivating Applications; 8.3 Dual Decomposition and Lagrangian Relaxation; 8.4 Subgradient Algorithms; 8.5 Block Coordinate Descent Algorithms; 8.6 Relations to Linear Programming Relaxations; 8.7 Decoding: Finding the MAP Assignment; 8.8 Discussion; Appendix: Technical Details; 8.10 References
Chapter 9. Augmented Lagrangian Methods for Learning, Selecting, and Combining Features; 9.1 Introduction; 9.2 Background; 9.3 Proximal Minimization Algorithm; 9.4 Dual Augmented Lagrangian (DAL) Algorithm; 9.5 Connections; 9.6 Application; 9.7 Summary; Acknowledgment; Appendix: Mathematical Details; 9.9 References
Chapter 10. The Convex Optimization Approach to Regret Minimization; 10.1 Introduction; 10.2 The RFTL Algorithm and Its Analysis; 10.3 The "Primal-Dual" Approach
Record no. UNINA-9910828395703321
Cambridge, Mass. : MIT Press, c2012
Printed material
Available at: Univ. Federico II
Opac: Check availability here