Optimization for machine learning / edited by Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright. Cambridge, Mass. : MIT Press, [2012], ©2012. 1 online resource (509 p.). (Neural information processing series)

ISBN: 0-262-29789-2 ; 1-283-30284-5 ; 9786613302847 ; 0-262-01646-X ; 0-262-29877-5

Notes: Description based upon print version of record. Includes bibliographical references.

Contents:
Series Foreword; Preface
Chapter 1. Introduction: Optimization and Machine Learning — 1.1 Support Vector Machines; 1.2 Regularized Optimization; 1.3 Summary of the Chapters; 1.4 References
Chapter 2. Convex Optimization with Sparsity-Inducing Norms — 2.1 Introduction; 2.2 Generic Methods; 2.3 Proximal Methods; 2.4 (Block) Coordinate Descent Algorithms; 2.5 Reweighted-ℓ2 Algorithms; 2.6 Working-Set Methods; 2.7 Quantitative Evaluation; 2.8 Extensions; 2.9 Conclusion; 2.10 References
Chapter 3. Interior-Point Methods for Large-Scale Cone Programming — 3.1 Introduction; 3.2 Primal-Dual Interior-Point Methods; 3.3 Linear and Quadratic Programming; 3.4 Second-Order Cone Programming; 3.5 Semidefinite Programming; 3.6 Conclusion; 3.7 References
Chapter 4. 
Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey — 4.1 Introduction; 4.2 Incremental Subgradient-Proximal Methods; 4.3 Convergence for Methods with Cyclic Order; 4.4 Convergence for Methods with Randomized Order; 4.5 Some Applications; 4.6 Conclusions; 4.7 References
Chapter 5. First-Order Methods for Nonsmooth Convex Large-Scale Optimization, I: General Purpose Methods — 5.1 Introduction; 5.2 Mirror Descent Algorithm: Minimizing over a Simple Set; 5.3 Problems with Functional Constraints; 5.4 Minimizing Strongly Convex Functions; 5.5 Mirror Descent Stochastic Approximation; 5.6 Mirror Descent for Convex-Concave Saddle-Point Problems; 5.7 Setting up a Mirror Descent Method; 5.8 Notes and Remarks; 5.9 References
Chapter 6. First-Order Methods for Nonsmooth Convex Large-Scale Optimization, II: Utilizing Problem's Structure — 6.1 Introduction; 6.2 Saddle-Point Reformulations of Convex Minimization Problems; 6.3 Mirror-Prox Algorithm; 6.4 Accelerating the Mirror-Prox Algorithm; 6.5 Accelerating First-Order Methods by Randomization; 6.6 Notes and Remarks; 6.7 References
Chapter 7. Cutting-Plane Methods in Machine Learning — 7.1 Introduction to Cutting-Plane Methods; 7.2 Regularized Risk Minimization; 7.3 Multiple Kernel Learning; 7.4 MAP Inference in Graphical Models; 7.5 References
Chapter 8. Introduction to Dual Decomposition for Inference — 8.1 Introduction; 8.2 Motivating Applications; 8.3 Dual Decomposition and Lagrangian Relaxation; 8.4 Subgradient Algorithms; 8.5 Block Coordinate Descent Algorithms; 8.6 Relations to Linear Programming Relaxations; 8.7 Decoding: Finding the MAP Assignment; 8.8 Discussion; Appendix: Technical Details; 8.10 References
Chapter 9. 
Augmented Lagrangian Methods for Learning, Selecting, and Combining Features — 9.1 Introduction; 9.2 Background; 9.3 Proximal Minimization Algorithm; 9.4 Dual Augmented Lagrangian (DAL) Algorithm; 9.5 Connections; 9.6 Application; 9.7 Summary; Acknowledgment; Appendix: Mathematical Details; 9.9 References
Chapter 10. The Convex Optimization Approach to Regret Minimization — 10.1 Introduction; 10.2 The RFTL Algorithm and Its Analysis; 10.3 The "Primal-Dual" Approach

Summary: An up-to-date account of the interplay between optimization and machine learning, accessible to students and researchers in both communities.

The interplay between optimization and machine learning is one of the most important developments in modern computational science. Optimization formulations and methods are proving to be vital in designing algorithms to extract essential knowledge from huge volumes of data. Machine learning, however, is not simply a consumer of optimization technology but a rapidly evolving field that is itself generating new optimization ideas. This book captures the state of the art of the interaction between optimization and machine learning in a way that is accessible to researchers in both fields.

Optimization approaches have enjoyed prominence in machine learning because of their wide applicability and attractive theoretical properties. The increasing complexity, size, and variety of today's machine learning models call for a reassessment of existing assumptions. This book begins that reassessment. It describes the resurgence, in novel contexts, of established frameworks such as first-order methods, stochastic approximations, convex relaxations, interior-point methods, and proximal methods. It also devotes attention to newer themes such as regularized optimization, robust optimization, gradient and subgradient methods, splitting techniques, and second-order methods. 
Many of these techniques draw inspiration from other fields, including operations research, theoretical computer science, and subfields of optimization. The book will enrich the ongoing cross-fertilization between the machine learning community and these other fields, and within the broader optimization community.

Series: Neural information processing series.
Subjects: Machine learning -- Mathematical models; Mathematical optimization; COMPUTER SCIENCE / Machine Learning & Neural Networks; COMPUTER SCIENCE / Artificial Intelligence
Dewey class: 006.3/1
Editors: Sra, Suvrit, 1976- ; Nowozin, Sebastian, 1980- ; Wright, Stephen J., 1960-