LEADER 01501nam--2200421---450-
001 990000638870203316
005 20051110085432.0
035 $a0063887
035 $aUSA010063887
035 $a(ALEPH)000063887USA01
100 $a20010925d1991----km-y0itay0103----ba
101 $aita
102 $aIT
105 $a||||||||001yy
200 1 $a<La> favolistica latina in distici elegiaci$eatti del convegno internazionale$eAssisi, 26-28 ottobre 1990$fa cura di Giuseppe Catanzaro e Francesco Santucci
210 $aAssisi$c[s.n.]$d1991
215 $a235 p.$d24 cm
225 2 $aCentro Studi Poesia Latina in distici elegiaci$v2
300 $aIn testa al front.: Accademia Properziana del Subasio. Centro Studi Poesia Latina in distici elegiaci, Assisi
410 0$12001$aCentro Studi Poesia Latina in distici elegiaci
606 $aCongressi$yAssisi$z1990
676 $a874.01
702 1$aCATANZARO,$bGiuseppe
702 1$aSANTUCCI,$bFrancesco
801 0$aIT$bsalbc$gISBD
912 $a990000638870203316
951 $aV.3. Coll.14/ 4(VIII C COLL. 184 2)$b159306 L.M.$cVIII C COLL.$d00076215
959 $aBK
969 $auma
979 $aCHIARA$b40$c20010925$lUSA01$h1230
979 $aCHIARA$b40$c20010925$lUSA01$h1235
979 $c20020403$lUSA01$h1713
979 $aPATRY$b90$c20040406$lUSA01$h1644
979 $aCOPAT3$b90$c20051110$lUSA01$h0854
996 $aFavolistica latina in distici elegiaci$9957974
997 $aUNISA

LEADER 01165nam a2200265 i 4500
001 991002419249707536
005 20020503165048.0
008 980220s1996 it ||| | ita
020 $a8885363253
035 $ab10362010-39ule_inst
035 $aEXGIL103581$9ExL
040 $aBiblioteca Interfacoltà$bita
100 1 $aMassolo, Arturo$0160070
245 10$aDella propedeutica filosofica e altre pagine sparse /$cArturo Massolo ; seguite da Io, don Giovanni :$brecital a cura di Arturo Massolo, Giuseppe Paioni e Nicola Ciarletta
260 $aUrbino :$bMontefeltro,$c[1996]
300 $ax, 203 p. :$b1 ritr. ;$c24 cm.
500 $aIn parte già pubbl. - In cop.: Università degli studi di Urbino. - Tit. sul dorso: Della propedeutica filosofica.
700 1 $aPaioni, Giuseppe
700 1 $aCiarletta, Nicola
907 $a.b10362010$b02-04-14$c27-06-02
912 $a991002419249707536
945 $aLE002 Fil. XII N 3$g1$i2002000540713$lle002$o-$pE0.00$q-$rl$s- $t0$u0$v0$w0$x0$y.i10423345$z27-06-02
996 $aDella propedeutica filosofica e altre pagine sparse$9203841
997 $aUNISALENTO
998 $ale002$b01-01-98$cm$da $e-$fita$git $h0$i1

LEADER 07333oam 2200793Ia 450
001 9910788563103321
005 20190503073359.0
010 $a0-262-29789-2
010 $a1-283-30284-5
010 $a9786613302847
024 8 $a9786613302847
024 8 $a99952600993
035 $a(CKB)3330000000000106
035 $a(EBL)3339310
035 $a(SSID)ssj0000539758
035 $a(PQKBManifestationID)11339775
035 $a(PQKBTitleCode)TC0000539758
035 $a(PQKBWorkID)10580990
035 $a(PQKB)10937649
035 $a(MiAaPQ)EBC3339310
035 $a(OCoLC)758384972$z(OCoLC)766417417$z(OCoLC)778616598$z(OCoLC)816867514$z(OCoLC)961619660$z(OCoLC)962692133$z(OCoLC)966255483$z(OCoLC)988408134$z(OCoLC)992030970$z(OCoLC)995029221$z(OCoLC)1037932629$z(OCoLC)1038696068$z(OCoLC)1045523529$z(OCoLC)1058175671$z(OCoLC)1062970938$z(OCoLC)1066406052$z(OCoLC)1081226157
035 $a(OCoLC-P)758384972
035 $a(MaCbMITP)8996
035 $a(Au-PeEL)EBL3339310
035 $a(CaPaEBR)ebr10504740
035 $a(CaONFJC)MIL330284
035 $a(OCoLC)758384972
035 $z(PPN)170244695
035 $a(PPN)158467647
035 $a(EXLCZ)993330000000000106
100 $a20111024h20122012 uy 0
101 0 $aeng
135 $aur|n|---|||||
181 $ctxt
182 $cc
183 $acr
200 00$aOptimization for machine learning /$fedited by Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright
210 1$aCambridge, Mass. :$cMIT Press,$d[2012]
210 4$d©2012
215 $a1 online resource (509 p.)
225 1 $aNeural information processing series
300 $aDescription based upon print version of record.
311 $a0-262-01646-X
311 $a0-262-29877-5
320 $aIncludes bibliographical references.
327 $aContents; Series Foreword; Preface; Chapter 1. Introduction: Optimization and Machine Learning; 1.1 Support Vector Machines; 1.2 Regularized Optimization; 1.3 Summary of the Chapters; 1.4 References; Chapter 2. Convex Optimization with Sparsity-Inducing Norms; 2.1 Introduction; 2.2 Generic Methods; 2.3 Proximal Methods; 2.4 (Block) Coordinate Descent Algorithms; 2.5 Reweighted-ℓ2 Algorithms; 2.6 Working-Set Methods; 2.7 Quantitative Evaluation; 2.8 Extensions; 2.9 Conclusion; 2.10 References; Chapter 3. Interior-Point Methods for Large-Scale Cone Programming; 3.1 Introduction
327 $a3.2 Primal-Dual Interior-Point Methods; 3.3 Linear and Quadratic Programming; 3.4 Second-Order Cone Programming; 3.5 Semidefinite Programming; 3.6 Conclusion; 3.7 References; Chapter 4. Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey; 4.1 Introduction; 4.2 Incremental Subgradient-Proximal Methods; 4.3 Convergence for Methods with Cyclic Order; 4.4 Convergence for Methods with Randomized Order; 4.5 Some Applications; 4.6 Conclusions; 4.7 References; Chapter 5. First-Order Methods for Nonsmooth Convex Large-Scale Optimization, I: General Purpose Methods
327 $a5.1 Introduction; 5.2 Mirror Descent Algorithm: Minimizing over a Simple Set; 5.3 Problems with Functional Constraints; 5.4 Minimizing Strongly Convex Functions; 5.5 Mirror Descent Stochastic Approximation; 5.6 Mirror Descent for Convex-Concave Saddle-Point Problems; 5.7 Setting up a Mirror Descent Method; 5.8 Notes and Remarks; 5.9 References; Chapter 6. First-Order Methods for Nonsmooth Convex Large-Scale Optimization, II: Utilizing Problem's Structure; 6.1 Introduction; 6.2 Saddle-Point Reformulations of Convex Minimization Problems; 6.3 Mirror-Prox Algorithm
327 $a6.4 Accelerating the Mirror-Prox Algorithm; 6.5 Accelerating First-Order Methods by Randomization; 6.6 Notes and Remarks; 6.7 References; Chapter 7. Cutting-Plane Methods in Machine Learning; 7.1 Introduction to Cutting-plane Methods; 7.2 Regularized Risk Minimization; 7.3 Multiple Kernel Learning; 7.4 MAP Inference in Graphical Models; 7.5 References; Chapter 8. Introduction to Dual Decomposition for Inference; 8.1 Introduction; 8.2 Motivating Applications; 8.3 Dual Decomposition and Lagrangian Relaxation; 8.4 Subgradient Algorithms; 8.5 Block Coordinate Descent Algorithms; 8.6 Relations to Linear Programming Relaxations
327 $a8.7 Decoding: Finding the MAP Assignment; 8.8 Discussion; Appendix: Technical Details; 8.10 References; Chapter 9. Augmented Lagrangian Methods for Learning, Selecting, and Combining Features; 9.1 Introduction; 9.2 Background; 9.3 Proximal Minimization Algorithm; 9.4 Dual Augmented Lagrangian (DAL) Algorithm; 9.5 Connections; 9.6 Application; 9.7 Summary; Acknowledgment; Appendix: Mathematical Details; 9.9 References; Chapter 10. The Convex Optimization Approach to Regret Minimization; 10.1 Introduction; 10.2 The RFTL Algorithm and Its Analysis
327 $a10.3 The "Primal-Dual" Approach
330 $aAn up-to-date account of the interplay between optimization and machine learning, accessible to students and researchers in both communities. The interplay between optimization and machine learning is one of the most important developments in modern computational science. Optimization formulations and methods are proving to be vital in designing algorithms to extract essential knowledge from huge volumes of data. Machine learning, however, is not simply a consumer of optimization technology but a rapidly evolving field that is itself generating new optimization ideas. This book captures the state of the art of the interaction between optimization and machine learning in a way that is accessible to researchers in both fields. Optimization approaches have enjoyed prominence in machine learning because of their wide applicability and attractive theoretical properties. The increasing complexity, size, and variety of today's machine learning models call for the reassessment of existing assumptions. This book starts the process of reassessment. It describes the resurgence in novel contexts of established frameworks such as first-order methods, stochastic approximations, convex relaxations, interior-point methods, and proximal methods. It also devotes attention to newer themes such as regularized optimization, robust optimization, gradient and subgradient methods, splitting techniques, and second-order methods. Many of these techniques draw inspiration from other fields, including operations research, theoretical computer science, and subfields of optimization. The book will enrich the ongoing cross-fertilization between the machine learning community and these other fields, and within the broader optimization community.
410 0$aNeural information processing series.
606 $aMachine learning$xMathematical models
606 $aMathematical optimization
610 $aCOMPUTER SCIENCE/Machine Learning & Neural Networks
610 $aCOMPUTER SCIENCE/Artificial Intelligence
615 0$aMachine learning$xMathematical models.
615 0$aMathematical optimization.
676 $a006.3/1
701 $aSra$b Suvrit$f1976-$01464328
701 $aNowozin$b Sebastian$f1980-$01464329
701 $aWright$b Stephen J.$f1960-$055245
801 0$bOCoLC-P
801 1$bOCoLC-P
906 $aBOOK
912 $a9910788563103321
996 $aOptimization for machine learning$93673926
997 $aUNINA