Digital filter design and realization / Takao Hinamoto, Wu-Sheng Lu
Author: Hinamoto Takao
Edition: [1st ed.]
Publication/distribution/printing: Gistrup, Denmark ; Delft, The Netherlands : River Publishers, 2017
Physical description: 1 online resource (484 pages) : illustrations, tables
Discipline: 621.3815324
Series: River Publishers Series in Signal, Image and Speech Processing
Topical subject: Electric filters, Digital - Design and construction
ISBN: 1-000-79129-7
1-003-33790-2
1-000-79441-5
87-93519-34-6
Format: Printed material
Bibliographic level: Monograph
Language of publication: eng
Record no.: UNINA-9910793083103321
Available at: Univ. Federico II
Digital filter design and realization / Takao Hinamoto, Wu-Sheng Lu
Author: Hinamoto Takao
Edition: [1st ed.]
Publication/distribution/printing: Gistrup, Denmark ; Delft, The Netherlands : River Publishers, 2017
Physical description: 1 online resource (484 pages) : illustrations, tables
Discipline: 621.3815324
Series: River Publishers Series in Signal, Image and Speech Processing
Topical subject: Electric filters, Digital - Design and construction
ISBN: 1-000-79129-7
1-003-33790-2
1-000-79441-5
87-93519-34-6
Format: Printed material
Bibliographic level: Monograph
Language of publication: eng
Record no.: UNINA-9910817355403321
Available at: Univ. Federico II
Practical optimization : algorithms and engineering applications / Andreas Antoniou, Wu-Sheng Lu
Author: Antoniou Andreas <1938->
Edition: [Second edition.]
Publication/distribution/printing: New York, New York : Springer, [2021]
Physical description: 1 online resource (737 pages)
Discipline: 620.00151
Series: Texts in computer science
Topical subject: Engineering mathematics
ISBN: 1-0716-0843-6
Format: Printed material
Bibliographic level: Monograph
Language of publication: eng
Contents note: Intro -- Preface to the Second Edition -- Preface to the First Edition -- Contents -- About the Authors -- Abbreviations -- 1 The Optimization Problem -- 1.1 Introduction -- 1.2 The Basic Optimization Problem -- 1.3 General Structure of Optimization Algorithms -- 1.4 Constraints -- 1.5 The Feasible Region -- 1.6 Branches of Mathematical Programming -- 1.6.1 Linear Programming -- 1.6.2 Integer Programming -- 1.6.3 Quadratic Programming -- 1.6.4 Nonlinear Programming -- 1.6.5 Dynamic Programming -- 2 Basic Principles -- 2.1 Introduction -- 2.2 Gradient Information -- 2.3 The Taylor Series -- 2.4 Types of Extrema -- 2.5 Necessary and Sufficient Conditions For Local Minima and Maxima -- 2.5.1 First-Order Necessary Conditions -- 2.5.2 Second-Order Necessary Conditions -- 2.6 Classification of Stationary Points -- 2.7 Convex and Concave Functions -- 2.8 Optimization of Convex Functions -- 3 General Properties of Algorithms -- 3.1 Introduction -- 3.2 An Algorithm as a Point-to-Point Mapping -- 3.3 An Algorithm as a Point-to-Set Mapping -- 3.4 Closed Algorithms -- 3.5 Descent Functions -- 3.6 Global Convergence -- 3.7 Rates of Convergence -- 4 One-Dimensional Optimization -- 4.1 Introduction -- 4.2 Dichotomous Search -- 4.3 Fibonacci Search -- 4.4 Golden-Section Search -- 4.5 Quadratic Interpolation Method -- 4.5.1 Two-Point Interpolation -- 4.6 Cubic Interpolation -- 4.7 Algorithm of Davies, Swann, and Campey -- 4.8 Inexact Line Searches -- 5 Basic Multidimensional Gradient Methods -- 5.1 Introduction -- 5.2 Steepest-Descent Method -- 5.2.1 Ascent and Descent Directions -- 5.2.2 Basic Method -- 5.2.3 Orthogonality of Directions -- 5.2.4 Step-Size Estimation for Steepest-Descent Method -- 5.2.5 Step-Size Estimation Using the Barzilai-Borwein Two-Point Formulas -- 5.2.6 Convergence -- 5.2.7 Scaling -- 5.3 Newton Method.
5.3.1 Modification of the Hessian -- 5.3.2 Computation of the Hessian -- 5.3.3 Newton Decrement -- 5.3.4 Backtracking Line Search -- 5.3.5 Independence of Linear Changes in Variables -- 5.4 Gauss-Newton Method -- 6 Conjugate-Direction Methods -- 6.1 Introduction -- 6.2 Conjugate Directions -- 6.3 Basic Conjugate-Directions Method -- 6.4 Conjugate-Gradient Method -- 6.5 Minimization of Nonquadratic Functions -- 6.6 Fletcher-Reeves Method -- 6.7 Powell's Method -- 6.8 Partan Method -- 6.9 Solution of Systems of Linear Equations -- 7 Quasi-Newton Methods -- 7.1 Introduction -- 7.2 The Basic Quasi-Newton Approach -- 7.3 Generation of Matrix Sk -- 7.4 Rank-One Method -- 7.5 Davidon-Fletcher-Powell Method -- 7.5.1 Alternative Form of DFP Formula -- 7.6 Broyden-Fletcher-Goldfarb-Shanno Method -- 7.7 Hoshino Method -- 7.8 The Broyden Family -- 7.8.1 Fletcher Switch Method -- 7.9 The Huang Family -- 7.10 Practical Quasi-Newton Algorithm -- 8 Minimax Methods -- 8.1 Introduction -- 8.2 Problem Formulation -- 8.3 Minimax Algorithms -- 8.4 Improved Minimax Algorithms -- 9 Applications of Unconstrained Optimization -- 9.1 Introduction -- 9.2 Classification of Handwritten Digits -- 9.2.1 Handwritten-Digit Recognition Problem -- 9.2.2 Histogram of Oriented Gradients -- 9.2.3 Softmax Regression for Use in Multiclass Classification -- 9.2.4 Use of Softmax Regression for the Classification of Handwritten Digits -- 9.3 Inverse Kinematics for Robotic Manipulators -- 9.3.1 Position and Orientation of a Manipulator -- 9.3.2 Inverse Kinematics Problem -- 9.3.3 Solution of Inverse Kinematics Problem -- 9.4 Design of Digital Filters -- 9.4.1 Weighted Least-Squares Design of FIR Filters -- 9.4.2 Minimax Design of FIR Filters -- 9.5 Source Localization -- 9.5.1 Source Localization Based on Range Measurements -- 9.5.2 Source Localization Based on Range-Difference Measurements.
10 Fundamentals of Constrained Optimization -- 10.1 Introduction -- 10.2 Constraints -- 10.2.1 Notation and Basic Assumptions -- 10.2.2 Equality Constraints -- 10.2.3 Inequality Constraints -- 10.3 Classification of Constrained Optimization Problems -- 10.3.1 Linear Programming -- 10.3.2 Quadratic Programming -- 10.3.3 Convex Programming -- 10.3.4 General Constrained Optimization Problem -- 10.4 Simple Transformation Methods -- 10.4.1 Variable Elimination -- 10.4.2 Variable Transformations -- 10.5 Lagrange Multipliers -- 10.5.1 Equality Constraints -- 10.5.2 Tangent Plane and Normal Plane -- 10.5.3 Geometrical Interpretation -- 10.6 First-Order Necessary Conditions -- 10.6.1 Equality Constraints -- 10.6.2 Inequality Constraints -- 10.7 Second-Order Conditions -- 10.7.1 Second-Order Necessary Conditions -- 10.7.2 Second-Order Sufficient Conditions -- 10.8 Convexity -- 10.9 Duality -- 11 Linear Programming Part I: The Simplex Method -- 11.1 Introduction -- 11.2 General Properties -- 11.2.1 Formulation of LP Problems -- 11.2.2 Optimality Conditions -- 11.2.3 Geometry of an LP Problem -- 11.2.4 Vertex Minimizers -- 11.3 Simplex Method -- 11.3.1 Simplex Method for Alternative-Form LP Problem -- 11.3.2 Simplex Method for Standard-Form LP Problems -- 11.3.3 Tabular Form of the Simplex Method -- 11.3.4 Computational Complexity -- 12 Linear Programming Part II: Interior-Point Methods -- 12.1 Introduction -- 12.2 Primal-Dual Solutions and Central Path -- 12.2.1 Primal-Dual Solutions -- 12.2.2 Central Path -- 12.3 Primal Affine Scaling Method -- 12.4 Primal Newton Barrier Method -- 12.4.1 Basic Idea -- 12.4.2 Minimizers of Subproblem -- 12.4.3 A Convergence Issue -- 12.4.4 Computing a Minimizer of the Problem in Eqs. (12.26a) and (12.26b) -- 12.5 Primal-Dual Interior-Point Methods -- 12.5.1 Primal-Dual Path-Following Method.
12.5.2 A Nonfeasible-Initialization Primal-Dual Path-Following Method -- 12.5.3 Predictor-Corrector Method -- 13 Quadratic, Semidefinite, and Second-Order Cone Programming -- 13.1 Introduction -- 13.2 Convex QP Problems with Equality Constraints -- 13.3 Active-Set Methods for Strictly Convex QP Problems -- 13.3.1 Primal Active-Set Method -- 13.3.2 Dual Active-Set Method -- 13.4 Interior-Point Methods for Convex QP Problems -- 13.4.1 Dual QP Problem, Duality Gap, and Central Path -- 13.4.2 A Primal-Dual Path-Following Method for Convex QP Problems -- 13.4.3 Nonfeasible-initialization Primal-Dual Path-Following Method for Convex QP Problems -- 13.4.4 Linear Complementarity Problems -- 13.5 Primal and Dual SDP Problems -- 13.5.1 Notation and Definitions -- 13.5.2 Examples -- 13.6 Basic Properties of SDP Problems -- 13.6.1 Basic Assumptions -- 13.6.2 Karush-Kuhn-Tucker Conditions -- 13.6.3 Central Path -- 13.6.4 Centering Condition -- 13.7 Primal-Dual Path-Following Method -- 13.7.1 Reformulation of Centering Condition -- 13.7.2 Symmetric Kronecker Product -- 13.7.3 Reformulation of Eqs. (13.79a)-(13.79c) -- 13.7.4 Primal-Dual Path-Following Algorithm -- 13.8 Predictor-Corrector Method -- 13.9 Second-Order Cone Programming -- 13.9.1 Notation and Definitions -- 13.9.2 Relations Among LP, QP, SDP, and SOCP Problems -- 13.9.3 Examples -- 13.10 A Primal-Dual Method for SOCP Problems -- 13.10.1 Assumptions and KKT Conditions -- 13.10.2 A Primal-Dual Interior-Point Algorithm -- 14 Algorithms for General Convex Problems -- 14.1 Introduction -- 14.2 Concepts and Properties of Convex Functions -- 14.2.1 Subgradient -- 14.2.2 Convex Functions with Lipschitz-Continuous Gradients -- 14.2.3 Strongly Convex Functions -- 14.2.4 Conjugate Functions -- 14.2.5 Proximal Operators -- 14.3 Extension of Newton Method to Convex Constrained and Unconstrained Problems.
14.3.1 Minimization of Smooth Convex Functions Without Constraints -- 14.3.2 Minimization of Smooth Convex Functions Subject to Equality Constraints -- 14.3.3 Newton Algorithm for Problem in Eq. (14.34) with a Nonfeasible x0 -- 14.3.4 A Newton Barrier Method for General Convex Programming Problems -- 14.4 Minimization of Composite Convex Functions -- 14.4.1 Proximal-Point Algorithm -- 14.4.2 Fast Algorithm For Solving the Problem in Eq. (14.56) -- 14.5 Alternating Direction Methods -- 14.5.1 Alternating Direction Method of Multipliers -- 14.5.2 Application of ADMM to General Constrained Convex Problem -- 14.5.3 Alternating Minimization Algorithm (AMA) -- 15 Algorithms for General Nonconvex Problems -- 15.1 Introduction -- 15.2 Sequential Convex Programming -- 15.2.1 Principle of SCP -- 15.2.2 Convex Approximations for f(x) and cj(x) and Affine Approximation of ai(x) -- 15.2.3 Exact Penalty Formulation -- 15.2.4 Alternating Convex Optimization -- 15.3 Sequential Quadratic Programming -- 15.3.1 Basic SQP Algorithm -- 15.3.2 Positive Definite Approximation of Hessian -- 15.3.3 Robustness and Solvability of QP Subproblem of Eqs. (15.16a)-(15.16c) -- 15.3.4 Practical SQP Algorithm for the Problem of Eq. (15.1) -- 15.4 Convex-Concave Procedure -- 15.4.1 Basic Convex-Concave Procedure -- 15.4.2 Penalty Convex-Concave Procedure -- 15.5 ADMM Heuristic Technique for Nonconvex Problems -- 16 Applications of Constrained Optimization -- 16.1 Introduction -- 16.2 Design of Digital Filters -- 16.2.1 Design of Linear-Phase FIR Filters Using QP -- 16.2.2 Minimax Design of FIR Digital Filters Using SDP -- 16.2.3 Minimax Design of IIR Digital Filters Using SDP -- 16.2.4 Minimax Design of FIR and IIR Digital Filters Using SOCP -- 16.2.5 Minimax Design of IIR Digital Filters Satisfying Multiple Specifications -- 16.3 Model Predictive Control of Dynamic Systems.
16.3.1 Polytopic Model for Uncertain Dynamic Systems.
Record no.: UNINA-9910506405003321
Available at: Univ. Federico II
Practical optimization : algorithms and engineering applications / Andreas Antoniou, Wu-Sheng Lu
Author: Antoniou Andreas <1938->
Edition: [Second edition.]
Publication/distribution/printing: New York, New York : Springer, [2021]
Physical description: 1 online resource (737 pages)
Discipline: 620.00151
Series: Texts in computer science
Topical subject: Engineering mathematics
ISBN: 1-0716-0843-6
Format: Printed material
Bibliographic level: Monograph
Language of publication: eng
Contents note: Intro -- Preface to the Second Edition -- Preface to the First Edition -- Contents -- About the Authors -- Abbreviations -- 1 The Optimization Problem -- 1.1 Introduction -- 1.2 The Basic Optimization Problem -- 1.3 General Structure of Optimization Algorithms -- 1.4 Constraints -- 1.5 The Feasible Region -- 1.6 Branches of Mathematical Programming -- 1.6.1 Linear Programming -- 1.6.2 Integer Programming -- 1.6.3 Quadratic Programming -- 1.6.4 Nonlinear Programming -- 1.6.5 Dynamic Programming -- 2 Basic Principles -- 2.1 Introduction -- 2.2 Gradient Information -- 2.3 The Taylor Series -- 2.4 Types of Extrema -- 2.5 Necessary and Sufficient Conditions For Local Minima and Maxima -- 2.5.1 First-Order Necessary Conditions -- 2.5.2 Second-Order Necessary Conditions -- 2.6 Classification of Stationary Points -- 2.7 Convex and Concave Functions -- 2.8 Optimization of Convex Functions -- 3 General Properties of Algorithms -- 3.1 Introduction -- 3.2 An Algorithm as a Point-to-Point Mapping -- 3.3 An Algorithm as a Point-to-Set Mapping -- 3.4 Closed Algorithms -- 3.5 Descent Functions -- 3.6 Global Convergence -- 3.7 Rates of Convergence -- 4 One-Dimensional Optimization -- 4.1 Introduction -- 4.2 Dichotomous Search -- 4.3 Fibonacci Search -- 4.4 Golden-Section Search -- 4.5 Quadratic Interpolation Method -- 4.5.1 Two-Point Interpolation -- 4.6 Cubic Interpolation -- 4.7 Algorithm of Davies, Swann, and Campey -- 4.8 Inexact Line Searches -- 5 Basic Multidimensional Gradient Methods -- 5.1 Introduction -- 5.2 Steepest-Descent Method -- 5.2.1 Ascent and Descent Directions -- 5.2.2 Basic Method -- 5.2.3 Orthogonality of Directions -- 5.2.4 Step-Size Estimation for Steepest-Descent Method -- 5.2.5 Step-Size Estimation Using the Barzilai-Borwein Two-Point Formulas -- 5.2.6 Convergence -- 5.2.7 Scaling -- 5.3 Newton Method.
5.3.1 Modification of the Hessian -- 5.3.2 Computation of the Hessian -- 5.3.3 Newton Decrement -- 5.3.4 Backtracking Line Search -- 5.3.5 Independence of Linear Changes in Variables -- 5.4 Gauss-Newton Method -- 6 Conjugate-Direction Methods -- 6.1 Introduction -- 6.2 Conjugate Directions -- 6.3 Basic Conjugate-Directions Method -- 6.4 Conjugate-Gradient Method -- 6.5 Minimization of Nonquadratic Functions -- 6.6 Fletcher-Reeves Method -- 6.7 Powell's Method -- 6.8 Partan Method -- 6.9 Solution of Systems of Linear Equations -- 7 Quasi-Newton Methods -- 7.1 Introduction -- 7.2 The Basic Quasi-Newton Approach -- 7.3 Generation of Matrix Sk -- 7.4 Rank-One Method -- 7.5 Davidon-Fletcher-Powell Method -- 7.5.1 Alternative Form of DFP Formula -- 7.6 Broyden-Fletcher-Goldfarb-Shanno Method -- 7.7 Hoshino Method -- 7.8 The Broyden Family -- 7.8.1 Fletcher Switch Method -- 7.9 The Huang Family -- 7.10 Practical Quasi-Newton Algorithm -- 8 Minimax Methods -- 8.1 Introduction -- 8.2 Problem Formulation -- 8.3 Minimax Algorithms -- 8.4 Improved Minimax Algorithms -- 9 Applications of Unconstrained Optimization -- 9.1 Introduction -- 9.2 Classification of Handwritten Digits -- 9.2.1 Handwritten-Digit Recognition Problem -- 9.2.2 Histogram of Oriented Gradients -- 9.2.3 Softmax Regression for Use in Multiclass Classification -- 9.2.4 Use of Softmax Regression for the Classification of Handwritten Digits -- 9.3 Inverse Kinematics for Robotic Manipulators -- 9.3.1 Position and Orientation of a Manipulator -- 9.3.2 Inverse Kinematics Problem -- 9.3.3 Solution of Inverse Kinematics Problem -- 9.4 Design of Digital Filters -- 9.4.1 Weighted Least-Squares Design of FIR Filters -- 9.4.2 Minimax Design of FIR Filters -- 9.5 Source Localization -- 9.5.1 Source Localization Based on Range Measurements -- 9.5.2 Source Localization Based on Range-Difference Measurements.
10 Fundamentals of Constrained Optimization -- 10.1 Introduction -- 10.2 Constraints -- 10.2.1 Notation and Basic Assumptions -- 10.2.2 Equality Constraints -- 10.2.3 Inequality Constraints -- 10.3 Classification of Constrained Optimization Problems -- 10.3.1 Linear Programming -- 10.3.2 Quadratic Programming -- 10.3.3 Convex Programming -- 10.3.4 General Constrained Optimization Problem -- 10.4 Simple Transformation Methods -- 10.4.1 Variable Elimination -- 10.4.2 Variable Transformations -- 10.5 Lagrange Multipliers -- 10.5.1 Equality Constraints -- 10.5.2 Tangent Plane and Normal Plane -- 10.5.3 Geometrical Interpretation -- 10.6 First-Order Necessary Conditions -- 10.6.1 Equality Constraints -- 10.6.2 Inequality Constraints -- 10.7 Second-Order Conditions -- 10.7.1 Second-Order Necessary Conditions -- 10.7.2 Second-Order Sufficient Conditions -- 10.8 Convexity -- 10.9 Duality -- 11 Linear Programming Part I: The Simplex Method -- 11.1 Introduction -- 11.2 General Properties -- 11.2.1 Formulation of LP Problems -- 11.2.2 Optimality Conditions -- 11.2.3 Geometry of an LP Problem -- 11.2.4 Vertex Minimizers -- 11.3 Simplex Method -- 11.3.1 Simplex Method for Alternative-Form LP Problem -- 11.3.2 Simplex Method for Standard-Form LP Problems -- 11.3.3 Tabular Form of the Simplex Method -- 11.3.4 Computational Complexity -- 12 Linear Programming Part II: Interior-Point Methods -- 12.1 Introduction -- 12.2 Primal-Dual Solutions and Central Path -- 12.2.1 Primal-Dual Solutions -- 12.2.2 Central Path -- 12.3 Primal Affine Scaling Method -- 12.4 Primal Newton Barrier Method -- 12.4.1 Basic Idea -- 12.4.2 Minimizers of Subproblem -- 12.4.3 A Convergence Issue -- 12.4.4 Computing a Minimizer of the Problem in Eqs. (12.26a) and (12.26b) -- 12.5 Primal-Dual Interior-Point Methods -- 12.5.1 Primal-Dual Path-Following Method.
12.5.2 A Nonfeasible-Initialization Primal-Dual Path-Following Method -- 12.5.3 Predictor-Corrector Method -- 13 Quadratic, Semidefinite, and Second-Order Cone Programming -- 13.1 Introduction -- 13.2 Convex QP Problems with Equality Constraints -- 13.3 Active-Set Methods for Strictly Convex QP Problems -- 13.3.1 Primal Active-Set Method -- 13.3.2 Dual Active-Set Method -- 13.4 Interior-Point Methods for Convex QP Problems -- 13.4.1 Dual QP Problem, Duality Gap, and Central Path -- 13.4.2 A Primal-Dual Path-Following Method for Convex QP Problems -- 13.4.3 Nonfeasible-initialization Primal-Dual Path-Following Method for Convex QP Problems -- 13.4.4 Linear Complementarity Problems -- 13.5 Primal and Dual SDP Problems -- 13.5.1 Notation and Definitions -- 13.5.2 Examples -- 13.6 Basic Properties of SDP Problems -- 13.6.1 Basic Assumptions -- 13.6.2 Karush-Kuhn-Tucker Conditions -- 13.6.3 Central Path -- 13.6.4 Centering Condition -- 13.7 Primal-Dual Path-Following Method -- 13.7.1 Reformulation of Centering Condition -- 13.7.2 Symmetric Kronecker Product -- 13.7.3 Reformulation of Eqs. (13.79a)-(13.79c) -- 13.7.4 Primal-Dual Path-Following Algorithm -- 13.8 Predictor-Corrector Method -- 13.9 Second-Order Cone Programming -- 13.9.1 Notation and Definitions -- 13.9.2 Relations Among LP, QP, SDP, and SOCP Problems -- 13.9.3 Examples -- 13.10 A Primal-Dual Method for SOCP Problems -- 13.10.1 Assumptions and KKT Conditions -- 13.10.2 A Primal-Dual Interior-Point Algorithm -- 14 Algorithms for General Convex Problems -- 14.1 Introduction -- 14.2 Concepts and Properties of Convex Functions -- 14.2.1 Subgradient -- 14.2.2 Convex Functions with Lipschitz-Continuous Gradients -- 14.2.3 Strongly Convex Functions -- 14.2.4 Conjugate Functions -- 14.2.5 Proximal Operators -- 14.3 Extension of Newton Method to Convex Constrained and Unconstrained Problems.
14.3.1 Minimization of Smooth Convex Functions Without Constraints -- 14.3.2 Minimization of Smooth Convex Functions Subject to Equality Constraints -- 14.3.3 Newton Algorithm for Problem in Eq. (14.34) with a Nonfeasible x0 -- 14.3.4 A Newton Barrier Method for General Convex Programming Problems -- 14.4 Minimization of Composite Convex Functions -- 14.4.1 Proximal-Point Algorithm -- 14.4.2 Fast Algorithm For Solving the Problem in Eq. (14.56) -- 14.5 Alternating Direction Methods -- 14.5.1 Alternating Direction Method of Multipliers -- 14.5.2 Application of ADMM to General Constrained Convex Problem -- 14.5.3 Alternating Minimization Algorithm (AMA) -- 15 Algorithms for General Nonconvex Problems -- 15.1 Introduction -- 15.2 Sequential Convex Programming -- 15.2.1 Principle of SCP -- 15.2.2 Convex Approximations for f(x) and cj(x) and Affine Approximation of ai(x) -- 15.2.3 Exact Penalty Formulation -- 15.2.4 Alternating Convex Optimization -- 15.3 Sequential Quadratic Programming -- 15.3.1 Basic SQP Algorithm -- 15.3.2 Positive Definite Approximation of Hessian -- 15.3.3 Robustness and Solvability of QP Subproblem of Eqs. (15.16a)-(15.16c) -- 15.3.4 Practical SQP Algorithm for the Problem of Eq. (15.1) -- 15.4 Convex-Concave Procedure -- 15.4.1 Basic Convex-Concave Procedure -- 15.4.2 Penalty Convex-Concave Procedure -- 15.5 ADMM Heuristic Technique for Nonconvex Problems -- 16 Applications of Constrained Optimization -- 16.1 Introduction -- 16.2 Design of Digital Filters -- 16.2.1 Design of Linear-Phase FIR Filters Using QP -- 16.2.2 Minimax Design of FIR Digital Filters Using SDP -- 16.2.3 Minimax Design of IIR Digital Filters Using SDP -- 16.2.4 Minimax Design of FIR and IIR Digital Filters Using SOCP -- 16.2.5 Minimax Design of IIR Digital Filters Satisfying Multiple Specifications -- 16.3 Model Predictive Control of Dynamic Systems.
16.3.1 Polytopic Model for Uncertain Dynamic Systems.
Record no.: UNISA-996464399603316
Available at: Univ. di Salerno