Adaptive digital filters / Branko Kovacevic, Zoran Banjac, Milan Milosavljevic
| Author | Kovacevic Branko |
| Edition | [1st ed. 2013.] |
| Publication | Heidelberg ; New York : Springer, 2013 |
| Physical description | 1 online resource (xiv, 211 pages) : illustrations (some color) |
| Discipline | 621.3815324 |
| Other authors (Persons) | Banjac Zoran ; Milosavljevic Milan |
| Series | Gale eBooks |
| Topical subject | Adaptive filters ; Adaptive signal processing |
| ISBN | 3-642-33561-6 |
| Classification | 621.3 ; ZN 5760 |
| Format | Printed material |
| Bibliographic level | Monograph |
| Language of publication | eng |
| Contents note | Adaptive filtering -- Finite impulse response adaptive filters with variable forgetting factor -- Finite impulse response adaptive filters with increased convergence speed -- Robustification of finite impulse response adaptive filters -- Application of adaptive digital filters for echo cancellation in telecommunication networks. (A brief illustrative sketch follows this record.) |
| Record no. | UNINA-9910437891803321 |
| Available at | Univ. Federico II |
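The contents note above lists FIR adaptive filters with a variable forgetting factor and their application to echo cancellation in telecommunication networks. For orientation only, here is a minimal sketch of a generic exponentially weighted RLS update for an FIR filter; it is not the book's algorithm, the forgetting factor `lam` is held constant (a variable-forgetting-factor scheme would adapt it from the error statistics at each step), and the filter length, initialization constant, and echo-path taps are assumptions made up for the example.

```python
import numpy as np

def rls_fir(x, d, M=4, lam=0.98, delta=100.0):
    """Exponentially weighted RLS for an M-tap FIR filter.

    x : input signal, d : desired signal, lam : forgetting factor.
    lam is held constant here; a variable-forgetting-factor scheme
    would recompute it from the error statistics at every step."""
    w = np.zeros(M)                 # filter weights
    P = delta * np.eye(M)           # inverse-correlation-matrix estimate
    e = np.zeros(len(x))
    for n in range(M - 1, len(x)):
        u = x[n - M + 1:n + 1][::-1]            # regressor [x(n), ..., x(n-M+1)]
        k = P @ u / (lam + u @ P @ u)           # gain vector
        e[n] = d[n] - w @ u                     # a priori error
        w = w + k * e[n]                        # weight update
        P = (P - np.outer(k, u @ P)) / lam      # Riccati-type update of P
    return w, e

# Toy usage: identify a hypothetical 4-tap echo path from noisy observations.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
h = np.array([0.5, -0.3, 0.2, 0.1])             # assumed echo-path taps
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = rls_fir(x, d, M=4)
print(np.round(w, 3))                           # close to h
```

The `delta * np.eye(M)` start for the inverse correlation matrix is the usual soft initialization; a larger `delta` corresponds to weaker initial regularization.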
Adaptive filter theory / Simon Haykin ; international edition contributions by Telagarapu Prabhakar
| Author | Haykin Simon S. <1931-> |
| Edition | [Fifth edition, International edition.] |
| Publication | Upper Saddle River : Pearson, [2014] |
| Physical description | 1 online resource (912 pages) : illustrations (some color) |
| Discipline | 621.3815324 |
| Series | Always learning |
| Topical subject | Adaptive filters |
| ISBN | 0-273-77572-3 |
| Format | Printed material |
| Bibliographic level | Monograph |
| Language of publication | eng |
| Contents note |
Cover -- Title -- Contents -- Preface -- Acknowledgments -- Background and Preview -- 1. The Filtering Problem -- 2. Linear Optimum Filters -- 3. Adaptive Filters -- 4. Linear Filter Structures -- 5. Approaches to the Development of Linear Adaptive Filters -- 6. Adaptive Beamforming -- 7. Four Classes of Applications -- 8. Historical Notes -- Chapter 1 Stochastic Processes and Models -- 1.1 Partial Characterization of a Discrete-Time Stochastic Process -- 1.2 Mean Ergodic Theorem -- 1.3 Correlation Matrix -- 1.4 Correlation Matrix of Sine Wave Plus Noise -- 1.5 Stochastic Models -- 1.6 Wold Decomposition -- 1.7 Asymptotic Stationarity of an Autoregressive Process -- 1.8 Yule-Walker Equations -- 1.9 Computer Experiment: Autoregressive Process of Order Two -- 1.10 Selecting the Model Order -- 1.11 Complex Gaussian Processes -- 1.12 Power Spectral Density -- 1.13 Properties of Power Spectral Density -- 1.14 Transmission of a Stationary Process Through a Linear Filter -- 1.15 Cramér Spectral Representation for a Stationary Process -- 1.16 Power Spectrum Estimation -- 1.17 Other Statistical Characteristics of a Stochastic Process -- 1.18 Polyspectra -- 1.19 Spectral-Correlation Density -- 1.20 Summary and Discussion -- Problems -- Chapter 2 Wiener Filters -- 2.1 Linear Optimum Filtering: Statement of the Problem -- 2.2 Principle of Orthogonality -- 2.3 Minimum Mean-Square Error -- 2.4 Wiener-Hopf Equations -- 2.5 Error-Performance Surface -- 2.6 Multiple Linear Regression Model -- 2.7 Example -- 2.8 Linearly Constrained Minimum-Variance Filter -- 2.9 Generalized Sidelobe Cancellers -- 2.10 Summary and Discussion -- Problems -- Chapter 3 Linear Prediction -- 3.1 Forward Linear Prediction -- 3.2 Backward Linear Prediction -- 3.3 Levinson-Durbin Algorithm -- 3.4 Properties of Prediction-Error Filters -- 3.5 Schur-Cohn Test.
3.6 Autoregressive Modeling of a Stationary Stochastic Process -- 3.7 Cholesky Factorization -- 3.8 Lattice Predictors -- 3.9 All-Pole, All-Pass Lattice Filter -- 3.10 Joint-Process Estimation -- 3.11 Predictive Modeling of Speech -- 3.12 Summary and Discussion -- Problems -- Chapter 4 Method of Steepest Descent -- 4.1 Basic Idea of the Steepest-Descent Algorithm -- 4.2 The Steepest-Descent Algorithm Applied to the Wiener Filter -- 4.3 Stability of the Steepest-Descent Algorithm -- 4.4 Example -- 4.5 The Steepest-Descent Algorithm Viewed as a Deterministic Search Method -- 4.6 Virtue and Limitation of the Steepest-Descent Algorithm -- 4.7 Summary and Discussion -- Problems -- Chapter 5 Method of Stochastic Gradient Descent -- 5.1 Principles of Stochastic Gradient Descent -- 5.2 Application 1: Least-Mean-Square (LMS) Algorithm -- 5.3 Application 2: Gradient-Adaptive Lattice Filtering Algorithm -- 5.4 Other Applications of Stochastic Gradient Descent -- 5.5 Summary and Discussion -- Problems -- Chapter 6 The Least-Mean-Square (LMS) Algorithm -- 6.1 Signal-Flow Graph -- 6.2 Optimality Considerations -- 6.3 Applications -- 6.4 Statistical Learning Theory -- 6.5 Transient Behavior and Convergence Considerations -- 6.6 Efficiency -- 6.7 Computer Experiment on Adaptive Prediction -- 6.8 Computer Experiment on Adaptive Equalization -- 6.9 Computer Experiment on a Minimum-Variance Distortionless-Response Beamformer -- 6.10 Summary and Discussion -- Problems -- Chapter 7 Normalized Least-Mean-Square (LMS) Algorithm and Its Generalization -- 7.1 Normalized LMS Algorithm: The Solution to a Constrained Optimization Problem -- 7.2 Stability of the Normalized LMS Algorithm -- 7.3 Step-Size Control for Acoustic Echo Cancellation -- 7.4 Geometric Considerations Pertaining to the Convergence Process for Real-Valued Data -- 7.5 Affine Projection Adaptive Filters. 
7.6 Summary and Discussion -- Problems -- Chapter 8 Block-Adaptive Filters -- 8.1 Block-Adaptive Filters: Basic Ideas -- 8.2 Fast Block LMS Algorithm -- 8.3 Unconstrained Frequency-Domain Adaptive Filters -- 8.4 Self-Orthogonalizing Adaptive Filters -- 8.5 Computer Experiment on Adaptive Equalization -- 8.6 Subband Adaptive Filters -- 8.7 Summary and Discussion -- Problems -- Chapter 9 Method of Least-Squares -- 9.1 Statement of the Linear Least-Squares Estimation Problem -- 9.2 Data Windowing -- 9.3 Principle of Orthogonality Revisited -- 9.4 Minimum Sum of Error Squares -- 9.5 Normal Equations and Linear Least-Squares Filters -- 9.6 Time-Average Correlation Matrix Φ -- 9.7 Reformulation of the Normal Equations in Terms of Data Matrices -- 9.8 Properties of Least-Squares Estimates -- 9.9 Minimum-Variance Distortionless Response (MVDR) Spectrum Estimation -- 9.10 Regularized MVDR Beamforming -- 9.11 Singular-Value Decomposition -- 9.12 Pseudoinverse -- 9.13 Interpretation of Singular Values and Singular Vectors -- 9.14 Minimum-Norm Solution to the Linear Least-Squares Problem -- 9.15 Normalized LMS Algorithm Viewed as the Minimum-Norm Solution to an Underdetermined Least-Squares Estimation Problem -- 9.16 Summary and Discussion -- Problems -- Chapter 10 The Recursive Least-Squares (RLS) Algorithm -- 10.1 Some Preliminaries -- 10.2 The Matrix Inversion Lemma -- 10.3 The Exponentially Weighted RLS Algorithm -- 10.4 Selection of the Regularization Parameter -- 10.5 Updated Recursion for the Sum of Weighted Error Squares -- 10.6 Example: Single-Weight Adaptive Noise Canceller -- 10.7 Statistical Learning Theory -- 10.8 Efficiency -- 10.9 Computer Experiment on Adaptive Equalization -- 10.10 Summary and Discussion -- Problems -- Chapter 11 Robustness -- 11.1 Robustness, Adaptation, and Disturbances. 
11.2 Robustness: Preliminary Considerations Rooted in H∞ Optimization -- 11.3 Robustness of the LMS Algorithm -- 11.4 Robustness of the RLS Algorithm -- 11.5 Comparative Evaluations of the LMS and RLS Algorithms from the Perspective of Robustness -- 11.6 Risk-Sensitive Optimality -- 11.7 Trade-Offs Between Robustness and Efficiency -- 11.8 Summary and Discussion -- Problems -- Chapter 12 Finite-Precision Effects -- 12.1 Quantization Errors -- 12.2 Least-Mean-Square (LMS) Algorithm -- 12.3 Recursive Least-Squares (RLS) Algorithm -- 12.4 Summary and Discussion -- Problems -- Chapter 13 Adaptation in Nonstationary Environments -- 13.1 Causes and Consequences of Nonstationarity -- 13.2 The System Identification Problem -- 13.3 Degree of Nonstationarity -- 13.4 Criteria for Tracking Assessment -- 13.5 Tracking Performance of the LMS Algorithm -- 13.6 Tracking Performance of the RLS Algorithm -- 13.7 Comparison of the Tracking Performance of LMS and RLS Algorithms -- 13.8 Tuning of Adaptation Parameters -- 13.9 Incremental Delta-Bar-Delta (IDBD) Algorithm -- 13.10 Autostep Method -- 13.11 Computer Experiment: Mixture of Stationary and Nonstationary Environmental Data -- 13.12 Summary and Discussion -- Problems -- Chapter 14 Kalman Filters -- 14.1 Recursive Minimum Mean-Square Estimation for Scalar Random Variables -- 14.2 Statement of the Kalman Filtering Problem -- 14.3 The Innovations Process -- 14.4 Estimation of the State Using the Innovations Process -- 14.5 Filtering -- 14.6 Initial Conditions -- 14.7 Summary of the Kalman Filter -- 14.8 Optimality Criteria for Kalman Filtering -- 14.9 Kalman Filter as the Unifying Basis for RLS Algorithms -- 14.10 Covariance Filtering Algorithm -- 14.11 Information Filtering Algorithm -- 14.12 Summary and Discussion -- Problems -- Chapter 15 Square-Root Adaptive Filtering Algorithms. 
15.1 Square-Root Kalman Filters -- 15.2 Building Square-Root Adaptive Filters on the Two Kalman Filter Variants -- 15.3 QRD-RLS Algorithm -- 15.4 Adaptive Beamforming -- 15.5 Inverse QRD-RLS Algorithm -- 15.6 Finite-Precision Effects -- 15.7 Summary and Discussion -- Problems -- Chapter 16 Order-Recursive Adaptive Filtering Algorithm -- 16.1 Order-Recursive Adaptive Filters Using Least-Squares Estimation: An Overview -- 16.2 Adaptive Forward Linear Prediction -- 16.3 Adaptive Backward Linear Prediction -- 16.4 Conversion Factor -- 16.5 Least-Squares Lattice (LSL) Predictor -- 16.6 Angle-Normalized Estimation Errors -- 16.7 First-Order State-Space Models for Lattice Filtering -- 16.8 QR-Decomposition-Based Least-Squares Lattice (QRD-LSL) Filters -- 16.9 Fundamental Properties of the QRD-LSL Filter -- 16.10 Computer Experiment on Adaptive Equalization -- 16.11 Recursive (LSL) Filters Using A Posteriori Estimation Errors -- 16.12 Recursive LSL Filters Using A Priori Estimation Errors with Error Feedback -- 16.13 Relation Between Recursive LSL and RLS Algorithms -- 16.14 Finite-Precision Effects -- 16.15 Summary and Discussion -- Problems -- Chapter 17 Blind Deconvolution -- 17.1 Overview of Blind Deconvolution -- 17.2 Channel Identifiability Using Cyclostationary Statistics -- 17.3 Subspace Decomposition for Fractionally Spaced Blind Identification -- 17.4 Bussgang Algorithm for Blind Equalization -- 17.5 Extension of the Bussgang Algorithm to Complex Baseband Channels -- 17.6 Special Cases of the Bussgang Algorithm -- 17.7 Fractionally Spaced Bussgang Equalizers -- 17.8 Estimation of Unknown Probability Distribution Function of Signal Source -- 17.9 Summary and Discussion -- Problems -- Epilogue -- 1. Robustness, Efficiency, and Complexity -- 2. Kernel-Based Nonlinear Adaptive Filtering -- Appendix A Theory of Complex Variables. A.1 Cauchy-Riemann Equations. |
| Record no. | UNINA-9910150209303321 |
| Available at | Univ. Federico II |
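Chapters 5 and 6 of the contents note in the record above cover stochastic gradient descent and the least-mean-square (LMS) algorithm, including a computer experiment on adaptive prediction. As a minimal, generic sketch of the LMS recursion w(n+1) = w(n) + μ e(n) u(n) (a textbook form, not a transcription from the book), the fragment below predicts an AR(2) process one step ahead; the AR coefficients, noise level, step size, and filter length are assumptions for illustration.

```python
import numpy as np

def lms(x, d, M, mu):
    """Plain LMS: w(n+1) = w(n) + mu * e(n) * u(n),
    with u(n) the last M input samples and e(n) the a priori error."""
    w = np.zeros(M)
    e = np.zeros(len(x))
    for n in range(M - 1, len(x)):
        u = x[n - M + 1:n + 1][::-1]    # regressor [x(n), ..., x(n-M+1)]
        e[n] = d[n] - w @ u             # estimation error
        w = w + mu * e[n] * u           # stochastic-gradient update
    return w, e

# Toy usage: one-step prediction of an AR(2) process (the coefficients,
# noise level and step size below are invented for the example).
rng = np.random.default_rng(1)
N = 5000
x = np.zeros(N)
v = 0.1 * rng.standard_normal(N)
for n in range(2, N):
    x[n] = 1.2 * x[n - 1] - 0.8 * x[n - 2] + v[n]
d = np.roll(x, -1)                      # desired response: x(n+1)
w, e = lms(x[:-1], d[:-1], M=2, mu=0.1)
print(np.round(w, 2))                   # approximately [1.2, -0.8]
```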
Adaptive filter theory / Simon Haykin
| Author | Haykin, Simon S. <1931- > |
| Edition | [3. ed] |
| Publication | Upper Saddle River (N.J.) [etc.] : Prentice Hall, c1996 |
| Physical description | XVII, 989 p. : ill. ; 24 cm |
| Discipline | 621.3815324 |
| Series | Prentice-Hall information and system sciences series |
| Topical subject | Adaptive filters |
| ISBN | 013322760X ; 0133979857 |
| Format | Printed material |
| Bibliographic level | Monograph |
| Language of publication | eng |
| Record no. | UNICAS-UFI0253948 |
| Available at | Univ. di Cassino e del Lazio Meridionale |
Adaptive filter theory / Simon Haykin
| Author | Haykin, Simon S. <1931- > |
| Edition | [3. ed] |
| Publication | Upper Saddle River (N.J.) [etc.] : Prentice Hall, c1996 |
| Physical description | XVII, 989 p. : ill. ; 24 cm |
| Discipline | 621.3815 ; 621.3815324 |
| Series | Prentice-Hall information and system sciences series |
| Topical subject | Electric filters |
| ISBN | 013322760X ; 0133979857 |
| Format | Printed material |
| Bibliographic level | Monograph |
| Language of publication | eng |
| Record no. | UNISANNIO-UFI0253948 |
| Available at | Univ. del Sannio |
Adaptive filter theory / Simon Haykin
| Author | Haykin, Simon S. <1931- > |
| Edition | [2. ed] |
| Publication | Englewood Cliffs (N.J.) : Prentice Hall, c1991 |
| Physical description | XX, 854 p. : ill. ; 24 cm |
| Discipline | 621.3815 ; 621.3815324 |
| Series | Prentice-Hall information and system sciences series |
| Topical subject | Electric filters |
| ISBN | 0130132365 |
| Format | Printed material |
| Bibliographic level | Monograph |
| Language of publication | eng |
| Record no. | UNISANNIO-MIL0059567 |
| Available at | Univ. del Sannio |
Adaptive filtering : recent advances and practical implementation / Wenping Cao, Qian Zhang, editor
| Publication | London : IntechOpen, 2021 |
| Physical description | 1 online resource |
| Discipline | 621.3815324 |
| Topical subject | Adaptive filters |
| Format | Printed material |
| Bibliographic level | Monograph |
| Language of publication | eng |
| Variant titles | Adaptive Filtering |
| Record no. | UNINA-9910688489203321 |
| Available at | Univ. Federico II |
Adaptive Filtering : Theories and Applications / Lino Garcia Morales, editor
| Publication | Rijeka, Croatia : IntechOpen, [2013] |
| Physical description | 1 online resource (164 pages) : illustrations |
| Discipline | 621.3815324 |
| Topical subject | Adaptive filters |
| ISBN | 953-51-6308-6 |
| Format | Printed material |
| Bibliographic level | Monograph |
| Language of publication | eng |
| Variant titles | Adaptive filtering |
| Record no. | UNINA-9910317752203321 |
| Available at | Univ. Federico II |
Adaptive Filtering : Theories and Applications / Lino Garcia Morales, editor
| Publication | Rijeka, Croatia : InTech, [2011] |
| Physical description | 1 online resource (64 pages) : illustrations |
| Discipline | 621.3815324 |
| Topical subject | Adaptive filters |
| ISBN | 953-51-6039-7 |
| Format | Printed material |
| Bibliographic level | Monograph |
| Language of publication | eng |
| Record no. | UNINA-9910138290703321 |
| Available at | Univ. Federico II |
Adaptive Filtering Applications / Lino García Morales, editor
| Publication | Rijeka, Croatia : InTech, [2011] |
| Physical description | 1 online resource (412 pages) : illustrations |
| Discipline | 621.3815324 |
| Topical subject | Adaptive filters |
| ISBN | 953-51-6016-8 |
| Format | Printed material |
| Bibliographic level | Monograph |
| Language of publication | eng |
| Record no. | UNINA-9910138290403321 |
| Available at | Univ. Federico II |
Adaptive filters / Ali H. Sayed
| Author | Sayed Ali H |
| Edition | [1st ed.] |
| Publication | Hoboken, New Jersey : Wiley-Interscience, c2008 |
| Physical description | 1 online resource (820 p.) |
| Discipline | 621.3815 ; 621.3815324 |
| Topical subject | Adaptive filters |
| ISBN | 1-118-21084-0 ; 1-281-37431-8 ; 9786611374310 ; 0-470-37412-8 ; 0-470-37411-X |
| Format | Printed material |
| Bibliographic level | Monograph |
| Language of publication | eng |
| Contents note |
Preface and Acknowledgments -- Notation and Symbols -- BACKGROUND MATERIAL -- A. Random Variables -- A.1 Variance of a Random Variable -- A.2 Dependent Random Variables -- A.3 Complex-Valued Random Variables -- A.4 Vector-Valued Random Variables -- A.5 Gaussian Random Vectors -- B. Linear Algebra -- B.1 Hermitian and Positive-Definite Matrices -- B.2 Range Spaces and Nullspaces of Matrices -- B.3 Schur Complements -- B.4 Cholesky Factorization -- B.5 QR Decomposition -- B.6 Singular Value Decomposition -- B.7 Kronecker Products -- C. Complex Gradients -- C.1 Cauchy-Riemann Conditions -- C.2 Scalar Arguments -- C.3 Vector Arguments -- PART I: OPTIMAL ESTIMATION -- 1. Scalar-Valued Data -- 1.1 Estimation Without Observations -- 1.2 Estimation Given Dependent Observations -- 1.3 Orthogonality Principle -- 1.4 Gaussian Random Variables -- 2. Vector-Valued Data -- 2.1 Optimal Estimator in the Vector Case -- 2.2 Spherically Invariant Gaussian Variables -- 2.3 Equivalent Optimization Criterion -- Summary and Notes -- Problems and Computer Projects -- PART II: LINEAR ESTIMATION -- 3. Normal Equations -- 3.1 Mean-Square Error Criterion -- 3.2 Minimization by Differentiation -- 3.3 Minimization by Completion-of-Squares -- 3.4 Minimization of the Error Covariance Matrix -- 3.5 Optimal Linear Estimator -- 4. Orthogonality Principle -- 4.1 Design Examples -- 4.2 Orthogonality Condition -- 4.3 Existence of Solutions -- 4.4 Nonzero-Mean Variables -- 5. Linear Models -- 5.1 Estimation using Linear Relations -- 5.2 Application: Channel Estimation -- 5.3 Application: Block Data Estimation -- 5.4 Application: Linear Channel Equalization -- 5.5 Application: Multiple-Antenna Receivers -- 6. Constrained Estimation -- 6.1 Minimum-Variance Unbiased Estimation -- 6.2 Example: Mean Estimation -- 6.3 Application: Channel and Noise Estimation -- 6.4 Application: Decision Feedback Equalization -- 6.5 Application: Antenna Beamforming -- 7. Kalman Filter.
7.1 Innovations Process -- 7.2 State-Space Model -- 7.3 Recursion for the State Estimator -- 7.4 Computing the Gain Matrix -- 7.5 Riccati Recursion -- 7.6 Covariance Form -- 7.7 Measurement and Time-Update Form -- Summary and Notes -- Problems and Computer Projects -- PART III: STOCHASTIC GRADIENT ALGORITHMS -- 8. Steepest-Descent Technique -- 8.1 Linear Estimation Problem -- 8.2 Steepest-Descent Method -- 8.3 More General Cost Functions -- 9. Transient Behavior -- 9.1 Modes of Convergence -- 9.2 Optimal Step-Size -- 9.3 Weight-Error Vector Convergence -- 9.4 Time Constants -- 9.5 Learning Curve -- 9.6 Contour Curves of the Error Surface -- 9.7 Iteration-Dependent Step-Sizes -- 9.8 Newton's Method -- 10. LMS Algorithm -- 10.1 Motivation -- 10.2 Instantaneous Approximation -- 10.3 Computational Cost -- 10.4 Least-Perturbation Property -- 10.5 Application: Adaptive Channel Estimation -- 10.6 Application: Adaptive Channel Equalization -- 10.7 Application: Decision-Feedback Equalization -- 10.8 Ensemble-Average Learning Curves -- 11. Normalized LMS Algorithm -- 11.1 Instantaneous Approximation -- 11.2 Computational Cost -- 11.3 Power Normalization -- 11.4 Least-Perturbation Property -- 12. Other LMS-Type Algorithms -- 12.1 Non-Blind Algorithms -- 12.2 Blind Algorithms -- 12.3 Some Properties -- 13. Affine Projection Algorithm -- 13.1 Instantaneous Approximation -- 13.2 Computational Cost -- 13.3 Least-Perturbation Property -- 13.4 Affine Projection Interpretation -- 14. RLS Algorithm -- 14.1 Instantaneous Approximation -- 14.2 Computational Cost -- Summary and Notes -- Problems and Computer Projects -- PART IV: MEAN-SQUARE PERFORMANCE -- 15. Energy Conservation -- 15.1 Performance Measure -- 15.2 Stationary Data Model -- 15.3 Energy Conservation Relation -- 15.4 Variance Relation -- 15.A Interpretations of the Energy Relation -- 16. Performance of LMS -- 16.1 Variance Relation -- 16.2 Small Step-Sizes -- 16.3 Separation Principle. 16.4 White Gaussian Input -- 16.5 Statement of Results -- 16.6 Simulation Results -- 17. Performance of NLMS -- 17.1 Separation Principle -- 17.2 Simulation Results -- 17.A Relating NLMS to LMS -- 18. Performance of Sign-Error LMS -- 18.1 Real-Valued Data -- 18.2 Complex-Valued Data -- 18.3 Simulation Results -- 19. Performance of RLS and Other Filters -- 19.1 Performance of RLS -- 19.2 Performance of Other Filters -- 19.3 Performance Table for Small Step-Sizes -- 20. Nonstationary Environments -- 20.1 Motivation -- 20.2 Nonstationary Data Model -- 20.3 Energy Conservation Relation -- 20.4 Variance Relation -- 21. Tracking Performance -- 21.1 Performance of LMS -- 21.2 Performance of NLMS -- 21.3 Performance of Sign-Error LMS -- 21.4 Performance of RLS -- 21.5 Comparison of Tracking Performance -- 21.6 Comparing RLS and LMS -- 21.7 Performance of Other Filters -- 21.8 Performance Table for Small Step-Sizes -- Summary and Notes -- Problems and Computer Projects -- PART V: TRANSIENT PERFORMANCE -- 22. Weighted Energy Conservation -- 22.1 Data Model -- 22.2 Data-Normalized Adaptive Filters -- 22.3 Weighted Energy Conservation Relation -- 22.4 Weighted Variance Relation -- 23. LMS with Gaussian Regressors -- 23.1 Mean and Variance Relations -- 23.2 Mean Behavior -- 23.3 Mean-Square Behavior -- 23.4 Mean-Square Stability -- 23.5 Steady-State Performance -- 23.6 Small Step-Size Approximations -- 23.A Convergence Time -- 24.
LMS with non-Gaussian Regressors -- 24.1 Mean and Variance Relations -- 24.2 Mean-Square Stability and Performance -- 24.3 Small Step-Size Approximations -- 24.A Independence and Averaging Analysis -- 25. Data-Normalized Filters -- 25.1 NLMS Filter -- 25.2 Data-Normalized Filters -- 25.A Stability Bound -- 25.B Stability of NLMS -- Summary and Notes -- Problems and Computer Projects -- PART VI: BLOCK ADAPTIVE FILTERS -- 26. Transform Domain Adaptive Filters -- 26.1 Transform-Domain Filters -- 26.2 DFT-Domain LMS. 26.3 DCT-Domain LMS -- 26.A DCT-Transformed Regressors -- 27. Efficient Block Convolution -- 27.1 Motivation -- 27.2 Block Data Formulation -- 27.3 Block Convolution -- 28. Block and Subband Adaptive Filters -- 28.1 DFT Block Adaptive Filters -- 28.2 Subband Adaptive Filters -- 28.A Another Constrained DFT Block Filter -- 28.B Overlap-Add Block Adaptive Filters -- Summary and Notes -- Problems and Computer Projects -- PART VII: LEAST-SQUARES METHODS -- 29. Least-Squares Criterion -- 29.1 Least-Squares Problem -- 29.2 Geometric Argument -- 29.3 Algebraic Arguments -- 29.4 Properties of Least-Squares Solution -- 29.5 Projection Matrices -- 29.6 Weighted Least-Squares -- 29.7 Regularized Least-Squares -- 29.8 Weighted Regularized Least-Squares -- 30. Recursive Least-Squares -- 30.1 Motivation -- 30.2 RLS Algorithm -- 30.3 Regularization -- 30.4 Conversion Factor -- 30.5 Time-Update of the Minimum Cost -- 30.6 Exponentially-Weighted RLS Algorithm -- 31. Kalman Filtering and RLS -- 31.1 Equivalence in Linear Estimation -- 31.2 Kalman Filtering and Recursive Least-Squares -- 31.A Extended RLS Algorithms -- 32. Order and Time-Update Relations -- 32.1 Backward Order-Update Relations -- 32.2 Forward Order-Update Relations -- 32.3 Time-Update Relation -- Summary and Notes -- Problems and Computer Projects -- PART VIII: ARRAY ALGORITHMS -- 33. Norm and Angle Preservation -- 33.1 Some Difficulties -- 33.2 Square-Root Factors -- 33.3 Norm and Angle Preservation -- 33.4 Motivation for Array Methods -- 34. Unitary Transformations -- 34.1 Givens Rotations -- 34.2 Householder Transformations -- 35. QR and Inverse QR Algorithms -- 35.1 Inverse QR Algorithm -- 35.2 QR Algorithm -- 35.3 Extended QR Algorithm -- 35.A Array Algorithms for Kalman Filtering -- Summary and Notes -- Problems and Computer Projects -- PART IX: FAST RLS ALGORITHMS -- 36. Hyperbolic Rotations -- 36.1 Hyperbolic Givens Rotations -- 36.2 Hyperbolic Householder Transformations. 36.3 Hyperbolic Basis Rotations -- 37. Fast Array Algorithm -- 37.1 Time-Update of the Gain Vector -- 37.2 Time-Update of the Conversion Factor -- 37.3 Initial Conditions -- 37.4 Array Algorithm -- 37.A Chandrasekhar Filter -- 38. Regularized Prediction Problems -- 38.1 Regularized Backward Prediction -- 38.2 Regularized Forward Prediction -- 38.3 Low-Rank Factorization -- 39. Fast Fixed-Order Filters -- 39.1 Fast Transversal Filter -- 39.2 FAEST Filter -- 39.3 Fast Kalman Filter -- 39.4 Stability Issues -- Summary and Notes -- Problems and Computer Projects -- PART X: LATTICE FILTERS -- 40. Three Basic Estimation Problems -- 40.1 Motivation for Lattice Filters -- 40.2 Joint Process Estimation -- 40.3 Backward Estimation Problem -- 40.4 Forward Estimation Problem -- 40.5 Time and Order-Update Relations -- 41. Lattice Filter Algorithms -- 41.1 Significance of Data Structure -- 41.2 A Posteriori-Based Lattice Filter -- 41.3 A Priori-Based Lattice Filter -- 42. 
Error-Feedback Lattice Filters -- 42.1 A Priori Error-Feedback Lattice Filter -- 42.2 A Posteriori Error-Feedback Lattice Filter -- 42.3 Normalized Lattice Filter -- 43. Array Lattice Filters -- 43.1 Order-Update of Output Estimation Errors -- 43.2 Order-Update of Backward Estimation Errors -- 43.3 Order-Update of Forward Estimation Errors -- 43.4 Significance of Data Structure -- Summary and Notes -- Problems and Computer Projects -- PART XI: ROBUST FILTERS -- 44. Indefinite Least-Squares -- 44.1 Indefinite Least-Squares -- 44.2 Recursive Minimization Algorithm -- 44.3 Time-Update of the Minimum Cost -- 44.4 Singular Weighting Matrices -- 44.A Stationary Points -- 44.B Inertia Conditions -- 45. Robust Adaptive Filters -- 45.1 A Posteriori-Based Robust Filters -- 45.2 ε-NLMS Algorithm -- 45.3 A Priori-Based Robust Filters -- 45.4 LMS Algorithm -- 45.A H∞ Filters -- 46. Robustness Properties -- 46.1 Robustness of LMS -- 46.2 Robustness of ε-NLMS. 46.3 Robustness of RLS -- Summary and Notes -- Problems and Computer Projects -- REFERENCES AND INDICES -- References -- Author Index -- Subject Index. (A brief illustrative sketch follows this record.) |
| Record no. | UNINA-9910145592003321 |
| Available at | Univ. Federico II |
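Chapters 10, 11, and 45 of the record above list the LMS algorithm, the normalized LMS algorithm, and the robust ε-NLMS variant. The following is a minimal, hedged sketch of a regularized (ε-) normalized LMS update, w(n+1) = w(n) + μ e(n) u(n) / (ε + ‖u(n)‖²), applied to a toy channel-estimation problem; the step size, regularization constant, and channel taps are invented for the example and are not taken from the book.

```python
import numpy as np

def eps_nlms(x, d, M, mu=0.5, eps=1e-3):
    """Regularized (epsilon-) normalized LMS:
    w(n+1) = w(n) + mu * e(n) * u(n) / (eps + ||u(n)||^2)."""
    w = np.zeros(M)
    e = np.zeros(len(x))
    for n in range(M - 1, len(x)):
        u = x[n - M + 1:n + 1][::-1]              # regressor vector
        e[n] = d[n] - w @ u                       # a priori error
        w = w + mu * e[n] * u / (eps + u @ u)     # power-normalized update
    return w, e

# Toy usage: adaptive channel estimation over an invented 5-tap channel.
rng = np.random.default_rng(2)
x = np.sign(rng.standard_normal(4000))            # BPSK-like training sequence
h = np.array([0.5, 1.0, -0.3, 0.2, 0.1])          # assumed channel taps
d = np.convolve(x, h)[:len(x)] + 0.02 * rng.standard_normal(len(x))
w, e = eps_nlms(x, d, M=5)
print(np.round(w, 2))                             # close to h
```

Normalizing the update by the regressor energy makes the effective step size largely insensitive to the input power, which is why the same `mu` works across very different signal levels.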