LEADER 00877cam0-22003251i-450-
001 990004241800403321
005 20080624105720.0
010 $a0-262-66100-4
035 $a000424180
035 $aFED01000424180
035 $a(Aleph)000424180FED01
035 $a000424180
100 $a19990604d1996----km-y0itay50------ba
101 0 $aeng
102 $aUS
105 $ay-------001yy
200 1 $aZero syntax$eexperiencers and cascades$fDavid Pesetsky
210 $aCambridge (Massachusetts)$cThe MIT press$d1996
215 $aXVIII, 351 p.$d23 cm
225 1 $aCurrent studies in linguistics$v27
610 0 $aGrammatica
676 $a415
700 1$aPesetsky,$bDavid$0167598
801 0$aIT$bUNINA$gRICA$2UNIMARC
901 $aBK
912 $a990004241800403321
952 $a415 PES 1$fFLFBC
959 $aFLFBC
996 $aZero syntax$9481217
997 $aUNINA

LEADER 12385nam 2200577 a 450
001 9910971850203321
005 20241122173746.0
010 $a1-118-59134-8
010 $a1-118-59135-6
010 $a1-118-59133-X
010 $a1-299-46521-8
035 $a(CKB)24989750100041
035 $a(MiAaPQ)EBC1165234
035 $a(OCoLC)830837650
035 $a(MiAaPQ)EBC4036588
035 $a(MiAaPQ)EBC7103838
035 $a(Au-PeEL)EBL1165234
035 $a(CaPaEBR)ebr10684905
035 $a(CaONFJC)MIL477771
035 $a(EXLCZ)9924989750100041
100 $a20130319d2013 uy 0
101 0 $aeng
135 $aur|||||||||||
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 10$aAdaptive filters $etheory and applications /$fBehrouz Farhang-Boroujeny
205 $a2nd ed.
210 $aChichester, West Sussex, U.K. $cWiley$d[2013]
215 $axx, 778 p. $cill
320 $aIncludes bibliographical references and index.
327 $aCover -- Title Page -- Copyright -- Contents -- Preface -- Acknowledgments -- Chapter 1 Introduction -- 1.1 Linear Filters -- 1.2 Adaptive Filters -- 1.3 Adaptive Filter Structures -- 1.4 Adaptation Approaches -- 1.4.1 Approach Based on Wiener Filter Theory -- 1.4.2 Method of Least-Squares -- 1.5 Real and Complex Forms of Adaptive Filters -- 1.6 Applications -- 1.6.1 Modeling -- 1.6.2 Inverse Modeling -- 1.6.3 Linear Prediction -- 1.6.4 Interference Cancellation -- Chapter 2 Discrete-Time Signals and Systems -- 2.1 Sequences and z-Transform -- 2.2 Parseval's Relation -- 2.3 System Function -- 2.4 Stochastic Processes -- 2.4.1 Stochastic Averages -- 2.4.2 z-Transform Representations -- 2.4.3 The Power Spectral Density -- 2.4.4 Response of Linear Systems to Stochastic Processes -- 2.4.5 Ergodicity and Time Averages -- Problems -- Chapter 3 Wiener Filters -- 3.1 Mean-Squared Error Criterion -- 3.2 Wiener Filter-Transversal, Real-Valued Case -- 3.3 Principle of Orthogonality -- 3.4 Normalized Performance Function -- 3.5 Extension to Complex-Valued Case -- 3.6 Unconstrained Wiener Filters -- 3.6.1 Performance Function -- 3.6.2 Optimum Transfer Function -- 3.6.3 Modeling -- 3.6.4 Inverse Modeling -- 3.6.5 Noise Cancellation -- 3.7 Summary and Discussion -- Problems -- Chapter 4 Eigenanalysis and Performance Surface -- 4.1 Eigenvalues and Eigenvectors -- 4.2 Properties of Eigenvalues and Eigenvectors -- 4.3 Performance Surface -- Problems -- Chapter 5 Search Methods -- 5.1 Method of Steepest Descent -- 5.2 Learning Curve -- 5.3 Effect of Eigenvalue Spread -- 5.4 Newton's Method -- 5.5 An Alternative Interpretation of Newton's Algorithm -- Problems -- Chapter 6 LMS Algorithm -- 6.1 Derivation of LMS Algorithm -- 6.2 Average Tap-Weight Behavior of the LMS Algorithm -- 6.3 MSE Behavior of the LMS Algorithm -- 6.3.1 Learning Curve.
327 $a6.3.2 Weight-Error Correlation Matrix -- 6.3.3 Excess MSE and Misadjustment -- 6.3.4 Stability -- 6.3.5 The Effect of Initial Values of Tap Weights on the Transient Behavior of the LMS Algorithm -- 6.4 Computer Simulations -- 6.4.1 System Modeling -- 6.4.2 Channel Equalization -- 6.4.3 Adaptive Line Enhancement -- 6.4.4 Beamforming -- 6.5 Simplified LMS Algorithms -- 6.6 Normalized LMS Algorithm -- 6.7 Affine Projection LMS Algorithm -- 6.8 Variable Step-Size LMS Algorithm -- 6.9 LMS Algorithm for Complex-Valued Signals -- 6.10 Beamforming (Revisited) -- 6.11 Linearly Constrained LMS Algorithm -- 6.11.1 Statement of the Problem and Its Optimal Solution -- 6.11.2 Update Equations -- 6.11.3 Extension to the Complex-Valued Case -- Problems -- Chapter 7 Transform Domain Adaptive Filters -- 7.1 Overview of Transform Domain Adaptive Filters -- 7.2 Band-Partitioning Property of Orthogonal Transforms -- 7.3 Orthogonalization Property of Orthogonal Transforms -- 7.4 Transform Domain LMS Algorithm -- 7.5 Ideal LMS-Newton Algorithm and Its Relationship with TDLMS -- 7.6 Selection of the Transform T -- 7.6.1 A Geometrical Interpretation -- 7.6.2 A Useful Performance Index -- 7.6.3 Improvement Factor and Comparisons -- 7.6.4 Filtering View -- 7.7 Transforms -- 7.8 Sliding Transforms -- 7.8.1 Frequency Sampling Filters -- 7.8.2 Recursive Realization of Sliding Transforms -- 7.8.3 Nonrecursive Realization of Sliding Transforms -- 7.8.4 Comparison of Recursive and Nonrecursive Sliding Transforms -- 7.9 Summary and Discussion -- Problems -- Chapter 8 Block Implementation of Adaptive Filters -- 8.1 Block LMS Algorithm -- 8.2 Mathematical Background -- 8.2.1 Linear Convolution Using the Discrete Fourier Transform -- 8.2.2 Circular Matrices -- 8.2.3 Window Matrices and Matrix Formulation of the Overlap-Save Method -- 8.3 The FBLMS Algorithm. 
327 $a8.3.1 Constrained and Unconstrained FBLMS Algorithms -- 8.3.2 Convergence Behavior of the FBLMS Algorithm -- 8.3.3 Step-Normalization -- 8.3.4 Summary of the FBLMS Algorithm -- 8.3.5 FBLMS Misadjustment Equations -- 8.3.6 Selection of the Block Length -- 8.4 The Partitioned FBLMS Algorithm -- 8.4.1 Analysis of the PFBLMS Algorithm -- 8.4.2 PFBLMS Algorithm with M > L -- 8.4.3 PFBLMS Misadjustment Equations -- 8.4.4 Computational Complexity and Memory Requirement -- 8.4.5 Modified Constrained PFBLMS Algorithm -- 8.5 Computer Simulations -- Problems -- Chapter 9 Subband Adaptive Filters -- 9.1 DFT Filter Banks -- 9.1.1 Weighted Overlap-Add Method for Realization of DFT Analysis Filter Banks -- 9.1.2 Weighted Overlap-Add Method for Realization of DFT Synthesis Filter Banks -- 9.2 Complementary Filter Banks -- 9.3 Subband Adaptive Filter Structures -- 9.4 Selection of Analysis and Synthesis Filters -- 9.5 Computational Complexity -- 9.6 Decimation Factor and Aliasing -- 9.7 Low-Delay Analysis and Synthesis Filter Banks -- 9.7.1 Design Method -- 9.7.2 Filters Properties -- 9.8 A Design Procedure for Subband Adaptive Filters -- 9.9 An Example -- 9.10 Comparison with FBLMS Algorithm -- Problems -- Chapter 10 IIR Adaptive Filters -- 10.1 Output Error Method -- 10.2 Equation Error Method -- 10.3 Case Study I: IIR Adaptive Line Enhancement -- 10.3.1 IIR ALE Filter, W(z) -- 10.3.2 Performance Functions -- 10.3.3 Simultaneous Adaptation of s and w -- 10.3.4 Robust Adaptation of w -- 10.3.5 Simulation Results -- 10.4 Case Study II: Equalizer Design for Magnetic Recording Channels -- 10.4.1 Channel Discretization -- 10.4.2 Design Steps -- 10.4.3 FIR Equalizer Design -- 10.4.4 Conversion from FIR into IIR Equalizer -- 10.4.5 Conversion from z Domain into s Domain -- 10.4.6 Numerical Results -- 10.5 Concluding Remarks -- Problems -- Chapter 11 Lattice Filters.
327 $a11.1 Forward Linear Prediction -- 11.2 Backward Linear Prediction -- 11.3 Relationship Between Forward and Backward Predictors -- 11.4 Prediction-Error Filters -- 11.5 Properties of Prediction Errors -- 11.6 Derivation of Lattice Structure -- 11.7 Lattice as an Orthogonalization Transform -- 11.8 Lattice Joint Process Estimator -- 11.9 System Functions -- 11.10 Conversions -- 11.10.1 Conversion Between Lattice and Transversal Predictors -- 11.10.2 Levinson-Durbin Algorithm -- 11.10.3 Extension of Levinson-Durbin Algorithm -- 11.11 All-Pole Lattice Structure -- 11.12 Pole-Zero Lattice Structure -- 11.13 Adaptive Lattice Filter -- 11.13.1 Discussion and Simulations -- 11.14 Autoregressive Modeling of Random Processes -- 11.15 Adaptive Algorithms Based on Autoregressive Modeling -- 11.15.1 Algorithms -- 11.15.2 Performance Analysis -- 11.15.3 Simulation Results and Discussion -- Problems -- Chapter 12 Method of Least-Squares -- 12.1 Formulation of Least-Squares Estimation for a Linear Combiner -- 12.2 Principle of Orthogonality -- 12.3 Projection Operator -- 12.4 Standard Recursive Least-Squares Algorithm -- 12.4.1 RLS Recursions -- 12.4.2 Initialization of the RLS Algorithm -- 12.4.3 Summary of the Standard RLS Algorithm -- 12.5 Convergence Behavior of the RLS Algorithm -- 12.5.1 Average Tap-Weight Behavior of the RLS Algorithm -- 12.5.2 Weight-Error Correlation Matrix -- 12.5.3 Learning Curve -- 12.5.4 Excess MSE and Misadjustment -- 12.5.5 Initial Transient Behavior of the RLS Algorithm -- Problems -- Chapter 13 Fast RLS Algorithms -- 13.1 Least-Squares Forward Prediction -- 13.2 Least-Squares Backward Prediction -- 13.3 Least-Squares Lattice -- 13.4 RLSL Algorithm -- 13.4.1 Notations and Preliminaries -- 13.4.2 Update Recursion for the Least-Squares Error Sums -- 13.4.3 Conversion Factor -- 13.4.4 Update Equation for Conversion Factor.
327 $a13.4.5 Update Equation for Cross-Correlations -- 13.4.6 RLSL Algorithm Using A Posteriori Errors -- 13.4.7 RLSL Algorithm with Error Feedback -- 13.5 FTRLS Algorithm -- 13.5.1 Derivation of the FTRLS Algorithm -- 13.5.2 Summary of the FTRLS Algorithm -- 13.5.3 Stabilized FTRLS Algorithm -- Problems -- Chapter 14 Tracking -- 14.1 Formulation of the Tracking Problem -- 14.2 Generalized Formulation of LMS Algorithm -- 14.3 MSE Analysis of the Generalized LMS Algorithm -- 14.4 Optimum Step-Size Parameters -- 14.5 Comparisons of Conventional Algorithms -- 14.6 Comparisons Based on Optimum Step-Size Parameters -- 14.7 VSLMS: An Algorithm with Optimum Tracking Behavior -- 14.7.1 Derivation of VSLMS Algorithm -- 14.7.2 Variations and Extensions -- 14.7.3 Normalization of the Parameter ? -- 14.7.4 Computer Simulations -- 14.8 RLS Algorithm with Variable Forgetting Factor -- 14.9 Summary -- Problems -- Chapter 15 Echo Cancellation -- 15.1 The Problem Statement -- 15.2 Structures and Adaptive Algorithms -- 15.2.1 Normalized LMS (NLMS) Algorithm -- 15.2.2 Affine Projection LMS (APLMS) Algorithm -- 15.2.3 Frequency Domain Block LMS Algorithm -- 15.2.4 Subband LMS Algorithm -- 15.2.5 LMS-Newton Algorithm -- 15.2.6 Numerical Results -- 15.3 Double-Talk Detection -- 15.3.1 Coherence Function -- 15.3.2 Double-Talk Detection Using the Coherence Function -- 15.3.3 Numerical Evaluation of the Coherence Function -- 15.3.4 Power-Based Double-Talk Detectors -- 15.3.5 Numerical Results -- 15.4 Howling Suppression -- 15.4.1 Howling Suppression Through Notch Filtering -- 15.4.2 Howling Suppression by Spectral Shift -- 15.5 Stereophonic Acoustic Echo Cancellation -- 15.5.1 The Fundamental Problem -- 15.5.2 Reducing Coherence Between x1(n) and x2(n) -- 15.5.3 The LMS-Newton Algorithm for Stereophonic Systems -- Chapter 16 Active Noise Control.
327 $a16.1 Broadband Feedforward Single-Channel ANC.
330 $aThis second edition of Adaptive Filters: Theory and Applications has been updated throughout to reflect the latest developments in the field, notably with increased coverage of practical applications, illustrating the much broader range of adaptive filter applications developed in recent years. The book offers an easy-to-understand approach to the theory and application of adaptive filters, clearly showing how the theory explained in the early chapters is modified for the various applications discussed in detail in later chapters. This integrated approach makes the book a valuable resource for graduate students, and the inclusion of more advanced applications, including antenna arrays and wireless communications, makes it a suitable technical reference for engineers, practitioners and researchers. Key features: Offers a thorough treatment of the theory of adaptive signal processing, incorporating new material on transform domain, frequency domain and subband adaptive filters, acoustic echo cancellation and active noise control. Provides an in-depth study of applications which now includes extensive coverage of OFDM, MIMO and smart antennas. Contains exercises and computer simulation problems at the end of each chapter. Includes a new companion website hosting MATLAB® simulation programs which complement the theoretical analyses, enabling the reader to gain an in-depth understanding of the behaviours and properties of the various adaptive algorithms.
606 $aAdaptive filters
606 $aAdaptive signal processing
615 0$aAdaptive filters.
615 0$aAdaptive signal processing.
676 $a621.3815/324
700 $aFarhang-Boroujeny$b B$01858000
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910971850203321
996 $aAdaptive filters$94459187
997 $aUNINA