Adaptive Filter Theory (5th Edition, English)

File size: 406.59 MB
Format: PDF
Language: English
Category: Electronics & Information
Updated: 2020-07-22

Description
Published: 2017 edition
This book is a classic text in the field of adaptive signal processing. Across 17 chapters, it presents the fundamental theory and methods of adaptive signal processing systematically and accessibly, and it reflects recent theoretical, technical, and application advances in the field. Topics covered include: stochastic processes and models, Wiener filters, linear prediction, the method of steepest descent, stochastic gradient descent, the least-mean-square (LMS) algorithm, the normalized LMS algorithm and its generalization, block-adaptive filters, the method of least squares, the recursive least-squares (RLS) algorithm, robustness, finite-precision effects, adaptation in nonstationary environments, Kalman filters, square-root adaptive filtering algorithms, order-recursive adaptive filtering algorithms, and blind deconvolution, together with their applications in communication and information systems.
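As a taste of the book's central workhorse, here is a minimal sketch of the LMS algorithm in a system-identification setting. This is our own illustrative Python example, not code from the book; the tap count, step size, and the "unknown" channel h are arbitrary assumptions.

import numpy as np

def lms_filter(x, d, num_taps=4, mu=0.01):
    """Adapt an FIR filter so its output tracks the desired signal d,
    using the LMS update w <- w + mu * e[n] * u[n]."""
    w = np.zeros(num_taps)                     # adaptive tap weights
    e = np.zeros(len(x))                       # a priori estimation error
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1 : n + 1][::-1]  # tap inputs [x[n], ..., x[n-M+1]]
        e[n] = d[n] - w @ u                    # error against desired response
        w = w + mu * e[n] * u                  # stochastic-gradient weight update
    return w, e

# Toy system identification: recover a hypothetical 4-tap FIR channel.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)                  # white, unit-power input
h = np.array([0.5, -0.3, 0.2, 0.1])            # "unknown" system (made up here)
d = np.convolve(x, h)[: len(x)]                # desired response
w, e = lms_filter(x, d)
print(np.round(w, 3))                          # approaches h after convergence

The update w <- w + mu*e(n)*u(n) is the stochastic-gradient recursion analyzed in Chapters 5 and 6; as a rough rule of thumb, stability requires mu < 2 / (num_taps x input power).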
Contents
Background and Preview 1
1. The Filtering Problem 1
2. Linear Optimum Filters 4
3. Adaptive Filters 4
4. Linear Filter Structures 6
5. Approaches to the Development of Linear Adaptive Filters 12
6. Adaptive Beamforming 13
7. Four Classes of Applications 17
8. Historical Notes 20

Chapter 1 Stochastic Processes and Models 30
1.1 Partial Characterization of a Discrete-Time Stochastic Process 30
1.2 Mean Ergodic Theorem 32
1.3 Correlation Matrix 34
1.4 Correlation Matrix of Sine Wave Plus Noise 39
1.5 Stochastic Models 40
1.6 Wold Decomposition 46
1.7 Asymptotic Stationarity of an Autoregressive Process 49
1.8 Yule–Walker Equations 51
1.9 Computer Experiment: Autoregressive Process of Order Two 52
1.10 Selecting the Model Order 60
1.11 Complex Gaussian Processes 63
1.12 Power Spectral Density 65
1.13 Properties of Power Spectral Density 67
1.14 Transmission of a Stationary Process Through a Linear Filter 69
1.15 Cramér Spectral Representation for a Stationary Process 72
1.16 Power Spectrum Estimation 74
1.17 Other Statistical Characteristics of a Stochastic Process 77
1.18 Polyspectra 78
1.19 Spectral-Correlation Density 81
1.20 Summary and Discussion 84
Problems 85

Chapter 2 Wiener Filters 90
2.1 Linear Optimum Filtering: Statement of the Problem 90
2.2 Principle of Orthogonality 92
2.3 Minimum Mean-Square Error 96
2.4 Wiener–Hopf Equations 98
2.5 Error-Performance Surface 100
2.6 Multiple Linear Regression Model 104
2.7 Example 106
2.8 Linearly Constrained Minimum-Variance Filter 111
2.9 Generalized Sidelobe Cancellers 116
2.10 Summary and Discussion 122
Problems 124

Chapter 3 Linear Prediction 132
3.1 Forward Linear Prediction 132
3.2 Backward Linear Prediction 139
3.3 Levinson–Durbin Algorithm 144
3.4 Properties of Prediction-Error Filters 153
3.5 Schur–Cohn Test 162
3.6 Autoregressive Modeling of a Stationary Stochastic Process 164
3.7 Cholesky Factorization 167
3.8 Lattice Predictors 170
3.9 All-Pole, All-Pass Lattice Filter 175
3.10 Joint-Process Estimation 177
3.11 Predictive Modeling of Speech 181
3.12 Summary and Discussion 188
Problems 189

Chapter 4 Method of Steepest Descent 199
4.1 Basic Idea of the Steepest-Descent Algorithm 199
4.2 The Steepest-Descent Algorithm Applied to the Wiener Filter 200
4.3 Stability of the Steepest-Descent Algorithm 204
4.4 Example 209
4.5 The Steepest-Descent Algorithm Viewed as a Deterministic Search Method 221
4.6 Virtue and Limitation of the Steepest-Descent Algorithm 222
4.7 Summary and Discussion 223
Problems 224

Chapter 5 Method of Stochastic Gradient Descent 228
5.1 Principles of Stochastic Gradient Descent 228
5.2 Application 1: Least-Mean-Square (LMS) Algorithm 230
5.3 Application 2: Gradient-Adaptive Lattice Filtering Algorithm 237
5.4 Other Applications of Stochastic Gradient Descent 244
5.5 Summary and Discussion 245
Problems 246

Chapter 6 The Least-Mean-Square (LMS) Algorithm 248
6.1 Signal-Flow Graph 248
6.2 Optimality Considerations 250
6.3 Applications 252
6.4 Statistical Learning Theory 272
6.5 Transient Behavior and Convergence Considerations 283
6.6 Efficiency 286
6.7 Computer Experiment on Adaptive Prediction 288
6.8 Computer Experiment on Adaptive Equalization 293
6.9 Computer Experiment on a Minimum-Variance Distortionless-Response Beamformer 302
6.10 Summary and Discussion 306
Problems 308

Chapter 7 Normalized Least-Mean-Square (LMS) Algorithm and Its Generalization 315
7.1 Normalized LMS Algorithm: The Solution to a Constrained Optimization Problem 315
7.2 Stability of the Normalized LMS Algorithm 319
7.3 Step-Size Control for Acoustic Echo Cancellation 322
7.4 Geometric Considerations Pertaining to the Convergence Process for Real-Valued Data 327
7.5 Affine Projection Adaptive Filters 330
7.6 Summary and Discussion 334
Problems 335

Chapter 8 Block-Adaptive Filters 339
8.1 Block-Adaptive Filters: Basic Ideas 340
8.2 Fast Block LMS Algorithm 344
8.3 Unconstrained Frequency-Domain Adaptive Filters 350
8.4 Self-Orthogonalizing Adaptive Filters 351
8.5 Computer Experiment on Adaptive Equalization 361
8.6 Subband Adaptive Filters 367
8.7 Summary and Discussion 375
Problems 376

Chapter 9 Method of Least-Squares 380
9.1 Statement of the Linear Least-Squares Estimation Problem 380
9.2 Data Windowing 383
9.3 Principle of Orthogonality Revisited 384
9.4 Minimum Sum of Error Squares 387
9.5 Normal Equations and Linear Least-Squares Filters 388
9.6 Time-Average Correlation Matrix Φ 391
9.7 Reformulation of the Normal Equations in Terms of Data Matrices 393
9.8 Properties of Least-Squares Estimates 397
9.9 Minimum-Variance Distortionless Response (MVDR) Spectrum Estimation 401
9.10 Regularized MVDR Beamforming 404
9.11 Singular-Value Decomposition 409
9.12 Pseudoinverse 416
9.13 Interpretation of Singular Values and Singular Vectors 418
9.14 Minimum-Norm Solution to the Linear Least-Squares Problem 419
9.15 Normalized LMS Algorithm Viewed as the Minimum-Norm Solution to an Underdetermined Least-Squares Estimation Problem 422
9.16 Summary and Discussion 424
Problems 425

Chapter 10 The Recursive Least-Squares (RLS) Algorithm 431
10.1 Some Preliminaries 431
10.2 The Matrix Inversion Lemma 435
10.3 The Exponentially Weighted RLS Algorithm 436
10.4 Selection of the Regularization Parameter 439
10.5 Updated Recursion for the Sum of Weighted Error Squares 441
10.6 Example: Single-Weight Adaptive Noise Canceller 443
10.7 Statistical Learning Theory 444
10.8 Efficiency 449
10.9 Computer Experiment on Adaptive Equalization 450
10.10 Summary and Discussion 453
Problems 454

Chapter 11 Robustness 456
11.1 Robustness, Adaptation, and Disturbances 456
11.2 Robustness: Preliminary Considerations Rooted in H∞ Optimization 457
11.3 Robustness of the LMS Algorithm 460
11.4 Robustness of the RLS Algorithm 465
11.5 Comparative Evaluations of the LMS and RLS Algorithms from the Perspective of Robustness 470
11.6 Risk-Sensitive Optimality 470
11.7 Trade-Offs Between Robustness and Efficiency 472
11.8 Summary and Discussion 474
Problems 474

Chapter 12 Finite-Precision Effects 479
12.1 Quantization Errors 480
12.2 Least-Mean-Square (LMS) Algorithm 482
12.3 Recursive Least-Squares (RLS) Algorithm 491
12.4 Summary and Discussion 497
Problems 498

Chapter 13 Adaptation in Nonstationary Environments 500
13.1 Causes and Consequences of Nonstationarity 500
13.2 The System Identification Problem 501
13.3 Degree of Nonstationarity 504
13.4 Criteria for Tracking Assessment 505
13.5 Tracking Performance of the LMS Algorithm 507
13.6 Tracking Performance of the RLS Algorithm 510
13.7 Comparison of the Tracking Performance of LMS and RLS Algorithms 514
13.8 Tuning of Adaptation Parameters 518
13.9 Incremental Delta-Bar-Delta (IDBD) Algorithm 520
13.10 Autostep Method 526
13.11 Computer Experiment: Mixture of Stationary and Nonstationary Environmental Data 530
13.12 Summary and Discussion 534
Problems 535

Chapter 14 Kalman Filters 540
14.1 Recursive Minimum Mean-Square Estimation for Scalar Random Variables 541
14.2 Statement of the Kalman Filtering Problem 544
14.3 The Innovations Process 547
14.4 Estimation of the State Using the Innovations Process 549
14.5 Filtering 555
14.6 Initial Conditions 557
14.7 Summary of the Kalman Filter 558
14.8 Optimality Criteria for Kalman Filtering 559
14.9 Kalman Filter as the Unifying Basis for RLS Algorithms 561
14.10 Covariance Filtering Algorithm 566
14.11 Information Filtering Algorithm 568
14.12 Summary and Discussion 571
Problems 572

Chapter 15 Square-Root Adaptive Filtering Algorithms 576
15.1 Square-Root Kalman Filters 576
15.2 Building Square-Root Adaptive Filters on the Two Kalman Filter Variants 582
15.3 QRD-RLS Algorithm 583
15.4 Adaptive Beamforming 591
15.5 Inverse QRD-RLS Algorithm 598
15.6 Finite-Precision Effects 601
15.7 Summary and Discussion 602
Problems 603

Chapter 16 Order-Recursive Adaptive Filtering Algorithm 607
16.1 Order-Recursive Adaptive Filters Using Least-Squares Estimation: An Overview 608
16.2 Adaptive Forward Linear Prediction 609
16.3 Adaptive Backward Linear Prediction 612
16.4 Conversion Factor 615
16.5 Least-Squares Lattice (LSL) Predictor 618
16.6 Angle-Normalized Estimation Errors 628
16.7 First-Order State-Space Models for Lattice Filtering 632
16.8 QR-Decomposition-Based Least-Squares Lattice (QRD-LSL) Filters 637
16.9 Fundamental Properties of the QRD-LSL Filter 644
16.10 Computer Experiment on Adaptive Equalization 649
16.11 Recursive LSL Filters Using A Posteriori Estimation Errors 654
16.12 Recursive LSL Filters Using A Priori Estimation Errors with Error Feedback 657
16.13 Relation Between Recursive LSL and RLS Algorithms 662
16.14 Finite-Precision Effects 665
16.15 Summary and Discussion 667
Problems 669

Chapter 17 Blind Deconvolution 676
17.1 Overview of Blind Deconvolution 676
17.2 Channel Identifiability Using Cyclostationary Statistics 681
17.3 Subspace Decomposition for Fractionally Spaced Blind Identification 682
17.4 Bussgang Algorithm for Blind Equalization 696
17.5 Extension of the Bussgang Algorithm to Complex Baseband Channels 713
17.6 Special Cases of the Bussgang Algorithm 714
17.7 Fractionally Spaced Bussgang Equalizers 718
17.8 Estimation of Unknown Probability Distribution Function of Signal Source 723
17.9 Summary and Discussion 727
Problems 728

Epilogue 732
1. Robustness, Efficiency, and Complexity 732
2. Kernel-Based Nonlinear Adaptive Filtering 735

Appendix A Theory of Complex Variables 752
A.1 Cauchy–Riemann Equations 752
A.2 Cauchy's Integral Formula 754
A.3 Laurent's Series 756
A.4 Singularities and Residues 758
A.5 Cauchy's Residue Theorem 759
A.6 Principle of the Argument 760
A.7 Inversion Integral for the z-Transform 763
A.8 Parseval's Theorem 765

Appendix B Wirtinger Calculus for Computing Complex Gradients 767
B.1 Wirtinger Calculus: Scalar Gradients 767
B.2 Generalized Wirtinger Calculus: Gradient Vectors 770
B.3 Another Approach to Compute Gradient Vectors 772
B.4 Expressions for the Partial Derivatives ∂f/∂z and ∂f/∂z* 773

Appendix C Method of Lagrange Multipliers 774
C.1 Optimization Involving a Single Equality Constraint 774
C.2 Optimization Involving Multiple Equality Constraints 775
C.3 Optimum Beamformer 776

Appendix D Estimation Theory 777
D.1 Likelihood Function 777
D.2 Cramér–Rao Inequality 778
D.3 Properties of Maximum-Likelihood Estimators 779
D.4 Conditional Mean Estimator 780

Appendix E Eigenanalysis 782
E.1 The Eigenvalue Problem 782
E.2 Properties of Eigenvalues and Eigenvectors 784
E.3 Low-Rank Modeling 798
E.4 Eigenfilters 802
E.5 Eigenvalue Computations 804

Appendix F Langevin Equation of Nonequilibrium Thermodynamics 807
F.1 Brownian Motion 807
F.2 Langevin Equation 807

Appendix G Rotations and Reflections 809
G.1 Plane Rotations 809
G.2 Two-Sided Jacobi Algorithm 811
G.3 Cyclic Jacobi Algorithm 817
G.4 Householder Transformation 820
G.5 The QR Algorithm 823

Appendix H Complex Wishart Distribution 830
H.1 Definition 830
H.2 The Chi-Square Distribution as a Special Case 831
H.3 Properties of the Complex Wishart Distribution 832
H.4 Expectation of the Inverse Correlation Matrix Φ⁻¹(n) 833

Glossary 834
Text Conventions 834
Abbreviations 837
Principal Symbols 840
Bibliography 846
Suggested Reading 861
Index 879