Author: Hoang Dinh Thai <1986->
Title: Deep reinforcement learning for wireless communications and networking : theory, applications and implementation / Dinh Thai Hoang [and four others]
Publication: Hoboken, New Jersey : John Wiley & Sons, Inc., [2023]
©2023
Edition: First edition.
Physical description: 1 online resource (291 pages)
Dewey classification: 006.31
Topical subjects: Reinforcement learning
Wireless communication systems
Uncontrolled subjects: Artificial Intelligence
Computer Networks
Computers
Secondary responsibility (persons): Huynh, Nguyen Van
Nguyen, Diep N.
Hossain, Ekram
Niyato, Dusit
Bibliography note: Includes bibliographical references and index.
Contents note: Cover -- Title Page -- Copyright -- Contents -- Notes on Contributors -- Foreword -- Preface -- Acknowledgments -- Acronyms -- Introduction -- Part I Fundamentals of Deep Reinforcement Learning -- Chapter 1 Deep Reinforcement Learning and Its Applications -- 1.1 Wireless Networks and Emerging Challenges -- 1.2 Machine Learning Techniques and Development of DRL -- 1.2.1 Machine Learning -- 1.2.2 Artificial Neural Network -- 1.2.3 Convolutional Neural Network -- 1.2.4 Recurrent Neural Network -- 1.2.5 Development of Deep Reinforcement Learning -- 1.3 Potentials and Applications of DRL -- 1.3.1 Benefits of DRL in Human Lives -- 1.3.2 Features and Advantages of DRL Techniques -- 1.3.3 Academic Research Activities -- 1.3.4 Applications of DRL Techniques -- 1.3.5 Applications of DRL Techniques in Wireless Networks -- 1.4 Structure of this Book and Target Readership -- 1.4.1 Motivations and Structure of this Book -- 1.4.2 Target Readership -- 1.5 Chapter Summary -- References -- Chapter 2 Markov Decision Process and Reinforcement Learning -- 2.1 Markov Decision Process -- 2.2 Partially Observable Markov Decision Process -- 2.3 Policy and Value Functions -- 2.4 Bellman Equations -- 2.5 Solutions of MDP Problems -- 2.5.1 Dynamic Programming -- 2.5.1.1 Policy Evaluation -- 2.5.1.2 Policy Improvement -- 2.5.1.3 Policy Iteration -- 2.5.2 Monte Carlo Sampling -- 2.6 Reinforcement Learning -- 2.7 Chapter Summary -- References -- Chapter 3 Deep Reinforcement Learning Models and Techniques -- 3.1 Value‐Based DRL Methods -- 3.1.1 Deep Q‐Network -- 3.1.2 Double DQN -- 3.1.3 Prioritized Experience Replay -- 3.1.4 Dueling Network -- 3.2 Policy‐Gradient Methods -- 3.2.1 REINFORCE Algorithm -- 3.2.1.1 Policy Gradient Estimation -- 3.2.1.2 Reducing the Variance -- 3.2.1.3 Policy Gradient Theorem -- 3.2.2 Actor‐Critic Methods -- 3.2.3 Advantage of Actor‐Critic Methods.
3.2.3.1 Advantage of Actor‐Critic (A2C) -- 3.2.3.2 Asynchronous Advantage Actor‐Critic (A3C) -- 3.2.3.3 Generalized Advantage Estimate (GAE) -- 3.3 Deterministic Policy Gradient (DPG) -- 3.3.1 Deterministic Policy Gradient Theorem -- 3.3.2 Deep Deterministic Policy Gradient (DDPG) -- 3.3.3 Distributed Distributional DDPG (D4PG) -- 3.4 Natural Gradients -- 3.4.1 Principle of Natural Gradients -- 3.4.2 Trust Region Policy Optimization (TRPO) -- 3.4.2.1 Trust Region -- 3.4.2.2 Sample‐Based Formulation -- 3.4.2.3 Practical Implementation -- 3.4.3 Proximal Policy Optimization (PPO) -- 3.5 Model‐Based RL -- 3.5.1 Vanilla Model‐Based RL -- 3.5.2 Robust Model‐Based RL: Model‐Ensemble TRPO (ME‐TRPO) -- 3.5.3 Adaptive Model‐Based RL: Model‐Based Meta‐Policy Optimization (MB‐MPO) -- 3.6 Chapter Summary -- References -- Chapter 4 A Case Study and Detailed Implementation -- 4.1 System Model and Problem Formulation -- 4.1.1 System Model and Assumptions -- 4.1.1.1 Jamming Model -- 4.1.1.2 System Operation -- 4.1.2 Problem Formulation -- 4.1.2.1 State Space -- 4.1.2.2 Action Space -- 4.1.2.3 Immediate Reward -- 4.1.2.4 Optimization Formulation -- 4.2 Implementation and Environment Settings -- 4.2.1 Install TensorFlow with Anaconda -- 4.2.2 Q‐Learning -- 4.2.2.1 Codes for the Environment -- 4.2.2.2 Codes for the Agent -- 4.2.3 Deep Q‐Learning -- 4.3 Simulation Results and Performance Analysis -- 4.4 Chapter Summary -- References -- Part II Applications of DRL in Wireless Communications and Networking -- Chapter 5 DRL at the Physical Layer -- 5.1 Beamforming, Signal Detection, and Decoding -- 5.1.1 Beamforming -- 5.1.1.1 Beamforming Optimization Problem -- 5.1.1.2 DRL‐Based Beamforming -- 5.1.2 Signal Detection and Channel Estimation -- 5.1.2.1 Signal Detection and Channel Estimation Problem -- 5.1.2.2 RL‐Based Approaches -- 5.1.3 Channel Decoding.
5.2 Power and Rate Control -- 5.2.1 Power and Rate Control Problem -- 5.2.2 DRL‐Based Power and Rate Control -- 5.3 Physical‐Layer Security -- 5.4 Chapter Summary -- References -- Chapter 6 DRL at the MAC Layer -- 6.1 Resource Management and Optimization -- 6.2 Channel Access Control -- 6.2.1 DRL in the IEEE 802.11 MAC -- 6.2.2 MAC for Massive Access in IoT -- 6.2.3 MAC for 5G and B5G Cellular Systems -- 6.3 Heterogeneous MAC Protocols -- 6.4 Chapter Summary -- References -- Chapter 7 DRL at the Network Layer -- 7.1 Traffic Routing -- 7.2 Network Slicing -- 7.2.1 Network Slicing‐Based Architecture -- 7.2.2 Applications of DRL in Network Slicing -- 7.3 Network Intrusion Detection -- 7.3.1 Host‐Based IDS -- 7.3.2 Network‐Based IDS -- 7.4 Chapter Summary -- References -- Chapter 8 DRL at the Application and Service Layer -- 8.1 Content Caching -- 8.1.1 QoS‐Aware Caching -- 8.1.2 Joint Caching and Transmission Control -- 8.1.3 Joint Caching, Networking, and Computation -- 8.2 Data and Computation Offloading -- 8.3 Data Processing and Analytics -- 8.3.1 Data Organization -- 8.3.1.1 Data Partitioning -- 8.3.1.2 Data Compression -- 8.3.2 Data Scheduling -- 8.3.3 Tuning of Data Processing Systems -- 8.3.4 Data Indexing -- 8.3.4.1 Database Index Selection -- 8.3.4.2 Index Structure Construction -- 8.3.5 Query Optimization -- 8.4 Chapter Summary -- References -- Part III Challenges, Approaches, Open Issues, and Emerging Research Topics -- Chapter 9 DRL Challenges in Wireless Networks -- 9.1 Adversarial Attacks on DRL -- 9.1.1 Attacks Perturbing the State Space -- 9.1.1.1 Manipulation of Observations -- 9.1.1.2 Manipulation of Training Data -- 9.1.2 Attacks Perturbing the Reward Function -- 9.1.3 Attacks Perturbing the Action Space -- 9.2 Multiagent DRL in Dynamic Environments -- 9.2.1 Motivations -- 9.2.2 Multiagent Reinforcement Learning Models.
9.2.2.1 Markov/Stochastic Games -- 9.2.2.2 Decentralized Partially Observable Markov Decision Process (DPOMDP) -- 9.2.3 Applications of Multiagent DRL in Wireless Networks -- 9.2.4 Challenges of Using Multiagent DRL in Wireless Networks -- 9.2.4.1 Nonstationarity Issue -- 9.2.4.2 Partial Observability Issue -- 9.3 Other Challenges -- 9.3.1 Inherent Problems of Using RL in Real‐World Systems -- 9.3.1.1 Limited Learning Samples -- 9.3.1.2 System Delays -- 9.3.1.3 High‐Dimensional State and Action Spaces -- 9.3.1.4 System and Environment Constraints -- 9.3.1.5 Partial Observability and Nonstationarity -- 9.3.1.6 Multiobjective Reward Functions -- 9.3.2 Inherent Problems of DL and Beyond -- 9.3.2.1 Inherent Problems of DL -- 9.3.2.2 Challenges of DRL Beyond Deep Learning -- 9.3.3 Implementation of DL Models in Wireless Devices -- 9.4 Chapter Summary -- References -- Chapter 10 DRL and Emerging Topics in Wireless Networks -- 10.1 DRL for Emerging Problems in Future Wireless Networks -- 10.1.1 Joint Radar and Data Communications -- 10.1.2 Ambient Backscatter Communications -- 10.1.3 Reconfigurable Intelligent Surface‐Aided Communications -- 10.1.4 Rate Splitting Communications -- 10.2 Advanced DRL Models -- 10.2.1 Deep Reinforcement Transfer Learning -- 10.2.1.1 Reward Shaping -- 10.2.1.2 Intertask Mapping -- 10.2.1.3 Learning from Demonstrations -- 10.2.1.4 Policy Transfer -- 10.2.1.5 Reusing Representations -- 10.2.2 Generative Adversarial Network (GAN) for DRL -- 10.2.3 Meta Reinforcement Learning -- 10.3 Chapter Summary -- References -- Index -- EULA.
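Illustrative note: Sections 4.2.2 and 4.2.3 of the contents above list code for a Q-learning environment and agent. The following is a minimal, generic sketch of the tabular Q-learning update those sections refer to; it is not the book's anti-jamming implementation, and the toy corridor environment and all names in it are hypothetical.

import numpy as np

# Tabular Q-learning on a hypothetical 5-state corridor: action 1 moves right,
# action 0 moves left, and reaching the rightmost state pays reward 1.
N_STATES, N_ACTIONS = 5, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount factor, exploration rate

def step(state, action):
    """Toy transition function for the hypothetical environment."""
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

rng = np.random.default_rng(seed=0)
Q = np.zeros((N_STATES, N_ACTIONS))  # action-value table Q(s, a)

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection: explore with probability EPSILON
        if rng.random() < EPSILON:
            action = int(rng.integers(N_ACTIONS))
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Q-learning update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[state, action] += ALPHA * (reward + GAMMA * np.max(Q[next_state]) - Q[state, action])
        state = next_state

print(Q)  # the greedy policy should prefer action 1 (move right) in every state

The deep Q-learning variant in Section 4.2.3 replaces the table Q with a neural network trained on the same update target.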
Summary/abstract: "This book provides fundamental background on Deep Reinforcement Learning (DRL) and then studies recent advances in DRL to address practical challenges in wireless communications and networking. In particular, this book first gives a tutorial of DRL from basic concepts to advanced modelling techniques to motivate and provide fundamental knowledge for the readers. The authors then provide case studies together with implementation details to help readers better understand how to practice and apply DRL to their problems. After that, they review DRL approaches that address emerging issues in communications and networking. Finally, the authors highlight important challenges, open issues, and future research directions of applying DRL in wireless networks."--
Authorized title: Deep reinforcement learning for wireless communications and networking
ISBN: 1-119-87374-6
1-119-87368-1
1-119-87373-8
Format: Printed material
Bibliographic level: Monograph
Language of publication: English
Record no.: 9910830760503321
Find it here: Univ. Federico II