Human-robot interaction control using reinforcement learning / Wen Yu, Adolfo Perrusquia.
Hoboken, New Jersey : IEEE Press : Wiley, [2022], ©2022.
1 online resource (289 pages).
Series: IEEE Press series on systems science and engineering.
Language: English.
ISBN: 1-119-78276-7; 1-119-78277-5; 1-119-78275-9; 1-119-78274-0.
Identifiers: (OCoLC)1273975487; (MiAaPQ)EBC6735011; (Au-PeEL)EBL6735011; (CKB)4100000012037087; (EXLCZ)994100000012037087.
Note: Includes bibliographical references and index.

Contents:
Cover -- Title Page -- Copyright -- Contents -- Author Biographies -- List of Figures -- List of Tables -- Preface --
Part I Human-Robot Interaction Control --
Chapter 1 Introduction -- 1.1 Human-Robot Interaction Control -- 1.2 Reinforcement Learning for Control -- 1.3 Structure of the Book -- References --
Chapter 2 Environment Model of Human-Robot Interaction -- 2.1 Impedance and Admittance -- 2.2 Impedance Model for Human-Robot Interaction -- 2.3 Identification of Human-Robot Interaction Model -- 2.4 Conclusions -- References --
Chapter 3 Model Based Human-Robot Interaction Control -- 3.1 Task Space Impedance/Admittance Control -- 3.2 Joint Space Impedance Control -- 3.3 Accuracy and Robustness -- 3.4 Simulations -- 3.5 Conclusions -- References --
Chapter 4 Model Free Human-Robot Interaction Control -- 4.1 Task-Space Control Using Joint-Space Dynamics -- 4.2 Task-Space Control Using Task-Space Dynamics -- 4.3 Joint Space Control -- 4.4 Simulations -- 4.5 Experiments -- 4.6 Conclusions -- References --
Chapter 5 Human-in-the-Loop Control Using Euler Angles -- 5.1 Introduction -- 5.2 Joint-Space Control -- 5.3 Task-Space Control -- 5.4 Experiments -- 5.5 Conclusions -- References --
Part II Reinforcement Learning for Robot Interaction Control --
Chapter 6 Reinforcement Learning for Robot Position/Force Control -- 6.1 Introduction -- 6.2 Position/Force Control Using an Impedance Model -- 6.3 Reinforcement Learning Based Position/Force Control -- 6.4 Simulations and Experiments -- 6.5 Conclusions -- References --
Chapter 7 Continuous-Time Reinforcement Learning for Force Control -- 7.1 Introduction -- 7.2 K-means Clustering for Reinforcement Learning -- 7.3 Position/Force Control Using Reinforcement Learning -- 7.4 Experiments -- 7.5 Conclusions -- References --
Chapter 8 Robot Control in Worst-Case Uncertainty Using Reinforcement Learning -- 8.1 Introduction -- 8.2 Robust Control Using Discrete-Time Reinforcement Learning -- 8.3 Double Q-Learning with k-Nearest Neighbors -- 8.4 Robust Control Using Continuous-Time Reinforcement Learning -- 8.5 Simulations and Experiments: Discrete-Time Case -- 8.6 Simulations and Experiments: Continuous-Time Case -- 8.7 Conclusions -- References --
Chapter 9 Redundant Robots Control Using Multi-Agent Reinforcement Learning -- 9.1 Introduction -- 9.2 Redundant Robot Control -- 9.3 Multi-Agent Reinforcement Learning for Redundant Robot Control -- 9.4 Simulations and Experiments -- 9.5 Conclusions -- References --
Chapter 10 Robot ℋ2 Neural Control Using Reinforcement Learning -- 10.1 Introduction -- 10.2 ℋ2 Neural Control Using Discrete-Time Reinforcement Learning -- 10.3 ℋ2 Neural Control in Continuous Time -- 10.4 Examples -- 10.5 Conclusion -- References --
Chapter 11 Conclusions --
Appendix A Robot Kinematics and Dynamics -- A.1 Kinematics -- A.2 Dynamics -- A.3 Examples -- References --
Appendix B Reinforcement Learning for Control -- B.1 Markov Decision Processes -- B.2 Value Functions -- B.3 Iterations -- B.4 TD Learning -- Reference --
Index -- EULA.

Series added entry: IEEE Press series on systems science and engineering.
Subject: Human-robot interaction.
Dewey classification: 629.8924019.
Authors: Yu, Wen (Robotics engineer); Perrusquia, Adolfo.
Cataloging source: MiAaPQ.
Record ID: 9910830945503321 (UNINA); material type: BOOK.