
Human-robot interaction control using reinforcement learning / Wen Yu, Adolfo Perrusquia




Author: Yu, Wen (Robotics engineer)
Title: Human-robot interaction control using reinforcement learning / Wen Yu, Adolfo Perrusquia
Publication: Hoboken, New Jersey : IEEE Press : Wiley, [2022]
©2022
Physical description: 1 online resource (289 pages)
Discipline (Dewey): 629.8924019
Topical subject: Human-robot interaction
Genre/form: Electronic books.
Secondary author: Perrusquia, Adolfo
Bibliography note: Includes bibliographical references and index.
Contents note: Cover -- Title Page -- Copyright -- Contents -- Author Biographies -- List of Figures -- List of Tables -- Preface -- Part I Human‐robot Interaction Control -- Chapter 1 Introduction -- 1.1 Human‐Robot Interaction Control -- 1.2 Reinforcement Learning for Control -- 1.3 Structure of the Book -- References -- Chapter 2 Environment Model of Human‐Robot Interaction -- 2.1 Impedance and Admittance -- 2.2 Impedance Model for Human‐Robot Interaction -- 2.3 Identification of Human‐Robot Interaction Model -- 2.4 Conclusions -- References -- Chapter 3 Model Based Human‐Robot Interaction Control -- 3.1 Task Space Impedance/Admittance Control -- 3.2 Joint Space Impedance Control -- 3.3 Accuracy and Robustness -- 3.4 Simulations -- 3.5 Conclusions -- References -- Chapter 4 Model Free Human‐Robot Interaction Control -- 4.1 Task‐Space Control Using Joint‐Space Dynamics -- 4.2 Task‐Space Control Using Task‐Space Dynamics -- 4.3 Joint Space Control -- 4.4 Simulations -- 4.5 Experiments -- 4.6 Conclusions -- References -- Chapter 5 Human‐in‐the‐loop Control Using Euler Angles -- 5.1 Introduction -- 5.2 Joint‐Space Control -- 5.3 Task‐Space Control -- 5.4 Experiments -- 5.5 Conclusions -- References -- Part II Reinforcement Learning for Robot Interaction Control -- Chapter 6 Reinforcement Learning for Robot Position/Force Control -- 6.1 Introduction -- 6.2 Position/Force Control Using an Impedance Model -- 6.3 Reinforcement Learning Based Position/Force Control -- 6.4 Simulations and Experiments -- 6.5 Conclusions -- References -- Chapter 7 Continuous‐Time Reinforcement Learning for Force Control -- 7.1 Introduction -- 7.2 K‐means Clustering for Reinforcement Learning -- 7.3 Position/Force Control Using Reinforcement Learning -- 7.4 Experiments -- 7.5 Conclusions -- References -- Chapter 8 Robot Control in Worst‐Case Uncertainty Using Reinforcement Learning.
8.1 Introduction -- 8.2 Robust Control Using Discrete‐Time Reinforcement Learning -- 8.3 Double Q‐Learning with k‐Nearest Neighbors -- 8.4 Robust Control Using Continuous‐Time Reinforcement Learning -- 8.5 Simulations and Experiments: Discrete‐Time Case -- 8.6 Simulations and Experiments: Continuous‐Time Case -- 8.7 Conclusions -- References -- Chapter 9 Redundant Robots Control Using Multi‐Agent Reinforcement Learning -- 9.1 Introduction -- 9.2 Redundant Robot Control -- 9.3 Multi‐Agent Reinforcement Learning for Redundant Robot Control -- 9.4 Simulations and experiments -- 9.5 Conclusions -- References -- Chapter 10 Robot ℋ2 Neural Control Using Reinforcement Learning -- 10.1 Introduction -- 10.2 ℋ2 Neural Control Using Discrete‐Time Reinforcement Learning -- 10.3 ℋ2 Neural Control in Continuous Time -- 10.4 Examples -- 10.5 Conclusion -- References -- Chapter 11 Conclusions -- A Robot Kinematics and Dynamics -- A.1 Kinematics -- A.2 Dynamics -- A.3 Examples -- References -- B Reinforcement Learning for Control -- B.1 Markov decision processes -- B.2 Value functions -- B.3 Iterations -- B.4 TD learning -- Reference -- Index -- EULA.
Authorized title: Human-Robot Interaction Control Using Reinforcement Learning
ISBN: 1-119-78276-7
1-119-78277-5
1-119-78275-9
Format: Printed material
Bibliographic level: Monograph
Language of publication: English
Record no.: 9910555251603321
Held at: Univ. Federico II
Series: IEEE Press series on systems science and engineering.