LEADER 01479nam2-2200397---450-
001 990000784460203316
005 20100119103840.0
010 $a88-348-8011-0
035 $a0078446
035 $aUSA010078446
035 $a(ALEPH)000078446USA01
035 $a0078446
100 $a20011203d1998----km-y0itay0103----ba
101 $aita
102 $aIT
105 $a||||||||001yy
200 1 $aAmministrazione pubblica e beni ecclesiastici$el'amministrazione del patrimonio negli accordi di Villa Madama$fMaria Fausta Maternini Zotta
210 $aTorino$cG. Giappichelli$d1998
215 $a181 p$d24 cm
225 2 $aCollana di studi di diritto canonico ed ecclesiastico$iSezione ecclesiastica$v15
410 0$1001000312376$12001$aCollana di studi di diritto canonico ed ecclesiastico$iSez. ecclesiastica
606 0 $aBeni ecclesiastici$xAmministrazione$xDiritto canonico
676 $a262.94
700 1$aMATERNINI ZOTTA,$bMaria Fausta$0231688
801 0$aIT$bsalbc$gISBD
912 $a990000784460203316
951 $aXXIV.4. Coll. 3/ 5 (COLL. POT 15)$b17052 G$cXXIV.4. Coll. 3/ 5 (COLL. POT)$d00264545
959 $aBK
969 $aGIU
979 $aPATTY$b90$c20011203$lUSA01$h1112
979 $c20020403$lUSA01$h1725
979 $aPATRY$b90$c20040406$lUSA01$h1654
979 $aALESSANDRA$b90$c20080701$lUSA01$h1250
979 $aRSIAV2$b90$c20100119$lUSA01$h1038
996 $aAmministrazione pubblica e beni ecclesiastici$9625442
997 $aUNISA

LEADER 04614nam 2200493 450
001 9910555251603321
005 20220626195430.0
010 $a1-119-78276-7
010 $a1-119-78277-5
010 $a1-119-78275-9
035 $a(CKB)4100000012037087
035 $a(MiAaPQ)EBC6735011
035 $a(Au-PeEL)EBL6735011
035 $a(OCoLC)1273975487
035 $a(EXLCZ)994100000012037087
100 $a20220626d2022 uy 0
101 0 $aeng
135 $aurcnu||||||||
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 10$aHuman-robot interaction control using reinforcement learning /$fWen Yu, Adolfo Perrusquia
210 1$aHoboken, New Jersey :$cIEEE Press :$cWiley,$d[2022]
210 4$d©2022
215 $a1 online resource (289 pages)
225 1 $aIEEE Press series on systems science and engineering
311 $a1-119-78274-0
320 $aIncludes bibliographical references and index.
327 $aCover -- Title Page -- Copyright -- Contents -- Author Biographies -- List of Figures -- List of Tables -- Preface -- Part I Human-Robot Interaction Control -- Chapter 1 Introduction -- 1.1 Human-Robot Interaction Control -- 1.2 Reinforcement Learning for Control -- 1.3 Structure of the Book -- References -- Chapter 2 Environment Model of Human-Robot Interaction -- 2.1 Impedance and Admittance -- 2.2 Impedance Model for Human-Robot Interaction -- 2.3 Identification of Human-Robot Interaction Model -- 2.4 Conclusions -- References -- Chapter 3 Model Based Human-Robot Interaction Control -- 3.1 Task Space Impedance/Admittance Control -- 3.2 Joint Space Impedance Control -- 3.3 Accuracy and Robustness -- 3.4 Simulations -- 3.5 Conclusions -- References -- Chapter 4 Model Free Human-Robot Interaction Control -- 4.1 Task-Space Control Using Joint-Space Dynamics -- 4.2 Task-Space Control Using Task-Space Dynamics -- 4.3 Joint Space Control -- 4.4 Simulations -- 4.5 Experiments -- 4.6 Conclusions -- References -- Chapter 5 Human-in-the-Loop Control Using Euler Angles -- 5.1 Introduction -- 5.2 Joint-Space Control -- 5.3 Task-Space Control -- 5.4 Experiments -- 5.5 Conclusions -- References -- Part II Reinforcement Learning for Robot Interaction Control -- Chapter 6 Reinforcement Learning for Robot Position/Force Control -- 6.1 Introduction -- 6.2 Position/Force Control Using an Impedance Model -- 6.3 Reinforcement Learning Based Position/Force Control -- 6.4 Simulations and Experiments -- 6.5 Conclusions -- References -- Chapter 7 Continuous-Time Reinforcement Learning for Force Control -- 7.1 Introduction -- 7.2 K-means Clustering for Reinforcement Learning -- 7.3 Position/Force Control Using Reinforcement Learning -- 7.4 Experiments -- 7.5 Conclusions -- References -- Chapter 8 Robot Control in Worst-Case Uncertainty Using Reinforcement Learning.
327 $a8.1 Introduction -- 8.2 Robust Control Using Discrete-Time Reinforcement Learning -- 8.3 Double Q-Learning with k-Nearest Neighbors -- 8.4 Robust Control Using Continuous-Time Reinforcement Learning -- 8.5 Simulations and Experiments: Discrete-Time Case -- 8.6 Simulations and Experiments: Continuous-Time Case -- 8.7 Conclusions -- References -- Chapter 9 Redundant Robots Control Using Multi-Agent Reinforcement Learning -- 9.1 Introduction -- 9.2 Redundant Robot Control -- 9.3 Multi-Agent Reinforcement Learning for Redundant Robot Control -- 9.4 Simulations and Experiments -- 9.5 Conclusions -- References -- Chapter 10 Robot H2 Neural Control Using Reinforcement Learning -- 10.1 Introduction -- 10.2 H2 Neural Control Using Discrete-Time Reinforcement Learning -- 10.3 H2 Neural Control in Continuous Time -- 10.4 Examples -- 10.5 Conclusion -- References -- Chapter 11 Conclusions -- A Robot Kinematics and Dynamics -- A.1 Kinematics -- A.2 Dynamics -- A.3 Examples -- References -- B Reinforcement Learning for Control -- B.1 Markov Decision Processes -- B.2 Value Functions -- B.3 Iterations -- B.4 TD Learning -- Reference -- Index -- EULA.
410 0$aIEEE Press series on systems science and engineering.
606 $aHuman-robot interaction
608 $aElectronic books.
615 0$aHuman-robot interaction.
676 $a629.8924019
700 $aYu$b Wen$c(Robotics engineer),$0760806
702 $aPerrusquia$b Adolfo
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910555251603321
996 $aHuman-Robot Interaction Control Using Reinforcement Learning$92820805
997 $aUNINA