LEADER 00827nam0-22002891i-450-
001 990000551820403321
005 20001010
035 $a000055182
035 $aFED01000055182
035 $a(Aleph)000055182FED01
100 $a20001010d--------km-y0itay50------ba
101 0 $aita
105 $ay-------001yy
200 1 $aRecuperi navali in basso fondale$eLeonardo da Vinci, Duilio, Corazziere, Pontone posa massi$fArmando Andri
210 $aRoma$cEdizioni Ateneo$d1979
215 $a205 p.$d23 cm
225 1 $aProtagonisti$v4
700 1$aAndri,$bArmando$029837
801 0$aIT$bUNINA$gRICA$2UNIMARC
901 $aBK
912 $a990000551820403321
952 $a05 64 1$b2775$fDININ
959 $aDININ
996 $aRecuperi navali in basso fondale$9320241
997 $aUNINA
DB $aING01

LEADER 05500nam 2200685 450
001 9910824178803321
005 20200520144314.0
010 $a1-118-88448-5
010 $a1-118-88461-2
010 $a1-118-88447-7
035 $a(CKB)3710000000226951
035 $a(EBL)1775207
035 $a(DLC) 2014021985
035 $a(Au-PeEL)EBL1775207
035 $a(CaPaEBR)ebr10921255
035 $a(CaONFJC)MIL640727
035 $a(OCoLC)881065009
035 $a(CaSebORM)9781118362082
035 $a(MiAaPQ)EBC1775207
035 $a(EXLCZ)993710000000226951
100 $a20140902h20142014 uy 0
101 0 $aeng
135 $aur|n|---|||||
181 $2rdacontent
182 $2rdamedia
183 $2rdacarrier
200 10$aMulti-agent machine learning$ea reinforcement approach /$fHoward M. Schwartz
205 $a1st edition
210 1$aHoboken, New Jersey :$cJohn Wiley & Sons, Inc.,$d2014.
210 4$d©2014
215 $a1 online resource (458 p.)
300 $aDescription based upon print version of record.
311 $a1-322-09476-4
311 $a1-118-36208-X
320 $aIncludes bibliographical references at the end of each chapter and index.
327 $aCover; Title Page; Copyright; Preface; References; Chapter 1: A Brief Review of Supervised Learning; 1.1 Least Squares Estimates; 1.2 Recursive Least Squares; 1.3 Least Mean Squares; 1.4 Stochastic Approximation; References; Chapter 2: Single-Agent Reinforcement Learning; 2.1 Introduction; 2.2 n-Armed Bandit Problem; 2.3 The Learning Structure; 2.4 The Value Function; 2.5 The Optimal Value Functions; 2.6 Markov Decision Processes; 2.7 Learning Value Functions; 2.8 Policy Iteration; 2.9 Temporal Difference Learning; 2.10 TD Learning of the State-Action Function; 2.11 Q-Learning
327 $a2.12 Eligibility Traces; References; Chapter 3: Learning in Two-Player Matrix Games; 3.1 Matrix Games; 3.2 Nash Equilibria in Two-Player Matrix Games; 3.3 Linear Programming in Two-Player Zero-Sum Matrix Games; 3.4 The Learning Algorithms; 3.5 Gradient Ascent Algorithm; 3.6 WoLF-IGA Algorithm; 3.7 Policy Hill Climbing (PHC); 3.8 WoLF-PHC Algorithm; 3.9 Decentralized Learning in Matrix Games; 3.10 Learning Automata; 3.11 Linear Reward-Inaction Algorithm; 3.12 Linear Reward-Penalty Algorithm; 3.13 The Lagging Anchor Algorithm; 3.14 L R-I Lagging Anchor Algorithm; References
327 $aChapter 4: Learning in Multiplayer Stochastic Games; 4.1 Introduction; 4.2 Multiplayer Stochastic Games; 4.3 Minimax-Q Algorithm; 4.4 Nash Q-Learning; 4.5 The Simplex Algorithm; 4.6 The Lemke-Howson Algorithm; 4.7 Nash-Q Implementation; 4.8 Friend-or-Foe Q-Learning; 4.9 Infinite Gradient Ascent; 4.10 Policy Hill Climbing; 4.11 WoLF-PHC Algorithm; 4.12 Guarding a Territory Problem in a Grid World; 4.13 Extension of L R-I Lagging Anchor Algorithm to Stochastic Games; 4.14 The Exponential Moving-Average Q-Learning (EMA Q-Learning) Algorithm
327 $a4.15 Simulation and Results Comparing EMA Q-Learning to Other Methods; References; Chapter 5: Differential Games; 5.1 Introduction; 5.2 A Brief Tutorial on Fuzzy Systems; 5.3 Fuzzy Q-Learning; 5.4 Fuzzy Actor-Critic Learning; 5.5 Homicidal Chauffeur Differential Game; 5.6 Fuzzy Controller Structure; 5.7 Q(λ)-Learning Fuzzy Inference System; 5.8 Simulation Results for the Homicidal Chauffeur; 5.9 Learning in the Evader-Pursuer Game with Two Cars; 5.10 Simulation of the Game of Two Cars; 5.11 Differential Game of Guarding a Territory
327 $a5.12 Reward Shaping in the Differential Game of Guarding a Territory; 5.13 Simulation Results; References; Chapter 6: Swarm Intelligence and the Evolution of Personality Traits; 6.1 Introduction; 6.2 The Evolution of Swarm Intelligence; 6.3 Representation of the Environment; 6.4 Swarm-Based Robotics in Terms of Personalities; 6.5 Evolution of Personality Traits; 6.6 Simulation Framework; 6.7 A Zero-Sum Game Example; 6.8 Implementation for Next Sections; 6.9 Robots Leaving a Room; 6.10 Tracking a Target; 6.11 Conclusion; References; Index; End User License Agreement
330 $a"Multi-Agent Machine Learning: A Reinforcement Learning Approach is a framework for understanding different methods and approaches in multi-agent machine learning. It also provides cohesive coverage of the latest advances in multi-agent differential games and presents applications in game theory and robotics. Framework for understanding a variety of methods and approaches in multi-agent machine learning. Discusses methods of reinforcement learning such as a number of forms of multi-agent Q-learning. Applicable to research professors and graduate students studying electrical and computer engineering, computer science, and mechanical and aerospace engineering"--$cProvided by publisher.
330 $a"Provides in-depth coverage of multi-player, differential games and game theory"--$cProvided by publisher.
606 $aReinforcement learning
606 $aDifferential games
606 $aSwarm intelligence
606 $aMachine learning
615 0$aReinforcement learning.
615 0$aDifferential games.
615 0$aSwarm intelligence.
615 0$aMachine learning.
676 $a519.3
686 $aTEC008000$2bisacsh
700 $aSchwartz$bHoward M.$0127910
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910824178803321
996 $aMulti-agent machine learning$94029957
997 $aUNINA