LEADER 04402nam 2201093z- 450
001 9910404084203321
005 20210211
010 $a3-03921-854-9
035 $a(CKB)4100000011302296
035 $a(oapen)https://directory.doabooks.org/handle/20.500.12854/41358
035 $a(oapen)doab41358
035 $a(EXLCZ)994100000011302296
100 $a20202102d2020 |y 0
101 0 $aeng
135 $aurmn|---annan
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 00$aArtificial Superintelligence: Coordination & Strategy
210 $cMDPI - Multidisciplinary Digital Publishing Institute$d2020
215 $a1 online resource (206 p.)
311 08$a3-03921-855-7
330 $aAttention in the AI safety community has increasingly broadened to include strategic considerations of coordination between relevant actors in the field of AI and AI safety, in addition to the steadily growing body of work on the technical aspects of building safe AI systems. This shift has several motivations: multiplier effects, pragmatism, and urgency. Given the benefits of coordination between those working towards safe superintelligence, this book surveys promising research in this emerging area of AI safety. On a meta-level, the hope is that this book can serve as a map to inform those working in the field of AI coordination about other promising efforts. While this book focuses on AI safety coordination, coordination is important to most other known existential risks (e.g., biotechnology risks) as well as future, human-made existential risks. Thus, while most coordination strategies in this book are specific to superintelligence, we hope that some insights yield "collateral benefits" for the reduction of other existential risks, by creating an overall civilizational framework that increases robustness, resiliency, and antifragility.
517 $aArtificial Superintelligence
606 $aHistory of engineering and technology$2bicssc
610 $aadaptive learning systems
610 $aAGI
610 $aAI
610 $aAI alignment
610 $aAI arms race
610 $aAI containment
610 $aAI forecasting
610 $aAI governance
610 $aAI risk
610 $aAI safety
610 $aAI Thinking
610 $aAI value alignment
610 $aAI welfare policies
610 $aAI welfare science
610 $aantispeciesism
610 $aartificial general intelligence
610 $aartificial intelligence
610 $aartificial intelligence safety
610 $aartificial superintelligence
610 $aartilects
610 $aASILOMAR
610 $aautonomous distributed system
610 $aBayesian networks
610 $ablockchain
610 $aconflict
610 $adesign for values
610 $adistributed goals management
610 $adistributed ledger
610 $aethics
610 $aexistential risk
610 $aexplainable AI
610 $aforecasting AI behavior
610 $afuture-ready
610 $aGoodhart's Law
610 $aholistic forecasting framework
610 $ahuman-centric reasoning
610 $ahuman-in-the-loop
610 $ajudgmental distillation mapping
610 $amachine learning
610 $amoral and ethical behavior
610 $amulti-agent systems
610 $apedagogical motif
610 $apolicy making on AI
610 $apolicymaking process
610 $apredictive optimization
610 $asafe for design
610 $ascenario analysis
610 $ascenario mapping
610 $ascenario network mapping
610 $asentiocentrism
610 $asimulations
610 $aspecification gaming
610 $astrategic oversight
610 $asuperintelligence
610 $asupermorality
610 $atechnological singularity
610 $atechnology forecasting
610 $aterraforming
610 $atransformative AI
610 $atypologies of AI policy
610 $avalue sensitive design
610 $aVSD
615 7$aHistory of engineering and technology
700 $aYampolskiy$b Roman$4auth$01278170
702 $aDuettmann$b Allison$4auth
906 $aBOOK
912 $a9910404084203321
996 $aArtificial Superintelligence: Coordination & Strategy$93012755
997 $aUNINA