| Author: | Yampolskiy, Roman |
| Title: | Artificial Superintelligence: Coordination & Strategy |
| Publication: | MDPI - Multidisciplinary Digital Publishing Institute, 2020 |
| Physical description: | 1 online resource (206 p.) |
| Topical subject: | History of engineering and technology |
| Uncontrolled subject terms: | adaptive learning systems; AGI; AI; AI alignment; AI arms race; AI containment; AI forecasting; AI governance; AI risk; AI safety; AI Thinking; AI value alignment; AI welfare policies; AI welfare science; antispeciesism; artificial general intelligence; artificial intelligence; artificial intelligence safety; artificial superintelligence; artilects; ASILOMAR; autonomous distributed system; Bayesian networks; blockchain; conflict; design for values; distributed goals management; distributed ledger; ethics; existential risk; explainable AI; forecasting AI behavior; future-ready; Goodhart's Law; holistic forecasting framework; human-centric reasoning; human-in-the-loop; judgmental distillation mapping; machine learning; moral and ethical behavior; multi-agent systems; pedagogical motif; policy making on AI; policymaking process; predictive optimization; safe for design; scenario analysis; scenario mapping; scenario network mapping; sentiocentrism; simulations; specification gaming; strategic oversight; superintelligence; supermorality; technological singularity; technology forecasting; terraforming; transformative AI; typologies of AI policy; value sensitive design; VSD |
| Secondary responsibility: | Duettmann, Allison |
| Summary/abstract: | Attention in the AI safety community has increasingly come to include strategic considerations of coordination between relevant actors in the field of AI and AI safety, in addition to the steadily growing work on the technical considerations of building safe AI systems. This shift has several reasons: multiplier effects, pragmatism, and urgency. Given the benefits of coordination between those working towards safe superintelligence, this book surveys promising research in this emerging field of AI safety. On a meta-level, the hope is that this book can serve as a map to inform those working in the field of AI coordination about other promising efforts. While this book focuses on AI safety coordination, coordination is important to most other known existential risks (e.g., biotechnology risks) and to future, human-made existential risks. Thus, while most coordination strategies in this book are specific to superintelligence, we hope that some insights yield "collateral benefits" for the reduction of other existential risks, by creating an overall civilizational framework that increases robustness, resiliency, and antifragility. |
| Other variant titles: | Artificial Superintelligence |
| Authorized title: | Artificial Superintelligence: Coordination & Strategy |
| ISBN: | 3-03921-854-9 |
| Format: | Printed material |
| Bibliographic level: | Monograph |
| Language of publication: | English |
| Record no.: | 9910404084203321 |
| Available at: | Univ. Federico II |
| OPAC: | Check availability here |