LEADER 05911nam 22007215 450
001 996466320103316
005 20211006184241.0
010 $a3-030-28954-0
024 7 $a10.1007/978-3-030-28954-6
035 $a(CKB)4100000009191108
035 $a(DE-He213)978-3-030-28954-6
035 $a(MiAaPQ)EBC5927126
035 $a(PPN)248601466
035 $a(EXLCZ)994100000009191108
100 $a20190829d2019 u| 0
101 0 $aeng
135 $aurnn#008mamaa
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 10$aExplainable AI: Interpreting, Explaining and Visualizing Deep Learning$b[electronic resource] /$fedited by Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, Klaus-Robert Müller
205 $a1st ed. 2019.
210 1$aCham :$cSpringer International Publishing :$cImprint: Springer,$d2019.
215 $a1 online resource (XI, 439 p. 152 illus., 119 illus. in color.)
225 1 $aLecture Notes in Artificial Intelligence ;$v11700
311 $a3-030-28953-2
320 $aIncludes bibliographical references and index.
327 $aTowards Explainable Artificial Intelligence -- Transparency: Motivations and Challenges -- Interpretability in Intelligent Systems: A New Concept? -- Understanding Neural Networks via Feature Visualization: A Survey -- Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation -- Unsupervised Discrete Representation Learning -- Towards Reverse-Engineering Black-Box Neural Networks -- Explanations for Attributing Deep Neural Network Predictions -- Gradient-Based Attribution Methods -- Layer-Wise Relevance Propagation: An Overview -- Explaining and Interpreting LSTMs -- Comparing the Interpretability of Deep Networks via Network Dissection -- Gradient-Based vs. Propagation-Based Explanations: An Axiomatic Comparison -- The (Un)reliability of Saliency Methods -- Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation -- Understanding Patch-Based Learning of Video Data by Explaining Predictions -- Quantum-Chemical Insights from Interpretable Atomistic Neural Networks -- Interpretable Deep Learning in Drug Discovery -- Neural Hydrology: Interpreting LSTMs in Hydrology -- Feature Fallacy: Complications with Interpreting Linear Decoding Weights in fMRI -- Current Advances in Neural Decoding -- Software and Application Patterns for Explanation Methods.
330 $aThe development of "intelligent" systems that can take decisions and perform autonomously might lead to faster and more consistent decisions. A limiting factor for broader adoption of AI technology is the inherent risk that comes with giving up human control and oversight to "intelligent" machines. For sensitive tasks involving critical infrastructures and affecting human well-being or health, it is crucial to limit the possibility of improper, non-robust and unsafe decisions and actions. Before deploying an AI system, we see a strong need to validate its behavior, and thus establish guarantees that it will continue to perform as expected when deployed in a real-world environment. In pursuit of that objective, ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI, focused on exposing complex AI models to humans in a systematic and interpretable manner. The 22 chapters included in this book provide a timely snapshot of algorithms, theory, and applications of interpretable and explainable AI techniques that have recently been proposed, reflecting the current discourse in this field and providing directions for future development. The book is organized in six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI.
410 0$aLecture Notes in Artificial Intelligence ;$v11700
606 $aArtificial intelligence
606 $aOptical data processing
606 $aComputers
606 $aComputer security
606 $aComputer organization
606 $aArtificial Intelligence$3https://scigraph.springernature.com/ontologies/product-market-codes/I21000
606 $aImage Processing and Computer Vision$3https://scigraph.springernature.com/ontologies/product-market-codes/I22021
606 $aComputing Milieux$3https://scigraph.springernature.com/ontologies/product-market-codes/I24008
606 $aSystems and Data Security$3https://scigraph.springernature.com/ontologies/product-market-codes/I28060
606 $aComputer Systems Organization and Communication Networks$3https://scigraph.springernature.com/ontologies/product-market-codes/I13006
615 0$aArtificial intelligence.
615 0$aOptical data processing.
615 0$aComputers.
615 0$aComputer security.
615 0$aComputer organization.
615 14$aArtificial Intelligence.
615 24$aImage Processing and Computer Vision.
615 24$aComputing Milieux.
615 24$aSystems and Data Security.
615 24$aComputer Systems Organization and Communication Networks.
676 $a006.32
702 $aSamek$b Wojciech$4edt$4http://id.loc.gov/vocabulary/relators/edt
702 $aMontavon$b Grégoire$4edt$4http://id.loc.gov/vocabulary/relators/edt
702 $aVedaldi$b Andrea$4edt$4http://id.loc.gov/vocabulary/relators/edt
702 $aHansen$b Lars Kai$4edt$4http://id.loc.gov/vocabulary/relators/edt
702 $aMüller$b Klaus-Robert$4edt$4http://id.loc.gov/vocabulary/relators/edt
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a996466320103316
996 $aExplainable AI$92821265
997 $aUNISA