LEADER 05884nam 22006375 450
001 9910872189603321
005 20250806170130.0
010 $a3-031-63803-4
024 7 $a10.1007/978-3-031-63803-9
035 $a(CKB)32775679400041
035 $a(MiAaPQ)EBC31523169
035 $a(Au-PeEL)EBL31523169
035 $a(DE-He213)978-3-031-63803-9
035 $a(EXLCZ)9932775679400041
100 $a20240710d2024 u| 0
101 0 $aeng
135 $aur|||||||||||
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 10$aExplainable Artificial Intelligence $eSecond World Conference, xAI 2024, Valletta, Malta, July 17–19, 2024, Proceedings, Part IV /$fedited by Luca Longo, Sebastian Lapuschkin, Christin Seifert
205 $a1st ed. 2024.
210 1$aCham :$cSpringer Nature Switzerland :$cImprint: Springer,$d2024.
215 $a1 online resource (480 pages)
225 1 $aCommunications in Computer and Information Science,$x1865-0937 ;$v2156
311 08$a3-031-63802-6
327 $a -- Explainable AI in healthcare and computational neuroscience. -- SRFAMap: a method for mapping integrated gradients of a CNN trained with statistical radiomic features to medical image saliency maps. -- Transparently Predicting Therapy Compliance of Young Adults Following Ischemic Stroke. -- Precision medicine in student health: Insights from Tsetlin Machines into chronic pain and psychological distress. -- Evaluating Local Explainable AI Techniques for the Classification of Chest X-ray Images. -- Feature importance to explain multimodal prediction models. A clinical use case. -- Identifying EEG Biomarkers of Depression with Novel Explainable Deep Learning Architectures. -- Increasing Explainability in Time Series Classification by Functional Decomposition. -- Towards Evaluation of Explainable Artificial Intelligence in Streaming Data. -- Quantitative Evaluation of xAI Methods for Multivariate Time Series - A Case Study for a CNN-based MI Detection Model. -- Explainable AI for improved human-computer interaction and Software Engineering for explainability. -- Influenciae: A library for tracing the influence back to the data-points. -- Explainability Engineering Challenges: Connecting Explainability Levels to Run-time Explainability. -- On the Explainability of Financial Robo-advice Systems. -- Can I trust my anomaly detection system? A case study based on explainable AI. -- Explanations considered harmful: The Impact of misleading Explanations on Accuracy in hybrid human-AI decision making. -- Human emotions in AI explanations. -- Study on the Helpfulness of Explainable Artificial Intelligence. -- Applications of explainable artificial intelligence. -- Pricing Risk: An XAI Analysis of Irish Car Insurance Premiums. -- Exploring the Role of Explainable AI in the Development and Qualification of Aircraft Quality Assurance Processes: A Case Study. -- Explainable Artificial Intelligence applied to Predictive Maintenance: Comparison of Post-hoc Explainability Techniques. -- A comparative analysis of SHAP, LIME, ANCHORS, and DICE for interpreting a dense neural network in Credit Card Fraud Detection. -- Application of the representative measure approach to assess the reliability of decision trees in dealing with unseen vehicle collision data. -- Ensuring Safe Social Navigation via Explainable Probabilistic and Conformal Safety Regions. -- Explaining AI Decisions: Towards Achieving Human-Centered Explainability in Smart Home Environments. -- AcME-AD: Accelerated Model Explanations for Anomaly Detection.
330 $aThis four-volume set constitutes the refereed proceedings of the Second World Conference on Explainable Artificial Intelligence, xAI 2024, held in Valletta, Malta, during July 17–19, 2024. The 95 full papers presented were carefully reviewed and selected from 204 submissions. The conference papers are organized in topical sections on: Part I - intrinsically interpretable XAI and concept-based global explainability; generative explainable AI and verifiability; notion, metrics, evaluation and benchmarking for XAI. Part II - XAI for graphs and computer vision; logic, reasoning, and rule-based explainable AI; model-agnostic and statistical methods for eXplainable AI. Part III - counterfactual explanations and causality for eXplainable AI; fairness, trust, privacy, security, accountability and actionability in eXplainable AI. Part IV - explainable AI in healthcare and computational neuroscience; explainable AI for improved human-computer interaction and software engineering for explainability; applications of explainable artificial intelligence.
410 0$aCommunications in Computer and Information Science,$x1865-0937 ;$v2156
606 $aArtificial intelligence
606 $aNatural language processing (Computer science)
606 $aApplication software
606 $aComputer networks
606 $aArtificial Intelligence
606 $aNatural Language Processing (NLP)
606 $aComputer and Information Systems Applications
606 $aComputer Communication Networks
615 0$aArtificial intelligence.
615 0$aNatural language processing (Computer science)
615 0$aApplication software.
615 0$aComputer networks.
615 14$aArtificial Intelligence.
615 24$aNatural Language Processing (NLP).
615 24$aComputer and Information Systems Applications.
615 24$aComputer Communication Networks.
676 $a006.3
700 $aLongo$b Luca$01337583
701 $aLapuschkin$b Sebastian$01744132
701 $aSeifert$b Christin$01744133
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910872189603321
996 $aExplainable Artificial Intelligence$94173978
997 $aUNINA