Record 1 (UNISALENTO, system no. 991002788329707536)

Title: Differential geometry : proceedings of the 3rd international symposium, held at Peniscola, Spain, June 5-12, 1988 / edited by Francisco J. Carreras, Olga Gil-Medrano, Antonio M. Naveira
Publication: Berlin : Springer, 1989
Description: 308 p. ; 25 cm
Series: Lecture Notes in Mathematics, ISSN 0075-8434 ; 1410
ISBN: 978-3-540-46858-5
Language: English
Classification: 516.36 (Dewey, 23rd ed.); AMS 53A; AMS 53C; AMS 57R
Subjects: Mathematics; Global differential geometry
Added entries: Carreras, Francisco J.; Gil-Medrano, Olga; Naveira, Antonio M.
Holdings: Bibl. Dip.le Aggr. Matematica e Fisica - Sez. Matematica; call number LE013 Fondo Cattaneo 53A CAR11 (1989)

Record 2 (UNINA, system no. 9910872189603321)

Title: Explainable Artificial Intelligence : Second World Conference, xAI 2024, Valletta, Malta, July 17–19, 2024, Proceedings, Part IV / edited by Luca Longo, Sebastian Lapuschkin, Christin Seifert
Edition: 1st ed. 2024
Publication: Cham : Springer Nature Switzerland : Imprint: Springer, 2024
Description: 1 online resource (480 pages)
Series: Communications in Computer and Information Science, ISSN 1865-0937 ; 2156
ISBN: 3-031-63803-4 (electronic); 3-031-63802-6
DOI: 10.1007/978-3-031-63803-9
Other identifiers: (CKB)32775679400041; (MiAaPQ)EBC31523169; (Au-PeEL)EBL31523169; (DE-He213)978-3-031-63803-9; (EXLCZ)9932775679400041
Language: English

Contents:
-- Explainable AI in healthcare and computational neuroscience.
-- SRFAMap: a method for mapping integrated gradients of a CNN trained with statistical radiomic features to medical image saliency maps.
-- Transparently Predicting Therapy Compliance of Young Adults Following Ischemic Stroke.
-- Precision medicine in student health: Insights from Tsetlin Machines into chronic pain and psychological distress.
-- Evaluating Local Explainable AI Techniques for the Classification of Chest X-ray Images.
-- Feature importance to explain multimodal prediction models. A clinical use case.
-- Identifying EEG Biomarkers of Depression with Novel Explainable Deep Learning Architectures.
-- Increasing Explainability in Time Series Classification by Functional Decomposition.
-- Towards Evaluation of Explainable Artificial Intelligence in Streaming Data.
-- Quantitative Evaluation of xAI Methods for Multivariate Time Series - A Case Study for a CNN-based MI Detection Model.
-- Explainable AI for improved human-computer interaction and Software Engineering for explainability.
-- Influenciae: A library for tracing the influence back to the data-points.
-- Explainability Engineering Challenges: Connecting Explainability Levels to Run-time Explainability.
-- On the Explainability of Financial Robo-advice Systems.
-- Can I trust my anomaly detection system? A case study based on explainable AI.
-- Explanations considered harmful: The Impact of misleading Explanations on Accuracy in hybrid human-AI decision making.
-- Human emotions in AI explanations.
-- Study on the Helpfulness of Explainable Artificial Intelligence.
-- Applications of explainable artificial intelligence.
-- Pricing Risk: An XAI Analysis of Irish Car Insurance Premiums.
-- Exploring the Role of Explainable AI in the Development and Qualification of Aircraft Quality Assurance Processes: A Case Study.
-- Explainable Artificial Intelligence applied to Predictive Maintenance: Comparison of Post-hoc Explainability Techniques.
-- A comparative analysis of SHAP, LIME, ANCHORS, and DICE for interpreting a dense neural network in Credit Card Fraud Detection.
-- Application of the representative measure approach to assess the reliability of decision trees in dealing with unseen vehicle collision data.
-- Ensuring Safe Social Navigation via Explainable Probabilistic and Conformal Safety Regions.
-- Explaining AI Decisions: Towards Achieving Human-Centered Explainability in Smart Home Environments.
-- AcME-AD: Accelerated Model Explanations for Anomaly Detection.

Summary: This four-volume set constitutes the refereed proceedings of the Second World Conference on Explainable Artificial Intelligence, xAI 2024, held in Valletta, Malta, during July 17-19, 2024. The 95 full papers presented were carefully reviewed and selected from 204 submissions. The conference papers are organized in topical sections on: Part I - intrinsically interpretable XAI and concept-based global explainability; generative explainable AI and verifiability; notion, metrics, evaluation and benchmarking for XAI. Part II - XAI for graphs and computer vision; logic, reasoning, and rule-based explainable AI; model-agnostic and statistical methods for eXplainable AI. Part III - counterfactual explanations and causality for eXplainable AI; fairness, trust, privacy, security, accountability and actionability in eXplainable AI. Part IV - explainable AI in healthcare and computational neuroscience; explainable AI for improved human-computer interaction and software engineering for explainability; applications of explainable artificial intelligence.

Subjects: Artificial intelligence; Natural language processing (Computer science); Application software; Computer networks; Artificial Intelligence; Natural Language Processing (NLP); Computer and Information Systems Applications; Computer Communication Networks
Classification: 006.3 (Dewey)
Added entries: Longo, Luca; Lapuschkin, Sebastian; Seifert, Christin
Cataloging source: MiAaPQ