Explainable Artificial Intelligence: Second World Conference, XAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part III
Edited by Luca Longo, Sebastian Lapuschkin, and Christin Seifert
1st ed. Cham: Springer International Publishing AG, 2024. ©2024
1 online resource (471 pages)
Series: Communications in Computer and Information Science; v. 2155
ISBN: 3-031-63800-X; 3-031-63799-2

Contents (Part III):

Intro -- Preface -- Organization -- Contents - Part III

Counterfactual Explanations and Causality for eXplainable AI

Sub-SpaCE: Subsequence-Based Sparse Counterfactual Explanations for Time Series Classification Problems
  1 Introduction -- 2 Related Work -- 3 Proposed Method -- 3.1 Problem Formulation -- 3.2 Sub-SpaCE: Subsequence-Based Sparse Counterfactual Explanations -- 4 Numerical Experiments -- 4.1 Set Up -- 4.2 Results -- 4.3 Ablation Study -- 5 Conclusions and Future Work -- References

Human-in-the-Loop Personalized Counterfactual Recourse
  1 Introduction -- 2 Related Work -- 3 Problem Statement -- 4 Framework -- 4.1 Personalized Counterfactual Generation -- 4.2 Preference Modeling -- 4.3 Preference Estimation -- 4.4 HIP-CORE Framework -- 4.5 Complexity of User Feedback -- 4.6 Limitations -- 5 Experiments -- 5.1 Experimental Setting -- 5.2 Overall Performance -- 5.3 Model-Agnostic Validation -- 5.4 Study on the Number of Iterations -- 5.5 Study on the Number of Decimal Places -- 5.6 Discussion and Ethical Implications -- 6 Conclusions -- A Appendix -- References

COIN: Counterfactual Inpainting for Weakly Supervised Semantic Segmentation for Medical Images
  1 Introduction -- 2 Related Works -- 2.1 Weakly Supervised Semantic Segmentation -- 2.2 Counterfactual Explanations -- 3 Counterfactual Approach for WSSS -- 3.1 Method Formulation -- 3.2 Image Generation Architecture -- 3.3 Loss Function for Training GAN -- 4 Experiments -- 4.1 Datasets -- 4.2 Evaluation -- 4.3 Implementation Details -- 4.4 Comparison with Modified Singla et al.* Method -- 5 Results -- 5.1 Ablation Experiments -- 6 Discussion -- 7 Conclusion -- A.1 Loss Function for Dual-Conditioning in Singla et al.* -- A.2 Synthetic Anomaly Generation -- A.3 Original vs Perturbation-Based Generator -- A.4 Influence of Skip Connections on the Generated Images Quality -- A.5 Counterfactual Explanation vs Counterfactual Inpainting Segmentation Accuracy -- References

Enhancing Counterfactual Explanation Search with Diffusion Distance and Directional Coherence
  1 Introduction -- 2 Related Work -- 3 Incorporating Novel Biases in Counterfactual Search -- 3.1 Using Diffusion Distance to Search for More Feasible Transitions -- 3.2 Directional Coherence -- 3.3 Bringing Feasibility and Directional Coherence into Counterfactual Objective Function -- 3.4 Evaluation Metrics -- 4 Experiments -- 4.1 Synthetic Datasets -- 4.2 Datasets with Continuous Features -- 4.3 Classification Datasets with Mix-Type Features -- 4.4 Benchmarking with Other Frameworks -- 5 Results -- 5.1 Diffusion Distance and Directional Coherence on Synthetic and Diabetes Datasets -- 5.2 Comparison of CoDiCE with Other Counterfactual Methods on Various Datasets -- 5.3 Ablation Experiments -- 6 Discussion -- 7 Conclusion -- A Appendix -- References

CountARFactuals - Generating Plausible Model-Agnostic Counterfactual Explanations with Adversarial Random Forests
  1 Introduction -- 2 Related Work -- 3 Background -- 3.1 Multi-objective Counterfactual Explanations -- 3.2 Generative Modeling and Adversarial Random Forests -- 4 Methods -- 4.1 Algorithm 1: Integrating ARF into MOC -- 4.2 Algorithm 2: ARF Is All You Need -- 5 Experiments -- 5.1 Data-Generating Process -- 5.2 Competing Methods -- 5.3 Evaluation Criteria -- 5.4 Results -- 6 Real Data Example -- 7 Discussion -- A Algorithm 1: Integrating ARF into MOC -- B Algorithm 2: ARF Is All You Need -- C Synthetic Data -- C.1 Illustrative Datasets -- C.2 Randomly Generated DGPs -- D Additional Empirical Results -- References

Causality-Aware Local Interpretable Model-Agnostic Explanations
  1 Introduction -- 2 Related Works -- 3 Background -- 4 Causality-Aware LIME -- 5 Experiments -- 5.1 Datasets and Classifiers -- 5.2 Comparison with Related Works -- 5.3 Evaluation Measures -- 5.4 Results -- 6 Conclusion -- References

Evaluating the Faithfulness of Causality in Saliency-Based Explanations of Deep Learning Models for Temporal Colour Constancy
  1 Introduction -- 2 Related Work -- 3 Proposed Neural Architectures -- 4 Original Methodology of the Tests -- 4.1 Adaptation of the Original Tests -- 5 Experimental Setup -- 6 Method -- 7 Results -- 7.1 Preliminary Accuracy Investigation -- 7.2 Test WP1 -- 7.3 Test WP2 -- 7.4 Discussion -- 8 Conclusions and Future Work -- A Extended Results of the Experiments -- References

CAGE: Causality-Aware Shapley Value for Global Explanations
  1 Introduction -- 2 Preliminaries and Notation -- 2.1 Causal Models and Interventions -- 2.2 Shapley Additive Global Importance -- 3 Causality-Aware Global Explanations -- 3.1 Global Causal Shapley Values -- 3.2 Properties of Global Causal Feature Importance -- 3.3 Computing Causal Shapley Values -- 4 Experiments -- 4.1 Experiments on Synthetic Data -- 4.2 Explanations on Alzheimer Data -- 5 Related Work -- 6 Discussion -- 7 Conclusion -- A Data-Generating Causal Models -- A.1 Direct-Cause Structure -- A.2 Markovian Structure -- A.3 Mixed Structure -- References

Fairness, Trust, Privacy, Security, Accountability and Actionability in eXplainable AI

Exploring the Reliability of SHAP Values in Reinforcement Learning
  1 Introduction -- 1.1 Shapley Values -- 1.2 Shapley Values for ML - SHAP -- 1.3 Contributions -- 2 Related Work -- 3 Benchmark Environments -- 4 Experiment 1: Dependency of KernelSHAP on Background Data -- 4.1 KernelSHAP and Background Data -- 4.2 Experimental Setup -- 4.3 Robustness of KernelSHAP -- 5 Experiment 2: Empirical Evaluation of SHAP-Based Feature Importance -- 5.1 Generalized Feature Importance -- 5.2 Experimental Setup -- 5.3 Performance Drop vs. Feature Importance -- 6 Interpretation of SHAP Time Dependency in RL -- 7 Conclusion and Outlook -- References

Categorical Foundation of Explainable AI: A Unifying Theory
  1 Introduction -- 2 Explainable AI Theory: Requirements -- 2.1 Category Theory: A Framework for (X)AI Processes -- 2.2 Institution Theory: A Framework for Explanations -- 3 Categorical Framework of Explainable AI -- 3.1 Abstract Learning Processes -- 3.2 Concrete Learning and Explaining Processes -- 4 Impact on XAI and Key Findings -- 4.1 Finding #1: Our Framework Models Existing Learning Schemes and Architectures -- 4.2 Finding #2: Our Framework Enables a Formal Definition of "explanation" -- 4.3 Finding #3: Our Framework Provides a Theoretical Foundation for XAI Taxonomies -- 4.4 Finding #4: Our Framework Emphasizes Commonly Overlooked Aspects of Explanations -- 5 Discussion -- A Elements of Category Theory -- A.1 Monoidal Categories -- A.2 Cartesian and Symmetric Monoidal Categories -- A.3 Feedback Monoidal Categories -- A.4 Free Categories -- A.5 Institutions -- References

Investigating Calibrated Classification Scores Through the Lens of Interpretability
  1 Introduction -- 2 Formal Setup -- 3 Desiderata for Calibration -- 3.1 Interplay of Strict Properties -- 4 Relaxed Desiderata for Calibration -- 4.1 Analysis of Cell Merging -- 4.2 Analysis of Average Label Assignment -- 5 Experimental Evaluation of Decision Tree Based Models -- 6 Concluding Discussion -- A Exploring the Probabilistic Count (PC) -- B Critiquing the Expected Calibration Error -- C Empirically Motivating the Probability Deviation Error -- References

XentricAI: A Gesture Sensing Calibration Approach Through Explainable and User-Centric AI
  1 Introduction -- 2 Background and Related Work -- 2.1 User-Centric XAI Techniques -- 2.2 Gesture Sensing Model Calibration Using Experience Replay -- 3 XAI for User-Centric and Customized Gesture Sensing -- 3.1 Gesture Sensing Algorithm and Feature Design -- 3.2 Model Calibration Using Experience Replay -- 3.3 Anomalous Gesture Detection and Characterization -- 4 Experiments -- 4.1 Implementation Settings -- 4.2 Experimental Results -- 5 Conclusion -- Appendix -- References

Toward Understanding the Disagreement Problem in Neural Network Feature Attribution
  1 Introduction -- 2 Background and Related Work -- 3 Understanding the Explanation's Distribution -- 4 Do Feature Attribution Methods Attribute? -- 4.1 Impact of Data Preprocessing -- 4.2 Faithfulness of Effects -- 4.3 Beyond Feature Attribution Toward Importance -- 5 Discussion -- 6 Conclusion -- A Appendix -- A.1 COMPAS Dataset -- A.2 Simulation Details -- A.3 Model Performance -- References

ConformaSight: Conformal Prediction-Based Global and Model-Agnostic Explainability Framework
  1 Introduction -- 2 Preliminaries -- 2.1 Explainability -- 2.2 Uncertainty Estimation and Quantification -- 2.3 Conformal Prediction -- 3 Related Work -- 4 Methodology -- 4.1 ConformaSight Structure and Mechanism -- 4.2 ConformaSight in Practice: A Sample Scenario -- 4.3 Computational Complexity of ConformaSight -- 5 Experiments and Evaluations -- 5.1 Experimental Settings -- 6 Results and Discussion -- 7 Conclusion, Limitations and Future Work -- References

Differential Privacy for Anomaly Detection: Analyzing the Trade-Off Between Privacy and Explainability
  1 Introduction -- 2 Related Work -- 2.1 Privacy-Preserving Anomaly Detection -- 2.2 Explainable Anomaly Detection -- 2.3 Impact of Privacy on Explainability