Database and Expert Systems Applications : DEXA 2019 International Workshops BIOKDD, IWCFS, MLKgraphs and TIR, Linz, Austria, August 26–29, 2019, Proceedings / edited by Gabriele Anderst-Kotsis, A Min Tjoa, Ismail Khalil, Mourad Elloumi, Atif Mashkoor, Johannes Sametinger, Xabier Larrucea, Anna Fensel, Jorge Martinez-Gil, Bernhard Moser, Christin Seifert, Benno Stein, Michael Granitzer
Edition [1st ed. 2019.]
Publication Cham : Springer International Publishing : Imprint: Springer, 2019
Physical description 1 online resource (XV, 222 p. 59 illus., 30 illus. in color.)
Classification 005.74
Series Communications in Computer and Information Science
Subjects Database management
Data mining
Information storage and retrieval
Machine learning
Computer security
Optical data processing
Database Management
Data Mining and Knowledge Discovery
Information Storage and Retrieval
Machine Learning
Systems and Data Security
Image Processing and Computer Vision
ISBN 3-030-27684-8
Format Printed material
Bibliographic level Monograph
Language of publication eng
Record no. UNINA-9910349287003321
Find it at: Univ. Federico II
Database and Expert Systems Applications : DEXA 2018 International Workshops, BDMICS, BIOKDD, and TIR, Regensburg, Germany, September 3–6, 2018, Proceedings / edited by Mourad Elloumi, Michael Granitzer, Abdelkader Hameurlain, Christin Seifert, Benno Stein, A Min Tjoa, Roland Wagner
Edition [1st ed. 2018.]
Publication Cham : Springer International Publishing : Imprint: Springer, 2018
Physical description 1 online resource (IX, 316 p. 81 illus.)
Classification 006.312
Series Communications in Computer and Information Science
Subjects Database management
Information storage and retrieval
Data mining
Application software
Bioinformatics 
Computational biology 
Database Management
Information Storage and Retrieval
Data Mining and Knowledge Discovery
Information Systems Applications (incl. Internet)
Computer Appl. in Life Sciences
ISBN 3-319-99133-7
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note BDMICS 2018 -- Parallel data management systems, consistency and privacy -- Cloud computing and graph queries -- BIOKDD 2018 -- TIR 2018 -- Web and domain corpora -- NLP applications -- Social media and personalization.
Record no. UNINA-9910299305603321
Find it at: Univ. Federico II
Explainable Artificial Intelligence : Second World Conference, XAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part IV
Author Longo, Luca
Edition [1st ed.]
Publication Cham : Springer International Publishing AG, 2024
Physical description 1 online resource (480 pages)
Other authors (Persons) Lapuschkin, Sebastian
Seifert, Christin
Series Communications in Computer and Information Science Series
ISBN 9783031638039
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Preface -- Organization -- Contents - Part IV -- Explainable AI in Healthcare and Computational Neuroscience -- SRFAMap: A Method for Mapping Integrated Gradients of a CNN Trained with Statistical Radiomic Features to Medical Image Saliency Maps -- 1 Introduction -- 2 Related Work -- 3 Design and Methodology -- 3.1 The Approach -- 3.2 The Experiment -- 3.3 Evaluation of Saliency Maps -- 4 Results and Discussion -- 4.1 Discussion of Results -- 5 Conclusions and Future Work -- References -- Transparently Predicting Therapy Compliance of Young Adults Following Ischemic Stroke -- 1 Introduction -- 2 Related Studies -- 3 Materials and Methods -- 3.1 Participants and Clinical Settings -- 3.2 Cognitive Assessment -- 3.3 Computer-Based Rehabilitation Therapy -- 3.4 Modelling -- 3.5 Explanation Methods -- 4 Results -- 4.1 Experimental Data -- 4.2 Therapy Compliance Prediction -- 4.3 Explanation Reports -- 5 Discussion -- 6 Conclusions -- References -- Precision Medicine for Student Health: Insights from Tsetlin Machines into Chronic Pain and Psychological Distress -- 1 Introduction -- 2 Tsetlin Machines -- 3 Related Work -- 3.1 Pain and Psychological Distress -- 3.2 Explainable AI -- 4 Materials and Methods -- 4.1 The SHoT2018 Study -- 4.2 Models and Analyses -- 5 Results and Discussion -- 5.1 Performance -- 5.2 Interpretability Analysis -- 6 Conclusions and Future Work -- A Literal Frequency in the Tsetlin Machine -- References -- Evaluating Local Explainable AI Techniques for the Classification of Chest X-Ray Images -- 1 Introduction -- 2 Previous Work -- 2.1 Explainable AI for X-Ray Imaging -- 2.2 Evaluation Metrics for XAI -- 3 Explainable AI Techniques -- 4 Analyzed Dataset -- 5 Proposed Metrics -- 6 Results and Evaluation -- 7 Conclusions -- References -- Feature Importance to Explain Multimodal Prediction Models. A Clinical Use Case.
1 Introduction -- 2 Related Work -- 2.1 Short-Term Complication Prediction -- 2.2 Multimodal Prediction Models -- 2.3 Explainability -- 3 Materials and Methods -- 3.1 Dataset -- 3.2 Machine Learning Models -- 3.3 Training Procedure and Evaluation -- 3.4 Explanation -- 4 Results -- 4.1 Model Performance -- 4.2 Explainability -- 5 Discussion -- 6 Conclusion -- A Hyperparameters -- B Detailed Feature Overview -- References -- Identifying EEG Biomarkers of Depression with Novel Explainable Deep Learning Architectures -- 1 Introduction -- 2 Methods -- 2.1 Description of Dataset -- 2.2 Description of Model Development -- 2.3 Description of Explainability Analyses Applied to All Models -- 2.4 Description of Approach for Characterization of M2 and M3 Filters -- 2.5 Description of Novel Activation Explainability Analyses for M2 and M3 -- 2.6 Key Aspects of Approach -- 3 Results and Discussion -- 3.1 M1-M3: Model Performance Analysis -- 3.2 M1-M3: Post Hoc Explainability Analysis -- 3.3 M2-M3: Characterization of Extracted Features -- 3.4 M2-M3: Post Hoc Spatial Activation Analysis -- 3.5 M2-M3: Post Hoc of Activation Correlation Analysis -- 3.6 Summary of MDD-Related Findings -- 3.7 Limitations and Future Work -- 4 Conclusion -- References -- Increasing Explainability in Time Series Classification by Functional Decomposition -- 1 Introduction -- 2 Background and Related Work -- 3 Method -- 4 Case Study -- 4.1 Sensor Model -- 4.2 Simulator -- 5 Application -- 5.1 Instantiation of the Generic Methodology -- 5.2 Influence of Data Representation and Decompositions -- 5.3 Influence of the Chunking -- 5.4 Datasets -- 6 Realization and Evaluation -- 6.1 Training and Testing of the Chunk Classifier -- 6.2 Training and Testing of the Velocity Estimator -- 6.3 Robustness Analysis of the Chunk Classifier -- 6.4 Testing of the Complete System -- 7 Explanations.
7.1 Dataset-Based Explanations -- 7.2 Visual Explanations -- 8 Conclusion and Future Work -- References -- Towards Evaluation of Explainable Artificial Intelligence in Streaming Data -- 1 Introduction -- 2 Related Work -- 2.1 Consistency, Fidelity and Stability of Explanations -- 3 Methodology -- 4 A Case Study with the Iris Dataset -- 5 Results Analysis -- 5.1 XAI Metric: Agreement (Consistency) Between Explainers -- 5.2 XAI Metric: Lipschitz and Average Stability -- 5.3 Comparison of Stability Metrics -- 5.4 Detailed Stability Comparison for Anomalies A1 and A2 -- 5.5 Quantification of Differences in Stability Between Ground Truth and Black-Box Explainers -- 6 Conclusion -- 7 Future Work -- References -- Quantitative Evaluation of xAI Methods for Multivariate Time Series - A Case Study for a CNN-Based MI Detection Model -- 1 Introduction -- 2 Background and State of the Art -- 2.1 Multivariate Time Series -- 2.2 MI Detection Use Case -- 2.3 Related Work -- 3 Methodology -- 3.1 Explanations for Time Series Data -- 3.2 Truthfulness Analysis -- 3.3 Stability Analysis -- 3.4 Consistency Analysis -- 4 Experimental Results -- 4.1 Results of the Truthfulness Analysis -- 4.2 Results of the Stability Analysis -- 4.3 Results of the Consistency Analysis -- 5 Discussion -- 6 Conclusion -- References -- Explainable AI for Improved Human-Computer Interaction and Software Engineering for Explainability -- Influenciæ: A Library for Tracing the Influence Back to the Data-Points -- 1 Introduction -- 2 Attributing Model Behavior Through Data Influence -- 2.1 Notation -- 2.2 Influence Functions -- 2.3 Kernel-Based Influence -- 2.4 Tracing Influence Throughout the Training Process -- 3 API -- 4 Conclusion -- References -- Explainability Engineering Challenges: Connecting Explainability Levels to Run-Time Explainability -- 1 Introduction -- 2 Explainability Terminology.
3 MAB-EX Framework for Self-Explainable Systems -- 4 Explainability Requirements in Software-Intensive Systems -- 5 Integration of Explainability Levels into the MAB-EX Framework -- 6 The Role of eXplainable DRL in Explainability Engineering -- 7 Conclusion -- References -- On the Explainability of Financial Robo-Advice Systems -- 1 Introduction -- 2 Related Work -- 3 Background -- 3.1 Financial Robo-Advice Systems -- 3.2 XAI and the Law -- 3.3 EU Regulations Relevant to Financial Robo-Advice Systems: Scopes and Notions -- 4 Proposed Methodology -- 5 Robo-Advice Systems -- 6 Legal Compliance Questions for Robo-Advice Systems -- 7 Case Studies -- 7.1 Requested Financial Information -- 7.2 Personas -- 7.3 Results: Robo-Generated Financial Advice -- 8 Threats to Validity -- 9 Discussion -- 10 Conclusion and Future Work -- References -- Can I Trust My Anomaly Detection System? A Case Study Based on Explainable AI -- 1 Introduction -- 2 Literature Review -- 3 Preliminaries -- 3.1 VAE-GAN Models -- 3.2 Semi-supervised Anomaly Detection Using Variational Models -- 3.3 Explaining Anomaly Maps Using Model-Agnostic XAI Methods -- 3.4 Comparing Explained Anomalies Against a Ground Truth -- 4 Experimental Evaluation -- 5 Conclusions -- References -- Explanations Considered Harmful: The Impact of Misleading Explanations on Accuracy in Hybrid Human-AI Decision Making -- 1 Introduction -- 2 Background and Related Work -- 3 How Explanations Can Be Misleading -- 4 Methods -- 5 Results -- 5.1 Impact on Accuracy -- 5.2 Impact on Confidence -- 6 Discussion -- 7 Conclusion -- References -- Human Emotions in AI Explanations -- 1 Introduction -- 2 Related Literature -- 3 Method -- 4 Results -- 5 Robustness Check -- 6 Discussion -- 7 Conclusion -- References -- Study on the Helpfulness of Explainable Artificial Intelligence -- 1 Introduction -- 2 Measuring Explainability.
2.1 Approaches for Measuring Explainability -- 2.2 User Studies on the Performance of XAI -- 3 An Objective Methodology for Evaluating XAI -- 3.1 Objective Human-Centered XAI Evaluation -- 3.2 Image Classification and XAI Methods -- 3.3 Survey Design -- 4 Survey Results -- 4.1 Questionnaire Responses -- 4.2 Qualitative Feedback -- 5 Discussion -- 6 Conclusion -- Appendix A Additional Visualizations -- Appendix B Demographic Overview of Participants -- References -- Applications of Explainable Artificial Intelligence -- Pricing Risk: An XAI Analysis of Irish Car Insurance Premiums -- 1 Introduction -- 2 Background and Related Work -- 3 Data and Methods -- 4 Results -- 5 Discussion and Conclusion -- References -- Exploring the Role of Explainable AI in the Development and Qualification of Aircraft Quality Assurance Processes: A Case Study -- 1 Introduction -- 1.1 Background -- 1.2 Related Work -- 2 Description of Use Case -- 3 XAI Methods Applied to Use Case -- 4 Insights in the xAI Results -- 4.1 Experiment Results -- 4.2 xAI on New Model -- 5 Discussing xAI w.r.t. Development and Qualifiability -- 6 Conclusion -- References -- Explainable Artificial Intelligence Applied to Predictive Maintenance: Comparison of Post-Hoc Explainability Techniques -- 1 Introduction -- 2 Proposed Methodology -- 3 Post-Hoc Explainability Techniques -- 3.1 Impurity-Based Feature Importance -- 3.2 Permutation Feature Importance -- 3.3 Partial Dependence Plot (PDP) -- 3.4 Accumulated Local Effects (ALE) -- 3.5 Shapley Additive Explanations (SHAP) -- 3.6 Local Interpretable Model-Agnostic Explanations (LIME) -- 3.7 Anchor -- 3.8 Individual Conditional Expectation (ICE) -- 3.9 Discussion on Implementation and Usability of the Techniques -- 4 Conclusions -- References.
A Comparative Analysis of SHAP, LIME, ANCHORS, and DICE for Interpreting a Dense Neural Network in Credit Card Fraud Detection.
Record no. UNINA-9910872189603321
Find it at: Univ. Federico II
Explainable Artificial Intelligence : Second World Conference, XAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part II
Author Longo, Luca
Edition [1st ed.]
Publication Cham : Springer International Publishing AG, 2024
Physical description 1 online resource (0 pages)
Other authors (Persons) Lapuschkin, Sebastian
Seifert, Christin
Series Communications in Computer and Information Science Series
ISBN 9783031637971
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Preface -- Organization -- Contents - Part II -- XAI for Graphs and Computer Vision -- Model-Agnostic Knowledge Graph Embedding Explanations for Recommender Systems -- 1 Introduction -- 1.1 Problem Setting and Objective -- 1.2 Overview of Main Findings and Contributions -- 2 Related Work -- 3 Methodology -- 3.1 Baselines -- 3.2 Metrics -- 3.3 Multi-Attribute Utility Theory (MAUT) -- 4 Embedding Proposal Algorithm -- 5 Results -- 6 Future Work, Limitations, and Open Directions -- 7 Conclusion -- References -- Graph-Based Interface for Explanations by Examples in Recommender Systems: A User Study -- 1 Introduction -- 2 Related Work -- 3 Generating Graph-Based Explanations-by-Examples -- 3.1 Obtaining the Explanation Links -- 3.2 Visualising Explanations Through Interactive Graphs -- 4 User Study -- 4.1 Methodology -- 4.2 User Study Results -- 5 Conclusions -- References -- Explainable AI for Mixed Data Clustering -- 1 Introduction -- 2 Related Work -- 3 Entropy-Based Cluster Explanations for Mixed Data -- 4 Evaluation -- 5 Discussion -- 6 Conclusion -- References -- Explaining Graph Classifiers by Unsupervised Node Relevance Attribution -- 1 Introduction -- 2 Preliminaries -- 2.1 Deep Graph Networks -- 2.2 XAI Methods for Graphs -- 2.3 Benchmarking XAI Methods for Graphs -- 2.4 Assessing XAI Methods for Graphs in Real-World Contexts -- 3 Methods -- 3.1 Unsupervised Attribution Strategies for the Realistic Setting -- 3.2 A Measure of Quality for the Relevance Assignment -- 4 Experiments -- 4.1 Objective -- 4.2 Experimental Details -- 5 Results -- 6 Conclusion -- References -- Explaining Clustering of Ecological Momentary Assessment Data Through Temporal and Feature Attention -- 1 Introduction -- 2 Related Work -- 2.1 Clusters' Descriptive Representation -- 2.2 Explanations on TS Clustering -- 3 Review on Challenges of Explaining MTS Data.
3.1 Clustering Explanations -- 4 Framework for Clustering Explanations -- 4.1 EMA Data -- 4.2 EMA Clustering -- 4.3 Proposed Framework for Clustering Explanations -- 5 Analysis and Results -- 5.1 Performance Evaluation -- 5.2 Cluster-Level Explanations Through Temporal-Attention -- 5.3 Cluster-Level Explanations Through Feature-Attention -- 5.4 Individual-Level Explanations -- 6 Discussion -- 6.1 The Role of the Multi-aspect Attention -- 6.2 The Role of the Multi-level Analysis -- 6.3 The Impact of Utilizing a Pre-defined Clustering Result -- 7 Conclusion -- References -- Graph Edits for Counterfactual Explanations: A Comparative Study -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 The Importance of Graph Machine Learning -- 4 Experiments -- 4.1 Experimental Setup -- 4.2 Quantitative Results -- 4.3 Qualitative Results -- 5 Conclusion -- References -- Model Guidance via Explanations Turns Image Classifiers into Segmentation Models -- 1 Introduction -- 1.1 Relation to Previous Works -- 1.2 Limitations -- 2 Unrolled Heatmap Architectures -- 2.1 LRP Basics -- 2.2 Unrolled LRP Architectures for Convolutional Classifiers -- 2.3 Losses and Training -- 2.4 Relation to Previous Formal Analyses and Standard Architectures -- 3 Unrolled Heatmap Architectures for Segmentation: Results -- 4 Conclusion -- References -- Understanding the Dependence of Perception Model Competency on Regions in an Image -- 1 Importance of Understanding Model Competency -- 2 Background and Related Work -- 2.1 Uncertainty Quantification -- 2.2 Out-of-Distribution (OOD) Detection -- 2.3 Explainable Image Classification -- 2.4 Explainable Competency Estimation -- 3 Approach for Understanding Model Competency -- 3.1 Estimating Model Competency -- 3.2 Identifying Regions Contributing to Low Competency -- 4 Method Evaluation and Analysis -- 4.1 Metrics for Comparison.
4.2 Dataset 1: Lunar Environment -- 4.3 Dataset 2: Speed Limit Signs -- 4.4 Dataset 3: Outdoor Park -- 4.5 Analysis of Results -- 5 Conclusions -- 6 Limitations and Future Work -- A Data Sources and Algorithm Parameters -- B Comparison to Class Activation Maps -- B.1 Dataset 1: Lunar Environment -- B.2 Dataset 2: Speed Limit Signs -- B.3 Dataset 3: Outdoor Park -- References -- A Guided Tour of Post-hoc XAI Techniques in Image Segmentation -- 1 Introduction -- 2 Categorization of XAI Techniques for Image Segmentation -- 3 Review of XAI Techniques for Image Segmentation -- 3.1 Local XAI -- 3.2 Evaluation of Local XAI Methods -- 3.3 A Comparative Analysis of Local XAI Methods -- 3.4 Global XAI -- 4 Tools for Practitioners -- 5 Discussion -- 6 Conclusion -- A Reviewed XAI Algorithms -- References -- Explainable Emotion Decoding for Human and Computer Vision -- 1 Introduction -- 2 Related Works -- 2.1 Explainable Computer Vision -- 2.2 Brain Decoding: Machine Learning on fMRI Data -- 2.3 Emotion Decoding for Human and Computer Vision -- 3 Experimental Setup -- 3.1 Frames, fMRI and Labels -- 3.2 Machine Learning on Movie Frames -- 3.3 Machine Learning on fMRI Data -- 3.4 XAI for Emotion Decoding -- 3.5 CNN-Humans Attentional Match: A Comparative Analysis -- 4 Experimental Results -- 4.1 Machine Learning on Movie Frames -- 4.2 Machine Learning on fMRI Data -- 4.3 Explainability for fMRI-Based Models -- 4.4 Comparative Analysis -- 5 Conclusions -- References -- Explainable Concept Mappings of MRI: Revealing the Mechanisms Underlying Deep Learning-Based Brain Disease Classification -- 1 Introduction -- 2 Methodology -- 2.1 Dataset -- 2.2 Preprocessing -- 2.3 Standard Classification Network -- 2.4 Relevance-Guided Classification Network -- 2.5 Training -- 2.6 Model Selection -- 2.7 Bootstrapping Analysis -- 2.8 Concepts Identification.
2.9 Relevance-Weighted Concept Map Representation -- 2.10 R2* and Relevance Region of Interest Analysis -- 3 Results -- 4 Discussion -- 5 Conclusion -- References -- Logic, Reasoning, and Rule-Based Explainable AI -- Template Decision Diagrams for Meta Control and Explainability -- 1 Introduction -- 2 Related Work -- 3 Foundations -- 4 Template Decision Diagrams -- 4.1 Hierarchical Decision Diagrams by Templates -- 4.2 Standard Template Boxes -- 5 Templates for Self-adaptive Systems and Meta Control -- 5.1 An Overview of the Case Study -- 5.2 Modeling the Case Study with Template DDs -- 6 Improving Control Strategy Explanations -- 6.1 Explainability Metrics for Template DDs -- 6.2 Decision Diagram Refactoring -- 6.3 Implementation and Evaluation -- 7 Conclusion -- References -- A Logic of Weighted Reasons for Explainable Inference in AI -- 1 Introduction -- 2 Weighted Default Justification Logic -- 2.1 Justification Logic Preliminaries -- 2.2 Weighted Default Justification Logic -- 2.3 Example -- 3 Strong Weighted Default Justification Logic -- 3.1 Preliminaries -- 3.2 Strong Weighted Default Justification Logic -- 3.3 Example -- 3.4 Justification Default Graphs -- 4 WDJL and WDJL+ as Explainable Neuro-Symbolic Architectures -- 5 Related Work -- 5.1 Numeric Inference Graphs -- 5.2 Numeric Argumentation Frameworks -- 6 Conclusions and Future Work -- References -- On Explaining and Reasoning About Optical Fiber Link Problems -- 1 Introduction -- 2 Literature Review -- 3 Dataset Overview -- 4 Explanations Pipeline Architecture -- 4.1 Data Aggregation -- 4.2 Data Cleansing and Normalisation -- 4.3 Data Transformation -- 4.4 ML Training -- 4.5 AI Explainability -- 5 Experimental Results -- 5.1 Model Performance -- 5.2 Model Explainability -- 6 Conclusion -- A Appendix A -- References.
Construction of Artificial Most Representative Trees by Minimizing Tree-Based Distance Measures -- 1 Introduction -- 2 Methods -- 2.1 Random Forests -- 2.2 Variable Importance Measures (VIMs) -- 2.3 Selection of Most Representative Trees (MRTs) -- 2.4 Construction of Artificial Most Representative Trees (ARTs) -- 2.5 Simulation Design -- 2.6 Benchmarking Experiment -- 3 Results -- 3.1 Prediction Performance -- 3.2 Included Variables -- 3.3 Computation Time -- 3.4 Influence of Tuning Parameters -- 3.5 Tree Depth -- 3.6 Benchmarking -- 4 Discussion -- Appendix -- References -- Decision Predicate Graphs: Enhancing Interpretability in Tree Ensembles -- 1 Introduction -- 2 Literature Review -- 3 Decision Predicate Graphs -- 3.1 Definition -- 3.2 From Ensemble to a DPG -- 3.3 DPG Interpretability -- 4 Empirical Results and Discussion -- 4.1 DPG: Iris Insights -- 4.2 Comparing to the Graph-Based Solutions -- 4.3 Potential Improvements -- 5 Conclusion -- References -- Model-Agnostic and Statistical Methods for eXplainable AI -- Observation-Specific Explanations Through Scattered Data Approximation -- 1 Introduction -- 2 Methodology -- 2.1 Observation-Specific Explanations -- 2.2 Surrogate Models Using Scattered Data Approximation -- 2.3 Estimation of the Observation-Specific Explanations -- 3 Application -- 3.1 Simulated Studies -- 3.2 Real-World Application -- 4 Discussion -- References -- CNN-Based Explanation Ensembling for Dataset, Representation and Explanations Evaluation -- 1 Introduction -- 2 Related Works -- 3 The Concept of CNN-Based Ensembled Explanations -- 3.1 Experimental Setup for Training -- 3.2 Ablation Studies -- 3.3 Method Evaluation -- 4 Metrics for Representation, Dataset and Explanation Evaluation -- 4.1 Representation Evaluation -- 4.2 Dataset Evaluation -- 4.3 Explanations Evaluation -- 5 Conclusions and Future Works -- References.
Local List-Wise Explanations of LambdaMART.
Record no. UNINA-9910872194103321
Find it at: Univ. Federico II
Explainable Artificial Intelligence : Second World Conference, XAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part I
Author Longo, Luca
Edition [1st ed.]
Publication Cham : Springer International Publishing AG, 2024
Physical description 1 online resource (508 pages)
Other authors (Persons) Lapuschkin, Sebastian
Seifert, Christin
Series Communications in Computer and Information Science Series
ISBN 9783031637872
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Preface -- Organization -- Contents - Part I -- Intrinsically Interpretable XAI and Concept-Based Global Explainability -- Seeking Interpretability and Explainability in Binary Activated Neural Networks -- 1 Introduction -- 2 Related Works and Positioning -- 3 Notation -- 4 Dissecting Binary Activated Neural Networks -- 5 The BGN (Binary Greedy Network) Algorithm -- 5.1 Learning Shallow Networks -- 5.2 Properties of the BGN Algorithm -- 5.3 Improvements and Deeper Networks -- 6 SHAP Values for BANNs: Inputs, Neurons and Weights -- 7 Numerical Experiments -- 7.1 Predictive Accuracy Experiments -- 7.2 Pruning Experiments -- 7.3 Interpretability and Explainability of 1-BANNs -- 8 Conclusion -- References -- Prototype-Based Interpretable Breast Cancer Prediction Models: Analysis and Challenges -- 1 Introduction -- 2 Related Work -- 3 Preliminaries -- 4 Prototype Evaluation Framework -- 5 Experimental Setup -- 5.1 Datasets -- 5.2 Training Details -- 5.3 Prototype Evaluation Framework Setup -- 5.4 Visualization of Prototypes -- 6 Results and Discussion -- 6.1 Performance Comparison of Black-Box Vs Prototype-Based Models -- 6.2 Local and Global Visualization of Prototypes -- 6.3 Automatic Quantitative Evaluation of Prototypes -- 7 Conclusion and Future Work -- References -- Evaluating the Explainability of Attributes and Prototypes for a Medical Classification Model -- 1 Introduction -- 2 Related Work -- 3 Methods and Materials -- 3.1 Dataset -- 3.2 xAI-Model -- 4 Experiments -- 4.1 Aim of Study -- 4.2 Survey Setup -- 4.3 Participants -- 5 Results and Discussion -- 5.1 Radiologists' Attitude Towards AI. -- 5.2 RQ1: How Does the Explanation Affect the User's Performance? -- 5.3 RQ2: How Does the Explanation Affect the Trust in the Model? -- 5.4 RQ3: Are Attribute-Based Explanation (scores, Prototypes) Helpful? -- 5.5 Limitations -- 6 Conclusion.
References -- Revisiting FunnyBirds Evaluation Framework for Prototypical Parts Networks -- 1 Introduction -- 2 Related Works -- 3 Methods -- 3.1 FunnyBirds -- 3.2 Summed Similarity Maps (SSM) for More Precise Interface Functions -- 4 Experimental Setup -- 5 Results -- 5.1 Metrics Scores for Attribution Maps Based on Bounding Boxes or Similarity Maps -- 5.2 Various Backbones of ProtoPNet -- 6 Conclusions -- References -- CoProNN: Concept-Based Prototypical Nearest Neighbors for Explaining Vision Models -- 1 Introduction -- 2 Related Work -- 3 Methods -- 3.1 CoProNN -- 3.2 Prototype Images via Stable Diffusion -- 3.3 Nearest Neighbors as Explanations -- 3.4 Coarse and Fine Grained Classification Tasks -- 3.5 Evaluation Without Humans-in-the-Loop -- 3.6 Quantitative User Study -- 3.7 Qualitative User Study -- 4 Results -- 4.1 Explanations via Task-Specific Concept-Based Prototypes -- 4.2 Explanations via Task-Unspecific Concepts -- 4.3 Quantitative User Study -- 4.4 Results Qualitative User Study -- 5 Discussion -- 5.1 Improved Task Specificity of CoProNN Concepts -- 5.2 Interpretation User Study Results -- 5.3 Applying CoProNN to Your Own Use Case -- 5.4 Limitations and Extensions -- 6 Conclusion -- References -- Unveiling the Anatomy of Adversarial Attacks: Concept-Based XAI Dissection of CNNs -- 1 Introduction -- 2 Related Work -- 3 Background -- 3.1 Adversarial Attacks -- 3.2 Concept Discovery with Matrix Factorization -- 3.3 Concept Comparison -- 4 Experimental Setup -- 4.1 Models -- 4.2 Data -- 4.3 Layer Selection -- 5 Experimental Results -- 5.1 Adversarial Attacks Impact on Latent Space Representations -- 5.2 Concept Discovery in Adversarial Samples -- 5.3 Concept Analysis of Adversarial Perturbation -- 6 Conclusion and Outlook -- References -- AutoCL: AutoML for Concept Learning -- 1 Introduction -- 2 Background -- 3 Related Work -- 4 AutoCL.
4.1 Feature Selection -- 4.2 Hyperparameter Optimization -- 5 Evaluation -- 5.1 Experimental Setup -- 5.2 Feature Selection Results -- 5.3 Hyperparameter Optimization Results -- 5.4 AutoCL Results -- 6 Discussion -- 7 Conclusion -- References -- Locally Testing Model Detections for Semantic Global Concepts -- 1 Introduction -- 2 Related Work -- 2.1 Local Input Attribution -- 2.2 Global Concept Encodings -- 2.3 Combining Local and Global Approaches -- 3 Local Concept-Based Attributions -- 3.1 Global-to-Local Concept Attribution -- 3.2 Applicability in Object Detection -- 4 Quantification Measures -- 4.1 Concept Localization -- 4.2 Faithfulness Testing -- 5 Results -- 5.1 Experimental Setting -- 5.2 Local Concept Attribution -- 5.3 Evaluating Concept Usage -- 5.4 Localization Quantification -- 5.5 Faithfulness Evaluation -- 6 Testing for Erroneous Feature Correlation -- 7 Conclusion -- A Selection of Concepts -- B Implementation Details and Color Coding -- C Comparison to CRP -- D Criteria for the Qualitative Evaluation -- E Limitations -- F Additional Visualizations -- References -- Knowledge Graphs for Empirical Concept Retrieval -- 1 Introduction -- 2 Related Work -- 2.1 Concept-Based Explainability Methods -- 2.2 What Is a Concept? -- 2.3 Knowledge Graphs -- 3 Methods -- 3.1 Concepts from Knowledge Graphs -- 3.2 Retrieval of a Concept Database -- 3.3 Concept Activation Vectors and Regions -- 3.4 Machine Learning Models -- 3.5 Accuracy and Robustness -- 3.6 Alignment of Concepts and Sub-concepts -- 4 Results -- 4.1 Knowledge Graphs Can Assist the Definition of Data-Driven Concepts -- 4.2 Robustness of Knowledge Graph Derived Concepts -- 4.3 Alignment of Human and Machine Representations -- 5 Conclusion -- References -- Global Concept Explanations for Graphs by Contrastive Learning -- 1 Introduction -- 2 Related Work -- 3 Background.
3.1 MEGAN: Multi-Explanation Graph Attention Network -- 3.2 Definition of Graph Concept Explanations -- 4 Methods -- 4.1 Extended Network Architecture -- 4.2 Contrastive Explanation Learning -- 4.3 Concept Clustering -- 4.4 Prototype Optimization -- 4.5 Hypothesis Generation -- 5 Computational Experiments -- 5.1 Synthetic Datasets -- 5.2 Real-World Datasets -- 6 Limitations -- 7 Conclusion -- References -- Generative Explainable AI and Verifiability -- Augmenting XAI with LLMs: A Case Study in Banking Marketing Recommendation -- 1 Introduction -- 1.1 Research Questions and Contribution -- 2 State of the Art -- 2.1 NLG for Explainability -- 3 Existing Environments -- 3.1 Human Roles and Responsibilities -- 3.2 Pipeline for Generating Commercial Recommendation -- 3.3 Pipeline for Generating Rule-Based Natural Language Explanations -- 4 Proposed Methods -- 4.1 Pipeline for Generating Automated Natural Language Explanations -- 4.2 Evaluation Methods -- 4.3 Statistical Analysis -- 5 Results -- 6 Discussion and Conclusion -- References -- Generative Inpainting for Shapley-Value-Based Anomaly Explanation -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Shapley Value Explanations and Replacement Values -- 3.2 Perturbation with Generative Inpainting -- 3.3 Tabular Diffusion with TabDDPM -- 3.4 Generative Inpainting for Diffusion Models -- 4 Experiments -- 4.1 Data, Anomaly Detectors, and Metrics -- 4.2 Generative Models and Inpainting -- 4.3 Results -- 5 Conclusion -- References -- Challenges and Opportunities in Text Generation Explainability -- 1 Introduction -- 2 Background and Related Work -- 2.1 Text Generation -- 2.2 Explainability Methods -- 2.3 Attribution-Based Methods -- 3 Dataset Creation -- 3.1 Human-Centered Explanations -- 3.2 Perturbation-Based Datasets -- 3.3 Tracing the Effect of Perturbations -- 4 Explanation Design.
4.1 Challenges Originating from the Language Model -- 4.2 Challenges Originating from the Text Data -- 5 Explanation Evaluation -- 5.1 Static Evaluation with Accuracy -- 5.2 Static Evaluation with Faithfulness -- 5.3 Contrastive Evaluation with Coherency -- 5.4 Characterization of Explainability Methods -- 6 Conclusion -- References -- NoNE Found: Explaining the Output of Sequence-to-Sequence Models When No Named Entity Is Recognized -- 1 Introduction -- 2 Related Work -- 2.1 Disaster Risk Management -- 2.2 Seq2seq NER Approach -- 2.3 NER Explanations -- 2.4 Seq2seq Explanations -- 3 Methods -- 4 Experimental Setup -- 4.1 Datasets -- 4.2 Experiments -- 5 Results -- 5.1 Model Results -- 5.2 Validation Results -- 5.3 Insights from NoNE Explanations -- 6 Conclusion and Future Work -- References -- Notion, Metrics, Evaluation and Benchmarking for XAI -- Benchmarking Trust: A Metric for Trustworthy Machine Learning -- 1 Introduction -- 2 Aspects of Trustworthy Machine Learning -- 2.1 Fairness -- 2.2 Robustness -- 2.3 Integrity -- 2.4 Explainability -- 2.5 Safety -- 3 Measures of Quantification and Operationalization -- 3.1 Quantifying the Notion of Trust -- 3.2 Experimental Design -- 4 Experimental Results -- 5 Conclusion and Outlook -- References -- Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI -- 1 Introduction -- 2 Explainable AI -- 3 Related Work -- 4 Semantic Continuity -- 4.1 Proof-of-Concept Experiment -- 4.2 From Perfect Predictor to Imperfect Predictor -- 4.3 Synthesis of the Human Facial Dataset -- 5 Experimental Setup -- 5.1 Shape Dataset -- 5.2 Synthesis Facial Dataset -- 5.3 Software -- 6 Results -- 6.1 Proof-of-Concept Results: Shape Dataset -- 6.2 Synthesis Facial Dataset -- 7 Conclusions and Outlook -- References -- Conditional Calibrated Explanations: Finding a Path Between Bias and Uncertainty.
1 Introduction.
Record no. UNINA-9910872185403321
Find it at: Univ. Federico II
Explainable Artificial Intelligence : Second World Conference, XAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part III
Author Longo, Luca
Edition [1st ed.]
Publication Cham : Springer International Publishing AG, 2024
Physical description 1 online resource (471 pages)
Other authors (Persons) Lapuschkin, Sebastian
Seifert, Christin
Series Communications in Computer and Information Science Series
ISBN 9783031638008
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Preface -- Organization -- Contents - Part III -- Counterfactual Explanations and Causality for eXplainable AI -- Sub-SpaCE: Subsequence-Based Sparse Counterfactual Explanations for Time Series Classification Problems -- 1 Introduction -- 2 Related Work -- 3 Proposed Method -- 3.1 Problem Formulation -- 3.2 Sub-SpaCE: Subsequence-Based Sparse Counterfactual Explanations -- 4 Numerical Experiments -- 4.1 Set Up -- 4.2 Results -- 4.3 Ablation Study -- 5 Conclusions and Future Work -- References -- Human-in-the-Loop Personalized Counterfactual Recourse -- 1 Introduction -- 2 Related Work -- 3 Problem Statement -- 4 Framework -- 4.1 Personalized Counterfactual Generation -- 4.2 Preference Modeling -- 4.3 Preference Estimation -- 4.4 HIP-CORE Framework -- 4.5 Complexity of User Feedback -- 4.6 Limitations -- 5 Experiments -- 5.1 Experimental Setting -- 5.2 Overall Performance -- 5.3 Model-Agnostic Validation -- 5.4 Study on the Number of Iterations -- 5.5 Study on the Number of Decimal Places -- 5.6 Discussion and Ethical Implications -- 6 Conclusions -- A Appendix -- References -- COIN: Counterfactual Inpainting for Weakly Supervised Semantic Segmentation for Medical Images -- 1 Introduction -- 2 Related Works -- 2.1 Weakly Supervised Semantic Segmentation -- 2.2 Counterfactual Explanations -- 3 Counterfactual Approach for WSSS -- 3.1 Method Formulation -- 3.2 Image Generation Architecture -- 3.3 Loss Function for Training GAN -- 4 Experiments -- 4.1 Datasets -- 4.2 Evaluation -- 4.3 Implementation Details -- 4.4 Comparison with Modified Singla et al.* Method -- 5 Results -- 5.1 Ablation Experiments -- 6 Discussion -- 7 Conclusion -- A.1 Loss Function for Dual-Conditioning in Singla et al.* -- A.2 Synthetic Anomaly Generation -- A.3 Original vs Perturbation-Based Generator.
A.4 Influence of Skip Connections on the Generated Images Quality -- A.5 Counterfactual Explanation vs Counterfactual Inpainting Segmentation Accuracy -- References -- Enhancing Counterfactual Explanation Search with Diffusion Distance and Directional Coherence -- 1 Introduction -- 2 Related Work -- 3 Incorporating Novel Biases in Counterfactual Search -- 3.1 Using Diffusion Distance to Search for More Feasible Transitions -- 3.2 Directional Coherence -- 3.3 Bringing Feasibility and Directional Coherence into Counterfactual Objective Function -- 3.4 Evaluation Metrics -- 4 Experiments -- 4.1 Synthetic Datasets -- 4.2 Datasets with Continuous Features -- 4.3 Classification Datasets with Mix-Type Features -- 4.4 Benchmarking with Other Frameworks -- 5 Results -- 5.1 Diffusion Distance and Directional Coherence on Synthetic and Diabetes Datasets -- 5.2 Comparison of CoDiCE with Other Counterfactual Methods on Various Datasets -- 5.3 Ablation Experiments -- 6 Discussion -- 7 Conclusion -- A Appendix -- References -- CountARFactuals - Generating Plausible Model-Agnostic Counterfactual Explanations with Adversarial Random Forests -- 1 Introduction -- 2 Related Work -- 3 Background -- 3.1 Multi-objective Counterfactual Explanations -- 3.2 Generative Modeling and Adversarial Random Forests -- 4 Methods -- 4.1 Algorithm 1: Integrating ARF into MOC -- 4.2 Algorithm 2: ARF Is All You Need -- 5 Experiments -- 5.1 Data-Generating Process -- 5.2 Competing Methods -- 5.3 Evaluation Criteria -- 5.4 Results -- 6 Real Data Example -- 7 Discussion -- A Algorithm 1: Integrating ARF into MOC -- B Algorithm 2: ARF Is All You Need -- C Synthetic Data -- C.1 Illustrative Datasets -- C.2 Randomly Generated DGPs -- D Additional Empirical Results -- References.
Causality-Aware Local Interpretable Model-Agnostic Explanations -- 1 Introduction -- 2 Related Works -- 3 Background -- 4 Causality-Aware LIME -- 5 Experiments -- 5.1 Datasets and Classifiers -- 5.2 Comparison with Related Works -- 5.3 Evaluation Measures -- 5.4 Results -- 6 Conclusion -- References -- Evaluating the Faithfulness of Causality in Saliency-Based Explanations of Deep Learning Models for Temporal Colour Constancy -- 1 Introduction -- 2 Related Work -- 3 Proposed Neural Architectures -- 4 Original Methodology of the Tests -- 4.1 Adaptation of the Original Tests -- 5 Experimental Setup -- 6 Method -- 7 Results -- 7.1 Preliminary Accuracy Investigation -- 7.2 Test WP1 -- 7.3 Test WP2 -- 7.4 Discussion -- 8 Conclusions and Future Work -- A Extended Results of the Experiments -- References -- CAGE: Causality-Aware Shapley Value for Global Explanations -- 1 Introduction -- 2 Preliminaries and Notation -- 2.1 Causal Models and Interventions -- 2.2 Shapley Additive Global Importance -- 3 Causality-Aware Global Explanations -- 3.1 Global Causal Shapley Values -- 3.2 Properties of Global Causal Feature Importance -- 3.3 Computing Causal Shapley Values -- 4 Experiments -- 4.1 Experiments on Synthetic Data -- 4.2 Explanations on Alzheimer Data -- 5 Related Work -- 6 Discussion -- 7 Conclusion -- A Data - Generating Causal Models -- A.1 Direct-Cause structure -- A.2 Markovian Structure -- A.3 Mixed structure -- References -- Fairness, Trust, Privacy, Security, Accountability and Actionability in eXplainable AI -- Exploring the Reliability of SHAP Values in Reinforcement Learning -- 1 Introduction -- 1.1 Shapley Values -- 1.2 Shapley Values for ML - SHAP -- 1.3 Contributions -- 2 Related Work -- 3 Benchmark Environments -- 4 Experiment 1: Dependency of KernelSHAP on Background Data -- 4.1 KernelSHAP and Background Data -- 4.2 Experimental Setup.
4.3 Robustness of KernelSHAP -- 5 Experiment 2: Empirical Evaluation of SHAP-Based Feature Importance -- 5.1 Generalized Feature Importance -- 5.2 Experimental Setup -- 5.3 Performance Drop Vs. Feature Importance -- 6 Interpretation of SHAP Time Dependency in RL -- 7 Conclusion and Outlook -- References -- Categorical Foundation of Explainable AI: A Unifying Theory -- 1 Introduction -- 2 Explainable AI Theory: Requirements -- 2.1 Category Theory: A Framework for (X)AI Processes -- 2.2 Institution Theory: A Framework for Explanations -- 3 Categorical Framework of Explainable AI -- 3.1 Abstract Learning Processes -- 3.2 Concrete Learning and Explaining Processes -- 4 Impact on XAI and Key Findings -- 4.1 Finding #1: Our Framework Models Existing Learning Schemes and Architectures -- 4.2 Finding #2: Our Framework Enables a Formal Definition of "explanation" -- 4.3 Finding #3: Our Framework Provides a Theoretical Foundation for XAI Taxonomies -- 4.4 Finding #4: Our Framework Emphasizes Commonly Overlooked Aspects of Explanations -- 5 Discussion -- A Elements of Category Theory -- A.1 Monoidal Categories -- A.2 Cartesian and Symmetric Monoidal Categories -- A.3 Feedback Monoidal Categories -- A.4 Free Categories -- A.5 Institutions -- References
XentricAI: A Gesture Sensing Calibration Approach Through Explainable and User-Centric AI -- 1 Introduction -- 2 Background and Related Work -- 2.1 User-Centric XAI Techniques -- 2.2 Gesture Sensing Model Calibration Using Experience Replay -- 3 XAI for User-Centric and Customized Gesture Sensing -- 3.1 Gesture Sensing Algorithm and Feature Design -- 3.2 Model Calibration Using Experience Replay -- 3.3 Anomalous Gesture Detection and Characterization -- 4 Experiments -- 4.1 Implementation Settings -- 4.2 Experimental Results -- 5 Conclusion -- Appendix -- References -- Toward Understanding the Disagreement Problem in Neural Network Feature Attribution -- 1 Introduction -- 2 Background and Related Work -- 3 Understanding the Explanation's Distribution -- 4 Do Feature Attribution Methods Attribute? -- 4.1 Impact of Data Preprocessing -- 4.2 Faithfulness of Effects -- 4.3 Beyond Feature Attribution Toward Importance -- 5 Discussion -- 6 Conclusion -- A Appendix -- A.1 COMPAS Dataset -- A.2 Simulation Details -- A.3 Model Performance -- References -- ConformaSight: Conformal Prediction-Based Global and Model-Agnostic Explainability Framework -- 1 Introduction -- 2 Preliminaries -- 2.1 Explainability -- 2.2 Uncertainty Estimation and Quantification -- 2.3 Conformal Prediction -- 3 Related Work -- 4 Methodology -- 4.1 ConformaSight Structure and Mechanism -- 4.2 ConformaSight in Practice: A Sample Scenario -- 4.3 Computational Complexity of the ConformaSight -- 5 Experiments and Evaluations -- 5.1 Experimental Settings -- 6 Results and Discussion -- 7 Conclusion, Limitations and Future Work -- References -- Differential Privacy for Anomaly Detection: Analyzing the Trade-Off Between Privacy and Explainability -- 1 Introduction -- 2 Related Work -- 2.1 Privacy-Preserving Anomaly Detection -- 2.2 Explainable Anomaly Detection.
2.3 Impact of Privacy on Explainability.
Record no. UNINA-9910872195903321
Find it at: Univ. Federico II