Biomedical Informatics : Discovering Knowledge in Big Data / by Andreas Holzinger
Author Holzinger, Andreas
Edition [1st ed. 2014.]
Publication/distribution Cham : Springer International Publishing : Imprint: Springer, 2014
Physical description 1 online resource (XIX, 551 p. 210 illus., 164 illus. in color.)
Discipline 610.285
Topical subject Computational intelligence
Bioinformatics
Biomedical engineering
Biomathematics
Computational Intelligence
Computational Biology/Bioinformatics
Biomedical Engineering and Bioengineering
Mathematical and Computational Biology
ISBN 3-319-04528-8
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Introduction: Computer Science meets Life Science -- Fundamentals of Data, Information and Knowledge -- Structured Data: Coding and Classification -- Biomedical Databases: Acquisition, Storage, Information Retrieval and Use -- Multimedia Data Mining and Knowledge Discovery -- Knowledge and Decision: Cognitive Science and Human–Computer Interaction -- Biomedical Decision Making: Reasoning and Decision Support -- Interactive Information Visualization and Visual Analytics -- Biomedical Information Systems and Medical Knowledge Management -- Biomedical Data: Privacy, Safety and Security -- Methodology for Information Systems: System Design, Usability and Evaluation.
Record no. UNINA-9910299751003321
Available at: Univ. Federico II
OPAC: check availability
Machine Learning and Knowledge Extraction [electronic resource] : 7th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2023, Benevento, Italy, August 29 – September 1, 2023, Proceedings / edited by Andreas Holzinger, Peter Kieseberg, Federico Cabitza, Andrea Campagner, A Min Tjoa, Edgar Weippl
Author Holzinger, Andreas
Edition [1st ed. 2023.]
Publication/distribution Cham : Springer Nature Switzerland : Imprint: Springer, 2023
Physical description 1 online resource (335 pages)
Discipline 006.3
Other authors (Persons) Kieseberg, Peter
Cabitza, Federico
Campagner, Andrea
Tjoa, A Min
Weippl, Edgar
Series Lecture Notes in Computer Science
Topical subject Artificial intelligence
Software engineering
Database management
Data mining
Information storage and retrieval systems
Machine theory
Artificial Intelligence
Software Engineering
Database Management
Data Mining and Knowledge Discovery
Information Storage and Retrieval
Formal Languages and Automata Theory
ISBN 3-031-40837-3
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Controllable AI - An alternative to trustworthiness in complex AI systems? -- Efficient approximation of Asymmetric Shapley Values using Functional Decomposition -- Domain-Specific Evaluation of Visual Explanations for Application-Grounded Facial Expression Recognition -- Human-in-the-Loop Integration of Domain-Knowledge Graphs for Explainable and Federated Deep Learning -- The Tower of Babel in explainable Artificial Intelligence (XAI) -- Hyper-Stacked: Scalable and Distributed Approach to AutoML for Big Data -- Transformers are Short-text Classifiers -- Reinforcement Learning with Temporal-Logic-Based Causal Diagrams -- Using Machine Learning to Generate an ESG Dictionary -- Let me think! Investigating the effect of explanations feeding doubts about the AI advice -- Enhancing Trust in Machine Learning Systems by Formal Methods -- Sustainability Effects of Robust and Resilient Artificial Intelligence -- The Split Matters: Flat Minima Methods for Improving the Performance of GNNs -- Probabilistic framework based on Deep Learning for differentiating ultrasound movie view planes -- Standing Still is Not An Option: Alternative Baselines for Attainable Utility Preservation -- Memorization of Named Entities in Fine-tuned BERT Models -- Event and Entity Extraction from Generated Video Captions -- Fine-Tuning Language Models for Scientific Writing Support.
Record no. UNISA-996546851503316
Available at: Univ. di Salerno
OPAC: check availability
Machine Learning and Knowledge Extraction [electronic resource] : 7th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2023, Benevento, Italy, August 29 – September 1, 2023, Proceedings / edited by Andreas Holzinger, Peter Kieseberg, Federico Cabitza, Andrea Campagner, A Min Tjoa, Edgar Weippl
Author Holzinger, Andreas
Edition [1st ed. 2023.]
Publication/distribution Cham : Springer Nature Switzerland : Imprint: Springer, 2023
Physical description 1 online resource (335 pages)
Discipline 006.3
Other authors (Persons) Kieseberg, Peter
Cabitza, Federico
Campagner, Andrea
Tjoa, A Min
Weippl, Edgar
Series Lecture Notes in Computer Science
Topical subject Artificial intelligence
Software engineering
Database management
Data mining
Information storage and retrieval systems
Machine theory
Artificial Intelligence
Software Engineering
Database Management
Data Mining and Knowledge Discovery
Information Storage and Retrieval
Formal Languages and Automata Theory
ISBN 3-031-40837-3
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Controllable AI - An alternative to trustworthiness in complex AI systems? -- Efficient approximation of Asymmetric Shapley Values using Functional Decomposition -- Domain-Specific Evaluation of Visual Explanations for Application-Grounded Facial Expression Recognition -- Human-in-the-Loop Integration of Domain-Knowledge Graphs for Explainable and Federated Deep Learning -- The Tower of Babel in explainable Artificial Intelligence (XAI) -- Hyper-Stacked: Scalable and Distributed Approach to AutoML for Big Data -- Transformers are Short-text Classifiers -- Reinforcement Learning with Temporal-Logic-Based Causal Diagrams -- Using Machine Learning to Generate an ESG Dictionary -- Let me think! Investigating the effect of explanations feeding doubts about the AI advice -- Enhancing Trust in Machine Learning Systems by Formal Methods -- Sustainability Effects of Robust and Resilient Artificial Intelligence -- The Split Matters: Flat Minima Methods for Improving the Performance of GNNs -- Probabilistic framework based on Deep Learning for differentiating ultrasound movie view planes -- Standing Still is Not An Option: Alternative Baselines for Attainable Utility Preservation -- Memorization of Named Entities in Fine-tuned BERT Models -- Event and Entity Extraction from Generated Video Captions -- Fine-Tuning Language Models for Scientific Writing Support.
Record no. UNINA-9910739458903321
Available at: Univ. Federico II
OPAC: check availability
XxAI - Beyond Explainable AI [electronic resource] : International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers
Author Holzinger, Andreas
Publication/distribution Cham : Springer International Publishing AG, 2022
Physical description 1 online resource (397 p.)
Other authors (Persons) Goebel, Randy
Fong, Ruth
Moon, Taesup
Müller, Klaus-Robert
Samek, Wojciech
Series Lecture Notes in Computer Science
Topical subject Artificial intelligence
Machine learning
Uncontrolled subject Computer Science
Informatics
Conference Proceedings
Research
Applications
ISBN 3-031-04083-X
Format Printed material
Bibliographic level Monograph
Language of publication eng
Record no. UNISA-996472069203316
Available at: Univ. di Salerno
OPAC: check availability
XxAI - Beyond Explainable AI : International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers
Author Holzinger, Andreas
Edition [1st ed.]
Publication/distribution Cham : Springer International Publishing AG, 2022
Physical description 1 online resource (397 p.)
Discipline 006.31
Other authors (Persons) Goebel, Randy
Fong, Ruth
Moon, Taesup
Müller, Klaus-Robert
Samek, Wojciech
Series Lecture Notes in Computer Science
Topical subject Artificial intelligence
Machine learning
Intel·ligència artificial [Artificial intelligence]
Aprenentatge automàtic [Machine learning]
Genre/form subject Congressos [Conference proceedings]
Llibres electrònics [Electronic books]
Uncontrolled subject Computer Science
Informatics
Conference Proceedings
Research
Applications
ISBN 3-031-04083-X
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Preface -- Organization -- Contents -- Editorial -- xxAI - Beyond Explainable Artificial Intelligence -- 1 Introduction and Motivation for Explainable AI -- 2 Explainable AI: Past and Present -- 3 Book Structure -- References -- Current Methods and Challenges -- Explainable AI Methods - A Brief Overview -- 1 Introduction -- 2 Explainable AI Methods - Overview -- 2.1 LIME (Local Interpretable Model Agnostic Explanations) -- 2.2 Anchors -- 2.3 GraphLIME -- 2.4 Method: LRP (Layer-wise Relevance Propagation) -- 2.5 Deep Taylor Decomposition (DTD) -- 2.6 Prediction Difference Analysis (PDA) -- 2.7 TCAV (Testing with Concept Activation Vectors) -- 2.8 XGNN (Explainable Graph Neural Networks) -- 2.9 SHAP (Shapley Values) -- 2.10 Asymmetric Shapley Values (ASV) -- 2.11 Break-Down -- 2.12 Shapley Flow -- 2.13 Textual Explanations of Visual Models -- 2.14 Integrated Gradients -- 2.15 Causal Models -- 2.16 Meaningful Perturbations -- 2.17 EXplainable Neural-Symbolic Learning (X-NeSyL) -- 3 Conclusion and Future Outlook -- References -- General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models -- 1 Introduction -- 2 Assuming One-Fits-All Interpretability -- 3 Bad Model Generalization -- 4 Unnecessary Use of Complex Models -- 5 Ignoring Feature Dependence -- 5.1 Interpretation with Extrapolation -- 5.2 Confusing Linear Correlation with General Dependence -- 5.3 Misunderstanding Conditional Interpretation -- 6 Misleading Interpretations Due to Feature Interactions -- 6.1 Misleading Feature Effects Due to Aggregation -- 6.2 Failing to Separate Main from Interaction Effects -- 7 Ignoring Model and Approximation Uncertainty -- 8 Ignoring the Rashomon Effect -- 9 Failure to Scale to High-Dimensional Settings -- 9.1 Human-Intelligibility of High-Dimensional IML Output -- 9.2 Computational Effort.
9.3 Ignoring Multiple Comparison Problem -- 10 Unjustified Causal Interpretation -- 11 Discussion -- References -- CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations -- 1 Introduction -- 2 Related Work -- 3 The CLEVR-X Dataset -- 3.1 The CLEVR Dataset -- 3.2 Dataset Generation -- 3.3 Dataset Analysis -- 3.4 User Study on Explanation Completeness and Relevance -- 4 Experiments -- 4.1 Experimental Setup -- 4.2 Evaluating Explanations Generated by State-of-the-Art Methods -- 4.3 Analyzing Results on CLEVR-X by Question and Answer Types -- 4.4 Influence of Using Different Numbers of Ground-Truth Explanations -- 4.5 Qualitative Explanation Generation Results -- 5 Conclusion -- References -- New Developments in Explainable AI -- A Rate-Distortion Framework for Explaining Black-Box Model Decisions -- 1 Introduction -- 2 Related Works -- 3 Rate-Distortion Explanation Framework -- 3.1 General Formulation -- 3.2 Implementation -- 4 Experiments -- 4.1 Images -- 4.2 Audio -- 4.3 Radio Maps -- 5 Conclusion -- References -- Explaining the Predictions of Unsupervised Learning Models -- 1 Introduction -- 2 A Brief Review of Explainable AI -- 2.1 Approaches to Attribution -- 2.2 Neuralization-Propagation -- 3 Kernel Density Estimation -- 3.1 Explaining Outlierness -- 3.2 Explaining Inlierness: Direct Approach -- 3.3 Explaining Inlierness: Random Features Approach -- 4 K-Means Clustering -- 4.1 Explaining Cluster Assignments -- 5 Experiments -- 5.1 Wholesale Customer Analysis -- 5.2 Image Analysis -- 6 Conclusion and Outlook -- A Attribution on CNN Activations -- A.1 Attributing Outlierness -- A.2 Attributing Inlierness -- A.3 Attributing Cluster Membership -- References -- Towards Causal Algorithmic Recourse -- 1 Introduction -- 1.1 Motivating Examples -- 1.2 Summary of Contributions and Structure of This Chapter -- 2 Preliminaries.
2.1 XAI: Counterfactual Explanations and Algorithmic Recourse -- 2.2 Causality: Structural Causal Models, Interventions, and Counterfactuals -- 3 Causal Recourse Formulation -- 3.1 Limitations of CFE-Based Recourse -- 3.2 Recourse Through Minimal Interventions -- 3.3 Negative Result: No Recourse Guarantees for Unknown Structural Equations -- 4 Recourse Under Imperfect Causal Knowledge -- 4.1 Probabilistic Individualised Recourse -- 4.2 Probabilistic Subpopulation-Based Recourse -- 4.3 Solving the Probabilistic Recourse Optimization Problem -- 5 Experiments -- 5.1 Compared Methods -- 5.2 Metrics -- 5.3 Synthetic 3-Variable SCMs Under Different Assumptions -- 5.4 Semi-synthetic 7-Variable SCM for Loan-Approval -- 6 Discussion -- 7 Conclusion -- References -- Interpreting Generative Adversarial Networks for Interactive Image Generation -- 1 Introduction -- 2 Supervised Approach -- 3 Unsupervised Approach -- 4 Embedding-Guided Approach -- 5 Concluding Remarks -- References -- XAI and Strategy Extraction via Reward Redistribution -- 1 Introduction -- 2 Background -- 2.1 Explainability Methods -- 2.2 Reinforcement Learning -- 2.3 Credit Assignment in Reinforcement Learning -- 2.4 Methods for Credit Assignment -- 2.5 Explainability Methods for Credit Assignment -- 2.6 Credit Assignment via Reward Redistribution -- 3 Strategy Extraction via Reward Redistribution -- 3.1 Strategy Extraction with Profile Models -- 3.2 Explainable Agent Behavior via Strategy Extraction -- 4 Experiments -- 4.1 Gridworld -- 4.2 Minecraft -- 5 Limitations -- 6 Conclusion -- References -- Interpretable, Verifiable, and Robust Reinforcement Learning via Program Synthesis -- 1 Introduction -- 2 Background on Reinforcement Learning -- 3 Programmatic Policies -- 3.1 Traditional Interpretable Models -- 3.2 State Machine Policies -- 3.3 List Processing Programs.
3.4 Neurosymbolic Policies -- 4 Synthesizing Programmatic Policies -- 4.1 Imitation Learning -- 4.2 Q-Guided Imitation Learning -- 4.3 Updating the DNN Policy -- 4.4 Program Synthesis for Supervised Learning -- 5 Case Studies -- 5.1 Interpretability -- 5.2 Verification -- 5.3 Robustness -- 6 Conclusions and Future Work -- References -- Interpreting and Improving Deep-Learning Models with Reality Checks -- 1 Interpretability: For What and For Whom? -- 2 Computing Interpretations for Feature Interactions and Transformations -- 2.1 Contextual Decomposition (CD) Importance Scores for General DNNs -- 2.2 Agglomerative Contextual Decomposition (ACD) -- 2.3 Transformation Importance with Applications to Cosmology (TRIM) -- 3 Using Attributions to Improve Models -- 3.1 Penalizing Explanations to Align Neural Networks with Prior Knowledge (CDEP) -- 3.2 Distilling Adaptive Wavelets from Neural Networks with Interpretations -- 4 Real-Data Problems Showcasing Interpretations -- 4.1 Molecular Partner Prediction -- 4.2 Cosmological Parameter Prediction -- 4.3 Improving Skin Cancer Classification via CDEP -- 5 Discussion -- 5.1 Building/Distilling Accurate and Interpretable Models -- 5.2 Making Interpretations Useful -- References -- Beyond the Visual Analysis of Deep Model Saliency -- 1 Introduction -- 2 Saliency-Based XAI in Vision -- 2.1 White-Box Models -- 2.2 Black-Box Models -- 3 XAI for Improved Models: Excitation Dropout -- 4 XAI for Improved Models: Domain Generalization -- 5 XAI for Improved Models: Guided Zoom -- 6 Conclusion -- References -- ECQx: Explainability-Driven Quantization for Low-Bit and Sparse DNNs -- 1 Introduction -- 2 Related Work -- 3 Neural Network Quantization -- 3.1 Entropy-Constrained Quantization -- 4 Explainability-Driven Quantization -- 4.1 Layer-Wise Relevance Propagation.
4.2 eXplainability-Driven Entropy-Constrained Quantization -- 5 Experiments -- 5.1 Experimental Setup -- 5.2 ECQx Results -- 6 Conclusion -- References -- A Whale's Tail - Finding the Right Whale in an Uncertain World -- 1 Introduction -- 2 Related Work -- 3 Humpback Whale Data -- 3.1 Image Data -- 3.2 Expert Annotations -- 4 Methods -- 4.1 Landmark-Based Identification Framework -- 4.2 Uncertainty and Sensitivity Analysis -- 5 Experiments and Results -- 5.1 Experimental Setup -- 5.2 Uncertainty and Sensitivity Analysis of the Landmarks -- 5.3 Heatmapping Results and Comparison with Whale Expert Knowledge -- 5.4 Spatial Uncertainty of Individual Landmarks -- 6 Conclusion and Outlook -- References -- Explainable Artificial Intelligence in Meteorology and Climate Science: Model Fine-Tuning, Calibrating Trust and Learning New Science -- 1 Introduction -- 2 XAI Applications -- 2.1 XAI in Remote Sensing and Weather Forecasting -- 2.2 XAI in Climate Prediction -- 2.3 XAI to Extract Forced Climate Change Signals and Anthropogenic Footprint -- 3 Development of Attribution Benchmarks for Geosciences -- 3.1 Synthetic Framework -- 3.2 Assessment of XAI Methods -- 4 Conclusions -- References -- An Interdisciplinary Approach to Explainable AI -- Varieties of AI Explanations Under the Law. From the GDPR to the AIA, and Beyond -- 1 Introduction -- 1.1 Functional Varieties of AI Explanations -- 1.2 Technical Varieties of AI Explanations -- 1.3 Roadmap of the Paper -- 2 Explainable AI Under Current Law -- 2.1 The GDPR: Rights-Enabling Transparency -- 2.2 Contract and Tort Law: Technical and Protective Transparency -- 2.3 Banking Law: More Technical and Protective Transparency -- 3 Regulatory Proposals at the EU Level: The AIA -- 3.1 AI with Limited Risk: Decision-Enabling Transparency (Art. 52 AIA)? -- 3.2 AI with High Risk: Encompassing Transparency (Art. 13 AIA)?.
3.3 Limitations.
Record no. UNINA-9910561298803321
Available at: Univ. Federico II
OPAC: check availability