Record 1 (UNISOB)

Title: Sovranità e rappresentanza : la dottrina dello stato in Thomas Hobbes / Anna Di Bello
Author: Di Bello, Anna
Publication: Roma : Nella sede dell'Istituto, 2010
Physical description: 625 p. ; 21 cm
Series: Momenti e problemi della storia del pensiero ; 40
Language: Italian
Note: At head of title page: Istituto italiano per gli studi filosofici
Note: For consultation procedures, see the "Fondi" link on the Library homepage
Material type: Modern monograph (SBN)
Holdings: Fondo De Sanctis; acquired by gift

Record 2 (UNINA)

Title: Explainable Artificial Intelligence : Second World Conference, xAI 2024, Valletta, Malta, July 17–19, 2024, Proceedings, Part III / edited by Luca Longo, Sebastian Lapuschkin, Christin Seifert
Edition: 1st ed. 2024
Publication: Cham : Springer Nature Switzerland : Imprint: Springer, 2024
Physical description: 1 online resource (471 pages)
Series: Communications in Computer and Information Science, ISSN 1865-0937 ; 2155
Language: English
ISBN: 3-031-63800-X; 3-031-63799-2
DOI: 10.1007/978-3-031-63800-8

Contents:
-- Counterfactual explanations and causality for eXplainable AI
-- Sub-SpaCE: Subsequence-based Sparse Counterfactual Explanations for Time Series Classification Problems
-- Human-in-the-loop Personalized Counterfactual Recourse
-- COIN: Counterfactual inpainting for weakly supervised semantic segmentation for medical images
-- Enhancing Counterfactual Explanation Search with Diffusion Distance and Directional Coherence
-- CountARFactuals – Generating plausible model-agnostic counterfactual explanations with adversarial random forests
-- Causality-Aware Local Interpretable Model-Agnostic Explanations
-- Evaluating the Faithfulness of Causality in Saliency-based Explanations of Deep Learning Models for Temporal Colour Constancy
-- CAGE: Causality-Aware Shapley Value for Global Explanations
-- Fairness, trust, privacy, security, accountability and actionability in eXplainable AI
-- Exploring the Reliability of SHAP Values in Reinforcement Learning
-- Categorical Foundation of Explainable AI: A Unifying Theory
-- Investigating Calibrated Classification Scores through the Lens of Interpretability
-- XentricAI: A Gesture Sensing Calibration Approach through Explainable and User-Centric AI
-- Toward Understanding the Disagreement Problem in Neural Network Feature Attribution
-- ConformaSight: Conformal Prediction-Based Global and Model-Agnostic Explainability Framework
-- Differential Privacy for Anomaly Detection: Analyzing the Trade-off Between Privacy and Explainability
-- Blockchain for Ethical & Transparent Generative AI Utilization by Banking & Finance Lawyers
-- Multi-modal Machine learning model for Interpretable Mobile Malware Classification
-- Explainable Fraud Detection with Deep Symbolic Classification
-- Better Luck Next Time: About Robust Recourse in Binary Allocation Problems
-- Towards Non-Adversarial Algorithmic Recourse
-- Communicating Uncertainty in Machine Learning Explanations: A Visualization Analytics Approach for Predictive Process Monitoring
-- XAI for Time Series Classification: Evaluating the Benefits of Model Inspection for End-Users

Summary: This four-volume set constitutes the refereed proceedings of the Second World Conference on Explainable Artificial Intelligence, xAI 2024, held in Valletta, Malta, during July 17-19, 2024. The 95 full papers presented were carefully reviewed and selected from 204 submissions. The conference papers are organized in topical sections on: Part I - intrinsically interpretable XAI and concept-based global explainability; generative explainable AI and verifiability; notion, metrics, evaluation and benchmarking for XAI. Part II - XAI for graphs and computer vision; logic, reasoning, and rule-based explainable AI; model-agnostic and statistical methods for eXplainable AI. Part III - counterfactual explanations and causality for eXplainable AI; fairness, trust, privacy, security, accountability and actionability in eXplainable AI. Part IV - explainable AI in healthcare and computational neuroscience; explainable AI for improved human-computer interaction and software engineering for explainability; applications of explainable artificial intelligence.

Subjects: Artificial intelligence; Natural language processing (Computer science); Natural Language Processing (NLP); Application software; Computer and Information Systems Applications; Computer networks; Computer Communication Networks
Dewey classification: 006.3
Editors: Longo, Luca; Lapuschkin, Sebastian; Seifert, Christin