Record 1: UNINA bibliographic record 990005047920403321 (Aleph 000504792), UNIMARC, book

Title: Etudes, ou, Discours historiques sur la chute de l'empire romain : la naissance et le progrès du Christianisme et l'invasion des barbares : suivis des Mélanges littéraires / Chateaubriand
Author: Chateaubriand, François-René de <1768-1848>
Publication: Paris : Penaud Frères, s.d.
Physical description: 599 p., [1] tav. ; 24 cm
Holdings: library FLFBC; shelfmarks SX CHA 18(8), Fil.Mod. 24432
Additional title entries: Etudes, ou, Discours historiques sur la chute de l'empire romain; Suivis des Mélanges littéraires

Record 2: UNINA bibliographic record 9910872185403321, book (online resource)

Title: Explainable Artificial Intelligence : Second World Conference, xAI 2024, Valletta, Malta, July 17–19, 2024, Proceedings, Part I / edited by Luca Longo, Sebastian Lapuschkin, Christin Seifert
Edition: 1st ed. 2024
Publication: Cham : Springer Nature Switzerland : Imprint: Springer, 2024
Physical description: 1 online resource (508 pages)
Series: Communications in Computer and Information Science, ISSN 1865-0937 ; 2153
ISBN: 3-031-63787-9 (electronic); 3-031-63786-0 (print)
DOI: 10.1007/978-3-031-63787-2
Other identifiers: (CKB)32775313100041; (MiAaPQ)EBC31523148; (Au-PeEL)EBL31523148; (DE-He213)978-3-031-63787-2; (EXLCZ)9932775313100041
Language: English

Contents:
- Intrinsically interpretable XAI and concept-based global explainability
- Seeking Interpretability and Explainability in Binary Activated Neural Networks
- Prototype-based Interpretable Breast Cancer Prediction Models: Analysis and Challenges
- Evaluating the Explainability of Attributes and Prototypes for a Medical Classification Model
- Revisiting FunnyBirds evaluation framework for prototypical parts networks
- CoProNN: Concept-based Prototypical Nearest Neighbors for Explaining Vision Models
- Unveiling the Anatomy of Adversarial Attacks: Concept-based XAI Dissection of CNNs
- AutoCL: AutoML for Concept Learning
- Locally Testing Model Detections for Semantic Global Concepts
- Knowledge graphs for empirical concept retrieval
- Global Concept Explanations for Graphs by Contrastive Learning
- Generative explainable AI and verifiability
- Augmenting XAI with LLMs: A Case Study in Banking Marketing Recommendation
- Generative Inpainting for Shapley-Value-Based Anomaly Explanation
- Challenges and Opportunities in Text Generation Explainability
- NoNE Found: Explaining the Output of Sequence-to-Sequence Models when No Named Entity is Recognized
- Notion, metrics, evaluation and benchmarking for XAI
- Benchmarking Trust: A Metric for Trustworthy Machine Learning
- Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI
- Conditional Calibrated Explanations: Finding a Path between Bias and Uncertainty
- Meta-evaluating stability measures: MAX-Sensitivity & AVG-Sensitivity
- Xpression: A unifying metric to evaluate Explainability and Compression of AI models
- Evaluating Neighbor Explainability for Graph Neural Networks
- A Fresh Look at Sanity Checks for Saliency Maps
- Explainability, Quantified: Benchmarking XAI techniques
- BEExAI: Benchmark to Evaluate Explainable AI
- Associative Interpretability of Hidden Semantics with Contrastiveness Operators in Face Classification tasks

Summary: This four-volume set constitutes the refereed proceedings of the Second World Conference on Explainable Artificial Intelligence, xAI 2024, held in Valletta, Malta, during July 17-19, 2024. The 95 full papers presented were carefully reviewed and selected from 204 submissions. The conference papers are organized in topical sections on: Part I - intrinsically interpretable XAI and concept-based global explainability; generative explainable AI and verifiability; notion, metrics, evaluation and benchmarking for XAI. Part II - XAI for graphs and computer vision; logic, reasoning, and rule-based explainable AI; model-agnostic and statistical methods for eXplainable AI. Part III - counterfactual explanations and causality for eXplainable AI; fairness, trust, privacy, security, accountability and actionability in eXplainable AI. Part IV - explainable AI in healthcare and computational neuroscience; explainable AI for improved human-computer interaction and software engineering for explainability; applications of explainable artificial intelligence.

Subjects: Artificial intelligence; Natural language processing (Computer science); Application software; Computer networks
Springer subject classification: Artificial Intelligence; Natural Language Processing (NLP); Computer and Information Systems Applications; Computer Communication Networks
Dewey classification: 006.3
Added entries: Longo, Luca; Lapuschkin, Sebastian; Seifert, Christin
Cataloging source: MiAaPQ