1.

Record no.

UNINA990009705810403321

Title

Convegno di scienze morali e storiche : 4-11 ottobre 1938 : tema: L'Africa

Publication/distribution/printing

Roma : Reale Accademia d'Italia, 1939

Physical description

2 v. ; 26 cm

Series

Atti dei convegni / Reale Accademia d'Italia, Fondazione Alessandro Volta ; 8

Discipline

960.31

Location

FSPBC

Shelfmark

XIV E 2204

Language of publication

Italian

Format

Printed material

Bibliographic level

Monograph

2.

Record no.

UNISA996392030503316

Author

Fowler Christopher <1610?-1678>

Title

Dæmonium meridianum. Sathan at noon. The second part [electronic resource] : The first hath discovered the blasphemies of J. Pordage, against the Lord Christ, under the pretence of visions, and converse with angles. This now discovereth the slanders and calumnies cast upon some corporations, with forged and false articles upon the author, in a pamphlet intituled, The case of reading rightly stated, by the adherents and abettors of the said J.P. With a word to infant-baptisme, and a glance to Mr. Pendarves his arrowes against Babylon. / By Christopher Fowler, minister of the gospel at St. Mary's in Reading

Publication/distribution/printing

London : Printed for Fra: Eglesfield at the Golden Marigold in Paul's Church-yard, 1656

Physical description

[4], 60 p

Subjects

Infant baptism

Visions

Language of publication

English

Format

Printed material

Bibliographic level

Monograph

General notes

A reply to: Pordage, John. Innocency appearing. Also in reply to: Pendarves, John. Arrowes against Babylon.

Annotation on Thomason copy: "February 16"; also the last number of the imprint date has been marked through and replaced by a 5.

Reproduction of the original in the British Library.

Summary/abstract

eebo-0018

3.

Record no.

UNINA9910872189603321

Author

Longo Luca

Title

Explainable Artificial Intelligence : Second World Conference, xAI 2024, Valletta, Malta, July 17–19, 2024, Proceedings, Part IV / edited by Luca Longo, Sebastian Lapuschkin, Christin Seifert

Publication/distribution/printing

Cham : Springer Nature Switzerland : Imprint: Springer, 2024

ISBN

3-031-63803-4

Edition

[1st ed. 2024.]

Physical description

1 online resource (480 pages)

Series

Communications in Computer and Information Science, 1865-0937 ; 2156

Other authors (Persons)

Lapuschkin Sebastian

Seifert Christin

Discipline

006.3

Subjects

Artificial intelligence

Natural language processing (Computer science)

Application software

Computer networks

Artificial Intelligence

Natural Language Processing (NLP)

Computer and Information Systems Applications

Computer Communication Networks

Language of publication

English

Format

Printed material

Bibliographic level

Monograph

Contents note

Explainable AI in healthcare and computational neuroscience. -- SRFAMap: a method for mapping integrated gradients of a CNN trained with statistical radiomic features to medical image saliency maps. -- Transparently Predicting Therapy Compliance of Young Adults Following Ischemic Stroke. -- Precision medicine in student health: Insights from Tsetlin Machines into chronic pain and psychological distress. -- Evaluating Local Explainable AI Techniques for the Classification of Chest X-ray Images. -- Feature importance to explain multimodal prediction models. A clinical use case. -- Identifying EEG Biomarkers of Depression with Novel Explainable Deep Learning Architectures. -- Increasing Explainability in Time Series Classification by Functional Decomposition. -- Towards Evaluation of Explainable Artificial Intelligence in Streaming Data. -- Quantitative Evaluation of xAI Methods for Multivariate Time Series - A Case Study for a CNN-based MI Detection Model. -- Explainable AI for improved human-computer interaction and Software Engineering for explainability. -- Influenciae: A library for tracing the influence back to the data-points. -- Explainability Engineering Challenges: Connecting Explainability Levels to Run-time Explainability. -- On the Explainability of Financial Robo-advice Systems. -- Can I trust my anomaly detection system? A case study based on explainable AI. -- Explanations considered harmful: The Impact of misleading Explanations on Accuracy in hybrid human-AI decision making. -- Human emotions in AI explanations. -- Study on the Helpfulness of Explainable Artificial Intelligence. -- Applications of explainable artificial intelligence. -- Pricing Risk: An XAI Analysis of Irish Car Insurance Premiums. -- Exploring the Role of Explainable AI in the Development and Qualification of Aircraft Quality Assurance Processes: A Case Study. -- Explainable Artificial Intelligence applied to Predictive Maintenance: Comparison of Post-hoc Explainability Techniques. -- A comparative analysis of SHAP, LIME, ANCHORS, and DICE for interpreting a dense neural network in Credit Card Fraud Detection. -- Application of the representative measure approach to assess the reliability of decision trees in dealing with unseen vehicle collision data. -- Ensuring Safe Social Navigation via Explainable Probabilistic and Conformal Safety Regions. -- Explaining AI Decisions: Towards Achieving Human-Centered Explainability in Smart Home Environments. -- AcME-AD: Accelerated Model Explanations for Anomaly Detection.

Summary/abstract

This four-volume set constitutes the refereed proceedings of the Second World Conference on Explainable Artificial Intelligence, xAI 2024, held in Valletta, Malta, during July 17-19, 2024. The 95 full papers presented were carefully reviewed and selected from 204 submissions. The conference papers are organized in topical sections on: Part I - intrinsically interpretable XAI and concept-based global explainability; generative explainable AI and verifiability; notion, metrics, evaluation and benchmarking for XAI. Part II - XAI for graphs and computer vision; logic, reasoning, and rule-based explainable AI; model-agnostic and statistical methods for eXplainable AI. Part III - counterfactual explanations and causality for eXplainable AI; fairness, trust, privacy, security, accountability and actionability in eXplainable AI. Part IV - explainable AI in healthcare and computational neuroscience; explainable AI for improved human-computer interaction and software engineering for explainability; applications of explainable artificial intelligence.