Record No.
UNINA9910616374003321

Title
Interpretability of Machine Intelligence in Medical Image Computing : 5th International Workshop, iMIMIC 2022, Held in Conjunction with MICCAI 2022, Singapore, Singapore, September 22, 2022, Proceedings / edited by Mauricio Reyes, Pedro Henriques Abreu, Jaime Cardoso

Publication/Distribution
Cham : Springer Nature Switzerland : Imprint: Springer, 2022

Edition
[1st ed. 2022.]

Physical description
1 online resource (134 pages)

Series
Lecture Notes in Computer Science, ISSN 1611-3349 ; 13611

Subjects
Computer vision
Machine learning
Education - Data processing
Social sciences - Data processing
Bioinformatics
Computer Vision
Machine Learning
Computers and Education
Computer Application in Social and Behavioral Sciences
Computational and Systems Biology

Language of publication
English

Format
Printed material

Bibliographic level
Monograph

Table of contents
Intro -- Preface -- Organization -- Contents -- Interpretable Lung Cancer Diagnosis with Nodule Attribute Guidance and Online Model Debugging -- 1 Introduction -- 2 Materials -- 3 Methodology -- 3.1 Collaborative Model Architecture with Attribute-Guidance -- 3.2 Debugging Model with Semantic Interpretation -- 3.3 Explanation by Attribute-Based Nodule Retrieval -- 4 Experiments and Results -- 4.1 Implementation -- 4.2 Quantitative Evaluation -- 4.3 Trustworthiness Check and Interpretable Diagnosis -- 5 Conclusions -- References -- Do Pre-processing and Augmentation Help Explainability? A Multi-seed Analysis for Brain Age Estimation -- 1 Introduction -- 2 Related Work -- 3 Methods -- 4 Results -- 4.1 Performance -- 4.2 Voxel Agreement -- 4.3 Atlas-Based Analyses -- 4.4 Region Validation -- 5 Conclusion -- References -- Towards Self-explainable Transformers for Cell Classification in Flow Cytometry Data -- 1 Introduction -- 2 Related Work -- 3 Methods -- 3.1 Architecture -- 3.2 Preprocessing -- 3.3 Loss Function -- 3.4 Data Augmentation -- 4 Experiments -- 4.1 Data -- 4.2 Results -- 5 Conclusion -- References -- Reducing Annotation Need in Self-explanatory Models for Lung Nodule Diagnosis -- 1 Introduction -- 2 Method -- 3 Experimental Results -- 3.1 Prediction Performance of Nodule Attributes and Malignancy -- 3.2 Analysis of Extracted Features in Learned Space -- 3.3 Ablation Study -- 4 Conclusion -- References -- Attention-Based Interpretable Regression of Gene Expression in Histology -- 1 Introduction -- 2 Methods -- 2.1 Datasets -- 2.2 Multiple Instance Regression of Gene Expression -- 2.3 Attention-Based Model Interpretability -- 2.4 Evaluation of Performance and Interpretability -- 3 Experiments and Results -- 3.1 Network Training -- 3.2 Quantitative Model Evaluation -- 3.3 Attention-Based Identification of Hotspots and Patterns -- 3.4 Quantitative Evaluation of the Attention -- 4 Discussion -- 5 Conclusion -- A Description of Selected Genes -- B Detailed Model Evaluation -- C Additional Visualizations -- D Single-Cell Co-expression -- References -- Beyond Voxel Prediction Uncertainty: Identifying Brain Lesions You Can Trust -- 1 Introduction -- 2 Our Framework: Graph Modelization for Lesion Uncertainty Quantification -- 2.1 Monte Carlo Dropout Model and Voxel-Wise Uncertainty -- 2.2 Graph Dataset Generation -- 2.3 GCNN Architecture and Training -- 3 Material and Method -- 3.1 Data -- 3.2 Comparison with Known Approaches -- 3.3 Evaluation Setting -- 3.4 Implementation Details -- 4 Results and Discussion -- 5 Conclusion -- References -- Interpretable Vertebral Fracture Diagnosis -- 1 Introduction -- 1.1 Related Work -- 2 Methodology -- 2.1 Vertebral Fracture Detection -- 2.2 Semantic Concept Extraction (Correlation) -- 2.3 Visualization of Highly Correlating Concepts at Inference -- 3 Experimental Setup -- 4 Results and Discussion -- 4.1 Clinical Meaningfulness of Extracted Semantic Concepts -- 4.2 Single-Inference Concept Visualization -- 5 Conclusion -- References -- Multi-modal Volumetric Concept Activation to Explain Detection and Classification of Metastatic Prostate Cancer on PSMA-PET/CT -- 1 Introduction -- 2 Data -- 3 Method -- 3.1 Preprocessing -- 3.2 Detection -- 3.3 Classification -- 3.4 Explainable AI -- 4 Results -- 4.1 Detection -- 4.2 Classification -- 4.3 Explainable AI -- 5 Discussion -- 6 Conclusion -- References -- KAM - A Kernel Attention Module for Emotion Classification with EEG Data -- 1 Introduction -- 2 Related Work -- 3 Kernel Attention Module -- 4 Experiments -- 5 Conclusion -- References -- Explainable Artificial Intelligence for Breast Tumour Classification: Helpful or Harmful -- 1 Introduction -- 2 Related Work -- 2.1 XAI in Medicine -- 3 Model Setup -- 3.1 Data Pre-Processing -- 3.2 Model Architecture -- 4 Explanations -- 4.1 LIME -- 4.2 RISE -- 4.3 SHAP -- 5 Evaluating Explanations -- 5.1 One-Way ANOVA -- 5.2 Kendall's Tau -- 5.3 Radiologist Evaluation -- 5.4 Threats to Validity -- 6 Observations and Discussion -- 6.1 Discussion -- A Appendix -- A.1 Model Training Results -- A.2 Choosing L Parameter for LIME -- A.3 One-Way ANOVA Results -- A.4 Pixel Agreement Statistics -- A.5 Ranked Biased Overlap (RBO) Results -- A.6 Kendall's Tau Results -- A.7 Radiologist Opinions -- A.8 Explanation Examples -- References -- Author Index.

Summary/abstract
This book constitutes the refereed joint proceedings of the 5th International Workshop on Interpretability of Machine Intelligence in Medical Image Computing, iMIMIC 2022, held in Singapore in September 2022, in conjunction with the 25th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2022. The 10 full papers presented at iMIMIC 2022 were carefully reviewed and selected from 24 submissions. The iMIMIC papers focus on the challenges and opportunities related to the interpretability of machine learning systems in the context of medical imaging and computer-assisted intervention.