| Title: | Machine Learning and Knowledge Discovery in Databases : Research Track / edited by Danai Koutra [and four others] |
| Publication: | Cham, Switzerland : Springer Nature Switzerland AG, [2023] |
| | ©2023 |
| Edition: | First edition. |
| Physical description: | 1 online resource (758 pages) |
| Discipline: | 006.31 |
| Topical subject: | Data mining |
| | Databases |
| | Machine learning |
| Person (secondary responsibility): | Koutra, Danai |
| Bibliography note: | Includes bibliographical references and index. |
| Contents note: | Intro -- Preface -- Organization -- Invited Talks Abstracts -- Neural Wave Representations -- Physics-Inspired Graph Neural Networks -- Mapping Generative AI -- Contents - Part II -- Computer Vision -- Sample Prior Guided Robust Model Learning to Suppress Noisy Labels -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Prior Guided Sample Dividing -- 3.2 Denoising with the Divided Sets -- 4 Experiment -- 4.1 Datasets and Implementation Details -- 4.2 Comparison with State-of-the-Art Methods -- 4.3 Ablation Study -- 4.4 Generalization to Instance-Dependent Label Noise -- 4.5 Hyper-parameters Analysis -- 4.6 Discussion for Prior Generation Module -- 5 Limitations -- 6 Conclusions -- References -- DCID: Deep Canonical Information Decomposition -- 1 Introduction -- 2 Related Work -- 2.1 Canonical Correlation Analysis (CCA) -- 2.2 Multi-Task Learning (MTL) -- 3 Univariate Shared Information Retrieval -- 3.1 Problem Setting -- 3.2 Evaluating the Shared Representations -- 4 Method: Deep Canonical Information Decomposition -- 4.1 Limitations of the CCA Setting -- 4.2 Deep Canonical Information Decomposition (DCID) -- 5 Experiments -- 5.1 Baselines -- 5.2 Experimental Settings -- 5.3 Learning the Shared Features Z -- 5.4 Variance Explained by Z and Model Performance -- 5.5 Obesity and the Volume of Brain Regions of Interest (ROIs) -- 6 Discussion -- 6.1 Results Summary -- 6.2 Limitations -- References -- Negative Prototypes Guided Contrastive Learning for Weakly Supervised Object Detection -- 1 Introduction -- 2 Related Work -- 2.1 Weakly Supervised Object Detection -- 2.2 Contrastive Learning -- 3 Proposed Method -- 3.1 Preliminaries -- 3.2 Feature Extractor -- 3.3 Contrastive Branch -- 4 Experimental Results -- 4.1 Datasets -- 4.2 Implementation Details -- 4.3 Comparison with State-of-the-Arts -- 4.4 Qualitative Results -- 4.5 Ablation Study. |
| | 5 Conclusion -- References -- Voting from Nearest Tasks: Meta-Vote Pruning of Pre-trained Models for Downstream Tasks -- 1 Introduction -- 2 Related Works -- 3 Empirical Study: Pruning a Pre-trained Model for Different Tasks -- 3.1 A Dataset of Pruned Models -- 3.2 Do Similar Tasks Share More Nodes on Their Pruned Models? -- 4 Meta-Vote Pruning (MVP) -- 5 Experiments -- 5.1 Implementation Details -- 5.2 Baseline Methods -- 5.3 Main Results -- 5.4 Performance on Unseen Dataset -- 5.5 Results of MVP on Sub-tasks of Different Sizes -- 5.6 Ablation Study -- 6 Conclusion -- References -- Make a Long Image Short: Adaptive Token Length for Vision Transformers -- 1 Introduction -- 2 Related Works -- 3 Methodology -- 3.1 Token-Length Assigner -- 3.2 Resizable-ViT -- 3.3 Training Strategy -- 4 Experiments -- 4.1 Experimental Results -- 4.2 Ablation Study -- 5 Conclusions -- References -- Graph Rebasing and Joint Similarity Reconstruction for Cross-Modal Hash Retrieval -- 1 Introduction -- 2 Related Work -- 2.1 Supervised Cross-Modal Hashing Methods -- 2.2 Unsupervised Cross-Modal Hashing Methods -- 3 Methodology -- 3.1 Local Relation Graph Building -- 3.2 Graph Rebasing -- 3.3 Global Relation Graph Construction -- 3.4 Joint Similarity Reconstruction -- 3.5 Training Objectives -- 4 Experiments -- 4.1 Datasets and Evaluation Metrics -- 4.2 Implementation Details -- 4.3 Performance Comparison -- 4.4 Parameter Sensitivity Experiments -- 4.5 Ablation Experiments -- 5 Conclusion -- References -- ARConvL: Adaptive Region-Based Convolutional Learning for Multi-class Imbalance Classification -- 1 Introduction -- 2 Related Work -- 2.1 Multi-class Imbalance Learning -- 2.2 Loss Modification Based Methods -- 2.3 Convolutional Prototype Learning -- 3 ARConvL -- 3.1 Overview of ARConvL -- 3.2 Region Learning Module -- 3.3 Optimizing Class-Wise Latent Feature Distribution. |
| | 3.4 Enlarging Margin Between Classes -- 4 Experimental Studies -- 4.1 Experimental Setup -- 4.2 Performance Comparison -- 4.3 Performance Deterioration with Increasing Imbalance Levels -- 4.4 Effect of Each Adaptive Component of ARConvL -- 4.5 Utility in Large-Scale Datasets -- 5 Conclusion -- References -- Deep Learning -- Binary Domain Generalization for Sparsifying Binary Neural Networks -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Preliminaries -- 3.2 Sparse Binary Neural Network (SBNN) Formulation -- 3.3 Weight Optimization -- 3.4 Network Training -- 3.5 Implementation Gains -- 4 Experiments and Results -- 4.1 Ablation Studies -- 4.2 Benchmark -- 5 Conclusions -- References -- Efficient Hyperdimensional Computing -- 1 Introduction -- 2 Background -- 3 High Dimensions Are Not Necessary -- 3.1 Dimension-Accuracy Analysis -- 3.2 Low-Dimension Hypervector Training -- 4 Results -- 4.1 A Case Study of Our Technologies -- 4.2 Experimental Results -- 5 Discussion -- 5.1 Limitation of HDCs -- 5.2 Further Discussion of the Low Accuracy When d Is Low -- 6 Conclusion -- References -- Rényi Divergence Deep Mutual Learning -- 1 Introduction -- 2 Deep Mutual Learning -- 2.1 Rényi Divergence Deep Mutual Learning -- 3 Properties of RDML -- 3.1 Convergence Guarantee -- 3.2 Computational Complexity of RDML -- 4 Empirical Study -- 4.1 Experimental Setup -- 4.2 Convergence Trace Analysis -- 4.3 Evaluation Results -- 4.4 Generalization Results -- 5 Related Work -- 6 Conclusion -- References -- Is My Neural Net Driven by the MDL Principle? -- 1 Introduction -- 2 Related Work -- 3 MDL Principle, Signal, and Noise -- 3.1 Information Theory Primer -- 3.2 Signal and Noise -- 4 Learning with the MDL Principle -- 4.1 MDL Objective -- 4.2 Local Formulation -- 4.3 Combining Local Objectives to Obtain a Spectral Distribution -- 4.4 The MDL Spectral Distributions. |
| | 5 Experimental Results -- 5.1 Experimental Noise -- 5.2 Discussion -- 6 Conclusion and Future Work -- References -- Scoring Rule Nets: Beyond Mean Target Prediction in Multivariate Regression -- 1 Introduction -- 2 Distributional Regression -- 2.1 Proper Scoring Rules -- 2.2 Conditional CRPS -- 2.3 CCRPS as ANN Loss Function for Multivariate Gaussian Mixtures -- 2.4 Energy Score Ensemble Models -- 3 Experiments -- 3.1 Evaluation Metrics -- 3.2 Synthetic Experiments -- 3.3 Real World Experiments -- 4 Conclusion -- References -- Learning Distinct Features Helps, Provably -- 1 Introduction -- 2 Preliminaries -- 3 Learning Distinct Features Helps -- 4 Extensions -- 4.1 Binary Classification -- 4.2 Multi-layer Networks -- 4.3 Multiple Outputs -- 5 Discussion and Open Problems -- References -- Continuous Depth Recurrent Neural Differential Equations -- 1 Introduction -- 2 Related Work -- 3 Background -- 3.1 Problem Definition -- 3.2 Gated Recurrent Unit -- 3.3 Recurrent Neural Ordinary Differential Equations -- 4 Continuous Depth Recurrent Neural Differential Equations -- 4.1 CDR-NDE Based on Heat Equation -- 5 Experiments -- 5.1 Baselines -- 5.2 Person Activity Recognition with Irregularly Sampled Time-Series -- 5.3 Walker2d Kinematic Simulation -- 5.4 Stance Classification -- 6 Conclusion and Future Work -- References -- Fairness -- Mitigating Algorithmic Bias with Limited Annotations -- 1 Introduction -- 2 Preliminaries -- 2.1 Notation and Problem Definition -- 2.2 Fairness Evaluation Metrics -- 3 Active Penalization Of Discrimination -- 3.1 Penalization Of Discrimination (POD) -- 3.2 Active Instance Selection (AIS) -- 3.3 The APOD Algorithm -- 3.4 Theoretical Analysis -- 4 Experiment -- 4.1 Bias Mitigation Performance Analysis (RQ1) -- 4.2 Annotation Effectiveness Analysis (RQ2) -- 4.3 Annotation Ratio Analysis (RQ3) -- 4.4 Ablation Study (RQ4). |
| | 4.5 Visualization of Annotated Instances -- 5 Conclusion -- References -- FG2AN: Fairness-Aware Graph Generative Adversarial Networks -- 1 Introduction -- 2 Related Work -- 2.1 Graph Generative Model -- 2.2 Fairness on Graphs -- 3 Notation and Background -- 3.1 Notation -- 3.2 Root Causes of Representational Discrepancies -- 4 Methodology -- 4.1 Mitigating Degree-Related Bias -- 4.2 Mitigating Connectivity-Related Bias -- 4.3 FG2AN Assembling -- 4.4 Fairness Definitions for Graph Generation -- 5 Experiments -- 5.1 Experimental Setup -- 5.2 Experimental Results -- 6 Conclusion -- References -- Targeting the Source: Selective Data Curation for Debiasing NLP Models -- 1 Introduction -- 2 Three Sources of Bias in Text Encoders -- 2.1 Bias in Likelihoods -- 2.2 Bias in Attentions -- 2.3 Bias in Representations -- 3 Pipeline for Measuring Bias in Text -- 3.1 Masking -- 3.2 Probing -- 3.3 Aggregation and Normalization -- 3.4 Bias Computation -- 4 Experiments -- 4.1 Experimental Setup -- 4.2 Question Answering -- 4.3 Sentence Inference -- 4.4 Sentiment Analysis -- 5 Related Work -- 5.1 Bias Quantification -- 5.2 Bias Reduction -- 6 Conclusion -- 7 Ethical Considerations -- References -- Fairness in Multi-Task Learning via Wasserstein Barycenters -- 1 Introduction -- 2 Problem Statement -- 2.1 Multi-task Learning -- 2.2 Demographic Parity -- 3 Wasserstein Fair Multi-task Predictor -- 4 Plug-In Estimator -- 4.1 Data-Driven Approach -- 4.2 Empirical Multi-task -- 5 Numerical Evaluation -- 5.1 Datasets -- 5.2 Methods -- 5.3 Results -- 6 Conclusion -- References -- REST: Enhancing Group Robustness in DNNs Through Reweighted Sparse Training -- 1 Introduction -- 2 Related Work -- 2.1 Sparse Neural Network Training -- 2.2 Debiasing Frameworks -- 3 Methodology -- 3.1 Sparse Training -- 4 Experiments -- 4.1 Baselines -- 4.2 Datasets -- 4.3 Setup. |
| | 4.4 Computational Costs. |
| Authorized title: | Machine Learning and Knowledge Discovery in Databases |
| ISBN: | 3-031-43415-3 |
| Format: | Printed material |
| Bibliographic level: | Monograph |
| Language of publication: | English |
| Record no.: | 996550555203316 |
| Held by: | Univ. di Salerno |
| OPAC: | Check availability here |