
Machine Learning and Knowledge Discovery in Databases. Research Track : European Conference, ECML PKDD 2024, Vilnius, Lithuania, September 9–13, 2024, Proceedings, Part III / edited by Albert Bifet, Jesse Davis, Tomas Krilavičius, Meelis Kull, Eirini Ntoutsi, Indrė Žliobaitė




Author: Bifet, Albert
Title: Machine Learning and Knowledge Discovery in Databases. Research Track : European Conference, ECML PKDD 2024, Vilnius, Lithuania, September 9–13, 2024, Proceedings, Part III / edited by Albert Bifet, Jesse Davis, Tomas Krilavičius, Meelis Kull, Eirini Ntoutsi, Indrė Žliobaitė
Publication: Cham : Springer Nature Switzerland : Imprint: Springer, 2024
Edition: 1st ed. 2024.
Physical description: 1 online resource (510 pages)
Discipline (Dewey): 006.3
Topical subjects: Artificial intelligence
Computer engineering
Computer networks
Computers
Image processing - Digital techniques
Computer vision
Software engineering
Artificial Intelligence
Computer Engineering and Networks
Computing Milieux
Computer Imaging, Vision, Pattern Recognition and Graphics
Software Engineering
Other authors: Davis, Jesse
Krilavičius, Tomas
Kull, Meelis
Ntoutsi, Eirini
Žliobaitė, Indrė
Contents note: Intro -- Preface -- Organization -- Invited Talks Abstracts -- The Dynamics of Memorization and Unlearning -- The Emerging Science of Benchmarks -- Enhancing User Experience with AI-Powered Search and Recommendations at Spotify -- How to Utilize (and Generate) Player Tracking Data in Sport -- Resource-Aware Machine Learning-A User-Oriented Approach -- Contents - Part III -- Research Track -- Interpretable and Generalizable Spatiotemporal Predictive Learning with Disentangled Consistency -- 1 Introduction -- 2 Related Works -- 2.1 Spatiotemporal Predictive Learning -- 2.2 Disentangled Representation -- 3 Methods -- 3.1 Preliminaries -- 3.2 Context-Motion Disentanglement -- 3.3 Disentangled Consistency -- 3.4 Practical Implementation -- 4 Experiments -- 4.1 Standard Spatiotemporal Predictive Learning -- 4.2 Generalizing to Unknown Scenes -- 4.3 Ablation Study -- 5 Limitations -- 5.1 Reverse Problem -- 5.2 Handling of Irregularly Sampled Data -- 5.3 Adaptability to Dynamic Views -- 6 Conclusion -- References -- Reinventing Node-centric Traffic Forecasting for Improved Accuracy and Efficiency -- 1 Introduction -- 2 Preliminaries -- 2.1 Formulations -- 2.2 Graph-Centric Approaches -- 2.3 Node-centric Approaches -- 3 Empirical Comparisons on Graph-Centric and Node-centric Methods -- 3.1 Results Analysis -- 4 The Proposed Framework -- 4.1 Local Proximity Modeling -- 4.2 Node Correlation Learning -- 4.3 Temporal Encoder and Predictor -- 5 Experiments -- 5.1 Experimental Setup -- 5.2 Comparisons on Common Benchmarks -- 5.3 Comparisons on the CA Dataset -- 5.4 Ablation Studies -- 5.5 Case Study -- 6 Conclusion and Future Work -- References -- Direct-Effect Risk Minimization for Domain Generalization -- 1 Introduction -- 2 Preliminaries -- 2.1 Correlation Shift -- 2.2 Problem Setting -- 3 Method -- 3.1 Recovering Indirect Effects.
3.2 Eliminating Indirect Effects in Training (TB) -- 3.3 Model Selection (VB) -- 4 Experiments -- 4.1 Datasets -- 4.2 Results -- 4.3 Foundation Models and O.o.d. Generalization -- 4.4 Visual Explanation -- 5 Related Works -- 6 Conclusion -- References -- Federated Frank-Wolfe Algorithm -- 1 Introduction -- 2 Related Work -- 3 Federated Frank-Wolfe Algorithm -- 3.1 Convergence Guarantees -- 3.2 Privacy and Communication Benefits -- 4 Design Variants of FedFW -- 4.1 FedFW with stochastic gradients -- 4.2 FedFW with Partial Client Participation -- 4.3 FedFW with Split Constraints for Stragglers -- 4.4 FedFW with Augmented Lagrangian -- 5 Numerical Experiments -- 5.1 Comparison of Algorithms in the Convex Setting -- 5.2 Comparison of Algorithms in the Non-convex Setting -- 5.3 Comparison of Algorithms in the Stochastic Setting -- 5.4 Impact of Hyperparameters -- 6 Conclusions -- References -- Bootstrap Latents of Nodes and Neighbors for Graph Self-supervised Learning -- 1 Introduction -- 2 Related Work -- 2.1 Graph Self-supervised Learning -- 2.2 Generation of Positive and Negative Pairs -- 3 Preliminary -- 3.1 Problem Statement -- 3.2 Graph Homophily -- 3.3 Bootstrapped Graph Latents -- 4 Methodology -- 4.1 Motivation -- 4.2 Bootstrap Latents of Nodes and Neighbors -- 5 Experiments -- 5.1 Experiment Setup -- 5.2 Experiment Results -- 6 Conclusion -- References -- Deep Sketched Output Kernel Regression for Structured Prediction -- 1 Introduction -- 2 Deep Sketched Output Kernel Regression -- 2.1 Learning Neural Networks with Infinite-Dimensional Outputs -- 2.2 The Pre-image Problem at Inference Time -- 3 Experiments -- 3.1 Analysis of DSOKR on Synthetic Least Squares Regression -- 3.2 SMILES to Molecule: SMI2Mol -- 3.3 Text to Molecule: ChEBI-20 -- 4 Conclusion -- References -- Hyperbolic Delaunay Geometric Alignment -- 1 Introduction -- 2 Related Work.
3 Background -- 3.1 Voronoi Cells and Delaunay Graph -- 3.2 The Klein-Beltrami Model -- 4 Method -- 4.1 Conversion to Klein-Beltrami -- 4.2 Hyperbolic Voronoi Diagram in Kn -- 4.3 HyperDGA -- 5 Experiments -- 5.1 Synthetic Data with Hyperbolic VAE -- 5.2 Real-Life Biological Data With Poincaré Embedding -- 6 Conclusions, Limitations and Future Work -- References -- ApmNet: Toward Generalizable Visual Continuous Control with Pre-trained Image Models -- 1 Introduction -- 2 Related Work -- 2.1 Pre-trained Models for Policy Learning -- 2.2 Data Augmentation for Policy Learning -- 3 Preliminaries -- 3.1 Continuous Control from Image -- 3.2 Masked Autoencoders -- 4 Method -- 4.1 ApmNetArchitecture -- 4.2 Asymmetric Policy Learning -- 5 Experiments -- 5.1 Environments Setup -- 5.2 Evaluation on Generalization Ability -- 5.3 Evaluation on Sample Efficiency -- 5.4 Ablation Study -- 6 Conclusion and Future Work -- References -- AdaHAT: Adaptive Hard Attention to the Task in Task-Incremental Learning -- 1 Introduction -- 2 Related Work -- 3 Task-Incremental Learning with Adaptive Hard Attention to the Task -- 3.1 The Algorithm: Adaptive Updates to the Parameters in the Network with Summative Attention to Previous Tasks -- 4 Experiments -- 4.1 Setups -- 4.2 Results -- 4.3 Ablation Study -- 4.4 Hyperparameters -- 5 Conclusion -- References -- Probabilistic Circuits with Constraints via Convex Optimization -- 1 Introduction -- 2 Probabilistic Circuits -- 3 Probabilistic Circuits with Constraints -- 4 Experiments -- 4.1 Scarce Datasets -- 4.2 Experiments with Missing Values -- 4.3 Fairness Experiments -- 5 Conclusions and Future Work -- References -- FedAR: Addressing Client Unavailability in Federated Learning with Local Update Approximation and Rectification -- 1 Introduction -- 2 Related Work -- 3 Problem Setup -- 3.1 Basic Algorithm of FL -- 3.2 Motivation.
4 FedAR Algorithm -- 5 Theoretical Analysis of FedAR -- 5.1 Convex Loss Function -- 5.2 Non-convex Loss Function -- 6 Experiments and Evaluations -- 6.1 Experimental Setup -- 6.2 Experimental Results -- 7 Conclusion -- References -- Selecting from Multiple Strategies Improves the Foreseeable Reasoning of Tool-Augmented Large Language Models -- 1 Introduction -- 2 Related Work -- 3 Methods -- 3.1 Problem Formulation -- 3.2 Method Components -- 4 Token Consumption Estimation -- 5 Experiments -- 5.1 Benchmarks -- 5.2 Baselines -- 5.3 Action Space -- 5.4 Evaluation Metrics -- 5.5 Experimental Setup -- 6 Results -- 6.1 Benchmarking Prompting Methods -- 6.2 Impact of the Multi-strategy Mechanism -- 6.3 Error Analysis -- 7 Discussion -- 7.1 Observation-Dependent Reasoning Vs. Foreseeable Reasoning -- 7.2 Single Vs. Multiple Reasoning Trajectories -- 8 Conclusions, Future Work, and Ethical Statement -- References -- Estimating Direct and Indirect Causal Effects of Spatiotemporal Interventions in Presence of Spatial Interference -- 1 Introduction -- 2 Preliminaries -- 2.1 Notations and Definitions -- 2.2 Assumptions -- 3 Spatio-Temporal Causal Inference Network (STCINet) -- 3.1 Latent Factor Model for Temporal Confounding -- 3.2 Double Attention Mechanism -- 3.3 U-Net for Spatial Interference -- 4 Experiments -- 4.1 Synthetic Dataset -- 4.2 Evaluation Metrics -- 4.3 Experimental Setup -- 4.4 Ablation Study -- 4.5 Comparison with Baseline Methods -- 4.6 Case Study on Real-World Arctic Data -- 5 Related Work -- 6 Conclusion -- References -- Continuous Geometry-Aware Graph Diffusion via Hyperbolic Neural PDE -- 1 Introduction -- 2 Preliminaries -- 3 Hyperbolic Numerical Integrators -- 3.1 Hyperbolic Projective Explicit Scheme -- 3.2 Hyperbolic Projective Implicit Scheme -- 3.3 Interpolation on Curved Space -- 4 Diffusing Graphs in Hyperbolic Space.
4.1 Hyperbolic Graph Diffusion Equation -- 4.2 Convergence of Dirichlet Energy -- 5 Empirical Results -- 5.1 Experiment Setup -- 5.2 Experiment Results -- 5.3 Ablation Study -- 6 Conclusion -- References -- SpanGNN: Towards Memory-Efficient Graph Neural Networks via Spanning Subgraph Training -- 1 Introduction -- 2 Preliminary -- 2.1 Graph Neural Networks -- 2.2 Spanning Subgraph GNN Training -- 3 SpanGNN: Memory-Efficient Full-Graph GNN Learning -- 4 Fast Quality-Aware Edge Selection -- 4.1 Variance-Minimized Sampling Strategy -- 4.2 Gradient Noise-Reduced Sampling Strategy -- 4.3 Two-Step Edge Sampling Method -- 5 Connection to Curriculum Learning -- 6 Experimental Studies -- 6.1 Experimental Setups -- 6.2 Performance of SpanGNN -- 6.3 Ablation Studies -- 6.4 Efficiency of SpanGNN -- 6.5 Performance of SpanGNN Compared to Mini-batch Training -- 7 Related Work -- 7.1 Memory-Efficient Graph Neural Networks -- 7.2 Curriculum Learning on GNN -- 8 Conclusion -- References -- AKGNet: Attribute Knowledge Guided Unsupervised Lung-Infected Area Segmentation -- 1 Introduction -- 2 Related Work -- 2.1 Medical Image Segmentation -- 2.2 Vision-Language Based Segmentation -- 3 Method -- 3.1 Overall Framework -- 3.2 Coarse Mask Generation -- 3.3 Text Attribute Knowledge Learning Module -- 3.4 Attribute-Image Cross-Attention Module -- 3.5 Self-training Mask Refinement -- 3.6 Loss Function -- 4 Experimental Results -- 4.1 Experimental Settings -- 4.2 Comparison Results -- 4.3 Ablation Studies -- 4.4 Qualitative Evaluation Results -- 5 Conclusion -- References -- Diffusion Model in Normal Gathering Latent Space for Time Series Anomaly Detection -- 1 Introduction -- 2 Related Work -- 2.1 Time Series Anomaly Detection -- 2.2 Diffusion Model for Time Series Analysis -- 3 Problem Formulation -- 4 Methodology -- 4.1 Overview -- 4.2 Autoencoder.
4.3 Normal Gathering Latent Space.
Summary/abstract: This multi-volume set, LNAI 14941 to LNAI 14950, constitutes the refereed proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2024, held in Vilnius, Lithuania, in September 2024. The papers in these proceedings come from three conference tracks. Research Track: the 202 full papers were carefully reviewed and selected from 826 submissions and appear in Parts I, II, III, IV, V, VI, VII, and VIII. Demo Track: the 14 papers were selected from 30 submissions and appear in Part VIII. Applied Data Science Track: the 56 full papers were carefully reviewed and selected from 224 submissions and appear in Parts IX and X.
Authorized title: Machine Learning and Knowledge Discovery in Databases. Research Track
ISBN: 3-031-70352-9
Format: Print material
Bibliographic level: Monograph
Language of publication: English
Record no.: 9910886077003321
Location: Univ. Federico II
Series: Lecture Notes in Artificial Intelligence, 2945-9141 ; 14943