Machine Learning and Knowledge Discovery in Databases. Research Track : European Conference, ECML PKDD 2024, Vilnius, Lithuania, September 9–13, 2024, Proceedings, Part III / edited by Albert Bifet, Jesse Davis, Tomas Krilavičius, Meelis Kull, Eirini Ntoutsi, Indrė Žliobaitė
Author Bifet, Albert
Edition [1st ed. 2024.]
Publication/distribution Cham : Springer Nature Switzerland : Imprint: Springer, 2024
Physical description 1 online resource (510 pages)
Classification 006.3
Other authors (persons) Davis, Jesse
Krilavičius, Tomas
Kull, Meelis
Ntoutsi, Eirini
Žliobaitė, Indrė
Series Lecture Notes in Artificial Intelligence
Topical subjects Artificial intelligence
Computer engineering
Computer networks
Computers
Image processing - Digital techniques
Computer vision
Software engineering
Artificial Intelligence
Computer Engineering and Networks
Computing Milieux
Computer Imaging, Vision, Pattern Recognition and Graphics
Software Engineering
ISBN 3-031-70352-9
Format Print material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Preface -- Organization -- Invited Talks Abstracts -- The Dynamics of Memorization and Unlearning -- The Emerging Science of Benchmarks -- Enhancing User Experience with AI-Powered Search and Recommendations at Spotify -- How to Utilize (and Generate) Player Tracking Data in Sport -- Resource-Aware Machine Learning-A User-Oriented Approach -- Contents - Part III -- Research Track -- Interpretable and Generalizable Spatiotemporal Predictive Learning with Disentangled Consistency -- 1 Introduction -- 2 Related Works -- 2.1 Spatiotemporal Predictive Learning -- 2.2 Disentangled Representation -- 3 Methods -- 3.1 Preliminaries -- 3.2 Context-Motion Disentanglement -- 3.3 Disentangled Consistency -- 3.4 Practical Implementation -- 4 Experiments -- 4.1 Standard Spatiotemporal Predictive Learning -- 4.2 Generalizing to Unknown Scenes -- 4.3 Ablation Study -- 5 Limitations -- 5.1 Reverse Problem -- 5.2 Handling of Irregularly Sampled Data -- 5.3 Adaptability to Dynamic Views -- 6 Conclusion -- References -- Reinventing Node-centric Traffic Forecasting for Improved Accuracy and Efficiency -- 1 Introduction -- 2 Preliminaries -- 2.1 Formulations -- 2.2 Graph-Centric Approaches -- 2.3 Node-centric Approaches -- 3 Empirical Comparisons on Graph-Centric and Node-centric Methods -- 3.1 Results Analysis -- 4 The Proposed Framework -- 4.1 Local Proximity Modeling -- 4.2 Node Correlation Learning -- 4.3 Temporal Encoder and Predictor -- 5 Experiments -- 5.1 Experimental Setup -- 5.2 Comparisons on Common Benchmarks -- 5.3 Comparisons on the CA Dataset -- 5.4 Ablation Studies -- 5.5 Case Study -- 6 Conclusion and Future Work -- References -- Direct-Effect Risk Minimization for Domain Generalization -- 1 Introduction -- 2 Preliminaries -- 2.1 Correlation Shift -- 2.2 Problem Setting -- 3 Method -- 3.1 Recovering Indirect Effects.
3.2 Eliminating Indirect Effects in Training (TB) -- 3.3 Model Selection (VB) -- 4 Experiments -- 4.1 Datasets -- 4.2 Results -- 4.3 Foundation Models and O.o.d. Generalization -- 4.4 Visual Explanation -- 5 Related Works -- 6 Conclusion -- References -- Federated Frank-Wolfe Algorithm -- 1 Introduction -- 2 Related Work -- 3 Federated Frank-Wolfe Algorithm -- 3.1 Convergence Guarantees -- 3.2 Privacy and Communication Benefits -- 4 Design Variants of FedFW -- 4.1 FedFW with stochastic gradients -- 4.2 FedFW with Partial Client Participation -- 4.3 FedFW with Split Constraints for Stragglers -- 4.4 FedFW with Augmented Lagrangian -- 5 Numerical Experiments -- 5.1 Comparison of Algorithms in the Convex Setting -- 5.2 Comparison of Algorithms in the Non-convex Setting -- 5.3 Comparison of Algorithms in the Stochastic Setting -- 5.4 Impact of Hyperparameters -- 6 Conclusions -- References -- Bootstrap Latents of Nodes and Neighbors for Graph Self-supervised Learning -- 1 Introduction -- 2 Related Work -- 2.1 Graph Self-supervised Learning -- 2.2 Generation of Positive and Negative Pairs -- 3 Preliminary -- 3.1 Problem Statement -- 3.2 Graph Homophily -- 3.3 Bootstrapped Graph Latents -- 4 Methodology -- 4.1 Motivation -- 4.2 Bootstrap Latents of Nodes and Neighbors -- 5 Experiments -- 5.1 Experiment Setup -- 5.2 Experiment Results -- 6 Conclusion -- References -- Deep Sketched Output Kernel Regression for Structured Prediction -- 1 Introduction -- 2 Deep Sketched Output Kernel Regression -- 2.1 Learning Neural Networks with Infinite-Dimensional Outputs -- 2.2 The Pre-image Problem at Inference Time -- 3 Experiments -- 3.1 Analysis of DSOKR on Synthetic Least Squares Regression -- 3.2 SMILES to Molecule: SMI2Mol -- 3.3 Text to Molecule: ChEBI-20 -- 4 Conclusion -- References -- Hyperbolic Delaunay Geometric Alignment -- 1 Introduction -- 2 Related Work.
3 Background -- 3.1 Voronoi Cells and Delaunay Graph -- 3.2 The Klein-Beltrami Model -- 4 Method -- 4.1 Conversion to Klein-Beltrami -- 4.2 Hyperbolic Voronoi Diagram in Kn -- 4.3 HyperDGA -- 5 Experiments -- 5.1 Synthetic Data with Hyperbolic VAE -- 5.2 Real-Life Biological Data With Poincaré Embedding -- 6 Conclusions, Limitations and Future Work -- References -- ApmNet: Toward Generalizable Visual Continuous Control with Pre-trained Image Models -- 1 Introduction -- 2 Related Work -- 2.1 Pre-trained Models for Policy Learning -- 2.2 Data Augmentation for Policy Learning -- 3 Preliminaries -- 3.1 Continuous Control from Image -- 3.2 Masked Autoencoders -- 4 Method -- 4.1 ApmNet Architecture -- 4.2 Asymmetric Policy Learning -- 5 Experiments -- 5.1 Environments Setup -- 5.2 Evaluation on Generalization Ability -- 5.3 Evaluation on Sample Efficiency -- 5.4 Ablation Study -- 6 Conclusion and Future Work -- References -- AdaHAT: Adaptive Hard Attention to the Task in Task-Incremental Learning -- 1 Introduction -- 2 Related Work -- 3 Task-Incremental Learning with Adaptive Hard Attention to the Task -- 3.1 The Algorithm: Adaptive Updates to the Parameters in the Network with Summative Attention to Previous Tasks -- 4 Experiments -- 4.1 Setups -- 4.2 Results -- 4.3 Ablation Study -- 4.4 Hyperparameters -- 5 Conclusion -- References -- Probabilistic Circuits with Constraints via Convex Optimization -- 1 Introduction -- 2 Probabilistic Circuits -- 3 Probabilistic Circuits with Constraints -- 4 Experiments -- 4.1 Scarce Datasets -- 4.2 Experiments with Missing Values -- 4.3 Fairness Experiments -- 5 Conclusions and Future Work -- References -- FedAR: Addressing Client Unavailability in Federated Learning with Local Update Approximation and Rectification -- 1 Introduction -- 2 Related Work -- 3 Problem Setup -- 3.1 Basic Algorithm of FL -- 3.2 Motivation.
4 FedAR Algorithm -- 5 Theoretical Analysis of FedAR -- 5.1 Convex Loss Function -- 5.2 Non-convex Loss Function -- 6 Experiments and Evaluations -- 6.1 Experimental Setup -- 6.2 Experimental Results -- 7 Conclusion -- References -- Selecting from Multiple Strategies Improves the Foreseeable Reasoning of Tool-Augmented Large Language Models -- 1 Introduction -- 2 Related Work -- 3 Methods -- 3.1 Problem Formulation -- 3.2 Method Components -- 4 Token Consumption Estimation -- 5 Experiments -- 5.1 Benchmarks -- 5.2 Baselines -- 5.3 Action Space -- 5.4 Evaluation Metrics -- 5.5 Experimental Setup -- 6 Results -- 6.1 Benchmarking Prompting Methods -- 6.2 Impact of the Multi-strategy Mechanism -- 6.3 Error Analysis -- 7 Discussion -- 7.1 Observation-Dependent Reasoning Vs. Foreseeable Reasoning -- 7.2 Single Vs. Multiple Reasoning Trajectories -- 8 Conclusions, Future Work, and Ethical Statement -- References -- Estimating Direct and Indirect Causal Effects of Spatiotemporal Interventions in Presence of Spatial Interference -- 1 Introduction -- 2 Preliminaries -- 2.1 Notations and Definitions -- 2.2 Assumptions -- 3 Spatio-Temporal Causal Inference Network (STCINet) -- 3.1 Latent Factor Model for Temporal Confounding -- 3.2 Double Attention Mechanism -- 3.3 U-Net for Spatial Interference -- 4 Experiments -- 4.1 Synthetic Dataset -- 4.2 Evaluation Metrics -- 4.3 Experimental Setup -- 4.4 Ablation Study -- 4.5 Comparison with Baseline Methods -- 4.6 Case Study on Real-World Arctic Data -- 5 Related Work -- 6 Conclusion -- References -- Continuous Geometry-Aware Graph Diffusion via Hyperbolic Neural PDE -- 1 Introduction -- 2 Preliminaries -- 3 Hyperbolic Numerical Integrators -- 3.1 Hyperbolic Projective Explicit Scheme -- 3.2 Hyperbolic Projective Implicit Scheme -- 3.3 Interpolation on Curved Space -- 4 Diffusing Graphs in Hyperbolic Space.
4.1 Hyperbolic Graph Diffusion Equation -- 4.2 Convergence of Dirichlet Energy -- 5 Empirical Results -- 5.1 Experiment Setup -- 5.2 Experiment Results -- 5.3 Ablation Study -- 6 Conclusion -- References -- SpanGNN: Towards Memory-Efficient Graph Neural Networks via Spanning Subgraph Training -- 1 Introduction -- 2 Preliminary -- 2.1 Graph Neural Networks -- 2.2 Spanning Subgraph GNN Training -- 3 SpanGNN: Memory-Efficient Full-Graph GNN Learning -- 4 Fast Quality-Aware Edge Selection -- 4.1 Variance-Minimized Sampling Strategy -- 4.2 Gradient Noise-Reduced Sampling Strategy -- 4.3 Two-Step Edge Sampling Method -- 5 Connection to Curriculum Learning -- 6 Experimental Studies -- 6.1 Experimental Setups -- 6.2 Performance of SpanGNN -- 6.3 Ablation Studies -- 6.4 Efficiency of SpanGNN -- 6.5 Performance of SpanGNN Compared to Mini-batch Training -- 7 Related Work -- 7.1 Memory-Efficient Graph Neural Networks -- 7.2 Curriculum Learning on GNN -- 8 Conclusion -- References -- AKGNet: Attribute Knowledge Guided Unsupervised Lung-Infected Area Segmentation -- 1 Introduction -- 2 Related Work -- 2.1 Medical Image Segmentation -- 2.2 Vision-Language Based Segmentation -- 3 Method -- 3.1 Overall Framework -- 3.2 Coarse Mask Generation -- 3.3 Text Attribute Knowledge Learning Module -- 3.4 Attribute-Image Cross-Attention Module -- 3.5 Self-training Mask Refinement -- 3.6 Loss Function -- 4 Experimental Results -- 4.1 Experimental Settings -- 4.2 Comparison Results -- 4.3 Ablation Studies -- 4.4 Qualitative Evaluation Results -- 5 Conclusion -- References -- Diffusion Model in Normal Gathering Latent Space for Time Series Anomaly Detection -- 1 Introduction -- 2 Related Work -- 2.1 Time Series Anomaly Detection -- 2.2 Diffusion Model for Time Series Analysis -- 3 Problem Formulation -- 4 Methodology -- 4.1 Overview -- 4.2 Autoencoder.
4.3 Normal Gathering Latent Space.
Record no. UNINA-9910886077003321
Held at: Univ. Federico II
Machine Learning and Knowledge Discovery in Databases. Research Track : European Conference, ECML PKDD 2024, Vilnius, Lithuania, September 9–13, 2024, Proceedings, Part VI / edited by Albert Bifet, Jesse Davis, Tomas Krilavičius, Meelis Kull, Eirini Ntoutsi, Indrė Žliobaitė
Author Bifet, Albert
Edition [1st ed. 2024.]
Publication/distribution Cham : Springer Nature Switzerland : Imprint: Springer, 2024
Physical description 1 online resource (509 pages)
Classification 006.3
Other authors (persons) Davis, Jesse
Krilavičius, Tomas
Kull, Meelis
Ntoutsi, Eirini
Žliobaitė, Indrė
Series Lecture Notes in Artificial Intelligence
Topical subjects Artificial intelligence
Computer engineering
Computer networks
Computers
Image processing - Digital techniques
Computer vision
Software engineering
Artificial Intelligence
Computer Engineering and Networks
Computing Milieux
Computer Imaging, Vision, Pattern Recognition and Graphics
Software Engineering
ISBN 3-031-70365-0
Format Print material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Preface -- Organization -- Invited Talks Abstracts -- The Dynamics of Memorization and Unlearning -- The Emerging Science of Benchmarks -- Enhancing User Experience with AI-Powered Search and Recommendations at Spotify -- How to Utilize (and Generate) Player Tracking Data in Sport -- Resource-Aware Machine Learning-A User-Oriented Approach -- Contents - Part VI -- Research Track -- Rejection Ensembles with Online Calibration -- 1 Introduction -- 2 Notation and Related Work -- 2.1 Related Work -- 3 A Theoretical Investigation of Rejection -- 3.1 Three Distinct Situations Can Occur When Training the Rejector -- 3.2 Even a Perfect Rejector Will Overuse Its Budget -- 3.3 A Rejector Should Not Trust fs and fb -- 4 Training a Rejector for a Rejection Ensemble -- 5 Experiments -- 5.1 Experiments with Deep Learning Models -- 5.2 Experiments with Decision Trees -- 5.3 Conclusion from the Experiments -- 6 Conclusion -- References -- Lighter, Better, Faster Multi-source Domain Adaptation with Gaussian Mixture Models and Optimal Transport -- 1 Introduction -- 2 Preliminaries -- 2.1 Gaussian Mixtures -- 2.2 Domain Adaptation -- 2.3 Optimal Transport -- 3 Methodological Contributions -- 3.1 First Order Analysis of MW2 -- 3.2 Supervised Mixture-Wasserstein Distances -- 3.3 Mixture Wasserstein Barycenters -- 3.4 Multi-source Domain Adaptation Through GMM-OT -- 4 Experiments -- 4.1 Toy Example -- 4.2 Multi-source Domain Adaptation -- 4.3 Lighter, Better, Faster Domain Adaptation -- 5 Conclusion -- References -- Subgraph Retrieval Enhanced by Graph-Text Alignment for Commonsense Question Answering -- 1 Introduction -- 2 Related Work -- 2.1 Commonsense Question Answering -- 2.2 Graph-Text Alignment -- 3 Task Formulation -- 4 Methods -- 4.1 Graph-Text Alignment -- 4.2 Subgraph Retrieval Module -- 4.3 Prediction -- 5 Experiments -- 5.1 Datasets.
5.2 Baselines -- 5.3 Implementation Details -- 5.4 Main Results -- 5.5 Ablation Study -- 5.6 Low-Resource Setting -- 5.7 Evaluation with other GNNs -- 5.8 Hyper-parameter Analysis -- 6 Ethical Considerations and Limitations -- 6.1 Ethical Considerations -- 6.2 Limitations -- 7 Conclusion -- References -- HetCAN: A Heterogeneous Graph Cascade Attention Network with Dual-Level Awareness -- 1 Introduction -- 2 Related Work -- 3 Preliminaries -- 3.1 Heterogeneous Information Network -- 3.2 Graph Neural Networks -- 3.3 Transformer-Style Architecture -- 4 The Proposed Model -- 4.1 Overall Architecture -- 4.2 Type-Aware Encoder -- 4.3 Dimension-Aware Encoder -- 4.4 Time Complexity Analysis -- 5 Experiments -- 5.1 Experimental Setups -- 5.2 Node Classification -- 5.3 Link Prediction -- 5.4 Model Analysis -- 6 Conclusion -- References -- Interpretable Target-Feature Aggregation for Multi-task Learning Based on Bias-Variance Analysis -- 1 Introduction -- 2 Preliminaries -- 2.1 Related Works: Dimensionality Reduction, Multi-task Learning -- 3 Bias-Variance Analysis: Theoretical Results -- 4 Multi-task Learning via Aggregations: Algorithms -- 5 Experimental Validation -- 5.1 Synthetic Experiments and Ablation Study -- 5.2 Real World Datasets -- 6 Conclusions and Future Developments -- References -- The Simpler The Better: An Entropy-Based Importance Metric to Reduce Neural Networks' Depth -- 1 Introduction -- 2 Related Works -- 3 Method -- 3.1 How Layers Can Degenerate -- 3.2 Entropy for Rectifier Activations -- 3.3 EASIER -- 4 Experiments -- 4.1 Experimental Setup -- 4.2 Results -- 4.3 Ablation Study -- 4.4 Limitations and Future Work -- 5 Conclusion -- References -- Towards Few-Shot Self-explaining Graph Neural Networks -- 1 Introduction -- 2 Problem Definition -- 3 The Proposed MSE-GNN -- 3.1 Architecture of MSE-GNN -- 3.2 Optimization Objective.
3.3 Meta Training -- 4 Experiments -- 4.1 Datasets and Experimental Setup -- 5 Related Works -- 6 Conclusion -- References -- Uplift Modeling Under Limited Supervision -- 1 Introduction -- 2 Related Work -- 3 Proposed Methodology -- 3.1 Uplift Modeling with Graph Neural Networks (UMGNet) -- 3.2 Active Learning for Uplift GNNs (UMGNet-AL) -- 4 Experimental Evaluation -- 4.1 Datasets -- 4.2 Benchmark Models -- 4.3 Experiments -- 5 Conclusion -- References -- Self-supervised Spatial-Temporal Normality Learning for Time Series Anomaly Detection -- 1 Introduction -- 2 Related Work -- 3 STEN: Spatial-Temporal Normality Learning -- 3.1 Problem Statement -- 3.2 Overview of The Proposed Approach -- 3.3 OTN: Order Prediction-Based Temporal Normality Learning -- 3.4 DSN: Distance Prediction-Based Spatial Normality Learning -- 3.5 Training and Inference -- 4 Experiments -- 4.1 Experimental Setup -- 4.2 Main Results -- 4.3 Ablation Study -- 4.4 Qualitative Analysis -- 4.5 Sensitivity Analysis -- 4.6 Time Efficiency -- 5 Conclusion -- References -- Modeling Text-Label Alignment for Hierarchical Text Classification -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Text Encoder -- 3.2 Graph Encoder -- 3.3 Generation of Composite Representation -- 3.4 Loss Functions -- 4 Experiments -- 4.1 Datasets and Evaluation Metrics -- 4.2 Implementation Details -- 4.3 Experimental Results -- 4.4 Analysis -- 5 Conclusion -- A Details of Statistical Test -- B Performance Analysis on Additional Datasets -- References -- Secure Aggregation Is Not Private Against Membership Inference Attacks -- 1 Introduction -- 2 Related Work -- 3 Preliminaries -- 4 Privacy Analysis of Secure Aggregation -- 4.1 Threat Model -- 4.2 SecAgg as a Noiseless LDP Mechanism -- 4.3 Asymptotic Privacy Guarantee -- 4.4 Upper Bounding M() via Dominating Pairs of Distributions.
4.5 Lower Bounding M() and Upper Bounding fM() via Privacy Auditing -- 5 Experiments and Discussion -- 6 Conclusions -- A Correlated Gaussian Mechanism -- A.1 Optimal LDP Curve: Proof of Theorem 2 -- A.2 The Case Sd = {x ∈ Rd : ||x||2 ≤ rd} -- A.3 Trade-Off Function: Proof of Proposition 1 -- B LDP Analysis of the Mechanism (1) in a Special Case: Proof of Theorem 3 -- References -- Evaluating Negation with Multi-way Joins Accelerates Class Expression Learning -- 1 Introduction -- 2 Preliminaries -- 2.1 The Description Logic ALC -- 2.2 Class Expression Learning -- 2.3 Semantics and Properties of SPARQL -- 2.4 Worst-Case Optimal Multi-way Join Algorithms -- 3 Mapping ALC Class Expressions to SPARQL Queries -- 4 Negation in Multi-way Joins -- 4.1 Rewriting Rule for Negation and UNION Normal Form -- 4.2 Multi-way Join Algorithm -- 4.3 Implementation -- 5 Experimental Results -- 5.1 Systems, Setup and Execution -- 5.2 Datasets and Queries -- 5.3 Results and Discussion -- 6 Related Work -- 7 Conclusion And Future Work -- References -- LayeredLiNGAM: A Practical and Fast Method for Learning a Linear Non-gaussian Structural Equation Model -- 1 Introduction -- 2 Related Work -- 3 Preliminaries -- 3.1 LiNGAM -- 3.2 DirectLiNGAM -- 4 LayeredLiNGAM -- 4.1 Generalization of Lemma 2 -- 4.2 Algorithm -- 4.3 Adaptive Thresholding -- 5 Experiments -- 5.1 Datasets and Evaluation Metrics -- 5.2 Determining Threshold Parameters -- 5.3 Results on Synthetic Datasets -- 5.4 Results on Real-World Datasets -- 6 Conclusion -- References -- Enhanced Bayesian Optimization via Preferential Modeling of Abstract Properties -- 1 Introduction -- 2 Background -- 2.1 Bayesian Optimization -- 2.2 Rank GP Distributions -- 3 Framework -- 3.1 Expert Preferential Inputs on Abstract Properties -- 3.2 Augmented GP with Abstract Property Preferences -- 3.3 Overcoming Inaccurate Expert Inputs.
4 Convergence Remarks -- 5 Experiments -- 5.1 Synthetic Experiments -- 5.2 Real-World Experiments -- 6 Conclusion -- References -- Enhancing LLM's Reliability by Iterative Verification Attributions with Keyword Fronting -- 1 Introduction -- 2 Related Work -- 2.1 Retrieval-Augmented Generation -- 2.2 Text Generation Attribution -- 3 Methodology -- 3.1 Task Formalization -- 3.2 Overall Framework -- 3.3 Keyword Fronting -- 3.4 Attribution Verification -- 3.5 Iterative Optimization -- 4 Experiments -- 4.1 Experimental Setup -- 4.2 Main Results -- 4.3 Ablation Studies -- 4.4 Impact of Hyperparameters -- 4.5 The Performance of the Iteration -- 5 Conclusion -- References -- Reconstructing the Unseen: GRIOT for Attributed Graph Imputation with Optimal Transport -- 1 Introduction -- 2 Related Works -- 3 Multi-view Optimal Transport Loss for Attribute Imputation -- 3.1 Notations -- 3.2 Optimal Transport and Wasserstein Distance -- 3.3 Definition of the MultiW Loss Function -- 3.4 Instantiation of MultiW Loss with Attributes and Structure -- 4 Imputing Missing Attributes with MultiW Loss -- 4.1 Architecture of GRIOT -- 4.2 Accelerating the Imputation -- 5 Experimental Analysis -- 5.1 Experimental Protocol -- 5.2 Imputation Quality v.s. Node Classification Accuracy -- 5.3 Imputing Missing Values for Unseen Nodes -- 5.4 Time Complexity -- 6 Conclusion and Perspectives -- References -- Introducing Total Harmonic Resistance for Graph Robustness Under Edge Deletions -- 1 Introduction -- 2 Problem Statement and a New Robustness Measure -- 2.1 Problem Statement and Notation -- 2.2 Robustness Measures -- 3 Related Work -- 4 Comparison of Exact Solutions -- 5 Greedy Heuristic for k-GRoDel -- 5.1 Total Harmonic Resistance Loss After Deleting an Edge.
5.2 Forest Index Loss After Deleting an Edge.
Record no. UNINA-9910886089803321
Held at: Univ. Federico II
Machine Learning and Knowledge Discovery in Databases. Research Track : European Conference, ECML PKDD 2024, Vilnius, Lithuania, September 9–13, 2024, Proceedings, Part VII / edited by Albert Bifet, Jesse Davis, Tomas Krilavičius, Meelis Kull, Eirini Ntoutsi, Indrė Žliobaitė
Author Bifet, Albert
Edition [1st ed. 2024.]
Publication/distribution Cham : Springer Nature Switzerland : Imprint: Springer, 2024
Physical description 1 online resource (503 pages)
Classification 006.3
Other authors (persons) Davis, Jesse
Krilavičius, Tomas
Kull, Meelis
Ntoutsi, Eirini
Žliobaitė, Indrė
Series Lecture Notes in Artificial Intelligence
Topical subjects Artificial intelligence
Computer engineering
Computer networks
Computers
Image processing - Digital techniques
Computer vision
Software engineering
Artificial Intelligence
Computer Engineering and Networks
Computing Milieux
Computer Imaging, Vision, Pattern Recognition and Graphics
Software Engineering
ISBN 3-031-70368-5
Format Print material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Preface -- Organization -- Invited Talks Abstracts -- The Dynamics of Memorization and Unlearning -- The Emerging Science of Benchmarks -- Enhancing User Experience with AI-Powered Search and Recommendations at Spotify -- How to Utilize (and Generate) Player Tracking Data in Sport -- Resource-Aware Machine Learning-A User-Oriented Approach -- Contents - Part VII -- Research Track -- Data with Density-Based Clusters: A Generator for Systematic Evaluation of Clustering Algorithms -- 1 Introduction -- 2 Related Work -- 3 A Reliable Data Generator for Density-Based Clusters -- 3.1 Main Concept of DENSIRED -- 3.2 Generation of Skeletons -- 3.3 Instantiating Data Points -- 3.4 Delimitations -- 3.5 Analysis Intrinsic Dimensionality -- 4 Experiments -- 4.1 Discussion of the Data Generator -- 4.2 Benchmarking -- 5 Conclusion -- References -- Model-Based Reinforcement Learning with Multi-task Offline Pretraining -- 1 Introduction -- 2 Related Work -- 3 Problem Formulation -- 4 Method -- 4.1 Why Model-Based RL for Domain Transfer? -- 4.2 Multi-task Offline Pretraining -- 4.3 Domain-Selective Dynamics Transfer -- 4.4 Domain-Selective Behavior Transfer -- 5 Experiments -- 5.1 Experimental Setup -- 5.2 Main Results -- 5.3 Ablation Studies -- 5.4 Analyses of Task Relations -- 5.5 Results on CARLA Environment -- 5.6 Results with Medium Offline Data -- 6 Conclusion -- References -- Advancing Graph Counterfactual Fairness Through Fair Representation Learning -- 1 Introduction -- 2 Related Work -- 2.1 Graph Neural Networks -- 2.2 Fairness in Graph -- 3 Notations -- 4 Methodology -- 4.1 Causal Model -- 4.2 Framework Overview -- 4.3 Fair Ego-Graph Generation Module -- 4.4 Counterfactual Data Augmentation Module -- 4.5 Fair Disentangled Representation Learning Module -- 4.6 Final Optimization Objectives -- 5 Experiment -- 5.1 Datasets.
5.2 Evaluation Metrics -- 5.3 Baselines -- 5.4 Experiment Results -- 6 Conclusion -- References -- Continuously Deep Recurrent Neural Networks -- 1 Introduction -- 2 Shallow and Deep Echo State Networks -- 3 Continuously Deep Echo State Networks -- 4 Analysis of Deep Dynamics -- 5 Mathematical Analysis -- 6 Experiments -- 6.1 Memory Capacity -- 6.2 Time-Series Reconstruction -- 7 Conclusions -- References -- Dynamics Adaptive Safe Reinforcement Learning with a Misspecified Simulator -- 1 Introduction -- 2 Related Work -- 2.1 Safe Reinforcement Learning -- 2.2 Sim-to-Real Reinforcement Learning -- 3 Problem Formulation -- 4 Method -- 4.1 Theoretical Motivation -- 4.2 Value Estimation Alignment with an Inverse Dynamics Model -- 4.3 Conservative Cost Critic Learning via Uncertainty Estimation -- 5 Experiments -- 5.1 Baselines and Environments -- 5.2 Overall Performance Comparison -- 5.3 Ablation Studies and Data Sensitivity Study -- 5.4 Visualization Analysis -- 5.5 Parameter Sensitivity Studies -- 6 Final Remarks -- References -- CRISPert: A Transformer-Based Model for CRISPR-Cas Off-Target Prediction -- 1 Introduction -- 2 Computational Methods for Off-Target Prediction -- 3 Method -- 3.1 Problem Formalisation -- 3.2 Model Architecture -- 3.3 CRISPR-Cas Binding Concentration Features -- 3.4 Data Imbalance Handling -- 3.5 Model Implementation -- 4 Experimental Setting -- 4.1 Data -- 4.2 Test Scenarios -- 4.3 Hyper-parameter Optimisation -- 4.4 Pre-training -- 5 Results and Analysis -- 6 Conclusion -- References -- Improved Topology Features for Node Classification on Heterophilic Graphs -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Notation -- 3.2 Motivations -- 3.3 Bin of Paths Embedding -- 3.4 Confidence and Class-Wise Training Accuracy Weighting -- 4 Evaluation -- 4.1 Experimental Settings -- 4.2 Node Classification.
4.3 Improvements on Base GNN Models -- 4.4 Distribution of CCAW Weights -- 4.5 Class-Wise Node Classification Accuracy -- 4.6 Ablations -- 4.7 Hyperparameter Analysis -- 4.8 Efficiency Analysis -- 5 Conclusion -- References -- Fast Redescription Mining Using Locality-Sensitive Hashing -- 1 Introduction -- 2 The Algorithm -- 2.1 The ReReMi Algorithm -- 2.2 Primer on LSH -- 2.3 Finding Initial Pairs -- 2.4 Extending Initial Pairs -- 2.5 Time Complexity -- 3 Experimental Evaluation -- 3.1 Experimental Setup -- 3.2 Finding Initial Pairs -- 3.3 Extending Initial Pairs -- 3.4 Building Full Redescriptions -- 4 Conclusions -- References -- σ-GPTs: A New Approach to Autoregressive Models -- 1 Introduction -- 2 Methodology -- 2.1 σ-GPTs: Shuffled Autoregression -- 2.2 Double Positional Encodings -- 2.3 Conditional Probabilities and Infilling -- 2.4 Token-Based Rejection Sampling -- 2.5 Other Orders -- 2.6 Denoising Diffusion Models -- 3 Results -- 3.1 General Performance -- 3.2 Training Efficiency -- 3.3 Curriculum Learning -- 3.4 Open Text Generation: t-SNE of Generated Sequences -- 3.5 Training and Generating in Fractal Order -- 3.6 Memorizing -- 3.7 Infilling and Conditional Density Estimation -- 3.8 Token-Based Rejection Sampling Scheme -- 4 Related Works -- 5 Conclusion -- References -- FairFlow: An Automated Approach to Model-Based Counterfactual Data Augmentation for NLP -- 1 Introduction -- 2 Background and Related Literature -- 3 Approach -- 3.1 Attribute Classifier Training -- 3.2 Generating Word-Pair List -- 3.3 Error Correction -- 3.4 Training the Generative Model -- 4 Experimental Set-Up -- 4.1 Training Set-Up -- 4.2 Evaluation Datasets -- 4.3 Comparative Techniques -- 5 Evaluation and Results -- 5.1 Utility -- 5.2 Extrinsic Bias Mitigation -- 5.3 Task Performance -- 5.4 Qualitative Analysis and Key Observations -- 6 Conclusion.
References -- GrINd: Grid Interpolation Network for Scattered Observations -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Fourier Interpolation Layer -- 3.2 NeuralPDE -- 3.3 GrINd -- 4 Experiments -- 4.1 Data -- 4.2 Baseline Models -- 4.3 Model Configuration -- 4.4 Training -- 5 Results and Discussion -- 5.1 Interpolation Accuracy -- 5.2 DynaBench -- 5.3 Limitations -- 6 Conclusion and Future Work -- References -- MEGA: Multi-encoder GNN Architecture for Stronger Task Collaboration and Generalization -- 1 Introduction -- 2 Related Works -- 3 Methods -- 3.1 Preliminaries -- 3.2 Task Interference Problem in MT-SSL -- 3.3 MEGA Architecture -- 3.4 Pretext Tasks -- 4 Experiments -- 4.1 Experiment Setting -- 4.2 Results -- 5 Conclusion -- References -- MetaQuRe: Meta-learning from Model Quality and Resource Consumption -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Automated Algorithm Selection -- 3.2 Incorporating Resource Awareness -- 3.3 Relative Index Scaling -- 3.4 Compositional Meta-learning -- 3.5 Additional Remarks -- 4 Data on Model Quality and Resource Consumption -- 5 Experimental Results -- 5.1 Insights from MetaQuRe -- 5.2 Learning from MetaQuRe -- 6 Conclusion -- References -- Propagation Structure-Semantic Transfer Learning for Robust Fake News Detection -- 1 Introduction -- 2 Related Work -- 3 Propagation Structure-Semantic Transfer Learning Framework -- 3.1 Overview -- 3.2 Dual Teacher Models -- 3.3 Local-Global Propagation Interaction Enhanced Student Model -- 3.4 Multi-channel Knowledge Distillation Training Objective -- 4 Experiment -- 4.1 Experimental Setups -- 4.2 Main Results -- 4.3 Ablation Study -- 4.4 Generalization Evaluation -- 4.5 Robustness Evaluation -- 4.6 Parameter Analysis -- 5 Conclusion -- References -- Exploring Contrastive Learning for Long-Tailed Multi-label Text Classification -- 1 Introduction.
2 Related Work -- 2.1 Supervised Contrastive Learning -- 2.2 Multi-label Classification -- 2.3 Supervised Contrastive Learning for Multi-label Classification -- 3 Method -- 3.1 Contrastive Baseline LBase -- 3.2 Motivation -- 3.3 Multi-label Supervised Contrastive Loss -- 4 Experimental Setup -- 4.1 Datasets -- 4.2 Comparison Baselines -- 4.3 Implementation Details -- 5 Experimental Results -- 5.1 Comparison with Standard MLTC Losses -- 5.2 Fine-Tuning After Supervised Contrastive Learning -- 5.3 Representation Analysis -- 6 Conclusion -- References -- Simultaneous Linear Connectivity of Neural Networks Modulo Permutation -- 1 Introduction -- 2 Methods -- 2.1 Preliminaries -- 2.2 Aligning Networks via Permutation -- 3 Related Work -- 4 Notions of Linear Connectivity Modulo Permutation -- 5 Empirical Findings -- 5.1 Training Trajectories Are Simultaneously Weak Linearly Connected Modulo Permutation -- 5.2 Iteratively Sparsified Networks Are Simultaneously Weak Linearly Connected Modulo Permutation -- 5.3 Evidence for Strong Linear Connectivity Modulo Permutation -- 6 Algorithmic Aspects of Network Alignment -- 7 Conclusion -- References -- Fast Fishing: Approximating Bait for Efficient and Scalable Deep Active Image Classification -- 1 Introduction -- 2 Related Work -- 3 Notation -- 4 Time and Space Complexity of Bait -- 5 Approximations -- 5.1 Expectation -- 5.2 Gradient -- 6 Experimental Results -- 6.1 Setup -- 6.2 Assessment of Approximations -- 6.3 Benchmark Experiments -- 7 Conclusion -- References -- Understanding Domain-Size Generalization in Markov Logic Networks -- 1 Introduction -- 2 Related Work -- 3 Background -- 3.1 Basic Definitions -- 3.2 First-Order Logic -- 4 Learning in Markov Logic -- 5 Markov Logic Across Domain Sizes -- 6 Domain-Size Generalization -- 7 Experiments -- 7.1 Datasets -- 7.2 Methodology -- 7.3 Results -- 8 Conclusion.
References.
Record no. UNINA-9910886096803321
Bifet Albert
Cham : Springer Nature Switzerland : Imprint: Springer, 2024
Printed material
Available at: Univ. Federico II
Machine Learning and Knowledge Discovery in Databases. Research Track : European Conference, ECML PKDD 2024, Vilnius, Lithuania, September 9–13, 2024, Proceedings, Part I / / edited by Albert Bifet, Jesse Davis, Tomas Krilavičius, Meelis Kull, Eirini Ntoutsi, Indrė Žliobaitė
Author Bifet Albert
Edition [1st ed. 2024.]
Publication/distribution Cham : Springer Nature Switzerland : Imprint: Springer, 2024
Physical description 1 online resource (514 pages)
Discipline 006.3
Other authors (Persons) Davis Jesse
Krilavičius Tomas
Kull Meelis
Ntoutsi Eirini
Žliobaitė Indrė
Series Lecture Notes in Artificial Intelligence
Topical subject Artificial intelligence
Computer engineering
Computer networks
Computers
Image processing - Digital techniques
Computer vision
Software engineering
Artificial Intelligence
Computer Engineering and Networks
Computing Milieux
Computer Imaging, Vision, Pattern Recognition and Graphics
Software Engineering
ISBN 3-031-70341-3
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Preface -- Organization -- Invited Talks Abstracts -- The Dynamics of Memorization and Unlearning -- The Emerging Science of Benchmarks -- Enhancing User Experience with AI-Powered Search and Recommendations at Spotify -- How to Utilize (and Generate) Player Tracking Data in Sport -- Resource-Aware Machine Learning-A User-Oriented Approach -- Contents - Part I -- Research Track -- Adaptive Sparsity Level During Training for Efficient Time Series Forecasting with Transformers -- 1 Introduction -- 2 Background -- 2.1 Sparse Neural Networks -- 2.2 Time Series Forecasting -- 2.3 Problem Formulation and Notations -- 3 Analyzing Sparsity Effect in Transformers for Time Series Forecasting -- 4 Proposed Methodology: PALS -- 5 Experiments and Results -- 5.1 Experimental Settings -- 5.2 Results -- 6 Discussion -- 6.1 Performance Comparison with Pruning and Sparse Training Algorithms -- 6.2 Hyperparameter Sensitivity -- 7 Conclusions -- References -- RumorMixer: Exploring Echo Chamber Effect and Platform Heterogeneity for Rumor Detection -- 1 Introduction -- 2 Related Works -- 3 Methodology -- 3.1 Overview -- 3.2 Echo Chamber Extraction and Representation Learning -- 3.3 Neural Architecture Search for Platform Heterogeneity -- 4 Experiments -- 4.1 Experimental Setting -- 4.2 Performance Comparison (RQ1) -- 4.3 Ablation Study (RQ2) -- 4.4 Parameter Analysis (RQ3) -- 4.5 Early Rumor Detection (RQ4) -- 5 Conclusion -- References -- Diversified Ensemble of Independent Sub-networks for Robust Self-supervised Representation Learning -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Robust Self-supervised Learning via Independent Sub-networks -- 3.2 Empirical Analysis of Diversity -- 3.3 Computational Cost and Efficiency Analysis -- 4 Experimental Setup -- 5 Results and Discussion -- 6 Ablation Study -- 7 Conclusion -- References.
Modular Debiasing of Latent User Representations in Prototype-Based Recommender Systems -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 4 Experiment Setup -- 5 Results and Analysis -- 6 Conclusion and Future Directions -- References -- A Mathematics Framework of Artificial Shifted Population Risk and Its Further Understanding Related to Consistency Regularization -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Revisiting Data Augmentation with Empirical Risk -- 3.2 The Augmented Neighborhood -- 3.3 The Artificial Shifted Population Risk -- 3.4 Understanding the Decomposition of Shifted Population Risk -- 4 Experiment -- 4.1 Experiment Implementation -- 4.2 Experimental Results -- 5 Conclusion and Discussion -- References -- Attention-Driven Dropout: A Simple Method to Improve Self-supervised Contrastive Sentence Embeddings -- 1 Introduction -- 2 Background and Related Work -- 3 Method -- 3.1 Attention Rollout Aggregation -- 3.2 Static Dropout Rate -- 3.3 Dynamic Dropout Rate -- 4 Experiment -- 4.1 Datasets and Tasks -- 4.2 Training Procedure -- 5 Result and Discussion -- 5.1 Ablation Study -- 6 Conclusion -- References -- AEMLO: AutoEncoder-Guided Multi-label Oversampling -- 1 Introduction -- 1.1 Research Goal -- 1.2 Motivation -- 1.3 Summary -- 2 Related Work -- 2.1 Multi-label Classification -- 2.2 Multi-label Imbalance Learning -- 2.3 Deep Sampling Method -- 3 Multi-label AutoEncoder Oversampling -- 3.1 Method Description and Overview -- 3.2 Loss Function -- 3.3 Generate Instances and Post-processing -- 4 Experiments and Analysis -- 4.1 Datasets -- 4.2 Experiment Setup -- 4.3 Experimental Analysis -- 4.4 Parameter Analysis -- 4.5 Sampling Time -- 5 Conclusion -- References -- MANTRA: Temporal Betweenness Centrality Approximation Through Sampling -- 1 Introduction -- 2 Related Work -- 3 Preliminaries.
4 MANTRA: Temporal Betweenness Centrality Approximation Through Sampling -- 4.1 Temporal Betweenness Estimator -- 4.2 Sample Complexity Bounds -- 4.3 Fast Approximation of the Characteristic Quantities -- 4.4 The MANTRA Framework -- 5 Experimental Evaluation -- 5.1 Experimental Setting -- 5.2 Networks -- 5.3 Experimental Results -- 6 Conclusions -- References -- Dimensionality-Induced Information Loss of Outliers in Deep Neural Networks -- 1 Introduction -- 2 Problem Setting and Related Work -- 2.1 Stable Rank of the Matrix -- 2.2 Feature-Based Detection -- 2.3 Projection-Based Detection -- 2.4 Similarity of DNN Representations -- 2.5 Noise Sensitivity in the DNN -- 3 Results -- 3.1 Overview of the Experiments and a Possible Picture -- 3.2 Observation of Dimensionality via Stable Ranks -- 3.3 Transition of OOD Detection Performance -- 3.4 Block Structure of CKA -- 3.5 Instability of OOD Samples to Noise Injection -- 3.6 Dataset Bias-Induced Imbalanced Inference -- 3.7 Quantitative Comparison of OOD Detection Performance -- 4 Discussion -- 5 Summary and Conclusion -- References -- Towards Open-World Cross-Domain Sequential Recommendation: A Model-Agnostic Contrastive Denoising Approach -- 1 Introduction -- 2 Methodology -- 2.1 Problem Formulation -- 2.2 Embedding Encoder -- 2.3 Denoising Interest-Aware Network -- 2.4 Fusion Gate Unit -- 2.5 Model Training -- 2.6 Inductive Representation Generator -- 3 Experiments -- 3.1 Datasets -- 3.2 Experiment Setting -- 3.3 Performance Comparisons (RQ1) -- 3.4 Ablation Study (RQ2) -- 3.5 Online Evaluation (RQ3) -- 3.6 Model Analyses (RQ4) -- 3.7 Parameter Sensitivity (RQ5) -- 4 Related Work -- 5 Conclusion -- References -- MixerFlow: MLP-Mixer Meets Normalising Flows -- 1 Introduction -- 2 Related Works -- 3 Preliminaries -- 4 MixerFlow Architecture and Its Components -- 5 Experiments.
5.1 Density Estimation on 32×32 Datasets -- 5.2 Density Estimation on 64×64 Datasets -- 5.3 Enhancing MAF with the MixerFlow -- 5.4 Datasets with Specific Permutations -- 5.5 Hybrid Modelling -- 5.6 Integration of Powerful Architecture -- 6 Conclusion and Limitations -- 7 Future Work and Broader Impact -- References -- Handling Delayed Feedback in Distributed Online Optimization: A Projection-Free Approach -- 1 Introduction -- 1.1 Our Contribution -- 1.2 Related Work -- 2 Projection-Free Algorithms Under Delayed Feedback -- 2.1 Preliminaries -- 2.2 Centralized Algorithm -- 2.3 Distributed Algorithm -- 3 Numerical Experiments -- 4 Concluding Remarks -- References -- Secure Dataset Condensation for Privacy-Preserving and Efficient Vertical Federated Learning -- 1 Introduction -- 2 Related Work -- 2.1 Vertical Federated Learning -- 2.2 Privacy Protection in VFL -- 2.3 Dataset Size Reduction in FL -- 3 Preliminaries -- 3.1 Problem Formulation -- 3.2 Dataset Condensation -- 3.3 Secure Aggregation -- 3.4 Differential Privacy -- 4 Proposed Approach -- 4.1 Overview -- 4.2 Class-Wise Secure Aggregation -- 4.3 VFDC Algorithm -- 4.4 Privacy Analysis -- 5 Experimental Study -- 5.1 Experimental Setup -- 5.2 Visualization of Condensed Dataset -- 5.3 Performance Comparison -- 5.4 Efficiency Improvement -- 5.5 Impact of Hyperparameters -- 6 Conclusion and Future Directions -- References -- Neighborhood Component Feature Selection for Multiple Instance Learning Paradigm -- 1 Introduction -- 2 Methods -- 2.1 The Lazy Learning Approach for Multiple Instance Learning Setting -- 2.2 Neighborhood Component Feature Selection for Single Instance Learning Setting -- 2.3 Our Proposal: Neighborhood Component Feature Selection for the Multiple Instance Learning Setting -- 3 Datasets -- 3.1 Musk Dataset -- 3.2 DEAP Dataset -- 4 Experimental Procedure -- 5 Experimental Results.
5.1 Musk Dataset -- 5.2 DEAP Dataset -- 5.3 Comparison with State-of-the-Art -- 5.4 Statistical Significance -- 5.5 Computational Complexity -- 6 Conclusions -- References -- MESS: Coarse-Grained Modular Two-Way Dialogue Entity Linking Framework -- 1 Introduction -- 2 Related Work -- 2.1 Mention-to-Entities -- 2.2 Transferred EL -- 3 Our MESS Framework -- 3.1 M2E Module -- 3.2 E2M Module -- 3.3 SS Module -- 3.4 Dialogue Module -- 4 Experiments -- 4.1 Setting -- 4.2 Results -- 4.3 Ablation Studies -- 5 Conclusion -- References -- Session Target Pair: User Intent Perceiving Networks for Session-Based Recommendation -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Problem Statement -- 3.2 Session-Level Intent Representation Module -- 3.3 Target-Level Intent Representation Module -- 3.4 Intent Alignment Mechanism Module -- 3.5 Prediction and Training -- 4 Experiments -- 4.1 Experiment Setups -- 4.2 Overall Performance -- 4.3 Model Analysis and Discussion -- 5 Conclusion -- References -- Hierarchical Fine-Grained Visual Classification Leveraging Consistent Hierarchical Knowledge -- 1 Introduction -- 2 Related Work -- 2.1 Fine-Grained Visual Classification -- 2.2 Hierarchical Multi-granularity Classification -- 2.3 Graph Representation Learning -- 3 Approach -- 3.1 Problem Setting -- 3.2 Multi-granularity Graph Convolutional Neural Network -- 3.3 Hierarchy-Aware Conditional Supervised Learning -- 3.4 Loss Function -- 3.5 Tree-Structured Granularity Consistency Rate -- 4 Experiments -- 4.1 Datasets -- 4.2 Experimental Settings -- 4.3 Compared Methods -- 4.4 Ablation Study -- 4.5 Comparison with State-of-the-Art Method -- 4.6 Qualitative Analysis -- 5 Conclusion -- References -- Backdoor Attacks with Input-Unique Triggers in NLP -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Problem Formulation -- 3.2 NURA: Input-Unique Backdoor Attack.
3.3 Model Training.
Record no. UNINA-9910886100803321
Bifet Albert
Cham : Springer Nature Switzerland : Imprint: Springer, 2024
Printed material
Available at: Univ. Federico II
Machine Learning and Knowledge Discovery in Databases. Research Track and Demo Track : European Conference, ECML PKDD 2024, Vilnius, Lithuania, September 9–13, 2024, Proceedings, Part VIII / / edited by Albert Bifet, Povilas Daniušis, Jesse Davis, Tomas Krilavičius, Meelis Kull, Eirini Ntoutsi, Kai Puolamäki, Indrė Žliobaitė
Author Bifet Albert
Edition [1st ed. 2024.]
Publication/distribution Cham : Springer Nature Switzerland : Imprint: Springer, 2024
Physical description 1 online resource (487 pages)
Discipline 006.3
Other authors (Persons) Daniušis Povilas
Davis Jesse
Krilavičius Tomas
Kull Meelis
Ntoutsi Eirini
Puolamäki Kai
Žliobaitė Indrė
Series Lecture Notes in Artificial Intelligence
Topical subject Artificial intelligence
Computer engineering
Computer networks
Computers
Image processing - Digital techniques
Computer vision
Software engineering
Artificial Intelligence
Computer Engineering and Networks
Computing Milieux
Computer Imaging, Vision, Pattern Recognition and Graphics
Software Engineering
ISBN 3-031-70371-5
Format Printed material
Bibliographic level Monograph
Language of publication eng
Record no. UNINA-9910886080803321
Bifet Albert
Cham : Springer Nature Switzerland : Imprint: Springer, 2024
Printed material
Available at: Univ. Federico II