Machine Learning and Knowledge Discovery in Databases : Research Track / edited by Danai Koutra [and four others]
Edition [First edition.]
Publication/distribution Cham, Switzerland : Springer, [2023]
Physical description 1 online resource (0 pages)
Discipline 006.3
Series Lecture Notes in Computer Science Series
Topical subject Data mining
Databases
Machine learning
ISBN 3-031-43412-9
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Preface -- Organization -- Invited Talks Abstracts -- Neural Wave Representations -- Physics-Inspired Graph Neural Networks -- Mapping Generative AI -- Contents - Part I -- Active Learning -- Learning Objective-Specific Active Learning Strategies with Attentive Neural Processes -- 1 Introduction -- 2 Related Work -- 3 A Study on Existing Active Learning Methods -- 4 Myopic Oracle Active Learning -- 5 Learning Active Learning with a Neural Process -- 6 Conclusion and Discussion -- References -- Real: A Representative Error-Driven Approach for Active Learning -- 1 Introduction -- 2 Preliminaries -- 2.1 Related Work -- 2.2 Problem Definition -- 3 Representative Pseudo Errors -- 3.1 Overview of Representative Error-Driven Active Learning -- 3.2 Pseudo Error Identification -- 3.3 Adaptive Sampling of Representative Errors -- 4 Experimental Setup -- 4.1 Datasets -- 4.2 Baselines and Implementation Details -- 5 Results -- 5.1 Classification Performance -- 5.2 Representative Errors -- 5.3 Ablation and Hyperparameter Study -- 6 Conclusion and Future Work -- References -- Knowledge-Driven Active Learning -- 1 Introduction -- 2 Knowledge-Driven Active Learning -- 2.1 Converting Domain-Knowledge into Loss Functions -- 2.2 An Intuitive Example: The XOR-like Problem -- 2.3 Real-Life Scenario: Partial Knowledge and Different Type of Rules -- 3 Experiments -- 3.1 KAL Provides Better Performance Than Many Active Strategies -- 3.2 Ablation Studies -- 3.3 KAL Discovers Novel Data Distributions, Unlike Uncertainty Strategies -- 3.4 KAL Ensures Domain Experts that Their Knowledge is Acquired -- 3.5 KAL Can Be Used Even Without Domain-Knowledge -- 3.6 KAL Can Be Employed in Object Recognition Tasks -- 3.7 KAL Is Not Computationally Expensive -- 4 Related Work -- 5 Conclusions -- References -- ActiveGLAE: A Benchmark for Deep Active Learning with Transformers.
1 Introduction -- 2 Problem Setting -- 3 Current Practice in Evaluating Deep Active Learning -- 3.1 (C1) Data Set Selection -- 3.2 (C2) Model Training -- 3.3 (C3) Deep Active Learning Setting -- 4 ActiveGLAE - Data Sets and Tasks -- 5 Experimental Setup -- 6 Results -- 7 Conclusion -- References -- DiffusAL: Coupling Active Learning with Graph Diffusion for Label-Efficient Node Classification -- 1 Introduction -- 2 Related Work -- 3 DiffusAL -- 3.1 Preliminaries -- 3.2 Model Architecture -- 3.3 Node Ranking and Selection -- 4 Experiments -- 4.1 Experimental Setup -- 4.2 R1 - Performance Comparison -- 4.3 R2 - Analysis of Contributing Factors -- 4.4 R3-Acquisition and Training Efficiency -- 5 Conclusion -- References -- Adversarial Machine Learning -- Quantifying Robustness to Adversarial Word Substitutions -- 1 Introduction -- 2 Preliminary -- 2.1 Problems -- 3 Methods -- 3.1 Type-2 Problem: Pseudo-Dynamic Programming for Crafting Adversarial Examples -- 3.2 Type-3 Problem: Robustness Verification -- 3.3 Type-4 Problem: Robustness Metric -- 4 Experiments -- 4.1 General Experiment Setup -- 4.2 Type-2 Problem: Attack Evaluation -- 4.3 Type-3 Problem: Robustness Verification -- 4.4 Type-4 Problem: Robustness Metric -- 5 Related Work -- 6 Conclusion -- References -- Enhancing Adversarial Training via Reweighting Optimization Trajectory -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 WOT: Optimization Trajectories -- 3.2 WOT: Objective Function -- 3.3 WOT: In-Time Refining Optimization Trajectories -- 4 Experiments -- 4.1 Superior Performance in Improving Adversarial Robustness -- 4.2 Ability to Prevent Robust Overfitting -- 4.3 Ablations and Visualizations -- 5 Conclusion -- References -- Adversarial Imitation Learning with Controllable Rewards for Text Generation -- 1 Introduction -- 2 Adversarial Imitation Learning for Text Generation.
2.1 TextGAIL -- 2.2 Divergence Minimization of Adversarial Imitation Learning -- 2.3 Relationship Between Maximum Likelihood Estimation and Adversarial Imitation Learning -- 3 Adversarial Imitation Learning with Approximation of Mixture Distribution -- 3.1 Lower Bounded Reward Function -- 3.2 Clipped Surrogate Objective with Approximation of Mixture Distribution -- 4 Study and Result -- 4.1 Dataset and Models -- 4.2 Evaluation Metrics -- 4.3 PPO Ratio and Confidence -- 4.4 Comparison Distributions of the Reward Values -- 4.5 Quality and Diversity -- 5 Conclusions -- References -- Towards Minimising Perturbation Rate for Adversarial Machine Learning with Pruning -- 1 Introduction -- 2 Related Works -- 2.1 Adversarial Attack Algorithms -- 2.2 Pruning Algorithm -- 3 Method -- 3.1 Problem Definition -- 3.2 Min-PRv1 for D1 -- 3.3 Min-PRv2 for D2 -- 4 Experiments -- 4.1 Experiments for FGSM -- 4.2 Experiments for LinfPGD -- 4.3 Experiments for L2PGD -- 4.4 Experiments for AdvGAN -- 5 Conclusion -- References -- Adversarial Sample Detection Through Neural Network Transport Dynamics -- 1 Introduction -- 2 Related Work -- 3 Background -- 3.1 Optimal Transport -- 3.2 Least Action Principle Residual Networks -- 4 Method -- 4.1 Detection -- 4.2 Regularization -- 5 Experiments -- 5.1 Preliminary Experiments -- 5.2 Detection of Seen Attacks -- 5.3 Detection of Unseen Attacks -- 5.4 Detection of Out-of-Distribution Samples -- 5.5 Attacking the Detector -- 6 Conclusion -- References -- Anomaly Detection -- CVTGAD: Simplified Transformer with Cross-View Attention for Unsupervised Graph-Level Anomaly Detection -- 1 Introduction -- 2 Related Work -- 2.1 Graph-Level Anomaly Detection -- 2.2 Graph Contrastive Learning -- 3 Problem Definition -- 4 Methodology -- 4.1 Graph Pre-processing Module -- 4.2 Simplified Transformer-Based Embedding Module.
4.3 Adaptive Anomaly Scoring Module -- 5 Experiment -- 5.1 Experimental Setting -- 5.2 Overall Performance Comparison -- 5.3 Ablation Study-Effects of Key Components -- 5.4 Hyper Parameter Analysis and Visualization -- 6 Conclusion -- References -- Graph-Level Anomaly Detection via Hierarchical Memory Networks -- 1 Introduction -- 2 Related Work -- 2.1 Graph-Level Anomaly Detection -- 2.2 Memory Networks -- 3 Methodology -- 3.1 The GLAD Problem -- 3.2 Overview of the Proposed Hierarchical Memory Networks -- 3.3 Graph Autoencoder -- 3.4 Hierarchical Memory Learning -- 3.5 Training and Inference -- 4 Experiments -- 4.1 Experimental Setups -- 4.2 Comparison to State-of-the-Art Methods -- 4.3 Robustness w.r.t Anomaly Contamination -- 4.4 Ablation Study -- 4.5 Analysis of Hyperparameters -- 5 Conclusion -- References -- Semi-supervised Learning from Active Noisy Soft Labels for Anomaly Detection -- 1 Introduction -- 2 Background and Notation -- 2.1 Anomaly Detection -- 2.2 Gaussian Processes -- 3 SLADe -- 3.1 Training -- 3.2 Inference -- 4 Experiments -- 4.1 Experimental Setup -- 4.2 Experimental Results -- 5 Related Work -- 6 Conclusion -- References -- Learning with Noisy Labels by Adaptive Gradient-Based Outlier Removal -- 1 Introduction -- 2 Related Work -- 3 AGRA: Adaptive Gradient-Based Outlier Removal -- 3.1 Notation -- 3.2 Algorithm Description -- 3.3 Comparison Batch Sampling -- 3.4 Selection of Comparison Loss Functions -- 4 Experiments -- 4.1 Datasets -- 4.2 Baselines -- 4.3 Experimental Setup -- 4.4 Results -- 4.5 Ablation Study -- 4.6 Case Study -- 5 Conclusion -- References -- DSV: An Alignment Validation Loss for Self-supervised Outlier Model Selection -- 1 Introduction -- 2 Problem Definition and Related Works -- 2.1 Problem Definition -- 2.2 Self-supervised Anomaly Detection (SSAD) -- 2.3 Unsupervised Outlier Model Selection (UOMS).
3 Proposed Method -- 3.1 Definitions and Assumptions -- 3.2 Main Ideas: Discordance and Separability -- 3.3 Discordance Surrogate Loss -- 3.4 Separability Surrogate Loss -- 4 Experiments -- 4.1 Experimental Settings -- 4.2 Detection Performance (Q1) -- 4.3 Ablation Studies (Q2) -- 4.4 Case Studies (Q3) -- 5 Conclusion -- A Proofs of Lemmas -- A.1 Proof of Lemma 1 -- A.2 Proof of Lemma 2 -- References -- Marvolo: Programmatic Data Augmentation for Deep Malware Detection -- 1 Introduction -- 2 Background and Related Work -- 3 Approach -- 4 Marvolo -- 4.1 Binary Rewriting Overview -- 4.2 Code Transformations -- 4.3 Optimizations for Practicality -- 5 Evaluation -- 5.1 Methodology -- 5.2 Overall Accuracy Improvements -- 5.3 Analyzing Marvolo -- 6 Conclusion -- References -- A Transductive Forest for Anomaly Detection with Few Labels -- 1 Introduction -- 2 Related Work -- 3 Preliminary -- 4 TransForest: A Transductive Forest -- 5 Experiments -- 5.1 Semi-supervised Comparisons -- 5.2 Effects of Pseudo-Labeling of TransForest -- 5.3 Parameter Sensitivity of TransForest -- 5.4 Robustness Against Irrelevant Features -- 5.5 Feature Importance Ranking -- 5.6 Running Time -- 6 Conclusion -- References -- Applications -- Co-Evolving Graph Reasoning Network for Emotion-Cause Pair Extraction -- 1 Introduction -- 2 Related Works -- 3 Methodology -- 3.1 Constructing a MRG from a Document -- 3.2 CGR-Net -- 4 Experiments -- 4.1 Datasets and Evaluation Metrics -- 4.2 Implement Details -- 4.3 Compared Baselines -- 4.4 Main Results -- 4.5 Variants of MRG Structure -- 4.6 Investigation of Supervision Signals -- 4.7 Ablation Study of MRGT Cell -- 4.8 Step Number of Co-evolving Reasoning -- 5 Conclusion and Prospect -- References -- SpotGAN: A Reverse-Transformer GAN Generates Scaffold-Constrained Molecules with Property Optimization -- 1 Introduction -- 2 Related Work.
2.1 Structure-Unconstrained Molecular Generation.
Record no. UNISA-996550555003316
Cham, Switzerland : Springer, [2023]
Printed material
Held at: Univ. di Salerno
Machine Learning and Knowledge Discovery in Databases : Research Track / edited by Danai Koutra [and four others]
Edition [First edition.]
Publication/distribution Cham, Switzerland : Springer, [2023]
Physical description 1 online resource (506 pages)
Discipline 943.005
Series Lecture Notes in Computer Science Series
Topical subject Data mining
Databases
Machine learning
ISBN 3-031-43424-2
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Preface -- Organization -- Invited Talks Abstracts -- Neural Wave Representations -- Physics-Inspired Graph Neural Networks -- Mapping Generative AI -- Contents - Part V -- Robustness -- MMA: Multi-Metric-Autoencoder for Analyzing High-Dimensional and Incomplete Data -- 1 Introduction -- 2 Related Work -- 2.1 LFA-Based Model -- 2.2 Deep Learning-Based Model -- 3 Methodology -- 3.1 Establishment of Base Model -- 3.2 Self-Adaptively Aggregation -- 3.3 Theoretical Analysis -- 4 Experiments -- 4.1 General Settings -- 4.2 Performance Comparison (RQ.1) -- 4.3 The Self-ensembling of MMA (RQ.2) -- 4.4 Base Models' Latent Factors Distribution (RQ. 3) -- 5 Conclusion -- References -- Exploring and Exploiting Data-Free Model Stealing -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 TandemGAN Framework -- 3.2 Optimization Objectives and Training Procedure -- 4 Evaluation -- 4.1 Model Stealing Performance -- 4.2 Ablation Study -- 5 Possible Extension -- 6 Conclusion -- References -- Exploring the Training Robustness of Distributional Reinforcement Learning Against Noisy State Observations -- 1 Introduction -- 2 Background: Distributional RL -- 3 Tabular Case: State-Noisy MDP -- 3.1 Analysis of SN-MDP for Expectation-Based RL -- 3.2 Analysis of SN-MDP in Distributional RL -- 4 Function Approximation Case -- 4.1 Convergence of Linear TD Under Noisy States -- 4.2 Vulnerability of Expectation-Based RL -- 4.3 Robustness Advantage of Distributional RL -- 5 Experiments -- 5.1 Results on Continuous Control Environments -- 5.2 Results on Classical Control and Atari Games -- 6 Discussion and Conclusion -- References -- Overcoming the Limitations of Localization Uncertainty: Efficient and Exact Non-linear Post-processing and Calibration -- 1 Introduction -- 2 Background and Related Work -- 3 Method -- 3.1 Uncertainty Estimation.
3.2 Uncertainty Propagation -- 3.3 Uncertainty Calibration -- 4 Experiments -- 4.1 Decoding Methods -- 4.2 Calibration Evaluation -- 4.3 Uncertainty Correlation -- 5 Conclusion -- References -- Label Shift Quantification with Robustness Guarantees via Distribution Feature Matching -- 1 Introduction -- 1.1 Related Literature -- 1.2 Contributions of the Paper -- 2 Distribution Feature Matching -- 2.1 Kernel Mean Matching -- 2.2 BBSE as Distribution Feature Matching -- 3 Theoretical Guarantees -- 3.1 Comparison to Related Literature -- 3.2 Robustness to Contamination -- 4 Algorithm and Applications -- 4.1 Optimisation Problem -- 4.2 Experiments -- 5 Conclusion -- References -- Robust Classification of High-Dimensional Data Using Data-Adaptive Energy Distance -- 1 Introduction -- 1.1 Our Contribution -- 2 Methodology -- 2.1 A Classifier Based on W*FG -- 2.2 Refinements of 0 -- 3 Asymptotics Under HDLSS Regime -- 3.1 Misclassification Probabilities of 1, 2, and 3 in the HDLSS Asymptotic Regime -- 3.2 Comparison of the Classifiers -- 4 Empirical Performance and Results -- 4.1 Simulation Studies -- 4.2 Implementation on Real Data -- 5 Concluding Remarks -- References -- DualMatch: Robust Semi-supervised Learning with Dual-Level Interaction -- 1 Introduction -- 2 Related Work -- 2.1 Semi-supervised Learning -- 2.2 Supervised Contrastive Learning -- 3 Method -- 3.1 Preliminaries -- 3.2 The DualMatch Structure -- 3.3 First-Level Interaction: Align -- 3.4 Second-Level Interaction: Aggregate -- 3.5 Final Objective -- 4 Experiment -- 4.1 Semi-supervised Classification -- 4.2 Class-Imbalanced Semi-supervised Classification -- 4.3 Ablation Study -- 5 Conclusion -- References -- Detecting Evasion Attacks in Deployed Tree Ensembles -- 1 Introduction -- 2 Preliminaries -- 3 Detecting Evasion Attacks with OC-score -- 3.1 The OC-score Metric -- 3.2 Theoretical Analysis.
4 Related Work -- 5 Experimental Evaluation -- 5.1 Experimental Methodology -- 5.2 Results Q1: Detecting Evasion Attacks -- 5.3 Results Q2: Prediction Time Cost -- 5.4 Results Q3: Size of the Reference Set -- 6 Conclusions and Discussion -- References -- Time Series -- Deep Imbalanced Time-Series Forecasting via Local Discrepancy Density -- 1 Introduction -- 2 Related Work -- 2.1 Deep Learning Models for Time-Series Forecasting -- 2.2 Robustness Against Noisy Samples and Data Imbalance -- 3 Method -- 3.1 Local Discrepancy -- 3.2 Density-Based Reweighting for Time-Series Forecasting -- 4 Experiments -- 4.1 Experiment Setting -- 4.2 Main Results -- 4.3 Comparisons with Other Methods -- 4.4 Variants for Local Discrepancy -- 4.5 Dataset Analysis -- 4.6 Computational Cost of ReLD -- 5 Discussion and Limitation -- References -- Online Deep Hybrid Ensemble Learning for Time Series Forecasting -- 1 Introduction -- 2 Related Works -- 2.1 On Online Ensemble Aggregation for Time Series Forecasting -- 2.2 On Ensemble Learning Using RoCs -- 3 Methodology -- 3.1 Preliminaries -- 3.2 Ensemble Architecture -- 3.3 RoCs Computation -- 3.4 Ensemble Aggregation -- 3.5 Ensemble Adaptation -- 4 Experiments -- 4.1 Experimental Set-Up -- 4.2 ODH-ETS Setup and Baselines -- 4.3 Results -- 4.4 Discussion -- 5 Concluding Remarks -- References -- Sparse Transformer Hawkes Process for Long Event Sequences -- 1 Introduction -- 2 Related Work -- 3 Background -- 3.1 Hawkes Process -- 3.2 Self-attention -- 4 Proposed Model -- 4.1 Event Model -- 4.2 Count Model -- 4.3 Intensity -- 4.4 Training -- 5 Experiment -- 5.1 Data -- 5.2 Baselines -- 5.3 Evaluation Metrics and Training Details -- 5.4 Results of Log-Likelihood -- 5.5 Results of Event Type and Time Prediction -- 5.6 Computational Efficiency -- 5.7 Sparse Attention Mechanism -- 5.8 Ablation Study -- 6 Conclusion -- References.
Adacket: ADAptive Convolutional KErnel Transform for Multivariate Time Series Classification -- 1 Introduction -- 2 Related Work -- 2.1 Multivariate Time Series Classification -- 2.2 Reinforcement Learning -- 3 Preliminaries -- 4 Method -- 4.1 Multi-objective Optimization: Performance and Resource -- 4.2 RL-Based Decision Model: Channel and Temporal Dimensions -- 4.3 DDPG Adaptation: Efficient Kernel Exploration -- 5 Experiments -- 5.1 Experimental Settings -- 5.2 Classification Performance Evaluation -- 5.3 Understanding Adacket's Design Selections -- 6 Conclusion -- References -- Efficient Adaptive Spatial-Temporal Attention Network for Traffic Flow Forecasting -- 1 Introduction -- 2 Related Work -- 3 Preliminaries -- 4 Methodology -- 4.1 Adaptive Spatial-Temporal Fusion Embedding -- 4.2 Dominant Spatial-Temporal Attention -- 4.3 Efficient Spatial-Temporal Block -- 4.4 Encoder-Decoder Architecture -- 5 Experiments -- 5.1 Experimental Setup -- 5.2 Overall Comparison -- 5.3 Ablation Study -- 5.4 Model Analysis -- 5.5 Interpretability of EASTAN -- 6 Conclusion -- References -- Estimating Dynamic Time Warping Distance Between Time Series with Missing Data -- 1 Introduction -- 2 Notation and Problem Statement -- 3 Background: Dynamic Time Warping (DTW) -- 4 DTW-AROW -- 5 DTW-CAI -- 5.1 DBAM -- 5.2 DTW-CAI -- 6 Comparison to Related Work -- 7 Experimental Evaluation -- 7.1 Datasets and Implementation Details -- 7.2 Evaluation of Pairwise Distances (Q1) -- 7.3 Evaluation Through Classification (Q2) -- 7.4 Evaluation Through Clustering (Q3) -- 8 Conclusion -- References -- Uncovering Multivariate Structural Dependency for Analyzing Irregularly Sampled Time Series -- 1 Introduction -- 2 Preliminaries -- 3 Our Proposed Model -- 3.1 Multivariate Interaction Module -- 3.2 Correlation-Aware Neighborhood Aggregation -- 3.3 Masked Time-Aware Self-Attention.
3.4 Graph-Level Learning Module -- 4 Experiments -- 4.1 Datasets -- 4.2 Competitors -- 4.3 Setups and Results -- 5 Conclusion -- References -- Weighted Multivariate Mean Reversion for Online Portfolio Selection -- 1 Introduction -- 2 Problem Setting -- 3 Related Work and Motivation -- 3.1 Related Work -- 3.2 Motivation -- 4 Multi-variate Robust Mean Reversion -- 4.1 Formulation -- 4.2 Online Portfolio Selection -- 4.3 Algorithms -- 5 Experiments -- 5.1 Cumulative Wealth -- 5.2 Computational Time -- 5.3 Parameter Sensitivity -- 5.4 Risk-Adjusted Returns -- 5.5 Transaction Cost Scalability -- 6 Conclusion -- References -- H2-Nets: Hyper-hodge Convolutional Neural Networks for Time-Series Forecasting -- 1 Introduction -- 2 Related Work -- 3 Higher-Order Structures on Graph -- 3.1 Hyper-k-Simplex-Network Learning Statement -- 3.2 Preliminaries on Hodge Theory -- 4 The H2-Nets Methodology -- 5 Experiments -- 6 Conclusion -- References -- Transfer and Multitask Learning -- Overcoming Catastrophic Forgetting for Fine-Tuning Pre-trained GANs -- 1 Introduction -- 2 Background and Related Work -- 2.1 Deep Transfer Learning -- 2.2 Generative Adversarial Networks (GANs) -- 2.3 Transfer Learning for GANs -- 3 Approach -- 3.1 Trust-Region Optimization -- 3.2 Spectral Diversification -- 4 Experiment -- 4.1 Performance on the Full Datasets -- 4.2 Performance on the Subsets of 1K Samples -- 4.3 Performance on the Subsets of 100 and 25 Samples -- 4.4 Ablation Study -- 4.5 Limitation and Future Work -- 5 Conclusion -- References -- Unsupervised Domain Adaptation via Bidirectional Cross-Attention Transformer -- 1 Introduction -- 2 Related Work -- 2.1 Unsupervised Domain Adaptation -- 2.2 Vision Transformers -- 2.3 Vision Transformer for Unsupervised Domain Adaptation -- 3 The BCAT Method -- 3.1 Quadruple Transformer Block.
3.2 Bidirectional Cross-Attention as Implicit Feature Mixup.
Record no. UNISA-996550555603316
Cham, Switzerland : Springer, [2023]
Printed material
Held at: Univ. di Salerno
Machine Learning and Knowledge Discovery in Databases : Research Track / edited by Danai Koutra [and four others]
Edition [First edition.]
Publication/distribution Cham, Switzerland : Springer Nature Switzerland AG, [2023]
Physical description 1 online resource (758 pages)
Discipline 006.31
Series Lecture Notes in Computer Science Series
Topical subject Data mining
Databases
Machine learning
ISBN 3-031-43415-3
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Preface -- Organization -- Invited Talks Abstracts -- Neural Wave Representations -- Physics-Inspired Graph Neural Networks -- Mapping Generative AI -- Contents - Part II -- Computer Vision -- Sample Prior Guided Robust Model Learning to Suppress Noisy Labels -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Prior Guided Sample Dividing -- 3.2 Denoising with the Divided Sets -- 4 Experiment -- 4.1 Datasets and Implementation Details -- 4.2 Comparison with State-of-the-Art Methods -- 4.3 Ablation Study -- 4.4 Generalization to Instance-Dependent Label Noise -- 4.5 Hyper-parameters Analysis -- 4.6 Discussion for Prior Generation Module -- 5 Limitations -- 6 Conclusions -- References -- DCID: Deep Canonical Information Decomposition -- 1 Introduction -- 2 Related Work -- 2.1 Canonical Correlation Analysis (CCA) -- 2.2 Multi-Task Learning (MTL) -- 3 Univariate Shared Information Retrieval -- 3.1 Problem Setting -- 3.2 Evaluating the Shared Representations -- 4 Method: Deep Canonical Information Decomposition -- 4.1 Limitations of the CCA Setting -- 4.2 Deep Canonical Information Decomposition (DCID) -- 5 Experiments -- 5.1 Baselines -- 5.2 Experimental Settings -- 5.3 Learning the Shared Features Z -- 5.4 Variance Explained by Z and Model Performance -- 5.5 Obesity and the Volume of Brain Regions of Interest (ROIs) -- 6 Discussion -- 6.1 Results Summary -- 6.2 Limitations -- References -- Negative Prototypes Guided Contrastive Learning for Weakly Supervised Object Detection -- 1 Introduction -- 2 Related Work -- 2.1 Weakly Supervised Object Detection -- 2.2 Contrastive Learning -- 3 Proposed Method -- 3.1 Preliminaries -- 3.2 Feature Extractor -- 3.3 Contrastive Branch -- 4 Experimental Results -- 4.1 Datasets -- 4.2 Implementation Details -- 4.3 Comparison with State-of-the-Arts -- 4.4 Qualitative Results -- 4.5 Ablation Study.
5 Conclusion -- References -- Voting from Nearest Tasks: Meta-Vote Pruning of Pre-trained Models for Downstream Tasks -- 1 Introduction -- 2 Related Works -- 3 Empirical Study: Pruning a Pre-trained Model for Different Tasks -- 3.1 A Dataset of Pruned Models -- 3.2 Do Similar Tasks Share More Nodes on Their Pruned Models? -- 4 Meta-Vote Pruning (MVP) -- 5 Experiments -- 5.1 Implementation Details -- 5.2 Baseline Methods -- 5.3 Main Results -- 5.4 Performance on Unseen Dataset -- 5.5 Results of MVP on Sub-tasks of Different Sizes -- 5.6 Ablation Study -- 6 Conclusion -- References -- Make a Long Image Short: Adaptive Token Length for Vision Transformers -- 1 Introduction -- 2 Related Works -- 3 Methodology -- 3.1 Token-Length Assigner -- 3.2 Resizable-ViT -- 3.3 Training Strategy -- 4 Experiments -- 4.1 Experimental Results -- 4.2 Ablation Study -- 5 Conclusions -- References -- Graph Rebasing and Joint Similarity Reconstruction for Cross-Modal Hash Retrieval -- 1 Introduction -- 2 Related Work -- 2.1 Supervised Cross-Modal Hashing Methods -- 2.2 Unsupervised Cross-Modal Hashing Methods -- 3 Methodology -- 3.1 Local Relation Graph Building -- 3.2 Graph Rebasing -- 3.3 Global Relation Graph Construction -- 3.4 Joint Similarity Reconstruction -- 3.5 Training Objectives -- 4 Experiments -- 4.1 Datasets and Evaluation Metrics -- 4.2 Implementation Details -- 4.3 Performance Comparison -- 4.4 Parameter Sensitivity Experiments -- 4.5 Ablation Experiments -- 5 Conclusion -- References -- ARConvL: Adaptive Region-Based Convolutional Learning for Multi-class Imbalance Classification -- 1 Introduction -- 2 Related Work -- 2.1 Multi-class Imbalance Learning -- 2.2 Loss Modification Based Methods -- 2.3 Convolutional Prototype Learning -- 3 ARConvL -- 3.1 Overview of ARConvL -- 3.2 Region Learning Module -- 3.3 Optimizing Class-Wise Latent Feature Distribution.
3.4 Enlarging Margin Between Classes -- 4 Experimental Studies -- 4.1 Experimental Setup -- 4.2 Performance Comparison -- 4.3 Performance Deterioration with Increasing Imbalance Levels -- 4.4 Effect of Each Adaptive Component of ARConvL -- 4.5 Utility in Large-Scale Datasets -- 5 Conclusion -- References -- Deep Learning -- Binary Domain Generalization for Sparsifying Binary Neural Networks -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Preliminaries -- 3.2 Sparse Binary Neural Network (SBNN) Formulation -- 3.3 Weight Optimization -- 3.4 Network Training -- 3.5 Implementation Gains -- 4 Experiments and Results -- 4.1 Ablation Studies -- 4.2 Benchmark -- 5 Conclusions -- References -- Efficient Hyperdimensional Computing -- 1 Introduction -- 2 Background -- 3 High Dimensions Are Not Necessary -- 3.1 Dimension-Accuracy Analysis -- 3.2 Low-Dimension Hypervector Training -- 4 Results -- 4.1 A Case Study of Our Technologies -- 4.2 Experimental Results -- 5 Discussion -- 5.1 Limitation of HDCs -- 5.2 Further Discussion of the Low Accuracy When d Is Low -- 6 Conclusion -- References -- Rényi Divergence Deep Mutual Learning -- 1 Introduction -- 2 Deep Mutual Learning -- 2.1 Rényi Divergence Deep Mutual Learning -- 3 Properties of RDML -- 3.1 Convergence Guarantee -- 3.2 Computational Complexity of RDML -- 4 Empirical Study -- 4.1 Experimental Setup -- 4.2 Convergence Trace Analysis -- 4.3 Evaluation Results -- 4.4 Generalization Results -- 5 Related Work -- 6 Conclusion -- References -- Is My Neural Net Driven by the MDL Principle? -- 1 Introduction -- 2 Related Work -- 3 MDL Principle, Signal, and Noise -- 3.1 Information Theory Primer -- 3.2 Signal and Noise -- 4 Learning with the MDL Principle -- 4.1 MDL Objective -- 4.2 Local Formulation -- 4.3 Combining Local Objectives to Obtain a Spectral Distribution -- 4.4 The MDL Spectral Distributions.
5 Experimental Results -- 5.1 Experimental Noise -- 5.2 Discussion -- 6 Conclusion and Future Work -- References -- Scoring Rule Nets: Beyond Mean Target Prediction in Multivariate Regression -- 1 Introduction -- 2 Distributional Regression -- 2.1 Proper Scoring Rules -- 2.2 Conditional CRPS -- 2.3 CCRPS as ANN Loss Function for Multivariate Gaussian Mixtures -- 2.4 Energy Score Ensemble Models -- 3 Experiments -- 3.1 Evaluation Metrics -- 3.2 Synthetic Experiments -- 3.3 Real World Experiments -- 4 Conclusion -- References -- Learning Distinct Features Helps, Provably -- 1 Introduction -- 2 Preliminaries -- 3 Learning Distinct Features Helps -- 4 Extensions -- 4.1 Binary Classification -- 4.2 Multi-layer Networks -- 4.3 Multiple Outputs -- 5 Discussion and Open Problems -- References -- Continuous Depth Recurrent Neural Differential Equations -- 1 Introduction -- 2 Related Work -- 3 Background -- 3.1 Problem Definition -- 3.2 Gated Recurrent Unit -- 3.3 Recurrent Neural Ordinary Differential Equations -- 4 Continuous Depth Recurrent Neural Differential Equations -- 4.1 CDR-NDE Based on Heat Equation -- 5 Experiments -- 5.1 Baselines -- 5.2 Person Activity Recognition with Irregularly Sampled Time-Series -- 5.3 Walker2d Kinematic Simulation -- 5.4 Stance Classification -- 6 Conclusion and Future Work -- References -- Fairness -- Mitigating Algorithmic Bias with Limited Annotations -- 1 Introduction -- 2 Preliminaries -- 2.1 Notation and Problem Definition -- 2.2 Fairness Evaluation Metrics -- 3 Active Penalization Of Discrimination -- 3.1 Penalization Of Discrimination (POD) -- 3.2 Active Instance Selection (AIS) -- 3.3 The APOD Algorithm -- 3.4 Theoretical Analysis -- 4 Experiment -- 4.1 Bias Mitigation Performance Analysis (RQ1) -- 4.2 Annotation Effectiveness Analysis (RQ2) -- 4.3 Annotation Ratio Analysis (RQ3) -- 4.4 Ablation Study (RQ4).
4.5 Visualization of Annotated Instances -- 5 Conclusion -- References -- FG2AN: Fairness-Aware Graph Generative Adversarial Networks -- 1 Introduction -- 2 Related Work -- 2.1 Graph Generative Model -- 2.2 Fairness on Graphs -- 3 Notation and Background -- 3.1 Notation -- 3.2 Root Causes of Representational Discrepancies -- 4 Methodology -- 4.1 Mitigating Degree-Related Bias -- 4.2 Mitigating Connectivity-Related Bias -- 4.3 FG2AN Assembling -- 4.4 Fairness Definitions for Graph Generation -- 5 Experiments -- 5.1 Experimental Setup -- 5.2 Experimental Results -- 6 Conclusion -- References -- Targeting the Source: Selective Data Curation for Debiasing NLP Models -- 1 Introduction -- 2 Three Sources of Bias in Text Encoders -- 2.1 Bias in Likelihoods -- 2.2 Bias in Attentions -- 2.3 Bias in Representations -- 3 Pipeline for Measuring Bias in Text -- 3.1 Masking -- 3.2 Probing -- 3.3 Aggregation and Normalization -- 3.4 Bias Computation -- 4 Experiments -- 4.1 Experimental Setup -- 4.2 Question Answering -- 4.3 Sentence Inference -- 4.4 Sentiment Analysis -- 5 Related Work -- 5.1 Bias Quantification -- 5.2 Bias Reduction -- 6 Conclusion -- 7 Ethical Considerations -- References -- Fairness in Multi-Task Learning via Wasserstein Barycenters -- 1 Introduction -- 2 Problem Statement -- 2.1 Multi-task Learning -- 2.2 Demographic Parity -- 3 Wasserstein Fair Multi-task Predictor -- 4 Plug-In Estimator -- 4.1 Data-Driven Approach -- 4.2 Empirical Multi-task -- 5 Numerical Evaluation -- 5.1 Datasets -- 5.2 Methods -- 5.3 Results -- 6 Conclusion -- References -- REST: Enhancing Group Robustness in DNNs Through Reweighted Sparse Training -- 1 Introduction -- 2 Related Work -- 2.1 Sparse Neural Network Training -- 2.2 Debiasing Frameworks -- 3 Methodology -- 3.1 Sparse Training -- 4 Experiments -- 4.1 Baselines -- 4.2 Datasets -- 4.3 Setup.
4.4 Computational Costs.
Record no. UNISA-996550555203316
Cham, Switzerland : Springer Nature Switzerland AG, [2023]
Printed material
Held at: Univ. di Salerno
Machine Learning and Knowledge Discovery in Databases : Research Track / edited by Danai Koutra [and four others]
Edition [First edition.]
Publication/distribution Cham, Switzerland : Springer, [2023]
Physical description 1 online resource (752 pages)
Discipline 006.31
Series Lecture Notes in Computer Science Series
Topical subject Data mining
Databases
Machine learning
ISBN 3-031-43418-8
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Preface -- Organization -- Invited Talks Abstracts -- Neural Wave Representations -- Physics-Inspired Graph Neural Networks -- Mapping Generative AI -- Contents - Part III -- Graph Neural Networks -- Learning to Augment Graph Structure for both Homophily and Heterophily Graphs -- 1 Introduction -- 2 Related Work -- 2.1 Graph Neural Networks -- 2.2 Graph Structure Augmentation -- 2.3 Variational Inference for GNNs -- 3 Methodology -- 3.1 Problem Statement -- 3.2 Augmentation from a Probabilistic Generation Perspective -- 3.3 Iterative Variational Inference -- 3.4 Parameterized Augmentation Distribution -- 3.5 GNN Classifier Module for Node Classification -- 3.6 Complexity Analysis -- 4 Experiments -- 4.1 Experimental Setups -- 4.2 Classification on Real-World Datasets (Q1) -- 4.3 Homophily Ratios and GNN Architectures (Q2) -- 4.4 Ablation Study (Q3) -- 4.5 Augmentation Strategy Learning (Q4) -- 4.6 Parameter Sensitivity Analysis (Q5) -- 5 Conclusion -- References -- Learning Representations for Bipartite Graphs Using Multi-task Self-supervised Learning -- 1 Introduction -- 2 Background Work -- 2.1 Bipartite Graph Representation Learning -- 2.2 Self Supervised Learning (SSL) for GNNs -- 2.3 Multi-task Self Supervised Learning and Optimization -- 3 Proposed Algorithm -- 3.1 Notation -- 3.2 Bipartite Graph Encoder -- 3.3 Multi Task Self Supervised Learning -- 3.4 DST++: Dropped Schedule Task MTL with Task Affinity -- 4 Experiments -- 4.1 Datasets -- 4.2 Downstream Tasks and Evaluation Metrics -- 4.3 Evaluation Protocol -- 4.4 Baselines -- 5 Results and Analysis -- 5.1 Comparison with Unsupervised Baselines -- 5.2 Ablation Study -- 6 Conclusion -- References -- ChiENN: Embracing Molecular Chirality with Graph Neural Networks -- 1 Introduction -- 2 Related Work -- 3 Order-Sensitive Message-Passing Scheme.
4 ChiENN: Chiral-Aware Neural Network -- 4.1 Edge Graph -- 4.2 Neighbors Order -- 4.3 Chiral-Aware Update -- 5 Experiments -- 5.1 Set-Up -- 5.2 Comparison with Reference Methods -- 5.3 Ablation Studies -- 6 Conclusions -- References -- Multi-label Image Classification with Multi-scale Global-Local Semantic Graph Network -- 1 Introduction -- 2 Proposed Method -- 2.1 Multi-scale Feature Reconstruction -- 2.2 Channel Dual-Branch Cross Attention -- 2.3 Multi-perspective Dynamic Semantic Representation -- 2.4 Classification and Loss -- 3 Experiments -- 3.1 Comparison with State of the Arts -- 3.2 Ablation Studies -- 3.3 Visual Analysis -- 4 Conclusion -- References -- CasSampling: Exploring Efficient Cascade Graph Learning for Popularity Prediction -- 1 Introduction -- 2 Related Work -- 3 Preliminaries -- 4 Method -- 4.1 Graph Sampling -- 4.2 Local-Level Propagation Embedding -- 4.3 Global-Level Time Flow Representation -- 4.4 Prediction Layer -- 4.5 Complexity Analysis -- 5 Experiments -- 5.1 Datasets -- 5.2 Baseline -- 5.3 Evaluation Metrics -- 5.4 Experiment Settings -- 6 Results and Analysis -- 6.1 Experiment Results -- 6.2 Ablation Study -- 6.3 Further Analysis -- 7 Conclusion -- References -- Boosting Adaptive Graph Augmented MLPs via Customized Knowledge Distillation -- 1 Introduction -- 2 Related Work -- 2.1 Inference Acceleration -- 2.2 GNN Distillation -- 2.3 GNN on Addressing Heterophily -- 3 Preliminaries -- 4 Methodology -- 4.1 Customized Knowledge Distillation -- 4.2 Adaptive Graph Propagation -- 4.3 Approximate Aggregation Feature -- 5 Experiments -- 5.1 Datasets -- 5.2 Experimental Setup -- 5.3 Node Classification on Different Types of Graph -- 5.4 Comparing with GNN Distillation Methods -- 5.5 Ablation Study -- 5.6 Parameter Sensitivity Analysis -- 5.7 Inference Acceleration and Practical Deployment -- 6 Conclusion -- References.
ENGAGE: Explanation Guided Data Augmentation for Graph Representation Learning -- 1 Introduction -- 2 Related Work -- 2.1 Representation Learning for Graph Data -- 2.2 Graph Contrastive Learning -- 2.3 Explanation for Graph Neural Networks -- 3 Preliminaries -- 3.1 Notations -- 3.2 Contrastive Learning Frameworks -- 4 The ENGAGE Framework -- 4.1 Mitigating Superfluous Information in Representations -- 4.2 Efficient Explanations for Unsupervised Representations -- 4.3 Explanation-Guided Contrastive Views Generation -- 4.4 Theoretical Justification -- 5 Experiments -- 5.1 Experimental Setup -- 5.2 Experiment Results and Comparisons -- 5.3 Ablation Study -- 6 Conclusion and Future Work -- References -- Modeling Graphs Beyond Hyperbolic: Graph Neural Networks in Symmetric Positive Definite Matrices -- 1 Introduction -- 2 Related Work -- 3 Background -- 3.1 The Space SPD -- 3.2 Gyrocalculus on SPD -- 4 Graph Neural Networks -- 4.1 GCN in Euclidean Space -- 4.2 GCN in SPD -- 5 Experiments -- 5.1 Node Classification -- 5.2 Graph Classification -- 5.3 Analysis -- 6 Conclusions -- References -- Leveraging Free Labels to Power up Heterophilic Graph Learning in Weakly-Supervised Settings: An Empirical Study -- 1 Introduction -- 2 Related Work -- 2.1 Adaptive Filters for Heterophilic Graph Learning -- 2.2 Evaluation on Heterophilic Graph Learning -- 3 Motivation -- 3.1 Experimental Setups -- 3.2 Results and Observations -- 3.3 Analysis -- 4 Proposed Approach -- 4.1 Implementation -- 5 Experiments -- 5.1 Performance Improvements on GPR-GNN -- 5.2 Performance Improvements on BernNet -- 5.3 Visualization of the Learned Filters -- 6 Conclusion -- References -- Train Your Own GNN Teacher: Graph-Aware Distillation on Textual Graphs -- 1 Introduction -- 2 Background -- 2.1 Problem Formulation -- 2.2 GNNs on Textual Graphs -- 3 Towards Graph-Aware Knowledge Distillation.
3.1 Knowledge Distillation -- 3.2 What Does Knowledge Distillation Learn? An Analysis -- 4 GraD Framework -- 4.1 GraD-Joint -- 4.2 GraD-Alt -- 4.3 GraD-JKD -- 4.4 Student Models -- 5 Experimental Setup -- 5.1 Datasets -- 5.2 Implementation Details -- 5.3 Compared Methods -- 6 Experimental Results -- 6.1 GraDBERT Results -- 6.2 GraDMLP Results -- 7 Related Work -- 8 Conclusion -- References -- Graphs -- The Mont Blanc of Twitter: Identifying Hierarchies of Outstanding Peaks in Social Networks -- 1 Introduction -- 2 Related Work -- 3 Mountain Graphs and Line Parent Trees -- 3.1 Landscapes and Mountain Graphs -- 3.2 Line Parent Trees -- 3.3 Discarding Edges via Relative Neighborhood Graphs -- 4 Line Parent Trees of Real-World Networks -- 4.1 Comparison with Sampling Approaches -- 4.2 Distances to Line-Parent Trees -- 5 Experiments on Random Data -- 6 Conclusion and Future Work -- References -- RBNets: A Reinforcement Learning Approach for Learning Bayesian Network Structure -- 1 Introduction -- 2 Preliminaries -- 2.1 Bayesian Network Structure Learning -- 2.2 Local Scores -- 2.3 Order Graph -- 3 Deep Reinforcement Learning-Based Bayesian Network Structure Learning -- 3.1 Reinforcement Learning Formulation -- 3.2 Upper Confidence Bounds Based Strategy -- 3.3 Deep Q-Learning Algorithm -- 4 Experimental Validation -- 4.1 Experiment Setup -- 4.2 Datasets -- 4.3 Baseline Methods -- 4.4 Evaluation Metrics -- 4.5 Performance Evaluation of Time -- 4.6 Learning Performance from Datasets -- 5 Conclusion -- References -- A Unified Spectral Rotation Framework Using a Fused Similarity Graph -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Similarity Matrix Construction -- 3.2 High-Order Laplacian Construction -- 3.3 Unified Framework -- 3.4 Optimization -- 3.5 Complexity Analysis -- 4 Experiments -- 4.1 Experimental Setup.
4.2 Comparison with State-of-the-Art Algorithms -- 4.3 Ablation Study -- 4.4 Parameter Sensitivity -- 4.5 Convergence Analysis -- 5 Conclusion -- References -- SimSky: An Accuracy-Aware Algorithm for Single-Source SimRank Search -- 1 Introduction -- 2 ApproxDiag: Approximate Diagonal Correction Matrix -- 3 SimSky -- 4 Experiments -- 4.1 Experimental Setting -- 4.2 Comparative Experiments -- 4.3 Ablation Experiments -- 5 Conclusions -- References -- Online Network Source Optimization with Graph-Kernel MAB -- 1 Introduction -- 2 Online Source Optimization Problem -- 2.1 Problem Formulation -- 2.2 Graph-Kernel MAB Framework -- 3 Grab-UCB: Proposed Algorithm -- 4 Grab-arm-Light: Efficient Action Selection -- 5 Simulation Results -- 5.1 Settings -- 5.2 Performance of Grab-UCB -- 6 Related Work -- 7 Conclusions -- References -- Quantifying Node-Based Core Resilience -- 1 Introduction -- 2 Background -- 3 Related Work -- 4 Node-Based Core Resilience -- 4.1 Resilience Against Edge Removal -- 4.2 Resilience Against Edge Insertion -- 5 Experimental Evaluation -- 5.1 Runtime Results -- 5.2 Finding Critical Edges -- 5.3 Identifying Influential Spreaders -- 6 Conclusions and Future Work -- References -- Construction and Training of Multi-Associative Graph Networks -- 1 Introduction -- 2 Essence of Data Relationship Representation -- 3 Multi-Associative Graph Network -- 3.1 Representation of Horizontal and Vertical Relationships -- 3.2 Consolidation of Attributes and Aggregation of Duplicates -- 3.3 Associating Features and Objects -- 3.4 Associative Prioritization Algorithm -- 3.5 MAGN Implementation and Source Code -- 4 Results of Experiments and Comparisons -- 4.1 Regression Benchmark -- 4.2 Classification Benchmark -- 5 Conclusions -- References -- Skeletal Cores and Graph Resilience -- 1 Introduction -- 2 Related Work -- 3 Background -- 3.1 k-Cores.
3.2 Core Strength.
Record Nr. UNISA-996550555503316
Cham, Switzerland : , : Springer, , [2023]
Materiale a stampa
Lo trovi qui: Univ. di Salerno
Opac: Controlla la disponibilità qui
Machine Learning and Knowledge Discovery in Databases : Research Track / / edited by Danai Koutra [and four others]
Edizione [First edition.]
Pubbl/distr/stampa Cham, Switzerland : , : Springer Nature Switzerland AG, , [2023]
Descrizione fisica 1 online resource (758 pages)
Disciplina 006.31
Collana Lecture Notes in Computer Science Series
Soggetto topico Data mining
Databases
Machine learning
ISBN 3-031-43415-3
Formato Materiale a stampa
Livello bibliografico Monografia
Lingua di pubblicazione eng
Nota di contenuto Intro -- Preface -- Organization -- Invited Talks Abstracts -- Neural Wave Representations -- Physics-Inspired Graph Neural Networks -- Mapping Generative AI -- Contents - Part II -- Computer Vision -- Sample Prior Guided Robust Model Learning to Suppress Noisy Labels -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Prior Guided Sample Dividing -- 3.2 Denoising with the Divided Sets -- 4 Experiment -- 4.1 Datasets and Implementation Details -- 4.2 Comparison with State-of-the-Art Methods -- 4.3 Ablation Study -- 4.4 Generalization to Instance-Dependent Label Noise -- 4.5 Hyper-parameters Analysis -- 4.6 Discussion for Prior Generation Module -- 5 Limitations -- 6 Conclusions -- References -- DCID: Deep Canonical Information Decomposition -- 1 Introduction -- 2 Related Work -- 2.1 Canonical Correlation Analysis (CCA) -- 2.2 Multi-Task Learning (MTL) -- 3 Univariate Shared Information Retrieval -- 3.1 Problem Setting -- 3.2 Evaluating the Shared Representations -- 4 Method: Deep Canonical Information Decomposition -- 4.1 Limitations of the CCA Setting -- 4.2 Deep Canonical Information Decomposition (DCID) -- 5 Experiments -- 5.1 Baselines -- 5.2 Experimental Settings -- 5.3 Learning the Shared Features Z -- 5.4 Variance Explained by Z and Model Performance -- 5.5 Obesity and the Volume of Brain Regions of Interest (ROIs) -- 6 Discussion -- 6.1 Results Summary -- 6.2 Limitations -- References -- Negative Prototypes Guided Contrastive Learning for Weakly Supervised Object Detection -- 1 Introduction -- 2 Related Work -- 2.1 Weakly Supervised Object Detection -- 2.2 Contrastive Learning -- 3 Proposed Method -- 3.1 Preliminaries -- 3.2 Feature Extractor -- 3.3 Contrastive Branch -- 4 Experimental Results -- 4.1 Datasets -- 4.2 Implementation Details -- 4.3 Comparison with State-of-the-Arts -- 4.4 Qualitative Results -- 4.5 Ablation Study.
5 Conclusion -- References -- Voting from Nearest Tasks: Meta-Vote Pruning of Pre-trained Models for Downstream Tasks -- 1 Introduction -- 2 Related Works -- 3 Empirical Study: Pruning a Pre-trained Model for Different Tasks -- 3.1 A Dataset of Pruned Models -- 3.2 Do Similar Tasks Share More Nodes on Their Pruned Models? -- 4 Meta-Vote Pruning (MVP) -- 5 Experiments -- 5.1 Implementation Details -- 5.2 Baseline Methods -- 5.3 Main Results -- 5.4 Performance on Unseen Dataset -- 5.5 Results of MVP on Sub-tasks of Different Sizes -- 5.6 Ablation Study -- 6 Conclusion -- References -- Make a Long Image Short: Adaptive Token Length for Vision Transformers -- 1 Introduction -- 2 Related Works -- 3 Methodology -- 3.1 Token-Length Assigner -- 3.2 Resizable-ViT -- 3.3 Training Strategy -- 4 Experiments -- 4.1 Experimental Results -- 4.2 Ablation Study -- 5 Conclusions -- References -- Graph Rebasing and Joint Similarity Reconstruction for Cross-Modal Hash Retrieval -- 1 Introduction -- 2 Related Work -- 2.1 Supervised Cross-Modal Hashing Methods -- 2.2 Unsupervised Cross-Modal Hashing Methods -- 3 Methodology -- 3.1 Local Relation Graph Building -- 3.2 Graph Rebasing -- 3.3 Global Relation Graph Construction -- 3.4 Joint Similarity Reconstruction -- 3.5 Training Objectives -- 4 Experiments -- 4.1 Datasets and Evaluation Metrics -- 4.2 Implementation Details -- 4.3 Performance Comparison -- 4.4 Parameter Sensitivity Experiments -- 4.5 Ablation Experiments -- 5 Conclusion -- References -- ARConvL: Adaptive Region-Based Convolutional Learning for Multi-class Imbalance Classification -- 1 Introduction -- 2 Related Work -- 2.1 Multi-class Imbalance Learning -- 2.2 Loss Modification Based Methods -- 2.3 Convolutional Prototype Learning -- 3 ARConvL -- 3.1 Overview of ARConvL -- 3.2 Region Learning Module -- 3.3 Optimizing Class-Wise Latent Feature Distribution.
3.4 Enlarging Margin Between Classes -- 4 Experimental Studies -- 4.1 Experimental Setup -- 4.2 Performance Comparison -- 4.3 Performance Deterioration with Increasing Imbalance Levels -- 4.4 Effect of Each Adaptive Component of ARConvL -- 4.5 Utility in Large-Scale Datasets -- 5 Conclusion -- References -- Deep Learning -- Binary Domain Generalization for Sparsifying Binary Neural Networks -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Preliminaries -- 3.2 Sparse Binary Neural Network (SBNN) Formulation -- 3.3 Weight Optimization -- 3.4 Network Training -- 3.5 Implementation Gains -- 4 Experiments and Results -- 4.1 Ablation Studies -- 4.2 Benchmark -- 5 Conclusions -- References -- Efficient Hyperdimensional Computing -- 1 Introduction -- 2 Background -- 3 High Dimensions Are Not Necessary -- 3.1 Dimension-Accuracy Analysis -- 3.2 Low-Dimension Hypervector Training -- 4 Results -- 4.1 A Case Study of Our Technologies -- 4.2 Experimental Results -- 5 Discussion -- 5.1 Limitation of HDCs -- 5.2 Further Discussion of the Low Accuracy When d Is Low -- 6 Conclusion -- References -- Rényi Divergence Deep Mutual Learning -- 1 Introduction -- 2 Deep Mutual Learning -- 2.1 Rényi Divergence Deep Mutual Learning -- 3 Properties of RDML -- 3.1 Convergence Guarantee -- 3.2 Computational Complexity of RDML -- 4 Empirical Study -- 4.1 Experimental Setup -- 4.2 Convergence Trace Analysis -- 4.3 Evaluation Results -- 4.4 Generalization Results -- 5 Related Work -- 6 Conclusion -- References -- Is My Neural Net Driven by the MDL Principle? -- 1 Introduction -- 2 Related Work -- 3 MDL Principle, Signal, and Noise -- 3.1 Information Theory Primer -- 3.2 Signal and Noise -- 4 Learning with the MDL Principle -- 4.1 MDL Objective -- 4.2 Local Formulation -- 4.3 Combining Local Objectives to Obtain a Spectral Distribution -- 4.4 The MDL Spectral Distributions.
5 Experimental Results -- 5.1 Experimental Noise -- 5.2 Discussion -- 6 Conclusion and Future Work -- References -- Scoring Rule Nets: Beyond Mean Target Prediction in Multivariate Regression -- 1 Introduction -- 2 Distributional Regression -- 2.1 Proper Scoring Rules -- 2.2 Conditional CRPS -- 2.3 CCRPS as ANN Loss Function for Multivariate Gaussian Mixtures -- 2.4 Energy Score Ensemble Models -- 3 Experiments -- 3.1 Evaluation Metrics -- 3.2 Synthetic Experiments -- 3.3 Real World Experiments -- 4 Conclusion -- References -- Learning Distinct Features Helps, Provably -- 1 Introduction -- 2 Preliminaries -- 3 Learning Distinct Features Helps -- 4 Extensions -- 4.1 Binary Classification -- 4.2 Multi-layer Networks -- 4.3 Multiple Outputs -- 5 Discussion and Open Problems -- References -- Continuous Depth Recurrent Neural Differential Equations -- 1 Introduction -- 2 Related Work -- 3 Background -- 3.1 Problem Definition -- 3.2 Gated Recurrent Unit -- 3.3 Recurrent Neural Ordinary Differential Equations -- 4 Continuous Depth Recurrent Neural Differential Equations -- 4.1 CDR-NDE Based on Heat Equation -- 5 Experiments -- 5.1 Baselines -- 5.2 Person Activity Recognition with Irregularly Sampled Time-Series -- 5.3 Walker2d Kinematic Simulation -- 5.4 Stance Classification -- 6 Conclusion and Future Work -- References -- Fairness -- Mitigating Algorithmic Bias with Limited Annotations -- 1 Introduction -- 2 Preliminaries -- 2.1 Notation and Problem Definition -- 2.2 Fairness Evaluation Metrics -- 3 Active Penalization Of Discrimination -- 3.1 Penalization Of Discrimination (POD) -- 3.2 Active Instance Selection (AIS) -- 3.3 The APOD Algorithm -- 3.4 Theoretical Analysis -- 4 Experiment -- 4.1 Bias Mitigation Performance Analysis (RQ1) -- 4.2 Annotation Effectiveness Analysis (RQ2) -- 4.3 Annotation Ratio Analysis (RQ3) -- 4.4 Ablation Study (RQ4).
4.5 Visualization of Annotated Instances -- 5 Conclusion -- References -- FG2AN: Fairness-Aware Graph Generative Adversarial Networks -- 1 Introduction -- 2 Related Work -- 2.1 Graph Generative Model -- 2.2 Fairness on Graphs -- 3 Notation and Background -- 3.1 Notation -- 3.2 Root Causes of Representational Discrepancies -- 4 Methodology -- 4.1 Mitigating Degree-Related Bias -- 4.2 Mitigating Connectivity-Related Bias -- 4.3 FG2AN Assembling -- 4.4 Fairness Definitions for Graph Generation -- 5 Experiments -- 5.1 Experimental Setup -- 5.2 Experimental Results -- 6 Conclusion -- References -- Targeting the Source: Selective Data Curation for Debiasing NLP Models -- 1 Introduction -- 2 Three Sources of Bias in Text Encoders -- 2.1 Bias in Likelihoods -- 2.2 Bias in Attentions -- 2.3 Bias in Representations -- 3 Pipeline for Measuring Bias in Text -- 3.1 Masking -- 3.2 Probing -- 3.3 Aggregation and Normalization -- 3.4 Bias Computation -- 4 Experiments -- 4.1 Experimental Setup -- 4.2 Question Answering -- 4.3 Sentence Inference -- 4.4 Sentiment Analysis -- 5 Related Work -- 5.1 Bias Quantification -- 5.2 Bias Reduction -- 6 Conclusion -- 7 Ethical Considerations -- References -- Fairness in Multi-Task Learning via Wasserstein Barycenters -- 1 Introduction -- 2 Problem Statement -- 2.1 Multi-task Learning -- 2.2 Demographic Parity -- 3 Wasserstein Fair Multi-task Predictor -- 4 Plug-In Estimator -- 4.1 Data-Driven Approach -- 4.2 Empirical Multi-task -- 5 Numerical Evaluation -- 5.1 Datasets -- 5.2 Methods -- 5.3 Results -- 6 Conclusion -- References -- REST: Enhancing Group Robustness in DNNs Through Reweighted Sparse Training -- 1 Introduction -- 2 Related Work -- 2.1 Sparse Neural Network Training -- 2.2 Debiasing Frameworks -- 3 Methodology -- 3.1 Sparse Training -- 4 Experiments -- 4.1 Baselines -- 4.2 Datasets -- 4.3 Setup.
4.4 Computational Costs.
Record Nr. UNINA-9910746289903321
Cham, Switzerland : , : Springer Nature Switzerland AG, , [2023]
Materiale a stampa
Lo trovi qui: Univ. Federico II
Opac: Controlla la disponibilità qui
Machine Learning and Knowledge Discovery in Databases : Research Track / / edited by Danai Koutra [and four others]
Edizione [First edition.]
Pubbl/distr/stampa Cham, Switzerland : , : Springer, , [2023]
Descrizione fisica 1 online resource (506 pages)
Disciplina 006.31
Collana Lecture Notes in Computer Science Series
Soggetto topico Data mining
Databases
Machine learning
ISBN 3-031-43424-2
Formato Materiale a stampa
Livello bibliografico Monografia
Lingua di pubblicazione eng
Nota di contenuto Intro -- Preface -- Organization -- Invited Talks Abstracts -- Neural Wave Representations -- Physics-Inspired Graph Neural Networks -- Mapping Generative AI -- Contents - Part V -- Robustness -- MMA: Multi-Metric-Autoencoder for Analyzing High-Dimensional and Incomplete Data -- 1 Introduction -- 2 Related Work -- 2.1 LFA-Based Model -- 2.2 Deep Learning-Based Model -- 3 Methodology -- 3.1 Establishment of Base Model -- 3.2 Self-Adaptively Aggregation -- 3.3 Theoretical Analysis -- 4 Experiments -- 4.1 General Settings -- 4.2 Performance Comparison (RQ.1) -- 4.3 The Self-ensembling of MMA (RQ.2) -- 4.4 Base Models' Latent Factors Distribution (RQ. 3) -- 5 Conclusion -- References -- Exploring and Exploiting Data-Free Model Stealing -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 TandemGAN Framework -- 3.2 Optimization Objectives and Training Procedure -- 4 Evaluation -- 4.1 Model Stealing Performance -- 4.2 Ablation Study -- 5 Possible Extension -- 6 Conclusion -- References -- Exploring the Training Robustness of Distributional Reinforcement Learning Against Noisy State Observations -- 1 Introduction -- 2 Background: Distributional RL -- 3 Tabular Case: State-Noisy MDP -- 3.1 Analysis of SN-MDP for Expectation-Based RL -- 3.2 Analysis of SN-MDP in Distributional RL -- 4 Function Approximation Case -- 4.1 Convergence of Linear TD Under Noisy States -- 4.2 Vulnerability of Expectation-Based RL -- 4.3 Robustness Advantage of Distributional RL -- 5 Experiments -- 5.1 Results on Continuous Control Environments -- 5.2 Results on Classical Control and Atari Games -- 6 Discussion and Conclusion -- References -- Overcoming the Limitations of Localization Uncertainty: Efficient and Exact Non-linear Post-processing and Calibration -- 1 Introduction -- 2 Background and Related Work -- 3 Method -- 3.1 Uncertainty Estimation.
3.2 Uncertainty Propagation -- 3.3 Uncertainty Calibration -- 4 Experiments -- 4.1 Decoding Methods -- 4.2 Calibration Evaluation -- 4.3 Uncertainty Correlation -- 5 Conclusion -- References -- Label Shift Quantification with Robustness Guarantees via Distribution Feature Matching -- 1 Introduction -- 1.1 Related Literature -- 1.2 Contributions of the Paper -- 2 Distribution Feature Matching -- 2.1 Kernel Mean Matching -- 2.2 BBSE as Distribution Feature Matching -- 3 Theoretical Guarantees -- 3.1 Comparison to Related Literature -- 3.2 Robustness to Contamination -- 4 Algorithm and Applications -- 4.1 Optimisation Problem -- 4.2 Experiments -- 5 Conclusion -- References -- Robust Classification of High-Dimensional Data Using Data-Adaptive Energy Distance -- 1 Introduction -- 1.1 Our Contribution -- 2 Methodology -- 2.1 A Classifier Based on W*FG -- 2.2 Refinements of δ0 -- 3 Asymptotics Under HDLSS Regime -- 3.1 Misclassification Probabilities of δ1, δ2, and δ3 in the HDLSS Asymptotic Regime -- 3.2 Comparison of the Classifiers -- 4 Empirical Performance and Results -- 4.1 Simulation Studies -- 4.2 Implementation on Real Data -- 5 Concluding Remarks -- References -- DualMatch: Robust Semi-supervised Learning with Dual-Level Interaction -- 1 Introduction -- 2 Related Work -- 2.1 Semi-supervised Learning -- 2.2 Supervised Contrastive Learning -- 3 Method -- 3.1 Preliminaries -- 3.2 The DualMatch Structure -- 3.3 First-Level Interaction: Align -- 3.4 Second-Level Interaction: Aggregate -- 3.5 Final Objective -- 4 Experiment -- 4.1 Semi-supervised Classification -- 4.2 Class-Imbalanced Semi-supervised Classification -- 4.3 Ablation Study -- 5 Conclusion -- References -- Detecting Evasion Attacks in Deployed Tree Ensembles -- 1 Introduction -- 2 Preliminaries -- 3 Detecting Evasion Attacks with OC-score -- 3.1 The OC-score Metric -- 3.2 Theoretical Analysis.
4 Related Work -- 5 Experimental Evaluation -- 5.1 Experimental Methodology -- 5.2 Results Q1: Detecting Evasion Attacks -- 5.3 Results Q2: Prediction Time Cost -- 5.4 Results Q3: Size of the Reference Set -- 6 Conclusions and Discussion -- References -- Time Series -- Deep Imbalanced Time-Series Forecasting via Local Discrepancy Density -- 1 Introduction -- 2 Related Work -- 2.1 Deep Learning Models for Time-Series Forecasting -- 2.2 Robustness Against Noisy Samples and Data Imbalance -- 3 Method -- 3.1 Local Discrepancy -- 3.2 Density-Based Reweighting for Time-Series Forecasting -- 4 Experiments -- 4.1 Experiment Setting -- 4.2 Main Results -- 4.3 Comparisons with Other Methods -- 4.4 Variants for Local Discrepancy -- 4.5 Dataset Analysis -- 4.6 Computational Cost of ReLD -- 5 Discussion and Limitation -- References -- Online Deep Hybrid Ensemble Learning for Time Series Forecasting -- 1 Introduction -- 2 Related Works -- 2.1 On Online Ensemble Aggregation for Time Series Forecasting -- 2.2 On Ensemble Learning Using RoCs -- 3 Methodology -- 3.1 Preliminaries -- 3.2 Ensemble Architecture -- 3.3 RoCs Computation -- 3.4 Ensemble Aggregation -- 3.5 Ensemble Adaptation -- 4 Experiments -- 4.1 Experimental Set-Up -- 4.2 ODH-ETS Setup and Baselines -- 4.3 Results -- 4.4 Discussion -- 5 Concluding Remarks -- References -- Sparse Transformer Hawkes Process for Long Event Sequences -- 1 Introduction -- 2 Related Work -- 3 Background -- 3.1 Hawkes Process -- 3.2 Self-attention -- 4 Proposed Model -- 4.1 Event Model -- 4.2 Count Model -- 4.3 Intensity -- 4.4 Training -- 5 Experiment -- 5.1 Data -- 5.2 Baselines -- 5.3 Evaluation Metrics and Training Details -- 5.4 Results of Log-Likelihood -- 5.5 Results of Event Type and Time Prediction -- 5.6 Computational Efficiency -- 5.7 Sparse Attention Mechanism -- 5.8 Ablation Study -- 6 Conclusion -- References.
Adacket: ADAptive Convolutional KErnel Transform for Multivariate Time Series Classification -- 1 Introduction -- 2 Related Work -- 2.1 Multivariate Time Series Classification -- 2.2 Reinforcement Learning -- 3 Preliminaries -- 4 Method -- 4.1 Multi-objective Optimization: Performance and Resource -- 4.2 RL-Based Decision Model: Channel and Temporal Dimensions -- 4.3 DDPG Adaptation: Efficient Kernel Exploration -- 5 Experiments -- 5.1 Experimental Settings -- 5.2 Classification Performance Evaluation -- 5.3 Understanding Adacket's Design Selections -- 6 Conclusion -- References -- Efficient Adaptive Spatial-Temporal Attention Network for Traffic Flow Forecasting -- 1 Introduction -- 2 Related Work -- 3 Preliminaries -- 4 Methodology -- 4.1 Adaptive Spatial-Temporal Fusion Embedding -- 4.2 Dominant Spatial-Temporal Attention -- 4.3 Efficient Spatial-Temporal Block -- 4.4 Encoder-Decoder Architecture -- 5 Experiments -- 5.1 Experimental Setup -- 5.2 Overall Comparison -- 5.3 Ablation Study -- 5.4 Model Analysis -- 5.5 Interpretability of EASTAN -- 6 Conclusion -- References -- Estimating Dynamic Time Warping Distance Between Time Series with Missing Data -- 1 Introduction -- 2 Notation and Problem Statement -- 3 Background: Dynamic Time Warping (DTW) -- 4 DTW-AROW -- 5 DTW-CAI -- 5.1 DBAM -- 5.2 DTW-CAI -- 6 Comparison to Related Work -- 7 Experimental Evaluation -- 7.1 Datasets and Implementation Details -- 7.2 Evaluation of Pairwise Distances (Q1) -- 7.3 Evaluation Through Classification (Q2) -- 7.4 Evaluation Through Clustering (Q3) -- 8 Conclusion -- References -- Uncovering Multivariate Structural Dependency for Analyzing Irregularly Sampled Time Series -- 1 Introduction -- 2 Preliminaries -- 3 Our Proposed Model -- 3.1 Multivariate Interaction Module -- 3.2 Correlation-Aware Neighborhood Aggregation -- 3.3 Masked Time-Aware Self-Attention.
3.4 Graph-Level Learning Module -- 4 Experiments -- 4.1 Datasets -- 4.2 Competitors -- 4.3 Setups and Results -- 5 Conclusion -- References -- Weighted Multivariate Mean Reversion for Online Portfolio Selection -- 1 Introduction -- 2 Problem Setting -- 3 Related Work and Motivation -- 3.1 Related Work -- 3.2 Motivation -- 4 Multi-variate Robust Mean Reversion -- 4.1 Formulation -- 4.2 Online Portfolio Selection -- 4.3 Algorithms -- 5 Experiments -- 5.1 Cumulative Wealth -- 5.2 Computational Time -- 5.3 Parameter Sensitivity -- 5.4 Risk-Adjusted Returns -- 5.5 Transaction Cost Scalability -- 6 Conclusion -- References -- H2-Nets: Hyper-Hodge Convolutional Neural Networks for Time-Series Forecasting -- 1 Introduction -- 2 Related Work -- 3 Higher-Order Structures on Graph -- 3.1 Hyper-k-Simplex-Network Learning Statement -- 3.2 Preliminaries on Hodge Theory -- 4 The H2-Nets Methodology -- 5 Experiments -- 6 Conclusion -- References -- Transfer and Multitask Learning -- Overcoming Catastrophic Forgetting for Fine-Tuning Pre-trained GANs -- 1 Introduction -- 2 Background and Related Work -- 2.1 Deep Transfer Learning -- 2.2 Generative Adversarial Networks (GANs) -- 2.3 Transfer Learning for GANs -- 3 Approach -- 3.1 Trust-Region Optimization -- 3.2 Spectral Diversification -- 4 Experiment -- 4.1 Performance on the Full Datasets -- 4.2 Performance on the Subsets of 1K Samples -- 4.3 Performance on the Subsets of 100 and 25 Samples -- 4.4 Ablation Study -- 4.5 Limitation and Future Work -- 5 Conclusion -- References -- Unsupervised Domain Adaptation via Bidirectional Cross-Attention Transformer -- 1 Introduction -- 2 Related Work -- 2.1 Unsupervised Domain Adaptation -- 2.2 Vision Transformers -- 2.3 Vision Transformer for Unsupervised Domain Adaptation -- 3 The BCAT Method -- 3.1 Quadruple Transformer Block.
3.2 Bidirectional Cross-Attention as Implicit Feature Mixup.
Record Nr. UNINA-9910746283403321
Cham, Switzerland : , : Springer, , [2023]
Materiale a stampa
Lo trovi qui: Univ. Federico II
Opac: Controlla la disponibilità qui
Machine Learning and Knowledge Discovery in Databases : Research Track / / edited by Danai Koutra [and four others]
Edizione [First edition.]
Pubbl/distr/stampa Cham, Switzerland : , : Springer, , [2023]
Descrizione fisica 1 online resource (752 pages)
Disciplina 006.31
Collana Lecture Notes in Computer Science Series
Soggetto topico Data mining
Databases
Machine learning
ISBN 3-031-43418-8
Formato Materiale a stampa
Livello bibliografico Monografia
Lingua di pubblicazione eng
Nota di contenuto Intro -- Preface -- Organization -- Invited Talks Abstracts -- Neural Wave Representations -- Physics-Inspired Graph Neural Networks -- Mapping Generative AI -- Contents - Part III -- Graph Neural Networks -- Learning to Augment Graph Structure for both Homophily and Heterophily Graphs -- 1 Introduction -- 2 Related Work -- 2.1 Graph Neural Networks -- 2.2 Graph Structure Augmentation -- 2.3 Variational Inference for GNNs -- 3 Methodology -- 3.1 Problem Statement -- 3.2 Augmentation from a Probabilistic Generation Perspective -- 3.3 Iterative Variational Inference -- 3.4 Parameterized Augmentation Distribution -- 3.5 GNN Classifier Module for Node Classification -- 3.6 Complexity Analysis -- 4 Experiments -- 4.1 Experimental Setups -- 4.2 Classification on Real-World Datasets (Q1) -- 4.3 Homophily Ratios and GNN Architectures (Q2) -- 4.4 Ablation Study (Q3) -- 4.5 Augmentation Strategy Learning (Q4) -- 4.6 Parameter Sensitivity Analysis (Q5) -- 5 Conclusion -- References -- Learning Representations for Bipartite Graphs Using Multi-task Self-supervised Learning -- 1 Introduction -- 2 Background Work -- 2.1 Bipartite Graph Representation Learning -- 2.2 Self Supervised Learning (SSL) for GNNs -- 2.3 Multi-task Self Supervised Learning and Optimization -- 3 Proposed Algorithm -- 3.1 Notation -- 3.2 Bipartite Graph Encoder -- 3.3 Multi Task Self Supervised Learning -- 3.4 DST++: Dropped Schedule Task MTL with Task Affinity -- 4 Experiments -- 4.1 Datasets -- 4.2 Downstream Tasks and Evaluation Metrics -- 4.3 Evaluation Protocol -- 4.4 Baselines -- 5 Results and Analysis -- 5.1 Comparison with Unsupervised Baselines -- 5.2 Ablation Study -- 6 Conclusion -- References -- ChiENN: Embracing Molecular Chirality with Graph Neural Networks -- 1 Introduction -- 2 Related Work -- 3 Order-Sensitive Message-Passing Scheme.
4 ChiENN: Chiral-Aware Neural Network -- 4.1 Edge Graph -- 4.2 Neighbors Order -- 4.3 Chiral-Aware Update -- 5 Experiments -- 5.1 Set-Up -- 5.2 Comparison with Reference Methods -- 5.3 Ablation Studies -- 6 Conclusions -- References -- Multi-label Image Classification with Multi-scale Global-Local Semantic Graph Network -- 1 Introduction -- 2 Proposed Method -- 2.1 Multi-scale Feature Reconstruction -- 2.2 Channel Dual-Branch Cross Attention -- 2.3 Multi-perspective Dynamic Semantic Representation -- 2.4 Classification and Loss -- 3 Experiments -- 3.1 Comparison with State of the Arts -- 3.2 Ablation Studies -- 3.3 Visual Analysis -- 4 Conclusion -- References -- CasSampling: Exploring Efficient Cascade Graph Learning for Popularity Prediction -- 1 Introduction -- 2 Related Work -- 3 Preliminaries -- 4 Method -- 4.1 Graph Sampling -- 4.2 Local-Level Propagation Embedding -- 4.3 Global-Level Time Flow Representation -- 4.4 Prediction Layer -- 4.5 Complexity Analysis -- 5 Experiments -- 5.1 Datasets -- 5.2 Baseline -- 5.3 Evaluation Metrics -- 5.4 Experiment Settings -- 6 Results and Analysis -- 6.1 Experiment Results -- 6.2 Ablation Study -- 6.3 Further Analysis -- 7 Conclusion -- References -- Boosting Adaptive Graph Augmented MLPs via Customized Knowledge Distillation -- 1 Introduction -- 2 Related Work -- 2.1 Inference Acceleration -- 2.2 GNN Distillation -- 2.3 GNN on Addressing Heterophily -- 3 Preliminaries -- 4 Methodology -- 4.1 Customized Knowledge Distillation -- 4.2 Adaptive Graph Propagation -- 4.3 Approximate Aggregation Feature -- 5 Experiments -- 5.1 Datasets -- 5.2 Experimental Setup -- 5.3 Node Classification on Different Types of Graph -- 5.4 Comparing with GNN Distillation Methods -- 5.5 Ablation Study -- 5.6 Parameter Sensitivity Analysis -- 5.7 Inference Acceleration and Practical Deployment -- 6 Conclusion -- References.
ENGAGE: Explanation Guided Data Augmentation for Graph Representation Learning -- 1 Introduction -- 2 Related Work -- 2.1 Representation Learning for Graph Data -- 2.2 Graph Contrastive Learning -- 2.3 Explanation for Graph Neural Networks -- 3 Preliminaries -- 3.1 Notations -- 3.2 Contrastive Learning Frameworks -- 4 The ENGAGE Framework -- 4.1 Mitigating Superfluous Information in Representations -- 4.2 Efficient Explanations for Unsupervised Representations -- 4.3 Explanation-Guided Contrastive Views Generation -- 4.4 Theoretical Justification -- 5 Experiments -- 5.1 Experimental Setup -- 5.2 Experiment Results and Comparisons -- 5.3 Ablation Study -- 6 Conclusion and Future Work -- References -- Modeling Graphs Beyond Hyperbolic: Graph Neural Networks in Symmetric Positive Definite Matrices -- 1 Introduction -- 2 Related Work -- 3 Background -- 3.1 The Space SPD -- 3.2 Gyrocalculus on SPD -- 4 Graph Neural Networks -- 4.1 GCN in Euclidean Space -- 4.2 GCN in SPD -- 5 Experiments -- 5.1 Node Classification -- 5.2 Graph Classification -- 5.3 Analysis -- 6 Conclusions -- References -- Leveraging Free Labels to Power up Heterophilic Graph Learning in Weakly-Supervised Settings: An Empirical Study -- 1 Introduction -- 2 Related Work -- 2.1 Adaptive Filters for Heterophilic Graph Learning -- 2.2 Evaluation on Heterophilic Graph Learning -- 3 Motivation -- 3.1 Experimental Setups -- 3.2 Results and Observations -- 3.3 Analysis -- 4 Proposed Approach -- 4.1 Implementation -- 5 Experiments -- 5.1 Performance Improvements on GPR-GNN -- 5.2 Performance Improvements on BernNet -- 5.3 Visualization of the Learned Filters -- 6 Conclusion -- References -- Train Your Own GNN Teacher: Graph-Aware Distillation on Textual Graphs -- 1 Introduction -- 2 Background -- 2.1 Problem Formulation -- 2.2 GNNs on Textual Graphs -- 3 Towards Graph-Aware Knowledge Distillation.
3.1 Knowledge Distillation -- 3.2 What Does Knowledge Distillation Learn? An Analysis -- 4 GraD Framework -- 4.1 GraD-Joint -- 4.2 GraD-Alt -- 4.3 GraD-JKD -- 4.4 Student Models -- 5 Experimental Setup -- 5.1 Datasets -- 5.2 Implementation Details -- 5.3 Compared Methods -- 6 Experimental Results -- 6.1 GraDBERT Results -- 6.2 GraDMLP Results -- 7 Related Work -- 8 Conclusion -- References -- Graphs -- The Mont Blanc of Twitter: Identifying Hierarchies of Outstanding Peaks in Social Networks -- 1 Introduction -- 2 Related Work -- 3 Mountain Graphs and Line Parent Trees -- 3.1 Landscapes and Mountain Graphs -- 3.2 Line Parent Trees -- 3.3 Discarding Edges via Relative Neighborhood Graphs -- 4 Line Parent Trees of Real-World Networks -- 4.1 Comparison with Sampling Approaches -- 4.2 Distances to Line-Parent Trees -- 5 Experiments on Random Data -- 6 Conclusion and Future Work -- References -- RBNets: A Reinforcement Learning Approach for Learning Bayesian Network Structure -- 1 Introduction -- 2 Preliminaries -- 2.1 Bayesian Network Structure Learning -- 2.2 Local Scores -- 2.3 Order Graph -- 3 Deep Reinforcement Learning-Based Bayesian Network Structure Learning -- 3.1 Reinforcement Learning Formulation -- 3.2 Upper Confidence Bounds Based Strategy -- 3.3 Deep Q-Learning Algorithm -- 4 Experimental Validation -- 4.1 Experiment Setup -- 4.2 Datasets -- 4.3 Baseline Methods -- 4.4 Evaluation Metrics -- 4.5 Performance Evaluation of Time -- 4.6 Learning Performance from Datasets -- 5 Conclusion -- References -- A Unified Spectral Rotation Framework Using a Fused Similarity Graph -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Similarity Matrix Construction -- 3.2 High-Order Laplacian Construction -- 3.3 Unified Framework -- 3.4 Optimization -- 3.5 Complexity Analysis -- 4 Experiments -- 4.1 Experimental Setup.
4.2 Comparison with State-of-the-Art Algorithms -- 4.3 Ablation Study -- 4.4 Parameter Sensitivity -- 4.5 Convergence Analysis -- 5 Conclusion -- References -- SimSky: An Accuracy-Aware Algorithm for Single-Source SimRank Search -- 1 Introduction -- 2 ApproxDiag: Approximate Diagonal Correction Matrix -- 3 SimSky -- 4 Experiments -- 4.1 Experimental Setting -- 4.2 Comparative Experiments -- 4.3 Ablation Experiments -- 5 Conclusions -- References -- Online Network Source Optimization with Graph-Kernel MAB -- 1 Introduction -- 2 Online Source Optimization Problem -- 2.1 Problem Formulation -- 2.2 Graph-Kernel MAB Framework -- 3 Grab-UCB: Proposed Algorithm -- 4 Grab-arm-Light: Efficient Action Selection -- 5 Simulation Results -- 5.1 Settings -- 5.2 Performance of Grab-UCB -- 6 Related Work -- 7 Conclusions -- References -- Quantifying Node-Based Core Resilience -- 1 Introduction -- 2 Background -- 3 Related Work -- 4 Node-Based Core Resilience -- 4.1 Resilience Against Edge Removal -- 4.2 Resilience Against Edge Insertion -- 5 Experimental Evaluation -- 5.1 Runtime Results -- 5.2 Finding Critical Edges -- 5.3 Identifying Influential Spreaders -- 6 Conclusions and Future Work -- References -- Construction and Training of Multi-Associative Graph Networks -- 1 Introduction -- 2 Essence of Data Relationship Representation -- 3 Multi-Associative Graph Network -- 3.1 Representation of Horizontal and Vertical Relationships -- 3.2 Consolidation of Attributes and Aggregation of Duplicates -- 3.3 Associating Features and Objects -- 3.4 Associative Prioritization Algorithm -- 3.5 MAGN Implementation and Source Code -- 4 Results of Experiments and Comparisons -- 4.1 Regression Benchmark -- 4.2 Classification Benchmark -- 5 Conclusions -- References -- Skeletal Cores and Graph Resilience -- 1 Introduction -- 2 Related Work -- 3 Background -- 3.1 k-Cores.
3.2 Core Strength.
Record Nr. UNINA-9910746296203321
Cham, Switzerland : , : Springer, , [2023]
Materiale a stampa
Lo trovi qui: Univ. Federico II
Opac: Controlla la disponibilità qui
Machine Learning and Knowledge Discovery in Databases : Applied Data Science and Demo Track / / edited by Gianmarco De Francisci Morales [and five others]
Machine Learning and Knowledge Discovery in Databases : Applied Data Science and Demo Track / / edited by Gianmarco De Francisci Morales [and five others]
Edizione [First edition.]
Pubbl/distr/stampa Cham, Switzerland : , : Springer Nature Switzerland AG, , [2023]
Descrizione fisica 1 online resource (427 pages)
Disciplina 006.3
Collana Lecture Notes in Computer Science Series
Soggetto topico Data mining
Databases
Machine learning
ISBN 3-031-43430-7
Formato Materiale a stampa
Livello bibliografico Monografia
Lingua di pubblicazione eng
Nota di contenuto Intro -- Preface -- Organization -- Invited Talks Abstracts -- Neural Wave Representations -- Physics-Inspired Graph Neural Networks -- Mapping Generative AI -- Contents - Part VII -- Sustainability, Climate, and Environment -- Continually Learning Out-of-Distribution Spatiotemporal Data for Robust Energy Forecasting -- 1 Introduction -- 2 Related Works -- 2.1 Energy Prediction in Urban Environments -- 2.2 Mobility Data as Auxiliary Information in Forecasting -- 2.3 Deep Learning for Forecasting -- 3 Problem Definition -- 3.1 Time Series Forecasting -- 3.2 Continual Learning for Time Series Forecasting -- 4 Method -- 4.1 Backbone-Temporal Convolutional Network -- 4.2 Fast Adaptation -- 4.3 Associative Memory -- 5 Datasets and Contextual Data -- 5.1 Energy Usage Data -- 5.2 Mobility Data -- 5.3 COVID Lockdown Dates -- 5.4 Temperature Data -- 5.5 Dataset Preprocessing -- 6 Experiments and Results -- 6.1 Experimental Setup -- 6.2 Mobility -- 6.3 Continual Learning -- 7 Conclusion -- References -- Counterfactual Explanations for Remote Sensing Time Series Data: An Application to Land Cover Classification -- 1 Introduction -- 2 Related Work -- 3 Study Area and Land Cover Classification -- 3.1 Study Area -- 3.2 Land Cover Classification -- 4 Proposed Method -- 4.1 Architecture Overview -- 4.2 Networks Implementation and Training -- 4.3 Class-Swapping Loss -- 4.4 GAN-Based Regularization for Plausibility -- 4.5 Unimodal Regularization for Time-Contiguity -- 5 Results -- 5.1 Experimental Settings -- 5.2 Comparative Analysis -- 5.3 CFE4SITS In-depth Analysis -- 6 Conclusion -- References -- Cloud Imputation for Multi-sensor Remote Sensing Imagery with Style Transfer -- 1 Introduction -- 2 Related Work -- 2.1 Multi-sensor Cloud Imputation -- 2.2 Style Transfer -- 3 Methodology -- 3.1 Adaptive Instance Normalization (AdaIN).
3.2 Cluster-Based Attentional Instance Normalization (CAIN) -- 3.3 Composite Style Transfer Module, CAIN + AdaIN (CAINA) -- 3.4 The Deep Learning Network Architecture -- 4 Experiments -- 4.1 Dataset and Environmental Configuration -- 4.2 Experiment Settings -- 4.3 Quantitative Results of the First Set of Experiments -- 4.4 Quantitative Results of the Second Set of Experiments -- 4.5 Analysis on Variances Among the Compared Methods -- 4.6 Qualitative Results and Residual Maps -- 5 Conclusions -- References -- Comprehensive Transformer-Based Model Architecture for Real-World Storm Prediction -- 1 Introduction -- 2 Related Work -- 3 Problem Statement, Challenge, and Idea -- 3.1 Problem Statement -- 3.2 Challenges -- 3.3 Our Idea -- 4 Method -- 4.1 Representation Learning -- 4.2 Prediction -- 5 Experiments and Results -- 5.1 Experimental Setting -- 5.2 Overall Performance Under Storm Event Predictions -- 5.3 Significance of Our Design Components for Storm Predictions -- 5.4 Necessity of Content Embedding in Our MAE Encoder -- 5.5 Detailed Design Underlying Our Pooling Layer -- 5.6 Constructing Temporal Representations -- 5.7 Impact of Positional Embedding on Our ViT Encoder -- 6 Conclusion -- References -- Explaining Full-Disk Deep Learning Model for Solar Flare Prediction Using Attribution Methods -- 1 Introduction -- 2 Related Work -- 3 Data and Model -- 4 Attribution Methods -- 5 Experimental Evaluation -- 5.1 Experimental Settings -- 5.2 Evaluation -- 6 Discussion -- 7 Conclusion and Future Work -- References -- Deep Spatiotemporal Clustering: A Temporal Clustering Approach for Multi-dimensional Climate Data -- 1 Introduction -- 2 Background and Problem Definition -- 2.1 Clustering Multi-dimensional Climate Data -- 2.2 Problem Definition -- 3 Related Works -- 4 Proposed Methodology -- 4.1 Overview of Our Deep Spatiotemporal Clustering (DSC) Approach.
4.2 Clustering Assignment -- 4.3 Joint Optimization -- 5 Experiments -- 5.1 Dataset and Data Preprocessing -- 5.2 Baseline Methods -- 5.3 Evaluation Metrics -- 5.4 Experiment Results -- 5.5 Ablation Study -- 6 Conclusions -- References -- Circle Attention: Forecasting Network Traffic by Learning Interpretable Spatial Relationships from Intersecting Circles -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Problem Definition -- 3.2 Circle Attention -- 3.3 Transformer Model -- 4 Experimental Settings -- 5 Results and Discussion -- 5.1 Experiment 1: Baseline Comparison -- 5.2 Experiment 2: Ablation Study of Circle Parameters -- 6 Conclusion -- 7 Ethical Statement -- References -- Transportation and Urban Planning -- Pre-training Contextual Location Embeddings in Personal Trajectories via Efficient Hierarchical Location Representations -- 1 Introduction -- 2 Preliminaries -- 3 Model -- 3.1 Geo-Tokenizer Embedding Layer -- 3.2 Causal Location Embedding Model -- 3.3 Pre-training Hierarchical Auto-Regressive Location Model -- 3.4 Fine-Tuning Downstream Tasks -- 4 Experiments -- 4.1 Datasets -- 4.2 Settings -- 4.3 Experimental Results (RQ1) -- 4.4 Ablation Study -- 4.5 Deployed Solution -- 5 Related Work -- 6 Conclusions -- References -- Leveraging Queue Length and Attention Mechanisms for Enhanced Traffic Signal Control Optimization -- 1 Introduction -- 2 Related Work -- 2.1 Traditional Methods -- 2.2 RL-Based Methods -- 3 Preliminary -- 4 Method -- 4.1 Introduce Queue Length for TSC Methods -- 4.2 AttentionLight Agent Design -- 4.3 Network Design of AttentionLight -- 5 Experiment -- 5.1 Overall Performance -- 5.2 Queue Length Effectiveness Analysis -- 5.3 Reward Function Investigation -- 5.4 Action Duration Study -- 5.5 Model Generalization -- 6 Conclusion -- References.
PICT: Precision-enhanced Road Intersection Recognition Using Cycling Trajectories -- 1 Introduction -- 2 Related Work -- 3 Methods -- 3.1 Problem Formulation -- 3.2 Framework Overview -- 3.3 Geometry Feature Extraction -- 3.4 Grid Topology Representation -- 3.5 Intersection Inference -- 4 Experiment -- 4.1 Experiment Settings -- 4.2 Main Results -- 4.3 Ablation Study -- 5 Conclusion -- References -- FDTI: Fine-Grained Deep Traffic Inference with Roadnet-Enriched Graph -- 1 Introduction -- 2 Related Work -- 3 Preliminaries -- 3.1 Problem Definition -- 4 Method -- 4.1 Fine-Grained Traffic Spatial-Temporal Graph -- 4.2 Dynamic Mobility Convolution -- 4.3 Flow Conservative Traffic State Transition -- 5 Experiment -- 5.1 Experiment Settings -- 5.2 Overall Performance -- 5.3 Graph Smooth Analysis -- 5.4 Ablation Study -- 5.5 Scalability -- 6 Conclusion -- References -- RulEth: Genetic Programming-Driven Derivation of Security Rules for Automotive Ethernet -- 1 Introduction -- 2 Background and Related Work -- 3 Threat Model -- 4 RulEth Language -- 5 RulEth System Architecture -- 6 Evaluation -- 7 Conclusion -- References -- Spatial-Temporal Graph Sandwich Transformer for Traffic Flow Forecasting -- 1 Introduction -- 2 Related Work -- 3 Preliminaries -- 3.1 Problem Formulation -- 3.2 Transformer Architecture -- 4 Proposed Method -- 4.1 Overall Design -- 4.2 Spatial-Temporal Sandwich Transformer -- 4.3 Multi-step Prediction -- 4.4 Training Procedure and Complexity -- 5 Experiments -- 5.1 Experimental Setup -- 5.2 Performance Comparison and Analysis (RQ1) -- 5.3 Ablation Study (RQ2) -- 5.4 Learning Stability (RQ3) -- 5.5 Parameter Sensitivity (RQ4) -- 6 Conclusion -- References -- Data-Driven Explainable Artificial Intelligence for Energy Efficiency in Short-Sea Shipping -- 1 Introduction -- 2 Related Work -- 3 Case Study Description -- 3.1 Problem Formulation.
4 Modeling and Analysis -- 4.1 Exploratory Analysis -- 4.2 Optimizing the Model -- 4.3 Exploiting the Model -- 5 Conclusion -- References -- Multivariate Time-Series Anomaly Detection with Temporal Self-supervision and Graphs: Application to Vehicle Failure Prediction -- 1 Introduction -- 1.1 Problem Statement -- 1.2 Addressing the Challenges: Our Methodology -- 2 Related Works -- 2.1 Vehicle Predictive Maintenance with Machine Learning -- 2.2 Time-Series Anomaly Detection -- 3 Proposed Model -- 3.1 Data Preprocessing and Feature Construction -- 3.2 Graph Autoencoder -- 3.3 Graph Generative Learning -- 3.4 Temporal Contrastive Learning -- 3.5 Anomaly Scoring -- 4 Experiments -- 4.1 Dataset -- 4.2 Evaluation Protocol -- 4.3 Experimental Results -- 5 Conclusion -- References -- Predictive Maintenance, Adversarial Autoencoders and Explainability -- 1 Introduction -- 2 Overview of Dataset -- 3 Proposed Solution -- 3.1 Autoencoder Models for Time Series Anomaly Detection -- 3.2 Failure Detection -- 3.3 Model Explainability -- 4 Results -- 4.1 Failure Detection Based on Compressor Cycles -- 4.2 Anomaly Detection on Data Chunks -- 4.3 Explainability -- 5 Discussion -- 6 Conclusion -- References -- TDCM: Transport Destination Calibrating Based on Multi-task Learning -- 1 Introduction -- 2 Related Work -- 3 Problem Definition -- 4 Overview -- 4.1 Waybill Trajectory Pre-processing -- 4.2 Stay Hotspot Detection -- 4.3 Hotspot Feature Extraction -- 4.4 Transport Destination Calibration -- 5 Experiments -- 5.1 Datasets and Settings -- 5.2 Overall Evaluation -- 5.3 Ablation Analysis of Features -- 5.4 Multi-task Weight Selection -- 5.5 Case Study -- 6 Conclusion -- References -- Demo -- An Interactive Interface for Novel Class Discovery in Tabular Data -- 1 Introduction -- 2 Interface Description -- 3 Conclusion -- References.
marl-jax: Multi-agent Reinforcement Learning Framework for Social Generalization.
Record Nr. UNISA-996550555303316
Cham, Switzerland : , : Springer Nature Switzerland AG, , [2023]
Materiale a stampa
Lo trovi qui: Univ. di Salerno
Opac: Controlla la disponibilità qui
Machine Learning and Knowledge Discovery in Databases : Applied Data Science and Demo Track / / edited by Gianmarco De Francisci Morales [and five others]
Machine Learning and Knowledge Discovery in Databases : Applied Data Science and Demo Track / / edited by Gianmarco De Francisci Morales [and five others]
Edizione [First edition.]
Pubbl/distr/stampa Cham, Switzerland : , : Springer Nature Switzerland AG, , [2023]
Descrizione fisica 1 online resource (427 pages)
Disciplina 006.3
Collana Lecture Notes in Computer Science Series
Soggetto topico Data mining
Databases
Machine learning
ISBN 3-031-43430-7
Formato Materiale a stampa
Livello bibliografico Monografia
Lingua di pubblicazione eng
Nota di contenuto Intro -- Preface -- Organization -- Invited Talks Abstracts -- Neural Wave Representations -- Physics-Inspired Graph Neural Networks -- Mapping Generative AI -- Contents - Part VII -- Sustainability, Climate, and Environment -- Continually Learning Out-of-Distribution Spatiotemporal Data for Robust Energy Forecasting -- 1 Introduction -- 2 Related Works -- 2.1 Energy Prediction in Urban Environments -- 2.2 Mobility Data as Auxiliary Information in Forecasting -- 2.3 Deep Learning for Forecasting -- 3 Problem Definition -- 3.1 Time Series Forecasting -- 3.2 Continual Learning for Time Series Forecasting -- 4 Method -- 4.1 Backbone-Temporal Convolutional Network -- 4.2 Fast Adaptation -- 4.3 Associative Memory -- 5 Datasets and Contextual Data -- 5.1 Energy Usage Data -- 5.2 Mobility Data -- 5.3 COVID Lockdown Dates -- 5.4 Temperature Data -- 5.5 Dataset Preprocessing -- 6 Experiments and Results -- 6.1 Experimental Setup -- 6.2 Mobility -- 6.3 Continual Learning -- 7 Conclusion -- References -- Counterfactual Explanations for Remote Sensing Time Series Data: An Application to Land Cover Classification -- 1 Introduction -- 2 Related Work -- 3 Study Area and Land Cover Classification -- 3.1 Study Area -- 3.2 Land Cover Classification -- 4 Proposed Method -- 4.1 Architecture Overview -- 4.2 Networks Implementation and Training -- 4.3 Class-Swapping Loss -- 4.4 GAN-Based Regularization for Plausibility -- 4.5 Unimodal Regularization for Time-Contiguity -- 5 Results -- 5.1 Experimental Settings -- 5.2 Comparative Analysis -- 5.3 CFE4SITS In-depth Analysis -- 6 Conclusion -- References -- Cloud Imputation for Multi-sensor Remote Sensing Imagery with Style Transfer -- 1 Introduction -- 2 Related Work -- 2.1 Multi-sensor Cloud Imputation -- 2.2 Style Transfer -- 3 Methodology -- 3.1 Adaptive Instance Normalization (AdaIN).
3.2 Cluster-Based Attentional Instance Normalization (CAIN) -- 3.3 Composite Style Transfer Module, CAIN + AdaIN (CAINA) -- 3.4 The Deep Learning Network Architecture -- 4 Experiments -- 4.1 Dataset and Environmental Configuration -- 4.2 Experiment Settings -- 4.3 Quantitative Results of the First Set of Experiments -- 4.4 Quantitative Results of the Second Set of Experiments -- 4.5 Analysis on Variances Among the Compared Methods -- 4.6 Qualitative Results and Residual Maps -- 5 Conclusions -- References -- Comprehensive Transformer-Based Model Architecture for Real-World Storm Prediction -- 1 Introduction -- 2 Related Work -- 3 Problem Statement, Challenge, and Idea -- 3.1 Problem Statement -- 3.2 Challenges -- 3.3 Our Idea -- 4 Method -- 4.1 Representation Learning -- 4.2 Prediction -- 5 Experiments and Results -- 5.1 Experimental Setting -- 5.2 Overall Performance Under Storm Event Predictions -- 5.3 Significance of Our Design Components for Storm Predictions -- 5.4 Necessity of Content Embedding in Our MAE Encoder -- 5.5 Detailed Design Underlying Our Pooling Layer -- 5.6 Constructing Temporal Representations -- 5.7 Impact of Positional Embedding on Our ViT Encoder -- 6 Conclusion -- References -- Explaining Full-Disk Deep Learning Model for Solar Flare Prediction Using Attribution Methods -- 1 Introduction -- 2 Related Work -- 3 Data and Model -- 4 Attribution Methods -- 5 Experimental Evaluation -- 5.1 Experimental Settings -- 5.2 Evaluation -- 6 Discussion -- 7 Conclusion and Future Work -- References -- Deep Spatiotemporal Clustering: A Temporal Clustering Approach for Multi-dimensional Climate Data -- 1 Introduction -- 2 Background and Problem Definition -- 2.1 Clustering Multi-dimensional Climate Data -- 2.2 Problem Definition -- 3 Related Works -- 4 Proposed Methodology -- 4.1 Overview of Our Deep Spatiotemporal Clustering (DSC) Approach.
4.2 Clustering Assignment -- 4.3 Joint Optimization -- 5 Experiments -- 5.1 Dataset and Data Preprocessing -- 5.2 Baseline Methods -- 5.3 Evaluation Metrics -- 5.4 Experiment Results -- 5.5 Ablation Study -- 6 Conclusions -- References -- Circle Attention: Forecasting Network Traffic by Learning Interpretable Spatial Relationships from Intersecting Circles -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Problem Definition -- 3.2 Circle Attention -- 3.3 Transformer Model -- 4 Experimental Settings -- 5 Results and Discussion -- 5.1 Experiment 1: Baseline Comparison -- 5.2 Experiment 2: Ablation Study of Circle Parameters -- 6 Conclusion -- 7 Ethical Statement -- References -- Transportation and Urban Planning -- Pre-training Contextual Location Embeddings in Personal Trajectories via Efficient Hierarchical Location Representations -- 1 Introduction -- 2 Preliminaries -- 3 Model -- 3.1 Geo-Tokenizer Embedding Layer -- 3.2 Causal Location Embedding Model -- 3.3 Pre-training Hierarchical Auto-Regressive Location Model -- 3.4 Fine-Tuning Downstream Tasks -- 4 Experiments -- 4.1 Datasets -- 4.2 Settings -- 4.3 Experimental Results (RQ1) -- 4.4 Ablation Study -- 4.5 Deployed Solution -- 5 Related Work -- 6 Conclusions -- References -- Leveraging Queue Length and Attention Mechanisms for Enhanced Traffic Signal Control Optimization -- 1 Introduction -- 2 Related Work -- 2.1 Traditional Methods -- 2.2 RL-Based Methods -- 3 Preliminary -- 4 Method -- 4.1 Introduce Queue Length for TSC Methods -- 4.2 AttentionLight Agent Design -- 4.3 Network Design of AttentionLight -- 5 Experiment -- 5.1 Overall Performance -- 5.2 Queue Length Effectiveness Analysis -- 5.3 Reward Function Investigation -- 5.4 Action Duration Study -- 5.5 Model Generalization -- 6 Conclusion -- References.
PICT: Precision-enhanced Road Intersection Recognition Using Cycling Trajectories -- 1 Introduction -- 2 Related Work -- 3 Methods -- 3.1 Problem Formulation -- 3.2 Framework Overview -- 3.3 Geometry Feature Extraction -- 3.4 Grid Topology Representation -- 3.5 Intersection Inference -- 4 Experiment -- 4.1 Experiment Settings -- 4.2 Main Results -- 4.3 Ablation Study -- 5 Conclusion -- References -- FDTI: Fine-Grained Deep Traffic Inference with Roadnet-Enriched Graph -- 1 Introduction -- 2 Related Work -- 3 Preliminaries -- 3.1 Problem Definition -- 4 Method -- 4.1 Fine-Grained Traffic Spatial-Temporal Graph -- 4.2 Dynamic Mobility Convolution -- 4.3 Flow Conservative Traffic State Transition -- 5 Experiment -- 5.1 Experiment Settings -- 5.2 Overall Performance -- 5.3 Graph Smooth Analysis -- 5.4 Ablation Study -- 5.5 Scalability -- 6 Conclusion -- References -- RulEth: Genetic Programming-Driven Derivation of Security Rules for Automotive Ethernet -- 1 Introduction -- 2 Background and Related Work -- 3 Threat Model -- 4 RulEth Language -- 5 RulEth System Architecture -- 6 Evaluation -- 7 Conclusion -- References -- Spatial-Temporal Graph Sandwich Transformer for Traffic Flow Forecasting -- 1 Introduction -- 2 Related Work -- 3 Preliminaries -- 3.1 Problem Formulation -- 3.2 Transformer Architecture -- 4 Proposed Method -- 4.1 Overall Design -- 4.2 Spatial-Temporal Sandwich Transformer -- 4.3 Multi-step Prediction -- 4.4 Training Procedure and Complexity -- 5 Experiments -- 5.1 Experimental Setup -- 5.2 Performance Comparison and Analysis (RQ1) -- 5.3 Ablation Study (RQ2) -- 5.4 Learning Stability (RQ3) -- 5.5 Parameter Sensitivity (RQ4) -- 6 Conclusion -- References -- Data-Driven Explainable Artificial Intelligence for Energy Efficiency in Short-Sea Shipping -- 1 Introduction -- 2 Related Work -- 3 Case Study Description -- 3.1 Problem Formulation.
4 Modeling and Analysis -- 4.1 Exploratory Analysis -- 4.2 Optimizing the Model -- 4.3 Exploiting the Model -- 5 Conclusion -- References -- Multivariate Time-Series Anomaly Detection with Temporal Self-supervision and Graphs: Application to Vehicle Failure Prediction -- 1 Introduction -- 1.1 Problem Statement -- 1.2 Addressing the Challenges: Our Methodology -- 2 Related Works -- 2.1 Vehicle Predictive Maintenance with Machine Learning -- 2.2 Time-Series Anomaly Detection -- 3 Proposed Model -- 3.1 Data Preprocessing and Feature Construction -- 3.2 Graph Autoencoder -- 3.3 Graph Generative Learning -- 3.4 Temporal Contrastive Learning -- 3.5 Anomaly Scoring -- 4 Experiments -- 4.1 Dataset -- 4.2 Evaluation Protocol -- 4.3 Experimental Results -- 5 Conclusion -- References -- Predictive Maintenance, Adversarial Autoencoders and Explainability -- 1 Introduction -- 2 Overview of Dataset -- 3 Proposed Solution -- 3.1 Autoencoder Models for Time Series Anomaly Detection -- 3.2 Failure Detection -- 3.3 Model Explainability -- 4 Results -- 4.1 Failure Detection Based on Compressor Cycles -- 4.2 Anomaly Detection on Data Chunks -- 4.3 Explainability -- 5 Discussion -- 6 Conclusion -- References -- TDCM: Transport Destination Calibrating Based on Multi-task Learning -- 1 Introduction -- 2 Related Work -- 3 Problem Definition -- 4 Overview -- 4.1 Waybill Trajectory Pre-processing -- 4.2 Stay Hotspot Detection -- 4.3 Hotspot Feature Extraction -- 4.4 Transport Destination Calibration -- 5 Experiments -- 5.1 Datasets and Settings -- 5.2 Overall Evaluation -- 5.3 Ablation Analysis of Features -- 5.4 Multi-task Weight Selection -- 5.5 Case Study -- 6 Conclusion -- References -- Demo -- An Interactive Interface for Novel Class Discovery in Tabular Data -- 1 Introduction -- 2 Interface Description -- 3 Conclusion -- References.
marl-jax: Multi-agent Reinforcement Learning Framework for Social Generalization.
Record Nr. UNINA-9910746290003321
Cham, Switzerland : , : Springer Nature Switzerland AG, , [2023]
Materiale a stampa
Lo trovi qui: Univ. Federico II
Opac: Controlla la disponibilità qui
Making Databases Work : The Pragmatic Wisdom of Michael Stonebraker
Making Databases Work : The Pragmatic Wisdom of Michael Stonebraker
Autore Brodie Michael L
Pubbl/distr/stampa San Rafael : , : Morgan & Claypool Publishers, , 2018
Descrizione fisica 1 online resource (732 pages)
Disciplina 005.74
Collana ACM books
Soggetto topico Databases
Relational databases - Computer programs
ISBN 1-947487-17-5
Formato Materiale a stampa
Livello bibliografico Monografia
Lingua di pubblicazione eng
Nota di contenuto Intro -- Contents -- Data Management Technology Kairometer: The Historical Context -- Foreword -- Preface -- Introduction -- PART I. 2014 ACM A.M. TURING AWARD PAPER AND LECTURE -- The Land Sharks Are on the Squawk Box -- PART II. MIKE STONEBRAKER'S CAREER -- 1. Make it Happen: The Life of Michael Stonebraker -- PART III. MIKE STONEBRAKER SPEAKS OUT: AN INTERVIEW WITH MARIANNE WINSLETT -- PART IV. THE BIG PICTURE -- 3. Leadership and Advocacy -- 4. Perspectives: The 2014 ACM Turing Award -- 5. Birth of an Industry -- Path to the Turing Award -- 6. A Perspective of Mike from a 50-Year Vantage Point -- PART V. STARTUPS -- 7. How to Start a Company in Five (Not So) Easy Steps -- 8. How to Create and Run a Stonebraker Startup-The Real Story -- 9. Getting Grownups in the Room: A VC Perspective -- PART VI. DATABASE SYSTEMS RESEARCH -- 10. Where Good Ideas Come From and How to Exploit Them -- 11. Where We Have Failed -- 12. Stonebraker and Open Source -- 13. The Relational Database Management Systems Genealogy -- PART VII. CONTRIBUTIONS BY SYSTEM -- 14. Research Contributions of Mike Stonebraker: An Overview -- PART VII.A. Research Contributions by System -- 15. The Later Ingres Years -- 16. Looking Back at Postgres -- 17. Databases Meet the Stream Processing Era -- 18. C-Store: Through the Eyes of a Ph.D. Student -- 19. In-Memory, Horizontal, and Transactional: The H-Store OLTP DBMS Project -- 20. Scaling Mountains: SciDB and Scientific Data Management -- 21. Data Unification at Scale: Data Tamer -- 22. The BigDAWG Polystore System -- 23. Data Civilizer: End-to-End Support for Data Discovery, Integration, and Cleaning -- PART VII.B. Contributions from Building Systems -- 24. The Commercial Ingres Codeline -- 25. The Postgres and Illustra Codelines -- 26. The Aurora/Borealis/StreamBase Codelines: A Tale of Three Systems. -- 27. The Vertica Codeline -- 28. The VoltDB Codeline -- 29. The SciDB Codeline: Crossing the Chasm -- 30. 
The Tamr Codeline -- 31. The BigDAWG Codeline -- PART VIII. PERSPECTIVES -- 32. IBM Relational Database Code Bases -- 33. Aurum: A Story about Research Taste -- 34. Nice: Or What It Was Like to Be Mike's Student -- 35. Michael Stonebraker: Competitor, Collaborator, Friend -- 36. The Changing of the Database Guard -- PART IX. SEMINAL WORKS OF MICHAEL STONEBRAKER AND HIS COLLABORATORS -- OLTP Through the Looking Glass, and What We Found There -- "One Size Fits All": An Idea Whose Time Has Come and Gone -- The End of an Architectural Era (It's Time for a Complete Rewrite) -- C-Store: A Column-Oriented DBMS -- The Implementation of POSTGRES -- The Design and Implementation of INGRES -- The Collected Works of Michael Stonebraker -- References -- Index -- Biographies.
Altri titoli varianti Making databases work
Record Nr. UNINA-9910838321003321
Brodie Michael L  
San Rafael : , : Morgan & Claypool Publishers, , 2018
Materiale a stampa
Lo trovi qui: Univ. Federico II
Opac: Controlla la disponibilità qui