
Machine Learning and Knowledge Discovery in Databases : Research Track / edited by Danai Koutra [and four others]




Title: Machine Learning and Knowledge Discovery in Databases : Research Track / edited by Danai Koutra [and four others]
Publication: Cham, Switzerland : Springer, [2023]
©2023
Edition: First edition.
Physical description: 1 online resource (506 pages)
Discipline (Dewey): 943.005
Topical subject: Data mining
Databases
Machine learning
Secondary responsibility (person): Koutra, Danai
Bibliography note: Includes bibliographical references and index.
Contents note: Intro -- Preface -- Organization -- Invited Talks Abstracts -- Neural Wave Representations -- Physics-Inspired Graph Neural Networks -- Mapping Generative AI -- Contents - Part V -- Robustness -- MMA: Multi-Metric-Autoencoder for Analyzing High-Dimensional and Incomplete Data -- 1 Introduction -- 2 Related Work -- 2.1 LFA-Based Model -- 2.2 Deep Learning-Based Model -- 3 Methodology -- 3.1 Establishment of Base Model -- 3.2 Self-Adaptively Aggregation -- 3.3 Theoretical Analysis -- 4 Experiments -- 4.1 General Settings -- 4.2 Performance Comparison (RQ.1) -- 4.3 The Self-ensembling of MMA (RQ.2) -- 4.4 Base Models' Latent Factors Distribution (RQ. 3) -- 5 Conclusion -- References -- Exploring and Exploiting Data-Free Model Stealing -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 TandemGAN Framework -- 3.2 Optimization Objectives and Training Procedure -- 4 Evaluation -- 4.1 Model Stealing Performance -- 4.2 Ablation Study -- 5 Possible Extension -- 6 Conclusion -- References -- Exploring the Training Robustness of Distributional Reinforcement Learning Against Noisy State Observations -- 1 Introduction -- 2 Background: Distributional RL -- 3 Tabular Case: State-Noisy MDP -- 3.1 Analysis of SN-MDP for Expectation-Based RL -- 3.2 Analysis of SN-MDP in Distributional RL -- 4 Function Approximation Case -- 4.1 Convergence of Linear TD Under Noisy States -- 4.2 Vulnerability of Expectation-Based RL -- 4.3 Robustness Advantage of Distributional RL -- 5 Experiments -- 5.1 Results on Continuous Control Environments -- 5.2 Results on Classical Control and Atari Games -- 6 Discussion and Conclusion -- References -- Overcoming the Limitations of Localization Uncertainty: Efficient and Exact Non-linear Post-processing and Calibration -- 1 Introduction -- 2 Background and Related Work -- 3 Method -- 3.1 Uncertainty Estimation.
3.2 Uncertainty Propagation -- 3.3 Uncertainty Calibration -- 4 Experiments -- 4.1 Decoding Methods -- 4.2 Calibration Evaluation -- 4.3 Uncertainty Correlation -- 5 Conclusion -- References -- Label Shift Quantification with Robustness Guarantees via Distribution Feature Matching -- 1 Introduction -- 1.1 Related Literature -- 1.2 Contributions of the Paper -- 2 Distribution Feature Matching -- 2.1 Kernel Mean Matching -- 2.2 BBSE as Distribution Feature Matching -- 3 Theoretical Guarantees -- 3.1 Comparison to Related Literature -- 3.2 Robustness to Contamination -- 4 Algorithm and Applications -- 4.1 Optimisation Problem -- 4.2 Experiments -- 5 Conclusion -- References -- Robust Classification of High-Dimensional Data Using Data-Adaptive Energy Distance -- 1 Introduction -- 1.1 Our Contribution -- 2 Methodology -- 2.1 A Classifier Based on W*FG -- 2.2 Refinements of 0 -- 3 Asymptotics Under HDLSS Regime -- 3.1 Misclassification Probabilities of 1, 2, and 3 in the HDLSS Asymptotic Regime -- 3.2 Comparison of the Classifiers -- 4 Empirical Performance and Results -- 4.1 Simulation Studies -- 4.2 Implementation on Real Data -- 5 Concluding Remarks -- References -- DualMatch: Robust Semi-supervised Learning with Dual-Level Interaction -- 1 Introduction -- 2 Related Work -- 2.1 Semi-supervised Learning -- 2.2 Supervised Contrastive Learning -- 3 Method -- 3.1 Preliminaries -- 3.2 The DualMatch Structure -- 3.3 First-Level Interaction: Align -- 3.4 Second-Level Interaction: Aggregate -- 3.5 Final Objective -- 4 Experiment -- 4.1 Semi-supervised Classification -- 4.2 Class-Imbalanced Semi-supervised Classification -- 4.3 Ablation Study -- 5 Conclusion -- References -- Detecting Evasion Attacks in Deployed Tree Ensembles -- 1 Introduction -- 2 Preliminaries -- 3 Detecting Evasion Attacks with OC-score -- 3.1 The OC-score Metric -- 3.2 Theoretical Analysis.
4 Related Work -- 5 Experimental Evaluation -- 5.1 Experimental Methodology -- 5.2 Results Q1: Detecting Evasion Attacks -- 5.3 Results Q2: Prediction Time Cost -- 5.4 Results Q3: Size of the Reference Set -- 6 Conclusions and Discussion -- References -- Time Series -- Deep Imbalanced Time-Series Forecasting via Local Discrepancy Density -- 1 Introduction -- 2 Related Work -- 2.1 Deep Learning Models for Time-Series Forecasting -- 2.2 Robustness Against Noisy Samples and Data Imbalance -- 3 Method -- 3.1 Local Discrepancy -- 3.2 Density-Based Reweighting for Time-Series Forecasting -- 4 Experiments -- 4.1 Experiment Setting -- 4.2 Main Results -- 4.3 Comparisons with Other Methods -- 4.4 Variants for Local Discrepancy -- 4.5 Dataset Analysis -- 4.6 Computational Cost of ReLD -- 5 Discussion and Limitation -- References -- Online Deep Hybrid Ensemble Learning for Time Series Forecasting -- 1 Introduction -- 2 Related Works -- 2.1 On Online Ensemble Aggregation for Time Series Forecasting -- 2.2 On Ensemble Learning Using RoCs -- 3 Methodology -- 3.1 Preliminaries -- 3.2 Ensemble Architecture -- 3.3 RoCs Computation -- 3.4 Ensemble Aggregation -- 3.5 Ensemble Adaptation -- 4 Experiments -- 4.1 Experimental Set-Up -- 4.2 ODH-ETS Setup and Baselines -- 4.3 Results -- 4.4 Discussion -- 5 Concluding Remarks -- References -- Sparse Transformer Hawkes Process for Long Event Sequences -- 1 Introduction -- 2 Related Work -- 3 Background -- 3.1 Hawkes Process -- 3.2 Self-attention -- 4 Proposed Model -- 4.1 Event Model -- 4.2 Count Model -- 4.3 Intensity -- 4.4 Training -- 5 Experiment -- 5.1 Data -- 5.2 Baselines -- 5.3 Evaluation Metrics and Training Details -- 5.4 Results of Log-Likelihood -- 5.5 Results of Event Type and Time Prediction -- 5.6 Computational Efficiency -- 5.7 Sparse Attention Mechanism -- 5.8 Ablation Study -- 6 Conclusion -- References.
Adacket: ADAptive Convolutional KErnel Transform for Multivariate Time Series Classification -- 1 Introduction -- 2 Related Work -- 2.1 Multivariate Time Series Classification -- 2.2 Reinforcement Learning -- 3 Preliminaries -- 4 Method -- 4.1 Multi-objective Optimization: Performance and Resource -- 4.2 RL-Based Decision Model: Channel and Temporal Dimensions -- 4.3 DDPG Adaptation: Efficient Kernel Exploration -- 5 Experiments -- 5.1 Experimental Settings -- 5.2 Classification Performance Evaluation -- 5.3 Understanding Adacket's Design Selections -- 6 Conclusion -- References -- Efficient Adaptive Spatial-Temporal Attention Network for Traffic Flow Forecasting -- 1 Introduction -- 2 Related Work -- 3 Preliminaries -- 4 Methodology -- 4.1 Adaptive Spatial-Temporal Fusion Embedding -- 4.2 Dominant Spatial-Temporal Attention -- 4.3 Efficient Spatial-Temporal Block -- 4.4 Encoder-Decoder Architecture -- 5 Experiments -- 5.1 Experimental Setup -- 5.2 Overall Comparison -- 5.3 Ablation Study -- 5.4 Model Analysis -- 5.5 Interpretability of EASTAN -- 6 Conclusion -- References -- Estimating Dynamic Time Warping Distance Between Time Series with Missing Data -- 1 Introduction -- 2 Notation and Problem Statement -- 3 Background: Dynamic Time Warping (DTW) -- 4 DTW-AROW -- 5 DTW-CAI -- 5.1 DBAM -- 5.2 DTW-CAI -- 6 Comparison to Related Work -- 7 Experimental Evaluation -- 7.1 Datasets and Implementation Details -- 7.2 Evaluation of Pairwise Distances (Q1) -- 7.3 Evaluation Through Classification (Q2) -- 7.4 Evaluation Through Clustering (Q3) -- 8 Conclusion -- References -- Uncovering Multivariate Structural Dependency for Analyzing Irregularly Sampled Time Series -- 1 Introduction -- 2 Preliminaries -- 3 Our Proposed Model -- 3.1 Multivariate Interaction Module -- 3.2 Correlation-Aware Neighborhood Aggregation -- 3.3 Masked Time-Aware Self-Attention.
3.4 Graph-Level Learning Module -- 4 Experiments -- 4.1 Datasets -- 4.2 Competitors -- 4.3 Setups and Results -- 5 Conclusion -- References -- Weighted Multivariate Mean Reversion for Online Portfolio Selection -- 1 Introduction -- 2 Problem Setting -- 3 Related Work and Motivation -- 3.1 Related Work -- 3.2 Motivation -- 4 Multi-variate Robust Mean Reversion -- 4.1 Formulation -- 4.2 Online Portfolio Selection -- 4.3 Algorithms -- 5 Experiments -- 5.1 Cumulative Wealth -- 5.2 Computational Time -- 5.3 Parameter Sensitivity -- 5.4 Risk-Adjusted Returns -- 5.5 Transaction Cost Scalability -- 6 Conclusion -- References -- H2-Nets: Hyper-hodge Convolutional Neural Networks for Time-Series Forecasting -- 1 Introduction -- 2 Related Work -- 3 Higher-Order Structures on Graph -- 3.1 Hyper-k-Simplex-Network Learning Statement -- 3.2 Preliminaries on Hodge Theory -- 4 The H2-Nets Methodology -- 5 Experiments -- 6 Conclusion -- References -- Transfer and Multitask Learning -- Overcoming Catastrophic Forgetting for Fine-Tuning Pre-trained GANs -- 1 Introduction -- 2 Background and Related Work -- 2.1 Deep Transfer Learning -- 2.2 Generative Adversarial Networks (GANs) -- 2.3 Transfer Learning for GANs -- 3 Approach -- 3.1 Trust-Region Optimization -- 3.2 Spectral Diversification -- 4 Experiment -- 4.1 Performance on the Full Datasets -- 4.2 Performance on the Subsets of 1K Samples -- 4.3 Performance on the Subsets of 100 and 25 Samples -- 4.4 Ablation Study -- 4.5 Limitation and Future Work -- 5 Conclusion -- References -- Unsupervised Domain Adaptation via Bidirectional Cross-Attention Transformer -- 1 Introduction -- 2 Related Work -- 2.1 Unsupervised Domain Adaptation -- 2.2 Vision Transformers -- 2.3 Vision Transformer for Unsupervised Domain Adaptation -- 3 The BCAT Method -- 3.1 Quadruple Transformer Block.
3.2 Bidirectional Cross-Attention as Implicit Feature Mixup.
Authorized title: Machine Learning and Knowledge Discovery in Databases
ISBN: 3-031-43424-2
Format: Printed material
Bibliographic level: Monograph
Language of publication: English
Record no.: 996550555603316
Held by: Univ. di Salerno
Series: Lecture notes in computer science ; Volume 14173.