LEADER 12838nam 22008535 450
001 9910851993803321
005 20240826123243.0
010 $a981-9722-66-7
024 7 $a10.1007/978-981-97-2266-2
035 $a(CKB)31801766700041
035 $a(MiAaPQ)EBC31305356
035 $a(Au-PeEL)EBL31305356
035 $a(MiAaPQ)EBC31319818
035 $a(Au-PeEL)EBL31319818
035 $a(DE-He213)978-981-97-2266-2
035 $a(EXLCZ)9931801766700041
100 $a20240424d2024 u| 0
101 0 $aeng
135 $aur|||||||||||
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 10$aAdvances in Knowledge Discovery and Data Mining $e28th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2024, Taipei, Taiwan, May 7–10, 2024, Proceedings, Part VI /$fedited by De-Nian Yang, Xing Xie, Vincent S. Tseng, Jian Pei, Jen-Wei Huang, Jerry Chun-Wei Lin
205 $a1st ed. 2024.
210 1$aSingapore :$cSpringer Nature Singapore :$cImprint: Springer,$d2024.
215 $a1 online resource (329 pages)
225 1 $aLecture Notes in Artificial Intelligence,$x2945-9141 ;$v14650
311 $a981-9722-65-9
327 $aIntro -- General Chairs' Preface -- PC Chairs' Preface -- Organization -- Contents - Part VI -- Scientific Data -- FR3LS: A Forecasting Model with Robust and Reduced Redundancy Latent Series -- 1 Introduction -- 2 Related Work -- 3 Problem Setup -- 4 Model Architecture -- 4.1 Temporal Contextual Consistency -- 4.2 Non-contrastive Representations Learning -- 4.3 Deterministic Forecasting -- 4.4 Probabilistic Forecasting -- 4.5 End-to-End Training -- 5 Experiments -- 5.1 Experimental Results -- 5.2 Visualization of Latent and Original Series Forecasts -- 5.3 Further Experimental Setup Details -- 6 Conclusion -- References -- Knowledge-Infused Optimization for Parameter Selection in Numerical Simulations -- 1 Introduction -- 2 Preliminaries on Numerical Simulations -- 3 Identification and Analysis of Relevant Metadata -- 4 Efficient Metadata Capture for Parameter Optimization -- 4.1 Early Termination of Farming Runs -- 4.2 PROBE: Probing Specific Parameter Combinations -- 5 Experimental Evaluation -- 5.1 Quality of Parameter Optimization Using PROBE -- 5.2 Efficiency Evaluation -- 5.3 Reuse of Metadata Acquired Through PROBE -- 5.4 Generalization to Other Model Problems and Schemes -- 6 Conclusion and Outlook -- References -- Material Microstructure Design Using VAE-Regression with a Multimodal Prior -- 1 Introduction -- 2 Methodology -- 3 Related Work -- 4 Experimental Results -- 5 Summary and Conclusions -- References -- A Weighted Cross-Modal Feature Aggregation Network for Rumor Detection -- 1 Introduction -- 2 Related Work -- 2.1 Rumor Detection -- 2.2 Multimodal Alignment -- 3 Methodology -- 3.1 Overview of WCAN -- 3.2 Feature Extraction -- 3.3 Weighted Cross-Modal Aggregation Module -- 3.4 Multimodal Feature Fusion -- 3.5 Objective Function -- 4 Experiments -- 4.1 Datasets -- 4.2 Baselines -- 4.3 Ablation Experiment.
327 $a4.4 Hyper-parameter Analysis -- 4.5 Visualization on the Representations -- 4.6 Case Study -- 5 Conclusions -- References -- Texts, Web, Social Network -- Quantifying Opinion Rejection: A Method to Detect Social Media Echo Chambers -- 1 Introduction -- 2 Related Work -- 3 Preliminaries -- 4 Echo Chamber Detection in Signed Networks -- 5 SEcho Method -- 5.1 SEcho Metric -- 5.2 Greedy Optimisation -- 6 Experiments -- 7 Conclusion -- References -- KiProL: A Knowledge-Injected Prompt Learning Framework for Language Generation -- 1 Introduction -- 2 Methodology -- 2.1 Problem Statement -- 2.2 Knowledge-Injected Prompt Learning Generation -- 2.3 Training and Inference -- 3 Experiments -- 3.1 Datasets -- 3.2 Settings -- 3.3 Automatic Evaluation -- 3.4 Human Annotation -- 3.5 Ablation Study -- 3.6 In-Depth Analysis -- 3.7 Case Study -- 4 Conclusion -- References -- GViG: Generative Visual Grounding Using Prompt-Based Language Modeling for Visual Question Answering -- 1 Introduction -- 2 Related Work -- 2.1 Pix2Seq Framework -- 2.2 Prompt Tuning -- 3 Methodology -- 3.1 Prompt Tuning Module -- 3.2 VG Module -- 3.3 Conditional Trie-Based Search Algorithm (CTS) -- 4 Results -- 4.1 Dataset Description -- 4.2 Results on WSDM 2023 Toloka VQA Dataset Benchmark -- 5 Discussion -- 5.1 Prompt Study -- 5.2 Interpretable Attention -- 6 Conclusion -- References -- Aspect-Based Fake News Detection -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Problem Definition and Model Overview -- 3.2 Aspect Learning and Extraction -- 3.3 News Article Classification -- 4 Experiments -- 4.1 Datasets -- 4.2 Experimental Settings -- 4.3 Evaluation -- 5 Analysing the Effect of Aspects Across Topics -- 6 Discussion and Future Work -- 7 Conclusion -- References -- DQAC: Detoxifying Query Auto-completion with Adapters -- 1 Introduction -- 2 Related Work -- 3 Methodology. 
327 $a3.1 QDetoxify: Toxicity Classifier for Search Queries -- 3.2 The DQAC Model -- 4 Experimental Setup -- 5 Results and Analyses -- 6 Conclusions -- References -- Graph Neural Network Approach to Semantic Type Detection in Tables -- 1 Introduction -- 2 Related Works -- 3 Problem Definition -- 4 GAIT -- 4.1 Single-Column Prediction -- 4.2 Graph-Based Prediction -- 4.3 Overall Prediction -- 5 Evaluation -- 5.1 Evaluation Method -- 5.2 Results -- 6 Conclusion -- References -- TCGNN: Text-Clustering Graph Neural Networks for Fake News Detection on Social Media -- 1 Introduction -- 2 Related Work -- 3 The Proposed TCGNN Method -- 3.1 Text-Clustering Graph Construction -- 3.2 Model Training -- 4 Experiments -- 5 Conclusions and Discussion -- References -- Exploiting Adaptive Contextual Masking for Aspect-Based Sentiment Analysis -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Problem Formulation and Motivation -- 3.2 Standalone ABSA Tasks -- 3.3 Adaptive Contextual Threshold Masking (ACTM) -- 3.4 Adaptive Attention Masking (AAM) -- 3.5 Adaptive Mask Over Masking (AMOM) -- 3.6 Training Procedure for ATE and ASC -- 4 Experiments and Results -- 5 Conclusion -- References -- An Automated Approach for Generating Conceptual Riddles -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Triples Creator -- 3.2 Properties Classifier -- 3.3 Generator -- 3.4 Validator -- 4 Evaluation and Results -- 5 Conclusion and Future Work -- References -- Time-Series and Streaming Data -- DiffFind: Discovering Differential Equations from Time Series -- 1 Introduction -- 2 Background and Related Work -- 2.1 Related Work -- 2.2 Background - Genetic Algorithms for Architecture Search -- 3 Proposed Method: DiffFind -- 4 Experiments -- 4.1 Q1 - DiffFind is Effective -- 4.2 Q2 - DiffFind is Explainable -- 4.3 Q3 - DiffFind is Scalable -- 5 Conclusions -- References. 
327 $aDEAL: Data-Efficient Active Learning for Regression Under Drift -- 1 Introduction -- 2 Related Work -- 3 Problem Statement and Notation -- 4 Our Method: DEAL -- 4.1 The Adapted Stream-Based AL Cycle -- 4.2 Our Drift-Aware Estimation Model -- 5 Experimental Design -- 5.1 Baselines -- 5.2 Evaluation Data -- 5.3 Evaluation Metrics -- 6 Evaluation -- 6.1 Comparison of DEAL Against Baselines -- 6.2 Impact of the User-Required Error Threshold -- 7 Conclusion -- References -- Evolving Super Graph Neural Networks for Large-Scale Time-Series Forecasting -- 1 Introduction -- 2 Related Models -- 3 Evolving Super Graph Neural Networks -- 3.1 Preliminary Notations -- 3.2 Super Graph Construction -- 4 Diffusion on Evolving Super Graphs -- 4.1 Predictor -- 5 Experiments on Large-Scale Datasets -- 5.1 Forecasting Result and Analysis -- 5.2 Runtime and Space Usage Analysis -- 5.3 Ablation Study -- 6 Conclusion -- References -- Unlearnable Examples for Time Series -- 1 Introduction -- 2 Related Work -- 2.1 Data Poisoning -- 2.2 Adversarial Attack -- 2.3 Unlearnable Examples -- 3 Error-Minimizing Noise for Time Series -- 3.1 Objective -- 3.2 Threat Model -- 3.3 Challenges -- 3.4 Problem Formulation -- 3.5 A Straightforward Baseline Approach -- 3.6 Controllable Noise on Partial Time Series Samples -- 4 Experiments -- 4.1 Experiment Setup -- 4.2 Against Classification Models -- 4.3 Against Generative Models -- 5 Conclusion -- References -- Learning Disentangled Task-Related Representation for Time Series -- 1 Introduction -- 2 Related Work -- 3 The Proposed Method -- 3.1 Overview -- 3.2 Task-Relevant Feature Disentangled -- 3.3 Task-Adaptive Augmentation Selection -- 4 Experiments and Discussions -- 4.1 Datasets and Implementation Details -- 4.2 Ablation Analysis -- 4.3 Results on Classification Tasks -- 4.4 Results on Forecasting Tasks -- 4.5 Visualization Analysis. 
327 $a5 Conclusion -- References -- A Multi-view Feature Construction and Multi-Encoder-Decoder Transformer Architecture for Time Series Classification -- 1 Introduction -- 2 Related Works -- 3 Problem Formulation -- 4 Methodology -- 4.1 Feature Construction -- 4.2 Multi-view Representation -- 4.3 Multi-Encoder-Decoder Transformer (MEDT) Classification -- 5 Experiments -- 5.1 Experiments Using Multivariate Time Series Data Benchmarks -- 5.2 Experiment Using a Real-World Physical Activities Dataset -- 6 Conclusion -- References -- Kernel Representation Learning with Dynamic Regime Discovery for Time Series Forecasting -- 1 Introduction -- 2 Related Work -- 3 Preliminaries -- 3.1 Key Concepts -- 3.2 Self-representation Learning in Time Series -- 3.3 Kernel Trick for Modeling Time Series -- 4 Proposed Method -- 4.1 Kernel Representation Learning: Modeling Regime Behavior -- 4.2 Forecasting -- 5 Experiments -- 5.1 Data -- 5.2 Experimental Setup and Evaluation -- 5.3 Regime Identification -- 5.4 Benchmark Comparison -- 5.5 Ablation Study -- 6 Conclusion -- References -- Hyperparameter Tuning MLP's for Probabilistic Time Series Forecasting -- 1 Introduction -- 2 Problem Statement -- 3 MLPs for Time Series Forecasting -- 3.1 Nlinear Model -- 4 Hyperparameters -- 4.1 Time Series Specific Configuration -- 4.2 Training Specific Configurations -- 4.3 TSBench-Metadataset -- 5 Experimental Setup -- 6 Results -- 7 Conclusion -- References -- Efficient and Accurate Similarity-Aware Graph Neural Network for Semi-supervised Time Series Classification -- 1 Introduction -- 2 Related Work -- 2.1 Graph-Based Time Series Classification -- 2.2 Lower Bound of DTW -- 3 Problem Formulation -- 4 Methodology -- 4.1 Batch Sampling -- 4.2 LB_Keogh Graph Construction -- 4.3 Graph Convolution and Classification -- 4.4 Advantages of Our Model -- 5 Experimental Evaluation.
327 $a5.1 Comparing with 1NN-DTW.
330 $aThe 6-volume set LNAI 14645-14650 constitutes the proceedings of the 28th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2024, which took place in Taipei, Taiwan, during May 7–10, 2024. The 177 papers presented in these proceedings were carefully reviewed and selected from 720 submissions. They deal with new ideas, original research results, and practical development experiences from all KDD related areas, including data mining, data warehousing, machine learning, artificial intelligence, databases, statistics, knowledge engineering, big data technologies, and foundations.
410 0$aLecture Notes in Artificial Intelligence,$x2945-9141 ;$v14650
606 $aArtificial intelligence
606 $aAlgorithms
606 $aEducation$xData processing
606 $aComputer science$xMathematics
606 $aSignal processing
606 $aComputer networks
606 $aArtificial Intelligence
606 $aDesign and Analysis of Algorithms
606 $aComputers and Education
606 $aMathematics of Computing
606 $aSignal, Speech and Image Processing
606 $aComputer Communication Networks
615 0$aArtificial intelligence.
615 0$aAlgorithms.
615 0$aEducation$xData processing.
615 0$aComputer science$xMathematics.
615 0$aSignal processing.
615 0$aComputer networks.
615 14$aArtificial Intelligence.
615 24$aDesign and Analysis of Algorithms.
615 24$aComputers and Education.
615 24$aMathematics of Computing.
615 24$aSignal, Speech and Image Processing.
615 24$aComputer Communication Networks.
676 $a006.3
700 $aYang$b De-Nian$01737375
701 $aXie$b Xing$01734375
701 $aTseng$b Vincent S$01737376
701 $aPei$b Jian$0868267
701 $aHuang$b Jen-Wei$01737377
701 $aLin$b Jerry Chun-Wei$01453271
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910851993803321
996 $aAdvances in Knowledge Discovery and Data Mining$94159072
997 $aUNINA