Title: Database systems for advanced applications : 26th international conference, DASFAA 2021, Taipei, Taiwan, April 11-14, 2021 : proceedings, Part II / Christian S. Jensen [and seven others], editors
Publication: Cham, Switzerland : Springer, [2021]
©2021
Physical description: 1 online resource (817 pages)
Dewey classification: 005.74
Topical subjects: Database management; Databases
Secondary responsibility: Jensen, Christian S.
Contents note: Intro -- Preface -- Organization -- Contents - Part II -- Text and Unstructured Data -- Multi-label Classification of Long Text Based on Key-Sentences Extraction -- 1 Introduction -- 2 Related Work -- 2.1 Multi-Label Learning -- 2.2 Multi-Task Learning -- 3 Model -- 3.1 Task Definition -- 3.2 Sentence Encoder -- 3.3 Key-Sentences Extraction with Semi-supervised Learning -- 3.4 Multi-label Prediction Based Multi-label Attention -- 3.5 Optimization -- 4 Experiments -- 4.1 Data -- 4.2 Baseline Models and Evaluation Metrics -- 4.3 Experimental Settings -- 4.4 Results and Analysis -- 4.5 Ablation Test -- 4.6 Case Study -- 5 Conclusion -- References -- Automated Context-Aware Phrase Mining from Text Corpora -- 1 Introduction -- 2 Methodology -- 2.1 Problem Definition -- 2.2 Overview -- 2.3 Data Process -- 2.4 Topic-Aware Phrase Recognition Network (TPRNet) -- 2.5 Instance Selection Network (ISNet) -- 2.6 Training Details -- 3 Experiments -- 3.1 Experimental Setup -- 3.2 Experimental Results -- 3.3 Impact of Topic Information -- 3.4 Effectiveness of Selection Policy -- 4 Related Work -- 5 Conclusion -- References -- Keyword-Aware Encoder for Abstractive Text Summarization -- 1 Introduction -- 2 Related Work -- 3 Model Description -- 3.1 Keywords Extraction -- 3.2 Dependency-Based Keyword Sequence -- 3.3 Keyword-Aware Encoder -- 3.4 Summary Decoder -- 3.5 Objective Function -- 4 Experiments -- 4.1 Datasets -- 4.2 Baselines -- 4.3 Evaluation Metric -- 4.4 Implementation Details -- 4.5 Evaluation -- 5 Discussion -- 5.1 Visualization of Gates and Attention Weights -- 5.2 Influence of Keyword Extraction Ratio -- 5.3 Analysis of Content Selection Methods -- 5.4 Case Study -- 6 Conclusion -- References -- Neural Adversarial Review Summarization with Hierarchical Personalized Attention -- 1 Introduction -- 2 Related Work -- 3 Proposed Method.
3.1 Problem Formulation -- 3.2 Review Encoder -- 3.3 Abstractive Summary Generation -- 4 Experimental Setup -- 4.1 Datasets -- 4.2 Baseline Methods -- 4.3 Experimental Settings -- 5 Result and Discussion -- 5.1 Performance Evaluation -- 5.2 Ablation Study -- 5.3 Case Study -- 5.4 Visualization of Attention -- 6 Conclusion -- References -- Generating Contextually Coherent Responses by Learning Structured Vectorized Semantics -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Model Overview -- 3.2 Hierarchical Centralized Encoder -- 3.3 Inference Network -- 3.4 Decoder with Calibration Mechanism -- 3.5 Loss Function -- 4 Experiments -- 4.1 Experimental Settings -- 4.2 Automatic Metric-Based Evaluation -- 4.3 Manual Evaluation -- 4.4 Further Analysis of Our Method -- 4.5 Case Study -- 5 Conclusion -- References -- Latent Graph Recurrent Network for Document Ranking -- 1 Introduction -- 2 Related Work -- 2.1 Interaction Based Neural Ranking Models -- 2.2 Pretrained Neural Language Models for IR -- 2.3 Graph Neural Network -- 3 Method -- 3.1 Formalization -- 3.2 Architecture -- 3.3 Loss Function -- 4 Experiments -- 4.1 Experimental Setting -- 4.2 Effectiveness Analysis -- 4.3 Ablation Study for Masking Strategy -- 4.4 Ablation Study for Distance Learning Task -- 4.5 Query Length Analysis -- 5 Conclusion -- References -- Discriminative Feature Adaptation via Conditional Mean Discrepancy for Cross-Domain Text Classification -- 1 Introduction -- 2 Preliminary -- 2.1 Kernels and Hilbert Space Embedding -- 2.2 Hilbert Space Embedding of Conditional Distributions -- 3 Proposed Model -- 3.1 Conditional Mean Discrepancy -- 3.2 Aligned Adaptation Networks with Adversarial Learning -- 4 Experiments -- 4.1 Setup -- 4.2 Results -- 4.3 Analysis -- 5 Related Work -- 6 Conclusion -- References.
Discovering Protagonist of Sentiment with Aspect Reconstructed Capsule Network -- 1 Introduction -- 2 Related Work -- 3 The CAPSAR Model -- 3.1 Model Overview -- 3.2 Sequence Encoder -- 3.3 Location Proximity with Given Aspect -- 3.4 Capsule Layers with Sharing-Weight Routing -- 3.5 Model Training with Aspect Reconstruction -- 3.6 Combining CAPSAR with BERT -- 4 Experiments -- 4.1 Datasets -- 4.2 Compared Methods -- 4.3 Experimental Settings -- 4.4 Results on Standard ATSA -- 4.5 Results on Aspect Term Detection -- 5 Conclusion -- References -- Discriminant Mutual Information for Text Feature Selection -- 1 Introduction -- 2 Related Work -- 2.1 Text Representation -- 2.2 Mutual Information -- 2.3 mRMR -- 3 Discriminant Mutual Information -- 3.1 Text Preprocessing -- 3.2 Discriminant Mutual Information -- 4 Experiments and Analysis -- 4.1 Datasets -- 4.2 Classifiers and Evaluation Measure -- 4.3 Experimental Results -- 5 Conclusion -- References -- CAT-BERT: A Context-Aware Transferable BERT Model for Multi-turn Machine Reading Comprehension -- 1 Introduction -- 2 Related Work -- 2.1 Machine Reading Comprehension -- 2.2 Transfer Learning -- 3 The CAT-BERT Model -- 3.1 Task Description and Overall Framework -- 3.2 Context-Aware BERT Encoding -- 3.3 Transfer Learning with Task-Specific Attention -- 3.4 Dynamic Training Policy -- 4 Experiments -- 4.1 Datasets -- 4.2 Experimental Setup -- 4.3 Overall Results -- 4.4 Comparison of Transfer Policies -- 4.5 The Benefit from the Attention Mechanism -- 4.6 Error Analysis -- 5 Conclusion and Future Work -- References -- Unpaired Multimodal Neural Machine Translation via Reinforcement Learning -- 1 Introduction -- 2 Background -- 3 Methodology -- 3.1 Problem Definition -- 3.2 Overview -- 3.3 Reward Computation -- 3.4 Objective Function -- 3.5 Training Details -- 4 Experiments -- 4.1 Datasets.
4.2 Baseline Methods -- 4.3 Implementation Details -- 4.4 Main Results -- 4.5 Impact of Hyper-parameter -- 4.6 Case Study -- 5 Related Work -- 6 Conclusion and Future Work -- References -- Multimodal Named Entity Recognition with Image Attributes and Image Knowledge -- 1 Introduction -- 2 Related Work -- 2.1 Traditional NER with Text only -- 2.2 MNER with Image and Text -- 2.3 Other Multimodal Tasks -- 3 Our Proposed Model -- 3.1 Problem Formulation -- 3.2 Introducing Image Attributes and Knowledge -- 3.3 Feature Extraction -- 3.4 Modality Fusion -- 3.5 Conditional Random Fields -- 4 Experiments -- 4.1 Dataset -- 4.2 Implementation Details -- 4.3 Baselines -- 4.4 Results and Discussion -- 4.5 Bad Case Analysis -- 5 Conclusions -- References -- Multi-task Neural Shared Structure Search: A Study Based on Text Mining -- 1 Introduction -- 2 Our Approach -- 2.1 Multi-task Shared Structure Encoding (SSE) -- 2.2 Shared Structure and Auxiliary Task Search -- 2.3 Variant of Vanilla NAS Approach -- 2.4 m-Sparse Search Approach for Neural-Based Multi-task Model (m-S4MT) -- 2.5 Task-Wise Greedy Generation Search Approach for Neural-Based Multi-task Model (TGG-S3MT) -- 3 Experiments -- 3.1 Datasets -- 3.2 Experimental Settings -- 3.3 Q1: Are SSE and m-S4MT Effective? -- 3.4 Q2: Is TGG-S3MT Effective? -- 3.5 Q3: Which Search Approach Is More Efficient? -- 4 Related Work -- 4.1 Multi-task Methods in Text Mining -- 4.2 Network Architecture Search for Multi-task Models -- 4.3 Peer Review Prediction -- 5 Conclusion -- References -- A Semi-structured Data Classification Model with Integrating Tag Sequence and Ngram -- 1 Introduction -- 2 Related Works -- 3 TSGram Feature -- 3.1 Basic Definitions -- 3.2 Constructing a TSGram Feature Space -- 4 TSGram-Based Classifier -- 4.1 TSGrams Class Model -- 4.2 Classifying Documents Using the TSGrams Class Model.
5 Experimental Study -- 5.1 Experimental Setting -- 5.2 Effects of the Length and Numbers of TSGrams -- 5.3 Effects of TSGram Feature Selection Parameter and Feature Combination -- 5.4 Classification Results -- 6 Conclusions -- References -- Inferring Deterministic Regular Expression with Unorder and Counting -- 1 Introduction -- 2 Preliminaries -- 2.1 Regular Expression with Unorder and Counting -- 2.2 SORE, SOREUC, SOA and Unorder Unit -- 3 Finite Automaton with Unorder and Counting (FAUC) -- 3.1 Unorder Markers, Counters and Update Instructions -- 3.2 Finite Automata with Unorder and Counting -- 4 Inference of SOREUCs -- 4.1 Computing Unorder Units -- 4.2 Constructing FAUC -- 4.3 Running FAUC -- 4.4 Generating SOREUC -- 5 Experiments -- 5.1 Expressiveness of SOREUCs -- 5.2 Conciseness, Generalization Ability and Time Performance -- 6 Conclusion -- References -- MACROBERT: Maximizing Certified Region of BERT to Adversarial Word Substitutions -- 1 Introduction -- 2 Methods -- 2.1 Certified Region -- 2.2 Perturbation Distribution Based on Multi-Hop Neighbors -- 2.3 Robust Training by Maximizing Certified Region -- 3 Experiment -- 3.1 Experimental Data and Baselines -- 3.2 Results and Analysis -- 4 Conclusion -- References -- A Diversity-Enhanced and Constraints-Relaxed Augmentation for Low-Resource Classification -- 1 Introduction -- 2 Model Description -- 2.1 Transformer-Based Encoder -- 2.2 Language Model Layer -- 2.3 Classification Layer -- 2.4 K-Augmentation -- 2.5 Regularization -- 2.6 Training Process -- 3 Experiments -- 3.1 Experimental Settings -- 3.2 Main Results -- 3.3 Ablation Study -- 3.4 Importance of Diversity and Constraints -- 4 Conclusion -- References -- Neural Demographic Prediction in Social Media with Deep Multi-view Multi-task Learning -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Context View.
3.2 Sentiment View.
Authorized title: Database Systems for Advanced Applications
ISBN: 3-030-73197-9
Format: Printed material
Bibliographic level: Monograph
Language of publication: English
Record no.: 996464484703316
Find it here: Univ. di Salerno