LEADER 00876cam 2200253 c 450
001 996424547603316
005 20210728135320.0
100 $a20210706d1943----km y0itay5003 ba
101 0 $aspa
102 $aVE
105 $ay 00 y
200 1 $a<> trascendencia de la actividad de los escritores españoles e hispanoamericanos en Londres, de 1810 a 1830$fPedro Grases
210 $aCaracas$cElite$d1943
215 $a79 p.$d21 cm
606 0 $aScrittori spagnoli [e] Scrittori ispano-americani$yLondra$z1810-1830$2BNCF
676 $a860.9005
700 1$aGRASES,$bPedro$0168186
801 0$aIT$bcba$gREICAT
912 $a996424547603316
951 $aVI.7.B. 2164$bISLA$cVI.7.$d540033
959 $aBK
969 $aISLA
996 $aTrascendencia de la actividad de los escritores españoles e hispanoamericanos en Londres, de 1810 a 1830$91834640
997 $aUNISA

LEADER 11146nam 2200541 450
001 996490354703316
005 20231110224431.0
010 $a3-031-17120-9
035 $a(CKB)5840000000091731
035 $a(MiAaPQ)EBC7101851
035 $a(Au-PeEL)EBL7101851
035 $a(PPN)264953363
035 $a(EXLCZ)995840000000091731
100 $a20230223d2022 uy 0
101 0 $aeng
135 $aurcnu||||||||
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 00$aNatural language processing and Chinese computing $e11th CCF international conference, NLPCC 2022, Guilin, China, September 24-25, 2022, proceedings, Part I /$fedited by Wei Lu [and three others]
210 1$aCham, Switzerland :$cSpringer,$d[2022]
210 4$d©2022
215 $a1 online resource (878 pages)
225 1 $aLecture Notes in Computer Science ;$vv.13551
311 $a3-031-17119-5
320 $aIncludes bibliographical references and index.
327 $aIntro -- Preface -- Organization -- Contents - Part I -- Contents - Part II -- Fundamentals of NLP (Oral) -- Exploiting Word Semantics to Enrich Character Representations of Chinese Pre-trained Models -- 1 Introduction -- 2 Related Work -- 3 Multiple Word Segmentation Aggregation -- 4 Projecting Word Semantics to Character Representation -- 4.1 Integrating Word Embedding to Character Representation -- 4.2 Mixing Character Representations Within a Word -- 4.3 Fusing New Character Embedding to Sentence Representation -- 5 Experimental Setup -- 5.1 Tasks and Datasets -- 5.2 Baseline Models -- 5.3 Training Details -- 6 Results and Analysis -- 6.1 Overall Results -- 6.2 Ablation Study -- 6.3 Case Study -- 7 Conclusion -- References -- PGBERT: Phonology and Glyph Enhanced Pre-training for Chinese Spelling Correction -- 1 Introduction -- 2 Related Work -- 3 Our Approach -- 3.1 Problem and Motivation -- 3.2 Model -- 4 Experiment -- 4.1 Pre-training -- 4.2 Fine Tuning -- 4.3 Parameter Setting -- 4.4 Baseline Models -- 4.5 Main Results -- 4.6 Ablation Experiments -- 5 Conclusions -- References -- MCER: A Multi-domain Dataset for Sentence-Level Chinese Ellipsis Resolution -- 1 Introduction -- 2 Definition of Ellipsis -- 2.1 Ellipsis for Chinese NLP -- 2.2 Explanations -- 3 Dataset -- 3.1 Annotation -- 3.2 Dataset Analysis -- 3.3 Annotation Format -- 3.4 Considerations -- 4 Experiments -- 4.1 Baseline Methods -- 4.2 Evaluation Metrics -- 4.3 Results -- 5 Conclusion -- References -- Two-Layer Context-Enhanced Representation for Better Chinese Discourse Parsing -- 1 Introduction -- 2 Related Work -- 3 Model -- 3.1 Basic Principles of Transition-Based Approach -- 3.2 Bottom Layer of Enhanced Context Representation: Intra-EDU Encoder with GCN -- 3.3 Upper Layer of Enhanced Context Representation: Inter-EDU Encoder with Star-Transformer -- 3.4 SPINN-Based Decoder.
327 $a3.5 Training Loss -- 4 Experiments -- 4.1 Experimental Settings -- 4.2 Overall Experimental Results -- 4.3 Compared with Other Parsing Framework -- 5 Conclusion -- References -- How Effective and Robust is Sentence-Level Data Augmentation for Named Entity Recognition? -- 1 Introduction -- 2 Methodology -- 2.1 CMix -- 2.2 CombiMix -- 2.3 TextMosaic -- 3 Experiment -- 3.1 Datasets -- 3.2 Experimental Setup -- 3.3 Results of Effectiveness Evaluation -- 3.4 Study of the Sample Size After Data Augmentation -- 3.5 Results of Robustness Evaluation -- 3.6 Results of CCIR Cup -- 4 Conclusion -- References -- Machine Translation and Multilinguality (Oral) -- Random Concatenation: A Simple Data Augmentation Method for Neural Machine Translation -- 1 Introduction -- 2 Related Works -- 3 Approach -- 3.1 Vanilla Randcat -- 3.2 Randcat with Back-Translation -- 4 Experiment -- 4.1 Experimental Setup -- 4.2 Translation Performance -- 4.3 Analysis -- 4.4 Additional Experiments -- 5 Conclusions -- References -- Contrastive Learning for Robust Neural Machine Translation with ASR Errors -- 1 Introduction -- 2 Related Work -- 2.1 Robust Neural Machine Translation -- 2.2 Contrastive Learning -- 3 NISTasr Test Dataset -- 4 Our Approach -- 4.1 Overview -- 4.2 Constructing Perturbed Inputs -- 5 Experimentation -- 5.1 Experimental Settings -- 5.2 Experimental Results -- 5.3 Ablation Analysis -- 5.4 Effect on Hyper-Parameter -- 5.5 Case Study -- 6 Conclusion -- References -- An Enhanced New Word Identification Approach Using Bilingual Alignment -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Architecture -- 3.2 Multi-new Model -- 3.3 Bilingual Identification Algorithm -- 4 Experiment -- 4.1 Datasets -- 4.2 Results of Multi-new Model -- 4.3 Results of NEWBA-P Model and NEWBA-E Model -- 5 Conclusions -- References -- Machine Learning for NLP (Oral). 
327 $aMulti-task Learning with Auxiliary Cross-attention Transformer for Low-Resource Multi-dialect Speech Recognition -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Two Task Streams -- 3.2 Auxiliary Cross-attention -- 4 Experiment -- 4.1 Data -- 4.2 Settings -- 4.3 Experimental Results -- 5 Conclusions -- References -- Regularized Contrastive Learning of Semantic Search -- 1 Introduction -- 2 Related Work -- 3 Regularized Contrastive Learning -- 3.1 Task Description -- 3.2 Data Augmentation -- 3.3 Contrastive Regulator -- 3.4 Anisotropy Problem -- 4 Experiments -- 4.1 Datasets -- 4.2 Training Details -- 4.3 Results -- 4.4 Ablation Study -- 5 Conclusion -- A APPENDIX -- A.1 A Training Details -- References -- Kformer: Knowledge Injection in Transformer Feed-Forward Layers -- 1 Introduction -- 2 Knowledge Neurons in the FFN -- 3 Kformer: Knowledge Injection in FFN -- 3.1 Knowledge Retrieval -- 3.2 Knowledge Embedding -- 3.3 Knowledge Injection -- 4 Experiments -- 4.1 Dataset -- 4.2 Experiment Setting -- 4.3 Experiments Results -- 5 Analysis -- 5.1 Impact of Top N Knowledge -- 5.2 Impact of Layers -- 5.3 Interpretability -- 6 Related Work -- 7 Conclusion and Future Work -- References -- Doge Tickets: Uncovering Domain-General Language Models by Playing Lottery Tickets -- 1 Introduction -- 2 Background -- 2.1 Out-of-domain Generalization -- 2.2 Lottery Ticket Hypothesis -- 2.3 Transformer Architecture -- 3 Identifying Doge Tickets -- 3.1 Uncovering Domain-general LM -- 3.2 Playing Lottery Tickets -- 4 Experiments -- 4.1 Datasets -- 4.2 Models and Implementation -- 4.3 Main Comparison -- 5 Analysis -- 5.1 Sensitivity to Learning Variance -- 5.2 Impact of the Number of Training Domains -- 5.3 Existence of Domain-specific Manner -- 5.4 Consistency with Varying Sparsity Levels -- 6 Conclusions -- References. 
327 $aInformation Extraction and Knowledge Graph (Oral) -- BART-Reader: Predicting Relations Between Entities via Reading Their Document-Level Context Information -- 1 Introduction -- 2 Task Formulation -- 3 BART-Reader -- 3.1 Entity-aware Document Context Representation -- 3.2 Entity-Pair Representation -- 3.3 Relation Prediction -- 3.4 Loss Function -- 4 Experiments -- 4.1 Dataset -- 4.2 Experiment Settings -- 4.3 Main Results -- 4.4 Ablation Study -- 4.5 Cross-attention Attends on Proper Mentions -- 5 Related Work -- 6 Conclusion -- References -- DuEE-Fin: A Large-Scale Dataset for Document-Level Event Extraction -- 1 Introduction -- 2 Preliminary -- 2.1 Concepts -- 2.2 Task Definition -- 2.3 Challenges of DEE -- 3 Dataset Construction -- 3.1 Event Schema Construction -- 3.2 Candidate Data Collection -- 3.3 Annotation Process -- 4 Data Analysis -- 4.1 Overall Statics -- 4.2 Event Types and Argument Roles -- 4.3 Comparison with Existing Benchmarks -- 5 Experiment -- 5.1 Baseline -- 5.2 Evaluation Metric -- 5.3 Results -- 6 Conclusion -- References -- Temporal Relation Extraction on Time Anchoring and Negative Denoising -- 1 Introduction -- 2 Related Work -- 3 TAM: Time Anchoring Model for TRE -- 3.1 Mention Embedding Module -- 3.2 Multi-task Learning Module -- 3.3 Interval Anchoring Module -- 3.4 Negative Denoising Module -- 4 Experimentation -- 4.1 Datasets and Experimental Settings -- 4.2 Results -- 4.3 Ablation Study -- 4.4 Effects of Learning Curves -- 4.5 Case Study and Error Analysis -- 5 Conclusion -- References -- Label Semantic Extension for Chinese Event Extraction -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Event Type Detection -- 3.2 Label Semantic Extension -- 3.3 Event Extraction -- 4 Experiments -- 4.1 Dataset and Experiment Setup -- 4.2 Main Result -- 4.3 Ablation Study -- 4.4 Effect of Threshold -- 5 Conclusions. 
327 $aReferences -- QuatSE: Spherical Linear Interpolation of Quaternion for Knowledge Graph Embeddings -- 1 Introduction -- 2 Related Work -- 3 Proposed Model -- 3.1 Quaternion Background -- 3.2 QuatSE -- 3.3 Theoretical Analysis -- 4 Experiment -- 4.1 Datasets -- 4.2 Evaluation Protocol -- 4.3 Implementation Details -- 4.4 Baselines -- 5 Results and Analysis -- 5.1 Main Results -- 5.2 1-N, N-1 and Multiple-Relations Pattern -- 6 Conclusion -- References -- Entity Difference Modeling Based Entity Linking for Question Answering over Knowledge Graphs -- 1 Introduction -- 2 Related Work -- 2.1 Entity Representation -- 2.2 Model Architecture -- 3 Framework -- 3.1 Question Encoder -- 3.2 Entity Encoder -- 3.3 Mention Detection and Entity Disambiguation -- 4 Experiments -- 4.1 Model Comparison -- 4.2 Ablation Study -- 4.3 Case Study -- 5 Conclusion -- References -- BG-EFRL: Chinese Named Entity Recognition Method and Application Based on Enhanced Feature Representation -- 1 Introduction -- 2 Related Work -- 2.1 Chinese Named Entity Recognition -- 2.2 Embedding Representation -- 3 NER Model -- 3.1 Embedding Representation -- 3.2 Initialize the Graph Structure -- 3.3 Encoders -- 3.4 Feature Enhancer -- 3.5 Decoder -- 4 Experiments -- 4.1 Datasets and Metrics -- 4.2 Implementation Details -- 4.3 Comparison Methods -- 4.4 Results -- 5 Conclusion -- References -- TEMPLATE: TempRel Classification Model Trained with Embedded Temporal Relation Knowledge -- 1 Introduction -- 2 Related Work -- 3 Our Baseline Model -- 4 TEMPLATE Approach -- 4.1 Build Templates -- 4.2 Embedded Knowledge of TempRel Information -- 4.3 Train the Model with Embedded Knowledge of TempRel Information -- 5 Experiments and Results -- 5.1 Data-set -- 5.2 Experimental Setup -- 5.3 Main Results -- 5.4 Ablation Study and Qualitative Analysis -- 6 Conclusion -- References.
327 $aDual Interactive Attention Network for Joint Entity and Relation Extraction.
410 0$aLecture Notes in Computer Science
606 $aChinese language$xData processing$vCongresses
606 $aChinese language$xData processing
606 $aNatural language processing (Computer science)
615 0$aChinese language$xData processing
615 0$aNatural language processing (Computer science)
676 $a495.10285
702 $aLu$bWei
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a996490354703316
996 $aNatural Language Processing and Chinese Computing$91912515
997 $aUNISA
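
Note on the layout: every line in the records above is one UNIMARC field of the form "TAG indicators $subfields", where each "$" introduces a one-character subfield code (for example, $a carries the main entry element and $2 the source vocabulary in the 606 subject fields). The following Python sketch shows one way such an exported line could be split apart; parse_field and its splitting rules are illustrative assumptions about this particular line-per-field dump, not a general MARC parser.

import re

def parse_field(line):
    """Split one exported field line into (tag, indicators, subfields)."""
    # Tag is the leading three digits; everything between the tag and the
    # first "$" is treated as indicator characters (possibly blank).
    match = re.match(r"(\d{3})\s*([^$]*)\$(.*)", line)
    if not match:
        return None  # leader and control fields such as 001/005 carry no subfields
    tag, indicators, rest = match.groups()
    # Each "$"-delimited chunk starts with its one-character subfield code.
    subfields = [(part[0], part[1:]) for part in rest.split("$") if part]
    return tag, indicators.strip(), subfields

print(parse_field("606 0 $aScrittori spagnoli [e] Scrittori ispano-americani$yLondra$z1810-1830$2BNCF"))
# -> ('606', '0', [('a', 'Scrittori spagnoli [e] Scrittori ispano-americani'),
#                  ('y', 'Londra'), ('z', '1810-1830'), ('2', 'BNCF')])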