Author: Xiao, Tong
Title: Machine translation : 18th China conference, CCMT 2022, Lhasa, China, August 6-10, 2022 : revised selected papers / Tong Xiao and Juan Pino
Publication: Singapore : Springer, [2022]
©2022
Physical description: 1 online resource (175 pages)
Discipline: 495.10285
Topical subject: Chinese language - Machine translating
Machine translating
Person (secondary responsibility): Pino, Juan
Contents note: Intro -- Preface -- Organization -- Contents -- PEACook: Post-editing Advancement Cookbook -- 1 Introduction -- 2 Related Work -- 2.1 APE Problem and APE Metrics -- 2.2 APE Baselines -- 3 PEACook Corpus -- 3.1 PEACook Corpus Details -- 4 Baseline Model Experiments -- 4.1 Pre-training AR-APE Model -- 4.2 Fine-Tuning AR-APE Model -- 4.3 Pre-training NAR-APE Model -- 4.4 Fine-Tuning NAR-APE Model -- 5 Conclusion -- References -- Hot-Start Transfer Learning Combined with Approximate Distillation for Mongolian-Chinese Neural Machine Translation -- 1 Introduction -- 2 Background -- 2.1 NMT -- 2.2 Transfer Learning -- 2.3 Pre-train Techniques -- 3 Methods -- 3.1 Word Alignment Under Hot-Start -- 3.2 Approximate Distillation -- 4 Experiment -- 4.1 Settings -- 4.2 Results and Analysis -- 4.3 Ablation Test -- 4.4 Case Analysis -- 5 Conclusion -- References -- Review-Based Curriculum Learning for Neural Machine Translation -- 1 Introduction -- 2 Related Work -- 3 Review-Based Curriculum Learning -- 3.1 Time-Based Review Method -- 3.2 Master-Based Review Method -- 3.3 General Domain Enhanced Training -- 4 Experiment -- 4.1 Data and Setup -- 4.2 Main Results -- 5 Analysis -- 5.1 Effect of Mixed Fine Tuning -- 5.2 Low-Resource Scenario -- 5.3 Data Sharding -- 5.4 Training Efficiency -- 6 Conclusion -- References -- Multi-strategy Enhanced Neural Machine Translation for Chinese Minority Languages -- 1 Introduction -- 2 Dataset -- 3 System Overview -- 3.1 Back-Translation -- 3.2 Alternated Training -- 3.3 Ensemble -- 4 Experiments -- 4.1 Mongolian-Chinese -- 4.2 Tibetan-Chinese -- 4.3 Uyghur-Chinese -- 5 Analysis -- 5.1 The Effect of Different Back-Translation Methods -- 5.2 The Impact of Sentence Segmentation on the Translation Quality of Machine Translation -- 5.3 Analysis of BLEU Scores of Mongolian-Chinese Machine Translation on the Development Set.
6 Conclusion -- References -- Target-Side Language Model for Reference-Free Machine Translation Evaluation -- 1 Introduction -- 2 Target-Side Language Model Metrics -- 3 Experiments -- 3.1 Datasets and Baselines -- 3.2 Results -- 3.3 Discussion -- 4 Conclusion -- References -- Life Is Short, Train It Less: Neural Machine Tibetan-Chinese Translation Based on mRASP and Dataset Enhancement -- 1 Introduction -- 2 Prerequisite -- 2.1 Neural Machine Translation with mRASP -- 2.2 Diversification Method -- 2.3 Curvature -- 3 Methodology -- 3.1 Overall Structure -- 3.2 Curvature Based Checkpoint Hijack -- 4 Experiments -- 4.1 Dataset Description and Finetune Parameters -- 4.2 Experiment Result -- 5 Conclusion -- References -- Improving the Robustness of Low-Resource Neural Machine Translation with Adversarial Examples -- 1 Introduction -- 2 Background and Related Work -- 2.1 Neural Machine Translation -- 2.2 Adversarial Example, Adversarial Attack and Adversarial Training in NLP -- 2.3 Genetic Algorithm-Based Adversarial Attack -- 2.4 Gradient-Based Adversarial Attack -- 3 Adversarial Examples Based on Reinforcement Learning -- 3.1 Reinforcement Learning -- 3.2 Environment -- 3.3 Agent -- 4 Experiment -- 4.1 Data Preprocessing -- 4.2 NMT Model -- 4.3 Evaluating Indicator -- 4.4 Adversarial Attack Results and Analysis -- 4.5 Adversarial Training Results and Analysis -- 4.6 Ablation Study -- 5 Conclusion -- References -- Dynamic Mask Curriculum Learning for Non-Autoregressive Neural Machine Translation -- 1 Introduction -- 2 Background -- 2.1 Non-autoregressive Neural Machine Translation -- 2.2 Curriculum Learning -- 3 Method -- 3.1 Model -- 3.2 Dynamic Mask Curriculum Learning -- 3.3 Train and Inference -- 4 Experiment -- 4.1 Data Preparation -- 4.2 Configuration -- 4.3 Baseline -- 4.4 Results -- 5 Analysis -- 5.1 Mask Strategy -- 5.2 Method Generality.
6 Conclusion -- References -- Dynamic Fusion Nearest Neighbor Machine Translation via Dempster-Shafer Theory -- 1 Introduction -- 2 Background -- 3 Method -- 3.1 Dempster-Shafer Theory -- 3.2 Label Smoothing -- 4 Experiment -- 4.1 Experimental Setup -- 4.2 Result and Analysis -- 4.3 Robustness -- 4.4 Case Study -- 5 Conclusion -- References -- A Multi-tasking and Multi-stage Chinese Minority Pre-trained Language Model -- 1 Introduction -- 2 Related Work -- 2.1 Pre-trained Language Model -- 2.2 Multilingual Model -- 2.3 Chinese Minority Languages -- 3 Main Methods -- 3.1 Model Architecture -- 3.2 Multi-tasking Multi-stage Pre-training -- 3.3 Model Parameter Details -- 3.4 Model Setting Details -- 4 Experiments -- 4.1 Main Results -- 4.2 Case Study -- 5 Conclusion -- References -- An Improved Multi-task Approach to Pre-trained Model Based MT Quality Estimation -- 1 Introduction -- 2 Related Works -- 3 PE Based Multi-task Learning for Sentence Level QE -- 3.1 Multi-task Learning Framework for QE -- 3.2 PE Based Multi-task Learning QE -- 3.3 Multi-model Ensemble -- 4 Experiments -- 4.1 Dataset -- 4.2 Model Training and Evaluation Metric -- 4.3 Experimental Results and Analysis -- 4.4 Ablation Study -- 5 Conclusion -- References -- Optimizing Deep Transformers for Chinese-Thai Low-Resource Translation -- 1 Introduction -- 2 Background -- 2.1 Transformer -- 2.2 Low-Resource NMT -- 2.3 Parameter Initialization for Deep Transformers -- 2.4 Deep Transformers for Low-Resource Tasks -- 3 Our Work -- 3.1 Data Processing -- 3.2 Exploration of Training Settings -- 3.3 Deep Transformers for Low-Resource Machine Translation -- 4 Related Work -- 5 Conclusion -- References -- CCMT 2022 Translation Quality Estimation Task -- 1 Introduction -- 2 Estimation System -- 3 Data -- 4 Method -- 4.1 System Training -- 4.2 System Test -- 5 Experiment -- 5.1 System Environment.
5.2 Experiment Settings -- 5.3 Experiment Result -- 6 Conclusion -- References -- Effective Data Augmentation Methods for CCMT 2022 -- 1 Introduction -- 2 System Architecture -- 3 Methods -- 3.1 Data Augmentation -- 3.2 C-E Task and E-C Task -- 3.3 C-Thai Task and Thai-C Task -- 4 Experiments -- 4.1 System Settings -- 4.2 Data Pre-processing -- 4.3 Experimental Results -- 5 Conclusion -- References -- NJUNLP's Submission for CCMT 2022 Quality Estimation Task -- 1 Introduction -- 2 Methods -- 2.1 Existing Methods -- 2.2 Proposed Methods -- 3 Experiments -- 3.1 Dataset -- 3.2 Settings -- 3.3 Single Model Results -- 3.4 Ensemble -- 3.5 Analysis -- 4 Conclusion -- References -- ISTIC's Thai-to-Chinese Neural Machine Translation System for CCMT 2022 -- 1 Introduction -- 2 System Architecture -- 2.1 Baseline System -- 2.2 Our System -- 3 Methods -- 3.1 Back Translation -- 3.2 Add External Data -- 3.3 Model Averaging -- 3.4 Model Ensemble Strategy -- 4 Experiments -- 4.1 System Settings -- 4.2 Data Preprocessing -- 4.3 Experimental Results -- 4.4 Conclusion -- References -- Author Index.
Authorized title: Machine translation
ISBN: 981-19-7960-X
Format: Printed material
Bibliographic level: Monograph
Language of publication: English
Record no.: 996503566103316
Held at: Univ. di Salerno