
Foundation Models for Natural Language Processing : Pre-trained Language Models Integrating Media / by Gerhard Paaß, Sven Giesselbach




Author: Paaß, Gerhard
Title: Foundation Models for Natural Language Processing : Pre-trained Language Models Integrating Media / by Gerhard Paaß, Sven Giesselbach
Publication: Cham : Springer International Publishing : Imprint: Springer, 2023
Edition: 1st ed. 2023.
Physical description: 1 online resource
Discipline: 006.35
Topical subjects: Natural language processing (Computer science)
Computational linguistics
Artificial intelligence
Expert systems (Computer science)
Machine learning
Natural Language Processing (NLP)
Computational Linguistics
Artificial Intelligence
Knowledge Based Systems
Machine Learning
Classification: COM004000 COM025000 COM073000 LAN009000
Other authors: Giesselbach, Sven
Contents note: 1. Introduction -- 2. Pre-trained Language Models -- 3. Improving Pre-trained Language Models -- 4. Knowledge Acquired by Foundation Models -- 5. Foundation Models for Information Extraction -- 6. Foundation Models for Text Generation -- 7. Foundation Models for Speech, Images, Videos, and Control -- 8. Summary and Outlook.
Summary/abstract: This open access book provides a comprehensive overview of the state of the art in research and applications of Foundation Models and is intended for readers familiar with basic Natural Language Processing (NLP) concepts. In recent years, a revolutionary new paradigm has been developed for training models for NLP. These models are first pre-trained on large collections of text documents to acquire general syntactic knowledge and semantic information. Then, they are fine-tuned for specific tasks, which they can often solve with superhuman accuracy. When the models are large enough, they can be instructed by prompts to solve new tasks without any fine-tuning. Moreover, they can be applied to a wide range of different media and problem domains, ranging from image and video processing to robot control learning. Because they provide a blueprint for solving many tasks in artificial intelligence, they have been called Foundation Models. After a brief introduction to basic NLP models, the main pre-trained language models BERT, GPT, and sequence-to-sequence transformers are described, as well as the concepts of self-attention and context-sensitive embeddings. Then, different approaches to improving these models are discussed, such as expanding the pre-training criteria, increasing the length of input texts, or including extra knowledge. An overview of the best-performing models for about twenty application areas is then presented, e.g., question answering, translation, story generation, dialog systems, and generating images from text. For each application area, the strengths and weaknesses of current models are discussed, and an outlook on further developments is given. In addition, links are provided to freely available program code. A concluding chapter summarizes the economic opportunities, mitigation of risks, and potential developments of AI.
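The self-attention and context-sensitive embeddings mentioned in the abstract can be made concrete with a minimal sketch (not taken from the book; the function name, toy dimensions, and data are illustrative). It computes scaled dot-product attention, softmax(Q K^T / sqrt(d_k)) V, the operation at the core of BERT- and GPT-style transformers; in a real model Q, K, and V would be learned linear projections of the token embeddings.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Pairwise similarity of every query token with every key token,
        # scaled by sqrt(d_k) to keep the softmax numerically well-behaved.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)
        # Row-wise softmax turns the scores into attention weights.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)
        # Each output row is a weighted mix of value vectors: a
        # context-sensitive embedding for the corresponding token.
        return weights @ V

    rng = np.random.default_rng(0)
    X = rng.normal(size=(3, 4))   # 3 toy tokens, 4-dimensional embeddings
    out = scaled_dot_product_attention(X, X, X)
    print(out.shape)              # (3, 4): one context-sensitive vector per token

Because each output row mixes all value vectors according to the attention weights, the resulting embedding of a token depends on every other token in the sequence, which is what "context-sensitive" means here.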
Authorized title: Foundation models for natural language processing
ISBN: 9783031231902
3031231902
Format: Print material
Bibliographic level: Monograph
Language of publication: English
Record no.: 9910847154703321
Held at: Univ. Federico II
Series: Artificial Intelligence: Foundations, Theory, and Algorithms, ISSN 2365-306X