1. |
Record Nr. |
UNISA996390685003316 |
|
|
Author
Garrett, Walter
|
|
Title
Theorems [electronic resource] : evincing, that the subject of the fourth and fifth chapters of the Revelation, is the Church of England, B.L.E. With answers to objections. Humbly offered to the serious consideration of all enemies of the Church of England, dissenters and separatists. By Wal. Garrett, rector of Everly in Wiltshire: sometime fellow of Trinity College in Cambridge
|
|
|
|
|
|
|
Publication/distribution

[London : printed and are to be sold by the booksellers of London and Westminster, 1700?]
|
|
|
|
|
|
|
|
|
Language of publication
English


Format
Printed material


Bibliographic level
Monograph


General notes
Caption title.
Imprint from colophon; date of publication conjectured by Wing.
In two columns.
Reproduction of the original in St. David's University College Library, Lampeter, Wales.
|
2. |
Record Nr. |
UNINA9911015683903321 |
|
|
Author
Passban, Peyman
|
|
Title
Enhancing LLM Performance : Efficacy, Fine-Tuning, and Inference Techniques / edited by Peyman Passban, Andy Way, Mehdi Rezagholizadeh
|
|
|
|
|
|
|
Publication/distribution

Cham : Springer Nature Switzerland : Imprint: Springer, 2025
|
|
|
|
|
|
|
ISBN |
|
9783031857478 |
9783031857461 |
|
|
|
|
|
|
|
|
Edition
[1st ed. 2025.] |
|
|
|
|
|
Physical description
|
1 online resource (279 pages) |
|
|
|
|
|
|
Series

Machine Translation: Technologies and Applications, 2522-803X ; 7
|
|
|
|
|
|
Other authors (Persons)

Way, Andy
Rezagholizadeh, Mehdi
|
|
|
|
|
|
|
|
Subjects
|
Machine learning |
Natural language processing (Computer science) |
Machine Learning |
Natural Language Processing (NLP) |
|
|
|
|
|
|
|
|
Language of publication

English
|
|
|
|
|
|
Format
Printed material
|
|
|
|
|
Bibliographic level
Monograph
|
|
|
|
|
Contents note
|
Introduction and Fundamentals -- SPEED: Speculative Pipelined Execution for Efficient Decoding -- Efficient LLM Inference on CPUs -- KronA: Parameter-Efficient Tuning with Kronecker Adapter -- LoDA: Low-Dimensional Adaptation of Large Language Models -- Sparse Fine-Tuning for Inference Acceleration of Large Language Models -- TCNCA: Temporal CNN with Chunked Attention for Efficient Training on Long Sequences -- Class-Based Feature Knowledge Distillation -- On the Use of Cross-Attentive Fusion Techniques for Audio-Visual Speaker Verification -- An Efficient Clustering Algorithm for Self-Supervised Speaker Recognition -- Remaining Issues for AI. |
|
|
|
|
|
|
|
|
Summary/abstract
|
This book is a pioneering exploration of the state-of-the-art techniques that drive large language models (LLMs) toward greater efficiency and scalability. Edited by three distinguished experts, Peyman Passban, Mehdi Rezagholizadeh, and Andy Way, it presents practical solutions to the growing challenges of training and deploying these massive models. Drawing on the editors' combined experience across academia, research, and industry, it offers insights into the tools and strategies required to improve LLM performance while reducing computational demands. More than a technical guide, the book bridges the gap between research and real-world applications. Each chapter presents cutting-edge advancements in inference optimization, model architecture, and fine-tuning techniques, all designed to enhance the usability of LLMs in diverse sectors. Readers will find extensive discussions of the practical aspects of implementing and deploying LLMs in real-world scenarios. The book serves as a comprehensive resource for researchers and industry professionals, offering a balanced blend of in-depth technical insight and practical, hands-on guidance. It is a go-to reference for students and researchers in computer science and related subfields, including machine learning and computational linguistics.
|
|
|
|
|