Details for this torrent 

Passban P. Enhancing LLM Performance. Efficacy, Fine-Tuning,...2025
Type: Other > E-books
Files: 1
Size: 6.94 MiB (7273776 Bytes)
Uploaded: 2025-07-07 10:27:19 GMT
By: andryold1 (VIP)
Seeders: 41
Leechers: 4
Comments: 0
Info Hash: 0D58E6BF7F8A95FE886B40C41EB63EA56ECC203B

Textbook in PDF format

This book is a pioneering exploration of the state-of-the-art techniques that drive Large Language Models (LLMs) toward greater efficiency and scalability. Edited by three distinguished experts, Peyman Passban, Mehdi Rezagholizadeh, and Andy Way, it presents practical solutions to the growing challenges of training and deploying these massive models. Drawing on their combined experience across academia, research, and industry, the editors provide insights into the tools and strategies required to improve LLM performance while reducing computational demands.
This book is more than just a technical guide; it bridges the gap between research and real-world applications. Each chapter presents cutting-edge advancements in inference optimization, model architecture, and fine-tuning techniques, all designed to enhance the usability of LLMs across diverse sectors. Readers will find extensive discussion of the practical aspects of implementing and deploying LLMs in real-world scenarios. The book serves as a comprehensive resource for researchers and industry professionals, offering a balanced blend of in-depth technical insight and practical, hands-on guidance. It is a go-to reference for students and researchers in computer science and related subfields, including machine learning and computational linguistics.
The main theme of this book is efficiency, and the pivotal topic is “scale”. More specifically, in this volume we examine the reasons behind the substantial size of LLMs, investigate the intricacies of their design, and consider the consequent implications. We discuss the formidable challenges these models pose, as well as the unprecedented opportunities they offer. The discussion extends to technical considerations such as model training, dataset selection, and LLM architecture. In the introductory chapter, we lay out a roadmap for the journey ahead, detailing what readers can expect from each subsequent section and chapter of the book. We also provide the fundamentals necessary to understand LLMs, ensuring that, regardless of prior knowledge, readers have a solid foundation from which to explore the more advanced concepts throughout the book. The following chapters do not shy away from (sometimes quite deep) detail, as dissecting the intricacies of LLMs is critical to understanding this new paradigm.
Preface
Part I Fundamentals
Introduction and Fundamentals
Part II Inference Time Efficiency
SPEED: Speculative Pipelined Execution for Efficient Decoding
Efficient LLM Inference on CPUs
Part III Efficiency Techniques for Fine-Tuning
KronA: Parameter-Efficient Tuning with Kronecker Adapter
LoDA: Low-Dimensional Adaptation of Large Language Models
Sparse Fine-Tuning for Inference Acceleration of Large Language Models
Part IV Sequence Efficiency and Model Compression
TCNCA: Temporal CNN with Chunked Attention for Efficient Training on Long Sequences
Class-Based Feature Knowledge Distillation
Part V Efficiency Techniques for Smaller Scales
On the Use of Cross-Attentive Fusion Techniques for Audio-Visual Speaker Verification
An Efficient Clustering Algorithm for Self-Supervised Speaker Recognition
Part VI Conclusion
Remaining Issues for AI

Passban P. Enhancing LLM Performance. Efficacy, Fine-Tuning,...2025.pdf (6.94 MiB)