A Hands-On Guide to Fine-Tuning Large Language Models with PyTorch and Hugging Face
Formats:
- Compatible with reading on My Vivlio (smartphone, tablet, computer)
- Compatible with reading on Vivlio e-readers
- For e-readers other than Vivlio, you must use the Adobe Digital Editions software. Not compatible with reading on Kindle, Remarkable, and Sony e-readers

- Format: ePub
- ISBN: 8227232182
- EAN: 9798227232182
- Publication date: 16/02/2025
- Digital protection: none
- Additional info: epub
- Publisher: Big Dog Books, LLC
Summary
If terms like Transformers, attention mechanisms, Adam optimizer, tokens, embeddings, or GPUs sound familiar, you're in the right place. Familiarity with Hugging Face and PyTorch is assumed. If you're new to these concepts, consider starting with a beginner-friendly introduction to deep learning with PyTorch before diving in.

What You'll Learn (illustrated with brief sketches after this list):
- Load quantized models using BitsAndBytes.
- Configure Low-Rank Adapters (LoRA) using Hugging Face's PEFT.
- Format datasets effectively using chat templates and formatting functions.
- Fine-tune LLMs on consumer-grade GPUs using techniques such as gradient checkpointing and gradient accumulation.
- Deploy LLMs locally in the GGUF format using Llama.cpp and Ollama.
- Troubleshoot common error messages and exceptions to keep your fine-tuning process on track.
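To give a flavor of the first step, here is a minimal sketch of loading a model in 4-bit precision with BitsAndBytes through Hugging Face Transformers. The model ID and quantization settings are illustrative assumptions, not the book's prescribed configuration.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with bfloat16 compute, a common memory-saving setup
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Any causal LM repo ID works here; this one is an illustrative choice
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
```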
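Attaching Low-Rank Adapters with Hugging Face's PEFT might then look like the following sketch, continuing from the model loaded above; the rank, scaling factor, and target module names are assumptions that vary by model architecture.

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections; names vary by model
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Wraps the quantized base model so that only the adapter weights are trained
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```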
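Formatting conversational data with a tokenizer's built-in chat template could look like this sketch; the two-turn conversation is made up for illustration.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

# A toy conversation in the role/content format that chat templates expect
messages = [
    {"role": "user", "content": "What does LoRA stand for?"},
    {"role": "assistant", "content": "Low-Rank Adaptation."},
]

# Renders the conversation into the model's expected prompt format
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
```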
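Gradient checkpointing and accumulation are typically just flags on the training configuration. A sketch using Transformers' TrainingArguments, with values chosen only to illustrate the memory/throughput trade-off on a consumer-grade GPU:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="finetune-out",
    per_device_train_batch_size=1,   # small micro-batch to fit in GPU memory
    gradient_accumulation_steps=8,   # accumulate to an effective batch size of 8
    gradient_checkpointing=True,     # recompute activations to save memory
    learning_rate=2e-4,
    num_train_epochs=1,
)
```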
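Finally, running an exported GGUF file locally through Llama.cpp's Python bindings (llama-cpp-python, a stand-in here for the Llama.cpp and Ollama workflows the book covers) might look like the sketch below; the file path and generation settings are placeholders.

```python
from llama_cpp import Llama

# Load a locally exported GGUF file (path is a placeholder)
llm = Llama(model_path="./model.gguf", n_ctx=2048)

# Generate a short completion from the local model
output = llm("Q: What is fine-tuning? A:", max_tokens=64)
print(output["choices"][0]["text"])
```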
This book doesn't just skim the surface; it zooms in on the critical adjustments and configurations, those all-important "knobs", that make or break the fine-tuning process.
By the end, you'll have the skills and confidence to fine-tune LLMs for your own real-world applications. Whether you're looking to enhance existing models or tailor them to niche tasks, this book is your essential companion.