Hands-On Large Language Models: Language Understanding and Generation
By: Jay Alammar, Maarten Grootendorst
- Number of pages: 403
- Binding: Paperback
- Format: Large format
- Weight: 0.74 kg
- Dimensions: 17.5 cm × 23.0 cm × 2.0 cm
- ISBN: 978-1-0981-5096-9
- EAN: 9781098150969
- Publication date: 30/09/2024
- Publisher: O'Reilly
Summary
You'll understand how to use pretrained large language models for use cases like copywriting and summarization; create semantic search systems that go beyond keyword matching; and use existing libraries and pretrained models for text classification, search, and clustering. This book also helps you:
- Understand the architecture of Transformer language models that excel at text generation and representation
- Build advanced LLM pipelines to cluster text documents and explore the topics they cover
- Build semantic search engines that go beyond keyword search, using methods like dense retrieval and rerankers
- Explore how generative models can be used, from prompt engineering all the way to retrieval-augmented generation
- Gain a deeper understanding of how to train LLMs and optimize them for specific applications using generative model fine-tuning, contrastive fine-tuning, and in-context learning
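As an illustrative taste of one technique the summary mentions, below is a minimal dense-retrieval sketch in Python using the sentence-transformers library. The model name, corpus, and query are assumptions chosen for the example and are not taken from the book.

```python
# Minimal dense-retrieval sketch: embed documents and a query into the same
# vector space, then rank by similarity instead of keyword overlap.
# Model name, corpus, and query are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

corpus = [
    "Transformer models excel at text generation and representation.",
    "Dense retrieval matches queries to documents by embedding similarity.",
    "Keyword search only finds documents with exact term overlap.",
]

# Embed the corpus and the query with the same model
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode("How does semantic search work?", convert_to_tensor=True)

# Return the top-2 documents by cosine similarity to the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")
```

A reranker, also named in the summary, would typically be applied as a second stage that rescores these top hits with a more expensive cross-encoder model.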