Denis Rothman

Latest release
Transformers for Natural Language Processing and Computer Vision
Transformers for Natural Language Processing and Computer Vision, Third Edition, explores Large Language Model (LLM) architectures, applications, and the various platforms (Hugging Face, OpenAI, and Google Vertex AI) used for Natural Language Processing (NLP) and Computer Vision (CV). The book guides you from different transformer architectures to the latest Foundation Models and Generative AI. You'll pretrain and fine-tune LLMs and work through different use cases, from summarization to implementing question-answering systems with embedding-based search techniques.
You will also learn the risks of LLMs, from hallucinations and memorization to privacy, and how to mitigate those risks using moderation models with rule and knowledge bases. You'll implement Retrieval Augmented Generation (RAG) with LLMs to improve the accuracy of your models and gain greater control over LLM outputs. Dive into generative vision transformers and multimodal model architectures and build applications such as image-to-text and video-to-text classifiers.
Go further by combining different models and platforms and learning about AI agent replication. This book provides you with an understanding of transformer architectures, pretraining, fine-tuning, LLM use cases, and best practices.
Things you will learn:
- Pretrain and fine-tune LLMs
- Work with multiple platforms, such as Hugging Face, OpenAI, and Google Vertex AI
- Use different tokenizers and best practices for preprocessing language data
- Implement Retrieval Augmented Generation and rule bases to mitigate hallucinations
- Visualize transformer model activity for deeper insights using BertViz, LIME, and SHAP
- Create and implement cross-platform chained models, such as HuggingGPT
- Go in-depth into vision transformers with CLIP, DALL-E 2, DALL-E 3, and GPT-4V
Books by Denis Rothman
