Apache Sqoop Cookbook
Permanently unavailable
This title can no longer be ordered on our site (the book is out of print or no longer sold). The publisher may nonetheless print a new edition of this work in the future, so we invite you to check back on our site periodically.
Available in your Decitre or Furet du Nord customer account as soon as your order is confirmed. The Multi-format edition is:
- For e-readers other than Vivlio, you must use the Adobe Digital Editions software. Not compatible with reading on Kindle, Remarkable, and Sony e-readers.

Our digital reading platform partner, where you will find all of your ebooks free of charge.
To learn more about our ebooks, see our online help here.
- Number of pages: 94
- Format: Multi-format
- ISBN: 978-1-4493-6457-1
- EAN: 9781449364571
- Publication date: July 2, 2013
- Digital protection: not specified
- Additional information: Multi-format including PDF without p...
- Publisher: O'Reilly Media
Summary
Integrating data from multiple sources is essential in the age of big data, but it can be a challenging and time-consuming task. This handy cookbook provides dozens of ready-to-use recipes for using Apache Sqoop, the command-line interface application that optimizes data transfers between relational databases and Hadoop.
Sqoop is both powerful and bewildering, but with this cookbook's problem-solution-discussion format, you'll quickly learn how to deploy and then apply Sqoop in your environment.
The authors provide MySQL, Oracle, and PostgreSQL database examples on GitHub that you can easily adapt for SQL Server, Netezza, Teradata, or other relational systems.
- Transfer data from a single database table into your Hadoop ecosystem
- Keep table data and Hadoop in sync by importing data incrementally
- Import data from more than one database table
- Customize transferred data by calling various database functions
- Export generated, processed, or backed-up data from Hadoop to your database
- Run Sqoop within Oozie, Hadoop's specialized workflow scheduler
- Load data into Hadoop's data warehouse (Hive) or database (HBase)
- Handle installation, connection, and syntax issues common to specific database vendors
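The book's recipes are driven from the `sqoop` command line, but the same tools can also be invoked programmatically from Java. As a rough illustration of the single-table import mentioned in the list above, here is a minimal sketch using the Sqoop 1.x client entry point (`org.apache.sqoop.Sqoop.runTool`); the JDBC URL, credentials, table name, and HDFS directory below are placeholder values, not examples taken from the book.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.sqoop.Sqoop;

public class SingleTableImport {
    public static void main(String[] args) {
        // Placeholder connection details -- substitute your own database, table, and HDFS path.
        String[] importArgs = {
            "import",
            "--connect", "jdbc:mysql://localhost/sqoop_demo",
            "--username", "sqoop_user",
            "--password", "sqoop_pass",
            "--table", "cities",
            "--target-dir", "/user/demo/cities"
        };
        // runTool dispatches to the same import tool that the `sqoop import` command runs,
        // copying the table's rows into files under the given HDFS directory.
        int exitCode = Sqoop.runTool(importArgs, new Configuration());
        System.exit(exitCode);
    }
}
```

The equivalent command-line invocation is `sqoop import --connect ... --table ... --target-dir ...`; the cookbook builds on that form for incremental imports, exports, and Hive/HBase loads.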