Apache Hadoop - CareerVision Training
In Birmingham (United Kingdom)
£36 (approx. €40)
What will you learn in this course?
• Format: Download
• Duration: 7.5 hours (54 tutorial videos)
• Instructor: Rich Marrow
• Platform: Windows PC or Mac
• Date Released: 2014-04-30
You will start out by learning the basics of Hadoop, including the Hadoop run modes and job types and Hadoop in the cloud. You will then learn about the Hadoop Distributed File System (HDFS), including the HDFS architecture, the secondary NameNode, and access controls. This video tutorial also covers MapReduce, debugging basics, Hive and Pig basics, and Impala fundamentals. Finally, Rich will teach you how to import and export data.
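To give a flavour of the kind of MapReduce job the course walks through, here is a minimal word-count sketch in Python. This is an illustration only, not code from the course materials: it simulates Hadoop's map, shuffle/sort, and reduce phases in a single process.

```python
from itertools import groupby

def mapper(lines):
    """Map phase: emit a (word, 1) pair for every word on every line."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reducer(pairs):
    """Reduce phase: pairs arrive sorted by key (as after Hadoop's
    shuffle/sort); sum the counts for each distinct word."""
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

# Local simulation of the map -> sort -> reduce pipeline.
lines = ["the quick brown fox", "the lazy dog"]
counts = dict(reducer(sorted(mapper(lines))))
print(counts)  # each word mapped to its total count
```

In a real cluster, the sort step here is performed by Hadoop's shuffle phase, which routes and orders the intermediate pairs between the map and reduce tasks.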
Once you have completed this computer based training video, you will be fully capable of using the tools and functions you’ve learned to work successfully in Hadoop. Working files are included, allowing you to follow along with the author throughout the lessons.
In this project-based Learning Apache Hadoop video tutorial series, you'll quickly gain relevant skills for real-world applications. Follow along with our expert instructor in this training course to get:
• Concise, informative and broadcast-quality Learning Apache Hadoop training videos delivered to your desktop
• The ability to learn at your own pace with our intuitive, easy-to-use interface
• A quick grasp of even the most complex Learning Apache Hadoop subjects, because they're broken into simple, easy-to-follow tutorial videos
• Practical working files that further enhance the learning process and provide a degree of retention unmatched by any other form of Learning Apache Hadoop tutorial, online or offline... so you'll know the exact steps for your own projects.

Course outline:

01. Introduction
• What Is Big Data?
• About The Author
• Historical Approaches
• Big Data In The Modern World
• The Hadoop Approach
• Hadoop Hardware Requirements
• Hadoop Core Vs. Ecosystem
• Hadoopable Problems
• Hadoop Support Companies
• 0110 How To Access Your Working Files

02. Hadoop Basics
• HDFS And MapReduce
• Hadoop Run Modes And Job Types
• Hadoop Software Requirements And Recommendations
• Hadoop In The Cloud - Amazon Web Services
• 0205 Lab - Installing Hadoop From CDH With Cloudera Manager - Part 1
• 0206 Lab - Installing Hadoop From CDH With Cloudera Manager - Part 2
• 0207 Lab - Installing Hadoop From CDH With Cloudera Manager - Part 3
• 0208 Lab - Installing Hadoop From CDH With Cloudera Manager - Part 4
• 0209 Introduction To Hive And Pig Interface
• 0210 Installing Cloudera Quickstart VM

03. Hadoop Distributed File System (HDFS)
• 0301 HDFS Architecture
• 0302 HDFS File Write Walkthrough
• 0303 Secondary Name Node
• 0304 Lab - Using HDFS - Part 1
• 0305 Lab - Using HDFS - Part 2
• 0306 HA And Federation Basics
• 0307 HDFS Access Controls

04. MapReduce
• 0401 MapReduce Explained
• 0402 MapReduce Architecture
• 0403 MapReduce Code Walkthrough - Part 1
• 0404 MapReduce Code Walkthrough - Part 2
• 0405 MapReduce Job Walkthrough
• 0406 Rack Awareness
• 0407 Advanced MapReduce - Partitioners, Combiners, Comparators And More
• 0408 Partitioner Code Walkthrough
• 0409 Java Concerns

05. Logging And Debugging
• 0501 Debugging Basics
• 0502 Benchmarking With Teragen And Terasort

06. Hive, Pig, And Impala
• 0601 Comparing Hive, Pig And Impala
• 0602 Hive Basics
• 0603 Hive Patterns And Anti-Patterns
• 0604 Lab - Hive Basic Usage
• 0605 Pig Basics
• 0606 Pig Patterns And Anti-Patterns
• 0607 Lab - Pig Basic Usage
• 0608 Impala Fundamentals

07. Data Import And Export
• 0701 Import And Export Options
• 0702 Flume Introduction
• 0703 Lab - Using Flume
• 0704 HDFS Interaction Tools
• 0705 Sqoop Introduction
• 0706 Lab - Using Sqoop
• 0707 Oozie Introduction

08. Conclusion
• 0801 Wrap-Up
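The partitioner material in the MapReduce section can be sketched in a few lines: Hadoop's default HashPartitioner assigns each intermediate key to a reducer by hashing the key modulo the number of reducers, so every occurrence of a key lands on the same reducer. The following is a simplified Python illustration (not code from the course; Python's hash() stands in for Java's hashCode()):

```python
def hash_partition(key, num_reducers):
    """Mimics Hadoop's default HashPartitioner: the same key always
    maps to the same reducer, so all of its values can be summed
    in one place. Python's hash() stands in for Java's hashCode()."""
    return hash(key) % num_reducers

# Route some intermediate (word, 1) pairs into 3 reducer partitions.
pairs = [("fox", 1), ("the", 1), ("fox", 1), ("dog", 1)]
partitions = {r: [] for r in range(3)}
for key, value in pairs:
    partitions[hash_partition(key, 3)].append((key, value))

print(partitions)  # both ("fox", 1) pairs sit in the same partition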