Big Data Hadoop Certification Training Course


This is a comprehensive Big Data Hadoop training course designed by industry experts around current industry job requirements to help you learn the Big Data Hadoop and Spark modules. It is an industry-recognized Big Data Hadoop certification training course that combines the Hadoop developer, Hadoop administrator, Hadoop testing, and analytics with Apache Spark training tracks. This Cloudera Hadoop and Spark training will prepare you to clear the Cloudera CCA175 Big Data certification.

What will you learn in this Big Data Hadoop online training?

  1. Fundamentals of Hadoop and YARN, and writing applications using them
  2. Setting up pseudo-node and multi-node clusters on Amazon EC2
  3. HDFS, MapReduce, Hive, Pig, Oozie, Sqoop, Flume, ZooKeeper and HBase
  4. Spark, Spark SQL, Spark Streaming, DataFrames, RDDs, GraphX and MLlib, and writing Spark applications
  5. Hadoop administration activities such as cluster management, monitoring, administration and troubleshooting
  6. Configuring ETL tools like Pentaho/Talend to work with MapReduce, Hive, Pig, etc.
  7. Testing Hadoop applications using MRUnit and other automation tools
  8. Working with Avro data formats
  9. Practicing real-life projects using Hadoop and Apache Spark
  10. Being equipped to clear the Big Data Hadoop certification

Who should take up this Big Data Hadoop online training?

  1. Programming Developers and System Administrators
  2. Experienced working professionals and Project Managers
  3. Big Data Hadoop Developers eager to learn other verticals like testing, analytics and administration
  4. Mainframe Professionals, Architects and Testing Professionals
  5. Business Intelligence, Data Warehousing and Analytics Professionals
  6. Graduates and undergraduates eager to learn Big Data

What are the prerequisites for taking up this Big Data Hadoop certification training?

There are no prerequisites for taking up this Big Data course and mastering Hadoop. However, a basic knowledge of UNIX, SQL and Java is helpful for learning Big Data Hadoop. At Edutech Skills, we provide a complimentary Linux and Java course with our Big Data certification training to brush up the required skills so that you are ready for the Hadoop learning path.

Why should you go for Big Data Hadoop online training?

  • Global Hadoop market to reach $84.6 billion in two years – Allied Market Research
  • The number of jobs for US data professionals will increase to 2.7 million per year – IBM
  • A Hadoop Administrator in the US can get a salary of $123,000 – Indeed

Big Data is the fastest growing and most promising technology for handling large volumes of data for data analytics. This Big Data Hadoop training will help you get up and running with the most in-demand professional skills. Almost all top MNCs are investing in Big Data Hadoop; hence, there is a huge demand for certified Big Data professionals. Our Big Data online training will help you learn Big Data and advance your career in the Big Data domain. Getting the Big Data certification from Edutech Skills can put you in a different league when it comes to applying for the best jobs. The Edutech Skills Big Data online course has been created with a complete focus on the practical aspects of Big Data Hadoop.

Module 01 – Hadoop Installation and Setup

1.1 The architecture of a Hadoop cluster
1.2 What is High Availability and Federation?
1.3 How to set up a production cluster?
1.4 Various shell commands in Hadoop
1.5 Understanding configuration files in Hadoop
1.6 Installing a single node cluster with Cloudera Manager
1.7 Understanding Spark, Scala, Sqoop, Pig, and Flume

Module 02 – Introduction to Big Data Hadoop and Understanding HDFS and MapReduce

2.1 Introducing Big Data and Hadoop
2.2 What is Big Data and where does Hadoop fit in?
2.3 Two important Hadoop ecosystem components, namely, MapReduce and HDFS
2.4 In-depth Hadoop Distributed File System – replication, block size, Secondary NameNode, High Availability – and in-depth YARN – ResourceManager and NodeManager
 

Hands-on Exercise:
 

1. HDFS working mechanism
2. Data replication process
3. How to determine the block size?
4. Understanding the DataNode and NameNode
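
As a reference for the HDFS exercises above, here is a minimal sketch that uses the Hadoop FileSystem API from Scala to inspect a file's block size, replication factor and block locations. The NameNode URI and file path are placeholders; on a real cluster they would come from your own HDFS setup.

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    object HdfsInspect {
      def main(args: Array[String]): Unit = {
        // core-site/hdfs-site settings are picked up from the classpath;
        // fs.defaultFS below is a placeholder for your NameNode address
        val conf = new Configuration()
        conf.set("fs.defaultFS", "hdfs://localhost:8020")

        val fs   = FileSystem.get(conf)
        val file = new Path("/user/training/sample.txt")   // hypothetical file

        val status = fs.getFileStatus(file)
        println(s"Block size  : ${status.getBlockSize} bytes")
        println(s"Replication : ${status.getReplication}")

        // Block locations show which DataNodes hold each block of the file
        fs.getFileBlockLocations(status, 0, status.getLen).foreach { loc =>
          println(s"Block at offset ${loc.getOffset} on hosts ${loc.getHosts.mkString(", ")}")
        }
        fs.close()
      }
    }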

Module 03 – Deep Dive in MapReduce

3.1 Learning the working mechanism of MapReduce
3.2 Understanding the mapping and reducing stages in MR
3.3 Various terminologies in MR like Input Format, Output Format, Partitioners, Combiners, Shuffle, and Sort
 

Hands-on Exercise:
 

1. How to write a WordCount program in MapReduce?
2. How to write a Custom Partitioner?
3. What is a MapReduce Combiner?
4. How to run a job in a local job runner
5. Deploying a unit test
6. What is a map-side join and a reduce-side join?
7. What is a tool runner?
8. How to use counters and perform dataset joining with map-side and reduce-side joins?
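
The WordCount exercise above is usually written in Java; since the Spark modules later in this course use Scala, the sketch below expresses the same mapper, combiner/reducer and driver in Scala against the Hadoop MapReduce API. Class names and input/output paths are illustrative only.

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path
    import org.apache.hadoop.io.{IntWritable, LongWritable, Text}
    import org.apache.hadoop.mapreduce.{Job, Mapper, Reducer}
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat

    // Mapper: emit (word, 1) for every token in the input line
    class TokenizerMapper extends Mapper[LongWritable, Text, Text, IntWritable] {
      private val one  = new IntWritable(1)
      private val word = new Text()
      override def map(key: LongWritable, value: Text,
                       ctx: Mapper[LongWritable, Text, Text, IntWritable]#Context): Unit =
        value.toString.split("\\s+").filter(_.nonEmpty).foreach { t =>
          word.set(t); ctx.write(word, one)
        }
    }

    // Reducer: sum the counts for each word
    class IntSumReducer extends Reducer[Text, IntWritable, Text, IntWritable] {
      override def reduce(key: Text, values: java.lang.Iterable[IntWritable],
                          ctx: Reducer[Text, IntWritable, Text, IntWritable]#Context): Unit = {
        var sum = 0
        values.forEach(v => sum += v.get())
        ctx.write(key, new IntWritable(sum))
      }
    }

    object WordCount {
      def main(args: Array[String]): Unit = {
        val job = Job.getInstance(new Configuration(), "word count")
        job.setJarByClass(classOf[TokenizerMapper])
        job.setMapperClass(classOf[TokenizerMapper])
        job.setCombinerClass(classOf[IntSumReducer])   // the combiner reuses the reducer
        job.setReducerClass(classOf[IntSumReducer])
        job.setOutputKeyClass(classOf[Text])
        job.setOutputValueClass(classOf[IntWritable])
        FileInputFormat.addInputPath(job, new Path(args(0)))    // input directory
        FileOutputFormat.setOutputPath(job, new Path(args(1)))  // output directory
        System.exit(if (job.waitForCompletion(true)) 0 else 1)
      }
    }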

Module 04 – Introduction to Hive

4.1 Introducing Hadoop Hive
4.2 Detailed architecture of Hive
4.3 Comparing Hive with Pig and RDBMS
4.4 Working with Hive Query Language
4.5 Creation of a database, table, group by and other clauses
4.6 Various types of Hive tables, HCatalog
4.7 Storing the Hive Results, Hive partitioning, and Buckets
 

Hands-on Exercise:
 

1. Database creation in Hive
2. Dropping a database
3. Hive table creation
4. How to change the database?
5. Data loading
6. Dropping and altering table
7. Pulling data by writing Hive queries with filter conditions
8. Table partitioning in Hive
9. What is a group by clause?
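
The exercises above are normally run from the Hive shell or Beeline; as a sketch, the snippet below issues equivalent HiveQL statements from Scala through a Hive-enabled SparkSession, which also previews the Hive-on-Spark topics covered in a later module. The database, table, column and HDFS path names are hypothetical.

    import org.apache.spark.sql.SparkSession

    object HiveBasics {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("HiveBasics")
          .enableHiveSupport()          // requires a Hive metastore on the cluster
          .getOrCreate()

        // Database and table creation
        spark.sql("CREATE DATABASE IF NOT EXISTS retail")
        spark.sql("USE retail")
        spark.sql(
          """CREATE TABLE IF NOT EXISTS orders (id INT, customer STRING, amount DOUBLE)
            |PARTITIONED BY (order_date STRING)
            |ROW FORMAT DELIMITED FIELDS TERMINATED BY ','""".stripMargin)

        // Data loading from a (hypothetical) HDFS file into one partition
        spark.sql(
          """LOAD DATA INPATH '/user/training/orders_2024.csv'
            |INTO TABLE orders PARTITION (order_date = '2024-01-01')""".stripMargin)

        // Queries with a filter condition and a GROUP BY clause
        spark.sql("SELECT * FROM orders WHERE amount > 100").show()
        spark.sql("SELECT customer, SUM(amount) AS total FROM orders GROUP BY customer").show()

        // Altering and dropping
        spark.sql("ALTER TABLE orders RENAME TO orders_archive")
        spark.sql("DROP TABLE IF EXISTS orders_archive")
        spark.sql("DROP DATABASE IF EXISTS retail CASCADE")
        spark.stop()
      }
    }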

Module 05 – Advanced Hive and Impala

5.1 Indexing in Hive
5.2 The Map-Side Join in Hive
5.3 Working with complex data types
5.4 The Hive user-defined functions
5.5 Introduction to Impala
5.6 Comparing Hive with Impala
5.7 The detailed architecture of Impala
 

Hands-on Exercise: 
 

1. How to work with Hive queries?
2. The process of joining the table and writing indexes
3. External table and sequence table deployment
4. Data storage in a different table

Module 06 – Introduction to Pig

6.1 Apache Pig introduction and its various features
6.2 Various data types and schema in Pig
6.3 The available functions in Pig; Bags, Tuples, and Fields
 

Hands-on Exercise: 
 

1. Working with Pig in MapReduce and local mode
2. Loading of data
3. Limiting data to 4 rows
4. Storing the data into files and working with Group By, Filter By, Distinct, Cross, and Split in Pig

Module 07 – Flume, Sqoop and HBase

7.1 Apache Sqoop introduction
7.2 Importing and exporting data
7.3 Performance improvement with Sqoop
7.4 Sqoop limitations
7.5 Introduction to Flume and understanding the architecture of Flume
7.6 What is HBase and the CAP theorem?
 

Hands-on Exercise: 
 

1. Working with Flume to generate sequence numbers and consume them
2. Using the Flume agent to consume Twitter data
3. Using Avro to create a Hive table
4. Avro with Pig
5. Creating a table in HBase
6. Deploying disable, scan, and enable operations on a table

Module 08 – Writing Spark Applications Using Scala

8.1 Using Scala for writing Apache Spark applications
8.2 Detailed study of Scala
8.3 The need for Scala
8.4 The concept of object-oriented programming
8.5 Executing the Scala code
8.6 Various classes in Scala like getters, setters, constructors, abstract, extending objects, overriding methods
8.7 The Java and Scala interoperability
8.8 The concept of functional programming and anonymous functions
8.9 Bobsrockets package and comparing the mutable and immutable collections
8.10 Scala REPL, Lazy Values, Control Structures in Scala, Directed Acyclic Graph (DAG), first Spark application using SBT/Eclipse, Spark Web UI, Spark in Hadoop ecosystem.
 

Hands-on Exercise:
 

1. Writing Spark application using Scala
2. Understanding the robustness of Scala for Spark real-time analytics operation
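
The module above covers Scala constructs that recur throughout the Spark lessons. The sketch below illustrates a class with a constructor, getter/setter-style members, inheritance with method overriding, and an anonymous function; the class and value names are made up for illustration.

    // A base class with a primary constructor
    class Vehicle(val name: String) {
      def describe(): String = s"Vehicle: $name"
    }

    // Extending a class and overriding a method
    class Rocket(name: String, private var _fuel: Int) extends Vehicle(name) {
      // Getter and setter written in the idiomatic Scala style
      def fuel: Int = _fuel
      def fuel_=(value: Int): Unit = { require(value >= 0); _fuel = value }

      override def describe(): String = s"Rocket: $name with ${_fuel} units of fuel"
    }

    object ScalaBasics {
      def main(args: Array[String]): Unit = {
        val falcon = new Rocket("Falcon", 100)
        falcon.fuel = 80                       // calls the fuel_= setter
        println(falcon.describe())

        // Anonymous (lambda) functions and simple functional programming
        val double: Int => Int = x => x * 2
        val result = List(1, 2, 3).map(double).filter(_ > 2)
        println(result)                        // List(4, 6)
      }
    }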

Module 09 – Use Case: Bobsrockets Package

9.1 Introduction to Scala packages and imports
9.2 The selective imports
9.3 The Scala test classes
9.4 Introduction to JUnit test class
9.5 The JUnit interface via a JUnit 3 suite for ScalaTest
9.6 Packaging of Scala applications in the directory structure
9.7 Examples of Spark Split and Spark Scala

Module 10 – Introduction to Spark

10.1 Introduction to Spark
10.2 How Spark overcomes the drawbacks of MapReduce
10.3 Understanding in-memory MapReduce
10.4 Interactive operations on MapReduce
10.5 Spark stack, fine-grained vs. coarse-grained updates, Spark Hadoop YARN, HDFS revision and YARN revision
10.6 The overview of Spark and how it is better than Hadoop
10.7 Deploying Spark without Hadoop
10.8 Spark history server and Cloudera distribution

Module 11 – Spark Basics

11.1 Spark installation guide
11.2 Spark configuration
11.3 Memory management
11.4 Executor memory vs. driver memory
11.5 Working with Spark Shell
11.6 The concept of resilient distributed datasets (RDD)
11.7 Learning to do functional programming in Spark
11.8 The architecture of Spark
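
As a companion to the configuration and memory topics above, the sketch below builds a SparkSession with an illustrative executor memory setting and creates a first RDD. The memory value and local master are placeholders for practice, not recommendations.

    import org.apache.spark.sql.SparkSession

    object SparkBasics {
      def main(args: Array[String]): Unit = {
        // Executor memory is set per application; driver memory is normally passed
        // to spark-submit (--driver-memory) because the driver JVM starts first.
        val spark = SparkSession.builder()
          .appName("SparkBasics")
          .master("local[*]")                       // local mode for practice
          .config("spark.executor.memory", "2g")    // placeholder value
          .getOrCreate()

        val sc = spark.sparkContext

        // The same two lines can be typed into spark-shell, where `sc` already exists
        val numbers = sc.parallelize(1 to 1000, numSlices = 4)
        println(s"Sum = ${numbers.sum()}, partitions = ${numbers.getNumPartitions}")

        spark.stop()
      }
    }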

Module 12 – Working with RDDs in Spark

12.1 Spark RDD
12.2 Creating RDDs
12.3 RDD partitioning
12.4 Operations and transformation in RDD
12.5 Deep dive into Spark RDDs
12.6 The RDD general operations
12.7 Read-only partitioned collection of records
12.8 Using the concept of RDD for faster and efficient data processing
12.9 RDD actions such as collect, count, collectAsMap and saveAsTextFile, and pair RDD functions
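
A compact sketch of the RDD operations listed above: creating an RDD with explicit partitioning, applying lazy transformations, and triggering execution with actions. The data is inlined so it runs locally, and the output directory is hypothetical.

    import org.apache.spark.{SparkConf, SparkContext}

    object RddOperations {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("RddOperations").setMaster("local[*]"))

        // Creating an RDD and controlling its partitioning
        val nums = sc.parallelize(1 to 20, 4)
        println(s"Partitions: ${nums.getNumPartitions}")

        // Transformations are lazy: nothing runs until an action is called
        val evens   = nums.filter(_ % 2 == 0)
        val squared = evens.map(n => n * n)

        // Actions trigger execution on the read-only, partitioned RDD
        println(squared.count())
        squared.collect().foreach(println)
        squared.saveAsTextFile("/tmp/squared-output")   // hypothetical output directory

        sc.stop()
      }
    }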

Module 13 – Aggregating Data with Pair RDDs

13.1 Understanding the concept of key-value pair in RDDs
13.2 Learning how Spark makes MapReduce operations faster
13.3 Various operations of RDD
13.4 MapReduce interactive operations
13.5 Fine-grained and coarse-grained updates
13.6 Spark stack
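
Following the key-value-pair topics above, here is a small sketch that aggregates a pair RDD with reduceByKey and contrasts it with groupByKey; the sales records are made up.

    import org.apache.spark.{SparkConf, SparkContext}

    object PairRddAggregation {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("PairRddAggregation").setMaster("local[*]"))

        // (product, amount) sales records - illustrative data
        val sales = sc.parallelize(Seq(("tv", 300.0), ("phone", 150.0), ("tv", 450.0), ("phone", 200.0)))

        // reduceByKey combines values per key on each partition first (map-side combine),
        // which is one reason Spark aggregations run faster than plain MapReduce jobs
        val totals = sales.reduceByKey(_ + _)
        totals.collect().foreach { case (product, total) => println(s"$product -> $total") }

        // groupByKey keeps every value per key; prefer reduceByKey when only aggregating
        val counts = sales.groupByKey().mapValues(_.size)
        counts.collect().foreach(println)

        sc.stop()
      }
    }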

Module 14 – Writing and Deploying Spark Applications

14.1 Comparing the Spark applications with Spark Shell
14.2 Creating a Spark application using Scala or Java
14.3 Deploying a Spark application
14.4 Scala-built applications
14.5 Creation of mutable lists, sets and set operations, lists, tuples, and list concatenation
14.6 Creating an application using SBT
14.7 Deploying an application using Maven
14.8 The web user interface of Spark application
14.9 A real-world example of Spark
14.10 Configuring Spark
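
To complement the SBT and deployment topics above, here is a minimal build.sbt sketch plus the corresponding package-and-submit commands (shown as comments). The project name, versions, main class and memory setting are placeholders to adapt to your own environment.

    // build.sbt - minimal sketch; versions are illustrative
    name         := "spark-demo"
    version      := "0.1.0"
    scalaVersion := "2.12.18"

    // "provided" because the cluster supplies Spark at runtime
    libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.3.2" % "provided"

    // Package and deploy (shell commands, shown here as comments):
    //   sbt package
    //   spark-submit --class com.example.WordCount \
    //     --master yarn --deploy-mode cluster \
    //     --executor-memory 2g \
    //     target/scala-2.12/spark-demo_2.12-0.1.0.jar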

Module 15 – Project Solution Discussion and Cloudera Certification Tips and Tricks

15.1 Working towards the solution of the Hadoop project
15.2 Its problem statements and the possible solution outcomes
15.3 Preparing for the Cloudera certifications
15.4 Points to focus on for scoring the highest marks
15.5 Tips for cracking Hadoop interview questions
 

Hands-on Exercise:
 

1. A project on a real-world, high-value Big Data Hadoop application
2. Getting the right solution based on the criteria set by the Edutech Skills team

Module 16 – Parallel Processing

16.1 Learning about Spark parallel processing
16.2 Deploying on a cluster
16.3 Introduction to Spark partitions
16.4 File-based partitioning of RDDs
16.5 Understanding of HDFS and data locality
16.6 Mastering the technique of parallel operations
16.7 Comparing repartition and coalesce
16.8 RDD actions
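
A short sketch contrasting repartition and coalesce as discussed above; the HDFS path is hypothetical, and the partition counts are placeholders.

    import org.apache.spark.sql.SparkSession

    object PartitioningDemo {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("PartitioningDemo").master("local[*]").getOrCreate()
        val sc = spark.sparkContext

        // File-based partitioning: roughly one partition per HDFS block, preserving data locality
        val logs = sc.textFile("hdfs:///data/server-logs/*.log")   // hypothetical path
        println(s"Initial partitions: ${logs.getNumPartitions}")

        // repartition triggers a full shuffle and can increase or decrease the partition count
        val wider = logs.repartition(16)

        // coalesce avoids a shuffle and is preferred when only reducing partitions
        val narrower = wider.coalesce(4)

        println(s"After repartition: ${wider.getNumPartitions}, after coalesce: ${narrower.getNumPartitions}")
        spark.stop()
      }
    }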

Module 17 – Spark RDD Persistence

17.1 The execution flow in Spark
17.2 Understanding the RDD persistence overview
17.3 Spark execution flow, and Spark terminology
17.4 Distributed shared memory vs. RDD
17.5 RDD limitations
17.6 Spark shell arguments
17.7 Distributed persistence
17.8 RDD lineage
17.9 Key-value pair operations and implicit conversions for sorting, such as countByKey, reduceByKey, sortByKey, and aggregateByKey
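
A brief sketch of RDD persistence, lineage and the pair-RDD operations listed above; the event data is inlined so it runs locally.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    object PersistenceDemo {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("PersistenceDemo").setMaster("local[*]"))

        val events = sc.parallelize(Seq(("error", 1), ("warn", 1), ("error", 1), ("info", 1)))

        // Persist an RDD that is reused, so its lineage is not recomputed for every action
        val cached = events.persist(StorageLevel.MEMORY_AND_DISK)

        println(cached.countByKey())                       // e.g. Map(error -> 2, warn -> 1, info -> 1)
        println(cached.reduceByKey(_ + _).collect().toList)
        println(cached.sortByKey().collect().toList)

        // aggregateByKey: zero value, within-partition combine, cross-partition merge
        val totals = cached.aggregateByKey(0)(_ + _, _ + _)
        println(totals.collect().toList)

        println(cached.toDebugString)                      // shows the RDD lineage
        cached.unpersist()
        sc.stop()
      }
    }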

Module 18 – Spark MLlib

18.1 Introduction to Machine Learning
18.2 Types of Machine Learning
18.3 Introduction to MLlib
18.4 Various ML algorithms supported by MLlib
18.5 Linear regression, logistic regression, decision tree, random forest, and K-means clustering techniques
 

Hands-on Exercise: 
 

1. Building a Recommendation Engine
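
A minimal sketch of a recommendation engine built with Spark MLlib's ALS (Alternating Least Squares) algorithm. The ratings here are hard-coded purely for illustration; the course project would read a real dataset instead.

    import org.apache.spark.ml.recommendation.ALS
    import org.apache.spark.sql.SparkSession

    object RecommenderSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("RecommenderSketch").master("local[*]").getOrCreate()
        import spark.implicits._

        // (userId, movieId, rating) - tiny made-up sample
        val ratings = Seq(
          (1, 10, 5.0f), (1, 20, 3.0f),
          (2, 10, 4.0f), (2, 30, 2.0f),
          (3, 20, 4.0f), (3, 30, 5.0f)
        ).toDF("userId", "movieId", "rating")

        // Collaborative filtering with ALS; hyperparameters are illustrative
        val als = new ALS()
          .setUserCol("userId")
          .setItemCol("movieId")
          .setRatingCol("rating")
          .setRank(5)
          .setMaxIter(10)
          .setRegParam(0.1)

        val model = als.fit(ratings)
        model.setColdStartStrategy("drop")

        // Top 2 movie recommendations for every user
        model.recommendForAllUsers(2).show(truncate = false)
        spark.stop()
      }
    }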

Module 19 – Integrating Apache Flume and Apache Kafka

19.1 Why Kafka and what is Kafka?
19.2 Kafka architecture
19.3 Kafka workflow
19.4 Configuring Kafka cluster
19.5 Operations
19.6 Kafka monitoring tools
19.7 Integrating Apache Flume and Apache Kafka
 

Hands-on Exercise: 
 

1. Configuring Single Node Single Broker Cluster
2. Configuring Single Node Multi Broker Cluster
3. Producing and consuming messages
4. Integrating Apache Flume and Apache Kafka
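
To mirror the produce/consume exercise above, here is a bare-bones Kafka producer sketch using the Kafka Java client from Scala; the broker address and topic name are placeholders from a single-node, single-broker setup, and a console consumer on the same topic would receive the messages.

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

    object SimpleProducer {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        props.put("bootstrap.servers", "localhost:9092")   // single-broker placeholder
        props.put("key.serializer",   "org.apache.kafka.common.serialization.StringSerializer")
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

        val producer = new KafkaProducer[String, String](props)
        (1 to 5).foreach { i =>
          // Send a few messages to a hypothetical topic
          producer.send(new ProducerRecord[String, String]("demo-topic", s"key-$i", s"message $i"))
        }
        producer.close()
      }
    }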

Module 20 – Spark Streaming

20.1 Introduction to Spark Streaming
20.2 Features of Spark Streaming
20.3 Spark Streaming workflow
20.4 Initializing StreamingContext, discretized Streams (DStreams), input DStreams and Receivers
20.5 Transformations on DStreams, output operations on DStreams, windowed operators and why they are useful
20.6 Important windowed operators and stateful operators

Hands-on Exercise: 
 

1. Twitter Sentiment analysis
2. Streaming using Netcat server
3. Kafka–Spark streaming
4. Spark–Flume streaming
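
For the Netcat streaming exercise above, the classic sketch below counts words arriving on a local socket (run `nc -lk 9999` in another terminal). The host, port and batch interval are placeholders.

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object SocketWordCount {
      def main(args: Array[String]): Unit = {
        // At least two threads: one for the receiver, one for processing
        val conf = new SparkConf().setAppName("SocketWordCount").setMaster("local[2]")
        val ssc  = new StreamingContext(conf, Seconds(5))        // 5-second batches

        // Input DStream from a socket fed by: nc -lk 9999
        val lines  = ssc.socketTextStream("localhost", 9999)
        val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
        counts.print()                                           // output operation on the DStream

        ssc.start()
        ssc.awaitTermination()
      }
    }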

Module 21 – Improving Spark Performance

21.1 Introduction to various variables in Spark like shared variables and broadcast variables
21.2 Learning about accumulators
21.3 The common performance issues
21.4 Troubleshooting the performance problems
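
A small sketch of the shared variables mentioned above: a broadcast variable for a read-only lookup table and an accumulator for counting records, using made-up data.

    import org.apache.spark.{SparkConf, SparkContext}

    object SharedVariables {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("SharedVariables").setMaster("local[*]"))

        // Broadcast: ship a small lookup table to every executor once, not with every task
        val countryNames = sc.broadcast(Map("IN" -> "India", "US" -> "United States"))

        // Accumulator: executors add to it; only the driver reads the result
        val unknownCodes = sc.longAccumulator("unknown country codes")

        val codes = sc.parallelize(Seq("IN", "US", "XX", "IN"))
        val named = codes.map { code =>
          countryNames.value.getOrElse(code, { unknownCodes.add(1); "unknown" })
        }

        named.collect().foreach(println)
        println(s"Unknown codes seen: ${unknownCodes.value}")
        sc.stop()
      }
    }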

Module 22 – Spark SQL and Data Frames

22.1 Learning about Spark SQL
22.2 The context of SQL in Spark for providing structured data processing
22.3 JSON support in Spark SQL
22.4 Working with XML data
22.5 Parquet files
22.6 Creating Hive context
22.7 Writing data frame to Hive
22.8 Reading JDBC files
22.9 Understanding the data frames in Spark
22.10 Creating Data Frames
22.11 Manual inferring of schema
22.12 Working with CSV files
22.13 Reading JDBC tables
22.14 Data frame to JDBC
22.15 User-defined functions in Spark SQL
22.16 Shared variables and accumulators
22.17 Learning to query and transform data in data frames
22.18 How data frames provide the benefits of both Spark RDDs and Spark SQL
22.19 Deploying Hive on Spark as the execution engine
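
A condensed sketch touching several of the DataFrame topics above: reading CSV and JSON, a user-defined function, querying through Spark SQL, and writing out over JDBC. All file paths, column names, the JDBC URL and the credentials are placeholders.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.udf

    object DataFrameTour {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("DataFrameTour").master("local[*]").getOrCreate()

        // Reading CSV (with schema inference) and JSON - paths are hypothetical
        val customers = spark.read.option("header", "true").option("inferSchema", "true")
          .csv("/data/customers.csv")
        val orders = spark.read.json("/data/orders.json")

        // A user-defined function used in the DataFrame API
        val upper   = udf((s: String) => if (s == null) null else s.toUpperCase)
        val cleaned = customers.withColumn("name", upper(customers("name")))

        // Querying and transforming through a temporary view
        cleaned.createOrReplaceTempView("customers")
        orders.createOrReplaceTempView("orders")
        val bigSpenders = spark.sql(
          "SELECT c.name, SUM(o.amount) AS total FROM customers c " +
          "JOIN orders o ON c.id = o.customer_id GROUP BY c.name HAVING SUM(o.amount) > 1000")

        // Writing a data frame out over JDBC (connection details are placeholders)
        val jdbcProps = new java.util.Properties()
        jdbcProps.put("user", "report")
        jdbcProps.put("password", "secret")
        bigSpenders.write.jdbc("jdbc:mysql://dbhost:3306/analytics", "big_spenders", jdbcProps)

        spark.stop()
      }
    }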

Module 23 – Scheduling/Partitioning

23.1 Learning about the scheduling and partitioning in Spark
23.2 Hash partition
23.3 Range partition
23.4 Scheduling within and around applications
23.5 Static partitioning, dynamic sharing, and fair scheduling
23.6 Map partition with index, the Zip, and GroupByKey
23.7 Spark master high availability, standby masters with ZooKeeper, single-node recovery with the local file system, and higher-order functions
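
To illustrate the partitioning topics above, here is a short sketch that applies a HashPartitioner to a pair RDD and uses mapPartitionsWithIndex to show where each key lands; the data and partition count are made up.

    import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}

    object CustomPartitioning {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("CustomPartitioning").setMaster("local[*]"))

        val visits = sc.parallelize(Seq(("home", 1), ("cart", 1), ("home", 1), ("checkout", 1)))

        // Hash partitioning: keys are assigned to partitions by hashCode modulo the partition count
        val hashed = visits.partitionBy(new HashPartitioner(4))
        println(s"Partitioner: ${hashed.partitioner}, partitions: ${hashed.getNumPartitions}")

        // mapPartitionsWithIndex shows which keys landed in which partition
        hashed.mapPartitionsWithIndex { (idx, iter) =>
          iter.map { case (k, _) => s"partition $idx -> $k" }
        }.collect().foreach(println)

        sc.stop()
      }
    }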

Module 24 – Hadoop Administration – Multi-node Cluster Setup Using Amazon EC2

24.1 Create a 4-node Hadoop cluster setup
24.2 Running the MapReduce Jobs on the Hadoop cluster
24.3 Successfully running the MapReduce code
24.4 Working with the Cloudera Manager setup

 

Hands-on Exercise:

 

1. The method to build a multi-node Hadoop cluster using an Amazon EC2 instance
2. Working with the Cloudera Manager

Module 25 – Hadoop Administration – Cluster Configuration

25.1 Overview of Hadoop configuration
25.2 The importance of Hadoop configuration files
25.3 The various parameters and values of configuration
25.4 The HDFS parameters and MapReduce parameters
25.5 Setting up the Hadoop environment
25.6 The Include and Exclude configuration files
25.7 The administration and maintenance of name node, data node directory structures, and files
25.8 What is a File system image?
25.9 Understanding Edit log
 

Hands-on Exercise:
 

1. The process of performance tuning in MapReduce

Module 26 – Hadoop Administration – Maintenance, Monitoring and Troubleshooting

26.1 Introduction to the checkpoint procedure, name node failure
26.2 How to ensure the recovery procedure, Safe Mode, Metadata and Data backup, various potential problems and solutions, what to look for and how to add and remove nodes
 

Hands-on Exercise:
 

1. How to go about ensuring the MapReduce File System Recovery for different scenarios
2. JMX monitoring of the Hadoop cluster
3. How to use the logs and stack traces for monitoring and troubleshooting
4. Using the Job Scheduler for scheduling jobs in the same cluster
5. Getting the MapReduce job submission flow
6. FIFO schedule
7. Getting to know the Fair Scheduler and its configuration

Module 27 – ETL Connectivity with Hadoop Ecosystem (Self-Paced)

27.1 How do ETL tools work in the Big Data industry?
27.2 Introduction to ETL and data warehousing
27.3 Working with prominent use cases of Big Data in the ETL industry
27.4 End-to-end ETL PoC showing Big Data integration with an ETL tool
 

Hands-on Exercise:
 

1. Connecting to HDFS from ETL tool
2. Moving data from Local system to HDFS
3. Moving data from DBMS to HDFS
4. Working with Hive with ETL Tool
5. Creating MapReduce job in ETL tool

Module 28 – Hadoop Application Testing

28.1 Importance of testing
28.2 Unit testing, Integration testing, Performance testing, Diagnostics, Nightly QA test, Benchmark and end-to-end tests, Functional testing, Release certification testing, Security testing, Scalability testing, Commissioning and Decommissioning of data nodes testing, Reliability testing, and Release testing

Module 29 – Roles and Responsibilities of Hadoop Testing Professional

29.1 Understanding the Requirement
29.2 Preparation of the Testing Estimation
29.3 Test cases, test data, test bed creation, test execution, defect reporting, defect retest, daily status report delivery, test completion, and ETL testing at every stage (HDFS, Hive and HBase) while loading the input (logs, files, records, etc.) using Sqoop/Flume, which includes but is not limited to data verification, reconciliation, user authorization and authentication testing (groups, users, privileges, etc.), and reporting defects to the development team or manager and driving them to closure
29.4 Consolidating all the defects and creating defect reports
29.5 Validating new features and issues in core Hadoop

Module 30 – Framework Called MRUnit for Testing of MapReduce Programs

30.1 Reporting defects to the development team or manager and driving them to closure
30.2 Consolidating all the defects and creating defect reports
30.3 Working with the MRUnit testing framework for MapReduce programs

Module 31 – Unit Testing

31.1 Automation testing using Oozie
31.2 Data validation using the Query Surge tool

Module 32 – Test Execution

32.1 Test plan for HDFS upgrade
32.2 Test automation and result

Module 33 – Test Plan Strategy and Writing Test Cases for Testing Hadoop Application

33.1 Test, install and configure