The main objective of this course is to help you understand the complex architecture of Hadoop and its components, point you in the right direction to get started, and have you working with Hadoop and its components quickly.
It covers everything you need as a Big Data beginner. Learn about the Big Data market, different job roles, technology trends, the history of Hadoop, HDFS, the Hadoop ecosystem, Hive, and Pig. In this course, we will see how a beginner should get started with Hadoop. The course includes many hands-on examples that will help you learn Hadoop quickly.
The course has six sections and focuses on the following topics:
Big Data at a Glance: Learn about Big Data and the different job roles required in the Big Data market. Know Big Data salary trends around the globe. Learn about the hottest technologies and their trends in the market.
Getting Started with Hadoop: Understand Hadoop and its complex architecture. Learn the Hadoop ecosystem with simple examples. Know the different versions of Hadoop (Hadoop 1.x vs. Hadoop 2.x), the different Hadoop vendors in the market, and Hadoop on the cloud. Understand how Hadoop uses the ELT approach. Learn how to install Hadoop on your machine. We will see how to run HDFS commands from the command line to manage HDFS.
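To give a flavor of what managing HDFS from the command line looks like, here is a small illustrative sketch (it assumes a working Hadoop installation, and the file and directory paths are hypothetical):

```shell
# List the contents of a directory in HDFS
hdfs dfs -ls /user/hadoop

# Copy a local file into HDFS (local and HDFS paths are hypothetical)
hdfs dfs -put sales.csv /user/hadoop/input/

# Print a file stored in HDFS
hdfs dfs -cat /user/hadoop/input/sales.csv
```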
Getting Started with Hive: Understand what kind of problems Hive solves in Big Data. Learn its architectural design and working mechanism. Know Hive's data models, the different file formats it supports, Hive queries, and more. We will see how to run queries in Hive.
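As a taste of what Hive queries look like, here is an illustrative sketch (the table name, columns, and HDFS location are hypothetical, and running it requires a Hive installation):

```sql
-- Create an external Hive table over comma-delimited files in HDFS
-- (table name, columns, and path are hypothetical)
CREATE EXTERNAL TABLE sales (
  order_id INT,
  product  STRING,
  amount   DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/hadoop/sales';

-- Total sales per product, written as familiar SQL
SELECT product, SUM(amount) AS total
FROM sales
GROUP BY product;
```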
Getting Started with Pig: Understand how Pig solves problems in Big Data. Learn its architectural design and working mechanism. Understand how Pig Latin works in Pig. You will learn the differences between SQL and Pig Latin. Includes demos of running different queries in Pig.
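To hint at how Pig Latin differs from SQL, here is an illustrative sketch of the same kind of aggregation a SQL GROUP BY would express, written as a sequence of transformation steps (relation names and the input path are hypothetical, and running it requires a Pig installation):

```pig
-- Load comma-delimited data from HDFS (path and schema are hypothetical)
sales   = LOAD '/user/hadoop/sales' USING PigStorage(',')
          AS (order_id:int, product:chararray, amount:double);

-- Group rows by product, then compute the total per group,
-- one named step at a time rather than one declarative query
grouped = GROUP sales BY product;
totals  = FOREACH grouped GENERATE group AS product, SUM(sales.amount) AS total;

DUMP totals;
```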
Use Cases: Real-life applications of Hadoop are really important for understanding Hadoop and its components, so we will learn by designing a sample data pipeline in Hadoop to process big data. You will also understand how companies are adopting a modern data architecture, the Data Lake, in their data infrastructure.
Practice: Practice with huge data sets. Learn design and optimization techniques by building data models and data pipelines with data sets from real-life applications.
Check out some reviews from real students:
"A nice learning experience for beginners. The thing that differentiates this course from other similar courses is its very effective and concise content, so even a layman can understand it easily. The course shows only 3 hours of on-demand video, but if you give time to each lecture (by bookmarking and pausing), you will be able to understand all the basics of Big Data and Hadoop."
"I liked the hands-on approach. very helpful."
"Overall definitely worth the money for what you get, I learnt so much about Big Data."
"I absolutely recommend taking this course."
"The presenter explains in simple terms, so any layperson, or someone like me who has no background in databases and data, can understand. Explaining the business use case application is very helpful in understanding how this can be useful for everyday business."
"Loved it. Saved lots of time searching information on the internet."
"Very informative, and the course gave me what I was looking for. Thanks!"
"A Big Data introduction can be daunting, with several new keywords and components that one needs to understand. But this course very clearly explains to a beginner the architecture and the different tools that can be leveraged in a big data project. It also covers the scope of big data in the industry, the different roles one can perform in the big data space, and the various commercial distributions of big data. Overall, a great course for a beginner to get started on the fundamentals of big data. The use case is a bonus!"
Requirements:
- Basic knowledge of SQL and RDBMS would be a plus
- A machine running macOS, Linux/Unix, or Windows
What you will learn:
- Understand different technology trends, salary trends, the Big Data market, and different job roles in Big Data
- Understand what Hadoop is for, and how it works
- Understand the complex architecture of Hadoop and its components
- Install Hadoop on your machine
- Understand how MapReduce, Hive, and Pig can be used to analyze big data sets
- High-quality documents
- Demos: Running HDFS commands, Hive queries, Pig queries
- Sample data sets and scripts (HDFS commands, Hive sample queries, Pig sample queries, Data Pipeline sample queries)
- Start writing your own code in Hive and Pig to process huge volumes of data
- Design your own data pipeline using Pig and Hive
- Understand modern data architecture: Data Lake
- Practice with Big Data sets
I am passionate about Data Engineering and the latest technologies, and I look forward to sharing my passion and knowledge with you.