Apache Spark is currently one of the most popular systems for processing big data.
Apache Hadoop continues to be used by many organisations that store data on premises. Hadoop allows these organisations to store big datasets efficiently, ranging in size from gigabytes to petabytes.
As the number of vacancies for data science, big data analysis and data engineering roles continues to grow, so too will the demand for individuals who possess knowledge of Spark and Hadoop technologies to fill these vacancies.
This course has been designed specifically for data scientists, big data analysts and data engineers looking to leverage the power of Hadoop and Apache Spark to make sense of big data.
This course will help those looking to interactively analyse big data, or to begin writing production applications that prepare data for further analysis, using Spark SQL in a Hadoop environment.
The course is also well suited for university students and recent graduates who are keen to gain exposure to Spark & Hadoop, or anyone who simply wants to apply their SQL skills in a big data environment using Spark SQL.
This course has been designed to be concise: it gives students just enough theory to use Hadoop & Spark effectively, without getting bogged down in older low-level APIs such as RDDs.
By solving the problems in this course, students will develop the skills and confidence needed to handle the real-world scenarios that come their way in a production environment.
(a) There are just under 30 problems in this course, covering HDFS commands, basic data engineering tasks and data analysis.
(b) Fully worked solutions to all the problems are included.
(c) Also included is the Verulam Blue virtual machine, an environment with a Spark Hadoop cluster already installed so that you can practice working on the problems.
- The VM contains a Spark Hadoop environment that lets students read and write data to & from the Hadoop file system, as well as store tables in the Hive metastore.
- All the datasets students will need for the problems are already loaded onto HDFS, so there is no need for students to do any extra work.
- The VM also has Apache Zeppelin installed. This is a web-based notebook with built-in Spark support, similar to Python’s Jupyter notebook.
This course will allow students to get hands-on experience working in a Spark Hadoop environment as they practice:
- Converting data values stored in one format in HDFS into new data values or a new data format, and writing the results back into HDFS.
- Loading data from HDFS for use in Spark applications & writing the results back into HDFS using Spark.
- Reading and writing files in a variety of file formats.
- Performing standard extract, transform, load (ETL) processes on data using the Spark API.
- Using metastore tables as an input source or an output sink for Spark applications.
- Applying the fundamentals of querying datasets in Spark.
- Filtering data using Spark.
- Writing queries that calculate aggregate statistics.
- Joining disparate datasets using Spark.
- Producing ranked or sorted data.
Introduction to Hadoop & Spark
Our Working Environment
HDFS Basic File Management
- Interacting with HDFS
- The File System Shell (FS Shell)
- Commands and operations: -help
- Commands and operations: -ls
- Commands and operations: -find
- Commands and operations: -mkdir
- Commands and operations: -put
- Commands and operations: -cp, -mv
- Commands and operations: -cat, -tail, -text
- Commands and operations: -rmdir, -rm
- Commands and operations: -get
- Health warning
- HDFS Basic File Management - Problems & Solutions

Data Structures
Spark SQL & Creating Data Structures
Basic Operations on Data Structures
Data Engineering
- Section Introduction
- The ETL Process
- The Extract Phase of an ETL process
- The Extract Phase - Loading CSV and Text files
- The Extract Phase - Loading JSON and Parquet files
- The Extract Phase - Loading Avro and ORC files
- The Transform Phase of an ETL process
- The Transform Phase - String Transformations
- The Transform Phase - Numerical Transformations
- The Transform Phase - Date & Time Transformations
- The Transform Phase - Data Type Transformations
- The Transform Phase - Transformations of Nulls
- The Load Phase of an ETL process
- The Load Phase - Saving DataFrame data to Files I
- The Load Phase - Saving DataFrame data to Files II
- The Load Phase - Saving DataFrame data to Tables
- Data Engineering - Solutions to Problems

Data Analysis
- Section Introduction
- Metastore Tables as Input Sources or Output Sinks
- Querying datasets in Spark
- Math Functions in SQL
- Filtering
- Sorting & Ranking
- Aggregation
- Grouping
- Multi Table Queries
- Multi Table Queries - Joins
- Multi Table Queries - Unions
- Multi Table Queries - Types of Joins
- Data Analysis - Solutions to Problems

End of Course Test Solutions