Chapter 1: Introduction to Spark 3.1
Chapter Goal: The book's opening chapter introduces readers to the latest changes in PySpark and updates to the framework. It covers the different components of the Spark ecosystem. The chapter also doubles as an introduction to the book's format, explaining formatting practices, pointing to the book's accompanying online codebase, and giving support contact information. It sets readers' expectations for the content and structure of the rest of the book, and provides the list of required libraries and code/data download information so that readers can set up their environment appropriately.
No. of pages: 30
Sub-Topics:
1. Data status
2. Apache Spark evolution
3. Apache Spark fundamentals
4. Spark components
5. Setting up Spark 3.1
Chapter 2: Manage Data with PySpark
Chapter Goal:
This chapter covers the steps from reading data through pre-processing and cleaning it for machine learning. It showcases how to build end-to-end data-handling pipelines that transform data and create features for machine learning. It covers a simple way to use Koalas in order to leverage pandas in a distributed way on Spark. It also covers how to automate data scripts in order to run scheduled data jobs using Airflow.
No. of pages: 50
Sub-Topics:
1. Data ingestion
2. Data cleaning
3. Data transformation
4. End-to-end data pipelines
5. Data processing using Koalas on pandas DataFrames in Spark
6. Automating data workflows using Airflow
Chapter 3: Introduction to Machine Learning
Chapter Goal:
This chapter introduces readers to the basic fundamentals of machine learning. It covers the different categories of machine learning and the stages of the machine learning lifecycle. It also highlights how to extract model interpretation information in PySpark to understand the reasoning behind model predictions.
No. of pages: 25
Sub-Topics:
1. Supervised machine learning
2. Unsupervised machine learning
3. Model interpretation
4. Machine learning lifecycle
Chapter 4: Linear Regression with PySpark
Chapter Goal:
This chapter covers the fundamentals of linear regression. It then showcases the steps to build a feature engineering pipeline and fit a regression model using PySpark's latest machine learning library.
No. of pages: 20
Sub-Topics:
1. Introduction to linear regression
2. Feature engineering in PySpark
3. Model training
4. End-to-end pipeline for model prediction
Chapter 5: Logistic Regression with PySpark
Chapter Goal:
This chapter covers the fundamentals of logistic regression. It then showcases the steps to build a feature engineering pipeline and fit a logistic regression model on a customer dataset using the PySpark machine learning library.
No. of pages: 25
Sub-Topics:
1. Introduction to logistic regression
2. Feature engineering in PySpark
3. Model training
4. End-to-end pipeline for model prediction
Chapter 6: Ensembling with PySpark
Chapter Goal:
This chapter covers the fundamentals of ensembling methods, including bagging, boosting, and stacking. It then showcases the strengths of ensembling methods over other machine learning techniques. The final part covers the steps to build a feature engineering pipeline and fit a random forest model using the PySpark machine learning library.
No. of pages: 30
Sub-Topics:
1. Introduction to ensembling methods
2. Feature engineering in PySpark
3. Model training
4. End-to-end pipeline for model prediction
Chapter 7: Clustering with PySpark
Chapter Goal:
This chapter introduces the unsupervised side of machine learning: clustering. It covers the steps to build a feature engineering pipeline and run a customer segmentation exercise using the PySpark machine learning library.
No. of pages: 20
Sub-Topics:
1. Introduction to clustering
2. Feature engineering in PySpark
3. Segmentation using PySpark
Chapter 8: Recommendation Engine with PySpark
Chapter Goal:
This chapter focuses on the fundamentals of building scalable recommendation models. It introduces the different types of recommendation models in wide use and then showcases the steps to build a data pipeline and train a hybrid recommendation model using the PySpark machine learning library to make recommendations to customers.
No. of pages: 25
Sub-Topics:
1. Introduction to types of recommender systems
2. Deep dive into collaborative filtering
3. Building recommendation engine using PySpark
Chapter 9: Advanced Feature Engineering with PySpark
Chapter Goal:
This chapter covers how to handle sequential data, such as customer journeys, that can also be used for prediction. It also includes the use of the PCA technique to reduce the dimensional space to a handful of features. At the end, it showcases the use of MLflow to deploy Spark models in production.
No. of pages: 45
Sub-Topics:
1. Sequence embeddings for prediction
2. Dimensionality reduction
3. Model deployment in PySpark