Optimizing Databricks Workloads by Anirudh Kala


$50.99
Condition - Like New
Only 3 left

Summary

The book takes a hands-on approach to speeding up your Spark jobs and data processing by covering the implementation and associated methodologies that will have you up and running in no time. Developers working with Databricks and Spark will be able to put their knowledge to work with this practical guide to optimizing workloads.

Optimizing Databricks Workloads Summary

Optimizing Databricks Workloads: Harness the power of Apache Spark in Azure and maximize the performance of modern big data workloads by Anirudh Kala

Accelerate computations and make the most of your data effectively and efficiently on Databricks

Key Features
  • Understand Spark optimizations for big data workloads to maximize performance
  • Build efficient big data engineering pipelines with Databricks and Delta Lake
  • Efficiently manage Spark clusters for big data processing
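
To illustrate the kind of cluster management the features above refer to, here is a minimal sketch of an autoscaling cluster definition in the style of the Databricks Clusters API. The runtime version, node type, worker counts, and Spark settings are hypothetical placeholders, not recommendations from the book.

import json

# A hedged sketch of an autoscaling Databricks cluster definition using
# Clusters API style fields. All values are hypothetical placeholders.
cluster_spec = {
    "cluster_name": "etl-autoscaling-demo",            # hypothetical name
    "spark_version": "9.1.x-scala2.12",                # example Databricks runtime
    "node_type_id": "Standard_DS3_v2",                 # example Azure VM type
    "autoscale": {"min_workers": 2, "max_workers": 8}, # scale with load
    "autotermination_minutes": 30,                     # stop idle clusters to save cost
    "spark_conf": {
        # Example tuning: fewer shuffle partitions for modest data volumes.
        "spark.sql.shuffle.partitions": "64"
    },
}

# Print the JSON payload; in practice this would be submitted via the
# Clusters API or configured through the Databricks UI.
print(json.dumps(cluster_spec, indent=2))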
Book Description

Databricks is an industry-leading, cloud-based platform for data analytics, data science, and data engineering supporting thousands of organizations across the world in their data journey. It is a fast, easy, and collaborative Apache Spark-based big data analytics platform for data science and data engineering in the cloud.

Optimizing Databricks Workloads starts with a brief introduction to Azure Databricks and quickly moves on to the most important optimization techniques. The book covers how to select the optimal Spark cluster configuration for running big data processing workloads in Databricks, useful optimization techniques for Spark DataFrames, best practices for optimizing Delta Lake, and techniques to tune Spark jobs through Spark Core. It also walks through real-world scenarios in which optimizing workloads in Databricks has helped organizations increase performance and save costs across various domains.
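
To give a flavour of the DataFrame-level optimizations described above, here is a minimal PySpark sketch of a broadcast join, one common technique for avoiding expensive shuffles. The file paths, tables, and column names are hypothetical and not taken from the book.

from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("broadcast-join-sketch").getOrCreate()

# Hypothetical Delta tables: a large fact table and a small dimension table.
sales = spark.read.format("delta").load("/mnt/demo/sales")
countries = spark.read.format("delta").load("/mnt/demo/countries")

# Broadcasting the small dimension table ships it to every executor,
# so the large 'sales' DataFrame does not need to be shuffled for the join.
joined = sales.join(broadcast(countries), on="country_code", how="inner")

joined.groupBy("country_name").count().show()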

By the end of this book, you will be prepared with the necessary toolkit to speed up your Spark jobs and process your data more efficiently.

What you will learn
  • Get to grips with Spark fundamentals and the Databricks platform
  • Process big data using the Spark DataFrame API with Delta Lake
  • Analyze data using graph processing in Databricks
  • Use MLflow to manage machine learning life cycles in Databricks
  • Find out how to choose the right cluster configuration for your workloads
  • Explore file compaction and clustering methods to tune Delta tables (see the sketch after this list)
  • Discover advanced optimization techniques to speed up Spark jobs
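
The file compaction and clustering item above refers to Delta Lake maintenance commands such as OPTIMIZE and ZORDER BY. Here is a minimal sketch, assuming a hypothetical Delta table named 'events' with an 'event_date' column:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-compaction-sketch").getOrCreate()

# Compact many small files into larger ones and co-locate rows with similar
# 'event_date' values, which speeds up selective queries on that column.
spark.sql("OPTIMIZE events ZORDER BY (event_date)")

# Remove data files no longer referenced by the table (subject to the
# default retention period).
spark.sql("VACUUM events")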
Who this book is for

This book is for data engineers, data scientists, and cloud architects who have a working knowledge of Spark/Databricks and a basic understanding of data engineering principles. A working knowledge of Python is required, and some experience using SQL with PySpark and Spark SQL is beneficial.

About Anirudh Kala

Anirudh Kala is an expert in machine learning techniques, artificial intelligence, and natural language processing. He has helped multiple organizations run their large-scale data warehouses with quantitative research, natural language generation, data science exploration, and big data implementation. He has worked in every aspect of data analytics using the Azure data platform. Currently, he works as the director of Celebal Technologies, a data science boutique firm dedicated to large-scale analytics. Anirudh holds a computer engineering degree from the University of Rajasthan, and his work history features the likes of IBM and ZS Associates.

Anshul Bhatnagar is an experienced, hands-on data architect involved in the architecture, design, and implementation of data platform architectures and distributed systems. He has worked in the IT industry since 2015 in a range of roles such as Hadoop/Spark developer, data engineer, and data architect. He has also worked in many other sectors including energy, media, telecoms, and e-commerce. He is currently working for a data and AI boutique company, Celebal Technologies, in India. He is always keen to hear about new ideas and technologies in the areas of big data and AI, so look him up on LinkedIn to ask questions or just to say hi.

Sarthak Sarbahi is a certified data engineer and analyst with a wide technical breadth and a deep understanding of Databricks. His background has led him to a variety of cloud data services with an eye toward data warehousing, big data analytics, robust data engineering, data science, and business intelligence. Sarthak graduated with a degree in mechanical engineering.

Table of Contents
  1. Discovering Databricks
  2. Batch and Real-Time Processing in Databricks
  3. Learning about Machine Learning and Graph Processing in Databricks
  4. Managing Spark Clusters
  5. Big Data Analytics
  6. Databricks Delta Lake
  7. Spark Core
  8. Case Studies

Additional information

SKU: GOR013860130
ISBN-13: 9781801819077
ISBN-10: 1801819076
Title: Optimizing Databricks Workloads: Harness the power of Apache Spark in Azure and maximize the performance of modern big data workloads by Anirudh Kala
Condition: Used - Like New
Binding: Paperback
Publisher: Packt Publishing Limited
Publication date: 2021-08-13
Number of pages: 230
Book picture is for illustrative purposes only, actual binding, cover or edition may vary.
The book has been read, but looks new. The book cover has no visible wear, and the dust jacket is included if applicable. There are no missing or damaged pages, no tears, possibly very minimal creasing, no underlining or highlighting of text, and no writing in the margins.
