Optimizing Apache Spark on Databricks
This course will teach you how to optimize the performance of Spark clusters on Azure Databricks by identifying and mitigating data ingestion problems and performance bottlenecks.
What you'll learn
The Apache Spark unified analytics engine is an extremely fast and performant framework for big data processing. However, you might find that your Apache Spark code running on Azure Databricks still suffers from a number of issues. These could be due to the difficulty in ingesting data in a reliable manner from a variety of sources or due to performance issues that you encounter because of disk I/O, network performance, or computation bottlenecks.
In this course, Optimizing Apache Spark on Databricks, you will first explore and understand the issues that you might encounter when ingesting data into a centralized repository for data processing and insight extraction. Then, you will learn how Delta Lake on Azure Databricks lets you store data in Delta tables for processing, analytics, and machine learning, and you will see how you can mitigate data ingestion problems by using Auto Loader on Databricks to ingest streaming data.
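As a taste of that topic, here is a minimal sketch of a streaming ingestion pipeline that uses Auto Loader to load newly arriving files into a Delta table. The source directory, schema and checkpoint locations, file format, and target table name are illustrative assumptions, not values taken from the course, and the "cloudFiles" source is only available on Databricks.

```python
# Minimal Auto Loader sketch (assumes a Databricks cluster, where the
# "cloudFiles" source is available). Paths and table name are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Incrementally discover and read new JSON files from cloud storage.
raw_stream = (
    spark.readStream
         .format("cloudFiles")                       # Auto Loader source
         .option("cloudFiles.format", "json")        # format of incoming files
         .option("cloudFiles.schemaLocation",
                 "/mnt/ingest/_schemas/orders")      # where the inferred schema is tracked
         .load("/mnt/ingest/landing/orders")         # assumed landing directory
)

# Write the stream into a Delta table; the checkpoint tracks processed files.
query = (
    raw_stream.writeStream
              .format("delta")
              .option("checkpointLocation", "/mnt/ingest/_checkpoints/orders")
              .outputMode("append")
              .toTable("bronze_orders")              # assumed target Delta table
)
```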
Next, you will explore common performance bottlenecks that you are likely to encounter while processing data in Apache Spark: issues involving serialization, skew, spill, and shuffle. You will learn techniques to mitigate these issues and see how you can improve the performance of your processing code using disk partitioning, z-order clustering, and bucketing.
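To illustrate the kind of data-layout optimizations the course discusses, the sketch below partitions a Delta table on disk and then applies z-order clustering. It assumes a Databricks or Delta Lake environment; the table name, column names, and source path are hypothetical.

```python
# Hypothetical sketch of disk partitioning and z-order clustering on a Delta table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

events = spark.read.format("delta").load("/mnt/raw/events")  # assumed source

# Disk partitioning: lay out the data by a low-cardinality column so that
# queries filtering on event_date can prune whole directories.
(
    events.write
          .format("delta")
          .partitionBy("event_date")
          .mode("overwrite")
          .saveAsTable("events_partitioned")
)

# Z-order clustering: co-locate related values of a high-cardinality column
# within files so data skipping can prune files on user_id filters.
spark.sql("OPTIMIZE events_partitioned ZORDER BY (user_id)")
```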
Finally, you will learn how you can share resources on the cluster using scheduler pools and fair scheduling, and how you can reduce disk read and write operations by caching Delta tables.
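As a rough sketch of those two ideas, the snippet below assigns concurrent workloads to named fair-scheduler pools and enables the Databricks disk cache (formerly the Delta cache). The pool names and table names are assumptions for illustration, and the scheduler mode is normally set in the cluster configuration rather than in notebook code.

```python
# Hypothetical sketch of scheduler pools and disk caching on Databricks.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
                .config("spark.scheduler.mode", "FAIR")  # fair scheduling; usually a cluster setting
                .getOrCreate()
)

# Run each workload's jobs in its own pool so one long job cannot starve the other.
spark.sparkContext.setLocalProperty("spark.scheduler.pool", "etl_pool")
spark.read.table("bronze_orders").count()          # jobs here run in etl_pool

spark.sparkContext.setLocalProperty("spark.scheduler.pool", "reporting_pool")
spark.read.table("events_partitioned").count()     # jobs here run in reporting_pool

# Databricks disk cache: keeps copies of remote Parquet/Delta data on local
# SSDs to cut repeated reads from cloud storage.
spark.conf.set("spark.databricks.io.cache.enabled", "true")
```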
When you are finished with this course, you will have the skills and knowledge of Spark performance optimization needed to get the best out of your Spark cluster.
Table of contents
- Version Check | 15s
- Prerequisites and Course Outline | 1m 59s
- Delta Lake | 7m 1s
- Data Ingestion: Definition, Challenges, and Best Practices | 6m 8s
- Auto Loader for Data Ingestion | 2m 52s
- Demo: Creating an External Cloud Storage Source for Ingestion of Files | 4m 33s
- Demo: Ingesting Streaming Data into Delta Lake | 6m 29s
- Demo: Tracking Processed Files using Auto Loader | 3m 21s
- Demo: Ingesting Batch Data into Delta Lake | 3m 3s
- Demo: Ingesting Data into Delta Lake Using SQL | 3m 9s
- Databricks Data Ingestion Network | 3m 13s
About the author
A problem solver at heart, Janani has a Master's degree from Stanford and worked for 7+ years at Google. She was one of the original engineers on Google Docs and holds 4 patents for its real-time collaborative editing framework.