Processing Streaming Data Using Apache Spark Structured Streaming
Structured Streaming is the scalable, fault-tolerant stream processing engine in Apache Spark 2, which can be used to process high-velocity data streams.
What you'll learn
Stream processing applications work with continuously updated data and react to changes in real time. In this course, Processing Streaming Data Using Apache Spark Structured Streaming, you'll focus on integrating your streaming application with the Apache Kafka reliable messaging service to work with real-world data such as Twitter streams.
First, you'll explore the Spark architecture that supports distributed processing at scale. Next, you'll install and work with the Apache Kafka reliable messaging service.
Finally, you'll perform a number of transformation operations on Twitter streams, including windowing and join operations.
When you're finished with this course, you'll have the skills and knowledge to work with high-volume, high-velocity data using Spark, and to integrate with Apache Kafka to process streaming data.
Table of contents
- Version Check | 15s
- Prerequisites and Course Outline | 1m 56s
- Drivers, Workers, Executors, and Tasks | 4m 37s
- Introducing Spark Standalone | 3m 17s
- High Availability Schemes | 4m 49s
- Demo: Install and Set up Spark on Your Local Machine | 3m 20s
- Demo: Start Master and Worker Processes | 5m 3s
- Demo: Config Files for Worker Nodes | 4m 44s
- Demo: Configuring Processing Using Command Line Arguments | 2m 12s
- Demo: The Spark Web UI for Monitoring Applications | 5m 43s
- Demo: High Availability Configuration with ZooKeeper | 6m
- Demo: Configuring the Spark Environment Using Config Files | 1m 59s
- Security for Spark Clusters | 5m 11s
- Backpressure | 5m 9s
About the author
A problem solver at heart, Janani has a Master's degree from Stanford and worked for 7+ years at Google. She was one of the original engineers on Google Docs and holds 4 patents for its real-time collaborative editing framework.
More Courses by Janani