Getting Started with Stream Processing with Spark Streaming
The Spark Streaming module lets you work with large-scale streaming data using familiar batch processing abstractions. This course starts with how standard transformations and actions are performed on streams, and moves on to more advanced topics.
What you'll learn
Traditional distributed systems like Hadoop work on data stored in a file system, and jobs can run for hours, sometimes days. This is a major limitation for processing real-time data such as trends and breaking news. The Spark Streaming module extends the Spark batch infrastructure to handle data for real-time analysis. In this course, Getting Started with Stream Processing with Spark Streaming, you'll learn the nuances of dealing with streaming data using the same basic Spark transformations and actions that work with batch processing. Next, you'll explore how machine learning algorithms can be extended to work with streams. Finally, you'll learn the subtle details of how the streaming K-means clustering algorithm finds patterns in data. By the end of this course, you'll feel confident in your knowledge and ready to start integrating what you've learned into your own projects.
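To make the batch-to-streaming transition concrete, here is a minimal sketch of the classic network word count in PySpark, assuming a text source on localhost port 9999 (the host, port, and batch interval are placeholder choices, not values from the course). Note how flatMap, map, and reduceByKey are the same transformations you would apply to a batch RDD, here applied to a DStream:

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

# Two local threads: one to receive data, one to process it.
sc = SparkContext("local[2]", "NetworkWordCount")
ssc = StreamingContext(sc, 1)  # 1-second batch interval (illustrative)

# DStream of lines read from an assumed socket source on localhost:9999.
lines = ssc.socketTextStream("localhost", 9999)

# The same transformations used on batch RDDs, applied to a stream.
counts = (lines.flatMap(lambda line: line.split(" "))
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

counts.pprint()  # Print the word counts computed in each batch.

ssc.start()
ssc.awaitTermination()
```

Similarly, here is a hedged sketch of how the streaming K-means topic might be wired up using pyspark.mllib's StreamingKMeans. The socket source, feature dimension, and parameter values are illustrative assumptions, not the course's exact example:

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.mllib.clustering import StreamingKMeans
from pyspark.mllib.linalg import Vectors

sc = SparkContext("local[2]", "StreamingKMeansSketch")
ssc = StreamingContext(sc, 5)  # 5-second batch interval (illustrative)

# Assume each line on the socket is a comma-separated 2-D feature vector.
points = (ssc.socketTextStream("localhost", 9999)
             .map(lambda line: Vectors.dense([float(x) for x in line.split(",")])))

# k=3 clusters; decayFactor=1.0 weighs all batches equally.
# Centers are initialized randomly in a 2-dimensional space.
model = (StreamingKMeans(k=3, decayFactor=1.0)
         .setRandomCenters(dim=2, weight=0.0, seed=42))

model.trainOn(points)              # Update cluster centers on each batch.
model.predictOn(points).pprint()   # Print cluster assignments per batch.

ssc.start()
ssc.awaitTermination()
```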
Table of contents
- Version Check | 16s
- Limitations of Traditional Distributed Computing | 4m 50s
- Spark for Real-time Processing | 4m 44s
- Introduction to Streaming | 3m 11s
- The RDD Programming Abstraction | 7m 27s
- Using the Pyspark Interactive Shell | 6m 44s
- Discretized Streams | 5m 1s
- Working with Streaming Data in Spark Using Python | 5m 27s
- Running Your First Streaming Application in Spark | 4m 47s
About the author
A problem solver at heart, Janani has a Master's degree from Stanford and worked for 7+ years at Google. She was one of the original engineers on Google Docs and holds 4 patents for its real-time collaborative editing framework.