Architecting Serverless Big Data Solutions Using Google Dataflow
Dataflow takes a fundamentally different approach to Big Data processing from compute engines such as Spark. Dataflow is serverless and fully managed, so resource provisioning and scaling are transparent to the data architect.
What you'll learn
Dataflow allows developers to process and transform data using simple, intuitive APIs. It is built on the Apache Beam model and unifies both batch and stream processing of data. In this course, Architecting Serverless Big Data Solutions Using Google Dataflow, you will explore the full potential of Cloud Dataflow and its innovative programming model.

You will start this course with a basic understanding of how Dataflow provides serverless compute. You'll study the Apache Beam API used to build pipelines and learn what data sources, sinks, and transformations are. You'll examine the stages in a Dataflow pipeline and visualize them as a directed acyclic graph.

Next, you'll use the Apache Beam APIs to build data-transformation pipelines in both Java and Python and execute these pipelines locally and on the cloud. You'll integrate your pipelines with other GCP services such as BigQuery and see how to monitor and debug slow pipeline stages. You'll also study different pipeline architectures, such as branching pipelines and pipelines using side inputs, and apply windowing operations to perform aggregations on your data.

Finally, you'll work with Dataflow without writing any code, using the pre-built Dataflow templates that Google offers for common operations. At the end of this course, you should be comfortable using Dataflow pipelines to transform and process your data and integrating your pipelines with other Google services.
Table of contents
- Module Overview 1m
- Prerequisites and Course Outline 4m
- Introducing Dataflow 5m
- Enabling APIs and Installing Client Libraries 7m
- Using Dataflow 5m
- Introducing Apache Beam 4m
- Exploring the Wineries Dataset 2m
- Running a Dataflow Job Locally 4m
- Running a Dataflow Job on the Cloud 8m
- Staging and Output on Cloud Storage 2m
- Dataflow Pricing 2m
- Monitor Dataflow Jobs Using Stackdriver 6m
- Module Overview 2m
- Create a Java Project Using Maven 3m
- Writing a Dataflow Job in Java 7m
- Examining Output Files on Cloud Storage 1m
- Find the Top Selling Products 6m
- Executing a Java Pipeline on Cloud Dataflow 4m
- Execute Jobs and Monitor Logs 3m
- Scaling Number of Workers 3m
- Identifying Slow Pipeline Stages 3m
- Integrating Dataflow with BigQuery 4m
- Writing Results to BigQuery 3m