Extracting Structured Data from the Web Using Scrapy
Data analysts and scientists are always on the lookout for new sources of data, competitive intelligence, and fresh signals for the proprietary models behind their applications. The Scrapy package in Python makes extracting raw web content easy and scalable.
What you'll learn
Websites contain meaningful information that can drive decisions within your organization. The Scrapy package in Python makes crawling websites to scrape structured content easy and intuitive, while still allowing crawls to scale to hundreds of thousands of websites.
In this course, Extracting Structured Data from the Web Using Scrapy, you will learn how to scrape raw content from web pages and save it for later use in a structured, meaningful format.
You will start off by exploring how Scrapy works and how you can use CSS and XPath selectors in Scrapy to select the relevant portions of any web page. You'll use the Scrapy shell to prototype the selectors you want to use when building Spiders.
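For example, selectors can be prototyped directly against a snippet of HTML using Scrapy's Selector class before being wired into a Spider. The markup and selector expressions below are illustrative assumptions, not examples taken from the course:

```python
# A minimal sketch of prototyping CSS and XPath selectors with Scrapy's
# Selector class; the HTML snippet here is an illustrative assumption.
from scrapy.selector import Selector

html = """
<html>
  <body>
    <h1 class="title">Example Book Store</h1>
    <ul>
      <li class="book"><a href="/books/1">Scrapy Basics</a></li>
      <li class="book"><a href="/books/2">Advanced Crawling</a></li>
    </ul>
  </body>
</html>
"""

sel = Selector(text=html)

# CSS selector: extract the text of the page heading.
print(sel.css("h1.title::text").extract_first())           # Example Book Store

# XPath selector: extract every book link's href attribute.
print(sel.xpath("//li[@class='book']/a/@href").extract())  # ['/books/1', '/books/2']
```

The same expressions can be tried interactively in the Scrapy shell against a live page, which is how the course uses it to iterate on selectors quickly.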
Next, you'll learn how Spiders specify what to crawl, how to crawl it, and how to process the scraped data.
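To make that concrete, here is a minimal sketch of a Spider, assuming Scrapy 1.5 and using the public quotes.toscrape.com demo site as a stand-in target; the site and the field names are assumptions for illustration, not material from the course:

```python
import scrapy


class QuotesSpider(scrapy.Spider):
    # The name identifies the Spider when running `scrapy crawl quotes`.
    name = "quotes"
    # start_urls defines what to crawl first.
    start_urls = ["http://quotes.toscrape.com/"]

    def parse(self, response):
        # parse() defines how each downloaded page is processed.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").extract_first(),
                "author": quote.css("small.author::text").extract_first(),
            }

        # Following the pagination link decides how the crawl continues.
        next_page = response.css("li.next a::attr(href)").extract_first()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
```

Running `scrapy crawl quotes -o quotes.json` from inside a Scrapy project would execute the crawl and write the scraped items to a JSON file.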
You'll also learn how to take your Spiders to the cloud using Scrapy Cloud. The cloud platform offers advanced scraping functionality, including a cutting-edge tool called Portia, with which you can build a Spider without writing a single line of code.
By the end of this course, you will be able to build your own Spiders and crawlers to extract insights from virtually any website. This course uses Scrapy version 1.5 and Python 3.
Table of contents
- Overview 1m
- Introducing Spiders 1m
- Running Spiders to Crawl Websites 4m
- Using Crawl Spiders to Follow Links 3m
- Specifying Link Extraction Rules for Crawl Spiders 2m
- Crawling CSV Files 2m
- Introducing Nested Selectors 2m
- Using Items to Store Structured Data 4m
- Using Items with Spiders 4m
- Input Processors 2m
- Item Loaders 4m
- Item Pipelines 2m
- Using Feed Exporters to Save to a File 2m
- Dropping Scraped Items 2m