Scrapy masterclass: Python web scraping and data pipelines
Work on 7 real-world web-scraping projects using Scrapy, Splash, and Selenium. Build data pipelines locally and on AWS
What you’ll learn
- Extract data from the most difficult websites using Scrapy
- Build ETL pipelines and store data in CSV, JSON, MySQL, MongoDB, and S3
- Avoid getting banned and evade bot-protection techniques
- Harness the power of Selenium browser automation to scrape any website
- Deploy your Scrapy bots in local and AWS environments
Requirements
- Some Python background
- All projects run on Python 3.10, so it must be installed
- Familiarity with Linux is recommended but not strictly required
- Familiarity with the HTTP protocol and HTML
This is the era of data!
Everyone is telling you what to do with the data that you already have. But how can you “have” this data?
Most of the Data Engineering / Data Science discussions today focus on how to analyze and process datasets to draw useful information out of them. However, they all assume those datasets are already available to you, that they've been collected somehow. They spend little time showing how you can obtain such a dataset in the first place! This course fills that gap.
This course on building powerful web-scraping pipelines with Scrapy walks you through the process of extracting data of interest from websites. True, there are a lot of datasets already available for you to consume, either for free or at some cost. However, what if those datasets are outdated? What if they don't address your specific needs? You'd better know how to build your own dataset from scratch, no matter how unstructured your data source is.
Scrapy is a Python web scraping framework. Thousands of companies and professionals use it to collect data and build datasets. Then they can sell them or use them in their own projects. Today, you can be one of those professionals. Even build your own business around data harvesting!
Today, data scientists and data engineers are among the most highly paid in the industry. Yet, if they don’t have enough data to work on, they can do nothing.
You will also learn what to do after you obtain your data. In ETL (Extract, Transform, Load), Scrapy handles the Extract step, and this course covers the other two as well. Using Scrapy pipelines, we'll see how to store our data in SQL and NoSQL databases, Elasticsearch clusters, event brokers like Kafka, object storage like S3, and message queues like AWS SQS.
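As a taste of the Transform and Load steps, here is a sketch of a Scrapy item pipeline, which is a plain Python class that Scrapy calls once per scraped item. The field name and output file are illustrative; the course's actual pipelines target the stores listed above:

```python
import json


class JsonWriterPipeline:
    """Sketch of a Scrapy item pipeline: cleans one field (Transform)
    and appends each item to a JSON-lines file (Load)."""

    def open_spider(self, spider):
        # Called once when the spider starts
        self.file = open("items.jl", "w")

    def process_item(self, item, spider):
        # Transform: strip stray whitespace from the (assumed) "author" field
        item["author"] = item["author"].strip()
        # Load: write the item as one JSON line
        self.file.write(json.dumps(item) + "\n")
        return item

    def close_spider(self, spider):
        # Called once when the spider finishes
        self.file.close()
```

To activate a pipeline, you register it in the project's `ITEM_PIPELINES` setting; swapping the file write for a database insert is how the same hook loads into MySQL, MongoDB, or S3.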
Even if you know nothing about web scraping or data harvesting, even if all of this seems new to you, you’ve come to the right place.
I’ve designed this class for total beginners. It will walk you from “What is web scraping? What is Scrapy? Why should I learn and use it?” all the way up to “Now I have several gigabytes of web-scraped data from dozens of websites. Let’s figure out how we can put them to effective use”.
Web scraping can be as easy as extracting some text from an HTML page, or as involved as going several levels deep across several websites, crawling each link and hopping from one page to another. It can also get incredibly challenging when websites deploy blockers to keep web bots out. Don't worry: we'll address all of these use cases and, together, figure out how to overcome them.
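As a preview of the anti-ban side, Scrapy exposes crawl-politeness knobs in a project's `settings.py`. A sketch with illustrative values (these are real Scrapy settings, but the right numbers depend on the target site):

```python
# settings.py fragment: polite-crawling sketch

# Identify the crawler honestly instead of using the default user agent
USER_AGENT = "mybot (+https://example.com/contact)"  # placeholder identity

# Respect robots.txt rules
ROBOTSTXT_OBEY = True

# Slow down: wait between requests and cap per-domain concurrency
DOWNLOAD_DELAY = 1.0
CONCURRENT_REQUESTS_PER_DOMAIN = 4

# Let Scrapy adapt the request rate to server latency automatically
AUTOTHROTTLE_ENABLED = True
```

Heavier defenses (JavaScript challenges, fingerprinting) are where tools like Splash and Selenium from the course come in.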
Who this course is for:
- Anyone who wants to automate data collection from websites (web scraping) using Scrapy
- Anyone who wants to build a business around web scraping and data collection
- Data engineers, data scientists, ML engineers who want to master web scraping for their data collection needs
- Developers, DevOps engineers or IT professionals who want to switch careers to data engineering
- Python programmers who want to know more about Scrapy or web scraping in general