Web Scraping (2 Part Series)
1 Scraping Dynamic Javascript Websites with Scrapy and Scrapy-playwright
2 XPath vs CSS Selector: what is going on with these two?
The world of web scraping is truly fascinating. Automating data collection from websites is both fun and a useful skill.
Scrapy is a popular Python package that makes scraping websites a breeze. However, it works best on static pages. On Javascript-heavy websites that load data on demand or require rendering and user input, Scrapy struggles a lot.
In this article I will explore ways to use Scrapy to scrape dynamic websites.
Code for this example here
This article relies heavily on videos by John Watson Rooney. Check out his Youtube channel because he has a lot of amazing videos on web scraping!
Let’s begin. We will explore scraping of dynamic website using scrapy-playwright.
Note: I use example.com as a stand-in. I don't want to use an existing website and inadvertently DDoS it. Scraping should be done responsibly, and we should always read a website's ToS to see if it even allows scraping. This article is for educational purposes only.
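Beyond the ToS, a polite scraper also respects robots.txt. A minimal sketch using the standard library's `urllib.robotparser` (the `allowed_by_robots` helper, the user-agent name, and the robots.txt body are all hypothetical):

```python
from urllib.robotparser import RobotFileParser

def allowed_by_robots(robots_txt: str, user_agent: str, url: str) -> bool:
    """Parse a robots.txt body and report whether `url` may be fetched."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

# Hypothetical robots.txt that blocks /private/ for every crawler
robots = "User-agent: *\nDisallow: /private/"
print(allowed_by_robots(robots, "pwspider", "https://example.com/items"))         # True
print(allowed_by_robots(robots, "pwspider", "https://example.com/private/page"))  # False
```

In a real spider you would fetch `https://<site>/robots.txt` first and check every URL before requesting it (Scrapy does this for you when `ROBOTSTXT_OBEY = True`).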
First we will create our virtual environment, install scrapy and scrapy-playwright, and initialize playwright:
```shell
$ python -m virtualenv venv
$ source venv/bin/activate
$ pip install scrapy scrapy-playwright
$ playwright install
```
We need a scrapy project to proceed. Luckily, scrapy has a built-in command to create a new project. Let’s create a scrapy project and change into the newly created folder:
```shell
$ scrapy startproject playwright_demo
$ cd playwright_demo
```
Next we will create a new spider.
```shell
$ scrapy genspider pwspider example.com
# Output
Created spider 'pwspider' using template 'basic' in module:
  playwright_demo.spiders.pwspider
```
You should see a new folder in your working directory named playwright_demo with a similar structure (the numbers next to cpython may differ depending on your Python version):
```
.
├── playwright_demo
│   ├── __init__.py
│   ├── __pycache__
│   │   ├── __init__.cpython-38.pyc
│   │   └── settings.cpython-38.pyc
│   ├── items.py
│   ├── middlewares.py
│   ├── pipelines.py
│   ├── settings.py
│   └── spiders
│       ├── __init__.py
│       ├── __pycache__
│       │   └── __init__.cpython-38.pyc
│       └── pwspider.py
└── scrapy.cfg

4 directories, 11 files
```
Now we need to modify Scrapy's settings so that it works with Playwright. Instructions can be found on scrapy-playwright's GitHub page. We need to add settings for DOWNLOAD_HANDLERS and TWISTED_REACTOR. The newly added settings sit between the ### markers.
This is what the settings file should look like:
```python
# playwright_demo/settings.py

# Scrapy settings for playwright_demo project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'playwright_demo'

SPIDER_MODULES = ['playwright_demo.spiders']
NEWSPIDER_MODULE = 'playwright_demo.spiders'

### Playwright settings
DOWNLOAD_HANDLERS = {
    "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
    "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
}

TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
###

# More scrapy settings down below...
```
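As an aside, Scrapy also lets a single spider carry its own settings via the `custom_settings` class attribute instead of editing settings.py globally. A minimal sketch of the same two settings as a plain dict (the `PLAYWRIGHT_SETTINGS` name is my own; the values match the snippet above):

```python
# The same Playwright settings as a plain dict; a spider could adopt them with
# `custom_settings = PLAYWRIGHT_SETTINGS` instead of editing settings.py.
PLAYWRIGHT_SETTINGS = {
    "DOWNLOAD_HANDLERS": {
        "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
        "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
    },
    "TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor",
}

# Both schemes must point at the Playwright handler, otherwise only part of
# the traffic goes through the browser.
assert set(PLAYWRIGHT_SETTINGS["DOWNLOAD_HANDLERS"]) == {"http", "https"}
```

This is handy when only one spider in a project needs Playwright.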
Another thing we need to edit is the spider itself. We need to add a start_requests()
method and delete a couple of lines from the spider. Here is the file:
```python
# playwright_demo/spiders/pwspider.py
import scrapy


class PwspiderSpider(scrapy.Spider):
    name = 'pwspider'

    def start_requests(self):
        # The URL needs a scheme, otherwise scrapy rejects the request
        yield scrapy.Request('https://example.com', meta={'playwright': True})

    def parse(self, response):
        yield {
            'text': response.text,
        }
```
OK, we have the basic setup. Now we need to inspect the source of the website we want to scrape.
We are looking for something similar to this:
```
We're sorry, but the site doesn't work properly without Javascript enabled.
```
What this means is that we have to actually load the website and allow it to load the data we want to scrape.
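A quick way to check for this is to look for such notices in the raw, unrendered HTML. The sketch below is a rough heuristic of my own (the `needs_js_rendering` helper and the marker strings are assumptions, not a standard API):

```python
# Heuristic for spotting pages that need JavaScript rendering: the raw HTML
# mostly contains a "please enable JavaScript" notice instead of real content.
JS_MARKERS = (
    "doesn't work properly without javascript",
    "please enable javascript",
    "<noscript>",
)

def needs_js_rendering(html: str) -> bool:
    """Return True if the raw (unrendered) HTML looks JavaScript-gated."""
    lowered = html.lower()
    return any(marker in lowered for marker in JS_MARKERS)

static_page = "<html><body><h1>Products</h1></body></html>"
dynamic_page = ("<html><noscript>We're sorry, but the site doesn't work "
                "properly without Javascript enabled.</noscript></html>")
print(needs_js_rendering(static_page))   # False
print(needs_js_rendering(dynamic_page))  # True
```

A `<noscript>` tag alone is not proof (many static pages include one for analytics), so treat a positive result as a hint to inspect the page by hand.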
If we run our spider with the following command, outputting the result of the scrape into output_data.json, we will get data back:
```shell
$ scrapy crawl pwspider -o output_data.json
```
The problem is that we get junk data: Scrapy does not give the website enough time to load the data we want.
What we do is go to the website we want to scrape and start looking for selectors, ids, and classes of the items we want. We need to tell Playwright to wait until the data we want has loaded, and only then scrape it.
We will change the meta dictionary inside the start_requests method to point Scrapy and Playwright in the right direction. I will use an imaginary div with the id itemName. For the price, it is an imaginary div with the class form-group that contains a label with the price. The parse method must also be declared async or it will not work. Here is the updated spider file:
```python
# playwright_demo/spiders/pwspider.py
import scrapy
from scrapy_playwright.page import PageCoroutine


class PwspiderSpider(scrapy.Spider):
    name = 'pwspider'

    def start_requests(self):
        yield scrapy.Request(
            'https://example.com',  # imaginary target site
            meta=dict(
                playwright=True,
                playwright_include_page=True,
                playwright_page_coroutines=[
                    # This is where we could also implement scrolling if we wanted
                    PageCoroutine('wait_for_selector', 'div#itemName')
                ]
            )
        )

    async def parse(self, response):
        for item in response.css('div.card'):
            yield {
                'name': item.css('h3::text').get(),
                'price': item.css('div.form-group label::text').get()
            }
```

(Note: newer releases of scrapy-playwright rename PageCoroutine to PageMethod and the meta key to playwright_page_methods; adjust the import and key if you are on a recent version.)
If we run our spider again:
```shell
$ scrapy crawl pwspider -o output_data.json
```
Our output file will contain something similar to this:
```
// output_data.json
[
  {"name": "Item1", "price": "$14.99"},
  {"name": "Item2", "price": "$19.99"},
  {"name": "Item3", "price": "$134.99"},
  {"name": "Item4", "price": "$17.99"}
]
```
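Once the file exists, the items can be loaded back with the standard library's json module for further processing. A small sketch (the `price_to_float` helper is my own, and I inline two of the example items instead of reading the file):

```python
import json

def price_to_float(price: str) -> float:
    """Convert a price string like '$14.99' into a float."""
    return float(price.lstrip("$"))

# Two of the items from the example output above, as Scrapy would export them
raw = '[{"name": "Item1", "price": "$14.99"}, {"name": "Item2", "price": "$19.99"}]'
items = json.loads(raw)
total = round(sum(price_to_float(item["price"]) for item in items), 2)
print(total)  # 34.98
```

In a real run you would replace `json.loads(raw)` with `json.load(open('output_data.json'))`.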
This is it. Now you know how to scrape dynamic websites.