This is the continuation of the previous post on “Scraping housing prices using Python Scrapy”. In this post, we will use XPath to retrieve the corresponding fields from the targeted website instead of just saving the full HTML page. For a preview of how to extract information from a particular web page, you can refer to the earlier post “Retrieving stock news and Ex-date from SGX using python”.
Parsing the web page with Scrapy requires the Scrapy spider “parse” function. To test out the function, it can be a hassle to run the Scrapy crawl command each time you try out a field, since that means making a request to the website every single time.
There are two ways to go about it. One way is to let Scrapy cache the data; the other is to make use of the HTML page downloaded in the previous session. I have not really tried out caching with Scrapy, but it is possible using Scrapy's downloader middleware (a minimal settings sketch follows the links below). Some of the links below might help to provide some ideas.
- https://doc.scrapy.org/en/0.12/topics/downloader-middleware.html
- http://stackoverflow.com/questions/22963585/using-middleware-to-ignore-duplicates-in-scrapy
- http://stackoverflow.com/questions/40051215/scraping-cached-pages
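If you prefer the caching route, Scrapy ships with a built-in HttpCacheMiddleware that can be switched on from the project settings. The snippet below is a minimal sketch of the relevant settings.py entries; the expiration time and cache directory values are illustrative and not taken from the original project.
# settings.py -- enable Scrapy's built-in HTTP cache so repeated test runs
# reuse previously downloaded pages instead of hitting the website each time.
HTTPCACHE_ENABLED = True            # turn on HttpCacheMiddleware
HTTPCACHE_EXPIRATION_SECS = 86400   # illustrative: keep cached pages for one day (0 = never expire)
HTTPCACHE_DIR = 'httpcache'         # stored under the project's .scrapy directory
HTTPCACHE_IGNORE_HTTP_CODES = []    # cache responses regardless of status code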
To use the downloaded copy of the HTML page, which is the approach I have been using, the following script demonstrates how it is done. The downloaded page is taken from this property website link. Create an empty script, paste in the following snippet, and run it as a normal Python script.
import os, sys, time, datetime, re
from scrapy.http import HtmlResponse

#Enter file path of the downloaded html page
filename = r'targeted file location'
with open(filename, 'r') as f:
    html = f.read()

# Key line to allow Scrapy to parse the saved page. The url should be the
# original page url so that response.urljoin resolves relative links correctly.
response = HtmlResponse(url='original page url', body=html)

item = dict()
for sel in response.xpath("//tr")[10:]:
    item['id'] = sel.xpath('td/text()')[0].extract()
    item['block_add'] = sel.xpath('td/a/span/text()')[0].extract()
    individual_block_link = sel.xpath('td/a/@href')[0].extract()
    item['individual_block_link'] = response.urljoin(individual_block_link)
    item['date'] = sel.xpath('td/text()')[3].extract()
    price = sel.xpath('td/text()')[4].extract()
    price = int(price.replace(',', ''))
    price_k = price / 1000
    item['price'] = price
    item['price_k'] = price_k
    item['size'] = sel.xpath('td/text()')[5].extract()
    item['psf'] = sel.xpath('td/text()')[6].extract()
    #agent = sel.xpath('td/a/span/text()')[1].extract()
    item['org_url_str'] = response.url

    # print each field of the current row for verification
    for k, v in item.iteritems():
        print k, v
Once verified that there are no issues retrieving the various components, we can paste the portion into the actual Scrapy spider parse function. Remember to exclude the statement “response = HtmlResponse …”.
From the url, we notice that the property search results span multiple pages. The idea is to traverse each page and obtain the desired information from it, which means Scrapy needs to know the next url to go to. The same XPath method can be used to retrieve the url link to the next page.
Below is the parse function used in the Scrapy spider.py.
def parse(self, response):
    for sel in response.xpath("//tr")[10:]:
        item = ScrapePropertyguruItem()
        item['id'] = sel.xpath('td/text()')[0].extract()
        item['block_add'] = sel.xpath('td/a/span/text()')[0].extract()
        individual_block_link = sel.xpath('td/a/@href')[0].extract()
        item['individual_block_link'] = response.urljoin(individual_block_link)
        item['date'] = sel.xpath('td/text()')[3].extract()
        price = sel.xpath('td/text()')[4].extract()
        price = int(price.replace(',', ''))
        price_k = price / 1000
        item['price'] = price
        item['price_k'] = price_k
        item['size'] = sel.xpath('td/text()')[5].extract()
        item['psf'] = sel.xpath('td/text()')[6].extract()
        #agent = sel.xpath('td/a/span/text()')[1].extract()
        item['org_url_str'] = response.url
        yield item

    #get next page link
    next_page = response.xpath("//div/div[6]/div/a[10]/@href")
    if next_page:
        page_url = response.urljoin(next_page[0].extract())
        yield scrapy.Request(page_url, self.parse)
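For completeness, the ScrapePropertyguruItem used above needs a matching item class in the project's items.py. The sketch below is not taken from the original project; it simply declares one field for each key assigned in the parse function.
import scrapy

class ScrapePropertyguruItem(scrapy.Item):
    # one field per key assigned in the parse function above
    id = scrapy.Field()
    block_add = scrapy.Field()
    individual_block_link = scrapy.Field()
    date = scrapy.Field()
    price = scrapy.Field()
    price_k = scrapy.Field()
    size = scrapy.Field()
    psf = scrapy.Field()
    org_url_str = scrapy.Field()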
In the next post, I will share how to migrate the running of the spider to Scrapy Cloud.
Related Posts
- Scraping housing prices using Python Scrapy
- Retrieving stock news and Ex-date from SGX using python