SGproperties

Scraping housing prices using Python Scrapy Part 2

This is a continuation of the previous post, “Scraping housing prices using Python Scrapy”. In this session, we will use XPath to retrieve the corresponding fields from the targeted website instead of just saving the full HTML page. For a preview of how to extract information from a particular web page, you can refer to the following post “Retrieving stock news and Ex-date from SGX using python”.

Parsing the web page with Scrapy requires the Scrapy spider's “parse” function. When testing out the function, it can be a hassle to run the Scrapy crawl command each time you try out a field, as this means making requests to the website every single time.

There are two ways to go about it. One is to let Scrapy cache the data; the other is to make use of the HTML page downloaded in the previous session. I have not really tried out caching with Scrapy, but it can be done through Scrapy's downloader middleware (a minimal example of enabling the built-in HTTP cache is sketched after the links below). The links below may help to provide some ideas.

  1. https://doc.scrapy.org/en/0.12/topics/downloader-middleware.html
  2. http://stackoverflow.com/questions/22963585/using-middleware-to-ignore-duplicates-in-scrapy
  3. http://stackoverflow.com/questions/40051215/scraping-cached-pages
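
I have not used it on this project, but as a pointer: recent Scrapy versions ship a built-in HttpCacheMiddleware that can be switched on purely from settings.py, so repeated test runs read from a local cache instead of hitting the website each time. A minimal sketch:

    # settings.py -- enable Scrapy's built-in HTTP cache (HttpCacheMiddleware)
    HTTPCACHE_ENABLED = True           # turn on caching of responses
    HTTPCACHE_EXPIRATION_SECS = 0      # 0 means cached pages never expire
    HTTPCACHE_DIR = 'httpcache'        # cache folder under the project's .scrapy directory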

For utilizing the downloaded copy of the HTML page, which is what I have been using, the following script demonstrates how it is done. The downloaded page is taken from this property website link. Create an empty script, paste in the following snippet and run it as a normal Python script.

    from scrapy.http import HtmlResponse

    #Enter file path of the downloaded html page
    filename = r'targeted file location'

    with open(filename,'r') as f:
        html = f.read()

    # Key line to allow Scrapy to parse the downloaded page offline.
    # Put the original page url here so that response.urljoin() below
    # resolves the relative links correctly.
    response = HtmlResponse(url="original page url", body=html)

    item = dict()

    for sel in response.xpath("//tr")[10:]: # skip the first 10 table rows (non-data rows)
        item['id'] = sel.xpath('td/text()')[0].extract()
        item['block_add'] = sel.xpath('td/a/span/text()')[0].extract()
        individual_block_link = sel.xpath('td/a/@href')[0].extract()
        item['individual_block_link'] = response.urljoin(individual_block_link)
        item['date'] = sel.xpath('td/text()')[3].extract()

        price = sel.xpath('td/text()')[4].extract()
        price = int(price.replace(',','')) # strip the thousands separator, e.g. "1,200,000"
        price_k = price/1000               # integer division in Python 2: price in thousands
        item['price'] = price
        item['price_k'] = price_k
        item['size'] = sel.xpath('td/text()')[5].extract()
        item['psf'] = sel.xpath('td/text()')[6].extract()
        #agent = sel.xpath('td/a/span/text()')[1].extract()
        item['org_url_str'] = response.url

        for k, v in item.iteritems():
            print k, v

Once we have verified that there are no issues retrieving the various components, we can paste this portion into the actual Scrapy spider's parse function. Remember to exclude the statement “response = HtmlResponse …”.

From the url, we notice that the property search results span multiple pages. The idea is to traverse each page and obtain the desired information from it, which means Scrapy needs to know the next url to go to. The same method can be used to parse out the url link to the next page.

Below is the parse function used in the Scrapy spider.py.

def parse(self, response):

    for sel in response.xpath("//tr")[10:]:
        item = ScrapePropertyguruItem()
        item['id'] = sel.xpath('td/text()')[0].extract()
        item['block_add'] = sel.xpath('td/a/span/text()')[0].extract()
        individual_block_link = sel.xpath('td/a/@href')[0].extract()
        item['individual_block_link'] = response.urljoin(individual_block_link)
        item['date'] = sel.xpath('td/text()')[3].extract()

        price = sel.xpath('td/text()')[4].extract()
        price = int(price.replace(',',''))
        price_k = price/1000
        item['price'] = price
        item['price_k'] = price_k
        item['size'] = sel.xpath('td/text()')[5].extract()
        item['psf'] = sel.xpath('td/text()')[6].extract()
        #agent = sel.xpath('td/a/span/text()')[1].extract()
        item['org_url_str'] = response.url

        yield item

    #get next page link (position-based xpath, specific to this site's pagination layout)
    next_page = response.xpath("//div/div[6]/div/a[10]/@href")
    if next_page:
        page_url = response.urljoin(next_page[0].extract())
        yield scrapy.Request(page_url, self.parse)
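
If the position-based XPath above ever breaks, one possible fallback is to select the pagination link by its text instead. This assumes the “next page” anchor on the site actually contains the text “Next”, which I have not verified:

    # hypothetical alternative -- assumes the pagination anchor is labelled "Next"
    next_page = response.xpath('//a[contains(text(), "Next")]/@href')
    if next_page:
        yield scrapy.Request(response.urljoin(next_page[0].extract()), self.parse)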

For the next post, I will share how to migrate the running of the spider to Scrapy Cloud.

Related Posts

  1. Scraping housing prices using Python Scrapy
  2. Retrieving stock news and Ex-date from SGX using python

Scraping housing prices using Python Scrapy

This post (and subsequent posts) shows how to scrape the latest housing prices from the web using Python Scrapy. As an example, the following website, propertyguru.com, is used. To start, select the criteria and filters within the webpage to get the desired search results, then copy the url link. Information from this url will be scraped using Scrapy. Information on installing Scrapy can be found in the following post “How to Install Scrapy in Windows”.

For a guide on running Scrapy, you can refer to the Scrapy tutorial. The following guidelines can be used for building a simple project.

  1. Create project
    scrapy startproject name_of_project
  2. Define items in items.py (temporarily set a few fields)
    from scrapy.item import Item, Field
    
    class ScrapePropertyguruItem(Item):
        # define the fields for your item here like:
        name = Field()
        id = Field()
        block_add = Field()
    
  3. Create a spider.py. Open spider.py and input the following code to get the stored HTML form of the scraped web page (the expected project layout is sketched after this list).
    import scrapy
    from propertyguru_sim.items import ScrapePropertyguruItem # this refers to the name of the project
    
    class DmozSpider(scrapy.Spider):
        name = "demo"
        allowed_domains = ['propertyguru.com.sg']
        start_urls = [
           r'http://www.propertyguru.com.sg/simple-listing/property-for-sale?market=residential&property_type_code%5B%5D=4A&property_type_code%5B%5D=4NG&property_type_code%5B%5D=4S&property_type_code%5B%5D=4I&property_type_code%5B%5D=4STD&property_type=H&freetext=Jurong+East%2C+Jurong+West&hdb_estate%5B%5D=13&hdb_estate%5B%5D=14'
        ]
        def parse(self, response):
            filename = response.url.split("/")[-2] + '.html'
            print
            print
            print 'filename', filename 
    
            with open(filename, 'wb') as f:
                f.write(response.body)
    
  4. Run the Scrapy command “scrapy crawl demo”, where “demo” is the spider name assigned.
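
For reference, the project folder after the steps above should look roughly like the sketch below (the standard layout generated by scrapy startproject, with the spider file added manually; exact files may vary slightly between Scrapy versions):

    propertyguru_sim/
        scrapy.cfg                # deploy configuration
        propertyguru_sim/
            __init__.py
            items.py              # ScrapePropertyguruItem defined here
            pipelines.py
            settings.py           # USER_AGENT added here (see below)
            spiders/
                __init__.py
                spider.py         # DmozSpider ("demo") lives here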

You will notice that by setting up the project this way, there will be an error parsing the website. Some websites, like the one above, require a user agent to be set. In this case, you can add USER_AGENT to settings.py so that Scrapy runs with a user agent.

BOT_NAME = 'propertyguru_sim'

SPIDER_MODULES = ['propertyguru_sim.spiders']
NEWSPIDER_MODULE = 'propertyguru_sim.spiders'

USER_AGENT = "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.93 Safari/537.36"

Run the spider again with the updated settings and you will see an HTML page appear in the project folder. Success.

In the next post, we will look at getting the individual components from the HTML page using XPath.

Retrieving Singapore housing (HDB) resale prices with Python

This post is more suited to the Singapore context, with the aim of retrieving the Housing Development Board (HDB) resale prices for the year 2015, grouped by different parts of Singapore. All the price information is retrieved from the HDB main website, which returns the past one year of records for each block or postcode. Hence, in order to retrieve all the records, one would first need all the postcodes in Singapore. Below is the list of information required to form the full picture.

  1. Retrieve the full list of postcodes from the following sg postcode database.
  2. The above only gives the postcodes; next we have to map each postcode to its actual address. This website also provides a postcode search that returns the corresponding address. You can automate this with the same process using python, python pattern and pandas.
  3. Retrieve the HDB resale prices by iterating over all the postcodes retrieved above.
  4. An optional step is to retrieve the geocodes corresponding to the postcodes so that all the data can be put on a map. This post “Retrieving Geocodes from ZipCodes using Python and Selenium” describes the retrieval method.

The 1st code snippet applies to item 1, i.e., retrieving the postcodes. Item 2 is a two-step process: first search the postcode to get the link, then retrieve the address from that link (a hypothetical sketch of this step is shown after the first snippet).


import os
import pandas as pd
from pattern.web import URL

def retrieve_postal_code_fr_web_1(target_url, savefilelocation):
    """ Download the postcode table from the target url and save it as a csv file.
        Args:
            target_url (str): url of the postcode listing page.
            savefilelocation (str): folder in which to save the csv file.
    """
    # name the csv file after the last url parameter, e.g. '...=xxx' -> 'xxx.csv'
    savefile = target_url.split('=')[-1] + '.csv'
    fullsavefile = os.path.join(savefilelocation, savefile)

    contents = URL(target_url).download()

    # pandas parses every html table on the page; take the first (the postcode listing)
    w = pd.read_html(contents)
    w[0].to_csv(fullsavefile, index=False)
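
For item 2 of the outline, I am not reproducing the actual script here; below is a purely hypothetical sketch of the two-step idea (search the postcode, follow the result link, then read the address off the detail page). The search url, the link-extraction pattern and the assumption that the detail page contains an html table are all placeholders to adapt to the actual postcode-search website:

    import re
    import pandas as pd
    from pattern.web import URL

    def postcode_to_address(postcode):
        """ Hypothetical sketch -- adapt the url and parsing to the real search website. """
        # step 1: search the postcode and pick out the detail-page link (placeholder pattern)
        search_url = 'http://hypothetical-postcode-search.sg/search?postcode=' + str(postcode)
        search_page = URL(search_url).download()
        link = re.search(r'href="([^"]+)"', search_page).group(1)
        # step 2: download the detail page and read the address table with pandas
        detail_page = URL(link).download()
        return pd.read_html(detail_page)[0]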

The next snippet describes the method to retrieve the HDB resale prices. From exploring the HDB website, the dataset is served in xml format. The url is as follows: http://services2.hdb.gov.sg/webapp/BB33RTIS/BB33SResaleTransMap?postal=<postcode>. For easy retrieval of data in xml format, one way is to convert the xml to dict form and then build a pandas dataframe object from the dict. The python module xmltodict serves the required function; a quick illustration is given below before the full class.
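
The toy snippet below illustrates the xml-to-dict-to-dataframe idea; the tag names are made up for illustration, and the actual HDB xml has its own field names, which the class further down simply passes through to the dataframe:

    import xmltodict
    import pandas as pd

    # toy xml shaped like the HDB response: a Datasets root with repeated Dataset records
    sample_xml = """
    <Datasets>
      <Dataset><block>525</block><price>450000</price></Dataset>
      <Dataset><block>526</block><price>462000</price></Dataset>
    </Datasets>
    """

    obj = xmltodict.parse(sample_xml)
    records = obj['Datasets']['Dataset']            # list of OrderedDicts, one per record
    df = pd.DataFrame([dict(r) for r in records])   # one row per transaction
    print df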


import re, os, sys, datetime, time
import pandas as pd
import pattern
import xmltodict

from pattern.web import  URL, extension

class HDBResalesQuery(object):
    """ 
        For retrieving the resales prices from HDB webpage.
    """
    def __init__(self):
        """ List of url parameters -- for url formation """
        self.com_data_start_url = 'http://services2.hdb.gov.sg/webapp/BB33RTIS/BB33SResaleTransMap?postal='
        self.postal_portion_url = ''
        self.com_data_full_url = ''
        self.postal_list = [] #multiple postal code list

        ## storage
        self.single_postal_df = pd.DataFrame()
        self.multi_postal_df = pd.DataFrame()

        ## debugging
        self.en_print = 1
        
    def set_postal_code(self, postalcode):
        """ Set the postal code to url part.
            Set to self.postal_portion_url.
            Args:
                postalcode (str or int): postal code, converted to str internally.
        """
        self.postal_portion_url = str(postalcode)

    def set_postal_code_list(self, postalcodelist):
        """ Set list of postal code. Set to self.postal_list
            Args:
                postalcodelist(list): list of postal code
        """
        self.postal_list = postalcodelist

    def form_url_str(self):
        """ Form the url str necessary to get the xml

        """           
        self.com_data_full_url = self.com_data_start_url + self.postal_portion_url
        
    def get_com_data(self):
        """ Combine the url str and get html contents
        """
        self.form_url_str()
        if self.en_print: print self.com_data_full_url
        contents = URL(self.com_data_full_url).download()
        return contents

    def process_single_postal_code(self):
        """ process single postal code and retrieve the relevant information from HDB.

        """
        contents = self.get_com_data()
        if self.en_print: print contents
        obj = xmltodict.parse(contents)

        data_dict_list = []
        if obj['Datasets'].has_key('Dataset'):
            data_set = obj['Datasets']['Dataset']
            if type(data_set) == list:
                for single_data in data_set:
                    data_dict_list.append(dict(single_data))
            else:
                data_dict_list.append(dict(data_set))
        
        #Can convert to pandas dataframe w = pd.DataFrame(data_dict_list)
        self.single_postal_df = pd.DataFrame(data_dict_list)
        if self.en_print: print self.single_postal_df

    def process_multi_postal_code(self):
        """ For processing multiple postal codes.
        """
        self.multi_postal_df = pd.DataFrame()
        
        for postalcode in self.postal_list:
            if self.en_print: print 'processing postalcode: ', postalcode
            self.set_postal_code(postalcode)
            self.process_single_postal_code()
            if len(self.single_postal_df) == 0: #no data
                continue
            if len(self.multi_postal_df) == 0:
                self.multi_postal_df = self.single_postal_df
            else:
                self.multi_postal_df = self.multi_postal_df.append(self.single_postal_df)

            

if __name__ == '__main__':
    """ Trying out the class """
    postallist = ['640525','180262']
    w = HDBResalesQuery()
    w.set_postal_code_list(postallist)
    w.process_multi_postal_code()
    print w.multi_postal_df

Note that the whole process requires a large number of queries (about 110k) to the website. It is best to schedule the retrieval in batches, or the website will shut you out (identify you as a bot).
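
A minimal sketch of one way to pace the retrieval, assuming the full postcode list has already been loaded into postal_list (for example from the csv files saved earlier); the batch size and pause below are arbitrary choices:

    import time

    BATCH_SIZE = 200       # arbitrary number of postcodes per batch
    PAUSE_SECS = 600       # arbitrary pause between batches (10 minutes)

    w = HDBResalesQuery()
    w.en_print = 0         # silence the per-query printing for long runs

    for start in range(0, len(postal_list), BATCH_SIZE):
        batch = postal_list[start:start + BATCH_SIZE]
        w.set_postal_code_list(batch)
        w.process_multi_postal_code()
        # persist each batch so progress is not lost if the website blocks the script
        w.multi_postal_df.to_csv('hdb_resale_batch_%s.csv' % start, index=False)
        time.sleep(PAUSE_SECS)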

The following is the Tableau representation of all the data. It is still a preliminary version.

HDB Resale Prices