
Shorte.st Url Shortener API with Python: Create multiple shorteners at one go (& monetize your links)

A mini project that shortens URLs with Shorte.st using Python. Shorte.st only documents the "curl" version of its API, so in this post the command is translated into Python requests calls for easy integration with the rest of your Python scripts and for shortening multiple URLs in one go.

Please note that I have an account with Shorte.st.

  1. Objectives:
      1. Create python function to shorten url using Shorte.st
  2. Required Tools:
      1. Requests — for handling the HTTP requests. Use pip install requests.
      2. Shorte.st account — provides the API token used for shortening urls.
  3. Steps:
      1. Retrieve the API token from Shorte.st: go to Link Tools –> Developer API and copy the token.
      2. Use requests.put with the following parameters:
        1. headers containing the API token and user-agent
        2. data which contains the target url to shorten.
      3. Get response.text, which contains the shortened url.
      4. Complete! Include the shortened url in target sites, twitter, other social media etc.

Curl command as provided by Shorte.st

curl -H "public-api-token: your_api_token" -X PUT -d "urlToShorten=target_url_to_shortened.com" https://api.shorte.st/v1/data/url

Python function to insert into your own code or to run standalone

import os, sys, re
import requests

USER_AGENT = "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.93 Safari/537.36"

def shorten_url(target_url, api_token):
    """
        Function to shorten a url (with your Shorte.st account).
        Args:
            target_url (str): url to shorten
            api_token (str): api token str
        Returns:
            shortened_url (str)

    """

    headers = {'user-agent': USER_AGENT, 'public-api-token': api_token}
    data = dict(urlToShorten=target_url)

    url = 'https://api.shorte.st/v1/data/url'

    r = requests.put(url, data=data, headers=headers)

    # the response text is JSON; pull out the shortenedUrl field and strip the escaped slashes
    shortened_url = re.search('"shortenedUrl":"(.*?)"', r.text).group(1)
    shortened_url = shortened_url.replace('\\','')

    return shortened_url

if __name__ == "__main__":

    api_token = 'your_api_token'

    urllist = [
                'https://simply-python.com/2018/07/20/fast-download-images-from-google-image-search-with-python-requests-grequests',
                'https://simply-python.com/2018/04/22/building-a-twitter-bot-with-python'

                ]

    for target_url in urllist:
        shortened_url = shorten_url(target_url, api_token)
        print 'shortened_url: {}'.format(shortened_url)

Results

shortened_url: http://destyy.com/wKqD2s
shortened_url: http://destyy.com/wKqD17

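If you prefer not to parse the response with a regular expression, the API reply is JSON (as the shortenedUrl field above suggests), so requests can decode it directly. A minimal sketch, reusing the imports and USER_AGENT from the script above and assuming the response carries a shortenedUrl field:

def shorten_url_json(target_url, api_token):
    """ Same call as above, but parse the JSON response instead of using a regex. """
    headers = {'user-agent': USER_AGENT, 'public-api-token': api_token}
    data = dict(urlToShorten=target_url)
    r = requests.put('https://api.shorte.st/v1/data/url', data=data, headers=headers)
    return r.json()['shortenedUrl']  # assumes the response includes a shortenedUrl field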
 

Further notes 

  1. If you have some fantastic links to share and hope to monetize them, you can explore the monetization options on the Shorte.st site.
  2. The above script is not meant for spamming huge numbers of urls; Shorte.st monitors the quality of the urls being shortened.
  3. An ad-free alternative is bit.ly. Please see the post on using the bit.ly shortener with python if you prefer that.


Package your python code made simple & Fast

A mini project that creates the required python packaging template folders, submits to GitHub and enables pip installation.

  1. Objectives:
      1. Upload a python project to GitHub and make it pip-installable.
  2. Required Tools:
      1. Cookiecutter — for templating. Use pip install cookiecutter.
      2. GitHub account, Github desktop, Git shell — version control, git command line.
      3. PyPI account — for uploading to pypi so a user can just do “pip install your_project”.
  3. Steps:
      1. Use Cookiecutter to set up the template directory and required folders with the relevant docs and files (Readme.md, .gitignore, setup.py etc) for uploading. –> See commands section 1 below.
        • use the commands in the cmd prompt or Git shell for windows (Git shell preferred if you are executing the additional git commands in step 3).
      2. Create a folder with same name as the directory name created in step 1 and place the relevant python codes inside.
      3. Use Git commands to upload files to GitHub. The below commands will only work if the repository is first created in your GitHub account. –> See commands section 2 below.
      4. Alternatively, you can use the GUI version for the GitHub instead of command line to submit your project to the repository.
      5. Create a .pypirc in the same directory as the setup.py file. This will be used to provide the info for uploading to pypi. –> See commands section 3 below.
      6. With the .pypirc created, the project can be uploaded to pypi with the command: python setup.py sdist upload -r pypi

Windows Command prompt for step 1

pip install cookiecutter
cookiecutter https://github.com/wdm0006/cookiecutter-pipproject.git
cd projectname

Git Commands for step 3

git init
git add -A
git commit -m 'first commit'
git remote add origin http://repository_url
git push origin master
git tag {{version}} -m 'adds the version you entered in cookiecutter as the first tag for release'
git push --tags origin master

.pypirc contents for step 5

[distutils] # this tells distutils what package indexes you can push to
index-servers =
    pypi

[pypi]
repository: https://pypi.python.org/pypi
username: {{your_username}}
password: {{your_password}}
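
For reference, the setup.py generated by the cookiecutter template is roughly equivalent to the minimal sketch below (the name, version and author values are placeholders):

from setuptools import setup, find_packages

setup(
    name='projectname',        # placeholder: your package name
    version='0.1.0',
    description='Short description of the project',
    author='your_name',
    packages=find_packages(),
    install_requires=[],       # runtime dependencies, if any
)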

Further notes 

  1. Most of the commands above are from Will McGinnis’ post and python packaging tutorial
  2. To create the empty .pypirc file in windows, use the command prompt, e.g. echo.> .pypirc
  3. Uploading to PyPI requires a verified email address, otherwise there will be an error when uploading.

Fast Download Images from Google Image search with python requests/grequests

A mini project that highlights the usage of requests and grequests.

  1. Objectives:
      1. Download multiple images from Google Image search results.
  2. Required Modules:
      1. Requests — for HTTP requests
      2. grequests — for easy asynchronous HTTP requests.
      3. Both can be installed using pip install requests grequests
  3. Steps:
      1. Retrieve html source from the google image search results.
      2. Retrieve all image url links from above html source. (function: get_image_urls_fr_gs)
      3. Feed the image url list to grequests for multiple downloads (function: dl_imagelist_to_dir)
  4. Breakdown: Steps on grequests implementation.
    1. Very similar to the requests implementation, except that instead of requests.get() you use grequests.get() or grequests.post()
    2. Create a list of GET or POST actions with the different urls as parameters. Attach a further action to run after getting the response, e.g. download the image to file after the get request.
    3. Map the list of get requests with grequests to activate it, e.g. grequests.map(do_stuff, size=x) where x is the number of concurrent async requests. You can choose values of x such as 20, 50, 100 etc. (see the minimal sketch after this list).
    4. Done !

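As a bare sketch of that pattern (the urls and the hook body below are placeholders; the full version follows):

import grequests

def save_response(r, *args, **kwargs):
    # placeholder hook: do something with each response, e.g. write r.content to a file
    print r.url, r.status_code

urls = ['http://example.com/a.jpg', 'http://example.com/b.jpg'] # placeholder urls
do_stuff = [grequests.get(u, hooks={'response': save_response}, stream=True) for u in urls]
grequests.map(do_stuff, size=20) # run up to 20 requests concurrently
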
Below is the complete code.


import os, sys, re
import string
import random
import requests, grequests
from functools import partial
import smallutils as su  #only use for creating folder

USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36'
headers = { 'User-Agent': USER_AGENT }

def get_image_urls_fr_gs(query_key):
    """
        Get all image url from google image search
        Args:
            query_key: search term as of what is input to search box.
        Returns:
            (list): list of url for respective images.

    """

    query_key = query_key.replace(' ','+')#replace space in query space with +
    tgt_url = 'https://www.google.com.sg/search?q={}&tbm=isch&tbs=sbd:0'.format(query_key)#last part is the sort by relv

    r = requests.get(tgt_url, headers = headers)

    urllist = [n for n in re.findall('"ou":"([a-zA-Z0-9_./:-]+.(?:jpg|jpeg|png))",', r.text)] 

    return urllist

def dl_imagelist_to_dir(urllist, tgt_folder, job_size = 100):
    """
        Download all images from list of url link to tgt dir
        Args:
            urllist: list of the image url retrieved from the google image search
            tgt_folder: dir at which the image is stored
        Kwargs:
            job_size: (int) number of downloads to spawn.

    """
    if len(urllist) == 0:
        print "No links in urllist"
        return

    def dl_file(r, folder_dir, filename, *args, **kwargs):
        fname = os.path.join(folder_dir, filename)
        with open(fname, 'wb') as my_file:
            # Read by 4KB chunks
            for byte_chunk in r.iter_content(chunk_size=1024*10):
                if byte_chunk:
                    my_file.write(byte_chunk)
                    my_file.flush()
                    os.fsync(my_file.fileno())

        r.close()

    do_stuff = []
    su.create_folder(tgt_folder)

    for run_num, tgt_url in enumerate(urllist):
        print tgt_url
        # handle the tgt url to be use as basename
        basename = os.path.basename(tgt_url)
        file_name = re.sub('[^A-Za-z0-9.]+', '_', basename ) #prevent special characters in filename

        #handling grequest
        action_item =  grequests.get(tgt_url, hooks={'response': partial(dl_file, folder_dir = tgt_folder, filename=file_name)}, headers= headers,  stream=True)
        do_stuff.append(action_item)

    grequests.map(do_stuff, size=job_size)

def dl_images_fr_gs(query_key, tgt_folder):
    """
        Function to download images from google search

    """
    url_list = get_image_urls_fr_gs(query_key)
    dl_imagelist_to_dir(url_list, tgt_folder, job_size = 100)

if __name__ == "__main__":

    query_key= 'python symbol'
    tgt_folder = r'c:\data\temp\addon'
    dl_images_fr_gs(query_key, tgt_folder)		

Further notes 

  1. Note that the images downloaded from google search are only those initially displayed. Additional images that only appear after the "show more results" button is clicked will not be downloaded. To resolve this:
    1. a user can repeatedly click "show more results", manually save the html source and run the 2nd function (dl_imagelist_to_dir) on the url list extracted from it.
    2. Use python selenium to download the html source.
  2. Instead of using grequests, the requests module can be used to download the images sequentially, one by one.
  3. The downloading of files is broken into chunks, which helps with very big files.
  4. Code can be further extended for downloading other stuff.
  5. Further parameters in the google search url here.

Create Static Website with AWS S3

While Amazon AWS S3 is usually used to store files and documents (objects are stored in buckets), users can easily create their own static website by configuring a bucket to host the webpage. The first step is to sign up for an Amazon AWS account. Users get to enjoy the free-tier version for the first year.

The detailed guide for setting up the static website is provided in the amazon AWS link. Below are the main steps:

  1. Create a bucket. Note that if we have our own registered domain name, we need to ensure the bucket name is the same as the domain name. See the additional steps in the link for mapping the domain name to the bucket url.
  2. Upload two files (index.html and error.html by default; we can specify other names but they have to align with step 3 below) to the bucket. The index.html will be the landing page.
  3. Under bucket properties, select static website hosting, then set the main page (index.html) and the error page (e.g. error.html). This allows the bucket to serve the page (index.html) when the given url is visited.
  4. Note that all objects (including image, video or wav files) in the bucket have their own url.
  5. Enable public access, either per object (click on the object -> permissions) or for the whole bucket by setting the bucket policy (see the sketch after this list).
  6. Note that there will be charges for storage and also for GET/POST requests.

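For step 5, the public-read bucket policy can also be applied programmatically. A minimal sketch with boto3 (the bucket name is a placeholder):

import json
import boto3

bucket_name = 'your_bucket_name' # placeholder

# standard public-read policy: allow anyone to GET objects in this bucket
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::{}/*".format(bucket_name)
    }]
}

s3_client = boto3.client('s3')
s3_client.put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(public_read_policy))
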
A basic index.html can be as simple as the line below, or it can be much more complicated and include client-side rendering/processing (CSS, Javascript, JQuery).

<html><body><h1> This is the body</h1></body></html>

To simplify the uploading process and development work, we can use python with AWS boto3 to upload the different files and set the configuration/permissions for each file. To use boto3 with python, simply pip install boto3. We also need to configure the AWS IAM role and add the credentials to the local PC as shown in the link. An example python script is shown below. Use the ACL argument for permission settings and ContentType to modify the file type.


import smallutils as su
import os, sys
import boto3

TARGET_FNAME = r'directory/targetfile_to_update.html'
TARGET_BUCKET = r'bucket_name'
BUCKET_KNAME = 'filename_in_bucket.html'
MODIFY_CONTENT_TYPE = 1 #changing the default content type. particular for html, need change to text/html.

FOLDER_NAME = 'DATA/' #need a / at the end

PUT_FILES = 1 #if 1 -- put files, else treat as creating folder

if __name__ == "__main__":
    print "Print S3 resources"
    s3 = boto3.resource('s3') 

    print "List of buckets: "
    for bucket in s3.buckets.all():
        print bucket.name

    if PUT_FILES:
        print "Put files in bucket."
        data = open(TARGET_FNAME, 'rb')
        if MODIFY_CONTENT_TYPE:
            s3.Bucket(TARGET_BUCKET).put_object(Key=BUCKET_KNAME, Body=data, ACL='public-read', ContentType = 'text/html' ) #modify the content type
        else:
            s3.Bucket(TARGET_BUCKET).put_object(Key=BUCKET_KNAME, Body=data, ACL='public-read') #keep the default content type
    else:
        # assume we are creating a folder
        print "Create Folder"
        s3.Bucket(TARGET_BUCKET).put_object(Key=FOLDER_NAME, Body='') # ACL='public-read-write'??

We can also add in CSS and Jquery to render the index.html website.

Building a twitter bot with python

For this post, we will be creating a bot that tweets daily (and automatically) on world events or any desired categories.

Major steps as follows:

1. Create a twitter account and API authorization.

As we will be automating with python, we need to authorize the twitter API to work with python. Sign in to the twitter application page, click the "Create New App" button and fill in the required fields. You will need to obtain the "Access Token" and "Access Token Secret". These two tokens will be used by the python module in the later part.

2. Using python and tweepy

The Tweepy module will be used to handle twitter-related actions such as posting tweets, getting results, or following other accounts. The snippet below shows how to initialize the api for posting tweets and other twitter-related calls. It requires the consumer key and consumer secret from part 1.

import os, sys, datetime, re
import tweepy
import ConfigParser

def get_twitter_api():

    config_file_list = [
                        'directory/configfile_that_contain_credentials.ini'
                        ]

    #get the config_file that exists
    config_file = [n for n in config_file_list if os.path.exists(n)][0] #take the first entry

    parser = ConfigParser.ConfigParser()
    parser.read(config_file)

    CONSUMER_KEY =parser.get('CONFIG', 'CONSUMER_KEY')
    CONSUMER_SECRET = parser.get('CONFIG', 'CONSUMER_SECRET')
    ACCESS_KEY = parser.get('CONFIG', 'ACCESS_KEY')
    ACCESS_SECRET = parser.get('CONFIG', 'ACCESS_SECRET')

    auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
    auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)

    api = tweepy.API(auth)
    return api
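
The config file referenced above is a plain .ini file. Matching the parser.get calls, a minimal one would look something like this (the values are placeholders):

[CONFIG]
CONSUMER_KEY = your_consumer_key
CONSUMER_SECRET = your_consumer_secret
ACCESS_KEY = your_access_token
ACCESS_SECRET = your_access_token_secret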

3. Getting Contents

We can either create our own contents or pull contents from various sources (the twitter account then acts as a sort of feed/content aggregator). We will explore one simple case of displaying RSS feeds from various sources (such as blogs, news etc.) as contents for our twitter bot. The first step is to get all the RSS feeds from the various sites. Below are some of the python scripts that aid in the collection of RSS feeds, links and contents. The main module used is python pattern, for all url/RSS feed access and downloading.

You can pip install the modules pattern, smallutils and pandas for the python snippets below.

3.1 Getting all url links from a particular website

This is for cases such as an aggregation site that displays a list of websites whose links you want to collect. Note that the following script retrieves all the link tags in the website, so there may be redundant entries. You can set a filter to limit the search or manually select from the output list.

import re
from pattern.web import URL, extension
from pattern.web import find_urls
from pattern.web import Newsfeed

def get_all_url_link_fr_target_website(tgt_site):
    """ Quick way to harvest all the url links and extract those that are feeds"""

    url = URL(tgt_site)
    page_source = url.download()

    return find_urls(page_source)

tgt_site = 'http://target_aggregation_site.com' # placeholder: url of the site to harvest links from
site_list = []
for site in [n for n in get_all_url_link_fr_target_website(tgt_site) if not re.search("jpg|jpeg|png|ico|bit|svg|js", n)]:
    site_list.append(site)

site_list = [n for n in site_list if re.search("http(?:s)?://(?:www.)?[a-zA-Z0-9_]*.[a-zA-Z0-9_]*/$", n)]

for n in sorted(site_list):
    print n

3.2 Getting RSS feed links from a website

Sometimes it is difficult to locate the RSS link for a particular website or blog. The following script searches for any RSS feed links in the website and outputs them. Again, there may be some redundant links present.

import re
from pattern.web import URL, extension
from pattern.web import find_urls
from pattern.web import Newsfeed
import smallutils as su

def get_feed_link_fr_target_website(tgt_site, pull_one = 1):
    """ Get the feed url from target website
        Args:
            tgt_site = url of target site
            pull_one = pull only 1 particular feed link

    """

    url = URL(tgt_site)
    page_source = url.download()

    if pull_one:
        return [n for n in find_urls(page_source) if re.search("feed|feeds",n)][0]
    else:
        return [n for n in find_urls(page_source) if re.search("feed|feeds",n)]

tgt_file = r'directory/txtfile_with_all_url.txt'
url_list = su.read_data_fr_file(tgt_file)

for url in url_list:
    try:
        w = get_feed_link_fr_target_website(url, 0)
    except:
        continue

    if type(w) == list:
        for n in w:
            print n

3.3 Extracting contents from the RSS feeds

To extract contents from the RSS feeds, we need a python module that can parse the RSS feed structure (primarily xml). We will make use of python pattern for the RSS feed parsing and pandas to save the extracted data in csv format. The following snippet takes in a file containing a list of feed urls and retrieves the corresponding feeds.

from pattern.web import URL, extension
from pattern.web import find_urls
from pattern.web import Newsfeed
import smallutils as su
import pandas as pd

def get_feed_details_fr_url_list(url_list, save_csvfilename):
    """ Get the feeds info and save as dataframe to target location"""
    target_list = []
    for feed_url in url_list:
        print feed_url
        if feed_url == "-":
            break
        try:
            for result in Newsfeed().search(feed_url)[:2]:
                print repr(result.title), repr(result.url), repr(result.date)
                # extract_site_name_fr_url is a small helper (not shown here) that derives a short site name from the feed url
                temp_data = {"title":result.title, "feed_url":result.url, "date":result.date, "ref":extract_site_name_fr_url(feed_url)}
                target_list.append(temp_data)
            print "*"*18
            print
        except:
            print "No feeds found"
            continue

    ## save to pandas dataframe and write to csv
    df = pd.DataFrame(target_list)
    df.to_csv(save_csvfilename, index= False , encoding='utf-8')

tgt_file = r'directory\tgt_file_that_contain_list_of_feeds_url.txt'
url_list = su.read_data_fr_file(tgt_file)

get_feed_details_fr_url_list(url_list, r"output\feed_result.csv")

You can also refer to the post below on feeds extraction.

  1. Get RSS feeds using python pattern

3.4 URL shortener

Normally we would like to include the actual link in the tweet after the content. However, sometimes the url is too long and may hit the twitter character limit. In this case, a URL shortener helps. There are a couple of URL shortener services, such as Google and TinyURL. We will incorporate TinyURL in our python script.

from pattern.web import URL, extension

def shorten_target_url(tgt_url):
    agent = 'http://tinyurl.com/api-create.php?url={}'
    query_url = agent.format(tgt_url)

    url = URL(query_url)
    page_source = url.download()

    return page_source
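
The TinyURL endpoint returns the shortened link as plain text, so usage is a single call (the url below is just an illustration):

print shorten_target_url('https://www.example.com/some/very/long/url') # prints something like http://tinyurl.com/xxxxxxx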

4. Posting contents to Twitter

We make use of the snippets in sections 2 and 3 to create a combined script that authenticates the user, gets all the feeds from a text file listing the feed urls, selects a few of the more recent feeds, shortens their links and posts them to the twitter account with the targeted hashtags. Do observe proper tweeting etiquette and avoid spamming.

import os, sys, datetime, time
import pandas as pd
from FeedsHandler import get_feed_details_fr_url_list
from urlshortener import shorten_target_url
from initialize_twitter_api import get_twitter_api
import smallutils as su

if __name__  == "__main__":

    print "start of project"

    ## Defined parameters
    tgt_file_list = [
                        r'directory\tgt_file_contain_feedurl_list.txt'
                        ]

    #get the tgt_file that exists
    tgt_file = [n for n in tgt_file_list if os.path.exists(n)][0] #take the first entry

    feeds_outputfile =  r"c:\data\temp\feed_result.csv"
    hashtags = '#DIY #hacks' #include hash tags
    feeds_sample_size = 8

    ## Get feeds from url list
    print "Get feeds from target url list ... "
    url_list = su.read_data_fr_file(tgt_file)
    get_feed_details_fr_url_list(url_list, feeds_outputfile)

    ## Read the feeds_outputfile and
    print "Handling the feeds data"
    feeds_df = pd.read_csv(feeds_outputfile)
    feeds_df['date'] = pd.to_datetime(feeds_df['date'])

    ## filter the date within one day to today
    feeds_df['date_delta'] = datetime.datetime.now() - feeds_df['date']
    feeds_df['date_delta_days'] = feeds_df['date_delta'].apply(lambda x: float(x.days))

    feeds_df_filtered = feeds_df[feeds_df['date_delta_days'] < 1] # keep only feeds published within one day of today

    if len(feeds_df_filtered) > feeds_sample_size: # do a sampling if the input is high
        feeds_df_filtered_sample = feeds_df_filtered.sample(feeds_sample_size)
    else:
        feeds_df_filtered_sample = feeds_df_filtered

    ## shorten each feed url for the tweet (uses shorten_target_url imported above)
    feeds_df_filtered_sample['feeds_url_shorten'] = feeds_df_filtered_sample['feed_url'].apply(shorten_target_url)

    ## set up for twitter api
    print "Initialized the Twitter API"
    api = get_twitter_api()

    ## handling message to twitter
    print "Sending all data to twitter"
    for index, row in feeds_df_filtered_sample.iterrows():
        #convert to full text for output
        target_txt = 'Via @' + row['ref'] + ': ' + row['title'] + ' ' + row['feeds_url_shorten'] + ' ' + hashtags
        try:
            api.update_status(target_txt)
        except:
            pass
        time.sleep(60*30)

5. Scheduling tweets

We can use either the windows task scheduler or a cron job to schedule the daily tweet posting.
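
For example, on Linux a crontab entry along these lines would run the bot every morning (the python path and script path are placeholders):

# post the daily tweets at 9am
0 9 * * * /usr/bin/python /path/to/twitter_bot_script.py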

6. What to do next

The contents above are derived mainly from RSS feeds. We can add more content by retweeting or embedding youtube videos automatically. A sample twitter bot created using the above methods is included in the link.

You can refer to some of the posts that include retrieving data from twitter.

  1. Get Stocks tweets using Twython
  2. Get Stocks tweets using Twython (Updates)

Automating Google Sheets with Python

This post demonstrates basic use of python to read/edit Google Sheets. For a fast setup, you can visit this link. Below is the setup procedure copied from the link itself.

  1. Use this wizard to create or select a project in the Google Developers Console and automatically turn on the API. Click Continue, then Go to credentials.
  2. On the Add credentials to your project page, click the Cancel button.
  3. At the top of the page, select the OAuth consent screen tab. Select an Email address, enter a Product name if not already set, and click the Save button.
  4. Select the Credentials tab, click the Create credentials button and select OAuth client ID.
  5. Select the application type Other, enter the name “Google Sheets API Quickstart”, and click the Create button.
  6. Click OK to dismiss the resulting dialog.
  7. Click the file_download (Download JSON) button to the right of the client ID.
  8. Move this file to your working directory and rename it client_secret.json.

The next step is to install the Google client library using pip.

pip install --upgrade google-api-python-client

The final step is to copy the sample from the same link. The first time the script runs, you will need to sign in with Google. Use the command below to link the sheets credentials to the targeted gmail account, and follow the instructions at the prompt.

$ python name_of_script.py --noauth_local_webserver

You can easily access/modify the contents of the sheets, especially if they are in table format, by linking them with Python Pandas.

# authorization: reference from the link (get_credentials comes from the quickstart sample)
import httplib2
from apiclient import discovery

credentials = get_credentials()
http = credentials.authorize(httplib2.Http())
discoveryUrl = ('https://sheets.googleapis.com/$discovery/rest?'
                'version=v4')
service = discovery.build('sheets', 'v4', http=http,
                          discoveryServiceUrl=discoveryUrl)

# Target spreadsheet
spreadsheetId = 'your_spreadsheet_id'
rangeName = 'Sheet1!A1:N'

# read from spreadsheet
result = service.spreadsheets().values().get(
    spreadsheetId=spreadsheetId, range=rangeName).execute()
values = result.get('values', [])

import pandas as pd
# Pandas dataframe with values and header
data_df = pd.DataFrame(values[1:], columns=values[0])
print data_df
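
Writing values back to the sheet uses the same service object with values().update(). A minimal sketch (the range and rows below are placeholders):

update_body = {'values': [['Name', 'Score'], ['abc', '10']]} # placeholder rows
service.spreadsheets().values().update(
    spreadsheetId=spreadsheetId, range='Sheet1!A1',
    valueInputOption='RAW', body=update_body).execute()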

Related Posts:

  1. Automating Ms Powerpoint with Python: https://simply-python.com/2014/07/04/rapid-generation-of-powerpoint-report-with-template-scanning
  2. Using Excel with Python: https://simply-python.com/2014/08/20/manage-and-extract-data-using-python-and-excel-tables

 

YouTube videos download using Python (Part 2)

A continuation from the “Search and download YouTube videos using Python” post with more features added.

The initial project only allowed searching for playlists within YouTube and downloading the videos from all the playlists found. The project is expanded with the following features:

  1. Multiple searches for different playlists can be entered in one go (key in all the search phrases in a text file) and the videos found for each search phrase are downloaded automatically. Playlist search is recommended for searches such as song playlists or online courses (e.g. "Top favorite English songs/Most popular English songs", "Machine learning Coursera").
  2. Non-playlist search (normal video search): both single and multiple searches can be performed. Use this for normal video searches or general topics with less chance of being in a playlist (e.g. "Python Machine learning").
  3. Single video download (directly uses the Pafy module). Users just need to input the video link.
  4. Multiple options: users can limit the number of downloads, filter on counts such as popularity, limit the video length, and download in video or audio format.

The script makes use of the Python Pattern module for URL requests and DOM object processing. For the actual downloading of videos, it utilizes Pafy. Pafy is a very comprehensive python module that allows download in both video and audio format. There are other features of Pafy which are not used in this module.

The full script can be found on GitHub.

Search and download youtube videos using Python

The following python module allows users to search YouTube videos and download all the videos from the different playlists found within the search. Currently, it is able to search for playlists or collections of videos and download the individual videos from each of the playlists.

For example, searching for "Top English KTV" will scan all the song playlists found in the search results and collect the individual song web links from each playlist to be downloaded locally. Users can choose to download either in video format or in audio format.

The script makes use of the Python Pattern module for URL requests and DOM object processing. For the actual downloading of videos, it utilizes Pafy. Pafy is a very comprehensive python module that allows download in both video and audio format. There are other features of Pafy which are not used in this module.

The following is the main flow of the script.

  1. Form the YouTube search URL with the prefix “https://www.youtube.com/results?search_query=” and the search keyword
  2. Based on the above URL, scrape and get all the urls that link to a playlist. The Xpath for the playlist element can easily be obtained using any web browser's developer options: inspect the element and retrieve the Xpath. The playlist url can be obtained using the pattern dom object: ‘dom_object(div ul li a[class=”yt-uix-sessionlink”])’.
  3. Filter the list of extracted links to keep only urls starting with “/playlist?“.
  4. From the list of playlists, scrape each individual playlist webpage to retrieve the url link for each individual video. The playlist element can be retrieved using the pattern dom object: ‘dom_object(div ul li a[class=”yt-uix-sessionlink”])’.
  5. Download each individual video/audio to local computer using Pafy module by passing in the video URL to Pafy.

Below is the sample code to download a series of videos.


from youtube_search_and_download import YouTubeHandler

search_key = 'chinese top ktv' #keywords
yy = YouTubeHandler(search_key)
yy.download_as_audio =1 # 1- download as audio format, 0 - download as video
yy.set_num_playlist_to_extract(5) # number of playlist to download

print 'Get all the playlist'
yy.get_playlist_url_list()
print yy.playlist_url_list

## Get all the individual video and title from each of the playlist
yy.get_video_link_fr_all_playlist()
for key in  yy.video_link_title_dict.keys():
    print key, '  ', yy.video_link_title_dict[key]
    print
print

print 'download video'
yy.download_all_videos(dl_limit =200) #number of videos to download.

This is the initial script. There is still work in progress, such as an option to download individual videos instead of playlists from the search page, and catering for multiple searches.

The full script can be found on GitHub.

Saving images from google search using Selenium and Python

Below is a short python script that lets users save images from a Google image search to the local drive. It requires Selenium, as Google requires users to press the "show more results" button and scroll all the way to the bottom of the page for more images to be displayed; Selenium is the easier choice for this.

The below python script will have the following:

  1. Enable users to input multiple search keywords either by entry or get from file. Users can leave the program to download on its own after creating a series of search keywords.
  2. Based on each keyword, form the google search url. Most of the parameters inside the google search url can be fixed; the only part that requires changing is the search keyword.
  3. Run the google search and obtain the page source for the images. This is done using Selenium. To obtain the full set of images, Selenium presses the "show more results" button and scrolls the scrollbar to the bottom of the page so that Google loads the remaining images. There seems to be a hard quota of 1000 pics for image search on Google.
  4. Use python pattern and xpath to retrieve the corresponding url for each image. The xpath will use the following tag:
    • tag_list = dom(‘a.rg_l’) #a tag with class = rg_l
  5. Based on each url, it will check the following before downloading the image file:
    • whether there is any redirect of site. This is done using Python Pattern redirect function.
    • check the extension whether it is a valid image file type.
  6. The image files are downloaded to a local folder (named by date). Each image is labelled according to the search key and a counter. A corresponding text file maps each image label to its url for reference. The full script is shown below.
import re, os, sys, datetime, time
import pandas
from selenium import webdriver
from contextlib import closing
from selenium.webdriver import Firefox
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

from pattern.web import URL, extension, cache, plaintext, Newsfeed, DOM

class GoogleImageExtractor(object):

    def __init__(self, search_key = '' ):
        """ Google image search class
            Args:
                search_key to be entered.

        """
        if type(search_key) == str:
            ## convert to list even for one search keyword to standardize the pulling.
            self.g_search_key_list = [search_key]
        elif type(search_key) == list:
            self.g_search_key_list = search_key
        else:
            print 'google_search_keyword not of type str or list'
            raise

        self.g_search_key = ''

        ## user options
        self.image_dl_per_search = 200

        ## url construct string text
        self.prefix_of_search_url = "https://www.google.com.sg/search?q="
        self.postfix_of_search_url = '&source=lnms&tbm=isch&sa=X&ei=0eZEVbj3IJG5uATalICQAQ&ved=0CAcQ_AUoAQ&biw=939&bih=591'# non changable text
        self.target_url_str = ''

        ## storage
        self.pic_url_list = []
        self.pic_info_list = []

        ## file and folder path
        self.folder_main_dir_prefix = r'C:\data\temp\gimage_pic'

    def reformat_search_for_spaces(self):
        """
            Method call immediately at the initialization stages
            get rid of the spaces and replace by the "+"
            Use in search term. Eg: "Cookie fast" to "Cookie+fast"

            steps:
            strip any lagging spaces if present
            replace the self.g_search_key
        """
        self.g_search_key = self.g_search_key.rstrip().replace(' ', '+')

    def set_num_image_to_dl(self, num_image):
        """ Set the number of image to download. Set to self.image_dl_per_search.
            Args:
                num_image (int): num of image to download.
        """
        self.image_dl_per_search = num_image

    def get_searchlist_fr_file(self, filename):
        """Get search list from filename. Ability to add in a lot of phrases.
            Will replace the self.g_search_key_list
            Args:
                filename (str): full file path
        """
        with open(filename,'r') as f:
            self.g_search_key_list = f.readlines()

    def formed_search_url(self):
        ''' Form the url either one selected key phrases or multiple search items.
            Get the url from the self.g_search_key_list
            Set to self.sp_search_url_list
        '''
        self.reformat_search_for_spaces()
        self.target_url_str = self.prefix_of_search_url + self.g_search_key +\
                                self.postfix_of_search_url

    def retrieve_source_fr_html(self):
        """ Make use of selenium. Retrieve from html table using pandas table.

        """
        driver = webdriver.Firefox()
        driver.get(self.target_url_str)

        ## wait for log in then get the page source.
        try:
            driver.execute_script("window.scrollTo(0, 30000)")
            time.sleep(2)
            self.temp_page_source = driver.page_source
            #driver.find_element_by_css_selector('ksb _kvc').click()#cant find the class
            driver.find_element_by_id('smb').click() #ok
            time.sleep(2)
            driver.execute_script("window.scrollTo(0, 60000)")
            time.sleep(2)
            driver.execute_script("window.scrollTo(0, 60000)")

        except:
            print 'not able to find'
            driver.quit()

        self.page_source = driver.page_source

        driver.close()

    def extract_pic_url(self):
        """ extract all the raw pic url in list

        """
        dom = DOM(self.page_source)
        tag_list = dom('a.rg_l')

        for tag in tag_list[:self.image_dl_per_search]:
            tar_str = re.search('imgurl=(.*)&imgrefurl', tag.attributes['href'])
            try:
                self.pic_url_list.append(tar_str.group(1))
            except:
                print 'error parsing', tag

    def multi_search_download(self):
        """ Mutli search download"""
        for indiv_search in self.g_search_key_list:
            self.pic_url_list = []
            self.pic_info_list = []

            self.g_search_key = indiv_search

            self.formed_search_url()
            self.retrieve_source_fr_html()
            self.extract_pic_url()
            self.downloading_all_photos() #some download might not be jpg?? use selnium to download??
            self.save_infolist_to_file()

    def downloading_all_photos(self):
        """ download all photos to particular folder

        """
        self.create_folder()
        pic_counter = 1
        for url_link in self.pic_url_list:
            print pic_counter
            pic_prefix_str = self.g_search_key  + str(pic_counter)
            self.download_single_image(url_link.encode(), pic_prefix_str)
            pic_counter = pic_counter +1

    def download_single_image(self, url_link, pic_prefix_str):
        """ Download data according to the url link given.
            Args:
                url_link (str): url str.
                pic_prefix_str (str): pic_prefix_str for unique label the pic
        """
        self.download_fault = 0
        file_ext = os.path.splitext(url_link)[1] #use for checking valid pic ext
        temp_filename = pic_prefix_str + file_ext
        temp_filename_full_path = os.path.join(self.gs_raw_dirpath, temp_filename )

        valid_image_ext_list = ['.png','.jpg','.jpeg', '.gif', '.bmp', '.tiff'] #not comprehensive

        url = URL(url_link)
        if url.redirect:
            return # if there is re-direct, return

        if file_ext not in valid_image_ext_list:
            return #return if not valid image extension

        f = open(temp_filename_full_path, 'wb') # save as test.gif
        print url_link
        self.pic_info_list.append(pic_prefix_str + ': ' + url_link )
        try:
            f.write(url.download())#if have problem skip
        except:
            #if self.__print_download_fault:
            print 'Problem with processing this data: ', url_link
            self.download_fault =1
        f.close()

    def create_folder(self):
        """
            Create a folder to put the log data segregate by date

        """
        self.gs_raw_dirpath = os.path.join(self.folder_main_dir_prefix, time.strftime("_%d_%b%y", time.localtime()))
        if not os.path.exists(self.gs_raw_dirpath):
            os.makedirs(self.gs_raw_dirpath)

    def save_infolist_to_file(self):
        """ Save the info list to file.

        """
        temp_filename_full_path = os.path.join(self.gs_raw_dirpath, self.g_search_key + '_info.txt' )

        with  open(temp_filename_full_path, 'w') as f:
            for n in self.pic_info_list:
                f.write(n)
                f.write('\n')

if __name__ == '__main__':

    choice =4

    if choice ==4:
        """test the downloading of files"""
        w = GoogleImageExtractor('')#leave blanks if get the search list from file
        searchlist_filename = r'C:\data\temp\gimage_pic\imgsearch_list.txt'
        w.set_num_image_to_dl(200)
        w.get_searchlist_fr_file(searchlist_filename)#replace the searclist
        w.multi_search_download()

Sending alerts to iphone or Android phone using python

I was trying to figure out ways to send stock alerts to my phone when I came across the following blog, which demonstrates this using the app called "Pushover":

How To Get Alerts On Stock Price Changes Using Python

Pushover provides very good API support and source code for a variety of languages, including python, for those who need to set up the notification program. However, Pushover requires a one-time license fee for continued use and is limited to iOS. A free alternative is Pushbullet. Pushbullet caters to a similar function and can push alerts to (android and iOS) phones plus a computer with any of the major web browsers.

Pushbullet requires an access token, which can then be used to submit notification text via a POST command to the specified url, making it very easy to set up. Below is a sample function to post a notification, which can be set up easily with requests.

Azelphur also provides pyPushBullet on GitHub, which includes all the PushBullet functions in python.

import requests
import json

def send_notification_via_pushbullet(title, body):
    """ Sending notification via pushbullet.
        Args:
            title (str) : title of text.
            body (str) : Body of text.
    """
    data_send = {"type": "note", "title": title, "body": body}

    ACCESS_TOKEN = 'your_access_token'
    resp = requests.post('https://api.pushbullet.com/v2/pushes', data=json.dumps(data_send),
                         headers={'Authorization': 'Bearer ' + ACCESS_TOKEN, 'Content-Type': 'application/json'})
    if resp.status_code != 200:
        raise Exception('Something wrong')
    else:
        print 'complete sending'
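
A stock alert, for instance, can then be pushed with a single call (the message below is just an illustration):

send_notification_via_pushbullet('Stock alert', 'Target price reached for your watchlist stock')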