Month: April 2018

Create Static Website with AWS S3

While Amazon AWS S3 is usually used to store files and documents (objects are stored in buckets), users can easily create their own static website by configuring a bucket to host the web pages. The first step is to sign up for an Amazon AWS account. Users get to enjoy the free tier for the first year.

A detailed guide for setting up the static website is provided in the Amazon AWS link. The main steps are listed below:

  1. Create a bucket. Note that if we have our own registered domain name, the bucket name must be the same as the domain name. See the additional steps in the link for mapping the domain name to the bucket URL.
  2. Upload two files (index.html and error.html by default; we can specify other names but they have to align with step 3 below) to the bucket. The index.html will be the landing page.
  3. Under bucket properties, select static website hosting, then set the main page (index.html) and the error page (e.g. error.html). This allows the bucket to serve the main page (index.html) when the given URL is visited.
  4. Note that every object (including image, video or wav files) in the bucket has its own URL.
  5. Enable public access, either on every single object (objects -> permissions) or on the whole bucket by setting the bucket policy (see the sketch after this list).
  6. Note that there will be charges for storage and also for GET/POST requests.
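
For those who prefer to script the bucket configuration in steps 3 and 5 as well, below is a minimal boto3 sketch. The bucket name is a placeholder, and the policy simply grants public read access to every object in the bucket.

import json
import boto3

TARGET_BUCKET = 'mybucket.example.com' # placeholder; use your own bucket name

s3_client = boto3.client('s3')

# enable static website hosting with the main and error pages (step 3)
s3_client.put_bucket_website(
    Bucket=TARGET_BUCKET,
    WebsiteConfiguration={
        'IndexDocument': {'Suffix': 'index.html'},
        'ErrorDocument': {'Key': 'error.html'},
    }
)

# allow public read access to the whole bucket (step 5)
public_read_policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Sid': 'PublicReadGetObject',
        'Effect': 'Allow',
        'Principal': '*',
        'Action': 's3:GetObject',
        'Resource': 'arn:aws:s3:::{}/*'.format(TARGET_BUCKET),
    }]
}
s3_client.put_bucket_policy(Bucket=TARGET_BUCKET, Policy=json.dumps(public_read_policy))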

A basic index.html can be as simple as the one below, or it can be much more complicated and include client side rendering/processing (CSS, Javascript, JQuery).

<html><body><h1> This is the body</h1></body></html>

To simplify the uploading process and development work, we can use python with aws boto3 to automatically upload different files and set the configurations/permissions for each file. To use boto3 with python, simply pip install boto3. We also need to configure the AWS IAM role and the local PC to include the credentials, as shown in the link. An example of the python script is shown below. Use the ACL argument for the permission setting and the ContentType argument to modify the file type.


import smallutils as su
import os, sys
import boto3

TARGET_FNAME = r'directory/targetfile_to_update.html'
TARGET_BUCKET = r'bucket_name'
BUCKET_KNAME = 'filename_in_bucket.html'
MODIFY_CONTENT_TYPE = 1 #change the default content type; in particular, html files need to be set to text/html.

FOLDER_NAME = 'DATA/' #need a / at the end

PUT_FILES = 1 #if 1 -- put files, else treat as creating a folder

if __name__ == "__main__":
    print "Print S3 resources"
    s3 = boto3.resource('s3') 

    print "List of buckets: "
    for bucket in s3.buckets.all():
        print bucket.name

    if PUT_FILES:
        print "Put files in bucket."
        data = open(TARGET_FNAME, 'rb')
        if MODIFY_CONTENT_TYPE:
            s3.Bucket(TARGET_BUCKET).put_object(Key=BUCKET_KNAME, Body=data, ACL='public-read', ContentType = 'text/html' ) #modify the content type
        else:
            s3.Bucket(TARGET_BUCKET).put_object(Key=BUCKET_KNAME, Body=data, ACL='public-read') #keep the default content type
    else:
        # assume we are creating a folder
        print "Create Folder"
        s3.Bucket(TARGET_BUCKET).put_object(Key=FOLDER_NAME, Body='') # ACL='public-read-write'??

We can also add CSS and JQuery to style and render the index.html website.


Building a twitter bot with python

For this post, we will be creating a bot that tweets daily (and automatically) on world events or any desired categories.

Major steps as follows:

1. Create a twitter account and set up API authorization.

As we will be automating with python, we need to authorize a Twitter API application to work with python. Sign in to the Twitter application management page, click the "Create New App" button and fill in the required fields. You will then need to obtain the "Consumer Key", "Consumer Secret", "Access Token" and "Access Token Secret". These tokens will be used by the python module in the later part.

2. Using python and tweepy

The Tweepy module will be used to handle twitter related actions such as posting tweets, getting results and even following/unfollowing users. The snippet below shows how to initialize the api for posting tweets and other twitter related actions. It requires the consumer keys and access tokens from part 1.

import os, sys, datetime, re
import tweepy
import ConfigParser

def get_twitter_api():

    config_file_list = [
                        'directory/configfile_that_contain_credentials.ini'
                        ]

    #get the config_file that exists
    config_file = [n for n in config_file_list if os.path.exists(n)][0] #take the first entry

    parser = ConfigParser.ConfigParser()
    parser.read(config_file)

    CONSUMER_KEY = parser.get('CONFIG', 'CONSUMER_KEY')
    CONSUMER_SECRET = parser.get('CONFIG', 'CONSUMER_SECRET')
    ACCESS_KEY = parser.get('CONFIG', 'ACCESS_KEY')
    ACCESS_SECRET = parser.get('CONFIG', 'ACCESS_SECRET')

    auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
    auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)

    api = tweepy.API(auth)
    return api
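
Below is a quick usage sketch. The .ini layout shown in the comments is an assumption based on the parser.get calls above; replace the placeholder values with the keys obtained in part 1.

# Assumed layout of the credentials .ini file read by get_twitter_api():
# [CONFIG]
# CONSUMER_KEY = your_consumer_key
# CONSUMER_SECRET = your_consumer_secret
# ACCESS_KEY = your_access_token
# ACCESS_SECRET = your_access_token_secret

api = get_twitter_api()
api.update_status("Hello world from my twitter bot!") # post a simple test tweet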

3. Getting Contents

We can either create our own contents or get contents from various sources (the twitter account then acts as a sort of feed/content aggregator). We will explore one simple case: displaying RSS feeds from various sources (such as blogs, news sites etc) as contents for our twitter bot. The first step is to collect all the RSS feeds from the various sites. Below are some python scripts that aid in the collection of RSS feeds, links and contents. The main module used is python pattern, for all url/RSS feed access and downloading.

You can pip install the modules pattern, smallutils and pandas for the python snippets below.

3.1 Getting all url links from a particular website

This is for cases such as an aggregation site that displays a list of websites, where you might be interested in getting all the website links. Note that the following script retrieves all the link tags in the website, so there might be redundant data. You can set a filter to limit the search or manually select from the output list.

from pattern.web import URL, extension
from pattern.web import find_urls
from pattern.web import Newsfeed
import re

def get_all_url_link_fr_target_website(tgt_site):
    """ Quick way to harvest all the url links and extract those that are feeds"""

    url = URL(tgt_site)
    page_source = url.download()

    return find_urls(page_source)

tgt_site = 'http://www.example.com' # placeholder; replace with the target aggregation site
site_list = []

for site in [n for n in get_all_url_link_fr_target_website(tgt_site) if not re.search("jpg|jpeg|png|ico|bit|svg|js",n)]:
    site_list.append(site)

# keep only the top-level website links
site_list = [n for n in site_list if re.search("http(?:s)?://(?:www.)?[a-zA-Z0-9_]*.[a-zA-Z0-9_]*/$",n)]

for n in sorted(site_list):
    print n

3.2 Getting RSS feeds link from a website

Sometimes it is difficult to find the RSS link for a particular website or blog. The following script searches for any RSS feed links in the website and outputs them. Again, there might be some redundant links present.

from pattern.web import URL, extension
from pattern.web import find_urls
from pattern.web import Newsfeed
import smallutils as su
import re

def get_feed_link_fr_target_website(tgt_site, pull_one = 1):
    """ Get the feed url from target website
        Args:
            tgt_site = url of target site
            pull_one = pull only 1 particular feed link

    """

    url = URL(tgt_site)
    page_source = url.download()

    if pull_one:
        return [n for n in find_urls(page_source) if re.search("feed|feeds",n)][0]
    else:
        return [n for n in find_urls(page_source) if re.search("feed|feeds",n)]

tgt_file = r'directory/txtfile_with_all_url.txt'
url_list = su.read_data_fr_file(tgt_file)

for url in url_list:
    try:
        w = get_feed_link_fr_target_website(url, 0)
    except:
        continue

    # print the feed links found for each website
    if type(w) == list:
        for n in w:
            print n

3.3 Extracting contents from the RSS feeds

To extract contents from the RSS feeds, we need a python module that can parse the RSS feed structure (primarily xml format). We will make use of python pattern for the RSS feed parsing and pandas to save the extracted data in csv format. The following snippet takes in a file that contains a list of feed urls and retrieves the corresponding feeds.

from pattern.web import URL, extension
from pattern.web import find_urls
from pattern.web import Newsfeed
import smallutils as su
import pandas as pd

def get_feed_details_fr_url_list(url_list, save_csvfilename):
    """ Get the feeds info and save as dataframe to target location"""
    target_list = []
    for feed_url in url_list:
        print feed_url
        if feed_url == "-":
            break
        try:
            for result in Newsfeed().search(feed_url)[:2]:
                print repr(result.title), repr(result.url), repr(result.date)
                # extract_site_name_fr_url is a helper (defined elsewhere by the author) that derives a short site name from the feed url
                temp_data = {"title":result.title, "feed_url":result.url, "date":result.date, "ref":extract_site_name_fr_url(feed_url)}
                target_list.append(temp_data)
            print "*"*18
            print
        except:
            print "No feeds found"
            continue

    ## save to pandas dataframe and export to csv
    df = pd.DataFrame(target_list)
    df.to_csv(save_csvfilename, index= False , encoding='utf-8')

tgt_file = r'directory\tgt_file_that_contain_list_of_feeds_url.txt'
url_list = su.read_data_fr_file(tgt_file)

get_feed_details_fr_url_list(url_list, r"output\feed_result.csv")

You can also refer to the post below on feeds extraction.

  1. Get RSS feeds using python pattern

3.4 URL shortener

Normally we would like to include the actual link in the tweet together with the content. However, sometimes the url is too long and may hit the twitter character limit. In this case, we can use a URL shortener to help. There are a couple of URL shortener services, such as google and tinyurl. We will incorporate tinyurl in our python script.

from pattern.web import URL, extension

def shorten_target_url(tgt_url):
    agent = 'http://tinyurl.com/api-create.php?url={}'
    query_url = agent.format(tgt_url)

    url = URL(query_url)
    page_source = url.download()

    return page_source
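
A quick usage sketch is shown below; the long url is just a made-up example. The tinyurl api simply returns the shortened url as plain text in the page source.

short_url = shorten_target_url('https://www.example.com/some/very/long/article/url/that/will/not/fit/in/a/tweet')
print short_url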

4. Posting contents to Twitter

We make use of the snippets in sections 2 and 3 to create a combined script that authenticates the user, gets all the feeds from a text file containing a list of feed urls, selects a few of the more recent feeds and posts them to the twitter account with the targeted hash tags and url shortening. Do observe proper tweeting etiquette and avoid spamming.

import os, sys, datetime, time
import pandas as pd
from FeedsHandler import get_feed_details_fr_url_list
from urlshortener import shorten_target_url
from initialize_twitter_api import get_twitter_api
import smallutils as su

if __name__  == "__main__":

    print "start of project"

    ## Defined parameters
    tgt_file_list = [
                        r'directory\tgt_file_contain_feedurl_list.txt'
                        ]

    #get the tgt_file that exists
    tgt_file = [n for n in tgt_file_list if os.path.exists(n)][0] #take the first entry

    feeds_outputfile =  r"c:\data\temp\feed_result.csv"
    hashtags = '#DIY #hacks' #include hash tags
    feeds_sample_size = 8

    ## Get feeds from url list
    print "Get feeds from target url list ... "
    url_list = su.read_data_fr_file(tgt_file)
    get_feed_details_fr_url_list(url_list, feeds_outputfile)

    ## Read the feeds_outputfile and
    print "Handling the feeds data"
    feeds_df = pd.read_csv(feeds_outputfile)
    feeds_df['date'] = pd.to_datetime(feeds_df['date'])

    ## filter the date within one day to today
    feeds_df['date_delta'] = datetime.datetime.now() - feeds_df['date']
    feeds_df['date_delta_days'] = feeds_df['date_delta'].apply(lambda x: float(x.days))

    feeds_df_filtered = feeds_df[feeds_df['date_delta_days'] < 1]

    if len(feeds_df_filtered) > feeds_sample_size: # do a sampling if the input is high
        feeds_df_filtered_sample = feeds_df_filtered.sample(feeds_sample_size)
    else:
        feeds_df_filtered_sample = feeds_df_filtered

    ## set up for twitter api
    print "Initialized the Twitter API"
    api = get_twitter_api()

    ## shorten the feed urls before posting (uses shorten_target_url from section 3.4)
    feeds_df_filtered_sample['feeds_url_shorten'] = feeds_df_filtered_sample['feed_url'].apply(shorten_target_url)

    ## handling message to twitter
    print "Sending all data to twitter"
    for index, row in feeds_df_filtered_sample.iterrows():
        #convert each row to the full text for output
        target_txt = 'Via @' + row['ref'] + ': ' + row['title'] + ' ' + row['feeds_url_shorten'] + ' ' + hashtags
        try:
            api.update_status(target_txt)
        except:
            pass
        time.sleep(60*30)

5. Scheduling tweets

We can use either the Windows Task Scheduler or a cron job to schedule the daily tweet posting.
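
For example, a crontab entry like the one below (assuming the combined script from section 4 is saved as twitter_bot.py) will run the bot every day at 9 am:

0 9 * * * python /path/to/twitter_bot.py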

6. What to do next

The contents above are derived mainly from RSS feeds. We can add more contents by retweeting or embedding youtube videos automatically, as sketched below. A sample twitter bot created using the above methods is included in the link.
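
As an illustration only, a minimal retweet sketch using tweepy's search api could look like the snippet below (the hashtag query and count are placeholders):

import tweepy
from initialize_twitter_api import get_twitter_api

api = get_twitter_api()

# search for a few recent tweets on a topic and retweet them
for status in api.search(q='#DIY', count=3):
    try:
        api.retweet(status.id)
    except tweepy.TweepError: # e.g. tweet already retweeted
        continue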

You can also refer to some of the earlier posts on retrieving data from twitter.

  1. Get Stocks tweets using Twython
  2. Get Stocks tweets using Twython (Updates)