Month: January 2019

Retrieving stock statistics from Yahoo Finance using Python

For this post, we are only going to scrape the “Key Statistics” page of a particular stock on Yahoo Finance. The usual way might be to use Requests and BeautifulSoup to parse the web page. However, since the targeted page lays its data out as HTML tables, it is easier to use the Pandas read_html function and work with the resulting DataFrames.

  1. Objectives:
      1. Retrieving stocks information (Key statistics) from Yahoo Finance.
  2. Required Tools:
      1. Python Pandas — using the Pandas read_html function to read tables from a web page.

Usage — Pulling data for a particular stock

import pandas as pd

tgt_website = r'https://sg.finance.yahoo.com/quote/WDC/key-statistics?p=WDC'

def get_key_stats(tgt_website):

    # The web page is made up of several html tables.
    # Calling the read_html function retrieves all of them as a list of dataframes.
    # Next, append all the tables and transpose the result to get one nice row of data.
    df_list = pd.read_html(tgt_website)
    result_df = df_list[0]

    for df in df_list[1:]:
        result_df = result_df.append(df)

    # The data is in column format.
    # Transpose the result so that all the data sits in a single row.
    return result_df.set_index(0).T

# Retrieve the key statistics for the target stock
result_df = get_key_stats(tgt_website)
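
To keep a copy of the single-stock pull, the one-row DataFrame can be written straight to CSV (the filename below is only an illustration):

result_df.to_csv('WDC_key_stats.csv', index=False)

# the column names are the row labels taken from the Yahoo statistics tables
print(result_df.columns.tolist())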

Pulling all the stock symbols

So far, we have pulled data for one known stock symbol. To get all the stocks in a particular index, the stock symbols need to be known first. The code below extracts all the stock symbols, along with other data, from the NASDAQ website. [Note: the NASDAQ website has changed its format and the original method of getting the stock symbols is no longer valid. Please see the second method, which pulls from the eoddata website.]

import pandas as pd

weblink = 'https://www.nasdaq.com/screening/companies-by-name.aspx?letter=A&render=download'
sym_df = pd.read_csv(weblink)
stock_symbol_list = sym_df.Symbol.tolist()

import string
import time
import pandas as pd

url_template = 'http://eoddata.com/stocklist/NASDAQ/{}.htm'

sym_df = pd.DataFrame()
for letter in list(string.ascii_uppercase):
    tempurl = url_template.format(letter)
    temp_data = pd.read_html(tempurl)
    temp_df = temp_data[4]   # the stock code listing sits in the 5th table on the page
    if len(sym_df)==0:
        sym_df = temp_df
    else:
        sym_df = sym_df.append(temp_df)
    time.sleep(1)
stock_symbol_list = sym_df.Code.tolist()
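
Before feeding the codes into the Yahoo URL template, it may help to de-duplicate the list and drop codes containing punctuation, since those often map to different tickers on Yahoo; a minimal clean-up sketch (the filtering rule is only an assumption):

# keep unique, purely alphabetic codes; symbols containing '.' or '-' are skipped here
cleaned = {str(s).strip() for s in stock_symbol_list}
stock_symbol_list = sorted(c for c in cleaned if c.isalpha())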

Pulling key statistics for all stock symbols (for a given index)

The last step is to iterate over all the symbols and get the corresponding key statistics.

all_result_df = pd.DataFrame()
url_prefix = 'https://sg.finance.yahoo.com/quote/{0}/key-statistics?p={0}'
for sym in stock_symbol_list:
    stock_url = url_prefix.format(sym)
    result_df = get_key_stats(stock_url)
    if len(all_result_df) ==0:
        all_result_df = result_df
    else:
        all_result_df = all_result_df.append(result_df)

# Save all results
all_result_df.to_csv('results.csv', index =False)
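
Not every symbol resolves to a valid key-statistics page, so when running over a long symbol list it is worth wrapping the call in a try/except and pausing between requests. A rough sketch along the same lines as the loop above:

import time
import pandas as pd

all_result_df = pd.DataFrame()
url_prefix = 'https://sg.finance.yahoo.com/quote/{0}/key-statistics?p={0}'

for sym in stock_symbol_list:
    try:
        result_df = get_key_stats(url_prefix.format(sym))
        result_df.insert(0, 'Symbol', sym)  # record which stock the row belongs to
        all_result_df = all_result_df.append(result_df)
    except Exception as exc:
        # missing pages, parse failures etc. are simply skipped
        print('Skipping {}: {}'.format(sym, exc))
    time.sleep(1)  # be gentle on the server

all_result_df.to_csv('results.csv', index=False)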

Monitoring quality over time with a heat map

A particular concern with testing hard disk drives multiple times is that the quality of certain drives may degrade (wear and tear) over time and we may fail to detect this degradation.

We have certain metrics to gauge any degradation symptoms observed for a particular head in a particular drive. For example, with metric A, we look at the % change over time, referenced to the date of the first test, to determine whether a head has degraded.

The Python code below works from the following table to generate the required heat map for easy visualization.

[Image: sample data table with columns SERIAL, HEAD, DATE and METRIC_A]
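
Since the table only appears as an image in the original post, here is a made-up df1 with the same column layout for anyone who wants to run the snippets below; the values are purely illustrative:

import pandas as pd

# purely illustrative data; the real table comes from the drive test logs
df1 = pd.DataFrame({
    'SERIAL':   ['SER_3BZ', 'SER_3BZ', 'SER_3BZ', 'SER_3BZ',
                 'SER_4AX', 'SER_4AX', 'SER_4AX', 'SER_4AX'],
    'HEAD':     [0, 0, 1, 1, 0, 0, 1, 1],
    'DATE':     pd.to_datetime(['2018-12-01', '2018-12-15'] * 4),
    'METRIC_A': [10.0, 9.5, 8.0, 8.1, 12.0, 12.1, 7.5, 7.4],
})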

Calculating %Change

import seaborn as sns
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Keep a string version of the date for labelling, but sort on the actual
# datetime so that iloc[0] below really is the first test of each head
df1['DATE1'] = df1.DATE.dt.strftime('%m/%d/%Y')
df1 = df1.sort_values(by='DATE')

# Calculate the metric % change and the actual change,
# referenced to each individual head's first data point

df1['METRIC_A_PCT_CHANGE'] = df1.groupby(['SERIAL','HEAD'])['METRIC_A']\
                            .apply(lambda x: x.div(x.iloc[0]).subtract(1).mul(100))
df1['METRIC_A_CHANGE'] = df1.groupby(['SERIAL','HEAD'])['METRIC_A']\
                         .apply(lambda x: x - x.iloc[0])
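
With the illustrative df1 sketched above, a quick way to sanity-check the calculation is to print one drive's rows:

cols = ['SERIAL', 'HEAD', 'DATE1', 'METRIC_A', 'METRIC_A_PCT_CHANGE', 'METRIC_A_CHANGE']
print(df1[df1.SERIAL == 'SER_3BZ'].sort_values(['HEAD', 'DATE'])[cols])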

Plotting the Heat Map

fig, ax = plt.subplots(figsize=(10,10))

# Pivot the table for plotting in the heat map
ww = df1.pivot_table(index = ['SERIAL','HEAD'], \
                     columns = 'DATE1', values = "METRIC_A_PCT_CHANGE")

g = sns.heatmap(ww, vmin= -5, vmax = 5, center = 0, \
                cmap= sns.diverging_palette(220, 20, sep=20, as_cmap=True),\
                xticklabels=True, yticklabels=True, \
                ax = ax, linecolor = 'white', linewidths = 0.1, annot = True)

g.set_title("% METRIC_A changes over multiple Dates", \
            fontsize = 16, color = 'blue')


Generated Plots

From the heat map, SER_3BZ-0 shows some indication of degradation, with an increasing % Metric A loss over the different test dates.

[Image: generated heat map of % METRIC_A change per SERIAL/HEAD across the test dates]

Notes

  • Getting the % change relative to the first value of each group (see the toy run below).
    • df.groupby('security')['price'].apply(lambda x: x.div(x.iloc[0]).subtract(1).mul(100))
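
A self-contained toy run of this pattern (made-up numbers; transform is used here instead of apply because it keeps the original row index, which makes the result easy to assign back as a column):

import pandas as pd

df = pd.DataFrame({
    'security': ['AAA', 'AAA', 'AAA', 'BBB', 'BBB', 'BBB'],
    'price':    [100.0, 105.0, 110.0, 50.0, 45.0, 55.0],
})

# % change of each price relative to the first price of its own group
df['pct_from_first'] = df.groupby('security')['price'] \
                         .transform(lambda x: x.div(x.iloc[0]).subtract(1).mul(100))
print(df)
# AAA rows: 0.0, 5.0, 10.0   BBB rows: 0.0, -10.0, 10.0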


Downloading YouTube Videos and converting to MP3

A simple guide to downloading videos from YouTube using Python

  1. Objectives:
      1. Download YouTube Videos
      2. Saving as subclip (saving a portion of the video)
      3. Converting to MP3
  2. Required Tools:
      1. PyTube — primarily for downloading YouTube videos.
      2. MoviePy — for video editing and also converting to MP3.
  3. Steps:
    1. Install both packages: pip install pytube moviepy

Basic Usage

from pytube import YouTube
from moviepy.editor import *

# download a file from youtube
youtube_link = 'https://www.youtube.com/watch?v=yourtubevideos'
w = YouTube(youtube_link).streams.first()
w.download(output_path="/your/target/directory")

# download a file with only audio, to save space
# if the final goal is to convert to mp3
youtube_link = 'https://www.youtube.com/watch?v=targetyoutubevideos'
y = YouTube(youtube_link)
t = y.streams.filter(only_audio=True).all()
t[0].download(output_path="/your/target/directory")
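
The second objective, saving a portion of a video, can be handled with MoviePy's subclip; a minimal sketch, assuming the downloaded file is named video.mp4 and that seconds 30 to 90 are wanted (both are illustrative assumptions):

from moviepy.editor import VideoFileClip

# cut out seconds 30 to 90 of a downloaded video;
# the filename and time range are illustrative assumptions
clip = VideoFileClip("/your/target/directory/video.mp4").subclip(30, 90)
clip.write_videofile("/your/target/directory/video_subclip.mp4")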

Downloading videos from a YouTube playlist

import requests
import re
from bs4 import BeautifulSoup
from pytube import YouTube

website = 'https://www.youtube.com/playlist?list=yourfavouriteplaylist'
r= requests.get(website)
soup = BeautifulSoup(r.text, 'html.parser')

tgt_list = [a['href'] for a in soup.find_all('a', href=True)]
tgt_list = [n for n in tgt_list if re.search('watch',n)]

unique_list= []
for n in tgt_list:
    if n not in unique_list:
        unique_list.append(n)

# all the videos link in a playlist
unique_list = ['https://www.youtube.com' + n for n in unique_list]

for link in unique_list:
    print(link)
    y = YouTube(link)
    t = y.streams.all()
    t[0].download(output_path="/your/target/directory")

Converting from MP4 to MP3 (from a folder with mp4 files)

import os
import re
import moviepy.editor as mp

tgt_folder = "/folder/contains/your/mp4"

for file in [n for n in os.listdir(tgt_folder) if re.search('mp4', n)]:
    full_path = os.path.join(tgt_folder, file)
    output_path = os.path.join(tgt_folder, os.path.splitext(file)[0] + '.mp3')
    # keep the audio from the 10-second mark onward; drop .subclip(10,) if no clipping is wanted
    clip = mp.AudioFileClip(full_path).subclip(10,)
    clip.write_audiofile(output_path)