Predict Product Attributes from Product Listings Part 2 – Pipelines & GridSearch

Further improvement on the Product Attributes Text Classifier

This is part 2 of extracting attributes from product titles, with the following improvements or additions:

  1. Creating a more generic text cleaning function.
  2. Adding GridSearch for hyperparameter tuning.

Text Cleaning Function

I created a more generic text cleaning function that can accommodate various text data sets and can serve as a base function for text-related problems. With all options enabled, the function can perform the following:

  1. Converting all text to lowercase.
  2. Stripping HTML tags, especially if the data is scraped from the web.
  3. Replacing accented characters with the closest English characters.
  4. Removing special characters, including punctuation. Digits may or may not be removed depending on context. (Digits are not removed for this data set.)
  5. Removing stop words (simple vs detailed mode; the detailed mode tokenizes the words before removal, while the simple mode uses plain word replacement).
  6. Removing extra whitespace and newlines.
  7. Normalizing text, i.e. stemming or lemmatizing.

In this example, we only turn on:

  1. converting text to lowercase,
  2. removing special characters (keeping digits) and extra whitespace,
  3. simple stop word removal.

As mentioned in the previous post, a seller is unlikely to include many stop words and will try to keep the title as concise as possible, given the limited characters and the need to keep the title relevant to search engines. As the text is not too long, we will skip normalizing it to save time.

# Text pre-processing modules
import re
from bs4 import BeautifulSoup
import unidecode
import spacy
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer
STOPWORDS = set(stopwords.words('english'))

# Compile regular expressions
SPEC_CHARS_REPLACE_BY_SPACE = re.compile(r'[/(){}\[\]\|@,;]')
SPEC_CHARS = re.compile(r'[^a-zA-Z0-9\s]')
SPEC_CHARS_INCLUDE_DIGITS = re.compile(r'[^a-zA-Z\s]')
EXTRA_NEWLINES = re.compile(r'[\r\n]+')

## Functions for text preprocessing, cleaning

def strip_htmltags(text):
    soup = BeautifulSoup(text,"lxml")
    return soup.get_text()

def replace_accented_chars(text):
    return unidecode.unidecode(text)

def stem_text(text):
    ps = PorterStemmer()
    modified_txt = ' '.join([ps.stem(word) for word in text.split()])
    return modified_txt    

def lemmatize(text):
    modified_text = nlp(text)
    return ' '.join([word.lemma_ if word.lemma_ != '-PRON-' else word.text for word in modified_text])

def normalize(text, method='stem'):
    """ Text normalization to generate the root form of the inflected words.
        This is done by either "stem" or "lemmatize" the text as defined by the 'method' arguments.
        Note that using "lemmatize" will take much longer to run compared to "stem".
    """
    if method == 'stem':
        return stem_text(text)
    if method == 'lemmatize':
        return lemmatize(text)
    print('Please choose either "stem" or "lemmatize" method to normalize.')
    return text

def rm_special_chars(text, rm_digits=False):
    # remove & replace below special chars with space
    modified_txt = SPEC_CHARS_REPLACE_BY_SPACE.sub(' ', text)

    # remove rest of special chars, no replacing with space
    if rm_digits:
        return SPEC_CHARS_INCLUDE_DIGITS.sub('', modified_txt)
    else:
        return SPEC_CHARS.sub('', modified_txt)

def rm_extra_newlines_and_whitespace(text):
    # rm extra newlines
    modified_txt =  EXTRA_NEWLINES.sub(' ', text)

    # rm extra whitespaces
    return re.sub(r'\s+', ' ', modified_txt)

def rm_stopwords(text, simple=True):
    """ Remove stopwords using either the simple model with replacement.
        or using nltk.tokenize to split the words and replace each words. This will incur speed penalty.
    """
    if simple:
        return ' '.join(word for word in text.split() if word not in STOPWORDS)
    else:
        tokens = word_tokenize(text)
        tokens = [token.strip() for token in tokens]
        return ' '.join(word for word in tokens if word not in STOPWORDS)

def clean_text(raw_text, strip_html = True, replace_accented = True,
                normalize_text = True, normalize_method = 'stem',
                remove_special_chars = True, remove_digits = True,
                remove_stopwords = True, rm_stopwords_simple_mode = True):

    """ The combined function for all the various preprocessing method.
        Keyword args:
            strip_html               : Remove html tags.
            replace_accented         : Convert accented characters to closest English characters.
            normalize_text           : Normalize text based on normalize_methd.
            normalize_methd          : "stem" or "lemmatize". Default "stem".
            remove_special_chars     : Remove special chars.
            remove_digits            : Remove digits/numeric as special characters.
            remove_stopwords         : Stopwords removal basedon NLTK corpus.
            rm_stopwords_simple_mode : skip tokenize before stopword removal. Speed up time.
    """

    text = raw_text.lower()

    if strip_html:
        text = strip_htmltags(text)
    if replace_accented:
        text = replace_accented_chars(text)
    if remove_special_chars:
        text = rm_special_chars(text, remove_digits)
    if normalize_text:
        text = normalize(text, normalize_method)
    if remove_stopwords:
        text = rm_stopwords(text, rm_stopwords_simple_mode)

    text = rm_extra_newlines_and_whitespace(text)  

    return text
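
For reference, calling clean_text with only the options used in this example enabled would look like the snippet below. The sample title is made up for illustration.

# Example call with only the options used for this data set
# (the sample title is made up for illustration)
sample_title = "Apple iPhone 7 Plus 32GB - Brand New & Sealed!!"
print(clean_text(sample_title,
                 strip_html=False, replace_accented=False,
                 normalize_text=False,
                 remove_special_chars=True, remove_digits=False,
                 remove_stopwords=True, rm_stopwords_simple_mode=True))
# expected output: apple iphone 7 plus 32gb brand new sealed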

Grid Search for Hyperparameter Tuning

Using pipelines, it is easy to incorporate sklearn's grid search to sweep through the various hyperparameters and select the best values. The two main parameters tuned are:

  1. ngram_range in CountVectorizer:
    • In the first part, we only looked at unigrams (single words), but some attributes are identified by more than one word (e.g. 4G network, 32GB Memory), so we will sweep the ngram range to find the optimal setting.
    • The larger the ngram range, the more feature columns are generated, so more memory is consumed.
  2. alpha in SGDClassifier:
    • This affects the regularization term and the learning rate of the training model.

With the ngram range and alpha parameters swept and the best values selected, we see quite a significant improvement in accuracy for all the attribute predictions compared to the first version. Most of the improvement comes from adjusting the ngram range to (1,3), i.e. also accounting for bigrams and trigrams. This is within expectation, as more attributes are described by more than one word.
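
As a quick illustration of the first point, the sketch below (using a made-up title) shows how widening the ngram range grows the feature space generated by CountVectorizer.

# Quick illustration (made-up title): effect of ngram_range on the feature count
from sklearn.feature_extraction.text import CountVectorizer

sample = ["samsung galaxy s7 32gb 4g lte smartphone black"]
for ngram_range in [(1,1), (1,2), (1,3)]:
    vect = CountVectorizer(ngram_range=ngram_range)
    vect.fit(sample)
    print(ngram_range, '->', len(vect.vocabulary_), 'features')
# (1,1) keeps single words only; (1,3) also adds word pairs such as "4g lte"
# and triples such as "samsung galaxy s7" as separate feature columns.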

# Modelling modules
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import accuracy_score

# Prepare model -- drop NA and keep rows with values
# (df is the product listings DataFrame loaded in the earlier part)
def get_X_Y_data(x_col, y_col):
    sub_df = df[[x_col, y_col]]
    sub_df = sub_df.dropna()
    return sub_df[x_col], sub_df[y_col]

# Model training & GridSearch
def generate_model(X, y, verbose = 1):

    text_vect_pipe = Pipeline([
                            ('vect', CountVectorizer()),
                            ('tfidf', TfidfTransformer())
                            ])

    pred_model = Pipeline([
                ('process', text_vect_pipe),
                ('clf', SGDClassifier(loss='hinge', penalty='l2',alpha=1e-3, random_state=42, max_iter=5, tol=None))
               ])

    parameters = {}
    parameters['process__vect__ngram_range'] = [(1,1),(1,2),(1,3)]
    parameters['clf__loss'] = ["hinge"]
    parameters['clf__alpha'] = [5e-6,1e-5]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state = 42)

    CV = GridSearchCV(pred_model, parameters)
    CV.fit(X_train, y_train)
    y_pred = CV.predict(X_test)

    print('accuracy %s' % accuracy_score(y_test, y_pred))
    print("=="*18)
    print()
    print("Details of GridSearch")

    if verbose:
        print('Best score and parameter combination = ')
        print(CV.best_score_)
        print(CV.best_params_)
        print()
        print("Grid scores on development set:")
        means = CV.cv_results_['mean_test_score']
        stds = CV.cv_results_['std_test_score']
        for mean, std, params in zip(means, stds, CV.cv_results_['params']):
            print("%0.3f (+/-%0.03f) for %r"
                  % (mean, std * 2, params))
        print("=="*18)
        print()

    return CV

X, y = get_X_Y_data('title1', 'Brand')
brand_model = generate_model(X, y)
print('='*29)
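
Because GridSearchCV refits the best estimator on the training data by default, the returned object can be used directly to predict new titles. The titles below are made up for illustration.

# Predict the brand of new, unseen titles (made-up examples)
new_titles = ["samsung galaxy s7 edge 32gb gold",
              "apple iphone 7 plus 128gb jet black"]
print(brand_model.predict(new_titles))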

The full script combines the code snippets above. The text cleaning function takes up a large part of the code; excluding it, the few additional lines for the grid search and pipeline bring a relatively significant accuracy improvement.

Next Actions

So far only text features have been considered; in the next part we will try adding numeric features to see if further improvement can be made.

See Also

  1. Predict Product Attributes from Product Listing Title — Text Feature Extraction and Classification