Shantanu's Blog

Corporate Consultant

August 21, 2018

Introducing a Python module for stemming Indian names

Here is the basic code behind the new package called easystemmer.

https://github.com/shantanuo/easystemmer

Save the following code as a file called easystemmer.py and then import it like any other module.

import itertools
import re
from nltk.stem import StemmerI

class IndianNameStemmer(StemmerI):
    def stem(self, token):
        """token is an iterable of names; returns a tuple of stems."""
        newtup = []
        for i in token:
            # Strip the honorific suffix 'bai'
            i = i[:-3] if i.endswith('bai') else i
            # Normalize common spelling variants ('tha' -> 'ta', 'i' -> 'e'),
            # collapsing runs of repeated characters after each substitution
            for r in (("tha", "ta"), ("i", "e")):
                i = i.replace(*r)
                i = re.sub(r'(\w)\1+', r'\1', i)
            # Collapse any remaining repeated characters
            newtup.append(''.join(c for c, _ in itertools.groupby(i)))
        return tuple(newtup)


from easystemmer import IndianNameStemmer
s = IndianNameStemmer()
s.stem(['savithabai', 'aaabaa'])

It returns the stemmed versions of the names:
('saveta', 'aba')
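
As a quick sanity check (my own example, not from the package documentation), two variant spellings of the same name collapse to the same stem:

print(s.stem(['savitabai', 'savithaabai']))
# ('saveta', 'saveta')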

Community contributions are welcome.



August 15, 2018


Extract duplicate numbers from a text file

Here are a few lines of code that extract numbers of 10 or more digits from a given text file.
Only the numbers that appear on at least two lines are kept, and the counts are saved as a CSV file.

## Install modules (if needed)
#!conda install --yes -c conda-forge fastparquet
#!pip install scipy sklearn

## Import modules
from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd

## Open lines as list
with open('some_file.csv', 'r') as f:
    X_train = list(f)

## Create sparse matrix: count tokens of 10+ digits that appear
## on at least two lines (min_df=2)
vect = CountVectorizer(min_df=2, token_pattern=r'(?u)\b\d{10,}\b')
vX = vect.fit_transform(X_train)

## Convert to dataframe, query and report
df = pd.DataFrame(vX.toarray(), columns=vect.get_feature_names())
df.sum().sort_values().to_csv('dupes.csv')
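
Here is a quick check on made-up data (the sample lines below are invented for illustration):

X_train = ["call 9876543210 today",
           "or 9876543210 and 1234567890",
           "also 1234567890 here",
           "short 12345 is ignored"]

vect = CountVectorizer(min_df=2, token_pattern=r'(?u)\b\d{10,}\b')
vX = vect.fit_transform(X_train)
print(pd.DataFrame(vX.toarray(), columns=vect.get_feature_names()).sum())
# 1234567890    2
# 9876543210    2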



August 12, 2018


Find duplicates using Natural Language Processing

This script will return all the words in a given file that appear more than once.

#!conda install --yes -c conda-forge fastparquet
#!pip install scipy sklearn

from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd

# Read the file as a list of lines
with open('my_data.csv', 'r') as f:
    X_train = list(f)

# Build a word-count matrix over all lines
vect = CountVectorizer()
vX = vect.fit_transform(X_train)

# Total count of each word across the whole file
x = vX.sum(axis=0).tolist()
s = pd.Series(x[0], vect.get_feature_names())

# Keep only the words that occur more than once and export
s = s[s != 1].sort_values()
s.to_csv('export.csv')

This makes it easy to spot duplicate words in a text file.
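
The same check can be done without sklearn. Here is a sketch using collections.Counter with roughly the same tokenization as CountVectorizer's default (lowercase words of two or more characters):

import re
from collections import Counter

with open('my_data.csv', 'r') as f:
    counts = Counter(re.findall(r'(?u)\b\w\w+\b', f.read().lower()))

dupes = {word: n for word, n in counts.items() if n > 1}
print(sorted(dupes.items(), key=lambda kv: kv[1]))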




Find outliers

Values that lie more than about two standard deviations away from the mean (1.96, the 95% z-score used below) can be considered outliers.

import random

def outliers(tmp):
    """tmp is a list of numbers; returns values beyond 1.96 standard deviations."""
    mean = sum(tmp) / (1.0 * len(tmp))
    # Population variance and standard deviation
    var = sum((x - mean) ** 2 for x in tmp) / (1.0 * len(tmp))
    std = var ** 0.5
    # Keep the values that are more than 1.96 standard deviations from the mean
    return [x for x in tmp if abs(x - mean) > 1.96 * std]


lst = [random.randrange(-10, 55) for _ in range(40)]
print(lst)
print(outliers(lst))
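
The same 1.96-sigma rule can be written with NumPy; a minimal sketch (my addition, not part of the original post):

import numpy as np

def outliers_np(values):
    a = np.asarray(values, dtype=float)
    # np.std defaults to the population standard deviation, matching the loop above
    return a[np.abs(a - a.mean()) > 1.96 * a.std()]

print(outliers_np(lst))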



August 11, 2018


Gini index calculation

Here is a function that calculates the weighted Gini index for a given feature. Strictly speaking, it computes the weighted sum of squared class proportions (1 minus the Gini impurity), so a higher value indicates a purer split.

import pandas as pd

url = "https://raw.githubusercontent.com/bharat-patidar/Decision-trees/master/data/films.csv"
films = pd.read_csv(url)

def gini_calculate(node='gender'):
    # Contingency table: rows are the 'watching' classes, columns the feature values
    my_films = films.groupby(['watching', node])[node].count().unstack()
    # Class proportions within each feature value
    watching_df = my_films.div(my_films.sum(axis=0), axis=1)
    # Purity of each cell: p**2 + (1 - p)**2
    watching_gini = watching_df.apply(lambda x: x**2 + (1 - x)**2)
    watching_gini.loc['total', :] = my_films.sum(axis=0)
    watching_gini.loc['grand_total', :] = my_films.sum(axis=0).sum()
    # Weight each feature value's purity by its share of the rows
    x = 0
    for i in watching_gini.columns:
        x = x + watching_gini.loc['total', i] / watching_gini.loc['grand_total', i] * watching_gini.loc['yes', i]
    return x

print(gini_calculate(node='employment_status'))
print(gini_calculate(node='gender'))

>>> 0.5033062330623306
>>> 0.522077922077922

# Since weighted gini(gender) > weighted gini(employment_status), the tree splits on gender first
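
As a quick cross-check (my own arithmetic, not from the post): for a node with class proportions p and 1 - p, the purity is p**2 + (1 - p)**2 and the usual Gini impurity is 1 minus that, i.e. 2 * p * (1 - p).

p = 0.6
print(p**2 + (1 - p)**2)   # 0.52 (purity)
print(2 * p * (1 - p))     # 0.48 (Gini impurity = 1 - purity)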

_____

Two functions to calculate entropy:

from math import e
import numpy as np
import pandas as pd
import scipy.stats

def entropy3(labels, base=None):
    # Entropy from class proportions; natural log unless a base is given
    vc = pd.Series(labels).value_counts(normalize=True)
    base = e if base is None else base
    return -(vc * np.log(vc) / np.log(base)).sum()

def ent(data):
    # Same idea using scipy: normalize the counts, then scipy.stats.entropy
    p_data = data.value_counts() / len(data)
    print(p_data)
    return scipy.stats.entropy(p_data)
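
For example, applied to the films data loaded above (my own illustration, assuming the 'watching' column):

print(entropy3(films['watching'], base=2))   # entropy in bits
print(ent(films['watching']))                # natural-log entropy via scipy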



