Shantanu's Blog

Database Consultant

November 17, 2024

 

Language prediction

The fastText library from Facebook includes a language identification feature. The pre-trained model (lid.176.ftz, covering 176 languages) can be downloaded from the fastText website.


import fasttext

# lid.176.ftz is the compressed 176-language identification model
model = fasttext.load_model("/tmp/lid.176.ftz")
model.predict(" विकिपीडिया पर", k=2)  # k=2 returns the two most likely labels with probabilities

The above code correctly returns Hindi ("hi"). Another option is langdetect, a Python port of Google's language-detection library. The following code correctly returns Marathi ("mr").

from langdetect import detect
detect("आत्मा आणि")

The polyglot library has offered language identification and other language tools for a long time.

https://github.com/saffsd/polyglot



October 04, 2024

 

awk Case Study - 13

I have two text files: corpus.txt is a collection of words and exclude.txt has all the suffixes. I need to extract the stemmed words after removing all suffixes.

==> exclude.txt <==
works
ed
s
ing
ings

==> corpus.txt <==
worked
working
works
tested
tests
find
found
workings

awk -f tst.awk exclude.txt corpus.txt | sort

matched test/ed,s
matched work/ed,ing,s,ings
matched working/s
unmatched find
unmatched found

And the awk script will look something like this...

$ cat tst.awk
{ lineLgth = length($0) }

# first file (exclude.txt): record each suffix and the suffix lengths seen
NR == FNR {
    suffixes[$0]
    sfxLgths[lineLgth]
    next
}

# second file (corpus.txt): try every suffix length against the word ending
{
    base = ""
    for ( sfxLgth in sfxLgths ) {
        baseLgth = lineLgth - sfxLgth
        if ( baseLgth > 0 ) {
            sfx = substr($0,baseLgth+1)
            if ( sfx in suffixes ) {
                base = substr($0,1,baseLgth)
                bases2sfxs[base] = bases2sfxs[base] "," sfx
            }
        }
    }
    if ( base == "" ) {
        print "unmatched", $0
    }
}

# turn ",sfx1,sfx2" into "/sfx1,sfx2" and report each base
END {
    for ( base in bases2sfxs ) {
        sub(/,/,"/",bases2sfxs[base])
        print "matched", base bases2sfxs[base]
    }
}
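
For readers more comfortable with Python, here is a rough equivalent of the same logic (a sketch, assuming the two file names above):

# collect the suffixes from exclude.txt
suffixes = set(open("exclude.txt").read().split())

bases = {}  # maps each base to the list of suffixes stripped from it
for word in open("corpus.txt").read().split():
    matched = False
    for sfx in suffixes:
        # a suffix only counts if something is left after removing it
        if word.endswith(sfx) and len(word) > len(sfx):
            bases.setdefault(word[: -len(sfx)], []).append(sfx)
            matched = True
    if not matched:
        print("unmatched", word)

for base, sfxs in sorted(bases.items()):
    print("matched", base + "/" + ",".join(sfxs))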



August 27, 2024

 

Sanskrit-English translation corpus

Itihāsa is a Sanskrit-English translation corpus containing 93,000 Sanskrit shlokas and their English translations extracted from M. N. Dutt's seminal works on The Rāmāyana and The Mahābhārata. 

https://github.com/rahular/itihasa

Itihāsa can be used directly from Hugging Face Datasets:

from datasets import load_dataset
dataset = load_dataset("rahular/itihasa")
dataset['train'][0]

{'translation': {'en': 'The ascetic Vālmīki asked Nārada, the best of sages and foremost of those conversant with words, ever engaged in austerities and Vedic studies.',  'sn': 'ॐ तपः स्वाध्यायनिरतं तपस्वी वाग्विदां वरम्। नारदं परिपप्रच्छ वाल्मीकिर्मुनिपुङ्गवम्॥'}}
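
The dataset comes with ready-made splits; a quick sketch (split names assumed from the repository) to check their sizes and read the English side:

from datasets import load_dataset

dataset = load_dataset("rahular/itihasa")

# number of shloka pairs in each split
for split in dataset:
    print(split, dataset[split].num_rows)

# English translations of the first three training examples
[t["en"] for t in dataset["train"][:3]["translation"]]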



March 31, 2023

 

summarize text using chatGPT

This function takes any text and summarizes it.

from marvin import ai_fn

# assumes an OpenAI API key is configured for marvin (e.g. via environment variable)
@ai_fn
def summarize(text: str) -> str:
    """Summarize the provided text"""

import wikipedia

page = wikipedia.page('large language model')
summarize(text=page.content)



 

Using chatGPT to generate fake records for pandas

I can use chatGPT to generate fake data for testing pandas DataFrames in Python!

import pandas as pd
from marvin import ai_fn

@ai_fn
def fake_people(n: int) -> list[dict]:
    """
    Generates n examples of fake data representing people,
    each with a name and an age.
    """

myfake = fake_people(3)
df = pd.DataFrame(myfake)
print(df)

# returns a dataframe something like this:

    name  age
0   John   28
1  Emily   35
2  David   19

# docker run --rm -it --entrypoint /bin/bash python:3.10
# pip install marvin pandas



March 15, 2023

 

Text embedding model - part II

I compared text similarity algorithms for the following strings:

string_a = 'MUKESH VITHAL GURAV'
string_b = 'MUKESH VITTHAL GURAO'

I found that the popular methods like cosine, levenshtein, and jaro do not score this pair as highly as expected. The best performance came from "entropy_ncd", an entropy-based normalized compression distance. The OpenAI embedding model returned a similarly high score (98.10%).

import textdistance

# myalgs was not defined in the original post; here we assume it is the list
# of everything textdistance.algorithms exposes, and skip whatever fails
myalgs = [a for a in dir(textdistance.algorithms) if not a.startswith('_')]

mydic = dict()
for i in myalgs:
    try:
        alg = getattr(textdistance.algorithms, i)
        mydic[i] = round(alg.normalized_similarity(string_a, string_b), 4)
    except Exception:
        pass

dict(sorted(mydic.items(), key=lambda item: item[1]))

{'levenshtein': 0.9,
 'ratcliff_obershelp': 0.9231,
 'sorensen': 0.9231,
 'sorensen_dice': 0.9231,
 'cosine': 0.9234,
 'needleman_wunsch': 0.925,
 'gotoh': 0.9474,
 'overlap': 0.9474,
 'jaro': 0.9491,
 'editex': 0.95,
 'length': 0.95,
 'jaro_winkler': 0.9695,
 'strcmp95': 0.9695,
 'entropy_ncd': 0.9815}

# note: this uses the legacy (pre-1.0) openai SDK;
# openai.embeddings_utils was removed in later versions
import openai
from openai.embeddings_utils import get_embedding, cosine_similarity

openai.api_key = "sk-xxx"

string_a_embed = get_embedding(string_a, engine="text-similarity-davinci-001")
string_b_embed = get_embedding(string_b, engine="text-similarity-davinci-001")
cosine_similarity(string_a_embed, string_b_embed)  # ~0.981, as noted above



March 01, 2023

 

Text embedding model by openAI to solve string similarity problem

While everyone is discussing chatGPT by openAI, I decided to try it on an old puzzle that I could not solve for a very long time. The problem is about finding similar names where almost 50% of the words are non-English.

The following strings are semantically similar, but no conventional algorithm can tell that reliably. For example, the fuzzy similarity match returns 82 and 64 percent scores, while several unrelated entries fall in the same range.

from thefuzz import fuzz

fuzz.ratio('A.R.B.GARUD ARTS, COMM.& SCIENCE COLLEGE SHENDURNI', 'A.R.B.Garud ARTS, COM, SCI. COLLEGE SHENDURNI') # 82% 

fuzz.ratio('AADARSH HIGH SCHOOL MANDAL',  'AADARSHA VIDYA. MAR.HIGHSCHOOL') # 64%

Text embeddings by openai (text-embedding-ada-002) returned a similarity score of more than 95% for both pairs, which is surprising as well as encouraging. It is very close to human accuracy, if not better.
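
A minimal sketch of how such a score can be computed, using the same legacy (pre-1.0) openai SDK calls shown in the part II post above:

import openai
from openai.embeddings_utils import get_embedding, cosine_similarity

openai.api_key = "sk-xxx"

a = get_embedding('A.R.B.GARUD ARTS, COMM.& SCIENCE COLLEGE SHENDURNI', engine='text-embedding-ada-002')
b = get_embedding('A.R.B.Garud ARTS, COM, SCI. COLLEGE SHENDURNI', engine='text-embedding-ada-002')
cosine_similarity(a, b)  # > 0.95, as reported above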



May 03, 2021

 

Interactive Analysis of Sentence Embeddings

We can encode a sentence into 768 dimensions using a pre-built model.
The data can be visualized using the tensorflow projector.

https://projector.tensorflow.org/

Install the required python module:
#!pip install sentence-transformers

We will open a tab-separated file having two columns, text and label.
We will then modify the dataframe to add outliers to the data.

import pandas as pd

df = pd.read_csv(
    "http://bit.ly/dataset-sst2", nrows=100, sep="\t", names=["text", "label"]
)

df["label"] = df["label"].replace({0: "negative", 1: "positive"})

# overwrite five rows with gibberish so they stand out as outliers
df.loc[[10, 27, 54, 72, 91], "text"] = "askgkn askngk kagkasng"
df.to_csv("metadata.tsv", index=False, sep="\t")

Use the distilbert pre-built model to transform the text and return encoded values. Unlike other word vectors, this considers the entire sentence and returns 768 dimensions irrespective of the number of words per sentence.

from sentence_transformers import SentenceTransformer
sentence_bert_model = SentenceTransformer("distilbert-base-nli-stsb-mean-tokens")
e = sentence_bert_model.encode(df["text"])
embedding_df = pd.DataFrame(e)
embedding_df.to_csv("output.tsv", index=False, sep="\t", header=None)

The numpy array is converted to a pandas dataframe and exported as a tab-separated file.
The two files can be uploaded to the tensorflow projector, and you can easily find the outliers as shown in this blog post...

https://amitness.com/interactive-sentence-embeddings/



April 17, 2020

 

High Level module for NLP tasks

GluonNLP provides pre-trained models for common NLP tasks. It has carefully designed APIs that greatly reduce implementation complexity.

import mxnet as mx
import gluonnlp as nlp

# 50-dimensional GloVe vectors trained on a 6-billion-token corpus
glove = nlp.embedding.create('glove', source='glove.6B.50d')

def cos_similarity(embedding, word1, word2):
    vec1, vec2 = embedding[word1], embedding[word2]
    return mx.nd.dot(vec1, vec2) / (vec1.norm() * vec2.norm())

cos_similarity(glove, 'baby', 'infant').asnumpy()[0]
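
To confirm what the lookup returns, a quick check (a sketch; each word maps to an mxnet NDArray):

# embedding lookup for a single word returns a vector of shape (50,)
glove['baby'].shape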
_____

This will load the WikiText-2 corpus (built from Wikipedia articles) as a list-like dataset of tokens.

train = nlp.data.WikiText2(segment='train')
train[10000:10199]



September 30, 2018

 

Natural Language Processing using gensim

Here is a very basic example of how gensim works to compare documents.

import gensim

lee_train_file = '/opt/conda/lib/python3.6/site-packages/gensim/test/test_data/lee_background.cor'

# tag each line of the corpus with its line number
train_corpus = list()
with open(lee_train_file) as f:
    for i, line in enumerate(f):
        train_corpus.append(
            gensim.models.doc2vec.TaggedDocument(gensim.utils.simple_preprocess(line), [i])
        )

model = gensim.models.doc2vec.Doc2Vec(vector_size=48, min_count=2, epochs=40)
model.build_vocab(train_corpus)
model.wv.vocab['penalty'].count  # frequency of one vocabulary word (gensim 3.x API)
model.train(train_corpus, total_examples=model.corpus_count, epochs=model.epochs)

line="""
dummy text
"""

inferred_vector=model.infer_vector(gensim.utils.simple_preprocess(line))
model.docvecs.most_similar([inferred_vector], topn=5)
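
A common sanity check for doc2vec (a sketch using the same gensim 3.x API): infer a vector for a document the model was trained on; ideally that document comes back as its own nearest neighbour.

# document 0 should rank at or near the top of its own similarity list
vec = model.infer_vector(train_corpus[0].words)
model.docvecs.most_similar([vec], topn=3)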



August 21, 2018

 

Introducing python module for Indian Names stemming

Here is the basic code behind the new package called easystemmer.

https://github.com/shantanuo/easystemmer

You need to save the following code as a file called easystemmer.py and then import it like any other module.

import itertools, re
from nltk.stem import StemmerI

class IndianNameStemmer(StemmerI):
    def stem(self, token):
        newtup = list()
        for i in token:
            # drop the honorific suffix 'bai'
            i = i[:-3] if i.endswith('bai') else i
            # normalize common spelling variations: 'tha' -> 'ta', 'i' -> 'e'
            for r in (("tha", "ta"), ("i", "e")):
                i = i.replace(*r)
                i = re.sub(r'(\w)\1+', r'\1', i)
            # collapse any remaining runs of repeated characters
            newtup.append(''.join(i for i, _ in itertools.groupby(i)))
        return tuple(newtup)


from easystemmer import IndianNameStemmer
s = IndianNameStemmer()
s.stem(['savithabai', 'aaabaa'])

It will return the stemmed version of the names like...
('saveta', 'aba')

Community contributions are welcome.


