Shantanu's Blog

Database Consultant

January 14, 2025

 

RAG made easy using LlamaIndex

# use uv to create a virtual environment and install packages

uv init ai-app2

cd ai-app2

uv add llama-index


# download sample data

mkdir data

cd data

wget https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt

cd ..


# start python prompt

uv run python

import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()

index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()

response = query_engine.query("What did the author do growing up?")

print(response)




July 21, 2023

 

langchain for pandas

LangChain is a framework that lets you query a pandas DataFrame using natural language. It uses an OpenAI LLM (the same family of models behind ChatGPT) to build pandas commands!

!pip install langchain
import os
os.environ["OPENAI_API_KEY"] = "XXXX"

from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import OpenAI
import pandas as pd
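# df is assumed to be a pandas DataFrame loaded beforehand, e.g.
# df = pd.read_csv("sales_data.csv")  # hypothetical file name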

pd_agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)

pd_agent.run("Find the total sales for each product line in the year 2003")

_____

Something similar is possible with the pandas-ai module...


# https://github.com/gventuri/pandas-ai

!pip install pandasai
from pandasai import SmartDataframe, SmartDatalake
from pandasai.llm import OpenAI
llm = OpenAI(api_token="YOUR TOKEN")

sdf = SmartDataframe(df, config={"llm": llm})
sdf.chat("Return the top 5 countries by GDP")
sdf.chat("Plot a chart of the gdp by country")

print(sdf.last_code_generated)

If you have more than one dataframe, use the SmartDatalake class and supply a list of dataframes. For example:

sdf = SmartDatalake([df, df2, df3], config={"llm": llm})



March 15, 2023

 

Text embedding model - part II

I compared text similarity algorithms on the following strings:

string_a = 'MUKESH VITHAL GURAV'
string_b = 'MUKESH VITTHAL GURAO'

I found that the popular methods like cosine, levenshtein and jaro do not work well here. The best score came from "entropy_ncd", an entropy-based normalized compression distance. The OpenAI embedding model returned a similar score (98.10%).

import textdistance

# myalgs is assumed to hold the names of the algorithms being compared
# (the ones listed in the results below)
myalgs = ['levenshtein', 'ratcliff_obershelp', 'sorensen', 'sorensen_dice',
          'cosine', 'needleman_wunsch', 'gotoh', 'overlap', 'jaro', 'editex',
          'length', 'jaro_winkler', 'strcmp95', 'entropy_ncd']

mydic = dict()
for i in myalgs:
    try:
        alg = getattr(textdistance.algorithms, i)
        mydic[i] = round(alg.normalized_similarity(string_a, string_b), 4)
    except Exception:
        pass

dict(sorted(mydic.items(), key=lambda item: item[1]))

{'levenshtein': 0.9,
 'ratcliff_obershelp': 0.9231,
 'sorensen': 0.9231,
 'sorensen_dice': 0.9231,
 'cosine': 0.9234,
 'needleman_wunsch': 0.925,
 'gotoh': 0.9474,
 'overlap': 0.9474,
 'jaro': 0.9491,
 'editex': 0.95,
 'length': 0.95,
 'jaro_winkler': 0.9695,
 'strcmp95': 0.9695,
 'entropy_ncd': 0.9815}

import openai
from openai.embeddings_utils import get_embedding, cosine_similarity
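# note: openai.embeddings_utils ships with the legacy openai SDK (pre-1.0); it was removed in newer versions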
openai.api_key = "sk-xxx"

string_b_embed = get_embedding(string_b, engine="text-similarity-davinci-001")
string_a_embed = get_embedding(string_a, engine="text-similarity-davinci-001")
cosine_similarity(string_a_embed, string_b_embed)



March 01, 2023

 

Text embedding model by openAI to solve string similarity problem

While everyone is discussing ChatGPT by OpenAI, I decided to try OpenAI's embeddings on an old puzzle that I could not solve for a very long time. The problem is about matching similar names where almost 50% of the words are non-English.

The following strings are semantically similar, but no conventional algorithm captures that reliably. For example, fuzzy matching returns 82 and 64 percent scores, while several unrelated entries fall in the same range.

from thefuzz import fuzz

fuzz.ratio('A.R.B.GARUD ARTS, COMM.& SCIENCE COLLEGE SHENDURNI', 'A.R.B.Garud ARTS, COM, SCI. COLLEGE SHENDURNI') # 82% 

fuzz.ratio('AADARSH HIGH SCHOOL MANDAL',  'AADARSHA VIDYA. MAR.HIGHSCHOOL') # 64%

OpenAI text embeddings (text-embedding-ada-002) returned more than a 95% similarity score for both pairs, which is surprising as well as encouraging. It is very close to human accuracy, if not better.
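For reference, here is a minimal sketch of how such a score can be computed, assuming the legacy openai Python SDK (pre-1.0) that still provides openai.embeddings_utils:

import openai
from openai.embeddings_utils import get_embedding, cosine_similarity

openai.api_key = "sk-xxx"

emb_a = get_embedding('AADARSH HIGH SCHOOL MANDAL', engine="text-embedding-ada-002")
emb_b = get_embedding('AADARSHA VIDYA. MAR.HIGHSCHOOL', engine="text-embedding-ada-002")
print(cosine_similarity(emb_a, emb_b))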



May 03, 2021

 

Interactive Analysis of Sentence Embeddings

We can encode a sentence into 768 dimensions using a pre-trained model.
The data can be visualized using the TensorFlow Projector.

https://projector.tensorflow.org/

Install the required python module:
#!pip install sentence-transformers

We will load a tab-separated file with 2 columns, text and label.
We will then modify the dataframe to inject a few outliers.

import pandas as pd

df = pd.read_csv(
    "http://bit.ly/dataset-sst2", nrows=100, sep="\t", names=["text", "label"]
)

df["label"] = df["label"].replace({0: "negative", 1: "positive"})
df.loc[[10, 27, 54, 72, 91], "text"] = "askgkn askngk kagkasng"
df.to_csv("metadata.tsv", index=False, sep="\t")

Use the pre-trained DistilBERT model to encode the text. Unlike plain word vectors, it considers the entire sentence and returns 768 dimensions irrespective of the number of words per sentence.

from sentence_transformers import SentenceTransformer
sentence_bert_model = SentenceTransformer("distilbert-base-nli-stsb-mean-tokens")
e = sentence_bert_model.encode(df["text"])
embedding_df = pd.DataFrame(e)
embedding_df.to_csv("output.tsv", index=False, sep="\t", header=None)

The numpy array is converted to a pandas dataframe and exported as TSV.
The two files (output.tsv and metadata.tsv) can be uploaded to the TensorFlow Projector, where you can easily spot the outliers, as shown in this blog post...

https://amitness.com/interactive-sentence-embeddings/



April 26, 2021

 

Anomaly detection in time series

adtk is an anomaly detection toolkit that does rule-based, unsupervised anomaly detection in time series.

Here is how my charts look in most cases:

import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline


rand = np.random.RandomState(123)
s = pd.Series(
    np.cumsum(rand.normal(size=300)),
    index=pd.date_range(start="2017-1-1", periods=300, freq="D"),
)

plt.plot(s)

In order to find and plot outliers, I will need only 4 lines of code.

from adtk.visualization import plot
import adtk.detector as detector
anomaly = detector.QuantileAD(low=0.05, high=0.95).fit_detect(s, return_list=False)
plot(s, anomaly)

#pip install adtk



December 27, 2020

 

spell checker using Machine Learning

JamSpell is an ML-based spell checker that uses pre-trained models from:

https://github.com/bakwc/JamSpell-models/

The Python code is clear and concise.
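Before running it, download and unpack an English model (the archive name below comes from the JamSpell README; adjust if it has changed):

wget https://github.com/bakwc/JamSpell-models/raw/master/en.tar.gz
tar xf en.tar.gz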

import jamspell
jsp = jamspell.TSpellCorrector()
assert jsp.LoadLangModel('en.bin')
jsp.FixFragment("I am the begt spell cherken")

The above code will return:

I am the best spell checker

This is really an accurate, fast and multi-language spell checker.

https://github.com/bakwc/JamSpell



September 18, 2020

 

Speech to text using assembly AI

Run this Python code to submit an mp3 file hosted on S3. You will have to register first to get the authorization API key:

https://app.assemblyai.com/login/

import requests
headers = {
    "authorization": "XXX",
    "content-type": "application/json"
}

endpoint = "https://api.assemblyai.com/v2/transcript"
json = {
  "audio_url": "https://s3-us-west-2.amazonaws.com/blog.assemblyai.com/audio/8-7-2018-post/7510.mp3"
}
response = requests.post(endpoint, json=json, headers=headers)
print(response.json())

You will get an id and a status like this...

'id': 'g9j4q46h9-5d04-4f96-8186-b4def1b1b65b', 'status': 'queued',

Use the id to query the results.

endpoint = "https://api.assemblyai.com/v2/transcript/g9j4q46h9-5d04-4f96-8186-b4def1b1b65b"
response = requests.get(endpoint, headers=headers)
print(response.json())
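Rather than re-running the GET request by hand, a small polling loop can wait until the transcript is ready. This is a sketch that reuses the endpoint and headers defined above; AssemblyAI reports 'completed' or 'error' once a job is finished.

import time

while True:
    result = requests.get(endpoint, headers=headers).json()
    if result["status"] in ("completed", "error"):
        break
    time.sleep(5)

print(result.get("text"))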

And you will get the text of the audio file. It will look something like this...

'text': 'You know, Demons on TV like that. And and for people to expose themselves to being rejected on TV or you know, her M humiliated by fear factor or you know.'



July 13, 2020

 

Understanding your data

A pandas dataframe stores all the data in a single table, which makes it difficult to understand the relationships between columns. For example, I would like to know how Area Abbreviation is related to Item Code in the following data.

# Download the csv file from kaggle:
https://www.kaggle.com/dorbicycle/world-foodfeed-production

import pandas as pd
food_df = pd.read_csv('FAO.csv' , encoding='latin1')
food_df = food_df.drop(columns=food_df.columns[10:])

I will now import the autonormalize module from featuretools. It will detect the internal relationships between columns and show us a nice graph.

from featuretools.autonormalize import autonormalize as an
entityset = an.auto_entityset(food_df)
entityset.plot()

  Entities:
    Element Code_Item Code_Area Code [Rows: 21477, Columns: 4]
    Element Code [Rows: 2, Columns: 2]
    Item Code [Rows: 117, Columns: 2]
    Area Code [Rows: 174, Columns: 5]
    Area Abbreviation [Rows: 169, Columns: 2]
  Relationships:
    Element Code_Item Code_Area Code.Area Code -> Area Code.Area Code
    Element Code_Item Code_Area Code.Item Code -> Item Code.Item Code
    Element Code_Item Code_Area Code.Element Code -> Element Code.Element Code
    Area Code.Area Abbreviation -> Area Abbreviation.Area Abbreviation

Do not forget to check the featuretools module itself as well. It can add new columns to your dataframe that may be useful when building a machine learning model.

import featuretools as ft
fm, features = ft.dfs(entityset=entityset, target_entity='Element Code_Item Code_Area Code')
print (fm)

https://innovation.alteryx.com/automatic-dataset-normalization-for-feature-engineering-in-python/



May 01, 2020

 

Pandas case study 32

Handling Outliers

Outliers can be removed or adjusted using statistical methods of IQR, Z-Score and Data Smoothing.

1) To calculate the IQR (Inter-Quartile Range) of a dataset, first calculate its 1st quartile (Q1) and 3rd quartile (Q3), i.e. the 25th and 75th percentiles of the data, and then subtract Q1 from Q3.

import pandas as pd
data = [-2,8,13,19,34,49,50,53,59,64,87,89,1456]
df = pd.DataFrame(data)
df.columns = ['values']
ndf=df.describe().T
ndf['75%'] - ndf['25%']
# returns 45

To flag outliers using the IQR, we define a multiplier (conventionally 1.5) that decides how far below Q1 and above Q3 a value must fall to be considered an outlier.

higher_limit = ndf['75%'] + 1.5 * 45
lower_limit = ndf['25%'] - 1.5 * 45
df[(df['values'] > higher_limit[0]) | (df['values'] < lower_limit[0])]

2) The Z-score tells how far a point is from the mean of the dataset in terms of standard deviations. A point whose absolute z-score is above 3 is considered an outlier.

from scipy import stats
df['z_score'] = stats.zscore(df['values'])
df[df['z_score'].abs() > 3]

# returns:
    values   z_score
12    1456  3.454979

3) Data smoothing is a process of adjusting spikes and peaks. If the current value is 13, the previous value is 8, and the smoothing level is 0.6, then the smoothed value is 11, given by
13*0.6 + (1-0.6)*8 = 11

The pandas smoothing function (ewm) can be used to calculate the exponentially weighted moving average at different alpha levels.

df['ewm_alpha_1']=df['values'].ewm(alpha=0.1).mean()
df['ewm_alpha_3']=df['values'].ewm(alpha=0.3).mean()
df['ewm_alpha_6']=df['values'].ewm(alpha=0.6).mean()
df

https://kanoki.org/2020/04/23/how-to-remove-outliers-in-python/



April 17, 2020

 

High Level module for NLP tasks

GluonNLP provides pre-trained models for common NLP tasks. It has carefully designed APIs that greatly reduce implementation complexity.

import mxnet as mx
import gluonnlp as nlp

glove = nlp.embedding.create('glove', source='glove.6B.50d')

def cos_similarity(embedding, word1, word2):
    vec1, vec2 = embedding[word1], embedding[word2]
    return mx.nd.dot(vec1, vec2) / (vec1.norm() * vec2.norm())

cos_similarity(glove, 'baby', 'infant').asnumpy()[0]
_____

This will load the WikiText-2 corpus of Wikipedia articles; it can be sliced like a Python list.

train = nlp.data.WikiText2(segment='train')
train[10000:10199]



April 15, 2020

 

YOLO what? come again?

YOLO helps detect objects in an image using a pre-trained model.
1) Install darknet

git clone https://github.com/pjreddie/darknet
cd darknet
make

2) Download the pre-trained weights file

wget https://pjreddie.com/media/files/yolov3.weights

3) Run the detector

./darknet detect cfg/yolov3.cfg yolov3.weights data/dog.jpg

You will see output something like this...
dog: 99%
truck: 93%
bicycle: 99%

An image called predictions.jpg is saved in the current directory.



January 23, 2020

 

Ensemble explained in plain words

A bagging classification method like the Random Forest classifier trains each individual tree on a different sample of the dataset. Each tree is also trained on a random selection of features. When the results are averaged together, the overall variance decreases and the model performs better as a result.

Boosting algorithms like AdaBoost or gradient boosting take weak, underperforming models and combine them into a strong model. Many weak learners are fitted in sequence, and the weights of misclassified examples are increased in subsequent rounds of learning. The predictions of the classifiers are aggregated, and the final prediction is made through a weighted sum (in the case of regression) or a weighted majority vote (in the case of classification).
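Here is a minimal scikit-learn sketch of both ideas on a toy dataset (my own illustration, not taken from the linked article):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# toy classification dataset
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# bagging: each tree sees a bootstrap sample and random subsets of features
rf = RandomForestClassifier(n_estimators=100, random_state=0)

# boosting: shallow trees fitted in sequence, re-weighting misclassified rows
ada = AdaBoostClassifier(n_estimators=100, random_state=0)

print("random forest:", cross_val_score(rf, X, y, cv=5).mean())
print("adaboost     :", cross_val_score(ada, X, y, cv=5).mean())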

https://stackabuse.com/ensemble-voting-classification-in-python-with-scikit-learn/



January 22, 2020

 

Understanding Naive Bayes

Naive Bayes implicitly assumes that all the attributes are mutually independent, which is rarely true in practice. If a categorical variable has a category in the test dataset that was not observed in the training dataset, the model will assign it zero probability unless smoothing is applied.

Naive Bayes can handle any type of data (e.g. continuous or discrete) and works well even with small datasets. It can be applied to the Iris dataset as shown below:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
gnb = GaussianNB()
y_pred = gnb.fit(X_train, y_train).predict(X_test)
accuracy_score(y_test, y_pred)



November 28, 2019

 

Transform strings to integers

Here is a dataframe that contains strings that appear in both columns. For example, BE is present in col1 as well as col2.

import pandas as pd
import numpy as np

d = {"col1": ["NL", "BE", "FR", "BE"], "col2": ["BE", "NL", "ES", "ES"]}
df = pd.DataFrame(data=d)

  col1 col2
0   NL   BE
1   BE   NL
2   FR   ES
3   BE   ES

The easiest way to convert the strings to integers is by using stack / unstack and category column type.

df.stack().astype("category").cat.codes.unstack()

Internally this happened:

s = df.stack()
s[:] = s.factorize()[0]
s.unstack()

Since this looks more like a machine learning problem, let's try sklearn:

from sklearn.preprocessing import LabelEncoder
df.apply(LabelEncoder().fit_transform)

Very concise and logical. But this gives wrong results!

   col1  col2
0     2     0
1     0     2
2     1     1
3     0     1

We need to fit the encoder on the unique values across the entire dataframe, not just a single column. This works as expected:

le = LabelEncoder()
le.fit(np.unique(df))
df.apply(le.transform)

The numpy module helps in most cases, so a third option looks like this...

pd.DataFrame(
    np.unique(df, return_inverse=True)[1].reshape(df.shape),
    index=df.index,
    columns=df.columns,
)

And these are the steps to achieve this:

np.unique(df)
np.unique(df, return_inverse=True)
np.unique(df, return_inverse=True)[1]
np.unique(df, return_inverse=True)[1].reshape(df.shape)
pd.DataFrame(np.unique(df, return_inverse=True)[1].reshape(df.shape))

pd.DataFrame(
    np.unique(df, return_inverse=True)[1].reshape(df.shape),
    index=df.index,
    columns=df.columns,
)

# https://stackoverflow.com/questions/58821280/transform-multiple-categorical-columns



October 11, 2019

 

What is a linear equation?

The ultimate goal of solving a system of linear equations is to find the values of the unknown variables. Here is an example of a system of linear equations with two unknown variables, x (children) and y (adults):

Bus: 3 * x  + 3.2 * y = 118.4
Train: 3.5 * x + 3.6 * y = 135.20

A group took a trip on a bus, at $3 per child and $3.20 per adult for a total of $118.40. They took the train back at $3.50 per child and $3.60 per adult for a total of $135.20.

How many children, and how many adults were in the group?

https://www.mathsisfun.com/algebra/matrix-inverse.html

The numpy implementation uses the inv (inverse) and dot methods:

A = np.array([[3, 3.2], [3.5, 3.6]])
B = np.array([118.4, 135.2])

np.linalg.inv(A).dot(B)

# numpy also has solve method that is easier to use:
np.linalg.solve(A,B)



October 08, 2019

 

SVM for mnist digit images

Here is the code that loads the scikit-learn digits dataset (a small MNIST-style set of 8x8 images) and applies a Support Vector Classifier. The overall accuracy is 97% for this multi-class classification problem, which is not bad for 10 lines of code!

from sklearn import datasets, svm, metrics

digits = datasets.load_digits()

n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))

classifier = svm.SVC(gamma=0.001)
classifier.fit(data[: n_samples // 2], digits.target[: n_samples // 2])
predicted = classifier.predict(data[n_samples // 2 :])
expected = digits.target[n_samples // 2 :]

print(metrics.classification_report(expected, predicted))

print (metrics.confusion_matrix(expected, predicted))

# Optionally, we can save the model using pickle.

import pickle

with open('mymodel.pkl', 'wb') as file:
    pickle.dump(classifier, file, protocol=pickle.HIGHEST_PROTOCOL)



October 01, 2019

 

Using embeddings for similarity search

Let’s suppose we have a large collection of questions and answers. A user can ask a question, and we want to retrieve the most similar question in our collection to help them find an answer.

* "zipping up files" should return "Compressing / Decompressing Folders & Files"
* "determine if something is an IP" should return "How do you tell whether a string is an IP or a hostname"
* "translate bytes to doubles" should return "Convert Bytes to Floating Point Numbers in Python"

In each case, that is the closest entry of all.

The indexed document should look like this...

{'user': '5156',
 'tags': ['xcode', 'git', 'osx', 'version-control', 'gitignore'],
 'questionId': '49478',
 'creationDate': '2008-09-08T11:07:49.953',
 'title': 'Git ignore file for Xcode projects',
 'acceptedAnswerId': '12021580',
 'type': 'question',
 'body': 'Which files should I include in .gitignore when using Git in conjunction with Xcode? ',
 'title_vector': [0.031643908470869064,
  -0.04750939458608627,
  -0.04847564920783043,
...

  0.001153663732111454,
  0.04351674020290375]}

The title_vector has exactly 512 elements for every record, irrespective of the number of words in the title. This is because we are using Google's TensorFlow "Universal Sentence Encoder".

https://tfhub.dev/google/universal-sentence-encoder/2
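As a quick illustration (a sketch assuming TensorFlow 2 and the /4 release of the model on TF Hub, rather than the /2 release linked above), the encoder maps any title to a 512-dimensional vector:

import tensorflow_hub as hub

# load the Universal Sentence Encoder from TF Hub
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

vectors = embed([
    "Git ignore file for Xcode projects",
    "Compressing / Decompressing Folders & Files",
])
print(vectors.shape)  # (2, 512) -- one 512-dimensional vector per title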

Here is the article on this topic:
https://www.elastic.co/blog/text-similarity-search-with-vectors-in-elasticsearch

And github repo:
https://github.com/jtibshirani/text-embeddings

If you want to test the application:

docker run --name text_embeddings  -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node"  -d shantanuo/textembeddings
docker exec -it text_embeddings bash
cd text-embeddings/
python3.6 src/main.py



September 20, 2019

 

Using category encoders instead of One Hot

We use "One hot encoding" all the time. Right? It will convert the categorical values into new binary columns populated with sparse data. It means there will be a lot of 0 values and only one column will have "True" value of 1. Let's see an example.

Let us create a sample dataframe with X features and y as a target. load_boston is a function available in the sklearn.datasets module.

import pandas as pd
from sklearn.datasets import load_boston
bunch = load_boston()
y = bunch.target
X = pd.DataFrame(bunch.data, columns=bunch.feature_names)

The pandas function "get_dummies" makes it very easy to transform the data quickly.

ndf=X.join(pd.get_dummies(X['RAD'], prefix='RAD'))

CRIM ZN INDUS CHA ... RAD_1.0 RAD_2.0 RAD_3.0 RAD_4.0 RAD_5.0 RAD_6.0 RAD_7.0 RAD_8.0 RAD_24.0

As you can see, every unique value in the "RAD" column gets its own column. For a value like "24.0", pandas creates a new column with a default value of 0 and sets it to 1 only in the rows where that value occurs. This is really wonderful. The only problem is that if there are a thousand unique values in this column, we will have to deal with a thousand columns!

In order to reduce the number of columns, we will use the "category_encoders" module instead of "get_dummies", like this...

import category_encoders as ce
enc = ce.BinaryEncoder(cols=['RAD']).fit_transform(X)

CRIM ZN INDUS CHAS ... RAD_0 RAD_1 RAD_2 RAD_3 RAD_4

As you can see, there are now only 5 additional columns, and the 9 unique values of the original column are spread across them as binary codes. Unlike one hot encoding, where exactly one column is set to 1, here two or three columns may carry a 1 for the same row. This reduces the number of columns and lets us use this style of encoding even when a column has a very large number of unique values.



September 10, 2019

 

Using pre-trained resnet model after modifying layers

PyTorch is a deep learning framework developed by Facebook for computer vision and natural language processing work. The TorchVision package consists of popular datasets, model architectures, and common image transformations for computer vision. We use the "models" module to import a pre-trained model from the ResNet family.


from torchvision import models
import torch
res_mod = models.resnet34(pretrained=True)

It is possible to print the layers that the pre-trained model is using; this is the sequence of modules the input passes through to produce the final class scores.


for name, child in res_mod.named_children():
    print(name)

In some cases we may want to selectively unfreeze layers and have gradients computed for just a few chosen layers.
For example, here layer3 and layer4 are made available for training while the rest of the blocks are reused as-is.


for name, child in res_mod.named_children():
    if name in ["layer3", "layer4"]:
        print(name + " has been unfrozen.")
        for param in child.parameters():
            param.requires_grad = True
    else:
        for param in child.parameters():
            param.requires_grad = False

The optimizer should then be built over only the parameters that still require gradients, so that training updates just the unfrozen layers.


optimizer_conv = torch.optim.SGD(
    filter(lambda x: x.requires_grad, res_mod.parameters()), lr=0.001, momentum=0.9
)


