Shantanu's Blog

Database Consultant

February 29, 2020

 

Pandas case study 29

Is there a way to identify leading and trailing NAs in a pandas.DataFrame?  Currently I do the following but it seems not straightforward:

df = pd.DataFrame(dict(a=[0.1, 0.2, 0.2],
                       b=[None, 0.1, None],
                       c=[0.1, None, 0.1]))
lead_na = (df.isnull() == False).cumsum() == 0
trail_na = (df.iloc[::-1].isnull() == False).cumsum().iloc[::-1] == 0
trail_lead_nas = lead_na | trail_na

Any ideas how this could be expressed more efficiently?

Answer:

df.ffill().isna() | df.bfill().isna()
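
Why this works: ffill() leaves only the leading NaNs unfilled and bfill() leaves only the trailing NaNs, so the union of the two masks flags exactly the leading and trailing gaps. A quick check on the sample frame above:

import pandas as pd

df = pd.DataFrame(dict(a=[0.1, 0.2, 0.2],
                       b=[None, 0.1, None],
                       c=[0.1, None, 0.1]))

# column b is flagged in its first and last rows; the interior NaN in column c is not flagged
print(df.ffill().isna() | df.bfill().isna())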

https://stackoverflow.com/questions/59820159/identify-leading-and-trailing-nas-in-pandas-dataframe



 

Pandas case study 28

I am attempting to generate a dataframe (or series) based on another dataframe, selecting a different column from the first frame for each row, driven by another series. In the simplified example below, I want the frame1 values from 'a' for the first three rows, and from 'b' for the final two (the picked_values series).

frame1=pd.DataFrame(np.random.randn(10).reshape(5,2),index=range(5),columns=['a','b'])
picked_values=pd.Series(['a','a','a','b','b'])
frame1

    a           b
0   0.283519    1.462209
1   -0.352342   1.254098
2   0.731701    0.236017
3   0.022217    -1.469342
4   0.386000    -0.706614
Trying to get to the series:

0   0.283519
1   -0.352342
2   0.731701
3   -1.469342
4   -0.706614

I was hoping frame1[picked_values] would work, but this ends up with five columns.

Answer:

pd.Series(frame1.lookup(picked_values.index,picked_values))
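
Note that DataFrame.lookup has been deprecated in recent pandas releases (and removed in pandas 2.0). Where it is not available, a minimal sketch of the same idea using numpy indexing:

import numpy as np

# pick, for each row, the column named in picked_values
rows = np.arange(len(frame1))
cols = frame1.columns.get_indexer(picked_values)
result = pd.Series(frame1.to_numpy()[rows, cols], index=frame1.index)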

https://stackoverflow.com/questions/59898266/select-columns-in-a-dataframe-conditional-on-row



 

Pandas case study 27

I have a dataframe that looks like below.

dataframe1 =
In  AA   BB  CC
0   10   1   0
1   11   2   3
2   10   6   0
3   9    1   0
4   10   3   1
5   1    2   0

Now I want to create a dataframe that gives the count of the mode for each column. For column AA the count is 3 (mode 10) and for column CC the count is 4 (mode 0), but column BB has two modes, 1 and 2, so for BB I want the sum of the counts for both modes: 2 + 2 = 4.

Therefore the final dataframe that I want looks like below.

Columns  Counts
AA        3
BB        4
CC        4

How to do it?


Answer:

You can compare each column with its mode(s) using isin and count the matches with sum. Since mode() returns all modes, BB ends up with 2 + 2 = 4:

df = pd.DataFrame({'Columns': df.columns,
                   'Val':[df[x].isin(df[x].mode()).sum() for x in df]})
print (df)
  Columns  Val
0      AA    3
1      BB    4
2      CC    4
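
An equivalent one-liner (just a sketch, starting from the original dataframe1 rather than the reassigned df) builds the same counts from value_counts:

counts = dataframe1.apply(lambda s: s.value_counts().loc[s.mode()].sum())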

https://stackoverflow.com/questions/59874756/counting-mode-occurrences-for-all-columns-in-a-dataframe



 

Pandas case study 26

I have a list of dictionaries, and I would like to obtain the dictionaries that share the same value for a given key (here, the name key):

my_list_of_dicts = [{
    'id': 3,
    'name': 'John'
  },{
    'id': 5,
    'name': 'Peter'
  },{
    'id': 2,
    'name': 'Peter'
  },{
    'id': 6,
    'name': 'Mariah'
  },{
    'id': 7,
    'name': 'John'
  },{
    'id': 1,
    'name': 'Louis'
  }
]

Answer:

df = pd.DataFrame(my_list_of_dicts)
df[df.name.isin(df[df.name.duplicated()]['name'])].to_json(orient='records')
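
A slightly shorter variant of the same idea (a sketch) keeps every row whose name occurs more than once by using duplicated(keep=False):

df = pd.DataFrame(my_list_of_dicts)
df[df['name'].duplicated(keep=False)].to_json(orient='records')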

https://stackoverflow.com/questions/59822973/keep-duplicates-by-key-in-a-list-of-dictionaries/60465827#60465827



 

Pandas case study 25

I have this dataframe that I need to re-format for reporting purposes.

df = pd.DataFrame(data = {'RecordID' : [1,1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5],
'DisplayLabel' : ['Source','Test','Value 1','Value 2','Value3','Source','Test','Value 1','Value 2','Source','Test','Value 1','Value 2','Source','Test','Value 1','Value 2','Source','Test','Value 1','Value 2'],
'Value' : ['Web','Logic','S','I','Complete','Person','Voice','>20','P','Mail','OCR','A','I','Dictation','Understandable','S','I','Web','Logic','R','S']})

I am trying to "unmelt" (though not exactly) the Source and Test rows into columns of a new dataframe.

Answer 1: mask, pivot and join

mask = df['DisplayLabel'].str.contains('Value')
df2 = df[~mask].pivot(index='RecordID', columns='DisplayLabel', values='Value')

dfpiv = (
    df[mask].rename(columns={'DisplayLabel':'Result'})
            .set_index('RecordID')
            .join(df2)
            .reset_index()
)

Answer 2: set_index, unstack, then melt

df.set_index(['RecordID', 'DisplayLabel']).Value.unstack().reset_index() \
  .melt(['RecordID', 'Source', 'Test'], var_name='Result', value_name='Value') \
  .sort_values('RecordID').dropna(subset=['Value'])

https://stackoverflow.com/questions/59847074/unmelt-only-part-of-a-column-from-pandas-dataframe



 

Pandas case study 24

I have such DataFrame:

df = pd.DataFrame(data={
    'col0': [11, 22, 1, 5],
    'col1': ['aa:a:aaa', 'a:a', 'a', 'a:aa:a:aaa'],
    'col2': ["foo", "foo", "foobar", "bar"],
    'col3': [True, False, True, False],
    'col4': ['elo', 'foo', 'bar', 'dupa']})

I want to get the length of the list after splitting col1 on ":", and then overwrite the values where the length is greater than 2.

Answer:

First, we need to know the length...

df['col1'].str.split(":").apply(len)

If the length is greater than 2 then such rows should be replaced with blank values.

df.loc[df['col1'].str.split(":").apply(len).gt(2), ['col1','col2','col3']] = ["", "", False]
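
The same condition can also be written with the .str accessor end to end, which avoids the apply call (equivalent, a matter of taste):

mask = df['col1'].str.split(":").str.len().gt(2)
df.loc[mask, ['col1', 'col2', 'col3']] = ["", "", False]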

https://stackoverflow.com/questions/59825672/pandas-overwrite-values-in-multiple-columns-at-once-based-on-condition-of-values



February 19, 2020

 

MySQL case study 184

How do I enable the MySQL general log and then query the log using commands like tail and grep?

mysql> set global general_log_file="general.log";
mysql> set global general_log=1;

tail -f general.log | tee -a from_general.txt

# make sure to use "Select" (note the capital S) in your application query and then search for it in the general log

tail -f general.log | grep Select

grep -i "SELECT " /var/log/mysql/general.log | grep -io "SELECT .*" | sed 's|\(FROM [^ ]*\) .*|\1|' | sort | uniq -c | sort -nr | head -100

grep "from " general.log | awk -Ffrom '{print $2}' | awk '{print $1}' | cat

# Or use packetbeat to push the queries to elastic - for better search experience!



 

Manage athena tables using python


Here are a few lines of code to read data from an Athena table into a pandas dataframe.

import pandas as pd
from pyathena import connect
from pyathena.pandas.util import to_sql

bucket = "my_bucket_name"

conn = connect(
    aws_access_key_id=access,
    aws_secret_access_key=secret,
    s3_staging_dir="s3://" + bucket + "/tutorial/staging/",
    region_name="us-east-1",
)

ndf = pd.read_sql("SELECT * FROM sampledb.todel limit 100", conn)

# pandas dataframe to Athena table

to_sql(ndf, "sample_table", conn, "s3://" + bucket + "/tutorial/s3dir/",
    schema="sampledb", index=False, if_exists="replace")

_____

The following script will display the output of the "show create table" command for all tables. It will also create a new Excel file called "output.xlsx", with 10 records from each table saved on separate sheets. Running this script is a good way to learn more about the tables you have saved in Athena.

import pandas as pd
from pyathena import connect
from pyathena.pandas.util import to_sql

conn = connect(
    aws_access_key_id="XXX", aws_secret_access_key="XXX",
    s3_staging_dir="s3://as-athena-qquery-results-5134690XXXX-us-east-1/todel/",
    region_name="us-east-1",
)

dbname = pd.read_sql("show databases", conn)

mydict = dict()
for db in dbname["database_name"]:
    mydict[db] = pd.read_sql("show tables in {0}".format(db), conn)

newdict = dict()
for k in mydict.keys():
    for i in mydict[k].values:
        newdict["{0}.{1}".format(k, i[0])] = pd.read_sql("show create table {0}.{1}".format(k, i[0]), conn)


# print the create table output:
for i in newdict.keys():
    for x in newdict[i].values:
        print(x[0])
    print("\n")


# select 10 records from each table and save as excel sheets:
datadict = dict()
for k in mydict.keys():
    for i in mydict[k].values:
        try:
            datadict["{0}.{1}".format(k, i[0])] = pd.read_sql( "select * from {0}.{1} limit 10".format(k, i[0]), conn)
        except:
            # skip tables that cannot be queried (e.g. permission or format issues)
            pass

with pd.ExcelWriter("output.xlsx") as writer:
    for k, v in datadict.items():
        v.to_excel(writer, sheet_name=k)




 

connect to redshift and import data into pandas

There are two ways to connect to a Redshift server and get the data into a pandas dataframe: the "sqlalchemy" module or the "psycopg2" module. As you can see below, sqlalchemy uses the psycopg2 module internally.

import pandas as pd
from sqlalchemy import create_engine

pg_engine = create_engine(
    "postgresql+psycopg2://%s:%s@%s:%i/%s" % (myuser, mypasswd, myserver, int(myport), mydbname)
)

my_query = "select * from some_table limit 100"
df = pd.read_sql(my_query, con=pg_engine)

Since the "create_engine" class can also be used to connect to a MySQL database, it is recommended for the sake of consistency.
_____

#!pip install psycopg2-binary
import psycopg2
pconn = psycopg2.connect("host=myserver port=myport  dbname=mydbname user=myuser password=mypasswd")
my_query = "select * from some_table limit 100"

cur = pconn.cursor()           
cur.execute(my_query)
mydict = cur.fetchall()
import pandas as pd
df = pd.DataFrame(mydict)
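
Note that fetchall() returns plain tuples, so the dataframe columns are just 0, 1, 2, and so on. The real column names can be recovered from the cursor description:

# psycopg2 exposes the column names of the last query via cur.description
df.columns = [desc[0] for desc in cur.description]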



 

Import csv data and convert it to parquet in athena

# Let's assume we have a large file that we need to import into Athena; here are the commands to be used.

# gunzip -c panindia_pincode.csv.gz | head
"1","110016","DELHI","DELHI","","","",""
"2","110027","DELHI","DELHI","","","",""
"3","110062","DELHI","DELHI","","","",""


# create a table in athena

CREATE EXTERNAL TABLE pandindia_pincode (
  serial_number string,
  pincode_number string,
  client_city string,
  client_state string,
  dummy1 string,
  dummy2 string,
  dummy3 string,
  dummy4 string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  'serialization.format' = ',',
  'field.delim' = ',',
   "quoteChar"     = "\""
)
LOCATION 's3://datameetgeo/pincode/'
TBLPROPERTIES ('has_encrypted_data'='false');

## create parquet file format table

CREATE TABLE default.pandindia_pincode_parq
with (format='PARQUET', external_location='s3://datameetgeo/parquetpincode/'
) AS
SELECT * FROM default.pandindia_pincode
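
To sanity-check the conversion, the row counts of the two tables can be compared, for example from Python with pyathena (a sketch; the credentials and staging directory below are placeholders):

import pandas as pd
from pyathena import connect

conn = connect(aws_access_key_id="XXX", aws_secret_access_key="XXX",
               s3_staging_dir="s3://my-staging-bucket/athena/", region_name="us-east-1")

print(pd.read_sql("SELECT count(*) FROM default.pandindia_pincode", conn))
print(pd.read_sql("SELECT count(*) FROM default.pandindia_pincode_parq", conn))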



February 16, 2020

 

Using Python in Redshift


Here is a python function that can be installed in Redshift. It will normalize the text by removing junk characters and non-essential strings.

CREATE OR REPLACE FUNCTION f_file_split (mystr varchar(1000) ) RETURNS varchar(1000) IMMUTABLE as $$ 
    try:
        import itertools
        mylist=list()
        if mystr:
            for i in mystr[:100].split("_"):
                for x in i.split("-"):
                    for y in x.split("/"):
                        mylist.append(y.split("."))

        news = ' '.join(itertools.chain(*mylist))
        newlist=list()
        stopwords = ['sanstha', 'vikas', 'society', 'seva', 'json']
        for i in news.split():
            if len(i) < 4 or i in stopwords or i.isdigit() or i.startswith('bnk') or not i.isalpha() :
                pass
            else:
                newlist.append(i.lower().replace('vkss', '').replace('vks',''))
        return ' '.join(set(newlist))

    except:
        pass

$$ LANGUAGE plpythonu

I can add a new column to the table and populate it with the transformed values.

alter table final_details add column branch_name_fuzzy varchar(500);

update final_details set branch_name_fuzzy = f_file_split(filename);



 

Using partitions in Athena table

Here is a create table query that will create a partitioned table. The column "product_category", which is used for partitioning, is not in the main column list of the create table statement, yet it still shows up in select queries.

CREATE EXTERNAL TABLE `reviews`(
  `marketplace` varchar(10),
  `customer_id` varchar(15),
  `review_id` varchar(15),
  `product_id` varchar(25),
  `product_parent` varchar(15),
  `product_title` varchar(50),
  `star_rating` int,
  `helpful_votes` int,
  `total_votes` int,
  `vine` varchar(5),
  `verified_purchase` varchar(5),
  `review_headline` varchar(25),
  `review_body` varchar(1024),
  `review_date` date,
  `year` int)
PARTITIONED BY (
  `product_category` varchar(25))
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
  's3://amazon-reviews-pds/parquet'

The table will not return any data until the partitions are registered, so we need to use the alter table statement.
 
ALTER TABLE  reviews ADD
partition(product_category='Apparel')
location 's3://amazon-reviews-pds/parquet/product_category=Apparel/'
partition(product_category='Automotive')
location 's3://amazon-reviews-pds/parquet/product_category=Automotive'
partition(product_category='Baby')
location 's3://amazon-reviews-pds/parquet/product_category=Baby'
partition(product_category='Beauty')
location 's3://amazon-reviews-pds/parquet/product_category=Beauty'
partition(product_category='Books')
location 's3://amazon-reviews-pds/parquet/product_category=Books'
partition(product_category='Camera')
location 's3://amazon-reviews-pds/parquet/product_category=Camera'
partition(product_category='Grocery')
location 's3://amazon-reviews-pds/parquet/product_category=Grocery'
partition(product_category='Furniture')
location 's3://amazon-reviews-pds/parquet/product_category=Furniture'
partition(product_category='Watches')
location 's3://amazon-reviews-pds/parquet/product_category=Watches'
partition(product_category='Lawn_and_Garden')
location 's3://amazon-reviews-pds/parquet/product_category=Lawn_and_Garden';

Around 1 TB of data per category is already saved in the given S3 bucket in parquet format.

# aws s3 ls --human s3://amazon-reviews-pds/parquet/product_category=Apparel/
2018-04-09 06:35:35  115.0 MiB part-00000-495c48e6-96d6-4650-aa65-3c36a3516ddd.c000.snappy.parquet
2018-04-09 06:35:35  115.3 MiB part-00001-495c48e6-96d6-4650-aa65-3c36a3516ddd.c000.snappy.parquet
2018-04-09 06:35:36  114.9 MiB part-00002-495c48e6-96d6-4650-aa65-3c36a3516ddd.c000.snappy.parquet
2018-04-09 06:35:35  115.2 MiB part-00003-495c48e6-96d6-4650-aa65-3c36a3516ddd.c000.snappy.parquet
2018-04-09 06:35:35  115.3 MiB part-00004-495c48e6-96d6-4650-aa65-3c36a3516ddd.c000.snappy.parquet
2018-04-09 06:35:39  115.3 MiB part-00005-495c48e6-96d6-4650-aa65-3c36a3516ddd.c000.snappy.parquet
2018-04-09 06:35:39  115.4 MiB part-00006-495c48e6-96d6-4650-aa65-3c36a3516ddd.c000.snappy.parquet
2018-04-09 06:35:39  114.8 MiB part-00007-495c48e6-96d6-4650-aa65-3c36a3516ddd.c000.snappy.parquet
2018-04-09 06:35:39  115.3 MiB part-00008-495c48e6-96d6-4650-aa65-3c36a3516ddd.c000.snappy.parquet
2018-04-09 06:35:40  115.3 MiB part-00009-495c48e6-96d6-4650-aa65-3c36a3516ddd.c000.snappy.parquet
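
Once the partitions are added, the partition column can be queried like any other column, for example from Python with pyathena (a sketch; the credentials and staging directory are placeholders, and the table is assumed to be in the default database):

import pandas as pd
from pyathena import connect

conn = connect(aws_access_key_id="XXX", aws_secret_access_key="XXX",
               s3_staging_dir="s3://my-staging-bucket/athena/", region_name="us-east-1")

df = pd.read_sql("SELECT product_category, count(*) AS review_count "
                 "FROM default.reviews GROUP BY product_category", conn)
print(df)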

If you are planning to use csv instead of parquet, then you will have to change the serde and the input/output formats in the create table statement.



February 15, 2020

 

Manage Athena tables using PyAthena

PyAthena is an indispensable tool for Amazon Athena.

import pandas as pd
from pyathena import connect
from pyathena.pandas.util import to_sql

# create connection object
conn = connect(aws_access_key_id="xxx",
aws_secret_access_key="xxx",
s3_staging_dir="s3://testme162/tutorial/staging/",
region_name="us-east-1",
)

# You may have a very large dataframe instead of this...
df = pd.DataFrame({"a": [1, 2, 3, 4, 5, 6, 7, 8, 11, 21, 545]})

# use the helper function
to_sql(df, "todel", conn, "s3://testme162/tutorial/s3dir/",
    schema="sampledb", index=False, if_exists="replace")

# read the athena data into a new dataframe
ndf = pd.read_sql("SELECT * FROM sampledb.todel limit 100", conn)



 

Pandas case study 23

Here is a pandas way of parsing the values in the "GMT_DATE" column with the given function, renaming the result to "datetime1" and declaring it as the index.

import datetime as dt
import pandas as pd

myheader=['PKT_HEADER', 'DEVICE_ID',  'PACKET_CODE',  'SPEED_KNOTS', 'GMT_DATE', 'CHECKSUM_HEX']

def mydate(x):
    try:
        return dt.datetime.strptime(x, '%d%m%y %H%M%S')
    except ValueError:
        return pd.NaT

df=pd.read_csv('vtss.txt', sep=',', header=None, names = myheader, parse_dates={'datetime1' : ["GMT_DATE"]}, date_parser=mydate, keep_date_col=True, index_col='datetime1')
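
For example, the helper parses strings in day-month-year hour-minute-second form and returns NaT for anything that does not match (the sample values below are made up):

print(mydate("010120 123000"))   # 2020-01-01 12:30:00
print(mydate("bad value"))       # NaT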



February 08, 2020

 

Pandas case study 22

This is how easy it is to connect to a MySQL data source and get the query results into a dataframe.

# conda install -y sqlalchemy pymysql

import pandas as pd
import sqlalchemy
engine = sqlalchemy.create_engine('mysql+pymysql://root:XXXXX@172.17.0.1/examDB')
df = pd.read_sql_query('SELECT * FROM candidateresult limit 10', engine, index_col = 'resultid')
_____

You can also connect to a Redshift database and get the data into a pandas dataframe using this code...

import easyboto
x=easyboto.connect()
x.my_add='xxxx.us-east-1.redshift.amazonaws.com'
x.my_user='root'
x.my_pas='xxx'
x.my_db='xxx'

dlr=x.runQuery("select * from some_table limit 10 ")

dlr.columns=["useID","messageid","client_code", "message", "status", "mobilenos"]
dlr = dlr.set_index('messageid')

You will need to save a file called "easyboto.py" and here is the code:

https://raw.githubusercontent.com/shantanuo/easyboto/master/easyboto.py
_____

For more advanced options, use awswrangler:

https://oksoft.blogspot.com/search?q=awswrangler



February 07, 2020

 

Shell script basics

This shell script will check 10 IP addresses sequentially and print whether each one responds to the ping command.

#!/bin/bash
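# -c 1 sends a single probe; -t 1 is a one-second timeout on BSD/macOS (on Linux use -W 1 for the timeout, since -t sets the TTL there)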
for ip in 192.168.1.{1..10}; do 
    ping -c 1 -t 1 $ip > /dev/null 2> /dev/null 
    if [ $? -eq 0 ]; then 
        echo "$ip is up"
    else
        echo "$ip is down"
    fi
done



February 06, 2020

 

MySQL case study 183

There are times when my stored procedure fails with this error:

mysql> call PROC_DBD_EVENTS;

ERROR 1270 (HY000): Illegal mix of collations (utf8_general_ci,COERCIBLE), (utf8_general_ci,COERCIBLE), (latin1_swedish_ci,IMPLICIT) for operation 'case'

1) The work-around is to modify the proc table like this...

mysql> select db,name,character_set_client,collation_connection from mysql.proc where name='PROC_DBD_EVENTS' ;
+-----------+-----------------+----------------------+----------------------+
| db        | name            | character_set_client | collation_connection |
+-----------+-----------------+----------------------+----------------------+
| upsrtcVTS | PROC_DBD_EVENTS | utf8                 | utf8_general_ci      |
+-----------+-----------------+----------------------+----------------------+

update mysql.proc set character_set_client='latin1', collation_connection='latin1_swedish_ci' where name= "PROC_DBD_EVENTS";

2) But the officially supported workaround is to (re)create the procedure using the latin1 character set, e.g. in the MySQL command line client:

set names latin1;
CREATE DEFINER= ... PROCEDURE ...

3) In a Java application, you should not use utf8 in the connection string (when the procedure is created); use Cp1252 instead, e.g.:

jdbc:mysql://127.0.0.1:3306/test?characterEncoding=Cp1252



 

Manage redshift cluster using boto

Taking a final snapshot while deleting the running Redshift cluster, and restoring from the latest snapshot, are the two important activities that are possible using this boto code.

import boto
import datetime
conn = boto.connect_redshift(aws_access_key_id='XXX', aws_secret_access_key='XXX')

mymonth = datetime.datetime.now().strftime("%b").lower()
myday = datetime.datetime.now().strftime("%d")
myvar = mymonth+myday+'-v-mar5-dreport-new'

# take snapshot and delete cluster
mydict=conn.describe_clusters()
myidentifier=mydict['DescribeClustersResponse']['DescribeClustersResult']['Clusters'][0]['ClusterIdentifier']
conn.delete_cluster(myidentifier, skip_final_cluster_snapshot=False, final_cluster_snapshot_identifier=myvar)

# Restore from the last snapshot
response = conn.describe_cluster_snapshots()
snapshots = response['DescribeClusterSnapshotsResponse']['DescribeClusterSnapshotsResult']['Snapshots']
snapshots.sort(key=lambda d: d['SnapshotCreateTime'])
mysnapidentifier = snapshots[-1]['SnapshotIdentifier']
conn.restore_from_cluster_snapshot('v-mar5-dreport-new', mysnapidentifier, availability_zone='us-east-1a')



 

MySQL case study 182

It is easy to use Docker to start multiple mysql instances. But it is also possible using mysqld_multi and a small wrapper script, as shown below:

$ cat /bin/multi
#!/bin/sh
# chmod 755 /bin/multi
# to start or stop multiple instances of mysql
# multi start
# multi stop
# change the root user and password # default action is to start

action=${1:-"start"}

stop()
(
    for socket in {3307..3320}
    do
        mysqladmin shutdown -uroot -proot@123 --socket=/tmp/mysql.sock$socket
    done
)

start()
(
    for socket in {3307..3320}
    do
        mysqld_multi start $socket
    done
)

$action 



 

MySQL Case Study - 181

Backup mysql tables

Here is a shell script that will take a backup of 3 tables from a database. The fields will be delimited by tilde + tilde (~~).

#!/bin/sh
rm -rf /pdump && mkdir /pdump
chmod 777 /pdump
while read -r myTBL
do
mysql -uroot -pPasswd -Bse"select * into outfile '/pdump/$myTBL.000000.txt' FIELDS TERMINATED BY '~~'  from dbName.$myTBL"
done << heredoc
customer_ticket
cutomer_card
fees_transactions
heredoc


