Shantanu's Blog

Database Consultant

June 04, 2025

 

Public Appeal to LibreOffice Source Code Contributors

LibreOffice is among the finest software applications I have ever used, and I deeply appreciate all the contributions that have made it so feature-rich and reliable. However, I have a humble request: please ensure that every contribution is thoroughly tested before it is merged into the codebase.

If anyone is still reading, I would like to share more information for your consideration.

I would like to see an improvement in the testing process for LibreOffice. When a change is made to the source code, it should ideally undergo review and testing by at least two independent testers, followed by approval from a senior developer.

1) Even seemingly minor or trivial changes in the source code can have significant and potentially disruptive effects on the user experience. For example, a shortcut key combination — Ctrl + Shift + C — was assigned to the "Track Changes" function:

https://gerrit.libreoffice.org/c/core/+/65041

This raised concerns: Who approved this change? Who tested and validated it before it was committed?

Many users, including myself, were confused as to why the "Track Changes" feature was suddenly being triggered unexpectedly. The impact of this change was discussed in the following bug reports:

https://bugs.documentfoundation.org/show_bug.cgi?id=130847

https://bugs.documentfoundation.org/show_bug.cgi?id=134151

2) Another example is the addition of the Alt + 5 shortcut key to activate the Sidebar pane:

https://bugs.documentfoundation.org/show_bug.cgi?id=158112

3) PDF files are now exported to the most recently used directory, rather than the directory of the active document. This change in LibreOffice's behavior was unexpected and caused inconvenience.

https://bugs.documentfoundation.org/show_bug.cgi?id=165917

4) Additionally, the removal of the "Add to Dictionary" option from the context menu caused inconvenience to many users:

https://bugs.documentfoundation.org/show_bug.cgi?id=166689

Although this particular issue was resolved within five days, it still raises a critical question: who is responsible for testing and verifying such changes before they are merged? Without the "Add to Dictionary" option on right-click, proofreading becomes impractical.

To ensure quality and minimize unintended consequences, I recommend establishing a more robust review and testing protocol for code changes.



June 02, 2025

 

Manage AWS resources using command line

1) Add the access key and secret key of a read-only user:

aws configure

2) Install Amazon Q using the instructions found on this page...

https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/command-line-installing-ssh-setup-autocomplete.html

3) I can now start a Q chat session using the command...

./q/bin/q

Use natural-language instructions like "list all DynamoDB tables" or "list all S3 buckets".

4) Advanced users can create an MCP server and save database credentials such as the username and password, so that Q can query the database and return results.

https://awslabs.github.io/mcp/servers/dynamodb-mcp-server/



February 19, 2025

 

Apply LibreOffice styles using a macro and create PDF

The following Dockerfile works as expected. I use it to convert a text file to PDF after formatting it with a style created by a macro.
_____

FROM ubuntu:latest

# Install LibreOffice and scripting dependencies
RUN apt-get update && apt-get install -y libreoffice libreoffice-script-provider-python libreoffice-script-provider-bsh libreoffice-script-provider-js

# Install required dependencies
RUN apt-get update && apt-get install -y wget unzip fonts-dejavu

# Download and install Shobhika font
RUN mkdir -p /usr/share/fonts/truetype/shobhika && wget -O /tmp/Shobhika.zip https://github.com/Sandhi-IITBombay/Shobhika/releases/download/v1.05/Shobhika-1.05.zip && unzip /tmp/Shobhika.zip -d /tmp/shobhika && mv /tmp/shobhika/Shobhika-1.05/*.otf /usr/share/fonts/truetype/shobhika/

# Create necessary directories with proper permissions
RUN mkdir -p /app/.config/libreoffice/4/user/basic/Standard
RUN chmod -R 777 /app/.config

# Set LibreOffice user profile path
ENV UserInstallation=file:///app/.config/libreoffice/4/user

WORKDIR /app
COPY StyleLibrary.oxt /app/
COPY marathi_spell_check.oxt /app/
COPY myfile.txt /app/

RUN unopkg add /app/StyleLibrary.oxt --shared
RUN unopkg add /app/marathi_spell_check.oxt --shared

# Run the LibreOffice macro
CMD soffice --headless --invisible --norestore "macro:///StyleLibrary.Module1.myStyleMacro2(\"/app/myfile.txt\")"
_____

# create an image:
docker build -t shantanuo/mylibre .

# Run the container:
docker run -v .:/app/ --rm shantanuo/mylibre

As you can see, the styles from StyleLibrary are applied to myfile.txt and a PDF document is created successfully.
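Outside of Docker, the same conversion can be driven from a short Python wrapper. This is only a sketch, assuming LibreOffice and the StyleLibrary extension are already installed locally; it mirrors the CMD line from the Dockerfile above.

```python
import shutil
import subprocess

def build_soffice_cmd(txt_path):
    # mirrors the Dockerfile CMD: run the Basic macro on the given file
    macro = 'macro:///StyleLibrary.Module1.myStyleMacro2("{}")'.format(txt_path)
    return ["soffice", "--headless", "--invisible", "--norestore", macro]

cmd = build_soffice_cmd("/app/myfile.txt")
print(cmd)

# only attempt the conversion when LibreOffice is actually installed
if shutil.which("soffice"):
    subprocess.run(cmd, check=True)
```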



September 28, 2024

 

Firefox and LibreOffice in your browser

Kasm VNC is a modern open source VNC server.

Quickly connect to your Linux server's desktop from any web browser.
No client software install required.

1) Firefox using VNC

docker run -d \
--name=firefox \
-e PUID=1000 \
-e PGID=1000 \
-e TZ=Etc/UTC \
-p 3000:3000 \
-p 3001:3001 \
-v /path/to/config2:/config \
--shm-size="1gb" \
--restart unless-stopped \
lscr.io/linuxserver/firefox:latest

2) LibreOffice using VNC

docker run -d \
  --name=libreoffice \
  --security-opt seccomp=unconfined `#optional` \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -p 3000:3000 \
  -p 3001:3001 \
  -v /path/to/config:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/libreoffice:latest



May 18, 2024

 

Make Ubuntu great again!

When you click above or below the slider on a scrollbar, instead of scrolling up or down by a "page" as it has for many years, you now jump to wherever you click.

To restore the old behavior, edit (or create) the file:

~/.config/gtk-3.0/settings.ini

And add the following:

[Settings]
gtk-primary-button-warps-slider = false

Do not forget to restart your applications for the change to take effect.
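If you manage several machines, the edit can be scripted. A minimal sketch using only the standard library (it writes settings.ini in the current directory; point path at ~/.config/gtk-3.0/settings.ini for real use):

```python
import configparser

# for real use: os.path.expanduser("~/.config/gtk-3.0/settings.ini")
path = "settings.ini"

config = configparser.ConfigParser()
config.read(path)  # keep any existing settings if the file is already there
if "Settings" not in config:
    config["Settings"] = {}
config["Settings"]["gtk-primary-button-warps-slider"] = "false"

with open(path, "w") as f:
    config.write(f)
```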



May 01, 2024

 

Remote Desktop to Ubuntu Server

Ubuntu Desktop requires downloading about 500 MB of packages and an additional 2 GB of disk space. If that's too much, you can install a more lightweight desktop environment called Xfce: just 45 MB of packages that use an extra 175 MB of space (substitute xfce4 for ubuntu-desktop in the command below). Install the desktop and remote-access packages like this:

sudo apt-get install -y tightvncserver xrdp ubuntu-desktop m17n-db ibus-m17n

_____

1) You may need to change the ubuntu user's password if using an EC2 instance.

echo 'ubuntu:india' | sudo chpasswd 

2) The second step is to change PasswordAuthentication to "yes" in the file /etc/ssh/sshd_config and restart the ssh service.

3) Avoid a certificate security error:

sudo adduser xrdp ssl-cert

4) Disable screen lock and suspend:

gsettings set org.gnome.desktop.session idle-delay 0

systemctl mask suspend.target



July 21, 2023

 

langchain for pandas

LangChain is a module to query a pandas dataframe using natural language. It uses the OpenAI API to build pandas commands!

!pip install langchain openai
import os
os.environ["OPENAI_API_KEY"] = "XXXX"

from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import OpenAI
import pandas as pd

# load your own data into a dataframe first
df = pd.read_csv("your_data.csv")

pd_agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)

pd_agent.run("Find the total sales for each product line in the year 2003")
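Under the hood, the agent simply emits ordinary pandas code. On a toy dataframe (the column names below are assumptions for illustration, not the actual schema the agent would see), the generated command would resemble:

```python
import pandas as pd

# toy sales data for illustration only
df = pd.DataFrame({
    "YEAR_ID": [2003, 2003, 2004, 2003],
    "PRODUCTLINE": ["Classic Cars", "Planes", "Classic Cars", "Planes"],
    "SALES": [100.0, 50.0, 80.0, 25.0],
})

# total sales per product line for the year 2003
totals = df[df["YEAR_ID"] == 2003].groupby("PRODUCTLINE")["SALES"].sum()
print(totals)
```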

_____

Something similar...


# https://github.com/gventuri/pandas-ai

!pip install pandasai
from pandasai import SmartDataframe, SmartDatalake
from pandasai.llm import OpenAI
llm = OpenAI(api_token="YOUR TOKEN")

sdf = SmartDataframe(df, config={"llm": llm})
sdf.chat("Return the top 5 countries by GDP")
sdf.chat("Plot a chart of the gdp by country")

print(sdf.last_code_generated)

If you have more than one dataframe, use the SmartDatalake class and supply a list of dataframes, e.g.

sdf = SmartDatalake([df, df2, df3], config={"llm": llm})



March 16, 2022

 

Interactive pandas dataframe

iTables is a useful utility for pandas. It makes a DataFrame or Series interactive.

https://github.com/mwouts/itables

Install the package with:

pip install itables

Activate the interactive mode:

from itables import init_notebook_mode
init_notebook_mode(all_interactive=True)

or use itables.show to show just one Series or DataFrame as an interactive table.

_____

1) At the moment iTables does not have an offline mode. While the table data is embedded in the notebook, the jQuery and datatables.net libraries are loaded from a CDN.

2) When the data in a table is larger than maxBytes (64KB by default), iTables will display only a subset of the table. To show the table in full, modify the value of maxBytes either locally:

from itables import show

show(df, maxBytes=0)

or globally:

import itables.options as opt

opt.maxBytes = 2 ** 20
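To get a feel for the limit, you can compare a frame's in-memory footprint with the 64KB default. This is only a rough proxy: iTables measures the size of the serialized table, which differs from the in-memory size.

```python
import pandas as pd

df = pd.DataFrame({"x": range(10_000)})
nbytes = int(df.memory_usage(deep=True).sum())
# well past the 64KB default, so iTables would show only a subset
print(nbytes, nbytes > 64 * 1024)
```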



September 18, 2021

 

wikipedia search

Google search works very well, but it is not useful for certain tasks. For example, when I am searching for heritage sites, I google...

unesco world heritage sites in india

Usually the first result is the official site and the best one to start with...

https://whc.unesco.org/en/statesparties/in

The problem with this site is that there is no way to download the list as a spreadsheet. I can visit the relevant Wikipedia page and download the table found there, but there is a better way to search and download results.

https://tinyurl.com/36wrc3pw

This is a visual query builder for Wikidata. Every entity has been given an ID, and we can use an SQL-like query interface to get the data in tabular format. Once I download the CSV file, I can open it in LibreOffice and convert the image links to actual images using an excellent add-on that can be installed from...

https://extensions.libreoffice.org/en/extensions/show/links-to-images



May 28, 2021

 

Athena and Unicode text

Athena supports Unicode characters very well. For example, if the data file looks like this...

"Root_word";"Word";"Primary";"Type";"Code";"Position";"Rule"
"अँटिबायोटिक";"अँटिबायोटिक";"अँटिबायोटिक";"Primary";"";"";""
"अँटिबायोटिक";"अँटिबायोटिकअंती";"अँटिबायोटिक";"Suffix";"A";"7293";"001: 0 अंती ."
"अँटिबायोटिक";"अँटिबायोटिकअर्थी";"अँटिबायोटिक";"Suffix";"A";"7293";"002: 0 अर्थी ."
"अँटिबायोटिक";"अँटिबायोटिकआतून";"अँटिबायोटिक";"Suffix";"A";"7293";"003: 0 आतून ."
"अँटिबायोटिक";"अँटिबायोटिकआतूनचा";"अँटिबायोटिक";"Suffix";"A";"7293";"004: 0 आतूनचा ."
"अँटिबायोटिक";"अँटिबायोटिकआतूनची";"अँटिबायोटिक";"Suffix";"A";"7293";"005: 0 आतूनची ."
"अँटिबायोटिक";"अँटिबायोटिकआतूनचे";"अँटिबायोटिक";"Suffix";"A";"7293";"006: 0 आतूनचे ."
"अँटिबायोटिक";"अँटिबायोटिकआतूनच्या";"अँटिबायोटिक";"Suffix";"A";"7293";"007: 0 आतूनच्या ."
"अँटिबायोटिक";"अँटिबायोटिकआतूनला";"अँटिबायोटिक";"Suffix";"A";"7293";"008: 0 आतूनला ."

This create table statement is all I need...

create external table myptg (
root_word varchar(255),
derived_word varchar(255),
stemmed_word varchar(255),
type varchar(255),
code varchar(255),
position varchar(255),
rule varchar(255)
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  'separatorChar' = '\;',
  'quoteChar' = '\"',
  'escapeChar' = '\\'
)
LOCATION 's3://ptg1/mc/'
TBLPROPERTIES ("skip.header.line.count"="1");

I can create a supporting table like this...

create external table gamabhana (derived_word varchar(255))
LOCATION 's3://ptg1/mc2/'
TBLPROPERTIES ("skip.header.line.count"="1");

A third table can be created using syntax like this...
 
create external table anoop (
serial_number int,
root_word varchar(255),
stem1_word varchar(255),
stem2_word varchar(255),
stem3_word varchar(255),
stem4_word varchar(255),
stem5_word varchar(255),
stem6_word varchar(255),
stem7_word varchar(255)
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
  'field.delim' = ',',
  'escape.delim' = '\\',
  'line.delim' = '\n'
)
LOCATION 's3://ptg1/mc3/'
TBLPROPERTIES ("skip.header.line.count"="1");


And then run a join statement like this...

create table gamabhana_match as
select a.derived_word, b.root_word, b.stemmed_word, b.type, b.code, b.position, b.rule, c.stem1_word, c.stem2_word, c.stem3_word, c.stem4_word, c.stem5_word, c.stem6_word, c.stem7_word
from gamabhana as a left join myptg as b
on b.derived_word = a.derived_word
left join anoop as c
on c.derived_word = a.derived_word

This query scans around 2 GB of data (in this case), so the cost will be around 1 cent per query. The same join could be done in MySQL, but importing the data and building indexes is not easy. On the other hand, unlike Athena, MySQL allows unlimited queries for free!
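The cost estimate follows directly from Athena's published price of $5 per TB of data scanned:

```python
# Athena bills per TB of data scanned ($5/TB)
scanned_gb = 2
cost = scanned_gb / 1024 * 5.0
print(f"${cost:.4f} per query")  # roughly one cent
```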

Athena is good for data that is important and accessed rarely.



April 29, 2021

 

Manage your csv using clever csv

A command to analyze the CSV file and report its delimiter and escaping:

$ clevercsv detect ./imdb.csv
Detected: SimpleDialect(',', '', '\\')

We can import the CSV into a pandas dataframe without using the read_csv method!
$ clevercsv explore -p imdb.csv
>>> df

The code to create a pandas dataframe using CleverCSV:
$ clevercsv code  -p  ./imdb.csv
import clevercsv
df = clevercsv.read_dataframe("./imdb.csv", delimiter=",", quotechar="", escapechar="\\")

If you are using Jupyter Notebook, use this code...
import clevercsv
rows = clevercsv.read_table('./imdb.csv')
df = clevercsv.read_dataframe('./imdb.csv')
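When CleverCSV is not available, the standard library's csv.Sniffer performs a rougher version of the same dialect detection:

```python
import csv
import io

sample = 'a,b,c\n1,"x,y",3\n4,5,6\n'
dialect = csv.Sniffer().sniff(sample)
print(dialect.delimiter, dialect.quotechar)  # detected delimiter and quote character

rows = list(csv.reader(io.StringIO(sample), dialect))
print(rows[1])  # the quoted comma survives parsing
```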



January 07, 2021

 

SQL and PPL support by AWS elasticsearch (Open distro for ES)

AWS Elasticsearch now supports standard SQL syntax. For system admins, it also supports PPL (Piped Processing Language). Here is an example of both:

select userAgent, eventID from newcwl where requestParameters.bucketName.keyword like 'web%' and (eventName.keyword like 'PutObject%' OR eventName.keyword like 'UploadPartCopy%' OR eventName.keyword like 'UploadPart%') ;

And this is PPL syntax:

search source=newcwl eventSource.keyword='s3.amazonaws.com' | where eventName.keyword like 'PutObject%' or eventName.keyword like 'UploadPart%' or eventName.keyword like 'UploadPartCopy%' | where requestParameters.bucketName.keyword like "web%" | fields userAgent, eventID

This is really a great feature. I was looking for something like this for years!



December 27, 2020

 

spell checker using Machine Learning

JamSpell is an ML-based spell checker that uses pre-trained models from:

https://github.com/bakwc/JamSpell-models/

The Python code is clear and concise.

import jamspell
jsp = jamspell.TSpellCorrector()
assert jsp.LoadLangModel('en.bin')
jsp.FixFragment("I am the begt spell cherken")

The above code will return:

I am the best spell checker

This is a really accurate, fast, multi-language spell checker.

https://github.com/bakwc/JamSpell
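To get an intuition for what a corrector does, here is a naive dictionary-lookup sketch using only the standard library. The tiny vocabulary below is made up for illustration; JamSpell's language model is far more accurate because it also considers context.

```python
import difflib

VOCAB = ["i", "am", "the", "best", "spell", "checker"]  # tiny toy vocabulary

def fix_fragment(text):
    # replace each out-of-vocabulary word with its closest dictionary entry
    out = []
    for word in text.lower().split():
        if word in VOCAB:
            out.append(word)
        else:
            match = difflib.get_close_matches(word, VOCAB, n=1, cutoff=0.6)
            out.append(match[0] if match else word)
    return " ".join(out)

print(fix_fragment("I am the begt spell cherken"))
```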



July 13, 2020

 

Understanding your data

A pandas dataframe stores all the data in a single table, which makes it difficult to understand the relationships between columns. For example, I would like to know how Area Abbreviation is related to Item Code in the following data.

# Download the csv file from kaggle:
https://www.kaggle.com/dorbicycle/world-foodfeed-production

import pandas as pd
food_df = pd.read_csv('FAO.csv' , encoding='latin1')
food_df = food_df.drop(columns=food_df.columns[10:])

I will now import the autonormalize module from featuretools. It will detect the internal relationships between columns and show us a nice graph.

from featuretools.autonormalize import autonormalize as an
entityset = an.auto_entityset(food_df)
entityset.plot()

  Entities:
    Element Code_Item Code_Area Code [Rows: 21477, Columns: 4]
    Element Code [Rows: 2, Columns: 2]
    Item Code [Rows: 117, Columns: 2]
    Area Code [Rows: 174, Columns: 5]
    Area Abbreviation [Rows: 169, Columns: 2]
  Relationships:
    Element Code_Item Code_Area Code.Area Code -> Area Code.Area Code
    Element Code_Item Code_Area Code.Item Code -> Item Code.Item Code
    Element Code_Item Code_Area Code.Element Code -> Element Code.Element Code
    Area Code.Area Abbreviation -> Area Abbreviation.Area Abbreviation

Do not forget to check the featuretools module as well. It will add new columns to your dataframe that can be useful in building a machine learning model.

import featuretools as ft
fm, features = ft.dfs(entityset=entityset, target_entity='Element Code_Item Code_Area Code')
print (fm)

https://innovation.alteryx.com/automatic-dataset-normalization-for-feature-engineering-in-python/



May 07, 2020

 

Chrome Extensions

Some of the important Google Chrome extensions:

1) HTML5 Outliner: Generates a navigable page outline with heading and sectioning elements
https://chrome.google.com/webstore/detail/html5-outliner/afoibpobokebhgfnknfndkgemglggomo

2) Page load time: Displays page load time in the toolbar
https://chrome.google.com/webstore/detail/page-load-time/fploionmjgeclbkemipmkogoaohcdbig

3) Open Multiple URLs: Extract and open a list of URLs
https://chrome.google.com/webstore/detail/open-multiple-urls/oifijhaokejakekmnjmphonojcfkpbbh

4) Former2 Helper: Helps avoid CORS issues with former2.com while calling the AWS service API endpoints
https://chrome.google.com/webstore/detail/former2-helper/fhejmeojlbhfhjndnkkleooeejklmigi

5) Chrome Remote Desktop
https://chrome.google.com/webstore/detail/chrome-remote-desktop/inomeogfingihgjfjlpeplalcfajhgai

6) Open in Colab: Open a Github-hosted notebook in Google Colab
https://chrome.google.com/webstore/detail/open-in-colab/iogfkhleblhcpcekbiedikdehleodpjo

7) Block image: Prevent images from downloading. You can toggle blocking on/off by clicking the extension icon on the Chrome toolbar.
https://chrome.google.com/webstore/detail/block-image/pehaalcefcjfccdpbckoablngfkfgfgj

8) Copy URLs: Copy the URLs of all tabs to the clipboard
https://chrome.google.com/webstore/detail/copy-urls/efkmnflmpgiklkehhoeiibnmdfffmmjk?hl=en



April 16, 2020

 

Download and unzip any file using python

Usually I download a file and extract it using two Linux commands like this...

! wget https://github.com/d6t/d6tstack/raw/master/test-data.zip
! unzip -o test-data.zip 

But it can also be done using Python code as shown below!

import urllib.request
import zipfile

cfg_fname_sample = "test-data.zip"
urllib.request.urlretrieve(
    "https://github.com/d6t/d6tstack/raw/master/" + cfg_fname_sample, cfg_fname_sample
)

# extract everything into the current directory
with zipfile.ZipFile(cfg_fname_sample, "r") as zip_ref:
    zip_ref.extractall(".")



April 12, 2020

 

Launch spot EC2 instances

The following code will launch a Linux instance of type m3.medium using spot pricing and associate it with the IP address 13.228.39.49. Make sure to use your own Elastic IP address and key, and do not forget to change the access_key and secret_key parameters.

!wget https://raw.githubusercontent.com/shantanuo/easyboto/master/easyboto.py
 
import easyboto
dev=easyboto.connect('access_key', 'secret_key')
dev.placement='us-east-1a'
dev.myaddress='13.228.39.49'
dev.key='dec15abc'
dev.MAX_SPOT_BID= '2.9'
dev.startEc2Spot('ami-0323c3dd2da7fb37d', 'm3.medium')

This will return the instance id and the ssh command that you can use to connect to your instance. The output will look something like...

job instance id: i-029a926e68118d089
ssh -i dec15a.pem ec2-user@13.228.39.49

You can list all instances along with details like launch time and image_id, and save the results as a pandas dataframe, using the showEc2 method like this...

df=dev.showEc2()

Now "df" is a pandas dataframe object. You can sort or group the instances just like in a spreadsheet.

You can delete the instance using the deleteEc2 method, providing the instance ID that was returned in the first step.

dev.deleteEc2('i-029a926e68118d089')
_____

You can also use cloudformation template for this purpose. Visit the following link and look for "Linux EC2 Instance on SPOT" section.

https://github.com/shantanuo/cloudformation

Click on the "Launch Stack" button. It is simply a GUI for the Python code mentioned above: you just submit a form with parameters like the key and IP address.



April 07, 2020

 

emailThis service using serverless API

There are times when I find a great article or web page but don't have time to read it. I use the EmailThis service to save text and images from a website to my email inbox. The concept is very simple: drag and drop a bookmarklet to the bookmark toolbar and click on it to send the current web page to your inbox!

https://www.emailthis.me/

But I did not like the premium ads and partial content that the site sends, so I built my own serverless API to get exactly the same functionality using Mailgun and Amazon Web Services.

https://www.mailgun.com/

Once you register with Mailgun, you will get a URL and API key that you should copy-paste to a notepad. You will need to provide this information when you launch the CloudFormation template by clicking on this link.


https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/new?stackName=emailThis&templateURL=https://datameetgeobk.s3.amazonaws.com/cftemplates/furl.yaml.txt

Once the resources are created, you can see a URL in output section something like this...

https://ie5n05bqo0.execute-api.us-east-1.amazonaws.com/mycall

Now building the JavaScript bookmarklet is easy.

javascript:(function(){location.href='https://ie5n05bqo0.execute-api.us-east-1.amazonaws.com/mycall?email=shantanu.oak@gmail.com&title=emailThis&url='+encodeURIComponent(location.href);})();

Right-click on any bookmark and paste the above link. Make sure that you have changed the URL and email address to your own. Now click this bookmarklet while you are on an important web page that you want to send to your inbox. Enjoy!
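The bookmarklet simply issues a GET request with three query parameters. The equivalent URL construction in Python looks like this (the endpoint is the one from my stack; yours will differ, and the page URL here is just an example):

```python
from urllib.parse import urlencode

endpoint = "https://ie5n05bqo0.execute-api.us-east-1.amazonaws.com/mycall"
params = {
    "email": "shantanu.oak@gmail.com",
    "title": "emailThis",
    "url": "https://example.com/article?id=42",  # the page to save
}
# urlencode percent-escapes the page URL, like encodeURIComponent in the bookmarklet
print(endpoint + "?" + urlencode(params))
```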



March 13, 2020

 

Visual pandas - bamboolib

bamboolib - a GUI for pandas dataframes. Stop googling pandas commands

1) Get your free 14-day trial key.

https://bamboolib.8080labs.com/

2) Install required python package:

pip install bamboolib

jupyter nbextension enable --py qgrid --sys-prefix
jupyter nbextension enable --py widgetsnbextension --sys-prefix
jupyter nbextension install --py bamboolib --sys-prefix
jupyter nbextension enable --py bamboolib --sys-prefix

3) Restart the Docker container.

4) Start exploring visual pandas!

import bamboolib as bam
import pandas as pd
df = pd.read_csv(bam.titanic_csv)
df

_____

Import modules automatically when required!

pip install pyforest
conda install nodejs

python -m pyforest install_extensions

Restart the Docker container.



December 24, 2019

 

Console Recorder for AWS

Writing CloudFormation code can be very difficult for learners. Here is a "Recorder" that will watch your browser activity and convert it to CloudFormation/Terraform templates. Neat!

https://github.com/iann0036/AWSConsoleRecorder
_____

Generate CloudFormation / Terraform / Troposphere templates from your existing AWS resources

https://former2.com

Source code:

https://github.com/iann0036/former2



