Shantanu's Blog

Database Consultant

February 06, 2020

 

MySQL case study 183

There are times when my stored procedure fails with this error:

mysql> call PROC_DBD_EVENTS;

ERROR 1270 (HY000): Illegal mix of collations (utf8_general_ci,COERCIBLE), (utf8_general_ci,COERCIBLE), (latin1_swedish_ci,IMPLICIT) for operation 'case'

1) The work-around is to modify the proc table like this...

mysql> select db,name,character_set_client,collation_connection from mysql.proc where name='PROC_DBD_EVENTS' ;
+-----------+-----------------+----------------------+----------------------+
| db        | name            | character_set_client | collation_connection |
+-----------+-----------------+----------------------+----------------------+
| upsrtcVTS | PROC_DBD_EVENTS | utf8                 | utf8_general_ci      |
+-----------+-----------------+----------------------+----------------------+

update mysql.proc set character_set_client='latin1', collation_connection='latin1_swedish_ci' where name= "PROC_DBD_EVENTS";

2) But the officially supported workaround is to (re)create the procedure using the latin1 character set, e.g. in the MySQL command-line client:

set names latin1;
CREATE DEFINER= ... PROCEDURE ...

3) In a Java application, do not use utf8 in the connection string (if the procedure was created with latin1); use Cp1252 instead, e.g.:

jdbc:mysql://127.0.0.1:3306/test?characterEncoding=Cp1252
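As a quick check (not part of the original workaround), you can confirm what the current connection is actually using:

mysql -e "show variables like 'character_set_client'; show variables like 'collation_connection';"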



January 26, 2020

 

Copy mysql data to another server using tar and nc

If you want to copy the data from one server (e.g. .63) to another (.64):

1) Stop MySQL service on both servers, for e.g. 10.10.10.63 and 10.10.10.64
2) Go to /var/lib/mysql/ directory on both servers.
cd /var/lib/mysql/

3) On 10.10.10.63
tar -cf - * | nc -l 1234

4) On 10.10.10.64
nc 10.10.10.63 1234 | tar xf -

Restart the MySQL service on both servers and you should get exactly the same data on 64 as you see on 63 (assuming you have the same my.cnf config). This needs to be done very carefully or else the data may get corrupted.
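A quick sanity check after the copy (a hedged sketch; it assumes md5sum is available on both servers): run this from /var/lib/mysql/ on 63 and 64 before starting MySQL and compare the two checksums.

find . -type f -exec md5sum {} \; | sort -k 2 | md5sum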



April 28, 2019

 

MySQL error log using Elastic stack

Here are 3 easy steps to enable logging of failed MySQL queries.
1) Download packetbeat config file
2) Edit config file to add "send_response" parameter
3) Start docker container

# download packetbeat config file
curl -L -O https://raw.githubusercontent.com/elastic/beats/7.0/deploy/docker/packetbeat.docker.yml

# add send_response parameter to mysql
packetbeat.protocols.mysql:
  ports: [3306]
  send_response: true

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
setup.ilm.enabled: false
setup.xpack.security.enabled: false
setup.xpack.graph.enabled: false
setup.xpack.watcher.enabled: false
setup.xpack.monitoring.enabled: false
setup.xpack.reporting.enabled: false

# start docker container
docker run \
  --user=packetbeat \
  --volume="$(pwd)/packetbeat.docker.yml:/usr/share/packetbeat/packetbeat.yml:ro" \
  --cap-add="NET_RAW" \
  --cap-add="NET_ADMIN" \
  --network=host \
  -d docker.elastic.co/beats/packetbeat:7.0.0 \
  --strict.perms=false -e \
  -E cloud.id=XXX \
  -E cloud.auth=elastic:XXX

# Once you get logs in Kibana, use a filter type:mysql and status:Error to extract failing queries.



December 30, 2017

 

Install mysql with tokuDB engine within percona

This is required if you get an error while initializing the TokuDB engine:

echo never > /sys/kernel/mm/transparent_hugepage/enabled

And this is required if you get a permissions error:

rm -rf /storage/custom3381

mkdir /storage/custom3381

chown 1001 /storage/custom3381

The Percona Server image has a built-in environment variable to enable TokuDB:

docker run -p 3381:3306 -v /my/custom3381:/etc/mysql/conf.d -v /storage/custom3381:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=india3381 -e INIT_TOKUDB=1 -d percona/percona-server:5.7
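Once the container is up, a quick check (not from the original post; the password matches the run command above) confirms that the engine is actually loaded:

docker exec $(docker ps -l -q) mysql -uroot -pindia3381 -e "show engines" | grep -i tokudb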



 

Using xtra-backup for incremental backups

1) Download xtrabackup package
2) change directory
3) Full Backup
4) Incremental backup
5) Restore
6) Start mysql using backup
 
# Linux
wget https://www.percona.com/downloads/XtraBackup/Percona-XtraBackup-2.4.9/binary/tarball/percona-xtrabackup-2.4.9-Linux-x86_64.tar.gz

# centOS and redhat
yum install http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm
yum install percona-xtrabackup-24
_____

cd percona-xtrabackup-2.4.9-Linux-x86_64

bin/xtrabackup --defaults-file=/my/custom3396/my.cnf -H 172.31.0.57 -uroot -pindia3396 -P 3396 --datadir /storage/mysql/datadir3396 --backup --target-dir=/data3/backups/full/
_____

The main advantage of using xtrabackup is that we can take incremental backups, which are much faster.

bin/xtrabackup --defaults-file=/my/custom3396/my.cnf -H 172.31.0.57 -uroot -pindia3396 -P 3396 --datadir /storage/mysql/datadir3396 --backup --target-dir=/data3/backups/inc1 --incremental-basedir=/data3/backups/full/


The next day, we need to simply change the target directory path to "inc2" like this:

bin/xtrabackup --defaults-file=/my/custom3396/my.cnf -H 172.31.0.57 -uroot -pindia3396 -P 3396 --datadir /storage/mysql/datadir3396 --backup --target-dir=/data3/backups/inc2 --incremental-basedir=/data3/backups/inc1
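A small wrapper script can pick the incremental base directory automatically, so the same command works every day. This is only a sketch (not from the original post); paths and credentials are assumed to match the examples above.

#!/bin/bash
BACKUP_ROOT=/data3/backups
TODAY=$BACKUP_ROOT/inc_$(date +%F)

# use the most recent incremental as the base if one exists, else the full backup
LAST=$(ls -dt $BACKUP_ROOT/inc_* 2>/dev/null | head -1)
BASE=${LAST:-$BACKUP_ROOT/full}

bin/xtrabackup --defaults-file=/my/custom3396/my.cnf -H 172.31.0.57 -uroot -pindia3396 -P 3396 \
  --datadir /storage/mysql/datadir3396 --backup \
  --target-dir=$TODAY --incremental-basedir=$BASE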
_____

In case of disaster, we need to apply the logs and then prepare the data:

1) First apply logs of target directory:
bin/xtrabackup --prepare  --apply-log-only --target-dir=/data3/backups/full/

2) Apply logs from incremental backup:
bin/xtrabackup --prepare --apply-log-only --target-dir=/data3/backups/full/ --incremental-dir=/data3/backups/inc1

3) The --apply-log-only option should not be used when applying the last incremental backup:
bin/xtrabackup --prepare  --target-dir=/data3/backups/full/  --incremental-dir=/data3/backups/inc2

4) Finally, run the prepare once more on the target directory, without the --apply-log-only option:
bin/xtrabackup --prepare --target-dir=/data3/backups/full/
_____

Now since the backup data directory is ready, we can create a new docker container pointing to the newly "prepared" data.

docker run -p 3391:3306 -e MYSQL_ROOT_PASSWORD=india3391 -v /my/custom3391:/etc/mysql/conf.d  -v /data3/backups/full:/var/lib/mysql -d shantanuo/mysql:5.7

You can check if the new data is working correctly.

mysql -h `hostname -i` -uroot -pindia3396 -P 3391



May 31, 2017

 

Install mysql on CentOS

Here are the steps to install mysql version 5.7.18 on CentOS 7 and Red Hat (RHEL) 7

# yum rpm
yum localinstall https://dev.mysql.com/get/mysql57-community-release-el7-9.noarch.rpm

# yum install
yum install mysql-community-server

# make sure mysql starts on reboot
chkconfig --levels 235 mysqld on

# config 
[mysqld]
server-id=101048235

#log-bin=/var/log/mysql/mysql-bin.log
max_binlog_size=1024M
expire_logs_days=40
binlog_format=ROW
binlog_checksum=NONE

innodb_buffer_pool_size=4G
innodb_log_file_size=512M
innodb_flush_method=O_DIRECT
innodb_file_per_table
innodb-flush-log-at-trx-commit = 2

# start mysql
service mysqld start

# find current root password
grep 'A temporary password is generated for root@localhost' /var/log/mysqld.log |tail -1

# Change password
/usr/bin/mysql_secure_installation
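If you prefer to set the new password manually instead of running mysql_secure_installation, something like this works on 5.7 (the temporary password below is a placeholder; use the one found in the log):

mysql -uroot -p'TEMPORARY_PASSWORD' --connect-expired-password \
  -e "ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass4!';"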



January 14, 2017

 

Import data into mysql using pandas

Here are 6 lines of Python code that will import any (Excel or CSV) data into MySQL.

1) Open the excel file and save the data from a single sheet "s3_usage" into a data frame.
2) Copy the dataframe data into MySQL (test database - myreport_tbl table)

!wget https://s3.amazonaws.com/oksoft/pandas_work.xlsx

import pandas as pd
xl = pd.ExcelFile('pandas_work.xlsx')
s3_usage=xl.parse('s3_usage')

import sqlalchemy
engine = sqlalchemy.create_engine('mysql+pymysql://root:passwd@172.31.30.248/test')
s3_usage.to_sql('myreport_tbl', engine, if_exists='replace')
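The snippet above assumes a few packages are already installed; if not, something like this should cover them (xlrd or openpyxl is what pandas uses to read .xlsx files):

pip install pandas sqlalchemy pymysql xlrd openpyxl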
_____

Login to mysql and check if the data is imported correctly.

mysql> select * from myreport_tbl limit 1;
+-------+----------+------------------+--------------------------+----------+---------------------+---------------------+------------+
| index | Service  | Operation        | UsageType                | Resource | StartTime           | EndTime             | UsageValue |
+-------+----------+------------------+--------------------------+----------+---------------------+---------------------+------------+
|     0 | AmazonS3 | ListAllMyBuckets | C3DataTransfer-Out-Bytes | NULL     | 2016-12-01 00:00:00 | 2016-12-01 01:00:00 |       1332 |
+-------+----------+------------------+--------------------------+----------+---------------------+---------------------+------------+
1 row in set (0.00 sec)

mysql> show create table myreport_tbl\G
*************************** 1. row ***************************
       Table: myreport_tbl
Create Table: CREATE TABLE `myreport_tbl` (
  `index` bigint(20) DEFAULT NULL,
  `Service` text,
  `Operation` text,
  `UsageType` text,
  `Resource` text,
  `StartTime` datetime DEFAULT NULL,
  `EndTime` datetime DEFAULT NULL,
  `UsageValue` bigint(20) DEFAULT NULL,
  KEY `ix_myreport_tbl_index` (`index`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)



October 05, 2016

 

shell script to backup all mysql tables

This is a generic shell script that will take a backup of all MySQL tables in CSV format.
I have added a --where clause to the mysqldump command so that it takes a backup of only 10 rows of each table. This is for testing purposes only. While implementing the script in a production environment, remove that where clause.

#!/bin/sh
mydate=`date +"%d-%m-%Y-%H-%M"`

for db in `mysql -Bse "show databases"`
do
    # skip system schemas that should not be dumped with --tab
    case $db in
        information_schema|performance_schema|sys) continue ;;
    esac

    mydir=/tmp/testme/$mydate/$db
    rm -rf $mydir
    mkdir -p $mydir
    chmod 777 $mydir

    for i in `mysql -Bse "select TABLE_NAME from information_schema.tables where TABLE_SCHEMA = '$db'"`
    do
        # take backup of only 10 rows for testing purpose
        # remove --where clause in production
        mysqldump $db $i --where='true limit 10' --tab=$mydir
    done
done
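To restore a table from this kind of backup (a hedged sketch, not part of the original script): --tab writes a <table>.sql schema file and a <table>.txt data file for every table, and mysqlimport derives the table name from the file name.

# paths below are examples; adjust the date directory accordingly
mysql mydb < /tmp/testme/05-10-2016-10-30/mydb/mytable.sql
mysqlimport mydb /tmp/testme/05-10-2016-10-30/mydb/mytable.txt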



June 30, 2016

 

list of useful containers

Here is a list of some of the containers that I use frequently, along with how to install Docker.
First make sure that you are the root user and that there is enough disk space available.

# install docker on AWS linux

yum install -y docker

# You can install docker on CentOS 7 if you have the 64-bit version

cat /etc/redhat-release

sudo yum remove docker docker-common container-selinux docker-selinux docker-engine

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

yum makecache fast

yum install docker-ce

service docker start
_____

# start docker
vi /etc/sysconfig/docker-storage

DOCKER_STORAGE_OPTIONS="--storage-opt dm.basesize=200G"

/etc/init.d/docker start

# Install docker-compose
curl -L https://github.com/docker/compose/releases/download/1.8.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

# Install aliases
curl -sf -L https://raw.githubusercontent.com/shantanuo/docker/master/alias.sh | sh

Editing the storage options above will allow bigger containers (up to 200 GB) to be loaded. The container may still run out of space once you start saving data into it.

Node.js

In order to run a node application within the docker environment, you can use the official node image that can be found here...

https://hub.docker.com/r/library/node/

Change to the directory where you have already written your code and add a Dockerfile with these 2 lines...

$ vi Dockerfile
FROM node:4-onbuild
EXPOSE 8888

Once your script is ready, you need to build an image...

$ docker build -t shantanuo/my-nodejs-app .

And run the node application...

$ docker run -p 8888:8888 -d shantanuo/my-nodejs-app
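A quick way to confirm the app answers on the published port (assuming it listens on 8888 as exposed above):

$ curl http://localhost:8888/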
_____

You can push this image to docker hub as a private or public repository.

docker login
username:shantanuo
password:XXXX

docker push shantanuo/my-nodejs-app

MySQL

official mysql repository

mkdir -p /storage/test-mysql/datadir
docker run -d -p 3306:3306  -e MYSQL_ALLOW_EMPTY_PASSWORD=yes  -v /my/custom:/etc/mysql/conf.d  -v /storage/test-mysql/datadir:/var/lib/mysql  mysql:5.6

(size: 100MB)
Just by changing the name to test-mysql2, we can set up another mysql container. Instead of the official mysql image, we can use tutum/mysql, which has a customized installation.

I have fixed the bug in the official mysql image here:

https://github.com/shantanuo/mysql

Percona with tokuDB

docker run -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -e INIT_TOKUDB=1 -d percona/percona-server

Log in to the container and run this command to enable TokuDB if the "show engines" command does not list it:

ps_tokudb_admin --enable

Backup
# backup of mysql hosted in the folder /storage of another container named mysql-server

docker run -it \
--link mysql-server:mysql \
-v /storage/mysql-server/datadir:/var/lib/mysql \
-v /storage/backups:/backups \
--rm=true \
severalnines/mysql-pxb \
sh -c 'exec /run_backup.sh'

# backup of mysql from hosted machine

docker run -it \
-v /var/lib/mysql:/var/lib/mysql \
-v /storage/backups:/backups \
--rm=true \
severalnines/mysql-pxb \
sh -c 'exec innobackupex --host="$hostname" --port="3306" --user=root --password="$rootpassword" /backups'

Utilities

# cluster control container:
docker run -d --name clustercontrol -p 5000:80 severalnines/clustercontrol

elastic

1) official container

# command to install both elasticsearch and kibana (unofficial version)
docker run -d -p 9200:9200 -p 5601:5601 nshou/elasticsearch-kibana

Or use the following with volume attached:

docker run -d -p 5601:5601 -p 5000:5000  -p 9200:9200 --ulimit nofile=65536:65536 -v /mydata:/var/lib/elasticsearch kenwdelong/elk-docker:latest
_____

Here are 2 commands to start Elasticsearch with Kibana using the official docker images.

# cat /tmp/elasticsearch.yml
script.inline: on
script.indexed: on
network.host: 0.0.0.0

# docker run -d -v /tmp/:/usr/share/elasticsearch/config  -p 9200:9200 -p 9300:9300  -e ES_HEAP_SIZE=1g elasticsearch:2

Find the name of the elasticsearch container and link it to kibana by replacing elasticsearch_container_name below like this...

# docker run -d -p 5601:5601 --link elasticsearch_container_name:elasticsearch -e ELASTICSEARCH_URL=http://elasticsearch:9200 kibana
_____

Login to the newly created elastic container and install plug-ins
docker exec -it container_id bash

# pwd
/usr/share/elasticsearch

# bin/plugin install analysis-phonetic

Once the plugin is installed, restart the container so that elastic service will be restarted....
# docker restart c44004a47f46
_____

# get the IP of elastic using command hostname -i and then install metric-beat dashboard using docker

docker run docker.elastic.co/beats/metricbeat:5.5.0 ./scripts/import_dashboards  -es http://172.31.73.228:9200

2) custom container

elastic - customize elasticsearch installation and maintenance

3) elastic with kibana version 5
docker run --name myelastic -v /tmp/:/usr/share/elasticsearch/config  -p 9200:9200 -p 9300:9300 -d elasticsearch:5.0

docker run -d -p 5601:5601 --link myelastic:elasticsearch -e ELASTICSEARCH_URL=http://elasticsearch:9200 kibana:5.0
_____

# docker run -p 9200:9200 -p 9300:9300 -e ES_JAVA_OPTS="-Xms512m -Xmx512m" -d elasticsearch:5

You may get this error in your logs:

Exception in thread "main" java.lang.RuntimeException: bootstrap checks failed max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]

You'll need to fix up your docker host to support more vm.max_map_count. For reference:

sysctl -w vm.max_map_count=262144
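To make the setting survive a reboot (a standard follow-up, not part of the original note):

echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
sysctl -p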

https://www.elastic.co/guide/en/elasticsearch/guide/current/_file_descriptors_and_mmap.html

adminer

Adminer is a web interface to connect to any database like postgresql, mysql or oracle.
Instead of publishing adminer on some other port, use --net=host to use the default port 80 of the host machine. It will also reach the default mysql port 3306 that is already published by the mysql container as shown above.

If you do not want to add one more parameter (i.e. --net), then use the default "bridge" network. You will need the following command to find the IP address of the docker host.

# ip addr show docker0

This command will show the docker host IP address on the docker0 network interface.

redshift connection:

docker run -i -t --rm -p 80:80 --name adminer shantanuo/adminer

The above command will log you in to the docker container. You then need to start the apache service within the container...

sudo service apache2 start

Or use any of the methods mentioned below:

download

wget http://www.adminer.org/latest.php -O /tmp/index.php


## connect to redshift 
docker run -it postgres psql -h merged2017.xxx.us-east-1.redshift.amazonaws.com -p 5439 -U root vdbname

connect to any database like mysql, pgsql, redshift, oracle or mongoDB

postgresql (redshift) or mysql

docker run -it -p 8060:80 -v /tmp/:/var/www/html/ shantanuo/phpadminer

mongoDB

docker run -d -p 8070:80 -v /tmp:/var/www/html ishiidaichi/apache-php-mongo-phalcon

oracle

docker run -d -p 8080:80 -v /tmp/:/app lukaszkinder/apache-php-oci8-pdo_oci

# not sure about how to support mssql

python

compact packages

1) pyrun
Compact Python versions 2 and 3, without any external libraries, for basic testing like this...
Here is an easy way to convince people to upgrade to Python 3.0+:

# docker run -it --rm shantanuo/pyrun:2.7 python
>>> 3/2
1

# docker run -it --rm shantanuo/pyrun:3.4 python
>>> 3/2
1.5

Python 2.7 performs integer division and returns 1, while version 3.4 correctly returns 1.5.

2) staticpython 
4 MB single file python package!

3) socket 
 python with application files

Complete python package

4) conda official 
Official python installation:

https://github.com/ContinuumIO/docker-images

And here is the command to start miniconda and ipython together...

docker run -i -t -p 8888:8888 -v /tmp:/tmp continuumio/miniconda3 /bin/bash -c "/opt/conda/bin/conda install jupyter -y --quiet && cd /tmp/ && /opt/conda/bin/jupyter notebook --notebook-dir=/tmp --ip='*' --port=8888 --no-browser --allow-root"

5) miniconda customized
Here is an image with the pandas and sqldf modules.

# Start ipython container that is based on miniconda image in a screen session
docker run -p 7778:7778 -t shantanuo/miniconda_ipython_sqldf /bin/bash

# better start with environment variables
docker run -p 7778:7778 \
-e DEV_ACCESS_KEY=XXX -e DEV_SECRET_KEY=YYY \
-e PROD_READONLY_ACCESS_KEY=XXX -e PROD_READONLY_SECRET_KEY=YYY \
-e PROD_READWRITE_ACCESS_KEY=XXX -e PROD_READWRITE_SECRET_KEY=YYY \
-t shantanuo/miniconda_ipython_sqldf /bin/bash

# Log-in to newly created container
docker exec -it $(docker ps -l -q) /bin/bash

# Start ipython notebook on port 7778 that can be accessed from anywhere (*)
cd /home/
ipython notebook --ip=* --port=7778

# and use the environment keys in your code like this...
import boto3
import os
s3 = boto3.client('s3',aws_access_key_id=os.environ['DEV_ACCESS_KEY'], aws_secret_access_key=os.environ['DEV_SECRET_KEY'])

application containers

Here is an example from Amazon about how to build your own container with a php application.

https://github.com/awslabs/ecs-demo-php-simple-app

Utility containers

1) myscan 

Use OCR to read any image.
alias pancard='docker run -i --rm -v "$(pwd)":/home/ shantanuo/myscan python /scan.py "$@"'

wget https://github.com/shantanuo/docker/raw/master/myscan/pan_card.jpg

pancard pan_card.jpg

2) panamapapers 

container with sqlite database ready for query

3) newrelic
newrelic docker image that works like nagios

docker run -d \
--privileged=true --name nrsysmond \
--pid=host \
--net=host \
-v /sys:/sys \
-v /dev:/dev \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/log:/var/log:rw \
-e NRSYSMOND_license_key=186b2a8d6af29107609abca749296b46cda9fa69 \
-e NRSYSMOND_logfile=/var/log/nrsysmond.log \
newrelic/nrsysmond:latest

docker run -d \
  -e NEW_RELIC_LICENSE_KEY=186b2a8d6af29107609abca749296b46cda9fa69  \
  -e AGENT_HOST=52.1.174.168 \
  -e AGENT_USER=root \
  -e AGENT_PASSWD=XXXXX \
  newrelic/mysql-plugin

4) OCS inventory
docker run -d -p 80:80 -p 3301:3306 zanhsieh/docker-ocs-inventory-ng

http://52.86.68.170/ocsreports/
(username:admin, password:admin)

5) selenium
simulate a browser (chrome) with selenium pre-installed

docker run -d -v /dev/shm:/dev/shm -p 4444:4444 selenium/standalone-chrome

The Hub url...
http://52.205.135.220:4444/wd/hub/


6) Deploying registry server
#Start your registry:
docker run -d -p 5000:5000 --restart=always --name registry registry:2

#You can now use it with docker. Tag any image to point to your registry:
docker tag image_name localhost:5000/image_name

#then push it to your registry:
docker push localhost:5000/image_name

# pull it back from your registry:
docker pull localhost:5000/image_name

# push the registry container to hub
docker stop registry
docker commit registry
docker push registry

7) Docker User Interface
docker run -d -p 9000:9000 -v /var/run/docker.sock:/docker.sock --name dockerui abh1nav/dockerui:latest -e="/docker.sock"

Better user interface with shell access:

docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer

8) docker clean up container
docker run -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker --rm martin/docker-cleanup-volumes --dry-run

9) prometheus monitoring
check port 9090 and cadvisor on 8080

git clone https://github.com/vegasbrianc/prometheus.git
cd prometheus/

/usr/local/bin/docker-compose  up -d

10) open refine utility
# docker run --privileged -v /openrefine_projects/:/mnt/refine -p 35181:3333 -d psychemedia/ou-tm351-openrefine
Or use this:
docker run -p 3334:3333 -v /mnt/refine -d psychemedia/docker-openrefine

11) Freeswitch
docker run -d sous/freeswitch

12) mongodb
from tutum
docker run -d -p 27017:27017 -p 28017:28017 -e MONGODB_PASS="mypass" tutum/mongodb

official image with the wiredTiger engine
docker run -p 27017:27017 -v /tokudata:/data/db -d mongo --storageEngine wiredTiger

with tokumx compression engine
docker run -p 27017:27017 -v /tokudata:/data/db -d ankurcha/tokumx

# create alias for bsondump command

# alias bsondump='docker run -i --rm -v /tmp/:/tmp/ -w /tmp/ mongo bsondump "$@"'

# bsondump data_hits_20160423.bson > test.json


# alias mongorestore='docker run -i --rm -v /tmp/:/tmp/ -w /tmp/ mongo mongorestore "$@"'

# mongorestore --host `hostname -i` incoming_reports_testing.bson

# docker exec -it 12db5a259e58 mongo

# db.incoming_reports_testing.findOne()

# db.incoming_reports_testing.distinct("caller_id.number")


13) Consul Monitor
docker run -d --name=consul --net=host gliderlabs/consul-server -bootstrap -advertise=52.200.204.48

14) Registrator container
$ docker run -d \
--name=registrator \
--net=host \
--volume=/var/run/docker.sock:/tmp/docker.sock \
gliderlabs/registrator:latest \
consul://localhost:8500

15) wordpress

There is a custom image here...

docker run -p 8081:80 -d tutum/wordpress

docker has official wordpress containers.

docker run -d -p 3306:3306  -e MYSQL_ROOT_PASSWORD=india mysql:5.7

docker run -p 8083:80 --link gigantic_pike:mysql -e WORDPRESS_DB_NAME=wpdb -e WORDPRESS_DB_USER=root -e WORDPRESS_DB_PASSWORD=india -d wordpress

And we can also use docker compose to start and link the db and application containers.

vi docker-compose.yml

version: '2'

services:
   db:
     image: mysql:5.7
     volumes:
       - "./.data/db:/var/lib/mysql"
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: wordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     links:
       - db
     ports:
       - "8000:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_PASSWORD: wordpress

/usr/local/bin/docker-compose up -d

15a) Drupal

docker run --name cmsdb -p 3306:3306  -e MYSQL_ROOT_PASSWORD=india -d mysql:5.7

docker run --name mydrupal --link cmsdb:mysql -p 8080:80 -e MYSQL_USER=root -e MYSQL_PASSWORD=india -d drupal

Choose the advanced options and change the "localhost" value for Database host to the mysql container name.

16) Packetbeat container
docker run -d --restart=always --net=host shantanuo/packetbeat-agent

17) Django

docker run --name some-django-app -v "$PWD":/usr/src/app -w /usr/src/app -p 8000:8000  -e location=mumbai -d django bash -c "pip install -r requirements.txt && python manage.py runserver 0.0.0.0:8000"

18) rabbitmq
docker run -d --hostname oksoft -p 8080:15672 rabbitmq:3-management

19) ruby and passenger
(official docker image from phusion)

docker run -d -p 3000:3000 phusion/passenger-full

# login to your container:
docker exec -it container_id bash

# change to opt directory
cd /opt/
mkdir public

# a test file
curl google.com > public/index.html

# start passenger:
passenger start

20) update containers
# monitor the containers named "nginx" and "redis" for updates

docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  centurylink/watchtower nginx redis

21) sematext monitoring
Access your docker and other stats from # https://apps.sematext.com

docker run --memory-swap=-1  -d --name sematext-agent --restart=always -e SPM_TOKEN=653a6dc9-1740-4a25-85d3-b37c9ad76308 -v /var/run/docker.sock:/var/run/docker.sock sematext/sematext-agent-docker

23) network emulator delay
Add a network delay of 3000 milliseconds to docker traffic.

# terminal 1
# docker run -it --rm --name tryme alpine sh -c     "apk add --update iproute2 && ping www.example.com"

# terminal 2
# docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock gaiaadm/pumba pumba netem --interface eth0 --duration 1m delay --time 3000 tryme

24) postgresql
docker run -p 5432:5432 --name dbt-postgresql  -m 1G -c 256 -v /mypgdata:/var/lib/postgresql/data  -e POSTGRES_PASSWORD=india -d postgres

25) jboss
docker run -p 8080:8080 -p 9990:9990 -m 1G -c 512 -e JBOSS_PASS="mypass" -d tutum/jboss

26) Jupyter notebook with google facets for pandas dataframe:
docker run -d -p 8889:8888 kozo2/facets start-notebook.sh --NotebookApp.token='' --NotebookApp.iopub_data_rate_limit=10000000

27) AWS container
# cat ~/.aws/config
[default]
aws_access_key_id = XXX
aws_secret_access_key = XXX
region = ap-south-1

alias myaws='docker run --rm -v ~/.aws:/root/.aws -v $(pwd):/aws  -it amazon/aws-cli'


29) recall the original run statement of a given container
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock assaflavie/runlike [container_id]

30) Jupyter Notebook
docker run -d -p 8887:8888 -v /tmp:/tmp shantanuo/notebook

31) terraforming
# docker run -e AWS_ACCESS_KEY_ID=xxx  -e AWS_SECRET_ACCESS_KEY=xxx -e AWS_DEFAULT_REGION=ap-south-1 quay.io/dtan4/terraforming:latest terraforming s3

32) apache bench to check server load performance:

docker run -d -p 80 --name web -v /tmp/:/var/www/html russmckendrick/nginx-php

# docker run --link=silly_bassi russmckendrick/ab ab -k -n 10000 -c 16 http://134.195.194.88/

# docker run --link=web russmckendrick/ab ab -k -n 10000 -c 16 http://web/

33) airflow - monitor workflows and tasks

# docker run -p 8080:8080 -d puckel/docker-airflow webserver
_____

You may get an error like this when you restart the server:
Error response from daemon: oci runtime error: container with id exists:

The fix is to remove these from the /run folder:

# rm -rf /run/runc/*
# rm -rf /run/container-id

34) gitlab installation

docker run -d  \
    --env GITLAB_OMNIBUS_CONFIG="external_url 'https://134.195.194.88/'; gitlab_rails['lfs_enabled'] = true; registry_external_url 'https://134.195.194.88:4567';" \
    --publish 443:443 --publish 80:80  --publish 4567:4567 --publish 10022:22 \
    --env 'GITLAB_SSH_PORT=10022' --env 'GITLAB_PORT=443' \
    --env 'GITLAB_HTTPS=true' --env 'SSL_SELF_SIGNED=true' \
    --volume /mysrv/gitlab/config:/etc/gitlab \
    --volume /mysrv/gitlab/logs:/var/log/gitlab \
    --volume /mysrv/gitlab/data:/var/opt/gitlab \
    --volume /srv/docker/gitlab/gitlab/certs:/etc/gitlab/ssl \
    gitlab/gitlab-ce:latest



 

Backup mysql using a container!

Suppose you have a MySQL container running named "mysql-server", started with this command:

$ docker run -d \
--name=mysql-server \
-v /storage/mysql-server/datadir:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=mypassword \
mysql

Then, to perform a backup against the above container, the command would be:

$ docker run -it \
--link mysql-server:mysql \
-v /storage/mysql-server/datadir:/var/lib/mysql \
-v /storage/backups:/backups \
--rm=true \
severalnines/mysql-pxb \
sh -c 'exec /run_backup.sh'



May 21, 2016

 

decouple application using docker compose

# install packages
yum install -y git mysql-server docker

curl -L https://github.com/docker/compose/releases/download/1.7.1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

/etc/init.d/docker start

# create and copy public key to github to clone private repo

ssh-keygen -t rsa -b 4096 -C "user@gmail.com"

cat ~/.ssh/id_rsa.pub


# clone your private repository
git clone git@github.com:shantanuo/xxx.git

# create your custom my.cnf config file

mkdir -p /my/custom/
vi /my/custom/my.cnf
[mysqld]
sql_mode=''
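
The post does not show the yml file it uses; here is a minimal sketch of what such a compose file could look like (the service names and the web build step are assumptions):

cat > docker-compose.yml <<'EOF'
mysql:
  image: tutum/mysql:5.5
  ports:
    - "3306:3306"
  volumes:
    - /my/custom:/etc/mysql/conf.d
web:
  build: .
  links:
    - mysql
  ports:
    - "80:80"
EOF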

# start docker containers as per yml file

/usr/local/bin/docker-compose up -d

# restore mysql data into container from host:
mysqladmin -h localhost -P 3306 --protocol=tcp -u root -ppasswd create livebox

mysql -h localhost -P 3306 --protocol=tcp -u root -ppasswd livebox < livebox.sql

# access mysql container from another container:

mysql -h mysql -uroot -ppasswd
_____

## use the container ID of tutum/mysql:5.5, e.g.
docker logs c7950eeab8e9
    mysql -uadmin -pxmFShXB1Asgn -h127.0.0.1

# use the password to connect to 127.0.0.1 and execute commands:
mysql> grant all on *.* to 'root'@'%' identified by 'passwd' with grant option;
Query OK, 0 rows affected (0.00 sec)

# data only container to be used for mysql data
docker run -d -v /var/lib/mysql --name db_vol1 -p 23:22 tutum/ubuntu:trusty
docker run -d --volumes-from db_vol1 -p 3306:3306 tutum/mysql:5.5



April 10, 2016

 

MySQL using docker

1) Create mysql data directory
mkdir -p /opt/Docker/masterdb/data

2) Create my.cnf file
mkdir -p /opt/Docker/masterdb/cnf

vi /opt/Docker/masterdb/cnf/config-file.cnf
# Config Settings:
[mysqld]
server-id=1

innodb_buffer_pool_size = 2G
innodb_log_file_size=212M
binlog_format=ROW
log-bin

3) Run docker image

docker run --name masterdb -v /opt/Docker/masterdb/cnf:/etc/mysql/conf.d -v /opt/Docker/masterdb/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=mysecretpass -d shantanuo/mysql
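To confirm that the custom config file was picked up (a quick check, not part of the original post; the password matches the run command above):

docker exec masterdb mysql -uroot -pmysecretpass -e "show variables like 'server_id'"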

You can also use the official docker image for mysql by simply specifying mysql:5.5.
Alternatively, use Percona or the image by Oracle, i.e. percona:5.5 or mysql/mysql-server:5.5.



 

galera mysql cluster in docker

$ sudo docker run --detach=true --name node1 -h node1 erkules/galera:basic --wsrep-cluster-name=local-test --wsrep-cluster-address=gcomm://

$ sudo docker run --detach=true --name node2 -h node2 --link node1:node1 erkules/galera:basic --wsrep-cluster-name=local-test --wsrep-cluster-address=gcomm://node1
$ sudo docker run --detach=true --name node3 -h node3 --link node1:node1 erkules/galera:basic --wsrep-cluster-name=local-test --wsrep-cluster-address=gcomm://node1

# sudo docker exec -ti node1 mysql
mysql> show status like "wsrep_cluster_size";
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.00 sec)

_____


deploy Galera Cluster over multiple servers

By design, Docker containers are reachable from other hosts only through port-forwarded TCP ports, even though the containers have their own IP addresses. Set up port forwarding for all TCP ports that Galera requires to operate.

docker run -d -p 3306:3306 -p 4567:4567 -p 4444:4444 -p 4568:4568 --name nodea erkules/galera:basic --wsrep-cluster-address=gcomm:// --wsrep-node-address=10.10.10.10
docker run -d -p 3306:3306 -p 4567:4567 -p 4444:4444 -p 4568:4568 --name nodeb erkules/galera:basic --wsrep-cluster-address=gcomm://10.10.10.10 --wsrep-node-address=10.10.10.11
docker run -d -p 3306:3306 -p 4567:4567 -p 4444:4444 -p 4568:4568 --name nodec erkules/galera:basic --wsrep-cluster-address=gcomm://10.10.10.10 --wsrep-node-address=10.10.10.12
docker exec -t nodea mysql -e 'show status like "wsrep_cluster_size"'

+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size |     3 |
+--------------------+-------+



February 28, 2016

 

Install MySQL using docker

## create a directory on base machine for mysql data
mkdir -p /my/own/datadir

## link data directory and port 3306 to base machine while starting mysql container from official mysql image
docker run --name some-mysql1 -v /my/own/datadir:/var/lib/mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql

## connect to your mysql
mysql -uroot -pmy-secret-pw -h127.0.0.1 -P 3306



 

docker mysql

You can pull the latest mysql image and create a container named "new-mysql1". Then start the container with a second command as shown below:

docker create -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password --name="new-mysql1" mysql:latest

docker start new-mysql1

Or merge create + start into a single "run" command as shown below:

docker run --name new-mysql1 -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password -d mysql/mysql-server:latest
_____

Then you can access from your host using the mysql command line:

mysql -h127.0.0.1 -ppassword -uroot



August 14, 2015

 

mongoDB mysql comparison cheat sheet

db.users.find({}, {})
select * from users
db.users.find({}, {"username":1, "email":1})
select username, email from users
db.users.find({}, {"username":1, "email":1}).limit(10)
select username, email from users limit 10
.limit()          .skip()                   .sort()
db.users.count()
select count(*) from users
db.runCommand({"distinct": "users", "key": "age"})
select distinct(age) from users
db.users.find({"age":{"$gte":18, "$lte":30}})
select * from users where age >= 18 and age <= 30
$lt      $gt     $lte    $gte             $ne    $elemMatch
db.users.find({"ticket_no":{"$in":[75, 390]}})
select * from users where ticket_no in ("75", "390")
$in     $nin             $not             $all
db.users.find({"$or":[{"ticket_no":{"$in":[75, 390]}}, {"winner":true}]})
select * from users where ticket_no in ("75", "390") or winner is not null
$or     $and           $nor             $elemMatch
db.users.find({"age":{"$in":[null], "$exists":true}})
select * from users where age is null
db.users.find({"username":/happy?/i})
select * from users where username like 'happy%'
perl compatible regular expressions
db.users.find({"ticket_no":75})
select * from users where ticket_no like '%75%'
[75, 390, 120, 450]
"75", "390", "120", "450"
db.users.find({"ticket_no.2":120})

db.users.find({"ticket_no":{"$size":4}})

db.users.findOne({criteria as above}, {"$slice":[23, 10]})
select * from users where age >= 18 and age <= 30 limit 23, 10
db.runCommand({"getLastError":1})
show warnings;
db.articles.aggregate({"$project": {"author":1}}, {"$group":{"_id":"$author", "count":{"$sum":1}}}, {"$sort": {"count": -1}}, {"$limit":5})
Select author, count(*) as cnt from articles group by author order by cnt desc limit 5
Aggregation results are limited to a maximum response size of 16 MB
db.employees.aggregate( {"$project": {"totalPay" : {"$subtract" : [{"$add": ["$salary", "$bonus"]}, "$taxes"] } } } )
Select (salary + bonus - taxes) as totalPay from employees
$add  $subtract      $multiply  $divide   $mod
db.employees.aggregate( { "$project" : { "tenure" : {"$subtract" : [{"$year" : new Date()}, {"$year": "$hireDate"}] } } } )
select year(now()) - year(hireDate) as tenure from employees
$year $month $week $dayOfMonth $dayOfWeek $dayOfYear  $hour  $minute  $second
db.employees.aggregate( { "$project": { "email" : { "$concat" : [ {"$substr" : [ "$firstName", 0, 1]}, ".", "$lastName", "@company.com" ] } } } )
select concat(left(firstName, 1), ".", lastName, "@company.com") as email from employees
$substr   $concat  $toLower   $toUpper
db.sales.aggregate( { "$group": { "_id": "$country", "totalRevenue": { "$sum" : "$revenue" } } } )
select country, sum(revenue) from sales group by country
db.blog.aggregate({"$project": {"comments": "$comments"}}, {"$unwind" : "$comments"}, {"$match": {"comments.author" : "Akbar" }})



December 22, 2014

 

dealing with mysql issues

While troubleshooting a MySQL issue, the first place to check is the error log. If the error log is clean, then the next thing to evaluate is the slow query log.

# enable slow query log
mysql> set global slow_query_log = on;

# change the default 10 seconds to 1 second
# make sure that the queries not using indexes are logged
SET GLOBAL long_query_time=1;
SET GLOBAL log_queries_not_using_indexes=1;

# If the slow log is growing too fast, feel free to again set the variables back to how they were:

SET GLOBAL long_query_time=10;
SET GLOBAL log_queries_not_using_indexes=0;

# or disable the slow query log
set global slow_query_log = off;
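Once the slow log has collected some data, summarizing it helps (a hedged follow-up; the log path below is an example, check slow_query_log_file for the actual location):

mysql -e "show variables like 'slow_query_log_file'"
mysqldumpslow -s t -t 10 /var/lib/mysql/$(hostname)-slow.log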



December 17, 2014

 

Working with unicode strings in mysql

Unicode characters cannot be stored in a latin1 column.
You will get an error as shown below:

drop table todel;
create table todel (name varchar(100)) DEFAULT CHARSET=latin1;

insert into todel values ('हिदी' )
Error in query (1366): Incorrect string value: '\xE0\xA4\xB9\xE0\xA4\xBF...' for column 'name' at row 1

insert into todel values (convert ('हिदी' using binary));
select convert(convert(name using binary) using utf8) from todel;

The work-around is to store the record as binary and, while selecting, use the convert function twice as shown above.
_____

There is a better way though. Why not use utf8 encoding for the entire table?

drop table todel;
create table todel (name varchar(100)) DEFAULT CHARSET=utf8;

insert into todel values ('हिदी' );
select * from todel;
_____

You might think that altering the table to utf8 will solve this issue:

alter table todel default charset=utf8;

No. Even though the default table charset is now utf8, the columns are still latin1:

 mysql> show create table todel;
+-------+---------------------------------------------------------------------------------------------------------------------+
| Table | Create Table                                                                                                        |
+-------+---------------------------------------------------------------------------------------------------------------------+
| todel | CREATE TABLE `todel` (
  `name` varchar(100) CHARACTER SET latin1 DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8 |
+-------+---------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

alter table todel modify name varchar(100) character set utf8 ;

Now even though both the column and the table charset are utf8, we still get junk characters instead of unicode.

mysql> select * from todel;
+---------------------------+
| name                      |
+---------------------------+
| हिदी              |
+---------------------------+
1 row in set (0.00 sec)

The bad news is that the convert query that was working fine earlier has now stopped working as expected.

mysql> select convert(convert(name using binary) using utf8) from todel;
+------------------------------------------------+
| convert(convert(name using binary) using utf8) |
+------------------------------------------------+
| हिदी                                   |
+------------------------------------------------+
1 row in set (0.00 sec)

The unicode data is lost in the conversion.
_____

So the correct solution would be to add a utf8 column to the latin1 table and update that column with the correct unicode string.
Let's start all over again:

drop table todel;
create table todel (name varchar(100)) DEFAULT CHARSET=latin1;
insert into todel values (convert ('हिदी' using binary));
select convert(convert(name using binary) using utf8) from todel;

alter table todel add column hindi varchar(100) character set utf8;

update todel set hindi = convert(convert(name using binary) using utf8) ;

mysql> select convert(convert(name using binary) using utf8), hindi from todel;
+------------------------------------------------+--------------+
| convert(convert(name using binary) using utf8) | hindi        |
+------------------------------------------------+--------------+
| हिदी                                           | हिदी         |
+------------------------------------------------+--------------+
1 row in set (0.00 sec)

_____

Now the problem is that the application is not aware of this new column called "hindi" and it is still using column "name". We need an insert and update trigger to keep correcting the values in the hindi column. Something like this...

delimiter |
CREATE TRIGGER todel_bi after INSERT ON todel
  FOR EACH ROW
  BEGIN
    UPDATE todel SET hindi = convert(convert(name using binary) using utf8) WHERE id = NEW.id;
  END;
|

insert into todel (id, name) values (2, (convert ('मराठी' using binary)));

I thought the after insert trigger would solve this issue, but I get an error.

ERROR 1442 (HY000): Can't update table in trigger because it is already used by statement which invoked this stored function/trigger.

Now I would need to write a stored procedure that inserts the record into the target table, and change the application code to call that procedure instead.
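As an alternative (a hedged sketch, not from the original post), a BEFORE INSERT trigger can set NEW.hindi directly instead of updating the table, and therefore does not hit error 1442. The database name here is an example:

mysql test <<'EOF'
DROP TRIGGER IF EXISTS todel_bi;
CREATE TRIGGER todel_bi BEFORE INSERT ON todel
FOR EACH ROW
SET NEW.hindi = convert(convert(NEW.name using binary) using utf8);
EOF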
_____

So we are back where we started. Why not simply replace the name column with this new unicode aware column?

drop table todel;
create table todel (name varchar(100)) DEFAULT CHARSET=latin1;
insert into todel values (convert ('हिदी' using binary));

alter table todel add column hindi varchar(100) character set utf8;
update todel set hindi = convert(convert(name using binary) using utf8) ;
alter table todel drop column name;
alter table todel change column hindi name varchar(100) character set utf8;



October 27, 2014

 

Generate series numbers

Here is the MySQL code that can be used to quickly generate all 5-digit numbers from 10000 to 99999.

drop table t;
create table t (series int);

set @n = 1;
drop view if exists v3;
drop view if exists v10;
drop view if exists v100;
create view v3 as select null union all select null union all select null;
create view v10 as select null from v3 a, v3 b union all select null;
create view v100 as select null from v10 a, v10 b;
insert into t select @n:=@n+1 from v10 a,v100 b, v100 c;
delete from  t where length(series) != 5;
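A quick check that the table now holds exactly the 5-digit range, i.e. 90,000 rows from 10000 to 99999 (the database name here is a placeholder):

mysql test -e "select min(series), max(series), count(*) from t"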



January 12, 2014

 

Review Database

Here is how to quickly review a database. We can query the information_schema database to check whether the column types follow internal guidelines.

mysql> select DATA_TYPE, COUNT(*) AS CNT, GROUP_CONCAT(DISTINCT(COLUMN_TYPE)) from information_schema.COLUMNS
where TABLE_SCHEMA = 'recharge_db' GROUP BY DATA_TYPE;

+-----------+-----+----------------------------------------------------------------------------------------+
| DATA_TYPE | CNT | GROUP_CONCAT(DISTINCT(COLUMN_TYPE))                                                    |
+-----------+-----+----------------------------------------------------------------------------------------+
| bigint    |  15 | bigint(20)                                                                             |
| datetime  |   6 | datetime                                                                               |
| double    |   3 | double                                                                                 |
| int       |  24 | int(11),int(2)                                                                         |
| text      |   2 | text                                                                                   |
| varchar   |  16 | varchar(100),varchar(500),varchar(10),varchar(20),varchar(15),varchar(255),varchar(25) |
+-----------+-----+----------------------------------------------------------------------------------------+
6 rows in set (0.01 sec)

The suggestions would be:
1) Change double to decimal
2) Change int(2) to tinyint
3) change text to varchar(1000) if possible
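Hedged examples of what such changes could look like (the table and column names here are hypothetical):

mysql recharge_db -e "alter table recharge_txn modify amount decimal(12,2)"
mysql recharge_db -e "alter table recharge_txn modify retry_count tinyint"
mysql recharge_db -e "alter table recharge_txn modify remarks varchar(1000)"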

mysql> select IS_NULLABLE, COUNT(*) AS CNT from information_schema.COLUMNS
where TABLE_SCHEMA = 'recharge_db' AND COLUMN_KEY != 'PRI'  GROUP BY IS_NULLABLE;
+-------------+-----+
| IS_NULLABLE | CNT |
+-------------+-----+
| NO          |   1 |
| YES         |  52 |
+-------------+-----+
2 rows in set (0.00 sec)

Most of the columns are nullable; they should be changed to "NOT NULL" wherever possible.
Certain columns cannot be changed to "NOT NULL"; they are candidates for further normalization.

The following query will list all the columns that can be linked to the table that declares them as a primary key.

select COLUMN_NAME, COUNT(*) AS CNT, GROUP_CONCAT(IF(COLUMN_KEY = 'PRI', concat(TABLE_NAME, '_primary_key'), TABLE_NAME) order by COLUMN_KEY != 'PRI', TABLE_NAME) as tbl_name
from information_schema.COLUMNS
where TABLE_SCHEMA = 'recharge_db'
group by COLUMN_NAME HAVING CNT > 1 AND tbl_name like '%_primary_key%';
_____

Find missing primary key:

SELECT table_schema, table_name
FROM information_schema.tables
WHERE (table_catalog, table_schema, table_name) NOT IN
(SELECT table_catalog, table_schema, table_name
FROM information_schema.table_constraints
WHERE constraint_type in ('PRIMARY KEY', 'UNIQUE'))
AND table_schema = 'recharge_db';
_____

-- Check if the column names are consistent and as per standard

select column_name, count(*) as cnt
from information_schema.columns where TABLE_SCHEMA = 'recharge_db'
GROUP BY column_name;

_____

Data Normalization tips:
1) Data should be normalized
2) There should be no NULL values in any column
3) There should be no need to use an "update" statement


