Shantanu's Blog

Database Consultant

February 19, 2025

 

Apply libreoffice styles using a Macro and create PDF

I have this Dockerfile that works as expected. I use it to convert a txt file to PDF after formatting it with a style created by a macro.
_____

FROM ubuntu:latest

# Install LibreOffice and scripting dependencies
RUN apt-get update && apt-get install -y libreoffice libreoffice-script-provider-python libreoffice-script-provider-bsh libreoffice-script-provider-js

# Install required dependencies
RUN apt-get update && apt-get install -y wget unzip fonts-dejavu

# Download and install Shobhika font
RUN mkdir -p /usr/share/fonts/truetype/shobhika && wget -O /tmp/Shobhika.zip https://github.com/Sandhi-IITBombay/Shobhika/releases/download/v1.05/Shobhika-1.05.zip && unzip /tmp/Shobhika.zip -d /tmp/shobhika && mv /tmp/shobhika/Shobhika-1.05/*.otf /usr/share/fonts/truetype/shobhika/

# Create necessary directories with proper permissions
RUN mkdir -p /app/.config/libreoffice/4/user/basic/Standard
RUN chmod -R 777 /app/.config

# Set LibreOffice user profile path
ENV UserInstallation=file:///app/.config/libreoffice/4/user

WORKDIR /app
COPY StyleLibrary.oxt /app/
COPY marathi_spell_check.oxt /app/
COPY myfile.txt /app/

RUN unopkg add /app/StyleLibrary.oxt --shared
RUN unopkg add /app/marathi_spell_check.oxt --shared

# Run the LibreOffice macro
CMD soffice --headless --invisible --norestore "macro:///StyleLibrary.Module1.myStyleMacro2(\"/app/myfile.txt\")"
_____

# create an image:
docker build -t shantanuo/mylibre .

# Run the container:
docker run -v .:/app/ --rm shantanuo/mylibre

As you can see, the styles from StyleLibrary are applied to myfile and a PDF document is created successfully.
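The macro invocation in the Dockerfile's CMD can also be built from Python. A minimal sketch (soffice_macro_command is a hypothetical helper name; it only constructs the command line and does not run soffice):

```python
# Build the headless soffice command that runs the style macro on a
# given text file, mirroring the CMD line in the Dockerfile above.
def soffice_macro_command(txt_path):
    macro = 'macro:///StyleLibrary.Module1.myStyleMacro2("%s")' % txt_path
    return ["soffice", "--headless", "--invisible", "--norestore", macro]

cmd = soffice_macro_command("/app/myfile.txt")
print(cmd[-1])
```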



September 28, 2024

 

Firefox and Libreoffice in your browser

Kasm VNC is a modern open source VNC server.

Quickly connect to your Linux server's desktop from any web browser.
No client software install required.

1) Firefox using VNC

docker run -d \
--name=firefox \
-e PUID=1000 \
-e PGID=1000 \
-e TZ=Etc/UTC \
-p 3000:3000 \
-p 3001:3001 \
-v /path/to/config2:/config \
--shm-size="1gb" \
--restart unless-stopped \
lscr.io/linuxserver/firefox:latest

2) Libreoffice using VNC

docker run -d \
  --name=libreoffice \
  --security-opt seccomp=unconfined `#optional` \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -p 3000:3000 \
  -p 3001:3001 \
  -v /path/to/config:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/libreoffice:latest



September 24, 2024

 

export to pdf using linux command

You can generate a PDF file from a LibreOffice Writer ODT file.

The File - Export as PDF option is available only if you are using the GUI. Here is how to convert to PDF using the command line.

# vi Dockerfile
FROM ubuntu:latest

RUN apt-get update && \
    apt-get install -y libreoffice

WORKDIR /workspace

ENTRYPOINT ["libreoffice", "--headless", "--convert-to", "pdf"]

# docker build -t shantanuo/libreoffice-converter .

Run the docker command to convert a file to PDF:
# docker run --rm -v .:/workspace shantanuo/libreoffice-converter /workspace/pm_in_paris.odt --outdir /workspace
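A host-side wrapper for this converter image can be sketched in Python; convert_to_pdf is a hypothetical helper, and run=False lets you inspect the command without invoking Docker:

```python
import subprocess

# Sketch: wrap the docker run command above. The image name matches the
# build command earlier in this post.
def convert_to_pdf(filename, hostdir=".", image="shantanuo/libreoffice-converter", run=True):
    cmd = ["docker", "run", "--rm", "-v", "%s:/workspace" % hostdir,
           image, "/workspace/%s" % filename, "--outdir", "/workspace"]
    if run:
        subprocess.check_call(cmd)   # the PDF lands in hostdir
    return cmd

print(" ".join(convert_to_pdf("pm_in_paris.odt", run=False)))
```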

_____

Use this Dockerfile if you need to apply a template before creating a PDF file.

FROM ubuntu:latest

RUN apt-get update && apt-get install -y libreoffice python3 python3-venv
RUN python3 -m venv /workspace/venv
RUN /workspace/venv/bin/pip install --upgrade pip
RUN /workspace/venv/bin/pip install unotools

COPY * /workspace/
WORKDIR /workspace

# Start LibreOffice in headless mode in the background and run the Python script after it is started
ENTRYPOINT soffice --headless --accept="pipe,name=libreoffice;urp;StarOffice.ComponentContext" & \
    sleep 5 && \
    python3 /workspace/updated3.py /workspace/ra.txt /workspace/prajakta.ott

I can create an image and it converts the text file to PDF correctly.

docker build -t shantanuo/libreoffice-converter .

The raw text file and the template are available in the current directory. The generated PDF appears in the same place after running this command:

docker run -v .:/workspace/ --rm  shantanuo/libreoffice-converter

The python code to apply the template and create pdf is available here...

https://gist.github.com/shantanuo/f635bbdb764d1fafa8587203d7f8823a



March 27, 2020

 

docker compose is really awesome

If you are already using compose, then you already know how important it is for docker users. If you want to learn more about it, here are a few templates to start with.

$ git clone https://github.com/docker/awesome-compose.git

$ cd awesome-compose
$ cd nginx-flask-mysql

$ docker-compose up -d
$ curl localhost:80
Blog post #1
Blog post #2
Blog post #3
Blog post #4


https://www.docker.com/blog/awesome-compose-app-samples-for-project-dev-kickoff/



January 24, 2020

 

distro-less using multi-stage

"Distroless" images contain only your application and its runtime dependencies. They do not contain package managers, shells or any other programs you would expect to find in a standard Linux distribution. Docker multi-stage builds make using distroless images easy.

# vi Dockerfile

FROM python:3-slim AS build-env
ADD . /app
WORKDIR /app

FROM gcr.io/distroless/python3
COPY --from=build-env /app /app
WORKDIR /app
CMD ["hello.py", "/etc"]


# Build and run the image as usual

docker build -t myapp .
docker run -t myapp
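The Dockerfile above assumes a hello.py in the build context. The script itself is not in the original post, so the following is only an illustration: since the distroless python3 image's entrypoint is the interpreter, the CMD arguments become the script and its argument.

```python
# hello.py — minimal example script; CMD ["hello.py", "/etc"] runs it
# with /etc as its argument.
import os
import sys

def entry_count(path):
    return len(os.listdir(path))

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    print("hello from distroless; %s has %d entries" % (target, entry_count(target)))
```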

More info:
https://github.com/GoogleContainerTools/distroless



September 07, 2019

 

Docker security check

Running the security check on a docker server is easy.

git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh

You may get a few warnings like this...

[WARN] 1.2.4  - Ensure auditing is configured for Docker files and directories - /var/lib/docker

Open this file and add the log file paths. Do not forget to restart the audit daemon.

# vi  /etc/audit/audit.rules

-w /usr/bin/docker -p wa
-w /var/lib/docker -p wa
-w /etc/docker -p wa
-w /etc/default/docker -p wa
-w /etc/docker/daemon.json -p wa
-w /usr/bin/docker-containerd -p wa
-w /usr/bin/docker-runc -p wa
-w /etc/sysconfig/docker -p wa

# restart auditd service
_____

Another file to be added for security purposes:

vi /etc/docker/daemon.json

{
    "icc": false,
    "log-driver": "syslog",
    "disable-legacy-registry": true,
    "live-restore": true,
    "userland-proxy": false,
    "no-new-privileges": true
}
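These options can also be applied programmatically; a minimal sketch (harden is a hypothetical helper) that merges them into an existing daemon.json:

```python
import json

# The hardening options from daemon.json above. Note that
# "disable-legacy-registry" was removed in later Docker releases and
# may need to be dropped there.
HARDENING = {
    "icc": False,
    "log-driver": "syslog",
    "disable-legacy-registry": True,
    "live-restore": True,
    "userland-proxy": False,
    "no-new-privileges": True,
}

def harden(existing_text="{}"):
    conf = json.loads(existing_text)   # keep any options already set
    conf.update(HARDENING)
    return json.dumps(conf, indent=4)

print(harden('{"log-driver": "json-file", "debug": true}'))
```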
_____

Add this environment variable:

export DOCKER_CONTENT_TRUST=1
echo "DOCKER_CONTENT_TRUST=1" | sudo tee -a /etc/environment

# restart docker



November 13, 2018

 

Django using Docker container

Here are 5 steps to use a Dockerized Django installation, as explained in this article.
https://testdriven.io/dockerizing-django-with-postgres-gunicorn-and-nginx

1) Install docker compose:
sudo curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

sudo chmod +x /usr/local/bin/docker-compose

2) Download docker file
git clone https://github.com/testdrivenio/django-on-docker.git

3) Edit ALLOWED_HOSTS to add your site:
vi /home/ec2-user/django-on-docker/app/hello_django/settings.py
# ALLOWED_HOSTS = ['shantanuoak.com']

4) Use compose to start relevant containers
cd django-on-docker
docker-compose up -d --build

5) Visit your site:
http://shantanuoak.com:1337



February 25, 2018

 

Install and configure packetbeat to monitor mysql traffic

1) Install packetbeat
deb:
sudo apt-get install libpcap0.8
curl -L -O https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-6.2.2-amd64.deb
sudo dpkg -i packetbeat-6.2.2-amd64.deb

rpm:
sudo yum install libpcap
curl -L -O https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-6.2.2-x86_64.rpm
sudo rpm -vi packetbeat-6.2.2-x86_64.rpm

2) Make sure that the "query" property in the "mysql" section is "text" and not "keyword".

[root@localhost packetbeat]# vi packetbeat.template-es6x.json

        "mysql": {
          "properties": {
            "affected_rows": {
              "type": "long"
            },
             "query": {
              "type": "text"
            }
          }
        },
        "nfs": {
          "properties": {
            "minor_version": {


3) Change the host, protocol and password in the elasticsearch output section of the config file. Enable template overwriting and make sure the 6.x version of the template will be loaded.

[root@localhost packetbeat]# vi packetbeat.yml

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["https://944fe807b7525eaf163f502e08a412c5.us-east-1.aws.found.io:9243"]
  # Optional protocol and basic auth credentials.
  protocol: "https"
  username: "elastic"
  password: "xxx"

 # Set to false to disable template loading.
  template.enabled: true

  # Template name. By default the template name is packetbeat.
  template.name: "packetbeat"

  # Path to template file
  template.path: "${path.config}/packetbeat.template.json"

  # Overwrite existing template
  template.overwrite: true

  # If set to true, packetbeat checks the Elasticsearch version at connect time, and if it
  # is 2.x, it loads the file specified by the template.versions.2x.path setting. The
  # default is true.
  template.versions.2x.enabled: false

  # If set to true, packetbeat checks the Elasticsearch version at connect time, and if it
  # is 6.x, it loads the file specified by the template.versions.6x.path setting. The
  # default is true.
  template.versions.6x.enabled: true

  # Path to the Elasticsearch 6.x version of the template file.
  template.versions.6x.path: "${path.config}/packetbeat.template-es6x.json"


4) Check the logs to confirm that everything is loaded correctly.

[root@localhost packetbeat]# cat /var/log/packetbeat/packetbeat| more
2018-02-25T11:53:30+05:30 INFO Metrics logging every 30s
2018-02-25T11:53:30+05:30 INFO Loading template enabled for Elasticsearch 6.x. Reading template file: /etc/packetbeat/packetbeat.template-es6x.json
2018-02-25T11:53:30+05:30 INFO Elasticsearch url: https://944fe807b7525eaf163f502e08a412c.us-east-1.aws.found.io:9243
2018-02-25T11:53:30+05:30 INFO Activated elasticsearch as output plugin.
2018-02-25T11:53:30+05:30 INFO Publisher name: localhost.localdomain
2018-02-25T11:53:30+05:30 INFO Flush Interval set to: 1s
2018-02-25T11:53:30+05:30 INFO Max Bulk Size set to: 50
2018-02-25T11:53:30+05:30 INFO Process matching disabled
2018-02-25T11:53:30+05:30 INFO registered protocol plugin: amqp
2018-02-25T11:53:30+05:30 INFO registered protocol plugin: mongodb
2018-02-25T11:53:30+05:30 INFO registered protocol plugin: mysql
2018-02-25T11:53:30+05:30 INFO registered protocol plugin: nfs
2018-02-25T11:53:30+05:30 INFO registered protocol plugin: pgsql
2018-02-25T11:53:30+05:30 INFO registered protocol plugin: thrift
2018-02-25T11:53:30+05:30 INFO registered protocol plugin: cassandra
2018-02-25T11:53:30+05:30 INFO registered protocol plugin: dns
2018-02-25T11:53:30+05:30 INFO registered protocol plugin: http
2018-02-25T11:53:30+05:30 INFO registered protocol plugin: memcache
2018-02-25T11:53:30+05:30 INFO registered protocol plugin: redis
2018-02-25T11:53:30+05:30 INFO packetbeat start running.
2018-02-25T11:53:32+05:30 INFO Connected to Elasticsearch version 6.2.2
2018-02-25T11:53:32+05:30 INFO Trying to load template for client: https://944fe807b7525eaf163f502e08a412c.us-east-1.aws.found.io:9243
2018-02-25T11:53:32+05:30 INFO Existing template will be overwritten, as overwrite is enabled.
2018-02-25T11:53:32+05:30 INFO Detected Elasticsearch 6.x. Automatically selecting the 6.x version of the template
2018-02-25T11:53:33+05:30 INFO Elasticsearch template with name 'packetbeat' loaded

_____

Or use docker image:

[root@localhost ~]# docker run --cap-add=NET_ADMIN --network=host -e HOST="https://944fe807b7525eaf163f502e08a412c5.us-east-1.aws.found.io:9243" -e PASS="rzmYYJUdHVaglRejr8XqjIX7" shantanuo/packetbeat-agent

_____

# curl commands to connect to secure elastic (cloud)
curl --user "elastic:passwd"  https://xxx.us-east-1.aws.found.io:9243/_aliases 

curl --user "elastic:passwd"  https://xxx.us-east-1.aws.found.io:9243/_cat/indices/ 

# quote the URL so that the shell does not treat & as a background operator
curl --user "elastic:passwd"  "https://xxx.us-east-1.aws.found.io:9243/packetbeat-6.6.2-2019.03.26/_search?pretty=true&q=*:*"
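The same authenticated calls can be made from Python with only the standard library; a sketch where the host and credentials are placeholders, just as in the curl commands:

```python
import base64
import urllib.request

# Build a request with HTTP basic auth, equivalent to curl --user.
def authed_request(url, user, password):
    req = urllib.request.Request(url)
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req  # pass to urllib.request.urlopen(req) to execute

req = authed_request("https://xxx.us-east-1.aws.found.io:9243/_cat/indices",
                     "elastic", "passwd")
print(req.get_header("Authorization"))
```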



December 30, 2017

 

Install mysql with tokuDB engine within percona

This is required if you get an error while initializing the tokudb engine:

echo never > /sys/kernel/mm/transparent_hugepage/enabled

And this is required if you get a permissions error:

rm -rf /storage/custom3381

mkdir /storage/custom3381

chown 1001 /storage/custom3381

Percona server has a built-in environment variable for tokudb:

docker run -p 3381:3306 -v /my/custom3381:/etc/mysql/conf.d -v /storage/custom3381:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=india3381 -e INIT_TOKUDB=1 -d percona/percona-server:5.7



December 07, 2017

 

Docker restart problems

If you restart the server, or if docker ends abnormally after something like

kill -9 {DOCKER_PID}

then you may get an error while restarting your containers.

# docker restart 2dc3fc6e5e3e d6d9d1dab040

Error response from daemon: Cannot restart container 2dc3fc6e5e3e: oci runtime error: container with id exists: 2dc3fc6e5e3e5b63c9d3ad8074972b72867b9ccd250b4c7fced42c616adc2070
Error response from daemon: Cannot restart container d6d9d1dab040: oci runtime error: container with id exists: d6d9d1dab0407706ef4ec37d0bacfe43134054ddd0b7a06d9b97434d0c288564

The solution is to remove the stale container state from runc and containerd.
# rm -rf /run/runc/80768bc717f353484ab54b306bca0506861688d0b1ae0f3d724208cb37cad047
# rm -rf /run/containerd/80768bc717f353484ab54b306bca0506861688d0b1ae0f3d724208cb37cad047
# rm -rf /run/runc/2dc3fc6e5e3e5b63c9d3ad8074972b72867b9ccd250b4c7fced42c616adc2070
# rm -rf /run/containerd/2dc3fc6e5e3e5b63c9d3ad8074972b72867b9ccd250b4c7fced42c616adc2070
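The cleanup can be sketched as a dry run in Python; stale_dirs is a hypothetical helper that only lists what the rm -rf commands above would remove (the roots parameter exists so it can be tested safely):

```python
import os

# List state directories under the runc/containerd runtime dirs whose
# names start with one of the given short container IDs. Dry run only.
def stale_dirs(short_ids, roots=("/run/runc", "/run/containerd")):
    matches = []
    for root in roots:
        if not os.path.isdir(root):
            continue
        for name in sorted(os.listdir(root)):
            if any(name.startswith(short) for short in short_ids):
                matches.append(os.path.join(root, name))
    return matches  # review the list, then remove with shutil.rmtree
```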



June 03, 2017

 

Frequently used docker containers

Here are the containers I need most often.

1) elastic and kibana

a) elastic, kibana and packetbeat

docker run --disable-content-trust -p 9200:9200 -p 5601:5601 -d nshou/elasticsearch-kibana

docker run --cap-add=NET_ADMIN --net=host -e KIBANA="http://shantanuoak.com:5601" -e HOST="http://shantanuoak.com:9200" shantanuo/packetbeat-agent-unsecure

b) Connect to elastic hub:

docker run --cap-add=NET_ADMIN --network=host -e KIBANA="https://6a16d771c4fc3be7f251c7c629a421e2.us-east-1.aws.found.io:9243" -e HOST="https://d322f42d01dc50c50dba0b446e6a1c0a.us-east-1.aws.found.io:9243" -e PASS="pwkbZXIB3VMPtr4wOnpLNi8c"  shantanuo/packetbeat-agent

c) get the IP of elastic using the command "hostname -i" and then install the metricbeat dashboards using docker

docker run docker.elastic.co/beats/metricbeat:5.5.0 ./scripts/import_dashboards  -es http://172.31.73.228:9200


2) python pandas using miniconda

docker run -i -t -p 8888:8888 -v /tmp:/tmp continuumio/miniconda3 /bin/bash -c "/opt/conda/bin/conda install jupyter -y && cd /tmp/ && /opt/conda/bin/jupyter notebook --NotebookApp.token='india' --notebook-dir=/tmp --ip='0.0.0.0' --port=8888 --no-browser --allow-root"

3) mysql (bug fixed, IST timezone added)

docker run -p 3399:3306 -e MYSQL_ROOT_PASSWORD=india3399 -v /my/custom3399:/etc/mysql/conf.d  -v /storage/mysql/datadir3399:/var/lib/mysql -d shantanuo/mysql:5.7

This container uses the config file as shown below:

# vi /my/custom3399/my.cnf
[mysqld]
server-id=1723169137

max_binlog_size=1024M
expire_logs_days=40
binlog_format=ROW
binlog_checksum=NONE

### enable master
# log-bin=/var/log/mysql/mysql-bin.log

### myisam only
# skip-innodb
# default-storage-engine=MyISAM
# default_tmp_storage_engine=MyISAM
# key-buffer-size=1G
# myisam_max_sort_file_size=40G
# myisam_sort_buffer_size=512M
# bulk_insert_buffer_size=1G
### disable strict sql mode
# sql-mode=''
# secure-file-priv = ""

### innodb setting
# innodb_buffer_pool_size=1G
# innodb_log_file_size=512M

# innodb_flush_method=O_DIRECT
# innodb_file_per_table
# innodb-flush-log-at-trx-commit = 2

# make sure temp directory has sufficient space
# tmpdir=/

4) Adminer container to manage mysql

docker run -p 80:80  -d  shantanuo/adminer /bin/bash -c "/usr/sbin/apache2ctl -D FOREGROUND "





May 01, 2017

 

persistent data volumes using docker

If you are using the latest docker version, you can take advantage of docker plugin support.

On older versions, you will get this error...

# docker plugin
docker: 'plugin' is not a docker command.

Once you have made sure that you are using the latest version that supports plugins, install rexray.

curl -sSL https://dl.bintray.com/emccode/rexray/install | sh -s -- stable

vi /etc/rexray/config.yml

libstorage:
  service: ebs
ebs:
  accessKey: xxx
  secretKey: xxx

rexray restart
_____

Create a 250 GB EBS volume and attach it to the container.

docker volume create --driver rexray --opt size=250 --name mysql_datax

docker run -d -p 3312:3306 -e MYSQL_ROOT_PASSWORD=india3312 -v /my/custom:/etc/mysql/conf.d --volume-driver rexray  -v mysql_datax:/var/lib/mysql  mysql:5.6



January 12, 2017

 

Dockerize php application

If you have a CodeIgniter-based PHP application, you can easily dockerize it. The docker command will look something like this...

docker run -p 80:80 -e MYHOST=172.31.11.168  -e MYUSER=root -e MYPASS=pass -e MYDB=livedbbox --restart always -d  oksoft/phpapp

We pass the database host, credentials and name as environment variables that are read in the database config file. You need this code in your database.php file.

# vi application/config/database.php

$db['default'] = array(
        'dsn'   => '',
        'hostname' => getenv("MYHOST"),
        'username' => getenv("MYUSER"),
        'password' => getenv("MYPASS"),
        'database' => getenv("MYDB"),


If your application is not able to write session data to a file, then you may need this change as well.

# vi application/config/config.php
# $config['sess_save_path'] = sys_get_temp_dir();

nginx and other config files along with Dockerfile can be found here...

https://github.com/shantanuo/docker-1/tree/master/nginx-php



December 27, 2016

 

play with docker

play-with-docker is a Docker playground which gives you the experience of having a free Alpine Linux virtual machine in the cloud, where you can build and run Docker containers and even create clusters with Docker features like Swarm Mode. Under the hood, DIND (Docker-in-Docker) is used to give the effect of multiple VMs/PCs.

http://play-with-docker.com/



November 01, 2016

 

sysdig for system admins

What about a tool for sysadmins that combines all the utilities we use every day?
sysdig is a combination of strace + tcpdump + htop + iftop + lsof + transaction tracing.

It is an open source system-level exploration tool that captures system state and activity.

Here is how to install it...

curl -s https://s3.amazonaws.com/download.draios.com/stable/install-sysdig | sudo bash

And here are a few examples

Dump system activity to file,
sysdig -w trace.scap

Show all the interactive commands executed inside a given container.
sysdig -pc -c spy_users container.name=wordpress1

View the top network connections for a single container.
sysdig -pc -c topconns container.name=wordpress1

See all the GET HTTP requests made by the machine
sudo sysdig -s 2000 -A -c echo_fds fd.port=80 and evt.buffer contains GET

See all the SQL select queries made by the machine
sudo sysdig -s 2000 -A -c echo_fds evt.buffer contains SELECT

See queries made via apache to an external MySQL server happening in real time
sysdig -s 2000 -A -c echo_fds fd.sip=192.168.30.5 and proc.name=apache2 and evt.buffer contains SELECT

More examples can be found here..

http://www.sysdig.org/wiki/sysdig-examples/#application



August 26, 2016

 

Logsene by sematext

Logsene by Sematext is very similar to Logstash, which is part of the ELK stack. It makes hosting and managing data much easier by adding features like IP address white-listing and user management.

logstash

To start pushing logs, you must create a file named /etc/logstash/conf.d/logsene.conf with the text below and restart Logstash.

input {
    file {
        path => "/var/log/messages"
        start_position => "beginning"
    }
}

output {
    elasticsearch {
        hosts => "logsene-receiver.sematext.com:443" # use port 80 for plain HTTP, instead of HTTPS
        ssl => "true"                                # set to false if you don't want to use SSL/HTTPS
        index => "38e31db7-3762-4b9c-937a-3e2e080974"
        manage_template => false
        idle_flush_time => 10
        flush_size => 1000
    }
}


filebeat

The following example tails the /var/log/logstash/test.log file and forwards every line to a Logstash beats input. To start pushing logs, you need to replace the config file named filebeat.yml with the one below and restart Filebeat.

filebeat:
  prospectors:
    -
      paths:
        - /var/log/logstash/test.log
        # - c:\logs\test.log
output:
  logstash:
    hosts: ["LOGSTASH_HOST:11111"]
For this to work, Logstash also needs to be configured to accept logs from Filebeat:

input {
  beats {
    port => 11111
  }
}

output {
    elasticsearch {
        hosts => "logsene-receiver.sematext.com:443" # use port 80 for plain HTTP, instead of HTTPS
        ssl => "true"                                # set to false if you don't want to use SSL/HTTPS
        index => "38e31db7-3762-4b9c-937a-3e2e080974"
        manage_template => false
    }
}


collect docker logs

docker run --name sematext-agent --restart=always \
  -e LOGSENE_TOKEN=38e31db7-3762-4b9c-937a-3e2e080974 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /etc/localtime:/etc/localtime:ro \
  -d sematext/sematext-agent-docker


AWS Lambda

https://github.com/sematext/logsene-aws-lambda-s3


Python

import logging
import logging.handlers


handler = logging.handlers.SysLogHandler(address=('logsene-receiver-syslog.sematext.com', 514))
formatter = logging.Formatter("38e31db7-3762-4b9c-937a-3e2e080974:%(message)s")
handler.setFormatter(formatter)
logger = logging.getLogger('HelloLogsene')
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)


logger.debug("Hello, Logsene!")
logger.info("Hello, Logsene!")
logger.warning("Hello, Logsene!")
logger.error("Hello, Logsene!")



Elasticsearch API

host: logsene-receiver.sematext.com
port: 80 (HTTP) or 443 (HTTPS)
index: 38e31db7-3762-4b9c-937a-3e2e080974d0 (this is your Logsene app token - keep it secure)
curl -XPOST http://logsene-receiver.sematext.com/38e31db7-3762-4b9c-937a-3e2e080974/example/ -d '{
    "message": "Hello, Logsene!"
}'



July 21, 2016

 

Install and use private docker registry

# On the master server, create a registry container...
docker run -d -p 5000:5000 registry

This command will start a fresh new registry. If you have a registry with all your images built in, then use that one like this...

docker run -p 5000:5000 -d shantanuo/myregistry
_____

# On the client server, change the docker config file as shown below and restart docker...
(centOS)
vi /etc/sysconfig/docker
or
vi /etc/init.d/docker

OPTIONS="--insecure-registry 52.205.213.245:5000"

(Ubuntu)
vi /etc/default/docker

DOCKER_OPTS="--insecure-registry 52.205.213.245:5000"

# Now download an image from docker hub and upload it to private repository...
docker pull django
docker pull rabbitmq:3-management
docker pull mongo:3.3.9
docker pull phusion/passenger-full
docker pull continuumio/miniconda

docker tag django 52.205.213.245:5000/shantanuo/mydjango
docker tag rabbitmq:3-management 52.205.213.245:5000/shantanuo/myrabbit
docker tag mongo:3.3.9 52.205.213.245:5000/shantanuo/mymongo
docker tag phusion/passenger-full 52.205.213.245:5000/shantanuo/mypassenger
docker tag continuumio/miniconda 52.205.213.245:5000/shantanuo/myminiconda

docker push 52.205.213.245:5000/shantanuo/mydjango
docker push 52.205.213.245:5000/shantanuo/myrabbit
docker push 52.205.213.245:5000/shantanuo/mymongo
docker push 52.205.213.245:5000/shantanuo/mypassenger
docker push 52.205.213.245:5000/shantanuo/myminiconda

# check if all the images are uploaded correctly
docker search 52.205.213.245:5000/
OR
docker search localhost:5000/
_____

# backup your private registry so that you can restore it in case of a master server crash

docker commit 126781fc1667 shantanuo/myregistry

docker push shantanuo/myregistry
_____

Download and run the private registry image from docker hub

docker run -p 5000:5000 -d shantanuo/myregistry

docker search localhost:5000/
NAME                    DESCRIPTION   STARS     OFFICIAL   AUTOMATED
openshift/busybox                     0
shantanuo/pyrun                       0
shantanuo/mydjango                    0
shantanuo/myrabbit                    0
shantanuo/mymongo                     0
shantanuo/mypassenger                 0
shantanuo/myminiconda                 0
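With the newer registry:2 image, docker search against a private registry no longer works; the catalog is exposed over an HTTP endpoint instead. A sketch using only the standard library (the sample response below is hypothetical):

```python
import json
import urllib.request

# Query the v2 catalog endpoint of a private registry to list repositories.
def list_repositories(registry="http://localhost:5000"):
    with urllib.request.urlopen(registry + "/v2/_catalog") as resp:
        return json.load(resp)["repositories"]

# Hypothetical sample of what the endpoint returns:
sample = '{"repositories": ["shantanuo/mydjango", "shantanuo/myrabbit"]}'
print(json.loads(sample)["repositories"])
```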
_____

Here is how to download and start a container from private registry...

From docker hub:

docker run --hostname oksoft -p 15672:15672  -d rabbitmq:3-management

From private registry:

docker run --hostname oksoft -p 15672:15672  -d ec2-54-164-0-64.compute-1.amazonaws.com:5000/shantanuo/myrabbit



 

Install docker on Ubuntu Trusty 14.04

Steps to install docker on Ubuntu Trusty

sudo apt-get update

sudo apt-get install apt-transport-https ca-certificates

sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" > /etc/apt/sources.list.d/docker.list
# change the word "main" to "experimental" if you need 1.12 version of docker
_____

sudo apt-get update

sudo apt-get purge lxc-docker

apt-cache policy docker-engine

sudo apt-get update

sudo apt-get install -y linux-image-extra-$(uname -r)

sudo apt-get install apparmor

sudo apt-get update

sudo apt-get install -y docker-engine

sudo service docker start



July 12, 2016

 

tokuDB (mongo) using docker

Here is the command that will initiate a toku container.

docker run -d -p 27017:27017 -v /tokudata:/data/db ankurcha/tokumx

(or use the official image "mongo" instead of ankurcha/tokumx to install mongoDB without the toku engine)
Now this toku installation is available through port 27017 on the host IP, which may be 172.17.0.1; you can find it using the ifconfig command.

The following python code will connect to the toku mongo container and add a record.

from pymongo import MongoClient
client = MongoClient('172.17.0.1:27017')
db = client.myFirstMB
db.countries.insert_one({"name" : "USA"})
for i in db.countries.find():
    print(i)

Since we have linked the data directory to the /tokudata folder of the host machine, the data can be easily backed up.



June 30, 2016

 

list of useful containers

Here is a list of some of the containers I use frequently. First, here is how to install docker.
Make sure that you are a root user and that there is enough disk space available.

# install docker on AWS linux

yum install -y docker

# You can install docker on centOS version 7 if you have 64-bit version

cat /etc/redhat-release

sudo yum remove docker docker-common container-selinux docker-selinux docker-engine

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

yum makecache fast

yum install docker-ce

service docker start
_____

# start docker
vi /etc/sysconfig/docker-storage

DOCKER_STORAGE_OPTIONS="--storage-opt dm.basesize=200G"

/etc/init.d/docker start

# Install docker-compose
curl -L https://github.com/docker/compose/releases/download/1.8.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

# Install aliases
curl -sf -L https://raw.githubusercontent.com/shantanuo/docker/master/alias.sh | sh

Editing the storage options above allows bigger containers (up to 200 GB) to be loaded. The container may still run out of space once you start saving data into it.

Node.js

In order to use a node application within the docker environment, you can use the official node image that can be found here...

https://hub.docker.com/r/library/node/

Change to the directory where you have already written code and add the dockerfile with these 2 lines...

$ vi Dockerfile
FROM node:4-onbuild
EXPOSE 8888

Once your script is ready, you need to build an image...

$ docker build -t shantanuo/my-nodejs-app .

And run the node application...

$ docker run -p 8888:8888 -d shantanuo/my-nodejs-app
_____

You can push this image to docker hub as a private or public repository.

docker login
username:shantanuo
password:XXXX

docker push shantanuo/my-nodejs-app

MySQL

official mysql repository

mkdir -p /storage/test-mysql/datadir
docker run -d -p 3306:3306  -e MYSQL_ALLOW_EMPTY_PASSWORD=yes  -v /my/custom:/etc/mysql/conf.d  -v /storage/test-mysql/datadir:/var/lib/mysql  mysql:5.6

(size: 100MB)
Just by changing the name to test-mysql2, we can set up another mysql container. Instead of the official mysql image, we can use tutum/mysql, which has a customized installation.

fixed the bug in the official mysql image

https://github.com/shantanuo/mysql

Percona with tokuDB

docker run -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -e INIT_TOKUDB=1 -d percona/percona-server

Log in to the container and run this command to enable tokudb if the "show engines" command does not show tokudb.

ps_tokudb_admin --enable

Backup
# backup of mysql hosted in the folder /storage of another container named mysql-server

docker run -it \
--link mysql-server:mysql \
-v /storage/mysql-server/datadir:/var/lib/mysql \
-v /storage/backups:/backups \
--rm=true \
severalnines/mysql-pxb \
sh -c 'exec /run_backup.sh'

# backup of mysql from hosted machine

docker run -it \
-v /var/lib/mysql:/var/lib/mysql \
-v /storage/backups:/backups \
--rm=true \
severalnines/mysql-pxb \
sh -c 'exec innobackupex --host="$hostname" --port="3306" --user=root --password="$rootpassword" /backups'

Utilities

# cluster control container:
docker run -d --name clustercontrol -p 5000:80 severalnines/clustercontrol

elastic

1) official container

# command to install both elasticsearch and kibana (unofficial image)
docker run -d -p 9200:9200 -p 5601:5601 nshou/elasticsearch-kibana

Or use the following with volume attached:

docker run -d -p 5601:5601 -p 5000:5000  -p 9200:9200 --ulimit nofile=65536:65536 -v /mydata:/var/lib/elasticsearch kenwdelong/elk-docker:latest
_____

Here are 2 commands to start Elasticsearch with Kibana using docker official version.

# cat /tmp/elasticsearch.yml
script.inline: on
script.indexed: on
network.host: 0.0.0.0

# docker run -d -v /tmp/:/usr/share/elasticsearch/config  -p 9200:9200 -p 9300:9300  -e ES_HEAP_SIZE=1g elasticsearch:2

Find the name of the Elasticsearch container and link Kibana to it by changing elasticsearch_name_here below like this...

# docker run -d -p 5601:5601 --link   elasticsearch_name_here:elasticsearch -e ELASTICSEARCH_URL=http://elasticsearch:9200 kibana
_____

Log in to the newly created elastic container and install plug-ins:
docker exec -it container_id bash

# pwd
/usr/share/elasticsearch

# bin/plugin install analysis-phonetic

Once the plugin is installed, restart the container so that elastic service will be restarted....
# docker restart c44004a47f46
_____

# get the IP of elastic using command hostname -i and then install metric-beat dashboard using docker

docker run docker.elastic.co/beats/metricbeat:5.5.0 ./scripts/import_dashboards  -es http://172.31.73.228:9200

2) custom container

elastic - customize elasticsearch installation and maintenance

3) elastic with kibana version 5
docker run --name myelastic -v /tmp/:/usr/share/elasticsearch/config  -p 9200:9200 -p 9300:9300 -d elasticsearch:5.0

docker run -d -p 5601:5601 --link myelastic:elasticsearch -e ELASTICSEARCH_URL=http://elasticsearch:9200 kibana:5.0
_____

# docker run -p 9200:9200 -p 9300:9300 -e ES_JAVA_OPTS="-Xms512m -Xmx512m" -d elasticsearch:5

You may get this error in your logs:

Exception in thread "main" java.lang.RuntimeException: bootstrap checks failed max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]

You'll need to raise vm.max_map_count on the Docker host. For reference:

sysctl -w vm.max_map_count=262144

https://www.elastic.co/guide/en/elasticsearch/guide/current/_file_descriptors_and_mmap.html
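The sysctl command above takes effect immediately but does not survive a reboot; to persist it, the same value can be added to the sysctl configuration (the exact path may vary by distribution):

```
# /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/)
vm.max_map_count=262144
```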

adminer

Adminer is a web interface for connecting to almost any database, such as PostgreSQL, MySQL, or Oracle.
Instead of publishing Adminer on an arbitrary port, use --net=host so that it runs on the host machine's default port 80. It will then also reach MySQL on the default port 3306, which the mysql container started above is already listening on.

If you do not want to add the --net parameter, use the default "bridge" network instead. You will then need the following command to find the IP address of the Docker host.

# ip addr show docker0

This command will show the docker host IP address on the docker0 network interface.

redshift connection:

docker run -i -t --rm -p 80:80 --name adminer shantanuo/adminer

The above command logs you in to the Docker container. You need to start the Apache service within the container...

sudo service apache2 start

Or use any of the methods mentioned below:

download

wget http://www.adminer.org/latest.php -O /tmp/index.php


## connect to redshift 
docker run -it postgres psql -h merged2017.xxx.us-east-1.redshift.amazonaws.com -p 5439 -U root vdbname

Connect to any database such as MySQL, PostgreSQL, Redshift, Oracle or MongoDB:

postgresql (redshift) or mysql

docker run -it -p 8060:80 -v /tmp/:/var/www/html/ shantanuo/phpadminer

mongoDB

docker run -d -p 8070:80 -v /tmp:/var/www/html ishiidaichi/apache-php-mongo-phalcon

oracle

docker run -d -p 8080:80 -v /tmp/:/app lukaszkinder/apache-php-oci8-pdo_oci

# not sure about how to support mssql

python

compact packages

1) pyrun
Compact Python 2 and 3 images without any external libraries, useful for quick checks like the one below.
It is also an easy way to convince people to upgrade to Python 3:

# docker run -it --rm shantanuo/pyrun:2.7 python
>>> 3/2
1

# docker run -it --rm shantanuo/pyrun:3.4 python
>>> 3/2
1.5

The Python 2.7 image performs integer (floor) division and returns 1, while the 3.4 image correctly returns 1.5.
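The same difference can be demonstrated inside Python 3 itself, where `//` keeps the Python 2 floor-division behaviour (a quick standalone illustration):

```python
# "/" is true division in Python 3; "//" is floor division,
# which is what a bare 3/2 did in Python 2.
print(3 / 2)   # 1.5
print(3 // 2)  # 1
```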

2) staticpython 
4 MB single file python package!

3) socket 
 python with application files

Complete python package

4) conda official 
Official python installation:

https://github.com/ContinuumIO/docker-images

And here is the command to start miniconda and ipython together...

docker run -i -t -p 8888:8888 -v /tmp:/tmp continuumio/miniconda3 /bin/bash -c "/opt/conda/bin/conda install jupyter -y --quiet && cd /tmp/ && /opt/conda/bin/jupyter notebook --notebook-dir=/tmp --ip='*' --port=8888 --no-browser --allow-root"

5) miniconda customized
Here is an image with the pandas and sqldf modules pre-installed.

# Start ipython container that is based on miniconda image in a screen session
docker run -p 7778:7778 -t shantanuo/miniconda_ipython_sqldf /bin/bash

# better start with environment variables
docker run -p 7778:7778 \
-e DEV_ACCESS_KEY=XXX -e DEV_SECRET_KEY=YYY \
-e PROD_READONLY_ACCESS_KEY=XXX -e PROD_READONLY_SECRET_KEY=YYY \
-e PROD_READWRITE_ACCESS_KEY=XXX -e PROD_READWRITE_SECRET_KEY=YYY \
-t shantanuo/miniconda_ipython_sqldf /bin/bash

# Log-in to newly created container
docker exec -it $(docker ps -l -q) /bin/bash

# Start ipython notebook on port 7778 that can be accessed from anywhere (*)
cd /home/
ipython notebook --ip=* --port=7778

# and use the environment keys in your code like this...
import boto3
import os
s3 = boto3.client('s3',aws_access_key_id=os.environ['DEV_ACCESS_KEY'], aws_secret_access_key=os.environ['DEV_SECRET_KEY'])
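With several key pairs exported, a small helper can pick the right pair by environment name (an illustrative sketch; credentials_for is not part of the image):

```python
import os

def credentials_for(env_name):
    """Return (access_key, secret_key) for an environment name
    such as DEV, PROD_READONLY or PROD_READWRITE."""
    return (os.environ[env_name + "_ACCESS_KEY"],
            os.environ[env_name + "_SECRET_KEY"])

# e.g. boto3.client('s3', aws_access_key_id=key, aws_secret_access_key=secret)
```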

application containers

Here is an example from Amazon showing how to build your own container with a PHP application.

https://github.com/awslabs/ecs-demo-php-simple-app

Utility containers

1) myscan 

Use OCR to read any image.
alias pancard='docker run -i --rm -v "$(pwd)":/home/ shantanuo/myscan python /scan.py "$@"'

wget https://github.com/shantanuo/docker/raw/master/myscan/pan_card.jpg

pancard pan_card.jpg

2) panamapapers 

container with an SQLite database ready to query

3) newrelic
newrelic docker image that works like nagios

docker run -d \
--privileged=true --name nrsysmond \
--pid=host \
--net=host \
-v /sys:/sys \
-v /dev:/dev \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/log:/var/log:rw \
-e NRSYSMOND_license_key=186b2a8d6af29107609abca749296b46cda9fa69 \
-e NRSYSMOND_logfile=/var/log/nrsysmond.log \
newrelic/nrsysmond:latest

docker run -d \
  -e NEW_RELIC_LICENSE_KEY=186b2a8d6af29107609abca749296b46cda9fa69  \
  -e AGENT_HOST=52.1.174.168 \
  -e AGENT_USER=root \
  -e AGENT_PASSWD=XXXXX \
  newrelic/mysql-plugin

4) OCS inventory
docker run -d -p 80:80 -p 3301:3306 zanhsieh/docker-ocs-inventory-ng

http://52.86.68.170/ocsreports/
(username:admin, password:admin)

5) selenium
simulate a browser (Chrome) with Selenium pre-installed

docker run -d -v /dev/shm:/dev/shm -p 4444:4444 selenium/standalone-chrome

The hub URL...
http://52.205.135.220:4444/wd/hub/


6) Deploying registry server
#Start your registry:
docker run -d -p 5000:5000 --restart=always --name registry registry:2

#You can now use it with docker. Tag any image to point to your registry:
docker tag image_name localhost:5000/image_name

#then push it to your registry:
docker push localhost:5000/image_name

# pull it back from your registry:
docker pull localhost:5000/image_name

# push the registry container to the hub
docker stop registry
docker commit registry yourname/registry   # "yourname" is a placeholder for your Docker Hub account
docker push yourname/registry

7) Docker User Interface
docker run -d -p 9000:9000 -v /var/run/docker.sock:/docker.sock --name dockerui abh1nav/dockerui:latest -e="/docker.sock"

Better user interface with shell access:

docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer

8) docker clean up container
docker run -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker --rm martin/docker-cleanup-volumes --dry-run

9) prometheus monitoring
Prometheus runs on port 9090 and cAdvisor on port 8080.

git clone https://github.com/vegasbrianc/prometheus.git
cd prometheus/

/usr/local/bin/docker-compose  up -d

10) open refine utility
# docker run --privileged -v /openrefine_projects/:/mnt/refine -p 35181:3333 -d psychemedia/ou-tm351-openrefine
Or use this:
docker run -p 3334:3333 -v /mnt/refine -d psychemedia/docker-openrefine

11) Freeswitch
docker run -d sous/freeswitch

12) mongodb
from tutum
docker run -d -p 27017:27017 -p 28017:28017 -e MONGODB_PASS="mypass" tutum/mongodb

official image with the WiredTiger engine
docker run -p 27017:27017 -v /tokudata:/data/db -d mongo --storageEngine wiredTiger

with tokumx compression engine
docker run -p 27017:27017 -v /tokudata:/data/db -d ankurcha/tokumx

# create alias for bsondump command

# alias bsondump='docker run -i --rm -v /tmp/:/tmp/ -w /tmp/ mongo bsondump "$@"'

# bsondump data_hits_20160423.bson > test.json


# alias mongorestore='docker run -i --rm -v /tmp/:/tmp/ -w /tmp/ mongo mongorestore "$@"'

# mongorestore --host `hostname -i` incoming_reports_testing.bson

# docker exec -it 12db5a259e58 mongo

# db.incoming_reports_testing.findOne()

# db.incoming_reports_testing.distinct("caller_id.number")


13) Consul Monitor
docker run -d --name=consul --net=host gliderlabs/consul-server -bootstrap -advertise=52.200.204.48

14) Registrator container
$ docker run -d \
--name=registrator \
--net=host \
--volume=/var/run/docker.sock:/tmp/docker.sock \
gliderlabs/registrator:latest \
consul://localhost:8500

15) wordpress

There is a custom image here...

docker run -p 8081:80 -d tutum/wordpress

Docker Hub also has official WordPress images.

docker run -d -p 3306:3306  -e MYSQL_ROOT_PASSWORD=india mysql:5.7

docker run -p 8083:80 --link gigantic_pike:mysql -e WORDPRESS_DB_NAME=wpdb -e WORDPRESS_DB_USER=root -e WORDPRESS_DB_PASSWORD=india -d wordpress

And we can also use docker-compose to start and link the db and application containers.

vi docker-compose.yml

version: '2'

services:
   db:
     image: mysql:5.7
     volumes:
       - "./.data/db:/var/lib/mysql"
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: wordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     links:
       - db
     ports:
       - "8000:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_PASSWORD: wordpress

/usr/local/bin/docker-compose up -d

15a) Drupal

docker run --name cmsdb -p 3306:3306  -e MYSQL_ROOT_PASSWORD=india -d mysql:5.7

docker run --name mydrupal --link cmsdb:mysql -p 8080:80 -e MYSQL_USER=root -e MYSQL_PASSWORD=india -d drupal

Choose the advanced option and change the "localhost" value for Database host to mysql (the alias set by --link cmsdb:mysql).

16) Packetbeat container
docker run -d --restart=always --net=host shantanuo/packetbeat-agent

17) Django

docker run --name some-django-app -v "$PWD":/usr/src/app -w /usr/src/app -p 8000:8000  -e location=mumbai -d django bash -c "pip install -r requirements.txt && python manage.py runserver 0.0.0.0:8000"

18) rabbitmq
docker run -d --hostname oksoft -p 8080:15672 rabbitmq:3-management

19) ruby and passenger
(official docker image from phusion)

docker run -d -p 3000:3000 phusion/passenger-full

# login to your container:
docker exec -it container_id bash

# change to opt directory
cd /opt/
mkdir public

# a test file
curl google.com > public/index.html

# start passenger:
passenger start

20) update containers
# monitor the containers named "nginx" and "redis" for updates

docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  centurylink/watchtower nginx redis

21) sematext monitoring
Access your Docker and other stats from https://apps.sematext.com

docker run --memory-swap=-1  -d --name sematext-agent --restart=always -e SPM_TOKEN=653a6dc9-1740-4a25-85d3-b37c9ad76308 -v /var/run/docker.sock:/var/run/docker.sock sematext/sematext-agent-docker

23) network emulator delay
Add a network delay of 3000 milliseconds to Docker traffic.

# terminal 1
# docker run -it --rm --name tryme alpine sh -c     "apk add --update iproute2 && ping www.example.com"

# terminal 2
# docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock gaiaadm/pumba pumba netem --interface eth0 --duration 1m delay --time 3000 tryme

24) postgresql
docker run -p 5432:5432 --name dbt-postgresql  -m 1G -c 256 -v /mypgdata:/var/lib/postgresql/data  -e POSTGRES_PASSWORD=india -d postgres

25) jboss
docker run -p 8080:8080 -p 9990:9990 -m 1G -c 512 -e JBOSS_PASS="mypass" -d tutum/jboss

26) Jupyter notebook with google facets for pandas dataframe:
docker run -d -p 8889:8888 kozo2/facets start-notebook.sh --NotebookApp.token='' --NotebookApp.iopub_data_rate_limit=10000000

27) AWS container
# cat ~/.aws/config
[default]
aws_access_key_id = XXX
aws_secret_access_key = XXX
region = ap-south-1

alias myaws='docker run --rm -v ~/.aws:/root/.aws -v $(pwd):/aws  -it amazon/aws-cli'

29) recall the original run statement of a given container
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock assaflavie/runlike [container_id]

30) Jupyter Notebook
docker run -d -p 8887:8888 -v /tmp:/tmp shantanuo/notebook

31) terraforming
# docker run -e AWS_ACCESS_KEY_ID=xxx  -e AWS_SECRET_ACCESS_KEY=xxx -e AWS_DEFAULT_REGION=ap-south-1 quay.io/dtan4/terraforming:latest terraforming s3

32) apache bench to check server load performance:

docker run -d -p 80 --name web -v /tmp/:/var/www/html russmckendrick/nginx-php

# docker run --link=silly_bassi russmckendrick/ab ab -k -n 10000 -c 16 http://134.195.194.88/

# docker run --link=web russmckendrick/ab ab -k -n 10000 -c 16 http://web/

33) airflow - monitor workflows and tasks

# docker run -p 8080:8080 -d puckel/docker-airflow webserver
_____

You may get an error like this when you restart the server:
Error response from daemon: oci runtime error: container with id exists:

The fix is to remove these entries from the /run folder:

# rm -rf /run/runc/*
# rm -rf /run/container-id

34) gitlab installation

docker run -d  \
    --env GITLAB_OMNIBUS_CONFIG="external_url 'https://134.195.194.88/'; gitlab_rails['lfs_enabled'] = true; registry_external_url 'https://134.195.194.88:4567';" \
    --publish 443:443 --publish 80:80  --publish 4567:4567 --publish 10022:22 \
    --env 'GITLAB_SSH_PORT=10022' --env 'GITLAB_PORT=443' \
    --env 'GITLAB_HTTPS=true' --env 'SSL_SELF_SIGNED=true' \
    --volume /mysrv/gitlab/config:/etc/gitlab \
    --volume /mysrv/gitlab/logs:/var/log/gitlab \
    --volume /mysrv/gitlab/data:/var/opt/gitlab \
    --volume /srv/docker/gitlab/gitlab/certs:/etc/gitlab/ssl \
    gitlab/gitlab-ce:latest


