Shantanu's Blog

Database Consultant

June 30, 2016

 

list of useful containers

Here is a list of some of the containers that I use frequently. But first, here is how to install docker.
First, make sure that you are the root user and that there is enough disk space available.
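For example, a quick check before installing (just the usual commands, nothing distribution specific):

# confirm you are root and that there is free disk space
whoami
df -h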

# install docker on AWS linux

yum install -y docker

# You can install docker on CentOS 7 if you have the 64-bit version

cat /etc/redhat-release

sudo yum remove docker docker-common container-selinux docker-selinux docker-engine

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

yum makecache fast

yum install docker-ce

service docker start
_____

# start docker
vi /etc/sysconfig/docker-storage

DOCKER_STORAGE_OPTIONS="--storage-opt dm.basesize=200G"

/etc/init.d/docker start

# Install docker-compose
curl -L https://github.com/docker/compose/releases/download/1.8.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
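To confirm that the binary was downloaded correctly and is executable, check its version:

/usr/local/bin/docker-compose --version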

# Install aliases
curl -sf -L https://raw.githubusercontent.com/shantanuo/docker/master/alias.sh | sh

Editing the storage options above allows bigger containers (up to 200 GB) to be loaded. Otherwise the container may run out of space once you start saving data into it.
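If you are using the devicemapper storage driver, you can verify that the new base size has taken effect (a quick check, assuming devicemapper):

# should report the value set in DOCKER_STORAGE_OPTIONS
docker info | grep -i "base device size"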

Node.js

In order to use a Node.js application within the docker environment, you can use the official node image that can be found here...

https://hub.docker.com/r/library/node/

Change to the directory where you have already written your code and add a Dockerfile with these 2 lines...

$ vi Dockerfile
FROM node:4-onbuild
EXPOSE 8888

Once your script is ready, you need to build an image...

$ docker build -t shantanuo/my-nodejs-app .

And run the node application...

$ docker run -p 8888:8888 -d shantanuo/my-nodejs-app
_____

You can push this image to docker hub as a private or public repository.

docker login
username:shantanuo
password:XXXX

docker push shantanuo/my-nodejs-app

MySQL

official mysql repository

mkdir -p /storage/test-mysql/datadir
docker run -d -p 3306:3306 -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -v /my/custom:/etc/mysql/conf.d -v /storage/test-mysql/datadir:/var/lib/mysql mysql:5.6

(size: 100MB)
Just by changing the data directory name to test-mysql2 (and picking a free host port) we can set up another mysql container, as shown below. Instead of the official mysql image, we can also use tutum/mysql which has a customized installation.
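For example, a second container based on the command above could be started like this (3307 is just an unused host port):

mkdir -p /storage/test-mysql2/datadir
docker run -d -p 3307:3306 -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -v /my/custom:/etc/mysql/conf.d -v /storage/test-mysql2/datadir:/var/lib/mysql mysql:5.6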

A fix for a bug in the official mysql image is available here:

https://github.com/shantanuo/mysql

Percona with tokuDB

docker run -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -e INIT_TOKUDB=1 -d percona/percona-server

Log in to the container and run this command to enable TokuDB if the "show engines" command does not list it:

ps_tokudb_admin --enable
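You can then confirm from inside the container that the engine is available (the empty root password comes from MYSQL_ALLOW_EMPTY_PASSWORD above):

mysql -uroot -e "SHOW ENGINES" | grep -i tokudb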

Backup
# backup of mysql hosted in the folder /storage of another container named mysql-server

docker run -it \
--link mysql-server:mysql \
-v /storage/mysql-server/datadir:/var/lib/mysql \
-v /storage/backups:/backups \
--rm=true \
severalnines/mysql-pxb \
sh -c 'exec /run_backup.sh'

# backup of mysql running on the host machine

docker run -it \
-v /var/lib/mysql:/var/lib/mysql \
-v /storage/backups:/backups \
--rm=true \
severalnines/mysql-pxb \
sh -c 'exec innobackupex --host="$hostname" --port="3306" --user=root --password="$rootpassword" /backups'

Utilities

# cluster control container:
docker run -d --name clustercontrol -p 5000:80 severalnines/clustercontrol

elastic

1) official container

# command to install both elasticsearch and kibana (unofficial image)
docker run -d -p 9200:9200 -p 5601:5601 nshou/elasticsearch-kibana

Or use the following with volume attached:

docker run -d -p 5601:5601 -p 5000:5000  -p 9200:9200 --ulimit nofile=65536:65536 -v /mydata:/var/lib/elasticsearch kenwdelong/elk-docker:latest
_____

Here are 2 commands to start Elasticsearch with Kibana using the official Docker images.

# cat /tmp/elasticsearch.yml
script.inline: on
script.indexed: on
network.host: 0.0.0.0

# docker run -d -v /tmp/:/usr/share/elasticsearch/config  -p 9200:9200 -p 9300:9300  -e ES_HEAP_SIZE=1g elasticsearch:2

Find the name of the elasticsearch container and link it to kibana by changing elastic_name_here below like this...

# docker run -d -p 5601:5601 --link elastic_name_here:elasticsearch -e ELASTICSEARCH_URL=http://elasticsearch:9200 kibana
_____

Log in to the newly created elastic container and install plugins:
docker exec -it container_id bash

# pwd
/usr/share/elasticsearch

# bin/plugin install analysis-phonetic

Once the plugin is installed, restart the container so that the elastic service is restarted...
# docker restart c44004a47f46
_____

# get the IP of elastic using command hostname -i and then install metric-beat dashboard using docker

docker run docker.elastic.co/beats/metricbeat:5.5.0 ./scripts/import_dashboards  -es http://172.31.73.228:9200

2) custom container

elastic - customize elasticsearch installation and maintenance

3) elastic with kibana version 5
docker run --name myelastic -v /tmp/:/usr/share/elasticsearch/config  -p 9200:9200 -p 9300:9300 -d elasticsearch:5.0

docker run -d -p 5601:5601 --link myelastic:elasticsearch -e ELASTICSEARCH_URL=http://elasticsearch:9200 kibana:5.0
_____

# docker run -p 9200:9200 -p 9300:9300 -e ES_JAVA_OPTS="-Xms512m -Xmx512m" -d elasticsearch:5

You may get this error in your logs:

Exception in thread "main" java.lang.RuntimeException: bootstrap checks failed max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]

You'll need to raise vm.max_map_count on your docker host. For reference:

sysctl -w vm.max_map_count=262144

https://www.elastic.co/guide/en/elasticsearch/guide/current/_file_descriptors_and_mmap.html
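To make the setting survive a reboot (a standard follow-up, not part of the original command):

echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p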

adminer

Adminer is a web interface to connect to almost any database, such as postgresql, mysql or oracle.
Instead of publishing adminer on an arbitrary port, use --net=host so that it uses the host machine's default port 80. It will then also reach the default mysql port 3306 that is already published by the mysql container shown above.
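For example, the adminer command from the redshift section below can be run with --net=host instead of publishing port 80 (a sketch using the same shantanuo/adminer image):

docker run -it --rm --net=host --name adminer shantanuo/adminer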

If you do not want to add one more parameter (i.e. --net), then use the default "bridge" network. In that case you will need the following command to find the IP address of the docker host.

# ip addr show docker0

This command will show the docker host IP address on the docker0 network interface.

redshift connection:

docker run -i -t --rm -p 80:80 --name adminer shantanuo/adminer

The above command will log you into the docker container. You then need to start the apache service within the container...

sudo service apache2 start

Or use any of the methods mentioned below:

download

wget http://www.adminer.org/latest.php -O /tmp/index.php


## connect to redshift 
docker run -it postgres psql -h merged2017.xxx.us-east-1.redshift.amazonaws.com -p 5439 -U root vdbname

connect to any database like mysql, pgsql, redshift, oracle or mongoDB

postgresql (redshift) or mysql

docker run -it -p 8060:80 -v /tmp/:/var/www/html/ shantanuo/phpadminer

mongoDB

docker run -d -p 8070:80 -v /tmp:/var/www/html ishiidaichi/apache-php-mongo-phalcon

oracle

docker run -d -p 8080:80 -v /tmp/:/app lukaszkinder/apache-php-oci8-pdo_oci

# not sure about how to support mssql

python

compact packages

1) pyrun
Compact Python versions 2 and 3, without any external libraries, for basic testing like this...
Here is an easy way to convince people to upgrade to Python 3.0+:

# docker run -it --rm shantanuo/pyrun:2.7 python
>>> 3/2
1

# docker run -it --rm shantanuo/pyrun:3.4 python
>>> 3/2
1.5

The Python 2.7 version performs integer division and returns 1, while the 3.4 version correctly returns 1.5.

2) staticpython 
4 MB single file python package!

3) socket 
 python with application files

Complete python package

4) conda official 
Official python installation:

https://github.com/ContinuumIO/docker-images

And here is the command to start miniconda and ipython together...

docker run -i -t -p 8888:8888 -v /tmp:/tmp continuumio/miniconda3 /bin/bash -c "/opt/conda/bin/conda install jupyter -y --quiet && cd /tmp/ && /opt/conda/bin/jupyter notebook --notebook-dir=/tmp --ip='*' --port=8888 --no-browser --allow-root"

5) miniconda customized
Here is an image with the pandas and sqldf modules.

# Start ipython container that is based on miniconda image in a screen session
docker run -p 7778:7778 -t shantanuo/miniconda_ipython_sqldf /bin/bash

# better start with environment variables
docker run -p 7778:7778 \
-e DEV_ACCESS_KEY=XXX -e DEV_SECRET_KEY=YYY \
-e PROD_READONLY_ACCESS_KEY=XXX -e PROD_READONLY_SECRET_KEY=YYY \
-e PROD_READWRITE_ACCESS_KEY=XXX -e PROD_READWRITE_SECRET_KEY=YYY \
-t shantanuo/miniconda_ipython_sqldf /bin/bash

# Log-in to newly created container
docker exec -it $(docker ps -l -q) /bin/bash

# Start ipython notebook on port 7778 that can be accessed from anywhere (*)
cd /home/
ipython notebook --ip=* --port=7778

# and use the environment keys in your code like this...
import boto3
import os
s3 = boto3.client('s3',aws_access_key_id=os.environ['DEV_ACCESS_KEY'], aws_secret_access_key=os.environ['DEV_SECRET_KEY'])

application containers

Here is an example from Amazon about how to build your own container with a PHP application.

https://github.com/awslabs/ecs-demo-php-simple-app

Utility containers

1) myscan 

Use OCR to read any image.
alias pancard='docker run -i --rm -v "$(pwd)":/home/ shantanuo/myscan python /scan.py "$@"'

wget https://github.com/shantanuo/docker/raw/master/myscan/pan_card.jpg

pancard 1crop.jpg

2) panamapapers 

container with sqlite database ready for query

3) newrelic
newrelic docker image that works like nagios

docker run -d \
--privileged=true --name nrsysmond \
--pid=host \
--net=host \
-v /sys:/sys \
-v /dev:/dev \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/log:/var/log:rw \
-e NRSYSMOND_license_key=186b2a8d6af29107609abca749296b46cda9fa69 \
-e NRSYSMOND_logfile=/var/log/nrsysmond.log \
newrelic/nrsysmond:latest

docker run -d \
  -e NEW_RELIC_LICENSE_KEY=186b2a8d6af29107609abca749296b46cda9fa69  \
  -e AGENT_HOST=52.1.174.168 \
  -e AGENT_USER=root \
  -e AGENT_PASSWD=XXXXX \
  newrelic/mysql-plugin

4) OCS inventory
docker run -d -p 80:80 -p 3301:3306 zanhsieh/docker-ocs-inventory-ng

http://52.86.68.170/ocsreports/
(username:admin, password:admin)

5) selenium
simulate a browser (Chrome) with selenium pre-installed

docker run -d -v /dev/shm:/dev/shm -p 4444:4444 selenium/standalone-chrome

The Hub url...
http://52.205.135.220:4444/wd/hub/


6) Deploying registry server
#Start your registry:
docker run -d -p 5000:5000 --restart=always --name registry registry:2

#You can now use it with docker. Tag any image to point to your registry:
docker tag image_name localhost:5000/image_name

#then push it to your registry:
docker push localhost:5000/image_name

# pull it back from your registry:
docker pull localhost:5000/image_name
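You can also check what the local registry holds by querying its HTTP API (a quick sanity check, not part of the original steps):

# list repositories stored in the registry
curl http://localhost:5000/v2/_catalog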

# push the registry container to hub
docker stop registry
docker commit registry
docker push registry

7) Docker User Interface
docker run -d -p 9000:9000 -v /var/run/docker.sock:/docker.sock --name dockerui abh1nav/dockerui:latest -e="/docker.sock"

Better user interface with shell access:

docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer

8) docker clean up container
docker run -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker --rm martin/docker-cleanup-volumes --dry-run

9) prometheus monitoring
check port 9090 and cadvisor on 8080

git clone https://github.com/vegasbrianc/prometheus.git
cd prometheus/

/usr/local/bin/docker-compose  up -d

10) open refine utility
# docker run --privileged -v /openrefine_projects/:/mnt/refine -p 35181:3333 -d psychemedia/ou-tm351-openrefine
Or use this:
docker run -p 3334:3333 -v /mnt/refine -d psychemedia/docker-openrefine

11) Freeswitch
docker run -d sous/freeswitch

12) mongodb
from tutum
docker run -d -p 27017:27017 -p 28017:28017 -e MONGODB_PASS="mypass" tutum/mongodb

official image with the wiredTiger engine
docker run -p 27017:27017 -v /tokudata:/data/db -d mongo --storageEngine wiredTiger

with tokumx compression engine
docker run -p 27017:27017 -v /tokudata:/data/db -d ankurcha/tokumx

# create alias for bsondump command

# alias bsondump='docker run -i --rm -v /tmp/:/tmp/ -w /tmp/ mongo bsondump "$@"'

# bsondump data_hits_20160423.bson > test.json


# alias mongorestore='docker run -i --rm -v /tmp/:/tmp/ -w /tmp/ mongo mongorestore "$@"'

# mongorestore --host `hostname -i` incoming_reports_testing.bson

# docker exec -it 12db5a259e58 mongo

# db.incoming_reports_testing.findOne()

# db.incoming_reports_testing.distinct("caller_id.number")


13) Consul Monitor
docker run -d --name=consul --net=host gliderlabs/consul-server -bootstrap -advertise=52.200.204.48

14) Registrator container
$ docker run -d \
--name=registrator \
--net=host \
--volume=/var/run/docker.sock:/tmp/docker.sock \
gliderlabs/registrator:latest \
consul://localhost:8500

15) wordpress

There is a custom image here...

docker run -p 8081:80 -d tutum/wordpress

Docker Hub also has official wordpress images.

docker run -d -p 3306:3306  -e MYSQL_ROOT_PASSWORD=india mysql:5.7

docker run -p 8083:80 --link gigantic_pike:mysql -e WORDPRESS_DB_NAME=wpdb -e WORDPRESS_DB_USER=root -e WORDPRESS_DB_PASSWORD=india -d wordpress

And we can also use docker compose to start and link the db and application containers.

vi docker-compose.yml

version: '2'

services:
   db:
     image: mysql:5.7
     volumes:
       - "./.data/db:/var/lib/mysql"
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: wordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     links:
       - db
     ports:
       - "8000:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_PASSWORD: wordpress

/usr/local/bin/docker-compose up -d

15a) Drupal

docker run --name cmsdb -p 3306:3306  -e MYSQL_ROOT_PASSWORD=india -d mysql:5.7

docker run --name mydrupal --link cmsdb:mysql -p 8080:80 -e MYSQL_USER=root -e MYSQL_PASSWORD=india -d drupal

Choose the advanced option and change the "localhost" value for Database host to the mysql container name.

16) Packetbeat container
docker run -d --restart=always --net=host shantanuo/packetbeat-agent

17) Django

docker run --name some-django-app -v "$PWD":/usr/src/app -w /usr/src/app -p 8000:8000  -e location=mumbai -d django bash -c "pip install -r requirements.txt && python manage.py runserver 0.0.0.0:8000"

18) rabbitmq
docker run -d --hostname oksoft -p 8080:15672 rabbitmq:3-management

19) ruby and passenger
(official docker image from phusion)

docker run -d -p 3000:3000 phusion/passenger-full

# login to your container:
docker exec -it container_id bash

# change to opt directory
cd /opt/
mkdir public

# a test file
curl google.com > public/index.html

# start passenger:
passenger start

20) update containers
# monitor the containers named "nginx" and "redis" for updates

docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  centurylink/watchtower nginx redis

21) sematext monitoring
Access your docker and other stats from https://apps.sematext.com

docker run --memory-swap=-1  -d --name sematext-agent --restart=always -e SPM_TOKEN=653a6dc9-1740-4a25-85d3-b37c9ad76308 -v /var/run/docker.sock:/var/run/docker.sock sematext/sematext-agent-docker

23) network emulator delay
Add a network delay of 3000 milliseconds to docker traffic.

# terminal 1
# docker run -it --rm --name tryme alpine sh -c     "apk add --update iproute2 && ping www.example.com"

# terminal 2
# docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock gaiaadm/pumba pumba netem --interface eth0 --duration 1m delay --time 3000 tryme

24) postgresql
docker run -p 5432:5432 --name dbt-postgresql  -m 1G -c 256 -v /mypgdata:/var/lib/postgresql/data  -e POSTGRES_PASSWORD=india -d postgres

25) jboss
docker run -p 8080:8080 -p 9990:9990 -m 1G -c 512 -e JBOSS_PASS="mypass" -d tutum/jboss

26) Jupyter notebook with google facets for pandas dataframe:
docker run -d -p 8889:8888 kozo2/facets start-notebook.sh --NotebookApp.token='' --NotebookApp.iopub_data_rate_limit=10000000

27) AWS container
# cat ~/.aws/config
[default]
aws_access_key_id = XXX
aws_secret_access_key = XXX
region = ap-south-1

alias myaws='docker run --rm -v ~/.aws:/root/.aws -v $(pwd):/aws  -it amazon/aws-cli'
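Once the alias is defined it behaves like a locally installed CLI, for example:

# list buckets using the credentials mounted from ~/.aws
myaws s3 ls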

28) metricbeat dashboard
# get the IP of elastic using command hostname -i and then install metric-beat dashboard using docker

docker run docker.elastic.co/beats/metricbeat:5.5.0 ./scripts/import_dashboards  -es http://172.31.73.228:9200

29) recall the original run statement of a given container
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock assaflavie/runlike [container_id]

30) Jupyter Notebook
docker run -d -p 8887:8888 -v /tmp:/tmp shantanuo/notebook

31) terraforming
# docker run -e AWS_ACCESS_KEY_ID=xxx  -e AWS_SECRET_ACCESS_KEY=xxx -e AWS_DEFAULT_REGION=ap-south-1 quay.io/dtan4/terraforming:latest terraforming s3

32) apache bench to check server load performance:

docker run -d -p 80 --name web -v /tmp/:/var/www/html russmckendrick/nginx-php

# docker run --link=silly_bassi russmckendrick/ab ab -k -n 10000 -c 16 http://134.195.194.88/

# docker run --link=web russmckendrick/ab ab -k -n 10000 -c 16 http://web/

33) airflow - monitor workflows and tasks

# docker run -p 8080:8080 -d puckel/docker-airflow webserver
_____

You may get an error like this when you restart the server:
Error response from daemon: oci runtime error: container with id exists:

The fix is to remove the stale state from the /run folder:

# rm -rf /run/runc/*
# rm -rf /run/container-id

34) gitlab installation

docker run -d  \
    --env GITLAB_OMNIBUS_CONFIG="external_url 'https://134.195.194.88/'; gitlab_rails['lfs_enabled'] = true; registry_external_url 'https://134.195.194.88:4567';" \
    --publish 443:443 --publish 80:80  --publish 4567:4567 --publish 10022:22 \
    --env 'GITLAB_SSH_PORT=10022' --env 'GITLAB_PORT=443' \
    --env 'GITLAB_HTTPS=true' --env 'SSL_SELF_SIGNED=true' \
    --volume /mysrv/gitlab/config:/etc/gitlab \
    --volume /mysrv/gitlab/logs:/var/log/gitlab \
    --volume /mysrv/gitlab/data:/var/opt/gitlab \
    --volume /srv/docker/gitlab/gitlab/certs:/etc/gitlab/ssl \
    gitlab/gitlab-ce:latest



 

Backup mysql using a container!

Suppose you have a MySQL container running named "mysql-server", started with this command:

$ docker run -d \
--name=mysql-server \
-v /storage/mysql-server/datadir:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=mypassword \
mysql

Then, to perform backup against the above container, the command would be:

$ docker run -it \
--link mysql-server:mysql \
-v /storage/mysql-server/datadir:/var/lib/mysql \
-v /storage/backups:/backups \
--rm=true \
severalnines/mysql-pxb \
sh -c 'exec /run_backup.sh'



June 29, 2016

 

Selenium and Chrome using docker

Here is the container that will simulate a browser (Chrome) with selenium pre-installed on any headless server.

docker run -d -v /dev/shm:/dev/shm -p 4444:4444 selenium/standalone-chrome

The Hub url on my  machine is...

http://52.205.135.220:4444/wd/hub/

This will start up a single node with Selenium and Chrome configured. We can see that this is working by opening a browser and pointing it to the hub URL on the exposed port. Create a session and type a URL, e.g. http://google.com, in the "Load Script" dialog box.
Click on the "Take Screenshot" button to see how the site will look in Chrome.



June 28, 2016

 

elasticsearch index listing

There is no direct way to access the internal index in elasticsearch, but we can get an idea about how the words are listed using a query. For example, in this case the word "call" is indexed in 2 documents while all other tokens are indexed only once.

DELETE /test_index

POST /test_index/doc/_bulk
{"index":{"_id":1}}
{"msg":"call:4189, sales"}
{"index":{"_id":2}}
{"msg":"call:4210, marketing"}

POST /test_index/_search?search_type=count
{
   "aggs": {
      "msg_terms": {
         "terms": {
            "field": "msg"
         }
      }
   }
}



June 25, 2016

 

elasticsearch import using stream2es

Here are 3 simple steps to download json data from S3 and import them to elasticsearch.

1) create a directory:

mkdir /my_node_apps
cd /my_node_apps

2) Download all compressed files from S3
# aws s3 cp --recursive s3://my_data/my_smpp/logs/node_apps/aug_2015/ .

3) Uncompress the files and import them in elasticsearch

## cat final.sh

#!/bin/bash
curl -O download.elasticsearch.org/stream2es/stream2es; chmod +x stream2es
indexname='smpaug2'
typename='smpaug2type'

for i in `find /my_node_apps/aug_2015/ -name "*.gz"`
do
gunzip $i
newname=`echo $i | sed 's/.gz$//'`
cat $newname | ./stream2es stdin --target "http://152.204.218.128:9200/$indexname/$typename/"
done



 

access a VPC instance from internet

If you have launched the instance in a VPC that has no internet gateway attached, then you need to attach one so that the server can be accessed from the internet.

Attaching an Internet Gateway
In the navigation pane, choose Internet Gateways, and then choose Create Internet Gateway.
In the Create Internet Gateway dialog box, you can optionally name your Internet gateway, and then choose Yes, Create.
Select the Internet gateway that you just created, and then choose Attach to VPC.
In the Attach to VPC dialog box, select your VPC from the list, and then choose Yes, Attach.

To create a custom route table
In the navigation pane, choose Route Tables, and then choose Create Route Table.
In the Create Route Table dialog box, optionally name your route table, then select your VPC, and then choose Yes, Create.
Select the custom route table that you just created. The details pane displays tabs for working with its routes, associations, and route propagation.
On the Routes tab, choose Edit, specify 0.0.0.0/0 in the Destination box, select the Internet gateway ID in the Target list, and then choose Save.
On the Subnet Associations tab, choose Edit, select the Associate check box for the subnet, and then choose Save.

Security groups and Elastic IP addresses should be configured from the same VPC page.
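For reference, the same steps can be scripted with the AWS CLI (a rough sketch; the vpc-/igw-/rtb-/subnet- IDs are placeholders for your own resources):

# create an internet gateway and attach it to the VPC
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx

# create a route table, add a default route via the gateway and associate it with the subnet
aws ec2 create-route-table --vpc-id vpc-xxxxxxxx
aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxx
aws ec2 associate-route-table --route-table-id rtb-xxxxxxxx --subnet-id subnet-xxxxxxxx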



 

aggregation queries

We are used to SQL group by queries something like this...

# select session_id, count(*) as cnt from table group by session_id order by cnt desc limit 1;

This can be easily rewritten as an elastic query as shown below:

POST /test_index/_bulk
{"index":{"_index":"test_index","_type":"doc","_id":1}}
{"session_id":1,"user_id":"jan"}
{"index":{"_index":"test_index","_type":"doc","_id":2}}
{"session_id":1,"user_id":"jan"}
{"index":{"_index":"test_index","_type":"doc","_id":3}}
{"session_id":1,"user_id":"jan"}
{"index":{"_index":"test_index","_type":"doc","_id":4}}
{"session_id":2,"user_id":"bob"}
{"index":{"_index":"test_index","_type":"doc","_id":5}}
{"session_id":2,"user_id":"bob"}

POST /test_index/_search?search_type=count
{
   "aggs": {
      "schedule_id": {
         "terms": {
            "field": "session_id",
            "order" : { "_term" : "desc" },
            "size": 1
         }
      }
   }
}

_____

# select ip, port, count(*) as cnt, sum(visits) from table group by ip,port

POST /test_index/_search?search_type=count
{
   "aggregations": {
      "ip": {
         "terms": {
            "field": "ip",
            "size": 10
         },
         "aggregations": {
            "port": {
               "terms": {
                  "field": "port",
                  "size": 0,
                  "order": {
                     "visits": "desc"
                  }
               },
               "aggregations": {
                  "visits": {
                     "sum": {
                        "field": "visits"
                     }
                  }
               }
            }
         }
      }
   }
}

# select ip, count(*) as cnt  from table where ip in ('146.233.189.126', '193.33.153.89') group by ip

POST /test_index/_search?search_type=count
{
   "aggregations": {
      "ip": {
         "terms": {
            "field": "ip",
            "size": 10,
            "include": [
               "146.233.189.126",
               "193.33.153.89"
            ]
         }
      }
   }
}

And here is sample data to test above queries:

POST /test_index/doc/_bulk
{"index":{"_id":1}}
{"ip":"146.233.189.126","port":80,"visits":10}
{"index":{"_id":2}}
{"ip":"146.233.189.126","port":8080,"visits":5}
{"index":{"_id":3}}
{"ip":"146.233.189.126","port":8080,"visits":15}
{"index":{"_id":4}}
{"ip":"200.221.51.224","port":80,"visits":10}
{"index":{"_id":5}}
{"ip":"193.33.153.89","port":80,"visits":10}
{"index":{"_id":6}}
{"ip":"193.33.153.89","port":80,"visits":20}
{"index":{"_id":7}}
{"ip":"193.33.153.89","port":80,"visits":30}

_____

Here is one more example of group by query.

DELETE /sport

PUT /sport

POST /sport/_bulk
{"index":{"_index":"sport","_type":"runner"}}
{"name":"Gary", "city":"New York","region":"A","sport":"Soccer"}
{"index":{"_index":"sport","_type":"runner"}}
{"name":"Bob", "city":"New York","region":"A","sport":"Tennis"}
{"index":{"_index":"sport","_type":"runner"}}
{"name":"Mike", "city":"Atlanta","region":"B","sport":"Soccer"}
{"index":{"_index":"sport","_type":"runner"}}
{"name":"Mike xyz", "city":"Atlanta","region":"B","sport":"Soccer"}

POST /sport/_search
{
   "size": 0,
   "aggregations": {
      "city_terms": {
         "terms": {
            "field": "city"
         },
         "aggregations": {
            "name_terms": {
               "terms": {
                  "field": "name"
               }
            }
         }
      }
   }
}

# select name, count(*) as cnt from table group by city, name



 

save lowercase data into elasticsearch

Most of the searches we do are case insensitive. Elasticsearch by default indexes data in lowercase. But there are times when we do not want to use any analyzer and still want to save the data in lowercase. The best option is to make sure we never insert data in capital letters in the first place. Here is an example.

# delete the index if exists
DELETE /test_index

# insert the records, table structure will be created automatically
POST /test_index/doc/_bulk
{"index":{"_id":1}}
{"cities":["new york","delhi"]}
{"index":{"_id":2}}
{"cities":["new york","Delhi","new Jersey"]}

# query to show how each word is indexed
POST /test_index/_search?search_type=count
{
   "aggs": {
      "city_terms": {
         "terms": {
            "field": "cities"
         }
      }
   }
}

This will return
delhi 2
new 2
york 2
jersey 1
_____

All the text is split on whitespace and lowercased before being saved against each document identifier. But we want "new york" 2 and "new jersey" 1; the single word "new" does not mean anything on its own in this case. Elasticsearch builds the table structure dynamically for you, and it has decided that the "cities" column should be of "string" type.

GET /test_index/_mapping

{
   "test_index": {
      "mappings": {
         "doc": {
            "properties": {
               "cities": {
                  "type": "string"
               }
            }
         }
      }
   }
}

If we decide not to analyze the cities field, then each value in the list will be indexed as a single token.

# delete index
DELETE /test_index

PUT /test_index
{
   "mappings": {
      "doc": {
         "properties": {
            "cities": {
               "type": "string",
               "index": "not_analyzed"
            }
         }
      }
   }
}

If you run the same insert and select queries again, then you will get
new york 2
Delhi 1
delhi 1
new Jersey 1

As you must have noticed, we got separate entries for "Delhi" and "delhi" because of capitalization. To avoid this, use all lowercase letters while inserting the data, or use the following query....

POST /test_index/_search?search_type=count
{
    "aggs": {
        "city_terms": {
            "terms": {
                "script": "doc.cities.values.collect{it.toLowerCase()}"
            }
}}}

You should now get the correct results:
delhi 2
new york 2
new jersey 1

Lowercase all the fields before inserting them into elastic.
1) If you are using Elasticsearch version 5.0 then you can use the "Lowercase Processor" to convert certain fields to lowercase while inserting the records into the database. I guess this is an important reason to upgrade. We cannot always rely on the data that we receive, and converting the data to lowercase using python will be difficult.
2) If you are using logstash, then use mutate filter...

filter {
  mutate {
    lowercase => [ "fieldname" ]
  }
}

3) Use "lower" function of python or any other scripting language.

You can use any of the 3 methods mentioned above, but saving the data into lowercase fields will save a lot of confusion later.



June 22, 2016

 

Deploying a registry server

#Start your registry:
docker run -d -p 5000:5000 --restart=always --name registry registry:2

#You can now use it with docker. Tag any image to point to your registry:
docker tag image_name localhost:5000/image_name

#then push it to your registry:
docker push localhost:5000/image_name

# pull it back from your registry:
docker pull localhost:5000/image_name

# push the registry container to hub
docker stop registry
docker commit registry
docker push registry



June 20, 2016

 

mapping properties that can be modified

1) dynamic changes to the table structure can be disabled
2) do not index all the columns (_all disabled)
3) enable the default timestamp for each inserted record
4) index the title column twice: first with the standard analyzer, and then as title.raw using the snowball analyzer

{
   "page":{
      "dynamic":"false",
      "properties":{
         "_all":{
            "enabled":"false"
         },
         "_timestamp":{
            "enabled":True
         },
         "timestamp":{
            "type":"date",
            # "format":"strict_date_optional_time||epoch_millis"          
            #  "format":"strict_date_optional_time"
         },
         "title":{
            "type":"string",
            "fields":{
               "raw":{
                  "type":"string",
                  "omit_norms":True,
                  "index_options":"docs",
                  "analyzer":"snowball"
               }
            }
         }
      }
   }
}



 

Using Analyzers in Elastic-search

Elastic indexes strings differently depending upon which analyzer you use. The default standard analyzer will index the word "testing" after passing it through the lowercase filter, while the snowball or english analyzer will stem the word and save the root form "test" in the index. Here is how a string is broken down to be saved.

# curl -XPOST 'http://52.7.70.123:9200/_analyze?analyzer=standard&pretty' -d "THIS is Testing ! & digits 123-456"

whitespace: THIS is Testing ! & digits 123-456
standard: this is testing digits 123 456
simple: this is testing digits
pattern (Regular Expression): this is testing digits
snowball: test digit 123 456
stop: testing digits
keyword:  "THIS is Testing ! & digits 123-456",
english: test digit 123 456

In the following example we will build a custom analyzer called “custom_lowercase_stemmed”. It will use the default tokenizer called “standard” and the default filter “lowercase”, along with a custom filter called “custom_english_stemmer”. This newly built analyzer can be used in the properties section of the mappings of the given type (for e.g. “test”). The column “product_name” will use the custom analyzer to save the strings.

curl -XPOST 'http://52.7.70.123:9200/tryoindex/' -d '
{
  "settings": {
    "analysis": {
      "filter": {
        "custom_english_stemmer": {
          "type": "stemmer",
          "name": "english"
        }
      },
      "analyzer": {
        "custom_lowercase_stemmed": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "custom_english_stemmer"
          ]
        }
      }
    }
  },
  "mappings": {
    "test": {
      "properties": {
        "product_name": {
          "type": "string",
          "analyzer": "custom_lowercase_stemmed"
        }
      }
    }
  }
}'


In the following example we will be building 2 custom analyzers, “nGram_analyzer” and “whitespace_analyzer”. Both analyzers use default filters like “lowercase” and “asciifolding”, but one of them also uses the custom filter called “nGram_filter”. This additionally indexes word fragments from 2 to 20 characters that can be used in auto-suggest queries.

curl -XPUT "http://localhost:9200/blurays" -d'
{
   "settings": {
      "analysis": {
         "filter": {
            "nGram_filter": {
               "type": "nGram",
               "min_gram": 2,
               "max_gram": 20,
               "token_chars": [
                  "letter",
                  "digit",
                  "punctuation",
                  "symbol"
               ]
            }
         },
         "analyzer": {
            "nGram_analyzer": {
               "type": "custom",
               "tokenizer": "whitespace",
               "filter": [
                  "lowercase",
                  "asciifolding",
                  "nGram_filter"
               ]
            },
            "whitespace_analyzer": {
               "type": "custom",
               "tokenizer": "whitespace",
               "filter": [
                  "lowercase",
                  "asciifolding"
               ]
            }
         }
      }
   },
   "mappings": {
      ...
   }
}'



 

elasticsearch with python

# import module and connect to elastic server

import elasticsearch
from elasticsearch import helpers
es = elasticsearch.Elasticsearch('http://52.7.70.12:9200')

# list all the indexes
indices=es.indices.get_aliases().keys()
sorted(indices)

# save match all query as python variable
myquery={"query": {"match_all": {}}}

# execute the query using body parameter and return total number of records
# select count(*) from table
res = es.search(index="sbi", body=myquery)
print("Got %d Hits:" % res['hits']['total'])

# The same as above, variable myquery replaced by query string
res = es.search(index="sbi", body={"query": {"match_all": {}}})
print("Got %d Hits:" % res['hits']['total'])

# select count(*) from table where req_uuid = '940...33fa'
res = es.search(index="sbi", request_timeout=60, body={"query": {"match":{"req_uuid":"940b60ab-a51a-429"}}})
mylist=res['hits']['hits']
print("Got %d Hits:" % res['hits']['total'])

# show the column 'message' returned from above query
for i in range(len(res['hits']['hits'])):
    print res['hits']['hits'][i]['_source']['message']

#export elastic query results to pandas dataframe:
import pandas as pd
df=pd.DataFrame(mylist)
_____

# index 1 record
# insert into table values ('Outbound Call',

es.index(index='calls', doc_type='customer', request_timeout=60,   body={
 "caller_id_name" : "Outbound", "caller_id_number" : 1234567890, "destination_number" : 7500,
        "context" : "default", "start_stamp" : "2015-05-03 23:53:55", "answer_stamp" : "2015-05-03 23:54:05",  "end_stamp" : "2015-05-03 23:54:25", "duration" : 30
})

# Add datetime column
from datetime import datetime
es.index(index="my-index", doc_type="test-type", id=42, body={"any": "data", "timestamp": datetime.now()})

# open json file and read all the lines into a list for bulk import

mylist=[]
with open('sbi_call_passenger.log.2015-04-17') as fhandle:
    for line in fhandle:
        action = {
        "_index": "tickets-index1",
        "_type": "tickets",
        "_source": line.rstrip()
        }

        mylist.append(action)

# Import the list into elastic
helpers.bulk(es, mylist)


# import data from pandas dataframe into elastic

import json
tmp = df.to_json(orient = "records")
df_json= json.loads(tmp)
for doc in df_json:
    es.index(index="myindex", doc_type="testtype",body=doc)

# if you have a CSV file then import it to pandas dataframe first
# and then export the dataframe to elastic search as shown above



June 19, 2016

 

Pandas tips

pd.options.display.float_format = '{:,.3f}'.format # Limit output to 3 decimal places.

df[df.water_year.str.startswith('199')] # Filtering by string methods

df.sort_index(ascending=False).head(5) # inplace=True to apply the sorting in place

df = df.reset_index('water_year') # Returning an index to data


If your rows have numerical indices, you can reference them using iloc.

# Getting a row via a numerical index
df.iloc[30]

iloc will only work on numerical indices. It will return a series of that row. Each column of that row will be an element in the returned series.


# Applying a function to a column
def base_year(year):
    base_year = year[:4]
    base_year= pd.to_datetime(base_year).year
    return base_year

df['year'] = df.water_year.apply(base_year)
df.head(5)



June 18, 2016

 

update pandas dataframe


Here is a good pandas example from...

http://pbpython.com/excel-filter-edit.html

import pandas as pd
df = pd.read_excel("https://github.com/chris1610/pbpython/blob/master/data/sample-sales-reps.xlsx?raw=true")
df.head()

df["commission"] = .02

# use loc attribute instead of slicing while updating

df.loc[df["category"] == "Shirt", ["commission"]] = .025

df.loc[(df["category"] == "Belt") & (df["quantity"] >= 10), ["commission"]] = .04


df["bonus"] = 0
df.loc[(df["category"] == "Shoes") & (df["ext price"] >= 1000 ), ["bonus", "commission"]] = 250, 0.045

df.ix[3:7]

df["comp"] = df["commission"] * df["ext price"] + df["bonus"]

df.groupby(["sales rep"])["comp"].sum().round(2)



June 15, 2016

 

Using watcher with elasticsearch

Elasticsearch can watch the documents being added and alert you whenever there is unusual activity. Here is an example of how to configure a watch that does the following:

1) Watch the log-events index.
2) Within that index search for any type where status field has a word "error"
3) If the error documents exceed the limit of 5 counts within the interval of 5 minutes, then send an email to specified user.

# install watcher plugin

bin/plugin install elasticsearch/license/latest
bin/plugin install elasticsearch/watcher/latest


# add email section to elasticsearch.yml file

watcher.actions.email.service.account:
    gmail_account:
        profile: gmail
        smtp:
            auth: true
            starttls.enable: true
            host: smtp.gmail.com
            port: 587
            user: shantanu.XXX
            password: XXX

# re-start elasticsearch and add a watcher document

curl -XPUT 'http://52.203.237.120:9200/_watcher/watch/log_event_watch' -d '{
  "metadata" : {
    "color" : "red"
  },
  "trigger" : {
    "schedule" : {
      "interval" : "5m"
    }
  },
  "input" : {
    "search" : {
      "request" : {
        "indices" : "log-events",
        "body" : {
          "size" : 0,
          "query" : { "match" : { "status" : "error" } }
        }
      }
    }
  },
  "condition" : {
    "script" : "return ctx.payload.hits.total > 5"
  },
  "actions" : {
    "email_administrator" : {
      "throttle_period": "15m",
      "email" : {
        "to" : "shantanu.oak@gmail.com",
        "subject" : "Encountered {{ctx.payload.hits.total}} errors",
        "body" : "Too many error in the system, see attached data",
        "attachments" : {
          "attached_data" : {
            "data" : {
              "format" : "json"
            }
          }
        },
        "priority" : "high"
      }

    }
  }
}'



June 04, 2016

 

elasticsearch docker image with scripting support

The default elasticsearch image does not support scripting. So I have created a new image that anyone can download from...

Here are the steps used to create the new image.

# cat elasticsearch.yml
script.inline: true
script.indexed: true
network.host: 0.0.0.0

# cat Dockerfile
from elasticsearch
copy elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml

# docker build -t shantanuo/elasticsearch-script .

# docker push shantanuo/elasticsearch-script

I can now run the container based on the image I just created.

# docker run -d -p 9200:9200 -p 9300:9300 shantanuo/elasticsearch-script

Find the name of the container and link it to kibana like this...

# docker run -d -p 5603:5601 --link  stoic_goldstine:elasticsearch -e ELASTICSEARCH_URL=http://elasticsearch:9200 kibana

# or use this to start es with kibana
docker run -d -p 9200:9200 -p 9300:9300 -p 5603:5601 -e ES_HEAP_SIZE=1g --name myelastic shantanuo/elastic

_____

This image is based on the official elasticsearch image. If I need to build everything from scratch, then I can use the official Ubuntu image as shown here...

https://hub.docker.com/r/shantanuo/elasticsearch/



June 03, 2016

 

Save your packets to elasticsearch

Here are 2 docker commands to start elasticsearch with kibana.

docker run -d -p 9200:9200 -p 9300:9300 --name elasticsearch-pb elasticsearch
docker run -d -p 5601:5601 --name kibana-pb --link elasticsearch-pb:elasticsearch -e ELASTICSEARCH_URL=http://elasticsearch:9200 kibana

You can now push logs to elasticsearch using logstash, or use a Beat.
Install the packetbeat package on the server from which you want to push the logs...

deb:

sudo apt-get install libpcap0.8
curl -L -O https://download.elastic.co/beats/packetbeat/packetbeat_1.2.3_amd64.deb
sudo dpkg -i packetbeat_1.2.3_amd64.deb

rpm:

sudo yum install libpcap
curl -L -O https://download.elastic.co/beats/packetbeat/packetbeat-1.2.3-x86_64.rpm
sudo rpm -vi packetbeat-1.2.3-x86_64.rpm

### Config yml file

vi /etc/packetbeat/packetbeat.yml

# Multiple outputs may be used.
output:
  ### Elasticsearch as output
  elasticsearch:
    # hosts: ["localhost:9200"]
    # hosts: ["ec2-54-65-142-180.compute-1.amazonaws.com"]
    hosts: ["search-es-demo-jyhk2or3v3sesrgt6dgn5u7qm.us-east-1.es.amazonaws.com:443"]
    protocol: "https"

    # A template is used to set the mapping in Elasticsearch
    template:
      # Path to template file
      path: "packetbeat.template.json"

### start packetbeat

/etc/init.d/packetbeat start
_____

The packetbeat configuration file without comments looks something like this...

# cat /etc/packetbeat/packetbeat.yml  | grep -v '#' | grep -v '^$'
interfaces:
  device: any
protocols:
  dns:
    ports: [53]
    include_authorities: true
    include_additionals: true
  http:
    ports: [80, 8080, 8000, 5000, 8002]
  memcache:
    ports: [11211]
  mysql:
    ports: [3306]
  pgsql:
    ports: [5432]
  redis:
    ports: [6379]
  thrift:
    ports: [9090]
  mongodb:
    ports: [27017]
output:
  elasticsearch:
    hosts: ["search-es-demo-jyt2or3v3sesrgt6dgn5u7qm.us-east-1.es.amazonaws.com:443"]
    protocol: "https"
    template:
      path: "packetbeat.template.json"
shipper:
logging:
  files:


