Here is a list of some of the containers that I use frequently. But first, here is how to install docker.
Make sure that you are the root user and that there is enough disk space available.
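Both checks can be scripted; a minimal sketch:

```shell
# Confirm we are running as root and show the available disk space
if [ "$(id -u)" -ne 0 ]; then
  echo "warning: not running as root"
fi
df -h
```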
# install docker on Amazon Linux
yum install -y docker
# You can install docker on CentOS 7 if you have the 64-bit version
cat /etc/redhat-release
sudo yum remove docker docker-common container-selinux docker-selinux docker-engine
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast
yum install docker-ce
service docker start
_____
# configure docker storage options and start docker
vi /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS="--storage-opt dm.basesize=200G"
/etc/init.d/docker start
# Install docker-compose
curl -L https://github.com/docker/compose/releases/download/1.8.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
# Install aliases
curl -sf -L https://raw.githubusercontent.com/shantanuo/docker/master/alias.sh | sh
Editing the storage options above allows bigger containers (up to 200 GB) to be loaded. Otherwise the container may run out of space once you start saving data into it.
Node.js
In order to run a node application within the docker environment, you can use the official node image that can be found here...
https://hub.docker.com/r/library/node/
Change to the directory where you have already written your code and add a Dockerfile with these 2 lines...
$ vi Dockerfile
FROM node:4-onbuild
EXPOSE 8888
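The node:4-onbuild image runs npm install and npm start against whatever sits next to the Dockerfile, so it expects a package.json with a start script. A minimal hypothetical app (file names, contents and the /tmp path are assumptions, not part of the original setup):

```shell
# Minimal app skeleton that the onbuild image will npm-install and npm-start
mkdir -p /tmp/my-nodejs-app && cd /tmp/my-nodejs-app
cat > package.json <<'EOF'
{
  "name": "my-nodejs-app",
  "version": "1.0.0",
  "scripts": { "start": "node server.js" }
}
EOF
cat > server.js <<'EOF'
// Tiny HTTP server listening on the EXPOSEd port 8888
require('http').createServer(function (req, res) {
  res.end('hello from docker\n');
}).listen(8888);
EOF
```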
Once your script is ready, you need to build an image...
$ docker build -t shantanuo/my-nodejs-app .
And run the node application...
$ docker run -p 8888:8888 -d shantanuo/my-nodejs-app
_____
You can push this image to docker hub as a private or public repository.
docker login
username:shantanuo
password:XXXX
docker push shantanuo/my-nodejs-app
MySQL
official mysql repository
mkdir -p /storage/test-mysql/datadir
docker run -d --name test-mysql -p 3306:3306 -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -v /storage/test-mysql/datadir:/var/lib/mysql -v /my/custom:/etc/mysql/conf.d mysql:5.6
(size: 100MB)
Just by changing the name to test-mysql2 we can set up another mysql container. Instead of the official mysql image, we can use tutum/mysql, which has a customized installation.
This repository fixes a bug in the official mysql image:
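The /my/custom mount maps onto /etc/mysql/conf.d inside the container, so any .cnf file placed there is picked up at startup. A hypothetical drop-in override (the directory and the values are examples only; mount whatever directory you actually use):

```shell
# Drop-in config that the mysql image reads from /etc/mysql/conf.d
mkdir -p /tmp/my-custom
cat > /tmp/my-custom/my.cnf <<'EOF'
[mysqld]
max_connections = 500
slow_query_log  = 1
EOF
```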
https://github.com/shantanuo/mysql
Percona with tokuDB
docker run -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -e INIT_TOKUDB=1 -d percona/percona-server
Log in to the container and run this command to enable TokuDB if the "show engines" command does not list it:
ps_tokudb_admin --enable
Backup
# backup of a mysql instance running in another container named mysql-server, with data hosted in the folder /storage
docker run -it \
--link mysql-server:mysql \
-v /storage/mysql-server/datadir:/var/lib/mysql \
-v /storage/backups:/backups \
--rm=true \
severalnines/mysql-pxb \
sh -c 'exec /run_backup.sh'
# backup of mysql running on the host machine
docker run -it \
-v /var/lib/mysql:/var/lib/mysql \
-v /storage/backups:/backups \
--rm=true \
severalnines/mysql-pxb \
sh -c 'exec innobackupex --host="$hostname" --port="3306" --user=root --password="$rootpassword" /backups'
Utilities
# cluster control container:
docker run -d --name clustercontrol -p 5000:80 severalnines/clustercontrol
elastic
1) official container
# command to install both elasticsearch and kibana (unofficial image)
docker run -d -p 9200:9200 -p 5601:5601 nshou/elasticsearch-kibana
Or use the following with volume attached:
docker run -d -p 5601:5601 -p 5000:5000 -p 9200:9200 --ulimit nofile=65536:65536 -v /mydata:/var/lib/elasticsearch kenwdelong/elk-docker:latest
_____
Here are 2 commands to start Elasticsearch with Kibana using the docker official images.
# cat /tmp/elasticsearch.yml
script.inline: on
script.indexed: on
network.host: 0.0.0.0
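The same config file can be created non-interactively in one step (the run command below mounts /tmp/ as the config directory, so the file must live at /tmp/elasticsearch.yml):

```shell
# Write the elasticsearch config shown above in one step
cat > /tmp/elasticsearch.yml <<'EOF'
script.inline: on
script.indexed: on
network.host: 0.0.0.0
EOF
```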
# docker run -d -v /tmp/:/usr/share/elasticsearch/config -p 9200:9200 -p 9300:9300 -e ES_HEAP_SIZE=1g elasticsearch:2
Find the name of the elasticsearch container and link it to kibana by changing elastic_name_here below like this...
# docker run -d -p 5601:5601 --link elastic_name_here:elasticsearch -e ELASTICSEARCH_URL=http://elasticsearch:9200 kibana
_____
Log in to the newly created elastic container and install plug-ins:
docker exec -it container_id bash
# pwd
/usr/share/elasticsearch
# bin/plugin install analysis-phonetic
Once the plugin is installed, restart the container so that the elastic service is restarted...
# docker restart c44004a47f46
_____
2) custom container
elastic - customize elasticsearch installation and maintenance
3) elastic with kibana version 5
docker run --name myelastic -v /tmp/:/usr/share/elasticsearch/config -p 9200:9200 -p 9300:9300 -d elasticsearch:5.0
docker run -d -p 5601:5601 --link myelastic:elasticsearch -e ELASTICSEARCH_URL=http://elasticsearch:9200 kibana:5.0
_____
# docker run -p 9200:9200 -p 9300:9300 -e ES_JAVA_OPTS="-Xms512m -Xmx512m" -d elasticsearch:5
You may get this error in your logs:
Exception in thread "main" java.lang.RuntimeException: bootstrap checks failed max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
You'll need to raise vm.max_map_count on your docker host. For reference:
sysctl -w vm.max_map_count=262144
https://www.elastic.co/guide/en/elasticsearch/guide/current/_file_descriptors_and_mmap.html
adminer
adminer is a web interface that can connect to almost any database, such as postgresql, mysql or oracle.
Instead of publishing adminer on an arbitrary port, use --net=host to use the default port 80 of the host machine. It will then also reach the default mysql port 3306 of the mysql container started above.
If you do not want to add one more parameter (i.e. --net), use the default "bridge" network. In that case you will need the following command to find the IP address of the docker host.
# ip addr show docker0
This command will show the docker host IP address on the docker0 network interface.
redshift connection:
docker run -i -t --rm -p 80:80 --name adminer shantanuo/adminer
The above command logs you into the docker container. You then need to start the apache service within the container...
sudo service apache2 start
Or use any of the methods mentioned below:
download
wget http://www.adminer.org/latest.php -O /tmp/index.php
## connect to redshift
docker run -it postgres psql -h merged2017.xxx.us-east-1.redshift.amazonaws.com -p 5439 -U root vdbname
connect to any database like mysql, pgsql, redshift, oracle or mongoDB
postgresql (redshift) or mysql
docker run -it -p 8060:80 -v /tmp/:/var/www/html/ shantanuo/phpadminer
mongoDB
docker run -d -p 8070:80 -v /tmp:/var/www/html ishiidaichi/apache-php-mongo-phalcon
oracle
docker run -d -p 8080:80 -v /tmp/:/app lukaszkinder/apache-php-oci8-pdo_oci
mssql
# not sure how to support this one
python
compact packages
1) pyrun
compact python versions 2 and 3, without any external libraries, for basic testing like this...
Here is an easy way to convince people to upgrade to python 3.0+
# docker run -it --rm shantanuo/pyrun:2.7 python
>>> 3/2
1
# docker run -it --rm shantanuo/pyrun:3.4 python
>>> 3/2
1.5
The Python 2.7 image performs integer (floor) division and returns 1, while the 3.4 image correctly returns 1.5.
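The same difference can be checked without docker if python3 is on the host; in python 3, `//` recovers the old truncating behaviour:

```shell
# Python 3: / is true division, // is floor division
python3 -c "print(3 / 2)"    # 1.5
python3 -c "print(3 // 2)"   # 1
```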
2) staticpython
A 4 MB single-file python package!
3) socket
python with application files
Complete python package
4) conda official
Official python installation:
https://github.com/ContinuumIO/docker-images
And here is the command to start miniconda and ipython together...
docker run -i -t -p 8888:8888 -v /tmp:/tmp continuumio/miniconda3 /bin/bash -c "/opt/conda/bin/conda install jupyter -y --quiet && cd /tmp/ && /opt/conda/bin/jupyter notebook --notebook-dir=/tmp --ip='*' --port=8888 --no-browser --allow-root"
5) miniconda customized
Here is an image with the pandas and sqldf modules installed.
# Start ipython container that is based on miniconda image in a screen session
docker run -p 7778:7778 -t shantanuo/miniconda_ipython_sqldf /bin/bash
# better start with environment variables
docker run -p 7778:7778 \
-e DEV_ACCESS_KEY=XXX -e DEV_SECRET_KEY=YYY \
-e PROD_READONLY_ACCESS_KEY=XXX -e PROD_READONLY_SECRET_KEY=YYY \
-e PROD_READWRITE_ACCESS_KEY=XXX -e PROD_READWRITE_SECRET_KEY=YYY \
-t shantanuo/miniconda_ipython_sqldf /bin/bash
# Log-in to newly created container
docker exec -it $(docker ps -l -q) /bin/bash
# Start ipython notebook on port 7778 that can be accessed from anywhere (*)
cd /home/
ipython notebook --ip=* --port=7778
# and use the environment keys in your code like this...
import boto3
import os
s3 = boto3.client('s3',aws_access_key_id=os.environ['DEV_ACCESS_KEY'], aws_secret_access_key=os.environ['DEV_SECRET_KEY'])
application containers
Here is an example from amazon showing how to build your own container with a php application.
https://github.com/awslabs/ecs-demo-php-simple-app
Utility containers
1) myscan
Use OCR to read any image.
alias pancard='docker run -i --rm -v "$(pwd)":/home/ shantanuo/myscan python /scan.py "$@"'
wget https://github.com/shantanuo/docker/raw/master/myscan/pan_card.jpg
pancard pan_card.jpg
2) panamapapers
a container with a sqlite database ready for querying
3) newrelic
newrelic docker image that works like nagios
docker run -d \
--privileged=true --name nrsysmond \
--pid=host \
--net=host \
-v /sys:/sys \
-v /dev:/dev \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/log:/var/log:rw \
-e NRSYSMOND_license_key=186b2a8d6af29107609abca749296b46cda9fa69 \
-e NRSYSMOND_logfile=/var/log/nrsysmond.log \
newrelic/nrsysmond:latest
docker run -d \
-e NEW_RELIC_LICENSE_KEY=186b2a8d6af29107609abca749296b46cda9fa69 \
-e AGENT_HOST=52.1.174.168 \
-e AGENT_USER=root \
-e AGENT_PASSWD=XXXXX \
newrelic/mysql-plugin
4) OCS inventory
docker run -d -p 80:80 -p 3301:3306 zanhsieh/docker-ocs-inventory-ng
http://52.86.68.170/ocsreports/
(username:admin, password:admin)
5) selenium
simulate a browser (chrome) with selenium pre-installed
docker run -d -v /dev/shm:/dev/shm -p 4444:4444 selenium/standalone-chrome
The Hub url...
http://52.205.135.220:4444/wd/hub/
6) Deploying registry server
#Start your registry:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
#You can now use it with docker. Tag any image to point to your registry:
docker tag image_name localhost:5000/image_name
#then push it to your registry:
docker push localhost:5000/image_name
# pull it back from your registry:
docker pull localhost:5000/image_name
# push the registry container to hub; replace your_hub_user with your docker hub username
docker stop registry
docker commit registry your_hub_user/registry
docker push your_hub_user/registry
7) Docker User Interface
docker run -d -p 9000:9000 -v /var/run/docker.sock:/docker.sock --name dockerui abh1nav/dockerui:latest -e="/docker.sock"
Better user interface with shell access:
docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer
8) docker clean up container
docker run -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker --rm martin/docker-cleanup-volumes --dry-run
9) prometheus monitoring
check port 9090 and cadvisor on 8080
git clone https://github.com/vegasbrianc/prometheus.git
cd prometheus/
/usr/local/bin/docker-compose up -d
10) open refine utility
# docker run --privileged -v /openrefine_projects/:/mnt/refine -p 35181:3333 -d psychemedia/ou-tm351-openrefine
Or use this:
docker run -p 3334:3333 -v /mnt/refine -d psychemedia/docker-openrefine
11) Freeswitch
docker run -d sous/freeswitch
12) mongodb
from tutum
docker run -d -p 27017:27017 -p 28017:28017 -e MONGODB_PASS="mypass" tutum/mongodb
official image with the wiredTiger engine
docker run -p 27017:27017 -v /tokudata:/data/db -d mongo --storageEngine wiredTiger
with tokumx compression engine
docker run -p 27017:27017 -v /tokudata:/data/db -d ankurcha/tokumx
# create alias for bsondump command
# alias bsondump='docker run -i --rm -v /tmp/:/tmp/ -w /tmp/ mongo bsondump "$@"'
# bsondump data_hits_20160423.bson > test.json
# alias mongorestore='docker run -i --rm -v /tmp/:/tmp/ -w /tmp/ mongo mongorestore "$@"'
# mongorestore --host `hostname -i` incoming_reports_testing.bson
# docker exec -it 12db5a259e58 mongo
# db.incoming_reports_testing.findOne()
# db.incoming_reports_testing.distinct("caller_id.number")
13) Consul Monitor
docker run -d --name=consul --net=host gliderlabs/consul-server -bootstrap -advertise=52.200.204.48
14) Registrator container
$ docker run -d \
--name=registrator \
--net=host \
--volume=/var/run/docker.sock:/tmp/docker.sock \
gliderlabs/registrator:latest \
consul://localhost:8500
15) wordpress
There is a custom image here...
docker run -p 8081:80 -d tutum/wordpress
docker also has official wordpress images. Start mysql first, then link the wordpress container to it (gigantic_pike below is the auto-generated name of the mysql container):
docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=india mysql:5.7
docker run -p 8083:80 --link gigantic_pike:mysql -e WORDPRESS_DB_NAME=wpdb -e WORDPRESS_DB_USER=root -e WORDPRESS_DB_PASSWORD=india -d wordpress
And we can also use docker compose to start and link the db and application containers.
vi docker-compose.yml
version: '2'
services:
  db:
    image: mysql:5.7
    volumes:
      - "./.data/db:/var/lib/mysql"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    links:
      - db
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: wordpress
/usr/local/bin/docker-compose up -d
15a) Drupal
docker run --name cmsdb -p 3306:3306 -e MYSQL_ROOT_PASSWORD=india -d mysql:5.7
docker run --name mydrupal --link cmsdb:mysql -p 8080:80 -e MYSQL_USER=root -e MYSQL_PASSWORD=india -d drupal
Choose the advanced option and change the "localhost" value of Database host to the mysql container name.
16) Packetbeat container
docker run -d --restart=always --net=host shantanuo/packetbeat-agent
17) Django
docker run --name some-django-app -v "$PWD":/usr/src/app -w /usr/src/app -p 8000:8000 -e location=mumbai -d django bash -c "pip install -r requirements.txt && python manage.py runserver 0.0.0.0:8000"
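The run command above pip-installs from a requirements.txt in the mounted directory. A hypothetical minimal file (the directory and the pinned version are examples only):

```shell
# Minimal requirements.txt that the container's pip install will consume
mkdir -p /tmp/some-django-app
cat > /tmp/some-django-app/requirements.txt <<'EOF'
Django==1.11.29
EOF
```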
18) rabbitmq
docker run -d --hostname oksoft -p 8080:15672 rabbitmq:3-management
19) ruby and passenger
(official docker image from phusion)
docker run -d -p 3000:3000 phusion/passenger-full
# login to your container:
docker exec -it container_id bash
# change to opt directory
cd /opt/
mkdir public
# a test file
curl google.com > public/index.html
# start passenger:
passenger start
20) update containers
# monitor the containers named "nginx" and "redis" for updates
docker run -d \
--name watchtower \
-v /var/run/docker.sock:/var/run/docker.sock \
centurylink/watchtower nginx redis
21) sematext monitoring
Access your docker and other stats from https://apps.sematext.com
docker run --memory-swap=-1 -d --name sematext-agent --restart=always -e SPM_TOKEN=653a6dc9-1740-4a25-85d3-b37c9ad76308 -v /var/run/docker.sock:/var/run/docker.sock sematext/sematext-agent-docker
23) network emulator delay
Add a network delay of 3000 milliseconds to docker traffic.
# terminal 1
# docker run -it --rm --name tryme alpine sh -c "apk add --update iproute2 && ping www.example.com"
# terminal 2
# docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock gaiaadm/pumba pumba netem --interface eth0 --duration 1m delay --time 3000 tryme
24) postgresql
docker run -p 5432:5432 --name dbt-postgresql -m 1G -c 256 -v /mypgdata:/var/lib/postgresql/data -e POSTGRES_PASSWORD=india -d postgres
25) jboss
docker run -p 8080:8080 -p 9990:9990 -m 1G -c 512 -e JBOSS_PASS="mypass" -d tutum/jboss
26) Jupyter notebook with google facets for pandas dataframe:
docker run -d -p 8889:8888 kozo2/facets start-notebook.sh --NotebookApp.token='' --NotebookApp.iopub_data_rate_limit=10000000
27) AWS container
# cat ~/.aws/config
[default]
aws_access_key_id = XXX
aws_secret_access_key = XXX
region = ap-south-1
alias myaws='docker run --rm -v ~/.aws:/root/.aws -v $(pwd):/aws -it amazon/aws-cli'
28) metricbeat dashboard
# get the IP of elastic using command hostname -i and then install metric-beat dashboard using docker
docker run docker.elastic.co/beats/metricbeat:5.5.0 ./scripts/import_dashboards -es http://172.31.73.228:9200
29) recall the original run statement of a given container
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock assaflavie/runlike [container_id]
30) Jupyter Notebook
docker run -d -p 8887:8888 -v /tmp:/tmp shantanuo/notebook
31) terraforming
# docker run -e AWS_ACCESS_KEY_ID=xxx -e AWS_SECRET_ACCESS_KEY=xxx -e AWS_DEFAULT_REGION=ap-south-1 quay.io/dtan4/terraforming:latest terraforming s3
32) apache bench to check server load performance:
docker run -d -p 80 --name web -v /tmp/:/var/www/html russmckendrick/nginx-php
# docker run --link=silly_bassi russmckendrick/ab ab -k -n 10000 -c 16 http://134.195.194.88/
# docker run --link=web russmckendrick/ab ab -k -n 10000 -c 16 http://web/
33) airflow - monitor workflows and tasks
# docker run -p 8080:8080 -d puckel/docker-airflow webserver
_____
You may get an error like this when you restart the server:
Error response from daemon: oci runtime error: container with id exists:
The fix is to remove these entries from the /run folder:
# rm -rf /run/runc/*
# rm -rf /run/container-id
34) gitlab installation
docker run -d \
--env GITLAB_OMNIBUS_CONFIG="external_url 'https://134.195.194.88/'; gitlab_rails['lfs_enabled'] = true; registry_external_url 'https://134.195.194.88:4567';" \
--publish 443:443 --publish 80:80 --publish 4567:4567 --publish 10022:22 \
--env 'GITLAB_SSH_PORT=10022' --env 'GITLAB_PORT=443' \
--env 'GITLAB_HTTPS=true' --env 'SSL_SELF_SIGNED=true' \
--volume /mysrv/gitlab/config:/etc/gitlab \
--volume /mysrv/gitlab/logs:/var/log/gitlab \
--volume /mysrv/gitlab/data:/var/opt/gitlab \
--volume /srv/docker/gitlab/gitlab/certs:/etc/gitlab/ssl \
gitlab/gitlab-ce:latest