Shantanu's Blog

Database Consultant

September 28, 2024

 

Firefox and LibreOffice in your browser

Kasm VNC is a modern open source VNC server.

Quickly connect to your Linux server's desktop from any web browser.
No client software install required.

1) Firefox using VNC

docker run -d \
--name=firefox \
-e PUID=1000 \
-e PGID=1000 \
-e TZ=Etc/UTC \
-p 3000:3000 \
-p 3001:3001 \
-v /path/to/config2:/config \
--shm-size="1gb" \
--restart unless-stopped \
lscr.io/linuxserver/firefox:latest

2) LibreOffice using VNC

docker run -d \
  --name=libreoffice \
  --security-opt seccomp=unconfined `#optional` \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -p 3000:3000 \
  -p 3001:3001 \
  -v /path/to/config:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/libreoffice:latest
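
Once either container is running, the desktop is served straight to the browser. A quick check that the web endpoint is up (assuming the default linuxserver.io port layout above, 3000 for HTTP and 3001 for HTTPS; -k because the bundled certificate is typically self-signed):

curl -k -I https://localhost:3001/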



February 07, 2020

 

Shell script basics

This shell script checks 10 IP addresses sequentially and prints whether each one responds to ping.

#!/bin/bash
for ip in 192.168.1.{1..10}; do
    # -c 1: send one packet; -W 1: wait at most one second for a reply (GNU ping;
    # the original used -t 1, which sets TTL on Linux rather than a timeout)
    if ping -c 1 -W 1 "$ip" > /dev/null 2>&1; then
        echo "$ip is up"
    else
        echo "$ip is down"
    fi
done
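
Pinging the hosts one at a time is slow. A minimal variation (my sketch, not part of the original script) backgrounds each check and waits for all of them:

#!/bin/bash
# check all 10 hosts concurrently; results print as each ping finishes
for ip in 192.168.1.{1..10}; do
    (
        if ping -c 1 -W 1 "$ip" > /dev/null 2>&1; then
            echo "$ip is up"
        else
            echo "$ip is down"
        fi
    ) &
done
wait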



February 06, 2020

 

Manage redshift cluster using boto

Taking a final snapshot before deleting a running Redshift cluster, and restoring it later from the most recent snapshot, are the two important activities this boto code makes possible.

import boto
import datetime
conn = boto.connect_redshift(aws_access_key_id='XXX', aws_secret_access_key='XXX')

mymonth = datetime.datetime.now().strftime("%b").lower()
myday = datetime.datetime.now().strftime("%d")
myvar = mymonth+myday+'-v-mar5-dreport-new'

# take snapshot and delete cluster
mydict=conn.describe_clusters()
myidentifier=mydict['DescribeClustersResponse']['DescribeClustersResult']['Clusters'][0]['ClusterIdentifier']
conn.delete_cluster(myidentifier, skip_final_cluster_snapshot=False, final_cluster_snapshot_identifier=myvar)

# Restore from the last snapshot
response = conn.describe_cluster_snapshots()
snapshots = response['DescribeClusterSnapshotsResponse']['DescribeClusterSnapshotsResult']['Snapshots']
snapshots.sort(key=lambda d: d['SnapshotCreateTime'])
mysnapidentifier = snapshots[-1]['SnapshotIdentifier']
conn.restore_from_cluster_snapshot('v-mar5-dreport-new', mysnapidentifier, availability_zone='us-east-1a')
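
For reference, roughly the same two operations with the AWS CLI (a sketch; the cluster and snapshot identifiers are placeholders):

# take a final snapshot and delete the cluster
aws redshift delete-cluster --cluster-identifier v-mar5-dreport-new \
    --final-cluster-snapshot-identifier sep28-v-mar5-dreport-new

# restore a new cluster from that snapshot
aws redshift restore-from-cluster-snapshot --cluster-identifier v-mar5-dreport-new \
    --snapshot-identifier sep28-v-mar5-dreport-new --availability-zone us-east-1a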



January 26, 2020

 

Copy mysql data to another server using tar and nc

If you want to copy the data from one server (.63) to another (.64):

1) Stop the MySQL service on both servers, e.g. 10.10.10.63 and 10.10.10.64
2) Go to /var/lib/mysql/ directory on both servers.
cd /var/lib/mysql/

3) On 10.10.10.63
tar -cf - * | nc -l 1234

4) On 10.10.10.64
nc 10.10.10.63 1234 | tar xf -

Restart the MySQL service on both servers and you should get exactly the same data on 64 as you see on 63 (assuming you have the same my.cnf config). This needs to be done very carefully or else the data may get corrupted.
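
A quick way to verify the copy (my addition, best run before starting MySQL so the files are quiescent) is to compare checksums of the data directory on both servers; the two outputs should match:

# run on each server, from inside /var/lib/mysql
find . -type f -exec md5sum {} + | sort -k 2 | md5sum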



November 01, 2016

 

sysdig for system admins

What about a tool for sysadmins that combines all the utilities we use every day?
sysdig is a combination of strace + tcpdump + htop + iftop + lsof + transaction tracing.

It is an open source system-level exploration tool that captures system state and activity.

Here is how to install it...

curl -s https://s3.amazonaws.com/download.draios.com/stable/install-sysdig | sudo bash

And here are a few examples

Dump system activity to a file:
sysdig -w trace.scap

Show all the interactive commands executed inside a given container:
sysdig -pc -c spy_users container.name=wordpress1

View the top network connections for a single container:
sysdig -pc -c topconns container.name=wordpress1

See all the GET HTTP requests made by the machine:
sudo sysdig -s 2000 -A -c echo_fds fd.port=80 and evt.buffer contains GET

See all the SQL select queries made by the machine:
sudo sysdig -s 2000 -A -c echo_fds evt.buffer contains SELECT

See queries made via apache to an external MySQL server, happening in real time:
sysdig -s 2000 -A -c echo_fds fd.sip=192.168.30.5 and proc.name=apache2 and evt.buffer contains SELECT
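
A capture written with -w can be replayed later and filtered offline, for example:

sysdig -r trace.scap proc.name=mysqld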

More examples can be found here...

http://www.sysdig.org/wiki/sysdig-examples/#application



June 25, 2016

 

elasticsearch import using stream2es

Here are 3 simple steps to download json data from S3 and import it into elasticsearch.

1) create a directory:

mkdir /my_node_apps
cd /my_node_apps

2) Download all compressed files from S3
# aws s3 cp --recursive s3://my_data/my_smpp/logs/node_apps/aug_2015/ .

3) Uncompress the files and import them in elasticsearch

## cat final.sh

#!/bin/bash
# download the stream2es utility and make it executable
curl -O http://download.elasticsearch.org/stream2es/stream2es; chmod +x stream2es
indexname='smpaug2'
typename='smpaug2type'

for i in `find /my_node_apps/aug_2015/ -name "*.gz"`
do
gunzip $i
newname=`echo $i | sed 's/.gz$//'`
cat $newname | ./stream2es stdin --target "http://152.204.218.128:9200/$indexname/$typename/"
done
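
Once the loop finishes, the document count confirms the import (standard elasticsearch _count API):

curl -s "http://152.204.218.128:9200/smpaug2/_count?pretty"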



June 04, 2016

 

elasticsearch docker image with scripting support

The default elasticsearch image does not support scripting, so I have created a new image that anyone can download from Docker Hub.

Here are the steps used to create the new image.

# cat elasticsearch.yml
script.inline: true
script.indexed: true
network.host: 0.0.0.0

# cat Dockerfile
FROM elasticsearch
COPY elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml

# docker build -t shantanuo/elasticsearch-script .

# docker push shantanuo/elasticsearch-script

I can now run the container based on the image I just created.

# docker run -d -p 9200:9200 -p 9300:9300 shantanuo/elasticsearch-script
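
A quick way to verify the node came up (assuming port 9200 is published as above):

curl -s 'http://localhost:9200/?pretty'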

Find the name of the container and link it to kibana like this...

# docker run -d -p 5603:5601 --link stoic_goldstine:elasticsearch -e ELASTICSEARCH_URL=http://elasticsearch:9200 kibana

# or use this to start es with kibana
docker run -d -p 9200:9200 -p 9300:9300 -p 5603:5601 -e ES_HEAP_SIZE=1g --name myelastic shantanuo/elastic

_____

This image is based on the official elasticsearch image. If I need to build everything from scratch, I can use the official Ubuntu image as shown here...

https://hub.docker.com/r/shantanuo/elasticsearch/



April 11, 2016

 

docker adminer container

This container has adminer.php that you can point to port 80 of the host machine. The port exposed from the container is 80.

docker run -d -p 80:80 clue/adminer

Download the adminer file and save it to the /tmp/ folder.

wget http://www.adminer.org/latest.php -O /tmp/index.php

Oracle:
docker run -d -p 8080:80 -v /tmp/:/app lukaszkinder/apache-php-oci8-pdo_oci

MongoDB:
docker run -d -p 8070:80 -v /tmp:/var/www/html ishiidaichi/apache-php-mongo-phalcon
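
Each variant should then respond on the host port it was mapped to; a quick check:

curl -I http://localhost:8080/   # Oracle build
curl -I http://localhost:8070/   # MongoDB build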



March 26, 2016

 

Write your own slack command

You can write a Lambda function that handles a Slack slash command and echoes the details back to the user.

Follow these steps to configure the slash command in Slack:

1. Navigate to https://<your-team>.slack.com/services/new
2. Search for and select "Slash Commands".
3. Enter a name for your command and click "Add Slash Command Integration".
4. Copy the token string from the integration settings and use it in the next section.
5. After you complete this blueprint, enter the provided API endpoint URL in the URL field.
_____

Lambda Function

Create a new function using a blueprint called "slack-echo-command-python". The only change needed is to comment out the encryption lines and declare the variable directly...

#kms = boto3.client('kms')
#expected_token = kms.decrypt(CiphertextBlob = b64decode(ENCRYPTED_EXPECTED_TOKEN))['Plaintext']
expected_token = 'A9sU70Lz4isPdTet5tvGD0PB'

You will have received this token when you registered the new slash command at Slack.
_____

API Gateway

Create an API - LambdaMicroservice

Actions - Create Resource - getme
Actions - Create Method - post

Integration type - Lambda Function
Lambda Function: Select function name getme

Integration Request - Body Mapping Templates - Add mapping template - application/x-www-form-urlencoded

Mapping template - {"body":$input.json("$")}

Deploy API - Stage name - prod
_____

Connect API to Lambda Function and slack slash command:

Add the resource name to the invoke URL. If the invoke URL looks like this...

https://sevbnlvu69.execute-api.us-east-1.amazonaws.com/prod

Then the actual URL to be added in "API endpoints" tab of function - getme will be:

https://sevbnlvu69.execute-api.us-east-1.amazonaws.com/prod/getme
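
You can exercise the endpoint the same way Slack does, with a form-encoded POST (a sketch; the user_name and text values are placeholders, and the token is whatever your integration issued):

curl -X POST \
    -d "token=A9sU70Lz4isPdTet5tvGD0PB&user_name=shantanu&text=hello" \
    https://sevbnlvu69.execute-api.us-east-1.amazonaws.com/prod/getme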



February 18, 2016

 

install package using one line docker command

You can install and launch a redis server in just one command...

 docker run -v /myredis/conf/redis.conf:/usr/local/etc/redis/redis.conf --name myredis redis redis-server /usr/local/etc/redis/redis.conf

This starts redis-server with the configuration file /myredis/conf/redis.conf mounted from the host into the container.
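
A quick check that the server is up (redis-cli ships in the same image):

docker exec -it myredis redis-cli ping    # should reply PONG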



 

Making AWS usable

easyboto is a library that makes launching an EC2 instance very easy.

import easyboto
x=easyboto.connect('your_access_key', 'your_secret_key')

x.placement='us-east-1a'
# use the free IP address if available
#x.myaddress='52.71.62.77'
x.key='dec15a'

# t2.nano (0.5 - $0.0065), t2.micro (1 - $0.013) t2.small (2 - $0.026), t2.medium (4 - $0.052), t2.large (8 - $0.104),
# m4.large (8 - $0.126 ), m4.xlarge (16 - $0.252), m4.2xlarge (32 - $0.504), m4.4xlarge (64 - $1.008)
# ami-da4d7cb0 is based on Amazon Linux AMI 2015.09.2 (HVM), changed SSD to mangetic with 200 GB

x.startEc2('ami-da4d7cb0', 'm4.4xlarge')

# use Spot method for cheaper rates
# x.MAX_SPOT_BID= '0.5'
# x.startEc2Spot('ami-da4d7cb0', 'm4.4xlarge')



 

Start notebook server in 6 easy steps

1) Initiate a server using Amazon Linux from AWS console

https://console.aws.amazon.com/ec2

2) install docker

yum install docker

3) download conda image and initiate in interactive mode

docker run -t -p 80:7778 -i continuumio/miniconda /bin/bash

4) install ipython notebook

conda install ipython-notebook

5) start notebook server on port 7778

ipython notebook --ip='*' --port=7778

6) Your ipython notebook server is available on the default port 80 of the host machine and can be accessed here...

http://ec2-54-84-139-56.compute-1.amazonaws.com/

_____

You can log in to the docker container using the "exec" command as shown below. You will need TTY and interactive mode to access /bin/bash of the container.

docker exec -t -i 8c068c974e73 /bin/bash

Once you are in, simply call the conda command like this...

conda install --channel https://conda.binstar.org/bkreider postgresql psycopg2
conda install pandas
conda install boto
conda install pandasql
wget https://raw.githubusercontent.com/shantanuo/easyboto/master/easyboto.py
   
_____

nsenter (namespace enter) can be installed using this container:

docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter

Once installed, you can enter any container using the command...

/usr/local/bin/docker-enter 8c068c974e73 /bin/bash

nsenter is similar to docker exec shown above, but has more options.
_____

You can download the latest version of this image using pull command...

docker pull continuumio/miniconda:latest
_____

You can check stats, logs, events and info to confirm everything started as expected...

docker logs 8c068c974e73
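
The other inspection commands mentioned above follow the same pattern:

docker stats 8c068c974e73
docker events
docker info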



February 02, 2016

 

Starting with docker

1) Docker can be easily installed if you are using Amazon Linux. Here are the steps to install and run docker.

sudo yum update -y

sudo yum install -y docker

sudo service docker start

# applies only for AWS
sudo usermod -a -G docker ec2-user

2) Let's download a sample application from github and build it as a docker image.

git clone https://github.com/awslabs/ecs-demo-php-simple-app

cd ecs-demo-php-simple-app

cat Dockerfile

docker build -t shantanuo/amazon-ecs-sample .

3) You can login to docker hub and push your image.

docker login

docker push shantanuo/amazon-ecs-sample


4) Now you can pull it down and "activate" the contents of the docker image.

docker pull shantanuo/amazon-ecs-sample

docker run -p 80:80 shantanuo/amazon-ecs-sample
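
If the run succeeds, the sample PHP app should answer on port 80 of the host:

curl -I http://localhost/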



January 17, 2016

 

Overview of Unix commands

Here is an incomplete list of commands that admins use frequently.

common commands:
cd, cp, file, cat, head, less, more, tail, locate, ls, mkdir, mv, pwd, rm, rmdir, scp, ssh, wc, split, md5sum, date, man, clear, nice, nohup, passwd, screen, script, spell, su, find, grep (egrep, fgrep), strings, tar, gzip, gunzip, bzip2, bunzip2, cpio, zcat (gzcat), crontab, df, du, env, kill, ps, who, awk, sed, grep, cut, join, paste, sort, tr, uniq, vi, xargs, iconv, parallel

Example commands:
ln -s /tmp/mysql.sock /var/lib/mysql/mysqld.sock
chown mysql:mysql /var/lib/mysql/
chmod 700 nov15a.pem

Comparisons:
cmp     Compare two files, byte by byte
comm    Compare items in two sorted files
diff    Compare two files, line by line
dircmp  Compare directories
sdiff   Compare two files, side by side

Shell Programming:
basename
dirname
echo
expr
id
line
printf
sleep
test
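
A couple of the shell-programming helpers in action:

basename /var/lib/mysql/ibdata1    # ibdata1
dirname /var/lib/mysql/ibdata1     # /var/lib/mysql
expr 3 \* 4                        # 12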



April 27, 2015

 

Using Tsunami to copy big files

If I need to transfer a huge file from one server to another, I can use tsunami as shown below:

Sending server:

tsunamid 4_master_airtel_2559033.txt

Receiving server:
[shantanu@server tsunami-udp]$ tsunami
tsunami> connect 23.21.167.60
Connected.
tsunami> get 4_master_airtel_2559033.txt

You need to install tsunami on both the servers. Here is how...

cvs -z3 -d:pserver:anonymous@tsunami-udp.cvs.sourceforge.net:/cvsroot/tsunami-udp co -P tsunami-udp
cd tsunami-udp
./recompile.sh
sudo make install

Tsunami uses port 46224 (TCP for the control connection, UDP for data), so make sure it is open on both ends.
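
A quick way to confirm the control port is reachable from the receiving side before starting the transfer (assuming nc is installed):

nc -zv 23.21.167.60 46224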



January 26, 2015

 

Memory usage report

Here is a utility that accurately reports the memory usage of programs.

https://github.com/pixelb/ps_mem

ps_mem [-h|--help] [-p PID,...] [-s|--split-args] [-t|--total] [-w N]
Example output:

 Private  +   Shared  =  RAM used       Program

 34.6 MiB +   1.0 MiB =  35.7 MiB       gnome-terminal
139.8 MiB +   2.3 MiB = 142.1 MiB       firefox
291.8 MiB +   2.5 MiB = 294.3 MiB       gnome-shell
272.2 MiB +  43.9 MiB = 316.1 MiB       chrome (12)
913.9 MiB +   3.2 MiB = 917.1 MiB       thunderbird
---------------------------------
                          1.9 GiB
=================================
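
ps_mem can also be limited to specific processes with -p, as the usage line above shows; combined with pgrep, for example (mysqld here is just an illustrative process name):

sudo ps_mem -p $(pgrep -d, mysqld)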



June 04, 2013

 

Kill processes

There are times when I check the process ID from the "ps aux" output and kill it. But killing processes one at a time takes time.
An easier way to kill multiple processes is to use the following two functions.

function psgrep ()
{
ps aux | grep "$1" | grep -v 'grep'
}

function psterm ()
{
[ ${#} -eq 0 ] && echo "usage: $FUNCNAME STRING" && return 0
local pid
pid=$(ps ax | grep "$1" | grep -v grep | awk '{ print $1 }')
echo -e "terminating '$1' / process(es):\n$pid"
kill -SIGTERM $pid
}

You can add them to .bash_profile so that these functions are available all the time. Check the processes using the "psgrep" function and kill them easily with "psterm".

# psgrep http

# psterm httpd
terminating 'httpd' / process(es):
31186
31187



 

apache control

You can start Apache with a different document root by passing a directive via the -c parameter:

# httpd -k start -c "DocumentRoot /var/www/html_debug/"
If you want to go back to the original configuration using the default DocumentRoot (/var/www/html), restart Apache:
# httpd -k stop
# apachectl start

You can also control the startup log level using the -e option:

httpd -k start -e debug

Possible values you can pass to option -e are:
info
notice
warn
error
crit
alert
debug
emerg
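
To double-check which configuration Apache actually parsed after switching document roots:

httpd -S    # dump parsed virtual host and run settings
httpd -V    # show compile/build parameters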



 

Managing history

// add date when the command was executed
export HISTTIMEFORMAT='%F %T '

// ignore commands in history
export HISTIGNORE="pwd:ls:ls -ltr:"

// ignoreboth = ignoredups + ignorespace (two separate exports would overwrite each other)
export HISTCONTROL=ignoreboth

// clear all the previous history.
history -c

// disable history
export HISTSIZE=0
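
With HISTTIMEFORMAT exported, each entry in the output of history carries its timestamp:

history | tail -3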



 

Using command prompt to alert user

Using the export command we can change the Linux command prompt to make it more informative. It is also possible to display the output of a shell script as part of the command prompt. For example, in the following setup the prompt displays a warning when disk usage crosses the alert threshold.

export PS1="[\@] [`ifconfig | grep Bcast | awk '{print $2}' | awk -F':' '{print $2}'`] \$(diskfull.sh) > "

The new prompt will look something like this...

root@ip-10-158-57-83 disk full >

And the shell script:

cat /bin/diskfull.sh
#!/bin/sh

# set alert level 95% as default if no user input found
ALERT=${1:-95}

df -HP | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }' | while read output
do
    usep=$(echo $output | awk '{ print $1 }' | cut -d'%' -f1)
    partition=$(echo $output | awk '{ print $2 }')
    if [ $usep -ge $ALERT ]; then
        echo "disk full"
        exit
    fi
done
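
The alert threshold can be overridden per call by passing it as the first argument:

sh /bin/diskfull.sh 80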


