Shantanu's Blog

Database Consultant

March 29, 2012

 

create hourly partitions

In order to create hourly partitions, we need to use the TO_SECONDS() function.
Here is a quick shell script that generates the required hourly partition clauses for a given day.

#!/bin/bash
# generate 25 hourly partition clauses for the given day (defaults to $myval)
myval='2012-03-27'
mydate=${1:-$myval}
for ((a=0; a<=24; a++))
do
    parthour=$(echo "$a hour" | sed 's/ //')
    myhour=$(mysql -Bse"select date_add('$mydate', interval $a hour)")
    echo "PARTITION $parthour VALUES less than (to_seconds('$myhour')),"
done


// create daily range partitions like this...
# sh create_partition.sh '2012-01-23' '2012-01-29'
PARTITION 20120123parti VALUES less than (to_seconds('2012-01-23 00:00:00')),
PARTITION 20120124parti VALUES less than (to_seconds('2012-01-24 00:00:00')),
PARTITION 20120125parti VALUES less than (to_seconds('2012-01-25 00:00:00')),
PARTITION 20120126parti VALUES less than (to_seconds('2012-01-26 00:00:00')),
PARTITION 20120127parti VALUES less than (to_seconds('2012-01-27 00:00:00')),
PARTITION 20120128parti VALUES less than (to_seconds('2012-01-28 00:00:00')),
PARTITION 20120129parti VALUES less than (to_seconds('2012-01-29 00:00:00')),

use the following shell script:

#!/bin/sh
# generate one daily partition clause per day from the start date up to the end date
mypart='parti'
start_date='2011-01-01'
myStartDate=${1:-$start_date}
end_date='2012-07-01'
myEndDate=${2:-$end_date}

start_seconds=$(mysql -Bse"select to_seconds('$myStartDate')")
end_seconds=$(mysql -Bse"select to_seconds(date_add('$myEndDate', interval 1 day))")
diff_seconds=$((end_seconds - start_seconds))

i=0

while [ $i -lt $diff_seconds ]
do
    myhour=$(mysql -Bse"select date_add('$myStartDate', interval $i second)")
    partname=$(mysql -Bse"select date_format(date_add('$myStartDate', interval $i second), '%Y%m%d')")
    echo "PARTITION $partname$mypart VALUES less than (to_seconds('$myhour')),"
    i=$((i + 86400))
done
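The generated lines are meant to be pasted into a PARTITION BY RANGE clause. Here is a minimal sketch of how they might be applied; the hits table, its created_at column and the mydb database are hypothetical, the closing MAXVALUE partition is an addition, and the partitioning column must be part of every unique key on the table. TO_SECONDS() requires MySQL 5.5 or later.

mysql mydb <<'EOF'
ALTER TABLE hits
PARTITION BY RANGE (TO_SECONDS(created_at)) (
PARTITION 20120123parti VALUES LESS THAN (TO_SECONDS('2012-01-23 00:00:00')),
PARTITION 20120124parti VALUES LESS THAN (TO_SECONDS('2012-01-24 00:00:00')),
PARTITION pmax VALUES LESS THAN MAXVALUE
);
EOF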



March 27, 2012

 

redis case study 1

Inner join in redis

I've got a leaderboard sorted set containing member ranks (roughly 2 million records). Each member has a corresponding hash containing the member id, name, gender and country. I'd like to be able to sort the leaderboard and also filter it by a field in the hash (e.g. *get scores for members where gender is male*).
What is the best approach to achieving this?

Ans:
sadd females laxmi kamal sonia
sadd male ram sham akbar

zadd scores 100 sonia
zadd scores 200 laxmi
zadd scores 300 kamal
zadd scores 100 ram
zadd scores 400 sham

zinterstore femscores 2 scores females
zrange femscores 0 50 withscores
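Note that ZINTERSTORE treats the plain set as a sorted set whose members all have score 1, and the default aggregate is SUM, so every score in femscores comes out 1 higher than in scores. A small variation via redis-cli (the WEIGHTS clause is the addition here) keeps the original scores intact:

# give the plain "females" set a weight of 0 so the original scores are preserved
redis-cli zinterstore femscores 2 scores females WEIGHTS 1 0
redis-cli zrange femscores 0 50 withscores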



March 23, 2012

 

5 minute interval script

If you want to avoid a 5-minute cron entry (*/5), you can set up a per-minute cron job and check for the 5-minute interval inside the script.
This especially helps when you already have a per-minute cron job and want one command to run only once every 5 minutes.


cat todel.sh
#!/bin/bash
run_command='false'
myminute=$(date '+%M')
NUMBERS="5 10 15 20 25 30 35 40 45 50 55 00"
for number in $NUMBERS
do
    if [ $myminute -eq $number ]; then
        run_command='true'
        break
    fi
done

if [[ $run_command == 'true' ]]; then
    echo "run command"
    date
fi
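A shorter alternative (a sketch, not from the original script) is to test the minute with modulo arithmetic instead of a lookup list; the 10# prefix forces base-10 so minutes such as 08 and 09 are not parsed as octal:

#!/bin/bash
myminute=$(date '+%M')
if [ $((10#$myminute % 5)) -eq 0 ]; then
    echo "run command"
    date
fi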



March 19, 2012

 

create daily partitions

MySQL can have daily partitions, i.e. a different partition for each day of the year. This shell script generates the required code.

#!/bin/bash
# c starts one below the numeric day value of Jan 1, 2012;
# it is incremented before each partition is printed
# a is the day of the year

c=734502
for ((a=1; a<=366; a++))
do
    c=$((c + 1))
    partition_name=$(date '+%d%b%Y' --date="2011-12-31 +$a day")
    echo "PARTITION $partition_name VALUES IN ($c),"
done
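The generated VALUES IN clauses are meant for LIST partitioning on a numeric day column (or an expression such as TO_DAYS(date_col)). A minimal sketch, with a hypothetical table and only the first few generated lines pasted in (the trailing comma is dropped on the last partition):

mysql mydb <<'EOF'
CREATE TABLE daily_stats (
  day_id INT NOT NULL,
  hits   INT NOT NULL
)
PARTITION BY LIST (day_id) (
PARTITION 01Jan2012 VALUES IN (734503),
PARTITION 02Jan2012 VALUES IN (734504),
PARTITION 03Jan2012 VALUES IN (734505)
);
EOF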



March 16, 2012

 

using redis with python

redis-py client objects and connection pools are designed to be threadsafe. And creating a client instance without explicitly specifying a connection pool creates one for you by default.
 
# redis_connections.py
import redis

cache = redis.StrictRedis(..., db=1)
testing = redis.StrictRedis(..., db=2)
...
# end redis_connections.py


# your application:
import redis_connections
redis_connections.cache.set('foo', 'bar')
...
redis_connections.testing.get('bing')



March 14, 2012

 

Display the source code in printable format

// contents of showcode.php file

<?php
// the file to highlight is passed in the URL (see the example below)
$source = $_GET['file'];
highlight_file($source);
?>

The optional second parameter of highlight_file() ($return) only controls whether the highlighted code is returned as a string instead of being printed, so it can be omitted here. We can make any PHP file printable by passing the file name as a variable in the URL (in production you would want to restrict which files may be requested), e.g.

http://my_web_site.com/showcode.php?file=toprint.php



March 11, 2012

 

AWS firefox extensions

Here are two very useful Firefox extensions if you are using an AWS account.

1) ElasticFox is a Mozilla Firefox extension for managing your Amazon EC2 account. Launch new instances, mount Elastic Block Storage volumes, map Elastic IP addresses, and more.

http://s3.amazonaws.com/ec2-downloads/elasticfox.xpi


2) S3Fox allows you to manage your S3 account just like any other FTP client.

https://addons.mozilla.org/en-US/firefox/addon/amazon-s3-organizers3fox/



March 08, 2012

 

upload a file to your google code

If you have a project hosted on Google Code (and you should have one!), the following steps will help you upload a file to its Downloads section.

1) Python script:
download googlecode_upload.py to the server from which you want to upload a file.

wget http://support.googlecode.com/svn/trunk/scripts/googlecode_upload.py -O googlecode_upload.py

2) credentials:
When you run the script, it will ask for your username, which is usually your Google username. The password, however, is not your Gmail password; you will find the generated password on the following page:

https://code.google.com/hosting/settings

Note your username and password.

3) Upload the file to your project:

# python googlecode_upload.py -s test -p mysqldump mysqldump.percona

The summary (-s test), the project name (-p mysqldump) and the file name (mysqldump.percona) are required.



March 07, 2012

 

Creating 10 million dummy records

Here is a shell script that generates 10 million lines of dummy records, with a counter to keep all the records distinct. The fields are delimited by ^.
It should take less than a minute to generate the final tosend2.txt file.

# cat myredis.sh
#!/bin/bash
# build 200,000 base records, then expand them 50x with distinct key prefixes
>tosend.txt
>tosend2.txt
counter=0
while [ $counter -lt 200000 ]; do
echo "set 3969:$counter 14.140.90.242^^IN^mah^2012-02-29:12:17:17^^servFullEngine^^^UNTRUSTED/1.0:3agpp-agba^Generic^Mozilla:Midlet^$counter^^^^^^" >> tosend.txt
let counter=counter+1
done

for i in `seq 50`
do
sed 's/3969:/3969:'$i'0000/' tosend.txt >> tosend2.txt
done

# wc -l tosend2.txt
10000000 tosend2.txt
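Since each line is already a complete SET command, one way to load the file into Redis (a sketch, not part of the original script) is to feed it to redis-cli on stdin:

# pipe the generated commands into a local Redis instance
cat tosend2.txt | redis-cli > /dev/null

For very large loads, the mass-insertion mode (redis-cli --pipe) is generally faster, but it expects input in the raw Redis protocol rather than plain commands.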



March 05, 2012

 

Managing s3 using command prompt

S3cmd is a tool for managing objects in Amazon S3 storage. It allows for making and removing "buckets" and uploading, downloading and removing "objects" from these buckets.

Install the s3cmd package on Ubuntu or CentOS using the following commands respectively:

# sudo apt-get install s3cmd
# yum install s3cmd

Or download the archive and install it from source:

wget http://tinyurl.com/6lmbsa8
unzip s3cmd-1.1.0-beta3.zip
cd s3cmd-1.1.0-beta3
python setup.py install

# s3cmd --configure
Keep your access key and secret key handy. The config file defaults to /root/.s3cfg

Important Commands:
List objects or buckets
s3cmd ls [s3://BUCKET[/PREFIX]]
Get file from bucket
s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
Put file into bucket
s3cmd put FILE [FILE...] s3://BUCKET[/PREFIX]
Delete file from bucket
s3cmd del s3://BUCKET/OBJECT
Synchronize a directory tree to S3
s3cmd sync LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR
Copy object
s3cmd cp s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
Move object
s3cmd mv s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]

options:
-f, --force Force overwrite and other dangerous operations.
--continue Continue getting a partially downloaded file
-r, --recursive Recursive upload, download or removal.
-P, --acl-public Store objects with ACL allowing read for anyone.
-H, --human-readable-sizes Print sizes in human readable form (eg 1kB instead of 1234).

Other useful Commands:
Make bucket
s3cmd mb s3://BUCKET
Remove bucket
s3cmd rb s3://BUCKET
List all objects in all buckets
s3cmd la
Get various information about Buckets or Files
s3cmd info s3://BUCKET[/OBJECT]
Disk usage by buckets
s3cmd du [s3://BUCKET[/PREFIX]]
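As a concrete illustration (the bucket name and local path below are hypothetical), a minimal backup workflow might look like this:

# create a bucket, upload a directory tree, then list what landed there
s3cmd mb s3://my-backup-bucket
s3cmd sync /var/backups/ s3://my-backup-bucket/backups/
s3cmd ls --human-readable-sizes s3://my-backup-bucket/backups/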

See program homepage for more information at
http://s3tools.org


It is possible to share buckets between existing AWS users.

Using the console: In the Grantee field add the user's email address or canonical user ID
Using s3cmd: s3cmd setacl --acl-grant=read: s3://BUCKETNAME[/OBJECT]

 // recursively copy entire folder to s3
time /root/s3cmd-1.1.0-beta3/s3cmd put --recursive /path/folder s3://redis_archive/



March 04, 2012

 

max connections and table cache

Adjusting max connections:

As per the my.cnf setting, the maximum permissible number of connections is 1024, but we have used only 3 simultaneous connections so far. There is no need for such a high value, so it can safely be reduced (for example to 600, as done below).

[root@myserver]# mysqladmin variables | grep max_connections
| max_connections | 1024 |

[root@myserver]# mysqladmin extended-status | grep Max_used_connections
| Max_used_connections | 3 |

[root@myserver]# mysql -e"set global max_connections=600"

Now, don't we risk running out of available connections in the future?
Yes, we do, and we still need to keep max_connections as low as possible. Why? Read on.

table cache
In order to get better performance, we might be tempted to set the table cache very high, e.g.

table_cache=1M

But it is wrong to set such a high value (one million!?), since MySQL does not even allow more than about half a million for this variable. Let MySQL start with the default limit of 64, which is sufficient for most usual operations. A very high table cache conflicts with a high max_connections value and limits its usage: trying to set the table cache to 1 million will silently cap max_connections at around 200 or so, because the OS sets an upper limit on the number of open file descriptors, and that cap in turn produces costly "too many connections" errors in the application.
This point becomes clear if you see the following line in the MySQL error log:

120201 19:40:24 [Warning] Changed limits: max_open_files: 1024 max_connections: 214 table_cache: 400

In the above case, MySQL has automatically adjusted the table_cache value from 1 million down to 400. At the same time it has also brought down the max_connections value from 2000 to 214. This was done because the max_open_files limit of 1024 is set by the OS and MySQL has no control over it. If we reduce the table_cache value to the default 64, MySQL will allow max_connections to be set higher than 214. Another option would be to raise the open-files limit with the "ulimit" command from the default 1024 to something like 4024. Since removing the huge table_cache setting from my.cnf easily resolves this issue, I do not recommend increasing the file descriptor limit unless absolutely necessary. (A quick way to check these limits is sketched below.)
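To check whether this cap applies on a given server, compare the OS file-descriptor limit with the limits MySQL actually applied (a quick sketch; the error log path is an assumption and varies per distribution):

# per-process open-files limit imposed by the OS
ulimit -n

# the open_files_limit value MySQL ended up with
mysqladmin variables | grep open_files_limit

# any limits MySQL adjusted at startup (log path is an assumption)
grep 'Changed limits' /var/log/mysqld.log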

In other words, removing the one-million table_cache setting from my.cnf is the easiest way to avoid the "max connections reached" error.

The default value of table_cache is 64. Only when the number of file descriptors MySQL actually needs to keep open is very high do we need to increase the table_cache variable.

[root@myserver]# mysqladmin variables | grep table_cache
| table_cache | 64 |

[root@myserver]# mysqladmin extended-status | grep Opened_tables
| Opened_tables | 1 million |

[root@myserver]# mysql -e"set global table_cache=512"

After restarting the MySQL service, always check the error log; there may be a useful warning!



 

optimizing key buffer size

If you are using a MyISAM-only database, a key_buffer_size between 25% and 50% of total RAM is recommended. But it is also important to check whether the allotted key buffer is really getting used.
my_key_blocks_unused=$(mysqladmin extended-status | grep Key_blocks_unused | awk '{print $4}')
my_key_cache_block_size=$(mysqladmin variables | grep key_cache_block_size | awk '{print $4}')
my_key_buffer_size=$(mysqladmin variables | grep key_buffer_size | awk '{print $4}')
echo "1 - (($my_key_blocks_unused * $my_key_cache_block_size) / $my_key_buffer_size)"
Once you execute the above commands at the command prompt, the last one will print something like this...

1 - ((38330 * 1024) / 67108864)

When this formula is evaluated (in a spreadsheet or with a calculator), it returns something like 0.4, which means only about 40% of the key buffer is in use, i.e. more than half of it is sitting empty!
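Instead of reaching for a spreadsheet, the same ratio can be computed directly in the shell, reusing the variables set above (a small sketch):

# fraction of the key buffer that is actually in use
awk -v unused=$my_key_blocks_unused -v bs=$my_key_cache_block_size -v kb=$my_key_buffer_size \
    'BEGIN { printf "%.2f\n", 1 - (unused * bs) / kb }'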

Here is how to increase key buffer from 64 MB to 8 GB:

# mysqladmin variables | grep key_buffer_size
| key_buffer_size | 67108864 |

# mysql -e"set global key_buffer_size = 67108864*128"

# mysqladmin variables | grep key_buffer_size
| key_buffer_size | 8589934592 |

_____

The Tuning Primer shell script will do this calculation for you!

http://www.day32.com/MySQL/



March 03, 2012

 

upgrading mysql using remi on centOS

Extra Packages for Enterprise Linux (EPEL) is a volunteer-based community effort from the Fedora project to create a repository of add-on packages for Red Hat Enterprise Linux (RHEL) and its compatible spin-offs such as CentOS or Scientific Linux.
REMI is a repository created by Remi Collet. Here are the steps to install MySQL using Remi.

1.
rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
rpm -Uvh http://rpms.famillecollet.com/enterprise/remi-release-5.rpm
yum --exclude=php\* --enablerepo=remi,remi-test install mysql mysql-server


2.
comment out the following options in my.cnf:

#master-host=10.11.12.13
#master-user=repli
#master-password=secret
#master-connect-retry=60
#skip-locking
#skip-innodb

3. Start mysql using the following command:
service mysqld start
If this fails to start mysql, use the following command instead:

mysqld_safe --datadir=/mysql/data


Make sure that you have all the user grant information with you, so that you can restore it after running mysql_install_db (see the sketch below).
mysql_install_db
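One hedged way to keep the grants, assuming the old server can still be started, is to dump the mysql system database beforehand and replay it afterwards (the dump file name is just an example):

# before the upgrade: save the grant tables
mysqldump mysql > mysql_grants_backup.sql

# after mysql_install_db and a restart: restore them and reload privileges
mysql mysql < mysql_grants_backup.sql
mysql -e "flush privileges"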

#### Check the /usr/share/mysql/english/errmsg.sys file; if it has fewer than 14 lines, you have to copy this file from the mysql-5.2 tar. ###
#### Use --exclude=php\* to prevent the php packages from being updated or downloaded.



March 02, 2012

 

attaching an EBS volume

1) create the EBS volume using the AWS web interface
When creating your EBS volume, make sure it is in the same Availability Zone as your Amazon EC2 instance.

2) Attach the volume to the instance you have just created.

3) partition the EBS volume
fdisk /dev/`dmesg | grep 'unknown partition table' | awk '{print $1}' | sed 's/://' | tail -1`

4) format the newly created partition
mkfs.ext3 /dev/`dmesg | grep 'unknown partition table' | awk '{print $1}' | sed 's/://' | tail -1`

5) create a mount point for the partition
mkdir /mnt/data

6) mount the partition
mount -t ext3 /dev/`dmesg | grep 'unknown partition table' | awk '{print $1}' | sed 's/://' | tail -1` /mnt/data



Increase EBS size


# unmount and detach volume
umount -d /dev/sdh
ec2-detach-volume vol-xxxxxxxx

# create snapshot
ec2-create-snapshot vol-xxxxxxxx

# create volume based on snapshot
ec2-create-volume -z us-east-1a --size 20 --snapshot snap-xxxxxxxx

# attach volume to instance
ec2-attach-volume vol-yyyyyyyy --instance i-xxxxxxxx --device /dev/sdh

# mount EBS
mkdir /mnt/data
echo '/dev/sdh /mnt/data ext3 defaults,noatime 0 0' >> /etc/fstab
mount /mnt/data

# grow the filesystem to fill the newly enlarged volume
resize2fs /dev/sdh
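
# confirm that the filesystem picked up the extra space (device and mount point as used above)
df -h /mnt/data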


