Shantanu's Blog

Database Consultant

April 26, 2012

 

Expand root EBS volume

Here are the steps to expand an existing root EBS volume for an EBS-backed instance (a scripted sketch follows the list):

    * Stop the instance.
    * Create a snapshot of the root EBS volume.
    * Create an EBS volume from that snapshot with the new, larger size. (Make sure it is in the same availability zone as the instance.)
    * Detach the root EBS volume and attach the newly created EBS volume to /dev/sda1 on the instance.
    * Start the instance and then login.
    * Enter 'df -h' to see the current size of the root volume.
    * Enter 'sudo resize2fs /dev/sda1' to grow the filesystem into the rest of the expanded disk.
    * Enter 'df -h' again to see the new size of the root volume.
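
These steps can also be scripted with the same ec2-api-tools used elsewhere on this page. Here is a minimal sketch; the instance id, volume id, zone and the 20 GB target size are placeholders you would replace with your own values:

#!/bin/sh
# sketch: grow the root EBS volume of a stopped instance
# i-12345678, vol-abcd1234, us-east-1a and the size 20 are placeholders
instance_id='i-12345678'
old_volume='vol-abcd1234'
zone='us-east-1a'
new_size='20'

ec2-stop-instances $instance_id
snapshot_id=$(ec2-create-snapshot $old_volume | cut -f2)
# keep checking until the snapshot status reads "completed"
ec2-describe-snapshots $snapshot_id
new_volume=$(ec2-create-volume --size $new_size --snapshot $snapshot_id --availability-zone $zone | cut -f2)
ec2-detach-volume $old_volume
# wait for the detach to finish before attaching the new volume
ec2-attach-volume $new_volume --instance $instance_id --device /dev/sda1
ec2-start-instances $instance_id
# then log in and run: sudo resize2fs /dev/sda1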

 

Partition Guide

Here are useful commands one can use while working with partitions.

CREATE TABLE `data_summary` (
  `data_summary_ad_hourly_id` bigint(20) NOT NULL AUTO_INCREMENT,
  `date_time` datetime NOT NULL,
  `ad_id` int(10) unsigned NOT NULL,
...
some more columns
...
) ENGINE=MyISAM DEFAULT CHARSET=latin1 ;

This is the table without any partitions. We need an ALTER TABLE statement to divide it.

ALTER TABLE `data_summary`
PARTITION BY RANGE (TO_SECONDS(`date_time` )) (
PARTITION 20120315parti VALUES less than (to_seconds('2012-03-15 00:00:00')),
PARTITION 20120316parti VALUES less than (to_seconds('2012-03-16 00:00:00')),
PARTITION 20120317parti VALUES less than (to_seconds('2012-03-17 00:00:00')),
PARTITION 20120318parti VALUES less than (to_seconds('2012-03-18 00:00:00')),
PARTITION 20120319parti VALUES less than (to_seconds('2012-03-19 00:00:00')),
PARTITION 20120320parti VALUES less than (to_seconds('2012-03-20 00:00:00')),
PARTITION 20120321parti VALUES less than (to_seconds('2012-03-21 00:00:00')),
PARTITION 20120322parti VALUES less than (to_seconds('2012-03-22 00:00:00')),
PARTITION 20120323parti VALUES less than (to_seconds('2012-03-23 00:00:00')),
PARTITION 20120324parti VALUES less than (to_seconds('2012-03-24 00:00:00')),
PARTITION 20120325parti VALUES less than (to_seconds('2012-03-25 00:00:00')),
PARTITION 20120326parti VALUES less than (to_seconds('2012-03-26 00:00:00')),
PARTITION 20120327parti VALUES less than (to_seconds('2012-03-27 00:00:00')),
PARTITION 20120328parti VALUES less than (to_seconds('2012-03-28 00:00:00')),
PARTITION 20120329parti VALUES less than (to_seconds('2012-03-29 00:00:00')),
PARTITION 20120330parti VALUES less than (to_seconds('2012-03-30 00:00:00')),
PARTITION 20120331parti VALUES less than (to_seconds('2012-03-31 00:00:00')),
PARTITION 20120401parti VALUES less than (to_seconds('2012-04-01 00:00:00')),
PARTITION 20120402parti VALUES less than (to_seconds('2012-04-02 00:00:00')),
PARTITION 20120403parti VALUES less than (to_seconds('2012-04-03 00:00:00')),
PARTITION 20120404parti VALUES less than (to_seconds('2012-04-04 00:00:00')),
PARTITION 20120405parti VALUES less than (to_seconds('2012-04-05 00:00:00')),
PARTITION 20120406parti VALUES less than (to_seconds('2012-04-06 00:00:00')),
PARTITION 20120407parti VALUES less than (to_seconds('2012-04-07 00:00:00')),
PARTITION 20120408parti VALUES less than (to_seconds('2012-04-08 00:00:00')),
PARTITION 20120409parti VALUES less than (to_seconds('2012-04-09 00:00:00')),
PARTITION 20120410parti VALUES less than (to_seconds('2012-04-10 00:00:00')),
PARTITION 20120411parti VALUES less than (to_seconds('2012-04-11 00:00:00')),
PARTITION 20120412parti VALUES less than (to_seconds('2012-04-12 00:00:00')),
PARTITION 20120413parti VALUES less than (to_seconds('2012-04-13 00:00:00')),
PARTITION 20120414parti VALUES less than (to_seconds('2012-04-14 00:00:00')),
PARTITION 20120415parti VALUES less than (to_seconds('2012-04-15 00:00:00')),
 PARTITION maxParti VALUES LESS THAN (MAXVALUE)
);


We cannot add partitions at the earlier end of the date range. For example, when I try to add partitions at the lower end, for March 13 and March 14, it does not work.

ALTER TABLE `data_summary`
REORGANIZE PARTITION  20120315parti INTO (
PARTITION 20120313parti VALUES less than (to_seconds('2012-03-13 00:00:00')),
PARTITION 20120314parti VALUES less than (to_seconds('2012-03-14 00:00:00'))
);

ERROR 1520 (HY000): Reorganize of range partitions cannot change total ranges except for last partition where it can extend the range

If you really need to create partitions for older days, it is still possible to drop and recreate all partitions as explained below.
_____

But we can add partitions at the higher end.

ALTER TABLE `data_summary`
REORGANIZE PARTITION  maxParti INTO (
PARTITION 20120416parti VALUES less than (to_seconds('2012-04-16 00:00:00')),
 PARTITION maxParti VALUES LESS THAN (MAXVALUE)
);

So we take the data from the "maxParti" partition and split it: the rows before April 16 go into the new partition and the rest go back into "maxParti".
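
This kind of split is usually run from cron once a day so that the partition window keeps moving forward. A minimal sketch using GNU date; the schema name companydb is assumed and the login credentials are expected to be set up already:

#!/bin/sh
# add tomorrow's partition by splitting maxParti (schema and table names assumed)
pdate=$(date -d '+1 day' '+%Y-%m-%d')
pname=$(date -d '+1 day' '+%Y%m%d')parti
mysql companydb -e"ALTER TABLE data_summary REORGANIZE PARTITION maxParti INTO (
PARTITION $pname VALUES LESS THAN (TO_SECONDS('$pdate 00:00:00')),
PARTITION maxParti VALUES LESS THAN (MAXVALUE))"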

_____

We can merge the partitions 20120315parti, 20120316parti and 20120317parti into a single partition called "20120315TO17parti".

ALTER TABLE `data_summary`
REORGANIZE PARTITION  20120315parti, 20120316parti, 20120317parti INTO (
PARTITION 20120315TO17parti VALUES less than (to_seconds('2012-03-17 00:00:00'))
);

_____

We can completely change the way the data is divided by merging and splitting the partitions all over again.

ALTER TABLE `data_summary`
REORGANIZE PARTITION
 20120315TO17parti, 20120318parti,  20120319parti, 20120320parti,  20120321parti, 20120322parti,  20120323parti, 20120324parti,  20120325parti, 20120326parti,  20120327parti, 20120328parti,  20120329parti, 20120330parti,  20120331parti, 20120401parti,  20120402parti, 20120403parti,  20120404parti, 20120405parti,  20120406parti, 20120407parti,  20120408parti, 20120409parti,  20120410parti, 20120411parti,  20120412parti, 20120413parti,  20120414parti, 20120415parti, 20120416parti, maxParti
INTO (
PARTITION 20120313parti VALUES less than (to_seconds('2012-03-13 00:00:00')),
PARTITION 20120314parti VALUES less than (to_seconds('2012-03-14 00:00:00')),
PARTITION 20120315parti VALUES less than (to_seconds('2012-03-15 00:00:00')),
PARTITION 20120316parti VALUES less than (to_seconds('2012-03-16 00:00:00')),
 PARTITION maxParti VALUES LESS THAN (MAXVALUE)
);

So the newly organized table will look like this...

mysql> SHOW CREATE TABLE `data_summary` \G
*************************** 1. row ***************************
       Table: data_summary
Create Table: CREATE TABLE `data_summary` (
  `data_summary_ad_hourly_id` bigint(20) NOT NULL AUTO_INCREMENT,
  `date_time` datetime NOT NULL,
  `ad_id` int(10) unsigned NOT NULL,
...
some more columns
...
) ENGINE=MyISAM DEFAULT CHARSET=latin1
/*!50500 PARTITION BY RANGE (TO_SECONDS(`date_time` ))
(PARTITION 20120313parti VALUES LESS THAN (63498816000) ENGINE = MyISAM,
 PARTITION 20120314parti VALUES LESS THAN (63498902400) ENGINE = MyISAM,
 PARTITION 20120315parti VALUES LESS THAN (63498988800) ENGINE = MyISAM,
 PARTITION 20120316parti VALUES LESS THAN (63499075200) ENGINE = MyISAM,
 PARTITION maxParti VALUES LESS THAN MAXVALUE ENGINE = MyISAM) */
1 row in set (0.00 sec)

_____

We can drop a partition completely, along with its data, by altering the table.

ALTER TABLE data_summary DROP PARTITION 20120313parti;

We can also truncate the data within one or more partitions.
ALTER TABLE data_summary TRUNCATE PARTITION 20120314parti, 20120315parti;
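
To verify the layout after all this dropping and truncating, the partition metadata can be queried from information_schema; the schema name companydb is assumed here:

mysql -e"select PARTITION_NAME, TABLE_ROWS from information_schema.PARTITIONS where TABLE_SCHEMA = 'companydb' and TABLE_NAME = 'data_summary'"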


 

Using load data efficiently

I have a tab-separated text file, shown below, that needs to be imported into MySQL. The table, however, has only 3 columns while the text file has 4 columns.
Here is an example file that was generated using the "select into outfile" syntax, which creates a tab-delimited outfile.

# cat extract.txt
1    shantanu    mumbai, india
2    sameer    NY, USA
3    amar    jamnagar, ahmadabad
4    akbar    Hyderabad    forth column

1) I can still use the "load data" command and MySQL will ignore the fourth column and show a warning about what it has done.
2) You can use a column list while importing data; a user variable, e.g. @some_text, will absorb the column you want to skip.
3) You can do calculations / replacements using the "set" keyword in the load data statement.

Here is how to convert IP addresses to integer values.

load data local infile 'sample.csv' into table test.raw_20120425 fields terminated by '^' (id, createid, @a, country, telco_name, some ... more ... columns ..) set ip = inet_aton(@a);
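
Here is a self-contained version of the same trick that can be tried as-is; the file /tmp/ips.csv, the table ip_demo and the sample addresses are made up for this demo:

#!/bin/sh
# demo: load IP addresses as integers using a user variable and "set"
printf '1^203.0.113.10\n2^198.51.100.7\n' > /tmp/ips.csv
mysql test -e"create table if not exists ip_demo (id int, ip int unsigned)"
# the client may need to be started with --local-infile for this to work
mysql test -e"load data local infile '/tmp/ips.csv' into table ip_demo fields terminated by '^' (id, @a) set ip = inet_aton(@a)"
mysql test -e"select id, ip, inet_ntoa(ip) from ip_demo"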
_____

mysql> drop table if exists loadme;
Query OK, 0 rows affected (0.03 sec)

mysql> create table loadme (id int, name varchar(100), address varchar(100));
Query OK, 0 rows affected (0.15 sec)

mysql> load data infile 'extract.txt' into table loadme;
Query OK, 4 rows affected, 1 warning (0.00 sec)
Records: 4  Deleted: 0  Skipped: 0  Warnings: 1

mysql> show warnings;
+---------+------+---------------------------------------------------------------------------+
| Level   | Code | Message                                                                   |
+---------+------+---------------------------------------------------------------------------+
| Warning | 1262 | Row 4 was truncated; it contained more data than there were input columns |
+---------+------+---------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> select * from loadme;
+------+----------+---------------------+
| id   | name     | address             |
+------+----------+---------------------+
|    1 | shantanu | mumbai, india       |
|    2 | sameer   | NY, USA             |
|    3 | amar     | jamnagar, ahmadabad |
|    4 | akbar    | Hyderabad           |
+------+----------+---------------------+
4 rows in set (0.00 sec)

mysql> load data infile 'extract.txt' into table loadme (id, name, address, @extra);
Query OK, 4 rows affected (0.00 sec)
Records: 4  Deleted: 0  Skipped: 0  Warnings: 0

mysql> load data infile 'extract.txt' into table loadme (id, name, address, @extra) set name=@extra;
Query OK, 4 rows affected (0.01 sec)
Records: 4  Deleted: 0  Skipped: 0  Warnings: 0

mysql> select * from loadme;
+------+--------------+---------------------+
| id   | name         | address             |
+------+--------------+---------------------+
|    1 | shantanu     | mumbai, india       |
|    2 | sameer       | NY, USA             |
|    3 | amar         | jamnagar, ahmadabad |
|    4 | akbar        | Hyderabad           |
|    1 | shantanu     | mumbai, india       |
|    2 | sameer       | NY, USA             |
|    3 | amar         | jamnagar, ahmadabad |
|    4 | akbar        | Hyderabad           |
|    1 | NULL         | mumbai, india       |
|    2 | NULL         | NY, USA             |
|    3 | NULL         | jamnagar, ahmadabad |
|    4 | forth column | Hyderabad           |
+------+--------------+---------------------+
12 rows in set (0.00 sec)



April 19, 2012

 

same key pair file across all regions

It is better to have a single key pair across all regions so that it is easy to connect to any server.
The following script will import your RSA public key into every region under the key name "common_developer".

#!/bin/sh
# upload a key that can be used across all regions
private_key='pk-developer.pem'
cert='cert-developer.pem'

cat > common_developer.rsa << "my_devkey"
ssh-rsa AAAAB3Nz ... G41MT-S2
my_devkey

regions=$(ec2-describe-regions --private-key $private_key --cert $cert | cut -f2)

for region in $regions; do
    echo $region
    ec2-import-keypair --region $region --private-key $private_key --cert $cert --public-key-file common_developer.rsa common_developer
done

You can now use the matching private key file (common_developer.pem) while connecting to a server, using the following command.

ssh -i common_developer.pem root@ec2-1-2-3-4.sa-east-1.compute.amazonaws.com

This applies to servers launched with this new key pair. For old servers you will need to copy the public key manually.
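
For such old servers, a one-liner like the following will append the public key over an existing connection; old_key.pem stands for whatever key the server currently accepts:

cat common_developer.rsa | ssh -i old_key.pem root@ec2-1-2-3-4.sa-east-1.compute.amazonaws.com "cat >> ~/.ssh/authorized_keys"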



 

connect to several servers

When there are several servers you may need to connect to, it is easier to create a single script instead of several executables.

#!/bin/sh
case $1 in
"development") ssh -i development.pem 1.2.3.4 ;;
"production") ssh -i production.pem ec2-111-222-333-444.compute-1.amazonaws.com ;;
*) echo "Sorry, I can not connect, please check the server name" ;;
esac
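
Saved as, say, connect.sh (the file name is arbitrary), the script is invoked with the server's nickname:

sh connect.sh production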



April 18, 2012

 

AWS audit

Here is a shell script that will give you a complete picture of all the objects you have across all regions.


#!/bin/sh
# skip the following 4 lines if the environment variables are already set
private_key='pk-company.pem'
cert='cert-company.pem'

export EC2_PRIVATE_KEY=$private_key
export EC2_CERT=$cert

ec2-describe-regions | cut -f 2 | while read myregion
do
echo "========================"
echo "$myregion Details"
echo "========================"
echo "Instance list"
ec2-describe-instances --region $myregion
echo "Volues list"
ec2-describe-volumes --region $myregion
echo "Addresses list"
ec2-describe-addresses --region $myregion
echo "Zones list"
ec2-describe-availability-zones --region $myregion
echo "Group list"
ec2-describe-group --region $myregion
echo "Image list"
ec2-describe-images --region $myregion
echo "Keypair list"
ec2-describe-keypairs --region $myregion
echo "Reserved Instances list"
ec2-describe-reserved-instances --region $myregion
echo "snapshots list"
ec2-describe-snapshots --region $myregion
done > ec2_audit.txt
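
The report lands in ec2_audit.txt, and the region banners make it easy to jump from one region to the next, for example:

grep -n "Details" ec2_audit.txt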



April 12, 2012

 

API tools to manage instances

#!/bin/sh
# here are commands that can be part of a shell script to do most of the instance management tasks for you

# define variables
ami_id='ami-5647a33f'
private_key='pk-developer.pem'
cert='cert-developer.pem'
instance_type='t1.micro'
## t1.micro 0.02 ## m1.small 0.08 ## m1.medium 0.16 ## c1.medium 0.165 ## m1.large 0.32 ## m2.xlarge 0.45 ## m1.xlarge 0.64
## c1.xlarge 0.66 ## m2.2xlarge 0.9 ## cc1.4xlarge 1.3 ## m2.4xlarge 1.8 ## cg1.4xlarge 2.1 ## cc2.8xlarge 2.4

zone='us-east-1a'
group='quick-start-1'
# default group will be used if not specified
key='virginia_developer'
# default region is virginia us-east-1
region='us-east-1'
## us-west-2 US West Oregon ## us-west-1 US West N. California ## eu-west-1 EU West Ireland
## ap-southeast-1 Asia Pacific Singapore ## ap-northeast-1 Asia Pacific Tokyo ## sa-east-1 South America Sao Paulo

# size in GB and mount point
volume_size='100'
volume_device='/dev/sdh'


cat > pk-developer.pem << "my_heredoc"
-----BEGIN PRIVATE KEY-----
-----END PRIVATE KEY-----
my_heredoc

cat > cert-developer.pem << "my_certdoc"
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
my_certdoc

cat > virginia_developer.pem << "my_devkey"
-----BEGIN RSA PRIVATE KEY-----
-----END RSA PRIVATE KEY-----
my_devkey

cat > oregon_developer.pem << "my_devkey"
-----BEGIN RSA PRIVATE KEY-----
-----END RSA PRIVATE KEY-----
my_devkey

chmod 700 *.pem

# set environment variables
export EC2_PRIVATE_KEY=$private_key
export EC2_CERT=$cert

ec2-run-instances $ami_id --instance-type $instance_type --region $region --availability-zone $zone --group $group --key $key > run_instances_stan.txt

instance_id=`egrep ^INSTANCE run_instances_stan.txt | cut -f 2`
instance_date=`egrep ^INSTANCE run_instances_stan.txt | cut -f 7`
instance_aki=`egrep ^INSTANCE run_instances_stan.txt | cut -f 9`
instance_ari=`egrep ^INSTANCE run_instances_stan.txt | cut -f 10`

ec2-create-volume --size $volume_size --availability-zone $zone > create_volume_stan.txt

volume_id=`egrep ^VOLUME create_volume_stan.txt | cut -f 2`
volume_date=`egrep ^VOLUME create_volume_stan.txt | cut -f 6`

sleep 120

ec2-describe-instances --region $region "$instance_id" > describe_instances_stan.txt

instance_ip=$(egrep ^INSTANCE describe_instances_stan.txt | cut -f4)

ec2-attach-volume $volume_id --instance $instance_id --device $volume_device

MOUNT=' mkdir /data; mkfs.ext3 /dev/sdh; mount -t ext3 /dev/sdh /data; echo "/dev/sdh /data ext3 defaults 0 0" >>/etc/fstab; '
CMD="$MOUNT echo 0 > /selinux/enforce; yum -y install mysql mysql-server mysql-client java; sed -i.bak 's| *datadir *=.*|datadir = /data/|g' /etc/my.cnf; mysql_install_db ; /etc/init.d/mysqld start ; "

ssh -i $key.pem root@$instance_ip "$CMD"

exit

cat > README << "readme_heredoc"

# get the volume ID
src_volumeid=$(egrep ^BLOCKDEVICE describe_instances_stan.txt | cut -f3)

# Now get the snapshot id and the size from the volume id
ec2-describe-volumes --region $region "$src_volumeid" | egrep ^VOLUME > /tmp/volume_info
src_snapshotid=$(cut -f4 /tmp/volume_info)
echo $src_snapshotid
src_size=$(cut -f3 /tmp/volume_info)
echo $src_size
# Create a new volume from the snapshot
#src_volumeid=$(ec2-create-volume --region $src_region --snapshot $src_snapshotid -z $src_availability_zone | egrep ^VOLUME | cut -f2)
echo $src_volumeid

ec2-attach-volume --region $src_region $src_volumeid -i $src_instanceid -d $src_device

# install required tools

# ubuntu
sudo apt-get install ec2-api-tools

# fedora
# http://rpmfind.net/linux/rpm2html/search.php?query=ec2-api-tools
wget ftp://rpmfind.net/linux/rpmfusion/nonfree/el/updates/testing/5/i386/ec2-api-tools-1.3.36506-1.el5.noarch.rpm
yum install java
rpm -iUh *.rpm

# describe default images owned by Amazon
ec2-describe-images -o amazon

## terminate instances
# ec2-terminate-instances

## snapshots

Make a note of the volume-id and the device it is connected to, e.g. vol-abcd1234 and /dev/sdf. Also make a note of the ramdisk and kernel your running instance is using; they will be something like "ari-12345678" and "aki-abcdef12".

# ec2-create-snapshot vol-abcd1234

That'll give you a snapshot-id back. You then need to wait for the snapshot to finish. Keep running this until it says "completed":

# ec2-describe-snapshots snap-1234abcd

Finally, you can register the snapshot as an AMI:

# ec2-register --snapshot snap-1234abcd --description "your description here" --name "something-significant-here" --ramdisk ari-12345678 --kernel aki-abcdef12

ec2-describe-instances --private-key pk-developer.pem --cert cert-developer.pem

ec2-create-image -n "My AMI" i-eb977f82

ec2-create-tags

readme_heredoc



April 11, 2012

 

re-sync a table to slave

Let's assume the table ox_data_summary in the database companydb is out of sync with the master and we need to update it quickly. These 4 commands (1 on the master and 3 on the slave) will update 40 million records in less than 3 minutes. Make sure that the table is MyISAM and that no thread is writing to it on the master, or else you will copy inconsistent data.

On Master:

time rsync -e 'ssh -i /root/vservmys3rv3r.pem' -avz /mysql/vserv/ox_data_summary.* ec2-107-21-74-91.compute-1.amazonaws.com:/mysql/test/
_____

On slave:

mysql test -e" flush table test.ox_data_summary"
mysql companydb -e"drop table companydb.ox_data_summary"
mysql companydb -e"rename table test.ox_data_summary to companydb.ox_data_summary"

You may need to change companydb to vserv or any other schema name.

We can take the count and check if it matches with the master:
mysql test -e" select count(*) from test.ox_data_summary"

We can also do the same in 2 commands by copying data directly to /mysql/vserv/ folder on slave.
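
A plain count can miss differences. CHECKSUM TABLE is a stronger check; run it on both the master and the slave and compare the two values:

mysql -e"checksum table companydb.ox_data_summary"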



April 09, 2012

 

Change engine to archive

Here is a script that will change the engine type of all tables, across all schemas, to ARCHIVE.


#!/bin/sh

mysql -Bse"select concat(TABLE_SCHEMA, ' ', TABLE_NAME) from information_schema.tables where TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'test') " | while read schema_name tbl_name
do
affix="_archive"
mytbl_name="$tbl_name$affix"
mysql -e"create table $schema_name.$mytbl_name select * from $schema_name.$tbl_name where 1 = 2"
mysql -e"alter table $schema_name.$mytbl_name engine=archive"
mysql -e"insert into $schema_name.$mytbl_name select * from $schema_name.$tbl_name"
if [ $? -eq '0' ];then
mysql -e"drop table $schema_name.$tbl_name"
fi

done
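
To confirm that the conversion worked, list the engines afterwards with the same information_schema filter the script uses:

mysql -e"select TABLE_SCHEMA, TABLE_NAME, ENGINE from information_schema.tables where TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'test')"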


