Create and configure load-balancer:
elb-create-lb MyLB --availability-zones us-east-1b --listener "protocol=TCP,lb-port=3306,instance-port=3306"
elb-configure-healthcheck MyLB --target "TCP:3306" --interval 30 --timeout 2 --healthy-threshold 6 --unhealthy-threshold 2
elb-register-instances-with-lb MyLB --instances i-29184c43
Create launch configuration and auto-scaling group:
as-create-launch-config MyLC --image-id ami-41d00528 --instance-type m1.small --key severalnines --group quick-start-1
as-create-auto-scaling-group MyAutoScalingGroup --launch-configuration MyLC --availability-zones us-east-1a --min-size 2 --max-size 5 --load-balancers MyLB
Create policies and alarms:
as-put-scaling-policy MyScaleUpPolicy --auto-scaling-group MyAutoScalingGroup --adjustment=1 --type ChangeInCapacity --cooldown 300
mon-put-metric-alarm MyHighCPUAlarm --comparison-operator GreaterThanThreshold --evaluation-periods 1 --metric-name CPUUtilization --namespace "AWS/EC2" --period 600 --statistic Average --threshold 80 --alarm-actions Policy_from_previous_step --dimensions "AutoScalingGroupName=MyAutoScalingGroup"
as-put-scaling-policy MyScaleDownPolicy --auto-scaling-group MyAutoScalingGroup --adjustment=-1 --type ChangeInCapacity --cooldown 300
mon-put-metric-alarm MyLowCPUAlarm --comparison-operator LessThanThreshold --evaluation-periods 1 --metric-name CPUUtilization --namespace "AWS/EC2" --period 600 --statistic Average --threshold 40 --alarm-actions Policy_from_previous_step --dimensions "AutoScalingGroupName=MyAutoScalingGroup"
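Note: the value passed to --alarm-actions should be the ARN of the scaling policy, which as-put-scaling-policy prints when the policy is created. A minimal sketch, assuming the command prints only the ARN on stdout (the ARN shown in the comment is hypothetical):
SCALE_UP_ARN=`as-put-scaling-policy MyScaleUpPolicy --auto-scaling-group MyAutoScalingGroup --adjustment=1 --type ChangeInCapacity --cooldown 300`
# SCALE_UP_ARN now holds something like arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:...:policyName/MyScaleUpPolicy
mon-put-metric-alarm MyHighCPUAlarm --comparison-operator GreaterThanThreshold --evaluation-periods 1 --metric-name CPUUtilization --namespace "AWS/EC2" --period 600 --statistic Average --threshold 80 --alarm-actions $SCALE_UP_ARN --dimensions "AutoScalingGroupName=MyAutoScalingGroup"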
Describe scaling:
as-describe-auto-scaling-groups
as-describe-scaling-activities
as-describe-launch-configs
Associate domain:
## associate and disassociate a Route 53 domain with the LB
elb-associate-route53-hosted-zone useast --rr-name mydb.shantanuoak.com --hosted-zone-id Z36XR238KZYP2F --weight 100
elb-disassociate-route53-hosted-zone useast --rr-name mydb.shantanuoak.com --hosted-zone-id Z36XR238KZYP2F --weight 100
Pause auto-scale:
as-suspend-processes MyAutoScalingGroup
as-resume-processes MyAutoScalingGroup
Delete launch configuration and auto-scaling group:
as-delete-auto-scaling-group MyAutoScalingGroup --force-delete
as-delete-launch-config MyLC
Replace launch config:
as-create-launch-config app-server-launch-config-2 --image-id ami-c503e8ac --instance-type c1.medium --group web
as-update-auto-scaling-group app-server-as-group-1 --launch-configuration app-server-launch-config-2
Terminate instances:
as-update-auto-scaling-group app-server-as-group-1 --min-size 0
as-terminate-instance-in-auto-scaling-group i-12345abc --decrement-desired-capacity
as-terminate-instance-in-auto-scaling-group i-12345abc --no-decrement-desired-capacity
Labels: aws
Here is the shell script that will install MySQL version 5.5 on a new instance.
sh -xv /root/clean_install.sh
#!/bin/sh
# mysql data directory
my_data_dir=/data/mysql/jun19
mydate=`date '+%j%H%M%S'`
## make sure new EBS is formatted
#dmesg
#time mkfs /dev/xvdt
## Add mount point if any and mysql start command in /etc/rc.local
#/bin/mkdir -p /data
#/bin/mount /dev/xvdab /data
#/etc/init.d/iptables stop
#/etc/init.d/mysql start
## disable selinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
/usr/sbin/setenforce 0
echo 0 > /selinux/enforce
## shut-down mysql if already running
mysqladmin shutdown
# remove old data directory
#rm -rf /var/lib/mysql/
rm -rf /root/download
## create required directories
# datadir
mkdir -p $my_data_dir
# pid directory
mkdir -p /var/run/mysql
# default socket directory
mkdir -p /var/lib/mysql
# download directory
mkdir /root/download
cd /root/download
wget http://files.directadmin.com/services/all/mysql/64-bit/5.5.20/MySQL-client-5.5.20-1.linux2.6.x86_64.rpm
wget http://files.directadmin.com/services/all/mysql/64-bit/5.5.20/MySQL-devel-5.5.20-1.linux2.6.x86_64.rpm
wget http://files.directadmin.com/services/all/mysql/64-bit/5.5.20/MySQL-server-5.5.20-1.linux2.6.x86_64.rpm
wget http://files.directadmin.com/services/all/mysql/64-bit/5.5.20/MySQL-shared-5.5.20-1.linux2.6.x86_64.rpm
# create my.cnf file
cat > /etc/my.cnf << heredoc
[mysqld]
datadir=$my_data_dir
socket=/var/lib/mysql/mysql.sock
user=root
max_connections=2000
max_connect_errors=18446744073709547520
open_files_limit=40000
#slave-skip-errors=1062
## logging
relay-log=slave-relay-bin
log-bin=master-bin
slow_query_log=1
slow_query_log_file=/mnt/slow.log
long_query_time=2
log-slow-slave-statements
## server-id derived from the date (%j%H%M%S) computed at the top of the script
server-id=$mydate
## network and slave needs more bandwidth
max_allowed_packet=256M
## temporary tables optimization
tmp_table_size=256M
max_heap_table_size=256M
## join queries use this buffer
read_buffer_size=32M
## Query Cache
query_cache_type = 1
query_cache_size = 200M
## Performance Tuning
key_buffer_size=1024M
## temp dir
tmpdir = /mnt/
## InnoDB optimization
innodb_buffer_pool_size=1024M
innodb_log_file_size=64M
innodb_file_per_table=1
innodb_flush_method=O_DIRECT
[mysqld_multi]
mysqld = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin
user = multi_admin
password = multipass
[mysqld2]
socket = /tmp/mysql.sock2
port = 3307
pid-file = /mnt/data/hostname.pid2
datadir = /mnt/data/
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysql/mysqld.pid
heredoc
# reset data-directory
mydir=`grep ^datadir /etc/my.cnf | awk -F'=' '{print $2}'`
rm -rf $mydir/*
# remove mysql
for package in `rpm -qa | grep -i mysql`
do
rpm -e $package
done
rpm -e mysql-libs-5.1.52-1.el6_0.1.x86_64
# install mysql
rpm -Uvh /root/download/*
# install php
yum install -y php
# install mysql system files
mysql_install_db --datadir=$mydir
# restart mysql
/etc/init.d/mysql restart
Labels: aws, mysql, shell script
We can increase / decrease the number of instances based on a particular image that has all the data built-in. Here are the 4 commands we can execute to create the auto-scaling group. Auto Scaling and Load Balancing tools can be installed from the following links...
http://aws.amazon.com/developertools/2535
http://aws.amazon.com/developertools/2536
A load balancer can be easily created / disabled from the web interface, but for auto-scaling we will need to install the command line tools mentioned above.
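Before the elb-* and as-* commands will work, the tools need Java and the AWS credentials to be configured. A minimal sketch of the environment setup, assuming the tools were unpacked under /opt (the directory names and the credential file path are placeholders):
export JAVA_HOME=/usr/lib/jvm/jre
export AWS_ELB_HOME=/opt/ElasticLoadBalancing
export AWS_AUTO_SCALING_HOME=/opt/AutoScaling
export PATH=$PATH:$AWS_ELB_HOME/bin:$AWS_AUTO_SCALING_HOME/bin
# credential file with two lines: AWSAccessKeyId=<your key> and AWSSecretKey=<your secret>
export AWS_CREDENTIAL_FILE=/root/credential-file.txt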
1) Create a load balancer:
elb-create-lb mysql_LoadBal --listener "lb-port=3306,instance-port=3306,protocol=TCP" --availability-zones us-east-1b
2) Create auto scaling group launch configuration:
as-create-launch-config mysql_Config --image-id ami-60da3d09 --instance-type m1.small
3) Create auto scaling group:
as-create-auto-scaling-group mysql_AutoScale --launch-configuration mysql_Config --availability-zones us-east-1b --min-size 1 --max-size 5 --load-balancers mysql_LoadBal
4) Create Trigger:
as-create-or-update-trigger mysql_Trigger1 --auto-scaling-group mysql_AutoScale --namespace "AWS/EC2" --measure CPUUtilization --statistic Average --dimensions "AutoScalingGroupName=mysql_AutoScale" --units "Percent" --period 60 --lower-threshold 30 --upper-threshold 70 --lower-breach-increment=-1 --upper-breach-increment=1 --breach-duration 120
When average CPU utilization in the group exceeds 70% over a two-minute interval, increase the number of instances up to a maximum of 5. When it falls below 30%, reduce the number of instances, but keep at least 1 instance running.
To check how it is working:
as-describe-scaling-activities mysql_AutoScale
_____
i) In order to disable auto-scaling we will first have to remove the trigger:
as-delete-trigger mysql_Trigger1 --auto-scaling-group mysql_AutoScale
ii) Set minimum and maximum size of the group to 0 to force all its instances to terminate:
as-update-auto-scaling-group mysql_AutoScale --min-size 0 --max-size 0
iii) Remove auto-scaling group:
as-delete-auto-scaling-group mysql_AutoScale
iv) Remove load balancer:
elb-delete-lb mysql_LoadBal
Labels: aws
Each EC2 instance can access run-time data about itself by making HTTP requests to the special address 169.254.169.254.
wget -q http://169.254.169.254/latest/meta-data/instance-id
cat instance-id
i-d830dfb0
You can also supply your own metadata when you launch an EC2 instance. This data is called "User Data". For example:
wget -q http://169.254.169.254/latest/user-data
cat user-data
Role=master_db,Size=small,Name=Production,Input=Queue1
You can have your custom AMI treat the user metadata as the URL of the script to be run when the instance starts.
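A minimal sketch of such a bootstrap step, e.g. called from /etc/rc.local of the custom AMI (treating the user data as a URL, and the file names used here, are assumptions):
#!/bin/sh
# fetch the user data supplied at launch time
wget -q -O /root/user-data http://169.254.169.254/latest/user-data
# treat the user data as the URL of a script, download it and run it
SCRIPT_URL=`cat /root/user-data`
wget -q -O /root/bootstrap.sh "$SCRIPT_URL"
sh -x /root/bootstrap.sh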
Labels: aws
We can glue the EBS volumes together into a RAID.
mdadm --create /dev/md0 --level 0 --metadata=1.1 --raid-devices 2 /dev/sdh1 /dev/sdh2
Add the following line to /etc/mdadm.conf (creating it if necessary):
DEVICES /dev/sdh1 /dev/sdh2
Run the following command to add additional configuration information to the file:
mdadm --detail --scan >> /etc/mdadm.conf
At this point the RAID volume (/dev/md0) has been created. It encapsulates the 2 volumes.
Create a file system:
mkfs /dev/md0
Create a mount point and mount the volume on it:
mkdir /data
mount /dev/md0 /data
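To have the array re-assembled and mounted after a reboot, the mdadm.conf entries above are needed and a line can be added to /etc/fstab (a sketch; ext3 is an assumption, the actual type depends on what mkfs created):
echo "/dev/md0 /data ext3 defaults,noatime 0 0" >> /etc/fstab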
Labels: aws
While using the "system" function to execute Linux commands within PHP, we can use a variable to store the return value. We continue only if the return value is 0 (success); otherwise we exit the script.
system("mysqldump -uroot db1 --ignore-table=db1.tbl_1 --ignore-table=db1.tbl_2 > /root/db1.sql", $retval);
if ($retval !== 0) {
exit ("dump db1.sql command failed");
}
I have always ignored the following warning.
[Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=slave-usw-relay-bin' to avoid this problem.
This made a big difference when I changed the AWS EBS and attached it to a new instance. Since the new instance had a new hostname (derived from its IP address), the default relay-log file names no longer matched. Therefore we must always specify the log-bin and relay-log parameters in my.cnf:
log-bin=usw-slave-bin
relay-log=usw-slave-relay-bin
_____
It is possible to recover a slave that has failed to open the relay log. MySQL may have saved its relay logs in /var/run by default, which gets cleared out on boot.
To fix this, there are 2 ways.
First, change the location MySQL uses for relay logging by adding the following line to the [mysqld] section of /etc/my.cnf:
relay-log = /var/lib/mysql/relay-bin
Then edit /var/lib/mysql/relay-log.info to point to the first new relay log (leaving the master information the same):
/var/lib/mysql/relay-bin.000001
1
mysql-bin.12345
123456789
Start the slave. Alternatively, if you already have the SHOW SLAVE STATUS output from the time the slave was stopped, note the following 2 parameters:
Relay_Master_Log_File: mis-bin.000710
Exec_Master_Log_Pos: 461352141
And then reset the slave and restart replication using the "CHANGE MASTER TO ..." statement, as sketched below.
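A sketch of that last step, plugging in the two values above (the master host, replication user and password are placeholders):
mysql -uroot -e "STOP SLAVE; RESET SLAVE;
CHANGE MASTER TO MASTER_HOST='master.example.com', MASTER_USER='repl', MASTER_PASSWORD='secret',
MASTER_LOG_FILE='mis-bin.000710', MASTER_LOG_POS=461352141;
START SLAVE;"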