Shantanu's Blog

Corporate Consultant

May 26, 2011

 

Check slave data

We need to check whether the slave is in sync with the master. Let's assume the slave IP is 10.10.10.3.
Now, assuming localhost is the master server, I can simply run the slave_check.sh script to check for data discrepancies.

# cat slave_check.sh
#!/bin/sh
myslave=${1:-10.10.10.3}
for dbName in `mysqlshow`
do
mk-table-checksum h=localhost,u=root,p=root@123,P=3306 --databases $dbName h=$myslave,u=root,p=root@123,P=3306 --databases $dbName --no-crc --count | mk-checksum-filter
done
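One caveat: `mysqlshow` prints a boxed table, so the loop above also iterates over the border and header tokens and would call mk-table-checksum with junk "database" names. A minimal filter sketch, demonstrated here on canned output since the real command needs a live server:

```shell
#!/bin/sh
# Canned `mysqlshow` output (the real command needs a running server).
sample_output='+--------------------+
|     Databases      |
+--------------------+
| information_schema |
| mydb               |
| mydb_log           |
+--------------------+'

# Keep only the database names: skip border lines and the header row,
# then strip the pipes and padding.
db_list=$(echo "$sample_output" |
  awk -F'|' 'NF > 1 && $2 !~ /Databases/ { gsub(/ /, "", $2); print $2 }')
echo "$db_list"
```

In the script, `for dbName in $db_list` would then loop over clean names only.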


hierarchy in parameters

While connecting to mysql, the sequence of options matters. E.g., the following does not work...
mysql -uUser -pPassWd --socket=/tmp/mysql.sock2

But this works as expected.
mysql --socket=/tmp/mysql.sock2 -uUser -pPassWd

The following does not work either...
mysql --socket=/tmp/mysql.sock2 -uUser -pPassWd --no-defaults

But the following should work:
mysql --no-defaults --socket=/tmp/mysql.sock2 -uUser -pPassWd

Interesting!
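The `--no-defaults` case at least is documented behaviour: the client honours it only when it is the first option on the command line. A toy parser (not the real client, just an illustration of first-argument-only handling):

```shell
#!/bin/sh
# Toy sketch of "first option wins": like the mysql client, this honours
# --no-defaults only when it is the very first argument; anywhere else it
# arrives too late, after the defaults files have already been read.
parse_args() {
  if [ "$1" = "--no-defaults" ]; then
    echo "defaults skipped"
  else
    echo "defaults already loaded"
  fi
}

first=$(parse_args --no-defaults --socket=/tmp/mysql.sock2)
late=$(parse_args --socket=/tmp/mysql.sock2 --no-defaults)
echo "$first / $late"
```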

May 25, 2011

 

NFS mounting to share free disk space

It is very easy to mount a drive from one server onto another on the same network. NFS sharing comes in handy when server 1 has a lot of disk space and server 2 has very little. Once mounted, the files in that directory can easily be shared between the two (or more) servers.

1) Check that the portmap and nfs services are running on both servers. Start them if necessary.

# /sbin/service portmap status
portmap (pid 2557) is running...

# /sbin/service nfs status
rpc.mountd (pid 2376) is running...
nfsd (pid 2373 2372 2371 2370 2369 2368 2367 2366) is running...
rpc.rquotad (pid 2344) is running...

2) On the server (10.10.10.1) where enough disk is available...
i) create directory to share:
# mkdir /db-nfs

ii) Add entry to exports file:
# cat /etc/exports
/db-nfs 10.10.10.0/24(rw,sync,no_root_squash)

iii) activate sharing:
# /usr/sbin/exportfs -a

3) On the server (10.10.10.2) where we need more disk space...
i) Create a mount point and mount the drive (the same mount point as in the fstab entry below):

# mkdir /mnt/db-nfs
# /bin/mount 10.10.10.1:/db-nfs /mnt/db-nfs

ii) Add it to fstab so the mount survives a reboot:
# cat /etc/fstab
10.10.10.1:/db-nfs/ /mnt/db-nfs nfs rw,hard,intr 0 0
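For reference, the six whitespace-separated fields of that fstab entry break down like this (a sketch that just labels the example line):

```shell
#!/bin/sh
# Label the six fstab fields of the NFS entry above.
line='10.10.10.1:/db-nfs/ /mnt/db-nfs nfs rw,hard,intr 0 0'
set -- $line
device=$1; mountpoint=$2; fstype=$3; options=$4; dump=$5; pass=$6
echo "device:     $device (remote export)"
echo "mountpoint: $mountpoint (local directory; must exist)"
echo "fstype:     $fstype"
echo "options:    $options"
echo "dump/pass:  $dump $pass (skip dump and boot-time fsck)"
```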

May 24, 2011

 

The maximum allowed size for varchar

# The maximum row size is 65535 bytes, and all the varchar columns have to fit within it. So the following create table statement fails.

mysql> create table encrypted ( can_des text, tran_address varchar(300), can_resume varchar(1300), can_remark varchar(65000) ) engine=MyISAM;
ERROR 1118 (42000): Row size too large. The maximum row size for the used table type, not counting BLOBs, is 65535. You have to change some columns to TEXT or BLOBs
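A rough byte count shows why: with latin1 (one byte per character) each varchar column stores its characters plus a 1- or 2-byte length prefix, while a TEXT column contributes only a small pointer to the row. A sketch of the arithmetic:

```shell
#!/bin/sh
# Rough row-size estimate for the failing CREATE TABLE above.
# With latin1, varchar(n) needs n bytes plus a length prefix:
# 1 byte when n <= 255, 2 bytes otherwise. The TEXT column (can_des)
# only adds a small pointer to the row, so it is ignored here.
tran_address=$((300 + 2))
can_resume=$((1300 + 2))
can_remark=$((65000 + 2))
total=$((tran_address + can_resume + can_remark))
echo "estimated row bytes: $total (limit: 65535)"
```

The estimate lands above 65535, which is why converting can_remark to text (below) makes the statement succeed.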

# We will have to change some columns to text, mediumtext or longtext. The biggest column, can_remark, is the obvious one to convert to text.

mysql> create table encrypted ( can_des text, tran_address varchar(300), can_resume varchar(1300), can_remark text ) engine=MyISAM;
Query OK, 0 rows affected (0.03 sec)

# The key can be added on smaller columns as usual.

mysql> alter table encrypted add key (tran_address);
Query OK, 0 rows affected (0.05 sec)
Records: 0 Duplicates: 0 Warnings: 0

# If I try to add a key on a varchar column longer than the 1000-byte key limit, I get the following warning and a partial (prefix) index is silently created, which is useless to me.

mysql> alter table encrypted add key (can_resume);
Query OK, 0 rows affected, 1 warning (0.03 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> show warnings;
+---------+------+----------------------------------------------------------+
| Level | Code | Message |
+---------+------+----------------------------------------------------------+
| Warning | 1071 | Specified key was too long; max key length is 1000 bytes |
+---------+------+----------------------------------------------------------+
1 row in set (0.00 sec)

mysql> show create table encrypted\G
*************************** 1. row ***************************
Table: encrypted
Create Table: CREATE TABLE `encrypted` (
`can_des` text,
`tran_address` varchar(300) default NULL,
`can_resume` varchar(1300) default NULL,
`can_remark` text,
KEY `tran_address` (`tran_address`),
KEY `can_resume` (`can_resume`(1000))
) ENGINE=MyISAM DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

# I prefer to drop the partial index

mysql> alter table encrypted drop key can_resume;
Query OK, 0 rows affected (0.03 sec)
Records: 0 Duplicates: 0 Warnings: 0

# A full-text index, however, can be built on big varchar and text columns

mysql> ALTER TABLE encrypted ADD FULLTEXT my_ft (can_des, can_resume, can_remark);
Query OK, 0 rows affected (0.02 sec)
Records: 0 Duplicates: 0 Warnings: 0

# finally the create table statement looks like this...

mysql> show create table encrypted\G
*************************** 1. row ***************************
Table: encrypted
Create Table: CREATE TABLE `encrypted` (
`can_des` text,
`tran_address` varchar(300) default NULL,
`can_resume` varchar(1300) default NULL,
`can_remark` text,
KEY `tran_address` (`tran_address`),
FULLTEXT KEY `my_ft` (`can_des`,`can_resume`,`can_remark`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

May 18, 2011

 

multiple instances of mysql

If you want to start mysql using a different data directory, the simple way is to change the datadir path in my.cnf. But there are several disadvantages to doing so:
1) I have to change my.cnf
2) and restart the server
3) I can not access data from 2 different locations at the same time
4) it does not work as expected with versions 5.1+

Welcome to mysqld_multi

http://dev.mysql.com/doc/refman/5.0/en/mysqld-multi.html

Add the following code to my.cnf and restart the mysql service. You can continue to use the data at the default location as you used to before.

[mysqld_multi]
mysqld = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin
user = multi_admin
password = multipass

[mysqld2]
socket = /tmp/mysql.sock2
port = 3307
pid-file = /var/lib/mysql2/hostname.pid2
datadir = /var/lib/mysql2

[mysqld3]
socket = /tmp/mysql.sock3
port = 3308
pid-file = /var/lib/mysql3/hostname.pid3
datadir = /var/lib/mysql3

[mysqld4]
socket = /tmp/mysql.sock4
port = 3309
pid-file = /var/lib/mysql4/hostname.pid4
datadir = /var/lib/mysql4

[mysqld6]
socket = /tmp/mysql.sock6
port = 3311
pid-file = /mnt/company-nfs/mysql/hostname.pid6
datadir = /mnt/company-nfs/mysql

Now start mysqld_multi to bring up the "mysqld6" instance. I have kept its data on an NFS partition for the sake of flexibility.

# mysqld_multi --no-log start 6
mysqld_multi log file version 2.16; run: Thu May 19 10:48:30 2011

Starting MySQL servers
# Starting mysqld daemon with databases from /mnt/company-nfs/mysql

I can now use the new mysql instance by using the following credentials...

mysql -h10.10.10.10 -usome_user -psome_password --socket=/tmp/mysql.sock6 --port=3311

I can start the other instances (2, 3 and 4) the same way. All I need to do is create the respective data directories and change their ownership and permissions so that mysql is allowed to read/write to those locations.

mysqld_multi stop 6

If the stop command as shown above does not work, try...

mysqladmin --socket=/tmp/mysql.sock6 shutdown

May 17, 2011

 

InnoDB monitor

If you suspect that InnoDB deadlocks are affecting application performance, turn on the InnoDB monitor.

To enable the standard InnoDB Monitor, create the innodb_monitor table:
CREATE TABLE innodb_monitor (a INT) ENGINE=INNODB;

To stop the Monitor, drop the table:
DROP TABLE innodb_monitor;

While the monitor is enabled, the InnoDB status output is written to the error log every few seconds. The same output is also available on demand with SHOW ENGINE INNODB STATUS\G.

May 12, 2011

 

Users without super privileges

It is recommended to have 2 types of users.

1) A user with all privileges, including the grant option. This user is equivalent to root.
grant all on *.* to 'foo'@'%' identified by 'bar' with grant option;

2) Another set of users who have all privileges except "super".
revoke super on *.* from 'foo'@'%';

There are 3 advantages to having users without "super" access:
1) More secure
2) Read-only slave
3) Logging

a) The first point needs no explanation.
b) When I start a slave in "read-only" mode, users without the super privilege can only read the data and can not write.
c) Even when the server is not started in "read-only" mode, we can log each user's source access point in a mysql table via init_connect. (Note that init_connect is not executed for users with the super privilege, so this logging covers exactly the non-super users.)

SET GLOBAL init_connect='create table if not exists test.connect(db_name varchar(100), user_with_ip varchar(100), user_permission varchar(100),
thread_id varchar(100), connect_time datetime) engine=MyISAM; insert into test.connect select @@hostname as db_name, user() as user_with_ip,
current_user() as user_permission, connection_id() as connect_id, now() as connect_time';

This helps to analyse how many times a particular credential was used. The thread_id can also be matched against a binary log extract to find the commands executed by a user in that thread.

May 11, 2011

 

Aspersa to watch CPU load

Here is a shell script that will collect server info when the CPU load is too high.

http://aspersa.googlecode.com/svn/html/stalk.html

The files are usually saved in the /root/user/collected folder.

cat aspersa.sh
#!/bin/sh
# cron to check CPU load every 5 minutes
# */5 * * * * /home/aspersa.sh > /home/aspersa_success.txt 2> /home/aspersa_err.txt
# this will collect server info when the CPU load is more than 10

echo "aspersa tool required, download from the following location"
echo "svn checkout http://aspersa.googlecode.com/svn/trunk/ aspersa-read-only"

# default threshold is 10 unless specified on the command line
cpuload=${1:-10}

# second field of /proc/loadavg is the 5-minute load average; keep the integer part
mynum=`awk '{print $2}' /proc/loadavg | awk -F"." '{print $1}'`

# numeric comparison: run collect only when the load exceeds the threshold
if [ "$mynum" -gt "$cpuload" ]; then
echo "running collect tool"
sh /home/aspersa/collect
else
echo "not running collect tool since CPU load is less than $cpuload"
fi
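One pitfall worth calling out: inside bash's [[ ]], the "<" operator compares strings, not numbers, so load values with different digit counts can compare the wrong way; -gt/-lt do a numeric comparison. A quick demonstration of the difference using sort:

```shell
#!/bin/sh
# As *strings*, "10" sorts before "9" (the character '1' < '9'),
# which is wrong for load averages. Numeric sort gets it right.
lexical_first=$(printf '9\n10\n' | LC_ALL=C sort | head -1)
numeric_first=$(printf '9\n10\n' | sort -n | head -1)
echo "string sort picks:  $lexical_first"
echo "numeric sort picks: $numeric_first"
```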

May 08, 2011

 

Compare table count between 2 schema

If I need to compare two schemas on the same server, here is a script that comes in handy.

# cat mydiff.sh
#!/bin/sh
db1=${1:-'mydb'}
db2=${2:-'mydb_log'}
echo $db1
mysqlshow $db1 --count > $db1.txt
mysqlshow $db2 --count > $db2.txt
diff $db1.txt $db2.txt --side-by-side --suppress-common-lines
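Here is what those diff flags do, demonstrated on two canned count files (the table names and row counts are invented for illustration):

```shell
#!/bin/sh
# Demo of the diff options used above, on canned `mysqlshow --count`-style
# data (the table names and row counts are made up).
printf 'users 100\norders 250\nlogs 7\n' > /tmp/db1.txt
printf 'users 100\norders 251\nlogs 7\n' > /tmp/db2.txt

# Only the rows whose counts differ are printed, one per line.
changed=$(diff /tmp/db1.txt /tmp/db2.txt --side-by-side --suppress-common-lines || true)
echo "$changed"
rm -f /tmp/db1.txt /tmp/db2.txt
```

Matching rows are suppressed, so an empty output means the two schemas have identical table counts.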

May 03, 2011

 

Logging records for archival purpose

There are times when we need to log all the old values of a table. E.g., when a user updates his security answer, the old value should be saved in a log table; if a user changed his answer 4 times, I need all 4 values. I prefer to create the archival tables on the slave instead of the master. Here is a shell script that creates an exact replica of the schema "account" as "account_log" with all the current data. The account database keeps getting updated whenever there is an update/insert on the master, while the log database keeps accumulating the old and new values of each table.

back_track.sh

#!/bin/sh
# duplicate schema table to hold the archival values
# change the schema name below that you want to track
schema_name='account'
# username and password to access MySQL
user_name='root'
pass_word='root@123'

echo "drop database if exists "$schema_name"_log;"
echo "create database if not exists "$schema_name"_log;"
mysql -u$user_name -p$pass_word -Bse"SELECT CONCAT(\"CREATE TABLE "$schema_name"_log.\",TABLE_NAME,\" SELECT * FROM $schema_name.\",TABLE_NAME,\";\") from information_schema.columns WHERE TABLE_SCHEMA = \"$schema_name\" group by TABLE_NAME"
echo "use $schema_name;"
mysql -u$user_name -p$pass_word -Bse"SELECT CONCAT(\"DROP TRIGGER IF EXISTS \",TABLE_NAME, \"_update;\") from information_schema.columns WHERE TABLE_SCHEMA = \"$schema_name\" group by TABLE_NAME"
mysql -u$user_name -p$pass_word -Bse"SELECT CONCAT(\"DROP TRIGGER IF EXISTS \",TABLE_NAME, \"_insert;\") from information_schema.columns WHERE TABLE_SCHEMA = \"$schema_name\" group by TABLE_NAME"
echo 'DELIMITER $$'
mysql -u$user_name -p$pass_word -Bse"SELECT CONCAT(\"CREATE TRIGGER \",TABLE_NAME,\"_update AFTER UPDATE ON \",TABLE_NAME, \" FOR EACH ROW BEGIN INSERT INTO "$schema_name"_log.\",TABLE_NAME,\" values (\", GROUP_CONCAT(CONCAT(\"NEW.\",COLUMN_NAME)), \"); END;\") from information_schema.columns WHERE TABLE_SCHEMA = \"$schema_name\" group by TABLE_NAME"
mysql -u$user_name -p$pass_word -Bse"SELECT CONCAT(\"CREATE TRIGGER \",TABLE_NAME,\"_insert AFTER INSERT ON \",TABLE_NAME, \" FOR EACH ROW BEGIN INSERT INTO "$schema_name"_log.\",TABLE_NAME,\" values (\", GROUP_CONCAT(CONCAT(\"NEW.\",COLUMN_NAME)), \"); END;\") from information_schema.columns WHERE TABLE_SCHEMA = \"$schema_name\" group by TABLE_NAME"
echo '$$'
_____

You can call the script and push the output to mysql using...
sh back_track.sh | mysql -uroot -proot@123 -h10.10.10.65 -P4040

Let's assume one record of the user table (account schema) was updated. You can check the old and new values by querying the corresponding table in the account_log schema.

_____

The following script helps to see the changes that have taken place in each column.

# cat audit.sh
#!/bin/sh
tbl_name=${1:-'address'}
mysql -Bse"select id, count(*) as cnt from database_log.$tbl_name group by id having cnt > 1" | while read -r id cnt
do
echo "**************************"
echo "values changed for ID $id"
echo "**************************"
mysql -Bse"select * from database_log.$tbl_name where id = $id\G" | sort -t":" -k1,1 -s | grep -v '^*'
done
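The `sort -t":" -k1,1 -s` step is what groups the \G output by column name (the stable -s flag keeps the rows in their original order within each column). A self-contained demonstration on canned output, with hypothetical id/city columns:

```shell
#!/bin/sh
# Canned `mysql ... \G` output for two archived versions of one row
# (hypothetical id/city columns). A stable sort on the column name puts
# the successive values of each column next to each other; the row
# separators are then dropped, as in audit.sh.
rows='*************************** 1. row ***************************
    id: 5
  city: pune
*************************** 2. row ***************************
    id: 5
  city: mumbai'

grouped=$(echo "$rows" | LC_ALL=C sort -t':' -k1,1 -s | grep -v '^\*')
echo "$grouped"
```

The two city values now sit on adjacent lines, making the change from one archived version to the next easy to spot.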

May 01, 2011

 

Replace into and On duplicate key

A detailed analysis of what happens when we use different options to insert/update a record.

Here is a sample table...
+----+--------+---------+----------------------+
| id | emp_id | name | address |
+----+--------+---------+----------------------+
| 1 | 1 | amar | hyderabad, AP, india |
| 2 | 2 | akbar | karachi, pakistan |
| 3 | 3 | anthony | london, UK |
+----+--------+---------+----------------------+

We want to insert a new record with emp_id 3, name 'anthony' and address 'South Africa'. If the record is already present in the DB, it should be updated instead.

This may look like a simple task, but a look at the table structure makes it clear that there is no composite unique index, only separate per-column unique keys.

CREATE TABLE `mykey` (
`id` int(11) NOT NULL auto_increment,
`emp_id` int(11) default NULL,

`name` varchar(100) default NULL,
`address` varchar(100) default NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `emp_id` (`emp_id`),
UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB

The challenge is to write an insert/update statement that satisfies all the constraints and fails predictably only in certain cases.

1) "insert ignore" prefers not to do anything: on a duplicate key it silently skips the row.
2) "replace into" may look like the best choice, but it also changes the auto_increment ID and has issues when there is more than one unique key.
E.g., in the example below, while trying to insert emp_id 3 with the name 'akbar', it actually removed one row.

3) I guess the "on duplicate key update" clause is the right choice. It does not change the auto_increment id while updating the record, and even though there are no composite unique indexes in the table, it works as if they were present.

mysql> drop table if exists mykey;
Query OK, 0 rows affected (0.02 sec)

mysql> create table mykey(id int not null auto_increment, emp_id int, name varchar(100), address varchar(100), primary key (id), unique(emp_id), unique(name)) engine=InnoDB;
Query OK, 0 rows affected (0.13 sec)

mysql> insert into mykey (emp_id, name, address) values (1, 'amar', 'hyderabad, AP, india') ;
Query OK, 1 row affected (0.00 sec)

mysql> insert into mykey (emp_id, name, address) values (2, 'akbar', 'karachi, pakistan');
Query OK, 1 row affected (0.00 sec)

mysql> insert into mykey (emp_id, name, address) values (3, 'anthony', 'london, UK') ;
Query OK, 1 row affected (0.00 sec)

mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)

mysql> savepoint savepoint_one;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from mykey;
+----+--------+---------+----------------------+
| id | emp_id | name | address |
+----+--------+---------+----------------------+
| 1 | 1 | amar | hyderabad, AP, india |
| 2 | 2 | akbar | karachi, pakistan |
| 3 | 3 | anthony | london, UK |
+----+--------+---------+----------------------+
3 rows in set (0.00 sec)
_____

# Case 1: "insert ignore"

mysql> insert ignore into mykey (emp_id, name, address) values (3, 'anthony', 'South Africa') ;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from mykey;
+----+--------+---------+----------------------+
| id | emp_id | name | address |
+----+--------+---------+----------------------+
| 1 | 1 | amar | hyderabad, AP, india |
| 2 | 2 | akbar | karachi, pakistan |
| 3 | 3 | anthony | london, UK |
+----+--------+---------+----------------------+
3 rows in set (0.00 sec)

_____

# Case 2: "replace into"

mysql> replace into mykey (emp_id, name, address) values (3, 'anthony', 'South Africa') ;
Query OK, 2 rows affected (0.00 sec)

mysql> select * from mykey;
+----+--------+---------+----------------------+
| id | emp_id | name | address |
+----+--------+---------+----------------------+
| 1 | 1 | amar | hyderabad, AP, india |
| 2 | 2 | akbar | karachi, pakistan |
| 4 | 3 | anthony | South Africa |
+----+--------+---------+----------------------+
3 rows in set (0.00 sec)

mysql> rollback to savepoint_one;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from mykey;
+----+--------+---------+----------------------+
| id | emp_id | name | address |
+----+--------+---------+----------------------+
| 1 | 1 | amar | hyderabad, AP, india |
| 2 | 2 | akbar | karachi, pakistan |
| 3 | 3 | anthony | london, UK |
+----+--------+---------+----------------------+
3 rows in set (0.00 sec)

mysql> replace into mykey (emp_id, name, address) values (3, 'akbar', 'South Africa') ;
Query OK, 3 rows affected (0.00 sec)

mysql> select * from mykey;
+----+--------+-------+----------------------+
| id | emp_id | name | address |
+----+--------+-------+----------------------+
| 1 | 1 | amar | hyderabad, AP, india |
| 5 | 3 | akbar | South Africa |
+----+--------+-------+----------------------+
2 rows in set (0.00 sec)
_____

# Case 3: "on duplicate key"

mysql> rollback to savepoint_one;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from mykey;
+----+--------+---------+----------------------+
| id | emp_id | name | address |
+----+--------+---------+----------------------+
| 1 | 1 | amar | hyderabad, AP, india |
| 2 | 2 | akbar | karachi, pakistan |
| 3 | 3 | anthony | london, UK |
+----+--------+---------+----------------------+
3 rows in set (0.01 sec)

mysql> insert into mykey (emp_id, name, address) values (3, 'akbar', 'South Africa') on duplicate key update emp_id = 3, name = 'akbar', address='South Africa' ;
ERROR 1062 (23000): Duplicate entry 'akbar' for key 3

mysql> insert into mykey (emp_id, name, address) values (3, 'akbar', 'South Africa') on duplicate key update emp_id = 3, name = 'anthony', address='South Africa' ;
Query OK, 2 rows affected (0.00 sec)

mysql> select * from mykey;
+----+--------+---------+----------------------+
| id | emp_id | name | address |
+----+--------+---------+----------------------+
| 1 | 1 | amar | hyderabad, AP, india |
| 2 | 2 | akbar | karachi, pakistan |
| 3 | 3 | anthony | South Africa |
+----+--------+---------+----------------------+
3 rows in set (0.00 sec)
