Shantanu's Blog

Database Consultant

August 31, 2010

 

MySQL Case Study - 178

substring to return all values after delimiter
How do I get all the values after the first delimiter? In the following example I am expecting 'xyz@yahoo.com,pqr@company.com'

(02:40) mysql>select substring_index('abc@hotmail.com,xyz@yahoo.com,pqr@company.com', ',', 1) as first;
+-----------------+
| first           |
+-----------------+
| abc@hotmail.com |
+-----------------+
1 row in set (0.00 sec)

(02:41) mysql>select substring_index('abc@hotmail.com,xyz@yahoo.com,pqr@company.com', ',', -1) as last;
+-----------------+
| last            |
+-----------------+
| pqr@company.com |
+-----------------+
1 row in set (0.00 sec)
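
To actually answer the question, a negative count of -2 works here because the string has exactly three fields:

mysql>select substring_index('abc@hotmail.com,xyz@yahoo.com,pqr@company.com', ',', -2) as rest;
+-------------------------------+
| rest                          |
+-------------------------------+
| xyz@yahoo.com,pqr@company.com |
+-------------------------------+
1 row in set (0.00 sec)

When the number of fields is not known in advance, combining substring() with locate() drops everything up to and including the first comma, e.g. select substring(val from locate(',', val) + 1), where val stands for the full string.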


http://stackoverflow.com/questions/3606623/substring-to-return-all-values-after-delimiter



August 25, 2010

 

Using different location for Data

You can specify the path of the data directory while creating a table. For example:

create table mytest (id int, name varchar(100))
ENGINE = MyISAM
COMMENT = 'data directory is var mylink configured by shantanu'
DATA DIRECTORY = '/var/mylink';

I have added the comment so that others will know about my expertise. The index file .MYI can also be saved in the same location using the following parameter.
INDEX DIRECTORY = '/var/mylink';
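
Putting both clauses together, the full statement looks like this:

create table mytest (id int, name varchar(100))
ENGINE = MyISAM
DATA DIRECTORY = '/var/mylink'
INDEX DIRECTORY = '/var/mylink';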
_____

If you have already created the table, you will have to manually move the file to the new location and create a symbolic link.

1) Check if symbolic links are supported.
mysql>SHOW VARIABLES LIKE 'have_symlink';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| have_symlink  | YES   |
+---------------+-------+
1 row in set (0.06 sec)

2) Lock the table so that no writes are possible.
mysql>flush tables with read lock;

3) Create a directory on a partition where there is enough space, for example /var:
mkdir /var/mylink

4) Change the ownership to mysql and the permissions to 770 or 777
chown mysql:mysql /var/mylink
chmod 770 /var/mylink

5) Move the big data file to the new location
mv /mysql-data/shantanu/contact_attach.MYD /var/mylink/contact_attach.MYD

6) Symbolically link it
ln -sf /var/mylink/contact_attach.MYD /mysql-data/shantanu/contact_attach.MYD

7) Unlock tables
mysql> unlock tables;

8) Flush the table so that mysql will reread it.
mysql>flush tables shantanu.contact_attach;

Repair the table if necessary.

When I issue the repair table command, a temporary file (.TMD) is created in the new location and not in the data directory.
# ls -lht
total 3.0G
-rw-rw---- 1 mysql mysql 1.3G Aug 24 01:07 contact_attach.TMD
-rw-rw---- 1 mysql mysql 1.7G Aug 24 00:55 contact_attach.MYD

Note: The read lock will be global. It is necessary while we move the file to the new location. The expected downtime will be less than one minute per GB. You can move the file without the lock if you are sure that the table is not in use at that time.



August 21, 2010

 

3 reasons to avoid timestamp

Here are 3 reasons why you should avoid "timestamp" column type.

1) Range: It can store values only from '1970-01-01 00:00:01' UTC to '2038-01-19 03:14:07' UTC. This range is not enough for applications that need dates outside it, for example dates of birth before 1970 or expiry dates beyond 2038.

2) On update clause:
As you can see from the following screen, I specified only a "timestamp" column type in the create table statement, and MySQL silently added the "on update CURRENT_TIMESTAMP" clause. When I updated a different column, the value of the "mytime" column changed as well, from 02 to 33 (seconds). This is unexpected behavior for a novice coder.

mysql>use test;
mysql>create table stamptest(id int, name varchar(100), mytime timestamp);
Query OK, 0 rows affected (0.03 sec)

mysql>show create table stamptest\G
*************************** 1. row ***************************
Table: stamptest
Create Table: CREATE TABLE `stamptest` (
`id` int(11) default NULL,
`name` varchar(100) default NULL,
`mytime` timestamp NOT NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP
) ENGINE=MyISAM DEFAULT CHARSET=latin1
1 row in set (0.01 sec)

mysql>insert into stamptest values (1, 'john', NULL);
Query OK, 1 row affected (0.01 sec)

mysql>select * from stamptest;
+------+------+---------------------+
| id   | name | mytime              |
+------+------+---------------------+
|    1 | john | 2010-08-21 06:21:02 |
+------+------+---------------------+
1 row in set (0.00 sec)

mysql>update stamptest set name = 'abcd' where id = 1;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0

(06:21) mysql>select * from stamptest;
+------+------+---------------------+
| id   | name | mytime              |
+------+------+---------------------+
|    1 | abcd | 2010-08-21 06:21:33 |
+------+------+---------------------+
1 row in set (0.00 sec)
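
By the way, if you must keep a timestamp column, declaring the DEFAULT explicitly should suppress the automatic "on update" clause (this is how MySQL 5.x treats the first timestamp column), so that updates to other columns leave it alone:

mysql>create table stamptest2(id int, name varchar(100), mytime timestamp not null default CURRENT_TIMESTAMP);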


3) Back-up using dump
The times stored in a timestamp column get adjusted to the timezone of the server on which you run the mysqldump command.
The --skip-tz-utc switch needs to be added to the dump command, and most people are unaware of this.
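
For example, assuming a database named customers:

mysqldump --skip-tz-utc customers > customers.sql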

http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_tz-utc

The only advantage that I can see is that it consumes 4 bytes compared to datetime's 8 bytes. This does not make much difference for most general-purpose applications.

The poor PHP programmers have their own deadlines and are less interested in learning these things. Do not use the "timestamp" column type, and avoid using the from_unixtime() and unix_timestamp() functions to save date and time values at any cost. The only option left to save the date+time is... what else? "datetime"!!



August 20, 2010

 

Information Schema Tables

When the database has too many tables or when the innodb table space is too large, queries against the information_schema database can take forever. It is a good idea to copy the data of those tables to a new database, e.g. "schemaBackup". Here is the script that will do the needful.

# cat stats.sh
#!/bin/sh
mysql -e"create database if not exists schemaBackup";

for tblName in `mysql -e"show tables from information_schema"`
do
mysql -e"drop table if exists schemaBackup.$tblName;"
mysql -e"create table schemaBackup.$tblName select * from information_schema.$tblName;"
done

for tblName in `mysql -e"show tables from mysql"`
do
mysql -e"drop table if exists schemaBackup.$tblName;"
mysql -e"create table schemaBackup.$tblName select * from mysql.$tblName;"
done
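
Once the copy exists, the same queries that crawl against information_schema return instantly from the backup. For example, to list the 10 largest tables:

mysql -e"select TABLE_NAME, TABLE_ROWS from schemaBackup.TABLES order by TABLE_ROWS desc limit 10"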



August 19, 2010

 

Backup using mylvmbackup

Here are the commands to create a mylvmbackup user with minimal privileges:

CREATE USER 'mylvmbackup'@'localhost' IDENTIFIED BY '12345';
GRANT RELOAD, REPLICATION CLIENT ON *.* TO 'mylvmbackup'@'localhost';
GRANT SELECT ON mysql.* TO 'mylvmbackup'@'localhost';

In the mylvmbackup.conf file, the corresponding entries are:

[mysql]
user=mylvmbackup
password=12345
host=localhost

By default, mylvmbackup creates a .tar.gz compressed backup file of your data.

mylvmbackup supports backing up the uncompressed files using rsync. To copy MySQL data to a remote host, configure the following in the mylvmbackup.conf file:
[fs]
backupdir=root@backuphost:/data/backup/mysql
[misc]
backuptype=rsync

You can back up to a remote host or save the files locally.
[fs]
backupdir=/mnt/backup/mysql
[misc]
backuptype=rsync
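
The volume details can also be passed on the command line. The volume group vg0 and the logical volume name mysql below are assumptions, so substitute your own:

mylvmbackup --vgname=vg0 --lvname=mysql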



August 18, 2010

 

Memory loss

There are times when you see the following PHP error in the error log.

[Mon Aug 02 14:28:11 2010] [error] PHP Fatal error: Allowed memory size of 1073741824 bytes exhausted (tried to allocate 292995097 bytes) in /var/www/html/text.inc on line 25, referer: https://yahoo.com/Compose.php?con_id=cand74035

The usual fix for this problem is as follows. At the top of the Compose.php page, use the following statement:

ini_set("memory_limit","1536M");

This will increase the memory allocation for the current script only. Note that the new limit has to be higher than the one that was exhausted; in the error above the limit was already 1 GB, so a value like 200M would actually lower it.



 

Disable or enable logging

When you are restoring data from a dump file, the logging needs to be disabled. Here are 2 shell scripts that will disable (or enable) the binary, slow and general logs completely. The mysql service needs to be restarted after you run either script.

If you do not want to restart the mysql service, then log in to the mysql prompt and type...
mysql> set sql_log_bin = 0;
mysql> use DB_Name;
mysql> source dumpfile.sql;


# cat enable_logs.sh
#!/bin/sh
# enable logs by removing the comment
# first backup the current my.cnf
cp /etc/my.cnf /home/develop/my_$(date +'%d-%m-%y-%H-%M').cnf

sed -i "s/^#log-slow-queries=/log-slow-queries=/" /etc/my.cnf
sed -i "s/^#log-bin=/log-bin=/" /etc/my.cnf
#sed -i "s|^log=.*$|#\\0|" /home/develop/development.cnf
# the general log is not handled here

# restart mysql service
#/etc/init.d/mysql restart



# cat disable_logs.sh
#!/bin/sh
# disable logs by adding the comment
# backup the current my.cnf file to develop subfolder
cp /etc/my.cnf /home/develop/my_$(date +'%d-%m-%y-%H-%M').cnf

sed -i "s|^log-slow-queries=.*$|#\\0|" /etc/my.cnf
sed -i "s|^log-bin=.*$|#\\0|" /etc/my.cnf
#sed -i "s|^log=.*$|#\\0|" /home/develop/development.cnf
# the general log is not handled here

# restart mysql service
#/etc/init.d/mysql restart



 

Extract of Apache error log

There are times when you need to check the latest entries in the apache error log.
Here is an easy way to do so.

Cron entry:
# extract of apache error log
# updated every 5 minutes and posted to http://yourSite.com/shantanu/index.txt
*/5 * * * * sh /home/develop/apache.sh >> /home/develop/apache_succ.txt 2>> /home/develop/apache_err.txt

Shell script:

# cat /home/develop/apache.sh
#!/bin/sh

mytemp='/home/develop/mytemp.txt'
mypath='/home/develop/index.txt'
> $mypath
> $mytemp

for file in `find /var/log/httpd/ -name "*log"`
do
tail -100 $file | grep "`date +%h\ %d`" >> $mytemp
done

sed -i '/^$/d' $mytemp
grep -v 'favicon' $mytemp > $mypath
mv $mypath /var/www/html/shantanu/



 

Processlist snapshot

You can check the slow query log to find a query that is taking too long to complete. But what if you want to see the entire processlist at that time?
Here is a shell script that will do the needful. You will have to add it to cron like this...
# log high processes
*/10 * * * * sh /home/develop/log_process.sh >> /home/develop/kill_process.txt 2>> /home/develop/kill_proc_err.txt

# cat log_process.sh
#!/bin/bash
SEC=100
mnumber=919702977470
IFS='|'

mysqladmin proc -v | grep Query | grep -Evi "delete|update|insert|alter table" | while read dummy qid qusr qhost qdb qstat qsec qstat2 query
do
if [ $qsec -gt $SEC ]; then

echo `date`
echo " "
echo `mysqladmin proc`
# mysqladmin kill $qid
# echo "Killed query $qid..."
echo "##############################"
echo "##############################"

curl -Ld'username=SomeUser&password=PassWord&source=oksoft&dmobile='$mnumber'&message=process running more than '$SEC' seconds on QA DB server' http://67.23.229.95/smsclient/api.php

fi

done



 

mysqlreport

mysqlreport makes a friendly report of important MySQL status values. It transforms the output of SHOW STATUS into an easy-to-read report that provides an in-depth understanding of how well MySQL is running, and it is a better alternative to interpreting SHOW STATUS manually.
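
A typical invocation needs nothing more than a client connection; the user name below is an assumption, and credentials can also come from ~/.my.cnf:

mysqlreport --user root --password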

http://hackmysql.com/mysqlreport



 

Partitions problem

This is what a typical server looks like...

# df -HP
Filesystem      Size  Used Avail Use% Mounted on
/dev/hda8       1.1G  502M  489M  51% /
/dev/hda1       128M   11M  111M   9% /boot
tmpfs           1.1G     0  1.1G   0% /dev/shm
/dev/hda7       1.1G   44M  947M   5% /home
/dev/hda3        27G   15G  9.8G  61% /mysql-bin-log
/dev/hda2       105G   18G   82G  18% /mysql-data
/dev/hda6       3.2G  1.1G  1.9G  36% /usr
/dev/hda9        16G  4.7G   11G  32% /var

The root / partition has only 1 GB of space and that is not enough. Mysql uses the /tmp folder to save its .sock file and other temporary files that are created at the time of alter and repair. There should be at least 10 GB allocated for the root partition.
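
If repartitioning is not possible, one workaround is to point MySQL's temporary directory at a partition that has room. The directory below is an assumption; it must exist and be writable by the mysql user:

[mysqld]
tmpdir = /mysql-data/tmp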



 

disable trigger

There are times when you want to add some data to a table, but the trigger on that table has some complex logic built in. In order to disable the trigger temporarily, add a condition to your triggers like:

if (@DISABLE_TRIGGER IS NULL) then
#trigger body
end if;

and then, when you want to disable the triggers during an import, just:

SET @DISABLE_TRIGGER=1;
-- run the imports here
SET @DISABLE_TRIGGER=NULL;
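
Here is a minimal sketch of such a guarded trigger; the table mytable and its column "name" are hypothetical:

delimiter //
create trigger mytable_bi before insert on mytable
for each row
begin
if (@DISABLE_TRIGGER IS NULL) then
-- the complex logic goes here
set new.name = upper(new.name);
end if;
end//
delimiter ;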



August 17, 2010

 

Finding relations

Finding the relations between the tables is possible by looking into the mysql general log. Here is an example:

# grep -Eo '( *[^ ]* *){4}some_tbl_name( *[^ ]* *){4}' /mysql-bin-log/mysql-gen.log | head -10000 | sort -u

The command prints four words of context on either side of the table name, which reveals the other tables that some_tbl_name is queried together with; in the example above, the table "accounts" turned out to be related to "sourcing". You can add word boundaries like \< and \>.
Make sure that the general log is enabled before using the command.
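
With the word boundaries added, the command matches only the exact table name:

# grep -Eo '( *[^ ]* *){4}\<some_tbl_name\>( *[^ ]* *){4}' /mysql-bin-log/mysql-gen.log | head -10000 | sort -u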



 

Repair broken tables

You can find out the names of corrupt tables from the error log and repair them.

tail -10000 `grep 'error=' /etc/my.cnf | awk -F'=' '{print $2}'` | grep "`date +%y%m%d`" | awk -F"ERROR" '{print $2}' | sort -u | sed -e "s/.*\.\//REPAIR TABLE /g" -e "s/[\\.'].*/;/g" -e "s/\//./"

It will grab the error log location from the my.cnf file. The statement above will output the "repair table db.tbl;" statements. You can pass them on to mysql using
| mysql -f
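
Putting it all together:

tail -10000 `grep 'error=' /etc/my.cnf | awk -F'=' '{print $2}'` | grep "`date +%y%m%d`" | awk -F"ERROR" '{print $2}' | sort -u | sed -e "s/.*\.\//REPAIR TABLE /g" -e "s/[\\.'].*/;/g" -e "s/\//./" | mysql -f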



August 16, 2010

 

Finding optimum column width

I have a column "fromadd" in the table "headers". The table exists in all the databases. How do I find out the maximum length of data that has been stored in this column?

# cat whycorrupt.sh
#!/bin/sh

mypath='/home/develop/toprocess.txt'
> $mypath

# -BN prints bare database names (no header row, no formatting)
for dbname in `mysql -BNe"show databases"`
do
mysql -e"select * from $dbname.headers procedure analyse()\G" >> $mypath
done

echo "fromadd"
grep -A4 'fromadd' $mypath | grep 'Max_length' | sort -nrk2 | head


The above script will output something like this...

fromadd
Max_length: 120
Max_length: 119
Max_length: 119
Max_length: 119
Max_length: 119
Max_length: 116
Max_length: 107
Max_length: 70
Max_length: 70
Max_length: 68

Now I can set varchar(200) for the column "fromadd". I can use the same loop mentioned above to run the alter table statement.


_____

You can create a text file with all the details of the maximum and minimum values stored in every table. You will have to supply the database name as the first positional parameter.

$ sh proan.sh dbName

#!/bin/bash
> max.txt

mysqlshow $1 --count >> max.txt

# feed the db.table list to the loop through a pipe; -BN prints bare values
mysql -BNe"select CONCAT(TABLE_SCHEMA, '.', TABLE_NAME) from INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = '$1'" | while read tblname
do

mysql -e"select '$tblname' as tableName, count(*) as count from $tblname" >> max.txt
mysql -e"show create table $tblname\G" >> max.txt
mysql -e"select * from $tblname procedure analyse()\G" >> max.txt

done



 

Timezone error in apache

There are times when you see the following error in the apache error log.

[Sun Aug 08 14:06:12 2010] [error] [client 199.199.199.199] PHP Strict Standards: date() [function.date]: It is not safe to rely on the system's timezone settings. Please use the date.timezone setting, the TZ environment variable or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'America/New_York' for 'EDT/-4.0/DST' instead in /var/www/html/global_fun.inc on line 524, referer: https://yahoo.com/edit.php?acc=deliver&addr=543

This is because of timezone mismatch.

You can export the time zone variable as shown below.
export TZ="America/New_York"

Or add it to the php.ini file.

[Date]
; Defines the default timezone used by the date functions
date.timezone = America/New_York

Or else, set the timezone in the code itself.

<?php
date_default_timezone_set('America/New_York');
echo date('D,F j, Y, h:i:s A');
?>


You can check the current time in the following file.

# cat /etc/sysconfig/clock
ZONE="America/New_York"
UTC=true
ARC=false

The following listing shows that /etc/localtime is a plain file rather than a symbolic link, so it is not clear which zone it represents; the time is probably being taken from the system clock.

# ls -l /etc/localtime
-rw-r--r-- 1 root root 1267 Mar 5 2008 /etc/localtime

The localtime file needs to be linked to the correct time zone.
ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime

All the above examples use the America/New_York timezone as the default one. You will have to change it if you need a different zone.



 

Table size

You can check the size of a table using the following query. If it is a myisam table, you can also verify the actual file sizes on disk.

# mysql -e"SELECT COUNT(*) TABLES,
> SUM(table_rows) rows,
> SUM(data_length) DATA,
> SUM(index_length) idx,
> SUM(data_length + index_length) total_size
> FROM information_schema.TABLES where table_schema = 'customers' and table_name = 'mail_headers'"
+--------+------+--------+--------+------------+
| TABLES | rows | DATA   | idx    | total_size |
+--------+------+--------+--------+------------+
|      1 | 1695 | 518708 | 270336 |     789044 |
+--------+------+--------+--------+------------+

# ls -l /mysql-data/customers/mail_headers.*
-rw-rw---- 1 mysql mysql 9460 Aug 16 06:39 /mysql-data/customers/mail_headers.frm
-rw-rw---- 1 mysql mysql 518708 Aug 16 06:39 /mysql-data/customers/mail_headers.MYD
-rw-rw---- 1 mysql mysql 270336 Aug 16 06:39 /mysql-data/customers/mail_headers.MYI


The same query can be used to find the total size of a database; you just have to remove the table_name clause. If all the tables are myisam, you can simply check the disk consumption.


# mysql -e"SELECT COUNT(*) TABLES,
SUM(table_rows) rows,
SUM(data_length) DATA,
SUM(index_length) idx,
SUM(data_length + index_length) total_size
FROM information_schema.TABLES where table_schema = 'customers'"
+--------+-------+-----------+---------+------------+
| TABLES | rows  | DATA      | idx     | total_size |
+--------+-------+-----------+---------+------------+
|    496 | 28406 | 274267486 | 4054016 |  278321502 |
+--------+-------+-----------+---------+------------+


# du -h /mysql-data/customers
274M /mysql-data/customers



August 12, 2010

 

To comma or not to comma

This is related to the discussion on comma joins.

http://www.mysqlperformanceblog.com/2010/04/14/is-there-a-performance-difference-between-join-and-where/

Here are a few points to note:

1) "Inner join" syntax is ANSI compliant. Using the standard way of writing queries is better.

2) When you need to rewrite the comma syntax as a left join, you will have to re-code the whole structure and adopt the join syntax.

SELECT * FROM A,B WHERE A.id = B.id and A.x=123;

If you have already written the above query, it will be difficult to rewrite it as the left join shown below.
SELECT * FROM A LEFT OUTER JOIN B ON A.id=B.id WHERE A.x=123;

If you had written it as an inner join, as shown below, it would have been easy to change it to a left join.
SELECT * FROM A INNER JOIN B ON A.id = B.id WHERE A.x=123


3) In the following example, isn't it difficult to find the relations between tables?

SELECT * FROM A, B, C, D WHERE A.x=123 and
B.DATE > CURDATE()
...
some more conditions
...

D.id = C.id and
C.id = B.id and
A.id = B.id

Isn't the following much easier to read and understand?

SELECT * FROM A
INNER JOIN B ON B.id = A.id
INNER JOIN C ON C.id = B.id
INNER JOIN D ON D.id = C.id
WHERE A.x=123 AND B.DATE > CURDATE()

I prefer that the column of the table being joined comes first in the sequence, hence B.id = A.id and not A.id = B.id, though both are the same.

4) The fourth point is that if I forget to add the comma, the second table name becomes a table alias and it is difficult to find what went wrong.

SELECT * FROM A B WHERE B.id=123;

The above is a perfectly valid query, but it returns wrong results, since the table A is aliased as B!!

5) Updates
We need to join tables not just in "select" statements, but in updates too. The update statement can be written using comma joins, but see the difference between these two queries and decide which one you would prefer.

UPDATE phpbb2_posts_text A, phpbb3_posts3 B -- implicit comma join
SET B.bbcode_uid=A.bbcode_uid
, B.post_subject=A.post_subject
, B.post_text=A.post_text
WHERE A.somecolumn = B.somecolumn
AND B.post_id<=218727


UPDATE phpbb2_posts_text A
INNER
JOIN phpbb3_posts3 B -- explicit join
ON A.somecolumn = B.somecolumn
SET B.bbcode_uid=A.bbcode_uid
, B.post_subject=A.post_subject
, B.post_text=A.post_text
WHERE B.post_id<=218727



August 10, 2010

 

Unix Power

Here are 2 important tips.

A) Create a verbatim copy of any directory, including symbolic links, with tar:

tar cf - /path/to/original | \
(mkdir -p /path/to/copy; cd /path/to/copy; tar xvf -)

The first tar archives the directory /path/to/original and emits the archive file to stdout; the hyphen (-) used with the create (c) option specifies stdout. The command in parentheses is a subshell: Commands in the subshell don't affect the environment of the current shell. mkdir -p creates the named directory, including any intermediate directories that need to be created; and cd changes to the new directory. The second tar reads an archive from stdin and expands it in place; the hyphen used with the extract (x) option refers to stdin.

B) To find all Web pages on your server that reference www.example.com, you can use this command line:

% find / -name '*html' -print \
| xargs grep -l 'www.example.com' \
| less -Opages

xargs consumes the filenames from find and runs grep -l repeatedly to process every file, no matter how many files are named. (grep -l prints the name of the file if a match is found and then stops further matching in that file.) less allows you to page through the results and saves the list in the file named pages. The result is a list of filenames that contain the string "www.example.com".



 

skip name resolve and connect to IP address

1) Connect to IP address:
While connecting to MySQL from a PHP application, do you connect to an IP address or a named server?
Using IP addresses is better, since it allows fast connections by avoiding the name lookup, and the IP can be re-assigned to another machine, e.g. a slave, in case of a master disaster.

There are 2 ways of granting access to the database: allow an IP address to log in, or allow a host name to access the database.

grant all on *.* to 'redhat'@'10.0.2.143' identified by 'SecRet';
grant all on *.* to 'redhat'@'mysite.net' identified by 'SecRet';

The first method mentioned above is very popular and versatile as well. For example, I can use the wildcard 192.168.% to allow access from all the machines on the network. The second method involves a hostname lookup, and this extra step takes time. There are times when the DNS server is down and mysql goes down as well, through no fault of its own, and people cannot figure out the reason.

a) Use only IP "numbers" in the grant statement. Do not use hostname "text".
b) Use the skip-name-resolve line in my.cnf so that mysql will allow quick connections.

I have experienced as much as 15% improvement in overall performance and less frequent downtime.

2) Super level permissions:
The user connecting to mysql can have specific table / DB level access, or super level access that is equivalent to "root". For security reasons a user with limited access is used, but a user with super level access connects faster.

If you are using a super user connecting to an IP address, then adding the following 2 parameters to my.cnf will improve the mysql connection time by 20-25%:

skip-name-resolve
read-only



August 09, 2010

 

vi tips and tricks

Numbering
:set number
:set nonumber

Indent
:set autoindent
:set noautoindent

Case Sensitivity
:set ignorecase
:set noignorecase
If you want to ignore case sensitivity on searches every time you enter a vi session, you can add the line set ignorecase to the .exrc file in your home directory.


Search
/echo
to search for the word echo
/echo.*file
to search for the word echo followed by the word file

Bookmarks
You can tell vi to place a bookmark at a point in a file by pressing the Escape key followed by the M key followed by another alphabetic character that denotes the bookmark reference. Therefore, you have up to 26 bookmarks named a to z. To return to the previous bookmark, press the Escape key followed by the back tick (`) followed by the bookmark reference alphabetic character.

Execute UNIX command
:!pwd

Filtering
:1,$!sort
# pass lines 1 through the end of the file ($) into the sort command
:1,$!awk '{print $1}' | sort | tr [:lower:] [:upper:]
4!!awk '{print "New text",$0}'

Section save
:6,9w >> /tmp/newfile
You can use the $ notation for the last line, e.g. :6,$w >> /tmp/newfile writes from line 6 to the end of the file.

Delete DOS characters (^M) via vim
:set ff=unix
:set ff=dos



August 08, 2010

 

Altering huge table

CREATE TABLE `testhack` (
`id` int(11) NOT NULL DEFAULT '0',
`unq` varchar(100) DEFAULT NULL,
`keyword` varchar(250) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `unq` (`unq`)
) ENGINE=MyISAM

I need to drop the unique key, so I create a new table testhack_new with the following schema:

CREATE TABLE `testhack_new` (
`id` int(11) NOT NULL DEFAULT '0',
`unq` varchar(100) DEFAULT NULL,
`keyword` varchar(250) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM

mysql> Flush tables with read lock;

Open another terminal and go to the mysql/data/ directory. Do the following:

$ mv testhack.frm testhack_old.frm; mv testhack_new.frm testhack.frm; mv testhack_old.frm testhack_new.frm;

$ mv testhack.MYI testhack_old.MYI; mv testhack_new.MYI testhack.MYI; mv testhack_old.MYI testhack_new.MYI;

So, what is happening here is that the table definition (.frm) and index (.MYI) files are being swapped. After this process, the table definition of testhack will not contain the unique key. Now unlock the tables in the main window, and run a repair to rebuild the index and remove any issues.

mysql> unlock tables;
mysql> repair table testhack;


http://jayant7k.blogspot.com/2010/06/mysql-hack-altering-huge-tables.html



August 05, 2010

 

restore DB script loadme.sh

If you need to restore a database from dump backup files, here is an easy way to do so. You simply have to supply the database name and the SQL file name to this script and it will load the data on the localhost. If you want to restore data on a remote server, don't forget to change the $myhost variable.

# cat loadme.sh
#!/bin/sh
# the first parameter is the name of the database and
# second parameter is the name of the sql file
# you call the script using the following syntax
# sh loadme.sh shantanu aug03_final.sql
# if the .sql file is not in the current directory, supply the complete path

myhost='localhost'
myuser='root'
mypassword=''

# Check if the DB and file name is supplied
# if missing, exit with the code 199

missing=199

if [ -z "$1" ] || [ -z "$2" ]
then
echo "DB and file name both are required"
exit $missing
fi

# Create output file for results

mydate=`date +'%d_%h_%Y_V-%H%M%S'`
output="$1_$mydate".txt
> $output

if [[ $mypassword ]];then
mysql -h$myhost -u$myuser -p$mypassword -e"drop database if exists $1"
mysqladmin -h$myhost -u$myuser -p$mypassword create $1
time mysql -h$myhost -u$myuser -p$mypassword $1 < "$2"
mysqlshow -h$myhost -u$myuser -p$mypassword $1 --count > $output
else
mysql -h$myhost -u$myuser -e"drop database if exists $1"
mysqladmin -h$myhost -u$myuser create $1
time mysql -h$myhost -u$myuser $1 < "$2"
mysqlshow -h$myhost -u$myuser $1 --count > $output
fi



August 02, 2010

 

Extracting queries from binary files

There are times when you need to check the actual queries being executed. The best way is to open up the binary file and look into it.
Here is the script that will do it for you.

sh extract.sh tbl_name

You have to supply the table name and call the shell script. Running it with sh -xv enables the debug mode so that you can check whether it is running fine.

# cat extract.sh
#!/bin/sh
# the my.cnf path should have the filename initials like
# log-bin=/mysql-bin-log/db1-bin
# find the binary path
mypath=`grep log-bin /etc/my.cnf | awk -F'=' '{print $2}'`*.*

# extract the latest binary file to a text file
mysqlbinlog `ls $mypath -t | head -1` > /home/bintext.txt

# select the table name that is supplied as an argument
grep -C2 "$1" /home/bintext.txt | tail -500 > /home/tosend.txt
# save the last 500 lines to a text file

_____

** analyse binary files **
You will get to see the insert / update statements along with their counts.
You can supply the binary file name; the latest file is used by default.

sh /bin/mylog
OR
sh /bin/mylog db1-bin.000223


# cat /bin/mylog
#!/bin/sh

if [[ $1 ]];then

mysqlbinlog ${1} | grep -i -e "^update" -e "^insert" -e "^delete" -e "^replace" -e "^alter" | cut -c1-100 | tr '[A-Z]' '[a-z]' | sed -e "s/\t/ /g;s/\`//g;s/(.*$//;s/ set .*$//;s/ as .*$//" | sed -e "s/ where .*$//" | sort | uniq -c | sort -nr

echo "table list from binary log ${1}"
mysqlbinlog ${1} | grep -i -e "^update" -e "^insert" -e "^delete" -e "^replace" -e "^alter" | cut -c1-100 | tr '[A-Z]' '[a-z]' | sed -e "s/\t/ /g;s/\`//g;s/(.*$//;s/ set .*$//;s/ as .*$//" | sed -e "s/ where .*$//" | replace 'insert into ' '' | replace 'delete from ' '' | replace 'update ' '' | awk '{print $1}' | sort | uniq -c | sort -nr

else

mypath=`grep log-bin /etc/my.cnf | awk -F'=' '{print $2}'`*.*
latest=`ls $mypath -t | head -1`

mysqlbinlog ${latest} | grep -i -e "^update" -e "^insert" -e "^delete" -e "^replace" -e "^alter" | cut -c1-100 | tr '[A-Z]' '[a-z]' | sed -e "s/\t/ /g;s/\`//g;s/(.*$//;s/ set .*$//;s/ as .*$//" | sed -e "s/ where .*$//" | sort | uniq -c | sort -nr | head -50

echo "table list from binary log ${1}"
mysqlbinlog ${latest} | grep -i -e "^update" -e "^insert" -e "^delete" -e "^replace" -e "^alter" | cut -c1-100 | tr '[A-Z]' '[a-z]' | sed -e "s/\t/ /g;s/\`//g;s/(.*$//;s/ set .*$//;s/ as .*$//" | sed -e "s/ where .*$//" | replace 'insert into ' '' | replace 'delete from ' '' | replace 'update ' '' | awk '{print $1}' | sort | uniq -c | sort -nr | head -30

fi


If you are using row-based binary logging, you need to decode the data:
mysqlbinlog --base64-output=decode-rows --verbose /var/lib/mysql/server-bin.000022



 

Analyse Corrupt Tables

If you are using only myisam table types, then you must have come across tables that get corrupted without any known reason. The following command shows the tables (from several databases) that have crashed today.

# tail -10000 /mysql-db-path/db1.err | grep `date "+%y%m%d"` | awk -F'ERROR' '{print $2}' | sort -u | awk -F"'" '{print $2}'
./db1/mail_me
./db2/mail_me
./db3/customers

In order to know why only one table crashed and not any other, we need to take a closer look at the table structure. The following command will find the tables that are corrupt or going to be corrupted soon. You can add the -r option to myisamchk to repair the tables on the fly.
# find `grep 'datadir' /etc/my.cnf | awk -F"=" '{print $2}'` -not -name '#*.MYI' -name '*.MYI' -exec myisamchk -sF {} \;

The options are:

* -s, --silent option: Prints only errors. You can use two -s to make myisamchk very silent.
* -f, --force option: Restart myisamchk automatically with repair option -r, if there are any errors in the table.
* -F, --fast option: Check only tables that haven't been closed properly.
* -U --update-state option: Marks tables as crashed, when it finds any error.
* -r --recover: Can fix almost anything except unique keys that aren't unique.

You will need to add -f or -r to recover the corrupt tables. The command mentioned above can be added to cron so that it runs every day and repairs the corrupt tables automatically.
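
A sample cron entry; the schedule, data directory and log path are assumptions:

# check and auto-repair myisam tables every day at 2 am
0 2 * * * find /mysql-data -name '*.MYI' -not -name '#*' -exec myisamchk -sfF {} \; >> /home/develop/myisamchk.log 2>&1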


