Shantanu's Blog

Database Consultant

February 14, 2013

 

Restore data without keys

Here is a patched version of mysqldump that improves restore speed by removing the secondary and unique keys from the table. The new mysqldump.static adds one line, "alter table ... drop key xyz", immediately after the create table statement. Insert speed improves significantly once the extra keys are removed. After the data is loaded, we can put the keys back as and when required. A dump without keys is also very useful for testing and backup purposes. The utility can be downloaded from...

http://code.google.com/p/mysqldump/downloads/detail?name=mysqldump.static

This is a fix provided as per the feature request mentioned here...

http://bugs.mysql.com/bug.php?id=64248

Following is the standard dump that we all are familiar with...

# mysqldump cas > cas.sql

CREATE TABLE `cas_level_info` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `cas_user_id` bigint(20) NOT NULL,
  `cas_user_created_datetime` datetime DEFAULT NULL,
  `cas_is_valid_reference` tinyint(4) DEFAULT NULL,
  `cas_level` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `idx_cas_user_id` (`cas_user_id`),
  KEY `idx_cas_is_valid_reference` (`cas_is_valid_reference`),
  KEY `idx_cas_level` (`cas_level`),
  KEY `idx_cas_user_created_datetime_cas_is_valid_reference` (`cas_user_created_datetime`,`cas_is_valid_reference`)
) ENGINE=InnoDB AUTO_INCREMENT=1267454 DEFAULT CHARSET=latin1;

I would like to remove the ENGINE=InnoDB part from the above statement, so I will add the "default-storage-engine=MyISAM" parameter to my.cnf.
If no engine is specified in the create table statement, the table will be created as a MyISAM table. This will make the insert statements faster.
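For reference, a minimal my.cnf sketch (assuming a [mysqld] section already exists; any table created without an explicit ENGINE clause will then default to MyISAM):

[mysqld]
# tables created without an ENGINE clause use this engine
default-storage-engine = MyISAM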
_____

# mysqldump.static cas --compatible=no_table_options --keys-remove --socket=/var/lib/mysql/mysql.sock > cas_static.sql

CREATE TABLE `cas_level_info` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `cas_user_id` bigint(20) NOT NULL,
  `cas_user_created_datetime` datetime DEFAULT NULL,
  `cas_is_valid_reference` tinyint(4) DEFAULT NULL,
  `cas_level` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `idx_cas_user_id` (`cas_user_id`),
  KEY `idx_cas_is_valid_reference` (`cas_is_valid_reference`),
  KEY `idx_cas_level` (`cas_level`),
  KEY `idx_cas_user_created_datetime_cas_is_valid_reference` (`cas_user_created_datetime`,`cas_is_valid_reference`)
);

ALTER TABLE `cas_level_info` DROP KEY `idx_cas_user_id` , DROP KEY `idx_cas_is_valid_reference` , DROP KEY `idx_cas_level` , DROP KEY `idx_cas_user_created_datetime_cas_is_valid_reference` ;

The alter table ... drop key statement is added immediately after the create table statement. This makes sure that all the non-primary keys are removed before the data is inserted. If you want the keys to be added back once all the data is restored, use the "--keys-last" option instead of "--keys-remove".

# mysqldump.static cas --compatible=no_table_options --keys-last --socket=/var/lib/mysql/mysql.sock > cas_static.sql

The --keys-last option will add the following alter table ... add key statement.

ALTER TABLE `cas_level_info` ADD KEY `idx_cas_user_id` (`cas_user_id`), ADD KEY `idx_cas_is_valid_reference` (`cas_is_valid_reference`), ADD KEY `idx_cas_level` (`cas_level`), ADD KEY `idx_cas_user_created_datetime_cas_is_valid_reference` (`cas_user_created_datetime`,`cas_is_valid_reference`);

*** Removing keys can make the restore up to 50% faster. ***
_____


# You can decide to build the keys later using --keys-last, or simply remove the secondary and unique keys (everything except the primary key).
# Bug #64248 create secondary indexes after inserting rows statements in mysqldump
# http://bugs.mysql.com/bug.php?id=64248

--keys-last
--keys-remove 

# Certain SQL statements can be written at the top as well as at the end of the dump file.
# Bug #39233 --sql-append --sql-prepend options to mysqldump
# http://bugs.mysql.com/bug.php?id=39233

--sql-prepend="SET bulk_insert_buffer_size = 1024 * 1024 * 256;"
--sql-append="show variables like '%buffer%';"  

# You can now add schema names to the insert into statements. This is especially helpful when used with 'skip-extended-insert'
# create table db_name.table_name and insert into db_name.table_name
# Bug #62069 adding schema name in mysqldump output
# http://bugs.mysql.com/bug.php?id=62069

--add-schema-name

# You can now change table options like auto-increment, the default engine and also the default character set
# Bug #20786 mysqldump always includes AUTO_INCREMENT
# http://bugs.mysql.com/bug.php?id=20786

--remove-auto-increment
--default-engine=myisam
--default-charset=utf8
_____

You may be asked to provide the socket address, e.g.
--socket=/var/lib/mysql/mysql.sock

The new dump command will now look something like this...

mysqldump.static  --socket=/var/lib/mysql/mysql.sock --keys-remove --sql-prepend="SET bulk_insert_buffer_size = 1024 * 1024 * 256;" --sql-append="show variables like '%buffer%'; select now();"  --add-schema-name  --remove-auto-increment --default-engine=myisam db_name table_name > new_dump.sql
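To restore, the keyless dump is piped back through the mysql client as usual. A minimal sketch, assuming the target database already exists:

# load the keyless dump; the inserts run faster without secondary indexes
mysql --socket=/var/lib/mysql/mysql.sock db_name < new_dump.sql

The secondary keys can then be added back later with an "ALTER TABLE ... ADD KEY" statement like the one shown above.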
_____

# use the following 2 commands to install
wget http://mysqldump.googlecode.com/files/mysqldump.static --output-document=/usr/bin/mysqldump.static
chmod 777 /usr/bin/mysqldump.static



December 28, 2012

 

Auto backup

Auto MySQL Backup is a shell script that helps automate the backup process.

http://sourceforge.net/projects/automysqlbackup/

Here are 7 easy steps to create a backup of any database.

1) Connect to the server.

2) Download the package:
wget http://tinyurl.com/c52x3sh

3) Extract files:
tar xvfz automysqlbackup-v3.0_rc6.tar

4) Make sure you are "root" and then install:
sh install.sh

5) Create the backup folder:
mkdir /var/backup/
# this backup location can be changed as shown below

6) Change the user name / password and the DB that needs to be backed up:
vi /usr/local/bin/automysqlbackup
  CONFIG_mysql_dump_password='admin'
  CONFIG_db_names=('drupaldb')
  CONFIG_backup_dir='/var/backup/db'

7) Run the script to take the backup (or schedule it via cron, as sketched below):
/usr/local/bin/automysqlbackup
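To run the backup unattended, a crontab entry can call the same script every night. A sketch (the 2 a.m. schedule is just an example):

# /etc/crontab entry: run automysqlbackup daily at 2 a.m. as root
0 2 * * * root /usr/local/bin/automysqlbackup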



July 23, 2011

 

schema and data sync

We are going to learn how to update the production data so that it is in sync with the development server. We will use 3 utilities.

1) diff : This is a standard Linux command to compare 2 files.

2) Maatkit: This will be used to compare the checksum values of 2 tables and to find the data differences.
http://www.maatkit.org/doc/

3) schemasync: This utility is used to compare structural differences.
http://schemasync.org/

______________________________________

I) Schema sync

We will first make sure that the structure of the mktest table on localhost matches the table structure on SERVER_A.
In the following example, the column "mytime" is missing on localhost. How do we know that? And how do we create the alter table statement?

server:localhost
schema: test

mysql> select * from mktest;
+----+----------+--------+------+
| id | f_amount | myname | city |
+----+----------+--------+------+
|  1 |     NULL | abc    | NULL |
|  3 |     NULL | abc    | NULL |
|  6 |     NULL | abc    | NULL |
|  8 |     NULL | abc    | NULL |
|  9 |     NULL | abc    | NULL |
| 21 |     NULL | abc    | NULL |
+----+----------+--------+------+
6 rows in set (0.00 sec)


server: SERVER_A
schema: test

mysql> select * from mktest;
+----+----------+--------+------+--------+
| id | f_amount | myname | city | mytime |
+----+----------+--------+------+--------+
|  1 |     NULL | abc    | NULL | NULL   |
|  3 |     NULL | abc    | NULL | NULL   |
|  5 |     NULL | abc    | NULL | NULL   |
|  6 |     NULL | abc    | NULL | NULL   |
|  8 |     NULL | abc    | NULL | NULL   |
|  9 |     NULL | abc    | NULL | NULL   |
| 11 |     NULL | abc    | NULL | NULL   |
+----+----------+--------+------+--------+
7 rows in set (0.00 sec)


# diff <(mysqldump -hSERVER_A -uroot -proot@123 test mktest --skip-extended-insert -d -R) <(mysqldump -hlocalhost -uroot -proot@123 test mktest --skip-extended-insert -d -R) --side-by-side --suppress-common-lines --width=190 | more
-- Host: SERVER_A Database: test | -- Host: localhost Database: test
`mytime` datetime default NULL, <
) ENGINE=MyISAM AUTO_INCREMENT=12 DEFAULT CHARSET=latin1; | ) ENGINE=InnoDB AUTO_INCREMENT=22 DEFAULT CHARSET=latin1;

*** mytime column needs to be added and the table type needs to be changed from InnoDB to MyISAM ***
_____

The diff utility does not generate ready-to-execute alter statements. For that we need a third-party utility called schemasync.

# schemasync mysql://root:root@123@localhost:3306/test mysql://root:root@123@SERVER_A:3306/test

# cat /home/shantanu/test.20110720.patch.sql
USE `test`;
ALTER TABLE `mktest` ADD COLUMN `mytime` datetime NULL AFTER `city`;

It will also generate an alter table statement to change the table type.

______________________________________

II) Data Sync:

Now that our structure is in order, let's see which records have been added or updated.

server:localhost
schema: test

mysql> select * from mktest;
+----+----------+--------+------+--------+
| id | f_amount | myname | city | mytime |
+----+----------+--------+------+--------+
|  1 |     NULL | abc    | NULL | NULL   |
|  3 |     NULL | abc    | NULL | NULL   |
|  6 |     NULL | abc    | NULL | NULL   |
|  8 |     NULL | abc    | NULL | NULL   |
|  9 |     NULL | abc    | NULL | NULL   |
| 21 |     NULL | abc    | NULL | NULL   |
+----+----------+--------+------+--------+
6 rows in set (0.00 sec)

_____

server: SERVER_A
schema: test

mysql> select * from mktest;
+----+----------+--------+------+--------+
| id | f_amount | myname | city | mytime |
+----+----------+--------+------+--------+
|  1 |     NULL | abc    | NULL | NULL   |
|  3 |     NULL | abc    | NULL | NULL   |
|  5 |     NULL | abc    | NULL | NULL   |
|  6 |     NULL | abc    | NULL | NULL   |
|  8 |     NULL | abc    | NULL | NULL   |
|  9 |     NULL | abc    | NULL | NULL   |
| 11 |     NULL | abc    | NULL | NULL   |
+----+----------+--------+------+--------+
7 rows in set (0.00 sec)


a) Checksum match:
We need to find out if the checksum values of these 2 tables match. If the row counts and the checksum values are the same, the tables are exactly the same.

# /usr/bin/mk-table-checksum h=SERVER_A,u=root,p=root@123,P=3306 --databases test --tables mktest h=localhost,u=root,p=root@123,P=3306 --databases test --tables mktest --count | /usr/bin/mk-checksum-filter
test mktest 0 SERVER_A MyISAM 7 4d38da45 0 0 NULL NULL
test mktest 0 localhost MyISAM 6 f1e08b3d 0 0 NULL NULL

There is a difference of 1 row and the checksum values do not match, so we need to find out which records differ.
_____

b) Finding differences:

# diff <(mysqldump -hSERVER_A -uroot -proot@123 test mktest --skip-extended-insert) <(mysqldump -hlocalhost -uroot -proot@123 test mktest --skip-extended-insert) --side-by-side --suppress-common-lines --width=190 | more
-- Host: SERVER_A Database: test | -- Host: localhost Database: test
) ENGINE=MyISAM AUTO_INCREMENT=12 DEFAULT CHARSET=latin1; | ) ENGINE=MyISAM AUTO_INCREMENT=22 DEFAULT CHARSET=latin1;
INSERT INTO `mktest` VALUES (5,NULL,'abc',NULL,NULL); <
INSERT INTO `mktest` VALUES (11,NULL,'abc',NULL,NULL); | INSERT INTO `mktest` VALUES (21,NULL,'abc',NULL,NULL);

The above output shows that one row is missing from localhost, while another row's ID should be 11 instead of 21.
_____

c) Generate SQL:
So we need one insert statement and one update. The Maatkit table sync utility generates a delete / insert combination instead of an update, as shown below:

# /usr/bin/mk-table-sync --print h=SERVER_A,u=root,p=root@123,P=3306 --databases test --tables mktest h=localhost,u=root,p=root@123,P=3306 --databases test --tables mktest
DELETE FROM `test`.`mktest` WHERE `id`='21' LIMIT 1 ;
INSERT INTO `test`.`mktest`(`id`, `f_amount`, `myname`, `city`, `mytime`) VALUES ('5', NULL, 'abc', NULL, NULL) ;
INSERT INTO `test`.`mktest`(`id`, `f_amount`, `myname`, `city`, `mytime`) VALUES ('11', NULL, 'abc', NULL, NULL) ;
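If the generated statements look correct, mk-table-sync can also apply them directly by replacing --print with --execute (see the mk-table-sync documentation). A sketch; verify on a test copy first:

# apply the changes instead of only printing them
/usr/bin/mk-table-sync --execute h=SERVER_A,u=root,p=root@123,P=3306 --databases test --tables mktest h=localhost,u=root,p=root@123,P=3306 --databases test --tables mktest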



May 18, 2011

 

multiple instances of mysql

If you want to start mysql using a different data directory, the simple way is to change the datadir path in my.cnf. But there are several disadvantages of doing so.
1) I have to change my.cnf
2) I have to restart the service
3) I cannot access data from 2 different locations at the same time
4) It does not work as expected with version 5.1+

Welcome to mysqld_multi

http://dev.mysql.com/doc/refman/5.0/en/mysqld-multi.html

Add the following code to my.cnf and restart the mysql service. You can continue to use the data at the default location just as before.

[mysqld_multi]
mysqld = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin
user = multi_admin
password = multipass

[mysqld2]
socket = /tmp/mysql.sock2
port = 3307
pid-file = /var/lib/mysql2/hostname.pid2
datadir = /var/lib/mysql2

[mysqld3]
socket = /tmp/mysql.sock3
port = 3308
pid-file = /var/lib/mysql3/hostname.pid3
datadir = /var/lib/mysql3

[mysqld4]
socket = /tmp/mysql.sock4
port = 3309
pid-file = /var/lib/mysql4/hostname.pid4
datadir = /var/lib/mysql4

[mysqld6]
socket = /tmp/mysql.sock6
port = 3311
pid-file = /mnt/company-nfs/mysql/hostname.pid6
datadir = /mnt/company-nfs/mysql
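Note that mysqld_multi stops an instance through mysqladmin, so the multi_admin account defined above is expected to hold the SHUTDOWN privilege on each instance. A sketch of the grant (adjust host and password to your setup):

GRANT SHUTDOWN ON *.* TO 'multi_admin'@'localhost' IDENTIFIED BY 'multipass';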

Now you need to run mysqld_multi to start the "mysqld6" instance. I have kept the data on an NFS partition for the sake of flexibility.

# mysqld_multi --no-log start 6
mysqld_multi log file version 2.16; run: Thu May 19 10:48:30 2011

Starting MySQL servers
# Starting mysqld daemon with databases from /mnt/company-nfs/mysql

I can now use the new mysql instance with the following credentials...

mysql -h10.10.10.10 -usome_user -psome_password --socket=/tmp/mysql.sock6 --port=3311

I can use the other instances (2, 3 and 4) as well. All I need to do is create the respective data directories and change the ownership and permissions so that mysql is allowed to read and write to these locations, as sketched below.
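For example, bringing up instance 2 might look like this (a sketch; the paths match the [mysqld2] group above, and mysql_install_db is assumed to be available to seed the system tables):

mkdir -p /var/lib/mysql2                                   # data directory from the [mysqld2] group
chown -R mysql:mysql /var/lib/mysql2                       # let the mysql user read and write it
mysql_install_db --user=mysql --datadir=/var/lib/mysql2    # create the system tables
mysqld_multi --no-log start 2                              # start the second instance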

mysqld_multi stop 6

If the stop command shown above does not work, try...

mysqladmin --socket=/tmp/mysql.sock6 shutdown



May 01, 2011

 

Replace into and On duplicate key

A detailed analysis of what happens when we use different options to insert/ update a record.

Here is a sample table...
+----+--------+---------+----------------------+
| id | emp_id | name    | address              |
+----+--------+---------+----------------------+
|  1 |      1 | amar    | hyderabad, AP, india |
|  2 |      2 | akbar   | karachi, pakistan    |
|  3 |      3 | anthony | london, UK           |
+----+--------+---------+----------------------+

We want to insert a new record with emp_id 3, name 'anthony', and address 'South Africa'. If the record is already present in the DB, it should be updated.

This may look like a simple task, but looking at the table structure, it is clear that there is no composite unique index, only separate per-column unique keys.

CREATE TABLE `mykey` (
  `id` int(11) NOT NULL auto_increment,
  `emp_id` int(11) default NULL,
  `name` varchar(100) default NULL,
  `address` varchar(100) default NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `emp_id` (`emp_id`),
  UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB

The challenge is to write an insert / update statement that will satisfy all constraints and will fail predictably only in certain cases.

1) "insert ignore" will prefer not to do anything.
2) "replace into" may look like the best choice, but it will also change the auto_incremented ID and has some issues when there is more than one unique key.
For e.g. in the following example, while trying to insert emp_id 3 with name - akbar, it actually removed one row.

3) I guess the "on duplicate key update " clause is the right choice. It does not change the auto_id while updating the record. And even if there are no composite unique indexes in the table, it works as if they are present.

mysql> drop table if exists mykey;
Query OK, 0 rows affected (0.02 sec)

mysql> create table mykey(id int not null auto_increment, emp_id int, name varchar(100), address varchar(100), primary key (id), unique(emp_id), unique(name)) engine=InnoDB;
Query OK, 0 rows affected (0.13 sec)

mysql> insert into mykey (emp_id, name, address) values (1, 'amar', 'hyderabad, AP, india') ;
Query OK, 1 row affected (0.00 sec)

mysql> insert into mykey (emp_id, name, address) values (2, 'akbar', 'karachi, pakistan');
Query OK, 1 row affected (0.00 sec)

mysql> insert into mykey (emp_id, name, address) values (3, 'anthony', 'london, UK') ;
Query OK, 1 row affected (0.00 sec)

mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)

mysql> savepoint savepoint_one;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from mykey;
+----+--------+---------+----------------------+
| id | emp_id | name    | address              |
+----+--------+---------+----------------------+
|  1 |      1 | amar    | hyderabad, AP, india |
|  2 |      2 | akbar   | karachi, pakistan    |
|  3 |      3 | anthony | london, UK           |
+----+--------+---------+----------------------+
3 rows in set (0.00 sec)
_____

# Case 1: "insert ignore"

mysql> insert ignore into mykey (emp_id, name, address) values (3, 'anthony', 'South Africa') ;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from mykey;
+----+--------+---------+----------------------+
| id | emp_id | name    | address              |
+----+--------+---------+----------------------+
|  1 |      1 | amar    | hyderabad, AP, india |
|  2 |      2 | akbar   | karachi, pakistan    |
|  3 |      3 | anthony | london, UK           |
+----+--------+---------+----------------------+
3 rows in set (0.00 sec)

_____

# Case 2: "replace into"

mysql> replace into mykey (emp_id, name, address) values (3, 'anthony', 'South Africa') ;
Query OK, 2 rows affected (0.00 sec)

mysql> select * from mykey;
+----+--------+---------+----------------------+
| id | emp_id | name    | address              |
+----+--------+---------+----------------------+
|  1 |      1 | amar    | hyderabad, AP, india |
|  2 |      2 | akbar   | karachi, pakistan    |
|  4 |      3 | anthony | South Africa         |
+----+--------+---------+----------------------+
3 rows in set (0.00 sec)

mysql> rollback to savepoint_one;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from mykey;
+----+--------+---------+----------------------+
| id | emp_id | name    | address              |
+----+--------+---------+----------------------+
|  1 |      1 | amar    | hyderabad, AP, india |
|  2 |      2 | akbar   | karachi, pakistan    |
|  3 |      3 | anthony | london, UK           |
+----+--------+---------+----------------------+
3 rows in set (0.00 sec)

mysql> replace into mykey (emp_id, name, address) values (3, 'akbar', 'South Africa') ;
Query OK, 3 rows affected (0.00 sec)

mysql> select * from mykey;
+----+--------+-------+----------------------+
| id | emp_id | name  | address              |
+----+--------+-------+----------------------+
|  1 |      1 | amar  | hyderabad, AP, india |
|  5 |      3 | akbar | South Africa         |
+----+--------+-------+----------------------+
2 rows in set (0.00 sec)
_____

# Case 3: "on duplicate key"

mysql> rollback to savepoint_one;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from mykey;
+----+--------+---------+----------------------+
| id | emp_id | name    | address              |
+----+--------+---------+----------------------+
|  1 |      1 | amar    | hyderabad, AP, india |
|  2 |      2 | akbar   | karachi, pakistan    |
|  3 |      3 | anthony | london, UK           |
+----+--------+---------+----------------------+
3 rows in set (0.01 sec)

mysql> insert into mykey (emp_id, name, address) values (3, 'akbar', 'South Africa') on duplicate key update emp_id = 3, name = 'akbar', address='South Africa' ;
ERROR 1062 (23000): Duplicate entry 'akbar' for key 3

mysql> insert into mykey (emp_id, name, address) values (3, 'akbar', 'South Africa') on duplicate key update emp_id = 3, name = 'anthony', address='South Africa' ;
Query OK, 2 rows affected (0.00 sec)

mysql> select * from mykey;
+----+--------+---------+----------------------+
| id | emp_id | name    | address              |
+----+--------+---------+----------------------+
|  1 |      1 | amar    | hyderabad, AP, india |
|  2 |      2 | akbar   | karachi, pakistan    |
|  3 |      3 | anthony | South Africa         |
+----+--------+---------+----------------------+
3 rows in set (0.00 sec)
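As a small follow-up: the literals in the UPDATE clause do not have to be repeated. The VALUES() function refers back to the value that the INSERT would have used for that column, so updating anthony's address could also be sketched like this (same end result as above):

mysql> insert into mykey (emp_id, name, address) values (3, 'anthony', 'South Africa')
    -> on duplicate key update address = values(address);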



November 01, 2010

 

Master - Master Replication Manager

MMM (Master-Master Replication Manager) is a set of flexible scripts to perform monitoring/failover and management of MySQL Master-Master replication configurations (with only one node writable at any time). The toolset also has the ability to read balance standard master/slave configurations with any number of slaves, so you can use it to move virtual IP addresses around a group of servers depending on whether they are behind in replication.

The main functionality is provided through the following three scripts:

mmm_mond: the monitoring daemon, which does all the monitoring work and makes all decisions about moving roles and so on.
mmm_agentd: the agent daemon, which runs on each MySQL server and provides the monitoring node with a simple set of remote services.
mmm_control: a simple script dedicated to managing the mmm_mond processes through commands.


# start and stop MMM Monitor
/etc/init.d/mysql-mmm-monitor start
/etc/init.d/mysql-mmm-monitor status
/etc/init.d/mysql-mmm-monitor stop

# Show MMM details
/usr/sbin/mmm_control show
/usr/sbin/mmm_control show | grep -v 'ONLINE'
/usr/sbin/useradd --comment "MMM Script owner" --shell /sbin/nologin mmmd

# Config files
vi /etc/default/mysql-mmm-monitor
vi /etc/mysql-mmm/mmm_common.conf
vi /etc/mysql-mmm/mmm_mon.conf

A typical MMM setup looks like this...
/usr/sbin/mmm_control show
db1(192.168.100.101) master/ONLINE. Roles: reader(192.168.100.122)
db2(192.168.100.102) master/ONLINE. Roles: reader(192.168.100.121), writer(192.168.100.123)
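mmm_control is also the tool for moving roles between nodes. For example, shifting the writer role to db1 would look something like this (the move_role command is described in the MMM docs; verify it against your installed version):

/usr/sbin/mmm_control move_role writer db1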

And in order to check the logs on the monitor and DB servers, you can use these commands...
tail /var/log/mysql-mmm/mmm_mond.log
tail /var/log/mysql-mmm/mmm_agentd.log

Here are the grants that need to be created in order to monitor the replication.

GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'%' IDENTIFIED BY 'monitor_password';
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'%' IDENTIFIED BY 'agent_password';

GRANT REPLICATION SLAVE ON *.* TO 'replication'@'%' IDENTIFIED BY 'replication_password';

grant replication slave on *.* to 'slave_user'@'%' IDENTIFIED BY 'slave_user';
# Only one of the last two statements is needed

And do not forget to check that the following setting is present in the my.cnf file
log_slave_updates = 1

More info can be found here...
http://mysql-mmm.org/mmm2:guide



October 26, 2010

 

Understanding Foreign Keys

1) Foreign keys are used to restrict the data being inserted unless the parent record exists. Isn't it logical that a child cannot be born unless the parents are born? Therefore, when I try to create the child table first, I get error 1005 (Can't create table).
2) While inserting records, I have to follow the same sequence.
Therefore the first insert into the child table fails with error 1452 (Cannot add or update a child row: a foreign key constraint fails), since no corresponding record is present in the parent table. Once I insert a record into the parent table, the child table accepts the insert as well.
3) When dropping both tables, I need to drop the child first and then the parent. Otherwise I get error 1217 (Cannot delete or update a parent row).

# Remember, the child has to reference a column in the parent and not the other way round.

create table child(
userID int not null,
FOREIGN KEY (userID) REFERENCES parent(ID),
primary key (userID)
) ENGINE=InnoDB;
ERROR 1005 (HY000): Can't create table './test/child.frm' (errno: 150)

CREATE TABLE parent(
`ID` int(11) default NULL,
`name` varchar(100) default NULL,
`city` varchar(100) default NULL,
key (ID)
) ENGINE=InnoDB;
create table child(
userID int not null,
FOREIGN KEY (userID) REFERENCES parent(ID),
primary key (userID)
) ENGINE=InnoDB;

insert into child values (1);
ERROR 1452 (23000): Cannot add or update a child row: a foreign key constraint fails (`test/child`, CONSTRAINT `child_ibfk_1` FOREIGN KEY (`userID`) REFERENCES `parent` (`ID`));

insert into parent values (1, 'abc', 'delhi');
insert into child values (1);

drop table if exists parent;
ERROR 1217 (23000): Cannot delete or update a parent row: a foreign key constraint fails;

drop table if exists child;
drop table if exists parent;

The same table can be a parent and a child. It's like a family tree. The grandpa has 2 sons and each son will have 1 or 2 children.
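A minimal sketch of such a self-referencing table (a hypothetical `family` table, not from the examples above):

create table family(
id int not null auto_increment,
name varchar(100),
parent_id int default NULL,   -- NULL for grandpa, the root of the tree
primary key (id),
foreign key (parent_id) references family(id)
) ENGINE=InnoDB;

insert into family (name, parent_id) values ('grandpa', NULL);
insert into family (name, parent_id) values ('son one', 1);
insert into family (name, parent_id) values ('grandchild', 2);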

http://dev.mysql.com/downloads/workbench/

If you are using MySQL Workbench, you can easily create a graphical representation of the data structure. On the "Database" menu there is a "Reverse Engineer" wizard. It will let you connect to a MySQL server and reverse engineer a schema. The generated diagram shows "References:" and "Referenced By:" columns: "References" shows the parents of the current table and "Referenced By" shows its children. Parents are connected with a green dotted line marked with a ">" sign, while child tables are linked with a blue line.



April 21, 2009

 

MySQL Case Study - 174

Finding Numbers

How do I find only INT or BIGINT numbers in a column?
For example, in the following column I want to return only 43344433,
and not 434-433 or any other row that contains a dash, comma or space.

mysql> select * from ztest where name REGEXP '[[:digit:]]-';
+-----------+
| name      |
+-----------+
| 434-433   |
| 4334-4433 |
+-----------+
2 rows in set (0.00 sec)

mysql> select * from ztest ;
+--------------------------+
| name                     |
+--------------------------+
| '004500' '00984'         |
| '0304500' , '020984'     |
| '000304500' , '00020984' |
| '000304500' '00020984'   |
| '000304500 00020984'     |
| '000304500, 00020984'    |
| 434-433                  |
| 4334-4433                |
| 43344433                 |
+--------------------------+
9 rows in set (0.00 sec)
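One way to get the answer is to anchor the pattern so that only rows made up entirely of digits match. A sketch:

select * from ztest where name REGEXP '^[[:digit:]]+$';
-- matches 43344433 only; rows containing a dash, comma, quote or space are rejected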



October 02, 2007

 

MySQL FAQ - 3

1) Importing and exporting Excel Data

a) The syntax to import CSV data from a text file is simple and very fast.

LOAD DATA INFILE 'datafile.txt' INTO TABLE employee
FIELDS TERMINATED BY '|'
(employee_number, firstname, surname, tel_no, salary);

LOAD DATA INFILE has defaults of:
FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\'

b) The trick to export data in CSV or TSV format is to add an into outfile clause like this...

set @row:= 0;
(select 'srno', 'enroll_no', 'stud_fname', 'stud_lname', 'stud_address', 'stud_address1', 'stud_address2', 'stud_city', 'stud_pin', 'state_nm')
union
(select (@row:= @row + 1) as srno, a.enroll_no, stud_fname, stud_lname, stud_address, stud_address1, stud_address2, stud_city, stud_pin, state_nm
into outfile '/home/shantanu/CAT_ADV_OCT_07.tsv' FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n'
from course_enroll as a,
course as b
where date like '%2007%' and flag in ('o','a') and
a.branch_id=b.branch_id
group by a.enroll_no
order by stud_pin);
_____

Here is another way to import / export data from Excel

a) To import data from Excel into MySQL, here are 2 easy steps.
Select Text (MS-DOS) as the option in the "Save as type" drop-down while saving the file, then use the following command to load the data into MySQL.

load data local infile 'country_code.txt' into table test.country_code columns optionally enclosed by '"';

You may need to trim the data if extra spaces are added to the beginning or end of the string.

update country_code set iso_code = trim(iso_code);

_____

b) To export the data, use the standard input and standard output descriptors like this...

1) Save the query in a text file, e.g. query.txt

2) Use the following command to generate an Excel-readable result file.
mysql country_list < query.txt > query_to_excel.txt


2) Delete vs. truncate

a) Do NOT use:

DELETE FROM classifieds;

Rather, use:

TRUNCATE TABLE classifieds;

The difference here is that DELETE drops records one by one, and with a million rows all those one-by-one deletes are far too slow!

b) If you want a count of the records that were deleted, then you have to use the DELETE FROM command like this...
DELETE FROM classifieds where 1 = 1
It will display an "N rows affected" message once it completes the operation.



3) Analyze and Optimize tables:

You can provide information to the optimizer by running

ANALYZE TABLE tablename;

This stores the key distribution for the table (running ANALYZE is equivalent to running myisamchk -a or myisamchk --analyze). Many deletes and updates leave gaps in the table (especially when you're using varchar, or in particular text/blob fields). This means there are more unnecessary disk I/Os, as the head needs to skip over these gaps when reading. Running

OPTIMIZE TABLE tablename

solves this problem. Both of these statements should be run fairly frequently in any well-looked-after system.


4) Explain and Procedure:

a) Add the word EXPLAIN before any SELECT statement to see the "kundali" (the full horoscope, so to speak) of the command.

EXPLAIN SELECT firstname FROM employee WHERE overtime_rate<20*2;

+----------+-------+---------------+---------------+---------+------+------+------------+
| table    | TYPE  | possible_keys | key           | key_len | ref  | rows | Extra      |
+----------+-------+---------------+---------------+---------+------+------+------------+
| employee | range | overtime_rate | overtime_rate |       4 | NULL |    1 | where used |
+----------+-------+---------------+---------------+---------+------+------+------------+

The output from EXPLAIN shows "ALL" in the TYPE column when MySQL uses a table scan to resolve a query. The possible types are, from best to worst: system, const, eq_ref, ref, range, index and ALL. MySQL can perform the 20*2 calculation once, and then search the index for this constant.

explain tbl_name
will display all the relevant info about the table in question.

SHOW TABLE STATUS LIKE 'your_table_name'
will show, among other things, the timestamp of table creation.

b) EXPLAIN provides more information about indexes, but PROCEDURE ANALYSE() gives you more information about the data returned.

SELECT center_code FROM employee PROCEDURE ANALYSE()

Min_value : 34
Max_value : 232
Empties_or_zeros : 0
Nulls : 0
Avg_value : 133
Optimal_fieldtype : ENUM('34','232') NOT NULL


5) SHOW commands:

SHOW PROCESSLIST
// you can check whether your query is still running, or waiting for some lock.
SHOW FULL PROCESSLIST
// to see which query or set of queries takes the longest time

SHOW CREATE TABLE employee
// displays exactly how the table was created.

DESCRIBE tbl_name
DESC tbl_name
// shows the column definitions (a shortcut for SHOW COLUMNS FROM tbl_name)

SHOW VARIABLES
// lists the server configuration variables (SHOW STATUS shows the runtime status counters)

STATUS
// Information on connection, uptime, version and user


6) Indexes on partial columns:
In the last post, I discussed composite indexes. But there is a limit of 556 bytes that can be grouped together in a single index, so the following statement would fail if each of the fields were declared as 250 characters, since the total would be more than 556.

ALTER TABLE employee ADD INDEX(surname, firstname, middlename);

Instead, we can index on partial text, something like this...

ALTER TABLE employee ADD INDEX(surname(20),firstname(20), middlename(20));

Now each index entry we write is just 10% of the original size, and MySQL will accept the 3-column composite index even though the full field total is more than 556.



September 22, 2007

 

MySQL FAQ - 2

1) How do I back up my tables using the UNIX file system?

.frm holds the definition of a table. With MyISAM, the .MYD and .MYI files hold the data and index information for the table.
If you are using InnoDB or another non-MyISAM engine for a table, you will not have .MYD and .MYI files. Therefore, backing up MyISAM tables is easy and fast if you have access to these files. You can create a table of the same structure without keys and load data into it to get a correct .MYD (data file). Then create a table with all keys defined and copy over the .frm and .MYI files from it, followed by FLUSH TABLES. Now you can use REPAIR TABLE to rebuild all keys by sort, including UNIQUE keys. A sketch of this trick is shown below.
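A minimal sketch of the rebuild trick, assuming a hypothetical MyISAM table in the test database and a datadir of /var/lib/mysql (table and file names are illustrative, not from the original post):

-- 1) load the data into a keyless copy to get a clean .MYD quickly
CREATE TABLE t_nokeys (id int, name varchar(100)) ENGINE=MyISAM;
LOAD DATA INFILE '/tmp/t.txt' INTO TABLE t_nokeys;

-- 2) create an empty table t_keys with the full key definition, then at the shell level
--    copy its .frm and .MYI over the files of the data table:
--      cp /var/lib/mysql/test/t_keys.frm /var/lib/mysql/test/t_nokeys.frm
--      cp /var/lib/mysql/test/t_keys.MYI /var/lib/mysql/test/t_nokeys.MYI

-- 3) tell the server to re-open the files and rebuild all keys by sort
FLUSH TABLES;
REPAIR TABLE t_nokeys;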

2) What is the difference between MyISAM and InnoDB tables?

MyISAM                           InnoDB
Backup files                     Can't backup files
Doesn't support crash recovery   Does support crash recovery
Table level locking              Row level locking
Fast full table scan             Slow full table scan
Full text index                  No full text index possible
No Foreign Key support           Foreign Key support
No Transaction support           Supports transactions


InnoDB is a storage engine, and so is MyISAM. When you create a table, you specify one of the types. When you add data and indexes to that table, the type of table determines the storage engine used. InnoDB does row level locking. This means that when you are updating a row, only that one row gets locked (which means that another connection to the database cannot modify that row). MyISAM locks the entire table. Only one connection / session at a time can update / insert / delete.

InnoDB uses the concept of a tablespace; MyISAM doesn't. A tablespace is where you store your data, and is made up of data files. You don't know where your data is stored within those data files. When you create a table in MyISAM, it creates files of the same name as your table. Some queries are really bad on InnoDB compared to MyISAM; the most notorious is probably SELECT COUNT(*). A full table scan is also much faster on MyISAM than on InnoDB.

InnoDB doesn't support full text indexes on text columns so if you need to search text fields you may want to use MyISAM.

In addition to foreign keys, InnoDB offers transaction support, which is absolutely critical when dealing with larger applications. Speed does suffer though, because all this foreign key / transaction handling takes a lot of overhead. With InnoDB it becomes particularly important that you use good keys.

Inserting 50,000 records is something MyISAM is very comfortable with. In some cases it is faster to drop the indexes, insert the records and recreate the indexes (especially with full text indexes).

Use InnoDB in a situation where there are lots of inserts, updates and selects. Tests show that MyISAM is a lot faster when there are very few records (<50k), but the average execution time for a query increases almost linearly with the number of records, while InnoDB shows almost constant query execution times for both very small and very large tables.

MyISAM does not support crash recovery while InnoDB does.


3) What should be the correct column definition?
VARCHAR(25) and VARCHAR(200) are practically the same for the purpose of storing data on disk. If the user types only 5 characters in either column, the bytes used will be the same. Choose the value liberally, but remember that VARCHAR(250) for all text columns is not a good idea either, because it will not allow you to create composite indexes.

4) How do primary and other keys affect the programming logic?

Composite primary keys are great for avoiding duplicate entries. But sometimes they can be a problem. For example, take a look at the following table's PRIMARY KEY definition.

CREATE TABLE `student` (
  `enroll_no` varchar(13) NOT NULL default '',
  `other` int(2) NOT NULL default '0',
  `year` int(4) NOT NULL default '0',
  `month` tinyint(2) unsigned default NULL,
  `mode_read` int(2) NOT NULL default '0',
  `mode_direct` int(2) NOT NULL default '0',
  `sent_flag` int(11) NOT NULL default '0',
  `system_date` datetime NOT NULL default '0000-00-00 00:00:00',
  PRIMARY KEY (`enroll_no`,`other`,`year`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

Adding year as a part of the primary key, but not adding the month column, means there can be only 1 prospectus entry per year for each student. It also means we cannot simply use the replace command, which inserts a record if the primary key values do not exist yet and replaces the row if they already do.



August 15, 2007

 

MySQL FAQ

1) Why doesn't the following query work?

SELECT col1, col2, concat_ws( ' ', firstname, lastname ) AS fullname
FROM players
WHERE fullname LIKE 'The Name'

The reason you are getting the error is that you can't refer to a column alias in a WHERE clause. Take the main part of your query:

SELECT concat_ws( ' ', firstname, lastname ) AS fullname
FROM players

and wrap another select around it:

SELECT col1, col2 FROM (
SELECT col1, col2, concat_ws( ' ', firstname, lastname ) AS fullname
FROM players)

Now, in order to use this, you need to assign an alias to your inner query because it is considered a derived table. You can use whatever you like, but DT or dt is easy to remember as "Derived Table". (Note that the inner query also has to select col1 and col2, or the outer query cannot see them.)

SELECT col1, col2 FROM (
SELECT col1, col2, concat_ws( ' ', firstname, lastname ) AS fullname
FROM players
) as DT

You can now use your WHERE clause to refer to fullname.

SELECT col1, col2 FROM (
SELECT col1, col2, concat_ws( ' ', firstname, lastname ) AS fullname
FROM players
) as DT
WHERE fullname LIKE 'The Name'
LIMIT 0,30

_____

2) Why am I getting the wrong results?

SELECT ROUND(( NOW() - last_accessed ) /60)
FROM sessions
WHERE user_id = '1'

Sometimes your query may show unexpected results. For example, in the query above you may get correct results when the difference is 1 or 2 minutes, but when the difference is around 20 minutes you get an incorrect value such as 30 minutes, and when the actual result is 45 you get 58 minutes, and so on. This happens because subtracting two datetime values with "-" treats them as plain numbers rather than as points in time. The correct way to write this query is to convert the time to seconds and then divide by 60.

SELECT ROUND( TIME_TO_SEC( NOW() - last_accessed ) /60 ) AS minutes_ago
FROM sessions
WHERE user_id =171

And if you are using a recent version of MySQL, you can use the TIMEDIFF function.

SELECT ROUND( TIME_TO_SEC(TIMEDIFF(NOW(), last_accessed) ) / 60 ) as minutes_ago
FROM sessions
WHERE user_id = 1

_____

3) What is a Theta Join?

Theta joins are not joins which employ comma syntax; theta joins are joins which involve inequality comparisons in the join conditions. For example:

select a.foo, b.bar
from a, b where b.id < a.id

select a.foo, b.bar
from a inner join b on b.id < a.id


