Shantanu's Blog

Database Consultant

October 29, 2008

 

exception handler

Here is a procedure that will create a new product code or, if the product code already exists, update it with a new name.
The procedure detects an attempt to insert a duplicate value by using an exception handler. If the attempt to insert fails, the error is trapped and an UPDATE is issued in place of the INSERT. Without the exception handler, the stored program execution is stopped, and the exception is passed back unhandled to the calling program.

DELIMITER $$

CREATE PROCEDURE sp_product_code
(in_product_code VARCHAR(2),
in_product_name VARCHAR(30))

BEGIN

DECLARE l_dupkey_indicator INT DEFAULT 0;
DECLARE duplicate_key CONDITION FOR 1062;
DECLARE CONTINUE HANDLER FOR duplicate_key SET l_dupkey_indicator = 1;

INSERT INTO product_codes (product_code, product_name) VALUES (in_product_code, in_product_name);

IF l_dupkey_indicator THEN
UPDATE product_codes SET product_name=in_product_name WHERE product_code=in_product_code;
END IF;
END$$

DELIMITER ;


Define a named condition, duplicate_key, that is associated with MySQL error 1062. While this step is not strictly necessary, we recommend that
you define such conditions to improve the readability of your code (you can now reference the error by name instead of number).

Define a handler that will trap the duplicate key error and then set the value of the variable l_dupkey_indicator to 1 (true) if a duplicate key violation is encountered anywhere in the subsequent code.

You can now check the value of the l_dupkey_indicator variable. If it is still 0, the INSERT was successful and we are done. If the value has been changed to 1 (true), we know that there has been a duplicate key violation, and we run the UPDATE statement instead.
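To see the handler in action, call the procedure twice with the same code. (A quick test sketch; the product_codes table definition below is my assumption for illustration, since the post does not show it.)

CREATE TABLE product_codes (
product_code VARCHAR(2) PRIMARY KEY, -- assumed definition; the PRIMARY KEY is what raises error 1062 on duplicates
product_name VARCHAR(30)
);

CALL sp_product_code('A1', 'Apple Juice'); -- INSERT succeeds
CALL sp_product_code('A1', 'Apple Cider'); -- INSERT fails, handler fires, UPDATE runs
SELECT * FROM product_codes;               -- returns A1, 'Apple Cider'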



October 12, 2008

 

Partitions in 5.1

Since I am using an archive table in order to compress the data, I can't use any keys, but I can still make the select query fast by taking advantage of partitions.
With regular MyISAM or InnoDB tables you can create regular keys and use both: keys + partitions.
Sounds interesting, right?

DROP TABLE IF EXISTS tr;

CREATE TABLE tr (id INT, name VARCHAR(50), purchased DATE) engine=archive
PARTITION BY RANGE( YEAR(purchased) ) (
PARTITION p0 VALUES LESS THAN (1990),
PARTITION p1 VALUES LESS THAN (1995),
PARTITION p2 VALUES LESS THAN (2000),
PARTITION p3 VALUES LESS THAN (2005)
);

INSERT INTO tr VALUES
(1, 'desk organiser', '2003-10-15'),
(2, 'CD player', '1993-11-05'),
(3, 'TV set', '1996-03-10'),
(4, 'bookcase', '1982-01-10'),
(5, 'exercise bike', '2004-05-09'),
(6, 'sofa', '1987-06-05'),
(7, 'popcorn maker', '2001-11-22'),
(8, 'aquarium', '1992-08-04'),
(9, 'study desk', '1984-09-16'),
(10, 'lava lamp', '1998-12-25');

SELECT * FROM tr WHERE purchased BETWEEN '1995-01-01' AND '1999-12-31';

mysql> explain partitions SELECT * FROM tr WHERE purchased BETWEEN '1995-01-01' AND '1999-12-31'\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: tr
partitions: p2
type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 10
Extra: Using where

mysql> SELECT * FROM tr WHERE purchased BETWEEN '1995-01-01' AND '1999-12-31';
+------+-----------+------------+
| id   | name      | purchased  |
+------+-----------+------------+
|    3 | TV set    | 1996-03-10 |
|   10 | lava lamp | 1998-12-25 |
+------+-----------+------------+

If you have a key on the purchased column and the table type is MyISAM or InnoDB, then the explain plan will look like this...

mysql> explain partitions SELECT purchased FROM tr WHERE purchased BETWEEN '1995-01-01' AND '1999-12-31'\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: tr
partitions: p2
type: range
possible_keys: purchased
key: purchased
key_len: 4
ref: NULL
rows: 1
Extra: Using where; Using index

The following query shows the details of the partitions from information_schema...

mysql> SELECT TABLE_SCHEMA AS db, TABLE_NAME AS tb, PARTITION_NAME AS pName, TABLE_ROWS AS rows, PARTITION_EXPRESSION AS criterion, PARTITION_DESCRIPTION as ulimit, PARTITION_METHOD as method, CREATE_TIME as ctime, UPDATE_TIME as mtime FROM information_schema.partitions WHERE table_name = 'tr';
+-----------+----+-------+------+-----------------+--------+--------+---------------------+---------------------+
| db        | tb | pName | rows | criterion       | ulimit | method | ctime               | mtime               |
+-----------+----+-------+------+-----------------+--------+--------+---------------------+---------------------+
| shantanuo | tr | p0    |    3 | YEAR(purchased) | 1990   | RANGE  | 2008-10-12 13:55:23 | 2008-10-12 13:55:24 |
| shantanuo | tr | p1    |    2 | YEAR(purchased) | 1995   | RANGE  | 2008-10-12 13:55:23 | 2008-10-12 13:55:24 |
| shantanuo | tr | p2    |    2 | YEAR(purchased) | 2000   | RANGE  | 2008-10-12 13:55:23 | 2008-10-12 13:55:24 |
| shantanuo | tr | p3    |    3 | YEAR(purchased) | 2005   | RANGE  | 2008-10-12 13:55:23 | 2008-10-12 13:55:24 |
+-----------+----+-------+------+-----------------+--------+--------+---------------------+---------------------+
4 rows in set (56.70 sec)

I tried it on a big test table and found that one should use partitions, especially with the archive table type. Since archive does not support indexes, some queries are very slow. But with partitions I do not need indexes, because MySQL fetches the data from a relatively small file. See the example given below.

The MyISAM table with basic indexes consumed 20 GB to store 7,288,953 rows.

-rw-r----- 1 mysql mysql 11K Oct 14 17:17 Feedback.frm
-rw-r----- 1 mysql mysql 20G Oct 14 17:48 Feedback.MYD
-rw-r----- 1 mysql mysql 567M Oct 14 17:49 Feedback.MYI

The archive table is less than 2 GB (a reduction of more than 90%), and partitioning broke it up into 9 files of roughly 300 MB each.

-rw-rw---- 1 mysql mysql 11K Oct 15 17:46 Feedback_archive2.frm
-rw-rw---- 1 mysql mysql 76 Oct 15 17:46 Feedback_archive2.par
-rw-rw---- 1 mysql mysql 88 Oct 15 17:46 Feedback_archive2#P#p803.ARZ
-rw-rw---- 1 mysql mysql 23M Oct 16 03:30 Feedback_archive2#P#p804.ARZ
-rw-rw---- 1 mysql mysql 108M Oct 16 03:30 Feedback_archive2#P#p805.ARZ
-rw-rw---- 1 mysql mysql 352M Oct 16 03:30 Feedback_archive2#P#p806.ARZ
-rw-rw---- 1 mysql mysql 308M Oct 16 03:30 Feedback_archive2#P#p807.ARZ
-rw-rw---- 1 mysql mysql 305M Oct 16 03:30 Feedback_archive2#P#p808.ARZ
-rw-rw---- 1 mysql mysql 375M Oct 16 03:30 Feedback_archive2#P#p809.ARZ
-rw-rw---- 1 mysql mysql 366M Oct 16 03:30 Feedback_archive2#P#p810.ARZ
-rw-rw---- 1 mysql mysql 134M Oct 16 03:30 Feedback_archive2#P#p811.ARZ

If I remove some of the unnecessary columns from the table, the size is reduced by another 1 GB, as shown below.

-rw-rw---- 1 mysql mysql 15K Oct 14 18:41 Feedback_archive.frm
-rw-rw---- 1 mysql mysql 76 Oct 14 18:41 Feedback_archive.par
-rw-rw---- 1 mysql mysql 88 Oct 14 18:41 Feedback_archive#P#p803.ARZ
-rw-rw---- 1 mysql mysql 11M Oct 15 03:30 Feedback_archive#P#p804.ARZ
-rw-rw---- 1 mysql mysql 47M Oct 15 03:30 Feedback_archive#P#p805.ARZ
-rw-rw---- 1 mysql mysql 155M Oct 15 03:30 Feedback_archive#P#p806.ARZ
-rw-rw---- 1 mysql mysql 146M Oct 15 03:30 Feedback_archive#P#p807.ARZ
-rw-rw---- 1 mysql mysql 149M Oct 15 03:30 Feedback_archive#P#p808.ARZ
-rw-rw---- 1 mysql mysql 169M Oct 15 03:30 Feedback_archive#P#p809.ARZ
-rw-rw---- 1 mysql mysql 154M Oct 15 03:30 Feedback_archive#P#p810.ARZ
-rw-rw---- 1 mysql mysql 58M Oct 15 03:30 Feedback_archive#P#p811.ARZ

mysql> SELECT PARTITION_NAME, TABLE_ROWS FROM INFORMATION_SCHEMA.PARTITIONS WHERE TABLE_NAME = 'Feedback_archive';
+----------------+------------+
| PARTITION_NAME | TABLE_ROWS |
+----------------+------------+
| p803           |          0 |
| p804           |      85679 |
| p805           |     390157 |
| p806           |    1319141 |
| p807           |    1229578 |
| p808           |    1276819 |
| p809           |    1287831 |
| p810           |    1164843 |
| p811           |     534905 |
+----------------+------------+
9 rows in set (0.00 sec)

And the relevant extract of the create table statement looks like this...

) ENGINE=ARCHIVE PARTITION BY RANGE (extract(year_month from (FeedDate)))
(PARTITION p803 VALUES LESS THAN (200803),
PARTITION p804 VALUES LESS THAN (200804),
PARTITION p805 VALUES LESS THAN (200805),
PARTITION p806 VALUES LESS THAN (200806),
PARTITION p807 VALUES LESS THAN (200807),
PARTITION p808 VALUES LESS THAN (200808),
PARTITION p809 VALUES LESS THAN (200809),
PARTITION p810 VALUES LESS THAN (200810),
PARTITION p811 VALUES LESS THAN (200811))
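Note that a row with a FeedDate in November 2008 or later would be rejected, because it falls beyond the last partition, so a new partition has to be added before each month begins. A sketch of the monthly maintenance statement (the name p812 simply follows the naming pattern above):

ALTER TABLE Feedback_archive ADD PARTITION (PARTITION p812 VALUES LESS THAN (200812));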



October 10, 2008

 

mysqldump backups and transfers

A collection of mysqldump commands to back up, filter, and transfer data, and ways to post-process the dump file.
// Good for backups
mysqldump --all-databases

// Only one database named 'test'
mysqldump test

// Only one table tt from the database test
mysqldump test tt

// all tables from test database except table Audit
mysqldump test --ignore-table=Audit

// save the dump to a file
mysqldump test tt > testing1.txt

// only structure without Data
mysqldump test -d

// only data in tab delimited format
mysqldump test tt --tab=/var/lib/mysql/customers

// dump the data in comma separated format just like --tab option
mysqldump --compact --no-create-info --skip-extended-insert db_name tbl_name | sed 's/INSERT .*.(//' | sed 's/);$//'

// dump a single row from each table with their column names
mysqldump emailplatform_db --no-create-info --where="1=1 limit 1" --compact --complete-insert 

copy the tt.sql and tt.txt files to another host and restore...
cat /backup/*.sql | mysql sakila
mysqlimport sakila /backup/*.txt

// without comments, lock tables, drop table if exist
// only create table statement and insert into in a single statement
mysqldump test tt --compact
CREATE TABLE `tt` (
`name` varchar(50) default NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
INSERT INTO `tt` VALUES ('aba'),('abacad'),('abacadaea'),('aba'),('abacad'),('abacadaea');

// triggers backup
mysqldump -hlocalhost -uroot -proot@123 db_name -d --no-create-info | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' > triggers_backup.sql

# drop current triggers
mysql -Bse"select CONCAT('drop trigger ', TRIGGER_SCHEMA, '.', TRIGGER_NAME, ';') from information_schema.triggers" | mysql -hlocalhost -uroot -proot@123

# Restore from file, use root@localhost credentials
mysql -hlocalhost -uroot -proot@123 < triggers_backup.sql


// without create table statement
mysqldump test tt --compact --no-create-info
INSERT INTO `tt` VALUES ('aba'),('abacad'),('abacadaea'),('aba'),('abacad'),('abacadaea');

// separate insert statements
mysqldump test tt --compact --no-create-info --skip-extended-insert --skip-tz-utc
INSERT INTO `tt` VALUES ('aba');
INSERT INTO `tt` VALUES ('abacad');
INSERT INTO `tt` VALUES ('abacadaea');
INSERT INTO `tt` VALUES ('aba');
INSERT INTO `tt` VALUES ('abacad');
INSERT INTO `tt` VALUES ('abacadaea');

// only records that satisfy a criterion
mysqldump test tt --compact --no-create-info --skip-triggers --insert-ignore --skip-extended-insert --where="name like 'abacada%'"
INSERT ignore INTO `tt` VALUES ('abacadaea');
INSERT ignore INTO `tt` VALUES ('abacadaea');

mysqldump db_name ox_banners ox_campaigns --compact --no-create-info --skip-extended-insert --where='campaignid = 66'

// 100 records from each table of database dbName for testing
mysqldump --opt --where="true LIMIT 100" dbName > dbname.sql

// backup the stored procedure
mysqldump mysql proc --no-create-info --compact -w"db='db_name' and name='sp_name'"
mysqldump dbname -R -d --compact > somefile.txt
mysqldump -R adyp_mumbai > /root/jdyp_backup/"adyp_mumbai_48_`date +%d%m%y%I`.sql"

// import remote table directly into your local test database
mysqldump -h db4free.net -P 3306 -ushantanuo -pPassWord shantanuo calendar | mysql test

// dump backup to another server
mysqldump test | ssh user@remote.server.com "dd of=/mysql/$(date +'%d-%m-%y')"

// transfer one database to another server
mysqldump -h host1 -u username -pPASSWORD database_name | mysql -h host2 -u username -pPASSWORD database_name

// transfer one table to remote server
mysqldump test india | mysql -h db4free.net -P 3306 -ushantanuo -pindia162 shantanuo

// transfer one table to another using ssh
mysqldump test mytest1 --compact | ssh root@172.29.0.215 mysql test

// transfer only data to remote table
mysqldump thisDB thisTBL --no-create-info --compact --skip-quote-names | replace "INSERT INTO thisTBL " "INSERT INTO thisTBL_New " | mysql -h 192.29.0.213 -uroot -pPassWd remoteDB

// Creates an optimized backup of the test database, then passes the output directly to gzip to compress the data
mysqldump --opt test | gzip > /root/test.sql.gz

// Does the same thing as above, but for all of the databases on the server.
mysqldump --opt -A | gzip > destination

// copy a table to another server using gzip and ssh
mysqldump -uroot -pPassWord dbName tbl_name --compact | gzip | ssh 999.999.999.999 "gunzip | mysql -uroot -pPassWord someDB"

// dump all the databases to remote server
for I in $(mysql -e 'show databases' -uroot --password=root -s --skip-column-names); do mysqldump -uroot --password=root $I | gzip -c | ssh user@server.com "cat > /home/$I.sql.gz"; done

//do not change the timestamp values based on timezones
mysqldump test testTime --skip-tz-utc

// restore only one database
mysql --one-database BooksDB < mytest.txt

// reset auto increment value to 0
# mysqldump -d dbname --compact | sed 's/\(.*ENGINE.*AUTO_INCREMENT=\).*/\10;/g'

// mysql dump of selected databases without data
mysqldump -uroot -pPassWord -d --compact --routines `echo $(mysqlshow -uroot -pPassWord | awk '{print $2}' | egrep -v 'mysql|test|Databases|information_schema|lost\+found')` | sed 's/\(.*ENGINE.*AUTO_INCREMENT=\).*/\10;/g'

// search for a word P122C8 in any column
mysqldump --skip-extended-insert db_name tbl_company | grep 'P122C8'

// In order to maintain the consistency of InnoDB tables
--single-transaction --master-data=2
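// for example, a full command might look like this (a sketch; db_name and the target path are placeholders, and --master-data requires the binary log to be enabled on the server)
mysqldump --single-transaction --master-data=2 -R db_name > /backup/db_name_consistent.sql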

// copy a procedure from one database to another
(echo "DELIMITER ;; " && mysqldump --all-databases --routines -d | awk '/testProc/,/;;/') | mysql -h192.0.0.0 -uroot -pPassWord Other_DB

// remove the DEFINER from SP or Function
// http://forge.mysql.com/tools/tool.php?id=242
mysqldump -uroot -pPassWord -d --routines dbName | perl /home/develop/dumper.pl --delete | more
time mysqldump -uroot -pPassWord -R --single-transaction dbDRCG | perl /home/shantanu/dumper.pl --delete | mysql -uroot -pPassWord shantanu

// or use sed
mysqldump prod_int_dev_ve app_users -d | sed -e 's/DEFINER=[^*]*\*/\*/' | more



// change all the definers of stored procedures to root@localhost
mysqldump -uroot --all-databases --no-create-info --no-data -R --skip-triggers | sed -e 's/DEFINER=[^*]*\*/\*/' | mysql -hlocalhost -uroot -proot@123


// strip indexes, change engine, set up slave
## perl -p0777i -e 's/,\n^\)/\n\)/mg' -> after removing the indexes, any commas before the closing bracket need to be removed
mysqldump -uuser -ppassword -h127.0.0.1 -P3306 dbname --master-data=1 | sed 's/ENGINE=MyISAM/ENGINE=archive/g' | grep -v '^ UNIQUE KEY' | grep -v '^ KEY' | perl -p0777i -e 's/,\n^\)/\n\)/mg' | mysql -uuser -ppassword -h127.0.0.1 -P3307 dbname

// Exclude one or two databases from dump
mysql -BNe"show databases" | egrep -vw 'four|five_report|information_schema|mysql|test' | xargs echo mysqldump --single-transaction

// skip 2 tables from dump, can ignore views as well
mysqldump -uroot -proot@123 -h10.10.10.10 -P3308 --all-databases --ignore-table=Batch.JOB_EXECUTION --ignore-table=Batch.EXECUTION_CONTEXT

// skip locking for log tables and take structure backup
50 08 * * * time mysqldump -h10.10.10.10 -uroot -proot@123 -d -R --all-databases --skip-lock-tables --skip-comments | zip > /home/backup_structure_101_$(date +'\%d\%m\%y').sql.zip 2> /home/backup_err.txt

// one liner to drop all databases
mysqlshow -uroot | egrep -v "(Databases|information_schema|performance_schema|mysql)" | awk '{print "mysql -uroot -Bse \"drop database", $2, "\""}' | sh

// one liner to check foreign key violations
mysqlshow | egrep -v "(Databases|information_schema|performance_schema|mysql)" | awk '{print $2}' | sed '/^$/d' | awk '{sub($1,"mysql \-uroot \-Bse \"call mysql.p_check_fk_constraint_violations\('"'"'&'"'"','"'"'%'"'"'\)\""); print $0}' | sh

// check all DB's for data violation of foreign key
# for dbname in `mysqlshow`; do echo $dbname; mysql -e"call mysql.p_check_fk_constraint_violations('$dbname','%')"; done

## mysql full backup cron
58 11 * * * time mysqldump -h10.11.12.13 -uroot -proot@123 -R --skip-tz-utc --master-data=2 --add-drop-database --databases db1 db2 db3 | zip > /home/shantanu/backup_full_master.sql.zip 2> /home/shantanu/backup_err.txt

# daily schemawise backup
mysql -BNe"show databases" | egrep -v "(Databases|information_schema|performance_schema|mysql)" | awk '{print "mysqldump -R -f --skip-tz-utc --master-data=2 --add-drop-database --databases", $1, "| gzip > /home/backup/"$1"_$(date +\%Y\%m\%d).sql.gzip"}' | sh

# daily schemawise backup using cron-job. escape special character %
32 09 * * * mysql -BNe"show databases" | egrep -v "(Databases|information_schema|performance_schema|mysql)" | awk '{print "mysqldump -R -f --skip-tz-utc --master-data=2 --add-drop-database --databases", $1, "| gzip > /home/backup/"$1"\_$(date +\%Y\%m\%d).sql.gzip"}' | sh > /home/shantanu/success.txt 2> /home/shantanu/err.txt

// schema wise back up of all databases starting with "STR"
time mysql -BNe"show databases" | grep ^STR | awk '{print "mysqldump -R -f --skip-tz-utc --master-data=2 --add-drop-database --databases", $1, "| gzip > /home/shantanu/STR/"$1".sql.gzip"}' | sh

// create blackhole tables on slave for dry run purpose
time mysqldump --databases db_name -d --skip-add-drop-table | sed 's/ENGINE=MyISAM/ENGINE=BLACKHOLE/g' | mysql -h Remote_server_ip -uUser_Name -pPassWd

// return the root user privileges
mysqldump mysql user --compact --no-create-info  --where="Host = 'localhost' and User = 'root'"

// there is no --ignore-database option, use the following to select specific DBs from a long list
time mysqldump --databases `mysql -Bse "SELECT GROUP_CONCAT(schema_name SEPARATOR ' ') FROM information_schema.schemata WHERE schema_name NOT IN ('mysql','performance_schema','information_schema');"` > /mysql/backup/rds2.sql

# create the same user root as localhost to access from any IP address
 mysqldump mysql user --where='Host="localhost" and User="root" limit 1' --compact --no-create-info | sed 's/localhost/%/' | mysql mysql && mysqladmin flush-privileges

// there are times when Linux commands like sed and grep are more powerful than MySQL UPDATE and ALTER:

mysqldump test AIRPORT --no-create-info --compact --skip-extended-insert | sed 's/INSERT INTO `AIRPORT` VALUES (//g' | sed 's/);//' > to_load.txt
1,'new airpor\"t',1
3,'3 airport',0
0,'FRANCE',4
5,'PAKIS\'TAN',5

// mysqldump by default uses extended-insert mode, which packs multiple rows into a single INSERT INTO statement. Use sed to break that up into separate statements

sed 's/),(/); insert into audit_trail values (/g'  mysqldump.sql > skip_extended_dump.txt

// Verbose mode can be activated and error log redirected to standard log in order to find the statement that has any issues

time mysql db_name -fvvv < skip_extended_dump.txt 2>to_study_err.txt >&2

// process the data found in file to_load.txt and then load it back after truncating the table

LOAD DATA LOCAL INFILE 'to_load.txt' into table AIRPORT fields terminated by ',' optionally enclosed by "'";

// if you did not take mysqldump backup using --replace or --insert-ignore
sed -i 's/^INSERT INTO/INSERT IGNORE INTO/' filename.sql
_____

To pipe into gzip:
mysqldump [options] | gzip -c > output.sql.gz
or
mysqldump [options] | gzip --stdout > output.sql.gz

To avoid saving the output file directly to the disk:
mysqldump | gzip --stdout | ssh username@host "cd /preferred_path/; cat > mysqldump_file.gz"

or even more fancy (though, if one step dies this can be a problem)
mysqldump | gzip --stdout | ssh username@host "gunzip | mysql -u username -ppassword"
(Note: If you prefer, netcat can be used instead of ssh)
_____

# typical backup command to be used on master server:
time mysqldump -h192.168.1.1 -ushantanu -pPassWd -R --master-data=2 db_name | zip > /home/db_name_101.sql.zip
_____

You can use the --compact or --skip-add-drop-table option to suppress the DROP TABLE IF EXISTS statement in the dump.
But if you have received a file that already contains that line, you will need to remove it before executing the dump so that the data is updated in the existing records.

// Use sed to delete those sql statements before importing

# Create a temporary filename
uniq="/tmp/temp_"`date "+%s"`
for item in `ssh user@dbserver ls`;
do
echo -n "Importing $item..."
scp user@dbserver:$item $uniq
gunzip < $uniq | sed '/DROP TABLE IF EXISTS/d' | mysql -f -u root $DB_SCHEMA_NAME
echo "..done"
done

# clean up
rm -f "$uniq"

_____

If you like to keep your DDL backed up in a source management tool like svn or cvs, and want separate files for stored procedures, events, triggers, and tables rather than a single dump file, you can easily do so using the below.

user=backup_user
password=`cat ~/.backup_password`
hostname=127.0.0.1
port=3306
dbname=test_db
path=/home/mysql/ddl
date=`date +%Y%m%d`

mysqldump -u$user -p$password -h$hostname -P$port --no-create-info --no-data --no-create-db --skip-opt $dbname > "$path"/"$dbname"_triggers_"$date".sql
mysqldump -u$user -p$password -h$hostname -P$port --routines --skip-triggers --no-create-info --no-data --no-create-db --skip-opt $dbname > "$path"/"$dbname"_routines_"$date".sql
mysqldump -u$user -p$password -h$hostname -P$port --events --skip-triggers --no-create-info --no-data --no-create-db --skip-opt $dbname > "$path"/"$dbname"_events_"$date".sql
mysqldump -u$user -p$password -h$hostname -P$port --skip-triggers --no-data --no-create-db --skip-opt $dbname > "$path"/"$dbname"_tables_"$date".sql

_____

Copy data from one server to another.

Sending server:
mysqldump db_name tbl_name | bzip2 -c | nc -l 1234

Receiving server:
nc 10.10.10.14 1234 | bzip2 -cd | mysql -uroot -proot@123 db_name



October 05, 2008

 

Unix Tip

Combining CLI

# cat redirect_example

#!/usr/bin/ksh

cat <<- ATC | sed "s/^/Redirect Example => /g" >> atc.out
This is an example of how to redirect
stdout to a file as well as pipe stdout into stdin
of another command (i.e. sed), all done inside
a here-document.

Cool eh?
ATC


Now let's run the script and see what the redirection and pipeline produce.

# ./redirect_example

# cat atc.out
Redirect Example => This is an example of how to redirect
Redirect Example => stdout to a file as well as pipe stdout into stdin
Redirect Example => of another command (i.e. sed), all done inside
Redirect Example => a here-document.
Redirect Example =>
Redirect Example => Cool eh?




October 02, 2008

 

E-mail errors and troubleshooting

Add the MAILTO line to your cron file if you want errors e-mailed to you.

crontab -e
MAILTO=shantanu.oka+fromcronfile@gmail.com
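
MAILTO applies to every job listed below it, so a complete crontab might look like this (the backup script path is only an illustration):

MAILTO=shantanu.oka+fromcronfile@gmail.com
# any output or errors from the jobs below will be mailed to the address above
0 2 * * * /usr/local/bin/nightly_backup.sh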

For all other errors/warnings you can create a .forward file:

1) create a new file called .forward
vi /root/.forward

2) Enter the email address on the first line in the file.
shantanu.oka+fromforwardfile@gmail.com

3) Save (w) the file and exit (q)
:wq

Now, anytime the 'root' user gets an e-mail, it will automatically be forwarded to the address you specified in the .forward file.

Click on "Create a filter" link in gmail.
Type shantanu.oka+fromforwardfile@gmail.com in the to: field and root@localhost.localdomain in the from: field. Use the check boxes "delete it" and "never send it to spam". Now these messages are automatically deleted, yet you can still read them (in the Trash) if there is any issue.
