Shantanu's Blog

Database Consultant

November 24, 2009

 

Manage logs

The default configuration file to manage logs is /etc/logrotate.conf:

# see "man logrotate" for details
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create
# uncomment this if you want your log files compressed
#compress
# RPM packages drop log rotation information into this directory
include /etc/logrotate.d
# no packages own wtmp -- we'll rotate them here
/var/log/wtmp {
monthly
create 0664 root utmp
rotate 1
}

Service or server specific configurations are stored in the /etc/logrotate.d directory. For example, here is a sample Apache logrotate configuration file:

/etc/logrotate.d/httpd
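
Something along these lines (the exact log path and the reload command differ between distributions, so treat this as a sketch):

/var/log/httpd/*log {
    missingok
    notifempty
    sharedscripts
    postrotate
        /bin/kill -HUP `cat /var/run/httpd.pid 2>/dev/null` 2>/dev/null || true
    endscript
}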

_____

Almost all logfiles are located under the /var/log directory (and its subdirectories).

* /var/log/cron.log: Crond logs (cron job)
* /var/log/maillog: Mail server logs
* /var/log/httpd/: Apache access and error logs directory
* /var/log/mysqld.log: MySQL database server log file
* /var/log/secure: Authentication log
* /var/log/yum.log: Yum log files

_____

A typical logrotate file for the slow query log looks like this...

[root@server-db1 logrotate.d]# cat /etc/logrotate.d/mysqld
/var/lib/mysqllogs/slow-log {
    daily
    rotate 5
    missingok
    delaycompress
    create 0640 mysql mysql
    # skip 'notifempty'

    postrotate
        MYCNF_FILE=/root/.my.cnf
        MYSQLADMIN=/usr/bin/mysqladmin
        if test -x $MYSQLADMIN && \
           $MYSQLADMIN --defaults-file="$MYCNF_FILE" ping >/dev/null
        then
            $MYSQLADMIN --defaults-file="$MYCNF_FILE" flush-logs
        fi
    endscript
}
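
To see what this configuration would do without actually rotating anything, run logrotate in debug mode against the file; add -f to force a rotation once the output looks right:

# dry-run: print the actions without executing them
logrotate -d /etc/logrotate.d/mysqld

# force an immediate rotation of the slow log
logrotate -f /etc/logrotate.d/mysqld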



November 22, 2009

 

Introduction to PEAR - Net_URL2

The Net_URL2 package helps you quickly process URLs without resorting to complex regular expressions or string manipulation.

http://pear.php.net/package/Net_URL2/

Here is how to use it...

<?php
include('Net/URL2.php');
$url = new Net_URL2('http://www.some-domain.com:80/search.php?q=beatles&id=56&cat=music');
echo "Host : " . $url->getHost() . "\n";
echo "Protocol : " . $url->getScheme() . "\n";
echo "Port : " . $url->getPort() . "\n";
echo "Path : " . $url->getPath() . "\n";
echo "Query Variables: \n";
print_r($url->getQueryVariables());
?>


Which will output the following:

Host : www.some-domain.com
Protocol : http
Port : 80
Path : /search.php
Query Variables:
Array
(
[q] => beatles
[id] => 56
[cat] => music
)

http://www.codediesel.com/pear/easy-manipulation-of-urls-in-php/



 

collation rules in full text search

I had discussed using collation rules to search unformatted telephone numbers.

http://oksoft.blogspot.com/search?q=character+set
Using collation rules in your Application

Today, we will see how you can use collation rules in full text search.

First, find your character set files. Typically they’re under [MySQL Install Path]\share\charsets, but you can double check that with SHOW VARIABLES LIKE 'character_sets_dir';

Edit Index.xml. Find the section for the character set you want to use (I’m using latin1), and add a new collation, with a new name and an unused id:
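
The entry looks something like this (latin1_ft_ci is the name used further below; pick any id that is not already taken by another collation in the file):

<charset name="latin1">
  ...
  <collation name="latin1_ft_ci" id="230"/>
  ...
</charset>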


Edit the character set file (latin1.xml in my example). Near the top you'll find the <ctype> map array. There's a leading 00 (which is there to pad the array to 257 entries for a legacy EOF convention, per the manual). After that, you'll find 256 bytes which identify the type of each character in the set. Find the hex value for the character you want (MySQL's HEX() function is handy for this). The value for ":" is 0x3A. Find that position in the array; remember that it starts at 0x00, so 0x3A is in the fourth row down, the eleventh value in from the left. Change the 10 there (which means spacing character) to 01 (which means upper case letter). You'll find the rest of the possible character types in the manual.

Scroll down in the same file and find the collations. Copy and paste the whole map from whichever one you normally use (like latin1_swedish_ci) and change the name to match the one we created in the Index.xml file (like latin1_ft_ci).
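
After a server restart the new collation shows up in SHOW COLLATION and can be applied to the column you want to search. A rough sketch, using a hypothetical MyISAM table contacts whose phone column already has a FULLTEXT index (REPAIR TABLE ... QUICK is one way to rebuild the index after the character set files change):

SHOW COLLATION LIKE 'latin1_ft_ci';

ALTER TABLE contacts MODIFY phone VARCHAR(40) CHARACTER SET latin1 COLLATE latin1_ft_ci;
-- rebuild the FULLTEXT index so ':' is now indexed as a word character
REPAIR TABLE contacts QUICK;

SELECT * FROM contacts WHERE MATCH(phone) AGAINST('022:12345678');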

http://thenoyes.com/littlenoise/?p=91



November 19, 2009

 

Disk Full Notification

The following shell script will notify you when the disk is almost full.
1) The curl call sends an SMS alert to your mobile.
2) You will also get an email at the addresses listed, provided sendmail is enabled. Do not forget to change the ADMIN variables to your own addresses.
3) The alert is also saved to a text file. Do not forget to check the disk.txt file in the log directory configured in the script!

Run this script every hour using cron:



#!/bin/sh
# disk space check - crontab entry to run it every hour
# alerts when any partition is more than the given threshold full (defaults to 90%)
# 15 * * * * /bin/sh -xv /root/disk-alert/disk1.sh 90 1>/root/disk-alert/log/disk_alert_succ.txt 2>>/root/disk-alert/log/disk_alert_err.txt


# change the path for output files
path='/root/disk-alert/log'
mailfile="/root/disk-alert/log/mail.txt"

# alert email with hostname alias
myhostname=`hostname`
ADMIN="shantanu.oak+$myhostname@gmail.com"
ADMIN1="abc@company.com"
ADMIN2="support@website.com"

# alert mobile number
number1='01234567890'
number2='09876543210'

# set alert level, 90% by default if no argument is given
ALERT=${1:-90}
mydate=`date '+%d %b %H:%M'`

df -HP | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }' | while read output;
do
echo $output
usep=$(echo $output | awk '{ print $1}' | cut -d'%' -f1 )
partition=$(echo $output | awk '{ print $2 }' )
if [ $usep -ge $ALERT ]; then
# the message that will be written to a file, mail, SMS, Pop-up

# the word 'space' is blacklisted by the SMS gateway, hence the wording below
mymessage="$(hostname) running out of disk $usep percent full of $partition as on $mydate"
echo "$mymessage" > $mailfile
echo "***files consuming more than 400 MB disk space *************" >> $mailfile

find / -type f -size +400000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }' >> $mailfile

echo "***Users consuming more space *************" >> $mailfile
cd /home/ && du -sm */ | sort -k1,1n | awk '$1 > 500 { sub(/$/, " MB", $1); print $0 }' >> $mailfile

# write to a file and email
echo "$mymessage" >> $path/disk.txt 2>> $path/disk_err.txt
cat $mailfile | mail -s "$myhostname disk full " $ADMIN, $ADMIN1, $ADMIN2

# sms alert add as many numbers as you want to the list
while read mnumber
do
curl -Ld'user=shantanu123@gmail.com:PassWd&state=4&senderID=TEST SMS&receipientno='$mnumber'&msgtxt='"'$mymessage'"'' http://api.mVaayoo.com/mvaayooapi/MessageCompose

done << mnumber_list
$number1
$number2
mnumber_list

# pop-up not applicable
# DISPLAY=:0 notify-send "Disk above 80% FULL"

fi
done
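
To test the alerting path without waiting for a partition to fill up, the script can be run by hand with a very low threshold, since it takes the percentage as its first argument:

sh -xv /root/disk-alert/disk1.sh 5

Any partition that is more than 5% full will then trigger the mail, the SMS calls and the disk.txt entry.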


Command to find big files, excluding the docker and mount directories:

find . -type d \(  -path "./var/lib/docker/*" -o -path "./mnt/*" \)  -prune -o  -type f -size +400000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'



November 14, 2009

 

Using rsync to resume an interrupted file copy

If the network connection fails while "scp"ing a file from the remote server to the central server, you have to start all over again. So why not use rsync instead of scp? Here is a file that took about 2 minutes to transfer. I stopped the transfer after 1 minute and then resumed it with the same command, and it completed the job without reinitiating the full transfer.

$ time rsync --archive --recursive --compress --partial --progress --append root@123.123.123.123:/backup/somefile.txt.bz2 /home/ubuntu/
root@123.123.123.123's password:
receiving file list ...
1 file to consider
somefile.txt.bz2
rsync error: unexplained error (code 130) at rsync.c(271) [generator=2.6.9]
rsync error: received SIGUSR1 (code 19) at main.c(1182) [receiver=2.6.9]

real 1m16.258s
user 0m0.092s
sys 0m0.056s
_____

$ time rsync --archive --recursive --compress --partial --progress --append root@123.123.123.123:/backup/somefile.txt.bz2 /home/ubuntu/
root@123.123.123.123's password:
receiving file list ...
1 file to consider
somefile.txt.bz2
4398997 100% 169.46kB/s 0:00:25 (xfer#1, to-check=0/1)

sent 42 bytes received 2302738 bytes 29713.29 bytes/sec
total size is 4398997 speedup is 1.91

real 1m17.166s
user 0m0.128s
sys 0m0.036s
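
If the connection is flaky, the same resume-friendly command can simply be retried in a loop until rsync exits cleanly. A small sketch, using the host, file and destination from the example above:

until rsync --archive --recursive --compress --partial --progress --append root@123.123.123.123:/backup/somefile.txt.bz2 /home/ubuntu/
do
    echo "transfer interrupted, retrying in 30 seconds..."
    sleep 30
done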



November 02, 2009

 

changing mode of my.cnf

If I keep the my.cnf file world writable (for example 666), then I get an error:
Warning: World-writable config file '/etc/my.cnf' is ignored

The solution is to change the mode to 770 and start mysqld as root.

# ls my*.* -lht
-rw-rw-rw- 1 root root 1.3K Nov 4 10:54 my.cnf
-rw-r--r-- 1 root root 515 Oct 6 12:41 my_old.cnf

# chmod 770 my.cnf

# ls my*.* -lht
-rwxrwx--- 1 root root 1.3K Nov 4 10:54 my.cnf
-rw-r--r-- 1 root root 515 Oct 6 12:41 my_old.cnf

# service mysqld restart
Stopping MySQL: [ OK ]
Starting MySQL: [ OK ]
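
To confirm that mysqld actually reads the file again after the permission change, you can ask it which option files it uses; my_print_defaults (shipped with the server packages) prints what was parsed from the [mysqld] group:

mysqld --verbose --help | grep -A 2 "Default options"
my_print_defaults mysqld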


