I am using the txtweb service to get yubnub keyword info on my mobile. For example, in order to get the dictionary meaning of the word "procrastinate", I can type the following message and send it to 9266592665 (from India only):
@yubnub dit procrastinate
It will hit the yubnub.org website and pass the keyword "dit". The word "procrastinate" is searched for and the contents are returned to the user by SMS reply. The @yubnub keyword is registered with txtweb and linked to an intermediate page on my site (saraswaticlasses.net), which eventually gets the data from the yubnub-linked pages. Here is how it works. When I send an SMS request, it is first received by the txtweb site.
1) Go to the txtweb site:
http://txtweb.com
2) txtweb looks for the keyword @yubnub. It is linked to my saraswaticlasses.net site, so the data is passed on there:
http://saraswaticlasses.net/yubnub/txtweb.php?txtweb-message=dit+procrastinate
3) My site will connect to the yubnub site and hand over the keyword and argument:
http://yubnub.org/parser/parse?command=dit+procrastinate
4) The yubnub site will check where to get the dictionary data from. In this case it will go to:
http://jonathanaquino.com/yubscripts/yn-dictionary.php/.txt?xn_auth=no&length=160&input=%s
You can get more info about keywords using the command "man dit" at the yubnub.org site.
This opens up endless opportunities in the SMS world. Anyone can create a keyword for free by registering at the yubnub.org site and use it for SMS information exchange.
@yubnub yubnub_keyword variable (send to 92665 92665 in India and 898-932 (TXT-WEB) in US and Canada)
If you do not like the @yubnub alias that I have already created, you can register a new user-friendly keyword with the txtweb site, for example @mumbai, @nmcollege etc. It is also possible to add other DB-related features if you know a little bit of PHP programming. Read the example here...
http://shantanuo.livejournal.com/49615.html
_____
Hindi, Telugu, Tamil, Gujarati and Bengali translations are possible by sending an SMS to 9266592665.
For example, here is a message and the text that will be returned by SMS:
@yubnub hind kanchipuram guest house = कांचीपुरम गेस्ट हाउस
@yubnub telug kanchipuram guest house = కాంచీపురం గెస్ట్ హౌస్
@yubnub tami kanchipuram guest house = காஞ்சிபுரம் கெஸ்ட் ஹவுஸ்
@yubnub guja kanchipuram guest house = કાંચીપુરમ અતિથિગૃહ
@yubnub beng kanchipuram guest house = kanchipuram অতিথিশালা
If you need to test it on the web, simply visit the site http://yubnub.org and type the commands with arguments as mentioned above.
_____
I did not shut down the MySQL service before stopping the server. The next time it started, I got the following messages in the MySQL error log file (mysqld.log or /var/log/syslog).
[Note] Plugin 'FEDERATED' is disabled.
InnoDB: The InnoDB memory heap is disabled
InnoDB: Mutexes and rw_locks use GCC atomic builtins
InnoDB: Compressed tables use zlib 1.2.3.4
InnoDB: Initializing buffer pool, size = 128.0M
InnoDB: Completed initialization of buffer pool
InnoDB: highest supported file format is Barracuda.
The log sequence number in ibdata files does not match
the log sequence number in the ib_logfiles!
InnoDB: Database was not shut down normally!
Starting crash recovery.
Reading tablespace information from the .ibd files...
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name .
InnoDB: File operation call: 'opendir'.
InnoDB: Cannot continue operation.
The fix was easy in this case: just add the following line to the [mysqld] section of the my.cnf file.
innodb_force_recovery = 6
* InnoDB is started in read-only mode, preventing users from performing INSERT, UPDATE, or DELETE operations (see the example after these notes).
* You can SELECT from tables to dump them, or DROP or CREATE tables even if forced recovery is used.
* If you know that a given table is causing a crash on rollback, you can drop it.
* You can also use this to stop a runaway rollback caused by a failing mass import or ALTER TABLE.
* You can kill the mysqld process and set innodb_force_recovery to 3 to bring the database up without the rollback.
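For example, here is a rough sketch of what does and does not work while the server is running with forced recovery enabled (the table names mytable and corrupted_table are hypothetical):
select * from mytable limit 10;   -- reads are allowed, so data can be dumped
drop table corrupted_table;       -- per the notes above, dropping a table that crashes on rollback is allowed
insert into mytable values (1);   -- fails, because InnoDB is effectively read-only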
Labels: mysql, mysql tips
# List all tables:
select db_id, id, name, sum(rows) as mysum from stv_tbl_perm where db_id = 100546 group by db_id, id, name order by mysum desc;
# list all running processes:
select pid, query from stv_recents where status = 'Running';
# describe table
select * from PG_TABLE_DEF where tablename='audit_trail';
select * from pg_tables where schemaname = 'public'
# Disk space used:
select sum(used-tossed) as used, sum(capacity) as capacity from stv_partitions
# Query log
select query, starttime , substring from svl_qlog where substring like '%tbl_name%' order by starttime desc limit 50;
# command history
select * from stl_ddltext where text like '%ox_data_summary_hourly_depot%' limit 10
# last load errors
select starttime, filename, err_reason from stl_load_errors order by starttime desc limit 100
select filename, count(*) as cnt from stl_load_errors group by filename
# create table from another table
select * into newevent from event;
# Check how columns are compressed
ANALYZE COMPRESSION
# ANALYZE and VACUUM
If you insert, update, or delete a significant number of rows in a table, run the ANALYZE and VACUUM commands against the table.
"analyze compression tbl_name" command produce a report with the suggested column encoding.
# To find and diagnose load errors for table 'event'
create view loadview as
(select distinct tbl, trim(name) as table_name, query, starttime,
trim(filename) as input, line_number, field, err_code,
trim(err_reason) as reason
from stl_load_errors sl, stv_tbl_perm sp
where sl.tbl = sp.id);
select * from loadview where table_name='event';
# Query to find blocks used
select stv_tbl_perm.name, count(*)
from stv_blocklist, stv_tbl_perm
where stv_blocklist.tbl = stv_tbl_perm.id
and stv_blocklist.slice = stv_tbl_perm.slice
group by stv_tbl_perm.name
order by stv_tbl_perm.name;
Load tips (a combined COPY example follows this list):
# While loading data you can specify "empty as null", "blanks as null", allow "max error 5", "ignore blank lines", "remove quotes" and "use gzip". Use the keywords: emptyasnull blanksasnull maxerror 5 ignoreblanklines removequotes gzip
# use NULL AS '\000' to fix the import from specific files
# use BLANKASNULL in the original COPY statement so that no empty strings are loaded into VARCHAR fields which might ultimately be converted to numeric fields.
# Use the NOLOAD keyword with a COPY command to validate the data in the input files before actually loading the data.
# use COMPUPDATE to enable automatic compression
# FILLRECORD to fill missing columns at the end with blanks or NULLs
# TRIMBLANKS Removes the trailing whitespace characters from a VARCHAR string.
# ESCAPE the backslash character (\) in input data is treated as an escape character. (useful for delimiters and embedded newlines)
# ROUNDEC rounds numeric values during the load: a value of 20.259 loaded into a DECIMAL(8,2) column is stored as 20.26; without it, the value is truncated to 20.25
# TRUNCATECOLUMNS truncates data in columns to the appropriate number of characters so that it fits the column
# IGNOREHEADER to ignore the first row (or a given number of header rows)
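Here is a sketch of a COPY command that combines several of the options above; the bucket path, credentials and table name are placeholders:
copy event from 's3://mybucket/data/event.gz'
credentials 'aws_access_key_id=ABC;aws_secret_access_key=XYZ'
gzip removequotes ignoreblanklines blanksasnull emptyasnull
maxerror 5 ignoreheader 1 trimblanks truncatecolumns;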
_____
If you are using JDBC, try adding the keepalive option to your connection string, e.g.:
jdbc:postgresql://instance.amazonaws.com:8192/database?tcpkeepalive=true
You can have AUTOCOMMIT set in your Workbench client.
_____
In order to avoid timeout errors while using Workbench on Windows, use the following registry settings:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\KeepAliveTime 30000
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\KeepAliveInterval 1000
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\TcpMaxDataRetransmissions 10
_____
# Consider using DISTKEY and SORTKEY - a sort key can span multiple columns, but there can be only one distribution key (see the CREATE TABLE sketch after this list).
# wlm_query_slot_count - This will set aside more memory for query, which may avoid operations spilling to disk
# the isolation level for Redshift is SERIALIZABLE
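A minimal sketch of these settings, assuming a hypothetical clicks table:
create table clicks (
user_id integer,
event_date date,
url varchar(2048)
)
distkey (user_id)
sortkey (event_date, user_id);
-- give the current session extra memory slots for a heavy query
set wlm_query_slot_count to 3;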
_____
// There is no equivalent of "show create table tbl_name"
select from the PG_TABLE_DEF table to gather all the necessary schema information
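For example, to see the columns, types, encodings and key flags of a hypothetical table named event:
select "column", type, encoding, distkey, sortkey
from pg_table_def
where tablename = 'event';
Note that PG_TABLE_DEF only lists tables that are in the current search_path.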
// convert to and from unixtime
select extract (epoch from timestamp '2011-08-08 11:11:58');
select TIMESTAMP 'epoch' + starttime * INTERVAL '1 second' starting from tbl_name;
// Update a joined table:
update abcd
set ser_area_code = abcd_update.ser_area_code,
    preferences = abcd_update.preferences,
    opstype = abcd_update.opstype,
    phone_type = abcd_update.phone_type
from abcd_update
where abcd.phone_number = abcd_update.phone_number;
http://docs.aws.amazon.com/redshift/latest/dg/t_updating-inserting-using-staging-tables-.html#concept_upsert
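The page above describes the staging-table upsert pattern; a minimal sketch of it, reusing the abcd and abcd_update tables from the example, is a delete followed by an insert inside one transaction:
begin;
delete from abcd using abcd_update
where abcd.phone_number = abcd_update.phone_number;
insert into abcd select * from abcd_update;
commit;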
_____
// install postgresql
yum install postgresql postgresql-server
chkconfig postgresql on
// You will now create a file where the redshift password will be stored.
vi ~/.pgpass
c.us-east-1.redshift.amazonaws.com:5439:mydb:root:Passwd
chmod 0600 ~/.pgpass
// load data to redshift
cat to_psql.txt | psql -hc.us-east-1.redshift.amazonaws.com -Uroot -p5439 mydb > to_save.csv
// send the file as an attachment
echo "report file attached. " | mutt -s "result data " -a to_save.csv -- some_address@gmail.com
// mysqldump command that will generate the required statements to be used in redshift
mysqldump db_name tbl_name --where='1=1 limit 10' --compact --no-create-info --skip-quote-names > to_psql.txt
_____
Amazon Redshift data types are different from those of MySQL. For example, text literals can be saved only as the varchar type, up to 65535 bytes.
http://docs.aws.amazon.com/redshift/latest/dg/r_Character_types.html
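So a column defined as TEXT or LONGTEXT in MySQL has to be created as a wide varchar in Redshift, for example (the notes table is hypothetical):
create table notes (
id integer,
body varchar(65535) -- Redshift has no unlimited text type; 65535 bytes is the maximum
);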
Here is a script that will do this conversion automatically.
https://gist.github.com/shantanuo/5115366
_____
If the postgresql client is installed, we can connect to Redshift using something like this...
# PGPASSWORD=Fly8946392085 psql -U fsb_user_85_22719249 -h flydata-sandbox-cluster.clroanynhqjo.us-east-1.redshift.amazonaws.com -p 5439 -d flydatasandboxdb
Welcome to psql 8.1.23 (server 8.0.2), the PostgreSQL interactive terminal.
_____
## script that will display 10 rows from each table
#!/bin/sh
echo "select name from stv_tbl_perm where db_id = 100546 group by name ;" | psql -hkalc.us-east-1.redshift.amazonaws.com -Uroot -p5439 mydb > /root/psql.txt 2>> /root/psql_err.txt
for tbl_name in `cat /root/psql.txt`
do
echo "$tbl_name" >> /root/psql_limit.txt 2>> /root/psql_limit_err.txt
echo "select * from $tbl_name limit 10 ; " | psql -hkalc.us-east-1.redshift.amazonaws.com -Uroot -p5439 mydb >> /var/www/psql_limit.txt 2>> /root/psql_limit_err.txt
echo "====================================="
done
_____
The following statement queries the STV_LOCKS table to view all locks in effect for current transactions:
select table_id, last_update, lock_owner, lock_owner_pid, lock_status
from stv_locks;
table_id | last_update | lock_owner | lock_owner_pid | lock_status
----------+----------------------------+------------+----------------+------------------------
100295 | 2014-01-06 23:50:56.290917 | 95402 | 7723 | Holding write lock
100304 | 2014-01-06 23:50:57.408457 | 95402 | 7723 | Holding write lock
100304 | 2014-01-06 23:50:57.409986 | 95402 | 7723 | Holding insert lock
(3 rows)
The following statement terminates the session holding the locks:
select pg_terminate_backend(7723);
Labels: aws
The AWS Command Line Interface is a must-have tool for anyone who is working with Amazon.
http://aws.amazon.com/cli/
In order to install it, you can simply use either of the following:
easy_install awscli
pip install awscli
Once installed, use the following script to execute the commands.
#!/bin/sh
snapshot=${1:-'mysnap'}
mydate=`date '+%b%d'`
cat > myconfigfile.txt << "heredoc"
[default]
aws_access_key_id = ABC
aws_secret_access_key = ABC+XYZ
region = us-east-1
heredoc
export AWS_CONFIG_FILE=./myconfigfile.txt
# describe cluster and snapshots
aws redshift describe-clusters
aws redshift describe-cluster-snapshots | grep -C7 manual
# create a new cluster based on snapshot called mysnap
echo " aws redshift restore-from-cluster-snapshot --publicly-accessible --snapshot-identifier $snapshot --cluster-identifier $snapshot-$mydate "
# delete the cluster
echo " aws redshift delete-cluster --skip-final-cluster-snapshot --cluster-identifier $snapshot-$mydate "
Labels: aws