Shantanu's Blog

Corporate Consultant

October 31, 2007

 

MySQL Case Study - 150

Nirav has asked a question about
MySQL Aggregate functions and table joins


drop table transactions;
drop table transaction_products;
create table transactions (id int, totalPrice int, transDate DATE);
create table transaction_products (transId int, catId int);
insert into transactions values (1, 100, '2006-01-01');
insert into transactions values (1, 300, '2007-01-24');
insert into transactions values (2, 100, '2007-02-16');
insert into transaction_products values (1, 10);
insert into transaction_products values (2, 10);

SELECT SUM(t.totalPrice) FROM transactions t, transaction_products tp
WHERE t.id = tp.transId

It returns 500, which is the correct total of all the prices.
But now let's add a duplicate row to the table transaction_products.

insert into transaction_products values (1, 10);

If I run the same query above, I now get 900. The total for transaction id 1 is counted twice and added in, i.e. 400 + 400 + 100 = 900. This problem appears whenever the transaction_products table is not normalized. The rule says that the table should contain unique, atomic values and should point to the primary key. In this case, I can create a composite unique key on transaction_products like this...

alter ignore table transaction_products add unique (transId, catId)

Now the table transaction_products has only one (1, 10) row, because ALTER IGNORE has deleted the duplicate that I had just added. Hence it will allow us to run the very basic query mentioned in the post, like...

SELECT SUM(t.totalPrice), tp.catId
FROM transactions t, transaction_products tp
WHERE t.id = tp.transId AND t.transDate BETWEEN '2007-01-01' AND '2007-10-31'
GROUP BY tp.catId

If adding the unique key is not an option, then the following query should work.

SELECT SUM(P.totalPrice) AS total, P.catId
FROM (SELECT DISTINCT t.totalPrice, t.id, t.transDate, tp.catId
FROM transactions t
INNER JOIN transaction_products tp
ON t.id = tp.transId
WHERE t.transDate BETWEEN '2007-01-01' AND '2007-10-31') AS P
GROUP BY P.catId

Or this should also work.

SELECT SUM(T.totalPrice) AS total, P.catId
FROM transactions AS T
INNER JOIN (SELECT DISTINCT catId, transId
FROM transaction_products) AS P
ON P.transId = T.id
AND transDate BETWEEN '2007-01-01' AND '2007-10-31'
GROUP BY P.catId



October 30, 2007

 

MySQL Case Study - 149

Hi, I'm having a problem running a query using the following table:

mysql> desc news;
+-------+---------------+------+-----+---------+----------------+
| Field | Type          | Null | Key | Default | Extra          |
+-------+---------------+------+-----+---------+----------------+
| n_id  | int(1)        | NO   | PRI | NULL    | auto_increment |
| main  | varchar(1000) | YES  |     | NULL    |                |
+-------+---------------+------+-----+---------+----------------+



the query is:


mysql> update news
-> set main = (select main
-> from news
-> where n_id = 1)
-> where n_id = 2;




and here is the error message:

ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'main = (select main
from news
where n_id = 1)
where n_id = 2' at line 2


the query is intended to set the "main" field where n_id =2 to the same value as "main" where n_id=1.

Any help would be appreciated!
http://forums.mysql.com/read.php?10,179923,179923#msg-179923
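The error suggests the server simply does not accept a subquery in that position: MySQL before 4.1 had no subquery support at all, and even on newer versions an UPDATE cannot read from the very table it is updating (that raises error 1093 instead). One workaround, sketched here with the table and column names taken from the question, is to join against a derived copy of the table:

```sql
-- Copy "main" from the row with n_id = 1 into the row with n_id = 2.
-- The derived table src is materialized first, which sidesteps the
-- restriction on reading the table that is being updated.
UPDATE news AS dst
JOIN (SELECT main FROM news WHERE n_id = 1) AS src
SET dst.main = src.main
WHERE dst.n_id = 2;
```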



October 18, 2007

 

MySQL Tips

1) Silent and verbose casting of data types:
One of the questions on the MySQL forums was: why am I getting wrong results when I do...
select * from mytable where accountnumber = 0
Here the accountnumber column is text (varchar), and when it is compared with 0 (zero), accounts starting with 0 (e.g. 0A224) are also returned, not only the accounts that are exactly 0. If you want only the accounts that are exactly 0, without any affixes, then
accountnumber = '0' will do the trick. This effectively changes the 'number 0' into a 'text 0' and compares it with the account numbers as strings. If I don't do that, the number 0 forces each account number to be cast to a number, using only its leading digits, before comparing.

Oh! And yes, it also affects the way indexes are used when comparing columns with different data types. for e.g. CHAR = INT
See this post for more info....
http://www.mysqlperformanceblog.com/2007/10/16/be-careful-when-joining-on-concat/
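A quick way to see the silent cast in action (the literals here are just illustrations):

```sql
-- The string is silently cast to a number: '0A224' becomes 0,
-- so the numeric comparison succeeds...
SELECT '0A224' = 0;    -- returns 1
-- ...while the string comparison does not:
SELECT '0A224' = '0';  -- returns 0
```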

2) Between or not between
One of the posters on the MySQL forums asked why 'BETWEEN' was not using indexes. The answer from one of the experts was NOT to use BETWEEN but to use explicit range comparisons (< and >) instead.
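Since BETWEEN is inclusive of both endpoints, the equivalent explicit range uses >= and <= — a sketch with illustrative table and column names:

```sql
-- These two WHERE clauses select the same rows:
SELECT id FROM orders WHERE orderDate BETWEEN '2007-01-01' AND '2007-10-31';
SELECT id FROM orders WHERE orderDate >= '2007-01-01' AND orderDate <= '2007-10-31';
```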



October 11, 2007

 

Spell Check in Indian Languages

1) Standard: Unicode
Spell check would have no meaning if there were no standards put in place. Unicode has standardized the Indian languages scripts and hence it's possible!

2) Open Source software : Firefox, Open Office and Hunspell
Hunspell is the engine that powers language spell checking in Firefox and OpenOffice. All of these tools are free and open source, so anyone can use them as well as contribute to them.

3) User participation:
The driving force behind open source is user participation. Contributors look at the bigger picture and spend their own time and money for the betterment of their language and community.

4) Spell check in Hindi, Marathi and other languages:
There are two main components on which the work needs to be concentrated:
a) Building an open source word list.
b) Integrating it with applications.

Once you have the word list, it can be made available to the Hunspell engine. Firefox and OpenOffice can both install any Hunspell-compatible spell checker. The advantage of this format is that it can encode suffixes, affixes and features like "sounds like" suggestions. This will effectively help us build a comprehensive dictionary that works better than the default English dictionary most of us have got used to!

The complete presentation is online at...

http://www.slideshare.net/shantanuo/spell-check-in-indian-languages

Here are the steps to integrate it with Firefox...
http://www.flickr.com/photos/shantanuo/351731580/

And here are the steps to integrate it with OpenOffice...
http://oksoft.blogspot.com/2006/10/marathi-spell-check-within-your-word.html



October 04, 2007

 

ZOHO DB & Reports

The path-breaking version of ZOHO DB & Reports has been released.

One of the core functionalities of Zoho DB is Data Visualization. The ‘Chart View’ option enables you to visualize your data in simple charts. This complex functionality is simplified with a very simple Drag-n-Drop interface. You have to play with it to really know the power of this feature.

This Drag-n-drop interface is also extended to creating ‘Pivot Tables’ in Zoho DB.

One of the unique features of Zoho DB is the ability to run SQL Queries of ANY dialect on the data. It understands SQL Queries/dialects from any of the supported databases which include Oracle, SQL Server, DB2, Sybase, MySQL, PostgreSQL, Informix and ANSI SQL. So you can run the SQL queries you already know on Zoho DB data to create custom tables or reports.



October 02, 2007

 

MySQL FAQ - 3

1) Importing and exporting Excel Data

a) The syntax to import CSV data from a text file is simple, and the load is very fast.

LOAD DATA INFILE 'datafile.txt' INTO TABLE employee
FIELDS TERMINATED BY '|'
(employee_number, firstname, surname, tel_no, salary);

LOAD DATA INFILE has defaults of:
FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\'

b) The trick to export the data in CSV or TSV format is to add the into outfile line like this...

set @row := 0;
(select 'srno', 'enroll_no', 'stud_fname', 'stud_lname', 'stud_address')
union
(select (@row := @row + 1) as srno, a.enroll_no, stud_fname, stud_lname, stud_address
into outfile '/home/shantanu/CAT_ADV_OCT_07.tsv' FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n'
from course_enroll as a,
course as b
where date like '%2007%' and flag in ('o','a') and
a.branch_id = b.branch_id
group by a.enroll_no
order by stud_pin);
_____

Here is another way to import / export data from Excel

a) In order to import data from Excel to MySQL, here are 2 easy steps.
First, select Text (MS-DOS) as the option in the "Save as type" drop-down while saving the file. Then use the following command to load the data into MySQL.

load data local infile 'country_code.txt' into table test.country_code columns optionally enclosed by '"';

You may need to trim the data if extra spaces are added to the beginning or end of the string.

update country_code set iso_code = trim(iso_code);

_____

b) To export the data, use the standard-in and standard-out descriptors like this...

1) Save the query in a text file. For e.g. query.txt

2) Use the following command to generate the excel readable result file.
mysql country_list < query.txt > query_to_excel.txt


2) Delete V/s truncate

a) Do NOT use:

DELETE FROM classifieds;

Rather, use:

TRUNCATE TABLE classifieds;

The difference here is that DELETE drops records one by one, and a million one-by-one deletes is too slow!

b) If you want the count of records that were deleted, then you have to use the DELETE FROM command like this...
DELETE FROM classifieds WHERE 1 = 1;
It will display an 'XXX rows affected' message once it completes the operation.



3) Analyze and Optimize tables:

You can provide information to the parser by running

ANALYZE TABLE tablename;

This stores the key distribution for the table (running ANALYZE is equivalent to running myisamchk -a or myisamchk --analyze). Many deletes and updates leave gaps in the table (especially when you're using varchar, or in particular text/blob fields). This means there are more unnecessary disk I/Os, as the head needs to skip over these gaps when reading. Running

OPTIMIZE TABLE tablename

solves this problem. Both of these statements should be run fairly frequently in any well looked after system.


4) Explain and Procedure:

a) Add the word EXPLAIN before any SELECT statement to see the "kundali" (execution plan) of the command.

EXPLAIN SELECT firstname FROM employee WHERE overtime_rate<20*2;

+----------+-------+---------------+---------------+---------+------+------+------------+
| table    | type  | possible_keys | key           | key_len | ref  | rows | Extra      |
+----------+-------+---------------+---------------+---------+------+------+------------+
| employee | range | overtime_rate | overtime_rate | 4       | NULL | 1    | where used |
+----------+-------+---------------+---------------+---------+------+------+------------+

The output from EXPLAIN shows "ALL" in the TYPE column when MySQL uses a table scan to resolve a query. The possible types are, from best to worst: system, const, eq_ref, ref, range, index and ALL. MySQL can perform the 20*2 calculation once, and then search the index for this constant.

EXPLAIN tbl_name
will display all the relevant info about the table in question.

SHOW TABLE STATUS LIKE 'your_table_name'
will show, among other things, the creation timestamp of the table.

b) Explain provides more information about indexes, but procedure analyse() gives you more information on data returned.

SELECT center_code FROM employee PROCEDURE ANALYSE()

Min_value : 34
Max_value : 232
Empties_or_zeros : 0
Nulls : 0
Avg_value : 133
Optimal_fieldtype : ENUM('34','232') NOT NULL


5) SHOW commands:

SHOW PROCESSLIST
// check whether your query is running or waiting for some lock
SHOW FULL PROCESSLIST
// to see which query or set of queries take the longest time

SHOW CREATE TABLE employee
// display exactly how the table was created.

DESCRIBE tbl_name
DESC tbl_name
// shows the column definitions, like the SHOW COLUMNS command

SHOW VARIABLES
// all the server configuration settings (use SHOW STATUS for runtime counters)

STATUS
// Information on connection, uptime, version and user


6) Indexes on partial columns:
In the last post, I discussed composite indexes. But there is a limit on the total key length of a single index (500 bytes in older versions of MySQL; later MyISAM versions allow 1000). So the following statement would fail on an older server if each of the fields is declared as 250 characters, since the total would be 750 bytes.

ALTER TABLE employee ADD INDEX(surname, firstname, middlename);

Instead we can index on partial text something like this...

ALTER TABLE employee ADD INDEX(surname(20),firstname(20), middlename(20));

Now each index write is less than a tenth of the original size, and the server will accept the 3-column composite index even though the combined field length exceeds the limit.


