Nirav has asked a question about MySQL aggregate functions and table joins. Let us set up a small test case.
drop table transactions;
drop table transaction_products;
create table transactions (id int, totalPrice int, transDate DATE);
create table transaction_products (transId int, catId int);
insert into transactions values (1, 100, '2006-01-01');
insert into transactions values (1, 300, '2007-01-24');
insert into transactions values (2, 100, '2007-02-16');
insert into transaction_products values (1, 10);
insert into transaction_products values (2, 10);
SELECT SUM(t.totalPrice) FROM transactions t, transaction_products tp
WHERE t.id = tp.transId
It returns 500, which is the correct total of all the prices.
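With the sample rows above, the output should look like this:
+-------------------+
| SUM(t.totalPrice) |
+-------------------+
|               500 |
+-------------------+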
But now let's add a duplicate row to the transaction_products table.
insert into transaction_products values (1, 10);
If I run the same query above, I now get 900. The total for transaction 1 (400) is counted once for every duplicate product row, i.e. 400 + 400 + 100 = 900. This problem shows up when the transaction_products table is not normalized. The rule says that the table should contain unique and atomic values and should point to the primary key. In this case, I can create a composite unique key on transaction_products like this...
alter ignore table transaction_products add unique (transId, catId)
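To check whether any duplicate (transId, catId) pairs remain, here is a quick sketch using only the columns defined above:
SELECT transId, catId, COUNT(*) AS cnt
FROM transaction_products
GROUP BY transId, catId
HAVING cnt > 1;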
Now the transaction_products table has only one (1, 10) pair because ALTER IGNORE deleted the duplicate that I had just added. Hence we can run the very basic query mentioned in the post, like...
SELECT SUM(t.totalPrice), tp.catId
FROM transactions t, transaction_products tp
WHERE t.id = tp.transId AND t.transDate BETWEEN '2007-01-01' AND '2007-10-31'
GROUP BY tp.catId
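With the sample data above, only the two 2007 transactions (300 and 100) fall within the date range, so the result should be:
+-------------------+-------+
| SUM(t.totalPrice) | catId |
+-------------------+-------+
|               400 |    10 |
+-------------------+-------+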
If adding the unique key is not an option, then the following query should work.
SELECT SUM(P.totalPrice) AS total, P.catId
FROM (SELECT DISTINCT t.totalPrice, t.id, t.transDate, tp.catId
FROM transactions t
INNER JOIN transaction_products tp
ON t.id = tp.transId
WHERE t.transDate BETWEEN '2007-01-01' AND '2007-10-31') AS P
GROUP BY P.catId
Or this should also work; here the subquery collapses exact duplicate (transId, catId) pairs into a single row.
SELECT SUM(T.totalPrice) AS total, P.catId
FROM transactions AS T
INNER JOIN (SELECT transId, catId
FROM transaction_products tp
GROUP BY tp.transId, tp.catId) AS P
ON P.transId = T.id
AND transDate BETWEEN '2007-01-01' AND '2007-10-31'
GROUP BY P.catId
Labels: mysql case study
Hi, I'm having a problem running a query using the following table:
mysql> desc news;
+-------+---------------+------+-----+---------+----------------+
| Field | Type          | Null | Key | Default | Extra          |
+-------+---------------+------+-----+---------+----------------+
| n_id  | int(1)        | NO   | PRI | NULL    | auto_increment |
| main  | varchar(1000) | YES  |     | NULL    |                |
+-------+---------------+------+-----+---------+----------------+
the query is:
mysql> update news
-> set main = (select main
-> from news
-> where n_id = 1)
-> where n_id = 2;
and here is the error message:
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'main = (select main
from news
where n_id = 1)
where n_id = 2' at line 2
the query is intended to set the "main" field where n_id = 2 to the same value as "main" where n_id = 1.
Any help would be appreciated!
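A syntax error on a subquery usually means the server is older than MySQL 4.1, which does not understand subqueries at all. From 4.0 onwards, a multi-table UPDATE (a self-join) should do the same job without a subquery; here is a sketch against the same news table:
update news as dst, news as src
set dst.main = src.main -- copy main from row 1 into row 2
where dst.n_id = 2
and src.n_id = 1;
On 4.1 and later the subquery form fails for a different reason (you cannot select from the table being updated), and the self-join works there as well.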
http://forums.mysql.com/read.php?10,179923,179923#msg-179923
Labels: mysql case study
1) Silent and verbose casting of datatypes:
One of the questions on the MySQL forums was: why am I getting wrong results when I do...
select * from mytable where accountnumber = 0
Here the accountnumber column is text (varchar), and when I compare it with 0 (zero), the accounts starting with 0 (e.g. 0A224) are also returned, not only the accounts that are exactly 0. If you want the accounts that are exactly 0, without any affixes, then
accountnumber = '0' will do the trick. This effectively changes the 'number 0' into a 'text 0' and compares it with the account numbers as strings. If I don't do this, MySQL silently casts each account number to a number, taking the leading digits as the value (so '0A224' becomes 0) before comparing.
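A minimal sketch of the difference, using the mytable and accountnumber names from the question:
select * from mytable where accountnumber = 0;   -- matches '0', '00', '0A224', ...
select * from mytable where accountnumber = '0'; -- matches only the exact string '0'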
Oh! And yes, it also affects the way indexes are used when comparing columns of different data types, e.g. CHAR = INT.
See this post for more info....
http://www.mysqlperformanceblog.com/2007/10/16/be-careful-when-joining-on-concat/
2) Between or Not between
One of the posters on the MySQL forums asked why 'BETWEEN' was not using indexes. The answer from one of the experts was NOT to use BETWEEN but to use < and > (or >= and <=) instead.
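For example, a date-range test on the transactions table from the first case study could be rewritten like this (BETWEEN is inclusive, so >= and <= return the same rows):
SELECT * FROM transactions WHERE transDate BETWEEN '2007-01-01' AND '2007-10-31';
-- becomes:
SELECT * FROM transactions WHERE transDate >= '2007-01-01' AND transDate <= '2007-10-31';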
Labels: mysql, mysql tips
1) Standard: Unicode
Spell check would have no meaning if there were no standards in place. Unicode has standardized the Indian language scripts, and hence it's possible!
2) Open Source software : Firefox, Open Office and Hunspell
Hunspell is the engine that powers language spell checking in Firefox and Open Office. All of these tools are free and open source, so anyone can use them as well as contribute to them.
3) User participation:
The driving force behind open source is user participation. These users look at the bigger picture and spend their own time and money for the betterment of their language and people.
4) Spell Check in Hindi, Marathi and other languages:
There are two main components on which the work needs to be concentrated.
a) Building open source words list.
b) Integrating it with applications.
Once you have the word list, it can be made available as a Hunspell dictionary. Firefox and Open Office can both install any Hunspell-compatible spell checker. The advantage of this format is that it can integrate suffixes, affixes and many other features like "sounds like". This will effectively help us build a comprehensive dictionary that works better than the default English dictionary most of us have got used to!
The complete presentation is online at...
http://www.slideshare.net/shantanuo/spell-check-in-indian-languages
Here are the steps to integrate it with Firefox...
http://www.flickr.com/photos/shantanuo/351731580/
And here are the steps to integrate it with Open Office...
http://oksoft.blogspot.com/2006/10/marathi-spell-check-within-your-word.html
Labels: firefox, open office, unicode
The path-breaking version of ZOHO DB & Reports has been released.
One of the core functionalities of Zoho DB is Data Visualization. The ‘Chart View’ option enables you to visualize your data in simple charts. This complex functionality is simplified with a very simple Drag-n-Drop interface. You have to play with it to really know the power of this feature.
This Drag-n-drop interface is also extended to creating ‘Pivot Tables’ in Zoho DB.
One of the unique features of Zoho DB is the ability to run SQL Queries of ANY dialect on the data. It understands SQL Queries/dialects from any of the supported databases which include Oracle, SQL Server, DB2, Sybase, MySQL, PostgreSQL, Informix and ANSI SQL. So you can run the SQL queries you already know on Zoho DB data to create custom tables or reports.
Labels: usability
1) Importing and exporting Excel Data
a) The syntax to import CSV data from a text file is simple and very fast.
LOAD DATA INFILE 'datafile.txt' INTO TABLE employee
FIELDS TERMINATED BY '|'
(employee_number, firstname, surname, tel_no, salary);
LOAD DATA INFILE has defaults of:
FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\'
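For example, a pipe-delimited file with optionally quoted fields can be loaded by spelling those clauses out (file and column names reused from above):
LOAD DATA INFILE 'datafile.txt' INTO TABLE employee
FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\'
LINES TERMINATED BY '\n'
(employee_number, firstname, surname, tel_no, salary);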
b) The trick to export the data in CSV or TSV format is to add the into outfile line like this...
set @row:= 0;
(select 'srno', 'enroll_no', 'stud_fname', 'stud_lname', 'stud_address', 'stud_address1', 'stud_address2', 'stud_city', 'stud_pin', 'state_nm')
union
(select (@row:= @row + 1) as srno, a.enroll_no, stud_fname, stud_lname, stud_address, stud_address1, stud_address2, stud_city, stud_pin, state_nm
into outfile '/home/shantanu/CAT_ADV_OCT_07.tsv' FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n'
from course_enroll as a,
course as b
where date like '%2007%' and flag in ('o','a') and
a.branch_id=b.branch_id
group by a.enroll_no
order by stud_pin);
_____
Here is another way to import / export data from Excel
a) In order to import data from Excel to MySQL, here are 2 easy steps.
First, select Text (MS-DOS) as the option in the "Save as type" drop-down while saving the file. Then use the following command to load the data into MySQL.
load data local infile 'country_code.txt' into table test.country_code columns optionally enclosed by '"';
You may need to trim the data if extra spaces are added to the beginning or end of the string.
update country_code set iso_code = trim(iso_code);
_____
b) To export the data, use the standard-in and standard-out descriptors like this...
1) Save the query in a text file. For e.g. query.txt
2) Use the following command to generate the excel readable result file.
mysql country_list < query.txt > query_to_excel.txt
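In batch mode the mysql client emits tab-separated output, which Excel opens directly. A hypothetical query.txt for the country_code table used above (country_name is an assumed column; only iso_code appears earlier) could contain:
select 'iso_code', 'country_name'
union
select iso_code, country_name from country_code;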
2) Delete v/s truncate
a) Do NOT use:
DELETE FROM classifieds;
Rather, use:
TRUNCATE TABLE classifieds;
The difference here is that DELETE drops records one by one, and when that is a million records, one by one is far too slow!
b) If you want the count of records that were deleted, then you have to use the DELETE FROM command like this...
DELETE FROM classifieds where 1 = 1
It will display an "XXX records deleted" message once the operation completes.
3) Analyze and Optimize tables:
You can provide information to the optimizer by running
ANALYZE TABLE tablename;
This stores the key distribution for the table (running ANALYZE is equivalent to running myisamchk -a or myisamchk --analyze). Many deletes and updates leave gaps in the table (especially when you're using varchar or, in particular, text/blob fields). This means more unnecessary disk I/O, as the head needs to skip over these gaps when reading. Running
OPTIMIZE TABLE tablename
solves this problem. Both of these statements should be run fairly frequently in any well looked after system.
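Both statements can also be run for every table in a database from the shell using the mysqlcheck client (mydb is a placeholder database name):
mysqlcheck --analyze mydb
mysqlcheck --optimize mydb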
4) Explain and Procedure:
a) Add the word EXPLAIN before any SELECT statement to know the "kundali" (the full horoscope, so to speak) of the command.
EXPLAIN SELECT firstname FROM employee WHERE overtime_rate<20*2;
+--------+-------+---------------+---------------+---------+------+------+----------+
| table  | TYPE  | possible_keys | key           | key_len | ref  | rows | Extra    |
+--------+-------+---------------+---------------+---------+------+------+----------+
|employee| range | overtime_rate | overtime_rate | 4       | NULL | 1    |where used|
+--------+-------+---------------+---------------+---------+------+------+----------+
The output from EXPLAIN shows "ALL" in the TYPE column when MySQL uses a table scan to resolve a query. The possible types are, from best to worst: system, const, eq_ref, ref, range, index and ALL. MySQL can perform the 20*2 calculation once, and then search the index for this constant.
explain tbl_name
will display all the relevant info about the table in question.
SHOW TABLE STATUS LIKE 'your_table_name'
To find the time stamp of table creation.
b) Explain provides more information about indexes, but procedure analyse() gives you more information on data returned.
SELECT center_code FROM employee PROCEDURE ANALYSE()
Min_value : 34
Max_value : 232
Empties_or_zeros : 0
Nulls : 0
Avg_value : 133
Optimal_fieldtype : ENUM('34','232') NOT NULL
5) SHOW commands:
SHOW PROCESSLIST
// you can just check if your query is running, or is waiting for some lock.
SHOW FULL PROCESSLIST
// to see which query or set of queries take the longest time
SHOW CREATE TABLE employee
// display exactly how the table was created.
DESCRIBE tbl_name
DESC tbl_name
// shorthand for SHOW COLUMNS FROM tbl_name; lists the table's columns
SHOW VARIABLES
// all the server's configuration settings
STATUS
// Information on connection, uptime, version and user
6) Indexes on partial columns:
In the last post, I discussed composite indexes. But there is a limit of 556 bytes that can be grouped together in a single index. So the following statement would fail if each of the fields were declared as 250 characters, since the total would be more than 556.
ALTER TABLE employee ADD INDEX(surname, firstname, middlename);
Instead we can index on partial text something like this...
ALTER TABLE employee ADD INDEX(surname(20),firstname(20), middlename(20));
Now each write to the index is only about 10% of the original size, and the 3-column composite index is accepted even though the full field total is more than 556 bytes.
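Before settling on a prefix length, it is worth checking how selective the prefix is; here is a quick sketch against the employee table from above:
SELECT COUNT(DISTINCT LEFT(surname, 20)) / COUNT(*) AS selectivity
FROM employee;
A value close to 1 means a 20-character prefix distinguishes rows almost as well as the full column.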
Labels: mysql FAQ