Shantanu's Blog

Database Consultant

August 31, 2015

 

import data into MySQL from excel using python pandas

## read data from excel file to pandas dataframe
import pandas as pd
df = pd.read_excel("5ch.xls", parse_dates=True, header=3)

## import pandas dataframe to mysql
import pymysql
conn = pymysql.connect(host='localhost', port=3306, user='dba', passwd='dbapass', db='test')
df.to_sql(con=conn, name='temp_aaa', if_exists='replace', flavor='mysql')
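Note: in newer versions of pandas the flavor argument has been removed and to_sql expects a SQLAlchemy engine rather than a raw pymysql connection. A minimal sketch of the same import, assuming SQLAlchemy and pymysql are installed:

## import pandas dataframe to mysql using a sqlalchemy engine
from sqlalchemy import create_engine

engine = create_engine('mysql+pymysql://dba:dbapass@localhost:3306/test')

# replace the table if it already exists; do not write the dataframe index
df.to_sql('temp_aaa', con=engine, if_exists='replace', index=False)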




August 30, 2015

 

Working with mysql using python

# insert a record using python pymysql module
import pymysql
conn = pymysql.connect(host='localhost', port=3306, user='dba', passwd='dbapass', db='test')
cur = conn.cursor()
cur.execute("INSERT INTO todel VALUES ('%s', '%s')" % ('aa', 'bb'))
conn.commit()
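Several records can be inserted in one call with executemany; here is a small sketch using the same connection (the two columns of the todel table are assumed):

# pymysql escapes and quotes each value, so no manual string formatting is needed
rows = [('aa', 'bb'), ('cc', 'dd'), ('ee', 'ff')]
cur.executemany("INSERT INTO todel VALUES (%s, %s)", rows)
conn.commit()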



# select a record from mysql table using python
import pymysql
conn = pymysql.connect(host='localhost', port=3306, user='dba', passwd='dbapass', db='test')
cur = conn.cursor(pymysql.cursors.DictCursor)
cur.execute("select * from todel  where id = 0 limit 10" )
for rw in cur.fetchall():
    print rw
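The value in the where clause can also be passed as a parameter instead of being written into the SQL string; a minimal sketch reusing the cursor above:

# %s is a placeholder filled in (and quoted) by pymysql
cur.execute("select * from todel where id = %s limit 10", (0,))
for rw in cur.fetchall():
    print rw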



 

Assert if a given instance is list

Here is a function that will check if the given object is a list or not. If it is not a list, you will get an AssertionError.

def checktype(mylist):
    assert isinstance(mylist, list), "only list allowed"
    for i in mylist:
        print i

checktype(['a', 'b'])

checktype(('a', 'b'))
AssertionError: only list allowed
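If tuples should be accepted as well, isinstance can take a tuple of types; a small variation of the same function:

def checktype(mylist):
    # allow lists as well as tuples
    assert isinstance(mylist, (list, tuple)), "only list or tuple allowed"
    for i in mylist:
        print i

checktype(('a', 'b'))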




 

count of dict items in the order they were added

An OrderedDict is a dictionary subclass that remembers the order in which its keys were first added. Counter is a dict subclass that counts the items in a given iterable. Both are part of the collections module.

import collections
c = collections.Counter('abcdaab')
print c
Counter({'a': 3, 'b': 2, 'c': 1, 'd': 1})
 
We can combine these two classes to return the counts in the order in which the items were first added.

from __future__ import print_function
from collections import Counter, OrderedDict

class OrderedCounter(Counter, OrderedDict):
    def __repr__(self):
        return '%s(%r)' % (self.__class__.__name__, OrderedDict(self))
    def __reduce__(self):
        return self.__class__,(OrderedDict(self),)

ac = OrderedCounter('abracdbdraa')
print(ac)

OrderedCounter(OrderedDict([('a', 4), ('b', 2), ('r', 2), ('c', 1), ('d', 2)]))
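Because OrderedDict supplies the iteration order, looping over the result gives each letter in the order it was first seen, along with its total count:

for letter, cnt in ac.items():
    print(letter, cnt)

a 4
b 2
r 2
c 1
d 2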



August 15, 2015

 

mongodb indexes

# add index
db.users.ensureIndex({"email":1})

# add composite index
db.users.ensureIndex({"email":1, "customer":1})

# add index on sub-document
db.users.ensureIndex({"email":1, "customer.address":1})

# add unique constraint
db.users.ensureIndex({"email":1, "customer.address":1}, {"unique":true})

# Remove duplicates
db.users.ensureIndex({"email":1, "customer.address":1}, {"unique":true, "dropDups":true})

# add sparse index that will return the record only if the field exists in the collection
db.users.ensureIndex({"email":1, "customer.address":1}, {"unique":true, "dropDups":true, "sparse":true})

# add full text index
db.users.ensureIndex({"email":1, "customer.address":1, "comment":"text"})

# add full text index on multiple fields
db.users.ensureIndex({"email":1, "customer.address":1, "comment":"text", "customer.name":"text"})

# add weights to full text index
db.users.ensureIndex({"email":1, "customer.address":1, "comment":"text", "customer.name":"text"}, {"weights":{"customer.name":2}})

# add full text index on all fields
db.users.ensureIndex({"email":1, "customer.address":1, "$**":"text"})
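The same kind of indexes can be created from Python as well; a minimal pymongo sketch, assuming a local server and the users collection used above:

import pymongo
from pymongo import MongoClient

db = MongoClient().test

# plain ascending index on email
db.users.create_index([("email", pymongo.ASCENDING)])

# compound unique index on email and the embedded address
db.users.create_index([("email", pymongo.ASCENDING), ("customer.address", pymongo.ASCENDING)], unique=True)

# full text index on the comment field
db.users.create_index([("comment", pymongo.TEXT)])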


And here is how you can use the full text index in the query:

db.runCommand({"text":users, "search":"excellent post"})

By default it will search for "excellent" or "post". Use escaped quotes to search for the exact phrase instead:

db.runCommand({"text":users, "search":"\"excellent post\""})



 

manage mongo data using python

1) Here is how we can connect to mongodb and create a "db" object.

# connect to mongo test database
from pymongo import MongoClient
client = MongoClient()
db = client.test

2) Save dict data to mongodb test database, collection name: posts

y={"name": "amar", "age": 30}
db.posts.insert_one(y)

3) Get a sample record using find_one

db.posts.find_one()

4) copy the pandas dataframe to mongo

# assuming "df" is a dataframe that is alrady created

db.posts.insert_many(df.to_dict('records'))

If you get an error, you may need to clean the data first, for example using the map function:

df["region"] = df["region"].map(lambda x: str(x).split(':')[-1:])

This splits the region column on the colon (:) and keeps only the part after the last colon.

5) Import mongo data to pandas dataframe

# get all data from posts collection into a list of dicts
mylist=[]
for post in db.posts.find():
   mylist.append(post)

# convert the list to pandas dataframe
import pandas as pd
pd.DataFrame(mylist)
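The loop can also be collapsed into a single line, since DataFrame accepts a list of dicts directly; a small sketch using the same db object:

import pandas as pd

# find() returns a cursor; list() materializes it into a list of dicts
df2 = pd.DataFrame(list(db.posts.find()))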




 

Mongo DB sharding tips

1) Sharding helps only if you have 3+ servers.
2) Choose the shard key carefully; it cannot be changed later.
3) All queries should use the shard key.
4) Shard servers can be started as replica sets. Take advantage of that.
5) A shard server should not be removed from the cluster. It does not offer the same flexibility as a replica set.

Here is a website that will convert MySQL queries to MongoDB syntax:

http://www.querymongo.com/



August 14, 2015

 

mongoDB mysql comparison cheat sheet

db.users.find({}, {})
select * from users

db.users.find({}, {"username":1, "email":1})
select username, email from users

db.users.find({}, {"username":1, "email":1}).limit(10)
select username, email from users limit 10
.limit()          .skip()                   .sort()

db.users.count()
select count(*) from users

db.runCommand({"distinct": "users", "key": "age"})
select distinct(age) from users

db.users.find({"age":{"$gte":18, "$lte":30}})
select * from users where age >= 18 and age <= 30
$lt      $gt     $lte    $gte             $ne    $elemMatch

db.users.find({"ticket_no":{"$in":[75, 390]}})
select * from users where ticket_no in ("75", "390")
$in     $nin             $not             $all

db.users.find({"$or":[{"ticket_no":{"$in":[75, 390]}}, {"winner":true}]})
select * from users where ticket_no in ("75", "390") or winner is not null
$or     $and           $nor             $elemMatch

db.users.find({"age":{"$in":[null], "$exists":true}})
select * from users where age is null

db.users.find({"username":/happy?/i})
select * from users where username like 'happy%'
perl compatible regular expressions

db.users.find({"ticket_no":75})
select * from users where ticket_no like '%75%'
[75, 390, 120, 450]
"75", "390", "120", "450"

db.users.find({"ticket_no.2":120})

db.users.find({"ticket_no":{"$size":4}})

db.users.findOne({criteria as above}, {"$slice":[23, 10]})
select * from users where age >= 18 and age <= 30 limit 23, 10

db.runCommand({"getLastError":1})
show warnings;

db.articles.aggregate({"$project": {"author":1}}, {"$group":{"_id":"$author", "count":{"$sum":1}}}, {"$sort": {"count": -1}}, {"$limit":5})
select author, count(*) as cnt from articles group by author order by cnt desc limit 5
Aggregation results are limited to a maximum response size of 16 MB

db.employees.aggregate( {"$project": {"totalPay" : {"$subtract" : [{"$add": ["$salary", "$bonus"]}, "$taxes"] } } } )
select (salary + bonus - taxes) as totalPay from employees
$add  $subtract      $multiply  $divide   $mod

db.employees.aggregate( { "$project" : { "tenure" : {"$subtract" : [{"$year" : new Date()}, {"$year": "$hireDate"}] } } } )
select year(now()) - year(hireDate) as tenure from employees
$year $month $week $dayOfMonth $dayOfWeek $dayOfYear  $hour  $minute  $second

db.employees.aggregate( { "$project": { "email" : { "$concat" : [ {"$substr" : [ "$firstName", 0, 1]}, ".", "$lastName", "@company.com" ] } } } )
select concat(left(firstName, 1), ".", lastName, "@company.com") as email from employees
$substr   $concat  $toLower   $toUpper

db.sales.aggregate( { "$group": { "_id": "$country", "totalRevenue": { "$sum" : "$revenue" } } } )
select country, sum(revenue) from sales group by country

db.blog.aggregate({"$project": {"comments": "$comments"}}, {"$unwind" : "$comments"}, {"$match": {"comments.author" : "Akbar" }})
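The same aggregations can be run from Python by passing the stages to pymongo as a list; a minimal sketch, assuming a local server and an articles collection:

from pymongo import MongoClient

db = MongoClient().test

pipeline = [
    {"$project": {"author": 1}},
    {"$group": {"_id": "$author", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
    {"$limit": 5},
]

# the five most prolific authors
for doc in db.articles.aggregate(pipeline):
    print doc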



August 09, 2015

 

Using the same character sets across all tables

It is very important to use the same character set across all tables and across all databases.

The default character set for each database can be listed using this query...

mysql> select * from information_schema.SCHEMATA;
+--------------+--------------------+----------------------------+------------------------+----------+
| CATALOG_NAME | SCHEMA_NAME        | DEFAULT_CHARACTER_SET_NAME | DEFAULT_COLLATION_NAME | SQL_PATH |
+--------------+--------------------+----------------------------+------------------------+----------+
| def          | information_schema | utf8                       | utf8_general_ci        | NULL     |
| def          | emailplatform      | latin1                     | latin1_swedish_ci      | NULL     |
| def          | mysql              | latin1                     | latin1_swedish_ci      | NULL     |
| def          | performance_schema | utf8                       | utf8_general_ci        | NULL     |
| def          | test               | latin1                     | latin1_swedish_ci      | NULL     |
| def          | test1              | latin1                     | latin1_swedish_ci      | NULL     |
+--------------+--------------------+----------------------------+------------------------+----------+
6 rows in set (0.01 sec)

If some of the tables in a database do not match the other tables, we need to alter those tables. For example, if 1 or 2 tables in the test database are using utf8 encoding while the rest use latin1, the best choice is to drop and recreate those tables as latin1.
There are 2 points to note here...
1) The table in question should not be part of foreign key relations.
2) The table should not actually contain unicode characters, because once we convert it to latin1 there will be no way to store unicode.

The easiest way to check whether the tables of a given database are using different character sets is to use the mysqldump command as shown here...
root@ip-10-86-106-75:/home/ubuntu# mysqldump test --no-data | grep ENGINE
) ENGINE=TokuDB DEFAULT CHARSET=latin1;
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
) ENGINE=TokuDB DEFAULT CHARSET=latin1;

This means a few tables use the utf8 character set while the others are latin1.
You will have issues while joining a latin1 table with a utf8 table: the query will not use indexes if the join columns have different character sets.
_____

The following query, run against the "information_schema" database, shows that there is 1 table in the test database with a utf8 collation. All other tables in the test database are latin1. Therefore I need to change that single utf8 table to latin1.

# create a new table in test database
create table test.tables select * from information_schema.tables;

# check for table_collations

mysql> select table_schema, table_collation, count(*) as cnt from test.tables group by table_schema, table_collation;
+--------------------+-------------------+-----+
| table_schema       | table_collation   | cnt |
+--------------------+-------------------+-----+
| emailplatform      | latin1_swedish_ci |  64 |
| information_schema | utf8_general_ci   |  45 |
| mysql              | latin1_swedish_ci |   1 |
| mysql              | utf8_bin          |   8 |
| mysql              | utf8_general_ci   |  15 |
| performance_schema | utf8_general_ci   |  17 |
| test               | latin1_swedish_ci |  18 |
| test               | utf8_general_ci   |   1 |
| test1              | latin1_swedish_ci |   1 |
+--------------------+-------------------+-----+
9 rows in set (0.00 sec)

To find the name of the table I use this query:

mysql> select table_name from tables where table_schema = 'test' and table_collation like 'utf8%';
+------------+
| table_name |
+------------+
| cdr_master |
+------------+
1 row in set (0.01 sec)

And here is the count:

mysql> select count(*) from test.cdr_master;
+----------+
| count(*) |
+----------+
|   186166 |
+----------+
1 row in set (0.04 sec)

This is a relatively small table, so we can quickly change the character set.
But first we need to check that no other table is linked to it through foreign key relations.

mysql> select * from information_schema.KEY_COLUMN_USAGE where TABLE_SCHEMA='test' AND TABLE_NAME = 'cdr_master' LIMIT 1\G
Empty set (0.00 sec)

mysql> select * from information_schema.KEY_COLUMN_USAGE where REFERENCED_TABLE_SCHEMA='test' AND REFERENCED_TABLE_NAME = 'cdr_master' LIMIT 1\G
Empty set (0.06 sec)

_____

To change the default character set we need to drop and recreate the table.

Take the backup of the table.

# mysqldump --tab=/tmp/ test cdr_master

Change the character set:

# sed -i.bak 's/CHARSET=utf8/CHARSET=latin1/' /tmp/cdr_master.sql

Recreate the new table after dropping the old one:

# mysql test < /tmp/cdr_master.sql

# restore data:

mysql> load data infile '/tmp/cdr_master.txt' into table test.cdr_master;
Query OK, 186166 rows affected (25.06 sec)
Records: 186166  Deleted: 0  Skipped: 0  Warnings: 0

Now log out and log back in, and you will find that all the tables in the test database have the same table_collation.

mysql> select table_collation, count(*) as cnt from information_schema.tables where table_schema = 'test' group by table_collation;
+-------------------+-----+
| table_collation   | cnt |
+-------------------+-----+
| latin1_swedish_ci |  19 |
+-------------------+-----+
1 row in set (0.00 sec)

_____

In order to check whether the utf8 encoded table really has any unicode characters, take a backup of the table.

mysqldump test tbl_name > /tmp/todel.txt

And then run this python script. If the script processes all the lines without any problem then the data is compatible with latin1.

import codecs

# read the dump back as utf-8 and check that every line fits into latin1
f = codecs.open("/tmp/todel.txt", "r", "utf-8")
for line in f.readlines():
    todel = line.encode('latin1')

You have unicode data in your table if you get a unicode error like this...

UnicodeEncodeError: 'latin-1' codec can't encode characters in position 31-35: ordinal not in range(256)

In such a case, you cannot continue with the task of table re-creation unless you decide to ignore the unicode characters being lost.
_____

If you want to check which lines contain unicode characters, you need another type of dump (the --skip-extended-insert option generates one INSERT line per record in the table).

# mysqldump test todel --skip-extended-insert > /tmp/todel.txt

And the following python code will display all the records where unicode characters are used.

import codecs

f = codecs.open("/tmp/todel.txt", "r", "utf-8")
for line in f.readlines():
    try:
        # fails for any character that does not exist in latin1
        todel = line.encode('latin1')
    except UnicodeEncodeError:
        print line

_____

If recreating the entire table is not an option, then simply change the single column to latin1.

mysql> alter table test.cdr_master modify column call_uuid varchar(50) charset latin1;

This will make the query very fast if the column "call_uuid" is used in the join. The other table's "call_uuid" column should also be latin1.
_____

If you use extended explain, you will see what MySQL is doing internally.

mysql> explain extended select * from a inner join b on a.column = b.column;
mysql> show warnings;

The warning will show that MySQL is converting one of the columns (typically the latin1 side is converted to utf8) so that the comparison can be done. This implicit conversion prevents MySQL from using the index on the converted column.



August 08, 2015

 

mongo warnings at startup

When I log in to mongo shell, I see these warnings...

[root@az-215 ~]# mongo
MongoDB shell version: 3.0.3
connecting to: test
Server has startup warnings:
2015-05-20T13:04:19.010+0530 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2015-05-20T13:04:19.010+0530 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-05-20T13:04:19.010+0530 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-05-20T13:04:19.010+0530 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-05-20T13:04:19.010+0530 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 65535 files. Number of processes should be at least 32767.5 : 0.5 times number of files.

I need to edit these 2 files to change the value from "always" to "never":

# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never

# cat /sys/kernel/mm/transparent_hugepage/defrag
[always] madvise never
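The value can be changed at runtime by writing "never" into both files (as root). A minimal python sketch; note that the setting does not survive a reboot unless it is reapplied from an init script:

# run as root; turns transparent hugepages off until the next reboot
for path in ("/sys/kernel/mm/transparent_hugepage/enabled",
             "/sys/kernel/mm/transparent_hugepage/defrag"):
    with open(path, "w") as fh:
        fh.write("never")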

And the soft limit on the number of processes is restricted to 1024; that needs to be increased for mongo to behave correctly.

cat /etc/security/limits.d/90-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     1024
root       soft    nproc     unlimited

cp /etc/security/limits.d/90-nproc.conf /etc/security/limits.d/99-mongodb-nproc.conf

Edit the new file 99-mongodb-nproc.conf with a higher soft nproc limit.

Do not take these warnings lightly!



August 07, 2015

 

mongodb essential commands

Here are the commands used for logging, statistics, administration and replica sets. Each of these 4 sections has a few commands that will be useful for system administrators.

##### logging

db.adminCommand({"setParameter":1, "logLevel":3})
# log levels 0 to 5, defaults to 0
# like mysql general log

db.setProfilingLevel(1)
# 0 - disable, 1 - log queries slower than 100 ms, 2 - log everything
# like mysql slow query log

db.getProfilingLevel()
# current profile level

db.adminCommand({"diagLogging":2})
# captures data for diagnostic purposes
# 0-off, 1-writes, 2-reads, 3-both
# cat /data/db/diaglog.* | hexdump -c

##### stats

db.adminCommand({"serverStatus":1})["recordStats"]
# accessesNotInMemory if high, server overloaded

use local
db.startup_log.findOne()
# know how the server was started

use local
db.serverCmdLineOpts()
# how mongodb was started

db.foo.validate()
# deletedCount to know how many documents deleted

db.stats()
# database stats

db.collname.stats()
# collection stats

Object.bsonsize({_id:ObjectId()})
db.eval("Object.bsonsize(db.foo.findOne())")
#size of a document


##### administration

db.runCommand({"compact":"collname"})
# compact and defragment the collection

db.runCommand({"dropIndexes":"foo", "index":"*"})
# drop all indexes

db.runCommand({"cloneCollection":"collname", "from": "shot:27017"})
# copy data from remote server

db.currentOp()
# show processlist equivalent

db.killOp(opid)
# kill process

db.adminCommand({"shutdown":1, "force":true})
# same as # use admin ## db.shutdownServer()
# same as kill -2 process_id


##### Replica set commands:

# start replica, add / remove server
rs.initiate()
rs.add({"_id":1, "host":"server:27017", "votes":0, "priority":0})
rs.remove("server:27017")
rs.stopSet()

# check status
db.printSlaveReplicationInfo()
rs.status()
rs.config()

# manage
rs.reconfig()
rs.reconfig(myconfig, {"force":true})
rs.stepDown()
rs.freeze(10000)
rs.setSlaveOk()
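A few of these commands have direct pymongo equivalents; a minimal sketch, assuming a local server and that these helpers exist in the installed pymongo version:

from pymongo import MongoClient

client = MongoClient()
db = client.test

# like db.setProfilingLevel(1) and db.getProfilingLevel()
db.set_profiling_level(1)
print db.profiling_level()

# like db.adminCommand({"setParameter":1, "logLevel":3})
client.admin.command("setParameter", 1, logLevel=3)

# like db.stats() and db.collname.stats()
print db.command("dbstats")
print db.command("collstats", "collname")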


