Shantanu's Blog

Database Consultant

January 29, 2019

 

List recent bookmarks on medium

Here is a Python script to list the last 10 posts you have clapped for (recommended) on Medium. Change the @user in the URL to the actual username, e.g. @shantanuo

import requests
import json

def clean_json_response(response):
    # Medium prefixes its JSON responses with "])}while(1);</x>" to prevent JSON hijacking
    return json.loads(response.text.split('])}while(1);</x>')[1])

url = 'https://medium.com/@user/has-recommended?format=json'
mylist=list()
response = requests.get(url)
response_dict = clean_json_response(response)
for i in response_dict['payload']['references']['Post'].keys():
    myurl='https://towardsdatascience.com/'+response_dict['payload']['references']['Post'][i]['uniqueSlug']
    mylist.append(myurl)

mylist

This will create a list something like this...

['https://towardsdatascience.com/how-to-learn-more-in-less-time-with-natural-language-processing-part-1-49d94543f73d',
 'https://towardsdatascience.com/how-to-learn-more-in-less-time-with-natural-language-processing-part-2-30539111b23a',
]
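The same JSON response also carries each post's metadata, so the titles can be collected as well. A small sketch, assuming the Post objects contain a 'title' field:

titles = {}
for i in response_dict['payload']['references']['Post'].keys():
    post = response_dict['payload']['references']['Post'][i]
    # map each slug to its human-readable title
    titles[post['uniqueSlug']] = post.get('title', '')

titles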

The gist is available here...
https://gist.github.com/shantanuo/fdca7c1fb314e4878925fc071122e9f0



January 18, 2019

 

Using kaggle command line with google colab

Downloading any file from Kaggle using the command line is easy...

pip install kaggle

mkdir -p /root/.kaggle
echo '{"username":"shantanuo","key":"c90c207ab8d6c445c54f77c5d5dcdedbx"}' > /root/.kaggle/kaggle.json
chmod 600 /root/.kaggle/kaggle.json

kaggle competitions download -c cifar-10
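The same download can also be scripted through the kaggle package's Python API. A minimal sketch, assuming the same kaggle.json credentials are already in place:

from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()  # reads the credentials from ~/.kaggle/kaggle.json
api.competition_download_files('cifar-10', path='.')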

_____

If you are using Google Colab...

Create the API token by visiting the “My Account” page on Kaggle.  This will download a kaggle.json file to your computer. Next, we need to upload this credential file to Colab:

from google.colab import files
files.upload()

Then we can install the Kaggle API and save the credential file in the ".kaggle" directory.

!pip install -U -q kaggle
!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/

Now we can download the dataset:

!kaggle datasets download -d uciml/pima-indians-diabetes-database

This dataset will be downloaded to your current working directory, which is the "content" folder in Colab. As files get deleted every time you restart your Colab session, it's a good idea to save them in your Google Drive. You just need to mount the drive using the code below and save them there:

from google.colab import drive
drive.mount('/content/gdrive')
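For example, the downloaded archive can then be copied to the mounted drive. A sketch; the zip name is assumed to match the dataset slug:

!cp pima-indians-diabetes-database.zip "/content/gdrive/My Drive/"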



January 15, 2019

 

Using Dask for text columns

Dask works really well when most of the columns are numeric. But if I have a few columns with a lot of text (up to a few thousand characters) that includes special characters escaped by \, then dask does not work as expected. For example:

26546,"trans_sarotp-527817b4dbe92fc7","AARSAK","Dear User, TranID \"RNS2018113031387\" of Amount 200,Pending for Checking,01/12/2018-00:03:13 - CUSTOMER CARE",1,20181201000311

Pandas somehow reads this data, but there are cases when dask does not.
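For comparison, a minimal pandas sketch that parses the sample line above (assuming it has been saved locally as sample.csv, a file name used only for illustration):

import pandas as pd

# pandas handles the backslash-escaped quotes with the default C engine
pdf = pd.read_csv('sample.csv', header=None, dtype=str,
                  escapechar='\\', encoding='ISO-8859-1')
print(pdf.shape)

The equivalent dask call is shown below.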

import dask.dataframe as dd

df = dd.read_csv('s3://mybucket/somefile.csv', error_bad_lines=False, header=None,
                 dtype=str, escapechar='\\',
                 encoding='ISO-8859-1', engine='python',
                 storage_options={'anon': True})

In order to use the escapechar parameter I need to use the "C" engine, because the "python" engine does not work. And the "C" engine fails to parse some of the text columns, maybe due to encoding issues. error_bad_lines is another parameter that does not behave the same way as it does in pandas.

There are cases when the dask dataframe reads the data while dask distributed fails to read the same data. Overall, dask seems to have a very limited use-case here; it is not a general-purpose solution.



January 12, 2019

 

Using tensorflow hub

Here are fewer than 10 lines of code to embed your sentences using ELMo. There is no need to download the model, because it is hosted on TensorFlow Hub and can be loaded dynamically!
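The snippet assumes a list of strings named sentences is already defined, for example:

# any list of strings will do; these are placeholders, not from the original post
sentences = ["The quick brown fox jumps over the lazy dog.",
             "Slavery was abolished in many countries during the 19th century.",
             "Dask and pandas are dataframe libraries."]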

import tensorflow as tf
import tensorflow_hub as hub
url = "https://tfhub.dev/google/elmo/2"
embed = hub.Module(url)
embeddings = embed(sentences, signature="default", as_dict=True)["default"]

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  sess.run(tf.tables_initializer())
  x = sess.run(embeddings)

And here is how to use the module to search the embedded sentences for a test string, e.g. "slave".

import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

search_string = "slave" #@param {type:"string"}
results_returned = "3" #@param [1, 2, 3]

embeddings2 = embed(
    [search_string],
    signature="default",
    as_dict=True)["default"]

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  sess.run(tf.tables_initializer())
  search_vect = sess.run(embeddings2)

cosine_similarities = pd.Series(cosine_similarity(search_vect, x).flatten())

# build an HTML snippet with the closest sentences, bolding words that match the search string
output = ""
for i, j in cosine_similarities.nlargest(int(results_returned)).iteritems():
  output += "<p>"
  for word in sentences[i].split():
    if word.lower() in search_string:
      output += " <b>" + str(word) + "</b>"
    else:
      output += " " + str(word)
  output += "</p>"

# render the highlighted sentences in the notebook
from IPython.display import HTML, display
display(HTML(output))

https://colab.research.google.com/drive/13f6dKakC-0yO6_DxqSqo0Kl41KMHT8A1#scrollTo=_Qgy7Jmr5wSx&forceEdit=true&offline=true&sandboxMode=true



January 11, 2019

 

CNN and softmax

Softmax is a type of normalization that amplifies the differences between numbers, making them more distinct; this is especially useful in image processing.

import numpy as np
nums = np.array([4, 5, 6])

from sklearn import preprocessing
preprocessing.normalize([nums])
# array([[0.45584231, 0.56980288, 0.68376346]])

def softmax(A):
    expA = np.exp(A)
    return expA / expA.sum()

softmax(nums)
# array([0.09003057, 0.24472847, 0.66524096])

After the convolution kernels have processed the image channels and the features have been reduced to one score per class, softmax converts those scores into probabilities.
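A minimal sketch of where softmax sits in a CNN, using tf.keras (the layer sizes here are illustrative, not from the original post):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    # the final dense layer produces one score per class;
    # softmax turns those scores into probabilities
    tf.keras.layers.Dense(10, activation='softmax'),
])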

http://machinelearninguru.com/_images/topics/computer_vision/basics/convolutional_layer_1/rgb.gif



January 08, 2019

 

pandas case study 9

Is there any way to change the order of group by output?

For example, in this case I will get "India" first. How do I change the order so that "USA" comes first, followed by "India"?

myst="""India, 905034 , 19:44 
USA, 905094  , 19:33
Russia,  905154 ,   21:56

"""
u_cols = ['country', 'index', 'current_tm']

from io import StringIO
import pandas as pd

df = pd.read_csv(StringIO(myst), sep=',', names=u_cols)

df.groupby('country').sum()
country
India 905034
Russia 905154
USA 905094
_____

from pandas.api.types import CategoricalDtype
cats_to_order = ["USA", "India", "Russia"]
covered_type = CategoricalDtype(categories=cats_to_order, ordered=True)

df['country'] = df['country'].astype(covered_type)

df.groupby('country').sum()
country
USA 905094
India 905034
Russia 905154
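An alternative that avoids changing the dtype is to reindex the grouped result against the desired order (a sketch, applied to the original string column):

df.groupby('country').sum().reindex(cats_to_order)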

http://pbpython.com/pandas_dtypes_cat.html



 

newspaper module for python

Here is a useful Python module to scrape text from any website.

# install newspaper module
!pip install newspaper3k

from newspaper import Article
article = Article('https://www.the-digital-picture.com/Reviews/Sony-FE-16-35mm-f-2.8-GM-Lens.aspx')
article.download()
article.parse()
print(article.text)

article.nlp()
article.keywords
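After nlp() has run, the module also exposes an automatic summary (assuming the article text was parsed cleanly):

print(article.summary)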



