Here is how we can run a single Athena query across data stored in multiple regions.
-- Create a table with a single id column, partitioned by region
CREATE EXTERNAL TABLE ids (id bigint)
PARTITIONED BY (region string)
ROW FORMAT DELIMITED;
-- Add two partitions that point to buckets in two different regions
ALTER TABLE ids ADD PARTITION (region='us')
LOCATION 's3://my-us-bucket/ids';
ALTER TABLE ids ADD PARTITION (region='eu')
LOCATION 's3://my-eu-bucket/ids';
-- Count the distinct ids across all regions
SELECT COUNT(DISTINCT id)
FROM ids;
https://aws.amazon.com/blogs/apn/running-sql-on-amazon-athena-to-analyze-big-data-quickly-and-across-regions/
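If you want to run the same count programmatically, here is a minimal boto3 sketch; the database name and the query results location are placeholders, not values from the AWS post.
import boto3
athena = boto3.client("athena", region_name="us-east-1")
resp = athena.start_query_execution(
    QueryString="SELECT COUNT(DISTINCT id) FROM ids",
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-us-bucket/athena-results/"},
)
# poll get_query_execution / get_query_results with this id to fetch the count
print(resp["QueryExecutionId"])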
Labels: athena
GluonNLP provides pre-trained models for common NLP tasks. It has carefully designed APIs that greatly reduce implementation complexity.
import mxnet as mx
import gluonnlp as nlp
glove = nlp.embedding.create('glove', source='glove.6B.50d')
def cos_similarity(embedding, word1, word2):
    vec1, vec2 = embedding[word1], embedding[word2]
    return mx.nd.dot(vec1, vec2) / (vec1.norm() * vec2.norm())
cos_similarity(glove, 'baby', 'infant').asnumpy()[0]
_____
This will load the WikiText-2 corpus of Wikipedia article words; the tokens can be sliced like a Python list.
train = nlp.data.WikiText2(segment='train')
train[10000:10199]
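As a small follow-up, here is a sketch of my own (not from the original post) that builds a vocabulary from this corpus and attaches the GloVe vectors to it:
counter = nlp.data.count_tokens(train[:])  # word frequencies for the whole training split
vocab = nlp.Vocab(counter)
vocab.set_embedding(glove)
vocab.embedding['baby'][:5]  # first five dimensions of the vector for "baby"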
Labels: machine_learning, nlp, python
Usually I download a file and extract it using two Linux commands like this...
! wget https://github.com/d6t/d6tstack/raw/master/test-data.zip
! unzip -o test-data.zip
But it can also be done using Python code as shown below.
import urllib.request
import zipfile
cfg_fname_sample = "test-data.zip"
urllib.request.urlretrieve(
"https://github.com/d6t/d6tstack/raw/master/" + cfg_fname_sample, cfg_fname_sample
)
with zipfile.ZipFile(cfg_fname_sample, "r") as zip_ref:
    zip_ref.extractall(".")
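The archive can also be extracted straight from memory, without writing the zip file to disk. A sketch, assuming the requests package is installed:
import io
import zipfile
import requests
resp = requests.get("https://github.com/d6t/d6tstack/raw/master/test-data.zip")
with zipfile.ZipFile(io.BytesIO(resp.content)) as zf:
    zf.extractall(".")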
Labels: pandas, python, usability
YOLO helps detect objects in an image using a pre-trained model.
1) Install darknet
git clone https://github.com/pjreddie/darknet
cd darknet
make
2) Download the pre-trained weight file
wget https://pjreddie.com/media/files/yolov3.weights
3) Run the detector
./darknet detect cfg/yolov3.cfg yolov3.weights data/dog.jpg
You will see output something like this...
dog: 99%
truck: 93%
bicycle: 99%
An image called predictions.jpg is saved in the current directory.
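If you would rather drive the same weights from Python, OpenCV's dnn module can load the Darknet config and weights. A rough sketch, assuming opencv-python is installed and the files downloaded above are in the current directory (the 0.5 threshold is my own choice):
import cv2
net = cv2.dnn.readNetFromDarknet("cfg/yolov3.cfg", "yolov3.weights")
img = cv2.imread("data/dog.jpg")
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
# each detection row is [center_x, center_y, width, height, objectness, class scores...]
for out in net.forward(net.getUnconnectedOutLayersNames()):
    for det in out:
        if det[4] > 0.5:
            print("class id:", det[5:].argmax(), "score:", det[5:].max())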
Labels: machine_learning, tensorflow
Athena and Redshift are both great databases, but there are times when we need a bridge to connect them, for example when we need to join a Redshift table with data in Athena. Redshift Spectrum can be used in such cases.
1) Create an IAM role called "RedshiftCopyUnload" using the CloudFormation template shown in this example:
https://stackoverflow.com/questions/58816446/template-to-create-iam-role-for-spectrum-s3-access
2) Create an external schema and database using the role created in the first step:
create external schema spectrum_schema from data catalog
database 'spectrum_db'
iam_role 'arn:aws:iam::XXX:role/RedshiftCopyUnload'
create external database if not exists;
3) Create an external table:
create external table spectrum_schema.testme (number bigint)
row format delimited fields terminated by '|'
stored as textfile
location 's3://texport/c_pincode_data/';
4) Create a native Redshift table using a select query on the external table:
create table mypincode as select * from spectrum_schema.testme limit 10;
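Once the external table exists, it can also be joined with native Redshift tables from Python. A minimal sketch using psycopg2; the host, credentials and the join with mypincode are illustrative placeholders:
import psycopg2
conn = psycopg2.connect(
    host="my-cluster.xxxxx.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="awsuser", password="XXX",
)
with conn.cursor() as cur:
    cur.execute("""
        select t.number, count(*)
        from spectrum_schema.testme t
        join mypincode m on t.number = m.number
        group by t.number
    """)
    print(cur.fetchall())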
Labels: athena, redshift
The following code will launch a Linux instance of type m3.medium using spot pricing and associate it with the IP address 13.228.39.49. Make sure to use your own Elastic IP address and key, and do not forget to change the access_key and secret_key parameters.
!wget https://raw.githubusercontent.com/shantanuo/easyboto/master/easyboto.py
import easyboto
dev=easyboto.connect('access_key', 'secret_key')
dev.placement='us-east-1a'
dev.myaddress='13.228.39.49'
dev.key='dec15abc'
dev.MAX_SPOT_BID= '2.9'
dev.startEc2Spot('ami-0323c3dd2da7fb37d', 'm3.medium')
This will return the instance ID and the SSH command that you can use to connect to your instance. The output will look something like...
job instance id: i-029a926e68118d089
ssh -i dec15a.pem ec2-user@13.228.39.49
You can list all instances along with details like launch time and image_id, and save the results as a pandas dataframe, using the showEc2 method like this...
df=dev.showEc2()
Now "df" is a pandas dataframe object. You can sort or groupby the instances just like an excel sheet.
You can delete the instance using the deleteEc2 method, providing the instance ID that was returned in the first step.
dev.deleteEc2('i-029a926e68118d089')
_____
You can also use a CloudFormation template for this purpose. Visit the following link and look for the "Linux EC2 Instance on SPOT" section.
https://github.com/shantanuo/cloudformation
Click on "Launch Stack" button. It is nothing but GUI for the python code mentioned above. You will simply have to submit a form for the methods like key and IP address.
Labels: aws, aws_cloudformation, boto, usability
Save the following shell script and run it with your GitHub username to get the list of all your starred repositories.
sh myscript.sh shantanuo > starred.txt
You will need to install jq (yum install jq)
#!/bin/sh
USER=${1:-someUser}
STARS=$(curl -sI "https://api.github.com/users/$USER/starred?per_page=1" | egrep -i '^link' | egrep -o 'page=[0-9]+' | tail -1 | cut -c6-)
PAGES=$((STARS/100+1))
for PAGE in `seq $PAGES`; do
  curl -sH "Accept: application/vnd.github.v3.star+json" "https://api.github.com/users/$USER/starred?per_page=100&page=$PAGE" | jq -r '.[]|[.starred_at,.repo.full_name]|@tsv'
done
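The same listing can be produced with a short Python script instead of curl and jq. A sketch, assuming the requests package is installed (unauthenticated calls are rate-limited by GitHub):
import requests
user = "shantanuo"
page = 1
while True:
    resp = requests.get(
        "https://api.github.com/users/{}/starred".format(user),
        params={"per_page": 100, "page": page},
        headers={"Accept": "application/vnd.github.v3.star+json"},
    )
    items = resp.json()
    if not items:
        break
    for item in items:
        print(item["starred_at"], item["repo"]["full_name"], sep="\t")
    page += 1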
Labels: git, shell script
Here are some basic NumPy methods everyone should be aware of.
import numpy as np
mylist = [[100, 2, 3], [4, 5786, 6]]
a = np.array(mylist)
a
np.ravel(a)  # flatten to a 1-D array
np.append(a, [2])  # without axis, append flattens the array first
np.append(a, [10, 11, 12])
np.append(a, [""])  # mixing in a string upcasts every element to a string
b = np.array([[400], [800]])
np.append(a, b, axis=1)  # append b as an extra column
np.append(a, [[50, 60, 70]], axis=0)  # append an extra row
np.insert(a, 2, [1, 2, 34])  # without axis, insert into the flattened array at position 2
a[:1]  # first row, still 2-D
a[0][2]  # element at row 0, column 2
a.size  # total number of elements
a.shape  # (rows, columns)
np.where(a == 3)  # row and column indices where the condition holds
np.sort(a, axis=1)  # sort each row
np.sort(a, axis=0)  # sort each column
a.tolist()  # back to a nested Python list
np.delete(a, 1, axis=0)  # drop the second row
np.delete(a, 1, axis=1)  # drop the second column
Labels: pandas, python
There are times when I find a great article or web page but don't have time to read it. I use the EmailThis service to save text and images from a website to my email inbox. The concept is very simple: drag and drop a bookmarklet to the bookmark toolbar, then click on it to send the current web page to your inbox!
https://www.emailthis.me/
But I did not like the premium ads and partial content that the site sends, so I built my own serverless API that provides exactly the same functionality using Mailgun and Amazon Web Services.
https://www.mailgun.com/
Once you register with Mailgun, you will get a URL and an API key that you should copy to a notepad. You will need to provide this information when you launch the CloudFormation template by clicking on this link:
https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/new?stackName=emailThis&templateURL=https://datameetgeobk.s3.amazonaws.com/cftemplates/furl.yaml.txt
Once the resources are created, you will see a URL in the output section, something like this...
https://ie5n05bqo0.execute-api.us-east-1.amazonaws.com/mycall
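The Lambda behind that URL presumably reads the query string parameters and relays them to the Mailgun messages API. Here is a sketch of such a handler; the handler name, the environment variable names and the use of urllib are my own assumptions, not necessarily what the template actually deploys:
import base64
import json
import os
import urllib.parse
import urllib.request
def lambda_handler(event, context):
    params = event.get("queryStringParameters") or {}
    email = params.get("email")
    title = params.get("title", "")
    url = params.get("url", "")
    data = urllib.parse.urlencode({
        "from": "emailthis@" + os.environ["MAILGUN_DOMAIN"],
        "to": email,
        "subject": title or url,
        "html": '<a href="{0}">{0}</a>'.format(url),
    }).encode()
    req = urllib.request.Request(
        "https://api.mailgun.net/v3/" + os.environ["MAILGUN_DOMAIN"] + "/messages",
        data=data,
    )
    auth = base64.b64encode(("api:" + os.environ["MAILGUN_API_KEY"]).encode()).decode()
    req.add_header("Authorization", "Basic " + auth)
    urllib.request.urlopen(req)
    return {"statusCode": 200, "body": json.dumps({"sent_to": email})}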
Now building the JavaScript bookmarklet is easy.
javascript:(function(){location.href='https://ie5n05bqo0.execute-api.us-east-1.amazonaws.com/mycall?email=shantanu.oak@gmail.com&title=emailThis&url='+encodeURIComponent(location.href);})();
Right-click on any bookmark and paste the above link. Make sure that you change the URL and email address to your own. Now click the bookmarklet while you are on an important web page that you want to send to your inbox. Enjoy!
Labels: api_gateway, aws, aws_lambda, usability