Shantanu's Blog

Corporate Consultant

September 16, 2019

 

Secure FTP access to your S3 bucket in 4 easy steps

1) Visit the AWS Transfer for SFTP home page and create a new server:

https://console.aws.amazon.com/transfer/

Endpoint configuration: Public
Identity provider: Service managed

2) Create required role (my_sftp_role) and policy (my_sftp_policy) using this documentation:
https://docs.aws.amazon.com/transfer/latest/userguide/requirements-roles.html

3) Create required SSH keys using this guide:
https://docs.aws.amazon.com/transfer/latest/userguide/sshkeygen.html

4) Create a new user:
a) Provide a name like 'test'
b) Choose the access role (my_sftp_role) and select the policy (my_sftp_policy) that we created in step 2
c) Choose as the home directory the same S3 bucket that we mentioned in the policy
d) Upload the SSH public key that we created in step 3

You can now connect to your SFTP server using the private key that was created in step 3:

sftp -i /home/ec2-user/transfer-key test@s-XXX.server.transfer.us-east-1.amazonaws.com

https://docs.aws.amazon.com/transfer/latest/userguide/getting-started-use-the-service.html
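The same user can also be created programmatically. A minimal sketch using boto3, assuming the server id s-XXX from above, a hypothetical role ARN, and the public key saved as transfer-key.pub:

import boto3

transfer = boto3.client("transfer")

# create the SFTP user with its role, home directory and public key
with open("transfer-key.pub") as f:
    transfer.create_user(
        ServerId="s-XXX",
        UserName="test",
        Role="arn:aws:iam::123456789012:role/my_sftp_role",  # placeholder ARN
        HomeDirectory="/my-bucket",  # hypothetical bucket name
        SshPublicKeyBody=f.read(),
    )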



September 10, 2019

 

Import AWS command line output to pandas

Here is how we can get the API Gateway report into a pandas dataframe. The first command downloads the API Gateway REST APIs in JSON format. The pandas json_normalize function converts the JSON data into a dataframe, and json.load is used to read the document from the file.

# !aws --region=us-east-1 apigateway get-rest-apis > /tmp/to_file.json

import pandas as pd
import json
from pandas.io.json import json_normalize

# read the JSON document saved by the AWS CLI
with open("/tmp/to_file.json") as f:
    data = json.load(f)

# flatten the "items" list into a dataframe
df = json_normalize(data, "items")

# convert the epoch timestamp to a date and pick the first endpoint type
df["createdDate"] = pd.to_datetime(df["createdDate"], unit="s").dt.date
df["type"] = df["endpointConfiguration.types"].str[0]



 

CloudFormation template of 3 lines

These 3 lines of CloudFormation code will create an SNS topic. Since the name is not defined in the template, a new name will be generated automatically.

Resources:
  MyTopic:
    Type: AWS::SNS::Topic

When you delete the stack, the topic is also removed. If you want to keep the resources even after the stack is deleted, update the stack with the following template...

Resources:
  MyTopic:
    Type: AWS::SNS::Topic
    DeletionPolicy: Retain

Once the stack is updated, your resources will not be removed even if you delete the stack that created them.
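The stack itself can be created from Python as well. A minimal sketch using boto3, assuming the template above is saved as topic.yaml and a hypothetical stack name:

import boto3

cf = boto3.client("cloudformation")

# create the stack from the local template file
with open("topic.yaml") as f:
    cf.create_stack(StackName="my-topic-stack", TemplateBody=f.read())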



 

Using pre-trained resnet model after modifying layers

PyTorch is a package developed by Facebook for computer vision and natural language processing. The TorchVision package consists of popular datasets, model architectures, and common image transformations for computer vision. We use the models module to import a pre-trained model from the ResNet family.


from torchvision import models
import torch
res_mod = models.resnet34(pretrained=True)

It is possible to print the layers that the pre-trained model is using. The input array passes through each of these layers in turn to produce the final output that represents the predicted class.


# list the top-level building blocks of the network
for name, child in res_mod.named_children():
    print(name)

In some cases we may want to selectively unfreeze layers and have the gradients computed for just a few chosen layers. For example, here layer3 and layer4 are made available for training while the rest of the layers are re-used as they are.


# unfreeze layer3 and layer4; freeze all other layers
for name, child in res_mod.named_children():
    if name in ["layer3", "layer4"]:
        print(name + " has been unfrozen.")
        for param in child.parameters():
            param.requires_grad = True
    else:
        for param in child.parameters():
            param.requires_grad = False

The change should then be communicated to the optimizer, so that only the parameters that still require gradients are updated during training.


# pass only the parameters that require gradients to the optimizer
optimizer_conv = torch.optim.SGD(
    filter(lambda x: x.requires_grad, res_mod.parameters()), lr=0.001, momentum=0.9
)
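A common next step in transfer learning is to replace the final fully connected layer so that it matches the number of target classes. A minimal sketch, assuming a hypothetical 10-class problem:

import torch.nn as nn

# the new head is created with requires_grad=True, so it will be trained
res_mod.fc = nn.Linear(res_mod.fc.in_features, 10)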



 

Autonormalize using featuretools

This is how a typical pandas dataframe looks. How do we know the relations between the columns? Is it possible to normalize the data into 2 or 3 tables?

import pandas as pd
rows = [['tigers', 'boston', 'MA', 20],
        ['elephants', 'chicago', 'IL', 21],
        ['foxes', 'miami', 'FL', 20],
        ['snakes', 'austin', 'TX', 20],
        ['dolphins', 'honolulu', 'HI', 19],
        ['eagles', 'houston', 'TX', 21]]
df = pd.DataFrame(rows, columns=['team', 'city', 'state', 'roster_size'])

Just 2 lines of code will show the relations between the columns and build the dependencies automatically.

from featuretools.autonormalize import autonormalize as an
print(an.find_dependencies(df))

The featuretools autonormalize add-on does all this and more!

https://github.com/FeatureLabs/autonormalize/blob/master/autonormalize/demos/Editing%20Dependnecies%20Demo.ipynb
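Per the project README, the add-on can also build a normalized EntitySet in a single call. A sketch, assuming the auto_entityset function from the linked repository:

# split the dataframe into normalized tables based on the discovered dependencies
es = an.auto_entityset(df)
print(es)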



September 09, 2019

 

Pandas dataframe to Athena

Here are 5 steps to save your pandas dataframe to Athena table.

1) Create a sample dataframe.

from io import StringIO
import pandas as pd

u_cols = ["page_id", "web_id"]
audit_trail = StringIO(
    """
3|0
7|3
11|4
15|5
19|6
"""
)

df = pd.read_csv(audit_trail, sep="|", names=u_cols)

2) Convert all columns to string.

df = df.astype(str)

3) Create a new bucket. You can also use aws s3 rb --force to remove a non-empty bucket before re-creating it.

!aws s3 mb s3://todel162/

4) Save the pandas dataframe as parquet files to S3

import awswrangler
session = awswrangler.Session()
session.pandas.to_parquet(dataframe=df, path="s3://todel162")

5) Log in to the console and create a new table in Athena.

CREATE EXTERNAL TABLE IF NOT EXISTS sampledb.todel5 (
  `page_id` string,
  `web_id` string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
WITH SERDEPROPERTIES (
  'serialization.format' = '1'
) LOCATION 's3://todel162/'
TBLPROPERTIES ('has_encrypted_data'='false');
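Once the table exists, it can also be queried from Python instead of the console. A minimal sketch using boto3, with a hypothetical s3://todel162/results/ location for the query output:

import boto3

athena = boto3.client("athena")

# run a query against the new table; results are written to the output location
athena.start_query_execution(
    QueryString="SELECT * FROM todel5 LIMIT 10",
    QueryExecutionContext={"Database": "sampledb"},
    ResultConfiguration={"OutputLocation": "s3://todel162/results/"},
)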



 

Using pre-trained models

Here is how you can use a pre-trained model in just 5 or 6 lines of code using the imageAI library.

#!pip install -q opencv-python tensorflow keras imageAI

#!wget https://github.com/OlafenwaMoses/ImageAI/releases/download/1.0/yolo.h5

from imageai.Detection import ObjectDetection

# object detection using the pre-trained YOLOv3 weights
detector = ObjectDetection()
detector.setModelTypeAsYOLOv3()
detector.setModelPath("yolo.h5")
detector.loadModel()
detections = detector.detectObjectsFromImage(input_image="test.jpeg", output_image_path="out.jpeg")

#!wget https://github.com/OlafenwaMoses/ImageAI/releases/download/1.0/inception_v3_weights_tf_dim_ordering_tf_kernels.h5

from imageai.Prediction import ImagePrediction

# image classification using the pre-trained Inception V3 weights
predictor = ImagePrediction()
predictor.setModelTypeAsInceptionV3()
predictor.setModelPath("inception_v3_weights_tf_dim_ordering_tf_kernels.h5")
predictor.loadModel()
predictions, probabilities = predictor.predictImage("test.jpeg", result_count=5)
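predictImage returns two parallel lists, the predicted labels and their probabilities:

for label, probability in zip(predictions, probabilities):
    print(label, ":", probability)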



September 08, 2019

 

Pandas transform inconsistent behavior for list

There is a serious bug in pandas aggregation when using the transform method.

import pandas as pd

df = pd.DataFrame(data={'label': ['a', 'b', 'b', 'c'], 'wave': [1, 2, 3, 4], 'y': [0, 0, 0, 0]})

The following does not return a list as we would expect.

df['new'] = df.groupby(['label'])[['wave']].transform(list)

I can use tuple instead of list to get the correct results, but that is only a work-around. The bug is annoying because we do not know whether any other functions will also misbehave.
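A sketch of the tuple work-around mentioned above:

df['new'] = df.groupby(['label'])[['wave']].transform(tuple)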

https://stackoverflow.com/questions/57743798/pandas-transform-inconsistent-behavior-for-list



September 07, 2019

 

Docker security check

Running the security check on a Docker server is easy.

git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh

You may get a few warnings like this...

[WARN] 1.2.4  - Ensure auditing is configured for Docker files and directories - /var/lib/docker

Open this file and add the paths of the Docker files and directories to watch. Do not forget to restart the audit daemon.

# vi /etc/audit/audit.rules

-w /usr/bin/docker -p wa
-w /var/lib/docker -p wa
-w /etc/docker -p wa
-w /etc/default/docker -p wa
-w /etc/docker/daemon.json -p wa
-w /usr/bin/docker-containerd -p wa
-w /usr/bin/docker-runc -p wa
-w /etc/sysconfig/docker -p wa

# restart the audit daemon
sudo service auditd restart
_____

Another file to update for security purposes:

vi /etc/docker/daemon.json

{
    "icc": false,
    "log-driver": "syslog",
    "disable-legacy-registry": true,
    "live-restore": true,
    "userland-proxy": false,
    "no-new-privileges": true
}
_____

Add this environment variable:

export DOCKER_CONTENT_TRUST=1
echo "DOCKER_CONTENT_TRUST=1" | sudo tee -a /etc/environment

# restart docker for the changes to take effect
sudo service docker restart


