Shantanu's Blog

Database Consultant

September 30, 2019

 

Create pandas dataframe using elastic beats data

Here are 5 steps to create a pandas dataframe using the packetbeat data

1) Start elastic container
2) Download and configure packetbeat config file
3) Start packetbeat container
4) Login to packetbeat container and start service
5) Import packetbeat data into pandas dataframe
_____

# start elastic
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node"   --log-driver json-file  -d elasticsearch:7.3.1

# download packetbeat config file
curl -L -O https://raw.githubusercontent.com/elastic/beats/7.0/deploy/docker/packetbeat.docker.yml

# add send_response parameter to mysql section of config file and change host address
vi packetbeat.docker.yml

packetbeat.protocols.mysql:
  ports: [3306]
  send_response: true

output.elasticsearch:
  hosts: 'some_site.com:9200'

# start a packetbeat container

docker run \
  --name pbeat \
  --disable-content-trust \
  --log-driver json-file \
  --user=packetbeat \
  --volume="$(pwd)/packetbeat.docker.yml:/usr/share/packetbeat/packetbeat.yml:ro" \
  --cap-add="NET_RAW" \
  --cap-add="NET_ADMIN" \
  --network=host \
  -d docker.elastic.co/beats/packetbeat:7.0.0

# login to packetbeat container and start packetbeat service
docker exec -it pbeat bash
cd /usr/share/packetbeat/
./packetbeat -e

# load the packetbeat data into a pandas dataframe:

import pandas as pd
import numpy as np

import elasticsearch
from elasticsearch import helpers

es_client = elasticsearch.Elasticsearch("http://some_site.com:9200")

mylist = list()

# collect the "client" sub-document of each packetbeat event
for r in helpers.scan(es_client, index="packetbeat-7.0.0-2019.10.01-000001"):
    try:
        mylist.append(r["_source"]["client"])
    except KeyError:
        mylist.append({"ip": np.nan, "port": np.nan, "bytes": np.nan})

df = pd.DataFrame(mylist)
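
A quick sanity check on the resulting dataframe. The "ip" column below is an assumption based on the fallback dictionary used above; the exact columns depend on the packetbeat "client" document structure.

print(df.head())
print(df["ip"].value_counts().head())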



September 29, 2019

 

Connect to DocumentDB from EC2 instance

Here are 4 easy steps to connect to a DocumentDB cluster from any EC2 instance that is part of the same VPC.

1) Download key file:
cd /tmp/
wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem

2) Create mongo container:
docker run --name mymongo  -p 27017:27017 -v /tmp/:/tmp/ -d mongo:3.4

3) Login to mongo container:
docker exec -it mymongo bash

4) Connect to the DocumentDB cluster:
mongo --ssl --host docdb-XXX.us-east-1.docdb.amazonaws.com:27017 --sslCAFile /tmp/rds-combined-ca-bundle.pem --username abcd  --password xyz

Note that the /tmp/ folder of the host is mounted in the container. All the files in that folder are available to the "mymongo" container.
_____

You can set up an SSH tunnel to the Amazon DocumentDB cluster by running the following command on your local computer. The -L flag is used for forwarding a local port.

> ssh -i "ec2Access.pem" -L 27017:sample-cluster.cluster-xxx.us-east-1.docdb.amazonaws.com:27017 ubuntu@ec2-4-3-2-1.compute-1.amazonaws.com -N

After the SSH tunnel is created, any commands that you issue to localhost:27017 are forwarded to the Amazon DocumentDB cluster sample-cluster running in the Amazon VPC.

Make sure that TLS is disabled.

Now, use this command to connect to documentDB...

> mongo --sslAllowInvalidHostnames --ssl --sslCAFile rds-combined-ca-bundle.pem --username --password  
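
If you prefer Python, here is a minimal pymongo sketch (assuming the tunnel above is running; the username and password are placeholders):

from pymongo import MongoClient

# connect through the local end of the SSH tunnel
client = MongoClient("mongodb://myuser:mypassword@localhost:27017/")
print(client.list_database_names())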



September 22, 2019

 

pandas case study 15

df = pd.DataFrame({"col1": [1, 3, 6, 10], "col2": ["A", "B", "A", "C"]})

col1    col2
 1        A
 3        B
 6        A
 10       C

How do I extend the dataframe by filling the missing values like this?

col1    col2
 1        A
 2        A
 3        B
 4        B
 5        B
 6        A
 7        A
 8        A
 9        A
 10       C

Reindexing, forward-filling and then resetting the index will expand the dataframe:

df.set_index('col1').reindex(range(df.col1.min(),df.col1.max()+1)).ffill().reset_index()

And if col1 is a date column, then the answer is easier with:

df.set_index('col1').resample('D').ffill()
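
For example, a minimal sketch of the date version, with made-up dates:

import pandas as pd

df = pd.DataFrame({"col1": pd.to_datetime(["2019-01-01", "2019-01-03", "2019-01-06"]),
                   "col2": ["A", "B", "A"]})
# resample to a daily index and forward fill the gaps
print(df.set_index("col1").resample("D").ffill())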

_____

# Pandas tips

## This is a self join

df1 = df.reset_index()
df1.merge(df1, on="col1").query("index_x > index_y")

## And this is an example of groupby of all columns
df.groupby([*df]).size()
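
Here is a small runnable sketch of both tips, using a toy dataframe (made up for illustration) where col1 has repeated values so the self join produces pairs:

import pandas as pd

# toy data, invented for this example
df = pd.DataFrame({"col1": [1, 1, 2, 2, 2], "col2": list("abcde")})

df1 = df.reset_index()
pairs = df1.merge(df1, on="col1").query("index_x > index_y")  # each pair of rows sharing col1, once
counts = df.groupby([*df]).size()                             # group by every column at once
print(pairs)
print(counts)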



 

pandas case study 14

If we have this dataframe, how do we take the average of the numeric values?

import pandas as pd

df = pd.DataFrame(
    {
        "group": ["A", "A", "A", "B", "B"],
        "group_color": ["green", "green", "green", "blue", "blue"],
        "val1": [5, 2, 3, 4, 5],
        "val2": [4, 2, 8, 5, 7],
    }
)

This can be done with either the pivot_table or the groupby method.

df.pivot_table(index=["group", "group_color"])

df.groupby("group").agg(lambda x: x.head(1) if x.dtype == "object" else x.mean())



 

pandas case study 13

Let's assume we have a dictionary where the values are stored as lists.

dictionary_col2 = {"MOB": [1, 2], "ASP": [1, 2], "YIP": [1, 2]}

It is easy to import the data into a dataframe without any problem.

pd.DataFrame(dictionary_col2)

   MOB  ASP  YIP
0    1    1    1
1    2    2    2

We can transpose the data:

pd.DataFrame(dictionary_col2).T

     0  1
MOB  1  2
ASP  1  2
YIP  1  2


# stack and unstack create a Series object in this case.

pd.DataFrame(dictionary_col2).stack()

0  MOB    1
   ASP    1
   YIP    1
1  MOB    2
   ASP    2
   YIP    2
dtype: int64


pd.DataFrame(dictionary_col2).unstack()

MOB  0    1
     1    2
ASP  0    1
     1    2
YIP  0    1
     1    2
dtype: int64

explode is the Series method that does a similar transformation:

pd.Series(dictionary_col2).explode()

MOB    1
MOB    2
ASP    1
ASP    2
YIP    1
YIP    2
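
A small follow-up sketch: the exploded Series can be turned into a tidy two-column dataframe like this.

tidy = pd.Series(dictionary_col2).explode().rename_axis("col").reset_index(name="value")
print(tidy)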



 

Check EC2 security group for open ports

Here is a Lambda function that will check if there is an open port in a given security group.
It will send a message to an SNS topic if 0.0.0.0 is found anywhere in that security group.


import boto3, json

def lambda_handler(event, context):
    ec2 = boto3.client('ec2', region_name='us-east-1')
    security_group = ec2.describe_security_groups(GroupIds=['sg-12345'])
    sns_client = boto3.client("sns", region_name="us-east-1")
    # walk every rule and every IP range instead of guessing the list length
    for permission in security_group['SecurityGroups'][0]['IpPermissions']:
        for ip_range in permission.get('IpRanges', []):
            for k, v in ip_range.items():
                if '0.0.0.0' in str(v):
                    print(k, v)
                    message = {"alert": "open port found"}
                    sns_client.publish(
                        TargetArn='arn:aws:sns:us-east-1:12345:NotifyMe',
                        Message=json.dumps({'default': json.dumps(message)}),
                        MessageStructure='json',
                    )



September 20, 2019

 

Using category encoders instead of One Hot

We use "One hot encoding" all the time. Right? It will convert the categorical values into new binary columns populated with sparse data. It means there will be a lot of 0 values and only one column will have "True" value of 1. Let's see an example.

Let us create a sample dataframe with X features and y as a target. load_boston is a function available in the sklearn.datasets module.

import pandas as pd
from sklearn.datasets import load_boston
bunch = load_boston()
y = bunch.target
X = pd.DataFrame(bunch.data, columns=bunch.feature_names)

The pandas function "get_dummies" makes it very easy to quickly transform the data.

ndf=X.join(pd.get_dummies(X['RAD'], prefix='RAD'))

CRIM ZN INDUS CHA ... RAD_1.0 RAD_2.0 RAD_3.0 RAD_4.0 RAD_5.0 RAD_6.0 RAD_7.0 RAD_8.0 RAD_24.0

As you can see, every unique value in the "RAD" column gets its own column. If pandas finds the value "24.0", it creates a new column with a default value of 0 and sets it to 1 only for the rows where that value occurs. This is really wonderful. The only problem is that if there are a thousand unique values in this column, we will have to deal with a thousand columns!

In order to reduce the number of columns, we will use the "category_encoders" module instead of "get_dummies", like this...

import category_encoders as ce
enc = ce.BinaryEncoder(cols=['RAD']).fit_transform(X)

CRIM ZN INDUS CHAS ... RAD_0 RAD_1 RAD_2 RAD_3 RAD_4

As you can see, there are now only 5 additional columns, and the 9 unique values of the original column are encoded across them. Unlike one hot encoding, where only 1 column is populated as "True", here 2 or 3 columns may have a "1" in them. This reduces the number of columns and allows us to use this kind of encoding even when there are too many unique values in a given column.
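
As a rough intuition only (not the exact formula used by the library): binary encoding needs on the order of log2(n) columns, where one hot encoding needs n. Here RAD has 9 unique values and BinaryEncoder produced 5 columns.

import math

for n in (9, 1000):
    print(n, "categories ->", math.ceil(math.log2(n)) + 1, "binary columns vs", n, "dummy columns")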



September 16, 2019

 

Secure FTP access to your S3 bucket in 4 easy steps

1) Visit SFTP transfer home page and create a new server:

https://console.aws.amazon.com/transfer/

Endpoint configuration: Public
Identity provider: Service managed

2) Create required role (my_sftp_role) and policy (my_sftp_policy) using this documentation:
https://docs.aws.amazon.com/transfer/latest/userguide/requirements-roles.html

3) Create required SSH keys using this guide:
https://docs.aws.amazon.com/transfer/latest/userguide/sshkeygen.html

4) Create a new user:
a) Provide a name like 'test'
b) Provide the access role (my_sftp_role) and select the policy (my_sftp_policy) that we created in step 2
c) Choose the S3 bucket that we mentioned in the policy as the home directory
d) Upload the SSH public key we created in step 3

You can now connect to your SFTP server using the private key that was created in step 3:

sftp -i /home/ec2-user/transfer-key test@s-XXX.server.transfer.us-east-1.amazonaws.com

https://docs.aws.amazon.com/transfer/latest/userguide/getting-started-use-the-service.html



September 10, 2019

 

import aws command line output to pandas

Here is how we can get the API Gateway report into a pandas dataframe. The first command downloads the API Gateway REST APIs in JSON format. The pandas module has a json_normalize function that converts the JSON data into a dataframe. json.load is used to read the document from the file.

# !aws --region=us-east-1 apigateway get-rest-apis > /tmp/to_file.json

import pandas as pd
import json
from pandas.io.json import json_normalize

with open("/tmp/to_file.json") as f:
    data = json.load(f)

df = json_normalize(data, "items")

df["createdDate"] = pd.to_datetime(df["createdDate"], unit="s").dt.date
df["type"] = df["endpointConfiguration.types"].str[0]



 

Cloudformation template of 3 lines

These 3 lines of CloudFormation code will create an SNS topic. Since the name is not defined in the template, a new name will be generated automatically.

Resources:
  MyTopic:
    Type: AWS::SNS::Topic
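
If you prefer to create the stack from Python instead of the console, here is a minimal boto3 sketch (the stack name and the template file name are placeholders):

import boto3

cf = boto3.client("cloudformation", region_name="us-east-1")
with open("sns-topic.yml") as f:          # placeholder file containing the template above
    template_body = f.read()

cf.create_stack(StackName="my-sns-topic-stack", TemplateBody=template_body)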

When you delete the stack, the topic is also removed. If you want to keep the resources even after the stack is deleted, update the stack with the following template...

Resources:
  MyTopic:
    Type: AWS::SNS::Topic
    DeletionPolicy: Retain

Once the stack is updated, your resources will not be removed even if you delete the stack that created them.



 

Using pre-trained resnet model after modifying layers

PyTorch is a package developed by Facebook that is widely used for computer vision and Natural Language Processing. The TorchVision package consists of popular datasets, model architectures, and common image transformations for computer vision. We use its "models" module and import a pre-trained model from the resnet family.


from torchvision import models
import torch
res_mod = models.resnet34(pretrained=True)

It is possible to print the layers that the pre-trained model is using. The input data passes through all of these layers before producing the final output that represents the predicted class.


for name, child in res_mod.named_children():
    print(name)

In some cases we may want to selectively unfreeze layers and have the gradients computed for just a few chosen layers.
For example, in this case layer3 and layer4 should be made available for training while reusing the rest of the layers as they are.


for name, child in res_mod.named_children():
    if name in ["layer3", "layer4"]:
        print(name + " has been unfrozen.")
        for param in child.parameters():
            param.requires_grad = True
    else:
        for param in child.parameters():
            param.requires_grad = False

The optimizer should be given only the parameters that still require gradients, so that training updates just the unfrozen layers.


optimizer_conv = torch.optim.SGD(
    filter(lambda x: x.requires_grad, res_mod.parameters()), lr=0.001, momentum=0.9
)
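
A quick way to verify that only the chosen layers will be trained is to count the trainable parameters:

# count how many parameters the optimizer will actually update
trainable = sum(p.numel() for p in res_mod.parameters() if p.requires_grad)
total = sum(p.numel() for p in res_mod.parameters())
print(f"training {trainable} of {total} parameters")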



 

Autonormalize using featuretools

This is what a typical pandas dataframe looks like. How do I know the relations between the columns? Is it possible to normalize the data into 2 or 3 tables?

import pandas as pd
rows = [['tigers', 'boston', 'MA', 20],
       ['elephants', 'chicago', 'IL', 21],
       ['foxes', 'miami', 'FL', 20],
       ['snakes', 'austin', 'TX', 20],
       ['dolphins', 'honolulu', 'HI', 19],
       ['eagles', 'houston', 'TX', 21]]
df = pd.DataFrame(rows, columns=['team', 'city', 'state', 'roster_size'])

These 2 lines of code will show the dependencies between the columns and build the relations automatically.

from featuretools.autonormalize import autonormalize as an
print (an.find_dependencies(df))

The featuretools module has an autonormalize package that will do all this and more!

https://github.com/FeatureLabs/autonormalize/blob/master/autonormalize/demos/Editing%20Dependnecies%20Demo.ipynb



September 09, 2019

 

Pandas dataframe to athena

Here are 5 steps to save your pandas dataframe to Athena table.

1) Create a sample dataframe.

from io import StringIO
import pandas as pd

u_cols = ["page_id", "web_id"]
audit_trail = StringIO(
    """
3|0
7|3
11|4
15|5
19|6
"""
)

df = pd.read_csv(audit_trail, sep="|", names=u_cols)

2) Convert all columns to string.

df = df.astype(str)

3) Create a new bucket. You can also use "aws s3 rb --force" to remove an existing bucket along with its contents before re-creating it.

!aws s3 mb s3://todel162/

4) Save the pandas dataframe as parquet files to S3

import awswrangler
session = awswrangler.Session()
session.pandas.to_parquet(dataframe=df, path="s3://todel162")

5) Login to console and create a new table in Athena.

CREATE EXTERNAL TABLE IF NOT EXISTS sampledb.todel5 (
   `page_id` string,
  `web_id` string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
WITH SERDEPROPERTIES (
  'serialization.format' = '1'
) LOCATION 's3://todel162/'
TBLPROPERTIES ('has_encrypted_data'='false');
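
As an optional sanity check, you can run a query against the new table from Python using boto3. This is just a sketch; the Athena result location below is a placeholder.

import boto3

athena = boto3.client("athena", region_name="us-east-1")
athena.start_query_execution(
    QueryString="SELECT count(*) FROM sampledb.todel5",
    QueryExecutionContext={"Database": "sampledb"},
    ResultConfiguration={"OutputLocation": "s3://todel162/athena-results/"},  # placeholder path
)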



 

Using pre-trained models

Here is how you can use a pre-trained model in just 5 or 6 lines of code using the imageAI library.

#!pip install -q opencv-python tensorflow keras imageAI

#!wget https://github.com/OlafenwaMoses/ImageAI/releases/download/1.0/yolo.h5

from imageai.Detection import ObjectDetection
detector = ObjectDetection()
detector.setModelTypeAsYOLOv3()
detector.setModelPath("yolo.h5")
detector.loadModel()
detector.detectObjectsFromImage(input_image="test.jpeg", output_image_path="out.jpeg")

#!wget https://github.com/OlafenwaMoses/ImageAI/releases/download/1.0/inception_v3_weights_tf_dim_ordering_tf_kernels.h5

from imageai.Prediction import ImagePrediction
predictor = ImagePrediction()
predictor.setModelTypeAsInceptionV3()
predictor.setModelPath("inception_v3_weights_tf_dim_ordering_tf_kernels.h5")
predictor.loadModel()
predictor.predictImage("test.jpeg")



September 08, 2019

 

Pandas transform inconsistent behavior for list

There is a serious bug in pandas aggregation when using the transform method.

df = pd.DataFrame(data={'label': ['a', 'b', 'b', 'c'], 'wave': [1, 2, 3, 4], 'y': [0,0,0,0]})

The following does not return a list as we would expect.

df['new'] = df.groupby(['label'])[['wave']].transform(list)

I can use tuple instead of list to get the correct results, but that is just a workaround. The bug is quite annoying because we do not know whether other functions also misbehave.
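
Here is a sketch of that workaround: transform with tuple, then convert the tuples back to lists.

# tuple behaves as expected; convert back to list afterwards if needed
df['new'] = df.groupby(['label'])[['wave']].transform(tuple)
df['new'] = df['new'].apply(list)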

https://stackoverflow.com/questions/57743798/pandas-transform-inconsistent-behavior-for-list



September 07, 2019

 

Docker security check

Running the security check on a docker server is easy.

git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh

You may get a few warnings like this...

[WARN] 1.2.4  - Ensure auditing is configured for Docker files and directories - /var/lib/docker

Open this file and add the docker file paths that should be audited. Do not forget to restart the audit daemon.

# vi  /etc/audit/audit.rules

-w /usr/bin/docker -p wa
-w /var/lib/docker -p wa
-w /etc/docker -p wa
-w /etc/default/docker -p wa
-w /etc/docker/daemon.json -p wa
-w /usr/bin/docker-containerd -p wa
-w /usr/bin/docker-runc -p wa
-w /etc/sysconfig/docker -p wa

# restart auditd service
_____

Another file to add for security hardening:

vi /etc/docker/daemon.json

{
    "icc": false,
    "log-driver": "syslog",
    "disable-legacy-registry": true,
    "live-restore": true,
    "userland-proxy": false,
    "no-new-privileges": true
}
_____

Add this environment variable:

export DOCKER_CONTENT_TRUST=1
echo "DOCKER_CONTENT_TRUST=1" | sudo tee -a /etc/environment

# restart docker


