Shantanu's Blog

Corporate Consultant

June 25, 2016

 

elasticsearch import using stream2es

Here are three simple steps to download JSON data from S3 and import it into Elasticsearch.

1) create a directory:

mkdir /my_node_apps
cd /my_node_apps

2) Download all compressed files from S3
# aws s3 cp --recursive s3://my_data/my_smpp/logs/node_apps/aug_2015/ .

3) Uncompress the files and import them in elasticsearch

## cat final.sh

#!/bin/bash
curl -O http://download.elasticsearch.org/stream2es/stream2es; chmod +x stream2es
indexname='smpaug2'
typename='smpaug2type'

for i in $(find /my_node_apps/aug_2015/ -name "*.gz")
do
  gunzip "$i"
  newname=$(echo "$i" | sed 's/\.gz$//')
  cat "$newname" | ./stream2es stdin --target "http://152.204.218.128:9200/$indexname/$typename/"
done



 

access a VPC instance from internet

If you have launched the instance in a custom (non-default) VPC, you need to attach an internet gateway so that the server can be accessed from the internet.

Attaching an Internet Gateway
1) In the navigation pane, choose Internet Gateways, and then choose Create Internet Gateway.
2) In the Create Internet Gateway dialog box, you can optionally name your internet gateway, and then choose Yes, Create.
3) Select the internet gateway that you just created, and then choose Attach to VPC.
4) In the Attach to VPC dialog box, select your VPC from the list, and then choose Yes, Attach.

To create a custom route table
1) In the navigation pane, choose Route Tables, and then choose Create Route Table.
2) In the Create Route Table dialog box, optionally name your route table, select your VPC, and then choose Yes, Create.
3) Select the custom route table that you just created. The details pane displays tabs for working with its routes, associations, and route propagation.
4) On the Routes tab, choose Edit, specify 0.0.0.0/0 in the Destination box, select the internet gateway ID in the Target list, and then choose Save.
5) On the Subnet Associations tab, choose Edit, select the Associate check box for the subnet, and then choose Save.

Security groups and Elastic IP addresses should be configured from the same VPC page.



 

aggregation queries

We are used to SQL GROUP BY queries like this one:

# select session_id, count(*) as cnt from table group by session_id order by cnt desc limit 1;

This can easily be rewritten as an Elasticsearch query as shown below:

POST /test_index/_bulk
{"index":{"_index":"test_index","_type":"doc","_id":1}}
{"session_id":1,"user_id":"jan"}
{"index":{"_index":"test_index","_type":"doc","_id":2}}
{"session_id":1,"user_id":"jan"}
{"index":{"_index":"test_index","_type":"doc","_id":3}}
{"session_id":1,"user_id":"jan"}
{"index":{"_index":"test_index","_type":"doc","_id":4}}
{"session_id":2,"user_id":"bob"}
{"index":{"_index":"test_index","_type":"doc","_id":5}}
{"session_id":2,"user_id":"bob"}

POST /test_index/_search?search_type=count
{
   "aggs": {
      "session_id": {
         "terms": {
            "field": "session_id",
            "order" : { "_count" : "desc" },
            "size": 1
         }
      }
   }
}
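To see why the aggregation mirrors the SQL, the same group-by can be sketched in plain Python over the five bulk documents above (a local simulation for illustration, not an Elasticsearch client call):

```python
from collections import Counter

# The five documents indexed by the bulk request above
docs = [
    {"session_id": 1, "user_id": "jan"},
    {"session_id": 1, "user_id": "jan"},
    {"session_id": 1, "user_id": "jan"},
    {"session_id": 2, "user_id": "bob"},
    {"session_id": 2, "user_id": "bob"},
]

# group by session_id, order by count desc, limit 1
counts = Counter(d["session_id"] for d in docs)
top_session, cnt = counts.most_common(1)[0]
print(top_session, cnt)  # session 1 occurs 3 times
```

The terms aggregation does exactly this bucketing: `size: 1` plays the role of `limit 1`, and ordering buckets by `_count` matches `order by cnt desc`.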

_____

# select ip, port, count(*) as cnt, sum(visits) from table group by ip,port

POST /test_index/_search?search_type=count
{
   "aggregations": {
      "ip": {
         "terms": {
            "field": "ip",
            "size": 10
         },
         "aggregations": {
            "port": {
               "terms": {
                  "field": "port",
                  "size": 0,
                  "order": {
                     "visits": "desc"
                  }
               },
               "aggregations": {
                  "visits": {
                     "sum": {
                        "field": "visits"
                     }
                  }
               }
            }
         }
      }
   }
}

# select ip, count(*) as cnt  from table where ip in ('146.233.189.126', '193.33.153.89') group by ip

POST /test_index/_search?search_type=count
{
   "aggregations": {
      "ip": {
         "terms": {
            "field": "ip",
            "size": 10,
            "include": [
               "146.233.189.126",
               "193.33.153.89"
            ]
         }
      }
   }
}

And here is some sample data to test the above queries:

POST /test_index/doc/_bulk
{"index":{"_id":1}}
{"ip":"146.233.189.126","port":80,"visits":10}
{"index":{"_id":2}}
{"ip":"146.233.189.126","port":8080,"visits":5}
{"index":{"_id":3}}
{"ip":"146.233.189.126","port":8080,"visits":15}
{"index":{"_id":4}}
{"ip":"200.221.51.224","port":80,"visits":10}
{"index":{"_id":5}}
{"ip":"193.33.153.89","port":80,"visits":10}
{"index":{"_id":6}}
{"ip":"193.33.153.89","port":80,"visits":20}
{"index":{"_id":7}}
{"ip":"193.33.153.89","port":80,"visits":30}
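As a sanity check, the results that the two aggregation queries above should produce on this sample data can be computed in plain Python (again a local simulation, not Elasticsearch itself):

```python
from collections import Counter, defaultdict

# The seven sample documents from the bulk request above
docs = [
    {"ip": "146.233.189.126", "port": 80,   "visits": 10},
    {"ip": "146.233.189.126", "port": 8080, "visits": 5},
    {"ip": "146.233.189.126", "port": 8080, "visits": 15},
    {"ip": "200.221.51.224",  "port": 80,   "visits": 10},
    {"ip": "193.33.153.89",   "port": 80,   "visits": 10},
    {"ip": "193.33.153.89",   "port": 80,   "visits": 20},
    {"ip": "193.33.153.89",   "port": 80,   "visits": 30},
]

# select ip, port, count(*) as cnt, sum(visits) from table group by ip, port
cnt = Counter((d["ip"], d["port"]) for d in docs)
visits = defaultdict(int)
for d in docs:
    visits[(d["ip"], d["port"])] += d["visits"]

# select ip, count(*) as cnt from table where ip in (...) group by ip
wanted = {"146.233.189.126", "193.33.153.89"}
ip_cnt = Counter(d["ip"] for d in docs if d["ip"] in wanted)
```

The nested `terms`/`sum` aggregations produce the same numbers as `cnt` and `visits` per (ip, port) bucket, and the `include` list in the last query plays the role of the SQL `where ip in (...)` filter.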

_____

Here is one more example of a group-by query.

DELETE /sport

PUT /sport

POST /sport/_bulk
{"index":{"_index":"sport","_type":"runner"}}
{"name":"Gary", "city":"New York","region":"A","sport":"Soccer"}
{"index":{"_index":"sport","_type":"runner"}}
{"name":"Bob", "city":"New York","region":"A","sport":"Tennis"}
{"index":{"_index":"sport","_type":"runner"}}
{"name":"Mike", "city":"Atlanta","region":"B","sport":"Soccer"}
{"index":{"_index":"sport","_type":"runner"}}
{"name":"Mike xyz", "city":"Atlanta","region":"B","sport":"Soccer"}

POST /sport/_search
{
   "size": 0,
   "aggregations": {
      "city_terms": {
         "terms": {
            "field": "city"
         },
         "aggregations": {
            "name_terms": {
               "terms": {
                  "field": "name"
               }
            }
         }
      }
   }
}

# select name, count(*) as cnt from table group by city, name
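The nested city/name aggregation corresponds to grouping on the (city, name) pair. In plain Python (a sketch that ignores the fact that an analyzed "name" field would itself be tokenized):

```python
from collections import Counter

# The four runner documents from the bulk request above
docs = [
    {"name": "Gary",     "city": "New York", "region": "A", "sport": "Soccer"},
    {"name": "Bob",      "city": "New York", "region": "A", "sport": "Tennis"},
    {"name": "Mike",     "city": "Atlanta",  "region": "B", "sport": "Soccer"},
    {"name": "Mike xyz", "city": "Atlanta",  "region": "B", "sport": "Soccer"},
]

# select name, count(*) as cnt from table group by city, name
pairs = Counter((d["city"], d["name"]) for d in docs)
for (city, name), c in sorted(pairs.items()):
    print(city, name, c)
```

Each outer "city_terms" bucket corresponds to one city, and its inner "name_terms" buckets correspond to the names within that city.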



 

save lowercase data into elasticsearch

Most of the searches we do are case insensitive. Elasticsearch's default analyzer indexes data in lowercase. But there are times when we do not want to use any analyzer and still want to store the data in lowercase. The best option is to avoid inserting capitalized data in the first place. Here is an example.

# delete the index if exists
DELETE /test_index

# insert the records, table structure will be created automatically
POST /test_index/doc/_bulk
{"index":{"_id":1}}
{"cities":["new york","delhi"]}
{"index":{"_id":2}}
{"cities":["new york","Delhi","new Jersey"]}

# query to show how each word is indexed
POST /test_index/_search?search_type=count
{
   "aggs": {
      "city_terms": {
         "terms": {
            "field": "cities"
         }
      }
   }
}

This will return
delhi 2
new 2
york 2
jersey 1
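Those numbers are document counts: each value is lowercased and split into tokens, and the terms aggregation counts how many documents contain each token. A quick Python simulation of that behavior (for illustration only, not Elasticsearch itself):

```python
from collections import Counter

# The two documents from the bulk request above
docs = [
    {"cities": ["new york", "delhi"]},
    {"cities": ["new york", "Delhi", "new Jersey"]},
]

# The analyzed field is lowercased and split on whitespace; the terms
# aggregation counts documents, so count each distinct token once per doc.
counts = Counter()
for doc in docs:
    tokens = {t for city in doc["cities"] for t in city.lower().split()}
    counts.update(tokens)

for token, c in counts.most_common():
    print(token, c)
```

Note that "new" gets a count of 2, not 3, even though it occurs three times in the second document: the aggregation reports how many documents contain the term.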
_____

All the text is split on whitespace and lowercased, and each token is indexed against the document identifier. But we want "new york" 2 and "new jersey" 1; the single token "new" does not mean anything on its own. Elasticsearch builds the table structure (the mapping) dynamically for you, and here it has decided that the "cities" column should be of "string" type.

GET /test_index/_mapping

{
   "test_index": {
      "mappings": {
         "doc": {
            "properties": {
               "cities": {
                  "type": "string"
               }
            }
         }
      }
   }
}

If we decide not to analyze the cities field, then each value in the list will be indexed as a single term.

# delete index
DELETE /test_index

PUT /test_index
{
   "mappings": {
      "doc": {
         "properties": {
            "cities": {
               "type": "string",
               "index": "not_analyzed"
            }
         }
      }
   }
}

If you run the same insert and select queries again, then you will get
new york 2
Delhi 1
delhi 1
new Jersey 1

As you must have noticed, we get separate entries for "Delhi" and "delhi" because of capitalization. To avoid this, use all lowercase letters while inserting the data, or use the following scripted aggregation:

POST /test_index/_search?search_type=count
{
    "aggs": {
        "city_terms": {
            "terms": {
                "script": "doc.cities.values.collect{it.toLowerCase()}"
            }
}}}

You should now get the correct results:
delhi 2
new york 2
new jersey 1

There are a few ways to lowercase all the fields before inserting them into Elasticsearch.
1) If you are using Elasticsearch version 5.0 or later, you can use the "Lowercase Processor" of an ingest pipeline to convert certain fields to lowercase while inserting the records into the database. I guess this is an important reason to upgrade, since we cannot rely on the data that we receive being clean, and lowercasing it in an external Python step adds complexity.
2) If you are using logstash, then use mutate filter...

filter {
  mutate {
    lowercase => [ "fieldname" ]
  }
}

3) Use the "lower" function of Python or any other scripting language before indexing.

You can use any of the three methods mentioned above; storing the data in lowercase will save a lot of confusion later.
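Along the lines of option 3, here is a minimal Python sketch that lowercases every string field (including lists of strings) before a document is sent to Elasticsearch. The function name and document shape are just for illustration:

```python
def lowercase_fields(doc):
    """Return a copy of doc with all string values lowercased."""
    out = {}
    for key, value in doc.items():
        if isinstance(value, str):
            out[key] = value.lower()
        elif isinstance(value, list):
            # Lowercase string items in lists, leave other types alone
            out[key] = [v.lower() if isinstance(v, str) else v for v in value]
        else:
            out[key] = value
    return out

doc = {"cities": ["new york", "Delhi", "new Jersey"], "visits": 5}
print(lowercase_fields(doc))
```

Run every document through a function like this before the bulk insert and the "Delhi"/"delhi" split shown earlier cannot happen.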



June 22, 2016

 

Deploying a registry server

# Start your registry:
docker run -d -p 5000:5000 --restart=always --name registry registry:2

#You can now use it with docker. Tag any image to point to your registry:
docker tag image_name localhost:5000/image_name

#then push it to your registry:
docker push localhost:5000/image_name

# pull it back from your registry:
docker pull localhost:5000/image_name

# push the registry container to Docker Hub: commit it to a new image,
# tag it with your Hub username ("myuser" is a placeholder), and push
docker stop registry
docker commit registry myuser/registry:backup
docker push myuser/registry:backup



