Shantanu's Blog

Database Consultant

November 10, 2011

 

redis, who?

Redis is an open source, advanced key-value store. Basically, if you can map a use case onto Redis and you are not at risk of running out of RAM, there is a good chance Redis is the right tool for the job.

A few examples:

# You can sort by the creation timestamp if it is stored as epoch time.

> HMSET users:1 firstname 'john' lastname 'smith' created 1319729878
OK
> HMSET users:2 firstname 'Jane' lastname 'Forbes' created 1319729910
OK
> SADD users 1
(integer) 1
> SADD users 2
(integer) 1
> SORT users GET users:*->firstname BY users:*->created
1) "john"
2) "Jane"
> SORT users GET users:*->firstname BY users:*->created DESC
1) "Jane"
2) "john"
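
SORT also accepts a LIMIT clause, so for example only the most recently created user can be fetched:

> SORT users GET users:*->firstname BY users:*->created DESC LIMIT 0 1
1) "Jane"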
_____

To tag an item with several attributes, add it to a set keyed by every combination of those attributes. APPLE (RED ROUND FRUIT) would map to the following inserts:

SADD RED:ROUND:FRUIT APPLE
SADD :ROUND:FRUIT APPLE
SADD RED::FRUIT APPLE
SADD RED:ROUND: APPLE
SADD RED:: APPLE
SADD :ROUND: APPLE
SADD ::FRUIT APPLE
SADD :: APPLE
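
Any combination of attributes can then be looked up directly; for example, everything round regardless of color or type:

SMEMBERS :ROUND:
1) "APPLE"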

_____

redis 127.0.0.1:6379> SADD P1:YELLOW MANGO
(integer) 1
redis 127.0.0.1:6379> SADD P2:TASTE MANGO
(integer) 1
redis 127.0.0.1:6379> SADD P3:FRUIT MANGO
(integer) 1
redis 127.0.0.1:6379> SINTER P1:YELLOW P2:TASTE
1) "MANGO"
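
Intersections work across any number of these tag sets; since MANGO was added to all three, it is still the only member returned:

redis 127.0.0.1:6379> SINTER P1:YELLOW P2:TASTE P3:FRUIT
1) "MANGO"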

_____

redis 127.0.0.1:6379> HMSET id:4532143215432 username davejlong email dave@davejlong.com
OK
redis 127.0.0.1:6379> HMSET user:davejlong id 4532143215432 email dave@davejlong.com
OK
redis 127.0.0.1:6379> HGET id:4532143215432 username
"davejlong"
redis 127.0.0.1:6379> HGET user:davejlong id
"4532143215432"
redis 127.0.0.1:6379> HMGET user:davejlong email id
1) "dave@davejlong.com"
2) "4532143215432"
redis 127.0.0.1:6379> DEL user:davejlong
(integer) 1
redis 127.0.0.1:6379> DEL id:4532143215432
(integer) 1

_____


// set and expiry can be combined in a single command. The following key will expire after 48 hours (172800 seconds)
setex cb_num:r 172800 date_time
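
// TTL shows the remaining lifetime of the key in seconds
ttl cb_num:r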

// A hash can be assigned to a key. In the following example, we keep counts for a zone + customerID combination
hincrby r:1633:1634 "2012-03-15 12-55-00" 3
hincrby r:1633:1634 noads 5

src/redis-cli hgetall r:1633:1634
1) "2012-03-15 12-50-00"
2) "188"
3) "2012-03-15 12-55-00"
4) "200"

// sorted sets allow us to store and retrieve the data more efficiently

zadd RequestSet 20120315 r:zone:customerID

src/redis-cli ZRANGE RequestSet 0 1 WITHSCORES
1) "r:1008:0"
2) "20120310"
3) "r:1008:10422"
4) "20120310"
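
// members can also be fetched by score range, for example everything recorded on a given day:
src/redis-cli ZRANGEBYSCORE RequestSet 20120310 20120310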
_____

Redis can easily be used in a shell script or at the command prompt. It is useful for storing the standard output of other commands.

In the following example, you will get a newline (backslash+n) in the output as shown.

$ echo 'testme one more word new line' | ./src/redis-cli -x set mytest
OK
$ ./src/redis-cli get mytest
"testme one more word new line\n"

That is added by the "echo" command. Use echo -n to avoid the extra newline:
$ echo -n 'testme one more word new line' | ./src/redis-cli -x set mytest
_____

Use the -n argument to choose the database number.

[root@server]# echo -n "testing" | /path/to/redis/src/redis-cli -x -n 4 set my_pass > /dev/null 2>&1

[root@server]# /path/to/redis/src/redis-cli --raw -n 4 get my_pass
testing

_____

We can write a shell script to import data from MySQL, or use a tool like awk (see the one-liner further below).

#!/bin/sh
mysql pdb_name -Bse"select comments, concat(fee, status) as myfee from company limit 100000;" | while read -r account myfee
do
    echo -n "$myfee" | /home/redis-2.2.12/src/redis-cli -h 10.10.10.100 -p 6379 -x set "$account" > /dev/null 2>&1
done
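
Once the loop finishes, dbsize gives a quick check of how many keys were created:

/home/redis-2.2.12/src/redis-cli -h 10.10.10.100 -p 6379 dbsize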
_____

The same import can be done in one pass with awk, which pipes all the set commands through a single redis-cli process instead of starting one per row:

mysql pdb_name -Bse"select comments, concat(fee, status) as myfee from company limit 100000;" | awk "{print \"set \" \$1 \" \" \$2}" | /home/redis-2.2.12/src/redis-cli -h 10.10.10.100 -p 6379 > /dev/null 2>&1

_____

A similar one-liner can increment a counter for each add_id:nd_id pair; timing the import of the whole table:

# time mysql test -Bse"select concat(add_id,':', nd_id) from data_summary_hourly" | awk "{print \"incr \" \$1 }" | /home/shantanu/redis-2.4.8/src/redis-cli -h localhost -p 6379 > /dev/null 2>&1

real    23m0.207s
user    5m15.595s
sys    5m19.612s
_____

redisdump.sh (shown below) is a script that dumps all the data from a Redis instance.

time sh redisdump.sh | sed 'N;s/\n/ /' | sed 's/KEY //' | sed 's/string //' > todump.txt

This will not work if you have tens of millions of records, but it is useful for small databases.

#!/bin/sh
mypath="/home/shantanu/redis-2.4.8/src"

# list all keys, then look up the type of each key
$mypath/redis-cli keys "*" > keys.txt
cat keys.txt | awk '{ printf "type %s\n", $1 }' | $mypath/redis-cli > types.txt

# dump each key with the command appropriate for its type
paste -d'|' keys.txt types.txt | awk -F\| '
$2 == "string" { printf "echo \"KEY %s %s\"\nget %s\n", $1, $2, $1 }
$2 == "list" || $2 == "set" { printf "echo \"KEY %s %s\"\nsort %s by nosort\n", $1, $2, $1 }
$2 == "hash" { printf "echo \"KEY %s %s\"\nhgetall %s\n", $1, $2, $1 }
$2 == "zset" { printf "echo \"KEY %s %s\"\nzrange %s 0 -1 withscores\n", $1, $2, $1 }
' | $mypath/redis-cli --raw
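
To load such a dump back, a rough sketch like the one below pipes set commands into redis-cli (this assumes todump.txt holds only string keys and that the values contain no spaces):

awk '{ printf "set %s %s\n", $1, $2 }' todump.txt | /home/shantanu/redis-2.4.8/src/redis-cli > /dev/null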

_____

redis> select 3
All subsequent commands will then use database 3, until you issue another SELECT.

redis> flushdb
drops all the data in the currently selected database

redis> FLUSHALL
drops all the data in all databases

The number of databases is set in redis.conf; by default it is 16. Simply set it to a higher number if you need more:
databases 42

_____

// cd to the redis installation
cd /home/shantanu/redis-2.4.8

// disable automatic saving; you will need to manually save the data to disk:
# sed -i 's/^save/#save/' redis.conf

// save the server log to a file instead of standard out
# sed -i 's/stdout/redis.log/' redis.conf

// increase the number of databases from the default of 16
sed -i 's/databases [0-9]*/databases 32/' redis.conf

// save data to disk, check redis info
# time src/redis-cli bgsave
# src/redis-cli info

// If you get an overcommit warning in the redis log, change the memory setting as shown below:
# echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf
# sysctl vm.overcommit_memory=1

// start redis server with the new config file:
# src/redis-server redis.conf &

// make the current redis instance a slave of a master
// add the slaveof directive on the command line or in the redis config file
slaveof ec2-184-73-130-46.compute-1.amazonaws.com 6379
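
// confirm the replication role afterwards
src/redis-cli info | grep role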

// make sure to pass the config file when starting the redis server
/root/redis-2.6.12/src/redis-server /root/redis-2.6.12/redis.conf

// start redis on server startup
If there is an /etc/rc.local, you can add a '/path/to/redis-server' line there.

// redis benchmark
./src/redis-benchmark -n 100 -r 100 -q -h 11.22.33.44 -p 6379

// general log that saves all commands received by redis
nohup ./src/redis-cli monitor > /mnt/bigDisk/nohup.out &

// count the commands received per second (awk keeps only the integer part of the monitor timestamp)
time tail -100000 /mnt/bigDisk/nohup.out | awk -F'.' '{print $1}' | sort | uniq -c

// If you get an error message saying "Error allocating resources for the client",
you will have to increase the ulimit and also update the ae.h file (see the note at the end of this section):
ulimit -n 10240 ; /usr/local/bin/redis-server

// If you get an error "gcc: Command not found" then install the required package.
yum install gcc

// If you get an error "Newer version of jemalloc required" you need to run...
make distclean

In the file ae.h you have:
#define AE_SETSIZE (1024*10) /* Max number of fd supported */
You may want to increase this limit and recompile Redis.
_____

## Use docker to connect to any redis server:
docker run -it redis redis-cli -h myredis.synfmnx.0001.use1.cache.amazonaws.com
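
## a quick connectivity check; the server should answer PONG if it is reachable:
docker run -it redis redis-cli -h myredis.synfmnx.0001.use1.cache.amazonaws.com ping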

