1. Redis keys (key)

  • keys *: list the keys in the current database
  • exists key: check whether the named key exists
  • move key db: move a key to another database (it disappears from the current one)
  • expire key seconds: set an expiration (time to live) on the given key
  • ttl key: seconds left before expiration; -1 means never expires, -2 means already expired
  • type key: show the type of the key
[cevent@hadoop213 ~]$ redis-cli -p 6379          start the client
127.0.0.1:6379> set k1 value1
OK
127.0.0.1:6379> set k2 value2
OK
127.0.0.1:6379> set k3 value3
OK
127.0.0.1:6379> exists k1                        does k1 exist? 1 = yes, 0 = no
(integer) 1
127.0.0.1:6379> exists k10
(integer) 0
127.0.0.1:6379> move k3 2                        move k3 to database 2
(integer) 1
127.0.0.1:6379> keys *
1) "k2"
2) "k1"
127.0.0.1:6379> select 2                         switch to database 2
OK
127.0.0.1:6379[2]> get k3                        fetch k3
"value3"
127.0.0.1:6379[2]> select 0                      switch back to database 0
OK
127.0.0.1:6379> ttl k1                           TTL of k1; -1 means never expires
(integer) -1
127.0.0.1:6379> expire k2 10                     give k2 a 10-second lifetime
(integer) 1
127.0.0.1:6379> get k2
"value2"
127.0.0.1:6379> ttl k2                           check the remaining TTL of k2
(integer) 4
127.0.0.1:6379> ttl k2                           -2 means k2 has already expired
(integer) -2
127.0.0.1:6379> keys *                           expired keys are removed from the keyspace
1) "k1"
127.0.0.1:6379> get k2
(nil)
127.0.0.1:6379> set k1 cevent                    set overwrites the existing value of k1
OK
127.0.0.1:6379> get k1
"cevent"
127.0.0.1:6379> lpush ceventlist 1 2 3 4 5       create a list
(integer) 5
127.0.0.1:6379> lrange ceventlist 0 -1           read a range of the list
1) "5"
2) "4"
3) "3"
4) "2"
5) "1"
127.0.0.1:6379> type ceventlist                  show the key's type
list
127.0.0.1:6379> keys *
1) "ceventlist"
2) "k1"

2. Redis strings (String)

127.0.0.1:6379> del ceventlist                   delete the list
(integer) 1
127.0.0.1:6379> keys *
1) "k1"
127.0.0.1:6379> get k1
"cevent"
127.0.0.1:6379> append k1 619                    append to the value of k1
(integer) 9
127.0.0.1:6379> get k1
"cevent619"
127.0.0.1:6379> strlen k1                        length of k1's value
(integer) 9
127.0.0.1:6379> set k2 6                         create k2
OK
127.0.0.1:6379> get k2
"6"
127.0.0.1:6379> incr k2                          increment k2 by 1
(integer) 7
127.0.0.1:6379> incr k2
(integer) 8
127.0.0.1:6379> incr k2
(integer) 9
127.0.0.1:6379> get k2
"9"
127.0.0.1:6379> decr k2                          decrement k2 by 1
(integer) 8
127.0.0.1:6379> get k2
"8"
127.0.0.1:6379> incrby k2 5                      increment k2 by a given integer
(integer) 13
127.0.0.1:6379> decrby k2 2                      decrement k2 by a given integer
(integer) 11
127.0.0.1:6379> decrby k2 2
(integer) 9
127.0.0.1:6379> set k3 value3
OK
127.0.0.1:6379> incr k3                          arithmetic only works on numeric values
(error) ERR value is not an integer or out of range
127.0.0.1:6379> get k1
"cevent619"
127.0.0.1:6379> getrange k1 0 -1                 get a range of k1 (0 = start, -1 = end)
"cevent619"
127.0.0.1:6379> getrange k1 0 2                  get the characters from index 0 through 2
"cev"
127.0.0.1:6379> setrange k1 0 eee                overwrite starting at the given index
(integer) 9
127.0.0.1:6379> get k1
"eeeent619"
127.0.0.1:6379> setex k4 10 v4                   create k4 with a 10-second lifetime
OK
127.0.0.1:6379> ttl k4                           check the TTL
(integer) 6
127.0.0.1:6379> get k4                           still retrievable
"v4"
127.0.0.1:6379> ttl k4                           ttl = -2 means it has expired
(integer) -2
127.0.0.1:6379> get k4                           the value is gone
(nil)
127.0.0.1:6379> setnx k1 val                     setnx only sets the key if it does not already exist
(integer) 0
127.0.0.1:6379> get k1
"eeeent619"
127.0.0.1:6379> setnx k11 val11                  k11 does not exist, so it is created
(integer) 1
127.0.0.1:6379> mset k1 value1 k2 v2 k3 v3       set several keys at once
OK
127.0.0.1:6379> mget k1 k2 k3                    get several keys at once
1) "value1"
2) "v2"
3) "v3"
127.0.0.1:6379> keys *
1) "k11"
2) "k2"
3) "k1"
4) "k3"
127.0.0.1:6379> get k1                           k1 was overwritten
"value1"
127.0.0.1:6379> msetnx k3 v3 k4 v4               if any of the keys already exists, nothing is set: k3 exists, so k4 is not created either
(integer) 0
127.0.0.1:6379> keys *
1) "k11"
2) "k2"
3) "k1"
4) "k3"
127.0.0.1:6379> get k3
"v3"
127.0.0.1:6379> mset k4 v4 k5 v5                 neither key existed yet, so both are created (mset always sets, unlike msetnx)
OK
127.0.0.1:6379> keys *
1) "k4"
2) "k5"
3) "k11"
4) "k2"
5) "k1"
6) "k3"

3. Redis lists (List)

[cevent@hadoop213 redis-3.0.4]$ ll
total 152
-rw-rw-r--.  1 cevent cevent 31391 Sep  8  2015 00-RELEASENOTES
-rw-rw-r--.  1 cevent cevent    53 Sep  8  2015 BUGS
-rw-rw-r--.  1 cevent cevent  1439 Sep  8  2015 CONTRIBUTING
-rw-rw-r--.  1 cevent cevent  1487 Sep  8  2015 COPYING
drwxrwxr-x.  6 cevent cevent  4096 Jul  1 17:51 deps
-rw-rw-r--.  1 cevent cevent    83 Jul  2 18:01 dump.rdb
-rw-rw-r--.  1 cevent cevent    11 Sep  8  2015 INSTALL
-rw-rw-r--.  1 cevent cevent   151 Sep  8  2015 Makefile
-rw-rw-r--.  1 cevent cevent  4223 Sep  8  2015 MANIFESTO
-rw-rw-r--.  1 cevent cevent  5201 Sep  8  2015 README
-rw-rw-r--.  1 cevent cevent 41404 Jul  1 21:20 redis.conf
-rwxrwxr-x.  1 cevent cevent   271 Sep  8  2015 runtest
-rwxrwxr-x.  1 cevent cevent   280 Sep  8  2015 runtest-cluster
-rwxrwxr-x.  1 cevent cevent   281 Sep  8  2015 runtest-sentinel
-rw-rw-r--.  1 cevent cevent  7109 Sep  8  2015 sentinel.conf
drwxrwxr-x.  2 cevent cevent  4096 Jul  1 17:52 src
drwxrwxr-x. 10 cevent cevent  4096 Sep  8  2015 tests
drwxrwxr-x.  5 cevent cevent  4096 Sep  8  2015 utils
[cevent@hadoop213 redis-3.0.4]$ redis-server redis.conf      start the Redis server
[cevent@hadoop213 redis-3.0.4]$ redis-cli -p 6379            start the client
127.0.0.1:6379> keys *
1) "k1"
2) "k3"
3) "k5"
4) "k11"
5) "k4"
6) "k2"
127.0.0.1:6379> lpush list01 1 2 3 4 5          (l = left: pushed in order, read back in reverse)
(integer) 5
127.0.0.1:6379> lrange list01 0 -1
1) "5"
2) "4"
3) "3"
4) "2"
5) "1"
127.0.0.1:6379> rpush list02 1 2 3 4 5          (r = right: pushed in order, read back in the same order)
(integer) 5
127.0.0.1:6379> lrange list02 0 -1
1) "1"
2) "2"
3) "3"
4) "4"
5) "5"
127.0.0.1:6379> lpop list01                     (pop from the head; for the lpush'd list the head is 5)
"5"
127.0.0.1:6379> rpop list01                     (pop from the tail)
"1"
127.0.0.1:6379> lpop list02                     (for the rpush'd list the head is 1)
"1"
127.0.0.1:6379> rpop list02
"5"
127.0.0.1:6379> lrange list01 0 -1              (remaining elements, in lpush order)
1) "4"
2) "3"
3) "2"
127.0.0.1:6379> lindex list01 0                 (l = left, index: element at the given index)
"4"
127.0.0.1:6379> lindex list02 0
"2"
127.0.0.1:6379> llen list01                     (length of list01)
(integer) 3
127.0.0.1:6379> rpush list03 1 1 1 2 2 2 2 4 4 4 4 5 5 5 5 6      insert values
(integer) 16
127.0.0.1:6379> lrem list03 2 4                 remove two occurrences of the value 4 (rem = remove)
(integer) 2
127.0.0.1:6379> lrange list03 0 -1              check the result
1) "1"
2) "1"
3) "1"
4) "2"
5) "2"
6) "2"
7) "2"
8) "4"
9) "4"
10) "5"
11) "5"
12) "5"
13) "5"
14) "6"
127.0.0.1:6379> lpush list04 1 2 3 4 5 6 7 8 8 9        create another list
(integer) 10
127.0.0.1:6379> lrange list04 0 -1              read it back
1) "9"
2) "8"
3) "8"
4) "7"
5) "6"
6) "5"
7) "4"
8) "3"
9) "2"
10) "1"
127.0.0.1:6379> ltrim list04 0 5                trim list04 to the index range 0-5 (inclusive)
OK
127.0.0.1:6379> lrange list04
(error) ERR wrong number of arguments for 'lrange' command
127.0.0.1:6379> lrange list04 0 -1
1) "9"
2) "8"
3) "8"
4) "7"
5) "6"
6) "5"
127.0.0.1:6379> lrange list02 0 -1
1) "2"
2) "3"
3) "4"
127.0.0.1:6379> rpoplpush list04 list02         rpop the tail of list04 (5) and lpush it onto the head of list02
"5"
127.0.0.1:6379> lrange list04 0 -1              (5 has been removed)
1) "9"
2) "8"
3) "8"
4) "7"
5) "6"
127.0.0.1:6379> lrange list02 0 -1              (5 is now at index 0)
1) "5"
2) "2"
3) "3"
4) "4"
127.0.0.1:6379> lset list04 1 cevent            overwrite the value at the given index
OK
127.0.0.1:6379> lrange list04 0 -1              index 1 is now cevent
1) "9"
2) "cevent"
3) "8"
4) "7"
5) "6"
127.0.0.1:6379> linsert list04 before cevent hadoop     insert before/after a given value
(integer) 6
127.0.0.1:6379> linsert list04 after cevent azkaban
(integer) 7
127.0.0.1:6379> lrange list04 0 -1
1) "9"
2) "hadoop"
3) "cevent"
4) "azkaban"
5) "8"
6) "7"
7) "6"

4. Redis sets (Set): one key, multiple values

4.1 sadd/smembers/sismember

127.0.0.1:6379> sadd set01 1 1 2 2 3 3          a set stores each value once; duplicates are ignored
(integer) 3
127.0.0.1:6379> smembers set01                  list the members of the set (s members)
1) "1"
2) "2"
3) "3"
127.0.0.1:6379> sismember set01 1               is 1 a member of the set? (s is member)
(integer) 1
127.0.0.1:6379> smembers set01
1) "1"
2) "2"
3) "3"
127.0.0.1:6379> sismember set01 ce              (0: not a member)
(integer) 0

4.2 scard: get the number of elements in the set

127.0.0.1:6379> scard set01                     count of members in the set
(integer) 3

4.3 srem key value: remove an element from the set

127.0.0.1:6379> srem set01 3                    remove the given value from the set (s remove)
(integer) 1
127.0.0.1:6379> smembers set01
1) "1"
2) "2"

4.4 srandmember key n: return n random members

  • Randomly returns the given number of members from the set (for example, 2).

  • If the count is larger than the set, all members are returned.

  • If the count is negative, e.g. -3, three members are returned but duplicates may appear (a sketch follows the transcript below).

127.0.0.1:6379> sadd set02 1 2 3 4 5 6 7 8      create a set
(integer) 8
127.0.0.1:6379> srandmember set02 3             pick 3 random members from the set
1) "4"
2) "8"
3) "2"
127.0.0.1:6379> srandmember set02 3
1) "1"
2) "4"
3) "5"
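A negative count is not shown in the transcript above; a minimal sketch of what it can return (the values below are illustrative only — with a negative count the same member may appear more than once):

127.0.0.1:6379> srandmember set02 -3            negative count: 3 values, repeats allowed
1) "5"
2) "2"
3) "5"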

4.5 spop key: pop a random member

127.0.0.1:6379> spop set02                      remove and return a random member
"3"
127.0.0.1:6379> spop set02
"2"
127.0.0.1:6379> smembers set02                  list the remaining members
1) "1"
2) "4"
3) "5"
4) "6"
5) "7"
6) "8"

4.6 smove key1 key2 value: move the given value from key1 into key2
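The transcript below assumes set03 was created beforehand with cevent among its members; a purely hypothetical setup might look like this (the second member is made up — the original session only shows a value beginning with "echo"):

127.0.0.1:6379> sadd set03 cevent echo
(integer) 2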

127.0.0.1:6379> smove set03 set02 cevent        move cevent from set03 into set02
(integer) 1
127.0.0.1:6379> smembers set03
1) "echo
127.0.0.1:6379> smembers set02
1) "1"
2) "4"
3) "8"
4) "7"
5) "6"
6) "cevent"
7) "5"
8) "2"

5. Set operations (mathematical sets)

5.1 Difference: sdiff — members of the first set that are not in any of the following sets

127.0.0.1:6379> sadd set04 1 2 3 4 5 6
(integer) 6
127.0.0.1:6379> sadd set05 1 4 ce vent
(integer) 4
127.0.0.1:6379> sdiff set04 set05               the first set is the reference: its members that also appear in the second set are excluded, the rest are returned; members found only in the second set are never returned
1) "2"
2) "3"
3) "5"
4) "6"

5.2 Intersection: sinter

127.0.0.1:6379> sinter set04 set05              intersection of the two sets
1) "1"
2) "4"

5.3 Union: sunion

127.0.0.1:6379> sunion set04 set05              union of the two sets, duplicates removed
1) "1"
2) "4"
3) "vent"
4) "ce"
5) "6"
6) "3"
7) "5"
8) "2"

6. Redis hashes (Hash): still key-value, but the value is itself a set of field-value pairs

6.1 hset/hget/hmset/hmget/hgetall/hdel

127.0.0.1:6379> hset user id 619                key: user; field-value pair: id 619
(integer) 1
127.0.0.1:6379> hget user id                    reading also requires the field name
"619"
127.0.0.1:6379> hset user name cevent
(integer) 1
127.0.0.1:6379> hget user name
"cevent"
127.0.0.1:6379> hmset customer id 66 name cevn age 30       set several fields at once (hmset)
OK
127.0.0.1:6379> hmget customer id name age      get several fields at once (hmget)
1) "66"
2) "cevn"
3) "30"
127.0.0.1:6379> hgetall customer                get all fields and values (hgetall)
1) "id"
2) "66"
3) "name"
4) "cevn"
5) "age"
6) "30"
127.0.0.1:6379> hdel user name                  delete a field (hdel)
(integer) 1
127.0.0.1:6379> hgetall user
1) "id"
2) "619"

6.2 hlen

127.0.0.1:6379> hlen user                       number of fields in the hash
(integer) 1
127.0.0.1:6379> hlen customer
(integer) 3

6.3 hexists key field: check whether the hash contains the given field

127.0.0.1:6379> hexists customer id             does the hash contain this field? 1 = yes, 0 = no
(integer) 1
127.0.0.1:6379> hexists customer email
(integer) 0

6.4 hkeys/hvals

127.0.0.1:6379> hkeys customer                  list all fields of the hash
1) "id"
2) "name"
3) "age"
127.0.0.1:6379> hvals customer                  list all values of the hash
1) "66"
2) "cevn"
3) "30"

6.5 hincrby/hincrbyfloat

127.0.0.1:6379> hincrby customer age 2          integer increment (hash increment by)
(integer) 32
127.0.0.1:6379> hincrby customer age 4
(integer) 36
127.0.0.1:6379> hset customer score 92.8        add a floating-point field
(integer) 1
127.0.0.1:6379> hincrbyfloat customer score 0.5      floating-point increment (hash increment by float)
"93.3"

6.6 hsetnx: set the field only if it does not already exist

127.0.0.1:6379> hsetnx customer age 26          the field already exists, so nothing is set
(integer) 0
127.0.0.1:6379> hsetnx customer email 1540001771@qq.com     the field does not exist, so it is created
(integer) 1
127.0.0.1:6379> hgetall customer
1) "id"
2) "66"
3) "name"
4) "cevn"
5) "age"
6) "36"
7) "score"
8) "93.3"
9) "email"
10) "1540001771@qq.com"
127.0.0.1:6379> hvals customer
1) "66"
2) "cevn"
3) "36"
4) "93.3"
5) "1540001771@qq.com"

7. Redis sorted sets Zset (sorted set)

7.1 zadd/zrange, withscores

127.0.0.1:6379> zadd zset01 60 v1 70 v2 80 v3 90 v4 100 v5      create a sorted set from score/member pairs
(integer) 5
127.0.0.1:6379> zrange zset01 0 -1              list the members
1) "v1"
2) "v2"
3) "v3"
4) "v4"
5) "v5"
127.0.0.1:6379> zrange zset01 0 -1 withscores   list the members together with their scores
1) "v1"
2) "60"
3) "v2"
4) "70"
5) "v3"
6) "80"
7) "v4"
8) "90"
9) "v5"
10) "100"

7.2 zrangebyscore key min max — a "(" before a score makes that bound exclusive; limit offset count restricts the returned rows

127.0.0.1:6379> zrangebyscore zset01 60 90      members with scores from 60 to 90 (both inclusive); z range by score
1) "v1"
2) "v2"
3) "v3"
4) "v4"
127.0.0.1:6379> zrangebyscore zset01 60 (90     "(" makes 90 exclusive
1) "v1"
2) "v2"
3) "v3"
127.0.0.1:6379> zrangebyscore zset01 (60 (90    both 60 and 90 exclusive
1) "v2"
2) "v3"
127.0.0.1:6379> zrangebyscore zset01 60 90 limit 2 2     limit offset count: skip 2 rows, return at most 2
1) "v3"
2) "v4"

7.3 zrem key member: remove an element

Removes elements. The format is zrem key member [member ...]; several members can be removed in one call (a sketch follows the transcript below).

127.0.0.1:6379> zrem zset01 v5                  remove the member v5 from the sorted set; its score goes with it
(integer) 1
127.0.0.1:6379> zrange zset01 0 -1 withscores   list the remaining members
1) "v1"
2) "60"
3) "v2"
4) "70"
5) "v3"
6) "80"
7) "v4"
8) "90"
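Several members can be removed in one call, as mentioned above; a minimal sketch (not part of the original session — it would shrink zset01 further):

127.0.0.1:6379> zrem zset01 v3 v4               remove two members at once; returns how many were removed
(integer) 2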

7.4 zcard / zcount key min max / zrank key member (get the index) / zscore key member (get the score)

zcard: number of elements in the sorted set

zcount: number of elements whose scores fall within a range; zcount key min max

zrank: index (rank) of a member within the sorted set

zscore: score of a given member

127.0.0.1:6379> zcard zset01                    number of elements in the sorted set
(integer) 4
127.0.0.1:6379> zcount zset01 60 80             number of elements with scores between 60 and 80
(integer) 3
127.0.0.1:6379> zrank zset01 v4                 index of v4 (positions are 0 1 2 3)
(integer) 3
127.0.0.1:6379> zscore zset01 v4                score of v4
"90"

7.5 zrevrank key member: get the index counting from the highest score

127.0.0.1:6379> zrevrank zset01 v4              index of the member counting from the high end (z reverse rank)
(integer) 0
127.0.0.1:6379> zrange zset01 0 -1              (v4 is last in ascending order, so its reverse rank is 0)
1) "v1"
2) "v2"
3) "v3"
4) "v4"

7.6 zrevrange

127.0.0.1:6379> zrevrange zset01 0 -1           list the members in descending score order
1) "v4"
2) "v3"
3) "v2"
4) "v1"

7.7 zrevrangebyscore key max min

127.0.0.1:6379> zrevrangebyscore zset01 90 60   fetch by score in descending order; the max score (90) comes first
1) "v4"
2) "v3"
3) "v2"
4) "v1"
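The exclusive "(" prefix and limit also work with zrevrangebyscore; a short sketch using the data above (90 inclusive down to 60 exclusive, first two rows, with scores):

127.0.0.1:6379> zrevrangebyscore zset01 90 (60 withscores limit 0 2
1) "v4"
2) "90"
3) "v3"
4) "80"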

1. Units

(1) Size units: the start of the file defines basic units of measure; only bytes are supported, not bits.

(2) Units are case-insensitive.

Open the file in vim:

Show line numbers: :set nu

Jump to the last line: Shift+G

Jump to the first line: gg

redis.conf (excerpts, with file line numbers shown):

1 # Redis configuration file example
2
3 # Note on units: when memory size is needed, it is possible to specify
4 # it in the usual form of 1k 5GB 4M and so forth:
5 #
6 # 1k => 1000 bytes
7 # 1kb => 1024 bytes       note the difference between 1k and 1kb
8 # 1m => 1000000 bytes
9 # 1mb => 1024*1024 bytes
10 # 1g => 1000000000 bytes
11 # 1gb => 1024*1024*1024 bytes
12 #
13 # units are case insensitive so 1GB 1Gb 1gB are all the same.     not case-sensitive: GB and gB mean the same thing

924 # By default "hz" is set to 10. Raising the value will use more CPU when
925 # Redis is idle, but at the same time will make Redis more responsive when
926 # there are many keys expiring at the same time, and timeouts may be
927 # handled with more precision.
928 #
929 # The range is between 1 and 500, however a value over 100 is usually not
930 # a good idea. Most users should use the default of 10 and raise this up to
931 # 100 only in environments where very low latency is required.
932 hz 10
933
934 # When a child rewrites the AOF file, if the following option is enabled
935 # the file will be fsync-ed every 32 MB of data generated. This is useful
936 # in order to commit the file to the disk more incrementally and avoid
937 # big latency spikes.
938 aof-rewrite-incremental-fsync yes

2. INCLUDES — similar to a Struts2 configuration, other files can be pulled in via include; redis.conf can act as the master file that includes the rest.

15 ################################## INCLUDES ###################################
16
17 # Include one or more other config files here.  This is useful if you
18 # have a standard template that goes to all Redis servers but also need
19 # to customize a few per-server settings. Include files can include
20 # other files, so use this wisely.
21 #
22 # Notice option "include" won't be rewritten by command "CONFIG REWRITE"
23 # from admin or Redis Sentinel. Since Redis always uses the last processed
24 # line as value of a configuration directive, you'd better put includes
25 # at the beginning of this file to avoid overwriting config change at runtime.
26 #
27 # If instead you are interested in using includes to override configuration
28 # options, it is better to use include as the last line.
29 #
30 # include /path/to/local.conf
31 # include /path/to/other.conf
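A minimal sketch of the per-server override pattern described above (the file name and values are hypothetical): keep the overrides in a separate file and include it as the last line of redis.conf, so its lines are processed last and win.

# /path/to/local.conf — hypothetical per-server overrides
port 6380
maxmemory 100mb

# last line of redis.conf:
include /path/to/local.conf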

3. GENERAL

33 ################################ GENERAL ######################################
34
35 # By default Redis does not run as a daemon. Use 'yes' if you need it.
36 # Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
37 daemonize yes       run Redis as a daemon
38
39 # When running daemonized, Redis writes a pid file in /var/run/redis.pid by
40 # default. You can specify a custom pid file location here.
41 pidfile /var/run/redis.pid      pid file; if none is specified, this default path is used
42
43 # Accept connections on the specified port, default is 6379.
44 # If port 0 is specified Redis will not listen on a TCP socket.
45 port 6379
46
47 # TCP listen() backlog.
48 #
49 # In high requests-per-second environments you need an high backlog in order     (high-concurrency environments)
50 # to avoid slow clients connections issues. Note that the Linux kernel
51 # will silently truncate it to the value of /proc/sys/net/core/somaxconn so
52 # make sure to raise both the value of somaxconn and tcp_max_syn_backlog
53 # in order to get the desired effect.
54 tcp-backlog 511     size of the connection backlog queue
55
56 # By default Redis listens for connections from all the network interfaces
57 # available on the server. It is possible to listen to just one or multiple
58 # interfaces using the "bind" configuration directive, followed by one or
59 # more IP addresses.
60 #
61 # Examples:
62 #
63 # bind 192.168.1.100 10.0.0.1       bind to specific addresses
64 # bind 127.0.0.1
65
66 # Specify the path for the Unix socket that will be used to listen for
67 # incoming connections. There is no default, so Redis will not listen
68 # on a unix socket when not specified.
69 #
70 # unixsocket /tmp/redis.sock
71 # unixsocketperm 700
72
73 # Close the connection after a client is idle for N seconds (0 to disable)
74 timeout 0           idle client connections are closed after this many seconds; 0 disables the timeout
75
76 # TCP keepalive.
77 #
78 # If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
79 # of communication. This is useful for two reasons:
80 #
81 # 1) Detect dead peers.
82 # 2) Take the connection alive from the point of view of network
83 #    equipment in the middle.
84 #
85 # On Linux, the specified value (in seconds) is the period used to send ACKs.
86 # Note that to close the connection the double of the time is needed.
87 # On other kernels the period depends on the kernel configuration.
88 #
89 # A reasonable value for this option is 60 seconds.
90 tcp-keepalive 0     0 means TCP keepalive probing is disabled
91
92 # Specify the server verbosity level.
93 # This can be one of:
94 # debug (a lot of information, useful for development/testing)
95 # verbose (many rarely useful info, but not a mess like the debug level)
96 # notice (moderately verbose, what you want in production probably)
97 # warning (only very important / critical messages are logged)
98 loglevel notice
99
100 # Specify the log file name. Also the empty string can be used to force
101 # Redis to log on the standard output. Note that if you use standard      by default Redis logs to standard output
102 # output for logging but daemonize, logs will be sent to /dev/null
103 logfile ""
104
105 # To enable logging to the system logger, just set 'syslog-enabled' to yes,
106 # and optionally update the other syslog parameters to suit your needs.
107 # syslog-enabled no
108
109 # Specify the syslog identity.
110 # syslog-ident redis        syslog identity
111
112 # Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.      syslog output facility, LOCAL0-7
113 # syslog-facility local0    local0 is used by default
114
115 # Set the number of databases. The default database is DB 0, you can select
116 # a different one on a per-connection basis using SELECT <dbid> where
117 # dbid is a number between 0 and 'databases'-1
118 databases 16
119

4. SNAPSHOTTING

120 ################################ SNAPSHOTTING  ################################     snapshot (data backup) settings
121 #
122 # Save the DB on disk:       the in-memory DB is persisted to disk as a snapshot
123 #
124 #   save <seconds> <changes>
125 #
126 #   Will save the DB if both the given number of seconds and the given
127 #   number of write operations against the DB occurred.
128 #
129 #   In the example below the behaviour will be to save:
130 #   after 900 sec (15 min) if at least 1 key changed        dump.rdb is written if at least 1 key changed within 900 s (15 min)
131 #   after 300 sec (5 min) if at least 10 keys changed       dump.rdb is written if at least 10 keys changed within 300 s (5 min)
132 #   after 60 sec if at least 10000 keys changed             dump.rdb is written if at least 10000 keys changed within 60 s
133 #
134 #   Note: you can disable saving completely by commenting out all "save" lines.     saving can be disabled
135 #
136 #   It is also possible to remove all the previously configured save
137 #   points by adding a save directive with a single empty string argument
138 #   like in the following example:
139 #
140 #   save ""       disables snapshotting
141
142 save 900 1        defaults
143 save 300 10
144 save 60 10000
145
146 # By default Redis will stop accepting writes if RDB snapshots are enabled
147 # (at least one save point) and the latest background save failed.
148 # This will make the user aware (in a hard way) that data is not persisting
149 # on disk properly, otherwise chances are that no one will notice and some
150 # disaster will happen.
151 #
152 # If the background saving process will start working again Redis will
153 # automatically allow writes again.
154 #
155 # However if you have setup your proper monitoring of the Redis server
156 # and persistence, you may want to disable this feature so that Redis will
157 # continue to work as usual even if there are problems with disk,
158 # permissions, and so forth.
159 stop-writes-on-bgsave-error yes      stop accepting writes when a background save fails
160
161 # Compress string objects using LZF when dump .rdb databases?     whether to compress the dump
162 # For default that's set to 'yes' as it's almost always a win.
163 # If you want to save some CPU in the saving child set it to 'no' but
164 # the dataset will likely be bigger if you have compressible values or keys.
165 rdbcompression yes
166
167 # Since version 5 of RDB a CRC64 checksum is placed at the end of the file.     CRC64 checksum validation
168 # This makes the format more resistant to corruption but there is a performance
169 # hit to pay (around 10%) when saving and loading RDB files, so you can disable it
170 # for maximum performances.
171 #
172 # RDB files created with checksum disabled have a checksum of zero that will
173 # tell the loading code to skip the check.
174 rdbchecksum yes
175
176 # The filename where to dump the DB
177 dbfilename dump.rdb       name of the RDB dump file
178
179 # The working directory.
180 #
181 # The DB will be written inside this directory, with the filename specified
182 # above using the 'dbfilename' configuration directive.
183 #
184 # The Append Only File will also be created inside this directory.
185 #
186 # Note that you must specify a directory here, not a file name.
187 dir ./       default working directory
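The save points can also be inspected and adjusted on a running instance with CONFIG GET/SET; a minimal sketch (assuming the defaults above are loaded):

127.0.0.1:6379> config get save
1) "save"
2) "900 1 300 10 60 10000"
127.0.0.1:6379> config set save "900 1 300 10"       keep only the first two save points
OK
127.0.0.1:6379> config set save ""                   disable RDB snapshots for this run
OK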

5. SECURITY: viewing, setting, and clearing the access password

[cevent@hadoop213 redis-3.0.4]$ redis-server redis.conf
[cevent@hadoop213 redis-3.0.4]$ redis-cli -p 6379
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> config get requirepass          show the current password
1) "requirepass"
2) ""                                           empty string by default
127.0.0.1:6379> config get dir                  show the working directory
1) "dir"
2) "/opt/module/redis-3.0.4"
127.0.0.1:6379> config set requirepass "cevent"      set a password
OK
127.0.0.1:6379> ping                            commands fail until the password is supplied
(error) NOAUTH Authentication required.
127.0.0.1:6379> auth cevent                     enter the password
OK
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> config set requirepass ""       clear the password again
OK
127.0.0.1:6379> ping
PONG
378 ################################## SECURITY ###################################
379
380 # Require clients to issue AUTH <PASSWORD> before processing any other      clients must AUTH <PASSWORD> before any other command is processed
381 # commands.  This might be useful in environments in which you do not trust
382 # others with access to the host running redis-server.
383 #
384 # This should stay commented out for backward compatibility and because most
385 # people do not need auth (e.g. they run their own servers).
386 #
387 # Warning: since Redis is pretty fast an outside user can try up to
388 # 150k passwords per second against a good box. This means that you should
389 # use a very strong password otherwise it will be very easy to break.
390 #
391 # requirepass foobared
392
393 # Command renaming.
394 #
395 # It is possible to change the name of dangerous commands in a shared
396 # environment. For instance the CONFIG command may be renamed into something
397 # hard to guess so that it will still be available for internal-use tools
398 # but not available for general clients.
399 #
400 # Example:
401 #
402 # rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
403 #
404 # It is also possible to completely kill a command by renaming it into
405 # an empty string:
406 #
407 # rename-command CONFIG ""
408 #
409 # Please note that changing the name of commands that are logged into the
410 # AOF file or transmitted to slaves may cause problems.
411

6. LIMITS: Redis cache eviction policies

412 ################################### LIMITS ####################################
413
414 # Set the max number of connected clients at the same time. By default
415 # this limit is set to 10000 clients, however if the Redis server is not
416 # able to configure the process file limit to allow for the specified limit
417 # the max number of allowed clients is set to the current file limit
418 # minus 32 (as Redis reserves a few file descriptors for internal uses).
419 #
420 # Once the limit is reached Redis will close all the new connections sending
421 # an error 'max number of clients reached'.
422 #
423 # maxclients 10000       default maximum of 10000 client connections
424
425 # Don't use more memory than the specified amount of bytes.
426 # When the memory limit is reached Redis will try to remove keys
427 # according to the eviction policy selected (see maxmemory-policy).
428 #
429 # If Redis can't remove keys according to the policy, or if the policy is
430 # set to 'noeviction', Redis will start to reply with errors to commands
431 # that would use more memory, like SET, LPUSH, and so on, and will continue
432 # to reply to read-only commands like GET.
433 #
434 # This option is usually useful when using Redis as an LRU cache, or to set
435 # a hard memory limit for an instance (using the 'noeviction' policy).
436 #
437 # WARNING: If you have slaves attached to an instance with maxmemory on,
438 # the size of the output buffers needed to feed the slaves are subtracted
439 # from the used memory count, so that network problems / resyncs will
440 # not trigger a loop where keys are evicted, and in turn the output
441 # buffer of slaves is full with DELs of keys evicted triggering the deletion
442 # of more keys, and so forth until the database is completely emptied.
443 #
444 # In short... if you have slaves attached it is suggested that you set a lower
445 # limit for maxmemory so that there is some free RAM on the system for slave
446 # output buffers (but this is not needed if the policy is 'noeviction').
447 #
448 # maxmemory <bytes>       memory cap that triggers eviction
449
450 # MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
451 # is reached. You can select among five behaviors:        when the limit is reached, one of the following eviction policies is applied
452 #
453 # volatile-lru -> remove the key with an expire set using an LRU algorithm
454 # allkeys-lru -> remove any key according to the LRU algorithm          LRU (least recently used): evict the keys used least recently
455 # volatile-random -> remove a random key with an expire set             evict random keys that have an expire set
456 # allkeys-random -> remove a random key, any key
457 # volatile-ttl -> remove the key with the nearest expire time (minor TTL)     evict the keys whose TTL (time to live) is closest to expiring
458 # noeviction -> don't expire at all, just return an error on write operations     never evict; writes get errors instead
459 #
460 # Note: with any of the above policies, Redis will return an error on write
461 #       operations, when there are no suitable keys for eviction.
462 #
463 #       At the date of writing these commands are: set setnx setex append
464 #       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
465 #       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
466 #       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
467 #       getset mset msetnx exec sort
468 #
469 # The default is:
470 #
471 # maxmemory-policy noeviction      the default is noeviction (in real deployments noeviction is usually not the policy you want)
472
473 # LRU and minimal TTL algorithms are not precise algorithms but approximated
474 # algorithms (in order to save memory), so you can tune it for speed or
475 # accuracy. For default Redis will check five keys and pick the one that was
476 # used less recently, you can change the sample size using the following
477 # configuration directive.
478 #
479 # The default of 5 produces good enough results. 10 Approximates very closely
480 # true LRU but costs a bit more CPU. 3 is very fast but not very accurate.
481 #
482 # maxmemory-samples 5      5 samples are checked by default
483
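A minimal sketch of trying a memory cap and eviction policy on a running instance (the values are illustrative; edit redis.conf or use CONFIG REWRITE to make them permanent):

127.0.0.1:6379> config set maxmemory 100mb                   cap memory use at roughly 100 MB
OK
127.0.0.1:6379> config set maxmemory-policy allkeys-lru      evict least-recently-used keys once the cap is reached
OK
127.0.0.1:6379> config get maxmemory-policy
1) "maxmemory-policy"
2) "allkeys-lru"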

7. Common redis.conf settings
