Redis

1.Software Roles

  • 1.Solving functional problems: Java, Tomcat, JDBC, Linux
  • 2.Solving extensibility problems: Spring, SpringMVC, MyBatis, Hibernate
  • 3.Solving performance problems: multithreading, Nginx, MQ, ElasticSearch, NoSQL

2.Introduction

  • Redis is a free, open-source, high-performance key-value NoSQL database released under the BSD license
  • NoSQL (Not Only SQL) is a general term for non-relational databases; a NoSQL store does not model business logic in its schema but stores data as simple key-value pairs

3.NoSQL Databases

1.Memcache

2.MongoDB

3.Cassandra

4.Characteristics

  • 1.Unstructured storage: data is stored as key-value pairs
  • 2.Weak transactions: no full ACID support, but eventual consistency can usually be guaranteed
  • 3.Persistence: data lives in memory but can be persisted to disk automatically
  • 4.High performance: very fast reads and writes
  • 5.Dedicated commands: SQL is not supported; each store has its own command set

5.Use Cases

  • 1.High-frequency, hot data, to reduce database I/O
  • 2.As a high-speed cache in front of a relational database
  • 3.Session sharing in a distributed architecture

When Not to Use

  • 1.Workloads that need transaction support
  • 2.SQL-based structured queries over complex relationships

6.Installation

1.Download the tar.gz package

2.Upload it to the Linux host

  • Upload it into /opt/redis

    [root@localhost opt]# mkdir redis
    [root@localhost opt]# cd redis
    [root@localhost redis]# ls
    redis-3.2.9.tar.gz
    

3.Install gcc

[root@localhost gcc]# yum install -y gcc

4.Extract, compile, and install

[root@localhost redis]# tar -xzvf redis-3.2.9.tar.gz
[root@localhost redis-3.2.9]# cd redis-3.2.9
[root@localhost redis-3.2.9]# make
[root@localhost redis-3.2.9]# make install

5.Back up the configuration file

[root@localhost redis-3.2.9]# mkdir -p /etc/redis
[root@localhost redis-3.2.9]# cp redis.conf /etc/redis

6.Start Redis

# Start the server
[root@localhost redis-3.2.9]# redis-server
# Start the server with an explicit configuration file
[root@localhost redis-3.2.9]# redis-server /etc/redis/redis.conf
# Start the client
[root@localhost redis-3.2.9]# redis-cli
# Shut down the Redis server
127.0.0.1:6379> shutdown
# Exit the client
127.0.0.1:6379> exit

7.Data Types

1.String

  • 1.One key maps to one value
  • 2.Strings are binary-safe: a Redis String can hold any data, e.g. a JPG image or a serialized object
  • 3.A single String value can be at most 512MB

1.Create

# 1.set key value (quotes around the value are optional and added automatically, unless it contains special characters, e.g. "zhang san")
127.0.0.1:6379> set name "李四"
OK
127.0.0.1:6379> set sex "男"
OK
127.0.0.1:6379> set age 18
OK

# 2.The four options NX, XX, EX, PX
127.0.0.1:6379> keys *
1) "aname"
2) "cname"
3) "bname"
127.0.0.1:6379> get aname
"lisi"
# NX: set the key-value pair only when the key does NOT already exist
127.0.0.1:6379> set aname wangwu NX
(nil)
127.0.0.1:6379> get aname
"lisi"
# XX: set the key-value pair only when the key already exists (mutually exclusive with NX)
127.0.0.1:6379> set aname wangwu XX
OK
127.0.0.1:6379> get aname
"wangwu"
# EX: expiry time of the key, in seconds
127.0.0.1:6379> set aname wangwu EX 50
OK
127.0.0.1:6379> ttl aname
(integer) 46
# PX: expiry time of the key, in milliseconds (mutually exclusive with EX)
127.0.0.1:6379> set aname wangwu PX 6000
OK
127.0.0.1:6379> ttl aname
(integer) 4

# 3.mset: set several key-value pairs at once
127.0.0.1:6379> mset bname test1 cname test2
OK
127.0.0.1:6379> mget bname cname
1) "test1"
2) "test2"

# 4.msetnx: set one or more key-value pairs, succeeding only if none of the given keys exist (atomic: if one fails, all fail)
127.0.0.1:6379> msetnx bname test1 dname test3
(integer) 0
127.0.0.1:6379> msetnx dname test3 ename test4
(integer) 1
127.0.0.1:6379> keys *
1) "cname"
2) "dname"
3) "bname"
4) "ename"

# 5.setex: set a value and its expiry time (in seconds) in one command
127.0.0.1:6379> setex fname 50 test5
OK
127.0.0.1:6379> ttl fname
(integer) 45

# 6.getset: swap in a new value and return the old one
127.0.0.1:6379> get bname
"test1"
127.0.0.1:6379> getset bname newtest
"test1"
127.0.0.1:6379> get bname
"newtest"
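The NX/XX behaviour above can be sketched without a live server. This is a toy in-memory model of SET's conditional options; `db` and `set_cmd` are hypothetical names, not part of any Redis client library.

```python
# Toy sketch of SET key value [NX|XX] semantics on a plain dict.
db = {}

def set_cmd(key, value, nx=False, xx=False):
    """Return 'OK' on success, None (shown as (nil) by redis-cli) otherwise."""
    if nx and key in db:
        return None  # NX: only set when the key does not exist yet
    if xx and key not in db:
        return None  # XX: only set when the key already exists
    db[key] = value
    return "OK"

print(set_cmd("aname", "lisi"))             # OK
print(set_cmd("aname", "wangwu", nx=True))  # None, i.e. (nil)
print(db["aname"])                          # lisi
print(set_cmd("aname", "wangwu", xx=True))  # OK
print(db["aname"])                          # wangwu
```

The sketch mirrors the transcript above: the NX write is rejected because `aname` already exists, while the XX write replaces it.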

2.Delete

# 1.del: delete the given key
127.0.0.1:6379> keys *
1) "name"
2) "age"
3) "sex"
127.0.0.1:6379> del sex
(integer) 1
127.0.0.1:6379> keys *
1) "name"
2) "age"

3.Update

# 1.set on an existing key overwrites the value by default
127.0.0.1:6379> get name
李四
127.0.0.1:6379> set name lisi
OK
127.0.0.1:6379> get name
"lisi"

# 2.setnx: set the value only when the key does not exist
127.0.0.1:6379> keys *
1) "cname"
2) "bname"
127.0.0.1:6379> setnx bname test
(integer) 0

# 3.incr/decr: increment/decrement the numeric value stored at key by 1 (numeric values only; if the key is missing, the new value is 1/-1)
# Because Redis is single-threaded, any operation done by a single command is atomic (a context switch can only happen between commands)
127.0.0.1:6379> get bname
"wangwuhello,test"
127.0.0.1:6379> get cname
"1810"
127.0.0.1:6379> incr bname
(error) ERR value is not an integer or out of range
127.0.0.1:6379> incr cname
(integer) 1811
127.0.0.1:6379> decr cname
(integer) 1810

# 4.incrby/decrby: increment/decrement the numeric value by a custom step (numeric values only; if the key is missing, the new value is +step/-step)
127.0.0.1:6379> incrby bname 2
(error) ERR value is not an integer or out of range
127.0.0.1:6379> incrby cname 2
(integer) 1812
127.0.0.1:6379> decrby bname 2
(error) ERR value is not an integer or out of range
127.0.0.1:6379> decrby cname 2
(integer) 1810

# 5.append: append the given value to the end of the current value (string concatenation, not arithmetic)
127.0.0.1:6379> get bname
"wangwu"
127.0.0.1:6379> append bname hello
(integer) 11
127.0.0.1:6379> set cname 18
OK
127.0.0.1:6379> get cname
"18"
127.0.0.1:6379> append cname 10
(integer) 4
127.0.0.1:6379> get cname
"1810"

# 6.setrange key offset value: overwrite part of the stored string starting at the given offset (0-based)
127.0.0.1:6379> get cname
"test2"
127.0.0.1:6379> setrange cname 1 cc
(integer) 5
127.0.0.1:6379> get cname
"tcct2"
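The INCR/INCRBY behaviour can be modelled the same way. This is a toy sketch on a dict — `incrby` is a hypothetical helper, and real Redis performs this atomically inside the single-threaded server.

```python
# Toy sketch of INCR/INCRBY/DECRBY semantics.
db = {"cname": "1810", "bname": "wangwuhello,test"}

def incrby(key, step=1):
    """The stored string must parse as an integer; a missing key counts as 0."""
    raw = db.get(key, "0")
    try:
        n = int(raw)
    except ValueError:
        raise ValueError("ERR value is not an integer or out of range")
    db[key] = str(n + step)
    return n + step

print(incrby("cname"))      # 1811
print(incrby("cname", -1))  # 1810 (DECR is just INCRBY with a negative step)
```

As in the transcript, incrementing `bname` (a non-numeric string) raises the "not an integer" error, while `cname` counts up and down normally.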

4.Read

# 1.List all keys
127.0.0.1:6379> keys *
1) "name"
2) "age"
3) "sex"

# 2.get: fetch the value of the given key
127.0.0.1:6379> get name
"\xe6\x9d\x8e\xe5\x9b\x9b"
127.0.0.1:6379> get sex
"\xe7\x94\xb7"
127.0.0.1:6379> get age
"18"

# 3.By default redis-cli escapes non-ASCII bytes; start the client with redis-cli --raw to see Chinese text as-is
[root@localhost redis-3.2.9]# redis-cli --raw
127.0.0.1:6379> get name
李四

# 4.Reading a non-existent key returns nil
[root@localhost redis-3.2.9]# redis-cli
127.0.0.1:6379> get test
(nil)
127.0.0.1:6379> get cname
"1810"

# 5.strlen: length of the value stored at the key
127.0.0.1:6379> strlen bname
(integer) 5

# 6.getrange: return the substring within the given index range (inclusive on both ends, 0-based; out-of-range indexes do not raise an error)
127.0.0.1:6379> getrange bname 0 3
"test"
127.0.0.1:6379> getrange bname 0 4
"test1"
127.0.0.1:6379> getrange bname 0 5
"test1"
127.0.0.1:6379> getrange bname 1 4
"est1"

5.Underlying Implementation

  • 1.String is backed by a Simple Dynamic String (SDS)
  • 2.SDS is a mutable string whose internal structure resembles Java's ArrayList
  • 3.It preallocates extra space to reduce frequent reallocations
  • 4.The allocated capacity is generally larger than the actual string length
  • 5.While the string is shorter than 1MB, growing doubles the current space; above 1MB, each growth adds only 1MB more
  • 6.Note: the maximum string length is 512MB
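The growth rule in point 5 can be written down directly. This is a sketch only — `sds_alloc` is a hypothetical helper, loosely modelled on the preallocation logic in the Redis source, not its actual API.

```python
# Sketch of the SDS preallocation rule: double below 1MB, else add a flat 1MB.
SDS_MAX_PREALLOC = 1024 * 1024  # 1MB threshold

def sds_alloc(newlen):
    """Capacity reserved for a string that has grown to newlen bytes."""
    if newlen < SDS_MAX_PREALLOC:
        return newlen * 2
    return newlen + SDS_MAX_PREALLOC

print(sds_alloc(100))              # 200
print(sds_alloc(2 * 1024 * 1024))  # 3145728, i.e. 2MB + 1MB headroom
```

The doubling amortizes reallocation cost for small strings, while the flat 1MB cap keeps waste bounded for large ones.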

2.List

  • 1.A single key maps to multiple values stored as a simple list of strings
  • 2.Elements keep insertion order; you can push to the head (left) or tail (right) of the list

1.Create

# lpush/rpush: push one or more values from the left/right (note: lpush inserts at the head, so retrieval order is the reverse of insertion order; rpush inserts at the tail, so retrieval order matches insertion order)
127.0.0.1:6379> lpush aname l1 l2 l3 l4
(integer) 4
127.0.0.1:6379> lrange aname 0 3
1) "l4"
2) "l3"
3) "l2"
4) "l1"
127.0.0.1:6379> rpush bname r1 r2 r3 r4
(integer) 4
127.0.0.1:6379> lrange bname 0 3
1) "r1"
2) "r2"
3) "r3"
4) "r4"
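The head-vs-tail ordering above can be mirrored with Python's `deque`, without a live Redis connection:

```python
# LPUSH vs RPUSH ordering, sketched with collections.deque.
from collections import deque

items = deque()
for v in ["l1", "l2", "l3", "l4"]:
    items.appendleft(v)  # LPUSH: each value lands at the head, reversing the order
print(list(items))       # ['l4', 'l3', 'l2', 'l1']

items = deque()
for v in ["r1", "r2", "r3", "r4"]:
    items.append(v)      # RPUSH: each value lands at the tail, keeping the order
print(list(items))       # ['r1', 'r2', 'r3', 'r4']
```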

2.Delete

# 1.lpop/rpop: pop one value from the left (head)/right (tail); once the last value is popped, the key itself is gone
127.0.0.1:6379> lrange aname 0 -1
1) "l4"
2) "l3"
3) "l2"
4) "l1"
127.0.0.1:6379> lpop aname
"l4"
127.0.0.1:6379> rpop aname
"l1"

# 2.lrem key n value: remove n occurrences of the given value, scanning from the left
127.0.0.1:6379> lrange aname 0 -1
1) "l4"
2) "after4"
3) "after4"
4) "l3"
5) "before2"
6) "l2"
7) "l1"
127.0.0.1:6379> lrem aname 1 after4
(integer) 1
127.0.0.1:6379> lrange aname 0 -1
1) "l4"
2) "after4"
3) "l3"
4) "before2"
5) "l2"
6) "l1"

3.Update

# 1.rpoplpush src dst: pop a value from the right of list src and push it onto the left of list dst
127.0.0.1:6379> lrange aname 0 -1
1) "l4"
2) "l3"
3) "l2"
4) "l1"
127.0.0.1:6379> lrange bname 0 -1
1) "r1"
2) "r2"
3) "r3"
4) "r4"
127.0.0.1:6379> rpoplpush aname bname
"l1"
127.0.0.1:6379> lrange aname 0 -1
1) "l4"
2) "l3"
3) "l2"
127.0.0.1:6379> lrange bname 0 -1
1) "l1"
2) "r1"
3) "r2"
4) "r3"
5) "r4"

# 2.linsert key before/after pivot value: insert a new value before/after the given pivot value
127.0.0.1:6379> linsert aname before l2 before2
(integer) 5
127.0.0.1:6379> lrange aname 0 -1
1) "l4"
2) "l3"
3) "before2"
4) "l2"
5) "l1"
127.0.0.1:6379> linsert aname after l4 after4
(integer) 6
127.0.0.1:6379> lrange aname 0 -1
1) "l4"
2) "after4"
3) "l3"
4) "before2"
5) "l2"
6) "l1"

# 3.lset key index value: replace the element at the given index with the new value
127.0.0.1:6379> lrange aname 0 -1
1) "l4"
2) "after4"
3) "l3"
4) "before2"
5) "l2"
6) "l1"
127.0.0.1:6379> lset aname 1 new4
OK
127.0.0.1:6379> lrange aname 0 -1
1) "l4"
2) "new4"
3) "l3"
4) "before2"
5) "l2"
6) "l1"

4.Read

# 1.lrange: fetch elements by index range (left to right); 0 is the first element from the left, -1 the first from the right (0 to -1 returns everything; out-of-range indexes do not raise an error)
127.0.0.1:6379> lpush aname l1 l2 l3 l4
(integer) 4
127.0.0.1:6379> lrange aname 0 -1
1) "l4"
2) "l3"
3) "l2"
4) "l1"
127.0.0.1:6379> lrange aname 0 0
1) "l4"
127.0.0.1:6379> lrange aname 0 1
1) "l4"
2) "l3"
127.0.0.1:6379> lrange aname 0 2
1) "l4"
2) "l3"
3) "l2"
127.0.0.1:6379> lrange aname 0 3
1) "l4"
2) "l3"
3) "l2"
4) "l1"
127.0.0.1:6379> lrange aname 0 4
1) "l4"
2) "l3"
3) "l2"
4) "l1"

# 2.lindex: fetch one element by index (left to right, 0-based)
127.0.0.1:6379> lindex aname 0
"l4"
127.0.0.1:6379> lindex aname 1
"l3"
127.0.0.1:6379> lindex aname 2
"l2"
127.0.0.1:6379> lindex aname 3
"l1"
127.0.0.1:6379> lindex aname 4
(nil)

# 3.llen: length of the given list
127.0.0.1:6379> llen aname
(integer) 4

5.Underlying Implementation

  • 1.List is backed by a quicklist
  • 2.With few elements, a single block of contiguous memory is used: a ziplist (all elements stored back-to-back in one contiguous allocation)
  • 3.With many elements, Redis switches to the quicklist: a doubly linked list whose nodes are ziplists (multiple ziplists chained together with prev/next pointers)
  • 4.This combination gives fast insertion and deletion without too much wasted space
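The quicklist idea can be made concrete with a toy model: a chain of small fixed-capacity chunks standing in for ziplist nodes. The class name and chunk size are illustrative only, not how the C implementation is organized.

```python
# Toy model of a quicklist: a sequence of small "ziplist" chunks.
class QuicklistSketch:
    def __init__(self, chunk_size=4):
        self.chunk_size = chunk_size
        self.chunks = []  # each inner list plays the role of one ziplist node

    def rpush(self, value):
        # open a new chunk when the tail chunk is full (or none exists yet)
        if not self.chunks or len(self.chunks[-1]) == self.chunk_size:
            self.chunks.append([])
        self.chunks[-1].append(value)

    def to_list(self):
        return [v for chunk in self.chunks for v in chunk]

ql = QuicklistSketch()
for i in range(10):
    ql.rpush(f"v{i}")
print(len(ql.chunks))    # 3 chunks: 4 + 4 + 2 elements
print(ql.to_list()[:3])  # ['v0', 'v1', 'v2']
```

Inserting into one small chunk only shifts a handful of elements, while the chunk chain avoids allocating one big contiguous block — the trade-off the bullet points describe.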

3.Set

  • 1.Redis Set is similar to List, but a Set deduplicates automatically
  • 2.A Set can test whether it contains a given member (List cannot)
  • 3.A Set is an unordered collection of Strings, implemented underneath as a hash table whose values are all empty, so add, remove, and lookup are all O(1)

1.Create

# sadd: add one or more members to the set stored at key; members that already exist are ignored
127.0.0.1:6379> sadd aname s1 s2 s3 s3 s4
(integer) 4
127.0.0.1:6379> smembers aname
1) "s2"
2) "s3"
3) "s1"
4) "s4"

2.Delete

# 1.srem: remove one or more members from the set
127.0.0.1:6379> srem aname s1
(integer) 1
127.0.0.1:6379> smembers aname
1) "s2"
2) "s3"
3) "s4"

# 2.spop: pop a random member from the set
127.0.0.1:6379> spop aname
"s2"
127.0.0.1:6379> smembers aname
1) "s3"
2) "s4"

3.Update

# smove src dst member: move a member from one set to another
127.0.0.1:6379> smembers aname
1) "s3"
2) "s4"
127.0.0.1:6379> smembers bname
1) "10"
2) "20"
3) "30"
4) "40"
127.0.0.1:6379> smove aname bname s3
(integer) 1
127.0.0.1:6379> smembers aname
1) "s4"
127.0.0.1:6379> smembers bname
1) "40"
2) "20"
3) "30"
4) "10"
5) "s3"

4.Read

# 1.smembers: return all members of the set
127.0.0.1:6379> sadd aname s1 s2 s3 s3 s4
(integer) 4
127.0.0.1:6379> smembers aname
1) "s2"
2) "s3"
3) "s1"
4) "s4"

# 2.sismember: test whether the set contains the value; 1 if present, 0 if not
127.0.0.1:6379> sismember aname s3
(integer) 1
127.0.0.1:6379> sismember aname s5
(integer) 0

# 3.scard: return the number of members in the set
127.0.0.1:6379> scard aname
(integer) 4

# 4.srandmember: return n random members of the set without removing them
127.0.0.1:6379> srandmember aname 1
1) "s4"
127.0.0.1:6379> smembers aname
1) "s3"
2) "s4"

5.Set Operations

127.0.0.1:6379> sadd aname a1 a2 a3 a4 b1 b2
(integer) 6
127.0.0.1:6379> sadd bname b1 b2 b3 b4 a1 a2
(integer) 6
# 1.sinter: intersection of the two sets
127.0.0.1:6379> sinter aname bname
1) "a2"
2) "a1"
3) "b1"
4) "b2"
# 2.sunion: union of the two sets
127.0.0.1:6379> sunion aname bname
1) "a2"
2) "b4"
3) "a1"
4) "a3"
5) "a4"
6) "b1"
7) "b3"
8) "b2"
# 3.sdiff: difference of the two sets (members of key1 not in key2)
127.0.0.1:6379> sdiff aname bname
1) "a4"
2) "a3"
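The three operations above map directly onto Python's built-in set operators (Redis returns members in arbitrary order; sorting here is just for readability):

```python
# SINTER/SUNION/SDIFF mirrored with Python sets.
aname = {"a1", "a2", "a3", "a4", "b1", "b2"}
bname = {"b1", "b2", "b3", "b4", "a1", "a2"}

print(sorted(aname & bname))  # SINTER -> ['a1', 'a2', 'b1', 'b2']
print(sorted(aname | bname))  # SUNION -> all eight members
print(sorted(aname - bname))  # SDIFF  -> ['a3', 'a4']
```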

6.Underlying Implementation

  • 1.Set is backed by a dict, which is implemented as a hash table
  • 2.Like Java's HashSet, it uses a hash structure internally where every value points to the same shared internal object

4.Hash

  • 1.A Redis Hash is a collection of field-value pairs
  • 2.Its value is a mapping table of String fields and values, similar to Java's Map<String,Object>

1.Create

# 1.hset: set the value of a field in the given hash
127.0.0.1:6379> hset user1 name 李四
(integer) 1
127.0.0.1:6379> hset user1 sex 男
(integer) 1
127.0.0.1:6379> hset user1 age 18
(integer) 1
127.0.0.1:6379> hset user2 name 王五
(integer) 1
127.0.0.1:6379> hset user2 sex 男
(integer) 1
127.0.0.1:6379> hset user2 age 20
(integer) 1
127.0.0.1:6379> keys *
1) "user2"
2) "user1"
127.0.0.1:6379> type user1
hash

# 2.hmset: set several fields of the given hash at once
127.0.0.1:6379> hmset user3 name qiqi sex girl age 18
OK
127.0.0.1:6379> keys *
user2
user3
user1

# 3.hsetnx: set the hash field to the value only when the field does not yet exist
127.0.0.1:6379> hsetnx user3 age 20
0
127.0.0.1:6379> hsetnx user3 height 175
1
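HSET/HSETNX semantics can be sketched with a dict of dicts; `db`, `hset`, and `hsetnx` here are hypothetical names, not a Redis client API.

```python
# Toy sketch of HSET/HSETNX on nested dicts.
db = {}

def hset(key, field, value):
    """Return 1 if the field was newly created, 0 if it was overwritten."""
    h = db.setdefault(key, {})
    added = field not in h
    h[field] = value
    return 1 if added else 0

def hsetnx(key, field, value):
    """Set the field only when it does not exist yet."""
    h = db.setdefault(key, {})
    if field in h:
        return 0
    h[field] = value
    return 1

print(hset("user3", "age", 18))        # 1
print(hsetnx("user3", "age", 20))      # 0, field already exists
print(hsetnx("user3", "height", 175))  # 1
print(db["user3"])                     # {'age': 18, 'height': 175}
```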

2.Delete

# hdel: delete fields from the given hash; once all fields are deleted, the key itself is gone
127.0.0.1:6379> hkeys user3
name
sex
age
height
127.0.0.1:6379> hdel user3 name
1
127.0.0.1:6379> hdel user3 sex
1
127.0.0.1:6379> hdel user3 age
1
127.0.0.1:6379> hdel user3 height
1
127.0.0.1:6379> keys *
user2
user1

3.Update

# hincrby: add an increment to the numeric value of a hash field
127.0.0.1:6379> hincrby user3 age 2
20
127.0.0.1:6379> hvals user3
qiqi
girl
20
127.0.0.1:6379> hincrby user3 age -2
18
127.0.0.1:6379> hvals user3
qiqi
girl
18

4.Read

# 1.hget: fetch the value of a field from the given hash
[root@localhost redis]# redis-cli --raw
127.0.0.1:6379> hget user1 name
李四
127.0.0.1:6379> hget user1 sex
男
127.0.0.1:6379> hget user1 age
18
127.0.0.1:6379> hget user2 name
王五
127.0.0.1:6379> hget user2 sex
男
127.0.0.1:6379> hget user2 age
20

# 2.hexists: test whether the field exists in the hash; 1 if it does, 0 if not
127.0.0.1:6379> hexists user3 sex
1
127.0.0.1:6379> hexists user3 height
0

# 3.hkeys: list all fields of the hash
127.0.0.1:6379> hkeys user3
name
sex
age

# 4.hvals: list all field values of the hash
127.0.0.1:6379> hvals user3
qiqi
girl
18

5.Underlying Implementation

  • 1.Hash uses one of two data structures: ziplist or hashtable
  • 2.When the field-value pairs are short and few, a ziplist is used; otherwise a hashtable

5.ZSet (sorted set)

  • 1.A sorted set (zset) is like a plain set: a collection of strings with no duplicate members
  • 2.Unlike a set, every member of a zset is associated with a score, and members are ordered from the lowest score to the highest
  • 3.Members are unique, but scores may repeat

1.Create

# zadd: add one or more members with their scores to the sorted set (the score comes before the member)
127.0.0.1:6379> zadd name 20 lisi 60 wangwu 100 zhaoliu 45 zhangsan
4
127.0.0.1:6379> zrange name 0 -1
lisi
zhangsan
wangwu
zhaoliu
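The score-based ordering that ZRANGE shows can be sketched with an ordinary dict sorted by `(score, member)` — ties on score fall back to member order, as in Redis. The helper names are hypothetical.

```python
# Toy sketch of ZADD/ZRANGE ordering.
scores = {}

def zadd(pairs):
    scores.update(pairs)

def zrange(start, stop):
    ordered = [m for m, _ in sorted(scores.items(), key=lambda kv: (kv[1], kv[0]))]
    stop = len(ordered) if stop == -1 else stop + 1
    return ordered[start:stop]

zadd({"lisi": 20, "wangwu": 60, "zhaoliu": 100, "zhangsan": 45})
print(zrange(0, -1))  # ['lisi', 'zhangsan', 'wangwu', 'zhaoliu']
print(zrange(0, 1))   # ['lisi', 'zhangsan']
```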

2.Delete

# zrem: remove the given member from the sorted set
127.0.0.1:6379> zrem name lisi
1
127.0.0.1:6379> zrange name 0 -1 withscores
zhangsan
45
wangwu
75
zhaoliu
100

3.Update

# zincrby: add an increment to a member's score
127.0.0.1:6379> zincrby name 5 lisi
25
127.0.0.1:6379> zincrby name 15 wangwu
75

4.Read

# 1.zrange: return members with index between <start> and <stop> (with WITHSCORES, the scores are returned alongside the members)
127.0.0.1:6379> zrange name 0 -1
lisi
zhangsan
wangwu
zhaoliu
127.0.0.1:6379> zrange name 0 -1 withscores
lisi
20
zhangsan
45
wangwu
60
zhaoliu
100
127.0.0.1:6379> zrange name 0 0 withscores
lisi
20
127.0.0.1:6379> zrange name 0 1 withscores
lisi
20
zhangsan
45
127.0.0.1:6379> zrange name 0 2 withscores
lisi
20
zhangsan
45
wangwu
60
127.0.0.1:6379> zrange name 0 3 withscores
lisi
20
zhangsan
45
wangwu
60
zhaoliu
100
127.0.0.1:6379> zrange name 0 4 withscores
lisi
20
zhangsan
45
wangwu
60
zhaoliu
100

# 2.zrangebyscore: return all members whose score is between min and max (inclusive), ordered by ascending score
127.0.0.1:6379> zrangebyscore  name 30 90 withscores
zhangsan
45
wangwu
60

# 3.zrevrangebyscore: return all members whose score is between max and min (inclusive), ordered by descending score
127.0.0.1:6379> zrevrangebyscore  name 90 30 withscores
wangwu
60
zhangsan
45
127.0.0.1:6379> zrevrangebyscore  name 90 30 withscores limit 0 2
wangwu
75
zhangsan
45
127.0.0.1:6379> zrevrangebyscore  name 90 30 withscores limit 1 1
zhangsan
45

# 4.zcount: count the members whose scores fall in the given range
127.0.0.1:6379> zcount name 50 100
2

# 5.zrank: return the member's rank in the set, 0-based
127.0.0.1:6379> zrank name zhaoliu
2

# 6.zscore: return the score of the given member
127.0.0.1:6379> zscore name zhangsan
45

5.Underlying Implementation

  • 1.zset uses two data structures underneath

    • 1.hash: associates member with score via field-value pairs, guarantees member uniqueness, and lets the score be looked up by member
    • 2.skip list: orders the members by score and allows fast lookup of a given element
  • 2.The skip list (skiplist) is a multi-level linked list that trades extra pointers for fast ordered search

6.Generic Key Commands

1.Inspect

# 1.List all keys in the current database
127.0.0.1:6379> keys *
(empty list or set)

# 2.* matches everything on its own; combined with other characters it matches any leading/trailing characters
127.0.0.1:6379> set aname lisi
OK
127.0.0.1:6379> set bname wangwu
OK
127.0.0.1:6379> set cname zhaoliu
OK
127.0.0.1:6379> keys *name
1) "cname"
2) "aname"
3) "bname"

# 3.dbsize: number of keys in the current database
127.0.0.1:6379> keys *
1) "name"
2) "18"
127.0.0.1:6379> dbsize
(integer) 2

# 4.type: the data type of the value stored at the key
127.0.0.1:6379> set age 18
OK
127.0.0.1:6379> type age
string
127.0.0.1:6379> type aname
list

2.Test for Existence

127.0.0.1:6379> exists name
(integer) 0
127.0.0.1:6379> set name 李四
OK
127.0.0.1:6379> exists name
(integer) 1

3.Delete a Key

# del deletes synchronously; unlink deletes asynchronously in the background, and requires Redis 4.0 or later
127.0.0.1:6379> del name
(integer) 1
127.0.0.1:6379> unlink 18
(integer) 1

4.Set/Inspect/Clear a Key's Expiry

# 1.expire: set the key's time to live, in seconds by default (pexpire: in milliseconds)
127.0.0.1:6379> expire age 100
(integer) 1

# 2.ttl: remaining time to live; -1 means never expires, -2 means already expired
127.0.0.1:6379> ttl age
(integer) 96

# 3.persist: remove the key's expiry; returns 1 if an expiry was cleared, 0 otherwise
127.0.0.1:6379> expire course 200
(integer) 1
127.0.0.1:6379> ttl course
(integer) 195
127.0.0.1:6379> persist course
(integer) 1
127.0.0.1:6379> ttl course
(integer) -1
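The expire/ttl/persist life cycle can be modelled with absolute deadlines. All names here are hypothetical stand-ins for the server-side bookkeeping, not a Redis client API.

```python
# Toy sketch of EXPIRE/TTL/PERSIST semantics.
import time

db = {"age": "18"}
expires = {}  # key -> absolute deadline (monotonic seconds)

def expire(key, seconds):
    if key not in db:
        return 0
    expires[key] = time.monotonic() + seconds
    return 1

def ttl(key):
    if key not in db:
        return -2          # missing key
    deadline = expires.get(key)
    if deadline is None:
        return -1          # key exists but never expires
    return max(0, int(deadline - time.monotonic()))

def persist(key):
    return 1 if expires.pop(key, None) is not None else 0

print(expire("age", 100))     # 1
print(0 < ttl("age") <= 100)  # True
print(persist("age"))         # 1
print(ttl("age"))             # -1
```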

5.Switch Databases

# There are 16 databases by default (indexes 0 to 15)
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> keys *
(empty list or set)

6.Flush Databases

# 1.Flush the current database
127.0.0.1:6379> flushdb
OK
127.0.0.1:6379> keys *
(empty list or set)

# 2.Flush all databases
127.0.0.1:6379> flushall
OK

7.Help

  • Redis groups its commands by data type; use the help command to look them up

    • 1.help @string
    • 2.help @list
    • 3.help @set
    • 4.help @hash
    • 5.help @sorted_set

      127.0.0.1:6379> help @sorted_set

        ZADD key [NX|XX] [CH] [INCR] score member [score member ...]
        summary: Add one or more members to a sorted set, or update its score if it already exists
        since: 1.2.0
      ......

8.Configuration File (redis.conf)

[root@localhost ~]# find / -name redis.conf
/etc/redis/redis.conf  # backup copy, modified
/opt/redis/redis-3.2.9/redis.conf  # original, unmodified

[root@localhost redis-3.2.9]# ll
total 212
-rw-rw-r--.  1 root root 87407 May 17 2017 00-RELEASENOTES
-rw-rw-r--.  1 root root    53 May 17 2017 BUGS
-rw-rw-r--.  1 root root  1805 May 17 2017 CONTRIBUTING
-rw-rw-r--.  1 root root  1487 May 17 2017 COPYING
drwxrwxr-x.  7 root root   211 Oct 27 19:01 deps
-rw-r--r--.  1 root root   110 Oct 28 14:05 dump.rdb
-rw-rw-r--.  1 root root    11 May 17 2017 INSTALL
-rw-rw-r--.  1 root root   151 May 17 2017 Makefile
-rw-rw-r--.  1 root root  4223 May 17 2017 MANIFESTO
-rw-rw-r--.  1 root root  6834 May 17 2017 README.md
-rw-rw-r--.  1 root root 46695 Oct 27 21:18 redis.conf
-rwxrwxr-x.  1 root root   271 May 17 2017 runtest
-rwxrwxr-x.  1 root root   280 May 17 2017 runtest-cluster
-rwxrwxr-x.  1 root root   281 May 17 2017 runtest-sentinel
-rw-rw-r--.  1 root root  7606 May 17 2017 sentinel.conf
drwxrwxr-x.  2 root root  8192 Oct 27 19:02 src
drwxrwxr-x. 10 root root   167 May 17 2017 tests
drwxrwxr-x.  7 root root  4096 May 17 2017 utils
# Example Redis configuration file.

# 1.Note:
#   For Redis to read a configuration file, it must be started with the file path as the first argument:
#   ./redis-server /path/to/redis.conf

# 2.Units:
#   Where a memory size is needed, it can be given in the 1k 5GB 4M style
#   Redis works in bytes internally and converts these units as follows
#   Units are case-insensitive, so 1GB 1Gb 1gB are all the same
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes

# 3.INCLUDES
#   One or more other config files can be included here (to customize some server settings)
#   Note: the "include" option is not rewritten by the admin or by Redis Sentinel's "CONFIG REWRITE" command
#   Redis always uses the last processed line as the value of a configuration directive,
#   so it is best to put includes at the top of this file to avoid overwriting runtime config changes
#   Conversely, if you want includes to override other options, put them as the last lines
#
# include /path/to/local.conf
# include /path/to/other.conf

# 4.NETWORK
#   By default, if no "bind" directive is given, Redis listens for connections on all available network interfaces of the server
#   With the "bind" directive you can listen on one or more selected interfaces, followed by one or more IP addresses
#   Examples:
#   bind 192.168.1.100 10.0.0.1
#   bind 127.0.0.1 ::1
#   Note: if the machine running Redis is directly exposed to the internet, binding to all interfaces is dangerous and exposes the instance to everyone
#   Therefore by default the bind directive below is uncommented, which forces Redis to listen only on the IPv4 loopback address (meaning Redis accepts connections only from clients running on the same machine)
#   If you want the instance to listen on all interfaces, just comment out the following line
bind 127.0.0.1

# 5.Protected mode
#   Meant to avoid exposed Redis instances on the internet being accessed and exploited
#   When protected mode is on, and:
#   1) the server is not explicitly bound to a set of addresses with the "bind" directive
#   2) no password (authentication) is configured
#   the server only accepts connections from clients on the IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain sockets
#   Protected mode is enabled by default. Disable it only if you want clients from other hosts to connect even though no authentication is configured and no specific interfaces are listed with "bind"
protected-mode yes

# 6.Port
#   Accept connections on the given port, 6379 by default (IANA #815344)
#   If port 0 is specified, Redis will not listen on a TCP socket
port 6379

# 7.TCP listen() backlog
#   TCP connections are established with a three-way handshake
#   The backlog is a connection queue: total backlog = queue of handshakes in progress + queue of completed handshakes
#   In high-concurrency environments you need a high backlog to avoid slow-client connection problems
#   Note that the Linux kernel silently caps this value at /proc/sys/net/core/somaxconn (128),
#   so raise both /proc/sys/net/core/somaxconn and /proc/sys/net/ipv4/tcp_max_syn_backlog (128) to get the desired effect
tcp-backlog 511

# 8.Unix socket
#   Path of the Unix socket used to listen for incoming connections
#   There is no default, so Redis does not listen on a Unix socket when unspecified
# unixsocket /tmp/redis.sock
# unixsocketperm 700

# Close the connection after a client is idle for N seconds (0 disables the timeout)
timeout 0

# 9.TCP keepalive
#   A heartbeat check for connected clients, performed every n seconds
#   Unit is seconds; 0 disables keepalive checks; 60 is a recommended value
#   If non-zero, SO_KEEPALIVE is used to send TCP ACKs to clients during periods of no communication
#   This is useful for two reasons:
#   1) detecting dead peers
#   2) keeping the connection alive through intermediate network equipment
#   On Linux, the given value (in seconds) is the period used to send ACKs
#   Note that closing the connection takes twice that time; on other kernels the period depends on kernel configuration
#   A reasonable value is 300 seconds, the new Redis default since Redis 3.2.1
tcp-keepalive 300

# 10.GENERAL
#   By default Redis does not run as a daemon; use "yes" if you need it to
#   Note: Redis writes a pid file to /var/run/redis.pid when daemonized
daemonize no

# 11.Supervision
#   If you run Redis from upstart or systemd, Redis can interact with your supervision tree. Options:
#   supervised no      - no supervision interaction
#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode
#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
#   supervised auto    - detect upstart or systemd based on the UPSTART_JOB or NOTIFY_SOCKET environment variables
#   Note: these supervision methods only signal "process is ready"; they do not enable continuous liveness pings back to your supervisor
supervised no

# 12.pidfile
#   If a pid file is specified, Redis writes it to the given location at startup and removes it at exit
#   This is where the pid file is stored; each instance produces a different pid file
#   When the server runs non-daemonized, no pid file is created unless one is specified in the config. When daemonized, a pid file is used even if unspecified, defaulting to "/var/run/redis.pid"
#   If Redis cannot create it, nothing bad happens and the server starts and runs normally
pidfile /var/run/redis_6379.pid

# 13.loglevel
#   Log level; Redis supports four levels: debug, verbose, notice, warning; the default is notice
#   debug (a lot of information, useful for development/testing)
#   verbose (many rarely useful info, but not a mess like the debug level)
#   notice (moderately verbose, what you want in production probably)
#   warning (only very important / critical messages are logged)
loglevel notice

# 14.logfile
#   Log file name
#   An empty string can be used to force Redis to log to standard output
#   Note: with stdout logging and daemonize enabled, logs are sent to /dev/null
logfile ""

# 15.Syslog
#   To enable logging to the system logger, just set "syslog-enabled" to yes
#   and optionally update the other syslog parameters to suit your needs
# syslog-enabled no
# 15.1 Syslog identity
# syslog-ident redis
# 15.2 Syslog facility; must be USER or between LOCAL0-LOCAL7
# syslog-facility local0

# 16.databases
#   Number of databases
#   The default database is db0; you can select a different one per connection with SELECT <dbid>, where dbid is between 0 and 'databases'-1
databases 16

# 17.SNAPSHOTTING
# Save the DB on disk:
#   save <seconds> <changes>
#   Save the DB if both the given number of seconds and the given number of write operations against the DB occurred
#   With the lines below the behaviour is to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#   Note: you can disable saving entirely by commenting out all "save" lines
#   All previously configured save points can also be removed by adding a save directive with a single empty string argument, like this:
#   save ""
save 900 1
save 300 10
save 60 10000

# 17.1 Stop writes on RDB errors
#   By default, if RDB snapshots are enabled (at least one save point) and the latest background save failed, Redis stops accepting writes
#   This makes the user aware (in a hard way) that data is not persisting to disk correctly; otherwise chances are nobody would notice and some disaster would happen
#   Once the background save succeeds again, Redis automatically allows writes
#   However, if you have proper monitoring of the Redis server and of persistence set up, you may want to disable this so Redis keeps working as usual even with disk or permission problems
stop-writes-on-bgsave-error yes

# 17.2 RDB compression
#   Compress string objects with LZF when dumping the .rdb database?
#   The default is "yes" as it's almost always a win. Set it to "no" to save some CPU in the saving child, but the dataset will likely be bigger if you have compressible values or keys
rdbcompression yes

# 17.3 RDB checksum
#   Since RDB version 5, a CRC64 checksum is placed at the end of the file
#   This makes the format more resistant to corruption but costs performance (around 10%) when saving and loading RDB files, so you can disable it for maximum performance
#   RDB files created with checksum disabled carry a checksum of zero, which tells the loading code to skip the check
rdbchecksum yes

# 17.4 File name of the RDB dump
dbfilename dump.rdb

# 17.5 RDB working directory
# The DB will be written inside this directory, with the file name given by the 'dbfilename' directive above
# The append-only file will also be created inside this directory
# Note: a directory must be given here, not a file name
dir ./

# 18.REPLICATION
#   Use slaveof to make a Redis instance a copy of another Redis server
#   A few things to understand about Redis replication:
#   1) Redis replication is asynchronous, but you can configure a master to stop accepting writes if it is not connected to at least a given number of slaves
#   2) Redis slaves can perform a partial resynchronization with the master if the replication link is lost for a relatively short time. You may want to configure the replication backlog size with a sensible value depending on your needs (see the later sections of this file)
#   3) Replication is automatic and does not need user intervention. After a network partition, slaves automatically try to reconnect to the master and resynchronize with it
# slaveof <masterip> <masterport>

# 18.1 If the master is password protected (via the "requirepass" directive below), the slave can be told to authenticate before starting the replication synchronization process, otherwise the master will refuse the slave's requests
# masterauth <master-password>

# 18.2 When a slave loses its connection with the master, or while replication is still in progress, the slave can act in two different ways:
#   1) if slave-serve-stale-data is set to "yes" (the default), the slave will still reply to client requests, possibly with out-of-date data, or with an empty dataset if this is the first synchronization
#   2) if slave-serve-stale-data is set to "no", the slave will reply with the error "SYNC with master in progress" to all commands except INFO and SLAVEOF
#
slave-serve-stale-data yes

# 18.3 You can configure a slave instance to accept writes or not
#   Writing to a slave can be useful to store some ephemeral data (since data written to a slave is easily deleted after a resync with the master)
#   but may cause problems if clients write to it by mistake due to a misconfiguration
#   Since Redis 2.6 slaves are read-only by default
#   Note: read-only slaves are not designed to be exposed to untrusted clients on the internet; they are just a protection layer against misuse of the instance
#   By default a read-only slave still exports all administrative commands such as CONFIG, DEBUG, etc. To a limited extent you can improve security of read-only slaves using 'rename-command' to shadow all the administrative/dangerous commands
slave-read-only yes

# 18.4 Replication SYNC strategy: disk or socket
#   Whether to enable diskless replication; disabled by default
#   WARNING: diskless replication is currently experimental
#   New slaves and reconnecting slaves that cannot continue the replication process just receiving differences need a so-called "full synchronization": an RDB file is transmitted from the master to the slaves. The transmission can happen in two different ways:
#   1) Disk-backed: the Redis master creates a new process that writes the RDB file to disk; the parent process then transfers the file incrementally to the slaves
#   2) Diskless: the Redis master creates a new process that writes the RDB file directly to the slave sockets, without touching the disk at all
#   With disk-backed replication, while the RDB file is being generated, more slaves can be queued and served with the same RDB file as soon as the current child producing it finishes its work
#   With diskless replication, once the transfer starts, new slaves that arrive are queued, and a new transfer starts when the current one terminates
#   When diskless replication is used, the master waits a configurable amount of seconds before starting the transfer, hoping that multiple slaves will arrive so the transfer can be parallelized
#   With slow disks and fast (large bandwidth) networks, diskless replication works better
repl-diskless-sync no

# 18.5 When diskless replication is enabled, you can configure the delay the server waits before spawning the child that transfers the RDB over the socket to the slaves
#   Once the transfer starts, it is not possible to serve newly arriving slaves, which will be queued for the next RDB transfer, so the server waits for a delay to let more slaves arrive
#   The delay is in seconds, 5 by default. To disable it entirely just set it to 0 seconds and the transfer will start ASAP
repl-diskless-sync-delay 5

# 18.6 Slaves send PINGs to the server at a predefined interval, which can be changed with the repl_ping_slave_period option; the default is 10 seconds
# repl-ping-slave-period 10

# 18.7 The following option sets the replication timeout for:
#   1) Bulk transfer I/O during SYNC, from the point of view of slave
#   2) Master timeout from the point of view of slaves (data, pings)
#   3) Slave timeout from the point of view of masters (REPLCONF ACK pings)
#   It is important to make sure this value is greater than the value specified for repl-ping-slave-period, otherwise a timeout will be detected every time there is low traffic between the master and the slave
# repl-timeout 60

# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the slave can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
#
# repl-backlog-size 1mb

# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600

# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
slave-priority 100

# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.

# A Redis master is able to list the address and port of the attached
# slaves in different ways. For example the "INFO replication" section
# offers this information, which is used, among other tools, by
# Redis Sentinel in order to discover slave instances.
# Another place where this info is available is in the output of the
# "ROLE" command of a master.
#
# The listed IP and address normally reported by a slave is obtained
# in the following way:
#
#   IP: The address is auto detected by checking the peer address
#   of the socket used by the slave to connect with the master.
#
#   Port: The port is communicated by the slave during the replication
#   handshake, and is normally the port that the slave is using to
#   listen for connections.
#
# However when port forwarding or Network Address Translation (NAT) is
# used, the slave may be actually reachable via different IP and port
# pairs. The following two options can be used by a slave in order to
# report to its master a specific set of IP and port, so that both INFO
# and ROLE will report those values.
#
# There is no need to use both the options if you need to override just
# the port or the IP address.
#
# slave-announce-ip 5.5.5.5
# slave-announce-port 1234

# 19.SECURITY
#   Require clients to issue AUTH <password> before processing any other commands; this restricts which clients can reach the redis-server host
#   For backward compatibility this should stay commented out, since most people do not need auth (e.g. they run their own servers)
#   Warning: since Redis is pretty fast, an outside user can try up to 150k passwords per second, so use a very strong password, otherwise it will be very easy to break
# requirepass foobared

# 19.1 Command renaming
#   In a shared environment it is possible to change the names of dangerous commands
#   Example: CONFIG might be renamed to something hard to guess so that it is still available for internal tooling but not for general clients
#   Example:
#   rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#   It is also possible to completely kill a command by renaming it to an empty string:
#   rename-command CONFIG ""
#   Note: changing the name of commands that are logged into the AOF file or transmitted to slaves may cause problems.

# 20.LIMITS
# Maximum number of simultaneously connected clients
#   By default this limit is set to 10000 clients
#   But if the Redis server cannot configure the process file limit to allow the specified value, the maximum number of allowed clients is set to the current file limit minus 32 (Redis reserves a few file descriptors for internal use)
#   Once the limit is reached, Redis closes all new connections with the error "max number of clients reached"
# maxclients 10000

# 20.1 Don't use more memory than the specified number of bytes. When the memory limit is reached, Redis tries to remove keys according to the chosen eviction policy (see maxmemory-policy)
#   Setting this is strongly recommended; otherwise memory can fill up and crash the server
#   If Redis cannot remove keys according to the policy, or if the policy is set to "noeviction", Redis starts replying with errors to commands that would use more memory, like SET, LPUSH, etc., and keeps replying normally to read-only commands like GET
#   This option is usually useful when using Redis as an LRU cache, or to set a hard memory limit for an instance (with the "noeviction" policy)
#   Warning: if you attach slaves to an instance with maxmemory on, the size of the output buffers needed to feed the slaves is subtracted from the used memory count, so that a network problem / resync will not trigger a loop where keys are evicted,
#   which in turn fills the slaves' output buffers with DELs of evicted keys, triggering the deletion of more keys, and so on until the database is completely emptied
#   In short, if you have slaves attached, set a lower maxmemory so some free RAM remains on the system for the slave output buffers (not needed if the policy is "noeviction")
# maxmemory <bytes>

# 20.2 MAXMEMORY POLICY: how Redis selects what to remove when maxmemory is reached. You can pick among the following behaviours:
#   volatile-lru -> remove the key with an expire set using an LRU algorithm (only keys with an expiry are considered)
#   allkeys-lru -> remove any key according to the LRU algorithm (all keys are considered)
#   volatile-random -> remove a random key with an expire set
#   allkeys-random -> remove a random key, any key
#   volatile-ttl -> remove the key with the nearest expire time (minor TTL)
#   noeviction -> don't evict at all, just return an error on write operations
#   Note: with any of the above policies, Redis returns an error on write operations when there are no suitable keys for eviction
#   The write commands affected are: set setnx setex append
#   incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#   sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#   zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#   getset mset msetnx exec sort
#   The default is:
# maxmemory-policy noeviction

# 20.3 The LRU and minimal-TTL algorithms are not precise but approximated algorithms (to save memory), and you can tune them for speed or accuracy. By default Redis checks five keys and picks the least recently used among them; you can change the sample size with the directive below
#   The default of 5 produces good enough results. Larger samples approximate true LRU more closely but cost more CPU; 3 is very fast but not very accurate
# maxmemory-samples 5
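The sampled ("approximated") LRU that maxmemory-samples controls can be sketched as follows. Instead of scanning every key, a few are sampled and the oldest of the sample is evicted; the function name and data layout are illustrative only, the real algorithm lives inside the server.

```python
# Toy sketch of sampled LRU eviction (the maxmemory-samples idea).
import random

def evict_one(last_used, samples=5):
    """last_used maps key -> last access timestamp; returns the evicted key."""
    pool = random.sample(list(last_used), min(samples, len(last_used)))
    victim = min(pool, key=last_used.get)  # least recently used in the sample
    del last_used[victim]
    return victim

usage = {"a": 1, "b": 2, "c": 3}
# With samples >= number of keys this degenerates to exact LRU: "a" is evicted.
print(evict_one(usage, samples=3))  # a
print(sorted(usage))                # ['b', 'c']
```

With a small sample the evicted key is only probably the globally oldest one, which is the speed/accuracy trade-off the comment above describes.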
# 21.APPEND ONLY MODE
#   By default Redis asynchronously dumps the dataset on disk. This mode is good enough in many applications, but an issue with the Redis process or a power outage may result in a few minutes of writes lost (depending on the configured save points)
#   The Append Only File is an alternative persistence mode that provides much better durability. For instance, with the default data fsync policy (see later in the config file) Redis can lose just one second of writes in a dramatic event like a server power outage, or a single write if something goes wrong with the Redis process itself but the operating system is still running correctly
#   AOF and RDB persistence can be enabled at the same time without problems. If AOF is enabled at startup, Redis loads the AOF, the file with the better durability guarantees
#   Please check http://redis.io/topics/persistence for more information
appendonly no

# 21.1 Name of the append only file (default: "appendonly.aof")
appendfilename "appendonly.aof"

# 21.2 The fsync() call tells the operating system to actually write data to disk instead of waiting for more data in the output buffer. Some OSes really flush data on disk, others just try to do it ASAP
#   Redis supports three different modes:
#   no: don't fsync, just let the OS flush the data when it wants. Faster.
#   always: fsync after every write to the append only log. Slow, safest.
#   everysec: fsync only one time every second. Compromise.
#   The default is "everysec", as it is usually the right compromise between speed and data safety. It is up to you to relax it to "no", letting the operating system flush the output buffer when it wants for better performance (but if you can live with the idea of some data loss consider the default snapshotting persistence mode), or on the contrary to use "always", which is very slow but a bit safer than everysec
#   More details in the following article: http://antirez.com/post/redis-persistence-demystified.html
#   If unsure, use "everysec"
# appendfsync always
appendfsync everysec
# appendfsync no# 21.3When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.

no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes

# 22.LUA SCRIPTING
# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000

# 23.REDIS CLUSTER
#
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
# cluster-enabled yes

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf

# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
# cluster-node-timeout 15000

# A slave of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a slave to actually have an exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple slaves able to failover, they exchange messages
#    in order to try to give an advantage to the slave with the best
#    replication offset (more data from the master processed).
#    Slaves will try to get their rank by offset, and apply to the start
#    of the failover a delay proportional to their rank.
#
# 2) Every single slave computes the time of the last interaction with
#    its master. This can be the last ping or command received (if the master
#    is still in the "connected" state), or the time that elapsed since the
#    disconnection with the master (if the replication link is currently down).
#    If the last interaction is too old, the slave will not try to failover
#    at all.
#
# The point "2" can be tuned by user. Specifically a slave will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
#   (node-timeout * slave-validity-factor) + repl-ping-slave-period
#
# So for example if node-timeout is 30 seconds, and the slave-validity-factor
# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the
# slave will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large slave-validity-factor may allow slaves with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a slave at all.
#
# For maximum availability, it is possible to set the slave-validity-factor
# to a value of 0, which means, that slaves will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-slave-validity-factor 10

# Cluster slaves are able to migrate to orphaned masters, that are masters
# that are left without working slaves. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working slaves.
#
# Slaves migrate to orphaned masters only if there are still at least a
# given number of other working slaves for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a slave
# will migrate only if there is at least 1 other working slave for its master
# and so forth. It usually reflects the number of slaves you want for every
# master in your cluster.
#
# Default is 1 (slaves migrate only if their masters remain with at least
# one slave). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1

# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least an hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) all the cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes

# In order to setup your cluster make sure to read the documentation
# available at the http://redis.io web site.

# 24.SLOW LOG
# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.

# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128

# 25.LATENCY MONITOR

# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0

# 26.EVENT NOTIFICATION

# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
#  K     Keyspace events, published with __keyspace@<db>__ prefix.
#  E     Keyevent events, published with __keyevent@<db>__ prefix.
#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
#  $     String commands
#  l     List commands
#  s     Set commands
#  h     Hash commands
#  z     Sorted set commands
#  x     Expired events (events generated every time a key expires)
#  e     Evicted events (events generated when a key is evicted for maxmemory)
#  A     Alias for g$lshzxe, so that the "AKE" string means all the events.
#
#  The "notify-keyspace-events" takes as argument a string that is composed
#  of zero or multiple characters. The empty string means that notifications
#  are disabled.
#
#  Example: to enable list and generic events, from the point of view of the
#           event name, use:
#
#  notify-keyspace-events Elg
#
#  Example 2: to get the stream of the expired keys subscribing to channel
#             name __keyevent@0__:expired use:
#
#  notify-keyspace-events Ex
#
#  By default all notifications are disabled because most users don't need
#  this feature and the feature has some overhead. Note that if you don't
#  specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""

# 27.ADVANCED CONFIG
# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# For a fixed maximum size, use -5 through -1, meaning:
# -5: max size: 64 Kb  <-- not recommended for normal workloads
# -4: max size: 32 Kb  <-- not recommended
# -3: max size: 16 Kb  <-- probably not recommended
# -2: max size: 8 Kb   <-- good
# -1: max size: 4 Kb   <-- good
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
list-max-ziplist-size -2

# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression.  The head and tail of the list
# are always uncompressed for fast push/pop operations.  Settings are:
# 0: disable all list compression
# 1: depth 1 means "don't start compressing until after 1 node into the list,
#    going from either the head or tail"
#    So: [head]->node->node->...->node->[tail]
#    [head], [tail] will always be uncompressed; inner nodes will compress.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
#    2 here means: don't compress head or head->next or tail->prev or tail,
#    but compress all nodes between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
# etc.
list-compress-depth 0

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operations you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure: use "activerehashing no" if you have hard latency requirements
# and it is not a good thing in your environment that Redis can reply from
# time to time to queries with 2 milliseconds delay.
#
# Use "activerehashing yes" if you don't have such hard requirements but want
# to free memory asap when possible.
activerehashing yes

# The client output buffer limits can be used to force disconnection of
# clients that are not reading data from the server fast enough for some
# reason (a common reason is that a Pub/Sub client can't consume messages as
# fast as the publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# slave  -> slave clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reaches 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can be read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
#
# Both the hard and the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes

9.Publish and Subscribe

  • Redis publish/subscribe (pub/sub) is a messaging pattern: publishers (pub) send messages and subscribers (sub) receive them
  • A Redis client can subscribe to any number of channels, but it only receives messages published to the channels it has subscribed to; messages on other channels are not delivered

1.A client subscribes to a channel

2.A publisher sends a message

3.Pub/sub on the command line

  • 1.Open a first client and subscribe to channel1

    127.0.0.1:6379> subscribe channel1
    Reading messages... (press Ctrl-C to quit)
    1) "subscribe"
    2) "channel1"
    3) (integer) 1
    
  • 2.Open a second client and publish a message to channel1
    127.0.0.1:6379> publish channel1 testConn
    (integer) 1
    
  • 3.Back in the first client, the message sent by the second client is received
    127.0.0.1:6379> subscribe channel1
    Reading messages... (press Ctrl-C to quit)
    1) "subscribe"
    2) "channel1"
    3) (integer) 1
    1) "message"
    2) "channel1"
    3) "testConn"
    

10.Jedis

1.Preparation

1.Stop the firewall

[root@localhost ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@localhost ~]#

2.Edit the configuration file

# vim /etc/redis/redis.conf

# 1. Comment out bind, otherwise only the local machine can connect
# bind 127.0.0.1

# 2. Disable protected mode
protected-mode no

# 3. After editing, restart the redis server with this configuration file so the changes take effect
./redis-server /etc/redis/redis.conf
# Connection error:
# this error means protected mode was not disabled
redis.clients.jedis.exceptions.JedisDataException: DENIED Redis is running in protected mode because protected mode is enabled, no bind address was specified, no authentication password is requested to clients. In this mode connections are only accepted from the loopback interface. If you want to connect from external computers to Redis you may adopt one of the following solutions:
1) Just disable protected mode sending the command 'CONFIG SET protected-mode no' from the loopback interface by connecting to Redis from the same host the server is running, however MAKE SURE Redis is not publicly accessible from internet if you do so. Use CONFIG REWRITE to make this change permanent.
2) Alternatively you can just disable the protected mode by editing the Redis configuration file, and setting the protected mode option to 'no', and then restarting the server. 3) If you started the server manually just for testing, restart it with the '--protected-mode no' option. 4) Setup a bind address or an authentication password. NOTE: You only need to do one of the above things in order for the server to start accepting connections from the outside.

2.Testing the connection

1.Add the dependency

<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>2.9.0</version>
</dependency>

2.Connection test

@Test
public void testJedis(){
    // get a connection
    Jedis jedis = new Jedis("192.168.73.100", 6379);
    String ping = jedis.ping();
    jedis.close();
    System.out.println(ping);
}
/* prints on success:
PONG
*/

3.Testing a connection pool

    @Test
    public void testJedisPool(){
        // create the pool configuration object
        JedisPoolConfig config = new JedisPoolConfig();
        // validate connections when they are borrowed from the pool
        config.setTestOnBorrow(true);
        // maximum number of connection instances, default 8
        config.setMaxTotal(100);
        // maximum number of idle connections, default 8
        config.setMaxIdle(8);
        // minimum number of idle connections, default 8
        config.setMinIdle(8);
        // maximum wait in milliseconds when no idle connection is available
        config.setMaxWaitMillis(60000);
        // create the pool
        JedisPool jedisPool = new JedisPool(config, "192.168.73.100", 6379);
        // borrow a connection
        Jedis jedis = jedisPool.getResource();
        // check connectivity
        String ping = jedis.ping();
        // return the connection to the pool
        jedis.close();
        System.out.println(ping);
    }

3.JedisUtil

  • redis.properties

    # redis server ip
    redis.hostName=192.168.73.100
    # redis port
    redis.port=6379
    # maximum number of connections
    redis.maxTotal=500
    # maximum number of idle connections
    redis.maxIdle=50
    # minimum number of idle connections
    redis.minIdle=10
    # maximum wait when establishing a connection, in milliseconds
    redis.maxWaitMillis=30000
    # validate connections when they are borrowed from the pool
    redis.testOnBorrow=true

    package com.wd.util;

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPool;
    import redis.clients.jedis.JedisPoolConfig;

    import java.io.InputStream;
    import java.util.Properties;

    public class JedisUtil {
        private static JedisPool pool = null;

        static {
            InputStream inputStream = JedisUtil.class.getResourceAsStream("/redis.properties");
            Properties properties = new Properties();
            try {
                properties.load(inputStream);
                inputStream.close();
            } catch (Exception e) {
                e.printStackTrace();
                throw new RuntimeException(e);
            }
            JedisPoolConfig config = new JedisPoolConfig();
            String hostName = properties.getProperty("redis.hostName");
            String port = properties.getProperty("redis.port");
            String testOnBorrow = properties.getProperty("redis.testOnBorrow");
            String maxTotal = properties.getProperty("redis.maxTotal");
            String maxIdle = properties.getProperty("redis.maxIdle");
            String minIdle = properties.getProperty("redis.minIdle");
            String maxWaitMillis = properties.getProperty("redis.maxWaitMillis");
            if (testOnBorrow != null) {
                config.setTestOnBorrow(Boolean.parseBoolean(testOnBorrow));
            }
            if (maxTotal != null) {
                config.setMaxTotal(Integer.parseInt(maxTotal));
            }
            if (maxIdle != null) {
                config.setMaxIdle(Integer.parseInt(maxIdle));
            }
            if (minIdle != null) {
                config.setMinIdle(Integer.parseInt(minIdle));
            }
            if (maxWaitMillis != null) {
                config.setMaxWaitMillis(Long.parseLong(maxWaitMillis));
            }
            pool = new JedisPool(config, hostName, Integer.parseInt(port == null ? "6379" : port));
        }

        public static Jedis getJedis(){
            return pool.getResource();
        }

        public static void close(Jedis jedis){
            jedis.close();
        }
    }

    @Test
    public void testRedisUtil(){
        Jedis jedis = JedisUtil.getJedis();
        String ping = jedis.ping();
        jedis.close();
        System.out.println(ping);
    }
    

11.Using Redis from Spring Boot

  • 1.Spring's spring-data-redis module provides the RedisTemplate object, which wraps all common Redis operations
  • 2.RedisTemplate supports all of Redis's native commands and manages the connection pool automatically, greatly simplifying Redis access from Java

1.Add the dependencies

<!-- redis -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
    <version>2.6.3</version>
</dependency>
<!-- the commons-pool2 connection pool is required when integrating redis with Spring 2.x -->
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-pool2</artifactId>
    <version>2.9.0</version>
</dependency>

2.Edit the configuration

spring:
  redis:
    # ip of the server where redis is deployed
    host: 192.168.73.100
    # port
    port: 6379
    jedis:
      pool:
        # maximum number of connections
        max-active: 500
        # maximum number of idle connections
        max-idle: 50
        # minimum number of idle connections
        min-idle: 10
        # maximum wait when establishing a connection, in milliseconds
        max-wait: 30000

3.Configuration class

  • Register the RedisTemplate as a Spring bean

    package com.wd.config;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.data.redis.connection.RedisConnectionFactory;
    import org.springframework.data.redis.core.RedisTemplate;
    import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
    import org.springframework.data.redis.serializer.StringRedisSerializer;

    @Configuration
    public class RedisConfig {
        @Bean
        public RedisTemplate redisTemplate(RedisConnectionFactory redisConnectionFactory){
            // create the Redis template
            RedisTemplate<Object, Object> redisTemplate = new RedisTemplate<>();
            // set the connection factory
            redisTemplate.setConnectionFactory(redisConnectionFactory);
            // string serializer
            StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();
            // Json serializer
            GenericJackson2JsonRedisSerializer genericJackson2JsonRedisSerializer = new GenericJackson2JsonRedisSerializer();
            // serialize Redis keys as strings
            redisTemplate.setKeySerializer(stringRedisSerializer);
            // serialize Redis values as Json
            redisTemplate.setValueSerializer(genericJackson2JsonRedisSerializer);
            // serialize the keys of Redis hash structures as strings
            redisTemplate.setHashKeySerializer(stringRedisSerializer);
            // serialize the values of Redis hash structures as Json
            redisTemplate.setHashValueSerializer(genericJackson2JsonRedisSerializer);
            return redisTemplate;
        }
    }

3.Common APIs

  • spring-data-redis groups the jedis client's many APIs by type, wrapping each group of operations in an operations interface

    • 1.ValueOperations: operations on the String type
    • 2.ListOperations: operations on the List type
    • 3.SetOperations: operations on the Set type
    • 4.HashOperations: operations on the Hash (Map) type
    • 5.ZSetOperations: operations on the ZSet type

1.ValueOperations

2.ListOperations

3.SetOperations

4.HashOperations

5.ZSetOperations

4.Testing

@RunWith(SpringRunner.class)
// the application entry class is passed as the annotation's parameter
@SpringBootTest(classes = SpringBootApplication.class)
public class TestSpringBoot {
    @Resource
    private RedisTemplate redisTemplate;

    @Test
    public void testStringOperations(){
        // obtain the five common operations interfaces
        ValueOperations valueOperations = redisTemplate.opsForValue();
        ListOperations listOperations = redisTemplate.opsForList();
        SetOperations setOperations = redisTemplate.opsForSet();
        HashOperations hashOperations = redisTemplate.opsForHash();
        ZSetOperations zSetOperations = redisTemplate.opsForZSet();
        // add (the configuration class serializes values as Json, so a value stored manually as a plain String will fail when read back)
        valueOperations.set("name", "李四"); // single set
        HashMap<String, Object> redisMap = new HashMap<>();
        redisMap.put("sex", "男");
        redisMap.put("height", 175);
        redisMap.put("weight", 125);
        valueOperations.multiSet(redisMap); // batch set
        // delete
        redisTemplate.delete("name");
        // update
        valueOperations.set("age", 18);       // overwrite
        valueOperations.increment("age");     // increment by 1
        valueOperations.increment("age", 10); // increment by a given amount
        valueOperations.decrement("age");     // decrement by 1
        valueOperations.decrement("age", 5);  // decrement by a given amount
        // query
        Object age = valueOperations.get("age");
        ArrayList<String> redisArray = new ArrayList<>();
        redisArray.add("name");
        redisArray.add("sex");
        redisArray.add("age");
        redisArray.add("height");
        redisArray.add("weight");
        List list = valueOperations.multiGet(redisArray); // batch query
        list.forEach(System.out::println);
    }
}

12.Serialization and Deserialization

  • Serialization: the process of persistently saving an object's state
  • Deserialization: the process of restoring an object from its saved state
  • Two approaches are commonly used with Redis

1.Byte arrays

  • Java objects are stored in Redis as the byte arrays that represent them in memory
  • The serialized type must implement the Serializable interface
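The byte-array round trip can be sketched without a running Redis server; a minimal example (the class and field names here are illustrative, not from the original notes) shows the bytes that would be written as the Redis value and read back:

```java
import java.io.*;

public class ByteSerializationDemo {
    // the cached type must implement Serializable
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        String name;
        int age;
        User(String name, int age) { this.name = name; this.age = age; }
    }

    // serialize an object into the byte array that would be stored as the Redis value
    static byte[] serialize(Object obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        }
        return bytes.toByteArray();
    }

    // restore the object from the bytes read back from Redis
    static Object deserialize(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] stored = serialize(new User("lisi", 18));   // what jedis.set(key.getBytes(), stored) would write
        User restored = (User) deserialize(stored);        // what a later get(key.getBytes()) would yield
        System.out.println(restored.name + ":" + restored.age);
    }
}
```

With Jedis this maps onto the byte-array overloads of set/get; the serializer pair is exactly what the JDK-serialization RedisSerializer does for you in spring-data-redis.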

2.Json strings

13.Transactions

  • 1.A Redis transaction is a single isolated operation: all commands in the transaction are serialized and executed in order
  • 2.While a transaction is executing, it cannot be interrupted by command requests sent by other clients
  • 3.The main purpose of a Redis transaction is to chain multiple commands together and keep other commands from cutting in
  • Redis implements transactions through the multi, exec, discard and watch commands

1.The multi, exec and discard commands

  • 1.Queueing phase: after starting a transaction with multi, commands are not executed immediately; each returns QUEUED (it joins the command queue)
  • 2.Execution phase: only after exec do the queued commands execute, one after another
  • 3.Abandoning the queue: during queueing, discard abandons the transaction

    # 1. start the transaction
    127.0.0.1:6379> multi
    OK
    127.0.0.1:6379> set name lisi
    QUEUED
    127.0.0.1:6379> set age 18
    QUEUED
    127.0.0.1:6379> set sex 男
    QUEUED
    # 2. execute the transaction
    127.0.0.1:6379> exec
    1) OK
    2) OK
    3) OK
    127.0.0.1:6379> keys *
    1) "age"
    2) "name"
    3) "sex"
    
  • 1.If an error occurs during the queueing phase, every command in the queue fails to execute
    127.0.0.1:6379> multi
    OK
    127.0.0.1:6379> set name 王五
    QUEUED
    127.0.0.1:6379> settt sex 男
    (error) ERR unknown command 'settt'
    127.0.0.1:6379> set age 18
    QUEUED
    127.0.0.1:6379> exec
    (error) EXECABORT Transaction discarded because of previous errors.
    
  • 2.If an error occurs during the execution phase, the other queued commands execute normally and only the failing command errors out
    127.0.0.1:6379> multi
    OK
    127.0.0.1:6379> set name lisi
    QUEUED
    127.0.0.1:6379> set age 18 18
    QUEUED
    127.0.0.1:6379> set sex 男
    QUEUED
    127.0.0.1:6379> exec
    1) OK
    2) (error) ERR syntax error
    3) OK
    127.0.0.1:6379>
    
  • 3.Abandoning the queue during the queueing phase
    127.0.0.1:6379> multi
    OK
    127.0.0.1:6379> set name lisi
    QUEUED
    127.0.0.1:6379> set age 18
    QUEUED
    127.0.0.1:6379> discard
    OK
    127.0.0.1:6379> exec
    (error) ERR EXEC without MULTI
    

2.The watch command

  • 1.The watch command is issued before multi and monitors any number of keys for changes
  • 2.When exec runs, the watched keys are checked: if any of them has been modified, the server refuses to execute the transaction; otherwise the transaction executes normally
  • 3.This command is typically used to synchronize multiple clients (optimistic locking)
    # modification from a single client
    127.0.0.1:6379> set age 20
    OK
    127.0.0.1:6379> set sex 男
    OK
    127.0.0.1:6379> watch age sex
    OK
    127.0.0.1:6379> multi
    OK
    127.0.0.1:6379> set age 18
    QUEUED
    127.0.0.1:6379> set name 丽水
    QUEUED
    127.0.0.1:6379> exec
    1) OK
    2) OK

    # modification from multiple clients
    # client 1
    127.0.0.1:6379> set age 18
    QUEUED
    127.0.0.1:6379> set name 丽水
    QUEUED
    127.0.0.1:6379> exec
    1) OK
    2) OK
    127.0.0.1:6379> watch age sex
    OK
    127.0.0.1:6379> multi
    OK
    127.0.0.1:6379> set name 张三
    QUEUED
    127.0.0.1:6379> exec
    (nil)

    # client 2 (modifies a watched key while client 1's transaction is in progress)
    [root@localhost ~]# redis-cli
    127.0.0.1:6379> set age 20
    OK
    127.0.0.1:6379>
    

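WATCH/MULTI/EXEC is essentially optimistic locking: remember the value an update is based on, queue the update, and retry from the top if the value changed in between. As an analogy only (plain Java, not Jedis code), the same check-and-set retry loop looks like this:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticLockDemo {
    public static void main(String[] args) {
        AtomicInteger balance = new AtomicInteger(100);

        // like WATCH + MULTI/EXEC: retry until no other writer changed the value in between
        while (true) {
            int seen = balance.get();          // WATCH: remember the value the update is based on
            int updated = seen - 30;           // the queued command
            if (balance.compareAndSet(seen, updated)) {
                break;                         // EXEC succeeded: nobody modified the key meanwhile
            }
            // EXEC returned (nil): another writer interfered, start the transaction over
        }
        System.out.println(balance.get());
    }
}
```

With Jedis the retry loop is the same shape: `jedis.watch(key)`, read, `multi()`, queue, and if `exec()` returns null, loop and try again.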
3.Weak transaction guarantees

  • 1.Redis transactions do not guarantee ACID semantics: the commands in a transaction do not all succeed or all fail together, and a transaction cannot be rolled back after a runtime error (see the runtime-error example under the multi command above)
  • 2.Redis can detect syntax errors at queueing time, but cannot detect runtime errors until execution
  • 3.Why Redis does not implement standard transactions
    • 1.Different goals: Redis exists to solve performance problems, and standard transactions would add complexity and reduce its performance
    • 2.The scenarios Redis targets are simple enough that Redis transactions suffice

4.How transactions are implemented

  • Redis implements transactions with a transaction queue

    • 1.After a client sends multi, the Redis server saves that client's subsequent commands into a queue
    • 2.When a client in the transaction state sends exec, the server walks that client's transaction queue, executes every saved command, and finally returns all the results to the client
    • 3.Before executing the queued commands, if Redis found a syntax error at queueing time or a watched key has changed, it clears the queue and refuses to execute
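The queue model above can be sketched as a toy illustration of the server-side flow (this is a simplification for intuition, not actual Redis source):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.function.Supplier;

public class TxQueueDemo {
    // commands queued for this client between MULTI and EXEC
    private final Deque<Supplier<String>> queue = new ArrayDeque<>();
    private boolean queueingError = false;

    // MULTI phase: queue a command; a command with bad syntax poisons the whole queue
    void enqueue(Supplier<String> command, boolean syntaxOk) {
        if (syntaxOk) queue.add(command);
        else queueingError = true;
    }

    // EXEC: refuse everything if queueing failed, otherwise run all commands in order
    List<String> exec() {
        List<String> results = new ArrayList<>();
        if (queueingError) {
            queue.clear();                 // clear the queue and refuse to execute
            results.add("EXECABORT");
            return results;
        }
        while (!queue.isEmpty()) results.add(queue.poll().get());
        return results;
    }

    public static void main(String[] args) {
        TxQueueDemo tx = new TxQueueDemo();
        tx.enqueue(() -> "OK", true);      // set name lisi -> QUEUED
        tx.enqueue(() -> "OK", true);      // set age 18   -> QUEUED
        System.out.println(tx.exec());     // both commands only run at EXEC time
    }
}
```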

14.Redis as a Cache

  • 1.EhCache/Mybatis caches: in a single-node system, hot data from the database is cached locally to reduce IO and improve application performance
  • 2.Redis cache: in a distributed system, hot data from the database is cached in Redis, and multiple servers read the cache from Redis, eliminating redundant work

1.Caching in a single-node architecture

  • 1.EhCache/Mybatis caches keep the data directly in the JVM, which is fast and efficient (figure 1)
  • 2.This does not fit a distributed cluster, where cache management becomes painful (figure 2: after one query, the cached row would have to be added on every server)

2.Caching in a distributed cluster

  • 1.Solution to figure 2: use a Redis cache
  • 2.Redis operates in memory and can replace the EhCache/Mybatis caches: deploy Redis as a standalone service, let applications talk to it over the network, and keep the cached data in Redis

3.How the Redis cache works

  • 1.On a query, first look up the redis cache (keyed by class name + method name + parameters); on a hit, return without calling the dao; on a miss, call the dao and store the result in the redis cache
  • 2.After an insert/update/delete, clear the related redis cache to avoid stale data
  • 3.Keep the cache-handling code decoupled from the business code; do not put cache logic inside business methods
  • 4.Define the Redis cache code as an Advice and weave it in dynamically with AOP

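The "class name + method name + parameters" key scheme can be sketched as a small helper (the helper name is illustrative, not from the original notes):

```java
import java.util.Arrays;

public class CacheKeyDemo {
    // build a cache key in the "class name + method name + parameters" style described above
    static String cacheKey(Class<?> clazz, String method, Object... args) {
        return clazz.getSimpleName() + ":" + method + ":" + Arrays.toString(args);
    }

    public static void main(String[] args) {
        // the same query always maps to the same key, so a repeated call becomes a cache hit
        System.out.println(cacheKey(String.class, "selectHello", 1));
    }
}
```

This is also roughly what the `key = "#root.methodName+#id"` SpEL expression in the annotation example below the fold computes, with `value` supplying the class-name part.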
4.Using the Redis cache in practice (Spring Boot annotations)

  • 1.Spring ships with caching annotations

    • 1.Cacheable: before the method runs, look in the cache; on a hit return the cached value directly, on a miss run the method and store its result in the cache
    • 2.CacheEvict: evict cache entries before or after the method runs
  • 2.Spring ships with a cache manager for Redis (RedisCacheManager)

1.Add the dependencies

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
    <version>2.6.3</version>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-pool2</artifactId>
    <version>2.9.0</version>
</dependency>

2.Edit the configuration file

spring:
  redis:
    # ip of the server where redis is deployed
    host: 192.168.73.100
    # port
    port: 6379
    # Lettuce and Jedis are both Redis clients; either can connect to a Redis server
    jedis:
    #  lettuce:
      pool:
        # maximum number of connections
        max-active: 500
        # maximum number of idle connections
        max-idle: 50
        # minimum number of idle connections
        min-idle: 10
        # maximum wait when establishing a connection, in milliseconds
        max-wait: 30000

3.Create the configuration class

package com.wd.config;

import org.springframework.cache.CacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.cache.RedisCacheWriter;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext;

import java.time.Duration;
import java.util.HashMap;
import java.util.Map;

@Configuration
public class RedisCacheConfig {
    // configure the cache manager
    @Bean
    public CacheManager cacheManager(RedisConnectionFactory redisConnectionFactory){
        // lock-free cache writing strategy
        RedisCacheWriter redisCacheWriter = RedisCacheWriter.nonLockingRedisCacheWriter(redisConnectionFactory);
        // return the Redis cache manager (write strategy, 60-second default expiry, per-cache overrides)
        return new RedisCacheManager(redisCacheWriter, getRedisCacheConfiguration(60), getRedisCacheConfigurationMap());
    }

    // RedisCacheConfiguration holds the Redis cache settings
    private RedisCacheConfiguration getRedisCacheConfiguration(int seconds){
        // 1. get the default cache configuration object
        RedisCacheConfiguration redisCacheConfiguration = RedisCacheConfiguration.defaultCacheConfig();
        // 2. prepare the Json serializer
        GenericJackson2JsonRedisSerializer jackson2JsonRedisSerializer = new GenericJackson2JsonRedisSerializer();
        return redisCacheConfiguration
                // 3. serialize cached values as Json
                .serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(jackson2JsonRedisSerializer))
                // 4. set the expiry time
                .entryTtl(Duration.ofSeconds(seconds));
    }

    // per-cache overrides
    private Map<String, RedisCacheConfiguration> getRedisCacheConfigurationMap(){
        Map<String, RedisCacheConfiguration> redisCacheConfigurationMap = new HashMap<>();
        // caches named "UserServiceImpl" (usually the class name) keep entries for 100 seconds
        redisCacheConfigurationMap.put("UserServiceImpl", getRedisCacheConfiguration(100));
        return redisCacheConfigurationMap;
    }
}

4.Add the cache-enabling annotation on the startup class

```java
package com.wd;

import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.cache.annotation.EnableCaching;

@EnableCaching // Enable caching
@org.springframework.boot.autoconfigure.SpringBootApplication
// Scan mapper interfaces so proxy classes are generated dynamically;
// alternatively, annotate each mapper interface individually (choose one)
@MapperScan("com.wd.mapper")
public class SpringBootApplication {
    // Start the embedded Tomcat and deploy the application
    public static void main(String[] args) {
        SpringApplication.run(SpringBootApplication.class, args);
    }
}
```

5.Add the cache annotations on the target methods

```java
package com.wd.service.impl;

import com.wd.domains.User;
import com.wd.service.UserService;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class UserServiceImpl implements UserService {

    // The key stored in redis is built from the class name, method name, and arguments:
    // value is the class name, key is the method name concatenated with the argument value.
    // Usually placed on query methods to enable caching.
    @Cacheable(value = "UserServiceImpl", key = "#root.methodName+#id")
    public String selectHello(Integer id) {
        System.out.println("----cache not used, method invoked directly----selectHello");
        return "hello redis cache";
    }

    // After insert/update/delete methods run, evict the cache to avoid stale data.
    // value is the cache (class) name; allEntries = true evicts every entry in that cache;
    // beforeInvocation = true evicts before the method executes.
    @CacheEvict(value = "UserServiceImpl", allEntries = true, beforeInvocation = true)
    public void saveHello() {
        System.out.println("----cache not used, method invoked directly----saveHello");
    }
}
```
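With Spring Data Redis's default key prefix (`CacheKeyPrefix.simple()`, which joins the cache name and the key with `::`), the entry produced by `selectHello(1)` above is stored under `UserServiceImpl::selectHello1`. A minimal sketch of that key construction (`redisKey` is a hypothetical helper for illustration, not a Spring API):

```java
public class CacheKeyDemo {
    // Hypothetical helper mirroring how Spring Data Redis builds the stored key:
    // "<cacheName>::<key>", where the key expression here is "#root.methodName+#id"
    static String redisKey(String cacheName, String methodName, Integer id) {
        return cacheName + "::" + methodName + id;
    }

    public static void main(String[] args) {
        // selectHello(1) would be cached under this key
        System.out.println(redisKey("UserServiceImpl", "selectHello", 1));
    }
}
```

Knowing the exact key is handy when inspecting the cache with `redis-cli`.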

15.Persistence

16.Master-slave replication

17.Cluster

18.Distributed lock
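The core of a Redis distributed lock is `SET key token NX PX ttl` to acquire, and a compare-and-delete (normally a Lua script, to stay atomic) to release. A minimal in-memory sketch of those semantics, with `ConcurrentHashMap.putIfAbsent` standing in for `SET ... NX`; no real Redis is involved and the key/token names are illustrative:

```java
import java.util.concurrent.ConcurrentHashMap;

public class LockSketch {
    // Stands in for the Redis keyspace; putIfAbsent mimics SET key token NX
    static final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();

    // Acquire: succeeds only if nobody currently holds the lock
    static boolean tryLock(String key, String token) {
        return store.putIfAbsent(key, token) == null;
    }

    // Release: only the holder's token may delete the key
    // (in real Redis this compare-and-delete must run as a Lua script)
    static boolean unlock(String key, String token) {
        return store.remove(key, token);
    }

    public static void main(String[] args) {
        System.out.println(tryLock("lock:order", "client-A")); // true: A acquires
        System.out.println(tryLock("lock:order", "client-B")); // false: already held
        System.out.println(unlock("lock:order", "client-B"));  // false: wrong token
        System.out.println(unlock("lock:order", "client-A"));  // true: holder releases
    }
}
```

The per-client token is what prevents one client from releasing a lock another client holds; a real implementation also needs the PX expiry so a crashed holder cannot block everyone forever.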

Application scenarios

1.Phone verification code

  • 1.Requirements

    • 1.Enter a phone number and click send: a random 6-digit code is generated, valid for 1 minute
    • 2.Enter the code and click verify: return success or failure
    • 3.Each phone number is limited to 3 sends per day
  • 2.Implementation
    • 1.Generate a random 6-digit code with Random
    • 2.Save the code in redis with a 60-second expiry
    • 3.Fetch the code from redis and compare it with the code the user entered
    • 4.INCR a counter after each send; once it reaches 3, refuse further sends; the counter expires after one day
```java
import java.util.Random;

import org.junit.Test;

import redis.clients.jedis.Jedis;

public class VerifyCodeTest {

    /**
     * 1. Generate a random 6-digit verification code with Random
     * @return the code as a string
     */
    public static String testVerifyCode() {
        Random random = new Random();
        StringBuilder str = new StringBuilder();
        for (int i = 0; i < 6; i++) {
            int num = random.nextInt(10);
            str.append(num);
        }
        System.out.println(str.toString());
        return str.toString();
    }

    /**
     * 2. Store the code in redis for 60 seconds, limited to 3 sends per phone per day
     * @param phone
     */
    public static void testSendRedis(String phone) {
        // Random verification code
        String verifyCode = testVerifyCode();
        /*
         * redis key                 value
         * verifyCode<phone>:code    the verification code
         * verifyCode<phone>:count   how many codes this phone has been sent today
         */
        String codeKey = "verifyCode" + phone + ":code";
        String countKey = "verifyCode" + phone + ":count";
        // Get a connection from the utility class
        Jedis jedis = JedisUtil.getJedis();
        String countValue = jedis.get(countKey);
        // Each phone may only be sent three codes
        if (countValue == null) {
            // No send count yet: this is the first code sent today
            jedis.setex(countKey, 24 * 60 * 60, "1");
        } else if (Integer.parseInt(countValue) < 3) {
            // Increment the send count
            jedis.incr(countKey);
        } else {
            // Already sent three times: refuse to send again
            System.out.println("The daily limit of 3 verification codes has been reached; no more codes can be sent today!");
            JedisUtil.close(jedis);
            return;
        }
        // Store the verification code in redis with a 1-minute expiry
        jedis.setex(codeKey, 60, verifyCode);
        JedisUtil.close(jedis);
    }

    /**
     * 3. Verify the code
     * @param phone    the user's phone number
     * @param userCode the code the user entered
     */
    public static void compareVerifyCode(String phone, String userCode) {
        String codeKey = "verifyCode" + phone + ":code";
        Jedis jedis = JedisUtil.getJedis();
        String realCode = jedis.get(codeKey);
        if (userCode.equals(realCode)) {
            System.out.println("Verification passed!");
        } else {
            System.out.println("Verification failed!");
        }
        JedisUtil.close(jedis);
    }

    // Test that the code is saved to redis and can only be sent three times
    // (caveat: the limit is 3 sends per 24-hour window starting from the first send, not per calendar day)
    @Test
    public void testSendVerify() {
        testSendRedis("15340506070");
    }

    // Test whether the entered code is correct
    @Test
    public void testCompareVerify() {
        compareVerifyCode("15340506070", "043895");
    }
}
```
    
    127.0.0.1:6379> get verifyCode15340506070:count
    "1"
    127.0.0.1:6379> get verifyCode15340506070:code
    "524139"
    127.0.0.1:6379> ttl verifyCode15340506070:count
    (integer) 86333
    127.0.0.1:6379> ttl verifyCode15340506070:code
    (integer) 39
    

2.Unified session management with redis

  • When nginx distributes requests to different tomcat instances, the session may not be shared between them
  • Solutions:
    • 1.Use ip_hash to pin all requests from one IP to the same server; the drawback is that load may become unbalanced
    • 2.Manage sessions centrally with redis
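Option 1 above maps to a small nginx upstream block; a sketch assuming two tomcat backends (the addresses are placeholders):

```nginx
upstream tomcat_cluster {
    # Hash on the client IP so each client always lands on the same tomcat
    ip_hash;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://tomcat_cluster;
    }
}
```

If one backend goes down, its clients are rehashed to the remaining servers and lose their sessions, which is why option 2 (redis-backed sessions) is the more robust choice.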

3.Flash sale (seckill) example
