Redis

Redis (Remote Dictionary Server) is an open-source, networked, in-memory (and optionally persisted, log-structured) key-value database written in ANSI C, with APIs for many languages.

Redis is one of the most popular NoSQL technologies today.

Redis is an open-source (BSD-licensed), in-memory data-structure store that can be used as a database, a cache, and a message broker. It supports many kinds of data structures, such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, HyperLogLogs, and geospatial indexes with radius queries. Redis has built-in replication, Lua scripting, LRU eviction, transactions, and different levels of on-disk persistence, and it provides high availability through Redis Sentinel and automatic partitioning with Redis Cluster.

Installing Redis

1. Download the package

Redis official site

Redis Chinese site

Download link (the examples below use version 6.2.2)

2. Upload the package to the Linux server with Xftp

3. Extract the archive

# switch to the directory the package was uploaded to
cd /home/
# extract the archive
tar -xzvf redis-6.2.2.tar.gz
# move it to the /opt directory
mv redis-6.2.2 /opt/

4. Install gcc

# installing gcc needs network access on the server, otherwise it will fail
yum install gcc-c++

5. Build and install

# enter the Redis directory
cd /opt/redis-6.2.2/
# build and install
# enter Redis's src directory
cd src/
# install the server and client binaries (this compiles them first)
make install

6. Edit the configuration file

# go to Redis's default startup path
cd /usr/local/bin/
# create a folder to hold the configuration
mkdir redisconf
# move the shipped config file into the new folder
mv /opt/redis-6.2.2/redis.conf /usr/local/bin/redisconf/
# edit the config file (install vim first if you don't have it)
vim /usr/local/bin/redisconf/redis.conf
# in the config file, find daemonize and change it to yes, then save

7. Start and test

# change to the startup directory
cd /usr/local/bin/
# start the server with the config file
redis-server redisconf/redis.conf
# start the client
redis-cli -p 6379
# when the 127.0.0.1:6379> prompt appears, the startup succeeded

Benchmarking Redis
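Redis ships with a benchmarking tool, redis-benchmark, installed alongside redis-server. A minimal sketch of a run against a local instance (the client and request counts below are just example values):

# 100 concurrent clients, 100,000 total requests, against 127.0.0.1:6379
redis-benchmark -h 127.0.0.1 -p 6379 -c 100 -n 100000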

Basic Redis commands

Redis has 16 databases (0-15) and uses database 0 by default; we can switch databases with the select command.

# select switches the database
127.0.0.1:6379> select 15
127.0.0.1:6379[15]> 

Check the database size: DBSIZE

# DBSIZE returns the number of keys in the current database
127.0.0.1:6379> DBSIZE
(integer) 1
127.0.0.1:6379> 

List all keys in the database: keys *

# list all keys
127.0.0.1:6379> keys *
1) "name"
127.0.0.1:6379> 

Add a key: set

# set a key in the database
127.0.0.1:6379> set age 18
127.0.0.1:6379> 

Get a key's value: get

# get <key> returns the key's value
127.0.0.1:6379> get name
"fdfgfdd"
127.0.0.1:6379> 

Check whether a key exists: exists

# returns 1 if the key exists, 0 if it does not
127.0.0.1:6379> EXISTS name
(integer) 1
127.0.0.1:6379> EXISTS name1
(integer) 0

Set a key to expire after N seconds: expire

# set the key to expire after the given number of seconds
127.0.0.1:6379> EXPIRE name 10
(integer) 1
# check the remaining ttl: -2 means already expired, -1 means no expiry, anything else is the seconds left
127.0.0.1:6379> ttl name
(integer) 3

Move a key to another database: move

# move a key to another database
127.0.0.1:6379> move name 1
(integer) 1

Get a key's type: type

# get the type of a key
127.0.0.1:6379[1]> TYPE name
string

Clear databases: flushdb / flushall

# FLUSHDB clears the current database
127.0.0.1:6379> FLUSHDB
# FLUSHALL clears all databases
127.0.0.1:6379> FLUSHALL
127.0.0.1:6379> 

Basic data types

String

Append to a string: append

# append appends to a string
127.0.0.1:6379> set k1 v1
127.0.0.1:6379> set k2 v2
127.0.0.1:6379> set k3 v3
127.0.0.1:6379> APPEND k1 hello # append the string
(integer) 7
127.0.0.1:6379> get k1 # check the value after appending
"v1hello"
127.0.0.1:6379> 

Get the string length: strlen

# get the string length
127.0.0.1:6379> STRLEN k1
(integer) 7

Increment and decrement: incr (increment) / decr (decrement)

# increment/decrement (only works on integer values)
127.0.0.1:6379> INCR k4
(integer) 1
127.0.0.1:6379> INCR k4
(integer) 2
127.0.0.1:6379> DECR k4
(integer) 1
127.0.0.1:6379> decr k4
(integer) 0

Increment/decrement by a step: incrby / decrby

# increment/decrement by a given amount (only works on integer values)
127.0.0.1:6379> INCRBY k4 4
(integer) 4
127.0.0.1:6379> INCRBY k4 4
(integer) 8
127.0.0.1:6379> DECRBY k4 3
(integer) 5
127.0.0.1:6379> DECRBY k4 3
(integer) 2

Get a substring by index range: getrange

# get a substring by index: getrange <key> <start> <end> (-1 means to the end of the string)
127.0.0.1:6379> set k1 hello,wangyxing
127.0.0.1:6379> get k1
"hello,wangyxing"
127.0.0.1:6379> GETRANGE k1 0 5	
"hello,"
127.0.0.1:6379> GETRANGE k1 0 -1
"hello,wangyxing"
127.0.0.1:6379> 

Overwrite part of a string: setrange

# overwrite the string starting at a given offset
127.0.0.1:6379> set k2 abcdefg
127.0.0.1:6379> SETRANGE k2 2 xx
(integer) 7
127.0.0.1:6379> get k2
"abxxefg"
127.0.0.1:6379> 

Set a value with an expiry: setex

# set a value with an expiry: setex <key> <seconds> <value>
127.0.0.1:6379> setex k3 30 hello

Set only if absent: setnx

# set only if the key does not exist; if it already exists, nothing is set
# setnx <key> <value>
127.0.0.1:6379> set ifk1 hello
127.0.0.1:6379> setnx ifk1 hhh
(integer) 0
127.0.0.1:6379> 

Set multiple keys: mset / msetnx

# set multiple keys at once
# mset key value [key value ...]
127.0.0.1:6379> mset k1 v1 k2 v2 k3 v3
127.0.0.1:6379> keys *
1) "k3"
2) "k2"
3) "k1"
# msetnx is atomic: either all keys are set or none are
127.0.0.1:6379> MSETNX k1 v1 k4 v4 # fails because k1 already exists
(integer) 0
127.0.0.1:6379> MSETNX k5 v5 k4 v4
(integer) 1

Get multiple keys: mget

# get multiple keys: mget key [key ...]
127.0.0.1:6379> MGET k1 k2 k3 k4 k5
1) "v1"
2) "v2"
3) "v3"
4) "v4"
5) "v5"
# storing an object: mset can set several attributes at once; here we set the attributes of user 1
127.0.0.1:6379> mset user:1:name zhangsan user:1:age 18 user:1:sex nan
127.0.0.1:6379> MGET user:1:name user:1:age user:1:sex
1) "zhangsan"
2) "18"
3) "nan"

Get the old value, then set a new one: getset

# return the current value, then set the new one
127.0.0.1:6379> getset k1 v1
(nil)
127.0.0.1:6379> getset k1 v2
"v1"
127.0.0.1:6379> getset k1 v3
"v2"

String use cases

Counters for all kinds of metrics: follower counts, following counts, comment counts.
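For instance, a per-article view counter can be a single string key incremented atomically; the key name below is just an illustration:

127.0.0.1:6379> INCR article:1:views
(integer) 1
127.0.0.1:6379> INCRBY article:1:views 10
(integer) 11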

List

Because of how Redis lists are designed, a Redis list can be used as a stack, a queue, a blocking queue, and so on.
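A rough sketch of the idea: pushing and popping on the same end behaves as a stack (LIFO), while pushing on one end and popping on the other behaves as a queue (FIFO):

# stack: push left, pop left
127.0.0.1:6379> LPUSH s a b c
(integer) 3
127.0.0.1:6379> LPOP s
"c"
# queue: push left, pop right
127.0.0.1:6379> LPUSH q a b c
(integer) 3
127.0.0.1:6379> RPOP q
"a"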

Push and pop values (from the left): lpush / lpop (rpush and rpop work the same way on the right and are not shown)

# push values onto the list
# lpush <list> value [value ...]
127.0.0.1:6379> LPUSH mylist hello 0 wangyuxing 2 v1
(integer) 5
# lpop <list> [count, default 1]
127.0.0.1:6379> LPOP mylist
"v1"
127.0.0.1:6379> LPOP mylist 2
1) "2"
2) "wangyuxing"

View elements: lrange

# lrange <key> <start> <stop>
127.0.0.1:6379> LPUSH mylist hello 0 wangyuxing 2 v1
(integer) 5
127.0.0.1:6379> LRANGE mylist 0 -1
1) "v1"
2) "2"
3) "wangyuxing"
4) "0"
5) "hello"
127.0.0.1:6379> LRANGE mylist 0 3
1) "v1"
2) "2"
3) "wangyuxing"
4) "0"

Get an element by index: lindex

127.0.0.1:6379> LPUSH mylist hello 0 wangyuxing 2 v1
(integer) 5
127.0.0.1:6379> LINDEX mylist 0
"v1"
127.0.0.1:6379> LINDEX mylist 2
"wangyuxing"

Get the length: llen

# get the list length
127.0.0.1:6379> LLEN mylist
(integer) 5

Remove specific values: lrem

# lrem <key> <count> <value> (remove count occurrences, counting from head to tail)
127.0.0.1:6379> lrem mylist 1  wangyuxing
(integer) 1
127.0.0.1:6379> LRANGE mylist 0 -1
1) "v1"
2) "2"
3) "0"
4) "hello"

Trim a list: ltrim

127.0.0.1:6379> LPUSH mylist hello 0 wangyuxing 2 v1 
(integer) 5
127.0.0.1:6379> LRANGE mylist 0 -1
1) "v1"
2) "2"
3) "wangyuxing"
4) "0"
5) "hello"
127.0.0.1:6379> LTRIM mylist 1 3 # keep only indexes 1 through 3
127.0.0.1:6379> LRANGE mylist 0 -1 # view the list after trimming
1) "2"
2) "wangyuxing"
3) "0"
127.0.0.1:6379> 

Combined command: rpoplpush (pops from the right of the source list and pushes onto the left of the destination list)

# move the last element of one list to the head of another
127.0.0.1:6379> LPUSH mylist hello 0 wangyuxing 2 v1
(integer) 5
127.0.0.1:6379> RPOPLPUSH mylist newlist
"hello"
127.0.0.1:6379> LRANGE mylist 0 -1
1) "v1"
2) "2"
3) "wangyuxing"
4) "0"
127.0.0.1:6379> LRANGE newlist 0 -1
1) "hello"

Check whether a list exists: exists

127.0.0.1:6379> EXISTS mylist
(integer) 1

Update a value in a list: lset

# lset updates an existing index; it errors if the key or index does not exist
127.0.0.1:6379> LRANGE mylist 0 -1
1) "v1"
2) "2"
3) "wangyuxing"
4) "0"
# lset mylist <index> <value>
127.0.0.1:6379> lset mylist 1 3
127.0.0.1:6379> LRANGE mylist 0 -1
1) "v1"
2) "3"
3) "wangyuxing"
4) "0"

Insert an element: linsert

# insert an element
127.0.0.1:6379> LRANGE mylist 0 -1
1) "v1"
2) "3"
3) "wangyuxing"
4) "0"
# insert 'insert' before the element '0' in mylist
127.0.0.1:6379> LINSERT mylist before 0 insert
(integer) 5
127.0.0.1:6379> LRANGE mylist 0 -1
1) "v1"
2) "3"
3) "wangyuxing"
4) "insert"
5) "0"
# insert 'insertafter' after the element 'insert' in mylist
127.0.0.1:6379> LINSERT mylist after insert insertafter
(integer) 6
127.0.0.1:6379> LRANGE mylist 0 -1
1) "v1"
2) "3"
3) "wangyuxing"
4) "insert"
5) "insertafter"
6) "0"

Set

Elements in a set cannot repeat.

Add and view elements: sadd (add) / smembers (view)

# add elements: sadd <key> value [value ...]
127.0.0.1:6379> SADD myset set1 set2
(integer) 2
# SMEMBERS <key>
127.0.0.1:6379> SMEMBERS myset
1) "set2"
2) "set1"
127.0.0.1:6379> 

Remove elements: srem

# srem <key> <member>
127.0.0.1:6379> SREM myset set1
(integer) 1

Check membership: sismember

# check whether a member is in the set
# SISMEMBER <key> <member>
127.0.0.1:6379> SISMEMBER myset set1
(integer) 1
127.0.0.1:6379> SISMEMBER myset set
(integer) 0

Get the number of members: scard

# get the cardinality of the set
127.0.0.1:6379> SCARD myset
(integer) 2

Get random members: srandmember

# get random members: srandmember <key> [count, default 1]
127.0.0.1:6379> SMEMBERS myset
1) "set6"
2) "set2"
3) "set1"
4) "set4"
5) "set5"
6) "set7"
7) "set3"
8) "set8"
9) "set9"
127.0.0.1:6379> SRANDMEMBER myset
"set1"
127.0.0.1:6379> SRANDMEMBER myset
"set1"
127.0.0.1:6379> SRANDMEMBER myset
"set7"
127.0.0.1:6379> SRANDMEMBER myset
"set9"
# pass a count argument to fetch several at once
127.0.0.1:6379> SRANDMEMBER myset 4
1) "set6"
2) "set8"
3) "set1"
4) "set3"

Pop random members: spop

# remove random members: spop <key> [count, default 1]
127.0.0.1:6379> SMEMBERS myset
1) "set4"
2) "set2"
3) "set6"
4) "set8"
5) "set5"
6) "set1"
7) "set7"
8) "set3"
9) "set9"
127.0.0.1:6379> spop myset
"set1"
127.0.0.1:6379> spop myset
"set8"
127.0.0.1:6379> spop myset 2
1) "set3"
2) "set9"

Move a member to another set: smove

# move the given member to another set
127.0.0.1:6379> SADD key a b c d
(integer) 4
127.0.0.1:6379> SMOVE key key1 d
(integer) 1
127.0.0.1:6379> SMEMBERS key1
1) "d"

Set operations: sdiff (difference) / sinter (intersection) / sunion (union)

127.0.0.1:6379> sadd key1 a b c d e
(integer) 5
127.0.0.1:6379> sadd key a b f j k l 
(integer) 6
# members of key1 that key does not have
127.0.0.1:6379> SDIFF key1 key
1) "e"
2) "c"
3) "d"
127.0.0.1:6379> SINTER key1 key
1) "a"
2) "b"
127.0.0.1:6379> SUNION key1 key
1) "j"
2) "k"
3) "f"
4) "d"
5) "b"
6) "a"
7) "e"
8) "l"
9) "c"

Hash (field-value map)

The type is key → map: one key holding field-value pairs.

Add values: hset <key> <field> <value> [field value ...]

127.0.0.1:6379> HSET myhash name zhansan age 4 sex nan
(integer) 3
127.0.0.1:6379> HGET myhash name
"zhansan"
127.0.0.1:6379> HGET myhash age
"4"
127.0.0.1:6379> HGET myhash sex
"nan"
127.0.0.1:6379> HGETALL myhash
1) "name"
2) "zhansan"
3) "age"
4) "4"
5) "sex"
6) "nan"
127.0.0.1:6379> HMGET myhash name sex
1) "zhansan"
2) "nan"

Get values: hget (one field) / hmget (several fields) / hgetall (all fields and values)

# hget (one field)
127.0.0.1:6379> HGET myhash name
"zhansan"
127.0.0.1:6379> HGET myhash age
"4"
# hmget (several fields)
127.0.0.1:6379> HMGET myhash name sex
1) "zhansan"
2) "nan"
# hgetall (all fields and values)
127.0.0.1:6379> HGETALL myhash
1) "name"
2) "zhansan"
3) "age"
4) "4"
5) "sex"
6) "nan"

Delete fields: hdel

127.0.0.1:6379> HDEL myhash name
(integer) 1
127.0.0.1:6379> HGETALL myhash
1) "age"
2) "4"
3) "sex"
4) "nan"

Get the number of fields: hlen

127.0.0.1:6379> hlen myhash
(integer) 2

Check whether a field exists: hexists

127.0.0.1:6379> HEXISTS myhash name
(integer) 0
127.0.0.1:6379> HEXISTS myhash age
(integer) 1

Get all fields or all values: hkeys (all fields) / hvals (all values)

127.0.0.1:6379> HKEYS myhash
1) "age"
2) "sex"
127.0.0.1:6379> HVALS myhash
1) "4"
2) "nan"

Increment a field: hincrby

127.0.0.1:6379> HSET myhash num 2
(integer) 1
127.0.0.1:6379> HINCRBY myhash num 4
(integer) 6
127.0.0.1:6379> HGET myhash num
"6"

Set a field only if absent: hsetnx

# the field is set only if it does not already exist
127.0.0.1:6379> HSETNX myhash num 5
(integer) 0
127.0.0.1:6379> HSETNX myhash num1 1
(integer) 1

Storing objects

# hashes are better suited to storing objects
127.0.0.1:6379> HSET user:1 name xiaohong age 18 sex nan
(integer) 3

Zset (sorted set)

On top of set, every member gets a score used for ordering: zadd key score member.

Add and view values: zadd (add) / zrange (view)

# add values: zadd <key> <score> <member> [score member ...]
127.0.0.1:6379> ZADD myset 1 one 2 two 3 three
(integer) 3
# view values
127.0.0.1:6379> ZRANGE myset 0 -1 # ascending
1) "one"
2) "two"
3) "three"
# view values in reverse order
127.0.0.1:6379> ZREVRANGE myset 0 -1 # descending
1) "three"
2) "two"
3) "one"

Sorting ascending: zrangebyscore <key> <min> <max> [withscores] [limit offset count]

127.0.0.1:6379> ZADD sorce 70 zhangsan 80 lisi 66 wangwu
(integer) 3
127.0.0.1:6379> ZRANGEBYSCORE sorce -inf +inf
1) "wangwu"
2) "zhangsan"
3) "lisi"

Sorting descending: zrevrangebyscore <key> <max> <min> [withscores] [limit offset count]

127.0.0.1:6379> ZREVRANGEBYSCORE sorce +inf -inf
1) "lisi"
2) "zhangsan"
3) "wangwu"

Remove members: zrem

127.0.0.1:6379> ZREM sorce lisi wangwu
(integer) 2
127.0.0.1:6379> ZRANGE sorce 0 -1
1) "zhangsan"

Get the number of members: zcard

# zcard <key>
127.0.0.1:6379> ZCARD sorce
(integer) 3

Count members in a score range: zcount

127.0.0.1:6379> ZRANGEBYSCORE sorce -inf +inf withscores
1) "wangwu"
2) "66"
3) "zhangsan"
4) "70"
5) "lisi"
6) "80"
# count the members of sorce with scores between 70 and 100
127.0.0.1:6379> ZCOUNT sorce 70 100
(integer) 2

Typical uses: leaderboards, weighted ranking, and anything else that needs sorting; a quick sketch follows.
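As a sketch, a leaderboard is just a zset read from the highest score down (reusing the sorce data from above):

# top 3, highest score first
127.0.0.1:6379> ZREVRANGE sorce 0 2 withscores
1) "lisi"
2) "80"
3) "zhangsan"
4) "70"
5) "wangwu"
6) "66"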

Special data types

Geospatial (geo)

Used for location features: nearby people, ride-hailing distance calculation, and other distance-related information.

A site for looking up longitude/latitude coordinates: Baidu's 拾取坐标系统 coordinate picker (baidu.com).

Command usage

Add locations: geoadd

# add some locations for testing
# format: geoadd <key> <longitude> <latitude> <member>
127.0.0.1:6379> GEOADD china:city 116.277132 39.9655828 beijing
(integer) 1
127.0.0.1:6379> GEOADD china:city 106.559613 26.616218 guiyan
(integer) 0
127.0.0.1:6379> GEOADD china:city 121.470766 31.235188 shanghai
(integer) 1
127.0.0.1:6379> GEOADD china:city 114.066277 22.550325 shenzhen
(integer) 1
127.0.0.1:6379> GEOADD china:city 106.557287 29.561833 chongqin
(integer) 1

Get a city's coordinates: geopos

# geopos <key> <member> [member ...]
127.0.0.1:6379> GEOPOS china:city beijing
1) 1) "116.27713233232498169" #经度
   2) "39.96558346694765618"  #维度
127.0.0.1:6379> GEOPOS china:city beijing shenzhen
1) 1) "116.27713233232498169"
   2) "39.96558346694765618"
2) 1) "114.06627863645553589"
   2) "22.55032549190071478"

Straight-line distance between two members: geodist

  • m for meters.
  • km for kilometers.
  • mi for miles.
  • ft for feet.

# geodist <key> <member1> <member2> [unit, default m]
127.0.0.1:6379> geodist china:city beijing shanghai
"1078155.7397"
127.0.0.1:6379> geodist china:city beijing shanghai km
"1078.1557"

Find members within a radius of a point: GEORADIUS

# georadius <key> <longitude> <latitude> <radius> <m|km|ft|mi> [withdist: show the distance from the center] [count n: limit the results]
127.0.0.1:6379> GEORADIUS china:city 110 30 500 km
1) "chongqin"
127.0.0.1:6379> GEORADIUS china:city 110 30 1000 km
1) "guiyan"
2) "liupanshuishi"
3) "chongqin"
4) "shenzhen"
127.0.0.1:6379> GEORADIUS china:city 110 30 500 km withdist
1) 1) "chongqin"
   2) "335.8886"
127.0.0.1:6379> GEORADIUS china:city 110 30 700 km withdist
1) 1) "guiyan"
   2) "505.0571"
2) 1) "liupanshuishi"
   2) "505.0571"
3) 1) "chongqin"
   2) "335.8886"
127.0.0.1:6379> GEORADIUS china:city 110 30 700 km withdist count 2
1) 1) "chongqin"
   2) "335.8886"
2) 1) "guiyan"
   2) "505.0571"

Find members around an existing member: georadiusbymember

# georadiusbymember <key> <member> <radius> <m|km|ft|mi> [withdist: show the distance from the center] [count n: limit the results]
127.0.0.1:6379> GEORADIUSBYMEMBER china:city shanghai 1500 km withdist count 3
1) 1) "shanghai"
   2) "0.0000"
2) 1) "beijing"
   2) "1078.1557"
3) 1) "shenzhen"
   2) "1212.7005"

Geo is implemented on top of zset under the hood, so we can use zset commands to manipulate geo keys.

127.0.0.1:6379> ZRANGE china:city 0 -1 # list the members
1) "guiyan"
2) "liupanshuishi"
3) "chongqin"
4) "shenzhen"
5) "shanghai"
6) "beijing"
127.0.0.1:6379> ZREM china:city guiyan # remove a member
(integer) 1
127.0.0.1:6379> ZREM china:city chongqin
(integer) 1
127.0.0.1:6379> ZRANGE china:city 0 -1
1) "liupanshuishi"
2) "shenzhen"
3) "shanghai"
4) "beijing"

HyperLogLog

Counts cardinality: the number of distinct elements.

The traditional approach stores user ids in a set and uses the set's cardinality as the count.
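For comparison, a sketch of that traditional approach (the ids are made up): it is exact, but memory grows with the number of distinct ids, which is what HyperLogLog avoids.

127.0.0.1:6379> SADD page:uv u1001 u1002 u1001
(integer) 2
127.0.0.1:6379> SCARD page:uv
(integer) 2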

Counting with HyperLogLog

Advantage: it uses a fixed amount of memory, only 12 KB per key.

Drawback: the count is approximate, with a standard error of about 0.81%.

# add elements
127.0.0.1:6379> PFADD mykey a b c d e f g h i j k l
(integer) 1
# count them
127.0.0.1:6379> PFCOUNT mykey
(integer) 12
# add elements to a second key
127.0.0.1:6379> PFADD mykey1 i j z x c v b e
(integer) 1
# merge keys (duplicates count once; note the source below is the never-created mykey2, so the result only reflects mykey)
127.0.0.1:6379> PFMERGE mkykey3 mykey mykey2
# count the merged key
127.0.0.1:6379> PFCOUNT mkykey3
(integer) 12

Bitmaps

Bitmaps store data as bits, with only two states, 0 and 1, and are commonly used to record two-state values.

For example: clock-ins, or daily login status.

Advantage: extremely memory-efficient, one byte holds 8 states.

# record a value at the given bit offset
# setbit <key> <offset> <value> (value can only be 0 or 1)
127.0.0.1:6379> SETBIT denglu 1 0
(integer) 0
127.0.0.1:6379> SETBIT denglu 2 1
(integer) 0
127.0.0.1:6379> SETBIT denglu 3 1
(integer) 0
127.0.0.1:6379> SETBIT denglu 4 0
(integer) 0
127.0.0.1:6379> SETBIT denglu 5 1
(integer) 0
127.0.0.1:6379> SETBIT denglu 6 0
(integer) 0
127.0.0.1:6379> SETBIT denglu 7 1
(integer) 0
# read the bit at an offset: getbit <key> <offset>
127.0.0.1:6379> GETBIT denglu 6
(integer) 0
127.0.0.1:6379> GETBIT denglu 3
(integer) 1
# count all bits set to 1: bitcount <key> [start byte] [end byte]
127.0.0.1:6379> BITCOUNT denglu
(integer) 4
# note the relationship between bytes and bits: one byte is 8 bits
# the bits above sit at offsets 0-7, i.e. in byte 0, so they are only found there
127.0.0.1:6379> BITCOUNT denglu 4 7
(integer) 0
127.0.0.1:6379> BITCOUNT denglu 0 1
(integer) 4

A single Redis command is atomic, but Redis transactions are not atomic.

Redis transactions

  • Open the transaction (multi)
  • Queue commands (ordinary commands)
  • Execute the transaction (exec)

# open the transaction
127.0.0.1:6379> MULTI
# commands are queued
127.0.0.1:6379(TX)> set k1 v1
QUEUED
127.0.0.1:6379(TX)> set k2 v2
QUEUED
127.0.0.1:6379(TX)> get k2
QUEUED
127.0.0.1:6379(TX)> set k3 v3
QUEUED
127.0.0.1:6379(TX)> get k3
QUEUED
# execute the transaction
127.0.0.1:6379(TX)> exec
1) OK
2) OK
3) "v2"
4) OK
5) "v3"

Abandoning a transaction: discard

# open the transaction
127.0.0.1:6379> MULTI
127.0.0.1:6379(TX)> set k1 v1
QUEUED
127.0.0.1:6379(TX)> set k2 v2
QUEUED
# abandon the transaction
127.0.0.1:6379(TX)> DISCARD
# the transaction was abandoned, so the key was never set
127.0.0.1:6379> get k1
(nil)

Errors in transactions: compile-type errors and runtime errors

Compile-type error: the bad command is rejected while being queued, and the whole transaction is discarded.

Runtime error: only the failing command has no effect; the others execute normally.

# compile-type error
127.0.0.1:6379> MULTI # open the transaction
127.0.0.1:6379(TX)> set k1 v1
QUEUED
127.0.0.1:6379(TX)> getset k1  # the command is rejected as it is queued
(error) ERR wrong number of arguments for 'getset' command
127.0.0.1:6379(TX)> set k2 v2
QUEUED
127.0.0.1:6379(TX)> exec # execute the transaction
(error) EXECABORT Transaction discarded because of previous errors.
127.0.0.1:6379> get k1 # no value: the transaction never ran
(nil)

# runtime error
127.0.0.1:6379> MULTI # open the transaction
127.0.0.1:6379(TX)> set k1 v1
QUEUED
127.0.0.1:6379(TX)> set k2 v2 k3 v3 # bad command: it queues fine but fails when executed
QUEUED
127.0.0.1:6379(TX)> set k4 v4
QUEUED
127.0.0.1:6379(TX)> get k2
QUEUED
127.0.0.1:6379(TX)> get k4
QUEUED
127.0.0.1:6379(TX)> exec # execute the transaction
1) OK
2) (error) ERR syntax error # the failing command's error
3) OK
4) (nil) # the failing command never ran, so k2 has no value
5) "v4" # k4 reads fine, proving the rest executed normally

Optimistic locking in Redis

Redis can watch keys for changes, which can be used to implement optimistic locking.

Watching keys: watch

# normal execution
127.0.0.1:6379> set num 10
127.0.0.1:6379> WATCH num
127.0.0.1:6379> MULTI
127.0.0.1:6379(TX)> INCRBY num 2
QUEUED
127.0.0.1:6379(TX)> INCRBY num 4
QUEUED
127.0.0.1:6379(TX)> exec # on success the watch is cleared automatically
1) (integer) 12
2) (integer) 16

# failed execution
127.0.0.1:6379> WATCH num
127.0.0.1:6379> MULTI
127.0.0.1:6379(TX)> DECRBY num 10
QUEUED
# before exec, open a second client to simulate another thread cutting in, and change the value
127.0.0.1:6379> get num
127.0.0.1:6379> set num 10
# back in the original client, execute the transaction
127.0.0.1:6379(TX)> exec # exec returns nil: the update failed
(nil)

unwatch: stop watching

Remember: if the transaction fails, cancel the watch manually (or watch again) before retrying.

127.0.0.1:6379> UNWATCH

Jedis

What is Jedis?

Jedis is a Java client for operating Redis.

1. Create an empty Maven project

2. Add the dependency

    <!-- https://mvnrepository.com/artifact/redis.clients/jedis -->
    <dependency>
        <groupId>redis.clients</groupId>
        <artifactId>jedis</artifactId>
        <version>3.6.0</version>
    </dependency>
    

3. Test

Database operations

package com.wyx;
import redis.clients.jedis.Jedis;
public class JedisTest {
    public static void main(String[] args) {
        // connect with the host and port
        System.out.println("################  database operations  ####################");
        Jedis jedis = new Jedis("192.168.137.129", 6379);
        System.out.println("ping the server: " + jedis.ping());
        System.out.println("select database 2: " + jedis.select(2));
        System.out.println("flush database 2: " + jedis.flushDB());
        System.out.println("all keys in database 2: " + jedis.keys("*"));
        System.out.println("flush all databases: " + jedis.flushAll());
        System.out.println("select database 0: " + jedis.select(0));
        System.out.println("set key name to 'zhang sang': " + jedis.set("name", "zhang sang"));
        System.out.println("get the value of name: " + jedis.get("name"));
        System.out.println("does key name exist: " + jedis.exists("name"));
        System.out.println("does key age exist: " + jedis.exists("age"));
        System.out.println("database size: " + jedis.dbSize());
        System.out.println("move key name to database 2: " + jedis.move("name", 2));
        System.out.println("select database 2: " + jedis.select(2));
        System.out.println("type of key name: " + jedis.type("name"));
        System.out.println("expire key name in 4 seconds: " + jedis.expire("name", 4));
        // a ttl of -2 means the key has already expired
        while (jedis.ttl("name") != -2) {
            System.out.println("key name expires in " + jedis.ttl("name") + " seconds");
            try {
                // sleep for one second
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        System.out.println("get name again: " + jedis.get("name"));
        System.out.println("database size: " + jedis.dbSize());
    }
}

Test output:

ping the server: PONG
select database 2: OK
flush database 2: OK
all keys in database 2: []
flush all databases: OK
select database 0: OK
set key name to 'zhang sang': OK
get the value of name: zhang sang
does key name exist: true
does key age exist: false
database size: 1
move key name to database 2: 1
select database 2: OK
type of key name: string
expire key name in 4 seconds: 1
key name expires in 4 seconds
key name expires in 3 seconds
key name expires in 2 seconds
key name expires in 1 seconds
get name again: null
database size: 0

String operations

package com.wyx;
import redis.clients.jedis.Jedis;
public class JedisTest {
    public static void main(String[] args) {
        // connect with the host and port
        System.out.println("################  string operations  ####################");
        Jedis jedis = new Jedis("192.168.137.129", 6379);
        jedis.flushDB();
        System.out.println("ping the server: " + jedis.ping());
        System.out.println("set key hello: " + jedis.set("hello", "hello"));
        System.out.println("value of hello: " + jedis.get("hello"));
        System.out.println("append to hello: " + jedis.append("hello", " this is a redis"));
        System.out.println("length of hello: " + jedis.strlen("hello"));
        System.out.println("substring by index (0~9): " + jedis.getrange("hello", 0, 9));
        System.out.println("substring by index (0~-1, the whole string): " + jedis.getrange("hello", 0, -1));
        System.out.println("overwrite from offset 6: " + jedis.setrange("hello", 6, "xxxxx"));
        System.out.println("value of hello: " + jedis.get("hello"));
        System.out.println("set key name with a 5-second expiry: " + jedis.setex("name", 5, "li si"));
        System.out.println("setnx hello (only succeeds if absent): " + jedis.setnx("hello", "ll"));
        System.out.println("setnx age (only succeeds if absent): " + jedis.setnx("age", "18"));
        System.out.println("set key num to a number: " + jedis.set("num", "10"));
        System.out.println("value of num: " + jedis.get("num"));
        System.out.println("increment num by 1: " + jedis.incr("num"));
        System.out.println("value of num: " + jedis.get("num"));
        System.out.println("increment num by 10: " + jedis.incrBy("num", 10));
        System.out.println("value of num: " + jedis.get("num"));
        System.out.println("decrement num by 1: " + jedis.decr("num"));
        System.out.println("value of num: " + jedis.get("num"));
        System.out.println("decrement num by 5: " + jedis.decrBy("num", 5));
        System.out.println("value of num: " + jedis.get("num"));
        System.out.println("###################################");
        System.out.println("see the Redis string commands for more operations");
    }
}

Test output:

################ string operations ####################
ping the server: PONG
set key hello: OK
value of hello: hello
append to hello: 21
length of hello: 21
substring by index (0~9): hello this
substring by index (0~-1, the whole string): hello this is a redis
overwrite from offset 6: 21
value of hello: hello xxxxxis a redis
set key name with a 5-second expiry: OK
setnx hello (only succeeds if absent): 0
setnx age (only succeeds if absent): 1
set key num to a number: OK
value of num: 10
increment num by 1: 11
value of num: 11
increment num by 10: 21
value of num: 21
decrement num by 1: 20
value of num: 20
decrement num by 5: 15
value of num: 15
###################################
see the Redis string commands for more operations

Jedis also has many methods for List, Set, Hash, Zset, geospatial, HyperLogLog, and Bitmaps; they mirror the commands covered above, so adapt them as needed. A short sketch follows.
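As a quick sketch of that pattern, here are a few of the analogous Jedis calls for the other types (using the same connection settings as above; key names are just examples):

package com.wyx;
import redis.clients.jedis.Jedis;
public class JedisOtherTypesTest {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("192.168.137.129", 6379);
        jedis.flushDB();
        // List: lpush / lrange
        jedis.lpush("mylist", "v1", "v2", "v3");
        System.out.println("list contents: " + jedis.lrange("mylist", 0, -1));
        // Set: sadd / smembers (the duplicate "a" is stored once)
        jedis.sadd("myset", "a", "b", "a");
        System.out.println("set contents: " + jedis.smembers("myset"));
        // Hash: hset / hgetAll
        jedis.hset("myhash", "name", "zhangsan");
        System.out.println("hash contents: " + jedis.hgetAll("myhash"));
        // Zset: zadd / zrange
        jedis.zadd("myzset", 70, "zhangsan");
        jedis.zadd("myzset", 80, "lisi");
        System.out.println("zset ascending: " + jedis.zrange("myzset", 0, -1));
        jedis.close();
    }
}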

Integrating with Spring Boot

1. Create a Spring Boot project

2. Add the following dependencies

    <!--SpringBoot集成的Redis启动依赖-->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-redis</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!--lombok依赖-->
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
    

3. Configure the connection

spring.redis.host=192.168.137.129
spring.redis.port=6379

4. Write a test

package com.wyx;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.redis.core.RedisTemplate;
@SpringBootTest
class SpringBootRedisApplicationTests {
    // inject the Redis template
    @Autowired
    RedisTemplate redisTemplate;
    @Test
    void contextLoads() {
        // set a key
        redisTemplate.opsForValue().set("name", "Wang Yu Xing");
        // get a key
        System.out.println(redisTemplate.opsForValue().get("name"));
    }
}

5. Test result

From what we have learned about Spring Boot, we only need to open spring.factories under META-INF in spring-boot-autoconfigure-2.4.5.jar and look for the Redis auto-configuration entries.

Open the RedisAutoConfiguration class listed there and look inside.

Look at the RedisProperties class to see which properties we can configure:

/* The properties below can be set when connecting to Redis.
   They go in the global config file application.properties, e.g. the two
   lines we used in the simple example:
   spring.redis.host=192.168.137.129
   spring.redis.port=6379
   which set the host to connect to and its port. */
@ConfigurationProperties(prefix = "spring.redis")
public class RedisProperties {
	/** Database index used by the connection factory. */
	private int database = 0;
	/** Connection URL. Overrides host, port, and password. User is ignored. Example:
	 *  redis://user:password@example.com:6379 */
	private String url;
	/** Redis server host. */
	private String host = "localhost";
	/** Login username of the redis server. */
	private String username;
	/** Login password of the redis server. */
	private String password;
	/** Redis server port. */
	private int port = 6379;
	/** Whether to enable SSL support. */
	private boolean ssl;
	/** Read timeout. */
	private Duration timeout;
	/** Connection timeout. */
	private Duration connectTimeout;
	/** Client name to be set on connections with CLIENT SETNAME. */
	private String clientName;
	/** Type of client to use. By default, auto-detected according to the classpath. */
	private ClientType clientType;
	private Sentinel sentinel;
	private Cluster cluster;
	// ... (getters, setters, and nested classes omitted)
}

Look at the default redisTemplate configuration:

public class RedisAutoConfiguration {
	@Bean
	@ConditionalOnMissingBean(name = "redisTemplate")
	@ConditionalOnSingleCandidate(RedisConnectionFactory.class)
	public RedisTemplate<Object, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) {
		// The default RedisTemplate sets up no serialization for us (needed for a
		// stable wire format) and is barely configured, so in practice we usually
		// define our own template bean to override it.
		RedisTemplate<Object, Object> template = new RedisTemplate<>();
		template.setConnectionFactory(redisConnectionFactory);
		return template;
	}
}

Because the default template has no JSON serialization configured, using it with a plain object fails, as the following example shows:

package com.wyx.pojo;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
@Data
@NoArgsConstructor
@AllArgsConstructor
public class User {
    private String name;
    private Integer age;
    private String sex;
}

package com.wyx;
import com.wyx.pojo.User;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.redis.core.RedisTemplate;
@SpringBootTest
class SpringBootRedisApplicationTests {
    @Autowired
    RedisTemplate redisTemplate;
    @Test
    void contextLoads() {
        User user = new User("张明", 18, "男");
        // set a key
        redisTemplate.opsForValue().set("user", user);
        // get a key
        System.out.println(redisTemplate.opsForValue().get("user"));
    }
}

Running the test throws a serialization error: the default JDK serializer requires User to implement Serializable. So in practice we override the template.

Customizing the RedisTemplate

package com.wyx.conf;
import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.PropertyAccessor;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;
@Configuration
public class RedisConfig {
    @Bean
    RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        Jackson2JsonRedisSerializer jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer(Object.class);
        ObjectMapper om = new ObjectMapper();
        om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        om.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
        jackson2JsonRedisSerializer.setObjectMapper(om);
        StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();
        // keys use String serialization
        template.setKeySerializer(stringRedisSerializer);
        // hash keys also use String serialization
        template.setHashKeySerializer(stringRedisSerializer);
        // values use Jackson serialization
        template.setValueSerializer(jackson2JsonRedisSerializer);
        // hash values use Jackson serialization
        template.setHashValueSerializer(jackson2JsonRedisSerializer);
        template.afterPropertiesSet();
        return template;
    }
}

Wrapping redisTemplate operations in a utility class

package com.wyx.utils;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;
import org.springframework.util.CollectionUtils;
import javax.annotation.Resource;
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;

@Component
@SuppressWarnings("all")
public class RedisUtil {
    @Resource
    private RedisTemplate<String, Object> redisTemplate;

    /**
     * Set a key's time to live.
     * @param key  the key
     * @param time TTL in seconds
     * @return true on success, false on failure
     */
    public boolean expire(String key, long time) {
        try {
            if (time > 0) {
                redisTemplate.expire(key, time, TimeUnit.SECONDS);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Get a key's remaining time to live.
     * @param key the key (must not be null)
     * @return TTL in seconds; -1 means the key never expires
     */
    public long getExpire(String key) {
        return redisTemplate.getExpire(key, TimeUnit.SECONDS);
    }

    /**
     * Check whether a key exists.
     * @param key the key
     * @return true if it exists, false otherwise
     */
    public boolean hasKey(String key) {
        try {
            return redisTemplate.hasKey(key);
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Delete one or more keys.
     * @param key one or more keys
     */
    public void del(String... key) {
        if (key != null && key.length > 0) {
            if (key.length == 1) {
                redisTemplate.delete(key[0]);
            } else {
                redisTemplate.delete((Collection<String>) CollectionUtils.arrayToList(key));
            }
        }
    }

    // ============================String=============================

    /**
     * Get a plain value.
     * @param key the key
     * @return the value
     */
    public Object get(String key) {
        return key == null ? null : redisTemplate.opsForValue().get(key);
    }

    /**
     * Set a plain value.
     * @param key   the key
     * @param value the value
     * @return true on success, false on failure
     */
    public boolean set(String key, Object value) {
        try {
            redisTemplate.opsForValue().set(key, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Set a plain value with an expiry.
     * @param key   the key
     * @param value the value
     * @param time  TTL in seconds; if time <= 0 the key never expires
     * @return true on success, false on failure
     */
    public boolean set(String key, Object value, long time) {
        try {
            if (time > 0) {
                redisTemplate.opsForValue().set(key, value, time, TimeUnit.SECONDS);
            } else {
                set(key, value);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Increment a value.
     * @param key   the key
     * @param delta how much to add (must be greater than 0)
     */
    public long incr(String key, long delta) {
        if (delta < 0) {
            throw new RuntimeException("the increment must be greater than 0");
        }
        return redisTemplate.opsForValue().increment(key, delta);
    }

    /**
     * Decrement a value.
     * @param key   the key
     * @param delta how much to subtract (must be greater than 0)
     */
    public long decr(String key, long delta) {
        if (delta < 0) {
            throw new RuntimeException("the decrement must be greater than 0");
        }
        return redisTemplate.opsForValue().increment(key, -delta);
    }

    // ================================Map=================================

    /**
     * HashGet.
     * @param key  the key (must not be null)
     * @param item the field (must not be null)
     * @return the value
     */
    public Object hget(String key, String item) {
        return redisTemplate.opsForHash().get(key, item);
    }

    /**
     * Get all field-value pairs of a hash key.
     * @param key the key
     * @return the field-value pairs
     */
    public Map<Object, Object> hmget(String key) {
        return redisTemplate.opsForHash().entries(key);
    }

    /**
     * HashSet: store several field-value pairs.
     * @param key the key
     * @param map the field-value pairs
     * @return true on success, false on failure
     */
    public boolean hmset(String key, Map<String, Object> map) {
        try {
            redisTemplate.opsForHash().putAll(key, map);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * HashSet with an expiry.
     * @param key  the key
     * @param map  the field-value pairs
     * @param time TTL in seconds
     * @return true on success, false on failure
     */
    public boolean hmset(String key, Map<String, Object> map, long time) {
        try {
            redisTemplate.opsForHash().putAll(key, map);
            if (time > 0) {
                expire(key, time);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Put a value into a hash, creating the hash if absent.
     * @param key   the key
     * @param item  the field
     * @param value the value
     * @return true on success, false on failure
     */
    public boolean hset(String key, String item, Object value) {
        try {
            redisTemplate.opsForHash().put(key, item, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Put a value into a hash with an expiry, creating the hash if absent.
     * @param key   the key
     * @param item  the field
     * @param value the value
     * @param time  TTL in seconds; note: replaces any TTL the hash already has
     * @return true on success, false on failure
     */
    public boolean hset(String key, String item, Object value, long time) {
        try {
            redisTemplate.opsForHash().put(key, item, value);
            if (time > 0) {
                expire(key, time);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Delete fields from a hash.
     * @param key  the key (must not be null)
     * @param item one or more fields (must not be null)
     */
    public void hdel(String key, Object... item) {
        redisTemplate.opsForHash().delete(key, item);
    }

    /**
     * Check whether a hash field exists.
     * @param key  the key (must not be null)
     * @param item the field (must not be null)
     * @return true if it exists, false otherwise
     */
    public boolean hHasKey(String key, String item) {
        return redisTemplate.opsForHash().hasKey(key, item);
    }

    /**
     * Increment a hash field, creating it if absent, and return the new value.
     * @param key  the key
     * @param item the field
     * @param by   how much to add (greater than 0)
     */
    public double hincr(String key, String item, double by) {
        return redisTemplate.opsForHash().increment(key, item, by);
    }

    /**
     * Decrement a hash field.
     * @param key  the key
     * @param item the field
     * @param by   how much to subtract (greater than 0)
     */
    public double hdecr(String key, String item, double by) {
        return redisTemplate.opsForHash().increment(key, item, -by);
    }

    // ============================set=============================

    /**
     * Get all members of a set.
     * @param key the key
     */
    public Set<Object> sGet(String key) {
        try {
            return redisTemplate.opsForSet().members(key);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    /**
     * Check whether a value is a member of a set.
     * @param key   the key
     * @param value the value
     * @return true if present, false otherwise
     */
    public boolean sHasKey(String key, Object value) {
        try {
            return redisTemplate.opsForSet().isMember(key, value);
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Add values to a set.
     * @param key    the key
     * @param values one or more values
     * @return the number of values added
     */
    public long sSet(String key, Object... values) {
        try {
            return redisTemplate.opsForSet().add(key, values);
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

    /**
     * Add values to a set with an expiry.
     * @param key    the key
     * @param time   TTL in seconds
     * @param values one or more values
     * @return the number of values added
     */
    public long sSetAndTime(String key, long time, Object... values) {
        try {
            Long count = redisTemplate.opsForSet().add(key, values);
            if (time > 0) {
                expire(key, time);
            }
            return count;
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

    /**
     * Get the size of a set.
     * @param key the key
     */
    public long sGetSetSize(String key) {
        try {
            return redisTemplate.opsForSet().size(key);
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

    /**
     * Remove values from a set.
     * @param key    the key
     * @param values one or more values
     * @return the number of values removed
     */
    public long setRemove(String key, Object... values) {
        try {
            Long count = redisTemplate.opsForSet().remove(key, values);
            return count;
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

    // ===============================list=================================

    /**
     * Get a range of a list.
     * @param key   the key
     * @param start start index
     * @param end   end index; 0 to -1 means all values
     */
    public List<Object> lGet(String key, long start, long end) {
        try {
            return redisTemplate.opsForList().range(key, start, end);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    /**
     * Get the length of a list.
     * @param key the key
     */
    public long lGetListSize(String key) {
        try {
            return redisTemplate.opsForList().size(key);
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

    /**
     * Get a list element by index.
     * @param key   the key
     * @param index index >= 0: 0 is the head, 1 the second element, and so on;
     *              index < 0: -1 is the tail, -2 the second to last, and so on
     */
    public Object lGetIndex(String key, long index) {
        try {
            return redisTemplate.opsForList().index(key, index);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    /**
     * Append a value to a list.
     * @param key   the key
     * @param value the value
     */
    public boolean lSet(String key, Object value) {
        try {
            redisTemplate.opsForList().rightPush(key, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Append a value to a list with an expiry.
     * @param key   the key
     * @param value the value
     * @param time  TTL in seconds
     */
    public boolean lSet(String key, Object value, long time) {
        try {
            redisTemplate.opsForList().rightPush(key, value);
            if (time > 0) {
                expire(key, time);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Append several values to a list.
     * @param key   the key
     * @param value the values
     */
    public boolean lSet(String key, List<Object> value) {
        try {
            redisTemplate.opsForList().rightPushAll(key, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Append several values to a list with an expiry.
     * @param key   the key
     * @param value the values
     * @param time  TTL in seconds
     */
    public boolean lSet(String key, List<Object> value, long time) {
        try {
            redisTemplate.opsForList().rightPushAll(key, value);
            if (time > 0) {
                expire(key, time);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Update a list element by index.
     * @param key   the key
     * @param index the index
     * @param value the value
     */
    public boolean lUpdateIndex(String key, long index, Object value) {
        try {
            redisTemplate.opsForList().set(key, index, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Remove count occurrences of a value from a list.
     * @param key   the key
     * @param count how many occurrences to remove
     * @param value the value
     * @return the number of elements removed
     */
    public long lRemove(String key, long count, Object value) {
        try {
            Long remove = redisTemplate.opsForList().remove(key, count, value);
            return remove;
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }
}

The Redis configuration file

    # Redis configuration file example.
    # Note that in order to read the configuration file, Redis must be
    # started with the file path as first argument:
    # note: the config file passed at startup must include its directory, like this:
    # ./redis-server /path/to/redis.conf
    # Note on units: when memory size is needed, it is possible to specify
    # it in the usual form of 1k 5GB 4M and so forth:
    # 1k => 1000 bytes
    # 1kb => 1024 bytes
    # 1m => 1000000 bytes
    # 1mb => 1024*1024 bytes
    # 1g => 1000000000 bytes
    # 1gb => 1024*1024*1024 bytes
    # units are case insensitive so 1GB 1Gb 1gB are all the same.
    ################################## INCLUDES ###################################
    # Include one or more other config files here.  This is useful if you
    # have a standard template that goes to all Redis servers but also need
    # to customize a few per-server settings.  Include files can include
    # other files, so use this wisely.
    # Note that option "include" won't be rewritten by command "CONFIG REWRITE"
    # from admin or Redis Sentinel. Since Redis always uses the last processed
    # line as value of a configuration directive, you'd better put includes
    # at the beginning of this file to avoid overwriting config change at runtime.
    # If instead you are interested in using includes to override configuration
    # options, it is better to use include as the last line.
    # include other config files to compose a single configuration
    # include /path/to/local.conf
    # include /path/to/other.conf
    ################################## MODULES #####################################
    # Load modules at startup. If the server is not able to load modules
    # it will abort. It is possible to use multiple loadmodule directives.
    # loadmodule /path/to/my_module.so
    # loadmodule /path/to/other_module.so
    ################################## NETWORK #####################################
    # By default, if no "bind" configuration directive is specified, Redis listens
    # for connections from all available network interfaces on the host machine.
    # It is possible to listen to just one or multiple selected interfaces using
    # the "bind" configuration directive, followed by one or more IP addresses.
    # Each address can be prefixed by "-", which means that redis will not fail to
    # start if the address is not available. Being not available only refers to
    # addresses that does not correspond to any network interfece. Addresses that
    # are already in use will always fail, and unsupported protocols will always BE
    # silently skipped.
    # Examples:
    # bind 192.168.1.100 10.0.0.1     # listens on two specific IPv4 addresses
    # bind 127.0.0.1 ::1              # listens on loopback IPv4 and IPv6
    # bind * -::*                     # like the default, all available interfaces
    # ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
    # internet, binding to all the interfaces is dangerous and will expose the
    # instance to everybody on the internet. So by default we uncomment the
    # following bind directive, that will force Redis to listen only on the
    # IPv4 and IPv6 (if available) loopback interface addresses (this means Redis
    # will only be able to accept client connections from the same host that it is
    # running on).
    # IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
    # JUST COMMENT OUT THE FOLLOWING LINE.
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    # the IPs to bind; clients cannot connect via addresses that are not bound (see the notes above)
    # when deploying on a server this must be configured
    bind 127.0.0.1 -::1
    # Protected mode is a layer of security protection, in order to avoid that
    # Redis instances left open on the internet are accessed and exploited.
    # When protected mode is on and if:
    # 1) The server is not binding explicitly to a set of addresses using the
    #    "bind" directive.
    # 2) No password is configured.
    # The server only accepts connections from clients connecting from the
    # IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
    # sockets.
    # By default protected mode is enabled. You should disable it only if
    # you are sure you want clients from other hosts to connect to Redis
    # even if no authentication is configured, nor a specific set of interfaces
    # are explicitly listed using the "bind" directive.
    # protected mode is on by default: with it off, any client can connect; with it on, only connections allowed by the bind rules above are accepted
    # whether to start in protected mode
    protected-mode yes
    # Accept connections on the specified port, default is 6379 (IANA #815344).
    # If port 0 is specified Redis will not listen on a TCP socket.
    # the port that accepts connections; also the port the service starts on by default
    port 6379
    # TCP listen() backlog.
    # In high requests-per-second environments you need a high backlog in order
    # to avoid slow clients connection issues. Note that the Linux kernel
    # will silently truncate it to the value of /proc/sys/net/core/somaxconn so
    # make sure to raise both the value of somaxconn and tcp_max_syn_backlog
    # in order to get the desired effect.
    # tcp-backlog: the backlog is a connection queue; backlog total = queue of incomplete three-way handshakes + queue of completed ones
    # in a high-concurrency environment you need a large backlog to avoid slow-client connection problems
    # note that the Linux kernel silently truncates it to the value of /proc/sys/net/core/somaxconn, so raise both somaxconn and tcp_max_syn_backlog to get the desired effect
    # a larger queue lets the server absorb bursts of incoming connections more smoothly
    # tcp-backlog 511
    # Unix socket.
    # Specify the path for the Unix socket that will be used to listen for
    # incoming connections. There is no default, so Redis will not listen
    # on a unix socket when not specified.
    # unixsocket /run/redis.sock
    # unixsocketperm 700
    # Close the connection after a client is idle for N seconds (0 to disable)
    # idle connection timeout; 0 disables the timeout
    timeout 0
    # TCP keepalive.
    # If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
    # of communication. This is useful for two reasons:
    # 1) Detect dead peers.
    # 2) Force network equipment in the middle to consider the connection to be
    #    alive.
    # On Linux, the specified value (in seconds) is the period used to send ACKs.
    # Note that to close the connection the double of the time is needed.
    # On other kernels the period depends on the kernel configuration.
    # A reasonable value for this option is 300 seconds, which is the new
    # Redis default starting with Redis 3.2.1.
    # keepalive interval in seconds; 0 disables keepalive probes
    tcp-keepalive 300
    ################################# TLS/SSL #####################################
    # By default, TLS/SSL is disabled. To enable it, the "tls-port" configuration
    # directive can be used to define TLS-listening ports. To enable TLS on the
    # default port, use:
    # port 0
    # tls-port 6379
    # Configure a X.509 certificate and private key to use for authenticating the
    # server to connected clients, masters or cluster peers.  These files should be
    # PEM formatted.
    # tls-cert-file redis.crt 
    # tls-key-file redis.key
    # If the key file is encrypted using a passphrase, it can be included here
    # as well.
    # tls-key-file-pass secret
    # Normally Redis uses the same certificate for both server functions (accepting
    # connections) and client functions (replicating from a master, establishing
    # cluster bus connections, etc.).
    # Sometimes certificates are issued with attributes that designate them as
    # client-only or server-only certificates. In that case it may be desired to use
    # different certificates for incoming (server) and outgoing (client)
    # connections. To do that, use the following directives:
    # tls-client-cert-file client.crt
    # tls-client-key-file client.key
    # If the key file is encrypted using a passphrase, it can be included here
    # as well.
    # tls-client-key-file-pass secret
    # Configure a DH parameters file to enable Diffie-Hellman (DH) key exchange:
    # tls-dh-params-file redis.dh
    # Configure a CA certificate(s) bundle or directory to authenticate TLS/SSL
    # clients and peers.  Redis requires an explicit configuration of at least one
    # of these, and will not implicitly use the system wide configuration.
    # tls-ca-cert-file ca.crt
    # tls-ca-cert-dir /etc/ssl/certs
    # By default, clients (including replica servers) on a TLS port are required
    # to authenticate using valid client side certificates.
    # If "no" is specified, client certificates are not required and not accepted.
    # If "optional" is specified, client certificates are accepted and must be
    # valid if provided, but are not required.
    # tls-auth-clients no
    # tls-auth-clients optional
    # By default, a Redis replica does not attempt to establish a TLS connection
    # with its master.
    # Use the following directive to enable TLS on replication links.
    # tls-replication yes
    # By default, the Redis Cluster bus uses a plain TCP connection. To enable
    # TLS for the bus protocol, use the following directive:
    # tls-cluster yes
    # By default, only TLSv1.2 and TLSv1.3 are enabled and it is highly recommended
    # that older formally deprecated versions are kept disabled to reduce the attack surface.
    # You can explicitly specify TLS versions to support.
    # Allowed values are case insensitive and include "TLSv1", "TLSv1.1", "TLSv1.2",
    # "TLSv1.3" (OpenSSL >= 1.1.1) or any combination.
    # To enable only TLSv1.2 and TLSv1.3, use:
    # tls-protocols "TLSv1.2 TLSv1.3"
    # Configure allowed ciphers.  See the ciphers(1ssl) manpage for more information
    # about the syntax of this string.
    # Note: this configuration applies only to <= TLSv1.2.
    # tls-ciphers DEFAULT:!MEDIUM
    # Configure allowed TLSv1.3 ciphersuites.  See the ciphers(1ssl) manpage for more
    # information about the syntax of this string, and specifically for TLSv1.3
    # ciphersuites.
    # tls-ciphersuites TLS_CHACHA20_POLY1305_SHA256
    # When choosing a cipher, use the server's preference instead of the client
    # preference. By default, the server follows the client's preference.
    # tls-prefer-server-ciphers yes
    # By default, TLS session caching is enabled to allow faster and less expensive
    # reconnections by clients that support it. Use the following directive to disable
    # caching.
    # tls-session-caching no
    # Change the default number of TLS sessions cached. A zero value sets the cache
    # to unlimited size. The default size is 20480.
    # tls-session-cache-size 5000
    # Change the default timeout of cached TLS sessions. The default timeout is 300
    # seconds.
    # tls-session-cache-timeout 60
    ################################# GENERAL #####################################
    # By default Redis does not run as a daemon. Use 'yes' if you need it.
    # Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
    # When Redis is supervised by upstart or systemd, this parameter has no impact.
    # whether to start as a background daemon; the default is no
    daemonize yes
    # If you run Redis from upstart or systemd, Redis can interact with your
    # supervision tree. Options:
    #   supervised no      - no supervision interaction
    #   supervised upstart - signal upstart by putting Redis into SIGSTOP mode
    #                        requires "expect stop" in your upstart job config
    #   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
    #                        on startup, and updating Redis status on a regular
    #                        basis.
    #   supervised auto    - detect upstart or systemd method based on
    #                        UPSTART_JOB or NOTIFY_SOCKET environment variables
    # Note: these supervision methods only signal "process is ready."
    #       They do not enable continuous pings back to your supervisor.
    # The default is "no". To run under upstart/systemd, you can simply uncomment
    # the line below:
    # supervision of the daemon process
    # supervised auto
    # If a pid file is specified, Redis writes it where specified at startup
    # and removes it at exit.
    # When the server runs non daemonized, no pid file is created if none is
    # specified in the configuration. When the server is daemonized, the pid file
    # is used even if not specified, defaulting to "/var/run/redis.pid".
    # Creating a pid file is best effort: if Redis is not able to create it
    # nothing bad happens, the server will start and run normally.
    # Note that on modern Linux systems "/run/redis.pid" is more conforming
    # and should be used instead.
    # when running as a background process we need to specify a pid file
    pidfile /var/run/redis_6379.pid
    # Specify the server verbosity level.
    # This can be one of:
    # debug (a lot of information, useful for development/testing)
    # verbose (many rarely useful info, but not a mess like the debug level)
    # notice (moderately verbose, what you want in production probably)
    # warning (only very important / critical messages are logged)
    # set the log level
    loglevel notice
    # Specify the log file name. Also the empty string can be used to force
    # Redis to log on the standard output. Note that if you use standard
    # output for logging but daemonize, logs will be sent to /dev/null
    # set the log file name
    logfile ""
    # To enable logging to the system logger, just set 'syslog-enabled' to yes,
    # and optionally update the other syslog parameters to suit your needs.
    # syslog-enabled no
    # Specify the syslog identity.
    # syslog-ident redis
    # Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
    # syslog-facility local0
    # To disable the built in crash log, which will possibly produce cleaner core
    # dumps when they are needed, uncomment the following:
    # crash-log-enabled no
    # To disable the fast memory check that's run as part of the crash log, which
    # will possibly let redis terminate sooner, uncomment the following:
    # crash-memcheck-enabled no
    # Set the number of databases. The default database is DB 0, you can select
    # a different one on a per-connection basis using SELECT <dbid> where
    # dbid is a number between 0 and 'databases'-1
    # set the default number of databases
    databases 16
    # By default Redis shows an ASCII art logo only when started to log to the
    # standard output and if the standard output is a TTY and syslog logging is
    # disabled. Basically this means that normally a logo is displayed only in
    # interactive sessions.
    # However it is possible to force the pre-4.0 behavior and always show a
    # ASCII art logo in startup logs by setting the following option to yes.
    # Whether to always show the ASCII logo in startup logs
    always-show-logo no
    # By default, Redis modifies the process title (as seen in 'top' and 'ps') to
    # provide some runtime information. It is possible to disable this and leave
    # the process name as executed by setting the following to no.
    set-proc-title yes
    # When changing the process title, Redis uses the following template to construct
    # the modified title.
    # Template variables are specified in curly brackets. The following variables are
    # supported:
    # {title}           Name of process as executed if parent, or type of child process.
    # {listen-addr}     Bind address or '*' followed by TCP or TLS port listening on, or
    #                   Unix socket if only that's available.
    # {server-mode}     Special mode, i.e. "[sentinel]" or "[cluster]".
    # {port}            TCP port listening on, or 0.
    # {tls-port}        TLS port listening on, or 0.
    # {unixsocket}      Unix domain socket listening on, or "".
    # {config-file}     Name of configuration file used.
    proc-title-template "{title} {listen-addr} {server-mode}"
    ################################ SNAPSHOTTING  ################################
    # Save the DB to disk.
    # save <seconds> <changes>
    # Redis will save the DB if both the given number of seconds and the given
    # number of write operations against the DB occurred.
    # Snapshotting can be completely disabled with a single empty string argument
    # as in following example:
    # save ""
    # Unless specified otherwise, by default Redis will save the DB:
    #   * After 3600 seconds (an hour) if at least 1 key changed
    #   * After 300 seconds (5 minutes) if at least 100 keys changed
    #   * After 60 seconds if at least 10000 keys changed
    # You can set these explicitly by uncommenting the three following lines.
    # RDB save points: e.g. save once if at least 1 key changed within 3600 seconds; these can be customized
    # save 3600 1
    # save 300 100
    # save 60 10000
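    # A minimal sketch (values are illustrative, not a recommendation): for a
    # write-heavy instance you might snapshot once per minute after 1000 changes:
    # save 60 1000
    # Save points can also be inspected or changed at runtime:
    # 127.0.0.1:6379> CONFIG SET save "60 1000"
    # 127.0.0.1:6379> CONFIG GET save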
    # By default Redis will stop accepting writes if RDB snapshots are enabled
    # (at least one save point) and the latest background save failed.
    # This will make the user aware (in a hard way) that data is not persisting
    # on disk properly, otherwise chances are that no one will notice and some
    # disaster will happen.
    # If the background saving process will start working again Redis will
    # automatically allow writes again.
    # However if you have setup your proper monitoring of the Redis server
    # and persistence, you may want to disable this feature so that Redis will
    # continue to work as usual even if there are problems with disk,
    # permissions, and so forth.
    # Whether to stop accepting writes when a background RDB save fails
    stop-writes-on-bgsave-error yes
    # Compress string objects using LZF when dump .rdb databases?
    # By default compression is enabled as it's almost always a win.
    # If you want to save some CPU in the saving child set it to 'no' but
    # the dataset will likely be bigger if you have compressible values or keys.
    # Whether to compress the RDB file
    rdbcompression yes
    # Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
    # This makes the format more resistant to corruption but there is a performance
    # hit to pay (around 10%) when saving and loading RDB files, so you can disable it
    # for maximum performances.
    # RDB files created with checksum disabled have a checksum of zero that will
    # tell the loading code to skip the check.
    # Whether to checksum the RDB file when saving
    rdbchecksum yes
    # Enables or disables full sanitation checks for ziplist and listpack etc when
    # loading an RDB or RESTORE payload. This reduces the chances of an assertion or
    # crash later on while processing commands.
    # Options:
    #   no         - Never perform full sanitation
    #   yes        - Always perform full sanitation
    #   clients    - Perform full sanitation only for user connections.
    #                Excludes: RDB files, RESTORE commands received from the master
    #                connection, and client connections which have the
    #                skip-sanitize-payload ACL flag.
    # The default should be 'clients' but since it currently affects cluster
    # resharding via MIGRATE, it is temporarily set to 'no' by default.
    # sanitize-dump-payload no
    # The filename where to dump the DB
    # Filename of the RDB dump
    dbfilename dump.rdb
    # Remove RDB files used by replication in instances without persistence
    # enabled. By default this option is disabled, however there are environments
    # where for regulations or other security concerns, RDB files persisted on
    # disk by masters in order to feed replicas, or stored on disk by replicas
    # in order to load them for the initial synchronization, should be deleted
    # ASAP. Note that this option ONLY WORKS in instances that have both AOF
    # and RDB persistence disabled, otherwise is completely ignored.
    # An alternative (and sometimes better) way to obtain the same effect is
    # to use diskless replication on both master and replicas instances. However
    # in the case of replicas, diskless is not always an option.
    # Whether to delete RDB files used only for replication
    rdb-del-sync-files no
    # The working directory.
    # The DB will be written inside this directory, with the filename specified
    # above using the 'dbfilename' configuration directive.
    # The Append Only File will also be created inside this directory.
    # Note that you must specify a directory here, not a file name.
    # Directory where the RDB file is written; defaults to the current working directory, i.e. the startup directory
    dir ./
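    # Illustrative session to check where the dump lands and to force a snapshot
    # (the path shown assumes Redis was started from /usr/local/bin):
    # 127.0.0.1:6379> CONFIG GET dir
    # 1) "dir"
    # 2) "/usr/local/bin"
    # 127.0.0.1:6379> BGSAVE
    # Background saving started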
    ################################# REPLICATION #################################
    # Master-replica and cluster configuration
    # Master-Replica replication. Use replicaof to make a Redis instance a copy of
    # another Redis server. A few things to understand ASAP about Redis replication.
    #   +------------------+      +---------------+
    #   |      Master      | ---> |    Replica    |
    #   | (receive writes) |      |  (exact copy) |
    #   +------------------+      +---------------+
    # 1) Redis replication is asynchronous, but you can configure a master to
    #    stop accepting writes if it appears to be not connected with at least
    #    a given number of replicas.
    # 2) Redis replicas are able to perform a partial resynchronization with the
    #    master if the replication link is lost for a relatively small amount of
    #    time. You may want to configure the replication backlog size (see the next
    #    sections of this file) with a sensible value depending on your needs.
    # 3) Replication is automatic and does not need user intervention. After a
    #    network partition replicas automatically try to reconnect to masters
    #    and resynchronize with them.
    # replicaof <masterip> <masterport>
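    # Illustrative replica setup (the master IP/port below are assumptions):
    # replicaof 192.168.1.10 6379
    # The same can be done at runtime, without editing this file:
    # 127.0.0.1:6379> REPLICAOF 192.168.1.10 6379
    # 127.0.0.1:6379> INFO replication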
    # If the master is password protected (using the "requirepass" configuration
    # directive below) it is possible to tell the replica to authenticate before
    # starting the replication synchronization process, otherwise the master will
    # refuse the replica request.
    # masterauth <master-password>
    # However this is not enough if you are using Redis ACLs (for Redis version
    # 6 or greater), and the default user is not capable of running the PSYNC
    # command and/or other commands needed for replication. In this case it's
    # better to configure a special user to use with replication, and specify the
    # masteruser configuration as such:
    # masteruser <username>
    # When masteruser is specified, the replica will authenticate against its
    # master using the new AUTH form: AUTH <username> <password>.
    # When a replica loses its connection with the master, or when the replication
    # is still in progress, the replica can act in two different ways:
    # 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will
    #    still reply to client requests, possibly with out of date data, or the
    #    data set may just be empty if this is the first synchronization.
    # 2) If replica-serve-stale-data is set to 'no' the replica will reply with
    #    an error "SYNC with master in progress" to all commands except:
    #    INFO, REPLICAOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG, SUBSCRIBE,
    #    UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB, COMMAND, POST,
    #    HOST and LATENCY.
    replica-serve-stale-data yes
    # You can configure a replica instance to accept writes or not. Writing against
    # a replica instance may be useful to store some ephemeral data (because data
    # written on a replica will be easily deleted after resync with the master) but
    # may also cause problems if clients are writing to it because of a
    # misconfiguration.
    # Since Redis 2.6 by default replicas are read-only.
    # Note: read only replicas are not designed to be exposed to untrusted clients
    # on the internet. It's just a protection layer against misuse of the instance.
    # Still a read only replica exports by default all the administrative commands
    # such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
    # security of read only replicas using 'rename-command' to shadow all the
    # administrative / dangerous commands.
    replica-read-only yes
    # Replication SYNC strategy: disk or socket.
    # New replicas and reconnecting replicas that are not able to continue the
    # replication process just receiving differences, need to do what is called a
    # "full synchronization". An RDB file is transmitted from the master to the
    # replicas.
    # The transmission can happen in two different ways:
    # 1) Disk-backed: The Redis master creates a new process that writes the RDB
    #                 file on disk. Later the file is transferred by the parent
    #                 process to the replicas incrementally.
    # 2) Diskless: The Redis master creates a new process that directly writes the
    #              RDB file to replica sockets, without touching the disk at all.
    # With disk-backed replication, while the RDB file is generated, more replicas
    # can be queued and served with the RDB file as soon as the current child
    # producing the RDB file finishes its work. With diskless replication instead
    # once the transfer starts, new replicas arriving will be queued and a new
    # transfer will start when the current one terminates.
    # When diskless replication is used, the master waits a configurable amount of
    # time (in seconds) before starting the transfer in the hope that multiple
    # replicas will arrive and the transfer can be parallelized.
    # With slow disks and fast (large bandwidth) networks, diskless replication
    # works better.
    repl-diskless-sync no
    # When diskless replication is enabled, it is possible to configure the delay
    # the server waits in order to spawn the child that transfers the RDB via socket
    # to the replicas.
    # This is important since once the transfer starts, it is not possible to serve
    # new replicas arriving, that will be queued for the next RDB transfer, so the
    # server waits a delay in order to let more replicas arrive.
    # The delay is specified in seconds, and by default is 5 seconds. To disable
    # it entirely just set it to 0 seconds and the transfer will start ASAP.
    repl-diskless-sync-delay 5
    # -----------------------------------------------------------------------------
    # WARNING: RDB diskless load is experimental. Since in this setup the replica
    # does not immediately store an RDB on disk, it may cause data loss during
    # failovers. RDB diskless load + Redis modules not handling I/O reads may also
    # cause Redis to abort in case of I/O errors during the initial synchronization
    # stage with the master. Use only if you know what you are doing.
    # -----------------------------------------------------------------------------
    # Replica can load the RDB it reads from the replication link directly from the
    # socket, or store the RDB to a file and read that file after it was completely
    # received from the master.
    # In many cases the disk is slower than the network, and storing and loading
    # the RDB file may increase replication time (and even increase the master's
    # Copy on Write memory and replica buffers).
    # However, parsing the RDB file directly from the socket may mean that we have
    # to flush the contents of the current database before the full rdb was
    # received. For this reason we have the following options:
    # "disabled"    - Don't use diskless load (store the rdb file to the disk first)
    # "on-empty-db" - Use diskless load only when it is completely safe.
    # "swapdb"      - Keep a copy of the current db contents in RAM while parsing
    #                 the data directly from the socket. note that this requires
    #                 sufficient memory, if you don't have it, you risk an OOM kill.
    repl-diskless-load disabled
    # Replicas send PINGs to server in a predefined interval. It's possible to
    # change this interval with the repl_ping_replica_period option. The default
    # value is 10 seconds.
    # repl-ping-replica-period 10
    # The following option sets the replication timeout for:
    # 1) Bulk transfer I/O during SYNC, from the point of view of replica.
    # 2) Master timeout from the point of view of replicas (data, pings).
    # 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).
    # It is important to make sure that this value is greater than the value
    # specified for repl-ping-replica-period otherwise a timeout will be detected
    # every time there is low traffic between the master and the replica. The default
    # value is 60 seconds.
    # repl-timeout 60
    # Disable TCP_NODELAY on the replica socket after SYNC?
    # If you select "yes" Redis will use a smaller number of TCP packets and
    # less bandwidth to send data to replicas. But this can add a delay for
    # the data to appear on the replica side, up to 40 milliseconds with
    # Linux kernels using a default configuration.
    # If you select "no" the delay for data to appear on the replica side will
    # be reduced but more bandwidth will be used for replication.
    # By default we optimize for low latency, but in very high traffic conditions
    # or when the master and replicas are many hops away, turning this to "yes" may
    # be a good idea.
    repl-disable-tcp-nodelay no
    # Set the replication backlog size. The backlog is a buffer that accumulates
    # replica data when replicas are disconnected for some time, so that when a
    # replica wants to reconnect again, often a full resync is not needed, but a
    # partial resync is enough, just passing the portion of data the replica
    # missed while disconnected.
    # The bigger the replication backlog, the longer the replica can endure the
    # disconnect and later be able to perform a partial resynchronization.
    # The backlog is only allocated if there is at least one replica connected.
    # repl-backlog-size 1mb
    # After a master has no connected replicas for some time, the backlog will be
    # freed. The following option configures the amount of seconds that need to
    # elapse, starting from the time the last replica disconnected, for the backlog
    # buffer to be freed.
    # Note that replicas never free the backlog for timeout, since they may be
    # promoted to masters later, and should be able to correctly "partially
    # resynchronize" with other replicas: hence they should always accumulate backlog.
    # A value of 0 means to never release the backlog.
    # repl-backlog-ttl 3600
    # The replica priority is an integer number published by Redis in the INFO
    # output. It is used by Redis Sentinel in order to select a replica to promote
    # into a master if the master is no longer working correctly.
    # A replica with a low priority number is considered better for promotion, so
    # for instance if there are three replicas with priority 10, 100, 25 Sentinel
    # will pick the one with priority 10, that is the lowest.
    # However a special priority of 0 marks the replica as not able to perform the
    # role of master, so a replica with priority of 0 will never be selected by
    # Redis Sentinel for promotion.
    # By default the priority is 100.
    replica-priority 100
    # -----------------------------------------------------------------------------
    # By default, Redis Sentinel includes all replicas in its reports. A replica
    # can be excluded from Redis Sentinel's announcements. An unannounced replica
    # will be ignored by the 'sentinel replicas <master>' command and won't be
    # exposed to Redis Sentinel's clients.
    # This option does not change the behavior of replica-priority. Even with
    # replica-announced set to 'no', the replica can be promoted to master. To
    # prevent this behavior, set replica-priority to 0.
    # replica-announced yes
    # It is possible for a master to stop accepting writes if there are less than
    # N replicas connected, having a lag less or equal than M seconds.
    # The N replicas need to be in "online" state.
    # The lag in seconds, that must be <= the specified value, is calculated from
    # the last ping received from the replica, that is usually sent every second.
    # This option does not GUARANTEE that N replicas will accept the write, but
    # will limit the window of exposure for lost writes in case not enough replicas
    # are available, to the specified number of seconds.
    # For example to require at least 3 replicas with a lag <= 10 seconds use:
    # min-replicas-to-write 3
    # min-replicas-max-lag 10
    # Setting one or the other to 0 disables the feature.
    # By default min-replicas-to-write is set to 0 (feature disabled) and
    # min-replicas-max-lag is set to 10.
    # A Redis master is able to list the address and port of the attached
    # replicas in different ways. For example the "INFO replication" section
    # offers this information, which is used, among other tools, by
    # Redis Sentinel in order to discover replica instances.
    # Another place where this info is available is in the output of the
    # "ROLE" command of a master.
    # The listed IP address and port normally reported by a replica is
    # obtained in the following way:
    #   IP: The address is auto detected by checking the peer address
    #   of the socket used by the replica to connect with the master.
    #   Port: The port is communicated by the replica during the replication
    #   handshake, and is normally the port that the replica is using to
    #   listen for connections.
    # However when port forwarding or Network Address Translation (NAT) is
    # used, the replica may actually be reachable via different IP and port
    # pairs. The following two options can be used by a replica in order to
    # report to its master a specific set of IP and port, so that both INFO
    # and ROLE will report those values.
    # There is no need to use both the options if you need to override just
    # the port or the IP address.
    # replica-announce-ip 5.5.5.5
    # replica-announce-port 1234
    ############################### KEYS TRACKING #################################
    # Redis implements server assisted support for client side caching of values.
    # This is implemented using an invalidation table that remembers, using
    # a radix key indexed by key name, what clients have which keys. In turn
    # this is used in order to send invalidation messages to clients. Please
    # check this page to understand more about the feature:
    #   https://redis.io/topics/client-side-caching
    # When tracking is enabled for a client, all the read only queries are assumed
    # to be cached: this will force Redis to store information in the invalidation
    # table. When keys are modified, such information is flushed away, and
    # invalidation messages are sent to the clients. However if the workload is
    # heavily dominated by reads, Redis could use more and more memory in order
    # to track the keys fetched by many clients.
    # For this reason it is possible to configure a maximum fill value for the
    # invalidation table. By default it is set to 1M of keys, and once this limit
    # is reached, Redis will start to evict keys in the invalidation table
    # even if they were not modified, just to reclaim memory: this will in turn
    # force the clients to invalidate the cached values. Basically the table
    # maximum size is a trade off between the memory you want to spend server
    # side to track information about who cached what, and the ability of clients
    # to retain cached objects in memory.
    # If you set the value to 0, it means there are no limits, and Redis will
    # retain as many keys as needed in the invalidation table.
    # In the "stats" INFO section, you can find information about the number of
    # keys in the invalidation table at every given moment.
    # Note: when key tracking is used in broadcasting mode, no memory is used
    # in the server side so this setting is useless.
    # tracking-table-max-keys 1000000
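    # Tracking is opt-in per connection; a quick illustrative check with
    # redis-cli (the default mode needs RESP3, hence the -3 flag):
    # $ redis-cli -3
    # 127.0.0.1:6379> CLIENT TRACKING on
    # OK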
    ################################## SECURITY ###################################
    # Security-related configuration
    # Warning: since Redis is pretty fast, an outside user can try up to
    # 1 million passwords per second against a modern box. This means that you
    # should use very strong passwords, otherwise they will be very easy to break.
    # Note that because the password is really a shared secret between the client
    # and the server, and should not be memorized by any human, the password
    # can be easily a long string from /dev/urandom or whatever, so by using a
    # long and unguessable password no brute force attack will be possible.
    # Redis ACL users are defined in the following format:
    #   user <username> ... acl rules ...
    # For example:
    #   user worker +@list +@connection ~jobs:* on >ffa9203c493aa99
    # The special username "default" is used for new connections. If this user
    # has the "nopass" rule, then new connections will be immediately authenticated
    # as the "default" user without the need of any password provided via the
    # AUTH command. Otherwise if the "default" user is not flagged with "nopass"
    # the connections will start in not authenticated state, and will require
    # AUTH (or the HELLO command AUTH option) in order to be authenticated and
    # start to work.
    # The ACL rules that describe what a user can do are the following:
    #  on           Enable the user: it is possible to authenticate as this user.
    #  off          Disable the user: it's no longer possible to authenticate
    #               with this user, however the already authenticated connections
    #               will still work.
    #  skip-sanitize-payload    RESTORE dump-payload sanitation is skipped.
    #  sanitize-payload         RESTORE dump-payload is sanitized (default).
    #  +<command>   Allow the execution of that command
    #  -<command>   Disallow the execution of that command
    #  +@<category> Allow the execution of all the commands in such category
    #               with valid categories being @admin, @set, @sortedset, ...
    #               and so forth, see the full list in the server.c file where
    #               the Redis command table is described and defined.
    #               The special category @all means all the commands, both the
    #               ones currently present in the server, and the ones that
    #               will be loaded in the future via modules.
    #  +<command>|subcommand    Allow a specific subcommand of an otherwise
    #                           disabled command. Note that this form is not
    #                           allowed as negative like -DEBUG|SEGFAULT, but
    #                           only additive starting with "+".
    #  allcommands  Alias for +@all. Note that it implies the ability to execute
    #               all the future commands loaded via the modules system.
    #  nocommands   Alias for -@all.
    #  ~<pattern>   Add a pattern of keys that can be mentioned as part of
    #               commands. For instance ~* allows all the keys. The pattern
    #               is a glob-style pattern like the one of KEYS.
    #               It is possible to specify multiple patterns.
    #  allkeys      Alias for ~*
    #  resetkeys    Flush the list of allowed keys patterns.
    #  &<pattern>   Add a glob-style pattern of Pub/Sub channels that can be
    #               accessed by the user. It is possible to specify multiple channel
    #               patterns.
    #  allchannels  Alias for &*
    #  resetchannels            Flush the list of allowed channel patterns.
    #  ><password>  Add this password to the list of valid password for the user.
    #               For example >mypass will add "mypass" to the list.
    #               This directive clears the "nopass" flag (see later).
    #  <<password>  Remove this password from the list of valid passwords.
    #  nopass       All the set passwords of the user are removed, and the user
    #               is flagged as requiring no password: it means that every
    #               password will work against this user. If this directive is
    #               used for the default user, every new connection will be
    #               immediately authenticated with the default user without
    #               any explicit AUTH command required. Note that the "resetpass"
    #               directive will clear this condition.
    #  resetpass    Flush the list of allowed passwords. Moreover removes the
    #               "nopass" status. After "resetpass" the user has no associated
    #               passwords and there is no way to authenticate without adding
    #               some password (or setting it as "nopass" later).
    #  reset        Performs the following actions: resetpass, resetkeys, off,
    #               -@all. The user returns to the same state it has immediately
    #               after its creation.
    # ACL rules can be specified in any order: for instance you can start with
    # passwords, then flags, or key patterns. However note that the additive
    # and subtractive rules will CHANGE MEANING depending on the ordering.
    # For instance see the following example:
    #   user alice on +@all -DEBUG ~* >somepassword
    # This will allow "alice" to use all the commands with the exception of the
    # DEBUG command, since +@all added all the commands to the set of the commands
    # alice can use, and later DEBUG was removed. However if we invert the order
    # of two ACL rules the result will be different:
    #   user alice on -DEBUG +@all ~* >somepassword
    # Now DEBUG was removed when alice had yet no commands in the set of allowed
    # commands, later all the commands are added, so the user will be able to
    # execute everything.
    # Basically ACL rules are processed left-to-right.
    # For more information about ACL configuration please refer to
    # the Redis web site at https://redis.io/topics/acl
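    # A minimal ACL sketch (user name, password and key pattern are made up):
    # 127.0.0.1:6379> ACL SETUSER alice on >somepassword ~cached:* +get +set
    # OK
    # 127.0.0.1:6379> ACL LIST
    # The rules are applied left-to-right, exactly as described above.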
    # ACL LOG
    # The ACL Log tracks failed commands and authentication events associated
    # with ACLs. The ACL Log is useful to troubleshoot failed commands blocked 
    # by ACLs. The ACL Log is stored in memory. You can reclaim memory with 
    # ACL LOG RESET. Define the maximum entry length of the ACL Log below.
    acllog-max-len 128
    # Using an external ACL file
    # Instead of configuring users here in this file, it is possible to use
    # a stand-alone file just listing users. The two methods cannot be mixed:
    # if you configure users here and at the same time you activate the external
    # ACL file, the server will refuse to start.
    # The format of the external ACL user file is exactly the same as the
    # format that is used inside redis.conf to describe users.
    # aclfile /etc/redis/users.acl
    # IMPORTANT NOTE: starting with Redis 6 "requirepass" is just a compatibility
    # layer on top of the new ACL system. The option effect will be just setting
    # the password for the default user. Clients will still authenticate using
    # AUTH <password> as usually, or more explicitly with AUTH default <password>
    # if they follow the new protocol: both will work.
    # The requirepass option is not compatible with the aclfile option and the
    # ACL LOAD command; these will cause requirepass to be ignored.
    # Password that clients must supply when connecting to Redis
    # requirepass foobared
    # requirepass 123456
    # It can also be set at runtime: CONFIG SET requirepass "123456"
    # and authenticated with: AUTH 123456
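    # Illustrative session once a password is set (the password is an example):
    # 127.0.0.1:6379> CONFIG GET requirepass
    # (error) NOAUTH Authentication required.
    # 127.0.0.1:6379> AUTH 123456
    # OK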
    # New users are initialized with restrictive permissions by default, via the
    # equivalent of this ACL rule 'off resetkeys -@all'. Starting with Redis 6.2, it
    # is possible to manage access to Pub/Sub channels with ACL rules as well. The
    # default Pub/Sub channel permission for new users is controlled by the
    # acl-pubsub-default configuration directive, which accepts one of these values:
    # allchannels: grants access to all Pub/Sub channels
    # resetchannels: revokes access to all Pub/Sub channels
    # To ensure backward compatibility while upgrading Redis 6.0, acl-pubsub-default
    # defaults to the 'allchannels' permission.
    # Future compatibility note: it is very likely that in a future version of Redis
    # the directive's default of 'allchannels' will be changed to 'resetchannels' in
    # order to provide better out-of-the-box Pub/Sub security. Therefore, it is
    # recommended that you explicitly define Pub/Sub permissions for all users
    # rather than rely on implicit default values. Once you've set explicit
    # Pub/Sub for all existing users, you should uncomment the following line.
    # acl-pubsub-default resetchannels
    # Command renaming (DEPRECATED).
    # ------------------------------------------------------------------------
    # WARNING: avoid using this option if possible. Instead use ACLs to remove
    # commands from the default user, and put them only in some admin user you
    # create for administrative purposes.
    # ------------------------------------------------------------------------
    # It is possible to change the name of dangerous commands in a shared
    # environment. For instance the CONFIG command may be renamed into something
    # hard to guess so that it will still be available for internal-use tools
    # but not available for general clients.
    # Example:
    # rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
    # It is also possible to completely kill a command by renaming it into
    # an empty string:
    # rename-command CONFIG ""
    # Please note that changing the name of commands that are logged into the
    # AOF file or transmitted to replicas may cause problems.
    ################################### CLIENTS ####################################
    # Set the max number of connected clients at the same time. By default
    # this limit is set to 10000 clients, however if the Redis server is not
    # able to configure the process file limit to allow for the specified limit
    # the max number of allowed clients is set to the current file limit
    # minus 32 (as Redis reserves a few file descriptors for internal uses).
    # Once the limit is reached Redis will close all the new connections sending
    # an error 'max number of clients reached'.
    # IMPORTANT: When Redis Cluster is used, the max number of connections is also
    # shared with the cluster bus: every node in the cluster will use two
    # connections, one incoming and another outgoing. It is important to size the
    # limit accordingly in case of very large clusters.
    # Maximum number of clients connected at the same time
    # maxclients 10000
    ############################## MEMORY MANAGEMENT ################################
    # Set a memory usage limit to the specified amount of bytes.
    # When the memory limit is reached Redis will try to remove keys
    # according to the eviction policy selected (see maxmemory-policy).
    # If Redis can't remove keys according to the policy, or if the policy is
    # set to 'noeviction', Redis will start to reply with errors to commands
    # that would use more memory, like SET, LPUSH, and so on, and will continue
    # to reply to read-only commands like GET.
    # This option is usually useful when using Redis as an LRU or LFU cache, or to
    # set a hard memory limit for an instance (using the 'noeviction' policy).
    # WARNING: If you have replicas attached to an instance with maxmemory on,
    # the size of the output buffers needed to feed the replicas are subtracted
    # from the used memory count, so that network problems / resyncs will
    # not trigger a loop where keys are evicted, and in turn the output
    # buffer of replicas is full with DELs of keys evicted triggering the deletion
    # of more keys, and so forth until the database is completely emptied.
    # In short... if you have replicas attached it is suggested that you set a lower
    # limit for maxmemory so that there is some free RAM on the system for replica
    # output buffers (but this is not needed if the policy is 'noeviction').
    # Maximum memory limit
    # maxmemory <bytes>
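    # maxmemory accepts plain bytes or unit suffixes; e.g. a 100 megabyte cap
    # (an illustrative value, size it to your workload):
    # maxmemory 100mb
    # Or at runtime: 127.0.0.1:6379> CONFIG SET maxmemory 100mb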
    # MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
    # is reached. You can select one from the following behaviors:
    # volatile-lru -> Evict using approximated LRU, only keys with an expire set.
    # allkeys-lru -> Evict any key using approximated LRU.
    # volatile-lfu -> Evict using approximated LFU, only keys with an expire set.
    # allkeys-lfu -> Evict any key using approximated LFU.
    # volatile-random -> Remove a random key having an expire set.
    # allkeys-random -> Remove a random key, any key.
    # volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
    # noeviction -> Don't evict anything, just return an error on write operations.
    # LRU means Least Recently Used
    # LFU means Least Frequently Used
    # Both LRU, LFU and volatile-ttl are implemented using approximated
    # randomized algorithms.
    # Note: with any of the above policies, when there are no suitable keys for
    # eviction, Redis will return an error on write operations that require
    # more memory. These are usually commands that create new keys, add data or
    # modify existing keys. A few examples are: SET, INCR, HSET, LPUSH, SUNIONSTORE,
    # SORT (due to the STORE argument), and EXEC (if the transaction includes any
    # command that requires memory).
    # The default is:
    # Eviction policy applied once the memory limit is reached
    # maxmemory-policy noeviction
    # 1. volatile-lru:    approximated LRU among keys with an expire set
    # 2. allkeys-lru:     approximated LRU among all keys
    # 3. volatile-lfu:    approximated LFU among keys with an expire set
    # 4. allkeys-lfu:     approximated LFU among all keys
    # 5. volatile-random: evict a random key with an expire set
    # 6. allkeys-random:  evict a random key
    # 7. volatile-ttl:    evict the key with the nearest expire time
    # 8. noeviction:      evict nothing, return an error on writes (the default)
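    # Sketch of a common cache-style setup (the policy choice is an example,
    # not a one-size-fits-all recommendation):
    # maxmemory 100mb
    # maxmemory-policy allkeys-lru
    # Or switch at runtime:
    # 127.0.0.1:6379> CONFIG SET maxmemory-policy allkeys-lru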
    # LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
    # algorithms (in order to save memory), so you can tune it for speed or
    # accuracy. By default Redis will check five keys and pick the one that was
    # used least recently, you can change the sample size using the following
    # configuration directive.
    # The default of 5 produces good enough results. 10 Approximates very closely
    # true LRU but costs more CPU. 3 is faster but not very accurate.
    # Sample size used by the approximated LRU/LFU algorithms (default 5)
    # maxmemory-samples 5
    # Eviction processing is designed to function well with the default setting.
    # If there is an unusually large amount of write traffic, this value may need to
    # be increased.  Decreasing this value may reduce latency at the risk of 
    # eviction processing effectiveness
    #   0 = minimum latency, 10 = default, 100 = process without regard to latency
    # Eviction tenacity: how aggressively eviction is processed
    # maxmemory-eviction-tenacity 10
    # Starting from Redis 5, by default a replica will ignore its maxmemory setting
    # (unless it is promoted to master after a failover or manually). It means
    # that the eviction of keys will be just handled by the master, sending the
    # DEL commands to the replica as keys evict in the master side.
    # This behavior ensures that masters and replicas stay consistent, and is usually
    # what you want, however if your replica is writable, or you want the replica
    # to have a different memory setting, and you are sure all the writes performed
    # to the replica are idempotent, then you may change this default (but be sure
    # to understand what you are doing).
    # Note that since the replica by default does not evict, it may end using more
    # memory than the one set via maxmemory (there are certain buffers that may
    # be larger on the replica, or data structures may sometimes take more memory
    # and so forth). So make sure you monitor your replicas and make sure they
    # have enough memory to never hit a real out-of-memory condition before the
    # master hits the configured maxmemory setting.
    # replica-ignore-maxmemory yes
    # Redis reclaims expired keys in two ways: upon access when those keys are
    # found to be expired, and also in background, in what is called the
    # "active expire key". The key space is slowly and interactively scanned
    # looking for expired keys to reclaim, so that it is possible to free memory
    # of keys that are expired and will never be accessed again in a short time.
    # The default effort of the expire cycle will try to avoid having more than
    # ten percent of expired keys still in memory, and will try to avoid consuming
    # more than 25% of total memory and to add latency to the system. However
    # it is possible to increase the expire "effort" that is normally set to
    # "1", to a greater value, up to the value "10". At its maximum value the
    # system will use more CPU, longer cycles (and technically may introduce
    # more latency), and will tolerate less already expired keys still present
    # in the system. It's a tradeoff between memory, CPU and latency.
    # active-expire-effort 1
    ############################# LAZY FREEING ####################################
    # Redis has two primitives to delete keys. One is called DEL and is a blocking
    # deletion of the object. It means that the server stops processing new commands
    # in order to reclaim all the memory associated with an object in a synchronous
    # way. If the key deleted is associated with a small object, the time needed
    # in order to execute the DEL command is very small and comparable to most other
    # O(1) or O(log_N) commands in Redis. However if the key is associated with an
    # aggregated value containing millions of elements, the server can block for
    # a long time (even seconds) in order to complete the operation.
    # For the above reasons Redis also offers non blocking deletion primitives
    # such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and
    # FLUSHDB commands, in order to reclaim memory in background. Those commands
    # are executed in constant time. Another thread will incrementally free the
    # object in the background as fast as possible.
    # DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.
    # It's up to the design of the application to understand when it is a good
    # idea to use one or the other. However the Redis server sometimes has to
    # delete keys or flush the whole database as a side effect of other operations.
    # Specifically Redis deletes objects independently of a user call in the
    # following scenarios:
    # 1) On eviction, because of the maxmemory and maxmemory policy configurations,
    #    in order to make room for new data, without going over the specified
    #    memory limit.
    # 2) Because of expire: when a key with an associated time to live (see the
    #    EXPIRE command) must be deleted from memory.
    # 3) Because of a side effect of a command that stores data on a key that may
    #    already exist. For example the RENAME command may delete the old key
    #    content when it is replaced with another one. Similarly SUNIONSTORE
    #    or SORT with STORE option may delete existing keys. The SET command
    #    itself removes any old content of the specified key in order to replace
    #    it with the specified string.
    # 4) During replication, when a replica performs a full resynchronization with
    #    its master, the content of the whole database is removed in order to
    #    load the RDB file just transferred.
    # In all the above cases the default is to delete objects in a blocking way,
    # like if DEL was called. However you can configure each case specifically
    # in order to instead release memory in a non-blocking way like if UNLINK
    # was called, using the following configuration directives.
    lazyfree-lazy-eviction no
    lazyfree-lazy-expire no
    lazyfree-lazy-server-del no
    replica-lazy-flush no
    # It is also possible, for the case when to replace the user code DEL calls
    # with UNLINK calls is not easy, to modify the default behavior of the DEL
    # command to act exactly like UNLINK, using the following configuration
    # directive:
    lazyfree-lazy-user-del no
    # FLUSHDB, FLUSHALL, and SCRIPT FLUSH support both asynchronous and synchronous
    # deletion, which can be controlled by passing the [SYNC|ASYNC] flags into the
    # commands. When neither flag is passed, this directive will be used to determine
    # if the data should be deleted asynchronously.
    lazyfree-lazy-user-flush no
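    # The user-facing counterparts of these options (key name is illustrative):
    # UNLINK frees the value in a background thread, and FLUSHDB/FLUSHALL
    # accept an explicit ASYNC flag:
    # 127.0.0.1:6379> UNLINK bigkey
    # 127.0.0.1:6379> FLUSHDB ASYNC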
    ################################ THREADED I/O #################################
    # Redis is mostly single threaded, however there are certain threaded
    # operations such as UNLINK, slow I/O accesses and other things that are
    # performed on side threads.
    # Now it is also possible to handle Redis clients socket reads and writes
    # in different I/O threads. Since especially writing is so slow, normally
    # Redis users use pipelining in order to speed up the Redis performances per
    # core, and spawn multiple instances in order to scale more. Using I/O
    # threads it is possible to easily speedup two times Redis without resorting
    # to pipelining nor sharding of the instance.
    # By default threading is disabled, we suggest enabling it only in machines
    # that have at least 4 or more cores, leaving at least one spare core.
    # Using more than 8 threads is unlikely to help much. We also recommend using
    # threaded I/O only if you actually have performance problems, with Redis
    # instances being able to use a quite big percentage of CPU time, otherwise
    # there is no point in using this feature.
    # So for instance if you have a four core box, try to use 2 or 3 I/O
    # threads, if you have 8 cores, try to use 6 threads. In order to
    # enable I/O threads use the following configuration directive:
    # io-threads 4
    # Setting io-threads to 1 will just use the main thread as usual.
    # When I/O threads are enabled, we only use threads for writes, that is
    # to thread the write(2) syscall and transfer the client buffers to the
    # socket. However it is also possible to enable threading of reads and
    # protocol parsing using the following configuration directive, by setting
    # it to yes:
    # io-threads-do-reads no
    # Usually threading reads doesn't help much.
    # NOTE 1: This configuration directive cannot be changed at runtime via
    # CONFIG SET. Also, this feature currently does not work when SSL is
    # enabled.
    # NOTE 2: If you want to test the Redis speedup using redis-benchmark, make
    # sure you also run the benchmark itself in threaded mode, using the
    # --threads option to match the number of Redis threads, otherwise you'll not
    # be able to notice the improvements.
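    # Illustrative pairing (thread counts are arbitrary): with "io-threads 4"
    # set above, benchmark with a matching number of client threads:
    # $ redis-benchmark --threads 4 -t set,get -n 100000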
    ############################ KERNEL OOM CONTROL ##############################
    # On Linux, it is possible to hint the kernel OOM killer on what processes
    # should be killed first when out of memory.
    # Enabling this feature makes Redis actively control the oom_score_adj value
    # for all its processes, depending on their role. The default scores will
    # attempt to have background child processes killed before all others, and
    # replicas killed before masters.
    # Redis supports three options:
    # no:       Don't make changes to oom-score-adj (default).
    # yes:      Alias to "relative" see below.
    # absolute: Values in oom-score-adj-values are written as is to the kernel.
    # relative: Values are used relative to the initial value of oom_score_adj when
    #           the server starts and are then clamped to a range of -1000 to 1000.
    #           Because typically the initial value is 0, they will often match the
    #           absolute values.
    oom-score-adj no
    # When oom-score-adj is used, this directive controls the specific values used
    # for master, replica and background child processes. Values range -2000 to
    # 2000 (higher means more likely to be killed).
    # Unprivileged processes (not root, and without CAP_SYS_RESOURCE capabilities)
    # can freely increase their value, but not decrease it below its initial
    # settings. This means that setting oom-score-adj to "relative" and setting the
    # oom-score-adj-values to positive values will always succeed.
    oom-score-adj-values 0 200 800
    #################### KERNEL transparent hugepage CONTROL ######################
    # Usually the kernel Transparent Huge Pages control is set to "madvise" or
    # or "never" by default (/sys/kernel/mm/transparent_hugepage/enabled), in which
    # case this config has no effect. On systems in which it is set to "always",
    # redis will attempt to disable it specifically for the redis process in order
    # to avoid latency problems specifically with fork(2) and CoW.
    # If for some reason you prefer to keep it enabled, you can set this config to
    # "no" and the kernel global to "always".
    disable-thp yes
    ############################## APPEND ONLY MODE ###############################
    # AOF persistence configuration
    # By default Redis asynchronously dumps the dataset on disk. This mode is
    # good enough in many applications, but an issue with the Redis process or
    # a power outage may result into a few minutes of writes lost (depending on
    # the configured save points).
    # The Append Only File is an alternative persistence mode that provides
    # much better durability. For instance using the default data fsync policy
    # (see later in the config file) Redis can lose just one second of writes in a
    # dramatic event like a server power outage, or a single write if something
    # wrong with the Redis process itself happens, but the operating system is
    # still running correctly.
    # AOF and RDB persistence can be enabled at the same time without problems.
    # If the AOF is enabled on startup Redis will load the AOF, that is the file
    # with the better durability guarantees.
    # Please check https://redis.io/topics/persistence for more information.
    # Whether AOF persistence is enabled
    appendonly no
    # The name of the append only file (default: "appendonly.aof")
    # Name of the AOF file
    appendfilename "appendonly.aof"
    # The fsync() call tells the Operating System to actually write data on disk
    # instead of waiting for more data in the output buffer. Some OS will really flush
    # data on disk, some other OS will just try to do it ASAP.
    # Redis supports three different modes:
    # no: don't fsync, just let the OS flush the data when it wants. Faster.
    # always: fsync after every write to the append only log. Slow, Safest.
    # everysec: fsync only one time every second. Compromise.
    # The default is "everysec", as that's usually the right compromise between
    # speed and data safety. It's up to you to understand if you can relax this to
    # "no" that will let the operating system flush the output buffer when
    # it wants, for better performances (but if you can live with the idea of
    # some data loss consider the default persistence mode that's snapshotting),
    # or on the contrary, use "always" that's very slow but a bit safer than
    # everysec.
    # More details please check the following article:
    # http://antirez.com/post/redis-persistence-demystified.html
    # If unsure, use "everysec".
    # AOF fsync policy
    # appendfsync always    # fsync on every write: slow, safest
    appendfsync everysec
    # appendfsync no        # let the OS decide when to flush: fastest
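    # A minimal sketch: AOF can also be enabled at runtime, which triggers an
    # initial rewrite to seed the file from the in-memory dataset:
    # 127.0.0.1:6379> CONFIG SET appendonly yes
    # OK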
    # When the AOF fsync policy is set to always or everysec, and a background
    # saving process (a background save or AOF log background rewriting) is
    # performing a lot of I/O against the disk, in some Linux configurations
    # Redis may block too long on the fsync() call. Note that there is no fix for
    # this currently, as even performing fsync in a different thread will block
    # our synchronous write(2) call.
    # In order to mitigate this problem it's possible to use the following option
    # that will prevent fsync() from being called in the main process while a
    # BGSAVE or BGREWRITEAOF is in progress.
    # This means that while another child is saving, the durability of Redis is
    # the same as "appendfsync none". In practical terms, this means that it is
    # possible to lose up to 30 seconds of log in the worst scenario (with the
    # default Linux settings).
    # If you have latency problems turn this to "yes". Otherwise leave it as
    # "no" that is the safest pick from the point of view of durability.
    no-appendfsync-on-rewrite no
    # Automatic rewrite of the append only file.
    # Redis is able to automatically rewrite the log file implicitly calling
    # BGREWRITEAOF when the AOF log size grows by the specified percentage.
    # This is how it works: Redis remembers the size of the AOF file after the
    # latest rewrite (if no rewrite has happened since the restart, the size of
    # the AOF at startup is used).
    # This base size is compared to the current size. If the current size is
    # bigger than the specified percentage, the rewrite is triggered. Also
    # you need to specify a minimal size for the AOF file to be rewritten, this
    # is useful to avoid rewriting the AOF file even if the percentage increase
    # is reached but it is still pretty small.
    # Specify a percentage of zero in order to disable the automatic AOF
    # rewrite feature.
    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 64mb
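    # Worked example with the defaults above: if the AOF measured 80mb after
    # the last rewrite, the next automatic rewrite triggers at 160mb (100%
    # growth); a file below the 64mb floor never triggers one.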
    # An AOF file may be found to be truncated at the end during the Redis
    # startup process, when the AOF data gets loaded back into memory.
    # This may happen when the system where Redis is running
    # crashes, especially when an ext4 filesystem is mounted without the
    # data=ordered option (however this can't happen when Redis itself
    # crashes or aborts but the operating system still works correctly).
    # Redis can either exit with an error when this happens, or load as much
    # data as possible (the default now) and start if the AOF file is found
    # to be truncated at the end. The following option controls this behavior.
    # If aof-load-truncated is set to yes, a truncated AOF file is loaded and
    # the Redis server starts emitting a log to inform the user of the event.
    # Otherwise if the option is set to no, the server aborts with an error
    # and refuses to start. When the option is set to no, the user requires
    # to fix the AOF file using the "redis-check-aof" utility before to restart
    # the server.
    # Note that if the AOF file will be found to be corrupted in the middle
    # the server will still exit with an error. This option only applies when
    # Redis will try to read more data from the AOF file but not enough bytes
    # will be found.
    aof-load-truncated yes
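    # If the server refuses to start (aof-load-truncated no), the AOF can be
    # repaired from the shell first (file name is the default one):
    # $ redis-check-aof --fix appendonly.aof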
    # When rewriting the AOF file, Redis is able to use an RDB preamble in the
    # AOF file for faster rewrites and recoveries. When this option is turned
    # on the rewritten AOF file is composed of two different stanzas:
    #   [RDB file][AOF tail]
    # When loading, Redis recognizes that the AOF file starts with the "REDIS"
    # string and loads the prefixed RDB file, then continues loading the AOF
    # tail.
    aof-use-rdb-preamble yes
    ################################ LUA SCRIPTING  ###############################
    # Max execution time of a Lua script in milliseconds.
    # If the maximum execution time is reached Redis will log that a script is
    # still in execution after the maximum allowed time and will start to
    # reply to queries with an error.
    # When a long running script exceeds the maximum execution time only the
    # SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
    # used to stop a script that did not yet call any write commands. The second
    # is the only way to shut down the server in the case a write command was
    # already issued by the script but the user doesn't want to wait for the natural
    # termination of the script.
    # Set it to 0 or a negative value for unlimited execution without warnings.
    lua-time-limit 5000
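    # Illustrative runaway script: after lua-time-limit the server answers
    # other clients with BUSY errors, and the script can be stopped (as long
    # as it has not written anything yet) from a second connection:
    # 127.0.0.1:6379> EVAL "while true do end" 0
    # (second client) 127.0.0.1:6379> SCRIPT KILL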
    ################################ REDIS CLUSTER  ###############################
    # Normal Redis instances can't be part of a Redis Cluster; only nodes that are
    # started as cluster nodes can. In order to start a Redis instance as a
    # cluster node enable the cluster support uncommenting the following:
    # cluster-enabled yes
    # Every cluster node has a cluster configuration file. This file is not
    # intended to be edited by hand. It is created and updated by Redis nodes.
    # Every Redis Cluster node requires a different cluster configuration file.
    # Make sure that instances running in the same system do not have
    # overlapping cluster configuration file names.
    # cluster-config-file nodes-6379.conf
    # Cluster node timeout is the amount of milliseconds a node must be unreachable
    # for it to be considered in failure state.
    # Most other internal time limits are a multiple of the node timeout.
    # cluster-node-timeout 15000
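    # Minimal per-node sketch (ports and file names are assumptions); each of
    # six nodes gets its own config along these lines:
    # port 7000
    # cluster-enabled yes
    # cluster-config-file nodes-7000.conf
    # cluster-node-timeout 15000
    # then the cluster is created with (Redis 5+):
    # $ redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 \
    #   127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 --cluster-replicas 1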
    # A replica of a failing master will avoid to start a failover if its data
    # looks too old.
    # There is no simple way for a replica to actually have an exact measure of
    # its "data age", so the following two checks are performed:
    # 1) If there are multiple replicas able to failover, they exchange messages
    #    in order to try to give an advantage to the replica with the best
    #    replication offset (more data from the master processed).
    #    Replicas will try to get their rank by offset, and apply to the start
    #    of the failover a delay proportional to their rank.
    # 2) Every single replica computes the time of the last interaction with
    #    its master. This can be the last ping or command received (if the master
    #    is still in the "connected" state), or the time that elapsed since the
    #    disconnection with the master (if the replication link is currently down).
    #    If the last interaction is too old, the replica will not try to failover
    #    at all.
    # The point "2" can be tuned by user. Specifically a replica will not perform
    # the failover if, since the last interaction with the master, the time
    # elapsed is greater than:
    #   (node-timeout * cluster-replica-validity-factor) + repl-ping-replica-period
    # So for example if node-timeout is 30 seconds, and the cluster-replica-validity-factor
    # is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
    # replica will not try to failover if it was not able to talk with the master
    # for longer than 310 seconds.
    # A large cluster-replica-validity-factor may allow replicas with too old data to failover
    # a master, while a too small value may prevent the cluster from being able to
    # elect a replica at all.
    # For maximum availability, it is possible to set the cluster-replica-validity-factor
    # to a value of 0, which means, that replicas will always try to failover the
    # master regardless of the last time they interacted with the master.
    # (However they'll always try to apply a delay proportional to their
    # offset rank).
    # Zero is the only value able to guarantee that when all the partitions heal
    # the cluster will always be able to continue.
    # cluster-replica-validity-factor 10
    # Cluster replicas are able to migrate to orphaned masters, that are masters
    # that are left without working replicas. This improves the cluster ability
    # to resist to failures as otherwise an orphaned master can't be failed over
    # in case of failure if it has no working replicas.
    # Replicas migrate to orphaned masters only if there are still at least a
    # given number of other working replicas for their old master. This number
    # is the "migration barrier". A migration barrier of 1 means that a replica
    # will migrate only if there is at least 1 other working replica for its master
    # and so forth. It usually reflects the number of replicas you want for every
    # master in your cluster.
    # Default is 1 (replicas migrate only if their masters remain with at least
    # one replica). To disable migration just set it to a very large value or
    # set cluster-allow-replica-migration to 'no'.
    # A value of 0 can be set but is useful only for debugging and dangerous
    # in production.
    # cluster-migration-barrier 1
    # Turning off this option allows the use of less automatic cluster configuration.
    # It both disables migration to orphaned masters and migration from masters
    # that became empty.
    # Default is 'yes' (allow automatic migrations).
    # cluster-allow-replica-migration yes
    # By default Redis Cluster nodes stop accepting queries if they detect there
    # is at least a hash slot uncovered (no available node is serving it).
    # This way if the cluster is partially down (for example a range of hash slots
    # are no longer covered) all the cluster becomes, eventually, unavailable.
    # It automatically returns available as soon as all the slots are covered again.
    # However sometimes you want the subset of the cluster which is working,
    # to continue to accept queries for the part of the key space that is still
    # covered. In order to do so, just set the cluster-require-full-coverage
    # option to no.
    # cluster-require-full-coverage yes
    # This option, when set to yes, prevents replicas from trying to failover its
    # master during master failures. However the replica can still perform a
    # manual failover, if forced to do so.
    # This is useful in different scenarios, especially in the case of multiple
    # data center operations, where we want one side to never be promoted if not
    # in the case of a total DC failure.
    # cluster-replica-no-failover no
    # This option, when set to yes, allows nodes to serve read traffic while the
    # cluster is in a down state, as long as it believes it owns the slots.
    # This is useful for two cases.  The first case is for when an application 
    # doesn't require consistency of data during node failures or network partitions.
    # One example of this is a cache, where as long as the node has the data it
    # should be able to serve it. 
    # The second use case is for configurations that don't meet the recommended  
    # three shards but want to enable cluster mode and scale later. A 
    # master outage in a 1 or 2 shard configuration causes a read/write outage to the
    # entire cluster without this option set, with it set there is only a write outage.
    # Without a quorum of masters, slot ownership will not change automatically. 
    # cluster-allow-reads-when-down no
    # In order to setup your cluster make sure to read the documentation
    # available at https://redis.io web site.
    ########################## CLUSTER DOCKER/NAT support  ########################
    # In certain deployments, Redis Cluster nodes address discovery fails, because
    # addresses are NAT-ted or because ports are forwarded (the typical case is
    # Docker and other containers).
    # In order to make Redis Cluster work in such environments, a static
    # configuration where each node knows its public address is needed. The
    # following four options are used for this scope, and are:
    # * cluster-announce-ip
    # * cluster-announce-port
    # * cluster-announce-tls-port
    # * cluster-announce-bus-port
    # Each instructs the node about its address, client ports (for connections
    # without and with TLS) and cluster message bus port. The information is then
    # published in the header of the bus packets so that other nodes will be able to
    # correctly map the address of the node publishing the information.
    # If cluster-tls is set to yes and cluster-announce-tls-port is omitted or set
    # to zero, then cluster-announce-port refers to the TLS port. Note also that
    # cluster-announce-tls-port has no effect if cluster-tls is set to no.
    # If the above options are not used, the normal Redis Cluster auto-detection
    # will be used instead.
    # Note that when remapped, the bus port may not be at the fixed offset of
    # clients port + 10000, so you can specify any port and bus-port depending
    # on how they get remapped. If the bus-port is not set, a fixed offset of
    # 10000 will be used as usual.
    # Example:
    # cluster-announce-ip 10.1.1.5
    # cluster-announce-tls-port 6379
    # cluster-announce-port 0
    # cluster-announce-bus-port 6380
    ################################## SLOW LOG ###################################
    # The Redis Slow Log is a system to log queries that exceeded a specified
    # execution time. The execution time does not include the I/O operations
    # like talking with the client, sending the reply and so forth,
    # but just the time needed to actually execute the command (this is the only
    # stage of command execution where the thread is blocked and can not serve
    # other requests in the meantime).
    # You can configure the slow log with two parameters: one tells Redis
    # what is the execution time, in microseconds, to exceed in order for the
    # command to get logged, and the other parameter is the length of the
    # slow log. When a new command is logged the oldest one is removed from the
    # queue of logged commands.
    # The following time is expressed in microseconds, so 1000000 is equivalent
    # to one second. Note that a negative number disables the slow log, while
    # a value of zero forces the logging of every command.
    slowlog-log-slower-than 10000
    # There is no limit to this length. Just be aware that it will consume memory.
    # You can reclaim memory used by the slow log with SLOWLOG RESET.
    slowlog-max-len 128
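    # Usage sketch (redis-cli): the slow log can be inspected and cleared at
    # runtime, for example:
    #   SLOWLOG GET 10    fetch the ten most recent entries
    #   SLOWLOG LEN       number of entries currently queued
    #   SLOWLOG RESET     discard all entries and reclaim their memory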
    ################################ LATENCY MONITOR ##############################
    # The Redis latency monitoring subsystem samples different operations
    # at runtime in order to collect data related to possible sources of
    # latency of a Redis instance.
    # Via the LATENCY command this information is available to the user that can
    # print graphs and obtain reports.
    # The system only logs operations that were performed in a time equal or
    # greater than the amount of milliseconds specified via the
    # latency-monitor-threshold configuration directive. When its value is set
    # to zero, the latency monitor is turned off.
    # By default latency monitoring is disabled since it is mostly not needed
    # if you don't have latency issues, and collecting data has a performance
    # impact, that while very small, can be measured under big load. Latency
    # monitoring can easily be enabled at runtime using the command
    # "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
    latency-monitor-threshold 0
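    # Usage sketch (redis-cli): latency monitoring can be toggled and queried
    # at runtime, for example:
    #   CONFIG SET latency-monitor-threshold 100
    #   LATENCY LATEST    latest spike per event type
    #   LATENCY RESET     discard the collected samples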
    ############################# EVENT NOTIFICATION ##############################
    # Redis can notify Pub/Sub clients about events happening in the key space.
    # This feature is documented at https://redis.io/topics/notifications
    # For instance if keyspace events notification is enabled, and a client
    # performs a DEL operation on key "foo" stored in the Database 0, two
    # messages will be published via Pub/Sub:
    # PUBLISH __keyspace@0__:foo del
    # PUBLISH __keyevent@0__:del foo
    # It is possible to select the events that Redis will notify among a set
    # of classes. Every class is identified by a single character:
    #  K     Keyspace events, published with __keyspace@<db>__ prefix.
    #  E     Keyevent events, published with __keyevent@<db>__ prefix.
    #  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
    #  $     String commands
    #  l     List commands
    #  s     Set commands
    #  h     Hash commands
    #  z     Sorted set commands
    #  x     Expired events (events generated every time a key expires)
    #  e     Evicted events (events generated when a key is evicted for maxmemory)
    #  t     Stream commands
    #  d     Module key type events
    #  m     Key-miss events (Note: It is not included in the 'A' class)
    #  A     Alias for g$lshzxetd, so that the "AKE" string means all the events
    #        (Except key-miss events which are excluded from 'A' due to their
    #         unique nature).
    #  The "notify-keyspace-events" takes as argument a string that is composed
    #  of zero or multiple characters. The empty string means that notifications
    #  are disabled.
    #  Example: to enable list and generic events, from the point of view of the
    #           event name, use:
    #  notify-keyspace-events Elg
    #  Example 2: to get the stream of the expired keys subscribing to channel
    #             name __keyevent@0__:expired use:
    #  notify-keyspace-events Ex
    #  By default all notifications are disabled because most users don't need
    #  this feature and the feature has some overhead. Note that if you don't
    #  specify at least one of K or E, no events will be delivered.
    notify-keyspace-events ""
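    # Usage sketch (redis-cli): to watch keys expiring in database 0, enable
    # keyevent expired notifications and subscribe to the matching channel:
    #   CONFIG SET notify-keyspace-events Ex
    #   SUBSCRIBE __keyevent@0__:expired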
    ############################### GOPHER SERVER #################################
    # Redis contains an implementation of the Gopher protocol, as specified in
    # the RFC 1436 (https://www.ietf.org/rfc/rfc1436.txt).
    # The Gopher protocol was very popular in the late '90s. It is an alternative
    # to the web, and the implementation both server and client side is so simple
    # that the Redis server has just 100 lines of code in order to implement this
    # support.
    # What do you do with Gopher nowadays? Well Gopher never *really* died, and
    # lately there is a movement to resurrect Gopher's more hierarchical content,
    # composed of just plain text documents. Some want a simpler
    # internet, others believe that the mainstream internet became too much
    # controlled, and it's cool to create an alternative space for people that
    # want a bit of fresh air.
    # Anyway for the 10th birthday of Redis, we gave it the Gopher protocol
    # as a gift.
    # --- HOW IT WORKS? ---
    # The Redis Gopher support uses the inline protocol of Redis, and specifically
    # two kind of inline requests that were anyway illegal: an empty request
    # or any request that starts with "/" (there are no Redis commands starting
    # with such a slash). Normal RESP2/RESP3 requests are completely out of the
    # path of the Gopher protocol implementation and are served as usual as well.
    # If you open a connection to Redis when Gopher is enabled and send it
    # a string like "/foo", if there is a key named "/foo" it is served via the
    # Gopher protocol.
    # In order to create a real Gopher "hole" (the name of a Gopher site in Gopher
    # talking), you likely need a script like the following:
    #   https://github.com/antirez/gopher2redis
    # --- SECURITY WARNING ---
    # If you plan to put Redis on the internet in a publicly accessible address
    # to serve Gopher pages MAKE SURE TO SET A PASSWORD to the instance.
    # Once a password is set:
    #   1. The Gopher server (when enabled, not by default) will still serve
    #      content via Gopher.
    #   2. However other commands cannot be called before the client will
    #      authenticate.
    # So use the 'requirepass' option to protect your instance.
    # Note that Gopher is not currently supported when 'io-threads-do-reads'
    # is enabled.
    # To enable Gopher support, uncomment the following line and set the option
    # from no (the default) to yes.
    # gopher-enabled no
    ############################### ADVANCED CONFIG ###############################
    # Hashes are encoded using a memory efficient data structure when they have a
    # small number of entries, and the biggest entry does not exceed a given
    # threshold. These thresholds can be configured using the following directives.
    hash-max-ziplist-entries 512
    hash-max-ziplist-value 64
    # Lists are also encoded in a special way to save a lot of space.
    # The number of entries allowed per internal list node can be specified
    # as a fixed maximum size or a maximum number of elements.
    # For a fixed maximum size, use -5 through -1, meaning:
    # -5: max size: 64 Kb  <-- not recommended for normal workloads
    # -4: max size: 32 Kb  <-- not recommended
    # -3: max size: 16 Kb  <-- probably not recommended
    # -2: max size: 8 Kb   <-- good
    # -1: max size: 4 Kb   <-- good
    # Positive numbers mean store up to _exactly_ that number of elements
    # per list node.
    # The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
    # but if your use case is unique, adjust the settings as necessary.
    list-max-ziplist-size -2
    # Lists may also be compressed.
    # Compress depth is the number of quicklist ziplist nodes from *each* side of
    # the list to *exclude* from compression.  The head and tail of the list
    # are always uncompressed for fast push/pop operations.  Settings are:
    # 0: disable all list compression
    # 1: depth 1 means "don't start compressing until after 1 node into the list,
    #    going from either the head or tail"
    #    So: [head]->node->node->...->node->[tail]
    #    [head], [tail] will always be uncompressed; inner nodes will compress.
    # 2: [head]->[next]->node->node->...->node->[prev]->[tail]
    #    2 here means: don't compress head or head->next or tail->prev or tail,
    #    but compress all nodes between them.
    # 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
    # etc.
    list-compress-depth 0
    # Sets have a special encoding in just one case: when a set is composed
    # of just strings that happen to be integers in radix 10 in the range
    # of 64 bit signed integers.
    # The following configuration setting sets the limit in the size of the
    # set in order to use this special memory saving encoding.
    set-max-intset-entries 512
    # Similarly to hashes and lists, sorted sets are also specially encoded in
    # order to save a lot of space. This encoding is only used when the length and
    # elements of a sorted set are below the following limits:
    zset-max-ziplist-entries 128
    zset-max-ziplist-value 64
    # HyperLogLog sparse representation bytes limit. The limit includes the
    # 16 bytes header. When an HyperLogLog using the sparse representation crosses
    # this limit, it is converted into the dense representation.
    # A value greater than 16000 is totally useless, since at that point the
    # dense representation is more memory efficient.
    # The suggested value is ~ 3000 in order to have the benefits of
    # the space efficient encoding without slowing down too much PFADD,
    # which is O(N) with the sparse encoding. The value can be raised to
    # ~ 10000 when CPU is not a concern, but space is, and the data set is
    # composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
    hll-sparse-max-bytes 3000
    # Streams macro node max size / items. The stream data structure is a radix
    # tree of big nodes that encode multiple items inside. Using this configuration
    # it is possible to configure how big a single node can be in bytes, and the
    # maximum number of items it may contain before switching to a new node when
    # appending new stream entries. If any of the following settings are set to
    # zero, the limit is ignored, so for instance it is possible to set just a
    # max entries limit by setting max-bytes to 0 and max-entries to the desired
    # value.
    stream-node-max-bytes 4096
    stream-node-max-entries 100
    # Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
    # order to help rehashing the main Redis hash table (the one mapping top-level
    # keys to values). The hash table implementation Redis uses (see dict.c)
    # performs a lazy rehashing: the more operation you run into a hash table
    # that is rehashing, the more rehashing "steps" are performed, so if the
    # server is idle the rehashing is never complete and some more memory is used
    # by the hash table.
    # The default is to use this millisecond 10 times every second in order to
    # actively rehash the main dictionaries, freeing memory when possible.
    # If unsure:
    # use "activerehashing no" if you have hard latency requirements and it is
    # not a good thing in your environment that Redis can reply from time to time
    # to queries with 2 milliseconds delay.
    # use "activerehashing yes" if you don't have such hard requirements but
    # want to free memory asap when possible.
    activerehashing yes
    # The client output buffer limits can be used to force disconnection of clients
    # that are not reading data from the server fast enough for some reason (a
    # common reason is that a Pub/Sub client can't consume messages as fast as the
    # publisher can produce them).
    # The limit can be set differently for the three different classes of clients:
    # normal -> normal clients including MONITOR clients
    # replica  -> replica clients
    # pubsub -> clients subscribed to at least one pubsub channel or pattern
    # The syntax of every client-output-buffer-limit directive is the following:
    # client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
    # A client is immediately disconnected once the hard limit is reached, or if
    # the soft limit is reached and remains reached for the specified number of
    # seconds (continuously).
    # So for instance if the hard limit is 32 megabytes and the soft limit is
    # 16 megabytes / 10 seconds, the client will get disconnected immediately
    # if the size of the output buffers reach 32 megabytes, but will also get
    # disconnected if the client reaches 16 megabytes and continuously overcomes
    # the limit for 10 seconds.
    # By default normal clients are not limited because they don't receive data
    # without asking (in a push way), but just after a request, so only
    # asynchronous clients may create a scenario where data is requested faster
    # than it can be read.
    # Instead there is a default limit for pubsub and replica clients, since
    # subscribers and replicas receive data in a push fashion.
    # Both the hard or the soft limit can be disabled by setting them to zero.
    client-output-buffer-limit normal 0 0 0
    client-output-buffer-limit replica 256mb 64mb 60
    client-output-buffer-limit pubsub 32mb 8mb 60
    # Client query buffers accumulate new commands. They are limited to a fixed
    # amount by default in order to avoid that a protocol desynchronization (for
    # instance due to a bug in the client) will lead to unbound memory usage in
    # the query buffer. However you can configure it here if you have very special
    # needs, such as huge multi/exec requests or the like.
    # client-query-buffer-limit 1gb
    # In the Redis protocol, bulk requests, that are, elements representing single
    # strings, are normally limited to 512 mb. However you can change this limit
    # here, but must be 1mb or greater
    # proto-max-bulk-len 512mb
    # Redis calls an internal function to perform many background tasks, like
    # closing connections of clients in timeout, purging expired keys that are
    # never requested, and so forth.
    # Not all tasks are performed with the same frequency, but Redis checks for
    # tasks to perform according to the specified "hz" value.
    # By default "hz" is set to 10. Raising the value will use more CPU when
    # Redis is idle, but at the same time will make Redis more responsive when
    # there are many keys expiring at the same time, and timeouts may be
    # handled with more precision.
    # The range is between 1 and 500, however a value over 100 is usually not
    # a good idea. Most users should use the default of 10 and raise this up to
    # 100 only in environments where very low latency is required.
    hz 10
    # Normally it is useful to have an HZ value which is proportional to the
    # number of clients connected. This is useful in order, for instance, to
    # avoid processing too many clients for each background task invocation,
    # which would cause latency spikes.
    # Since the default HZ value by default is conservatively set to 10, Redis
    # offers, and enables by default, the ability to use an adaptive HZ value
    # which will temporarily raise when there are many connected clients.
    # When dynamic HZ is enabled, the actual configured HZ will be used
    # as a baseline, but multiples of the configured HZ value will be actually
    # used as needed once more clients are connected. In this way an idle
    # instance will use very little CPU time while a busy instance will be
    # more responsive.
    dynamic-hz yes
    # When a child rewrites the AOF file, if the following option is enabled
    # the file will be fsync-ed every 32 MB of data generated. This is useful
    # in order to commit the file to the disk more incrementally and avoid
    # big latency spikes.
    aof-rewrite-incremental-fsync yes
    # When redis saves RDB file, if the following option is enabled
    # the file will be fsync-ed every 32 MB of data generated. This is useful
    # in order to commit the file to the disk more incrementally and avoid
    # big latency spikes.
    rdb-save-incremental-fsync yes
    # Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good
    # idea to start with the default settings and only change them after investigating
    # how to improve the performances and how the keys LFU change over time, which
    # is possible to inspect via the OBJECT FREQ command.
    # There are two tunable parameters in the Redis LFU implementation: the
    # counter logarithm factor and the counter decay time. It is important to
    # understand what the two parameters mean before changing them.
    # The LFU counter is just 8 bits per key, it's maximum value is 255, so Redis
    # uses a probabilistic increment with logarithmic behavior. Given the value
    # of the old counter, when a key is accessed, the counter is incremented in
    # this way:
    # 1. A random number R between 0 and 1 is extracted.
    # 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
    # 3. The counter is incremented only if R < P.
    # The default lfu-log-factor is 10. This is a table of how the frequency
    # counter changes with a different number of accesses with different
    # logarithmic factors:
    # +--------+------------+------------+------------+------------+------------+
    # | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |
    # +--------+------------+------------+------------+------------+------------+
    # | 0      | 104        | 255        | 255        | 255        | 255        |
    # +--------+------------+------------+------------+------------+------------+
    # | 1      | 18         | 49         | 255        | 255        | 255        |
    # +--------+------------+------------+------------+------------+------------+
    # | 10     | 10         | 18         | 142        | 255        | 255        |
    # +--------+------------+------------+------------+------------+------------+
    # | 100    | 8          | 11         | 49         | 143        | 255        |
    # +--------+------------+------------+------------+------------+------------+
    # NOTE: The above table was obtained by running the following commands:
    #   redis-benchmark -n 1000000 incr foo
    #   redis-cli object freq foo
    # NOTE 2: The counter initial value is 5 in order to give new objects a chance
    # to accumulate hits.
    # The counter decay time is the time, in minutes, that must elapse in order
    # for the key counter to be divided by two (or decremented if it has a value
    # <= 10).
    # The default value for the lfu-decay-time is 1. A special value of 0 means to
    # decay the counter every time it happens to be scanned.
    # lfu-log-factor 10
    # lfu-decay-time 1
    ########################### ACTIVE DEFRAGMENTATION #######################
    # What is active defragmentation?
    # -------------------------------
    # Active (online) defragmentation allows a Redis server to compact the
    # spaces left between small allocations and deallocations of data in memory,
    # thus allowing to reclaim back memory.
    # Fragmentation is a natural process that happens with every allocator (but
    # less so with Jemalloc, fortunately) and certain workloads. Normally a server
    # restart is needed in order to lower the fragmentation, or at least to flush
    # away all the data and create it again. However thanks to this feature
    # implemented by Oran Agra for Redis 4.0 this process can happen at runtime
    # in a "hot" way, while the server is running.
    # Basically when the fragmentation is over a certain level (see the
    # configuration options below) Redis will start to create new copies of the
    # values in contiguous memory regions by exploiting certain specific Jemalloc
    # features (in order to understand if an allocation is causing fragmentation
    # and to allocate it in a better place), and at the same time, will release the
    # old copies of the data. This process, repeated incrementally for all the keys
    # will cause the fragmentation to drop back to normal values.
    # Important things to understand:
    # 1. This feature is disabled by default, and only works if you compiled Redis
    #    to use the copy of Jemalloc we ship with the source code of Redis.
    #    This is the default with Linux builds.
    # 2. You never need to enable this feature if you don't have fragmentation
    #    issues.
    # 3. Once you experience fragmentation, you can enable this feature when
    #    needed with the command "CONFIG SET activedefrag yes".
    # The configuration parameters are able to fine tune the behavior of the
    # defragmentation process. If you are not sure about what they mean it is
    # a good idea to leave the defaults untouched.
    # Enabled active defragmentation
    # activedefrag no
    # Minimum amount of fragmentation waste to start active defrag
    # active-defrag-ignore-bytes 100mb
    # Minimum percentage of fragmentation to start active defrag
    # active-defrag-threshold-lower 10
    # Maximum percentage of fragmentation at which we use maximum effort
    # active-defrag-threshold-upper 100
    # Minimal effort for defrag in CPU percentage, to be used when the lower
    # threshold is reached
    # active-defrag-cycle-min 1
    # Maximal effort for defrag in CPU percentage, to be used when the upper
    # threshold is reached
    # active-defrag-cycle-max 25
    # Maximum number of set/hash/zset/list fields that will be processed from
    # the main dictionary scan
    # active-defrag-max-scan-fields 1000
    # Jemalloc background thread for purging will be enabled by default
    jemalloc-bg-thread yes
    # It is possible to pin different threads and processes of Redis to specific
    # CPUs in your system, in order to maximize the performances of the server.
    # This is useful both in order to pin different Redis threads in different
    # CPUs, but also in order to make sure that multiple Redis instances running
    # in the same host will be pinned to different CPUs.
    # Normally you can do this using the "taskset" command, however it is also
    # possible to do this via Redis configuration directly, both in Linux and FreeBSD.
    # You can pin the server/IO threads, bio threads, aof rewrite child process, and
    # the bgsave child process. The syntax to specify the cpu list is the same as
    # the taskset command:
    # Set redis server/io threads to cpu affinity 0,2,4,6:
    # server_cpulist 0-7:2
    # Set bio threads to cpu affinity 1,3:
    # bio_cpulist 1,3
    # Set aof rewrite child process to cpu affinity 8,9,10,11:
    # aof_rewrite_cpulist 8-11
    # Set bgsave child process to cpu affinity 1,10,11
    # bgsave_cpulist 1,10-11
    # In some cases redis will emit warnings and even refuse to start if it detects
    # that the system is in bad state, it is possible to suppress these warnings
    # by setting the following config which takes a space delimited list of warnings
    # to suppress
    # ignore-warnings ARM64-COW-BUG
    

    Redis Persistence

    Redis is an in-memory database; by the nature of memory, anything not saved is gone after a power failure, so persistence matters a great deal. Redis offers two main persistence mechanisms: RDB (Redis DataBase) and AOF (Append Only File).

    RDB persistence saves a point-in-time snapshot of the data stored in Redis.

    RDB persistence can be triggered manually or automatically.

    Manual triggering

    Manual triggering corresponds to the save and bgsave commands.

    save: blocks the Redis server until the RDB process completes. On instances with a large memory footprint this means a long stall, so it is not recommended in production.

    bgsave: the Redis process forks a child process, which performs the RDB persistence and exits when done. Blocking happens only during the fork phase and is usually very brief.
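
    A minimal redis-cli sketch of the two manual triggers (the lastsave timestamp shown is illustrative):

    # save blocks the server until the snapshot is written
    127.0.0.1:6379> save
    OK
    # bgsave forks a child and returns immediately
    127.0.0.1:6379> bgsave
    Background saving started
    # unix timestamp of the last successful snapshot
    127.0.0.1:6379> lastsave
    (integer) 1620000000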

    Automatic triggering

    1. Triggered by the save m n rules configured in the configuration file.

    2. When a slave performs a full resynchronization, the master automatically runs bgsave to generate an RDB file and ships it to the slave.

    3. Running debug reload to reload Redis also triggers a save.

    4. On shutdown, bgsave runs automatically unless AOF persistence is enabled.

    The RDB-related settings live in the SNAPSHOTTING section of the configuration file.
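
    A sketch of the directives found there (the save rules shown are the 6.x defaults; tune them to your workload):

    # snapshot after <seconds> if at least <changes> keys changed
    save 3600 1
    save 300 100
    save 60 10000
    # refuse writes if the last bgsave failed
    stop-writes-on-bgsave-error yes
    # compress string objects inside the dump with LZF
    rdbcompression yes
    # snapshot file name and directory
    dbfilename dump.rdb
    dir ./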

    Because RDB snapshots are written at intervals, a sudden crash loses everything written after the last completed snapshot.

    Advantages: fast restores, well suited to recovering large data sets. Disadvantages: the file on disk is never fully current, so a small amount of recent data can be lost.

    AOF persistence records every write and modify operation sent to Redis; on recovery, the logged commands are simply replayed.

    AOF's main contribution is making persistence near real-time, and it is now the mainstream persistence option for Redis.

    The AOF settings live in the APPEND ONLY MODE section, as follows.
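
    A sketch of the key directives with their usual defaults:

    # enable AOF (off by default)
    appendonly no
    # name of the append-only file
    appendfilename "appendonly.aof"
    # fsync policy: always (every write), everysec (default), no (let the OS decide)
    appendfsync everysec
    # rewrite the AOF once it doubles in size and is at least 64mb
    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 64mb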

    Redis Publish/Subscribe

    This resembles the subscription model of a message queue; we will not cover it in depth, a look at the usage is enough.

    Redis publish/subscribe (pub/sub) is a messaging pattern: publishers (pub) send messages and subscribers (sub) receive them.

    A Redis client can subscribe to any number of channels.

    Subscribe to the channel you care about, much like following a WeChat official account:

    127.0.0.1:6379> SUBSCRIBE wyx 
    Reading messages... (press Ctrl-C to quit)
    1) "subscribe"
    2) "wyx"
    3) (integer) 1
    

    Publish a message to the channel:

    127.0.0.1:6379> PUBLISH wyx "Redis PUBLISH test"
    (integer) 1
    

    Check the subscriber: the message has arrived.

    Unsubscribe from the channel (unfollow):

    # You can unsubscribe from several channels at once, separated by spaces
    UNSUBSCRIBE wyx
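
    Pattern subscriptions work too; a quick sketch (the channel pattern here is just an example):

    # receive messages from every channel matching a glob-style pattern
    127.0.0.1:6379> PSUBSCRIBE news.*
    Reading messages... (press Ctrl-C to quit)
    1) "psubscribe"
    2) "news.*"
    3) (integer) 1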
    

    Redis Master-Slave Replication

    Master-slave replication copies data from one Redis server (the master) to one or more other Redis servers (the slaves). Replication is strictly one-way: data flows only from master to slave.

    By default every Redis server is a master. A master may have any number of slaves (including none), but a slave has exactly one master.

    What master-slave replication provides

    Its main benefits include:

  • Data redundancy: replication keeps a hot copy of the data, a form of redundancy on top of persistence.
  • Failure recovery: when the master fails, a slave can take over, enabling fast recovery; in effect, redundancy at the service level.
  • Load balancing: on top of replication, read/write splitting lets the master serve writes and the slaves serve reads (applications write through the master and read from the slaves), spreading the load. In read-heavy workloads especially, several slaves sharing the reads can raise throughput considerably.
  • High availability: beyond the above, replication is the foundation on which Sentinel and Cluster are built, and therefore the foundation of Redis high availability.
  • One master with one slave (slaves can also be chained).

    There are two ways to set this up: through the configuration file, or with commands.

    The command approach does not survive a reconnect: the node reverts to being a master, so it is rarely used. The configuration file persists, but if the master goes down there is no master and problems follow; in practice the configuration-file approach is combined with Sentinel mode, covered later.

    First, prepare three virtual machines with Redis installed.

    With the command approach, master-slave replication needs only one setting: declaring the master.

    slaveof <master-ip> <port> # use info replication to check a node's role

    Test: start redis-server on all three machines, connect with the client, and run info replication; each node reports itself as a master.
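
    On a freshly started node the output looks roughly like this (values illustrative):

    127.0.0.1:6379> info replication
    # Replication
    role:master
    connected_slaves:0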

    Run slaveof <master-ip> <port> on two of the machines to attach them to the master, then inspect the details:

    127.0.0.1:6379> SLAVEOF 192.168.137.129 6379

    Check the master's info again.

    Check a slave's info.

    Try a write on a slave:

    # Rejected: by default replication is read/write split; the master writes, slaves only read
    127.0.0.1:6379> set k2 v2
    (error) READONLY You can't write against a read only replica.
    # So once the master is down, writes to Redis become unavailable, which is dangerous;
    # slaveof no one promotes a slave to master, but it must be done by hand (tedious)
    

    When a slave connects to a master, it sends a sync command.

    On receiving it, the master ships its data file to the slave, completing one full synchronization.

    The first connection normally performs a full resynchronization; after that, replication is incremental.

    Full replication: the entire data set is copied once.

    Incremental replication: new writes received by the master are streamed to the slaves.

    Note: the master normally only takes writes and the slaves are read-only; a write command on a slave returns an error.

    If the master drops off, a slave can promote itself with a command:

    # Promote this node to master (manual)
    slaveof no one
    

    Redis Sentinel Mode

    In master-slave replication above we designated the master with slaveof <master-ip> <port>; the same can be done in the configuration file (not shown here). Both approaches have problems: with the command you must re-designate the master by hand each time, and with the configuration file a master failure means frequent edits, which seriously hurts operational efficiency. Below is a mode that assigns the master and slaves automatically: Sentinel mode.

    Sentinel mode architecture diagram

    Sentinel mode overview:

    Sentinel: at fixed intervals, sends commands to the Redis servers and waits for replies; a reply means the server is healthy. (One sentinel can watch several Redis servers at once.)

    Multiple sentinels: if the machine running a lone sentinel dies, monitoring dies with it and the system can become unavailable, so several sentinels are deployed to watch one another.

    Workflow: once started, each sentinel periodically messages the other sentinels and the Redis servers and waits for replies. If it detects that some sentinel or Redis server has stopped replying, it asks the other sentinels to probe it as well; if after confirmation by several sentinels there is still no reply, that server or sentinel is declared down. If the downed node is the master, the sentinels vote to elect a new master; when the former master comes back online, it serves as a slave. This design removes the manual master switch needed in plain master-slave replication and improves operational efficiency.

    Setting up Sentinel mode

    Edit redis.conf on all three servers:

    # Make the Redis server reachable across the network
    bind 0.0.0.0
    # Set a password
    requirepass "123456"
    # Designate the master. Note: the slaveof lines belong on the slaves only; the master does not need them
    slaveof 192.168.137.129 6379
    # Password of the master. Note: again, slaves only; the master does not need it
    masterauth 123456
    

    master

    slave

    Edit the sentinel startup configuration file sentinel.conf (all three machines need it):

    # Find sentinel.conf under the source directory and copy it into redisconf in the startup directory
    cp /opt/redis-6.2.2/sentinel.conf /usr/local/bin/redisconf/
    

    Edit the copied configuration file:

    vim /usr/local/bin/redisconf/sentinel.conf
    
    # Add the following
    # Disable protected mode
    protected-mode no
    # Declare the master to monitor. sentinel monitor marks a monitored master; mymaster is a name
    # you choose; 192.168.137.129 is the master's address and 6379 its port; the trailing 2 means
    # at least two sentinels must consider the master unreachable before a failover is started.
    # Only the master needs an entry: sentinel discovers the slaves from the master by itself,
    # so do not add monitor lines for the slaves.
    sentinel monitor mymaster 192.168.137.129 6379 2
    # sentinel auth-pass supplies the password for the named master; 123456 is the Redis server password
    # sentinel auth-pass <master-name> <password>
    sentinel auth-pass mymaster 123456
    # Remember to disable the default monitor entry that ships with the file
    

    A fuller, annotated sentinel.conf for reference:

    # Example sentinel.conf
    # Port the sentinel instance listens on, default 26379
    port 26379
    # Working directory of the sentinel
    dir /tmp
    # ip and port of the Redis master this sentinel monitors
    # master-name: a name you choose for the master; only the characters A-z, 0-9 and ".-_" are allowed
    # quorum: once this many sentinels agree the master is unreachable, it is objectively considered down
    # sentinel monitor <master-name> <ip> <redis-port> <quorum>
    sentinel monitor mymaster 127.0.0.1 6379 2
    # If requirepass foobared is enabled on the Redis instances, every connecting client must supply the password
    # Set the password sentinel uses to connect to master and slaves; master and slaves must share the same password
    # sentinel auth-pass <master-name> <password>
    sentinel auth-pass mymaster MySUPER--secret-0123passw0rd
    # Milliseconds after which, with no reply from the master, the sentinel subjectively considers it down; default 30 seconds
    # sentinel down-after-milliseconds <master-name> <milliseconds>
    sentinel down-after-milliseconds mymaster 30000
    # How many slaves may sync with the new master at the same time during a failover.
    # The smaller the number, the longer the failover takes; the larger it is, the more slaves
    # are temporarily unavailable because of replication. Setting it to 1 guarantees that only
    # one slave at a time is unable to serve requests.
    # sentinel parallel-syncs <master-name> <numslaves>
    sentinel parallel-syncs mymaster 1
    # Failover timeout. failover-timeout applies to:
    # 1. The interval between two failovers of the same master by the same sentinel.
    # 2. The time counted from when a slave starts syncing from a wrong master until it is
    #    corrected to sync from the right one.
    # 3. The time needed to cancel an in-progress failover.
    # 4. The maximum time to reconfigure all slaves to point at the new master during a failover.
    #    Even past this timeout the slaves are still eventually configured correctly, just no
    #    longer following the parallel-syncs rule.
    # Default: three minutes
    # sentinel failover-timeout <master-name> <milliseconds>
    sentinel failover-timeout mymaster 180000
    # SCRIPTS EXECUTION
    # Scripts to run when an event occurs, e.g. to e-mail an administrator when something goes wrong.
    # Rules for script results:
    # If a script exits with 1 it is retried later, currently up to 10 times by default.
    # If a script exits with 2 or higher it is not retried.
    # If a script is killed by a system interrupt signal, it behaves as if it had returned 1.
    # A script may run for at most 60 seconds; past that it is terminated with SIGKILL and retried.
    # Notification script: called whenever sentinel raises a warning-level event (for example the
    # subjective or objective failure of a Redis instance). The script should notify the system
    # administrator by e-mail, SMS, etc. It is called with two arguments: the event type and the
    # event description. If a script path is configured here, the script must exist at that path
    # and be executable, or sentinel will fail to start.
    # Notification script
    # sentinel notification-script <master-name> <script-path>
    sentinel notification-script mymaster /var/redis/notify.sh
    # Client reconfiguration script
    # Called when the master changes because of a failover, to tell clients that the master address has changed.
    # The following arguments are passed to the script:
    # <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
    # <state> is currently always "failover"
    # <role> is either "leader" or "observer"
    # from-ip/from-port and to-ip/to-port address the old master and the new master (the former slave)
    # The script should be generic and safe to call multiple times.
    # sentinel client-reconfig-script <master-name> <script-path>
    sentinel client-reconfig-script mymaster /var/redis/reconfig.sh
    

    For the test, start the master's Redis server process first, then the slaves', and finally the three sentinel processes.

    # Start the Redis server process
    ./redis-server ../redis.conf
    # Start the sentinel process
    ./redis-sentinel ../sentinel.conf
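    # Once everything is up, a sentinel can report the current master; a quick
    # sketch using the addresses from this setup:
    redis-cli -p 26379
    127.0.0.1:26379> SENTINEL get-master-addr-by-name mymaster
    1) "192.168.137.129"
    2) "6379"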
    

    Check the master's startup information:

    [root@localhost bin]# redis-cli -p 6379
    127.0.0.1:6379>
    127.0.0.1:6379> auth 123456 # log in with the password set in the configuration file
    

    Deliberately take the master down, then test again from a slave.

    After the master reconnects, check the info again.

    At this point, the setup is complete.

    Cache Penetration and Avalanche (to be added later)