1. Standalone
docker run --name redis -p 6379:6379 -d --restart=always redis:4.0 redis-server --appendonly yes --requirepass "passw0rd"
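A quick sanity check, assuming the container name and password used above:

docker exec -it redis redis-cli -a passw0rd ping
# a healthy instance replies with PONG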
2. Master-slave
Note: since we are not starting from a mounted config file here, the relevant settings can simply be passed on the command line, as shown below.
docker run --name redis-6383 -p 6383:6379 -d redis:4.0 --requirepass "passw0rd"

docker run --name redis-6384 -p 6384:6379 -d redis:4.0 --slaveof 121.43.162.28 6383 \
  --requirepass "passw0rd" --masterauth "passw0rd"

docker run --name redis-6385 -p 6385:6379 -d redis:4.0 --slaveof 121.43.162.28 6383 \
  --requirepass "passw0rd" --masterauth "passw0rd"
Add `--appendonly yes` to the commands above if AOF persistence is also wanted.
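To confirm the replicas have attached, query the master; a minimal check assuming the container names and password above:

docker exec -it redis-6383 redis-cli -a passw0rd info replication
# expect role:master and connected_slaves:2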
Starting from a mounted configuration file
Configuration file /usr/local/redis/redis.conf:

requirepass passw0rd

## RDB configuration
# dbfilename: file the data set is persisted to locally
dbfilename dump.rdb
# dir: local path the persisted data is stored under; if redis was started from /redis/redis-3.0.6/src,
# the data is stored in that src directory
dir ./
## When to trigger a snapshot: save <seconds> <changes>
## e.g. "save 900 1" snapshots after 900 seconds if there has been at least one change
## tune these values carefully, based on how write-heavy the system is
## snapshotting can be disabled entirely with: save ""
# The three lines below mean: persist if 1 key changed within 900s, 10 keys within 300s, 10000 keys within 60s
save 900 1
save 300 10
save 60 10000
## Whether to stop accepting writes when a background save fails (disk full, disk failure, OS-level errors, etc.)
stop-writes-on-bgsave-error yes
## Whether to compress RDB files, default "yes"; compression costs extra CPU but gives smaller files and shorter transfer times
rdbcompression yes

## AOF configuration
## Switch for the AOF feature, default "no"; the AOF rewrite / fsync settings below only take effect when this is "yes"
appendonly yes
## AOF file name
appendfilename appendonly.aof
## fsync policy for the AOF; legal values are always, everysec and no, default everysec
appendfsync everysec
## Whether to postpone fsync while an AOF rewrite is in progress; "no" means do not postpone (default), "yes" means postpone
no-appendfsync-on-rewrite no
## Minimum AOF file size (mb, gb) before a rewrite is triggered; rewrites only happen above this size, default "64mb", 512mb is often recommended
auto-aof-rewrite-min-size 64mb
## Growth, as a percentage of the size after the previous rewrite, that triggers the next rewrite.
## After each rewrite Redis records the size of the new AOF file (say A); once the file grows past A*(1 + p)
## the next rewrite is triggered; the current size is checked on every AOF append.
auto-aof-rewrite-percentage 100

# Password of the master node
masterauth passw0rd

Startup command:
docker run --name redis-6379 -p 6379:6379 \
  -v /usr/local/redis/redis.conf:/etc/redis/redis.conf \
  -d redis:4.0 redis-server /etc/redis/redis.conf
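To confirm the mounted file was actually loaded, read a couple of the settings back; a minimal check assuming the container name and password above:

docker exec -it redis-6379 redis-cli -a passw0rd config get appendonly
docker exec -it redis-6379 redis-cli -a passw0rd config get save
# expect appendonly = yes and save = 900 1 300 10 60 10000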
3. Installing Redis with Docker Compose
3.1 Redis master-slave setup
mkdir -p /usr/local/master-slave
cd /usr/local/master-slave && touch docker-compose.yml
- docker-compose.yml
version: "3.1"services:redis-6383:image: redis:4.0ports:- "6383:6379"restart: alwayscommand: ['--requirepass "passw0rd"','--masterauth "passw0rd"','--maxmemory 512mb','--maxmemory-policy volatile-ttl','--save ""',]redis-6384:image: redis:4.0ports:- "6384:6379"restart: alwaysdepends_on:- redis-6383command: ['--requirepass "passw0rd"','--masterauth "passw0rd"','--maxmemory 512mb','--maxmemory-policy volatile-ttl','--slaveof 121.43.162.28 6383','--save ""',]redis-6385:image: redis:4.0ports:- "6385:6379"restart: alwaysdepends_on:- redis-6383command: ['--requirepass "passw0rd"','--masterauth "passw0rd"','--maxmemory 512mb','--maxmemory-policy volatile-ttl','--slaveof 121.43.162.28 6383','--save ""',]
cd /usr/local/master-slave
# start
docker-compose up -d
# stop
docker-compose stop
# remove
docker-compose rm
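After docker-compose up -d, replication can be verified from inside the master service; a sketch assuming the service names and password above:

cd /usr/local/master-slave
docker-compose exec redis-6383 redis-cli -a passw0rd info replication
# expect role:master and connected_slaves:2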
3.2 Redis Sentinel cluster setup
Host 1
mkdir -p /usr/local/docker/redis-sentinel
cd /usr/local/docker/redis-sentinel
touch sentinel.conf && touch docker-compose.yml
- sentinel.conf
# Daemonize mode (do NOT enable this: inside a container the process must stay in the foreground)
#daemonize yes
# Port this Sentinel listens on
port 26379
# Working directory used by Sentinel
dir /data
# Monitor a master named mymaster at 121.43.162.28:6379; at least 2 Sentinels must agree that the master
# is down before it is treated as failed; without that quorum no automatic failover happens
sentinel monitor mymaster 121.43.162.28 6379 2
# Milliseconds after which Sentinel considers an instance down: if it does not answer PING within this
# time, or answers with an error, this Sentinel marks it subjectively down. One Sentinel marking it
# subjectively down does not by itself trigger failover; only when enough Sentinels agree is the instance
# marked objectively down, and only then does automatic failover run
sentinel down-after-milliseconds mymaster 30000
# How many replicas may resynchronize with the new master at the same time during a failover; the smaller
# the number, the longer a failover takes when there are many replicas
sentinel parallel-syncs mymaster 1
# If the failover does not complete within this time (ms), it is considered failed
sentinel failover-timeout mymaster 180000
# Password of the monitored master
sentinel auth-pass mymaster passw0rd
:::success
There is a pitfall here that puzzled me for a while. When a Sentinel is not told which IP and port to advertise, it auto-detects a local address, and the Sentinels talk to each other over those defaults (with one container per host, that means internal addresses). That by itself is fine, but for clients it is not: even if you configure the three Sentinels with the public addresses and ports you exposed, only the first connection works; once the client asks the Sentinels about the topology, it gets back the internal IPs the Sentinels use among themselves, which it cannot reach. So each Sentinel needs to declare its own public IP explicitly.
:::
- Append to sentinel.conf
# Declare this Sentinel's public IP address and port
sentinel announce-ip 121.43.162.28
sentinel announce-port 26379
- docker-compose.yml
version: "3.1"services:redis-6379:image: redis:4.0ports:- "6379:6379"restart: alwayscommand: ['--requirepass "passw0rd"','--masterauth "passw0rd"','--maxmemory 512mb','--maxmemory-policy volatile-ttl','--save ""',]redis-sentinel:image: redis:4.0restart: alwaysports:- "26379:26379"volumes:- "/usr/local/docker/redis-sentinel/sentinel.conf:/usr/local/etc/redis/sentinel.conf"privileged: truecommand: redis-sentinel /usr/local/etc/redis/sentinel.confdepends_on:- redis-6379
- Verify the Redis node
redis-cli -c -p 6379
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=47.96.100.166,port=6379,state=online,offset=17443,lag=1
slave1:ip=118.24.136.237,port=6379,state=online,offset=17443,lag=1
master_replid:954733f7a22868fc4a120ca6d12f55c98c59e7a8
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:17584
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:17584
- Verify the Sentinel
redis-cli -c -p 26379
info sentinel
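Beyond info sentinel, it is worth checking which addresses this Sentinel hands out, since that is exactly what the announce-ip / announce-port settings control; a sketch assuming the master name mymaster from sentinel.conf:

redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
# should return 121.43.162.28 and 6379, the address configured in "sentinel monitor"
redis-cli -p 26379 sentinel slaves mymaster
redis-cli -p 26379 sentinel sentinels mymaster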
Host 2
mkdir -p /usr/local/docker/redis-sentinel
cd /usr/local/docker/redis-sentinel
touch sentinel.conf && touch docker-compose.yml
- sentinel.conf
The sentinel.conf is identical to the one on host 1 (all three Sentinels monitor the same master at 121.43.162.28:6379); only the announce-ip appended below differs.
:::success
Same pitfall as on host 1: this Sentinel must also announce its own public IP and port, otherwise clients will be handed an unreachable internal address.
:::
- Append to sentinel.conf
# Declare this Sentinel's public IP address and port
sentinel announce-ip 47.96.100.166
sentinel announce-port 26379
- docker-compose.yml
version: "3.1"services:redis-6379:image: redis:4.0ports:- "6379:6379"restart: alwayscommand: ['--requirepass "passw0rd"','--masterauth "passw0rd"','--maxmemory 512mb','--maxmemory-policy volatile-ttl','--slaveof 121.43.162.28 6379','--save ""',]redis-sentinel:image: redis:4.0restart: alwaysports:- "26379:26379"volumes:- "/usr/local/docker/redis-sentinel/sentinel.conf:/usr/local/etc/redis/sentinel.conf"privileged: truecommand: redis-sentinel /usr/local/etc/redis/sentinel.confdepends_on:- redis-6379
- Verify the Redis node
redis-cli -c -p 6379
127.0.0.1:6379> info replication
# Replication
role:slave
master_host:121.43.162.28
master_port:6379
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_repl_offset:27688
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:954733f7a22868fc4a120ca6d12f55c98c59e7a8
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:27688
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:27688
Host 3
mkdir -p /usr/local/docker/redis-sentinel
cd /usr/local/docker/redis-sentinel
touch sentinel.conf && touch docker-compose.yml
- sentinel.conf
The sentinel.conf is identical to the one on host 1 (all three Sentinels monitor the same master at 121.43.162.28:6379); only the announce-ip appended below differs.
:::success
Same pitfall as on host 1: this Sentinel must also announce its own public IP and port, otherwise clients will be handed an unreachable internal address.
:::
- Append to sentinel.conf
# Declare this Sentinel's public IP address and port
sentinel announce-ip 118.24.136.237
sentinel announce-port 26379
- docker-compose.yml
version: "3.1"services:redis-6379:image: redis:4.0ports:- "6379:6379"restart: alwayscommand: ['--requirepass "passw0rd"','--masterauth "passw0rd"','--maxmemory 512mb','--maxmemory-policy volatile-ttl','--slaveof 121.43.162.28 6379','--save ""',]redis-sentinel:image: redis:4.0restart: alwaysports:- "26379:26379"volumes:- "/usr/local/docker/redis-sentinel/sentinel.conf:/usr/local/etc/redis/sentinel.conf"privileged: truecommand: redis-sentinel /usr/local/etc/redis/sentinel.confdepends_on:- redis-6379
- Verify the Redis node
redis-cli -c -p 6379
127.0.0.1:6379> info replication
# Replication
role:slave
master_host:121.43.162.28
master_port:6379
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_repl_offset:27688
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:954733f7a22868fc4a120ca6d12f55c98c59e7a8
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:27688
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:27688
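With all three hosts up, a manual failover drill is a good final check; a sketch assuming the service names and directories used above:

# on host 1: stop the current master
cd /usr/local/docker/redis-sentinel && docker-compose stop redis-6379
# on any host: after down-after-milliseconds (30s) plus the election, the Sentinels should report a new master
redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
# bring the old master back; it should rejoin as a replica of the new master
docker-compose start redis-6379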
3.3 Redis high-availability cluster setup
Install Ruby (if needed)
sudo apt-get install ruby
sudo yum install ruby
## on macOS:
# https://www.ruby-lang.org/en/documentation/installation/#homebrew
brew install ruby
Create the template file
mkdir -p /usr/local/docker/redis-cluster
cd /usr/local/docker/redis-cluster && touch redis-cluster.tmpl
- redis-cluster.tmpl
Do the following on each of the three servers:
# Port this node listens on
port ${port}
# Keyspace event notifications (expired events)
notify-keyspace-events Ex
# Enable cluster mode
cluster-enabled yes
# Name of the cluster config file. Note this file is not edited by hand; the cluster generates and maintains
# it at runtime, recording the other nodes, their state, variables and so on, so it can be re-read on startup
cluster-config-file nodes.conf
# Maximum time (ms) a node may be unreachable. If a master is unreachable for longer than this, one of its
# replicas takes over as master; if a node cannot reach a majority of masters within this time, it stops
# accepting requests
cluster-node-timeout 5000
# IP address to announce
cluster-announce-ip 0.0.0.0
# Data port to announce
cluster-announce-port ${port}
# Cluster bus port to announce
cluster-announce-bus-port 1${port}
# Enable AOF persistence
appendonly yes
# Whether full slot coverage is required. With the default "yes" the whole cluster stops serving as soon as
# any of the 16384 slots is uncovered; with "no" keys in the still-covered slots keep being served
cluster-require-full-coverage no
for port in `seq 7001 7002`; do \
  mkdir -p ./${port}/conf \
  && port=${port} envsubst < ./redis-cluster.tmpl > ./${port}/conf/redis.conf \
  && mkdir -p ./${port}/data; \
done
Remember to change `cluster-announce-ip 0.0.0.0` in each generated redis.conf to the host's actual public IP.
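One way to do that substitution for both generated files at once; a sketch in which 121.43.162.28 stands in for the current host's own public IP:

# run on each host, with that host's public IP
for port in `seq 7001 7002`; do \
  sed -i "s/^cluster-announce-ip .*/cluster-announce-ip 121.43.162.28/" ./${port}/conf/redis.conf; \
done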
- docker-compose.yml
version: "3.1"services:redis:image: redis:4.0ports:- "6379:6379"- "16379:16379"volumes:- "/usr/local/docker/redis-cluster/6379/conf/redis.conf:/usr/local/etc/redis/redis.conf"- "/usr/local/docker/redis-cluster/6379/data:/data"container_name: redis-clusterrestart: alwayscommand: redis-server /usr/local/etc/redis/redis.conf
Once `docker-compose up -d` below has been run on all three hosts, the three masters and three replicas are all running. They still need to be joined into one cluster; doing that with raw cluster commands is tedious, so we use the officially recommended redis-trib tool (implemented in Ruby).
cd /usr/local/docker/redis-cluster
# start
docker-compose up -d
# stop
docker-compose stop
# remove
docker-compose rm
# connect to any node to confirm it is up
redis-cli -c -h 121.43.162.28 -p 7001
Join the three masters and three replicas and assign the hash slots
Because the latest 5.0 release no longer recommends the Ruby tool, the redis-trib.rb from redis-stable/src cannot be used here, and I could not find the old version of the .rb file hosted online on its own, so use the command given below instead.
docker run -it --rm ruby sh -c '\
gem install redis --version=4.1.0 \
&& wget https://github.com/antirez/redis/archive/4.0.11.tar.gz \
&& tar -zxvf 4.0.11.tar.gz \
&& cd redis-4.0.11/src \
&& ruby redis-trib.rb create --replicas 1 \
121.43.162.28:7001 \
121.43.162.28:7002 \
47.96.100.166:7001 \
47.96.100.166:7002 \
118.24.136.237:7001 \
118.24.136.237:7002'

>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
121.43.162.28:7001
47.96.100.166:7001
118.24.136.237:7001
Adding replica 47.96.100.166:7002 to 121.43.162.28:7001
Adding replica 118.24.136.237:7002 to 47.96.100.166:7001
Adding replica 121.43.162.28:7002 to 118.24.136.237:7001
M: 6a8aaa3a2b5bbb5c768db9a2b5be6b653a060507 121.43.162.28:7001
   slots:0-5460 (5461 slots) master
S: 8f5b062d6cfc741f93dc4268bfdb264c922a1d4b 121.43.162.28:7002
   replicates 452d3091dcb62e479e2f823c849df40a7d01f9ed
M: c4885c0bc06e417bed929699147c41aa88a98aa9 47.96.100.166:7001
   slots:5461-10922 (5462 slots) master
S: 95f70de1ed840fbee1ef6d07b255ea974299bf7e 47.96.100.166:7002
   replicates 6a8aaa3a2b5bbb5c768db9a2b5be6b653a060507
M: 452d3091dcb62e479e2f823c849df40a7d01f9ed 118.24.136.237:7001
   slots:10923-16383 (5461 slots) master
S: 6d771e7ac6592b0d4d7c6e7812977cf7713b1ac3 118.24.136.237:7002
   replicates c4885c0bc06e417bed929699147c41aa88a98aa9
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 121.43.162.28:7001)
M: 6a8aaa3a2b5bbb5c768db9a2b5be6b653a060507 121.43.162.28:7001
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: c4885c0bc06e417bed929699147c41aa88a98aa9 47.96.100.166:7001
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 95f70de1ed840fbee1ef6d07b255ea974299bf7e 47.96.100.166:7002
   slots: (0 slots) slave
   replicates 6a8aaa3a2b5bbb5c768db9a2b5be6b653a060507
S: 6d771e7ac6592b0d4d7c6e7812977cf7713b1ac3 118.24.136.237:7002
   slots: (0 slots) slave
   replicates c4885c0bc06e417bed929699147c41aa88a98aa9
M: 452d3091dcb62e479e2f823c849df40a7d01f9ed 118.24.136.237:7001
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 8f5b062d6cfc741f93dc4268bfdb264c922a1d4b 121.43.162.28:7002
   slots: (0 slots) slave
   replicates 452d3091dcb62e479e2f823c849df40a7d01f9ed
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
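Once the slots are assigned, a quick smoke test with a cluster-aware client shows keys landing on (or being redirected to) the right master; a sketch against the first node:

# -c makes redis-cli follow MOVED redirects automatically
redis-cli -c -h 121.43.162.28 -p 7001 set hello world
redis-cli -c -h 121.43.162.28 -p 7001 get hello
# show which slot the key hashes to
redis-cli -h 121.43.162.28 -p 7001 cluster keyslot hello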
Cluster check
docker run -it --rm ruby sh -c '\
gem install redis --version=4.1.0 \
&& wget https://github.com/antirez/redis/archive/4.0.11.tar.gz \
&& tar -zxvf 4.0.11.tar.gz \
&& cd redis-4.0.11/src \
&& ruby redis-trib.rb check 121.43.162.28:7001'

>>> Performing Cluster Check (using node 121.43.162.28:7001)
M: 6a8aaa3a2b5bbb5c768db9a2b5be6b653a060507 121.43.162.28:7001
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: c4885c0bc06e417bed929699147c41aa88a98aa9 47.96.100.166:7001
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 95f70de1ed840fbee1ef6d07b255ea974299bf7e 47.96.100.166:7002
   slots: (0 slots) slave
   replicates 6a8aaa3a2b5bbb5c768db9a2b5be6b653a060507
S: 6d771e7ac6592b0d4d7c6e7812977cf7713b1ac3 118.24.136.237:7002
   slots: (0 slots) slave
   replicates c4885c0bc06e417bed929699147c41aa88a98aa9
M: 452d3091dcb62e479e2f823c849df40a7d01f9ed 118.24.136.237:7001
   slots:10923-16383 (5461 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
View cluster information with info
docker run -it --rm ruby sh -c '\
gem install redis --version=4.1.0 \
&& wget https://github.com/antirez/redis/archive/4.0.11.tar.gz \
&& tar -zxvf 4.0.11.tar.gz \
&& cd redis-4.0.11/src \
&& ruby redis-trib.rb info 121.43.162.28:7001'
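The same information is also available directly from any node with plain redis-cli, without pulling the Ruby image; for example:

redis-cli -h 121.43.162.28 -p 7001 cluster info
redis-cli -h 121.43.162.28 -p 7001 cluster nodes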
