sxd-redis-distributed
2021-07-15 10:06:35
How to set up Redis Sentinel and Redis Cluster, and how to integrate them with predixy and twemproxy
Outline / Content
Sentinel 2:
43545:X 26 May 2021 19:42:46.505 # +new-epoch 1
43545:X 26 May 2021 19:42:46.506 # +vote-for-leader 67edcdc56a3ad000a196dcc6521d879f19aa2e35 1  (casts its vote for the leader sentinel of this epoch, i.e. which sentinel will run the failover)
43545:X 26 May 2021 19:42:47.505 # +sdown master mymaster 127.0.0.1 6379
43545:X 26 May 2021 19:42:47.505 # +odown master mymaster 127.0.0.1 6379 #quorum 1/1
43545:X 26 May 2021 19:42:47.505 # Next failover delay: I will not start a failover before Wed May 26 19:48:46 2021
43509:X 26 May 2021 19:42:47.813 # +config-update-from sentinel 67edcdc56a3ad000a196dcc6521d879f19aa2e35 127.0.0.1 26381 @ mymaster 127.0.0.1 6379  (sentinel config file updated: the master has changed)
43509:X 26 May 2021 19:42:47.813 # +switch-master mymaster 127.0.0.1 6379 127.0.0.1 6381  (records the old master and the new one)
43509:X 26 May 2021 19:42:47.813 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6381  (registers the replica under the new master)
43509:X 26 May 2021 19:42:47.813 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381  (the old master is also registered as a replica of the new master)
43509:X 26 May 2021 19:43:17.865 # +sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381  (the old node, still offline, is marked down in the new topology)
joyieldInc/predixy
Sentinel mode
Pick a replica and connect to it with redis-cli -p 6381
Plain (non-cluster-mode) client connection: the command can be issued, but it comes back with an error
127.0.0.1:30001> set {oo}k1 value1
OK
127.0.0.1:30001> watch {oo}k1
OK
127.0.0.1:30001> MULTI
OK
127.0.0.1:30001> set {oo}k2 a
QUEUED
127.0.0.1:30001> set {oo}k3 b
QUEUED
127.0.0.1:30001> get {oo}a
QUEUED
127.0.0.1:30001> EXEC
1) OK
2) OK
3) (nil)
127.0.0.1:30001> get {oo}k2
Once a key is thrown in, the client itself no longer knows which node it landed on
Horizontal: master/backup (full copies)
Hash tags make transactions possible; the client can connect to any node and operate on the data, with redirects handled automatically; nodes can be added and removed by hand and data moved between them; see redis-cli --cluster help for the details (a slot-calculation sketch follows below)
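Hash tags work because Redis Cluster maps a key to a slot by hashing only the {tag} part when one is present, so {oo}k1, {oo}k2 and {oo}k3 all land in the same slot and can share a WATCH/MULTI/EXEC. A minimal Python sketch of that slot calculation (the crc16 below is the standard CRC-16/XMODEM, written out here only for illustration):

def crc16(data: bytes) -> int:
    # CRC-16/XMODEM, the variant Redis Cluster uses for key slots
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # if the key contains a non-empty {tag}, only the tag is hashed
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

print(key_slot("{oo}k1") == key_slot("{oo}k2"))  # True: same tag, same slot, same node
print(key_slot("k1"))                            # 12706, matching the MOVED 12706 seen below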
Weak consistency
redis
If 6379 comes back up
node-2
Run a check at this point: the newly added node has not been assigned any slots
Verify
When the replica has AOF enabled
redis cluster
hash
Pre-sharding
client 4
Once sentinel mode is running, it rewrites the original configuration files
All nodes acknowledge: strong consistency. Only some acknowledge? How many are enough?
Master 6379 log:
39485:M 26 May 2021 16:54:40.573 * Replica 127.0.0.1:6380 asks for synchronization
39485:M 26 May 2021 16:54:40.573 * Partial resynchronization request from 127.0.0.1:6380 accepted. Sending 0 bytes of backlog starting from offset 6007.
When the replica was previously attached to this master
mapping 0, 1, 2
proxy, e.g. nginx
Monitoring
1 client
Move slots from one node to another; here 1000 slots are moved:
redis-cli --cluster reshard 127.0.0.1:30001 --cluster-from 9986717d8131de7fdc744ff82ba18af2d8455715 --cluster-to d5b47c99f30dddf01f15a7b55f6ddcf695a7740d --cluster-slots 1000
--cluster-from: node ID that currently owns the slots (multiple IDs separated by commas)
--cluster-to: node ID the slots will be assigned to (apparently only one destination per run)
--cluster-slots: number of slots to move
Manual
Heavy load on the proxy server
The value may not be found (a miss)
port 26380
sentinel monitor mymaster 127.0.0.1 6379 1
(what the sentinel monitors: group name, ip, port, quorum)
redis-cli --cluster check 127.0.0.1:30001 --cluster-search-multiple-owners
127.0.0.1:30001 (9986717d...) -> 0 keys | 5461 slots | 1 slaves.
127.0.0.1:30002 (d5b47c99...) -> 0 keys | 5462 slots | 1 slaves.
127.0.0.1:30003 (95ba199f...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:30001)
M: 9986717d8131de7fdc744ff82ba18af2d8455715 127.0.0.1:30001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 16fc66e4d007f32f2923ba3812d55b81acebccdf 127.0.0.1:30004
   slots: (0 slots) slave
   replicates d5b47c99f30dddf01f15a7b55f6ddcf695a7740d
S: d1999ab0d3e92278c235062aab98a2ba74bef9a8 127.0.0.1:30006
   slots: (0 slots) slave
   replicates 9986717d8131de7fdc744ff82ba18af2d8455715
M: d5b47c99f30dddf01f15a7b55f6ddcf695a7740d 127.0.0.1:30002
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: d67229a14ffb13cd7c1888cca7a718e6fed46bce 127.0.0.1:30005
   slots: (0 slots) slave
   replicates 95ba199f30d109e46ed6100e3e079add00a1de7b
M: 95ba199f30d109e46ed6100e3e079add00a1de7b 127.0.0.1:30003
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Check for multiple slot owners...
cd /usr/local/Cellar/redis/5.0.8/bin
./redis-server /usr/local/Cellar/redis/5.0.8/study/6381/6381.conf
Connect with a client and run: replicaof localhost 6379
Pros: adding a node takes load off the existing nodes without a global reshuffle
Cons: a small portion of the data can no longer be hit
1. Problem: those misses punch through to MySQL
2. Mitigation: also read from the nearest two physical nodes on the ring
This leans toward being a cache, not a database!
AKF: single-node problems
1. single point of failure
2. limited capacity
3. load
Split along the x, y, z axes:
x: full copies, mirrors
y: by business, by function
z: split further by priority or other logic (cluster)
Data consistency!
Strong consistency (synchronous): all nodes block until every one of them agrees, which hurts availability
Weak consistency (asynchronous): tolerates losing some data
Sentinel 1:
43509:X 26 May 2021 19:42:46.505 # +new-epoch 1
43509:X 26 May 2021 19:42:46.506 # +vote-for-leader 67edcdc56a3ad000a196dcc6521d879f19aa2e35 1  (casts its vote for the leader sentinel of this epoch, i.e. which sentinel will run the failover)
43509:X 26 May 2021 19:42:46.553 # +sdown master mymaster 127.0.0.1 6379
43509:X 26 May 2021 19:42:46.553 # +odown master mymaster 127.0.0.1 6379 #quorum 1/1
43509:X 26 May 2021 19:42:46.553 # Next failover delay: I will not start a failover before Wed May 26 19:48:47 2021
43509:X 26 May 2021 19:42:47.813 # +config-update-from sentinel 67edcdc56a3ad000a196dcc6521d879f19aa2e35 127.0.0.1 26381 @ mymaster 127.0.0.1 6379  (sentinel config file updated: the master has changed)
43509:X 26 May 2021 19:42:47.813 # +switch-master mymaster 127.0.0.1 6379 127.0.0.1 6381  (records the old master and the new one)
43509:X 26 May 2021 19:42:47.813 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6381  (registers the replica under the new master)
43509:X 26 May 2021 19:42:47.813 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381  (the old master is also registered as a replica of the new master)
43509:X 26 May 2021 19:43:17.865 # +sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381  (the old node, still offline, is marked down in the new topology)
3: fetch the data from here
Log of the second sentinel: it notices that a sentinel is already monitoring
43545:X 26 May 2021 19:32:57.398 # Sentinel ID is 38432172242251c7320ffd2dfaa7ef85f7d16776
43545:X 26 May 2021 19:32:57.398 # +monitor master mymaster 127.0.0.1 6379 quorum 1
43545:X 26 May 2021 19:32:57.399 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
43545:X 26 May 2021 19:32:57.399 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
43545:X 26 May 2021 19:32:58.552 * +sentinel sentinel 389fa9a88ff573ef03175ece4d17b35112330b8c 127.0.0.1 26379 @ mymaster 127.0.0.1 6379
When the third sentinel starts, one more line is appended, showing another sentinel joining:
+sentinel sentinel 67edcdc56a3ad000a196dcc6521d879f19aa2e35 127.0.0.1 26381 @ mymaster 127.0.0.1 6379
Tweak a few config options:
logfile: stop the background log output
appendonly no: disable the AOF log
Run replicaof no one to turn the current node into a master
Automatic failover
Clients connect to these two Redis nodes and rpop
cd /usr/local/Cellar/redis/5.0.8/bin/
./redis-server /usr/local/Cellar/redis/5.0.8/study/6380/6380.conf
Connect with a client and run: replicaof localhost 6379
proxy
Slot allocation log
Main configuration options:
1. slave-serve-stale-data yes: during replication the replica keeps serving client requests; with no it blocks all requests and answers "SYNC with master in progress"
2. replica-read-only yes: whether the replica is read-only (yes = read-only, no = writable)
3. repl-diskless-sync no: generate an RDB on disk and ship that; yes streams it over the network instead, which suits environments where disks are slow but bandwidth is very high
4. repl-backlog-size 1mb: size of the backlog buffer that holds the data to sync while a replica is disconnected; a reconnecting replica therefore usually needs only a partial resync rather than a full one, and the larger the backlog, the longer a replica may stay disconnected
5. min-slaves-to-write 3 / min-slaves-max-lag 10: when the master has fewer than N usable replicas, or their lag exceeds M seconds, stop accepting writes; this pushes toward strong consistency, trade it off as you see fit
Y
Both ways of starting it produce the same log
Virtual nodes
redis 1
2: determines that the slot is not here
client 2
X
Difference between master-backup and master-replica: a backup serves no traffic and only keeps a full copy of the data; a replica usually serves reads, giving read/write separation
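A minimal read/write-separation sketch, assuming the redis-py client and the localhost 6379 (master) / 6380 (replica) layout used here; it also shows what replica-read-only yes means in practice:

import redis
from redis.exceptions import ReadOnlyError

master = redis.Redis(host="localhost", port=6379)   # writes go to the master
replica = redis.Redis(host="localhost", port=6380)  # reads can be served by the replica

master.set("k1", "v1")
print(replica.get("k1"))      # may briefly be None: replication is asynchronous (weak consistency)

try:
    replica.set("k1", "v2")   # rejected while replica-read-only is yes
except ReadOnlyError:
    print("the replica is read-only")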
When multiple sentinel groups (shards) are monitored, many multi-key/aggregate operations cannot be used: WATCH, MULTI across keys, KEYS *, SDIFF, SUNION, ...; with a single sentinel group they still work
sentinel1
get k1
Problems introduced by master-replica replication
Proxy mapping algorithm: hash modulo 10, giving buckets 0,1,2,3...10
Problems when one node becomes many:
1. Data consistency: synchronization, reads that come back empty
2. The number of cluster nodes is usually odd
1: A sentinel's own network trouble can make its judgement wrong; the real problem: network partition, split brain!
2: Require a majority: n/2 + 1 (a quick worked example follows below)
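A quick worked example of the majority rule: since the quorum is n/2 + 1, an even-sized group tolerates no more failures than the odd-sized group one node smaller, which is why an odd node count is preferred.

# majority quorum: how many failures a group of n deciders can tolerate
for n in (3, 4, 5):
    majority = n // 2 + 1
    print(f"{n} nodes: majority {majority}, tolerates {n - majority} failure(s)")
# 3 -> 1, 4 -> 1, 5 -> 2: the extra 4th node adds cost but no extra fault tolerance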
mapping 3, 4, 5
Master-backup, master-replica
client 3
3
This point is a physical node
client
redis-cli -p 7617
127.0.0.1:7617> WATCH k1
OK
127.0.0.1:7617> MULTI
OK
127.0.0.1:7617> get k1
QUEUED
127.0.0.1:7617> set k1 3
QUEUED
127.0.0.1:7617> get k1
QUEUED
127.0.0.1:7617> EXEC
Drawback: many multi-key/aggregate operations cannot be used: WATCH, MULTI across keys, KEYS *, SDIFF, SUNION, ...
cd /usr/local/Cellar/redis/5.0.8/bin
./redis-server /usr/local/Cellar/redis/5.0.8/study/6380/26380.conf --sentinel
A single master node handling reads and writes, plus monitoring
alpha:
  listen: 127.0.0.1:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
   - 127.0.0.1:6390:1
   - 127.0.0.1:6391:1
   - 127.0.0.1:6392:1
Log of the third sentinel: it sees that the first two sentinels are already monitoring
43594:X 26 May 2021 19:35:33.033 # Sentinel ID is 67edcdc56a3ad000a196dcc6521d879f19aa2e35
43594:X 26 May 2021 19:35:33.033 # +monitor master mymaster 127.0.0.1 6379 quorum 1
43594:X 26 May 2021 19:35:33.034 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
43594:X 26 May 2021 19:35:33.035 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
The next two lines show it discovering the two existing sentinels:
43594:X 26 May 2021 19:35:33.504 * +sentinel sentinel 389fa9a88ff573ef03175ece4d17b35112330b8c 127.0.0.1 26379 @ mymaster 127.0.0.1 6379
43594:X 26 May 2021 19:35:34.343 * +sentinel sentinel 38432172242251c7320ffd2dfaa7ef85f7d16776 127.0.0.1 26380 @ mymaster 127.0.0.1 6379
View detailed cluster information
list key: ooxx
mapping 5, 6, 7
mapping 6, 7, 8
2-2 client
cd /usr/local/Cellar/redis/5.0.8/bin
./redis-server /usr/local/Cellar/redis/5.0.8/study/6381/26381.conf --sentinel
43285:S 26 May 2021 19:42:46.939 * Connecting to MASTER 127.0.0.1:6379
43285:S 26 May 2021 19:42:46.939 * MASTER <-> REPLICA sync started
43285:S 26 May 2021 19:42:46.939 # Error condition on socket for SYNC: Connection refused
Allocation
node-1
Manually simulating what the sentinel does
Message queue: ooxx is the topic; each Redis node acts like a partition; Kafka is very similar, but disk-based, and its messages can be consumed repeatedly
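A sketch of the list-as-queue idea with redis-py (names are just for illustration): producers LPUSH onto the ooxx list, consumers block on BRPOP, and each message is delivered to exactly one consumer and then gone, unlike Kafka where it can be re-read.

import redis

r = redis.Redis(host="localhost", port=6379)

# producer: push messages onto the "ooxx" list
r.lpush("ooxx", "msg-1", "msg-2")

# consumer: block until a message arrives, then pop it
item = r.brpop("ooxx", timeout=5)
if item:
    queue, msg = item
    print(queue, msg)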
Eviction policies: LRU, LFU
mapping 0, 1, 2
Master 6379 log:
39485:M 26 May 2021 15:40:37.341 * Replica 127.0.0.1:6380 asks for synchronization  (the replica asks to join)
39485:M 26 May 2021 15:40:37.341 * Full resync requested by replica 127.0.0.1:6380
39485:M 26 May 2021 15:40:37.341 * Starting BGSAVE for SYNC with target: disk  (start dumping the dataset to disk)
39485:M 26 May 2021 15:40:37.341 * Background saving started by pid 39521  (fork a child process to save the DB)
39521:C 26 May 2021 15:40:37.342 * DB saved on disk.  (saved)
39485:M 26 May 2021 15:40:37.371 * Background saving terminated with success
39485:M 26 May 2021 15:40:37.371 * Synchronization with replica 127.0.0.1:6380 succeeded  (sent to the replica asynchronously)
With a majority rule, 4 nodes tolerate 1 failure and 5 nodes tolerate 2, so an odd number of nodes is normally used
Why introduce a proxy layer: the shard-mapping computation that used to live in every client can now move into the proxy
The data can be categorized and the categories barely overlap
Ways to implement partitioning
Master-backup: the master alone serves reads and writes (CRUD)
redis 3
✘ duandian@MacBook-Pro-2 /usr/local/Cellar/redis/redis-5.0.12 redis-cli -p 30001
127.0.0.1:30001> set k1 1
(error) MOVED 12706 127.0.0.1:30003
127.0.0.1:30001> get k1
(error) MOVED 12706 127.0.0.1:30003
127.0.0.1:30001>
How to make transactions work
VIP
client 1
client n
Hash / mapping algorithms: crc16, crc32, fnv, md5
Configuration
Master-replica: usually the master takes the writes and the replicas take the reads
duandian@MacBook-Pro-2 ~ redis-cli --cluster info 127.0.0.1:30001
127.0.0.1:30001 (9986717d...) -> 0 keys | 5461 slots | 1 slaves.
127.0.0.1:30002 (d5b47c99...) -> 0 keys | 5462 slots | 1 slaves.
127.0.0.1:30003 (95ba199f...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
Asynchronous
No intermediate step writes the data to disk
Logic options implemented at the proxy layer: modula (hash + modulo), random, ketama (consistent hashing)
LVS
Synchronous
redis 2
The replica keeps reporting that it cannot connect to the master
0 to 2^32 virtual points on the ring
SentinelServerPool {
    Databases 16
    Hash crc16
    HashTag "{}"
    Distribution modula
    MasterReadPriority 60
    StaticSlaveReadPriority 50
    DynamicSlaveReadPriority 50
    RefreshInterval 1
    ServerTimeout 1
    ServerFailureLimit 10
    ServerRetryTimeout 1
    KeepAlive 120
    Sentinels {
        + 127.0.0.1:26379
        + 127.0.0.1:26380
        + 127.0.0.1:26381
    }
    Group mymaster {
    }
}
mapping 3, 4, 8, 9
Logic: hash + modulo (modula)
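A minimal client-side sketch of the hash + modulo mapping (the node list is hypothetical; any stable hash such as crc32 will do):

import zlib

nodes = ["127.0.0.1:6379", "127.0.0.1:6380", "127.0.0.1:6381"]  # hypothetical shard list

def node_for(key: str, node_count: int) -> int:
    # hash the key, then take the result modulo the (fixed) number of nodes
    return zlib.crc32(key.encode()) % node_count

print(nodes[node_for("k1", len(nodes))])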
2-3 client
All three modes above share the same drawback: data can be lost, so they cannot serve as the persistence layer, only as a cache
(1) First delete the master's slave:
redis-cli --cluster del-node 127.0.0.1:6386 530cf27337c1141ed12268f55ba06c15ca8494fc
del-node takes the slave's ip:port and node ID
(2) Empty the master's slots:
redis-cli --cluster reshard 127.0.0.1:6385 --cluster-from 46f0b68b3f605b3369d3843a89a2b4a164ed21e8 --cluster-to 2846540d8284538096f111a8ce7cf01c50199237 --cluster-slots 1024 --cluster-yes
The reshard subcommand was covered above; note that since this cluster has four masters and each reshard run takes only one destination node, the command has to be run three times, each time with a different --cluster-to.
--cluster-yes: do not echo the slots to be migrated, just migrate them.
(3) Take the node offline (delete it):
redis-cli --cluster del-node 127.0.0.1:6385 46f0b68b3f605b3369d3843a89a2b4a164ed21e8
That covers the basic redis cluster operations
Eventual consistency
Moving slots
2
Data arrives: key a
keepalived
set k1 a
redis master
client: get k1
Start the sentinels
When a new node is brought up, sync and recompute the data for it; if changes arrive in the meantime, keep them in an incremental queue and ship them over at the end
Logic: random (e.g. lpush to whichever node)
Strong consistency
Opening a transaction the ordinary way is still unsupported: every key that is read or written gets redirected to the node that owns it, and the transaction was never opened on those other connections
Both replicas will keep reporting errors
cluster
cd /usr/local/Cellar/redis/5.0.8/bin
./redis-server /usr/local/Cellar/redis/5.0.8/study/6379/6379.conf
node-3, added later
Pick a replica and connect to it with redis-cli -p 6380
Check
Sentinel logs
Mitigates data skew
Logic: ketama consistent hashing, no modulo; both the key and the node are hashed onto the ring
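A sketch of the ketama-style ring, assuming md5 as the hash and a sorted Python list standing in for the TreeMap mentioned elsewhere; each physical node is hashed onto the ring many times as virtual nodes, which is what smooths out the data skew:

import bisect
import hashlib

def ring_hash(value: str) -> int:
    # both nodes and keys are placed on the same 0 .. 2^32-1 ring
    return int.from_bytes(hashlib.md5(value.encode()).digest()[:4], "big")

nodes = ["127.0.0.1:6379", "127.0.0.1:6380", "127.0.0.1:6381"]  # hypothetical
VIRTUAL = 100  # virtual points per physical node

ring = sorted((ring_hash(f"{node}#{i}"), node) for node in nodes for i in range(VIRTUAL))
points = [p for p, _ in ring]

def node_for(key: str) -> str:
    # walk clockwise to the first virtual point at or after the key's position
    i = bisect.bisect_left(points, ring_hash(key)) % len(ring)
    return ring[i][1]

print(node_for("k1"))
# adding a node only takes over the arcs just before its new points; other keys stay put,
# but keys on those arcs now miss, hence the idea of also checking the next physical node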
port 26381
sentinel monitor mymaster 127.0.0.1 6379 1
(what the sentinel monitors: group name, ip, port, quorum)
When the first sentinel is started:
43509:X 26 May 2021 19:31:08.305 # Sentinel ID is 389fa9a88ff573ef03175ece4d17b35112330b8c
43509:X 26 May 2021 19:31:08.305 # +monitor master mymaster 127.0.0.1 6379 quorum 1  (from the config)
The next two lines show it already knows which nodes are in the replica set:
43509:X 26 May 2021 19:31:08.306 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
43509:X 26 May 2021 19:31:08.307 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
When the second sentinel is started, one more line is appended, showing another sentinel joining:
+sentinel sentinel 38432172242251c7320ffd2dfaa7ef85f7d16776 127.0.0.1 26380 @ mymaster 127.0.0.1 6379
When the third sentinel is started, another such line is appended:
+sentinel sentinel 67edcdc56a3ad000a196dcc6521d879f19aa2e35 127.0.0.1 26381 @ mymaster 127.0.0.1 6379
autoreconf -fvi && ./configure needs automake and libtool to be installed
mac: brew install automake and brew install libtool
linux: yum install automake libtool -y (the libtool in the default repo may be too old; install a newer one from the Aliyun repo)
Build steps:
$ git clone git@github.com:twitter/twemproxy.git
$ cd twemproxy
$ autoreconf -fvi
$ ./configure --enable-debug=full
$ make
/usr/local/Cellar/twemproxy/twemproxy
The config files are under conf, the binary under src
Start it with: ./src/nutcracker ./conf/nutcracker.yml
Then start the 3 redis servers
port 26379
sentinel monitor mymaster 127.0.0.1 6379 1
(what the sentinel monitors: group name, ip, port, quorum; several groups can be monitored)
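Clients talk to the sentinels first and ask them where the master currently is; a minimal sketch assuming redis-py's Sentinel helper and the three local sentinels configured above:

from redis.sentinel import Sentinel

sentinel = Sentinel(
    [("127.0.0.1", 26379), ("127.0.0.1", 26380), ("127.0.0.1", 26381)],
    socket_timeout=0.5,
)

print(sentinel.discover_master("mymaster"))  # current (ip, port) of the master

master = sentinel.master_for("mymaster", socket_timeout=0.5)   # keeps following failovers
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)

master.set("k1", "v1")
print(replica.get("k1"))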
Problems a cluster must solve, and the options
When the master goes down
Stateless
When the master goes down: 6380
Sharding
cd /usr/local/Cellar/redis/5.0.8/bin/
./redis-server /usr/local/Cellar/redis/5.0.8/study/6380/6380.conf --replicaof localhost 6379
Add the configuration for the 3 sentinels
How the sentinels discover each other
Connect with redis-cli and check which node the data landed on
When one replica (6380) goes down
All 3 sentinels notice that the master is offline
The 3 sentinels vote to pick a leader sentinel; judging by the timestamps, everything before 19:42:46.553 is that leader election
Judging by the timestamps, the first two sentinels do nothing between 19:42:46.553 and 19:42:47.813
What the leader sentinel does between 19:42:46.553 and 19:42:47.813:
  look at the current master
  pick the new master
  send replicaof no one to the chosen node to promote it to master
  wait for the command to return
  attach the remaining replicas to the new master (nodes that are already down fail to join)
  rewrite the sentinel config files
When the old master comes back up, it automatically joins the new topology as a replica
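The same promotion steps can be replayed by hand; a rough sketch with redis-py, using the ports from this setup and promoting 6381:

import redis

new_master = redis.Redis(host="127.0.0.1", port=6381)
other_replica = redis.Redis(host="127.0.0.1", port=6380)

# 1. promote the chosen replica to master
new_master.execute_command("REPLICAOF", "NO", "ONE")

# 2. point the remaining replicas at it (nodes that are down will fail to rejoin)
other_replica.execute_command("REPLICAOF", "127.0.0.1", "6381")

# 3. when the old master (6379) comes back, run REPLICAOF 127.0.0.1 6381 on it as well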
redis cluster
Query routing
Sentinel 3:
43594:X 26 May 2021 19:42:46.498 # +sdown master mymaster 127.0.0.1 6379
43594:X 26 May 2021 19:42:46.498 # +odown master mymaster 127.0.0.1 6379 #quorum 1/1
43594:X 26 May 2021 19:42:46.498 # +new-epoch 1
43594:X 26 May 2021 19:42:46.498 # +try-failover master mymaster 127.0.0.1 6379
43594:X 26 May 2021 19:42:46.504 # +vote-for-leader 67edcdc56a3ad000a196dcc6521d879f19aa2e35 1  (casts its vote for the leader sentinel of this epoch, i.e. which sentinel will run the failover)
43594:X 26 May 2021 19:42:46.506 # 389fa9a88ff573ef03175ece4d17b35112330b8c voted for 67edcdc56a3ad000a196dcc6521d879f19aa2e35 1
43594:X 26 May 2021 19:42:46.506 # 38432172242251c7320ffd2dfaa7ef85f7d16776 voted for 67edcdc56a3ad000a196dcc6521d879f19aa2e35 1
43594:X 26 May 2021 19:42:46.585 # +elected-leader master mymaster 127.0.0.1 6379  (elected leader for the failover of 6379)
43594:X 26 May 2021 19:42:46.585 # +failover-state-select-slave master mymaster 127.0.0.1 6379  (failover state: selecting a replica)
43594:X 26 May 2021 19:42:46.664 # +selected-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379  (the new master has been chosen)
43594:X 26 May 2021 19:42:46.664 * +failover-state-send-slaveof-noone slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379  (send the replica "slaveof no one" to turn it into a master)
43594:X 26 May 2021 19:42:46.719 * +failover-state-wait-promotion slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379  (failover state: waiting for the promotion)
43594:X 26 May 2021 19:42:47.716 # +promoted-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379  (the replica has been promoted to master)
43594:X 26 May 2021 19:42:47.716 # +failover-state-reconf-slaves master mymaster 127.0.0.1 6379  (failover state: reconfiguring the remaining replicas)
43594:X 26 May 2021 19:42:47.812 * +slave-reconf-sent slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379  (point 6380 at the new master)
43594:X 26 May 2021 19:42:48.718 * +slave-reconf-inprog slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
43594:X 26 May 2021 19:42:48.718 * +slave-reconf-done slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
43594:X 26 May 2021 19:42:48.785 # +failover-end master mymaster 127.0.0.1 6379
43594:X 26 May 2021 19:42:48.785 # +switch-master mymaster 127.0.0.1 6379 127.0.0.1 6381
43594:X 26 May 2021 19:42:48.785 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6381
43594:X 26 May 2021 19:42:48.785 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381
43594:X 26 May 2021 19:43:18.838 # +sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381
It is no longer a master; run replicaof localhost 6380 to attach it to the 6380 replica set
Logic: split by business domain
redis sentinel, master-replica replication
Removing a node
Supports hash tags
HA for the master
Run replicaof localhost 6380 to attach it to the 6380 replica set
The modulus has to stay fixed; modulo sharding has an inherent drawback that limits horizontal scaling (see the sketch below)
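A quick worked demonstration of that drawback, reusing the hypothetical crc32-mod mapping from the sketch above: going from 3 to 4 nodes remaps roughly three quarters of all keys, so the cache is almost entirely reshuffled.

import zlib

def node_for(key: str, node_count: int) -> int:
    return zlib.crc32(key.encode()) % node_count

keys = [f"user:{i}" for i in range(10_000)]
moved = sum(1 for k in keys if node_for(k, 3) != node_for(k, 4))
print(f"{moved / len(keys):.0%} of keys change node when going from 3 to 4 nodes")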
When the data cannot be split up
4
twemproxy
Only then can it scale out
Using hash tags, transactions are supported
Arranged into a hash ring
Partitioning in the proxy layer
Add a new master and replica node
1: fetch the data
Heavy load on the server
Partitioning in the client
They discover each other through publish/subscribe
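That discovery traffic can be observed directly: each sentinel periodically publishes its own address and the master it watches to the __sentinel__:hello channel on the monitored master, which is how the +sentinel lines show up in the logs. A small redis-py sketch:

import redis

master = redis.Redis(host="127.0.0.1", port=6379)
p = master.pubsub()
p.subscribe("__sentinel__:hello")

# each running sentinel announces itself on this channel every couple of seconds
for _ in range(5):
    msg = p.get_message(timeout=3)
    if msg and msg["type"] == "message":
        print(msg["data"])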
Some cluster operations
not connected>
duandian@MacBook-Pro-2 /usr/local/Cellar/redis/redis-5.0.12 redis-cli -p 30001 -c
127.0.0.1:30001> set k1
(error) ERR wrong number of arguments for 'set' command
127.0.0.1:30001> get k1
-> Redirected to slot [12706] located at 127.0.0.1:30003
(nil)
127.0.0.1:30003> set k2 1
-> Redirected to slot [449] located at 127.0.0.1:30001
OK
127.0.0.1:30001> get k2
"1"
127.0.0.1:30003> MULTI
OK
127.0.0.1:30003> get k1
QUEUED
127.0.0.1:30003> get k2
-> Redirected to slot [449] located at 127.0.0.1:30001
"1"
127.0.0.1:30001> EXEC
(error) ERR EXEC without MULTI
127.0.0.1:30001>
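What redis-cli -c does interactively, a cluster-aware client library does transparently; a minimal sketch assuming redis-py 4+ and its RedisCluster class:

from redis.cluster import RedisCluster

rc = RedisCluster(host="127.0.0.1", port=30001)  # any node works as the entry point

rc.set("k1", 1)      # the client hashes k1 to slot 12706 and talks to 30003 directly
print(rc.get("k1"))  # MOVED redirects are followed automatically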
Monitoring: by a person or by a program; but then the program itself is a single point, so it also needs to be a cluster
Problems introduced by clustering
client get k1
redis-cli -p 22121
127.0.0.1:22121> set k1 1
OK
127.0.0.1:22121> get k1
"1"
127.0.0.1:22121> keys *
Error: Server closed the connection
127.0.0.1:22121> set k2 2
OK
127.0.0.1:22121> set k3 2
OK
127.0.0.1:22121> set a a
OK
127.0.0.1:22121> set 1 1
OK
127.0.0.1:22121> set abc 2
OK
127.0.0.1:22121> set abdcd 2wwe
OK
127.0.0.1:22121> SETBIT 12 1 1
(integer) 0
127.0.0.1:22121> get k1
"1"
127.0.0.1:22121> get k1
"1"
127.0.0.1:22121> get abc
"2"
127.0.0.1:22121> get abc
(error) ERR Broken pipe
127.0.0.1:22121> get abc
(error) ERR Broken pipe
127.0.0.1:22121> get abc
(nil)
127.0.0.1:22121> get abc
(error) ERR Broken pipe
127.0.0.1:22121> get abc
(error) ERR Broken pipe
127.0.0.1:22121> get abc
2-1 client
cd /usr/local/Cellar/redis/5.0.8/bin
./redis-server /usr/local/Cellar/redis/5.0.8/study/6379/26379.conf --sentinel
Manual failover
1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 127.0.0.1:30005 to 127.0.0.1:30001
Adding replica 127.0.0.1:30006 to 127.0.0.1:30002
Adding replica 127.0.0.1:30004 to 127.0.0.1:30003
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 2ae9a8174b691a26960ce7b22fd64f9c6ea878ba 127.0.0.1:30001
   slots:[0-5460] (5461 slots) master
M: 078809a6f0002113275c9dd60b033a9e167568d6 127.0.0.1:30002
   slots:[5461-10922] (5462 slots) master
M: cae83c912767138a38f871dd223f666fdef22464 127.0.0.1:30003
   slots:[10923-16383] (5461 slots) master
S: 3bbd2349e0a0d4b927aaef7938cb46ec608e4030 127.0.0.1:30004
   replicates cae83c912767138a38f871dd223f666fdef22464
S: 7f8a38e2cdf8c8a0ec6ad2754559668647af084d 127.0.0.1:30005
   replicates 2ae9a8174b691a26960ce7b22fd64f9c6ea878ba
S: ad3971ce2362c06671aa9f4290f04890f1d8eea9 127.0.0.1:30006
   replicates 078809a6f0002113275c9dd60b033a9e167568d6
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.
>>> Performing Cluster Check (using node 127.0.0.1:30001)
M: 2ae9a8174b691a26960ce7b22fd64f9c6ea878ba 127.0.0.1:30001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 078809a6f0002113275c9dd60b033a9e167568d6 127.0.0.1:30002
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 3bbd2349e0a0d4b927aaef7938cb46ec608e4030 127.0.0.1:30004
   slots: (0 slots) slave
   replicates cae83c912767138a38f871dd223f666fdef22464
S: 7f8a38e2cdf8c8a0ec6ad2754559668647af084d 127.0.0.1:30005
   slots: (0 slots) slave
   replicates 2ae9a8174b691a26960ce7b22fd64f9c6ea878ba
M: cae83c912767138a38f871dd223f666fdef22464 127.0.0.1:30003
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: ad3971ce2362c06671aa9f4290f04890f1d8eea9 127.0.0.1:30006
   slots: (0 slots) slave
   replicates 078809a6f0002113275c9dd60b033a9e167568d6
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Kafka: reliable, clustered, and fast enough in its responses
The physical points are stored in something like a TreeMap (a sorted structure)