

Redis Cluster: How Expansion and Contraction Work


1. Redis Cluster Expansion

1.1 How Expansion Works

  • Redis Cluster allows nodes to be brought online and taken offline flexibly.

  • The 3 existing masters each maintain their own slots and the data stored in them; to scale out by adding a node, part of those slots and data must be migrated to the new node.

  • Each master migrates part of its slots and data to the new node, node04.
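Every key maps to one of the 16384 slots via CRC16(key) mod 16384, so moving a slot moves every key that hashes into it. The mapping can be checked directly (the key name here is just an example):

```
# CLUSTER KEYSLOT returns the slot a given key hashes to
redis-cli -c -h 10.0.0.100 -p 6379 cluster keyslot user:1000
```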

1.2 Expansion Procedure

Prepare the new node

  • Prepare two configuration files, redis_6379.conf and redis_6380.conf:

```
daemonize yes
port 6379
logfile "/var/log/redis/redis_6379.log"
pidfile /var/run/redis/redis_6379.pid
dir /data/redis/6379
bind 10.0.0.103
protected-mode no
# requirepass 123456
appendonly yes
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file /opt/redis/conf/nodes-6379.conf

# The second configuration file
daemonize yes
port 6380
logfile "/var/log/redis/redis_6380.log"
pidfile /var/run/redis/redis_6380.pid
dir /data/redis/6380
bind 10.0.0.103
protected-mode no
# requirepass 123456
appendonly yes
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file /opt/redis/conf/nodes-6380.conf
```

  • Create the directories:

```
mkdir -p /var/log/redis
touch /var/log/redis/redis_6379.log
touch /var/log/redis/redis_6380.log
mkdir -p /var/run/redis
mkdir -p /data/redis/6379
mkdir -p /data/redis/6380
mkdir -p /opt/redis/conf
```

  • Start the Redis services on the new node:

```
[root@node04 redis]# bin/redis-server conf/redis_6379.conf
[root@node04 redis]# bin/redis-server conf/redis_6380.conf
[root@node04 opt]# ps -ef | grep redis
root 1755 1 0 19:06 ? 00:00:00 redis-server 10.0.0.103:6379 [cluster]
root 1757 1 0 19:06 ? 00:00:00 redis-server 10.0.0.103:6380 [cluster]
```

Join the new node to the cluster

  • Run the following commands from any node already in the cluster:

```
[root@node01 opt]# redis-cli -c -h 10.0.0.100 -p 6380
10.0.0.100:6380> cluster meet 10.0.0.103 6379
OK
10.0.0.100:6380> cluster meet 10.0.0.103 6380
OK
```

  • After the old and new nodes gossip for a while, every node converges on the new cluster state and persists it locally:

```
10.0.0.100:6380> cluster nodes
# The two newly added instances (10.0.0.103:6379 and 10.0.0.103:6380) are both masters, but they do not manage any slots yet
4fb4c538d5f29255f6212f2eae8a761fbe364a89 10.0.0.101:6380@16380 master - 0 1585048391000 7 connected 0-5460
690b2e1f604a0227068388d3e5b1f1940524c565 10.0.0.102:6379@16379 master - 0 1585048389000 3 connected 10923-16383
1be5d1aaaa9e9542224554f461694da9cba7c2b8 10.0.0.101:6379@16379 master - 0 1585048392055 2 connected 5461-10922
724a8a15f4efe5a01454cb971d7471d6e84279f3 10.0.0.103:6379@16379 master - 0 1585048388000 8 connected
ed9b72fffd04b8a7e5ad20afdaf1f53e0eb95011 10.0.0.103:6380@16380 master - 0 1585048391046 0 connected
89f52bfbb8803db19ab0c5a90adc4099df8287f7 10.0.0.100:6379@16379 slave 4fb4c538d5f29255f6212f2eae8a761fbe364a89 0 1585048388000 7 connected
86e1881611440012c87fbf3fa98b7b6d79915e25 10.0.0.102:6380@16380 slave 1be5d1aaaa9e9542224554f461694da9cba7c2b8 0 1585048389033 6 connected
8c13a2afa76194ef9582bb06675695bfef76b11d 10.0.0.100:6380@16380 myself,slave 690b2e1f604a0227068388d3e5b1f1940524c565 0 1585048390000 4 connected
```

  • New nodes join as masters, but because they own no slots they cannot serve any reads or writes. There are generally two options for what to do with a new node next:

    • migrate slots and data to it from the other nodes
    • make it a slave of another node so it can take part in failover

    The redis-trib.rb tool also provides a command for adding a node to an existing cluster, including adding it directly as a slave:

```
# Add a new node to the cluster
redis-trib.rb add-node new_host:new_port old_host:old_port
# Add a new node and make it a slave of the specified master
redis-trib.rb add-node new_host:new_port old_host:old_port --slave --master-id <master-id>
```

  • Prefer redis-trib.rb add-node for adding nodes: it checks the new node's state first and reports an error if the node already belongs to another cluster or already holds data. cluster meet performs no such check, so if the new node already holds data, that data is merged into the cluster and can leave it inconsistent.
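With the addresses used in this walkthrough, the two new instances could be added like this (new node first, then any node already in the cluster):

```
redis-trib.rb add-node 10.0.0.103:6379 10.0.0.100:6379
redis-trib.rb add-node 10.0.0.103:6380 10.0.0.100:6379
```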

Migrate slots and data

  • Slot migration is the core step of cluster scaling.

  • With the original 3 masters, each one is responsible for 16384 / 3 ≈ 5461 slots.

  • After a fourth master joins, each master should be responsible for 16384 / 4 = 4096 slots.

  • Once the migration plan is fixed (for example: each existing master hands everything beyond its 4096-slot share to the new master), migration proceeds one slot at a time.

  • Each slot is migrated as follows:

    • Send cluster setslot {slot_id} importing {sourceNodeId} to the target node; the target marks the slot as "importing" and prepares to receive its data.
    • Send cluster setslot {slot_id} migrating {targetNodeId} to the source node; the source marks the slot as "migrating" and prepares to move its data out.
    • On the source node, run cluster getkeysinslot {slot_id} {count} to list the keys in the slot (in batches; count is the batch size), then migrate the keys batch by batch.
    • On the source node, run migrate {targetIp} {targetPort} "" 0 {timeout} keys {keys} to push each batch of keys to the target (before redis 3.0.6, only one key could be migrated at a time). Concretely, the source runs dump on each migrated key to get its serialized form, then, acting as a client, sends the target a restore command carrying that payload; the target deserializes it into its own memory and replies "OK", and the source then deletes the key. At that point the key's migration is complete.
    • Once every key has been moved, the slot's migration is done.
    • Repeat for every slot that needs to move. When all of them have been migrated, the new slot layout is final: send cluster setslot {slot_id} node {targetNodeId} to every master in the cluster so they all learn which slots now live on which masters and update their state.
  • Additional notes on slot migration:

    • Migration is synchronous: between the target node executing restore and the source node deleting the key, the source's main thread is blocked until the delete succeeds.
    • If a network failure interrupts a half-finished slot migration, the two nodes remain in the intermediate "migrating" and "importing" states, and the migration resumes the next time the migration tool reconnects.
    • If the keys being moved are all small, migration is fast and does not disturb normal client traffic.
    • Large keys are a problem: since migrating a single key is blocking, a big key stalls both the source and the target and hurts cluster stability. In a cluster deployment the application should therefore avoid creating big keys; see the sketch below for one way to find them ahead of time.
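One way to spot big keys before resharding is redis-cli's built-in --bigkeys scan, pointed at each master (addresses from this walkthrough; the scan uses SCAN to sample key sizes, so it can run against a live node):

```
# Report the largest key per type on each master
redis-cli -h 10.0.0.101 -p 6379 --bigkeys
redis-cli -h 10.0.0.101 -p 6380 --bigkeys
redis-cli -h 10.0.0.102 -p 6379 --bigkeys
```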
  • Performing a slot migration by hand:

```
# Slot 4096 is moved from the source node 86e1881611440012c87fbf3fa98b7b6d79915e25
# to the target node 690b2e1f604a0227068388d3e5b1f1940524c565 (10.0.0.102:6379)
# (node IDs come from the cluster nodes output)

# On the target node: prepare to import slot 4096 (the argument is the source node's ID)
cluster setslot 4096 importing 86e1881611440012c87fbf3fa98b7b6d79915e25
# On the source node: prepare to export slot 4096 (the argument is the target node's ID)
cluster setslot 4096 migrating 690b2e1f604a0227068388d3e5b1f1940524c565
# On the source node: fetch up to 100 keys from slot 4096
cluster getkeysinslot 4096 100
# On the source node: migrate that batch of keys to the target
migrate 10.0.0.102 6379 "" 0 5000 keys key1 key2 ... key100
# Notify all masters that slot 4096 has moved to the target node 690b2e1f604a0227068388d3e5b1f1940524c565
10.0.0.100:6379> cluster setslot 4096 node 690b2e1f604a0227068388d3e5b1f1940524c565
10.0.0.101:6379> cluster setslot 4096 node 690b2e1f604a0227068388d3e5b1f1940524c565
10.0.0.102:6379> cluster setslot 4096 node 690b2e1f604a0227068388d3e5b1f1940524c565
10.0.0.103:6379> cluster setslot 4096 node 690b2e1f604a0227068388d3e5b1f1940524c565
```
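In practice the getkeysinslot/migrate pair is repeated until the slot is empty. A minimal bash sketch of that loop, assuming the same source node, target address, slot, and batch size as above:

```
#!/usr/bin/env bash
SRC_HOST=10.0.0.102; SRC_PORT=6380      # source node (assumed, as above)
DST_HOST=10.0.0.102; DST_PORT=6379      # target node (assumed, as above)
SLOT=4096; BATCH=100; TIMEOUT=5000

while true; do
  # Next batch of keys still in the slot on the source node (one key per line)
  keys=$(redis-cli -h "$SRC_HOST" -p "$SRC_PORT" cluster getkeysinslot "$SLOT" "$BATCH")
  [ -z "$keys" ] && break               # slot drained: this slot's migration is done
  # Move the whole batch with one MIGRATE (empty key + KEYS variadic form);
  # $keys is left unquoted so each key becomes its own argument
  redis-cli -h "$SRC_HOST" -p "$SRC_PORT" \
    migrate "$DST_HOST" "$DST_PORT" "" 0 "$TIMEOUT" keys $keys
done
```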

  • Migrating slots with the redis-trib.rb tool:

```
redis-trib.rb reshard host:port --from <arg> --to <arg> --slots <arg> --yes --timeout <arg> --pipeline <arg>
```

  • host:port: any node of the cluster; used to fetch the full cluster state
  • --from: source node ID (prompted for interactively if omitted)
  • --to: target node ID (prompted for interactively if omitted)
  • --slots: total number of slots to migrate (prompted for interactively if omitted)
  • --yes: execute without asking for a yes confirmation after the migration plan is printed
  • --timeout: timeout of each migrate operation, default 60000 ms
  • --pipeline: number of keys migrated per batch, default 10
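Put together, a non-interactive run using the node IDs of this cluster might look like the following (the slot count is illustrative); alternatively, omit the flags and answer the prompts interactively, as the transcript below does:

```
redis-trib.rb reshard 10.0.0.100:6379 \
  --from 4fb4c538d5f29255f6212f2eae8a761fbe364a89 \
  --to 724a8a15f4efe5a01454cb971d7471d6e84279f3 \
  --slots 1365 --yes
```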

```
[root@node01 redis]# redis-trib.rb reshard 10.0.0.100:6379
>>> Performing Cluster Check (using node 10.0.0.100:6379)
S: 89f52bfbb8803db19ab0c5a90adc4099df8287f7 10.0.0.100:6379
   slots: (0 slots) slave
   replicates 4fb4c538d5f29255f6212f2eae8a761fbe364a89
S: 8c13a2afa76194ef9582bb06675695bfef76b11d 10.0.0.100:6380
   slots: (0 slots) slave
   replicates 690b2e1f604a0227068388d3e5b1f1940524c565
M: 690b2e1f604a0227068388d3e5b1f1940524c565 10.0.0.102:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 4fb4c538d5f29255f6212f2eae8a761fbe364a89 10.0.0.101:6380
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: ed9b72fffd04b8a7e5ad20afdaf1f53e0eb95011 10.0.0.103:6380
   slots: (0 slots) master
   0 additional replica(s)
S: 86e1881611440012c87fbf3fa98b7b6d79915e25 10.0.0.102:6380
   slots: (0 slots) slave
   replicates 1be5d1aaaa9e9542224554f461694da9cba7c2b8
M: 1be5d1aaaa9e9542224554f461694da9cba7c2b8 10.0.0.101:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: 724a8a15f4efe5a01454cb971d7471d6e84279f3 10.0.0.103:6379
   slots: (0 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# How many slots should be migrated?
How many slots do you want to move (from 1 to 16384)? 4096
# Which master receives them?
What is the receiving node ID? 724a8a15f4efe5a01454cb971d7471d6e84279f3
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
# Which nodes do they come from?
Source node #1:4fb4c538d5f29255f6212f2eae8a761fbe364a89
Source node #2:690b2e1f604a0227068388d3e5b1f1940524c565
Source node #3:1be5d1aaaa9e9542224554f461694da9cba7c2b8
Source node #4:done
Ready to move 4096 slots.
  Source nodes:
M: 4fb4c538d5f29255f6212f2eae8a761fbe364a89 10.0.0.101:6380
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 690b2e1f604a0227068388d3e5b1f1940524c565 10.0.0.102:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 1be5d1aaaa9e9542224554f461694da9cba7c2b8 10.0.0.101:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
  Destination node:
M: 724a8a15f4efe5a01454cb971d7471d6e84279f3 10.0.0.103:6379
   slots: (0 slots) master
   0 additional replica(s)
  Resharding plan:
    Moving slot 5461 from 1be5d1aaaa9e9542224554f461694da9cba7c2b8
    Moving slot 5462 from 1be5d1aaaa9e9542224554f461694da9cba7c2b8
    Moving slot 5463 from 1be5d1aaaa9e9542224554f461694da9cba7c2b8
    ......
```

```
10.0.0.100:6380> cluster nodes
1be5d1aaaa9e9542224554f461694da9cba7c2b8 10.0.0.101:6379@16379 master - 0 1585053959158 2 connected 6827-10922
# The newly added node now owns slots
724a8a15f4efe5a01454cb971d7471d6e84279f3 10.0.0.103:6379@16379 master - 0 1585053957000 8 connected 0-1364 5461-6826 10923-12287
4fb4c538d5f29255f6212f2eae8a761fbe364a89 10.0.0.101:6380@16380 master - 0 1585053960166 7 connected 1365-5460
ed9b72fffd04b8a7e5ad20afdaf1f53e0eb95011 10.0.0.103:6380@16380 master - 0 1585053957000 0 connected
690b2e1f604a0227068388d3e5b1f1940524c565 10.0.0.102:6379@16379 master - 0 1585053959000 3 connected 12288-16383
89f52bfbb8803db19ab0c5a90adc4099df8287f7 10.0.0.100:6379@16379 slave 4fb4c538d5f29255f6212f2eae8a761fbe364a89 0 1585053958149 7 connected
86e1881611440012c87fbf3fa98b7b6d79915e25 10.0.0.102:6380@16380 slave 1be5d1aaaa9e9542224554f461694da9cba7c2b8 0 1585053958000 6 connected
8c13a2afa76194ef9582bb06675695bfef76b11d 10.0.0.100:6380@16380 myself,slave 690b2e1f604a0227068388d3e5b1f1940524c565 0 1585053954000 4 connected
```

A master's slot ranges do not have to be contiguous; all that matters is that each master manages roughly the same number of slots.
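To check the balance by hand, each master's slot ranges in the cluster nodes output can be totalled with a short script; a sketch assuming the addresses above (slot ranges start at field 9 of each cluster nodes line):

```
# Print "address slot-count" for every master in the cluster
redis-cli -h 10.0.0.100 -p 6379 cluster nodes | awk '
$3 ~ /master/ {
  count = 0
  for (i = 9; i <= NF; i++) {            # fields 9+ are slot ranges like 0-5460
    n = split($i, r, "-")
    count += (n == 2) ? r[2] - r[1] + 1 : 1
  }
  print $2, count
}'
```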

  • Add the slave

    At the start we added 10.0.0.103:6379 and 10.0.0.103:6380, and both are still masters; 10.0.0.103:6380 should now become the slave of 10.0.0.103:6379.

```
# First, open a client on 10.0.0.103:6380
redis-cli -c -h 10.0.0.103 -p 6380
# Then make it a slave of 10.0.0.103:6379
10.0.0.103:6380> cluster replicate 724a8a15f4efe5a01454cb971d7471d6e84279f3
OK
10.0.0.103:6380> cluster nodes
1be5d1aaaa9e9542224554f461694da9cba7c2b8 10.0.0.101:6379@16379 master - 0 1585054332556 2 connected 6827-10922
4fb4c538d5f29255f6212f2eae8a761fbe364a89 10.0.0.101:6380@16380 master - 0 1585054332000 7 connected 1365-5460
690b2e1f604a0227068388d3e5b1f1940524c565 10.0.0.102:6379@16379 master - 0 1585054332000 3 connected 12288-16383
724a8a15f4efe5a01454cb971d7471d6e84279f3 10.0.0.103:6379@16379 master - 0 1585054334000 8 connected 0-1364 5461-6826 10923-12287
89f52bfbb8803db19ab0c5a90adc4099df8287f7 10.0.0.100:6379@16379 slave 4fb4c538d5f29255f6212f2eae8a761fbe364a89 0 1585054333565 7 connected
8c13a2afa76194ef9582bb06675695bfef76b11d 10.0.0.100:6380@16380 slave 690b2e1f604a0227068388d3e5b1f1940524c565 0 1585054334574 3 connected
86e1881611440012c87fbf3fa98b7b6d79915e25 10.0.0.102:6380@16380 slave 1be5d1aaaa9e9542224554f461694da9cba7c2b8 0 1585054332000 2 connected
# 10.0.0.103:6380 is now a slave
ed9b72fffd04b8a7e5ad20afdaf1f53e0eb95011 10.0.0.103:6380@16380 myself,slave 724a8a15f4efe5a01454cb971d7471d6e84279f3 0 1585054333000 0 connected
```

  • Check the slot balance:

```
[root@node01 redis]# redis-trib.rb rebalance 10.0.0.100:6379
>>> Performing Cluster Check (using node 10.0.0.100:6379)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# Every master's slot count is within 2% of the others, so no rebalancing is needed
*** No rebalancing needed! All nodes are within the 2.0% threshold.
```
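Had the counts drifted further apart, the same command would move slots until they even out. The rebalance subcommand also takes options such as --threshold and --simulate; assuming those flags (a sketch, flag names as in redis-trib.rb's usage text), a cautious dry run might look like:

```
# Dry-run a rebalance with a relaxed 5% threshold; --simulate means nothing is actually moved
redis-trib.rb rebalance --threshold 5 --simulate 10.0.0.100:6379
```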

2. Redis Cluster Contraction

2.1 How Contraction Works

  • If the node going offline is a slave, simply tell the other nodes to forget it.
  • If it is a master, first migrate its slots to the other masters, then tell the other nodes to forget it.
  • Once every other node has forgotten the departing node, it can be safely shut down. A compact sketch of the whole flow follows; section 2.2 walks through it in detail.
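In command form (node IDs are placeholders; section 2.2 instead spreads the slots over all three remaining masters with one reshard run each):

```
# 1. Move the departing master's slots to a remaining master
redis-trib.rb reshard 10.0.0.100:6379 \
  --from <departing-master-id> --to <remaining-master-id> --slots 4096 --yes
# 2. Remove the slave first, then the now-empty master
redis-trib.rb del-node 10.0.0.100:6379 <departing-slave-id>
redis-trib.rb del-node 10.0.0.100:6379 <departing-master-id>
```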

2.2 Contraction Procedure

Earlier we added the nodes 10.0.0.103:6379 and 10.0.0.103:6380; now we take them both offline.

Confirm the role of each departing node

```
10.0.0.103:6380> cluster nodes
...
# 10.0.0.103:6380 is a slave
# 10.0.0.103:6379 is a master
724a8a15f4efe5a01454cb971d7471d6e84279f3 10.0.0.103:6379@16379 master - 0 1585055101000 8 connected 0-1364 5461-6826 10923-12287
ed9b72fffd04b8a7e5ad20afdaf1f53e0eb95011 10.0.0.103:6380@16380 slave 724a8a15f4efe5a01454cb971d7471d6e84279f3 0 1585055099000 0 connected
```

  • Migrate the departing master's slots to the other masters:

```
[root@node01 redis]# redis-trib.rb reshard 10.0.0.100:6379
......
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 1364
What is the receiving node ID? 1be5d1aaaa9e9542224554f461694da9cba7c2b8
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:724a8a15f4efe5a01454cb971d7471d6e84279f3
Source node #2:done
Ready to move 1364 slots.
  Source nodes:
M: 724a8a15f4efe5a01454cb971d7471d6e84279f3 10.0.0.103:6379
   slots:0-1364,5461-6826,10923-12287 (4096 slots) master
   1 additional replica(s)
  Destination node:
M: 1be5d1aaaa9e9542224554f461694da9cba7c2b8 10.0.0.101:6379
   slots:6827-10922 (4096 slots) master
   1 additional replica(s)
  Resharding plan:
.......

[root@node01 redis]# redis-trib.rb reshard 10.0.0.100:6379
......
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 1364
What is the receiving node ID? 4fb4c538d5f29255f6212f2eae8a761fbe364a89
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:724a8a15f4efe5a01454cb971d7471d6e84279f3
Source node #2:done
.......

[root@node01 redis]# redis-trib.rb reshard 10.0.0.100:6379
......
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 1365
What is the receiving node ID? 690b2e1f604a0227068388d3e5b1f1940524c565
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:724a8a15f4efe5a01454cb971d7471d6e84279f3
Source node #2:done
.......
```

```
10.0.0.103:6380> cluster nodes
1be5d1aaaa9e9542224554f461694da9cba7c2b8 10.0.0.101:6379@16379 master - 0 1585056902000 9 connected 0-1363 6827-10922
4fb4c538d5f29255f6212f2eae8a761fbe364a89 10.0.0.101:6380@16380 master - 0 1585056903544 12 connected 2729-6826 10923-12287
690b2e1f604a0227068388d3e5b1f1940524c565 10.0.0.102:6379@16379 master - 0 1585056903000 11 connected 1364-2728 12288-16383
# All slots have been migrated off 10.0.0.103:6379
724a8a15f4efe5a01454cb971d7471d6e84279f3 10.0.0.103:6379@16379 master - 0 1585056903000 10 connected
ed9b72fffd04b8a7e5ad20afdaf1f53e0eb95011 10.0.0.103:6380@16380 myself,slave 4fb4c538d5f29255f6212f2eae8a761fbe364a89 0 1585056898000 0 connected
89f52bfbb8803db19ab0c5a90adc4099df8287f7 10.0.0.100:6379@16379 slave 4fb4c538d5f29255f6212f2eae8a761fbe364a89 0 1585056901000 12 connected
8c13a2afa76194ef9582bb06675695bfef76b11d 10.0.0.100:6380@16380 slave 690b2e1f604a0227068388d3e5b1f1940524c565 0 1585056900000 11 connected
86e1881611440012c87fbf3fa98b7b6d79915e25 10.0.0.102:6380@16380 slave 1be5d1aaaa9e9542224554f461694da9cba7c2b8 0 1585056904551 9 connected
```

  • Forget the nodes

    Redis provides the cluster forget {downNodeId} command to tell the other nodes to forget a departing node. When a node receives cluster forget {downNodeId}, it puts the given node ID on a ban list; nodes on the ban list are excluded from message exchange. A ban-list entry lasts 60 seconds, after which the node rejoins gossip. In other words, once the first forget command is issued, there is a 60-second window in which every node in the cluster must be told to forget the departing node.

    In production it is not advisable to take nodes offline with raw cluster forget commands, since that means exchanging commands with many nodes by hand within the window; use redis-trib.rb del-node {host:port} {downNodeId} instead.

    Also, take the slave offline before the master to avoid unnecessary replication traffic.
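Done by hand, forgetting the slave would mean a loop like this, completed inside the 60-second window (node list assumed from this walkthrough; the departing nodes themselves are skipped for brevity). The del-node runs below do the same thing, plus the shutdown, in one step:

```
# Tell every remaining node to forget the departing slave 10.0.0.103:6380
for addr in 10.0.0.100:6379 10.0.0.100:6380 10.0.0.101:6379 \
            10.0.0.101:6380 10.0.0.102:6379 10.0.0.102:6380; do
  redis-cli -h "${addr%:*}" -p "${addr#*:}" \
    cluster forget ed9b72fffd04b8a7e5ad20afdaf1f53e0eb95011
done
```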

```
# Remove the slave 10.0.0.103:6380 first
[root@node01 redis]# redis-trib.rb del-node 10.0.0.100:6379 ed9b72fffd04b8a7e5ad20afdaf1f53e0eb95011
>>> Removing node ed9b72fffd04b8a7e5ad20afdaf1f53e0eb95011 from cluster 10.0.0.100:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
# Then remove the master 10.0.0.103:6379
[root@node01 redis]# redis-trib.rb del-node 10.0.0.100:6379 724a8a15f4efe5a01454cb971d7471d6e84279f3
>>> Removing node 724a8a15f4efe5a01454cb971d7471d6e84279f3 from cluster 10.0.0.100:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
```

```
10.0.0.100:6379> cluster nodes
8c13a2afa76194ef9582bb06675695bfef76b11d 10.0.0.100:6380@16380 slave 690b2e1f604a0227068388d3e5b1f1940524c565 0 1585057049247 11 connected
690b2e1f604a0227068388d3e5b1f1940524c565 10.0.0.102:6379@16379 master - 0 1585057048239 11 connected 1364-2728 12288-16383
4fb4c538d5f29255f6212f2eae8a761fbe364a89 10.0.0.101:6380@16380 master - 0 1585057048000 12 connected 2729-6826 10923-12287
89f52bfbb8803db19ab0c5a90adc4099df8287f7 10.0.0.100:6379@16379 myself,slave 4fb4c538d5f29255f6212f2eae8a761fbe364a89 0 1585057047000 1 connected
86e1881611440012c87fbf3fa98b7b6d79915e25 10.0.0.102:6380@16380 slave 1be5d1aaaa9e9542224554f461694da9cba7c2b8 0 1585057048000 9 connected
1be5d1aaaa9e9542224554f461694da9cba7c2b8 10.0.0.101:6379@16379 master - 0 1585057048000 9 connected 0-1363 6827-10922
```

  • As the output shows, redis-trib.rb del-node also stops the removed node's service automatically.
