Redis cluster reshard slots

A record of the process of repairing an abnormal node in a Redis cluster.

Create a new node

Connect to the Redis cluster

redis-cli -c -h 10.0.0.11 -p 6379
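
Once connected, the cluster's overall health can be confirmed from inside redis-cli before changing anything. A minimal sketch, reusing the host above and relying only on standard cluster commands:

# run inside redis-cli; cluster_state should be ok and cluster_slots_ok should be 16384
CLUSTER INFO
# list every node with its role, slot ranges, and any importing/migrating markers
CLUSTER NODES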

Check Redis information

sudo find / -name redis-trib.rb

# Check the node status; the warnings below show an anomaly
# [WARNING] Node 10.0.2.12:6379 has slots in migrating state (5489).
# [WARNING] Node 10.0.2.13:6380 has slots in importing state (5489).
# [WARNING] The following slots are open: 5489
./redis-trib.rb check 10.0.2.12:6379

# ubuntu @ cache-lottery-staging1 in /usr/share/doc/redis-tools/examples [13:29:42]
$ ./redis-trib.rb check 10.0.2.12:6379
./redis-trib.rb:1573: warning: key "threshold" is duplicated and overwritten on line 1573
>>> Performing Cluster Check (using node 10.0.2.12:6379)
M: 752a7943a7a9d1be5e25b73766a2362f89870b87 10.0.2.12:6379
slots:5564-10922 (5359 slots) master
1 additional replica(s)
S: 6375a4556ac673c41392e08ca48d43d74b37b3c2 10.0.2.11:6379
slots: (0 slots) slave
replicates 92dd7f73bb6b3aba30b527744057308bc869d48f
M: 92dd7f73bb6b3aba30b527744057308bc869d48f 10.0.2.12:6380
slots:297-5460 (5164 slots) master
1 additional replica(s)
M: 3729b01a6feb9e7b674458355615c17dc2291a12 10.0.2.13:6380
slots:0-296,5461-5563,10923-16383 (5861 slots) master
1 additional replica(s)
S: ddfe4daca54b3d07ed3bbd5cb751ea2802ecaec2 10.0.2.13:6379
slots: (0 slots) slave
replicates 3729b01a6feb9e7b674458355615c17dc2291a12
S: c5433165800a6c0019436891663aa3e9e2da8532 10.0.2.11:6380
slots: (0 slots) slave
replicates 752a7943a7a9d1be5e25b73766a2362f89870b87
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

# Fix the nodes
./redis-trib.rb fix 10.0.2.12:6379
./redis-trib.rb fix 10.0.2.13:6380

# Reset the slot: delete it, re-add it, and clear its migrating/importing state
CLUSTER DELSLOTS 297
CLUSTER ADDSLOTS 297
CLUSTER SETSLOT 297 STABLE
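
Before and after resetting a slot it can help to confirm that it is actually clean. A hedged sketch using standard cluster commands, reusing slot 297 from above and running against the node that owns it:

# number of keys this node still holds in the slot (run inside redis-cli)
CLUSTER COUNTKEYSINSLOT 297
# sample up to 10 key names in the slot, in case data would need manual migration
CLUSTER GETKEYSINSLOT 297 10
# re-run the check afterwards; the migrating/importing warnings should be gone
./redis-trib.rb check 10.0.2.12:6379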

Remove the slave nodes

./redis-trib.rb del-node 10.0.2.11:6379 6375a4556ac673c41392e08ca48d43d74b37b3c2
./redis-trib.rb del-node 10.0.2.11:6380 c5433165800a6c0019436891663aa3e9e2da8532
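
After removing the replicas, the cluster membership can be verified from any remaining node; a small sketch, assuming 10.0.2.12:6379 is still reachable:

# the two removed replica IDs should no longer appear in the output
redis-cli -c -h 10.0.2.12 -p 6379 CLUSTER NODES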

Add the nodes back

./redis-trib.rb add-node 10.0.2.11:6379 10.0.2.12:6379
./redis-trib.rb add-node 10.0.2.11:6380 10.0.2.12:6379
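
Note that add-node joins a node as an empty master by default. If the intent is for these nodes to come back as replicas, redis-trib.rb also accepts --slave and --master-id; a sketch, with the master ID taken from the earlier check output as an assumption (adjust to the desired pairing):

# re-add 10.0.2.11:6379 as a replica of the 10.0.2.12:6380 master
./redis-trib.rb add-node --slave --master-id 92dd7f73bb6b3aba30b527744057308bc869d48f 10.0.2.11:6379 10.0.2.12:6379
# or, equivalently, from inside the newly added node
redis-cli -h 10.0.2.11 -p 6379 CLUSTER REPLICATE 92dd7f73bb6b3aba30b527744057308bc869d48f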

Move the master's slots to the other master nodes

The interactive prompts:

How many slots do you want to move (from 1 to 16384)? 5331 # how many slots to move; 10.0.2.12:6380 holds 5331, so move all of them

What is the receiving node ID? 3729b01a6feb9e7b674458355615c17dc2291a12 // the master that receives the slots from 10.0.2.12:6380; 10.0.2.13:6380 is used here

Source node #1:92dd7f73bb6b3aba30b527744057308bc869d48f // node ID of the master being removed

Source node #2:done

Do you want to proceed with the proposed reshard plan (yes/no)? yes // once the slots are released, the reshard runs
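
The same answers can also be passed on the command line to skip the interactive prompts; a sketch, assuming this redis-trib.rb build supports the --from/--to/--slots/--yes options:

# non-interactive equivalent of the answers above (same node IDs and slot count)
./redis-trib.rb reshard --from 92dd7f73bb6b3aba30b527744057308bc869d48f --to 3729b01a6feb9e7b674458355615c17dc2291a12 --slots 5331 --yes 10.0.2.12:6380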

Execution:

$ ./redis-trib.rb reshard 10.0.2.12:6380
./redis-trib.rb:1573: warning: key "threshold" is duplicated and overwritten on line 1573
>>> Performing Cluster Check (using node 10.0.2.12:6380)
M: 92dd7f73bb6b3aba30b527744057308bc869d48f 10.0.2.12:6380
slots:130-5460 (5331 slots) master
1 additional replica(s)
M: 752a7943a7a9d1be5e25b73766a2362f89870b87 10.0.2.12:6379
slots:5490-10922 (5433 slots) master
1 additional replica(s)
S: ddfe4daca54b3d07ed3bbd5cb751ea2802ecaec2 10.0.2.13:6379
slots: (0 slots) slave
replicates 3729b01a6feb9e7b674458355615c17dc2291a12
S: c5433165800a6c0019436891663aa3e9e2da8532 10.0.2.11:6380
slots: (0 slots) slave
replicates 752a7943a7a9d1be5e25b73766a2362f89870b87
S: 6375a4556ac673c41392e08ca48d43d74b37b3c2 10.0.2.11:6379
slots: (0 slots) slave
replicates 92dd7f73bb6b3aba30b527744057308bc869d48f
M: 3729b01a6feb9e7b674458355615c17dc2291a12 10.0.2.13:6380
slots:0-129,5461-5489,10923-16383 (5620 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 5331
What is the receiving node ID? 3729b01a6feb9e7b674458355615c17dc2291a12
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:92dd7f73bb6b3aba30b527744057308bc869d48f
Source node #2:done
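
Once the reshard finishes, the cluster can be re-checked, and the now-empty master (the one being removed, as noted in the prompts above) can then be dropped; a sketch reusing the same commands from earlier:

# verify slot coverage and that 10.0.2.12:6380 now holds 0 slots
./redis-trib.rb check 10.0.2.13:6380
# remove the emptied master from the cluster
./redis-trib.rb del-node 10.0.2.12:6380 92dd7f73bb6b3aba30b527744057308bc869d48f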