Setting up a redis-cluster on Windows (detailed guide)
1. Download the Redis package
Redis download link: https://github.com/MSOpenTech/redis/releases
2. Build the redis-cluster directory layout
2.1 This article builds a six-instance (3 masters, 3 replicas) Redis cluster. First create a redis-cluster folder, extract the Redis package downloaded above into it, make six copies of the extracted folder, and rename them 6380 through 6385.
2.2 Open the redis.windows.conf file under the 6380 folder, modify the settings shown in the figure below, and then apply the same changes to the configuration files in the other five folders.
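The screenshot itself is not reproduced here. As a sketch, the settings that typically have to change in redis.windows.conf for a cluster node are the port and the cluster options; the values below assume the 6380 node (each other folder would use its own port number):

```conf
port 6380
cluster-enabled yes
cluster-config-file nodes-6380.conf
cluster-node-timeout 15000
appendonly yes
```

The cluster-config-file is written by the node itself, so each instance needs a distinct file name.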
2.3 In each node folder, create a txt file, paste the content below into it, and rename it startup.bat. This script lets you start that node's Redis service quickly.
title sets the name of the console window after startup.
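The batch file content is not shown in the text; a minimal sketch of such a startup.bat, assuming the 6380 node and that redis-server.exe sits in the same folder, would be:

```bat
title redis-6380
redis-server.exe redis.windows.conf
```

Each node's copy would use its own title and run against that folder's configuration file.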
3.下載安裝Ruby
Ruby的下載鏈接:https://rubyinstaller.org/downloads/
RubyGems download link: https://rubygems.org/pages/download
(RubyGems is the tool that manages gem installation, plus the server that distributes gems. It is analogous to apt-get on Ubuntu, yum on CentOS, and pip for Python. It also needs to be downloaded and installed here, otherwise the gem command will not work.)
3.1 Both installers can simply be clicked through with Next. After installation completes, RubyGems still needs to be configured:
Extract the RubyGems download, switch the command line into the extracted directory (for example, my path is D:\redis-cluster\RubyGems\rubygems-3.3.18), and run the setup command there.
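The exact command is not shown in the article; per the RubyGems installation instructions, the setup script in the extracted directory is run with Ruby, roughly like this (the path is the example path above):

```shell
cd D:\redis-cluster\RubyGems\rubygems-3.3.18
ruby setup.rb
```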
Then install the Redis gem: switch to the Redis installation directory and, on the command line, execute:
gem install redis
4. The cluster-building script redis-trib.rb
You can find and download this script online yourself. Note that the redis-trib.rb version you download must match your Redis version, otherwise you will get the following error:
WARNING: redis-trib.rb is not longer available! You should use redis-cli instead.
Below I provide the redis-trib.rb that matches the Redis version used in this article. In your redis-cluster folder, create a txt file, copy the redis-trib.rb content into it, and rename the file to redis-trib.rb.
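For context on how the script will be used: with all six nodes started, a cluster matching this article's layout (one replica per master) is created by running the script's create subcommand. The ports below assume this article's 6380–6385 instances on the local machine:

```shell
ruby redis-trib.rb create --replicas 1 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 127.0.0.1:6385
```

--replicas 1 tells the script to use half the nodes as masters and assign one replica to each master.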
#!/usr/bin/env ruby

# TODO (temporary here, we'll move this into the Github issues once
# redis-trib initial implementation is completed).
#
# - Make sure that if the rehashing fails in the middle redis-trib will try
#   to recover.
# - When redis-trib performs a cluster check, if it detects a slot move in
#   progress it should prompt the user to continue the move from where it
#   stopped.
# - Gracefully handle Ctrl+C in move_slot to prompt the user if really stop
#   while rehashing, and performing the best cleanup possible if the user
#   forces the quit.
# - When doing "fix" set a global Fix to true, and prompt the user to
#   fix the problem if automatically fixable every time there is something
#   to fix. For instance:
#   1) If there is a node that pretend to receive a slot, or to migrate a
#      slot, but has no entries in that slot, fix it.
#   2) If there is a node having keys in slots that are not owned by it
#      fix this condition moving the entries in the same node.
#   3) Perform more possibly slow tests about the state of the cluster.
#   4) When aborted slot migration is detected, fix it.

require 'rubygems'
require 'redis'

ClusterHashSlots = 16384
MigrateDefaultTimeout = 60000
MigrateDefaultPipeline = 10
RebalanceDefaultThreshold = 2

$verbose = false

def xputs(s)
    case s[0..2]
    when ">>>"
        color="29;1"
    when "[ER"
        color="31;1"
    when "[WA"
        color="31;1"
    when "[OK"
        color="32"
    when "[FA","***"
        color="33"
    else
        color=nil
    end

    color = nil if ENV['TERM'] != "xterm"
    print "\033[#{color}m" if color
    print s
    print "\033[0m" if color
    print "\n"
end

class ClusterNode
    def initialize(addr)
        s = addr.split(":")
        if s.length < 2
            puts "Invalid IP or Port (given as #{addr}) - use IP:Port format"
            exit 1
        end
        port = s.pop # removes port from split array
        ip = s.join(":") # if s.length > 1 here, it's IPv6, so restore address
        @r = nil
        @info = {}
        @info[:host] = ip
        @info[:port] = port
        @info[:slots] = {}
        @info[:migrating] = {}
        @info[:importing] = {}
        @info[:replicate] = false
        @dirty = false # True if we need to flush slots info into node.
        @friends = []
    end

    def friends
        @friends
    end

    def slots
        @info[:slots]
    end

    def has_flag?(flag)
        @info[:flags].index(flag)
    end

    def to_s
        "#{@info[:host]}:#{@info[:port]}"
    end

    def connect(o={})
        return if @r
        print "Connecting to node #{self}: " if $verbose
        STDOUT.flush
        begin
            @r = Redis.new(:host => @info[:host], :port => @info[:port], :timeout => 60)
            @r.ping
        rescue
            xputs "[ERR] Sorry, can't connect to node #{self}"
            exit 1 if o[:abort]
            @r = nil
        end
        xputs "OK" if $verbose
    end

    def assert_cluster
        info = @r.info
        if !info["cluster_enabled"] || info["cluster_enabled"].to_i == 0
            xputs "[ERR] Node #{self} is not configured as a cluster node."
            exit 1
        end
    end

    def assert_empty
        if !(@r.cluster("info").split("\r\n").index("cluster_known_nodes:1")) ||
           (@r.info['db0'])
            xputs "[ERR] Node #{self} is not empty. 
Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0."exit 1endenddef load_info(o={})self.connectnodes = @r.cluster("nodes").split("\n")nodes.each{|n|# name addr flags role ping_sent ping_recv link_status slotssplit = n.splitname,addr,flags,master_id,ping_sent,ping_recv,config_epoch,link_status = split[0..6]slots = split[8..-1]info = {:name => name,:addr => addr,:flags => flags.split(","),:replicate => master_id,:ping_sent => ping_sent.to_i,:ping_recv => ping_recv.to_i,:link_status => link_status}info[:replicate] = false if master_id == "-"if info[:flags].index("myself")@info = @info.merge(info)@info[:slots] = {}slots.each{|s|if s[0..0] == '['if s.index("->-") # Migratingslot,dst = s[1..-1].split("->-")@info[:migrating][slot.to_i] = dstelsif s.index("-<-") # Importingslot,src = s[1..-1].split("-<-")@info[:importing][slot.to_i] = srcendelsif s.index("-")start,stop = s.split("-")self.add_slots((start.to_i)..(stop.to_i))elseself.add_slots((s.to_i)..(s.to_i))end} if slots@dirty = false@r.cluster("info").split("\n").each{|e|k,v=e.split(":")k = k.to_symv.chop!if k != :cluster_state@info[k] = v.to_ielse@info[k] = vend}elsif o[:getfriends]@friends << infoend}enddef add_slots(slots)slots.each{|s|@info[:slots][s] = :new}@dirty = trueenddef set_as_replica(node_id)@info[:replicate] = node_id@dirty = trueenddef flush_node_configreturn if !@dirtyif @info[:replicate]begin@r.cluster("replicate",@info[:replicate])rescue# If the cluster did not already joined it is possible that# the slave does not know the master node yet. 
So on errors# we return ASAP leaving the dirty flag set, to flush the# config later.returnendelsenew = []@info[:slots].each{|s,val|if val == :newnew << s@info[:slots][s] = trueend}@r.cluster("addslots",*new)end@dirty = falseenddef info_string# We want to display the hash slots assigned to this node# as ranges, like in: "1-5,8-9,20-25,30"## Note: this could be easily written without side effects,# we use 'slots' just to split the computation into steps.# First step: we want an increasing array of integers# for instance: [1,2,3,4,5,8,9,20,21,22,23,24,25,30]slots = @info[:slots].keys.sort# As we want to aggregate adjacent slots we convert all the# slot integers into ranges (with just one element)# So we have something like [1..1,2..2, ... and so forth.slots.map!{|x| x..x}# Finally we group ranges with adjacent elements.slots = slots.reduce([]) {|a,b|if !a.empty? && b.first == (a[-1].last)+1a[0..-2] + [(a[-1].first)..(b.last)]elsea + [b]end}# Now our task is easy, we just convert ranges with just one# element into a number, and a real range into a start-end format.# Finally we join the array using the comma as separator.slots = slots.map{|x|x.count == 1 ? x.first.to_s : "#{x.first}-#{x.last}"}.join(",")role = self.has_flag?("master") ? 
"M" : "S"if self.info[:replicate] and @dirtyis = "S: #{self.info[:name]} #{self.to_s}"elseis = "#{role}: #{self.info[:name]} #{self.to_s}\n"+" slots:#{slots} (#{self.slots.length} slots) "+"#{(self.info[:flags]-["myself"]).join(",")}"endif self.info[:replicate]is += "\n replicates #{info[:replicate]}"elsif self.has_flag?("master") && self.info[:replicas]is += "\n #{info[:replicas].length} additional replica(s)"endisend# Return a single string representing nodes and associated slots.# TODO: remove slaves from config when slaves will be handled# by Redis Cluster.def get_config_signatureconfig = []@r.cluster("nodes").each_line{|l|s = l.splitslots = s[8..-1].select {|x| x[0..0] != "["}next if slots.length == 0config << s[0]+":"+(slots.sort.join(","))}config.sort.join("|")enddef info@infoenddef is_dirty?@dirtyenddef r@rend endclass RedisTribdef initialize@nodes = []@fix = false@errors = []@timeout = MigrateDefaultTimeoutenddef check_arity(req_args, num_args)if ((req_args > 0 and num_args != req_args) ||(req_args < 0 and num_args < req_args.abs))xputs "[ERR] Wrong number of arguments for specified sub command"exit 1endenddef add_node(node)@nodes << nodeenddef reset_nodes@nodes = []enddef cluster_error(msg)@errors << msgxputs msgend# Return the node with the specified ID or Nil.def get_node_by_name(name)@nodes.each{|n|return n if n.info[:name] == name.downcase}return nilend# Like get_node_by_name but the specified name can be just the first# part of the node ID as long as the prefix in unique across the# cluster.def get_node_by_abbreviated_name(name)l = name.lengthcandidates = []@nodes.each{|n|if n.info[:name][0...l] == name.downcasecandidates << nend}return nil if candidates.length != 1candidates[0]end# This function returns the master that has the least number of replicas# in the cluster. If there are multiple masters with the same smaller# number of replicas, one at random is returned.def get_master_with_least_replicasmasters = @nodes.select{|n| n.has_flag? 
"master"}sorted = masters.sort{|a,b|a.info[:replicas].length <=> b.info[:replicas].length}sorted[0]enddef check_cluster(opt={})xputs ">>> Performing Cluster Check (using node #{@nodes[0]})"show_nodes if !opt[:quiet]check_config_consistencycheck_open_slotscheck_slots_coverageenddef show_cluster_infomasters = 0keys = 0@nodes.each{|n|if n.has_flag?("master")puts "#{n} (#{n.info[:name][0...8]}...) -> #{n.r.dbsize} keys | #{n.slots.length} slots | "+"#{n.info[:replicas].length} slaves."masters += 1keys += n.r.dbsizeend}xputs "[OK] #{keys} keys in #{masters} masters."keys_per_slot = sprintf("%.2f",keys/16384.0)puts "#{keys_per_slot} keys per slot on average."end# Merge slots of every known node. If the resulting slots are equal# to ClusterHashSlots, then all slots are served.def covered_slotsslots = {}@nodes.each{|n|slots = slots.merge(n.slots)}slotsenddef check_slots_coveragexputs ">>> Check slots coverage..."slots = covered_slotsif slots.length == ClusterHashSlotsxputs "[OK] All #{ClusterHashSlots} slots covered."elsecluster_error \"[ERR] Not all #{ClusterHashSlots} slots are covered by nodes."fix_slots_coverage if @fixendenddef check_open_slotsxputs ">>> Check for open slots..."open_slots = []@nodes.each{|n|if n.info[:migrating].size > 0cluster_error \"[WARNING] Node #{n} has slots in migrating state (#{n.info[:migrating].keys.join(",")})."open_slots += n.info[:migrating].keysendif n.info[:importing].size > 0cluster_error \"[WARNING] Node #{n} has slots in importing state (#{n.info[:importing].keys.join(",")})."open_slots += n.info[:importing].keysend}open_slots.uniq!if open_slots.length > 0xputs "[WARNING] The following slots are open: #{open_slots.join(",")}"endif @fixopen_slots.each{|slot| fix_open_slot slot}endenddef nodes_with_keys_in_slot(slot)nodes = []@nodes.each{|n|next if n.has_flag?("slave")nodes << n if n.r.cluster("getkeysinslot",slot,1).length > 0}nodesenddef fix_slots_coveragenot_covered = (0...ClusterHashSlots).to_a - covered_slots.keysxputs ">>> 
Fixing slots coverage..."xputs "List of not covered slots: " + not_covered.join(",")# For every slot, take action depending on the actual condition:# 1) No node has keys for this slot.# 2) A single node has keys for this slot.# 3) Multiple nodes have keys for this slot.slots = {}not_covered.each{|slot|nodes = nodes_with_keys_in_slot(slot)slots[slot] = nodesxputs "Slot #{slot} has keys in #{nodes.length} nodes: #{nodes.join(", ")}"}none = slots.select {|k,v| v.length == 0}single = slots.select {|k,v| v.length == 1}multi = slots.select {|k,v| v.length > 1}# Handle case "1": keys in no node.if none.length > 0xputs "The folowing uncovered slots have no keys across the cluster:"xputs none.keys.join(",")yes_or_die "Fix these slots by covering with a random node?"none.each{|slot,nodes|node = @nodes.samplexputs ">>> Covering slot #{slot} with #{node}"node.r.cluster("addslots",slot)}end# Handle case "2": keys only in one node.if single.length > 0xputs "The folowing uncovered slots have keys in just one node:"puts single.keys.join(",")yes_or_die "Fix these slots by covering with those nodes?"single.each{|slot,nodes|xputs ">>> Covering slot #{slot} with #{nodes[0]}"nodes[0].r.cluster("addslots",slot)}end# Handle case "3": keys in multiple nodes.if multi.length > 0xputs "The folowing uncovered slots have keys in multiple nodes:"xputs multi.keys.join(",")yes_or_die "Fix these slots by moving keys into a single node?"multi.each{|slot,nodes|target = get_node_with_most_keys_in_slot(nodes,slot)xputs ">>> Covering slot #{slot} moving keys to #{target}"target.r.cluster('addslots',slot)target.r.cluster('setslot',slot,'stable')nodes.each{|src|next if src == target# Set the source node in 'importing' state (even if we will# actually migrate keys away) in order to avoid receiving# redirections for MIGRATE.src.r.cluster('setslot',slot,'importing',target.info[:name])move_slot(src,target,slot,:dots=>true,:fix=>true,:cold=>true)src.r.cluster('setslot',slot,'stable')}}endend# Return the owner 
of the specified slotdef get_slot_owners(slot)owners = []@nodes.each{|n|next if n.has_flag?("slave")n.slots.each{|s,_|owners << n if s == slot}}ownersend# Return the node, among 'nodes' with the greatest number of keys# in the specified slot.def get_node_with_most_keys_in_slot(nodes,slot)best = nilbest_numkeys = 0@nodes.each{|n|next if n.has_flag?("slave")numkeys = n.r.cluster("countkeysinslot",slot)if numkeys > best_numkeys || best == nilbest = nbest_numkeys = numkeysend}return bestend# Slot 'slot' was found to be in importing or migrating state in one or# more nodes. This function fixes this condition by migrating keys where# it seems more sensible.def fix_open_slot(slot)puts ">>> Fixing open slot #{slot}"# Try to obtain the current slot owner, according to the current# nodes configuration.owners = get_slot_owners(slot)owner = owners[0] if owners.length == 1migrating = []importing = []@nodes.each{|n|next if n.has_flag? "slave"if n.info[:migrating][slot]migrating << nelsif n.info[:importing][slot]importing << nelsif n.r.cluster("countkeysinslot",slot) > 0 && n != ownerxputs "*** Found keys about slot #{slot} in node #{n}!"importing << nend}puts "Set as migrating in: #{migrating.join(",")}"puts "Set as importing in: #{importing.join(",")}"# If there is no slot owner, set as owner the slot with the biggest# number of keys, among the set of migrating / importing nodes.if !ownerxputs ">>> Nobody claims ownership, selecting an owner..."owner = get_node_with_most_keys_in_slot(@nodes,slot)# If we still don't have an owner, we can't fix it.if !ownerxputs "[ERR] Can't select a slot owner. Impossible to fix."exit 1end# Use ADDSLOTS to assign the slot.puts "*** Configuring #{owner} as the slot owner"owner.r.cluster("setslot",slot,"stable")owner.r.cluster("addslots",slot)# Make sure this information will propagate. 
Not strictly needed# since there is no past owner, so all the other nodes will accept# whatever epoch this node will claim the slot with.owner.r.cluster("bumpepoch")# Remove the owner from the list of migrating/importing# nodes.migrating.delete(owner)importing.delete(owner)end# If there are multiple owners of the slot, we need to fix it# so that a single node is the owner and all the other nodes# are in importing state. Later the fix can be handled by one# of the base cases above.## Note that this case also covers multiple nodes having the slot# in migrating state, since migrating is a valid state only for# slot owners.if owners.length > 1owner = get_node_with_most_keys_in_slot(owners,slot)owners.each{|n|next if n == ownern.r.cluster('delslots',slot)n.r.cluster('setslot',slot,'importing',owner.info[:name])importing.delete(n) # Avoid duplciatesimporting << n}owner.r.cluster('bumpepoch')end# Case 1: The slot is in migrating state in one slot, and in# importing state in 1 slot. That's trivial to address.if migrating.length == 1 && importing.length == 1move_slot(migrating[0],importing[0],slot,:dots=>true,:fix=>true)# Case 2: There are multiple nodes that claim the slot as importing,# they probably got keys about the slot after a restart so opened# the slot. In this case we just move all the keys to the owner# according to the configuration.elsif migrating.length == 0 && importing.length > 0xputs ">>> Moving all the #{slot} slot keys to its owner #{owner}"importing.each {|node|next if node == ownermove_slot(node,owner,slot,:dots=>true,:fix=>true,:cold=>true)xputs ">>> Setting #{slot} as STABLE in #{node}"node.r.cluster("setslot",slot,"stable")}# Case 3: There are no slots claiming to be in importing state, but# there is a migrating node that actually don't have any key. 
We# can just close the slot, probably a reshard interrupted in the middle.elsif importing.length == 0 && migrating.length == 1 &&migrating[0].r.cluster("getkeysinslot",slot,10).length == 0migrating[0].r.cluster("setslot",slot,"stable")elsexputs "[ERR] Sorry, Redis-trib can't fix this slot yet (work in progress). Slot is set as migrating in #{migrating.join(",")}, as importing in #{importing.join(",")}, owner is #{owner}"endend# Check if all the nodes agree about the cluster configurationdef check_config_consistencyif !is_config_consistent?cluster_error "[ERR] Nodes don't agree about configuration!"elsexputs "[OK] All nodes agree about slots configuration."endenddef is_config_consistent?signatures=[]@nodes.each{|n|signatures << n.get_config_signature}return signatures.uniq.length == 1enddef wait_cluster_joinprint "Waiting for the cluster to join"while !is_config_consistent?print "."STDOUT.flushsleep 1endprint "\n"enddef alloc_slotsnodes_count = @nodes.lengthmasters_count = @nodes.length / (@replicas+1)masters = []# The first step is to split instances by IP. 
This is useful as# we'll try to allocate master nodes in different physical machines# (as much as possible) and to allocate slaves of a given master in# different physical machines as well.## This code assumes just that if the IP is different, than it is more# likely that the instance is running in a different physical host# or at least a different virtual machine.ips = {}@nodes.each{|n|ips[n.info[:host]] = [] if !ips[n.info[:host]]ips[n.info[:host]] << n}# Select master instancesputs "Using #{masters_count} masters:"interleaved = []stop = falsewhile not stop do# Take one node from each IP until we run out of nodes# across every IP.ips.each do |ip,nodes|if nodes.empty?# if this IP has no remaining nodes, check for terminationif interleaved.length == nodes_count# stop when 'interleaved' has accumulated all nodesstop = truenextendelse# else, move one node from this IP to 'interleaved'interleaved.push nodes.shiftendendendmasters = interleaved.slice!(0, masters_count)nodes_count -= masters.lengthmasters.each{|m| puts m}# Alloc slots on mastersslots_per_node = ClusterHashSlots.to_f / masters_countfirst = 0cursor = 0.0masters.each_with_index{|n,masternum|last = (cursor+slots_per_node-1).roundif last > ClusterHashSlots || masternum == masters.length-1last = ClusterHashSlots-1endlast = first if last < first # Min step is 1.n.add_slots first..lastfirst = last+1cursor += slots_per_node}# Select N replicas for every master.# We try to split the replicas among all the IPs with spare nodes# trying to avoid the host where the master is running, if possible.## Note we loop two times. The first loop assigns the requested# number of replicas to each master. The second loop assigns any# remaining instances as extra replicas to masters. 
Some masters# may end up with more than their requested number of replicas, but# all nodes will be used.assignment_verbose = false[:requested,:unused].each do |assign|masters.each do |m|assigned_replicas = 0while assigned_replicas < @replicasbreak if nodes_count == 0if assignment_verboseif assign == :requestedputs "Requesting total of #{@replicas} replicas " \"(#{assigned_replicas} replicas assigned " \"so far with #{nodes_count} total remaining)."elsif assign == :unusedputs "Assigning extra instance to replication " \"role too (#{nodes_count} remaining)."endend# Return the first node not matching our current masternode = interleaved.find{|n| n.info[:host] != m.info[:host]}# If we found a node, use it as a best-first match.# Otherwise, we didn't find a node on a different IP, so we# go ahead and use a same-IP replica.if nodeslave = nodeinterleaved.delete nodeelseslave = interleaved.shiftendslave.set_as_replica(m.info[:name])nodes_count -= 1assigned_replicas += 1puts "Adding replica #{slave} to #{m}"# If we are in the "assign extra nodes" loop,# we want to assign one extra replica to each# master before repeating masters.# This break lets us assign extra replicas to masters# in a round-robin way.break if assign == :unusedendendendenddef flush_nodes_config@nodes.each{|n|n.flush_node_config}enddef show_nodes@nodes.each{|n|xputs n.info_string}end# Redis Cluster config epoch collision resolution code is able to eventually# set a different epoch to each node after a new cluster is created, but# it is slow compared to assign a progressive config epoch to each node# before joining the cluster. 
However we do just a best-effort try here# since if we fail is not a problem.def assign_config_epochconfig_epoch = 1@nodes.each{|n|beginn.r.cluster("set-config-epoch",config_epoch)rescueendconfig_epoch += 1}enddef join_cluster# We use a brute force approach to make sure the node will meet# each other, that is, sending CLUSTER MEET messages to all the nodes# about the very same node.# Thanks to gossip this information should propagate across all the# cluster in a matter of seconds.first = false@nodes.each{|n|if !first then first = n.info; next; end # Skip the first noden.r.cluster("meet",first[:host],first[:port])}enddef yes_or_die(msg)print "#{msg} (type 'yes' to accept): "STDOUT.flushif !(STDIN.gets.chomp.downcase == "yes")xputs "*** Aborting..."exit 1endenddef load_cluster_info_from_node(nodeaddr)node = ClusterNode.new(nodeaddr)node.connect(:abort => true)node.assert_clusternode.load_info(:getfriends => true)add_node(node)node.friends.each{|f|next if f[:flags].index("noaddr") ||f[:flags].index("disconnected") ||f[:flags].index("fail")fnode = ClusterNode.new(f[:addr])fnode.connect()next if !fnode.rbeginfnode.load_info()add_node(fnode)rescue => exputs "[ERR] Unable to load info for node #{fnode}"end}populate_nodes_replicas_infoend# This function is called by load_cluster_info_from_node in order to# add additional information to every node as a list of replicas.def populate_nodes_replicas_info# Start adding the new field to every node.@nodes.each{|n|n.info[:replicas] = []}# Populate the replicas field using the replicate field of slave# nodes.@nodes.each{|n|if n.info[:replicate]master = get_node_by_name(n.info[:replicate])if !masterxputs "*** WARNING: #{n} claims to be slave of unknown node ID #{n.info[:replicate]}."elsemaster.info[:replicas] << nendend}end# Given a list of source nodes return a "resharding plan"# with what slots to move in order to move "numslots" slots to another# instance.def compute_reshard_table(sources,numslots)moved = []# Sort from bigger to 
smaller instance, for two reasons:# 1) If we take less slots than instances it is better to start# getting from the biggest instances.# 2) We take one slot more from the first instance in the case of not# perfect divisibility. Like we have 3 nodes and need to get 10# slots, we take 4 from the first, and 3 from the rest. So the# biggest is always the first.sources = sources.sort{|a,b| b.slots.length <=> a.slots.length}source_tot_slots = sources.inject(0) {|sum,source|sum+source.slots.length}sources.each_with_index{|s,i|# Every node will provide a number of slots proportional to the# slots it has assigned.n = (numslots.to_f/source_tot_slots*s.slots.length)if i == 0n = n.ceilelsen = n.floorends.slots.keys.sort[(0...n)].each{|slot|if moved.length < numslotsmoved << {:source => s, :slot => slot}end}}return movedenddef show_reshard_table(table)table.each{|e|puts " Moving slot #{e[:slot]} from #{e[:source].info[:name]}"}end# Move slots between source and target nodes using MIGRATE.## Options:# :verbose -- Print a dot for every moved key.# :fix -- We are moving in the context of a fix. Use REPLACE.# :cold -- Move keys without opening slots / reconfiguring the nodes.# :update -- Update nodes.info[:slots] for source/target nodes.# :quiet -- Don't print info messages.def move_slot(source,target,slot,o={})o = {:pipeline => MigrateDefaultPipeline}.merge(o)# We start marking the slot as importing in the destination node,# and the slot as migrating in the target host. 
Note that the order of# the operations is important, as otherwise a client may be redirected# to the target node that does not yet know it is importing this slot.if !o[:quiet]print "Moving slot #{slot} from #{source} to #{target}: "STDOUT.flushendif !o[:cold]target.r.cluster("setslot",slot,"importing",source.info[:name])source.r.cluster("setslot",slot,"migrating",target.info[:name])end# Migrate all the keys from source to target using the MIGRATE commandwhile truekeys = source.r.cluster("getkeysinslot",slot,o[:pipeline])break if keys.length == 0beginsource.r.client.call(["migrate",target.info[:host],target.info[:port],"",0,@timeout,:keys,*keys])rescue => eif o[:fix] && e.to_s =~ /BUSYKEY/xputs "*** Target key exists. Replacing it for FIX."source.r.client.call(["migrate",target.info[:host],target.info[:port],"",0,@timeout,:replace,:keys,*keys])elseputs ""xputs "[ERR] Calling MIGRATE: #{e}"exit 1endendprint "."*keys.length if o[:dots]STDOUT.flushendputs if !o[:quiet]# Set the new node as the owner of the slot in all the known nodes.if !o[:cold]@nodes.each{|n|next if n.has_flag?("slave")n.r.cluster("setslot",slot,"node",target.info[:name])}end# Update the node logical configif o[:update] thensource.info[:slots].delete(slot)target.info[:slots][slot] = trueendend# redis-trib subcommands implementations.def check_cluster_cmd(argv,opt)load_cluster_info_from_node(argv[0])check_clusterenddef info_cluster_cmd(argv,opt)load_cluster_info_from_node(argv[0])show_cluster_infoenddef rebalance_cluster_cmd(argv,opt)opt = {'pipeline' => MigrateDefaultPipeline,'threshold' => RebalanceDefaultThreshold}.merge(opt)# Load nodes info before parsing options, otherwise we can't# handle --weight.load_cluster_info_from_node(argv[0])# Options parsingthreshold = opt['threshold'].to_iautoweights = opt['auto-weights']weights = {}opt['weight'].each{|w|fields = w.split("=")node = get_node_by_abbreviated_name(fields[0])if !node || !node.has_flag?("master")puts "*** No such master node 
#{fields[0]}"exit 1endweights[node.info[:name]] = fields[1].to_f} if opt['weight']useempty = opt['use-empty-masters']# Assign a weight to each node, and compute the total cluster weight.total_weight = 0nodes_involved = 0@nodes.each{|n|if n.has_flag?("master")next if !useempty && n.slots.length == 0n.info[:w] = weights[n.info[:name]] ? weights[n.info[:name]] : 1total_weight += n.info[:w]nodes_involved += 1end}# Check cluster, only proceed if it looks sane.check_cluster(:quiet => true)if @errors.length != 0puts "*** Please fix your cluster problems before rebalancing"exit 1end# Calculate the slots balance for each node. It's the number of# slots the node should lose (if positive) or gain (if negative)# in order to be balanced.threshold = opt['threshold'].to_fthreshold_reached = false@nodes.each{|n|if n.has_flag?("master")next if !n.info[:w]expected = ((ClusterHashSlots.to_f / total_weight) *n.info[:w]).to_in.info[:balance] = n.slots.length - expected# Compute the percentage of difference between the# expected number of slots and the real one, to see# if it's over the threshold specified by the user.over_threshold = falseif threshold > 0if n.slots.length > 0err_perc = (100-(100.0*expected/n.slots.length)).absover_threshold = true if err_perc > thresholdelsif expected > 0over_threshold = trueendendthreshold_reached = true if over_thresholdend}if !threshold_reachedxputs "*** No rebalancing needed! All nodes are within the #{threshold}% threshold."returnend# Only consider nodes we want to changesn = @nodes.select{|n|n.has_flag?("master") && n.info[:w]}# Because of rounding, it is possible that the balance of all nodes# summed does not give 0. 
Make sure that nodes that have to provide# slots are always matched by nodes receiving slots.total_balance = sn.map{|x| x.info[:balance]}.reduce{|a,b| a+b}while total_balance > 0sn.each{|n|if n.info[:balance] < 0 && total_balance > 0n.info[:balance] -= 1total_balance -= 1end}end# Sort nodes by their slots balance.sn = sn.sort{|a,b|a.info[:balance] <=> b.info[:balance]}xputs ">>> Rebalancing across #{nodes_involved} nodes. Total weight = #{total_weight}"if $verbosesn.each{|n|puts "#{n} balance is #{n.info[:balance]} slots"}end# Now we have at the start of the 'sn' array nodes that should get# slots, at the end nodes that must give slots.# We take two indexes, one at the start, and one at the end,# incrementing or decrementing the indexes accordingly til we# find nodes that need to get/provide slots.dst_idx = 0src_idx = sn.length - 1while dst_idx < src_idxdst = sn[dst_idx]src = sn[src_idx]numslots = [dst.info[:balance],src.info[:balance]].map{|n|n.abs}.minif numslots > 0puts "Moving #{numslots} slots from #{src} to #{dst}"# Actaully move the slots.reshard_table = compute_reshard_table([src],numslots)if reshard_table.length != numslotsxputs "*** Assertio failed: Reshard table != number of slots"exit 1endif opt['simulate']print "#"*reshard_table.lengthelsereshard_table.each{|e|move_slot(e[:source],dst,e[:slot],:quiet=>true,:dots=>false,:update=>true,:pipeline=>opt['pipeline'])print "#"STDOUT.flush}endputsend# Update nodes balance.dst.info[:balance] += numslotssrc.info[:balance] -= numslotsdst_idx += 1 if dst.info[:balance] == 0src_idx -= 1 if src.info[:balance] == 0endenddef fix_cluster_cmd(argv,opt)@fix = true@timeout = opt['timeout'].to_i if opt['timeout']load_cluster_info_from_node(argv[0])check_clusterenddef reshard_cluster_cmd(argv,opt)opt = {'pipeline' => MigrateDefaultPipeline}.merge(opt)load_cluster_info_from_node(argv[0])check_clusterif @errors.length != 0puts "*** Please fix your cluster problems before resharding"exit 1end@timeout = opt['timeout'].to_i if 
    opt['timeout'].to_i

    # Get number of slots
    if opt['slots']
        numslots = opt['slots'].to_i
    else
        numslots = 0
        while numslots <= 0 or numslots > ClusterHashSlots
            print "How many slots do you want to move (from 1 to #{ClusterHashSlots})? "
            numslots = STDIN.gets.to_i
        end
    end

    # Get the target instance
    if opt['to']
        target = get_node_by_name(opt['to'])
        if !target || target.has_flag?("slave")
            xputs "*** The specified node is not known or not a master, please retry."
            exit 1
        end
    else
        target = nil
        while not target
            print "What is the receiving node ID? "
            target = get_node_by_name(STDIN.gets.chop)
            if !target || target.has_flag?("slave")
                xputs "*** The specified node is not known or not a master, please retry."
                target = nil
            end
        end
    end

    # Get the source instances
    sources = []
    if opt['from']
        opt['from'].split(',').each{|node_id|
            if node_id == "all"
                sources = "all"
                break
            end
            src = get_node_by_name(node_id)
            if !src || src.has_flag?("slave")
                xputs "*** The specified node is not known or is not a master, please retry."
                exit 1
            end
            sources << src
        }
    else
        xputs "Please enter all the source node IDs."
        xputs "  Type 'all' to use all the nodes as source nodes for the hash slots."
        xputs "  Type 'done' once you entered all the source nodes IDs."
        while true
            print "Source node ##{sources.length+1}:"
            line = STDIN.gets.chop
            src = get_node_by_name(line)
            if line == "done"
                break
            elsif line == "all"
                sources = "all"
                break
            elsif !src || src.has_flag?("slave")
                xputs "*** The specified node is not known or is not a master, please retry."
            elsif src.info[:name] == target.info[:name]
                xputs "*** It is not possible to use the target node as source node."
            else
                sources << src
            end
        end
    end

    if sources.length == 0
        puts "*** No source nodes given, operation aborted"
        exit 1
    end

    # Handle soures == all.
    if sources == "all"
        sources = []
        @nodes.each{|n|
            next if n.info[:name] == target.info[:name]
            next if n.has_flag?("slave")
            sources << n
        }
    end

    # Check if the destination node is the same of any source nodes.
    if sources.index(target)
        xputs "*** Target node is also listed among the source nodes!"
        exit 1
    end

    puts "\nReady to move #{numslots} slots."
    puts "  Source nodes:"
    sources.each{|s| puts "    "+s.info_string}
    puts "  Destination node:"
    puts "    #{target.info_string}"
    reshard_table = compute_reshard_table(sources,numslots)
    puts "  Resharding plan:"
    show_reshard_table(reshard_table)
    if !opt['yes']
        print "Do you want to proceed with the proposed reshard plan (yes/no)? "
        yesno = STDIN.gets.chop
        exit(1) if (yesno != "yes")
    end
    reshard_table.each{|e|
        move_slot(e[:source],target,e[:slot],
            :dots=>true,
            :pipeline=>opt['pipeline'])
    }
end

# This is an helper function for create_cluster_cmd that verifies if
# the number of nodes and the specified replicas have a valid configuration
# where there are at least three master nodes and enough replicas per node.
def check_create_parameters
    masters = @nodes.length/(@replicas+1)
    if masters < 3
        puts "*** ERROR: Invalid configuration for cluster creation."
        puts "*** Redis Cluster requires at least 3 master nodes."
        puts "*** This is not possible with #{@nodes.length} nodes and #{@replicas} replicas per node."
        puts "*** At least #{3*(@replicas+1)} nodes are required."
        exit 1
    end
end

def create_cluster_cmd(argv,opt)
    opt = {'replicas' => 0}.merge(opt)
    @replicas = opt['replicas'].to_i

    xputs ">>> Creating cluster"
    argv[0..-1].each{|n|
        node = ClusterNode.new(n)
        node.connect(:abort => true)
        node.assert_cluster
        node.load_info
        node.assert_empty
        add_node(node)
    }
    check_create_parameters
    xputs ">>> Performing hash slots allocation on #{@nodes.length} nodes..."
    alloc_slots
    show_nodes
    yes_or_die "Can I set the above configuration?"
    flush_nodes_config
    xputs ">>> Nodes configuration updated"
    xputs ">>> Assign a different config epoch to each node"
    assign_config_epoch
    xputs ">>> Sending CLUSTER MEET messages to join the cluster"
    join_cluster
    # Give one second for the join to start, in order to avoid that
    # wait_cluster_join will find all the nodes agree about the config as
    # they are still empty with unassigned slots.
    sleep 1
    wait_cluster_join
    flush_nodes_config # Useful for the replicas
    check_cluster
end

def addnode_cluster_cmd(argv,opt)
    xputs ">>> Adding node #{argv[0]} to cluster #{argv[1]}"

    # Check the existing cluster
    load_cluster_info_from_node(argv[1])
    check_cluster

    # If --master-id was specified, try to resolve it now so that we
    # abort before starting with the node configuration.
    if opt['slave']
        if opt['master-id']
            master = get_node_by_name(opt['master-id'])
            if !master
                xputs "[ERR] No such master ID #{opt['master-id']}"
            end
        else
            master = get_master_with_least_replicas
            xputs "Automatically selected master #{master}"
        end
    end

    # Add the new node
    new = ClusterNode.new(argv[0])
    new.connect(:abort => true)
    new.assert_cluster
    new.load_info
    new.assert_empty
    first = @nodes.first.info
    add_node(new)

    # Send CLUSTER MEET command to the new node
    xputs ">>> Send CLUSTER MEET to node #{new} to make it join the cluster."
    new.r.cluster("meet",first[:host],first[:port])

    # Additional configuration is needed if the node is added as
    # a slave.
    if opt['slave']
        wait_cluster_join
        xputs ">>> Configure node as replica of #{master}."
        new.r.cluster("replicate",master.info[:name])
    end
    xputs "[OK] New node added correctly."
end

def delnode_cluster_cmd(argv,opt)
    id = argv[1].downcase
    xputs ">>> Removing node #{id} from cluster #{argv[0]}"

    # Load cluster information
    load_cluster_info_from_node(argv[0])

    # Check if the node exists and is not empty
    node = get_node_by_name(id)
    if !node
        xputs "[ERR] No such node ID #{id}"
        exit 1
    end
    if node.slots.length != 0
        xputs "[ERR] Node #{node} is not empty! Reshard data away and try again."
        exit 1
    end

    # Send CLUSTER FORGET to all the nodes but the node to remove
    xputs ">>> Sending CLUSTER FORGET messages to the cluster..."
    @nodes.each{|n|
        next if n == node
        if n.info[:replicate] && n.info[:replicate].downcase == id
            # Reconfigure the slave to replicate with some other node
            master = get_master_with_least_replicas
            xputs ">>> #{n} as replica of #{master}"
            n.r.cluster("replicate",master.info[:name])
        end
        n.r.cluster("forget",argv[1])
    }

    # Finally shutdown the node
    xputs ">>> SHUTDOWN the node."
    node.r.shutdown
end

def set_timeout_cluster_cmd(argv,opt)
    timeout = argv[1].to_i
    if timeout < 100
        puts "Setting a node timeout of less than 100 milliseconds is a bad idea."
        exit 1
    end

    # Load cluster information
    load_cluster_info_from_node(argv[0])
    ok_count = 0
    err_count = 0

    # Send CLUSTER FORGET to all the nodes but the node to remove
    xputs ">>> Reconfiguring node timeout in every cluster node..."
    @nodes.each{|n|
        begin
            n.r.config("set","cluster-node-timeout",timeout)
            n.r.config("rewrite")
            ok_count += 1
            xputs "*** New timeout set for #{n}"
        rescue => e
            puts "ERR setting node-timeot for #{n}: #{e}"
            err_count += 1
        end
    }
    xputs ">>> New node timeout set. #{ok_count} OK, #{err_count} ERR."
end

def call_cluster_cmd(argv,opt)
    cmd = argv[1..-1]
    cmd[0] = cmd[0].upcase

    # Load cluster information
    load_cluster_info_from_node(argv[0])
    xputs ">>> Calling #{cmd.join(" ")}"
    @nodes.each{|n|
        begin
            res = n.r.send(*cmd)
            puts "#{n}: #{res}"
        rescue => e
            puts "#{n}: #{e}"
        end
    }
end

def import_cluster_cmd(argv,opt)
    source_addr = opt['from']
    xputs ">>> Importing data from #{source_addr} to cluster #{argv[1]}"
    use_copy = opt['copy']
    use_replace = opt['replace']

    # Check the existing cluster.
    load_cluster_info_from_node(argv[0])
    check_cluster

    # Connect to the source node.
    xputs ">>> Connecting to the source Redis instance"
    src_host,src_port = source_addr.split(":")
    source = Redis.new(:host =>src_host, :port =>src_port)
    if source.info['cluster_enabled'].to_i == 1
        xputs "[ERR] The source node should not be a cluster node."
    end
    xputs "*** Importing #{source.dbsize} keys from DB 0"

    # Build a slot -> node map
    slots = {}
    @nodes.each{|n|
        n.slots.each{|s,_|
            slots[s] = n
        }
    }

    # Use SCAN to iterate over the keys, migrating to the
    # right node as needed.
    cursor = nil
    while cursor != 0
        cursor,keys = source.scan(cursor, :count => 1000)
        cursor = cursor.to_i
        keys.each{|k|
            # Migrate keys using the MIGRATE command.
            slot = key_to_slot(k)
            target = slots[slot]
            print "Migrating #{k} to #{target}: "
            STDOUT.flush
            begin
                cmd = ["migrate",target.info[:host],target.info[:port],k,0,@timeout]
                cmd << :copy if use_copy
                cmd << :replace if use_replace
                source.client.call(cmd)
            rescue => e
                puts e
            else
                puts "OK"
            end
        }
    end
end

def help_cluster_cmd(argv,opt)
    show_help
    exit 0
end

# Parse the options for the specific command "cmd".
# Returns an hash populate with option => value pairs, and the index of
# the first non-option argument in ARGV.
def parse_options(cmd)
    idx = 1 ; # Current index into ARGV
    options={}
    while idx < ARGV.length && ARGV[idx][0..1] == '--'
        if ARGV[idx][0..1] == "--"
            option = ARGV[idx][2..-1]
            idx += 1

            # --verbose is a global option
            if option == "verbose"
                $verbose = true
                next
            end

            if ALLOWED_OPTIONS[cmd] == nil || ALLOWED_OPTIONS[cmd][option] == nil
                puts "Unknown option '#{option}' for command '#{cmd}'"
                exit 1
            end
            if ALLOWED_OPTIONS[cmd][option] != false
                value = ARGV[idx]
                idx += 1
            else
                value = true
            end

            # If the option is set to [], it's a multiple arguments
            # option. We just queue every new value into an array.
            if ALLOWED_OPTIONS[cmd][option] == []
                options[option] = [] if !options[option]
                options[option] << value
            else
                options[option] = value
            end
        else
            # Remaining arguments are not options.
            break
        end
    end

    # Enforce mandatory options
    if ALLOWED_OPTIONS[cmd]
        ALLOWED_OPTIONS[cmd].each {|option,val|
            if !options[option] && val == :required
                puts "Option '--#{option}' is required "+ \
                     "for subcommand '#{cmd}'"
                exit 1
            end
        }
    end
    return options,idx
end
end

#################################################################################
# Libraries
#
# We try to don't depend on external libs since this is a critical part
# of Redis Cluster.
#################################################################################

# This is the CRC16 algorithm used by Redis Cluster to hash keys.
# Implementation according to CCITT standards.
#
# This is actually the XMODEM CRC 16 algorithm, using the
# following parameters:
#
# Name                       : "XMODEM", also known as "ZMODEM", "CRC-16/ACORN"
# Width                      : 16 bit
# Poly                       : 1021 (That is actually x^16 + x^12 + x^5 + 1)
# Initialization             : 0000
# Reflect Input byte         : False
# Reflect Output CRC         : False
# Xor constant to output CRC : 0000
# Output for "123456789"     : 31C3

module RedisClusterCRC16
    def RedisClusterCRC16.crc16(bytes)
        crc = 0
        bytes.each_byte{|b|
            crc = ((crc<<8) & 0xffff) ^ XMODEMCRC16Lookup[((crc>>8)^b) & 0xff]
        }
        crc
    end

private
    XMODEMCRC16Lookup = [
        0x0000,0x1021,0x2042,0x3063,0x4084,0x50a5,0x60c6,0x70e7,
        0x8108,0x9129,0xa14a,0xb16b,0xc18c,0xd1ad,0xe1ce,0xf1ef,
        0x1231,0x0210,0x3273,0x2252,0x52b5,0x4294,0x72f7,0x62d6,
        0x9339,0x8318,0xb37b,0xa35a,0xd3bd,0xc39c,0xf3ff,0xe3de,
        0x2462,0x3443,0x0420,0x1401,0x64e6,0x74c7,0x44a4,0x5485,
        0xa56a,0xb54b,0x8528,0x9509,0xe5ee,0xf5cf,0xc5ac,0xd58d,
        0x3653,0x2672,0x1611,0x0630,0x76d7,0x66f6,0x5695,0x46b4,
        0xb75b,0xa77a,0x9719,0x8738,0xf7df,0xe7fe,0xd79d,0xc7bc,
        0x48c4,0x58e5,0x6886,0x78a7,0x0840,0x1861,0x2802,0x3823,
        0xc9cc,0xd9ed,0xe98e,0xf9af,0x8948,0x9969,0xa90a,0xb92b,
        0x5af5,0x4ad4,0x7ab7,0x6a96,0x1a71,0x0a50,0x3a33,0x2a12,
        0xdbfd,0xcbdc,0xfbbf,0xeb9e,0x9b79,0x8b58,0xbb3b,0xab1a,
        0x6ca6,0x7c87,0x4ce4,0x5cc5,0x2c22,0x3c03,0x0c60,0x1c41,
        0xedae,0xfd8f,0xcdec,0xddcd,0xad2a,0xbd0b,0x8d68,0x9d49,
        0x7e97,0x6eb6,0x5ed5,0x4ef4,0x3e13,0x2e32,0x1e51,0x0e70,
        0xff9f,0xefbe,0xdfdd,0xcffc,0xbf1b,0xaf3a,0x9f59,0x8f78,
        0x9188,0x81a9,0xb1ca,0xa1eb,0xd10c,0xc12d,0xf14e,0xe16f,
        0x1080,0x00a1,0x30c2,0x20e3,0x5004,0x4025,0x7046,0x6067,
        0x83b9,0x9398,0xa3fb,0xb3da,0xc33d,0xd31c,0xe37f,0xf35e,
        0x02b1,0x1290,0x22f3,0x32d2,0x4235,0x5214,0x6277,0x7256,
        0xb5ea,0xa5cb,0x95a8,0x8589,0xf56e,0xe54f,0xd52c,0xc50d,
        0x34e2,0x24c3,0x14a0,0x0481,0x7466,0x6447,0x5424,0x4405,
        0xa7db,0xb7fa,0x8799,0x97b8,0xe75f,0xf77e,0xc71d,0xd73c,
        0x26d3,0x36f2,0x0691,0x16b0,0x6657,0x7676,0x4615,0x5634,
        0xd94c,0xc96d,0xf90e,0xe92f,0x99c8,0x89e9,0xb98a,0xa9ab,
        0x5844,0x4865,0x7806,0x6827,0x18c0,0x08e1,0x3882,0x28a3,
        0xcb7d,0xdb5c,0xeb3f,0xfb1e,0x8bf9,0x9bd8,0xabbb,0xbb9a,
        0x4a75,0x5a54,0x6a37,0x7a16,0x0af1,0x1ad0,0x2ab3,0x3a92,
        0xfd2e,0xed0f,0xdd6c,0xcd4d,0xbdaa,0xad8b,0x9de8,0x8dc9,
        0x7c26,0x6c07,0x5c64,0x4c45,0x3ca2,0x2c83,0x1ce0,0x0cc1,
        0xef1f,0xff3e,0xcf5d,0xdf7c,0xaf9b,0xbfba,0x8fd9,0x9ff8,
        0x6e17,0x7e36,0x4e55,0x5e74,0x2e93,0x3eb2,0x0ed1,0x1ef0
    ]
end

# Turn a key name into the corrisponding Redis Cluster slot.
def key_to_slot(key)
    # Only hash what is inside {...} if there is such a pattern in the key.
    # Note that the specification requires the content that is between
    # the first { and the first } after the first {. If we found {} without
    # nothing in the middle, the whole key is hashed as usually.
    s = key.index "{"
    if s
        e = key.index "}",s+1
        if e && e != s+1
            key = key[s+1..e-1]
        end
    end
    RedisClusterCRC16.crc16(key) % 16384
end

#################################################################################
# Definition of commands
#################################################################################

COMMANDS={
    "create"  => ["create_cluster_cmd", -2, "host1:port1 ... hostN:portN"],
    "check"   => ["check_cluster_cmd", 2, "host:port"],
    "info"    => ["info_cluster_cmd", 2, "host:port"],
    "fix"     => ["fix_cluster_cmd", 2, "host:port"],
    "reshard" => ["reshard_cluster_cmd", 2, "host:port"],
    "rebalance" => ["rebalance_cluster_cmd", -2, "host:port"],
    "add-node" => ["addnode_cluster_cmd", 3, "new_host:new_port existing_host:existing_port"],
    "del-node" => ["delnode_cluster_cmd", 3, "host:port node_id"],
    "set-timeout" => ["set_timeout_cluster_cmd", 3, "host:port milliseconds"],
    "call" =>    ["call_cluster_cmd", -3, "host:port command arg arg .. arg"],
    "import" =>  ["import_cluster_cmd", 2, "host:port"],
    "help"   => ["help_cluster_cmd", 1, "(show this help)"]
}

ALLOWED_OPTIONS={
    "create" => {"replicas" => true},
    "add-node" => {"slave" => false, "master-id" => true},
    "import" => {"from" => :required, "copy" => false, "replace" => false},
    "reshard" => {"from" => true, "to" => true, "slots" => true, "yes" => false, "timeout" => true, "pipeline" => true},
    "rebalance" => {"weight" => [], "auto-weights" => false, "use-empty-masters" => false, "timeout" => true, "simulate" => false, "pipeline" => true, "threshold" => true},
    "fix" => {"timeout" => MigrateDefaultTimeout},
}

def show_help
    puts "Usage: redis-trib <command> <options> <arguments ...>\n\n"
    COMMANDS.each{|k,v|
        o = ""
        puts "  #{k.ljust(15)} #{v[2]}"
        if ALLOWED_OPTIONS[k]
            ALLOWED_OPTIONS[k].each{|optname,has_arg|
                puts "                  --#{optname}" + (has_arg ? " <arg>" : "")
            }
        end
    }
    puts "\nFor check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.\n"
end

# Sanity check
if ARGV.length == 0
    show_help
    exit 1
end

rt = RedisTrib.new
cmd_spec = COMMANDS[ARGV[0].downcase]
if !cmd_spec
    puts "Unknown redis-trib subcommand '#{ARGV[0]}'"
    exit 1
end

# Parse options
cmd_options,first_non_option = rt.parse_options(ARGV[0].downcase)
rt.check_arity(cmd_spec[1],ARGV.length-(first_non_option-1))

# Dispatch
rt.send(cmd_spec[0],ARGV[first_non_option..-1],cmd_options)

5. Building the cluster
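Before wiring the cluster together, it helps to see what the 16384 hash slots in the script above actually are: a key's slot is the XMODEM CRC16 of the key modulo 16384, with only the content of a non-empty {...} hash tag being hashed. The sketch below is not part of redis-trib.rb; it reimplements the same CRC bitwise instead of with the lookup table, and its check value matches the "Output for 123456789: 31C3" comment above.

```ruby
# Bitwise XMODEM CRC16 (poly 0x1021, init 0, no reflection) -
# equivalent to the table-driven RedisClusterCRC16.crc16 above.
def crc16_xmodem(str)
  crc = 0
  str.each_byte do |b|
    crc ^= b << 8
    8.times do
      crc = (crc & 0x8000) != 0 ? ((crc << 1) ^ 0x1021) & 0xffff : (crc << 1) & 0xffff
    end
  end
  crc
end

# Same hash-tag rule as key_to_slot: hash only the {...} content
# when a non-empty tag is present.
def slot_for(key)
  s = key.index("{")
  if s
    e = key.index("}", s + 1)
    key = key[s + 1..e - 1] if e && e != s + 1
  end
  crc16_xmodem(key) % 16384
end

printf("%04X\n", crc16_xmodem("123456789"))                      # 31C3, the reference value
puts slot_for("{user1000}.following") == slot_for("{user1000}.followers") # same tag, same slot
```

Because both keys share the {user1000} tag they land on the same node, which is what makes multi-key operations possible in a cluster.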
Double-click startup.bat in each of the 6380–6385 folders to start all six Redis instances, and leave the console windows open.
Open a cmd prompt in the redis-cluster directory and run the create command (with the ports used in this article: ruby redis-trib.rb create --replicas 1 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 127.0.0.1:6385). Partway through, redis-trib shows the proposed slot allocation and asks whether to apply it; type yes, and it will push the configuration to the cluster and get the nodes talking to each other.
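The six-node layout is not arbitrary: check_create_parameters in the script above computes masters = nodes / (replicas + 1) and refuses anything below three masters. A minimal sketch of that arithmetic:

```ruby
# Mirrors the check in check_create_parameters: with R replicas per
# master, N nodes yield N / (R + 1) masters, and Redis Cluster
# requires at least 3 masters.
def enough_nodes?(nodes, replicas)
  nodes / (replicas + 1) >= 3
end

puts enough_nodes?(6, 1)  # 3 masters + 3 replicas -> true
puts enough_nodes?(4, 1)  # only 2 masters -> false
```

So with --replicas 1, six nodes is the minimum that passes the check.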
Note: do not use redis-cli --cluster create to build the cluster on Windows. The --cluster subcommands were only added in Redis 5.0.0, while the newest Windows port is 3.2.100, so redis-cli cannot create a cluster there and will report the following error:
Unrecognized option or bad number of args for: '--cluster'

6. Registering Redis as a Windows service
At this point the cluster is built, but because we started the instances from console windows, closing a window kills its Redis instance. To avoid that, we can register each instance as a local Windows service and control it with start/stop commands.
6.1 First, run cmd as Administrator and execute the following command, changing the paths to your own:
SC CREATE redis6380 binpath= "\"D:\redis-cluster\6380\redis-server.exe\" --service-run \"D:\redis-cluster\6380\redis.windows.conf\""
After registration succeeds, start the Redis service with the service name you registered, for example: net start redis6380
Register the remaining nodes the same way; all six node services will then appear in the local Windows Services list.
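Typing six registration commands by hand is error-prone. As a sketch (the D:\redis-cluster path and the redis63xx service names are just this article's example layout), a few lines of Ruby can print all six commands for you to paste into an elevated prompt:

```ruby
# Print one "SC CREATE" line per node folder 6380..6385.
# The base path follows the example above; adjust to your layout.
base = 'D:\redis-cluster'
commands = (6380..6385).map do |port|
  dir = "#{base}\\#{port}"
  "SC CREATE redis#{port} binpath= \"\\\"#{dir}\\redis-server.exe\\\" " \
  "--service-run \\\"#{dir}\\redis.windows.conf\\\"\""
end
puts commands
```

Note the escaped inner quotes: binpath= must carry the executable path and its --service-run argument as a single quoted string, with each embedded path itself quoted.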
To stop a service, use the matching stop command, for example: net stop redis6380
7. Inspecting and testing the cluster
Open cmd in the 6380 folder and run: redis-cli -c -h 127.0.0.1 -p 6380
(The general form is redis-cli -c -h <host> -p <port>; -c enables cluster mode, so the client follows slot redirects.)
cluster info: prints overall information about the cluster
cluster nodes: shows the details of each node
info replication: shows the current node's master/replica relationship
Test: set and get a key; with -c, the client is redirected to whichever node owns the key's hash slot.
Switch to a replica node:
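The raw cluster nodes output is dense. As a sketch (the node IDs below are made-up placeholders; the field order — id, address, flags, master id, ping/pong timestamps, epoch, link state, slots — follows the CLUSTER NODES output format), a few lines of Ruby make the master/replica pairing readable:

```ruby
# Hypothetical sample of "cluster nodes" output (ids are placeholders).
sample = <<~NODES
  aaaa1111 127.0.0.1:6380 myself,master - 0 0 1 connected 0-5460
  bbbb2222 127.0.0.1:6383 slave aaaa1111 0 1500000000000 4 connected
NODES

nodes = sample.lines.map do |line|
  id, addr, flags, master_id, *rest = line.split
  { id: id,
    addr: addr,
    role: flags.include?("master") ? "master" : "slave",
    master_id: master_id == "-" ? nil : master_id,
    # For masters, the slot ranges trail the fixed fields.
    slots: flags.include?("master") ? rest[4..-1] : [] }
end

nodes.each { |n| puts "#{n[:addr]} #{n[:role]} #{n[:master_id] || ''}".strip }
# -> 127.0.0.1:6380 master
# -> 127.0.0.1:6383 slave aaaa1111
```

In a healthy 3-master/3-replica setup you should see each slave line pointing at one of the three master ids, and the three masters together covering slots 0-16383.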
Done. That's a wrap!
Together, one blazing fire; apart, a sky full of stars.