LVS_DR (load balancing) and LVS_DR + keepalived (high availability + load balancing)
client -> VS -> RS -> client (the VS only schedules requests; the RS are the real servers)
LVS_DR schematic:
Advantage: the load balancer only distributes request packets to the real servers; the real servers send their reply packets directly to the client. The balancer can therefore handle a very large request volume: a single balancer can serve more than 100 real servers without becoming the system bottleneck.
Disadvantage: this mode requires the director IP (DIP) and all real-server IPs (RIPs) to be in the same broadcast domain; remote disaster recovery is not supported.
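The forwarding trick described above can be shown with a toy simulation. This is only a sketch of the idea, not real packet handling: the `director_forward` function and the frame representation are invented for illustration, and the MAC/IP values are the ones used later in this lab. The point is that the director rewrites only the destination MAC; the destination IP stays the VIP, which is why the RS can answer the client directly.

```shell
#!/bin/sh
# Toy simulation of LVS-DR forwarding. A "frame" is just "dst_mac dst_ip".
VIP=172.25.1.100
DIR_MAC=52:54:00:4f:1c:32        # director's MAC
RS_MAC=52:54:00:2b:85:5b         # chosen real server's MAC

director_forward() {
  # Rewrite only the destination MAC; leave the IP header (the VIP) alone.
  set -- $1
  echo "$RS_MAC $2"
}

frame_in="$DIR_MAC $VIP"                 # client sends to the VIP at the director's MAC
frame_out=$(director_forward "$frame_in")
echo "$frame_out"                        # RS receives the same VIP, new MAC
```

Because the RS sees a packet addressed to the VIP, it must also hold the VIP locally, which is exactly why the VIP is added on every RS below.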
環(huán)境:
iptables和selinux關閉
test1(調度器)端(172.25.1.1):
[root@test1 ~]# yum install -y ipvsadm
Loaded plugins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use
subscription-manager to register.
Setting up Install Process
No package ipvsadm available.
Error: Nothing to do
[root@test1 ~]# vim /etc/yum.repos.d/rhel-source.repo
[rhel6.5]
name=rhel6.5
baseurl=http://172.25.1.250/rhel6.5
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[LoadBalancer]                  //add the LoadBalancer repo, which provides ipvsadm
name=LoadBalancer
baseurl=http://172.25.1.250/rhel6.5/LoadBalancer
gpgcheck=0
enabled=1
[root@test1 ~]# yum install -y ipvsadm
[root@test1 ~]# /etc/init.d/ipvsadm start        //start the service
[root@test1 ~]# ip addr add 172.25.1.100 dev eth0        //add the virtual IP (VIP)
[root@test1 ~]# ip addr        //verify it was added
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state
UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
state UP qlen 1000
link/ether 52:54:00:4f:1c:32 brd ff:ff:ff:ff:ff:ff
inet 172.25.1.1/24 brd 172.25.1.255 scope global eth0
inet 172.25.1.100/32 scope global eth0
inet6 fe80::5054:ff:fe4f:1c32/64 scope link
valid_lft forever preferred_lft forever
[root@test1 ~]# ipvsadm -A -t 172.25.1.100:80 -s rr        //-A adds a virtual service, -s rr selects round-robin scheduling
[root@test1 ~]# ipvsadm -a -t 172.25.1.100:80 -r 172.25.1.2:80 -g        //-a adds a real server to the virtual service, -g selects direct-routing (DR) mode
[root@test1 ~]# ipvsadm -a -t 172.25.1.100:80 -r 172.25.1.3:80 -g
[root@test1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port
Forward Weight ActiveConn InActConn
TCP 172.25.1.100:80 rr
-> 172.25.1.2:80    Route    1    0    0
-> 172.25.1.3:80    Route    1    0    0
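The `-s rr` scheduler chosen above hands requests to the real servers strictly in turn. A minimal sketch of that behavior (the `rr_pick` helper is invented for illustration; real ipvs keeps this counter in the kernel):

```shell
#!/bin/sh
# Round-robin over a fixed server list, as "-s rr" does.
rr_pick() {
  # $1 = 0-based request number; remaining args = real servers
  idx=$1; shift
  i=$(( (idx % $#) + 1 ))
  eval "echo \"\${$i}\""
}
for n in 0 1 2 3; do
  rr_pick "$n" 172.25.1.2 172.25.1.3    # alternates between the two RS
done
```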
Real server 1 (server2, 172.25.1.2):
[root@test2 ~]# ip addr add 172.25.1.100/32 dev eth0        //add the VIP so that test1 can complete the TCP three-way handshake with this host
[root@test2 ~]# vim /var/www/html/index.html
<h1>www.westos.org-server2</h1>
[root@test2 ~]# /etc/init.d/httpd restart
Real server 2 (server3, 172.25.1.3):
[root@test3 ~]# ip addr add 172.25.1.100/32 dev eth0        //add the VIP so that test1 can complete the TCP three-way handshake with this host
[root@test3 ~]# vim /var/www/html/index.html
<h1>bbs.westos.org-server3</h1>
[root@test3 ~]# /etc/init.d/httpd restart
Client access:
Director MAC address: 52:54:00:4f:1c:32
Real server 1 (server2) MAC address: 52:54:00:2b:85:5b
Real server 2 (server3) MAC address: 52:54:00:98:3d:65
Note: at this point the client can see any of the following three outcomes:
[root@foundation1 ~]# arp -d 172.25.1.100        //delete the cached ARP entry for the VIP
[root@foundation1 ~]# curl 172.25.1.100
<h1>www.westos.org-server2</h1>
[root@foundation1 ~]# curl 172.25.1.100
<h1>www.westos.org-server2</h1>
[root@foundation1 ~]# curl 172.25.1.100
<h1>www.westos.org-server2</h1>
[root@foundation1 ~]# arp -an | grep 100
? (172.25.1.100) at 52:54:00:2b:85:5b [ether] on br0
Observation 1: the MAC address shows the requests bypassed the director and went straight to real server 1.
[root@foundation1 ~]# arp -d 172.25.1.100
[root@foundation1 ~]# curl 172.25.1.100
<h1>bbs.westos.org-server3</h1>
[root@foundation1 ~]# curl 172.25.1.100
<h1>bbs.westos.org-server3</h1>
[root@foundation1 ~]# curl 172.25.1.100
<h1>bbs.westos.org-server3</h1>
[root@foundation1 ~]# arp -an | grep 100
? (172.25.1.100) at 52:54:00:98:3d:65 [ether] on br0
Observation 2: the MAC address shows the requests bypassed the director and went straight to real server 2.
[root@foundation1 ~]# arp -d 172.25.1.100
[root@foundation1 ~]# curl 172.25.1.100
<h1>bbs.westos.org-server2</h1>
[root@foundation1 ~]# curl 172.25.1.100
<h1>bbs.westos.org-server3</h1>
[root@foundation1 ~]# curl 172.25.1.100
<h1>bbs.westos.org-server2</h1>
[root@foundation1 ~]# curl 172.25.1.100
<h1>bbs.westos.org-server3</h1>
[root@foundation1 ~]# arp -an | grep 100
? (172.25.1.100) at 52:54:00:4f:1c:32 [ether] on br0
Observation 3: the MAC address shows the requests went through the director, so the responses are round-robin.
Summary: across the three cases, which machine answers is effectively random, because all three servers sit in the same VLAN and carry the same VIP, so there is no guarantee that the client's ARP request is answered by the director.
Fix: the real servers must be stopped from answering ARP for the VIP, so that clients can only reach it through the director.
RS (test2):
[root@test2 ~]# yum install arptables_jf -y        //install arptables
[root@test2 ~]# arptables -A IN -d 172.25.1.100 -j DROP        //drop incoming ARP requests for the VIP, so clients cannot reach RS1 directly
[root@test2 ~]# arptables -A OUT -s 172.25.1.100 -j mangle --mangle-ip-s 172.25.1.2        //rewrite outgoing ARP from the VIP to the real IP, so traffic from the VS still works
[root@test2 ~]# /etc/init.d/arptables_jf save        //save the rules
[root@test2 ~]# cat /etc/sysconfig/arptables        //inspect the saved rules
# Generated by arptables-save v0.0.8 on Thu Sep 27 22:31:05 2018
*filter
:IN ACCEPT [0:0]
:OUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
[0:0] -A IN -d 172.25.1.100 -j DROP
[0:0] -A OUT -s 172.25.1.100 -j mangle --mangle-ip-s 172.25.1.2
COMMIT
# Completed on Thu Sep 27 22:31:05 2018
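The same rule pair is applied on every RS, only with its own real IP substituted. A small sketch that emits the two commands for any RS (the `gen_arp_rules` helper is invented here; it prints the commands rather than running them, since applying them requires root and arptables):

```shell
#!/bin/sh
# Emit the arptables rules an RS needs, given the VIP and its real IP.
gen_arp_rules() {
  vip=$1; rip=$2
  echo "arptables -A IN -d $vip -j DROP"
  echo "arptables -A OUT -s $vip -j mangle --mangle-ip-s $rip"
}
gen_arp_rules 172.25.1.100 172.25.1.2    # rules for test2
gen_arp_rules 172.25.1.100 172.25.1.3    # rules for test3
```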
RS (test3), same steps as test2:
[root@test3 ~]# yum install arptables_jf -y
[root@test3 ~]# arptables -A IN -d 172.25.1.100 -j DROP        //drop incoming ARP requests for the VIP, so clients cannot reach RS2 directly
[root@test3 ~]# arptables -A OUT -s 172.25.1.100 -j mangle --mangle-ip-s 172.25.1.3        //rewrite outgoing ARP from the VIP to the real IP
[root@test3 ~]# /etc/init.d/arptables_jf save        //save the rules
[root@test3 ~]# arptables -nL        //list the active arptables rules
[root@test3 ~]# cat /etc/sysconfig/arptables
# Generated by arptables-save v0.0.8 on Thu Sep 27 22:31:09 2018
*filter
:IN ACCEPT [1:28]
:OUT ACCEPT [1:28]
:FORWARD ACCEPT [0:0]
[0:0] -A IN -d 172.25.1.100 -j DROP
[0:0] -A OUT -s 172.25.1.100 -j mangle --mangle-ip-s 172.25.1.3
COMMIT
# Completed on Thu Sep 27 22:31:09 2018
Client test (foundation1, 172.25.1.250):
[root@foundation1 ~]# arp -an | grep 100
? (172.25.1.100) at 52:54:00:4f:1c:32 [ether] on br0
This time the VIP resolves to the VS's MAC address.
[root@foundation1 ~]# arp -d 172.25.1.100        //delete the entry repeatedly and check that the VIP always resolves to the VS's MAC
[root@foundation1 ~]# curl 172.25.1.100
<h1>bbs.westos.org-server3</h1>
[root@foundation1 ~]# curl 172.25.1.100
<h1>www.westos.org-server2</h1>
[root@foundation1 ~]# curl 172.25.1.100
<h1>bbs.westos.org-server3</h1>
[root@foundation1 ~]# curl 172.25.1.100
<h1>www.westos.org-server2</h1>
Problem: the weakness of this setup is that LVS keeps scheduling a real server even after it dies. If httpd is stopped on server2, every request routed to it fails while server3 keeps answering, so the client alternates between an error and a valid page:
[root@test2 ~]# /etc/init.d/httpd stop
Stopping httpd:                                            [  OK  ]
[root@foundation1 ~]# curl 172.25.1.100
curl: (7) Failed connect to 172.25.1.100:80; Connection refused
[root@foundation1 ~]# curl 172.25.1.100
<h1>bbs.westos.org-server3</h1>
[root@foundation1 ~]# curl 172.25.1.100
curl: (7) Failed connect to 172.25.1.100:80; Connection refused
[root@foundation1 ~]# curl 172.25.1.100
<h1>bbs.westos.org-server3</h1>
Summary: LVS itself performs no health checks on the back end.
Solution 1: use ldirectord to add health checking.
VS side:
[root@test1 ~]# vim /etc/yum.repos.d/rhel-source.repo        //add the HighAvailability repo, needed for ldirectord's dependencies
[HighAvailability]
name=HighAvailability
baseurl=http://172.25.1.250/rhel6.5/HighAvailability
gpgcheck=0
[root@test1 ~]# ls
ldirectord-3.9.5-3.1.x86_64.rpm
[root@test1 ~]# yum install * -y
[root@test1 ~]# rpm -ql ldirectord        //locate the sample configuration file
/usr/share/doc/ldirectord-3.9.5/ldirectord.cf
[root@test1 ~]# cp /usr/share/doc/ldirectord-3.9.5/ldirectord.cf /etc/ha.d/        //copy it to /etc/ha.d/
[root@test1 ~]# cd /etc/ha.d
[root@test1 ha.d]# ls
ldirectord.cf resource.d shellfuncs
[root@test1 ha.d]# vim ldirectord.cf        //edit the configuration
virtual=172.25.1.100:80            //the VIP
real=172.25.1.2:80 gate            //real server 1
real=172.25.1.3:80 gate            //real server 2
fallback=127.0.0.1:80 gate         //served locally when every real server is down
service=http
scheduler=rr
#persistent=600
#netmask=255.255.255.255
protocol=tcp
checktype=negotiate
checkport=80
request="index.html"
#receive="Test Page"
#virtualhost=www.x.y.z
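With `checktype=negotiate`, ldirectord does not just open a TCP connection: it requests the configured `request` page from each RS and treats an empty or failed reply as an unhealthy server. A mocked sketch of that decision (the `probe` function and its mocked bodies are invented here; real ldirectord performs an actual HTTP GET):

```shell
#!/bin/sh
# Decide health from a (mocked) fetched response body.
probe() {
  # $1 = response body; "" simulates a failed or empty fetch
  if [ -n "$1" ]; then echo healthy; else echo failed; fi
}
probe "<h1>www.westos.org-server2</h1>"    # healthy
probe ""                                   # failed
```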
[root@test1 ha.d]# ipvsadm -ln????? //列出規(guī)則
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port
Forward Weight ActiveConn InActConn
TCP 172.25.1.100:80 rr
-> 172.25.1.2:80    Route    1    0    3
-> 172.25.1.3:80    Route    1    0    2
[root@test1 ~]# ipvsadm -C        //flush all rules
[root@test1 ~]# ipvsadm -l        //confirm they are gone
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port
Forward Weight ActiveConn InActConn
[root@test1 ha.d]# /etc/init.d/ldirectord start        //starting the service reloads the rules automatically
Starting ldirectord... success
[root@test1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port
Forward Weight ActiveConn InActConn
TCP 172.25.1.100:80 rr
-> 172.25.1.2:80    Route    1    0    3
-> 172.25.1.3:80    Route    1    0    2
[root@test1 ha.d]# vim /etc/httpd/conf/httpd.conf        //make sure the local httpd listens on port 80
Listen 80
[root@test1 ha.d]# /etc/init.d/httpd start        //start the fallback web server
[root@test1 ha.d]# cd /var/www/html/
[root@test1 html]# vim index.html        //write the fallback page
<h1>The site is under maintenance......</h1>
Client test:
[root@foundation1 ~]# curl 172.25.1.100
<h1>www.westos.org-server2</h1>
[root@foundation1 ~]# curl 172.25.1.100
<h1>bbs.westos.org-server3</h1>
[root@foundation1 ~]# curl 172.25.1.100
<h1>www.westos.org-server2</h1>
[root@foundation1 ~]# curl 172.25.1.100
<h1>bbs.westos.org-server3</h1>
[root@test2 ~]# /etc/init.d/httpd stop        //kill one back end; the rules update automatically
[root@test1 ha.d]# ipvsadm -ln        //the dead server has already been removed
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port
Forward Weight ActiveConn InActConn
TCP 172.25.1.100:80 rr
-> 172.25.1.3:80    Route    1    0    2
The client accesses again:
[root@foundation1 ~]# curl 172.25.1.100
<h1>bbs.westos.org-server3</h1>
[root@foundation1 ~]# curl 172.25.1.100
<h1>bbs.westos.org-server3</h1>
[root@foundation1 ~]# curl 172.25.1.100
<h1>bbs.westos.org-server3</h1>
If both real servers go down:
[root@test3 ~]# /etc/init.d/httpd stop
[root@test1 ha.d]# ipvsadm -ln        //no healthy real server is left; only the local fallback remains
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port
Forward Weight ActiveConn InActConn
TCP 172.25.1.100:80 rr
-> 127.0.0.1:80    Local    1    0    0
The client accesses again:
[root@foundation1 ~]# curl 172.25.1.100
<h1>The site is under maintenance......</h1>
[root@foundation1 ~]# curl 172.25.1.100
<h1>The site is under maintenance......</h1>
[root@foundation1 ~]# curl 172.25.1.100
<h1>The site is under maintenance......</h1>
Summary:
With curl 172.25.1.100 from the client, the real servers answer in round-robin; when test2 is stopped, only test3 answers; when both real servers are stopped, the local fallback on test1 answers, showing "The site is under maintenance......".
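The pool behavior summarized above can be sketched in a few lines. This is only an illustration of the selection logic ldirectord produces, with health states passed in as mocked `ip:state` strings (the `pick_pool` helper is invented here):

```shell
#!/bin/sh
# Keep healthy real servers; fall back to the local server when none is up.
pick_pool() {
  # args: fallback IP, then "ip:state" pairs (state is up or down)
  fallback=$1; shift
  up=""
  for rs in "$@"; do
    ip=${rs%%:*}; state=${rs##*:}
    if [ "$state" = up ]; then up="$up $ip"; fi
  done
  if [ -n "$up" ]; then echo $up; else echo "$fallback"; fi
}
pick_pool 127.0.0.1 172.25.1.2:up 172.25.1.3:up      # both healthy
pick_pool 127.0.0.1 172.25.1.2:down 172.25.1.3:up    # test2 dead
pick_pool 127.0.0.1 172.25.1.2:down 172.25.1.3:down  # all dead -> fallback
```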
Solution 2: use keepalived, which combines health checking with VRRP failover.
Download keepalived from the official site: http://www.keepalived.org/download.html
The two VS nodes are:
master: test1
backup: test4
Install keepalived on both VS nodes:
1. unpack the source
2. ./configure --> install openssl-devel --> make --> make install
On the master (test1), build keepalived:
[root@test1 ~]# ls
keepalived-2.0.6.tar.gz
[root@test1 ~]# tar zxf keepalived-2.0.6.tar.gz        //unpack the tarball
[root@test1 ~]# ls
keepalived-2.0.6 keepalived-2.0.6.tar.gz
[root@test1 keepalived-2.0.6]# ./configure --prefix=/usr/local/keepalived --with-init=SYSV        //fails until the OpenSSL headers are installed
[root@test1 keepalived-2.0.6]# yum install openssl-devel -y
[root@test1 keepalived-2.0.6]# ./configure --prefix=/usr/local/keepalived --with-init=SYSV
[root@test1 keepalived-2.0.6]# make        //compile
[root@test1 keepalived-2.0.6]# make install
[root@test1 keepalived-2.0.6]# ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
[root@test1 keepalived-2.0.6]# ln -s /usr/local/keepalived/etc/keepalived/ /etc/
[root@test1 keepalived-2.0.6]# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@test1 keepalived-2.0.6]# ln -s /usr/local/keepalived/sbin/keepalived /sbin/
[root@test1 keepalived-2.0.6]# cd /usr/local/keepalived/etc/rc.d/init.d/
[root@test1 init.d]# chmod +x keepalived        //make the init script executable
[root@test1 init.d]# /etc/init.d/keepalived start        //start keepalived
On the backup (test4):
Create another VM (test4, 172.25.1.4) and copy the already-built keepalived tree from test1:
[root@test4 ~]# yum install openssh-clients -y
[root@test1 local]# scp -r keepalived/ root@172.25.1.4:/usr/local/        //copy the compiled keepalived from test1 to test4
[root@test4 local]# ls
bin etc games include keepalived lib lib64 libexec sbin share src
[root@test4 ~]# ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
[root@test4 local]# ln -s /usr/local/keepalived/etc/keepalived/ /etc/
[root@test4 local]# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@test4 local]# ln -s /usr/local/keepalived/sbin/keepalived /sbin/
[root@test4 keepalived]# /etc/init.d/keepalived start
Back on test1, configure keepalived:
[root@test1 ~]# cd /etc/keepalived/
[root@test1 keepalived]# yum install mailx -y        //install the mail client used for notifications
[root@test1 keepalived]# ip addr del 172.25.1.100/32 dev eth0        //remove the manually added VIP; keepalived will manage it from now on
[root@test1 keepalived]# /etc/init.d/ldirectord stop        //ldirectord is no longer needed
[root@test1 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state
UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
state UP qlen 1000
link/ether 52:54:00:4f:1c:32 brd ff:ff:ff:ff:ff:ff
inet 172.25.1.1/24 brd 172.25.1.255 scope global eth0        //the VIP is gone
inet6 fe80::5054:ff:fe4f:1c32/64 scope link
[root@test1 keepalived]# vim keepalived.conf        //edit the configuration
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost        //where alert mail goes
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    #vrrp_strict        //commented out so keepalived does not install restrictive firewall rules
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state MASTER        //MASTER marks the primary node
    interface eth0
    virtual_router_id 1
    priority 100        //the higher the number, the higher the priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.1.100        //the VIP; keepalived adds it automatically when the service starts
    }
}
virtual_server 172.25.1.100 80 {        //the virtual service
    delay_loop 6        //health-check interval for the back end
    lb_algo rr
    lb_kind DR        //direct-routing mode
    #persistence_timeout 50        //persistence commented out so round-robin stays visible
    protocol TCP
    real_server 172.25.1.2 80 {        //RS1
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.1.3 80 {        //RS2
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}
[root@test1 keepalived]# /etc/init.d/keepalived restart
[root@test1 keepalived]# scp keepalived.conf root@172.25.1.4:/etc/keepalived/        //copy the configuration to test4
[root@test4 keepalived]# cd /etc/keepalived/
[root@test4 keepalived]# yum install mailx -y
[root@test4 keepalived]# vim keepalived.conf        //only the vrrp_instance section changes
vrrp_instance VI_1 {
    state BACKUP        //changed to backup mode
    interface eth0
    virtual_router_id 1
    priority 50        //must be lower than test1's priority
[root@test4 keepalived]# >/var/log/messages        //truncate the log
[root@test4 keepalived]# /etc/init.d/keepalived restart        //restart the service
[root@test4 keepalived]# cat /var/log/messages        //the log shows test4 entering BACKUP state
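The priorities set above drive the VRRP election: among the live routers sharing a virtual_router_id, the highest priority becomes MASTER and holds the VIP; the rest stay BACKUP. A toy sketch of that rule (the `elect_master` helper and its `name:priority` input format are invented here; real VRRP also breaks priority ties by IP address, which this sketch ignores):

```shell
#!/bin/sh
# Pick the MASTER: highest priority among the routers that are alive.
elect_master() {
  best=""; best_prio=-1
  for r in "$@"; do
    name=${r%%:*}; prio=${r##*:}
    if [ "$prio" -gt "$best_prio" ]; then best=$name; best_prio=$prio; fi
  done
  echo "$best"
}
elect_master test1:100 test4:50    # both alive -> test1 is MASTER
elect_master test4:50              # test1 dead -> test4 takes over the VIP
```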
Client test:
Both test1 and test4 now run keepalived, with test1 as master and test4 as backup.
[root@foundation1 lvs]# curl 172.25.1.100
<h1>bbs.westos.org-server3</h1>
[root@foundation1 lvs]# curl 172.25.1.100
<h1>www.westos.org-server2</h1>
[root@foundation1 lvs]# curl 172.25.1.100
<h1>bbs.westos.org-server3</h1>
[root@foundation1 lvs]# curl 172.25.1.100
<h1>www.westos.org-server2</h1>
[root@foundation1 lvs]# arp -an | grep 100
? (172.25.1.100) at 52:54:00:4f:1c:32 [ether] on br0
The MAC address shows the traffic goes through test1.
If keepalived on test1 is stopped, test4 takes over the VIP and clients can still connect:
[root@test1 keepalived]# /etc/init.d/keepalived stop
[root@foundation1 lvs]# curl 172.25.1.100
<h1>bbs.westos.org-server3</h1>
[root@foundation1 lvs]# curl 172.25.1.100
<h1>www.westos.org-server2</h1>
[root@foundation1 lvs]# curl 172.25.1.100
<h1>bbs.westos.org-server3</h1>
[root@foundation1 lvs]# curl 172.25.1.100
<h1>www.westos.org-server2</h1>
The MAC address now shows the traffic goes through test4.
Restart keepalived on test1, then stop httpd on test3; clients can now only reach server2:
[root@test3 ~]# /etc/init.d/httpd stop
[root@foundation1 lvs]# curl 172.25.1.100
<h1>www.westos.org-server2</h1>
[root@foundation1 lvs]# curl 172.25.1.100
<h1>www.westos.org-server2</h1>
[root@foundation1 lvs]# curl 172.25.1.100
<h1>www.westos.org-server2</h1>
If both real servers are down, clients simply cannot connect. Unlike the ldirectord setup, this keepalived configuration has no fallback, so the local httpd on test1 does not take over:
[root@foundation1 lvs]# curl 172.25.1.100
curl: (7) Failed connect to 172.25.1.100:80; Connection refused
[root@foundation1 lvs]# curl 172.25.1.100
curl: (7) Failed connect to 172.25.1.100:80; Connection refused
[root@foundation1 lvs]# curl 172.25.1.100
curl: (7) Failed connect to 172.25.1.100:80; Connection refused