Keepalived with LVS-DR
Author: JevonWei
Copyright notice: original work
Keepalived in practice: LVS-DR
Goal: build an LVS-DR architecture and make the LVS tier highly available by running a Keepalived cluster on the directors. keepalive-A runs on Director-A and keepalive-B on Director-B, while LVS-RS1 and LVS-RS2 are the two back-end web servers; the Keepalived cluster on the directors provides the high availability.
Network topology diagram
Lab environment (the keepalived nodes also serve as the LVS director nodes)
```
keepalive-A (Director-A)  172.16.253.108
keepalive-B (Director-B)  172.16.253.105
LVS-RS1                   172.16.250.127
LVS-RS2                   172.16.253.193
VIP                       172.16.253.150
client                    172.16.253.177
```
LVS-RS web cluster
To make the results easier to observe, the web pages on RS1 and RS2 are deliberately different, so responses from the two servers can be told apart at a glance.
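On each RS, the lvs_dr.sh script used below configures the VIP on lo:0 with the legacy ifconfig/route tools and sets the arp_ignore/arp_announce kernel parameters. The same idea expressed with the iproute2 `ip` command might look like the following sketch (`vip_ctl` is a hypothetical helper name, not from the original; setting DRYRUN=1 prints each command instead of executing it, since the real commands require root):

```shell
#!/bin/bash
# Sketch: the RS-side VIP/ARP setup with iproute2 instead of ifconfig/route.
# vip_ctl is a hypothetical helper; DRYRUN=1 prints the commands only.
vip=172.16.253.150

run() {
    if [ "${DRYRUN:-0}" = "1" ]; then
        echo "$*"          # dry run: show what would be executed
    else
        "$@"               # real run: execute it (needs root)
    fi
}

vip_ctl() {
    case $1 in
    start)
        # suppress ARP replies/announcements for the VIP, then add it on lo
        run sysctl -w net.ipv4.conf.all.arp_ignore=1
        run sysctl -w net.ipv4.conf.all.arp_announce=2
        run ip addr add "$vip/32" dev lo
        ;;
    stop)
        run ip addr del "$vip/32" dev lo
        run sysctl -w net.ipv4.conf.all.arp_ignore=0
        run sysctl -w net.ipv4.conf.all.arp_announce=0
        ;;
    *)
        echo "Usage: vip_ctl start|stop" >&2
        return 1
        ;;
    esac
}

DRYRUN=1 vip_ctl start
```

The original script additionally sets the conf/lo copies of the two sysctls; they would be added with two more `run sysctl` lines in the same pattern.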
LVS-RS1
```
[root@LVS-RS1 ~]# systemctl restart chronyd    # keep time in sync across the servers
[root@LVS-RS1 ~]# iptables -F
[root@LVS-RS1 ~]# setenforce 0
[root@LVS-RS1 ~]# yum -y install nginx
[root@LVS-RS1 ~]# vim /usr/share/nginx/html/index.html
<h1> Web RS1 </h1>
[root@LVS-RS1 ~]# systemctl start nginx
```
Adjust the kernel parameters and add the VIP address:
```
[root@LVS-RS1 ~]# vim lvs_dr.sh
#!/bin/bash
#
vip=172.16.253.150
mask=255.255.255.255
iface="lo:0"

case $1 in
start)
    ifconfig $iface $vip netmask $mask broadcast $vip up
    route add -host $vip dev $iface
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
stop)
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig $iface down
    ;;
*)
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac
[root@LVS-RS1 ~]# bash lvs_dr.sh start
[root@LVS-RS1 ~]# ifconfig
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 172.16.253.150  netmask 255.255.255.255
        loop  txqueuelen 1  (Local Loopback)
```
LVS-RS2
```
[root@LVS-RS2 ~]# systemctl restart chronyd    # keep time in sync across the servers
[root@LVS-RS2 ~]# iptables -F
[root@LVS-RS2 ~]# setenforce 0
[root@LVS-RS2 ~]# yum -y install nginx
[root@LVS-RS2 ~]# vim /usr/share/nginx/html/index.html
<h1> Web RS2 </h1>
[root@LVS-RS2 ~]# systemctl start nginx
```
Adjust the kernel parameters and add the VIP address (same script as on RS1):
```
[root@LVS-RS2 ~]# vim lvs_dr.sh
#!/bin/bash
#
vip=172.16.253.150
mask=255.255.255.255
iface="lo:0"

case $1 in
start)
    ifconfig $iface $vip netmask $mask broadcast $vip up
    route add -host $vip dev $iface
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
stop)
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig $iface down
    ;;
*)
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac
[root@LVS-RS2 ~]# bash lvs_dr.sh start
[root@LVS-RS2 ~]# ifconfig
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 172.16.253.150  netmask 255.255.255.255
        loop  txqueuelen 1  (Local Loopback)
```
Keepalived cluster
Setting up the director nodes
keepalive-A
```
[root@keepaliveA ~]# systemctl restart chronyd    # keep time in sync across the servers
[root@keepaliveA ~]# yum -y install ipvsadm
```
keepalive-B
```
[root@keepaliveB ~]# systemctl restart chronyd    # keep time in sync across the servers
[root@keepaliveB ~]# yum -y install ipvsadm
```
Configure a web sorry server on the keepalived nodes
keepalive-A
```
[root@keepaliveA ~]# yum -y install nginx
[root@keepaliveA ~]# vim /usr/share/nginx/html/index.html
<h1> sorry from Director-A(keepalive-A) </h1>
[root@keepaliveA ~]# systemctl start nginx
```
keepalive-B
```
[root@keepalive-B ~]# yum -y install nginx
[root@keepalive-B ~]# vim /usr/share/nginx/html/index.html
<h1> sorry from Director-B(keepalive-B) </h1>
[root@keepaliveB ~]# systemctl start nginx
```
Configuring keepalived on keepalive-A
keepalive-A
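In addition to the failover configuration shown below, keepalived can run a notification script whenever the VRRP instance changes state, via notify_master / notify_backup / notify_fault statements in the vrrp_instance block (not used in this lab). A minimal sketch of such a script, assuming a hypothetical path /etc/keepalived/notify.sh; the mail step is commented out so the sketch stays self-contained:

```shell
#!/bin/bash
# Sketch of a VRRP state-change notification script (hypothetical path
# /etc/keepalived/notify.sh). keepalived would call it with the new
# state as the first argument.
contact='root@localhost'

notify() {
    local state=$1
    local msg="$(hostname) transitioned to ${state} at $(date +'%F %T')"
    echo "$msg"
    # In a real deployment the message would be mailed, e.g.:
    # echo "$msg" | mail -s "$msg" "$contact"
}

state=${1:-backup}    # default so the sketch also runs standalone
case $state in
master|backup|fault)
    notify "$state"
    ;;
*)
    echo "Usage: $(basename "$0") {master|backup|fault}" >&2
    exit 1
    ;;
esac
```

In the vrrp_instance block this would be wired up with, for example, `notify_master "/etc/keepalived/notify.sh master"` plus matching notify_backup and notify_fault lines.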
```
[root@keepalive-A ~]# iptables -F
[root@keepalive-A ~]# yum -y install keepalived
[root@keepaliveA ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {                # mail notification settings
        jevon@danran.com                # recipient address
    }
    notification_email_from ka_admin@danran.com   # sender address
    smtp_server 127.0.0.1               # mail server
    smtp_connect_timeout 30             # connect timeout
    router_id keepaliveA                # router ID, user-defined
    vrrp_mcast_group4 224.103.5.5       # VRRP multicast group; default is 224.0.0.18
}
vrrp_instance VI_A {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass qr8hQHuL
    }
    virtual_ipaddress {
        172.16.253.150/32 dev ens33
    }
}
virtual_server 172.16.253.150 80 {
    delay_loop 6                        # health-check polling interval
    lb_algo rr                          # scheduling algorithm
    lb_kind DR                          # cluster type
    protocol TCP                        # service protocol; only TCP is supported
    sorry_server 127.0.0.1 80           # sorry server: the web page served by this host
    real_server 172.16.250.127 80 {
        weight 1                        # weight
        HTTP_GET {                      # application-layer check (HTTP_GET for plain HTTP)
            url {
                path /                  # URL to monitor
                #digest ff20ad2481f97b1754ef3e12ecd3a9cc   # checksum of a healthy response body
                status_code 200         # status code of a healthy response
            }
            connect_timeout 3           # connection timeout
            nb_get_retry 3              # retry count
            delay_before_retry 1        # delay before each retry
        }
    }
    real_server 172.16.253.193 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                #digest ff20ad2481f97b1754ef3e12ecd3a9cc
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
[root@keepaliveA ~]# systemctl start keepalived
[root@keepaliveA ~]# ip a l
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:75:dc:3c brd ff:ff:ff:ff:ff:ff
    inet 172.16.253.150/32 scope global ens33
       valid_lft forever preferred_lft forever
[root@keepaliveA ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.253.150:80 rr
  -> 172.16.250.127:80            Route   1      0          0
  -> 172.16.253.193:80            Route   1      0          0
```
Configuring keepalived on keepalive-B
keepalive-B
```
[root@keepalive-B ~]# iptables -F
[root@keepalive-B ~]# yum -y install keepalived
[root@keepaliveB ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {                # mail notification settings
        jevon@danran.com                # recipient address
    }
    notification_email_from ka_admin@danran.com   # sender address
    smtp_server 127.0.0.1               # mail server
    smtp_connect_timeout 30             # connect timeout
    router_id keepaliveB                # router ID, user-defined
    vrrp_mcast_group4 224.103.5.5       # VRRP multicast group; default is 224.0.0.18
}
vrrp_instance VI_A {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass qr8hQHuL
    }
    virtual_ipaddress {
        172.16.253.150/32 dev ens33
    }
}
virtual_server 172.16.253.150 80 {
    delay_loop 6                        # health-check polling interval
    lb_algo rr                          # scheduling algorithm
    lb_kind DR                          # cluster type
    protocol TCP                        # service protocol; only TCP is supported
    sorry_server 127.0.0.1 80           # sorry server: the web page served by this host
    real_server 172.16.250.127 80 {
        weight 1                        # weight
        HTTP_GET {                      # application-layer check (HTTP_GET for plain HTTP)
            url {
                path /                  # URL to monitor
                #digest ff20ad2481f97b1754ef3e12ecd3a9cc   # checksum of a healthy response body
                status_code 200         # status code of a healthy response
            }
            connect_timeout 3           # connection timeout
            nb_get_retry 3              # retry count
            delay_before_retry 1        # delay before each retry
        }
    }
    real_server 172.16.253.193 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                #digest ff20ad2481f97b1754ef3e12ecd3a9cc
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
[root@keepaliveB ~]# systemctl start keepalived
[root@keepalive-B ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.253.150:http rr
  -> 172.16.250.127:http          Route   1      0          0
  -> 172.16.253.193:http          Route   1      0          0
```
Access tests
Client test
```
[root@client ~]# for i in {1..10};do curl http://172.16.253.150;done
<h1> Web RS1 </h1>
<h1> Web RS2 </h1>
<h1> Web RS1 </h1>
<h1> Web RS2 </h1>
<h1> Web RS1 </h1>
```
When keepalive-A fails
```
[root@keepaliveA ~]# systemctl stop keepalived
```
keepalive-B automatically becomes the MASTER node, the LVS director role fails over to keepalive-B, and the web services on LVS-RS1 and LVS-RS2 keep working normally.
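A quick way to confirm the failover is to check which director currently holds the VIP. A small sketch of such a check (the `ip addr` output is hard-coded sample data here so it runs anywhere; on a real director it would come from `ip -4 addr show dev ens33`):

```shell
#!/bin/bash
# Determine whether this director currently holds the VIP.
# Sample data; on a real node: addr_out=$(ip -4 addr show dev ens33)
vip=172.16.253.150
addr_out='2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 172.16.253.105/16 brd 172.16.255.255 scope global ens33
    inet 172.16.253.150/32 scope global ens33'

holds_vip() {
    # succeed iff the VIP appears among the interface addresses
    printf '%s\n' "$addr_out" | grep -q "inet ${vip}/"
}

if holds_vip; then
    echo "MASTER: this node holds $vip"
else
    echo "BACKUP: VIP is elsewhere"
fi
```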
Client access test
```
[root@client ~]# for i in {1..10};do curl http://172.16.253.150;done
<h1> Web RS2 </h1>
<h1> Web RS1 </h1>
<h1> Web RS2 </h1>
<h1> Web RS1 </h1>
<h1> Web RS2 </h1>
```
When keepalive-A recovers, it becomes the MASTER node again
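keepalive-A reclaims the MASTER role because VRRP preemption is enabled by default: as soon as the higher-priority node returns, it takes the VIP back, causing a second failover. If that failback churn is unwanted, keepalived supports a `nopreempt` option. A sketch of how the instance on the preferred node would change (both nodes must then declare `state BACKUP`, and the priorities still decide the initial election); this is not used in the lab below, which shows the default preempting behavior:

```
vrrp_instance VI_A {
    state BACKUP        # nopreempt requires state BACKUP, even on the preferred node
    nopreempt           # a recovering higher-priority node no longer takes over
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    ...
}
```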
```
[root@keepaliveA ~]# systemctl start keepalived
[root@keepaliveA ~]# ip a l
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:75:dc:3c brd ff:ff:ff:ff:ff:ff
    inet 172.16.253.150/32 scope global ens33
       valid_lft forever preferred_lft forever
```
When the web service on LVS-RS1 fails
```
[root@LVS-RS1 ~]# iptables -A INPUT -p tcp --dport 80 -j REJECT
```
Client access
```
[root@client ~]# for i in {1..10};do curl http://172.16.253.150;done
<h1> Web RS2 </h1>
<h1> Web RS2 </h1>
<h1> Web RS2 </h1>
<h1> Web RS2 </h1>
```
When the web services on both LVS-RS1 and LVS-RS2 fail
```
[root@LVS-RS1 ~]# iptables -A INPUT -p tcp --dport 80 -j REJECT
[root@LVS-RS2 ~]# iptables -A INPUT -p tcp --dport 80 -j REJECT
```
The client now reaches the sorry server, which is keepalive-A
```
[root@client ~]# for i in {1..10};do curl http://172.16.253.150;done
<h1> sorry from Director-A(keepalive-A) </h1>
<h1> sorry from Director-A(keepalive-A) </h1>
<h1> sorry from Director-A(keepalive-A) </h1>
<h1> sorry from Director-A(keepalive-A) </h1>
<h1> sorry from Director-A(keepalive-A) </h1>
```
When keepalive-A fails
```
[root@keepaliveA ~]# systemctl stop keepalived.service
```
The client now gets the sorry page from keepalive-B
```
[root@client ~]# for i in {1..10};do curl http://172.16.253.150;done
<h1> sorry from Director-B(keepalive-B) </h1>
<h1> sorry from Director-B(keepalive-B) </h1>
<h1> sorry from Director-B(keepalive-B) </h1>
<h1> sorry from Director-B(keepalive-B) </h1>
<h1> sorry from Director-B(keepalive-B) </h1>
```
After the web service on LVS-RS1 recovers
```
[root@LVS-RS1 ~]# iptables -F
```
Client access test
```
[root@client ~]# for i in {1..10};do curl http://172.16.253.150;done
<h1> Web RS1 </h1>
<h1> Web RS1 </h1>
<h1> Web RS1 </h1>
<h1> Web RS1 </h1>
<h1> Web RS1 </h1>
```
After the web services on both LVS-RS1 and LVS-RS2 recover
```
[root@LVS-RS1 ~]# iptables -F
[root@LVS-RS2 ~]# iptables -F
```
Client access test
```
[root@client ~]# for i in {1..10};do curl http://172.16.253.150;done
<h1> Web RS2 </h1>
<h1> Web RS1 </h1>
<h1> Web RS2 </h1>
<h1> Web RS1 </h1>
<h1> Web RS2 </h1>
```
Saving and reloading rules
Save: saving to /etc/sysconfig/ipvsadm is recommended
```
ipvsadm-save > /PATH/TO/IPVSADM_FILE
ipvsadm -S > /PATH/TO/IPVSADM_FILE
systemctl stop ipvsadm.service
```
Reload:
```
ipvsadm-restore < /PATH/FROM/IPVSADM_FILE
ipvsadm -R < /PATH/FROM/IPVSADM_FILE
systemctl restart ipvsadm.service
```
Pointing DNS name resolution at the keepalived nodes
Obtain the checksum of the web home page content
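The genhash command below prints the MD5 digest of the page body; that value is what the commented-out digest line in the HTTP_GET health check expects. With digest enabled, keepalived treats an RS as healthy only when the page content matches, which is stricter than checking status_code alone. A sketch of the resulting check block (using the placeholder hash from the configuration above, not a freshly generated value):

```
HTTP_GET {
    url {
        path /
        digest ff20ad2481f97b1754ef3e12ecd3a9cc   # value printed by genhash
    }
    connect_timeout 3
}
```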
```
[root@keepaliveA ~]# genhash -s 172.16.250.127 -p 80 -u /
```
Reprinted from: https://www.cnblogs.com/JevonWei/p/7482483.html