Nginx + Keepalived Dual-Machine Hot Standby (Master-Slave Mode)
Reference: http://www.cnblogs.com/kevingrace/p/6138185.html
Dual-machine high availability is generally implemented with a virtual IP (a floating IP), built on the IP-alias feature of Linux/Unix.
There are currently two dual-machine HA modes:
1. Master-slave: two front-end servers, one master and one hot standby. In normal operation the master holds a public virtual IP and provides the load-balancing service while the standby sits idle; when the master fails, the standby takes over the virtual IP and the service. Since the standby is wasted whenever the master is healthy, this scheme is not economical for sites with few servers.
2. Master-master: two front-end load balancers, each the backup for the other and both active, each simultaneously holding its own public virtual IP and serving traffic. When one fails, the other takes over the failed machine's virtual IP (and temporarily carries all requests). This scheme is economical and well suited to the current architecture.
This post records setting up the master-slave mode of an Nginx + keepalived high-availability load balancer.
keepalived can be regarded as the Linux implementation of the VRRP protocol. It has three main modules: core, check, and vrrp.
The core module is the heart of keepalived: it starts and maintains the main process and loads and parses the global configuration file.
The check module performs the health checks, covering all the configured check types.
The vrrp module implements the VRRP protocol itself.
1. Environment

OS: CentOS release 6.9 (Final), minimal install
web1: 172.16.12.223
web2: 172.16.12.224
vip: 172.16.12.226
svn: 172.16.12.225
2. Installation

Install nginx and keepalived on web1 and web2 (the installation is identical on both servers).
2.1 Install dependencies

yum clean all
yum -y update
yum -y install gcc-c++ gd libxml2-devel libjpeg-devel libpng-devel net-snmp-devel wget telnet vim zip unzip
yum -y install curl-devel libxslt-devel pcre-devel libjpeg libpng libcurl4-openssl-dev
yum -y install libcurl-devel libcurl freetype-config freetype freetype-devel unixODBC libxslt
yum -y install gcc automake autoconf libtool openssl-devel
yum -y install perl-devel perl-ExtUtils-Embed
yum -y install cmake ncurses-devel.x86_64 openldap-devel.x86_64 lrzsz openssh-clients gcc-g77 bison
yum -y install libmcrypt libmcrypt-devel mhash mhash-devel bzip2 bzip2-devel
yum -y install ntpdate rsync svn patch iptables iptables-services
yum -y install libevent libevent-devel cyrus-sasl cyrus-sasl-devel
yum -y install gd-devel libmemcached-devel memcached git libssl-devel libyaml-devel automake
yum -y groupinstall "Server Platform Development" "Development tools"
yum -y groupinstall "Development tools"
yum -y install gcc pcre-devel zlib-devel openssl-devel

2.2 Post-install tuning for CentOS 6
# Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
grep SELINUX=disabled /etc/selinux/config
setenforce 0
getenforce

cat >> /etc/sysctl.conf << EOF
#
##custom
#
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
#net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_synack_retries = 2
#net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
#net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65535
#net.ipv4.tcp_tw_len = 1
EOF

# Apply the settings
sysctl -p

cp /etc/security/limits.conf /etc/security/limits.conf.bak2017
cat >> /etc/security/limits.conf << EOF
#
###custom
#
*   soft   nofile   20480
*   hard   nofile   65535
*   soft   nproc    20480
*   hard   nproc    65535
EOF

2.3 Set the shell session timeout
# Append the following line to /etc/profile (1800 seconds; there is no timeout by default)
cp /etc/profile /etc/profile.bak2017
cat >> /etc/profile << EOF
export TMOUT=1800
EOF

2.4 Download the packages
(on both the master and slave load balancers)

[root@web1 ~]# cd /usr/local/src/
[root@web1 src]# wget http://nginx.org/download/nginx-1.9.7.tar.gz
[root@web1 src]# wget http://www.keepalived.org/software/keepalived-1.3.2.tar.gz

2.5 Install nginx
(on both the master and slave load balancers)

[root@web1 src]# tar -zxvf nginx-1.9.7.tar.gz
[root@web1 src]# cd nginx-1.9.7
# Create the www user; -M skips creating a home directory, -s sets the login shell
[root@web1 nginx-1.9.7]# useradd www -M -s /sbin/nologin
# Disable the debug build: comment out the CFLAGS line, around line 179
[root@web1 nginx-1.9.7]# vim auto/cc/gcc
    # debug
    # CFLAGS="$CFLAGS -g"
[root@web1 nginx-1.9.7]# ./configure --prefix=/usr/local/nginx --user=www --group=www --with-http_ssl_module --with-http_flv_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre
[root@web1 nginx-1.9.7]# make && make install

2.6 Install keepalived
(on both the master and slave load balancers)

[root@web1 nginx-1.9.7]# cd /usr/local/src/
[root@web1 src]# tar -zvxf keepalived-1.3.2.tar.gz
[root@web1 src]# cd keepalived-1.3.2
[root@web1 keepalived-1.3.2]# ./configure
[root@web1 keepalived-1.3.2]# make && make install
[root@web1 keepalived-1.3.2]# cp /usr/local/src/keepalived-1.3.2/keepalived/etc/init.d/keepalived /etc/rc.d/init.d/
[root@web1 keepalived-1.3.2]# cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/
[root@web1 keepalived-1.3.2]# mkdir /etc/keepalived
[root@web1 keepalived-1.3.2]# cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/
[root@web1 keepalived-1.3.2]# cp /usr/local/sbin/keepalived /usr/sbin/
[root@web1 keepalived-1.3.2]# echo "/usr/local/nginx/sbin/nginx" >> /etc/rc.local
[root@web1 keepalived-1.3.2]# echo "/etc/init.d/keepalived start" >> /etc/rc.local

3. Configure the services
3.1 Disable SELinux

First disable SELinux and configure the firewall (on both the master and slave load balancers):

[root@web1 keepalived-1.3.2]# cd /root/
[root@web1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@web1 ~]# grep SELINUX=disabled /etc/selinux/config
[root@web1 ~]# setenforce 0
3.2 Stop the firewall

[root@web1 ~]# /etc/init.d/iptables stop

3.3 Configure nginx
The nginx configuration is identical on the master and slave. The work is mainly in the http block of /usr/local/nginx/conf/nginx.conf; you can also set up a vhost virtual-host directory and put the load-balancer configuration in a file under it, such as vhosts/LB.conf.

In this setup:
multiple domains are handled with virtual hosts (server blocks under http);
different virtual directories of the same domain are handled with different location blocks inside each server;
the back-end servers are declared in upstream blocks in vhost/LB.conf and referenced with proxy_pass in the server or location blocks.

To implement the access plan above, LB.conf is configured as follows (the proxy_cache_path and proxy_temp_path lines enable nginx's caching):
[root@web1 ~]# vim /usr/local/nginx/conf/nginx.conf
user www;
worker_processes 8;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;

events {
    worker_connections 65535;
}

http {
    include mime.types;
    default_type application/octet-stream;
    charset utf-8;

    ######## access log format ########
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;

    ######## http settings ########
    sendfile on;
    #tcp_nopush on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    proxy_cache_path /var/www/cache levels=1:2 keys_zone=mycache:20m max_size=2048m inactive=60m;
    proxy_temp_path /var/www/cache/tmp;
    fastcgi_connect_timeout 3000;
    fastcgi_send_timeout 3000;
    fastcgi_read_timeout 3000;
    fastcgi_buffer_size 256k;
    fastcgi_buffers 8 256k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;
    fastcgi_intercept_errors on;
    #keepalive_timeout 0;
    #keepalive_timeout 65;
    #client_header_timeout 600s;
    client_body_timeout 600s;
    # client_max_body_size 50m;
    client_max_body_size 100m;     # maximum size of a single file a client may upload
    client_body_buffer_size 256k;  # buffer for client request bodies; they are buffered locally before being passed on
    #gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.1;
    gzip_comp_level 9;
    gzip_types text/plain application/x-javascript text/css application/xml text/javascript application/x-httpd-php;
    gzip_vary on;

    ## include vhosts
    include vhosts/*.conf;
}

# Create the matching directories
[root@web1 ~]# mkdir -p /usr/local/nginx/conf/vhosts
[root@web1 ~]# mkdir -p /var/www/cache
[root@web1 ~]# ulimit 65535

[root@web2 ~]# vim /usr/local/nginx/conf/vhosts/LB.conf
upstream LB-WWW {
    ip_hash;
    server 172.16.12.223:80 max_fails=3 fail_timeout=30s;  # max_fails: allowed failures before the server is marked down (default 1)
    server 172.16.12.224:80 max_fails=3 fail_timeout=30s;  # fail_timeout: how long requests are withheld from the server after max_fails failures
    server 172.16.12.225:80 max_fails=3 fail_timeout=30s;
}

upstream LB-OA {
    ip_hash;
    server 172.16.12.223:8080 max_fails=3 fail_timeout=30s;
    server 172.16.12.224:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    server_name localhost;
    access_log /usr/local/nginx/logs/dev-access.log main;
    error_log /usr/local/nginx/logs/dev-error.log;

    location /svn {
        proxy_pass http://172.16.12.226/svn/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 300;        # timeout for establishing the connection (handshake) with the backend
        proxy_send_timeout 300;           # time the backend has to finish sending all of its data
        proxy_read_timeout 600;           # how long to wait for the backend response once connected (request queued for processing)
        proxy_buffer_size 256k;           # buffer for the response headers from the backend, for nginx to process
        proxy_buffers 4 256k;             # number and size of buffers for a single response
        proxy_busy_buffers_size 256k;     # maximum buffers that may be busy under load
        proxy_temp_file_write_size 256k;  # size of the temporary files used for buffered proxy responses
        proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
        proxy_max_temp_file_size 128m;
        proxy_cache mycache;
        proxy_cache_valid 200 302 60m;
        proxy_cache_valid 404 1m;
    }

    location /submin {
        proxy_pass http://172.16.12.226/submin/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
        proxy_read_timeout 600;
        proxy_buffer_size 256k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
        proxy_temp_file_write_size 256k;
        proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
        proxy_max_temp_file_size 128m;
        proxy_cache mycache;
        proxy_cache_valid 200 302 60m;
        proxy_cache_valid 404 1m;
    }
}

server {
    listen 80;
    server_name localhost;
    access_log /usr/local/nginx/logs/www-access.log main;
    error_log /usr/local/nginx/logs/www-error.log;

    location / {
        proxy_pass http://LB-WWW;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
        proxy_read_timeout 600;
        proxy_buffer_size 256k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
        proxy_temp_file_write_size 256k;
        proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
        proxy_max_temp_file_size 128m;
        proxy_cache mycache;
        proxy_cache_valid 200 302 60m;
        proxy_cache_valid 404 1m;
    }
}

server {
    listen 80;
    server_name localhost;
    access_log /usr/local/nginx/logs/oa-access.log main;
    error_log /usr/local/nginx/logs/oa-error.log;

    location / {
        proxy_pass http://LB-OA;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
        proxy_read_timeout 600;
        proxy_buffer_size 256k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
        proxy_temp_file_write_size 256k;
        proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
        proxy_max_temp_file_size 128m;
        proxy_cache mycache;
        proxy_cache_valid 200 302 60m;
        proxy_cache_valid 404 1m;
    }
}
3.4 Prepare for verification

3.4.1 On the svn server
cat > /usr/local/nginx/conf/vhosts/svn.conf <<EOF
server {
    listen 80;
    server_name svn 172.16.12.225;
    access_log /usr/local/nginx/logs/svn-access.log main;
    error_log /usr/local/nginx/logs/svn-error.log;
    location / {
        root /var/www/html;
        index index.html index.php index.htm;
    }
}
EOF

[root@svn ~]# mkdir -p /var/www/html
[root@svn ~]# mkdir -p /var/www/html/submin
[root@svn ~]# mkdir -p /var/www/html/svn
[root@svn ~]# cat /var/www/html/svn/index.html
this is the page of svn/172.16.12.225
[root@svn ~]# cat /var/www/html/submin/index.html
this is the page of submin/172.16.12.225
[root@svn ~]# chown -R www.www /var/www/html/
[root@svn ~]# chmod -R 755 /var/www/html/
[root@svn ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.12.223 web1
172.16.12.224 web2
172.16.12.225 svn
[root@svn ~]# tail -4 /etc/rc.local
touch /var/lock/subsys/local
/etc/init.d/iptables stop
/usr/local/nginx/sbin/nginx
/etc/init.d/keepalived start

# Start nginx
[root@svn ~]# /usr/local/nginx/sbin/nginx
# Check the test pages
[root@svn local]# curl http://172.16.12.225/submin/
this is the page of submin/172.16.12.225
[root@svn local]# curl http://172.16.12.225/svn/
this is the page of svn/172.16.12.225

3.4.2 On web1
[root@web1 ~]# curl http://172.16.12.225/submin/
this is the page of submin/172.16.12.225
[root@web1 ~]# curl http://172.16.12.225/svn/
this is the page of svn/172.16.12.225

cat > /usr/local/nginx/conf/vhosts/web.conf <<EOF
server {
    listen 80;
    server_name web 172.16.12.223;
    access_log /usr/local/nginx/logs/web-access.log main;
    error_log /usr/local/nginx/logs/web-error.log;
    location / {
        root /var/www/html;
        index index.html index.php index.htm;
    }
}
EOF

[root@web1 ~]# mkdir -p /var/www/html
[root@web1 ~]# mkdir -p /var/www/html/web
[root@web1 ~]# cat /var/www/html/web/index.html
this is the page of web/172.16.12.223
[root@web1 ~]# chown -R www.www /var/www/html/
[root@web1 ~]# chmod -R 755 /var/www/html/
[root@web1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.12.223 web1
172.16.12.224 web2
172.16.12.225 svn
[root@web1 ~]# tail -4 /etc/rc.local
touch /var/lock/subsys/local
/etc/init.d/iptables stop
/usr/local/nginx/sbin/nginx
/etc/init.d/keepalived start
[root@web1 ~]# /usr/local/nginx/sbin/nginx
[root@web1 ~]# curl http://172.16.12.223/web/
this is the page of web/172.16.12.223

3.4.3 On web2
[root@web2 ~]# curl http://172.16.12.225/submin/
this is the page of submin/172.16.12.225
[root@web2 ~]# curl http://172.16.12.225/svn/
this is the page of svn/172.16.12.225

cat > /usr/local/nginx/conf/vhosts/web.conf <<EOF
server {
    listen 80;
    server_name web 172.16.12.224;
    access_log /usr/local/nginx/logs/web-access.log main;
    error_log /usr/local/nginx/logs/web-error.log;
    location / {
        root /var/www/html;
        index index.html index.php index.htm;
    }
}
EOF

[root@web2 ~]# mkdir -p /var/www/html
[root@web2 ~]# mkdir -p /var/www/html/web
[root@web2 ~]# cat /var/www/html/web/index.html
this is the page of web/172.16.12.224
[root@web2 ~]# chown -R www.www /var/www/html/
[root@web2 ~]# chmod -R 755 /var/www/html/
[root@web2 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.12.223 web1
172.16.12.224 web2
172.16.12.225 svn
[root@web2 ~]# tail -4 /etc/rc.local
touch /var/lock/subsys/local
/etc/init.d/iptables stop
/usr/local/nginx/sbin/nginx
/etc/init.d/keepalived start

# Start nginx
[root@web2 ~]# /usr/local/nginx/sbin/nginx
# Check the test page
[root@web2 local]# curl http://172.16.12.224/web/
this is the page of web/172.16.12.224

3.4.4 Browser test
4. Configure keepalived

4.1 On web1
[root@web1 ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@web1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

# Global definitions
global_defs {
#   notification_email {               # mailboxes keepalived notifies on events such as a failover;
#       ops@wangshibo.cn               # one address per line; requires a working local sendmail service
#       tech@wangshibo.cn
#   }
#   notification_email_from ops@wangshibo.cn   # sender address for failover notification mail
#   smtp_server 127.0.0.1              # SMTP server used to send the mail
#   smtp_connect_timeout 30            # timeout for connecting to the SMTP server
    router_id master-node              # identifier of this keepalived node, usually the hostname;
                                       # it appears in the subject of notification mail
}

vrrp_script chk_http_port {            # checks whether nginx is running; many options exist, e.g. a process check or a script
    script "/opt/chk_nginx.sh"         # here a script does the monitoring
    interval 2                         # run the script every 2 seconds
    weight -5                          # priority change on failure: a non-zero exit lowers the priority by 5
    fall 2                             # two consecutive failures are needed before the check counts as failed
    rise 1                             # a single success restores the check (the priority is not raised)
}

vrrp_instance VI_1 {
    # Within one virtual_router_id, the node with the highest priority (0-255) becomes MASTER
    # and holds the VIP; when it fails, the next-highest priority takes over.
    state MASTER          # initial role: MASTER here, BACKUP on the standby. This only sets the
                          # initial state; the real role is decided by the priority election. A node
                          # configured MASTER but advertising a lower priority still loses the election.
    interface eth1        # interface the HA instance is bound to; the VIP is added on this NIC
#   mcast_src_ip 103.110.98.14   # source address for the VRRP advertisements (the heartbeat);
                          # choose a stable interface. Defaults to the address of the interface above.
    virtual_router_id 226 # virtual router ID; must be identical on MASTER and BACKUP of the same instance
    priority 101          # higher wins; the MASTER's priority must exceed the BACKUP's
    advert_int 1          # interval, in seconds, between MASTER/BACKUP sync advertisements
    authentication {      # must be identical on master and backup
        auth_type PASS    # PASS or AH
        auth_pass 1111    # nodes in the same vrrp_instance must share this password to communicate
    }
    virtual_ipaddress {   # the VRRP HA virtual address; add one line per extra VIP
        172.16.12.226
    }
    track_script {        # the tracked checks. Note: do not place this block immediately after the
                          # vrrp_script block (a pitfall hit during testing) or the nginx monitoring fails!
        chk_http_port     # references the vrrp_script above by name; it runs periodically, adjusts
                          # the priority, and ultimately triggers the master/backup switch
    }
}

4.2 On web2
[root@web2 ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@web2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
#   notification_email {
#       ops@wangshibo.cn
#       tech@wangshibo.cn
#   }
#   notification_email_from ops@wangshibo.cn
#   smtp_server 127.0.0.1
#   smtp_connect_timeout 30
    router_id slave-node
}

vrrp_script chk_http_port {
    script "/opt/chk_nginx.sh"
    interval 2
    weight -5
    fall 2
    rise 1
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth1
#   mcast_src_ip 103.110.98.24
    virtual_router_id 226
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.12.226
    }
    track_script {
        chk_http_port
    }
}

4.3 Monitoring notes
Making keepalived monitor nginx:

1) With the configuration above, if keepalived on the master stops, the slave automatically takes over the VIP and serves traffic; once keepalived on the master recovers, the master reclaims the VIP. That alone is not what we need: we also want the switch to happen when nginx itself stops serving.
2) keepalived supports tracking scripts. A script can monitor nginx's health and, when nginx is unhealthy, run a series of recovery steps; if nginx still cannot be restored, the script kills keepalived so the slave can take over the service.

How to check nginx's health

The simplest check is for the nginx process; a more reliable check is the nginx port; the most reliable is fetching several URLs and verifying that pages actually come back.
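As a sketch of that most-reliable option, the check below is my own illustration rather than part of the original setup: CHECK_URL, the 3-second timeout, and the retry count are placeholder assumptions to adapt to your site.

```shell
#!/bin/bash
# Hypothetical URL-based nginx health check -- CHECK_URL, the 3s timeout,
# and MAX_RETRIES are assumptions; adjust them to your environment.
CHECK_URL="http://127.0.0.1/"
MAX_RETRIES=2

check_once() {
    # -s: silent, -f: treat HTTP >= 400 as failure, --max-time: cap the wait
    curl -sf --max-time 3 -o /dev/null "$1"
}

check_url() {
    local i
    for ((i = 1; i <= MAX_RETRIES; i++)); do
        check_once "$CHECK_URL" && return 0   # got a page back: healthy
        sleep 1
    done
    return 1                                  # every attempt failed: unhealthy
}
```

A keepalived vrrp_script could call such a check and treat a non-zero exit as a failed probe, in the same way as the process-based /opt/chk_nginx.sh used below.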
Note: the script referenced in the vrrp_script block of keepalived.conf is generally written in one of two styles:

1) the script's exit status adjusts the priority; keepalived keeps sending advertisements, and the backup compares priorities to decide whether to take over. This fits a direct nginx process check.
2) the script detects a failure and kills the keepalived process itself; the backup stops receiving advertisements and claims the VIP. This fits an nginx port check.

Of these, "killall -0 nginx" belongs to the first style and "/opt/chk_nginx.sh" to the second. I prefer the shell-script approach: exit 1 on failure and 0 on success, and let keepalived decide whether to yield the VIP through the dynamically adjusted vrrp_instance priority:
if the script exits 0 and weight is greater than 0, the priority is increased accordingly;
if the script exits non-zero and weight is less than 0, the priority is decreased accordingly;
otherwise the priority stays at the value configured by priority.
Tips:

the priority does not keep rising or falling without bound;
you can write several check scripts, each with its own weight (just list them in the configuration);
however the priority is adjusted, the effective value stays within [1, 254]; it never drops to 0 or below, and never reaches 255 or above;
configuring nopreempt in the MASTER's vrrp_instance keeps a recovered node from reclaiming the VIP even when its priority is higher, which avoids a needless second switch in normal operation.
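To make those rules concrete, here is a small illustrative sketch (my own model of the behaviour described above, not keepalived code) using this post's numbers: priority 101 and weight -5 on the master, priority 99 on the backup.

```shell
#!/bin/bash
# Models the priority rules above. prio: configured priority;
# weight: the vrrp_script weight; script_ok: 0 if the check script
# succeeded, non-zero if it failed.
effective_priority() {
    local prio=$1 weight=$2 script_ok=$3
    local result=$prio
    if [ "$script_ok" -eq 0 ] && [ "$weight" -gt 0 ]; then
        result=$((prio + weight))        # success + positive weight: raise
    elif [ "$script_ok" -ne 0 ] && [ "$weight" -lt 0 ]; then
        result=$((prio + weight))        # failure + negative weight: lower
    fi
    if [ "$result" -lt 1 ]; then result=1; fi      # clamp to [1, 254]
    if [ "$result" -gt 254 ]; then result=254; fi
    echo "$result"
}

effective_priority 101 -5 1   # master with a failed check: prints 96
effective_priority 99 -5 0    # healthy backup: prints 99 -- now higher
```

With the master's check failing, 101 - 5 = 96 drops below the backup's 99, which is exactly how the chk_http_port script drives the failover.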
With this, a script can track the state of the business process and adjust the priority to drive the master/backup switch.

Note: the default keepalived.conf also contains virtual_server and real_server blocks; they are meant for LVS and are not used here.
How to attempt recovery

keepalived itself only checks whether keepalived is alive on each node and moves the VIP accordingly; a failure of the local nginx alone does not move the VIP.

So we write a script that checks the local nginx. If nginx is down, the script restarts it, waits a couple of seconds, and checks again; if nginx is still down, it gives up and stops keepalived, at which point the other node takes over the VIP.

The monitoring script follows exactly this strategy. Note that it only works while keepalived is running: if keepalived has already been shut down, nothing will restart nginx once it dies.

The script checks whether nginx is running, tries to restart it when the process is gone, and stops keepalived if the restart fails, so that another machine can take over.
4.4 The monitoring script

The monitoring script (required on both the master and the slave):

[root@web1 ~]# cat /opt/chk_nginx.sh
#!/bin/bash
counter=$(ps -C nginx --no-heading|wc -l)
if [ "${counter}" = "0" ]; then
    /usr/local/nginx/sbin/nginx
    sleep 2
    counter=$(ps -C nginx --no-heading|wc -l)
    if [ "${counter}" = "0" ]; then
        /etc/init.d/keepalived stop
    fi
fi
[root@web1 ~]# chmod 755 /opt/chk_nginx.sh
[root@web1 ~]# sh /opt/chk_nginx.sh

The identical script goes on web2:

[root@web2 ~]# cat /opt/chk_nginx.sh
#!/bin/bash
counter=$(ps -C nginx --no-heading|wc -l)
if [ "${counter}" = "0" ]; then
    /usr/local/nginx/sbin/nginx
    sleep 2
    counter=$(ps -C nginx --no-heading|wc -l)
    if [ "${counter}" = "0" ]; then
        /etc/init.d/keepalived stop
    fi
fi
[root@web2 ~]# chmod 755 /opt/chk_nginx.sh
[root@web2 ~]# sh /opt/chk_nginx.sh

4.5 Points to consider
With this architecture:

1) While the master is up, the master holds the VIP and nginx runs on the master.
2) If the master goes down, the slave claims the VIP and nginx on the slave serves the traffic.
3) If only nginx on the master dies, nginx is restarted automatically; if the restart fails, keepalived is shut down and the VIP resource moves to the slave.
4) The health of the back-end servers still needs to be checked.
5) nginx runs on both the master and the slave; whichever node's keepalived stops, the VIP floats to the node where keepalived is still running. If you also want the VIP to move when nginx alone dies, you must drive that with a script or with shell commands in the configuration file (here, nginx is restarted automatically after it stops, and if the restart fails keepalived is forcibly stopped, which moves the VIP to the other machine).
5. Final verification

Final verification (resolve the configured back-end application domains to the VIP): stopping keepalived or nginx on the master should make the VIP float to the slave automatically.

Verifying a keepalived failure:

1) Start nginx and keepalived on the master and then the slave, and make sure both services are running:
[root@web2 ~]# /usr/local/nginx/sbin/nginx -s stop
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
[root@web2 ~]# /etc/init.d/keepalived stop
Stopping keepalived: [FAILED]

[root@web1 ~]# /usr/local/nginx/sbin/nginx -s stop
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
[root@web1 ~]# /etc/init.d/keepalived stop
Stopping keepalived: [FAILED]
[root@web1 ~]# /usr/local/nginx/sbin/nginx
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
[root@web1 ~]# /etc/init.d/keepalived start
Starting keepalived: [ OK ]
2) Check on the master that the virtual IP is bound:
[root@web1 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:ca:99:56 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.223/24 brd 10.0.2.255 scope global eth0
    inet 172.16.12.226/32 scope global eth0
    inet6 fe80::a00:27ff:feca:9956/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:b3:a9:36 brd ff:ff:ff:ff:ff:ff
    inet 172.16.12.223/24 brd 172.16.12.255 scope global eth1
    inet6 fe80::a00:27ff:feb3:a936/64 scope link
       valid_lft forever preferred_lft forever

[root@web2 ~]# /usr/local/nginx/sbin/nginx
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
[root@web2 ~]# /etc/init.d/keepalived start
Starting keepalived: [ OK ]
[root@web2 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:9a:0b:97 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.224/24 brd 10.0.2.255 scope global eth0
    inet6 fe80::a00:27ff:fe9a:b97/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:63:26:1a brd ff:ff:ff:ff:ff:ff
    inet 172.16.12.224/24 brd 172.16.12.255 scope global eth1
    inet6 fe80::a00:27ff:fe63:261a/64 scope link
       valid_lft forever preferred_lft forever
5.1 Update the site configuration

[root@web1 ~]# cat /usr/local/nginx/conf/vhosts/web.conf
server {
    listen 80;
    server_name localhost 172.16.12.223 172.16.12.226;
    access_log /usr/local/nginx/logs/web-access.log main;
    error_log /usr/local/nginx/logs/web-error.log;
    location / {
        root /var/www/html;
        index index.html index.php index.htm;
    }
}

[root@web2 ~]# cat /usr/local/nginx/conf/vhosts/web.conf
server {
    listen 80;
    server_name localhost 172.16.12.224 172.16.12.226;
    access_log /usr/local/nginx/logs/web-access.log main;
    error_log /usr/local/nginx/logs/web-error.log;
    location / {
        root /var/www/html;
        index index.html index.php index.htm;
    }
}

5.2 Access check
5.3 Stop keepalived on the master
[root@web1 ~]# /etc/init.d/keepalived stop
Stopping keepalived: [ OK ]
[root@web1 ~]# tail -f /var/log/messages
Dec 14 13:32:12 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:12 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:12 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:12 web1 Keepalived_healthcheckers[7958]: Netlink reflector reports IP 172.16.12.226 added
Dec 14 13:32:17 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:17 web1 Keepalived_vrrp[7959]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth0 for 172.16.12.226
Dec 14 13:32:17 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:17 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:17 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:17 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:43:51 web1 Keepalived[7956]: Stopping
Dec 14 13:43:51 web1 Keepalived_vrrp[7959]: VRRP_Instance(VI_1) sent 0 priority
Dec 14 13:43:51 web1 Keepalived_vrrp[7959]: VRRP_Instance(VI_1) removing protocol VIPs.
Dec 14 13:43:51 web1 Keepalived_healthcheckers[7958]: Netlink reflector reports IP 172.16.12.226 removed
Dec 14 13:43:51 web1 Keepalived_healthcheckers[7958]: Stopped
Dec 14 13:43:52 web1 Keepalived_vrrp[7959]: Stopped
Dec 14 13:43:52 web1 Keepalived[7956]: Stopped Keepalived v1.3.2 (12/14,2017)

[root@web1 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:ca:99:56 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.223/24 brd 10.0.2.255 scope global eth0
    inet6 fe80::a00:27ff:feca:9956/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:b3:a9:36 brd ff:ff:ff:ff:ff:ff
    inet 172.16.12.223/24 brd 172.16.12.255 scope global eth1
    inet6 fe80::a00:27ff:feb3:a936/64 scope link
       valid_lft forever preferred_lft forever

5.4 Watch the takeover on web2
[root@web2 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:9a:0b:97 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.224/24 brd 10.0.2.255 scope global eth0
    inet 172.16.12.226/32 scope global eth0
    inet6 fe80::a00:27ff:fe9a:b97/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:63:26:1a brd ff:ff:ff:ff:ff:ff
    inet 172.16.12.224/24 brd 172.16.12.255 scope global eth1
    inet6 fe80::a00:27ff:fe63:261a/64 scope link
       valid_lft forever preferred_lft forever

[root@web2 ~]# tail -f /var/log/messages
Dec 14 13:47:33 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:33 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:33 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:33 web2 Keepalived_healthcheckers[8186]: Netlink reflector reports IP 172.16.12.226 added
Dec 14 13:47:38 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:38 web2 Keepalived_vrrp[8187]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth0 for 172.16.12.226
Dec 14 13:47:38 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:38 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:38 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:38 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226

5.5 Verify in the browser
The page before the switch:

The page after the switch:

The switch is complete.
Reposted from: https://www.cnblogs.com/bjx2020/p/8057776.html