Part 3: LNMP cluster deployment
Continuing from the previous post: http://linuxops.blog.51cto.com/2238445/899637
For background, please refer to my earlier articles:
http://linuxops.blog.51cto.com/2238445/712035
http://linuxops.blog.51cto.com/2238445/701590
Two points to note (a preview of the web-side mount follows these notes):
1. The web data here is shared via the NFS mount introduced later in this series.
2. The database and the web tier run on separate servers.
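As a preview of how the web tier consumes that shared storage: once the NFS high-availability setup from Part 5 is running, each web node simply mounts the share through the cluster's virtual IP. A minimal sketch -- the VIP 192.168.8.62 and the /data export come from Part 5; the local mount point is an arbitrary choice:

# On each web node:
mkdir -p /data/www
mount -t nfs 192.168.8.62:/data /data/www

The web nodes never need to know which physical NFS server is currently active.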
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Part 4: MySQL master-slave deployment

MySQL master-slave replication is fairly straightforward, so I won't go into great depth.

1. The master and the slave should ideally run the same MySQL version; the slave's version may be newer than the master's.

mysql> select version();
+------------+
| version()  |
+------------+
| 5.5.12-log |
+------------+
1 row in set (0.00 sec)

I am using 5.5.12 here.

2. On the master, create a replication account for the slave:

mysql> grant replication slave,replication client on *.* to rep@"192.168.8.41" identified by "rep";

3. Lock the tables with FLUSH TABLES WITH READ LOCK:

mysql> FLUSH TABLES WITH READ LOCK;

4. Keep that client session running so the read lock stays in effect (the lock is released if the client exits), then copy the master's data directory across.

On the master:

shell> tar zcf /tmp/mysql.tgz /data/mysql/data
shell> scp /tmp/mysql.tgz 192.168.8.41:/tmp/

On the slave:

shell> tar zxf /tmp/mysql.tgz -C /

(The archive was created from an absolute path, so tar stripped the leading "/" from member names; extracting with -C / restores the data to /data/mysql/data.)

Note: if the master has no data yet, steps 3 and 4 are unnecessary.

Read the master's current binary log name (File) and offset (Position), and write them down:

mysql> SHOW MASTER STATUS;
+---------------+----------+--------------+------------------+
| File          | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+---------------+----------+--------------+------------------+
| binlog.000011 |      349 |              |                  |
+---------------+----------+--------------+------------------+
1 row in set (0.03 sec)

Once you have the snapshot and have recorded the log name and offset (POS), re-enable writes on the master:

mysql> UNLOCK TABLES;

5. Make sure the [mysqld] section of the master's my.cnf enables binary logging and sets a server ID:

[mysqld]
log_bin=mysql-bin
server-id=1

6. Stop the MySQL server destined to be the slave and add the following to its my.cnf:

[mysqld]
replicate-ignore-db = mysql
replicate-ignore-db = test
replicate-ignore-db = information_schema
server-id=2

7. If you took a binary backup of the master's data, copy it into the slave's data directory before starting the slave, and make sure file and directory permissions are correct: the user MySQL runs as must be able to read and write the files, just as on the master.

8. (Optional) Start the slave with --skip-slave-start so that it does not immediately try to connect to the master.

9. On the slave, point it at the master:

mysql> change master to MASTER_HOST='192.168.8.40', MASTER_USER='rep', MASTER_PASSWORD='rep', MASTER_LOG_FILE='binlog.000011', MASTER_LOG_POS=349;

10. Start the slave threads:

mysql> START SLAVE;

11. Verify the deployment:

mysql> SHOW slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.8.40
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: binlog.000011
Read_Master_Log_Pos: 349
Relay_Log_File: relaylog.000002
Relay_Log_Pos: 250
Relay_Master_Log_File: binlog.000011
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB: mysql,test,information_schema
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 349
Relay_Log_Space: 399
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
1 row in set (0.03 sec)

When both Slave_IO_Running and Slave_SQL_Running show Yes, replication is working.

That completes the MySQL master-slave configuration; a quick end-to-end check is sketched below. After that we move on to the comparatively complex NFS high-availability architecture. If it becomes necessary, I will add master/slave switchover notes later.
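A simple way to confirm replication end to end is to write on the master and read on the slave. A minimal sketch -- the rep_test database name is arbitrary; just make sure it is not one of the ignored databases configured above:

On the master (192.168.8.40):

mysql> CREATE DATABASE rep_test;
mysql> CREATE TABLE rep_test.t1 (id INT);
mysql> INSERT INTO rep_test.t1 VALUES (1);

On the slave (192.168.8.41), a moment later:

mysql> SELECT * FROM rep_test.t1;

If the row comes back and SHOW SLAVE STATUS still reports no errors, replication is healthy. Drop the test database on the master afterwards (the DROP replicates too):

mysql> DROP DATABASE rep_test;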
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Part 5: NFS high-availability web storage deployment

I. Environment

nfs1  eth0: 192.168.8.60   eth1: 192.168.52.60   --- primary server
nfs2  eth0: 192.168.8.61   eth1: 192.168.52.61   --- secondary server
Virtual IP: 192.168.8.62   --- managed by Heartbeat; this is the IP that serves clients

Both servers use /dev/sda5 as the mirrored device.

1. Synchronize the clocks (experience shows a small skew matters little here, but it does no harm):

[root@nfs1 ~]# ntpdate ntp.api.bz

2. Set up mutual name resolution by adding the following to /etc/hosts on both nodes:

192.168.8.60    nfs1
192.168.8.61    nfs2

II. DRBD installation and configuration

1. Install DRBD

From source:

[root@nfs1 ~]# tar zxf drbd-8.3.5.tar.gz
[root@nfs1 ~]# cd drbd-8.3.5
[root@nfs1 drbd-8.3.5]# make
[root@nfs1 drbd-8.3.5]# make install

Or with yum:

[root@nfs1 ~]# yum -y install drbd83 kmod-drbd83

2. Load the module:

[root@nfs1 ~]# modprobe drbd
[root@nfs1 ~]# lsmod |grep drbd
drbd                  300440  0

3. Configure DRBD:

[root@nfs1 ~]# mv /etc/drbd.conf /etc/drbd.conf.bak
[root@nfs1 ~]# vi /etc/drbd.conf

Add the following:
global {
    usage-count yes;
}
common {
    syncer { rate 100M; }
}
resource r0 {
    protocol C;
    startup { wfc-timeout 0; degr-wfc-timeout 120; }
    disk { on-io-error detach; }
    net {
        timeout 60;
        connect-int 10;
        ping-int 10;
        max-buffers 2048;
        max-epoch-size 2048;
    }
    syncer { rate 30M; }
    on nfs1 {
        device    /dev/drbd0;
        disk      /dev/sda5;
        address   192.168.8.60:7788;
        meta-disk internal;
    }
    on nfs2 {
        device    /dev/drbd0;
        disk      /dev/sda5;
        address   192.168.8.61:7788;
        meta-disk internal;
    }
}
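The same drbd.conf must exist on both nodes. Assuming the /etc/hosts entries above and root SSH access between the nodes, the simplest way is to copy it from nfs1 rather than retype it:

[root@nfs1 ~]# scp /etc/drbd.conf nfs2:/etc/drbd.conf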
4. Create the resource

Because in my test environment /dev/sda5 already had a filesystem created during OS installation, its filesystem signature has to be destroyed first (if the disk is newly added, this step can be skipped):

[root@nfs1 ~]# dd if=/dev/zero bs=1M count=1 of=/dev/sda5;sync;sync

1) Create a resource named r0:

[root@nfs1 ~]# drbdadm create-md r0
--==  Thank you for participating in the global usage survey  ==--
The server's response is:
you are the 1724th user to install this version
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success

2) Start the drbd service:

[root@nfs1 ~]# service drbd start

Enable it at boot:

[root@nfs1 ~]# chkconfig drbd on

All of the above must be performed on both the primary and the secondary!

With drbd started on both nodes, check their status:

[root@nfs1 ~]# cat /proc/drbd
version: 8.3.13 (api:88/proto:86-96)
GIT-hash: 83ca112086600faacab2f157bc5a9324f7bd7f77 build by mockbuild@builder10.centos.org, 2012-05-07 11:56:36
0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:5236960

[root@nfs2 ~]# cat /proc/drbd
version: 8.3.13 (api:88/proto:86-96)
GIT-hash: 83ca112086600faacab2f157bc5a9324f7bd7f77 build by mockbuild@builder10.centos.org, 2012-05-07 11:56:36
0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:5236960

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The following operations are performed on the primary node, nfs1.

5. Designate the primary node:

[root@nfs1 ~]# drbdsetup /dev/drbd0 primary -o
[root@nfs1 ~]# cat /proc/drbd
version: 8.3.13 (api:88/proto:86-96)
GIT-hash: 83ca112086600faacab2f157bc5a9324f7bd7f77 build by mockbuild@builder10.centos.org, 2012-05-07 11:56:36
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
ns:170152 nr:0 dw:0 dr:173696 al:0 bm:9 lo:11 pe:69 ua:39 ap:0 ep:1 wo:b oos:5075552
[>....................] sync'ed:  3.2% (4956/5112)M
finish: 0:03:08 speed: 26,900 (26,900) K/sec

[root@nfs2 ~]# cat /proc/drbd
version: 8.3.13 (api:88/proto:86-96)
GIT-hash: 83ca112086600faacab2f157bc5a9324f7bd7f77 build by mockbuild@builder10.centos.org, 2012-05-07 11:56:36
0: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r-----
ns:0 nr:514560 dw:513664 dr:0 al:0 bm:31 lo:8 pe:708 ua:7 ap:0 ep:1 wo:b oos:4723296
[>...................] sync'ed:  9.9% (4612/5112)M
finish: 0:04:41 speed: 16,768 (19,024) want: 30,720 K/sec

You can see the two nodes transferring data; after a short wait the initial sync completes, at which point the status looks like this:

[root@nfs1 ~]# cat /proc/drbd
version: 8.3.13 (api:88/proto:86-96)
GIT-hash: 83ca112086600faacab2f157bc5a9324f7bd7f77 build by mockbuild@builder10.centos.org, 2012-05-07 11:56:36
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r---n-
ns:5451880 nr:0 dw:214920 dr:5237008 al:73 bm:320 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

[root@nfs2 ~]# cat /proc/drbd
version: 8.3.13 (api:88/proto:86-96)
GIT-hash: 83ca112086600faacab2f157bc5a9324f7bd7f77 build by mockbuild@builder10.centos.org, 2012-05-07 11:56:36
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
ns:0 nr:5451880 dw:5451880 dr:0 al:0 bm:320 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
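With both sides UpToDate, it does no harm to confirm which side DRBD considers Primary before formatting. The drbd 8.3 userland can report the role and disk state directly; expected output is sketched below:

[root@nfs1 ~]# drbdadm role r0
Primary/Secondary
[root@nfs1 ~]# drbdadm dstate r0
UpToDate/UpToDate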
Format the /dev/drbd0 device on the primary node (not needed on the secondary):

[root@nfs1 ~]# mkfs.ext3 /dev/drbd0
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
655360 inodes, 1309240 blocks
65462 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1342177280
40 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 37 mounts or 180 days, whichever comes first.  Use tune2fs -c or -i to override.

Mount the device on the primary node (not needed on the secondary):

[root@nfs1 ~]# mkdir /data
[root@nfs1 ~]# mount /dev/drbd0 /data
[root@nfs1 ~]# mount |grep drbd
/dev/drbd0 on /data type ext3 (rw)

III. NFS configuration (identical on both nodes)

NFS is usually installed by default; if not, it can be installed with yum:

yum -y install portmap nfs-utils

1. Edit the NFS exports file:

[root@nfs1 ~]# cat /etc/exports
/data *(rw,sync,insecure,no_root_squash,no_wdelay)

2. Start NFS:

[root@nfs1 ~]# service portmap start
Starting portmap:                                          [  OK  ]
[root@nfs1 ~]# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
[root@nfs1 ~]# chkconfig portmap on
[root@nfs1 ~]# chkconfig nfs on

Note: portmap must be started before nfs!
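Before handing control to heartbeat, you can confirm the export is actually being served; both commands below ship with the standard NFS utilities:

[root@nfs1 ~]# exportfs -v
[root@nfs1 ~]# showmount -e localhost
Export list for localhost:
/data *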
IV. Heartbeat installation and configuration

1. Install heartbeat

From source:

tar zxf libnet-1.1.5.tar.gz
cd libnet-1.1.5
./configure
make; make install
tar jxf Heartbeat-2-1-STABLE-2.1.4.tar.bz2
cd Heartbeat-2-1-STABLE-2.1.4
./ConfigureMe configure
make; make install

Or with yum:

[root@nfs1 ~]# yum -y install libnet heartbeat-devel heartbeat-ldirectord heartbeat

Oddly, the heartbeat package had to be yum-installed twice here; the first run did not seem to actually install it.

2. Create the configuration files

[root@nfs1 ha.d]# cd /etc/ha.d

Create the main configuration file. It differs in one place between the two nodes, noted in the file itself:

[root@nfs1 ha.d]# vi ha.cf

Add the following:

logfile /var/log/ha.log
debugfile /var/log/ha-debug
logfacility     local0
keepalive 2
deadtime 10
warntime 10
initdead 10
ucast eth1 192.168.52.61    # the peer's eth1 IP -- each node points at the other node's address
auto_failback off
node nfs1
node nfs2
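The one per-node difference mentioned above is the ucast line: on nfs2 it must point back at nfs1's eth1 address instead. So in nfs2's ha.cf:

ucast eth1 192.168.52.60    # nfs1's eth1 IP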
Create the heartbeat authentication file authkeys; identical on both nodes:

[root@nfs1 ha.d]# vi authkeys

Add the following:

auth 1
1 crc

Set its permissions to 600:

[root@nfs1 ha.d]# chmod 600 /etc/ha.d/authkeys

Create the cluster resource file haresources; this must be identical on both nodes:

[root@nfs1 ha.d]# vi haresources

Add the following:

nfs1 IPaddr::192.168.8.62/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 killnfsd

Note: IPaddr here is set to the virtual IP address.

3. Create the killnfsd script; identical on both nodes. Its job is to restart the NFS service: after an NFS failover, the exported directory must effectively be re-served, otherwise clients hit "stale NFS file handle" errors.

[root@nfs1 ha.d]# vi /etc/ha.d/resource.d/killnfsd

Add the following:

killall -9 nfsd; /etc/init.d/nfs restart; exit 0

[root@nfs1 ha.d]# chmod 755 /etc/ha.d/resource.d/killnfsd

4. Start nfs and heartbeat on both nodes:

[root@nfs1 ha.d]# service heartbeat start
Starting High-Availability services: 2012/06/09_10:27:43 INFO:  Resource is stopped
[  OK  ]
[root@nfs1 ha.d]# chkconfig heartbeat on

Start the primary node first, then the secondary.

With the whole environment running, let's do a simple test first (simulating a failure on the primary that kills its services). Before the test, here is the current state.

Primary node:

[root@nfs1 ha.d]# cat /proc/drbd
version: 8.3.13 (api:88/proto:86-96)
GIT-hash: 83ca112086600faacab2f157bc5a9324f7bd7f77 build by mockbuild@builder10.centos.org, 2012-05-07 11:56:36
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:37912 nr:24 dw:37912 dr:219 al:12 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
[root@nfs1 ha.d]# mount |grep drbd0
/dev/drbd0 on /data type ext3 (rw)
[root@nfs1 ha.d]# ls /data/
anaconda-ks.cfg  install.log         lost+found                      nohup.out  sys_init.sh
init.sh          install.log.syslog  mongodb-linux-x86_64-2.0.5.tgz  sedcU4gy2
[root@nfs1 ha.d]# ip a |grep eth0:0
inet 192.168.8.62/24 brd 192.168.8.255 scope global secondary eth0:0

Secondary node:

[root@nfs2 ha.d]# cat /proc/drbd
version: 8.3.13 (api:88/proto:86-96)
GIT-hash: 83ca112086600faacab2f157bc5a9324f7bd7f77 build by mockbuild@builder10.centos.org, 2012-05-07 11:56:36
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
ns:24 nr:37928 dw:37988 dr:144 al:1 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
[root@nfs2 ha.d]# service heartbeat status
heartbeat OK [pid 7323 et al] is running on nfs2 [nfs2]...

Now stop heartbeat on the primary:

[root@nfs1 ha.d]# service heartbeat stop
Stopping High-Availability services: [  OK  ]

Then check on the secondary whether it has taken over the virtual IP:

[root@nfs2 ha.d]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:fc:78:8f brd ff:ff:ff:ff:ff:ff
inet 192.168.8.61/24 brd 192.168.8.255 scope global eth0
inet 192.168.8.62/24 brd 192.168.8.255 scope global secondary eth0:0
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:fc:78:99 brd ff:ff:ff:ff:ff:ff
inet 192.168.52.61/24 brd 192.168.52.255 scope global eth1
[root@nfs2 ha.d]# mount |grep drbd0
/dev/drbd0 on /data type ext3 (rw)
[root@nfs2 ha.d]# ll /data/
total 37752
-rw------- 1 root root     1024 Jun 15 10:56 anaconda-ks.cfg
-rwxr-xr-x 1 root root     4535 Jun 15 10:56 init.sh
-rw-r--r-- 1 root root    30393 Jun 15 10:56 install.log
-rw-r--r-- 1 root root     4069 Jun 15 10:56 install.log.syslog
drwx------ 2 root root    16384 Jun 15 09:41 lost+found
-rw-r--r-- 1 root root 38527793 Jun 15 10:56 mongodb-linux-x86_64-2.0.5.tgz
-rw------- 1 root root     2189 Jun 15 10:56 nohup.out
-rw-r--r-- 1 root root      101 Jun 15 10:56 sedcU4gy2
-rw-r--r-- 1 root root     4714 Jun 15 10:56 sys_init.sh

The virtual IP has moved over, NFS has been mounted automatically, and the data is intact. The whole primary-to-secondary switch turns out to be quite fast -- roughly 3 seconds.
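To watch the failover from a client's perspective, keep the share mounted through the VIP while heartbeat switches nodes. A minimal sketch -- web1 is a hypothetical client, and the mount point is arbitrary:

[root@web1 ~]# mkdir -p /mnt/nfs
[root@web1 ~]# mount -t nfs 192.168.8.62:/data /mnt/nfs
[root@web1 ~]# touch /mnt/nfs/failover_test
# ... stop heartbeat on nfs1 as above, wait a few seconds ...
[root@web1 ~]# ls /mnt/nfs/failover_test
/mnt/nfs/failover_test

A short I/O stall during the switch is to be expected; the killnfsd resource restart described above is what keeps clients from hitting stale file handles.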
If the primary host suffers a hardware failure and the Secondary must be promoted to Primary by hand, proceed as follows.

On the current primary, first unmount the DRBD device:

[root@nfs1 /]# umount /dev/drbd0

Demote it to secondary:

[root@nfs1 /]# drbdadm secondary r0
[root@nfs1 /]# cat /proc/drbd
1: cs:Connected st:Secondary/Secondary ds:UpToDate/UpToDate C r---
...(output trimmed)

Now both hosts are secondaries. On nfs2, promote it to primary:

[root@nfs2 /]# drbdadm primary r0
[root@nfs2 /]# cat /proc/drbd
1: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r---
...(output trimmed)

nfs2 is now the primary.

When the primary shows primary/unknown and the secondary shows secondary/unknown, the following steps usually resolve it:

1. On the secondary: drbdadm -- --discard-my-data connect all
2. On the primary: drbdadm connect all

Those two steps are normally enough.

As for DRBD split brain, it can be handled by a script or recovered manually; manual recovery is recommended, and in practice split brain is fairly rare. Manual split-brain recovery:

On the secondary:

drbdadm secondary r0
drbdadm disconnect all
drbdadm -- --discard-my-data connect r0

On the primary:

drbdadm disconnect all
drbdadm connect r0
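Split brain typically shows up as a StandAlone connection state rather than Connected. A tiny check you could run from cron on both nodes -- a sketch using drbd 8.3 syntax, with a placeholder alert address:

#!/bin/bash
# Warn if resource r0 is no longer connected to its peer.
STATE=$(drbdadm cstate r0)
if [ "$STATE" != "Connected" ]; then
    echo "DRBD r0 cstate is $STATE on $(hostname)" | mail -s "DRBD alert" root@localhost
fi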
That said, posts online suggest that adding the following options to the net { } section of drbd.conf lets DRBD resolve split brain automatically:

net {
    after-sb-0pri discard-older-primary;
    after-sb-1pri call-pri-lost-after-sb;
    after-sb-2pri call-pri-lost-after-sb;
}

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Comments and criticism are welcome -- if you have read this far, please leave a comment so we can all learn from each other!
Reposted from: https://blog.51cto.com/opsmysql/899652