Setting Up a GFS Share with RHCS, Part 2
In the previous part we built the GFS share through the graphical interface; here we do the same thing from the command line.
1.1. Basic System Configuration
All 5 nodes use the same configuration.
Configure the /etc/hosts file:
# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.130 t-lg-kvm-001
192.168.1.132 t-lg-kvm-002
192.168.1.134 t-lg-kvm-003
192.168.1.138 t-lg-kvm-005
192.168.1.140 t-lg-kvm-006
網(wǎng)絡(luò)設(shè)置
關(guān)閉NetworkManager:
# service NetworkManager stop
# chkconfig NetworkManager off
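You can confirm it will stay off across reboots; every runlevel should show off:
# chkconfig --list NetworkManager
NetworkManager  0:off   1:off   2:off   3:off   4:off   5:off   6:off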
關(guān)閉SELinux
修改/etc/selinux/config文件中設(shè)置SELINUX=disabled :
# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
設(shè)置當(dāng)前生效:
# setenforce 0
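Verify the current mode:
# getenforce
Permissive
(setenforce 0 only switches to permissive mode; after a reboot with SELINUX=disabled in place, getenforce reports Disabled.)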
配置時(shí)間同步
5臺(tái)節(jié)點(diǎn)已配置時(shí)間同步。
1.2. Configure the yum Repository
The GFS2-related packages ship on the CentOS installation media, so the yum repository can point straight at the ISOs. Proceed as follows:
1. Mount the ISO files on 192.168.1.130:
# mount -o loop /opt/CentOS-6.5-x86_64-bin-DVD1.iso /var/www/html/DVD1
# mount -o loop /opt/CentOS-6.5-x86_64-bin-DVD2.iso /var/www/html/DVD2
2. On 192.168.1.130, edit /etc/yum.repos.d/CentOS-Media.repo:
# vi /etc/yum.repos.d/CentOS-Media.repo
[c6-media]
name=CentOS-$releasever - Media
baseurl=file:///var/www/html/DVD1
        file:///var/www/html/DVD2
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
3. Start the httpd service on 192.168.1.130 so the other compute nodes can reach the repository:
# service httpd start
4. Configure the yum repository on the other 4 compute nodes:
# vi /etc/yum.repos.d/CentOS-Media.repo
[c6-media]
name=CentOS-$releasever - Media
baseurl=http://192.168.1.130/DVD1
        http://192.168.1.130/DVD2
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
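To confirm the repository is usable on each node, refresh the yum cache and list the repositories; c6-media should appear in the output:
# yum clean all
# yum repolist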
1.3. Install the GFS2-Related Packages
1.3.1. Install the GFS2 Packages
Run the following commands on all 5 compute nodes to install the GFS2 software.
Install cman and rgmanager:
# yum install -y rgmanager cman
Install CLVM:
# yum install -y lvm2-cluster
Install gfs2:
# yum install -y gfs*
1.3.2. Configure the Firewall Rules
Run the following commands on all 5 compute nodes to open the cluster ports:
# iptables -A INPUT -p udp -m udp --dport 5404 -j ACCEPT
# iptables -A INPUT -p udp -m udp --dport 5405 -j ACCEPT
# iptables -A INPUT -p tcp -m tcp --dport 21064 -j ACCEPT
# service iptables save
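These rules open the ports the cluster stack listens on: UDP 5404/5405 for corosync (cman) cluster communication and TCP 21064 for the distributed lock manager (dlm). You can confirm the saved rules are in place with:
# iptables -L INPUT -n | grep -E '5404|5405|21064'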
After completing the steps above, it is recommended to reboot the compute nodes; otherwise the cman service may fail to start.
1.4. Configure the cman and rgmanager Cluster
The cluster only needs to be configured on one compute node; the configuration is then synchronized to the others. Here it is done on 192.168.1.130.
1. Create the cluster
On 192.168.1.130 run:
root@t-lg-kvm-001:/# ccs_tool create kvmcluster
2. Add the cluster nodes
One compute node is temporarily out of service because of a NIC problem, so only the 5 nodes below are added to the cluster. On 192.168.1.130 run:
root@t-lg-kvm-001:/# ccs_tool addnode -n 1 t-lg-kvm-001
root@t-lg-kvm-001:/# ccs_tool addnode -n 2 t-lg-kvm-002
root@t-lg-kvm-001:/# ccs_tool addnode -n 3 t-lg-kvm-003
root@t-lg-kvm-001:/# ccs_tool addnode -n 4 t-lg-kvm-005
root@t-lg-kvm-001:/# ccs_tool addnode -n 5 t-lg-kvm-006
View the cluster:
root@t-lg-kvm-001:/root# ccs_tool lsnode

Cluster name: kvmcluster, config_version: 24

Nodename                        Votes Nodeid Fencetype
t-lg-kvm-001                        1      1
t-lg-kvm-002                        1      2
t-lg-kvm-003                        1      3
t-lg-kvm-005                        1      4
t-lg-kvm-006                        1      5
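For reference, at this point /etc/cluster/cluster.conf should look roughly like the following (a sketch; the config_version and exact attribute layout may differ):
<?xml version="1.0"?>
<cluster name="kvmcluster" config_version="24">
  <clusternodes>
    <clusternode name="t-lg-kvm-001" votes="1" nodeid="1"/>
    <clusternode name="t-lg-kvm-002" votes="1" nodeid="2"/>
    <clusternode name="t-lg-kvm-003" votes="1" nodeid="3"/>
    <clusternode name="t-lg-kvm-005" votes="1" nodeid="4"/>
    <clusternode name="t-lg-kvm-006" votes="1" nodeid="5"/>
  </clusternodes>
  <fencedevices/>
  <rm/>
</cluster>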
3. Synchronize the configuration file from 192.168.1.130 to the other nodes:
root@t-lg-kvm-001:/# scp /etc/cluster/cluster.conf 192.168.1.132:/etc/cluster/
root@t-lg-kvm-001:/# scp /etc/cluster/cluster.conf 192.168.1.134:/etc/cluster/
root@t-lg-kvm-001:/# scp /etc/cluster/cluster.conf 192.168.1.138:/etc/cluster/
root@t-lg-kvm-001:/# scp /etc/cluster/cluster.conf 192.168.1.140:/etc/cluster/
4. Start the cman service on each node
Run on all 5 compute nodes:
# service cman start
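Once cman is running everywhere, membership can be checked from any node; each of the 5 nodes should be listed with status M (member), and cman_tool status reports the quorum state:
# cman_tool nodes
# cman_tool status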
The cluster configuration is complete; next we set up CLVM.
1.5. Configure CLVM
Enable clustered LVM
Run the following command on every node in the cluster to enable clustered LVM:
# lvmconf --enable-cluster
Verify that clustered LVM is enabled:
# cat /etc/lvm/lvm.conf | grep "locking_type = 3"
locking_type = 3
A result of locking_type = 3 confirms that clustered locking is enabled.
啟動(dòng)clvm服務(wù)
在各節(jié)點(diǎn)上啟動(dòng)clvm服務(wù):
#serviceclvmd start
在集群節(jié)點(diǎn)上創(chuàng)建lvm
此步驟在一臺(tái)節(jié)點(diǎn)上執(zhí)行即可,例如在192.168.1.130上執(zhí)行:
查看共享存儲(chǔ):
#fdisk-l

Disk /dev/sda: 599.0 GB, 598999040000 bytes
255 heads, 63 sectors/track, 72824 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000de0e7

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          66      524288   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              66       72825   584434688   8e  Linux LVM

Disk /dev/mapper/vg01-lv01: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vg01-lv_swap: 537.7 GB, 537676218368 bytes
255 heads, 63 sectors/track, 65368 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

(/dev/sdc, /dev/sdd, /dev/sde, /dev/sdf and /dev/sdg report identical geometry: 1073.7 GB, 1073741824000 bytes each; repeated output omitted.)

Disk /dev/mapper/vg01-lv_bmc: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
There are 6 LUNs in total, 1 TB each.
創(chuàng)建集群物理卷:
root@t-lg-kvm-001:/root#pvcreate/dev/sdb
root@t-lg-kvm-001:/root#pvcreate/dev/sdc
root@t-lg-kvm-001:/root#pvcreate/dev/sdd
root@t-lg-kvm-001:/root#pvcreate/dev/sde
root@t-lg-kvm-001:/root#pvcreate/dev/sdf
root@t-lg-kvm-001:/root#pvcreate/dev/sdg
?????? 創(chuàng)建集群卷組:
root@t-lg-kvm-001:/root#vgcreatekvmvg /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
  Clustered volume group "kvmvg" successfully created
root@t-lg-kvm-001:/root# vgs
  VG    #PV #LV #SN Attr   VSize   VFree
  kvmvg   6   0   0 wz--nc   5.86t 5.86t
  vg01    1   3   0 wz--n- 557.36g 1.61g
The "c" at the end of kvmvg's attribute string (wz--nc) confirms the volume group is clustered.
?????? 創(chuàng)建集群邏輯卷:
root@t-lg-kvm-001:/root#lvcreate -L 5998G -n kvmlv kvmvg
? Logical volume "kvmlv" created
root@t-lg-kvm-001:/root#lvs
? LV?????VG??? Attr?????? LSize??Pool Origin Data%? Move LogCpy%Sync Convert
??kvmlv??kvmvg -wi-a-----?? 5.86t????????????????????????????????????????????
? lv01?? ?vg01?-wi-ao----? 50.00g????????????????????????????????????????????
? lv_bmc?vg01? -wi-ao----?? 5.00g????????????????????????????????????????????
? lv_swap vg01?-wi-ao---- 500.75g???????????????????
The clustered logical volume is now created. Once it has been created on one node, it is visible on every other node: log in to any other node and run lvs to verify.
1.6. Configure GFS2
1. Format the logical volume with the cluster file system
This only needs to be done on one machine, for example 192.168.1.130:
root@t-lg-kvm-001:/root# mkfs.gfs2 -j 7 -p lock_dlm -t kvmcluster:sharedstorage /dev/kvmvg/kvmlv
This will destroy any data on /dev/kvmvg/kvmlv.
It appears to contain: symbolic link to `../dm-3'

Are you sure you want to proceed? [y/n] y

Device:                    /dev/kvmvg/kvmlv
Blocksize:                 4096
Device Size                5998.00 GB (1572339712 blocks)
Filesystem Size:           5998.00 GB (1572339710 blocks)
Journals:                  7
Resource Groups:           7998
Locking Protocol:          "lock_dlm"
Lock Table:                "kvmcluster:sharedstorage"
UUID:                      39f35f4a-e42a-164f-9438-967679e48f9f
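A note on the -j option: GFS2 needs one journal for every node that mounts the file system, so -j 7 leaves headroom beyond the current 5 nodes. If more nodes join later, additional journals can be added to the mounted file system (a sketch, using the mount point configured in the next step):
# gfs2_jadd -j 1 /openstack/instances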
2. Mount the cluster file system at /openstack/instances
This mount command must be run on every node in the cluster:
# mount -t gfs2 /dev/kvmvg/kvmlv /openstack/instances/
Check the mount:
# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/vg01-lv01     50G   12G   35G  26% /
tmpfs                    379G   29M  379G   1% /dev/shm
/dev/mapper/vg01-lv_bmc  5.0G  138M  4.6G   3% /bmc
/dev/sda1                504M   47M  433M  10% /boot
/dev/mapper/kvmvg-kvmlv  5.9T  906M  5.9T   1% /openstack/instances
設(shè)置開機(jī)自動(dòng)掛載:
#echo"/dev/kvmvg/kvmlv /openstack/instances gfs2 defaults 0 0" >>/etc/fstab
啟動(dòng)rgmanager服務(wù):
#servicergmanager start
設(shè)置開機(jī)自啟動(dòng):
#chkconfigclvmd on
#chkconfigcman on
#chkconfigrgmanager on
#chkconfiggfs2 on
3、設(shè)置掛載目錄權(quán)限
因掛載目錄用于openstack存放虛擬機(jī),目錄的權(quán)限需要設(shè)置成nova:nova.
在集群中的任意節(jié)點(diǎn)上執(zhí)行:
#chown -R nova:nova /openstack/instances/
在各節(jié)點(diǎn)上查看目錄權(quán)限是否正確:
#ls-lh /openstack/
總用量 4.0K
drwxr-xr-x7 nova nova 3.8K 5月? 26 14:12 instances
轉(zhuǎn)載于:https://blog.51cto.com/3402313/1656136