CloudStack Study Notes - 2
Environment Preparation

This lab combines CloudStack with GlusterFS: two hypervisor hosts form a Gluster replicated volume.
In VMware, add a machine configured the same as agent1.

OS version: CentOS 6.6 x86_64; Memory: 4GB; Network: NAT; Disk: add an extra 50GB disk after installing the OS; Extra: enable VT-x. Set the hostname to agent2.
Getting Started
Disable iptables and SELinux
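The commands for this step are not shown in the original post; a minimal sketch of what "disable iptables and SELinux" typically means on CentOS 6 (the exact commands are an assumption):

```shell
# Sketch (assumption: stock CentOS 6 init scripts, not shown in the original).
/etc/init.d/iptables stop     # stop the firewall now
chkconfig iptables off        # keep it off across reboots
setenforce 0                  # switch SELinux to permissive immediately
# Make it permanent across reboots by editing /etc/selinux/config:
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```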
Configure a static IP address
[root@agent2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.145.153
NETMASK=255.255.255.0
GATEWAY=192.168.145.2
DNS1=10.0.1.11
[root@agent2 ~]#

Set the hostname to agent2.
Configure the hosts file

Make sure the master and both agents all have the following:
[root@agent2 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.145.151 master1
192.168.145.152 agent1
192.168.145.153 agent2
[root@agent2 ~]#

Configure NTP:
yum install ntp -y
chkconfig ntpd on
/etc/init.d/ntpd start

Check with hostname --fqdn:
[root@agent2 ~]# hostname --fqdn
agent2
[root@agent2 ~]#

Install the EPEL repository:
yum install epel-release -y

Do the same on agent2; note that agent2 needs the primary directory created:
[root@agent2 tools]# mkdir /export/primary -p
[root@agent2 tools]#

On agent2, format the disk:
[root@agent2 ~]# mkfs.ext4 /dev/sdb
mke2fs 1.41.12 (17-May-2010)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3276800 inodes, 13107200 blocks
655360 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@agent2 ~]#

Still on agent2, add the mount:
[root@agent2 ~]# echo "/dev/sdb /export/primary ext4 defaults 0 0" >> /etc/fstab
[root@agent2 ~]# mount -a
[root@agent2 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        35G  2.3G   31G   7% /
tmpfs           1.9G     0  1.9G   0% /dev/shm
/dev/sda1       380M   33M  328M   9% /boot
/dev/sdb         50G   52M   47G   1% /export/primary
[root@agent2 ~]#
Remove leftover configuration from the previous lab

Operations on the master

Delete the previous configuration: start with the master and drop the old database.

Stop the management service:
[root@master1 ~]# /etc/init.d/cloudstack-management stop
Stopping cloudstack-management:                            [FAILED]
[root@master1 ~]# /etc/init.d/cloudstack-management stop
Stopping cloudstack-management:                            [  OK  ]
[root@master1 ~]# /etc/init.d/cloudstack-management status
cloudstack-management is stopped
[root@master1 ~]#

On agent1, uninstall the gluster packages.
Uninstalling glusterfs also removes the KVM-related dependency packages; this behavior started with CentOS 6.6 (before 6.5, KVM did not depend on glusterfs). If the replication package is not installed yet, run the uninstall above first; it automatically removes the KVM packages, and the libvirtd package is removed as well.
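The uninstall command itself is not shown in the post; on CentOS 6 it would presumably look something like the sketch below (the exact package names are an assumption):

```shell
# Hypothetical sketch: removing the stock glusterfs packages before
# installing the 3.7 series. On CentOS >= 6.6 this drags qemu-kvm and
# libvirt out with it, because of the dependency described above.
yum remove glusterfs glusterfs-api glusterfs-libs -y
# Afterwards the virtualization stack has to be reinstalled, e.g.:
# yum install qemu-kvm libvirt -y
```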
A side note

After this lab, when agent1 was shut down and rebooted, the ifcfg-cloudbr0 file was not there. The lab had worked fine before; after the reboot the cloudbr0 file was simply gone. Fix: copy ifcfg-eth0 to ifcfg-cloudbr0 and edit it, using the working cloudbr0 file on agent2 as a reference.
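That fix can be sketched as follows; the bridge settings in the comments are illustrative assumptions modeled on a typical KVM bridge setup, not copied from agent2's actual file:

```shell
# Rebuild the missing bridge config from the ethernet config (sketch).
cd /etc/sysconfig/network-scripts
cp ifcfg-eth0 ifcfg-cloudbr0
# Then edit the two files so that eth0 is enslaved to the bridge and
# cloudbr0 carries the IP settings, e.g. (values are illustrative):
#   ifcfg-eth0:      DEVICE=eth0, BRIDGE=cloudbr0, drop IPADDR/NETMASK/GATEWAY
#   ifcfg-cloudbr0:  DEVICE=cloudbr0, TYPE=Bridge, plus the IP settings
/etc/init.d/network restart
```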
Configuring GlusterFS

Install the gluster 3.7 repository (on both agent1 and agent2). The 3.6 or 3.8 packages would also work.

View the yum repo file:
[root@agent1 ~]# cat /etc/yum.repos.d/CentOS-Gluster-3.7.repo
# CentOS-Gluster-3.7.repo
#
# Please see http://wiki.centos.org/SpecialInterestGroup/Storage for more
# information

[centos-gluster37]
name=CentOS-$releasever - Gluster 3.7
baseurl=http://mirror.centos.org/centos/$releasever/storage/$basearch/gluster-3.7/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage

[centos-gluster37-test]
name=CentOS-$releasever - Gluster 3.7 Testing
baseurl=http://buildlogs.centos.org/centos/$releasever/storage/$basearch/gluster-3.7/
gpgcheck=0
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
[root@agent1 ~]#

Specify the repo and install the matching glusterfs packages (on both agent1 and agent2):
[root@agent1 ~]# yum --enablerepo=centos-gluster37-test install glusterfs-server glusterfs-cli glusterfs-geo-replication -y

You can also download the rpm packages from the path below and install them:
https://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/CentOS/epel-6.8/x86_64/
Remove leftover files on the agents

On agent1, delete the old files under the mount point (the previous KVM images and so on):
[root@agent1 ~]# cd /export/primary/
[root@agent1 primary]# ls
0cc65968-4ff3-4b4c-b31e-7f1cf5d1959b  cf3dac7a-a071-4def-83aa-555b5611fb02
1685f81b-9ac9-4b21-981a-f1b01006c9ef  f3521c3d-fca3-4527-984d-5ff208e05b5c
99643b7d-aaf4-4c75-b7d6-832c060e9b77  lost+found
[root@agent1 primary]# rm -rf *
[root@agent1 primary]# ls
[root@agent1 primary]#

Do the same on agent2 and delete the leftovers:
[root@agent2 ~]# cd /export/primary/
[root@agent2 primary]# ls
lost+found
[root@agent2 primary]# rm -rf *
[root@agent2 primary]# ls
[root@agent2 primary]#

Install the CloudStack packages on agent2 (the glusterfs 3.7 packages are already installed at this point). These agents don't need to be started manually; they are managed by the master, which actually connects to them over port 22.
[root@agent2 tools]# yum install cloudstack-agent-4.8.0-1.el6.x86_64.rpm cloudstack-common-4.8.0-1.el6.x86_64.rpm -y

Check the glusterfs version on the agents:
[root@agent1 ~]# glusterfs -V
glusterfs 3.7.20 built on Jan 30 2017 15:39:27
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
[root@agent1 ~]#

Start glusterd and enable it at boot (on both agents):
[root@agent1 ~]# /etc/init.d/glusterd start
Starting glusterd:                                         [  OK  ]
[root@agent1 ~]# chkconfig glusterd on
[root@agent1 ~]#

Stop iptables on both agents:
[root@agent1 ~]# /etc/init.d/iptables stop
iptables: Setting chains to policy ACCEPT: nat mangle filte[  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@agent1 ~]# chkconfig iptables off
[root@agent1 ~]#

Add the peer to the gluster cluster and check its status; this only needs to be done on one agent:
[root@agent1 ~]# gluster peer probe agent2
peer probe: failed: Probe returned with Transport endpoint is not connected
[root@agent1 ~]# gluster peer probe agent2
peer probe: success.
[root@agent1 ~]# gluster peer status
Number of Peers: 1

Hostname: agent2
Uuid: 2778cb7a-32ef-4a3f-a34c-b97f5937bb49
State: Peer in Cluster (Connected)
[root@agent1 ~]#

Create the replicated volume.
Now create the volume itself; gv2 is just a custom name.
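The create command itself is missing from the transcript; judging from the bricks listed by `gluster volume info`, it was presumably something like this sketch:

```shell
# Presumed create command (not shown in the original transcript): a 2-way
# replica across the two agents' /export/primary directories.
gluster volume create gv2 replica 2 \
    agent1:/export/primary \
    agent2:/export/primary \
    force   # likely needed because the bricks sit on a mount-point root
```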
Start the volume and check its status; Type showing Replicate means it is a replicated volume. Why use gluster at all? Previously the primary storage was a local mount, so if a host went down, every KVM guest on it went down with it.
[root@agent1 ~]# gluster volume start gv2
volume start: gv2: success
[root@agent1 ~]# gluster volume info

Volume Name: gv2
Type: Replicate
Volume ID: 3a23ab68-73da-4f1b-bc5c-3310ffa9e8b7
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: agent1:/export/primary
Brick2: agent2:/export/primary
Options Reconfigured:
performance.readdir-ahead: on
[root@agent1 ~]#

That's it for gluster; back to where we left off.
CloudStack Configuration and UI Operations

Continuing on the master, initialize the database and import the data as follows:
[root@master1 ~]# cloudstack-setup-databases cloud:123456@localhost --deploy-as=root:123456
Mysql user name:cloud                                                           [ OK ]
Mysql user password:******                                                      [ OK ]
Mysql server ip:localhost                                                       [ OK ]
Mysql server port:3306                                                          [ OK ]
Mysql root user name:root                                                       [ OK ]
Mysql root user password:******                                                 [ OK ]
Checking Cloud database files ...                                               [ OK ]
Checking local machine hostname ...                                             [ OK ]
Checking SELinux setup ...                                                      [ OK ]
Detected local IP address as 192.168.145.151, will use as cluster management server node IP[ OK ]
Preparing /etc/cloudstack/management/db.properties                              [ OK ]
Applying /usr/share/cloudstack-management/setup/create-database.sql             [ OK ]
Applying /usr/share/cloudstack-management/setup/create-schema.sql               [ OK ]
Applying /usr/share/cloudstack-management/setup/create-database-premium.sql     [ OK ]
Applying /usr/share/cloudstack-management/setup/create-schema-premium.sql       [ OK ]
Applying /usr/share/cloudstack-management/setup/server-setup.sql                [ OK ]
Applying /usr/share/cloudstack-management/setup/templates.sql                   [ OK ]
Processing encryption ...                                                       [ OK ]
Finalizing setup ...                                                            [ OK ]

CloudStack has successfully initialized database, you can check your database
configuration in /etc/cloudstack/management/db.properties

[root@master1 ~]#

After the database is configured, start the master; it performs some initialization.
Don't start it this way in the future; the initialization only needs to run once.
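The start command is not shown in the transcript; on this version it would presumably be something like the sketch below (`cloudstack-setup-management` is the packaged first-time setup tool, so its use here is an assumption):

```shell
# Presumed commands (not in the original transcript):
cloudstack-setup-management          # first-time setup: configures and starts the service
# For routine restarts afterwards, use the init script instead:
/etc/init.d/cloudstack-management restart
```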
Check the log; startup has finished and port 8080 is listening:
[root@master1 ~]# tail -f /var/log/cloudstack/management/catalina.out
INFO  [o.a.c.s.l.CloudStackExtendedLifeCycle] (main:ctx-d2bdddaf) (logid:) Done Configuring CloudStack Components
INFO  [c.c.u.LogUtils] (main:ctx-d2bdddaf) (logid:) log4j configuration found at /etc/cloudstack/management/log4j-cloud.xml
Feb 12, 2017 7:59:25 PM org.apache.coyote.http11.Http11NioProtocol start
INFO: Starting Coyote HTTP/1.1 on http-8080
Feb 12, 2017 7:59:25 PM org.apache.jk.common.ChannelSocket init
INFO: JK: ajp13 listening on /0.0.0.0:20400
Feb 12, 2017 7:59:25 PM org.apache.jk.server.JkMain start
INFO: Jk running ID=0 time=0/18  config=null
Feb 12, 2017 7:59:25 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 63790 ms

Log in to the web page:
http://192.168.145.151:8080/client
Log in with admin/password.
Next, create the system VMs (on the master). The system virtual router and the VNC console proxy are what these VMs provide. This step imports the system VM template into secondary storage; run the command below on the master:
[root@master1 tools]# /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
> -m /export/secondary \
> -f /tools/systemvm64template-4.6.0-kvm.qcow2.bz2 \
> -h kvm -F
Uncompressing to /usr/share/cloudstack-common/scripts/storage/secondary/9824edc4-61db-4ad8-a08a-61f051b9ebfe.qcow2.tmp (type bz2)...could take a long time
Moving to /export/secondary/template/tmpl/1/3///9824edc4-61db-4ad8-a08a-61f051b9ebfe.qcow2...could take a while
Successfully installed system VM template /tools/systemvm64template-4.6.0-kvm.qcow2.bz2 to /export/secondary/template/tmpl/1/3/
[root@master1 ~]#
Log in to the CloudStack management page and change the memory overcommit ratio. Restart the service after the change.
Infrastructure ---- Add Zone

Leave the rest at the defaults: neither the management nor the guest network was changed here. In an earlier run this was changed to eth0, with the two fields below set to cloudbr0, but there is actually no need to change it; the wizard sets this up for you automatically, so we leave it alone here. If you have many VMs, you can set the value as high as 250.
As noted, these agents don't need to be started by hand; they are managed by the master, which connects to them over port 22.
Configure this step as follows: since the agent nodes use a glusterfs replicated volume, you can choose gluster as the protocol, and the server can simply be 127.0.0.1.

Here is the resulting configuration; the name sec was chosen arbitrarily.

Click Launch Zone.

If an error pops up, it may be a software bug; just click Cancel. The zone, pod, and cluster were all added successfully, and hosts and storage can be added manually from here.
Hosts can be added from the Hosts page.

agent2 was added successfully, but agent1 failed.
Added successfully. agent1 failed to be added because of the cloudbr0 leftover from the previous lesson. Delete cloudbr0, restart agent1's network, then add it again from the master; done.
The log keeps scrolling and gets stuck here; this is something someone else hit in their own lab, likewise caused by the cloudbr0 file not being deleted. It reports "File exists".
Add storage

Add the primary storage first.
Added successfully.
Verify: it was added successfully. In older versions only a database record was added and nothing was mounted here; in the new version, after the record is added, the volume is also mounted:

[root@agent1 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        35G  2.7G   31G   8% /
tmpfs           1.9G     0  1.9G   0% /dev/shm
/dev/sda1       380M   33M  328M   9% /boot
/dev/sdb         50G   52M   47G   1% /export/primary
127.0.0.1:/gv2   50G   52M   47G   1% /mnt/6d915c5a-6640-354e-9209-d2c8479ca105
[root@agent1 ~]#

[root@agent2 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        35G  2.7G   31G   9% /
tmpfs           1.9G     0  1.9G   0% /dev/shm
/dev/sda1       380M   33M  328M   9% /boot
/dev/sdb         50G   52M   47G   1% /export/primary
127.0.0.1:/gv2   50G   52M   47G   1% /mnt/6d915c5a-6640-354e-9209-d2c8479ca105
[root@agent2 ~]#

Add the secondary storage.
The infrastructure is now complete.

Before enabling the zone, do a bit of tuning: adjust the overcommit ratios (see the previous lesson for details on overcommitting). After the change, restart the management service on the master. The first restart may take a while; to be safe, run the restart a second time. The log shows some errors during the restart; they can be ignored:
[root@master1 ~]# tail -f /var/log/cloudstack/management/catalina.out
INFO  [o.a.c.s.l.CloudStackExtendedLifeCycle] (Thread-85:null) (logid:) stopping bean VolumeDataStoreDaoImpl
INFO  [o.a.c.s.l.CloudStackExtendedLifeCycle] (Thread-85:null) (logid:) stopping bean UsageDaoImpl
INFO  [o.a.c.s.l.CloudStackExtendedLifeCycle] (Thread-85:null) (logid:) stopping bean ManagementServerNode
INFO  [o.a.c.s.l.CloudStackExtendedLifeCycle] (Thread-85:null) (logid:) stopping bean ConfigurationServerImpl
INFO  [o.a.c.s.l.CloudStackExtendedLifeCycle] (Thread-85:null) (logid:) stopping bean DatabaseIntegrityChecker
INFO  [o.a.c.s.l.CloudStackExtendedLifeCycle] (Thread-85:null) (logid:) stopping bean ClusterManagerImpl
INFO  [c.c.c.ClusterManagerImpl] (Thread-85:null) (logid:) Stopping Cluster manager, msid : 52236852888
log4j:WARN No appenders could be found for logger (com.cloud.cluster.ClusterManagerImpl).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "SnapshotPollTask" java.lang.NullPointerException
        at org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:304)
        at org.apache.cloudstack.managed.context.ManagedContextRunnable.getContext(ManagedContextRunnable.java:66)
        at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
        at org.apache.cloudstack.managed.context.ManagedContextTimerTask.run(ManagedContextTimerTask.java:27)
        at java.util.TimerThread.mainLoop(Timer.java:555)
        at java.util.TimerThread.run(Timer.java:505)
Feb 12, 2017 8:50:21 PM org.apache.catalina.core.AprLifecycleListener init
Feb 12, 2017 8:50:22 PM org.apache.catalina.session.StandardManager doLoad
SEVERE: IOException while loading persisted sessions: java.io.WriteAbortedException: writing aborted; java.io.NotSerializableException: net.sf.cglib.proxy.NoOp$1
java.io.WriteAbortedException: writing aborted; java.io.NotSerializableException: net.sf.cglib.proxy.NoOp$1
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1354)
[root@master1 ~]#

Enable the zone.
Enabling the zone creates two KVM system VMs: one for secondary storage and one as the VNC console proxy.

There are two ways to check whether the KVM guests started successfully:

1. Check via the web page
2. Log in via VNC and take a look
Checking via the web page shows they are starting up.

For VNC, find the following content in the domain XML; the password at the end is the VNC password:
<graphics type='vnc' port='-1' autoport='yes' listen='192.168.145.152' passwd='Pdf1sAQ2bIl0oVpKSRfxaA'>
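One way to pull that line out of a running domain (the VM name v-2-VM is taken from the virsh listings further on):

```shell
# Extract the VNC settings from the running domain's XML.
virsh dumpxml v-2-VM | grep -o "<graphics[^>]*>"
# Then connect with any VNC client to the listen address; the port is
# auto-assigned (autoport='yes'), usually starting at 5900, e.g.:
# vncviewer 192.168.145.152:5900
```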
Copy that password string. Being able to log in means the virtual machine started successfully. Enter the username and password root/password, log in, and refresh.
After the VMs have started, check where they live. Inside are the template plus those two VMs; since the two agents share a replicated volume, their contents are identical:
[root@agent1 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        35G  2.7G   31G   8% /
tmpfs           1.9G     0  1.9G   0% /dev/shm
/dev/sda1       380M   33M  328M   9% /boot
/dev/sdb         50G  682M   46G   2% /export/primary
127.0.0.1:/gv2   50G  682M   46G   2% /mnt/6d915c5a-6640-354e-9209-d2c8479ca105
[root@agent1 ~]# cd /mnt/6d915c5a-6640-354e-9209-d2c8479ca105/
[root@agent1 6d915c5a-6640-354e-9209-d2c8479ca105]# ls
745865fe-545e-4430-98ac-0ffd5186a9b6  bc5bc6eb-4900-4076-9d5d-36fd0480b5e2
9824edc4-61db-4ad8-a08a-61f051b9ebfe
[root@agent1 6d915c5a-6640-354e-9209-d2c8479ca105]#

The contents match:

[root@agent2 ~]# cd /mnt/6d915c5a-6640-354e-9209-d2c8479ca105/
[root@agent2 6d915c5a-6640-354e-9209-d2c8479ca105]# ls
745865fe-545e-4430-98ac-0ffd5186a9b6  bc5bc6eb-4900-4076-9d5d-36fd0480b5e2
9824edc4-61db-4ad8-a08a-61f051b9ebfe
[root@agent2 6d915c5a-6640-354e-9209-d2c8479ca105]#
Live migration of virtual machines

Right now each host is running one system VM:
[root@agent1 ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 1     v-2-VM                         running

[root@agent1 ~]#

[root@agent2 ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 1     s-1-VM                         running

[root@agent2 ~]#

Its IP is 192.168.145.180. agent1 does not currently have this VM; we migrate it from agent2 to agent1.

[root@agent1 ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 1     v-2-VM                         running

[root@agent1 ~]#

Keep a ping running while the migration happens on the other side to see whether packets are lost. The migration finished with a small amount of loss:

64 bytes from 192.168.145.180: icmp_seq=132 ttl=64 time=2.97 ms
64 bytes from 192.168.145.180: icmp_seq=133 ttl=64 time=0.830 ms
64 bytes from 192.168.145.180: icmp_seq=134 ttl=64 time=0.640 ms
64 bytes from 192.168.145.180: icmp_seq=135 ttl=64 time=0.850 ms
64 bytes from 192.168.145.180: icmp_seq=136 ttl=64 time=1.43 ms
^C
--- 192.168.145.180 ping statistics ---
136 packets transmitted, 132 received, 2% packet loss, time 135432ms
rtt min/avg/max/mdev = 0.447/1.331/8.792/1.268 ms
[root@master1 ~]#

The migration is complete:

[root@agent1 ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 1     v-2-VM                         running
 3     s-1-VM                         running

[root@agent1 ~]#

How migration works: a suspended VM is created on the new host, the disk and memory data are copied over, and once the copy finishes the VM resumes there. The network must be fast, with no packet loss. This is a built-in KVM feature; the UI just wraps the underlying commands.
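The post performs the migration from the CloudStack UI; underneath, this boils down to libvirt's live migration. A hedged sketch of the equivalent manual command (not what the UI literally runs; flags per standard virsh usage):

```shell
# Run on the source host (agent2). With shared storage (our gluster volume)
# only memory pages need to be copied, so no --copy-storage flags are needed.
virsh migrate --live --persistent --undefinesource \
    s-1-VM qemu+ssh://agent1/system
# Verify on the destination:
# virsh -c qemu+ssh://agent1/system list
```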
Custom compute offerings

You can add your own KVM compute offerings. First look at the host's cpuinfo; it shows roughly 2200 MHz:

[root@agent1 ~]# cat /proc/cpuinfo | head -10
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 42
model name      : Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
stepping        : 7
microcode       : 26
cpu MHz         : 2192.909
cache size      : 6144 KB
physical id     : 0
[root@agent1 ~]#

Suppose your physical machine runs at 2.2 GHz, like the one above. Even though it has multiple cores, you cannot create a 3 GHz offering here.
Added successfully. Also check the disk offering. About the write-cache type: cache=none is the default, i.e. no caching.

User interface: accounts and projects

Account setup: user1/123456
Normal users cannot see the system VMs. They can, however, define their own security groups.

Normal users also have resource limits: they can only add 20 KVM guests, and the admin user can change those limits. Projects are tied to accounts and resources as well. The project is created.
The Events page records the operation log.

Important alerts are shown on the home page.
Reposted from: https://www.cnblogs.com/nmap/p/6392782.html