OpenStack Lab Notes
Author: 全心全意
OpenStack provides a reliable cloud deployment solution with good scalability. Put simply, OpenStack is a cloud operating system, or a cloud management platform: it does not provide cloud services itself, it only provides the platform for deploying and managing them.
Architecture diagram: http://m.qpic.cn/psb?/V12uCjhD3ATBKt/Mf6rnJXoRGXLpebCzPTUfETy68mVidyW.VTA2AbQxE0!/b/dDUBAAAAAAAA&bo=swFuAQAAAAARB.0!&rf=viewer_4
Keystone is the core module of OpenStack; it provides authentication for Nova (compute), Glance (images), Swift (object storage), Cinder (block storage), Neutron (networking) and Horizon (dashboard).
Glance is the OpenStack image service component. It provides storage, query and retrieval of virtual machine image files; through a virtual disk image catalog and repository it supplies images to Nova instances. It currently has two API versions, v1 and v2.

Minimum physical hardware:
Controller node: 1-2 CPUs, 8 GB RAM, 2 NICs
Compute node: 2-4 CPUs, 8 GB RAM, 2 NICs
Block storage node: 1-2 CPUs, 4 GB RAM, 1 NIC, at least 2 disks
Object storage node: 1-2 CPUs, 4 GB RAM, 1 NIC, at least 2 disks
Network topology (in this lab the management, storage and local networks are merged): http://m.qpic.cn/psb?/V12uCjhD3ATBKt/r30ELjijnHAaYX*RMZe4vhwVNcix4zUb2pNnovlYZ7E!/b/dL8AAAAAAAAA&bo=xgKqAQAAAAADB00!&rf=viewer_4

Installation
Controller node: quan 172.16.1.211 / 172.16.1.221
Compute node: quan1 172.16.1.212 / 172.16.1.222
Storage node: storage 172.16.1.213 / 172.16.1.223
Object storage node 1: object01 172.16.1.214 / 172.16.1.224
Object storage node 2: object02 172.16.1.215 / 172.16.1.225

Preparation (all hosts): disable the firewall, disable SELinux, disable NetworkManager.
Install the NTP service on all hosts:
yum -y install chrony
On the controller, edit the configuration file to allow hosts in the subnet to synchronize:
allow 172.16.1.0/24
systemctl enable chronyd.service && systemctl start chronyd.service
On the other nodes:
vi /etc/chrony.conf
server quan iburst
Note: use the original CentOS repositories.
yum install epel-release
yum install centos-release-openstack-queens
yum install openstack-selinux
yum install python-openstackclient

Install the database on the controller node (quan):
yum install -y mariadb mariadb-server python2-PyMySQL
vi /etc/my.cnf.d/openstack.cnf
bind-address = 172.16.1.211
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Start the database and enable it at boot:
systemctl enable mariadb.service && systemctl start mariadb.service
Initialize the database:
mysql_secure_installation

Install the message queue on the controller node (quan) (port 5672):
yum install rabbitmq-server -y
Start the service and enable it at boot:
systemctl enable rabbitmq-server.service && systemctl start rabbitmq-server.service
Add the openstack user:
rabbitmqctl add_user openstack openstack
Grant the openstack user read and write permissions:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Install memcached on the controller node (quan) (port 11211):
yum -y install memcached python-memcached
vi /etc/sysconfig/memcached
OPTIONS="-l 127.0.0.1,::1,quan"
Start the service and enable it at boot:
systemctl enable memcached.service && systemctl start memcached.service

Install the etcd service on the controller node (quan) (a key-value store):
yum -y install etcd
vi /etc/etcd/etcd.conf
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://quan:2380"
ETCD_LISTEN_CLIENT_URLS="http://quan:2379"
ETCD_NAME="quan"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://quan:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://quan:2379"
ETCD_INITIAL_CLUSTER="quan=http://quan:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Start the service and enable it at boot:
systemctl enable etcd.service && systemctl start etcd.service
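Before moving on to Keystone, it can help to confirm that these supporting services are actually up. A minimal sanity-check sketch, not part of the original notes, assuming the host names and credentials configured above:
chronyc sources                      # on each node, quan should appear as the selected time source
rabbitmqctl list_users               # the openstack user should be listed
rabbitmqctl list_permissions -p /    # openstack should show ".*" ".*" ".*" on the default vhost
systemctl status memcached etcd      # both services should be active (running)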
Keystone component
Keystone is the core module of OpenStack and provides authentication for Nova (compute), Glance (images), Swift (object storage), Cinder (block storage), Neutron (networking) and Horizon (dashboard).
Basic concepts:
User: a person or program that accesses OpenStack through Keystone. Users are verified with credentials such as a password or an API key.
Tenant: a collection of accessible resources within the services. For example, in Nova a tenant can be a set of machines, in Swift and Glance a tenant can be some image storage, and in Neutron a tenant can be a set of network resources. A user is always bound to one or more tenants by default.
Role: a set of resource permissions a user can exercise, for example on virtual machines in Nova or images in Glance. Users can be added to a global role or to a role within a tenant. With a global role the user's permissions apply to all tenants; with a tenant role the user can only exercise those permissions inside that tenant.
Service: a service such as Nova, Glance or Swift. Using the User, Tenant and Role concepts, a service can decide whether the current user may access its resources. But when a user tries to access a service within its tenant, it must know whether the service exists and how to reach it; different names are usually used to identify different services.
Endpoint: the access point a service exposes.
Token: the key for accessing resources. It is returned after Keystone has verified the user; in subsequent interactions with other services only the token has to be carried. Each token has an expiry time.
Relationships between these concepts: http://m.qpic.cn/psb?/V12uCjhD3ATBKt/PJAecZuZ1C44VKDjcsKLYotu5KOz3RNZwumR07nBIug!/b/dDUBAAAAAAAA&bo=BAIsAQAAAAADBwk!&rf=viewer_4
1. A tenant manages a group of users (people or programs).
2. Every user has its own credentials: user name and password, user name and API key, or another credential.
3. Before accessing other resources (compute, storage), a user presents its credentials to Keystone and receives authentication information (mainly a token) and service information (the service catalog and the endpoints of the services).
4. With the token, the user can then access the resources.
Keystone workflow in OpenStack: http://m.qpic.cn/psb?/V12uCjhD3ATBKt/ptROtuhyzh7Mq3vSVz3Ut1TtGDXuBbYf*WbN8UZdWDE!/b/dLgAAAAAAAAA&bo=igIRAgAAAAADB7k!&rf=viewer_4

Setting up Keystone
Create the database:
mysql -uroot -popenstack
create database keystone;
grant all privileges on keystone.* to 'keystone'@'localhost' identified by 'openstack';
grant all privileges on keystone.* to 'keystone'@'%' identified by 'openstack';
Install:
yum -y install openstack-keystone httpd mod_wsgi
vi /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:openstack@quan/keystone   # database connection: user:password@host/database
[token]
provider = fernet
Initialize the keystone database:
su -s /bin/sh -c "keystone-manage db_sync" keystone
Initialize the Fernet key repositories:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Bootstrap the Keystone service endpoints (this writes the endpoint records):
keystone-manage bootstrap --bootstrap-password openstack --bootstrap-admin-url http://quan:35357/v3/ --bootstrap-internal-url http://quan:5000/v3/ --bootstrap-public-url http://quan:5000/v3/ --bootstrap-region-id RegionOne
Configure the HTTP service:
vi /etc/httpd/conf/httpd.conf
ServerName quan
Create the symlink:
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Start the service and enable it at boot:
systemctl enable httpd.service && systemctl start httpd.service
Create the administrator account:
vim admin-openrc
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://quan:35357/v3
export OS_IDENTITY_API_VERSION=3
Load the administrator account:
source admin-openrc
Create domain/projects/users/roles
Create the projects:
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" demo
Create the demo user and set its password:
openstack user create --domain default --password-prompt demo
Create the user role:
openstack role create user
Add demo to the user role:
openstack role add --project demo --user demo user
Verification
Unset the environment variables set earlier:
unset OS_AUTH_URL OS_PASSWORD
Run the following command and enter the admin password:
openstack --os-auth-url http://quan:35357/v3 \
--os-project-domain-name Default \
--os-user-domain-name Default \
--os-project-name admin \
--os-username admin token issue
Run the following command and enter the demo user's password:
openstack --os-auth-url http://quan:5000/v3 \
--os-project-domain-name Default \
--os-user-domain-name Default \
--os-project-name demo \
--os-username demo token issue
Create the OpenStack client environment scripts
Administrator account:
vim admin-openrc
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://quan:35357/v3
export OS_IDENTITY_API_VERSION=3   # identity service version
export OS_IMAGE_API_VERSION=2      # image service version
demo user account:
vim demo-openrc
export OS_USERNAME=demo
export OS_PASSWORD=openstack
export OS_PROJECT_NAME=demo
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://quan:35357/v3
export OS_IDENTITY_API_VERSION=3   # identity service version
export OS_IMAGE_API_VERSION=2      # image service version
Load the administrator account:
source admin-openrc
Verify as administrator:
openstack token issue
Load the demo user:
source demo-openrc
Verify as the demo user:
openstack token issue
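For reference, the token workflow described above can also be exercised directly against the Identity v3 API with curl. This is a hedged illustration that is not part of the original notes; it assumes the admin password set during keystone-manage bootstrap. The issued token is returned in the X-Subject-Token response header:
curl -i -H "Content-Type: application/json" -d '{
  "auth": {
    "identity": {
      "methods": ["password"],
      "password": { "user": { "name": "admin", "domain": { "name": "Default" }, "password": "openstack" } }
    },
    "scope": { "project": { "name": "admin", "domain": { "name": "Default" } } }
  }
}' http://quan:5000/v3/auth/tokens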
Glance component
Glance is the OpenStack image service component. It provides storage, query and retrieval of virtual machine image files; through a virtual disk image catalog and repository it supplies images to Nova instances. It currently has two API versions, v1 and v2.
Glance architecture diagram: http://m.qpic.cn/psb?/V12uCjhD3ATBKt/mkXPMrNM9RL.NizLwc22Vm*FHkAc2NWh9668JHk4zS0!/b/dLYAAAAAAAAA&bo=RQHZAAAAAAADB78!&rf=viewer_4
Image service components:
glance-api: the external API endpoint that accepts image API requests. Default port 9292.
glance-registry: stores, processes and retrieves image metadata. Default port 9191.
glance-db: backed by MySQL in OpenStack, it holds the image metadata, which glance-registry saves into the MySQL database.
Image Store: stores the image files. It is reached by glance-api through the store backend interface; through this interface Glance fetches image files and hands them to Nova to create virtual machines. Through the Store Adapter, Glance supports multiple image store backends, including swift, filesystem, S3, sheepdog, rbd and cinder.
Image formats supported by Glance:
raw: unstructured image format
vhd: a common virtual machine disk format usable with VMware, Xen, VirtualBox, etc.
vmdk: VMware's virtual machine disk format
vdi: a virtual machine disk format supported by VirtualBox, QEMU, etc.
qcow2: a QEMU disk format that can grow dynamically (used by default)
aki: Amazon kernel image
ari: Amazon ramdisk image
ami: Amazon machine image
Glance access permissions:
Public: usable by all tenants
Private: owned by a project; only usable by the tenant of the image owner
Shared: a non-public image can be shared with specific tenants via the member-* operations
Protected: cannot be deleted
Image status values:
Queued: no image data has been uploaded yet, only the image metadata exists
Saving: the image data is being uploaded
Active: normal state
Deleted / pending_delete: the image has been deleted / is waiting to be deleted
Killed: the image metadata is incorrect and the image is waiting to be deleted

Setting up Glance
Create the database:
mysql -uroot -popenstack
create database glance;
grant all privileges on glance.* to 'glance'@'localhost' identified by 'openstack';
grant all privileges on glance.* to 'glance'@'%' identified by 'openstack';
Create the glance user and add the admin role in the service project:
source admin-openrc
openstack user create --domain default --password-prompt glance   # enter its password
openstack role add --project service --user glance admin
openstack user list   # list the created users
Create the glance service and endpoints:
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://quan:9292
openstack endpoint create --region RegionOne image internal http://quan:9292
openstack endpoint create --region RegionOne image admin http://quan:9292
Install the packages and configure:
yum -y install openstack-glance
vi /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:openstack@quan/glance
[keystone_authtoken]
auth_uri = http://quan:5000
auth_url = http://quan:35357
memcached_servers = quan:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = openstack
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
vi /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance:openstack@quan/glance
[keystone_authtoken]
auth_uri = http://quan:5000
auth_url = http://quan:35357
memcached_servers = quan:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = openstack
[paste_deploy]
flavor = keystone
Initialize the database:
su -s /bin/sh -c "glance-manage db_sync" glance
Start the services and enable them at boot:
systemctl enable openstack-glance-api.service openstack-glance-registry.service && systemctl start openstack-glance-api.service openstack-glance-registry.service
Verification:
source admin-openrc
Download the test image:
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
Create the image:
openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
List the existing images:
openstack image list
Show the details of an image:
openstack image show <image id>
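The visibility and protection settings described above can also be driven from the CLI. A hedged sketch, not part of the original notes; the image name cirros-private is made up for illustration:
openstack image create "cirros-private" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --private
openstack image set --protected cirros      # the image can no longer be deleted
openstack image set --unprotected cirros    # clear the flag again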
Nova component
Nova is the most central component of OpenStack; in the end the other OpenStack components exist to serve it. It manages compute resources for virtual machines according to user requirements.
Nova architecture diagram: http://m.qpic.cn/psb?/V12uCjhD3ATBKt/bKTJmZis5k..ds6fjUYXv8KDu9EzeaB4WYyV883uAq8!/b/dL8AAAAAAAAA&bo=*QE1AQAAAAADB.o!&rf=viewer_4
The current Nova consists mainly of four core services: API, Compute, Conductor and Scheduler, which communicate over AMQP. API is the HTTP entry point into Nova. Compute talks to the VMM (virtual machine manager) to run virtual machines and manage their life cycle (usually one Compute service per host). Scheduler selects the most suitable node from the available pool to create a virtual machine instance. Conductor is mainly responsible for interacting with the database.
Nova logical modules:
Nova API: the HTTP service that receives and handles HTTP requests from clients.
Nova Cell: the Cell sub-service exists to make horizontal scaling and large deployments easier without increasing the complexity of the database and the RPC message broker. On top of the host scheduling done by Nova Scheduler it adds region-level scheduling.
Nova Cert: manages certificates, for compatibility with AWS; AWS provides a full set of infrastructure and application services so that almost any application can run in the cloud.
Nova Compute: the most central service in Nova; it implements virtual machine management: creating, starting, pausing, shutting down and deleting virtual machines on compute nodes, migrating virtual machines between compute nodes, virtual machine security control, and managing virtual machine disk images and snapshots.
Nova Conductor: an RPC service that mainly provides database access. In earlier OpenStack releases the Nova Compute sub-service contained many database query methods; but since Nova Compute has to run on every compute node, a compromised compute node would gain full access to the database. With Nova Conductor, database access permissions can be controlled in one place.
Nova Scheduler: the Nova scheduling sub-service. When a client asks Nova to create a virtual machine, it decides on which node the virtual machine will be created.
Nova Console, Nova Consoleauth, Nova VNCProxy: the Nova console sub-services, which let clients access the console of a virtual machine instance remotely through a proxy.
Diagram of how Nova boots a virtual machine: http://m.qpic.cn/psb?/V12uCjhD3ATBKt/iy2efxOLLowl3RvoIcZ6d7KNZ3jcdOI7zY5XroEBPVM!/b/dDQBAAAAAAAA&bo=xQJnAgAAAAADJ6A!&rf=viewer_4
Nova Scheduler filter types
There are several ways of choosing the host a virtual machine runs on; Nova mainly supports these three:
ChanceScheduler (random scheduler): picks a node at random from all nodes where nova-compute is running normally.
FilterScheduler (filter scheduler): picks the best node according to the specified filters and weights.
CachingScheduler: a kind of FilterScheduler that additionally caches host resource information in local memory and refreshes it from the database with a periodic background task.
Nova Scheduler workflow diagram: http://m.qpic.cn/psb?/V12uCjhD3ATBKt/LpB5fYBuLUgMASXWrH*Emw5qwkWHKM7slpof.lF21DY!/b/dEYBAAAAAAAA&bo=OQODAQAAAAADB5o!&rf=viewer_4
FilterScheduler first applies the configured filters to get the hosts that satisfy the conditions (for example, memory usage below 50%), then recomputes weights for those hosts, sorts them and picks the best one. The specific filters include:
1) RetryFilter: retry filtering. Suppose Host1, Host2 and Host3 pass the filters and Host1, having the highest weight, is chosen; if for some reason the VM fails to land on Host1, nova-scheduler reschedules, and Host1 is excluded because of the failure. The number of retries can be set with scheduler_max_attempts=3.
2) AvailabilityZoneFilter: availability-zone filtering, which provides fault tolerance and isolation. Compute nodes can be placed into an availability zone (AZ), and an AZ can be specified when creating a VM so that the instance lands on a host in that AZ.
3) RamFilter: memory filtering. When creating a VM a flavor is chosen; hosts that cannot satisfy the flavor's memory requirement are filtered out. Overcommit setting: ram_allocation_ratio=3 (a compute node with 16 GB of RAM is treated by OpenStack as having 48 GB).
4) CoreFilter: CPU core filtering. Hosts that cannot satisfy the flavor's core requirement are filtered out. CPU overcommit setting: cpu_allocation_ratio=16.0 (a 24-core compute node is treated by OpenStack as having 384 cores).
5) DiskFilter: disk capacity filtering. Hosts that cannot satisfy the flavor's disk requirement are filtered out. Disk overcommit setting: disk_allocation_ratio=1.0 (increasing disk overcommit is not recommended).
6) ComputeFilter: nova-compute service filtering; hosts whose nova-compute service is not healthy are filtered out when creating a VM.
7) ComputeCapabilitiesFilter: filters on the capabilities of the compute node, for example x86_64.
8) ImagePropertiesFilter: matches compute nodes against the properties of the chosen image; for example, if an image should only run on a KVM hypervisor, this can be specified with the "Hypervisor Type" property.
9) ServerGroupAntiAffinityFilter: places instances on different nodes. For example, with vm1, vm2, vm3 and compute nodes Host1, Host2, Host3, create a server group "group-1" with the anti-affinity policy:
nova server-group-create group-1 anti-affinity
nova boot --image IMAGE_ID --flavor 1 --hint group=GROUP1_UUID vm1
nova boot --image IMAGE_ID --flavor 1 --hint group=GROUP1_UUID vm2
nova boot --image IMAGE_ID --flavor 1 --hint group=GROUP1_UUID vm3
10) ServerGroupAffinityFilter: places instances on the same node. For example, with vm1, vm2, vm3 and compute nodes Host1, Host2, Host3, create a server group "group-2" with the affinity policy:
nova server-group-create group-2 affinity
nova boot --image IMAGE_ID --flavor 1 --hint group=GROUP2_UUID vm1
nova boot --image IMAGE_ID --flavor 1 --hint group=GROUP2_UUID vm2
nova boot --image IMAGE_ID --flavor 1 --hint group=GROUP2_UUID vm3
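As a rough illustration of where the filters and overcommit ratios above are configured, the following nova.conf fragment is a hedged sketch for a Queens-era controller; the exact section and option names vary between releases, and the filter list and ratios shown are example values rather than settings used later in these notes:
[filter_scheduler]
enabled_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
[DEFAULT]
ram_allocation_ratio = 3.0
cpu_allocation_ratio = 16.0
disk_allocation_ratio = 1.0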
Setting up the Nova component
Setting up the Nova controller node
Database operations:
mysql -uroot -popenstack
create database nova_api;
create database nova;
create database nova_cell0;
grant all privileges on nova_api.* to 'nova'@'localhost' identified by 'openstack';
grant all privileges on nova_api.* to 'nova'@'%' identified by 'openstack';
grant all privileges on nova.* to 'nova'@'localhost' identified by 'openstack';
grant all privileges on nova.* to 'nova'@'%' identified by 'openstack';
grant all privileges on nova_cell0.* to 'nova'@'localhost' identified by 'openstack';
grant all privileges on nova_cell0.* to 'nova'@'%' identified by 'openstack';
Create the nova user and add the admin role in the service project:
source admin-openrc
openstack user create --domain default --password-prompt nova    # create the nova user
openstack role add --project service --user nova admin           # add the nova user to the admin role of the service project
Create the compute service and endpoints:
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://quan:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://quan:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://quan:8774/v2.1
Create the placement user and add the admin role in the service project:
source admin-openrc
openstack user create --domain default --password-prompt placement   # create the placement user
openstack role add --project service --user placement admin          # add the placement user to the admin role of the service project
Create the placement service and endpoints:
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://quan:8778
openstack endpoint create --region RegionOne placement internal http://quan:8778
openstack endpoint create --region RegionOne placement admin http://quan:8778
How to delete an endpoint:
openstack endpoint list | grep placement   # list the endpoints
openstack endpoint delete <endpoint id>    # delete an endpoint by id
Install the packages and configure:
yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api
vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@quan
my_ip = 172.16.1.221
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:openstack@quan/nova_api
[database]
connection = mysql+pymysql://nova:openstack@quan/nova
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://quan:5000
auth_url = http://quan:35357
memcached_servers = quan:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = openstack
[vnc]
enabled = true
vncserver_listen = 172.16.1.221
vncserver_proxyclient_address = 172.16.1.221
[glance]
api_servers = http://quan:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://quan:35357/v3
username = placement
password = openstack
vim /etc/httpd/conf.d/00-nova-placement-api.conf   # append to the end
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
Restart the httpd service:
systemctl restart httpd
Modify the following file (to work around a bug when initializing the nova_api database schema):
vi /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py
Add "use_tpool" at line 175.
Initialize the nova_api database schema:
su -s /bin/sh -c "nova-manage api_db sync" nova
Register the cell0 database:
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create cell1:
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Initialize the nova database:
su -s /bin/sh -c "nova-manage db sync" nova
Verify that cell0 and cell1 are registered:
nova-manage cell_v2 list_cells
Start the services and enable them at boot:
systemctl enable openstack-nova-api openstack-nova-consoleauth openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
systemctl start openstack-nova-api openstack-nova-consoleauth openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
Verification:
openstack compute service list
Setting up the Nova compute node
Install the packages and configure:
yum -y install openstack-nova-compute
vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@quan
my_ip = 172.16.1.222
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://quan:5000
auth_url = http://quan:35357
memcached_servers = quan:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = openstack
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 172.16.1.222
novncproxy_base_url = http://172.16.1.221:6080/vnc_auto.html
[glance]
api_servers = http://quan:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://quan:35357/v3
username = placement
password = openstack
Check whether the machine supports hardware virtualization:
egrep -c '(vmx|svm)' /proc/cpuinfo
If this returns 0, modify /etc/nova/nova.conf:
vi /etc/nova/nova.conf
[libvirt]
virt_type = qemu
Start the services and enable them at boot:
systemctl enable libvirtd openstack-nova-compute && systemctl start libvirtd openstack-nova-compute
Add the compute node to the cell database (run on the controller node):
source admin-openrc
openstack compute service list --service nova-compute
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
vi /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300
Verification:
source admin-openrc
openstack compute service list
openstack catalog list
openstack image list
nova-status upgrade check

Neutron component
Neutron is the OpenStack project that provides network services between interface devices, and it is managed by other OpenStack services such as Nova. Neutron gives the OpenStack cloud a more flexible way of partitioning physical networks and provides each tenant with an independent network environment in a multi-tenant setting; it also exposes an API for doing so. A "network" in Neutron is an object that users can create; mapped to the physical world, it is like a huge switch with an unlimited number of virtual ports that can be created and destroyed dynamically.
The network virtualization capabilities Neutron provides are:
(1) Virtualization of layers 2 to 7: L2 (virtual switch), L3 (virtual router and load balancer), L4-L7 (virtual firewall), etc.
(2) Network connectivity: layer-2 and layer-3 networks
(3) Tenant isolation
(4) Network security
(5) Network scalability
(6) A REST API
(7) More advanced services, such as LBaaS
Neutron architecture diagram: http://m.qpic.cn/psb?/V12uCjhD3ATBKt/Ei6CaKeBs.55JXz9GIW8xuGBeMGe*rVaB*3D3cGQDsY!/b/dFIBAAAAAAAA&bo=vQLoAQAAAAADB3Q!&rf=viewer_4
In general, creating a Neutron network goes like this:
1. The administrator obtains a set of IP addresses routable on the internet and creates an external network and subnet.
2. The tenant creates a network and subnet.
3. The tenant creates a router and connects the tenant subnet to the external network.
4. The tenant creates virtual machines.
Concepts in Neutron (a small CLI sketch follows this list)
network: a network is an isolated layer-2 broadcast domain. Neutron supports multiple network types, including local, flat, VLAN, VxLAN and GRE.
local: a local network is isolated from other networks and nodes. Instances in a local network can only talk to instances on the same node in the same network; local networks are mainly used for single-node testing.
flat: a flat network has no VLAN tagging. Instances in a flat network can talk to instances in the same network and can span multiple nodes.
vlan: a vlan network uses 802.1q tagging. A VLAN is a layer-2 broadcast domain; instances in the same VLAN can communicate, and different VLANs can only communicate through a router. VLAN networks can span nodes and are the most widely used network type.
vxlan: vxlan is an overlay network based on tunneling. A vxlan network is distinguished from other vxlan networks by a unique segmentation ID (also called VNI). In vxlan, frames are encapsulated with the VNI into UDP packets for transport. Because layer-2 frames are carried over layer 3, vxlan overcomes the limitations of VLANs and of the physical network infrastructure.
gre: gre is an overlay network similar to vxlan; the main difference is that it encapsulates in IP packets rather than UDP.
Different networks are isolated from each other at layer 2. A network must belong to a Project (tenant), and a Project can contain multiple networks; network and Project have a many-to-one relationship.
subnet: a subnet is an IPv4 or IPv6 address block. Instance IPs are allocated from a subnet. Each subnet defines its IP range and mask. A subnet belongs to exactly one network; a network can have several subnets, which can be different IP blocks but must not overlap.
Example of a valid configuration:
network A
subnet A-a: 10.10.1.0/24 {"start":"10.10.1.1","end":"10.10.1.50"}
subnet A-b: 10.10.2.0/24 {"start":"10.10.2.1","end":"10.10.2.50"}
Example of an invalid configuration (the subnets overlap):
network A
subnet A-a: 10.10.1.0/24 {"start":"10.10.1.1","end":"10.10.1.50"}
subnet A-b: 10.10.1.0/24 {"start":"10.10.1.51","end":"10.10.1.100"}
Note: what is checked here is not whether the IP allocation ranges overlap, but whether the subnets themselves overlap (both are 10.10.1.0/24).
port: a port can be seen as a port on a virtual switch. A port defines a MAC address and an IP address; when an instance's virtual NIC (VIF, Virtual Interface) is bound to the port, the port assigns its MAC and IP to the VIF. A port belongs to exactly one subnet; a subnet can have many ports.
Plugins and agents in Neutron: http://m.qpic.cn/psb?/V12uCjhD3ATBKt/Gm3J*.Vh27nLny6oXfuZlh.yXNYx.YE3I*Mwoea.MH4!/b/dL4AAAAAAAAA&bo=pAKJAQAAAAADBww!&rf=viewer_4
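To make the network, subnet and port relationship above concrete, here is a small hedged CLI sketch. It is not part of the original notes; the names net-demo, sub-a, sub-b and port-demo are made up for illustration, and the commands assume Neutron is already running as set up below:
openstack network create net-demo
openstack subnet create sub-a --network net-demo --subnet-range 10.10.1.0/24 --allocation-pool start=10.10.1.1,end=10.10.1.50
openstack subnet create sub-b --network net-demo --subnet-range 10.10.2.0/24 --allocation-pool start=10.10.2.1,end=10.10.2.50
openstack port create --network net-demo port-demo   # the port receives a MAC address plus an IP from one of the subnets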
Setting up Neutron (linuxbridge + vxlan mode)
Controller node:
Database operations:
mysql -uroot -popenstack
create database neutron;
grant all privileges on neutron.* to 'neutron'@'localhost' identified by 'openstack';
grant all privileges on neutron.* to 'neutron'@'%' identified by 'openstack';
Create the neutron user and add the admin role in the service project:
source admin-openrc
openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin
Create the network service and endpoints:
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://quan:9696
openstack endpoint create --region RegionOne network internal http://quan:9696
openstack endpoint create --region RegionOne network admin http://quan:9696
Install the packages and configure:
yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
vi /etc/neutron/neutron.conf
[database]
connection = mysql+pymysql://neutron:openstack@quan/neutron
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:openstack@quan
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken]
auth_uri = http://quan:5000
auth_url = http://quan:35357
memcached_servers = quan:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = openstack
[nova]
auth_url = http://quan:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = openstack
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens34   # the external network interface
[vxlan]
enable_vxlan = true
local_ip = 172.16.1.221
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Make sure the operating system kernel supports bridge filtering:
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
sysctl -p
If a "No such file or directory" error appears, run the following and then run sysctl -p again:
modinfo br_netfilter    # show the kernel module information
modprobe br_netfilter   # load the kernel module
vi /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = 172.16.1.221
metadata_proxy_shared_secret = openstack
vi /etc/nova/nova.conf
[neutron]
url = http://quan:9696
auth_url = http://quan:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = openstack
service_metadata_proxy = true
metadata_proxy_shared_secret = openstack
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Initialize the neutron database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the nova service:
systemctl restart openstack-nova-api
Start the services and enable them at boot:
systemctl enable neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
systemctl start neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
systemctl enable neutron-l3-agent && systemctl start neutron-l3-agent
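A quick hedged check, not part of the original notes, to confirm the bridge filtering prerequisites and the controller-side agents are in place before configuring the compute node:
lsmod | grep br_netfilter                                                        # the module should be loaded
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables    # both should report 1
openstack network agent list                                                     # the DHCP, metadata, L3 and Linux bridge agents should show as alive and UP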
Compute node:
Install the packages and configure:
yum -y install openstack-neutron-linuxbridge ebtables ipset
vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@quan
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://quan:5000
auth_url = http://quan:35357
memcached_servers = quan:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = openstack
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens34
[vxlan]
enable_vxlan = true
local_ip = 172.16.1.222
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
vi /etc/nova/nova.conf
[neutron]
url = http://quan:9696
auth_url = http://quan:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = openstack
Restart the nova-compute service:
systemctl restart openstack-nova-compute
Start the service and enable it at boot:
systemctl enable neutron-linuxbridge-agent && systemctl start neutron-linuxbridge-agent
Verification (controller node):
source admin-openrc
openstack extension list --network
openstack network agent list

Horizon component
Horizon is the UI (Dashboard): a web management portal for the various OpenStack services that simplifies how users operate them.
Setting up Horizon
Install the package and configure:
yum -y install openstack-dashboard
vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "quan"
ALLOWED_HOSTS = ['*']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'quan:11211',
    }
}
# comment out the other cache definitions
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = 'user'
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_quotas': True,
    'enable_distributed_router': True,
    'enable_ha_router': True,
    'enable_lb': True,
    'enable_firewall': True,
    'enable_vpn': False,
    'enable_fip_topology_check': True,
}
TIME_ZONE = "Asia/Chongqing"
vi /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}
Restart the related services:
systemctl restart httpd.service memcached.service
Access URL: http://172.16.1.221/dashboard/
To disable the domain prompt:
vi /etc/openstack-dashboard/local_settings
#OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True   # comment out this line
Restart the related services:
systemctl restart httpd.service memcached.service
User name: admin   Password: openstack
Creating a virtual machine instance from the command line
Create the provider network (external network):
source admin-openrc
openstack network create --share --external \
--provider-physical-network provider \
--provider-network-type flat provider
Create the external subnet (in the same network as the physical network):
openstack subnet create --network provider \
--allocation-pool start=172.16.1.231,end=172.16.1.240 \
--dns-nameserver 8.8.4.4 --gateway 172.16.1.1 \
--subnet-range 172.16.1.0/24 provider
Create the private (self-service) network:
source demo-openrc
openstack network create selfservice            # create the private network
Create the private network's subnet:
openstack subnet create --network selfservice \
--dns-nameserver 8.8.4.4 --gateway 192.168.0.1 \
--subnet-range 192.168.0.0/24 selfservice
openstack router create router                       # create the virtual router
openstack router add subnet router selfservice       # add the subnet to the router
openstack router set router --external-gateway provider   # set the router's external gateway
Verification:
source admin-openrc
ip netns
openstack port list --router router
ping -c 4 <gateway ip>
Create a flavor (the template an instance is started from: how many vCPUs, how much memory):
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
List the flavors:
source demo-openrc
openstack flavor list
Generate a key pair:
source demo-openrc
ssh-keygen -q -N ""
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
openstack keypair list
Add security group rules:
openstack security group rule create --proto icmp default                 # allow ping
openstack security group rule create --proto tcp --dst-port 22 default    # allow TCP port 22 (SSH)
Check everything:
source demo-openrc
openstack flavor list
openstack image list
openstack network list
openstack security group list
openstack security group rule list
Launch an instance
Create a virtual machine (the image can be given by id or by name; SELFSERVICE_NET_ID is the id of the selfservice network, and selfservice-instance is the instance name):
openstack server create --flavor m1.nano --image cirros \
--nic net-id=SELFSERVICE_NET_ID --security-group default \
--key-name mykey selfservice-instance
Inspect the virtual machine:
openstack server list                  # list your virtual machines
openstack server show <instance id>    # show the details of an instance
Bind the floating IP through the dashboard.
Show the instance console log:
openstack console log show <instance id>

Cinder component
Cinder provides a REST API that lets users query and manage volumes, volume snapshots and volume types, and a scheduler that dispatches volume-creation requests and optimizes the allocation of storage resources. Through its driver architecture it supports many back-end storage options, including LVM, NFS, Ceph and commercial storage products such as EMC and IBM.
Cinder architecture diagram: http://m.qpic.cn/psb?/V12uCjhD3ATBKt/FpuhoZP0gP2rwhfFn*1Q1BXUZlHCtEvh7xmNRgJYqiw!/b/dL8AAAAAAAAA&bo=CQIYAQAAAAARByI!&rf=viewer_4
Components of Cinder:
cinder-api: receives API requests and calls cinder-volume to carry out the operations.
cinder-volume: the service that manages volumes; it works together with the volume provider and manages the volume life cycle. A node running the cinder-volume service is called a storage node.
cinder-scheduler: picks the most suitable storage node to create a volume, based on its scheduling algorithms.
volume provider: the storage device that holds the data and supplies physical storage space for volumes. cinder-volume supports multiple volume providers; each provider works with cinder-volume through its own driver.
Message Queue: the Cinder sub-services communicate and cooperate through the message queue. The message queue decouples the sub-services; this loose structure is an important characteristic of a distributed system.
Database: Cinder has data that needs to be stored in a database, usually MySQL; the database runs on the controller node.
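The volume types mentioned above are how user-facing choices are mapped to specific back ends. A hedged sketch, not part of the original notes, of tying a type to the LVM back end configured in the next steps; the backend name LVM_iSCSI is an assumption and would have to match a volume_backend_name set under the [lvm] section of cinder.conf on the storage node:
openstack volume type create lvm
openstack volume type set lvm --property volume_backend_name=LVM_iSCSI   # assumed backend name
openstack volume create --size 1 --type lvm testvol                      # testvol is a made-up volume name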
Setting up the Cinder component
Controller node
Database operations:
mysql -uroot -popenstack
create database cinder;
grant all privileges on cinder.* to 'cinder'@'localhost' identified by 'openstack';
grant all privileges on cinder.* to 'cinder'@'%' identified by 'openstack';
Create the cinder user and add the admin role in the service project:
source admin-openrc
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
Create the cinder services and endpoints:
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
openstack endpoint create --region RegionOne volumev2 public http://quan:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://quan:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://quan:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://quan:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://quan:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://quan:8776/v3/%\(project_id\)s
Install the packages and configure:
yum -y install openstack-cinder
vim /etc/cinder/cinder.conf
[database]
connection = mysql+pymysql://cinder:openstack@quan/cinder
[DEFAULT]
transport_url = rabbit://openstack:openstack@quan
auth_strategy = keystone
my_ip = 172.16.1.221
[keystone_authtoken]
auth_uri = http://quan:5000
auth_url = http://quan:35357
memcached_servers = quan:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = openstack
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
Initialize the database:
su -s /bin/sh -c "cinder-manage db sync" cinder
Configure the compute service to use Cinder:
vi /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
Restart the compute service:
systemctl restart openstack-nova-api
Start the services and enable them at boot:
systemctl enable openstack-cinder-api openstack-cinder-scheduler && systemctl start openstack-cinder-api openstack-cinder-scheduler
Verification:
openstack volume service list   # a State of "up" means the service started successfully
Storage node (needs at least one disk in addition to the system disk)
Install the packages and configure:
yum -y install lvm2 device-mapper-persistent-data
systemctl enable lvm2-lvmetad && systemctl start lvm2-lvmetad
pvcreate /dev/sdb                  # create the PV
vgcreate cinder-volumes /dev/sdb   # create the VG
vi /etc/lvm/lvm.conf
devices {
    filter = [ "a/dev/sda/", "a/dev/sdb/", "r/.*/" ]
}
# "a" means accept, "r" means reject. Use lsblk to check whether the system disk uses LVM; if sda does not use LVM, "a/dev/sda/" can be omitted.
yum -y install openstack-cinder targetcli python-keystone
vi /etc/cinder/cinder.conf
[database]
connection = mysql+pymysql://cinder:openstack@quan/cinder
[DEFAULT]
transport_url = rabbit://openstack:openstack@quan
auth_strategy = keystone
my_ip = 172.16.1.223
enabled_backends = lvm
glance_api_servers = http://quan:9292
[keystone_authtoken]
auth_uri = http://quan:5000
auth_url = http://quan:35357
memcached_servers = quan:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = openstack
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes   # the VG name
iscsi_protocol = iscsi
iscsi_helper = lioadm
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
Start the services and enable them at boot:
systemctl enable openstack-cinder-volume target && systemctl start openstack-cinder-volume target
Verification:
source admin-openrc
openstack volume service list
Attaching a virtual disk to a virtual machine:
source demo-openrc
openstack volume create --size 2 volume2                    # --size sets the volume size, here 2 GB
openstack volume list                                       # the status should be "available"
openstack server add volume selfservice-instance volume2    # attach the volume to the instance
openstack volume list                                       # the status becomes "in-use"; log in to the instance and check the attached disk with fdisk -l

Swift component
Swift is known as object storage; it provides strong scalability, redundancy and durability. Object storage is used for long-term storage of permanent, static data.
Setting up the Swift component
Controller node
Create the swift user and add the admin role in the service project:
source admin-openrc
openstack user create --domain default --password-prompt swift
openstack role add --project service --user swift admin
Create the swift service and endpoints:
openstack service create --name swift --description "OpenStack Object Storage" object-store
openstack endpoint create --region RegionOne object-store public http://quan:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://quan:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://quan:8080/v1
Install the packages:
yum -y install openstack-swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached
Download the proxy-server configuration file and configure it:
curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/queens
vi /etc/swift/proxy-server.conf
[DEFAULT]
bind_port = 8080
swift_dir = /etc/swift
user = swift
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
[app:proxy-server]
use = egg:swift#proxy
account_autocreate = True
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,user
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
www_authenticate_uri = http://quan:5000
auth_url = http://quan:35357
memcached_servers = quan:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = openstack
delay_auth_decision = True
[filter:cache]
memcache_servers = quan:11211
Storage nodes (all of them)
Install the packages:
yum -y install xfsprogs rsync
Format the disks:
mkfs.xfs /dev/sdb
mkfs.xfs /dev/sdc
mkdir -p /srv/node/sdb
mkdir -p /srv/node/sdc
Configure automatic mounting:
vi /etc/fstab
/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
mount /srv/node/sdb
mount /srv/node/sdc
or simply: mount -a
vi /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 172.16.1.224   # adjust for each node
[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
Start the service and enable it at boot:
systemctl enable rsyncd && systemctl start rsyncd
Install the packages:
yum -y install openstack-swift-account openstack-swift-container openstack-swift-object
Download the configuration files and configure them:
curl -o /etc/swift/account-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/queens
curl -o /etc/swift/container-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/queens
curl -o /etc/swift/object-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/queens
vi /etc/swift/account-server.conf
[DEFAULT]
bind_ip = 172.16.1.224
bind_port = 6202
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True
[pipeline:main]
pipeline = healthcheck recon account-server
[filter:recon]
recon_cache_path = /var/cache/swift
vi /etc/swift/container-server.conf
[DEFAULT]
bind_ip = 172.16.1.224
bind_port = 6201
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True
[filter:recon]
recon_cache_path = /var/cache/swift
vi /etc/swift/object-server.conf
[DEFAULT]
bind_ip = 172.16.1.224
bind_port = 6200
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True
[pipeline:main]
pipeline = healthcheck recon object-server
[filter:recon]
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock
Fix the file permissions:
chown -R swift:swift /srv/node
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 755 /var/cache/swift
This ends the storage-node part; all of the steps above are run on every storage node.
Controller node operations
cd /etc/swift
swift-ring-builder account.builder create 10 3 1
Add the first storage node:
swift-ring-builder account.builder add \
--region 1 --zone 1 --ip 172.16.1.224 --port 6202 --device sdb --weight 100
swift-ring-builder account.builder add \
--region 1 --zone 1 --ip 172.16.1.224 --port 6202 --device sdc --weight 100
Add the second storage node:
swift-ring-builder account.builder add \
--region 1 --zone 2 --ip 172.16.1.225 --port 6202 --device sdb --weight 100
swift-ring-builder account.builder add \
--region 1 --zone 2 --ip 172.16.1.225 --port 6202 --device sdc --weight 100
swift-ring-builder account.builder
swift-ring-builder account.builder rebalance
swift-ring-builder container.builder create 10 3 1
Add the first storage node:
swift-ring-builder container.builder add \
--region 1 --zone 1 --ip 172.16.1.224 --port 6201 --device sdb --weight 100
swift-ring-builder container.builder add \
--region 1 --zone 1 --ip 172.16.1.224 --port 6201 --device sdc --weight 100
Add the second storage node:
swift-ring-builder container.builder add \
--region 1 --zone 2 --ip 172.16.1.225 --port 6201 --device sdb --weight 100
swift-ring-builder container.builder add \
--region 1 --zone 2 --ip 172.16.1.225 --port 6201 --device sdc --weight 100
swift-ring-builder container.builder
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder create 10 3 1
Add the first storage node:
swift-ring-builder object.builder add \
--region 1 --zone 1 --ip 172.16.1.224 --port 6200 --device sdb --weight 100
swift-ring-builder object.builder add \
--region 1 --zone 1 --ip 172.16.1.224 --port 6200 --device sdc --weight 100
Add the second storage node:
swift-ring-builder object.builder add \
--region 1 --zone 2 --ip 172.16.1.225 --port 6200 --device sdb --weight 100
swift-ring-builder object.builder add \
--region 1 --zone 2 --ip 172.16.1.225 --port 6200 --device sdc --weight 100
swift-ring-builder object.builder
swift-ring-builder object.builder rebalance
Copy the generated ring files to the object storage nodes:
scp account.ring.gz container.ring.gz object.ring.gz object01:/etc/swift/
scp account.ring.gz container.ring.gz object.ring.gz object02:/etc/swift/
Fetch the swift.conf configuration file:
curl -o /etc/swift/swift.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/queens
vi /etc/swift/swift.conf
[swift-hash]
swift_hash_path_suffix = HASH_PATH_SUFFIX   # replace with a unique value of your own
swift_hash_path_prefix = HASH_PATH_PREFIX   # replace with a unique value of your own
[storage-policy:0]
name = Policy-0
default = yes
Distribute swift.conf to the object storage nodes:
scp /etc/swift/swift.conf object01:/etc/swift/
scp /etc/swift/swift.conf object02:/etc/swift/
On the controller node and on all object storage nodes:
chown -R root:swift /etc/swift
Controller node:
systemctl enable openstack-swift-proxy memcached && systemctl start openstack-swift-proxy memcached
Object storage nodes (all of them):
systemctl enable openstack-swift-account openstack-swift-account-auditor openstack-swift-account-reaper openstack-swift-account-replicator
systemctl start openstack-swift-account openstack-swift-account-auditor openstack-swift-account-reaper openstack-swift-account-replicator
systemctl enable openstack-swift-container openstack-swift-container-auditor openstack-swift-container-replicator openstack-swift-container-updater
systemctl start openstack-swift-container openstack-swift-container-auditor openstack-swift-container-replicator openstack-swift-container-updater
systemctl enable openstack-swift-object openstack-swift-object-auditor openstack-swift-object-replicator openstack-swift-object-updater
systemctl start openstack-swift-object openstack-swift-object-auditor openstack-swift-object-replicator openstack-swift-object-updater
Verification (controller node)
Note: first check /var/log/audit/audit.log; if it contains SELinux messages showing that the swift processes cannot access the data directories, fix the labels as follows:
chcon -R system_u:object_r:swift_data_t:s0 /srv/node
source demo-openrc
swift stat                                 # show the swift status
openstack container create container1
openstack object create container1 FILE    # upload a file into the container
openstack container list                   # list all containers
openstack object list container1           # list the files in container1
openstack object save container1 FILE      # download a file from the container
Reposted from: https://www.cnblogs.com/zhangquan-yw/p/10509017.html