Prerequisites

First, take a look at the overall architecture given in the official Ceph documentation:

[Figure: overall architecture]

As the figure shows, the setup consists of the Ceph cluster, RBD, the iSCSI gateways, and the initiators (i.e. the clients). The prerequisites are therefore:

A Ceph cluster in the HEALTH_OK state, with spare storage capacity (for the RBD image that will be created). This is the cluster I set up:

```
$ sudo ceph -s
  cluster:
    id:     51e9f534-b15a-4273-953c-9b56e9312510
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node1(active), standbys: node2, node3
    mds: cephfs-1/1/1 up {0=node1=up:active}
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   2 pools, 64 pgs
    objects: 508 objects, 1.9 GiB
    usage:   26 GiB used, 6.0 TiB / 6.0 TiB avail
    pgs:     64 active+clean
```
Besides the cluster:

- Two Linux hosts to act as the iSCSI gateways; they can be hosts from the cluster itself.
- One Linux host to act as the client on the Linux side.
- One Windows host to act as the client on the Windows side.
Configuring the ceph-iscsi gateways
Modify the OSD configuration

As described in the official documentation, first adjust the OSD settings:

```
[osd]
osd heartbeat grace = 20
osd heartbeat interval = 5
```
Add the settings above to /etc/ceph/ceph.conf on every Ceph node; ceph-deploy can of course be used to push the file:

```
$ vim ceph.conf
$ cat ceph.conf
[global]
fsid = 51e9f534-b15a-4273-953c-9b56e9312510
mon_initial_members = node1, node2, node3
mon_host = 192.168.90.233,192.168.90.234,192.168.90.235
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 192.168.0.0/16

[osd]
osd heartbeat grace = 20
osd heartbeat interval = 5

$ ceph-deploy --overwrite-conf config push node1 node2 node3
```
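The new heartbeat values only take effect on the OSDs once they are restarted; if restarting every OSD is inconvenient, they can probably also be injected into the running daemons. A minimal sketch, not part of the original steps:

```
$ sudo ceph tell osd.* injectargs '--osd_heartbeat_grace=20 --osd_heartbeat_interval=5'
```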
Download the required RPM packages

Here I went straight for the "Using the Command Line Interface" approach, which feels more reliable. According to the official documentation, the yum repositories need to provide the following RPM packages:

[Figure: required RPM packages]

Trying a plain yum install shows that only targetcli and python-rtslib can be installed, and both at versions lower than the documentation asks for. So, trouble. After a few hours of searching, I found the RPMs through the following links: "Build new RPM for 3.0" and "Missing ceph-iscsi-cli package". Create a new repo file:

```
$ sudo vim /etc/yum.repos.d/iscsi.repo
$ cat /etc/yum.repos.d/iscsi.repo
[ceph-iscsi]
name=Ceph-iscsi
baseurl=https://4.chacra.ceph.com/r/ceph-iscsi/master/88f3f67981c7da15448f140f711a1a8413d450b0/centos/7/flavors/default/noarch/
priority=1
gpgcheck=0

[tcmu-runner]
name=tcmu-runner
baseurl=https://3.chacra.ceph.com/r/tcmu-runner/master/eef511565078fb4e2ed52caaff16e6c7e75ed6c3/centos/7/flavors/default/x86_64/
priority=1
gpgcheck=0

[python-rtslib]
name=python-rtslib
baseurl=https://2.chacra.ceph.com/r/python-rtslib/master/67eb1605c697b6307d8083b2962f5170db13d306/centos/7/flavors/default/noarch/
priority=1
gpgcheck=0
```
I am using a local yum repository here, so download the packages above into it:

```
$ sudo yum install --downloadonly --downloaddir=yum/ceph-iscsi/ targetcli python-rtslib tcmu-runner ceph-iscsi
$ createrepo -p -d -o yum/ yum/
```
Note that there is no repo entry for targetcli above, because I could not find one. The standard Base repo (or the Ceph repo) can install targetcli-2.1.fb46-7.el7.noarch.rpm; the documentation asks for targetcli-2.1.fb47 or newer, but in later use this turned out not to matter, so targetcli can be left alone. What ends up downloaded is:

- targetcli-2.1.fb46-7.el7.noarch.rpm
- python-rtslib-2.1.fb68-1.noarch.rpm
- tcmu-runner-1.4.0-0.1.51.geef5115.el7.x86_64.rpm
- ceph-iscsi-3.2-8.g88f3f67.el7.noarch.rpm
Initial configuration of the ceph-iscsi gateways

If the nodes used as ceph-iscsi gateways are not part of the cluster, some initial configuration is needed:

1. Install ceph.
2. Copy /etc/ceph/ceph.conf from one of the cluster machines to /etc/ceph/ceph.conf on the gateway.
3. Copy /etc/ceph/ceph.client.admin.keyring from one of the cluster machines to /etc/ceph/ceph.client.admin.keyring on the gateway.

Steps 2 and 3 can also be done from the deploy node with ceph-deploy admin {node-gateway}, where {node-gateway} is the name of the gateway node. In effect this turns the ceph-iscsi gateway node into an admin node. Afterwards the gateway node should be able to run commands against the cluster, e.g. sudo ceph -s to query the current cluster status.
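A minimal sketch of these steps, run from the deploy node and assuming hypothetical gateway host names gw1 and gw2 (in my setup the gateways are cluster nodes, so this was not necessary):

```
# Install ceph on the gateway hosts and push the admin config/keyring to them
$ ceph-deploy install gw1 gw2
$ ceph-deploy admin gw1 gw2

# Or copy the files by hand from an existing admin node
$ scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring gw1:/etc/ceph/
```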
Install and configure iSCSI

The official documentation suggests switching to the root user first, which is a bit more convenient. Install the iSCSI packages on both gateway nodes (the required packages were already downloaded into the local repo above, so yum can install them directly):

```
# yum install -y ceph-iscsi
```
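Optionally, verify which versions were actually installed — a quick sanity check, not part of the original steps:

```
# rpm -q ceph-iscsi tcmu-runner python-rtslib targetcli
```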
Starting the services

First create the rbd pool, if it does not already exist:

```
# ceph osd lspools
1 cephfs_data
2 cephfs_metadata

# ceph osd pool create rbd 128
pool 'rbd' created
```
Create and edit the /etc/ceph/iscsi-gateway.cfg file:

```
# vim /etc/ceph/iscsi-gateway.cfg
# cat /etc/ceph/iscsi-gateway.cfg
[config]
# Name of the Ceph storage cluster. A suitable Ceph configuration file allowing
# access to the Ceph storage cluster from the gateway node is required, if not
# colocated on an OSD node.
cluster_name = ceph

# Place a copy of the ceph cluster's admin keyring in the gateway's /etc/ceph
# directory and reference the filename here
gateway_keyring = ceph.client.admin.keyring

# API settings.
# The API supports a number of options that allow you to tailor it to your
# local environment. If you want to run the API under https, you will need to
# create cert/key files that are compatible for each iSCSI gateway node, that is
# not locked to a specific node. SSL cert and key files *must* be called
# 'iscsi-gateway.crt' and 'iscsi-gateway.key' and placed in the '/etc/ceph/' directory
# on *each* gateway node. With the SSL files in place, you can use 'api_secure = true'
# to switch to https mode.

# To support the API, the bare minimum settings are:
api_secure = false

# Additional API configuration options are as follows, defaults shown.
# api_user = admin
# api_password = admin
# api_port = 5001
trusted_ip_list = 192.168.90.234,192.168.90.235
```
The trusted_ip_list above holds the IPs of the two gateways (multi-NIC setups are not discussed here). Copy the file onto the other gateway:

```
$ sudo scp cluster@node2:/etc/ceph/iscsi-gateway.cfg /etc/ceph/iscsi-gateway.cfg
```
Enable and start the rbd-target-api service on both gateways:

```
# systemctl daemon-reload
# systemctl enable rbd-target-api
# systemctl start rbd-target-api
```
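To confirm the API came up on each gateway, checking the service status (and its logs if something looks wrong) is usually enough — a small sanity check, not part of the original notes:

```
# systemctl status rbd-target-api
# journalctl -u rbd-target-api
```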
Configuration (doing this on just one of the gateways is enough):

```
# gwcli
/> cd /iscsi-targets/
/iscsi-targets> create iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
ok
/iscsi-targets> cd iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/gateways/
/iscsi-target...-igw/gateways> create node2 192.168.90.234
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
ok
/iscsi-target...-igw/gateways> create node3 192.168.90.235
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
ok
/iscsi-target...-igw/gateways> cd /disks
/disks> create pool=rbd image=disk_1 size=200G
ok
/disks> cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/hosts/
/iscsi-target...csi-igw/hosts> create iqn.1994-05.com.redhat:rh7-client
ok
/iscsi-target...at:rh7-client> auth username=myiscsiusername password=myiscsipassword
ok
/iscsi-target...at:rh7-client> disk add rbd/disk_1
ok
```
Configuration is done. Here is the resulting tree:

```
/> ls
o- / ......................................................................... [...]
  o- cluster ......................................................... [Clusters: 1]
  | o- ceph .......................................................... [HEALTH_WARN]
  |   o- pools .......................................................... [Pools: 3]
  |   | o- cephfs_data ... [(x3), Commit: 0.00Y/2028052096K (0%), Used: 2029431878b]
  |   | o- cephfs_metadata .... [(x3), Commit: 0.00Y/2028052096K (0%), Used: 77834b]
  |   | o- rbd ................ [(x3), Commit: 200G/2028052096K (10%), Used: 15352b]
  |   o- topology ................................................ [OSDs: 6,MONs: 3]
  o- disks ........................................................ [200G, Disks: 1]
  | o- rbd ............................................................ [rbd (200G)]
  |   o- disk_1 ................................................ [rbd/disk_1 (200G)]
  o- iscsi-targets ............................... [DiscoveryAuth: None, Targets: 1]
    o- iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw ..................... [Gateways: 2]
      o- disks .......................................................... [Disks: 1]
      | o- rbd/disk_1 ............................................... [Owner: node3]
      o- gateways ............................................ [Up: 2/2, Portals: 2]
      | o- node2 ............................................. [192.168.90.234 (UP)]
      | o- node3 ............................................. [192.168.90.235 (UP)]
      o- host-groups .................................................. [Groups : 0]
      o- hosts .............................................. [Hosts: 1: Auth: CHAP]
        o- iqn.1994-05.com.redhat:rh7-client .......... [Auth: CHAP, Disks: 1(200G)]
          o- lun 0 ................................ [rbd/disk_1(200G), Owner: node3]
```
Linux client configuration

On the Linux host acting as the client, install the required components:

```
$ sudo yum install -y iscsi-initiator-utils
$ sudo yum install -y device-mapper-multipath
```
Enable the multipathd service and configure it:

```
$ sudo mpathconf --enable --with_multipathd y
$ sudo vim /etc/multipath.conf
$ sudo cat /etc/multipath.conf
devices {
        device {
                vendor                 "LIO-ORG"
                hardware_handler       "1 alua"
                path_grouping_policy   "failover"
                path_selector          "queue-length 0"
                failback               60
                path_checker           tur
                prio                   alua
                prio_args              exclusive_pref_bit
                fast_io_fail_tmo       25
                no_path_retry          queue
        }
}
```
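Since multipathd was already started before /etc/multipath.conf was edited, restarting it so the new device settings are picked up is probably sensible — an extra step, not shown in the original transcript:

```
$ sudo systemctl restart multipathd
```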
Change the client (initiator) name:

```
$ sudo vim /etc/iscsi/initiatorname.iscsi
$ sudo cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:rh7-client
```
Edit the CHAP authentication settings:

```
$ sudo vim /etc/iscsi/iscsid.conf
$ sudo cat /etc/iscsi/iscsid.conf
...
# *************
# CHAP Settings
# *************

# To enable CHAP authentication set node.session.auth.authmethod
# to CHAP. The default is None.
node.session.auth.authmethod = CHAP

# To set a CHAP username and password for initiator
# authentication by the target(s), uncomment the following lines:
node.session.auth.username = myiscsiusername
node.session.auth.password = myiscsipassword
...
```
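Because both the initiator name and the CHAP settings were changed, restarting the iSCSI daemon so they take effect is probably a good idea — an extra step, not in the original notes:

```
$ sudo systemctl restart iscsid
```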
Discover the target:

```
$ sudo iscsiadm -m discovery -t st -p 192.168.90.234
192.168.90.234:3260,1 iqn.2003-01.org.linux-iscsi.rheln1
192.168.90.235:3260,2 iqn.2003-01.org.linux-iscsi.rheln1
```
Log in to the target:

```
$ sudo iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.rheln1 -l
```
Check whether it succeeded:

```
$ sudo multipath -ll
mpatha (360014050fedd563975249adb2e84e978) dm-2 LIO-ORG ,TCMU device
size=200G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| `- 3:0:0:0 sdc 8:32 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
  `- 2:0:0:0 sdb 8:16 active ready running
```
The "disk" now shows up directly in fdisk:

```
$ sudo fdisk -l
...
Disk /dev/mapper/mpatha: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 524288 bytes
...
```
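From here the device can be used like any other block device. A minimal sketch of formatting and mounting it, assuming the multipath alias mpatha from above and a hypothetical mount point /mnt/iscsi (not done in this walkthrough, where the disk is used from Windows instead):

```
$ sudo mkfs.xfs /dev/mapper/mpatha        # destroys any existing data on the LUN
$ sudo mkdir -p /mnt/iscsi
$ sudo mount /dev/mapper/mpatha /mnt/iscsi
$ df -h /mnt/iscsi
```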
Windows client configuration

Open Control Panel -> Administrative Tools -> iSCSI Initiator:

[Screenshot: Control Panel -> Administrative Tools]
[Screenshot: Administrative Tools -> iSCSI Initiator]

Change the initiator name:

[Screenshot: initiator name]

Add a discovery target portal:

[Screenshot: add discovery target portal]

The target appears:

[Screenshot: discovered target]

Connect to the target:

[Screenshot: connect to target]

Adjust the advanced settings:

[Screenshot: advanced settings]

The target now shows as connected:

[Screenshot: connected]

In Disk Management the disk shows up as attached:

[Screenshot: Disk Management]

Here I assigned it drive letter F: and successfully copied a file onto it:

[Screenshot: assigned as drive F:]
[Screenshot: file copied successfully]
Health Warning

Note that at this point the Ceph cluster may report a health warning:

```
$ sudo ceph -s
  cluster:
    id:     51e9f534-b15a-4273-953c-9b56e9312510
    health: HEALTH_WARN
            application not enabled on 1 pool(s)

  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node1(active), standbys: node2, node3
    mds: cephfs-1/1/1 up {0=node1=up:active}
    osd: 6 osds: 6 up, 6 in
    tcmu-runner: 2 daemons active

  data:
    pools:   3 pools, 192 pgs
    objects: 559 objects, 2.1 GiB
    usage:   27 GiB used, 6.0 TiB / 6.0 TiB avail
    pgs:     192 active+clean

  io:
    client: 2.5 KiB/s rd, 2 op/s rd, 0 op/s wr
```
The cause of this warning is explained in the official post "New in Luminous: pool tags", and ceph health detail also shows how to resolve it:

```
$ sudo ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
    application not enabled on pool 'rbd'
    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
```
So tag the newly created rbd pool with the rbd application:

```
$ sudo ceph osd pool application enable rbd rbd
enabled application 'rbd' on pool 'rbd'

$ sudo ceph -s
  cluster:
    id:     51e9f534-b15a-4273-953c-9b56e9312510
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node1(active), standbys: node2, node3
    mds: cephfs-1/1/1 up {0=node1=up:active}
    osd: 6 osds: 6 up, 6 in
    tcmu-runner: 2 daemons active

  data:
    pools:   3 pools, 192 pgs
    objects: 559 objects, 2.1 GiB
    usage:   27 GiB used, 6.0 TiB / 6.0 TiB avail
    pgs:     192 active+clean

  io:
    client: 2.0 KiB/s rd, 1 op/s rd, 0 op/s wr
```
Problem solved.