Integrating Ceph with Kubernetes
There are two references:
Reference 1: comprehensive coverage
Reference 2: rbd, covered in more detail
Ceph configuration
Run the following commands on the Ceph cluster:
```
[root@node1 ~]# ceph -s
  cluster:
    id:     365b02aa-db0c-11ec-b243-525400ce981f
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3 (age 41h)
    mgr: node2.dqryin(active, since 2d), standbys: node1.umzcuv
    osd: 12 osds: 10 up (since 2d), 10 in (since 2d)

  data:
    pools:   2 pools, 33 pgs
    objects: 1 objects, 19 B
    usage:   10 GiB used, 4.9 TiB / 4.9 TiB avail
    pgs:     33 active+clean

[root@node1 ~]# ceph mon stat
e11: 3 mons at {node1=[v2:172.70.10.181:3300/0,v1:172.70.10.181:6789/0],node2=[v2:172.70.10.182:3300/0,v1:172.70.10.182:6789/0],node3=[v2:172.70.10.183:3300/0,v1:172.70.10.183:6789/0]}, election epoch 100, leader 0 node1, quorum 0,1,2 node1,node2,node3
```

Choosing a ceph-csi version
Three components' versions are involved: Ceph (Octopus), Kubernetes (v1.24.0), and ceph-csi.
The current Ceph CSI to Kubernetes compatibility matrix is:
| Ceph CSI | Supported Kubernetes versions |
| --- | --- |
| v3.6.1 | v1.21, v1.22, v1.23 |
| v3.6.0 | v1.21, v1.22, v1.23 |
| v3.5.1 | v1.21, v1.22, v1.23 |
| v3.5.0 | v1.21, v1.22, v1.23 |
| v3.4.0 | v1.20, v1.21, v1.22 |
The Kubernetes version in use here is 1.24, so we go with the latest ceph-csi release, v3.6.1.
The Ceph release in use is Octopus. The Ceph to Ceph CSI compatibility matrix is too long to reproduce here; see the ceph-csi project site.
In summary, deploying ceph-csi v3.6.1 will do.
Downloading ceph-csi
Download the ceph-csi 3.6.1 source code: download link
We work with the contents of the rbd directory under deploy.
Deploying rbd
clusterID is the cluster ID, obtainable from `ceph -s`.
monitors can be found in /var/lib/ceph/365b02aa-db0c-11ec-b243-525400ce981f/mon.node1/config.
ceph.conf is simply copied from the cluster's configuration file, i.e. the corresponding contents of /etc/ceph/ceph.conf.
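Putting the values above together: the clusterID and monitor addresses go into csi-config-map.yaml under deploy/rbd/kubernetes. A sketch using this cluster's values (the surrounding structure is the one ceph-csi ships; only the clusterID and monitors entries are filled in from our cluster):

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
data:
  config.json: |-
    [
      {
        "clusterID": "365b02aa-db0c-11ec-b243-525400ce981f",
        "monitors": [
          "172.70.10.181:6789",
          "172.70.10.182:6789",
          "172.70.10.183:6789"
        ]
      }
    ]
```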
Getting the admin key
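On the cluster this is normally `ceph auth get-key client.admin`, or reading the admin keyring directly. As a runnable illustration of what the keyring looks like and how the key is extracted, here is a sketch against a sample file (the key below is a made-up placeholder, not a real one):

```shell
# Normally run on a monitor node: ceph auth get-key client.admin
# Here we simulate it by parsing a keyring file with a placeholder key.
cat > /tmp/demo-keyring <<'EOF'
[client.admin]
    key = Tm90QVJlYWxLZXlKdXN0QURlbW8=
EOF
# The key line has the form "key = <base64>", so split on " = "
awk -F' = ' '/key/ {print $2}' /tmp/demo-keyring
```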
Create csi-rbd-secret.yaml:
```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: default
stringData:
  userID: kubernetes
  userKey: AQDpR4xiIN/TJRAAIIv3DMRdm70RnsGs5/DW9g==
  encryptionPassphrase: test_passphrase
```

Creating an authorized user (in practice, the admin account could be used instead)
```shell
ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=k8s' mgr 'profile rbd pool=k8s'
```

When deploying, the images hosted on k8s.gcr.io cannot be pulled for well-known reasons; the main ones are:
```
k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0
k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1
k8s.gcr.io/sig-storage/csi-attacher:v3.4.0
k8s.gcr.io/sig-storage/csi-resizer:v1.4.0
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.4.0
```

Replace k8s.gcr.io/sig-storage with registry.aliyuncs.com/google_containers in the manifests.
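The replacement can be scripted rather than done by hand. A sketch with sed; the /tmp path and sample manifest below are illustrative stand-ins (against the real tree you would point it at the YAML files under deploy/rbd/kubernetes):

```shell
# Demonstrate the registry rewrite on a sample manifest line.
mkdir -p /tmp/csi-manifests
cat > /tmp/csi-manifests/provisioner.yaml <<'EOF'
          image: k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0
EOF
# Use '#' as the sed delimiter to avoid clashing with '/' in the image paths
sed -i 's#k8s.gcr.io/sig-storage#registry.aliyuncs.com/google_containers#g' /tmp/csi-manifests/*.yaml
grep image: /tmp/csi-manifests/provisioner.yaml
```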
Then apply the manifests.
Because Kubernetes 1.24.0 removed the Docker engine (dockershim), the image list after deployment is inspected with crictl:
```
[root@node1 kubernetes]# crictl images
IMAGE                                                             TAG       IMAGE ID        SIZE
docker.io/calico/cni                                              v3.23.1   90d97aa939bbf   111MB
docker.io/calico/node                                             v3.23.1   fbfd04bbb7f47   76.6MB
docker.io/calico/pod2daemon-flexvol                               v3.23.1   01dda8bd1b91e   8.67MB
docker.io/library/nginx                                           latest    de2543b9436b7   56.7MB
quay.io/tigera/operator                                           v1.27.1   02245817b973b   60.3MB
registry.aliyuncs.com/google_containers/coredns                   v1.8.6    a4ca41631cc7a   13.6MB
registry.aliyuncs.com/google_containers/etcd                      3.5.3-0   aebe758cef4cd   102MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.24.0   529072250ccc6   33.8MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.24.0   88784fb4ac2f6   31MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.24.0   77b49675beae1   39.5MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.24.0   e3ed7dee73e93   15.5MB
registry.aliyuncs.com/google_containers/pause                     3.6       6270bb605e12e   302kB
registry.aliyuncs.com/google_containers/pause                     3.7       221177c6082a8   311kB
```

Kubernetes 1.24.0 uses IPVS by default instead of iptables, so it is best to stop the iptables service, otherwise you may hit the following situation:
```
[root@node5 ~]# crictl pods
POD ID          CREATED       STATE   NAME                                        NAMESPACE          ATTEMPT   RUNTIME
5d6dd91fd83e2   5 hours ago   Ready   csi-rbdplugin-provisioner-c6d7486dd-2jk5w   default            0         (default)
354b1a8fd8b52   7 hours ago   Ready   csi-rbdplugin-wjpzd                         default            0         (default)
198dc5bf556df   5 days ago    Ready   calico-apiserver-6d4dd4bcf9-n8zgk           calico-apiserver   0         (default)
244e05f6f9c67   5 days ago    Ready   calico-typha-b76b84965-bhsxn                calico-system      0         (default)
4813ce009d806   5 days ago    Ready   calico-node-r6mv8                           calico-system      0         (default)
2bca356d7a28a   5 days ago    Ready   kube-proxy-89lgc                            kube-system        0         (default)
[root@node5 ~]# crictl ps -a
CONTAINER       IMAGE           CREATED       STATE     NAME                       ATTEMPT   POD ID
b798b5f047428   89f8fb0f77c15   4 hours ago   Running   csi-rbdplugin-controller   19        5d6dd91fd83e2
ed133e74dec67   89f8fb0f77c15   4 hours ago   Exited    csi-rbdplugin-controller   18        5d6dd91fd83e2
8ceafecc7d9f8   89f8fb0f77c15   5 hours ago   Running   liveness-prometheus        0         5d6dd91fd83e2
7ede68a34c5ab   89f8fb0f77c15   5 hours ago   Running   csi-rbdplugin              0         5d6dd91fd83e2
a3afb73c6ed92   551fd931edd5e   5 hours ago   Running   csi-resizer                0         5d6dd91fd83e2
753ded0a3a413   03e115718d258   5 hours ago   Running   csi-attacher               0         5d6dd91fd83e2
825eaf4f07fa8   53ae5b88a3380   5 hours ago   Running   csi-snapshotter            0         5d6dd91fd83e2
2abb44295907a   c3dfb4b04796b   5 hours ago   Running   csi-provisioner            0         5d6dd91fd83e2
a9e6846498a1b   89f8fb0f77c15   7 hours ago   Running   liveness-prometheus        0         354b1a8fd8b52
39638d5c0961a   89f8fb0f77c15   7 hours ago   Running   csi-rbdplugin              0         354b1a8fd8b52
1b12e9d273f68   f45c8a305a0bb   7 hours ago   Running   driver-registrar           0         354b1a8fd8b52
797228d6b31ed   3bcf34f7d7d8d   5 days ago    Running   calico-apiserver           0         198dc5bf556df
dc4bae329b42f   fbfd04bbb7f47   5 days ago    Running   calico-node                0         4813ce009d806
d5d0be4a3ef2f   90d97aa939bbf   5 days ago    Exited    install-cni                0         4813ce009d806
8c5853e9a0905   4ac3a9100f349   5 days ago    Running   calico-typha               0         244e05f6f9c67
12f2be66fd320   01dda8bd1b91e   5 days ago    Exited    flexvol-driver             0         4813ce009d806
f4663a0650d73   77b49675beae1   5 days ago    Running   kube-proxy                 0         2bca356d7a28a
[root@node5 ~]# crictl logs ed133e74dec67
I0531 08:37:48.420227       1 cephcsi.go:180] Driver version: v3.6.1 and Git version: 1bd6297ecbdf11f1ebe6a4b20f8963b4bcebe13b
I0531 08:37:48.420443       1 cephcsi.go:229] Starting driver type: controller with name: rbd.csi.ceph.com
E0531 08:37:48.422369       1 controller.go:70] failed to create manager Get "https://10.96.0.1:443/api?timeout=32s": dial tcp 10.96.0.1:443: connect: no route to host
E0531 08:37:48.422450       1 cephcsi.go:296] Get "https://10.96.0.1:443/api?timeout=32s": dial tcp 10.96.0.1:443: connect: no route to host
```

Stopping the iptables service resolves the errors above.
Create storageclass.yaml:
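The file content is not reproduced above. A minimal sketch of what the csi-rbd-sc StorageClass typically looks like, based on the ceph-csi examples, using the clusterID and the k8s pool from this walkthrough; the secret references assume the csi-rbd-secret created earlier:

```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: 365b02aa-db0c-11ec-b243-525400ce981f
  pool: k8s
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
allowVolumeExpansion: true
```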
Create rbd-pvc.yaml:
```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-rbd-sc
```

Create nginx.yaml
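The nginx.yaml content was not captured above. A hedged sketch of a Deployment consuming rbd-pvc; the name my-nginx matches the pod seen in the later pod listing, but the label and mount path here are illustrative assumptions:

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          volumeMounts:
            - name: rbd-data
              mountPath: /usr/share/rbd   # illustrative mount path
      volumes:
        - name: rbd-data
          persistentVolumeClaim:
            claimName: rbd-pvc
```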
Deploying the filesystem (CephFS)
In vim, replace the registry prefix in csi-cephfsplugin-provisioner.yaml and csi-cephfsplugin.yaml (using `#` as the delimiter, since the pattern itself contains slashes):

```
:%s#k8s.gcr.io/sig-storage#registry.aliyuncs.com/google_containers#g
```
```shell
kubectl apply -f deploy/cephfs/kubernetes/csi-provisioner-rbac.yaml
kubectl apply -f deploy/cephfs/kubernetes/csi-nodeplugin-rbac.yaml
kubectl apply -f deploy/cephfs/kubernetes/csi-cephfsplugin-provisioner.yaml
kubectl apply -f deploy/cephfs/kubernetes/csi-cephfsplugin.yaml
```
Pod status:
```
[root@node1 fs]# pods
NAME                                            READY   STATUS    RESTARTS      AGE
csi-cephfsplugin-2wh9v                          3/3     Running   0             85m
csi-cephfsplugin-dwswx                          3/3     Running   0             85m
csi-cephfsplugin-n5js6                          3/3     Running   0             85m
csi-cephfsplugin-provisioner-794b8d9f95-jwmw4   6/6     Running   0             85m
csi-cephfsplugin-provisioner-794b8d9f95-rprrp   6/6     Running   0             85m
csi-cephfsplugin-provisioner-794b8d9f95-sd848   6/6     Running   0             85m
csi-rbdplugin-provisioner-c6d7486dd-2jk5w       7/7     Running   19 (9h ago)   10h
csi-rbdplugin-provisioner-c6d7486dd-mk68w       7/7     Running   2 (8h ago)    12h
csi-rbdplugin-provisioner-c6d7486dd-qlgkf       7/7     Running   0             12h
csi-rbdplugin-tthrg                             3/3     Running   0             12h
csi-rbdplugin-vtlbs                             3/3     Running   0             12h
csi-rbdplugin-wjpzd                             3/3     Running   0             12h
fs-nginx-6d86d5d84d-77gvt                       1/1     Running   0             4m16s
fs-nginx-6d86d5d84d-b9twd                       1/1     Running   0             4m16s
fs-nginx-6d86d5d84d-s4v85                       1/1     Running   0             4m16s
my-nginx-549466b985-nzkxl                       1/1     Running   0             7h29m
```

Enter one of the pods and create a file fs.txt in the shared directory:
```
[root@node1 fs]# k exec -it pod/fs-nginx-6d86d5d84d-77gvt -- /bin/bash
root@fs-nginx-6d86d5d84d-77gvt:/# df -h
Filesystem               Size  Used Avail Use% Mounted on
overlay                   50G   15G   36G  30% /
tmpfs                     64M     0   64M   0% /dev
tmpfs                    7.9G     0  7.9G   0% /sys/fs/cgroup
shm                       64M     0   64M   0% /dev/shm
/dev/mapper/centos-root   50G   15G   36G  30% /etc/hosts
172.70.10.181:6789,172.70.10.182:6789,172.70.10.183:6789:/volumes/csi/csi-vol-194d8e9e-e108-11ec-a020-26b729f874ac/372a597b-867d-40e0-b246-6537208c8a9f
                          11G     0   11G   0% /usr/share/rbd
tmpfs                     16G   12K   16G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                    7.9G     0  7.9G   0% /proc/acpi
tmpfs                    7.9G     0  7.9G   0% /proc/scsi
tmpfs                    7.9G     0  7.9G   0% /sys/firmware
root@fs-nginx-6d86d5d84d-77gvt:/# cd /usr/share/rbd
root@fs-nginx-6d86d5d84d-77gvt:/usr/share/rbd# ls
root@fs-nginx-6d86d5d84d-77gvt:/usr/share/rbd# echo "cephfs" > fs.txt
root@fs-nginx-6d86d5d84d-77gvt:/usr/share/rbd# cat fs.txt
cephfs
root@fs-nginx-6d86d5d84d-77gvt:/usr/share/rbd# exit
exit
```

Enter another pod; the same fs.txt is visible in the shared directory:
```
[root@node1 fs]# k exec -it pod/fs-nginx-6d86d5d84d-b9twd -- /bin/bash
root@fs-nginx-6d86d5d84d-b9twd:/# cd /usr/share/rbd/
root@fs-nginx-6d86d5d84d-b9twd:/usr/share/rbd# ls
fs.txt
root@fs-nginx-6d86d5d84d-b9twd:/usr/share/rbd# cat fs.txt
cephfs
root@fs-nginx-6d86d5d84d-b9twd:/usr/share/rbd# exit
exit
```

This completes the integration of rbd and cephfs.
Object storage
For Ceph object storage, Ceph itself exposes a layer-7 (HTTP) interface, so it can be accessed directly via the S3 protocol and does not need to be integrated through CSI.
Summary