Deploying KubeSphere v3.0.0 with Sealos + Longhorn
This article comes from KubeSphere community user Will. It demonstrates how to use Sealos + Longhorn to deploy a Kubernetes cluster with persistent storage, then install KubeSphere 3.0.0 on that cluster with ks-installer. It is a quick-start walkthrough well suited to first-time KubeSphere 3.0.0 users.
About Sealos
Sealos (https://sealyun.com/) is a high-availability Kubernetes installer best described as silky smooth: a single command, fully offline, with all dependencies bundled. Its in-kernel load balancing removes the dependency on haproxy and keepalived. It is written in pure Golang, ships with a 99-year certificate, and supports Kubernetes v1.16 ~ v1.19.
About Longhorn
Longhorn (https://www.rancher.cn/longhorn) is Rancher's open-source, highly available persistent storage for Kubernetes. It provides simple incremental snapshots and backups, and supports cross-cluster disaster recovery.
About KubeSphere
**KubeSphere** (https://kubesphere.io) is a fully open-source, application-centric, multi-tenant container platform built on top of Kubernetes. It supports multi-cloud and multi-cluster management, provides full-stack IT operations automation, and simplifies enterprise DevOps workflows. KubeSphere offers an operations-friendly, wizard-style UI that helps enterprises quickly build a powerful, feature-rich container cloud platform.
KubeSphere supports two installation methods:
- Deploy a Kubernetes cluster together with KubeSphere using KubeKey
- Deploy KubeSphere on an existing Kubernetes cluster
For users who already run Kubernetes, installing KubeSphere on an existing cluster offers more flexibility. The walkthrough below deploys a standalone Kubernetes cluster first, then installs KubeSphere on it.
Deploying a Kubernetes Cluster with Sealos
Prepare four nodes. Since lab machines are limited, we use 3 masters and 1 worker node here; in a real production environment, 3 masters and at least 3 worker nodes are recommended. Every node must have a hostname configured, and node clocks must be kept in sync:
```bash
hostnamectl set-hostname xx
yum install -y chrony
systemctl enable --now chronyd
timedatectl set-timezone Asia/Shanghai
```

On the first master node, download the deployment tool and the offline package:
```bash
# sealos is a static Go binary
wget -c https://sealyun.oss-cn-beijing.aliyuncs.com/latest/sealos && \
chmod +x sealos && mv sealos /usr/bin

# Using K8s v1.18.8 as an example; v1.19.x is not recommended because
# KubeSphere v3.0.0 does not support it yet
wget -c https://sealyun.oss-cn-beijing.aliyuncs.com/cd3d5791b292325d38bbfaffd9855312-1.18.8/kube1.18.8.tar.gz
```

Run the following command to deploy the Kubernetes cluster; `--passwd` is the root password of all nodes:
```bash
sealos init --passwd 123456 \
  --master 10.39.140.248 \
  --master 10.39.140.249 \
  --master 10.39.140.250 \
  --node 10.39.140.251 \
  --pkg-url kube1.18.8.tar.gz \
  --version v1.18.8
```

Confirm the Kubernetes cluster is up and running:
```
# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    master   13h   v1.18.8
k8s-master2   Ready    master   13h   v1.18.8
k8s-master3   Ready    master   13h   v1.18.8
k8s-node1    Ready    <none>   13h   v1.18.8
```

Deploying Longhorn Storage
Longhorn recommends mounting a dedicated disk for storage. For this test we simply use the local directory /data/longhorn (the default is /var/lib/longhorn).
Note that several KubeSphere components request 20G PVs. Make sure the nodes have enough free space, otherwise a PV may bind successfully while no node satisfies the scheduling constraints.
Installing Longhorn with 3 data replicas requires at least 3 nodes, so remove the master taint to make the masters schedulable:

```bash
kubectl taint nodes --all node-role.kubernetes.io/master-
```

Install Helm on k8s-master1:
```bash
version=v3.3.1
curl -LO https://repo.huaweicloud.com/helm/${version}/helm-${version}-linux-amd64.tar.gz
tar -zxvf helm-${version}-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm && rm -rf linux-amd64
```

Install the Longhorn dependencies on all nodes:
```bash
yum install -y iscsi-initiator-utils
systemctl enable --now iscsid
```

Add the Longhorn chart repository; if your network is slow, you can download the chart from the Longhorn GitHub releases instead:
```bash
helm repo add longhorn https://charts.longhorn.io
helm repo update
```

Deploy Longhorn. Offline deployment is supported, but requires pushing the images under longhorn.io to a private registry in advance:
```bash
kubectl create namespace longhorn-system
helm install longhorn \
  --namespace longhorn-system \
  --set defaultSettings.defaultDataPath="/data/longhorn/" \
  --set defaultSettings.defaultReplicaCount=3 \
  --set service.ui.type=NodePort \
  --set service.ui.nodePort=30890 \
  longhorn/longhorn
# For an offline install, also pass: --set privateRegistry.registryUrl=10.39.140.196:8081
```

Confirm that Longhorn is running:
```
[root@jenkins longhorn]# kubectl -n longhorn-system get pods
NAME                                        READY   STATUS    RESTARTS   AGE
csi-attacher-58b856dcff-9kqdt               1/1     Running   0          13h
csi-attacher-58b856dcff-c4zzp               1/1     Running   0          13h
csi-attacher-58b856dcff-tvfw2               1/1     Running   0          13h
csi-provisioner-56dd9dc55b-6ps8m            1/1     Running   0          13h
csi-provisioner-56dd9dc55b-m7gz4            1/1     Running   0          13h
csi-provisioner-56dd9dc55b-s9bh4            1/1     Running   0          13h
csi-resizer-6b87c4d9f8-2skth                1/1     Running   0          13h
csi-resizer-6b87c4d9f8-sqn2g                1/1     Running   0          13h
csi-resizer-6b87c4d9f8-z6xql                1/1     Running   0          13h
engine-image-ei-b99baaed-5fd7m              1/1     Running   0          13h
engine-image-ei-b99baaed-jcjxj              1/1     Running   0          12h
engine-image-ei-b99baaed-n6wxc              1/1     Running   0          12h
engine-image-ei-b99baaed-qxfhg              1/1     Running   0          12h
instance-manager-e-44ba7ac9                 1/1     Running   0          12h
instance-manager-e-48676e4a                 1/1     Running   0          12h
instance-manager-e-57bd994b                 1/1     Running   0          12h
instance-manager-e-753c704f                 1/1     Running   0          13h
instance-manager-r-4f4be1c1                 1/1     Running   0          12h
instance-manager-r-68bfb49b                 1/1     Running   0          12h
instance-manager-r-ccb87377                 1/1     Running   0          12h
instance-manager-r-e56429be                 1/1     Running   0          13h
longhorn-csi-plugin-fqgf7                   2/2     Running   0          12h
longhorn-csi-plugin-gbrnf                   2/2     Running   0          13h
longhorn-csi-plugin-kjj6b                   2/2     Running   0          12h
longhorn-csi-plugin-tvbvj                   2/2     Running   0          12h
longhorn-driver-deployer-74bb5c9fcb-khmbk   1/1     Running   0          14h
longhorn-manager-82ztz                      1/1     Running   0          12h
longhorn-manager-8kmsn                      1/1     Running   0          12h
longhorn-manager-flmfl                      1/1     Running   0          12h
longhorn-manager-mz6zj                      1/1     Running   0          14h
longhorn-ui-77c6d6f5b7-nzsg2                1/1     Running   0          14h
```

Confirm the default StorageClass is ready:
```
# kubectl get sc
NAME                 PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
longhorn (default)   driver.longhorn.io   Delete          Immediate           true                   14h
```

Log in to the Longhorn UI and confirm the nodes are in a schedulable state:
(Screenshot: Longhorn UI)

(Screenshot: bound PV volumes in the Longhorn UI)

(Screenshot: volume details)
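Beyond checking the UI, dynamic provisioning can be sanity-checked from the command line with a throwaway PVC. The manifest below is a minimal sketch (the PVC name and 1Gi size are arbitrary, chosen for illustration); it relies only on the default `longhorn` StorageClass created by the chart above:

```yaml
# test-pvc.yaml -- hypothetical file name; delete the PVC after checking
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn   # default StorageClass installed by the Longhorn chart
  resources:
    requests:
      storage: 1Gi
```

After `kubectl apply -f test-pvc.yaml`, `kubectl get pvc longhorn-test-pvc` should show STATUS `Bound` once Longhorn has created the backing replicas; `kubectl delete -f test-pvc.yaml` cleans it up.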
Deploying KubeSphere on Kubernetes
We use the ks-installer project to install KubeSphere. Download the KubeSphere installation YAML files:

```bash
wget https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
wget https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml
```

KubeSphere defaults to a minimal installation. Edit cluster-configuration.yaml and set the corresponding fields to enable the components you need, for example:
```yaml
devops:
  enabled: true
......
logging:
  enabled: true
......
metrics_server:
  enabled: true
......
openpitrix:
  enabled: true
......
```

Run the following commands to deploy KubeSphere:
```bash
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```

Watch the deployment log and confirm there are no errors:
```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```

After the deployment finishes, confirm that all KubeSphere-related Pods are running:
```
[root@k8s-master1 ~]# kubectl get pods -A | grep kubesphere
kubesphere-controls-system     default-http-backend-857d7b6856-q24v2                             1/1   Running     0    12h
kubesphere-controls-system     kubectl-admin-58f985d8f6-jl9bj                                    1/1   Running     0    11h
kubesphere-controls-system     kubesphere-router-demo-ns-6c97d4968b-njgrc                        1/1   Running     1    154m
kubesphere-devops-system       ks-jenkins-54455f5db8-hm6kc                                       1/1   Running     0    11h
kubesphere-devops-system       s2ioperator-0                                                     1/1   Running     1    11h
kubesphere-devops-system       uc-jenkins-update-center-cd9464fff-qnvfz                          1/1   Running     0    12h
kubesphere-logging-system      elasticsearch-logging-curator-elasticsearch-curator-160079hmdmb   0/1   Completed   0    11h
kubesphere-logging-system      elasticsearch-logging-data-0                                      1/1   Running     0    12h
kubesphere-logging-system      elasticsearch-logging-data-1                                      1/1   Running     0    12h
kubesphere-logging-system      elasticsearch-logging-discovery-0                                 1/1   Running     0    12h
kubesphere-logging-system      fluent-bit-c45h2                                                  1/1   Running     0    12h
kubesphere-logging-system      fluent-bit-kptfc                                                  1/1   Running     0    12h
kubesphere-logging-system      fluent-bit-rzjfp                                                  1/1   Running     0    12h
kubesphere-logging-system      fluent-bit-wztkp                                                  1/1   Running     0    12h
kubesphere-logging-system      fluentbit-operator-855d4b977d-fk6hs                               1/1   Running     0    12h
kubesphere-logging-system      ks-events-exporter-5bc4d9f496-x297f                               2/2   Running     0    12h
kubesphere-logging-system      ks-events-operator-8dbf7fccc-9qmml                                1/1   Running     0    12h
kubesphere-logging-system      ks-events-ruler-698b7899c7-fkn4l                                  2/2   Running     0    12h
kubesphere-logging-system      ks-events-ruler-698b7899c7-hw6rq                                  2/2   Running     0    12h
kubesphere-logging-system      logsidecar-injector-deploy-74c66bfd85-cxkxm                       2/2   Running     0    12h
kubesphere-logging-system      logsidecar-injector-deploy-74c66bfd85-lzxbm                       2/2   Running     0    12h
kubesphere-monitoring-system   alertmanager-main-0                                               2/2   Running     0    11h
kubesphere-monitoring-system   alertmanager-main-1                                               2/2   Running     0    11h
kubesphere-monitoring-system   alertmanager-main-2                                               2/2   Running     0    11h
kubesphere-monitoring-system   kube-state-metrics-95c974544-r8kmq                                3/3   Running     0    12h
kubesphere-monitoring-system   node-exporter-9ddxn                                               2/2   Running     0    12h
kubesphere-monitoring-system   node-exporter-dw929                                               2/2   Running     0    12h
kubesphere-monitoring-system   node-exporter-ht868                                               2/2   Running     0    12h
kubesphere-monitoring-system   node-exporter-nxdsm                                               2/2   Running     0    12h
kubesphere-monitoring-system   notification-manager-deployment-7c8df68d94-hv56l                  1/1   Running     0    12h
kubesphere-monitoring-system   notification-manager-deployment-7c8df68d94-ttdsg                  1/1   Running     0    12h
kubesphere-monitoring-system   notification-manager-operator-6958786cd6-pllgc                    2/2   Running     0    12h
kubesphere-monitoring-system   prometheus-k8s-0                                                  3/3   Running     1    11h
kubesphere-monitoring-system   prometheus-k8s-1                                                  3/3   Running     1    11h
kubesphere-monitoring-system   prometheus-operator-84d58bf775-5rqdj                              2/2   Running     0    12h
kubesphere-system              etcd-65796969c7-whbzx                                             1/1   Running     0    12h
kubesphere-system              ks-apiserver-b4dbcc67-2kknm                                       1/1   Running     0    11h
kubesphere-system              ks-apiserver-b4dbcc67-k6jr2                                       1/1   Running     0    11h
kubesphere-system              ks-apiserver-b4dbcc67-q8845                                       1/1   Running     0    11h
kubesphere-system              ks-console-786b9846d4-86hxw                                       1/1   Running     0    12h
kubesphere-system              ks-console-786b9846d4-l6mhj                                       1/1   Running     0    12h
kubesphere-system              ks-console-786b9846d4-wct8z                                       1/1   Running     0    12h
kubesphere-system              ks-controller-manager-7fd8799789-478ks                            1/1   Running     0    11h
kubesphere-system              ks-controller-manager-7fd8799789-hwgmp                            1/1   Running     0    11h
kubesphere-system              ks-controller-manager-7fd8799789-pdbch                            1/1   Running     0    11h
kubesphere-system              ks-installer-64ddc4b77b-c7qz8                                     1/1   Running     0    12h
kubesphere-system              minio-7bfdb5968b-b5v59                                            1/1   Running     0    12h
kubesphere-system              mysql-7f64d9f584-kvxcb                                            1/1   Running     0    12h
kubesphere-system              openldap-0                                                        1/1   Running     0    12h
kubesphere-system              openldap-1                                                        1/1   Running     0    12h
kubesphere-system              redis-ha-haproxy-5c6559d588-2rt6v                                 1/1   Running     9    12h
kubesphere-system              redis-ha-haproxy-5c6559d588-mhj9p                                 1/1   Running     8    12h
kubesphere-system              redis-ha-haproxy-5c6559d588-tgpjv                                 1/1   Running     11   12h
kubesphere-system              redis-ha-server-0                                                 2/2   Running     0    12h
kubesphere-system              redis-ha-server-1                                                 2/2   Running     0    12h
kubesphere-system              redis-ha-server-2                                                 2/2   Running     0    12h
```

Some KubeSphere components are deployed with Helm; check the status of the chart releases:
```
[root@k8s-master1 ~]# helm ls -A | grep kubesphere
elasticsearch-logging           kubesphere-logging-system      1   2020-09-23 00:49:08.526873742 +0800 CST   deployed   elasticsearch-1.22.1          6.7.0-0217
elasticsearch-logging-curator   kubesphere-logging-system      1   2020-09-23 00:49:16.117842593 +0800 CST   deployed   elasticsearch-curator-1.3.3   5.5.4-0217
ks-events                       kubesphere-logging-system      1   2020-09-23 00:51:45.529430505 +0800 CST   deployed   kube-events-0.1.0             0.1.0
ks-jenkins                      kubesphere-devops-system       1   2020-09-23 01:03:15.106022826 +0800 CST   deployed   jenkins-0.19.0                2.121.3-0217
ks-minio                        kubesphere-system              2   2020-09-23 00:48:16.990599158 +0800 CST   deployed   minio-2.5.16                  RELEASE.2019-08-07T01-59-21Z
ks-openldap                     kubesphere-system              1   2020-09-23 00:03:28.767712181 +0800 CST   deployed   openldap-ha-0.1.0             1.0
ks-redis                        kubesphere-system              1   2020-09-23 00:03:19.439784188 +0800 CST   deployed   redis-ha-3.9.0                5.0.5
logsidecar-injector             kubesphere-logging-system      1   2020-09-23 00:51:57.519733074 +0800 CST   deployed   logsidecar-injector-0.1.0     0.1.0
notification-manager            kubesphere-monitoring-system   1   2020-09-23 00:54:14.662762759 +0800 CST   deployed   notification-manager-0.1.0    0.1.0
uc                              kubesphere-devops-system       1   2020-09-23 00:51:37.885154574 +0800 CST   deployed   jenkins-update-center-0.8.0   3.0.0
```

Get the listening port of the KubeSphere Console (30880 by default):
```bash
kubectl get svc/ks-console -n kubesphere-system
```

The default account is admin/P@88w0rd; use it to log in to the KubeSphere Console.
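For scripting, the NodePort can also be pulled out of the service JSON with standard shell tools. The sketch below runs against a hypothetical sample of the `-o json` output; on a live cluster you would pipe in `kubectl get svc/ks-console -n kubesphere-system -o json` instead (or simply use `-o jsonpath='{.spec.ports[0].nodePort}'`):

```shell
# Hypothetical, trimmed-down sample of the ks-console service JSON;
# replace with real kubectl output on a live cluster.
svc_json='{"spec":{"type":"NodePort","ports":[{"port":80,"nodePort":30880}]}}'

# Extract the nodePort value: first the "nodePort":NNNNN pair, then the digits.
node_port=$(printf '%s' "$svc_json" | grep -o '"nodePort":[0-9]*' | grep -o '[0-9]*')
echo "Console URL: http://<node-ip>:${node_port}"
```

This avoids a jq dependency on minimal hosts; with jq available, `jq -r '.spec.ports[0].nodePort'` is the cleaner choice.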
View the Kubernetes cluster overview in KubeSphere (the UI is clean and uncluttered):

(Screenshot: cluster overview)

View the Kubernetes cluster node information:

(Screenshot: cluster nodes)

View the KubeSphere service components:

(Screenshot: service components)

Visit the KubeSphere App Store:

(Screenshot: KubeSphere App Store)

View KubeSphere project resources:

(Screenshot: project resources)
Tip: for importing multiple clusters into KubeSphere, creating projects and cluster resources, enabling pluggable components, and building CI/CD pipelines, see the official documentation (kubesphere.io/docs).
Cleaning Up the KubeSphere Cluster

```bash
wget https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/kubesphere-delete.sh
sh kubesphere-delete.sh
```

About KubeSphere
KubeSphere (https://kubesphere.io) is a container hybrid cloud built on Kubernetes that provides full-stack IT operations automation and simplifies enterprise DevOps workflows.
KubeSphere has been adopted by thousands of enterprises at home and abroad, including Aqara, Benlai Life, Sina, PICC Life Insurance, Hua Xia Bank, China Taiping Insurance, Sichuan Airlines, Sinopharm, WeBank, Zijin Insurance, Radore, and ZaloPay. KubeSphere provides an operations-friendly, wizard-style UI and rich enterprise-grade features, including multi-cloud and multi-cluster management, Kubernetes resource management, DevOps (CI/CD), application lifecycle management, microservice governance (Service Mesh), multi-tenancy, monitoring and logging, alerting and notification, storage and network management, and GPU support, helping enterprises quickly build a powerful, feature-rich container cloud platform.
Reference:
https://blog.csdn.net/networken/article/details/105664147