Enterprise Practice - Kubernetes (14): Highly Available k8s Cluster
Highly available k8s cluster
- 1 Using pacemaker to build high availability for k8s (HA for haproxy)
- Install and configure haproxy
- Install and configure pacemaker
- 2 Deploying the k8s cluster
- Preparing the masters
- Disable swap on all three nodes
- Install docker and kubelet
- Initialize the cluster
- Add fencing
1 Using pacemaker to build high availability for k8s (HA for haproxy)
Configure the yum repositories on server5 and server6:
```shell
[root@server5 ~]# vim /etc/yum.repos.d/dvd.repo
[dvd]
name=dvd
baseurl=http://172.25.14.250/rhel7.6
gpgcheck=0

[HighAvailability]
name=HighAvailability
baseurl=http://172.25.14.250/rhel7.6/addons/HighAvailability
gpgcheck=0
```
Install and configure haproxy
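The contents of the `haproxy.cfg` edited below are not reproduced in this post. A minimal sketch of what it typically contains for this topology: the load balancer fronts the three apiservers on port 6443. Only server7's address (172.25.14.7) is confirmed later by the kubeadm config; the addresses for server8 and server9 are assumptions.

```
# Hypothetical haproxy.cfg fragment: load-balance the k8s apiservers
frontend k8s-api
    bind *:6443
    mode tcp
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    server server7 172.25.14.7:6443 check
    server server8 172.25.14.8:6443 check
    server server9 172.25.14.9:6443 check
```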
```shell
yum install -y haproxy
cd /etc/haproxy/
vim haproxy.cfg
systemctl restart haproxy.service
```
Install and configure pacemaker
Install the packages and enable the service at boot:
```shell
yum install -y pacemaker pcs psmisc policycoreutils-python
systemctl enable --now pcsd.service
```
Set the hacluster password and authenticate the nodes:
```shell
passwd hacluster
pcs cluster auth server5 server6
```
Assemble the cluster:
```shell
pcs cluster setup --name mycluster server5 server6
```
Start the cluster and enable it at boot:
```shell
pcs property set stonith-enabled=false
pcs cluster start --all
pcs cluster enable --all
crm_verify -L -V
pcs status
```
Configure the VIP resource:
```shell
pcs resource create vip ocf:heartbeat:IPaddr2 ip=172.25.14.100 op monitor interval=30s
pcs status
```
Configure the haproxy service resource:
```shell
pcs resource create haproxy systemd:haproxy op monitor interval=60s
pcs status
```
Put both resources into one group:
```shell
pcs resource group add hagroup vip haproxy
pcs status
```
2 Deploying the k8s cluster
Copy the registry credentials from server1 to server7, server8, and server9 so they can pull images later.
Preparing the masters
server7, server8, and server9 will serve as the k8s master nodes.
Disable swap on all three nodes
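Besides running `swapoff -a`, the swap entry in `/etc/fstab` is commented out so swap stays disabled after a reboot. That edit can be scripted with `sed` instead of `vim`; a sketch, demonstrated on a throwaway copy rather than the real `/etc/fstab`:

```shell
# Sample fstab content (stand-in for the real /etc/fstab)
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/rhel-root   /       xfs     defaults        0 0
/dev/mapper/rhel-swap   swap    swap    defaults        0 0
EOF

# Comment out any line whose mount point or fs type is "swap"
sed -i '/\sswap\s/s/^/#/' /tmp/fstab.sample
grep swap /tmp/fstab.sample
```

The pattern `\sswap\s` requires whitespace on both sides of `swap`, so the device name `rhel-swap` itself does not trigger a match.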
```shell
[root@server7 ~]# swapoff -a
[root@server7 ~]# vim /etc/fstab
#/dev/mapper/rhel-swap   swap    swap    defaults        0 0
```
Install docker and kubelet
Install docker and kubelet on server7, server8, and server9, and enable them:
```shell
[root@server7 ~]# yum install -y docker-ce
[root@server7 ~]# tar zxf kubeadm-1.21.3.tar.gz
[root@server7 ~]# cd packages/
[root@server7 packages]# yum install -y *
[root@server7 ~]# systemctl enable --now kubelet.service
[root@server7 ~]# systemctl enable docker.service
```
Edit the configuration files:
```shell
vim /etc/docker/daemon.json
```
```json
{
  "registry-mirrors": ["https://reg.westos.org"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"},
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"]
}
```
```shell
vim /etc/sysctl.d/docker.conf
```
```
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```
Restart the services:
```shell
systemctl restart docker
sysctl --system
```
Check all three nodes:
```shell
docker info
```
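A malformed `daemon.json` will prevent docker from starting, so it is worth validating the JSON before restarting the service. A small sketch, assuming `python3` is available; the path here is a stand-in for the real `/etc/docker/daemon.json`:

```shell
# Write the same daemon.json content to a scratch location
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://reg.westos.org"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"},
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"]
}
EOF

# json.tool exits non-zero on a syntax error, so this catches typos early
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json: valid JSON"
```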
Initialize the cluster
Generate and edit the initialization file:
```shell
[root@server7 ~]# kubeadm config print init-defaults > kubeadm-init.yaml   ## generate the init file
[root@server7 ~]# vim kubeadm-init.yaml
```
```yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.25.14.7
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: server7
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "172.25.14.100:6443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: reg.westos.org/k8s
kind: ClusterConfiguration
kubernetesVersion: 1.21.3
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
```
Initialize k8s
```shell
[root@server7 ~]# kubeadm init --config kubeadm-init.yaml --upload-certs
```
Initialization succeeded.
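`kubeadm init` prints the join commands used below, including a `--discovery-token-ca-cert-hash`. If that output is lost, the hash can be recomputed from the cluster CA certificate (normally `/etc/kubernetes/pki/ca.crt`): it is the SHA-256 digest of the CA's DER-encoded public key. A sketch of the computation, using a throwaway self-signed certificate as a stand-in for the real CA:

```shell
# Stand-in for /etc/kubernetes/pki/ca.crt: generate a throwaway cert
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
    -out /tmp/ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null

# Extract the public key, DER-encode it, and hash it with SHA-256
hash=$(openssl x509 -pubkey -in /tmp/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:${hash}"
```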
Join the remaining nodes to the k8s control plane as masters.
Join worker nodes to the cluster:
```shell
kubeadm join 172.25.14.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:09c95026b52089ea481d22d82e9abff6555c7b54d3d2767c2f309b5182870360
```
Install the network add-on (flannel)
```shell
[root@server7 ~]# vim kube-flannel.yml
```
```yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: k8s/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: k8s/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
```
```shell
[root@server7 ~]# kubectl apply -f kube-flannel.yml
```
Check the result.
Then create server10, repeat the steps above, and join it to the k8s cluster as a worker node.
Check on the master.
Then check server10 itself.
Checking on the master again, server10 is now Ready.
Run a pod on server7 to verify the cluster works.
Add fencing
Check on server5 and server6:
```shell
pcs status
```
On the physical host:
```shell
[root@foundation14 kiosk]# cd /etc/cluster/
[root@foundation14 cluster]# scp fence_xvm.key server6:/etc/cluster/
```
Install on server5 and server6:
```shell
yum install -y fence-virt
```
Create the fence resource.
Enable fencing.
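The original post shows these two steps only as screenshots. With `fence_virt` installed and the key distributed, they typically amount to the following `pcs` commands; the `pcmk_host_map` values (VM names `vm5`/`vm6`) are assumptions about this lab environment, not confirmed by the source:

```
# Create the fence_xvm stonith resource (hypothetical hostname:VM-name map)
pcs stonith create vmfence fence_xvm pcmk_host_map="server5:vm5;server6:vm6" op monitor interval=60s

# Re-enable stonith now that a fence device exists
pcs property set stonith-enabled=true
pcs status
```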
Summary
In this setup, pacemaker and haproxy provide a highly available entry point (VIP 172.25.14.100) in front of a three-master k8s control plane, and fence_xvm adds node-level fencing for the load-balancer pair.