K8S — Binary Deployment and Installation (Including Dashboard UI Setup)
Installation steps
- I. Preparation
- II. Deploying a single-master K8S
- 2.1 Deploy the etcd cluster
- master node
- node nodes (1/2)
- Check the cluster status
- 2.2 Deploy the Docker engine
- node nodes (1/2)
- 2.3 Set up the flannel network
- master node
- node node operations (1/2)
- 2.4 Deploy the master components
- 2.5 Deploy the nodes
- master node operations
- node node operations (1/2)
- master operations
- node nodes: start the services (1/2)
- 2.6 Testing
- III. Deploying a multi-master K8S (with load balancing)
- 3.1 master02 setup
- 3.2 Load balancer (lb01, lb02) setup
- 3.3 Modify the files on the two node nodes
- 3.4 master01 setup
- IV. Deploying the Dashboard UI
- 4.1 Operations on the master01 node
- 4.2 Access
I. Preparation
K8S cluster
master01: 192.168.253.11
Services: kube-apiserver, kube-controller-manager, kube-scheduler, etcd
master02: 192.168.253.44
node1: 192.168.253.22
Services: kubelet, kube-proxy, docker, flannel
node2: 192.168.253.33
etcd node 1: 192.168.253.11
etcd node 2: 192.168.253.22
etcd node 3: 192.168.253.33
lb01: 192.168.253.55
lb02: 192.168.253.66
Keepalived VIP: 192.168.253.111/24
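Optionally, the address plan above can be kept in /etc/hosts on every machine so the nodes can be referred to by name. The hostnames below are illustrative (this guide itself uses raw IPs throughout), and the sketch writes to a scratch file so it is safe to try anywhere; on a real machine you would append the same lines to /etc/hosts.

```shell
# Sketch: name -> IP mappings for the cluster machines.
# Written to a scratch file here; append to /etc/hosts on real hosts.
cat > /tmp/k8s-hosts <<'EOF'
192.168.253.11  master01
192.168.253.44  master02
192.168.253.22  node1
192.168.253.33  node2
192.168.253.55  lb01
192.168.253.66  lb02
EOF
cat /tmp/k8s-hosts
```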
II. Deploying a single-master K8S
2.1 Deploy the etcd cluster
master node
Download the certificate tools
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl*
Generate the certificates
# Create the k8s directory
mkdir /opt/k8s
cd /opt/k8s
# Upload the scripts (edit the cluster IPs in the etcd-cert script)
chmod +x etcd-cert.sh etcd.sh
# Create the directory for the certificates the script generates
mkdir /opt/k8s/etcd-cert
cd /opt/k8s/etcd-cert
# Run the script
bash etcd-cert.sh
Install etcd
# Upload the tarball
cd /opt/k8s
tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
# Create directories for the etcd config files, binaries, and certificates
mkdir -p /opt/etcd/{cfg,bin,ssl}
# Move the binaries into the bin directory
mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
# Copy the certificates into the ssl directory
cp etcd-cert/*.pem /opt/etcd/ssl/
Send the config files to the node nodes
# Copy the certificates
scp -r /opt/etcd/ root@192.168.253.22:/opt/
scp -r /opt/etcd/ root@192.168.253.33:/opt/
# Copy the systemd unit file
scp /usr/lib/systemd/system/etcd.service root@192.168.253.22:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.253.33:/usr/lib/systemd/system/
Start the cluster
./etcd.sh etcd01 192.168.253.11 etcd02=https://192.168.253.22:2380,etcd03=https://192.168.253.33:2380
# The command will appear to hang: every node must start etcd. If even one
# member is missing, the service blocks here until all of them are up.
node nodes (1/2)
Edit the config file
vim /opt/etcd/cfg/etcd

#[Member]
ETCD_NAME="etcd02"                                      # change to this node's name
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.253.22:2380"     # change the IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.253.22:2379"   # change the IP

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.253.22:2380"   # change the IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.253.22:2379"         # change the IP
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.253.11:2380,etcd02=https://192.168.253.22:2380,etcd03=https://192.168.253.33:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Start the service
systemctl start etcd
systemctl status etcd
systemctl enable etcd
Check the cluster status
Method 1
ln -s /opt/etcd/bin/etcdctl /usr/local/bin/
cd /opt/etcd/ssl/
etcdctl \
--ca-file=ca.pem \
--cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://192.168.253.11:2379,https://192.168.253.22:2379,https://192.168.253.33:2379" cluster-health
Method 2
Switch to the etcd v3 API to check
# Switch to the v3 API (the default is v2)
export ETCDCTL_API=3
# Check endpoint status
etcdctl --write-out=table endpoint status
# List the members
etcdctl --write-out=table member list
2.2 Deploy the Docker engine
node nodes (1/2)
Install Docker
# Go to the yum repo directory
cd /etc/yum.repos.d/
# Move the repo files back from the backup directory
mv repos.bak/* ./
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install Docker
yum install -y docker-ce
Start it
systemctl start docker
systemctl status docker
2.3 Set up the flannel network
master node
Write the allocated subnet configuration into etcd
cd /opt/etcd/ssl/
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem \
--cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://192.168.253.11:2379,https://192.168.253.22:2379,https://192.168.253.33:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
View the written configuration
etcdctl \
--ca-file=ca.pem \
--cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://192.168.253.11:2379,https://192.168.253.22:2379,https://192.168.253.33:2379" \
get /coreos.com/network/config
node node operations (1/2)
Upload and extract the package
cd /opt/
# Upload
rz -E
rz waiting to receive.
# Extract
tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
Create the directories
Start the service and enable the flannel network
./flannel.sh https://192.168.253.11:2379,https://192.168.253.22:2379,https://192.168.253.33:2379
ifconfig
Connect Docker to flannel
vim /usr/lib/systemd/system/docker.service
# Add at line 13:
EnvironmentFile=/run/flannel/subnet.env
# Modify line 14, adding $DOCKER_NETWORK_OPTIONS:
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
# Check the flannel subnet
cat /run/flannel/subnet.env
Restart the service
systemctl daemon-reload
systemctl restart docker
ifconfig
Test the network
node1 setup
# Pull the centos:7 image on the node
docker run -d centos:7
# Run a container
docker run -itd centos:7 bash
# Enter it
docker exec -it 7116f3026c6d bash
# Install net-tools
yum -y install net-tools
# Get the IP
ifconfig
node2 setup
# Pull the centos:7 image on the node
docker run -d centos:7
# Run a container
docker run -itd centos:7 bash
# Enter it
docker exec -it 2cf56e1f1d02 bash
# Install net-tools
yum -y install net-tools
# Get the IP
ifconfig
# Test connectivity to the node1 container
ping 172.17.86.2
2.4 Deploy the master components
Upload and extract the files
cd /opt/k8s
rz master.zip k8s-cert.sh
# Extract
unzip master.zip
# Make the scripts executable
chmod +x *.sh
Create the kubernetes directory
mkdir -p /opt/kubernetes/{cfg,bin,ssl}
Create the directory for the certificates, components, and private keys
# Create the directory
mkdir k8s-cert
cd k8s-cert/
# Move the script in
mv /opt/k8s/k8s-cert.sh /opt/k8s/k8s-cert
# Edit the IPs in the script
vim k8s-cert.sh
# Run the script
./k8s-cert.sh
# Copy the generated certificates to the ssl directory
cp ca*pem apiserver*pem /opt/kubernetes/ssl/
Upload the kubernetes tarball
# Uploading with WinSCP is recommended
# Extract
tar zxvf kubernetes-server-linux-amd64.tar.gz
Copy the binaries to the kubernetes/bin directory
cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
Create the random token file
vim /opt/k8s/token.sh

#!/bin/bash
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

chmod +x token.sh

cat /opt/kubernetes/cfg/token.csv
2a48e85a17c0de53f0f2605d7136d1b3,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Start the apiserver
./apiserver.sh 192.168.253.11 https://192.168.253.11:2379,https://192.168.253.22:2379,https://192.168.253.33:2379
systemctl status kube-apiserver.service
Check the config file
cat /opt/kubernetes/cfg/kube-apiserver
Check the HTTPS port
netstat -anpt | grep 6443
netstat -anpt | grep 8080
Start the services
# Start the scheduler
./scheduler.sh 127.0.0.1
# Start the controller-manager
./controller-manager.sh 127.0.0.1
# Check component status
/opt/kubernetes/bin/kubectl get cs
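As an aside, the bootstrap token that token.sh (in the "Create the random token file" step above) writes into token.csv comes from a simple pipeline: 16 random bytes, hex-encoded by od, with the spaces stripped. The pipeline can be exercised on its own to see the 32-character token it produces:

```shell
# Same pipeline as token.sh: 16 random bytes -> hex -> strip spaces.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "$BOOTSTRAP_TOKEN"        # e.g. 2a48e85a17c0de53f0f2605d7136d1b3
echo "${#BOOTSTRAP_TOKEN}"     # 32
```

The resulting value is what kubelet presents during TLS bootstrapping, matched against the token.csv entry on the apiserver.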
2.5 Deploy the nodes
master node operations
Send the files to the node nodes
cd /opt/k8s/kubernetes/server/bin/
# Remote copy
scp kubelet kube-proxy root@192.168.253.22:/opt/kubernetes/bin/
scp kubelet kube-proxy root@192.168.253.33:/opt/kubernetes/bin/
Upload the files
cd /opt/k8s/
mkdir kubeconfig
cd kubeconfig/
rz -E kubeconfig.sh
chmod +x *.sh
Set the environment variable
export PATH=$PATH:/opt/kubernetes/bin/
kubectl get cs
Generate the kubelet config files
cd /opt/k8s/kubeconfig/
./kubeconfig.sh 192.168.253.11 /opt/k8s/k8s-cert/
Copy the files to the node nodes
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.253.22:/opt/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.253.33:/opt/kubernetes/cfg/
Create the bootstrap role binding to grant permissions
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
# Check the role
kubectl get clusterroles | grep system:node-bootstrapper
# Check the role binding
kubectl get clusterrolebinding
node node operations (1/2)
Upload and extract the files
rz -E node.zip
# Extract
unzip node.zip
chmod +x *.sh
Start kubelet
./kubelet.sh 192.168.253.22    # use 192.168.253.33 on node2
ps aux | grep kubelet
master operations
Check the request from the node1 node
kubectl get csr
Issue certificates to the cluster
kubectl certificate approve node-csr-7QLmDgr4zKfFZcCPdW3luBl3nkKs-KVE-WXx_Hu0Qn8
kubectl certificate approve node-csr-qOzA2zIsXQFxZWRTNvMoO2-GV91miuBywuBsUf2Mb3Y
Check the status
kubectl get nodes
node nodes: start the services (1/2)
./proxy.sh 192.168.253.22    # use 192.168.253.33 on node2
systemctl status kube-proxy.service
2.6 Testing
Start a pod
kubectl create deployment nginx-test --image=nginx
The pod is starting up.
Check the pod
kubectl get pods
III. Deploying a multi-master K8S (with load balancing)
3.1 master02 setup
Copy the files over from master01
scp -r /opt/etcd/ root@192.168.253.44:/opt/
scp -r /opt/kubernetes/ root@192.168.253.44:/opt
scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.253.44:/usr/lib/systemd/system/
Edit the config file
vim /opt/kubernetes/cfg/kube-apiserver
# line 4:
--bind-address=192.168.253.44 \
# line 6:
--advertise-address=192.168.253.44 \
Start the services
systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl enable kube-controller-manager.service
systemctl start kube-scheduler.service
systemctl enable kube-scheduler.service
Check the node status
ln -s /opt/kubernetes/bin/* /usr/local/bin/
kubectl get nodes
kubectl get nodes -o wide
3.2 Load balancer (lb01, lb02) setup
Install the nginx service
Install nginx online with yum
vim /etc/yum.repos.d/nginx.repo

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0

yum -y install nginx
Edit the nginx config file
vim /etc/nginx/nginx.conf

# Add after line 12:
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.253.11:6443;
        server 192.168.253.44:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
Start the service
systemctl start nginx
systemctl enable nginx
netstat -natp | grep nginx
Install the Keepalived service
yum -y install keepalived
Edit the keepalived config file
vim /etc/keepalived/keepalived.conf

smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_MASTER    # NGINX_MASTER on lb01, NGINX_BACKUP on lb02
# vrrp_skip_check_adv_addr
# vrrp_strict
# vrrp_garp_interval 0
# vrrp_gna_interval 0

# Add a script that runs periodically
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"    # path of the nginx liveness-check script
}

vrrp_instance VI_1 {
    state MASTER              # MASTER on lb01, BACKUP on lb02
    interface ens33           # NIC name: ens33
    virtual_router_id 51      # VRID, must match on both nodes
    priority 100              # 100 on lb01, 90 on lb02
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.253.111/24    # the VIP
    }
    track_script {
        check_nginx           # the vrrp_script defined above
    }
}
Create the nginx status-check script
vim /etc/nginx/check_nginx.sh

#!/bin/bash
# egrep -cv "grep|$$" filters out lines containing "grep" or $$ (the current shell's PID)
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

chmod +x /etc/nginx/check_nginx.sh
Start the services
Be sure to start the nginx service before starting the keepalived service.
systemctl start keepalived
systemctl enable keepalived
ip add    # check whether the VIP has been created
3.3 Modify the files on the two node nodes
Point the server IP in the bootstrap.kubeconfig, kubelet.kubeconfig, and kube-proxy.kubeconfig files at the VIP.
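The three edits below are the same one-line substitution, so sed can apply it in one pass (a sketch). The demo runs against a scratch file so it is safe to try anywhere; on a real node you would target the three kubeconfig files in /opt/kubernetes/cfg/ instead.

```shell
# Rewrite the apiserver address to the Keepalived VIP with sed.
# Demo on a scratch copy; on a node, point it at the kubeconfig
# files in /opt/kubernetes/cfg/ instead.
tmp=$(mktemp)
echo '    server: https://192.168.253.11:6443' > "$tmp"
sed -i 's#https://192\.168\.253\.11:6443#https://192.168.253.111:6443#' "$tmp"
cat "$tmp"    # server: https://192.168.253.111:6443
rm -f "$tmp"
```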
vim /opt/kubernetes/cfg/bootstrap.kubeconfig
server: https://192.168.253.111:6443

vim /opt/kubernetes/cfg/kubelet.kubeconfig
server: https://192.168.253.111:6443

vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
server: https://192.168.253.111:6443
Restart the kubelet and kube-proxy services
systemctl restart kubelet.service
systemctl restart kube-proxy.service
3.4 master01 setup
Using the pod created earlier on the single master
Check the Pod status information
kubectl get pods
kubectl get pods -o wide    # READY 1/1 means the Pod has 1 running container
On the node node in the matching subnet, the pod can be accessed directly with a browser or curl:
curl 172.17.86.3
Check the nginx logs
# On master01, grant the cluster-admin role to the user system:anonymous
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
# Check the nginx logs
kubectl logs nginx-test-7d965f56df-h6655
IV. Deploying the Dashboard UI
4.1 Operations on the master01 node
Create the dashboard working directory
mkdir /opt/k8s/dashboard
cd /opt/k8s/dashboard
unzip Dashboard.zip
Create the resources with kubectl create
cd /opt/k8s/dashboard

# Step 1
## Define the permissions of the kubernetes-dashboard-minimal role
kubectl create -f dashboard-rbac.yaml
## Check that the Role and RoleBinding resources kubernetes-dashboard-minimal were created
kubectl get role,rolebinding -n kube-system

# Step 2
## Create the certificate and key
kubectl create -f dashboard-secret.yaml
## Check that the Secret resources kubernetes-dashboard-certs and kubernetes-dashboard-key-holder were created
kubectl get secret -n kube-system

# Step 3
## Create the configuration holding the cluster dashboard settings
kubectl create -f dashboard-configmap.yaml
## Check that the ConfigMap resource kubernetes-dashboard-settings was created
kubectl get configmap -n kube-system

# Step 4
## Create the controller and service account the containers need
kubectl create -f dashboard-controller.yaml
## Check that the ServiceAccount and Deployment resources were created
kubectl get serviceaccount,deployment -n kube-system

# Step 5
## Expose the service
kubectl create -f dashboard-service.yaml
Check the pod and service status in the kube-system namespace
kubectl get pods,svc -n kube-system -o wide
4.2 Access
The dashboard was scheduled onto the node02 server and the entry point is port 30001; open it in a browser.
Firefox can access it directly: https://192.168.253.33:30001
Create the login token with the k8s-admin.yaml file
cd /opt/k8s/dashboard/
kubectl create -f k8s-admin.yaml
Get the token summary
kubectl get secrets -n kube-system
View the token; take the value after "token:"
kubectl describe secrets dashboard-admin-token-6htcm -n kube-system
Copy the token into the browser login page and click Sign in.
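Rather than copying the token out of the describe output by hand, awk can print just the value of the token: field. The demo below runs on a captured sample line (the token string is a placeholder); on master01 you would pipe the real kubectl describe secrets output into the same awk filter, and the secret name dashboard-admin-token-6htcm will differ per cluster.

```shell
# Print only the value after "token:". The printf sample stands in for
# the real `kubectl describe secrets ... -n kube-system` output.
printf 'Name:  dashboard-admin-token-6htcm\ntoken:      eyJhbGciOiJSUzI1NiIsImtpZCI6.sample\n' \
  | awk '/^token:/ {print $2}'
```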
Click "Pods" (容器組) in the sidebar, then a container name, to open its detail page; the "Exec" (運行命令) and "Logs" (日志) controls at the top right each open an additional page.