Installing Kubernetes 1.13 from Binaries on CentOS 7
Table of Contents
- 1. Introduction
- 1.1. What is Kubernetes?
- 1.2. What are the advantages of Kubernetes?
- 2. Environment Preparation
- 2.1. Network Configuration
- 2.2. Set the Hostname
- 2.3. Configure Passwordless SSH Login
- 2.4. Disable the Firewall
- 2.5. Disable the Swap Partition
- 2.6. Disable SELinux
- 2.7. Install NTP
- 2.8. Install and Configure CFSSL
- 2.9. Create the Installation Directories
- 2.10. Upgrade the Kernel
- 3. Install Docker 18.06.1-ce
- 3.1. Remove Old Versions of Docker
- 3.2. Set Up the Repository
- 3.3. Install Docker
- 3.4. Start Docker
- 4. Install ETCD 3.3.10
- 4.1. Create the ETCD Certificates
- 4.1.1. Generate the JSON Request File Used for the ETCD Server Certificate
- 4.1.2. Create the ETCD CA Certificate Configuration File
- 4.1.3. Create the ETCD Server Certificate Configuration File
- 4.1.4. Generate the ETCD CA Certificate and Private Key
- 4.1.5. Generate the ETCD Server Certificate and Private Key
- 4.2. Install ETCD
- 4.2.1. Download ETCD
- 4.2.2. Create the ETCD systemd Unit File
- 4.2.3. Copy the ETCD Binaries, Certificates, and systemd Unit File to the Other Nodes
- 4.2.4. ETCD Main Configuration File
- 4.2.5. Start the ETCD Service
- 4.2.6. Check the ETCD Service Status
- 4.2.7. View the ETCD Cluster Members
- 5. Install Flannel v0.11.0
- 5.1. Flanneld Network Installation
- 5.2. Write the Network Segment into the ETCD Cluster
- 5.3. Install Flannel
- 5.4. Configure Flannel
- 5.5. Create the Flannel systemd Unit File
- 5.6. Configure Docker to Use the Flannel Subnet
- 5.7. Copy the Flannel Files to the Other Machines
- 5.8. Start the Services
- 5.9. Check the docker0 Bridge Configured by Flannel
- 5.10. Verify the Flannel Service
- 6. Install Kubernetes
- 6.1. Create the Certificates Required by Kubernetes
- 6.1.1. Generate the JSON Request File for the Kubernetes Certificates
- 6.1.2. Generate the Kubernetes CA Configuration File and Certificate
- 6.1.3. Generate the Kube API Server Configuration File and Certificate
- 6.1.4. Generate the kubelet Client Configuration File and Certificate
- 6.1.5. Generate the Kube-Proxy Configuration File and Certificate
- 6.1.6. Generate the kubectl Administrator Configuration File and Certificate
- 6.1.7. Copy the Certificates to the Kubernetes Nodes
- 6.2. Deploy the Kubernetes Master Node and Join It to the Cluster
- 6.2.1. Download and Install Kubernetes Server
- 6.2.2. Deploy the API Server
- 6.2.3. Deploy the Scheduler
- 6.2.4. Deploy the Kube-Controller-Manager Component
- 6.2.5. Verify the API Server Service
- 6.2.6. Deploy the Kubelet
- 6.2.7. Approve the Master Joining the Cluster
- 6.3. Deploy the kube-proxy Component
- 6.3.1. Create the kube-proxy Parameter Configuration File
- 6.3.2. Create the kube-proxy systemd Unit File
- 6.3.3. Start the kube-proxy Service
- 6.3.4. Check the kube-proxy Service Status
- 6.4. Verify the Server Services
- 6.5. Join the Kubernetes Nodes to the Cluster
- 6.5.1. Create the kubelet Configuration File
- 6.5.2. Create the kubelet systemd Unit File
- 6.5.3. Start the kubelet Service
- 6.5.4. Check the kubelet Service Status
- 6.5.5. Approve the Node Joining the Cluster
- 6.5.6. Remove a Node from the Cluster
- 6.5.7. Label the Nodes
- 7. References
- 8. FAQ
- How do I generate a new NIC UUID in a virtual machine?
1. Introduction
1.1. What is Kubernetes?
Kubernetes, abbreviated k8s (a "k", 8 characters, then an "s") or kube, is an open-source Linux container orchestration platform. It automates many of the manual operations involved in deploying and scaling containerized applications.
Kubernetes was originally designed and developed by engineers at Google. Google was one of the early contributors to Linux container technology and has spoken publicly about how it runs everything in containers (this is the technology behind Google's cloud services). Google deploys more than two billion containers per week, all powered by its internal platform Borg. Borg is the predecessor of Kubernetes, and the lessons learned from years of developing Borg shaped much of the technology in Kubernetes.
1.2. What are the advantages of Kubernetes?
With Kubernetes you can quickly and efficiently meet the following user needs:
- Deploy applications quickly and predictably
- Scale your applications on the fly
- Roll out new features seamlessly
- Limit hardware usage to only the resources you need

Advantages of Kubernetes:
- Portable: public cloud, private cloud, hybrid cloud, multi-cloud
- Extensible: modular, pluggable, hookable, composable
- Self-healing: automatic placement, automatic restart, automatic replication, automatic scaling

Google started the Kubernetes project in 2014. Kubernetes builds on fifteen years of Google's experience running production workloads at scale, combined with the best ideas and practices from the community.
2. Environment Preparation
The examples in this article use four machines; their IP addresses and hostnames are as follows:

| IP | Hostname |
| 10.0.0.100 | c0 (master) |
| 10.0.0.101 | c1 (master) |
| 10.0.0.102 | c2 |
| 10.0.0.103 | c3 |

The /etc/hosts file is the same on all four machines; c0 is used as the example below.
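A minimal sketch of the /etc/hosts entries implied by the table above (run on every machine):
cat >> /etc/hosts << EOF
10.0.0.100 c0
10.0.0.101 c1
10.0.0.102 c2
10.0.0.103 c3
EOF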
2.1. Network Configuration
The following uses c0 as an example.
[root@c0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
UUID=6d8d9ad6-37b5-431a-ab16-47d0aa00d01f
DEVICE=eth0
ONBOOT=yes
IPADDR0=10.0.0.100
PREFIX0=24
GATEWAY0=10.0.0.1
DNS1=10.0.0.1
DNS2=8.8.8.8
Restart the network:
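With the legacy network service on CentOS 7 this is typically:
systemctl restart network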
Change the yum source to the Aliyun mirror:
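A common way to do this, assuming the standard Aliyun Centos-7.repo URL:
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all && yum makecache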
Install the network tools and basic utility packages:
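A reasonable package set (assumed, adjust as needed):
yum install -y net-tools vim wget lrzsz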
2.2. Set the Hostname
Set the hostname on each of the four machines in turn; c0 is used as the example below.
[root@c0 ~]# hostnamectl --static set-hostname c0
[root@c0 ~]# hostnamectl status
Static hostname: c0
Icon name: computer-vm
Chassis: vm
Machine ID: 04c3f6d56e788345859875d9f49bd4bd
Boot ID: ba02919abe4245aba673aaf5f778ad10
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-957.el7.x86_64
Architecture: x86-64
2.3. Configure Passwordless SSH Login
Generate a key pair on each machine individually:
[root@c0 ~]# ssh-keygen   # press Enter at every prompt
Copy the key generated by ssh-keygen to the other three machines; c0 is used as the example.
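Assuming the /etc/hosts entries above, this is typically done with ssh-copy-id:
for N in $(seq 0 3); do ssh-copy-id c$N; done;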
Test that the keys work:
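For example:
ssh c1 hostname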
2.4. Disable the Firewall
Run the following command on every machine; c0 is used as the example:
[root@c0 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
2.5. Disable the Swap Partition
[root@c0 ~]# for N in $(seq 0 3); do ssh c$N swapoff -a; done;
Before and after disabling, you can check the swap status with free -h; after disabling, the swap total should be 0.
On every machine, edit /etc/fstab and comment out the /dev/mapper/centos-swap swap line at the end so swap stays disabled after a reboot; c0 is used as the example.
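One way to comment the line out (verify the resulting file afterwards):
sed -i 's|^/dev/mapper/centos-swap|#&|' /etc/fstab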
2.6. Disable SELinux
Disable SELinux on every machine; c0 is used as the example.
[root@c0 ~]# sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config
[root@c0 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
SELinux is Security-Enhanced Linux. The change in this file takes effect after a reboot; setenforce 0 switches to permissive mode immediately.
2.7. Install NTP
Install the NTP time-synchronization tool and start NTP:
[root@c0 ~]# for N in $(seq 0 3); do ssh c$N yum install ntp -y; done;
On every machine, enable NTP at boot:
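Typically:
for N in $(seq 0 3); do ssh c$N "systemctl enable ntpd && systemctl start ntpd"; done;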
Check the time on each machine in turn:
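For example:
for N in $(seq 0 3); do ssh c$N date; done;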
2.8. Install and Configure CFSSL
CFSSL is used to build a local CA and to generate the certificates needed later.
[root@c0 ~]# mkdir -p /home/work/_src
[root@c0 ~]# cd /home/work/_src
[root@c0 _src]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@c0 _src]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@c0 _src]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@c0 _src]# chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
[root@c0 _src]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@c0 _src]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@c0 _src]# mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
2.9. Create the Installation Directories
Create the directories that ETCD and Kubernetes will use later:
[root@c0 _src]# for N in $(seq 0 3); do ssh c$N mkdir /home/work/_app/k8s/etcd/{bin,cfg,ssl} -p; done;
[root@c0 _src]# for N in $(seq 0 3); do ssh c$N mkdir /home/work/_app/k8s/kubernetes/{bin,cfg,ssl,ssl_cert} -p; done;
[root@c0 _src]# for N in $(seq 0 3); do ssh c$N mkdir /home/work/_data/etcd -p; done;
2.10. Upgrade the Kernel
The stock 3.10 kernel lacks the ip_vs_fo.ko module, so kube-proxy cannot enable IPVS mode. ip_vs_fo.ko first appeared in kernel 3.19, and such a kernel is not available in the usual RPM repositories for the RedHat family, so install a mainline kernel from ELRepo:
[root@c0 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
[root@c0 ~]# yum --enablerepo=elrepo-kernel install kernel-ml-devel kernel-ml -y
After rebooting and manually selecting the new kernel, run the following command to check the running kernel:
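The usual check is:
uname -r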
3. Install Docker 18.06.1-ce
3.1. Remove Old Versions of Docker
The removal method from the official documentation:
$ sudo yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine
Another way to remove an old Docker installation is to first query the installed Docker packages, then remove them, and finally remove the Docker images and containers; a sketch of these steps follows.
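A sketch of those three steps (package names are examples; remove whatever the query actually lists):
rpm -qa | grep docker
yum remove -y docker-ce docker-common docker-client
rm -rf /var/lib/docker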
3.2. Set Up the Repository
Install the required packages: yum-utils provides the yum-config-manager utility, and device-mapper-persistent-data and lvm2 are the storage drivers needed by devicemapper.
Run this on every machine; c0 is used as the example.
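The standard steps from the Docker documentation:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo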
3.3. Install Docker
[root@c0 ~]# sudo yum install docker-ce-18.06.1.ce-3.el7 -y
3.4. Start Docker
[root@c0 ~]# systemctl enable docker && systemctl start docker
4. Install ETCD 3.3.10
4.1. Create the ETCD Certificates
4.1.1. Generate the JSON Request File Used for the ETCD Server Certificate
[root@c0 ~]# mkdir -p /home/work/_src/ssl_etcd
[root@c0 ~]# cd /home/work/_src/ssl_etcd
[root@c0 ssl_etcd]# cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
The default policy sets the certificate validity to 10 years (87600h).
The etcd profile describes how the certificates will be used:
- signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE
- server auth: a client can use this CA to verify certificates presented by a server
- client auth: a server can use this CA to verify certificates presented by a client
4.1.2. Create the ETCD CA Certificate Configuration File
[root@c0 ssl_etcd]# cat << EOF | tee ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
4.1.3. Create the ETCD Server Certificate Configuration File
[root@c0 ssl_etcd]# cat << EOF | tee server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "10.0.0.100",
    "10.0.0.101",
    "10.0.0.102",
    "10.0.0.103"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
4.1.4. Generate the ETCD CA Certificate and Private Key
[root@c0 ssl_etcd]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/02/14 18:44:37 [INFO] generating a new CA key and certificate from CSR
2019/02/14 18:44:37 [INFO] generate received request
2019/02/14 18:44:37 [INFO] received CSR
2019/02/14 18:44:37 [INFO] generating key: rsa-2048
2019/02/14 18:44:38 [INFO] encoded CSR
2019/02/14 18:44:38 [INFO] signed certificate with serial number 384346866475232855604658229421854651219342845660
[root@c0 ssl_etcd]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server-csr.json
4.1.5. Generate the ETCD Server Certificate and Private Key
[root@c0 ssl_etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
2019/02/09 20:52:57 [INFO] generate received request
2019/02/09 20:52:57 [INFO] received CSR
2019/02/09 20:52:57 [INFO] generating key: rsa-2048
2019/02/09 20:52:57 [INFO] encoded CSR
2019/02/09 20:52:57 [INFO] signed certificate with serial number 373071566605311458179949133441319838683720611466
2019/02/09 20:52:57 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@c0 ssl_etcd]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem
[root@c0 _src]# cp server.pem server-key.pem /home/work/_app/k8s/etcd/ssl/
Copy the generated certificates to the directory etcd will use:
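Since the etcd unit file below also references ca.pem in that directory, a reasonable sketch is:
cp ca*.pem server*.pem /home/work/_app/k8s/etcd/ssl/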
4.2. Install ETCD
4.2.1. Download ETCD
[root@c0 ssl_etcd]# cd /home/work/_src/
[root@c0 _src]# wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
[root@c0 _src]# tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
[root@c0 _src]# cd etcd-v3.3.10-linux-amd64
[root@c0 etcd-v3.3.10-linux-amd64]# cp etcd etcdctl /home/work/_app/k8s/etcd/bin/
4.2.2. Create the ETCD systemd Unit File
Create /usr/lib/systemd/system/etcd.service with the following content:
[root@c0 etcd-v3.3.10-linux-amd64]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/home/work/_app/k8s/etcd/cfg/etcd.conf
ExecStart=/home/work/_app/k8s/etcd/bin/etcd \
  --name=${ETCD_NAME} \
  --data-dir=${ETCD_DATA_DIR} \
  --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --initial-cluster=${ETCD_INITIAL_CLUSTER} \
  --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster-state=new \
  --cert-file=/home/work/_app/k8s/etcd/ssl/server.pem \
  --key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem \
  --peer-cert-file=/home/work/_app/k8s/etcd/ssl/server.pem \
  --peer-key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem \
  --trusted-ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
4.2.3. Copy the ETCD Binaries, Certificates, and systemd Unit File to the Other Nodes
[root@c0 ~]# for N in $(seq 0 3); do scp -r /home/work/_app/k8s/etcd c$N:/home/work/_app/k8s/; done;
[root@c0 ~]# for N in $(seq 0 3); do scp -r /usr/lib/systemd/system/etcd.service c$N:/usr/lib/systemd/system/etcd.service; done;
4.2.4. ETCD Main Configuration File
On c0, create /home/work/_app/k8s/etcd/cfg/etcd.conf with the following content:
[root@c0 _src]# cat << EOF | tee /home/work/_app/k8s/etcd/cfg/etcd.conf
#[Member]
# Name of this ETCD node
ETCD_NAME="etcd00"
# ETCD data directory
ETCD_DATA_DIR="/home/work/_data/etcd"
# Addresses this node listens on for peer traffic; separate multiple addresses with commas, format scheme://IP:PORT, where scheme can be http or https
ETCD_LISTEN_PEER_URLS="https://10.0.0.100:2380"
# Addresses this node listens on for client traffic
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.100:2379"

#[Clustering]
# Peer address this member advertises to the rest of the cluster; it carries cluster data, so it must be reachable from every member
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.100:2380"
# Client address this member advertises
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.100:2379"
# All members of the cluster, in the form name=peer-URL, separated by commas
ETCD_INITIAL_CLUSTER="etcd00=https://10.0.0.100:2380,etcd01=https://10.0.0.101:2380,etcd02=https://10.0.0.102:2380,etcd03=https://10.0.0.103:2380"
# Token for the initial cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# Initial cluster state; new means a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
EOF
On c1, c2, and c3, create /home/work/_app/k8s/etcd/cfg/etcd.conf in the same way; only ETCD_NAME and the IP addresses differ from the c0 file (a sketch of the c1 values follows, and c2 and c3 are analogous).
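A sketch of the values that change on c1, derived from the c0 file and the ETCD_INITIAL_CLUSTER list:
ETCD_NAME="etcd01"
ETCD_LISTEN_PEER_URLS="https://10.0.0.101:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.101:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.101:2379"
On c2 use etcd02 and 10.0.0.102, on c3 use etcd03 and 10.0.0.103; the rest of the file is identical to c0.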
4.2.5. Start the ETCD Service
Run this on every node individually:
[root@c0 _src]# systemctl daemon-reload && systemctl enable etcd && systemctl start etcd
4.2.6. Check the ETCD Service Status
[root@c0 _src]# /home/work/_app/k8s/etcd/bin/etcdctl --ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem --cert-file=/home/work/_app/k8s/etcd/ssl/server.pem --key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem cluster-health
member 2cba54b8e3ba988a is healthy: got healthy result from https://10.0.0.103:2379
member 7c12135a398849e3 is healthy: got healthy result from https://10.0.0.102:2379
member 99c2fd4fe11e28d9 is healthy: got healthy result from https://10.0.0.100:2379
member f2fd0c12369e0d75 is healthy: got healthy result from https://10.0.0.101:2379
cluster is healthy
4.2.7. View the ETCD Cluster Members
[root@c0 _src]# /home/work/_app/k8s/etcd/bin/etcdctl --ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem --cert-file=/home/work/_app/k8s/etcd/ssl/server.pem --key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem member list
2cba54b8e3ba988a: name=etcd03 peerURLs=https://10.0.0.103:2380 clientURLs=https://10.0.0.103:2379 isLeader=false
7c12135a398849e3: name=etcd02 peerURLs=https://10.0.0.102:2380 clientURLs=https://10.0.0.102:2379 isLeader=false
99c2fd4fe11e28d9: name=etcd00 peerURLs=https://10.0.0.100:2380 clientURLs=https://10.0.0.100:2379 isLeader=true
f2fd0c12369e0d75: name=etcd01 peerURLs=https://10.0.0.101:2380 clientURLs=https://10.0.0.101:2379 isLeader=false
5. Install Flannel v0.11.0
5.1. Flanneld Network Installation
Flannel is essentially an overlay network: it wraps TCP packets inside another kind of network packet for routing, forwarding, and communication. It currently supports UDP, VxLAN, AWS VPC, GCE routing, and other forwarding back ends. In Kubernetes, Flannel is used to configure the layer-3 (network layer) fabric.
Flannel provides a layer-3 IPv4 network between the nodes of the cluster. Flannel does not control how containers are networked to the host, only how traffic is transported between hosts. It does, however, provide a CNI plugin for Kubernetes and guidance for integrating with Docker.
Without the flanneld network, pods on different nodes cannot communicate; only pods on the same node can.
When the flanneld service starts, it mainly does the following: it reads the network configuration from ETCD, carves out a subnet, registers that subnet in ETCD, and records the subnet information in /run/flannel/subnet.env.
5.2. Write the Network Segment into the ETCD Cluster
[root@c0 _src]# /home/work/_app/k8s/etcd/bin/etcdctl --ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem --cert-file=/home/work/_app/k8s/etcd/ssl/server.pem --key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem --endpoints="https://10.0.0.100:2379,https://10.0.0.101:2379,https://10.0.0.102:2379,https://10.0.0.103:2379" set /coreos.com/network/config '{ "Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}
Flanneld (v0.11.0) does not support the ETCD v3 API, so the configuration key and network segment are written with the ETCD v2 API.
The pod network segment ${CLUSTER_CIDR} must be a /16 and must match the --cluster-cidr parameter of kube-controller-manager.
5.3. Install Flannel
[root@c0 _src]# pwd
/home/work/_src
[root@c0 _src]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@c0 _src]# tar -xvf flannel-v0.11.0-linux-amd64.tar.gz
[root@c0 _src]# mv flanneld mk-docker-opts.sh /home/work/_app/k8s/kubernetes/bin/
5.4. Configure Flannel
Create /home/work/_app/k8s/kubernetes/cfg/flanneld with the following content:
[root@c0 _src]# cat /home/work/_app/k8s/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://10.0.0.100:2379,https://10.0.0.101:2379,https://10.0.0.102:2379,https://10.0.0.103:2379 -etcd-cafile=/home/work/_app/k8s/etcd/ssl/ca.pem -etcd-certfile=/home/work/_app/k8s/etcd/ssl/server.pem -etcd-keyfile=/home/work/_app/k8s/etcd/ssl/server-key.pem"
5.5. Create the Flannel systemd Unit File
Create /usr/lib/systemd/system/flanneld.service with the following content:
[root@c0 _src]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/home/work/_app/k8s/kubernetes/cfg/flanneld
ExecStart=/home/work/_app/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/home/work/_app/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
The mk-docker-opts.sh script writes the pod subnet assigned to flanneld into /run/flannel/subnet.env (the -d option above); when Docker starts later, the environment variables in this file configure the docker0 bridge.
Flanneld communicates with the other nodes over the interface that holds the system default route; on machines with multiple interfaces (for example internal and public), use the -iface parameter to select the interface.
5.6. Configure Docker to Use the Flannel Subnet
Edit /usr/lib/systemd/system/docker.service as follows:
[root@c0 _src]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# Load the environment file written by flannel and append its variables to ExecStart
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
5.7. Copy the Flannel Files to the Other Machines
Copy the Flannel binaries, the Flannel configuration file, the Flannel unit file, and the Docker unit file:
[root@c0 _src]# for N in $(seq 0 3); do scp -r /home/work/_app/k8s/kubernetes/* c$N:/home/work/_app/k8s/kubernetes/; done;
[root@c0 _src]# for N in $(seq 0 3); do scp -r /usr/lib/systemd/system/docker.service c$N:/usr/lib/systemd/system/docker.service; done;
[root@c0 _src]# for N in $(seq 0 3); do scp -r /usr/lib/systemd/system/flanneld.service c$N:/usr/lib/systemd/system/flanneld.service; done;
5.8. Start the Services
Run this on every machine individually; c0 is used as the example:
[root@c0 _src]# systemctl daemon-reload && systemctl stop docker && systemctl enable flanneld && systemctl start flanneld && systemctl start docker
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Stop Docker (and any kubelet that depends on it) before starting Flannel, so that Flannel can reconfigure the docker0 bridge.
5.9. Check the docker0 Bridge Configured by Flannel
[root@c0 _src]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:1c:42:50:8c:6a brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/8 brd 10.255.255.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::49d:e3e6:c623:9582/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 3e:80:5d:97:53:c4 brd ff:ff:ff:ff:ff:ff
    inet 10.172.46.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::3c80:5dff:fe97:53c4/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:9e:df:b9:87 brd ff:ff:ff:ff:ff:ff
    inet 10.172.46.1/24 brd 10.172.46.255 scope global docker0
       valid_lft forever preferred_lft forever
5.10. Verify the Flannel Service
[root@c0 _src]# for N in $(seq 0 3); do ssh c$N cat /run/flannel/subnet.env ; done;
DOCKER_OPT_BIP="--bip=10.172.46.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.172.46.1/24 --ip-masq=false --mtu=1450"
DOCKER_OPT_BIP="--bip=10.172.90.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.172.90.1/24 --ip-masq=false --mtu=1450"
DOCKER_OPT_BIP="--bip=10.172.5.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.172.5.1/24 --ip-masq=false --mtu=1450"
DOCKER_OPT_BIP="--bip=10.172.72.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.172.72.1/24 --ip-masq=false --mtu=1450"
6. Install Kubernetes
6.1. Create the Certificates Required by Kubernetes
6.1.1. Generate the JSON Request File for the Kubernetes Certificates
[root@c0 ~]# cd /home/work/_app/k8s/kubernetes/ssl/
[root@c0 ssl]# cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "server": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth"
        ],
        "expiry": "8760h"
      },
      "client": {
        "usages": [
          "signing",
          "key encipherment",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF
6.1.2. Generate the Kubernetes CA Configuration File and Certificate
[root@c0 ssl]# cat << EOF | tee ca-csr.json
{
  "CN": "kubernetes CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Initialize the Kubernetes CA certificate:
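Following the same pattern as the etcd CA above:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca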
6.1.3. Generate the Kube API Server Configuration File and Certificate
Create the certificate configuration file:
[root@c0 ssl]# cat << EOF | tee kube-apiserver-server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.0.0.1",
    "10.0.0.100",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "API Server"
    }
  ]
}
EOF
Generate the kube-apiserver certificate:
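Based on the profile names in ca-config.json and the kube-apiserver-server.pem file names referenced later in the kube-apiserver configuration, the command would look like:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kube-apiserver-server-csr.json | cfssljson -bare kube-apiserver-server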
6.1.4. Generate the kubelet Client Configuration File and Certificate
Create the certificate configuration file:
[root@c0 ssl]# cat << EOF | tee kubelet-client-csr.json
{
  "CN": "kubelet",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "O": "k8s",
      "OU": "Kubelet",
      "ST": "Beijing"
    }
  ]
}
EOF
Generate the kubelet client certificate:
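A sketch, using the client profile (output name assumed):
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kubelet-client-csr.json | cfssljson -bare kubelet-client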
6.1.5. Generate the Kube-Proxy Configuration File and Certificate
Create the certificate configuration file:
[root@c0 ssl]# cat << EOF | tee kube-proxy-client-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "O": "k8s",
      "OU": "System",
      "ST": "Beijing"
    }
  ]
}
EOF
Generate the kube-proxy certificate:
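A sketch; the output name matches the kube-proxy-client.pem files referenced later in env.sh:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-client-csr.json | cfssljson -bare kube-proxy-client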
6.1.6. Generate the kubectl Administrator Configuration File and Certificate
Create the kubectl administrator certificate configuration file:
[root@c0 ssl]# cat << EOF | tee kubernetes-admin-user.csr.json
{
  "CN": "admin",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "O": "k8s",
      "OU": "Cluster Admins",
      "ST": "Beijing"
    }
  ]
}
EOF
Generate the kubectl administrator certificate:
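A sketch, with an assumed output name:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kubernetes-admin-user.csr.json | cfssljson -bare kubernetes-admin-user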
6.1.7. Copy the Certificates to the Kubernetes Nodes
[root@c0 ~]# for N in $(seq 0 3); do scp -r /home/work/_app/k8s/kubernetes/ssl/*.pem c$N:/home/work/_app/k8s/kubernetes/ssl/; done;
6.2. Deploy the Kubernetes Master Node and Join It to the Cluster
The Kubernetes Master node runs the following components:
- API Server: the API Server exposes the RESTful Kubernetes API and is the single entry point for management commands. Every create, delete, update, and query of a resource is handled by the API Server before the state is written to etcd. kubectl (the client tool shipped with Kubernetes, which internally calls the Kubernetes API) talks directly to the API Server.
- Scheduler: the scheduler places pods onto suitable nodes. Viewed as a black box, its input is a pod plus a list of candidate nodes, and its output is a binding of the pod to one node. Kubernetes ships with a scheduling algorithm and also keeps the interface open, so users can define their own scheduling algorithm to suit their needs.
- Controller Manager: if the API Server handles the front office, the controller manager is the back office. Every resource has a corresponding controller, and the controller manager is in charge of running them. For example, once a pod created through the API Server has been created successfully, the API Server's job is done and the controllers take over.
- ETCD: etcd is a highly available key-value store; Kubernetes uses it to store the state of every resource, which is what the RESTful API is built on.
- Flannel: without the flanneld network, pods on different nodes cannot communicate, only pods on the same node can. Flannel reads the network configuration from etcd, carves out a subnet, registers it in etcd, and records the subnet information locally.
kube-scheduler and kube-controller-manager can run in cluster mode: leader election picks one working process while the other processes stay blocked.
6.2.1. Download and Install Kubernetes Server
[root@c0 ~]# cd /home/work/_src/
[root@c0 _src]# wget https://dl.k8s.io/v1.13.0/kubernetes-server-linux-amd64.tar.gz
[root@c0 _src]# tar -xzvf kubernetes-server-linux-amd64.tar.gz
[root@c0 _src]# cd kubernetes/server/bin/
[root@c0 bin]# cp kube-scheduler kube-apiserver kube-controller-manager kubectl kubelet kube-proxy /home/work/_app/k8s/kubernetes/bin/
Copy kubelet, kubectl, and kube-proxy from c0 to the other nodes:
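Following the scp pattern used elsewhere in this article (binary list assumed):
for N in $(seq 1 3); do scp kubelet kubectl kube-proxy c$N:/home/work/_app/k8s/kubernetes/bin/; done;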
6.2.2. Deploy the API Server
Create the TLS Bootstrapping Token:
[root@c0 ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
4470210dbf9d9c57f8543bce4683c3ce
The random token generated here is 4470210dbf9d9c57f8543bce4683c3ce; write it down, it is needed later.
Create /home/work/_app/k8s/kubernetes/cfg/token-auth-file with the following content:
[root@c0 ~]# cat /home/work/_app/k8s/kubernetes/cfg/token-auth-file
4470210dbf9d9c57f8543bce4683c3ce,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
6.2.2.1. Create the API Server Configuration File
Create /home/work/_app/k8s/kubernetes/cfg/kube-apiserver with the following content:
[root@c0 ~]# cat /home/work/_app/k8s/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://10.0.0.100:2379,https://10.0.0.101:2379,https://10.0.0.102:2379,https://10.0.0.103:2379 \
--bind-address=10.0.0.100 \
--secure-port=6443 \
--advertise-address=10.0.0.100 \
--allow-privileged=true \
--service-cluster-ip-range=10.244.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/home/work/_app/k8s/kubernetes/cfg/token-auth-file \
--service-node-port-range=30000-50000 \
--tls-cert-file=/home/work/_app/k8s/kubernetes/ssl/kube-apiserver-server.pem \
--tls-private-key-file=/home/work/_app/k8s/kubernetes/ssl/kube-apiserver-server-key.pem \
--client-ca-file=/home/work/_app/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/home/work/_app/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/home/work/_app/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/home/work/_app/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/home/work/_app/k8s/etcd/ssl/server-key.pem"
6.2.2.2. Create the API Server systemd Unit File
Create /usr/lib/systemd/system/kube-apiserver.service with the following content:
[root@c0 ~]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/home/work/_app/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/home/work/_app/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
6.2.2.3. Start the Kube API Server Service
[root@c0 ~]# systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
6.2.2.4. Check Whether the API Server Service Is Running
[root@c0 ~]# systemctl status kube-apiserver ● kube-apiserver.service - Kubernetes API ServerLoaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)Active: active (running) since Tue 2019-02-19 22:28:03 CST; 19s agoDocs: https://github.com/kubernetes/kubernetesMain PID: 4708 (kube-apiserver)Tasks: 10Memory: 370.9MCGroup: /system.slice/kube-apiserver.service└─4708 /home/work/_app/k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.0.0.100:2379,https://10.0.0.101:2379,https://10.0.0.102:2379,https://10.0.0.103:2379 --bind-address=10.0.0.100 ...Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.510271 4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (2.032168ms) 200 [kube-api...10.0.0.100:59408] Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.513149 4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (2.1516...10.0.0.100:59408] Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.515603 4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.88011ms) 200 ...10.0.0.100:59408] Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.518209 4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.980109ms) 200 [k...10.0.0.100:59408] Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.520474 4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.890751ms) 200 [kub...10.0.0.100:59408] Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.522918 4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.80026ms) 200 [kube-...10.0.0.100:59408] Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.525952 4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (2.148966ms) 200 [k...10.0.0.100:59408] Feb 19 22:28:20 c0 kube-apiserver[4708]: I0219 22:28:20.403713 4708 wrap.go:47] GET /api/v1/namespaces/default: (2.463889ms) 200 [kube-apiserver/v1.13.0 (linux/amd64) kubernetes/ddf47ac 10.0.0.100:59408] Feb 19 22:28:20 c0 kube-apiserver[4708]: I0219 22:28:20.406610 4708 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (2.080766ms) 200 [kube-apiserver/v1.13.0 (linux/amd64) kubernetes/ddf47ac 10.0.0.100:59408] Feb 19 22:28:20 c0 kube-apiserver[4708]: I0219 22:28:20.417019 4708 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.134397ms) 200 [kube-apiserver/v1.13.0 (linux/amd64) kubernetes/ddf47ac 10.0.0.100:59408]??
6.2.3. Deploy the Scheduler
Create /home/work/_app/k8s/kubernetes/cfg/kube-scheduler with the following content:
[root@c0 ~]# cat /home/work/_app/k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
6.2.3.1. Create the Kube-scheduler systemd Unit File
Create /usr/lib/systemd/system/kube-scheduler.service with the following content:
[root@c0 ~]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/home/work/_app/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/home/work/_app/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
6.2.3.2. Start the Kube-scheduler Service
[root@c0 ~]# systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
6.2.3.3. Check Whether the Kube-scheduler Service Is Running
[root@c0 ~]# systemctl status kube-scheduler ● kube-scheduler.service - Kubernetes SchedulerLoaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)Active: active (running) since Tue 2019-02-19 22:29:07 CST; 7s agoDocs: https://github.com/kubernetes/kubernetesMain PID: 4839 (kube-scheduler)Tasks: 9Memory: 47.0MCGroup: /system.slice/kube-scheduler.service└─4839 /home/work/_app/k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-electFeb 19 22:29:09 c0 kube-scheduler[4839]: I0219 22:29:09.679756 4839 controller_utils.go:1027] Waiting for caches to sync for scheduler controller Feb 19 22:29:09 c0 kube-scheduler[4839]: I0219 22:29:09.779894 4839 shared_informer.go:123] caches populated Feb 19 22:29:09 c0 kube-scheduler[4839]: I0219 22:29:09.779928 4839 controller_utils.go:1034] Caches are synced for scheduler controller Feb 19 22:29:09 c0 kube-scheduler[4839]: I0219 22:29:09.779990 4839 leaderelection.go:205] attempting to acquire leader lease kube-system/kube-scheduler... Feb 19 22:29:09 c0 kube-scheduler[4839]: I0219 22:29:09.784100 4839 leaderelection.go:289] lock is held by c0_ca2f3489-3316-11e9-87c6-001c42508c6a and has not yet expired Feb 19 22:29:09 c0 kube-scheduler[4839]: I0219 22:29:09.784135 4839 leaderelection.go:210] failed to acquire lease kube-system/kube-scheduler Feb 19 22:29:12 c0 kube-scheduler[4839]: I0219 22:29:12.829896 4839 leaderelection.go:289] lock is held by c0_ca2f3489-3316-11e9-87c6-001c42508c6a and has not yet expired Feb 19 22:29:12 c0 kube-scheduler[4839]: I0219 22:29:12.829921 4839 leaderelection.go:210] failed to acquire lease kube-system/kube-scheduler Feb 19 22:29:14 c0 kube-scheduler[4839]: I0219 22:29:14.941554 4839 leaderelection.go:289] lock is held by c0_ca2f3489-3316-11e9-87c6-001c42508c6a and has not yet expired Feb 19 22:29:14 c0 kube-scheduler[4839]: I0219 22:29:14.941573 4839 leaderelection.go:210] failed to acquire lease kube-system/kube-scheduler??
6.2.4. Deploy the Kube-Controller-Manager Component
6.2.4.1. Create the kube-controller-manager Configuration File
Create /home/work/_app/k8s/kubernetes/cfg/kube-controller-manager with the following content:
[root@c0 ~]# cat /home/work/_app/k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.244.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/home/work/_app/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/home/work/_app/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/home/work/_app/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/home/work/_app/k8s/kubernetes/ssl/ca-key.pem"
6.2.4.2. Create the kube-controller-manager systemd Unit File
Create /usr/lib/systemd/system/kube-controller-manager.service with the following content:
[root@c0 ~]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/home/work/_app/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/home/work/_app/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
6.2.4.3. Start the kube-controller-manager Service
[root@c0 ~]# systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
6.2.4.4. Check Whether the kube-controller-manager Service Is Running
[root@c0 ~]# systemctl status kube-controller-manager ● kube-controller-manager.service - Kubernetes Controller ManagerLoaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)Active: active (running) since Tue 2019-02-19 22:29:40 CST; 12s agoDocs: https://github.com/kubernetes/kubernetesMain PID: 4933 (kube-controller)Tasks: 7Memory: 106.7MCGroup: /system.slice/kube-controller-manager.service└─4933 /home/work/_app/k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.244.0.0/16 --cluster-name=kubernet...Feb 19 22:29:41 c0 kube-controller-manager[4933]: I0219 22:29:41.276841 4933 deprecated_insecure_serving.go:51] Serving insecurely on 127.0.0.1:10252 Feb 19 22:29:41 c0 kube-controller-manager[4933]: I0219 22:29:41.278183 4933 leaderelection.go:205] attempting to acquire leader lease kube-system/kube-controller-manager... Feb 19 22:29:41 c0 kube-controller-manager[4933]: I0219 22:29:41.301326 4933 leaderelection.go:289] lock is held by c0_cae65875-3316-11e9-9071-001c42508c6a and has not yet expired Feb 19 22:29:41 c0 kube-controller-manager[4933]: I0219 22:29:41.301451 4933 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager Feb 19 22:29:44 c0 kube-controller-manager[4933]: I0219 22:29:44.679518 4933 leaderelection.go:289] lock is held by c0_cae65875-3316-11e9-9071-001c42508c6a and has not yet expired Feb 19 22:29:44 c0 kube-controller-manager[4933]: I0219 22:29:44.679550 4933 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager Feb 19 22:29:47 c0 kube-controller-manager[4933]: I0219 22:29:47.078743 4933 leaderelection.go:289] lock is held by c0_cae65875-3316-11e9-9071-001c42508c6a and has not yet expired Feb 19 22:29:47 c0 kube-controller-manager[4933]: I0219 22:29:47.078762 4933 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager Feb 19 22:29:49 c0 kube-controller-manager[4933]: I0219 22:29:49.529247 4933 leaderelection.go:289] lock is held by c0_cae65875-3316-11e9-9071-001c42508c6a and has not yet expired Feb 19 22:29:49 c0 kube-controller-manager[4933]: I0219 22:29:49.529266 4933 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager??
6.2.5. Verify the API Server Service
Add kubectl to the $PATH variable:
[root@c0 ~]# echo "PATH=/home/work/_app/k8s/kubernetes/bin:$PATH:$HOME/bin" >> /etc/profile
[root@c0 ~]# source /etc/profile
Check the component status:
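At this point only the control-plane components exist, so a typical check is:
kubectl get cs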
6.2.6. Deploy the Kubelet
6.2.6.1. Create the bootstrap.kubeconfig and kube-proxy.kubeconfig Files
Create /home/work/_app/k8s/kubernetes/cfg/env.sh with the following content:
[root@c0 cfg]# pwd
/home/work/_app/k8s/kubernetes/cfg
[root@c0 cfg]# cat env.sh
#!/bin/bash
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=4470210dbf9d9c57f8543bce4683c3ce
KUBE_APISERVER="https://10.0.0.100:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/home/work/_app/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=/home/work/_app/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/home/work/_app/k8s/kubernetes/ssl/kube-proxy-client.pem \
  --client-key=/home/work/_app/k8s/kubernetes/ssl/kube-proxy-client-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
BOOTSTRAP_TOKEN is the value 4470210dbf9d9c57f8543bce4683c3ce generated in the TLS Bootstrapping Token step above.
Run the script:
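A typical invocation is:
cd /home/work/_app/k8s/kubernetes/cfg && sh env.sh
This produces bootstrap.kubeconfig and kube-proxy.kubeconfig in the current directory.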
Copy bootstrap.kubeconfig and kube-proxy.kubeconfig to the other nodes:
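Following the scp pattern used elsewhere in this article:
for N in $(seq 1 3); do scp bootstrap.kubeconfig kube-proxy.kubeconfig c$N:/home/work/_app/k8s/kubernetes/cfg/; done;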
6.2.6.2. Create the kubelet Configuration File
Create the parameter file /home/work/_app/k8s/kubernetes/cfg/kubelet.config with the following content:
[root@c0 cfg]# cat /home/work/_app/k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.0.0.100
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.244.0.1"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
Create the startup parameter file /home/work/_app/k8s/kubernetes/cfg/kubelet (a sketch follows after the notes below).
When kubelet starts, if the file given by --kubeconfig does not exist, the bootstrap kubeconfig given by --bootstrap-kubeconfig is used to request a client certificate from the API server.
Once the kubelet's certificate request is approved, the resulting key and certificate are placed in the --cert-dir directory.
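A sketch of the kubelet parameter file, consistent with the flags visible in the service status output below; the --config, --cert-dir and --pod-infra-container-image values are assumptions:
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.0.0.100 \
--kubeconfig=/home/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/home/work/_app/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/home/work/_app/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/home/work/_app/k8s/kubernetes/ssl_cert \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"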
6.2.6.3. Bind the kubelet-bootstrap User to the System Cluster Role
[root@c0 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
6.2.6.4. Create the kubelet systemd Unit File
Create /usr/lib/systemd/system/kubelet.service with the following content:
[root@c0 cfg]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/home/work/_app/k8s/kubernetes/cfg/kubelet
ExecStart=/home/work/_app/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
6.2.6.5. Start the kubelet Service
[root@c0 cfg]# systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
6.2.6.6. Check the kubelet Service Status
[root@c0 cfg]# systemctl status kubelet ● kubelet.service - Kubernetes KubeletLoaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)Active: active (running) since Tue 2019-02-19 22:31:23 CST; 14s agoMain PID: 5137 (kubelet)Tasks: 13Memory: 128.7MCGroup: /system.slice/kubelet.service└─5137 /home/work/_app/k8s/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=10.0.0.100 --kubeconfig=/home/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/home/work/_app/k8s/kub...Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.488086 5137 eviction_manager.go:226] eviction manager: synchronize housekeeping Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502001 5137 helpers.go:836] eviction manager: observations: signal=imagefs.inodesFree, available: 107287687, capacity: 107374144, time: 2019-02-19 22:31:34.48876...T m=+10.738964114 Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502103 5137 helpers.go:836] eviction manager: observations: signal=pid.available, available: 32554, capacity: 32Ki, time: 2019-02-19 22:31:34.50073593 +0800 CST m=+10.750931769 Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502132 5137 helpers.go:836] eviction manager: observations: signal=memory.available, available: 2179016Ki, capacity: 2819280Ki, time: 2019-02-19 22:31:34.4887683...T m=+10.738964114 Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502143 5137 helpers.go:836] eviction manager: observations: signal=allocatableMemory.available, available: 2819280Ki, capacity: 2819280Ki, time: 2019-02-19 22:31...T m=+10.751961068 Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502151 5137 helpers.go:836] eviction manager: observations: signal=nodefs.available, available: 1068393320Ki, capacity: 1048064Mi, time: 2019-02-19 22:31:34.4887...T m=+10.738964114 Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502161 5137 helpers.go:836] eviction manager: observations: signal=nodefs.inodesFree, available: 107287687, capacity: 107374144, time: 2019-02-19 22:31:34.488768...T m=+10.738964114 Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502170 5137 helpers.go:836] eviction manager: observations: signal=imagefs.available, available: 1068393320Ki, capacity: 1048064Mi, time: 2019-02-19 22:31:34.488...T m=+10.738964114 Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502191 5137 eviction_manager.go:317] eviction manager: no resources are starved Feb 19 22:31:36 c0 kubelet[5137]: I0219 22:31:36.104200 5137 kubelet.go:1995] SyncLoop (housekeeping)??
6.2.7. Approve the Master Joining the Cluster
CSRs can be approved manually, outside the built-in approval flow.
An administrator can approve certificate requests manually with kubectl:
kubectl get csr lists the CSR requests, and kubectl describe csr <name> shows the details of one request.
kubectl certificate approve <name> and kubectl certificate deny <name> approve or deny a CSR request.
6.2.7.1. View the CSR List
[root@c0 cfg]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k   14m   kubelet-bootstrap   Pending
6.2.7.2. Approve Joining the Cluster
[root@c0 cfg]# kubectl certificate approve node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k
certificatesigningrequest.certificates.k8s.io/node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k approved
6.2.7.3. Verify That the Master Has Joined the Cluster
View the CSR list again:
[root@c0 cfg]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k   15m   kubelet-bootstrap   Approved,Issued
6.3. Deploy the kube-proxy Component
kube-proxy runs on every Node. It watches the apiserver for changes to services and endpoints and creates routing rules to load-balance service traffic. The steps below use c0 as the example.
6.3.1. Create the kube-proxy Parameter Configuration File
Create /home/work/_app/k8s/kubernetes/cfg/kube-proxy with the following content:
[root@c0 ~]# cat /home/work/_app/k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.0.0.100 \
--cluster-cidr=10.244.0.0/16 \
--kubeconfig=/home/work/_app/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
On the other nodes, change --hostname-override to that node's IP.
6.3.2. Create the kube-proxy systemd Unit File
Create /usr/lib/systemd/system/kube-proxy.service with the following content:
[root@c0 ~]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/home/work/_app/k8s/kubernetes/cfg/kube-proxy
ExecStart=/home/work/_app/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
6.3.3. Start the kube-proxy Service
[root@c0 ~]# systemctl daemon-reload && systemctl enable kube-proxy && systemctl start kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
6.3.4. Check the kube-proxy Service Status
[root@c0 cfg]# systemctl status kube-proxy ● kube-proxy.service - Kubernetes ProxyLoaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)Active: active (running) since Mon 2019-02-18 06:08:51 CST; 3h 49min agoMain PID: 12660 (kube-proxy)Tasks: 0Memory: 1.9MCGroup: /system.slice/kube-proxy.service? 12660 /home/work/_app/k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=10.0.0.100 --cluster-cidr=10.244.0.0/16 --kubeconfig=/home/work/_app/k8s/kubernetes/cfg/...Feb 18 09:58:38 c0 kube-proxy[12660]: I0218 09:58:38.205387 12660 config.go:141] Calling handler.OnEndpointsUpdate Feb 18 09:58:38 c0 kube-proxy[12660]: I0218 09:58:38.250931 12660 config.go:141] Calling handler.OnEndpointsUpdate Feb 18 09:58:40 c0 kube-proxy[12660]: I0218 09:58:40.249487 12660 config.go:141] Calling handler.OnEndpointsUpdate Feb 18 09:58:40 c0 kube-proxy[12660]: I0218 09:58:40.290336 12660 config.go:141] Calling handler.OnEndpointsUpdate Feb 18 09:58:42 c0 kube-proxy[12660]: I0218 09:58:42.264320 12660 config.go:141] Calling handler.OnEndpointsUpdate Feb 18 09:58:42 c0 kube-proxy[12660]: I0218 09:58:42.318954 12660 config.go:141] Calling handler.OnEndpointsUpdate Feb 18 09:58:44 c0 kube-proxy[12660]: I0218 09:58:44.273290 12660 config.go:141] Calling handler.OnEndpointsUpdate Feb 18 09:58:44 c0 kube-proxy[12660]: I0218 09:58:44.359236 12660 config.go:141] Calling handler.OnEndpointsUpdate Feb 18 09:58:46 c0 kube-proxy[12660]: I0218 09:58:46.287980 12660 config.go:141] Calling handler.OnEndpointsUpdate Feb 18 09:58:46 c0 kube-proxy[12660]: I0218 09:58:46.377475 12660 config.go:141] Calling handler.OnEndpointsUpdate??
6.4. Verify the Server Services
Check the Master status:
[root@c0 cfg]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok
componentstatus/controller-manager   Healthy   ok
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-3               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}

NAME              STATUS   ROLES    AGE   VERSION
node/10.0.0.100   Ready    <none>   51m   v1.13.0
6.5. Join the Kubernetes Nodes to the Cluster
The Kubernetes Node machines run the following components:
- kube-proxy: implements service discovery and reverse proxying in Kubernetes. It supports TCP and UDP connection forwarding and by default distributes client traffic across the backend pods of a service using round robin. For service discovery it uses etcd's watch mechanism to track changes to service and endpoint objects and maintains a service-to-endpoint mapping, so backend pod IP changes are invisible to callers; it also supports session affinity.
- kubelet: the Master's agent on each Node and the most important module on the node. It maintains and manages all containers on the node (containers not created through Kubernetes are not managed) and keeps the actual state of the pods in line with the desired state. When kubelet starts it registers the node with kube-apiserver, and the built-in cadvisor collects and monitors node resource usage. For security, only the HTTPS port is opened; requests are authenticated and authorized, and unauthorized access (for example from apiserver or heapster) is rejected.
- Flannel: without the flanneld network, pods on different nodes cannot communicate, only pods on the same node can. Flannel reads the network configuration from etcd, carves out a subnet, registers it in etcd, and records the subnet information locally.
- ETCD: a highly available key-value store that Kubernetes uses to hold the state of every resource, which the RESTful API is built on.
6.5.1. Create the kubelet Configuration File
Run these steps on every Node; c1 is used as the example.
Create the parameter file /home/work/_app/k8s/kubernetes/cfg/kubelet.config; it is the same as on c0 except that address must be changed to the node's own IP (see the sketch below).
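A sketch of kubelet.config on c1, derived from the c0 version with address changed:
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.0.0.101
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.244.0.1"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true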
Create the startup parameter file /home/work/_app/k8s/kubernetes/cfg/kubelet; it matches the c0 sketch in section 6.2.6.2, with --hostname-override changed to this node's IP (10.0.0.101 on c1).
6.5.2. Create the kubelet systemd Unit File
Create /usr/lib/systemd/system/kubelet.service with the following content:
[root@c1 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/home/work/_app/k8s/kubernetes/cfg/kubelet
ExecStart=/home/work/_app/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
6.5.3. Start the kubelet Service
[root@c1 ~]# systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
6.5.4. Check the kubelet Service Status
[root@c1 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-02-18 06:27:54 CST; 6s ago
 Main PID: 19123 (kubelet)
    Tasks: 12
   Memory: 18.3M
   CGroup: /system.slice/kubelet.service
           └─19123 /home/work/_app/k8s/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=10.0.0.101 --kubeconfig=/home/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig --bootstrap-k...

Feb 18 06:27:54 c1 kubelet[19123]: I0218 06:27:54.784286   19123 mount_linux.go:179] Detected OS with systemd
Feb 18 06:27:54 c1 kubelet[19123]: I0218 06:27:54.784416   19123 server.go:407] Version: v1.13.0
6.5.5. Approve the Node Joining the Cluster
View the CSR list; the node has a Pending request:
[root@c0 cfg]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k   84m     kubelet-bootstrap   Approved,Issued
node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA   2m45s   kubelet-bootstrap   Pending
The following command shows the details of the request; you can see it was sent from c1's IP address, 10.0.0.101:
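For example:
kubectl describe csr node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA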
Approve the node joining the cluster:
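Following the same pattern as the master's CSR:
kubectl certificate approve node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA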
View the CSR list again; the node's join request has been approved:
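For example:
kubectl get csr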
6.5.6. Remove a Node from the Cluster
Before deleting a node, drain the pods running on it.
Then run the delete command to remove the node.
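A typical sequence (the node is registered by its IP, so the name is assumed to be 10.0.0.101 here):
kubectl drain 10.0.0.101 --delete-local-data --force --ignore-daemonsets
kubectl delete node 10.0.0.101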
To remove the node cleanly so that it sends a fresh CSR to the cluster the next time it starts, you also have to delete the cached CSR data on the deleted node.
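With the directory layout used in this article, the cached data would be the generated kubeconfig and the certificates in the kubelet cert-dir; the exact paths below are assumptions:
rm -f /home/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig
rm -f /home/work/_app/k8s/kubernetes/ssl_cert/kubelet*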
Once the cached CSR data is deleted, restarting kubelet makes the Master receive a new CSR request.
6.5.7. Label the Nodes
View the status of all nodes:
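For example:
kubectl get nodes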
Label c0 as a master:
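A typical labeling command (the node is registered by its IP):
kubectl label node 10.0.0.100 node-role.kubernetes.io/master=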
Label c1 as a node:
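Likewise for c1:
kubectl label node 10.0.0.101 node-role.kubernetes.io/node=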
Remove the master label from c1:
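A trailing minus removes a label (assuming c1 had been labeled master):
kubectl label node 10.0.0.101 node-role.kubernetes.io/master-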
7. References
Linux7/CentOS7 SELinux introduction
Kubernetes networking principles and solutions
Installing a Kubernetes Cluster on CentOS 7
How to install Kubernetes(k8) in RHEL or Centos in just 7 steps
docker-kubernetes-tls-guide
kubernetes1.13.1+etcd3.3.10+flanneld0.10 cluster deployment
8. FAQ
How do I generate a new NIC UUID in a virtual machine?
For example, I installed c1 on Parallels and cloned it to c2. The IP can be changed as described earlier in this article; to change the UUID as well, generate a new UUID for the NIC with the following command:
[root@c2 ~]# uuidgen eth0
6ea1a665-0126-456c-80c7-1f69f32e83b7
Author: 迦壹
Original post: Centos7 二进制安装 Kubernetes 1.13, https://www.cnblogs.com/lion.net/p/10408512.html
Reproduction notice: this article may be reproduced, but the original source, author information, and this copyright notice must be indicated with a hyperlink. Thank you.

If you found this article helpful, you can donate via:
Bitcoin address: 1KdgydfKMcFVpicj5w4vyn3T88dwjBst6Y
Ethereum address: 0xbB0a92d634D7b9Ac69079ed0e521CC2e0a97c420