Ultra-detailed deployment of the latest Kubernetes cluster (Kubernetes v1.15.1, Docker 19.03.1), plus basic operations and an introduction to services.
Update: today I found time to write a one-click script for yesterday's deployment. Date: Aug 3, 2019
"Kubernetes 1.15.1, one-click deployment with a shell script — just finished testing, practical."
Recently I used some spare time to write up the whole process of deploying and learning K8S and the problems I ran into, and I am sharing it with anyone who needs it. It also helps consolidate my own knowledge.
There are many ways to install K8S, for example from binaries. This article uses the kubeadm method, pulling the images online, with the latest image versions. Without further ado, let's get started.
Component version list:
| Image / software component | Version |
|---|---|
| Virtual Box | 6.x |
| Secure CRT | X |
| Docker version | 19.03.1 |
| OS | centos7.x |
| k8s.gcr.io/kube-scheduler | v1.15.1 |
| k8s.gcr.io/kube-proxy | v1.15.1 |
| k8s.gcr.io/kube-controller-manager | v1.15.1 |
| k8s.gcr.io/kube-apiserver | v1.15.1 |
| quay.io/calico/node | v3.1.6 |
| quay.io/calico/cni | v3.1.6 |
| quay.io/calico/kube-controllers | v3.1.6 |
| k8s.gcr.io/coredns | 1.3.1 |
| k8s.gcr.io/etcd | 3.3.10 |
| quay.io/calico/node | v3.1.0 |
| quay.io/calico/cni | v3.1.0 |
| quay.io/calico/kube-controllers | v3.1.0 |
| k8s.gcr.io/pause | 3.1 |
I. Preparation
Recommended configuration for each virtual machine:
| Memory | CPUs |
|---|---|
| 2048M | 2 |
The K8S nodes:
| hostname | ip addr |
|---|---|
| k8s-node1 | 192.168.10.9 |
| k8s-node2 | 192.168.10.10 |
| k8s-node3 | 192.168.10.11 |
First, let's install Linux.
Set the memory size (I set 4 GB here):
Dynamically allocated virtual disk:
Set the virtual disk size to 100 GB:
Select the CentOS-7-x86_64-1511.iso image:
Set the number of processors to 2:
Installing the Linux OS itself is simple and not the focus here, so I won't go through it step by step. If you run into problems during the installation, see: https://blog.csdn.net/qq_28513801/article/details/90143552
After the installation finishes, edit the NIC configuration file and restart the network. You can edit the file directly with vi, or change it more quickly with sed.
[root@localhost ~]# sed -i 's/^ONBOOT=no/ONBOOT=yes/g' /etc/sysconfig/network-scripts/ifcfg-enp0s3
[root@localhost ~]# /etc/init.d/network restart
As shown below:
To make working with the Linux system easier, we connect with the Secure CRT terminal tool. Since installing K8S needs Internet access, the VM uses NAT networking, so we open the VM settings and add a port-forwarding rule to conveniently operate the Linux system.
Click Port Forwarding and pick a port — avoid the commonly used ones.
Once the port is set, you can connect with CRT. Add a rule that maps the physical host's port 2222 to the VM's port 22:
Because of the port forwarding, the IP address to connect to is the local address 127.0.0.1, and the port is no longer 22 but the 2222 we configured. Note: since the port is mapped on the physical host, the host address must be entered as 127.0.0.1:2222.
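If you prefer the command line to the VirtualBox GUI, roughly the same NAT rule can be added with VBoxManage (a sketch only; the VM name "k8s-node1" and the rule name "ssh-node1" are assumptions, and modifyvm requires the VM to be powered off):

VBoxManage modifyvm "k8s-node1" --natpf1 "ssh-node1,tcp,127.0.0.1,2222,,22"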
Click Accept and save.
Enter the password.
Then do some simple setup: set the encoding to UTF-8 to avoid garbled characters.
The connection now works.
Next, change the hostname to k8s-node1:
[root@localhost ~]# hostnamectl set-hostname k8s-node1
[root@localhost ~]# bash
[root@k8s-node1 ~]#
1 Starting the installation
Do not use the yum repositories that ship with CentOS 7 — installing packages and dependencies with them is very slow and often times out. We replace them with the Aliyun mirror instead. Run the following command to replace /etc/yum.repos.d/CentOS-Base.repo:
[root@k8s-node1 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
bash: wget: command not found
Because this is a minimal install, wget is not available, so install it first.
[root@k8s-node1 ~]# yum search wget
Loaded plugins: fastestmirror
base | 3.6 kB 00:00:00
extras | 3.4 kB 00:00:00
updates | 3.4 kB 00:00:00
(1/4): base/7/x86_64/group_gz | 166 kB 00:00:00
(2/4): extras/7/x86_64/primary_db | 205 kB 00:00:01
(3/4): base/7/x86_64/primary_db | 6.0 MB 00:00:01
(4/4): updates/7/x86_64/primary_db | 7.4 MB 00:00:03
Determining fastest mirrors
 * base: mirrors.163.com
 * extras: mirrors.neusoft.edu.cn
 * updates: mirrors.163.com
=============================================================== N/S matched: wget ===============================================================
wget.x86_64 : A utility for retrieving files using the HTTP or FTP protocols
Name and summary matches only, use "search all" for everything.
[root@k8s-node1 ~]# yum install -y wget
After the installation completes, re-run the command:
[root@k8s-node1 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
--2019-08-01 07:16:22-- http://mirrors.aliyun.com/repo/Centos-7.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 150.138.121.102, 150.138.121.100, 150.138.121.98, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|150.138.121.102|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2523 (2.5K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/CentOS-Base.repo’
100%[=======================================================================================================>] 2,523       --.-K/s   in 0s
2019-08-01 07:16:22 (287 MB/s) - ‘/etc/yum.repos.d/CentOS-Base.repo’ saved [2523/2523]
[root@k8s-node1 ~]#
[root@k8s-node1 ~]# yum makecache    # build the yum cache
Loaded plugins: fastestmirror
base | 3.6 kB 00:00:00
extras | 3.4 kB 00:00:00
updates | 3.4 kB 00:00:00
(1/8): extras/7/x86_64/prestodelta | 65 kB 00:00:00
(2/8): extras/7/x86_64/other_db | 127 kB 00:00:00
(3/8): extras/7/x86_64/filelists_db | 246 kB 00:00:00
(4/8): base/7/x86_64/other_db | 2.6 MB 00:00:01
(5/8): updates/7/x86_64/prestodelta | 945 kB 00:00:01
(6/8): base/7/x86_64/filelists_db | 7.1 MB 00:00:03
(7/8): updates/7/x86_64/other_db | 764 kB 00:00:01
(8/8): updates/7/x86_64/filelists_db | 5.2 MB 00:00:03
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Metadata Cache Created
1.2 Turn off the firewall
The firewall must be turned off in advance, otherwise it becomes a troublemaker when installing the K8S cluster later. Run the following to stop it and disable it at boot:
[root@k8s-node1 ~]# systemctl stop firewalld & systemctl disable firewalld
[1] 17699
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@k8s-node1 ~]#
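Note that the single & above runs the first command in the background rather than chaining the two commands (the disable still ran here, as the "Removed symlink" lines show). If you want the explicit two-step form, the usual way is:

[root@k8s-node1 ~]# systemctl stop firewalld && systemctl disable firewalld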
1.3 Turn off swap
As with an ElasticSearch cluster, Linux's swap mechanism must be disabled when installing a K8S cluster, otherwise swapping will hurt performance and stability. We can set this up in advance.
Running swapoff -a turns swap off temporarily, but it comes back after a reboot:
[root@k8s-node1 ~]# swapoff -a
[1]+ Done systemctl stop firewalld
[root@k8s-node1 ~]#
Edit /etc/fstab and comment out the line that contains swap; this disables swap permanently across reboots, as shown below:
[root@k8s-node1 ~]# vi /etc/fstab
/dev/mapper/centos-root / xfs defaults 0 0
UUID=dedcd30c-93a8-4e26-b111-d7c68a752bf9 /boot xfs defaults 0 0
#/dev/mapper/centos-swap swap swap defaults 0 0
Or simply run: sed -i '/ swap / s/^/#/' /etc/fstab
Once swap is off, check with the top command; output like the screenshot below means it worked:
2 Install Docker
Of course, Docker must be installed before K8S. Here we install the latest Docker CE with yum. The official Docker documentation is the best tutorial:
https://docs.docker.com/install/linux/docker-ce/centos/#prerequisites
However, because of the Great Firewall the documentation site is often unreachable, and yum installs also frequently time out. We work around this as follows:
2.1 Add the repository
Add the Aliyun Docker repository:
[root@k8s-node1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
bash: yum-config-manager: command not found
If you see the error above, install the package that provides the command:
[root@k8s-node1 ~]# yum search yum-config-manager
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
========================================================== Matched: yum-config-manager ==========================================================
yum-utils.noarch : Utilities based around the yum package manager
[root@k8s-node1 ~]# yum install -y yum-utils.noarch    # install the package that provides yum-config-manager
Then re-run the command:
[root@k8s-node1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
[root@k8s-node1 ~]# yum makecache
2.2 Install Docker
Run the following command to install the latest Docker:
[root@k8s-node1 ~]# yum install docker-ce -y
Output like the following means it is installed:
After the installation, check the Docker version with docker --version; you can see that the latest version as of this writing, 19.03.1, is installed:
[root@k8s-node1 ~]# docker --version
Docker version 19.03.1, build 74b1e89
2.3 Start Docker
Start the Docker service and enable it at boot:
[root@k8s-node1 ~]# systemctl start docker & systemctl enable docker
[1] 20629
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8s-node1 ~]#
Run a command to verify:
[root@k8s-node1 ~]# docker run hello-world
Output like the following means Docker started successfully:
The next sections describe installing Kubernetes on Node1 in detail; once that is done, we clone the VM to create Node2 and Node3.
We call the existing VM Node1 and use it as the master node. To save work, after installing Kubernetes on Node1 we use VirtualBox's clone feature to create two identical VMs as worker nodes. The three roles are:
k8s-node1: Master
k8s-node2: Worker
k8s-node3: Worker
2 Install Kubernetes
The official documentation is always the best reference: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ (for reference)
2.1 Configure the K8S yum repository
The official repository is unreachable, so use the Aliyun mirror instead. Run the following to add the kubernetes.repo repository:
[root@k8s-node1 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@k8s-node1 ~]#
2.2 Disable SELinux
Run: setenforce 0
[root@k8s-node1 ~]# setenforce 0
[root@k8s-node1 ~]# getenforce
Permissive
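setenforce 0 only lasts until the next reboot. To keep SELinux in permissive mode permanently, you can also edit /etc/selinux/config (a small sketch; adjust if your config already says disabled):

[root@k8s-node1 ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config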
A small suggestion
If you plan to build a highly available setup, enable the IP_VS kernel modules.
Reason: pod load balancing is implemented by kube-proxy, which has two modes — the default iptables mode and ipvs mode; ipvs simply performs better than iptables.
Master high availability and load balancing of cluster services later depend on ipvs, so load the following kernel modules.
The modules to enable are:
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
Check whether they are already loaded:
cut -f1 -d " " /proc/modules | grep -e ip_vs -e nf_conntrack_ipv4
If not, load them with the following commands:
[root@k8s-node1 ~]# modprobe -- ip_vs
[root@k8s-node1 ~]# modprobe -- ip_vs_rr
[root@k8s-node1 ~]# modprobe -- ip_vs_wrr
[root@k8s-node1 ~]# modprobe -- ip_vs_sh
[root@k8s-node1 ~]# modprobe -- nf_conntrack_ipv4
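modprobe also only lasts until reboot. One common way to have these modules loaded automatically at boot on CentOS 7 is a systemd modules-load file (a sketch; the file name ipvs.conf is an assumption):

[root@k8s-node1 ~]# cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF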
Now let's continue with the installation.
2.3 Install the K8S components
Run the following to install kubelet, kubeadm and kubectl:
[root@k8s-node1 ~]# yum install -y kubelet kubeadm kubectl
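The bare yum install above grabs whatever version is newest in the repository at the time. Since the component images must match the installed tools (as noted later in the image-download section), you may want to pin the version explicitly instead — a sketch, assuming the 1.15.1 packages are available in the Aliyun repository:

[root@k8s-node1 ~]# yum install -y kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1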
2.4 Configure kubelet's cgroup driver
Make sure Docker's cgroup driver and kubelet's cgroup driver are the same:
[root@k8s-node1 ~]# docker info | grep -i cgroup
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
         Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Cgroup Driver: cgroupfs
[root@k8s-node1 ~]#
As shown:
Then check kubelet's cgroup configuration.
[root@k8s-node1 ~]# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
cat: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf: No such file or directory
If it says the file does not exist, check /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf instead:
[root@k8s-node1 ~]# cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
[root@k8s-node1 ~]#
If that setting is not there, add it:
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
As shown:
Then reload systemd:
[root@k8s-node1 ~]# systemctl daemon-reload
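This walkthrough keeps cgroupfs on both sides. The kubeadm preflight check later on warns that systemd is the recommended driver; if you would rather switch Docker to systemd instead (in which case kubelet's --cgroup-driver must be systemd too), a sketch looks like this:

[root@k8s-node1 ~]# cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@k8s-node1 ~]# systemctl daemon-reload && systemctl restart docker
[root@k8s-node1 ~]# docker info | grep -i cgroup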
3 Start kubelet
Note: according to the official documentation, after installing kubelet, kubeadm and kubectl you are expected to start kubelet:
[root@k8s-node1 ~]# systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
In practice, however, it fails to start with the following error:
[root@k8s-node1 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Thu 2019-08-01 07:52:48 EDT; 6s ago
     Docs: https://kubernetes.io/docs/
  Process: 21245 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
 Main PID: 21245 (code=exited, status=255)

Aug 01 07:52:48 k8s-node1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Aug 01 07:52:48 k8s-node1 systemd[1]: Unit kubelet.service entered failed state.
Aug 01 07:52:48 k8s-node1 systemd[1]: kubelet.service failed.
[root@k8s-node1 ~]#
Checking the logs for the error above, we find:
error: open /var/lib/kubelet/config.yaml: no such file or directory
In fact we have not configured kubelet yet, so this is completely normal and can be ignored. In other words, kubelet failing to start right now does not affect the following steps — continue!
4 Download the K8S Docker images (important)
This article uses the official kubeadm tool to initialize the K8S cluster. By default, kubeadm init tries to reach Google's servers to download the Docker images the cluster depends on, so it times out and fails.
However, as long as we import those images in advance, kubeadm init will find them locally and will not try to reach Google at all.
There are ways to obtain these images online, such as building them via Docker Hub, but they are somewhat cumbersome.
Method 1 (Method 2 is recommended):
I have already packaged all the Docker images used during initialization; the image version is v1.15.1. Feel free to use it.
鏈接:https://pan.baidu.com/s/1Pk5B6e2-14yZW11PYMdtbQ
提取碼:7wox
···········
Incidentally, the command used to pack the images was:
[root@k8s-node1 mnt]# docker save $(docker images | grep -v REPOSITORY | awk 'BEGIN{OFS=":";ORS="\n"}{print $1,$2}') -o k8s_images_v1.5.1.tar
After downloading the archive, import it with:
[root@k8s-node1 mnt]# docker load < k8s_images_v1.5.1.tar
Then run
[root@k8s-node1 mnt]# docker images
and you will see the required base images. Alternatively, if you have a tar file for each base image, just write a small sh script, put it in the same directory as the tar files, and run it to load them all, as shown below:
[root@k8s-node1 mnt]# vi docker_load.sh
docker load < quay.io#calico#node.tar
docker load < quay.io#calico#cni.tar
docker load < quay.io#calico#kube-controllers.tar
docker load < k8s.gcr.io#kube-proxy-amd64.tar
docker load < k8s.gcr.io#kube-scheduler-amd64.tar
docker load < k8s.gcr.io#kube-controller-manager-amd64.tar
docker load < k8s.gcr.io#kube-apiserver-amd64.tar
docker load < k8s.gcr.io#etcd-amd64.tar
docker load < k8s.gcr.io#k8s-dns-dnsmasq-nanny-amd64.tar
docker load < k8s.gcr.io#k8s-dns-sidecar-amd64.tar
docker load < k8s.gcr.io#k8s-dns-kube-dns-amd64.tar
docker load < k8s.gcr.io#pause-amd64.tar
docker load < quay.io#coreos#etcd.tar
docker load < quay.io#calico#node.tar
docker load < quay.io#calico#cni.tar
docker load < quay.io#calico#kube-policy-controller.tar
docker load < gcr.io#google_containers#etcd.tar
[root@k8s-node1 mnt]# source docker_load.sh
Put the images and this script in the same directory and run it to import the Docker images, then run docker images.
Method 2 (recommended):
Download the images needed for initialization in advance.
Option 1: pull from a domestic mirror, then re-tag (recommended)
First check which images are needed. Note: the versions of the four core component images you pull must match the versions of kubelet, kubeadm and kubectl you installed.
[root@k8s-node1 ~]# kubeadm config images list
W0801 08:08:18.271449 21980 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0801 08:08:18.271760 21980 version.go:99] falling back to the local client version: v1.15.1
k8s.gcr.io/kube-apiserver:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1
k8s.gcr.io/kube-proxy:v1.15.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
Download the images. (Here we use a one-liner to pull them all; as the error at the end shows, the coredns/coredns:1.3.1 image fails to pull this way and has to be pulled manually afterwards.)
[root@k8s-node1 ~]# kubeadm config images list |sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#mirrorgooglecontainers#g' |sh -x
W0801 08:09:14.832272 22033 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0801 08:09:14.832330 22033 version.go:99] falling back to the local client version: v1.15.1
+ docker pull mirrorgooglecontainers/kube-apiserver:v1.15.1
v1.15.1: Pulling from mirrorgooglecontainers/kube-apiserver
6cf6a0b0da0d: Pull complete
5899bcec7bbf: Pull complete
Digest: sha256:db15b7caa01ebea2510605f391fabaed06674438315a7b6313e18e93affa15bb
Status: Downloaded newer image for mirrorgooglecontainers/kube-apiserver:v1.15.1
docker.io/mirrorgooglecontainers/kube-apiserver:v1.15.1
+ docker pull mirrorgooglecontainers/kube-controller-manager:v1.15.1
v1.15.1: Pulling from mirrorgooglecontainers/kube-controller-manager
6cf6a0b0da0d: Already exists
5c943020ad72: Pull complete
Digest: sha256:271de9f26d55628cc58e048308bef063273fe68352db70dca7bc38df509d1023
Status: Downloaded newer image for mirrorgooglecontainers/kube-controller-manager:v1.15.1
docker.io/mirrorgooglecontainers/kube-controller-manager:v1.15.1
+ docker pull mirrorgooglecontainers/kube-scheduler:v1.15.1
v1.15.1: Pulling from mirrorgooglecontainers/kube-scheduler
6cf6a0b0da0d: Already exists
66ca8e0fb424: Pull complete
Digest: sha256:ffac8b6f6b9fe21f03c92ceb0855a7fb65599b8a7e7f8090182a02470a7d2ea6
Status: Downloaded newer image for mirrorgooglecontainers/kube-scheduler:v1.15.1
docker.io/mirrorgooglecontainers/kube-scheduler:v1.15.1
+ docker pull mirrorgooglecontainers/kube-proxy:v1.15.1
v1.15.1: Pulling from mirrorgooglecontainers/kube-proxy
6cf6a0b0da0d: Already exists
8e1ce322a1d9: Pull complete
3a8a38f10886: Pull complete
Digest: sha256:3d4e2f537c121bf6a824e564aaf406ead9466f04516a34f8089b4e4bb7abb33b
Status: Downloaded newer image for mirrorgooglecontainers/kube-proxy:v1.15.1
docker.io/mirrorgooglecontainers/kube-proxy:v1.15.1
+ docker pull mirrorgooglecontainers/pause:3.1
3.1: Pulling from mirrorgooglecontainers/pause
67ddbfb20a22: Pull complete
Digest: sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610
Status: Downloaded newer image for mirrorgooglecontainers/pause:3.1
docker.io/mirrorgooglecontainers/pause:3.1
+ docker pull mirrorgooglecontainers/etcd:3.3.10
3.3.10: Pulling from mirrorgooglecontainers/etcd
860b4e629066: Pull complete
3de3fe131c22: Pull complete
12ec62a49b1f: Pull complete
Digest: sha256:8a82adeb3d0770bfd37dd56765c64d082b6e7c6ad6a6c1fd961dc6e719ea4183
Status: Downloaded newer image for mirrorgooglecontainers/etcd:3.3.10
docker.io/mirrorgooglecontainers/etcd:3.3.10
+ docker pull mirrorgooglecontainers/coredns:1.3.1
Error response from daemon: pull access denied for mirrorgooglecontainers/coredns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
# the coredns pull fails here
[root@k8s-node1 ~]#
So pull the image that failed manually:
[root@k8s-node1 ~]# docker pull coredns/coredns:1.3.1
1.3.1: Pulling from coredns/coredns
e0daa8927b68: Pull complete
3928e47de029: Pull complete
Digest: sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4
Status: Downloaded newer image for coredns/coredns:1.3.1
docker.io/coredns/coredns:1.3.1
[root@k8s-node1 ~]#
Now re-tag the images so they carry the k8s.gcr.io names:
[root@k8s-node1 ~]# docker images |grep mirrorgooglecontainers |awk '{print "docker tag ",$1":"$2,$1":"$2}' |sed -e 's#mirrorgooglecontainers#k8s.gcr.io#2' |sh -x
+ docker tag mirrorgooglecontainers/kube-scheduler:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1
+ docker tag mirrorgooglecontainers/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
+ docker tag mirrorgooglecontainers/kube-apiserver:v1.15.1 k8s.gcr.io/kube-apiserver:v1.15.1
+ docker tag mirrorgooglecontainers/kube-controller-manager:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1
+ docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
+ docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
Re-tag coredns manually:
[root@k8s-node1 ~]# docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
[root@k8s-node1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
mirrorgooglecontainers/kube-apiserver v1.15.1 68c3eb07bfc3 2 weeks ago 207MB
k8s.gcr.io/kube-apiserver v1.15.1 68c3eb07bfc3 2 weeks ago 207MB
mirrorgooglecontainers/kube-controller-manager v1.15.1 d75082f1d121 2 weeks ago 159MB
k8s.gcr.io/kube-controller-manager v1.15.1 d75082f1d121 2 weeks ago 159MB
mirrorgooglecontainers/kube-scheduler v1.15.1 b0b3c4c404da 2 weeks ago 81.1MB
k8s.gcr.io/kube-scheduler v1.15.1 b0b3c4c404da 2 weeks ago 81.1MB
mirrorgooglecontainers/kube-proxy v1.15.1 89a062da739d 2 weeks ago 82.4MB
k8s.gcr.io/kube-proxy v1.15.1 89a062da739d 2 weeks ago 82.4MB
coredns/coredns 1.3.1 eb516548c180 6 months ago 40.3MB
k8s.gcr.io/coredns 1.3.1 eb516548c180 6 months ago 40.3MB
hello-world latest fce289e99eb9 7 months ago 1.84kB
mirrorgooglecontainers/etcd 3.3.10 2c4adeb21b4f 8 months ago 258MB
k8s.gcr.io/etcd 3.3.10 2c4adeb21b4f 8 months ago 258MB
mirrorgooglecontainers/pause 3.1 da86e6ba6ca1 19 months ago 742kB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 19 months ago 742kB
[root@k8s-node1 ~]#
Many images are now duplicated, so remove the ones we no longer need:
[root@k8s-node1 ~]# docker images | grep mirrorgooglecontainers | awk '{print "docker rmi " $1":"$2}' | sh -x
+ docker rmi mirrorgooglecontainers/kube-scheduler:v1.15.1
Untagged: mirrorgooglecontainers/kube-scheduler:v1.15.1
Untagged: mirrorgooglecontainers/kube-scheduler@sha256:ffac8b6f6b9fe21f03c92ceb0855a7fb65599b8a7e7f8090182a02470a7d2ea6
+ docker rmi mirrorgooglecontainers/kube-proxy:v1.15.1
Untagged: mirrorgooglecontainers/kube-proxy:v1.15.1
Untagged: mirrorgooglecontainers/kube-proxy@sha256:3d4e2f537c121bf6a824e564aaf406ead9466f04516a34f8089b4e4bb7abb33b
+ docker rmi mirrorgooglecontainers/kube-apiserver:v1.15.1
Untagged: mirrorgooglecontainers/kube-apiserver:v1.15.1
Untagged: mirrorgooglecontainers/kube-apiserver@sha256:db15b7caa01ebea2510605f391fabaed06674438315a7b6313e18e93affa15bb
+ docker rmi mirrorgooglecontainers/kube-controller-manager:v1.15.1
Untagged: mirrorgooglecontainers/kube-controller-manager:v1.15.1
Untagged: mirrorgooglecontainers/kube-controller-manager@sha256:271de9f26d55628cc58e048308bef063273fe68352db70dca7bc38df509d1023
+ docker rmi mirrorgooglecontainers/etcd:3.3.10
Untagged: mirrorgooglecontainers/etcd:3.3.10
Untagged: mirrorgooglecontainers/etcd@sha256:8a82adeb3d0770bfd37dd56765c64d082b6e7c6ad6a6c1fd961dc6e719ea4183
+ docker rmi mirrorgooglecontainers/pause:3.1
Untagged: mirrorgooglecontainers/pause:3.1
Untagged: mirrorgooglecontainers/pause@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610
Manually delete the remaining unused image:
[root@k8s-node1 ~]# docker rmi coredns/coredns:1.3.1
Untagged: coredns/coredns:1.3.1
Untagged: coredns/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4
Finally, check the images again.
The prepared images:
[root@k8s-node1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.15.1 89a062da739d 2 weeks ago 82.4MB
k8s.gcr.io/kube-scheduler v1.15.1 b0b3c4c404da 2 weeks ago 81.1MB
k8s.gcr.io/kube-apiserver v1.15.1 68c3eb07bfc3 2 weeks ago 207MB
k8s.gcr.io/kube-controller-manager v1.15.1 d75082f1d121 2 weeks ago 159MB
k8s.gcr.io/coredns 1.3.1 eb516548c180 6 months ago 40.3MB
hello-world latest fce289e99eb9 7 months ago 1.84kB
k8s.gcr.io/etcd 3.3.10 2c4adeb21b4f 8 months ago 258MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 19 months ago 742kB
[root@k8s-node1 ~]#
·------------------------------------------------------------------------------------------------------------------------------
Option 2: change the Docker repository address (imageRepository) in the kubeadm config file. Note: this only works with versions above 1.11(?).
There is no config file at first; generate one with the following command:
[root@k8s-node1 ~]# kubeadm config print init-defaults > kubeadm.conf
Change imageRepository: k8s.gcr.io in the config file to your own or a private Docker registry, for example:
Note: the xxxxxx here is your Aliyun registry-accelerator ID.
If you can't find it, see: https://blog.csdn.net/qq_28513801/article/details/93381492
[root@k8s-node1 ~]# sed -i '/^imageRepository/ s/k8s\.gcr\.io/xxxxxxx\.mirror\.aliyuncs\.com\/google_containers/g' kubeadm.conf
imageRepository: xxxxxxx.mirror.aliyuncs.com/google_containers
Then pull the images with:
[root@k8s-node1 ~]# kubeadm config images list --config kubeadm.conf
[root@k8s-node1 ~]# kubeadm config images pull --config kubeadm.conf
[root@k8s-node1 ~]# docker images    # check the images
5 Clone the virtual machine
As mentioned earlier: once Kubernetes is installed on k8s-node1, it is time to clone the VM.
5.1 Before cloning, shut the VM down (choose "normal shutdown"). Right-click the VM and click Clone:
You can also start the clone with the shortcut Ctrl+O.
Choose "Full clone":
Repeat the same steps to clone one more VM. The final result looks like this:
6 Add a network adapter and network (important)
After cloning, if you start the three VMs right away you will find that they all have the same IP address (NIC enp0s3):
That is because cloning the VM also cloned the NIC configuration, so the three nodes cannot reach each other.
Therefore, do not start the VMs immediately after cloning; first add a second network adapter to each VM for node-to-node communication.
As shown below, set the adapter's attachment mode to "Host-Only":
Do the same for the other two VMs, then start node1, node2 and node3 in turn.
Then set the other two hostnames to k8s-node2 and k8s-node3.
Once the adapters are added, start the three VMs; we need to do a bit of simple setup.
6.1 Continue setting up port forwarding
For convenience, set up port forwarding for the other VMs the same way as for the first one.
node2: use port 3333
node3: use port 4444
We are connected:
Now check each node's IP address:
k8s-node1:
[root@k8s-node1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00inet 127.0.0.1/8 scope host lovalid_lft forever preferred_lft foreverinet6 ::1/128 scope host valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 08:00:27:87:4d:7b brd ff:ff:ff:ff:ff:ffinet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3valid_lft 86026sec preferred_lft 86026secinet6 fe80::a00:27ff:fe87:4d7b/64 scope link valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 08:00:27:bb:7d:bb brd ff:ff:ff:ff:ff:ffinet 192.168.10.9/24 brd 192.168.10.255 scope global dynamic enp0s8valid_lft 826sec preferred_lft 826secinet6 fe80::a00:27ff:febb:7dbb/64 scope link valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN link/ether 02:42:4d:d1:78:98 brd ff:ff:ff:ff:ff:ffinet 172.17.0.1/16 brd 172.17.255.255 scope global docker0valid_lft forever preferred_lft forever
[root@k8s-node1 ~]#

k8s-node2:
[root@k8s-node2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00inet 127.0.0.1/8 scope host lovalid_lft forever preferred_lft foreverinet6 ::1/128 scope host valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 08:00:27:90:ef:0f brd ff:ff:ff:ff:ff:ffinet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3valid_lft 86053sec preferred_lft 86053secinet6 fe80::a00:27ff:fe90:ef0f/64 scope link valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 08:00:27:80:99:e9 brd ff:ff:ff:ff:ff:ffinet 192.168.10.10/24 brd 192.168.10.255 scope global dynamic enp0s8valid_lft 853sec preferred_lft 853secinet6 fe80::a00:27ff:fe80:99e9/64 scope link valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN link/ether 02:42:74:cf:5e:b2 brd ff:ff:ff:ff:ff:ffinet 172.17.0.1/16 brd 172.17.255.255 scope global docker0valid_lft forever preferred_lft forever
[root@k8s-node2 ~]#

k8s-node3:
[root@k8s-node3 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00inet 127.0.0.1/8 scope host lovalid_lft forever preferred_lft foreverinet6 ::1/128 scope host valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 08:00:27:dd:a5:a9 brd ff:ff:ff:ff:ff:ffinet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3valid_lft 86078sec preferred_lft 86078secinet6 fe80::a00:27ff:fedd:a5a9/64 scope link valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 08:00:27:fc:6b:97 brd ff:ff:ff:ff:ff:ffinet 192.168.10.11/24 brd 192.168.10.255 scope global dynamic enp0s8valid_lft 878sec preferred_lft 878secinet6 fe80::a00:27ff:fefc:6b97/64 scope link valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN link/ether 02:42:25:5a:24:2c brd ff:ff:ff:ff:ff:ffinet 172.17.0.1/16 brd 172.17.255.255 scope global docker0valid_lft forever preferred_lft forever
[root@k8s-node3 ~]#
6.2 Configure passwordless SSH (do this on all three nodes)
To make things easier, we set up passwordless SSH login.
[root@k8s-node1 ~]# ssh-keygen    # press Enter four times in a row
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
d5:0d:91:66:3c:1d:6d:25:c7:3e:a6:ff:49:75:41:bd root@k8s-node1
The key's randomart image is:
+--[ RSA 2048]----+
| .o+o==|
| .*ooo=|
| .o...+.|
| . Eo|
| S o +|
| . o|
| .. |
| ...|
| .o|
+-----------------+
[root@k8s-node1 ~]#
[root@k8s-node1 ~]# ssh-copy-id k8s-node1
The authenticity of host 'k8s-node1 (192.168.10.9)' can't be established.
ECDSA key fingerprint is 80:4b:68:67:55:3a:b6:57:64:0a:98:e9:0e:df:c0:21.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node1's password:
Number of key(s) added: 1
Now try logging into the machine, with:   "ssh 'k8s-node1'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-node1 ~]# ssh-copy-id k8s-node2
The authenticity of host 'k8s-node2 (192.168.10.10)' can't be established.
ECDSA key fingerprint is 80:4b:68:67:55:3a:b6:57:64:0a:98:e9:0e:df:c0:21.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node2's password:
Number of key(s) added: 1
Now try logging into the machine, with:   "ssh 'k8s-node2'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-node1 ~]# ssh-copy-id k8s-node3
The authenticity of host 'k8s-node3 (192.168.10.11)' can't be established.
ECDSA key fingerprint is 80:4b:68:67:55:3a:b6:57:64:0a:98:e9:0e:df:c0:21.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node3's password:
Number of key(s) added: 1
Now try logging into the machine, with:   "ssh 'k8s-node3'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-node1 ~]#
Then repeat the same steps on k8s-node2 and k8s-node3.
6.3 Set up /etc/hosts mappings
[root@k8s-node3 ~]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.9 k8s-node1
192.168.10.10 k8s-node2
192.168.10.11 k8s-node3
Since passwordless SSH is configured, we can push the file to the other nodes directly:
[root@k8s-node1 ~]# scp /etc/hosts k8s-node2:/etc/hosts
hosts 100% 229 0.2KB/s 00:00
[root@k8s-node1 ~]# scp /etc/hosts k8s-node3:/etc/hosts
hosts
7 Create the cluster
7.1 About kubeadm
With all the preparation done, we can now actually create the cluster. We use the official kubeadm tool, which creates a K8S cluster quickly and easily. For details about kubeadm see the official documentation: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/.
Create the cluster. Before initializing, it is worth understanding the kubeadm init parameters:
--apiserver-advertise-address string
    The IP address the API Server will advertise it is listening on. If set to `0.0.0.0`, the default network interface's address is used.
--apiserver-bind-port int32    Default: 6443
    The port the API Server binds to.
--apiserver-cert-extra-sans stringSlice
    Optional extra Subject Alternative Names (SANs) for the API Server's serving certificate. Can be IP addresses or DNS names.
--cert-dir string    Default: "/etc/kubernetes/pki"
    The path where certificates are stored.
--config string
    Path to a kubeadm configuration file. Warning: the configuration file feature is experimental.
--cri-socket string    Default: "/var/run/dockershim.sock"
    The CRI socket to connect to.
--dry-run
    Do not apply any changes; only print what would be done.
--feature-gates string
    A set of key=value pairs that toggle various features. Options:
    Auditing=true|false (currently ALPHA - default=false)
    CoreDNS=true|false (default=true)
    DynamicKubeletConfig=true|false (currently BETA - default=false)
-h, --help
    Help for the init command.
--ignore-preflight-errors stringSlice
    A list of checks whose errors are shown as warnings instead of errors, e.g. 'IsPrivilegedUser,Swap'. 'all' ignores errors from all checks.
--kubernetes-version string    Default: "stable-1"
    Choose a specific Kubernetes version for the control plane.
--node-name string
    Specify the node name.
--pod-network-cidr string
    The IP address range the pod network may use. If set, the control plane automatically allocates CIDRs to every node.
--service-cidr string    Default: "10.96.0.0/12"
    Use an alternative IP address range for service virtual IPs.
--service-dns-domain string    Default: "cluster.local"
    Use an alternative domain for services, e.g. "myorg.internal".
--skip-token-print
    Do not print the default token generated by `kubeadm init`.
--token string
    The token used to establish bidirectional trust between nodes and the master. Format: [a-z0-9]{6}\.[a-z0-9]{16} - example: abcdef.0123456789abcdef
--token-ttl duration    Default: 24h0m0s
    The duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token never expires.
7.2 Initialize on the master
On the master node (k8s-node1), run:
[root@k8s-node1 ~]# kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.15.1 --apiserver-advertise-address=192.168.10.9
What the options mean:
1. --pod-network-cidr=192.168.0.0/16 means the cluster will use the Calico network; Calico's subnet range has to be specified up front.
2. --kubernetes-version=v1.15.1 pins the K8S version. It must match the Docker images imported earlier, otherwise kubeadm goes to Google to download the latest K8S images again.
3. --apiserver-advertise-address is the NIC IP to bind to. It must be the enp0s8 address mentioned earlier, otherwise the enp0s3 NIC is used by default.
4. If kubeadm init fails or is interrupted, run kubeadm reset before running it again.
If you see the following error:
[root@k8s-node1 ~]# kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.15.1 --apiserver-advertise-address=192.168.10.9
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@k8s-node1 ~]#
How to fix it:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
As the message asks, set the value to 1.
Run:
[root@k8s-node1 ~]# echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
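The echo above is only in effect until reboot. A persistent variant is to drop the settings into a sysctl file (a sketch; the file name k8s.conf is an assumption, and the br_netfilter module must be loaded for these keys to exist):

[root@k8s-node1 ~]# modprobe br_netfilter
[root@k8s-node1 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
[root@k8s-node1 ~]# sysctl --system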
Then run the init again:
[root@k8s-node1 ~]# kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.15.1 --apiserver-advertise-address=192.168.10.9
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@k8s-node1 ~]# echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
[root@k8s-node1 ~]# kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.15.1 --apiserver-advertise-address=192.168.10.9
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.9]
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-node1 localhost] and IPs [192.168.10.9 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-node1 localhost] and IPs [192.168.10.9 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.003530 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ts9i67.6sn3ylpxri4qimgr
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.9:6443 --token ts9i67.6sn3ylpxri4qimgr \
    --discovery-token-ca-cert-hash sha256:32de69c3d3241cab71ef58afd09b9bf16a551b6e4b498d5134b1baae498ac8c0
[root@k8s-node1 ~]#
You can see the initialization succeeded. For later use, save the join command:
kubeadm join 192.168.10.9:6443 --token ts9i67.6sn3ylpxri4qimgr \
    --discovery-token-ca-cert-hash sha256:32de69c3d3241cab71ef58afd09b9bf16a551b6e4b498d5134b1baae498ac8c0
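The bootstrap token in this command is only valid for 24 hours by default. If it expires or you lose the command, a fresh join command can be generated on the master at any time:

[root@k8s-node1 ~]# kubeadm token create --print-join-command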
As you can see, the output says the cluster initialized successfully, and we need to run the following commands:
[root@k8s-node1 ~]# mkdir -p $HOME/.kube
[root@k8s-node1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-node1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-node1 ~]#
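Since we are working as root anyway, a common alternative to copying admin.conf (not shown in the kubeadm output above) is simply pointing KUBECONFIG at it for the current shell:

[root@k8s-node1 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf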
In addition, the output says we still need to deploy a pod network, and that the other nodes should run kubeadm join … to join the cluster.
7.3 Create the network
If you do not create a network, the coredns (kube-dns) pods stay in Pending when you check the pod status, and the cluster is unusable:
[root@k8s-node1 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-8nftr 0/1 Pending 0 3m28s # stuck in Pending
coredns-5c98db65d4-n2zbj 0/1 Pending 0 3m28s # stuck in Pending
etcd-k8s-node1 1/1 Running 0 2m44s
kube-apiserver-k8s-node1 1/1 Running 0 2m51s
kube-controller-manager-k8s-node1 1/1 Running 0 2m41s
kube-proxy-cdvhk 1/1 Running 0 3m28s
kube-scheduler-k8s-node1 1/1 Running 0 2m35s
See the official documentation and pick the network that suits your needs. Here we use Calico (that choice was already made when the cluster was initialized with the Calico pod CIDR).
Per the official documentation, run the following on the master node:
[root@k8s-node1 ~]# kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
After it succeeds:
[root@k8s-node1 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-etcd-2hmhv 1/1 Running 0 60s
calico-kube-controllers-6b6f4f7c64-c7v8p 1/1 Running 0 115s
calico-node-fzzmh 2/2 Running 2 115s
coredns-5c98db65d4-8nftr 1/1 Running 0 6m33s
coredns-5c98db65d4-n2zbj 1/1 Running 0 6m33s
etcd-k8s-node1 1/1 Running 0 5m49s
kube-apiserver-k8s-node1 1/1 Running 0 5m56s
kube-controller-manager-k8s-node1 1/1 Running 0 5m46s
kube-proxy-cdvhk 1/1 Running 0 6m33s
kube-scheduler-k8s-node1 1/1 Running 0 5m40s
[root@k8s-node1 ~]#
8 Cluster settings
Use the master as a worker node
By default, a K8S cluster does not schedule Pods onto the master, which wastes the master's resources. On the master (k8s-node1), run the following command to let it also act as a worker node. (With this trick you can build a single-node K8S cluster without minikube.)
[root@k8s-node1 ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/k8s-node1 untainted
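If you later decide the master should stop running ordinary Pods again, the taint can be put back (a sketch):

[root@k8s-node1 ~]# kubectl taint nodes k8s-node1 node-role.kubernetes.io/master=:NoSchedule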
8.1 Join the other nodes to the cluster
On the other two nodes, k8s-node2 and k8s-node3, run the kubeadm join command generated by the master to join the cluster.
(It is best to run the following command on each new node first:)
echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
Then join the node:
kubeadm join 192.168.10.9:6443 --token ts9i67.6sn3ylpxri4qimgr \
    --discovery-token-ca-cert-hash sha256:32de69c3d3241cab71ef58afd09b9bf16a551b6e4b498d5134b1baae498ac8c0
After a successful join, you will see:
[root@k8s-node2 ~]# echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
[root@k8s-node2 ~]# kubeadm join 192.168.10.9:6443 --token ts9i67.6sn3ylpxri4qimgr --discovery-token-ca-cert-hash sha256:32de69c3d3241cab71ef58afd09b9bf16a551b6e4b498d5134b1baae498ac8c0
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node2 ~]#
The k8s-node3 node:
[root@k8s-node3 ~]# echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
[root@k8s-node3 ~]# kubeadm join 192.168.10.9:6443 --token ts9i67.6sn3ylpxri4qimgr --discovery-token-ca-cert-hash sha256:32de69c3d3241cab71ef58afd09b9bf16a551b6e4b498d5134b1baae498ac8c0
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node3 ~]#
If you get other errors, check the logs. For example:
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...
For an error like this, check whether you forgot to disable swap.
[root@k8s-node1 ~]# swapoff -a
Or permanently edit the /etc/fstab file.
[root@k8s-node1 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem: 992 524 74 7 392 284
Swap: 0 0 0
After all the nodes have joined the cluster, wait a moment and run kubectl get nodes on the master; you should see:
[root@k8s-node1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready master 13m v1.15.1
k8s-node2 Ready <none> 2m2s v1.15.1
k8s-node3 Ready <none> 68s v1.15.1
[root@k8s-node1 ~]#
If a node shows NotReady, it is not ready yet and is probably still initializing; just wait until all nodes become Ready.
It is also worth checking all pod statuses with kubectl get pods -n kube-system:
[root@k8s-node1 ~]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-etcd-2hmhv 1/1 Running 0 8m49s 192.168.10.9 k8s-node1 <none> <none>
calico-kube-controllers-6b6f4f7c64-c7v8p 1/1 Running 0 9m44s 192.168.10.9 k8s-node1 <none> <none>
calico-node-fzzmh 2/2 Running 2 9m44s 192.168.10.9 k8s-node1 <none> <none>
calico-node-r9hh6 2/2 Running 0 2m59s 192.168.10.10 k8s-node2 <none> <none>
calico-node-rcqnp 2/2 Running 0 2m5s 192.168.10.11 k8s-node3 <none> <none>
coredns-5c98db65d4-8nftr 1/1 Running 0 14m 192.168.36.65 k8s-node1 <none> <none>
coredns-5c98db65d4-n2zbj 1/1 Running 0 14m 192.168.36.66 k8s-node1 <none> <none>
etcd-k8s-node1 1/1 Running 0 13m 192.168.10.9 k8s-node1 <none> <none>
kube-apiserver-k8s-node1 1/1 Running 0 13m 192.168.10.9 k8s-node1 <none> <none>
kube-controller-manager-k8s-node1 1/1 Running 0 13m 192.168.10.9 k8s-node1 <none> <none>
kube-proxy-8g5sw 1/1 Running 0 2m5s 192.168.10.11 k8s-node3 <none> <none>
kube-proxy-9z62p 1/1 Running 0 2m59s 192.168.10.10 k8s-node2 <none> <none>
kube-proxy-cdvhk 1/1 Running 0 14m 192.168.10.9 k8s-node1 <none> <none>
kube-scheduler-k8s-node1 1/1 Running 0 13m 192.168.10.9 k8s-node1 <none> <none>
[root@k8s-node1 ~]#
===========================================================================
That is the entire K8S deployment process.
Next, some basic service checks.
Node status
[root@k8s-node1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready master 17m v1.15.1
k8s-node2 Ready <none> 5m44s v1.15.1
k8s-node3 Ready <none> 4m50s v1.15.1
[root@k8s-node1 ~]#
Component status
[root@k8s-node1 ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
[root@k8s-node1 ~]#
Service accounts
[root@k8s-node1 ~]# kubectl get serviceaccount
NAME SECRETS AGE
default 1 18m
[root@k8s-node1 ~]#
Cluster info
[root@k8s-node1 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.10.9:6443
KubeDNS is running at https://192.168.10.9:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-node1 ~]#
Verify DNS
[root@k8s-node1 ~]# kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-6bf6db5c4f-8ldwk:/ ]$ nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
[ root@curl-6bf6db5c4f-8ldwk:/ ]$
9 Test that the cluster works properly
We create an nginx service to check that the cluster is usable.
Create and run a deployment
[root@k8s-node1 ~]# kubectl run nginx1 --replicas=2 --labels="run=load-balancer-example0" --image=nginx --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx1 created
[root@k8s-node1 ~]#
Expose the service as a NodePort
[root@k8s-node1 ~]# kubectl expose deployment nginx1 --type=NodePort --name=example-service
service/example-service exposed
[root@k8s-node1 ~]#
View the service details
[root@k8s-node1 ~]# kubectl describe service example-service
Name: example-service
Namespace: default
Labels: run=load-balancer-example0
Annotations: <none>
Selector: run=load-balancer-example0
Type: NodePort
IP: 10.105.173.123
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30102/TCP
Endpoints: 192.168.107.196:80,192.168.169.133:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Service status
[root@k8s-node1 ~]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
example-service NodePort 10.105.173.123 <none> 80:30102/TCP 70s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 30m
[root@k8s-node1 ~]#
View the pods
[root@k8s-node1 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
curl-6bf6db5c4f-8ldwk 1/1 Running 1 12m
nginx-5c464d5cf5-b7xlh 1/1 Running 0 3m44s
nginx-5c464d5cf5-klqfd 1/1 Running 0 3m34s
nginx1-7c5744bf79-lc6sz 1/1 Running 0 2m18s
nginx1-7c5744bf79-pt7q4 1/1 Running 0 2m18s
Access the service IP
[root@k8s-node1 ~]# curl 10.105.173.123:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>body {width: 35em;margin: 0 auto;font-family: Tahoma, Verdana, Arial, sans-serif;}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p><p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@k8s-node1 ~]#
Accessing an endpoint gives the same result as accessing the service IP. These IPs are only reachable from containers and nodes inside the Kubernetes cluster. Endpoints are mapped to the service: the service load-balances across its backend endpoints, and this is implemented via iptables (or ipvs).
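To see that mapping for yourself, you can list the Endpoints object that Kubernetes maintains for the service; its ENDPOINTS column should show the same pod addresses as the Endpoints line in the describe output above:

[root@k8s-node1 ~]# kubectl get endpoints example-service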
Accessing a node IP and NodePort gives the same result as accessing the cluster IP, and it works from outside the cluster.
[root@k8s-node1 ~]# curl 192.168.10.9:30102
[root@k8s-node1 ~]# curl 192.168.10.10:30102
[root@k8s-node1 ~]# curl 192.168.10.11:30102
Overview of the whole deployment flow:
① kubectl sends the deployment request to the API Server.
② The API Server notifies the Controller Manager to create a deployment resource.
③ The Scheduler performs scheduling and assigns the two replica Pods to node1 and node2.
④ The kubelet on node1 and node2 creates and runs the Pods on its own node.
To wrap up, here are batch import and export scripts for Docker images, implemented in Python:
myGithub_Docker_images_load.py
myGithub_Docker_images_save.py
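The two Python scripts themselves are not included in this post. As a rough shell equivalent (a sketch only; the images/ directory and the file-naming scheme are assumptions), batch export and import can be done like this:

# export every local image into its own tar file under ./images/
mkdir -p images
for img in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep -v '<none>'); do
    docker save "$img" -o "images/$(echo "$img" | tr '/:' '__').tar"
done

# import all of them on another node
for tar in images/*.tar; do
    docker load -i "$tar"
done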
The next post covers deploying the Web UI.
Here is the kubernetes-dashboard release page:
https://github.com/kubernetes/dashboard/releases
Here is the yaml file first:
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
A rough walkthrough of a manifest of this kind:
apiVersion: v1             # API version; the value must be one listed by kubectl api-versions
kind: Pod                  # the kind/type of resource to create
metadata:                  # metadata/attributes of the resource
  name: web04-pod          # resource name; must be unique within a namespace
  labels:                  # resource labels; see http://blog.csdn.net/liyingke112/article/details/77482384
    k8s-app: apache
    version: v1
    kubernetes.io/cluster-service: "true"
  annotations:             # list of custom annotations
    - name: String         # custom annotation name
spec:                      # specification of the resource content
  restartPolicy: Always    # keep the container running; the default K8S policy recreates an identical container as soon as this one exits
  nodeSelector:            # node selection; first label the host: kubectl label nodes kube-node1 zone=node1
    zone: node1
  containers:
  - name: web04-pod        # container name
    image: web:apache      # container image address
    imagePullPolicy: Never # Always / Never / IfNotPresent — whether to check and pull the image from the registry on each start:
                           #   Always: always check; Never: never check (regardless of whether it exists locally);
                           #   IfNotPresent: only pull if the image is not present locally
    command: ['sh']        # command to start the container; overrides the image's ENTRYPOINT
    args: ["$(str)"]       # arguments to the command; corresponds to CMD in the Dockerfile
    env:                   # environment variables inside the container
    - name: str            # variable name
      value: "/etc/run.sh" # variable value
    resources:             # resource management; see http://blog.csdn.net/liyingke112/article/details/77452630
      requests:            # minimum resources the container needs to run normally
        cpu: 0.1           # CPU (cores); either a float or integer+m, 0.1 = 100m, minimum 0.001 cores (1m)
        memory: 32Mi       # memory
      limits:              # resource limits
        cpu: 0.5
        memory: 32Mi
    ports:
    - containerPort: 80    # port the container exposes
      name: httpd          # port name
      protocol: TCP
    livenessProbe:         # health check of the container in the pod; see http://blog.csdn.net/liyingke112/article/details/77531584
      httpGet:             # check health via HTTP GET; a response in 200-399 means the container is healthy
        path: /            # URI path
        port: 80
        #host: 127.0.0.1   # host address
        scheme: HTTP
      initialDelaySeconds: 180  # how long after container start the first check runs
      timeoutSeconds: 5         # check timeout
      periodSeconds: 15         # interval between checks
      # alternatively:
      #exec:               # run a command; a non-zero exit code means the container is unhealthy
      #  command:
      #  - cat
      #  - /tmp/health
      # or:
      #tcpSocket:          # check health via a TCP socket
      #  port: number
    lifecycle:             # lifecycle management
      postStart:           # runs right after the container starts
        exec:
          command:
          - 'sh'
          - 'yum upgrade -y'
      preStop:             # runs before the container stops
        exec:
          command: ['service httpd stop']
    volumeMounts:          # see http://blog.csdn.net/liyingke112/article/details/76577520
    - name: volume         # name of the mounted volume; must match volumes[*].name
      mountPath: /data     # path inside the container
      readOnly: True
  volumes:                 # define a set of volumes
  - name: volume           # volume name
    #emptyDir: {}
    hostPath:
      path: /opt           # hostPath volume type: /opt on the host; many other volume types are supported
When writing yaml files, errors often come from the strict formatting requirements, so it helps to check the file format before applying it.
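One option that needs no extra tooling is to let kubectl parse and validate the file without applying it (a client-side dry run; the file name kubernetes-dashboard.yaml is just an example):

[root@k8s-node1 ~]# kubectl apply --dry-run -f kubernetes-dashboard.yaml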