Kubernetes (k8s): working with common resources and everyday Pod operations
1. How does k8s run containers?
A: k8s runs containers by defining a Pod resource and running the containers inside that Pod. The Pod is the smallest unit of resource in k8s.
2. How do you create a Pod resource?
A: In k8s, every kind of resource can be created from a YAML configuration file, and a Pod is no exception.
3. Let's create a Pod. First create a k8s directory, then a pod directory inside it, then create the file with vim nginx_pod.yaml.
[root@k8s-master ~]# mkdir k8s
[root@k8s-master ~]# cd k8s/
[root@k8s-master k8s]# ls
[root@k8s-master k8s]# mkdir pod
[root@k8s-master k8s]# ls
pod
[root@k8s-master k8s]# cd pod/
[root@k8s-master pod]# vim nginx_pod.yaml
[root@k8s-master pod]#
The contents of nginx_pod.yaml are as follows:
# Declare the API version.
apiVersion: v1
# kind is the resource type; here the resource is a Pod.
kind: Pod
# The resource's name lives under its metadata.
metadata:
  # The name attribute is nginx, i.e. the Pod is called nginx.
  name: nginx
  # Attach a label to the Pod, app: web; labels serve a purpose later on.
  labels:
    app: web
# spec is the specification; it defines a container.
spec:
  # Define one container; several can be declared.
  containers:
  # The container is named nginx.
  - name: nginx
    # Which image to use: an official public one or a private one.
    image: nginx:1.13
    # ports defines the container's ports.
    ports:
    # The container port is 80; if the container has several ports, just add another line.
    - containerPort: 80

In k8s, once a resource has been declared in a configuration file like this, it can be created with create -f pointing at the location of nginx_pod.yaml.
[root@k8s-master pod]# kubectl create -f nginx_pod.yaml
Error from server (ServerTimeout): error when creating "nginx_pod.yaml": No API token found for service account "default", retry after the token is automatically created and added to the service account
[root@k8s-master pod]#

It errored. The api-server configuration file has to be modified to disable ServiceAccount.
[root@k8s-master pod]# vim /etc/kubernetes/apiserver

Simply disable ServiceAccount in this file (it is listed on the KUBE_ADMISSION_CONTROL line).
Since the api-server configuration file changed, api-server now needs a restart.
[root@k8s-master pod]# systemctl restart kube-apiserver.service
[root@k8s-master pod]#

After the restart, run the create command again.
[root@k8s-master pod]# kubectl create -f nginx_pod.yaml
pod "nginx" created
[root@k8s-master pod]#

Now check which Pods have been created; the get command lists resources, as shown below:
[root@k8s-master pod]# kubectl get pod
NAME      READY     STATUS              RESTARTS   AGE
nginx     0/1       ContainerCreating   0          1m
[root@k8s-master pod]# kubectl get pod nginx
NAME      READY     STATUS              RESTARTS   AGE
nginx     0/1       ContainerCreating   0          1m
[root@k8s-master pod]#

Check the component status:
[root@k8s-master pod]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@k8s-master pod]#

The command to list nodes, as shown below:
[root@k8s-master pod]# kubectl get node
NAME         STATUS    AGE
k8s-master   Ready     22h
k8s-node2    Ready     22h
k8s-node3    Ready     21h
[root@k8s-master pod]# kubectl get nodes
NAME         STATUS    AGE
k8s-master   Ready     22h
k8s-node2    Ready     22h
k8s-node3    Ready     21h
[root@k8s-master pod]#

Watch the Pod closely: it stays in ContainerCreating and never becomes 1/1 Ready.
[root@k8s-master pod]# kubectl get pod nginx
NAME      READY     STATUS              RESTARTS   AGE
nginx     0/1       ContainerCreating   0          4m
[root@k8s-master pod]#

Use kubectl describe pod nginx to see exactly where it is stuck, as shown below:
[root@k8s-master pod]# kubectl describe pod nginx
Name:         nginx
Namespace:    default
Node:         k8s-node3/192.168.110.135
Start Time:   Fri, 05 Jun 2020 21:17:18 +0800
Labels:       app=web
Status:       Pending
IP:
Controllers:  <none>
Containers:
  nginx:
    Container ID:
    Image:        nginx:1.13
    Image ID:
    Port:         80/TCP
    State:        Waiting
      Reason:     ContainerCreating
    Ready:        False
    Restart Count: 0
    Volume Mounts: <none>
    Environment Variables: <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
No volumes.
QoS Class:    BestEffort
Tolerations:  <none>
Events:
  FirstSeen  LastSeen  Count  From                  SubObjectPath  Type     Reason      Message
  ---------  --------  -----  ----                  -------------  -------- ------      -------
  7m         7m        1      {default-scheduler }                 Normal   Scheduled   Successfully assigned nginx to k8s-node3
  6m         1m        6      {kubelet k8s-node3}                  Warning  FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request. details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"
  6m         5s        25     {kubelet k8s-node3}                  Warning  FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""
[root@k8s-master pod]#

You can see the scheduler has assigned the Pod to k8s-node3.
You can also check which node the Pod was scheduled to with kubectl get pod nginx -o wide.
[root@k8s-master pod]# kubectl get pod nginx -o wide
NAME      READY     STATUS              RESTARTS   AGE       IP        NODE
nginx     0/1       ContainerCreating   0          10m       <none>    k8s-node3
[root@k8s-master pod]#

The failure happens while pulling the image, which comes from the address registry.access.redhat.com/rhel7/pod-infrastructure:latest.
The pull is happening on k8s-node3. Running docker pull for that image directly on k8s-node3 fails too, complaining that a file is missing: open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory.
[root@k8s-node3 ~]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
Trying to pull repository registry.access.redhat.com/rhel7/pod-infrastructure ...
open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory
[root@k8s-node3 ~]#

But the certificate file does exist, so why can it not be opened? Because it is a symbolic link, and the link's target does not exist, so opening it fails.
[root@k8s-node3 ~]# ls /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt
/etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt
[root@k8s-node3 ~]#

Fixing the certificate would fix the pull, but there is actually no need to. Think about it: why does starting a Pod resource pull this particular image from this particular address rather than from somewhere else? Because a configuration file says so.
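The dangling-symlink behaviour is easy to reproduce outside Docker. A minimal sketch in a throwaway directory (the file names below are made up for illustration; this is not the real certificate path):

```shell
# Work in a scratch directory.
tmpdir=$(mktemp -d)

# Create a symlink whose target does not exist: a dangling symlink.
ln -s "$tmpdir/missing-target.crt" "$tmpdir/redhat-ca.crt"

# ls stats the link itself, so the file appears to exist.
ls "$tmpdir/redhat-ca.crt"

# Opening the file follows the link to a target that is gone, which is
# exactly Docker's "no such file or directory" complaint.
cat "$tmpdir/redhat-ca.crt" 2>/dev/null || echo "open failed"

# -L is true for a symlink; -e resolves it and is false when it dangles.
if [ -L "$tmpdir/redhat-ca.crt" ] && [ ! -e "$tmpdir/redhat-ca.crt" ]; then
    echo "dangling symlink"
fi

rm -r "$tmpdir"
```

This is why the certificate "exists" when listed yet cannot be opened: the name is there, the target is not.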
[root@k8s-node3 ~]# vim /etc/kubernetes/kubelet

The image address defined in this configuration file is registry.access.redhat.com/rhel7/pod-infrastructure:latest. Because of the certificate error it cannot be pulled from there, but it can be pulled from somewhere else. Use docker search to look the image up; this searches the official Docker Hub.
Copy the image address docker.io/tianyebj/pod-infrastructure into the /etc/kubernetes/kubelet configuration file.
Since the configuration file changed, restart kubelet so the change takes effect.
[root@k8s-node3 ~]# systemctl restart kubelet.service
[root@k8s-node3 ~]#

After the restart, go back to the master node and look at the Pod's description again to see whether it has retried.
[root@k8s-master pod]# kubectl describe pod nginx

Keep watching with the command above; on k8s-node3 you can also look at Docker's temporary directory to see whether a retry is in progress.
[root@k8s-node3 ~]# ls /var/lib/docker/tmp/
GetImageBlob232005897  GetImageBlob649330130  GetImageBlob688223444
[root@k8s-node3 ~]# ls /var/lib/docker/tmp/
GetImageBlob232005897  GetImageBlob649330130  GetImageBlob688223444
[root@k8s-node3 ~]# ls /var/lib/docker/tmp/
GetImageBlob232005897  GetImageBlob649330130  GetImageBlob688223444
[root@k8s-node3 ~]# ll /var/lib/docker/tmp/
total 16324
-rw-------. 1 root root 9750959 Jun  5 21:49 GetImageBlob649330130
-rw-------. 1 root root     201 Jun  5 21:48 GetImageBlob688223444
[root@k8s-node3 ~]#

/var/lib/docker/tmp/ is Docker's temporary download directory. You can see the download has timed out; even with the new image address it still times out.
How do we solve Docker's I/O timeouts? Anyone familiar with Docker knows that in China you can use a Docker registry mirror to speed up image pulls. The method is as follows:
My Docker version is 1.13.1, and its mirror configuration differs from the newer 18.09/18.06 releases.
[root@k8s-node3 ~]# vim /etc/sysconfig/docker

The change is as follows:
# Trust the private registry and use a registry mirror
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=192.168.110.133:5000'
Then restart Docker, as shown below:
[root@k8s-node3 ~]# systemctl restart docker
[root@k8s-node3 ~]#

After Docker restarts, the master node will retry after a while. Watch with kubectl describe pod nginx on the master and with ll -h /var/lib/docker/tmp/ on k8s-node3.
In my case the download succeeded; if it does not for you, you can also upload the image archives to the server. kubectl get pod nginx then shows that my Nginx is up and running.
Use Docker's load command to import the required images:
[root@k8s-node3 ~]# docker load -i pod-infrastructure-latest.tar.gz
[root@k8s-node3 ~]# docker load -i docker_nginx1.13.tar.gz

If the earlier pull had not finished and you have just uploaded the images to the server, restart Docker, then on the master run kubectl describe pod nginx; you can see the images are now recognized. kubectl get pod nginx -o wide shows the container running. With that, k8s-node3 can start this container.
But now delete this Pod and create it again.
[root@k8s-master pod]# kubectl delete pod nginx
pod "nginx" deleted
[root@k8s-master pod]# kubectl get pod nginx -o wide
Error from server (NotFound): pods "nginx" not found
[root@k8s-master pod]#

This time the Pod gets scheduled to k8s-node2.
[root@k8s-master pod]# kubectl create -f nginx_pod.yaml
pod "nginx" created
[root@k8s-master pod]# kubectl get pod nginx -o wide
NAME      READY     STATUS              RESTARTS   AGE       IP        NODE
nginx     0/1       ContainerCreating   0          20s       <none>    k8s-node2
[root@k8s-master pod]# kubectl get pod nginx -o wide
NAME      READY     STATUS              RESTARTS   AGE       IP        NODE
nginx     0/1       ContainerCreating   0          37s       <none>    k8s-node2
[root@k8s-master pod]#

So k8s-node2's image address has to be fixed as well, because it will still pull from the Red Hat registry, and any image a node does not already have locally gets pulled all over again. For us that means starting a container takes a long time, and if the network is unstable the container on that node may never start at all. With many nodes this becomes a serious headache. When there are many nodes, you should run a private registry: images that already exist can then be pulled from your own registry, saving both time and network bandwidth.
[root@k8s-node2 ~]# vim /etc/kubernetes/kubelet
The file's contents are as follows:
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
# Change the listen address from 127.0.0.1 to 192.168.110.134
KUBELET_ADDRESS="--address=192.168.110.134"

# The port for the info server to serve on
# kubelet listens on port 10250
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
# Change the hostname override from 127.0.0.1 to k8s-node2
KUBELET_HOSTNAME="--hostname-override=k8s-node2"

# location of the api-server
# The master node's api-server address and port
KUBELET_API_SERVER="--api-servers=http://192.168.110.133:8080"

# pod infrastructure container
# KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=docker.io/tianyebj/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

Let the nodes pull images from the internal registry first; if an image is not there, pull it once from the internet, push it into the private registry, and every later node pulls from the private registry, saving a great deal of time and bandwidth. So the fix is to run our own private registry.
# The definitive fix: run your own private registry. To save on hardware, use the official registry image; others work too.
docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry registry

First search for the image; this is the official one.
[root@k8s-master pod]# docker search registry
INDEX       NAME                                           DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
docker.io   docker.io/registry                             The Docker Registry 2.0 implementation for...   2980    [OK]
docker.io   docker.io/distribution/registry                WARNING: NOT the registry official image!!...   57                 [OK]
docker.io   docker.io/stefanscherer/registry-windows       Containerized docker registry for Windows ...   31
docker.io   docker.io/budry/registry-arm                   Docker registry build for Raspberry PI 2 a...   18
docker.io   docker.io/deis/registry                        Docker image registry for the Deis open so...   12
docker.io   docker.io/anoxis/registry-cli                  You can list and delete tags from your pri...   9                  [OK]
docker.io   docker.io/jc21/registry-ui                     A nice web interface for managing your Doc...   8
docker.io   docker.io/vmware/registry                                                                      6
docker.io   docker.io/allingeek/registry                   A specialization of registry:2 configured ...   4                  [OK]
docker.io   docker.io/pallet/registry-swift                Add swift storage support to the official ...   4                  [OK]
docker.io   docker.io/arm32v6/registry                     The Docker Registry 2.0 implementation for...   3
docker.io   docker.io/goharbor/registry-photon                                                             2
docker.io   docker.io/concourse/registry-image-resource                                                    1
docker.io   docker.io/conjurinc/registry-oauth-server      Docker registry authn/authz server backed ...   1
docker.io   docker.io/ibmcom/registry                      Docker Image for IBM Cloud private-CE (Com...   1
docker.io   docker.io/metadata/registry                    Metadata Registry is a tool which helps yo...   1                  [OK]
docker.io   docker.io/webhippie/registry                   Docker images for Registry                      1                  [OK]
docker.io   docker.io/convox/registry                                                                      0
docker.io   docker.io/deepsecurity/registryviews           Deep Security Smart Check                       0
docker.io   docker.io/dwpdigital/registry-image-resource   Concourse resource type                         0
docker.io   docker.io/gisjedi/registry-proxy               Reverse proxy of registry mirror image gis...   0
docker.io   docker.io/kontena/registry                     Kontena Registry                                0
docker.io   docker.io/lorieri/registry-ceph                Ceph Rados Gateway (and any other S3 compa...   0
docker.io   docker.io/pivnet/registry-gcloud-image                                                         0
docker.io   docker.io/upmcenterprises/registry-creds                                                       0
[root@k8s-master pod]#

You can pull this image directly, or load it from an uploaded archive.
[root@k8s-master pod]# docker pull docker.io/registry
Using default tag: latest
Trying to pull repository docker.io/library/registry ...
latest: Pulling from docker.io/library/registry
486039affc0a: Pull complete
ba51a3b098e6: Pull complete
8bb4c43d6c8e: Pull complete
6f5f453e5f2d: Pull complete
42bc10b72f42: Pull complete
Digest: sha256:7d081088e4bfd632a88e3f3bcd9e007ef44a796fddfe3261407a3f9f04abe1e7
Status: Downloaded newer image for docker.io/registry:latest
[root@k8s-master pod]#

Next, start our private registry.
[root@k8s-master pod]# docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry registry
a27987d97039c8596ad2a2150cee9e3fbe7580c8131e9f258aea8a922c22a237
[root@k8s-master pod]#

Our private registry is up; verify with docker ps.
[root@k8s-master pod]# docker ps
CONTAINER ID   IMAGE      COMMAND                  CREATED          STATUS          PORTS                    NAMES
a27987d97039   registry   "/entrypoint.sh /e..."   39 seconds ago   Up 37 seconds   0.0.0.0:5000->5000/tcp   registry
6d459781a3e5   busybox    "sh"                     10 hours ago     Up 10 hours                              gracious_nightingale
[root@k8s-master pod]#

Now try pushing one of our images to this private registry, as shown below:
[root@k8s-node3 ~]# docker images
REPOSITORY                              TAG       IMAGE ID       CREATED       SIZE
docker.io/busybox                       latest    1c35c4412082   2 days ago    1.22 MB
docker.io/nginx                         1.13      ae513a47849c   2 years ago   109 MB
docker.io/tianyebj/pod-infrastructure   latest    34d3450d733b   3 years ago   205 MB
[root@k8s-node3 ~]# docker tag docker.io/tianyebj/pod-infrastructure:latest 192.168.110.133:5000/pod-infrastructure:latest
[root@k8s-node3 ~]# docker push 192.168.110.133:5000/pod-infrastructure
The push refers to a repository [192.168.110.133:5000/pod-infrastructure]
Get https://192.168.110.133:5000/v1/_ping: http: server gave HTTP response to HTTPS client
[root@k8s-node3 ~]#

This errored, probably because the client speaks HTTPS while the docker registry does not serve HTTPS. One fix is to let requests to "192.168.110.133:5000" go over HTTP. Solution: create a daemon.json file in the /etc/docker/ directory and write the following into it:
[root@k8s-node3 ~]# cd /etc/docker/
[root@k8s-node3 docker]# echo '{ "insecure-registries":["192.168.110.133:5000"] }' > /etc/docker/daemon.json
[root@k8s-node3 docker]#

Restart Docker and the problem is solved:
[root@k8s-node3 docker]# systemctl restart docker
[root@k8s-node3 docker]# docker tag docker.io/tianyebj/pod-infrastructure:latest 192.168.110.133:5000/pod-infrastructure:latest
[root@k8s-node3 docker]# docker push 192.168.110.133:5000/pod-infrastructure:latest
The push refers to a repository [192.168.110.133:5000/pod-infrastructure]
ba3d4cbbb261: Pushed
0a081b45cb84: Pushed
df9d2808b9a9: Pushed
latest: digest: sha256:a378b2d7a92231ffb07fdd9dbd2a52c3c439f19c8d675a0d8d9ab74950b15a1b size: 948
[root@k8s-node3 docker]#

While we are at it, configure the other two machines the same way to avoid hitting this error again.
[root@k8s-master pod]# echo '{ "insecure-registries":["192.168.110.133:5000"] }' > /etc/docker/daemon.json
[root@k8s-node2 ~]# echo '{ "insecure-registries":["192.168.110.133:5000"] }' > /etc/docker/daemon.json

Any other node that wants to use this private registry needs the same change to its Docker configuration.
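Since Docker refuses to start when daemon.json is malformed, it is worth validating the file before restarting the daemon. A small sketch that writes the same setting to a scratch path and checks it (python3 is used here only as a convenient JSON checker, assuming it is installed; the path is temporary, not the real /etc/docker):

```shell
# Write the same insecure-registries setting to a scratch file first.
tmpdir=$(mktemp -d)
echo '{ "insecure-registries":["192.168.110.133:5000"] }' > "$tmpdir/daemon.json"

# A malformed daemon.json makes "systemctl restart docker" fail outright,
# so check the JSON before touching the real /etc/docker/daemon.json.
if python3 -m json.tool "$tmpdir/daemon.json" > /dev/null; then
    echo "daemon.json is valid JSON"
else
    echo "daemon.json is broken; fix it before restarting docker"
fi

rm -r "$tmpdir"
```

A missing quote or trailing comma in this file is a common reason Docker fails to come back up after the restart step.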
The procedure is as follows:

[root@k8s-node2 ~]# vim /etc/sysconfig/docker

OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=192.168.110.133:5000'
Then restart Docker.
[root@k8s-node2 ~]# systemctl restart docker

For the node to pull images from the private registry, /etc/kubernetes/kubelet must also be modified:
[root@k8s-node2 ~]# vim /etc/kubernetes/kubelet

Then restart kubelet.
[root@k8s-node2 ~]# systemctl restart kubelet.service
[root@k8s-node2 ~]#

Now push Nginx to the private registry as well (note that the commands below misspell nginx as "ngnix", a typo that causes trouble shortly), as shown below:
[root@k8s-node3 docker]# docker tag docker.io/nginx:1.13 192.168.110.133:5000/ngnix:1.13
[root@k8s-node3 docker]# docker push 192.168.110.133:5000/ngnix:1.13
The push refers to a repository [192.168.110.133:5000/ngnix]
7ab428981537: Pushed
82b81d779f83: Pushed
d626a8ad97a1: Pushed
1.13: digest: sha256:e4f0474a75c510f40b37b6b7dc2516241ffa8bde5a442bde3d372c9519c84d90 size: 948
[root@k8s-node3 docker]#

Now switch the kubelet on k8s-master and k8s-node3 to pull from the private registry as well.
The procedure is as follows:

[root@k8s-node3 docker]# vim /etc/kubernetes/kubelet
[root@k8s-master pod]# vim /etc/kubernetes/kubelet
Then restart kubelet.
[root@k8s-node3 docker]# systemctl restart kubelet.service
[root@k8s-node3 docker]#
[root@k8s-master pod]# systemctl restart kubelet.service
[root@k8s-master pod]#

To summarize: there are many steps, but the key point is that on all three nodes both the Docker configuration and the kubelet configuration must be changed, and the services restarted afterwards.
[root@k8s-master pod]# vim /etc/sysconfig/docker
[root@k8s-master pod]# systemctl restart docker

The change is as follows:
# Trust the private registry and use a registry mirror
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=192.168.110.133:5000'
Modify the kubelet configuration.
[root@k8s-master pod]# vim /etc/kubernetes/kubelet
[root@k8s-master pod]# systemctl restart kubelet.service

The change is as follows:
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.110.133:5000/pod-infrastructure:latest"
Finally, modify the Pod definition itself so it pulls its image from the private registry, as shown below.
Can you spot the difference between the two files below? What a detour that was. Off to bed.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: 192.168.110.133:5000/nginx:1.13
    ports:
    - containerPort: 80

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: 192.168.110.133:5000/ngnix:1.13
    ports:
    - containerPort: 80

I tested this several times tonight; it was almost midnight, but it finally succeeded, so here it is. I restarted docker and kubelet on all three machines, because I made several mistakes along the way and could not pull from the private registry.
[root@k8s-master pod]# vim nginx_pod.yaml
[root@k8s-master pod]# kubectl delete pod nginx
pod "nginx" deleted
[root@k8s-master pod]# kubectl describe pod nginx
Error from server (NotFound): pods "nginx" not found
[root@k8s-master pod]# kubectl create -f nginx_pod.yaml
pod "nginx" created
[root@k8s-master pod]# kubectl describe pod nginx
Name:         nginx
Namespace:    default
Node:         k8s-master/192.168.110.133
Start Time:   Fri, 05 Jun 2020 23:55:23 +0800
Labels:       app=web
Status:       Pending
IP:
Controllers:  <none>
Containers:
  nginx:
    Container ID:
    Image:        192.168.110.133:5000/ngnix:1.13
    Image ID:
    Port:         80/TCP
    State:        Waiting
      Reason:     ContainerCreating
    Ready:        False
    Restart Count: 0
    Volume Mounts: <none>
    Environment Variables: <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
No volumes.
QoS Class:    BestEffort
Tolerations:  <none>
Events:
  FirstSeen  LastSeen  Count  From                  SubObjectPath           Type     Reason             Message
  ---------  --------  -----  ----                  -------------           -------- ------             -------
  3s         3s        1      {default-scheduler }                          Normal   Scheduled          Successfully assigned nginx to k8s-master
  3s         3s        1      {kubelet k8s-master}                          Warning  MissingClusterDNS  kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  2s         2s        1      {kubelet k8s-master}  spec.containers{nginx}  Normal   Pulling            pulling image "192.168.110.133:5000/ngnix:1.13"
[root@k8s-master pod]# kubectl get pod nginx
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          20s
[root@k8s-master pod]# kubectl get pod nginx -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP            NODE
nginx     1/1       Running   0          24s       172.16.77.3   k8s-master
[root@k8s-master pod]#

The trouble came from spelling Nginx as Ngnix above. So I deleted the misspelled image from the private registry, deleted the Nginx Pod that k8s had created, pushed a correctly spelled image to the private registry, and pulled it from there again.
First delete the Nginx Pod that k8s created.
[root@k8s-master pod]# kubectl get pod
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          2d
[root@k8s-master pod]# kubectl delete pod nginx
pod "nginx" deleted
[root@k8s-master pod]# kubectl get pod
No resources found.
[root@k8s-master pod]#
Note: when deleting Docker images, be aware that Docker has two removal commands. The Docker help shows both: rm ("Remove one or more containers") and rmi ("Remove one or more images"). An image is easy to understand: like a virtual machine image, it is a template. A container is the running state of an image, and Docker keeps a record (a container) for every image it has run. docker ps lists the running containers; docker ps -a also lists the exited ones. If you exited a container and forgot to save the data inside it, you can find it with docker ps -a and use docker commit to save it as an image you can run again.
Because an image that is referenced by a container (i.e. has been run) cannot be deleted while that reference exists, you must first delete the referencing containers before the image itself can be removed.
In my case the containers were started by the k8s Pod, not by a manual Docker deployment of Nginx, so I simply removed them outright.
Now push the Docker image to the private registry.
[root@k8s-node2 ~]# docker images
REPOSITORY                                TAG       IMAGE ID       CREATED       SIZE
docker.io/busybox                         latest    1c35c4412082   5 days ago    1.22 MB
docker.io/nginx                           1.13      ae513a47849c   2 years ago   109 MB
192.168.110.133:5000/pod-infrastructure   latest    34d3450d733b   3 years ago   205 MB
docker.io/tianyebj/pod-infrastructure     latest    34d3450d733b   3 years ago   205 MB
[root@k8s-node2 ~]# docker tag docker.io/nginx:1.13 192.168.110.133:5000/nginx:1.13
[root@k8s-node2 ~]# docker push 192.168.110.133:5000/nginx:1.13
The push refers to a repository [192.168.110.133:5000/nginx]
7ab428981537: Pushed
82b81d779f83: Pushed
d626a8ad97a1: Pushed
1.13: digest: sha256:e4f0474a75c510f40b37b6b7dc2516241ffa8bde5a442bde3d372c9519c84d90 size: 948
[root@k8s-node2 ~]#

Remember to update your nginx_pod.yaml; the next Nginx you create can then pull straight from the private registry, which is fast.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: 192.168.110.133:5000/nginx:1.13
    ports:
    - containerPort: 80

You can see that creating the Pod is very fast this time:
[root@k8s-master pod]# kubectl create -f nginx_pod.yaml
pod "nginx" created
[root@k8s-master pod]# kubectl get pod
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          8s
[root@k8s-master pod]#
4. docker ps shows how many containers are running.
[root@k8s-node3 ~]# docker ps
CONTAINER ID   IMAGE                                            COMMAND                  CREATED          STATUS          PORTS   NAMES
3df24ca19115   192.168.110.133:5000/nginx:1.13                  "nginx -g 'daemon ..."   18 minutes ago   Up 18 minutes           k8s_nginx.536c04d1_nginx_default_c8a6f3d8-a959-11ea-8dbd-000c2919d52d_875fe334
652f57e1b9a9   192.168.110.133:5000/pod-infrastructure:latest   "/pod"                   18 minutes ago   Up 18 minutes           k8s_POD.cbd802f1_nginx_default_c8a6f3d8-a959-11ea-8dbd-000c2919d52d_de21241c
[root@k8s-node3 ~]#

Now check the containers' IP addresses. Note that the Nginx container and the pod container share one IP address.
[root@k8s-node3 ~]# docker ps
CONTAINER ID   IMAGE                                            COMMAND                  CREATED          STATUS          PORTS   NAMES
3df24ca19115   192.168.110.133:5000/nginx:1.13                  "nginx -g 'daemon ..."   24 minutes ago   Up 24 minutes           k8s_nginx.536c04d1_nginx_default_c8a6f3d8-a959-11ea-8dbd-000c2919d52d_875fe334
652f57e1b9a9   192.168.110.133:5000/pod-infrastructure:latest   "/pod"                   24 minutes ago   Up 24 minutes           k8s_POD.cbd802f1_nginx_default_c8a6f3d8-a959-11ea-8dbd-000c2919d52d_de21241c
[root@k8s-node3 ~]# docker inspect 652f57e1b9a9 | grep -i ipaddress
            "SecondaryIPAddresses": null,
            "IPAddress": "172.16.13.2",
                    "IPAddress": "172.16.13.2",
[root@k8s-node3 ~]# docker inspect 3df24ca19115 | grep -i ipaddress
            "SecondaryIPAddresses": null,
            "IPAddress": "",
[root@k8s-node3 ~]#

Only the pod container has an IP address; nginx has none of its own. nginx and the pod container share the IP address.
Here you can check the Nginx container's network type: it is the container type, and it shares the IP address of container 652f57e1b9a9.
[root@k8s-node3 ~]# docker inspect 3df24ca19115 | grep -i network
            "NetworkMode": "container:652f57e1b9a9d71453d39c40f48c90738b53a66a42888a72f4885b0a69c4a233",
            "NetworkSettings": {
                "Networks": {}
[root@k8s-node3 ~]#

There is a pitfall here, so a word on SELinux. SELinux ("Security-Enhanced Linux") is a mandatory access control security module for Linux, developed by the NSA (the US National Security Agency) and SCC (Secure Computing Corporation).
The problem: against the nginx pod created with k8s, curl -I 172.16.101.2 hung forever. Something was being blocked, and SELinux had to be turned off.
# Method 1: check the SELinux status
[root@k8s-master ~]# /usr/sbin/sestatus -v
SELinux status:                 disabled
# Method 2: check the SELinux status
[root@k8s-master ~]# getenforce
Disabled
[root@k8s-master ~]#

setenforce 0 switches enforcement off temporarily without a reboot, though that did not seem to work for me. To disable it permanently you need a reboot: edit the configuration file with vim /etc/selinux/config and change SELINUX=enforcing to SELINUX=disabled.
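The enforcing-to-disabled edit can also be scripted instead of done by hand in vim. A sketch that performs the substitution on a scratch copy (the sample contents below mimic the usual /etc/selinux/config layout; point it at the real file yourself if you want to apply it):

```shell
# Work on a scratch copy instead of the real /etc/selinux/config.
tmpdir=$(mktemp -d)
cat > "$tmpdir/config" <<'EOF'
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
SELINUXTYPE=targeted
EOF

# Flip only the real setting, not the commented examples:
# anchoring on start-of-line leaves the "#     enforcing - ..." lines alone.
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$tmpdir/config"

grep '^SELINUX=' "$tmpdir/config"   # prints: SELINUX=disabled

rm -r "$tmpdir"
```

Anchoring the pattern with ^ and $ matters: a bare s/enforcing/disabled/ would also rewrite the comment lines that merely mention the word.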
Reboot all three machines. Since the k8s components were set to start on boot earlier, check their state afterwards: verify that the created Pod is running normally and that the k8s components are healthy, and then test with curl.
From the master node, curl -I 172.16.101.2 now reaches Nginx. docker ps showed two containers: the pod container, whose IP address is 172.16.101.2, and the container defined in the Pod configuration file, which has no IP of its own. Its network mode is container, i.e. a shared network: it shares the pod container's network, so both answer at 172.16.101.2. The only difference is which one occupies a given port; ports cannot conflict, and it is first come, first served.
5. What exactly is the pod resource in k8s?
A: When you create a pod resource in k8s, that pod resource drives the kubelet, and the kubelet drives Docker to start at least two containers: the business container (Nginx) and the pod container. Kubernetes' core capabilities include self-healing, service discovery and load balancing, automated deployment and rollback, and elastic scaling.
A plain business container like Nginx cannot provide these advanced capabilities on its own; making every business container support them would mean customizing each image. To keep the cost of building container images down, k8s provides a ready-made container, the pod container, which supports k8s' advanced features. How does this pod container bind to an ordinary container? Through the container network type.
The k8s advanced features are provided by the pod container; the nginx business container only needs to serve port 80. The container network type binds them together: requests to port 80 go to the nginx container as usual, while the pod container supplies the rest of the k8s machinery. Complementing each other, the two together form a single resource, and that resource is what k8s calls a pod. When k8s talks about a pod resource, it means this: one pod resource that starts two containers, an nginx business container and a basic pod container.
6. Common Pod operations in k8s.
A Pod's configuration file is in YAML format. In YAML, when a dash precedes an attribute's key, it marks a list entry, so there can be several. In other words, when you create a pod resource in k8s, the pod resource drives the kubelet, the kubelet drives Docker to start at least two containers, and it can start more, as long as the container ports do not conflict.
apiVersion: v1
kind: Pod
metadata:
  name: test1
  labels:
    app: web
spec:
  containers:
  # In vim, 4yy and then p duplicates these four lines for the next container.
  - name: nginx
    image: 192.168.110.133:5000/nginx:1.13
    ports:
    - containerPort: 80
  # A Pod can start two or more containers.
  - name: busybox
    # Remember to include the tag; this uses the Docker Hub image.
    image: docker.io/busybox:latest
    # Docker's default command exits immediately, and an exited container dies;
    # give it a command that keeps it in the foreground.
    command: ["sleep","3600"]
    ports:
    - containerPort: 80
Then create the resource with kubectl, as shown below:
[root@k8s-master pod]# vim nginx_pod.yaml
[root@k8s-master pod]# kubectl create -f nginx_pod.yaml
pod "test1" created
[root@k8s-master pod]# kubectl get pod -o wide
NAME      READY     STATUS              RESTARTS   AGE       IP             NODE
nginx     1/1       Running             1          4h        172.16.101.2   k8s-node3
test      1/1       Running             0          3m        172.16.52.2    k8s-node2
test1     0/2       ContainerCreating   0          4s        <none>         k8s-master

Not all of test1's containers started. Check the details with kubectl describe pod test1.
[root@k8s-master pod]# kubectl get pod -o wide
NAME      READY     STATUS             RESTARTS   AGE       IP             NODE
nginx     1/1       Running            1          4h        172.16.101.2   k8s-node3
test      1/1       Running            0          6m        172.16.52.2    k8s-node2
test1     1/2       ImagePullBackOff   0          3m        172.16.29.2    k8s-master
[root@k8s-master pod]# kubectl describe pod test1
Name:         test1
Namespace:    default
Node:         k8s-master/192.168.110.133
Start Time:   Mon, 08 Jun 2020 19:32:18 +0800
Labels:       app=web
Status:       Pending
IP:           172.16.29.2
Controllers:  <none>
Containers:
  nginx:
    Container ID:
    Image:        192.168.110.133:5000/nginx:1.13
    Image ID:
    Port:         80/TCP
    State:        Waiting
      Reason:     ImagePullBackOff
    Ready:        False
    Restart Count: 0
    Volume Mounts: <none>
    Environment Variables: <none>
  busybox:
    Container ID: docker://adb4a9f14d1b0d6ee390923eeabd9269bfa1683f0ef02f094c5a24d4b204db64
    Image:        docker.io/busybox:latest
    Image ID:     docker-pullable://docker.io/busybox@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209
    Port:         80/TCP
    Command:
      sleep
      3600
    State:        Running
      Started:    Mon, 08 Jun 2020 19:32:45 +0800
    Ready:        True
    Restart Count: 0
    Volume Mounts: <none>
    Environment Variables: <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
No volumes.
QoS Class:    BestEffort
Tolerations:  <none>
Events:
  FirstSeen  LastSeen  Count  From                  SubObjectPath             Type     Reason             Message
  ---------  --------  -----  ----                  -------------             -------- ------             -------
  4m         4m        1      {default-scheduler }                            Normal   Scheduled          Successfully assigned test1 to k8s-master
  4m         4m        1      {kubelet k8s-master}  spec.containers{busybox}  Normal   Pulling            pulling image "docker.io/busybox:latest"
  4m         3m        2      {kubelet k8s-master}                            Warning  MissingClusterDNS  kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  3m         3m        1      {kubelet k8s-master}  spec.containers{busybox}  Normal   Pulled             Successfully pulled image "docker.io/busybox:latest"
  3m         3m        1      {kubelet k8s-master}  spec.containers{busybox}  Normal   Created            Created container with docker id adb4a9f14d1b; Security:[seccomp=unconfined]
  3m         3m        1      {kubelet k8s-master}  spec.containers{busybox}  Normal   Started            Started container with docker id adb4a9f14d1b
  4m         1m        5      {kubelet k8s-master}  spec.containers{nginx}    Normal   Pulling            pulling image "192.168.110.133:5000/nginx:1.13"
  4m         1m        5      {kubelet k8s-master}  spec.containers{nginx}    Warning  Failed             Failed to pull image "192.168.110.133:5000/nginx:1.13": Error while pulling image: Get http://192.168.110.133:5000/v1/repositories/nginx/images: dial tcp 192.168.110.133:5000: connect: connection refused
  3m         1m        5      {kubelet k8s-master}                            Warning  FailedSync         Error syncing pod, skipping: failed to "StartContainer" for "nginx" with ErrImagePull: "Error while pulling image: Get http://192.168.110.133:5000/v1/repositories/nginx/images: dial tcp 192.168.110.133:5000: connect: connection refused"
  3m         11s       15     {kubelet k8s-master}  spec.containers{nginx}    Normal   BackOff            Back-off pulling image "192.168.110.133:5000/nginx:1.13"
  3m         0s        16    {kubelet k8s-master}                             Warning  FailedSync         Error syncing pod, skipping: failed to "StartContainer" for "nginx" with ImagePullBackOff: "Back-off pulling image \"192.168.110.133:5000/nginx:1.13\""
[root@k8s-master pod]#

In fact all three of my nodes have the busybox image locally (docker images confirms it), yet it still failed. The image pull policy needs configuring: set imagePullPolicy in your nginx_pod.yaml. The default is Always; it can be set to IfNotPresent, which skips the pull when the image is already present.
apiVersion: v1
kind: Pod
metadata:
  name: test2
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: 192.168.110.133:5000/nginx:1.13
    ports:
    - containerPort: 80
  - name: busybox
    image: docker.io/busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sleep","3600"]
    ports:
    - containerPort: 80

Then create this Pod; you can see it comes up immediately:
[root@k8s-master pod]# vim nginx_pod.yaml
[root@k8s-master pod]# kubectl create -f nginx_pod.yaml
pod "test2" created

[root@k8s-master pod]# kubectl get pod -o wide
NAME      READY     STATUS             RESTARTS   AGE       IP             NODE
nginx     1/1       Running            1          4h        172.16.101.2   k8s-node3
test      1/1       Running            0          29m       172.16.52.2    k8s-node2
test1     1/2       ImagePullBackOff   0          26m       172.16.29.2    k8s-master
test2     2/2       Running            0          12s       172.16.101.3   k8s-node3
[root@k8s-master pod]#

This proves that one Pod resource can start at least two containers, and more business containers as well.
7. Look at kubectl's built-in help, as shown below:
A Pod's required attributes are apiVersion, kind, metadata, and spec; status is added by the system after creation, so you need not manage it.
What we want is pod.spec.containers; keep drilling down with the explain command.
[root@k8s-master ~]# kubectl explain pod.spec.containers

The details of pod.spec.containers are as follows:
Notice how command is written: command <[]string>. The parameter is an array (square brackets) of strings, which together form a shell command line.
[root@k8s-master ~]# kubectl explain pod.spec.containers
RESOURCE: containers <[]Object>

DESCRIPTION:
     List of containers belonging to the pod. Containers cannot currently be
     added or removed. There must be at least one container in a Pod. Cannot be
     updated. More info: http://kubernetes.io/docs/user-guide/containers

    A single application container that you want to run within a pod.

FIELDS:
   command	<[]string>
     Entrypoint array. Not executed within a shell. The docker image's
     ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME)
     are expanded using the container's environment. If a variable cannot be
     resolved, the reference in the input string will be unchanged. The
     $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME).
     Escaped references will never be expanded, regardless of whether the
     variable exists or not. Cannot be updated. More info:
     http://kubernetes.io/docs/user-guide/containers#containers-and-commands

   env	<[]Object>
     List of environment variables to set in the container. Cannot be updated.

   lifecycle	<Object>
     Actions that the management system should take in response to container
     lifecycle events. Cannot be updated.

   volumeMounts	<[]Object>
     Pod volumes to mount into the container's filesystem. Cannot be updated.

   stdin	<boolean>
     Whether this container should allocate a buffer for stdin in the container
     runtime. If this is not set, reads from stdin in the container will always
     result in EOF. Default is false.

   livenessProbe	<Object>
     Periodic probe of container liveness. Container will be restarted if the
     probe fails. Cannot be updated. More info:
     http://kubernetes.io/docs/user-guide/pod-states#container-probes

   name	<string> -required-
     Name of the container specified as a DNS_LABEL. Each container in a pod
     must have a unique name (DNS_LABEL). Cannot be updated.

   readinessProbe	<Object>
     Periodic probe of container service readiness. Container will be removed
     from service endpoints if the probe fails. Cannot be updated. More info:
     http://kubernetes.io/docs/user-guide/pod-states#container-probes

   resources	<Object>
     Compute Resources required by this container. Cannot be updated. More info:
     http://kubernetes.io/docs/user-guide/persistent-volumes#resources

   workingDir	<string>
     Container's working directory. If not specified, the container runtime's
     default will be used, which might be configured in the container image.
     Cannot be updated.

   args	<[]string>
     Arguments to the entrypoint. The docker image's CMD is used if this is not
     provided. Variable references $(VAR_NAME) are expanded using the
     container's environment. If a variable cannot be resolved, the reference
     in the input string will be unchanged. The $(VAR_NAME) syntax can be
     escaped with a double $$, ie: $$(VAR_NAME). Escaped references will never
     be expanded, regardless of whether the variable exists or not. Cannot be
     updated. More info:
     http://kubernetes.io/docs/user-guide/containers#containers-and-commands

   imagePullPolicy	<string>
     Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always
     if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
     More info: http://kubernetes.io/docs/user-guide/images#updating-images

   ports	<[]Object>
     List of ports to expose from the container. Exposing a port here gives the
     system additional information about the network connections a container
     uses, but is primarily informational. Not specifying a port here DOES NOT
     prevent that port from being exposed. Any port which is listening on the
     default "0.0.0.0" address inside a container will be accessible from the
     network. Cannot be updated.

   tty	<boolean>
     Whether this container should allocate a TTY for itself, also requires
     'stdin' to be true. Default is false.

   image	<string>
     Docker image name. More info: http://kubernetes.io/docs/user-guide/images

   securityContext	<Object>
     Security options the pod should run with. More info:
     http://releases.k8s.io/HEAD/docs/design/security_context.md

   stdinOnce	<boolean>
     Whether the container runtime should close the stdin channel after it has
     been opened by a single attach. When stdin is true the stdin stream will
     remain open across multiple attach sessions. If stdinOnce is set to true,
     stdin is opened on container start, is empty until the first client
     attaches to stdin, and then remains open and accepts data until the client
     disconnects, at which time stdin is closed and remains closed until the
     container is restarted. If this flag is false, a container processes that
     reads from stdin will never receive an EOF. Default is false

   terminationMessagePath	<string>
     Optional: Path at which the file to which the container's termination
     message will be written is mounted into the container's filesystem. Message
     written is intended to be brief final status, such as an assertion failure
     message. Defaults to /dev/termination-log. Cannot be updated.

[root@k8s-master ~]#
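Several of the fields listed above can be combined in a single manifest. The sketch below is a hypothetical example (the file name busybox_pod.yaml, the busybox image tag, and the POD_GREETING variable are assumptions for illustration, not from this tutorial) showing `command`, `args`, `env`, and `imagePullPolicy` together, including the `$(VAR_NAME)` expansion described above:

```yaml
# busybox_pod.yaml -- hypothetical sketch exercising fields from
# `kubectl explain pod.spec.containers`.
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    app: demo
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    # command replaces the image's ENTRYPOINT; args replaces its CMD.
    command: ["sh", "-c"]
    # $(POD_GREETING) is expanded from the env list below.
    args: ["echo hello from $(POD_GREETING); sleep 3600"]
    env:
    - name: POD_GREETING
      value: "busybox-pod"
    # IfNotPresent avoids re-pulling the image on every container restart.
    imagePullPolicy: IfNotPresent
```

As with nginx_pod.yaml earlier, this would be created with `kubectl create -f busybox_pod.yaml`.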
8. Common Pod operations in K8s.
-- Create a Pod resource
[root@k8s-master pod]# kubectl create -f nginx_pod.yaml

-- Delete a Pod; the flags --force --grace-period=0 force immediate deletion
[root@k8s-master pod]# kubectl delete pod test1
pod "test1" deleted
[root@k8s-master pod]# kubectl get pod -o wide
NAME      READY     STATUS        RESTARTS   AGE       IP             NODE
nginx     1/1       Running       1          4h        172.16.101.2   k8s-node3
test      1/1       Running       0          35m       172.16.52.2    k8s-node2
test1     1/2       Terminating   0          31m       172.16.29.2    k8s-master
test2     2/2       Running       0          5m        172.16.101.3   k8s-node3
[root@k8s-master pod]# kubectl get pod -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP             NODE
nginx     1/1       Running   1          4h        172.16.101.2   k8s-node3
test      1/1       Running   0          36m       172.16.52.2    k8s-node2
test2     2/2       Running   0          6m        172.16.101.3   k8s-node3
[root@k8s-master pod]# kubectl delete pod test --force --grace-period=0
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "test" deleted
[root@k8s-master pod]# kubectl get pod -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP             NODE
nginx     1/1       Running   1          4h        172.16.101.2   k8s-node3
test2     2/2       Running   0          7m        172.16.101.3   k8s-node3
[root@k8s-master pod]#

-- View the detailed description of a Pod
[root@k8s-master pod]# kubectl describe pod nginx

-- Update Pods from a configuration file; apply only adds or updates resources
[root@k8s-master pod]# kubectl apply -f nginx_pod.yaml
pod "test4" created
[root@k8s-master pod]# kubectl get pod -o wide
NAME      READY     STATUS             RESTARTS   AGE       IP             NODE
nginx     1/1       Running            1          4h        172.16.101.2   k8s-node3
test1     0/1       ImagePullBackOff   0          1m        172.16.29.2    k8s-master
test2     2/2       Running            0          23m       172.16.101.3   k8s-node3
test4     1/1       Running            0          3s        172.16.52.2    k8s-node2
[root@k8s-master pod]#
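In the listings above, test2 shows READY 2/2, meaning that single Pod runs two containers. A minimal sketch of such a manifest follows; the second container's name, image, and command are assumptions for illustration, since the original test2 manifest is not shown here:

```yaml
# two_container_pod.yaml -- hypothetical sketch of a Pod whose READY
# column would show 2/2, like test2 above.
apiVersion: v1
kind: Pod
metadata:
  name: test2
spec:
  containers:
  # First container: the nginx web server, as in nginx_pod.yaml.
  - name: nginx
    image: nginx:1.13
    ports:
    - containerPort: 80
  # Second container (assumed): a busybox sidecar that just stays alive.
  - name: busybox
    image: busybox:1.28
    command: ["sleep", "3600"]
```

Both containers share the Pod's network namespace, which is why the whole Pod has a single IP (172.16.101.3 for test2) regardless of how many containers it holds.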
Next, we will move on to RC (Replication Controller).