Deploying a k8s Cluster on VMware Virtual Machines
I've been learning k8s recently, but cloud servers are a bit pricey, so I tried building a k8s cluster on VMware virtual machines instead. Sharing the process here; feedback welcome.
Installing the CentOS 7.9 VM in VMware
- VMware download: VMware download page (this link is for version 16)
- An installation walkthrough is available in the VMware installation tutorial
- CentOS 7.9 download: CentOS 7.9
With that, you're about halfway done.
VMware Network Configuration
My settings:
- master node IP: 172.31.0.3
- node01 node IP: 172.31.0.4
- node02 node IP: 172.31.0.5
- Subnet mask: 255.255.0.0
- Gateway: 172.31.0.2
- DNS: 114.114.114.114
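These settings land in each VM's NIC config file. As a rough sketch of what that file ends up containing on the master (assumptions: the interface is named `ens33`, which is typical for CentOS 7 on VMware, and the file lives at `/etc/sysconfig/network-scripts/ifcfg-ens33`; check yours with `ip addr`):

```
# /etc/sysconfig/network-scripts/ifcfg-ens33  (master node; adjust IPADDR per node)
BOOTPROTO=static
ONBOOT=yes
IPADDR=172.31.0.3
NETMASK=255.255.0.0
GATEWAY=172.31.0.2
DNS1=114.114.114.114
```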
First, configure the following in VMware:

Home -> Edit -> Virtual Network Editor

We want static IPs, so uncheck the DHCP service option.

Now you can boot the VM you just installed.
CentOS 7.9 Installation and Configuration
During installation I chose Chinese as the language.

Configure the other options as you see fit. The important things to set up front are the network and the hostname; changing them in config files later is a headache.

Set a password, then settle in for a long wait.

Reboot when the installation finishes and log in with your account and password. Then verify the setup:

```shell
ip addr              # check that the IP is configured as expected
ping www.baidu.com   # check outbound connectivity
```
That's it! You can now repeat the steps above to install node01 and node02. If you'd rather save time, you can clone the VM instead, but remember to change the IP address in the clone's config file afterwards.
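The post-clone IP edit itself is a one-liner. The sketch below demonstrates it on a throwaway copy of a config file rather than the live one; on a real clone you would point `sed` at your actual NIC config (for example `/etc/sysconfig/network-scripts/ifcfg-ens33`, an assumed name) and then restart the network service.

```shell
# Demonstrate the post-clone IP change on a scratch copy of the config
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
BOOTPROTO=static
IPADDR=172.31.0.3
EOF

# On node01 the address should become 172.31.0.4
sed -i 's/^IPADDR=.*/IPADDR=172.31.0.4/' "$cfg"
grep '^IPADDR' "$cfg"   # -> IPADDR=172.31.0.4
```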
Installing Docker
Remove any previous Docker packages:

```shell
yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine
```

Configure the yum repo, using the Aliyun mirror:

```shell
sudo yum install -y yum-utils
sudo yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
```

Install a pinned version of Docker and start it:

```shell
yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6
systemctl enable docker --now
```

Configure the registry mirror:

```shell
sudo mkdir -p /etc/docker   # create the directory
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"},
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```

Docker is now installed.
Installing kubeadm
Set up the base environment:

```shell
# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Turn off swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
```

Install kubelet, kubeadm, and kubectl:

```shell
# Configure the k8s package download source
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# Install the three packages
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

# Start kubelet
sudo systemctl enable --now kubelet
```

Building the Cluster
Download the images
Pull the images (7 in total):

```shell
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
  kube-apiserver:v1.20.9
  kube-proxy:v1.20.9
  kube-controller-manager:v1.20.9
  kube-scheduler:v1.20.9
  coredns:1.7.0
  etcd:3.4.13-0
  pause:3.2
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF

chmod +x ./images.sh && ./images.sh
```

Initialize the master node
Add a hosts entry for the master on every machine (change the IP below to your own):

```shell
# Run on every node so each one can resolve the master
echo "172.31.0.3 cluster-endpoint" >> /etc/hosts
```

Initialize the master (run this on the master node only):

```shell
kubeadm init \
  --apiserver-advertise-address=172.31.0.3 \
  --control-plane-endpoint=cluster-endpoint \
  --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
  --kubernetes-version v1.20.9 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=192.168.0.0/16
```

The same command with each flag annotated, for reference only (do not run this copy):

```shell
kubeadm init \
  --apiserver-advertise-address=172.31.0.3 \    # master node IP
  --control-plane-endpoint=cluster-endpoint \   # the hostname mapped above
  --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \   # image registry
  --kubernetes-version v1.20.9 \                # k8s version
  --service-cidr=10.96.0.0/16 \                 # service network range; usually left as-is
  --pod-network-cidr=192.168.0.0/16             # pod IP range; usually left as-is
# None of the network ranges (host, service, pod) may overlap.
```

My run produced the following output:

```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join cluster-endpoint:6443 --token uu0mpy.fdsjy3wojwwpatyj \
    --discovery-token-ca-cert-hash sha256:3d0c32c41667faf5424f6a3506e330bdaa57edda63c3d0f09bb4346c0b7c5b4f \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join cluster-endpoint:6443 --token uu0mpy.fdsjy3wojwwpatyj \
    --discovery-token-ca-cert-hash sha256:3d0c32c41667faf5424f6a3506e330bdaa57edda63c3d0f09bb4346c0b7c5b4f
```

Follow the prompts in this output for the next steps.
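The rule that the host, service, and pod CIDR ranges must not overlap can be checked mechanically. The helper below is an illustrative sketch (not part of kubeadm or this deployment): it converts each CIDR's network address to an integer and compares the two at the shorter of the two prefixes.

```shell
#!/bin/bash
# Hypothetical helper: check whether two CIDR ranges overlap.
ip2int() {
  local IFS=. ; read -r a b c d <<< "$1"
  echo $(( (a<<24) | (b<<16) | (c<<8) | d ))
}
overlap() {   # usage: overlap 10.96.0.0/16 192.168.0.0/16
  local n1=${1%/*} p1=${1#*/} n2=${2%/*} p2=${2#*/}
  local p=$(( p1 < p2 ? p1 : p2 ))                 # compare at the shorter prefix
  local mask=$(( (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
  if (( ( $(ip2int "$n1") & mask ) == ( $(ip2int "$n2") & mask ) )); then
    echo overlap
  else
    echo disjoint
  fi
}

overlap 10.96.0.0/16  192.168.0.0/16   # service CIDR vs pod CIDR  -> disjoint
overlap 10.96.0.0/16  172.31.0.0/16    # service CIDR vs host net  -> disjoint
```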
Set up .kube/config
```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Use the command below to make sure all pods are in the Running state:

```shell
kubectl get pod --all-namespaces -o wide
```

Install the network add-on
```shell
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml   # deploy the calico network add-on
```

Join the worker nodes
```shell
kubeadm join cluster-endpoint:6443 --token uu0mpy.fdsjy3wojwwpatyj \
    --discovery-token-ca-cert-hash sha256:3d0c32c41667faf5424f6a3506e330bdaa57edda63c3d0f09bb4346c0b7c5b4f
```

Then check all cluster nodes:

```shell
kubectl get nodes
```

If the output looks like the following, the installation succeeded. (Note: the token above is from my run, and join tokens are only valid for 24 hours; if yours has expired, generate a fresh join command on the master with `kubeadm token create --print-join-command`.)
Deploying the k8s Dashboard UI
Online deployment
```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
```

Offline deployment
Create the yaml file and put the content below into it:

```shell
vi dashboard.yaml   # create the yaml file
```

```yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.3.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
```

Then run:
```shell
kubectl apply -f dashboard.yaml
```

Set the access port
```shell
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
```

Change `type: ClusterIP` to `type: NodePort`, as shown in the figure below.
Query the port
```shell
kubectl get svc -A | grep kubernetes-dashboard
```
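If you want the node port in a script rather than by eye, you can cut it out of the PORT(S) column. The line below is a sample captured for illustration only; the ClusterIP (`10.96.140.21`) is a made-up placeholder, and in practice you would pipe in the real `kubectl get svc` output.

```shell
# Sample 'kubectl get svc -A' line (ClusterIP is a placeholder value)
line='kubernetes-dashboard   kubernetes-dashboard   NodePort   10.96.140.21   <none>   443:32002/TCP   5m'

# PORT(S) shows "443:32002/TCP"; the number after the colon is the node port
port=$(echo "$line" | grep -oE '443:[0-9]+' | cut -d: -f2)
echo "$port"   # -> 32002
```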
As shown above, the Dashboard is exposed on NodePort 32002 and can now be reached externally at https://&lt;node-ip&gt;:32002. Note that in a multi-node cluster you must use the IP of the node actually running the Dashboard pod, not the master's IP. That node can be found with `kubectl get pods -n kubernetes-dashboard -o wide`.
You can see the dashboard was scheduled on node01, whose IP in this example is 172.31.0.4, so the address to visit is: https://172.31.0.4:32002

The interface looks like this:
Create an access account
Create the access account by preparing a yaml file:

```shell
vi dash.yaml
```

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```

Then apply it:

```shell
kubectl apply -f dash.yaml
```

Token access
```shell
# Retrieve the access token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
```

Copy the token (the white text in the output) into the dashboard login page from earlier.
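The `base64decode` in the go-template above is ordinary base64 decoding; Kubernetes stores Secret data base64-encoded. A standalone illustration of the round trip (the string here is a made-up sample, not a real token):

```shell
# Secrets store values base64-encoded; encode and decode a sample string
encoded=$(printf 'sample-token-value' | base64)
printf '%s' "$encoded" | base64 -d   # prints the original string back
```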
A successful login looks like this:
All done!
References
- 云原生Java架構師的第一課K8s+Docker (Cloud-Native Java Architect, Lesson 1: K8s + Docker)
- kubernetes-dashboard (1.8.3) deployment and pitfalls
- Fixing the kubernetes dashboard page failing to open after creation