Installing EFK on Kubernetes (deployment without persistent storage)
1. Introduction
Here, EFK refers to Elasticsearch, Fluentd, and Kibana.
Elasticsearch
Elasticsearch is an open-source search and analytics engine built on Apache Lucene™. It is written in Java and uses Lucene internally to implement all of its indexing and search functionality. Its goal is to hide Lucene's complexity behind a simple RESTful API so that full-text search becomes easy. Elasticsearch is more than just Lucene and full-text search; it also provides:
A distributed, real-time document store in which every field is indexed and searchable;
A distributed, real-time analytics and search engine;
The ability to scale out to hundreds of servers and handle petabytes of structured or unstructured data.
An Elasticsearch instance can hold multiple indices (Index); each index can contain multiple types (Type), each type can store multiple documents (Document), and each document has multiple fields. An index is analogous to a database in a traditional relational database system: it is where related documents live. Elasticsearch exposes a standard RESTful API and uses JSON. Clients are also built and maintained for many other languages, such as Java, Python, .NET, and PHP.
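As a quick illustration of that REST/JSON interface (a minimal sketch; the localhost:9200 address and the test-index name are placeholders, not part of this deployment):
# Index a sample document (Elasticsearch 6.x requires an explicit Content-Type header)
curl -X PUT "http://localhost:9200/test-index/_doc/1" -H 'Content-Type: application/json' -d '{"message": "hello elasticsearch"}'
# Run a full-text search for it
curl -X GET "http://localhost:9200/test-index/_search?q=message:hello"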
Fluentd
Fluentd is an open-source data collector that provides a unified layer for collecting and consuming data, making the data easier to use and understand. Fluentd structures data as JSON, which lets it handle log data uniformly: collection, filtering, buffering, and output. Fluentd has a plugin-based architecture with input, output, filter, parser, formatter, buffer, and storage plugins; these plugins are how Fluentd is extended and put to better use.
Kibana
Kibana is an open-source analytics and visualization platform designed to work with Elasticsearch. With Kibana you can search, view, and interact with the data stored in Elasticsearch, and analyze and visualize it using a variety of charts, tables, and maps.
2. Download the EFK YAML manifests
The Kubernetes GitHub repository:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
Note:
These manifests deploy without persistent storage; if you need persistence, modify the YAML accordingly.
Download commands:
mkdir /root/EFK
cd /root/EFK
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/es-service.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/fluentd-es-configmap.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/kibana-service.yaml
Alternatively, the manifests from easzlab also work:
https://github.com/easzlab/kubeasz/tree/master/manifests/efk
Note:
This repository offers both non-persistent and persistent deployment variants.
Download commands:
wget https://raw.githubusercontent.com/easzlab/kubeasz/master/manifests/efk/es-without-pv/es-statefulset.yaml
Note: the es-static-pv and es-dynamic-pv folders hold the static-PV and dynamic-PV persistent-storage variants; refer to them if needed. The es-without-pv folder is the non-persistent variant used here.
wget https://raw.githubusercontent.com/easzlab/kubeasz/master/manifests/efk/es-service.yaml
wget https://raw.githubusercontent.com/easzlab/kubeasz/master/manifests/efk/fluentd-es-configmap.yaml
wget https://raw.githubusercontent.com/easzlab/kubeasz/master/manifests/efk/fluentd-es-ds.yaml
wget https://raw.githubusercontent.com/easzlab/kubeasz/master/manifests/efk/kibana-deployment.yaml
wget https://raw.githubusercontent.com/easzlab/kubeasz/master/manifests/efk/kibana-service.yaml
3. Download the images EFK needs
Images and registries referenced by the upstream YAML:
elasticsearch:v7.4.2 quay.io/fluentd_elasticsearch/elasticsearch:v7.4.2
fluentd:v2.8.0 quay.io/fluentd_elasticsearch/fluentd:v2.8.0
kibana-oss:7.4.2 docker.elastic.co/kibana/kibana-oss:7.4.2
Note: v7.4.2 failed several test runs (elasticsearch kept restarting with errors), so the versions were switched to 6.6.1:
elasticsearch:v6.6.1 quay.io/fluentd_elasticsearch/elasticsearch:v6.6.1
fluentd-elasticsearch:v2.4.0 quay.io/fluentd_elasticsearch/fluentd_elasticsearch:v2.4.0
kibana-oss:6.6.1 docker.elastic.co/kibana/kibana-oss:6.6.1
Since the cluster has no direct internet access to those registries, equivalent images were found on Alibaba Cloud's container registry:
elasticsearch:v6.6.1 registry.cn-hangzhou.aliyuncs.com/yfhub/elasticsearch:6.6.1
fluentd-elasticsearch:v2.4.0 registry.cn-hangzhou.aliyuncs.com/yfhub/fluentd-elasticsearch:v2.4.0
kibana-oss:6.6.1 registry.cn-hangzhou.aliyuncs.com/yfhub/kibana-oss:6.6.1
Pull the images with docker pull:
docker pull registry.cn-hangzhou.aliyuncs.com/yfhub/elasticsearch:6.6.1
docker pull registry.cn-hangzhou.aliyuncs.com/yfhub/fluentd-elasticsearch:v2.4.0
docker pull registry.cn-hangzhou.aliyuncs.com/yfhub/kibana-oss:6.6.1
Retag the images so their names match what the YAML expects:
docker tag registry.cn-hangzhou.aliyuncs.com/yfhub/elasticsearch:6.6.1 quay.io/fluentd_elasticsearch/elasticsearch:v6.6.1
docker tag registry.cn-hangzhou.aliyuncs.com/yfhub/fluentd-elasticsearch:v2.4.0 quay.io/fluentd_elasticsearch/fluentd_elasticsearch:v2.4.0
docker tag registry.cn-hangzhou.aliyuncs.com/yfhub/kibana-oss:6.6.1 docker.elastic.co/kibana/kibana-oss:6.6.1
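An optional sanity check to confirm the retags took effect:
docker images | grep -E 'elasticsearch|fluentd|kibana'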
Remove the images under their original (pre-retag) names:
docker rmi registry.cn-hangzhou.aliyuncs.com/yfhub/elasticsearch:6.6.1
docker rmi registry.cn-hangzhou.aliyuncs.com/yfhub/fluentd-elasticsearch:v2.4.0
docker rmi registry.cn-hangzhou.aliyuncs.com/yfhub/kibana-oss:6.6.1
Save the images as tar archives so they can be copied to the other Node(s) and imported:
docker save -o elasticsearch-v6.6.1 quay.io/fluentd_elasticsearch/elasticsearch:v6.6.1
docker save -o fluentd-elasticsearch-v2.4.0 quay.io/fluentd_elasticsearch/fluentd_elasticsearch:v2.4.0
docker save -o kibana-oss-6.6.1 docker.elastic.co/kibana/kibana-oss:6.6.1
Copy the packaged images to the other node(s):
scp -r elasticsearch-v6.6.1 fluentd-elasticsearch-v2.4.0 kibana-oss-6.6.1 k8s-node02:/root/
Import the images on the k8s-node02 node:
docker load -i elasticsearch-v6.6.1 && docker load -i fluentd-elasticsearch-v2.4.0 && docker load -i kibana-oss-6.6.1
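If there are several worker nodes, the same copy-and-load steps can be scripted; a sketch, where the node names are placeholders for your own:
for node in k8s-node02 k8s-node03; do
  scp elasticsearch-v6.6.1 fluentd-elasticsearch-v2.4.0 kibana-oss-6.6.1 ${node}:/root/
  ssh ${node} "docker load -i /root/elasticsearch-v6.6.1 && docker load -i /root/fluentd-elasticsearch-v2.4.0 && docker load -i /root/kibana-oss-6.6.1"
done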
4. Modify the official Kubernetes EFK YAML
es-service.yaml is shown below (note: lines commented out with # here may be enabled in the original upstream file):
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  type: NodePort  # expose via NodePort so elasticsearch-head can connect to Elasticsearch for inspection
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging
es-statefulset.yaml is shown below (note: lines commented out with # here may be enabled in the original upstream file):
# RBAC authn and authz
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"  # this line was newly added
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"  # this line was newly added
rules:
- apiGroups:
  - ""
  resources:
  - "services"
  - "namespaces"
  - "endpoints"
  verbs:
  - "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: kube-system
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"  # this line was newly added
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: elasticsearch-logging
  namespace: kube-system
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: elasticsearch-logging
  apiGroup: ""
---
# Elasticsearch deployment itself
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v6.6.1
    kubernetes.io/cluster-service: "true"  # this line was newly added
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  serviceName: elasticsearch-logging
  replicas: 2
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
      version: v6.6.1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v6.6.1
        kubernetes.io/cluster-service: "true"  # this line was newly added
    spec:
      serviceAccountName: elasticsearch-logging
      containers:
      - image: quay.io/fluentd_elasticsearch/elasticsearch:v6.6.1
        name: elasticsearch-logging
        imagePullPolicy: IfNotPresent  # default is Always; changed to IfNotPresent
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
            # memory: 3Gi
          requests:
            cpu: 100m
            # memory: 3Gi
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        # livenessProbe:
        #   tcpSocket:
        #     port: transport
        #   initialDelaySeconds: 5
        #   timeoutSeconds: 10
        # readinessProbe:
        #   tcpSocket:
        #     port: transport
        #   initialDelaySeconds: 5
        #   timeoutSeconds: 10
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: elasticsearch-logging
        emptyDir: {}
      # Elasticsearch requires vm.max_map_count to be at least 262144.
      # If your OS already sets up this number to a higher value, feel free
      # to remove this init container.
      initContainers:
      - image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        name: elasticsearch-logging-init
        securityContext:
          privileged: true
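The init container above raises vm.max_map_count on each node Elasticsearch is scheduled to; to verify the kernel setting directly on a node, a quick check is:
sysctl vm.max_map_count    # should report 262144 or higher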
fluentd-es-ds.yaml is shown below; fluentd-es-configmap.yaml is used unchanged (note: lines commented out with # here may be enabled in the original upstream file):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: kube-system
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: fluentd-es
  namespace: kube-system
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es-v2.4.0
  namespace: kube-system
  labels:
    k8s-app: fluentd-es
    version: v2.4.0
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
      version: v2.4.0
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        version: v2.4.0
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573).
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-node-critical
      serviceAccountName: fluentd-es
      containers:
      - name: fluentd-es
        image: quay.io/fluentd_elasticsearch/fluentd_elasticsearch:v2.4.0  # remember to change the image reference
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.d
        # ports:
        # - containerPort: 24231
        #   name: prometheus
        #   protocol: TCP
        # livenessProbe:
        #   tcpSocket:
        #     port: prometheus
        #   initialDelaySeconds: 5
        #   timeoutSeconds: 10
        # readinessProbe:
        #   tcpSocket:
        #     port: prometheus
        #   initialDelaySeconds: 5
        #   timeoutSeconds: 10
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config-volume
        configMap:
          name: fluentd-es-config-v0.2.0
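The ConfigMap name referenced above (fluentd-es-config-v0.2.0) must match metadata.name in the unchanged fluentd-es-configmap.yaml; a quick way to confirm before applying:
grep -n "name: fluentd-es-config" /root/EFK/fluentd-es-configmap.yaml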
kibana-deployment.yaml is shown below (note: lines commented out with # here may be enabled in the original upstream file):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      containers:
      - name: kibana-logging
        image: docker.elastic.co/kibana/kibana-oss:6.6.1  # image reference
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        #- name: ELASTICSEARCH_HOSTS
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch-logging:9200
        #- name: SERVER_NAME
        #  value: kibana-logging
        - name: SERVER_BASEPATH
          value: ""  # Kibana is accessed via NodePort, so change this value to an empty string
          #value: /api/v1/namespaces/kube-system/services/kibana-logging/proxy
        # - name: SERVER_REWRITEBASEPATH
        #   value: "false"
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
        # the livenessProbe and readinessProbe checks can stay commented out; they are not needed here
        #livenessProbe:
        #  httpGet:
        #    path: /api/status
        #    port: ui
        #  initialDelaySeconds: 5
        #  timeoutSeconds: 10
        #readinessProbe:
        #  httpGet:
        #    path: /api/status
        #    port: ui
        #  initialDelaySeconds: 5
        #  timeoutSeconds: 10
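Because the liveness and readiness probes are commented out, a simple way to confirm Kibana started cleanly (once the manifests are applied in step 5) is to tail its logs:
kubectl logs -n kube-system deployment/kibana-logging --tail=20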
kibana-service.yaml is shown below (note: lines commented out with # here may be enabled in the original upstream file):
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  type: NodePort  # added so Kibana can be reached directly at NodeIP:port
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging
5. Apply all of the EFK YAML files. All of the files above are kept in a single directory, /root/EFK:
kubectl apply -f /root/EFK/
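After applying, watch the pods come up; the elasticsearch-logging StatefulSet pods can take a few minutes to become Ready:
kubectl get pods -n kube-system -o wide | grep -E 'elasticsearch-logging|fluentd-es|kibana-logging'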
6. Check the ports exposed by the services
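The NodePorts assigned to the two services can be listed with kubectl (the actual port numbers will differ per cluster):
kubectl get svc -n kube-system elasticsearch-logging kibana-logging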
7. Install elasticsearch-head in Chrome to connect to and inspect Elasticsearch
Install the elasticsearch-head extension in Google Chrome and point it at the elasticsearch-logging NodePort to check whether Elasticsearch is reachable and whether it reports any errors.
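If you prefer the command line to elasticsearch-head, cluster health can also be checked directly against the NodePort; a sketch, with <node-ip> and <es-nodeport> taken from the output in step 6:
curl "http://<node-ip>:<es-nodeport>/_cluster/health?pretty"
curl "http://<node-ip>:<es-nodeport>/_cat/indices?v"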
8. Access Kibana through the NodePort exposed by its Service
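One way to pull out the assigned NodePort and build the Kibana URL (a sketch; it assumes the kibana-logging Service defined above):
KIBANA_PORT=$(kubectl get svc kibana-logging -n kube-system -o jsonpath='{.spec.ports[0].nodePort}')
echo "Open http://<node-ip>:${KIBANA_PORT} in a browser, then create an index pattern such as logstash-*"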