Kubernetes [Security] - System Hardening - 1. AppArmor
Table of Contents
- 1. Introduction
- 2. Prerequisites
- 2.1 Kubernetes & Docker versions
- 2.2 Kernel module
- 2.3 Loading profiles on nodes
- 2.4 Kubelet version
- 3. AppArmor profiles
- 4. Practice - AppArmor for curl
- 5. Practice - AppArmor for Docker Nginx
- 6. Practice - AppArmor for Kubernetes Nginx
1. Introduction
Learn how kube-apparmor-manager can help you manage AppArmor profiles on Kubernetes to reduce your cluster's attack surface.
AppArmor is a Linux kernel security module that supplements the standard Linux user- and group-based permissions by confining programs to a limited set of resources.
AppArmor can be configured for any application to reduce its potential attack surface and provide defense in depth. It is configured through profiles that whitelist the access a specific program or container needs, such as Linux capabilities, network access, and file permissions.
In this post, we first walk through a quick example of an AppArmor profile and how Kubernetes workloads can use it to reduce the attack surface. Then we introduce a new open-source tool, kube-apparmor-manager, and show how it helps you manage AppArmor profiles inside a Kubernetes cluster. Last but not least, we demonstrate how to build an AppArmor profile from an image profile to prevent reverse-shell attacks.
2. Prerequisites
2.1 Kubernetes & Docker versions
Kubernetes version at least v1.4 – AppArmor support was added in Kubernetes v1.4. Kubernetes components older than v1.4 are not aware of the AppArmor annotations and will silently ignore any AppArmor settings that are provided. To ensure that your Pods receive the expected protections, verify the Kubelet version of your nodes:

$ kubectl get nodes -o=jsonpath=$'{range .items[*]}{@.metadata.name}: {@.status.nodeInfo.kubeletVersion}\n{end}'
master: v1.20.1
node1: v1.20.1
node2: v1.20.1

$ kubectl get nodes -o=jsonpath=$'{range .items[*]}{@.metadata.name}: {@.status.nodeInfo.containerRuntimeVersion}\n{end}'
master: docker://19.3.4
node1: docker://19.3.4
node2: docker://19.3.4

2.2 Kernel module
AppArmor kernel module enabled – For the Linux kernel to enforce AppArmor profiles, the AppArmor kernel module must be installed and enabled. Several distributions enable the module by default, such as Ubuntu and SUSE, and many others provide optional support. To check whether the module is enabled, check the /sys/module/apparmor/parameters/enabled file:

$ cat /sys/module/apparmor/parameters/enabled
Y

2.3 Loading profiles on nodes
Profiles loaded – AppArmor is applied to a Pod by specifying an AppArmor profile that each container should run with. If any of the specified profiles are not loaded in the kernel, the Kubelet (>= v1.4) will reject the Pod. You can view which profiles are loaded on a node by checking the /sys/kernel/security/apparmor/profiles file. For example:

$ ssh root@192.168.211.41 "sudo cat /sys/kernel/security/apparmor/profiles | sort"
docker-default (enforce)
docker-nginx (enforce)
/sbin/dhclient (enforce)
/usr/bin/curl (enforce)
/usr/lib/connman/scripts/dhclient-script (enforce)
/usr/lib/NetworkManager/nm-dhcp-client.action (enforce)
/usr/lib/NetworkManager/nm-dhcp-helper (enforce)
/usr/sbin/tcpdump (enforce)

2.4 Kubelet version
As long as the Kubelet version includes AppArmor support (>= v1.4), the Kubelet will reject a Pod with AppArmor options if any of the prerequisites are not met. You can also verify AppArmor support on nodes by checking the node ready condition message (though this is likely to be removed in a later release):

$ kubectl get nodes -o=jsonpath=$'{range .items[*]}{@.metadata.name}: {.status.conditions[?(@.reason=="KubeletReady")].message}\n{end}'
master: kubelet is posting ready status. AppArmor enabled
node1: kubelet is posting ready status. AppArmor enabled
node2: kubelet is posting ready status. AppArmor enabled

3. AppArmor profiles
AppArmor profiles are specified per-container. To specify the AppArmor profile to run a Pod container with, add an annotation to the Pod's metadata:

container.apparmor.security.beta.kubernetes.io/<container_name>: <profile_ref>

Where <container_name> is the name of the container to apply the profile to, and <profile_ref> specifies the profile to apply. The <profile_ref> can be one of:
- runtime/default to apply the runtime's default profile
- localhost/<profile_name> to apply the profile loaded on the host with the name <profile_name>
- unconfined to indicate that no profile will be loaded
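As a sketch, the three annotation forms above would look like this in a Pod manifest (the Pod name, container names, and the profile name my-profile are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: profile-ref-examples          # hypothetical Pod name
  annotations:
    # Use the container runtime's default profile for container "app":
    container.apparmor.security.beta.kubernetes.io/app: runtime/default
    # Use a profile named "my-profile" already loaded on the node for "web":
    container.apparmor.security.beta.kubernetes.io/web: localhost/my-profile
    # Run container "debug" without any AppArmor profile:
    container.apparmor.security.beta.kubernetes.io/debug: unconfined
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 1h"]
  - name: web
    image: nginx
  - name: debug
    image: busybox
    command: ["sh", "-c", "sleep 1h"]
```

Note that the annotation key carries the container name, so each container in the Pod can get a different profile.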
An AppArmor profile defines which resources on the system (such as network, capabilities, or files) the confined application can access.
Here is a simple example of an AppArmor profile:

profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
  file,
  # Deny all file writes.
  deny /** w,
}

In this example, the profile grants the application all types of access except writes to the entire filesystem. It contains two rules:
- file: allows all kinds of access to the entire filesystem
- deny /** w: denies writing to any file under the root directory /. The expression /** matches any file under the root directory and all of its subdirectories.
Set up your Kubernetes cluster so that containers can use AppArmor profiles with the following steps:
- Install and enable AppArmor on all cluster nodes.
- Copy the AppArmor profiles you want to use to each node and parse them into enforce or complain mode.
- Annotate your container workloads with the AppArmor profile name.
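The distribution and loading steps above can be sketched like this (a sketch only: the node hostnames node1/node2 and the local profile file k8s-apparmor-example-deny-write are assumptions based on the example profile):

```shell
# Assumption: the example profile is saved locally as ./k8s-apparmor-example-deny-write
# and the worker nodes are reachable over SSH as node1 and node2 (hypothetical names).
for node in node1 node2; do
  # Copy the profile into the node's AppArmor profile directory.
  scp ./k8s-apparmor-example-deny-write root@${node}:/etc/apparmor.d/
  # Parse the profile and load it into the kernel in enforce mode;
  # add -C to load it in complain mode instead.
  ssh root@${node} "apparmor_parser /etc/apparmor.d/k8s-apparmor-example-deny-write"
done
```

Loaded profiles do not survive a reboot unless they live under /etc/apparmor.d/ (or are reloaded by some node bootstrap mechanism), which is why the copy goes there rather than to a temporary path.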
Here is how to use the profile in a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor
  annotations:
    # Tell Kubernetes to apply the AppArmor profile "k8s-apparmor-example-deny-write".
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
spec:
  containers:
  - name: hello
    image: busybox
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]

In the Pod YAML above, the container named hello uses the AppArmor profile named k8s-apparmor-example-deny-write. If the AppArmor profile does not exist, the Pod will fail to be created.
Each profile can run in either enforce mode (blocking access to disallowed resources) or complain mode (only reporting violations). After building an AppArmor profile, it is best to apply it in complain mode first and let the workload run for a while. By analyzing the AppArmor logs, you can detect and fix any false positives. Once you are confident enough, you can switch the profile to enforce mode.
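The complain-then-enforce workflow described above can be sketched with the aa-* utilities from the apparmor-utils package (the profile path is an assumption carried over from the example profile):

```shell
# Put the profile into complain mode: violations are logged but not blocked.
aa-complain /etc/apparmor.d/k8s-apparmor-example-deny-write

# ...run the workload for a while, then review AppArmor messages in the logs:
journalctl -k | grep -i apparmor    # or grep apparmor /var/log/syslog on Ubuntu

# Once no false positives remain, switch the profile to enforce mode.
aa-enforce /etc/apparmor.d/k8s-apparmor-example-deny-write
```

aa-complain and aa-enforce simply rewrite the profile's mode flag and reload it, so they can be toggled back and forth while tuning.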
If the previous profile is running in enforce mode, it will block any file-write activity:

$ kubectl exec hello-apparmor touch /tmp/test
touch: /tmp/test: Permission denied
error: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container:

This is a simplified example.
4. Practice - AppArmor for curl
root@master:~/cks/runtime-security# aa-status
apparmor module is loaded.
6 profiles are loaded.
6 profiles are in enforce mode.
   /sbin/dhclient
   /usr/lib/NetworkManager/nm-dhcp-client.action
   /usr/lib/NetworkManager/nm-dhcp-helper
   /usr/lib/connman/scripts/dhclient-script
   /usr/sbin/tcpdump
   docker-default
0 profiles are in complain mode.
10 processes have profiles defined.
10 processes are in enforce mode.
   docker-default (26146)
   docker-default (26164)
   docker-default (26184)
   docker-default (26480)
   docker-default (27226)
   docker-default (32926)
   docker-default (47085)
   docker-default (47820)
   docker-default (47906)
   docker-default (48662)
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

root@master:~/cks/runtime-security# apt-get install apparmor-utils

root@master:~/cks/runtime-security# aa-
aa-audit           aa-complain        aa-enabled         aa-genprof         aa-remove-unknown  aa-update-browser
aa-autodep         aa-decode          aa-enforce         aa-logprof         aa-status
aa-cleanprof       aa-disable         aa-exec            aa-mergeprof       aa-unconfined

root@master:~/cks/runtime-security# aa-genprof curl
root@master:~/cks/runtime-security# aa-status
apparmor module is loaded.
7 profiles are loaded.
7 profiles are in enforce mode.
   /sbin/dhclient
   /usr/bin/curl
   /usr/lib/NetworkManager/nm-dhcp-client.action
   /usr/lib/NetworkManager/nm-dhcp-helper
   /usr/lib/connman/scripts/dhclient-script
   /usr/sbin/tcpdump
   docker-default

root@master:~/cks/runtime-security# cd /etc/apparmor.d/
root@master:/etc/apparmor.d# ls
abstractions  cache  disable  force-complain  local  sbin.dhclient  tunables  usr.bin.curl  usr.sbin.rsyslogd  usr.sbin.tcpdump
root@master:/etc/apparmor.d# cat usr.bin.curl
# Last Modified: Mon May 24 23:11:35 2021
#include <tunables/global>

/usr/bin/curl {
  #include <abstractions/base>

  /usr/bin/curl mr,
}
root@master:/etc/apparmor.d# aa-logprof
Reading log entries from /var/log/syslog.
Updating AppArmor profiles in /etc/apparmor.d.

Enforce-mode changes:

Profile:  /usr/bin/curl
Network Family: inet
Socket Type: dgram

[1 - #include <abstractions/nameservice>]
 2 - network inet dgram,
(A)llow / [(D)eny] / (I)gnore / Audi(t) / Abo(r)t / (F)inish
Adding #include <abstractions/nameservice> to profile.

Profile:  /usr/bin/curl
Path:     /etc/ssl/openssl.cnf
Mode:     r
Severity: 2

 1 - #include <abstractions/openssl>
 2 - #include <abstractions/ssl_keys>
[3 - /etc/ssl/openssl.cnf]
(A)llow / [(D)eny] / (I)gnore / (G)lob / Glob with (E)xtension / (N)ew / Abo(r)t / (F)inish / (M)ore
Adding /etc/ssl/openssl.cnf r to profile

= Changed Local Profiles =

The following local profiles were changed. Would you like to save them?

[1 - /usr/bin/curl]
(S)ave Changes / Save Selec(t)ed Profile / [(V)iew Changes] / View Changes b/w (C)lean profiles / Abo(r)t
Writing updated profile for /usr/bin/curl.

root@master:/etc/apparmor.d# cat usr.bin.curl
# Last Modified: Mon May 24 23:18:29 2021
#include <tunables/global>

/usr/bin/curl {
  #include <abstractions/base>
  #include <abstractions/nameservice>

  /etc/ssl/openssl.cnf r,
  /usr/bin/curl mr,
}

root@master:/etc/apparmor.d# curl killer.sh -v
* Rebuilt URL to: killer.sh/
*   Trying 35.227.196.29...
* TCP_NODELAY set
* Connected to killer.sh (35.227.196.29) port 80 (#0)
> GET / HTTP/1.1
> Host: killer.sh
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Cache-Control: private
< Content-Type: text/html; charset=UTF-8
< Referrer-Policy: no-referrer
< Location: https://killer.sh/
< Content-Length: 215
< Date: Tue, 25 May 2021 06:19:36 GMT
<
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="https://killer.sh/">here</A>.
</BODY></HTML>
* Connection #0 to host killer.sh left intact

5. Practice - AppArmor for Docker Nginx
Kubernetes docs: https://v1-18.docs.kubernetes.io/zh/docs/tutorials/clusters/apparmor/
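The linked tutorial confines an Nginx container under plain Docker roughly as follows (a sketch: it assumes a docker-nginx profile has already been loaded on the host, e.g. with apparmor_parser, as shown in the next section):

```shell
# Run nginx under the custom docker-nginx AppArmor profile
# instead of Docker's default docker-default profile.
docker run --rm -d --name apparmor-nginx \
  --security-opt apparmor=docker-nginx \
  nginx

# Actions the profile denies now fail inside the container; for example,
# the tutorial's sample profile blocks writes like this one:
docker exec apparmor-nginx touch /root/blocked
```

The --security-opt apparmor=<profile> flag is the Docker-level equivalent of the Kubernetes Pod annotation used elsewhere in this post.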
6. Practice - AppArmor for Kubernetes Nginx
AppArmor Pod annotation
root@master:~/cks/apparmor# scp /etc/apparmor.d/docker-nginx root@192.168.211.41:/etc/apparmor.d/
100% 1644   1.6KB/s   00:00
root@master:~/cks/apparmor# scp /etc/apparmor.d/docker-nginx root@192.168.211.42:/etc/apparmor.d/
100% 1644   1.6KB/s   00:00

root@node1:/etc/apparmor.d# apparmor_parser /etc/apparmor.d/docker-nginx
root@node1:/etc/apparmor.d# aa-status
apparmor module is loaded.
7 profiles are loaded.
7 profiles are in enforce mode.
   /sbin/dhclient
   /usr/lib/NetworkManager/nm-dhcp-client.action
   /usr/lib/NetworkManager/nm-dhcp-helper
   /usr/lib/connman/scripts/dhclient-script
   /usr/sbin/tcpdump
   docker-default
   docker-nginx

root@master:~/cks/apparmor# k run secure --image=nginx -oyaml --dry-run=client > pod.yaml
root@master:~/cks/apparmor# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  annotations:                                                             # add this line
    container.apparmor.security.beta.kubernetes.io/secure: localhost/hello # add this line
  labels:
    run: secure
  name: secure
spec:
  containers:
  - image: nginx
    name: secure
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

root@master:~/cks/apparmor# k create -f pod.yaml
pod/secure created
root@master:~/cks/apparmor# k get pods secure
NAME     READY   STATUS    RESTARTS   AGE
secure   0/1     Blocked   0          6s
root@master:~/cks/apparmor# k describe pod secure
Name:         secure
Namespace:    default
Priority:     0
Node:         node2/192.168.211.42
Start Time:   Mon, 24 May 2021 23:50:37 -0700
Labels:       run=secure
Annotations:  container.apparmor.security.beta.kubernetes.io/secure: localhost/hello
Status:       Pending
Reason:       AppArmor
Message:      Cannot enforce AppArmor: profile "hello" is not loaded
IP:
IPs:          <none>
Containers:
  secure:
    Container ID:
    Image:          nginx
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       Blocked
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4lh26 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-4lh26:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4lh26
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  12s   default-scheduler  Successfully assigned default/secure to node2

# modify the pod.yaml annotation
root@master:~/cks/apparmor# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  annotations:
    container.apparmor.security.beta.kubernetes.io/secure: localhost/docker-nginx # change this line
  labels:
    run: secure
  name: secure
spec:
  containers:
  - image: nginx
    name: secure
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

root@master:~/cks/apparmor# k -f pod.yaml delete --force --grace-period 0
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "secure" force deleted
root@master:~/cks/apparmor# k create -f pod.yaml
pod/secure created
root@master:~/cks/apparmor# k get pod secure
NAME     READY   STATUS    RESTARTS   AGE
secure   1/1     Running   0          10s