

CKA-1.26 Mock Exam Questions

Published 2023/12/18.

Table of Contents

    • Question 1 | Contexts
    • Question 2 | Schedule Pod on Controlplane Node
    • Question 3 | Scale down StatefulSet
    • Question 4 | Pod Ready if Service is reachable
    • Question 5 | Kubectl sorting
    • Question 6 | Storage, PV, PVC, Pod volume
    • Question 7 | Node and Pod Resource Usage
    • Question 8 | Get Controlplane Information
    • Question 9 | Kill Scheduler, Manual Scheduling
    • Question 10 | RBAC ServiceAccount Role RoleBinding
    • Question 11 | DaemonSet on all Nodes
    • Question 12 | Deployment on all Nodes
    • Question 13 | Multi Containers and Pod shared Volume
    • Question 14 | Find out Cluster Information
    • Question 15 | Cluster Event Logging
    • Question 16 | Namespaces and Api Resources
    • Question 17 | Find Container of Pod and check info
    • Question 18 | Fix Kubelet
    • Question 19 | Create Secret and mount into Pod
    • Question 20 | Update Kubernetes Version and join cluster
    • Question 21 | Create a Static Pod and Service
    • Question 22 | Check how long certificates are valid
    • Question 23 | Kubelet client/server cert info
    • Question 24 | NetworkPolicy
    • Question 25 | Etcd Snapshot Save and Restore
    • Extra Question 1 | Find Pods first to be terminated
    • Extra Question 2 | Curl Manually Contact API
    • Preview Question 1
    • Preview Question 2
    • Preview Question 3
    • CKA Tips Kubernetes 1.26


CKA Simulator Kubernetes 1.26

https://killer.sh
Pre Setup

Once you've gained access to your terminal it might be wise to spend ~1 minute to set up your environment. You could set these:

alias k=kubectl                         # will already be pre-configured

export do="--dry-run=client -o yaml"    # k create deploy nginx --image=nginx $do

export now="--force --grace-period 0"   # k delete pod x $now

Vim
The following settings will already be configured in your real exam environment in ~/.vimrc. But it can never hurt to be able to type these down:

set tabstop=2
set expandtab
set shiftwidth=2

More setup suggestions are in the tips section.

Question 1 | Contexts

Task weight: 1%

You have access to multiple clusters from your main terminal through kubectl contexts. Write all those context names into /opt/course/1/contexts.

Next write a command to display the current context into /opt/course/1/context_default_kubectl.sh, the command should use kubectl.

Finally write a second command doing the same thing into /opt/course/1/context_default_no_kubectl.sh, but without the use of kubectl.

Answer:

Maybe the fastest way is just to run:

k config get-contexts
k config get-contexts -o name > /opt/course/1/contexts

Or using jsonpath:

k config view -o yaml # overview
k config view -o jsonpath="{.contexts[*].name}"
k config view -o jsonpath="{.contexts[*].name}" | tr " " "\n" # new lines
k config view -o jsonpath="{.contexts[*].name}" | tr " " "\n" > /opt/course/1/contexts

The content should then look like:

# /opt/course/1/contexts
k8s-c1-H
k8s-c2-AC
k8s-c3-CCC

Next create the first command:

# /opt/course/1/context_default_kubectl.sh
kubectl config current-context

➜ sh /opt/course/1/context_default_kubectl.sh
k8s-c1-H

And the second one:

# /opt/course/1/context_default_no_kubectl.sh
cat ~/.kube/config | grep current

➜ sh /opt/course/1/context_default_no_kubectl.sh
current-context: k8s-c1-H

Notice: In the real exam you might need to filter and find information from bigger lists of resources, hence knowing a little jsonpath and simple bash filtering will be helpful.

The second command could also be improved to:

# /opt/course/1/context_default_no_kubectl.sh
cat ~/.kube/config | grep current | sed -e "s/current-context: //"
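The grep-plus-sed filter can be tried without any cluster access. A minimal sketch, using a made-up file /tmp/demo-kubeconfig containing just enough lines to exercise the pipeline:

```shell
# Hypothetical kubeconfig stand-in, only for testing the text filter
printf 'apiVersion: v1\ncurrent-context: k8s-c1-H\nkind: Config\n' > /tmp/demo-kubeconfig

# Step 1: grab the relevant line
grep current /tmp/demo-kubeconfig
# current-context: k8s-c1-H

# Step 2: strip the key prefix, leaving only the context name
grep current /tmp/demo-kubeconfig | sed -e "s/current-context: //"
# k8s-c1-H
```

The same sed expression works unchanged against the real ~/.kube/config in the exam environment.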

Question 2 | Schedule Pod on Controlplane Node

Task weight: 3%

Use context: kubectl config use-context k8s-c1-H

Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be named pod1-container. This Pod should only be scheduled on a controlplane node, do not add new labels to any nodes.

Answer:
First we find the controlplane node(s) and their taints:

k get node                                               # find controlplane node
k describe node cluster1-controlplane1 | grep Taint -A1  # get controlplane node taints
k get node cluster1-controlplane1 --show-labels          # get controlplane node labels

Next we create the Pod template:

# check the export on the very top of this document so we can use $do
k run pod1 --image=httpd:2.4.41-alpine $do > 2.yaml

Perform the necessary changes manually. Use the Kubernetes docs and search, for example, for tolerations and nodeSelector to find examples:

# 2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1-container                            # change
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  tolerations:                                      # add
  - key: "node-role.kubernetes.io/control-plane"    # add
    operator: "Equal"                               # add
    value: ""                                       # add
    effect: "NoSchedule"                            # add
  nodeSelector:                                     # add
    node-role.kubernetes.io/control-plane: ""       # add
status: {}

It's important to add the toleration for running on controlplane nodes, but also the nodeSelector to make sure the Pod only runs on controlplane nodes. If we only specified a toleration, the Pod could be scheduled on controlplane or worker nodes.

Now we create it:

k -f 2.yaml create

Let's check if the pod is scheduled:

➜ k get pod pod1 -o wide
NAME   READY   STATUS    RESTARTS   ...   NODE                     NOMINATED NODE
pod1   1/1     Running   0          ...   cluster1-controlplane1   <none>

Question 3 | Scale down StatefulSet

Task weight: 1%

Use context: kubectl config use-context k8s-c1-H

There are two Pods named o3db-* in Namespace project-c13. C13 management asked you to scale the Pods down to one replica to save resources.

Answer:

If we check the Pods we see two replicas:

➜ k -n project-c13 get pod | grep o3db
o3db-0   1/1   Running   0   52s
o3db-1   1/1   Running   0   42s

From their name it looks like these are managed by a StatefulSet. But if we're not sure we could also check for the most common resources which manage Pods:

➜ k -n project-c13 get deploy,ds,sts | grep o3db
statefulset.apps/o3db   2/2   2m56s

Confirmed, we have to work with a StatefulSet. To find this out we could also look at the Pod labels:

➜ k -n project-c13 get pod --show-labels | grep o3db
o3db-0   1/1   Running   0   3m29s   app=nginx,controller-revision-hash=o3db-5fbd4bb9cc,statefulset.kubernetes.io/pod-name=o3db-0
o3db-1   1/1   Running   0   3m19s   app=nginx,controller-revision-hash=o3db-5fbd4bb9cc,statefulset.kubernetes.io/pod-name=o3db-1

To fulfil the task we simply run:

➜ k -n project-c13 scale sts o3db --replicas 1
statefulset.apps/o3db scaled

➜ k -n project-c13 get sts o3db
NAME   READY   AGE
o3db   1/1     4m39s

C13 Management is happy again.

Question 4 | Pod Ready if Service is reachable

Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Do the following in Namespace default. Create a single Pod named ready-if-service-ready of image nginx:1.16.1-alpine. Configure a LivenessProbe which simply executes command true. Also configure a ReadinessProbe which does check if the url http://service-am-i-ready:80 is reachable, you can use wget -T2 -O- http://service-am-i-ready:80 for this. Start the Pod and confirm it isn't ready because of the ReadinessProbe.

Create a second Pod named am-i-ready of image nginx:1.16.1-alpine with label id: cross-server-ready. The already existing Service service-am-i-ready should now have that second Pod as endpoint.

Now the first Pod should be in ready state, confirm that.

Answer:

It’s a bit of an anti-pattern for one Pod to check another Pod for being ready using probes, hence the normally available readinessProbe.httpGet doesn’t work for absolute remote urls. Still the workaround requested in this task should show how probes and Pod<->Service communication works.

First we create the first Pod:

k run ready-if-service-ready --image=nginx:1.16.1-alpine $do > 4_pod1.yaml

Next perform the necessary additions manually:

# 4_pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ready-if-service-ready
  name: ready-if-service-ready
spec:
  containers:
  - image: nginx:1.16.1-alpine
    name: ready-if-service-ready
    resources: {}
    livenessProbe:                                      # add from here
      exec:
        command:
        - 'true'
    readinessProbe:
      exec:
        command:
        - sh
        - -c
        - 'wget -T2 -O- http://service-am-i-ready:80'   # to here
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Then create the Pod:

k -f 4_pod1.yaml create

And confirm it’s in a non-ready state:

➜ k get pod ready-if-service-ready
NAME                     READY   STATUS    RESTARTS   AGE
ready-if-service-ready   0/1     Running   0          7s

We can also check the reason for this using describe:

➜ k describe pod ready-if-service-ready
...
  Warning  Unhealthy  18s  kubelet, cluster1-node1  Readiness probe failed: Connecting to service-am-i-ready:80 (10.109.194.234:80)
wget: download timed out

Now we create the second Pod:

k run am-i-ready --image=nginx:1.16.1-alpine --labels="id=cross-server-ready"

The already existing Service service-am-i-ready should now have an Endpoint:

k describe svc service-am-i-ready
k get ep # also possible

Which will result in our first Pod being ready, just give it a minute for the Readiness probe to check again:

➜ k get pod ready-if-service-ready
NAME                     READY   STATUS    RESTARTS   AGE
ready-if-service-ready   1/1     Running   0          53s

Look at these Pods coworking together!

Question 5 | Kubectl sorting

Task weight: 1%

Use context: kubectl config use-context k8s-c1-H

There are various Pods in all namespaces. Write a command into /opt/course/5/find_pods.sh which lists all Pods sorted by their AGE (metadata.creationTimestamp).

Write a second command into /opt/course/5/find_pods_uid.sh which lists all Pods sorted by field metadata.uid. Use kubectl sorting for both commands.

Answer:

A good resource here (and for many other things) is the kubectl cheat sheet. You can reach it fast when searching for "cheat sheet" in the Kubernetes docs.

# /opt/course/5/find_pods.sh
kubectl get pod -A --sort-by=.metadata.creationTimestamp

And to execute:

➜ sh /opt/course/5/find_pods.sh
NAMESPACE     NAME                                             ...   AGE
kube-system   kube-scheduler-cluster1-controlplane1            ...   63m
kube-system   etcd-cluster1-controlplane1                      ...   63m
kube-system   kube-apiserver-cluster1-controlplane1            ...   63m
kube-system   kube-controller-manager-cluster1-controlplane1   ...   63m
...

For the second command:

# /opt/course/5/find_pods_uid.sh
kubectl get pod -A --sort-by=.metadata.uid

And to execute:

➜ sh /opt/course/5/find_pods_uid.sh
NAMESPACE        NAME                                    ...   AGE
kube-system      coredns-5644d7b6d9-vwm7g                ...   68m
project-c13      c13-3cc-runner-heavy-5486d76dd4-ddvlt   ...   63m
project-hamster  web-hamster-shop-849966f479-278vp       ...   63m
project-c13      c13-3cc-web-646b6c8756-qsg4b            ...   63m

Question 6 | Storage, PV, PVC, Pod volume

Task weight: 8%

Use context: kubectl config use-context k8s-c1-H

Create a new PersistentVolume named safari-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.

Next create a new PersistentVolumeClaim in Namespace project-tiger named safari-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should be bound to the PV correctly.

Finally create a new Deployment safari in Namespace project-tiger which mounts that volume at /tmp/safari-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.

Answer

Find an example from https://kubernetes.io/docs and alter it:

# 6_pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: safari-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data"

Then create it:

k -f 6_pv.yaml create

Next the PersistentVolumeClaim:

Find an example from https://kubernetes.io/docs and alter it:

# 6_pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: safari-pvc
  namespace: project-tiger
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Then create:

k -f 6_pvc.yaml create

And check that both have the status Bound:

➜ k -n project-tiger get pv,pvc
NAME                         CAPACITY   ...   STATUS   CLAIM                      ...
persistentvolume/safari-pv   2Gi        ...   Bound    project-tiger/safari-pvc   ...

NAME                               STATUS   VOLUME      CAPACITY   ...
persistentvolumeclaim/safari-pvc   Bound    safari-pv   2Gi        ...

Next we create a Deployment and mount that volume:

k -n project-tiger create deploy safari \
  --image=httpd:2.4.41-alpine $do > 6_dep.yaml

Alter the yaml to mount the volume:

# 6_dep.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: safari
  name: safari
  namespace: project-tiger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: safari
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: safari
    spec:
      volumes:                          # add
      - name: data                      # add
        persistentVolumeClaim:          # add
          claimName: safari-pvc         # add
      containers:
      - image: httpd:2.4.41-alpine
        name: container
        volumeMounts:                   # add
        - name: data                    # add
          mountPath: /tmp/safari-data   # add

Execute:

k -f 6_dep.yaml create

We can confirm it’s mounting correctly:

➜ k -n project-tiger describe pod safari-5cbf46d6d-mjhsb | grep -A2 Mounts:
    Mounts:
      /tmp/safari-data from data (rw) # there it is
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-n2sjj (ro)

Question 7 | Node and Pod Resource Usage

Task weight: 1%

Use context: kubectl config use-context k8s-c1-H

The metrics-server has been installed in the cluster. Your colleague would like to know the kubectl commands to:

    • show Nodes resource usage
    • show Pods and their containers resource usage

Please write the commands into /opt/course/7/node.sh and /opt/course/7/pod.sh.

Answer:

The command we need to use here is top:

➜ k top -h
Display Resource (CPU/Memory/Storage) usage.

 The top command allows you to see the resource consumption for nodes or pods.

 This command requires Metrics Server to be correctly configured and working on the server.

Available Commands:
  node        Display Resource (CPU/Memory/Storage) usage of nodes
  pod         Display Resource (CPU/Memory/Storage) usage of pods

We see that the metrics server provides information about resource usage:

➜ k top node
NAME                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
cluster1-controlplane1   178m         8%     1091Mi          57%
cluster1-node1           66m          6%     834Mi           44%
cluster1-node2           91m          9%     791Mi           41%

We create the first file:

# /opt/course/7/node.sh
kubectl top node

For the second file we might need to check the docs again:

➜ k top pod -h
Display Resource (CPU/Memory/Storage) usage of pods.
...
    Namespace in current context is ignored even if specified with --namespace.
      --containers=false: If present, print usage of containers within a pod.
      --no-headers=false: If present, print output without headers.
...

With this we can finish this task:

# /opt/course/7/pod.sh
kubectl top pod --containers=true

Question 8 | Get Controlplane Information

Task weight: 2%

Use context: kubectl config use-context k8s-c1-H

Ssh into the controlplane node with ssh cluster1-controlplane1. Check how the controlplane components kubelet, kube-apiserver, kube-scheduler, kube-controller-manager and etcd are started/installed on the controlplane node. Also find out the name of the DNS application and how it's started/installed on the controlplane node.

Write your findings into file /opt/course/8/controlplane-components.txt. The file should be structured like:

# /opt/course/8/controlplane-components.txt
kubelet: [TYPE]
kube-apiserver: [TYPE]
kube-scheduler: [TYPE]
kube-controller-manager: [TYPE]
etcd: [TYPE]
dns: [TYPE] [NAME]

Choices of [TYPE] are: not-installed, process, static-pod, pod

Answer:

We could start by finding processes of the requested components, especially the kubelet at first:

➜ ssh cluster1-controlplane1

root@cluster1-controlplane1:~# ps aux | grep kubelet # shows kubelet process

We can see which components are controlled via systemd looking at /etc/systemd/system directory:

➜ root@cluster1-controlplane1:~# find /etc/systemd/system/ | grep kube
/etc/systemd/system/kubelet.service.d
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
/etc/systemd/system/multi-user.target.wants/kubelet.service

➜ root@cluster1-controlplane1:~# find /etc/systemd/system/ | grep etcd

This shows kubelet is controlled via systemd, but no other service named kube nor etcd. It seems that this cluster has been setup using kubeadm, so we check in the default manifests directory:

➜ root@cluster1-controlplane1:~# find /etc/kubernetes/manifests/
/etc/kubernetes/manifests/
/etc/kubernetes/manifests/kube-controller-manager.yaml
/etc/kubernetes/manifests/etcd.yaml
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml

(The kubelet could also have a different manifests directory specified via parameter --pod-manifest-path in its systemd startup config)

This means the main 4 controlplane services are set up as static Pods. Actually, let's check all Pods running in the kube-system Namespace on the controlplane node:

➜ root@cluster1-controlplane1:~# kubectl -n kube-system get pod -o wide | grep controlplane1
coredns-5644d7b6d9-c4f68                         1/1   Running   ...   cluster1-controlplane1
coredns-5644d7b6d9-t84sc                         1/1   Running   ...   cluster1-controlplane1
etcd-cluster1-controlplane1                      1/1   Running   ...   cluster1-controlplane1
kube-apiserver-cluster1-controlplane1            1/1   Running   ...   cluster1-controlplane1
kube-controller-manager-cluster1-controlplane1   1/1   Running   ...   cluster1-controlplane1
kube-proxy-q955p                                 1/1   Running   ...   cluster1-controlplane1
kube-scheduler-cluster1-controlplane1            1/1   Running   ...   cluster1-controlplane1
weave-net-mwj47                                  2/2   Running   ...   cluster1-controlplane1

There we see the 4 static pods, with -cluster1-controlplane1 as suffix.

We also see that the dns application seems to be coredns, but how is it controlled?

➜ root@cluster1-controlplane1:~# kubectl -n kube-system get ds
NAME         DESIRED   CURRENT   ...   NODE SELECTOR            AGE
kube-proxy   3         3         ...   kubernetes.io/os=linux   155m
weave-net    3         3         ...   <none>                   155m

➜ root@cluster1-controlplane1:~# kubectl -n kube-system get deploy
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
coredns   2/2     2            2           155m

Seems like coredns is controlled via a Deployment. We combine our findings in the requested file:

# /opt/course/8/controlplane-components.txt
kubelet: process
kube-apiserver: static-pod
kube-scheduler: static-pod
kube-controller-manager: static-pod
etcd: static-pod
dns: pod coredns

You should be comfortable investigating a running cluster, know different methods on how a cluster and its services can be setup and be able to troubleshoot and find error sources.

Question 9 | Kill Scheduler, Manual Scheduling

Task weight: 5%

Use context: kubectl config use-context k8s-c2-AC

Ssh into the controlplane node with ssh cluster2-controlplane1. Temporarily stop the kube-scheduler, this means in a way that you can start it again afterwards.

Create a single Pod named manual-schedule of image httpd:2.4-alpine, confirm it's created but not scheduled on any node.

Now you're the scheduler and have all its power, manually schedule that Pod on node cluster2-controlplane1. Make sure it's running.

Start the kube-scheduler again and confirm it's running correctly by creating a second Pod named manual-schedule2 of image httpd:2.4-alpine and check if it's running on cluster2-node1.

Answer:
Stop the Scheduler

First we find the controlplane node:

➜ k get node
NAME                     STATUS   ROLES           AGE   VERSION
cluster2-controlplane1   Ready    control-plane   26h   v1.26.0
cluster2-node1           Ready    <none>          26h   v1.26.0

Then we connect and check if the scheduler is running:

➜ ssh cluster2-controlplane1

➜ root@cluster2-controlplane1:~# kubectl -n kube-system get pod | grep schedule
kube-scheduler-cluster2-controlplane1   1/1   Running   0   6s

Kill the Scheduler (temporarily):

➜ root@cluster2-controlplane1:~# cd /etc/kubernetes/manifests/
➜ root@cluster2-controlplane1:~# mv kube-scheduler.yaml ..

And it should be stopped:

➜ root@cluster2-controlplane1:~# kubectl -n kube-system get pod | grep schedule
➜ root@cluster2-controlplane1:~#

Create a Pod
Now we create the Pod:

k run manual-schedule --image=httpd:2.4-alpine

And confirm it has no node assigned:

➜ k get pod manual-schedule -o wide
NAME              READY   STATUS    ...   NODE     NOMINATED NODE
manual-schedule   0/1     Pending   ...   <none>   <none>

Manually schedule the Pod
Let’s play the scheduler now:

k get pod manual-schedule -o yaml > 9.yaml

# 9.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-09-04T15:51:02Z"
  labels:
    run: manual-schedule
  managedFields:
...
    manager: kubectl-run
    operation: Update
    time: "2020-09-04T15:51:02Z"
  name: manual-schedule
  namespace: default
  resourceVersion: "3515"
  selfLink: /api/v1/namespaces/default/pods/manual-schedule
  uid: 8e9d2532-4779-4e63-b5af-feb82c74a935
spec:
  nodeName: cluster2-controlplane1        # add the controlplane node name
  containers:
  - image: httpd:2.4-alpine
    imagePullPolicy: IfNotPresent
    name: manual-schedule
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-nxnc7
      readOnly: true
  dnsPolicy: ClusterFirst
...

The only thing a scheduler does is set the nodeName for a Pod declaration. How it finds the correct node to schedule on is a much more complicated matter and takes many variables into account.

As we cannot kubectl apply or kubectl edit, in this case we need to delete and create or replace:

k -f 9.yaml replace --force

How does it look?

➜ k get pod manual-schedule -o wide
NAME              READY   STATUS    ...   NODE
manual-schedule   1/1     Running   ...   cluster2-controlplane1
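Setting nodeName directly in the manifest is the manual shortcut. For background, the real scheduler performs the same assignment through the Pod's binding subresource. A sketch of the Binding object it would post, using the Pod and node names from this task:

```yaml
# Sketch only: the Binding the scheduler would create for this assignment
apiVersion: v1
kind: Binding
metadata:
  name: manual-schedule           # the Pod to bind
target:
  apiVersion: v1
  kind: Node
  name: cluster2-controlplane1    # the Node to bind it to
```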

It looks like our Pod is running on the controlplane now as requested, although no tolerations were specified. Only the scheduler takes taints/tolerations/affinity into account when finding the correct node name. That's why it's still possible to assign Pods manually directly to a controlplane node and skip the scheduler.

Start the scheduler again

➜ ssh cluster2-controlplane1
➜ root@cluster2-controlplane1:~# cd /etc/kubernetes/manifests/
➜ root@cluster2-controlplane1:~# mv ../kube-scheduler.yaml .

Check it's running:

➜ root@cluster2-controlplane1:~# kubectl -n kube-system get pod | grep schedule
kube-scheduler-cluster2-controlplane1   1/1   Running   0   16s

Schedule a second test Pod:

k run manual-schedule2 --image=httpd:2.4-alpine

➜ k get pod -o wide | grep schedule
manual-schedule    1/1   Running   ...   cluster2-controlplane1
manual-schedule2   1/1   Running   ...   cluster2-node1

Back to normal.

Question 10 | RBAC ServiceAccount Role RoleBinding

Task weight: 6%

Use context: kubectl config use-context k8s-c1-H

Create a new ServiceAccount processor in Namespace project-hamster. Create a Role and RoleBinding, both named processor as well. These should allow the new SA to only create Secrets and ConfigMaps in that Namespace.

Answer:

Let's talk a little about RBAC resources:

A ClusterRole|Role defines a set of permissions and where it is available, in the whole cluster or just a single Namespace.

A ClusterRoleBinding|RoleBinding connects a set of permissions with an account and defines where it is applied, in the whole cluster or just a single Namespace.

Because of this there are 4 different RBAC combinations and 3 valid ones:

    • Role + RoleBinding (available in single Namespace, applied in single Namespace)
    • ClusterRole + ClusterRoleBinding (available cluster-wide, applied cluster-wide)
    • ClusterRole + RoleBinding (available cluster-wide, applied in single Namespace)
    • Role + ClusterRoleBinding (NOT POSSIBLE: available in single Namespace, applied cluster-wide)

To the solution. We first create the ServiceAccount:

➜ k -n project-hamster create sa processor
serviceaccount/processor created

Then for the Role:

k -n project-hamster create role -h # examples

So we execute:

k -n project-hamster create role processor --verb=create --resource=secret --resource=configmap

Which will create a Role like:

# kubectl -n project-hamster create role processor --verb=create --resource=secret --resource=configmap
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: processor
  namespace: project-hamster
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  - configmaps
  verbs:
  - create

Now we bind the Role to the ServiceAccount:

k -n project-hamster create rolebinding -h # examples

So we create it:

k -n project-hamster create rolebinding processor --role processor --serviceaccount project-hamster:processor

This will create a RoleBinding like:

# kubectl -n project-hamster create rolebinding processor --role processor --serviceaccount project-hamster:processor
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: processor
  namespace: project-hamster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: processor
subjects:
- kind: ServiceAccount
  name: processor
  namespace: project-hamster

To test our RBAC setup we can use kubectl auth can-i:

k auth can-i -h # examples

Like this:

➜ k -n project-hamster auth can-i create secret --as system:serviceaccount:project-hamster:processor
yes

➜ k -n project-hamster auth can-i create configmap --as system:serviceaccount:project-hamster:processor
yes

➜ k -n project-hamster auth can-i create pod --as system:serviceaccount:project-hamster:processor
no

➜ k -n project-hamster auth can-i delete secret --as system:serviceaccount:project-hamster:processor
no

➜ k -n project-hamster auth can-i get configmap --as system:serviceaccount:project-hamster:processor
no

Done.

Question 11 | DaemonSet on all Nodes

Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Use Namespace project-tiger for the following. Create a DaemonSet named ds-important with image httpd:2.4-alpine and labels id=ds-important and uuid=18426a0b-5f59-4e10-923f-c0e078e82462. The Pods it creates should request 10 millicore cpu and 10 mebibyte memory. The Pods of that DaemonSet should run on all nodes, also controlplanes.

Answer:

As of now we aren’t able to create a DaemonSet directly using kubectl, so we create a Deployment and just change it up:

k -n project-tiger create deployment --image=httpd:2.4-alpine ds-important $do > 11.yaml

(Sure you could also search for a DaemonSet example yaml in the Kubernetes docs and alter it.)

Then we adjust the yaml to:

# 11.yaml
apiVersion: apps/v1
kind: DaemonSet                                     # change from Deployment to DaemonSet
metadata:
  creationTimestamp: null
  labels:                                           # add
    id: ds-important                                # add
    uuid: 18426a0b-5f59-4e10-923f-c0e078e82462      # add
  name: ds-important
  namespace: project-tiger                          # important
spec:
  #replicas: 1                                      # remove
  selector:
    matchLabels:
      id: ds-important                              # add
      uuid: 18426a0b-5f59-4e10-923f-c0e078e82462    # add
  #strategy: {}                                     # remove
  template:
    metadata:
      creationTimestamp: null
      labels:
        id: ds-important                            # add
        uuid: 18426a0b-5f59-4e10-923f-c0e078e82462  # add
    spec:
      containers:
      - image: httpd:2.4-alpine
        name: ds-important
        resources:
          requests:                                 # add
            cpu: 10m                                # add
            memory: 10Mi                            # add
      tolerations:                                  # add
      - effect: NoSchedule                          # add
        key: node-role.kubernetes.io/control-plane  # add
#status: {}                                         # remove

It was requested that the DaemonSet runs on all nodes, so we need to specify the toleration for this.

Let’s confirm:

k -f 11.yaml create

➜ k -n project-tiger get ds
NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ds-important   3         3         3       3            3           <none>          8s

➜ k -n project-tiger get pod -l id=ds-important -o wide
NAME                 READY   STATUS    ...   NODE
ds-important-6pvgm   1/1     Running   ...   cluster1-node1
ds-important-lh5ts   1/1     Running   ...   cluster1-controlplane1
ds-important-qhjcq   1/1     Running   ...   cluster1-node2

Question 12 | Deployment on all Nodes

Task weight: 6%

Use context: kubectl config use-context k8s-c1-H

Use Namespace project-tiger for the following. Create a Deployment named deploy-important with label id=very-important (the Pods should also have this label) and 3 replicas. It should contain two containers, the first named container1 with image nginx:1.17.6-alpine and the second one named container2 with image kubernetes/pause.

There should only ever be one Pod of that Deployment running on one worker node. We have two worker nodes: cluster1-node1 and cluster1-node2. Because the Deployment has three replicas the result should be that on both nodes one Pod is running. The third Pod won't be scheduled, unless a new worker node will be added.

In a way we kind of simulate the behaviour of a DaemonSet here, but using a Deployment and a fixed number of replicas.

Answer:

There are two possible ways, one using podAntiAffinity and one using topologySpreadConstraint.

PodAntiAffinity

The idea here is that we create an "Inter-pod anti-affinity" which allows us to say a Pod should only be scheduled on a node where another Pod of a specific label (here the same label) is not already running.

Let’s begin by creating the Deployment template:

k -n project-tiger create deployment --image=nginx:1.17.6-alpine deploy-important $do > 12.yaml

Then change the yaml to:

# 12.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    id: very-important                # change
  name: deploy-important
  namespace: project-tiger            # important
spec:
  replicas: 3                         # change
  selector:
    matchLabels:
      id: very-important              # change
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        id: very-important            # change
    spec:
      containers:
      - image: nginx:1.17.6-alpine
        name: container1              # change
        resources: {}
      - image: kubernetes/pause       # add
        name: container2              # add
      affinity:                                             # add
        podAntiAffinity:                                    # add
          requiredDuringSchedulingIgnoredDuringExecution:   # add
          - labelSelector:                                  # add
              matchExpressions:                             # add
              - key: id                                     # add
                operator: In                                # add
                values:                                     # add
                - very-important                            # add
            topologyKey: kubernetes.io/hostname             # add
status: {}

We specify a topologyKey, which is a pre-populated Kubernetes label; you can find it by describing a node.
TopologySpreadConstraints
We can achieve the same with topologySpreadConstraints. Best to try out and play with both.

# 12.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    id: very-important                # change
  name: deploy-important
  namespace: project-tiger            # important
spec:
  replicas: 3                         # change
  selector:
    matchLabels:
      id: very-important              # change
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        id: very-important            # change
    spec:
      containers:
      - image: nginx:1.17.6-alpine
        name: container1              # change
        resources: {}
      - image: kubernetes/pause       # add
        name: container2              # add
      topologySpreadConstraints:                 # add
      - maxSkew: 1                               # add
        topologyKey: kubernetes.io/hostname      # add
        whenUnsatisfiable: DoNotSchedule         # add
        labelSelector:                           # add
          matchLabels:                           # add
            id: very-important                   # add
status: {}

Apply and Run
Let’s run it:

k -f 12.yaml create

Then we check the Deployment status where it shows 2/3 ready count:

➜ k -n project-tiger get deploy -l id=very-important
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
deploy-important   2/3     3            2           2m35s

And running the following we see one Pod on each worker node and one not scheduled.

➜ k -n project-tiger get pod -o wide -l id=very-important
NAME                                READY   STATUS    ...   NODE
deploy-important-58db9db6fc-9ljpw   2/2     Running   ...   cluster1-node1
deploy-important-58db9db6fc-lnxdb   0/2     Pending   ...   <none>
deploy-important-58db9db6fc-p2rz8   2/2     Running   ...   cluster1-node2

If we kubectl describe the Pod deploy-important-58db9db6fc-lnxdb it will show us the reason for not scheduling is our implemented podAntiAffinity ruling:

Warning FailedScheduling 63s (x3 over 65s) default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/control-plane: }, that the pod didn’t tolerate, 2 node(s) didn’t match pod affinity/anti-affinity, 2 node(s) didn’t satisfy existing pods anti-affinity rules.

Or our topologySpreadConstraints:

Warning FailedScheduling 16s default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/control-plane: }, that the pod didn’t tolerate, 2 node(s) didn’t match pod topology spread constraints.

Question 13 | Multi Containers and Pod shared Volume

Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Create a Pod named multi-container-playground in Namespace default with three containers, named c1, c2 and c3. There should be a volume attached to that Pod and mounted into every container, but the volume shouldn't be persisted or shared with other Pods.

Container c1 should be of image nginx:1.17.6-alpine and have the name of the node where its Pod is running available as environment variable MY_NODE_NAME.

Container c2 should be of image busybox:1.31.1 and write the output of the date command every second in the shared volume into file date.log. You can use while true; do date >> /your/vol/path/date.log; sleep 1; done for this.

Container c3 should be of image busybox:1.31.1 and constantly send the content of file date.log from the shared volume to stdout. You can use tail -f /your/vol/path/date.log for this.

Check the logs of container c3 to confirm correct setup.

Answer:

First we create the Pod template:

k run multi-container-playground --image=nginx:1.17.6-alpine $do > 13.yaml

And add the other containers and the commands they should execute:

# 13.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: multi-container-playground
  name: multi-container-playground
spec:
  containers:
  - image: nginx:1.17.6-alpine
    name: c1                                                                      # change
    resources: {}
    env:                                                                          # add
    - name: MY_NODE_NAME                                                          # add
      valueFrom:                                                                  # add
        fieldRef:                                                                 # add
          fieldPath: spec.nodeName                                                # add
    volumeMounts:                                                                 # add
    - name: vol                                                                   # add
      mountPath: /vol                                                             # add
  - image: busybox:1.31.1                                                         # add
    name: c2                                                                      # add
    command: ["sh", "-c", "while true; do date >> /vol/date.log; sleep 1; done"]  # add
    volumeMounts:                                                                 # add
    - name: vol                                                                   # add
      mountPath: /vol                                                             # add
  - image: busybox:1.31.1                                                         # add
    name: c3                                                                      # add
    command: ["sh", "-c", "tail -f /vol/date.log"]                                # add
    volumeMounts:                                                                 # add
    - name: vol                                                                   # add
      mountPath: /vol                                                             # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:                                                                        # add
  - name: vol                                                                     # add
    emptyDir: {}                                                                  # add
status: {}

k -f 13.yaml create
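The shell commands used for c2 and c3 can be tried locally without any cluster, writing to /tmp instead of the shared volume (a quick sketch; a finite loop replaces while true so it terminates):

```shell
# simulate c2: append a few timestamps to the log file
for i in 1 2 3; do date >> /tmp/date.log; done

# simulate c3: read the file back (plain tail instead of tail -f so it exits)
tail -n 3 /tmp/date.log
```

In the Pod, tail -f keeps following the file, which is exactly why kubectl logs -c c3 continuously shows new dates.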

Oh boy, lots of requested things. We check if everything is good with the Pod:

? k get pod multi-container-playground
NAME                         READY   STATUS    RESTARTS   AGE
multi-container-playground   3/3     Running   0          95s

Good, then we check if container c1 has the requested node name as env variable:

? k exec multi-container-playground -c c1 -- env | grep MY
MY_NODE_NAME=cluster1-node2

And finally we check the logging:

? k logs multi-container-playground -c c3
Sat Dec  7 16:05:10 UTC 2077
Sat Dec  7 16:05:11 UTC 2077
Sat Dec  7 16:05:12 UTC 2077
Sat Dec  7 16:05:13 UTC 2077
Sat Dec  7 16:05:14 UTC 2077
Sat Dec  7 16:05:15 UTC 2077
Sat Dec  7 16:05:16 UTC 2077

Question 14 | Find out Cluster Information

Task weight: 2%

Use context: kubectl config use-context k8s-c1-H

You're asked to find out following information about the cluster k8s-c1-H:

1. How many controlplane nodes are available?
2. How many worker nodes are available?
3. What is the Service CIDR?
4. Which Networking (or CNI Plugin) is configured and where is its config file?
5. Which suffix will static pods have that run on cluster1-node1?

Write your answers into file /opt/course/14/cluster-info, structured like this:

# /opt/course/14/cluster-info
1: [ANSWER]
2: [ANSWER]
3: [ANSWER]
4: [ANSWER]
5: [ANSWER]

Answer:
How many controlplane and worker nodes are available?

? k get node
NAME                     STATUS   ROLES           AGE   VERSION
cluster1-controlplane1   Ready    control-plane   27h   v1.26.0
cluster1-node1           Ready    <none>          27h   v1.26.0
cluster1-node2           Ready    <none>          27h   v1.26.0

We see one controlplane and two workers.

What is the Service CIDR?

? ssh cluster1-controlplane1

? root@cluster1-controlplane1:~# cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep range
    - --service-cluster-ip-range=10.96.0.0/12

Which Networking (or CNI Plugin) is configured and where is its config file?

? root@cluster1-controlplane1:~# find /etc/cni/net.d/
/etc/cni/net.d/
/etc/cni/net.d/10-weave.conflist

? root@cluster1-controlplane1:~# cat /etc/cni/net.d/10-weave.conflist
{
    "cniVersion": "0.3.0",
    "name": "weave",
...

By default the kubelet looks into /etc/cni/net.d to discover the CNI plugins. This is the same on every controlplane and worker node.

Which suffix will static pods have that run on cluster1-node1?

The suffix is the node hostname with a leading hyphen. It used to be -static in earlier Kubernetes versions.

Result

The resulting /opt/course/14/cluster-info could look like:

# /opt/course/14/cluster-info

# How many controlplane nodes are available?
1: 1

# How many worker nodes are available?
2: 2

# What is the Service CIDR?
3: 10.96.0.0/12

# Which Networking (or CNI Plugin) is configured and where is its config file?
4: Weave, /etc/cni/net.d/10-weave.conflist

# Which suffix will static pods have that run on cluster1-node1?
5: -cluster1-node1

Question 15 | Cluster Event Logging

Task weight: 3%

Use context: kubectl config use-context k8s-c2-AC

Write a command into /opt/course/15/cluster_events.sh which shows the latest events in the whole cluster, ordered by time (metadata.creationTimestamp). Use kubectl for it.

Now kill the kube-proxy Pod running on node cluster2-node1 and write the events this caused into /opt/course/15/pod_kill.log.

Finally kill the containerd container of the kube-proxy Pod on node cluster2-node1 and write the events into /opt/course/15/container_kill.log.

Do you notice differences in the events both actions caused?

Answer:

# /opt/course/15/cluster_events.sh
kubectl get events -A --sort-by=.metadata.creationTimestamp

Now we kill the kube-proxy Pod:

k -n kube-system get pod -o wide | grep proxy # find pod running on cluster2-node1

k -n kube-system delete pod kube-proxy-z64cg

Now check the events:

sh /opt/course/15/cluster_events.sh

Write the events the killing caused into /opt/course/15/pod_kill.log:

# /opt/course/15/pod_kill.log
kube-system   9s          Normal   Killing            pod/kube-proxy-jsv7t   ...
kube-system   3s          Normal   SuccessfulCreate   daemonset/kube-proxy   ...
kube-system   <unknown>   Normal   Scheduled          pod/kube-proxy-m52sx   ...
default       2s          Normal   Starting           node/cluster2-node1    ...
kube-system   2s          Normal   Created            pod/kube-proxy-m52sx   ...
kube-system   2s          Normal   Pulled             pod/kube-proxy-m52sx   ...
kube-system   2s          Normal   Started            pod/kube-proxy-m52sx   ...

Finally we will try to provoke events by killing the container belonging to the kube-proxy Pod:

? ssh cluster2-node1

? root@cluster2-node1:~# crictl ps | grep kube-proxy
1e020b43c4423   36c4ebbc9d979   About an hour ago   Running   kube-proxy ...

? root@cluster2-node1:~# crictl rm 1e020b43c4423
1e020b43c4423

? root@cluster2-node1:~# crictl ps | grep kube-proxy
0ae4245707910   36c4ebbc9d979   17 seconds ago      Running   kube-proxy ...

We killed the main container (1e020b43c4423), but also noticed that a new container (0ae4245707910) was directly created. Thanks Kubernetes!
Now we see if this caused events again and we write those into the second file:

sh /opt/course/15/cluster_events.sh

# /opt/course/15/container_kill.log
kube-system   13s   Normal   Created   pod/kube-proxy-m52sx   ...
kube-system   13s   Normal   Pulled    pod/kube-proxy-m52sx   ...
kube-system   13s   Normal   Started   pod/kube-proxy-m52sx   ...

Comparing the events we see that when we deleted the whole Pod there were more things to be done, hence more events. For example, the DaemonSet had to get involved to re-create the missing Pod. Whereas when we manually killed the main container of the Pod, the Pod itself still existed and only its container needed to be re-created, hence fewer events.

Question 16 | Namespaces and Api Resources

Task weight: 2%

Use context: kubectl config use-context k8s-c1-H

Write the names of all namespaced Kubernetes resources (like Pod, Secret, ConfigMap...) into /opt/course/16/resources.txt.

Find the project-* Namespace with the highest number of Roles defined in it and write its name and amount of Roles into /opt/course/16/crowded-namespace.txt.

Answer:
Namespaced Resources
Now we can get a list of all resources like:

k api-resources    # shows all
k api-resources -h # help always good

k api-resources --namespaced -o name > /opt/course/16/resources.txt

Which results in the file:

# /opt/course/16/resources.txt
bindings
configmaps
endpoints
events
limitranges
persistentvolumeclaims
pods
podtemplates
replicationcontrollers
resourcequotas
secrets
serviceaccounts
services
controllerrevisions.apps
daemonsets.apps
deployments.apps
replicasets.apps
statefulsets.apps
localsubjectaccessreviews.authorization.k8s.io
horizontalpodautoscalers.autoscaling
cronjobs.batch
jobs.batch
leases.coordination.k8s.io
events.events.k8s.io
ingresses.extensions
ingresses.networking.k8s.io
networkpolicies.networking.k8s.io
poddisruptionbudgets.policy
rolebindings.rbac.authorization.k8s.io
roles.rbac.authorization.k8s.io

Namespace with most Roles:

? k -n project-c13 get role --no-headers | wc -l
No resources found in project-c13 namespace.
0

? k -n project-c14 get role --no-headers | wc -l
300

? k -n project-hamster get role --no-headers | wc -l
No resources found in project-hamster namespace.
0

? k -n project-snake get role --no-headers | wc -l
No resources found in project-snake namespace.
0

? k -n project-tiger get role --no-headers | wc -l
No resources found in project-tiger namespace.
0

Finally we write the name and amount into the file:

# /opt/course/16/crowded-namespace.txt
project-c14 with 300 resources

Question 17 | Find Container of Pod and check info

Task weight: 3%

Use context: kubectl config use-context k8s-c1-H

In Namespace project-tiger create a Pod named tigers-reunite of image httpd:2.4.41-alpine with labels pod=container and container=pod. Find out on which node the Pod is scheduled. Ssh into that node and find the containerd container belonging to that Pod.

Using command crictl:

Write the ID of the container and the info.runtimeType into /opt/course/17/pod-container.txt
Write the logs of the container into /opt/course/17/pod-container.log

Answer:

First we create the Pod:

k -n project-tiger run tigers-reunite --image=httpd:2.4.41-alpine --labels "pod=container,container=pod"

Next we find out the node it’s scheduled on:

k -n project-tiger get pod -o wide

or fancy:

k -n project-tiger get pod tigers-reunite -o jsonpath="{.spec.nodeName}"

Then we ssh into that node and check the container info:

? ssh cluster1-node2

? root@cluster1-node2:~# crictl ps | grep tigers-reunite
b01edbe6f89ed   54b0995a63052   5 seconds ago   Running   tigers-reunite ...

? root@cluster1-node2:~# crictl inspect b01edbe6f89ed | grep runtimeType
    "runtimeType": "io.containerd.runc.v2",

Then we fill the requested file (on the main terminal):

# /opt/course/17/pod-container.txt
b01edbe6f89ed io.containerd.runc.v2

Finally we write the container logs in the second file:

ssh cluster1-node2 'crictl logs b01edbe6f89ed' &> /opt/course/17/pod-container.log

The &> in the above command redirects both the standard output and standard error.
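The difference between > and &> is easy to verify locally, independent of the cluster (a minimal sketch; &> is bash syntax, so the commands are wrapped in bash -c):

```shell
# plain > captures only stdout; stderr is discarded separately here for clarity
bash -c 'echo to-stdout; echo to-stderr >&2' > /tmp/only-stdout.log 2>/dev/null

# &> captures both streams into one file
bash -c "bash -c 'echo to-stdout; echo to-stderr >&2' &> /tmp/both.log"

cat /tmp/both.log
```

This matters for crictl logs because container runtimes often write log output to stderr, which a plain > would silently drop.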

You could also simply run crictl logs on the node and copy the content manually, if it’s not a lot. The file should look like:

# /opt/course/17/pod-container.log
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.37. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.37. Set the 'ServerName' directive globally to suppress this message
[Mon Sep 13 13:32:18.555280 2021] [mpm_event:notice] [pid 1:tid 139929534545224] AH00489: Apache/2.4.41 (Unix) configured -- resuming normal operations
[Mon Sep 13 13:32:18.555610 2021] [core:notice] [pid 1:tid 139929534545224] AH00094: Command line: 'httpd -D FOREGROUND'

Question 18 | Fix Kubelet

Task weight: 8%

Use context: kubectl config use-context k8s-c3-CCC

There seems to be an issue with the kubelet not running on cluster3-node1. Fix it and confirm that the cluster has node cluster3-node1 available in Ready state afterwards. You should be able to schedule a Pod on cluster3-node1 afterwards.

Write the reason of the issue into /opt/course/18/reason.txt.

Answer:

The procedure on tasks like these should be to check if the kubelet is running, if not start it, then check its logs and correct errors if there are some.

It's always helpful to check if other clusters already have some of the components defined and running, so you can copy and use existing config files. Though in this case it might not be necessary.

Check node status:

? k get node
NAME                     STATUS     ROLES           AGE   VERSION
cluster3-controlplane1   Ready      control-plane   14d   v1.26.0
cluster3-node1           NotReady   <none>          14d   v1.26.0

First we check if the kubelet is running:

? ssh cluster3-node1

? root@cluster3-node1:~# ps aux | grep kubelet
root     29294  0.0  0.2  14856  1016 pts/0    S+   11:30   0:00 grep --color=auto kubelet

Nope, so we check if it's configured using systemd as service:

? root@cluster3-node1:~# service kubelet status
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: inactive (dead) since Sun 2019-12-08 11:30:06 UTC; 50min 52s ago

Yes, it’s configured as a service with config at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, but we see it’s inactive. Let’s try to start it:

? root@cluster3-node1:~# service kubelet start

? root@cluster3-node1:~# service kubelet status
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Thu 2020-04-30 22:03:10 UTC; 3s ago
     Docs: https://kubernetes.io/docs/home/
  Process: 5989 ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=203/EXEC)
 Main PID: 5989 (code=exited, status=203/EXEC)

Apr 30 22:03:10 cluster3-node1 systemd[5989]: kubelet.service: Failed at step EXEC spawning /usr/local/bin/kubelet: No such file or directory
Apr 30 22:03:10 cluster3-node1 systemd[1]: kubelet.service: Main process exited, code=exited, status=203/EXEC
Apr 30 22:03:10 cluster3-node1 systemd[1]: kubelet.service: Failed with result 'exit-code'.

We see it’s trying to execute /usr/local/bin/kubelet with some parameters defined in its service config file. A good way to find errors and get more logs is to run the command manually (usually also with its parameters).

? root@cluster3-node1:~# /usr/local/bin/kubelet
-bash: /usr/local/bin/kubelet: No such file or directory

? root@cluster3-node1:~# whereis kubelet
kubelet: /usr/bin/kubelet

Another way would be to see the extended logging of a service like using journalctl -u kubelet.

Well, there we have it, wrong path specified. Correct the path in file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and run:

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf # fix

systemctl daemon-reload && systemctl restart kubelet

systemctl status kubelet  # should now show running

Also the node should be available for the api server, give it a bit of time though:

? k get node
NAME                     STATUS   ROLES           AGE   VERSION
cluster3-controlplane1   Ready    control-plane   14d   v1.26.0
cluster3-node1           Ready    <none>          14d   v1.26.0

Finally we write the reason into the file:

# /opt/course/18/reason.txt
wrong path to kubelet binary specified in service config

Question 19 | Create Secret and mount into Pod

Task weight: 3%

NOTE: This task can only be solved if questions 18 or 20 have been successfully implemented and the k8s-c3-CCC cluster has a functioning worker node

Use context: kubectl config use-context k8s-c3-CCC

Do the following in a new Namespace secret. Create a Pod named secret-pod of image busybox:1.31.1 which should keep running for some time.

There is an existing Secret located at /opt/course/19/secret1.yaml, create it in the Namespace secret and mount it readonly into the Pod at /tmp/secret1.

Create a new Secret in Namespace secret called secret2 which should contain user=user1 and pass=1234. These entries should be available inside the Pod's container as environment variables APP_USER and APP_PASS.

Confirm everything is working.

Answer

First we create the Namespace and the requested Secrets in it:

k create ns secret

cp /opt/course/19/secret1.yaml 19_secret1.yaml

We need to adjust the Namespace for that Secret:

# 19_secret1.yaml
apiVersion: v1
data:
  halt: IyEgL2Jpbi9zaAo...
kind: Secret
metadata:
  creationTimestamp: null
  name: secret1
  namespace: secret           # change

k -f 19_secret1.yaml create

Next we create the second Secret:

k -n secret create secret generic secret2 --from-literal=user=user1 --from-literal=pass=1234
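For reference, the imperative command above is equivalent to creating a manifest like the following (stringData shown instead of base64-encoded data for readability; a sketch):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret2
  namespace: secret
type: Opaque
stringData:
  user: user1
  pass: "1234"
```

Kubernetes converts stringData entries to base64-encoded data on creation, so the imperative and declarative variants produce the same Secret.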

Now we create the Pod template:

k -n secret run secret-pod --image=busybox:1.31.1 $do -- sh -c "sleep 1d" > 19.yaml

Then make the necessary changes:

# 19.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: secret-pod
  name: secret-pod
  namespace: secret           # add
spec:
  containers:
  - args:
    - sh
    - -c
    - sleep 1d
    image: busybox:1.31.1
    name: secret-pod
    resources: {}
    env:                      # add
    - name: APP_USER          # add
      valueFrom:              # add
        secretKeyRef:         # add
          name: secret2       # add
          key: user           # add
    - name: APP_PASS          # add
      valueFrom:              # add
        secretKeyRef:         # add
          name: secret2       # add
          key: pass           # add
    volumeMounts:             # add
    - name: secret1           # add
      mountPath: /tmp/secret1 # add
      readOnly: true          # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:                    # add
  - name: secret1             # add
    secret:                   # add
      secretName: secret1     # add
status: {}

It might not be necessary in current K8s versions to specify readOnly: true because it's the default setting anyway.

And execute:

k -f 19.yaml create

Finally we check if all is correct:

? k -n secret exec secret-pod -- env | grep APP
APP_PASS=1234
APP_USER=user1

? k -n secret exec secret-pod -- find /tmp/secret1
/tmp/secret1
/tmp/secret1/..data
/tmp/secret1/halt
/tmp/secret1/..2019_12_08_12_15_39.463036797
/tmp/secret1/..2019_12_08_12_15_39.463036797/halt

? k -n secret exec secret-pod -- cat /tmp/secret1/halt
#! /bin/sh
### BEGIN INIT INFO
# Provides:          halt
# Required-Start:
# Required-Stop:
# Default-Start:
# Default-Stop:      0
# Short-Description: Execute the halt command.
# Description:
...

All is good.

Question 20 | Update Kubernetes Version and join cluster

Task weight: 10%

Use context: kubectl config use-context k8s-c3-CCC

Your coworker said node cluster3-node2 is running an older Kubernetes version and is not even part of the cluster. Update Kubernetes on that node to the exact version that's running on cluster3-controlplane1. Then add this node to the cluster. Use kubeadm for this.

Answer:

Upgrade Kubernetes to cluster3-controlplane1 version

Search in the docs for kubeadm upgrade: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade

? k get node
NAME                     STATUS   ROLES           AGE   VERSION
cluster3-controlplane1   Ready    control-plane   22h   v1.26.0
cluster3-node1           Ready    <none>          22h   v1.26.0

Controlplane node seems to be running Kubernetes 1.26.0 and cluster3-node2 is not yet part of the cluster.

? ssh cluster3-node2

? root@cluster3-node2:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.0", GitCommit:"b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d", GitTreeState:"clean", BuildDate:"2022-12-08T19:57:06Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"linux/amd64"}

? root@cluster3-node2:~# kubectl version --short
Client Version: v1.25.5
Kustomize Version: v4.5.7

? root@cluster3-node2:~# kubelet --version
Kubernetes v1.25.5

Here kubeadm is already installed in the wanted version, so we don’t need to install it. Hence we can run:

? root@cluster3-node2:~# kubeadm upgrade node
couldn't create a Kubernetes client from file "/etc/kubernetes/kubelet.conf": failed to load admin kubeconfig: open /etc/kubernetes/kubelet.conf: no such file or directory
To see the stack trace of this error execute with --v=5 or higher

This is usually the proper command to upgrade a node. But this error means that this node was never even initialised, so there is nothing to upgrade here. This will be done later using kubeadm join. For now we can continue with kubelet and kubectl:

? root@cluster3-node2:~# apt update
...
Fetched 5,775 kB in 2s (2,313 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
90 packages can be upgraded. Run 'apt list --upgradable' to see them.

? root@cluster3-node2:~# apt show kubectl -a | grep 1.26
Version: 1.26.0-00

? root@cluster3-node2:~# apt install kubectl=1.26.0-00 kubelet=1.26.0-00
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 135 not upgraded.
Need to get 30.5 MB of archives.
After this operation, 9,996 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.26.0-00 [10.1 MB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.26.0-00 [20.5 MB]
Fetched 30.5 MB in 1s (29.7 MB/s)
(Reading database ... 112508 files and directories currently installed.)
Preparing to unpack .../kubectl_1.26.0-00_amd64.deb ...
Unpacking kubectl (1.26.0-00) over (1.25.5-00) ...
Preparing to unpack .../kubelet_1.26.0-00_amd64.deb ...
Unpacking kubelet (1.26.0-00) over (1.25.5-00) ...
Setting up kubectl (1.26.0-00) ...
Setting up kubelet (1.26.0-00) ...

? root@cluster3-node2:~# kubelet --version
Kubernetes v1.26.0

Now we’re up to date with kubeadm, kubectl and kubelet. Restart the kubelet:

? root@cluster3-node2:~# service kubelet restart

? root@cluster3-node2:~# service kubelet status
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Wed 2022-12-21 16:29:26 UTC; 5s ago
     Docs: https://kubernetes.io/docs/home/
  Process: 32111 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
 Main PID: 32111 (code=exited, status=1/FAILURE)

Dec 21 16:29:26 cluster3-node2 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 21 16:29:26 cluster3-node2 systemd[1]: kubelet.service: Failed with result 'exit-code'.

These errors occur because we still need to run kubeadm join to join the node into the cluster. Let's do this in the next step.

Add cluster3-node2 to cluster:
First we log into the controlplane1 and generate a new TLS bootstrap token, also printing out the join command:

? ssh cluster3-controlplane1

? root@cluster3-controlplane1:~# kubeadm token create --print-join-command
kubeadm join 192.168.100.31:6443 --token rbhrjh.4o93r31o18an6dll --discovery-token-ca-cert-hash sha256:d94524f9ab1eed84417414c7def5c1608f84dbf04437d9f5f73eb6255dafdb18

? root@cluster3-controlplane1:~# kubeadm token list
TOKEN                     TTL         EXPIRES                ...
44dz0t.2lgmone0i1o5z9fe   <forever>   <never>
4u477f.nmpq48xmpjt6weje   1h          2022-12-21T18:14:30Z
rbhrjh.4o93r31o18an6dll   23h         2022-12-22T16:29:58Z

We see the expiration of 23h for our token; we could adjust this by passing the --ttl argument.

Next we connect again to cluster3-node2 and simply execute the join command:

? ssh cluster3-node2

? root@cluster3-node2:~# kubeadm join 192.168.100.31:6443 --token rbhrjh.4o93r31o18an6dll --discovery-token-ca-cert-hash sha256:d94524f9ab1eed84417414c7def5c1608f84dbf04437d9f5f73eb6255dafdb18
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

? root@cluster3-node2:~# service kubelet status
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2022-12-21 16:32:19 UTC; 1min 4s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 32510 (kubelet)
    Tasks: 11 (limit: 462)
   Memory: 55.2M
   CGroup: /system.slice/kubelet.service
           └─32510 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runti>

If you have troubles with kubeadm join you might need to run kubeadm reset.

This looks great though for us. Finally we head back to the main terminal and check the node status:

? k get node
NAME                     STATUS     ROLES           AGE   VERSION
cluster3-controlplane1   Ready      control-plane   22h   v1.26.0
cluster3-node1           Ready      <none>          22h   v1.26.0
cluster3-node2           NotReady   <none>          22h   v1.26.0

Give it a bit of time till the node is ready.

? k get node
NAME                     STATUS   ROLES           AGE   VERSION
cluster3-controlplane1   Ready    control-plane   22h   v1.26.0
cluster3-node1           Ready    <none>          22h   v1.26.0
cluster3-node2           Ready    <none>          22h   v1.26.0

We see cluster3-node2 is now available and up to date.

Question 21 | Create a Static Pod and Service

Task weight: 2%

Use context: kubectl config use-context k8s-c3-CCC

Create a Static Pod named my-static-pod in Namespace default on cluster3-controlplane1. It should be of image nginx:1.16-alpine and have resource requests for 10m CPU and 20Mi memory.

Then create a NodePort Service named static-pod-service which exposes that static Pod on port 80 and check if it has Endpoints and if it's reachable through the cluster3-controlplane1 internal IP address. You can connect to the internal node IPs from your main terminal.

Answer:

? ssh cluster3-controlplane1

? root@cluster3-controlplane1:~# cd /etc/kubernetes/manifests/

? root@cluster3-controlplane1:~# kubectl run my-static-pod --image=nginx:1.16-alpine -o yaml --dry-run=client > my-static-pod.yaml

Then edit the my-static-pod.yaml to add the requested resource requests:

# /etc/kubernetes/manifests/my-static-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: my-static-pod
  name: my-static-pod
spec:
  containers:
  - image: nginx:1.16-alpine
    name: my-static-pod
    resources:
      requests:
        cpu: 10m
        memory: 20Mi
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

And make sure it’s running:

? k get pod -A | grep my-static
NAMESPACE   NAME                                   READY   STATUS    ...   AGE
default     my-static-pod-cluster3-controlplane1   1/1     Running   ...   22s

Now we expose that static Pod:

k expose pod my-static-pod-cluster3-controlplane1 --name static-pod-service --type=NodePort --port 80

This would generate a Service like:

# kubectl expose pod my-static-pod-cluster3-controlplane1 --name static-pod-service --type=NodePort --port 80

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: my-static-pod
  name: static-pod-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-static-pod
  type: NodePort
status:
  loadBalancer: {}

Then run and test:

? k get svc,ep -l run=my-static-pod
NAME                         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/static-pod-service   NodePort   10.99.168.252   <none>        80:30352/TCP   30s

NAME                           ENDPOINTS      AGE
endpoints/static-pod-service   10.32.0.4:80   30s

Looking good.

Question 22 | Check how long certificates are valid

Task weight: 2%

Use context: kubectl config use-context k8s-c2-AC

Check how long the kube-apiserver server certificate is valid on cluster2-controlplane1. Do this with openssl or cfssl. Write the expiration date into /opt/course/22/expiration.

Also run the correct kubeadm command to list the expiration dates and confirm both methods show the same date.

Write the correct kubeadm command that would renew the apiserver server certificate into /opt/course/22/kubeadm-renew-certs.sh.

Answer:

First let’s find that certificate:

? ssh cluster2-controlplane1

? root@cluster2-controlplane1:~# find /etc/kubernetes/pki | grep apiserver
/etc/kubernetes/pki/apiserver.crt
/etc/kubernetes/pki/apiserver-etcd-client.crt
/etc/kubernetes/pki/apiserver-etcd-client.key
/etc/kubernetes/pki/apiserver-kubelet-client.crt
/etc/kubernetes/pki/apiserver.key
/etc/kubernetes/pki/apiserver-kubelet-client.key

Next we use openssl to find out the expiration date:

? root@cluster2-controlplane1:~# openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt | grep Validity -A2
        Validity
            Not Before: Dec 20 18:05:20 2022 GMT
            Not After : Dec 20 18:05:20 2023 GMT

There we have it, so we write it in the required location on our main terminal:

# /opt/course/22/expiration
Dec 20 18:05:20 2023 GMT
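The same openssl reading works on any certificate, so you can practise it without cluster access on a throwaway self-signed one (a sketch; file paths under /tmp are arbitrary):

```shell
# create a short-lived self-signed certificate to inspect
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 1 -subj "/CN=demo" 2>/dev/null

# -enddate prints only the "Not After" line; -dates would print both bounds
openssl x509 -noout -enddate -in /tmp/demo.crt
```

Using -enddate instead of -text | grep Validity -A2 is a shorter way to get just the expiration.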

And we use the feature from kubeadm to get the expiration too:

? root@cluster2-controlplane1:~# kubeadm certs check-expiration | grep apiserver
apiserver                  Jan 14, 2022 18:49 UTC   363d   ca        no
apiserver-etcd-client      Jan 14, 2022 18:49 UTC   363d   etcd-ca   no
apiserver-kubelet-client   Jan 14, 2022 18:49 UTC   363d   ca        no

Looking good. And finally we write the command that would renew the apiserver certificate into the requested location:

# /opt/course/22/kubeadm-renew-certs.sh
kubeadm certs renew apiserver

Question 23 | Kubelet client/server cert info

Task weight: 2%

Use context: kubectl config use-context k8s-c2-AC

Node cluster2-node1 has been added to the cluster using kubeadm and TLS bootstrapping.

Find the "Issuer" and "Extended Key Usage" values of the cluster2-node1:

kubelet client certificate, the one used for outgoing connections to the kube-apiserver.
kubelet server certificate, the one used for incoming connections from the kube-apiserver.

Write the information into file /opt/course/23/certificate-info.txt.

Compare the "Issuer" and "Extended Key Usage" fields of both certificates and make sense of these.

Answer:

To find the correct kubelet certificate directory, we can look for the default value of the --cert-dir parameter for the kubelet. For this search for “kubelet” in the Kubernetes docs which will lead to: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet. We can check if another certificate directory has been configured using ps aux or in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.

First we check the kubelet client certificate:

? ssh cluster2-node1

? root@cluster2-node1:~# openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep Issuer
        Issuer: CN = kubernetes

? root@cluster2-node1:~# openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep "Extended Key Usage" -A1
            X509v3 Extended Key Usage:
                TLS Web Client Authentication

Next we check the kubelet server certificate:

? root@cluster2-node1:~# openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep Issuer
        Issuer: CN = cluster2-node1-ca@1588186506

? root@cluster2-node1:~# openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep "Extended Key Usage" -A1
            X509v3 Extended Key Usage:
                TLS Web Server Authentication

We see that the server certificate was generated on the worker node itself and the client certificate was issued by the Kubernetes API. The "Extended Key Usage" also shows whether a certificate is meant for client or server authentication.
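To see how the Extended Key Usage field distinguishes the two roles, you can generate test certificates with explicit EKUs yourself (a sketch; -addext requires OpenSSL 1.1.1 or newer, names and paths are arbitrary):

```shell
# server-auth certificate, analogous to the kubelet server cert
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/server.key -out /tmp/server.crt \
  -days 1 -subj "/CN=demo-server" -addext "extendedKeyUsage=serverAuth" 2>/dev/null

# client-auth certificate, analogous to the kubelet client cert
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/client.key -out /tmp/client.crt \
  -days 1 -subj "/CN=demo-client" -addext "extendedKeyUsage=clientAuth" 2>/dev/null

openssl x509 -noout -text -in /tmp/server.crt | grep -A1 "Extended Key Usage"
openssl x509 -noout -text -in /tmp/client.crt | grep -A1 "Extended Key Usage"
```

The first grep shows "TLS Web Server Authentication", the second "TLS Web Client Authentication", matching what we read from the kubelet certificates above.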

More about this: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping

Question 24 | NetworkPolicy

Task weight: 9%

Use context: kubectl config use-context k8s-c1-H

There was a security incident where an intruder was able to access the whole cluster from a single hacked backend Pod.

To prevent this create a NetworkPolicy called np-backend in Namespace project-snake. It should allow the backend-* Pods only to:

connect to db1-* Pods on port 1111
connect to db2-* Pods on port 2222

Use the app label of Pods in your policy.

After implementation, connections from backend-* Pods to vault-* Pods on port 3333 should for example no longer work.

Answer:

First we look at the existing Pods and their labels:

? k -n project-snake get pod
NAME        READY   STATUS    RESTARTS   AGE
backend-0   1/1     Running   0          8s
db1-0       1/1     Running   0          8s
db2-0       1/1     Running   0          10s
vault-0     1/1     Running   0          10s

? k -n project-snake get pod -L app
NAME        READY   STATUS    RESTARTS   AGE     APP
backend-0   1/1     Running   0          3m15s   backend
db1-0       1/1     Running   0          3m15s   db1
db2-0       1/1     Running   0          3m17s   db2
vault-0     1/1     Running   0          3m17s   vault

We test the current connection situation and see nothing is restricted:

? k -n project-snake get pod -o wide
NAME        READY   STATUS    RESTARTS   AGE     IP           ...
backend-0   1/1     Running   0          4m14s   10.44.0.24   ...
db1-0       1/1     Running   0          4m14s   10.44.0.25   ...
db2-0       1/1     Running   0          4m16s   10.44.0.23   ...
vault-0     1/1     Running   0          4m16s   10.44.0.22   ...

? k -n project-snake exec backend-0 -- curl -s 10.44.0.25:1111
database one

? k -n project-snake exec backend-0 -- curl -s 10.44.0.23:2222
database two

? k -n project-snake exec backend-0 -- curl -s 10.44.0.22:3333
vault secret storage

Now we create the NP by copying and changing an example from the k8s docs:

# 24_np.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress                    # policy is only about Egress
  egress:
    -                           # first rule
      to:                       # first condition "to"
        - podSelector:
            matchLabels:
              app: db1
      ports:                    # second condition "port"
        - protocol: TCP
          port: 1111
    -                           # second rule
      to:                       # first condition "to"
        - podSelector:
            matchLabels:
              app: db2
      ports:                    # second condition "port"
        - protocol: TCP
          port: 2222

The NP above has two rules with two conditions each, it can be read as:

allow outgoing traffic if:

(destination pod has label app=db1 AND port is 1111)

OR

(destination pod has label app=db2 AND port is 2222)

Wrong example

Now let’s shortly look at a wrong example:

# WRONG
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress
  egress:
    -                           # first rule
      to:                       # first condition "to"
        - podSelector:          # first "to" possibility
            matchLabels:
              app: db1
        - podSelector:          # second "to" possibility
            matchLabels:
              app: db2
      ports:                    # second condition "ports"
        - protocol: TCP         # first "ports" possibility
          port: 1111
        - protocol: TCP         # second "ports" possibility
          port: 2222

The NP above has one rule with two conditions and two condition-entries each, it can be read as:

allow outgoing traffic if:

(destination pod has label app=db1 OR destination pod has label app=db2)

AND

(destination port is 1111 OR destination port is 2222)

Using this NP it would still be possible for backend-* Pods to connect to db2-* Pods on port 1111, for example, which should be forbidden.

Create NetworkPolicy

We create the correct NP:

k -f 24_np.yaml create

And test again:

? k -n project-snake exec backend-0 -- curl -s 10.44.0.25:1111
database one

? k -n project-snake exec backend-0 -- curl -s 10.44.0.23:2222
database two

? k -n project-snake exec backend-0 -- curl -s 10.44.0.22:3333
^C

It's also helpful to use kubectl describe on the NP to see how k8s has interpreted the policy.

Great, looking more secure. Task done.

Question 25 | Etcd Snapshot Save and Restore

Task weight: 8%

Use context: kubectl config use-context k8s-c3-CCC

Make a backup of etcd running on cluster3-controlplane1 and save it on the controlplane node at /tmp/etcd-backup.db.

Then create a Pod of your kind in the cluster.

Finally restore the backup, confirm the cluster is still working and that the created Pod is no longer with us.

Answer:
Etcd Backup

First we log into the controlplane and try to create a snapshot of etcd:

? ssh cluster3-controlplane1

? root@cluster3-controlplane1:~# ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db
Error:  rpc error: code = Unavailable desc = transport is closing

But it fails because we need to authenticate ourselves. For the necessary information we can check the etcd manifest:

? root@cluster3-controlplane1:~# vim /etc/kubernetes/manifests/etcd.yaml

We only check the etcd.yaml for the necessary information, we don't change it.

# /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://192.168.100.31:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt                           # use
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://192.168.100.31:2380
    - --initial-cluster=cluster3-controlplane1=https://192.168.100.31:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key                            # use
    - --listen-client-urls=https://127.0.0.1:2379,https://192.168.100.31:2379   # use
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://192.168.100.31:2380
    - --name=cluster3-controlplane1
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt                    # use
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: k8s.gcr.io/etcd:3.3.15-0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd                  # important
      type: DirectoryOrCreate
    name: etcd-data
status: {}

But we also know that the api-server is connecting to etcd, so we can check how its manifest is configured:

? root@cluster3-controlplane1:~# cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep etcd
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379

We use the authentication information and pass it to etcdctl:

? root@cluster3-controlplane1:~# ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key

Snapshot saved at /tmp/etcd-backup.db

NOTE: Don't use snapshot status because it can alter the snapshot file and render it invalid
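Instead of reading the flag values by eye, they can also be extracted from the manifest with sed. This is only a sketch: the heredoc sample stands in for the three relevant lines of /etc/kubernetes/manifests/etcd.yaml, and against the full file the patterns would need anchoring so they don't collide with the peer-* flags.

```shell
# Extract the TLS flag values needed for etcdctl from manifest lines.
# Sample lines stand in for the real /etc/kubernetes/manifests/etcd.yaml.
manifest='    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt'

cert=$(echo "$manifest" | sed -n 's/.*--cert-file=//p')
key=$(echo "$manifest" | sed -n 's/.*--key-file=//p')
ca=$(echo "$manifest" | sed -n 's/.*--trusted-ca-file=//p')

# Print the resulting backup command instead of running it here
echo "ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db --cacert $ca --cert $cert --key $key"
```

Printing the assembled command first is a cheap way to double-check the paths before pointing etcdctl at the live cluster.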

Etcd restore

Now create a Pod in the cluster and wait for it to be running:

? root@cluster3-controlplane1:~# kubectl run test --image=nginx
pod/test created

? root@cluster3-controlplane1:~# kubectl get pod -l run=test -w
NAME   READY   STATUS    RESTARTS   AGE
test   1/1     Running   0          60s

NOTE: If you didn't solve questions 18 or 20 and cluster3 doesn't have a ready worker node then the created pod might stay in a Pending state. This is still ok for this task.

Next we stop all controlplane components:

root@cluster3-controlplane1:~# cd /etc/kubernetes/manifests/ root@cluster3-controlplane1:/etc/kubernetes/manifests# mv * .. root@cluster3-controlplane1:/etc/kubernetes/manifests# watch crictl ps

Now we restore the snapshot into a specific directory:

? root@cluster3-controlplane1:~# ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db \
--data-dir /var/lib/etcd-backup \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key

2020-09-04 16:50:19.650804 I | mvcc: restore compact to 9935
2020-09-04 16:50:19.659095 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32

We could specify another host to make the backup from by using etcdctl --endpoints http://IP, but here we just use the default value which is: http://127.0.0.1:2379,http://127.0.0.1:4001.

The restored files are located at the new folder /var/lib/etcd-backup, now we have to tell etcd to use that directory:

? root@cluster3-controlplane1:~# vim /etc/kubernetes/etcd.yaml

# /etc/kubernetes/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
...
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd-backup           # change
      type: DirectoryOrCreate
    name: etcd-data
status: {}

Now we move all controlplane yaml files back into the manifests directory. Give it some time (up to several minutes) for etcd to restart and for the api-server to be reachable again:

root@cluster3-controlplane1:/etc/kubernetes/manifests# mv ../*.yaml . root@cluster3-controlplane1:/etc/kubernetes/manifests# watch crictl ps

Then we check again for the Pod:

? root@cluster3-controlplane1:~# kubectl get pod -l run=test
No resources found in default namespace.

Awesome, backup and restore worked as our pod is gone.

Extra Question 1 | Find Pods first to be terminated

Use context: kubectl config use-context k8s-c1-H

Check all available Pods in the Namespace project-c13 and find the names of those that would probably be terminated first if the nodes run out of resources (cpu or memory) to schedule all Pods. Write the Pod names into /opt/course/e1/pods-not-stable.txt.

Answer:

When available cpu or memory resources on the nodes reach their limit, Kubernetes will look for Pods that are using more resources than they requested. These will be the first candidates for termination. If a Pod's containers have no resource requests/limits set, then by default those are considered to use more than requested.

Kubernetes assigns Quality of Service classes to Pods based on the defined resources and limits, read more here: https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod

Hence we should look for Pods without resource requests defined, we can do this with a manual approach:

k -n project-c13 describe pod | less -p Requests # describe all pods and highlight Requests

Or we do:

k -n project-c13 describe pod | egrep "^(Name:| Requests:)" -A1
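To see what that filter does, here it is applied to a trimmed, hypothetical sample of describe output; the -A1 context line is what makes a present Requests block visible:

```shell
# Trimmed sample of `kubectl describe pod` output; the second Pod has
# no Requests block, which is exactly what we are scanning for.
sample='Name:         c13-3cc-runner-98c8b5469-dzqhr
Namespace:    project-c13
    Requests:
      cpu:        30m
Name:         c13-3cc-runner-heavy-65588d7d6-djtv9
Namespace:    project-c13'

echo "$sample" | grep -E "^(Name:|    Requests:)" -A1
```

Pods whose Name: line is not followed by a Requests: line are the ones without requests defined.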

We see that the Pods of Deployment c13-3cc-runner-heavy don’t have any resources requests specified. Hence our answer would be:

# /opt/course/e1/pods-not-stable.txt
c13-3cc-runner-heavy-65588d7d6-djtv9
c13-3cc-runner-heavy-65588d7d6-v8kf5
c13-3cc-runner-heavy-65588d7d6-wwpb4
o3db-0 # maybe not existing if already removed via previous scenario
o3db-1 # maybe not existing if already removed via previous scenario

To automate this process you could use jsonpath like this:

? k -n project-c13 get pod \
  -o jsonpath="{range .items[*]} {.metadata.name}{.spec.containers[*].resources}{'\n'}"

 c13-2x3-api-86784557bd-cgs8gmap[requests:map[cpu:50m memory:20Mi]]
 c13-2x3-api-86784557bd-lnxvjmap[requests:map[cpu:50m memory:20Mi]]
 c13-2x3-api-86784557bd-mnp77map[requests:map[cpu:50m memory:20Mi]]
 c13-2x3-web-769c989898-6hbgtmap[requests:map[cpu:50m memory:10Mi]]
 c13-2x3-web-769c989898-g57nqmap[requests:map[cpu:50m memory:10Mi]]
 c13-2x3-web-769c989898-hfd5vmap[requests:map[cpu:50m memory:10Mi]]
 c13-2x3-web-769c989898-jfx64map[requests:map[cpu:50m memory:10Mi]]
 c13-2x3-web-769c989898-r89mgmap[requests:map[cpu:50m memory:10Mi]]
 c13-2x3-web-769c989898-wtgxlmap[requests:map[cpu:50m memory:10Mi]]
 c13-3cc-runner-98c8b5469-dzqhrmap[requests:map[cpu:30m memory:10Mi]]
 c13-3cc-runner-98c8b5469-hbtdvmap[requests:map[cpu:30m memory:10Mi]]
 c13-3cc-runner-98c8b5469-n9lswmap[requests:map[cpu:30m memory:10Mi]]
 c13-3cc-runner-heavy-65588d7d6-djtv9map[]
 c13-3cc-runner-heavy-65588d7d6-v8kf5map[]
 c13-3cc-runner-heavy-65588d7d6-wwpb4map[]
 c13-3cc-web-675456bcd-glpq6map[requests:map[cpu:50m memory:10Mi]]
 c13-3cc-web-675456bcd-knlpxmap[requests:map[cpu:50m memory:10Mi]]
 c13-3cc-web-675456bcd-nfhp9map[requests:map[cpu:50m memory:10Mi]]
 c13-3cc-web-675456bcd-twn7mmap[requests:map[cpu:50m memory:10Mi]]
 o3db-0{}
 o3db-1{}

This lists all Pod names and their requests/limits, hence we see the three Pods without those defined.

Or we look for the Quality of Service classes:

? k get pods -n project-c13 \
  -o jsonpath="{range .items[*]}{.metadata.name} {.status.qosClass}{'\n'}"

c13-2x3-api-86784557bd-cgs8g Burstable
c13-2x3-api-86784557bd-lnxvj Burstable
c13-2x3-api-86784557bd-mnp77 Burstable
c13-2x3-web-769c989898-6hbgt Burstable
c13-2x3-web-769c989898-g57nq Burstable
c13-2x3-web-769c989898-hfd5v Burstable
c13-2x3-web-769c989898-jfx64 Burstable
c13-2x3-web-769c989898-r89mg Burstable
c13-2x3-web-769c989898-wtgxl Burstable
c13-3cc-runner-98c8b5469-dzqhr Burstable
c13-3cc-runner-98c8b5469-hbtdv Burstable
c13-3cc-runner-98c8b5469-n9lsw Burstable
c13-3cc-runner-heavy-65588d7d6-djtv9 BestEffort
c13-3cc-runner-heavy-65588d7d6-v8kf5 BestEffort
c13-3cc-runner-heavy-65588d7d6-wwpb4 BestEffort
c13-3cc-web-675456bcd-glpq6 Burstable
c13-3cc-web-675456bcd-knlpx Burstable
c13-3cc-web-675456bcd-nfhp9 Burstable
c13-3cc-web-675456bcd-twn7m Burstable
o3db-0 BestEffort
o3db-1 BestEffort

Here we see three with BestEffort, which is the class Pods get when they don't have any memory or cpu limits or requests defined.
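The assignment logic can be sketched as a tiny shell function — a simplified, single-container version for illustration only; the real rules (applied by the apiserver) compare cpu and memory requests and limits individually:

```shell
# Simplified QoS classification for a single-container Pod:
#   Guaranteed: limits set and requests equal to limits
#   Burstable:  at least something set, but not Guaranteed
#   BestEffort: no requests and no limits at all
# (k8s also defaults requests to limits when only limits are set,
# which this sketch ignores)
qos_class() {
  local requests="$1" limits="$2"
  if [ -n "$limits" ] && [ "$requests" = "$limits" ]; then
    echo Guaranteed
  elif [ -n "$requests" ] || [ -n "$limits" ]; then
    echo Burstable
  else
    echo BestEffort
  fi
}

qos_class "" ""                                          # BestEffort, like the runner-heavy Pods
qos_class "cpu=30m" ""                                   # Burstable
qos_class "cpu=100m,memory=64Mi" "cpu=100m,memory=64Mi"  # Guaranteed
```

The first call mirrors the runner-heavy Pods above: nothing requested, hence BestEffort and first in line for eviction.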

A good practice is to always set resource requests and limits. If you don’t know the values your containers should have you can find this out using metric tools like Prometheus. You can also use kubectl top pod or even kubectl exec into the container and use top and similar tools.

Extra Question 2 | Curl Manually Contact API

Use context: kubectl config use-context k8s-c1-H

There is an existing ServiceAccount secret-reader in Namespace project-hamster. Create a Pod of image curlimages/curl:7.65.3 named tmp-api-contact which uses this ServiceAccount. Make sure the container keeps running.

Exec into the Pod and use curl to access the Kubernetes Api of that cluster manually, listing all available secrets. You can ignore insecure https connection. Write the command(s) for this into file /opt/course/e4/list-secrets.sh.

Answer:

https://kubernetes.io/docs/tasks/run-application/access-api-from-pod

It's important to understand how the Kubernetes API works. For this it helps connecting to the api manually, for example using curl. You can find information fast by searching the Kubernetes docs for "curl api" for example.

First we create our Pod:

k run tmp-api-contact --image=curlimages/curl:7.65.3 $do --command > e2.yaml -- sh -c 'sleep 1d'

Add the service account name and Namespace:

# e2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: tmp-api-contact
  name: tmp-api-contact
  namespace: project-hamster        # add
spec:
  serviceAccountName: secret-reader # add
  containers:
  - command:
    - sh
    - -c
    - sleep 1d
    image: curlimages/curl:7.65.3
    name: tmp-api-contact
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Then run and exec into:

k -f e2.yaml create
k -n project-hamster exec tmp-api-contact -it -- sh

Once inside the container we can try to connect to the api using curl. The api is usually available via the Service named kubernetes in Namespace default (you should know how DNS resolution works across Namespaces). Otherwise we can find the endpoint IP via environment variables by running env.

So now we can do:

curl https://kubernetes.default
curl -k https://kubernetes.default                  # ignore insecure as allowed in ticket description
curl -k https://kubernetes.default/api/v1/secrets   # should show Forbidden 403

The last command shows 403 forbidden, this is because we are not passing any authorisation information with us. The Kubernetes Api Server thinks we are connecting as system:anonymous. We want to change this and connect using the Pods ServiceAccount named secret-reader.

We find the token in the mounted folder at /var/run/secrets/kubernetes.io/serviceaccount, so we do:

? TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

? curl -k https://kubernetes.default/api/v1/secrets -H "Authorization: Bearer ${TOKEN}"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
{
  "kind": "SecretList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/secrets",
    "resourceVersion": "10697"
  },
  "items": [
    {
      "metadata": {
        "name": "default-token-5zjbd",
        "namespace": "default",
        "selfLink": "/api/v1/namespaces/default/secrets/default-token-5zjbd",
        "uid": "315dbfd9-d235-482b-8bfc-c6167e7c1461",
        "resourceVersion": "342",
...

Now we're able to list all Secrets, authenticated as the ServiceAccount secret-reader under which our Pod is running.

To use encrypted https connection we can run:

CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
curl --cacert ${CACERT} https://kubernetes.default/api/v1/secrets -H "Authorization: Bearer ${TOKEN}"

For troubleshooting we could also check if the ServiceAccount is actually able to list Secrets using:

? k auth can-i get secret --as system:serviceaccount:project-hamster:secret-reader
yes

Finally write the commands into the requested location:

# /opt/course/e4/list-secrets.sh
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -k https://kubernetes.default/api/v1/secrets -H "Authorization: Bearer ${TOKEN}"
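The only moving part in that script is the header construction; it can be replayed anywhere with a dummy token file (the real path, /var/run/secrets/kubernetes.io/serviceaccount/token, exists only inside the Pod):

```shell
# Replay the header construction with a dummy token file;
# /tmp/sa-token stands in for the mounted ServiceAccount token
echo "dummy-token" > /tmp/sa-token
TOKEN=$(cat /tmp/sa-token)
echo "Authorization: Bearer ${TOKEN}"   # -> Authorization: Bearer dummy-token
```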

Preview Question 1

Use context: kubectl config use-context k8s-c2-AC

The cluster admin asked you to find out the following information about etcd running on cluster2-controlplane1:

Server private key location
Server certificate expiration date
Is client certificate authentication enabled

Write these information into /opt/course/p1/etcd-info.txt

Finally you’re asked to save an etcd snapshot at /etc/etcd-snapshot.db on cluster2-controlplane1 and display its status.

Answer:
Find out etcd information
Let’s check the nodes:

? k get node
NAME                     STATUS   ROLES           AGE   VERSION
cluster2-controlplane1   Ready    control-plane   89m   v1.23.1
cluster2-node1           Ready    <none>          87m   v1.23.1

? ssh cluster2-controlplane1

First we check how etcd is setup in this cluster:

? root@cluster2-controlplane1:~# kubectl -n kube-system get pod
NAME                                             READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-k8f48                         1/1     Running   0          26h
coredns-66bff467f8-rn8tr                         1/1     Running   0          26h
etcd-cluster2-controlplane1                      1/1     Running   0          26h
kube-apiserver-cluster2-controlplane1            1/1     Running   0          26h
kube-controller-manager-cluster2-controlplane1   1/1     Running   0          26h
kube-proxy-qthfg                                 1/1     Running   0          25h
kube-proxy-z55lp                                 1/1     Running   0          26h
kube-scheduler-cluster2-controlplane1            1/1     Running   1          26h
weave-net-cqdvt                                  2/2     Running   0          26h
weave-net-dxzgh                                  2/2     Running   1          25h

We see it's running as a Pod, more specifically a static Pod. So we check the default kubelet directory for static manifests:

? root@cluster2-controlplane1:~# find /etc/kubernetes/manifests/
/etc/kubernetes/manifests/
/etc/kubernetes/manifests/kube-controller-manager.yaml
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/etcd.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml

? root@cluster2-controlplane1:~# vim /etc/kubernetes/manifests/etcd.yaml

So we look at the yaml and the parameters with which etcd is started:

# /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://192.168.102.11:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt      # server certificate
    - --client-cert-auth=true                              # enabled
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://192.168.102.11:2380
    - --initial-cluster=cluster2-controlplane1=https://192.168.102.11:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key       # server private key
    - --listen-client-urls=https://127.0.0.1:2379,https://192.168.102.11:2379
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://192.168.102.11:2380
    - --name=cluster2-controlplane1
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt

We see that client authentication is enabled and also the requested path to the server private key, now let’s find out the expiration of the server certificate:

? root@cluster2-controlplane1:~# openssl x509 -noout -text -in /etc/kubernetes/pki/etcd/server.crt | grep Validity -A2
        Validity
            Not Before: Sep 13 13:01:31 2021 GMT
            Not After : Sep 13 13:01:31 2022 GMT
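To practice these openssl queries without cluster certificates, you can generate a throwaway self-signed certificate and inspect it — a sketch; -enddate prints just the expiration line as an alternative to grepping the full -text output:

```shell
# Create a short-lived, self-signed throwaway certificate
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 2>/dev/null

# -enddate is a shortcut for only the "Not After" information
openssl x509 -noout -enddate -in /tmp/demo.crt
```

The output is a single notAfter= line, which is often all a task like this needs.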

There we have it. Let’s write the information into the requested file:

# /opt/course/p1/etcd-info.txt

Server private key location: /etc/kubernetes/pki/etcd/server.key
Server certificate expiration date: Sep 13 13:01:31 2022 GMT
Is client certificate authentication enabled: yes

Create etcd snapshot
First we try:

ETCDCTL_API=3 etcdctl snapshot save /etc/etcd-snapshot.db

We get the endpoint also from the yaml. But we need to specify more parameters, all of which we can find in the yaml declaration above:

ETCDCTL_API=3 etcdctl snapshot save /etc/etcd-snapshot.db \ --cacert /etc/kubernetes/pki/etcd/ca.crt \ --cert /etc/kubernetes/pki/etcd/server.crt \ --key /etc/kubernetes/pki/etcd/server.key

This worked. Now we can output the status of the backup file:

? root@cluster2-controlplane1:~# ETCDCTL_API=3 etcdctl snapshot status /etc/etcd-snapshot.db
4d4e953, 7213, 1291, 2.7 MB

The status shows:

Hash: 4d4e953
Revision: 7213
Total Keys: 1291
Total Size: 2.7 MB

Preview Question 2

Use context: kubectl config use-context k8s-c1-H

You're asked to confirm that kube-proxy is running correctly on all nodes. For this perform the following in Namespace project-hamster:

Create a new Pod named p2-pod with two containers, one of image nginx:1.21.3-alpine and one of image busybox:1.31. Make sure the busybox container keeps running for some time.

Create a new Service named p2-service which exposes that Pod internally in the cluster on port 3000->80.

Find the kube-proxy container on all nodes cluster1-controlplane1, cluster1-node1 and cluster1-node2 and make sure that it's using iptables. Use command crictl for this.

Write the iptables rules of all nodes belonging to the created Service p2-service into file /opt/course/p2/iptables.txt.

Finally delete the Service and confirm that the iptables rules are gone from all nodes.

Answer:
Create the Pod
First we create the Pod:

# check out the export statement on top which allows us to use $do
k run p2-pod --image=nginx:1.21.3-alpine $do > p2.yaml

Next we add the requested second container:

# p2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: p2-pod
  name: p2-pod
  namespace: project-hamster           # add
spec:
  containers:
  - image: nginx:1.21.3-alpine
    name: p2-pod
    resources: {}
  - image: busybox:1.31                # add
    name: c2                           # add
    command: ["sh", "-c", "sleep 1d"]  # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

And we create the Pod:

k -f p2.yaml create

Create the Service

Next we create the Service:

k -n project-hamster expose pod p2-pod --name p2-service --port 3000 --target-port 80

This will create a yaml like:

apiVersion: v1 kind: Service metadata:creationTimestamp: "2020-04-30T20:58:14Z"labels:run: p2-podmanagedFields: ...operation: Updatetime: "2020-04-30T20:58:14Z"name: p2-servicenamespace: project-hamsterresourceVersion: "11071"selfLink: /api/v1/namespaces/project-hamster/services/p2-serviceuid: 2a1c0842-7fb6-4e94-8cdb-1602a3b1e7d2 spec:clusterIP: 10.97.45.18ports:- port: 3000protocol: TCPtargetPort: 80selector:run: p2-podsessionAffinity: Nonetype: ClusterIP status:loadBalancer: {}

We should confirm Pods and Services are connected, hence the Service should have Endpoints.

k -n project-hamster get pod,svc,ep

Confirm kube-proxy is running and is using iptables

First we get nodes in the cluster:

? k get node
NAME                     STATUS   ROLES           AGE   VERSION
cluster1-controlplane1   Ready    control-plane   98m   v1.23.1
cluster1-node1           Ready    <none>          96m   v1.23.1
cluster1-node2           Ready    <none>          95m   v1.23.1

The idea here is to log into every node, find the kube-proxy container and check its logs:

? ssh cluster1-controlplane1

? root@cluster1-controlplane1$ crictl ps | grep kube-proxy
27b6a18c0f89c    36c4ebbc9d979    3 hours ago    Running    kube-proxy

? root@cluster1-controlplane1~# crictl logs 27b6a18c0f89c
...
I0913 12:53:03.096620       1 server_others.go:212] Using iptables Proxier.
...

This should be repeated on every node and result in the same output Using iptables Proxier.

Check kube-proxy is creating iptables rules

Now we check the iptables rules on every node first manually:

? ssh cluster1-controlplane1 iptables-save | grep p2-service
-A KUBE-SEP-6U447UXLLQIKP7BB -s 10.44.0.20/32 -m comment --comment "project-hamster/p2-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-6U447UXLLQIKP7BB -p tcp -m comment --comment "project-hamster/p2-service:" -m tcp -j DNAT --to-destination 10.44.0.20:80
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.97.45.18/32 -p tcp -m comment --comment "project-hamster/p2-service: cluster IP" -m tcp --dport 3000 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.97.45.18/32 -p tcp -m comment --comment "project-hamster/p2-service: cluster IP" -m tcp --dport 3000 -j KUBE-SVC-2A6FNMCK6FDH7PJH
-A KUBE-SVC-2A6FNMCK6FDH7PJH -m comment --comment "project-hamster/p2-service:" -j KUBE-SEP-6U447UXLLQIKP7BB

? ssh cluster1-node1 iptables-save | grep p2-service
-A KUBE-SEP-6U447UXLLQIKP7BB -s 10.44.0.20/32 -m comment --comment "project-hamster/p2-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-6U447UXLLQIKP7BB -p tcp -m comment --comment "project-hamster/p2-service:" -m tcp -j DNAT --to-destination 10.44.0.20:80
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.97.45.18/32 -p tcp -m comment --comment "project-hamster/p2-service: cluster IP" -m tcp --dport 3000 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.97.45.18/32 -p tcp -m comment --comment "project-hamster/p2-service: cluster IP" -m tcp --dport 3000 -j KUBE-SVC-2A6FNMCK6FDH7PJH
-A KUBE-SVC-2A6FNMCK6FDH7PJH -m comment --comment "project-hamster/p2-service:" -j KUBE-SEP-6U447UXLLQIKP7BB

? ssh cluster1-node2 iptables-save | grep p2-service
-A KUBE-SEP-6U447UXLLQIKP7BB -s 10.44.0.20/32 -m comment --comment "project-hamster/p2-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-6U447UXLLQIKP7BB -p tcp -m comment --comment "project-hamster/p2-service:" -m tcp -j DNAT --to-destination 10.44.0.20:80
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.97.45.18/32 -p tcp -m comment --comment "project-hamster/p2-service: cluster IP" -m tcp --dport 3000 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.97.45.18/32 -p tcp -m comment --comment "project-hamster/p2-service: cluster IP" -m tcp --dport 3000 -j KUBE-SVC-2A6FNMCK6FDH7PJH
-A KUBE-SVC-2A6FNMCK6FDH7PJH -m comment --comment "project-hamster/p2-service:" -j KUBE-SEP-6U447UXLLQIKP7BB

Great. Now let's write these rules into the requested file:

? ssh cluster1-controlplane1 iptables-save | grep p2-service >> /opt/course/p2/iptables.txt

? ssh cluster1-node1 iptables-save | grep p2-service >> /opt/course/p2/iptables.txt

? ssh cluster1-node2 iptables-save | grep p2-service >> /opt/course/p2/iptables.txt

Delete the Service and confirm iptables rules are gone

Delete the Service:

k -n project-hamster delete svc p2-service

And confirm the iptables rules are gone:

? ssh cluster1-controlplane1 iptables-save | grep p2-service

? ssh cluster1-node1 iptables-save | grep p2-service

? ssh cluster1-node2 iptables-save | grep p2-service

Done.

Kubernetes Services are implemented using iptables rules (with default config) on all nodes. Every time a Service has been altered, created, deleted or Endpoints of a Service have changed, kube-proxy on every node picks up the change (it watches the kube-apiserver for Service and Endpoints updates) and updates the iptables rules according to the current state.

Preview Question 3

Use context: kubectl config use-context k8s-c2-AC

Create a Pod named check-ip in Namespace default using image httpd:2.4.41-alpine. Expose it on port 80 as a ClusterIP Service named check-ip-service. Remember/output the IP of that Service.

Change the Service CIDR to 11.96.0.0/12 for the cluster.

Then create a second Service named check-ip-service2 pointing to the same Pod to check if your settings did take effect. Finally check if the IP of the first Service has changed.

Answer:

Let’s create the Pod and expose it:

k run check-ip --image=httpd:2.4.41-alpine k expose pod check-ip --name check-ip-service --port 80

And check the Pod and Service ips:

? k get svc,ep -l run=check-ip
NAME                       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/check-ip-service   ClusterIP   10.104.3.45   <none>        80/TCP    8s

NAME                         ENDPOINTS      AGE
endpoints/check-ip-service   10.44.0.3:80   7s

Now we change the Service CIDR on the kube-apiserver:

? ssh cluster2-controlplane1

? root@cluster2-controlplane1:~# vim /etc/kubernetes/manifests/kube-apiserver.yaml

# /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.100.21
...
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=11.96.0.0/12          # change
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
...

Give it a bit of time for the kube-apiserver to restart.

Wait for the api to be up again:

? root@cluster2-controlplane1:~# kubectl -n kube-system get pod | grep api
kube-apiserver-cluster2-controlplane1            1/1     Running   0          49s

Now we do the same for the controller manager:

? root@cluster2-controlplane1:~# vim /etc/kubernetes/manifests/kube-controller-manager.yaml

# /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=10.244.0.0/16
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --node-cidr-mask-size=24
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=11.96.0.0/12          # change
    - --use-service-account-credentials=true

Give it a bit for the controller-manager to restart.

We can check if it was restarted using crictl:

? root@cluster2-controlplane1:~# crictl ps | grep controller-manager
3d258934b9fd6    aca5ededae9c8    About a minute ago    Running    kube-controller-manager ...

Checking our existing Pod and Service again:

? k get pod,svc -l run=check-ip
NAME           READY   STATUS    RESTARTS   AGE
pod/check-ip   1/1     Running   0          21m

NAME                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/check-ip-service   ClusterIP   10.99.32.177   <none>        80/TCP    21m

Nothing changed so far. Now we create another Service like before:

k expose pod check-ip --name check-ip-service2 --port 80

And check again:

? k get svc,ep -l run=check-ip
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/check-ip-service    ClusterIP   10.109.222.111   <none>        80/TCP    8m
service/check-ip-service2   ClusterIP   11.111.108.194   <none>        80/TCP    6m32s

NAME                          ENDPOINTS      AGE
endpoints/check-ip-service    10.44.0.1:80   8m
endpoints/check-ip-service2   10.44.0.1:80   6m13s

There we go, the new Service got an ip of the new specified range assigned. We also see that both Services have our Pod as endpoint.
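A quick way to convince yourself that 11.111.108.194 really falls inside 11.96.0.0/12: a /12 fixes the first octet (11) and the top four bits of the second octet (96 = 0110 0000), leaving the second octet a range of 96-111. A sketch of that check in shell:

```shell
# True if the given IPv4 address is inside 11.96.0.0/12
in_new_cidr() {
  local o1="${1%%.*}"
  local rest="${1#*.}"
  local o2="${rest%%.*}"
  [ "$o1" -eq 11 ] && [ "$o2" -ge 96 ] && [ "$o2" -le 111 ]
}

in_new_cidr 11.111.108.194 && echo inside || echo outside    # new Service IP -> inside
in_new_cidr 10.109.222.111 && echo inside || echo outside    # old Service IP -> outside
```

The old Service keeps its IP from the previous range; existing allocations are not migrated when the CIDR changes.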

CKA Tips Kubernetes 1.26

In this section we’ll provide some tips on how to handle the CKA exam and browser terminal.

Knowledge

Study all topics as proposed in the curriculum till you feel comfortable with all.

General

Study all topics as proposed in the curriculum till you feel comfortable with all
Do 1 or 2 test sessions with this CKA Simulator. Understand the solutions and maybe try out other ways to achieve the same thing.
Setup your aliases, be fast and breathe kubectl
The majority of tasks in the CKA will also be around creating Kubernetes resources, like it's tested in the CKAD. So preparing a bit for the CKAD can't hurt.
Learn and study the in-browser scenarios on https://killercoda.com/killer-shell-cka (and maybe for CKAD https://killercoda.com/killer-shell-ckad)
Imagine and create your own scenarios to solve

Components

Understanding Kubernetes components and being able to fix and investigate clusters: https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster
Know advanced scheduling: https://kubernetes.io/docs/concepts/scheduling/kube-scheduler
When you have to fix a component (like kubelet) in one cluster, just check how it's setup on another node in the same or even another cluster. You can copy config files over etc
If you like you can look at Kubernetes The Hard Way once. But it's NOT necessary to do, the CKA is not that complex. But KTHW helps understanding the concepts
You should install your own cluster using kubeadm (one controlplane, one worker) in a VM or using a cloud provider and investigate the components
Know how to use kubeadm to for example add nodes to a cluster
Know how to create an Ingress resource
Know how to snapshot/restore ETCD from another machine

CKA Preparation

Read the Curriculum

https://github.com/cncf/curriculum

Read the Handbook

https://docs.linuxfoundation.org/tc-docs/certification/lf-candidate-handbook

Read the important tips

https://docs.linuxfoundation.org/tc-docs/certification/tips-cka-and-ckad

Read the FAQ:

https://docs.linuxfoundation.org/tc-docs/certification/faq-cka-ckad

Kubernetes documentation

Get familiar with the Kubernetes documentation and be able to use the search. Allowed links are:

https://kubernetes.io/docs
https://kubernetes.io/blog
https://helm.sh/docs

NOTE: Verify the list here

The Test Environment / Browser Terminal

You’ll be provided with a browser terminal which uses Ubuntu 20. The standard shells included with a minimal install of Ubuntu 20 will be available, including bash.

Lagging

There could be some lagging, definitely make sure you are using a good internet connection because your webcam and screen are uploading all the time.

Kubectl autocompletion and commands

Autocompletion is configured by default, as well as the k alias and other tools:

kubectl with k alias and Bash autocompletion

yq and jq for YAML/JSON processing

tmux for terminal multiplexing

curl and wget for testing web services

man and man pages for further documentation

Copy & Paste

There could be issues copying text (like pod names) from the left task information into the terminal. Some users suggest pressing or long-holding Cmd/Ctrl+C a few times to make it take effect. Apart from that, copy and paste should just work like in normal terminals.

Percentages and Score

There are 15-20 questions in the exam, which together add up to 100%. Each question shows the percentage it contributes. Your results will be automatically checked according to the handbook. If you don't agree with the results you can request a review by contacting the Linux Foundation support.

Notepad & Skipping Questions

You have access to a simple notepad in the browser which can be used for storing any kind of plain text. It makes sense to use this for saving skipped question numbers and their percentages. This way it’s possible to move some questions to the end. It might make sense to skip 2% or 3% questions and go directly to higher ones.

Contexts

You’ll receive access to various different clusters and resources in each. They provide you the exact command you need to run to connect to another cluster/context. But you should be comfortable working in different namespaces with kubectl.

PSI Bridge

Starting with PSI Bridge:

- The exam will now be taken using the PSI Secure Browser, which can be downloaded using the newest versions of Microsoft Edge, Safari, Chrome, or Firefox
- Multiple monitors will no longer be permitted
- Use of personal bookmarks will no longer be permitted

The new ExamUI includes improved features such as:

- A remote desktop configured with the tools and software needed to complete the tasks
- A timer that displays the actual time remaining (in minutes) and provides an alert at 30, 15, and 5 minutes remaining
- The content panel remains the same (presented on the left-hand side of the ExamUI)

Read more here.

Browser Terminal Setup

Consider spending ~1 minute at the beginning to set up your terminal. In the real exam the vast majority of questions will be done from the main terminal. For a few you might need to ssh into another machine. Just be aware that configurations to your shell will not be transferred in this case.
Minimal Setup

Alias

The alias k for kubectl will already be configured together with autocompletion. In case it isn't, you can configure it using this link.

Vim

The following settings will already be configured in your real exam environment in ~/.vimrc. But it can never hurt to be able to type these down:

set tabstop=2
set expandtab
set shiftwidth=2

The expandtab setting makes sure spaces are used instead of tabs. Memorize these and just type them down. You can't have any written notes with commands on your desktop etc.
Optional Setup

Fast dry-run output

export do="--dry-run=client -o yaml"

This way you can just run k run pod1 --image=nginx $do. Short for “dry output”, but use whatever name you like.
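Because $do is a plain environment variable, the shell word-splits it into the two flags before kubectl ever sees them. A quick way to convince yourself (no cluster needed) is to preview the expansion with echo:

```shell
export do="--dry-run=client -o yaml"

# echo shows the exact command line kubectl would receive after expansion
echo kubectl run pod1 --image=nginx $do
# prints: kubectl run pod1 --image=nginx --dry-run=client -o yaml
```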

Fast pod delete

export now="--force --grace-period 0"

This way you can run k delete pod pod1 $now and don't have to wait ~30 seconds for termination.

Persist bash settings

You can store aliases and other setup in ~/.bashrc if you’re planning on using different shells or tmux.
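A minimal sketch of persisting the setup, assuming bash. Here the lines are appended to a demo file under /tmp so the sketch is side-effect free; in the exam you would append to ~/.bashrc itself so new shells and tmux panes inherit the settings after a `source ~/.bashrc` (or on their next start).

```shell
# Demo file stands in for ~/.bashrc in this sketch
rc=/tmp/demo_bashrc

# Append the alias and exports once
cat >> "$rc" <<'EOF'
alias k=kubectl
export do="--dry-run=client -o yaml"
export now="--force --grace-period 0"
EOF

# Verify the alias line landed in the file
grep '^alias k=' "$rc"
```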

Alias Namespace

In addition you could define an alias like:

alias kn='kubectl config set-context --current --namespace '

Which allows you to define the default namespace of the current context. Then once you switch a context or namespace you can just run:

kn default        # set default to default
kn my-namespace   # set default to my-namespace

But only do this if you used it before and are comfortable doing so. Else you need to specify the namespace for every call, which is also fine:

k -n my-namespace get all
k -n my-namespace get pod

Be fast

Use the history command to reuse already entered commands, or the even faster history search with Ctrl+r.

If a command takes some time to execute, like sometimes kubectl delete pod x, you can put the task in the background using Ctrl+z and pull it back into the foreground by running fg.
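The same background/foreground cycle can be scripted for illustration. In the sketch below, & stands in for Ctrl+z (both leave the command running as a background job), jobs lists it, and where the demo kills the job you would instead type fg to bring it back to the foreground:

```shell
# Stand-in for Ctrl+z: the command keeps running as a background job
sleep 30 &

# Lists the backgrounded job, e.g. "[1]+  Running  sleep 30 &"
jobs

# Demo cleanup only; interactively you'd run `fg` to resume it in the foreground
kill %1
```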

You can delete pods fast with:

k delete pod x --grace-period 0 --force

k delete pod x $now # if export from above is configured

Vim

Be great with vim.

toggle vim line numbers

When in vim you can press Esc and type :set number or :set nonumber followed by Enter to toggle line numbers. This can be useful when finding syntax errors based on line numbers, but can be bad when wanting to mark & copy by mouse. You can also just jump to a line number with Esc :22 + Enter.

copy&paste

Get used to copy/paste/cut with vim:

Mark lines: Esc+V (then arrow keys)

Copy marked lines: y

Cut marked lines: d

Paste lines: p or P

Indent multiple lines

To indent multiple lines press Esc and type :set shiftwidth=2. First mark multiple lines using Shift v and the up/down keys. Then to indent the marked lines press > or <. You can then press . to repeat the action.

Split terminal screen

By default tmux is installed and can be used to split your one terminal into multiple. But just do this if you know your shit, because scrolling is different and copy&pasting might be weird.

https://www.hamvocke.com/blog/a-quick-and-easy-guide-to-tmux
